Compare commits
3 Commits
a0a866fab5...dc16b5988a
Author | SHA1 | Date
---|---|---
 | dc16b5988a |
 | e79dbfeb91 |
 | 66b7951ee2 |
src/blog/draft_ai-is-in-the-cave.cfg.php (6 lines, Normal file)
@@ -0,0 +1,6 @@
<?php
$title = "AI Is in the Cave";
$description = "Delving into the (apparent) mystery of Large Language Models (a.k.a. AI) and why none of them are actually intelligent.";
$created = "2025-03-13";
$updated = "2025-03-13";
?>
src/blog/draft_ai-is-in-the-cave.html.php (86 lines, Normal file)
@@ -0,0 +1,86 @@
<?php
require 'config.php';
require 'draft_ai-is-in-the-cave.cfg.php';
require 'templates/blog-header.php';
?>
<p>
To the average technological layman, most digital technologies seem to be
pretty much magic. Somehow a bunch of ones and zeros can be transformed into a
movie which is streamed over cables and even thin air until it reaches my
tablet and is transformed into something actually recognizable on the screen.
So it should come as no surprise that when it comes to Large Language Models
(LLMs, a.k.a. AI) the usual amazement and mystification of technology reached
new peaks as folks started speculating that perhaps this is truly the point
where computers can pass the Turing Test, and the more enthusiastic among them
would go further and say that perhaps the machines could even gain
consciousness<sup><a href="#n1" >(1)</a></sup> if they have not already, all
ultimately raising the question of whether we have reached the point of
Singularity.<sup><a href="#n2" >(2)</a></sup> In fact, we could say that the
capabilities of LLMs are so amazing that even some CEOs of the companies
behind them are suggesting they either have or may soon truly achieve
consciousness.<sup><a href="#r1" >[1]</a></sup> However, if you were to ask
these people how we can know that LLMs have reached this point, the test
provided is generally a very unsophisticated one: if it can ape human behavior,
it must have an intellect at the same level as a human.
</p>

<p>
Now, for anyone who has some familiarity with Pro-Life apologetics, this kind
of reasoning to determine personhood sounds extremely familiar, but it is now
being applied in reverse. It is the attribution of personhood on the basis of
what the thing <em>can do</em> at a specific stage of its development rather
than on the basis of what <em>kind of being</em> the thing is. In the case of
the unborn it is used to claim they do not have personhood because at that
stage of their development they cannot do certain things, while in the case of
LLMs it is used to claim they do have personhood because they can do these
things (at least those things which we associate with the intellect). All this
because our modern materialist culture cannot actually understand the concept
of a <em>kind</em> of being, for all being is merely an assortment of atoms
that just so happen to organize themselves in such a way that a conscious being
is formed. Thus, to form other conscious beings all you have to do is put the
same kinds of atoms together in the same pattern and you can reproduce life!
And not any sort of life, but rational life at that. Furthermore, in the case
of LLMs, it would seem that it is not even necessary for it to be the same
kinds of atoms in the same pattern at all, but instead we can replace neurons
and the like with transistors and other electronic elements, coded to interact
with each other to do the same thing human beings have been doing for tens of
thousands of years, and what most animals have been able to do as well:
pattern recognition and replication.
</p>
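
<p>
To make concrete what "pattern recognition and replication" means here in the
statistical sense, consider the following deliberately crude sketch of a
next-word predictor. It is nothing remotely like a production LLM, and every
name and string in it is made up purely for illustration: it merely counts
which word tends to follow which in a sample text and then replays the most
common continuation.
</p>
<pre><code>&lt;?php
// Toy "language model": count word bigrams in a sample text, then
// predict the most frequent follower of a given word. A real LLM is
// vastly larger and more sophisticated, but the principle on display
// is the same: recognize a pattern, then replicate it.
$text  = "the cave is dark and the cave is deep and the fire casts shadows";
$words = explode(" ", $text);

$counts = [];
for ($i = 0; $i + 1 &lt; count($words); $i++) {
    $next = $words[$i + 1];
    $counts[$words[$i]][$next] = ($counts[$words[$i]][$next] ?? 0) + 1;
}

// Always pick the continuation seen most often after the given word.
function predict(array $counts, string $word): ?string {
    if (!isset($counts[$word])) {
        return null;
    }
    arsort($counts[$word]);
    return array_key_first($counts[$word]);
}

echo predict($counts, "the"); // "cave": pure replication of a learned pattern
?&gt;</code></pre>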

<p>
The fact of the matter is that LLMs may seem, from a purely superficial
standpoint, like a child who is slowly learning to speak. In the past few years
we have seen drastic improvements as many of the tell-tale signs have been
smoothed out of the algorithms. But even as they become indistinguishable from
the product of actual human work,
</p>

<h2>The Man Who "Learned" Chinese</h2>

<h2>Here be Demons</h2>

<h2>Hammers Are for Nails</h2>

<h2>Resources</h2>
<h3>Notes</h3>
<ol class="notes" >
<li id="n1" >
Some people mistake the Turing Test for an indicator that a machine
<em>has</em> reached consciousness, but this is a misunderstanding. The
test merely indicates that in a blind setting a human cannot tell
whether they are talking to another human or a machine, usually via
a text prompt.
</li>
<li id="n2" >
Singularity in this context is understood as a point of no return past
which advances in technological complexity are beyond our control.
</li>
</ol>
<h3>References</h3>
<ol class="refs" >
<li id="r1" ><a href="https://arstechnica.com/ai/2025/03/anthropics-ceo-wonders-if-future-ai-should-have-option-to-quit-unpleasant-tasks/" target="_blank" >Anthropic CEO floats idea of giving AI a “quit job” button, sparking skepticism - Ars Technica</a></li>
</ol>
<?php
require 'templates/blog-footer.php';
?>