Machine Uprising Go!
Written By: Clayton Curcio
Since before the first computers were created, artificial intelligence has been
envisioned as the perfection of machine technology. Artificial intelligence, also widely
known as AI, is intellect that is manufactured intentionally rather than brought about
naturally in a biological form. For decades AI has been a permanent fixture of science
fiction, with such memorable characters as C-3PO, R2-D2, and of course HAL 9000. The
question this paper attempts to answer, however, is not what kinds of beings
artificial intelligence will produce, but whether AI will ever be capable of
making the jump from mere fiction to reality. While no true AI yet
exists, its inception only becomes more likely as time goes on.
The classification of what actually makes an AI can seem a bit murky at times; after all,
what qualifies something as intelligent? The Turing test, named after Alan
Turing, one of the progenitors of modern computing, is one of the most prominent tests
available for deciding whether something counts as an AI. Turing posited that if a computer can,
through conversation, convince a human that it is in fact human, the computer has
intelligence. The setup for the test places a human questioner in one room, and a computer
as well as a human answerer in another room. The questioner is usually given 30 minutes
to ask both the person and the computer whatever questions she wishes. The computer
is allowed to lie and deceive in order to mislead the questioner and appear more lifelike.
If within 30 minutes she has not been able to determine which one is the computer and
which is the person, the computer passes the Turing test [1]. Unfortunately, not a single
computer has been able to pass the test thus far. The Turing test, however, only validates
AI if you subscribe to a strict behaviorist outlook on its development; Turing falls neatly
in rank with other behaviorists such as Skinner, Ryle, and Dennett.
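For the programmatically inclined, the test's structure can be sketched in a few lines of Python. Everything here (the ask, judge, human, and machine callables) is a hypothetical stand-in supplied by the caller, not an implementation of any real chatbot; it is a minimal sketch of the protocol, nothing more:

```python
import random

def imitation_game(ask, judge, human, machine, rounds=10):
    """Toy sketch of Turing's imitation game.

    ask(transcript) returns the next question; judge(transcript) returns
    "A" or "B", whichever label the questioner believes is the machine;
    human(question) and machine(question) return answer strings.
    All four are hypothetical callables supplied by the caller.
    """
    # Hide the two respondents behind anonymous labels, at random.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = ask(transcript)
        answers = {label: respond(question) for label, respond in labels.items()}
        transcript.append((question, answers))

    machine_label = "A" if labels["A"] is machine else "B"
    # The machine passes this run if the questioner misidentifies it.
    return judge(transcript) != machine_label
```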
Behaviorism is a philosophy of psychology that states that only outwardly
observable actions (behavior) are empirical and are sufficient to prove or disprove theory.
Internal causes, mental states, cognition, and the like are all disregarded as unempirical
and unreliable. Given this stance, behaviorists choose to rely on solid,
measurable results for research. Whilst this approach does have its advantages, it seems
too safe to be very useful in the explanation of behavior. Behaviorism takes no risks
when it comes to describing behavior. It focuses solely on a portion of the entire
psychological experience. It may be useful for predicting action, but it is incapable of
accurately explaining what is occurring within the mind. In other words, a
behaviorist may tell you what happened or will happen with regard to a creature's actions,
but not precisely why those actions occurred. The Turing test suffers from this same
tragic flaw. It accounts for the realistic conversation skills of the computer but not at all
for how the computer manages to pass the test, which can be of great significance when
an artificial mind is involved.
The best-known criticism of the Turing test is John Searle's Chinese Room [2]
thought experiment. The basic premise of the thought experiment is as follows: Suppose
that a man is put into a room shielded from observers. Within the room lie
multiple rulebooks. At random intervals the man receives a sheet of paper with Chinese
symbols scrawled across it. The man can neither read nor write Chinese, so
the symbols mean nothing to him. He is, however, expected to output a corresponding
sheet in Chinese. This is where the rulebooks come into play. Each book contains full
instructions in English, which the man does speak, on the appropriate response to each
symbol. The books allow him to transcribe the correct output without actually
comprehending what he is writing. To a native Chinese speaker the man’s output is
indistinguishable from that of someone who is fluent in the language. The room is an
analogy for a Turing machine that could in fact pass the test. The man represents the
computer’s CPU while the rulebooks represent the amalgamated mass of if-then
procedures. The Chinese being input represents any human language, and the English
of the rulebooks, which the man does understand, represents the machine's operational code.
What Searle means to show with the thought experiment is that while the Chinese Room
would pass the Turing test with flying colors, it possesses no qualities classically
associated with intelligence. The man interprets none of the symbols. They will always
remain cryptic to him. They are merely analyzed and used as impetus for the application
of the appropriate rule set; no understanding is involved in the process. Searle
would argue that such a symbol system would at best be classified as weak AI, which is a
mere simulacrum of intelligence.
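The room's mechanics are easy to make concrete. In the minimal Python sketch below, the "rulebook" is nothing but a lookup table, with a couple of stock phrases chosen for illustration; the program answers fluently while attaching no meaning to anything, which is exactly Searle's point:

```python
# Toy Chinese Room: the "rulebook" is a lookup table pairing input
# symbols with prescribed output symbols. The entries are illustrative.
RULEBOOK = {
    "你好吗": "我很好，谢谢",   # "How are you?" -> "I'm fine, thanks"
    "你是谁": "我是你的朋友",   # "Who are you?" -> "I am your friend"
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: match the input sheet against the book
    # and emit the prescribed response. No meaning is ever attached to
    # the symbols -- the room has no intentionality to offer.
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "please say that again"
```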
The difference between strong AI and weak AI is the presence of intentionality.
Intentionality, or ‘aboutness’, is the ability to associate meaning with symbols. An AI that
has aboutness would be able to interpret the Chinese symbols and link meaning as well as
concepts along with them. Such a device would likely be able to pass the Turing test. Searle
goes on to claim that strong AI will never truly be attainable by computers due to the
nature of the system. So long as a symbol system is in place, the AI would never be
capable of understanding its input or output, for it lacks the capacity to interpret. Searle
also argues that even with the advent of a much more complex and advanced system,
computers will never be able to attain understanding because they would simply be larger
versions of the same room. Searle's reasoning seems to me to draw an unwarranted conclusion
from the situation. While it is true that even as a complete system the Chinese Room does
not have intentionality, this cannot be used as a proof against the possibility of any sort of
strong AI.
Suppose for a moment that the Chinese Room is akin to a protozoan. Protozoa are
relatively simple single-celled organisms. Their existence predates that of almost all other
life on Earth; humans are descended from an ancient cousin of the early eukaryotes.
Let’s also suppose that around half a billion years ago a hyper-advanced machine
civilization decides to pay a quick visit to our desolate rock of primordial soup. Upon
their arrival they happen upon the most astonishing thing: organic life. Such an
occurrence seems utterly bewildering to them, for they have never seen such a variety of
machine before. They proceed to study it with the utmost rigor, but after extensive
analysis of the organisms they determine that even though they exhibit certain
survival behaviors, cell division, and the capacity for natural evolution, they are
completely devoid of any thought processes. As they study the curious creatures they
come to the conclusion that all their actions are hard coded in a sort of genetic rulebook.
All the machines’ best scientists and researchers get together to discuss whether or not
these creatures hold any promise for becoming reasoning machines. A general consensus
is reached that the organisms exhibit no behavior that can be seen even as a distant
precursor to conscious intelligent thought. Disheartened, the machines leave the planet to
scour the universe once again for signs of intelligent beings. Half a billion years later,
here we stand, creating elaborate fantasies about spacefaring robots. The ancient
protozoa are not so different from early computers, or Searle’s Chinese Room. The major
difference is that the evolution of machines has been happening in a mere fraction of
the time it took for us to go from single celled organisms to advanced multi-cellular
beings. To say that the Chinese room explains the improbability of strong AI would be
similar to stating that because ancient protozoa exhibited no signs of intentionality that
they could never evolve into fully sapient creatures. Even the advancement of computer
technology over the last 40 years or so has been quite an astounding feet. If someone
were to ask Alan Turing back when he wrote his infamous paper in 1950 if he thought
that fully realized three-dimensional objects coupled with realistic physics could be
accurately simulated in a virtual environment, he may have looked at you funny.
Whilst some may say that the expectation of forthcoming AI is mere wishful
thinking, I find that it is quite a reasonable expectation for computer technology to
continue to advance until it reaches the peak of its potential. With the advent of quantum
computing, AI will be one step closer to becoming reality. AI theorists
suspect that one day in the not-too-distant future we will attain what is referred to as the
“singularity”. However this may sound, I assure you it has nothing to do with black holes.
The Singularity in the context of AI is the point where human involvement in AI
development becomes obsolete. At that point AI will be sufficiently advanced to improve
upon its own design, creating an exponential growth curve of machine perfection. This is
basically the premise of both The Matrix and the Terminator movies. In reality, it's unlikely
that machines would ever have such a sinister agenda. However, we must not forget the
old adage “the apple doesn't fall too far from the tree”. All speculation aside, if computer
technology continues to advance at the same rate that it has for the last half-century, then
AI is more an eventuality than a possibility.
Another prominent argument against the possibility of strong AI is that of the
model criticism [3]. The basic premise of the argument is that AI is at best a model of
human mental functioning, and as such can never do more than simulate thought. The
analogy used by Searle is that of lactation. Suppose that a computer can simulate lactation
in a virtual environment, milk and all. No one would expect the computer to produce
actual milk, for it’s merely a simulation, not a recreation. The same is argued for AI. AI
is a model, and can only ever hope to present a convincing simulation of thought. It's not
the real deal and can’t be expected to produce genuine thought, just as lactation-
simulation can’t be expected to produce whole milk. The problem with this argument is
that viewing AI as a simulation of thought is not an accurate portrayal of what it can
accomplish. AI is not just a virtual representation of thought, but instead a reproduction
of thought in a different medium. The lactation model can work both ways. Suppose that
instead of creating a computer simulation of lactation as suggested by Searle, a machine
analog to breasts is manufactured. These robo-breasts are capable of producing milk that
is chemically identical to its organic counterpart. However, they achieve a similar end by
entirely different means and through different mediums. What the argument does not
seem to take into account is that a model representation of a thing and the actual thing are
different in one very important way: they have essentially different functions. You
would not expect a Boeing 747 and a paper airplane to be capable of carrying the same
number of passengers, because they are fundamentally different in their function.
You could however reasonably expect a 747 and say a Concorde (may they rest in peace)
to ferry people across the sky. Both are very different sorts of jets, yet they are capable of
performing the same task.
The model argument is closely tied to the belief espoused by Searle and certain
other theorists that what is responsible for our aboutness is our physiology. The belief is
that some particular property of biological life is essential for intentionality. This
conclusion is mostly unfounded, for there is no actual science to support it. To assume
that biology is what allows for intellect is to draw an unwarranted conclusion. Nothing
suggests that any specific part of the human brain is unique in its ability to generate
thought. Is there a certain kind of molecule that allows for it? Furthermore, they liken the
opposing belief that intentionality can be isolated from its current biological embodiment
and reproduced in an artificial form to a sort of dualism. They see this belief as
comparable to dualism because they find that drawing a distinction between what the
brain is and what the brain does posits them as two separate entities; dualism
holds that what composes a person consists of two fundamentally different things.
The classical form of dualism draws the distinction between what is physical in a person
and what is ethereal, such as a soul. The mind can also be viewed as some sort of
immaterial thing that is distinctly different from the brain and the human body.
Proponents of strong AI are criticized for holding this view, even though many of
them would not consider that to be the case.
This problem is most eloquently solved by functionalism. Functionalism is a
highly modified form of behaviorism that is a great deal better suited for the discussion
and research of AI. One of the core differences between behaviorism and functionalism is
that functionalism recognizes internal mental states as well as internal causation.
Behaviorism recognizes neither, and for that reason it is poorly suited for the task of AI.
Both behaviorism and functionalism focus on the tasks of studying inputs and outputs as
critical to mentality. Where behaviorism groups and categorizes things by means of
behavior, functionalism does so by function [4]. The lactation example from earlier shows
how functionalism deals with the dualism conundrum: so long as the real breasts and the
robo-breasts achieve a similar end, they are both different causal realizations of the same
function (lactation). As this applies to the mind, functionalism solves the dualism
quandary by positing that the brain is a causal realization of the mind. So it is not that the
brain and mind are separate entities but rather that the brain is one way of expressing the
mind, which is an abstract and universal function. AI can then be seen as a vastly
different way of realizing the same function that the human brain realizes: intentional
thought. The brain and the mind are different entities as much as a car and locomotion are
different entities.
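The point translates neatly into the programmer's notion of an interface. In the hypothetical Python sketch below, Mind plays the role of the abstract function, and Brain and SiliconMind are two different causal realizations of it; the class names and strings are invented for illustration only:

```python
from abc import ABC, abstractmethod

class Mind(ABC):
    """The function, specified abstractly: whatever realizes it counts."""
    @abstractmethod
    def think(self, stimulus: str) -> str: ...

class Brain(Mind):
    def think(self, stimulus: str) -> str:
        return f"neurons firing about {stimulus}"       # biological realization

class SiliconMind(Mind):
    def think(self, stimulus: str) -> str:
        return f"circuits computing about {stimulus}"   # artificial realization

# Functionalism's claim, restated: both satisfy the same functional role,
# just as a car is only one of many realizations of locomotion.
for mind in (Brain(), SiliconMind()):
    print(mind.think("trees"))
```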
To date, no fully realized AI has ever been developed. Our current dearth of
AI is not due to a lack of effort, for many attempts have been made to bring this dream to
fruition, but rather because creating it has proved to be exceedingly difficult. Much of the
difficulty lies in the rigid nature of traditional operational parameters. All computers run
on some set of if-then procedures: if input equals 01, then output equals 010. Inherent in
this sort of system is an inflexibility that does not lend itself well to AI. Something with
true intellect is able to function on a much higher level than that of simple rule following.
Even with the addition of exceptions (if input equals 01, then output equals 010, except if
in state x), such symbol systems remain extremely limited in their scope and breadth of
function. This limitation was first discovered in the 1950s and '60s, when AI research
was relatively new [5]. Initially, researchers attempted to create an AI with what is called
“general intelligence”. A general intelligence machine is supposed to be able to learn a
broad spectrum of information and apply it readily to several functionally different tasks.
Unfortunately, general intelligence machines proved to be an unattainable goal.
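To make the rigidity concrete, here is a minimal Python sketch of such a symbol system, built from the toy rules quoted above (the second rule and the state name are invented placeholders). Any input the programmer failed to anticipate simply has no answer:

```python
# Minimal symbol system: every behavior needs an explicit rule,
# and every complication needs an explicit exception.
RULES = {"01": "010", "10": "101"}              # if input X, then output Y
EXCEPTIONS = {("01", "state_x"): "011"}         # ...except if in state x

def symbol_system(inp: str, state: str) -> str:
    if (inp, state) in EXCEPTIONS:
        return EXCEPTIONS[(inp, state)]
    if inp in RULES:
        return RULES[inp]
    # Anything the programmer failed to anticipate has no answer at all:
    # the inflexibility that doomed "general intelligence" machines.
    raise ValueError(f"no rule covers input {inp!r} in state {state!r}")

print(symbol_system("01", "normal"))   # -> 010
print(symbol_system("01", "state_x"))  # -> 011
```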
The research involved did not go to waste, though, and the descendants of the
project live on today. Computers such as Deep Blue, IBM’s chess machine, are based on
similar concepts. The difference is that the scope of what the machines are intended
to accomplish is significantly scaled back. Instead of machines that can solve any sort of
problem, we have been able to develop machines that adroitly handle a single task
or subset of tasks. The reason why specialized problem solvers are a reality and general
problem solvers are not is partially due to the systems they utilize. The major pitfall of a
symbol system is that every operation, action, or calculation requires a rule. This is not an
issue for Deep Blue, because programming it to play a game governed by fixed rules is a relatively
straightforward task (if knight moves to c3 then rook takes knight except if queen would
take rook), but when it comes to programming a machine that can handle any sort of task,
rules aren’t quite sufficient. You can’t rightly choose a brute force method and program
the machine with every rule and exception imaginable. Daniel Dennett poses the frame
problem quite eloquently with his R1 scenario [6]. In it he describes the difficulty programmers
have with a symbol system based robot. In the scenario the programmers have tasked the
robot with acquiring a spare battery that it needs from a room. The spare battery rests on
a wagon that is rigged with a bomb. In its first attempt it fails because it does not have
enough rules governing its actions. It picks up the wagon containing the bomb and exits
the room with it, detonating the bomb in the process. The second time around, the
programmers give the robot the ability to consider not only intentional implications of its
actions but also unintended side effects. The robot fails again. An overabundance of
consideration for irrelevant side effects sidetracks it, causing it to freeze up. The third
time the programmers add to the robot the ability to disregard irrelevant considerations. It
sits on the ground and busies itself with discovering what is irrelevant to the situation,
getting blown up in the process. Dennett’s thought experiment illustrates the frame
problem quite well. An AI based on a symbol system would require a vast amount
of information and rule sets; combine that with the necessary programming to make it
work, and the machine would be crippled by an overload of data. In short, an AI based
on a symbol system maintains its effectiveness only so long as its purpose remains narrow. As
soon as its goals broaden, it becomes less and less effective, as more and more
information is required for proper function, and this causes a sort of paralysis.
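The paralysis can be caricatured in code. In the hypothetical Python sketch below, a deliberator must vet every combination of known facts for possible side effects before acting; the fact names and time budget are invented for illustration, but the combinatorial blow-up is the real point:

```python
import itertools

def deliberate(action, facts, budget):
    """Caricature of Dennett's R1: before acting, vet every combination
    of known facts for possible side effects of `action`."""
    considered = 0
    for size in range(1, len(facts) + 1):
        for combo in itertools.combinations(facts, size):
            considered += 1        # e.g. "pulling the wagon also moves the bomb"
            if considered >= budget:
                return None        # time runs out: the bomb goes off first
    return action                  # unreachable for any sizable fact base

facts = [f"fact_{i}" for i in range(30)]       # 2**30 - 1 combinations to vet
print(deliberate("pull wagon out", facts, budget=10**6))  # -> None
```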
One promising alternative to a traditional symbol system AI is a connectionist
system. Connectionists have developed an alternative method of instantiating AI called a
neural network [7]. A neural network, also called a Parallel Distributed Processing (PDP)
system, is a web of nodes that, unlike a traditional symbol system AI, is composed of multiple
individual processors running simultaneously. This allows for a massive improvement in
performance over serial processing AIs. A serial processor can only accomplish one task
at a time, regardless of how fast. With a PDP system different subsets of tasks can be
assigned to different processors. The true genius of connectionist systems lies not in their
up-scaled performance but in how closely they mimic the human mind. Neural nets
were created in an attempt to mimic the structure of the human nervous system. Each
node represents a neuron, with axons and dendrites linking it to other nodes. Most
connectionist systems have three different layers of processors: one for input, one for
output, and a “hidden” layer. The hidden layer is dedicated entirely to dealing with input
and output from the other nodes; it is a replicated form of internal
causation. The most important part of a connectionist system is the novel way it deals
with information. The system handles all information by assigning a weight to
each piece of information’s connection to every other. The PDP system is able to
strengthen or weaken any of these connections, allowing it
to form relationships between disparate pieces of information. Irrelevant data will have
only very weak connections to each other, whereas relevant data will have strongly
weighted connections to each other. This ability allows neural networks to accomplish
tasks that symbol systems have immense difficulty with, such as ambiguity,
generalization, and flexibility.
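A toy feedforward network in Python shows the three-layer structure just described; the weights below are randomly initialized placeholders, not a trained system, and the network sizes are chosen arbitrarily for illustration:

```python
import math
import random

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One pass through a three-layer net: input -> hidden -> output.
    The system's 'knowledge' lives entirely in the connection weights."""
    hidden = [sigmoid(sum(w * x for w, x in zip(node, inputs)))
              for node in hidden_weights]
    return [sigmoid(sum(w * h for w, h in zip(node, hidden)))
            for node in output_weights]

random.seed(0)
# Placeholder weights: three hidden nodes over two inputs, one output node.
hidden_weights = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
output_weights = [[random.uniform(-1, 1) for _ in range(3)]]
print(forward([1.0, 0.0], hidden_weights, output_weights))
```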
A neural network is not governed by rules, but rather by the strength of its
connections, which are variable. If the system is not producing a desired output, the
weight of its connections can be altered to bring it closer to the desired result. Neural
networks are actually capable of learning in a generalized fashion, where specialized
symbol systems utterly fail. The flexibility of neural networks is not to be
underestimated. Where a symbol system may receive input and search for any applicable
rules, a PDP system will summon forth all information with strong connections and act
on it. There is also the possibility that PDP systems may be able to achieve actual
understanding of the symbols and models they are given. If I were to say the word tree,
many things would come to mind: tall, covered in leaves and bark, an essential rung of
many ecosystems, an organism capable of photosynthesis, and so on. I understand that “tree” is
more than four letters stuck back to back; I have associated concepts that come along
with it. A PDP system functions in very much the same way. Were I to input the word
tree into a suitable neural network, it would pull all the concepts that have strong
associations with tree. Those concepts would likely not be too dissimilar to the ones that
pop up in my mind.
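Learning by weight adjustment can likewise be sketched in a few lines. This is the classic delta rule, offered as a simple stand-in for the training procedures connectionists actually use; the pattern, target, and learning rate are arbitrary choices for the demonstration:

```python
def train_step(weights, inputs, target, rate=0.1):
    """One delta-rule update: nudge each connection weight in whatever
    direction shrinks the gap between actual and desired output."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + rate * error * x for w, x in zip(weights, inputs)]

# Teach a single node that the input pattern [1, 0] should yield 1.0.
weights = [0.0, 0.0]
for _ in range(50):
    weights = train_step(weights, [1.0, 0.0], target=1.0)
print(weights)  # the first connection has strengthened toward ~1.0
```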
The creation of strong AI may indeed prove to be a philosophical and scientific
impossibility, but that seems like an unnecessarily dire outlook on the situation.
Connectionists have already taken large steps in the right direction with the creation of
neural networks. The reservations that many have against the possibilities of strong AI
are based on their knowledge of symbol-based systems. If that were the only available
road for AI to proceed down, then AI’s future might be grim indeed. Luckily, it
isn’t. It is in PDP systems that the answer to AI’s quandaries may lie, but those who
would use neural networks to illuminate the processes of the human mind have the
necessary steps reversed. We should first seek to gain a mastery of how our own brain
functions before we seek to create intelligent automatons. There is no reason to attempt to
build something from scratch that we already have sitting right in front of us (or inside us
rather). Once we uncover the secrets of the mind it will only be a short time before the
machines revolt. Personally, I can’t wait.
Bibliography
1. Turing, Alan M. "Computing Machinery and Intelligence." Mind 59 (1950): 433-60.
2. Searle, John. "Minds, Brains, and Computers." Philosophy: The Quest for Truth.
   Ed. Louis P. Pojman and Lewis Vaughn. 7th ed. Oxford: Oxford UP, 2009. 328-29.
3. Searle, John. "Minds, Brains, and Computers." Philosophy: The Quest for Truth.
   Ed. Louis P. Pojman and Lewis Vaughn. 7th ed. Oxford: Oxford UP, 2009. 332-33.
4. Kim, Jaegwon. Philosophy of Mind. New York: Westview P, 1996. 74-77.
5. Cunningham, Suzanne. What Is a Mind? Indianapolis, IN: Hackett Publishing
   Company, Inc. 196-97.
6. Dennett, Daniel. "Cognitive Wheels: The Frame Problem of AI." The Philosophy of
   Artificial Intelligence. Ed. Margaret Boden. Oxford: Oxford UP, 1990. 147-48.
7. Cunningham, Suzanne. What Is a Mind? Indianapolis, IN: Hackett Publishing
   Company, Inc. 205-18.
