SlideShare uma empresa Scribd logo
1 de 25
Baixar para ler offline
An Elusive Holy Grail and Many Small
Victories
The State of AGI and Other AI Concerns
Abstract 1
Introduction: An AI Prehistory 2
AGI From Conjecture to Goal 4
The Frame Problem and Knowledge Representation 7
An Example, with Solutions and Objections 7
Phenomenological AI 13
Enactivism: Adding World to Brain and Body 17
Classifying Fictional Robots 18
Conclusion 20
“Strong versus Weak” Misrepresents the AI Landscape 20
The Frame Problem Yields Vital Lessons Across Multiple Disciplines 22
There Are Crucial Philosophical Concerns Raised by So-Called Weak AI 24
Abstract
A survey of the philosophical foundations and the history of Artificial Intelligence (AI) reveal a
discipline that is simultaneously in a vibrant era of creative achievement (many victories) and is
yet in the midst of an identity crisis (an elusive grail). Early history reveals conflated and
unrealistic goals that ran headlong into an iconic implementation issue--the frame
problem--which aptly showed the industry to be on an errant track. Subsequent forays into, and
lessons learned from, phenomenology and enactivism have led to breakthroughs in robotics,
and have unveiled a still-current debate on the benefits and misuses of knowledge
representation. These issues are illuminated with hypotheticals and examples from literature
and cinema. The findings raised in this investigation support three related conclusions: (1) the
dichotomy of “strong versus weak” AI is misleading and misrepresents the current state of the
industry; (2) the frame problem yields insights into not only AI and cognitive science, but into
philosophy of mind and personal identity; and (3) the broader philosophy of technology should
take primacy on the current state of AI concerns.
1
Introduction: An AI Prehistory
There is a long tradition in the history of ideas that can be said to anticipate artificial intelligence.
Aristotle, cited by Selmer Bringsjord, provided some of the foundation with the notion of a
formalism, wherein “certain patterns of logical thought are valid by virtue of their syntactic form,
independently of their content….[This] remains at the heart of the contemporary computational
theory of the mind.” Thomas Hobbes, in Leviathan, expressed a computational view of thinking:1
“By ratiocination I mean computation. Now to compute, is either to collect the sum of many
things that are added together, or to know when one thing is taken out of another.2
Interestingly, Rene Descartes, who in the Sixth Meditation established a well-known substance
dualism that provided the bedrock for a computational theory of mind maintained that3 4
machines would never duplicate human performance, and indeed that it would be “morally
impossible that there should be sufficient diversity in any machine to allow it to act in all the
events of life in the same way as our reason causes us to act.5
The tradition emphasized classical logic and a mechanical, computational framework to emulate
aspects of human performance. Then, in 1950, Alan Turing proposed to explore whether
machines could think. This can be considered an early modern allusion to what would later be6
dubbed Artificial General Intelligence (AGI). However, Turing made it clear in his seminal paper
that terms such as “think” and “machine” cannot be defined clearly, which is why the question
lent itself more to a survey than a study. Substituting for this inquiry, he presented a thought
experiment he called ​The Imitation Game​, in which it would be explored whether it may be
possible for question-and-answer responses from a computer to effectively be indistinguishable
from answers that a human being might give.
Turing’s substitution of the game suggests that he believed the idea of a thinking machine to be
misleading: a question that cannot be answered. Perhaps, for Turing, it is not even well
formulated: what does it mean to think? He was careful to draw sharp lines between physical
and intellectual capabilities, and he even considered typewritten correspondence as an
instrument, indicating an awareness of the importance of tone and the potential overload of
language wherein (nearly) all meaning comes from context.
Turing appears to prefigure embodiment and the distinction in degrees of difficulty between
stationary machines and robots in AI:
It will not be possible to apply exactly the same teaching process to the machine as to a
normal child. It will not, for instance, be provided with legs, so that it could not be asked
to go out and fill the coal scuttle. Possibly it might not have eyes. But however well these
1
​Selmer Bringsjord, ​The Philosophical Foundations of Artificial Intelligence​, 2007, page 4.
2
Hobbes, T., ​Elements of Philosophy​, (Sect. 1, de Corpore 1, 1, 2)
3
​Hatfield, Gary, "​René Descartes​", The Stanford Encyclopedia of Philosophy (Summer 2018 Edition)
4
Loosely, “mind:body::program:computer”
5
Descartes, R.: 1911, ​The Philosophical Works of Descartes, Volume 1​. Translated by Elizabeth S. Haldane and
G.R.T. Ross, Cambridge University Press, Cambridge, UK, page 116.
6
​A. M. Turing (1950) ​Computing Machinery and Intelligence​. Mind 49: 433-460.
2
deficiencies might be overcome by clever engineering, one could not send the creature
to school without the other children making excessive fun of it.7
Another reference that makes this distinction prescient for the time period is that ​the term “robot”
in that era (1946), was still occasionally used to describe computers. Some of the definitions in8
the original problem space led to confusion as to the goals of the endeavor, and researchers
between (and sometimes within) disciplines started talking past each other. Carlos Herrera and
Ricardo Sanz point this out when they describe the later, inevitable change in the emphasis of
cognitive science:
Human mental operations are interesting insofar as they result from some embodied
interaction. Robots, on the other hand, have replaced computer and information
processing systems as a proper object of empirical research.9
As for predictions, Turing is (compared to his successors in the next section) comparatively
circumspect. He states his focus is not “whether all digital computers would do well in the
[imitation] game nor whether the computers that are presently available would do well, but
whether there are imaginable computers which would do well" and then he gives quite a long
horizon (half a century). Part of what makes this timing noteworthy is that the fundamentals of
computer science itself were at that time just getting formulated. Turing himself was performing
foundational work in algorithms and programming theory.
In the following passage, Turing is bounding his problem on a simulation:
“I do not wish to give the impression that I think there is no mystery about
consciousness. There is, for instance, something of a paradox connected with any
attempt to localise it. But I do not think these mysteries necessarily need to be solved
before we can answer the question with which we are concerned in this paper.”10
Turing is openly admitting that capturing consciousness in a local agent may be unsolvable, but
to him, that’s not the question. The question is whether we can make the machine ​act
intelligently. His goal, according to Stevan Harnad, is to determine “not whether or not machines
can think, but whether or not machines can do what thinkers like us can do -- and if so, how.”11
John Searle, holding Turing to the promise of creating machines that think in the same way that
humans do, has been a prominent critic of the notion of AGI, with his Chinese Room Argument12
and the formulation of ​Strong AI​, referring to “machines capable of experiencing
7
​A. M. Turing (1950) ​Computing Machinery and Intelligence​. Mind 49: 433-460.
8
​See this ​advertisement for ENIAC in Modern Mechanics​ that seems to conflate the notion of computer and robot.
Calling ENIAC “The Army Brain” anticipates the role that defense department funding would play in computer science
in general, and AI in particular.
9
​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​,
2016.
10
​A. M. Turing (1950) ​Computing Machinery and Intelligence​. Mind 49: 433-460.
11
​Stevan Harnad, ​The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence​, 2008.
12
Cole, David, "​The Chinese Room Argument​", The Stanford Encyclopedia of Philosophy (Winter 2015 Edition),
Edward N. Zalta (ed.)
3
consciousness.” Stevan Harnad, on the other hand, believes AGI can theoretically13
appear--look just as real as intelligence--without human consciousness or understanding, and
he points out that one of Turing’s goals, and in fact the appropriate goal for cognitive science, is
“total internal microfunction.”14
Harnad also notes that “Turing's methodology for cognitive science (the Turing Test) is based
on doing: Design a model that can do anything a human can do, indistinguishably
from a human, to a human, and you have explained cognition.” For Harnad, reverse15
engineering cognition is a sufficiently difficult problem, even if functional equivalence is all that is
called for.16
AGI From Conjecture to Goal
A common pattern in the history of AGI is the underestimation of how difficult it is to simulate
routine activity, including low level sensorimotor skills. This was famously expressed in both
Moravec’s Paradox and in a formulation called the frame problem, which highlights the17 18
complexity of determining which “state changes” must take effect as a result of common
decisions. With the frame problem, the main issue (detailed in the next section) is of one of
relevance​: when updating new information in the middle of a project, how do you determine
which attributes and states to take into consideration, and which to completely discard?19
How did Computer Science get itself into the game of justifying a symbolic-representational
(codeable) view of common sense? It is traceable at least to the ​Proposal for the Dartmouth
Summer Research Project on Artificial Intelligence​. Here we see the beginnings of the pitfalls20
of making explicit promises with vague language. Moving well beyond what Turing had
proposed, the report begins by asserting that all aspects of learning and intelligence can be
precisely described and simulated by a machine. After that assertion, the goals actually drop off
quickly into the realm of, in some cases, fairly common problems in applications or systems
programming.
This confluence of goals began with an ambitious spirit and a very wide range of difficulty. ​Table
1 ​is constructed from the second and third sentences of the report, parsed out to provide notes
on scope.
Table 1: What They’ll Do on Their Summer Vacation
13
​Wikipedia entry on ​Artificial General Intelligence​.
14
​Stevan Harnad, ​Mind, Machines and Turing: The Indistinguishability of Indistinguishables​, 2001.
15
Stevan Harnad, ​Alan Turing and the Hard and Easy Problem of Cognition (Doing and Feeling)​: 2012
16
​Stevan Harnad, ​Minds, machines, and Searle 2: What's right and wrong about the chinese room argument​ 1996
17
Hans Moravec, ​Mind Children​, Harvard University Press, 1988: “It is comparatively easy to make computers exhibit
adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a
one-year-old when it comes to perception and mobility.”
18
​The Frame Problem​, Stanford Encyclopedia of Philosophy, 2016; here, the problem is defined as “the challenge of
representing the effects of action in logic without having to represent explicitly a large number of intuitively obvious
non-effects.”
19
McCarthy, J. & Hayes, P.J. (1969), “Some Philosophical Problems from the Standpoint of Artificial Intelligence”, in
Machine Intelligence 4, ed. D.Michie and B.Meltzer, Edinburgh University Press, pp. 463–502
20
​See ​http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf​, from August 31, 1955.
4
Conjecture or Goal Notes on Difficulty
1 The study is to proceed on the basis of
the conjecture that every aspect of
learning or any other feature of
intelligence can in principle be so
precisely described that a machine can
be made to simulate it.
This is ​full blown AGI​. It goes beyond
the ​Turing Test​, calling for not only
simulation but a ​complete description
of intelligence. The bounds of the
intelligence can be debated but this is
a lofty promise.
2 An attempt will be made to find how to
make machines use language….
Joseph Weizenbaum’s ​ELIZA ​is a21
noteworthy early attempt, and many
current examples include Siri, Google
Home, Amazon Echo, Google Duplex,
etc. All imperfect and impervious to
tone, yet positively progressive in
natural language processing (NLP).
3 ….form abstractions and concepts…. Any language translator, compiler, or
software-based controller would
qualify. Hundreds of thousands of
examples.
4 ….and improve themselves. Machine learning, deep learning,
control of weights and biases in neural
networks.
The conjecture in Row 1 is an implicit goal. The boldness of it calls to mind Turing’s observation
that, “The popular view that scientists proceed inexorably from well-established fact to
well-established fact, never being influenced by any improved conjecture, is quite mistaken.
Provided it is made clear which are proved facts and which are conjectures, no harm can result.
Conjectures are of great importance since they suggest useful lines of research.”22
But what a claim: ​all ​learning, ​every ​feature of intelligence. This has been, and perhaps always
will be, the stuff of science fiction. With this conjecture, there is a strong implication that the
intelligence we are talking about is full human performance. But this is where definitions get
tricky: there is a division even within this high echelon--we can consider that the need is met
even with a computer that has very limited “inputs” and is not mobile or dexterous (as you would
think of a robot).23
Goals 2 and 3, while interesting computer science problems, are so much simpler that they cast
an ironic light on John McCarthy’s (one of the authors of the proposal) complaint that “as soon
21
​Wikipedia entry for ​ELIZA​, an early natural language processing program that loosely simulated a Rogerian
therapist.
22
Andrew Hodges, ​Alan Turing: The Enigma​, Walker Books, 2000.
23
​A fictional illustration of this goal being realized might be computer (“HAL”) in 2001: A Space Odyssey. On the
other hand, far more ambitious would be to try and build a robot on the order of Data (Star Trek) or the replicants in
Blade Runner. It is understood that these examples don’t paint a rosy picture of AGI.
5
as it works, no one calls it AI anymore.” Similarly, Herrera and Sanz have noted that “huge24
successes that before were celebrated under the umbrella of AI now became computer science,
and the challenge of creating intelligent artifacts was taken over by the field of robotics.25
Goal 4 is more difficult than 2 and 3, but nowhere near the difficulty of Goal 1. This direction has
recently seen a lot of progress with neural networks and in commercial applications involving
machine learning, as well as more “traditional” AI problems such as theorem proving, chess and
Go mastery, war game simulations, and Jeopardy. Since it is more in the domain of what we
think of when we think of AI, it is again illustrative to call forth an image from literature. The
“mending apparatus” that automatically diagnoses and remediates the air, water, food delivery
and communications systems with the world-controlling Machine in EM Forster’s ​The Machine
Stops applies in Goal 4, and is a literary example of a sophisticated expert system: it is “the26
system charged with repairing defects that appear in the Machine.”27
Some of this early optimism was likely intended to encourage funding, largely from the Defense
Advanced Research Projects Agency (DARPA), which in 1963 provided a $2.2 million grant to28
the "AI Group" at MIT and then provided approximately $3 million dollars per year until 1974,
while making similar grants to programs at Carnegie-Mellon and Stanford. And there were29
some early successes that justified optimism. For instance, two attendees, Allen Newell and
Herbert A Simon, had a reasoning program, the Logic Theorist , that was able to prove most of30
the theorems in Chapter 2 of ​Principia Mathematica​. ​ ​Achievements such as this, along with
(probably) a desire to be able to maintain funding interest, led Marvin Minsky (a key AI
researcher from the start) in 1968 to declare that, “Within a generation we will have intelligent
computers like HAL in the film 2001.31
Much of the misunderstanding of the potentialities of AI have come from inverting the complexity
difficulties of a “thinking” task in a very bounded domain versus a “without thinking” one in a
much broader domain (the world!). Both Moravec’s Paradox and the frame problem illustrate
this issue, and Donald Knuth aptly summed up the state of the art: “AI has by now succeeded in
doing essentially everything that requires 'thinking’ but has failed to do most of what people and
animals do 'without thinking’ — that, somehow, is much harder!”32
24
​Bertrand Meyer, Communications of the ACM, ​Obituary for John McCarthy​, 2011.
25
​ ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​,
2016.
26
​E.M. Forster, ​The Machine Stops​, 1909. In addition to the machine learning branch of AI, this story anticipates
email, videoconferencing and social networking.
27
​Wikipedia entry on ​The Machine Stops​.
28
​Rockwell Anyoha, ​The History of Artificial Intelligence​, Harvard Graduate School of Arts and Sciences: [T]he
advocacy of leading researchers (namely the attendees of the DSRPAI) convinced government agencies such as the
Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions.”
29
​Crevier, Daniel (1993), ​AI: The Tumultuous Search for Artificial Intelligence​, New York, NY: BasicBooks, ISBN
0-465-02997-3, page 64-65.​ Via Wikipedia page on ​History of AI​.
30
Wikipedia: ​Logic Theorist​.
31
​From a 1968 press release for the film. Some of the optimism stemmed from other successes in complex gaming,
and this was before the intractability of coding the more “mundane” aspects of intelligence were confronted.
32
​Kristen Brown, ​Why Artificial Intelligence Always Seems So Far Away​, SF Gate, 2015.
6
The Frame Problem and Knowledge Representation
The frame problem highlighted a major issue with solving Goal 1 in ​Table 1​: how to represent
(with symbolic logic) all of the effects of any action (given continual new information) without
having to question whether to represent (or to search and verify) all of the obvious “non-effects.”
When actions are performed, some things change and some do not, and the problem becomes
how we determine ​which things​ do/not change when a particular action is performed. The
contents to be considered must include all the attribute-states (called fluents, because they
change over time) that will be stored along with agents and actions.
Thus, the range of knowledge that needs to be represented is encapsulated in the frame. This
gave rise to formalisms known as the situation calculus, wherein situations are best described
as a “snapshot” (point in time and space) of the state of the universe. These states change (are
fluent) when the situation changes, and actions generate new situations based on old ones. The
situation calculus has formulas for preconditions and successor states; collections of formulas
and axioms describe worlds (domains).33
As an example, suppose that you color an object and then move it. The color of the object may
subsequently be affected by the move (i.e., it is paled by the sun, or passes through a
spray-paint machine), but there is no clear way in first order logic to represent this. One has to
continue to add new attributes and states, and even new contexts (i.e., why exactly were you
coloring and moving objects in the first place?).
The set of conclusions we can draw from premises grows in a single direction (monotonically)
as we add formulas. This technique--providing frame axioms for every fluent--could result (as
Hubert Dreyfus pointed out) in a deep regress of frames. For example, actions andm n34
fluents require ( ) axioms. So the problem becomes one of how to how to avoid thesem * n
axioms, as they are intractable in handling real-world scenarios; thus, some​ ​bounding is
necessary for the “world” we are simulating.
An early, and still used, method of dealing with this is nonmonotonic logic, wherein inferences
that are made are open to revision. Effectively, all conclusions are temporary, and open to35
revision once there is information to the contrary. For example, we know (background
knowledge) that birds fly and that Tweety is a bird. Thus, we can tentatively conclude that
Tweety flies until (new information) we learn that he is an ostrich or a penguin.36
An Example, with Solutions and Objections
In ​The Cult of Information​, Theodore Roszak illustrates the complexity of even simple human
activity. He describes an effort to build a robot to perform a trivial household task. The question
33
Wikipedia entry on ​Situation Calculus
34
Hubert Dreyfus: ​Why Heideggerian AI Failed and How Fixing it Would Make it More Heideggerian​ ​(2007)
35
​Strasser, Christian and Antonelli, G. Aldo, "​Non-monotonic Logic​", The Stanford Encyclopedia of Philosophy
(Summer 2018 Edition)
36
Jarek Gryz, ​The Frame Problem in Artificial Intelligence and Philosophy​, 2013.
7
posed is how to solve the problem of wanting to know what’s happening the world (gain access
to “the news”):
How can this be done? Let us say by reading the morning newspaper. Where is the
morning paper? Somewhere on the front lawn. Then, obviously, you should go out, and
bring it into the house. Correct. Unless it is raining. If it is raining, you don’t wish to get
wet. How can this be avoided? By putting on a raincoat, then going out and picking up
the paper.37
After explaining that was an actual research project, Roszak (a historian, cultural critic and
novelist with no formal computer science training) describes the difficulty of programming the
required nuances.
Not much of a problem, it would seem. But for the computer, this tiny, slice-of-life
adventure must be scripted into an enormously long and detailed program….and if the
choice of using an umbrella is worked into the situation, or the decision to understand
how ​wet one might be willing to get before using either umbrella or raincoat, the program
becomes a jungle of conflicting contingencies.38
Roszak is suggesting that the way computers simulate human cognitive activity is not at all
indicative of the way humans think:
[T]he human mind may often work in the same quirky, informal, fuzzy way it functions
when it decides (without seeming to think about it), that there is no point in fetching a
sopping wet newspaper off the front lawn anyway, and so settles for turning on the radio
to catch the news.39
The logic to formalize this sort of temporal updating can get very deep very fast, because
relevance cannot be easily codified in logic. Difficulties emerge in multiple dimensions:
examples include concurrent actions, actions with unpredictable effects, and continuous
change. All proposed solutions to date reveal problems. Two examples are ​Global Workspace40
Theory ​and the ​Frame Default​.41 42
Global Workspace Theory (GWT) proposes an architecture with a model of combined serial and
parallel information flow (​Figure 1​).
37
​Cult of Information​, 1986, Random House, page 124
38
​Ibid., page 125
39
​Ibid., page 126
40
​The Problem with Solutions to the Frame Problem​ (Leora Morgenstern 1996)
41
​Murray Shanahan and Bernard Baars, ​Applying Global Workspace Theory to the Frame Problem​ (2004)
42
Raymond Reiter. A logic for default reasoning. ​Artificial Intelligence​, 13:81–132, 1980.
8
Figure 1: Global Workspace Architecture
The global workspace itself is a working memory store--a cognitive architecture for both
conscious and unconscious processes. There are many parallel unconscious specialist
processes that inform, and are informed by, the state of the global workspace. The contents of
this global memory are what we are conscious of at a point in time (durations may vary). You
can think of the global workspace as the executive function and the specialist processes as
subsystems.
To use the news-gathering robot example, a specialist process may be “check for rain
awareness” or “determine whether radio is already broadcasting the news.” In any case, the
specialist processes ​writ large ​(​Figure 2​) “disseminate their messages to all other processes in
an effort to recruit more cohorts and thereby increase the likelihood of achieving their goals.”43
Figure 2: The Global Workspace Model of Information Flow
The question, as with the original frame problem, is one of relevance. The specialists in the
global workspace self-determine their relevance, and announce it to the global workspace. For
instance, a specialist process that may be relevant (and belong in the global workspace) might
43
​Wikipedia entry on ​Global Workspace Theory​.
9
be that the paper is wrapped in plastic, and won’t be wet after all, assuming it is only drizzling
(not raining hard) and the paper was not thrown directly into a puddle in the driveway.
According to Jerry Fodor, the question of relevance, and the unlikelihood of being able to handle
all of the potentially relevant representations, is insurmountable. Interestingly, his example
builds another complexity into the very scenario posed by Roszak:
“The totality of one’s epistemic commitments is vastly too large a space to have to
search if all one’s trying to do is figure out whether, since there are clouds, it would be
wise to carry an umbrella. Indeed, the totality of one’s epistemic commitments is vastly
too large a space to have to search whatever it is that one is trying to figure out.”44
Murray Shanahan and Bernard Baars planned to “solve” this by noting efficiencies in search
(order of n​2​
as opposed to n*log(n)), avoiding a combinatorial explosion. Their paper also45
claims that parallel processing will speed things up sufficiently. That, along with having
self-aware specialists in a global workspace, is the essence of the approach.
The Frame Default theory states that no properties change as a result of actions except the
explicitly specified ones. Thus, Roszak’s news-gathering robot would not use data such as time
of day, or direction the wind is blowing. The list of exceptions can grow in time, yet the
no-change claim remains in place for all other properties (so the force of a gale may be an
issue). This theory has led to some interesting nonmonotonic logic constructs and became a
basis for defining syntax and semantics in the Prolog programming language.46
So there are logical methods and algorithms for handling the frame problem in the abstract. Still,
as noted in a technical analysis (44 years later) by computer scientist Jarek Gryz:
Interestingly, the discussion of the problem subsided within the AI community. It is hard
to believe that everyone has accepted Shanahan’s claim that the problem is
“more-or-less solved” and moved on. If that were the case, where are the thinking
machines that this problem has prevented us from building? A more likely explanation is
that after so many failed attempts researchers lost faith that the problem can be solved
by logic.47
And an analysis by Katelyn Rivers concludes that while GWT may indeed be
neurophysiologically plausible, it doesn’t quite scale to where it can solve the “relevance”
question:
The issue with the GWT solution as it currently stands is not that this kind of architecture
is doomed never to perform a relevance computation, but that its founders failed to
provide successful measures by which a system could recognize that something is
relevant. In other words, Shanahan, Baars, and others have not provided successful,
44
Fodor, J.A. (2000). ​The Mind Doesn't Work That Way​. MIT Press.
45
​Murray Shanahan and Bernard Baars, ​Applying Global Workspace Theory to the Frame Problem​ (2004)
46
​Vladimir Lifschitz, ​The Dramatic True Story of the Frame Default​, 2015​.
47
​Jarek Gryz: ​The Frame Problem in Artificial Intelligence and Philosophy​, 2013.
10
non-question-begging standards by which a system can determine that an item of
information is relevant with a reasonable level of foresight and success.48
To Hubert Dreyfus, on the other hand, the question is whether the frame problem ​needs ​be
solved; it appears to him that specifying which propositions are relevant to varied contexts is
hopeless: “if each context can be recognized only in terms of features selected as relevant and
interpreted in a broader context, the AI worker is faced with a regress of contexts.”49
More broadly, with regard to the physical symbol system hypothesis, AI researchers (according
to Dreyfus): “had taken over Hobbes’s claim that reasoning was calculating, Descartes’ mental
representations, Leibniz’s idea of ‘universal characteristic’ (a set of primitives in which all
knowledge could be expressed), Kant’s claim that concepts were rules, Frege’s formalization of
such rules, and Wittgenstein’s postulation of logical atoms in his ​Tractatus​”.50
In short, Dreyfus believed that the proper way forward was not through rationalist philosophy but
instead by attempting to express (in software) the existential and phenomenological
perspectives of Martin Heidegger and Maurice Merleau-Ponty. These arguments were generally
met with resistance in the AI community: it seemed anathema to the whole process to not
represent a model of the environment in a robotic implementation. Prominent AI researcher
Edward Feigenbaum, a major force in the development of expert systems, complained, "What
does [Dreyfus] offer us? Phenomenology! That ball of fluff. That cotton candy!"51
However, there was an early (partial) success from Rodney A. Brooks in developing ant-like
robots called “animats”; these mobile robots mainly referred to their own sensors to govern their
motion--all without knowledge representations. This work, according to Herrera and Sanz,52
quickly changed the “mood in the AI community” and the organization of the field:
In a short span of time, the once marginal and radical critique of the establishment
became commonplace….embodiment and situatedness became buzzwords, and
although Heidegger’s influence is still taken with caution, it is undoubtedly among the
sources of new AI and cognitive science.53
And there was, according to Dreyfus, some Heideggerian expression in a game developed by
Phil Agre and David Chapman called ​Pengi​, in which the “world” for the game’s “agents” does
not consist of objects but of ​possibilities for action​ (my emphasis). This is more in line with the
intentionality (​the quality of mental states that consists in their being directed toward some 
object or state of affairs​) described by Heidegger in ​Being and Time​, with representations that
designate “not a particular object in the world, but rather a role that an object might play in a
48
Katelyn Rivers, ​Can Global Workspace Theory Solve the Frame Problem​, 2018
49
​Hubert Dreyfus, ​What Computers Still Can't Do​ 1992, p. 189
50
​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian
51
​McCorduck, Pamela​ (2004), ​Machines Who Think​ (2nd ed.), Natick, MA: A. K. Peters, Ltd., 1979, page 204
52
​Rodney A. Brooks. “​Intelligence without Representation​,” Mind Design, John Haugeland, Ed., The MIT Press,
1988, 416. There was in fact some knowledge representation in the animats that Dreyfus believed impacted their
ability to learn.
53
​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​,
2016, page 499.
11
certain time-extended pattern of interaction between an agent and its environment.” Thus, the54
important aspect is the role (significance), not the object (represented knowledge).
Still, Agre’s implementation, according to Dreyfus, is a “virtual agent in a virtual micro-world” in
which no actual learning takes place: this limited micro-world is closed-off and represented as
the total storehouse of possible actions. It is still missing Merleau-Ponty’s feedback loop
between the agent and the world: “Cognitive life, the life of desire or perceptual life--is
subtended by an ‘intentional arc’ which projects around us our past, our future, and our human
setting.”55
Accordingly, the question remains as to whether some amount knowledge representation is
needed in order to create a general intelligence, or whether:
[A] Heideggerian Cognitive Science would require working out an ontology,
phenomenology, and brain model, that denies a basic role to any sort of
representation--even action-oriented ones--and defends a dynamical model like
Merleau-Ponty’s and van Gelder’s that gives a primordial place to equilibrium and in
general to rich coupling.56
Other authorities on Heidegger--Michael Wheeler, for instance --would say that combining57
some knowledge representation with the embodied and situated (in the world) coping of
intentionality, is required if we are going to use digital computers to simulate intelligence. It
seems to him counterintuitive to insist that knowledge not be represented at all, or represented
only in the world, outside the body and brain.
Even if you grant that the world itself is the ideal (or only) model, and that there is (obviously)
too much knowledge to effectively represent, it seems overly proscriptive, according to Nancy
Salay to not allow a knowledge base into the workings of a computer program:
Dreyfus is too hasty in his condemnation of all representational views; the argument he
provides licenses only a rejection of disembodied models of cognition. In casting his net
too broadly, Dreyfus circumscribes the cognitive playing field so closely that one is left
wondering how his Heideggerian alternative could ever provide a foundation
explanatorily robust enough for a theory of cognition.”58
So in higher level functioning, far away from lower motor functions and common sense, and tied
to a micro-world to address a narrow problem, knowledge representation (and placing limits on
the defined world), may be possible. But is this an ideal solution or a compromise? Is it
plausible that we only store meanings and values?
54
​Philip E. Agre, ​Computation and Human Experience​, Cambridge University Press New York City, 1997, page 243.
55
​Maurice Merleau-Ponty, ​Phenomenology of Perception​, trans C Smith, Routledge and Kegan Paul, 1962, page
136.
56
​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian
57
​See Michael Wheeler, ​Reconstructing the Cognitive World​, MIT Press, 2005.
58
​Nancy Salay, ​Why Dreyfus’ Frame Argument Cannot Justify Anti Representational AI​, 2009.
12
In a sense, it depends on what we are actually trying to build. ​As Herrera and Sanz have
noted:
The road to autonomous intelligent machines is proving more difficult and
challenging than it was first expected--not only because of a lack of instruments and
methods, but because it is from the beginning unclear what is meant to be achieved.
59
In any case, to evaluate this question, we need a rendering (even a model) in cognitive science
of how snapshots of coping situations or intentional arcs may be created. The question is
whether this model, with its representations, will be applicable only in the micro-world or to an
AGI.
Phenomenological AI
Before exploring whether a coded AGI solution with no knowledge representation is even
possible, or what this means for AGI in general, it’s useful to take a look at how a cognitive
scientist (Phil Turner) interprets (via Merleau-Ponty) the intentional relationships between
beings and objects in their world (​Figure 3​).
Figure 3: Rendering of Intentional Arc60
Turner notes that each type of intentionality in the individual (affective/feeling, social, corporeal,
and cognitive/perceptual), there is a corresponding ​affordance​, or offering from the world. In all
of these mappings, there is, for Turner: “the knowledge of how to act in a way that ‘coheres’
59
​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​,
2016, page 500.
60
Phil Turner, ​The Intentional Basis of Presence​, University of Edinburgh, 2007, page 4
13
with one’s environment bringing body and world together.” But this is more than just being61
physically present in the world; it is continuous and active:
According to Merleau-Ponty, higher animals and human beings are always trying to get a
maximal grip on their situation. When we are looking at something, we tend, without
thinking about it, to find the best distance for taking in both the thing as a whole and its
different parts. When grasping something, we tend to grab it in such a way as to get the
best grip on it.62
Using the intentional arc as a basis, Dreyfus identifies a “Merleau-Pontian Neurodynamic” in63
the work of the neuroscientist Walter J. Freeman, who developed a model of learning in rabbits
based on mapping activity in the rabbit’s brain to stimuli in the environment:
[T]he brain moves beyond the mere extraction of features...it combines sensory
messages with past experience...to identify both the stimulus and its particular meaning
to the individual.64
Freeman concentrated on the rabbit’s olfactory system, noting responses in the brain when, for
instance, the scent of a carrot is introduced. The scent produces cell assemblies that are
thereafter geared to respond to the scent of this food.
Similarly, cell assemblies may form for other reasons, depending on what is introduced. For
example, thirst (water), fear (fox), or sex (female rabbit) will trigger the firing of the
corresponding cell assemblies in earlier introductions. The goals are to satisfy the needs and65
have the assemblies revert to minimal energy states known as attractors. The brain states that
tend towards these attractors are called ​basins of attraction​ and the set of these basins that the
animal has learned is called an ​attractor landscape​.66
The key here, and what ties this effort (according to both Dreyfus and Freeman) to
Merleau-Ponty, is the difference between representation and significance. And in the
perception/action loop that takes place, “[T]he memory of significance is in the repertoire of
attractors as classifications of possible responses--the attractors themselves being the product
of past experience.”67
The attractors selected in the face of new events illustrate how Freeman’s model explains the
intentional arc in terms of Temporal Difference Reinforcement Learning (TDRL), in which the
learner continually updates assumptions based on new information; according to Dreyfus:
61
​Ibid., page 2
62
Ibid., page 2 (emphasis mine)
63
​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian​, page 18
64
​Walter J. Freeman, ​The Physiology of Perception​, ​Scientific American​, Vol. 242, February 1991, p. 78
65
​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian​, page 20
66
​Ibid., page 21
67
​Ibid., page 25
14
Our previous coping mechanisms feed back to determine what action the current
situation solicits, while the TDRL model keeps the animal moving forward towards a
sense of minimal tension, that is, a least rate of change in expected reward.68
Accordingly, Freeman has created (​Figure 4​), a flow diagram of the main components of the
intentional arc in the brain.
Figure 4: Dynamic Architecture of Limbic System69
At a high level, this shows action-perception cycles resulting in ​circular causality ​(in the
Merleau-Pontian sense ), for goal-oriented behavior arising from the interactions between70
motor and sensory systems. The main loops in this architecture that map to the diagram in
Figure 3​ are the motor loop (i.e., the systems engage with, and respond to, the world) and the
proprioceptive loop (i.e., the systems engage with, and respond to, the body):
● The motor loop (brain to/from world) maps to the affective and social intentionalities and
affordances
● The proprioceptive loop (brain to/from body) maps to the corporeal and
cognitive/perceptual intentionalities and affordances
The cortex transmits and receives signals (sometimes through interactions with the cognitive
map in the hippocampus) in circular loops to the sensory and motor systems:
● A reafference loop (sensory stimulation caused by body movement) to and from the
sensory systems
● A control loop (moving intention from the cortex to the world) to and from the motor
systems
In studying, and then modeling, this circular causality, Freeman uses only significances, and no
knowledge representations. The begged question, for those who want to attempt to ​implement
68
Ibid., page 27
69
​Walter J Freeman, ​Consciousness, Intentionality, and Causality, ​Journal of Consciousness Studies​, ​1999, page 12.
70
Ibid., extensive discussion throughout.
15
phenomenological AI, is this: if knowledge cannot be represented, and logic cannot be applied
to this knowledge, can digital computers in fact be used to create this AI? It appears that the
answer is yes, because the global state transitions between the attractor basins can have
symbols assigned to them:
At macroscopic levels each perceptual pattern of neuroactivity is discrete, because it is
marked by state transitions when it is formed and ended. ... I conclude that brains don't
use numbers as symbols, but they do use discrete events in time and space, so we can
represent them …by numbers in order to model brain states with digital computers.71
Ever the anti-representationalist, Dreyfus quickly points out that, “[T]he model is not an
intentional being, only a description of such.” Furthermore, the approach amounts to a tacit72
admission that we are indeed not at all close to being able to imitate human intelligence, and
that we have been overlooking the requirement to build the right ​body​ (it must be “like us”)​ ​to
comprise the necessary sensors and limbic subsystems to enable this to happen:
Thus, to program Heideggerian AI, we would not only need a model of the brain
functioning underlying coupled coping such as Freeman’s but we would also need—and
here’s the rub—a model of our particular way of being embedded and embodied such
that what we experience is significant for us in the particular way that it is. That is, we
would have to include in our program a model of a body very much like ours with our
needs, desires, pleasures, pains, ways of moving, cultural background, etc.73
Hence the requirement for a lived-in body. This leaves the question open as to whether there
are some cases, as Nancy Salay and Michael Wheeler would have it, where knowledge
representation may be of use. It is possible to imagine such cases if we take for granted that the
created “world” will be more truncated, limited, and specialized. Even the news gathering robot
example might qualify, provided its tasks are limited to selected “fetch” activities such as the
ones mentioned. But it must be understood that there is a design trade-off: a great deal of
richness and true intelligence would be left out of the product, which is effectively just a
micro-world.
Finally, the phenomenological perspective on AI, for Herrera and Sanz, has a higher mission,
beyond the implementation issues:
It is not the role of Heideggerian AI to solve the puzzle for AI development, but to unveil
the being of robots, now shadowed by the idea of an “artificial human.” The more we
uncritically identify robots with animals and humans, the further we are from becoming
aware of what robots really are.74
Thus, the truly Heideggerian endeavor is not to simply assist in the implementation of
AGI-capable entities, but to explore the ontology of the entities being built.
71
​Walter J Freeman, ​Societies of Brains​, 1996, page 105.
72
​Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian​, page 28
73
​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian​, page 30.
74
​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​,
2016, page 505.
16
Enactivism: Adding World to Brain and Body
Further developments in the biology of consciousness have taken the phenomenological
perspective a step further, and indicated that consciousness arises from interactions between
the body and world: the world is not only part of an intentional arc, it is part of consciousness
itself. Alva Noë, for instance, argues that “[W]e must look not inward, into the recesses of our
insides; rather, we need to look to the ways in which each of us, as a whole animal, carries on
the process of living in and with and in response to the world around us.”75
The scientific basis for this is strong, building on observations about perception being an active
activity as opposed to a passive absorption of the environment: “Perception is not a process in
the brain, but a kind of skillful activity of the body as a whole. We enact our perceptual
experience.” Thus, as noted by Jascha Hoffman, “[W]e do not passively absorb data from our76
eyes and ears, as a camera or a microphone would, but rather scan and probe our
environments so we can act on them.”77
It is in this fuller context of the entire body and the world, and this active view of perception, that
Noë builds on applied phenomenology as discussed in the previous section:
My central claim in this book is that to understand consciousness—the fact that we think
and feel and that a world shows up for us—we need to look at a larger system of which
the brain is only one element. Consciousness is not something the brain achieves on its
own. Consciousness requires the joint operation of brain, body, and world. Indeed,
consciousness is an achievement of the whole animal in its environmental context.78
Regarding the implications for AGI, Noë doesn’t rule out the possibility, but postulates that
“[T]he only route to artificial consciousness is through artificial life.” Thus, consciousness does79
arise in part from the brain and the body, but “we need to keep the whole organism in its natural
environmental setting in focus. Neuroscience matters, as does chemistry and physics. But from
these lower-level or internal perspectives, our subject matter loses resolution for us.”80
The emphasis on full context and action, on empirical observation of the agent, its body, and the
environment, amounts to psychologists such as Howard Rachlin as a revival of behaviorism:
“Dissatisfaction with neural representations as explanations of mental events, as well as the
divorce of the mind (in such identity theories) from its biological function, has led Noë to expand
his concept of the mind outward from the brain to overt behavior, including social behavior.”81
75
​Alva Noë, Out of Our Heads: ​Why You Are Not You Brain and Other Lessons from the Biology Consciousness,​ Hill
and Wang; 1 edition (February 2, 2010), page 7.
76
​Alva Noë, ​Action in Perception
77
​Jascha Hoffman, Scientific American Mind, ​Review of Out of Our Heads​, April 2009
78
​Alva Noë, Out of Our Heads, page 10
79
​Ibid., (Kindle Locations 715-719)
80
​Ibid., (Kindle Locations 715-719)
81
​Howard Rachlin, ​Is the Mind Your Brain?​, Journal of the Experimental Analysis of Behavior, July 2012.
17
Still, according to Tom Froese, an empirical approach to phenomenology and enactivism can
lead to the next developments in AI: “[E]nactivism can have a strong influence on AI because of
its biologically grounded account of autonomous agency and sense-making. The development
of such enactive AI, while challenging to current methodologies, has the potential to address
some of the problems currently in the way of significant progress in embodied-embedded AI.”82
The potential success of enactivism in AI research, therefore, depends on the development of
what Froese calls a “phenomenological pragmatics.”83
Classifying Fictional Robots
It’s been shown that many of the questions that arise in the viability of AI systems (particularly
AGI) stem from ambiguous definitions and goals. In an attempt to sort out these varied goals,
Selmer Bringsjord and Naveen Govindarajulu draw from the influential textbook ​Artificial
Intelligence: A Modern Approach to provide a neat categorization (​Table 2​).84 85
To this table, I’ve added, in bold, possible examples that may be familiar from popular culture.
Along these lines, Herrera and Sanz note that “[S]cience fiction is a strong source of our
pre-ontological understanding of robots.”86
Table 2: Reasoning or Acting as Human or Abstract Rational Being
Human-Based Ideal Rationality
Reasoning-Based Systems that think like
humans. ​(Commander Data)
Systems that think rationally.
(HAL 9000)
Behavior-Based Systems that act like humans.
(Stepford Wife Joanna)
Systems that act rationally.
(Mending Apparatus)
Where would the questions of the frame problem, knowledge representation, and other
implementation considerations, fall in various examples from science fiction? Their appearance
in our culture can help sort out the discussion, and there’s value in being more exact about what
we’re talking about in terms of goals and plausibility. In ​Table 1​, for instance, the range of goals
is so vast and public that misunderstanding was almost certain to result.
For example, the frame problem clearly shows itself more in robotics than in generalized
intelligence from an inert computer. This distinction has only recently been fully teased out, and
82
​Tom Froese, ​On the role of AI in the ongoing paradigm shift within the cognitive sciences​, M. Lungarella, F. Iida, J.
Bongard & R. Pfeifer (eds.), page 12.
83
​Ibid., page 2.
84
​Russell, S. & Norvig, P., 2009, ​Artificial Intelligence: A Modern Approach​ 3rd edition, Saddle River, NJ: Prentice
Hall
85
Bringsjord, Selmer and Govindarajulu, Naveen Sundar, "​Artificial Intelligence​", The Stanford Encyclopedia of
Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)
86
​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​,
2016, page 506.
18
that lack of clarity has undoubtedly contributed to some of the misunderstandings as to where
the successes and failures have been. Still, it does seem like the “framers” of AI did not (until
McCarthy’s 1969 paper) explicitly specify a robot. And, as noted earlier, Turing had astutely87
allowed typewritten pages as one interface medium (which substantially limits the degree of
difficulty).
Thus, for all the Dreyfusian insistence on embodiment and embeddedness, it is useful to look at
our murderous friend HAL from ​2001: A Space Odyssey​, a ubiquitous icon of fictional AI.
Whether HAL is badly programmed, just plain malicious, or correctly programmed by an agent
with evil motive can be bracketed. The question at hand is whether his powers can be bounded
in something greater than the micro worlds that sprung out of early attempts to solve the frame
problem with logic, neural networks, global workspaces, or well-chosen defaults.
Consider embodiment. What senses and capabilities does HAL have? He certainly has sight,
speech, “touch” (he knows when he is being disabled), and it doesn’t seem beyond the pale that
he might have taste or smell. But fundamentally he appears to be a massive mainframe
requiring an air-conditioned room to function at all. Consider embeddedness. HAL’s
(micro-)world is the spaceship itself, greatly simplifying the environmental concerns that come
with even the most basic household robot, which may (as discussed in Roszak’s example
above) have to negotiate the house, the yard, and the weather in performing its duties.
HAL is actually fairly narrowly specified and scoped. This is the spirit in which Wheeler and
Salay are correct to say that proscribing representation is an overreach that limits shorter-term
practical AI applications. HAL’s main responsibility of commanding the Discovery mission to
Jupiter is more akin to the higher function tasks (i.e., Chess, Jeopardy, and Go) that have seen
great success in AI research. His job function is cerebral and squarely in the domain of
higher-functioning tasks that have been solved in AI.
On the other hand, if we turn to Lieutenant Commander Data of Starships Enterprise-D and -E,88
we have a vastly more daunting set of specifications. Here, the Federation has gone to the great
trouble of creating exactly (or nearly enough) the full embodiment that entails human-caliber AI
of the sort that may in reality be centuries away.
At the other end of the spectrum (and as referenced above), it’s worthwhile to consider E.M.
Forster’s mending apparatus from ​The Machine Stops​. This subsystem within “The Machine”
(the underground honeycomb of condos in which all civilization lives) automatically diagnoses
and remediates any issues that arise: lack of air, water or food, technological subsystems
(videoconferencing), etc. The mending apparatus is ideal-based, not human, but it dispatches
its duties in a rational manner.
Finally, in the Human/Behavior block of Column 1 in ​Table 2​, Joanna Eberhart from ​The
Stepford Wives notices and fights the trend of her female neighbors becoming docile to the89
point of zombification. As she winds up being transformed into a robotic version of herself, and
becomes another submissive Stepford Wife, she loses her ability to think independently.
87
​Some Philosophical Problems from the Standpoint of Artificial Intelligence
88
​See Wikipedia entry on ​Data (Star Trek)
89
​See Wikipedia entry on ​The Stepford Wives​.
19
Hewing close to human-like AGI, which has been the top goal and promise of AI, it’s useful first
to ask: How much embodiment is needed to get into the “AGI Family”? As examples, consider
the “Ideals Based” column in ​Table 2​. HAL exhibits (some) general intelligence, whereas the
mending apparatus does not. HAL also exhibits some limited embodiment, and the mending
apparatus again does not, but is still a heightened intelligence with ready-to-hand techniques for
remediating the air, water, communication systems, and transportation tunnels in Forster’s
dystopia.
Furthermore, given a domain of responsibility that is very high in abstract executive function
(which, as noted by Moravec, Minsky and Knuth, is much easier to implement than common
sense), who is to say that a suitable “mini-world” (the Discovery) or a “massively connected
world model” (the Machine) does not qualify as a sufficiently large AI domain to be worthy of
philosophical consideration and criticism? On the other hand, the “Human-based” column in
Table 2 ​requires much deeper embodiment, almost the full creation of a human body--certainly
well beyond anything we’ve been able to even begin achieving.
Conclusion
There are three related conclusions supported by this investigation:
● The dichotomy of “strong versus weak” AI misrepresents the AI landscape and
sometimes hinders productive discussion
● The realities of the frame problem provide insights into cognitive science, computer
science and philosophy
● The field of (what we now call “weak”) AI needs fresh attention from the (general)
philosophy of technology
These are covered in the following sections.
“Strong versus Weak” Misrepresents the AI Landscape
The current identity crisis in AI, as to what is being built and why, and whether any of it
constitutes human general intelligence and performance, owes a lot to loose definitions,
breathtaking claims, and animus across computer science and philosophy.
It is easy to see both why philosophers would be startled by being told their tradition is null and
void because of what’s going on in the AI Lab, and why computer science researchers might90
be resentful of criticism that doesn’t appear helpful to their concerns. ​Aggressive language went
in both directions, as philosophers asserted that the quest for the Holy Grail of AGI would never,
for a variety of reasons, be realized: outdated philosophies (Dreyfus) or misunderstanding the
meaning of mind (Searle).
90
​My reference here is to the MIT Lab in the 1960’s and the earliest discussions between Dreyfus and Minsky et al. I
think there has been a sort of “two cultures” (a la Charles Percy Snow) debate going on over the decades, and some
of the heat from it has to do with unclear terms, goals, and objectives.
20
Pamela McCorduck summed up the genesis of the discord when she observed that “[A] great
misunderstanding accounts for public confusion about thinking machines, a misunderstanding
perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking
machines are already here, or at any rate, just around the corner.“ On the other, McCorduck91
also notes (of Dreyfus) that, “His derisiveness has been so provoking that he has estranged
anyone he might have enlightened. And that's a pity."92
As for Searle’s dichotomy of Strong versus Weak AI, the way it is received in the cognitive and
computer science world (where the work is vibrant and the practitioners are collecting what I call
the “many small victories”) is aptly summed up by Harnad: “[I]t is not at all clear that Searle's
‘Strong AI/’Weak AI’ distinction captures all the possibilities, or is even representative of the
views of most cognitive scientists. Much of AI is in any case concerned with making machines
do intelligent things rather than with modeling the mind.” This view, according to both Selmer93
Bringsjord and Peter Norvig, is shared by all but the very few AI researchers.94 95
Thus, philosophical arguments around AGI (Strong AI), while useful to providing insights into the
problem of modeling knowledge, often amount to definitional games around words like ​thinking​,
mind​, and ​consciousness ​of persons and machines. This is important inasmuch as it leads to a
scientific treatment of philosophy of mind (discussed below). But over-rotating on Strong AI
misses not only the current landscape but its urgent implications. And it is also misguided from a
historical perspective, as Turing himself proposed only a weak, functionalist AI, and many of the
promises (all but the loftiest) of the Dartmouth Summer Project have in fact been realized.
The range of possible “intelligences” (or simulations) that are set out by both the Dartmouth
Summer Project (​Table 1​) and Bringsjord and ​Govindarajulu (​Table 2​) illustrate how easily our
understanding can get lost. ​In ​Table 2​, for instance, McCarthy et al leapt wholeheartedly into the
Human/Reasoning quadrant (the only "Strong AI" box) and that’s the quadrant that carried the
public’s imagination and that most science fiction explores. Later, when he identified the frame96
problem in 1969, McCarthy didn’t inspire dialog with his assertion that: “Since the philosophers
have not really come to an agreement in 2500 years it might seem that artificial intelligence is in
a rather hopeless state if it is to depend on getting concrete enough information out of
philosophy to write computer programs.”97
Still, the forbidding moniker of unattainable Strong AI does not need to stay attached to other
worthy (if lesser) notions of general intelligence (learning ability, knowledge categorization, the
91
​McCorduck, Pamela (2004), ​Machines Who Think (2nd ed.)​, Natick, MA: A. K. Peters, Ltd., page 212
92
​Ibid, page 236.
93
​Harnad, S. (1989) ​Minds, Machines and Searle.​ Journal of Theoretical and Experimental Artificial
Intelligence 1: 5-25, page 15
94
Selmer Bringsjord, ​What Robots Can and Can’t Be​, Kluwer 1992. Bringsjord argues, in a position he holds to this
day, that “(1) AI will continue to produce machines with the capacity to pass stronger and stronger versions of the
Turing Test but that (2) the "Person Building Project" (the attempt by AI and Cognitive Science to build a machine
which is a person) will inevitably fail.”
95
​TechTalks: ​Why We Should Focus on Weak AI for the Moment​, April 23. 2018; Supporting this position, Norvig
adds: “We want humans and machines to partner and do something that they cannot do on their own.”
96
​Bringsjord, Selmer and Govindarajulu, Naveen Sundar, "​Artificial Intelligence​", The Stanford Encyclopedia of
Philosophy (Fall 2018 Edition), July 2018
97
​McCarthy, John, ​Some Philosophical Problems from the Standpoint of Artificial Intelligence​, 1969.
21
capability to reason) and should not overshadow the many interesting (yet possibly dangerous!)
research. ​This imperious approach also (per Harnad) diluted Searle's contributions, which could,
with slight modifications, have been directed straight towards computationalism (which sorely
deserved a broadside critique), instead of attacking a strawman (Strong AI) that only the very
few practitioners were willing to defend.98
As if to make the point of how unnecessary and unhelpful this dichotomy became, Dreyfus, as
an applied philosopher, was both the original gadfly of AI and eventually came into close
consultation with the engineers and cognitive scientists who were willing to discuss the modern
understandings of mind in philosophy (such as phenomenology) and relate them to scientific
understandings (in biology) and implications for implementation (in cognitive science and AI).
While the resultant cross-disciplinary studies were undertaken regardless of whether they were
in the neighborhood of AGI or a simulation, progress was made with useful contributions and
earnest endeavors to get closer to constructs of perception such as the intentional arc.
The Frame Problem Yields Vital Lessons Across Multiple
Disciplines
We all owe a lot to the frame problem. At its inception, it was very technical and narrow,
confined to the context of taking specific actions in specific situations, but it grew in many99
directions. As Selmer Bringsjord has noted, it is variously now understood (with degrees of truth
in each case) as a relevance problem, a statement on the futility of symbolic reasoning in
holistic actions or thought processes, and the conditions under which a belief should be updated
after an action has been undertaken.100
Essentially, ​nothing highlights the differences between our day-to-day existence and AI’s
attempts to reason about it like the frame problem. First and foremost, the problem highlights a
disparity between a philosophical and a technological assessment of what it means to be
“solved.” In computer science, this largely amounts to questions of tractability (in the form of
compute time).
GWT, for instance, utilized event-driven logic flows, which made sense for handling the parallel
unconscious specialists (​Figure 2​ above) but of course never answered the relevance question.
And default logic is an excellent way to minimize the number of frame axioms (by ruling all but
the few likely correlative changes of any action) in many feasible micro-worlds, and was
adopted in Prolog, a very useful language for machine-learning applications. But of course it
doesn't handle even rudimentary hypotheticals such as the Yale Shooting Problem, let alone101
myriad less-well-defined real-world situations. However, it’s worth noting that nonmonotonic
98
​Stevan Harnad, ​Mind, Machines and Searle 2: What’s right and wrong about the chinese room argument​, Oxford
University Press, 2006: Harnad here reformulates Searle’s arguments so that they are the recognizable tenets of
computrationalism, which (as opposed to Strong AI, which “no one” believed in), had adherents, and needed
refutation.
99
​McCarthy, John, ​Some Philosophical Problems from the Standpoint of Artificial Intelligence​, 1969.
100
​Selmer Bringsjord and Konstantine Arkoudas, ​The Philosophical Foundations of Artificial Intelligence​, 2007
101
​Shanahan, Murray, "​The Frame Problem​", The Stanford Encyclopedia of Philosophy (Spring 2016 Edition)
22
logic and the situation calculus, both borne of the frame problem, have led to great strides in the
efficacy of expert systems, especially when combined with deep learning algorithms.102
At first blush, the spectacular misunderstandings that emerged from studies in the frame
problem not only led us nowhere in the quest for AGI, but threatened to forge forever a
misconception as to what it means to be to be able to “solve” intractable problems in the
simplest of human endeavors. Humans do not face the frame problem because they do not go
about their daily lives by representing the world in logical formulas.103
But conversely, and perhaps counterintuitively, this failure provides positive news for the
philosophy of personhood, as we question “at what point in one’s development from a fertilized
egg there comes to be a person, or what it would take for a chimpanzee or a Martian or an
electronic computer to be a person, if they could ever be.” Thus, the frame problem, which104
appears “solvable” in theory but not in practice, has amply illustrated that the states of105
day-to-say human being can’t be successfully chased down by an endless succession of frame
axioms.
Within AI research itself, the frame problem led AI research from a classical approach into a
connectionist and dynamical systems approach, which in itself was a huge departure from the
problems being worked on previously. From there, As ​Table 3 ​(sourced from Selmer Bringsjord
and Konstantine Arkoudas) shows, the problem spaces leapt from purely intellectual tasks into
action-oriented ones, from static to dynamic requirements, and from an atomistic to a holistic
endeavor.
Table 3: From Classical to Situated AI106
Classical Embodied
Representational Non-representational
Individualistic Social
Abstract Concrete
Context-independent Context-dependent
Static Dynamic
Atomistic Holistic
Computer-inspired Biology-inspired
Thought-oriented Action-oriented
102
​See, for instance, D.M. Gabay, ​Theoretical Foundations for Nonmonotonic Reasoning in Expert Systems​, Logics
and Models of Concurrent Systems, 1985.
103
​Selmer Bringsjord and Konstantine Arkoudas, ​The Philosophical Foundations of Artificial Intelligence​, 2007
104
​Olson, Eric T., "​Personal Identity​", The Stanford Encyclopedia of Philosophy (Summer 2017 Edition)
105
And of course, this calls into question the integrity of the theory!
106
​Selmer Bringsjord and Konstantine Arkoudas , ​The Philosophical Foundations of Artificial Intelligence​, 2007
23
As such, the frame problem has illustrated just how different are the purely intellectual and the
action-oriented problem spaces. Both are still being pursued in parallel, as evidenced by
Watson and other high achievements in natural language processing being contrasted with the
agile but infantile robots being built by Boston Dynamics and others for manual labor or107
military purposes. Obviously, neither effort touches on what it means to be human, but both
represent high achievement that should be both respected and scrutinized.
Finally, the questions of knowledge representation are currently being debated not only in
cognitive science and AI, but in biology and the philosophy of mind. Disputes on representations
are to be found not only amongst Heideggerians (such as Wheeler and Salay) but by enactivist
philosophers of mind such as Pierre Steiner and Anthony Chemero, who are also empirical108 109
cognitive scientists studying the limitations of enactive anti-representationalism. This is probably
more positive than not, as ​Herrera and Sanz have noted: “Science crystallizes in a body of
knowledge, but it cannot be reduced to a set of true propositions about the world.”110
There Are Crucial Philosophical Concerns Raised by So-Called
Weak AI
Both classical AI and subsequent embodied and situated AI deserve further analysis, even as
the plethora of “lesser” AI applications ignore the elusive goal of synthetic consciousness. As for
that Holy Grail, the more critical task at hand, far more urgent than the “mind” we may create in
a few centuries, is to observe with discerning caution even the “weak” AI robots that could soon
be as “smart” as Watson and as dangerous as the products leaping out of top defense
contractors.
In fact, while there is an allure to the notion of creating consciousness, the AI pursuit is mainly
tied to industrial profitability and productivity, or quests for weaponry in arms races, or111 112
endeavors to influence populations via AI-fueled social networking. In the bloom of these113
developments, our first thought should not be whether the successful robot validates Strong AI,
but what it means to us that it was brought into being at all, not to mention into ​our ​being. In
part, we are called, according to Herrera and Sanz, to explore the ontological problem of “not
how robots are built, but what they are.” Weak AI may not ultimately be as dangerous as114
Strong AI (this is debatable ), but there is plenty to question there.115
107
​Boston Dynamics​: I’m not saying that these robots couldn’t (in a Turing-machine universe) be fortified to match the
intellectual capabilities of Watson, but that is not the focus of either IBM or Boston Dynamics.
108
​Pierre Steiner, ​Enacting anti-representationalism​, 2014.
109
Anthony Chemero, ​Anti-representationalism and the dynamical stance​, Philosophy of Science, December 2000
110
​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​,
2016, page 500.
111
​Louis Columbus, ​Artificial Intelligence will Enable 38% Profitability Gains by 2035​, Forbes, June 22, 2017.
112
Military.com: ​China Leaving US Behind​ July 2018
113
​Simon McCarthy Jones, ​Are Social Networking Sites Controlling You Mind?​, Scientific American, December 2017.
114
​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​,
2016, page 500.
115
​Gil Press, Forbes, ​Artificial Intelligence Pioneers: Peter Norvig, Google​, December 2016.
24
Many technologies cause more problems than they solve, but AI is promising to achieve this at
very high speeds. For example, the ability to analyze live closed-caption videos with machine
learning algorithms is blithely ushered in as a boon to for public safety, emergency response
capabilities, and other scientific and industrial applications. The risks and threats to privacy116
and social justice are important questions to take up at any stage in this process.
The problems that those in the AI community have been claiming to be on the verge of solving
appear to be a long way into the future: first (with phenomenology) a body is required, then (with
enactivism) a world is added on, all against the backdrop of indecision as to whether
representations are desirable. It appears that we are quite a long distance from AGI, but we can
still investigate (for instance) how best to deal with robots, or deep learning systems, that
resemble or surpass (even if they do not duplicate) our capabilities in many areas and action. If
the social i​mplications of AI are rising rapidly, and they are doing so because intelligent robots
will soon become a normal part of our day to day existence, it doesn’t much matter that they
won’t be reflecting AGI for any time in foreseeable future.
,
116
​James Vincent, ​Artificial Intelligence is going to Supercharge Surveillance​, The Verge, January 2018
25

Mais conteúdo relacionado

Mais procurados

Artificial intelligence and cognitive modeling have the same problem chapter2
Artificial intelligence and cognitive modeling have the same problem   chapter2Artificial intelligence and cognitive modeling have the same problem   chapter2
Artificial intelligence and cognitive modeling have the same problem chapter2sabdegul
 
Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...
Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...
Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...Antonio Lieto
 
Extending the Mind with Cognitive Prosthetics?
Extending the Mind with Cognitive Prosthetics? Extending the Mind with Cognitive Prosthetics?
Extending the Mind with Cognitive Prosthetics? PhiloWeb
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligenceRishab Acharya
 
The IOT Academy Training for Artificial Intelligence ( AI)
The IOT Academy Training for Artificial Intelligence ( AI)The IOT Academy Training for Artificial Intelligence ( AI)
The IOT Academy Training for Artificial Intelligence ( AI)The IOT Academy
 
Can computers think
Can computers thinkCan computers think
Can computers thinkGTClub
 
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...Antonio Lieto
 
A Computational Framework for Concept Representation in Cognitive Systems and...
A Computational Framework for Concept Representation in Cognitive Systems and...A Computational Framework for Concept Representation in Cognitive Systems and...
A Computational Framework for Concept Representation in Cognitive Systems and...Antonio Lieto
 
DWX 2018 Session about Artificial Intelligence, Machine and Deep Learning
DWX 2018 Session about Artificial Intelligence, Machine and Deep LearningDWX 2018 Session about Artificial Intelligence, Machine and Deep Learning
DWX 2018 Session about Artificial Intelligence, Machine and Deep LearningMykola Dobrochynskyy
 
Can computers think
Can computers thinkCan computers think
Can computers thinkGTClub
 
From embodied Artificial Intelligence to Artificial Life
From embodied Artificial Intelligence to Artificial LifeFrom embodied Artificial Intelligence to Artificial Life
From embodied Artificial Intelligence to Artificial LifeKrzysztof Pomorski
 
ARTIFICIAL INTELLIGENCETterm Paper
ARTIFICIAL INTELLIGENCETterm PaperARTIFICIAL INTELLIGENCETterm Paper
ARTIFICIAL INTELLIGENCETterm PaperMuhammad Ahmed
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial IntelligenceMhd Sb
 

Mais procurados (20)

Artificial intelligence and cognitive modeling have the same problem chapter2
Artificial intelligence and cognitive modeling have the same problem   chapter2Artificial intelligence and cognitive modeling have the same problem   chapter2
Artificial intelligence and cognitive modeling have the same problem chapter2
 
Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...
Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...
Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...
 
Extending the Mind with Cognitive Prosthetics?
Extending the Mind with Cognitive Prosthetics? Extending the Mind with Cognitive Prosthetics?
Extending the Mind with Cognitive Prosthetics?
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
The IOT Academy Training for Artificial Intelligence ( AI)
The IOT Academy Training for Artificial Intelligence ( AI)The IOT Academy Training for Artificial Intelligence ( AI)
The IOT Academy Training for Artificial Intelligence ( AI)
 
AI And Philosophy
AI And PhilosophyAI And Philosophy
AI And Philosophy
 
AI Lecture 1 (introduction)
AI Lecture 1 (introduction)AI Lecture 1 (introduction)
AI Lecture 1 (introduction)
 
Ai chap1 intro
Ai chap1 introAi chap1 intro
Ai chap1 intro
 
Can computers think
Can computers thinkCan computers think
Can computers think
 
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...
 
A Computational Framework for Concept Representation in Cognitive Systems and...
A Computational Framework for Concept Representation in Cognitive Systems and...A Computational Framework for Concept Representation in Cognitive Systems and...
A Computational Framework for Concept Representation in Cognitive Systems and...
 
DWX 2018 Session about Artificial Intelligence, Machine and Deep Learning
DWX 2018 Session about Artificial Intelligence, Machine and Deep LearningDWX 2018 Session about Artificial Intelligence, Machine and Deep Learning
DWX 2018 Session about Artificial Intelligence, Machine and Deep Learning
 
Can computers think
Can computers thinkCan computers think
Can computers think
 
From embodied Artificial Intelligence to Artificial Life
From embodied Artificial Intelligence to Artificial LifeFrom embodied Artificial Intelligence to Artificial Life
From embodied Artificial Intelligence to Artificial Life
 
Artificial intelligence research
Artificial intelligence researchArtificial intelligence research
Artificial intelligence research
 
AI_6 Uncertainty
AI_6 Uncertainty AI_6 Uncertainty
AI_6 Uncertainty
 
ar
arar
ar
 
ARTIFICIAL INTELLIGENCETterm Paper
ARTIFICIAL INTELLIGENCETterm PaperARTIFICIAL INTELLIGENCETterm Paper
ARTIFICIAL INTELLIGENCETterm Paper
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
Artificial intel
Artificial intelArtificial intel
Artificial intel
 

Semelhante a An Elusive Holy Grail and Many Small Victories: Philosophical Foundations and History of AI

Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial IntelligenceAkshay Thakur
 
A paper presentation abstract
A paper presentation abstractA paper presentation abstract
A paper presentation abstractJagadeesh Kumar
 
Artificial Intelligence can machine think
Artificial Intelligence can machine thinkArtificial Intelligence can machine think
Artificial Intelligence can machine thinkpaulmateo5
 
PHIL20031 Applied Philosophy.docx
PHIL20031 Applied Philosophy.docxPHIL20031 Applied Philosophy.docx
PHIL20031 Applied Philosophy.docxwrite5
 
AnswerTuring TestCoined by computing pioneer Alan Turing in .pdf
AnswerTuring TestCoined by computing pioneer Alan Turing in .pdfAnswerTuring TestCoined by computing pioneer Alan Turing in .pdf
AnswerTuring TestCoined by computing pioneer Alan Turing in .pdfnareshsonyericcson
 
Artifical intelligence
Artifical intelligenceArtifical intelligence
Artifical intelligencecstanziale
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial IntelligenceBise Mond
 
Introduction to Artificial Intelligence.doc
Introduction to Artificial Intelligence.docIntroduction to Artificial Intelligence.doc
Introduction to Artificial Intelligence.docbutest
 
The Mind as the Software of the Brain by Ned Block httpww.docx
The Mind as the Software of the Brain by Ned Block httpww.docxThe Mind as the Software of the Brain by Ned Block httpww.docx
The Mind as the Software of the Brain by Ned Block httpww.docxarnoldmeredith47041
 
How to Talk to a Cognitive Computer, NLDB2015
How to Talk to a Cognitive Computer,  NLDB2015How to Talk to a Cognitive Computer,  NLDB2015
How to Talk to a Cognitive Computer, NLDB2015Csaba Veres
 
Application Of Artificial Intelligence In Electrical Engineering
Application Of Artificial Intelligence In Electrical EngineeringApplication Of Artificial Intelligence In Electrical Engineering
Application Of Artificial Intelligence In Electrical EngineeringAmy Roman
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial IntelligenceIman Ardekani
 
Opening the door to the Chinese Room: Intersubjectivity and the Phenomenology...
Opening the door to the Chinese Room: Intersubjectivity and the Phenomenology...Opening the door to the Chinese Room: Intersubjectivity and the Phenomenology...
Opening the door to the Chinese Room: Intersubjectivity and the Phenomenology...Robert Farrow
 

Semelhante a An Elusive Holy Grail and Many Small Victories: Philosophical Foundations and History of AI (20)

Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
A paper presentation abstract
A paper presentation abstractA paper presentation abstract
A paper presentation abstract
 
Artificial Intelligence can machine think
Artificial Intelligence can machine thinkArtificial Intelligence can machine think
Artificial Intelligence can machine think
 
PHIL20031 Applied Philosophy.docx
PHIL20031 Applied Philosophy.docxPHIL20031 Applied Philosophy.docx
PHIL20031 Applied Philosophy.docx
 
Lecture 1
Lecture 1Lecture 1
Lecture 1
 
AnswerTuring TestCoined by computing pioneer Alan Turing in .pdf
AnswerTuring TestCoined by computing pioneer Alan Turing in .pdfAnswerTuring TestCoined by computing pioneer Alan Turing in .pdf
AnswerTuring TestCoined by computing pioneer Alan Turing in .pdf
 
Alan Turing
Alan TuringAlan Turing
Alan Turing
 
1.INTRODUCTION AI.pdf
1.INTRODUCTION AI.pdf1.INTRODUCTION AI.pdf
1.INTRODUCTION AI.pdf
 
AI.doc
AI.docAI.doc
AI.doc
 
Artifical intelligence
Artifical intelligenceArtifical intelligence
Artifical intelligence
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
Introduction to Artificial Intelligence.doc
Introduction to Artificial Intelligence.docIntroduction to Artificial Intelligence.doc
Introduction to Artificial Intelligence.doc
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
The Mind as the Software of the Brain by Ned Block httpww.docx
The Mind as the Software of the Brain by Ned Block httpww.docxThe Mind as the Software of the Brain by Ned Block httpww.docx
The Mind as the Software of the Brain by Ned Block httpww.docx
 
How to Talk to a Cognitive Computer, NLDB2015
How to Talk to a Cognitive Computer,  NLDB2015How to Talk to a Cognitive Computer,  NLDB2015
How to Talk to a Cognitive Computer, NLDB2015
 
Application Of Artificial Intelligence In Electrical Engineering
Application Of Artificial Intelligence In Electrical EngineeringApplication Of Artificial Intelligence In Electrical Engineering
Application Of Artificial Intelligence In Electrical Engineering
 
Introduction to artificial intelligence
Introduction to artificial intelligenceIntroduction to artificial intelligence
Introduction to artificial intelligence
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
Artificial intelligence research
Artificial intelligence researchArtificial intelligence research
Artificial intelligence research
 
Opening the door to the Chinese Room: Intersubjectivity and the Phenomenology...
Opening the door to the Chinese Room: Intersubjectivity and the Phenomenology...Opening the door to the Chinese Room: Intersubjectivity and the Phenomenology...
Opening the door to the Chinese Room: Intersubjectivity and the Phenomenology...
 

Último

Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 

Último (20)

Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 

An Elusive Holy Grail and Many Small Victories: Philosophical Foundations and History of AI

  • 1. An Elusive Holy Grail and Many Small Victories The State of AGI and Other AI Concerns Abstract 1 Introduction: An AI Prehistory 2 AGI From Conjecture to Goal 4 The Frame Problem and Knowledge Representation 7 An Example, with Solutions and Objections 7 Phenomenological AI 13 Enactivism: Adding World to Brain and Body 17 Classifying Fictional Robots 18 Conclusion 20 “Strong versus Weak” Misrepresents the AI Landscape 20 The Frame Problem Yields Vital Lessons Across Multiple Disciplines 22 There Are Crucial Philosophical Concerns Raised by So-Called Weak AI 24 Abstract A survey of the philosophical foundations and the history of Artificial Intelligence (AI) reveal a discipline that is simultaneously in a vibrant era of creative achievement (many victories) and is yet in the midst of an identity crisis (an elusive grail). Early history reveals conflated and unrealistic goals that ran headlong into an iconic implementation issue--the frame problem--which aptly showed the industry to be on an errant track. Subsequent forays into, and lessons learned from, phenomenology and enactivism have led to breakthroughs in robotics, and have unveiled a still-current debate on the benefits and misuses of knowledge representation. These issues are illuminated with hypotheticals and examples from literature and cinema. The findings raised in this investigation support three related conclusions: (1) the dichotomy of “strong versus weak” AI is misleading and misrepresents the current state of the industry; (2) the frame problem yields insights into not only AI and cognitive science, but into philosophy of mind and personal identity; and (3) the broader philosophy of technology should take primacy on the current state of AI concerns. 1
  • 2. Introduction: An AI Prehistory There is a long tradition in the history of ideas that can be said to anticipate artificial intelligence. Aristotle, cited by Selmer Bringsjord, provided some of the foundation with the notion of a formalism, wherein “certain patterns of logical thought are valid by virtue of their syntactic form, independently of their content….[This] remains at the heart of the contemporary computational theory of the mind.” Thomas Hobbes, in Leviathan, expressed a computational view of thinking:1 “By ratiocination I mean computation. Now to compute, is either to collect the sum of many things that are added together, or to know when one thing is taken out of another.2 Interestingly, Rene Descartes, who in the Sixth Meditation established a well-known substance dualism that provided the bedrock for a computational theory of mind maintained that3 4 machines would never duplicate human performance, and indeed that it would be “morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act.5 The tradition emphasized classical logic and a mechanical, computational framework to emulate aspects of human performance. Then, in 1950, Alan Turing proposed to explore whether machines could think. This can be considered an early modern allusion to what would later be6 dubbed Artificial General Intelligence (AGI). However, Turing made it clear in his seminal paper that terms such as “think” and “machine” cannot be defined clearly, which is why the question lent itself more to a survey than a study. Substituting for this inquiry, he presented a thought experiment he called ​The Imitation Game​, in which it would be explored whether it may be possible for question-and-answer responses from a computer to effectively be indistinguishable from answers that a human being might give. Turing’s substitution of the game suggests that he believed the idea of a thinking machine to be misleading: a question that cannot be answered. Perhaps, for Turing, it is not even well formulated: what does it mean to think? He was careful to draw sharp lines between physical and intellectual capabilities, and he even considered typewritten correspondence as an instrument, indicating an awareness of the importance of tone and the potential overload of language wherein (nearly) all meaning comes from context. Turing appears to prefigure embodiment and the distinction in degrees of difficulty between stationary machines and robots in AI: It will not be possible to apply exactly the same teaching process to the machine as to a normal child. It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes. But however well these 1 ​Selmer Bringsjord, ​The Philosophical Foundations of Artificial Intelligence​, 2007, page 4. 2 Hobbes, T., ​Elements of Philosophy​, (Sect. 1, de Corpore 1, 1, 2) 3 ​Hatfield, Gary, "​René Descartes​", The Stanford Encyclopedia of Philosophy (Summer 2018 Edition) 4 Loosely, “mind:body::program:computer” 5 Descartes, R.: 1911, ​The Philosophical Works of Descartes, Volume 1​. Translated by Elizabeth S. Haldane and G.R.T. Ross, Cambridge University Press, Cambridge, UK, page 116. 6 ​A. M. Turing (1950) ​Computing Machinery and Intelligence​. Mind 49: 433-460. 2
  • 3. deficiencies might be overcome by clever engineering, one could not send the creature to school without the other children making excessive fun of it.7 Another reference that makes this distinction prescient for the time period is that ​the term “robot” in that era (1946), was still occasionally used to describe computers. Some of the definitions in8 the original problem space led to confusion as to the goals of the endeavor, and researchers between (and sometimes within) disciplines started talking past each other. Carlos Herrera and Ricardo Sanz point this out when they describe the later, inevitable change in the emphasis of cognitive science: Human mental operations are interesting insofar as they result from some embodied interaction. Robots, on the other hand, have replaced computer and information processing systems as a proper object of empirical research.9 As for predictions, Turing is (compared to his successors in the next section) comparatively circumspect. He states his focus is not “whether all digital computers would do well in the [imitation] game nor whether the computers that are presently available would do well, but whether there are imaginable computers which would do well" and then he gives quite a long horizon (half a century). Part of what makes this timing noteworthy is that the fundamentals of computer science itself were at that time just getting formulated. Turing himself was performing foundational work in algorithms and programming theory. In the following passage, Turing is bounding his problem on a simulation: “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”10 Turing is openly admitting that capturing consciousness in a local agent may be unsolvable, but to him, that’s not the question. The question is whether we can make the machine ​act intelligently. His goal, according to Stevan Harnad, is to determine “not whether or not machines can think, but whether or not machines can do what thinkers like us can do -- and if so, how.”11 John Searle, holding Turing to the promise of creating machines that think in the same way that humans do, has been a prominent critic of the notion of AGI, with his Chinese Room Argument12 and the formulation of ​Strong AI​, referring to “machines capable of experiencing 7 ​A. M. Turing (1950) ​Computing Machinery and Intelligence​. Mind 49: 433-460. 8 ​See this ​advertisement for ENIAC in Modern Mechanics​ that seems to conflate the notion of computer and robot. Calling ENIAC “The Army Brain” anticipates the role that defense department funding would play in computer science in general, and AI in particular. 9 ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​, 2016. 10 ​A. M. Turing (1950) ​Computing Machinery and Intelligence​. Mind 49: 433-460. 11 ​Stevan Harnad, ​The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence​, 2008. 12 Cole, David, "​The Chinese Room Argument​", The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.) 3
  • 4. consciousness.” Stevan Harnad, on the other hand, believes AGI can theoretically13 appear--look just as real as intelligence--without human consciousness or understanding, and he points out that one of Turing’s goals, and in fact the appropriate goal for cognitive science, is “total internal microfunction.”14 Harnad also notes that “Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition.” For Harnad, reverse15 engineering cognition is a sufficiently difficult problem, even if functional equivalence is all that is called for.16 AGI From Conjecture to Goal A common pattern in the history of AGI is the underestimation of how difficult it is to simulate routine activity, including low level sensorimotor skills. This was famously expressed in both Moravec’s Paradox and in a formulation called the frame problem, which highlights the17 18 complexity of determining which “state changes” must take effect as a result of common decisions. With the frame problem, the main issue (detailed in the next section) is of one of relevance​: when updating new information in the middle of a project, how do you determine which attributes and states to take into consideration, and which to completely discard?19 How did Computer Science get itself into the game of justifying a symbolic-representational (codeable) view of common sense? It is traceable at least to the ​Proposal for the Dartmouth Summer Research Project on Artificial Intelligence​. Here we see the beginnings of the pitfalls20 of making explicit promises with vague language. Moving well beyond what Turing had proposed, the report begins by asserting that all aspects of learning and intelligence can be precisely described and simulated by a machine. After that assertion, the goals actually drop off quickly into the realm of, in some cases, fairly common problems in applications or systems programming. This confluence of goals began with an ambitious spirit and a very wide range of difficulty. ​Table 1 ​is constructed from the second and third sentences of the report, parsed out to provide notes on scope. Table 1: What They’ll Do on Their Summer Vacation 13 ​Wikipedia entry on ​Artificial General Intelligence​. 14 ​Stevan Harnad, ​Mind, Machines and Turing: The Indistinguishability of Indistinguishables​, 2001. 15 Stevan Harnad, ​Alan Turing and the Hard and Easy Problem of Cognition (Doing and Feeling)​: 2012 16 ​Stevan Harnad, ​Minds, machines, and Searle 2: What's right and wrong about the chinese room argument​ 1996 17 Hans Moravec, ​Mind Children​, Harvard University Press, 1988: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” 18 ​The Frame Problem​, Stanford Encyclopedia of Philosophy, 2016; here, the problem is defined as “the challenge of representing the effects of action in logic without having to represent explicitly a large number of intuitively obvious non-effects.” 19 McCarthy, J. & Hayes, P.J. (1969), “Some Philosophical Problems from the Standpoint of Artificial Intelligence”, in Machine Intelligence 4, ed. D.Michie and B.Meltzer, Edinburgh University Press, pp. 463–502 20 ​See ​http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf​, from August 31, 1955. 4
  • 5. Conjecture or Goal Notes on Difficulty 1 The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. This is ​full blown AGI​. It goes beyond the ​Turing Test​, calling for not only simulation but a ​complete description of intelligence. The bounds of the intelligence can be debated but this is a lofty promise. 2 An attempt will be made to find how to make machines use language…. Joseph Weizenbaum’s ​ELIZA ​is a21 noteworthy early attempt, and many current examples include Siri, Google Home, Amazon Echo, Google Duplex, etc. All imperfect and impervious to tone, yet positively progressive in natural language processing (NLP). 3 ….form abstractions and concepts…. Any language translator, compiler, or software-based controller would qualify. Hundreds of thousands of examples. 4 ….and improve themselves. Machine learning, deep learning, control of weights and biases in neural networks. The conjecture in Row 1 is an implicit goal. The boldness of it calls to mind Turing’s observation that, “The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any improved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.”22 But what a claim: ​all ​learning, ​every ​feature of intelligence. This has been, and perhaps always will be, the stuff of science fiction. With this conjecture, there is a strong implication that the intelligence we are talking about is full human performance. But this is where definitions get tricky: there is a division even within this high echelon--we can consider that the need is met even with a computer that has very limited “inputs” and is not mobile or dexterous (as you would think of a robot).23 Goals 2 and 3, while interesting computer science problems, are so much simpler that they cast an ironic light on John McCarthy’s (one of the authors of the proposal) complaint that “as soon 21 ​Wikipedia entry for ​ELIZA​, an early natural language processing program that loosely simulated a Rogerian therapist. 22 Andrew Hodges, ​Alan Turing: The Enigma​, Walker Books, 2000. 23 ​A fictional illustration of this goal being realized might be computer (“HAL”) in 2001: A Space Odyssey. On the other hand, far more ambitious would be to try and build a robot on the order of Data (Star Trek) or the replicants in Blade Runner. It is understood that these examples don’t paint a rosy picture of AGI. 5
  • 6. as it works, no one calls it AI anymore.” Similarly, Herrera and Sanz have noted that “huge24 successes that before were celebrated under the umbrella of AI now became computer science, and the challenge of creating intelligent artifacts was taken over by the field of robotics.25 Goal 4 is more difficult than 2 and 3, but nowhere near the difficulty of Goal 1. This direction has recently seen a lot of progress with neural networks and in commercial applications involving machine learning, as well as more “traditional” AI problems such as theorem proving, chess and Go mastery, war game simulations, and Jeopardy. Since it is more in the domain of what we think of when we think of AI, it is again illustrative to call forth an image from literature. The “mending apparatus” that automatically diagnoses and remediates the air, water, food delivery and communications systems with the world-controlling Machine in EM Forster’s ​The Machine Stops applies in Goal 4, and is a literary example of a sophisticated expert system: it is “the26 system charged with repairing defects that appear in the Machine.”27 Some of this early optimism was likely intended to encourage funding, largely from the Defense Advanced Research Projects Agency (DARPA), which in 1963 provided a $2.2 million grant to28 the "AI Group" at MIT and then provided approximately $3 million dollars per year until 1974, while making similar grants to programs at Carnegie-Mellon and Stanford. And there were29 some early successes that justified optimism. For instance, two attendees, Allen Newell and Herbert A Simon, had a reasoning program, the Logic Theorist , that was able to prove most of30 the theorems in Chapter 2 of ​Principia Mathematica​. ​ ​Achievements such as this, along with (probably) a desire to be able to maintain funding interest, led Marvin Minsky (a key AI researcher from the start) in 1968 to declare that, “Within a generation we will have intelligent computers like HAL in the film 2001.31 Much of the misunderstanding of the potentialities of AI have come from inverting the complexity difficulties of a “thinking” task in a very bounded domain versus a “without thinking” one in a much broader domain (the world!). Both Moravec’s Paradox and the frame problem illustrate this issue, and Donald Knuth aptly summed up the state of the art: “AI has by now succeeded in doing essentially everything that requires 'thinking’ but has failed to do most of what people and animals do 'without thinking’ — that, somehow, is much harder!”32 24 ​Bertrand Meyer, Communications of the ACM, ​Obituary for John McCarthy​, 2011. 25 ​ ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​, 2016. 26 ​E.M. Forster, ​The Machine Stops​, 1909. In addition to the machine learning branch of AI, this story anticipates email, videoconferencing and social networking. 27 ​Wikipedia entry on ​The Machine Stops​. 28 ​Rockwell Anyoha, ​The History of Artificial Intelligence​, Harvard Graduate School of Arts and Sciences: [T]he advocacy of leading researchers (namely the attendees of the DSRPAI) convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions.” 29 ​Crevier, Daniel (1993), ​AI: The Tumultuous Search for Artificial Intelligence​, New York, NY: BasicBooks, ISBN 0-465-02997-3, page 64-65.​ Via Wikipedia page on ​History of AI​. 30 Wikipedia: ​Logic Theorist​. 31 ​From a 1968 press release for the film. Some of the optimism stemmed from other successes in complex gaming, and this was before the intractability of coding the more “mundane” aspects of intelligence were confronted. 32 ​Kristen Brown, ​Why Artificial Intelligence Always Seems So Far Away​, SF Gate, 2015. 6
  • 7. The Frame Problem and Knowledge Representation The frame problem highlighted a major issue with solving Goal 1 in ​Table 1​: how to represent (with symbolic logic) all of the effects of any action (given continual new information) without having to question whether to represent (or to search and verify) all of the obvious “non-effects.” When actions are performed, some things change and some do not, and the problem becomes how we determine ​which things​ do/not change when a particular action is performed. The contents to be considered must include all the attribute-states (called fluents, because they change over time) that will be stored along with agents and actions. Thus, the range of knowledge that needs to be represented is encapsulated in the frame. This gave rise to formalisms known as the situation calculus, wherein situations are best described as a “snapshot” (point in time and space) of the state of the universe. These states change (are fluent) when the situation changes, and actions generate new situations based on old ones. The situation calculus has formulas for preconditions and successor states; collections of formulas and axioms describe worlds (domains).33 As an example, suppose that you color an object and then move it. The color of the object may subsequently be affected by the move (i.e., it is paled by the sun, or passes through a spray-paint machine), but there is no clear way in first order logic to represent this. One has to continue to add new attributes and states, and even new contexts (i.e., why exactly were you coloring and moving objects in the first place?). The set of conclusions we can draw from premises grows in a single direction (monotonically) as we add formulas. This technique--providing frame axioms for every fluent--could result (as Hubert Dreyfus pointed out) in a deep regress of frames. For example, actions andm n34 fluents require ( ) axioms. So the problem becomes one of how to how to avoid thesem * n axioms, as they are intractable in handling real-world scenarios; thus, some​ ​bounding is necessary for the “world” we are simulating. An early, and still used, method of dealing with this is nonmonotonic logic, wherein inferences that are made are open to revision. Effectively, all conclusions are temporary, and open to35 revision once there is information to the contrary. For example, we know (background knowledge) that birds fly and that Tweety is a bird. Thus, we can tentatively conclude that Tweety flies until (new information) we learn that he is an ostrich or a penguin.36 An Example, with Solutions and Objections In ​The Cult of Information​, Theodore Roszak illustrates the complexity of even simple human activity. He describes an effort to build a robot to perform a trivial household task. The question 33 Wikipedia entry on ​Situation Calculus 34 Hubert Dreyfus: ​Why Heideggerian AI Failed and How Fixing it Would Make it More Heideggerian​ ​(2007) 35 ​Strasser, Christian and Antonelli, G. Aldo, "​Non-monotonic Logic​", The Stanford Encyclopedia of Philosophy (Summer 2018 Edition) 36 Jarek Gryz, ​The Frame Problem in Artificial Intelligence and Philosophy​, 2013. 7
  • 8. posed is how to solve the problem of wanting to know what’s happening the world (gain access to “the news”): How can this be done? Let us say by reading the morning newspaper. Where is the morning paper? Somewhere on the front lawn. Then, obviously, you should go out, and bring it into the house. Correct. Unless it is raining. If it is raining, you don’t wish to get wet. How can this be avoided? By putting on a raincoat, then going out and picking up the paper.37 After explaining that was an actual research project, Roszak (a historian, cultural critic and novelist with no formal computer science training) describes the difficulty of programming the required nuances. Not much of a problem, it would seem. But for the computer, this tiny, slice-of-life adventure must be scripted into an enormously long and detailed program….and if the choice of using an umbrella is worked into the situation, or the decision to understand how ​wet one might be willing to get before using either umbrella or raincoat, the program becomes a jungle of conflicting contingencies.38 Roszak is suggesting that the way computers simulate human cognitive activity is not at all indicative of the way humans think: [T]he human mind may often work in the same quirky, informal, fuzzy way it functions when it decides (without seeming to think about it), that there is no point in fetching a sopping wet newspaper off the front lawn anyway, and so settles for turning on the radio to catch the news.39 The logic to formalize this sort of temporal updating can get very deep very fast, because relevance cannot be easily codified in logic. Difficulties emerge in multiple dimensions: examples include concurrent actions, actions with unpredictable effects, and continuous change. All proposed solutions to date reveal problems. Two examples are ​Global Workspace40 Theory ​and the ​Frame Default​.41 42 Global Workspace Theory (GWT) proposes an architecture with a model of combined serial and parallel information flow (​Figure 1​). 37 ​Cult of Information​, 1986, Random House, page 124 38 ​Ibid., page 125 39 ​Ibid., page 126 40 ​The Problem with Solutions to the Frame Problem​ (Leora Morgenstern 1996) 41 ​Murray Shanahan and Bernard Baars, ​Applying Global Workspace Theory to the Frame Problem​ (2004) 42 Raymond Reiter. A logic for default reasoning. ​Artificial Intelligence​, 13:81–132, 1980. 8
  • 9. Figure 1: Global Workspace Architecture The global workspace itself is a working memory store--a cognitive architecture for both conscious and unconscious processes. There are many parallel unconscious specialist processes that inform, and are informed by, the state of the global workspace. The contents of this global memory are what we are conscious of at a point in time (durations may vary). You can think of the global workspace as the executive function and the specialist processes as subsystems. To use the news-gathering robot example, a specialist process may be “check for rain awareness” or “determine whether radio is already broadcasting the news.” In any case, the specialist processes ​writ large ​(​Figure 2​) “disseminate their messages to all other processes in an effort to recruit more cohorts and thereby increase the likelihood of achieving their goals.”43 Figure 2: The Global Workspace Model of Information Flow The question, as with the original frame problem, is one of relevance. The specialists in the global workspace self-determine their relevance, and announce it to the global workspace. For instance, a specialist process that may be relevant (and belong in the global workspace) might 43 ​Wikipedia entry on ​Global Workspace Theory​. 9
  • 10. be that the paper is wrapped in plastic, and won’t be wet after all, assuming it is only drizzling (not raining hard) and the paper was not thrown directly into a puddle in the driveway. According to Jerry Fodor, the question of relevance, and the unlikelihood of being able to handle all of the potentially relevant representations, is insurmountable. Interestingly, his example builds another complexity into the very scenario posed by Roszak: “The totality of one’s epistemic commitments is vastly too large a space to have to search if all one’s trying to do is figure out whether, since there are clouds, it would be wise to carry an umbrella. Indeed, the totality of one’s epistemic commitments is vastly too large a space to have to search whatever it is that one is trying to figure out.”44 Murray Shanahan and Bernard Baars planned to “solve” this by noting efficiencies in search (order of n​2​ as opposed to n*log(n)), avoiding a combinatorial explosion. Their paper also45 claims that parallel processing will speed things up sufficiently. That, along with having self-aware specialists in a global workspace, is the essence of the approach. The Frame Default theory states that no properties change as a result of actions except the explicitly specified ones. Thus, Roszak’s news-gathering robot would not use data such as time of day, or direction the wind is blowing. The list of exceptions can grow in time, yet the no-change claim remains in place for all other properties (so the force of a gale may be an issue). This theory has led to some interesting nonmonotonic logic constructs and became a basis for defining syntax and semantics in the Prolog programming language.46 So there are logical methods and algorithms for handling the frame problem in the abstract. Still, as noted in a technical analysis (44 years later) by computer scientist Jarek Gryz: Interestingly, the discussion of the problem subsided within the AI community. It is hard to believe that everyone has accepted Shanahan’s claim that the problem is “more-or-less solved” and moved on. If that were the case, where are the thinking machines that this problem has prevented us from building? A more likely explanation is that after so many failed attempts researchers lost faith that the problem can be solved by logic.47 And an analysis by Katelyn Rivers concludes that while GWT may indeed be neurophysiologically plausible, it doesn’t quite scale to where it can solve the “relevance” question: The issue with the GWT solution as it currently stands is not that this kind of architecture is doomed never to perform a relevance computation, but that its founders failed to provide successful measures by which a system could recognize that something is relevant. In other words, Shanahan, Baars, and others have not provided successful, 44 Fodor, J.A. (2000). ​The Mind Doesn't Work That Way​. MIT Press. 45 ​Murray Shanahan and Bernard Baars, ​Applying Global Workspace Theory to the Frame Problem​ (2004) 46 ​Vladimir Lifschitz, ​The Dramatic True Story of the Frame Default​, 2015​. 47 ​Jarek Gryz: ​The Frame Problem in Artificial Intelligence and Philosophy​, 2013. 10
  • 11. non-question-begging standards by which a system can determine that an item of information is relevant with a reasonable level of foresight and success.48 To Hubert Dreyfus, on the other hand, the question is whether the frame problem ​needs ​be solved; it appears to him that specifying which propositions are relevant to varied contexts is hopeless: “if each context can be recognized only in terms of features selected as relevant and interpreted in a broader context, the AI worker is faced with a regress of contexts.”49 More broadly, with regard to the physical symbol system hypothesis, AI researchers (according to Dreyfus): “had taken over Hobbes’s claim that reasoning was calculating, Descartes’ mental representations, Leibniz’s idea of ‘universal characteristic’ (a set of primitives in which all knowledge could be expressed), Kant’s claim that concepts were rules, Frege’s formalization of such rules, and Wittgenstein’s postulation of logical atoms in his ​Tractatus​”.50 In short, Dreyfus believed that the proper way forward was not through rationalist philosophy but instead by attempting to express (in software) the existential and phenomenological perspectives of Martin Heidegger and Maurice Merleau-Ponty. These arguments were generally met with resistance in the AI community: it seemed anathema to the whole process to not represent a model of the environment in a robotic implementation. Prominent AI researcher Edward Feigenbaum, a major force in the development of expert systems, complained, "What does [Dreyfus] offer us? Phenomenology! That ball of fluff. That cotton candy!"51 However, there was an early (partial) success from Rodney A. Brooks in developing ant-like robots called “animats”; these mobile robots mainly referred to their own sensors to govern their motion--all without knowledge representations. This work, according to Herrera and Sanz,52 quickly changed the “mood in the AI community” and the organization of the field: In a short span of time, the once marginal and radical critique of the establishment became commonplace….embodiment and situatedness became buzzwords, and although Heidegger’s influence is still taken with caution, it is undoubtedly among the sources of new AI and cognitive science.53 And there was, according to Dreyfus, some Heideggerian expression in a game developed by Phil Agre and David Chapman called ​Pengi​, in which the “world” for the game’s “agents” does not consist of objects but of ​possibilities for action​ (my emphasis). This is more in line with the intentionality (​the quality of mental states that consists in their being directed toward some  object or state of affairs​) described by Heidegger in ​Being and Time​, with representations that designate “not a particular object in the world, but rather a role that an object might play in a 48 Katelyn Rivers, ​Can Global Workspace Theory Solve the Frame Problem​, 2018 49 ​Hubert Dreyfus, ​What Computers Still Can't Do​ 1992, p. 189 50 ​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian 51 ​McCorduck, Pamela​ (2004), ​Machines Who Think​ (2nd ed.), Natick, MA: A. K. Peters, Ltd., 1979, page 204 52 ​Rodney A. Brooks. “​Intelligence without Representation​,” Mind Design, John Haugeland, Ed., The MIT Press, 1988, 416. There was in fact some knowledge representation in the animats that Dreyfus believed impacted their ability to learn. 53 ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​, 2016, page 499. 11
  • 12. certain time-extended pattern of interaction between an agent and its environment.” Thus, the54 important aspect is the role (significance), not the object (represented knowledge). Still, Agre’s implementation, according to Dreyfus, is a “virtual agent in a virtual micro-world” in which no actual learning takes place: this limited micro-world is closed-off and represented as the total storehouse of possible actions. It is still missing Merleau-Ponty’s feedback loop between the agent and the world: “Cognitive life, the life of desire or perceptual life--is subtended by an ‘intentional arc’ which projects around us our past, our future, and our human setting.”55 Accordingly, the question remains as to whether some amount knowledge representation is needed in order to create a general intelligence, or whether: [A] Heideggerian Cognitive Science would require working out an ontology, phenomenology, and brain model, that denies a basic role to any sort of representation--even action-oriented ones--and defends a dynamical model like Merleau-Ponty’s and van Gelder’s that gives a primordial place to equilibrium and in general to rich coupling.56 Other authorities on Heidegger--Michael Wheeler, for instance --would say that combining57 some knowledge representation with the embodied and situated (in the world) coping of intentionality, is required if we are going to use digital computers to simulate intelligence. It seems to him counterintuitive to insist that knowledge not be represented at all, or represented only in the world, outside the body and brain. Even if you grant that the world itself is the ideal (or only) model, and that there is (obviously) too much knowledge to effectively represent, it seems overly proscriptive, according to Nancy Salay to not allow a knowledge base into the workings of a computer program: Dreyfus is too hasty in his condemnation of all representational views; the argument he provides licenses only a rejection of disembodied models of cognition. In casting his net too broadly, Dreyfus circumscribes the cognitive playing field so closely that one is left wondering how his Heideggerian alternative could ever provide a foundation explanatorily robust enough for a theory of cognition.”58 So in higher level functioning, far away from lower motor functions and common sense, and tied to a micro-world to address a narrow problem, knowledge representation (and placing limits on the defined world), may be possible. But is this an ideal solution or a compromise? Is it plausible that we only store meanings and values? 54 ​Philip E. Agre, ​Computation and Human Experience​, Cambridge University Press New York City, 1997, page 243. 55 ​Maurice Merleau-Ponty, ​Phenomenology of Perception​, trans C Smith, Routledge and Kegan Paul, 1962, page 136. 56 ​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian 57 ​See Michael Wheeler, ​Reconstructing the Cognitive World​, MIT Press, 2005. 58 ​Nancy Salay, ​Why Dreyfus’ Frame Argument Cannot Justify Anti Representational AI​, 2009. 12
  • 13. In a sense, it depends on what we are actually trying to build. ​As Herrera and Sanz have noted: The road to autonomous intelligent machines is proving more difficult and challenging than it was first expected--not only because of a lack of instruments and methods, but because it is from the beginning unclear what is meant to be achieved. 59 In any case, to evaluate this question, we need a rendering (even a model) in cognitive science of how snapshots of coping situations or intentional arcs may be created. The question is whether this model, with its representations, will be applicable only in the micro-world or to an AGI. Phenomenological AI Before exploring whether a coded AGI solution with no knowledge representation is even possible, or what this means for AGI in general, it’s useful to take a look at how a cognitive scientist (Phil Turner) interprets (via Merleau-Ponty) the intentional relationships between beings and objects in their world (​Figure 3​). Figure 3: Rendering of Intentional Arc60 Turner notes that each type of intentionality in the individual (affective/feeling, social, corporeal, and cognitive/perceptual), there is a corresponding ​affordance​, or offering from the world. In all of these mappings, there is, for Turner: “the knowledge of how to act in a way that ‘coheres’ 59 ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​, 2016, page 500. 60 Phil Turner, ​The Intentional Basis of Presence​, University of Edinburgh, 2007, page 4 13
  • 14. with one’s environment bringing body and world together.” But this is more than just being61 physically present in the world; it is continuous and active: According to Merleau-Ponty, higher animals and human beings are always trying to get a maximal grip on their situation. When we are looking at something, we tend, without thinking about it, to find the best distance for taking in both the thing as a whole and its different parts. When grasping something, we tend to grab it in such a way as to get the best grip on it.62 Using the intentional arc as a basis, Dreyfus identifies a “Merleau-Pontian Neurodynamic” in63 the work of the neuroscientist Walter J. Freeman, who developed a model of learning in rabbits based on mapping activity in the rabbit’s brain to stimuli in the environment: [T]he brain moves beyond the mere extraction of features...it combines sensory messages with past experience...to identify both the stimulus and its particular meaning to the individual.64 Freeman concentrated on the rabbit’s olfactory system, noting responses in the brain when, for instance, the scent of a carrot is introduced. The scent produces cell assemblies that are thereafter geared to respond to the scent of this food. Similarly, cell assemblies may form for other reasons, depending on what is introduced. For example, thirst (water), fear (fox), or sex (female rabbit) will trigger the firing of the corresponding cell assemblies in earlier introductions. The goals are to satisfy the needs and65 have the assemblies revert to minimal energy states known as attractors. The brain states that tend towards these attractors are called ​basins of attraction​ and the set of these basins that the animal has learned is called an ​attractor landscape​.66 The key here, and what ties this effort (according to both Dreyfus and Freeman) to Merleau-Ponty, is the difference between representation and significance. And in the perception/action loop that takes place, “[T]he memory of significance is in the repertoire of attractors as classifications of possible responses--the attractors themselves being the product of past experience.”67 The attractors selected in the face of new events illustrate how Freeman’s model explains the intentional arc in terms of Temporal Difference Reinforcement Learning (TDRL), in which the learner continually updates assumptions based on new information; according to Dreyfus: 61 ​Ibid., page 2 62 Ibid., page 2 (emphasis mine) 63 ​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian​, page 18 64 ​Walter J. Freeman, ​The Physiology of Perception​, ​Scientific American​, Vol. 242, February 1991, p. 78 65 ​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian​, page 20 66 ​Ibid., page 21 67 ​Ibid., page 25 14
  • 15. Our previous coping mechanisms feed back to determine what action the current situation solicits, while the TDRL model keeps the animal moving forward towards a sense of minimal tension, that is, a least rate of change in expected reward.68 Accordingly, Freeman has created (​Figure 4​), a flow diagram of the main components of the intentional arc in the brain. Figure 4: Dynamic Architecture of Limbic System69 At a high level, this shows action-perception cycles resulting in ​circular causality ​(in the Merleau-Pontian sense ), for goal-oriented behavior arising from the interactions between70 motor and sensory systems. The main loops in this architecture that map to the diagram in Figure 3​ are the motor loop (i.e., the systems engage with, and respond to, the world) and the proprioceptive loop (i.e., the systems engage with, and respond to, the body): ● The motor loop (brain to/from world) maps to the affective and social intentionalities and affordances ● The proprioceptive loop (brain to/from body) maps to the corporeal and cognitive/perceptual intentionalities and affordances The cortex transmits and receives signals (sometimes through interactions with the cognitive map in the hippocampus) in circular loops to the sensory and motor systems: ● A reafference loop (sensory stimulation caused by body movement) to and from the sensory systems ● A control loop (moving intention from the cortex to the world) to and from the motor systems In studying, and then modeling, this circular causality, Freeman uses only significances, and no knowledge representations. The begged question, for those who want to attempt to ​implement 68 Ibid., page 27 69 ​Walter J Freeman, ​Consciousness, Intentionality, and Causality, ​Journal of Consciousness Studies​, ​1999, page 12. 70 Ibid., extensive discussion throughout. 15
  • 16. phenomenological AI, is this: if knowledge cannot be represented, and logic cannot be applied to this knowledge, can digital computers in fact be used to create this AI? It appears that the answer is yes, because the global state transitions between the attractor basins can have symbols assigned to them: At macroscopic levels each perceptual pattern of neuroactivity is discrete, because it is marked by state transitions when it is formed and ended. ... I conclude that brains don't use numbers as symbols, but they do use discrete events in time and space, so we can represent them …by numbers in order to model brain states with digital computers.71 Ever the anti-representationalist, Dreyfus quickly points out that, “[T]he model is not an intentional being, only a description of such.” Furthermore, the approach amounts to a tacit72 admission that we are indeed not at all close to being able to imitate human intelligence, and that we have been overlooking the requirement to build the right ​body​ (it must be “like us”)​ ​to comprise the necessary sensors and limbic subsystems to enable this to happen: Thus, to program Heideggerian AI, we would not only need a model of the brain functioning underlying coupled coping such as Freeman’s but we would also need—and here’s the rub—a model of our particular way of being embedded and embodied such that what we experience is significant for us in the particular way that it is. That is, we would have to include in our program a model of a body very much like ours with our needs, desires, pleasures, pains, ways of moving, cultural background, etc.73 Hence the requirement for a lived-in body. This leaves the question open as to whether there are some cases, as Nancy Salay and Michael Wheeler would have it, where knowledge representation may be of use. It is possible to imagine such cases if we take for granted that the created “world” will be more truncated, limited, and specialized. Even the news gathering robot example might qualify, provided its tasks are limited to selected “fetch” activities such as the ones mentioned. But it must be understood that there is a design trade-off: a great deal of richness and true intelligence would be left out of the product, which is effectively just a micro-world. Finally, the phenomenological perspective on AI, for Herrera and Sanz, has a higher mission, beyond the implementation issues: It is not the role of Heideggerian AI to solve the puzzle for AI development, but to unveil the being of robots, now shadowed by the idea of an “artificial human.” The more we uncritically identify robots with animals and humans, the further we are from becoming aware of what robots really are.74 Thus, the truly Heideggerian endeavor is not to simply assist in the implementation of AGI-capable entities, but to explore the ontology of the entities being built. 71 ​Walter J Freeman, ​Societies of Brains​, 1996, page 105. 72 ​Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian​, page 28 73 ​Why Heideggerian AI failed and how Fixing it would Require Making it More Heideggerian​, page 30. 74 ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​, 2016, page 505. 16
  • 17. Enactivism: Adding World to Brain and Body Further developments in the biology of consciousness have taken the phenomenological perspective a step further, and indicated that consciousness arises from interactions between the body and world: the world is not only part of an intentional arc, it is part of consciousness itself. Alva Noë, for instance, argues that “[W]e must look not inward, into the recesses of our insides; rather, we need to look to the ways in which each of us, as a whole animal, carries on the process of living in and with and in response to the world around us.”75 The scientific basis for this is strong, building on observations about perception being an active activity as opposed to a passive absorption of the environment: “Perception is not a process in the brain, but a kind of skillful activity of the body as a whole. We enact our perceptual experience.” Thus, as noted by Jascha Hoffman, “[W]e do not passively absorb data from our76 eyes and ears, as a camera or a microphone would, but rather scan and probe our environments so we can act on them.”77 It is in this fuller context of the entire body and the world, and this active view of perception, that Noë builds on applied phenomenology as discussed in the previous section: My central claim in this book is that to understand consciousness—the fact that we think and feel and that a world shows up for us—we need to look at a larger system of which the brain is only one element. Consciousness is not something the brain achieves on its own. Consciousness requires the joint operation of brain, body, and world. Indeed, consciousness is an achievement of the whole animal in its environmental context.78 Regarding the implications for AGI, Noë doesn’t rule out the possibility, but postulates that “[T]he only route to artificial consciousness is through artificial life.” Thus, consciousness does79 arise in part from the brain and the body, but “we need to keep the whole organism in its natural environmental setting in focus. Neuroscience matters, as does chemistry and physics. But from these lower-level or internal perspectives, our subject matter loses resolution for us.”80 The emphasis on full context and action, on empirical observation of the agent, its body, and the environment, amounts to psychologists such as Howard Rachlin as a revival of behaviorism: “Dissatisfaction with neural representations as explanations of mental events, as well as the divorce of the mind (in such identity theories) from its biological function, has led Noë to expand his concept of the mind outward from the brain to overt behavior, including social behavior.”81 75 ​Alva Noë, Out of Our Heads: ​Why You Are Not You Brain and Other Lessons from the Biology Consciousness,​ Hill and Wang; 1 edition (February 2, 2010), page 7. 76 ​Alva Noë, ​Action in Perception 77 ​Jascha Hoffman, Scientific American Mind, ​Review of Out of Our Heads​, April 2009 78 ​Alva Noë, Out of Our Heads, page 10 79 ​Ibid., (Kindle Locations 715-719) 80 ​Ibid., (Kindle Locations 715-719) 81 ​Howard Rachlin, ​Is the Mind Your Brain?​, Journal of the Experimental Analysis of Behavior, July 2012. 17
  • 18. Still, according to Tom Froese, an empirical approach to phenomenology and enactivism can lead to the next developments in AI: “[E]nactivism can have a strong influence on AI because of its biologically grounded account of autonomous agency and sense-making. The development of such enactive AI, while challenging to current methodologies, has the potential to address some of the problems currently in the way of significant progress in embodied-embedded AI.”82 The potential success of enactivism in AI research, therefore, depends on the development of what Froese calls a “phenomenological pragmatics.”83 Classifying Fictional Robots It’s been shown that many of the questions that arise in the viability of AI systems (particularly AGI) stem from ambiguous definitions and goals. In an attempt to sort out these varied goals, Selmer Bringsjord and Naveen Govindarajulu draw from the influential textbook ​Artificial Intelligence: A Modern Approach to provide a neat categorization (​Table 2​).84 85 To this table, I’ve added, in bold, possible examples that may be familiar from popular culture. Along these lines, Herrera and Sanz note that “[S]cience fiction is a strong source of our pre-ontological understanding of robots.”86 Table 2: Reasoning or Acting as Human or Abstract Rational Being Human-Based Ideal Rationality Reasoning-Based Systems that think like humans. ​(Commander Data) Systems that think rationally. (HAL 9000) Behavior-Based Systems that act like humans. (Stepford Wife Joanna) Systems that act rationally. (Mending Apparatus) Where would the questions of the frame problem, knowledge representation, and other implementation considerations, fall in various examples from science fiction? Their appearance in our culture can help sort out the discussion, and there’s value in being more exact about what we’re talking about in terms of goals and plausibility. In ​Table 1​, for instance, the range of goals is so vast and public that misunderstanding was almost certain to result. For example, the frame problem clearly shows itself more in robotics than in generalized intelligence from an inert computer. This distinction has only recently been fully teased out, and 82 ​Tom Froese, ​On the role of AI in the ongoing paradigm shift within the cognitive sciences​, M. Lungarella, F. Iida, J. Bongard & R. Pfeifer (eds.), page 12. 83 ​Ibid., page 2. 84 ​Russell, S. & Norvig, P., 2009, ​Artificial Intelligence: A Modern Approach​ 3rd edition, Saddle River, NJ: Prentice Hall 85 Bringsjord, Selmer and Govindarajulu, Naveen Sundar, "​Artificial Intelligence​", The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.) 86 ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​, 2016, page 506. 18
  • 19. that lack of clarity has undoubtedly contributed to some of the misunderstandings as to where the successes and failures have been. Still, it does seem like the “framers” of AI did not (until McCarthy’s 1969 paper) explicitly specify a robot. And, as noted earlier, Turing had astutely87 allowed typewritten pages as one interface medium (which substantially limits the degree of difficulty). Thus, for all the Dreyfusian insistence on embodiment and embeddedness, it is useful to look at our murderous friend HAL from ​2001: A Space Odyssey​, a ubiquitous icon of fictional AI. Whether HAL is badly programmed, just plain malicious, or correctly programmed by an agent with evil motive can be bracketed. The question at hand is whether his powers can be bounded in something greater than the micro worlds that sprung out of early attempts to solve the frame problem with logic, neural networks, global workspaces, or well-chosen defaults. Consider embodiment. What senses and capabilities does HAL have? He certainly has sight, speech, “touch” (he knows when he is being disabled), and it doesn’t seem beyond the pale that he might have taste or smell. But fundamentally he appears to be a massive mainframe requiring an air-conditioned room to function at all. Consider embeddedness. HAL’s (micro-)world is the spaceship itself, greatly simplifying the environmental concerns that come with even the most basic household robot, which may (as discussed in Roszak’s example above) have to negotiate the house, the yard, and the weather in performing its duties. HAL is actually fairly narrowly specified and scoped. This is the spirit in which Wheeler and Salay are correct to say that proscribing representation is an overreach that limits shorter-term practical AI applications. HAL’s main responsibility of commanding the Discovery mission to Jupiter is more akin to the higher function tasks (i.e., Chess, Jeopardy, and Go) that have seen great success in AI research. His job function is cerebral and squarely in the domain of higher-functioning tasks that have been solved in AI. On the other hand, if we turn to Lieutenant Commander Data of Starships Enterprise-D and -E,88 we have a vastly more daunting set of specifications. Here, the Federation has gone to the great trouble of creating exactly (or nearly enough) the full embodiment that entails human-caliber AI of the sort that may in reality be centuries away. At the other end of the spectrum (and as referenced above), it’s worthwhile to consider E.M. Forster’s mending apparatus from ​The Machine Stops​. This subsystem within “The Machine” (the underground honeycomb of condos in which all civilization lives) automatically diagnoses and remediates any issues that arise: lack of air, water or food, technological subsystems (videoconferencing), etc. The mending apparatus is ideal-based, not human, but it dispatches its duties in a rational manner. Finally, in the Human/Behavior block of Column 1 in ​Table 2​, Joanna Eberhart from ​The Stepford Wives notices and fights the trend of her female neighbors becoming docile to the89 point of zombification. As she winds up being transformed into a robotic version of herself, and becomes another submissive Stepford Wife, she loses her ability to think independently. 87 ​Some Philosophical Problems from the Standpoint of Artificial Intelligence 88 ​See Wikipedia entry on ​Data (Star Trek) 89 ​See Wikipedia entry on ​The Stepford Wives​. 19
  • 20. Hewing close to human-like AGI, which has been the top goal and promise of AI, it’s useful first to ask: How much embodiment is needed to get into the “AGI Family”? As examples, consider the “Ideals Based” column in ​Table 2​. HAL exhibits (some) general intelligence, whereas the mending apparatus does not. HAL also exhibits some limited embodiment, and the mending apparatus again does not, but is still a heightened intelligence with ready-to-hand techniques for remediating the air, water, communication systems, and transportation tunnels in Forster’s dystopia. Furthermore, given a domain of responsibility that is very high in abstract executive function (which, as noted by Moravec, Minsky and Knuth, is much easier to implement than common sense), who is to say that a suitable “mini-world” (the Discovery) or a “massively connected world model” (the Machine) does not qualify as a sufficiently large AI domain to be worthy of philosophical consideration and criticism? On the other hand, the “Human-based” column in Table 2 ​requires much deeper embodiment, almost the full creation of a human body--certainly well beyond anything we’ve been able to even begin achieving. Conclusion There are three related conclusions supported by this investigation: ● The dichotomy of “strong versus weak” AI misrepresents the AI landscape and sometimes hinders productive discussion ● The realities of the frame problem provide insights into cognitive science, computer science and philosophy ● The field of (what we now call “weak”) AI needs fresh attention from the (general) philosophy of technology These are covered in the following sections. “Strong versus Weak” Misrepresents the AI Landscape The current identity crisis in AI, as to what is being built and why, and whether any of it constitutes human general intelligence and performance, owes a lot to loose definitions, breathtaking claims, and animus across computer science and philosophy. It is easy to see both why philosophers would be startled by being told their tradition is null and void because of what’s going on in the AI Lab, and why computer science researchers might90 be resentful of criticism that doesn’t appear helpful to their concerns. ​Aggressive language went in both directions, as philosophers asserted that the quest for the Holy Grail of AGI would never, for a variety of reasons, be realized: outdated philosophies (Dreyfus) or misunderstanding the meaning of mind (Searle). 90 ​My reference here is to the MIT Lab in the 1960’s and the earliest discussions between Dreyfus and Minsky et al. I think there has been a sort of “two cultures” (a la Charles Percy Snow) debate going on over the decades, and some of the heat from it has to do with unclear terms, goals, and objectives. 20
  • 21. Pamela McCorduck summed up the genesis of the discord when she observed that “[A] great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.“ On the other, McCorduck91 also notes (of Dreyfus) that, “His derisiveness has been so provoking that he has estranged anyone he might have enlightened. And that's a pity."92 As for Searle’s dichotomy of Strong versus Weak AI, the way it is received in the cognitive and computer science world (where the work is vibrant and the practitioners are collecting what I call the “many small victories”) is aptly summed up by Harnad: “[I]t is not at all clear that Searle's ‘Strong AI/’Weak AI’ distinction captures all the possibilities, or is even representative of the views of most cognitive scientists. Much of AI is in any case concerned with making machines do intelligent things rather than with modeling the mind.” This view, according to both Selmer93 Bringsjord and Peter Norvig, is shared by all but the very few AI researchers.94 95 Thus, philosophical arguments around AGI (Strong AI), while useful to providing insights into the problem of modeling knowledge, often amount to definitional games around words like ​thinking​, mind​, and ​consciousness ​of persons and machines. This is important inasmuch as it leads to a scientific treatment of philosophy of mind (discussed below). But over-rotating on Strong AI misses not only the current landscape but its urgent implications. And it is also misguided from a historical perspective, as Turing himself proposed only a weak, functionalist AI, and many of the promises (all but the loftiest) of the Dartmouth Summer Project have in fact been realized. The range of possible “intelligences” (or simulations) that are set out by both the Dartmouth Summer Project (​Table 1​) and Bringsjord and ​Govindarajulu (​Table 2​) illustrate how easily our understanding can get lost. ​In ​Table 2​, for instance, McCarthy et al leapt wholeheartedly into the Human/Reasoning quadrant (the only "Strong AI" box) and that’s the quadrant that carried the public’s imagination and that most science fiction explores. Later, when he identified the frame96 problem in 1969, McCarthy didn’t inspire dialog with his assertion that: “Since the philosophers have not really come to an agreement in 2500 years it might seem that artificial intelligence is in a rather hopeless state if it is to depend on getting concrete enough information out of philosophy to write computer programs.”97 Still, the forbidding moniker of unattainable Strong AI does not need to stay attached to other worthy (if lesser) notions of general intelligence (learning ability, knowledge categorization, the 91 ​McCorduck, Pamela (2004), ​Machines Who Think (2nd ed.)​, Natick, MA: A. K. Peters, Ltd., page 212 92 ​Ibid, page 236. 93 ​Harnad, S. (1989) ​Minds, Machines and Searle.​ Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25, page 15 94 Selmer Bringsjord, ​What Robots Can and Can’t Be​, Kluwer 1992. Bringsjord argues, in a position he holds to this day, that “(1) AI will continue to produce machines with the capacity to pass stronger and stronger versions of the Turing Test but that (2) the "Person Building Project" (the attempt by AI and Cognitive Science to build a machine which is a person) will inevitably fail.” 95 ​TechTalks: ​Why We Should Focus on Weak AI for the Moment​, April 23. 2018; Supporting this position, Norvig adds: “We want humans and machines to partner and do something that they cannot do on their own.” 96 ​Bringsjord, Selmer and Govindarajulu, Naveen Sundar, "​Artificial Intelligence​", The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), July 2018 97 ​McCarthy, John, ​Some Philosophical Problems from the Standpoint of Artificial Intelligence​, 1969. 21
  • 22. capability to reason) and should not overshadow the many interesting (yet possibly dangerous!) research. ​This imperious approach also (per Harnad) diluted Searle's contributions, which could, with slight modifications, have been directed straight towards computationalism (which sorely deserved a broadside critique), instead of attacking a strawman (Strong AI) that only the very few practitioners were willing to defend.98 As if to make the point of how unnecessary and unhelpful this dichotomy became, Dreyfus, as an applied philosopher, was both the original gadfly of AI and eventually came into close consultation with the engineers and cognitive scientists who were willing to discuss the modern understandings of mind in philosophy (such as phenomenology) and relate them to scientific understandings (in biology) and implications for implementation (in cognitive science and AI). While the resultant cross-disciplinary studies were undertaken regardless of whether they were in the neighborhood of AGI or a simulation, progress was made with useful contributions and earnest endeavors to get closer to constructs of perception such as the intentional arc. The Frame Problem Yields Vital Lessons Across Multiple Disciplines We all owe a lot to the frame problem. At its inception, it was very technical and narrow, confined to the context of taking specific actions in specific situations, but it grew in many99 directions. As Selmer Bringsjord has noted, it is variously now understood (with degrees of truth in each case) as a relevance problem, a statement on the futility of symbolic reasoning in holistic actions or thought processes, and the conditions under which a belief should be updated after an action has been undertaken.100 Essentially, ​nothing highlights the differences between our day-to-day existence and AI’s attempts to reason about it like the frame problem. First and foremost, the problem highlights a disparity between a philosophical and a technological assessment of what it means to be “solved.” In computer science, this largely amounts to questions of tractability (in the form of compute time). GWT, for instance, utilized event-driven logic flows, which made sense for handling the parallel unconscious specialists (​Figure 2​ above) but of course never answered the relevance question. And default logic is an excellent way to minimize the number of frame axioms (by ruling all but the few likely correlative changes of any action) in many feasible micro-worlds, and was adopted in Prolog, a very useful language for machine-learning applications. But of course it doesn't handle even rudimentary hypotheticals such as the Yale Shooting Problem, let alone101 myriad less-well-defined real-world situations. However, it’s worth noting that nonmonotonic 98 ​Stevan Harnad, ​Mind, Machines and Searle 2: What’s right and wrong about the chinese room argument​, Oxford University Press, 2006: Harnad here reformulates Searle’s arguments so that they are the recognizable tenets of computrationalism, which (as opposed to Strong AI, which “no one” believed in), had adherents, and needed refutation. 99 ​McCarthy, John, ​Some Philosophical Problems from the Standpoint of Artificial Intelligence​, 1969. 100 ​Selmer Bringsjord and Konstantine Arkoudas, ​The Philosophical Foundations of Artificial Intelligence​, 2007 101 ​Shanahan, Murray, "​The Frame Problem​", The Stanford Encyclopedia of Philosophy (Spring 2016 Edition) 22
  • 23. logic and the situation calculus, both borne of the frame problem, have led to great strides in the efficacy of expert systems, especially when combined with deep learning algorithms.102 At first blush, the spectacular misunderstandings that emerged from studies in the frame problem not only led us nowhere in the quest for AGI, but threatened to forge forever a misconception as to what it means to be to be able to “solve” intractable problems in the simplest of human endeavors. Humans do not face the frame problem because they do not go about their daily lives by representing the world in logical formulas.103 But conversely, and perhaps counterintuitively, this failure provides positive news for the philosophy of personhood, as we question “at what point in one’s development from a fertilized egg there comes to be a person, or what it would take for a chimpanzee or a Martian or an electronic computer to be a person, if they could ever be.” Thus, the frame problem, which104 appears “solvable” in theory but not in practice, has amply illustrated that the states of105 day-to-say human being can’t be successfully chased down by an endless succession of frame axioms. Within AI research itself, the frame problem led AI research from a classical approach into a connectionist and dynamical systems approach, which in itself was a huge departure from the problems being worked on previously. From there, As ​Table 3 ​(sourced from Selmer Bringsjord and Konstantine Arkoudas) shows, the problem spaces leapt from purely intellectual tasks into action-oriented ones, from static to dynamic requirements, and from an atomistic to a holistic endeavor. Table 3: From Classical to Situated AI106 Classical Embodied Representational Non-representational Individualistic Social Abstract Concrete Context-independent Context-dependent Static Dynamic Atomistic Holistic Computer-inspired Biology-inspired Thought-oriented Action-oriented 102 ​See, for instance, D.M. Gabay, ​Theoretical Foundations for Nonmonotonic Reasoning in Expert Systems​, Logics and Models of Concurrent Systems, 1985. 103 ​Selmer Bringsjord and Konstantine Arkoudas, ​The Philosophical Foundations of Artificial Intelligence​, 2007 104 ​Olson, Eric T., "​Personal Identity​", The Stanford Encyclopedia of Philosophy (Summer 2017 Edition) 105 And of course, this calls into question the integrity of the theory! 106 ​Selmer Bringsjord and Konstantine Arkoudas , ​The Philosophical Foundations of Artificial Intelligence​, 2007 23
  • 24. As such, the frame problem has illustrated just how different are the purely intellectual and the action-oriented problem spaces. Both are still being pursued in parallel, as evidenced by Watson and other high achievements in natural language processing being contrasted with the agile but infantile robots being built by Boston Dynamics and others for manual labor or107 military purposes. Obviously, neither effort touches on what it means to be human, but both represent high achievement that should be both respected and scrutinized. Finally, the questions of knowledge representation are currently being debated not only in cognitive science and AI, but in biology and the philosophy of mind. Disputes on representations are to be found not only amongst Heideggerians (such as Wheeler and Salay) but by enactivist philosophers of mind such as Pierre Steiner and Anthony Chemero, who are also empirical108 109 cognitive scientists studying the limitations of enactive anti-representationalism. This is probably more positive than not, as ​Herrera and Sanz have noted: “Science crystallizes in a body of knowledge, but it cannot be reduced to a set of true propositions about the world.”110 There Are Crucial Philosophical Concerns Raised by So-Called Weak AI Both classical AI and subsequent embodied and situated AI deserve further analysis, even as the plethora of “lesser” AI applications ignore the elusive goal of synthetic consciousness. As for that Holy Grail, the more critical task at hand, far more urgent than the “mind” we may create in a few centuries, is to observe with discerning caution even the “weak” AI robots that could soon be as “smart” as Watson and as dangerous as the products leaping out of top defense contractors. In fact, while there is an allure to the notion of creating consciousness, the AI pursuit is mainly tied to industrial profitability and productivity, or quests for weaponry in arms races, or111 112 endeavors to influence populations via AI-fueled social networking. In the bloom of these113 developments, our first thought should not be whether the successful robot validates Strong AI, but what it means to us that it was brought into being at all, not to mention into ​our ​being. In part, we are called, according to Herrera and Sanz, to explore the ontological problem of “not how robots are built, but what they are.” Weak AI may not ultimately be as dangerous as114 Strong AI (this is debatable ), but there is plenty to question there.115 107 ​Boston Dynamics​: I’m not saying that these robots couldn’t (in a Turing-machine universe) be fortified to match the intellectual capabilities of Watson, but that is not the focus of either IBM or Boston Dynamics. 108 ​Pierre Steiner, ​Enacting anti-representationalism​, 2014. 109 Anthony Chemero, ​Anti-representationalism and the dynamical stance​, Philosophy of Science, December 2000 110 ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​, 2016, page 500. 111 ​Louis Columbus, ​Artificial Intelligence will Enable 38% Profitability Gains by 2035​, Forbes, June 22, 2017. 112 Military.com: ​China Leaving US Behind​ July 2018 113 ​Simon McCarthy Jones, ​Are Social Networking Sites Controlling You Mind?​, Scientific American, December 2017. 114 ​C Herrera and R Sanz, ​Heideggerian AI and the Being of Robots​, ​Fundamental Issues of Artificial Intelligence​, 2016, page 500. 115 ​Gil Press, Forbes, ​Artificial Intelligence Pioneers: Peter Norvig, Google​, December 2016. 24
  • 25. Many technologies cause more problems than they solve, but AI is promising to achieve this at very high speeds. For example, the ability to analyze live closed-caption videos with machine learning algorithms is blithely ushered in as a boon to for public safety, emergency response capabilities, and other scientific and industrial applications. The risks and threats to privacy116 and social justice are important questions to take up at any stage in this process. The problems that those in the AI community have been claiming to be on the verge of solving appear to be a long way into the future: first (with phenomenology) a body is required, then (with enactivism) a world is added on, all against the backdrop of indecision as to whether representations are desirable. It appears that we are quite a long distance from AGI, but we can still investigate (for instance) how best to deal with robots, or deep learning systems, that resemble or surpass (even if they do not duplicate) our capabilities in many areas and action. If the social i​mplications of AI are rising rapidly, and they are doing so because intelligent robots will soon become a normal part of our day to day existence, it doesn’t much matter that they won’t be reflecting AGI for any time in foreseeable future. , 116 ​James Vincent, ​Artificial Intelligence is going to Supercharge Surveillance​, The Verge, January 2018 25