This document summarizes Rodney Brooks' embodied AI approach from the 1980s. It discusses how Brooks argued that sensorimotor skills are essential for intelligent behavior, in contrast to the dominant symbolic AI approach. Brooks proposed a bottom-up approach known as the subsumption architecture, where autonomous robots are built up from layers of independent behaviors rather than using internal representations. While this approach enabled robust real-world behavior, it lacked abilities like meta-cognition, coordinated goal-setting, and learning from experience.
4. Embodied AI – philosophical position
• Embodiment: the functions of the mind can be described in terms of aspects of the body.
• Cognitivism: the functions of the mind can be described in terms of information processing.
• Computationalism: the functions of the mind can be described in computational terms.
• Cartesian dualism: the functions of the mind are described in immaterial terms.
• Embodiment hypothesis: conceptual and linguistic structures are shaped by the peculiarities of perceptual structures. [Lakoff & Johnson 1999]
6. Moravec paradox – The argument
• The time that evolution took to produce a certain skill is proportional to the difficulty of implementing that skill.
• The oldest human skills are unconscious and effortless.
• The youngest human skills are conscious and require a lot of effort.
• Effortless skills are the most difficult to implement.
• Effortful skills are the easiest to implement, once the effortless skills have been implemented.
But:
• Cultural evolution is faster than biological evolution.
• Temporal progression need not parallel complexity.
• Temporal progression suggests a quantitative development.
15. AI as bottom-up engineering
• AI should be the engineering task of building Creatures that
  – are completely autonomous mobile agents,
  – co-exist with humans in the world,
  – are seen by humans as intelligent beings in their own right.
• Creatures should follow these engineering principles. They should
  – operate in a timely fashion,
  – be robust, exhibiting a gradual change in capability under environmental change,
  – maintain multiple goals,
  – do something; have a purpose in being.
16. Horizontal vs vertical layers
• [A] In traditional AI research, the assumptions that independent research fields make are not forced to be realistic. This is a bug in the functional-decomposition approach.
• The vertical layers: machine learning, vision, knowledge systems, automatic translation.
• [B] The traditional decomposition separates, among other things, peripheral perception and action modules from central reasoning or processing modules.
• The fact that the assumptions are not enforced [A] does not imply that the underlying decomposition [B] is wrong.
• It must be shown that, under the traditional functional decomposition of the research field, the assumptions cannot possibly be enforced.
• What really plays a role here is the assumption that reasoning and language are heavily influenced by sensors and actuators, and by being in the world.
• The horizontal layers: obstacle avoidance, path finding, path planning.
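The horizontal decomposition above can be sketched in code: each layer is a complete sensing-to-action loop that works on its own, and in this sketch the most basic layer wins whenever it has an opinion. This is a minimal illustrative sketch, not Brooks' implementation; all function, sensor, and action names are hypothetical.

```python
# Each layer maps raw sensor readings directly to an action,
# with no shared world model between layers (hypothetical names).

def avoid_obstacles(sensors):
    """Most basic layer: react to nearby obstacles."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn_away"
    return None  # no opinion: defer to the other layers

def find_path(sensors):
    """Middle layer: head toward a locally visible opening."""
    if sensors["opening_bearing"] is not None:
        return "steer_to_opening"
    return None

def plan_path(sensors):
    """Highest layer: follow a longer-range heading."""
    return "follow_heading"

# Listed highest-to-lowest; the last (most basic) layer that fires
# overrides the action chosen by the layers above it.
LAYERS = [plan_path, find_path, avoid_obstacles]

def control_step(sensors):
    """One control cycle: every layer runs; lower layers take precedence."""
    action = None
    for layer in LAYERS:
        out = layer(sensors)
        if out is not None:
            action = out
    return action
```

Because every layer is a self-contained sensor-to-actuator loop, removing the upper layers still leaves a Creature that does something sensible, which is the robustness argument behind horizontal layering.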
17. Sparseness of representations
• The world is its own best representation.
• No world-model maintenance, hence more robust.
• Each layer has an independent and implicit purpose or goal.
• The purpose of the entire Creature is implicit in the combination of the independent purposes of the individual layers.
Layer interactions in non-symbolic terms:
• Suppression: side-tapping that replaces an original input message with a message from a lower level.
• Inhibition: side-tapping that blocks an output message without replacing it.
19. Embodied AI’s Disadvantages
• Meta-cognition: no reification of tasks, goals, or processes.
• Goal interference: the goal-directed behaviors are independent, so they can interfere with one another.
• Task coordination: subsumption across multiple levels is only weakly structured.
• Learning: related to the meta-cognition disadvantage, since there is no medium in which learning can take place, i.e. no reification of thoughts.