What has been done and what needs to be done.
An Adaptive Cognitive System to Support
the Driving Task of Vehicles of the Future
Francesco Biral
University of Trento
francesco.biral@unitn.it
What do we need to do?
7
What will I talk about today?
What kind of technology do we need to support drivers and
interact with them in complex road scenarios?
Self-driving cars do not take the driver into the loop
8
Self-driving cars are only a part of the answer
Will an artificial robot that drives like an expert human driver be sufficient?
Will the driver accept robotic cars?
9
Potential reactions of some customers
Certain segments of the population will be less likely to
embrace autonomous driving (e.g. car enthusiasts)
The “Digital Natives” and “Gen. Now” generations’ identity
is less likely to be attached to the “driving experience.”
like horse and rider
like a driving instructor
Driver and machine need to cooperate
10
We must understand/know the driver
We must know his/her goals, intentions and driving abilities
11
A human would drive in distinctly different ways
depending on what his/her goal is
depending on his/her driving skills and experience
Like in the riding-horse metaphor
understand your intentions
understand your driving abilities
silently and gently support you when necessary
improving your manoeuvre
executing autonomously a manoeuvre you initiated
execute a task when required (e.g. take me there)
leave you in control but intervene only when the driver reaches his/her
limits, underestimates a scenario, or did not see a better option
The ideal support
12
Design principles of Co-Driver
Definition of Codriver
Architecture
The simulation theory of cognition and human-like sensory-motor
cycles
Co-Driver developed in EU interactIVe project
Instantiation of a CoDriver
Experimental examples
Limitations
Impact of Co-Driver technology & research lines
Contents
13
What is a Co-Driver? Theoretical background
Incomplete definitions
Autopilot (emphasis on automatism)
Companion Driver (robot: emphasis on human-robot interaction)
Virtual User (alter-ego: emphasis on reproducing human skills)
Driver Tutor (emphasis on supervising)
Natural co-drivers exist (animals and especially horses – the
H-metaphor, Flemisch, Norman, et al.)
What might a Co-Driver be?
What is a co-driver?
15
A ”co-driver” is an intelligent agent which:
Understands human driver intentions.
Produces human-like motor behaviours.
has an internal structure that copies the human one
(at least in some sense)
Interacts accordingly, rectifying mistakenly executed actions.
Characteristics of a Co-Driver
What is a Co-Driver?
16
Co-Driver must “understand” human driver
How would a human drive?
This question has multiple answers!
Answers depend on some higher-level motivations/goals.
The co-driver must put itself “in the shoes” of the human
driver and understand the goal.
It is not an easy task, since there is a multiplicity of goals
Key Technology #1
17
Co-driver must understand the goals of the human driver
(humans can do that with other humans; how do they do it?)
Simulation theory of cognition is a conceptual framework which
essentially states that “thinking is simulation of perception and
action”, carried out as covert motor-sensory activity.
Understanding of others’ intentions is also a simulation
process, carried out via the “mirroring” of observed motor
activities of others.
Hurley, S.L., 2008. The shared circuits model (SCM): how control, mirroring, and simulation can
enable imitation, deliberation, and mindreading. Behav. Brain Sci. 31, 1–58.
Grush, R. 2004. "The Emulation Theory of Representation: Motor Control, Imagery, and
Perception." Behavioral and Brain Sciences 27 (3): 377-396.
Jeannerod, M. 2001. "Neural Simulation of Action: A Unifying Mechanism for Motor Cognition."
NeuroImage 14 (1 II): S103-S109.
Understand the goal: theoretical background
18
The generative approach is to generate agent behaviours under a
number of alternative hypotheses, which are then tested by
comparison with observed behaviours.
“Multiple simulations” are run in parallel, and the most salient
one(s) are selected.
The observed behaviour identifies the internal states of the
observed agent, and thus the intentions
(Haruno, Wolpert, and Kawato 2001; Demiris and Khadhouri 2006).
“Like me” framework for understanding of others’ intentions:
“others who act like me have internal states like me” (Meltzoff).
The “like-me” framework essentially states that one agent “stands
in the shoes of another”.
Simulation/mirroring theories of cognition
19
Summing up: agents with similar sensory-motor systems
and capable of covert motor activities can use their
sensory-motor system to “simulate” observed actions, and
thus know the intentions of the observed agent
“Putting the co-driver in the shoes of the real driver” means
the co-driver “emulates” the real driver, as in covert
motor activities.
Objective: link driver behaviour to meaningful goals
(understand driver goals/motivations).
Simulation/mirroring theories of cognition
20
Exemplification of terms used
21
[Figure: map of longitudinal and lateral controls, with alternative hypotheses 1–8 (3a, 3b, 3c), linking goals, driver behaviours and covert motor activities.]
Co-driver must be able to generate human-like motor primitives:
Human-like: reproduce human sensory-motor strategies (path
planning and motor patterns just like a human).
D Liu, E. Todorov, Evidence for the Flexible Sensorimotor Strategies
Predicted by Optimal Feedback Control, Journal of Neuroscience, 2007 •
27(35):9354 –9368
P. Viviani, T. Flash, Minimum-jerk, two-thirds power law, and isochrony:
converging approaches to movement planning. J. Exp. Psychol. 21: 32-53,
1995.
“Even if skilled performance on a certain task is not exactly optimal,
but is just ‘good enough’, it has been made good enough by
processes whose limit is optimality”.
Human motor patterns respond to optimality criteria and may be
reproduced by Receding Horizon Optimal Control (minimum
intervention principle)
Key Technology #2
22
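The two-thirds power law cited above ties speed to the local radius of curvature (roughly v ∝ R^(1/3)), so speed drops where the path is curvier. Below is a minimal numerical sketch of such a speed profile; the gain alpha, the speed cap and the example curvature values are illustrative assumptions, not figures from these slides.

```python
# Minimal sketch of a two-thirds-power-law speed profile (illustrative parameters).
import numpy as np

def two_thirds_speed(curvature, alpha=4.0, v_max=30.0):
    """Speed profile [m/s] from a path curvature profile [1/m]: v = alpha * R^(1/3)."""
    rho = 1.0 / np.maximum(np.abs(curvature), 1e-6)   # local radius of curvature
    return np.minimum(alpha * np.cbrt(rho), v_max)     # capped cube-root law

# example: a straight that tightens into a 50 m radius bend and opens up again
s = np.linspace(0.0, 300.0, 7)
kappa = np.array([0.0, 0.002, 0.01, 0.02, 0.01, 0.002, 0.0])
for si, vi in zip(s, two_thirds_speed(kappa)):
    print(f"s = {si:5.1f} m  ->  v = {vi:4.1f} m/s")
```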
Experimental data
Drivers reduce speed in curves to maintain accuracy in lateral position
23
Acceleration patterns and the two-thirds power law
In order to improve movement accuracy, while preserving average speed, it is
convenient to increase speed in straighter arcs and reduce it along curvier ones.
$$a_{lat} = a_0 \sqrt{\left(1 - \left(\frac{v}{v_0}\right)^2\right)^2 + 2\left(\frac{v}{v_0}\right)^2} \qquad\qquad v = \alpha\,\sqrt[3]{R}$$

(two-thirds power law, with R the local radius of curvature)
Architecture#1: sense-think-act paradigm
It is the traditional architecture of AI, also known as the computer metaphor.
The central idea is the existence of an “internal model of the world”.
Problems:
perception “per se”;
not scalable (interfaces are choke points);
difficult to test;
it is not what happens in the human brain;
not fault tolerant;
hard to reconcile with motor imagery and covert sensory-motor activity.
Key Technology #3: Architecture
24
A tutor made of a one-level virtual driver (called “reference
maneuver”) was built into SASPENCE and INSAFES (+ evasive
maneuver).
Limitation: lacking motor imagery, it was not able to
“understand” the driver’s goal (it gave recommendations for a
pre-defined goal).
Da Lio, Biral et al., T-ITS, 2010 (2 papers)
Sense-think-act success story/1
25
(Versailles test track: reference manoeuvre (red) vs. real driver
(blue) movie)
Sense-think-act success story/2
26
Optimal control replanning on 200m horizon every 0.1s
Behaviour on a curve
Other examples
27
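As a rough illustration of the replanning scheme mentioned above (a new reference maneuver computed on a 200 m look-ahead every 0.1 s), here is a schematic receding-horizon loop. plan_reference_maneuver is a placeholder for the optimal-control solver, not the project's actual code; all names are illustrative.

```python
# Schematic receding-horizon replanning loop (sketch, not the project's implementation).
import time

HORIZON_M = 200.0   # spatial planning horizon [m]
DT = 0.1            # replanning period [s]

def plan_reference_maneuver(state, horizon_m):
    # placeholder: would solve the optimal control problem over `horizon_m` metres
    return {"speed_profile": [], "path": []}

def receding_horizon_loop(read_vehicle_state, apply_or_compare, stop):
    while not stop():
        state = read_vehicle_state()                       # current measured state
        plan = plan_reference_maneuver(state, HORIZON_M)   # replan on the 200 m horizon
        apply_or_compare(plan)                             # use only the first piece of the plan
        time.sleep(DT)                                     # then replan every 0.1 s
```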
Decomposition into parallel behaviours (hierarchical levels of
competence).
It is based on Perception-Action cycles (no internal model of the
world).
Multi-goal, multi-sensor (perceptual synthesis), robust, scalable,
subsumptive; each level includes sub-level competences.
R. A. Brooks. A robust layered control system for a mobile robot. IEEE
Journal of Robotics and Automation, 2(1):14–23, 1986.
Architecture: the behavioural model
28
The Shared Circuits Model combines the above ideas into an
interpretative scheme within the behavioural
architecture.
“Thinking” is a simulated interaction.
Emulation theory of cognition (Grush, Hurley, Jeannerod, et al.)
enables imitation, motor imagery, deliberation, mindreading,
understanding….
Theory of Cognition by means of emulation
29
[Diagram: perception–action loop through an inverse model and a forward emulator in the environment; the output to the body can be inhibited and replaced by simulated input, covering imitation of one's own and of others' actions.]
The general idea: the brain monitors a small number of task
parameters y instead of the full state x, generates abstract commands
v, and maps them into muscle activations u using motor synergies.
Humans are organized in hierarchies of subsumptive behaviors
(Brooks, 1986; Michon 1985; Hatakka et al. 2002; Hollnagel and Woods 1999,
2005).
Human cognition is “grounded”: the intelligent agent is seen in
the loop with the environment, and perception and action are no longer
divided.
(Gibson 1986; Varela, Thompson, and Rosch 1991; Thelen and Smith 1994; Van
Gelder 1995; Harvey 1996; Clark 1997; Seitz 2000; Beer 2000; Barsalou 2008)
The traditional paradigm of AI (the computer metaphor: input-
processing-output) suffers from the symbol grounding problems of abstract
amodal symbol systems. It fails in modeling mutual understanding of
agents.
Human-like sensory-motor systems
30
The ECOM is a subsumptive hierarchical behavioural model
of human driving.
(Hollnagel and Woods, 1999, 2002, 2005)
Successfully used in FP7 DIPLECS.
The Extended Control Model (ECOM)
31
Long-term goals and psychological states
(e.g., go home quickly)
Short-term goals and driving styles
(e.g., overtake “a” instead of “b”)
Space-time trajectories
(i.e., including speed)
Vehicle control.
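A toy sketch of the four ECOM-like layers just listed, organised as a subsumptive stack in which each level refines the command of the level above, down to vehicle control. Layer names, fields and values below are illustrative assumptions, not the project's data model.

```python
# Illustrative four-layer ECOM-like stack (sketch only).
from typing import Callable, List

Layer = Callable[[dict, dict], dict]   # (percepts, command_from_above) -> command

def goals_layer(percepts, cmd):       return {"destination": "home", "style": "quick"}
def maneuvers_layer(percepts, cmd):   return {**cmd, "behavior": "FollowObject"}
def trajectory_layer(percepts, cmd):  return {**cmd, "primitive": "SM", "target_speed": 25.0}
def control_layer(percepts, cmd):     return {**cmd, "throttle": 0.2, "steering": 0.0}

def run_stack(layers: List[Layer], percepts: dict) -> dict:
    cmd: dict = {}
    for layer in layers:               # higher layers constrain the lower ones
        cmd = layer(percepts, cmd)
    return cmd

print(run_stack([goals_layer, maneuvers_layer, trajectory_layer, control_layer], {}))
```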
It is inspired by this general organization of the sensorimotor system.
The low-level controller receives information about the plant state x, and
generates an abstract and more compact state representation y(x) that is
sent to the high level. The high-level controller monitors task progress,
and issues commands v(y) which in general specify how y should change.
The job of the low-level controller is to compute energy-efficient controls
u(v,x) consistent with v. Thus the low-level controller does not solve a
specific subtask (as usually assumed in hierarchical reinforcement
learning), but instead performs an instantaneous feedback transformation.
This enables the high level to control y unencumbered by the full details
of the plant.
Hierarchical control scheme
32
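A minimal sketch of the two-level feedback hierarchy quoted above: the low level compresses the plant state x into task parameters y(x), the high level issues commands v(y) saying how y should change, and the low level turns (v, x) into actuator inputs u by an instantaneous feedback transformation. The specific choice of y, the gains and the damping term are illustrative assumptions.

```python
# Two-level feedback hierarchy (sketch with illustrative task parameters and gains).
import numpy as np

def y_of_x(x):
    # abstract task parameters: speed and lateral offset from the lane centre
    return np.array([x[2], x[1]])

def high_level(y, y_desired, k_hi=0.5):
    # command v: desired rate of change of the task parameters y
    return k_hi * (y_desired - y)

def low_level(v, x, k_lo=2.0):
    # instantaneous feedback transformation from (v, x) to actuator inputs u
    throttle = k_lo * v[0]
    steering = k_lo * v[1] - 0.5 * x[3]     # damp lateral velocity
    return np.array([throttle, steering])

# one control step: x = [s, lateral offset, speed, lateral velocity]
x = np.array([0.0, 0.4, 22.0, 0.1])
y_des = np.array([25.0, 0.0])
print(low_level(high_level(y_of_x(x), y_des), x))
```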
[Embedded page excerpt from the cited hierarchical-control paper, Journal of Robotic Systems 22(11), 691–710 (2005), including Figure 1: “Schematic illustration of the proposed framework”.]
CoDriver in InteractIVe CRF Demonstrator
Design an agent capable of enacting the “like-me” framework,
which means that it must have sensory-motor strategies similar
to those of a human, and that it must be capable of using them
to mirror human behaviour for inference of intentions and
human-machine interaction for preventive safety, emergency
handling and efficient vehicle control.
Implemented
Four layers ECOM-like behavioural subsumptive architecture.
Forward/mirroring mechanisms (by Optimal Control).
Motor imagery, inference of driver’s goals.
Main goal
34
Forward emulators are vehicle dynamics models that neglect high
frequencies (not afforded by humans) but consider non-linearities.
A predictive model here serves to test the viability of different hypotheses of
human driving intentions. Thus, its main requirement is similarity to the model
humans use (if not, co-driver predictions will not match observations even for
a correct hypothesis).
We make the assumption that slow, if non-linear, phenomena are capable of
being human-directed whereas faster ones are not (due to human actuation
limits and bandwidth).
Inverse emulators are minimum-jerk/minimum-time optimal control
(OC) plans that link perceptual goals (i.e. desired states) to the
actions needed to achieve those goals.
Other approaches may be based on machine learning; for example, the
learning of either or both inverse and forward models.
OC has the advantage that it needs only the forward model and the
optimality criteria.
Building blocks#1 - Emulators
35
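A minimal sketch of a longitudinal forward emulator in the sense above: a slow, non-linear point-mass model that predicts how speed and travelled distance evolve for a hypothesised (constant) pedal command. It reuses the simplified dynamics that appear in the optimal-control slides below (du/dt = fx/Me − k0 − kv·u²); the parameter values and the linear pedal-to-force map are illustrative assumptions.

```python
# Longitudinal forward emulator (sketch with illustrative parameters).
import numpy as np

def forward_emulate(s0, u0, pedal, dt=0.1, horizon=5.0,
                    Me=1500.0, f_max=4500.0, k0=0.1, kv=4e-4):
    """Predict (s, u) trajectories for a constant pedal command in [-1, 1]."""
    n = int(horizon / dt)
    s, u = np.empty(n + 1), np.empty(n + 1)
    s[0], u[0] = s0, u0
    for i in range(n):
        fx = pedal * f_max                       # hypothesised traction/brake force
        du = fx / Me - k0 - kv * u[i] ** 2       # slow, non-linear longitudinal model
        u[i + 1] = max(u[i] + du * dt, 0.0)
        s[i + 1] = s[i] + u[i] * dt
    return s, u

s, u = forward_emulate(s0=0.0, u0=20.0, pedal=0.3)
print(f"after 5 s: s = {s[-1]:.1f} m, u = {u[-1]:.1f} m/s")
```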
Motor primitives are parametric instantiations of inverse
emulators that achieve specified goals. They are the solution
of inverse models that determine the (optimal) control
required to reach a desired state at some future time T.
Since there may be several types of final states and optimization
criteria, the inversion problem produces a corresponding number
of solutions, which we may regard as different parameterized
motor primitives.
There are 4 motor primitives:
Speed Adaptation (SA)
Speed Matching (SM)
Lateral Displacement (LD)
Lane Alignment (LA).
Building blocks#2 – Motor primitives
36
Motor primitives
37
Example for longitudinal dynamics #1
Optimal control formulation with some simplifications
States: $x = [s(t),\, u(t),\, a(t)]$

System model:
$$\frac{d}{dt}s(t) = u(t), \qquad \frac{d}{dt}u(t) = \underbrace{\frac{1}{M_e} f_x(p(t), u(t)) - k_0 - k_v\, u(t)^2}_{a(t)}, \qquad \frac{d}{dt}a(t) = k_p\, j_x(t)$$

Goal function:
$$J = \int_0^T \left( j_x(t)^2 + w_T \right) dt$$

Boundary conditions: $B(x(0), x(T)) = 0$

Augmented functional:
$$J = \int_0^T \left[ j_x(t)^2 + w_T + \lambda_1(t)\!\left(\frac{d}{dt}s(t) - u(t)\right) + \lambda_2(t)\!\left(\frac{d}{dt}u(t) - a(t)\right) + \lambda_3(t)\!\left(\frac{d}{dt}a(t) - k_p\, j_x(t)\right) \right] dt$$
Speed matching
38
Example for longitudinal dynamics #2
We define the motor primitive boundary conditions
After first variation and applying Pontryagin's principle we get:
Boundary conditions (perceptual goals) for Speed Matching (SM):
$$B(x(0), x(T)) = \begin{bmatrix} s(0) = 0 \\ u(0) = u_i \\ a(0) = a_i \\ s(T) = s_f \\ u(T) = u_f \\ a(T) = 0 \end{bmatrix}$$

Optimal control law:
$$j_x(t) = -\frac{k_p\, \lambda_3(t)}{2}$$

Co-state equations:
$$\frac{d}{dt}\lambda_1(t) = 0, \qquad \frac{d}{dt}\lambda_2(t) + \lambda_1(t) = w_T, \qquad \frac{d}{dt}\lambda_3(t) + \lambda_2(t) = 0$$
Speed matching
39
Example for longitudinal dynamics #2
Given the distance sf we can solve for the minimum time T
Nonlinear equation in T, to be solved numerically:
$$a(\zeta) = \left(\frac{3\zeta^2}{T^2} - \frac{4\zeta}{T} + 1\right) a_i + \left(-\frac{6\zeta^2}{T^3} + \frac{6\zeta}{T^2}\right) u_f + \left(\frac{6\zeta^2}{T^3} - \frac{6\zeta}{T^2}\right) u_i + \left(\frac{\zeta^3}{12} - \frac{\zeta^2 T}{8} + \frac{\zeta T^2}{24}\right) w_T$$

$$s(\zeta) = \left(-\frac{2\zeta^3}{3T} + \frac{\zeta^2}{2} + \frac{\zeta^4}{4T^2}\right) a_i + \left(-\frac{\zeta^4}{2T^3} + \frac{\zeta^3}{T^2}\right) u_f + \left(\frac{\zeta^4}{2T^3} + \zeta - \frac{\zeta^3}{T^2}\right) u_i + \left(\frac{\zeta^5}{240} - \frac{\zeta^4 T}{96} + \frac{\zeta^3 T^2}{144}\right) w_T$$

$$u(\zeta) = \left(-\frac{2\zeta^2}{T} + \zeta + \frac{\zeta^3}{T^2}\right) a_i + \left(-\frac{2\zeta^3}{T^3} + \frac{3\zeta^2}{T^2}\right) u_f + \left(\frac{2\zeta^3}{T^3} + 1 - \frac{3\zeta^2}{T^2}\right) u_i + \left(\frac{\zeta^4}{48} - \frac{\zeta^3 T}{24} + \frac{\zeta^2 T^2}{48}\right) w_T$$

$$\lambda_3(\zeta) = \left(-\frac{12\zeta}{T^2} + \frac{8}{T}\right) a_i + \left(\frac{24\zeta}{T^3} - \frac{12}{T^2}\right) u_f + \left(-\frac{24\zeta}{T^3} + \frac{12}{T^2}\right) u_i + \left(-\frac{\zeta^2}{2} + \frac{T\zeta}{2} - \frac{T^2}{12}\right) w_T$$

Optimal control:
$$j_x(\zeta) = -\frac{\lambda_3(\zeta)}{2}$$

with the manoeuvre duration $T$ obtained from $s(T) = s_f$.
Speed matching
40
Example for longitudinal dynamics #2
Profiles for different wT: note the different values of initial jerk.
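Putting the pieces together, here is a minimal numerical sketch of the SM primitive using the closed-form polynomials reconstructed above (taking kp = 1): it solves the cubic s(T) = sf for the manoeuvre duration T and evaluates the resulting speed, acceleration and jerk profiles. The helper names and the example numbers are illustrative, not the project's implementation.

```python
# Speed Matching (SM) primitive from the reconstructed closed form (kp = 1 assumed).
import numpy as np

def sm_primitive(ui, ai, uf, sf, wT):
    # s(T) = ai*T^2/12 + (ui+uf)*T/2 + wT*T^3/1440  ->  solve s(T) = sf for T
    roots = np.roots([wT / 1440.0, ai / 12.0, (ui + uf) / 2.0, -sf])
    T = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

    def profiles(zeta):
        z = np.asarray(zeta, dtype=float)
        a = ((3*z**2/T**2 - 4*z/T + 1) * ai
             + (-6*z**2/T**3 + 6*z/T**2) * uf
             + (6*z**2/T**3 - 6*z/T**2) * ui
             + (z**3/12 - z**2*T/8 + z*T**2/24) * wT)
        u = ((-2*z**2/T + z + z**3/T**2) * ai
             + (-2*z**3/T**3 + 3*z**2/T**2) * uf
             + (2*z**3/T**3 + 1 - 3*z**2/T**2) * ui
             + (z**4/48 - z**3*T/24 + z**2*T**2/48) * wT)
        lam3 = ((-12*z/T**2 + 8/T) * ai
                + (24*z/T**3 - 12/T**2) * uf
                + (-24*z/T**3 + 12/T**2) * ui
                + (-z**2/2 + z*T/2 - T**2/12) * wT)
        return u, a, -lam3 / 2.0          # speed, acceleration, jerk (optimal control)

    return T, profiles

# example: reach 25 m/s from 15 m/s within 200 m, starting at zero acceleration
T, prof = sm_primitive(ui=15.0, ai=0.0, uf=25.0, sf=200.0, wT=0.1)
u, a, j = prof(np.linspace(0.0, T, 5))
print(f"T = {T:.2f} s, initial jerk = {j[0]:.3f} m/s^3")
```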
Space-time trajectories deal with either longitudinal or lateral control to
manage one single motor task.
There are 6 functions:
FollowObject (FO):
approach a preceding obstacle with a desired time gap TH (Time Headway)
ClearObject (CO):
the purpose of this maneuver is to clear a frontal object on either side of the host
vehicle
FreeFlow (FF):
this maneuver produces an SA primitive by guessing a target speed uT
LaneFollow (LF)
LandMarks (LM)
Curves (CU)
41
It is a combination of motor primitives.
Obstacle lateral movement model: the object aligns with the road in some time T*
according to the same ego-vehicle lateral forward model.
First compute the encounter time T° using the longitudinal motion
models;
then produce an LD primitive (i.e., parameters nT and T) such that a
specified clearance c0 is obtained at T°.
Example Clear Object
42
[Excerpt from the T-ITS paper:] The FollowObject function instantiates an SM primitive, FO(object, th, wT) → SM(xT, uT, wT), computing the target point xT and the velocity uT needed to follow the object with the aimed-at time headway th; xT accounts for the longitudinal clearance between vehicle and obstacle plus any extra desired margin, and T is the maneuver duration. The followed object does not need to be in the host lane for this function to apply. The obstacle is assumed to align with the road within a heuristically derived time T* ≈ 2.5 s, so its lateral position evolves as nO = nO,0 + vn·t/2 for t = 0…T*, reaching at most sn,max = sn,0 + vn·T*/2. If this position falls within one lane of the current object lane, the object is assumed to follow the road (possibly changing one lane only); otherwise it is considered to be crossing, and its transverse motion is taken as uniform, sn = sn,0 + vn·t. The encounter time T° is computed from the longitudinal motion models. (Fig. 5: evasive maneuvers.)
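A simplified sketch of the ClearObject logic just described: estimate the encounter time T° (here with a constant-speed shortcut rather than the full longitudinal motion models), predict the obstacle's lateral position there with the aligning-vs-crossing model, and emit the parameters (nT, T) of the LD primitive that leaves clearance c0. All names, the constant-speed shortcut and the example values are illustrative assumptions.

```python
# ClearObject (CO) sketch: encounter time + LD primitive parameters (illustrative).
from dataclasses import dataclass

T_STAR = 2.5  # heuristic lateral alignment time [s]

@dataclass
class LDPrimitive:
    n_T: float   # lateral offset to reach [m]
    T: float     # time by which it must be reached [s]

def clear_object(d0, u_ego, u_obs, sn0, vn, lane_width, w_ego, w_obs, c0, side=+1):
    closing = u_ego - u_obs
    if closing <= 0.0:
        return None                           # no encounter, nothing to clear
    T_enc = d0 / closing                      # constant-speed encounter-time shortcut

    sn_max = sn0 + vn * T_STAR / 2.0          # aligning-with-road prediction
    if abs(sn_max - sn0) <= lane_width:       # at most one lane drift: aligning model
        sn_enc = sn0 + vn * min(T_enc, T_STAR) / 2.0
    else:                                     # crossing: uniform transverse motion
        sn_enc = sn0 + vn * T_enc

    # lateral offset that leaves clearance c0 on the chosen side at the encounter
    n_T = sn_enc + side * ((w_ego + w_obs) / 2.0 + c0)
    return LDPrimitive(n_T=n_T, T=T_enc)

# example: obstacle 40 m ahead, 5 m/s slower, drifting right at 0.2 m/s
print(clear_object(40.0, 25.0, 20.0, 0.0, 0.2, 3.5, 1.8, 1.8, 0.5))
```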
Combines level-2 motor chunks into maneuvers.
Makes arrays of hypotheses, including incorrect ones, that will
be used for inference of intentions.
Building blocks#4 – Navigation hypotheses
43
$$J_i = w\,\|j - \hat{j}\|^2 + w_p\,\|j_p - \hat{j}_p\|^2 + w_n J_n, \qquad w_n J_n = \text{steering cost}$$
Building blocks#5a – Inference of intentions
Compares the co-driver motor output of the generated
hypotheses with the human control (generative approach).
Uses a saliency approach.
44
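A minimal sketch of this generative inference step: each navigation hypothesis is scored against the human's observed controls with the salience cost above, and the lowest-cost (most salient) hypothesis is taken as the inferred intention. The weights, data layout and example numbers are illustrative assumptions.

```python
# Saliency-based inference of intentions (sketch with illustrative weights/data).
import numpy as np

def salience(hyp_j_long, hyp_j_lat, obs_j_long, obs_j_lat, steering_cost,
             w=1.0, w_p=1.0, w_n=0.1):
    # J_i = w*||j - j_hat||^2 + w_p*||j_p - j_hat_p||^2 + w_n*J_n  (J_n = steering cost)
    return (w * np.sum((hyp_j_long - obs_j_long) ** 2)
            + w_p * np.sum((hyp_j_lat - obs_j_lat) ** 2)
            + w_n * steering_cost)

def infer_intention(hypotheses, observed):
    """hypotheses: {name: (jerk_long, jerk_lat, steering_cost)}; observed: (jerk_long, jerk_lat)."""
    costs = {name: salience(jl, jn, observed[0], observed[1], sc)
             for name, (jl, jn, sc) in hypotheses.items()}
    return min(costs, key=costs.get), costs   # most salient = lowest cost

obs = (np.array([0.2, 0.1]), np.array([0.0, 0.0]))          # observed human jerks
hyps = {
    "FollowObject": (np.array([0.25, 0.1]), np.array([0.0, 0.0]), 0.0),
    "ClearObject":  (np.array([-0.5, -0.6]), np.array([0.4, 0.5]), 1.0),
}
print(infer_intention(hyps, obs))
```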
[Figure: the same map of longitudinal/lateral control hypotheses as above, scored with $J_i = w\,\|j - \hat{j}\|^2 + w_p\,\|j_p - \hat{j}_p\|^2 + w_n J_n$ ($w_n J_n$ = steering cost).]
Building blocks#5b – Interaction
45
Interactions are application-dependent (the CRF CS is an
assistive system).
The intention is known:
if it is correct, the system does nothing;
if it is incorrect, the system knows two ways to rectify it.
The system suggests the longitudinal correction, except
when the lateral correction is in lane.
Inference of driver intentions (a model identification problem).
Implements motor imagery, imitation, mindreading (model
identification) for all meaningful goals.
Top level (goal/motivations level)
46
User tests: examples and discussion
The test route was a 53 km loop from CRF headquarters in Orbassano (Turin,
Italy) to Pinerolo, Piossasco and back, which included urban arterials, extra-urban
roads, motorways, roundabouts, ramps, and intersections.
A total of 35 hours of logs have been collected, including sensor data, co-driver
output, and images from a front camera.
24 test users
Vehicle demonstrator & User Test Route
48
How does it work?
Experimental results
49
Example #1: Car following
How does it work?
Experimental results
50
Example #2: Pedestrian
Standing still / Crossing
How does it work?
Experimental results
51
Example #3: Cut-in manoeuvre
When the two agents disagree, it is necessary to manually
inspect the recordings to assess the reason.
Reasons for mismatch may be:
poorly-optimized co-driver
(frequent during
development)
perception noise
driver error
simple difference of
“opinions” between the two
agents (see below).
52
Fig. 8 (a) shows an example situation, which happened 1.1 s before the event depicted in Fig. 7, when, for the first time, the co-driver detected a risk for maneuver 1.
Fig. 8: (a, top) difference in longitudinal acceleration between the two agents; (b, center) distribution of acceleration difference; (c, bottom) distribution of lateral position difference.
accepting a slightly short time
headway (0.9 s) for a while.
Anticipation of overtake
53
These events represent a different form of inference of intentions, pertaining to a higher cognitive level (“predictive tracking of dynamic actions” [123]).
Fig. 12: anticipation of overtake, anticipation of lane change and actual lane change for cases 2, 6, 10, 14 and 18 (left to right). First row: overtake intention detected as a second-level state transition (FollowObject to ClearObject behavior). Second row: lane change detected as a motor-primitive-level transition (LD crossing the lane). Third row: actual lane crossing.
Anticipation of overtake
54
The driver passes close to edges during transitions (e.g., taking the first
exit ramp at cycle ~700). The numbered light green bands stand for the lane
changes. The interval between when the LD motor primitive predicts the lane
crossing and the actual crossing of the lane is shaded. There are 21 changes
correctly predicted, with anticipation ranging from 1.1 s to 2.4 s (median 1.6 s).
There are two false crossing predictions, labeled a and b, that happen in the
non-motorway section when the trajectory passes close to edges. Fig. 10 shows
the camera view to demonstrate how demanding this situation actually is.
Despite the false prediction, the absolute value of the lateral prediction error
is limited (a fraction of the vehicle width). Non-motorway segments are
characterized by often-irregular lane geometry, with splitting and merging
lanes, often with missing marking traits (Fig. 10 b), or else the camera
failing to recognize them.
Fig. 8 (c) shows that the 0.0025 quantile curves, i.e., 99.5% of the LD motor
primitives, depart from the real trajectory by less than one quarter of a lane
in 2 s, less than half a lane in 2.5 s and less than one full lane in 5 s. The
points where larger deviations happen may be seen in Fig. 9 (the prediction
error is plotted). They are typically at inversions of the heading angle and
in complex geometries (a and b).
In Fig. 9 (b), the dashed blue vertical lines before lane changes 2, 6, 10, 14
and 18 mark the point where the co-driver switches from FollowObject
(second-level) behavior to ClearObject behavior. This is the point where the
agent realizes that the human intention may be to overtake.
For example, Fig. 11 shows the control output space 4.1 s before lane change
18, showing how the driver is going to choose the overtake maneuver.
Fig. 10: Complex geometry at false alarm points.
Fig. 11: Detection of the intention to overtake (the camera view for this is
given by the top right frame of Fig. 12).
Fig. 9: Comparison of driver and co-driver on a 10-minute course. (a, top)
longitudinal dynamics (see text). (b, bottom) lateral dynamics.
Incorrect interpretation
55
Sensory system plays a fundamental role.
Must be accurate.
[Fig. 10: Complex geometry at the false alarm points — camera view showing how demanding the situation is.]
Missing behaviors (incomplete PA architecture), e.g., the overtake maneuver.
Complex behaviors may often be decomposed into simpler ones (e.g., overtake ->
lane change + free flow + lane change).
The system in this case understands the single phase but not the complex one (it
still works!).
Inaccurate behaviors. A co-driver with non-human behaviors fails to
understand the intentions (e.g., forgetting to model understeer).
Missing hypotheses. The co-driver uses the closest hypotheses and may
fail.
The plausibility approach, together with behavioral discretization, increases
robustness at the expense of granularity of intention resolution.
Behaviours emerge from basic principles (e.g., adaptive lane keeping arises
naturally from the use of the second manoeuvre).
Discussion– Inference of intentions
56
The system has been tested on a ~50 km road path with ordinary
drivers (24 drivers, twice each, for a total of 35 hours).
False alarms were few (2–4) per trip, mostly due to noise in the
perception system. Very few alarms may be ascribed to incomplete/
missing/imperfectly designed co-driver behaviors (most of these
being due to mismatch between driving styles, so not critical).
The collected data will help refine the motor primitives and behaviors built
into the system.
The hierarchical architecture is easily scalable, maintainable and
testable.
Major limitation: the system does not yet work at intersections
(it has a poor understanding of intersecting vehicles' intentions).
Discussion– User tests
57
Technology impact on future applications
A co-driver that understands driver goals is a “friend” that:
Enables adaptive automation (offers the appropriate support type
and level at any time, as a human peer would do)
Improves execution of manoeuvres (substitutes machine execution for
human execution while preserving the goal – just like chassis
control, but at the navigation-cognitive level)
Navigates by hints (just like a horse), largely autonomously, until
new goals become manifest from the human
Takes over/supervises driver control (under certain conditions)
Is understandable to other drivers.
Unified framework for smart (safe, green, comfort) functions.
Peer-to-peer human-robot interactions
59
Virtual drivers enable cooperative swarm behaviors.
They exchange each other's goals (i.e., their drivers' ECOM states).
Inference of other agents' goals can be carried out even if they are not
cooperative (they also have some goal).
Safety as an emergent behavior. Each agent adapts its own plans to the
others, producing a collective emerging swarm behavior.
Green cooperative driving as an emerging behavior (produced by the
energy efficiency criterion in the mirroring mechanisms).
Cooperative systems
60
Novel situations happen occasionally.
Drivers are often not prepared to handle rare events, because they
have no prior experience.
Motor imagery can be used to analyze them by simulation and to extend,
by synthetic learning, the subsumptive architecture with novel PA loops.
Accidents are even rarer events.
Co-drivers (even if they don't survive) will collect very detailed
accident data that could later be used for synthetic learning.
Accidents (and near misses) of some co-drivers will teach something to
others, which will become more and more capable of reacting
properly to rare events.
Cognitive co-drivers will be able to self-extend their application
domain.
Synthetic learning for novel situations and rare
events
61
Co-drivers will gradually collect more and more experience.
This can be shared:
improved driver profiles (personas),
improved interactions,
improved handling of critical situations,
special driver classes (elderly people and people with some disabilities),
naturalistic data collection,
other...
Learning interaction
62
Mais conteúdo relacionado

Semelhante a Adaptive cognitive driving systems

ArtificialIntelligenceDefinitionEthicsandStandards.docx
ArtificialIntelligenceDefinitionEthicsandStandards.docxArtificialIntelligenceDefinitionEthicsandStandards.docx
ArtificialIntelligenceDefinitionEthicsandStandards.docx
DiaaDahir
 
Artificial IntelligenceNavya Reddy Karnati (556139)Ven.docx
Artificial IntelligenceNavya Reddy Karnati (556139)Ven.docxArtificial IntelligenceNavya Reddy Karnati (556139)Ven.docx
Artificial IntelligenceNavya Reddy Karnati (556139)Ven.docx
tarifarmarie
 

Semelhante a Adaptive cognitive driving systems (20)

Human Computer Interaction
Human Computer InteractionHuman Computer Interaction
Human Computer Interaction
 
AI: Introduction to artificial intelligence
AI: Introduction to artificial intelligenceAI: Introduction to artificial intelligence
AI: Introduction to artificial intelligence
 
AI: Introduction to artificial intelligence
AI: Introduction to artificial intelligenceAI: Introduction to artificial intelligence
AI: Introduction to artificial intelligence
 
Chapter 3 - EMTE.pptx
Chapter 3 - EMTE.pptxChapter 3 - EMTE.pptx
Chapter 3 - EMTE.pptx
 
Artificial-Intelligence--AI And ES Nowledge Base Systems
Artificial-Intelligence--AI And ES Nowledge Base SystemsArtificial-Intelligence--AI And ES Nowledge Base Systems
Artificial-Intelligence--AI And ES Nowledge Base Systems
 
ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS KNOWLEDGE-BASED SYSTEMS TEACHING ...
ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS  KNOWLEDGE-BASED SYSTEMS TEACHING ...ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS  KNOWLEDGE-BASED SYSTEMS TEACHING ...
ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS KNOWLEDGE-BASED SYSTEMS TEACHING ...
 
AI in Autonomous Vehicles
AI in Autonomous VehiclesAI in Autonomous Vehicles
AI in Autonomous Vehicles
 
Artificial intelligent
Artificial intelligent Artificial intelligent
Artificial intelligent
 
ARTIFICIAL INTELLIGENCE AND ROBOTICS
ARTIFICIAL INTELLIGENCE AND ROBOTICS ARTIFICIAL INTELLIGENCE AND ROBOTICS
ARTIFICIAL INTELLIGENCE AND ROBOTICS
 
ArtificialIntelligenceDefinitionEthicsandStandards.docx
ArtificialIntelligenceDefinitionEthicsandStandards.docxArtificialIntelligenceDefinitionEthicsandStandards.docx
ArtificialIntelligenceDefinitionEthicsandStandards.docx
 
Introduction to Knowledge Graphs
Introduction to Knowledge GraphsIntroduction to Knowledge Graphs
Introduction to Knowledge Graphs
 
UNIT I - AI.pptx
UNIT I - AI.pptxUNIT I - AI.pptx
UNIT I - AI.pptx
 
Artificial IntelligenceNavya Reddy Karnati (556139)Ven.docx
Artificial IntelligenceNavya Reddy Karnati (556139)Ven.docxArtificial IntelligenceNavya Reddy Karnati (556139)Ven.docx
Artificial IntelligenceNavya Reddy Karnati (556139)Ven.docx
 
Selected topics in Computer Science
Selected topics in Computer Science Selected topics in Computer Science
Selected topics in Computer Science
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
Artificial intelligence.pptx
Artificial intelligence.pptxArtificial intelligence.pptx
Artificial intelligence.pptx
 
ARTIFICIAL INTELLIGENCE.pptx
ARTIFICIAL INTELLIGENCE.pptxARTIFICIAL INTELLIGENCE.pptx
ARTIFICIAL INTELLIGENCE.pptx
 
Ai lect 1
Ai lect 1Ai lect 1
Ai lect 1
 
01 introduction to_artificial_intelligence
01 introduction to_artificial_intelligence01 introduction to_artificial_intelligence
01 introduction to_artificial_intelligence
 
Introduction to Artificial Intelligence
Introduction to Artificial IntelligenceIntroduction to Artificial Intelligence
Introduction to Artificial Intelligence
 

Mais de Institute for Transport Studies (ITS)

Social networks, activities, and travel - building links to understand behaviour
Social networks, activities, and travel - building links to understand behaviourSocial networks, activities, and travel - building links to understand behaviour
Social networks, activities, and travel - building links to understand behaviour
Institute for Transport Studies (ITS)
 
Rail freight in Japan - track access
Rail freight in Japan - track accessRail freight in Japan - track access
Rail freight in Japan - track access
Institute for Transport Studies (ITS)
 

Mais de Institute for Transport Studies (ITS) (20)

Transport Projects Aimed at Fostering Economic Growth – experience in the UK ...
Transport Projects Aimed at Fostering Economic Growth – experience in the UK ...Transport Projects Aimed at Fostering Economic Growth – experience in the UK ...
Transport Projects Aimed at Fostering Economic Growth – experience in the UK ...
 
BA Geography with Transport Studies at the University of Leeds
BA Geography with Transport Studies at the University of LeedsBA Geography with Transport Studies at the University of Leeds
BA Geography with Transport Studies at the University of Leeds
 
Highways Benchmarking - Accelerating Impact
Highways Benchmarking - Accelerating ImpactHighways Benchmarking - Accelerating Impact
Highways Benchmarking - Accelerating Impact
 
Using telematics data to research traffic related air pollution
Using telematics data to research traffic related air pollutionUsing telematics data to research traffic related air pollution
Using telematics data to research traffic related air pollution
 
Masters Dissertation Posters 2017
Masters Dissertation Posters 2017Masters Dissertation Posters 2017
Masters Dissertation Posters 2017
 
Institute for Transport Studies - Masters Open Day 2017
Institute for Transport Studies - Masters Open Day 2017Institute for Transport Studies - Masters Open Day 2017
Institute for Transport Studies - Masters Open Day 2017
 
London's Crossrail Scheme - its evolution, governance, financing and challenges
London's Crossrail Scheme  - its evolution, governance, financing and challengesLondon's Crossrail Scheme  - its evolution, governance, financing and challenges
London's Crossrail Scheme - its evolution, governance, financing and challenges
 
Secretary of State Visit
Secretary of State VisitSecretary of State Visit
Secretary of State Visit
 
Business model innovation for electrical vehicle futures
Business model innovation for electrical vehicle futuresBusiness model innovation for electrical vehicle futures
Business model innovation for electrical vehicle futures
 
A clustering method based on repeated trip behaviour to identify road user cl...
A clustering method based on repeated trip behaviour to identify road user cl...A clustering method based on repeated trip behaviour to identify road user cl...
A clustering method based on repeated trip behaviour to identify road user cl...
 
Cars cars everywhere
Cars cars everywhereCars cars everywhere
Cars cars everywhere
 
Annual Review 2015-16 - University of leeds
Annual Review 2015-16 - University of leedsAnnual Review 2015-16 - University of leeds
Annual Review 2015-16 - University of leeds
 
Social networks, activities, and travel - building links to understand behaviour
Social networks, activities, and travel - building links to understand behaviourSocial networks, activities, and travel - building links to understand behaviour
Social networks, activities, and travel - building links to understand behaviour
 
Rail freight in Japan - track access
Rail freight in Japan - track accessRail freight in Japan - track access
Rail freight in Japan - track access
 
Real time traffic management - challenges and solutions
Real time traffic management - challenges and solutionsReal time traffic management - challenges and solutions
Real time traffic management - challenges and solutions
 
Proportionally fair scheduling for traffic light networks
Proportionally fair scheduling for traffic light networksProportionally fair scheduling for traffic light networks
Proportionally fair scheduling for traffic light networks
 
Capacity maximising traffic signal control policies
Capacity maximising traffic signal control policiesCapacity maximising traffic signal control policies
Capacity maximising traffic signal control policies
 
Bayesian risk assessment of autonomous vehicles
Bayesian risk assessment of autonomous vehiclesBayesian risk assessment of autonomous vehicles
Bayesian risk assessment of autonomous vehicles
 
Agent based car following model for heterogeneities of platoon driving with v...
Agent based car following model for heterogeneities of platoon driving with v...Agent based car following model for heterogeneities of platoon driving with v...
Agent based car following model for heterogeneities of platoon driving with v...
 
A new theory of lane selection on highways
A new theory of lane selection on highwaysA new theory of lane selection on highways
A new theory of lane selection on highways
 

Último

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 

Último (20)

Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelNavi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
A Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusA Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source Milvus
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 

Adaptive cognitive driving systems

  • 1. What has been done what need to be done. An Adaptive Cognitive System to Support the Driving Task of Vehicles of the Future Francesco Biral University of Trento francesco.biral@unitn.it
  • 2. What do we need to do?7 What I will talk about today? What kind of technology do we need to support drivers and interact with them in complex road scenarios?
  • 3. Self driving cars does not take driver into the loop8 Self driving cars are only a part of the answer Will be an artificial robot that drivers as an expert human driver sufficient?
  • 4. Will the driver accept robotic cars? 9 Potential reactions of some customers certain segments of the population will be less likely to embrace autonomous driving (e.g. Car enthusiasts) The “Digital Natives” and “Gen. Now” generations’ identity is less likely to be attached to the “driving experience.”
  • 5. like horse and rider like a driving instructor Driver and machine need to cooperate 10
  • 6. We must understand/know the driver We must know his/her goals and intentions and driving abilities11 A human would drive in distinctly different ways depending on whether his/her goal is. depending on his/her driving skills and experience
  • 7. Like in the riding-horse metaphor understand your intentions understand your driving abilities silently and gently support you when necessary improving your manoeuvre executing autonomously a manoeuvre initiated execute a task when required (eg. take me there) leave you the control but intervene only when driver reaches his/her limits or underestimate a scenario or did not see a better option The ideal support 12
  • 8. Design principles of Co-Driver Definition of Codriver Architecture The simulation theory of cognition and human like sensory motor cycles Co-Driver developed in EU interactIVe project Instantiation of a CoDriver Experimental examples Limitations Impact of Co-Driver technology & research lines Contents 13
  • 9. What a Co-Drivers is? Theoretical background
  • 10. Incomplete definitions Autopilot (emphasis on automatism) Companion Driver (robot: emphasis on human-robot interaction) Virtual User (alter-ego: emphasis on reproducing human skills) Driver Tutor (emphasis on supervising) Natural co-drivers exist (animals and especially horses – H- metaphor, Flemish, Norman, et. al.) What a Co-Driver might be? What is a codriver?15
  • 11. A ”co-driver” is an intelligent agent which: Understands human driver intentions. Produces human-like motor behaviours. has an internal structure that copies the human one 
 (at least in some sense) Interacts accordingly and rectifying mistakenly executed actions. Characteristics of a Co-Driver What a CoDriver is?16
  • 12. Co-Driver must “understand” human driver How would a human drive? This question has multiple answers! Answer depend on some higher level motivations/goals. The co-driver must put himself “in the shoes” of the human driver and understand the goal. It is not an easy task since there are a multiplicity of goals Key Technology #1 17
  • 13. Co-driver must understand the goals of the human driver (humans can do that with other humans, how do they do?) Simulation theory of Cognition is a conceptual framework which essentially states that “thinking is simulation of perception and action”, carried out as covert motor-sensory activity.
 Also understanding of others’ intentions is also a simulation process, carried out via the “mirroring” of observed motor activities of others. Hurley, S.L., 2008. The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading. Behav. Brain Sci. 31, 1–58. Grush, R. 2004. "The Emulation Theory of Representation: Motor Control, Imagery, and Perception." Behavioral and Brain Sciences 27 (3): 377-396. Jeannerod, M. 2001. "Neural Simulation of Action: A Unifying Mechanism for Motor Cognition." NeuroImage 14 (1 II): S103-S109. Understand the goal: theoretical background 18
  • 14. Generative Approach is to generate agent behaviours under a number of alternative hypotheses, which are then tested by comparison with observed behaviours. 
 “multiple simulations” are run in parallel, and the most salient one(s) are selected). 
 The observed behaviour identifies the internal states of the observed agent, and thus the intentions (Haruno, Wolpert, and Kawato 2001; Demiris and Khadhouri 2006). “Like me” framework for understanding of others’ intentions: “others who act like me have internal states like me” (Meltzoff).
 The “like-me” framework essentially states that one agent “stands in the shoes of another”. Simulation/mirroring theories of cognition 19
  • 15. Summing up: agents with similar sensory-motor system and capable of covert motor activities can use their sensory-motor system to “simulate” observed actions, and thus know the intentions of the observed agent “putting the co-driver in the shoes of the real driver” means the co-driver “emulates” the real driver such as in covert motor activities. Objective: link driver behaviour to meaningful goals (understand driver goals/motivations). Simulation/mirroring theories of cognition 20
  • 16. Exemplification of terms used 21 longitudinal control lateral control 1 2 3 4 5 6 7 8 3c 3a 3b Goals Driver behaviours Map of controls alternative hypotesis covert motor activities
  • 17. Co-driver must be able to generate human like motor primitives: Humanlike. Reproduce human sensory-motor strategies (path planning and motor patterns just like a human). D Liu, E. Todorov, Evidence for the Flexible Sensorimotor Strategies Predicted by Optimal Feedback Control, Journal of Neuroscience, 2007 • 27(35):9354 –9368 P. Viviani, T. Flash, Minimum-jerk, two-thirds power law, and isochrony: converging approaches to movement planning. J. Exp. Psychol. 21: 32-53, 1995. “Even if skilled performance on a certain task is not exactly optimal, but is just ‘good enough’, it has been made good enough by processes whose limit is optimality”. Human motor patterns respond to optimality criteria and may be reproduced by Receding Horizon Optimal Control (minimum intervention principle) Key Technology #2 22
  • 18. Experimental data drivers reduce speed in curves to maintain the accuracy in lateral position 23 Acceleration patterns and Two third law In order to improve movement accuracy, while preserving average speed, it is convenient to increase speed in straighter arcs and reduce it along curvier ones. alat = a0 s✓ 1 ⇣ v v0 ⌘2 ◆2 + 2 ⇣ v v0 ⌘2 v = ↵ 3 p 
  • 19. Architecture#1: sense-think-act paradigm It is the traditional architecture of AI also known as computer metaphor.
 The central idea is the existence of an “internal model of the world”.
 Problems: perception “per se”; not scalable (interfaces are choke points); difficult to test; is not what happens in the human brain; not fault tolerant; hard to conceal with motor imagery and covert sensory-motor activity. Key Technology #3: Architecture 24
  • 20. A tutor made of a one-level virtual driver (called “reference maneuver”) was built into SASPENCE and INSAFES (+ evasive maneuver). Limitation: missing motor imagery it was not able to “understand” the driver goal (giving recommendation for a pre-defined goal). Da Lio, Biral et. al, T-ITS, 2010 (2 papers) Sense-think-act success story/1 25
  • 21. (Versailles test track: reference manovre (red) vs. real driver (blue) movie) Sense-think-act success story/2 26
  • 22. Optimal control replanning on 200m horizon every 0.1s Behaviour on a curve Other examples 27
  • 23. Decomposition in parallel behaviours (hierarchical levels of competence). Is based on Perception-Action cycles (no internal model of the world). Multi-goal, multi-sensor (perceptual synthesis), robust, scalable, subsumptive, each level includes sub-level competences. R. A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 14(23), April 1986. Architecture: the behavioural model 28
  • 24. Shared Circuit Model combines the above ideas into an interpretative scheme named with in the behavioural architecture “Thinking” is a simulated interaction. Emulation theory of cognition (Grush, Hurley, Jannerod, et al.) enables imitation, motor imagery, deliberation, mindreading, understanding…. Theory of Cognition by means of emulation 29 PERCEPTION ACTION INVERSE MODEL FORWARD EMULATOR ENVIRONMENT IMITATION SIMULATED INPUT OWNS ACT BODY OTHER'S ACT OUTPUT INHIBITED
  • 25. The general idea that the brain monitors a small number of task parameters y instead of the full state x, generates abstract commands v, and maps them into muscle activations u using motor synergies. Humans are organized in hierarchies of subsumptive behaviors (Brooks, 1986; Michon 1985; Hatakka et al. 2002; Hollnagel and Woods 1999, 2005). Human cognition is “grounded” in which the intelligent agent is seen in the loop with the environment, and perception and action are no longer divided. (Gibson 1986; Varela, Thompson, and Rosch 1991; Thelen and Smith 1994; Van Gelder 1995; Harvey 1996; Clark 1997; Seitz 2000; Beer 2000; Barsalou 2008) The traditional paradigm of AI (the computer metaphor: input- processing-output) suffers symbol grounding problems of the abstract amodal symbol systems. It fails in modeling mutual understanding of agents. Human-like sensory-motor systems 30
  • 26. The ECOM is a subsumptive hierarchical behavioural model of human driving. (Hollnagel and Woods, 1999, 2002, 2005) Successfully used in FP7 DIPLECS. The Extended Control Model (ECOM) 31 Long term goals and psychological states
 (e.g., go home quickly) Short term goals and driving styles. 
 (e.g. overtake “a” instead of “b”). Space-Time Trajectories 
 (i.e., including speed). Vehicle control.
 • 27. It is inspired by this general organization of the sensorimotor system. The low-level controller receives information about the plant state x, and generates an abstract and more compact state representation y(x) that is sent to the high level. The high-level controller monitors task progress, and issues commands v(y) which in general specify how y should change. The job of the low-level controller is to compute energy-efficient controls u(v,x) consistent with v. Thus the low-level controller does not solve a specific subtask (as usually assumed in hierarchical reinforcement learning), but instead performs an instantaneous feedback transformation. This enables the high level to control y unencumbered by the full details of the plant. Hierarchical control scheme 32 (The slide reproduces Figure 1, “Schematic illustration of the proposed framework”, and surrounding text from Journal of Robotic Systems 22(11), 691–710 (2005).)
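A minimal sketch of the two-level feedback hierarchy described above, using a toy longitudinal plant. The function names and the gains are illustrative assumptions, not the original implementation.

```python
# Two-level feedback hierarchy sketch: low level exposes a compact state y(x),
# high level issues commands v(y), low level turns them into controls u(v, x).
import numpy as np

def abstract_state(x):
    """Low level -> high level: compact representation y(x).
    Here only position and speed are exposed; the rest of the plant state is hidden."""
    s, u, a = x
    return np.array([s, u])

def high_level_command(y, y_goal, k=0.5):
    """High level: monitor task progress and specify how y should change (v ~ desired dy/dt)."""
    return k * (y_goal - y)

def low_level_control(v, x, kp=1.0):
    """Low level: instantaneous feedback transformation producing the actual control u(v, x).
    The requested speed change v[1] is treated as a desired acceleration, tracked via jerk."""
    s, u, a = x
    a_des = v[1]
    return kp * (a_des - a)   # jerk command

def step(x, y_goal, dt=0.05):
    """One simulation step of the augmented plant (low level + simple longitudinal model)."""
    y = abstract_state(x)
    v = high_level_command(y, y_goal)
    jx = low_level_control(v, x)
    s, u, a = x
    return np.array([s + u * dt, u + a * dt, a + jx * dt])
```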
  • 28. CoDriver in InteractIVe CRF Demonstrator
 • 29. Design an agent capable of enacting the “like-me” framework, which means that it must have sensory-motor strategies similar to those of a human, and that it must be capable of using them to mirror human behaviour for inference of intentions and for human-machine interaction, for preventive safety, emergency handling and efficient vehicle control. Implemented as: a four-layer, ECOM-like behavioural subsumptive architecture; forward/mirroring mechanisms (by Optimal Control); motor imagery and inference of the driver’s goals. Main goal 34
 • 30. Forward emulators are vehicle dynamics models that neglect high frequencies (not afforded by humans) but retain non-linearities. A predictive model here serves to test the viability of different hypotheses about human driving intentions. Its main requirement is therefore similarity to the model humans themselves use (otherwise co-driver predictions will not match observations even for a correct hypothesis). We assume that slow, if non-linear, phenomena can be human-directed, whereas faster ones cannot (due to human actuation limits and bandwidth). Inverse emulators are minimum-jerk/minimum-time optimal control (OC) plans that link perceptual goals (i.e., desired states) to the actions needed to achieve those goals. Other approaches may be based on machine learning, for example learning either or both of the inverse and forward models. OC has the advantage that it requires knowing only the forward model and the optimality criteria. Building blocks#1 - Emulators 35
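A minimal forward-emulator sketch for the longitudinal case, using the simplified model of the later “Motor primitives” slide (s' = u, u' = a, a' = kp·jx); in the original formulation a(t) already absorbs the non-linear engine/resistance terms. Function names, parameter values and the example are illustrative assumptions.

```python
# Forward emulator sketch: predict future states (s, u, a) for a hypothesised jerk profile.
def forward_emulate(x0, jerk_profile, dt=0.05, kp=1.0):
    s, u, a = x0
    traj = [(s, u, a)]
    for jx in jerk_profile:
        # slow dynamics only: triple-integrator chain driven by jerk
        s, u, a = s + u * dt, u + a * dt, a + kp * jx * dt
        traj.append((s, u, a))
    return traj

# Example: test the viability of a "gentle braking" hypothesis starting from 20 m/s
predicted = forward_emulate((0.0, 20.0, 0.0), jerk_profile=[-0.5] * 40)
```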
 • 31. Motor primitives are parametric instantiations of inverse emulators that achieve specified goals. They are the solutions of inverse models that determine the (optimal) control required to reach a desired state at some future time T. Since there may be several types of final states and optimization criteria, the inversion problem produces a corresponding number of solutions, which we may regard as different, parameterized motor primitives. There are 4 motor primitives: Speed Adaptation (SA), Speed Matching (SM), Lateral Displacement (LD), Lane Alignment (LA). Building blocks#2 – Motor primitives 36
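One possible way to represent such parametric instantiations in code is sketched below; the data structure and field names are my own illustrative assumptions, not the original implementation.

```python
# Sketch: a motor primitive as a parametric instantiation of an inverse emulator.
from dataclasses import dataclass

@dataclass
class MotorPrimitive:
    kind: str          # "SA", "SM", "LD" or "LA"
    goal: dict         # perceptual goal, e.g. {"s_f": 60.0, "u_f": 15.0} for Speed Matching
    w_T: float = 0.1   # time weight of the jerk/time optimal-control criterion

    def solve(self):
        """Placeholder: invert the emulator (optimal control) to obtain the jerk
        profile and the manoeuvre duration T that realise the goal."""
        raise NotImplementedError

speed_matching = MotorPrimitive("SM", {"s_f": 60.0, "u_f": 15.0})
```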
 • 32. Motor primitives 37 Example for longitudinal dynamics #1. Optimal control formulation with some simplifications.
States: $x = [s(t), u(t), a(t)]$
System model:
$\dfrac{d}{dt} s(t) = u(t)$
$\dfrac{d}{dt} u(t) = \underbrace{\tfrac{1}{M_e} f_x(p(t), u(t)) - k_0 - k_v u(t)^2}_{a(t)}$
$\dfrac{d}{dt} a(t) = k_p\, j_x(t)$
Goal function: $J = \displaystyle\int_0^T \left[ j_x(t)^2 + w_T \right] dt$
Boundary conditions: $B(x(0), x(T)) = 0$
Augmented functional:
$J = \displaystyle\int_0^T \Big[\, j_x(t)^2 + w_T + \lambda_1(t)\Big(\tfrac{d}{dt}s(t) - u(t)\Big) + \lambda_2(t)\Big(\tfrac{d}{dt}u(t) - a(t)\Big) + \lambda_3(t)\Big(\tfrac{d}{dt}a(t) - k_p\, j_x(t)\Big) \Big]\, dt$
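To connect this formulation with the optimal control law on the next slide, here is the stationarity step sketched explicitly. This is my own worked step, assuming the augmented functional above with constant $k_p$.

```latex
% Stationarity of the augmented integrand L with respect to the control j_x (sketch):
\frac{\partial L}{\partial j_x} = 2\, j_x(t) - k_p\, \lambda_3(t) = 0
\quad\Longrightarrow\quad
j_x(t) = \frac{k_p\, \lambda_3(t)}{2}
% The Euler-Lagrange equations in s, u and a then yield the co-state equations
% reported on the next slide.
```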
 • 33. Speed matching 38 Example for longitudinal dynamics #2. We define the motor primitive boundary conditions (perceptual goals) for Speed Matching (SM):
$B(x(0), x(T)):\quad s(0) = 0,\; u(0) = u_i,\; a(0) = a_i,\; s(T) = s_f,\; u(T) = u_f,\; a(T) = 0$
After taking the first variation and applying Pontryagin's principle we obtain the optimal control law
$j_x(t) = \dfrac{k_p\, \lambda_3(t)}{2}$
and the co-state equations
$\dfrac{d}{dt}\lambda_1(t) = 0, \qquad \dfrac{d}{dt}\lambda_2(t) + \lambda_1(t) = w_T, \qquad \dfrac{d}{dt}\lambda_3(t) + \lambda_2(t) = 0.$
 • 34. Speed matching 39 Example for longitudinal dynamics #2. Integrating the state and co-state equations gives the optimal trajectory in closed form, as polynomials in the running time $\zeta$ and the (still unknown) duration $T$:
$a(\zeta) = \Big(\tfrac{3\zeta^2}{T^2} - \tfrac{4\zeta}{T} + 1\Big) a_i + \Big(-\tfrac{6\zeta^2}{T^3} + \tfrac{6\zeta}{T^2}\Big) u_f + \Big(\tfrac{6\zeta^2}{T^3} - \tfrac{6\zeta}{T^2}\Big) u_i + \Big(\tfrac{\zeta^3}{12} - \tfrac{T\zeta^2}{8} + \tfrac{T^2\zeta}{24}\Big) w_T$
$u(\zeta) = \Big(\tfrac{\zeta^3}{T^2} - \tfrac{2\zeta^2}{T} + \zeta\Big) a_i + \Big(-\tfrac{2\zeta^3}{T^3} + \tfrac{3\zeta^2}{T^2}\Big) u_f + \Big(\tfrac{2\zeta^3}{T^3} - \tfrac{3\zeta^2}{T^2} + 1\Big) u_i + \Big(\tfrac{\zeta^4}{48} - \tfrac{T\zeta^3}{24} + \tfrac{T^2\zeta^2}{48}\Big) w_T$
$s(\zeta) = \Big(\tfrac{\zeta^4}{4T^2} - \tfrac{2\zeta^3}{3T} + \tfrac{\zeta^2}{2}\Big) a_i + \Big(-\tfrac{\zeta^4}{2T^3} + \tfrac{\zeta^3}{T^2}\Big) u_f + \Big(\tfrac{\zeta^4}{2T^3} - \tfrac{\zeta^3}{T^2} + \zeta\Big) u_i + \Big(\tfrac{\zeta^5}{240} - \tfrac{T\zeta^4}{96} + \tfrac{T^2\zeta^3}{144}\Big) w_T$
with the corresponding optimal control $j_x(\zeta) = k_p\,\lambda_3(\zeta)/2$ obtained by differentiating $a(\zeta)$. Given the distance $s_f$, we can then solve for the minimum time $T$ by imposing $s(T) = s_f$: a non-linear equation in $T$ to be solved numerically.
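A minimal numerical sketch of the “solve for T” step. The closed-form s(ζ) coded below is my reading of the slide's formula (treat it as an assumption); the bisection-based root finder is a generic choice, not necessarily the one used in the original work.

```python
# Sketch: find the manoeuvre duration T such that s(T) = s_f, by bisection.
def s_of(zeta, T, a_i, u_i, u_f, w_T):
    """Closed-form position along the Speed Matching primitive (assumed form)."""
    c_a  = zeta**4 / (4 * T**2) - 2 * zeta**3 / (3 * T) + zeta**2 / 2
    c_uf = -zeta**4 / (2 * T**3) + zeta**3 / T**2
    c_ui = zeta**4 / (2 * T**3) - zeta**3 / T**2 + zeta
    c_w  = zeta**5 / 240 - T * zeta**4 / 96 + T**2 * zeta**3 / 144
    return c_a * a_i + c_uf * u_f + c_ui * u_i + c_w * w_T

def minimum_time(s_f, a_i, u_i, u_f, w_T, T_lo=0.1, T_hi=60.0, tol=1e-6):
    """Solve s(T) = s_f for T numerically (simple bisection)."""
    f = lambda T: s_of(T, T, a_i, u_i, u_f, w_T) - s_f
    lo, hi = T_lo, T_hi
    if f(lo) * f(hi) > 0:
        raise ValueError("bracket does not contain a root")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: slow from u_i = 20 m/s to u_f = 15 m/s over s_f = 120 m with w_T = 0.1
T = minimum_time(120.0, 0.0, 20.0, 15.0, 0.1)
```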
 • 35. Speed matching 40 Example for longitudinal dynamics #2, for different values of w_T: note the different values of the initial jerk.
 • 36. A space-time trajectory deals with either longitudinal or lateral control to manage one single motor task. There are 6 functions:
FollowObject (FO): approach a preceding obstacle with a desired time gap TH (time headway).
ClearObject (CO): clear a frontal object on either side of the host vehicle.
FreeFlow (FF): produce an SA primitive by guessing a target speed uT.
LaneFollow (LF), LandMarks (LM), Curves (CU).
Building blocks#3 - Trajectories 41
 • 37. Example: Clear Object 42. It is a combination of motor primitives. Obstacle lateral movement model: the object is assumed to align with the road within some time T* (heuristically T* ≈ 2.5 s), according to the same lateral forward model used for the ego vehicle, i.e. nO = nO0 + vn·t/2 for t = 0…T*; if the resulting maximum lateral displacement exceeds one lane, the object is instead treated as crossing our road, with uniform transverse motion. The manoeuvre then: first computes the encounter time T° using the longitudinal motion models; then produces an LD primitive (i.e., parameters nT and T) such that a specified clearance c0 is obtained at T°. (The slide reproduces an excerpt and Fig. 5, “Evasive maneuvers”, from the related paper.)
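A sketch of the ClearObject composition just described: estimate the encounter time from the longitudinal motion, estimate the object's lateral position at that time, then request an LD primitive giving the clearance c0. All function and parameter names (and the constant-speed encounter estimate) are illustrative assumptions.

```python
# ClearObject sketch: combine a longitudinal encounter-time estimate with an LD primitive.
def clear_object(gap, u_host, u_obj, n_host, n_obj, v_n_obj, c0=1.0, T_star=2.5):
    """gap: longitudinal distance to the object [m]; n_*: lateral positions [m];
    v_n_obj: object lateral speed [m/s]; c0: required clearance [m]."""
    closing_speed = u_host - u_obj
    if closing_speed <= 0:
        return None                      # no encounter: nothing to clear
    T_enc = gap / closing_speed          # simple constant-speed encounter-time estimate

    # Object lateral position at the encounter, using the model n = n0 + v_n * t / 2
    # (object assumed to re-align with the road within roughly T* ~ 2.5 s)
    n_obj_at_enc = n_obj + v_n_obj * min(T_enc, T_star) / 2

    # Target lateral offset n_T of the host so that the clearance c0 holds at T_enc
    n_T = n_obj_at_enc + c0 if n_host >= n_obj_at_enc else n_obj_at_enc - c0
    return {"primitive": "LD", "n_T": n_T, "T": T_enc}
```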
 • 38. Combines level 2 motor chunks into maneuvers. Makes arrays of hypotheses, including incorrect ones, that will be used for inference of intentions. Each hypothesis i is scored by the cost
$J_i = w\,\|j - \hat{j}\|^2 + w_p\,\|j_p - \hat{j}_p\|^2 + w_n J_n$, where $w_n J_n$ is the steering cost.
Building blocks#4 – Navigation hypotheses 43
 • 39. Building blocks#5a – Inference of intentions 44. Compares the co-driver motor output of the generated hypotheses with the human control (generative approach), using a saliency approach:
$J_i = w\,\|j - \hat{j}\|^2 + w_p\,\|j_p - \hat{j}_p\|^2 + w_n J_n$, with $w_n J_n$ the steering cost.
(Figure: tree of longitudinal and lateral control hypotheses, numbered 1–8 with sub-branches 3a, 3b, 3c.)
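A sketch of this generative inference step: every navigation hypothesis is mirrored into a predicted control, and the hypothesis whose control best matches the observed human control (lowest cost J_i) is taken as the inferred intention. Weights, field names and the example values are illustrative assumptions.

```python
# Intention inference sketch: score each hypothesis against the observed human control.
def hypothesis_cost(h, observed, w_j=1.0, w_p=1.0, w_n=0.2):
    """J_i = w_j*(j - j_hat)^2 + w_p*(j_p - j_hat_p)^2 + w_n*J_n (steering cost)."""
    return (w_j * (observed["jx"] - h["jx"]) ** 2
            + w_p * (observed["jp"] - h["jp"]) ** 2
            + w_n * h["steering_cost"])

def infer_intention(hypotheses, observed):
    """Return the most salient hypothesis, i.e. the one with the lowest cost."""
    return min(hypotheses, key=lambda h: hypothesis_cost(h, observed))

# Example: two hypotheses, "follow the leader" vs "clear it (overtake)"
hypotheses = [
    {"name": "FollowObject", "jx": -0.4, "jp": 0.0, "steering_cost": 0.0},
    {"name": "ClearObject",  "jx":  0.1, "jp": 0.3, "steering_cost": 0.5},
]
best = infer_intention(hypotheses, {"jx": 0.05, "jp": 0.25})
```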
 • 40. Building blocks#5a – Interaction 45. Interactions are application-dependent (the CRF CS is an assistive system). The intention is known: if it is correct, the system does nothing; if it is incorrect, the system knows two ways to rectify it. The system suggests the longitudinal correction, except when the lateral correction stays within the lane.
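The interaction rule just stated can be condensed into a tiny decision function; this is only an illustrative restatement of the rule, not the original code.

```python
# Interaction rule sketch: stay silent if the inferred intention is correct; otherwise
# prefer the longitudinal correction, unless the lateral correction stays within the lane.
def choose_correction(intention_is_correct: bool, lateral_correction_in_lane: bool):
    if intention_is_correct:
        return None
    return "lateral" if lateral_correction_in_lane else "longitudinal"
```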
 • 41. Inference of driver intentions (a model identification problem). Implements motor imagery, imitation and mindreading (model identification) for all meaningful goals. Top level (goal/motivations level) 46
  • 42. User tests Examples and discussion
 • 43. The test route was a 53 km loop from CRF headquarters in Orbassano (Turin, Italy) to Pinerolo and Piossasco and back, which included urban arterials, extra-urban roads, motorways, roundabouts, ramps, and intersections. A total of 35 hours of logs was collected from 24 test users, including sensor data, co-driver output, and images from a front camera. Vehicle demonstrator & User Test Route 48
  • 44. How does it work? Experimental results49 Example #1: Car following
  • 45. How does it work? Experimental results50 Example #2: Pedestrian Standing still Crossing
  • 46. How does it work? Experimental results51 Example #3: Cut-in manoeuvre
 • 47. When the two agents disagree, assessing the reason requires manual inspection of the recordings. Reasons for mismatch may be: a poorly-optimized co-driver (frequent during development); perception noise; driver error; or a simple difference of “opinions” between the two agents (see below). 52 For example, Fig. 8(a) shows a situation, 1.1 s before the event depicted in Fig. 7, when the co-driver detected a risk for maneuver 1 for the first time, while the driver was accepting a slightly short time headway (0.9 s) for a while. (Fig. 8: (a, top) difference in longitudinal acceleration between the two agents; (b, center) distribution of acceleration difference; (c, bottom) distribution of lateral position difference.)
 • 48. Anticipation of overtake 53. These events represent a different form of inference of intentions, pertaining to a higher cognitive level. (Fig. 12: anticipation of overtake, anticipation of lane change and actual lane change for cases 2, 6, 10, 14 and 18, left to right. First row: overtake intention, detected as a second-level state transition (FollowObject to ClearObject behavior). Second row: lane change detected. Third row: actual lane crossing.)
 • 49. Anticipation of overtake 54. The prediction of overtaking is a form of “predictive tracking of dynamic actions” [123]. Fig. 9 compares driver and co-driver over a 10-minute course: (a, top) longitudinal dynamics; (b, bottom) lateral dynamics. The trajectory passes close to lane edges during transitions (e.g., taking the first exit ramp at cycle ~700). The numbered light-green bands mark the lane changes; the interval between when the LD motor primitive predicts the lane crossing and the actual crossing of the lane is shaded. There are 21 changes correctly predicted, with anticipation ranging from 1.1 s to 2.4 s (median 1.6 s). There are two false crossing predictions, labeled a and b, which happen in the non-motorway section when the trajectory passes close to edges; Fig. 10 shows the camera view to demonstrate how demanding this situation actually is. Despite the false prediction, the absolute value of the lateral prediction error is limited (a fraction of the vehicle width). Non-motorway segments are characterized by often-irregular lane geometry, with splitting and merging lanes, often with missing marking traits (Fig. 10 b), or else the camera failing to recognize them. Fig. 8(c) shows the 0.0025 quantile curves: 99.5% of the LD motor primitives depart from the real trajectory by less than one quarter of a lane in 2 s, less than half a lane in 2.5 s and less than one full lane in 5 s. The points where larger deviations happen (the prediction error is plotted in Fig. 9) are typically at inversions of the heading angle and in complex geometries (a and b). In Fig. 9(b), the dashed blue vertical lines before lane changes 2, 6, 10, 14 and 18 mark the point where the co-driver switches from the FollowObject (second-level) behavior to the ClearObject behavior: this is the point where the agent realizes that the human intention may be to overtake. For example, Fig. 11 shows the control output space 4.1 s before lane change 18, showing how the driver is going to choose the overtake maneuver. (Fig. 10: complex geometry at false alarm points. Fig. 11: detection of the intention to overtake; the camera view for this is given by the top right frame of Fig. 12.)
 • 50. Incorrect interpretation 55. The sensory system plays a fundamental role and must be accurate. (Fig. 10: complex geometry at the false alarm points; the two false crossing predictions occur in the non-motorway section when the trajectory passes close to lane edges.)
 • 51. Missing behaviors (incomplete PA architecture), e.g., the overtake maneuver. Complex behaviors may often be decomposed into simpler ones (e.g., overtake -> lane change + free flow + lane change); in this case the system understands each single phase but not the complex maneuver as a whole (it still works!). Inaccurate behaviors: a co-driver with non-human behaviors fails to understand the intentions (e.g., forgetting to model understeer). Missing hypotheses: the co-driver uses the closest hypothesis and may fail. The plausibility approach, together with behavioral discretization, increases robustness at the expense of the granularity of intention resolution. Behaviours emerge from basic principles (e.g., adaptive lane keeping arises naturally from the use of the second manoeuvre). Discussion – Inference of intentions 56
 • 52. The system has been tested on a ~50 km road path with ordinary drivers (24 drivers, twice each, for a total of 35 hours). False alarms were few (2–4 per trip), mostly due to noise in the perception system. Very few alarms can be ascribed to incomplete, missing or imperfectly designed co-driver behaviors (most of these being due to a mismatch between driving styles, so not critical). The collected data will help refine the motor primitives and behaviors built into the system. The hierarchical architecture is easily scalable, maintainable and testable. Major limitation: the system does not yet work on intersecting roads (it has a poor understanding of the intentions of intersecting vehicles). Discussion – User tests 57
  • 53. Technology impact on future applications
 • 54. A co-driver that understands driver goals is a “friend” that:
Enables adaptive automation (offering the appropriate support type and level at any time, as a human peer would do).
Improves the execution of manoeuvres (substituting machine execution for human execution while preserving the goal – just like chassis control, but at the navigation-cognitive level).
Navigates by hints (just like a horse), and largely autonomously, until new goals become manifest from the human.
Takes over/supervises driver control (under certain conditions).
Is understandable to other drivers.
Provides a unified framework for smart (safe, green, comfortable) functions.
Peer-to-peer human-robot interactions 59
 • 55. Virtual drivers enable cooperative swarm behaviors. They exchange each other's goals (i.e., their drivers' ECOM states). The goals of other agents can be inferred even when they are not cooperative (they, too, have some goal). Safety as an emergent behavior: each agent adapts its own plans to the others, producing a collective emerging swarm behavior. Green cooperative driving as an emergent behavior (produced by the energy-efficiency criterion in the mirroring mechanisms). Cooperative systems 60
 • 56. Novel situations happen occasionally. Drivers are often not prepared to handle rare events, because they have no prior experience. Motor imagery can be used to analyze such events by simulation, and to extend the subsumptive architecture with novel PA loops through synthetic learning. Accidents are the rarest of such events. Co-drivers (even if they do not survive) will collect very detailed accident data that could later be used for synthetic learning. Accidents (and near misses) of some co-drivers will teach something to the others, which will become more and more capable of reacting properly to rare events. Cognitive co-drivers will thus be able to self-extend their application domain. Synthetic learning for novel situations and rare events 61
 • 57. Co-drivers will gradually collect more and more experience. This can be shared: improved driver profiles (personas), improved interactions, improved handling of critical situations, special driver classes (the elderly, and people with disabilities), naturalistic data collection, and more. Learning interaction 62