Talk from /dev/summer
Brief overview of Simultaneous Localisation and Mapping (SLAM), including a brief introduction to localisation methods. Relates these methods to autonomous vehicles and touches on ethical concerns.
1. The Joy of SLAM
Samantha Ahern - @2standandstare
Centre for Computational Intelligence, De Montfort University
2. SLAM Presentation Plan
Simultaneous Localisation and Mapping (SLAM) is the core element of
navigation systems for mobile robots and vehicles. In this talk I will discuss
how SLAM works,
the main implementation methods and
examples of their applications.
I will discuss my own work in implementing a SLAM system on a small
autonomous robot and discuss the parallels with autonomous vehicles.
3. Where’s Johnny?
An autonomous agent needs to know:
About its environment
Either from a pre-existing map, or by creating a map as it explores
Where it is in relation to its environment, i.e. its pose (x, y, θ)
4. Types of SLAM
Feature-based SLAM
Pose-based SLAM
Appearance-based SLAM
Variants - these include Active SLAM and Multi-robot SLAM.
E. Zamora and W. Yu, ‘Recent advances on simultaneous localization and mapping for mobile robots’, IETE Tech. Rev.
Inst. Electron. Telecommun. Eng. India, vol. 30, no. 6, pp. 490–496, 2013.
11. Kalman Filter Updates
X = state estimate
P = uncertainty covariance
F = state transition matrix
U = motion vector
Z = measurement
H = measurement function
R = measurement noise
I = identity matrix
Position (prediction) update:
X’ = F . X + U
P’ = F . P . Fᵀ
Measurement update:
Y = Z – H . X
S = H . P . Hᵀ + R
K = P . Hᵀ . S⁻¹
X’ = X + K . Y
P’ = ( I – K . H ) . P
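The update equations above can be sketched directly in code. Below is a minimal NumPy version of one predict/update cycle, applied to a 1-D constant-velocity model (state = position and velocity) where only position is measured; the specific matrices and noise values are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def kalman_step(x, P, F, u, z, H, R):
    """One predict/update cycle, matching the slide's equations."""
    # Position (prediction) update: x' = F x + u, P' = F P F^T
    x = F @ x + u
    P = F @ P @ F.T
    # Measurement update
    y = z - H @ x                      # innovation Y = Z - H X
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative 1-D example: state = [position, velocity], dt = 1
x = np.array([0.0, 0.0])
P = np.eye(2) * 1000.0                 # high initial uncertainty
F = np.array([[1.0, 1.0], [0.0, 1.0]])
u = np.zeros(2)
H = np.array([[1.0, 0.0]])             # we only measure position
R = np.array([[1.0]])

for z in [np.array([1.0]), np.array([2.0]), np.array([3.0])]:
    x, P = kalman_step(x, P, F, u, z, H, R)

print(x)  # velocity estimate converges towards 1.0
```

Note that although only position is ever measured, the filter infers the velocity through the coupling in F, which is the "infer velocity from measurement updates" point made later in the notes.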
12. Particle Filters
Easiest of the three filters to program
Estimates continuous states
Has a multi-modal distribution
All calculations are approximate
Level of efficiency is unclear
13. PF: Core concepts
Particles consist of:
x position
y position
Direction (heading)
N is the number of particles
Each particle represents a possible position of the robot.
For each particle, the expected distance to each landmark is calculated.
The mismatch between expected and actual measurements determines the particle's weight.
Higher-weighted particles are more likely to survive resampling.
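The weight-and-resample loop described above can be sketched as follows. The particle state is (x, y, θ) as on the slide; the landmark positions, sensor noise level, particle count and true robot position are all illustrative assumptions.

```python
import math, random

random.seed(0)

LANDMARKS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # hypothetical beacons
TRUE_POS = (3.0, 4.0)                                # hypothetical true position
NOISE = 0.5                                          # assumed sensor noise (std dev)

def sense(pos):
    # Noisy distance from the robot to each landmark
    return [math.dist(pos, lm) + random.gauss(0, NOISE) for lm in LANDMARKS]

def weight(particle, measurement):
    # Mismatch between expected and actual distances sets the weight
    w = 1.0
    for lm, z in zip(LANDMARKS, measurement):
        expected = math.dist(particle[:2], lm)
        w *= math.exp(-((expected - z) ** 2) / (2 * NOISE ** 2))
    return w

N = 1000
particles = [(random.uniform(0, 10), random.uniform(0, 10),
              random.uniform(0, 2 * math.pi)) for _ in range(N)]

for _ in range(10):                                  # repeated sense/resample cycles
    z = sense(TRUE_POS)
    weights = [weight(p, z) for p in particles]
    # Higher-weighted particles are more likely to survive resampling
    particles = random.choices(particles, weights=weights, k=N)

est_x = sum(p[0] for p in particles) / N
est_y = sum(p[1] for p in particles) / N
print(est_x, est_y)   # close to the true position (3, 4)
```

A full SLAM implementation would also apply a (noisy) motion update to each particle between measurements; it is omitted here to keep the sense/weight/resample cycle visible.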
19. Occupancy Grids
Occupancy grids use a random field representation: each cell in the grid stores a probabilistic estimate of the cell's state.
The probabilistic estimate is obtained by integrating and interpreting sensor data from multiple sensors of the same type, or from different, complementary sensor types.
Occupancy grids can incorporate positional uncertainty into the mapping process.
http://www.cs.cmu.edu/~motionplanning/papers/sbp_papers/integrated4/elfes_occup_grids.pdf
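A common way to implement the per-cell probabilistic estimate in Elfes-style grids is a log-odds update, which lets repeated sensor readings be integrated by simple addition. The grid size, increment values and cell indices below are illustrative assumptions.

```python
import numpy as np

# Hypothetical 10x10 grid; each cell stores the log-odds of being occupied.
# Log-odds 0.0 corresponds to p = 0.5, i.e. "unknown".
grid = np.zeros((10, 10))

L_OCC = 0.85    # assumed log-odds increment for an "occupied" reading
L_FREE = -0.4   # assumed log-odds decrement for a "free" reading

def update(grid, cell, occupied):
    """Integrate one sensor reading into the grid (inverse sensor model)."""
    grid[cell] += L_OCC if occupied else L_FREE

def probability(grid, cell):
    """Convert stored log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + np.exp(-grid[cell]))

# Three consistent "occupied" readings for cell (2, 3),
# e.g. repeated sonar returns from the same obstacle
for _ in range(3):
    update(grid, (2, 3), occupied=True)

print(round(probability(grid, (2, 3)), 3))  # belief has risen well above 0.5
```

Working in log-odds rather than raw probabilities avoids repeated renormalisation and keeps each sensor integration to a single addition per cell.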
20. Open RatSLAM
Inspired by the rodent hippocampal complex
Hybrid method combining characteristics of:
Feature based
Grid based
Topological SLAM techniques.
Consists of four nodes:
Pose Cell Network
Local View Cells
Experience Map
Visual Odometry (for image only datasets).
Developed by Queensland University of Technology
24. DELPHI Car technology
The Delphi car is fitted with:
Radar:
Long Range Radar x 6
360° Radar x 4
LiDAR:
4-layer LiDAR x 6
Cameras:
Forward camera
HD camera
Infra-red camera
Plus:
GPS Antennae
Wheel odometers
Where am I?
What do I need to ‘see’?
26. Autonomous vehicles – main difficulties
Noisy data
Incompleteness
Dynamicity
Discrete measurements in real-time
Key blocks: Perception, Decision, Control
27. Will / can it do the right thing?
Hybrid agent architecture:
Control System – what it does (low level)
Rational Agent – why it does it (high level)
Together these form the Autonomous System.
28. Verifying the Rational Agent
External Interactions → Stochastic Analysis (probabilistic, infinite)
Feedback Control → Differential Equations (deterministic, infinite; handled via finite abstraction)
Decision Making → Discrete Logic (non-deterministic, finite)
29. Essential elements – autonomous / semi-autonomous vehicles
Sensors and perception
Computing platforms & control systems
Electrical architecture & network management
Vehicle connectivity
User experience
Off-board (cloud) support & services
Functional safety & cyber security
30. Conclusions
93% of road accidents are caused by human error
Perception and decision making take place under uncertainty
Bayesian estimators are used for localisation and mapping
Interaction between driver and autonomous / semi-autonomous
vehicle needs to be managed
Interaction between autonomous, semi-autonomous and manual
vehicles needs to be managed
Same concepts used by autonomous drones
31. References
https://www.udacity.com/course/progress#!/c-cs373
http://www.sartre-project.eu/en/Sidor/default.aspx
http://www.delphi.com/delphi-drive
J. Borenstein, H. R. Everett, L. Feng, and D. Wehe, ‘Mobile robot positioning:
Sensors and techniques’, J. Robot. Syst., vol. 14, no. 4, pp. 231–249, 1997.
A. Elfes, ‘Using occupancy grids for mobile robot perception and navigation’,
Computer, vol. 22, no. 6, pp. 46–57, Jun. 1989.
E. Zamora and W. Yu, ‘Recent advances on simultaneous localization and mapping
for mobile robots’, IETE Tech. Rev. Inst. Electron. Telecommun. Eng. India, vol.
30, no. 6, pp. 490–496, 2013.
D. Ball, S. Heath, J. Wiles, G. Wyeth, P. Corke, and M. Milford, ‘OpenRatSLAM: an
open source brain-based SLAM system’, Auton. Robots, vol. 34, no. 3, pp. 149–176,
2013.
R. Smith, M. Self, and P. Cheeseman, ‘Estimating uncertain spatial relationships in
robotics’, in 1987 IEEE International Conference on Robotics and Automation.
Proceedings, 1987, vol. 4, pp. 850–850.
Feature-based SLAM:
This is the most popular approach to solving the SLAM problem. It uses predefined landmarks and an environment model to estimate the robot's current state (or path) and the map [1].
Pose-Based:
Only the robot's state trajectory is estimated, without landmark positions. The robot path is estimated using constraints imposed by landmark positions or by the raw laser (or visual) data.
Appearance-based:
It uses neither metric information nor landmark positions, and the robot path is not tracked in a metric sense. Instead, visual images or spatial information are used to recognise places. These appearance-based techniques are commonly used to complement a metric SLAM method by detecting loop closures [7].
Active SLAM derives a control law for robot navigation in order to efficiently achieve a desired accuracy of the robot's location and the map [10]. Multi-robot SLAM uses multiple robots to cover a large environment [11].
Belief: Probability
Sense: Product followed by normalization
Move: Convolution (addition)
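The sense (product followed by normalisation) and move (convolution) steps above make up the histogram-filter cycle. A minimal sketch, using a hypothetical five-cell cyclic world and an assumed sensor model:

```python
# Hypothetical 1-D cyclic world of 5 cells, two of which are 'red'
world = ['green', 'red', 'red', 'green', 'green']
p = [0.2] * 5                      # uniform prior belief

P_HIT, P_MISS = 0.6, 0.2           # assumed sensor model

def sense(p, z):
    # Product with the sensor likelihood, followed by normalisation
    q = [pi * (P_HIT if world[i] == z else P_MISS) for i, pi in enumerate(p)]
    s = sum(q)
    return [qi / s for qi in q]

def move(p, u):
    # Exact cyclic shift by u cells: convolution with a delta at offset u
    return [p[(i - u) % len(p)] for i in range(len(p))]

p = sense(p, 'red')                # belief concentrates on the red cells
p = move(p, 1)                     # belief shifts right with the robot
print([round(pi, 3) for pi in p])  # [0.111, 0.111, 0.333, 0.333, 0.111]
```

A noisy motion model would spread each shifted value over neighbouring cells (a true convolution); the exact shift is used here for brevity.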
Histogram Filter:
Discrete state estimation; very rarely used
Kalman Filter / Extended Kalman filter:
Used in feature-based SLAM
Particle filter:
Used in grid-based and pose-based SLAM
Multivariate Gaussians can be used to infer velocity from measurement updates
For my dissertation project I am implementing a version in NXC using sonar sensors, translated from RobotC
Uses a detailed map for driving in urban areas, but builds grids on highways
Where am I? -> Localisation
What do I need to ‘see’? -> Vision, perception and mapping
The project aims to encourage a step change in personal transport usage by developing environmental road trains called platoons.
Systems will be developed to facilitate the safe adoption of road trains on unmodified public highways, interacting with other traffic.
A scheme will be developed whereby a lead vehicle with a professional driver takes responsibility for a platoon. Following vehicles enter a semi-autonomous control mode that allows their drivers to do other things that would normally be prohibited for safety reasons; for example, operating a phone, reading a book or watching a movie.
Other research projects are working on fully autonomous versions – the lead vehicle implements full SLAM, with following vehicles performing localisation only and communicating with the other vehicles?
Who is in control? When should control be handed back? Should return to human control be refused?
Ethics? The system can order its options based on ethical priorities:
Save humans >> save animals >> save property
EI (External Interactions): sensors / actuators
FC (Feedback Control): control system etc.
DM (Decision Making): rational agent