Autonomous vehicles use a variety of sensors, including ultrasonic sensors, RADAR, LIDAR, image sensors, GPS, and wheel speed sensors, to navigate without human input. They rely on sensor integration and on technologies such as computer vision, V2X communication, and electronic control units to process sensor data. While self-driving cars offer benefits such as improved safety and efficiency, challenges remain: unpredictable human drivers, bad weather, and the need for detailed digital maps. Fully autonomous vehicles may become common by 2040 if these issues can be addressed.
10. Sensor 2: RADAR sensor
High-tech radar systems are capable of extracting useful information from signals with very high noise levels.
Images: RADAR sensor; components of a RADAR
16. Sensor 4: Image Sensor
Images: image sensor; components of an image sensor
An image sensor is the heart of a camera. It determines image size, resolution, low-light performance, depth of field, dynamic range, lens options, and even the camera’s physical size.
18. Sensor 5: GPS Sensor
GPS keeps the car on its intended route with an accuracy of about 30 centimeters.
With GPS covering the car’s macro-level location, smaller on-board cameras can recognize details such as red lights and stop signs.
Images: GPS module; components of a GPS system
19. Sensor 5: GPS Sensor – Working principle
Working of GNSS
(Global Navigation Satellite System)
GPS system in action
21. In automotive electronics, Electronic Control Unit (ECU) is a generic term for any embedded system that controls one or more of the electrical systems or subsystems in a transport vehicle.
Images: Electronic Control Unit (ECU); ECU block diagram
22. Electronic Control Unit (ECU) – Types of ECUs
• Door Control Unit
• Brake Control Unit
• Transmission Control Module
• Adaptive Cruise Control
24. Sensor 6: Wheel Speed Sensor
Wheel speed sensors provide input to a number of different automotive systems
including the anti-lock brake system and electronic stability control.
Images: wheel speed sensor (ABS); ABS inside a car
25. Sensor 6: Wheel Speed Sensor – Hall effect
Sensor in action
Hall effect
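The Hall-effect sensor outputs a voltage pulse each time a tooth of the wheel’s tone ring passes it, so pulse frequency maps directly to wheel (and hence vehicle) speed. A minimal sketch of that conversion; the tooth count and tire diameter below are illustrative assumptions, not values from any particular vehicle:

```python
import math

def wheel_speed_kmh(pulse_hz: float, teeth_per_rev: int, tire_diameter_m: float) -> float:
    """Vehicle speed from Hall-sensor pulse frequency.

    Each tooth of the tone ring produces one pulse, so pulses per second
    divided by teeth per revolution gives wheel revolutions per second.
    """
    revs_per_s = pulse_hz / teeth_per_rev
    # revolutions/s * circumference = m/s; * 3.6 converts to km/h
    return revs_per_s * math.pi * tire_diameter_m * 3.6

# e.g. a 48-tooth ring on a 0.6 m tire producing 400 Hz pulses is ~56.5 km/h
```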
26. The HUD is the outcome of GPS- and compass-based data about the vehicle’s position, combined with the emergence of computer vision technology that can recognize objects on and around the road and display navigational information as transparent colored paths.
Heads Up display (HUD)
27. Heads Up display (HUD) – Working
A HUD can be fitted in place of the windscreen, giving the driver a view of the road plus the required information.
28. Heads Up display (HUD) – Applications
Synthetic Vision Systems
Automobiles
Military aircraft
31. V2X Communication
Cars will talk to other cars, exchanging data and alerting drivers to potential collisions. They’ll also talk to sensors on road signs, stoplights, and bus stops to get traffic updates and rerouting alerts.
32. V2X Communication types
As vehicles fall out of the signal range and drop out of the network,
other vehicles can join in, connecting vehicles to one another so that a
mobile Internet is created.
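As a rough illustration of the kind of data such vehicles might exchange, here is a hypothetical sketch of a minimal V2V safety message and a crude proximity check. The field names and the threshold are invented for illustration and are not taken from any V2X standard:

```python
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    """A toy V2V broadcast: who I am, where I am, how I'm moving."""
    vehicle_id: str
    lat: float
    lon: float
    speed_m_s: float
    heading_deg: float

def collision_warning(a: SafetyMessage, b: SafetyMessage, min_gap_deg: float = 0.0005) -> bool:
    """Crude proximity check on raw coordinates; a real system would
    project to meters and predict trajectories, not compare degrees."""
    return abs(a.lat - b.lat) < min_gap_deg and abs(a.lon - b.lon) < min_gap_deg

near = collision_warning(
    SafetyMessage("veh-1", 37.00010, -122.00010, 13.0, 90.0),
    SafetyMessage("veh-2", 37.00012, -122.00013, 12.0, 270.0),
)
```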
35. Features of a Self Driving Car
• Adaptive Cruise Control
• Emergency Braking
• Self Parking
• Traffic Jam Assistant
• Lane Keeping
36. Challenges
• Unpredictable humans: Autonomous vehicles will have to
deal with drivers who speed, pass even when there’s a double
yellow line and drive the wrong way on a one-way street.
• Bad weather: Snow, rain, fog and other types of weather make
driving difficult for humans, and it’s no different for driverless cars,
which stay in their lanes by using cameras that track lines on the
pavement. But they can’t do that if the road has a coating of snow.
• Digital Mapping: Very few roads have been mapped to this
degree. Moreover, maps can become out of date as road
conditions change. There may be construction or detours. An
intersection with a four-way stop might get a traffic light or become
a roundabout.
37. Conclusion
In conclusion, having addressed the mechanics of the driverless car as well as its benefits and potential issues, it is quite interesting to imagine what the world will actually look like by the year 2040.
The sensors that are an integral part of an autonomous vehicle will become more sophisticated and will likely gain additional functionality in the near future.
It will be fascinating to see the effects this creation has on the states in which it is legalized, as well as on the people who choose to experiment with it.
Editor’s Notes
Slide – 1: This presentation and the subsequent slides will focus on Autonomous vehicles or Self driving cars.
The major topics that will be covered in the upcoming slides will elaborate on:
What are Autonomous Vehicles?
Design complexity of existing autonomous cars for example Google’s Self driving car and DARPA grand challenge for autonomous vehicles.
Roadmap to Automation.
Discuss key features such as driver-in-the-loop operation, safety controls, and a user-friendly interactive environment.
Sensor and V2V technologies used to build automated connected vehicle environment.
Discuss in detail sophisticated sensor technologies such as HMI systems, image sensors, sonar sensors, HUDs, RADAR, LIDAR, cruise systems, navigation, and 360-degree awareness.
Talk about the implementation of algorithms such as edge detection, motion detection, and object detection.
Brief overview of the functioning of Automated connected environment.
Issues/challenges faced in implementation of sensor technology/autonomous vehicle.
Talk about Silicon Valley companies working on this technology.
Discuss the future scope for development and advancement of autonomous vehicles in Silicon Valley.
Slide – 2: Introduction
What are Autonomous Vehicles?
An autonomous car, or self-driving car, is a vehicle that is capable of sensing its environment and navigating without human input.
Slide – 3: Google’s Self Driving Car
Where am I?
The car processes both the maps and sensors information to determine where it is in the world. Our car knows what street it's on and which lane it's in.
What’s around me?
Sensors help detect objects all around us. The software classifies objects based on their size, shape and movement pattern. It detects a cyclist and a pedestrian in this case.
What will happen next?
The software predicts what all the objects around us might do next. It predicts that the cyclist will ride by and the pedestrian will cross the street.
What should I do?
The software then chooses a safe speed and trajectory for the car. Our car nudges away from the cyclist, then slows down to yield to the pedestrian.
References:
1. https://www.google.com/selfdrivingcar/how/
Slide – 4: Google’s Self Driving Car (continued…)
Google’s self driving car is packed with all the necessary features. They are designed to navigate safely through city streets.
Sensors – It has sensors designed to detect objects as far as two football fields away in all directions, including pedestrians, cyclists and vehicles—or even fluttering plastic shopping bags and rogue birds. The software processes all the information to help the car safely navigate the road without getting tired or distracted.
Electric Batteries – Google’s self driving car has in-built electric batteries that are used to power up the vehicle.
Rounded shape – The rounded top ensures that the field of view for the sensor is maximized.
Interior – The complete interior of the car is made to provide maximum comfort to the user.
Computer – The system designed is very powerful and acts on the data given by the sensor values and implements various algorithms such as obstacle avoidance, object detection etc.
References:
https://www.google.com/selfdrivingcar/how/
https://www.google.com/selfdrivingcar/
Slide – 5: DARPA Urban Challenge
Recently, there has been a lot of interest in autonomous vehicles, and so a number of challenging competitions have emerged where students and universities demonstrate their technical capabilities.
Perhaps the best-known driverless car competition is the DARPA Grand Challenge, whose 2007 edition is commonly known as the DARPA Urban Challenge.
Major companies are already in the business of building autonomous cars; General Motors expects them to be running on the road by 2020.
References:
1. https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2007)
Slide – 6: Sensor Technology
Earlier, we saw that the Google Driverless Car project involves developing technology for driverless cars.
The system combines information gathered from Google Street View with artificial intelligence software that combines input from video cameras inside the car, a LIDAR sensor on top of the vehicle, radar sensors on the front of the vehicle and a GPS position sensor attached to one of the rear wheels that helps locate the car's position on the map.
This slide presents an overview of the different types of sensor technologies used. Subsequent slides delve deeper into the explanation of these sensor technologies.
References:
http://www.slideshare.net/MrinalSharma7/ultrasonic-based-distance-measurement-system
Image 1 credits (ADAS/Autonomous Vehicle Sensing Systems) : http://blog.leddartech.com/autonomous-driving-banking-on-lidar-sensing/
Slide – 7: Ultrasonic Sensor
What is Ultrasound?
Ultrasound is an oscillating sound pressure wave with a frequency greater than the upper limit of the human hearing range. (The term sonic is applied to ultrasound waves of very high amplitude.) Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.
What is Ultrasonic?
Ultrasonic sensing is an application of ultrasound. Ultrasonic sensors work on a principle similar to sonar: they evaluate attributes of a target by interpreting the echoes of sound waves. They generate high-frequency sound waves and evaluate the echo that is received back by the sensor.
What are Ultrasonic transducers?
An Ultrasonic transducer is a device that converts energy into ultrasound, or sound waves above the normal range of human hearing.
References:
https://en.wikipedia.org/wiki/Ultrasonic_transducer
http://www.slideshare.net/MrinalSharma7/ultrasonic-based-distance-measurement-system
Image 1 credits (Front View and Rear view) : http://forums.parallax.com/discussion/138922/puff-the-magic-boe-bot
Slide – 8: Ultrasonic Sensor – Piezoelectric effect
What is Piezoelectric effect?
Piezoelectric effect refers to applying an electric field to a crystal, which causes realignment of the internal dipole structure resulting in the crystal to lengthen or contract. The process converts electrical energy into
kinetic or mechanical energy. The reverse of the piezoelectric effect converts kinetic or mechanical energy, due to crystal deformation, into electrical energy.
More on Piezoelectric effect…
An ultrasound wave is generated when an electric field is applied to an array of piezoelectric crystals located on the transducer surface. Electrical stimulation causes mechanical distortion of the crystals resulting in
vibration and the production of sound waves (i.e. mechanical energy). The conversion of electrical to mechanical (sound) energy is called the converse piezoelectric effect (Gabriel Lippmann, 1881).
Each piezoelectric crystal produces an ultrasound wave. The summation of all waves generated by the piezoelectric crystals forms the ultrasound beam. Ultrasound waves are generated in pulses (intermittent trains of
pressure waves) and each pulse commonly consists of 2 or 3 sound cycles of the same frequency.
How does sensing occur?
The transducer converts an induced electric voltage into a mechanical signal that vibrates rapidly at a very high frequency. The waves are transmitted, reflect off materials, and are read back by a receiver, which measures how long the original wave took to return.
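The round-trip timing described above converts to distance with one line of arithmetic: distance = speed of sound × time / 2, halved because the pulse travels out and back. A minimal sketch, assuming sound travels at roughly 343 m/s in air at room temperature:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at ~20 degrees C (assumed)

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to the target, halved because the pulse travels out and back."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# A round trip of ~5.8 ms corresponds to a target roughly 1 m away.
```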
References:
http://www.slideshare.net/MrinalSharma7/ultrasonic-based-distance-measurement-system
https://radiopaedia.org/articles/piezoelectric-effect
Image 1 credits (Piezoelectric effect): https://www.honda-el.co.jp/en/ceramics/e_Piezoceramics.html
Image 2 credits (Sensor in action): Google search – piezoelectric effect ultrasound.
Slide – 9: Ultrasonic Sensor - Applications
Applications:
1. Medical imaging – Ultrasonic imaging uses frequencies of 2 megahertz and higher; the shorter wavelength allows resolution of small internal details in structures and tissues.
2. Autonomous vehicles – Ultrasonic sensors are extensively used in the automotive industry, especially in self-driving cars, as distance sensors. They are mounted on various sides of the car to detect objects very near the car, providing parking assistance, collision warning, and lane-departure warning, among other functions.
3. Ultrasonic cleaning – Ultrasound along with a cleaning solvent can be used to clean jewelry, surgical tools, coins, etc.
References:
https://en.wikipedia.org/wiki/Ultrasound
http://www.slideshare.net/MrinalSharma7/ultrasonic-based-distance-measurement-system
Image 1a credits (Medical Imaging) : http://www.brooksidepress.org/Products/Military_OBGYN/Ultrasound/basic_ultrasound.htm
Image 1b credits (Medical Imaging) : http://weeklyultrasounds.com/21-week-ultrasound/
Image 2 credits (Autonomous vehicle) : http://www.justcarnews.com/is-the-fully-autonomous-car-just-around-the-corner.html
Image 3 credits (Ultrasonic cleaning) : http://www.productionmachining.com/articles/ultrasonic-cleaning-for-large-lots-of-small-parts
Slide – 10: RADAR Sensor
Traditional RADARs are used to detect dangerous objects in the vehicle’s path that are more than 100 meters away.
What is RADAR?
RADAR stands for Radio Detection and Ranging, as indicated by its name. It is an object-detection system that uses radio waves to determine the range, angle, or velocity of objects.
Components of a RADAR:
RADARs in their basic form have four main components:
1. A transmitter, which creates an electromagnetic energy pulse in the radio or microwave domain.
2. A transmit/receive switch, which tells the antenna when to transmit and when to receive pulses.
3. An antenna, to send these pulses out into the atmosphere and receive the reflected pulses back.
4. A receiver, which detects, amplifies, and transforms the received signals.
References:
https://en.wikipedia.org/wiki/Radar
http://www.bom.gov.au/australia/radar/about/what_is_radar.shtml
http://www.slideshare.net/asertseminar/autonomous-car-32512833
Image 1 credits (Radar sensor) : http://products.bosch-mobility-solutions.com/en/de/_technik/component/component_1_5251.html
Image 2 credits (components of a RADAR) : http://spectrum.ieee.org/transportation/advanced-cars/longdistance-car-radar
Slide – 11: RADAR Sensor – Doppler effect
What is Doppler effect?
The Doppler effect (or the Doppler shift) is the change in frequency or wavelength of a wave (or other periodic event) for an observer moving relative to its source.
More on Doppler effect…
When the source of the waves is moving towards the observer, each successive wave crest is emitted from a position closer to the observer than the previous wave. Therefore, each wave takes slightly less time to
reach the observer than the previous wave. Hence, the time between the arrival of successive wave crests at the observer is reduced, causing an increase in the frequency. While they are travelling, the distance
between successive wave fronts is reduced, so the waves "bunch together".
Conversely, if the source of waves is moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased,
reducing the frequency. The distance between successive wave fronts is then increased, so the waves "spread out”.
How does RADAR work?
A radar’s transmitter sends out electromagnetic waves called radio waves in predetermined directions. When these signals come into contact with an object, they are usually reflected or scattered in many directions. The radar signals that are reflected back towards the transmitter are the desirable ones that make radar work. If the object is moving either toward or away from the transmitter, there is a slight corresponding change in the frequency of the radio waves: the Doppler shift.
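For a radar, the echo is shifted twice, once on the way out and once on the way back, so for a target at radial speed v the observed shift is approximately 2·v·f0/c, where f0 is the carrier frequency and c the speed of light. A small sketch; the 77 GHz carrier is an illustrative value typical of automotive radar, not a quote from any slide:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(speed_m_s: float, carrier_hz: float) -> float:
    """Two-way Doppler shift of a radar echo; positive for an approaching target."""
    return 2.0 * speed_m_s * carrier_hz / C

def radial_speed_m_s(shift_hz: float, carrier_hz: float) -> float:
    """Invert the shift to recover the target's radial speed."""
    return shift_hz * C / (2.0 * carrier_hz)

# A target closing at 30 m/s on a 77 GHz radar shifts the echo by ~15.4 kHz.
shift = doppler_shift_hz(30.0, 77e9)
```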
References:
https://en.wikipedia.org/wiki/Doppler_effect
http://www.explainthatstuff.com/radar.html
Image 1 credits (Doppler effect) : http://www.schoolphysics.co.uk/age14-16/Wave%20properties/text/Doppler_effect/index.html
Image 2 credits (Sensor in action): http://www.consumerreports.org/car-safety/collision-avoidance-systems-are-changing-the-look-of-car-safety/
Slide – 12: RADAR Sensor – Applications
Applications:
The information provided by radar includes the bearing and range (and therefore position) of the object from the radar scanner. It is thus used in many different fields where the need for such positioning is crucial.
1. The first use of radar was for military purposes: to locate air, ground and sea targets.
2. They find great application in the Autonomous vehicles or self driving cars as they provide information about nearby surroundings.
3. Geologists use specialized ground-penetrating radars to map the composition of Earth's crust.
References:
https://en.wikipedia.org/wiki/Radar
Image 1 credits (Military Radars) : Google image search – Military Radar.
Image 2 credits (Autonomous vehicle) : Google image search – RADAR sensor in autonomous vehicles.
Image 3 credits (Ground Penetrating Radars) : http://www.terraprobeenvironmental.com/private-utility-locating.htm
Slide – 13: LIDAR Sensor
Autonomous vehicles use LIDAR for obstacle detection and avoidance to navigate safely through environments, using rotating laser beams.
What is LIDAR?
LIDAR stands for Light Detection and Ranging (also LADAR) as indicated by its name. It is an optical remote sensing technology that can measure the distance to, or other properties of a target by illuminating the target
with light, often using pulses from a laser.
Components of a LIDAR:
LIDARs in their basic form have four main components:
1. Laser: Laser is an acronym for light amplification by stimulated emission of radiation. The laser beam is used to measure the distance to the first object in its path. The pulse rate is 50,000 to 200,000 pulses per second.
2. Scanner: It measures the angle at which each pulse was fired. How fast images can be developed is also affected by the speed at which they are scanned.
3. Photodetectors: These are sensors of light or other electromagnetic energy. A photo detector has a p–n junction that converts light photons into current. Two main photodetector technologies are used in lidars: solid
state photodetectors, such as silicon avalanche photodiodes, or photomultipliers. The sensitivity of the receiver is another parameter that has to be balanced in a lidar design.
4. GPS (Navigation system): Lidar sensors that are mounted on mobile platforms such as airplanes or satellites require instrumentation to determine the absolute position and orientation of the sensor. Such devices
generally include a Global Positioning System receiver and an Inertial Measurement Unit (IMU). GPS records the x, y, z location of the scanner, while the IMU measures the angular rotation of the scanner.
References:
https://en.wikipedia.org/wiki/Lidar
http://www.slideshare.net/RichaArora4/lidar-14127416
http://web.pdx.edu/~jduh/courses/geog493f12/Week04.pdf
http://auto.howstuffworks.com/radar-detector2.htm
Image 1 credits (LIDAR system): http://www.hizook.com/blog/2009/01/04/velodyne-hdl-64e-laser-rangefinder-lidar-pseudo-disassembled
Image 2 credits (Components of a LIDAR): http://www.mogi.bme.hu/TAMOP/jarmurendszerek_iranyitasa_angol/math-ch03.html
Slide – 14: LIDAR Sensor – Types of LIDAR
Types of LIDAR:
1. Airborne LIDAR: The combination of LIDAR scanning technology with an aerial deployment platform allows for extraordinary efficiency and speed for gathering accurate spatial data to support asset management
needs for numerous industries, such as power generation and transmission, oil and gas, pipeline, rail and transportation, architecture and engineering, and others. It is a proven approach to creating fast and accurate
terrain models for applications in many types of industries.
2. Terrestrial LIDAR: Terrestrial applications of lidar (also terrestrial laser scanning) happen on the Earth's surface and can be both stationary or mobile. Mobile lidar (also mobile laser scanning) is when two or more
scanners are attached to a moving vehicle to collect data along a path. These scanners are almost always paired with other kinds of equipment, including GNSS receivers and IMUs.
How does LIDAR work?
LIDAR uses ultraviolet, visible, or infrared light to image objects and can be used with a wide range of targets, including non-metallic objects, rocks, rain, and chemical compounds. The LIDAR system clocks the time it takes a burst of infrared light to reach an object, bounce off, and return to the starting point. By multiplying this time by the speed of light and halving the result (the light travels out and back), the LIDAR system determines how far away the object is. LIDAR does not measure a change in wave frequency. Instead, it sends out many infrared laser bursts in a short period of time to collect multiple distances. By comparing these different distance samples, the system can calculate how fast the car is moving.
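The time-of-flight ranging and the multi-sample speed estimation described above can be sketched in a few lines; the sample values in the comments are purely illustrative:

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Range from a pulse's round-trip time, halved for the out-and-back path."""
    return C * round_trip_s / 2.0

def closing_speed_m_s(d1_m: float, d2_m: float, dt_s: float) -> float:
    """Speed from two range samples taken dt_s apart; positive means approaching."""
    return (d1_m - d2_m) / dt_s

# Two samples 0.1 s apart, 50.0 m then 49.0 m: the target is closing at 10 m/s.
```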
References:
https://lasers.llnl.gov/education/how_lasers_work
http://airborneimaginginc.com/airborne-imaging-lidar-services/airborne-lidar/
https://www.sam.biz/about/technology/airborne-lidar
https://en.wikipedia.org/wiki/Lidar#Terrestrial_lidar
Image 1 credits (Airborne LIDAR): http://www.aerialsurveys.co.nz/products/smartphotos/lidarals/
Image 2 credits (Terrestrial LIDAR) : http://pubs.usgs.gov/of/2008/1384/
Image 3 credits (LIDAR in action): http://www.webpages.uidaho.edu/vakanski/How%20Driverless%20Cars%20Work.html
Slide – 15: LIDAR Sensor – Applications
Applications:
Due to characteristics like high accuracy, fast acquisition and processing, minimal human dependence, day/night operability, canopy penetration, etc., LIDAR has numerous applications. A few of them are listed below:
1. LIDAR finds great application in other fields such as agriculture, where it can help farmers determine which areas of their fields need costly fertilizer applied to achieve high yield.
2. LIDARs are extensively used in autonomous vehicles to obtain information about nearby objects, vehicles, etc.
3. LIDAR speed guns are used by the police to measure vehicle speeds for speed-limit enforcement purposes.
References:
https://en.wikipedia.org/wiki/Lidar
Image 1 credits (LIDAR in agriculture) : https://eros.usgs.gov/doi-remote-sensing-activities/2013/songbird-habitat-modeling-using-lidar-and-national-agriculture-imagery-program-color-infrared
Image 2 credits (Autonomous vehicle) : http://www.pcworld.com/article/2046041/what-its-like-to-ride-in-a-self-driving-car.html
Image 3 credits (LIDAR speed guns) : http://www.thomasnet.com/articles/instruments-controls/how-radar-detectors-work
Slide – 17: Image Sensor
Autonomous vehicles make full use of image processing. They use a monochrome video camera (along with a set of other cameras) and image processing to extract the lane and road-edge markings from the image.
What is an image sensor?
An image sensor or imaging sensor is a sensor that detects and conveys the information that constitutes an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off
objects) into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation.
Types of cameras:
1. Infrared camera (or Thermal imaging camera): An infrared camera is a non-contact device that detects infrared energy (heat) and converts it into an electronic signal, which is then processed to produce a thermal
image on a video monitor and perform temperature calculations. Heat sensed by an infrared camera can be very precisely quantified, or measured, allowing you to not only monitor thermal performance, but also
identify and evaluate the relative severity of heat-related problems.
2. Stereo camera: A stereo camera is a type of camera with two or more lenses with a separate image sensor or film frame for each lens. This allows the camera to simulate human binocular vision, and therefore gives
it the ability to capture three-dimensional images, a process known as stereo photography.
References:
http://www.techhive.com/article/2052159/demystifying-digital-camera-sensors-once-and-for-all.html
http://www.slideshare.net/asertseminar/autonomous-car-32512833
http://www.flir.com/corporate/display/?id=41523
https://en.wikipedia.org/wiki/Stereo_camera
Image 1 credits (Image sensor) : http://leicarumors.com/2013/12/09/cmosis-the-maker-of-the-leica-m-240-sensor-to-be-acquired-by-ta-associates.aspx/
Image 2 credits (Components of an image sensor): http://liveforce.ca/news/security-camera-sensor-cmos-ccd/
Slide – 18: Image Sensor – Computer Vision
Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and in general, deal with the extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions.
What is Computer Vision?
Computer vision (image understanding) is a discipline that studies how to reconstruct, interpret and understand a 3D scene from its 2D images in terms of the properties of the structures present in the scene. The
ultimate goal of computer vision is to model, replicate, and more importantly exceed human vision using computer software and hardware at different levels. It needs knowledge in computer science, electrical
engineering, mathematics, physiology, biology, and cognitive science.
Typical functions found in many computer vision systems are:
1. Image acquisition: A digital image is produced by one or several image sensors and depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence.
2. Pre-processing: Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data in order to assure that it satisfies
certain assumptions implied by the method.
3. Feature extraction: Image features at various levels of complexity are extracted from the image data. Examples are line/edge features and localized interest points such as corners, blobs, or ridge points.
4. Detection/Segmentation: At some point in the processing a decision is made about which image points or regions of the image are relevant for further processing. Examples are selection of a specific set of interest
points, Segmentation of one or multiple image regions which contain a specific object of interest.
5. High-level processing: At this step the input is typically a small set of data, for example a set of points or an image region which is assumed to contain a specific object. Examples include image recognition, image registration, and verification that the data satisfies model-based and application-specific assumptions.
6. Decision making: Making the final decision required for the application.
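The feature-extraction step (edges in particular) can be illustrated with a classic Sobel filter. This is a minimal pure-Python sketch run on a tiny synthetic image; a real vision pipeline would use an optimized library such as OpenCV rather than nested loops:

```python
# Sobel kernels for horizontal (x) and vertical (y) intensity gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude per pixel; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge (dark left half, bright right half): the filter
# responds strongly along the boundary columns and is zero on flat regions.
img = [[0, 0, 100, 100]] * 4
edges = sobel_magnitude(img)
```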
References:
https://www.ecse.rpi.edu/Homepages/qji/CV/3dvision_intro.pdf
https://en.wikipedia.org/wiki/Computer_vision
Image 1 credits (Block diagram for computer vision) : http://what-when-how.com/introduction-to-video-and-image-processing/introduction-to-video-and-image-processing/
Image 2 credits (computer vision output 1): http://www.cc.gatech.edu/~hays/
Image 3 credits (computer vision output 2): https://www.wired.com/2012/01/ff_autonomouscars/
Slide – 19: GPS Sensor
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes:
A pseudorandom code (sequence of ones and zeros) that is known to the receiver. By time-aligning a receiver-generated version and the receiver-measured version of the code, the time of arrival (TOA) of a defined point in the code sequence, called an epoch, can be found in the receiver clock time scale.
A message that includes the time of transmission (TOT) of the code epoch (in GPS system time scale) and the satellite position at that time.
What is GPS?
GPS is an acronym for Global Positioning System. It is a global navigation satellite system that provides geo-location and time information to a GPS receiver in all weather conditions, anywhere on or near the Earth
where there is an unobstructed line of sight to four or more GPS satellites.
Components of a GPS:
1. Space segment: The space segment (SS) is composed of the orbiting GPS satellites, or Space Vehicles (SV) in GPS parlance, that transmit signals from space. The GPS constellation is NAVSTAR (Navigation Satellite Timing and Ranging); GLONASS is Russia's counterpart system. NAVSTAR is composed of 24 satellites, arrayed in 6 orbital planes inclined 55 degrees to the equator, each with a 12-hour period.
2. Control segment: The control segment is composed of:
a) A master control station (MCS),
b) An alternate master control station,
c) Four dedicated ground antennas, and
d) Six dedicated monitor stations.
3. User segment: GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock.
References:
http://www.slideshare.net/asertseminar/autonomous-car-32512833
http://www.aboutcivil.org/components-of-gps-working-mechanism.html
http://www.slideshare.net/gokulsaud/global-positioning-system-gps-51942739
https://en.wikipedia.org/wiki/Global_Positioning_System
Image 1 credits (GPS module) : https://pixhawk.org/peripherals/sensors/gps
Image 2 credits (Components of a GPS system): http://www.slideshare.net/gokulsaud/global-positioning-system-gps-51942739
Slide – 20: GPS Sensor – Working principle
How does GPS work?
Each of the 24 satellites emits signals to receivers that determine their location or range by computing the difference between the time that a signal is sent and the time it is received. The signal contains data that a
receiver uses to compute the locations of the satellites and to make other adjustments needed for accurate positioning. The receiver must account for propagation delays, or decreases in the signal's speed caused by
the ionosphere and the troposphere.
GPS satellites carry atomic clocks that provide extremely accurate time. The time information is placed in the codes broadcast by the satellite so that a receiver can continuously determine the time the signal was
broadcast. With information about the ranges to three satellites and the location of the satellite when the signal was sent, the receiver can compute its own three-dimensional position. An atomic clock synchronized to
GPS is required in order to compute ranges from these three signals.
However, by taking a measurement from a fourth satellite, the receiver avoids the need for an atomic clock. Thus, the receiver uses four satellites to compute latitude, longitude, altitude, and time.
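The ranging idea described above can be illustrated in two dimensions, where three range circles pin down a position. This is only a toy analogue under the assumption of exact, error-free ranges; real GPS works in 3D and uses the fourth satellite to solve for the receiver clock bias as well:

```python
def trilaterate_2d(p1, d1, p2, d2, p3, d3):
    """Intersect three range circles given as (anchor point, distance) pairs.

    Subtracting the circle equations pairwise cancels the quadratic terms,
    leaving a 2x2 linear system that Cramer's rule solves directly.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Three "satellites" at known positions, with exact ranges to a receiver at (3, 4).
pos = trilaterate_2d((0.0, 0.0), 5.0, (10.0, 0.0), 65**0.5, (0.0, 10.0), 45**0.5)
```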
References:
http://www.aboutcivil.org/components-of-gps-working-mechanism.html
https://en.wikipedia.org/wiki/Global_Positioning_System
Image 1 credits (Working of GNSS) : http://www.ray-id.com/2014/05/kalau-kamu-pilih-teknik-geodesi-1.html
Image 2 credits (GPS system in action): http://gpsnavigationsite.com/gps-navigation-systems/
Slide – 21: GPS Sensor – Applications
Applications:
GPS is considered a dual-use technology, meaning it has significant military and civilian applications. A few of them are listed below:
1. Geo-tagging – applies location coordinates to digital objects such as photographs and other documents for purposes such as creating map overlays with devices.
2. Automotive navigation systems – GPS technology integrated with computers and mobile communications technology in automotive navigation systems.
3. Fleet tracking – GPS is also used to identify, locate and maintain contact reports with one or more fleet vehicles in real-time.
References:
https://en.wikipedia.org/wiki/Global_Positioning_System
Image 1 credits (Geo-tagging) : http://www.jetphotosoft.com/web/?s=jpstudio_2
Image 2 credits (Navigation): http://thinairwireless.com/gps-technology-features/
Image 3 credits (Fleet tracking): http://www.africanpowerlifting.com/category/gps-tracking/
Slide – 22: Electronic Control Unit (ECU)
An ECU is made up of hardware and software (firmware). The hardware consists of various electronic components on a PCB, the most important of which is a microcontroller chip along with an EPROM or a flash memory chip.
What is an Electronic Control Unit (ECU)?
In the automobile industry an electronic control unit (ECU) is an embedded electronic device, essentially a digital computer, that reads signals coming from sensors placed at various parts and in different components of the
car and, depending on this information, controls various important units, e.g. the engine and automated operations within the car, and also keeps a check on the performance of some key components used in the car.
References:
http://www.ni.com/white-paper/3312/en/
Image 1 credits (Electronic Control Unit (ECU)): http://www.computerhope.com/jargon/e/ecu.htm
Image 2 credits (Block diagram of an ECU): https://product.tdk.com/info/en/techlibrary/archives/techjournal/vol05_mlcc/contents02.html
Slide – 23: Electronic Control Unit – Types of ECUs
Types of ECUs:
1. Door Control Unit: The Door Control Unit (DCU) is responsible for controlling and monitoring various electronic accessories in a vehicle's door. A DCU associated with the driver's door has some additional functionality.
These additional features are the result of complex functions, such as locking, the driver door switch pad, and child lock switches, which are associated with the driver's door. In most cases the driver door module acts as a
master and the others act as slaves in the communication protocols.
2. Brake Control Unit: This is an ECU used in the ABS (anti-lock braking system) module of a car. Such units were introduced in the early 1970s to improve vehicle braking irrespective of road and weather
conditions, though only recently have they gained widespread popularity. The EBCM regulates the braking system on the basis of five inputs that it receives.
a. The Brake: This input gives the status of the brake pedal, i.e. deflection or assertion. This information is acquired in a digital or analog format.
b. The 4 W.D: This input gives the status, in digital format, of whether the vehicle is in 4-wheel-drive mode.
c. The Ignition: This input registers if the ignition key is in place, and if the engine is running or not.
d. Vehicle Speed: This input gives the information about the speed of the vehicle.
e. Wheel speed: In a typical application this will represent a set of 4 input signals that convey the information concerning the speed of each wheel. This information is used to derive all
necessary information for the control algorithm. This has been discussed in greater detail in the next slides.
3. Transmission Control module: A transmission control module is a mechanism that regulates a vehicle's automatic transmission by processing electrical signals. Sensors electronically send information to the
transmission control module, and this information is used to calculate gear shifting. The transmission control module helps the vehicle's transmission work efficiently and dependably.
4. Autonomous Cruise Control Unit: Autonomous cruise control (ACC), also called adaptive cruise control, radar cruise control, or traffic-aware cruise control, is a cruise control system for road vehicles that
automatically adjusts the vehicle speed to maintain a safe distance from vehicles ahead. Control is imposed based on information from on-board sensors. These systems use either a radar or a laser sensor
setup, allowing the vehicle to slow when approaching another vehicle ahead and accelerate again to the preset speed when traffic allows. ACC technology is widely regarded as a key component of future
generations of intelligent cars. Its impact on driver safety is matched by its effect on road capacity, since it adjusts the distance between vehicles according to conditions.
There are many other types of ECUs such as Airbag control system (ACS) systems, Electronic Stability Control (ESC) systems, Powertrain control module, Engine Control Module etc. but are beyond the scope of this
presentation.
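The ACC behaviour described above (slow when the gap shrinks, return to the preset speed when traffic allows) can be sketched with a simple constant time-gap rule. The time gap, the gain, and the speeds below are illustrative assumptions, not production calibration values.

```python
# Sketch of a gap-keeping rule for adaptive cruise control.

def acc_target_speed(preset_speed, gap_m, own_speed, time_gap_s=2.0, k=0.5):
    """Return a commanded speed (m/s). Speeds in m/s, gap in metres."""
    safe_gap = time_gap_s * own_speed          # constant time-gap policy
    if gap_m >= safe_gap:
        return preset_speed                    # free road: cruise at preset
    # gap too small: reduce speed in proportion to the gap error
    reduction = k * (safe_gap - gap_m)
    return max(0.0, own_speed - reduction)

# At 25 m/s with a 100 m gap the safe gap is 50 m, so the car cruises
# at its 30 m/s preset; with only a 30 m gap it slows down instead.
```

A production controller layers acceleration limits, filtering, and braking coordination on top of this basic idea, but the core decision is the same comparison of measured gap against a speed-dependent safe gap.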
Components of an ECU
1. Power Supply – digital and analog (power for analog sensors)
2. MPU – microprocessor and memory (usually Flash and RAM)
3. Communications Link – (e.g. CAN bus)
4. Discrete Inputs – On/Off Switch type inputs
5. Frequency Inputs – encoder type signals (e.g. crank or vehicle speed)
6. Analog Inputs – feedback signals from sensors
7. Switch Outputs – On/Off Switch type outputs
8. PWM Outputs – variable frequency and duty cycle (e.g. injector or ignition)
9. Frequency Outputs – constant duty cycle (e.g. stepper motor – idle speed control)
References:
https://en.wikipedia.org/wiki/Electronic_control_unit
http://www.ni.com/white-paper/3312/en/#toc2
https://en.wikipedia.org/wiki/Door_control_unit
https://en.wikipedia.org/wiki/Autonomous_cruise_control_system
Image 1 credits (Braking Control Unit): http://motoress.com/ride/moto-savvy/understanding-motorcycle-abs/
Image 2 credits (Transmission Control module): https://www.prlog.org/12228552-gm-makes-transmission-control-module-repair-easier-than-ever-with-release-of-programming-tools.html
Image 3 credits (Door Control Unit): http://www.tradeteda.org/en/enterprise/detail.asp?id=342
Image 4 credits (Adaptive Cruise Control): http://www.caricos.com/cars/a/audi/2014_audi_rs6_avant/images/77.html
Slide – 24: Electronic Control Unit – Working principle
How does Electronic Control Unit work?
The ECU uses closed-loop control, a control scheme that monitors the output of a system to control the inputs to a system, managing the emissions and fuel economy of the engine (as well as host of other parameters).
Gathering data from dozens of different sensors, the ECU performs millions of calculations each second, including looking up values in tables, calculating the result of long equations to decide on the best spark timing or
determining how long the fuel injector is open.
A modern ECU might contain a 32-bit, 40-MHz processor, which may not sound fast compared to the processor in a typical PC, but the processor in a car runs much more efficient code. The code in
an average ECU takes up less than 1 megabyte of memory. By comparison, a PC easily holds 2 gigabytes of programs, 2,000 times the amount in an ECU.
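The closed-loop scheme described above can be sketched with a toy fuel-trim loop: the ECU nudges the injector pulse width until the measured air-fuel ratio matches the stoichiometric target. The gain and the "engine" model below are illustrative assumptions, not real calibration data.

```python
# Sketch of closed-loop fuel control: output (measured AFR) feeds back
# to adjust the input (injector pulse width).

STOICH_AFR = 14.7  # target air-fuel ratio for petrol

def ecu_step(pulse_width_ms, measured_afr, gain=0.02):
    """One loop iteration: lean mixture (AFR high) -> longer pulse,
    more fuel; rich mixture (AFR low) -> shorter pulse, less fuel."""
    error = measured_afr - STOICH_AFR
    return pulse_width_ms + gain * error

def engine_afr(pulse_width_ms, airflow=44.1):
    """Toy plant: AFR falls as pulse width (fuel per charge) rises."""
    return airflow / pulse_width_ms

pw = 2.0
for _ in range(200):
    pw = ecu_step(pw, engine_afr(pw))
# pw settles where engine_afr(pw) is close to 14.7, i.e. near 3.0 ms
```

The real ECU runs many such loops at once (spark timing, idle speed, boost pressure, etc.), each monitoring an output to correct an input, which is exactly the closed-loop principle stated above.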
What is CAN Bus?
CAN (Controller Area Network), with its deterministic response, is suitable for powertrain control systems such as the engine and transmission. The CAN bus enables communication between ECUs through the exchange of messages,
similar to a LIN sub-network, but there is no master ECU. All ECUs compete with each other, and access is granted to the ECU sending the message with the highest priority.
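The priority rule just described can be sketched directly: in CAN, a lower numeric identifier means higher priority, and the frame with the lowest ID wins arbitration. The IDs and payloads below are illustrative, not from any real vehicle's message database.

```python
# Sketch of CAN bus arbitration: the lowest identifier wins the bus.

def arbitrate(pending):
    """pending: list of (can_id, payload) frames ready to transmit.
    Returns the frame that wins arbitration this cycle."""
    # Dominant bits (0) win on the wire, so a lower ID = higher priority.
    return min(pending, key=lambda frame: frame[0])

frames = [(0x244, b"wheel speed"),
          (0x0A0, b"brake command"),
          (0x3E8, b"door status")]
winner = arbitrate(frames)   # the brake command, ID 0x0A0, wins
```

On the real bus this happens bit by bit: each node transmits its identifier most-significant bit first while reading the wire, and any node that sends a recessive bit but reads back a dominant bit drops out and retries later.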
References:
https://en.wikipedia.org/wiki/Electronic_control_unit
http://www.slideshare.net/AnkulGupta2/electronic-control-unitecu
Image 1 credits: https://www.linkedin.com/pulse/information-security-connected-vehicle-shashank-dhaneshwar
Slide – 25: Wheel Speed Sensor
Wheel speed sensors measure the road-wheel speed and direction of rotation.
What is the wheel speed sensor?
A wheel speed sensor, also called an "ABS sensor," is part of the anti-lock brake system (ABS). It is located at each wheel (near the brake rotors for the front wheels and in the rear end housing for the rear wheels). The job of
the wheel speed sensor is to constantly monitor and report the rotational speed of each wheel to the ABS control module. ABS is a safety system that prevents your car from skidding or sliding when you apply the brakes.
When the brake is applied, the ABS control module reads the speed data from the speed sensors and commands the correct pressure at each wheel to prevent any sliding or skidding (wheels locking up).
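A hedged sketch of that decision: estimate each wheel's slip against the vehicle-speed reference and release brake pressure on wheels that are about to lock. The 20% slip threshold is an illustrative assumption, not a production calibration.

```python
# Sketch of the per-wheel ABS decision based on slip ratio.

def abs_commands(vehicle_speed, wheel_speeds, slip_threshold=0.2):
    """Return 'hold' or 'release' per wheel, using
    slip = (vehicle_speed - wheel_speed) / vehicle_speed."""
    commands = []
    for w in wheel_speeds:
        slip = (vehicle_speed - w) / vehicle_speed if vehicle_speed > 0 else 0.0
        commands.append("release" if slip > slip_threshold else "hold")
    return commands

# One wheel turning at 8 m/s while the car does 20 m/s (60% slip) is
# locking up, so pressure is released on that wheel only:
print(abs_commands(20.0, [19.5, 19.8, 8.0, 19.2]))
# -> ['hold', 'hold', 'release', 'hold']
```

A real controller also pulses the pressure back on and estimates the vehicle-speed reference itself (for example from the fastest undriven wheel), but the slip comparison is the heart of it.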
References:
https://www.yourmechanic.com/services/wheel-speed-sensor-replacement
Image 1 credits (Wheel speed sensor (ABS)) : http://www.autoparts.loc8apart.com/nav/make-model-part/Mitsubishi/Veryca/ABS.php
Image 2 credits (ABS inside a car): http://repairpal.com/abs-wheel-speed-sensor
Slide – 26: Wheel Speed Sensor – Hall effect
Working together, electricity and magnetism can make things move.
What is Hall effect?
The Hall effect is the production of a voltage difference (the Hall voltage) across an electrical conductor, transverse to an electric current in the conductor and a magnetic field perpendicular to the current.
More on Hall effect…
Current consists of the movement of many small charge carriers, typically electrons, holes, ions (see Electromigration) or all three. When a magnetic field is present, these charges experience a force, called the Lorentz
force. When such a magnetic field is absent, the charges follow approximately straight, 'line of sight' paths between collisions with impurities, phonons, etc.
However, when a magnetic field with a perpendicular component is applied, their paths between collisions are curved so that moving charges accumulate on one face of the material. This leaves equal and opposite
charges exposed on the other face, where there is a scarcity of mobile charges. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the
'line of sight' path and the applied magnetic field. The separation of charge establishes an electric field that opposes the migration of further charge, so a steady electrical potential is established for as long as the
charge is flowing.
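The steady-state balance described above yields the standard relation V_H = I*B / (n*q*t), with current I, magnetic flux density B, carrier density n, carrier charge q, and conductor thickness t. A quick calculation with illustrative numbers, roughly typical of a doped semiconductor element:

```python
# Sketch: the Hall voltage from the steady-state force balance.

ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def hall_voltage(current_a, field_t, carrier_density_m3, thickness_m,
                 charge_c=ELEMENTARY_CHARGE):
    """V_H = I*B / (n*q*t)."""
    return (current_a * field_t) / (carrier_density_m3 * charge_c * thickness_m)

# 10 mA through a 0.1 mm-thick element with n = 1e21 m^-3 in a 0.5 T field:
v_h = hall_voltage(10e-3, 0.5, 1e21, 0.1e-3)   # about 0.31 V
```

The low carrier density is why practical Hall sensors use semiconductors: in a metal, n is several orders of magnitude larger, making V_H microscopically small for the same current and field.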
How does Hall effect sensor work?
A Hall effect sensor is a transducer that varies its output voltage in response to a magnetic field. When a beam of charged particles passes through a magnetic field, forces act on the particles and the beam is deflected
from a straight path. The flow of electrons through a conductor constitutes such a beam of charge carriers. When a conductor is placed in a magnetic field perpendicular to the direction of the electron flow, the electrons are deflected
from a straight path. As a consequence, one face of the conductor becomes negatively charged and the opposite face becomes positively charged. The voltage between these faces is called the Hall voltage.
When the force on the charged particles from the electric field balances the force produced by the magnetic field, the separation of charges stops. If the current is not changing, the Hall voltage is then a measure of the
magnetic flux density. There are basically two kinds of Hall effect sensors: linear sensors, whose output voltage depends linearly on the magnetic flux density, and threshold sensors, whose output voltage switches
sharply once the magnetic flux density passes a set level.
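Applied to the wheel speed sensor of the previous slide: a toothed ring rotating with the wheel sweeps past the Hall element, producing one pulse per tooth, so wheel speed follows from the pulse rate. The tooth count and wheel diameter below are illustrative assumptions.

```python
# Sketch: converting Hall-sensor pulses from a toothed ring to road speed.
import math

def wheel_speed_kmh(pulse_count, window_s, teeth=48, wheel_diameter_m=0.65):
    """Pulses counted over a sample window -> road speed in km/h."""
    revs_per_s = pulse_count / teeth / window_s
    speed_ms = revs_per_s * math.pi * wheel_diameter_m  # circumference per rev
    return speed_ms * 3.6

# 400 pulses in 0.5 s on a 48-tooth ring:
# 400/48/0.5 = 16.7 rev/s -> about 122.5 km/h
```

Threshold (switching) Hall sensors are the natural choice here: each passing tooth flips the output cleanly between two levels, giving the ECU a robust digital pulse train to count.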
References:
https://en.wikipedia.org/wiki/Hall_effect
http://www.explainthatstuff.com/hall-effect-sensors.html
http://www.electronics-tutorials.ws/electromagnetism/hall-effect.html
Image 1 credits (Hall effect): https://www.nde-ed.org/EducationResources/CommunityCollege/MagParticle/Physics/Measuring.htm
Image 2 credits (Sensor in action): http://howtomechatronics.com/how-it-works/electrical-engineering/hall-effect-hall-effect-sensors-work/
Slide – 27: Heads Up Display (HUD)
What is Heads Up Display?
A head-up display or heads-up display, also known as a HUD, is any transparent display that presents data without requiring users to look away from their usual viewpoints.
References:
https://en.wikipedia.org/wiki/Head-up_display
Image 1 credits (Heads Up Display): http://www.fly737ng.com/?page_id=18
Slide – 28: Heads Up Display (HUD) – Working
How does Heads Up Display work?
The actual components of an automotive HUD can vary a great deal, but all HUDs contain three essential elements: (1) an image source, (2) a system of lenses and mirrors that reflect, refract, focus, and magnify the
HUD image, and (3) a combiner surface.
The image source is the component of a HUD that produces the initial pattern of light energy that will eventually be viewed by the driver. The reflective and refractive systems serve to transfer the HUD image from its
source to the combiner. Combiners, the third basic component of a HUD system, serve as the final surface onto which the HUD image is projected. The windshield of an automobile is typically used as the combiner,
and it is often treated in some manner to allow for high image contrast and clarity in the area in which the HUD image is projected. As the optical element viewed directly by the driver, the combiner
is selected so that it also sets the distance of the HUD virtual image. The distance is often set so that the driver sees the HUD image at or near the end of the hood of the automobile.
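One way to see how the combiner "sets the distance" of the virtual image is to treat it as a simple concave mirror and apply the mirror equation. This is a deliberate simplification of real HUD optics, and the focal length and source distance below are illustrative numbers only.

```python
# Sketch: virtual image distance from the mirror equation 1/f = 1/d_o + 1/d_i.

def virtual_image_distance(focal_length_m, source_distance_m):
    """Solve for d_i; a negative result means a virtual image behind
    the combiner, which is what a HUD wants."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / source_distance_m)

# A source just inside the focal length yields a distant virtual image:
d_i = virtual_image_distance(0.30, 0.27)
# d_i is about -2.7 m: virtual, roughly at the end of the hood
```

Moving the source slightly relative to the focal point shifts the virtual image distance dramatically, which is how designers place the image where the driver's eyes are already focused.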
References:
http://www.ijesit.com/Volume%204/Issue%202/IJESIT201502_17.pdf
Image 1 credits (Heads Up Display – working): http://www.uobdii.com/wholesale/head-up-display-with-obd2-interface-km-h-mph-speeding-warning-w03.html
Slide – 29: Heads Up Display (HUD) – Applications
HUD technology was originally used mainly in the aviation industry, but it now has various applications in cars as well.
Applications
1. Automobiles – These displays usually present speedometer, tachometer, and navigation system information to the driver.
2. Synthetic Vision system – HUD systems are also being designed to display a synthetic vision system (SVS) graphic image, which uses high precision navigation, attitude, altitude and terrain databases to create
realistic and intuitive views of the outside world.
3. Military aircraft – Military applications include weapons-system and sensor data such as target designation, range, velocity, weapon seeker status, etc.
References:
http://www.ijesit.com/Volume%204/Issue%202/IJESIT201502_17.pdf
Image 1 credits (Automobiles): http://the-gadgeteer.com/2015/09/01/a8-obd-heads-up-display-hud-review/
Image 2 credits (Synthetic Vision Systems): http://www.nasa.gov/vision/earth/improvingflight/svs_reno.html
Image 3 credits (Military aircrafts): https://commons.wikimedia.org/wiki/File:F-18_HUD_gun_symbology.jpeg
Slide – 16: Sensor Integration under the Bonnet
This slide brings all the different types of sensor technology together under one roof. The figure above depicts how the different sensors are used inside a fully autonomous vehicle and how they communicate with each other.
The upcoming slides focus on the other technologies: GPS, camera/stereo vision, image sensors, video cameras, HUDs, and the V2X on-board unit.
References:
Image 1 credit (Under the Bonnet) : http://www.economist.com/node/21560989
Slide – 30: Electric Vehicle Battery
What is an EV battery?
An electric vehicle battery (EVB) or traction battery is a battery used to power the propulsion of battery electric vehicles (BEVs). Vehicle batteries are usually secondary (rechargeable) batteries. A battery pack always
incorporates many discrete cells connected in series and parallel to achieve the total voltage and current requirements of the pack.
Rechargeable electric batteries are used for many applications, such as electric vehicles and autonomous vehicles.
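The series/parallel statement above can be made concrete: cells in series add voltage, while parallel strings add capacity. The cell figures and the 96s74p layout below are hypothetical, roughly typical of a lithium-ion traction pack, not data for any specific vehicle.

```python
# Sketch: pack voltage, capacity and energy from the cell arrangement.

def pack_specs(cell_v, cell_ah, series, parallel):
    """Return (pack voltage in V, capacity in Ah, energy in kWh)."""
    voltage = cell_v * series        # series cells add voltage
    capacity = cell_ah * parallel    # parallel strings add capacity
    return voltage, capacity, voltage * capacity / 1000.0

# A hypothetical 96s74p pack of 3.6 V / 3.4 Ah cells:
v, ah, kwh = pack_specs(3.6, 3.4, 96, 74)
# -> 345.6 V, 251.6 Ah, about 87 kWh
```

The same cell can therefore serve very different vehicles: changing only the series count moves the pack voltage, and changing the parallel count moves range, without redesigning the cell itself.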
References:
https://en.wikipedia.org/wiki/Electric_vehicle_battery#Internal_components
Image 1 credits (EV battery): http://www.reliableplant.com/Read/27174/Cost-electric-vehicles-A123
Image 2 credits (Components of an EV battery): http://powerelectronics.com/power-electronics-systems/pedal-metal-ev-battery-technology
Slide – 31: V2X Communication
In an automated vehicle environment, cars will communicate with your house, office, and smart devices, acting as a digital assistant and gathering the information you need to go about your day.
What is a Vehicular Communication System?
Vehicular communication systems are networks in which vehicles and roadside units are the communicating nodes, providing each other with information, such as safety warnings and traffic information. They can be
effective in avoiding accidents and traffic congestion.
What is V2X Communication?
Vehicle-to-everything (V2X) communication is the passing of information from a vehicle to any entity that may affect the vehicle, and vice versa. It is a vehicular communication system that incorporates other, more
specific types of communication such as V2I (vehicle-to-infrastructure), V2V (vehicle-to-vehicle), V2P (vehicle-to-pedestrian), V2D (vehicle-to-device) and V2G (vehicle-to-grid).
References:
https://en.wikipedia.org/wiki/Vehicle-to-everything
https://en.wikipedia.org/wiki/Vehicular_communication_systems
Image 1 credits (V2X communication): https://www.wired.com/insights/2014/09/connected-cars/
Slide – 32: V2X Communication types
Types of V2X Communication:
1. Vehicle to Vehicle communication: The vehicle-to-vehicle approach is best suited for short-range vehicular networks. In V2V, connectivity between vehicles may not exist at all times, since
the vehicles move at different velocities and the network topology can therefore change quickly. It is the receiver's responsibility to judge the relevance of emergency messages and decide on appropriate
actions.
2. Vehicle to Infrastructure/Roadside communication: Vehicle to Infrastructure provides a solution for longer-range vehicular networks. It makes use of pre-existing network infrastructure such as wireless access points
(Road-Side Units, RSUs). Communication between vehicles and RSUs is supported by the Vehicle-to-Infrastructure (V2I) and Vehicle-to-Roadside (V2R) protocols.
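The receiver-side relevance decision described for V2V can be sketched as a simple filter: keep only emergency messages from senders that are within range and roughly ahead of us. The thresholds, coordinate frame, and message format below are illustrative assumptions, not any real V2X message standard.

```python
# Sketch: a receiver deciding which V2V emergency messages are relevant.
import math

def is_relevant(msg, own_pos, own_heading_deg, max_range_m=300.0):
    """msg: dict with 'type' and 'pos' (x, y in metres). Keep emergency
    messages from senders in range and within +/-90 deg of our heading."""
    if msg["type"] != "emergency":
        return False
    dx = msg["pos"][0] - own_pos[0]
    dy = msg["pos"][1] - own_pos[1]
    if math.hypot(dx, dy) > max_range_m:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    heading_diff = (bearing - own_heading_deg + 180) % 360 - 180
    return abs(heading_diff) <= 90.0

msgs = [
    {"type": "emergency", "pos": (100, 10)},   # close and ahead: relevant
    {"type": "emergency", "pos": (-200, 0)},   # behind us: ignored
    {"type": "traffic",   "pos": (50, 0)},     # not an emergency: ignored
]
relevant = [m for m in msgs if is_relevant(m, (0, 0), 0.0)]
```

Because V2V topology changes quickly, this filtering has to run on every received message rather than relying on a stable, centrally managed network view.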
References:
1. https://en.wikipedia.org/wiki/Vehicle-to-everything
2. Image 1 credits (V2X communication types): www.cise.ufl.edu/~helmy/cis6930-12/Grp5-Proj-Presentation.ppt
Slide – 33: V2X Communication – Applications
Applications
1. Safety – The safety applications aim to decrease the number of accidents by predicting hazards and notifying drivers of information obtained through communication between vehicles and sensors installed on
the road.
2. Efficiency – The efficiency applications support better utilization of roads and intersections. These functions can operate locally at an intersection or on a given road section, or, in the optimal case, across a large
network such as a busy downtown.
References:
http://www.mogi.bme.hu/TAMOP/jarmurendszerek_iranyitasa_angol/math-ch09.html
Image 1 credits (Safety): http://www.toyota-global.com
Image 2 credits (Efficiency): http://www.car-to-car.org/
Slide – 34: Roadmap to Automation
Definitions of different levels of automation
Level 0: The driver continuously performs the longitudinal and lateral dynamic driving task. No intervening vehicle system is active.
Level 1: The driver continuously performs either the longitudinal or the lateral dynamic driving task; the system performs the other one.
Level 2: The driver must monitor the dynamic driving task and the driving environment at all times. The system performs the longitudinal and lateral driving task in a defined use case.
Level 3: The driver does not need to monitor the dynamic driving task or the driving environment at all times, but must always be in a position to resume control. The system performs the longitudinal and lateral driving task in a
defined use case, recognizes its performance limits, and requests the driver to resume the dynamic driving task with sufficient time margin.
Level 4: The driver is not required during the defined use case. The system performs the lateral and longitudinal dynamic driving task in all situations in that defined use case.
Level 5: The system performs the lateral and longitudinal dynamic driving task in all situations encountered during the entire journey. No driver is required.
References:
http://www.oica.net/wp-content/uploads/WP-29-162-20-OICA-automated-driving.pdf
Image 1 credits (Roadmap to automation): http://www.autocarpro.in/news-international/delphi-partners-ottomatika-develop-solutions-automated-driving-6890
Slide – 35: Features of a Self-Driving Car
Adaptive cruise control: Also called traffic-aware cruise control, this is a cruise control system that automatically adjusts the vehicle speed to maintain a safe distance from vehicles ahead.
Autonomous emergency braking: Also known as advanced emergency braking (AEB), this is an autonomous road vehicle safety system that employs sensors to monitor the proximity of vehicles in front and detects situations where the relative speed and distance between the host and target vehicles suggest that a collision is imminent.
Self Parking: Also known as automated parking, this is an autonomous car-maneuvering system that moves a vehicle from a traffic lane into a parking spot to perform parallel, perpendicular or angle parking.
Traffic Jam Assistants: This feature is similar to the adaptive cruise control system but operates at low speeds in congested traffic.
Lane keeping: In road-transport terminology, a lane departure warning system is a mechanism designed to warn the driver when the vehicle begins to move out of its lane (unless a turn signal is on in that direction) on freeways and arterial roads. These systems are designed to minimize accidents by addressing the main causes of collisions: driver error, distraction and drowsiness. In 2009 the U.S. National Highway Traffic Safety Administration (NHTSA) began studying whether to mandate lane departure warning systems and frontal collision warning systems on automobiles.
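The lane departure rule above (warn when drifting toward a lane boundary unless the matching turn signal is on) can be sketched directly. The lane width and warning margin are illustrative assumptions, not values from any standard.

```python
# Sketch of the lane departure warning decision.

def ldw_warning(lateral_offset_m, turn_signal, lane_width_m=3.6, margin_m=0.4):
    """lateral_offset_m: offset from lane centre (+ means drifting right).
    turn_signal: 'left', 'right' or None. Returns True to warn."""
    edge = lane_width_m / 2.0 - margin_m        # warn this close to the line
    if lateral_offset_m > edge and turn_signal != "right":
        return True                              # drifting right, no signal
    if lateral_offset_m < -edge and turn_signal != "left":
        return True                              # drifting left, no signal
    return False

# Drifting 1.5 m right of centre with no signal triggers a warning;
# the same drift with the right indicator on is treated as intentional.
```

Real systems estimate the lateral offset from camera-detected lane markings, but once that estimate exists the warning logic reduces to this kind of threshold-plus-intent check.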
References:
Image 1 credits (ACC): http://editorial.autoweb.com/autowebs-guide-to-adaptive-cruise-control/
Image 2 credits (Autonomous emergency braking): https://live.landrover.co.uk/technology/guide/autonomous-emergency-braking-system
Image 3 credits (Self Parking): http://auto.howstuffworks.com/car-driving-safety/safety-regulatory-devices/self-parking-car2.htm
Image 4 credits (Traffic Jam Assistants): http://www.autoblog.com/2014/01/07/audi-traffic-jam-assistant-ces-demo-video/
Image 5 credits (Lane keeping): http://pr.kia.com/en/wow/drive-wise/drive-wise-technologies/driving-assist.do?caller=d29ybGR3aWRl
Slide – 36: Challenges
The points mentioned above are problems that have yet to be solved.