OBSTACLE DETECTION USING LASER

Rohith R, Associate Developer at Speridian Technologies
ABSTRACT
Cameras are the eyes of a robot: the robot's movements are controlled by
analyzing the images these cameras capture. The input to the robot should
therefore be capable of producing a 3D image, but an ordinary camera cannot
capture a 3D image directly. Several methods exist for converting a 2D image
into a 3D image, but most of them use more than one camera, and some cannot
work in dark conditions. Here we introduce a method to obtain a 3D image
using a single camera that can be used even in the absence of light. A
BeagleBoard-xM is used as the image processing platform.
PROBLEM DEFINITION
In this project we identify an object in its environment using a line laser
and obtain a 3D perspective view of it by continuously capturing images of
the object with a high-definition webcam and processing them with our
algorithm.
INTRODUCTION
Computer vision is the field concerned with the automated processing of
images from the real world to extract and interpret information in real
time. It is the science and technology of machines that see, which includes
methods for acquiring, processing, analyzing, and understanding images.
High-dimensional data from the real world is needed in order to produce
numerical or symbolic information in the form of decisions. The trend in
this field is to duplicate the abilities of human vision electronically.
The applications of computer vision extend from machine vision used in
industry for assembly-line manufacturing, to the biomedical field for
analyzing X-ray and microscopy images used in diagnosing patients, to the
military field for detecting enemy movement, missile guidance, etc.
The latest application areas of computer vision are autonomous vehicles,
which include submersibles, land-based vehicles (small wheeled robots, cars,
or trucks), aerial vehicles, and Unmanned Aerial Vehicles (UAVs). The level
of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles in
which computer-vision-based systems support a driver or a pilot in various
situations. Fully autonomous vehicles typically use computer vision for
navigation, i.e. for knowing where the vehicle is, for producing a map of
its environment, and for detecting obstacles. It can also be used for
detecting certain task-specific events, e.g. a UAV looking for forest fires.
Examples of supporting systems are obstacle-warning systems in cars and
systems for autonomous landing of aircraft. Several car manufacturers have
demonstrated systems for autonomous driving, but this technology has still
not reached a level where it can be put on the market. There are ample
examples of military autonomous vehicles, ranging from advanced missiles to
UAVs for reconnaissance missions or missile guidance. Space exploration is
already being carried out with autonomous vehicles using computer vision,
e.g. NASA's Mars Exploration Rover and ESA's ExoMars Rover.
IV. PROJECT ANALYSIS AND DESIGN
Computer-vision-based robots can solve many problems faced by humans in
day-to-day life. They can be helpful in a fully automated factory such as a
bottling plant, placing caps on every filled bottle using their ability to
see and detect the bottles. These robots find extensive application at
NASA, ESA, and other space agencies in their unmanned missions to Mars, the
Moon, and other planetary bodies. They are also used as ATVs (All-Terrain
Vehicles) that reach places humans cannot, such as the deep ocean, and they
are used in times of disaster to rescue people trapped under debris using
their infrared cameras.
Unmanned Aerial Vehicles (UAVs) are the pride of defense forces across the
world. These are multipurpose aircraft that can be used for aerial
inspection of an area, as stealth aircraft for spying, and for attack
purposes. Driverless cars are said to be the future of the automobile
industry; a driverless car needs a very powerful camera, a capable
processing unit, and a very good algorithm.
Our project can be seen as a miniature form of all the above situations.
Here we consider only a small case: the robot detecting an object using a
laser and creating its 3D view. Real-time image processing helps our robot
accurately detect an object by following the track of the laser.
Figure 1. Block Diagram
The image is acquired using a webcam (iBall HD) and transferred to the
BeagleBoard through USB. A suitable camera was selected after familiarizing
ourselves with the BeagleBoard. The BeagleBoard-xM is a low-power, low-cost
single-board computer produced by Texas Instruments in association with
Digi-Key. Processing is done on the BeagleBoard; the software used for
processing is OpenCV in C, and the C code is executed from a terminal
program in Linux.
A servo motor controlled by an Arduino UNO is used to move the laser for
scanning. The laser is mounted on the motor and swayed back and forth by
instructions from the Arduino program. The camera captures frames of the
object being scanned, which are then manipulated by the OpenCV source code
to obtain our objective, and the resulting image is displayed in real time.
There is no physical connection between the BeagleBoard and the Arduino
board. A minimal sketch of the acquisition loop is given below.
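As a sketch of this capture-and-display loop (an illustration only, written
against the OpenCV 2.4.1 C API; the camera index and window name are
assumptions, not the project's exact source):

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(void)
{
    /* open the first USB camera (the HD webcam attached to the board) */
    CvCapture *capture = cvCaptureFromCAM(0);
    if (!capture)
        return 1;

    cvNamedWindow("scan", CV_WINDOW_AUTOSIZE);

    for (;;) {
        IplImage *frame = cvQueryFrame(capture);  /* one frame per loop */
        if (!frame)
            break;
        cvShowImage("scan", frame);               /* real-time display */
        if (cvWaitKey(10) == 27)                  /* ESC quits */
            break;
    }

    cvReleaseCapture(&capture);
    cvDestroyWindow("scan");
    return 0;
}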
Object detection and 3D generation consist of three main phases:
familiarizing with the BeagleBoard, design of the physical part of the
scanning section, and image processing.
FAMILIARISING THE BEAGLEBOARD
As we have the algorithm, we need a stand-alone system for image
acquisition and processing. So we took the BeagleBoard, a single-board
computer built around a TI OMAP SoC with an ARM Cortex-A8 core and an
on-chip DSP, as our computer. We studied the Linux OS to familiarize
ourselves with the operation of the BeagleBoard and ported Angstrom Linux
(a Linux distribution for embedded-system developers) to the SBC. Porting
Angstrom to the BeagleBoard is done by writing a bootable Linux image to
the SD card, which acts as secondary storage for the board. The following
instructions boot Angstrom from the SD card; they are performed in a
terminal on a Lubuntu platform.
Installing Angstrom on the BeagleBoard-xM

# create a working directory and fetch the SD-card partitioning script
mkdir ~/angstrom-wrk
cd ~/angstrom-wrk
wget http://cgit.openembedded.org/cgit.cgi/openembedded/plain/contrib/angstrom/omap3-mkcard.sh
chmod +x omap3-mkcard.sh

# partition the SD card (replace /dev/sdX with the card's device node)
sudo ./omap3-mkcard.sh /dev/sdX
sync
df -h

# extract the files to the ./boot directory
tar --wildcards -xjvf [YOUR-DOWNLOAD-FILE].tar.bz2 ./boot/*

# copy the files to the SD-card boot partition
cp boot/MLO-* /media/boot/MLO
cp boot/uImage-* /media/boot/uImage
cp boot/u-boot-*.bin /media/boot/u-boot.bin
sync

# extract the root filesystem to the Angstrom partition
sudo tar -xvj -C /media/Angstrom -f [YOUR-DOWNLOAD-FILE].tar.bz2
sync

# unmount both partitions before removing the card
umount /media/boot
umount /media/Angstrom
V. DESIGN OF THE PHYSICAL PART OF THE SCANNING SECTION
Servos have integrated gears and a shaft that can be precisely controlled.
Standard servos allow the shaft to be positioned at various angles, usually
between 0 and 180 degrees; continuous-rotation servos allow the rotation
speed of the shaft to be set. Here we employ an Arduino UNO board to
control the movements of the laser. The required hardware and circuitry are
explained below.
Hardware Required
1) Arduino board
2) Servo motor
3) Hook-up wire
Circuit
Servo motors have three wires: power, ground, and signal. The power wire is
typically red and should be connected to the 5V pin on the Arduino board.
The ground wire is typically black or brown and should be connected to a
ground pin on the Arduino board. The signal wire is typically yellow,
orange, or white and should be connected to a digital pin on the Arduino
board. Note that servos draw considerable power, so if we need to drive
more than one or two, we will probably need to power them from a separate
supply (i.e. not the +5V pin on the Arduino). Be sure to connect the
grounds of the Arduino and the external power supply together. A minimal
sweep sketch is given after the circuit figure.
Figure 2. Circuit setup for the Arduino board
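As a sketch of the sweep control (the signal pin and timing below are
illustrative assumptions, not values taken from the project), the standard
Arduino Servo library can sway the laser back and forth:

#include <Servo.h>

Servo laserMount;              // servo carrying the line laser
const int SERVO_PIN = 9;       // assumed signal pin; any PWM-capable pin works

void setup() {
  laserMount.attach(SERVO_PIN);
}

void loop() {
  // sweep forward, then back, one degree at a time
  for (int angle = 0; angle <= 180; angle++) {
    laserMount.write(angle);
    delay(15);                 // give the servo time to reach each position
  }
  for (int angle = 180; angle >= 0; angle--) {
    laserMount.write(angle);
    delay(15);
  }
}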
VI. IMAGE PROCESSING
The image processing is performed on the OpenCV 2.4.1 platform. Before
processing the images captured by the camera, OpenCV must first be
installed on the Ubuntu OS, with the instructions executed in the terminal
program provided by Ubuntu. A sketch of how OpenCV can be installed on
Ubuntu follows.
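The original installation listing is not reproduced in this copy, so the
following is a hedged reconstruction (the package list and the 2.4.1 source
archive URL are assumptions based on the stated version, not the project's
exact steps):

# build prerequisites (package list assumed for an Ubuntu release of that era)
sudo apt-get update
sudo apt-get install build-essential cmake pkg-config libgtk2.0-dev \
    libavcodec-dev libavformat-dev libswscale-dev

# fetch and unpack the OpenCV 2.4.1 sources
wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.1/OpenCV-2.4.1.tar.bz2
tar -xjvf OpenCV-2.4.1.tar.bz2
cd OpenCV-2.4.1

# configure, compile, and install
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make
sudo make install
sudo ldconfig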
VII. PROGRAM EXPLANATION
Case 1: reference plane without an object
Before we begin scanning the object, we place a reference line on either
side of the plane so that the object we wish to scan lies within these
reference lines. This ensures that the length of the laser line is greater
than the size of the object, so that the change or shift in the path of the
laser line can be detected once it passes over the object. The first step
is to obtain a continuous video sequence from the camera, from which a
frame is captured using the OpenCV frame-capture instruction. The captured
image is stored as a matrix of pixels, where each pixel occupies an address
given by its row and column. Processing starts from the first pixel of the
image matrix, i.e. the pixel at row 1, column 1, whose RGB color is
retrieved. It is then checked whether the red intensity of this pixel is
greater than its green and blue intensities. If yes, all RGB channels of
the pixel are assigned the value 255 (white); otherwise they are assigned
0 (black). A compact sketch of this pass is given below.
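As a compact sketch of this red-dominance pass (illustrative only, written
against the OpenCV 2.4.1 C API; the function name and in-place update are
our assumptions):

#include <opencv/cv.h>

/* Binarize a BGR frame in place: pixels where red dominates become white
 * (255), all other pixels become black (0). */
void red_threshold(IplImage *img)
{
    for (int r = 0; r < img->height; r++) {
        /* each row starts widthStep bytes after the previous one */
        uchar *row = (uchar *)(img->imageData + r * img->widthStep);
        for (int c = 0; c < img->width; c++) {
            uchar b = row[3 * c], g = row[3 * c + 1], rd = row[3 * c + 2];
            uchar v = (rd > g && rd > b) ? 255 : 0;   /* OpenCV stores BGR */
            row[3 * c] = row[3 * c + 1] = row[3 * c + 2] = v;
        }
    }
}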
Case 2: reference plane with an object
The above steps are repeated for each pixel of the row by incrementing the
column, and the process continues until every pixel of the image has been
processed. This yields a grey-level image corresponding to the RGB image;
we now have an image with the object in black and the background in white.
The next step is obtaining a 3D perspective view of the object. The main
idea is that when the laser line nears an object's edge it gets displaced
by an amount equal to the width of the object. So there is a displacement
of the laser line from its reference line as it nears an object's edge, and
when the RGB-to-greyscale conversion is performed we get a white region for
the red line up to the point where it is displaced. We note the row and
column numbers of the line where it gets displaced. Keeping the column
number constant, we move to the row where the displaced line appears and
check whether the RGB color at this pixel location is red. If it is, we
note the row number of this pixel in a separate variable and subtract the
previous row number from the newly obtained one to get a non-zero value
(say x). This value x is then multiplied by an arbitrarily defined RGB
value, which is applied at the edge of the object to obtain the perspective
view of the object. A sketch of the displacement measurement is given
below.
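As a sketch of this column-wise displacement measurement (a hypothetical
helper consistent with the thresholded image above; the parameter names and
single-column walk are our assumptions):

/* Walk down one column of the thresholded frame and return the row offset
 * between the reference laser row and the first white pixel found elsewhere
 * (the shift x caused by the object), or 0 if the line is not displaced. */
int laser_displacement(const IplImage *bin, int col, int ref_row)
{
    for (int r = 0; r < bin->height; r++) {
        const uchar *px =
            (const uchar *)(bin->imageData + r * bin->widthStep) + 3 * col;
        if (px[0] == 255 && r != ref_row)  /* channels are equal after thresholding */
            return r - ref_row;
    }
    return 0;
}

The returned x is the quantity that, scaled by the arbitrarily defined RGB
value described above, shades the object's edge in the perspective view.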
VIII. FLOW CHART
IX. CONCLUSION
By using a single camera and a line laser we are able to produce a 3D
perspective view of an object even in the absence of light.
In our project, object detection is the only concern. Depending on the type
of object, the project can be extended to either avoid the object or take
necessary action against it. It can also be used for terrain mapping in
robotics. There is a small delay in the image-processing operation, and
since there is no perfect synchronization between the camera data rate and
the BeagleBoard, there is a chance of noise forming in an image. DSP
optimization could be used to overcome this problem.
X. RESULT
Fig. Obstacle detection output
Fig. 3D perspective view output
