OBSTACLE DETECTION USING LASER
Cameras are the eyes of a robot: its movements are controlled by
analyzing the images these cameras capture. The input to the robot
should therefore be capable of producing a 3D image, but an ordinary
camera cannot capture a 3D image directly. Several methods exist for
converting a 2D image into a 3D image, but most of them use more than
one camera, and some cannot work in dark conditions. Here we introduce
a new method for obtaining a 3D image using a single camera, which can
be implemented even in the absence of light. A BeagleBoard-xM is used
as the image-processing platform.
In our project we identify an object in its environment using a
line laser and obtain its 3D perspective view by continuously
capturing images of the object with a high-definition webcam and
processing them with our algorithm.
Computer vision is the field concerned with the automated
processing of images from the real world to extract and interpret
information in real time. It is the science and technology of
machines that see, which includes methods for acquiring,
processing, analyzing, and understanding images.
High-dimensional data from the real world is needed in order to
produce numerical or symbolic information in the form of decisions.
The trend in this field is to duplicate the abilities of human vision.
The applications of computer vision extend from machine
vision used in industry for assembly-line manufacturing, to the
biomedical field for analyzing the X-ray and microscopy images
used for diagnosing patients, and to the military field for
detecting enemy movement, missile guidance, etc.
The latest application areas of computer vision are autonomous
vehicles, which include submersibles, land-based vehicles (small
wheeled robots, cars, or trucks), aerial vehicles, and
Unmanned Aerial Vehicles (UAVs). The level of autonomy ranges
from fully autonomous (unmanned) vehicles to vehicles where
computer-vision-based systems support a driver or pilot in
various situations. Fully autonomous vehicles typically use
computer vision for navigation, i.e. for knowing where they are,
for producing a map of the environment, and for detecting obstacles.
It can also be used for detecting certain task-specific events, e.g. a
UAV looking for forest fires. Examples of supporting systems are
obstacle-warning systems in cars and systems for autonomous
landing of aircraft. Several car manufacturers have demonstrated
systems for autonomous driving of cars, but this technology has
still not reached a level where it can be put on the market. There
are ample examples of military autonomous vehicles, ranging from
advanced missiles to UAVs for reconnaissance missions or missile
guidance. Space exploration is already being carried out with
autonomous vehicles using computer vision, e.g. NASA's Mars
Exploration Rover and ESA's ExoMars Rover.
IV. PROJECT ANALYSIS AND DESIGN
A computer-vision-based robot can solve many problems faced
by humans in day-to-day life. It can be helpful in a fully
automated factory, such as a bottling plant, placing caps
on every filled bottle using its ability to see and detect the
bottles. These robots find extensive application at NASA, ESA,
and other space agencies in their unmanned missions to Mars, the
Moon, and other planetary bodies. They are also used as
ATVs (All-Terrain Vehicles) that reach places where
humans cannot, such as the deep ocean. They are used in
times of disaster to rescue people trapped beneath debris using
their infrared cameras.
Unmanned Aerial Vehicles (UAVs) are the pride of all
defense forces across the world. These are multipurpose
aircraft that can be used for aerial inspection of an area, as
stealth aircraft for spying, and for attack purposes.
Driverless cars are said to be the future of the automobile
industry; a driverless car needs a very powerful camera, a
capable processing unit, and a very good algorithm.
Our project can be seen as a miniature form of all the above
situations. Here we consider only a small case: the robot
detecting an object using a laser and creating its 3D view.
Real-time image processing helps our robot accurately
detect an object by following the track of the laser.
Figure 1. Block Diagram
The image is acquired using a webcam (HD iBall) and transferred
to the BeagleBoard through USB. A suitable camera was selected
after familiarization with the BeagleBoard. The
BeagleBoard-xM is a low-power, low-cost single-board computer
produced by Texas Instruments in association with Digi-Key.
Processing is done on the BeagleBoard using OpenCV in C;
the C code is executed from a terminal program under Linux.
A servo motor controlled by an Arduino UNO is used to control the
movement of the laser for scanning. The laser is mounted on
the motor and swayed back and forth by instructions from the
Arduino program. The camera captures frames of the object being
scanned, which are then manipulated by the OpenCV source code to
obtain our objective. The desired image thus obtained is displayed
in real time. There is no physical connection between the BeagleBoard
and the Arduino board.
Object detection and 3D generation consist of three main phases:
familiarization with the BeagleBoard, design of the physical scanning
section, and image processing.
Given the algorithm, we need a stand-alone system for image
acquisition and processing, so we chose the BeagleBoard, a
single-board computer with an ARM Cortex-A8 processor (OMAP 3530,
which also includes a DSP), as our computer. We studied the Linux OS
to familiarize ourselves with the operation of the BeagleBoard and
ported Angstrom Linux (a Linux distribution for embedded-system
developers) to the SBC. Porting Angstrom Linux to the BeagleBoard
is done by booting Linux from the SD card, which acts as secondary
memory for the board. The following instructions boot Angstrom from
the SD card; they are performed in a terminal on a Lubuntu platform.
Installing Angstrom on the BeagleBoard-xM
# partition and format the SD card (replace /dev/sdX with the card's device)
chmod +x omap3-mkcard.sh
sudo ./omap3-mkcard.sh /dev/sdX
# extract the files to the ./boot directory
tar --wildcards -xjvf [YOUR-DOWNLOAD-FILE].tar.bz2 ./boot/*
# copy the files to the SD-card boot partition
cp boot/MLO-* /media/boot/MLO
cp boot/uImage-* /media/boot/uImage
cp boot/u-boot-*.bin /media/boot/u-boot.bin
# extract the root filesystem to the second partition
sudo tar -xvj -C /media/Angstrom -f [YOUR-DOWNLOAD-FILE].tar.bz2
V. DESIGN OF THE PHYSICAL SCANNING SECTION
Servos have integrated gears and a shaft that can be precisely
controlled. Standard servos allow the shaft to be positioned at
various angles, usually between 0 and 180 degrees. Continuous-rotation
servos allow the rotation speed of the shaft to be set to various
values. Here we employ an Arduino UNO board to control
the movements of the laser. The required hardware and circuitry
are explained below:
1) Arduino board
2) Servo motor
3) Hook-up wire
Servo motors have three wires: power, ground, and signal. The
power wire is typically red and should be connected to the 5V
pin on the Arduino board. The ground wire is typically black or
brown and should be connected to a ground pin on the Arduino
board. The signal wire is typically yellow, orange, or white and
should be connected to a digital pin on the Arduino board. Note that
servos draw considerable power, so if we need to drive more
than one or two, we will probably need to power them from a
separate supply (i.e. not the +5V pin on the Arduino). Be sure
to connect the grounds of the Arduino and the external power
supply together.
Figure 2. Circuit setup for the Arduino board
VI. IMAGE PROCESSING
The operation of image processing is performed in openCV 2.4.1
platform. So before proceeding with the processing of image
captured by cam ,first we need openCV which is loaded into
Ubuntu OS and the instructions are executed in terminal program
provided by Ubuntu OS .The following section how to install
openCV in Ubuntu OS.
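The original installation listing is missing from this copy. As a rough sketch, a typical from-source build of OpenCV 2.4.1 on Ubuntu of that era looked like the following; the dependency package list and the archive name are assumptions, and the archive itself must first be downloaded from opencv.org:

```shell
# install build tools and common OpenCV dependencies (assumed package set)
sudo apt-get update
sudo apt-get install build-essential cmake pkg-config libgtk2.0-dev

# unpack the OpenCV 2.4.1 source archive downloaded from opencv.org
tar -xjf OpenCV-2.4.1.tar.bz2
cd OpenCV-2.4.1

# configure, build and install
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make
sudo make install
```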
VII. PROGRAM EXPLANATION
Case 1 - reference plane without object
Before we begin scanning the object, we place a reference line
on either side of the plane so that the object we wish to scan lies
within these reference lines. This ensures that the laser line is
longer than the object, so that the change or shift in the path of
the laser line can be detected once it passes over the object.
The first step is obtaining a continuous video
sequence from the camera, from which a frame is captured using
the OpenCV frame-capture instruction. The captured image is stored
as a matrix of pixels, where each pixel occupies an address
location indicated by its row and column. Image processing begins
with the first pixel of the image matrix, i.e. the pixel at
row 1, column 1, whose RGB color is retrieved. It is then checked
whether the red intensity of this pixel is greater
than its green and blue intensities. If so, all three RGB channels
are assigned the value 255, i.e. white; if the red intensity is
lower, the RGB channels are assigned the value 0, i.e. black.
Case 2 - reference plane with object
The above steps are repeated for each pixel of the row by
incrementing the column, and the process continues until
every pixel of the image has been processed. This yields a
grey-level image corresponding to the RGB image, with the
object in black and the background in white.
The next step is obtaining a 3D perspective view of the
object. The main idea is that when the laser line
nears an object's edge, it is displaced by an amount corresponding
to the width of the object. So there is a displacement of the
laser line from its reference line as it nears the object's edge,
and after the RGB-to-greyscale conversion is performed we get a
white region for the red line up to the point where it is displaced.
We note the row and column numbers of the line where it gets displaced.
Keeping the column number constant, we move to the row where the
displaced line appears and check whether the RGB color
at this pixel location is red. If so, we
note the row number of this pixel in a separate variable and
subtract the previous row number from it to
obtain a non-zero value (say x). This value x is then multiplied
by an arbitrarily defined RGB value, which is then applied at the
edge of the object to obtain its perspective view.
By using a single camera and a line laser we are able to
produce a 3D perspective view of any object, even in the
absence of light.
In our project, object detection is the only concern.
Depending on the type of object, the project can be
extended either to avoid the object or to take necessary
action against it. It can also be used for terrain mapping in
robotics. There is a small delay in the image-processing
operation, and since there is no perfect synchronization
between the camera data rate and the BeagleBoard there is a
chance of noise forming in the image. DSP optimization
could be used to overcome this limitation.
Fig.Obstacle Detection output result
Fig.3D Perspective View output