Digital Image Processing &
Machine Vision
Lecture # 2
By
Dr. Abdul Rehman Abbasi
Interesting Fact
Vision accounts for about 70% of the data flowing into the human central nervous system.
Feature | Machine Vision | Human Vision
Spectral range | Gamma rays to microwaves (10^-11 to 10^-1 m) | Visible light (4x10^-7 to 7x10^-7 m)
Spatial resolution | 4x10^6 pixels (area scan, growing rapidly), 8192 pixels (line scan) | Effectively approximately 4000x4000 pixels
Sensor size | Small (approx. 5x5x15 mm^3) | Very large
Quantitative | Yes; capable of precise measurement of size, area | No
Ability to cope with unseen events | Poor | Good
Performance on repetitive tasks | Good | Poor, due to fatigue and boredom
Intelligence | Low | High (subjective)
Light level variability | Fixed, closely controlled | Highly variable
Light level (min) | Equivalent to cloudy moonless night | Quarter-moon light (greater if dark adaptation is extended)
Strobe lighting and lasers | Possible (good screening is needed for safety) | Unsafe
Consistency | Good | Poor
Capital cost | Moderate | Low
Running cost | Low | High
Feature | Machine Vision | Human Vision
Inspection cost, per unit | Low | High
Ability to "program" in situ | Limited; special interfaces make the task easier | Speech is effective
Able to cope with multiple views in space and/or time | Versatile | Limited
Able to work in toxic, biohazard areas | Yes | Not easily
Non-standard scanning methods | Line scan, circular scan, random scan, spiral scan, radial scan | Not possible
Image storage | Good | Poor without photography or digital storage
Optical aids | Numerous available | Limited
Spectral Range
Application features that make a vision system attractive include:
1. Inaccessible part (a robot in the way, for example)
2. Hostile manufacturing environment
3. Possible part damage from physical contact
4. Need to measure a large number of features
5. Predictable interaction with light
6. Poor or no visual access to part features of interest
7. Extremely poor visibility
8. Mechanical/electrical sensors provide the necessary data
Machine Vision Process
(Block diagram: IMAGE FORMATION - illumination, object, image sensor; IMAGE CAPTURE - frame grabber; IMAGE PROCESSING - preprocessing, vision engine with vision software tools and algorithms plus custom user software, operation interface, I/O.)
Industrial manufacturing cell with vision system
(Diagram: an object in the manufacturing cell, with illumination, image acquisition, a PC, a server, data collection and process control.)
Image formation:
The right illumination, an optical sensor such as a high-resolution camera or line-scan camera, and a frame grabber. Image formation is the transformation of the visual image of a physical object and its intrinsic characteristics into a set of digitized data that can be used by the image processing unit.
Image processing:
Image processing consists of image grabbing, image enhancement, feature extraction and output formatting. Its function is to create a new image by altering the data in such a way that the features of interest are enhanced and the noise is reduced.
Image analysis:
The main function of image analysis is the automatic extraction of explicit information regarding the content of the image, for example the shape, size, range data and local orientation information from several two-dimensional images. It uses several redundant information representations such as edges, boundaries, disparities, shading, etc. The most commonly used techniques are template matching, statistical pattern recognition and the Hough transform.
Decision making:
Decision making is concerned with reaching a decision based on the description of the image and using AI to control the process or task.
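These four stages can be strung together in a few lines of code. The following is a minimal sketch of such a pipeline using OpenCV and NumPy; the file name, threshold choice and area tolerances are illustrative assumptions, not part of the lecture.

```python
import cv2

# --- Image formation / capture: read a digitized image (stand-in for camera + frame grabber)
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file name

# --- Image processing: enhance features of interest and reduce noise
smoothed = cv2.medianBlur(image, 5)                      # suppress sensor noise
_, binary = cv2.threshold(smoothed, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# --- Image analysis: extract explicit information (here, object size)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]

# --- Decision making: accept/reject based on the extracted description
MIN_AREA, MAX_AREA = 500, 5000                            # assumed tolerance band
decision = "accept" if areas and MIN_AREA < max(areas) < MAX_AREA else "reject"
print(decision)
```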
Generic Model of Machine Vision
• Scene constraints
• Image acquisition
• Preprocessing
• Segmentation
• Feature Extraction
• Classification and/or Interpretation
• Actuation
Scene Constraints
• Scene refers to the environment in which the task is taking place and into
which the machine -vision system is to be placed.
• The aim of the scene constraint sub-system is to reduce the complexity of
the subsequent subsystems to a manageable level. This is achieved by
proper exploitation of a priori constraints such as: knowledge of limited
number of objects possible in the scene, knowledge of their surface finish
and appearance etc. We can also impose new constraints such as:
replacement of ambient light with carefully controlled lighting.
• As is clear from the terminology itself the Scene refers to the industrial
environment in which the manufacturing is being done and the machine
vision system is to perform the required task in that environment.
• The aim of this module is to reduce the complexity of all the subsequent
sub-systems to a manageable level which is achieved by exploitation of
existing constraints and imposition of new ones.
Scene Constraints
Two types of scene constraints can be applied
1. Inherent or natural constraints
2. Imposed constraints
Inherent constraints
• Characteristics of the material
• Inherent features
• Limitations, in the range of objects
• Inherent positional limitations
Imposed constraints
• Control of object features
• Control of object position
• Control of lighting conditions
Control of Lighting Conditions-1
Control of Lighting Conditions-2
Lighting Sources
• LED illumination units
• Metal halide light sources (“cold light sources”
transmitted over fibre-optic cables)
• Laser illumination units
• Fluorescent light (high-frequency)
• Halogen lamps
Light Source Type Advantages Disadvantages
LED Array of light-emitting
diodes
Can form many configurations
within the arrays; single color
source can be useful in some
application
Some features hard to see with
single color source; large array
required to light large area
Fiber-Optic Illuminators
Incandescent lamp in housing;
light carried by optical fiber
bundle to application
Fiber bundles available in many
configurations; heat and
electrical power remote from
application; easy access for
lamp replacement
Incandescent lamp has low
efficiency, especially for blue
light
Fluorescent High-frequency
tube or ring lamp
Diffuse source; wide or narrow
spectral range available; lamps
are efficient and long lived
Limited range of configurations;
intensity control not available
on some lamps
Strobe Xenon arc strobe lamp,
with either direct or fiber bundle
light delivery
Freezes rapidly moving parts;
high peak illumination intensity
Requires precise timing of light
source and image capture
electronics. May require eye
protection for persons working
near the application
Laser Applications
Image Acquisition
• Translation from the light stimuli falling onto the photo sensors
of a camera to a stored digital value within the computer’s
memory.
• Each digitized picture is typically of 512x512 pixels resolution,
with each pixel representing a binary, grey or color value.
• To ensure that no useful information is lost, a proper choice of
spatial and luminance resolution parameters must be made.
• Depending on the particular application, cameras with line-scan or
area-scan elements can be used for image acquisition.
• While area-scan sensors have lower spatial resolution, they
provide highly standardized interfacing to computers and do not
need any relative motion between the object and the camera;
line-scan sensors need relative motion to build a 2-D image.
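As a concrete illustration of this translation from light stimuli to stored digital values, the snippet below grabs one frame from a camera with OpenCV; the device index and the 512x512 resampling are illustrative assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)            # device index 0 is an assumption
ok, frame = cap.read()               # one digitized frame from the sensor
cap.release()

if ok:
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # one luminance value per pixel
    grey = cv2.resize(grey, (512, 512))              # typical resolution mentioned above
    print(grey.shape, grey.dtype)                    # (512, 512) uint8 -> 256 grey levels
```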
Preprocessing-1
• To produce a form of the acquired image that is better suited
for further operations, processes such as contrast enhancement
and adjustment, and filtering to remove noise and improve quality,
modify and prepare the pixel values of the digitized image.
• The fundamental information in the image is not changed by
this module.
• The initially acquired image has a direct pixel-by-pixel relation to
the original scene and thus lies in the spatial domain.
• Transformations from the spatial to the frequency domain can be done
using Fourier transforms, although this is not a very
computationally efficient operation.
Preprocessing-2
• Low-level processing for image improvement, such as histogram
manipulation (grey-level shifting or equalization), cleans up noisy
images and highlights features of particular interest.
• Histogram manipulations provide simple image improvement operations,
either by grey-level shifting or by equalization.
• An image histogram is easily produced by recording the number of pixels at
each grey level.
• If the histogram shows a bias towards the lower-intensity grey levels, then a
transformation that achieves a more equitable sharing of pixels among the grey
levels will enhance or alter the appearance of the image. Such
transformations simply enhance or suppress contrast, and stretch or
compress grey levels, without any alteration to the structural information
present in the image.
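A minimal sketch of the two histogram operations mentioned here, grey-level stretching/shifting and equalization, using NumPy and OpenCV; the input file name is a hypothetical placeholder.

```python
import cv2
import numpy as np

def grey_level_stretch(img):
    """Linearly stretch the occupied grey-level range to 0..255 (contrast change only)."""
    lo, hi = int(img.min()), int(img.max())
    stretched = (img.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1)
    return np.clip(stretched, 0, 255).astype(np.uint8)

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file name
hist = cv2.calcHist([img], [0], None, [256], [0, 256])   # pixels per grey level
stretched = grey_level_stretch(img)                      # stretch/compress grey levels
equalized = cv2.equalizeHist(img)                        # more equitable sharing of pixels
```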
Preprocessing-3
• Another important class of spatial domain algorithms is
designed to perform pixel transformations, in which the final value of
a pixel is calculated as a function of a group of pixel values (or
'neighborhood') at some specified spatial location in the original
image.
• Many filtering algorithms for smoothing (low pass) and edge
enhancement (high pass) are firmly in this category.
• This introduces the basic principle of 'windowing operations' in
which a 2-D (two-dimensional) mask, or window, defining the
neighborhood of interest is moved across the image, taking
each pixel in turn as the centre, and at each position the
transformed value of the pixel of interest is calculated.
Low Pass Filter (5x5 median)
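The windowing principle described above can be tried directly. The sketch below applies a 5x5 median filter (the low-pass filter named in the slide title) and a simple 3x3 high-pass sharpening mask with OpenCV; the kernel values are a common textbook choice, not taken from the lecture.

```python
import cv2
import numpy as np

img = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file name

# Low-pass: a 5x5 window is moved across the image and each centre pixel
# is replaced by the median of its neighbourhood
smoothed = cv2.medianBlur(img, 5)

# High-pass (edge enhancement): weighted sum over a 3x3 neighbourhood
sharpen_mask = np.array([[ 0, -1,  0],
                         [-1,  5, -1],
                         [ 0, -1,  0]], dtype=np.float32)
sharpened = cv2.filter2D(img, -1, sharpen_mask)
```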
Image Segmentation-1
• The acquired image is broken up into meaningful regions or segments, i.e.
the image is partitioned.
• Segmentation is not concerned with what the image represents. Broadly, two
approaches are employed:
Thresholding based on some predetermined criterion (global thresholding
applies a single threshold value to the entire image, while local
thresholding partitions the image into sub-images and determines a threshold
for each of them), and
• Edge-based methods (these use digital versions of standard finite-difference
operators that accentuate intensity changes; a significant change gives rise
to a peak in the first derivative or a zero crossing in the second derivative,
which can be detected, and properties such as the position, sharpness and
height of the peak indicate the location, sharpness and contrast of the
intensity changes in the image). Edge elements can then be linked to form
complete boundaries, as shown in Figure 1.5.
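A small sketch of the two thresholding variants mentioned above: a single global threshold (here chosen automatically with Otsu's method) and a local threshold computed per neighbourhood (adaptive thresholding). The block size and offset are illustrative assumptions.

```python
import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file name

# Global thresholding: one threshold value applied to the entire image
_, global_mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local thresholding: a threshold determined for each sub-image (31x31 neighbourhood)
local_mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
```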
Image Segmentation-2
The classical approach to edge-based segmentation begins with edge
enhancement, which makes use of digital versions of standard finite
difference operators, as in the first-order gradient operators (e.g. Roberts,
Sobel) or the second-order Laplacian operator.
The difference operation accentuates intensity changes and transforms
the image into a representation from which properties of these
changes can be extracted more easily.
A significant intensity change gives rise to a peak in the first derivative or
a zero crossing in the second derivative of the smoothed intensities.
These peaks, or zero crossings, can be detected easily, and properties
such as the position, sharpness and height of the peaks indicate the
location, sharpness and contrast of the intensity changes in the image.
Edge elements can be identified from the edge-enhanced image and
can then be linked to form complete boundaries of the regions of
interest.
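The edge-enhancement step can be reproduced with the finite-difference operators named here. The following sketch computes the first-order Sobel gradient magnitude and the second-order Laplacian with OpenCV; the smoothing kernel and file name are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)       # hypothetical file name
img = cv2.GaussianBlur(img, (5, 5), 0)                    # smooth before differentiating

# First-order gradient (Sobel): intensity changes appear as peaks in |gradient|
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
magnitude = np.sqrt(gx * gx + gy * gy)

# Second-order (Laplacian): intensity changes appear as zero crossings
laplacian = cv2.Laplacian(img, cv2.CV_32F, ksize=3)
```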
Feature Extraction
• During this phase the inherent characteristics or
features of different regions within the image are
identified and checked against predetermined
standards.
• This description should be invariant to the position,
orientation and scale of the object.
• A number of basic parameters, such as the minimum
enclosing rectangle and the centre of area (the centre
may be taken as an object-centred origin from which a
series of feature descriptors can be developed), may be
derived from an arbitrary shape and used for
classification and position information.
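A sketch of extracting the basic shape parameters mentioned above (centre of area and minimum enclosing rectangle) from a segmented binary image with OpenCV. It assumes the object of interest is the largest white region; the file name is a placeholder.

```python
import cv2

binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)    # hypothetical segmented image
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
blob = max(contours, key=cv2.contourArea)                # assume the largest region is the object

m = cv2.moments(blob)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]        # centre of area (object origin)
(x, y), (w, h), angle = cv2.minAreaRect(blob)            # minimum enclosing (rotated) rectangle
print(f"centroid=({cx:.1f},{cy:.1f}) size=({w:.1f}x{h:.1f}) orientation={angle:.1f} deg")
```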
Image Classification (Analysis)
• The classification sub-system is concerned with pattern recognition or image classification. This
process utilizes some or all of the extracted features to decide to which category of objects the
unknown object belongs.
• There are three main techniques for classification
• Template matching
• Statistically based approaches
• Neural network approach
• Template matching is used in situations where the objects to be identified have well defined
and highly 'differentiated’ features, for example standard alphanumeric character fonts. In such
cases an unknown character is compared with a set of templates or masks, each of which fits
just one character uniquely.
• Statistical techniques can be selected to provide optimum classification performance for more
varied industrial applications.
• If the vision task is well constrained then classification may be made via a simple tree
searching algorithm where classification proceeds by making branching choices on the basis of
single feature parameters. In more complex cases, n features are combined to create a 'feature
vector' which places a candidate object within the n-dimensional feature space. Provided that
the features have been properly chosen to divide the allowable range of candidate objects into
well separated 'clusters', then classification merely consists of dividing the space with one or
more 'decision surfaces', such that each decision surface reliably separates two clusters.
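The feature-vector idea can be illustrated in a few lines of NumPy: each object becomes a point in n-dimensional feature space, and classification assigns it to the nearest cluster centre (a simple stand-in for a decision surface). The feature values and cluster centres below are invented for illustration.

```python
import numpy as np

# Cluster centres learned from known examples (hypothetical values): [area, elongation, perimeter]
class_centres = {
    "bolt":   np.array([1200.0, 4.5, 220.0]),
    "washer": np.array([ 900.0, 1.0, 110.0]),
}

def classify(feature_vector):
    """Assign the candidate object to the nearest cluster in feature space."""
    distances = {name: np.linalg.norm(feature_vector - c) for name, c in class_centres.items()}
    return min(distances, key=distances.get)

unknown = np.array([950.0, 1.1, 118.0])   # feature vector of an unknown object
print(classify(unknown))                   # -> "washer"
```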
Industrial Vision: Image acquisition
CCD camera
Digitalization
Data acquisition cards
Vision software
Cameras and sensors; lenses
Image acquisition: CCD camera
(Diagram: photons passing through the gates into the silicon generate charges that are collected in a potential well.)
Image acquisition: CCD sensor
Image acquisition: digitalisation
(Diagram: an A/D converter maps the analogue video voltage to an 8-bit grey value in the range 0...255; for example 0.7 V maps to grey value 255 and 0.348 V to 127 - the higher the voltage, the brighter the pixel.)
Digitalisation
The frame grabber lays a pixel mask over the camera image and converts it into an 8-bit grayscale digital image. Pixel = Picture Element.
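A minimal sketch of this quantisation step, assuming a 0-0.7 V analogue range mapped onto 8-bit grey values (the voltage range is taken from the diagram above; everything else is illustrative).

```python
import numpy as np

FULL_SCALE_VOLTS = 0.7          # analogue video level that maps to grey value 255

def digitise(volts):
    """Quantise an analogue voltage to an 8-bit grey value (0...255)."""
    grey = round(volts / FULL_SCALE_VOLTS * 255)
    return int(np.clip(grey, 0, 255))

print(digitise(0.7))    # 255 (white)
print(digitise(0.348))  # ~127 (mid grey)
print(digitise(0.0))    # 0 (black)
```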
Digitalisation
Grayscale image
& numeric representation
255 255 255 255 253 88 74 73 72 72 75 175 255 255 255
255 255 255 255 250 82 75 74 73 74 73 190 255 255 255
255 255 255 255 231 80 73 72 72 72 76 197 255 255 255
255 255 255 255 232 83 73 73 73 76 75 172 255 255 255
255 255 255 255 226 79 75 74 74 76 75 184 255 255 255
255 255 255 255 220 84 75 73 76 79 74 159 255 255 255
255 255 255 255 224 83 76 74 77 75 75 156 255 255 255
255 255 255 255 207 90 75 76 78 77 81 172 255 255 255
255 255 255 255 252 107 75 75 80 79 79 162 255 255 255
255 255 255 255 249 136 77 76 89 81 99 217 255 255 255
255 255 255 255 255 183 78 75 80 81 120 248 255 255 255
255 255 255 255 255 249 86 76 74 84 201 255 255 255 255
255 255 255 255 255 255 115 77 77 98 251 255 255 255 255
255 255 255 255 255 255 193 80 78 143 255 255 255 255 255
255 255 255 255 255 255 217 85 78 173 255 255 255 255 255
255 255 255 255 255 255 248 97 79 220 255 255 255 255 255
255 255 255 255 255 255 255 119 80 224 255 255 255 255 255
255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
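The numeric representation above can be reproduced for any image: after loading, the image is simply a 2-D array of grey values that can be printed or indexed directly. The file name and crop coordinates below are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
print(img.shape, img.dtype)                           # e.g. (480, 640) uint8

crop = img[0:22, 0:15]                                # a 22x15 block, as in the table above
np.set_printoptions(linewidth=200)
print(crop)                                           # grey values 0 (black) ... 255 (white)
```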
Video sources
The video source can be:
• Video camera
• Camcorder
• Video recorder (VCR)
• Television broadcasts
• X-ray equipment
• Scanning Electron Microscope (SEM)
• CT scanner
Composite video = signal containing both video data (luminance + colour)
and the timing (synchronisation) information. It is the standard which
interconnects almost all video equipment (TVs, laserdisc, videorecorders,
camcorders) at home.
Examples of composite video standards:
• RS-170:
• used in North America and Japan
• Monochrome signal
• Spatial resolution: 640 pixels x 480 lines
• Frequency: 60 fields/second (equivalent to 30 frames/second)
• NTSC/RS-330
• used in North America and Japan
• Equivalent to RS-170 but colour information is superimposed on the
monochrome signal.
• NTSC = National Television System Committee
Signal types for Image acquisition boards
More composite video standards:
• CCIR
• used in Northern Europe
• Monochrome signal
• Spatial resolution: 768 pixels x 576 lines
• Frequency: 50 fields/second (equivalent to 25 frames/second)
• CCIR = Comité Consultatif International Radio
• PAL
• used in Northern Europe
• Equivalent to CCIR but colour information is superimposed on the
monochrome signal.
• PAL = Phase Alternating Line
• SECAM
• used in France, Russia and the former Soviet republics
• Equivalent to CCIR but colour information is superimposed on the
monochrome signal.
• SECAM = Séquentiel Couleur Avec Mémoire
Signal types for Image acquisition boards
S-Video (also called Y/C video): luminance (Y) and chrominance (C) are separate
signals. The Y signal contains timing (synchronisation) information. S-video can be
transported over 4 pin mini DIN connector, or over SCART connector.
Some image sources produce “nonstandard” video signals:
• Video and timing information can vary in format as well as in single or multiple
signals. They do not adhere to particular spatial resolutions, signal timing
schemes, signal characteristics … Consult the documentation provided with
your video source.
Progressive scan (25-30 frames/sec) cameras produce non-interlaced signals.
All previous camera signals are analogue.
DIGITAL CAMERAS: No frame grabber required!
• Cameras with FireWire (IEEE 1394) interface.
Supported by Apple, Windows XP
• Cameras with USB interface
(Example devices pictured: Fuga, Allegro.)
Image acquisition boards
• The video capture device is often called frame grabber card.
• Frame grabber puts a pixel mask over the image: the card converts the
analogue image (or images) supplied by a video source into a digital array
(or arrays) of data points.
• It is a plug-in card (PCI) with an A/D convertor. The ADC must have video
speed: 20 MHz or higher (30 or 25 video frames per second, 300 kB [640 x
480 x 8 bit] per frame).
• Other features:
• input multiplexer (to select one of the 4 inputs)
• Colour notch filter = chrominance filter (to acquire monochrome signals
from colour sources)
• Programmable gain stage (to match the signal into the ADC input range)
• Timing and acquisition control (to synchronise grabbing with sync pulses
of incoming signal: PLL or Digital Clock Synchronisation)
• Camera control stage (to send to the camera or to receive from the
camera setup and control signals, e.g. horizontal and vertical sync
signals, pixel clock and reset signals)
• Most cards provide digital I/O for input or output operations, to
communicate with external digital devices (e.g. industrial process). This
saves a separate I/O board.
Block diagram of analog frame grabber (© Data Translation)
Image acquisition boards (continued)
• Plug-in cards (image grabber, frame grabber card) for analogue cameras
• Are plugged in at a VME or PCI bus
• Are delivered with Windows 98 or NT drivers
• Accept cameras according to the EIA (30 frames/sec) or CCIR (25)
standards
• Good cards have their own processor (DMA data transfer to PC) and
large RAM
• Others (cheaper ones) use the PC processor
• They accept the signals: S video, composite video TV or VCR signals
(NTSC/PAL/Secam)
• Some cards have camera control output
Image acquisition - Cameras
• Sensor types:
• Line
• Array
• Interface standards:
• CCIR / RS-170 (B&W, 50-60 fields/sec.)
• PAL / SECAM / NTSC (Colour)
• Progressive scan (25-30 frames/sec.)
• FireWire (IEEE 1394)
• USB
• Sensor technology:
• CCD (Charge Coupled Device)
• CMOS (Complementary Metal Oxide
Semiconductor); a CMOS camera can, for example,
produce a 1000x1000 pixel image
Spatial resolution
• The number of rows (N) from a video source generally corresponds
one-to-one with lines in the video image. The number of columns,
however, depends on the nature of the electronics that is used to
digitize the image. Different frame grabbers for the same video camera
might produce M = 384, 512, or 768 columns (pixels) per line.
• a CCIR / PAL image source can result in max 768 x 576 pixel image
• a RS-170 / NTSC source can result in max 640 x 480 pixel image
• Depending on video source or camera used, the spatial resolution can
range from 256 x 256 up to 4096 x 4096.
• Most applications use only the spatial resolution required. For fast
image transfer and manipulation, often 512 x 512 is used. For more
accurate image processing, 1024 x 1024 is common.
• The pixel aspect ratio (pixel width : pixel height) can be different from
1:1, typically 4:3. Some frame grabbers don't convert video data into
square pixels but into rectangular ones. This makes circles appear
oval and squares appear as rectangles.
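Where a frame grabber delivers non-square pixels, the distortion can be undone in software by resampling one axis. The sketch below rescales the width by the pixel aspect ratio using OpenCV; the 4:3 factor is an example value from the slide, not a universal constant.

```python
import cv2

PIXEL_ASPECT_RATIO = 4 / 3        # pixel width : pixel height, example value

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file name
h, w = img.shape

# Resample the horizontal axis so pixels become square and circles stay circular
square = cv2.resize(img, (round(w * PIXEL_ASPECT_RATIO), h),
                    interpolation=cv2.INTER_LINEAR)
```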
Spatial resolution
• Example 768 x 512 (aspect ratio 3 : 2)
Brightness resolution
• Brightness resolution = bit depth resolution: number of gray levels
(monochrome) or number of colours
• RS-170 / NTSC image: 8 bits = 256 gray levels
• A standard RS-170 image is 307 kB in size: 640 x 480 x 8 bit.
Interlaced / non interlaced formats
• A video signal consists of a series of lines. Horizontal sync pulses
separate the lines from each other.
• All composite video sources (RS-170/NTSC, CCIR/PAL) and some
nonstandard video sources transmit the lines in interlaced format: first
the odd (first field), afterwards the even lines (second field).
• Vertical sync pulses separate the fields from each other.
• Some nonstandard video sources transmit the lines in non-interlaced
format = progressive scan. Only one field, containing all the lines, is
transmitted.
• Progressive scan is recommended for fast moving images.
• If one is planning to use images that have been scanned from an
interlaced video source, it is important to know if the two half-images
have been appropriately "shuffled" by the digitization hardware or if that
should be implemented in software. Further, the analysis of moving
objects requires special care with interlaced video to avoid "zigzag"
edges.
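A sketch of the "shuffling" (field weaving) that may have to be done in software when the digitizer delivers the two fields separately. It assumes the first field holds lines 0, 2, 4, ... and the second field lines 1, 3, 5, ... of the full frame.

```python
import numpy as np

def weave_fields(odd_field, even_field):
    """Interleave two half-images (fields) into one full interlaced frame."""
    rows, cols = odd_field.shape
    frame = np.empty((2 * rows, cols), dtype=odd_field.dtype)
    frame[0::2] = odd_field    # first field -> lines 0, 2, 4, ...
    frame[1::2] = even_field   # second field -> lines 1, 3, 5, ...
    return frame

# Example with dummy fields of 288 lines each (CCIR: 2 x 288 = 576 lines)
odd = np.zeros((288, 768), dtype=np.uint8)
even = np.full((288, 768), 255, dtype=np.uint8)
print(weave_fields(odd, even).shape)   # (576, 768)
```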
Photo Sensors
Dip  lect2-Machine Vision Fundamentals
Dip  lect2-Machine Vision Fundamentals
Dip  lect2-Machine Vision Fundamentals
Dip  lect2-Machine Vision Fundamentals

Mais conteúdo relacionado

Mais procurados

digital image processing
digital image processingdigital image processing
digital image processingN.CH Karthik
 
Image processing- an introduction
Image processing- an introductionImage processing- an introduction
Image processing- an introductionAarohi Gupta
 
Aiar. unit v. machine vision 1462642546237
Aiar. unit v. machine vision 1462642546237Aiar. unit v. machine vision 1462642546237
Aiar. unit v. machine vision 1462642546237Kunal mane
 
Digital image processing
Digital image processingDigital image processing
Digital image processingmanpreetgrewal
 
application of digital image processing and methods
application of digital image processing and methodsapplication of digital image processing and methods
application of digital image processing and methodsSIRILsam
 
imageprocessing-abstract
imageprocessing-abstractimageprocessing-abstract
imageprocessing-abstractJagadeesh Kumar
 
Introduction to image processing-Class Notes
Introduction to image processing-Class NotesIntroduction to image processing-Class Notes
Introduction to image processing-Class NotesDr.YNM
 
Basics of digital image processing
Basics of digital image  processingBasics of digital image  processing
Basics of digital image processingzahid6
 
Vision Basics
Vision BasicsVision Basics
Vision BasicsDrHemaCR
 
Real time image processing ppt
Real time image processing pptReal time image processing ppt
Real time image processing pptashwini.jagdhane
 
Presentation on Digital Image Processing
Presentation on Digital Image ProcessingPresentation on Digital Image Processing
Presentation on Digital Image ProcessingSalim Hosen
 
Vision system for robotics and servo controller
Vision system for robotics and servo controllerVision system for robotics and servo controller
Vision system for robotics and servo controllerGowsick Subramaniam
 
Fingerprint Images Enhancement ppt
Fingerprint Images Enhancement pptFingerprint Images Enhancement ppt
Fingerprint Images Enhancement pptMukta Gupta
 
image processing
image processingimage processing
image processingDhriya
 
Digital Image Processing (DIP)
Digital Image Processing (DIP)Digital Image Processing (DIP)
Digital Image Processing (DIP)Srikanth VNV
 
Digital image processing
Digital image processingDigital image processing
Digital image processingRavi Jindal
 

Mais procurados (20)

Image processing
Image processing Image processing
Image processing
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 
digital image processing
digital image processingdigital image processing
digital image processing
 
Image processing- an introduction
Image processing- an introductionImage processing- an introduction
Image processing- an introduction
 
Application of image processing
Application of image processingApplication of image processing
Application of image processing
 
Aiar. unit v. machine vision 1462642546237
Aiar. unit v. machine vision 1462642546237Aiar. unit v. machine vision 1462642546237
Aiar. unit v. machine vision 1462642546237
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 
application of digital image processing and methods
application of digital image processing and methodsapplication of digital image processing and methods
application of digital image processing and methods
 
imageprocessing-abstract
imageprocessing-abstractimageprocessing-abstract
imageprocessing-abstract
 
Introduction to image processing-Class Notes
Introduction to image processing-Class NotesIntroduction to image processing-Class Notes
Introduction to image processing-Class Notes
 
Basics of digital image processing
Basics of digital image  processingBasics of digital image  processing
Basics of digital image processing
 
Vision Basics
Vision BasicsVision Basics
Vision Basics
 
Real time image processing ppt
Real time image processing pptReal time image processing ppt
Real time image processing ppt
 
Presentation on Digital Image Processing
Presentation on Digital Image ProcessingPresentation on Digital Image Processing
Presentation on Digital Image Processing
 
Vision system for robotics and servo controller
Vision system for robotics and servo controllerVision system for robotics and servo controller
Vision system for robotics and servo controller
 
Fingerprint Images Enhancement ppt
Fingerprint Images Enhancement pptFingerprint Images Enhancement ppt
Fingerprint Images Enhancement ppt
 
image processing
image processingimage processing
image processing
 
Digital Image Processing (DIP)
Digital Image Processing (DIP)Digital Image Processing (DIP)
Digital Image Processing (DIP)
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 

Destaque

Emva 2011 Plenair Final Compact
Emva 2011 Plenair Final CompactEmva 2011 Plenair Final Compact
Emva 2011 Plenair Final CompactDickgoudriaan
 
Agrosaw Sorting grading line
Agrosaw Sorting grading lineAgrosaw Sorting grading line
Agrosaw Sorting grading lineSANJEEV SAGAR
 
Machine vision project
Machine vision projectMachine vision project
Machine vision projectWei Ang
 
Machine learning application-automated fruit sorting technique
Machine learning application-automated fruit sorting techniqueMachine learning application-automated fruit sorting technique
Machine learning application-automated fruit sorting techniqueAnudeep Badam
 
Machine Vision applications development in MatLab
Machine Vision applications development in MatLabMachine Vision applications development in MatLab
Machine Vision applications development in MatLabSriram Emarose
 
Machine vision in food & beverages
Machine vision in food & beveragesMachine vision in food & beverages
Machine vision in food & beveragesAkshay Dhole
 
Application of image processing in material handling and (1)
Application of image processing in material handling and (1)Application of image processing in material handling and (1)
Application of image processing in material handling and (1)suyash dani
 
final year student mechanical project topics
final year student mechanical project topicsfinal year student mechanical project topics
final year student mechanical project topicsAbi Nesan
 
Automatic sorting machine (cpu)
Automatic sorting machine (cpu)Automatic sorting machine (cpu)
Automatic sorting machine (cpu)vishnucool
 
pick and place robotic arm
pick and place robotic armpick and place robotic arm
pick and place robotic armANJANA ANILKUMAR
 
Automatic intelligent industrial object sorter with conveyor belt
Automatic intelligent industrial object sorter with conveyor beltAutomatic intelligent industrial object sorter with conveyor belt
Automatic intelligent industrial object sorter with conveyor beltindianspandana
 
INDUSTRIAL APPLICATION OF MACHINE VISION ppt mrng finl
INDUSTRIAL APPLICATION OF MACHINE VISION ppt mrng finlINDUSTRIAL APPLICATION OF MACHINE VISION ppt mrng finl
INDUSTRIAL APPLICATION OF MACHINE VISION ppt mrng finlanil badiger
 
Machine vision
Machine visionMachine vision
Machine visiondjehlke
 
Word Power: 11 Techniques for Writing More Persuasive Copy
Word Power: 11 Techniques for Writing More Persuasive CopyWord Power: 11 Techniques for Writing More Persuasive Copy
Word Power: 11 Techniques for Writing More Persuasive CopyBarry Feldman
 
Basics of Robotics
Basics of RoboticsBasics of Robotics
Basics of RoboticsAmeya Gandhi
 

Destaque (20)

Emva 2011 Plenair Final Compact
Emva 2011 Plenair Final CompactEmva 2011 Plenair Final Compact
Emva 2011 Plenair Final Compact
 
Luigy Bertaglia Bortolo - Poster Final
Luigy Bertaglia Bortolo - Poster FinalLuigy Bertaglia Bortolo - Poster Final
Luigy Bertaglia Bortolo - Poster Final
 
Agrosaw Sorting grading line
Agrosaw Sorting grading lineAgrosaw Sorting grading line
Agrosaw Sorting grading line
 
Mechatriks automation - Vision Inspection/Machine Vision System
Mechatriks automation - Vision Inspection/Machine Vision SystemMechatriks automation - Vision Inspection/Machine Vision System
Mechatriks automation - Vision Inspection/Machine Vision System
 
seminar presentation
seminar presentationseminar presentation
seminar presentation
 
Machine vision project
Machine vision projectMachine vision project
Machine vision project
 
Machine learning application-automated fruit sorting technique
Machine learning application-automated fruit sorting techniqueMachine learning application-automated fruit sorting technique
Machine learning application-automated fruit sorting technique
 
Machine Vision applications development in MatLab
Machine Vision applications development in MatLabMachine Vision applications development in MatLab
Machine Vision applications development in MatLab
 
Machine vision in food & beverages
Machine vision in food & beveragesMachine vision in food & beverages
Machine vision in food & beverages
 
Application of image processing in material handling and (1)
Application of image processing in material handling and (1)Application of image processing in material handling and (1)
Application of image processing in material handling and (1)
 
final year student mechanical project topics
final year student mechanical project topicsfinal year student mechanical project topics
final year student mechanical project topics
 
Automatic sorting machine (cpu)
Automatic sorting machine (cpu)Automatic sorting machine (cpu)
Automatic sorting machine (cpu)
 
pick and place robotic arm
pick and place robotic armpick and place robotic arm
pick and place robotic arm
 
Automatic intelligent industrial object sorter with conveyor belt
Automatic intelligent industrial object sorter with conveyor beltAutomatic intelligent industrial object sorter with conveyor belt
Automatic intelligent industrial object sorter with conveyor belt
 
INDUSTRIAL APPLICATION OF MACHINE VISION ppt mrng finl
INDUSTRIAL APPLICATION OF MACHINE VISION ppt mrng finlINDUSTRIAL APPLICATION OF MACHINE VISION ppt mrng finl
INDUSTRIAL APPLICATION OF MACHINE VISION ppt mrng finl
 
Machine vision
Machine visionMachine vision
Machine vision
 
Word Power: 11 Techniques for Writing More Persuasive Copy
Word Power: 11 Techniques for Writing More Persuasive CopyWord Power: 11 Techniques for Writing More Persuasive Copy
Word Power: 11 Techniques for Writing More Persuasive Copy
 
Basics of Robotics
Basics of RoboticsBasics of Robotics
Basics of Robotics
 
robotics ppt
robotics ppt robotics ppt
robotics ppt
 
Robotics project ppt
Robotics project pptRobotics project ppt
Robotics project ppt
 

Semelhante a Dip lect2-Machine Vision Fundamentals

Digital Image Processing
Digital Image ProcessingDigital Image Processing
Digital Image ProcessingReshma KC
 
Rendering Algorithms.pptx
Rendering Algorithms.pptxRendering Algorithms.pptx
Rendering Algorithms.pptxSherinRappai
 
Robot Vision ,components for robot vision
Robot Vision ,components for robot visionRobot Vision ,components for robot vision
Robot Vision ,components for robot visionKRSavinJoseph
 
IRJET - Computer-Assisted ALL, AML, CLL, CML Detection and Counting for D...
IRJET -  	  Computer-Assisted ALL, AML, CLL, CML Detection and Counting for D...IRJET -  	  Computer-Assisted ALL, AML, CLL, CML Detection and Counting for D...
IRJET - Computer-Assisted ALL, AML, CLL, CML Detection and Counting for D...IRJET Journal
 
Machine vision.pptx
Machine vision.pptxMachine vision.pptx
Machine vision.pptxWorkCit
 
X-Ray Image Enhancement using CLAHE Method
X-Ray Image Enhancement using CLAHE MethodX-Ray Image Enhancement using CLAHE Method
X-Ray Image Enhancement using CLAHE MethodIRJET Journal
 
IRJET - Change Detection in Satellite Images using Convolutional Neural N...
IRJET -  	  Change Detection in Satellite Images using Convolutional Neural N...IRJET -  	  Change Detection in Satellite Images using Convolutional Neural N...
IRJET - Change Detection in Satellite Images using Convolutional Neural N...IRJET Journal
 
Remotely sensed image segmentation using multiphase level set acm
Remotely sensed image segmentation using multiphase level set acmRemotely sensed image segmentation using multiphase level set acm
Remotely sensed image segmentation using multiphase level set acmKriti Bajpai
 
Image Processing Training in Chandigarh
Image Processing Training in Chandigarh Image Processing Training in Chandigarh
Image Processing Training in Chandigarh E2Matrix
 
50424340-Machine-Vision3 (1).pptx
50424340-Machine-Vision3 (1).pptx50424340-Machine-Vision3 (1).pptx
50424340-Machine-Vision3 (1).pptxCHARLESAHIMANA
 
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...cscpconf
 
Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...csandit
 
Matlab Training in Jalandhar | Matlab Training in Phagwara
Matlab Training in Jalandhar | Matlab Training in PhagwaraMatlab Training in Jalandhar | Matlab Training in Phagwara
Matlab Training in Jalandhar | Matlab Training in PhagwaraE2Matrix
 
Matlab Training in Chandigarh
Matlab Training in ChandigarhMatlab Training in Chandigarh
Matlab Training in ChandigarhE2Matrix
 

Semelhante a Dip lect2-Machine Vision Fundamentals (20)

Image processing.pdf
Image processing.pdfImage processing.pdf
Image processing.pdf
 
Digital Image Processing
Digital Image ProcessingDigital Image Processing
Digital Image Processing
 
Rendering Algorithms.pptx
Rendering Algorithms.pptxRendering Algorithms.pptx
Rendering Algorithms.pptx
 
Robot Vision ,components for robot vision
Robot Vision ,components for robot visionRobot Vision ,components for robot vision
Robot Vision ,components for robot vision
 
IMAGE SEGMENTATION.
IMAGE SEGMENTATION.IMAGE SEGMENTATION.
IMAGE SEGMENTATION.
 
IRJET - Computer-Assisted ALL, AML, CLL, CML Detection and Counting for D...
IRJET -  	  Computer-Assisted ALL, AML, CLL, CML Detection and Counting for D...IRJET -  	  Computer-Assisted ALL, AML, CLL, CML Detection and Counting for D...
IRJET - Computer-Assisted ALL, AML, CLL, CML Detection and Counting for D...
 
DIP PPT (1).pptx
DIP PPT (1).pptxDIP PPT (1).pptx
DIP PPT (1).pptx
 
Machine vision.pptx
Machine vision.pptxMachine vision.pptx
Machine vision.pptx
 
Image segmentation using wvlt trnsfrmtn and fuzzy logic. ppt
Image segmentation using wvlt trnsfrmtn and fuzzy logic. pptImage segmentation using wvlt trnsfrmtn and fuzzy logic. ppt
Image segmentation using wvlt trnsfrmtn and fuzzy logic. ppt
 
X-Ray Image Enhancement using CLAHE Method
X-Ray Image Enhancement using CLAHE MethodX-Ray Image Enhancement using CLAHE Method
X-Ray Image Enhancement using CLAHE Method
 
IRJET - Change Detection in Satellite Images using Convolutional Neural N...
IRJET -  	  Change Detection in Satellite Images using Convolutional Neural N...IRJET -  	  Change Detection in Satellite Images using Convolutional Neural N...
IRJET - Change Detection in Satellite Images using Convolutional Neural N...
 
Remotely sensed image segmentation using multiphase level set acm
Remotely sensed image segmentation using multiphase level set acmRemotely sensed image segmentation using multiphase level set acm
Remotely sensed image segmentation using multiphase level set acm
 
image processing
image processing image processing
image processing
 
Image Processing Training in Chandigarh
Image Processing Training in Chandigarh Image Processing Training in Chandigarh
Image Processing Training in Chandigarh
 
Image processing.pptx
Image processing.pptxImage processing.pptx
Image processing.pptx
 
50424340-Machine-Vision3 (1).pptx
50424340-Machine-Vision3 (1).pptx50424340-Machine-Vision3 (1).pptx
50424340-Machine-Vision3 (1).pptx
 
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
 
Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...
 
Matlab Training in Jalandhar | Matlab Training in Phagwara
Matlab Training in Jalandhar | Matlab Training in PhagwaraMatlab Training in Jalandhar | Matlab Training in Phagwara
Matlab Training in Jalandhar | Matlab Training in Phagwara
 
Matlab Training in Chandigarh
Matlab Training in ChandigarhMatlab Training in Chandigarh
Matlab Training in Chandigarh
 

Último

HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARHAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARKOUSTAV SARKAR
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Arindam Chakraborty, Ph.D., P.E. (CA, TX)
 
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Call Girls Mumbai
 
Block diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptBlock diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptNANDHAKUMARA10
 
Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueBhangaleSonal
 
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxHOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxSCMS School of Architecture
 
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments""Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"mphochane1998
 
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...Amil baba
 
Computer Networks Basics of Network Devices
Computer Networks  Basics of Network DevicesComputer Networks  Basics of Network Devices
Computer Networks Basics of Network DevicesChandrakantDivate1
 
Online food ordering system project report.pdf
Online food ordering system project report.pdfOnline food ordering system project report.pdf
Online food ordering system project report.pdfKamal Acharya
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapRishantSharmaFr
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayEpec Engineered Technologies
 
AIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsAIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsvanyagupta248
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfJiananWang21
 
Computer Lecture 01.pptxIntroduction to Computers
Computer Lecture 01.pptxIntroduction to ComputersComputer Lecture 01.pptxIntroduction to Computers
Computer Lecture 01.pptxIntroduction to ComputersMairaAshraf6
 
Online electricity billing project report..pdf
Online electricity billing project report..pdfOnline electricity billing project report..pdf
Online electricity billing project report..pdfKamal Acharya
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityMorshed Ahmed Rahath
 

Último (20)

HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARHAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
 
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
 
Block diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptBlock diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.ppt
 
Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torque
 
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxHOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
 
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments""Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
 
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
 
Computer Networks Basics of Network Devices
Computer Networks  Basics of Network DevicesComputer Networks  Basics of Network Devices
Computer Networks Basics of Network Devices
 
Online food ordering system project report.pdf
Online food ordering system project report.pdfOnline food ordering system project report.pdf
Online food ordering system project report.pdf
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leap
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power Play
 
AIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsAIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech students
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
Computer Lecture 01.pptxIntroduction to Computers
Computer Lecture 01.pptxIntroduction to ComputersComputer Lecture 01.pptxIntroduction to Computers
Computer Lecture 01.pptxIntroduction to Computers
 
Online electricity billing project report..pdf
Online electricity billing project report..pdfOnline electricity billing project report..pdf
Online electricity billing project report..pdf
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna Municipality
 
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
 

Dip lect2-Machine Vision Fundamentals

  • 1. Digital Image Processing & Machine Vision Lecture # 2 By Dr. Abdul Rehman Abbasi
  • 2. Interesting Fact Vision accounts for about ____ of the data flowing into the human central nervous system 70%
  • 3. Feature Machine Vision Human Vision Spectral range Gamma rays to microwaves (10-11 - 10-1 m) Visible light (4x10-7 -7x10-7m) Spatial Resolution 4x106 pixels (area scan, growing rapidly), 8192 (line- scan) Effectively approximately 4000x4000 pixels Sensor size Small (approx.5x5x15 mm3) Very large Quantitative Yes. Capable of precise measurement of size, area No Ability to cope with unseen events Poor Good Performance on repetitive tasks Good Poor, due to fatigue and boredom Intelligence Low High (subjective) Light level variability Fixed, closely controlled Highly variable Light level (min) Equivalent to cloudy moonless night Quarter-moon light (Greater if dark-adaptation is extended) Strobe lighting and lasers Possible (good screening is needed for safety) Unsafe Consistency Good Poor Capital cost Moderate Low Running cost Low High
  • 4. Feature Machine Vision Human Vision Inspection cost, per unit Low High Ability to “program” in situ Limited.. Special interfaces make task easier Speech is effective Able to cope with multiple views in space and/or time Versatile Limited Able to work in toxic, biohazard areas Yes Not easily Non-standard scanning methods Line scan, circular scan, random scan, spiral- scan, radial scan Not possible Image storage Good Poor without photography or digital storage Optical aids Numerous available Limited
  • 6. Application features that make your vision system attractive includes 1. Inaccessible part (a robot in the way for example) 2. Hostile manufacturing environment 3. Possible part damage from physical contact 4. Need to measure large number of features 5. Predictable interaction with light 6. Poor or no visual access to part features of interests 7. Extremely poor visibility 8. Mechanical/electrical sensors provide the necessary data
  • 7. Machine Vision Process Frame Grabber Prepro- cessing Vision Engine Operation Interface I/O Vision software tool and algorithms Custom user Software Illumination Image sensor Object IMAGE FORMATION IMAGE CAPTURE IMAGE PROCESSING
  • 9. Image formation: Right illumination, optical sensor such as high resolution cameras, line scan cameras, frame grabber . Transformation of the visual image of a physical object and its intrinsic characteristics into set of digitized data that can be used by the image processing unit. Image processing: Image processing consists of image grabbing, image enhancement, feature extraction and output formatting. Function of image processing is to create a new image by altering the data in such a way that the features of interests are enhanced and the noise is reduced. Image analysis: The main function of image analysis is the automatic extraction of explicit information regarding the content of the image, for example, the shape, size or the range data and local orientation information from several two dimensional images. It utilizes several redundant information representations such as edges, boundaries, disparities shading etc. Most commonly used techniques are: template matching, statistical pattern recognition and the Hough transform. Decision making: Decision making is concerned with making a decision based on the description in the image and using AI to control the process or task
  • 10. Generic Model of Machine Vision • Scene constraints • Image acquisition • Preprocessing • Segmentation • Feature Extraction • Classification and/or Interpretation • Actuation
  • 11. Scene Constraints • Scene refers to the environment in which the task is taking place and into which the machine -vision system is to be placed. • The aim of the scene constraint sub-system is to reduce the complexity of the subsequent subsystems to a manageable level. This is achieved by proper exploitation of a priori constraints such as: knowledge of limited number of objects possible in the scene, knowledge of their surface finish and appearance etc. We can also impose new constraints such as: replacement of ambient light with carefully controlled lighting. • As is clear from the terminology itself the Scene refers to the industrial environment in which the manufacturing is being done and the machine vision system is to perform the required task in that environment. • The aim of this module is to reduce the complexity of all the subsequent sub-systems to a manageable level which is achieved by exploitation of existing constraints and imposition of new ones.
  • 12. Scene Constraints Two types of scene constraints can be applied 1. Inherent or natural constraints 2. Imposed constraints Inherent constraints • Characteristics of the material • Inherent features • Limitations, in the range of objects • Inherent positional limitations Imposed constraints • Control of object features • Control of object position • Control of lighting conditions
  • 13. Control of Lighting Conditions-1
  • 14. Control of Lighting Conditions-2
  • 15. Lighting Sources • LED illumination units • Metal halide light sources (“cold light sources” transmitted over fibre-optic cables) • Laser illumination units • Fluorescent light (high-frequency) • Halogen lamps
  • 16. Light Source Type Advantages Disadvantages LED Array of light-emitting diodes Can form many configurations within the arrays; single color source can be useful in some application Some features hard to see with single color source; large array required to light large area Fiber-Optic Illuminators Incandescent lamp in housing; light carried by optical fiber bundle to application Fiber bundles available in many configurations; heat and electrical power remote from application; easy access for lamp replacement Incandescent lamp has low efficiency, especially for blue light Fluorescent High-frequency tube or ring lamp Diffuse source; wide or narrow spectral range available; lamps are efficient and long lived Limited range of configurations; intensity control not available on some lamps Strobe Xenon arc strobe lamp, with either direct or fiber bundle light delivery Freezes rapidly moving parts; high peak illumination intensity Requires precise timing of light source and image capture electronics. May require eye protection for persons working near the application
  • 18. Image Acquisition • Translation from the light stimuli falling onto the photo sensors of a camera to a stored digital value within the computer’s memory. • Each digitized picture is typically of 512x512 pixels resolution, with each pixel representing a binary, grey or color value. • To ensure that no useful information is lost a proper choice of spatial and luminescence resolution parameters must be made. • Depending on particular application cameras with line scan or area scan elements can be made use of for image acquisition. • While area scan sensors have lower spatial resolution but they provide highly standardized interfacing to computers and do not need any relative motion between the object and the camera; the line scan sensors need relative motion to build 2-D image.
  • 19. Preprocessing-1 • To produce a form of the acquired image which is better suited for further operations the processes (contrast enhancement, and adjustment, filtering to remove noise and improve quality) modify and prepare pixel values for digitized image. • Fundamental information of the image is not changed during this module. • The initially acquired image has direct pixel by pixel relation to the original scene and thus lies in the spatial domain. • Transformations from spatial to frequency domain can be done using Fourier transforms, which is although not very computationally efficient operation.
  • 20. Preprocessing-2 • Low level processing for image improvement such as histogram manipulations (grey level shifting or equalization) involves noisy images clean up and highlight features of particular interest. • With the use of some transformations pixels are shared among grey levels which would enhance or alter the appearance of the image. • Histogram manipulations provide simple image improvement operations, either by grey level shifting or, equalization. • An image histogram is easily produced by recording the number of pixels at a particular grey level. • If this shows a bias towards the lower intensity grey levels, then some transformation to achieve a more equitable sharing of pixels among the grey levels would enhance or alter the appearance of the image. Such transformations will simply enhance or suppress contrast, and stretch or compress grey levels, without any alteration in the structural information present in the image.
  • 21. Preprocessing-3 • Another important class of spatial-domain algorithms performs pixel transformations in which the final value of a pixel is calculated as a function of a group of pixel values (its 'neighborhood') at a specified spatial location in the original image. • Many filtering algorithms for smoothing (low pass) and edge enhancement (high pass) fall firmly into this category. • This introduces the basic principle of 'windowing operations', in which a 2-D (two-dimensional) mask, or window, defining the neighborhood of interest is moved across the image, taking each pixel in turn as the centre; at each position the transformed value of the pixel of interest is calculated.
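The windowing principle can be sketched directly in Python/NumPy. The loop below is deliberately explicit rather than optimised, and the two example masks (mean and Laplacian) are standard low-pass and high-pass kernels, not taken from the slides.

```python
import numpy as np

def window_filter(image, mask):
    """Basic windowing operation: slide a 2-D mask across the image and,
    at each position, replace the centre pixel by the weighted sum of
    its neighbourhood."""
    m, n = mask.shape
    py, px = m // 2, n // 2
    padded = np.pad(image.astype(float), ((py, py), (px, px)), mode="edge")
    out = np.empty(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + m, x:x + n] * mask)
    return out

mean_mask = np.full((3, 3), 1.0 / 9.0)                  # smoothing (low-pass) kernel
laplacian_mask = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=float)     # edge-enhancing (high-pass) kernel
```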
  • 22. Low Pass Filter (5x5 median)
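Strictly speaking the median is a non-linear rank filter rather than a true low-pass filter, but it smooths in a similar way. A minimal sketch using SciPy's median_filter, with a synthetic placeholder image, might look like this:

```python
import numpy as np
from scipy.ndimage import median_filter

# Placeholder 8-bit grey-level image (stands in for an acquired frame)
img = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)

# Each output pixel becomes the median of its 5x5 neighbourhood; impulse
# ("salt and pepper") noise is suppressed while edges stay comparatively sharp.
smoothed = median_filter(img, size=5)
```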
  • 23. Image Segmentation-1 • The acquired image is broken up into meaningful regions or segments, i.e. the image is partitioned. • Segmentation is not concerned with what the image represents. Broadly, two approaches are employed: • Thresholding based on some predetermined criterion (global thresholding applies a single threshold value to the entire image; local thresholding partitions the image into sub-images and determines a threshold for each of them), and • Edge-based methods (digital versions of standard finite-difference operators accentuate intensity changes; a significant change gives rise to a peak in the first derivative or a zero crossing in the second derivative, which can be detected, and properties such as the position, sharpness and height of the peak indicate the location, sharpness and contrast of the intensity changes in the image). Edge elements can then be linked to form complete boundaries, as shown in Figure 1.5.
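A minimal Python/NumPy sketch of the two thresholding variants described above; the block size and the use of the block mean as the local criterion are illustrative assumptions, not prescriptions from the slides.

```python
import numpy as np

def global_threshold(image, t):
    """Global thresholding: one threshold value for the entire image."""
    return np.where(image > t, 255, 0).astype(np.uint8)

def local_threshold(image, block=64):
    """Local thresholding: partition the image into sub-images and choose
    a threshold (here the block mean) for each one separately."""
    out = np.zeros_like(image)
    for y in range(0, image.shape[0], block):
        for x in range(0, image.shape[1], block):
            sub = image[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = np.where(sub > sub.mean(), 255, 0)
    return out
```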
  • 24. Image Segmentation-2 The classical approach to edge-based segmentation begins with edge enhancement, which makes use of digital versions of standard finite-difference operators, as in the first-order gradient operators (e.g. Roberts, Sobel) or the second-order Laplacian operator. The difference operation accentuates intensity changes and transforms the image into a representation from which properties of these changes can be extracted more easily. A significant intensity change gives rise to a peak in the first derivative or a zero crossing in the second derivative of the smoothed intensities. These peaks, or zero crossings, can be detected easily, and properties such as the position, sharpness, and height of the peaks indicate the location, sharpness and contrast of the intensity changes in the image. Edge elements can be identified from the edge-enhanced image and then linked to form complete boundaries of the regions of interest.
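A short sketch of first-order (Sobel) edge enhancement using SciPy; the threshold used to pick out candidate edge elements is an illustrative choice, and the image is again a synthetic placeholder.

```python
import numpy as np
from scipy.ndimage import sobel

# Placeholder grey-level image, as in the earlier sketches
img = np.random.randint(0, 256, size=(256, 256)).astype(float)

gx = sobel(img, axis=1)            # first-order differences across columns
gy = sobel(img, axis=0)            # first-order differences across rows
magnitude = np.hypot(gx, gy)       # gradient magnitude: peaks mark intensity changes

# Keep pixels whose gradient clearly stands out as candidate edge elements
edges = magnitude > magnitude.mean() + 2 * magnitude.std()
```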
  • 25. Feature Extraction • During this phase the inherent characteristics or features of different regions within the image are identified and checked against predetermined standards. • This description should be invariant to the position, orientation and scale of the object. • A number of basic parameters, such as the minimum enclosing rectangle and the centre of area (the centre may be taken as an object-centred origin from which a series of feature descriptors can be developed), may be derived from an arbitrary shape and used for classification and position information.
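A minimal sketch of extracting such basic shape parameters from a binary object mask with NumPy; the feature set shown (area, centroid, enclosing rectangle) matches the parameters named above.

```python
import numpy as np

def shape_features(mask):
    """Basic region descriptors from a binary object mask: area,
    centre of area (centroid) and minimum enclosing rectangle."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    centroid = (ys.mean(), xs.mean())                        # centre of area
    bounding_box = (ys.min(), xs.min(), ys.max(), xs.max())  # enclosing rectangle
    return area, centroid, bounding_box
```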
  • 26. Image Classification (Analysis) • The classification sub-system is concerned with pattern recognition or image classification. This process uses some or all of the extracted features to decide which category of objects the unknown object belongs to. • There are three main techniques for classification: • Template matching • Statistically based approaches • Neural network approaches • Template matching is used in situations where the objects to be identified have well defined and highly 'differentiated' features, for example standard alphanumeric character fonts. In such cases an unknown character is compared with a set of templates or masks, each of which fits just one character uniquely. • Statistical techniques can be selected to provide optimum classification performance for more varied industrial applications. • If the vision task is well constrained then classification may be made via a simple tree-searching algorithm, where classification proceeds by making branching choices on the basis of single feature parameters. In more complex cases, n features are combined to create a 'feature vector' which places a candidate object within the n-dimensional feature space. Provided that the features have been properly chosen to divide the allowable range of candidate objects into well separated 'clusters', classification merely consists of dividing the space with one or more 'decision surfaces', such that each decision surface reliably separates two clusters.
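As a sketch of the feature-space idea, the snippet below implements a simple minimum-distance classifier; the class names and cluster centres are hypothetical examples, not drawn from the slides.

```python
import numpy as np

def nearest_class(feature_vector, class_centres):
    """Minimum-distance classification: assign the unknown object to the
    class whose cluster centre is nearest in the n-dimensional feature space."""
    names = list(class_centres)
    distances = [np.linalg.norm(feature_vector - class_centres[k]) for k in names]
    return names[int(np.argmin(distances))]

# Hypothetical two-feature example (area, elongation)
centres = {"washer": np.array([1200.0, 1.0]),
           "bolt":   np.array([800.0, 4.5])}
print(nearest_class(np.array([850.0, 4.2]), centres))   # -> "bolt"
```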
  • 27. Industrial Vision [diagram]: image acquisition (CCD camera, cameras and sensors, lenses), digitalisation (data acquisition cards), vision software.
  • 30. Image acquisition: digitalisation [diagram]: an A/D converter maps the analogue video voltage to an 8-bit grey value (0...255); e.g. 0.7 V maps to grey value 255 and 0.348 V to grey value 127 (brighter pixels give higher values).
  • 31. Digitalisation [diagram]: a pixel mask laid over the camera image yields an 8-bit grayscale digital image (Pixel = Picture Element).
  • 32. Digitalisation: grayscale image & numeric representation [figure]: each pixel is stored as an 8-bit value; in the example grid, a dark stroke with values of roughly 72-90 runs through a white background of 255s.
  • 33. Video sources The video source can be: • Video camera • Camcorder • Video recorder (VCR) • Television broadcasts • X-ray equipment • Scanning Electron Microscope (SEM) • CT scanner
  • 34. Signal types for Image acquisition boards Composite video = a signal containing both the video data (luminance + colour) and the timing (synchronisation) information. It is the standard that interconnects almost all home video equipment (TVs, laserdisc players, video recorders, camcorders). Examples of composite video standards: • RS-170: • used in North America and Japan • Monochrome signal • Spatial resolution: 640 pixels x 480 lines • Frequency: 60 fields/second (equivalent to 30 frames/second) • NTSC/RS-330 • used in North America and Japan • Equivalent to RS-170 but colour information is superimposed on the monochrome signal. • NTSC = National Television System Committee
  • 35. Signal types for Image acquisition boards More composite video standards: • CCIR • used in Northern Europe • Monochrome signal • Spatial resolution: 768 pixels x 576 lines • Frequency: 50 fields/second (equivalent to 25 frames/second) • CCIR = Comité Consultatif International des Radiocommunications • PAL • used in Northern Europe • Equivalent to CCIR but colour information is superimposed on the monochrome signal. • PAL = Phase Alternating Line • SECAM • used in France, Russia and the former Soviet republics • Equivalent to CCIR but colour information is superimposed on the monochrome signal. • SECAM = Séquentiel Couleur à Mémoire
  • 36. Signal types for Image acquisition boards S-Video (also called Y/C video): luminance (Y) and chrominance (C) are separate signals. The Y signal contains timing (synchronisation) information. S-video can be transported over 4 pin mini DIN connector, or over SCART connector. Some image sources produce “nonstandard” video signals: • Video and timing information can vary in format as well as in single or multiple signals. They do not adhere to particular spatial resolutions, signal timing schemes, signal characteristics … Consult the documentation provided with your video source. Progressive scan (25-30 frames/sec) cameras produce non interlaced signals. All previous camera signals are analogue. DIGITAL CAMERAS: No frame grabber required! • Cameras with FireWire (IEEE 1394) interface. Supported by Apple, Windows XP • Cameras with USB interface
  • 38. Image acquisition boards • The video capture device is often called a frame grabber card. • The frame grabber puts a pixel mask over the image: the card converts the analogue image (or images) supplied by a video source into a digital array (or arrays) of data points. • It is a plug-in card (PCI) with an A/D converter. The ADC must run at video speed: 20 MHz or higher (30 or 25 video frames per second, about 300 kB [640 x 480 x 8 bit] per frame; see the data-rate sketch below). • Other features: • input multiplexer (to select one of the 4 inputs) • Colour notch filter = chrominance filter (to acquire monochrome signals from colour sources) • Programmable gain stage (to match the signal to the ADC input range) • Timing and acquisition control (to synchronise grabbing with the sync pulses of the incoming signal: PLL or digital clock synchronisation) • Camera control stage (to send setup and control signals to the camera, or receive them from it, e.g. horizontal and vertical sync signals, pixel clock and reset signals) • Most cards provide digital I/O for input or output operations, to communicate with external digital devices (e.g. an industrial process). This saves a separate I/O board.
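A back-of-the-envelope check of the figures quoted above (a sketch only; the frame size and frame rates are the values assumed on the slide):

```python
# Rough data-rate check for a monochrome composite source
width, height, bytes_per_pixel = 640, 480, 1
frame_bytes = width * height * bytes_per_pixel          # 307,200 B (~300 kB per frame)
mb_per_s_at_30fps = frame_bytes * 30 / 1e6              # ~9.2 MB/s (RS-170/NTSC rate)
mb_per_s_at_25fps = frame_bytes * 25 / 1e6              # ~7.7 MB/s (CCIR/PAL rate)
print(frame_bytes, mb_per_s_at_30fps, mb_per_s_at_25fps)
```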
  • 39. Block diagram of analog frame grabber © Data Translation
  • 40. Image acquisition boards (continued) • Plug-in cards (image grabber, frame grabber card) for analogue cameras • Are plugged in at a VME or PCI bus • Are delivered with Windows 98 or NT drivers • Accept cameras according to the EIA (30 frames/sec) or CCIR (25) standards • Good cards have their own processor (DMA data transfer to PC) and large RAM • Others (cheaper ones) use the PC processor • They accept the signals: S video, composite video TV or VCR signals (NTSC/PAL/Secam) • Some cards have camera control output
  • 41. Image acquisition - Cameras • Sensor types: • Line • Array • Interface standards: • CCIR / RS-170 (B&W, 50-60 fields/sec.) • PAL / SECAM / NTSC (Colour) • Progressive scan (25-30 frames/sec.) • FireWire (IEEE 1394) • USB • Sensor technology: • CCD (Charge Coupled Device) • CMOS (Complementary Metal Oxide Semiconductor); a typical CMOS camera produces, for example, a 1000 x 1000 pixel image
  • 42. Spatial resolution • The number of rows (N) from a video source generally corresponds one-to-one with lines in the video image. The number of columns, however, depends on the nature of the electronics used to digitize the image. Different frame grabbers for the same video camera might produce M = 384, 512, or 768 columns (pixels) per line. • A CCIR / PAL image source can yield at most a 768 x 576 pixel image • An RS-170 / NTSC source can yield at most a 640 x 480 pixel image • Depending on the video source or camera used, the spatial resolution can range from 256 x 256 up to 4096 x 4096. • Most applications use only the spatial resolution required. For fast image transfer and manipulation, 512 x 512 is often used. For more accurate image processing, 1024 x 1024 is common. • The pixel aspect ratio (pixel width : pixel height) can differ from 1:1, typically 4:3. Some frame grabbers do not convert video data into square pixels but into rectangular ones. This makes circles appear oval and squares appear as rectangles (see the resampling sketch below).
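A minimal sketch of correcting non-square pixels by horizontal resampling; the 4:3 pixel aspect ratio and the nearest-neighbour interpolation are illustrative assumptions, not requirements from the slides.

```python
import numpy as np

def to_square_pixels(frame, pixel_aspect=4.0 / 3.0):
    """Nearest-neighbour horizontal resampling so that an image digitised
    with non-square pixels (pixel width : height = pixel_aspect) displays
    correctly on square pixels; circles stop looking oval."""
    height, width = frame.shape[:2]
    new_width = int(round(width * pixel_aspect))
    src_cols = np.clip((np.arange(new_width) / pixel_aspect).astype(int), 0, width - 1)
    return frame[:, src_cols]
```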
  • 43. Spatial resolution • Example: 768 x 512 (aspect ratio 3 : 2) [diagram: a 768-pixel x 512-row window within the maximum CCIR frame] • Brightness resolution • Brightness resolution = bit-depth resolution: the number of grey levels (monochrome) or the number of colours • RS-170 / NTSC image: 8 bits = 256 grey levels • A standard RS-170 image is about 307 kB in size: 640 x 480 x 8 bit.
  • 44. Interlaced / non-interlaced formats • A video signal consists of a series of lines. Horizontal sync pulses separate the lines from each other. • All composite video sources (RS-170/NTSC, CCIR/PAL) and some nonstandard video sources transmit the lines in interlaced format: first the odd lines (first field), then the even lines (second field). • Vertical sync pulses separate the fields from each other. • Some nonstandard video sources transmit the lines in non-interlaced format = progressive scan. Only one field, containing all the lines, is transmitted. • Progressive scan is recommended for fast-moving scenes. • If one is planning to use images scanned from an interlaced video source, it is important to know whether the two half-images have been appropriately "shuffled" by the digitization hardware or whether that should be implemented in software. Further, the analysis of moving objects requires special care with interlaced video to avoid "zigzag" edges (see the field-splitting sketch below).
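A minimal NumPy sketch of separating and re-weaving the two fields of an interlaced frame; the function names are hypothetical, and real deinterlacing of moving scenes usually needs more than a simple weave.

```python
import numpy as np

def split_fields(frame):
    """Separate an interlaced frame into its two fields (odd and even lines),
    e.g. to avoid 'zigzag' edges when analysing moving objects."""
    return frame[0::2, :], frame[1::2, :]

def weave(first_field, second_field):
    """Re-interleave two fields into a full frame (simple 'weave' deinterlacing)."""
    rows = first_field.shape[0] + second_field.shape[0]
    frame = np.empty((rows, first_field.shape[1]), dtype=first_field.dtype)
    frame[0::2, :] = first_field
    frame[1::2, :] = second_field
    return frame
```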