Object Recognition
Object recognition in computer vision is the task of finding a given object in an image or video sequence. Humans recognize a multitude of objects in images with little effort, even though the appearance of an object may vary with viewpoint, size, and scale, or when the object is translated or rotated. Objects can even be recognized when they are partially obstructed from view. This task is still a challenge for computer vision systems in general.

Object recognition is concerned with determining the identity of an object observed in an image from a set of known labels. Oftentimes, it is assumed that the object being observed has already been detected, or that there is a single object in the image.

An object recognition system finds objects in the real world from an image of the world, using object models that are known a priori. Humans perform object recognition effortlessly and instantaneously.
An object recognition system must have the
following components to perform the task:


• Model Database
• Feature Detector
• Hypothesizer
• Hypothesis Verifier
• Model Database - contains all the models known to the system. The information in the model database depends on the approach used for recognition. The models of objects are abstract feature vectors, as discussed later in this section. A feature is some attribute of the object. Size, color, and shape are the commonly used features.

• Feature Detector – applies operators to images and identifies locations of features that help in forming object hypotheses. The features used by a system depend on the types of objects to be recognized.

• Hypothesizer – using the features detected in the image, it assigns likelihoods to the objects present in the scene. Certain features are used to reduce the search space for the recognizer.

• Hypothesis Verifier – uses object models to verify the hypotheses and refine the likelihood of objects. The system then selects the object with the highest likelihood, based on all the evidence, as the correct object.
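The sketch below walks through that hypothesize-and-verify flow in Python. The model database contents, feature names, and scoring rules are made-up placeholders for illustration, not a real recognition algorithm.

# Hypothetical Model Database: each model is an abstract feature vector.
MODEL_DB = {
    "bolt":   {"shape": "elongated", "size": 0.20, "color": "grey"},
    "washer": {"shape": "round",     "size": 0.10, "color": "grey"},
}

def detect_features(image):
    """Feature Detector: here the 'image' is already a dict of feature values."""
    return image

def hypothesize(features):
    """Hypothesizer: give an initial likelihood to every model with a matching shape."""
    return [(name, 0.5) for name, model in MODEL_DB.items()
            if model["shape"] == features["shape"]]

def verify(name, features, likelihood):
    """Hypothesis Verifier: refine the likelihood using the remaining features."""
    model = MODEL_DB[name]
    if model["color"] == features["color"]:
        likelihood += 0.3
    likelihood -= abs(model["size"] - features["size"])
    return likelihood

def recognize(image):
    features = detect_features(image)
    scored = [(verify(name, features, lik), name) for name, lik in hypothesize(features)]
    return max(scored)[1] if scored else None

print(recognize({"shape": "round", "size": 0.12, "color": "grey"}))   # -> washer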
• Object or model representation: How should objects be represented in the model database? – For some objects, geometric descriptions may be available and may also be efficient, while for another class one may have to rely on generic or functional features.
• Feature Extraction: Which features should be detected, and how can they be detected reliably? – Most features can be computed in two-dimensional images, but they are related to three-dimensional characteristics of objects.
• Feature-model matching: How can a set of likely objects be selected based on the feature matching? – This step uses knowledge of the application domain to assign some kind of probability or confidence measure to different objects in the domain (see the sketch after this list).
• Object Verification: How can object models be used to select the most likely object from the set of probable objects in a given image? – The presence of each likely object can be verified by using its model.
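As a toy illustration of the feature-model matching step, the snippet below treats each model as an abstract feature vector (say, measures of size, color, and shape) and turns the distance between the image's feature vector and each model into a confidence score. The model names and numbers are invented for the example.

import numpy as np

MODELS = {
    "bolt":   np.array([0.20, 0.05, 0.90]),
    "washer": np.array([0.10, 0.10, 0.10]),
}

def match_confidences(image_features):
    """Confidence for every model: 1.0 for a perfect match, falling towards 0."""
    return {name: 1.0 / (1.0 + float(np.linalg.norm(image_features - vec)))
            for name, vec in MODELS.items()}

print(match_confidences(np.array([0.18, 0.07, 0.85])))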
• Scene Constancy: The scene complexity will depend on whether the images are acquired in conditions (illumination, background, camera parameters, and viewpoint) similar to those of the models.
• Image-model spaces: Images may be obtained such that three-dimensional objects can be considered two-dimensional.
• Number of objects in the model database: If the number of objects is very small, one may not need the hypothesis formation stage.
• Number of objects in an image and possibility of occlusion: If there is only one object in an image, it may be completely visible.
Two-Dimensional

In many applications, images are acquired from a distance sufficient to
consider the projection to be orthographic. If the objects are always in one
stable position in the scene, then they can be considered two-dimensional.
In these applications, one can use a two-dimensional model base. There are
two possible cases:

• Objects will not be occluded, as in remote sensing and many industrial applications.

• Objects may be occluded by other objects of interest or be partially visible, as in the bin-of-parts problem.
Three-Dimensional
If the images of objects can be obtained from arbitrary viewpoints, then an object may appear very different in two of its views. For object recognition using three-dimensional models, the perspective effect and the viewpoint of the image have to be considered. The fact that the models are three-dimensional while the images contain only two-dimensional information affects object recognition approaches. Again, a key factor to be considered is whether objects are separated from other objects or not.

For three-dimensional cases, one should consider the information used in the object recognition task. Two different cases are:

• Intensity: There is no surface information available explicitly in intensity images. Using intensity values, features corresponding to the three-dimensional structure of objects must be recognized.
• 2.5-dimensional images: In many applications, surface representations with viewer-centered coordinates are available, or can be computed, from images. This information can be used in object recognition.
3D object recognition based on the use of colored stripes, so-called structured light, is useful in applications ranging from 3D face recognition to measuring suspension systems and ensuring a perfect fit for hearing aids.
This representation describes objects in a coordinate system attached to the objects themselves. The description is usually based on three-dimensional features or descriptions of the objects, and is therefore independent of the camera parameters and location. Thus, to make the representation useful for object recognition, it should have enough information to produce object images, or object features in images, for a known camera and viewpoint.
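The snippet below is a tiny sketch of that idea: object-attached 3D model points are projected into an image with a known pinhole camera. The model points, focal length, and pose are made-up illustration values.

import numpy as np

# Three model points in the object's own coordinate system (metres).
model_points = np.array([[0.0, 0.0, 0.0],
                         [0.1, 0.0, 0.0],
                         [0.0, 0.1, 0.0]])

f = 800.0                                            # focal length in pixels
K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]])  # camera intrinsics
R = np.eye(3)                                        # known camera rotation
t = np.array([0.0, 0.0, 1.0])                        # object 1 m in front of the camera

camera_points = model_points @ R.T + t               # object -> camera coordinates
homogeneous = camera_points @ K.T
pixels = homogeneous[:, :2] / homogeneous[:, 2:3]    # perspective divide
print(pixels)                                        # predicted image feature locations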



Figure: (a) an object is shown with its prominent local features highlighted; (b) a graph representation is used for object recognition with a graph-matching approach.
Many types of features are used for object recognition. Most features are based on
either regions or boundaries in an image. It is assumed that a region or a closed
boundary corresponds to an entity that is either an object or a part of an object.




Figure: an object and its partial representation using multiple local and global features.
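The region- and boundary-based features just described can be computed with standard image operations. The sketch below uses OpenCV on a synthetic binary image (a single filled circle stands in for a segmented object); the feature choices are illustrative only.

import cv2
import numpy as np

# Synthetic binary image with one "object" region.
binary = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(binary, (100, 100), 40, 255, thickness=-1)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    area = cv2.contourArea(contour)            # region-based feature
    perimeter = cv2.arcLength(contour, True)   # boundary-based feature
    # Circularity combines both; it is close to 1.0 for a circular region.
    circularity = 4 * np.pi * area / (perimeter ** 2) if perimeter else 0.0
    print(f"area={area:.0f}  perimeter={perimeter:.1f}  circularity={circularity:.2f}")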
Depending on the complexity of the problem, a recognition strategy
may need to use either or both the hypothesis formation and
verification steps
Face recognition is a rapidly growing field today because of its many uses in biometric authentication, security, and many other areas. Many problems exist due to the factors that can affect the photos. When processing images, one must take into account variations in lighting, image quality, the person's pose, and facial expressions, among others. To identify individuals correctly, there must be some way to account for all these variations and still arrive at a valid answer.
Figure: differences in lighting and facial expression.
Face recognition is an image processing application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database.




• Some facial recognition algorithms identify faces by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features.
• Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face detection.
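As a minimal sketch of the landmark/feature-comparison idea, the snippet below uses the open-source face_recognition library to encode faces and compare them; the image file names are hypothetical placeholders.

import face_recognition

known_image = face_recognition.load_image_file("enrolled_person.jpg")   # hypothetical file
query_image = face_recognition.load_image_file("query_frame.jpg")       # hypothetical file

known_encoding = face_recognition.face_encodings(known_image)[0]

for encoding in face_recognition.face_encodings(query_image):
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print("match:", match, "distance:", round(float(distance), 3))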
Face recognition is used in:
      - Human-computer interfaces
      - Biometric identification
Objective of face recognition:
      - To determine the identity of a person from a given image.
Complications occur due to variations in:
      - Illumination
      - Pose
      - Facial expression
      - Aging
      - Occlusions such as spectacles, hair, etc.
Weaknesses:
      - Face recognition is not perfect and struggles to perform under certain conditions.
      - Conditions where it does not work well include poor lighting, sunglasses, long hair or other objects partially covering the subject's face, and low-resolution images.
      - It is less effective if facial expressions vary.
Facial recognition mainly uses the following techniques:

• Facial geometry: uses geometrical characteristics of the face. Several cameras may be used to get better accuracy (2D, 3D, ...).

• Skin pattern recognition (Visual Skin Print).

• Facial thermogram: uses an infrared camera to map face temperatures.

• Smile: recognition of the wrinkle changes when smiling.
The uniqueness of skin texture offers an opportunity to identify differences between identical twins.

The Surface Texture Analysis algorithm operates on the top percentage of results as determined by local feature analysis.
Fingerprinting is one of the most well-known and publicized biometrics. Because of their uniqueness and consistency over time, fingerprints have been used for identification for over a century, more recently becoming automated due to advances in computing capabilities. Fingerprint identification is important because of the inherent ease of acquisition, the numerous sources available for collection, and its established use and collections by law enforcement and immigration.
• A fingerprint usually appears as a series of dark lines that represent the high, peaking portion of the friction ridge skin, while the valleys between these ridges appear as white space and are the low, shallow portions of the friction ridge skin.
• Fingerprint identification is based primarily on the minutiae, that is, the location and direction of ridge endings and bifurcations (splits) along a ridge path.
Hardware
A variety of sensor types – optical, capacitive, ultrasound, and thermal – are
used for collecting the digital image of a fingerprint surface.
• Optical sensors take an image of the fingerprint, and are the most common sensors today.
• Capacitive sensors determine each pixel value based on the capacitance measured, made possible because an area of air has significantly less capacitance than an area of finger.

Software
The two main categories of fingerprint matching techniques are minutiae-based
matching and pattern matching.
• Pattern Matching simply compares two images to see how similar they are.
Usually used in fingerprint systems to detect duplicates.
• Minutiae-based matching relies on the minutiae points described above,
specifically the location and direction of each point.
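The snippet below is a toy sketch of minutiae-based matching: each minutia is an (x, y, angle) tuple for a ridge ending or bifurcation, and two prints are scored by counting minutiae that agree in position and direction. Real systems also align the prints first; the tolerances and sample points here are illustrative, not standard values.

import math

def count_matched_minutiae(probe, gallery, dist_tol=12.0, angle_tol=math.radians(15)):
    """Count probe minutiae that have a nearby, similarly oriented gallery minutia."""
    matched = 0
    for (x1, y1, a1) in probe:
        for (x2, y2, a2) in gallery:
            close = math.hypot(x1 - x2, y1 - y2) <= dist_tol
            aligned = abs((a1 - a2 + math.pi) % (2 * math.pi) - math.pi) <= angle_tol
            if close and aligned:
                matched += 1
                break
    return matched

probe   = [(100, 120, 0.30), (140, 200, 1.20)]   # made-up minutiae (x, y, angle in radians)
gallery = [(103, 118, 0.35), (300,  50, 2.00)]
print(count_matched_minutiae(probe, gallery))     # -> 1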
Geometry-based approaches - early attempts at object recognition focused on using geometric models of objects to account for their appearance variation due to viewpoint and illumination changes.

Appearance-based algorithms - rely on advanced feature descriptors and pattern recognition algorithms. A classic example computes eigenvectors from a set of vectors, each representing one face image (the eigenface approach).
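A compact sketch of that eigenvector idea is below: flattened face images are centered and their principal components ("eigenfaces") define a low-dimensional subspace in which faces are compared. The array sizes and random data are placeholders for real face images.

import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((40, 64 * 64))           # 40 training "faces", 64x64 pixels, flattened

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The right singular vectors of the centered data are the principal components.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                        # keep the top 20 components

def project(image_vector):
    """Project a flattened face onto the eigenface subspace."""
    return eigenfaces @ (image_vector - mean_face)

query = rng.random(64 * 64)
train_weights = centered @ eigenfaces.T     # every training face in weight space
query_weights = project(query)
best = int(np.argmin(np.linalg.norm(train_weights - query_weights, axis=1)))
print("closest training face index:", best)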


Feature-based algorithms - rely on finding interest points, which often occur at intensity discontinuities and are invariant to changes due to scale, illumination, and affine transformation.
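As a short sketch of interest-point matching, the snippet below detects ORB keypoints with OpenCV and matches their binary descriptors between two synthetic test images (a pattern and a translated copy stand in for a model view and a scene).

import cv2
import numpy as np

# Synthetic "model" image and a translated copy acting as the "scene".
img1 = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img1, (60, 60), (140, 140), 255, thickness=-1)
cv2.circle(img1, (100, 60), 20, 128, thickness=-1)
img2 = np.roll(img1, (15, 25), axis=(0, 1))

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Cross-checked brute-force Hamming matching gives one-to-one correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print("keypoints:", len(kp1), len(kp2), "matches:", len(matches))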