Recognizing
           Specific Objects & Faces




Many slides from: S. Savarese, S. Lazebnik, D. Lowe, P. Viola
Overview
• Specific Object Recognition
  – Lowe ‘04
  – Rothganger et al. ‘03

• Face Detection
  – Sub-space methods
  – Neural Network-based approaches
  – Viola & Jones

• Face Recognition
Specific Object Recognition
Slides from S. Savarese, D. Lowe and F. Rothganger
Slide: S. Savarese
Challenges:
• Variability due to:

  – View point
  – Illumination
  – Occlusions

  – But not intra-class variation
Difference of Gaussian
• a feature enhancement
  algorithm that involves the
  subtraction of one blurred
  version of an original image
  from another, less blurred
  version of the original.
• a band-pass filter that discards
  all but a handful of spatial
  frequencies that are present in
  the original grayscale image
• Also removes random noise, which is
  concentrated at high spatial frequencies
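As a rough sketch (not part of the original slides), the difference-of-Gaussian response can be computed with scipy by subtracting a more-blurred copy from a less-blurred one; the two scales 1.0 and 1.6 below are illustrative assumptions.

Python:
  import numpy as np
  from scipy.ndimage import gaussian_filter

  def difference_of_gaussian(image, sigma1=1.0, sigma2=1.6):
      # Band-pass behaviour: subtract the more-blurred copy from the less-blurred one
      img = np.asarray(image, dtype=np.float64)
      return gaussian_filter(img, sigma=sigma1) - gaussian_filter(img, sigma=sigma2)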
Laplacian of Gaussian
•   Blob detection: mathematical methods aimed at
    detecting regions in a digital image that differ in
    properties, such as brightness or color, from the
    areas surrounding those regions
•   Laplacian of Gaussian: second derivative of a
    Gaussian, used to find blobs in images
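A matching sketch for the Laplacian-of-Gaussian blob response, using scipy's gaussian_laplace; the scale sigma=2.0 is an illustrative assumption, not a value from these slides.

Python:
  import numpy as np
  from scipy.ndimage import gaussian_laplace

  def log_blob_response(image, sigma=2.0):
      # Scale-normalized LoG; strong responses mark blobs of radius roughly sigma*sqrt(2)
      img = np.asarray(image, dtype=np.float64)
      return (sigma ** 2) * gaussian_laplace(img, sigma=sigma)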
Harris-Laplace
Suppose we only consider a small window of pixels
    – What defines whether a feature is a good or bad candidate?




        Slide adapted from Darya Frolova, Denis Simakov, Weizmann Institute.
Feature detection
   – How does the window change when you shift it?
   – Shifting the window in any direction causes a big change




“flat” region: no change in all directions
“edge”: no change along the edge direction
“corner”: significant change in all directions


     Slide adapted from Darya Frolova, Denis Simakov, Weizmann Institute.
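To make the window-shift intuition concrete, here is a minimal sketch of the Harris corner response built from the structure tensor of image gradients; the window scale sigma=1.5 and constant k=0.04 are conventional choices, not values from these slides.

Python:
  import numpy as np
  from scipy.ndimage import gaussian_filter

  def harris_response(image, sigma=1.5, k=0.04):
      img = np.asarray(image, dtype=np.float64)
      Iy, Ix = np.gradient(img)                  # image gradients
      Sxx = gaussian_filter(Ix * Ix, sigma)      # structure tensor entries,
      Syy = gaussian_filter(Iy * Iy, sigma)      # averaged over a Gaussian window
      Sxy = gaussian_filter(Ix * Iy, sigma)
      det = Sxx * Syy - Sxy ** 2
      trace = Sxx + Syy
      return det - k * trace ** 2                # ~0 on flat regions, <0 on edges, >0 on corners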
Scale Invariant Feature Transform
Basic idea:
    •   Take 16x16 square window around detected feature
    •   Compute edge orientation (angle of the gradient - 90°) for each pixel
    •   Throw out weak edges (threshold gradient magnitude)
    •   Create histogram of surviving edge orientations




(figure: angle histogram of gradient orientations, 0 to 2π)




                   Adapted from slide by David Lowe
SIFT descriptor
Full version
     • Divide the 16x16 window into a 4x4 grid of cells (2x2 case shown below)
     • Compute an orientation histogram for each cell
     • 16 cells * 8 orientations = 128 dimensional descriptor




                  Adapted from slide by David Lowe
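A simplified sketch of that descriptor (fixed 16x16 patch, 4x4 grid of cells, 8 orientation bins = 128 dimensions); Lowe's full method also applies Gaussian weighting, rotation to a dominant orientation, and trilinear interpolation, all omitted here.

Python:
  import numpy as np

  def sift_like_descriptor(patch):
      # patch: 16x16 grayscale window around a detected feature
      gy, gx = np.gradient(np.asarray(patch, dtype=np.float64))
      mag = np.hypot(gx, gy)
      ang = np.arctan2(gy, gx) % (2 * np.pi)             # gradient orientation in [0, 2*pi)
      desc = []
      for i in range(4):                                 # 4x4 grid of 4x4-pixel cells
          for j in range(4):
              m = mag[4*i:4*i+4, 4*j:4*j+4].ravel()
              a = ang[4*i:4*i+4, 4*j:4*j+4].ravel()
              hist, _ = np.histogram(a, bins=8, range=(0, 2*np.pi), weights=m)
              desc.append(hist)                          # 8-bin orientation histogram per cell
      desc = np.concatenate(desc)                        # 16 cells * 8 bins = 128 dimensions
      return desc / (np.linalg.norm(desc) + 1e-8)        # normalize for illumination robustness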
Properties of SIFT
Extraordinarily robust matching technique
   – Can handle changes in viewpoint
        • Up to about 60 degrees of out-of-plane rotation
   – Can handle significant changes in illumination
       • Sometimes even day vs. night (below)
   – Fast and efficient—can run in real time
   – Lots of code available
       •   http://people.csail.mit.edu/albert/ladypack/wiki/index.php/Known_implementations_of_SIFT
Overview
• Specific Object Recognition
  – Lowe ‘04
  – Rothganger et al. ‘03

• Face Detection
  – Sub-space methods
  – Neural Network-based approaches
  – Viola & Jones

• Face Recognition
Face Detection & Recognition

              Slides from
    P. Viola, S. Lazebnik, A. Torralba
Face detection
Face detection and recognition



            Detection → Recognition → “Sally”
Challenges:
• Variability due to:

  – Intra-class variation (but fairly small)
  – Illumination
  – Occlusions (eyeglasses, facial hair, hair)

  – Fairly limited viewpoint (frontal, profile)
Practical Challenges of face detection
• Sliding window detector must evaluate tens of
  thousands of location/scale combinations
• Faces are rare: 0–10 per image
  – For computational efficiency, we should try to
    spend as little time as possible on the non-face
    windows
  – A megapixel image has ~10^6 pixels and a
    comparable number of candidate face locations
  – To avoid having a false positive in every image,
    our false positive rate has to be less than 10^-6
Overview
• Specific Object Recognition
  – Lowe ‘04
  – Rothganger et al. ‘03

• Face Detection
  – Sub-space methods
  – Neural Network-based approaches
  – Viola & Jones

• Face Recognition
Subspace Methods
• PCA (“Eigenfaces”, Turk and Pentland)

• PCA (Bayesian, Moghaddam and Pentland)

• LDA/FLD (“Fisherfaces”, Belhumeur &
  Kriegman)

• ICA
Principal Component Analysis
• Given: N data points x1, … ,xN in Rd

• We want to find a new set of features that are
  linear combinations of original ones:
                u(xi) = uT(xi – µ)
  (µ: mean of data points)

• What unit vector u in Rd captures the most
  variance of the data?
Principal Component Analysis
• Direction that maximizes the variance of the projected data:
  var(u) = (1/N) ∑i [uT(xi – µ)]^2 = uT Σ u

  (uT(xi – µ): projection of data point xi;
   Σ = (1/N) ∑i (xi – µ)(xi – µ)T: covariance matrix of the data)
The direction that maximizes the variance is the eigenvector
associated with the largest eigenvalue of Σ
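In code, the principal directions are the top eigenvectors of the covariance matrix; a minimal numpy sketch follows (for face images, where d is much larger than N, one would in practice use the smaller N x N Gram matrix or an SVD instead).

Python:
  import numpy as np

  def pca(X, k):
      # X: N x d data matrix; returns the mean and the top-k principal directions (d x k)
      mu = X.mean(axis=0)
      Xc = X - mu
      cov = Xc.T @ Xc / len(X)                   # covariance matrix (Sigma)
      eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
      order = np.argsort(eigvals)[::-1][:k]      # keep the k largest-variance directions
      return mu, eigvecs[:, order]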
Eigenfaces: Key idea
• Assume that most face images lie on
  a low-dimensional subspace determined by
  the first k (k<d) directions of maximum
  variance
• Use PCA to determine the vectors u1,…uk that
  span that subspace:
  x ≈ μ + w1u1 + w2u2 + … + wkuk
• Represent each face using its “face space”
  coordinates (w1,…wk)
• Perform nearest-neighbor recognition in “face
  space”
M. Turk and A. Pentland, Face Recognition using Eigenfaces, CVPR 1991
Eigenfaces example
 Training
  images
   x1,…,xN
Eigenfaces example

                     Top eigenvectors:
                          u1,…uk


    Mean: μ
Eigenfaces example
• Face x in “face space” coordinates:




Eigenfaces example
    • Face x in “face space” coordinates:




    • Reconstruction:

      x̂ = µ + w1u1 + w2u2 + w3u3 + w4u4 + …
Summary: Recognition with eigenfaces
Process labeled training images:
• Find mean µ and covariance matrix Σ
• Find k principal components (eigenvectors of Σ) u1,…uk
• Project each training image xi onto subspace spanned
  by principal components:
  (wi1,…,wik) = (u1T(xi – µ), … , ukT(xi – µ))

Given novel image x:
• Project onto subspace:
  (w1,…,wk) = (u1T(x – µ), … , ukT(x – µ))
• Optional: check reconstruction error x – x̂ to determine
  whether the image is really a face
• Classify as closest training face in k-dimensional
  subspace
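A hedged sketch of the whole pipeline above, reusing the pca sketch from the earlier slide; training images are assumed to be vectorized into the rows of a matrix, and the reconstruction-error threshold is an illustrative parameter.

Python:
  import numpy as np

  def train_eigenfaces(faces, labels, k=20):
      mu, U = pca(faces, k)                      # mean face and top-k eigenfaces (see pca above)
      W = (faces - mu) @ U                       # face-space coordinates of each training image
      return mu, U, W, np.asarray(labels)

  def recognize(x, mu, U, W, labels, max_residual=None):
      w = (x - mu) @ U                           # project the novel image into face space
      if max_residual is not None:               # optional: reject non-faces by reconstruction error
          if np.linalg.norm((x - mu) - U @ w) > max_residual:
              return None
      return labels[np.argmin(np.linalg.norm(W - w, axis=1))]   # nearest training face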
Limitations
• Global appearance method: not robust to
  misalignment, background variation
Limitations
    • PCA assumes that the data has a Gaussian
      distribution (mean µ, covariance matrix Σ)




The shape of this dataset is not well described by its principal components
Distribution-Based Face Detector
•   Learn face and nonface models from examples [Sung and Poggio 95]
•   Cluster and project the examples to a lower dimensional space using
    Gaussian distributions and PCA
•   Detect faces using distance metric to face and nonface clusters
Distribution-Based Face Detector
•    Learn face and nonface models from examples [Sung and Poggio 95]




    Training Database
    1000+ real, 3000+ virtual face patterns
    50,000+ non-face patterns
Overview
• Specific Object Recognition
  – Lowe ‘04
  – Rothganger et al. ‘03

• Face Detection
  – Sub-space methods
  – Neural Network-based approaches
  – Viola & Jones

• Face Recognition
Neural Network-Based Face Detector
• Train a set of multilayer perceptrons and arbitrate
  a decision among all outputs [Rowley et al. 98]
The steps in preprocessing a window. First, a linear function is fit to the intensity values in the window,
and then subtracted out, correcting for some extreme lighting conditions. Then, histogram equalization is
applied, to correct for different camera gains and to improve contrast. For each of these steps, the
mapping is computed based on pixels inside the oval mask, while the mapping is applied to the entire
window.
     From: http://www.ius.cs.cmu.edu/IUS/har2/har/www/CMU-CS-95-158R/
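A rough numpy version of that preprocessing, assuming the window is a small 2-D grayscale array; the oval mask used by the CMU system is omitted, and histogram equalization is approximated with a rank transform.

Python:
  import numpy as np

  def preprocess_window(win):
      win = np.asarray(win, dtype=np.float64)
      h, w = win.shape
      ys, xs = np.mgrid[0:h, 0:w]
      A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
      coeffs, *_ = np.linalg.lstsq(A, win.ravel(), rcond=None)   # best-fit linear lighting ramp
      corrected = win - (A @ coeffs).reshape(h, w)               # subtract extreme lighting
      ranks = np.argsort(np.argsort(corrected.ravel()))          # rank-transform equalization
      return (255.0 * ranks / (ranks.size - 1)).reshape(h, w)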
Example face images, randomly mirrored,
   rotated, translated, and scaled by small
   amounts (photos are of the three authors).




From: http://www.ius.cs.cmu.edu/IUS/har2/har/www/CMU-CS-95-158R/
During training, the partially-trained system is applied to images of scenery which do not
contain faces (like the one on the left). Any regions in the image detected as faces (which
are expanded and shown on the right) are errors, which can be added into the set of
negative training examples.




  From: http://www.ius.cs.cmu.edu/IUS/har2/har/www/CMU-CS-95-158R/
Overview
• Specific Object Recognition
  – Lowe ‘04
  – Rothganger et al. ‘03

• Face Detection
  – Sub-space methods
  – Neural Network-based approaches
  – Viola & Jones

• Face Recognition
Rapid Object Detection Using a Boosted Cascade of
                                  Simple Features




                                       Paul Viola     Michael J. Jones
                              Mitsubishi Electric Research Laboratories (MERL)
                                               Cambridge, MA


                         Most of this work was done at Compaq CRL before the authors moved to MERL

Manuscript available on web:
http://citeseer.ist.psu.edu/cache/papers/cs/23183/http:zSzzSzwww.ai.mit.eduzSzpeoplezSzviolazSzresearchzSzpublicationszSzICCV01-Viola-Jones.pdf/viola01robust.pdf
The Viola/Jones Face Detector
   • A seminal approach to real-time object
     detection
   • Training is slow, but detection is very fast
   • Key ideas
       • Integral images for fast feature evaluation
       • Boosting for feature selection
       • Attentional cascade for fast rejection of non-face windows




P. Viola and M. Jones.
Rapid object detection using a boosted cascade of simple features. CVPR
2001.
P. Viola and M. Jones. Robust real-time face detection. IJCV 57(2), 2004.
Image Features


“Rectangle filters”




 Value =
 ∑ (pixels in white area) –
 ∑ (pixels in black area)
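Evaluated naively, a two-rectangle feature is just a difference of pixel sums over adjacent regions; the horizontal split below is one illustrative layout, and the integral-image trick on the next slides makes the same computation constant-time.

Python:
  import numpy as np

  def two_rect_feature(window, top, left, h, w):
      # left half treated as the white area, right half as the black area
      white = window[top:top + h, left:left + w // 2].sum()
      black = window[top:top + h, left + w // 2:left + w].sum()
      return float(white - black)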
Example

          Source




          Result




                   Slide: S. Lazebnik
Fast computation with integral images
• The integral image
  computes a value at each
  pixel (x,y) that is the sum
  of the pixel values above
                                (x,y)
  and to the left of (x,y),
  inclusive
• This can quickly be
  computed in one pass
  through the image




                                        Slide: S. Lazebnik
Computing the integral image




                               Slide: S. Lazebnik
Computing the integral image



(figure: current pixel i(x, y), cumulative row sum s(x–1, y), integral image ii(x, y–1))

Cumulative row sum: s(x, y) = s(x–1, y) + i(x, y)
Integral image: ii(x, y) = ii(x, y−1) + s(x, y)

MATLAB: ii = cumsum(cumsum(double(i)), 2);
                                          Slide: S. Lazebnik
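A numpy equivalent of the MATLAB one-liner, padded with a zero row and column so that any rectangle sum (and hence any rectangle feature) needs only four lookups; this is a sketch, not the authors' implementation.

Python:
  import numpy as np

  def integral_image(img):
      ii = np.cumsum(np.cumsum(np.asarray(img, dtype=np.float64), axis=0), axis=1)
      return np.pad(ii, ((1, 0), (1, 0)))        # prepend zeros so no boundary checks are needed

  def rect_sum(ii, top, left, h, w):
      # sum of img[top:top+h, left:left+w] from four corner lookups
      return (ii[top + h, left + w] - ii[top, left + w]
              - ii[top + h, left] + ii[top, left])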
Feature selection
• For a 24x24 detection region, the number of
  possible rectangle features is ~160,000!
Feature selection
• For a 24x24 detection region, the number of
  possible rectangle features is ~160,000!
• At test time, it is impractical to evaluate the
  entire feature set
• Can we create a good classifier using just a
  small subset of all possible features?
• How to select such a subset?
Boosting
    • Boosting is a classification scheme that works
      by combining weak learners into a more
      accurate ensemble classifier
    • Training consists of multiple boosting rounds
        • During each boosting round, we select a weak learner that
          does well on examples that were hard for the previous weak
          learners
        • “Hardness” is captured by weights attached to training
          examples




Y. Freund and R. Schapire, A short introduction to boosting, Journal of
Japanese Society for Artificial Intelligence, 14(5):771-780, September, 1999.
Training procedure
    •   Initially, weight each training example equally
    •   In each boosting round:
        •   Find the weak learner that achieves the lowest weighted
            training error
        •   Raise the weights of training examples misclassified by
            current weak learner
    •   Compute final classifier as linear combination
        of all weak learners (weight of each learner is
        directly proportional to its accuracy)
    •   Exact formulas for re-weighting and combining
        weak learners depend on the particular
        boosting scheme (e.g., AdaBoost)
Y. Freund and R. Schapire, A short introduction to boosting, Journal of
Japanese Society for Artificial Intelligence, 14(5):771-780, September, 1999.
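A compact sketch of that training loop using one-feature threshold "stumps" as weak learners and the standard discrete AdaBoost update; this is generic AdaBoost rather than the exact Viola-Jones variant, and the brute-force threshold search is written for readability, not speed.

Python:
  import numpy as np

  def adaboost(X, y, n_rounds=50):
      # X: N x d feature matrix, y: labels in {-1, +1}
      n, d = X.shape
      weights = np.full(n, 1.0 / n)                    # weight each training example equally
      ensemble = []
      for _ in range(n_rounds):
          best = None
          for j in range(d):                           # weak learner: threshold on one feature
              for t in np.unique(X[:, j]):
                  for s in (1, -1):
                      pred = s * np.where(X[:, j] < t, 1, -1)
                      err = weights[pred != y].sum()   # weighted training error
                      if best is None or err < best[0]:
                          best = (err, j, t, s, pred)
          err, j, t, s, pred = best
          err = np.clip(err, 1e-10, 1 - 1e-10)
          alpha = 0.5 * np.log((1 - err) / err)        # learner weight grows with its accuracy
          weights *= np.exp(-alpha * y * pred)         # raise weights of misclassified examples
          weights /= weights.sum()
          ensemble.append((j, t, s, alpha))
      return ensemble

  def predict(ensemble, X):
      score = sum(a * s * np.where(X[:, j] < t, 1, -1) for j, t, s, a in ensemble)
      return np.sign(score)                            # linear combination of weak learners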
Boosting




    Weak
  Classifier 1
Boosting




      Weights
      Increased
Boosting




       Weak
     Classifier 2
Boosting




      Weights
      Increased
Boosting




       Weak
     Classifier 3
Boosting




  Final classifier is
  linear combination of
  weak classifiers
Boosting for face detection
• First two features selected by boosting:




  This feature combination can yield 100%
  detection rate and 50% false positive rate
Attentional cascade
• We start with simple classifiers which reject
  many of the negative sub-windows while
  detecting almost all positive sub-windows
• Positive response from the first classifier
  triggers the evaluation of a second (more
  complex) classifier, and so on
• A negative outcome at any point leads to the
  immediate rejection of the sub-window

                             T                  T                  T
 IMAGE        Classifier 1       Classifier 2       Classifier 3       FACE
 SUB-WINDOW
                     F                  F                  F

              NON-FACE           NON-FACE           NON-FACE
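The cascade's test-time logic is just early rejection; a minimal sketch, assuming each stage is a scoring function paired with its own threshold (both are placeholders, not the paper's API).

Python:
  def cascade_classify(window, stages):
      # stages: list of (score_fn, threshold), cheapest and most permissive first
      for score_fn, threshold in stages:
          if score_fn(window) < threshold:
              return False           # rejected: most sub-windows exit at the early, cheap stages
      return True                    # survived every stage: report a face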
Attentional cascade
• Chain classifiers that are progressively more complex
  and have lower false positive rates; each stage's
  detection vs. false negative trade-off is set by a
  threshold on its receiver operating characteristic
  (figure: ROC curve, % Detection vs. % False Pos)
                             T                  T                                  T
 IMAGE        Classifier 1       Classifier 2       Classifier 3                       FACE
 SUB-WINDOW
                     F                  F                            F

              NON-FACE           NON-FACE           NON-FACE
Training the cascade
• Set target detection and false positive rates for
  each stage
• Keep adding features to the current stage until
  its target rates have been met
   • Need to lower AdaBoost threshold to maximize detection (as
     opposed to minimizing total classification error)
   • Test on a validation set
• If the overall false positive rate is not low
  enough, then add another stage
• Use false positives from current stage as the
  negative training examples for the next stage
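The stage-by-stage training described above can be organized as a small driver loop; the callables below (stage trainer, validation false-positive rate, negative-set bootstrapping) are hypothetical placeholders, and the 10^-6 target simply echoes the earlier false-positive argument.

Python:
  def train_cascade(train_stage, stage_fp_rate, bootstrap_negatives,
                    overall_fp_target=1e-6, stage_fp_target=0.5):
      stages, overall_fp = [], 1.0
      while overall_fp > overall_fp_target:
          stage = train_stage(stage_fp_target)     # add features until the stage meets its targets
          stages.append(stage)
          overall_fp *= stage_fp_rate(stages)      # per-stage rate measured on a validation set
          bootstrap_negatives(stages)              # next stage trains on current false positives
      return stages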
The implemented system
• Training Data
  • 5000 faces
     – All frontal, rescaled to
       24x24 pixels
  • 300 million non-faces
     – 9500 non-face images
  • Faces are normalized
     – Scale, translation

• Many variations
  • Across individuals
  • Illumination
  • Pose
System performance
• Training time: “weeks” on 466 MHz Sun
  workstation
• 38 layers, total of 6061 features
• Average of 10 features evaluated per window
  on test set
• “On a 700 Mhz Pentium III processor, the
  face detector can process a 384 by 288 pixel
  image in about .067 seconds”
  • 15 Hz
  • 15 times faster than previous detector of comparable
    accuracy (Rowley et al., 1998)
Other detection tasks




• Facial feature localization
• Profile detection
• Male vs. female classification
Profile Detection
Profile Features
Summary: Viola/Jones detector
•   Rectangle features
•   Integral images for fast computation
•   Boosting for feature selection
•   Attentional cascade for fast rejection of
    negative windows
Assignment
Use any object detection method of your choice
 to identify the following objects in a static image.

1. Humans
2. Chair
3. Car
4. Road
5. Cup
6. Table

Editor's Notes

  1. High-level idea: corners are good. You want to find windows that contain strong gradients AND gradients oriented in more than one direction.
  2. While the title of this talk emphasizes the application of our ideas to an important real-world task, there are two nice ideas that could have broader impact in machine learning.
  3. For real problems, results are only as good as the features used; this is the main piece of ad hoc (or domain) knowledge. Rather than the pixels, we have selected a very large set of simple functions, sensitive to edges and other critical features of the image, at multiple scales. Since the final classifier is a perceptron, it is important that the features be non-linear; otherwise the final classifier will be a simple perceptron. We introduce a threshold to yield binary features.
  4. In general, simple classifiers, while more efficient, are also weaker. We could define a computational risk hierarchy (in analogy with structural risk minimization): a nested set of classifier classes. The training process is reminiscent of boosting: previous classifiers reweight the examples used to train subsequent classifiers. The goal of the training process is different: instead of minimizing errors, minimize false positives.
  5. This situation with negative examples is actually quite common, where negative examples are free.