A Machine Learning (Theory)
Perspective on Computer Vision

            Peter Auer
      Montanuniversität Leoben
Outline

 What I am doing and how computer vision approached me (in 2002).
 Some modern machine learning algorithms used in computer vision, and their development:
   Boosting
   Support Vector Machines
 Concluding remarks
My background
 COLT 1993
   Conference on Learning Theory
   „On-Line Learning of Rectangles in Noisy Environments“

 FOCS 1995
   Symp. Foundations of Computer Science
   „Gambling in a Rigged Casino: The Adversarial Multi-Armed Bandit Problem“
   with N. Cesa-Bianchi, Y. Freund, R. Schapire

 ICML, NIPS, STOC, …
A computer vision project

 EU-Project LAVA, 2002
   “Learning for adaptable visual assistants”
   XRCE: Ch. Dance, R. Mohr
   INRIA Grenoble: C. Schmid, B. Triggs
   RHUL: J. Shawe-Taylor
   IDIAP: S. Bengio
LAVA Proposal
 Vision (goals)
   Recognition of generic objects and events
   Attention Mechanisms
   Baseline and high-level descriptors
 Learning (means)
   Statistical Analysis
   Kernels and models and features
   Online Learning
Online learning
 Online Information Setting
   An input is received, a prediction is made, and then feedback is acquired.
   Goal: To make good predictions with respect to a (large) set of fixed predictors.
 Online Computation Setting
   The amount of computation per new example – to update the learned information – is constant (or small).
   Goal: To be computationally fast.
 (Near) real-time learning?
Learning for vision around 2002
 Viola, Jones, CVPR 2001:
   Rapid object detection using a boosted cascade of simple features. (Boosting)
 Agarwal, Roth, ECCV 2002:
   Learning a Sparse Representation for Object Detection. (Winnow)
 Fergus, Perona, Zisserman, CVPR 2003:
   Object class recognition by unsupervised scale-invariant learning. (EM-type algorithm)
 Wallraven, Caputo, Graf, ICCV 2003:
   Recognition with local features: the kernel recipe. (SVM)
Our contribution in LAVA

 Opelt, Fussenegger, Pinz, Auer, ECCV 2004:
   Weak hypotheses and boosting for generic object detection and recognition.
Image classification as a learning problem
Image classification as a learning problem

       Images are represented as vectors $x = (x_1, \ldots, x_n) \in X \subset \mathbb{R}^n$.

       Given
            training images $x^{(1)}, \ldots, x^{(m)} \in X$
            with their classifications $y^{(1)}, \ldots, y^{(m)} \in Y = \{-1, +1\}$,
       a classifier $H : X \to Y$ is learned.

       We consider linear classifiers $H_w$, $w \in \mathbb{R}^n$,
       $$H_w(x) = \begin{cases} +1 & \text{if } w \cdot x \ge 0 \\ -1 & \text{if } w \cdot x < 0 \end{cases}$$
       where $w \cdot x = \sum_{i=1}^{n} w_i x_i$.
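To make the notation concrete, here is a minimal sketch of such a linear classifier $H_w$ in Python/NumPy (illustrative code, not part of the original slides):

```python
import numpy as np

def predict_linear(w, x):
    """Linear classifier H_w: +1 if w . x >= 0, else -1."""
    return 1 if np.dot(w, x) >= 0 else -1

# Tiny illustration with n = 2 features.
w = np.array([1.0, -0.5])
print(predict_linear(w, np.array([2.0, 1.0])))   # +1, since 2.0 - 0.5 = 1.5 >= 0
print(predict_linear(w, np.array([0.0, 1.0])))   # -1, since -0.5 < 0
```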
The Perceptron algorithm (Rosenblatt, 1958)
   The Perceptron algorithm maintains a weight vector $w^{(t)}$ as its current classifier.
       Initialization $w^{(1)} = 0$.
       Predict $\hat{y}^{(t)} = \begin{cases} +1 & \text{if } w^{(t)} \cdot x^{(t)} \ge 0 \\ -1 & \text{if } w^{(t)} \cdot x^{(t)} < 0 \end{cases}$
       If $\hat{y}^{(t)} = y^{(t)}$ then $w^{(t+1)} = w^{(t)}$,
       else $w^{(t+1)} = w^{(t)} + \eta y^{(t)} x^{(t)}$.
       ($\eta$ is the learning rate.)

       The Perceptron was abandoned in 1969, when Minsky and Papert showed that Perceptrons are not able to learn some simple functions.
       It was revived only in the 1980s, when neural networks became popular.
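The update rule above translates directly into code. Below is a minimal NumPy sketch of the online Perceptron as described on the slide (the toy data, the number of epochs, and the choice $\eta = 1$ are illustrative assumptions):

```python
import numpy as np

def perceptron(X, y, eta=1.0, epochs=10):
    """Online Perceptron; X has shape (m, n), y holds labels in {-1, +1}."""
    m, n = X.shape
    w = np.zeros(n)                        # w^(1) = 0
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            y_hat = 1 if np.dot(w, x_t) >= 0 else -1
            if y_hat != y_t:               # update only on mistakes
                w = w + eta * y_t * x_t    # w^(t+1) = w^(t) + eta * y^(t) * x^(t)
    return w

# Illustrative linearly separable data: the label is the sign of the first coordinate.
X = np.array([[2.0, 1.0], [1.5, -1.0], [-1.0, 0.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(np.sign(X @ w))                      # reproduces y
```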
Perceptron cannot learn XOR




 No single line can separate the green from the red boxes.
Non-linear classifiers



       Extending the feature space (or using kernels) solves the problem:
       Since XOR is a quadratic function, use $(1, x_1, x_2, x_1^2, x_2^2, x_1 x_2)$ instead of $(x_1, x_2)$.
       For $x_1, x_2 \in \{+1, -1\}$,
       $$x_1 \;\mathrm{XOR}\; x_2 = x_1 x_2.$$
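A small sketch of this fix (illustrative code, not from the slides): the four XOR points are not linearly separable in $(x_1, x_2)$, but after the quadratic feature expansion a single linear weight vector classifies them.

```python
import numpy as np

def expand(x1, x2):
    """Quadratic feature map (1, x1, x2, x1^2, x2^2, x1*x2)."""
    return np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

# XOR with the {+1, -1} encoding from the slide: the label equals x1 * x2.
points = [(-1, -1), (-1, +1), (+1, -1), (+1, +1)]
labels = [x1 * x2 for x1, x2 in points]

# In the expanded space, a weight vector that reads off the x1*x2 coordinate suffices.
w = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
preds = [int(np.sign(np.dot(w, expand(x1, x2)))) for x1, x2 in points]
print(preds == labels)   # True
```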
Winnow (Littlestone 1987)


       Works like the Perceptron algorithm except for the update of the weights:
       $$w_i^{(t+1)} = w_i^{(t)} \cdot \exp\left(\eta y^{(t)} x_i^{(t)}\right)$$
       for some $\eta > 0$. ($w^{(1)} = 1$.)

       Observe the multiplicative update of the weights, i.e.
       $\log w_i^{(t+1)} = \log w_i^{(t)} + \eta y^{(t)} x_i^{(t)}$.

       Closely related work:
       The Weighted Majority Algorithm (Littlestone, Warmuth)


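A minimal sketch of Winnow with the multiplicative update stated above (the {0, 1} feature encoding, the decision threshold $\theta = n$, and $\eta = \ln 2$ are assumptions of this sketch; the slide leaves these details implicit):

```python
import numpy as np

def winnow(X, y, eta=np.log(2.0), theta=None, epochs=20):
    """Winnow with the multiplicative update from the slide.

    X has shape (m, n) with features in {0, 1}; y holds labels in {-1, +1}.
    theta is a decision threshold (an assumption here; theta = n is a common choice).
    """
    m, n = X.shape
    if theta is None:
        theta = n
    w = np.ones(n)                                   # w^(1) = 1
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            y_hat = 1 if np.dot(w, x_t) >= theta else -1
            if y_hat != y_t:                         # mistake-driven, like the Perceptron
                w = w * np.exp(eta * y_t * x_t)      # w_i^(t+1) = w_i^(t) * exp(eta * y^(t) * x_i^(t))
    return w

# Illustrative data: the label depends only on attribute 0 out of 8 attributes.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, 8)).astype(float)
y = np.where(X[:, 0] == 1, 1, -1)
w = winnow(X, y)
preds = np.where(X @ w >= X.shape[1], 1, -1)
print(np.mean(preds == y))                           # 1.0: Winnow stops making mistakes here
```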
Comparison of the Perceptron algorithm and Winnow


       Perceptron and Winnow scale differently with respect to relevant, used, and irrelevant attributes:

                         all attributes        $n$
                         relevant attributes   $k$
                         used attributes       $d$

                                   # training examples
                      Perceptron     $\sqrt{dk}$
                      Winnow         $k \log n$
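To make the scaling concrete, a quick back-of-the-envelope comparison (the numbers are purely illustrative, and the base of the logarithm is left unspecified on the slide; the natural log is used here):

```python
import math

# Illustrative scenario: many attributes, a sparse target, dense examples.
n = 100_000    # all attributes
k = 10         # relevant attributes
d = 100_000    # used (non-zero) attributes

perceptron_examples = math.sqrt(d * k)   # ~ sqrt(d k)
winnow_examples = k * math.log(n)        # ~ k log n

print(f"Perceptron ~ {perceptron_examples:.0f} examples, Winnow ~ {winnow_examples:.0f} examples")
# Perceptron ~ 1000, Winnow ~ 115: Winnow pays only logarithmically for irrelevant attributes.
```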
AdaBoost (Freund, Schapire, 1995)


       AdaBoost maintains weights $v_t^{(s)}$ on the training examples $(x^{(s)}, y^{(s)})$ over time $t$:

       Initialize weights $v_0^{(s)} = 1$.
       For $t = 1, 2, \ldots$
            Select the coordinate $i_t$ with maximal correlation with the labels,
            $\sum_s v_t^{(s)} y^{(s)} x_{i_t}^{(s)}$, as weak hypothesis.
            Choose $\alpha_t$ which minimizes $\sum_s v_t^{(s)} \exp\left(-\alpha_t y^{(s)} x_{i_t}^{(s)}\right)$.
            Update $v_{t+1}^{(s)} = v_t^{(s)} \exp\left(-\alpha_t y^{(s)} x_{i_t}^{(s)}\right)$.
       For $x = (x_1, \ldots, x_n)$ predict $\mathrm{sign}\left(\sum_t \alpha_t x_{i_t}\right)$.
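A compact NumPy sketch of this variant, with the coordinates themselves as weak hypotheses (the ±1 feature encoding and the toy data are assumptions of this sketch; for ±1-valued coordinates the minimizing $\alpha_t$ has the closed form $\frac{1}{2}\log\frac{1-\varepsilon_t}{\varepsilon_t}$, with $\varepsilon_t$ the weighted error):

```python
import numpy as np

def adaboost(X, y, T=100, eps=1e-12):
    """AdaBoost with the coordinates x_i as weak hypotheses, as on the slide.

    X has shape (m, n) with entries in {-1, +1}; y holds labels in {-1, +1}.
    Returns the chosen coordinates i_t and their weights alpha_t.
    """
    m, n = X.shape
    v = np.ones(m)                                   # v_0^(s) = 1
    chosen, alphas = [], []
    for _ in range(T):
        corr = (v * y) @ X                           # sum_s v^(s) y^(s) x_i^(s) for every coordinate i
        i_t = int(np.argmax(corr))
        agree = y * X[:, i_t] > 0
        w_plus, w_minus = v[agree].sum(), v[~agree].sum()
        alpha_t = 0.5 * np.log((w_plus + eps) / (w_minus + eps))   # minimizes the weighted exp loss
        v = v * np.exp(-alpha_t * y * X[:, i_t])     # v_{t+1}^(s) = v_t^(s) exp(-alpha_t y^(s) x_{i_t}^(s))
        chosen.append(i_t)
        alphas.append(alpha_t)
    return chosen, alphas

def adaboost_predict(X, chosen, alphas):
    return np.sign(sum(a * X[:, i] for i, a in zip(chosen, alphas)))

# Illustrative data: the label is the majority vote of the first three of ten +-1 features.
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(200, 10))
y = np.sign(X[:, 0] + X[:, 1] + X[:, 2])
chosen, alphas = adaboost(X, y)
print(np.mean(adaboost_predict(X, chosen, alphas) == y))   # training accuracy (1.0 here)
```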
History of Boosting (1)
 Rob Schapire:
 The strength of weak learnability, 1990.
   Showed that classifiers which are only 51% correct can be combined into a 99% correct classifier.
   Rather a theoretical result, since the algorithm was complicated and not practical.
   I know people who thought that this was not an interesting result.
History of Boosting (2)

 Yoav Freund:
 Boosting a weak learning algorithm by majority, 1995.
   Improved boosting algorithm, but still complicated and theoretical.
   Only logarithmically many examples are forwarded to the weak learner!
History of Boosting (3)
 Y. Freund and R. Schapire:
 A decision-theoretic generalization of on-line learning and an application to boosting, 1995.
   Very simple boosting algorithm, easy to implement.
   Theoretically less interesting.
   Performs very well in practice.

 Won the Gödel Prize in 2003 and the Kanellakis Award in 2004. (Both are prestigious prizes in Theoretical Computer Science.)

 Since then many variants of Boosting (mainly to improve error robustness):
   BrownBoost, soft margin boosting, LPBoost.
Support Vector Machines (SVMs)
 In its vanilla version it also learns a linear classifier.

 It maximizes the distance (the margin) between the decision boundary and the nearest training points.
    This formulates learning as a well-behaved optimization problem.

 Invented by Vladimir Vapnik (1979, Russian paper).
    Translated in 1982.
    No practical applications at first, since it required linear separability.
Practical SVMs
 Vapnik:
    The Nature of Statistical Learning Theory, 1995.
    Statistical Learning Theory, 1998.

 Shawe-Taylor, Cristianini:
    Support Vector Machines, 2000.

 Soft margin SVMs:
    Tolerate incorrectly labeled training examples (by using slack variables).

 Non-linear classification using the “kernel trick”.
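As a usage-level illustration of soft margins and kernels together (scikit-learn is an assumption of this sketch, not a tool referenced in the talk): C penalizes slack, and the RBF kernel supplies the non-linearity.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative non-linearly-separable data: +1 inside the unit circle, -1 outside.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.0, 1, -1)

# Soft margin (C) and the kernel trick (RBF kernel) in one model.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy; high, though not necessarily 1.0
```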
Support Vector Machines (SVMs)

 [Figure: a scatter of + and − training points, illustrating the maximum-margin separating boundary.]
The kernel trick (1)

       Recall the Perceptron update,
       $$w^{(t+1)} = w^{(t)} + \eta y^{(t)} x^{(t)} = \eta \sum_{\tau=1}^{t} y^{(\tau)} x^{(\tau)},$$
       and classification,
       $$\hat{y} = \mathrm{sign}\left(w^{(t+1)} \cdot x\right) = \mathrm{sign}\left(\sum_{\tau=1}^{t} y^{(\tau)}\, x^{(\tau)} \cdot x\right).$$

       A kernel function generalizes the inner product,
       $$\hat{y} = \mathrm{sign}\left(\sum_{\tau=1}^{t} y^{(\tau)} K\left(x^{(\tau)}, x\right)\right).$$
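The last formula is exactly the kernelized (dual) Perceptron: store the examples on which mistakes were made, together with their labels, and replace every inner product by K. A minimal sketch (the quadratic kernel and the XOR-style data are illustrative):

```python
import numpy as np

def quad_kernel(a, b):
    """Quadratic kernel K(a, b) = (1 + a.b)^2, an implicit degree-2 feature map."""
    return (1.0 + np.dot(a, b)) ** 2

def kernel_perceptron(X, y, K, epochs=20):
    """Dual Perceptron: predict sign(sum_tau y^(tau) K(x^(tau), x)) over the stored mistakes."""
    mistakes_x, mistakes_y = [], []
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            s = sum(y_m * K(x_m, x_t) for x_m, y_m in zip(mistakes_x, mistakes_y))
            y_hat = 1 if s >= 0 else -1
            if y_hat != y_t:                 # on a mistake, store the example and its label
                mistakes_x.append(x_t)
                mistakes_y.append(y_t)
    return mistakes_x, mistakes_y

# XOR-style data with the {+1, -1} labelling from the earlier slide; not linearly separable.
X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
y = np.array([1, -1, -1, 1])
mx, my = kernel_perceptron(X, y, quad_kernel)
preds = [1 if sum(ym * quad_kernel(xm, x) for xm, ym in zip(mx, my)) >= 0 else -1 for x in X]
print(preds)   # [1, -1, -1, 1]
```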
The kernel trick (2)


       The inner product $x^{(\tau)} \cdot x$ is a measure of similarity:
       $x^{(\tau)} \cdot x$ is maximal if $x^{(\tau)} = x$.

       The kernel function is a similarity measure in feature space,
       $$K\left(x^{(\tau)}, x\right) = \Phi(x^{(\tau)}) \cdot \Phi(x).$$

       Kernel functions can be designed to capture the relevant similarities of the domain.

       Aizerman, Braverman, Rozonoer:
       Theoretical foundations of the potential function method in pattern recognition learning, 1964.


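For instance, the quadratic kernel $K(a, b) = (1 + a \cdot b)^2$ corresponds to an explicit degree-2 feature map $\Phi$, which can be checked numerically (a standard textbook example, not taken from the slides):

```python
import numpy as np

def K(a, b):
    return (1.0 + np.dot(a, b)) ** 2

def Phi(x):
    """Explicit feature map for K on 2-d inputs, so that K(a, b) = Phi(a) . Phi(b)."""
    x1, x2 = x
    s = np.sqrt(2.0)
    return np.array([1.0, s * x1, s * x2, x1**2, x2**2, s * x1 * x2])

rng = np.random.default_rng(0)
a, b = rng.normal(size=2), rng.normal(size=2)
print(np.isclose(K(a, b), np.dot(Phi(a), Phi(b))))   # True
```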
Where are we going?

 New learning algorithms?
 Better image descriptors!
 Probably they need to be learned.
 Probably they need to be hierarchical.
 We need (to use) more data.
Final remark on algorithm evaluation and benchmarks

 Computer vision is where machine learning was 10 years ago (at least for object classification).

 Benchmark datasets are starting to become available, e.g. PASCAL VOC.