Review on Manifold Learning

Phong Vo Dinh
National Institute of Informatics
Hitotsubashi, Chiyoda-ku, Tokyo, Japan

Lab Meeting, 25th Mar 2009
Outline


  1   Motivation
       Curse of Dimensionality
       Do we need feature invariance?
       Hypothesis about manifolds agreement
  2   Background
  3   Taxonomy
        Distance preservation
        Topology preservation
  4   Alignment
  5   Discussion
  6   References

Hyper-volume of cubes and spheres

      In D-dimensional space, consider a sphere and its circumscribed
      cube:

          Vsphere(r) = π^(D/2) r^D / Γ(1 + D/2)

          Vcube(r) = (2r)^D

      As D increases, we obtain lim_{D→∞} Vsphere(r) / Vcube(r) = 0.
      The volume of the sphere vanishes relative to that of the cube as
      the dimensionality increases!
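A quick numerical check of this ratio (a minimal sketch using only the Python
standard library; the function name is illustrative):

```python
import math

def sphere_to_cube_ratio(D: int, r: float = 1.0) -> float:
    """Ratio V_sphere / V_cube for a sphere inscribed in a cube of side 2r."""
    v_sphere = (math.pi ** (D / 2)) * (r ** D) / math.gamma(1 + D / 2)
    v_cube = (2 * r) ** D
    return v_sphere / v_cube

for D in (1, 2, 3, 5, 10, 20, 50):
    print(f"D={D:3d}  ratio={sphere_to_cube_ratio(D):.3e}")
```

Already at D = 20 the ratio is of order 10^-8: almost all of the cube's volume
sits in its corners, outside the inscribed sphere.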
Hyper-volume of a thin spherical shell

      The relative hyper-volume of a thin spherical shell is

          (Vsphere(r) − Vsphere(r(1 − ε))) / Vsphere(r)
              = (1^D − (1 − ε)^D) / 1^D = 1 − (1 − ε)^D

      where ε is the relative thickness of the shell (ε ≪ 1). As D
      increases, the ratio tends to 1, meaning that the thin shell
      contains almost all of the volume.
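The same concentration effect is easy to verify numerically (a minimal sketch;
`shell_fraction` is an illustrative name):

```python
# Fraction of a D-ball's volume in a thin outer shell of relative thickness eps.
def shell_fraction(D: int, eps: float = 0.01) -> float:
    return 1.0 - (1.0 - eps) ** D

for D in (2, 10, 100, 1000):
    print(f"D={D:5d}  shell fraction={shell_fraction(D):.4f}")
```

With eps = 0.01, a 1000-dimensional ball already has about 99.996% of its
volume in the outermost 1% of its radius.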
Diagonal of a hypercube

      Consider a hypercube [−1, +1]^D with 2^D corners.
      The vector from the origin to one of the corners is
      v = [±1, ..., ±1]^T.
      The angle between a half-diagonal v and a coordinate axis
      e_d = [0, ..., 0, 1, 0, ..., 0]^T is computed as

          cos θ_D = v^T e_d / (‖v‖ ‖e_d‖) = ±1 / √D

      As D grows, the half-diagonals become nearly orthogonal to all
      coordinate axes.
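A small sketch checking this with NumPy (random sign choices stand in for an
arbitrary corner; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
for D in (2, 10, 100, 10000):
    v = rng.choice([-1.0, 1.0], size=D)      # a random half-diagonal
    e = np.zeros(D); e[0] = 1.0              # a coordinate axis
    cos_theta = v @ e / (np.linalg.norm(v) * np.linalg.norm(e))
    print(f"D={D:6d}  cos(theta)={cos_theta:+.5f}  1/sqrt(D)={1/np.sqrt(D):.5f}")
```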
Example: hypercube in hyperspace




     Figure: An intuition about a hypercube, courtesy of Mathematica
Feature invariance or distance invariance?

      A possible approach to introducing invariance into pattern
      recognition algorithms is to use transformation-invariant
      features.
          Crucial information may be discarded
          Difficult to evaluate the impact of feature extraction on the
          classification error
      Alignment and classification can be seen as two sides of the
      same coin.
      The appropriate distance for classification is the one that
      maximizes alignment.
      Much effort has therefore concentrated on seeking invariance
      through the computation of appropriate distance measures in the
      pattern space.
Example: classification under the alignment viewpoint




           Figure: A dense space of images, courtesy of [6]
A view of visual perception
Manifolds in visual perception

      The retinal image is a collection of signals from photoreceptor
      cells

      These photoreceptors construct an abstract image space

      Different appearances of the same identity are expected to lie on
      a low-dimensional manifold

      How does the brain represent image manifolds?
Manifolds in visual perception

      Neurophysiologists found that the firing rate of each neuron
      can be expressed as a smooth function of several variables
          angular position of the eye
          direction of the head
          ...
      This implies that the neural population activity is constrained to
      lie on a low-dimensional manifold

      What is the connection between neural manifolds and image
      manifolds?

      The question remains open!
Topology and spaces

       Topology studies the properties of objects that are preserved
       through deformations, twistings, and stretchings.
       Knowledge of an object does not depend on how it is presented,
       or embedded, in space.
       Topology is used to abstract the intrinsic connectivity of objects
       while ignoring their detailed form.
       A and B are called homeomorphic (topologically isomorphic) if
       there exists a topology-preserving map between them.

  Example
  A circle is topologically equivalent to an ellipse, and a mug (with a
  handle) is equivalent to a torus!
Example: a mug is equivalent to a torus
Manifold intuition

      Intuitively, a manifold is a generalization of curves and surfaces
      to arbitrary dimension, or...
What is a manifold?

How do we make sense of "locally similar" to a Euclidean space?

Definition
A topological space M is locally Euclidean of dimension n if every
point p in M has a neighborhood U such that there is a
homeomorphism ϕ from U onto an open subset of R^n. [12]

A map ϕ : U → R^m defined on an open region U ⊆ R^n, n ≤ m, is
said to be a parameterization if:
 (i) ϕ is a smooth (i.e., infinitely differentiable), one-to-one mapping.
This simply says that V = ϕ(U) is produced by bending and stretching
the region U in a gentle, elastic manner, disallowing self-intersections.

Figure: ϕ maps an open region U ⊂ R² to a surface V = ϕ(U).

A (topological) manifold M is a topological space that is locally
Euclidean.
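As a concrete instance of such a parameterization, here is a sketch of the
standard Swiss-roll map from an open region of R² into R³ (assuming NumPy;
this example is not taken from the slides):

```python
import numpy as np

# phi : U in R^2 -> R^3, a smooth one-to-one parameterization of the Swiss roll.
def phi(t: np.ndarray, h: np.ndarray) -> np.ndarray:
    return np.stack([t * np.cos(t), h, t * np.sin(t)], axis=-1)

t = np.random.uniform(1.5 * np.pi, 4.5 * np.pi, size=1000)  # open region U
h = np.random.uniform(0.0, 20.0, size=1000)
V = phi(t, h)   # V = phi(U): a 2-manifold embedded in R^3
```

The spiral radius grows monotonically with the angle t, so the map never
self-intersects on this region, as the definition requires.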
Embedding

      An embedding is a representation of a topological object in a
      certain space in such a way that its topological properties are
      preserved.
      Usually, a P-manifold has dimension P ≪ D, where R^D is the
      embedding space.
Dimensionality Reduction with Manifolds

      Re-embedding a manifold from a high-dimensional space into a
      lower-dimensional one.
      In practice, the underlying manifold is completely unknown
      except for a limited set of noisy data points!
Example: visualizing the face image space in 2D
Example: Unfolding the Swiss roll




  Figure: The problem of nonlinear dimensionality reduction for
  three-dimensional data (B) sampled from a two-dimensional manifold
  (A). An unsupervised learning algorithm must discover the global internal
  coordinates of the manifold without signals that explicitly indicate how
  the data should be embedded in two dimensions. The color coding
  illustrates the neighborhood-preserving mapping discovered by LLE [7];
  black outlines in (B) and (C) show the neighborhood of a single point.
Example: linear dimensionality reduction vs. nonlinear
dimensionality reduction

  Figure: Locally Linear Embedding (LLE) is an algorithm for nonlinear
  dimensionality reduction on manifolds. Here we present the results of
  PCA (left) and LLE (right), applied to images of a single face translated
  across a two-dimensional background of noise. Note how LLE maps the
  images with corner faces to the corners of its two-dimensional
  embedding, while PCA fails to preserve the neighborhood structure of
  nearby images. Courtesy of [5]
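A minimal sketch of the same PCA-vs-LLE comparison on the Swiss-roll data
of the previous figure, using scikit-learn and matplotlib (assumed available;
the neighborhood size n_neighbors=12 is an illustrative choice):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

# Sample noisy points from a 2-manifold embedded in R^3.
X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

Y_pca = PCA(n_components=2).fit_transform(X)                     # linear
Y_lle = LocallyLinearEmbedding(n_neighbors=12,
                               n_components=2).fit_transform(X)  # nonlinear

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, Y, title in zip(axes, (Y_pca, Y_lle), ("PCA", "LLE")):
    ax.scatter(Y[:, 0], Y[:, 1], c=color, s=5)
    ax.set_title(title)
plt.show()
```

PCA simply projects the roll flat, mixing inner and outer turns, while LLE
unrolls it so that the color gradient (the intrinsic coordinate) is preserved.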
Example: 2-manifold and geodesic distance




  Figure: A sphere can be represented by a collection of two-dimensional
  maps; therefore a sphere is a two-dimensional manifold.
Introduction

      In the linear case, maximizing variance or minimizing the
      reconstruction error, combined with a basic linear model, leads to
      robust methods (e.g., PCA).
      In the nonlinear case, more complex data models are required.
      The motivation behind distance preservation?
          Any manifold can be fully described by pairwise distances.
      The goal:
          Build a low-dimensional representation in such a way that the
          initial pairwise distances are reproduced.

      Spatial distance
      Geodesic distance
      Other distances: kernel PCA, semidefinite programming
Spatial distance

      Computes the distance separating two points of the space
      Takes no account of any other information, e.g., the presence of
      a submanifold

      Methods
          Multidimensional scaling [5]
          Sammon's nonlinear mapping [5]
          Curvilinear component analysis [5]
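A minimal sketch of classical (metric) MDS from a pairwise distance matrix,
assuming only NumPy; double-centering recovers the Gram matrix whose top
eigenvectors give the embedding (function names are illustrative):

```python
import numpy as np

def classical_mds(D: np.ndarray, dim: int = 2) -> np.ndarray:
    """Embed points so Euclidean distances approximate the n x n matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]              # top-dim eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Usage: pairwise distances between points in R^3, embedded into R^2.
X = np.random.default_rng(0).normal(size=(10, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, dim=2)
```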
Graph distance

      Attempts to overcome some shortcomings of spatial metrics such
      as the Euclidean distance
      Measures the distance along the manifold rather than through
      the embedding space
      The distance along a manifold is called the geodesic distance
      The geodesic distance is hard to compute exactly:
          only some (noisy) sample points on M are available
          the input space is not continuous
      Solution: discretize arc length into shortest paths on a
      neighborhood graph
      Methods
          Isomap [11]
          Geodesic NLM [5]
          Curvilinear distance analysis [5]
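A minimal sketch of the graph-distance approximation used by Isomap-style
methods, assuming SciPy and scikit-learn; the neighborhood size k = 10 is an
illustrative choice:

```python
from scipy.sparse.csgraph import shortest_path
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph

X, _ = make_swiss_roll(n_samples=800, random_state=0)

# Step 1: k-nearest-neighbor graph with Euclidean edge weights.
G = kneighbors_graph(X, n_neighbors=10, mode="distance")

# Step 2: all-pairs shortest paths approximate geodesic distances on M.
geodesic = shortest_path(G, method="D", directed=False)  # Dijkstra
print(geodesic.shape)  # (800, 800) matrix of approximate geodesic distances
```

Feeding this matrix to classical MDS (previous sketch) is essentially the
Isomap algorithm.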
Example: graph distance in Isomap




  Figure: (A) For two arbitrary points (circled) on a nonlinear manifold,
  their Euclidean distance in the high-dimensional input space (length of
  dashed line) may not accurately reflect their intrinsic similarity, as
  measured by geodesic distance along the low-dimensional manifold
  (length of solid curve). (B) The neighborhood graph G constructed in
  step one of Isomap allows an approximation (red segments) to the true
  geodesic path to be computed efficiently in step two, as the shortest
  path in G. (C) The two-dimensional embedding recovered by Isomap in
  step three, which best preserves the shortest-path distances in the
  neighborhood graph (overlaid). Straight lines in the embedding (blue)
  now represent simpler and cleaner approximations to the true geodesic
  paths than do the corresponding graph paths (red).
Other distances: Kernel PCA

      Closely related to classical metric MDS
      KPCA extends the algebraic properties of MDS to nonlinear
      manifolds without regard to their geometrical meaning
      The idea is to linearize the underlying manifold M:

          φ : M ⊂ R^D → R^Q,  y ↦ z = φ(y)

      in which Q is a very high (possibly infinite) dimension.
      KPCA assumes φ can map the data to a linear subspace of the
      Q-dimensional space (Q ≫ D)

      Surprisingly, KPCA first increases the data dimensionality!
      Shares advantages with PCA and MDS
      Difficulty in choosing an appropriate kernel
      Not motivated by geometrical arguments
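A minimal sketch with scikit-learn's KernelPCA (the RBF kernel, its gamma
value, and the two-circles data set are illustrative choices, not prescribed
by the slides):

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA, PCA

# Two concentric circles: not linearly separable in R^2.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

Y_lin = PCA(n_components=2).fit_transform(X)
Y_rbf = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)
# In the implicit high-dimensional feature space the two circles
# become linearly separable; linear PCA leaves them entangled.
```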
Introduction

      Distances carry more information than is strictly necessary
      Comparative information between distances, such as inequalities
      or ranks, suffices to characterize a manifold, for any embedding

      Most distance functions make no distinction between the
      manifold and the surrounding empty space
      Topology considers only the inside of the manifold

      Difficult to characterize, because only limited data points are
      available
      Most methods work with a discrete mapping model (a lattice)
      Models:
          Predefined lattice
          Data-driven lattice
Predefined lattice

      The lattice is fixed in advance
      It cannot change after the dimensionality reduction has begun
      The lattice is a rectangular or hexagonal grid made of regularly
      spaced points
      Very few manifolds fit such a simple shape in practice

      Methods:
          Self-Organizing Maps [5]
          Generative Topographic Mapping [5]
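A minimal sketch of a Self-Organizing Map update loop on a fixed rectangular
lattice, assuming NumPy; the grid size, learning-rate schedule, and
neighborhood schedule are all illustrative choices:

```python
import numpy as np

def train_som(X, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: a predefined rectangular lattice of prototype vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h * w, X.shape[1]))               # prototype vectors
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(X)
    order = rng.permutation(n_steps) % len(X)              # shuffled sample indices
    for t, i in enumerate(order):
        frac = 1.0 - t / n_steps
        lr, sigma = lr0 * frac, sigma0 * frac + 0.5        # decaying schedules
        bmu = np.argmin(((W - X[i]) ** 2).sum(axis=1))     # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # squared lattice distance
        W += lr * np.exp(-d2 / (2 * sigma ** 2))[:, None] * (X[i] - W)
    return W.reshape(h, w, -1)
```

Note that the lattice coordinates never change during training, only the
prototypes W: this is exactly the rigidity the slide criticizes.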
Data-driven lattice

      Makes no assumption about the shape and topology of the
      embedding
      Adapts to the data set in order to capture the manifold shape

      Methods
          Locally linear embedding [8, 7]
          Laplacian eigenmaps [1, 2]
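A minimal sketch of Laplacian eigenmaps [1] with NumPy, SciPy, and
scikit-learn; binary kNN weights and the normalized Laplacian are one common
choice (heat-kernel weights are another), and the dense eigensolver is used
only for simplicity:

```python
import numpy as np
from scipy.sparse import csgraph
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, dim=2, k=10):
    """Embed X by the smallest nontrivial eigenvectors of the graph Laplacian."""
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    W = W.maximum(W.T)                       # symmetrize the kNN adjacency
    L = csgraph.laplacian(W, normed=True)    # normalized graph Laplacian
    vals, vecs = np.linalg.eigh(L.toarray())
    return vecs[:, 1:dim + 1]                # skip the trivial constant eigenvector
```

Here the kNN graph itself is the lattice, built from the data rather than
fixed in advance, which is what distinguishes this family from SOM-style
methods.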
Distance between manifolds

      Recognition can also be conducted with a set of query images
      rather than a single query image
      This is reformulated as matching a query image set against all
      the gallery image sets representing a subject
      The problem can be converted to the problem of matching
      different manifolds
      This requires a good definition of the distance between
      manifolds, which are nonlinear spaces
      To date, few works have been devoted to this problem:
      [14, 13, 4, 3]
Applicability

      Manifold-based nonlinear dimensionality reduction (NLDR) has
      been applied in:
          Face recognition
          Gesture recognition
          Handwriting recognition
          Human action recognition[10]
      Characteristics of current manifold-based NLDRs:
          Prefer medium- or large-scale databases
          Data instances should be quite similar in appearance (e.g.,
          faces, hands, handwriting)
          Small image sizes (e.g., 200x200 pixels)
          Manifold dimension must be chosen manually
          Restricted in adapting to new data points (i.e., offline or
          batch mode)
Discussion



      Challenges/Opportunities:
          Nobody has done it before!
          Event images/videos are highly varied in appearance
          Poorly defined distance measures for event manifolds
      Schedule
          First test on event images with a single actor (KTH,
          Weizmann, and IXMAS datasets)
          Then test on event images with cluttered backgrounds
          (movies, ...)
          Test different kinds of manifold-manifold distance
          Propose a way to decrease the variation in event
          images/videos


References

Mikhail Belkin and Partha Niyogi.
Laplacian eigenmaps and spectral techniques for embedding
and clustering.
In NIPS, pages 585-591, 2001.

Mikhail Belkin and Partha Niyogi.
Convergence of Laplacian eigenmaps.
In NIPS, pages 129-136, 2006.

Andrew W. Fitzgibbon and Andrew Zisserman.
Joint manifold distance: a new approach to appearance based
clustering.
In CVPR, 1:26, 2003.

Erosyni Kokiopoulou and Pascal Frossard.

Minimum distance between pattern transformation manifolds:
Algorithm and applications.
IEEE Transactions on Pattern Analysis and Machine
Intelligence, 99(1), 2008.

John A. Lee and Michel Verleysen.
Nonlinear Dimensionality Reduction.
Springer, 2007.

C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman.
SIFT flow: Dense correspondence across different scenes.
In ECCV, pages III: 28-42, 2008.

S. T. Roweis and L. K. Saul.
Nonlinear dimensionality reduction by locally linear embedding.
Science, 290(5500):2323-2326, December 2000.

Lawrence K. Saul and Sam T. Roweis.
Think globally, fit locally: Unsupervised learning of low
dimensional manifolds.
Journal of Machine Learning Research, 4:119-155, 2003.

H. Sebastian Seung and Daniel D. Lee.
The manifold ways of perception.
Science, 290(5500):2268-2269, 2000.

Richard Souvenir and Justin Babbs.
Learning the viewpoint manifold for action recognition.
In CVPR, 2008.

J. B. Tenenbaum, V. de Silva, and J. C. Langford.
A global geometric framework for nonlinear dimensionality
reduction.
Science, 290(5500):2319-2323, December 2000.

Loring W. Tu.
An Introduction to Manifolds.
Springer, 2008.

Nuno Vasconcelos and Andrew Lippman.
A multiresolution manifold distance for invariant image
similarity.
IEEE Transactions on Multimedia, 7(1):127-142, 2005.

R.P. Wang, S.G. Shan, X.L. Chen, and W. Gao.
Manifold-manifold distance with application to face recognition
based on image set.
In CVPR, pages 1-8, 2008.



