COMPUTER VISION:
SINGULAR VALUE DECOMPOSITION

Computer Science and Engineering,
Indian Institute of Technology Kharagpur




Singular Value Decomposition (SVD)

An m × n matrix A of rank r maps the r-dimensional unit hypersphere
in rowspace(A) into an r-dimensional hyper-ellipse in range(A).
The rank-2 matrix

                   [  √3   √3 ]
    A = (1/√2) *   [  -3    3 ] ,        b = Ax
                   [   1    1 ]
  transforms the unit circle on the plane into an ellipse embedded in
  three-dimensional space.
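
As a quick numerical illustration (a minimal MATLAB sketch; the sampling of
the circle is arbitrary), the extreme distances of the image points from the
origin are the semi-axis lengths of the ellipse, which the SVD below will
identify as the singular values of A:

    % Map points of the unit circle through A and inspect the image ellipse.
    A = (1/sqrt(2)) * [sqrt(3) sqrt(3); -3 3; 1 1];
    t = linspace(0, 2*pi, 400);
    X = [cos(t); sin(t)];            % unit circle in the plane
    B = A * X;                       % image: an ellipse embedded in R^3
    r = sqrt(sum(B.^2));             % distance of each image point from the origin
    disp([max(r) min(r)]);           % semi-axis lengths of the ellipse: 3 and 2
    disp(svd(A)');                   % the singular values of A: 3 and 2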




Two diametrically opposite points on the unit circle are mapped
into the two endpoints of the major axis of the ellipse, and two
other diametrically opposite points on the unit circle are mapped
into the two endpoints of the minor axis of the ellipse.
The lines through these two pairs of points on the unit circle are
always orthogonal. This result can be generalized to any m × n
matrix.
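
In MATLAB terms, the preimages of the two axes are the columns of V from the
SVD introduced on the next slide (a minimal sketch reusing the example matrix
from above):

    % The preimages of the ellipse axes are the right singular vectors of A,
    % and these directions are always orthogonal (V has orthonormal columns).
    A = (1/sqrt(2)) * [sqrt(3) sqrt(3); -3 3; 1 1];
    [U, S, V] = svd(A);
    disp(V' * V);                    % identity: the preimage directions are orthogonal
    disp(sqrt(sum((A * V).^2)));     % lengths of the image axes: 3 and 2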

SVD

If A is a real m × n matrix, then there exist orthogonal matrices

    U = [u1, u2, ..., um] ∈ R^(m×m)
    V = [v1, v2, ..., vn] ∈ R^(n×n)

such that

    U^T A V = Σ = diag(σ1, ..., σp) ∈ R^(m×n)

where p = min(m, n) and σ1 ≥ σ2 ≥ ... ≥ σp ≥ 0.

Equivalently,    A = U Σ V^T

    The vectors vi are the right singular vectors.
    The vectors ui are the left singular vectors.
    Σ has real non-negative diagonal entries, called the singular values.
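
A minimal MATLAB check of the factorization on the example matrix from
slide 2 (the printed quantities are illustrative):

    % Verify A = U*Sigma*V' (equivalently U'*A*V = Sigma) on the 3x2 example.
    A = (1/sqrt(2)) * [sqrt(3) sqrt(3); -3 3; 1 1];
    [U, S, V] = svd(A);              % U: 3x3, S: 3x2, V: 2x2
    disp(diag(S)');                  % singular values in descending order: 3 2
    disp(norm(A - U*S*V'));          % ~ machine epsilon
    disp(norm(U'*A*V - S));          % ~ machine epsilon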

SVD
 SVD is one of the most useful matrix decompositions, particularly
 for numerical computations.
 Its most common application is in the solution of over-determined
 systems of equations.
 Generally the decomposition is carried out in such a way that the
 diagonal entries of Σ are in descending order.
Since the matrices U and V are orthogonal, they have the following
properties:

    ||Ux|| = ||x||   for any vector x
    U^T U = I_{n×n}

In general UU^T is not the identity unless m = n (this refers to the thin,
m × n form of U returned by common implementations; the full m × m U is
orthogonal on both sides).




SVD

One can also define the SVD for matrices with more columns than rows, but
generally this will not be of interest to us. If needed, A can be extended
with rows of zeros to obtain a square matrix, whose SVD is then taken.

Common implementations of SVD assume that m ≥ n. In this case the matrix U
has the same dimensions m × n as the input, so A may be overwritten in place
by the output matrix U.
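
MATLAB's economy-size SVD makes these dimensions explicit, and also shows why
UU^T is not the identity in the thin case (a minimal sketch with an arbitrary
tall matrix):

    % Thin (economy-size) SVD of a tall matrix: U inherits A's m x n shape.
    A = randn(5, 2);                 % m = 5 > n = 2
    [U, S, V] = svd(A, 'econ');      % U: 5x2, S: 2x2, V: 2x2
    disp(norm(U'*U - eye(2)));       % ~0: the columns of U are orthonormal
    disp(norm(U*U' - eye(5)));       % not ~0: UU' is not the identity when m > n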




Fitting a line

Let pi = (xi, yi)^T, i = 1, ..., m, be a set of m ≥ 2 points in the plane.
Consider the line ax + by − c = 0.
The normal to the line is given by the vector n = (a, b)^T. We can assume
that ||n||^2 = a^2 + b^2 = 1, so that n is a unit vector.
The distance of this line from the origin is |c|, and the distance between
the line and the point pi is

    di = |a xi + b yi − c| = |pi^T n − c|

The best-fit line minimizes the sum of squared distances:

    min_{||n||=1} ||d||^2 = min_{||n||=1} ||Pn − c 1||^2

where P = (p1, ..., pm)^T, d = (d1, ..., dm)^T, and 1 is a vector of m ones.


Fitting a line

    min_{||n||=1} ||d||^2 = min_{||n||=1} ||Pn − c 1||^2

Minimization with respect to c is done by setting

    ∂||d||^2 / ∂c = 0 ,   which gives   c = p̄^T n

where p̄ is the mean of the points, p̄ = (1/m) P^T 1.
Substituting for c, the minimization problem with respect to n becomes

    min_{||n||=1} ||d||^2 = min_{||n||=1} ||Pn − 1 p̄^T n||^2 = min_{||n||=1} ||Qn||^2

where Q = P − 1 p̄^T is the matrix of centered points. The solution is
n = v2, the second column of the matrix V in the SVD Q = U Σ V^T: over unit
vectors, ||Qn|| is minimized by the right singular vector belonging to the
smallest singular value, which here is σ2.

function [l, residue] = linefit(P)
% LINEFIT  Total least-squares line fit to the m points in the rows of P.
%   l = [a; b; c] describes the line ax + by - c = 0 with a^2 + b^2 = 1;
%   residue is the smallest singular value of the centered data.
[m, n] = size(P);
if n ~= 2, error('matrix P must be m x 2'), end
if m < 2, error('Need at least two points'), end
one = ones(m, 1);
p = (P' * one) / m;        % mean of the points
Q = P - one * p';          % centered points
[U, D, V] = svd(Q);
n = V(:, 2);               % right singular vector for the smallest sigma
l = [n; p' * n];           % line parameters [a; b; c], with c = p'*n
residue = D(2, 2);         % smallest singular value = fit residual
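
A minimal usage sketch (the synthetic line and noise level are illustrative):

    % Fit a line to noisy samples of 3x + 4y = 10, i.e. n = (0.6, 0.8), c = 2.
    m = 100;
    x = linspace(-5, 5, m)';
    y = (10 - 3*x) / 4 + 0.05 * randn(m, 1);
    [l, residue] = linefit([x y]);
    disp(l');                    % approximately [0.6 0.8 2.0] (up to a global sign)
    disp(residue);               % small: the points are nearly collinear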




Singular Values and Eigenvalues
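
For a real matrix A with SVD A = U Σ V^T, we have A^T A = V Σ^T Σ V^T and
A A^T = U Σ Σ^T U^T. Hence the singular values of A are the square roots of
the eigenvalues of A^T A (equivalently of A A^T), the columns of V are
eigenvectors of A^T A, and the columns of U are eigenvectors of A A^T.
A minimal numerical check of this relation (the test matrix is arbitrary):

    % Singular values of A vs. eigenvalues of A'*A.
    A = randn(4, 3);
    s = svd(A);                          % descending
    lambda = sort(eig(A' * A), 'descend');
    disp([s.^2 lambda]);                 % the two columns agree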




