Presentation of paper #7:

Nonlinear component analysis as a kernel eigenvalue problem
Schölkopf, Smola, Müller
Neural Computation 10, 1299-1319, MIT Press (1998)



Group C: M. Filannino, G. Rates, U. Sandouk
COMP61021: Modelling and Visualization of high-dimensional data
Introduction
● Kernel Principal Component Analysis (KPCA)
  ○ KPCA is an extension of Principal Component Analysis
  ○ It computes PCA in a new feature space
  ○ Useful for feature extraction and dimensionality reduction
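As a concrete illustration (our addition, not from the slides): scikit-learn ships a KernelPCA implementation, and a minimal usage sketch looks like this. The dataset and parameter values are purely illustrative.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Two concentric circles: not linearly separable in the input space
X, _ = make_circles(n_samples=400, factor=0.3, noise=0.05)

# Nonlinear principal components via an RBF kernel (illustrative gamma)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10)
Z = kpca.fit_transform(X)   # kernel principal components of X
```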
Motivation: possible solutions
Principal Curves

Trevor Hastie; Werner Stuetzle, "Principal Curves," Journal of the American Statistical Association, Vol. 84, No. 406. (Jun. 1989), pp. 502-516.

● Optimization (including the quality of data approximation)
● Natural geometric meaning
● Natural projection

http://pisuerga.inf.ubu.es/cgosorio/Visualization/imgs/review3_html_m20a05243.png
Motivation: possible solutions
Autoencoders

Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313, 504-507.

● Feed-forward neural network
● Approximates the identity function

http://www.nlpca.de/fig_NLPCA_bottleneck_autoassociative_autoencoder_neural_network.png
Motivation: some new problems

● Low input dimensions
● Problem-dependent
● Hard optimization problems
Motivation: kernel trick
KPCA captures the overall variance of patterns
[Image-only slides illustrating the kernel trick, including a video demo.]
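To make the trick concrete, here is a small check (our addition, not from the slides) that the degree-2 polynomial kernel k(x, y) = (x · y)² equals an ordinary dot product after the explicit monomial map Φ(x) = (x1², √2·x1x2, x2²), so the feature map never has to be computed:

```python
import numpy as np

def phi(x):
    # Explicit degree-2 monomial feature map for 2-D inputs
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])
lhs = np.dot(x, y) ** 2          # kernel evaluation in the input space
rhs = np.dot(phi(x), phi(y))     # dot product in the feature space
assert np.isclose(lhs, rhs)      # both equal 16.0
```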
Principle

[Figure: the data in the input space and its image as new features in F.]

"We are not interested in PCs in the input space, we are interested in PCs of features that are nonlinearly related to the original ones"
Principle
Given a data set of N centered observations x1, ..., xN in a d-dimensional space (Σj xj = 0):

● PCA diagonalizes the covariance matrix:

  C = (1/N) Σj xj xjT

● It is necessary to solve the following eigenvalue problem:

  λv = Cv,  equivalently  λ(xk · v) = (xk · Cv) for all k = 1, ..., N

● We can define the same computation in another dot product space F, related to the input space by a (possibly nonlinear) map Φ: Rd → F.
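A minimal NumPy sketch of the linear case above (illustrative data; eigh solves λv = Cv for the symmetric covariance matrix):

```python
import numpy as np

X = np.random.randn(200, 5)
X = X - X.mean(axis=0)                 # centered observations
C = (X.T @ X) / X.shape[0]             # covariance matrix C = (1/N) sum_j x_j x_j^T
eigvals, eigvecs = np.linalg.eigh(C)   # solves lambda v = C v
order = np.argsort(eigvals)[::-1]      # largest variance first
V = eigvecs[:, order[:2]]              # first two principal directions
Z = X @ V                              # projections onto the principal components
```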
Principle
Given the data set mapped into the high-dimensional space F, Φ(x1), ..., Φ(xN), assumed centered (Σj Φ(xj) = 0):

● Covariance matrix in the new space:

  C = (1/N) Σj Φ(xj) Φ(xj)T

● Again, it is necessary to solve the following eigenvalue problem:

  λV = CV

● This means that all solutions V with λ ≠ 0 lie in the span of Φ(x1), ..., Φ(xN):

  V = Σi αi Φ(xi)
Principle
● Combining the last three equations, we obtain:

  λ(Φ(xk) · V) = (Φ(xk) · CV)  for all k = 1, ..., N

● we define a new function (the kernel):

  k(xi, xj) = (Φ(xi) · Φ(xj))

● and a new N x N matrix:

  Kij = k(xi, xj)

● our equation becomes:

  Nλα = Kα
Principle
●   let λ1 ≤ λ2 ≤ ... ≤ λN denote the eigenvalues of K, and α1, ..., αN the
    corresponding eigenvectors, with λp being the first nonzero eigenvalue
    then we require they are normalized in F:




●   Encoding a data point y means computing:
Algorithm

● Centralization
  For a given data set, subtract the mean from all observations to obtain centered data in Rd.
● Finding principal components
  Compute the N x N matrix Kij = k(xi, xj) using the kernel function; find its eigenvectors αk and eigenvalues λk.
● Encoding training/testing data
  Compute (Vk · Φ(x)) = Σi αik k(xi, x), where x is the vector to encode. This can be done since we calculated the eigenvalues and eigenvectors.
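Putting the three steps together, a minimal NumPy sketch (the feature-space centering of K is anticipated from the Disadvantages slide; the polynomial kernel and data are illustrative, and the leading eigenvalues are assumed positive):

```python
import numpy as np

def kernel_pca(X, kernel, n_components):
    N = X.shape[0]
    # Gram matrix K_ij = k(x_i, x_j)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    # Centering in feature space: K~ = K - 1N K - K 1N + 1N K 1N, (1N)_ij = 1/N
    one_n = np.full((N, N), 1.0 / N)
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Solve the eigenproblem for K (eigh returns eigenvalues in ascending order)
    eigvals, eigvecs = np.linalg.eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Normalization in F: lambda_k (alpha_k . alpha_k) = 1
    alphas = eigvecs[:, :n_components] / np.sqrt(eigvals[:n_components])
    # Encoded training data: (V_k . Phi(x_j)) = sum_i alpha_ik k(x_i, x_j)
    return K @ alphas

# Example with a degree-2 polynomial kernel (illustrative)
poly2 = lambda x, y: np.dot(x, y) ** 2
X = np.random.randn(100, 2)
Z = kernel_pca(X, poly2, n_components=2)
```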
Algorithm
● Reconstructing training data
  The operation cannot be done exactly, because the eigenvectors do not have pre-images in the original input space.
● Reconstructing a test data point
  The same limitation applies: the eigenvectors do not have pre-images in the original input space.
Disadvantages
● Centering in the original space does not imply centering in F; we need to adjust the K matrix as follows:

  K~ = K − 1N K − K 1N + 1N K 1N,  where (1N)ij := 1/N

● KPCA is now a parametric technique:
  ○ choice of a proper kernel function
    ■ Gaussian, sigmoid, polynomial
  ○ Mercer's theorem
    ■ k(x,y) must be continuous, symmetric, and positive semi-definite (xTAx ≥ 0)
    ■ it guarantees that there are non-zero eigenvalues
● Data reconstruction is not possible, unless an approximation formula is used: find a pre-image z minimizing ||Φ(z) − PnΦ(x)||², where PnΦ(x) is the projection onto the first n components (a sketch follows below)
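For Gaussian kernels, one such approximation is the fixed-point iteration of Mika et al. ("Kernel PCA and De-Noising in Feature Spaces", in the references). A sketch, assuming the coefficients gammas of Φ(xi) in the expansion of PnΦ(x) have already been computed:

```python
import numpy as np

def approximate_preimage(X, gammas, sigma2=0.1, n_iter=50):
    """Fixed-point pre-image iteration for a Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / sigma2):
        z_{t+1} = sum_i gamma_i k(x_i, z_t) x_i / sum_i gamma_i k(x_i, z_t)
    X: (N, d) training data; gammas: (N,) expansion coefficients (assumed given).
    """
    z = X[np.argmax(np.abs(gammas))].copy()   # heuristic initialization
    for _ in range(n_iter):
        w = gammas * np.exp(-np.sum((X - z) ** 2, axis=1) / sigma2)
        z = (w @ X) / w.sum()
    return z
```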
Advantages

● Time complexity
  ○ we will return to this point later
● Handles nonlinearly separable problems
● Extraction of more principal components than PCA
  ○ feature extraction vs. dimensionality reduction
Experiments

●   Applications
●   Data Sets
●   Methods compared
●   Assessment
●   Experiments
●   Results
Applications
● Clustering
  ○ Density estimation
    ■ e.g., high correlation between features
  ○ De-noising
    ■ e.g., removing lighting effects from bright images
  ○ Compression
    ■ e.g., image compression

● Classification
  ○ e.g., categorisation
Datasets
● Simple example 1 — created from a uniform distribution on [−1, 1]; y = x² + C with noise C of sd 0.1. Representation: unlabelled, 2 dimensions.
● Simple example 2 (also used for the kernels experiment) — three Gaussians with sd 0.1, on [−1, 1] x [0.5, 1]. Representation: three clusters, unlabelled, 2 dimensions.
● De-noising — eleven Gaussians on [−1, 1] with zero mean. Representation: a circle and square, unlabelled, 10 dimensions.
● USPS character recognition — handwritten digits. Representation: labelled, 256 dimensions, 9298 digits.
Experiments
1. Simple example 1
   Dataset: the uniform distribution, sd = 0.2
   Kernel: polynomial, degree 1–4

2. USPS character recognition
   Dataset: USPS
   Methods: five-layer neural networks, kernel SVM, PCA, SVM
   Parameters: kernel PCA with a polynomial kernel of degree 1–7 and 32–2048 components (doubling, x2); neural networks and SVM with the best parameters for the task

3. De-noising
   Dataset: de-noising, 11 Gaussians, sd = 0.1
   Methods: kernel autoencoders, principal curves, kernel PCA, linear PCA
   Parameters: the best parameters for the task

4. Kernels
   Kernels: radial basis function, sigmoid
   Parameters: the best parameters for the task
Methods
These are the methods compared in the experiments:

● Classification (supervised): neural networks, SVM, kernel LDA (e.g., for face recognition)
● Dimensionality reduction (unsupervised): linear PCA (linear); kernel PCA, kernel autoencoders, principal curves (nonlinear)
Assessment
● 1. Accuracy
  Classification: exact classification
  Clustering: comparable to other clusters
● 2. Time complexity
  The time to compute
● 3. Storage complexity
  The storage of the data
● 4. Interpretability
  How easy it is to understand
Simple Example
● Recreated example
  Dataset: the USPS handwritten digits
  Training set: 3000
  Classifier: SVM, dot-product kernel of degree 1–7
  PC: 32–2048 (x2)
● Nonlinear PCA paper example
  Dataset: the uniform distribution with sd 0.2
  Classifier: polynomial kernel of degree 1–4
  PC: 1–3

Fig.: 3D data mapped by a kernel, PCA applied, projected back to 2D; the eigenvectors 1–3 of highest eigenvalue give accurate clustering of the nonlinear features. Polynomial kernel of degree 1–4 on the function y = x² + B, with noise B of sd 0.2 from the uniform distribution on [−1, 1].
Character recognition
Dataset: the USPS handwritten digits
Training set: 3000
Classifier: SVM, dot-product kernel of degree 1–7
PC: 32–2048 (x2)

● The performance is better for a linear classifier trained on nonlinear components than on linear components

● The performance improves over linear PCA as the number of components is increased

Fig.: the result of the character recognition experiment
De-noising
Dataset: the de-noising eleven Gaussians
Training set: 100
Classifier: Gaussian kernel with an sd parameter
PC: 2

De-noising of the nonlinear features of the distribution.

Fig.: the result of the de-noising experiment
Kernels
The choice of kernel regulates the accuracy of the algorithm and depends on the application. The Mercer kernels used to build the Gram matrix are (sketched in code after this slide):

● Polynomial: k(x, y) = (x · y)^d
● Radial basis function: k(x, y) = exp(−||x − y||² / (2σ²))
● Sigmoid: k(x, y) = tanh(κ(x · y) + Θ)

Experiments

Radial basis function
Dataset: three Gaussians, sd 0.1
Kernel: k(x, y) = exp(−||x − y||² / 0.1)
PC: 1–8

Sigmoid
Dataset: three Gaussians, sd 0.1
Kernel: sigmoid
PC: 1–3
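A sketch of the three kernels as Python functions (parameter names and default values are illustrative, not taken from the experiments):

```python
import numpy as np

# The three Mercer kernels discussed above
poly    = lambda x, y, d=4:                  np.dot(x, y) ** d
rbf     = lambda x, y, sigma=0.2:            np.exp(-np.sum((x - y)**2) / (2 * sigma**2))
sigmoid = lambda x, y, kappa=1.0, theta=0.0: np.tanh(kappa * np.dot(x, y) + theta)
```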
Results
RBF (PC 1–8):
● PC 1–2 separate the 3 clusters
● PC 3–5 halve the clusters
● PC 6–8 split them orthogonally
● The clusters are split into 12 regions

Sigmoid (PC 1–3):
● PC 1–2 separate the 3 clusters
● PC 3 halves the 3 clusters
● The same number of PCs separates the clusters; the sigmoid needs fewer PCs to halve them
Results

                     Experiment 1      Experiment 2    Experiment 3    Experiment 4

 1 Accuracy
   Kernel            Polynomial 4      Polynomial 4    Gaussian 0.2    Sigmoid
   Components        8 (split to 12)   512             2               3 (split to 6)
   Accuracy                            4.4

 2 Time

 3 Space

 4 Interpretability  Very good         Very good       Complicated     Very good
Discussions: KDA
Kernel Fisher Discriminant (KDA)

Sebastian Mika, Gunnar Rätsch, Jason Weston, Bernhard Schölkopf, Klaus-Robert Müller

● Best discriminant projection

http://lh3.ggpht.com/_qIDcOEX659I/S14l1wmtv6I/AAAAAAAAAxE/3G9kOsTt0VM/s1600-h/kda62.png
Discussions
Doing PCA in F rather than in Rd

● The first k principal components carry more variance than any other k directions

● The mean squared approximation error of the first k principal components is minimal

● The principal components are uncorrelated
Discussions
Going into a higher-dimensional space to obtain a lower-dimensional representation

● Pick the right high-dimensional space

Need for a proper kernel

● What kernel to use?
  ○ Gaussian, sigmoid, polynomial
● Problem-dependent
Discussions
Time complexity

● A lot of features (a lot of dimensions)
● KPCA still works!
  ○ Subspace of F (only the observed x's)
  ○ No explicit dot product calculation
● Computational complexity is hardly changed by the fact that we need to evaluate the kernel function rather than just dot products
  ○ (if the kernel is easy to compute)
  ○ e.g., polynomial kernels (see the sketch below)

Payback: a linear classifier can then be used.
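To see why explicit feature maps are hopeless while kernels stay cheap, compare the dimension of the degree-p monomial feature space, C(d + p − 1, p), with the O(d) cost of one kernel evaluation (the 16x16 / degree-5 figures below match the paper's USPS setting):

```python
from math import comb

# Dimension of the explicit feature space for a degree-p polynomial kernel
# on d-dimensional inputs: all monomials of degree p, i.e. C(d + p - 1, p)
d, p = 256, 5                 # 16x16 pixel images, polynomial degree 5
print(comb(d + p - 1, p))     # ~9.5e9 coordinates to compute explicitly
# versus one kernel evaluation: (x . y)^p costs O(d) per pair of points
```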
Discussions
Pre-image reconstruction may be impossible.

Approximation can be done in F; an explicit Φ is needed. Approaches:

● Regression learning problem
● Non-linear optimization problem
● Algebraic solution (rarely)
Discussions
Interpretability

● Cross-feature features
  ○ dependent on the kernel

● Reduced-space features
  ○ preserve the highest variance among the data in F
Conclusions
Applications

●   Feature Extraction (Classification)

●   Clustering

●   Denoising

●   Novelty detection

●   Dimensionality Reduction (Compression)
References
[1] J.T. Kwok and I.W. Tsang, "The Pre-Image Problem in Kernel Methods," IEEE Trans. Neural Networks, vol. 15, no. 6, pp. 1517-1525, 2004.
[2] Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313, 504-507.
[3] Sebastian Mika, Gunnar Rätsch, Jason Weston, Bernhard Schölkopf, Klaus-Robert Müller.
[4] Trevor Hastie; Werner Stuetzle, "Principal Curves," Journal of the American Statistical Association, Vol. 84, No. 406. (Jun. 1989), pp. 502-516.
[5] G. Moser, "Analisi delle componenti principali" (Principal component analysis), Tecniche di trasformazione di spazi vettoriali per analisi statistica multi-dimensionale.
[6] I.T. Jolliffe, "Principal component analysis", Springer-Verlag, 2002.
[7] Wikipedia, "Kernel Principal Component Analysis", 2011.
[8] A. Ghodsi, "Data visualization", 2006.
[9] B. Scholkopf, S. Mika, A. Smola, G. Ratsch, and K.R. Muller, "Kernel PCA pattern reconstruction via approximate pre-images". In Proceedings of the 8th International Conference on Artificial Neural Networks, pages 147-152, 1998.
References


[10] J.T. Kwok, I.W. Tsang, "The pre-image problem in kernel methods", Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), 2003.
[11] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf, "An Introduction to Kernel-Based Learning Algorithms", IEEE Transactions on Neural Networks, vol. 12, no. 2, March 2001.
[12] S. Mika, B. Schölkopf, A. Smola, K.-R. Müller, M. Scholz, G. Rätsch, "Kernel PCA and De-Noising in Feature Spaces".
Thank you
