Machine Learning Lecture
1. Machine Learning
Roughly speaking, for a given learning task, with a given finite amount of training data, the
best generalization performance will be achieved if the right balance is struck between the
accuracy attained on that particular training set, and the “capacity” of the machine, that is, the
ability of the machine to learn any training set without error. A machine with too much capacity
is like a botanist with a photographic memory who, when presented with a new tree,
concludes that it is not a tree because it has a different number of leaves from anything she
has seen before; a machine with too little capacity is like the botanist’s lazy brother, who
declares that if it’s green, it’s a tree. Neither can generalize well. The exploration and
formalization of these concepts has resulted in one of the shining peaks of the theory of
statistical learning.
(Vapnik, 1979)
2. What is machine learning?
[Diagram: Data (examples) → Model (training) → Output: predictions, classifications, clusters, ordinals]
Why: face recognition?
3. Categories of problems
By output:
• Clustering
• Classification
• Regression
• Ordinal regression
• Prediction
By input:
• Vector, x
• Time series, x(t)
4. One size never fits all…
• Improving an algorithm:
– First option: better features
• Visualize classes
• Trends
• Histograms (e.g., in WEKA or GGobi; see the sketch after this list)
– Next: make the algorithm smarter (more complicated)
• Interaction of features
• Better objective and training criteria
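Beyond point-and-click tools like WEKA or GGobi, a quick per-class histogram is easy to script. A minimal sketch, assuming matplotlib and NumPy; the two class distributions are synthetic stand-ins for real features:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for one feature measured on two classes
rng = np.random.default_rng(0)
feature_a = rng.normal(0.0, 1.0, 500)   # class A
feature_b = rng.normal(2.0, 1.5, 500)   # class B

# Overlaid per-class histograms: a good feature shows separated modes
plt.hist(feature_a, bins=30, alpha=0.5, label="class A")
plt.hist(feature_b, bins=30, alpha=0.5, label="class B")
plt.xlabel("feature value")
plt.ylabel("count")
plt.legend()
plt.show()
```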
5. Categories of ML algorithms
By training:
By training:
• Supervised (labeled)
• Unsupervised (unlabeled)
By model:
• Non-parametric (raw data only)
• Kernel methods
• Parametric (model parameters only; see the fitting sketch below)
[Figure: the same data fit three ways (output vs. input), from non-parametric (raw data only) through kernel methods to a fully parametric model, y = 1 + 0.5t + 4t² - t³]
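To make the parametric end of this spectrum concrete, here is a minimal sketch, assuming NumPy, that samples the cubic from the figure and then fits it; after fitting, only the four coefficients need to be kept, not the raw data (the noise level and sample grid are invented for illustration):

```python
import numpy as np

# Sample the figure's parametric model y = 1 + 0.5t + 4t^2 - t^3 with noise
rng = np.random.default_rng(0)
t = np.linspace(-4, 6, 50)
y = 1 + 0.5 * t + 4 * t**2 - t**3 + rng.normal(0, 5, t.size)

# A parametric fit stores only the model parameters, not the training data
coeffs = np.polyfit(t, y, deg=3)     # highest-order coefficient first
print(coeffs)                        # approximately [-1, 4, 0.5, 1]
y_hat = np.polyval(coeffs, t)        # predictions from the parameters alone
```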
7. Training a ML algorithm
• Choose data
• Optimize model parameters according to:
– Objective function (examples below)
[Figure: two example objectives: a regression fit minimizing mean squared error (left) and a max-margin classifier separating two classes (right)]
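Both objectives are short enough to write out directly. A minimal sketch, assuming NumPy; the function names and toy inputs are ours, for illustration:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Regression objective: average squared residual."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

def hinge_loss(y_true, scores):
    """Max-margin surrogate (as in SVMs): penalize margins below 1.
    y_true holds labels in {-1, +1}; scores are real-valued outputs."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

print(mean_squared_error([1.0, 2.0, 3.0], [3.0, 4.0, 5.0]))  # 4.0
print(hinge_loss([1, -1], [0.5, -2.0]))                      # 0.25
```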
8. Pitfalls of ML algorithms
• Clean your features:
– Training volume: more is better
– Outliers: remove them!
– Dynamic range: normalize it! (see the sketch after this list)
• Generalization
– Overfitting
– Underfitting
• Speed: parametric vs. non-parametric
• What are you learning? …features, features, features…
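A minimal sketch of the dynamic-range fix, assuming NumPy: z-score normalization rescales every feature column to zero mean and unit variance so that no single feature dominates by sheer scale (the helper name zscore is ours):

```python
import numpy as np

def zscore(X):
    """Rescale each feature column to zero mean and unit variance."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0              # guard against constant features
    return (X - mu) / sigma

# Second feature is 100x the first; after zscore both are comparable
X = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
print(zscore(X))                         # each column: mean 0, std 1
```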
14. K-Means clustering
• Planar decision boundaries, depending on the space you are in…
• Highly efficient
• Not always great (but usually pretty good)
• Needs good starting criteria (see the sketch below)
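A minimal k-means sketch, assuming scikit-learn is available; the two synthetic blobs are invented for illustration, and the repeated restarts (n_init) are one standard answer to the initialization caveat above:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two synthetic 2-D blobs standing in for real features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# n_init restarts mitigate sensitivity to the starting centroids
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)     # roughly (0, 0) and (5, 5)
labels = km.predict(X)         # planar (Voronoi) decision regions
```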
15. K-Nearest Neighbor
• Arbitrary decision boundaries
• Not so efficient…
• With enough data in each class, approaches the optimal (Bayes) error rate
• Easy to train; known as a "lazy" classifier (see the sketch below)
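A minimal k-nearest-neighbor sketch, assuming scikit-learn; make_moons supplies a toy problem that is not linearly separable. Note that fit() merely stores the training set, which is exactly why kNN is called a lazy classifier:

```python
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier

# Toy non-linearly-separable data
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# "Lazy": fit() stores the data; all the work happens at predict time
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.score(X, y))            # training accuracy
print(knn.predict([[0.5, 0.0]]))  # majority vote of the 5 nearest points
```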
16. Mixture of Gaussians
• Arbitrary decision boundaries, given enough Gaussians
• Efficient, depending on the number of models and Gaussians
• Can represent more than just Gaussian distributions
• Generative; sometimes tough to train up
• Spurious singularities
• Can give a distribution for a specific class and feature(s)… and thus a Bayesian classifier (see the sketch below)
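One way to realize that last bullet: fit one mixture per class, then classify by comparing class-conditional log-likelihoods (Bayes' rule with equal priors). A minimal sketch, assuming scikit-learn; the two synthetic classes are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic two-class data
rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, (200, 2))   # class 0
X1 = rng.normal(3, 1, (200, 2))   # class 1

# One generative model per class
gmm0 = GaussianMixture(n_components=2, random_state=0).fit(X0)
gmm1 = GaussianMixture(n_components=2, random_state=0).fit(X1)

# Classify a point by the larger class-conditional log-likelihood
x = np.array([[2.5, 2.5]])
pred = int(gmm1.score_samples(x)[0] > gmm0.score_samples(x)[0])
print(pred)   # 1: the point is more probable under class 1's density
```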
17. Components Analysis
(principal or independent)
• Reduces dimensionality
• All other classifiers then work in a rotated space (see the sketch below)
• Remember eigenvalues and eigenvectors?
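A minimal principal components sketch, assuming scikit-learn; the 10-dimensional Gaussian features are invented for illustration. The explained-variance ratios are the normalized eigenvalues, and transform() yields the rotated space that downstream classifiers would work in:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))        # synthetic 10-D features

# Keep the top-2 directions of variance (covariance eigenvectors)
pca = PCA(n_components=2).fit(X)
X_rot = pca.transform(X)              # the rotated, reduced space
print(pca.explained_variance_ratio_)  # eigenvalue shares, largest first
```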
18. Tree Classifiers
• Arbitrary decision boundaries
• Can be quite efficient (or not!)
• Needs good criteria for splitting
• Easy to visualize (see the sketch below)
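A minimal decision-tree sketch, assuming scikit-learn and its bundled iris data; the split criterion and depth limit here are illustrative choices. Printing the tree shows how directly it can be visualized as if/then rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# The splitting criterion is the key knob ("gini" here; "entropy" is common)
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
tree.fit(X, y)

print(export_text(tree))   # the learned rules, one split per line
```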
19. Multi-Layer Perceptron
• Arbitrary (piecewise-linear) decision boundaries
• Can be quite efficient (or not!)
• What did it learn? (see the sketch below)
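A minimal multi-layer perceptron sketch, assuming scikit-learn; the single 16-unit hidden layer is an illustrative choice. Inspecting the learned weight matrices makes the "what did it learn?" caveat concrete, since they carry no obvious meaning:

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, y)

print(mlp.score(X, y))       # training accuracy
print(mlp.coefs_[0].shape)   # (2, 16) input-to-hidden weights: opaque numbers
```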
20. Support Vector Machines
• Arbitrary decision boundaries
• Efficiency depends on the number of support vectors and the feature dimensionality (see the sketch below)
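A minimal support vector machine sketch, assuming scikit-learn; the RBF kernel and C value are illustrative choices. n_support_ reports how many support vectors were retained, which is what drives prediction cost:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# An RBF kernel yields arbitrary (non-linear) decision boundaries
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(svm.n_support_)   # support vectors kept per class
print(svm.score(X, y))  # training accuracy
```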
21. Hidden Markov Models
• Arbitrary decision boundaries
• Efficiency depends on the state space and the number of models
• Generalizes to incorporate features that change over time (see the sketch below)
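A minimal sketch of the HMM forward algorithm in plain NumPy, which scores a sequence of observations that change over time; the 2-state, 2-symbol model is invented for illustration and the function name forward_loglik is ours:

```python
import numpy as np

def forward_loglik(A, B, pi, obs):
    """Log-likelihood of a symbol sequence under a discrete HMM.
    A: (S, S) transition matrix; B: (S, V) emission matrix;
    pi: (S,) initial state distribution; obs: list of symbol indices."""
    alpha = pi * B[:, obs[0]]            # P(state, first symbol)
    loglik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate states, weight by emission
        s = alpha.sum()
        loglik += np.log(s)              # accumulate in log space
        alpha /= s                       # rescale to avoid underflow
    return loglik + np.log(alpha.sum())

# Tiny 2-state, 2-symbol example
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
pi = np.array([0.5, 0.5])
print(forward_loglik(A, B, pi, [0, 0, 1, 1]))
```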
22. More sophisticated approaches
• Graphical models (like an HMM)
– Bayesian network
– Markov random fields
• Boosting
– AdaBoost (see the sketch after this list)
• Voting
• Cascading
• Stacking…
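A minimal boosting sketch, assuming scikit-learn; AdaBoostClassifier's default weak learner is a depth-1 decision stump, and each round reweights the training examples so the next stump concentrates on the previous rounds' mistakes. The toy data are invented for illustration:

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import AdaBoostClassifier

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

# 100 reweighted stumps combine into one strong classifier
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
print(ada.score(X, y))   # training accuracy
```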