Machine Learning Extra : 1 – BMVA Summer School 2014
The bits the whirlwind tour left out ...
BMVA Summer School 2014 – extra background slides
Machine Learning Extra : 2 – BMVA Summer School 2014
Machine Learning
Definition:
– “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”
[Mitchell, 1997]
Machine Learning Extra : 3 – BMVA Summer School 2014
Algorithm to construct decision trees ….
Machine Learning Extra : 4 – BMVA Summer School 2014
Building Decision Trees – ID3
• node = root of tree
• Main loop:
A = “best” decision attribute for next node
.....
But which attribute is best to split on ?
Machine Learning Extra : 5 – BMVA Summer School 2014
Entropy in machine learning
Entropy : a measure of impurity
– S is a sample of training examples
– p⊕ is the proportion of positive examples in S
– p⊖ is the proportion of negative examples in S
Entropy measures the impurity of S:
Entropy(S) = − p⊕ log₂ p⊕ − p⊖ log₂ p⊖
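As an illustrative sketch (my own code, not from the slides; the function name and the 9⊕ / 5⊖ sample are assumptions), the two-class entropy above can be computed directly:

    import math

    def entropy_binary(pos, neg):
        """Entropy of a sample with pos positive and neg negative examples.
        0.0 for a pure sample, 1.0 bit for a 50/50 split."""
        total = pos + neg
        result = 0.0
        for count in (pos, neg):
            p = count / total
            if p > 0:                  # 0 * log2(0) is taken as 0
                result -= p * math.log2(p)
        return result

    print(entropy_binary(9, 5))   # ~0.940 bits for a 9+/5- textbook-style sample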
Machine Learning Extra : 6 – BMVA Summer School 2014
Information Gain – reduction in Entropy
Gain(S,A) = expected reduction in entropy due to splitting
on attribute A
– i.e. expected reduction in impurity in the data
– (improvement in consistent data sorting)
Machine Learning Extra : 7 – BMVA Summer School 2014
Information Gain – reduction in Entropy
– reduction in entropy in set of examples S if split on attribute A
– S_v = subset of S for which attribute A has value v
– Gain(S,A) = original entropy – SUM(weighted entropy of sub-nodes if split on A):
Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)
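A minimal sketch of this computation (my own illustrative code; examples are assumed to be (attribute-dict, label) pairs, and entropy() generalises the binary function above to any number of classes):

    import math
    from collections import Counter

    def entropy(labels):
        """Entropy of any list of class labels (c classes)."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(examples, attr):
        """Gain(S, A): entropy(S) minus the |Sv|/|S|-weighted entropy of each Sv."""
        labels = [y for _, y in examples]
        subsets = {}
        for x, y in examples:
            subsets.setdefault(x[attr], []).append(y)   # Sv, grouped by value v of A
        remainder = sum(len(sv) / len(labels) * entropy(sv)
                        for sv in subsets.values())
        return entropy(labels) - remainder

    def best_attribute(examples, attributes):
        """The ID3 choice: split on the attribute with the highest gain."""
        return max(attributes, key=lambda a: information_gain(examples, a))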
Machine Learning Extra : 8 – BMVA Summer School 2014
Information Gain – reduction in Entropy
Information Gain :
– “information provided about the target function given the value of some attribute A”
– How well does A sort the data into the required classes?
Generalise to c classes :
– (not just ⊕ or ⊖)
Entropy(S) = − Σ_{i=1}^{c} p_i log₂ p_i
Machine Learning Extra : 9 – BMVA Summer School 2014
Building Decision Trees
• Selecting the Next Attribute
– which attribute should we split on next?
Machine Learning Extra : 10 – BMVA Summer School 2014
Building Decision Trees
• Selecting the Next Attribute
– which attribute should we split on next?
Machine Learning Extra : 11 – BMVA Summer School 2014
Backpropagation Algorithm ….
Machine Learning Extra : 12 – BMVA Summer School 2014
Backpropagation Algorithm
Assume we have:
– input examples d = {1 ... D}
• each is a pair {x_d, t_d} = {input vector, target vector}
– node index n = {1 … N}
– weight w_ji connects node j → i
– input x_ji is the input on the connection node j → i
• corresponding weight = w_ji
– output error for node n is δ_n
• similar to (o – t)
[Figure: feed-forward network – input layer (input x), hidden layer, output layer, output vector O_k; node index {1 … N}]
Machine Learning Extra : 13 – BMVA Summer School 2014
Backpropagation Algorithm
(1) Input example d; propagate it forward through the network
(2) Output layer error, based on: the difference between output and target (t − o) and the derivative of the sigmoid function:
δ_k = o_k (1 − o_k) (t_k − o_k)
(3) Hidden layer error, proportional to the node's contribution to the output error:
δ_h = o_h (1 − o_h) Σ_k w_hk δ_k
(4) Update weights w_ji :
w_ji ← w_ji + η δ_i x_ji
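The four steps can be sketched in numpy as follows (my own illustrative code, not the slides' implementation; the learning rate eta, the layer sizes and the toy input are assumptions):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop_step(x, t, W_hidden, W_out, eta=0.1):
        """One stochastic-gradient update for a 1-hidden-layer sigmoid network.
        W[i, j] holds the weight on the connection j -> i (row = receiving node)."""
        # (1) input example d: forward pass
        o_h = sigmoid(W_hidden @ x)             # hidden-layer outputs
        o_k = sigmoid(W_out @ o_h)              # network outputs
        # (2) output layer error: (t - o) times the sigmoid derivative o(1 - o)
        delta_k = o_k * (1.0 - o_k) * (t - o_k)
        # (3) hidden layer error: each node's share of the output error
        delta_h = o_h * (1.0 - o_h) * (W_out.T @ delta_k)
        # (4) update weights: w_ji <- w_ji + eta * delta_i * x_ji
        W_out += eta * np.outer(delta_k, o_h)
        W_hidden += eta * np.outer(delta_h, x)
        return 0.5 * np.sum((t - o_k) ** 2)     # squared error, for monitoring

    # usage sketch: 3 inputs -> 4 hidden nodes -> 2 outputs
    rng = np.random.default_rng(0)
    W_h, W_o = rng.normal(0, 0.1, (4, 3)), rng.normal(0, 0.1, (2, 4))
    err = backprop_step(np.array([0.5, -1.0, 0.2]), np.array([1.0, 0.0]), W_h, W_o)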
Machine Learning Extra : 14 – BMVA Summer School 2014
Backpropagation
Termination criteria
– number of iterations reached
– or error below a suitable bound
[Figure: algorithm summary – output layer error, hidden layer error; all weights updated using the relevant error]
Machine Learning Extra : 15 – BMVA Summer School 2014
Backpropagation
[Figure: network diagram – input layer (input x), hidden layer (unit h), output layer (unit k), output vector O_k]
Machine Learning Extra : 16 – BMVA Summer School 2014
Backpropagation
δ_h is expressed as a weighted sum of the output layer errors δ_k to which it contributes (i.e. w_hk > 0)
[Figure: network diagram – input layer (input x), hidden layer (unit h), output layer (unit k), output vector O_k]
Machine Learning Extra : 17 – BMVA Summer School 2014
Backpropagation
Error is propagated backwards from the network output ....
to the weights of the output layer ....
to the weights of the hidden layer …
Hence the name: backpropagation
[Figure: network diagram – input layer (input x), hidden layer (unit h), output layer (unit k), output vector O_k]
Machine Learning Extra : 18 – BMVA Summer School 2014
Backpropagation
Repeat these stages for every hidden layer in a multi-layer network:
(using the error δ_i where x_ji > 0)
.......
[Figure: network diagram – input layer (input x), hidden layer(s) (unit h), output layer (unit k), output vector O_k]
Machine Learning Extra : 19 – BMVA Summer School 2014
Backpropagation
Error is propagated backwards from the network output ....
to the weights of the output layer ....
over the weights of all N hidden layers …
Hence the name: backpropagation
.......
[Figure: network diagram – input layer (input x), hidden layer(s) (unit h), output layer (unit k), output vector O_k]
Machine Learning Extra : 20 – BMVA Summer School 2014
Backpropagation
Will perform gradient descent over the weight space of {w_ji} for all connections j → i in the network
Stochastic gradient descent
– as updates are based on training one sample at a time
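Put together with the termination criteria from earlier, a training-loop sketch (reusing the backprop_step sketch above; the toy data, iteration cap and error bound are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(8, 3))                # 8 made-up input vectors
    T = rng.uniform(size=(8, 2))               # matching made-up target vectors
    W_h = rng.normal(0, 0.1, (4, 3))
    W_o = rng.normal(0, 0.1, (2, 4))

    for epoch in range(10_000):                # termination: iteration cap ...
        # stochastic: weights updated after every single example
        err = sum(backprop_step(x, t, W_h, W_o) for x, t in zip(X, T))
        if err < 1e-3:                         # ... or error below a suitable bound
            break
    # (true batch gradient descent would instead accumulate the gradient
    #  over all D examples and apply one combined update per epoch)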
Machine Learning Extra : 21 – BMVA Summer School 2014
Understanding (and believing) the SVM stuff ….
Machine Learning Extra : 22 – BMVA Summer School 2014
Remedial Note: equations of 2D lines
Line: w · x + b = 0
where: w and x are 2D vectors
– b sets the offset from the origin
– w is the normal to the line
2D LINES REMINDER
Machine Learning Extra : 23 – BMVA Summer School 2014
Remedial Note: equations of 2D lines
http://www.mathopenref.com/coordpointdisttrig.html
2D LINES REMINDER
Machine Learning Extra : 24 – BMVA Summer School 2014
Remedial Note: equations of 2D lines
For a defined line equation: w · x + b = 0 (w and b fixed)
Insert a point p into the equation …...
– the result is +ve if the point is on the side of the line the normal w points to (> 0)
– the result is -ve if the point is on the other side of the line (< 0)
– the result is the signed distance (+ve or -ve) of the point from the line, given by (w · p + b) / |w|; for |w| = 1 it is exactly w · p + b
2D LINES REMINDER
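A sketch of this signed-distance computation (my own code; the example line y = 1 is made up):

    import numpy as np

    def signed_distance(w, b, p):
        """Signed distance of point p from the line w . x + b = 0:
        positive on the side the normal w points to, negative on the other."""
        w, p = np.asarray(w, float), np.asarray(p, float)
        return (w @ p + b) / np.linalg.norm(w)

    # the horizontal line y = 1 (normal (0, 1), offset b = -1):
    print(signed_distance([0, 1], -1, [3, 4]))    #  3.0 (above the line)
    print(signed_distance([0, 1], -1, [3, -2]))   # -3.0 (below the line)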
Machine Learning Extra : 25 – BMVA Summer School 2014
Linear Separator
• Instances (i.e. examples) {x_i, y_i}
– x_i = point in instance space (Rⁿ) made up of n attributes
– y_i = class value for classification of x_i
– classification of example function f(x) = y = {+1, -1}, i.e. 2 classes
Want a linear separator. Can view this as a constraint satisfaction problem:
w · x_i + b ≥ +1 for y_i = +1
w · x_i + b ≤ −1 for y_i = −1
Equivalently: y_i (w · x_i + b) ≥ 1 for all i
N.B. we have a vector of weight coefficients w
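These constraints are easy to check numerically for a candidate (w, b); a sketch with made-up 2D data:

    import numpy as np

    # made-up data: class +1 above the line x2 = x1, class -1 below it
    X = np.array([[0.0, 2.0], [1.0, 3.5], [2.0, 0.0], [3.0, 1.0]])
    y = np.array([+1, +1, -1, -1])
    w, b = np.array([-1.0, 1.0]), 0.0      # candidate separator w . x + b = 0

    margins = y * (X @ w + b)              # y_i (w . x_i + b), one per example
    print(margins)                         # [2. 2.5 2. 2.]
    print(np.all(margins >= 1.0))          # True -> constraints satisfied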
Machine Learning Extra : 26 – BMVA Summer School 2014
Linear Separator
If we define the distance of the nearest point to the margin as 1
→ the width of the margin is 2 / |w| (i.e. equal width each side)
We thus want to maximize 2 / |w|, finding the parameters {w, b}
[Figure: maximum-margin separator between the classes y = +1 and y = -1; classification of example function f(x) = y = {+1, -1}, i.e. 2 classes]
Machine Learning Extra : 27 – BMVA Summer School 2014
which is equivalent to minimizing: |w|² / 2, subject to the constraints y_i (w · x_i + b) ≥ 1
Machine Learning Extra : 28 – BMVA Summer School 2014
…............. back to main slides
Machine Learning Extra : 29 – BMVA Summer School 2014
So ….
Find the “hyperplane” (i.e. boundary) with:
a) maximum margin
b) minimum number of (training) examples on the
wrong side of the chosen boundary
(i.e. minimal penalties due to C)
Solve via optimization (in polynomial
time/complexity)
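In practice this optimization is usually delegated to a library. A sketch using scikit-learn's SVC (a library choice assumed here, not named in the slides) with a linear kernel, where C sets the penalty for examples on the wrong side:

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0.0, 2.0], [1.0, 3.5], [2.0, 0.0], [3.0, 1.0]])  # toy data
    y = np.array([+1, +1, -1, -1])

    clf = SVC(kernel="linear", C=1.0)    # larger C = heavier penalty on violations
    clf.fit(X, y)
    print(clf.coef_, clf.intercept_)     # the learned w and b of the hyperplane
    print(clf.predict([[0.5, 3.0]]))     # -> [1]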
Machine Learning Extra : 30 – BMVA Summer School 2014
Example: non-linear separation (red / blue data items on a 2D plane)
– kernel projection to a higher dimensional space
– find the hyperplane separator (a plane in 3D) via optimization
– the non-linear boundary in the original dimension (e.g. a circle in 2D) is defined by the planar boundary (cut) in 3D
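The same interface handles this kernel case; a sketch with an RBF kernel on made-up circular 2D data, where the higher-dimensional projection happens implicitly inside the kernel:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 2))
    y = np.where(np.hypot(X[:, 0], X[:, 1]) < 1.0, 1, -1)  # circle in 2D

    clf = SVC(kernel="rbf", C=1.0).fit(X, y)   # kernel projection is implicit
    print(clf.score(X, y))                     # training accuracy, typically ~1.0
    print(clf.predict([[0.1, 0.1], [1.9, 1.9]]))  # inside -> 1, outside -> -1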
Machine Learning Extra : 31 – BMVA Summer School 2014
.... but it is all about the data!
Machine Learning Extra : 32 – BMVA Summer School 2014
Desirable Data Properties
Machine learning is a Data Driven Approach
The Data is important!
Ideally, the training/testing data used for learning should be:
– Unbiased
• towards any given subset of the space of examples ...
– Representative
• of the “real-world” data to be encountered in use/deployment
– Accurate
• inaccuracies in training/testing data produce inaccurate results
– Available
• the more training/testing data available the better the results
• greater confidence in the results can be achieved
Machine Learning Extra : 33 – BMVA Summer School 2014
Data Training Methodologies
Simple approach : Data Splits
– split overall data set into separate training and test sets
• No established rule, but 80%:20%, 70%:30% or ⅔:⅓ training-to-testing splits are common
– Training on one, test on the other
– Test error = error on the test set
– Training error = error on training set
– Weakness: susceptible to bias in data sets or “over-fitting”
• Also less data available for training
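A sketch of such a split with scikit-learn (the toy circular data is an assumption carried over from the earlier sketches; the 80%:20% ratio follows the slide):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 2))
    y = np.where(np.hypot(X[:, 0], X[:, 1]) < 1.0, 1, -1)

    # hold out 20% of the data for testing
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("training error:", 1 - clf.score(X_tr, y_tr))
    print("test error:    ", 1 - clf.score(X_te, y_te))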
Machine Learning Extra : 34 – BMVA Summer School 2014
Data Training Methodologies
More advanced (and robust): k-fold Cross-Validation
– randomly split (all) the data into k subsets
– for i = 1 to k
• train using all the data not in the i-th subset
• test the resulting learned [classifier|function …] using the i-th subset
– report the mean error over all k tests
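A k-fold cross-validation sketch with scikit-learn (k = 5; the data and model choices are assumptions carried over from the sketches above):

    import numpy as np
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 2))
    y = np.where(np.hypot(X[:, 0], X[:, 1]) < 1.0, 1, -1)

    # randomly split the data into k = 5 subsets; train on 4, test on the 5th
    folds = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=folds)
    print("mean error over all k tests:", 1 - scores.mean())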
Machine Learning Extra : 35 – BMVA Summer School 2014
Key Summary Statistics #1
tp = true positive / tn = true negative
fp = false positive / fn = false negative
Often quoted or plotted when comparing ML techniques
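The slide's formulas are not recoverable from this transcript, but the summary statistics usually built from these four counts can be sketched as follows (standard definitions; the example counts are made up):

    def summary_stats(tp, tn, fp, fn):
        """Standard summary statistics from the four confusion-matrix counts."""
        return {
            "accuracy":  (tp + tn) / (tp + tn + fp + fn),
            "precision": tp / (tp + fp),   # of predicted positives, how many real
            "recall":    tp / (tp + fn),   # of real positives, how many found (TPR)
            "fpr":       fp / (fp + tn),   # false positive rate (ROC x-axis)
        }

    print(summary_stats(tp=40, tn=45, fp=5, fn=10))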
Machine Learning Extra : 36 – BMVA Summer School 2014
Kappa Statistic
Measure of classification of “N items into C mutually exclusive categories”
κ = (Pr(a) − Pr(e)) / (1 − Pr(e))
Pr(a) = probability of success of classification ( = accuracy)
Pr(e) = probability of success due to chance
– e.g. 2 categories = 50% (0.5), 3 categories = 33% (0.33) ….. etc.
– Pr(e) can be replaced with Pr(b) to measure agreement between classifiers/techniques a and b
[Cohen, 1960]
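A one-line sketch of the statistic (my own code; the 2-category example values follow the slide):

    def kappa(pr_a, pr_e):
        """Cohen's kappa: chance-corrected agreement [Cohen, 1960]."""
        return (pr_a - pr_e) / (1.0 - pr_e)

    # 80% accuracy on a 2-class problem, where chance alone gives Pr(e) = 0.5:
    print(kappa(0.80, 0.5))    # 0.6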