Backpropagation Networks
Introduction to Backpropagation
- In 1969, a method for learning in multi-layer networks, Backpropagation, was invented by Bryson and Ho.
- The Backpropagation algorithm is a sensible approach for apportioning the error contribution of each weight.
- Otherwise it works essentially the same way as perceptron learning.
Backpropagation Learning Principles:
Hidden Layers and Gradients
There are two differences in the updating rule:
1) The activation of the hidden unit is used instead of the activation of the input value.
2) The rule contains a term for the gradient of the activation function.
Backpropagation Network training
• 1. Initialize the network with random weights
• 2. For all training cases (called examples):
– a. Present the training inputs to the network and calculate the output
– b. For all layers (starting with the output layer, back to the input layer):
• i. Compare the network output with the correct output (error function)
• ii. Adapt the weights in the current layer
Backpropagation Learning Details
• Method for learning weights in feed-forward (FF) nets
• Can’t use the Perceptron Learning Rule
– no teacher values are available for hidden units
• Use gradient descent to minimize the error
– propagate deltas to adjust for errors, backward from the outputs to the hidden layers to the inputs
Backpropagation Algorithm – Main Idea – Error in Hidden Layers
The ideas of the algorithm can be summarized as follows:
1. Compute the error term for the output units using the observed error.
2. From the output layer, repeat
- propagating the error term back to the previous layer, and
- updating the weights between the two layers,
until the earliest hidden layer is reached.
Backpropagation Algorithm
• Initialize weights (typically random!)
• Keep doing epochs
– For each example e in the training set do
• forward pass to compute
– O = neural-net-output(network, e)
– miss = (T - O) at each output unit
• backward pass to calculate deltas to weights
• update all weights
– end
• until tuning set error stops improving
(The forward pass was explained earlier; the backward pass is explained on the next slide.)
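A minimal sketch of this epoch loop in Python. The helpers forward_pass, backward_pass, apply_deltas, and tuning_set_error are hypothetical names, not from the original slides:

```python
# Sketch of the epoch loop above; examples are (input, target) pairs.
def train(network, training_set, tuning_set, alpha=0.1):
    best_error = float("inf")
    while True:                                    # keep doing epochs
        for inputs, T in training_set:             # for each example e
            O = forward_pass(network, inputs)      # O = neural-net-output(network, e)
            miss = T - O                           # miss = (T - O) at each output unit
            deltas = backward_pass(network, miss)  # deltas for every weight
            apply_deltas(network, deltas, alpha)   # update all weights
        error = tuning_set_error(network, tuning_set)
        if error >= best_error:                    # until tuning-set error stops improving
            return network
        best_error = error
```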
Backward Pass
• Compute deltas to weights
– from hidden layer
– to output layer
• Without changing any weights (yet),
compute the actual contributions
– within the hidden layer(s)
– and compute deltas
Gradient Descent
• Think of the N weights as a point in an N-
dimensional space
• Add a dimension for the observed error
• Try to minimize your position on the “error
surface”
Error Surface
(Figure: the error as a function of the weights in a multidimensional space; the vertical axis is the error, the other axes are the weights.)
Gradient
• Trying to make the error decrease the fastest
• Compute: GradE = [dE/dw1, dE/dw2, ..., dE/dwn]
• Change the i-th weight by: deltaw_i = -alpha * dE/dw_i
• We need a derivative!
• The activation function must be continuous, differentiable, non-decreasing, and easy to compute
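A small sketch of this update rule. The finite-difference gradient estimate is our assumption for illustration; the slides do not specify how GradE is obtained:

```python
import numpy as np

def gradient_descent_step(w, error_fn, alpha=0.1, eps=1e-5):
    """One step of deltaw_i = -alpha * dE/dw_i for a weight vector w."""
    grad = np.zeros_like(w)
    for i in range(len(w)):     # GradE = [dE/dw1, ..., dE/dwn], estimated numerically
        w_plus = w.copy();  w_plus[i] += eps
        w_minus = w.copy(); w_minus[i] -= eps
        grad[i] = (error_fn(w_plus) - error_fn(w_minus)) / (2 * eps)
    return w - alpha * grad     # move downhill on the error surface
```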
Can’t use LTU
• To effectively assign credit / blame to units
in hidden layers, we want to look at the
first derivative of the activation function
• Sigmoid function is easy to differentiate
and easy to compute forward
(Figure: a linear threshold unit vs. the sigmoid function.)
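For concreteness, the sigmoid and its derivative in Python:

```python
import numpy as np

def sigmoid(x):
    # Continuous, differentiable, non-decreasing, and cheap to compute
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    g = sigmoid(x)
    return g * (1.0 - g)   # g'(x) = g(x) * (1 - g(x)), easy to differentiate
```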
Updating hidden-to-output
• We have teacher-supplied desired values
• deltaw_ji = α * a_j * (T_i - O_i) * g’(in_i)
= α * a_j * (T_i - O_i) * O_i * (1 - O_i)
– for the sigmoid the derivative is g’(x) = g(x) * (1 - g(x))
(Here α is the learning rate, (T_i - O_i) is the miss, and g’ is the derivative; the general formula with the derivative comes first, then its sigmoid form.)
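A one-function sketch of this update for a single hidden-to-output weight; the argument names mirror the formula:

```python
# Hidden-to-output update for sigmoid output units.
# a_j: hidden activation; T_i, O_i: desired and actual outputs of unit i.
def delta_w_ji(alpha, a_j, T_i, O_i):
    miss = T_i - O_i                    # teacher-supplied error
    deriv = O_i * (1.0 - O_i)           # g'(in_i) for the sigmoid
    return alpha * a_j * miss * deriv   # deltaw_ji from the slide
```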
Updating interior weights
• Layer k units provide values to all layer k+1 units
• “miss” is the sum of misses from all units in layer k+1:
miss_j = Σ [ a_i (1 - a_i) (T_i - a_i) w_ji ]
• Weights coming into this unit are adjusted based on their contribution:
deltak_j = α * I_k * a_j * (1 - a_j) * miss_j
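A NumPy sketch of these two formulas, following the slide's formulation; the argument names are illustrative:

```python
import numpy as np

# a, T: activations and targets of the layer k+1 units;
# w_j: weights from hidden unit j into each layer k+1 unit;
# I_k: the input from layer k; a_j: the activation of hidden unit j.
def delta_interior(alpha, I_k, a_j, a, T, w_j):
    miss_j = np.sum(a * (1 - a) * (T - a) * w_j)   # sum of misses from layer k+1
    return alpha * I_k * a_j * (1 - a_j) * miss_j
```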
How do we pick α?
1. Tuning set, or
2. Cross validation, or
3. Small for slow, conservative learning
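A sketch of option 1, choosing α with a tuning set. train and tuning_set_error are the hypothetical helpers sketched earlier, and the candidate values are illustrative:

```python
def pick_alpha(make_network, training_set, tuning_set,
               candidates=(0.5, 0.1, 0.01, 0.001)):
    scored = []
    for alpha in candidates:
        net = train(make_network(), training_set, tuning_set, alpha=alpha)
        scored.append((tuning_set_error(net, tuning_set), alpha))
    return min(scored)[1]   # the alpha with the lowest tuning-set error
```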
How many hidden layers?
• Usually just one (i.e., a 2-layer net)
• How many hidden units in the layer?
– Too few ==> can’t learn
– Too many ==> poor generalization
How big a training set?
• Determine your target error rate, e
• Success rate is 1 - e
• Typical training set approx. n/e, where n is the
number of weights in the net
• Example:
– e = 0.1, n = 80 weights
– training set size 800
– trained until 95% correct classification on the training set, this typically produces 90% correct classification on the test set
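The rule of thumb as a one-liner, reproducing the example above:

```python
def training_set_size(n_weights, target_error):
    # Rule of thumb from the slide: roughly n / e examples
    return int(n_weights / target_error)

print(training_set_size(80, 0.1))   # -> 800, as in the example above
```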
Examples of Backpropagation Learning
(Figure: the error decreases with the number of epochs; on the restaurant problem the neural network was worse than the decision tree, and the decision tree remains better for the restaurant example.)
Examples of Backpropagation Learning
(Figure: on the majority example the perceptron does better; on the restaurant example the decision tree does better.)
Backpropagation Learning Math
(Figures: the backpropagation learning math, with explanation on the next slide, and a visualization of backpropagation learning, including the output-layer update.)
Bias Neurons in Backpropagation Learning
(Figure: a bias neuron in the input layer.)
Software for Backpropagation Learning
(Code figure, annotated: training pairs are presented; the network is run forward, as explained earlier; the difference to the desired output is calculated; the total error is accumulated. This routine calculates the error for backpropagation.)
Software for Backpropagation Learning (continuation)
(Code figure, annotated: the output weights are updated, the hidden difference values are calculated, the input weights are updated, and the total error is returned. The learning rate alpha is not used here.)
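The code shown on the original slides is not preserved; below is a minimal reconstruction of the routine the annotations describe, for one hidden layer of sigmoid units, using the sigmoid helper sketched earlier. All names are ours:

```python
import numpy as np

def backprop_one_example(W_in, W_out, x, target):
    # Run network forward (was explained earlier)
    hidden = sigmoid(W_in @ x)
    output = sigmoid(W_out @ hidden)

    # Calculate difference to desired output, scaled by the sigmoid derivative
    out_diff = (target - output) * output * (1 - output)
    # Calculate total error
    total_error = np.sum((target - output) ** 2)

    # Calculate hidden difference values
    hid_diff = hidden * (1 - hidden) * (W_out.T @ out_diff)

    # Update output weights, then input weights
    # (no learning rate here, matching the slide's note)
    W_out += np.outer(out_diff, hidden)
    W_in += np.outer(hid_diff, x)

    # Return total error
    return total_error
```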
The general Backpropagation Algorithm for updating weights in a multilayer network
(Figure, annotated: go through all examples; run the network to calculate its output for each example; compute the error in the output; update the weights to the output layer; compute the error in each hidden layer; update the weights in each hidden layer; repeat until convergent; return the learned network. Here the learning rate alpha is used.)
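A sketch of that outer loop, reusing the per-example pass above but with alpha applied to every update; the convergence tolerance is our assumption:

```python
def backprop_training(W_in, W_out, examples, alpha=0.1, tol=1e-3):
    while True:                                    # repeat until convergent
        total_error = 0.0
        for x, target in examples:                 # go through all examples
            hidden = sigmoid(W_in @ x)             # run network forward
            output = sigmoid(W_out @ hidden)
            out_diff = (target - output) * output * (1 - output)     # error in output
            hid_diff = hidden * (1 - hidden) * (W_out.T @ out_diff)  # error in hidden layer
            W_out += alpha * np.outer(out_diff, hidden)  # update weights to output layer
            W_in += alpha * np.outer(hid_diff, x)        # update weights in hidden layer
            total_error += np.sum((target - output) ** 2)
        if total_error < tol:
            return W_in, W_out                     # return learned network
```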
Examples and Applications of ANN
Neural Network in Practice
NNs are used for classification and function approximation or mapping problems that:
- tolerate some imprecision,
- have lots of training data available, and
- cannot easily be captured by hard and fast rules.
NETalk (1987)
• Mapping character strings into phonemes so they
can be pronounced by a computer
• A neural network was trained to pronounce each letter of a word in a sentence, given the three letters before and the three letters after it in a window
• The output was the correct phoneme
• Results
– 95% accuracy on the training data
– 78% accuracy on the test set
Other Examples
• Neurogammon (Tesauro & Sejnowski, 1989)
– Backgammon learning program
• Speech Recognition (Waibel, 1989)
• Character Recognition (LeCun et al., 1989)
• Face Recognition (Mitchell)
ALVINN
• Steer a van down the road
– 2-layer feedforward
• using backpropagation for learning
– Raw input is a 480 × 512 pixel image, 15 times per second
– Color image preprocessed into 960 input units
– 4 hidden units
– 30 output units, each is a steering direction
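A shape-only sketch of that architecture (960 inputs, 4 hidden units, 30 steering outputs), using the sigmoid helper from earlier; the weight ranges and random input are stand-ins, not ALVINN's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.uniform(-0.1, 0.1, size=(4, 960))    # 960 inputs -> 4 hidden units
W_out = rng.uniform(-0.1, 0.1, size=(30, 4))    # 4 hidden units -> 30 steering outputs

image = rng.random(960)                          # stand-in for a preprocessed image
steering = sigmoid(W_out @ sigmoid(W_in @ image))
direction = int(np.argmax(steering))             # index of the chosen steering unit
```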
Neural Network Approaches
ALVINN - Autonomous Land Vehicle In a Neural Network
Learning on-the-fly
• ALVINN learned as the vehicle traveled
– initially by observing a human driving
– then from its own driving, by watching for future corrections
– it never saw bad driving
• so it didn’t know what was dangerous or incorrect
• it computes alternate views of the road (rotations, shifts, and fill-ins) to use as “bad” examples
– it keeps a buffer pool of 200 older examples to avoid overfitting to only the most recent images
Feed-forward vs. Interactive
Nets
• Feed-forward
– activation propagates in one direction
– We usually focus on this
• Interactive
– activation propagates forward & backwards
– propagation continues until equilibrium is reached in
the network
– We do not discuss these networks here; training is complex and they may be unstable.
Ways of learning with an ANN
• Add nodes & connections
• Subtract nodes & connections
• Modify connection weights
– current focus
– can simulate the first two
• I/O pairs:
– given the inputs, what should the output be?
[“typical” learning problem]
More Neural Network Applications
- May provide a model for massive parallel computation.
- A more successful approach to “parallelizing” traditional serial algorithms.
- Can compute any computable function.
- Can do everything a normal digital computer can do.
- Can do even more under some impractical assumptions.
Neural Network Approaches to driving
- Developed in 1993.
- Performs driving with
Neural Networks.
- An intelligent VLSI image
sensor for road following.
- Learns to filter out image
details not relevant to
driving.
(Figure: input units, a hidden layer, and output units.)
• Can use special hardware: ASIC, FPGA, or analog.
Neural Network Approaches
(Figure: input array, hidden units, and output units.)
Actual Products Available
Ex1. Enterprise Miner:
- A single multi-layered feed-forward neural network.
- Provides business solutions for data mining.
Ex2. Nestor:
- Uses the Nestor Learning System (NLS).
- Several multi-layered feed-forward neural networks.
- Intel has made such a chip, the NE1000, in VLSI technology.
Ex1. Software Tool - Enterprise Miner
- Based on the SEMMA (Sample, Explore, Modify, Model, Assess) methodology.
- Statistical tools include:
clustering, decision trees, linear and logistic regression, and neural networks.
- Data preparation tools include:
outlier detection, variable transformation, random sampling, and partitioning of data sets (into training, testing, and validation data sets).
Ex 2. Hardware Tool - Nestor
- Low connectivity within each layer.
- Minimized connectivity within each layer results in rapid training and efficient memory utilization, ideal for VLSI.
- Composed of multiple neural networks, each specializing in a subset of information about the input patterns.
- Real-time operation without the need for special computers or custom hardware DSP platforms.
- Software versions also exist.
Problems with using ANNs
1. Insufficiently characterized development process compared with conventional software
– What are the steps to create a neural network?
– How do we create neural networks in a repeatable and predictable manner?
2. Absence of quality assurance methods for neural network models and implementations
– How do I verify my implementation?
Solving Problem 1 – The Steps to Create an ANN
Define the process of developing neural networks:
1. Formally capture the specifics of the problem in a document based on a template
2. Define the factors/parameters for creation
– Neural network creation parameters
– Performance requirements
3. Create the neural network
4. Get feedback on performance
Neural Network Development Process
(Figure: the development process as a flow through the phases below.)
Problem Specification Phase
• Some factors to define in problem specification:
– Type of neural networks (based on experience or
published results)
– How to collect and transform problem data
– Potential input/output representations
– Training & testing method and data selection
– Performance targets (accuracy and precision)
• The most important output is the ranked collection of factors/parameters
Problem 2 – How to Create a Neural Network
• Predictability (with regard to resources)
– Depending on creation approach used, record time
for one iteration
– Use timing to predict maximum and minimum times
for all of the combinations specified
• Repeatability
– Relevant information must be captured in problem
specification and combinations of parameters
Problem 3 - Quality Assurance
• Specification of generic neural network software
(models and learning)
• Prototype of specification
• Comparison of a given implementation with
specification prototype
• Allows practitioners to create arbitrary neural
networks verified against models
Two Methods for Comparison
• Direct comparison of outputs (a 20-10-5 network, with particular connections and input):

Prototype:       <0.123892, 0.567442, 0.981194, 0.321438, 0.699115>
Implementation:  <0.123892, 0.567442, 0.981194, 0.321438, 0.699115>

• Verification of weights generated by the learning algorithm (20-10-5 network):

                 Iteration 100    Iteration 200    ...    Iteration n
Prototype        Weight state 1   Weight state 2   ...    Weight state n
Implementation   Weight state 1   Weight state 2   ...    Weight state n
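A sketch of the direct output comparison in code, using the output vector from the table above; the numerical tolerance is our choice:

```python
import numpy as np

prototype = np.array([0.123892, 0.567442, 0.981194, 0.321438, 0.699115])
implementation = np.array([0.123892, 0.567442, 0.981194, 0.321438, 0.699115])

# The implementation passes if its outputs match the prototype's
# to within a small numerical tolerance.
assert np.allclose(prototype, implementation, atol=1e-6)
```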
Further Work on Improvements
• Practitioners to use the development process, or at least document it in the problem specification
• Feedback from the neural network development community on the content of the problem specification template
• Collect problem specifications and analyse them to look for commonalities in problem domains and improve predictability (e.g., control)
• More verification of the specification prototype
Further Work (2)
• Translation methods for formal specification
• Extend formal specification to new types
• Fully prove aspects of the specification
• Cross-discipline data analysis methods (e.g., ICA, statistical analysis)
• Implementation of learning on distributed
systems
– Peer-to-peer network systems (farm each
combination of parameters to a peer)
• Remain unfashionable
Summary
- A neural network is a computational model that simulates some properties of the human brain.
- The connections and nature of units determine the
behavior of a neural network.
- Perceptrons are feed-forward networks that can only
represent linearly separable functions.
Summary
- Given enough units, any function can be represented by multi-layer feed-forward networks.
- Backpropagation learning works on multi-layer
feed-forward networks.
- Neural Networks are widely used in developing
artificial learning systems.
References
- Russell, S. and P. Norvig (1995). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.
- Sarle, W.S., ed. (1997), Neural Network FAQ, part 1 of 7:
Introduction, periodic posting to the Usenet newsgroup
comp.ai.neural-nets,
URL: ftp://ftp.sas.com/pub/neural/FAQ.html
Sources
Eddy Li
Eric Wong
Martin Ho
Kitty Wong
Editor's Notes
1. Teacher values were Gaussian with variance 10.