Lecture 8
Artificial neural networks: Unsupervised learning
(Slides are based on Negnevitsky, Pearson Education, 2005)

• Introduction
• Hebbian learning
• Generalised Hebbian learning algorithm
• Competitive learning
• Self-organising computational map: Kohonen network
• Summary
Introduction

The main property of a neural network is an ability to learn from its environment, and to improve its performance through learning. So far we have considered supervised, or active, learning: learning with an external “teacher”, or a supervisor, who presents a training set to the network. But another type of learning also exists: unsupervised learning.
• In contrast to supervised learning, unsupervised or self-organised learning does not require an external teacher. During the training session, the neural network receives a number of different input patterns, discovers significant features in these patterns, and learns how to classify input data into appropriate categories. Unsupervised learning tends to follow the neuro-biological organisation of the brain.
• Unsupervised learning algorithms aim to learn rapidly and can be used in real time.
Hebbian learning

In 1949, Donald Hebb proposed one of the key ideas in biological learning, commonly known as Hebb’s Law. Hebb’s Law states that if neuron i is near enough to excite neuron j and repeatedly participates in its activation, the synaptic connection between these two neurons is strengthened and neuron j becomes more sensitive to stimuli from neuron i.
Hebb’s Law can be represented in the form of two rules:
1. If two neurons on either side of a connection are activated synchronously, then the weight of that connection is increased.
2. If two neurons on either side of a connection are activated asynchronously, then the weight of that connection is decreased.

Hebb’s Law provides the basis for learning without a teacher. Learning here is a local phenomenon, occurring without feedback from the environment.
[Figure: Hebbian learning in a neural network, showing input signals passing to output signals through the connection from neuron i to neuron j.]
• Using Hebb’s Law we can express the adjustment applied to the weight w_ij at iteration p in the following form:

\Delta w_{ij}(p) = F[\, y_j(p),\, x_i(p) \,]

• As a special case, we can represent Hebb’s Law as follows:

\Delta w_{ij}(p) = \alpha \, y_j(p) \, x_i(p)

where α is the learning rate parameter. This equation is referred to as the activity product rule.
• Hebbian learning implies that weights can only increase. To resolve this problem, we might impose a limit on the growth of synaptic weights. It can be done by introducing a non-linear forgetting factor into Hebb’s Law:

\Delta w_{ij}(p) = \alpha \, y_j(p) \, x_i(p) - \varphi \, y_j(p) \, w_{ij}(p)

where φ is the forgetting factor. The forgetting factor usually falls in the interval between 0 and 1, typically between 0.01 and 0.1, to allow only a little “forgetting” while limiting the weight growth.
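A minimal sketch of this update in NumPy (assuming, as in the example later in the lecture, binary 0/1 activations; the helper name and default parameter values are illustrative):

```python
import numpy as np

def hebbian_update(W, x, y, alpha=0.1, phi=0.1):
    """One Hebbian update with a forgetting factor.

    W     -- n-by-m weight matrix; W[i, j] connects input i to output neuron j
    x, y  -- input vector (n,) and output vector (m,) at iteration p
    alpha -- learning rate, phi -- forgetting factor
    """
    # delta_w[i, j] = alpha * y[j] * x[i] - phi * y[j] * W[i, j]
    delta_W = alpha * np.outer(x, y) - phi * y * W
    return W + delta_W
```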
Hebbian learning algorithm

Step 1: Initialisation.
Set initial synaptic weights and thresholds to small random values, say in an interval [0, 1].

Step 2: Activation.
Compute the neuron output at iteration p

y_j(p) = \sum_{i=1}^{n} x_i(p) \, w_{ij}(p) - \theta_j

where n is the number of neuron inputs, and θ_j is the threshold value of neuron j.
Step 3: Learning.
Update the weights in the network:

w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)

where Δw_ij(p) is the weight correction at iteration p. The weight correction is determined by the generalised activity product rule:

\Delta w_{ij}(p) = \varphi \, y_j(p) \, [\, \lambda \, x_i(p) - w_{ij}(p) \,]

where λ = α/φ, so this is simply the activity product rule with forgetting written in factored form.

Step 4: Iteration.
Increase iteration p by one, go back to Step 2.
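The four steps can be collected into a short training loop. A minimal sketch, assuming binary input patterns and a 0/1 hard limiter standing in for the sign activation used in the example below; the function name, epoch count and parameter values are illustrative:

```python
import numpy as np

def hebbian_train(X, n_epochs=100, alpha=0.1, phi=0.1, W_init=None, rng=None):
    """Generalised Hebbian learning (Steps 1-4) for a single layer of n neurons."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[1]
    # Step 1: initialisation -- small random weights (or a given matrix) and thresholds
    W = rng.uniform(0, 1, (n, n)) if W_init is None else np.array(W_init, dtype=float)
    theta = rng.uniform(0, 1, n)
    for _ in range(n_epochs):                       # Step 4: repeat over the training set
        for x in X:
            # Step 2: activation, hard-limited to 0/1
            y = (x @ W - theta > 0).astype(float)
            # Step 3: generalised activity product rule
            # (expanded form: alpha*y_j*x_i - phi*y_j*w_ij)
            W += alpha * np.outer(x, y) - phi * y * W
    return W, theta
```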
Hebbian learning example

To illustrate Hebbian learning, consider a fully connected feedforward network with a single layer of five computation neurons. Each neuron is represented by a McCulloch and Pitts model with the sign activation function. The network is trained on the following set of input vectors:

\mathbf{X}_1 = [\,0\;0\;0\;0\;0\,]^T, \quad
\mathbf{X}_2 = [\,0\;1\;0\;0\;1\,]^T, \quad
\mathbf{X}_3 = [\,0\;0\;0\;1\;0\,]^T, \quad
\mathbf{X}_4 = [\,0\;0\;1\;0\;0\,]^T, \quad
\mathbf{X}_5 = [\,0\;1\;0\;0\;1\,]^T
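With the hebbian_train sketch above, this example can be set up as follows. The slides do not state the learning rate, forgetting factor or number of epochs used to obtain the weight matrix shown two slides below, so the values here are only illustrative and will not reproduce those numbers exactly:

```python
import numpy as np

# the five training vectors, one per row
X_train = np.array([[0, 0, 0, 0, 0],
                    [0, 1, 0, 0, 1],
                    [0, 0, 0, 1, 0],
                    [0, 0, 1, 0, 0],
                    [0, 1, 0, 0, 1]], dtype=float)

# the example starts from the 5-by-5 identity weight matrix (see below)
W, theta = hebbian_train(X_train, W_init=np.identity(5),
                         n_epochs=50, alpha=0.1, phi=0.1)
print(np.round(W, 4))
```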
[Figure: Initial and final states of the network. Both diagrams show the five-input, five-output single-layer network with the input vector [1 0 0 0 1] applied; the output pattern changes after training.]
Initial and final weight matrices (rows: input-layer neurons 1–5; columns: output-layer neurons 1–5):

W(0) =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\qquad
W =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 2.0204 & 0 & 0 & 2.0204 \\
0 & 0 & 1.0200 & 0 & 0 \\
0 & 0 & 0 & 0.9996 & 0 \\
0 & 2.0204 & 0 & 0 & 2.0204
\end{bmatrix}
• A test input vector, or probe, is defined as

\mathbf{X} = [\,1\;0\;0\;0\;1\,]^T

• When this probe is presented to the network, we obtain:

\mathbf{Y} = \mathrm{sign}\left(
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 2.0204 & 0 & 0 & 2.0204 \\
0 & 0 & 1.0200 & 0 & 0 \\
0 & 0 & 0 & 0.9996 & 0 \\
0 & 2.0204 & 0 & 0 & 2.0204
\end{bmatrix}
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
-
\begin{bmatrix} 0.4940 \\ 0.2661 \\ 0.0907 \\ 0.9478 \\ 0.0737 \end{bmatrix}
\right)
=
\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}
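The response can be checked numerically with the final weight matrix and the threshold vector from the slide; a small sketch, using a 0/1 hard limiter in place of the sign function for these binary patterns:

```python
import numpy as np

W_final = np.array([[0, 0,      0,      0,      0     ],
                    [0, 2.0204, 0,      0,      2.0204],
                    [0, 0,      1.0200, 0,      0     ],
                    [0, 0,      0,      0.9996, 0     ],
                    [0, 2.0204, 0,      0,      2.0204]])
theta = np.array([0.4940, 0.2661, 0.0907, 0.9478, 0.0737])

x_probe = np.array([1, 0, 0, 0, 1])
y = (W_final @ x_probe - theta > 0).astype(int)
print(y)   # [0 1 0 0 1]
```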
Competitive learning

• In competitive learning, neurons compete among themselves to be activated.
• While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time.
• The output neuron that wins the “competition” is called the winner-takes-all neuron.
• The basic idea of competitive learning was introduced in the early 1970s.
• In the late 1980s, Teuvo Kohonen introduced a special class of artificial neural networks called self-organising feature maps. These maps are based on competitive learning.
What is a self-organising feature map?

Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, somatosensory, etc.), and associated with different sensory inputs. We can say that each sensory input is mapped into a corresponding area of the cerebral cortex. The cortex is a self-organising computational map in the human brain.
[Figure: Feature-mapping Kohonen model. Panels (a) and (b) each show an input layer connected to a Kohonen layer, with different inputs activating different regions of the map.]
The Kohonen network

• The Kohonen model provides a topological mapping. It places a fixed number of input patterns from the input layer into a higher-dimensional output, or Kohonen, layer.
• Training in the Kohonen network begins with the winner’s neighbourhood of a fairly large size. Then, as training proceeds, the neighbourhood size gradually decreases.
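A common way to implement the shrinking neighbourhood is an exponentially decaying radius; the schedule below is a typical choice shown for illustration, not something specified in these slides:

```python
import numpy as np

def neighbourhood_radius(p, sigma0=5.0, tau=1000.0):
    """Neighbourhood radius at iteration p: starts large and gradually shrinks."""
    return sigma0 * np.exp(-p / tau)

# e.g. radius ~5.0 at p = 0, ~1.8 at p = 1000, ~0.7 at p = 2000
```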
[Figure: Architecture of the Kohonen network. Two input-layer neurons (x1, x2) are fully connected to three output-layer neurons (y1, y2, y3); input signals flow in, output signals flow out.]
• The lateral connections are used to create a competition between neurons. The neuron with the largest activation level among all neurons in the output layer becomes the winner. This neuron is the only neuron that produces an output signal. The activity of all other neurons is suppressed in the competition.
• The lateral feedback connections produce excitatory or inhibitory effects, depending on the distance from the winning neuron. This is achieved by the use of a Mexican hat function, which describes the synaptic weights between neurons in the Kohonen layer.
[Figure: The Mexican hat function of lateral connection. Connection strength (peaking at 1) is plotted against distance from the winning neuron: a short-range excitatory effect around the winner, flanked by inhibitory effects at larger distances.]
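The lateral-weight profile in the figure is often modelled as a difference of two Gaussians; a minimal sketch, with purely illustrative widths and amplitudes:

```python
import numpy as np

def mexican_hat(distance, sigma_exc=1.0, sigma_inh=3.0):
    """Lateral connection strength versus distance from the winning neuron:
    short-range excitation minus broader, weaker inhibition."""
    d2 = np.asarray(distance, dtype=float) ** 2
    return np.exp(-d2 / (2 * sigma_exc**2)) - 0.5 * np.exp(-d2 / (2 * sigma_inh**2))
```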
• In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbourhood are allowed to learn. If a neuron does not respond to a given input pattern, then learning cannot occur in that particular neuron.
• The competitive learning rule defines the change Δw_ij applied to synaptic weight w_ij as

\Delta w_{ij} =
\begin{cases}
\alpha\,(x_i - w_{ij}), & \text{if neuron } j \text{ wins the competition} \\
0, & \text{if neuron } j \text{ loses the competition}
\end{cases}

where x_i is the input signal and α is the learning rate parameter.
• The overall effect of the competitive learning rule resides in moving the synaptic weight vector W_j of the winning neuron j towards the input pattern X. The matching criterion is equivalent to the minimum Euclidean distance between vectors.
• The Euclidean distance between a pair of n-by-1 vectors X and W_j is defined by

d = \|\mathbf{X} - \mathbf{W}_j\| = \left[ \sum_{i=1}^{n} (x_i - w_{ij})^2 \right]^{1/2}

where x_i and w_ij are the ith elements of the vectors X and W_j, respectively.
• To identify the winning neuron j_X that best matches the input vector X, we may apply the following condition:

j_{\mathbf{X}} = \min_{j} \, \|\mathbf{X} - \mathbf{W}_j\|, \qquad j = 1, 2, \ldots, m

where m is the number of neurons in the Kohonen layer.
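Putting the rule and the distance criterion together, one competitive-learning step might look like the following sketch (the function name and learning-rate default are illustrative):

```python
import numpy as np

def competitive_step(x, W, alpha=0.1):
    """One competitive-learning step.

    x -- input vector of shape (n,)
    W -- weight matrix of shape (m, n); row j holds the weight vector W_j
    """
    distances = np.linalg.norm(x - W, axis=1)   # Euclidean distance to every W_j
    winner = int(np.argmin(distances))          # winner-takes-all neuron j_X
    W[winner] += alpha * (x - W[winner])        # move W_j towards X; losers unchanged
    return winner, W
```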
• Suppose, for instance, that the 2-dimensional input vector X is presented to the three-neuron Kohonen network,

\mathbf{X} = \begin{bmatrix} 0.52 \\ 0.12 \end{bmatrix}

• The initial weight vectors, W_j, are given by

\mathbf{W}_1 = \begin{bmatrix} 0.27 \\ 0.81 \end{bmatrix}, \qquad
\mathbf{W}_2 = \begin{bmatrix} 0.42 \\ 0.70 \end{bmatrix}, \qquad
\mathbf{W}_3 = \begin{bmatrix} 0.43 \\ 0.21 \end{bmatrix}
• We find the winning (best-matching) neuron j_X using the minimum-distance Euclidean criterion:

d_1 = \sqrt{(x_1 - w_{11})^2 + (x_2 - w_{21})^2} = \sqrt{(0.52 - 0.27)^2 + (0.12 - 0.81)^2} = 0.73

d_2 = \sqrt{(x_1 - w_{12})^2 + (x_2 - w_{22})^2} = \sqrt{(0.52 - 0.42)^2 + (0.12 - 0.70)^2} = 0.59

d_3 = \sqrt{(x_1 - w_{13})^2 + (x_2 - w_{23})^2} = \sqrt{(0.52 - 0.43)^2 + (0.12 - 0.21)^2} = 0.13

• Neuron 3 is the winner and its weight vector W_3 is updated according to the competitive learning rule:

\Delta w_{13} = \alpha\,(x_1 - w_{13}) = 0.1\,(0.52 - 0.43) = 0.01

\Delta w_{23} = \alpha\,(x_2 - w_{23}) = 0.1\,(0.12 - 0.21) = -0.01
• The updated weight vector W_3 at iteration (p + 1) is determined as:

\mathbf{W}_3(p+1) = \mathbf{W}_3(p) + \Delta\mathbf{W}_3(p) =
\begin{bmatrix} 0.43 \\ 0.21 \end{bmatrix} +
\begin{bmatrix} 0.01 \\ -0.01 \end{bmatrix} =
\begin{bmatrix} 0.44 \\ 0.20 \end{bmatrix}

• The weight vector W_3 of the winning neuron 3 becomes closer to the input vector X with each iteration.
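These numbers can be reproduced with the competitive_step sketch above, or directly, as here:

```python
import numpy as np

x = np.array([0.52, 0.12])
W = np.array([[0.27, 0.81],    # W1
              [0.42, 0.70],    # W2
              [0.43, 0.21]])   # W3

d = np.linalg.norm(x - W, axis=1)
print(np.round(d, 2))                  # [0.73 0.59 0.13] -> neuron 3 wins
winner = np.argmin(d)
W[winner] += 0.1 * (x - W[winner])     # competitive learning rule with alpha = 0.1
print(np.round(W[winner], 2))          # [0.44 0.2 ]
```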
Competitive Learning Algorithm

Step 1: Initialisation.
Set initial synaptic weights to small random values, say in an interval [0, 1], and assign a small positive value to the learning rate parameter α.
Step 2: Activation and Similarity Matching.
Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best-matching) neuron j_X at iteration p, using the minimum-distance Euclidean criterion

j_{\mathbf{X}}(p) = \min_{j} \, \|\mathbf{X} - \mathbf{W}_j(p)\| = \min_{j} \left[ \sum_{i=1}^{n} \big( x_i - w_{ij}(p) \big)^2 \right]^{1/2}, \qquad j = 1, 2, \ldots, m

where n is the number of neurons in the input layer, and m is the number of neurons in the Kohonen layer.
Step 3: Learning.
Update the synaptic weights

w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)

where Δw_ij(p) is the weight correction at iteration p. The weight correction is determined by the competitive learning rule:

\Delta w_{ij}(p) =
\begin{cases}
\alpha \, \big[ x_i - w_{ij}(p) \big], & j \in \Lambda_j(p) \\
0, & j \notin \Lambda_j(p)
\end{cases}

where α is the learning rate parameter, and Λ_j(p) is the neighbourhood function centred around the winner-takes-all neuron j_X at iteration p.
Step 4: Iteration.
Increase iteration p by one, go back to Step 2 and continue until the minimum-distance Euclidean criterion is satisfied, or no noticeable changes occur in the feature map.
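Steps 1–4 can be gathered into a small training loop. A minimal sketch: the neighbourhood here is the simplest possible choice (the winner only), a fixed iteration count stands in for the convergence test described above, and the function name is made up for this sketch:

```python
import numpy as np

def train_kohonen(data, n_neurons, alpha=0.1, n_iterations=10_000, rng=None):
    """Competitive learning (Steps 1-4) with a winner-only neighbourhood.

    data      -- array of shape (n_samples, n_inputs)
    n_neurons -- number m of neurons in the Kohonen layer
    Returns the m-by-n_inputs weight matrix (row j is W_j).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: small random weights in [0, 1]
    W = rng.uniform(0, 1, (n_neurons, data.shape[1]))
    for p in range(n_iterations):
        x = data[rng.integers(len(data))]                    # present an input vector
        # Step 2: winner-takes-all neuron by minimum Euclidean distance
        winner = np.argmin(np.linalg.norm(x - W, axis=1))
        # Step 3: update the winner (more generally, its whole neighbourhood)
        W[winner] += alpha * (x - W[winner])
        # Step 4: next iteration
    return W
```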
Competitive learning in the Kohonen network

• To illustrate competitive learning, consider the Kohonen network with 100 neurons arranged in the form of a two-dimensional lattice with 10 rows and 10 columns. The network is required to classify two-dimensional input vectors: each neuron in the network should respond only to the input vectors occurring in its region.
• The network is trained with 1000 two-dimensional input vectors generated randomly in a square region in the interval between –1 and +1. The learning rate parameter α is equal to 0.1.
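This example can be run with the train_kohonen sketch above; since that sketch uses a winner-only neighbourhood, the 10-by-10 lattice is treated simply as 100 neurons in a flat layer:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, (1000, 2))    # 1000 random 2-D vectors in [-1, +1]
W = train_kohonen(data, n_neurons=100, alpha=0.1, n_iterations=10_000, rng=rng)
# W[:, 0] and W[:, 1] correspond to the W(1,j) and W(2,j) axes in the plots below
```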
[Figure: Initial random weights. Scatter of W(1,j) against W(2,j) over the square [-1, 1] x [-1, 1].]

[Figure: Network after 100 iterations. W(1,j) against W(2,j).]

[Figure: Network after 1000 iterations. W(1,j) against W(2,j).]

[Figure: Network after 10,000 iterations. W(1,j) against W(2,j).]
Summary

• One form of unsupervised learning is clustering.
• Among neural network models, the self-organising map (SOM) and adaptive resonance theory (ART) are commonly used unsupervised learning algorithms.
– The SOM is a topographic organisation in which nearby locations in the map represent inputs with similar properties.
– ART networks are used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing. The first version of ART was "ART1", developed by Carpenter and Grossberg (1988).