Parveen Malik
Assistant Professor
KIIT University
Neural Networks
Text Books:
• Neural Networks and Learning Machines – Simon Haykin
• Principles of Soft Computing – S.N. Sivanandam & S.N. Deepa
• Neural Networks using Matlab – S.N. Sivanandam, S. Sumathi, S.N. Deepa
What is a Neural Network?
"A neural network is a massively parallel distributed processor made up of simple processing units that has a natural propensity for storing experiential knowledge and making it available for use."
It resembles the brain in two respects:
1. Knowledge is acquired by the network from its environment
through a learning process.
2. Interneuron connection strengths, known as synaptic weights, are
used to store the acquired knowledge.
Neural Network
Why do we need Neural Networks?

Machine learning applications span three broad areas:

Visual Pattern Recognition:
• Object Detection
• Visual Tracking
• Image Captioning
• Video Captioning
• Visual Question Answering
• Video Question Answering
• Video Summarization
• Generating Authentic Photos
• Facial Expression Detection

Natural Language Processing:
• Language Modelling
• Text Prediction
• Speech Recognition
• Machine Translation
• C-H (Computer–Human) Conversation Modelling

Time Series Modelling:
• Share Market Prediction
• Weather Forecasting
• Financial Data Analytics
• Mood Analytics

Machine Learning and Beyond
More Applications

Aerospace:
• High-performance aircraft autopilot
• Flight path simulations
• Aircraft control systems
• Autopilot enhancements
• Aircraft component simulations
• Aircraft component detectors

Defence:
• Weapon steering
• Target tracking
• Object discrimination
• Facial recognition
• New kinds of sensors
• Sonar
• Radar signal processing
• Image signal processing
• Data compression
• Feature extraction
• Noise suppression

Manufacturing:
• Manufacturing process control
• Product design and analysis
• Process and machine diagnosis
• Real-time particle identification
• Visual quality inspection systems
• Welding analysis
• Analysis of grinding operations
• Chemical product design and analysis
• Machine maintenance analysis
• Project bidding, planning and management
• Dynamic modelling of chemical process systems

Medical:
• Breast cancer cell analysis
• EEG and ECG analysis
• Prosthesis design
• Optimization of transplant times
• Hospital expense reduction
• Hospital quality improvement
• Emergency room test advisement

Entertainment:
• Animation
• Special effects
• Market forecasting

Electronics:
• Code sequence prediction
• Integrated circuit chip layout
• Process control
• Chip failure analysis
• Machine vision
• Voice synthesis
• Non-linear modelling
Applications (contd.)

Financial:
• Real estate appraisal
• Loan advisor
• Mortgage screening
• Corporate bond rating
• Credit line use analysis
• Portfolio trading programs
• Corporate financial analysis
• Currency price prediction

Telecommunications:
• Image and data compression
• Automated information services
• Real-time translation of spoken language
• Customer payment processing systems

Securities:
• Market analysis
• Automatic bond rating
• Stock trading advisory systems

Automotive:
• Automobile automatic guidance systems
• Warranty activity analysers

Banking:
• Check and other document readers
• Credit application evaluators

Insurance:
• Policy application evaluation
• Product optimization

Robotics:
• Trajectory control
• Forklift robots
• Manipulator controllers
• Vision systems

Speech:
• Speech recognition
• Speech compression
• Vowel classification
• Text-to-speech synthesis

Transportation:
• Truck brake diagnosis systems
• Vehicle scheduling
• Routing systems
Central Nervous System
Human Brain and Neuron
CNS – Brain and Neuron

Neuron – the structural unit of the central nervous system, i.e. the brain and spinal cord.

• ~100 billion neurons, ~100 trillion synapses
• Weight: 1.5 kg to 2 kg
• Conduction speed: 0.6 m/s to 120 m/s
• Power: ~20% of the body's energy, 20–40 W, about $10^{-16}$ J per operation
• Ion transport phenomenon
• Fault tolerant
• Asynchronous firing
• Response time: $10^{-3}$ s

"The brain is a highly complex, non-linear and massively parallel computing machine."
๐‘ต๐’†๐’–๐’“๐’๐’
โ€œA Neuron is a basic unit of brain that processes and transmits information.โ€
Neuron
โ€ข Dendrite: Receive signals from other
neurons
โ€ข Soma (Cell body): Process the incoming
signals.
โ€ข Myelin Sheath: Covers neurons and help
speed up neuron impulses.
โ€ข Axon : Transmits the electric potential from
soma to synaptic terminal and then finally
to other neurons, muscles or glands
โ€ข Synaptic Terminal : Release the
neurotransmitter to transmit information to
dendrites.
Neuron Connection
Hierarchical Learning – Inspired Deep Learning
Human Brain (contd.)

Historical perspective on modelling the brain through the medical and applied-mathematics worlds (timeline figures).
Analogy between Biological Neural Network and Artificial Neural Network
Biological Neural Network Vs Artificial Neural Network
Equivalent Electrical Model
Practical Neural Network (Single Neuron)

Inputs $x_1, x_2, \ldots, x_n$ enter with weights $w_1, w_2, \ldots, w_n$; the summing junction adds a bias $b$, and an activation function $f$ produces the actual output:

$$\hat{y} = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$$

The bias can be folded in as the 0th weight with its input fixed at 1 ($x_0 = 1$, $w_0 = b$):

$$\hat{y} = f\left(\sum_{i=0}^{n} w_i x_i\right)$$
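To make the computation concrete, here is a minimal Python sketch of the single-neuron forward pass above. The hard-limit activation and the specific input/weight values are illustrative choices, not values from the slides.

```python
import numpy as np

def neuron_output(x, w, b, f):
    """Single neuron: y = f(sum_i w_i * x_i + b)."""
    return f(np.dot(w, x) + b)

# Hard-limit activation: 1 if net input >= 0, else 0
hard_limit = lambda v: 1 if v >= 0 else 0

# Illustrative values (not from the slides)
x = np.array([0.5, -1.0, 2.0])   # inputs x1..x3
w = np.array([1.0, 0.8, -0.4])   # weights w1..w3
b = 0.2                          # bias

print(neuron_output(x, w, b, hard_limit))  # prints 0 for these values

# Equivalent "bias as 0th weight" form: x0 = 1, w0 = b
x_aug = np.insert(x, 0, 1.0)
w_aug = np.insert(w, 0, b)
print(hard_limit(np.dot(w_aug, x_aug)))    # same result
```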
Activation Functions

• Hard Limit: $f(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases}$ – non-differentiable; a step from 0 to 1 at $x = 0$.
• Bipolar Hard Limit (Signum Function): $f(x) = \begin{cases} 1 & \text{if } x > 0 \\ 0 & \text{if } x = 0 \\ -1 & \text{if } x < 0 \end{cases}$ – non-differentiable; steps between $-1$ and $1$.
• Sigmoid Function: $f(x) = \dfrac{1}{1 + e^{-x}}$ – differentiable, with $f'(x) = f(x)\,(1 - f(x))$; an S-shaped curve from 0 to 1.
Activation Functions (contd.)

• Hyperbolic Tangent (Bipolar Sigmoid): $f(x) = \tanh x = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$ – differentiable, with $f'(x) = 1 - f^{2}(x)$; an S-shaped curve from $-1$ to $1$.
• Linear: $f(x) = x$ – differentiable.
• Rectified Linear Unit (ReLU): $f(x) = \max(0, x)$ – differentiable everywhere except at $x = 0$.
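The functions in both tables are short enough to write down directly. A sketch in Python (NumPy is assumed; the function names are our own):

```python
import numpy as np

def hard_limit(x):
    """Binary step: 1 if x >= 0 else 0 (non-differentiable)."""
    return np.where(x >= 0, 1, 0)

def signum(x):
    """Bipolar hard limit: +1, 0, or -1 (non-differentiable)."""
    return np.sign(x)

def sigmoid(x):
    """Logistic sigmoid; derivative f'(x) = f(x) * (1 - f(x))."""
    return 1.0 / (1.0 + np.exp(-x))

def bipolar_sigmoid(x):
    """Hyperbolic tangent; derivative f'(x) = 1 - f(x)**2."""
    return np.tanh(x)

def relu(x):
    """Rectified linear unit: max(0, x)."""
    return np.maximum(0, x)

xs = np.array([-2.0, 0.0, 2.0])
for f in (hard_limit, signum, sigmoid, bipolar_sigmoid, relu):
    print(f.__name__, f(xs))
```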
Single Perceptron to Multilayer Perceptron – Historical Perspective

• McCulloch–Pitts (1943) – first mathematical model of a neuron. The weighted sum of the input signals is compared to a threshold to determine the neuron output.
• Hebbian Learning Algorithm (1949) – learning of weights to classify patterns; from "Organization of Behaviour" by David Hebb:
  $$W_{new} = W_{old} + x_i\,y$$
• Frank Rosenblatt (1957) – a more accurate neuron model (the perceptron), with the Perceptron Learning Algorithm to find optimum weights:
  $$W_{new} = W_{old} + (t - a)\,x_i$$
• Delta rule, or Widrow–Hoff Learning Algorithm – an approximate steepest-descent algorithm; the Least Mean Squares (LMS) algorithm, used for adaptive linear neuron (ADALINE) network learning:
  $$W_{new} = W_{old} - \alpha\,\nabla F(w)\big|_{w = W_{old}}$$
• Marvin Minsky and Seymour Papert (1969) – limitation of the perceptron in classifying non-separable patterns.
• Backpropagation (1986) – training of multiple layers of perceptrons.
Geometrical Significance (Hard-limit activation function)

A single neuron with inputs $x_1, \ldots, x_n$, weights $w_1, \ldots, w_n$, bias $b$ and the hard-limit activation computes $\hat{y} = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$, where $f$ steps from output 0 to output 1 at zero net input.

From the activation function, we can infer that the output is 1 whenever $\sum_{i=1}^{n} w_i x_i + b$, i.e. $W^{T}X + b$ (the inner product between the weight vector and the input vector, plus the bias), is greater than or equal to 0, where the weight vector is $W = [w_1\; w_2\; \cdots\; w_n]^{T}$ and the input vector is $X = [x_1\; x_2\; \cdots\; x_n]^{T}$. The equation $\sum_{i=1}^{n} w_i x_i + b = 0$ is equivalent to a hyperplane boundary.
Geometrical Significance (Hard-limit activation function)

• 2 inputs ($x_1$, $x_2$) → the boundary is a line ($w_1 x_1 + w_2 x_2 + b = 0$) with $(w_1, w_2)$ as the normal (perpendicular) vector of the line.
• 3 inputs ($x_1$, $x_2$, $x_3$) → the boundary is a plane ($w_1 x_1 + w_2 x_2 + w_3 x_3 + b = 0$) with $(w_1, w_2, w_3)$ as the normal vector of the plane.
• More than 3 inputs ($x_1, x_2, \ldots, x_n$) → the boundary is a hyperplane ($w_1 x_1 + w_2 x_2 + \cdots + w_n x_n + b = 0$) with $(w_1, w_2, \ldots, w_n)$ as the normal vector of the hyperplane.

In each case the weight vector is perpendicular to the boundary, which separates Class 1 from Class 2 (two-class, single-neuron classification).
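A small illustration of the boundary test: the sign of $W^{T}X + b$ decides the class. The weight vector, bias, and sample points below are made up for the example.

```python
import numpy as np

# Illustrative boundary: w1*x1 + w2*x2 + b = 0 with w = (1, 1), b = -1,
# i.e. the line x1 + x2 = 1; w points toward the Class 1 side.
w = np.array([1.0, 1.0])
b = -1.0

def classify(x):
    """Class 1 if the point lies on or above the hyperplane, else Class 2."""
    return 1 if np.dot(w, x) + b >= 0 else 2

for point in [(0, 0), (1, 1), (0.5, 0.5), (2, 0)]:
    print(point, "-> Class", classify(np.array(point)))
```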
McCulloch Pitts Neuron

McCulloch Pitts Neuron (1943)
• Mathematical model of the brain.
• Depends upon a threshold ($\theta$) applied through a hard-limit activation function:

$$y = f(y_{in}), \qquad f(y_{in}) = \begin{cases} 1 & \text{if } y_{in} \ge \theta \\ 0 & \text{otherwise} \end{cases}, \qquad y_{in} = \sum_i w_i x_i$$

OR Gate: with $w_1 = 1$, $w_2 = 1$ and $\theta = 1$, $y_{in} = w_1 x_1 + w_2 x_2 = x_1 + x_2$ and $f(y_{in}) = 1$ if $y_{in} \ge 1$, else 0.

x1   x2   Target output (y)
0    0    0
0    1    1
1    0    1
1    1    1
McCulloch Pitts Neuron

AND Gate: with $w_1 = 1$, $w_2 = 1$ and $\theta = 2$, $y_{in} = w_1 x_1 + w_2 x_2 = x_1 + x_2$ and $f(y_{in}) = 1$ if $y_{in} \ge 2$, else 0.

x1   x2   Target output (y)
0    0    0
0    1    0
1    0    0
1    1    1

NOT Gate: with $w_1 = 1$, $y_{in} = w_1 x = x$ and the inverted activation $f(y_{in}) = 1$ if $y_{in} < 1$, else 0.

x    Target output (y)
0    1
1    0
McCulloch Pitts Neuron (XOR implementation)

A single McCulloch–Pitts neuron cannot compute XOR, but each of its two product terms can be computed by one neuron.

First neuron, $y_1 = \overline{x_1}\,x_2$: with $w_1 = 1$, $w_2 = -1$ and $\theta = -1$, $y_{in} = w_1 x_1 + w_2 x_2 = x_1 - x_2$ and $f(y_{in}) = 1$ if $y_{in} \le -1$, else 0.

x1   x2   Target output ($y_1 = \overline{x_1}\,x_2$)
0    0    0
0    1    1
1    0    0
1    1    0

Second neuron, $y_2 = x_1\,\overline{x_2}$: with $w_1 = -1$, $w_2 = 1$ and $\theta = -1$, $y_{in} = w_1 x_1 + w_2 x_2 = x_2 - x_1$ and $f(y_{in}) = 1$ if $y_{in} \le -1$, else 0.

x1   x2   Target output ($y_2 = x_1\,\overline{x_2}$)
0    0    0
0    1    0
1    0    1
1    1    0
McCulloch Pitts Neuron (XOR implementation – 3 Neurons)

XOR combines the two product neurons through an OR neuron: $y = \overline{x_1}\,x_2 + x_1\,\overline{x_2}$.

x1   x2   $y_1 = \overline{x_1}\,x_2$   $y_2 = x_1\,\overline{x_2}$   $y = y_1 + y_2$
0    0    0                             0                             0
0    1    1                             0                             1
1    0    0                             1                             1
1    1    0                             0                             0

Hidden layer: $y_{in1} = w_{11}x_1 + w_{21}x_2 = x_1 - x_2$ with $w_{11} = 1$, $w_{21} = -1$, and $y_{in2} = w_{12}x_1 + w_{22}x_2 = x_2 - x_1$ with $w_{12} = -1$, $w_{22} = 1$ (both fire when $y_{in} \le -1$). Output layer: $w_{11}^{(2)} = 1$, $w_{21}^{(2)} = 1$, so $y = f(y_{in}) = f(y_1 + y_2)$ fires as an OR of the two hidden outputs.
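A sketch of the three-neuron XOR network in Python, wired with the weights and thresholds above; the fire_below flag is our own device for the hidden units that fire when $y_{in} \le \theta$.

```python
def mp_neuron(inputs, weights, theta, fire_below=False):
    """McCulloch-Pitts unit. Fires (returns 1) when the net input reaches
    the threshold: y_in >= theta, or y_in <= theta when fire_below=True."""
    y_in = sum(w * x for w, x in zip(weights, inputs))
    fired = (y_in <= theta) if fire_below else (y_in >= theta)
    return 1 if fired else 0

def xor(x1, x2):
    # Hidden neurons compute the two product terms (fire when y_in <= -1)
    y1 = mp_neuron((x1, x2), (1, -1), theta=-1, fire_below=True)  # NOT x1 AND x2
    y2 = mp_neuron((x1, x2), (-1, 1), theta=-1, fire_below=True)  # x1 AND NOT x2
    # Output neuron ORs the hidden outputs (fires when y1 + y2 >= 1)
    return mp_neuron((y1, y2), (1, 1), theta=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor(a, b)}")
```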
Hebbian Learning Rule

• Donald Hebb (psychologist) – The Organization of Behavior (1949)
• Hebb's Postulate – "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

Mathematically,
$$W_{new} = W_{old} + x_i\,y$$
where $x_i$ is the $i$-th input and $y$ is the output. Inputs and outputs are bipolar ($-1$ or $+1$).

Limitation – can classify linearly separable patterns only.
Hebbian Learning Rule

$$W_{new} = W_{old} + x_i\,y \qquad \text{(bipolar inputs and outputs, } -1 \text{ or } +1\text{)}$$

AND gate implementation. Per-weight updates, with the bias treated as the 0th weight ($x_0 = 1$, $w_0 = b$):

$w_1(new) = w_1(old) + x_1 y$
$w_2(new) = w_2(old) + x_2 y$
$w_0(new) = w_0(old) + x_0 y$

$$y_{in} = w_0 x_0 + w_1 x_1 + w_2 x_2 = b + w_1 x_1 + w_2 x_2$$

Weights and bias initialized to zero; one epoch = four iterations:

x1   x2   Target (y)   Δw1   Δw2   Δb   w1   w2   w0(b)
(initialized)                           0    0    0
-1   -1   -1            1     1    -1   1    1   -1
-1    1   -1            1    -1    -1   2    0   -2
 1   -1   -1           -1     1    -1   1    1   -3
 1    1    1            1     1     1   2    2   -2
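The epoch above can be reproduced with a few lines of Python; the loop mirrors the table row by row (NumPy assumed, names our own):

```python
import numpy as np

def hebb_train(samples):
    """One pass of the Hebb rule w <- w + x*y over bipolar (x1, x2, y) samples.
    The bias is handled as w0 with x0 = 1."""
    w = np.zeros(3)  # [w0 (bias), w1, w2], initialized to 0
    for x1, x2, y in samples:
        x = np.array([1, x1, x2])  # x0 = 1 for the bias
        w = w + x * y              # Hebb update
        print(f"x=({x1},{x2}) y={y} -> w1={w[1]:.0f}, w2={w[2]:.0f}, b={w[0]:.0f}")
    return w

# Bipolar AND gate targets, as in the table above
and_samples = [(-1, -1, -1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
w = hebb_train(and_samples)  # ends at w1 = 2, w2 = 2, b = -2
```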
Hebbian Learning Rule

OR gate implementation. Weights and bias initialized to zero; one epoch = four iterations:

x1   x2   Target (y)   Δw1   Δw2   Δb   w1   w2   w0(b)
(initialized)                           0    0    0
-1   -1   -1            1     1    -1   1    1   -1
-1    1    1           -1     1     1   0    2    0
 1   -1    1            1    -1     1   1    1    1
 1    1    1            1     1     1   2    2    2

Check with the learned weights $w_1 = 2$, $w_2 = 2$, $w_0 = 2$ and

$$\hat{y} = f(y_{in}), \qquad f(y_{in}) = \begin{cases} 1, & y_{in} \ge 0 \\ 0, & y_{in} < 0 \end{cases}$$

For input $(-1, -1)$: $y_{in} = 2 + 2(-1) + 2(-1) = -2$, so $f(-2) = 0$. ✓
Further checks with the same learned weights:

For input $(1, 1)$: $y_{in} = 2 + 2(1) + 2(1) = 6$, so $f(6) = 1$. ✓
For input $(-1, 1)$: $y_{in} = 2 + 2(-1) + 2(1) = 2$, so $f(2) = 1$. ✓

Q – Are these the optimum set of weights?
Perceptron Learning Rule

• Frank Rosenblatt (1957)
• Key contribution – introduction of a learning rule for training perceptron networks to solve pattern recognition problems.
• The perceptron could learn even when initialized with random values for its weights and biases.
• Limitation – can classify only linearly separable problems.
• The limitations were publicized in the book "Perceptrons" (1969) by Marvin Minsky and Seymour Papert.

Mathematically,
$$W_{new} = W_{old} + (y - \hat{y})\,x_i$$
where $x_i$ is the $i$-th input, $\hat{y}$ is the actual (predicted) output, and $y$ is the target output.
Perceptron Learning Rule

$$W_{new} = W_{old} + (y - \hat{y})\,x_i$$

AND gate implementation. Per-weight updates, with the bias as the 0th weight ($x_0 = 1$, $w_0 = b$):

$w_1(new) = w_1(old) + \Delta w_1 = w_1(old) + (y - \hat{y})\,x_1$
$w_2(new) = w_2(old) + \Delta w_2 = w_2(old) + (y - \hat{y})\,x_2$
$w_0(new) = w_0(old) + (y - \hat{y})\,x_0$

$$y_{in} = b + w_1 x_1 + w_2 x_2, \qquad \hat{y} = f(y_{in}) = \begin{cases} 1, & y_{in} \ge 0 \\ 0, & y_{in} < 0 \end{cases}$$

Weights and bias initialized to zero; one epoch = four iterations:

x1   x2   Target (y)   Actual (ŷ)   (y - ŷ)   Δw1   Δw2   Δb   w1   w2   w0(b)
(initialized)                                                  0    0    0
0    0    0            1            -1         0     0    -1   0    0   -1
0    1    0            0             0         0     0     0   0    0   -1
1    0    0            0             0         0     0     0   0    0   -1
1    1    1            0             1         1     1     1   1    1    0

Weights after 1 epoch: $w_1 = 1$, $w_2 = 1$, $b = 0$.
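A Python sketch reproducing the one-epoch AND run above (the helper names are our own):

```python
import numpy as np

def f(y_in):
    """Hard-limit activation: 1 if y_in >= 0, else 0."""
    return 1 if y_in >= 0 else 0

def perceptron_epoch(samples, w):
    """One epoch of w <- w + (y - y_hat) * x, with w = [b, w1, w2], x0 = 1."""
    for x1, x2, y in samples:
        x = np.array([1, x1, x2])
        y_hat = f(np.dot(w, x))
        w = w + (y - y_hat) * x
    return w

and_samples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = perceptron_epoch(and_samples, np.zeros(3))
print(w)  # -> [0. 1. 1.], i.e. b = 0, w1 = 1, w2 = 1 after one epoch
```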
Perceptron Learning Rule

OR gate implementation. Weights and bias initialized to zero; same update rules and activation as above; one epoch = four iterations:

x1   x2   Target (y)   Actual (ŷ)   (y - ŷ)   Δw1   Δw2   Δb   w1   w2   w0(b)
(initialized)                                                  0    0    0
0    0    0            1            -1         0     0    -1   0    0   -1
0    1    1            0             1         0     1     1   0    1    0
1    0    1            1             0         0     0     0   0    1    0
1    1    1            1             0         0     0     0   0    1    0

Check with the weights after one epoch ($w_1 = 0$, $w_2 = 1$, $w_0 = 0$): for input $(0, 0)$, $y_{in} = 0$, so $f(0) = 1$. ✗ Wrong output (the target is 0) – the weights have not converged yet, and more iterations are needed.
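Continuing training past the first epoch resolves the wrong output. A sketch that repeats epochs until no pattern is misclassified; with this zero initialization and sample order the run converges after a few epochs.

```python
import numpy as np

def f(y_in):
    """Hard-limit activation: 1 if y_in >= 0, else 0."""
    return 1 if y_in >= 0 else 0

or_samples = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
w = np.zeros(3)  # [b, w1, w2]

for epoch in range(1, 20):
    errors = 0
    for x1, x2, y in or_samples:
        x = np.array([1, x1, x2])
        y_hat = f(np.dot(w, x))
        if y != y_hat:
            w = w + (y - y_hat) * x
            errors += 1
    print(f"epoch {epoch}: w = {w}, errors = {errors}")
    if errors == 0:  # converged: every pattern classified correctly
        break

# Ends at w = [-1, 1, 1]: the boundary x1 + x2 - 1 >= 0 implements OR.
```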