NEURAL NETWORKS
By
P. MADHU SUDHAKAR, M.Tech., (Ph.D.)
AUDISANKARA INSTITUTE OF
TECHNOLOGY
WHAT ARE NEURAL NETWORKS?
•Artificial Neural Network (ANN): an information-processing paradigm
inspired by the human nervous system.
•Composed of a large number of highly interconnected processing elements (neurons).
•ANNs, like people, learn by example.
•An ANN is configured for a specific application, such as pattern recognition or data
classification, through a learning process.
•Learning in biological systems involves adjustments to the synaptic connections between neurons.
INTRODUCTION TO NEURAL NETWORKS
• An Artificial Neural Network (ANN), popularly known as a neural network, is a
computational model based on the structure and functions of biological neural
networks.
• In Computer Science terms, it is like an artificial human nervous system for
receiving, processing, and transmitting information.
Basically, there are 3 different layers in a neural network:
• Input Layer (all the inputs are fed into the model through this layer)
• Hidden Layers (there can be more than one hidden layer, used for processing
the inputs received from the input layer)
• Output Layer (the data, after processing, is made available at the output
layer)
Why use neural networks ?
• Knowledge acquisition under noise and
uncertainty.
• Flexible knowledge representation.
• Efficient knowledge processing.
• Fault tolerance.
• They have learning capability.
HOW DOES THE HUMAN BRAIN LEARN?
 The brain is made up of a large number of neurons.
 Each neuron connects to thousands of other neurons and communicates
by electrochemical signals.
 Incoming signals are received via SYNAPSES, located at the
ends of DENDRITES.
 A neuron sums up its inputs, and if a threshold value is reached
it generates a voltage and outputs a signal along the AXON.
FIGURE SHOWING NEURON
SYNAPSE
THE ARTIFICIAL NEURON:-
• An electronically modeled biological neuron.
• Has many inputs and one output.
• Has 2 modes: training mode and using mode.
• Training mode − the neuron is trained to fire (or
not) for particular input patterns.
• Using mode − when a taught input pattern is
detected at the input, its associated output becomes
the current output.
• If the input pattern does not belong to the taught list,
the firing rule is used.
Working of a
Biological Neuron
As shown in the above diagram, a
typical neuron consists of the following
four parts, with the help of which we
can explain its working −
Dendrites − Tree-like branches,
responsible for receiving information
from the other neurons the neuron is
connected to. In a sense, they are
the ears of the neuron.
Soma − The cell body of the neuron,
responsible for processing the
information received from the
dendrites.
Axon − Like a cable through which
the neuron sends information.
Synapses − The connections between
the axon and the dendrites of other
neurons.
Model of Artificial
Neural Network
For the above general model
of an artificial neural network,
the net input can be
calculated as follows −
yin = x1.w1 + x2.w2 + x3.w3 + … + xm.wm
i.e., net input yin = ∑i=1..m xi.wi
The output can be calculated
by applying the activation
function over the net input.
Y = F(yin)
i.e., Output = function (net input
calculated)
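The net-input and activation steps above can be sketched in Python (a minimal illustration; the input values, weights, and threshold are made-up examples):

```python
# Net input: y_in = sum of x_i * w_i; output: Y = F(y_in).
def net_input(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

def step(y_in, threshold=0.0):
    # A simple binary activation function F.
    return 1 if y_in > threshold else 0

xs = [0.5, -1.0, 2.0]   # example inputs x1..x3 (made up)
ws = [0.4, 0.3, 0.6]    # example weights w1..w3 (made up)
y_in = net_input(xs, ws)    # 0.5*0.4 + (-1.0)*0.3 + 2.0*0.6 = 1.1
print(y_in, step(y_in))
```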
Artificial Neural Network - Building
Blocks
• Processing of ANN depends upon the
following three building blocks −
• Network Topology
• Adjustments of Weights or Learning
• Activation Functions
Network Topology
Feedforward Network:
It is a non-recurrent network having processing
units/nodes arranged in layers, where every node in a
layer is connected to the nodes of the previous layer.
These connections carry different weights. There is
no feedback loop, which means the signal can flow in
only one direction, from input to output. It may be
divided into the following two types.
Single layer feedforward
network
A feedforward ANN having
only one weighted layer. In
other words, the input layer
is fully connected to the
output layer.
Multilayer
feedforward network
A feedforward ANN having
more than one weighted
layer. As this network has
one or more layers between
the input and the output
layer, these are called
hidden layers.
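A minimal forward-pass sketch for such a multilayer feedforward network, assuming step activations and made-up weights (two hidden units feeding one output unit):

```python
# Forward pass through a feedforward network with one hidden layer.
# Weights and inputs are illustrative, not trained values.
def dot(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

def step(v, threshold=0.0):
    return 1 if v > threshold else 0

def forward(xs, hidden_weights, output_weights):
    # Each hidden unit sees the full input vector (fully connected).
    hidden = [step(dot(xs, ws)) for ws in hidden_weights]
    # The output unit sees only the hidden activations.
    return step(dot(hidden, output_weights))

hidden_weights = [[1.0, 1.0], [-1.0, -1.0]]  # two hidden units (made up)
output_weights = [1.0, 1.0]                  # one output unit (made up)
print(forward([1, 0], hidden_weights, output_weights))
```

The signal flows only forward, input to hidden to output, with no feedback loop, matching the description above.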
LEARNING METHODS IN ANN
• Learning is an application of artificial intelligence (AI) that
provides systems the ability to automatically learn and
improve from experience without being explicitly
programmed.
• Learning in ANN can be classified into three categories namely
supervised learning, unsupervised learning, and
reinforcement learning.
Supervised Learning
•As the name suggests, this type
of learning is done under the
supervision of a teacher.
•This learning process is
dependent.
•During the training of ANN under
supervised learning, the input
vector is presented to the
network, which will give an
output vector.
•This output vector is compared
with the desired output vector. An
error signal is generated if there
is a difference between the actual
output and the desired output
vector.
•On the basis of this error signal,
the weights are adjusted until the
actual output matches the
desired output.
PERCEPTRON
•Developed by Frank Rosenblatt
using the McCulloch and Pitts model,
the perceptron is the basic operational
unit of artificial neural networks. It
employs a supervised learning rule and
is able to classify data into two
classes.
•Operational characteristics of the
perceptron: it consists of a single
neuron with an arbitrary number of
inputs along with adjustable weights,
but the output of the neuron is 1 or
0 depending upon the threshold. It
also has a bias whose weight is
always 1. The following figure gives a
schematic representation of the
perceptron.
PERCEPTRON
• The perceptron thus has the following three basic elements −
Links − A set of connection links that carry weights, including a bias that
always has weight 1.
Adder − Adds the inputs after they are multiplied by their respective weights.
Activation function − Limits the output of the neuron. The most basic activation function
is a Heaviside step function, which has two possible outputs: it returns 1
if the input is positive, and 0 for any negative input.
Training Algorithm
Training Algorithm for Single Output Unit
Step 1 − Initialize the following to start the training: weights, bias, and learning rate α.
For easy calculation and simplicity, weights and bias may be set equal to 0 and the
learning rate set equal to 1.
• Step 2 − Continue steps 3-8 while the stopping condition is not true.
• Step 3 − Continue steps 4-6 for every training vector x.
• Step 4 − Activate each input unit by passing on its input signal: xi = si.
• Step 5 − Now obtain the net input with the following relation: yin = b + ∑i xi.wi.
Here 'b' is the bias and 'n' is the total number of input neurons.
• Step 6 − Apply the activation function over the net input to obtain the final output y.
• Step 7 − Adjust the weight and bias as follows:
• Case 1 − if y ≠ t then, wi(new) = wi(old) + αtxi, b(new) = b(old) + αt
• Case 2 − if y = t then, wi(new) = wi(old), b(new) = b(old)
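The steps above can be sketched in Python. One assumption is made explicit here: bipolar targets and outputs (±1 rather than 1/0) are used so that the update wi(new) = wi(old) + αtxi corrects errors for both classes; the AND-gate data set and learning rate are illustrative choices.

```python
def train_perceptron(samples, alpha=1.0, epochs=25):
    """Single-output perceptron training following the steps above."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0          # Step 1: weights and bias start at 0
    for _ in range(epochs):        # Step 2: loop until stopping condition
        changed = False
        for x, t in samples:       # Step 3: for every training vector x
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))   # Step 5
            y = 1 if y_in > 0 else -1      # Step 6: bipolar activation
            if y != t:                     # Step 7, Case 1: adjust
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:            # stopping condition: a full clean pass
            break
    return w, b

# Bipolar AND gate as an illustrative training set.
and_data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train_perceptron(and_data)
```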
Adaptive Linear Neuron (ADALINE)
Architecture
Adaptive Linear Neuron (ADALINE)
• Adaline, which stands for Adaptive Linear Neuron, is a network
having a single linear unit. It was developed by Widrow and
Hoff in 1960. Some important points about Adaline are as
follows:
• It uses a bipolar activation function.
• It uses the delta rule for training, to minimize the Mean Squared
Error (MSE) between the actual output and the desired/target
output.
• The weights and the bias are adjustable.
• The basic structure of Adaline is similar to the perceptron, with
an extra feedback loop with the help of which the actual
output is compared with the desired/target output. After
comparison, on the basis of the training algorithm, the weights
and bias are updated.
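The delta rule described above can be sketched as follows: the gradient step is taken on the error between the target and the linear output yin, before any thresholding. The data set and learning rate are illustrative choices.

```python
def train_adaline(samples, alpha=0.1, epochs=50):
    """Adaline training via the delta (Widrow-Hoff) rule, minimizing MSE."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            # Linear output: no activation is applied during training.
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))
            err = t - y_in
            # Delta rule: move weights along the negative MSE gradient.
            w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
            b += alpha * err
    return w, b

def predict(x, w, b):
    # The bipolar activation is applied only when producing the output.
    return 1 if b + sum(xi * wi for xi, wi in zip(x, w)) > 0 else -1
```

Unlike the perceptron rule, every sample nudges the weights (by an amount proportional to the residual error), not just the misclassified ones.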
Multiple Adaptive Linear
Neuron(Madaline)
•Madaline, which stands for Multiple
Adaptive Linear Neuron, is a network
consisting of many Adalines in
parallel, with a single output unit.
•The architecture of Madaline consists
of "n" neurons in the input layer, "m"
neurons in the Adaline layer, and 1
neuron in the Madaline layer. The
Adaline layer can be considered the
hidden layer, as it lies between the
input layer and the output layer,
i.e. the Madaline layer.
•Some important points about
Madaline are as follows −
•It is just like a multilayer perceptron,
where the Adalines act as hidden units
between the input and the Madaline
layer.
•The weights and the bias between the
input and Adaline layers, as we see
in the Adaline architecture, are
adjustable.
•The Adaline and Madaline layers have
fixed weights and a bias of 1.
•Training can be done with the help of
the delta rule.
Unsupervised Learning
•As the name suggests, this type of
learning is done without the
supervision of a teacher.
•This learning process is
independent.
•During the training of ANN under
unsupervised learning, the input
vectors of similar type are combined
to form clusters.
•When a new input pattern is
applied, then the neural network
gives an output response indicating
the class to which the input pattern
belongs.
•There is no feedback from the
environment as to what should be
the desired output and if it is
correct or incorrect.
•Hence, in this type of learning, the
network itself must discover the
patterns and features in the
input data, and the relation
between the input data and the output.
Reinforcement Learning
•As the name suggests, this type
of learning is used to reinforce or
strengthen the network over
some critic information.
•This learning process is similar to
supervised learning; however, we
may have much less information.
•During the training of network
under reinforcement learning, the
network receives some feedback
from the environment. This
makes it somewhat similar to
supervised learning.
•However, the feedback obtained
here is evaluative, not instructive,
which means there is no teacher
as in supervised learning.
•After receiving the feedback, the
network adjusts its weights to
obtain better critic information in
future.
Neural Network Learning Rules
• We know that, during ANN learning, to change the input/output behavior,
we need to adjust the weights. Hence, a method is required with the help
of which the weights can be modified. These methods are called Learning
rules, which are simply algorithms or equations.
Following are some learning rules for the neural network −
• Hebbian Learning Rule
• Perceptron Learning Rule
• Delta Learning Rule (Widrow-Hoff Rule)
• Competitive Learning Rule (Winner-takes-all)
Hebbian Learning Rule
• This rule, one of the oldest and simplest, was introduced by Donald Hebb in his
book The Organization of Behavior in 1949. It is a kind of feed-forward,
unsupervised learning.
• Basic Concept
• This rule is based on a proposal given by Hebb, who wrote − “When an axon of cell
A is near enough to excite a cell B and repeatedly or persistently takes part in firing
it, some growth process or metabolic change takes place in one or both cells such
that A’s efficiency, as one of the cells firing B, is increased.”
• From the above postulate, we can conclude that the connections between two
neurons might be strengthened if the neurons fire at the same time and might
weaken if they fire at different times.
• Mathematical Formulation
• According to the Hebbian learning rule, the following is the formula to increase
the weight of a connection at every time step:
• Δwji(t) = α.xi(t).yj(t)
•
• Here, Δwji(t) = the increment by which the weight of the connection increases at
time step t
• α = the positive and constant learning rate
• xi(t) = the input value from the pre-synaptic neuron at time step t
• yj(t) = the output of the post-synaptic neuron at the same time step
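A minimal sketch of this update, with made-up bipolar activity values; the weight wji grows when the input xi and the output yj are active together:

```python
# Hebbian update: delta_w_ji = alpha * x_i * y_j. The weight strengthens
# when pre- and post-synaptic activities have the same sign.
def hebb_update(w, x, y, alpha=0.1):
    return [wi + alpha * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0]
# Present a bipolar input pattern together with a firing output (y = 1).
w = hebb_update(w, [1, -1], 1)    # first co-activation
w = hebb_update(w, [1, -1], 1)    # repeated co-activation strengthens it
print(w)
```

Repeated presentations keep strengthening the same connections, mirroring Hebb's "cells that fire together" postulate quoted above.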
Perceptron Learning Rule
• This rule is an error-correcting, supervised learning algorithm for single-layer
feedforward networks with a threshold activation function, introduced by Rosenblatt.
• Basic Concept: Being supervised in nature, to calculate the error there is
a comparison between the desired/target output and the actual
output. If any difference is found, then a change must be made to
the connection weights.
• Mathematical Formulation: To explain its mathematical formulation,
suppose we have a finite number 'N' of input vectors, x(n), along with their
desired/target output vectors t(n), where n = 1 to N.
•
Now the output 'y' can be calculated, as explained earlier, on the basis of the net
input, with the activation function applied over that net input:
y = f(yin) = 1 if yin > θ, else 0
• Where θ is the threshold.
•
The updating of weights can be done in the following two cases −
• Case I − when t ≠ y, then w(new) = w(old) + αtx
• Case II − when t = y, then the weights are unchanged
Model of Artificial Neural Network
• Artificial neural networks can be viewed as weighted directed graphs
in which artificial neurons are nodes and directed edges with weights
are connections between neuron outputs and neuron inputs.
• The Artificial Neural Network receives input from the external world in
the form of pattern and image in vector form. These inputs are
mathematically designated by the notation x(n) for n number of inputs.
• Each input is multiplied by its corresponding weights. Weights are the
information used by the neural network to solve a problem. Typically
weight represents the strength of the interconnection between
neurons inside the neural network.
• The weighted inputs are all summed up inside the computing unit (artificial
neuron). In case the weighted sum is zero, a bias is added to make the
output non-zero, or to scale up the system response. The bias has weight
and input always equal to '1'.
Model of Artificial Neural Network
• The sum corresponds to any numerical value ranging from 0 to infinity.
• In order to limit the response so it arrives at the desired value, a threshold
value is set up. For this, the sum is passed through an activation function.
• The activation function is the set of transfer functions used to get the desired
output. There are linear as well as non-linear activation functions.
• Some of the commonly used activation functions are binary, sigmoidal
(linear), and tan hyperbolic sigmoidal functions (non-linear).
• Binary − The output has only two values, 0 and 1. For this, a
threshold value is set up. If the net weighted input is greater than 1, the
output is assumed to be 1, otherwise zero.
• Sigmoidal Hyperbolic − This function has an 'S'-shaped curve. Here a
sigmoidal function is used to approximate the output from the net input. The
function is defined as f(x) = 1/(1 + exp(-σx)), where σ is the steepness
parameter.
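The two activation functions just described can be sketched as follows (the thresholds and the steepness parameter σ are illustrative values):

```python
import math

def binary(net, threshold=1.0):
    # Binary: 1 if the net weighted input exceeds the threshold, else 0.
    return 1 if net > threshold else 0

def sigmoid(x, sigma=1.0):
    # Logistic function f(x) = 1 / (1 + exp(-sigma * x)): an S-shaped
    # curve; larger sigma makes the transition around 0 steeper.
    return 1.0 / (1.0 + math.exp(-sigma * x))

print(binary(1.5), sigmoid(0.0))
```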
Architecture
• Input layer− It contains those
units (artificial neurons) which
receive input from the outside
world, which the network will
learn from, recognize, or
otherwise process.
• Output layer− It contains
units that respond with the
information the network has
learned about the task.
• Hidden layer− These units
sit between the input and
output layers. The job of the
hidden layer is to transform
the input into something that
the output units can use.
• Most neural networks are fully
connected, which means each
hidden neuron is fully connected
to every neuron in its previous
(input) layer and to the next
(output) layer.
Learning in Biology (Human)
• Learning = learning by adaptation
• The young animal learns that the green fruits are sour, while the
yellowish/reddish ones are sweet. The learning happens by
adapting the fruit picking behaviour.
• At the neural level the learning happens by changing the synaptic
strengths, eliminating some synapses, and building new ones.
• The objective of adapting the responses on the basis of the
information received from the environment is to achieve a better
state. E.g., the animal likes to eat many energy rich, juicy fruits that
make its stomach full, and makes it feel happy.
• In other words, the objective of learning in biological organisms
is to optimise the amount of available resources or happiness, or
in general to achieve a state closer to optimal.
Learning in Artificial Neural Networks
Types of Learning in Neural Network
• Supervised Learning − In supervised learning, the training data is input to
the network and the desired output is known; weights are adjusted until the
output yields the desired value.
• Unsupervised Learning − The input data is used to train the network, whose
desired output is unknown. The network classifies the input data and adjusts
the weights by feature extraction from the input data.
• Reinforcement Learning − Here the value of the output is unknown, but
the network provides feedback on whether the output is right or wrong.
It is semi-supervised learning.
• Offline Learning − The adjustment of the weight vector and threshold is
done only after the whole training set has been presented to the network. It is
also called batch learning.
• Online Learning − The adjustment of the weights and threshold is done
after presenting each training sample to the network.
Characteristics of ANN
• Using ANNs requires an understanding of their characteristics.
• Choice of model: This depends on the data representation and the
application. Overly complex models slow learning.
• Learning algorithm: Numerous trade-offs exist between learning
algorithms. Almost any algorithm will work well with the correct
hyperparameters for training on a particular data set. However,
selecting and tuning an algorithm for training on unseen data
requires significant experimentation.
• Robustness: If the model, cost function, and learning algorithm
are selected appropriately, the resulting ANN can be
robust.
Uses of ANN
• ANN capabilities fall within the following broad categories:
• Function approximation, or regression analysis, including time series
prediction, fitness approximation, and modeling.
• Classification, including pattern and sequence recognition, novelty detection, and
sequential decision making.
• Data processing, including filtering, clustering, blind source separation, and compression.
• Robotics, including directing manipulators and prostheses.
• Control, including computer numerical control.
• Classification − A neural network can be trained to classify a given pattern or data set into
a predefined class. It uses feedforward networks.
• Prediction − A neural network can be trained to produce outputs that are expected from
a given input, e.g. stock market prediction.
• Clustering − A neural network can be used to identify a special feature of the data and
classify items into different categories without any prior knowledge of the data.
Neural networks vs. conventional
computers
COMPUTERS
• Algorithmic approach
• They are necessarily
programmed
• Work on predefined
set of instructions
• Operations are
predictable
ANN
• Learning approach
• Not programmed for
specific tasks
• Used in decision making
• Operation is
unpredictable
Output Layer
• The output layer of the neural network collects and transmits information
in the way it has been designed to.
• The number of neurons in the output layer should be directly related to the type of
work the neural network performs.
• To determine the number of neurons in the output layer, first consider the
intended use of the neural network.
Figure depicting the Activation function for ANN
Summation function = x1.wi1 + x2.wi2 + … + xn.win
How is the Brain Different from Computers?
BRAIN
• Biological neurons or nerve cells.
• 200 billion neurons, 32 trillion interconnections.
• Neuron size: 10⁻⁶ m.
• Energy consumption: 10⁻⁶ joules per operation per second.
• Learning capability.
COMPUTERS
• Silicon transistors.
• 1 billion bytes of RAM, trillions of bytes on disk.
• Single transistor size: 10⁻⁹ m.
• Energy consumption: 10⁻¹⁶ joules per operation per second.
• Programming capability.
Comparing ANN with BNN
As ANN is a concept borrowed from BNN, there are a lot of similarities, though
there are differences too.
• The similarities are as follows (Biological Neural Network → Artificial Neural Network):
• Soma → Node
• Dendrites → Input
• Synapse → Weights or interconnections
• Axon → Output
Criteria: BNN vs. ANN
• Processing: BNN is massively parallel, slow but superior to ANN; ANN is
massively parallel, fast but inferior to BNN.
• Size: BNN has 10¹¹ neurons and 10¹⁵ interconnections; ANN has 10² to 10⁴
nodes (mainly depending on the type of application and the network designer).
• Learning: BNN can tolerate ambiguity; ANN requires very precise, structured
and formatted data.
• Fault tolerance: BNN performance degrades with even partial damage; ANN is
capable of robust performance, hence has the potential to be fault tolerant.
• Storage capacity: BNN stores information in the synapses; ANN stores
information in continuous memory locations.
Analogy of ANN with BNN
• The dendrites in a biological
neural network are analogous
to the weighted inputs, based
on their synaptic
interconnections, in an artificial
neural network.
• The cell body is analogous to
the artificial neuron unit in an
artificial neural network,
which also comprises
summation and threshold
units.
• The axon carries the output,
analogous to the output
unit in an artificial
neural network. So, ANNs
are modelled on the
working of basic
biological neurons.
Applications
• Because of their ability to reproduce and model nonlinear processes, ANNs
have found many applications in a wide range of disciplines.
• Application areas include system identification and control (vehicle control,
trajectory prediction, process control, natural resources management),
quantum chemistry, game-playing and decision making
(backgammon, chess, poker), pattern recognition (radar systems, face
identification, signal classification, object recognition and more), sequence
recognition (gesture, speech, handwritten text recognition), medical diagnosis,
finance (e.g. automated trading systems), data mining, visualization, machine
translation, social network filtering, and e-mail spam filtering.
• ANNs have been used to diagnose cancers, including lung cancer, prostate cancer
and colorectal cancer, and to distinguish highly invasive cancer cell lines from less
invasive lines using only cell shape information.
• ANNs have been used for building black-box models in the geosciences: hydrology,
ocean modeling and coastal engineering, and geomorphology are just a few
examples of this kind.
School management system project Report.pdfKamal Acharya
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdfKamal Acharya
 
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxHOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxSCMS School of Architecture
 
Digital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptxDigital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptxpritamlangde
 
Computer Networks Basics of Network Devices
Computer Networks  Basics of Network DevicesComputer Networks  Basics of Network Devices
Computer Networks Basics of Network DevicesChandrakantDivate1
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptDineshKumar4165
 
Learn the concepts of Thermodynamics on Magic Marks
Learn the concepts of Thermodynamics on Magic MarksLearn the concepts of Thermodynamics on Magic Marks
Learn the concepts of Thermodynamics on Magic MarksMagic Marks
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptxJIT KUMAR GUPTA
 
PE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiesPE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiessarkmank1
 
Moment Distribution Method For Btech Civil
Moment Distribution Method For Btech CivilMoment Distribution Method For Btech Civil
Moment Distribution Method For Btech CivilVinayVitekari
 
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptxOrlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptxMuhammadAsimMuhammad6
 
Jaipur ❤CALL GIRL 0000000000❤CALL GIRLS IN Jaipur ESCORT SERVICE❤CALL GIRL IN...
Jaipur ❤CALL GIRL 0000000000❤CALL GIRLS IN Jaipur ESCORT SERVICE❤CALL GIRL IN...Jaipur ❤CALL GIRL 0000000000❤CALL GIRLS IN Jaipur ESCORT SERVICE❤CALL GIRL IN...
Jaipur ❤CALL GIRL 0000000000❤CALL GIRLS IN Jaipur ESCORT SERVICE❤CALL GIRL IN...jabtakhaidam7
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.Kamal Acharya
 

Recently uploaded (20)

Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
 
Introduction to Data Visualization,Matplotlib.pdf
Introduction to Data Visualization,Matplotlib.pdfIntroduction to Data Visualization,Matplotlib.pdf
Introduction to Data Visualization,Matplotlib.pdf
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS Lambda
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
School management system project Report.pdf
School management system project Report.pdfSchool management system project Report.pdf
School management system project Report.pdf
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdf
 
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxHOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
 
Digital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptxDigital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptx
 
Computer Networks Basics of Network Devices
Computer Networks  Basics of Network DevicesComputer Networks  Basics of Network Devices
Computer Networks Basics of Network Devices
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.ppt
 
Learn the concepts of Thermodynamics on Magic Marks
Learn the concepts of Thermodynamics on Magic MarksLearn the concepts of Thermodynamics on Magic Marks
Learn the concepts of Thermodynamics on Magic Marks
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
 
PE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiesPE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and properties
 
Integrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - NeometrixIntegrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - Neometrix
 
Moment Distribution Method For Btech Civil
Moment Distribution Method For Btech CivilMoment Distribution Method For Btech Civil
Moment Distribution Method For Btech Civil
 
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptxOrlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
 
Jaipur ❤CALL GIRL 0000000000❤CALL GIRLS IN Jaipur ESCORT SERVICE❤CALL GIRL IN...
Jaipur ❤CALL GIRL 0000000000❤CALL GIRLS IN Jaipur ESCORT SERVICE❤CALL GIRL IN...Jaipur ❤CALL GIRL 0000000000❤CALL GIRLS IN Jaipur ESCORT SERVICE❤CALL GIRL IN...
Jaipur ❤CALL GIRL 0000000000❤CALL GIRLS IN Jaipur ESCORT SERVICE❤CALL GIRL IN...
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.
 

Artificial neural networks

• 2. WHAT ARE NEURAL NETWORKS?
• An Artificial Neural Network (ANN) is an information processing paradigm inspired by the human nervous system.
• It is composed of a large number of highly interconnected processing elements (neurons).
• ANNs, like people, learn by example.
• An ANN is configured for a specific application, such as pattern recognition or data classification, through learning.
• Learning in biological systems involves adjusting the synaptic connections between neurons.
• 3. INTRODUCTION TO NEURAL NETWORKS
• An Artificial Neural Network (ANN), popularly known as a Neural Network, is a computational model based on the structure and functions of biological neural networks.
• In Computer Science terms, it is like an artificial human nervous system for receiving, processing, and transmitting information.
Basically, there are 3 different layers in a neural network:
• Input Layer (all the inputs are fed into the model through this layer)
• Hidden Layers (there can be more than one hidden layer, used for processing the inputs received from the input layer)
• Output Layer (the data after processing is made available at the output layer)
• 4. Why use neural networks?
• Knowledge acquisition under noise and uncertainty.
• Flexible knowledge representation.
• Efficient knowledge processing.
• Fault tolerance.
• Learning capability.
• 5. HOW DOES THE HUMAN BRAIN LEARN?
 The brain is made up of a large number of neurons.
 Each neuron connects to thousands of other neurons and communicates by electrochemical signals.
 Incoming signals are received via SYNAPSES, located at the ends of DENDRITES.
 A neuron sums up its inputs, and if a threshold value is reached it generates a voltage and an output signal along the AXON.
• 8. THE ARTIFICIAL NEURON
• An electronically modeled biological neuron.
• Has many inputs and one output.
• Has 2 modes: training mode and using mode.
• Training mode: the neuron is trained to fire (or not) for particular input patterns.
• Using mode: when a taught input pattern is detected at the input, its associated output becomes the current output.
• If the input pattern does not belong to the taught list, the firing rule is used.
• 9. Working of a Biological Neuron
As shown in the diagram, a typical neuron consists of the following four parts, with the help of which we can explain its working:
• Dendrites − Tree-like branches, responsible for receiving information from the other neurons the neuron is connected to. In another sense, we can say that they are like the ears of the neuron.
• Soma − The cell body of the neuron, responsible for processing the information received from the dendrites.
• Axon − Just like a cable, through which the neuron sends the information.
• Synapses − The connections between the axon and the dendrites of other neurons.
• 10. Model of Artificial Neural Network
For the above general model of an artificial neural network, the net input can be calculated as follows:
y_in = x1.w1 + x2.w2 + x3.w3 + … + xm.wm
i.e., net input y_in = ∑(i=1 to m) xi.wi
The output can be calculated by applying the activation function over the net input:
Y = F(y_in)
Output = function (net input calculated)
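The net input and output computation above can be sketched in Python; the example inputs, weights, and step-function threshold below are illustrative assumptions, not values from the slides:

```python
def net_input(x, w):
    """Net input y_in = x1*w1 + x2*w2 + ... + xm*wm."""
    return sum(xi * wi for xi, wi in zip(x, w))

def step(y_in, threshold=0.0):
    """A simple activation function F: fires 1 once y_in reaches the threshold."""
    return 1 if y_in >= threshold else 0

x = [1, 0, 1]           # example input vector
w = [0.5, -0.2, 0.8]    # example weights
y_in = net_input(x, w)  # 0.5 + 0.8 = 1.3
y = step(y_in)          # output Y = F(y_in) = 1
```

Any activation function F can be substituted for `step` without changing the net-input computation.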
• 11. Artificial Neural Network − Building Blocks
Processing of an ANN depends upon the following three building blocks:
• Network Topology
• Adjustments of Weights or Learning
• Activation Functions
• 12. Network Topology
Feedforward Network: A non-recurrent network having processing units/nodes arranged in layers, with all the nodes in a layer connected to the nodes of the previous layer. The connections carry different weights. There is no feedback loop, meaning the signal can only flow in one direction, from input to output. It may be divided into the following two types.
• 13. Single layer feedforward network
A feedforward ANN having only one weighted layer. In other words, the input layer is fully connected to the output layer.
• 14. Multilayer feedforward network
A feedforward ANN having more than one weighted layer. As this network has one or more layers between the input and the output layer, these are called hidden layers.
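A forward pass through such a multilayer feedforward network can be sketched as repeated application of one weighted layer; the layer sizes, weight matrices, and sigmoid activation below are illustrative assumptions:

```python
import math

def sigmoid(z):
    """S-shaped activation applied to each node's weighted sum."""
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights):
    """One weighted layer: each node takes the weighted sum of all inputs."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, node_w)))
            for node_w in weights]

x = [0.5, 0.1]                       # input layer
w_hidden = [[0.4, 0.7], [0.2, 0.9]]  # input -> hidden weights (2 hidden nodes)
w_out = [[0.3, 0.6]]                 # hidden -> output weights (1 output node)

h = layer_forward(x, w_hidden)       # hidden layer activations
y = layer_forward(h, w_out)          # output layer
```

Adding more hidden layers is just another call to `layer_forward` with another weight matrix.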
• 15. LEARNING METHODS IN ANN
• Learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
• Learning in ANN can be classified into three categories: supervised learning, unsupervised learning, and reinforcement learning.
  • 16. Model of Artificial Neural Network
• 17. Supervised Learning
• As the name suggests, this type of learning is done under the supervision of a teacher.
• This learning process is dependent.
• During the training of an ANN under supervised learning, the input vector is presented to the network, which produces an output vector.
• This output vector is compared with the desired output vector. An error signal is generated if there is a difference between the actual and the desired output vector.
• On the basis of this error signal, the weights are adjusted until the actual output matches the desired output.
• 18. PERCEPTRON
• Developed by Frank Rosenblatt using the McCulloch and Pitts model, the perceptron is the basic operational unit of artificial neural networks. It employs a supervised learning rule and is able to classify data into two classes.
• Operational characteristics of the perceptron: it consists of a single neuron with an arbitrary number of inputs along with adjustable weights, but the output of the neuron is 1 or 0 depending upon the threshold. It also consists of a bias whose input is always 1. The following figure gives a schematic representation of the perceptron.
• 19. PERCEPTRON
The perceptron thus has the following three basic elements:
• Links − A set of connection links, each carrying a weight, including a bias always having weight 1.
• Adder − Adds the inputs after they are multiplied by their respective weights.
• Activation function − Limits the output of the neuron. The most basic activation function is a Heaviside step function that has two possible outputs: it returns 1 if the input is positive, and 0 for any negative input.
Training Algorithm for Single Output Unit:
• Step 1 − Initialize the following to start the training: weights, bias, learning rate α. For easy calculation and simplicity, the weights and bias are set equal to 0 and the learning rate is set equal to 1.
• Step 2 − Continue steps 3−8 while the stopping condition is not true.
• Step 3 − Continue steps 4−6 for every training vector x.
• Step 4 − Activate each input unit.
• Step 5 − Obtain the net input: y_in = b + ∑(i=1 to n) xi.wi, where ‘b’ is the bias and ‘n’ is the total number of input neurons.
• Step 6 − Apply the activation function to obtain the final output y.
• Step 7 − Adjust the weight and bias as follows:
Case 1 − if y ≠ t, then wi(new) = wi(old) + αtxi and b(new) = b(old) + αt
Case 2 − if y = t, then wi(new) = wi(old) and b(new) = b(old)
• Step 8 − Test for the stopping condition.
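The training steps above can be sketched as a short Python routine. The AND-function data, bipolar (+1/−1) targets, and epoch limit are illustrative assumptions; bipolar targets are used so that the update αtx can move weights in both directions:

```python
def perceptron_train(samples, alpha=1.0, epochs=10):
    """Steps 1-8 above: weights and bias start at 0, update only when y != t."""
    n = len(samples[0][0])
    w = [0.0] * n                       # Step 1: weights = 0
    b = 0.0                             # Step 1: bias = 0
    for _ in range(epochs):             # Step 2: repeat while not stopped
        changed = False
        for x, t in samples:            # Step 3: every training vector
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))  # Step 5
            y = 1 if y_in > 0 else -1   # Step 6: activation
            if y != t:                  # Step 7, Case 1
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b = b + alpha * t
                changed = True
        if not changed:                 # Step 8: stop when no updates occur
            break
    return w, b

# Hypothetical example: learn the AND function with bipolar targets.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = perceptron_train(data)
```

After training, the network classifies all four AND patterns correctly, since AND is linearly separable.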
• 21. Adaptive Linear Neuron (ADALINE)
• Adaline, which stands for Adaptive Linear Neuron, is a network having a single linear unit. It was developed by Widrow and Hoff in 1960. Some important points about Adaline:
• It uses a bipolar activation function.
• It uses the delta rule for training, to minimize the Mean-Squared Error (MSE) between the actual output and the desired/target output.
• The weights and the bias are adjustable.
• The basic structure of Adaline is similar to the perceptron, with an extra feedback loop with the help of which the actual output is compared with the desired/target output. After comparison, on the basis of the training algorithm, the weights and bias are updated.
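The delta-rule training described above might be sketched as follows. Unlike the perceptron, the error is taken against the linear output itself, which is what drives the MSE down; the bipolar AND data, learning rate, and epoch count are illustrative assumptions:

```python
def adaline_train(samples, alpha=0.1, epochs=50):
    """Delta rule: adjust weights in proportion to (target - linear output)."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))  # linear unit
            err = t - y_in                                   # delta-rule error
            w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
            b = b + alpha * err
    return w, b

# Hypothetical example: fit the bipolar AND function.
# The final bipolar output is the sign of the linear unit's response.
data = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
w, b = adaline_train(data)
```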
• 22. Multiple Adaptive Linear Neuron (MADALINE)
• Madaline, which stands for Multiple Adaptive Linear Neuron, is a network consisting of many Adalines in parallel, with a single output unit. Some important points about Madaline:
• The architecture of Madaline consists of “n” neurons in the input layer, “m” neurons in the Adaline layer, and 1 neuron in the Madaline layer. The Adaline layer can be considered the hidden layer, as it lies between the input layer and the output layer, i.e., the Madaline layer.
• It is just like a multilayer perceptron, where Adaline acts as a hidden unit between the input and the Madaline layer.
• The weights and the bias between the input and Adaline layers, as we see in the Adaline architecture, are adjustable. The Adaline and Madaline layers have fixed weights and a bias of 1. Training can be done with the help of the Delta rule.
• 23. Unsupervised Learning
• As the name suggests, this type of learning is done without the supervision of a teacher.
• This learning process is independent.
• During the training of an ANN under unsupervised learning, input vectors of similar type are combined to form clusters.
• When a new input pattern is applied, the neural network gives an output response indicating the class to which the input pattern belongs.
• There is no feedback from the environment as to what the desired output should be or whether it is correct or incorrect.
• Hence, in this type of learning, the network itself must discover the patterns and features from the input data, and the relation of the input data to the output.
• 24. Reinforcement Learning
• As the name suggests, this type of learning is used to reinforce or strengthen the network using some critic information.
• This learning process is similar to supervised learning, but we may have much less information.
• During the training of a network under reinforcement learning, the network receives some feedback from the environment. This makes it somewhat similar to supervised learning.
• However, the feedback obtained here is evaluative, not instructive, which means there is no teacher as in supervised learning.
• After receiving the feedback, the network adjusts its weights to obtain better critic information in the future.
• 25. Neural Network Learning Rules
• We know that during ANN learning, to change the input/output behavior, we need to adjust the weights. Hence, a method is required with the help of which the weights can be modified. These methods are called learning rules, which are simply algorithms or equations. Some learning rules for neural networks:
• Hebbian Learning Rule
• Perceptron Learning Rule
• Delta Learning Rule (Widrow-Hoff Rule)
• Competitive Learning Rule (Winner-takes-all)
• 26. Hebbian Learning Rule
• This rule, one of the oldest and simplest, was introduced by Donald Hebb in his book The Organization of Behavior in 1949. It is a kind of feed-forward, unsupervised learning.
• Basic Concept − This rule is based on a proposal given by Hebb, who wrote: “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”
• From the above postulate, we can conclude that the connection between two neurons might be strengthened if the neurons fire at the same time and might weaken if they fire at different times.
• Mathematical Formulation − According to the Hebbian learning rule, the following is the formula to increase the weight of a connection at every time step:
Δwji(t) = α xi(t).yj(t)
Here, Δwji(t) = the increment by which the weight of the connection increases at time step t; α = the positive, constant learning rate; xi(t) = the input value from the pre-synaptic neuron at time step t; yj(t) = the output of the post-synaptic neuron at the same time step t.
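A single Hebbian weight step per the formula above, for one output neuron; the input values and learning rate are illustrative assumptions:

```python
def hebbian_update(w, x, y, alpha=0.1):
    """One Hebbian step: w_i += alpha * x_i * y for a single output neuron.
    The weight grows only where pre- and post-synaptic activity coincide."""
    return [wi + alpha * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0]
# First input active together with the output -> that connection strengthens;
# the inactive input's connection is left unchanged.
w = hebbian_update(w, x=[1.0, 0.0], y=1.0)   # w becomes [0.1, 0.0]
```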
• 27. Perceptron Learning Rule
• This rule is an error-correcting, supervised learning algorithm for single layer feedforward networks with a linear activation function, introduced by Rosenblatt.
• Basic Concept − Being supervised in nature, to calculate the error there is a comparison between the desired/target output and the actual output. If any difference is found, a change must be made to the weights of the connections.
• Mathematical Formulation − Suppose we have ‘n’ finite input vectors x(n), along with their desired/target output vectors t(n), where n = 1 to N.
• The output ‘y’ can be calculated, as explained earlier, on the basis of the net input, with the activation function applied over that net input, where θ is the threshold.
• The updating of the weights can be done in the following two cases:
• Case I − when t ≠ y, then w(new) = w(old) + αtx(n)
• Case II − when t = y, then the weights are unchanged.
• 28. Model of Artificial Neural Network
• Artificial neural networks can be viewed as weighted directed graphs in which artificial neurons are nodes, and directed edges with weights are connections between neuron outputs and neuron inputs.
• The artificial neural network receives input from the external world in the form of patterns and images in vector form. These inputs are mathematically designated by the notation x(n) for n inputs.
• Each input is multiplied by its corresponding weight. Weights are the information used by the neural network to solve a problem. Typically a weight represents the strength of the interconnection between neurons inside the neural network.
• The weighted inputs are all summed up inside the computing unit (artificial neuron). In case the weighted sum is zero, a bias is added to make the output non-zero or to scale up the system response. The bias has weight and input always equal to ‘1’.
• 29. Model of Artificial Neural Network
• The sum corresponds to any numerical value ranging from 0 to infinity.
• In order to limit the response so as to arrive at the desired value, a threshold value is set up. For this, the sum is passed through an activation function.
• The activation function is the transfer function used to get the desired output. There are linear as well as non-linear activation functions.
• Some commonly used activation functions are binary, sigmoidal (linear), and tan hyperbolic sigmoidal (nonlinear) functions.
• Binary − The output has only two values, either 0 or 1. For this, a threshold value is set up. If the net weighted input is greater than 1, the output is assumed to be 1, otherwise zero.
• Sigmoidal Hyperbolic − This function has an ‘S’-shaped curve. Here the tan hyperbolic function is used to approximate the output from the net input. The function is defined as f(x) = 1/(1 + exp(−σx)), where σ is the steepness parameter.
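The two activation functions named above can be sketched directly; the default threshold and steepness values are illustrative assumptions:

```python
import math

def binary(net, threshold=1.0):
    """Binary activation: 1 if the net weighted input exceeds the threshold,
    otherwise 0."""
    return 1 if net > threshold else 0

def sigmoid(net, steepness=1.0):
    """Sigmoidal activation: f(x) = 1 / (1 + exp(-sigma * x)),
    an S-shaped curve controlled by the steepness parameter sigma."""
    return 1.0 / (1.0 + math.exp(-steepness * net))

binary(1.5)    # fires: net input above the threshold
sigmoid(0.0)   # midpoint of the S-curve, 0.5
```

Increasing `steepness` makes the sigmoid approach the binary step function, which is why both appear in the same family of transfer functions.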
• 30. Architecture
• Input layer − Contains those units (artificial neurons) which receive input from the outside world, on which the network will learn, recognize, or otherwise process.
• Output layer − Contains units that respond with the information about how the network has learned any task.
• Hidden layer − These units are in between the input and output layers. The job of the hidden layer is to transform the input into something the output units can use in some way.
• Most neural networks are fully connected, which is to say each hidden neuron is fully connected to every neuron in its previous (input) layer and to the next (output) layer.
• 31. Learning in Biology (Humans)
• Learning = learning by adaptation.
• The young animal learns that the green fruits are sour, while the yellowish/reddish ones are sweet. The learning happens by adapting the fruit-picking behaviour.
• At the neural level, the learning happens by changing the synaptic strengths, eliminating some synapses, and building new ones.
• The objective of adapting the responses on the basis of the information received from the environment is to achieve a better state. E.g., the animal likes to eat many energy-rich, juicy fruits that make its stomach full and make it feel happy.
• In other words, the objective of learning in biological organisms is to optimize the amount of available resources and happiness, or in general to achieve a state closer to optimal.
  • 32. Learning in Artificial Neural Networks
• 33. Types of Learning in Neural Networks
• Supervised Learning − The training data is input to the network and the desired output is known; weights are adjusted until the output yields the desired value.
• Unsupervised Learning − The input data is used to train the network without known outputs. The network classifies the input data and adjusts the weights by feature extraction from the input data.
• Reinforcement Learning − Here the value of the output is unknown, but the network provides feedback on whether the output is right or wrong. It is semi-supervised learning.
• Offline Learning − The adjustment of the weight vector and threshold is done only after the whole training set has been presented to the network. It is also called batch learning.
• Online Learning − The adjustment of the weights and threshold is done after presenting each training sample to the network.
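The offline/online distinction above can be sketched with a simple linear unit and the delta rule; the tiny data set and learning rate in the usage would be illustrative assumptions:

```python
def online_epoch(w, samples, alpha):
    """Online learning: adjust the weights after *each* training sample."""
    for x, t in samples:
        err = t - sum(xi * wi for xi, wi in zip(x, w))
        w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
    return w

def offline_epoch(w, samples, alpha):
    """Offline (batch) learning: accumulate the error over the whole
    training set, then adjust the weights once at the end."""
    grad = [0.0] * len(w)
    for x, t in samples:
        err = t - sum(xi * wi for xi, wi in zip(x, w))
        grad = [g + err * xi for g, xi in zip(grad, x)]
    return [wi + alpha * g for wi, g in zip(w, grad)]
```

On the same data the two produce different weights after one epoch, because the online version's later updates already see the effect of the earlier ones.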
• 34. Characteristics of ANN
• Using ANNs requires an understanding of their characteristics.
• Choice of model − This depends on the data representation and the application. Overly complex models slow learning.
• Learning algorithm − Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
• Robustness − If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
• 35. Uses of ANN
ANN capabilities fall within the following broad categories:
• Function approximation, or regression analysis, including time series prediction, fitness approximation and modeling.
• Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
• Data processing, including filtering, clustering, blind source separation and compression.
• Robotics, including directing manipulators and prostheses.
• Control, including computer numerical control.
• Classification − A neural network can be trained to classify a given pattern or data set into predefined classes. It uses feedforward networks.
• Prediction − A neural network can be trained to produce outputs that are expected from a given input. E.g., stock market prediction.
• Clustering − A neural network can be used to identify a special feature of the data and classify data into different categories without any prior knowledge of the data.
• 36. Neural networks vs conventional computers
CONVENTIONAL COMPUTERS
• Algorithmic approach
• They are necessarily programmed
• Work on a predefined set of instructions
• Operations are predictable
ANN
• Learning approach
• Not programmed for specific tasks
• Used in decision making
• Operation is unpredictable
• 37. Output Layer
• The output layer of the neural network collects and transmits information in the way it has been designed to.
• The number of neurons in the output layer should be directly related to the type of work the neural network is performing.
• To determine the number of neurons in the output layer, first consider the intended use of the neural network.
• 38. Figure depicting the activation function for an ANN.
Summation function = X1Wi1 + X2Wi2 + … + XnWin
• 39. How is the Brain Different from Computers?
BRAIN
• Biological neurons or nerve cells.
• 200 billion neurons, 32 trillion interconnections.
• Neuron size: 10^-6 m.
• Energy consumption: 10^-6 Joules per operation per second.
• Learning capability.
COMPUTERS
• Silicon transistors.
• 1 billion bytes of RAM, trillions of bytes on disk.
• Single transistor size: 10^-9 m.
• Energy consumption: 10^-16 Joules per operation per second.
• Programming capability.
• 40. Comparing ANN with BNN
As the ANN concept is borrowed from the BNN, there are a lot of similarities, though there are differences too. The corresponding parts are shown in the following table:
Biological Neural Network → Artificial Neural Network
• Soma → Node
• Dendrites → Input
• Synapse → Weights or Interconnections
• Axon → Output
• 41. Criteria-wise comparison of BNN and ANN:
• Processing − BNN: massively parallel, slow but superior to ANN. ANN: massively parallel, fast but inferior to BNN.
• Size − BNN: 10^11 neurons and 10^15 interconnections. ANN: 10^2 to 10^4 nodes (mainly depends on the type of application and the network designer).
• Learning − BNN: can tolerate ambiguity. ANN: very precise, structured and formatted data is required to tolerate ambiguity.
• Fault tolerance − BNN: performance degrades with even partial damage. ANN: capable of robust performance, hence has the potential to be fault tolerant.
• Storage capacity − BNN: stores the information in the synapses. ANN: stores the information in continuous memory locations.
• 42. Analogy of ANN with BNN
• The dendrites in a biological neural network are analogous to the weighted inputs, based on their synaptic interconnection, in an artificial neural network.
• The cell body is analogous to the artificial neuron unit in an artificial neural network, which also comprises summation and threshold units.
• The axon carries the output and is analogous to the output unit of an artificial neural network. So ANNs are modelled on the working of basic biological neurons.
• 43. Applications
• Because of their ability to reproduce and model nonlinear processes, ANNs have found many applications in a wide range of disciplines.
• Application areas include system identification and control (vehicle control, trajectory prediction, process control, natural resources management), quantum chemistry, game-playing and decision making (backgammon, chess, poker), pattern recognition (radar systems, face identification, signal classification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, finance (e.g. automated trading systems), data mining, visualization, machine translation, social network filtering and e-mail spam filtering.
• ANNs have been used to diagnose cancers, including lung cancer, prostate cancer, and colorectal cancer, and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
• ANNs have been used for building black-box models in the geosciences: hydrology, ocean modeling, coastal engineering, and geomorphology are just a few examples.