ARTIFICIAL NEURAL NETWORKS: A TUTORIAL

By: Negin Yousefpour
PhD Student, Civil Engineering Department
TEXAS A&M UNIVERSITY
Contents
Introduction
Origin of Neural Networks
Biological Neural Networks
ANN Overview
Learning
Different Neural Networks
Challenging Problems
Summary
INTRODUCTION
Artificial Neural Networks (ANNs), or simply Neural Networks (NNs),
provide an exciting alternative method for solving a variety of
problems in different fields of science and engineering.
This article tries to give the reader:
- A whole idea about ANNs
- The motivation for ANN development
- Network architectures and learning models
- An outline of some important uses of ANNs
Origin of Neural Networks
The human brain has many incredible characteristics, such as massive
parallelism, distributed representation and computation, learning
ability, generalization ability, and adaptivity, which seem simple but
are really complicated.
It has always been a dream for computer scientists to create a
computer that could solve complex perceptual problems this fast.
ANN models were an effort to apply the same method the human
brain uses to solve perceptual problems.
Three periods of development for ANNs:
- 1940s: McCulloch and Pitts: initial works
- 1960s: Rosenblatt: perceptron convergence theorem;
  Minsky and Papert: work showing the limitations of a simple
  perceptron
- 1980s: Hopfield/Werbos and Rumelhart: Hopfield's energy
  approach / back-propagation learning algorithm
Biological Neural Networks
When a signal reaches a synapse, certain chemicals called
neurotransmitters are released.
Process of learning: the effectiveness of a synapse can be adjusted
by the signals passing through it.
Cerebral cortex: a large flat sheet of neurons, about 2 to 3 mm thick
and about 2200 cm² in area, containing roughly 10^11 neurons.
Duration of impulses between neurons: milliseconds, and the amount
of information sent is also small (a few bits).
Critical information is not transmitted directly, but stored in the
interconnections.
The term "connectionist model" originated from this idea.
ANN Overview: COMPUTATIONAL MODEL FOR AN ARTIFICIAL NEURON
The McCulloch-Pitts model:

z = Σ_{i=1}^{n} w_i x_i ;   y = H(z)

Wires: axons & dendrites
Connection weights: synapses
Threshold function H: activity in the soma
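
As an illustration (not part of the original slides), a minimal Python sketch of the McCulloch-Pitts neuron defined above, with H the Heaviside step function and the AND wiring chosen as an example:

```python
import numpy as np

def mcculloch_pitts(x, w, threshold=0.0):
    """McCulloch-Pitts neuron: weighted sum followed by a hard threshold."""
    z = np.dot(w, x)                    # z = sum_i w_i * x_i
    return 1 if z >= threshold else 0   # y = H(z - threshold)

# Example: a neuron wired to compute logical AND of two binary inputs
w = np.array([1.0, 1.0])
print([mcculloch_pitts(np.array(p), w, threshold=1.5)
       for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 0, 0, 1]
```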
ANN Overview: Network Architecture
Connection patterns:
- Feed-Forward
- Recurrent
Connection patterns and network types:
- Feed-Forward:
  - Single-layer perceptron
  - Multilayer Perceptron
  - Radial Basis Functions
- Recurrent:
  - Competitive Networks
  - Kohonen's SOM
  - Hopfield Network
  - ART models
Learning
What is the learning process in an ANN?
- Updating the network architecture and connection weights so that the
  network can efficiently perform a task
What is the source of learning for an ANN?
- Available training patterns
- The ability of the ANN to automatically learn from examples or
  input-output relations
How to design a learning process?
- Knowing what information is available
- Having a model of the environment: Learning paradigm
- Figuring out the update process for the weights: Learning rules
- Identifying a procedure to adjust the weights by the learning rules:
  Learning algorithm
Learning Paradigms
1. Supervised
   The correct answer is provided to the network for every input pattern
   Weights are adjusted with respect to the correct answer
   In reinforcement learning, only a critique of the correctness of the
   answer is provided
2. Unsupervised
   Does not need the correct output
   The system itself recognizes correlations and organizes patterns into
   categories accordingly
3. Hybrid
   A combination of supervised and unsupervised
   Some of the weights are provided with the correct output while the
   others are adjusted automatically
Learning Rules
There are four basic types of learning rules:
- Error-correction rules
- Boltzmann
- Hebbian
- Competitive learning
Each of these can be trained with or without a teacher
Each has a particular architecture and learning algorithm
Error-Correction Rules
An error is calculated for the output and is used to modify the
connection weights; the error gradually decreases
The perceptron learning rule is based on this error-correction
principle
A perceptron consists of a single neuron with adjustable weights
and a threshold u
If an error occurs, the weights are updated by iterating until the
error reaches zero
Since the decision boundary is linear, if the patterns are linearly
separable the learning process converges in a finite number of
iterations
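
As a hedged illustration (not from the slides), a minimal sketch of the perceptron error-correction update, assuming a hard-threshold output and a learning rate eta; the threshold u is absorbed as a bias weight:

```python
import numpy as np

def train_perceptron(X, t, eta=0.1, max_epochs=100):
    """Perceptron error-correction rule: w += eta * (target - output) * x."""
    X = np.hstack([X, np.ones((len(X), 1))])   # absorb threshold u as a bias weight
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, t):
            y = 1 if np.dot(w, x) >= 0 else 0  # hard-threshold output
            if y != target:
                w += eta * (target - y) * x    # correct weights toward the target
                errors += 1
        if errors == 0:                        # converged: zero error on training set
            break
    return w

# Linearly separable example: the OR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 1, 1, 1])
print(train_perceptron(X, t))
```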
Boltzmann Learning
Used in symmetric recurrent networks (symmetric: Wij = Wji)
Consists of binary units (+1 for on, -1 for off)
Neurons are divided into two groups: hidden & visible
Outputs are produced according to Boltzmann statistical mechanics
Boltzmann learning adjusts the weights until the visible units satisfy
a desired probability distribution
The change in a connection weight (the error correction) is measured
by the difference between the correlations of a pair of input and
output neurons under the clamped and free-running conditions
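
As a hedged sketch (not in the original slides) of the weight update this rule describes, assuming the unit-state correlations are estimated from samples gathered under the clamped and free-running conditions:

```python
import numpy as np

def boltzmann_weight_update(W, states_clamped, states_free, eta=0.01):
    """Boltzmann rule: dW_ij = eta * (<s_i s_j>_clamped - <s_i s_j>_free).

    states_*: sampled unit states in {+1, -1}, shape (num_samples, num_units).
    """
    corr_clamped = states_clamped.T @ states_clamped / len(states_clamped)
    corr_free = states_free.T @ states_free / len(states_free)
    return W + eta * (corr_clamped - corr_free)
```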
Hebbian Rules
One of the oldest learning rules, originating from neurobiological
experiments
The basic concept of Hebbian learning: when neuron A activates and
then causes neuron B to activate, the connection strength between
the two neurons is increased, and it will be easier for A to
activate B in the future.
Learning is done locally; that is, the weight of a connection is
corrected only with respect to the neurons connected to it.
Orientation selectivity occurs due to Hebbian training of a network
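
A hedged sketch (not from the slides) of the basic rate-based Hebbian update, delta_w_ij = eta * y_i * x_j; the array shapes and values are illustrative assumptions:

```python
import numpy as np

def hebbian_update(W, x_pre, y_post, eta=0.01):
    """Basic Hebbian rule: strengthen w_ij when pre-unit j and post-unit i co-activate.

    W: weight matrix, shape (num_post, num_pre)
    x_pre: presynaptic activities, shape (num_pre,)
    y_post: postsynaptic activities, shape (num_post,)
    """
    return W + eta * np.outer(y_post, x_pre)   # delta_w_ij = eta * y_i * x_j

# Locality: each w_ij changes using only the two units it connects
W = np.zeros((2, 3))
W = hebbian_update(W, x_pre=np.array([1.0, 0.0, 1.0]), y_post=np.array([1.0, 0.0]))
print(W)
```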
Competitive Learning Rules
The basis is "winner take all", which originated from biological
neural networks
All input units are connected together, and all output units are also
connected via inhibitory weights but receive feedback through an
excitatory weight
Only the one unit with the largest (or smallest) input is activated,
and its weights are adjusted
As a result of the learning process, the pattern stored in the winner
unit (its weights) becomes closer to the input pattern.
http://www.peltarion.com/blog/img/sog/competitive.gif
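
As a hedged illustration (not part of the slides), a minimal winner-take-all update, assuming the winner is the unit whose weight vector is closest to the input; the learning rate and data are illustrative:

```python
import numpy as np

def competitive_update(W, x, eta=0.1):
    """Winner-take-all rule: only the winning unit's weights move toward the input.

    W: weight matrix, one row per output unit.
    """
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # unit closest to the input wins
    W[winner] += eta * (x - W[winner])                 # pull its weights toward x
    return winner

# Repeated presentations pull each winner's weight vector toward a cluster of inputs
rng = np.random.default_rng(0)
W = rng.random((3, 2))
for _ in range(100):
    competitive_update(W, rng.random(2))
print(W)
```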
Multilayer Perceptron
The most popular network with a feed-forward architecture
Applies the back-propagation algorithm
As a result of having hidden units, a multilayer perceptron can form
arbitrarily complex decision boundaries
Each unit in the first hidden layer imposes a hyperplane in the
pattern space
Each unit in the second hidden layer imposes a hyperregion on the
outputs of the first layer
The output layer combines the hyperregions of all units in the
second layer
Back-Propagation Algorithm
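
The original slide presents the algorithm as a diagram; as a hedged stand-in, a minimal one-hidden-layer sketch with sigmoid units and squared-error loss (the layer sizes, learning rate, and XOR task are illustrative assumptions, not from the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    x = np.append(x, 1.0)                       # bias input
    h = np.append(sigmoid(W1 @ x), 1.0)         # hidden activations + bias unit
    return sigmoid(W2 @ h), h, x

def backprop_step(x, t, W1, W2, eta=0.5):
    """One gradient-descent step for a 1-hidden-layer MLP, squared-error loss."""
    y, h, x = forward(x, W1, W2)
    delta_out = (y - t) * y * (1 - y)                            # output error signal
    delta_hid = (W2.T @ delta_out)[:-1] * h[:-1] * (1 - h[:-1])  # backpropagated error
    W2 -= eta * np.outer(delta_out, h)
    W1 -= eta * np.outer(delta_hid, x)

# XOR: not linearly separable, so a single-layer perceptron fails but an MLP works
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 5))
data = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
for _ in range(5000):
    for x, t in data:
        backprop_step(np.array(x, float), t, W1, W2)
print([round(float(forward(np.array(x, float), W1, W2)[0][0]), 2) for x, t in data])
# outputs should approach [0, 1, 1, 0]
```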
Radial Basis Function Network (RBF)
A radial basis function such as a Gaussian kernel is applied as the
activation function.
A radial basis function (also called a kernel function) is a
real-valued function whose value depends only on the distance from
the origin or some other center: F(x) = F(|x|)
An RBF network uses hybrid learning: an unsupervised clustering
algorithm and a supervised least-squares algorithm
In comparison to the multilayer perceptron network:
- The learning algorithm is faster than back-propagation
- After training, the running time is much slower
Let's see an example!
Gaussian Function
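
The example and the Gaussian function appear as figures in the original slides; as a hedged stand-in, a minimal Gaussian RBF network fit by hybrid learning (the centers are a random subset of the data, standing in for a proper clustering step; the width sigma and the sine target are illustrative assumptions):

```python
import numpy as np

def gaussian_rbf(x, centers, sigma=1.0):
    """Gaussian kernel units: phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2))."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                          # target function to approximate

# Hybrid learning: (1) unsupervised choice of centers, (2) supervised least squares
centers = X[rng.choice(len(X), 10, replace=False)]
Phi = gaussian_rbf(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights by least squares

y_hat = Phi @ w
print("max abs error:", np.abs(y_hat - y).max())
```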
Kohonen Self-Organizing Maps
Consists of a two-dimensional array of output units connected to all
input nodes
Works based on the property of topology preservation:
nearby input patterns should activate nearby output units on the map
A SOM can be recognized as a special kind of competitive learning
Only the weight vectors of the winner and its neighboring units are
updated
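
A hedged sketch (not in the slides) of one SOM update, assuming a Gaussian neighborhood around the winning unit on the 2-D grid; the grid size, learning rate, and neighborhood radius are illustrative:

```python
import numpy as np

def som_update(W, grid, x, eta=0.5, radius=1.0):
    """Update the winner and its grid neighbors toward input x.

    W: weights, shape (num_units, dim); grid: unit coordinates, shape (num_units, 2).
    """
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    grid_dist2 = ((grid - grid[winner]) ** 2).sum(1)   # distance on the map itself
    h = np.exp(-grid_dist2 / (2 * radius ** 2))        # neighborhood function
    W += eta * h[:, None] * (x - W)                    # neighbors move too
    return winner

# 5x5 map of 3-D inputs: nearby inputs come to activate nearby map units
grid = np.array([(i, j) for i in range(5) for j in range(5)], float)
rng = np.random.default_rng(0)
W = rng.random((25, 3))
for _ in range(1000):
    som_update(W, grid, rng.random(3))
```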
Adaptive Resonance Theory Models
ART models were proposed to overcome the concerns related to the
stability-plasticity dilemma in competitive learning:
is it likely that learning could corrupt the existing knowledge in a
unit?
If the input vector is similar enough to one of the stored prototypes
(resonance), learning updates that prototype.
If not, a new category is defined in an "uncommitted" unit.
Similarity is controlled by a vigilance parameter.
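
A hedged sketch of the resonance test described above; the cosine similarity measure and the simple prototype update are simplifying assumptions, not the full ART1/ART2 dynamics:

```python
import numpy as np

def art_step(prototypes, x, vigilance=0.8, eta=0.5):
    """Match x against stored prototypes; update on resonance, else add a category."""
    for i, p in enumerate(prototypes):
        similarity = np.dot(p, x) / (np.linalg.norm(p) * np.linalg.norm(x) + 1e-12)
        if similarity >= vigilance:               # resonance: close enough to a prototype
            prototypes[i] = p + eta * (x - p)     # refine the matched prototype
            return i
    prototypes.append(x.copy())                   # commit a new "uncommitted" unit
    return len(prototypes) - 1

prototypes = []
for x in np.random.default_rng(0).random((20, 4)):
    art_step(prototypes, x)
print("categories:", len(prototypes))
```

Raising the vigilance parameter makes the match test stricter, so more (finer-grained) categories are created.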
Hopfield Network
Hopfield designed a network based on an energy function
As a result of its dynamic recurrency, the network's total energy
decreases and tends to a minimum value (an attractor)
The dynamic updates can take place in two ways: synchronously and
asynchronously
Example application: the traveling salesman problem (TSP)
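
A hedged sketch (not from the slides) of a binary Hopfield network with Hebbian-stored patterns, the standard energy E = -1/2 Σ_ij w_ij s_i s_j, and asynchronous updates; the pattern and network size are illustrative:

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian storage: W = (1/n) * sum over patterns of outer(p, p), zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    return -0.5 * s @ W @ s             # decreases under asynchronous updates

def recall(W, s, steps=100, seed=0):
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))        # asynchronous: one random unit at a time
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = np.array([1, -1, 1, -1, 1, -1], float)
W = hopfield_store(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                                        # corrupt one bit
recalled = recall(W, noisy)
print(energy(W, noisy), "->", energy(W, recalled))    # energy decreases
print(recalled)                                       # the stored pattern is recovered
```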
Challenging Problems
Pattern recognition
- Character recognition
- Speech recognition
Clustering/Categorization
- Data mining
- Data analysis
Challenging Problems
Function approximation
- Engineering & scientific modeling
Prediction/Forecasting
- Decision-making
- Weather forecasting
Challenging Problems
Optimization
- Traveling salesman problem (TSP)
Summary
A great overview of ANNs is presented in this paper; it is very easy
to understand and straightforward
The different types of learning rules and algorithms, as well as the
different architectures, are well explained
A number of networks were described in simple words
The popular applications of NNs were illustrated
The author believes that ANNs have brought up both enthusiasm
and criticism.
Actually, except for some special problems, there is no evidence
that NNs work better than other alternatives
More development and better performance in NNs require
combining ANNs with new technologies
