INTRODUCTION TO MATLAB NEURAL NETWORK TOOLBOX
                                         Budi Santosa, Fall 2002

FEEDFORWARD NEURAL NETWORK
To create a feedforward backpropagation network we can use NEWFF.

Syntax

net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description

            NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
             PR - Rx2 matrix of min and max values for R input elements.
             Si - Size of ith layer, for Nl layers.
             TFi - Transfer function of ith layer, default = 'tansig'.
             BTF - Backprop network training function, default = 'trainlm'.
             BLF - Backprop weight/bias learning function, default = 'learngdm'.
             PF - Performance function, default = 'mse'.
            and returns an N layer feed-forward backprop network.

Consider this set of data:
p=[-1 -1 2 2;0 5 0 5]
t=[-1 -1 1 1]
where p is the input matrix (each column is one two-element input vector) and t is the
target vector.

Suppose we want to create a feedforward neural net with one hidden layer of 3 nodes,
using the tangent sigmoid as the transfer function in the hidden layer and a linear
function for the output layer, trained with the gradient descent with momentum
backpropagation training function. Simply use the following command:

» net=newff([-1 2;0 5],[3 1],{'tansig' 'purelin'},'traingdm');

Note that the first input [-1 2;0 5] gives the minimum and maximum values of each row of
vector p. We might use minmax(p) instead, especially for a large data set; the command
then becomes:

»net=newff(minmax(p),[3 1],{'tansig' 'purelin'},'traingdm');

We then train the network net with the following commands and parameter values1 (or we
can collect this set of commands in an M-file):

»     net.trainParam.epochs=30;%(number of epochs)
»     net.trainParam.lr=0.3;%(learning rate)
»     net.trainParam.mc=0.6;%(momentum)
»     net=train(net,p,t);


1
    The % sign tells MATLAB not to execute anything after it on the line; the text that
follows is just a comment.


TRAINLM, Epoch 0/30, MSE 1.13535/0, Gradient 5.64818/1e-010
TRAINLM, Epoch 6/30, MSE 2.21698e-025/0, Gradient 4.27745e-012/1e-010
TRAINLM, Minimum gradient reached, performance goal was not met.

(The log reports TRAINLM, the toolbox default, rather than TRAINGDM; newff falls back to
'trainlm' whenever the training-function argument is not supplied.)

The following is a plot of the training error versus epochs produced by running the above
commands:

[Figure: training performance plot. Title: "Performance is 2.21698e-025, Goal is 0";
y-axis: MSE on a logarithmic scale (training curve in blue); x-axis: 6 epochs.]

To simulate the network, use the following command:
» y=sim(net,p);

Then see the result:
» [t;y]

ans =

    -1.0000   -1.0000    1.0000    1.0000
    -1.0000   -1.0000    1.0000    1.0000

Notice that t is the actual target and y is the predicted value.
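
To quantify the fit on the training data we can also compute the mean squared error
directly (trainMSE is an illustrative variable name, not part of the original session):

» trainMSE = mean((t - y).^2)   % essentially zero for this run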

Pre-processing data

Some transfer functions require the inputs and targets to be scaled so that they fall
within a specified range. To meet this requirement we pre-process the data.


PRESTD: preprocesses the data so that the mean is 0 and the standard deviation is 1.
Syntax: [pn,meanp,stdp,tn,meant,stdt] = prestd(p,t)

Description

PRESTD preprocesses the network training set by normalizing the inputs (p) and targets
(t) so that they have means of zero and standard deviations of 1.
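
A minimal sketch of what prestd does (the two check lines are illustrative additions,
using the toy data p and t from above):

» [pn,meanp,stdp,tn,meant,stdt] = prestd(p,t);
» mean(pn,2)    % each row of pn now has mean approximately 0
» std(pn,0,2)   % ...and standard deviation approximately 1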

PREMNMX: preprocesses data so that the minimum is -1 and the maximum is 1.

Syntax: [pn,minp,maxp,tn,mint,maxt] = premnmx(p,t)

Description
PREMNMX preprocesses the network training set by normalizing the inputs and targets
so that they fall in the interval [-1,1].

Post-processing data

If we pre-process the data before an experiment, then after simulation we need to convert
the output back to the original scale in order to interpret it. The corresponding
post-processing subroutine for prestd is poststd, and for premnmx it is postmnmx.
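
A minimal round-trip sketch (illustrative only; p is the toy input matrix from above):
postmnmx exactly undoes the premnmx transformation.

» [pn,minp,maxp] = premnmx(p);
» p2 = postmnmx(pn,minp,maxp);   % p2 reproduces p up to rounding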

Now consider the following real time-series data, which we can save as a text file
(e.g. data.txt):

  107.1   113.5   112.7
  113.5   112.7   114.7
  112.7   114.7   123.4
  114.7   123.4   123.6
  123.4   123.6   116.3
  123.6   116.3   118.5
  116.3   118.5   119.8
  118.5   119.8   120.3
  119.8   120.3   127.4
  120.3   127.4   125.1
  127.4   125.1   127.6
  125.1   127.6   129.0
  127.6   129.0   124.6
  129.0   124.6   134.1
  124.6   134.1   146.5
  134.1   146.5   171.2
  146.5   171.2   178.6
  171.2   178.6   172.2
  178.6   172.2   171.5
  172.2   171.5   163.6
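
Each row contains two consecutive values of the series as inputs and the next value as
the target. A sketch of how such a matrix could be built from a raw series (x here is a
hypothetical column vector holding the original values):

» data = [x(1:end-2) x(2:end-1) x(3:end)];   % lagged inputs and next-step target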




Now we would like to work with this real data set. We have a 20 by 3 matrix. Split the
matrix into two new data sets: the first 12 rows as the training set and the rest as the
testing set. The following commands load the data and specify our training and testing
data:

»   load data.txt;
»   P=data(1:12,1:2);
»   T=data(1:12,3);
»   a=data(13:20,1:2);
»   s=data(13:20,3);

The subroutine premnmx is used to preprocess the data so that the converted values fall
in the interval [-1,1]. Note that we must transpose our data, so that each variable forms
a row vector, before applying premnmx.

» [pn,minp,maxp,tn,mint,maxt]=premnmx(P',T');
» [an,mina,maxa,sn,mins,maxs]=premnmx(a',s');
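
Strictly speaking, test data are usually rescaled with the training set's extrema rather
than their own; the toolbox provides tramnmx for this. A hedged alternative to the second
command above:

» an = tramnmx(a',minp,maxp);   % scale test inputs with the training min/max
» sn = tramnmx(s',mint,maxt);   % likewise for the test targets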

We would like to create a two-layer neural net with 5 nodes in the hidden layer.

» net=newff(minmax(pn),[5 1],{'tansig','tansig'},'traingdm')

Specify these parameters:

»   net.trainParam.epochs=3000;
»   net.trainParam.lr=0.3;
»   net.trainParam.mc=0.6;
»   net=train(net,pn,tn);
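
Other training parameters can optionally be set in the same way (the values here are
illustrative, not part of the original run):

»   net.trainParam.goal=1e-3;   % stop early once this mse goal is reached
»   net.trainParam.show=100;    % progress display interval in epochs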

After training the network we can simulate our testing data:

[Figure: training performance plot. Title: "Performance is 0.225051, Goal is 0";
y-axis: MSE on a logarithmic scale (training curve in blue); x-axis: 3000 epochs.]
» y=sim(net,an)

y =

 Columns 1 through 7

  -0.4697  -0.4748  -0.4500  -0.3973   0.5257   0.6015   0.5142

 Column 8

   0.5280

To convert y back to the original scale, use the following command (note that this reuses
the name t, which from here on holds the predictions rather than the earlier toy targets):

» t=postmnmx(y',mins,maxs);
See the results:
» [t s]
ans =
  138.9175 124.6000
  138.7803 134.1000
  139.4506 146.5000
  140.8737 171.2000
  165.7935 178.6000
  167.8394 172.2000
  165.4832   171.5000
  165.8551   163.6000

» plot(t,'r')
» hold
Current plot held
» plot(s)
» title('Comparison between actual targets and predictions')

[Figure: "Comparison between actual targets and predictions" — the predictions t (red)
and the actual targets s (blue) plotted over the 8 test points; y-axis from 120 to 180.]
To calculate the MSE between the actual and predicted values, use the following commands:

» d=(t-s).^2;
» mse=mean(d)

mse =

   177.5730
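
As an aside, naming the variable mse shadows the toolbox's MSE performance function; an
equivalent one-liner with a safer (hypothetical) name:

» testMSE = mean((t - s).^2)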
To judge the network performance we can also use regression analysis. After
post-processing the predicted values, we can apply the following command:

» [m,b,r]=postreg(t',s')
m =
     0.5368
b =
    68.1723
r =
     0.7539

Here m and b are the slope and intercept of the best linear fit of the predictions to the
targets, and r is the correlation coefficient; a perfect fit would give m = 1, b = 0 and
r = 1.


[Figure: postreg regression plot. Title: "Best Linear Fit: X = (0.537) T + (68.2)";
data points, the X = T line, and the best linear fit, with R = 0.754; network outputs X
plotted against targets T.]
RADIAL BASIS NETWORK

NEWRB Design a radial basis network.

 Synopsis

   net = newrb
   [net,tr] = newrb(P,T,GOAL,SPREAD,MN,DF)

 Description

Radial basis networks can be used to approximate functions. NEWRB adds neurons to
the hidden layer of a radial basis network until it meets the specified mean squared error
goal.
Consider the same data set we used before. Now we use a radial basis network for the
experiment. In the call below, the arguments after the inputs P' and targets T' are the
mean squared error GOAL (0), the SPREAD of the radial basis functions (3), the maximum
number of neurons MN (12), and the display increment DF (1).

>> net = newrb(P',T',0,3,12,1);
>> Y=sim(net,a')

Y =

 150.4579  142.0905  166.3241  167.1528  167.1528  167.1528  167.1528  167.1528

>> d=[Y' s]

d =

   150.4579       124.6000
   142.0905       134.1000
   166.3241       146.5000
   167.1528       171.2000
   167.1528       178.6000
   167.1528       172.2000
   167.1528       171.5000
   167.1528       163.6000

>> diff=d(:,1)-d(:,2)

diff =

    25.8579
     7.9905
    19.8241
    -4.0472
-11.4472
     -5.0472
     -4.3472
      3.5528

>> mse=mean(diff.^2)

mse =

    166.2357

RECURRENT NETWORK

Here we will use Elman network. The Elman network differs from conventional two-
layers network in that the first layer has recurrent connection. Elman networks are two-
layer backpropagation networks, with addition of a feedback connection from the output
of the hidden layer to its input. This feedback path allows Elman networks to recognize
and generate temporal patterns, as well as spatial patterns.
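
Note that the example below creates and simulates the network without an explicit
training step; as with the earlier networks, one would normally set the training
parameters and call train before simulating (a minimal sketch, reusing pn and Tn from
the example that follows):

»   net.trainParam.epochs=3000;
»   net=train(net,pn,Tn);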

The following is an example of applying Elman networks.

»   [an,meana,stda,tn,meant,stdt]=prestd(a',s');
»   [pn,meanp,stdp,Tn,meanT,stdT]=prestd(P',T');
»   net = newelm(minmax(pn),[6 1],{'tansig','logsig'},'traingdm');
»   y=sim(net,an);
»   x=poststd(y',meant,stdt);
»   [x s]

ans =

    158.2551     124.6000
    158.2079     134.1000
    158.3120     146.5000
    159.2024     171.2000
    165.3899     178.6000
    171.6465     172.2000
    170.5381     171.5000
    169.3573     163.6000

» plot(x)
» hold
Current plot held
» plot(t,'r')

Recall that t still holds the feedforward predictions computed earlier, so this plot
compares the Elman outputs x against them.




[Figure: plot of the Elman outputs x (blue) and t (red) over the 8 test points; y-axis
from 120 to 180.]




Learning Vector Quantization (LVQ)
LVQ networks are used to solve classification problems. Here we use the iris data to
illustrate the technique. The iris data set, popular in data mining and statistics, is a
150 by 5 matrix: each row is an observation, and the last column indicates the class.
There are 3 classes in the data.
The following commands specify our training and testing data (sortrows(iris,2) orders
the rows by the second column, which mixes the classes before the split):
»   load iris.txt
»   x=sortrows(iris,2);
»   P=x(21:150,1:4);
»   T=x(21:150,5);
»   a=x(1:20,1:4);
»   s=x(1:20,5);

An LVQ network can be created with the function newlvq.

Syntax
net = newlvq(PR,S1,PC,LR,LF)

       NET = NEWLVQ(PR,S1,PC,LR,LF) takes these inputs,
         PR - Rx2 matrix of min and max values for R input elements.
         S1 - Number of hidden neurons.
         PC - S2 element vector of typical class percentages.
              The dimension depends on the number of classes in our data
              set. Each element indicates the percentage of each class.
              The percentages must sum to 1.
         LR - Learning rate, default = 0.01.
         LF - Learning function, default = 'learnlv2'.
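
Since the class percentages must sum to 1, PC can also be computed directly from the
training targets instead of being typed by hand. A minimal sketch (assuming the classes
in T are labeled 1, 2 and 3):

» counts = full(sum(ind2vec(T'),2));   % number of observations per class
» PC = (counts/length(T))'             % class fractions; these sum to 1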




The following code implements the LVQ network (it can be saved as an M-file; the input
prompts ask for the matrices defined above).

P=input('input matrix training: ');
T=input('input matrix target training: ');
a=input('input matrix testing: ');
s=input('input matrix target testing: ');
Tc = ind2vec(T');
net=newlvq(minmax(P'),6,[49/130 36/130 45/130]);
net.trainParam.epochs=700;
net=train(net,P',Tc);
y=sim(net,a');
lb=vec2ind(y);
[s lb']

ans =

     2     2
     2     2
     2     2
     3     1
     1     1
     2     2
     2     2
     2     2
     2     2
     2     2
     2     2
     2     2
     2     1
     2     2
     2     2
     3     2
     3     1
     3     1
     3     1
     2     2
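
In the listing above the first column is the true class and the second the predicted
class. A quick, illustrative way to summarize the test performance (accuracy is a
hypothetical variable name):

» accuracy = mean(s == lb')   % fraction of test points classified correctly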



