
JAIST Summer School 2016 "Theory for Understanding the Brain", Lecture 03: Network Dynamics

http://www.jaist.ac.jp/whatsnew/event/2016/07/07-3.html

  1. SS2016 Modern Neural Computation, Lecture 3: Network Dynamics. Hirokazu Tanaka, School of Information Science, Japan Advanced Institute of Science and Technology.
  2. Neural network as a dynamical system. In this lecture we will learn:
     • Attractor dynamics: Hopfield model; winner-take-all and winnerless competition
     • Random connectivity: Girko's circular law; phase transition by synaptic variability
     • Collective dynamics: Hebb's cell assemblies; synfire chains, neuronal avalanches, small-world networks
     • Recurrent network dynamics: echo-state network, liquid-state machine; self-organizing recurrent network (SORN)
     • Synchronization: Kuramoto model
  3. Hopfield model inspired by the physics of ferromagnetism. Each neuron $i$ carries a "spin" variable $S_i = \pm 1$ ($+1$: excited, $-1$: rest). General connectivity: $w_{ij} \neq w_{ji}$; symmetric connectivity: $w_{ij} = w_{ji}$. Here we will see that a neural network with symmetric connectivity exhibits attractor dynamics. Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics.
  4. Hopfield model inspired by the physics of ferromagnetism. Each neuron receives the local field $h_i(t) = \sum_j w_{ij} S_j(t)$ and is updated stochastically:
     $\Pr[S_i(t+\Delta t) = \pm 1 \mid h_i(t)] = \dfrac{e^{\pm\beta h_i(t)}}{e^{\beta h_i(t)} + e^{-\beta h_i(t)}}$, equivalently $\Pr[S_i(t+\Delta t) = +1 \mid h_i(t)] = \tfrac{1}{2}\left(1 + \tanh \beta h_i(t)\right)$,
     with the associated energy function $\mathcal{H}(\mathbf{S}) = -\sum_{\langle i,j \rangle} w_{ij} S_i S_j$. In the deterministic limit $\beta \to \infty$ the update becomes $S_i(t+\Delta t) = \mathrm{sgn}\, h_i(t)$. Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics.
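     A minimal Matlab sketch of one step of this Glauber update (the network size and the symmetric random weights are illustrative, not from the slides):

     % One Glauber update step for a spin vector S with symmetric weights w.
     N = 100; beta = 2;
     w = randn(N)/sqrt(N); w = (w + w')/2;   % illustrative symmetric weights
     S = sign(randn(N,1));                   % random initial spins (+1/-1)
     h = w*S;                                % local fields h_i = sum_j w_ij S_j
     p = 1./(1 + exp(-2*beta*h));            % Pr(S_i=+1) = e^{beta h}/(e^{beta h}+e^{-beta h})
     S = 2*(rand(N,1) < p) - 1;              % sample the new spins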
  5. Associative memory is stored in connection strengths. Store $M$ patterns $p_i^\mu = \pm 1$ ($i = 1,\dots,N$; $\mu = 1,\dots,M$) by Hebbian learning: $w_{ij} = \frac{1}{N}\sum_{\mu=1}^{M} p_i^\mu p_j^\mu$. Defining the overlap with pattern $\mu$ as $m^\mu = \frac{1}{N}\sum_j p_j^\mu S_j$, the local field becomes $h_i(t) = \sum_j w_{ij} S_j = \sum_{\mu=1}^{M} p_i^\mu m^\mu$. Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics.
  6. Memory recall is a relaxation process to fixed points. $w_{ij} = \frac{1}{N}\sum_{\mu=1}^{M} p_i^\mu p_j^\mu$. Tank & Hopfield (1987) Scientific American.
  7. Memory recall is a relaxation process to fixed points ($M = 3$, deterministic case). Suppose the initial population activity has significant overlap with pattern $\mu = 3$, $m^3(t_0) = 0.4$, and no overlap with the other patterns, $m^1(t_0) = m^2(t_0) = 0$. Then $S_i(t_0+\Delta t) = \mathrm{sgn}\left(p_i^1 m^1 + p_i^2 m^2 + p_i^3 m^3\right) = \mathrm{sgn}\left(p_i^3 m^3\right) = p_i^3$, and hence $m^3(t_0+\Delta t) = \frac{1}{N}\sum_i p_i^3 S_i(t_0+\Delta t) = \frac{1}{N}\sum_i \left(p_i^3\right)^2 = 1$. Therefore the population activity converges to pattern $\mu = 3$. Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics.
  9. Memory recall is a relaxation process to fixed points (stochastic case). With $m^1 = m^2 = 0$, the update probability is $\Pr[S_i(t_0+\Delta t) = +1 \mid h_i(t)] = g\left(p_i^3 m^3(t_0)\right)$. For $p_i^3 = +1$ this is $g\left(m^3(t_0)\right)$; for $p_i^3 = -1$ it is $g\left(-m^3(t_0)\right)$. The overlap then splits into sums over the two subpopulations:
     $m^3(t_0+\Delta t) = \frac{1}{N}\sum_i p_i^3 S_i(t_0+\Delta t) = \frac{1}{N}\sum_{i:\, p_i^3=+1} S_i(t_0+\Delta t) - \frac{1}{N}\sum_{i:\, p_i^3=-1} S_i(t_0+\Delta t)$.
     Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics.
  10. Memory recall is a relaxation process to fixed points (stochastic case, continued). Each subpopulation contains about $N/2$ neurons. For $p_i^3 = +1$, $\langle S_i(t_0+\Delta t)\rangle = \Pr[S_i = +1] - \Pr[S_i = -1] = 2 g\left(m^3(t_0)\right) - 1$ (*); for $p_i^3 = -1$, $\langle S_i(t_0+\Delta t)\rangle = 2 g\left(-m^3(t_0)\right) - 1$ (**). Combining (*) and (**) yields the update rule
     $m^3(t_0+\Delta t) = g\left(m^3(t_0)\right) - g\left(-m^3(t_0)\right)$.
     Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics.
  11. Memory recall is a relaxation process to fixed points. If we assume the sigmoid activation $g(m) = \frac{1}{2}\left(1 + \tanh \beta m\right)$, the update rule $m^3(t_0+\Delta t) = g\left(m^3(t_0)\right) - g\left(-m^3(t_0)\right)$ becomes $m^3(t_0+\Delta t) = \tanh \beta m^3(t_0)$. When $\beta > 1$, the network is attracted to the pattern. Figure 17.8 in Gerstner (2014) Neuronal Dynamics.
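     The attraction for $\beta > 1$ can be checked by iterating the scalar update rule directly; a short sketch (the initial overlap 0.4 and the two $\beta$ values follow the slides):

     % Iterate m <- tanh(beta*m): m -> 0 for beta < 1, m -> m* > 0 for beta > 1.
     for beta = [0.8 3]
         m = 0.4;                  % initial overlap with pattern mu = 3
         for t = 1:20
             m = tanh(beta*m);
         end
         fprintf('beta = %.1f: overlap converges to %.3f\n', beta, m);
     end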
  12. Memory recall is a relaxation process to fixed points. Tank & Hopfield (1987) Scientific American.
  13. Demo: Matlab example. N = 25, β = 3.
  14. Demo: Matlab example. N = 25, β = 0.8.
  15. Exercise: fill in the Matlab code.

     %% parameters
     N = 5^2;      % # neurons
     beta = 3;     % inverse temperature
     T = 9;        % # simulation steps
     %% M=3 patterns
     P = [ [1,1,1,1,-1, -1,1,-1,-1,1, -1,1,-1,-1,1, -1,1,-1,-1,1, -1,1,1,1,-1]; ...
           [1,1,1,1,1, -1,-1,-1,1,-1, -1,-1,-1,1,-1, 1,-1,-1,1,-1, 1,1,1,-1,-1]; ...
           [-1,1,1,1,1, 1,-1,-1,-1,-1, 1,-1,-1,-1,-1, 1,-1,-1,-1,-1, -1,1,1,1,1] ];
     figure(1);
     subplot(131); imagesc(reshape(P(1,:),5,5)); title('pattern 1');
     subplot(132); imagesc(reshape(P(2,:),5,5)); title('pattern 2');
     subplot(133); imagesc(reshape(P(3,:),5,5)); title('pattern 3');
     % connectivity matrix (Hebbian rule)
     W = 1/N*(P'*P);
     %% simulation
     S = 2*(rand(N,1)>0.5)-1;   % initial pattern
     figure(2); subplot(1,9,1); imagesc(reshape(S,5,5)); title('t=1');
     for t=2:T
         h = W*S;                      % inputs
         p = 1/2*(1+tanh(beta*h));     % Pr(S=+1)
         S = 2*(rand(N,1)<p)-1;        % stochastic Glauber dynamics
         figure(2); subplot(1,9,t); imagesc(reshape(S,5,5)); title(['t=' num2str(t)]);
     end

     Write your own code here.
  16. How many patterns can an N-neuron network remember? Stability condition: assume the network represents pattern $\nu$ at time $t_0$, i.e., $S_i(t_0) = p_i^\nu$. In the deterministic case the next state is
     $S_i(t_0+\Delta t) = \mathrm{sgn}\left(\frac{1}{N}\sum_{j}\sum_{\mu=1}^{M} p_i^\mu p_j^\mu p_j^\nu\right) = \mathrm{sgn}\left(p_i^\nu + \frac{1}{N}\sum_{j}\sum_{\mu\neq\nu} p_i^\mu p_j^\mu p_j^\nu\right) = \mathrm{sgn}\left(p_i^\nu\left(1 + a_i^\nu\right)\right)$,
     where the crosstalk term is $a_i^\nu = \frac{1}{N}\sum_{j}\sum_{\mu\neq\nu} p_i^\nu p_i^\mu p_j^\mu p_j^\nu$. Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics.
  17. How many patterns can an N-neuron network remember? The pattern bit is stable, $S_i(t_0+\Delta t) = \mathrm{sgn}\left(p_i^\nu(1 + a_i^\nu)\right) = p_i^\nu$, whenever $a_i^\nu > -1$. The crosstalk $a_i^\nu$ is a sum of many independent $\pm 1/N$ terms, so $\mathrm{E}[a_i^\nu] = 0$, $\mathrm{Var}[a_i^\nu] \approx M/N$, and $a_i^\nu \sim \mathcal{N}(0, M/N)$. The flip probability is therefore
     $P_{\mathrm{error}} = \Pr[a_i^\nu < -1] = \frac{1}{2}\left[1 - \mathrm{erf}\left(\sqrt{\frac{N}{2M}}\right)\right]$.
     Thus the number of patterns $M$ must be small compared with the number of neurons $N$. Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics.
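     A sketch checking the erf formula by sampling the crosstalk term directly ($N$, $M$, and the trial count here are illustrative):

     % Estimate Pr(a < -1) for the crosstalk a_i^nu and compare with theory.
     N = 500; M = 100; trials = 2000;
     a = zeros(trials,1);
     for k = 1:trials
         p = 2*(rand(M,N) > 0.5) - 1;     % M random patterns p^mu
         % crosstalk on neuron i=1 in pattern nu=1 (the j=i term is excluded)
         a(k) = p(1,1) * sum(sum( p(2:end,2:end) .* (p(2:end,1)*p(1,2:end)) )) / N;
     end
     Perr_sim = mean(a < -1);
     Perr_th  = 0.5*(1 - erf(sqrt(N/(2*M))));
     fprintf('P_error: simulated %.4f, theory %.4f\n', Perr_sim, Perr_th);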
  18. Physics of spin systems. Ising model: $\mathcal{H}(\mathbf{S}) = -J\sum_{\langle i,j\rangle} S_i S_j$, with a uniform coupling $J$ between nearest neighbors. Spin glass: $\mathcal{H}(\mathbf{S}) = -\sum_{i,j} J_{ij} S_i S_j$ with random couplings $J_{ij}$: short-range interaction between nearest neighbors (Edwards-Anderson model), or long-range, all-to-all interaction with $J_{ij} \sim \mathcal{N}\left(J_0/N,\, J^2/N\right)$ (Sherrington-Kirkpatrick model). Ising (1925); Edwards & Anderson (1975); Sherrington & Kirkpatrick (1975).
  19. Simple dynamics with random connections. Consider a network of $N$ interconnected neurons with random connections $W_{ij} \sim \mathcal{N}(0, \sigma^2/N)$ and dynamics $\frac{d\mathbf{x}}{dt} = -\mathbf{x} + W \tanh(\mathbf{x})$. Linearizing around the origin gives $\frac{d\mathbf{x}}{dt} = (-I + W)\,\mathbf{x}$. The origin $\mathbf{x} = 0$ is a fixed point; whether it is stable or unstable is determined by the eigenvalues of the connectivity matrix $W$.
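     A sketch of the transition at σ = 1 by direct Euler integration (the step size, network size, and the two σ values are illustrative):

     % Integrate dx/dt = -x + W*tanh(x) below and above the critical variance.
     N = 200; dt = 0.05; steps = 2000;
     for sigma = [0.8 1.5]
         W = sigma*randn(N)/sqrt(N);      % W_ij ~ N(0, sigma^2/N)
         x = 0.1*randn(N,1);
         for t = 1:steps
             x = x + dt*(-x + W*tanh(x)); % Euler step
         end
         fprintf('sigma = %.1f: final |x|/sqrt(N) = %.3f\n', sigma, norm(x)/sqrt(N));
     end

     Below the transition the activity decays to the origin; above it, the network sustains irregular activity.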
  20. Semicircle law of the eigenvalue density. Let all components be normally distributed, $W_{ij} \sim \mathcal{N}(0, 1/N)$, and the matrix symmetric, $W_{ij} = W_{ji}$. Semicircle law (Wigner 1951): in the limit of infinite $n$, the eigenvalues of an $n \times n$ random symmetric matrix $W$ follow the semicircle distribution $p(\lambda) = \frac{1}{2\pi}\sqrt{4 - \lambda^2}$.

     %% Parameters
     n = 10000; t = 1; v = []; dx = 0.1;
     %% Experiment
     for i = 1:t
         a = randn(n);        % random n-by-n matrix
         s = (a + a')/2;      % symmetrized matrix
         v = [v; eig(s)];     % eigenvalues
     end
     v = v/sqrt(n/2);
     %% Plot
     [count, x] = hist(v, -2:dx:2);
     cla reset; hold on;
     %% Theory
     plot(x, sqrt(4 - x.^2)/(2*pi), 'k-', 'LineWidth', 2);
     bar(x, count/(t*n*dx), 'facecolor', [0.7 0.7 0.7]);

     Wigner (1951).
  21. Circular law of the eigenvalue density. Let all components be normally distributed, $W_{ij} \sim \mathcal{N}(0, 1/N)$. Circular law (Girko 1984): in the limit of infinite $n$, the eigenvalues of an $n \times n$ random (not necessarily symmetric) matrix $W$ are distributed uniformly in the unit circle in the complex plane.

     N = 20000; sigma = 1.01;
     W = randn(N,N)/sqrt(N)*sigma;
     figure(1); clf; hold on;
     plot(eig(W)-1, 'k.');                  % spectrum of -I + W
     theta = linspace(0, 2*pi, 100);
     plot(cos(theta)-1, sin(theta), 'k');   % unit circle centered at -1
     plot([0 0], [-1 1], 'r');              % stability boundary Re(lambda) = 0
     set(gca, 'color', [0.9400 0.9400 0.9400]);
     axis equal;

     Girko (1984) Teor. Veroyatnost. i Primenen.
  22. Circular law of the eigenvalue density. [Figure: eigenvalue spectra for σ = 0.99, 1.00, and 1.01.]
  23. Dale's law: neurons are either excitatory or inhibitory. If neuron $j$ is excitatory, $W_{ij} \geq 0$ for all $i$; if inhibitory, $W_{ij} \leq 0$ for all $i$, so every column of $W$ has a single sign. Exercise (see the starter sketch below): examine the dynamics of the neural network when Dale's law is imposed on the random connection matrix, i.e., all components in a column are either positive (excitatory) or negative (inhibitory). This problem has already been analyzed by Rajan and Abbott (2006). Rajan & Abbott (2006) Phys Rev Lett.
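     A starter sketch for the exercise, assuming 80% excitatory columns and inhibition scaled so the mean input balances (the exact spectral analysis is in Rajan & Abbott):

     % Random matrix obeying Dale's law: each column has a single sign.
     N = 1000; fE = 0.8; nE = round(fE*N);        % assumed E/I proportions
     W = abs(randn(N))/sqrt(N);                   % nonnegative magnitudes
     W(:,nE+1:end) = -W(:,nE+1:end) * fE/(1-fE);  % stronger inhibition balances the row mean
     ev = eig(W);
     figure; plot(real(ev), imag(ev), 'k.'); axis equal;
     xlabel('Re \lambda'); ylabel('Im \lambda');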
  24. Hierarchical modular connectivity structure. Anatomical studies suggest that cortical neurons are not randomly connected but rather connected in a modular and hierarchical manner (compare: regular network, random network, small world, hierarchical modular (HM), stochastic HM; cf. cat visual cortex). An $n \times n$ hierarchical network is built as follows: (1) $m$ on-diagonal blocks of size $s$ are connected with probability $p_m$, where $n = ms$; (2) the first level of off-diagonal blocks of size $s$ are connected with probability $p_c$; (3) subsequent levels of off-diagonal blocks are of size $2s, 4s, 8s, \dots$ and connected with probabilities $p_c q, p_c q^2, p_c q^3, \dots$ Exercise (a starter sketch follows): examine the dynamics of a neural network with a hierarchical modular connectivity matrix. To my knowledge, this problem has NOT been analyzed so far. Robinson et al. (2009) Phys Rev Lett; Aljadeff, Stern, Sharpee (2015) Phys Rev Lett.
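     A starter sketch implementing the recipe above; block sizes and probabilities are illustrative, and the dyadic hierarchy level is computed from the module indices:

     % Hierarchical modular (HM) random connectivity and its eigenvalue cloud.
     s = 32; L = 3; m = 2^L; n = m*s;    % module size, #levels, #modules, network size
     pm = 0.3; pc = 0.1; q = 0.5;        % illustrative connection probabilities
     Pb = zeros(m);
     for i = 1:m
         for j = 1:m
             if i == j
                 Pb(i,j) = pm;                              % on-diagonal blocks
             else
                 lev = floor(log2(bitxor(i-1,j-1))) + 1;    % hierarchical level 1..L
                 Pb(i,j) = pc * q^(lev-1);                  % pc, pc*q, pc*q^2, ...
             end
         end
     end
     P = kron(Pb, ones(s));                   % expand to neuron-level probabilities
     W = (rand(n) < P) .* randn(n)/sqrt(n);   % Gaussian weights on the HM skeleton
     figure; plot(eig(W), 'k.'); axis equal;  % complex eigenvalues, Re vs Im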
  25. Firing-rate equation. Firing-rate dynamics: $\tau_r \frac{dv}{dt} = -v + F(I_s)$. Synaptic input dynamics: $\tau_s \frac{dI_s}{dt} = -I_s + \sum_{b=1}^{N_u} w_b u_b = -I_s + \mathbf{w}\cdot\mathbf{u}$. When $\tau_s \ll \tau_r$, the synaptic input relaxes quickly to $I_s \approx \mathbf{w}\cdot\mathbf{u}$, and the two equations reduce to the single firing-rate equation $\tau_r \frac{dv}{dt} = -v + F(\mathbf{w}\cdot\mathbf{u})$. Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience.
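     A sketch of this timescale-separation argument, comparing the two-equation model with the reduced one ($F$, $\mathbf{w}$, $\mathbf{u}$, and the time constants are illustrative):

     % When tau_s << tau_r, the full model and the reduced model nearly coincide.
     tau_r = 20; tau_s = 2; dt = 0.1;
     F = @(I) max(I,0);                   % threshold-linear activation (assumed)
     w = [1; -0.5]; v1 = 0; v2 = 0; Is = 0;
     for t = 1:3000
         u  = [1; 0.5] * (mod(t*dt,100) < 50);   % square-wave input (illustrative)
         Is = Is + dt/tau_s*(-Is + w'*u);        % synaptic input dynamics
         v1 = v1 + dt/tau_r*(-v1 + F(Is));       % full two-equation model
         v2 = v2 + dt/tau_r*(-v2 + F(w'*u));     % reduced firing-rate equation
     end
     fprintf('full: %.3f, reduced: %.3f\n', v1, v2);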
  26. Feedforward and recurrent networks. Feedforward network: $\tau \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{F}(W\mathbf{u})$. Recurrent network: $\tau \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{F}(W\mathbf{u} + M\mathbf{v})$. Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience.
  27. Excitatory-inhibitory network. With $v_E$ the excitatory and $v_I$ the inhibitory population activity:
     $\tau_E \frac{dv_E}{dt} = -v_E + F_E\left(h_E + M_{EE} v_E + M_{EI} v_I\right)$,
     $\tau_I \frac{dv_I}{dt} = -v_I + F_I\left(h_I + M_{IE} v_E + M_{II} v_I\right)$,
     where $M_{EE} \geq 0$, $M_{IE} \geq 0$, $M_{EI} \leq 0$, $M_{II} \leq 0$. Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience.
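     A minimal sketch of this two-population model; the parameter values are illustrative, chosen in the spirit of Dayan & Abbott's examples, and for these values the fixed point should be unstable so the activity settles into an E-I oscillation:

     % Euler integration of the E-I rate model; signs of M follow the slide.
     MEE = 1.25; MEI = -1; MIE = 1; MII = 0;
     hE = 5; hI = 0; tauE = 10; tauI = 50; dt = 0.1;
     F = @(x) max(x,0);                  % threshold-linear (assumed)
     vE = 1; vI = 0.5; vtrace = zeros(5000,1);
     for t = 1:5000
         vEn = vE + dt/tauE*(-vE + F(hE + MEE*vE + MEI*vI));
         vI  = vI + dt/tauI*(-vI + F(hI + MIE*vE + MII*vI));
         vE  = vEn; vtrace(t) = vE;
     end
     plot((1:5000)*dt, vtrace); xlabel('t'); ylabel('v_E');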
  28. Continuously labeled network. A discretely labeled network has activities $\mathbf{v} = (v_1, \dots, v_N)^{\mathsf T}$ and a connection matrix with elements $M_{ab}$. In the continuum limit the label becomes a continuous variable $\theta$ (e.g., preferred orientation): $v_a \to v(\theta)$, $M_{ab} \to M(\theta, \theta')$, and the rate equation becomes
     $\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + F\left(h(\theta) + \int \frac{d\theta'}{\pi}\, M(\theta, \theta')\, v(\theta')\right)$,
     where $h(\theta) = \int \frac{d\theta'}{\pi}\, W(\theta, \theta')\, u(\theta')$ is the feedforward input. Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience.
  29. Linear network: selective amplification.
     $\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + h(\theta) + \int_{-\pi/2}^{\pi/2} \frac{d\theta'}{\pi}\, M(\theta - \theta')\, v(\theta')$, with $M(\theta - \theta') = \lambda_1 \cos\left(2(\theta - \theta')\right)$.
     Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience.
  30. Nonlinear network: gain modulation.
     $\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + F\left(h(\theta) + \int_{-\pi/2}^{\pi/2} \frac{d\theta'}{\pi}\, M(\theta - \theta')\, v(\theta')\right)$, with $M(\theta - \theta') = \lambda_1 \cos\left(2(\theta - \theta')\right)$ and rectifying $F$.
     Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience.
  31. Nonlinear network: winner-takes-all selection. The same ring network as slide 30, now performing a winner-takes-all selection between two competing inputs. Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience.
  32. Nonlinear network: working memory. The same ring network as slide 30: a bump of activity persists after the tuned input is removed. Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience.
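     A sketch of the ring model of slides 29-32, discretized on N angles; the value of λ1, the input contrast, and the threshold-linear F are illustrative (implicit expansion in th' - th needs Matlab R2016b or later):

     % Discretized continuously-labeled ring network with cosine recurrence.
     N = 100; th = linspace(-pi/2, pi/2, N); dth = pi/N;
     lam1 = 1.9;                                 % recurrent strength (illustrative)
     M = lam1*cos(2*(th' - th)) * dth/pi;        % M(theta - theta'), discretized
     c = 1; eps = 0.1;
     h = c*(1 + eps*cos(2*th'));                 % weakly tuned input peaked at 0
     F = @(x) max(x,0);
     v = zeros(N,1); tau = 10; dt = 0.5;
     for t = 1:2000
         v = v + dt/tau*(-v + F(h + M*v));
     end
     plot(th, v);   % the weak input bias is amplified into a sharp bump at theta = 0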
  33. Non-symmetric connectivity: winnerless competition. Generalized Lotka-Volterra dynamics:
     $\frac{da_i}{dt} = a_i(t)\left[\sigma_i(\mathbf{S}) - a_i(t) - \sum_{j\neq i}\rho_{ij}\, a_j(t)\right] + S_i(t)$.
     The dynamics in phase space move along heteroclinic connections between saddle points; memories are represented in terms of heteroclinic trajectories. Rabinovich et al. (2001) Phys Rev Lett.
  34. Winnerless competition: coupled FitzHugh-Nagumo neurons.
     $\tau_x \frac{dx_i}{dt} = f(x_i) - y_i - z_i\,(x_i - \nu) + 0.35 + S_i$, with $f(x) = x - x^3/3$,
     $\frac{dy_i}{dt} = x_i - b y_i + a$,
     $\tau_z \frac{dz_i}{dt} = \sum_j g_{ji}\,\Theta(x_j) - z_i$,
     where $\Theta$ is the step function and $g_{ji}$ is the 9×9 binary inhibitory connection matrix listed on the next slide. Rabinovich et al. (2001) Phys Rev Lett.
  35. Winnerless competition: coupled FitzHugh-Nagumo neurons (continued). The nonzero couplings among the nine neurons are
     $g_{15} = g_{52} = g_{21} = g_{24} = g_{45} = g_{65} = g_{26} = g_{36} = g_{53} = g_{74} = g_{57} = g_{84} = g_{58} = g_{86} = g_{89} = g_{95} = 2$.
     [Figure: directed inhibitory connection graph among neurons 1-9.] Rabinovich et al. (2001) Phys Rev Lett.
  36. Winnerless competition: Matlab simulation.

     function Y = odeWLC(t, X)
     % parameters
     tau1 = 0.08; tau2 = 3.1;
     a = 0.7; b = 0.8; nu = -1.5;
     % stimulus
     S = [0.1; 0.15; 0.0; 0.0; 0.15; 0.1; 0.0; 0.0; 0.0];
     % connectivity
     g = zeros(9, 9); g0 = 2;
     g(1,5)=g0; g(5,2)=g0; g(2,1)=g0; g(2,4)=g0;
     g(4,5)=g0; g(6,5)=g0; g(2,6)=g0; g(3,6)=g0;
     g(5,3)=g0; g(7,4)=g0; g(5,7)=g0; g(8,4)=g0;
     g(5,8)=g0; g(8,6)=g0; g(8,9)=g0; g(9,5)=g0;
     % differential equations
     x = X(1:9); y = X(10:18); z = X(19:27);
     dxdt = ((x-x.^3/3)-y-z.*(x-nu)+0.35+S)/tau1;
     dydt = x-b*y+a;
     dzdt = (g'*(x>=0)-z)/tau2;
     Y = [dxdt; dydt; dzdt];

     x0 = [-1.2*ones(9,1); -0.62*ones(9,1); 0*ones(9,1)];
     [T,X] = ode45(@odeWLC, [0 500], x0);
     % all FHN neurons
     figure(1);
     for n=1:9
         subplot(9,1,n); plot(T,X(:,n),'k');
     end
     % PCA
     [y, s, l] = pca(X(:,1:9));
     figure(3); plot3(s(:,2), s(:,3), s(:,4)); grid on;

     Rabinovich et al. (2001) Phys Rev Lett.
  37. Winnerless competition: Matlab simulation. Rabinovich et al. (2001) Phys Rev Lett.
  38. Example: olfactory processing in insects. Mazor & Laurent (2005) Neuron.
  39. Cell assemblies: functional units of brain computation. Definition. Cell assembly: a group of neurons that perform a given action or represent a given percept. Hebb (1949) The Organization of Behavior; Harris (2005) Nature Rev Neurosci.
  40. Cell assemblies: functional units of brain computation (continued). [Figure: schematic of cell assemblies among neuron groups 1-4.] Hebb (1949) The Organization of Behavior; Harris (2005) Nature Rev Neurosci.
  41. Synfire chain in a feedforward network. Diesmann et al. (1999) Nature.
  42. Synfire chain in a feedforward network. Brian Spiking Neural Network Simulator, http://briansimulator.org/
  43. A feedforward synfire chain requires activity tuning. Pulse packets $(n, \sigma)$ with $n$ spikes and temporal jitter $\sigma$: $(30\ \mathrm{spikes}, 2\,\mathrm{ms})$, $(40, 2\,\mathrm{ms})$, $(50, 2\,\mathrm{ms})$, $(60, 2\,\mathrm{ms})$, $(70, 2\,\mathrm{ms})$, $(80, 2\,\mathrm{ms})$. Diesmann et al. (1999) Nature.
  44. A feedforward synfire chain requires activity tuning. Pulse packets with fixed $n = 80$ spikes and increasing jitter: $\sigma = 1, 2, 3, 4, 5, 10\,\mathrm{ms}$. Diesmann et al. (1999) Nature.
  45. A synfire chain can be made robust by feedback connections. [Figure: synfire chain with feedback connections among groups 1-4.] Moldakarimov et al. (2015) PNAS.
  46. A synfire chain can be made robust by feedback connections. Pulse packets $(n, \sigma) = (80\ \mathrm{spikes}, 5\,\mathrm{ms})$ with excitatory feedback strengths 0.1, 0.2, and 0.3. Moldakarimov et al. (2015) PNAS.
  47. Neuronal avalanches with scale-free dynamics. $P(\mathrm{size}) \propto \mathrm{size}^{-\alpha}$, where $\alpha \approx 3/2$. Beggs & Plenz (2003) J Neurosci.
  48. Neuronal avalanche as a branching process. Let $P_n(s, p)$ be the probability of an avalanche of size $s$, and $Q_n(p)$ the probability of an avalanche reaching the boundary, in a branching process of $n$ generations with branching probability $p$. Their generating functions $f_n(x, p) = \sum_s P_n(s, p)\, x^s$ and $g_n(x, p)$ obey the recursions
     $f_{n+1}(x, p) = x\left(1 - p + p f_n(x, p)\right)^2$,
     $g_{n+1}(x, p) = \left(1 - p + p g_n(x, p)\right)^2$.
     Zapperi et al. (1995) Phys Rev Lett.
  49. Neuronal avalanche as a branching process. For $n$ large enough, the recursion reaches the fixed point $f(x, p) = x\left(1 - p + p f(x, p)\right)^2$, whose solution (with $q = 1 - p$) is
     $f(x, p) = \dfrac{1 - 2pqx - \sqrt{1 - 4pqx}}{2p^2 x}$.
     Using the Taylor expansion $\sqrt{1-u} = 1 - \frac{u}{2} - \frac{u^2}{8} - \frac{u^3}{16} - \frac{5u^4}{128} - \frac{7u^5}{256} - \cdots = 1 - \sum_{s\geq 1} \frac{(2s-3)!!}{2^s s!}\, u^s$, the generating function can be expanded in powers of $x$, and the probability of an avalanche of size $s$ behaves as
     $P(s) \propto \dfrac{(2s-3)!!}{2^s s!}\, (4pq)^s \simeq \dfrac{1}{2\sqrt{\pi}}\, s^{-3/2}\, e^{-s/s_c}, \quad s_c = \dfrac{1}{\left|\ln 4pq\right|}$,
     which at the critical point $p = 1/2$ (where $4pq = 1$) reduces to the power law $P(s) \propto s^{-3/2}$. Zapperi et al. (1995) Phys Rev Lett.
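     A sketch verifying the $s^{-3/2}$ law by direct simulation of the critical branching process (two descendants per active site, activation probability p = 1/2; the avalanche count and size cap are illustrative):

     % Simulate avalanches of a branching process and plot the size CCDF.
     p = 0.5; nav = 20000; cap = 1e4;
     sizes = zeros(nav,1);
     for k = 1:nav
         active = 1; s = 0;
         while active > 0 && s < cap
             s = s + active;
             active = sum(rand(2*active,1) < p);   % binomial branching
         end
         sizes(k) = s;
     end
     srt = sort(sizes); ccdf = 1 - (1:nav)'/nav;
     loglog(srt, ccdf, 'k.'); hold on;
     loglog(srt, srt.^(-0.5), 'k-');   % CCDF slope -1/2 <=> P(s) ~ s^{-3/2}
     xlabel('avalanche size s'); ylabel('Pr(size > s)');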
  50. Echo-state network: harnessing chaotic units. Jaeger & Haas (2004) Science.
  51. Echo-state network.

     % load the data
     trainLen = 2000; testLen = 2000; initLen = 100;
     data = load('MackeyGlass_t17.txt');
     % generate the ESN reservoir
     inSize = 1; outSize = 1; resSize = 1000;
     a = 0.3;   % leaking rate
     rand('seed', 42);
     Win = (rand(resSize,1+inSize)-0.5) .* 1;
     W = rand(resSize,resSize)-0.5;
     opt.disp = 0;
     rhoW = abs(eigs(W,1,'LM',opt));   % spectral radius
     disp 'done.'
     W = W .* (1.25/rhoW);             % rescale the spectral radius
     % allocate memory for the design (collected states) matrix
     X = zeros(1+inSize+resSize, trainLen-initLen);
     % set the corresponding target matrix directly
     Yt = data(initLen+2:trainLen+1)';
     % run the reservoir with the data and collect X
     x = zeros(resSize,1);
     for t = 1:trainLen
         u = data(t);
         x = (1-a)*x + a*tanh( Win*[1;u] + W*x );
         if t > initLen
             X(:,t-initLen) = [1;u;x];
         end
     end
     % train the output
     reg = 1e-8;   % regularization coefficient
     X_T = X';
     Wout = Yt*X_T * inv(X*X_T + reg*eye(1+inSize+resSize));
     % generative run
     Y = zeros(outSize,testLen);
     u = data(trainLen+1);
     for t = 1:testLen
         x = (1-a)*x + a*tanh( Win*[1;u] + W*x );
         y = Wout*[1;u;x];
         Y(:,t) = y;
         u = y;   % feed the output back as the next input
     end

     http://minds.jacobs-university.de/mantas/code
  52. Echo-state network.
  53. Summary
     • Population neural dynamics can be formulated using techniques developed in physics and dynamical systems, including spin models, phase transitions, scale-free dynamics, and so on.
     • Population activity exhibits a variety of emergent phenomena, such as attractor memory dynamics, winner-takes-all processes, short-term memory, and winnerless competition.
     • Population neural dynamics is a very active field, and new studies continue to appear today.
