Modeling Neural Networks In Silico - A Neural Network Model and Novel Experimental Design
Using Light-Directed Electrical Stimulation of Neurons for Statistical Refinement of
Computational Neural Network Models

Daniel Carlin‡, B.S.E., Daniel Cook†, and Joshua E. Mendoza-Elias§
Fall 2007
Duke University
Engineering 183: Special Topics – Modeling Neural Networks In Silico
Dr. David Needham, Ph.D., Pratt School of Engineering
Submitted: Sunday, December 16, 2007

 
                                                           Abstract                                 

Long‐term potentiation (LTP) and Long‐term depression (LTD) are the best understood
mechanisms by which biological systems achieve the sculpting of neural pathways necessary
for learning and memory. Foundational to the understanding of LTP and LTD is the model of
Hebbian learning. Although research has demonstrated that neural networks can be
developed and trained in silico to mimic the computational function of neural ensembles, the
development of a truly realistic model remains elusive. We construct a simulation of Hebbian
learning implementing the Hopfield network model. We then propose an experimental setup
that will use the light‐directed electrical stimulation of neurons cultured on silicon wafers in
order to gain information on the parameters of synaptic strength and firing frequencies for
all neurons in the system simultaneously (the determinants of LTP and LTD). This will allow
neural networks to be run in parallel in the in vivo and in silico settings, enabling statistical
comparison between the biological model and the computer simulation, and further refinement
to create a more biologically relevant computational model.

    Keywords: short and long-term potentiation (LTP), short and long-term depression (LTD),
    neural plasticity, Hebbian learning, Hopfield Network, parallel neural network training,
                          light-directed electrical stimulation (LDES).
                                                                                                    
Note: This study was submitted to Vertices, The Journal of Science and Technology at Duke
University, which indicated an interest in publication of this paper. However, they decided
that it was too technical for their audience and forwarded it to Jupiter, The Undergraduate
Journal for Engineering, which was suggested as a better venue.




                                                        
‡ Daniel Carlin, Pratt School of Engineering, Department of Biomedical Engineering. E-mail: daniel.carlin@duke.edu.
† Daniel Cook, Trinity College of Arts and Sciences, Department of Computer Science. E-mail: daniel.cook@duke.edu. Website: http://www.duke.edu/~dkc6
§ Joshua Mendoza-Elias, Trinity College of Arts and Sciences, Department of Biology. E-mail: joshua.mendozaelias@duke.edu. Website: http://www.duke.edu/~jme17



Table of Contents:
1 Introduction
    1.1 Mechanisms Underlying LTP and LTD
        Figure 1: Molecular Mechanisms of LTP
        Figure 2: Signaling Cascades Activated by LTP and LTD
        Figure 3: Short-term and Long-term Changes to LTP
        Figure 4: Properties of LTP and LTD - Specificity and Associativity
    1.2 Spike Timing Dependent Plasticity (STDP)
        Figure 5: The Biological Basis of our Model
    1.3 Theoretical Models of Neural Networks
        Figure 6: Abstract Representation of Model
        Figure 7: Firing Relationships
2 Mathematical Formulation
    2.1 Setup
    2.2 Hebbian Learning
    2.3 Inhibitory neurons and model training
        Supplement 1: Equation Summary
3 Computer Simulation
    3.1 Setup
    3.2 Normalization step
    3.3 Differential Step
    3.4 Solving step
        Figure 8: Convergence of Firing Rates
        Figure 9: Rates of Change in Firing Rates
    3.5 Simulation Results
4 Future Directions
        Figure 10: Proposed Experimental Setup employing Light-Directed Electrical Stimulation (LDES) of Neurons cultured on Silicon Wafer
5 Acknowledgements
6 Appendix: Simulation Code
        Code 1: model.m
        Code 2: normalize.m
        Code 3: dT.m
        Code 4: alpha.m
        Code 5: beta.m
        Code 6: deadneurons.m
7 References


 
 
 
 
 
 




1 Introduction                                                                                                                         

Synaptic connections between neurons form the basis of the neural circuits that comprise neural
networks. More importantly, it is the dynamic nature of synaptic connections that imparts the
property of plasticity enabling neural networks to "learn." Synaptic connections are in a
constant state of change in response to neural activity and other mechanisms which operate on
short-term and long-term time scales (milliseconds to years) [1, 8, 20]. In terms of neural
ensembles, neurons are "trained" by mechanisms of potentiation and "detrained" by mechanisms
of depression. In short-term potentiation, synaptic connections are modulated through the
mechanisms of facilitation, augmentation, and potentiation, all of which function by enhancing
neurotransmitter release from presynaptic terminals. Complementary to this are mechanisms of
short-term depression, which decrease the amount of neurotransmitter released and appear to
result from activity-dependent depletion of synaptic vesicles primed for exocytosis. More
pertinent to this research paper are the mechanisms of long-term potentiation (LTP) and
long-term depression (LTD). These biological mechanisms have provided the foundation of
Hebbian models of learning and the modeling of neural networks. Furthermore, LTP and LTD have
been most thoroughly studied in the mammalian hippocampus and have a high degree of biological
relevance for biomedically related studies and applications [4, 8].
 
1.1 Mechanisms Underlying LTP and LTD 
LTP functions at time scales of 30 minutes or longer. LTP is accomplished by the release of
glutamate from the presynaptic terminal and its binding to AMPA receptors on the postsynaptic
membrane (See: Figure 1). AMPA receptors generate EPSPs (excitatory postsynaptic potentials)
that cause the expulsion of magnesium from the NMDA receptor in the postsynaptic membrane,
thus allowing calcium to enter the postsynaptic cell (See: Figure 1). In turn, these rapid,
high calcium spikes cause the postsynaptic neuron to activate signal transduction cascades
that include Ca+2/calmodulin-dependent protein kinase II (CaMKII) and protein kinase C (PKC)
(See: Figure 2) [2, 16]. These cascades are involved in short-term changes that result in
increased synaptic efficacy for the maintenance of LTP, such as an increase in AMPA receptors
in the postsynaptic neuron (See: Figure 3) [6]. In terms of longer-term changes, repeated
stimulation causes increased activation of genetic programs involved in upregulating the
transcription of AMPA receptors, tubulin, actin, and signal transduction cascade networks -
essentially genes that will increase the size of the synapse, the radius of the axon, and the
sensitivity to the neurotransmitter glutamate by increasing AMPA receptors in the postsynaptic
membrane. These long-term changes in gene expression profiles are believed to be the basis by
which essentially permanent modifications to brain function are achieved. LTD functions
antagonistically to prevent synapses from attaining maximum synaptic efficacy, as over time
this would make the encoding of new information impossible.
         LTD functions in a similar fashion that differs essentially in the manner by which
calcium levels rise in the postsynaptic cell (which is modulated by stimulation from other
neurons). In opposition to LTP, whose induction requires brief high-frequency stimulation, LTD
requires stimulation at a low rate (about 1 Hz) for long periods (10-15 minutes). The
low-frequency stimulation causes calcium levels to rise in a slower, steadier fashion. In turn,
the rise in calcium levels causes the activation of Ca+2-dependent phosphatases that in the
short term result in the loss of AMPA receptors and in the long term change gene expression to
promote a decrease in the size of the synaptic bouton and the radius of the axon [2, 3, 5,
10, 20].








Figure 1: Molecular Mechanisms of LTP. A: Resting Potential: EPSPs generated by presynaptic glutamate release are
insufficient to depolarize the postsynaptic neuron and relieve the Mg+2 block of NMDA receptors. B: Postsynaptic
Depolarization: Sufficient stimulation depolarizes the postsynaptic neuron, unblocking NMDA receptors and allowing
the entry of Ca+2 ions, which activates short-term LTP mechanisms. [20]
Reproduced from Purves et al., Neuroscience: Fourth Edition (2008) [20].




Figure 2: Signaling Cascades Activated by LTP and LTD. A: Signaling cascades in the postsynaptic cell activated in
response to the activation of LTP. B: Signaling cascades in the postsynaptic cell activated in response to the
activation and maintenance of LTD. [6, 20]
Reproduced from Purves et al., Neuroscience: Fourth Edition (2008) [20].








                                                                                                     
Figure 3: Short-term and Long-term Changes to LTP. Repeated stimulation causes short-term pathways (signaling
cascades) to drive changes in gene expression that serve to enhance whichever process (LTP or LTD) is dominant. [20]
Reproduced from Purves et al., Neuroscience: Fourth Edition (2008) [20].
                                                           




                                                                                                             
Figure 4: Properties of LTP and LTD - Specificity and Associativity. Hebbian theory relies on spike-timing
dependent plasticity (STDP). A: Strong activation of a neural pathway without adjacent pathways firing within the
interval time-window leaves the weaker pathway unstrengthened. B: Strong activation of a neural pathway coupled with
adjacent pathways firing within the interval time-window causes both the already strong synapse and the weaker
synapse to be strengthened. [5, 10, 20]
Reproduced from Purves et al., Neuroscience: Fourth Edition (2008) [20].







1.2 Spike Timing Dependent Plasticity (STDP) 
Central to both mechanisms is spike-timing dependent plasticity (STDP). This imparts onto
LTP and LTD the property of associativity¹ (See: Figure 4). At a given low frequency of
synaptic activity, LTP will occur if presynaptic activity precedes the postsynaptic action
potential, while LTD occurs if presynaptic activity follows the postsynaptic action potential.
The magnitude of synaptic change is a very sensitive function of the time interval between the
two, with no changes observed if the presynaptic and postsynaptic activity are separated by
100 milliseconds or longer. This requirement for precise timing between presynaptic and
postsynaptic activity for the induction of these forms of long-lasting synaptic plasticity is
what drives the development of neural networks. Although the mechanisms involved are not yet
fully understood, it appears that these properties of STDP arise from timing-dependent
differences in postsynaptic Ca+2 signals. Specifically, if a postsynaptic action potential
occurs after presynaptic activity, the resulting depolarization will relieve the block of NMDA
receptors by Mg+2, yielding LTP. In contrast, if the postsynaptic action potential occurs
prior to the presynaptic action potential, the depolarization associated with the postsynaptic
action potential will have subsided by the time an EPSP occurs. This will reduce the amount of
Ca+2 entry through the NMDA receptors, leading to LTD [5, 20].
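To make the timing dependence concrete, the sketch below plots an exponential STDP window of the kind commonly used to summarize these observations. It is purely illustrative (our model in Sections 2 and 3 is rate-based, not spike-based), and the amplitudes A_plus and A_minus and the time constant tau are assumed values, not measurements.

% Illustrative STDP window (assumed parameters; not part of the rate-based model below).
% dt = t_post - t_pre (ms): pre-before-post (dt > 0) gives LTP, post-before-pre (dt < 0) gives LTD,
% and the change decays to essentially zero beyond roughly 100 ms.
A_plus = 0.05; A_minus = 0.055; tau = 20;          % assumed amplitudes and time constant (ms)
dt = -100:100;                                     % spike-timing differences (ms)
dw = zeros(size(dt));
dw(dt > 0) =  A_plus  * exp(-dt(dt > 0) / tau);    % potentiation branch
dw(dt < 0) = -A_minus * exp( dt(dt < 0) / tau);    % depression branch
plot(dt, dw); xlabel('t_{post} - t_{pre} (ms)'); ylabel('\Delta w');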




                                                                                                            
Figure 5: The Biological Basis of our Model. A: Our model was inspired by the connectivity of neurocytes and
development of motor coordination in the human cerebellum. B: Signal from the eye (red) is transmitted via
climbing fibers (1–input neuron) which stimulate parallel fibers (interneurons: 2–excitatory or 3–inhibitory).
Parallel fibers are believed to function as “timers.” Those with the correct “timing” (e.g., phase) in relation to the
initial stimulus (climbing fibre) will serve as an excitatory neuron, and vice versa. C: Purkinje cells (4–output
neuron) receive stimulus from Climbing fibers and Parallel fibers. The key serves to show the relationships between
in vivo neurocytes and our in silico representation of them in Figure 6 (below).
A: Reproduced from http://www.mphy.lu.se/avd/nf/hesslow/neuroscience/classcond.html
B, C: Reproduced from Purves et al., Neuroscience: Fourth Edition (2008) [20].

                                                        
¹ Weak stimulation by itself will not trigger LTP; however, if one pathway is strongly activated at the same time a
neighboring pathway is weakly stimulated, then both synaptic pathways will undergo LTP. LTP and LTD are restricted to
the activated (stimulated) synapse. In short, cells that fire together wire together; this associativity is distinct
from the property of specificity.






1.3 Theoretical Models of Neural Networks  
There are several different models of learning in neural networks, most of which originate
from Hebb's postulate that "When an axon of cell A is near enough to excite a cell B and
repeatedly or persistently takes part in firing it, some growth process or metabolic change
takes place in one or both cells such that A's efficiency, as one of the cells firing B, is
increased" [13]. These models are separable into two main classes: firing-rate-based and
spike-based. They differ in that firing-rate models use the average firing rate of a neuron
over a given unitary time period as the basis of computation, while spiking models use the
spike train of a given neuron in a probabilistic fashion.
     Our model was biologically inspired by the action of LTP and LTD in determining
neural architecture in the cerebellum (See: Figure 5). Our model is a smoothed,
synchronous version of the firing‐rate model based on the Hopfield attractor network (see
Figure 6). This class of network stabilizes to an attractor (a stable pattern of neuron firing).
The network can be considered a function of the input neurons based on the synaptic
weights, so that training the network consists of defining this attractor (which can also be
termed "memory"). This is done by reinforcing Hebbian learning proportionately to the
difference between the emergent and desired output signals at each time step, given an input
signal. We implement a slightly modified version of the model originally published by
Hopfield [14], differing in two respects:

     1. We do not update individual neurons asynchronously, but rather compute the
     firing rates of all neurons synchronously at each time step. This is justified because
     we are calculating averages over each time‐step, rather than trying to discretize a
     continuous process ‐ in other words, we are abstracting away from the mechanics of
     individual firing neurons in order to observe the state of the entire system at every
     time (t).

     2. We use neurons with a continuous, graded response rather than two-state
     McCulloch-Pitts neurons. Networks of graded-response neurons have been shown
     to exhibit similar emergent computational properties to the original model [15].
     Once again, this is justified because we are measuring the average firing rates of all
     neurons over a given time step rather than simply whether or not a neuron fired
     during that time step. Thus time steps in our simulation are several times longer
     than the refractory period of a single neuron. This allows computation of the
     simulation for a larger number of neurons, enabling a larger parallel experiment to
     be run.

We should note that networks with synchronous updating may have certain differences in
emergent dynamical properties. These were beyond the scope of this study, but more
information may be found in Little et al. [17]. (See Section 2 for a mathematical formulation
of our firing‐rate‐based model of Hebbian learning implementing a Hopfield attractor
network).
     Spike‐based models are newer and not yet as well studied as firing‐rate models. These
models assume that at a given time-step (t), there is some unknown probability distribution
describing the likelihood of each neuron spiking at that time point. This distribution is
parameterized around the hidden variable (the true input). Thus, rather than gradient
descent toward a function as with firing‐rate based models, spiking models exhibit a gradient




                                                                                                                

Figure 6: Abstract Representation of Model. The diagram outlines the relationships between the different
types of neurons: input neuron (1–Climbing fiber), excitatory interneuron (2–Parallel fiber firing in phase),
inhibitory interneuron (3–Parallel fiber firing out of phase), and output neuron (4– Purkinje cell). Please note the
key to the left corresponds to the key in Figure 5.




                                                                                                               
Figure 7: Firing Relationships. A: The equations governing the model: (i.) firing rate; (ii.) synaptic strength;
(iii.) the "learning rule." B: In phase - (i.) If two neurons are synchronous, then the synaptic weight will increase
by 1.0. (ii.) The closer the pre-synaptic firing rate v_i (phase = α) of two neurons (increasing synchronicity), the
more v_j and the synaptic weight w_i,j increase in the next time-step. C: Out of phase - (i.) If two neurons are
completely asynchronous, then the synaptic weight will increase by 0. (ii.) The further apart the pre-synaptic firing
rate v_i (phase = α) of two neurons (increasing asynchronicity), the more v_j and the synaptic weight w_i,j decrease
in the next time-step.






descent towards a distribution parameterized around the attractor state (memory). These
models then study the ability of a population of individually noisy neurons to reconstruct the
hidden variable. Indeed, much of the work thus far focuses on coding of hidden variables in
populations of afferent neurons. See Zemel et al. [22] for an explanation of population coding,
and Deneve et al. [11] for a model showing that a population of spiking neurons can perform
Bayesian inference computations.
         We chose to simulate the Hopfield network primarily due to practical concerns,
namely that spike-based models are analytically and computationally intensive, and are
difficult to train. They are also not yet well understood, thus there is uncertainty as to which
model would be appropriate for our purposes here. Conversely, Hopfield attractors are
theoretically well understood (see Ermentrout et al. [9], ch. 5). Our model allows us to
compute a state of the entire network by solving a single system of linear differential
equations and then updating synaptic weights appropriately, which is a faster computation
than spike-based models allow.


 
 
 

 
 
 

 
 
 
 

 
 
 
 

 
 
 

 





2 Mathematical Formulation                                                                                                               
           Note: See Supplement 1 (end of Section 2) for an Equation Summary
 
2.1 Setup 
We define a vector V² of dimension N (we are modeling N neurons), where v_i represents the
average firing rate of neuron i over a given time step. We also define weights w_i,j,
representing the efficacy of synaptic signal propagation (i.e., connection strength) between
pre-synaptic neuron v_i and post-synaptic neuron v_j. Due to the directionality of synapses,
w_i,j does not necessarily equal w_j,i, because these represent two different synapses.

The rate of firing of a neuron is determined by the following:

    v_j = Σ_{i=1..N} w_i,j v_i ,                                  (1)

which is a linear combination of the firing rates of all neurons connected to neuron v_j,
weighted by the strength of each synapse.
         In a system with N neurons, this yields a system of N equations. We choose one
neuron, v_1, to be the "input" neuron. This can be achieved in a real system by voltage-clamping
the neuron; in this model we simply fix v_1 at a specific rate during each time step. Say we
choose the rate to be v_1 = a; then, assuming that no neuron feeds back on itself (w_i,i = 0),
the linear system of equations is:

    v_1 = a
    v_2 = w_1,2 v_1 + w_3,2 v_3 + … + w_N,2 v_N
    ⋮
    v_N = w_1,N v_1 + w_2,N v_2 + … + w_N−1,N v_N−1

Let D be a column vector as follows:

    D = ( a, 0, 0, …, 0 )ᵀ ,

and define T to be the following matrix:

    T = [ 1       0       0       …   0
          w_1,2   −1      w_3,2   …   w_N,2
          w_1,3   w_2,3   −1      …   w_N,3
          …       …       …       …   …
          w_1,N   w_2,N   w_3,N   …   −1    ] .                   (2)

Then we have the matrix equation

    T V = D ,                                                     (3)

where solving

    V = T⁻¹ D                                                     (4)
yields the average firing rates for all neurons over a given time step [5, 7, 12, 14, 19].
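As a concrete illustration of (2)-(4), the sketch below builds T and D for a hypothetical three-neuron network and solves for the firing rates; the weight values are arbitrary and chosen only for illustration (the full N = 20 simulation appears in the Appendix).

% Minimal sketch of equations (2)-(4) for N = 3 neurons (arbitrary illustrative weights).
a = 0.5;                          % clamped firing rate of the input neuron v_1
w12 = 0.4; w13 = 0.7;             % w_i,j: strength from presynaptic i to postsynaptic j
w23 = 0.2;
w31 = 0.6; w32 = 0.1;
T = [ 1    0    0  ;              % row 1: v_1 is fixed at a
      w12  -1   w32;              % row j >= 2 encodes v_j = sum_i w_i,j * v_i
      w13  w23  -1 ];
D = [ a; 0; 0 ];
V = T \ D;                        % eq. (4): average firing rates for this time step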
                                                        
² This is a discrete model; for simplicity, time arguments are omitted where immaterial.






2.2 Hebbian Learning
As in biological neurons, the simulated neurons "learn" by changing the synaptic weights w_i,j.
The rule by which these weights are changed is what defines Hebbian learning. We start by
assuming that dw_i,j/dt is a twice-differentiable function of the two local variables, v_i and
v_j. Expanding in a Taylor series:

    dw_i,j/dt = c_0 + c_1 v_i + c_2 v_j + c_3 v_i² + c_4 v_j² + c_5 v_i v_j + (higher-order terms)
Knowing that the change in weight should depend on the correlation between the two firing
rates, the simplest learning rule is obtained by setting c_0 through c_4 equal to zero and
c_5 = A, where A is some positive constant, i.e.,

    dw_i,j/dt = A v_i v_j
         This is the prototype for Hebbian learning, and is our representation of LTP.
However, we would also like to include a model for LTD, so we assume that the synaptic
weights decrease in strength whenever they are not used (i.e., when v_i v_j = 0) at a constant
rate, B. Then:

    dw_i,j/dt = A v_i v_j − B
        Additionally, we seek to bound the weights of the synapses between 0 (equivalent to
disconnected neurons) and 1 (a maximally efficient synapse). In order to do this, we choose
A = α (1 − w_i,j) and B = β w_i,j for positive constants α and β between 0 and 1. Here α and β
can be viewed as the relative strengths of LTP and LTD in a single time step, respectively. We
then have:

    dw_i,j/dt = α (1 − w_i,j) v_i v_j − β w_i,j                    (5)
        Furthermore, we want to bound the firing rates between 0 (not firing) and 1 (firing at
the maximally allowed rate for that neuron). In order to do this, the constants must be
normalized by dividing by the sum of the weights of the most strongly connected neuron at
every time step. To do this, we calculate the Hebbian learning change in synaptic strength
first, and then normalize by the largest sum of the weights going into any one neuron.
        While the equations above are continuous, our model is discrete. Thus, writing
Δw_i,j(t) = α (1 − w_i,j(t)) v_i(t) v_j(t) − β w_i,j(t), the rule for obtaining w_i,j(t+1) from
w_i,j(t) at each time step is:

    w_i,j(t+1) = [ w_i,j(t) + Δw_i,j(t) ] / max_k Σ_{i=1..N} [ w_i,k(t) + Δw_i,k(t) ]          (6)
This introduces the important concept of competition between synapses. The idea is that

synaptic weights can only grow at the expense of others so that if a certain subgroup of
synapses is strengthened, other synapses to the same post-synaptic neuron have to be
weakened by dividing over a larger sum.
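The update and normalization can be written compactly in matrix form, as sketched below; the network size and parameter values are illustrative only, and the element-wise form used in the actual simulation is given in the Appendix (dT.m and normalize.m).

% One discrete Hebbian update of the strength matrix W, following eqs. (5)-(6) (illustrative values).
N = 5;
W = rand(N);  W(1:N+1:end) = 0;          % random initial strengths in [0,1], no self-connections
V = rand(N, 1);                          % current average firing rates in [0,1]
alpha = 0.5;  beta = 0.1;                % assumed relative strengths of LTP and LTD per time step
dW = alpha .* (1 - W) .* (V * V') - beta .* W;   % eq. (5): growth toward 1, uniform decay toward 0
Wnew = W + dW;
Wnew = Wnew / max(sum(Wnew, 1));         % eq. (6): divide by the largest total weight converging on any neuron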

2.3 Inhibitory neurons and model training  
In order to implement learning reinforcement (this may be thought of as a dopamine drip³),
we shall transform α into a function dependent on the firing rate of some arbitrary output
neuron (let us use vN ). There are clearly a very large number of potential training functions
that might be considered, depending on the desired behavior of the network.

We shall seek here to obtain some degree of phase inversion between the input (voltage-clamped)
neuron v_1 and the output neuron v_N. To do this, we set

    α(t) = | v_1(t) − v_N(t) | ,                                   (7)
so that maximal promotion of synaptic efficacy would take place only during time steps when
the input and output neurons had very different average firing rates. While this rule is
sufficient to force the firing rate of the output neuron to zero for all input firing rates, in
order to get opposing behavior, we must introduce inhibitory neurons into our model.
         Inhibitory neurons are modeled similarly to the excitatory neurons; however, rather
than a firing rate bounded by [0, 1], we use [−1, 0]⁴. Thus, according to the learning rule
above, a presynaptic inhibitory neuron v_i exhibiting a high firing rate simultaneously with a
post-synaptic excitatory neuron v_j will cause the synaptic strength w_i,j to be reinforced.
However, the summation in eq. (1) means this reinforcement will result in a lower average
firing rate for v_j during future time steps (all else held equal).
         Using the same feedback rule as (7) and using the same update step as (6), we show
that the output neuron’s firing rate reaches a local maximum when the input neuron’s rate is
at a local minimum, yet also follows the input neuron to its maximum. Further refinement of
the model would be required to show complete phase inversion between the input and output
neurons [5, 7, 12, 14, 19].




                                                        
³ We define the reinforcement factor as a function of two neurons, one voltage-clamped and the other an "output"
neuron, the firing rate of which can be monitored by micro-electrode. Thus this form of learning reinforcement does
not require monitoring of the firing rate of every single neuron, which in turn means that a biological reinforcement
mechanism would only require a uniform concentration of dopamine across the entire network, rather than neuron-
specific dosages.

⁴ Our model, though mathematically equivalent to this, differs slightly in implementation. Please refer to Section 3
for details.







 
Supplement 1: Equation Summary. This supplementary figure accompanies Sections 2 and 3 and condenses the
mathematical relationships described in those sections. Panels A-D accompany Section 2; panel E accompanies Section 3.
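In summary, the relationships condensed in Supplement 1 (all derived in Sections 2 and 3) are:

    v_j = Σ_{i=1..N} w_i,j v_i                          (1)   firing rate of neuron j
    T V = D,   V = T⁻¹ D                                (3),(4)   network state at each time step
    dw_i,j/dt = α (1 − w_i,j) v_i v_j − β w_i,j         (5)   learning rule (LTP and LTD terms)
    α(t) = | v_1(t) − v_N(t) |                          (7)   reinforcement (training) rule
    v_1(t) = ( sin(πt/8) + 1 ) / 2                      (8)   input stimulus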




 




3 Computer Simulation                                                                                                              
         Note: See Supplement 1 (end of Section 2) for an Equation Summary

We used a computer simulation of the model described above in order to generate testable
predictions of neural network behavior. The simulation was written in MATLAB and run using
Octave⁵, with graphical representations in Gnuplot⁶. After initialization, the simulation
repeatedly normalizes the strength matrix, calculates its next step, and then solves for the
resulting vector of firing rates.
 
3.1 Setup  
The implementation is a direct representation of the mathematical formulation described
above. We set N = 20, modeling a small network of 20 neurons for computational tractability,
and the simulation is run for STEPS = 100 time steps. Additionally, we define 6 inhibitory
neurons in our model.
       We begin by defining the stimulus pattern for our input neuron:

    v_1(t) = ( sin(πt/8) + 1 ) / 2 ,                              (8)
using a periodic function bounded between 0 and 1. This allows us to observe the phasic
relationship between v_1 and v_N, the output neuron. We then define the vector D:

    D = ( v_1(t), 1, 0, …, 0 )ᵀ ⁷                                 (9)

        Inhibitory neurons are stored in the V vector and identified by a subset Inhib of the
indices of V, where dim(Inhib) < N determines the number of inhibitory neurons in the system.
All logic is performed identically for inhibitory neurons as for excitatory neurons, with one
fundamental difference: synaptic strengths w_i,j between an inhibitory neuron i and its
postsynaptic neurons j are negated in (1). This means that during the solving of (4), the
w_i,j terms are negated in the T matrix when i ∈ Inhib. All other processing of inhibitory
neurons remains the same as for excitatory neurons, and, with the exception of the solving
step, all weights are maintained in the system as positive floating-point numbers in order to
ease computation of the update matrix.
        We next initialize a random matrix T of dimensions (N, N). We set T_1,1 = 1 (the input
neuron is entirely self-determined) and T_1,j = 0 for 1 < j ≤ N (the input neuron is not
post-synaptic to any other neuron). We additionally set T_i,i = −1 for 1 < i ≤ N, such that no
neuron is post-synaptic to itself. We then have the matrix given in (10) below.
⁵ See: http://www.octave.org

⁶ See: http://www.gnuplot.info

⁷ We must additionally constrain a second neuron to fire constantly at a rate of 1.0. This is due to equation (4),
which calculates each firing rate as a linear combination of the others. Each of these linear combinations is
dependent on the value of v_1. Thus when v_1 = 0, such that D = 0 (the zero vector), the only possible solution to
(4) is that V = 0 as well. This is an artifact of this model and is not biologically significant.






    T = [ 1       0       0       0       …
          0       1       0       0       …
          w_1,3   w_2,3   −1      w_4,3   …
          …       …       …       −1      …
          …       …       …       …       …  ] ,                  (10)

which the reader will notice differs from equation (2) in one respect: here the second row is
also an identity row (T_2,2 = 1 rather than −1), reflecting the second constantly firing
neuron of footnote 7. Those values are necessary in order to avoid solving for a vector of
zero firing rates in the solving step (4).
     
3.2 Normalization step
As presented in (2.2), the matrix T, representing synaptic strengths in the network, must be
normalized before the first time step and then at every subsequent time step in order to
maintain the bounding conditions (strengths and firing rates must both remain in the interval
[0, 1]). This normalization proceeds as follows (where T^norm represents the normalized
strength matrix):

    T^norm_i,i = 0   for i ≥ 3       (self-weights are excluded from the sums)
    T^norm_i,j = T_i,j / M   for i ≥ 3 and all j, where M is the largest sum of weights converging on any single neuron
    T^norm_i,i = −1  for i ≥ 3       (the diagonal is then restored for the solving step)

        We then solve (4) once before the first time step in order to obtain an initial set of
firing rates.
     
3.3 Differential Step
This step proceeds exactly as described in equation (5). See Code 3 (dT.m) for the
implementation, and Codes 4 and 5 (alpha.m, beta.m) for the alpha and beta functions. During
our simulation we simply used β = 0.1 (this parameter determines the effect of LTD relative to
LTP in the model), and we set all α_i,j at each time step according to the learning rule in
equation (7).
     
3.4 Solving step
The solving step proceeds as described in (4) (see Code 1, model.m). However, the presence of
inhibitory neurons raises a bounding problem in the firing rates V: a heavily inhibited
excitatory neuron can occasionally display a negative average firing rate over a given
time step, which is nonsensical (it would imply that an excitatory neuron is inhibiting other
neurons, or that an inhibitory neuron is potentiating other neurons, by firing). This arises
primarily from rounding errors due to the numerical nature of our simulation. It is remedied by
simply setting all negative values in V to 0 at the end of each solving step (see Code 6,
deadneurons.m).

     
     
     
     





                                                                                                                   
Figure 8: Convergence of Firing Rates. Vertical axis: averaged neuron firing rates. Horizontal axis: time progression
(time-steps 0 to 100). The input neuron is blue; the output neuron is yellow; other neurons are represented by various
colors. The graph displays the average firing rate at each time-step. Convergence of the firing rates of all neurons
onto the output neuron occurs after approximately the 50th time-step, demonstrating that the in silico representation
of our model can be trained to a firing pattern. [Network description: 20 neurons (N), 100 STEPS, 6 inhibitory
neurons (inhib 3-9).]




                                                                                                                       
Figure 9: Rates of Change in Firing Rates. Vertical axis: rates of change in neuron firing rates. Horizontal axis:
time progression (time-steps 0 to 100). The rate of change of the input neuron is blue, given by the cosine function
v_1'(t) = (π/16) cos(πt/8), the derivative of v_1(t) = (sin(πt/8) + 1)/2. The output neuron is yellow. All other
neurons, represented by various colors, eventually converge to the yellow curve (the output neuron), representing the
time derivatives of all v_i, i > 1.



3.5 Simulation Results 
We present the results of the simulation below. In Figure 8, we show the average firing
rates of all the neurons at each time step.
             We may observe several effects taking place. The non-input neurons all slowly
converge onto a single firing pattern, and this convergence is complete after approximately 50
time steps. Additionally, we notice that the output neurons (since all non-input neurons are
synchronized in steady state, we may consider them all output neurons) converge to a periodic
function containing two components: the input neuron's sine wave, and a wave phase-inverted
relative to the input neuron's. We notice that while the output neurons climb to reach the
input neuron's maxima, they also reach local minima very close to the input maxima.
        This behavior can be further analyzed by observing the rates of change of neuron
firing rates between each time step. Figure 9 shows a graph of these rates over the same
time period.
        We notice here that the rates of change of the output neurons’ firing rates are also
described by a periodic function. This function’s frequency is approximately double that of
the input neuron’s rate of change function. We can see that the output rates of change also
reach local minima as the input reaches local maxima. Thus the output neurons have both
phase‐inverted and frequency‐inverted components.

From these two results we can derive testable predictions as to the behavior of a biological
neural network under similar conditions (a sketch for checking them against the simulation
output follows the list below). Specifically:

     • The firing rate of the output neuron will eventually stabilize to a periodic
     function containing components of both the input neuron firing pattern,
     and the learning rule reinforcement function (in this case, a phase‐inversion
     of the input neuron’s function).

     • The rate of change of the output neuron will also eventually stabilize to a
     periodic function such that local minima will be reached at maxima in the
     input neuron’s rate of change.
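One way to check these predictions against the simulation output is sketched below. It operates on the matrix V produced by model.m (Appendix, Code 1), and the 50-step steady-state cutoff and the use of corrcoef as the comparison statistic are our own analysis assumptions, not part of the model.

% Sketch: quantify the predicted input/output relationship from the simulation output V
% (one row per neuron, one column per time step; row 1 = input neuron, last row = output neuron).
steady = 51:size(V, 2);                      % discard the first ~50 steps (before convergence)
vin   = V(1,   steady);                      % input firing rates in steady state
vout  = V(end, steady);                      % output firing rates in steady state
r  = corrcoef(vin, vout);                    % correlation of the two rate series
r2 = corrcoef(diff(vin), diff(vout));        % correlation of their rates of change (cf. Figure 9)
fprintf('rate correlation: %.3f, rate-of-change correlation: %.3f\n', r(1,2), r2(1,2));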







4 Future Directions                                                                                                           
 
Now that a robust, computable model has been developed, we propose the following experimental
design to test our computational model in the biological setting and allow its refinement⁸.
Traditionally, dissociated neurons cultured in vitro have served as the model system for
studying the dynamics of neural networks [3, 21]. One of the obstacles in studying neuronal
networks is the need for simultaneous recording of all neurons. Another major obstacle is the
reproducibility of the shape, morphology, position, and state of the neurons, which cannot be
controlled with sufficient precision to ensure networks are identical.
         Current techniques employ multi-electrode arrays (MEAs), which can only stimulate and
record electrical potentials of neurons at one place at a time. As a result, once neurons are
cultured, studying the formation of neural networks requires installing MEA electrodes at
every neuron. Furthermore, the networks would never have identical starting conditions.
Therefore, we propose (see Figure 10) using neurons (embryonic rat hippocampal neurons)
cultured onto silicon wafers as described in Starovoytov et al. [21] (see Supplement 2). The
silicon wafer allows the selective stimulation of neurons by pulsed laser, as well as the
recording of the electrical potential of neurons by detecting changes in electrical states on
the silicon surface [21].


    Supplement 2: Starovoytov Experimental Setup.
    A: Culture Dish. Design of a culture dish used in electro-optical stimulation experiments. Silicon wafer is glued to
    the bottom of the plastic culture dish. Neurons are plated onto the exposed silicon surface in the middle opening of
    the dish. Extracellular solution is grounded using Ag/AgCl electrode, and the bottom of the dish is contacted by a
    copper electrode with InGa eutectic painted onto the back side of silicon wafer.
    B: Equivalent Circuit. Ci and Ri represent, correspondingly, the capacitance and the resistance of the double layer
    formed at the interface of electrolyte and semiconductor. Vb denotes the applied bias voltage; Rs is the resistance of
    electrolyte. Additional series resistor Rm was used for total current measurements. Extracellular potential for the
    patched cell is determined by the conditions immediately above the silicon surface.
    C: Optical arrangement. Collimating and steering optics, the AOD, and the output coupler of the fiber were all
    rigidly attached to the microscope and moved together when translation across the sample was required. This
    allowed us to keep the patch stationary at all times. The Tube and Scan lenses were introduced to lock the beam at
    the center of microscope objective for all deflection angles. Dash-dot and dotted lines show targeting different
    locations at the sample.
                Reproduced from Starovoytov et al., Journal of Neurophysiology 93: 1090-1098 (2005) [21].



                                                        
⁸ To date, no model exists that is truly capable of emulating the biological setting of LTP and LTD.






                                                                                                                
Figure 10: Proposed Experimental Setup employing Light-Directed Electrical Stimulation (LDES)
of Neurons cultured on Silicon Wafer.
1a-Electrical Potential Scan: The bottom of the silicon wafer is fitted with electric potential (EP) probes. This
informs the electrical potential states in the in silico neural network representation (2a).
1b-Visual Mapping Scan: The microscopy setup from Starovoytov et al. can be employed to enable positional mapping of
the neurons in the network under study. This informs the in silico model (2a).
2a-Computer Model: Positions and electrical states of neurons are compiled to make an in silico representation of the
in vivo neural network.
2b-Parallel Simulations: The in silico model (2a) can be copied and "stimulated" virtually to extrapolate future
states. Extrapolated future states of the in silico model are correlated with the real-time in vivo model; the virtual
stimulation should parallel the stimulation that will occur, or is occurring, in vivo by pulsed laser. Statistical
analysis is then performed on the data sets to refine the rules governing the in silico model (2a) for prediction of
future states (e.g., "noise" coefficients).
3-Training Output Data Set: The neural network must be "trained." A solution set is computed that will "push" the
neuron firing rates to a desired output.
4-Modulatory Instructions: Pulsed lasers are coordinated to stimulate neurons at the appropriate times.
5-Change in Neural Architecture: Modulation of the in vivo neural network causes intracellular changes in gene
expression in the neurons through the activation of LTP and LTD, and the intercellular development of a neuronal
pathway. The neural architecture comes to reflect the output solution set in (3) (e.g., hand-eye coordination in the
cerebellum; see Figure 5).

        Once the neuron plates are produced, a scan over the silicon wafer can be made to
determine the initial states (electrical potentials and firing rates) of the neurons. This data
can then be used to supply our model with data on synaptic strengths and firing rates of the
network. Then training through stimulation of the neural networks can happen both in silico
and in vitro. This allows virtual and real training of neural networks to occur in parallel,
and the results of these trainings (virtual predictions versus real-time data) can be compared
statistically. In this way, refinements can be made to the model, and new factors can be
introduced that alter the learning rules to include variables that are traditionally


“non‐Hebbian.”
         A caveat to this is that in the current model, there is no fixation of certain synaptic
weights at 0 ‐ that is to say that we model all neurons as post‐synaptic to all the others, with
the exception of a single (voltage‐clamped) input neuron. Thus an important future direction
would be to enable a sparse matrix of weights (i.e., with some values fixed at 0) to be used in
the computational simulation. In purely computational experiments, these would be
distributed randomly in proportion to the ratio of synapses to pairs of neurons that occurs in
biological neural networks. In parallel in vivo/in silico experiments, they would be set to
emulate the in vivo network’s state of connectedness. Adding random variation or “noise” to
our model ‐ either in the form of asynchronous updating of neurons as implemented in [14,
15] or as a random noise term in the synaptic weight updating rules ‐ would lend more
biological fidelity to our model.
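A minimal sketch of these two extensions is given below; the connection probability p and noise amplitude sigma are assumed, illustrative values, and the variable names mirror (but are not part of) the Appendix code.

% Sketch of two proposed extensions: a fixed sparse connectivity mask and noisy weight updates.
N = 20;  p = 0.3;  sigma = 0.01;        % assumed connection probability and noise amplitude
Mask = rand(N) < p;                     % 1 = synapse exists, 0 = synapse fixed at 0
Mask(1:N+1:end) = 0;                    % no self-connections
T = rand(N) .* Mask;                    % initial strengths only where synapses exist
% ... at each time step, after computing the Hebbian update Delta (as in dT.m):
Delta = zeros(N) + sigma * randn(N);    % placeholder update plus additive Gaussian noise
T = (T + Delta) .* Mask;                % absent synapses remain disconnected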
         In addition, the focus of our model was the generation of the rules that govern LTP
and LTD with an emphasis on spike‐driven plasticity; however, this does not account for
what truly occurs, at the nano‐level, in the short‐term and long‐term behavior of neural
plasticity. In order to make the model more realistic, a model of the synapse should be added
to include neurotransmitter, receptor, and Ca+2 levels. This would involve re‐designing the
computational model to use the more modern spike‐based methods referred to in section (1.3)
(For an example of this type of model and its performance, see [18], which implements a
spiking neuron model allowing for complex behavior to be encoded).
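As a rough illustration of the spike-based direction (this is a generic leaky integrate-and-fire update, not the model of [18]; all constants are assumed):

% Generic leaky integrate-and-fire neuron (illustrative only; not the model of reference [18]).
dt = 0.1;  tau = 10;  v_rest = 0;  v_thresh = 1;  v_reset = 0;   % assumed constants
steps = 1000;  I = 0.15 * ones(1, steps);                        % assumed constant input current
v = v_rest;  spikes = zeros(1, steps);
for t = 1:steps
    v = v + (dt / tau) * (v_rest - v) + I(t) * dt;               % leaky membrane integration
    if v >= v_thresh                                             % threshold crossing emits a spike
        spikes(t) = 1;  v = v_reset;                             % record spike and reset
    end
end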
         Lastly, the LDES platform allows recording and modulation of neural activity of
cultured neurons and thereby connectivity. Consequently, the use and development of this
platform may enable the generation of artificial neural networks with tailored neural
architecture for biomedical applications.




 
5 Acknowledgements                                                                                         
 
We would like to thank Dr. David Needham of the Pratt School of Engineering and the Center for
Biologically Inspired Materials and Materials Systems (CBIMMS) for helping to create this
opportunity and for making this research project possible. In addition, we would like to thank
Dr. Ragachavari of the Duke Neurobiology Department for his counsel and advice on the project.
 
 
 
 
 




6 Appendix: Simulation Code

                                          Code 1: model.m
  
clear

N=20;                           % number of neurons
Inhib=zeros(N,N);
Inhib(:,:)=1;
Inhib(3:9,:)=-1;                % rows of inhibitory neurons are negated before solving

STEPS=100;                      % number of time steps

% Connection strengths (random initial values, normalized)
T=normalize(rand(N,N),N,Inhib);
% Clamped rows: input neuron (1) and constantly firing neuron (2), see footnote 7
T(1,1)=1;
T(1:2,2:N)=0;
T(2,1)=0;
T(2,2)=1;

D=zeros(N,1);
D(1)=.5;
D(2)=.5;
Delta = zeros(N,N);
V=inv(T)*D;                     % initial firing rates, eq. (4)

for s=1:STEPS
    D(1)=(sin(s*pi/(8))+1)/2;   % input stimulus, eq. (8)
    D(2)=1;                     % second clamped neuron fires at rate 1
    Delta = dT(V,T,N,Inhib);    % Hebbian update, eq. (5)
    T=T+Delta;
    T=normalize(T,N,Inhib);     % competition/normalization, eq. (6)
    T0 = T .* Inhib;            % negate inhibitory rows for the solving step
    V=[V,inv(T0)*D];            % solve for firing rates and append column, eq. (4)
    V=deadneurons(V);           % clamp negative rates to 0
end

plot(V');
 
 
 
 
 
 

 
 




                                     Code 2: normalize.m
  
% T (strength matrix) normalization
function out = normalize(T,N,Inhib)
    Normed=T;

    for i=3:N
        % in order to avoid summing in w(i,i) terms:
        Normed(i,i)=0;
    end

    Rows = sum(abs(Normed'));          % total weight converging on each neuron

    for i=3:N
        for j=1:N
            Normed(i,j) = Normed(i,j) / max(Rows(2:N));   % divide by the largest sum
        end
    end

    for i=3:N
        % restore the -1 diagonal needed for the solving step (keeps T nonsingular)
        Normed(i,i)=-1;
    end

    out=Normed;
end
 
                        Code 3: dT.m
  
% T (strength matrix) differential
function out = dT(V,T,N,Inhib)
    beta=.1;                   % relative strength of LTD per time step (beta in eq. 5)
    delta=zeros(N,N);

    for i=3:N
        for j=1:N
            alph = alpha(i,j,V,T,N,Inhib);
            % eq. (5): Hebbian growth toward 1, uniform decay toward 0
            delta(i,j)=alph*V(i,size(V,2))*V(j,size(V,2))*(1-T(i,j)) - beta*T(i,j);
        end
    end

    for i=3:N
        delta(i,i)=0;          % self-connections are not updated
    end
    out=delta;
end
 




 
                       Code 4: alpha.m
  
function out = alpha(i,j,V,T,N,Inhib)
    % alpha=abs(V(1)-V(N)) rule
    out = abs(V(1,size(V,2))-V(N,size(V,2)));
end
 
  
 
  
                        Code 5: beta.m
  
function out = beta(alph)
    out = 1 - alph;
end
 
  
   
                    Code 6: deadneurons.m
  
function out = deadneurons(V)
    % clamp negative average firing rates to zero (Section 3.4)
    for i = 3:size(V,1)
        if (V(i,end) < 0)
            V(i,end) = 0;
        end
    end
    out = V;
end
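Usage: assuming the six files above are saved under the names given in this appendix (model.m, normalize.m, dT.m, alpha.m, beta.m, and deadneurons.m) in a single directory on the Octave or MATLAB path, the simulation is started by typing model at the interpreter prompt; the final plot(V') call then displays the averaged firing rates of all N neurons over the STEPS time steps (cf. Figure 8).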
 
 
  
 
  
 
 
 
 
 
 
 
 
 
 
 



7 References                                                                                           
 
[1] Abraham, W.C., Logan, B., Greenwood, J.M., and Dragunow, M. Induction and experience-dependent consolidation of stable long-term potentiation lasting months in the hippocampus. Journal of Neuroscience. 23: 9626-9634. (2002).

[2] Ahn, S., Ginty, D.D., and Linden, D.J. A late phase of cerebellar long-term depression requires activation of CaMKIV and CREB. Neuron. 26: 559-568. (1999).

[3] Bi, G., and Poo, M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience. 18(24): 10464-10472. (1998).

[4] Bi, G., and Poo, M. Distributed synaptic modification in neural networks induced by patterned stimulation. Nature. 401: 792-796. (1999).

[5] Brader, J.M., Senn, W., and Fusi, S. Learning real-world stimuli in a neural network with spike-driven synaptic dynamics. Neural Computation. 19: 2881-2912. (2007).

[6] Chung, H.J., Steinberg, J.P., Huganir, R.L., and Linden, D.J. Requirement of AMPA receptor GluR2 phosphorylation for cerebellar long-term depression. Science. 300: 1751-1755. (2003).

[7] Dayan, P., and Abbott, L.F. Theoretical Neuroscience. MIT Press: Massachusetts, USA. (2001).

[8] Engert, F., and Bonhoeffer, T. Dendritic spine changes associated with hippocampal long-term synaptic plasticity. Nature. 399: 66-70. (1999).

[9] Ermentrout, B. Neural networks as spatio-temporal pattern-forming systems. Reports on Progress in Physics. 61: 353-430. (1998).

[10] Fitzsimonds, R.M., Song, H., and Poo, M. Propagation of activity-dependent synaptic depression in simple neural networks. Nature. 388: 439-448. (1997).

[11] Deneve, S. Bayesian spiking neurons I: Inference. Neural Computation. 20: 91-117. (2008).

[12] Gerstner, W., and Kistler, W. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, UK. (2002).

[13] Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory. Wiley-Interscience, New York. (1949).

[14] Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA: Biophysics. 79: 2554-2558. (1982).

[15] Hopfield, J.J. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, USA: Biophysics. 81: 3088. (1984).

[16] Junge, H.J., Rhee, J., Jahn, O., Varoqueaux, F., Spiess, J., Waxham, M., Rosenmund, C., and Brose, N. Calmodulin and Munc13 form a Ca+2 sensor/effector complex that controls short-term synaptic plasticity. Cell. 118: 389-401. (2004).


[17] Little, W.A. The Existence of Persistent States in the Brain. Mathematical Biosciences. 19: 101-120. (1974).

[18] Lovelace, J., and Cios, K. A very simple spiking neuron model that allows for modeling of large, complex systems. Neural Computation. 20: 65-90. (2008).

[19] Mascagni, M., and Sherman, A. Numerical methods for neuron modeling. In: Methods in Neuronal Modeling, Second Edition, ed. Koch, C., and Segev, I. MIT Press: Massachusetts, USA. (2001).

[20] Purves, D., Augustine, G.J., Fitzpatrick, D., Hall, W.C., LaMantia, A., McNamara, J.O., and White, L.E. Neuroscience: Fourth Edition. Sinauer Associates, Inc: Sunderland, Massachusetts, USA. (2008).

[21] Starovoytov, A., Choi, J., and Seung, H.S. Light-directed electrical stimulation of neurons cultured on silicon wafers. Journal of Neurophysiology. 93: 1090-1098. (2005).

[22] Zemel, R.S., Huys, Q.J.M., Natarajan, R., and Dayan, P. Probabilistic computation in spiking populations. Advances in Neural Information Processing Systems. MIT Press: Massachusetts, USA. (2004).

[23] Orlando, D.A., Lin, C.Y., Bernard, A., Wang, J.Y., Socolar, J.E.S., Iversen, E.S., Hartemink, A.J., and Haase, S. Global control of cell-cycle transcription by coupled CDK and network oscillators. Nature. 453: 944-948. (2008).

[24] Strange, K. The end of "naïve reductionism": rise of systems biology or renaissance of physiology? American Journal of Physiology: Cell Physiology. 288: C968-C974. (2005).




                                          Page 25 of 25

More Related Content

Viewers also liked

Paper Biology 280 S Minireview Advances In Cancer Detection And Therapeutics
Paper Biology 280 S Minireview Advances In Cancer Detection And TherapeuticsPaper Biology 280 S Minireview Advances In Cancer Detection And Therapeutics
Paper Biology 280 S Minireview Advances In Cancer Detection And TherapeuticsJoshua Mendoza-Elias
 
Founding
FoundingFounding
Foundingjauvard
 
131660097 actix-licen g-fse
131660097 actix-licen g-fse131660097 actix-licen g-fse
131660097 actix-licen g-fseHung Ta
 
Enterprise Unified Process
Enterprise Unified ProcessEnterprise Unified Process
Enterprise Unified Processritika.jangid
 
YouTube Video Resume Story
YouTube Video Resume StoryYouTube Video Resume Story
YouTube Video Resume StoryPeter Huboi
 
Stem Cells: Minireview Presentation
Stem Cells: Minireview PresentationStem Cells: Minireview Presentation
Stem Cells: Minireview PresentationJoshua Mendoza-Elias
 
Life at the cellular level
Life at the cellular levelLife at the cellular level
Life at the cellular levelIan Anderson
 
Biology 280S - Advanced in DNA Nanomedicine
Biology 280S - Advanced in DNA NanomedicineBiology 280S - Advanced in DNA Nanomedicine
Biology 280S - Advanced in DNA NanomedicineJoshua Mendoza-Elias
 

Viewers also liked (11)

Paper Biology 280 S Minireview Advances In Cancer Detection And Therapeutics
Paper Biology 280 S Minireview Advances In Cancer Detection And TherapeuticsPaper Biology 280 S Minireview Advances In Cancer Detection And Therapeutics
Paper Biology 280 S Minireview Advances In Cancer Detection And Therapeutics
 
Founding
FoundingFounding
Founding
 
EGR 183 Bow Tie Presentation
EGR 183 Bow Tie PresentationEGR 183 Bow Tie Presentation
EGR 183 Bow Tie Presentation
 
Talk3 - Zebrafish
Talk3 - ZebrafishTalk3 - Zebrafish
Talk3 - Zebrafish
 
131660097 actix-licen g-fse
131660097 actix-licen g-fse131660097 actix-licen g-fse
131660097 actix-licen g-fse
 
Enterprise Unified Process
Enterprise Unified ProcessEnterprise Unified Process
Enterprise Unified Process
 
YouTube Video Resume Story
YouTube Video Resume StoryYouTube Video Resume Story
YouTube Video Resume Story
 
Stem Cells: Minireview Presentation
Stem Cells: Minireview PresentationStem Cells: Minireview Presentation
Stem Cells: Minireview Presentation
 
Fast Fds
Fast FdsFast Fds
Fast Fds
 
Life at the cellular level
Life at the cellular levelLife at the cellular level
Life at the cellular level
 
Biology 280S - Advanced in DNA Nanomedicine
Biology 280S - Advanced in DNA NanomedicineBiology 280S - Advanced in DNA Nanomedicine
Biology 280S - Advanced in DNA Nanomedicine
 

Similar to Paper Egr 183 Modeling Neural Networks In Silico

Analytical Report (1)
Analytical Report (1)Analytical Report (1)
Analytical Report (1)Kade Schmitz
 
Blue Brain_Nikhilesh+Krishna Raj
Blue Brain_Nikhilesh+Krishna RajBlue Brain_Nikhilesh+Krishna Raj
Blue Brain_Nikhilesh+Krishna RajKrishna Raj .S
 
ANN Fracture Flow Regime
ANN Fracture Flow RegimeANN Fracture Flow Regime
ANN Fracture Flow RegimeLong Vo
 
Detailed_Design_Report (final)
Detailed_Design_Report (final)Detailed_Design_Report (final)
Detailed_Design_Report (final)Keith Ming
 
Fuzzy Logic Final Report
Fuzzy Logic Final ReportFuzzy Logic Final Report
Fuzzy Logic Final ReportShikhar Agarwal
 
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURONA PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURONijdpsjournal
 
A Toolkit For Forward Inverse Problems In Electrocardiography Within The SCIR...
A Toolkit For Forward Inverse Problems In Electrocardiography Within The SCIR...A Toolkit For Forward Inverse Problems In Electrocardiography Within The SCIR...
A Toolkit For Forward Inverse Problems In Electrocardiography Within The SCIR...Sandra Long
 
Final Project Report
Final Project ReportFinal Project Report
Final Project ReportAvinash Pawar
 
Deep Learning for Health Informatics
Deep Learning for Health InformaticsDeep Learning for Health Informatics
Deep Learning for Health InformaticsJason J Pulikkottil
 
Design of an Optical Medium for the Simulation of Neutron Transport in Reacto...
Design of an Optical Medium for the Simulation of Neutron Transport in Reacto...Design of an Optical Medium for the Simulation of Neutron Transport in Reacto...
Design of an Optical Medium for the Simulation of Neutron Transport in Reacto...David Konyndyk
 
Artificial Neural Networks In Control And Optimization
Artificial Neural Networks In Control And OptimizationArtificial Neural Networks In Control And Optimization
Artificial Neural Networks In Control And OptimizationJim Webb
 
Spinal cord Stimulation
Spinal cord StimulationSpinal cord Stimulation
Spinal cord Stimulationpulkit saxena
 
Modern Power Systems Analysis (Power Electronics and Power Systems) ( PDFDriv...
Modern Power Systems Analysis (Power Electronics and Power Systems) ( PDFDriv...Modern Power Systems Analysis (Power Electronics and Power Systems) ( PDFDriv...
Modern Power Systems Analysis (Power Electronics and Power Systems) ( PDFDriv...ASHOKKUMARARJANBHAIP1
 
Hybrid neural networks in cyber physical system interface control systems
Hybrid neural networks in cyber physical system interface control systemsHybrid neural networks in cyber physical system interface control systems
Hybrid neural networks in cyber physical system interface control systemsjournalBEEI
 
Protdock - Aatu Kaapro
Protdock - Aatu KaaproProtdock - Aatu Kaapro
Protdock - Aatu KaaproSwapnesh Singh
 

Similar to Paper Egr 183 Modeling Neural Networks In Silico (20)

Analytical Report (1)
Analytical Report (1)Analytical Report (1)
Analytical Report (1)
 
Kl ph d_thesisfinal
Kl ph d_thesisfinalKl ph d_thesisfinal
Kl ph d_thesisfinal
 
Blue Brain_Nikhilesh+Krishna Raj
Blue Brain_Nikhilesh+Krishna RajBlue Brain_Nikhilesh+Krishna Raj
Blue Brain_Nikhilesh+Krishna Raj
 
ANN Fracture Flow Regime
ANN Fracture Flow RegimeANN Fracture Flow Regime
ANN Fracture Flow Regime
 
Detailed_Design_Report (final)
Detailed_Design_Report (final)Detailed_Design_Report (final)
Detailed_Design_Report (final)
 
Fuzzy Logic Final Report
Fuzzy Logic Final ReportFuzzy Logic Final Report
Fuzzy Logic Final Report
 
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURONA PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
 
A Toolkit For Forward Inverse Problems In Electrocardiography Within The SCIR...
A Toolkit For Forward Inverse Problems In Electrocardiography Within The SCIR...A Toolkit For Forward Inverse Problems In Electrocardiography Within The SCIR...
A Toolkit For Forward Inverse Problems In Electrocardiography Within The SCIR...
 
Final Project Report
Final Project ReportFinal Project Report
Final Project Report
 
Deep Learning for Health Informatics
Deep Learning for Health InformaticsDeep Learning for Health Informatics
Deep Learning for Health Informatics
 
Design of an Optical Medium for the Simulation of Neutron Transport in Reacto...
Design of an Optical Medium for the Simulation of Neutron Transport in Reacto...Design of an Optical Medium for the Simulation of Neutron Transport in Reacto...
Design of an Optical Medium for the Simulation of Neutron Transport in Reacto...
 
Artificial Neural Networks In Control And Optimization
Artificial Neural Networks In Control And OptimizationArtificial Neural Networks In Control And Optimization
Artificial Neural Networks In Control And Optimization
 
Guillemaud Thesis
Guillemaud ThesisGuillemaud Thesis
Guillemaud Thesis
 
Spinal cord Stimulation
Spinal cord StimulationSpinal cord Stimulation
Spinal cord Stimulation
 
Senior thesis
Senior thesisSenior thesis
Senior thesis
 
Modern Power Systems Analysis (Power Electronics and Power Systems) ( PDFDriv...
Modern Power Systems Analysis (Power Electronics and Power Systems) ( PDFDriv...Modern Power Systems Analysis (Power Electronics and Power Systems) ( PDFDriv...
Modern Power Systems Analysis (Power Electronics and Power Systems) ( PDFDriv...
 
D010242223
D010242223D010242223
D010242223
 
Hybrid neural networks in cyber physical system interface control systems
Hybrid neural networks in cyber physical system interface control systemsHybrid neural networks in cyber physical system interface control systems
Hybrid neural networks in cyber physical system interface control systems
 
Molecular Electronics
Molecular ElectronicsMolecular Electronics
Molecular Electronics
 
Protdock - Aatu Kaapro
Protdock - Aatu KaaproProtdock - Aatu Kaapro
Protdock - Aatu Kaapro
 

Recently uploaded

Call Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment BookingCall Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment Bookingnarwatsonia7
 
Asthma Review - GINA guidelines summary 2024
Asthma Review - GINA guidelines summary 2024Asthma Review - GINA guidelines summary 2024
Asthma Review - GINA guidelines summary 2024Gabriel Guevara MD
 
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort Service
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort ServiceCollege Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort Service
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort ServiceNehru place Escorts
 
Call Girl Bangalore Nandini 7001305949 Independent Escort Service Bangalore
Call Girl Bangalore Nandini 7001305949 Independent Escort Service BangaloreCall Girl Bangalore Nandini 7001305949 Independent Escort Service Bangalore
Call Girl Bangalore Nandini 7001305949 Independent Escort Service Bangalorenarwatsonia7
 
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...saminamagar
 
College Call Girls Pune Mira 9907093804 Short 1500 Night 6000 Best call girls...
College Call Girls Pune Mira 9907093804 Short 1500 Night 6000 Best call girls...College Call Girls Pune Mira 9907093804 Short 1500 Night 6000 Best call girls...
College Call Girls Pune Mira 9907093804 Short 1500 Night 6000 Best call girls...Miss joya
 
See the 2,456 pharmacies on the National E-Pharmacy Platform
See the 2,456 pharmacies on the National E-Pharmacy PlatformSee the 2,456 pharmacies on the National E-Pharmacy Platform
See the 2,456 pharmacies on the National E-Pharmacy PlatformKweku Zurek
 
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service ChennaiCall Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service ChennaiNehru place Escorts
 
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...narwatsonia7
 
Glomerular Filtration and determinants of glomerular filtration .pptx
Glomerular Filtration and  determinants of glomerular filtration .pptxGlomerular Filtration and  determinants of glomerular filtration .pptx
Glomerular Filtration and determinants of glomerular filtration .pptxDr.Nusrat Tariq
 
Ahmedabad Call Girls CG Road 🔝9907093804 Short 1500 💋 Night 6000
Ahmedabad Call Girls CG Road 🔝9907093804  Short 1500  💋 Night 6000Ahmedabad Call Girls CG Road 🔝9907093804  Short 1500  💋 Night 6000
Ahmedabad Call Girls CG Road 🔝9907093804 Short 1500 💋 Night 6000aliya bhat
 
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbai
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service MumbaiLow Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbai
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbaisonalikaur4
 
Call Girls Service Nandiambakkam | 7001305949 At Low Cost Cash Payment Booking
Call Girls Service Nandiambakkam | 7001305949 At Low Cost Cash Payment BookingCall Girls Service Nandiambakkam | 7001305949 At Low Cost Cash Payment Booking
Call Girls Service Nandiambakkam | 7001305949 At Low Cost Cash Payment BookingNehru place Escorts
 
Call Girls Hebbal Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hebbal Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Hebbal Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hebbal Just Call 7001305949 Top Class Call Girl Service Availablenarwatsonia7
 
Call Girls In Andheri East Call 9920874524 Book Hot And Sexy Girls
Call Girls In Andheri East Call 9920874524 Book Hot And Sexy GirlsCall Girls In Andheri East Call 9920874524 Book Hot And Sexy Girls
Call Girls In Andheri East Call 9920874524 Book Hot And Sexy Girlsnehamumbai
 
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service Availablenarwatsonia7
 
Call Girls ITPL Just Call 7001305949 Top Class Call Girl Service Available
Call Girls ITPL Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls ITPL Just Call 7001305949 Top Class Call Girl Service Available
Call Girls ITPL Just Call 7001305949 Top Class Call Girl Service Availablenarwatsonia7
 
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...narwatsonia7
 
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service MumbaiVIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbaisonalikaur4
 
Russian Call Girl Brookfield - 7001305949 Escorts Service 50% Off with Cash O...
Russian Call Girl Brookfield - 7001305949 Escorts Service 50% Off with Cash O...Russian Call Girl Brookfield - 7001305949 Escorts Service 50% Off with Cash O...
Russian Call Girl Brookfield - 7001305949 Escorts Service 50% Off with Cash O...narwatsonia7
 

Recently uploaded (20)

Call Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment BookingCall Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
 
Asthma Review - GINA guidelines summary 2024
Asthma Review - GINA guidelines summary 2024Asthma Review - GINA guidelines summary 2024
Asthma Review - GINA guidelines summary 2024
 
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort Service
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort ServiceCollege Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort Service
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort Service
 
Call Girl Bangalore Nandini 7001305949 Independent Escort Service Bangalore
Call Girl Bangalore Nandini 7001305949 Independent Escort Service BangaloreCall Girl Bangalore Nandini 7001305949 Independent Escort Service Bangalore
Call Girl Bangalore Nandini 7001305949 Independent Escort Service Bangalore
 
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
 
College Call Girls Pune Mira 9907093804 Short 1500 Night 6000 Best call girls...
College Call Girls Pune Mira 9907093804 Short 1500 Night 6000 Best call girls...College Call Girls Pune Mira 9907093804 Short 1500 Night 6000 Best call girls...
College Call Girls Pune Mira 9907093804 Short 1500 Night 6000 Best call girls...
 
See the 2,456 pharmacies on the National E-Pharmacy Platform
See the 2,456 pharmacies on the National E-Pharmacy PlatformSee the 2,456 pharmacies on the National E-Pharmacy Platform
See the 2,456 pharmacies on the National E-Pharmacy Platform
 
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service ChennaiCall Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
 
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
 
Glomerular Filtration and determinants of glomerular filtration .pptx
Glomerular Filtration and  determinants of glomerular filtration .pptxGlomerular Filtration and  determinants of glomerular filtration .pptx
Glomerular Filtration and determinants of glomerular filtration .pptx
 
Ahmedabad Call Girls CG Road 🔝9907093804 Short 1500 💋 Night 6000
Ahmedabad Call Girls CG Road 🔝9907093804  Short 1500  💋 Night 6000Ahmedabad Call Girls CG Road 🔝9907093804  Short 1500  💋 Night 6000
Ahmedabad Call Girls CG Road 🔝9907093804 Short 1500 💋 Night 6000
 
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbai
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service MumbaiLow Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbai
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbai
 
Call Girls Service Nandiambakkam | 7001305949 At Low Cost Cash Payment Booking
Call Girls Service Nandiambakkam | 7001305949 At Low Cost Cash Payment BookingCall Girls Service Nandiambakkam | 7001305949 At Low Cost Cash Payment Booking
Call Girls Service Nandiambakkam | 7001305949 At Low Cost Cash Payment Booking
 
Call Girls Hebbal Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hebbal Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Hebbal Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hebbal Just Call 7001305949 Top Class Call Girl Service Available
 
Call Girls In Andheri East Call 9920874524 Book Hot And Sexy Girls
Call Girls In Andheri East Call 9920874524 Book Hot And Sexy GirlsCall Girls In Andheri East Call 9920874524 Book Hot And Sexy Girls
Call Girls In Andheri East Call 9920874524 Book Hot And Sexy Girls
 
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
 
Call Girls ITPL Just Call 7001305949 Top Class Call Girl Service Available
Call Girls ITPL Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls ITPL Just Call 7001305949 Top Class Call Girl Service Available
Call Girls ITPL Just Call 7001305949 Top Class Call Girl Service Available
 
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
 
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service MumbaiVIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
 
Russian Call Girl Brookfield - 7001305949 Escorts Service 50% Off with Cash O...
Russian Call Girl Brookfield - 7001305949 Escorts Service 50% Off with Cash O...Russian Call Girl Brookfield - 7001305949 Escorts Service 50% Off with Cash O...
Russian Call Girl Brookfield - 7001305949 Escorts Service 50% Off with Cash O...
 

Paper Egr 183 Modeling Neural Networks In Silico

  • 1. Modeling Neural Networks In Silico - A Neural Network Model and Novel Experimental Design using Light- Directed Electrical Stimulation of Neurons for statistical refinement of Computational Neural Network Models Daniel Carlin‡, B.S.E., Daniel Cook†, and Joshua E. Mendoza-Elias§ Fall 2007 Duke University Engineering 183: Special Topics – Modeling Neural Networks In Silico Dr. David Needham, Ph.D. Pratt School of Engineering Submitted: Sunday, December 16, 2007             Abstract              Long‐term potentiation (LTP) and Long‐term depression (LTD) are the best understood mechanisms by which biological systems achieve the sculpting of neural pathways necessary for learning and memory. Foundational to the understanding of LTP and LTD is the model of Hebbian learning. Although research has demonstrated that neural networks can be developed and trained in silico to mimic the computational function of neural ensembles, the development of a truly realistic model remains elusive. We construct a simulation of Hebbian learning implementing the Hopfield network model. We then propose an experimental setup that will use the light‐directed electrical stimulation of neurons cultured on silicon wafers in order to gain information on the parameters of synaptic strength and firing frequencies for all neurons in the system simultaneously (determinants of LTP and LTD). This will allow the parallel running of neural networks in the in vivo and in silico setting, allowing statistical comparison between the biological model and computer simulation and additional refinement to create a more biologically relevant computational model. Keywords: short and long-term potentiation (LTP), short and long-term depression (LTD), neural plasticity, Hebbian learning, Hopfield Network, parallel neural network training, light-directed electrical stimulation (LDES).                           Note: This study was submitted to Vertices, The Journal of Science and Technology at Duke University, which indicated an interest in publication of this paper. However, they decided that it was too technical for their audience and forwarded it to Jupiter, The Undergraduate Journal for Engineering, which was suggested as a better venue.                                                          Daniel Carlin, Pratt School of Engineering, Department of Biomedical Engineering. ‡ E-mail: daniel.carlin@duke.edu. Daniel Cook, Trinity College of Arts and Sciences, Department of Computer Science. † E-mail: daniel.cook@duke.edu. Website: http://www.duke.edu~dkc6 Joshua Mendoza-Elias, Trinity College of Arts and Sciences, Department of Biology. § E-mail: joshua.mendozaelias@duke.edu. Website: http://www.duke.edu/~jme17
  • 2. Modeling Neural Networks In Silico Carlin, D., Cook, D., Mendoza-Elias, J. Table of Contents: 1 Introduction ........................................................................... 3 1.1 M echanism s U nderlying LTP and LTD ....................................... 3 Figure 1: Molecular Mechanisms of LTP. ..............................................................4 Figure 2: Signaling Cascades activated by LTP and LTD. ...................................4 Figure 3: Short-term and Long-term changes to LTP. ..........................................5 Figure 4: Properties of LTP and LTD - Specifications and Associativity. ............5 1.2 Spike Tim ing Dependent Plasticity (STDP) ................................. 6 Figure 5: The Biological Basis of our Model. .........................................................6 1.3 Theoretical M odels of Neural Networks ...................................... 7 Figure 6: Abstract Representation of Model. . ........................................................8 Figure 7: Firing Relationships. ...............................................................................8 2 Mathematical Formulation .................................................... 10 2.1 Setup ....................................................................................... 10 2.2 H ebbian Learning ..................................................................... 11 2.3 Inhibitory neurons and m odel training ...................................... 12 Supplement 1: Equation Summary. .....................................................................13 3 Computer Simulation ............................................................ 14 3.1 Setup ....................................................................................... 14 3.4 Solving step ............................................................................. 15 Figure 8: Convergence of Firing Rates. ................................................................16 Figure 9: Rates of change in Firing Rates. ..........................................................16 3.5 Sim ulation Results ................................................................... 17 4 Future Directions ................................................................. 18 Figure 10: Proposed Experimental Setup employing Light-Directed Electrical Stimulation (LDES) of Neurons cultured on Silicon Wafer. ...............................19 . 5 Acknowledgements ................................................................ 20 6 Appendix Simulation Code .................................................... 21 Code 1: model.m .....................................................................................................21 Code 2: normalize.m...............................................................................................22 Code 3: dT.m ...........................................................................................................22 Code 4: alpha.m ......................................................................................................23 Code 5: beta.m ........................................................................................................23 Code 6: deadneurons.m ..........................................................................................23 7 References ............................................................................ 24             Page 2 of 25
  • 3. Carlin, D., Cook, D., Mendoza-Elias, J. Modeling Neural Networks In Silico 1 Introduction                                                  Synaptic connections between neurons form the basis of neural circuits that comprise neural networks. More importantly, it is the dynamic entity of synaptic connections that impart the property of plasticity that enable neural networks to quot;learn.quot; Synaptic connections are in a constant state of change that is determined in response to neural activity and other mechanisms which operate in short‐term and long‐term time scales (milliseconds to years) [1, 8, 20]. In terms of neural ensembles, neurons are quot;trainedquot; by mechanisms of potentiation and quot;detrainedquot; by mechanisms of depression. In terms of short‐term potentiation, synapse connections are modulated using the mechanisms of facilitation, augmentation, and potentiation; all of which function by enhancing neurotransmitter release from presynaptic terminals. Complementary to this are mechanisms of short‐term depression, which decrease the amount of neurotransmitter released and appear to result from activity‐dependent depletion of synaptic vesicles primed for exocytosis. More pertinent to this research paper, are the mechanisms of Long‐Term Potentiation (LTP) and Long‐Term Depression. These biological mechanisms have provided the foundation of Hebbian Models of learning and the modeling of neural networks. Furthermore, LTP and LTD have been most thoroughly studied in the mammalian hippocampus and have a high degree of biological relevance for biomedical related studies and applications [4, 8].   1.1 Mechanisms Underlying LTP and LTD  LTP functions at time scales of 30 minutes or longer. LTP is accomplished by the release of glutamate from the presynaptic cleft and its binding to AMPA receptors on the postsynatpic cleft (See: Figure 1). AMPA receptors generate EPSPs (excitatory postsynaptic potential) that cause the expulsion of magnesium from the NMDA receptor in the postsynaptic cleft, thus allowing calcium to become permeable to the postsynaptic cleft (See: Figure 1). AMPA receptors generate EPSPs (excitatory postsynaptic potential) that cause the expulsion of magnesium from the NMDA receptor in the postsynaptic cleft, thus allowing calcium to become permeable to the postsynaptic cleft. In turn, these rapid and high calcium spikes cause the postsynaptic neuron to activate signal transduction cascades that include: Ca+2/calmodulin‐dependent protein kinase (CaMKII) and protein kinase C (PKC) (See: Figure 2) [2, 16]. These cascades are involved in short‐term changes that result in increased synaptic efficacy for the maintenance of LTP, such as an increase in AMPA receptors in the postsynaptic neuron (See: Figure 3) [6]. In terms of more long‐term changes, repeated stimulation causes increased activation of genetic programs involved in the upregulation of the transcription of AMPA receptors, tubulin, actin, signal cascade transduction networks ‐ essentially genes that will increase the size of the synaptic cleft, the radius of the axon and sensitivity to neurotransmitter glutamate by increasing AMPA receptors in the postsynaptic cleft. These long‐term changes in gene expression profiles are believed to be the basis by which essentially permanent modifications to brain function are achieved. LTD functions antagonistically to prevent synapses from attaining maximum synaptic efficacy as over time this would make the encoding of new information impossible. 
LTD functions in a similar fashion that essentially differs in the manner by which calcium levels rise in the postsynaptic cleft (which is modulated by stimulation from other neurons). In opposition to LTP, where EPSPs generate brief high frequency stimulation, LTD requires stimulation at a low rate (about 1 Hz) for long periods (10‐15 minutes). The low frequency stimulation then causes calcium levels to rise in a more steady and slow fashion. In turn, the rise in calcium levels causes the activation of Ca+2–dependent phosphatases that in the short‐term result in the loss of AMPA receptors and in the long‐term change gene expression to promote decrease in size of the synaptic button and radius of the axon [2, 3, 5, 10, 20]. Page 3 of 25
  • 4. Modeling Neural Networks In Silico Carlin, D., Cook, D., Mendoza-Elias, J. Figure 1: Molecular Mechanisms of LTP. A: Resting Potential: EPSPs in the presynaptic neuron are insufficient to cause depolarization of the postsynaptic neuron and relieve the Mg+2 block on NMDA receptors. B: Postsynaptic Depolarization: Sufficient stimulation allows for the depolarization and entry of the Ca+2 ions, allowing the unblocking of NMDA receptors and activation of short-term LTP mechanisms. [20] Reproduced from Purves et al (2008), Neuroscience: Fourth Edition (2008) [20].  Figure 2: Signaling Cascades activated by LTP and LTD. A: At resting potential: Signaling cascades in the postsynaptic cleft activated in response activation of LTP. B: Siganaling cascades in the postsynaptic cleft activated in response to activation and maintenance of LTD. [6, 20] Reproduced from Purves et al (2008), Neuroscience: Fourth Edition (2008) [20].  Page 4 of 25
  • 5. Carlin, D., Cook, D., Mendoza-Elias, J. Modeling Neural Networks In Silico   Figure 3: Short-term and Long-term changes to LTP. Repeated stimulation causes short-term pathways (signaling cascades) to cause changes in gene expression that will serve to enhance the process (LTP or LTD) that is dominant. [20] Reproduced from Purves et al (2008), Neuroscience: Fourth Edition (2008). [20]     Figure 4: Properties of LTP and LTD - Specifications and Associativity. Hebbian theory relies on Spike- Timing Dependent Plasticity (STDP). A: Strong activation of a neural pathway without adjacent pathways firing within an interval time-window causes the weaker pathway to not be strengthened. B: Strong activation of a neural pathway coupled with adjacent pathways firing within an interval time-window causes an already strong synapse to be strengthened and a weaker synapse to be strengthened. [5, 10, 20] Reproduced from Purves et al (2008), Neuroscience: Fourth Edition (2008) [20].  Page 5 of 25
  • 6. Modeling Neural Networks In Silico Carlin, D., Cook, D., Mendoza-Elias, J. 1.2 Spike Timing Dependent Plasticity (STDP)  Central to both mechanisms is spike‐timing dependent plasticity (STDP). This imparts onto LTP and LTD the property of associativity1 (See: Figure 4). At a given low frequency of synaptic activity, LTD will occur if presynaptic activity precedes a postsynaptic action potential, while LTP occurs if the postsynaptic action potential follows presynaptic activity. The relationship between the time interval and the magnitude of synaptic change is a very sensitive function of the time internal, with no changes observed if the presynaptic activity and postsynaptic activity are separated by 100 milliseconds or longer. This requirement for precise timing between presynaptic and postsynaptic activity for the induction of these forms of long‐lasting synaptic plasticity is what drives the development of neural networks. Although the mechanisms involved are not yet understood, it appears that these properties of STDP arise from timing‐dependent differences in postsynaptic Ca+2 signals. Specifically, if a postsynaptic action potential occurs after presynaptic activity, the resulting depolarization will relieve the block of NMDA receptors by Mg+2, yielding LTP. In contrast, if the postsynaptic action potential occurs prior to the presynaptic action potential, then the depolarization associated the postsynaptic action potential will subside by the time that an EPSP occurs. This will reduce the amount of Ca+2 entry through the NMDA receptors, leading to LTD [5, 20].   Figure 5: The Biological Basis of our Model. A: Our model was inspired by the connectivity of neurocytes and development of motor coordination in the human cerebellum. B: Signal from the eye (red) is transmitted via climbing fibers (1–input neuron) which stimulate parallel fibers (interneurons: 2–excitatory or 3–inhibitory). Parallel fibers are believed to function as “timers.” Those with the correct “timing” (e.g., phase) in relation to the initial stimulus (climbing fibre) will serve as an excitatory neuron, and vice versa. C: Purkinje cells (4–output neuron) receive stimulus from Climbing fibers and Parallel fibers. The key serves to show the relationships between in vivo neurocytes and our in silico representation of them in Figure 6 (below). A: Reproduced from http://www.mphy.lu.se/avd/nf/hesslow/neuroscience/classcond.html B, C: Reproduced from Purves et al (2008), Neuroscience: Fourth Edition (2008) [20]                                                          Weak stimulation by itself will not trigger LTP, however if one pathway is strongly activated at the same time a 1 neighboring pathway is weakly stimulated then both synaptic pathways will undergo LTP. LTP and LTD are restricted to the activated (stimulated) synapse. In short, cells that fire together, wire together, apart from specificity. Page 6 of 25
  • 7. Carlin, D., Cook, D., Mendoza-Elias, J. Modeling Neural Networks In Silico 1.3 Theoretical Models of Neural Networks   There are several different models of learning in neural networks, most of which originated from Hebb's postulate that quot;When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increasedquot; [13]. These models are separable into two main classes: firing‐rate‐based and spike‐based. They differ in that firing‐rate models use the average firing rate of a neuron over a given unitary time period as the basis of computation, while spiking models use the spike train of a given neuron in a probabilistic fashion. Our model was biologically inspired by the action of LTP and LTD in determining neural architecture in the cerebellum (See: Figure 5). Our model is a smoothed, synchronous version of the firing‐rate model based on the Hopfield attractor network (see Figure 6). This class of network stabilizes to an attractor (a stable pattern of neuron firing). The network can be considered a function of the input neurons based on the synaptic weights, so that training the network consists of defining this attractor (which can also be termed quot;memoryquot;). This is done by reinforcing Hebbian learning proportionately to the difference between the emergent and desired output signals at each time step, given an input signal. We implement a slightly modified version of the model originally published by Hopfield [14], differing in two respects: 1. We do not update individual neurons asynchronously, but rather compute the firing rates of all neurons synchronously at each time step. This is justified because we are calculating averages over each time‐step, rather than trying to discretize a continuous process ‐ in other words, we are abstracting away from the mechanics of individual firing neurons in order to observe the state of the entire system at every time (t). 2. We use neurons with a continuous, graded‐response rather than two state McCullough‐Pitts neurons. Networks of graded‐response neurons have been shown to exhibit similar emergent computational properties to the original model [15]. € Once again, this is justified because we are measuring the average firing rates of all neurons over a given time step rather than simply whether or not a neuron fired during that time step. Thus time steps in our simulation are several times longer than the refractory period of a single neuron. This allows computation of the simulation for a larger number of neurons, enabling a larger parallel experiment to be run. We should note that networks with synchronous updating may have certain differences in emergent dynamical properties. These were beyond the scope of this study, but more information may be found in Little et al. [17]. (See Section 2 for a mathematical formulation of our firing‐rate‐based model of Hebbian learning implementing a Hopfield attractor network). Spike‐based models are newer and not yet as well studied as firing‐rate models. These models assume that at a given time-step (t), there is some unknown probability distribution describing the likelihood of each neuron spiking at that time point. This distribution is parameterized around the hidden variable (the true input). 
Thus, rather than gradient descent toward a function as with firing‐rate based models, spiking models exhibit a gradient € Page 7 of 25
  • 8. Modeling Neural Networks In Silico Carlin, D., Cook, D., Mendoza-Elias, J.   Figure 6: Abstract Representation of Model. The diagram outlines the relationships between the different types of neurons: input neuron (1–Climbing fiber), excitatory interneuron (2–Parallel fiber firing in phase), inhibitory interneuron (3–Parallel fiber firing out of phase), and output neuron (4– Purkinje cell). Please note the key to the left corresponds to the key in Figure 5.   Figure 7: Firing Relationships. A: The equations governing the model: (i.) Firing Rate; (ii.) Synaptic Strength; (iii.) the “Learning Rule.” B: In phase – (i.) If two neurons are synchronous, then the synaptic weight will increase by 1.0. (ii.) The closer the pre-synaptic firing rate vi (phase = α) of two neurons (increasing synchronicity), the higher vj increases and synaptic weight wi,j in the next time-step. C: Out of phase – (i.) If two neurons completely asynchronous, then synaptic weight will increase by 0. (ii.) The further apart the pre-synaptic firing rate vi (phase = α) of two neurons (increasing asynchronicity), the higher vj decreases and synaptic weight wi,j in the next time-step. Page 8 of 25
  • 9. Carlin, D., Cook, D., Mendoza-Elias, J. Modeling Neural Networks In Silico descent towards a distribution parameterized around the attractor state (memory). These models then study the ability of a population of individually noisy neurons to reconstruct the hidden variable. Indeed, much of the work thus far focuses on coding of hidden variables in populations of afferent neurons. See Zemel et al. [22] for an explanation of population coding, and Deneve et al. [11] for a model showing that population of spiking neurons can perform Bayesian inference computations. We chose to simulate the Hopfield network primarily due to practical concerns, namely that spike-based models are analytically and computationally intensive, and are difficult to train. They are also not yet well understood, thus there is uncertainty as to which model would be appropriate for our purposes here. Conversely, Hopfield attractors are theoretically well understood (see Ermentrout et al. [9], ch. 5). Our model allows us to compute a state of the entire network by solving a single system of linear differential equations and then updating synaptic weights appropriately, which is a faster computation than spike-based models allow.                                     Page 9 of 25
  • 10. Modeling Neural Networks In Silico Carlin, D., Cook, D., Mendoza-Elias, J. 2 Mathematical Formulation                  Note: See Supplement 1 (pp. 13) for an Equations Summary    2.1 Setup  We define a vector V2 of dimension N (we are modeling N neurons), where vi represents the average firing rate of neuron i over a given time step. We also define weights wi,j, representing the efficacy of synaptic signal propagation (i.e., connection strength) between pre-synaptic neuron vi and post-synaptic neuron vj . Due to the directionality of synapses, wi,j does not necessarily equal wi,j, because these represent two different synapses. The rate of firing of a neuron is determined by the following: N v j = ∑ w i, j v i (1) i= 0 which is a linear combination of the firing rates of all neurons connected to neuron vj , weighted by the strength of each synapse. € In a system with N neurons, this yields a system of N equations. We choose one neuron, vi=1 , to be the ”input” neuron. This can be achieved in a real system by voltage clamping the neuron. In this model we simply fix v1 at a specific rate during each time step. Say we choose the rate to be v1 = a, and assuming that the neuron does not feed back on itself (wi,j = 0) then the linear system of equations is: v1 = av 2 = w1,2v1 + w 3,2 + ...+ w n, j v N  v N = w1,N v1 + w 2,N v 2 + ...+ w N ,N −1v N −1 Let D be a column vector as follows: € , and define T to be the following matrix: . (2) Then we have the matrix equation , (3) where solving (4) yields the average firing rates for all neurons over a given time step [5, 7, 12, 14, 19].                                                          This is a discrete model - for simplicity, time arguments are omitted where immaterial. 2 Page 10 of 25
  • 11. Carlin, D., Cook, D., Mendoza-Elias, J. Modeling Neural Networks In Silico 2.2 Hebbian Learning   As in biological neurons, the simulated neurons “learn” by changing the synaptic weight wi,j = 0. The rule by which these weights are changed classifies Hebbian learning. We is a twice differentiable function of the two local variables, vi start by assuming that and vj. Expanding in Taylor series, then: Knowing that the change in weight should be dependent on the correlation between the two firing rates, the simplest learning rule that can be obtained is that c0 through c4 equal zero and c5 = A, where A is some positive constant, i.e., dw i, j = A ∗ v1 ∗ v 2 dt This is the prototype for Hebbian learning, and is our representation of LTP. However, we would also like to include a model for LTD, so we assume that the synaptic weights decrease in strength whenever they are not used (i.e. when vi ∗ vj = 0) at a constant € rate, B. Then; dw i, j = A ∗ v1 ∗ v 2 − B dt Additionally, we seek to bound the weights of the synapses between 0 (equivalent to disconnected neurons) and 1 (a maximally efficient synapse). In order to do this, we choose and for positive constants and between 0 and 1. and € ca be viewed as the relative strengths of LTP and LTD in a single time step; respectively. We then have: dw i, j (5) = α ∗ (1− w i, j ) ∗ v i ∗ v j − β ∗ w i, j dt Furthermore, we want to bound the firing rates between 0 (not firing) and 1 (firing at the maximally allowed rate for that neuron). In order to do this, the constants must be normalized by dividing by the sum of the weights of the most strongly connected neuron at € every time step. To do this, we calculate the Hebbian learning change in synaptic strength first, and then normalize by the largest sum of the weights going into any one neuron. While the equations above are continuous, our model is discrete. Thus at each time step the rule for obtaining from is therefore: (6). This introduces the important concept of competition between synapses. The idea is that Page 11 of 25
  • 12. Modeling Neural Networks In Silico Carlin, D., Cook, D., Mendoza-Elias, J. synaptic weights can only grow at the expense of others so that if a certain subgroup of synapses is strengthened, other synapses to the same post-synaptic neuron have to be weakened by dividing over a larger sum. 2.3 Inhibitory neurons and model training   In order to implement learning reinforcement (this may be thought of as a dopamine drip3), we shall transform α into a function dependent on the firing rate of some arbitrary output neuron (let us use vN ). There are clearly a very large number of potential training functions that might be considered, depending on the desired behavior of the network. We shall seek here to obtain some degree of phase inversion between the input (voltage-clamped) neuron v1 and vN . To do this, we set , (7) so that maximal promotion of synaptic efficacy would take place only during time steps when the input and output neurons had very different average firing rates. While this rule is sufficient to force the firing rate of the output neuron to zero for all input firing rates, in order to get opposing behavior, we must introduce inhibitory neurons into our model. Inhibitory neurons are modeled similarly to the excitatory neurons; however, rather than a firing rate bound by [0, 1], we use [−1, 0]4. Thus, according to the learning rule above, a presynaptic inhibitory neuron vi simultaneously exhibiting a high firing rate as a post- synaptic excitatory neuron vj will cause the synaptic strength wi,j to be reinforced. However, the summation in equ. (1) means this reinforcement will result in a lower average firing rate for vj during future time steps (all else held equal). Using the same feedback rule as (7) and using the same update step as (6), we show that the output neuron’s firing rate reaches a local maximum when the input neuron’s rate is at a local minimum, yet also follows the input neuron to its maximum. Further refinement of the model would be required to show complete phase inversion between the input and output neurons [5, 7, 12, 14, 19].                                                          We define the reinforcement factor as a function of two neurons, one voltage-clamped and the other an ”output” 3 neuron, the firing rate of which can be monitored by micro-electrode. Thus this form of learning reinforcement does not require monitoring of the firing rate of every single neuron, which in turn means that a biological reinforcement mechanism would only require a uniform concentration of dopamine across the entire network, rather than neuron- specific dosages. Our model, though mathematically equivalent to this, differs slightly in implementation. Please refer to section 3 4 for details. Page 12 of 25
  • 13. Carlin, D., Cook, D., Mendoza-Elias, J. Modeling Neural Networks In Silico   Supplement 1: Equation Summary. This supplementary figure accompanies Sections 2 and 3; and condenses the mathematical relationships describes in those sections. (A-D) accompanies Sections 2. (E) accompanies Section 3.   Page 13 of 25
  • 14. Modeling Neural Networks In Silico Carlin, D., Cook, D., Mendoza-Elias, J. 3 Computer Simulation                    Note: See Supplement 1 (pp. 13) for an Equations Summary    We used a computer simulation of the model described above in order to be able to generate testable predictions of neural network behavior. This simulation was written in MATLAB, and simulations were performed using Octave5, with graphical representations in Gnuplot6. After initialization, the simulation repeatedly normalizes, calculates the next step of the strength matrix, and then solves for the resulting vector of firing rates.   3.1 Setup   The implementation is a direct representation of the mathematical formulation described above. We set N = 20, modeling a small network of 20 neurons for computational tractability, and the simulation is run for STEPS = 100 time steps. Additionally, we define 6 inhibitory neurons in our model. We begin by defining the stimulus pattern for our input neuron:  Π sin t  + 1  8 , (8) v1 (t) = 2 using a periodic function bounded between 0 and 1. This allows us to observe the phasic relationship between v1 and vN , the output neuron. We then define the vector D: €  7                             (9)  Inhibitory neurons are stored in the V vector, and identified by a subset Inhib of the indices of V, where dim(Inhib) < N determines the number of inhibitory neurons in the system. All logic is performed identically for inhibitory neurons as for excitatory, with one fundamental difference: synaptic strengths wi,j between an inhibitory neuron i and its postsynaptic neurons j are negated in (1). This means that during the solving of (4), wi,j terms are negated in the T matrix when i ∈ I . All other processing of inhibitory neurons remains the same as for excitatory, and with the exception of the solving step, all weights are maintained in the system as positive floating point numbers in order to € ease computation of the update matrix. We next initialize a random matrix T of dimensions (N, N ). We set T1,1 = 1 (the input € neuron is entirely self-determined) and Ti, j = 0, 1 < j < N (the input neuron is not post- Ti, j = −1, 1 < i < N , such that no neuron is synaptic to any other). We additionally set all post-synaptic to itself. We then have                                                          € € See: http://www.octave.org 5 See: http://www.gnuplot.info 6 We must additionally constrain a second neuron to fire constantly at a rate of 1.0. This is due to equation (4), which 7 calculates each firing rate as a linear combination of the others. Each of these linear combinations is dependent on the value of v1 . Thus when v1 = 0, such that D = 0, the zero vector, the only possible solution to (4) is that V = 0 as well. This is an artifact of this model and is not biologically significant. Page 14 of 25
  • 15. Carlin, D., Cook, D., Mendoza-Elias, J. Modeling Neural Networks In Silico … 1 0 0 0   … 0 1 0 0 … . (10) T =  w 3,1 w 3,2 −1 w 3,4    … … … −1 …   … … … … … which the reader will notice differs from equation (2) in one respect: in that equation, Ti, j = −1, 1 < i < N . Those values are necessary in order to avoid solving for a vector of 0 € firing rates in the solving step in (4).   3.2 Normalization step   € As presented in (2.2), matrix T, representing synaptic strengths in the network, must be normalized before the first time step, and then at every subsequent time step in order to maintain the bounding conditions (strengths and firing rates must remain both in the interval [0, 1]). This normalization proceeds as follows (where Tnorm represents the normalized strength matrix): norm Ti,i = 0, i > 3 € We then solve (4) once before the first time step in order to obtain an initial set of firing rates.    3.3 Differential Step  This step proceeds exactly as described in equation (5). See figure (9) for the implementation, and figures (10, 11) for implementations of the alpha and beta functions. During our simulation we simply used β = 0.1 (this parameter determines the effect of LTD in comparison to LTP in the model). We set all , and use the learning rule in equation (7).   3.4 Solving step  The solving step proceeds as described in (4) (see: code in Code 7). However, the presence of inhibitory neurons raises a bounding problem in the firing rates V: a heavily inhibited excitatory neuron can occasionally display a negative average firing rate over a given time‐step, which is nonsensical (it would imply that an excitatory neuron is inhibiting other neurons, or that an inhibitory neuron is potentiating other neurons, by firing). This arises due primarily to rounding errors arising from the numerical nature of our simulation. It is remedied by simply setting all negative values in V to 0 at the end of each solving step.         Page 15 of 25
Figure 8: Convergence of Firing Rates. Vertical axis: averaged neuron firing rates. Horizontal axis: time progression (time steps 0 to 100). The input neuron is blue and the output neuron is yellow; the remaining neurons are represented by various colors. The graph displays the average firing rate at each time step. Convergence (in firing rates) of all neurons to the output neuron occurs after approximately the 50th time step. This demonstrates that the in silico representation of our model can be trained to a firing pattern. [Network description: 20 neurons (N), 100 STEPS, 6 inhibitory neurons (inhib 3-9).]

Figure 9: Rates of Change in Firing Rates. Vertical axis: rates of change of neuron firing rates. Horizontal axis: time progression (time steps 0 to 100). The rate of change of the input neuron is blue and is described by the cosine function v1'(t) = (pi/16)*cos(pi*t/8), the derivative of v1(t). The output neuron is yellow. All other neurons, represented by various colors, eventually converge to the yellow curve (output neuron), which represents the time derivatives of all v_i, i > 1.
3.5 Simulation Results

We present the results of the simulation below. In Figure 8, we show the average firing rates of all the neurons at each time step. Several effects may be observed. The non-input neurons all slowly converge to a single firing pattern, and this convergence is complete after approximately 50 time steps. Additionally, we notice that the output neurons (as all non-input neurons are synchronized in steady state, we may consider them all output neurons) converge to a periodic function containing two components: the input neuron's sine wave, and a wave phase-inverted with respect to it. We notice that while the output neurons climb to reach the input neuron's maxima, they also reach local minima very close to the input maxima.

This behavior can be further analyzed by observing the rates of change of neuron firing rates between time steps. Figure 9 shows a graph of these rates over the same time period. We notice here that the rates of change of the output neurons' firing rates are also described by a periodic function. This function's frequency is approximately double that of the input neuron's rate-of-change function. We can also see that the output rates of change reach local minima as the input reaches local maxima. Thus the output neurons have both phase-inverted and frequency-doubled components.

From these two results we can derive testable predictions as to the behavior of a biological neural network in similar conditions. Specifically:

• The firing rate of the output neuron will eventually stabilize to a periodic function containing components of both the input neuron's firing pattern and the learning rule's reinforcement function (in this case, a phase-inversion of the input neuron's function).

• The rate of change of the output neuron will also eventually stabilize to a periodic function such that local minima are reached at maxima in the input neuron's rate of change.
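As one illustration of how these predictions could be checked against either the simulated firing rates or recorded data, the phase relationship between the input and output traces can be quantified with a simple lagged correlation. This is a hypothetical analysis sketch rather than part of the simulation above; it assumes the N-by-(STEPS+1) firing-rate matrix V produced by Code 1, and the window length and lag grid are illustrative choices.

% Hypothetical phase check (assumes the firing-rate matrix V from Code 1).
period = 16;                               % stimulus period in time steps, 2*pi / (pi/8)
v_in   = V(1, end-2*period+1:end);         % input trace over the last two periods
v_out  = V(N, end-2*period+1:end);         % output trace over the same window

v_in  = v_in  - mean(v_in);                % remove means so correlation reflects phase
v_out = v_out - mean(v_out);

% Correlate the output against the input shifted by each candidate lag.
c = zeros(1, period);
for lag = 0:period-1
  shifted  = circshift(v_in, [0, lag]);
  c(lag+1) = sum(v_out .* shifted) / (norm(v_out) * norm(shifted));
end
[cmax, k] = max(c);
% A phase-inverted component should place the best-aligned lag near period/2.
fprintf('best-aligned lag: %d of %d steps (correlation %.3f)\n', k-1, period, cmax);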
4 Future Directions

Now that a robust, computable model has been developed, we propose the following experimental design to test our computational model in the biological setting and to allow its refinement (to date, no model exists that is truly capable of emulating the biological setting of LTP and LTD).

Traditionally, dissociated neurons cultured in vitro have served as the model system for studying the dynamics of neural networks [3, 21]. One obstacle in studying neuronal networks is the need for simultaneous recording from all neurons. Another major obstacle is reproducibility: the shape, morphology, position, and state of the neurons cannot be controlled with sufficient precision to ensure that networks are identical. Current techniques employ multi-electrode arrays (MEAs), which can stimulate and record the electrical potentials of neurons at only one place at a time. As a result, once neurons are cultured, studying the formation of neural networks requires placing an electrode at every neuron, and the networks would never have identical starting conditions.

Therefore, we propose (see Figure 10) using neurons (embryonic rat hippocampal neurons) cultured onto silicon wafers as described in Starovoytov et al. [21] (see Supplement 2). The silicon wafer allows the selective stimulation of neurons by pulsed laser, as well as the recording of the electrical potential of neurons by detecting changes in electrical states at the silicon surface [21].

Supplement 2: Starovoytov Experimental Setup. A: Culture Dish. Design of a culture dish used in electro-optical stimulation experiments. The silicon wafer is glued to the bottom of the plastic culture dish, and neurons are plated onto the exposed silicon surface in the middle opening of the dish. The extracellular solution is grounded using an Ag/AgCl electrode, and the bottom of the dish is contacted by a copper electrode with InGa eutectic painted onto the back side of the silicon wafer. B: Equivalent Circuit. Ci and Ri represent, respectively, the capacitance and the resistance of the double layer formed at the interface of the electrolyte and the semiconductor. Vb denotes the applied bias voltage; Rs is the resistance of the electrolyte. An additional series resistor Rm was used for total-current measurements. The extracellular potential for the patched cell is determined by the conditions immediately above the silicon surface. C: Optical Arrangement. The collimating and steering optics, the AOD, and the output coupler of the fiber were all rigidly attached to the microscope and moved together when translation across the sample was required, allowing the patch to be kept stationary at all times. The Tube and Scan lenses were introduced to lock the beam at the center of the microscope objective for all deflection angles. Dash-dot and dotted lines show targeting of different locations at the sample. Reproduced from Starovoytov et al., Journal of Neurophysiology 93: 1090-1098 (2005) [21].
Figure 10: Proposed Experimental Setup employing Light-Directed Electrical Stimulation (LDES) of Neurons Cultured on a Silicon Wafer. 1a - Electrical Potential Scan: The bottom of the silicon wafer is fitted with electric potential (EP) probes. This informs the electrical potential states in the in silico neural network representation (2a). 1b - Visual Mapping Scan: The microscopy setup from Starovoytov et al. can be employed to enable positional mapping of the neurons in the network under study. This informs the in silico model (2a). 2a - Computer Model: The positions and electrical states of the neurons are compiled to make an in silico representation of the in vivo neural network. 2b - Parallel Simulations: The in silico model (2a) can be copied and "stimulated" virtually to extrapolate future states; this virtual stimulation should parallel the stimulation that will occur, or is occurring, in vivo by pulsed laser. The extrapolated future states of the in silico model are correlated with the real-time in vivo model, and statistical analysis of the two data sets is then used to refine the rules governing the in silico model (2a) for prediction of future states (e.g., "noise" coefficients). 3 - Training Output Data Set: The neural network must be "trained": a solution set is computed that will "push" the neuron firing rates to a desired output. 4 - Modulatory Instructions: Pulsed lasers are coordinated to stimulate neurons at the appropriate times. 5 - Change in Neural Architecture: Modulation of the in vivo neural network causes intracellular changes in gene expression in neurons through the activation of LTP and LTD, and the intercellular development of a neuronal pathway. The neural architecture comes to reflect the output solution set in (3) (e.g., hand-eye coordination in the cerebellum; see Figure 5).

Once the neuron plates are produced, a scan over the silicon wafer can be made to determine the initial states (electrical potentials and firing rates) of the neurons. These data can then be used to supply our model with the synaptic strengths and firing rates of the network. Training through stimulation of the neural network can then proceed both in silico and in vitro. This allows virtual and real training of neural networks to occur in parallel, and the results of these trainings can be statistically compared: virtual predictions against real-time data. In this way, refinements can be made to the model, and new factors can be introduced that may alter the learning rules to include variables that are traditionally considered "non-Hebbian."
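To make the refinement loop in step 2b concrete, a sketch of the statistical-comparison step is given below. This is a hypothetical illustration rather than part of the proposed apparatus: the arrays V_pred and V_meas, the noise coefficient sigma, and the target error are all names and values introduced here for exposition.

% Hypothetical refinement step: compare predicted and measured firing rates and
% adjust a "noise" coefficient used by the in silico model (names are illustrative).
function [sigma, err] = refine_noise(V_pred, V_meas, sigma)
  % V_pred, V_meas: N-by-K matrices of predicted and recorded firing rates.
  residual = V_meas - V_pred;

  % Root-mean-square prediction error over all neurons and time steps.
  err = sqrt(mean(residual(:).^2));

  % Proportional update: widen the noise term when predictions are poor,
  % narrow it when they track the recordings closely.
  target = 0.05;                        % acceptable RMS error (illustrative)
  sigma  = sigma * (err / target);
  sigma  = min(max(sigma, 0), 1);       % keep the coefficient in [0, 1]
end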
A caveat to the plan above is that, in the current model, there is no fixation of certain synaptic weights at 0; that is to say, we model all neurons as post-synaptic to all of the others, with the exception of a single (voltage-clamped) input neuron. An important future direction would therefore be to enable a sparse matrix of weights (i.e., one with some values fixed at 0) to be used in the computational simulation. In purely computational experiments, these fixed zeros would be distributed randomly, in proportion to the ratio of synapses to pairs of neurons that occurs in biological neural networks. In parallel in vivo/in silico experiments, they would be set to emulate the in vivo network's state of connectedness. Adding random variation or "noise" to our model, either in the form of asynchronous updating of neurons as implemented in [14, 15] or as a random noise term in the synaptic weight updating rules, would lend further biological fidelity to our model (a brief sketch of both extensions appears at the end of this section).

In addition, the focus of our model was the generation of the rules that govern LTP and LTD, with an emphasis on spike-driven plasticity; however, this does not account for what truly occurs, at the nano-level, in the short-term and long-term behavior of neural plasticity. In order to make the model more realistic, a model of the synapse should be added to include neurotransmitter, receptor, and Ca+2 levels. This would involve re-designing the computational model to use the more modern spike-based methods referred to in Section 1.3 (for an example of this type of model and its performance, see [18], which implements a spiking neuron model allowing complex behavior to be encoded).

Lastly, the LDES platform allows the recording and modulation of the neural activity of cultured neurons, and thereby of their connectivity. Consequently, the use and development of this platform may enable the generation of artificial neural networks with tailored neural architectures for biomedical applications.
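A brief sketch of how the sparse-connectivity and noise extensions discussed above might enter the existing update step is given below. This is a minimal sketch under stated assumptions: the connectivity mask Mask, the connection probability p_connect, and the noise amplitude eta are illustrative names and values introduced here, and are not part of the present implementation (Codes 1-6).

% Illustrative extension: sparse connectivity and a noise term in the weight update.
p_connect = 0.3;                 % illustrative connection probability
eta       = 0.01;                % illustrative noise amplitude

Mask = rand(N, N) < p_connect;   % fixed sparse connectivity pattern
Mask(1:2, :) = 0;                % the input and clamped rows are never updated
T = T .* Mask;                   % absent synapses start (and stay) at weight 0
T(1,1) = 1;  T(2,2) = 1;         % restore the fixed rows of equation (10)
T = normalize(T, N, Inhib);

% Inside the main loop, in place of "T = T + Delta":
Delta = dT(V, T, N, Inhib);                   % Hebbian/LTD update (Code 3)
Delta = (Delta + eta * randn(N, N)) .* Mask;  % add noise, restrict to existing synapses
T     = max(T + Delta, 0);                    % keep stored weights non-negative
T     = normalize(T, N, Inhib);               % re-impose bounds and the -1 diagonal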
5 Acknowledgements

We would like to thank Dr. David Needham of the Pratt School of Engineering and the Center for Biologically Inspired Materials and Materials Systems (CBIMMS) for helping to create this opportunity and for making this research project possible. In addition, we would like to thank Dr. Ragachavari of the Duke Neurobiology Department for his consultation and advice on the project.

6 Appendix: Simulation Code

Code 1: model.m

clear
N=20
Inhib=zeros(N,N);
Inhib(:,:)=1;
Inhib(3:9,:)=-1;
STEPS=100

% Connection strengths: random, normalized, with fixed input and clamped rows
T=normalize(rand(N,N),N,Inhib);
T(1,1)=1;
T(1:2,2:N)=0;
T(2,1)=0;
T(2,2)=1;

% Values of neurons: external drive D and initial firing rates V
D=zeros(N,1);
D(1)=.5;
D(2)=.5;
Delta = zeros(N,N);
V=inv(T)*D;

for s=1:STEPS
  D(1)=(sin(s*pi/(8))+1)/2;
  D(2)=1;
  Delta = dT(V,T,N,Inhib);
  T=T+Delta;
  T=normalize(T,N,Inhib);
  T0 = T .* Inhib;
  V=[V,inv(T0)*D];
  V=deadneurons(V);
end

plot(V');
Code 2: normalize.m

% T (strength matrix) normalization
function out = normalize(T,N,Inhib)
  Normed=T;
  for i=3:N
    % in order to avoid summing in w(i,i) terms:
    Normed(i,i)=0;
  end
  Rows = sum(abs(Normed'));
  for i=3:N
    for j=1:N
      Normed(i,j) = Normed(i,j) / max(Rows(2:N));
    end
  end
  for i=3:N
    % to avoid linear dependence of strengths (keeps the solving matrix invertible)
    Normed(i,i)=-1;
  end
  out=Normed;
end

Code 3: dT.m

% T (strength matrix) differential
function out = dT(V,T,N,Inhib)
  beta=.1;    % LTD weighting, as stated in Section 3.3
  delta=zeros(N,N);
  for i=3:N
    for j=1:N
      alph = alpha(i,j,V,T,N,Inhib);
      delta(i,j)=alph*V(i,size(V,2))*V(j,size(V,2))*(1-T(i,j)) - beta*T(i,j);
    end
  end
  for i=3:N
    delta(i,i)=0;
  end
  out=delta;
end
Code 4: alpha.m

function out = alpha(i,j,V,T,N,Inhib)
  % alpha = abs(V(1)-V(N)) rule
  out = abs(V(1,size(V,2))-V(N,size(V,2)));
end

Code 5: beta.m

function out = beta(alph)
  out = 1 - alph;
end

Code 6: deadneurons.m

function out = deadneurons(V)
  for i = 3:size(V,1)
    if (V(i,size(V,2)) < 0)
      Vend=size(V,2);
      V(i,Vend) = 0;
    endif
  endfor
  out = V;
end
7 References

[1] Abraham, W.C., Logan, B., Greenwood, J.M., and Dragunow, M. Induction and experience-dependent consolidation of stable long-term potentiation lasting months in the hippocampus. Journal of Neuroscience. 23: 9626-9634. (2002).

[2] Ahn, S., Ginty, D.D., and Linden, D.J. A late phase of cerebellar long-term depression requires activation of CaMKIV and CREB. Neuron. 26: 559-568. (1999).

[3] Bi, G., and Poo, M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience. 18(24): 10464-10472. (1998).

[4] Bi, G., and Poo, M. Distributed synaptic modification in neural networks induced by patterned stimulation. Nature. 401: 792-796. (1999).

[5] Brader, J.M., Senn, W., and Fusi, S. Learning real-world stimuli in a neural network with spike-driven synaptic dynamics. Neural Computation. 19: 2881-2912. (2007).

[6] Chung, H.J., Steinberg, J.P., Huganir, R.L., and Linden, D.J. Requirement of AMPA receptor GluR2 phosphorylation for cerebellar long-term depression. Science. 300: 1751-1755. (2003).

[7] Dayan, P., and Abbott, L.F. Theoretical Neuroscience. MIT Press: Massachusetts, USA. (2001).

[8] Engert, F., and Bonhoeffer, T. Dendritic spine changes associated with hippocampal long-term synaptic plasticity. Nature. 399: 66-70. (1999).

[9] Ermentrout, B. Neural networks as spatio-temporal pattern-forming systems. Reports on Progress in Physics. 61: 353-430. (1998).

[10] Fitzsimonds, R.M., Song, H., and Poo, M. Propagation of activity-dependent synaptic depression in simple neural networks. Nature. 388: 439-448. (1997).

[11] Deneve, S. Bayesian spiking neurons I: Inference. Neural Computation. 20: 91-117. (2008).

[12] Gerstner, W., and Kistler, W. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, UK. (2002).

[13] Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory. Wiley-Interscience, New York. (1949).

[14] Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA. 79: 2554-2558. (1982).

[15] Hopfield, J.J. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, USA. 81: 3088-3092. (1984).

[16] Junge, H.J., Rhee, J.S., Jahn, O., Varoqueaux, F., Spiess, J., Waxham, M.N., Rosenmund, C., and Brose, N. Calmodulin and Munc13 form a Ca+2 sensor/effector complex that controls short-term synaptic plasticity. Cell. 118: 389-401. (2004).
[17] Little, W.A. The existence of persistent states in the brain. Mathematical Biosciences. 19: 101-120. (1974).

[18] Lovelace, J., and Cios, K. A very simple spiking neuron model that allows for modeling of large, complex systems. Neural Computation. 20: 65-90. (2008).

[19] Mascagni, M., and Sherman, A. Numerical methods for neuron modeling. In: Methods in Neuronal Modeling, Second Edition, ed. Koch, C., and Segev, I. MIT Press: Massachusetts, USA. (2001).

[20] Purves, D., Augustine, G.J., Fitzpatrick, D., Hall, W.C., LaMantia, A., McNamara, J.O., and White, L.E. Neuroscience, Fourth Edition. Sinauer Associates, Inc.: Sunderland, Massachusetts, USA. (2008).

[21] Starovoytov, A., Choi, J., and Seung, H.S. Light-directed electrical stimulation of neurons cultured on silicon wafers. Journal of Neurophysiology. 93: 1090-1098. (2005).

[22] Zemel, R.S., Huys, Q.J.M., Natarajan, R., and Dayan, P. Probabilistic computation in spiking populations. Advances in Neural Information Processing Systems. MIT Press: Massachusetts, USA. (2004).

[23] Orlando, D.A., Lin, C.Y., Bernard, A., Wang, J.Y., Socolar, J.E.S., Iversen, E.S., Hartemink, A.J., and Haase, S. Global control of cell-cycle transcription by coupled CDK and network oscillators. Nature. 453: 944-948. (2008).

[24] Strange, K. The end of "naïve" reductionism: rise of systems biology or renaissance of physiology? American Journal of Physiology: Cell Physiology. 288: C968-C974. (2005).