3. Man versus Machine
(hardware)

Numbers                   Human brain              Von Neumann computer
# elements                10^10 - 10^12 neurons    10^7 - 10^8 transistors
# connections / element   10^3 - 10^4              10
switching frequency       10^3 Hz                  10^9 Hz
energy / operation        10^-16 Joule             10^-6 Joule
power consumption         10 Watt                  100 - 500 Watt
reliability of elements   low                      reasonable
reliability of system     high                     reasonable
4. Man versus Machine
(information processing)

Features              Human Brain      Von Neumann computer
Data representation   analog           digital
Memory localization   distributed      localized
Control               distributed      localized
Processing            parallel         sequential
Skill acquisition     learning         programming

No memory management,
no hardware/software/data distinction.
6. The Perceptron
Binary classifier functions
Threshold activation function

[Figure: a biological neuron (dendrites, axon, terminal branches of the
axon) next to a perceptron unit that sums the weighted inputs
w1 x1, w2 x2, ..., wn xn]

yj : output from unit j
wij : weight on connection from j to i
xi : weighted sum of input to unit i

xi = Σj wij yj
yi = f(xi − θi),   θi : threshold
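The two equations above can be sketched in a few lines of code. This is a minimal illustration with a binary threshold activation; the function name and argument names are illustrative, not from the slides.

```python
def threshold_unit(inputs, weights, theta):
    """Binary threshold neuron: fires (1) iff the weighted sum
    x = sum_j w_j * y_j reaches the threshold theta."""
    x = sum(w * y for w, y in zip(weights, inputs))
    return 1 if x - theta >= 0 else 0
```

For example, with weights (0.5, 0.5) and threshold 0.9, the unit fires only when both inputs are active.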
7. Type 1. Perceptron
feedforward
Structure: 1 input layer
1 output layer
Supervised learning
Hebb learning rule
Can compute: AND, OR.
Cannot compute: XOR.

[Figure: a single output unit y0 = f(x0) with inputs i1, i2, a bias
input b = 1, and weights w01, w02, w0b]
8. Learning in a Simple Neuron
Perceptron Learning Algorithm:
1. Initialize weights
2. Present a pattern and target output
3. Compute output:  y = f[ Σ_{i=0}^{2} w_i x_i ]
4. Update weights:  w_i(t+1) = w_i(t) + Δw_i
Repeat starting at 2 until an acceptable
level of error is reached.
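The four steps above can be sketched as a training loop. This assumes a binary threshold activation and the standard error-correction update Δw_i = η(t − y)x_i with a bias input fixed at 1; names and the learning rate are illustrative.

```python
def train_perceptron(patterns, eta=0.1, epochs=100):
    """patterns: list of (inputs, target). A bias input of 1 is appended
    to every pattern, so the last weight plays the role of -threshold."""
    n = len(patterns[0][0]) + 1          # +1 for the bias weight
    w = [0.0] * n                        # 1. initialize weights
    for _ in range(epochs):
        errors = 0
        for x, t in patterns:            # 2. present a pattern and target
            xs = list(x) + [1]           # bias input b = 1
            s = sum(wi * xi for wi, xi in zip(w, xs))
            y = 1 if s >= 0 else 0       # 3. compute output
            if y != t:
                errors += 1
                for i in range(n):       # 4. update weights
                    w[i] += eta * (t - y) * xs[i]
        if errors == 0:                  # repeat until no errors remain
            break
    return w

# The perceptron converges on linearly separable problems such as OR:
or_patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_perceptron(or_patterns)
```

Running the same loop on XOR never converges, which is the limitation noted on the previous slide.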
9. Computing other functions: the OR
function
Assume a binary threshold activation function.
What should you set w01, w02 and w0b to be so that you
can get the right answers for y0?

i1  i2 | y0
 0   0 |  0
 0   1 |  1
 1   0 |  1
 1   1 |  1

[Figure: the same single-unit perceptron as before, with inputs i1, i2,
bias b = 1, weights w01, w02, w0b, and output y0 = f(x0)]
10. Many answers would work
y = f (w01 i1 + w02 i2 + w0b b)
Recall the threshold function:
the separation happens where
w01 i1 + w02 i2 + w0b b = 0
Rearranging gives the decision boundary, a line in the (i1, i2) plane:
i2 = −(w01/w02) i1 − (w0b b/w02)

[Figure: the four OR input points in the (i1, i2) plane, with a
separating line between (0,0) and the other three points]
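One of the "many answers" can be checked directly. The choice w01 = w02 = 1, w0b = −0.5 is an illustrative example (not the only valid one); it puts the boundary line i2 = −i1 + 0.5 between (0,0) and the other three OR inputs.

```python
def y0(i1, i2, w01=1.0, w02=1.0, w0b=-0.5, b=1):
    """Threshold unit y0 = f(w01*i1 + w02*i2 + w0b*b) with binary f."""
    return 1 if w01 * i1 + w02 * i2 + w0b * b >= 0 else 0

# OR truth table from the previous slide:
truth = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
```

Scaling all three weights by any positive constant gives another valid answer, which is why infinitely many weight settings work.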
15. Type 3. Backpropagation Net
feedforward
1 input layer,
1 or more hidden layers,
1 output layer
supervised
backpropagation
sigmoid
Used for: complex logical
operations, pattern
classification, speech
analysis
16. The Back-propagation
Algorithm
On-line algorithm:
1. Initialize weights
2. Present a pattern and target output
3. Compute output:  o_i = f[ Σ_{j=0}^{n} w_ij o_j ]
4. Update weights:  w_ij(t+1) = w_ij(t) + Δw_ij
Repeat starting at 2 until an acceptable level
of error is reached.
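The on-line algorithm above can be sketched on XOR, the function the single-layer perceptron cannot learn. This is a minimal assumed implementation with one hidden layer, sigmoid units, and the standard delta-rule updates using the sigmoid derivative o(1 − o); network size, learning rate, and epoch count are illustrative choices.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=10000, eta=1.0, hidden=4, seed=1):
    random.seed(seed)
    # weights: inputs (2) + bias -> hidden layer; hidden + bias -> output
    w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w_o = [random.uniform(-1, 1) for _ in range(hidden + 1)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (i1, i2), t in data:            # 2. present pattern and target
            x = (i1, i2, 1.0)               # bias input
            h = [sigmoid(sum(w * xi for w, xi in zip(wr, x))) for wr in w_h]
            hb = h + [1.0]                  # hidden activations + bias
            o = sigmoid(sum(w * hi for w, hi in zip(w_o, hb)))  # 3. output
            # backward pass: deltas use the sigmoid derivative o(1 - o)
            d_o = (t - o) * o * (1 - o)
            d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            for j in range(hidden + 1):     # 4. update weights
                w_o[j] += eta * d_o * hb[j]
            for j in range(hidden):
                for k in range(3):
                    w_h[j][k] += eta * d_h[j] * x[k]
    return w_h, w_o

def predict(w_h, w_o, i1, i2):
    x = (i1, i2, 1.0)
    h = [sigmoid(sum(w * xi for w, xi in zip(wr, x))) for wr in w_h] + [1.0]
    return sigmoid(sum(w * hi for w, hi in zip(w_o, h)))

w_h, w_o = train_xor()
```

Because the updates are applied after every pattern, this is the on-line (stochastic) variant named on the slide, as opposed to batch accumulation of Δw_ij.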
31. Hodgkin-Huxley Model

C du/dt = − g_Na m³h (u − E_Na) − g_K n⁴ (u − E_K) − g_l (u − E_l) + I(t)

τ_m dm/dt = −[m − m₀(u)]
τ_n dn/dt = −[n − n₀(u)]
τ_h dh/dt = −[h − h₀(u)]
[Figures: equivalent-circuit diagram of the membrane (stimulus current I
with Na, K, and leak branches between inside and outside; ion channels
and an ion pump); steady-state gating curves m₀(u), h₀(u) and time
constants τ_m(u), τ_h(u); membrane response to a pulse input I(t)]
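The membrane and gating equations above can be integrated numerically. Below is a minimal forward-Euler sketch using the classic squid-axon parameters (voltage in mV relative to rest, time in ms, conductances in mS/cm²); the parameter values and rate functions are the standard textbook ones, not taken from the slides, and the gating dynamics are written in the equivalent α/β form, where τ_x = 1/(α_x + β_x) and x₀ = α_x/(α_x + β_x).

```python
import math

G_NA, G_K, G_L = 120.0, 36.0, 0.3      # channel conductances (mS/cm^2)
E_NA, E_K, E_L = 115.0, -12.0, 10.6    # reversal potentials (mV)
C = 1.0                                # membrane capacitance (uF/cm^2)

def alpha_beta(u):
    """Classic voltage-dependent rate constants for gates m, h, n."""
    am = 0.1 * (25 - u) / (math.exp((25 - u) / 10) - 1)
    bm = 4.0 * math.exp(-u / 18)
    ah = 0.07 * math.exp(-u / 20)
    bh = 1.0 / (math.exp((30 - u) / 10) + 1)
    an = 0.01 * (10 - u) / (math.exp((10 - u) / 10) - 1)
    bn = 0.125 * math.exp(-u / 80)
    return am, bm, ah, bh, an, bn

def simulate(i_ext=10.0, t_end=50.0, dt=0.01):
    """Forward-Euler integration; returns the membrane voltage trace."""
    u = 0.0
    am, bm, ah, bh, an, bn = alpha_beta(u)
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)  # rest values
    trace = []
    for _ in range(int(t_end / dt)):
        am, bm, ah, bh, an, bn = alpha_beta(u)
        # gating dynamics: dx/dt = alpha(u)(1 - x) - beta(u)x
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        # membrane equation: C du/dt = -I_Na - I_K - I_leak + I(t)
        i_na = G_NA * m**3 * h * (u - E_NA)
        i_k = G_K * n**4 * (u - E_K)
        i_l = G_L * (u - E_L)
        u += dt / C * (i_ext - i_na - i_k - i_l)
        trace.append(u)
    return trace
```

With a sufficiently strong constant stimulus the model fires repetitive action potentials with a clear overshoot; with no stimulus it stays at rest, the behavior described on the next slides.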
45. Generation of multiple Action
Potentials
Firing rate depends on the degree of depolarization
Firing frequency:
1 spike per second is 1 Hz
Maximum is about 1000 Hz
Absolute refractory period
Relative refractory period
I_ion = g_ion (V_m − E_ion)
47. Mapping from biological neuron

Nervous System      Computational Abstraction
Neuron              Node
Dendrites           Input link and propagation
Cell Body           Combination function, threshold,
                    activation function
Axon                Output link
Spike rate          Output
Synaptic strength   Connection strength / weight
51. Liquid State Machine (LSM)
• Maass’ LSM is a spiking recurrent neural
network which satisfies two properties
– Separation property (liquid)
– Approximation property (readout)
• LSM features
– The only attractor is the rest state
– Temporal integration
– Memoryless linear readout map
– Universal computational power: can
approximate any time-invariant filter
with fading memory
– It also does not require any a priori
decision regarding the "neural code"
by which information is represented
within the circuit.
52. Maass’ Definition of the Separation Property
Separation property: the current state x(t) of the microcircuit at time t
has to hold all information about preceding inputs.

Approximation property: the readout can approximate any continuous
function f that maps current liquid states x(t) to outputs v(t).
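The "memoryless linear readout map" of the last two slides can be sketched as plain least-squares regression: the readout sees only the current liquid state x(t), never past states. The data below is synthetic, standing in for states recorded from a liquid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for liquid states: 200 time steps of a
# 10-dimensional state x(t), one row per time step.
X = rng.standard_normal((200, 10))

# A target output v(t) that is, by construction, a linear function
# of the current state (w_true is the readout we hope to recover).
w_true = rng.standard_normal(10)
v = X @ w_true

# Fit the memoryless linear readout w so that v(t) ≈ x(t) . w.
w, *_ = np.linalg.lstsq(X, v, rcond=None)
```

All temporal integration is left to the liquid itself; the readout's simplicity is what makes the approximation property practical to train.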
53. 2 motors, 1 minute footage of each case, 3400 frames
Readouts could utilize wave interference patterns
55. Survey of papers
Spiking neural networks, an introduction.pdf
Introduction to and summary of neural networks and the structure of
third-generation models.
is the integrate-and-fire model good enough – a review.pdf
Comparison between the I&F model and the HH model, including an
extension of the I&F model into a model that combines both.
LSM (Liquid State Machine)
Liquid State Machines, a review.pdf
Liquid State Machine Built of Hodgkin–Huxley Neurons.pdf
The Echo State approach to analysing and training recurrent
neural networks.pdf
LSM and the Turing Machine
On the Computational Power of Circuits of Spiking neurons.pdf
The Echo State approach to analysing and training recurrent
neural networks.pdf