Feedback Particle Filter and its
Applications to Neuroscience
3rd IFAC Workshop on
Distributed Estimation and Control in Networked Systems
Santa Barbara, Sep 14-15, 2012
Prashant G. Mehta
Department of Mechanical Science and Engineering
and the Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Research supported by NSF and AFOSR
Background
Bayesian Inference/Filtering
Mathematics of prediction: Bayes' rule

Signal (hidden): X ∼ P(X) (prior, known)
Observation: Y (known)
Observation model: P(Y|X) (known)
Problem: What is X?

Solution (Bayes' rule): P(X|Y) ∝ P(Y|X) P(X), i.e. posterior ∝ likelihood × prior.

This talk is about implementing Bayes' rule in dynamic, nonlinear, non-Gaussian settings!
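To make the update concrete, here is a minimal numerical sketch of a single application of Bayes' rule, assuming an illustrative three-point prior on X and a Gaussian observation model (both are my choices; the slide specifies neither):

```python
import numpy as np

# Discrete Bayes' rule: posterior ∝ likelihood × prior.
# Hypothetical setup: X takes values {0, 1, 2} with a known prior,
# and Y|X is Gaussian with mean X and unit variance.
prior = np.array([0.5, 0.3, 0.2])            # P(X)
x_values = np.array([0.0, 1.0, 2.0])

def likelihood(y, x):
    # P(Y = y | X = x) for the assumed Gaussian observation model
    return np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi)

y_obs = 1.4                                  # observed Y
unnormalized = likelihood(y_obs, x_values) * prior
posterior = unnormalized / unnormalized.sum()   # P(X | Y = y_obs)
print(posterior)
```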
Background
Applications
Engineering applications

Filtering is important to:
Air moving target indicator (AMTI) systems, space situational awareness
Remote sensing and surveillance: air traffic management, weather surveillance, geophysical surveys
Autonomous navigation & robotics: simultaneous localization and map building (SLAM)
Background
Applications in Biology
Bayesian model of sensory signal processing
Part I
Theory: Nonlinear Filtering
Nonlinear Filtering
Mathematical Problem

Signal model: dX_t = a(X_t) dt + dB_t,  X_0 ∼ p∗_0(·)
Observation model: dZ_t = h(X_t) dt + dW_t
Problem: What is X_t, given the observations up to time t (denoted Z_t)?
Answer in terms of the posterior: P(X_t | Z_t) =: p∗(x,t).

The posterior is an information state:
P(X_t ∈ A | Z_t) = ∫_A p∗(x,t) dx
E(X_t | Z_t) = ∫_ℝ x p∗(x,t) dx
Nonlinear Filtering
Pretty Formulae in Mathematics
More often than not, these are simply stated

Euler's identity: e^{iπ} = −1
Euler's formula (for polyhedra): v − e + f = 2
Pythagorean theorem: x² + y² = z²

Kenneth Chang, "What Makes an Equation Beautiful", The New York Times, October 24, 2004
Nonlinear Filtering
Kalman filter
Solution in linear Gaussian settings

dX_t = αX_t dt + dB_t   (1)
dZ_t = γX_t dt + dW_t   (2)

Kalman filter: p∗ = N(X̂_t, Σ_t)
dX̂_t = αX̂_t dt + K (dZ_t − γX̂_t dt)   [update]

[Block diagram: Kalman filter as a feedback loop]
Observation:  dZ_t = γX_t dt + dW_t
Prediction:   dẐ_t = γX̂_t dt
Innov. error: dI_t = dZ_t − dẐ_t = dZ_t − γX̂_t dt
Control:      dU_t = K dI_t
Gain:         K is the Kalman gain

R. E. Kalman, Trans. ASME, Ser. D: J. Basic Eng., 1961
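For reference, a minimal simulation sketch of this scalar Kalman-Bucy filter, discretized by Euler-Maruyama; the constants α, γ, σ_B, σ_W and the time step are illustrative assumptions (the slide leaves the noise intensities implicit):

```python
import numpy as np

# Scalar Kalman-Bucy filter for (1)-(2), Euler-Maruyama discretization.
# alpha, gamma, the noise levels and the time grid are illustrative choices.
rng = np.random.default_rng(0)
alpha, gamma, sigma_B, sigma_W = -0.5, 1.0, 0.3, 0.2
dt, steps = 1e-3, 5000

X = 1.0                  # hidden signal
x_hat, Sigma = 0.0, 1.0  # filter mean and covariance

for _ in range(steps):
    dZ = gamma * X * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    X += alpha * X * dt + sigma_B * np.sqrt(dt) * rng.standard_normal()
    K = gamma * Sigma / sigma_W**2            # Kalman gain
    x_hat += alpha * x_hat * dt + K * (dZ - gamma * x_hat * dt)
    Sigma += (2 * alpha * Sigma + sigma_B**2 - (gamma * Sigma / sigma_W)**2) * dt

print(f"state {X:.3f}, estimate {x_hat:.3f}, variance {Sigma:.4f}")
```

The gain K = γΣ_t/σ²_W and the Riccati equation used here match the gain and update formulas quoted on the next slide.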
Nonlinear Filtering
Kalman filter

dX̂_t = αX̂_t dt [prediction] + K (dZ_t − γX̂_t dt) [update]

[Block diagram: Kalman filter as a feedback loop]

This illustrates the key features of feedback control:
1 Use the error to obtain the control (dU_t = K dI_t)
2 Negative-gain feedback serves to reduce the error (K = (γ/σ²_W) Σ_t, where γ/σ²_W plays the role of an SNR)

Simple enough to be included in the first undergraduate course on control
Nonlinear Filtering
Filtering Problem
Nonlinear Model: Kushner-Stratonovich PDE

Signal & observations: dX_t = a(X_t) dt + σ_B dB_t   (1)
                       dZ_t = h(X_t) dt + σ_W dW_t   (2)

The posterior distribution p∗ is a solution of a stochastic PDE:
dp∗ = L†(p∗) dt + (1/σ²_W) (h − ĥ)(dZ_t − ĥ dt) p∗
where ĥ = E[h(X_t)|Z_t] = ∫ h(x) p∗(x,t) dx and
L†(p∗) = −∂(p∗ · a(x))/∂x + (σ²_B/2) ∂²p∗/∂x²

No closed-form solution in general. Closure problem.

R. L. Stratonovich, SIAM Theory Probab. Appl., 1960; H. J. Kushner, SIAM J. Control, 1964
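Although no closed-form solution exists in general, the PDE can be integrated numerically in one dimension. A minimal explicit finite-difference sketch, with renormalization at each step; the drift a(x) = x − x³, the observation function h(x) = x and the noise levels are all illustrative assumptions:

```python
import numpy as np

# Explicit finite-difference integration of the 1-D Kushner-Stratonovich PDE,
# with renormalization at every step. Drift, h and noise levels are
# illustrative assumptions.
rng = np.random.default_rng(4)
x = np.linspace(-4, 4, 401)
dx = x[1] - x[0]
dt, steps = 1e-4, 20000
a = lambda x: x - x**3          # assumed drift
h = lambda x: x                 # assumed observation function
sigma_B, sigma_W = 0.5, 0.5

p = np.exp(-x**2 / 2)           # prior p*_0: standard normal (truncated)
p /= p.sum() * dx
X = 1.0                         # hidden signal

for _ in range(steps):
    X += a(X) * dt + sigma_B * np.sqrt(dt) * rng.standard_normal()
    dZ = h(X) * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    # L†(p) = −∂(p a)/∂x + (σ_B²/2) ∂²p/∂x²  (central differences)
    Ldag = -np.gradient(p * a(x), dx) \
           + 0.5 * sigma_B**2 * np.gradient(np.gradient(p, dx), dx)
    h_hat = (h(x) * p).sum() * dx
    # Gain (update) term of the stochastic PDE
    p = p + Ldag * dt + (h(x) - h_hat) * (dZ - h_hat * dt) * p / sigma_W**2
    p = np.clip(p, 0.0, None)
    p /= p.sum() * dx            # renormalize to a probability density

print(f"state {X:.3f}, posterior mean {(x * p).sum() * dx:.3f}")
```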
Nonlinear Filtering
Particle Filter
An algorithm to solve the nonlinear filtering problem

Approximate the posterior in terms of particles: p∗(x,t) ≈ (1/N) ∑_{i=1}^N δ_{X^i_t}(x)

Algorithm outline
1 Initialization at time 0: X^i_0 ∼ p∗_0(·)
2 At each discrete time step:
  Importance sampling (Bayes update step)
  Resampling (for variance reduction; the weights degenerate when observations are very informative, e.g. dZ_t = X_t dt + small noise)

Innovation error, feedback? And most importantly, is this pretty?
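For comparison with what follows, a minimal sketch of the bootstrap particle filter (importance sampling plus multinomial resampling) on an Euler discretization of this model; the drift, observation function and noise levels are illustrative assumptions:

```python
import numpy as np

# Bootstrap particle filter for dX = a(X)dt + dB, dZ = h(X)dt + sigma_W dW,
# run on an Euler discretization with illustrative a, h and noise levels.
rng = np.random.default_rng(1)
N, dt, steps = 500, 1e-2, 500
a = lambda x: x - x**3        # illustrative drift
h = lambda x: x               # illustrative observation function
sigma_W = 0.3

X = 0.5                               # hidden signal
particles = rng.standard_normal(N)    # X^i_0 ~ p*_0 (standard normal here)

for _ in range(steps):
    X += a(X) * dt + np.sqrt(dt) * rng.standard_normal()
    dZ = h(X) * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    # Propagate particles through the signal model (prediction)
    particles += a(particles) * dt + np.sqrt(dt) * rng.standard_normal(N)
    # Importance sampling: weight by the likelihood of the observation increment
    log_w = -(dZ - h(particles) * dt) ** 2 / (2 * sigma_W**2 * dt)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Resampling (multinomial) for variance reduction
    particles = particles[rng.choice(N, size=N, p=w)]

print(f"state {X:.3f}, posterior mean {particles.mean():.3f}")
```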
Control-Oriented Approach to Particle Filtering
Research goal: Bringing pretty back!

[Plot: mean-square error (MSE) vs. N (number of particles), log-log scale, comparing the bootstrap particle filter (BPF) and the feedback particle filter (FPF)]
Control-Oriented Approach to Particle Filtering
Feedback Particle Filter

Signal & observations: dX_t = a(X_t) dt + σ_B dB_t   (1)
                       dZ_t = h(X_t) dt + σ_W dW_t   (2)

Controlled system (N particles):
dX^i_t = a(X^i_t) dt + σ_B dB^i_t + dU^i_t [mean-field control],   i = 1, …, N   (3)
{B^i_t}_{i=1}^N are mutually independent standard Wiener processes.

Objective: Choose the control U^i_t, as a function of the history {Z_s, X^i_s : 0 ≤ s ≤ t}, such that the two posteriors coincide:
∫_{x∈A} p∗(x,t) dx = P{X_t ∈ A | Z_t}
∫_{x∈A} p(x,t) dx = P{X^i_t ∈ A | Z_t}

Motivation: work of Huang, Caines and Malhamé on mean-field games (IEEE TAC, 2007)
Control-Oriented Approach to Particle Filtering
FPF Solution
Linear model

Controlled system: for i = 1, …, N:
dX^i_t = αX^i_t dt + σ_B dB^i_t [prediction] + K (dZ_t − γ(X^i_t + µ_t)/2 dt) [update, via mean-field control]   (3)

[Block diagram: feedback particle filter as a feedback loop]
Control-Oriented Approach to Particle Filtering
FPF Update Steps
Linear model

                Feedback particle filter                 Kalman filter
Observation:    dZ_t = γX_t dt + σ_W dW_t                dZ_t = γX_t dt + σ_W dW_t
Prediction:     dẐ^i_t = γ(X^i_t + µ_t)/2 dt             dẐ_t = γX̂_t dt
Innov. error:   dI^i_t = dZ_t − dẐ^i_t                   dI_t = dZ_t − dẐ_t
                      = dZ_t − γ(X^i_t + µ_t)/2 dt             = dZ_t − γX̂_t dt
Control:        dU^i_t = K dI^i_t                        dU_t = K dI_t
Gain:           K is the Kalman gain in both filters
Control-Oriented Approach to Particle Filtering
Linear Feedback Particle Filter
Mean-field model is the Kalman filter

Feedback particle filter:
dX^i_t = αX^i_t dt + σ_B dB^i_t + K ( dZ_t − (γ/2) ( X^i_t + (1/N) ∑_{j=1}^N X^j_t ) dt )   (3)
X^i_0 ∼ p∗(x,0) = N(µ(0), Σ(0))

Mean-field model: the Kalman filter! Let p denote the conditional distribution of X^i_t given Z_t. Then p = N(µ_t, Σ_t), where
dµ_t = αµ_t dt + (γΣ_t/σ²_W)(dZ_t − γµ_t dt)
dΣ_t = ( 2αΣ_t + σ²_B − γ²Σ²_t/σ²_W ) dt

As N → ∞, the empirical distribution of the particles approximates the posterior p∗.
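A minimal sketch of this linear FPF, with illustrative parameters; here Σ_t is propagated by the Riccati equation quoted above (one could instead use the empirical variance of the particles):

```python
import numpy as np

# Linear feedback particle filter (3), with the gain K = gamma*Sigma/sigma_W^2
# computed from the Riccati equation; all parameters are illustrative.
rng = np.random.default_rng(2)
alpha, gamma, sigma_B, sigma_W = -0.5, 1.0, 0.3, 0.2
N, dt, steps = 200, 1e-3, 5000

X = 1.0
particles = rng.standard_normal(N)   # X^i_0 ~ N(0, 1)
Sigma = 1.0

for _ in range(steps):
    dZ = gamma * X * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    X += alpha * X * dt + sigma_B * np.sqrt(dt) * rng.standard_normal()

    K = gamma * Sigma / sigma_W**2
    mu = particles.mean()
    # Every particle is driven by the common observation dZ (mean-field coupling)
    innovations = dZ - gamma * (particles + mu) / 2 * dt
    particles += (alpha * particles * dt
                  + sigma_B * np.sqrt(dt) * rng.standard_normal(N)
                  + K * innovations)
    Sigma += (2 * alpha * Sigma + sigma_B**2 - (gamma * Sigma / sigma_W)**2) * dt

print(f"state {X:.3f}, particle mean {particles.mean():.3f}")
```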
Control-Oriented Approach to Particle Filtering
Variance Reduction
Filtering for a simple linear model.

Mean-square error: (1/T) ∫_0^T | (Σ^(N)_t − Σ_t)/Σ_t |² dt

[Plot: MSE vs. N (number of particles), log-log scale, comparing the bootstrap particle filter (BPF) and the feedback particle filter (FPF)]
Feedback Particle Filter
Methodology: Variational Formulation
How do we derive the feedback particle filter?

Time-stepping procedure:
Signal, observation process:            Feedback particle filter:
dX_t = a(X_t) dt + σ_B dB_t             Filter:  dX^i_t = a(X^i_t) dt + σ_B dB^i_t
Z_{t_n} = h(X_{t_n}) + W_{t_n}          Control: X^i_{t_n} = X^i_{t_n^-} + u(X^i_{t_n^-})

Conditional distributions:
p∗_n(·): cond. pdf of X_t given Z_t     p_n(·; u): cond. pdf of X^i_t given Z_t

Variational problem: min_u D( p_n(u) ‖ p∗_n )

As ∆t → 0:
The optimal control, u = u◦, yields the feedback particle filter;
the nonlinear filter is the gradient flow and u◦ is the optimal transport map.
Feedback Particle Filter
Filtering in nonlinear non-Gaussian settings

Signal model: dX_t = a(X_t) dt + dB_t,  X_0 ∼ p∗_0(·)
Observation model: dZ_t = h(X_t) dt + dW_t

FPF: dX^i_t = a(X^i_t) dt + dB^i_t + K(X^i_t) ◦ dI^i_t   [update; ◦ denotes the Stratonovich integral]

Innovations: dI^i_t := dZ_t − (1/2)(h(X^i_t) + ĥ) dt, with conditional mean ĥ = ⟨p, h⟩.
Feedback Particle Filter
Update Step
How does the feedback particle filter implement Bayes' rule?

                Feedback particle filter                  Linear case
Observation:    dZ_t = h(X_t) dt + dW_t                   dZ_t = γX_t dt + dW_t
Prediction:     dẐ^i_t = (h(X^i_t) + ĥ)/2 dt              dẐ^i_t = γ(X^i_t + µ_t)/2 dt
                with ĥ = (1/N) ∑_{i=1}^N h(X^i_t)
Innov. error:   dI^i_t = dZ_t − dẐ^i_t                    dI^i_t = dZ_t − dẐ^i_t
                      = dZ_t − (h(X^i_t) + ĥ)/2 dt              = dZ_t − γ(X^i_t + µ_t)/2 dt
Control:        dU^i_t = K(X^i_t) ◦ dI^i_t                dU^i_t = K(X^i_t) ◦ dI^i_t
Gain:           K solves a linear BVP                     K is the Kalman gain
Feedback Particle Filter
Boundary Value Problem
Euler-Lagrange equation for the variational problem

Multi-dimensional boundary value problem:
∇·(K p) = −(h − ĥ) p
solved at each time step.

[Figures: the gain function K in the linear case (constant) and in the nonlinear case (state-dependent)]
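In one dimension the BVP can be integrated directly: K(x) = −(1/p(x)) ∫_{−∞}^x (h(y) − ĥ) p(y) dy. A minimal numerical sketch, assuming an illustrative bimodal density and h(x) = x; for a Gaussian density the same formula returns a constant gain, recovering the linear case:

```python
import numpy as np

# 1-D solution of the gain BVP  d/dx (K p) = -(h - h_bar) p:
#   K(x) = -(1/p(x)) * integral_{-inf}^{x} (h(y) - h_bar) p(y) dy.
# The bimodal density and h(x) = x are illustrative assumptions.
x = np.linspace(-4, 4, 801)
dx = x[1] - x[0]
p = 0.5 * np.exp(-(x - 1)**2 / 0.5) + 0.5 * np.exp(-(x + 1)**2 / 0.5)
p /= p.sum() * dx                    # normalize the density
h = x                                # h(x) = x
h_bar = (h * p).sum() * dx           # conditional mean of h
K = -np.cumsum((h - h_bar) * p) * dx / p
print(K[len(x) // 2])                # gain at x = 0: large between the modes
```

For this bimodal p the gain is state-dependent and peaks between the modes, which is the qualitative shape sketched in the nonlinear-case figure.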
Feedback Particle Filter
Consistency
Feedback particle filter is exact

p∗: conditional pdf of X_t given Z_t:
dp∗ = L†(p∗) dt + (σ²_W)⁻¹ (h − ĥ)(dZ_t − ĥ dt) p∗

p: conditional pdf of X^i_t given Z_t:
dp = L†(p) dt − ∂(K p)/∂x dZ_t − ∂(u p)/∂x dt + (σ²_W/2) ∂²(p K²)/∂x² dt

Consistency Theorem
Consider the two evolution equations for p and p∗. Provided the FPF is initialized with p(x,0) = p∗(x,0), then
p(x,t) = p∗(x,t) for all t ≥ 0.
Feedback Particle Filter

Kalman Filter
[Block diagram: Kalman filter as a feedback loop]
Innovation error: dI_t = dZ_t − h(X̂_t) dt
Gain function: K = Kalman gain

Feedback Particle Filter
[Block diagram: feedback particle filter as a feedback loop]
Innovation error: dI^i_t = dZ_t − (1/2)(h(X^i_t) + ĥ_t) dt
Gain function: K is the solution of a linear BVP.
Part II
Neural Rhythms, Bayesian Inference
Oscillators in Biology
Normal Form Reduction
Derivation of the oscillator model

C dV/dt = −g_T · m²_∞(V) · h · (V − E_T) − g_h · r · (V − E_h) − …
dh/dt = (h_∞(V) − h)/τ_h(V)
dr/dt = (r_∞(V) − r)/τ_r(V)

Normal form reduction yields the phase model:
dθ_i(t) = ω_i dt + u_i(t) · Φ(θ_i(t)) dt

J. Guckenheimer, J. Math. Biol., 1975; J. Moehlis et al., Neural Computation, 2004
Oscillators in Biology
Collective Dynamics of a Large Number of Oscillators
Synchrony, Neural rhythms
Oscillators in Biology
Functional Role of Neural Rhythms
Is synchronization useful? Does it have a functional role?

Books/review papers:
Buzsaki, Destexhe, Ermentrout, Izhikevich, Kopell, Traub and Whittington (2009), Llinas and Ribary (2001), Pareti and Palma (2004), Sejnowski and Paulsen (2006), Singer (1993), ...

Computations: computing with intrinsic network states
Destexhe and Contreras (2006); Izhikevich (2006); Zhang and Ballard (2001).

Synaptic plasticity: neurons that fire together wire together

And several other hypotheses:
Communication and information flow (Laughlin and Sejnowski); binding by synchrony (Singer); memory formation (Jutras and Fries); probabilistic decision making (Wang); stimulus competition and attention selection (Kopell); sleep/wakefulness/disease (Steriade)
Oscillators in Biology
Prediction
Brain as a reality emulator
“[Prediction] is the primary function of the neocortex,
and the foundation of intelligence. If we want to
understand how your brain works, and how to build
intelligent machines, we must understand the nature of
these predictions and how the cortex makes them.”
“The capacity to predict the outcome of future events –
critical to successful movement – is, most likely, the
ultimate and most common of all brain functions.”
Oscillators in Biology
Filtering in the Brain?
Bayesian model of sensory signal processing

Theory:
Lee and Mumford, hierarchical Bayesian inference framework (2003)
Rao; Rao and Ballard; Rao and Sejnowski, predictive coding framework (2002)
Dayan, Hinton, Neal and Zemel, the Helmholtz machine (1995)
Ma, Beck, Latham and Pouget, probabilistic population codes (2006)
Kording and Wolpert, Bayesian decision theory (2006)

And others: see
Doya, Ishii, Pouget and Rao, Bayesian Brain, MIT Press (2007)
Rao, Olshausen and Lewicki, Probabilistic Models of the Brain, MIT Press (2002)
Oscillators in Biology
Filtering in the Brain?
Bayesian model of sensory signal processing

Experiments (see reviews):
Gold and Shadlen, The neural basis of decision making, Ann. Rev. of Neurosci. (2007)
R. T. Knight, Neural networks debunk phrenology, Science (2007)

Such theories naturally feed into computer vision and, more generally, into how to make computers "intelligent".
Oscillators in Biology
Bayesian Inference in Neuroscience
Lee and Mumford's hierarchical Bayesian inference framework

[Diagram: a cascade of stages, each applying Bayes' rule; in the particle-filter reading, each stage is realized as a particle filter]

Similar ideas also appear in:
1 Dayan, Hinton, Neal and Zemel, the Helmholtz machine (1995)
2 Lewicki and Sejnowski, Bayesian unsupervised learning (1995)
3 Rao and Ballard; Rao and Sejnowski, predictive coding framework (1999; 2002)
Part III
Application: Filtering with Rhythms
Gait Cycle
Biological Rhythm
Application: Ankle-Foot Orthoses
Estimation of gait cycle using sensor measurements

Ankle-foot orthoses (AFOs): for lower-limb neuromuscular impairments. Provides dorsiflexor (toe lift) and plantarflexor (toe push) torque assistance.

[Photo: the AFO device. Sensors at the heel, toe and ankle joint; compressed CO2 actuator; solenoid valves control the flow of CO2 to the actuator. System components: power supply, valves, actuator, sensors.]

Acknowledgement: Professor Liz Hsiao-Wecksler for sharing the AFO device picture and sensor data.
Gait Cycle
Signal model

[Figure: stance phase and swing phase of the gait cycle]

Model (noisy oscillator): dθ_t = ω_0 dt [natural frequency] + noise
Problem: Estimate the Gait Cycle θ_t
Sensor model

Observation model: dZ_t = h(θ_t) dt + noise
Problem: What is θ_t, given the noisy observations?
Solution: Particle Filter
Algorithm to approximate the posterior distribution
"Large number of oscillators"

Posterior distribution:
P(φ_1 < θ_t < φ_2 | sensor readings) = fraction of the θ^i_t in the interval (φ_1, φ_2)

Circuit:
dθ^i_t = ω_i dt [natural freq. of the i-th oscillator] + noise^i + dU^i_t [mean-field control],   i = 1, …, N

Feedback Particle Filter: design the control law U^i_t
Filtering for Oscillators

Signal & observations: dθ_t = ω dt + dB_t  mod 2π
                       dZ_t = h(θ_t) dt + dW_t

[Figure: particle distribution on the circle, θ ∈ (−π, π]]

Particle evolution:
dθ^i_t = ω_i dt + dB^i_t + K(θ^i_t) ◦ [dZ_t − (1/2)(h(θ^i_t) + ĥ) dt]  mod 2π,   i = 1, …, N,
where ω_i is sampled from a distribution.

[Block diagram: feedback particle filter as a feedback loop]
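A minimal sketch of this oscillator FPF, with the gain approximated by a Galerkin projection onto the Fourier basis {cos θ, sin θ} (a standard approximation in the FPF literature). The sensor model h(θ) = cos θ, the noise levels and the frequency distribution are illustrative assumptions, and the Galerkin gain is scaled by 1/σ²_W to match the Kushner-Stratonovich update term (the BVP slide suppresses this factor):

```python
import numpy as np

# Feedback particle filter on the circle. The gain is approximated by a
# Galerkin projection onto the Fourier basis {cos θ, sin θ}; h, noise
# levels and the frequency distribution are illustrative assumptions.
rng = np.random.default_rng(3)
N, dt, steps = 500, 1e-2, 2000
omega0, sigma_B, sigma_W = 1.0, 0.1, 0.5
h = lambda th: np.cos(th)                  # assumed sensor model

theta = 1.0                                # true phase
particles = rng.uniform(0, 2 * np.pi, N)   # θ^i_0 ~ uniform prior on the circle
omegas = omega0 + 0.05 * rng.standard_normal(N)  # sampled natural frequencies

def gain(th, dh):
    # Solve sum_l kappa_l E[ψ'_l ψ'_m] = E[(h − ĥ) ψ_m];  K(θ) = sum_l kappa_l ψ'_l(θ)
    psi = np.stack([np.cos(th), np.sin(th)])
    dpsi = np.stack([-np.sin(th), np.cos(th)])
    A = dpsi @ dpsi.T / len(th) + 1e-6 * np.eye(2)   # small regularization
    kappa = np.linalg.solve(A, psi @ dh / len(th))
    return kappa @ dpsi

for _ in range(steps):
    dZ = h(theta) * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    theta = (theta + omega0 * dt
             + sigma_B * np.sqrt(dt) * rng.standard_normal()) % (2 * np.pi)
    h_hat = h(particles).mean()
    K = gain(particles, h(particles) - h_hat) / sigma_W**2   # 1/σ²_W scaling
    dI = dZ - 0.5 * (h(particles) + h_hat) * dt              # innovations
    particles = (particles + omegas * dt
                 + sigma_B * np.sqrt(dt) * rng.standard_normal(N)
                 + K * dI) % (2 * np.pi)

# Circular mean of the particles as the phase estimate
est = np.angle(np.exp(1j * particles).mean()) % (2 * np.pi)
print(f"true phase {theta:.2f}, estimated phase {est:.2f}")
```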
Simulation Results
Solution of the gait-cycle estimation problem

[Movie: estimation of the gait cycle using the feedback particle filter]
Filtering of Biological Rhythms with Brain Rhythms
Connection to Lee and Mumford's hierarchical Bayesian inference framework

[Diagram: Lee and Mumford's cascade of Bayes'-rule stages, each realized as a particle filter ("Mumford's box with neurons"). Normal form reduction turns each box into a "Mumford's box with oscillators": noisy measurements of the rhythmic movement enter together with a prior, and the filter produces the estimate.]
Acknowledgement
Adam Tilton, Tao Yang, Huibing Yin, Liz Hsiao-Wecksler, Sean Meyn

1 T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter with mean-field coupling. In Procs. of IEEE Conf. on Decision and Control, December 2011.
2 T. Yang, P. G. Mehta, and S. P. Meyn. A mean-field control-oriented approach to particle filtering. In Procs. of American Control Conference, June 2011.
3 A. Tilton, E. Hsiao-Wecksler, and P. G. Mehta. Filtering with rhythms: Application to estimation of gait cycle. In Procs. of American Control Conference, 2012.
4 T. Yang, G. Huang, and P. G. Mehta. Joint probabilistic data association-feedback particle filter with applications to multiple target tracking. In Procs. of American Control Conference, 2012.
5 A. Tilton, T. Yang, H. Yin, and P. G. Mehta. Feedback particle filter-based multiple target tracking using bearing-only measurements. In Procs. of Information Fusion, 2012.
6 T. Yang, R. Laugesen, P. G. Mehta, and S. P. Meyn. Multivariable feedback particle filter. To appear in IEEE Conf. on Decision and Control, 2012.
7 T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter. Conditionally accepted to IEEE Transactions on Automatic Control.

Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 

Último (20)

Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptx
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information Developers
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 

Feedback Particle Filter and its Applications to Neuroscience

  • 1. Feedback Particle Filter and its Applications to Neuroscience 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems Santa Barbara, Sep 14-15, 2012 Prashant G. Mehta Department of Mechanical Science and Engineering and the Coordinated Science Laboratory University of Illinois at Urbana-Champaign Research supported by NSF and AFOSR
  • 2-7. Background | Bayesian Inference/Filtering. Mathematics of prediction: Bayes' rule. Signal (hidden): X ∼ P(X) (prior, known). Observation: Y (known). Observation model: P(Y|X) (known). Problem: what is X? Solution, Bayes' rule: P(X|Y) [posterior] ∝ P(Y|X) P(X) [prior]. This talk is about implementing Bayes' rule in dynamic, nonlinear, non-Gaussian settings!
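As the slide emphasizes, the static update itself is one line of arithmetic. A minimal sketch for a binary hidden signal; the prior and likelihood numbers are illustrative placeholders, not anything from the talk:

```python
import numpy as np

# Hypothetical prior P(X) for X in {0, 1}
prior = np.array([0.7, 0.3])
# Hypothetical observation model: P(Y = y | X = x) for one observed y
likelihood = np.array([0.2, 0.9])

# Bayes' rule: posterior is proportional to likelihood * prior, then normalize
posterior = likelihood * prior
posterior /= posterior.sum()
print(posterior)   # [0.3414..., 0.6585...]
```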
  • 8-11. Background | Applications. Engineering applications: filtering is important to air moving target indicator (AMTI) systems and space situational awareness; remote sensing and surveillance (air traffic management, weather surveillance, geophysical surveys); and autonomous navigation & robotics (simultaneous localization and map building, SLAM).
  • 12-13. Background | Applications in Biology. Bayesian model of sensory signal processing.
  • 15-20. Nonlinear Filtering | Mathematical problem. Signal model: dX_t = a(X_t) dt + dB_t, X_0 ∼ p*_0(·). Observation model: dZ_t = h(X_t) dt + dW_t. Problem: what is X_t, given the observations up to time t, denoted Z_t? Answer in terms of the posterior: P(X_t | Z_t) =: p*(x,t). The posterior is an information state: P(X_t ∈ A | Z_t) = ∫_A p*(x,t) dx and E(X_t | Z_t) = ∫_R x p*(x,t) dx.
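Before any filtering, the model itself can be simulated. A minimal Euler-Maruyama sketch of the signal and observation SDEs above; the drift a(x) = −x and the sensor h(x) = x are illustrative choices, not the talk's:

```python
import numpy as np

def simulate(a, h, x0=1.0, T=10.0, dt=0.01, seed=0):
    """Euler-Maruyama discretization of
       dX_t = a(X_t) dt + dB_t  and  dZ_t = h(X_t) dt + dW_t."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    X = np.empty(n + 1); Z = np.empty(n + 1)
    X[0], Z[0] = x0, 0.0
    for k in range(n):
        X[k + 1] = X[k] + a(X[k]) * dt + np.sqrt(dt) * rng.standard_normal()
        Z[k + 1] = Z[k] + h(X[k]) * dt + np.sqrt(dt) * rng.standard_normal()
    return X, Z

# Illustrative choices (not from the talk): stable linear drift, linear sensor
X, Z = simulate(a=lambda x: -x, h=lambda x: x)
```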
  • 21. Nonlinear Filtering | Pretty formulae in mathematics. More often than not, these are simply stated. Euler's identity: e^{iπ} = −1. Euler's formula: v − e + f = 2. Pythagorean theorem: x² + y² = z². [Kenneth Chang, "What Makes an Equation Beautiful," The New York Times, October 24, 2004]
  • 22-29. Nonlinear Filtering | Kalman filter: solution in linear Gaussian settings. dX_t = α X_t dt + dB_t (1), dZ_t = γ X_t dt + dW_t (2). Kalman filter: p* = N(X̂_t, Σ_t), with dX̂_t = α X̂_t dt + K (dZ_t − γ X̂_t dt) [update]. Feedback structure of the filter. Observation: dZ_t = γ X_t dt + dW_t. Prediction: dẐ_t = γ X̂_t dt. Innovation error: dI_t = dZ_t − dẐ_t = dZ_t − γ X̂_t dt. Control: dU_t = K dI_t. Gain: the Kalman gain. [R. E. Kalman, Trans. ASME, Ser. D: J. Basic Eng., 1961]
  • 30-32. Nonlinear Filtering | Kalman filter: dX̂_t = α X̂_t dt [prediction] + K (dZ_t − γ X̂_t dt) [update]. This illustrates the key features of feedback control: (1) use the error to obtain the control (dU_t = K dI_t); (2) negative-gain feedback serves to reduce the error (K = (γ/σ_W²) Σ_t, where γ/σ_W² plays the role of an SNR). Simple enough to be included in a first undergraduate course on control.
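A discretized sketch of the filter above, with the gain K_t = γΣ_t/σ_W² and the Riccati equation for Σ_t written out explicitly; the parameter values, time grid, and initial condition are up to the caller, and the noise intensities are taken as known constants:

```python
import numpy as np

def kalman_bucy(Z, dt, alpha, gamma, sigma_B, sigma_W, xhat0=0.0, Sigma0=1.0):
    """Discretized Kalman-Bucy filter for
       dX = alpha*X dt + sigma_B dB,  dZ = gamma*X dt + sigma_W dW."""
    xhat, Sigma = xhat0, Sigma0
    est = [xhat]
    for k in range(len(Z) - 1):
        dZ = Z[k + 1] - Z[k]
        K = gamma * Sigma / sigma_W**2          # Kalman gain
        dI = dZ - gamma * xhat * dt             # innovation error
        xhat += alpha * xhat * dt + K * dI      # prediction + update
        Sigma += (2 * alpha * Sigma + sigma_B**2
                  - (gamma * Sigma / sigma_W)**2) * dt   # Riccati equation
        est.append(xhat)
    return np.array(est)
```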
  • 33-35. Nonlinear Filtering | Filtering problem, nonlinear model: the Kushner-Stratonovich PDE. Signal & observations: dX_t = a(X_t) dt + σ_B dB_t (1), dZ_t = h(X_t) dt + σ_W dW_t (2). The posterior distribution p* is the solution of a stochastic PDE: dp* = L†(p*) dt + (1/σ_W²)(h − ĥ)(dZ_t − ĥ dt) p*, where ĥ = E[h(X_t) | Z_t] = ∫ h(x) p*(x,t) dx and L†(p*) = −∂(p* a(x))/∂x + (σ_B²/2) ∂²p*/∂x². No closed-form solution in general; closure problem. [R. L. Stratonovich, SIAM Theory Probab. Appl., 1960; H. J. Kushner, SIAM J. Control, 1964]
  • 36-41. Nonlinear Filtering | Particle filter: an algorithm to solve the nonlinear filtering problem. Approximate the posterior in terms of particles: p*(x,t) ≈ (1/N) Σ_{i=1}^N δ_{X_t^i}(x). Algorithm outline: (1) initialization at time 0, X_0^i ∼ p*_0(·); (2) at each discrete time step, importance sampling (Bayes update step) and resampling (for variance reduction); a degenerate case: dZ_t = X_t dt + small noise. Innovation error, feedback? And, most importantly, is this pretty?
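A sketch of the algorithm just outlined, in a discrete-time form with observations Y_n = h(X_n) + Gaussian noise; the initial sampling distribution and the noise levels are assumptions for illustration:

```python
import numpy as np

def bootstrap_pf(Y, a, h, dt, sigma_B, sigma_W, N=500, seed=0):
    """Bootstrap particle filter: propagate particles through the signal
    model, weight by the observation likelihood, then resample."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(N)          # X_0^i ~ p*_0 (standard normal here)
    means = []
    for y in Y:
        # propagate through the signal model (Euler-Maruyama step)
        X = X + a(X) * dt + sigma_B * np.sqrt(dt) * rng.standard_normal(N)
        # importance sampling: Gaussian likelihood of the observation
        w = np.exp(-0.5 * ((y - h(X)) / sigma_W) ** 2)
        w /= w.sum()
        means.append(np.dot(w, X))
        # resampling (for variance reduction)
        X = rng.choice(X, size=N, p=w)
    return np.array(means)
```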
  • 42. Control-Oriented Approach to Particle Filtering. Research goal: bringing pretty back! [Figure: mean-square error vs. N (number of particles), log-log scale, comparing the bootstrap (BPF) and feedback (FPF) particle filters.]
  • 43-45. Control-Oriented Approach to Particle Filtering | Feedback particle filter. Signal & observations: dX_t = a(X_t) dt + σ_B dB_t (1), dZ_t = h(X_t) dt + σ_W dW_t (2). Controlled system (N particles): dX_t^i = a(X_t^i) dt + σ_B dB_t^i + dU_t^i [mean-field control], i = 1,…,N (3), where {B_t^i}_{i=1}^N are independent standard Wiener processes. Objective: choose the control U_t^i, as a function of the history {Z_s, X_s^i : 0 ≤ s ≤ t}, such that the two posteriors coincide: ∫_A p*(x,t) dx = P{X_t ∈ A | Z_t} and ∫_A p(x,t) dx = P{X_t^i ∈ A | Z_t}. Motivation: the work of Huang, Caines and Malhame on mean-field games (IEEE TAC 2007).
  • 46-47. Control-Oriented Approach to Particle Filtering | FPF solution, linear model. Controlled system, for i = 1,…,N: dX_t^i = α X_t^i dt + σ_B dB_t^i [prediction] + K (dZ_t − γ (X_t^i + μ_t)/2 dt) [update, via mean-field control] (3).
  • 48-52. Control-Oriented Approach to Particle Filtering | FPF update steps, linear model (feedback particle filter vs. Kalman filter):
    Observation: dZ_t = γ X_t dt + σ_W dW_t (both filters).
    Prediction: dẐ_t^i = γ (X_t^i + μ_t)/2 dt vs. dẐ_t = γ X̂_t dt.
    Innovation error: dI_t^i = dZ_t − dẐ_t^i = dZ_t − γ (X_t^i + μ_t)/2 dt vs. dI_t = dZ_t − dẐ_t = dZ_t − γ X̂_t dt.
    Control: dU_t^i = K dI_t^i vs. dU_t = K dI_t.
    Gain: K is the Kalman gain (both filters).
  • 53-55. Control-Oriented Approach to Particle Filtering | Linear feedback particle filter: the mean-field model is the Kalman filter. Feedback particle filter: dX_t^i = α X_t^i dt + σ_B dB_t^i + K (dZ_t − (γ/2)(X_t^i + (1/N) Σ_{j=1}^N X_t^j) dt) (3), with X_0^i ∼ p*(x,0) = N(μ(0), Σ(0)). Mean-field model: let p denote the conditional distribution of X_t^i given Z_t; then p = N(μ_t, Σ_t), where dμ_t = α μ_t dt + (γ Σ_t/σ_W²)(dZ_t − γ μ_t dt) and dΣ_t = (2α Σ_t + σ_B² − γ² Σ_t²/σ_W²) dt, which is the Kalman filter! As N → ∞, the empirical distribution of the particles approximates the posterior p*.
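A sketch of the linear FPF (3): every particle is corrected through the same gain applied to its own innovation. Here K is computed as γΣ_t^{(N)}/σ_W², with Σ_t^{(N)} the empirical particle variance, an assumption consistent with the mean-field model above; the Gaussian initialization is a placeholder:

```python
import numpy as np

def linear_fpf(Z, dt, alpha, gamma, sigma_B, sigma_W, N=500, seed=0):
    """Linear feedback particle filter: dX^i = alpha*X^i dt + sigma_B dB^i
       + K*(dZ - gamma*(X^i + mu)/2 dt), with K = gamma*Var(X)/sigma_W^2."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(N)                  # placeholder Gaussian prior
    means = [X.mean()]
    for k in range(len(Z) - 1):
        dZ = Z[k + 1] - Z[k]
        mu, Sigma = X.mean(), X.var()
        K = gamma * Sigma / sigma_W**2          # empirical Kalman gain
        dI = dZ - gamma * 0.5 * (X + mu) * dt   # particle-wise innovation
        X = (X + alpha * X * dt
             + sigma_B * np.sqrt(dt) * rng.standard_normal(N)
             + K * dI)
        means.append(X.mean())
    return np.array(means)
```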
  • 56. Control-Oriented Approach to Particle Filtering | Variance reduction. Filtering for a simple linear model. Mean-square error: (1/T) ∫_0^T ((Σ_t^{(N)} − Σ_t)/Σ_t)² dt. [Figure: MSE vs. N (number of particles), log-log scale, Bootstrap (BPF) and Feedback (FPF).]
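For completeness, a one-function sketch of the error metric above, assuming the empirical variance Σ_t^{(N)} and the reference variance Σ_t have already been computed on a uniform time grid (e.g., by the sketches earlier):

```python
import numpy as np

def normalized_mse(Sigma_N, Sigma, dt):
    """(1/T) * integral over [0,T] of ((Sigma_N - Sigma)/Sigma)^2 dt,
    approximated by the trapezoidal rule on a uniform grid."""
    err = ((Sigma_N - Sigma) / Sigma) ** 2
    T = dt * (len(err) - 1)
    return np.trapz(err, dx=dt) / T
```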
  • 57-60. Feedback Particle Filter | Methodology: variational formulation. How do we derive the feedback particle filter? Time-stepping procedure. Signal and observation process: dX_t = a(X_t) dt + σ_B dB_t, Z_{t_n} = h(X_{t_n}) + W_{t_n}. Feedback particle filter: dX_t^i = a(X_t^i) dt + σ_B dB_t^i, with control X_{t_n}^i = X_{t_n⁻}^i + u(X_{t_n⁻}^i). Conditional distributions: p*_n(·), the conditional pdf of X_t | Z_t, and p_n(·; u), the conditional pdf of X_t^i | Z_t. Variational problem: min_u D(p_n(u) ‖ p*_n). As Δt → 0, the optimal control u = u° yields the feedback particle filter; the nonlinear filter is the gradient flow, and u° is the optimal transport.
  • 61-62. Feedback Particle Filter: filtering in nonlinear, non-Gaussian settings. Signal model: dX_t = a(X_t) dt + dB_t, X_0 ∼ p*_0(·). Observation model: dZ_t = h(X_t) dt + dW_t. FPF: dX_t^i = a(X_t^i) dt + dB_t^i + K(X_t^i) ∘ dI_t^i [update], with innovations dI_t^i := dZ_t − ½(h(X_t^i) + ĥ) dt and conditional mean ĥ = ⟨p, h⟩ (here ∘ denotes a Stratonovich integral).
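The gain K(·) solves the boundary value problem discussed on the following slides. As a runnable sketch, the code below uses the constant-gain approximation, K ≈ empirical Cov(h(X), X)/σ_W², a standard simplification from the FPF literature that recovers the Kalman gain in the linear-Gaussian case; it is not the exact algorithm of these slides:

```python
import numpy as np

def fpf_constant_gain(Z, dt, a, h, sigma_B, sigma_W, N=500, seed=0):
    """Feedback particle filter with the constant-gain approximation
       K ~ Cov_N(h(X), X) / sigma_W^2 (constant in x, so the Stratonovich
       and Ito forms of the update coincide)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(N)                    # placeholder prior
    means = [X.mean()]
    for k in range(len(Z) - 1):
        dZ = Z[k + 1] - Z[k]
        hX = h(X); hhat = hX.mean()
        K = np.mean((hX - hhat) * (X - X.mean())) / sigma_W**2
        dI = dZ - 0.5 * (hX + hhat) * dt          # particle-wise innovation
        X = (X + a(X) * dt
             + sigma_B * np.sqrt(dt) * rng.standard_normal(N)
             + K * dI)
        means.append(X.mean())
    return np.array(means)
```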
  • 63-67. Feedback Particle Filter | Update step: how does the feedback particle filter implement Bayes' rule? (feedback particle filter vs. the linear case):
    Observation: dZ_t = h(X_t) dt + dW_t vs. dZ_t = γ X_t dt + dW_t.
    Prediction: dẐ_t^i = ½(h(X_t^i) + ĥ) dt, with ĥ = (1/N) Σ_{i=1}^N h(X_t^i), vs. dẐ_t^i = γ (X_t^i + μ_t)/2 dt.
    Innovation error: dI_t^i = dZ_t − dẐ_t^i = dZ_t − ½(h(X_t^i) + ĥ) dt vs. dI_t^i = dZ_t − γ (X_t^i + μ_t)/2 dt.
    Control: dU_t^i = K(X_t^i) ∘ dI_t^i (both cases).
    Gain: K is the solution of a linear BVP vs. K is the Kalman gain.
  • 68-76. Feedback Particle Filter | Boundary value problem: the Euler-Lagrange equation for the variational problem. A multi-dimensional boundary value problem, ∇·(K p) = −(h − ĥ) p, is solved at each time step. [Figures: the gain function in the linear case and in the nonlinear case.]
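In one dimension the BVP integrates directly: K(x) = −(1/p(x)) ∫_{−∞}^x (h(y) − ĥ) p(y) dy, using K·p → 0 in the tails. A numerical sketch on a grid, with σ_W normalized to 1 to match the BVP as written on the slide; the Gaussian density and linear h below are just a sanity check that the constant gain γΣ comes out:

```python
import numpy as np

def gain_1d(x, p, h):
    """Solve d/dx (K p) = -(h - hhat) p on a grid by direct integration,
    assuming K*p vanishes at the left boundary."""
    p = p / np.trapz(p, x)                  # normalize the density
    hhat = np.trapz(h * p, x)               # conditional mean of h
    rhs = -(h - hhat) * p
    # cumulative trapezoidal integral of rhs gives K*p
    Kp = np.concatenate(([0.0],
         np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * np.diff(x))))
    return Kp / np.maximum(p, 1e-12)        # guard the near-zero tails

# Sanity check (linear-Gaussian case): p = N(0, Sigma), h = gamma*x
x = np.linspace(-6, 6, 2001)
Sigma, gamma = 0.5, 2.0
K = gain_1d(x, np.exp(-x**2 / (2 * Sigma)), gamma * x)
print(K[len(x) // 2], gamma * Sigma)        # both approximately 1.0 near the center
```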
  • 77-78. Feedback Particle Filter | Consistency: the feedback particle filter is exact. p*, the conditional pdf of X_t given Z_t: dp* = L†(p*) dt + (h − ĥ)(σ_W²)⁻¹ (dZ_t − ĥ dt) p*. p, the conditional pdf of X_t^i given Z_t: dp = L†(p) dt − ∂(K p)/∂x dZ_t − ∂(u p)/∂x dt + (σ_W²/2) ∂²(p K²)/∂x² dt. Consistency theorem: consider the two evolution equations for p and p*. Provided the FPF is initialized with p(x,0) = p*(x,0), then p(x,t) = p*(x,t) for all t ≥ 0.
  • 79-84. Feedback Particle Filter | Kalman filter vs. feedback particle filter. Kalman filter: innovation error dI_t = dZ_t − h(X̂_t) dt; gain function K = Kalman gain. Feedback particle filter: innovation error dI_t^i = dZ_t − ½(h(X_t^i) + ĥ_t) dt; gain function K is the solution of a linear BVP.
  • 85. Part II Neural Rhythms, Bayesian Inference
  • 86-88. Oscillators in Biology | Normal form reduction: derivation of the oscillator model. C dV/dt = −g_T m∞²(V) h (V − E_T) − g_h r (V − E_h) − …, dh/dt = (h∞(V) − h)/τ_h(V), dr/dt = (r∞(V) − r)/τ_r(V). Normal form reduction → dθ^i(t) = ω^i dt + u^i(t) Φ(θ^i(t)) dt. [J. Guckenheimer, J. Math. Biol., 1975; J. Moehlis et al., Neural Computation, 2004]
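A minimal simulation of the reduced phase model dθ^i = ω^i dt + u^i(t) Φ(θ^i) dt just stated; the phase response curve Φ(θ) = sin θ, the constant input, and the frequency spread are illustrative placeholders, not the specific reduction from the cited papers:

```python
import numpy as np

def phase_population(N=100, T=50.0, dt=0.01, u=0.2, seed=0):
    """Population of reduced phase oscillators:
       dtheta^i = omega^i dt + u * Phi(theta^i) dt, with Phi = sin."""
    rng = np.random.default_rng(seed)
    omega = 1.0 + 0.05 * rng.standard_normal(N)   # heterogeneous frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)
    for _ in range(int(T / dt)):
        theta = (theta + omega * dt + u * np.sin(theta) * dt) % (2 * np.pi)
    # Kuramoto order parameter in [0, 1]: a crude measure of synchrony
    return np.abs(np.exp(1j * theta).mean())

print(phase_population())
```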
  • 89. Oscillators in Biology | Collective dynamics of a large number of oscillators: synchrony, neural rhythms.
  • 90. Oscillators in Biology | Functional role of neural rhythms. Is synchronization useful? Does it have a functional role? Books/review papers: Buzsaki; Destexhe; Ermentrout; Izhikevich; Kopell, Traub and Whittington (2009); Llinas and Ribary (2001); Pareti and Palma (2004); Sejnowski and Paulsen (2006); Singer (1993). Computations: computing with intrinsic network states, Destexhe and Contreras (2006); Izhikevich (2006); Zhang and Ballard (2001). Synaptic plasticity: neurons that fire together wire together. And several other hypotheses: communication and information flow (Laughlin and Sejnowski); binding by synchrony (Singer); memory formation (Jutras and Fries); probabilistic decision making (Wang); stimulus competition and attention selection (Kopell); sleep/wakefulness/disease (Steriade).
  • 91-94. Oscillators in Biology | Prediction: the brain as a reality emulator. "[Prediction] is the primary function of the neocortex, and the foundation of intelligence. If we want to understand how your brain works, and how to build intelligent machines, we must understand the nature of these predictions and how the cortex makes them." "The capacity to predict the outcome of future events – critical to successful movement – is, most likely, the ultimate and most common of all brain functions."
  • 95-96. Oscillators in Biology | Filtering in the brain? Bayesian model of sensory signal processing. Theory: Lee and Mumford, hierarchical Bayesian inference framework (2003); Rao, Rao and Ballard, Rao and Sejnowski, predictive coding framework (2002); Dayan, Hinton, Neal and Zemel, the Helmholtz machine (1995); Ma, Beck, Latham and Pouget, probabilistic population codes (2006); Kording and Wolpert, Bayesian decision theory (2006). And others: see Doya, Ishii, Pouget and Rao, Bayesian Brain, MIT Press (2007); Rao, Olshausen & Lewicki, Probabilistic Models of the Brain, MIT Press (2002).
  • 97-98. Oscillators in Biology | Filtering in the brain? Bayesian model of sensory signal processing. Experiments (see reviews): Gold & Shadlen, The neural basis of decision making, Ann. Rev. of Neurosci. (2007); R. T. Knight, Neural networks debunk phrenology, Science (2007). Such theories naturally feed into computer vision and, more generally, into how to make computers "intelligent".
  • 99-100. Oscillators in Biology | Bayesian inference in neuroscience: Lee and Mumford's hierarchical Bayesian inference framework. [Diagram: a hierarchy of modules, each applying Bayes' rule; in the particle-filter interpretation, each module is a particle filter.] Similar ideas also appear in: (1) Dayan, Hinton, Neal and Zemel, the Helmholtz machine (1995); (2) Lewicki and Sejnowski, Bayesian unsupervised learning (1995); (3) Rao and Ballard, Rao and Sejnowski, predictive coding framework (1999; 2002).
  • 109-110. Application: Ankle-Foot Orthoses. Estimation of the gait cycle using sensor measurements. Ankle-foot orthoses (AFOs): for lower-limb neuromuscular impairments; provide dorsiflexor (toe lift) and plantarflexor (toe push) torque assistance. Sensors: heel, toe, and ankle joint. Compressed CO2 actuator; solenoid valves control the flow of CO2 to the actuator. AFO system components: power supply, valves, actuator, sensors. Acknowledgement: Professor Liz Hsiao-Wecksler for sharing the AFO device picture and sensor data.
  • 111-118. Gait Cycle | Signal model. Stance phase, swing phase. Model (noisy oscillator): dθ_t = ω_0 dt [natural frequency] + noise.
  • 119-123. Problem: estimate the gait cycle θ_t. Sensor model. Observation model: dZ_t = h(θ_t) dt + noise. Problem: what is θ_t, given the noisy observations?
  • 124-129. Solution: particle filter, an algorithm to approximate the posterior distribution ("a large number of oscillators"). Posterior distribution: P(φ_1 < θ_t < φ_2 | sensor readings) = fraction of the θ_t^i in the interval (φ_1, φ_2). Circuit: dθ_t^i = ω^i dt [natural frequency of the i-th oscillator] + noise^i + dU_t^i [mean-field control], i = 1,…,N. Feedback particle filter: design the control law U_t^i.
  • 130-132. Filtering for oscillators. Signal & observations: dθ_t = ω dt + dB_t (mod 2π), dZ_t = h(θ_t) dt + dW_t. Particle evolution: dθ_t^i = ω^i dt + dB_t^i + K(θ_t^i) ∘ [dZ_t − ½(h(θ_t^i) + ĥ) dt] (mod 2π), i = 1,…,N, where ω^i is sampled from a distribution.
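A sketch of the oscillator FPF above. The gain K(θ) on the circle is approximated by a two-mode Galerkin solution of the BVP in the Fourier basis {sin θ, cos θ}, in the spirit of the gain approximations of reference 6 below; the 1/σ_W² normalization follows the Kalman-gain convention, the Stratonovich correction is neglected in the Euler step, and all model choices are the caller's:

```python
import numpy as np

def oscillator_fpf(Z, dt, omega, h, sigma_W, seed=0):
    """FPF on the circle. Gain K = dphi/dtheta with phi = a*sin + b*cos,
    found from the weak form E[K * psi'] = E[(h - hhat) * psi] / sigma_W^2
    for psi in {sin, cos}, using empirical (particle) expectations."""
    rng = np.random.default_rng(seed)
    N = len(omega)                         # one sampled frequency per particle
    theta = rng.uniform(0.0, 2 * np.pi, N)
    est = []
    for k in range(len(Z) - 1):
        dZ = Z[k + 1] - Z[k]
        hT = h(theta); hhat = hT.mean()
        c, s = np.cos(theta), np.sin(theta)
        A = np.array([[np.mean(c * c), -np.mean(c * s)],
                      [-np.mean(s * c), np.mean(s * s)]])
        b = np.array([np.mean((hT - hhat) * s),
                      np.mean((hT - hhat) * c)]) / sigma_W**2
        a_hat, b_hat = np.linalg.solve(A, b)
        K = a_hat * c - b_hat * s          # gain evaluated at each particle
        dI = dZ - 0.5 * (hT + hhat) * dt   # particle-wise innovation
        theta = (theta + omega * dt
                 + np.sqrt(dt) * rng.standard_normal(N)
                 + K * dI) % (2 * np.pi)
        est.append(np.angle(np.exp(1j * theta).mean()) % (2 * np.pi))  # circular mean
    return np.array(est)
```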
  • 133. Simulation results: solution of the gait-cycle estimation problem. [Video in the original slides.]
  • 134-137. Filtering of Biological Rhythms with Brain Rhythms: connection to Lee and Mumford's hierarchical Bayesian inference framework. [Diagram: a hierarchy of particle-filter modules with prior and noisy input; noisy measurements of rhythmic movement enter "Mumford's box with neurons", which, after normal form reduction, becomes "Mumford's box with oscillators", producing the estimate.]
  • 138. Acknowledgement: Adam Tilton, Tao Yang, Huibing Yin, Liz Hsiao-Wecksler, Sean Meyn.
    1. T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter with mean-field coupling. In Procs. of IEEE Conf. on Decision and Control, December 2011.
    2. T. Yang, P. G. Mehta, and S. P. Meyn. A mean-field control-oriented approach to particle filtering. In Procs. of American Control Conference, June 2011.
    3. A. Tilton, E. Hsiao-Wecksler, and P. G. Mehta. Filtering with rhythms: Application to estimation of gait cycle. In Procs. of American Control Conference, 2012.
    4. T. Yang, G. Huang, and P. G. Mehta. Joint probabilistic data association-feedback particle filter with applications to multiple target tracking. In Procs. of American Control Conference, 2012.
    5. A. Tilton, T. Yang, H. Yin, and P. G. Mehta. Feedback particle filter-based multiple target tracking using bearing-only measurements. In Procs. of Information Fusion, 2012.
    6. T. Yang, R. Laugesen, P. G. Mehta, and S. P. Meyn. Multivariable feedback particle filter. To appear in IEEE Conf. on Decision and Control, 2012.
    7. T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter. Conditionally accepted to IEEE Transactions on Automatic Control.