Feedback Particle Filter and its Applications to Neuroscience
1. Feedback Particle Filter and its Applications to Neuroscience
3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems
Santa Barbara, Sep 14-15, 2012
Prashant G. Mehta
Department of Mechanical Science and Engineering
and the Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Research supported by NSF and AFOSR
7. Background
Bayesian Inference/Filtering
Mathematics of prediction: Bayes' rule
Signal (hidden): X ∼ P(X) (prior, known)
Observation: Y (known)
Observation model: P(Y|X) (known)
Problem: What is X?
Solution, Bayes' rule:
P(X|Y) [posterior] ∝ P(Y|X) P(X) [prior]
This talk is about implementing Bayes' rule in dynamic, nonlinear, non-Gaussian settings!
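To make the update concrete, here is a minimal numerical sketch of Bayes' rule for a discrete hidden variable. The three-point prior, the additive-Gaussian observation model, and all parameter values are assumptions for illustration only.

```python
import numpy as np

# Hypothetical setup: hidden X takes values {-1, 0, 1} with an assumed prior,
# and we observe Y = X + Gaussian noise. Bayes' rule: P(X|Y) ∝ P(Y|X) P(X).
x_vals = np.array([-1.0, 0.0, 1.0])
prior = np.array([0.25, 0.50, 0.25])   # P(X), assumed known
sigma = 0.5                            # observation noise std (assumed)

def posterior(y):
    # Likelihood P(Y|X = x) under the additive-Gaussian observation model
    lik = np.exp(-0.5 * ((y - x_vals) / sigma) ** 2)
    post = lik * prior                 # unnormalized posterior
    return post / post.sum()           # normalize

print(posterior(0.8))   # mass shifts toward x = 1 after observing y = 0.8
```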
11. Background
Applications
Engineering applications. Filtering is important to:
Air moving target indicator (AMTI) systems and space situational awareness
Remote sensing and surveillance: air traffic management, weather surveillance, geophysical surveys
Autonomous navigation & robotics: simultaneous localization and mapping (SLAM)
20. Nonlinear Filtering
Mathematical Problem
Signal model: dXt = a(Xt) dt + dBt, X0 ∼ p*_0(·)
Observation model: dZt = h(Xt) dt + dWt
Problem: What is Xt, given the observations up to time t, Zt?
Answer in terms of the posterior: P(Xt|Zt) =: p*(x,t).
The posterior is an information state:
P(Xt ∈ A|Zt) = ∫_A p*(x,t) dx,  E(Xt|Zt) = ∫_R x p*(x,t) dx
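Before filtering, it helps to see the model itself. A minimal Euler-Maruyama sketch of the signal and observation processes; the drift a(x) = -x, the observation function h(x) = x, and the Gaussian initial condition are assumptions for illustration.

```python
import numpy as np

# Simulate one path of dX_t = a(X_t) dt + dB_t and dZ_t = h(X_t) dt + dW_t
# by Euler-Maruyama (a(x) = -x and h(x) = x are illustrative assumptions).
rng = np.random.default_rng(0)
dt, T = 0.01, 5.0
n = int(T / dt)
a = lambda x: -x
h = lambda x: x

X = np.empty(n + 1)      # hidden signal path
Z = np.empty(n + 1)      # integrated observation path
X[0] = rng.normal()      # X_0 ~ p*_0, here a standard Gaussian (assumed)
Z[0] = 0.0
for k in range(n):
    X[k + 1] = X[k] + a(X[k]) * dt + np.sqrt(dt) * rng.normal()
    Z[k + 1] = Z[k] + h(X[k]) * dt + np.sqrt(dt) * rng.normal()
```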
21. Nonlinear Filtering
Pretty Formulae in Mathematics
More often than not, these are simply stated.
Euler's identity: e^{iπ} = −1
Euler's formula: v − e + f = 2
Pythagoras' theorem: x² + y² = z²
[Kenneth Chang, "What Makes an Equation Beautiful", The New York Times, October 24, 2004]
29. Nonlinear Filtering
Kalman filter
Solution in linear Gaussian settings:
dXt = αXt dt + dBt (1)
dZt = γXt dt + dWt (2)
Kalman filter: p* = N(X̂t, Σt),
dX̂t = αX̂t dt + K (dZt − γX̂t dt) [update]
[Block diagram: Kalman filter as a feedback loop]
Observation: dZt = γXt dt + dWt
Prediction: dẐt = γX̂t dt
Innovation error: dIt = dZt − dẐt = dZt − γX̂t dt
Control: dUt = K dIt
Gain: Kalman gain
[R. E. Kalman, Trans. ASME, Ser. D: J. Basic Eng., 1961]
32. Nonlinear Filtering
Kalman filter
dX̂t = αX̂t dt [prediction] + K (dZt − γX̂t dt) [update]
This illustrates the key features of feedback control:
1 Use the error to obtain the control (dUt = K dIt)
2 Negative-gain feedback serves to reduce the error (K = (γ/σ²_W) Σt, where γ/σ²_W is a signal-to-noise ratio)
Simple enough to be included in the first undergraduate course on control.
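A minimal Euler-discretized sketch of this filter, with the gain and covariance propagated as above; all parameter values are assumptions for illustration.

```python
import numpy as np

# Kalman-Bucy filter for dX = αX dt + σ_B dB, dZ = γX dt + σ_W dW.
# Gain K_t = γ Σ_t / σ_W²; Σ_t solves the Riccati equation.
rng = np.random.default_rng(1)
alpha, gamma, sig_B, sig_W = -0.5, 1.0, 0.3, 0.2   # assumed model parameters
dt, n = 0.001, 5000

X, Xhat, Sigma = 1.0, 0.0, 1.0    # true state, estimate, covariance
for _ in range(n):
    dB, dW = np.sqrt(dt) * rng.normal(size=2)
    dZ = gamma * X * dt + sig_W * dW                 # observation increment
    K = gamma * Sigma / sig_W**2                     # Kalman gain (SNR times Σ_t)
    Xhat += alpha * Xhat * dt + K * (dZ - gamma * Xhat * dt)   # prediction + update
    Sigma += (2 * alpha * Sigma + sig_B**2 - gamma**2 * Sigma**2 / sig_W**2) * dt
    X += alpha * X * dt + sig_B * dB                 # propagate the true state
print(Xhat, X)    # the estimate tracks the hidden state
```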
35. Nonlinear Filtering
Filtering Problem
Nonlinear Model: Kushner-Stratonovich PDE
Signal & observations:
dXt = a(Xt) dt + σ_B dBt (1)
dZt = h(Xt) dt + σ_W dWt (2)
The posterior distribution p* is a solution of a stochastic PDE:
dp* = L†(p*) dt + (1/σ²_W)(h − ĥ)(dZt − ĥ dt) p*
where ĥ = E[h(Xt)|Zt] = ∫ h(x) p*(x,t) dx and
L†(p*) = −∂(p*·a(x))/∂x + (σ²_B/2) ∂²p*/∂x²
No closed-form solution in general. Closure problem.
[R. L. Stratonovich, SIAM Theory Probab. Appl., 1960; H. J. Kushner, SIAM J. Control, 1964]
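Since no closed-form solution exists, one way to build intuition is brute-force numerics. A finite-difference sketch of the SPDE on a 1-D grid follows; a(x) = -x, h(x) = x, and σ_B = σ_W = 1 are illustrative assumptions, and the re-normalization step is a practical guard against Euler discretization error.

```python
import numpy as np

# Integrate the Kushner-Stratonovich SPDE on a grid (illustration only).
rng = np.random.default_rng(2)
x = np.linspace(-4, 4, 401)
dx = x[1] - x[0]
a, h = -x, x                                   # drift and observation function on the grid
dt, n = 1e-4, 2000
p = np.exp(-x**2 / 2); p /= p.sum() * dx       # prior p*_0 = N(0, 1)
X = 0.5                                        # true (hidden) state

def Ldag(p):
    # L†(p) = -d(a p)/dx + (1/2) d²p/dx², by centered differences
    return -np.gradient(a * p, dx) + 0.5 * np.gradient(np.gradient(p, dx), dx)

for _ in range(n):
    dZ = X * dt + np.sqrt(dt) * rng.normal()   # observation increment, h(X) = X
    hhat = np.sum(h * p) * dx                  # ĥ = E[h(X_t) | Z_t]
    p = p + Ldag(p) * dt + (h - hhat) * (dZ - hhat * dt) * p
    p = np.clip(p, 0, None); p /= p.sum() * dx # re-normalize (Euler error guard)
    X += -X * dt + np.sqrt(dt) * rng.normal()  # propagate the true state
print(np.sum(x * p) * dx)                      # posterior mean estimate of X_t
```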
41. Nonlinear Filtering
Particle Filter
An algorithm to solve the nonlinear filtering problem.
Approximate the posterior in terms of particles:
p*(x,t) ≈ (1/N) ∑_{i=1}^N δ_{X^i_t}(x)
Algorithm outline
1 Initialization at time 0: X^i_0 ∼ p*_0(·)
2 At each discrete time step:
Importance sampling (Bayes update step)
Resampling (for variance reduction)
e.g. dZt = Xt dt + small noise
Innovation error, feedback? And most importantly, is this pretty?
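For reference, a minimal sketch of the bootstrap particle filter for this model, time-discretized with step dt; a(x) = -x, h(x) = x, and all parameter values are assumptions for illustration.

```python
import numpy as np

# Bootstrap particle filter: propagate with the signal dynamics (the proposal),
# weight by the likelihood of the observation increment, then resample.
rng = np.random.default_rng(3)
N, dt, n = 500, 0.01, 1000
a = lambda x: -x
h = lambda x: x

X = 0.5                              # true hidden state
particles = rng.normal(size=N)       # X^i_0 ~ p*_0 (standard Gaussian assumed)
for _ in range(n):
    X += a(X) * dt + np.sqrt(dt) * rng.normal()                        # signal transition
    particles += a(particles) * dt + np.sqrt(dt) * rng.normal(size=N)  # importance sampling
    dZ = h(X) * dt + np.sqrt(dt) * rng.normal()                        # observation increment
    # Likelihood of dZ ~ N(h dt, dt) given each particle, up to a constant factor
    w = np.exp(h(particles) * dZ - 0.5 * h(particles) ** 2 * dt)
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]                  # multinomial resampling
print(particles.mean())              # approximates E[X_t | Z_t]
```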
42. Control-Oriented Approach to Particle Filtering
Research goal: Bringing pretty back!
[Figure: mean-squared error (MSE) vs. number of particles N, on log-log axes, comparing the bootstrap particle filter (BPF) and the feedback particle filter (FPF)]
45. Control-Oriented Approach to Particle Filtering
Feedback Particle Filter
Signal & observations:
dXt = a(Xt) dt + σ_B dBt (1)
dZt = h(Xt) dt + σ_W dWt (2)
Controlled system (N particles):
dX^i_t = a(X^i_t) dt + σ_B dB^i_t + dU^i_t [mean-field control], i = 1,...,N (3)
{B^i_t}_{i=1}^N are independent standard Brownian motions.
Objective: Choose the control U^i_t, as a function of the history {Zs, X^i_s : 0 ≤ s ≤ t}, such that the two posteriors coincide:
∫_{x∈A} p*(x,t) dx = P{Xt ∈ A | Zt}
∫_{x∈A} p(x,t) dx = P{X^i_t ∈ A | Zt}
[Motivation: work of Huang, Caines and Malhamé on mean-field games, IEEE TAC 2007]
47. Control-Oriented Approach to Particle Filtering
FPF Solution
Linear model
Controlled system: for i = 1,...,N,
dX^i_t = αX^i_t dt + σ_B dB^i_t [prediction] + K (dZt − γ (X^i_t + µt)/2 dt) [update, via mean-field control] (3)
[Block diagram: feedback particle filter loop]
52. Control-Oriented Approach to Particle Filtering
FPF Update Steps
Linear model
                Feedback particle filter                   Kalman filter
Observation:    dZt = γXt dt + σ_W dWt                     dZt = γXt dt + σ_W dWt
Prediction:     dẐ^i_t = γ (X^i_t + µt)/2 dt               dẐt = γX̂t dt
Innov. error:   dI^i_t = dZt − dẐ^i_t                      dIt = dZt − dẐt
                       = dZt − γ (X^i_t + µt)/2 dt              = dZt − γX̂t dt
Control:        dU^i_t = K dI^i_t                          dUt = K dIt
Gain:           K is the Kalman gain
55. Control-Oriented Approach to Particle Filtering
Linear Feedback Particle Filter
Mean-field model is the Kalman filter
Feedback particle filter:
dX^i_t = αX^i_t dt + σ_B dB^i_t + K (dZt − (γ/2)(X^i_t + (1/N) ∑_{j=1}^N X^j_t) dt) (3)
X^i_0 ∼ p*(x,0) = N(µ(0), Σ(0))
Mean-field model: Kalman filter! Let p denote the conditional distribution of X^i_t given Zt. Then p = N(µt, Σt), where
dµt = αµt dt + (γΣt/σ²_W)(dZt − γµt dt)
dΣt = (2αΣt + σ²_B − γ²Σ²_t/σ²_W) dt
As N → ∞, the empirical distribution approximates the posterior p*.
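A minimal sketch of the linear FPF (3). Computing the gain from the empirical particle variance is one common finite-N implementation (an assumption here, not spelled out on the slide), and all parameter values are illustrative.

```python
import numpy as np

# Linear feedback particle filter: every particle is nudged through the same gain
# K_t = γ Σ_t / σ_W², with Σ_t replaced by the empirical particle variance.
rng = np.random.default_rng(4)
alpha, gamma, sig_B, sig_W = -0.5, 1.0, 0.3, 0.2   # assumed parameters
N, dt, n = 500, 0.001, 5000

X = 1.0
particles = rng.normal(size=N)                     # X^i_0 ~ N(0, 1) (assumed prior)
for _ in range(n):
    dZ = gamma * X * dt + sig_W * np.sqrt(dt) * rng.normal()
    mu, Sigma = particles.mean(), particles.var()  # empirical mean-field terms
    K = gamma * Sigma / sig_W**2                   # gain from the empirical variance
    dI = dZ - gamma * 0.5 * (particles + mu) * dt  # particle-wise innovation error
    particles += (alpha * particles * dt
                  + sig_B * np.sqrt(dt) * rng.normal(size=N) + K * dI)
    X += alpha * X * dt + sig_B * np.sqrt(dt) * rng.normal()
print(particles.mean(), X)   # the particle mean tracks the Kalman estimate as N grows
```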
56. Control-Oriented Approach to Particle Filtering
Variance Reduction
Filtering for a simple linear model.
Mean-square error: (1/T) ∫₀ᵀ ((Σ^(N)_t − Σt)/Σt)² dt
[Figure: MSE vs. number of particles N, log-log scale, comparing Bootstrap (BPF) and Feedback (FPF) particle filters]
60. Feedback Particle Filter
Methodology: Variational Formulation
How do we derive the feedback particle filter?
Time-stepping procedure:
Signal, observation process: dXt = a(Xt) dt + σ_B dBt; Z_{t_n} = h(X_{t_n}) + W_{t_n}
Feedback particle filter: dX^i_t = a(X^i_t) dt + σ_B dB^i_t, with control X^i_{t_n} = X^i_{t_n⁻} + u(X^i_{t_n⁻})
Conditional distributions: p*_n(·), the cond. pdf of Xt|Zt; p_n(·;u), the cond. pdf of X^i_t|Zt
Variational problem: min_u D(p_n(u) ‖ p*_n)
As ∆t → 0: the optimal control, u = u°, yields the feedback particle filter; the nonlinear filter is the gradient flow and u° is the optimal transport.
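In display form, one time step of this construction reads as follows. This is a sketch: the σ²_W/∆t observation-noise variance is an assumption, chosen so that the discrete observation model recovers the continuous-time model dZt = h(Xt) dt + σ_W dWt as ∆t → 0.

```latex
% (requires amsmath)
% Bayes update of the conditional density at t_n, followed by the
% variational (KL-divergence) characterization of the optimal control.
\begin{align*}
  p^*_n(x) &\propto p^*_{n^-}(x)\,
     \exp\!\Big(-\tfrac{\Delta t}{2\sigma_W^2}\,\big(Z_{t_n}-h(x)\big)^2\Big),\\
  u^{\circ} &= \operatorname*{arg\,min}_{u}\;
     \mathrm{D}\big(p_n(\cdot\,;u)\,\big\|\,p^*_n\big),\qquad
  \mathrm{D}(p\,\|\,q)=\int p(x)\,\ln\frac{p(x)}{q(x)}\,\mathrm{d}x.
\end{align*}
```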
62. Feedback Particle Filter
Filtering in nonlinear non-Gaussian settings
Signal model: dXt = a(Xt) dt + dBt, X0 ∼ p*_0(·)
Observation model: dZt = h(Xt) dt + dWt
FPF: dX^i_t = a(X^i_t) dt + dB^i_t + K(X^i_t) ∘ dI^i_t [update]
Innovations: dI^i_t := dZt − ½ (h(X^i_t) + ĥ) dt, with conditional mean ĥ = ⟨p, h⟩.
67. Feedback Particle Filter
Update Step
How does the feedback particle filter implement Bayes' rule?
                Feedback particle filter                    Linear case
Observation:    dZt = h(Xt) dt + dWt                        dZt = γXt dt + dWt
Prediction:     dẐ^i_t = ½ (h(X^i_t) + ĥ) dt                dẐ^i_t = γ (X^i_t + µt)/2 dt
                with ĥ = (1/N) ∑_{i=1}^N h(X^i_t)
Innov. error:   dI^i_t = dZt − dẐ^i_t                       dI^i_t = dZt − dẐ^i_t
                       = dZt − ½ (h(X^i_t) + ĥ) dt                = dZt − γ (X^i_t + µt)/2 dt
Control:        dU^i_t = K(X^i_t) ∘ dI^i_t                  dU^i_t = K(X^i_t) ∘ dI^i_t
Gain:           K is a solution of a linear BVP             K is the Kalman gain
76. Feedback Particle Filter
Boundary Value Problem
Euler-Lagrange equation for the variational problem: a (multi-dimensional) boundary value problem,
∇·(K p) = −(h − ĥ) p,
solved at each time step.
[Figures: the gain function K in the linear case (constant, the Kalman gain) and in a nonlinear case (state-dependent)]
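In one dimension the BVP can be integrated directly: with Kp → 0 as x → −∞, K(x) = (1/p(x)) ∫_{−∞}^x (ĥ − h(y)) p(y) dy. A numerical sketch follows (σ_W = 1 normalization, as in the FPF of slide 62); the density p and the observation function h below are assumptions for illustration.

```python
import numpy as np

# Solve d/dx (K p) = -(h - ĥ) p on a grid by cumulative integration.
def fpf_gain(x, p, h):
    dx = x[1] - x[0]
    p = p / (p.sum() * dx)                 # normalize the density
    hhat = np.sum(h * p) * dx              # ĥ = ∫ h p dx
    Kp = np.cumsum((hhat - h) * p) * dx    # ∫_{-∞}^{x} (ĥ - h) p dy
    return Kp / np.maximum(p, 1e-12)       # guard against near-zero tails

x = np.linspace(-5, 5, 1001)
p = np.exp(-x**2 / 2)                      # Gaussian density (assumed)
K = fpf_gain(x, p, h=x)                    # linear observation h(x) = x
print(K[400:600].round(3))                 # ≈ 1 in the bulk: recovers the Kalman gain γΣ
```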
78. Feedback Particle Filter
Consistency
Feedback particle filter is exact
p*: conditional pdf of Xt given Zt,
dp* = L†(p*) dt + (h − ĥ)(σ²_W)⁻¹ (dZt − ĥ dt) p*
p: conditional pdf of X^i_t given Zt,
dp = L†(p) dt − ∂/∂x (K p) dZt − ∂/∂x (u p) dt + (σ²_W/2) ∂²/∂x² (p K²) dt
Consistency Theorem: Consider the two evolution equations for p and p*. Provided the FPF is initialized with p(x,0) = p*(x,0), then
p(x,t) = p*(x,t) for all t ≥ 0.
84. Feedback Particle Filter
Kalman Filter [block diagram]
Innovation error: dIt = dZt − h(X̂t) dt
Gain function: K = Kalman gain
Feedback Particle Filter [block diagram]
Innovation error: dI^i_t = dZt − ½ (h(X^i_t) + ĥt) dt
Gain function: K is the solution of a linear BVP.
88. Oscillators in Biology
Normal Form Reduction
Derivation of the oscillator model:
C dV/dt = −g_T · m²_∞(V) · h · (V − E_T) − g_h · r · (V − E_h) − ···
dh/dt = (h_∞(V) − h)/τ_h(V)
dr/dt = (r_∞(V) − r)/τ_r(V)
Normal form reduction ⟶ dθ_i(t) = ω_i dt + u_i(t)·Φ(θ_i(t)) dt
[J. Guckenheimer, J. Math. Biol., 1975; J. Moehlis et al., Neural Computation, 2004]
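A minimal simulation sketch of the reduced phase model. The phase response curve Φ(θ) = sin θ, the common input u(t), and the frequency distribution are all assumptions for illustration; in practice Φ comes out of the normal form reduction of the conductance-based model.

```python
import numpy as np

# Reduced phase model dθ_i = ω_i dt + u_i(t) Φ(θ_i) dt for a population of oscillators.
rng = np.random.default_rng(5)
N, dt, n = 100, 0.001, 20000
omega = rng.normal(2 * np.pi, 0.3, size=N)   # heterogeneous natural frequencies
Phi = np.sin                                 # assumed phase response curve
u = lambda t: 2.0 * np.cos(2 * np.pi * t)    # assumed common periodic input

theta = rng.uniform(0, 2 * np.pi, size=N)
for k in range(n):
    theta = (theta + omega * dt + u(k * dt) * Phi(theta) * dt) % (2 * np.pi)
# Kuramoto order parameter: |mean(e^{iθ})| is close to 1 when the phases synchronize
print(np.abs(np.exp(1j * theta).mean()))
```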
90. Oscillators in Biology
Functional Role of Neural Rhythms
Is synchronization useful? Does it have a functional role?
Books/review papers: Buzsaki, Destexhe, Ermentrout, Izhikevich, Kopell, Traub and Whittington (2009), Llinas and Ribary (2001), Pareti and Palma (2004), Sejnowski and Paulsen (2006), Singer (1993), ...
Computations: computing with intrinsic network states (Destexhe and Contreras, 2006; Izhikevich, 2006; Zhang and Ballard, 2001)
Synaptic plasticity: neurons that fire together wire together
And several other hypotheses: communication and information flow (Laughlin and Sejnowski); binding by synchrony (Singer); memory formation (Jutras and Fries); probabilistic decision making (Wang); stimulus competition and attention selection (Kopell); sleep/wakefulness/disease (Steriade)
94. Oscillators in Biology
Prediction
Brain as a reality emulator
"[Prediction] is the primary function of the neocortex, and the foundation of intelligence. If we want to understand how your brain works, and how to build intelligent machines, we must understand the nature of these predictions and how the cortex makes them."
"The capacity to predict the outcome of future events – critical to successful movement – is, most likely, the ultimate and most common of all brain functions."
96. Oscillators in Biology
Filtering in the Brain?
Bayesian models of sensory signal processing
Theory:
Lee and Mumford, hierarchical Bayesian inference framework (2003)
Rao; Rao and Ballard; Rao and Sejnowski, predictive coding framework (2002)
Dayan, Hinton, Neal and Zemel, the Helmholtz machine (1995)
Ma, Beck, Latham and Pouget, probabilistic population codes (2006)
Kording and Wolpert, Bayesian decision theory (2006)
And others: see
Doya, Ishii, Pouget and Rao, Bayesian Brain, MIT Press (2007)
Rao, Olshausen & Lewicki, Probabilistic Models of the Brain, MIT Press (2002)
98. Oscillators in Biology
Filtering in the Brain?
Bayesian models of sensory signal processing
Experiments (see reviews):
Gold & Shadlen, The neural basis of decision making, Ann. Rev. of Neurosci. (2007)
R. T. Knight, Neural networks debunk phrenology, Science (2007)
Such theories naturally feed into computer vision and, more generally, into how to make computers "intelligent".
100. Oscillators in Biology
Bayesian Inference in Neuroscience
Lee and Mumford's hierarchical Bayesian inference framework
[Diagram: a cascade of modules, each implementing Bayes' rule; in the particle-filter interpretation, each module is a particle filter]
Similar ideas also appear in:
1 Dayan, Hinton, Neal and Zemel. The Helmholtz machine (1995)
2 Lewicki and Sejnowski. Bayesian unsupervised learning (1995)
3 Rao and Ballard; Rao and Sejnowski. Predictive coding framework (1999; 2002)
110. Application: Ankle-foot Orthoses
Estimation of the gait cycle using sensor measurements
Ankle-foot orthoses (AFOs): for lower-limb neuromuscular impairments. Provides dorsiflexor (toe-lift) and plantarflexor (toe-push) torque assistance.
[Device photo] AFO system components: power supply, valves, actuator, sensors.
Sensors: heel, toe, and ankle joint.
Actuator: driven by compressed CO2; solenoid valves control the flow of CO2 to the actuator.
Acknowledgement: Professor Liz Hsiao-Wecksler, for sharing the AFO device picture and sensor data.
123. Problem: Estimate the Gait Cycle θt
Sensor model
Observation model: dZt = h(θt) dt + noise
Problem: What is θt, given noisy observations?
129. Solution: Particle Filter
An algorithm to approximate the posterior distribution: "a large number of oscillators".
Posterior distribution: P(φ1 < θt < φ2 | sensor readings) = fraction of the θ^i_t in the interval (φ1, φ2)
Circuit: dθ^i_t = ω_i dt [natural freq. of the ith oscillator] + noise_i + dU^i_t [mean-field control], i = 1,...,N
Feedback particle filter: design the control law U^i_t.
132. Filtering for Oscillators
Signal & observations:
dθt = ω dt + dBt mod 2π
dZt = h(θt) dt + dWt
[Figure: posterior density on the circle, θ ∈ (−π, π]]
Particle evolution:
dθ^i_t = ω_i dt + dB^i_t + K(θ^i_t) ∘ [dZt − ½ (h(θ^i_t) + ĥ) dt] mod 2π, i = 1,...,N,
where ω_i is sampled from a distribution.
[Block diagram: feedback particle filter loop]
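A sketch of the oscillator FPF. The gain is computed here by a Galerkin approximation in the basis {sin θ, cos θ}, one common finite-N approximation (an assumption; the exact gain solves the circle version of the BVP on slide 76). The sensor function h(θ) = cos θ and all parameters are illustrative, and the Stratonovich correction implied by "∘" is omitted in this Euler sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt, n = 500, 0.001, 20000
h = np.cos                                      # assumed sensor function
theta_true, omega_true = 0.0, 2 * np.pi
omega = rng.normal(omega_true, 0.1, size=N)     # ω_i sampled from a distribution

def galerkin_gain(theta, herr):
    # Weak form of d/dθ (K p) = -(h - ĥ) p with test functions ψ ∈ {sin θ, cos θ}:
    # E[K ψ'] = E[(h - ĥ) ψ], with K = c1 cos θ - c2 sin θ.
    s, c = np.sin(theta), np.cos(theta)
    A = np.array([[np.mean(c * c), -np.mean(s * c)],
                  [-np.mean(s * c), np.mean(s * s)]]) + 1e-9 * np.eye(2)
    b = np.array([np.mean(herr * s), np.mean(herr * c)])
    c1, c2 = np.linalg.solve(A, b)
    return c1 * c - c2 * s

theta = rng.uniform(0, 2 * np.pi, size=N)       # particles on the circle
for _ in range(n):
    dZ = h(theta_true) * dt + np.sqrt(dt) * rng.normal()
    hhat = h(theta).mean()
    dI = dZ - 0.5 * (h(theta) + hhat) * dt      # particle-wise innovation error
    K = galerkin_gain(theta, h(theta) - hhat)
    theta = (theta + omega * dt + np.sqrt(dt) * rng.normal(size=N) + K * dI) % (2 * np.pi)
    theta_true = (theta_true + omega_true * dt + np.sqrt(dt) * rng.normal()) % (2 * np.pi)
```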
137. Filtering of Biological Rhythms with Brain Rhythms
Connection to Lee and Mumford's hierarchical Bayesian inference framework
[Diagram: "Mumford's box with neurons", a cascade of particle-filter modules driven by a prior and noisy input, is mapped by normal form reduction to "Mumford's box with oscillators": noisy measurements of a rhythmic movement enter a cascade of oscillator particle filters, which produce an estimate]
138. Acknowledgement
Adam Tilton Tao Yang Huibing Yin Liz Hsiao-Wecksler Sean Meyn
1 T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter with mean-field coupling. In Procs. of IEEE Conf. on
Decision and Control, December 2011.
2 T. Yang, P. G. Mehta, and S. P. Meyn. A mean-field control-oriented approach to particle filtering. In Procs. of
American Control Conference, June 2011.
3 A. Tilton, E. Hsiao-Wecksler, P. G. Mehta. Filtering with rhythms: Application to estimation of gait cycle. In Procs. of
American Control Conference, 2012.
4 T. Yang, G. Huang and P. G. Mehta. Joint probabilistic data association-feedback particle filter with applications to
multiple target tracking. In Procs. of American Control Conference, 2012.
5 A. Tilton, T. Yang, H. Yin and P. G. Mehta. Feedback particle filter-based multiple target tracking using bearing-only
measurements. In Procs. of Information Fusion, 2012.
6 T. Yang, R. Laugesen, P. G. Mehta, and S. P. Meyn. Multivariable feedback particle filter. To appear in IEEE Conf. on
Decision and Control, 2012.
7 T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter. Conditionally accepted to IEEE Transactions on
Automatic Control.