Component-wise approximate Bayesian computation via Gibbs-like steps
Christian P. Robert(1,2)
Joint work with Grégoire Clarté(1), Robin Ryder(1), Julien Stoehr(1)
(1) Université Paris-Dauphine, (2) University of Warwick
Approximate Bayesian Computation @ Clermont
ABC postdoc positions
2 post-doc positions open with the ABSint ANR research grant:
Focus on approximate Bayesian techniques like ABC, variational Bayes, PAC-Bayes, Bayesian non-parametrics, scalable MCMC, and related topics. A potential direction of research would be the derivation of new Bayesian tools for model checking in such complex environments.
Terms: up to 24 months, no teaching duty attached, primarily located in Université Paris-Dauphine, with supported periods in Oxford (J. Rousseau) [barring no-deal Brexit!] and visits to Montpellier (J.-M. Marin).
No hard deadline.
If interested, send application to me: bayesianstatistics@gmail.com
Approximate Bayesian computation (ABC)
ABC is a computational method which stemmed from population genetics models about 20 years ago to deal with generative models with intractable distributions.
[Tavaré et al., 1997; Beaumont et al., 2002]
Settings of interest: the likelihood function f(x | θ) does not admit a closed form as a function of θ and/or is computationally too costly.
1. Model relying on a latent process z ∈ Z:
$$f(x \mid \theta) = \int_{\mathcal{Z}} f(x, z \mid \theta)\,\mu(dz).$$
2. Model with intractable normalising constant:
$$f(x \mid \theta) = \frac{q(x \mid \theta)}{Z(\theta)}, \qquad \text{where } Z(\theta) = \int_{\mathcal{X}} q(x \mid \theta)\,\mu(dx).$$
Approximate Bayesian computation (ABC)
Bayesian settings: the target is $\pi(\theta \mid x^{obs}) \propto \pi(\theta)\,f(x^{obs} \mid \theta)$.
Algorithm: Vanilla ABC
Input: observed dataset x^obs, number of iterations N, threshold ε, summary statistic s.
for i = 1, . . . , N do
    θ_i ∼ π(·)
    x_i ∼ f(· | θ_i)
end
return the θ_i such that d(s(x^obs), s(x_i)) ≤ ε
[Figure: the accepted pairs (θ_i, s(x_i)) are those falling within the ε-ball around s(x^obs).]
Output: distributed according to
$$\pi(\theta)\,P_\theta\{d(s(x^{obs}), s(x)) < \varepsilon\} \propto \pi(\theta \mid d(s(x^{obs}), s(x)) < \varepsilon) \propto \pi_\varepsilon(\theta \mid s, x^{obs}).$$
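For concreteness, here is a minimal Python sketch of this rejection sampler on a hypothetical normal model with the sample mean as summary statistic; the model, prior, and tolerance are illustrative assumptions, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def vanilla_abc(x_obs, prior, simulate, s, eps, N):
    """Vanilla ABC: keep prior draws whose simulated summary lands within eps."""
    accepted = []
    for _ in range(N):
        theta = prior()                  # theta_i ~ pi(.)
        x = simulate(theta)              # x_i ~ f(. | theta_i)
        if abs(s(x) - s(x_obs)) <= eps:  # d(s(x_obs), s(x_i)) <= eps
            accepted.append(theta)
    return np.array(accepted)

# Illustrative toy model (assumption): x | theta ~ N(theta, 1), theta ~ N(0, 10)
x_obs = rng.normal(2.0, 1.0, size=50)
post = vanilla_abc(
    x_obs,
    prior=lambda: rng.normal(0.0, np.sqrt(10.0)),
    simulate=lambda th: rng.normal(th, 1.0, size=50),
    s=np.mean,
    eps=0.1,
    N=100_000,
)
print(len(post), post.mean())  # accepted sample size and posterior mean estimate
```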
Approximate Bayesian computation (ABC)
Two particular situations:
$$\pi_\infty(\theta \mid s, x^{obs}) \propto \pi(\theta) \qquad\text{and}\qquad \pi_0(\theta \mid s, x^{obs}) \propto \pi(\theta \mid s(x^{obs})),$$
the latter differing from π(θ | x^obs) unless s is sufficient.
Some difficulties raised by the vanilla version:
• Calibration of the threshold ε: from a regression or a k-nearest neighbour perspective. [Beaumont et al., 2002; Wilkinson, 2013; Biau et al., 2013]
• Selection of the summary statistic s: advances consider semi-automatic procedures using a pilot-run ABC or a random forest methodology. [Fearnhead and Prangle, 2012; Prangle et al., 2014; Raynal et al., 2018]
• Simulating from the prior is often inefficient: solutions consist in modifying the proposal distribution on θ to increase the density of simulated x's within the vicinity of the observation. [Marjoram et al., 2003; Toni et al., 2008]
A first example: hierarchical moving average model
[Graphical model: hyperparameters α and σ sit atop the individual parameters µ1, . . . , µn and σ1, . . . , σn, which generate the series x1, . . . , xn.]
First parameter hierarchy:
$$\alpha = (\alpha_1, \alpha_2, \alpha_3) \sim \mathcal{E}(1)^{\otimes 3};$$
independently for each i ∈ {1, . . . , n},
$$(\beta_{i,1}, \beta_{i,2}, \beta_{i,3}) \sim \mathrm{Dir}(\alpha_1, \alpha_2, \alpha_3), \qquad \mu_i = (\beta_{i,1} - \beta_{i,2},\; 2(\beta_{i,1} + \beta_{i,2}) - 1).$$
Second parameter hierarchy:
$$\sigma = (\sigma_1, \sigma_2) \sim \mathcal{C}^+(1)^{\otimes 2};$$
independently for each i ∈ {1, . . . , n}, $\sigma_i \sim \mathcal{IG}(\sigma_1, \sigma_2)$.
Model for x_i: independently for each i ∈ {1, . . . , n}, x_i ∼ MA2(µ_i, σ_i), i.e., for all j in N,
$$x_{i,j} = y_j + \mu_{i,1}\, y_{j-1} + \mu_{i,2}\, y_{j-2}, \qquad \text{with } y_j \sim \mathcal{N}(0, \sigma_i^2).$$
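As a complement, a minimal Python sketch of a simulator for this hierarchical MA(2) model (data generation only; the IG draw is implemented as the reciprocal of a gamma draw, and reading σ_i as the innovation standard deviation follows the display above):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_hierarchical_ma2(n=5, T=100):
    """Draw hyperparameters, per-series parameters, and n MA(2) series."""
    alpha = rng.exponential(1.0, size=3)             # alpha ~ E(1)^{x3}
    sigma = np.abs(rng.standard_cauchy(size=2))      # sigma ~ C+(1)^{x2}
    mus, sigs, series = [], [], []
    for _ in range(n):
        beta = rng.dirichlet(alpha)                  # (b1, b2, b3) ~ Dir(alpha)
        mu = np.array([beta[0] - beta[1], 2.0 * (beta[0] + beta[1]) - 1.0])
        sig = 1.0 / rng.gamma(sigma[0], 1.0 / sigma[1])  # sigma_i ~ IG(sigma1, sigma2)
        y = rng.normal(0.0, sig, size=T + 2)             # innovations y_j ~ N(0, sigma_i^2)
        x = y[2:] + mu[0] * y[1:-1] + mu[1] * y[:-2]     # x_j = y_j + mu1 y_{j-1} + mu2 y_{j-2}
        mus.append(mu); sigs.append(sig); series.append(x)
    return alpha, sigma, np.array(mus), np.array(sigs), np.array(series)

alpha, sigma, mus, sigs, xs = simulate_hierarchical_ma2()
print(xs.shape)  # (5, 100)
```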
A first example: toy dataset
Settings: n = 5 time series of length T = 100; hierarchical model with 13 parameters.
[Figure: ABC posterior distribution of µ1,1 along with the prior distribution (black line).]
Size of ABC reference table: N = 5.5 · 10^6; ABC posterior sample size: 1000.
Not enough simulations to reach a decent threshold; not enough time to produce enough simulations.
The Gibbs Sampler
Our idea: combining ABC with the Gibbs sampler in order to improve its ability to efficiently explore Θ ⊂ R^n when the number n of parameters increases.
The Gibbs sampler produces a Markov chain with a target joint distribution π by alternately sampling from each of its conditionals.
[Geman and Geman, 1984]
Algorithm: Gibbs sampler
Input: observed dataset x^obs, number of iterations N, starting point θ^(0) = (θ_1^(0), . . . , θ_n^(0)).
for i = 1, . . . , N do
    for k = 1, . . . , n do
        θ_k^(i) ∼ π(· | θ_1^(i), . . . , θ_{k−1}^(i), θ_{k+1}^(i−1), . . . , θ_n^(i−1), x^obs)
    end
end
return θ^(0), . . . , θ^(N)
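As a reminder of the mechanics, a minimal Python example of a Gibbs sampler on a toy bivariate normal target with correlation ρ, where both conditionals are available in closed form (a standard illustration, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_bivariate_normal(rho=0.8, N=5000):
    """Gibbs sampler for (x1, x2) ~ N(0, [[1, rho], [rho, 1]])."""
    chain = np.empty((N, 2))
    x1, x2 = 0.0, 0.0                       # starting point theta^(0)
    cond_sd = np.sqrt(1.0 - rho**2)
    for i in range(N):
        x1 = rng.normal(rho * x2, cond_sd)  # x1 | x2 ~ N(rho*x2, 1 - rho^2)
        x2 = rng.normal(rho * x1, cond_sd)  # x2 | x1 ~ N(rho*x1, 1 - rho^2)
        chain[i] = (x1, x2)
    return chain

chain = gibbs_bivariate_normal()
print(np.corrcoef(chain[1000:].T)[0, 1])    # close to rho after burn-in
```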
Component-wise ABC
Algorithm: Component-wise ABC
Input: observed dataset x^obs, number of iterations N, starting point θ^(0) = (θ_1^(0), . . . , θ_n^(0)), thresholds ε = (ε_1, . . . , ε_n), statistics s_1, . . . , s_n.
for i = 1, . . . , N do
    for j = 1, . . . , n do
        θ_j^(i) ∼ π_{ε_j}(· | x^obs, s_j, θ_1^(i), . . . , θ_{j−1}^(i), θ_{j+1}^(i−1), . . . , θ_n^(i−1))
    end
end
return θ^(0), . . . , θ^(N)
Questions:
• Is there a limiting distribution ν_ε^∞ for the algorithm?
• What is the nature of this limiting distribution?
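In practice each conditional draw is realised by rejection: propose the j-th coordinate, simulate pseudo-data with the other coordinates frozen, and accept on the j-th summary alone. A minimal, model-agnostic Python sketch of this scheme (the prior/simulator interfaces and function names are illustrative assumptions):

```python
import numpy as np

def abc_conditional_step(j, theta, x_obs, prior_j, simulate, s_j, d, eps_j, rng,
                         max_tries=10_000):
    """One ABC draw of theta_j | theta_{-j}: rejection on the j-th summary only."""
    for _ in range(max_tries):
        prop = theta.copy()
        prop[j] = prior_j(theta, rng)      # propose the j-th coordinate, others fixed
        x = simulate(prop, rng)            # simulate pseudo-data given the full vector
        if d(s_j(x), s_j(x_obs)) <= eps_j:
            return prop                    # accepted: theta_j updated
    raise RuntimeError("no acceptance within max_tries; increase eps_j")

def abc_within_gibbs(theta0, x_obs, priors, simulate, stats, d, eps, N, rng):
    """Component-wise ABC: sweep the coordinates, one ABC conditional per step."""
    chain = [np.array(theta0, dtype=float)]
    for _ in range(N):
        theta = chain[-1].copy()
        for j in range(len(theta)):
            theta = abc_conditional_step(j, theta, x_obs, priors[j], simulate,
                                         stats[j], d, eps[j], rng)
        chain.append(theta)
    return np.array(chain)
```

Because each acceptance event only involves the (typically low-dimensional) statistic s_j, the per-step acceptance rate depends on ε_j alone, rather than on a joint ε over all summaries as in vanilla ABC.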
OUTLINE
1 Hierarchical models
2 General case
3 Take home messages
ABC within Gibbs: Hierarchical models
[Graphical model: α on top of µ1, . . . , µn, which generate x1, . . . , xn.]
Hierarchical Bayes models often allow for simplified conditional distributions, thanks to partial independence properties, e.g.,
$$x_j \mid \mu_j \sim \pi(x_j \mid \mu_j), \qquad \mu_j \mid \alpha \overset{\text{i.i.d.}}{\sim} \pi(\mu_j \mid \alpha), \qquad \alpha \sim \pi(\alpha).$$
Algorithm: Component-wise ABC sampler for hierarchical model
Input: observed dataset x^obs, number of iterations N, thresholds ε_α and ε_µ, summary statistics s_α and s_µ.
for i = 1, . . . , N do
    for j = 1, . . . , n do
        µ_j^(i) ∼ π_{ε_µ}(· | x_j^obs, s_µ, α^(i−1))
    end
    α^(i) ∼ π_{ε_α}(· | µ^(i), s_α)
end
(A concrete instantiation of this sampler appears in the toy example below.)
ABC within Gibbs: Hierarchical models
Assumption: n = 1.
Theorem (Clarté et al. [2019])
Assume there exists a non-empty convex set C with positive prior measure such that
$$\kappa_1 = \inf_{s_\alpha(\mu) \in C} \pi(B_{s_\alpha(\mu),\, \varepsilon_\alpha/4}) > 0,$$
$$\kappa_2 = \inf_{\alpha}\, \inf_{s_\alpha(\mu) \in C} \pi_{\varepsilon_\mu}(B_{s_\alpha(\mu),\, 3\varepsilon_\alpha/2} \mid x^{obs}, s_\mu, \alpha) > 0,$$
$$\kappa_3 = \inf_{\alpha} \pi_{\varepsilon_\mu}(s_\alpha(\mu) \in C \mid x^{obs}, s_\mu, \alpha) > 0.$$
Then the Markov chain converges geometrically in total variation distance to a stationary distribution ν_ε^∞, with geometric rate 1 − κ_1 κ_2 κ_3².
If the prior on α is defined on a compact set, then the assumptions are satisfied.
ABC within Gibbs: Hierarchical models
Theorem (Clarté et al. [2019])
Assume that
$$L_0 = \sup_{\varepsilon_\alpha}\, \sup_{\mu, \tilde\mu} \left\|\pi_{\varepsilon_\alpha}(\cdot \mid s_\alpha, \mu) - \pi_0(\cdot \mid s_\alpha, \tilde\mu)\right\|_{TV} < 1/2,$$
$$L_1(\varepsilon_\alpha) = \sup_{\mu} \left\|\pi_{\varepsilon_\alpha}(\cdot \mid s_\alpha, \mu) - \pi_0(\cdot \mid s_\alpha, \mu)\right\|_{TV} \xrightarrow[\varepsilon_\alpha \to 0]{} 0,$$
$$L_2(\varepsilon_\mu) = \sup_{\alpha} \left\|\pi_{\varepsilon_\mu}(\cdot \mid x^{obs}, s_\mu, \alpha) - \pi_0(\cdot \mid x^{obs}, s_\mu, \alpha)\right\|_{TV} \xrightarrow[\varepsilon_\mu \to 0]{} 0.$$
Then,
$$\left\|\nu_\varepsilon^\infty - \nu_0^\infty\right\|_{TV} \leq \frac{L_1(\varepsilon_\alpha) + L_2(\varepsilon_\mu)}{1 - 2L_0} \xrightarrow[\varepsilon \to 0]{} 0.$$
ABC within Gibbs: Hierarchical models
Compatibility issue: ν_0^∞ is the limiting distribution associated to Gibbs conditionals with different acceptance events, e.g., different statistics,
$$\pi(\alpha)\,\pi(s_\alpha(\mu) \mid \alpha) \qquad\text{and}\qquad \pi(\mu)\,f(s_\mu(x^{obs}) \mid \alpha, \mu).$$
Conditionals may then be incompatible, and the limiting distribution
• not a genuine posterior [incoherent use of data]
• unknown [except for a specific version]
• possibly far from a genuine posterior
Proposition (Clarté et al. [2019])
If s_α is jointly sufficient, when the precision ε goes to zero, ABC within Gibbs and ABC have the same limiting distribution.
Hierarchical models: toy example
Model:
$$\alpha \sim \mathcal{U}([0, 20]), \qquad (\mu_1, \dots, \mu_n) \mid \alpha \sim \mathcal{N}(\alpha, 1)^{\otimes n}, \qquad (x_{i,1}, \dots, x_{i,K}) \mid \mu_i \sim \mathcal{N}(\mu_i, 0.1)^{\otimes K}.$$
Numerical experiment:
• n = 20, K = 10;
• pseudo observations generated for α = 1.7;
• algorithms run for a constant budget: N_tot = N × N_ε = 21000.
We look at the estimates for µ1, whose value for the pseudo observations is 3.04.
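To make the sampler concrete, here is a minimal self-contained Python sketch of ABC within Gibbs on this toy model, with sample means as summary statistics; the tolerances, chain length, starting point, and the reading of 0.1 as a variance are illustrative assumptions, not the tuning behind the figures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: alpha ~ U([0, 20]), mu_i | alpha ~ N(alpha, 1),
# x_{i,k} | mu_i ~ N(mu_i, 0.1); summaries are sample means.
n, K, alpha_true = 20, 10, 1.7
mu_true = rng.normal(alpha_true, 1.0, size=n)
x_obs = rng.normal(mu_true[:, None], np.sqrt(0.1), size=(n, K))

def abc_draw(propose, simulate, s_obs, eps):
    """One ABC conditional draw: propose until the simulated summary is within eps."""
    while True:
        theta = propose()
        if abs(simulate(theta) - s_obs) <= eps:
            return theta

N, eps_mu, eps_alpha = 500, 0.05, 0.1
alpha = x_obs.mean()   # pragmatic starting point (avoids huge initial rejection rates)
mu = np.zeros(n)
chain = []
for _ in range(N):
    for i in range(n):  # mu_i | alpha, x_i^obs, with summary mean(x_i)
        mu[i] = abc_draw(lambda: rng.normal(alpha, 1.0),
                         lambda m: rng.normal(m, np.sqrt(0.1), size=K).mean(),
                         x_obs[i].mean(), eps_mu)
    alpha = abc_draw(   # alpha | mu, with summary mean(mu)
        lambda: rng.uniform(0.0, 20.0),
        lambda a: rng.normal(a, 1.0, size=n).mean(),
        mu.mean(), eps_alpha)
    chain.append((alpha, mu[0]))
print(np.mean(chain, axis=0))  # Monte Carlo means of (alpha, mu_1)
```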
Hierarchical models: toy example
[Figure: comparison of the sampled densities of µ1 (left) and α (right) under ABC Gibbs and simple ABC; the dot-dashed line corresponds to the true posterior.]
Hierarchical models: moving average example
[introduction]
Pseudo observations: $x_1^{obs}$ generated for µ1 = (−0.06, −0.22).
[Figure: marginal density of the first coordinate of the first parameter under ABC Gibbs, simple ABC, and the prior, with contour plots of the first parameter under simple ABC and under Gibbs.]
Separation from the prior for identical number of simulations.
Hierarchical models: moving average example
[introduction]
Real dataset: measures of 8GHz daily flux intensity emitted by 7 stellar objects from the NRL GBI website: http://ese.nrl.navy.mil/.
[Lazio et al., 2008]
[Figure: marginal density of the first coordinate of the first parameter under ABC Gibbs, simple ABC, and the prior, with contour plots of the first parameter under simple ABC and under Gibbs.]
Separation from the prior for identical number of simulations.
Hierarchical models: g&k example
Model: the g-and-k distribution is defined through the inverse of its cdf. It is easy to simulate from, but there is no closed-form formula for the pdf:
$$r \in (0, 1) \mapsto A + B\left[1 + 0.8\,\frac{1 - \exp(-g\,\Phi^{-1}(r))}{1 + \exp(-g\,\Phi^{-1}(r))}\right]\left(1 + \Phi^{-1}(r)^2\right)^{k}\,\Phi^{-1}(r).$$
[Graphical model: α on top of A1, . . . , An, which generate x1, . . . , xn; the parameters B, g and k are shared across series.]
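Since the distribution is defined by its quantile function, simulation reduces to pushing uniform (equivalently, standard normal) draws through the formula above; a minimal Python transcription (the parameter values in the usage line are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def r_gk(n, A, B, g, k, rng):
    """Draw n variates from the g-and-k distribution via its quantile function."""
    z = rng.standard_normal(n)  # equal in law to Phi^{-1}(r) with r ~ U(0, 1)
    return (A + B * (1.0 + 0.8 * (1.0 - np.exp(-g * z)) / (1.0 + np.exp(-g * z)))
              * (1.0 + z**2) ** k * z)

x = r_gk(10_000, A=0.0, B=1.0, g=2.0, k=0.5, rng=rng)
print(x.mean(), x.std())
```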
Hierarchical models: g&k example
Assumption: B, g and k known; inference on α and the A_i solely.
[Figure: posterior densities of A1, . . . , A4 and of the hyperparameter α under ABC Gibbs, ABC-SMC, and vanilla ABC.]
OUTLINE
1 Hierarchical models
2 General case
3 Take home messages
ABC within Gibbs: general case
A general two-parameter model: (θ1, θ2) → x.
Algorithm: ABC within Gibbs
for i = 1, . . . , N do
    θ_2^(i) ∼ π_{ε_2}(· | θ_1^(i−1), s_2, x^obs)
    θ_1^(i) ∼ π_{ε_1}(· | θ_2^(i), s_1, x^obs)
end
return (θ_1^(i), θ_2^(i))_{i=2,...,N}
Theorem (Clarté et al. [2019])
Assume that there exists 0 < κ < 1/2 such that
$$\sup_{\theta_1, \tilde\theta_1} \left\|\pi_{\varepsilon_2}(\cdot \mid x^{obs}, s_2, \theta_1) - \pi_{\varepsilon_2}(\cdot \mid x^{obs}, s_2, \tilde\theta_1)\right\|_{TV} = \kappa.$$
The Markov chain then converges geometrically in total variation distance to a stationary distribution ν_ε^∞, with geometric rate 1 − 2κ.
ABC within Gibbs: general case
Additional assumption: θ1 and θ2 are a priori independent.
Theorem (Clarté et al. [2019])
Assume that
$$\kappa_1 = \inf_{\theta_1, \theta_2} \pi(B_{s_1(x^{obs}),\, \varepsilon_1} \mid \theta_1, \theta_2) > 0,$$
$$\kappa_2 = \inf_{\theta_1, \theta_2} \pi(B_{s_2(x^{obs}),\, \varepsilon_2} \mid \theta_1, \theta_2) > 0,$$
$$\kappa_3 = \sup_{\theta_1, \tilde\theta_1, \theta_2} \left\|\pi(\cdot \mid \theta_1, \theta_2) - \pi(\cdot \mid \tilde\theta_1, \theta_2)\right\|_{TV} < 1/2.$$
Then the Markov chain converges in total variation distance to a stationary distribution ν_ε^∞ with geometric rate 1 − κ_1 κ_2 (1 − 2κ_3).
ABC within Gibbs: general case
For both situations, a limiting distribution exists when the thresholds go to 0.
Theorem (Clarté et al. [2019])
Assume that
$$L_0 = \sup_{\varepsilon_2}\, \sup_{\theta_1, \tilde\theta_1} \left\|\pi_{\varepsilon_2}(\cdot \mid x^{obs}, s_2, \theta_1) - \pi_0(\cdot \mid x^{obs}, s_2, \tilde\theta_1)\right\|_{TV} < 1/2,$$
$$L_1(\varepsilon_1) = \sup_{\theta_2} \left\|\pi_{\varepsilon_1}(\cdot \mid x^{obs}, s_1, \theta_2) - \pi_0(\cdot \mid x^{obs}, s_1, \theta_2)\right\|_{TV} \xrightarrow[\varepsilon_1 \to 0]{} 0,$$
$$L_2(\varepsilon_2) = \sup_{\theta_1} \left\|\pi_{\varepsilon_2}(\cdot \mid x^{obs}, s_2, \theta_1) - \pi_0(\cdot \mid x^{obs}, s_2, \theta_1)\right\|_{TV} \xrightarrow[\varepsilon_2 \to 0]{} 0.$$
Then
$$\left\|\nu_\varepsilon^\infty - \nu_0^\infty\right\|_{TV} \leq \frac{L_1(\varepsilon_1) + L_2(\varepsilon_2)}{1 - 2L_0} \xrightarrow[\varepsilon \to 0]{} 0.$$
ABC within Gibbs: general case
Compatibility issue: the general case inherits the compatibility issue already noticed in the hierarchical setting.
Proposition (Clarté et al. [2019])
1. If s_{θ1} and s_{θ2} are conditionally sufficient, the conditionals are compatible and, when the precision goes to zero, ABC within Gibbs and ABC have the same limiting distribution.
2. If π(θ1, θ2) = π(θ1)π(θ2) and s_{θ1} = s_{θ2}, when the precision goes to zero, ABC within Gibbs and ABC have the same limiting distribution.
General case: g&k example
[Figure: posterior densities for parameters A1, . . . , A4 under ABC Gibbs, ABC-SMC, and vanilla ABC.]
General case: g&k example
[Figure: posterior densities for α, B, g and k under ABC Gibbs, ABC-SMC, and vanilla ABC.]
Explicit limiting distribution
For the model
$$x_j \mid \mu_j \sim \pi(x_j \mid \mu_j), \qquad \mu_j \mid \alpha \overset{\text{i.i.d.}}{\sim} \pi(\mu_j \mid \alpha), \qquad \alpha \sim \pi(\alpha),$$
an alternative ABC is based on the artificial joint
$$\tilde\pi(\alpha, \mu \mid x^{obs}) \propto \pi(\alpha)\, q(\mu)\, \underbrace{\int \pi(\tilde\mu \mid \alpha)\, \mathbb{1}\{d(s_\alpha(\mu), s_\alpha(\tilde\mu)) < \varepsilon_\alpha\}\, d\tilde\mu}_{\text{generate a new } \mu} \times \int f(\tilde x \mid \mu)\, \mathbb{1}\{d(s_\mu(x^{obs}), s_\mu(\tilde x)) < \varepsilon_\mu\}\, d\tilde x,$$
with q an arbitrary distribution on µ.
Explicit limiting distribution
This artificial joint induces the full conditionals
$$\tilde\pi(\alpha \mid \mu) \propto \pi(\alpha) \int \pi(\tilde\mu \mid \alpha)\, \mathbb{1}\{d(s_\alpha(\mu), s_\alpha(\tilde\mu)) < \varepsilon_\alpha\}\, d\tilde\mu$$
and
$$\tilde\pi(\mu \mid \alpha, x^{obs}) \propto q(\mu) \int \pi(\tilde\mu \mid \alpha)\, \mathbb{1}\{d(s_\alpha(\mu), s_\alpha(\tilde\mu)) < \varepsilon_\alpha\}\, d\tilde\mu \times \int f(\tilde x \mid \mu)\, \mathbb{1}\{d(s_\mu(x^{obs}), s_\mu(\tilde x)) < \varepsilon_\mu\}\, d\tilde x,$$
which are now compatible, both deriving from this new artificial joint.
Explicit limiting distribution
In sampling terms, the artificial joint amounts to:
• prior simulations of α ∼ π(α) and of µ̃ ∼ π(µ̃ | α) until d(s_α(µ), s_α(µ̃)) < ε_α;
• simulation of µ from the instrumental q(µ), together with auxiliary variables µ̃ and x̃, until both constraints are satisfied (see the sketch below).
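A minimal Python sketch of these two steps, with generic interfaces; every name here is an illustrative assumption, since the slides only specify the acceptance events:

```python
import numpy as np

def step_alpha(mu, prior_alpha, prior_mu, s_alpha, d, eps_alpha, rng):
    """alpha-step: joint prior draws (alpha, mu~) until s_alpha(mu~) matches s_alpha(mu)."""
    while True:
        alpha = prior_alpha(rng)
        mu_aux = prior_mu(alpha, rng)                 # mu~ ~ pi(. | alpha)
        if d(s_alpha(mu), s_alpha(mu_aux)) < eps_alpha:
            return alpha

def step_mu(alpha, x_obs, q_mu, prior_mu, simulate, s_alpha, s_mu, d,
            eps_alpha, eps_mu, rng):
    """mu-step: instrumental draw of mu plus auxiliaries (mu~, x~) until both constraints hold."""
    while True:
        mu = q_mu(rng)                                # instrumental proposal q(mu)
        mu_aux = prior_mu(alpha, rng)                 # mu~ ~ pi(. | alpha)
        x_aux = simulate(mu, rng)                     # x~ ~ f(. | mu)
        if (d(s_alpha(mu), s_alpha(mu_aux)) < eps_alpha
                and d(s_mu(x_obs), s_mu(x_aux)) < eps_mu):
            return mu
```

Alternating these two steps is exactly the Gibbs sampler whose stationary distribution is made explicit on the next slide.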
Explicit limiting distribution
The resulting Gibbs sampler is stationary for the posterior proportional to
$$\pi(\alpha, \mu)\; \underbrace{q(s_\alpha(\mu))}_{\text{projection}}\; \underbrace{f(s_\mu(x^{obs}) \mid \mu)}_{\text{projection}},$$
that is, for the likelihood associated with s_µ(x^obs) and a prior distribution proportional to π(α, µ) q(s_α(µ)) [exact!]
OUTLINE
1 Hierarchical models
2 General case
3 Take home messages
Take home messages
Under conditions specified in the theorems above, we provide theoretical guarantees on the convergence of ABC within Gibbs:
• Result n°1: a limiting distribution ν_ε^∞ exists as the number of iterations grows.
• Result n°2: a limiting distribution ν_0^∞ exists as the thresholds go to 0.
• Result n°3: ν_0^∞ is the posterior distribution π(θ | s(x^obs)).
The method inherits issues from vanilla ABC, namely the choice of the statistics [plus compatibility of the conditionals].
In practice, ABC within Gibbs exhibits better performance than vanilla ABC and SMC-ABC [even when the conditions are not satisfied].
Thank you!
ABC workshops
[A]BayesComp, Gainesville, Florida, Jan 7-10 2020
ABC in Grenoble, France, March 18-19 2020
ISBA(BC), Kunming, China, June 26-30 2020
ABC in Longyearbyen, Svalbard, April 8-9 2021 [??]
Bibliography
M. A. Beaumont, W. Zhang, and D. J. Balding. Approximate Bayesian Computation in Population Genetics. Genetics, 162(4):2025–2035, 2002.
G. Biau, F. Cérou, and A. Guyader. New insights into Approximate Bayesian Computation. Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, in press, 2013.
G. Clarté, C. P. Robert, R. Ryder, and J. Stoehr. Component-wise approximate Bayesian computation via Gibbs-like steps. arXiv preprint arXiv:1905.13599, 2019.
P. Fearnhead and D. Prangle. Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 74(3):419–474, 2012.
S. Geman and D. Geman. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721–741, 1984.
T. J. W. Lazio, E. B. Waltman, F. D. Ghigo, R. Fiedler, R. S. Foster, and K. J. Johnston. A Dual-Frequency, Multiyear Monitoring Program of Compact Radio Sources. The Astrophysical Journal Supplement Series, 136:265, December 2008. doi: 10.1086/322531.
P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26):15324–15328, 2003.
D. Prangle, P. Fearnhead, M. P. Cox, P. J. Biggs, and N. P. French. Semi-automatic selection of summary statistics for ABC model choice. Statistical Applications in Genetics and Molecular Biology, 13(1):67–82, 2014.
L. Raynal, J.-M. Marin, P. Pudlo, M. Ribatet, C. P. Robert, and A. Estoup. ABC random forests for Bayesian parameter inference. Bioinformatics, 2018. doi: 10.1093/bioinformatics/bty867.
S. Tavaré, D. J. Balding, R. C. Griffiths, and P. Donnelly. Inferring Coalescence Times From DNA Sequence Data. Genetics, 145(2):505–518, 1997.
T. Toni, D. Welch, N. Strelkowa, A. Ipsen, and M. P. H. Stumpf. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. Journal of the Royal Society Interface, 6(31):187–202, 2008.
R. D. Wilkinson. Approximate Bayesian computation (ABC) gives exact results under the assumption of model error. Statistical Applications in Genetics and Molecular Biology, 12(2):129–141, 2013.
Mais conteúdo relacionado

Mais procurados

Convergence of ABC methods
Convergence of ABC methodsConvergence of ABC methods
Convergence of ABC methodsChristian Robert
 
Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Christian Robert
 
Multiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsMultiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsChristian Robert
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)Christian Robert
 
Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Christian Robert
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distancesChristian Robert
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussionChristian Robert
 
ABC short course: survey chapter
ABC short course: survey chapterABC short course: survey chapter
ABC short course: survey chapterChristian Robert
 
Approximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forestsApproximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forestsChristian Robert
 
ABC short course: final chapters
ABC short course: final chaptersABC short course: final chapters
ABC short course: final chaptersChristian Robert
 
ABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsChristian Robert
 
ABC short course: model choice chapter
ABC short course: model choice chapterABC short course: model choice chapter
ABC short course: model choice chapterChristian Robert
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationChristian Robert
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannolli0601
 
An overview of Bayesian testing
An overview of Bayesian testingAn overview of Bayesian testing
An overview of Bayesian testingChristian Robert
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixturesChristian Robert
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modulesChristian Robert
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes FactorsChristian Robert
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?Christian Robert
 

Mais procurados (20)

Convergence of ABC methods
Convergence of ABC methodsConvergence of ABC methods
Convergence of ABC methods
 
Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Multiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsMultiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximations
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distances
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
ABC short course: survey chapter
ABC short course: survey chapterABC short course: survey chapter
ABC short course: survey chapter
 
Approximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forestsApproximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forests
 
ABC short course: final chapters
ABC short course: final chaptersABC short course: final chapters
ABC short course: final chapters
 
ABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified models
 
ABC short course: model choice chapter
ABC short course: model choice chapterABC short course: model choice chapter
ABC short course: model choice chapter
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimation
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmann
 
An overview of Bayesian testing
An overview of Bayesian testingAn overview of Bayesian testing
An overview of Bayesian testing
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modules
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 

Semelhante a ABC-Gibbs

Stratified Monte Carlo and bootstrapping for approximate Bayesian computation
Stratified Monte Carlo and bootstrapping for approximate Bayesian computationStratified Monte Carlo and bootstrapping for approximate Bayesian computation
Stratified Monte Carlo and bootstrapping for approximate Bayesian computationUmberto Picchini
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelMatt Moores
 
Workshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsWorkshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsChristian Robert
 
Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceChristian Robert
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distancesChristian Robert
 
Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Christian Robert
 
ABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space modelsABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space modelsUmberto Picchini
 
Stratified sampling and resampling for approximate Bayesian computation
Stratified sampling and resampling for approximate Bayesian computationStratified sampling and resampling for approximate Bayesian computation
Stratified sampling and resampling for approximate Bayesian computationUmberto Picchini
 
GonzalezGinestetResearchDay2016
GonzalezGinestetResearchDay2016GonzalezGinestetResearchDay2016
GonzalezGinestetResearchDay2016Pablo Ginestet
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinChristian Robert
 
Differential Equations Assignment Help
Differential Equations Assignment HelpDifferential Equations Assignment Help
Differential Equations Assignment HelpMaths Assignment Help
 
Semi-automatic ABC: a discussion
Semi-automatic ABC: a discussionSemi-automatic ABC: a discussion
Semi-automatic ABC: a discussionChristian Robert
 
Markov chain Monte Carlo methods and some attempts at parallelizing them
Markov chain Monte Carlo methods and some attempts at parallelizing themMarkov chain Monte Carlo methods and some attempts at parallelizing them
Markov chain Monte Carlo methods and some attempts at parallelizing themPierre Jacob
 

Semelhante a ABC-Gibbs (20)

Stratified Monte Carlo and bootstrapping for approximate Bayesian computation
Stratified Monte Carlo and bootstrapping for approximate Bayesian computationStratified Monte Carlo and bootstrapping for approximate Bayesian computation
Stratified Monte Carlo and bootstrapping for approximate Bayesian computation
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts model
 
Edinburgh, Bayes-250
Edinburgh, Bayes-250Edinburgh, Bayes-250
Edinburgh, Bayes-250
 
Workshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsWorkshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with Applications
 
Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distances
 
Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]
 
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
 
Intro to ABC
Intro to ABCIntro to ABC
Intro to ABC
 
ABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space modelsABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space models
 
Multivariate Methods Assignment Help
Multivariate Methods Assignment HelpMultivariate Methods Assignment Help
Multivariate Methods Assignment Help
 
Stratified sampling and resampling for approximate Bayesian computation
Stratified sampling and resampling for approximate Bayesian computationStratified sampling and resampling for approximate Bayesian computation
Stratified sampling and resampling for approximate Bayesian computation
 
Linear Algebra Assignment help
Linear Algebra Assignment helpLinear Algebra Assignment help
Linear Algebra Assignment help
 
Report
ReportReport
Report
 
Automatic bayesian cubature
Automatic bayesian cubatureAutomatic bayesian cubature
Automatic bayesian cubature
 
GonzalezGinestetResearchDay2016
GonzalezGinestetResearchDay2016GonzalezGinestetResearchDay2016
GonzalezGinestetResearchDay2016
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
 
Differential Equations Assignment Help
Differential Equations Assignment HelpDifferential Equations Assignment Help
Differential Equations Assignment Help
 
Semi-automatic ABC: a discussion
Semi-automatic ABC: a discussionSemi-automatic ABC: a discussion
Semi-automatic ABC: a discussion
 
Markov chain Monte Carlo methods and some attempts at parallelizing them
Markov chain Monte Carlo methods and some attempts at parallelizing themMarkov chain Monte Carlo methods and some attempts at parallelizing them
Markov chain Monte Carlo methods and some attempts at parallelizing them
 

Mais de Christian Robert

How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?Christian Robert
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Christian Robert
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Christian Robert
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihoodChristian Robert
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsChristian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferenceChristian Robert
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018Christian Robert
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimationChristian Robert
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerChristian Robert
 
comments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle samplercomments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle samplerChristian Robert
 

Mais de Christian Robert (15)

discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimation
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like sampler
 
comments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle samplercomments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle sampler
 

Último

Cyanide resistant respiration pathway.pptx
Cyanide resistant respiration pathway.pptxCyanide resistant respiration pathway.pptx
Cyanide resistant respiration pathway.pptxSilpa
 
FAIRSpectra - Enabling the FAIRification of Spectroscopy and Spectrometry
FAIRSpectra - Enabling the FAIRification of Spectroscopy and SpectrometryFAIRSpectra - Enabling the FAIRification of Spectroscopy and Spectrometry
FAIRSpectra - Enabling the FAIRification of Spectroscopy and SpectrometryAlex Henderson
 
development of diagnostic enzyme assay to detect leuser virus
development of diagnostic enzyme assay to detect leuser virusdevelopment of diagnostic enzyme assay to detect leuser virus
development of diagnostic enzyme assay to detect leuser virusNazaninKarimi6
 
Use of mutants in understanding seedling development.pptx
Use of mutants in understanding seedling development.pptxUse of mutants in understanding seedling development.pptx
Use of mutants in understanding seedling development.pptxRenuJangid3
 
Role of AI in seed science Predictive modelling and Beyond.pptx
Role of AI in seed science  Predictive modelling and  Beyond.pptxRole of AI in seed science  Predictive modelling and  Beyond.pptx
Role of AI in seed science Predictive modelling and Beyond.pptxArvind Kumar
 
Module for Grade 9 for Asynchronous/Distance learning
Module for Grade 9 for Asynchronous/Distance learningModule for Grade 9 for Asynchronous/Distance learning
Module for Grade 9 for Asynchronous/Distance learninglevieagacer
 
Bhiwandi Bhiwandi ❤CALL GIRL 7870993772 ❤CALL GIRLS ESCORT SERVICE In Bhiwan...
Bhiwandi Bhiwandi ❤CALL GIRL 7870993772 ❤CALL GIRLS  ESCORT SERVICE In Bhiwan...Bhiwandi Bhiwandi ❤CALL GIRL 7870993772 ❤CALL GIRLS  ESCORT SERVICE In Bhiwan...
Bhiwandi Bhiwandi ❤CALL GIRL 7870993772 ❤CALL GIRLS ESCORT SERVICE In Bhiwan...Monika Rani
 
Climate Change Impacts on Terrestrial and Aquatic Ecosystems.pptx
Climate Change Impacts on Terrestrial and Aquatic Ecosystems.pptxClimate Change Impacts on Terrestrial and Aquatic Ecosystems.pptx
Climate Change Impacts on Terrestrial and Aquatic Ecosystems.pptxDiariAli
 
FAIRSpectra - Enabling the FAIRification of Analytical Science
FAIRSpectra - Enabling the FAIRification of Analytical ScienceFAIRSpectra - Enabling the FAIRification of Analytical Science
FAIRSpectra - Enabling the FAIRification of Analytical ScienceAlex Henderson
 
Selaginella: features, morphology ,anatomy and reproduction.
Selaginella: features, morphology ,anatomy and reproduction.Selaginella: features, morphology ,anatomy and reproduction.
Selaginella: features, morphology ,anatomy and reproduction.Silpa
 
Grade 7 - Lesson 1 - Microscope and Its Functions
Grade 7 - Lesson 1 - Microscope and Its FunctionsGrade 7 - Lesson 1 - Microscope and Its Functions
Grade 7 - Lesson 1 - Microscope and Its FunctionsOrtegaSyrineMay
 
Human genetics..........................pptx
Human genetics..........................pptxHuman genetics..........................pptx
Human genetics..........................pptxSilpa
 
biology HL practice questions IB BIOLOGY
biology HL practice questions IB BIOLOGYbiology HL practice questions IB BIOLOGY
biology HL practice questions IB BIOLOGY1301aanya
 
PSYCHOSOCIAL NEEDS. in nursing II sem pptx
PSYCHOSOCIAL NEEDS. in nursing II sem pptxPSYCHOSOCIAL NEEDS. in nursing II sem pptx
PSYCHOSOCIAL NEEDS. in nursing II sem pptxSuji236384
 
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptx
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptxTHE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptx
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptxANSARKHAN96
 
CYTOGENETIC MAP................ ppt.pptx
CYTOGENETIC MAP................ ppt.pptxCYTOGENETIC MAP................ ppt.pptx
CYTOGENETIC MAP................ ppt.pptxSilpa
 
Porella : features, morphology, anatomy, reproduction etc.
Porella : features, morphology, anatomy, reproduction etc.Porella : features, morphology, anatomy, reproduction etc.
Porella : features, morphology, anatomy, reproduction etc.Silpa
 
Thyroid Physiology_Dr.E. Muralinath_ Associate Professor
Thyroid Physiology_Dr.E. Muralinath_ Associate ProfessorThyroid Physiology_Dr.E. Muralinath_ Associate Professor
Thyroid Physiology_Dr.E. Muralinath_ Associate Professormuralinath2
 

Último (20)

Cyanide resistant respiration pathway.pptx
Cyanide resistant respiration pathway.pptxCyanide resistant respiration pathway.pptx
Cyanide resistant respiration pathway.pptx
 
FAIRSpectra - Enabling the FAIRification of Spectroscopy and Spectrometry
FAIRSpectra - Enabling the FAIRification of Spectroscopy and SpectrometryFAIRSpectra - Enabling the FAIRification of Spectroscopy and Spectrometry
FAIRSpectra - Enabling the FAIRification of Spectroscopy and Spectrometry
 
development of diagnostic enzyme assay to detect leuser virus
development of diagnostic enzyme assay to detect leuser virusdevelopment of diagnostic enzyme assay to detect leuser virus
development of diagnostic enzyme assay to detect leuser virus
 
Use of mutants in understanding seedling development.pptx
Use of mutants in understanding seedling development.pptxUse of mutants in understanding seedling development.pptx
Use of mutants in understanding seedling development.pptx
 
Role of AI in seed science Predictive modelling and Beyond.pptx
Role of AI in seed science  Predictive modelling and  Beyond.pptxRole of AI in seed science  Predictive modelling and  Beyond.pptx
Role of AI in seed science Predictive modelling and Beyond.pptx
 
Module for Grade 9 for Asynchronous/Distance learning
Module for Grade 9 for Asynchronous/Distance learningModule for Grade 9 for Asynchronous/Distance learning
Module for Grade 9 for Asynchronous/Distance learning
 
Bhiwandi Bhiwandi ❤CALL GIRL 7870993772 ❤CALL GIRLS ESCORT SERVICE In Bhiwan...
Bhiwandi Bhiwandi ❤CALL GIRL 7870993772 ❤CALL GIRLS  ESCORT SERVICE In Bhiwan...Bhiwandi Bhiwandi ❤CALL GIRL 7870993772 ❤CALL GIRLS  ESCORT SERVICE In Bhiwan...
Bhiwandi Bhiwandi ❤CALL GIRL 7870993772 ❤CALL GIRLS ESCORT SERVICE In Bhiwan...
 
Site Acceptance Test .
Site Acceptance Test                    .Site Acceptance Test                    .
Site Acceptance Test .
 
Climate Change Impacts on Terrestrial and Aquatic Ecosystems.pptx
Climate Change Impacts on Terrestrial and Aquatic Ecosystems.pptxClimate Change Impacts on Terrestrial and Aquatic Ecosystems.pptx
Climate Change Impacts on Terrestrial and Aquatic Ecosystems.pptx
 
FAIRSpectra - Enabling the FAIRification of Analytical Science
FAIRSpectra - Enabling the FAIRification of Analytical ScienceFAIRSpectra - Enabling the FAIRification of Analytical Science
FAIRSpectra - Enabling the FAIRification of Analytical Science
 
Selaginella: features, morphology ,anatomy and reproduction.
Selaginella: features, morphology ,anatomy and reproduction.Selaginella: features, morphology ,anatomy and reproduction.
Selaginella: features, morphology ,anatomy and reproduction.
 
Grade 7 - Lesson 1 - Microscope and Its Functions
Grade 7 - Lesson 1 - Microscope and Its FunctionsGrade 7 - Lesson 1 - Microscope and Its Functions
Grade 7 - Lesson 1 - Microscope and Its Functions
 
Human genetics..........................pptx
Human genetics..........................pptxHuman genetics..........................pptx
Human genetics..........................pptx
 
biology HL practice questions IB BIOLOGY
biology HL practice questions IB BIOLOGYbiology HL practice questions IB BIOLOGY
biology HL practice questions IB BIOLOGY
 
PSYCHOSOCIAL NEEDS. in nursing II sem pptx
PSYCHOSOCIAL NEEDS. in nursing II sem pptxPSYCHOSOCIAL NEEDS. in nursing II sem pptx
PSYCHOSOCIAL NEEDS. in nursing II sem pptx
 
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptx
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptxTHE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptx
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptx
 
CYTOGENETIC MAP................ ppt.pptx
CYTOGENETIC MAP................ ppt.pptxCYTOGENETIC MAP................ ppt.pptx
CYTOGENETIC MAP................ ppt.pptx
 
Porella : features, morphology, anatomy, reproduction etc.
Porella : features, morphology, anatomy, reproduction etc.Porella : features, morphology, anatomy, reproduction etc.
Porella : features, morphology, anatomy, reproduction etc.
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Thyroid Physiology_Dr.E. Muralinath_ Associate Professor
Thyroid Physiology_Dr.E. Muralinath_ Associate ProfessorThyroid Physiology_Dr.E. Muralinath_ Associate Professor
Thyroid Physiology_Dr.E. Muralinath_ Associate Professor
 

ABC-Gibbs

  • 1. Component-wise approximate Bayesian computation via Gibbs-like steps Christian P. Robert(1,2) Joint work with Grégoire Clarté(1) , Robin Ryder(1) , Julien Stoehr(1) (1) Université Paris-Dauphine, (2) University of Warwick Université Paris-Dauphine Approximate Bayesian Computation @ Clermont
  • 2. ABC postdoc positions 2 post-doc positions open with the ABSint ANR research grant: Focus on approximate Bayesian techniques like ABC, variational Bayes, PAC-Bayes, Bayesian non-parametrics, scalable MCMC, and related topics. A potential direction of research would be the derivation of new Bayesian tools for model checking in such c omplex environments. Terms: up to 24 months, no teaching duty attached, primarily located in Université Paris-Dauphine, with supported periods in Oxford (J. Rousseau) [barring no-deal Brexit!] and visits to Mont- pellier (J.-M. Marin). No hard deadline. If interested, send application to me: bayesianstatistics@gmail.com
  • 3. Approximate Bayesian computation (ABC) ABC is a computational method which stemmed from population ge- netics models about 20 years ago to deal with generative intractable distribution. [Tavaré et al., 1997; Beaumont et al., 2002] Settings of interest: the likelihood function f(x | θ) does not admit a closed form as a function of θ and/or is computationally too costly. 1. Model relying on a latent process z ∈ Z f(x | θ) = Z f(y, z | θ)µ(dz). 2. Model with intractable normalising constant f(x | θ) = 1 Z(θ) q(x | θ), where Z(θ) = X q(x | θ)µ(dx).
  • 4. Approximate Bayesian computation (ABC) Bayesian settings: the target is π(θ | xobs ) ∝ π(θ)f(xobs | θ). Algorithm: Vanilla ABC Input: observed dataset xobs , number of iterations N, threshold ε, summary statistic s. for i = 1, . . . , N do θi ∼ π(·) xi ∼ f(· | θi) end return θi d(s(xobs ), s(xi)) ≤ ε s(xobs) ε (θi, S(xi))
  • 5. Approximate Bayesian computation (ABC) Bayesian settings: the target is π(θ | xobs ) ∝ π(θ)f(xobs | θ). Algorithm: Vanilla ABC Input: observed dataset xobs , number of iterations N, threshold ε, summary statistic s. for i = 1, . . . , N do θi ∼ π(·) xi ∼ f(· | θi) end return θi d(s(xobs ), s(xi)) ≤ ε s(xobs) ε (θi, S(xi)) Ouput: distributed according to π(θ)Pθ d(S(xobs ), S(x)) < ε ∝ π(θ | d(S(xobs ), S(x)) < ε) ∝ πε(θ | s, xob
  • 6. Approximate Bayesian computation (ABC) Two particular situations: π∞(θ | s, xobs ) ∝ π(θ) and π0(θ | s, xobs ) ∝ π(θ | s(xobs ))= π(θ | xobs ) Some difficulties raised by the vanilla version: Calibration of the threshold ε: from a regression or a k-nearest neighbour perspective. [Beaumont et al., 2002; Wilkinson, 2013; Biau et al., 2013] Selection of the summary statistic S: advances consider semi-automatic procedure using a pilot-run ABC or random forest methodology. [Fearnhead and Prangle, 2012; Prangle et al., 2014; Raynal et al., 2018] Simulating from the prior is often poor in efficiency: solutions consist in modifying the proposal distribution on θ to increase the density of x’s within the vicinity of y. [Marjoram et al., 2003; Toni et al., 2008]
  • 7. A first example : hierarchical moving average model α µ1 µ2 µn. . . x1 x2 xn. . . σ σ1 σ2 σn. . . First parameter hierarchy: α = (α1, α2, α3) ∼ E(1)⊗3 ;. Independently for each i ∈ {1, . . . , n}, (βi,1, βi,2, βi,3) ∼ Dir(α1, α2, α3); µi = (βi,1 − βi,2, 2(βi,1 + βi,2) − 1). Second parameter hierarchy: σ = (σ1, σ2) ∼ C+ (1)⊗2 . Independently for each i ∈ {1, . . . , n}, σi ∼ IG(σ1, σ2). Model for xi: independently for each i ∈ {1, . . . , n}, xi ∼ MA2(µi, σi), i.e., for all j in N xi,j = yj + µi,1yj−1 + µi,2yj−2 , with yj ∼ N(0, σ2 i ).
  • 8. A first example : toy dataset Settings: n = 5 times series of length T = 100 hierarchical model with 13 parameters. Figure: ABC posterior distribu- tion of µ1,1 along with the prior distribution (black line). Size of ABC reference table: N = 5.5 · 106 . ABC posterior sample size: 1000.
  • 9. A first example : toy dataset Settings: n = 5 times series of length T = 100 hierarchical model with 13 parameters. Figure: ABC posterior distribu- tion of µ1,1 along with the prior distribution (black line). Size of ABC reference table: N = 5.5 · 106 . ABC posterior sample size: 1000. Not enough simulations to reach a decent threshold. Not enough time to produce enough simulations.
  • 10. The Gibbs Sampler Our idea: combining ABC with Gibbs sampler in order to improve its ability to efficiently explorer Θ ⊂ Rn when the number n of pa- rameters increases.
  • 11. The Gibbs Sampler Our idea: combining ABC with Gibbs sampler in order to improve its ability to efficiently explorer Θ ⊂ Rn when the number n of pa- rameters increases. The Gibbs Sampler produces a Markov chain with a target joint dis- tribution π by alternatively sampling from each of its conditionals. [Geman and Geman, 1984] Algorithm: Gibbs sampler Input: observed dataset xobs , number of iterations N, starting point θ(0) = (θ (0) 1 , . . . , θ (0) n ). for i = 1, . . . , N do for k = 1, . . . , n do θ (i) k ∼ π · | θ (i) 1 , . . . , θ (i) k−1, θ (i−1) k+1 , . . . , θ (i−1) n , xobs end end return θ(0) , . . . , θ(N)
  • 12. Component-wise ABC Algorithm: Component-wise ABC Input: observed dataset xobs , number of iterations N, starting point θ(0) = (θ (0) 1 , . . . , θ (0) n ), threshold ε = (ε1, . . . , εn), statistics s1, . . . , sn. for i = 1, . . . , N do for j = 1, . . . , n do θ (i) j ∼ πεj (· | xobs , sj, θ (i) 1 , . . . , θ (i) j−1, θ (i−1) j+1 , . . . , θ (i−1) n ) end end return θ(0) , . . . , θ(N)
  • 13. Component-wise ABC Algorithm: Component-wise ABC Input: observed dataset xobs , number of iterations N, starting point θ(0) = (θ (0) 1 , . . . , θ (0) n ), threshold ε = (ε1, . . . , εn), statistics s1, . . . , sn. for i = 1, . . . , N do for j = 1, . . . , n do θ (i) j ∼ πεj (· | xobs , sj, θ (i) 1 , . . . , θ (i) j−1, θ (i−1) j+1 , . . . , θ (i−1) n ) end end return θ(0) , . . . , θ(N) Questions: Is there a limiting distribution ν∞ ε to the algorithm? What is the nature of this limiting distribution?
  • 14. OUTLINE 1 Hierarchical models 2 General case 3 Take home messages
  • 15. OUTLINE 1 Hierarchical models 2 General case 3 Take home messages
  • 16. ABC within Gibbs: Hierarchical models α µ1 µ2 µn. . . x1 x2 xn. . . Hierarchical Bayes models: often allow for simplified conditional distributions thanks to partial independence properties, e.g., xj | µj ∼ π(xj | µj), µj | α i.i.d. ∼ π(µj | α), α ∼ π(α). Algorithm: Component-wise ABC sampler for hierarchical model Input: observed dataset xobs , number of iterations N, thresholds εα and εµ, summary statistics sα and sµ. for i = 1, . . . , N do for j = 1, . . . , n do µ (i) j ∼ πεµ (· | xobs j , sµ, α(i−1) ) end α(i) ∼ πεα (· | µ(i) , sα) end
  • 17. ABC within Gibbs: Hierarchical models Assumption: n = 1. Theorem (Clarté et al. [2019]) Assume there exists a non-empty convex set C with positive prior measure such that κ1 = inf sα(µ)∈C π(Bsα(µ), α/4) > 0 , κ2 = inf α inf sα(µ)∈C πεµ (Bsα(µ),3 α/2 | xobs , sµ, α) > 0 , κ3 = inf α πεµ (sα(µ) ∈ C | xobs , sµ, α) > 0 , Then the Markov chain converges geometrically in total variation distance to a stationary distribution ν∞ ε , with geometric rate 1 − κ1κ2κ2 3. If the prior on α is defined on a compact set, then the assumptions are satisfied.
  • 18. ABC within Gibbs: Hierarchical models Theorem (Clarté et al. [2019]) Assume that, L0 = sup εα sup µ, ˜µ πεα (· | sα, µ) − π0(· | sα, ˜µ) TV < 1/2 , L1(εα) = sup µ πεα (· | sα, µ) − π0(· | sα, µ) TV −−−−→ εα→0 0 L2(εµ) = sup α πεµ (· | xobs , sµ, α) − π0(· | xobs , sµ, α) TV −−−−→ εµ→0 0 . Then, ν∞ ε − ν∞ 0 TV ≤ L1(εα) + L2(εµ) 1 − 2L0 −−−→ ε→0 0.
  • 19. ABC within Gibbs: Hierarchical models Compatibility issue: ν∞ 0 is the limiting distribution associated to Gibbs conditionals with different acceptance events, e.g., different statis- tics π(α)π(sα(µ) | α) and π(µ)f(sµ(xobs ) | α, µ). Conditionals may then be incompatible and the limiting distribution not a genuine posterior [incoherent use of data] unknown [except for a specific version] possibly far from a genuine posterior Proposition (Clarté et al. [2019]) If sα is jointly sufficient, when the precision ε goes to zero, ABC within Gibbs and ABC have the same limiting distribution.
  • 20. Hierarchical models: toy example Model: α ∼ U([0 ; 20]), (µ1, . . . , µn) | α ∼ N(α, 1)⊗n , (xi,1, . . . , xi,K) | µi ∼ N (µi, 0.1) ⊗K . Numerical experiment: n = 20, K = 10, Pseudo observation generated for α = 1.7, Algorithms runs for a constant budget: Ntot = N × Nε = 21000. We look at the estimates for µ1 whose value for the pseudo obser- vations is 3.04.
  • 21. Hierarchical models: toy example Figure: comparison of the sampled densities of µ1 (left) and α (right) [dot-dashed line corresponds to the true posterior] 0 1 2 3 4 0 2 4 6 0.0 0.5 1.0 1.5 2.0 −4 −2 0 2 4 Method ABC Gibbs Simple ABC
  • 22. Hierarchical models: moving average example [introduction] Pseudo observations: xobs 1 generated for µ1 = (−0.06, −0.22). 0 1 2 3 −1.0 −0.5 0.0 0.5 1.0 value density type ABCGibbs ABCsimple prior 1st parameter, 1st coordinate −1.0 −0.5 0.0 0.5 1.0 −1.0 −0.5 0.0 0.5 1.0 b1 b2 0.2 0.4 0.6 0.8 level 1st parameter simple −1.0 −0.5 0.0 0.5 1.0 −1.0 −0.5 0.0 0.5 1.0 b1b2 2.5 5.0 7.5 10.0 level 1st parameter gibbs Separation from the prior for identical number of simulations.
  • 23. Hierarchical models: moving average example [introduction] Real dataset: measures of 8GHz daily flux intensity emitted by 7 stellar objects from the NRL GBI website: http://ese.nrl.navy. mil/. [Lazio et al., 2008] 0 1 2 3 −1.0 −0.5 0.0 0.5 1.0 value density type ABCGibbs ABCsimple prior 1st parameter, 1st coordinate −1.0 −0.5 0.0 0.5 1.0 −1.0 −0.5 0.0 0.5 1.0 b1 b2 2 4 6 8 level 1st parameter gibbs Separation from the prior for identical number of simulations.
  • 24. Hierarchical models: moving average example [introduction] Real dataset: measures of 8GHz daily flux intensity emitted by 7 stellar objects from the NRL GBI website: http://ese.nrl.navy. mil/. [Lazio et al., 2008] 0 1 2 3 −1.0 −0.5 0.0 0.5 1.0 value density type ABCGibbs ABCsimple prior 1st parameter, 1st coordinate −1.0 −0.5 0.0 0.5 1.0 −1.0 −0.5 0.0 0.5 1.0 b1 b2 0.2 0.4 0.6 level 1st parameter simple −1.0 −0.5 0.0 0.5 1.0 −1.0 −0.5 0.0 0.5 1.0 b1 b2 2 4 6 8 level 1st parameter gibbs Separation from the prior for identical number of simulations.
  • 25. Hierarchical models: g&k example Model: the g-and-k distribution is defined through the inverse of its cdf. It is easy to simulate from but there is no closed-form formula for the pdf: r ∈ (0, 1) → A+B 1 + 0.8 1 − exp(−gΦ−1 (r) 1 + exp(−gΦ−1(r) 1 + Φ−1 (r)2 k Φ−1 (r). α A1 A2 ... An x1 x2 xn ... B g k
  • 26. Hierarchical models: g&k example Assumption: B, g and k known, inference on α and Ai solely. 1 2 3 4 Hyperparameter −7.5 −7.0 −6.5 −6.0 −5.5 −5.0−7.5 −7.0 −6.5 −6.0 −5.5 −5.0−7.5 −7.0 −6.5 −6.0 −5.5 −5.0−7.5 −7.0 −6.5 −6.0 −5.5 −5.0−7.5 −7.0 −6.5 −6.0 −5.5 −5.0 0 2 4 6 8 value density Method ABC Gibbs ABC−SMC vanilla ABC
  • 27. OUTLINE 1 Hierarchical models 2 General case 3 Take home messages
  • 28–29. ABC within Gibbs: general case
A general two-parameter model: (θ1, θ2) → x
Algorithm: ABC within Gibbs
  for i = 1, . . . , N do
    θ2^(i) ∼ πε2(· | θ1^(i−1), s2, xobs)
    θ1^(i) ∼ πε1(· | θ2^(i), s1, xobs)
  end
  return (θ1^(i), θ2^(i))i=2,...,N
Theorem (Clarté et al. [2019])
Assume that there exists 0 < κ < 1/2 such that
  supθ1,θ̃1 ‖πε2(· | xobs, s2, θ1) − πε2(· | xobs, s2, θ̃1)‖TV = κ.
The Markov chain then converges geometrically in total variation distance to a stationary distribution ν∞ε, with geometric rate 1 − 2κ.
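A generic skeleton of this two-block sampler, as a sketch only: the conditional prior samplers prior1 and prior2, the generative model simulate, and the summaries s1 and s2 are user-supplied, hypothetical callables, and each conditional is implemented as a plain rejection sampler:

```python
import numpy as np

def abc_gibbs_2block(rng, x_obs, s1, s2, prior1, prior2, simulate,
                     eps1, eps2, theta1_0, theta2_0, N=1000):
    """Two-block ABC within Gibbs (slide 28): each conditional proposes
    theta_j given the other block and accepts when the matching summary
    of a pseudo-dataset falls within eps_j of the observed summary.
    prior1(rng, theta2), prior2(rng, theta1) and simulate(rng, theta1, theta2)
    are assumed, user-supplied callables."""
    t1, t2 = theta1_0, theta2_0
    out = []
    for _ in range(N):
        # theta2 | theta1: accept on summary s2
        while True:
            cand = prior2(rng, t1)
            if np.linalg.norm(np.asarray(s2(simulate(rng, t1, cand)))
                              - np.asarray(s2(x_obs))) < eps2:
                t2 = cand
                break
        # theta1 | theta2: accept on summary s1
        while True:
            cand = prior1(rng, t2)
            if np.linalg.norm(np.asarray(s1(simulate(rng, cand, t2)))
                              - np.asarray(s1(x_obs))) < eps1:
                t1 = cand
                break
        out.append((t1, t2))
    return out
```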
  • 30. ABC within Gibbs: general case
Additional assumption: θ1 and θ2 are a priori independent.
Theorem (Clarté et al. [2019])
Assume that
  κ1 = infθ1,θ2 π(Bs1(xobs),ε1 | θ1, θ2) > 0,
  κ2 = infθ1,θ2 π(Bs2(xobs),ε2 | θ1, θ2) > 0,
  κ3 = supθ1,θ̃1,θ2 ‖π(· | θ1, θ2) − π(· | θ̃1, θ2)‖TV < 1/2.
Then the Markov chain converges in total variation distance to a stationary distribution ν∞ε with geometric rate 1 − κ1κ2(1 − 2κ3).
  • 31. ABC within Gibbs: general case
For both situations, a limiting distribution exists when the thresholds go to 0.
Theorem (Clarté et al. [2019])
Assume that
  L0 = supε2 supθ1,θ̃1 ‖πε2(· | xobs, s2, θ1) − π0(· | xobs, s2, θ̃1)‖TV < 1/2,
  L1(ε1) = supθ2 ‖πε1(· | xobs, s1, θ2) − π0(· | xobs, s1, θ2)‖TV → 0 as ε1 → 0,
  L2(ε2) = supθ1 ‖πε2(· | xobs, s2, θ1) − π0(· | xobs, s2, θ1)‖TV → 0 as ε2 → 0.
Then
  ‖ν∞ε − ν∞0‖TV ≤ [L1(ε1) + L2(ε2)] / (1 − 2L0) → 0 as ε → 0.
  • 32. ABC within Gibbs: general case
Compatibility issue: the general case inherits the compatibility issue already noticed in the hierarchical setting.
Proposition (Clarté et al. [2019])
  1. If sθ1 and sθ2 are conditionally sufficient, the conditionals are compatible and, when the precision goes to zero, ABC within Gibbs and ABC have the same limiting distribution.
  2. If π(θ1, θ2) = π(θ1)π(θ2) and sθ1 = sθ2, when the precision goes to zero, ABC within Gibbs and ABC have the same limiting distribution.
  • 33. General case: g&k example
[Figure: posterior densities for parameters A1, . . . , A4, comparing ABC Gibbs, ABC-SMC and vanilla ABC]
  • 34. General case: g&k example
[Figure: posterior densities for α, B, g and k, comparing ABC Gibbs, ABC-SMC and vanilla ABC]
  • 35. Explicit limiting distribution
For model
  xj | µj ∼ π(xj | µj), µj | α i.i.d. ∼ π(µj | α), α ∼ π(α),
alternative ABC based on
  π̃(α, µ | xobs) ∝ π(α) q(µ) ∫ π(µ̃ | α) 1{d(sα(µ), sα(µ̃)) < εα} dµ̃ × ∫ f(x̃ | µ) 1{d(sµ(xobs), sµ(x̃)) < εµ} dx̃
[the first integral amounts to generating a new µ̃ from the prior], with q an arbitrary distribution on µ.
  • 36. Explicit limiting distribution
For model
  xj | µj ∼ π(xj | µj), µj | α i.i.d. ∼ π(µj | α), α ∼ π(α),
this induces the full conditionals
  π̃(α | µ) ∝ π(α) ∫ π(µ̃ | α) 1{d(sα(µ), sα(µ̃)) < εα} dµ̃
and
  π̃(µ | α, xobs) ∝ q(µ) ∫ π(µ̃ | α) 1{d(sα(µ), sα(µ̃)) < εα} dµ̃ × ∫ f(x̃ | µ) 1{d(sµ(xobs), sµ(x̃)) < εµ} dx̃,
now compatible with the new artificial joint.
  • 37. Explicit limiting distribution
For model
  xj | µj ∼ π(xj | µj), µj | α i.i.d. ∼ π(µj | α), α ∼ π(α),
that is,
  • prior simulations of α ∼ π(α) and of µ̃ ∼ π(µ̃ | α) until d(sα(µ), sα(µ̃)) < εα;
  • simulation of µ from the instrumental q(µ), and of auxiliary variables µ̃ and x̃, until both constraints are satisfied
(see the sketch below).
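A sketch of one sweep of this modified sampler on the toy normal hierarchy of slide 20, assuming µ is updated as a block, sα(µ) is the mean of µ, the group means serve as sµ, and q is a wide Gaussian (an arbitrary choice); the nested rejection loops are exact but can be extremely slow for small thresholds:

```python
import numpy as np

rng = np.random.default_rng(2)

def sweep(alpha, mu, s_mu_obs, eps_alpha, eps_mu, n=20, K=10):
    """One sweep of the modified Gibbs sampler (slide 37) for the toy
    normal hierarchy, with mu as a single block."""
    # alpha-step: prior draws (alpha', mu_tilde) until the s_alpha constraint holds
    while True:
        a = rng.uniform(0.0, 20.0)
        mu_t = rng.normal(a, 1.0, size=n)
        if abs(mu_t.mean() - mu.mean()) < eps_alpha:
            alpha = a
            break
    # mu-step: instrumental draw from q (assumed N(0, 10^2)^n here) plus
    # auxiliary mu_tilde and x_tilde, accepted only when both constraints hold;
    # the sup-norm on group means is an assumed distance choice
    while True:
        m = rng.normal(0.0, 10.0, size=n)
        mu_t = rng.normal(alpha, 1.0, size=n)
        x_t = rng.normal(m[:, None], np.sqrt(0.1), size=(n, K))
        if (abs(mu_t.mean() - m.mean()) < eps_alpha
                and np.max(np.abs(x_t.mean(axis=1) - s_mu_obs)) < eps_mu):
            return alpha, m

# usage: iterate sweep(alpha, mu, s_mu, 0.5, 0.5) from some initial (alpha, mu)
```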
  • 38. Explicit limiting distribution
For model
  xj | µj ∼ π(xj | µj), µj | α i.i.d. ∼ π(µj | α), α ∼ π(α),
the resulting Gibbs sampler is stationary for the posterior proportional to
  π(α, µ) × q(sα(µ)) [projection] × f(sµ(xobs) | µ) [projection],
that is, for the likelihood associated with sµ(xobs) and a prior distribution proportional to π(α, µ)q(sα(µ)) [exact!]
  • 39. OUTLINE
  1. Hierarchical models
  2. General case
  3. Take home messages
  • 40–42. Take home messages
We provide theoretical guarantees, under conditions to specify, on the convergence of ABC within Gibbs:
  • Result n°1: a limiting distribution ν∞ε exists as the number of Gibbs iterations grows;
  • Result n°2: a limiting distribution ν∞0 exists as the threshold goes to 0;
  • Result n°3: ν∞0 is the posterior distribution π(θ | s(xobs)).
The method inherits issues from vanilla ABC, namely the choice of the summary statistics [plus compatibility of the conditionals].
In practice, ABC within Gibbs exhibits better performance than vanilla ABC and ABC-SMC [even when the conditions are not satisfied].
Thank you!
  • 43. ABC workshops
  • [A]BayesComp, Gainesville, Florida, Jan 7–10, 2020
  • ABC in Grenoble, France, March 18–19, 2020
  • ISBA(BC), Kunming, China, June 26–30, 2020
  • ABC in Longyearbyen, Svalbard, April 8–9, 2021 [??]
  • 44. Bibliography I
M. A. Beaumont, W. Zhang, and D. J. Balding. Approximate Bayesian computation in population genetics. Genetics, 162(4):2025–2035, 2002.
G. Biau, F. Cérou, and A. Guyader. New insights into approximate Bayesian computation. Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, in press, 2013.
G. Clarté, C. P. Robert, R. Ryder, and J. Stoehr. Component-wise approximate Bayesian computation via Gibbs-like steps. arXiv preprint arXiv:1905.13599, 2019.
P. Fearnhead and D. Prangle. Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 74(3):419–474, 2012.
S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721–741, 1984.
  • 45. Bibliography II
T. J. W. Lazio, E. B. Waltman, F. D. Ghigo, R. Fiedler, R. S. Foster, and K. J. Johnston. A dual-frequency, multiyear monitoring program of compact radio sources. The Astrophysical Journal Supplement Series, 136:265, 2008. doi: 10.1086/322531.
P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26):15324–15328, 2003.
D. Prangle, P. Fearnhead, M. P. Cox, P. J. Biggs, and N. P. French. Semi-automatic selection of summary statistics for ABC model choice. Statistical Applications in Genetics and Molecular Biology, 13(1):67–82, 2014.
L. Raynal, J.-M. Marin, P. Pudlo, M. Ribatet, C. P. Robert, and A. Estoup. ABC random forests for Bayesian parameter inference. Bioinformatics, 2018. doi: 10.1093/bioinformatics/bty867.
S. Tavaré, D. J. Balding, R. C. Griffiths, and P. Donnelly. Inferring coalescence times from DNA sequence data. Genetics, 145(2):505–518, 1997.
  • 46. Bibliography III
T. Toni, D. Welch, N. Strelkowa, A. Ipsen, and M. P. H. Stumpf. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. Journal of the Royal Society Interface, 6(31):187–202, 2008.
R. D. Wilkinson. Approximate Bayesian computation (ABC) gives exact results under the assumption of model error. Statistical Applications in Genetics and Molecular Biology, 12(2):129–141, 2013.