Semi-automatic ABC
A Discussion

Christian P. Robert

Université Paris-Dauphine, IuF, & CREST
http://xianblog.wordpress.com

November 2, 2011
LaTeX code borrowed from arXiv:1004.1112v2
Approximate Bayesian computation




      Approximate Bayesian computation (recap)

      Summary statistic selection
Regular Bayesian computation issues


      When faced with a non-standard posterior distribution

                                             π(θ|y) ∝ π(θ)L(θ|y)

      the standard solution is to use simulation (Monte Carlo) to
      produce a sample
                                   θ_1 , . . . , θ_T
      from π(θ|y) (or approximately by Markov chain Monte Carlo
      methods)
                                              [Robert & Casella, 2004]
Intractable likelihoods



      Cases when the likelihood function f(y|θ) is unavailable and when
      the completion step

                                 f(y|θ) = ∫_Z f(y, z|θ) dz

      is impossible or too costly because of the dimension of z
      ⇒ MCMC cannot be implemented!
The ABC method

      Bayesian setting: target is π(θ)f (x|θ)
      When likelihood f (x|θ) not in closed form, likelihood-free rejection
      technique:
      ABC algorithm
      For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly
      simulating

                           θ' ∼ π(θ) ,  z ∼ f(z|θ') ,

      until the auxiliary variable z is equal to the observed value, z = y.

                                             [Rubin, 1984; Tavaré et al., 1997]
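
As a concrete toy illustration (my own choice, not from the slides): for discrete data the exact-match scheme is directly implementable. A minimal Python sketch, assuming a Binomial(n, θ) model with a uniform prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete model (my choice): y ~ Binomial(n, theta), theta ~ Uniform(0, 1),
# so the exact posterior is Beta(y_obs + 1, n - y_obs + 1).
n, y_obs = 20, 12

def abc_exact(n_keep):
    """Keep theta' ~ pi(theta) whenever the simulated z ~ f(z|theta') hits y_obs."""
    kept = []
    while len(kept) < n_keep:
        theta = rng.uniform()              # theta' ~ pi(theta)
        z = rng.binomial(n, theta)         # z ~ f(z | theta')
        if z == y_obs:                     # accept on exact equality z = y
            kept.append(theta)
    return np.array(kept)

sample = abc_exact(1000)
print(sample.mean(), (y_obs + 1) / (n + 2))   # ABC mean vs exact Beta(13, 9) mean
```

The accepted draws are exact draws from π(θ|y): the scheme trades likelihood evaluations for (possibly many) simulations.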
A as approximative


      When y is a continuous random variable, equality z = y is replaced
      with a tolerance condition,

                                   ρ(y, z) ≤ ε

      where ρ is a distance
      Output distributed from

                    π(θ) P_θ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)
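
A one-line justification of this output distribution (a sketch via Bayes' theorem on the acceptance event, nothing beyond the slide's claim):

```latex
% The accepted theta's follow the prior weighted by the acceptance
% probability, renormalised; Bayes' theorem identifies this with the
% posterior given the event {rho(y,z) <= epsilon}:
\[
  \theta_{\mathrm{acc}} \;\sim\;
  \frac{\pi(\theta)\,\mathbb{P}_\theta\{\rho(y,z) \le \epsilon\}}
       {\int_\Theta \pi(\theta')\,\mathbb{P}_{\theta'}\{\rho(y,z) \le \epsilon\}\,\mathrm{d}\theta'}
  \;=\; \pi\bigl(\theta \,\big|\, \rho(y,z) \le \epsilon\bigr).
\]
```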
ABC algorithm


      Algorithm 1 Likelihood-free rejection sampler
        for i = 1 to N do
          repeat
             generate θ' from the prior distribution π(·)
             generate z from the likelihood f(·|θ')
          until ρ{η(z), η(y)} ≤ ε
          set θ_i = θ'
        end for

      where η(y) defines a (generally insufficient) statistic
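
A minimal Python rendering of Algorithm 1 (a sketch under assumptions of my own: Normal data, N(0, 10) prior, η = sample mean, ρ = absolute difference):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: y_1..y_m ~ N(theta, 1), prior theta ~ N(0, 10),
# summary eta(y) = mean(y), distance rho = | . |.
m = 50
y = rng.normal(1.5, 1.0, size=m)    # "observed" data
eta_y = y.mean()

def abc_rejection(N, eps):
    """Algorithm 1: likelihood-free rejection sampler with tolerance eps."""
    theta_out = np.empty(N)
    for i in range(N):
        while True:
            theta = rng.normal(0.0, np.sqrt(10.0))   # theta' from the prior
            z = rng.normal(theta, 1.0, size=m)       # z from the likelihood
            if abs(z.mean() - eta_y) <= eps:         # rho{eta(z), eta(y)} <= eps
                theta_out[i] = theta                 # set theta_i = theta'
                break
    return theta_out

print(abc_rejection(500, eps=0.05).mean())
```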
Output

      The likelihood-free algorithm samples from the marginal in z of:

                         π(θ) f(z|θ) I_{A_{ε,y}}(z)
          π_ε(θ, z|y) = ─────────────────────────────────── ,
                         ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ

      where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
      The idea behind ABC is that the summary statistics coupled with a
      small tolerance should provide a good approximation of the
      posterior distribution:

                   π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)) .

                                                                [Not guaranteed!]
Summary statistic selection



      Approximate Bayesian computation (recap)

      Summary statistic selection
         F&P’s setting
         Noisy ABC
         Optimal summary statistic
F&P’s ABC

      Use of a summary statistic S(·), an importance proposal g(·), a
      kernel K(·) ≤ 1 and a bandwidth h > 0 such that

                          (θ, y_sim) ∼ g(θ) f(y_sim|θ)

      is accepted with probability (hence the bound)

                          K[{S(y_sim) − s_obs}/h]

                     [or is it K[{S(y_sim) − s_obs}]/h, cf (2)? typo]
      the corresponding importance weight defined by

                                 π(θ) / g(θ)
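
Read operationally (a hedged sketch; model, proposal and kernel are my own choices, and I use the K[{·}/h] reading of the acceptance probability):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed instance of F&P's scheme: N(theta, 1) data, prior pi = N(0, 10),
# proposal g = N(0, 4), S = sample mean, Gaussian kernel scaled so K(.) <= 1.
m, h = 50, 0.1
y_obs = rng.normal(1.5, 1.0, size=m)
s_obs = y_obs.mean()

def kernel(u):
    return np.exp(-0.5 * u**2)           # bounded by 1, as required

def fp_abc(n_sim):
    thetas, weights = [], []
    for _ in range(n_sim):
        theta = rng.normal(0.0, 2.0)                 # theta ~ g
        y_sim = rng.normal(theta, 1.0, size=m)       # y_sim ~ f(.|theta)
        if rng.uniform() < kernel((y_sim.mean() - s_obs) / h):  # accept w.p. K[.]
            # importance weight pi(theta)/g(theta), up to a constant
            log_w = (-theta**2 / (2 * 10.0)) - (-theta**2 / (2 * 4.0))
            thetas.append(theta)
            weights.append(np.exp(log_w))
    return np.array(thetas), np.array(weights)

t, w = fp_abc(20000)
print(np.average(t, weights=w))   # self-normalised posterior-mean estimate
```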
Errors, errors, and errors


      Three levels of approximation
              π(θ|y_obs) by π(θ|s_obs): loss of information
              π(θ|s_obs) by

                                  ∫ π(s) K[{s − s_obs}/h] π(θ|s) ds
                π_ABC(θ|s_obs) = ───────────────────────────────────
                                     ∫ π(s) K[{s − s_obs}/h] ds

              (noisy observations)
              π_ABC(θ|s_obs) by importance Monte Carlo based on N
              simulations, represented by var(a(θ)|s_obs)/N_acc [expected
              number of acceptances]
Average acceptance asymptotics


      For the average acceptance probability/approximate likelihood

             p(θ|s_obs) = ∫ f(y_sim|θ) K[{S(y_sim) − s_obs}/h] dy_sim ,

      overall acceptance probability

             p(s_obs) = ∫ p(θ|s_obs) π(θ) dθ = π(s_obs) h^d + o(h^d)

                                                                    [Lemma 1]
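
Lemma 1's h^d scaling is easy to check numerically (d = 1, uniform kernel, toy Gaussian model of my own):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy check of the h^d scaling (d = 1): theta ~ N(0, 1), S(y_sim) = y_sim ~
# N(theta, 1), uniform kernel K(u) = 1{|u| <= 1}, s_obs = 0. Then
# p(s_obs) = Pr(|y_sim| <= h) ~ 2 pi(0) h for small h.
def acceptance_rate(h, n=2_000_000):
    theta = rng.standard_normal(n)
    y_sim = rng.normal(theta, 1.0)
    return np.mean(np.abs(y_sim) <= h)

for h in (0.2, 0.1, 0.05):
    print(h, acceptance_rate(h))   # halving h should roughly halve the rate
```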
Optimal importance proposal




      Best choice of importance proposal in terms of effective sample size

                           g(θ|s_obs) ∝ π(θ) p(θ|s_obs)^{1/2}

                                               [Not particularly useful in practice]
Calibration of h

              “This result gives insight into how S(·) and h affect the
              Monte Carlo error. To minimize Monte Carlo error, we
              need h^d to be not too small. Thus ideally we want S(·)
              to be a low dimensional summary of the data that is
              sufficiently informative about θ that π(θ|s_obs) is close, in
              some sense, to π(θ|y_obs)” (p. 5)

              Constraint on h only addresses one term in the approximation
              error and acceptance probability
              h large prevents π(θ|s_obs) from being close to π_ABC(θ|s_obs)
              d small prevents π(θ|s_obs) from being close to π(θ|y_obs)
Calibrated ABC



      Definition
      For 0 < q < 1 and subset A, fix [one specific?/all?] event E_q(A)
      with Pr_ABC(θ ∈ E_q(A)|s_obs) = q. Then ABC is calibrated if

                                 Pr(θ ∈ A|E_q(A)) = q



      Why calibrated and not exact?
Calibrated ABC



      Theorem
      Noisy ABC, where

                          s_obs = S(y_obs) + hε ,  ε ∼ K(·)

      is calibrated
                                                              [Wilkinson, 2008]
                                   no condition on h
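
Operationally, noisy ABC changes a single line relative to standard ABC: jitter the observed summary once before running the sampler. A sketch (same toy Gaussian setup as earlier, my own choice, with a Gaussian kernel K for both the jitter and the acceptance step):

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal noisy-ABC sketch: jitter the observed summary once,
# s_obs = S(y_obs) + h*eps with eps ~ K, then run kernel-acceptance ABC
# against the jittered target.
m, h = 50, 0.1
y_obs = rng.normal(1.5, 1.0, size=m)
eps = rng.standard_normal()                 # eps ~ K(.) = N(0, 1) here
s_noisy = y_obs.mean() + h * eps            # s_obs = S(y_obs) + h*eps

def noisy_abc(n_keep):
    kept = []
    while len(kept) < n_keep:
        theta = rng.normal(0.0, np.sqrt(10.0))        # theta ~ pi
        z = rng.normal(theta, 1.0, size=m)            # z ~ f(.|theta)
        if rng.uniform() < np.exp(-0.5 * ((z.mean() - s_noisy) / h) ** 2):
            kept.append(theta)
    return np.array(kept)

print(noisy_abc(500).mean())
```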
Calibrated ABC


      Theorem
      For noisy ABC, the expected noisy-ABC log-likelihood,

       E{log[p(θ|s_obs)]} = ∫∫ log[p(θ|S(y_obs) + hε)] π(y_obs|θ_0) K(ε) dy_obs dε ,

      has its maximum at θ = θ_0.

                                          [Last line of proof contains a typo]

      True for any choice of summary statistic?
                                       [Imposes at least identifiability...]
      Relevant in asymptotia and not for the data
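
The mechanism behind the theorem is a Kullback-Leibler (Gibbs' inequality) argument; a sketch, under the assumption of a symmetric kernel so that the noisy summary has density proportional to p(θ_0|·):

```latex
% For fixed theta, \int p(theta|s) ds = h^d \int K(u) du is free of theta,
% so q_theta(s) = p(theta|s) / (h^d \int K) is a probability density.
% With K symmetric, s_obs = S(y_obs) + h*eps has density q_{theta_0}; hence
\[
  \mathbb{E}\{\log p(\theta \mid s_{\mathrm{obs}})\}
  - \mathbb{E}\{\log p(\theta_0 \mid s_{\mathrm{obs}})\}
  = -\,\mathrm{KL}\bigl(q_{\theta_0}\,\|\,q_{\theta}\bigr) \;\le\; 0 ,
\]
% with equality at theta = theta_0 (and only there under identifiability,
% which is exactly the condition questioned above).
```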
Calibrated ABC



      Corollary
      For noisy ABC, the ABC posterior converges to a point mass on
      the true parameter value as m → ∞.

      For standard ABC, not always the case (unless h goes to zero).
      Strength of regularity conditions (c1) and (c2) in Bernardo
      & Smith, 1994?
                                                 [constraints on posterior]
      Some condition upon summary statistic?
Loss-motivated statistic

      Under quadratic loss function,
      Theorem
        (i) The minimal posterior error E[L(θ, θ̂)|y_obs] occurs when
            θ̂ = E(θ|y_obs) (!)
       (ii) When h → 0, E_ABC(θ|s_obs) converges to E(θ|y_obs)
      (iii) If S(y_obs) = E[θ|y_obs] then for θ̂ = E_ABC[θ|s_obs]

                 E[L(θ, θ̂)|y_obs] = trace(AΣ) + h² ∫ xᵀ A x K(x) dx + o(h²).

      Measure-theoretic difficulties?
      dependence of s_obs on h makes me uncomfortable
      Relevant for choice of K?
Optimal summary statistic
              “We take a different approach, and weaken the
              requirement for π_ABC to be a good approximation to
              π(θ|y_obs). We argue for π_ABC to be a good
              approximation solely in terms of the accuracy of certain
              estimates of the parameters.” (p. 5)

      From this result, F&P derive their choice of summary statistic,

                                      S(y) = E(θ|y)

                          [almost sufficient: E_ABC[θ|S(y_obs)] = E[θ|y_obs]]
      and suggest

                   h = O(N^{−1/(2+d)})  and  h = O(N^{−1/(4+d)})

      as optimal bandwidths for noisy and standard ABC.
Caveat



      Since E(θ|y_obs) is usually unavailable, F&P suggest
        (i) use a pilot run of ABC to determine a region of non-negligible
            posterior mass;
       (ii) simulate sets of parameter values and data;
      (iii) use the simulated sets of parameter values and data to
            estimate the summary statistic; and
      (iv) run ABC with this choice of summary statistic.
Approximating the summary statistic




      As in Beaumont et al. (2002) and Blum and François (2010), F&P
      use a linear regression to approximate E(θ|y_obs):

                        θ_i = β_0^{(i)} + β^{(i)} f(y_obs) + ε_i
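
A sketch of what steps (ii)-(iii) of the previous slide amount to, in a toy model of my own (features f(y) hand-picked, plain least squares):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate (theta, y) pairs, regress theta on features f(y), and use the
# fitted values as the summary statistic S(y) ~= E(theta|y).
m, n_train = 50, 5000
theta = rng.uniform(-3.0, 3.0, size=n_train)              # pilot-region draws
y = rng.normal(theta[:, None], 1.0, size=(n_train, m))    # y ~ f(.|theta)

def features(y):
    """f(y): a hand-picked feature vector (here mean, median, variance)."""
    return np.column_stack([y.mean(1), np.median(y, 1), y.var(1)])

X = np.column_stack([np.ones(n_train), features(y)])
beta, *_ = np.linalg.lstsq(X, theta, rcond=None)          # least-squares fit

def S(y_new):
    """Estimated summary statistic S(y) = beta_0 + beta . f(y)."""
    return np.column_stack([np.ones(len(y_new)), features(y_new)]) @ beta

y_obs = rng.normal(1.5, 1.0, size=(1, m))
print(S(y_obs))   # feed this S into the ABC run of step (iv)
```

Note the approximation error of this regression step is exactly the one questioned on the final slide.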
Applications



      The paper’s second half covers:
              g-and-k distribution
              stochastic kinetic biochemical networks
              Lotka-Volterra model
              Ricker map ecological model
              M/G/1 queue
              tuberculosis bacteria genotype data
Questions

              dependence on h and S(·) in the early stage
              reduction of Bayesian inference to point estimation
              approximation error in step (iii) not accounted for
              not parameterisation invariant
              practice shows that proper approximation to genuine posterior
              distributions stems from using a (much) larger number of
              summary statistics than the dimension of the parameter
              the validity of the approximation to the optimal summary
              statistic depends on the quality of the pilot run
              important inferential issues like model choice are not covered
              by this approach
