Monte Carlo Sampling Methods Using Markov Chains and Their Applications

Hastings, University of Toronto

Reading seminar on classics: C. P. Robert
Presented by: Donia Skanji
December 3, 2012
Outline

  1  Introduction
  2  Monte Carlo Principle
  3  Markov Chain Theory
  4  MCMC
  5  Conclusion
Introduction to MCMC Methods
Introduction

Several numerical problems, such as computing integrals and evaluating maxima, arise in high-dimensional spaces.
Monte Carlo methods are often applied to solve such integration and optimisation problems.
Markov chain Monte Carlo (MCMC) is one of the best-known families of Monte Carlo methods.
MCMC methods comprise a large class of sampling algorithms that have had a major influence on the development of science.
Study objective

To expose some relevant theory and techniques of application related to MCMC methods.
To present a generalization of the Metropolis sampling method.
Next Steps

Monte Carlo Principle
Markov Chain

To introduce:
    - MCMC Methods
    - MCMC Algorithms
Monte Carlo Methods
Overview

The idea of Monte Carlo simulation is to draw an i.i.d. set of samples {x^(i)}, i = 1, …, N, from a target density π.
These N samples can be used to approximate the target density with the following empirical point-mass function:

    π_N(x) = (1/N) ∑_{i=1}^{N} δ_{x^(i)}(x)

For independent samples, by the Law of Large Numbers, one can approximate the integral I(f) with the tractable sum I_N(f), which converges as follows:

    I_N(f) = (1/N) ∑_{i=1}^{N} f(x^(i)) → I(f) = ∫ f(x) π(x) dx   a.s.
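The Monte Carlo principle above can be sketched in a few lines. This is a minimal illustration under assumptions not taken from the talk: the target π is chosen as N(0, 1) and f(x) = x², so the exact value of the integral is E[X²] = 1.

```python
import random

# Plain Monte Carlo: approximate I(f) = ∫ f(x) π(x) dx by the sample
# average I_N(f) = (1/N) Σ f(x^(i)), with x^(i) drawn i.i.d. from π.
# Illustrative assumption: π = N(0, 1) and f(x) = x², so I(f) = 1.
random.seed(0)

N = 200_000
I_N = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(N)) / N

print(round(I_N, 3))  # close to the exact value 1
```

The estimator's standard error shrinks like 1/√N, which is what makes the sum "tractable" even when the integral is not.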
N samples from π

[Figure: a scatter of samples x^1, x^2, …, x^N drawn from π]

But independent sampling from π may be difficult, especially in a high-dimensional space.
It turns out that (1/N) ∑_{i=1}^{N} f(x^(i)) → ∫ f(x) π(x) dx as N → ∞ still applies if we generate the samples using a Markov chain (dependent samples).
The idea of MCMC is to use Markov chain convergence properties to overcome the dimensionality problems met by regular Monte Carlo methods.
But first, some revision of Markov chains on a discrete state space χ.
Markov Chain Theory
Definition

Finite Markov Chain
A Markov chain is a mathematical system that undergoes transitions from one state to another, between a finite or countable number of possible states. It is a random process usually characterized as memoryless:

    P(X^(t+1) | X^(0), X^(1), …, X^(t)) = P(X^(t+1) | X^(t))
Transition Matrix

Let P = {P_ij} be the transition matrix of a Markov chain with states 0, 1, 2, …, S. Then, if X^(t) denotes the state occupied by the process at time t, we have:

    Pr(X^(t+1) = j | X^(t) = i) = P_ij

Writing X^(t) for the row vector of state probabilities at time t, the distribution evolves as:

    X^(t+1) = X^(t) P
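The distribution update X^(t+1) = X^(t) P can be checked numerically. The 2-state transition matrix below is a made-up illustration, not from the paper; iterating the update drives the distribution toward the stationary π solving π = πP.

```python
import numpy as np

# A hypothetical 2-state chain illustrating Pr(X^(t+1)=j | X^(t)=i) = P_ij
# and the row-vector distribution update x^(t+1) = x^(t) P.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

x = np.array([1.0, 0.0])   # start in state 0 with probability 1
for _ in range(100):
    x = x @ P              # one step of the distribution update

# Solving pi = pi P for this chain gives pi = (5/6, 1/6), and the
# iterates converge to that limit (the second eigenvalue is 0.4).
print(np.round(x, 4))
```

This anticipates the stationarity property discussed next: the row vector stops changing once it reaches π.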
Properties
Stationarity / Irreducibility

Stationarity
    As t → ∞, the Markov chain converges to its stationary (invariant) distribution: π = πP.

Irreducibility
    Irreducible means that any state can be reached from any other state in a finite number of moves (for every i and j there is some n with P^(n)(i, j) > 0).
MCMC

The idea of the Markov chain Monte Carlo method is to choose the transition matrix P so that π (the target density, which is very difficult to sample from directly) is its unique stationary distribution.
Assume the Markov chain:
    has a stationary distribution π(X)
    is irreducible and aperiodic
Then we have an ergodic theorem:

Theorem (Ergodic Theorem)
If the Markov chain (x_t) is irreducible, aperiodic and stationary, then for any function h with E|h| < ∞:

    (1/N) ∑_i h(x_i) → ∫ h(x) dπ(x)   as N → ∞
Summary

Recall that our goal is to build a Markov chain (X^t) using a transition matrix P so that the limiting distribution of (X^t) is the target density π, and integrals can then be approximated using the ergodic theorem.
Question

How do we construct a Markov chain whose stationary distribution is the target distribution π?

Metropolis et al. (1953) showed how.
The method was generalized by Hastings (1970).
Construction of the transition matrix

In order to construct a Markov chain with π as its stationary distribution, we consider a transition matrix P that satisfies the reversibility condition: for all i and j,

    π_i p(i → j) = π_j p(j → i),   i.e.   π_i p_ij = π_j p_ji

This property ensures that ∑_i π_i p_ij = π_j (the definition of a stationary distribution) and hence that π is a stationary distribution of P.
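The argument above (detailed balance summed over i gives stationarity) can be verified numerically. The 3-state target π below is a hypothetical example; the reversible P is built from a symmetric proposal with a Metropolis-style acceptance, one standard way to obtain reversibility.

```python
import numpy as np

# Illustrative check (not from the paper): if pi_i P_ij = pi_j P_ji for
# all i, j, then summing over i gives sum_i pi_i P_ij = pi_j,
# i.e. pi is stationary for P.
pi = np.array([0.2, 0.3, 0.5])

# Build a reversible P from a symmetric proposal + Metropolis acceptance.
Q = np.full((3, 3), 1.0 / 3.0)
P = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            P[i, j] = Q[i, j] * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()   # remaining mass stays on state i

F = pi[:, None] * P              # flow matrix F_ij = pi_i P_ij
assert np.allclose(F, F.T)       # detailed balance holds entrywise
assert np.allclose(pi @ P, pi)   # hence pi P = pi: pi is stationary
print("detailed balance and stationarity verified")
```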
Construction of the transition matrix

How do we choose the transition matrix P so that the reversibility condition π_i P_ij = π_j P_ji is verified?
Overview

Suppose that we have a proposal matrix denoted Q, where ∑_j q_ij = 1.
If it happens that Q itself satisfies the reversibility condition π_i q_ij = π_j q_ji for all i and j, then our search is over, but most likely it will not.
We might find, for example, that for some i and j: π_i q_ij > π_j q_ji.
A convenient way to correct this is to reduce the number of moves from i to j by introducing a probability α_ij that the move is made.
The choice of the transition matrix

We assume that the transition matrix P has the form:

    P_ij = q_ij α_ij            if i ≠ j
    P_ii = 1 − ∑_{j≠i} P_ij

where:
    Q = {q_ij} is the proposal matrix (or jumping matrix) of an arbitrary Markov chain on the states 0, 1, …, S, which suggests a new sample value j given a sample value i.
    α_ij is the acceptance probability of a move from state i to state j.
In order to obtain the reversibility condition, we have to verify:

    π_i p_ij = π_j p_ji
    π_i α_ij q_ij = π_j α_ji q_ji   (∗)

The probabilities α_ij and α_ji are introduced to ensure that the two sides of (∗) are in balance.
In his paper, Hastings defined a generic form of the acceptance probability:

    α_ij = s_ij / (1 + (π_i q_ij)/(π_j q_ji))

where s_ij is a symmetric function of i and j (s_ij = s_ji), chosen so that 0 ≤ α_ij ≤ 1 for all i and j.
With this form of P_ij and α_ij suggested by Hastings, the reversibility condition is readily verified.
2. The acceptance probability α

The choice of α

Recall that in this paper, Hastings defined the acceptance probability α_ij as follows:

    α_ij = s_ij / (1 + (π_i q_ij)/(π_j q_ji))

For specific choices of s_ij, we recover the acceptance probabilities suggested by both:
    ⊕ Metropolis et al. (1953)
    ⊕ Barker (1965)
The acceptance probability α
The choice of s_ij

Two choices of s_ij are given, for all i and j, by:

    s_ij^(M) = 1 + (π_i q_ij)/(π_j q_ji)   if (π_j q_ji)/(π_i q_ij) ≥ 1
             = 1 + (π_j q_ji)/(π_i q_ij)   if (π_j q_ji)/(π_i q_ij) < 1

When q_ij = q_ji and s_ij = s_ij^(M), we have the method devised by Metropolis et al., with α_ij^(M) = min(1, π_j/π_i).
When q_ij = q_ji and s_ij = s_ij^(B) = 1, we have the method devised by Barker, with α_ij^(B) = π_j/(π_i + π_j).
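Both acceptance rules can be compared directly. The sketch below uses hypothetical target probabilities π_i, π_j and a symmetric proposal (so q cancels), and checks that each rule satisfies the balance condition (∗):

```python
# Illustrative comparison of the two acceptance rules recovered from
# Hastings's generic form, under a symmetric proposal q_ij = q_ji:
#   Metropolis: a_M(i,j) = min(1, pi_j / pi_i)
#   Barker:     a_B(i,j) = pi_j / (pi_i + pi_j)
def a_metropolis(pi_i, pi_j):
    return min(1.0, pi_j / pi_i)

def a_barker(pi_i, pi_j):
    return pi_j / (pi_i + pi_j)

pi_i, pi_j = 0.2, 0.6   # hypothetical target probabilities

for a in (a_metropolis, a_barker):
    # Balance condition (*): pi_i a(i,j) q = pi_j a(j,i) q (q cancels).
    lhs = pi_i * a(pi_i, pi_j)
    rhs = pi_j * a(pi_j, pi_i)
    assert abs(lhs - rhs) < 1e-12

# Metropolis accepts at least as often as Barker for any pair of states.
assert a_metropolis(pi_i, pi_j) >= a_barker(pi_i, pi_j)
print(a_metropolis(pi_i, pi_j), round(a_barker(pi_i, pi_j), 2))
```

The last assertion reflects a general fact: min(1, r) ≥ r/(1+r) for r > 0, so the Metropolis chain moves more often.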
Remark

In this paper, Hastings mentioned that little is known about the merits of these two choices, s_ij^(M) and s_ij^(B).
The Proposal Matrix Q

The choice of Q

It has been recognised that the choice of the proposal matrix/density is crucial to the success (rapid convergence) of an MCMC algorithm.
The proposal matrix can be almost arbitrary; a good choice allows the chain to reach all states frequently and assures a high acceptance rate.
Algorithm

  1  First, pick a proposal matrix Q(i, j) of an arbitrary Markov chain on the states 0, 1, …, S, which suggests a new sample value j given a sample value i.
  2  Also, start with some arbitrary point i_0 as the first sample.
  3  Then, to return a new sample j given the most recent sample i, we proceed as follows:
  4  Generate a proposed new sample value j from the jumping distribution Q(i → j).
  5  Accept the proposal with probability α(i → j):
         - if the proposal is accepted, move to j; otherwise stay at i. Return to step 4.
         - repeat until a sample of the desired size is obtained.
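The steps above can be sketched as follows. The unnormalized target weights and the uniform proposal are illustrative assumptions, not from the paper; note that a rejected proposal still contributes the current state to the sample.

```python
import random

# Minimal sketch of the algorithm for a finite state space 0..S-1.
random.seed(1)

weights = [1.0, 2.0, 3.0, 4.0]          # unnormalized target pi (assumed)
S = len(weights)

def mh_sample(n_steps, i0=0):
    i = i0                               # step 2: arbitrary starting point
    out = []
    for _ in range(n_steps):
        j = random.randrange(S)          # step 4: propose j ~ Q(i, .), uniform
        # step 5: Q is symmetric, so accept with min(1, pi_j / pi_i)
        if random.random() < min(1.0, weights[j] / weights[i]):
            i = j                        # accepted: move to j ...
        out.append(i)                    # ... otherwise stay at i
    return out

chain = mh_sample(100_000)
freq = [chain.count(s) / len(chain) for s in range(S)]
print([round(f, 2) for f in freq])       # close to [0.1, 0.2, 0.3, 0.4]
```

The empirical state frequencies approach the normalized weights, as the ergodic theorem predicts.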
Remarks

An empirical way of checking convergence is to let two or more different chains run in parallel and see whether they are concentrating on the same place.
The calculation of α does not require knowledge of the normalizing constant of π, because it appears in both the numerator and the denominator.
Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a burn-in period is typically necessary.
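The normalizing-constant remark can be checked directly: scaling every π_i by the same constant leaves the acceptance ratio untouched. The weights below are arbitrary illustrative numbers.

```python
# The Metropolis ratio pi_j / pi_i is unchanged if every pi_i is divided
# by the same normalizing constant Z, so alpha never needs Z.
unnormalized = [3.0, 1.0, 6.0]            # arbitrary illustrative weights
Z = sum(unnormalized)
normalized = [w / Z for w in unnormalized]

for i in range(3):
    for j in range(3):
        a_un = min(1.0, unnormalized[j] / unnormalized[i])
        a_no = min(1.0, normalized[j] / normalized[i])
        assert abs(a_un - a_no) < 1e-12
print("acceptance probabilities identical up to normalization")
```

This is what makes MCMC practical for Bayesian posteriors, where Z is an intractable integral.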
Example: Poisson Distribution as the Target Distribution

Consider π as the Poisson distribution with intensity λ > 0:

    π_i = e^{−λ} λ^i / i!,   i = 0, 1, 2, …

Hastings (1970) suggests the following proposal transition matrix:

    q_ij = 1/2 if j = i − 1
    q_ij = 1/2 if j = i + 1
    q_ij = 0   otherwise
    (with q_00 = q_01 = 1/2 for i = 0)

        | 1/2  1/2   0    0   ⋯ |
        | 1/2   0   1/2   0   ⋯ |
    Q = |  0   1/2   0   1/2  ⋯ |
        |  0    0   1/2   0   ⋯ |
        |  ⋮    ⋮    ⋮    ⋮      |

Q is in fact symmetric, and the algorithm reduces to that of Metropolis.
Outline
                               Introduction
                       Monte Carlo Principle
                       Markov Chain Theory
                                    MCMC
                                 Conclusion




                                   1          i
                                   2 min(1,   λ)              if j = i − 1
                                    1            λ
                              
                                    2 min(1,   i+1 )           if j = i + 1
                              
                   (M)
       pij = qij αij      =
                               1 − pi,i−1 − pi,i+1
                                                              j =i
                              
                                0                              otherwise
For i = 0
                               1
                          
                              2 min(1, λ)             if j = 1
                p0j =          1 − 1 min(1, λ)
                                   2                   if j = 0
                               0                       otherwise
                          

  this transition probability is aperiodic and irreducible
  In practice, if λ is small, this choice of Q seems to work fairly
well and fast to approximate π
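As a quick numerical cross-check (mine, not in the paper), the transition probabilities above satisfy detailed balance with respect to the unnormalised Poisson(λ) weights, here verified on a truncated state space with an illustrative λ = 2:

```python
import math

lam = 2.0          # illustrative rate; any lambda > 0 works
N = 60             # truncate the state space to {0, 1, ..., N}

def pi(i):
    # unnormalised Poisson(lam) weight
    return lam**i / math.factorial(i)

def p(i, j):
    # the Metropolis transition probabilities built above
    if j == i - 1 and i >= 1:
        return 0.5 * min(1.0, i / lam)
    if j == i + 1:
        return 0.5 * min(1.0, lam / (i + 1))
    if j == i:
        stay = 1.0 - p(i, i + 1)
        if i >= 1:
            stay -= p(i, i - 1)
        return stay
    return 0.0

# detailed balance: pi(i) p(i, i+1) == pi(i+1) p(i+1, i)
for i in range(N):
    lhs = pi(i) * p(i, i + 1)
    rhs = pi(i + 1) * p(i + 1, i)
    assert abs(lhs - rhs) < 1e-12 * max(lhs, 1.0)
print("detailed balance holds on the truncated chain")
```

Detailed balance, together with irreducibility and aperiodicity, is what guarantees that π is the limiting distribution of the chain.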

Algorithm

   Given a starting point i we take:
   j = i + 1 with probability 1/2
   or j = i − 1 with probability 1/2,
   i.e. qij = (1/2) δi−1 (j) + (1/2) δi+1 (j)
   We calculate the Metropolis–Hastings ratio:
   αij = min{1, π(j)/π(i)} = min{1, λ^(j−i) × i!/j!}
   Let u ∼ U[0, 1];
   if u ≤ αij then Xk+1 = j
   else Xk+1 = Xk = i
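The steps above translate directly into code; a Python transcription (mine — the R implementation that follows is the seminar's version), with an illustrative λ = 3:

```python
import math, random

random.seed(7)

def poisson_mh(n, lam, x0):
    # random-walk Metropolis for an unnormalised Poisson(lam) target
    chain, xn = [x0], x0
    for _ in range(n):
        if xn != 0:
            y = xn + (1 if random.random() < 0.5 else -1)   # j = i +/- 1
        else:
            y = 1 if random.random() < 0.5 else 0           # boundary move at 0
        # alpha_ij = min(1, lam^(j-i) * i!/j!)
        alpha = min(1.0, lam ** (y - xn) * math.factorial(xn) / math.factorial(y))
        if random.random() < alpha:
            xn = y
        chain.append(xn)
    return chain

chain = poisson_mh(50_000, 3.0, 0)
burned = chain[5_000:]           # discard burn-in
print(sum(burned) / len(burned)) # close to the Poisson mean lam = 3
```

After burn-in the empirical mean of the chain approximates the Poisson mean λ, as the Monte Carlo principle predicts.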




R implementation

  > library(mcsm)
  > fact = function(n){ gamma(n + 1) }
  > poissonf = function(n, lambda, x0){
      x = x0
      xn = x0
      for (i in 1:n){
        # symmetric +/-1 random-walk proposal (move within {0, 1} from 0)
        if (xn != 0)
          y = xn + (2 * rbinom(1, 1, 0.5) - 1)
        else
          y = rbinom(1, 1, 0.5)
        alpha = min(1, lambda^(y - xn) * fact(xn) / fact(y))
        if (runif(1) < alpha) xn = y
        x = c(x, xn)
      }
      x
    }

Multivariate Target

      If the distribution π is d-dimensional and the simulated
      process is X (t) = {X1 (t), · · · , Xd (t)}, we may use the following
      techniques to construct the transition matrix P:
        1   In the transition from t to t + 1, all co-ordinates of X (t) may
            be changed.
        2   In the transition from t to t + 1, only one co-ordinate of X (t)
            may be changed; that co-ordinate may be selected at random
            among the d co-ordinates.
        3   In the transition from t to t + 1, only one co-ordinate may
            change in each transition, the co-ordinates being selected
            in a fixed rather than a random sequence.
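A minimal Python sketch (mine, not from the paper) of technique 2: one randomly selected co-ordinate is updated per transition with a symmetric proposal, here against an assumed standard-normal target in d = 2:

```python
import math, random

random.seed(1)

def log_target(x):
    # assumed example target: independent standard normals
    return -0.5 * sum(xi * xi for xi in x)

def random_scan_step(x, step=1.0):
    d = len(x)
    k = random.randrange(d)              # select one co-ordinate at random
    y = list(x)
    y[k] += random.uniform(-step, step)  # symmetric proposal in co-ordinate k
    log_alpha = min(0.0, log_target(y) - log_target(x))
    if random.random() < math.exp(log_alpha):
        return y                         # accept the move
    return x                             # reject: stay put

x = [5.0, -5.0]                          # deliberately far-out starting point
for _ in range(20_000):
    x = random_scan_step(x)
print(x)                                 # after many steps, near the mode at 0
```

Technique 3 differs only in that `k` cycles deterministically over 0, ..., d−1 instead of being drawn at random.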



Hastings' justification

         Hastings reduced the d-dimensional problem to a sequence of
         one-dimensional ones.
         The approach is based on updating one component at a time.
         The transition matrix is defined as follows: P = P1 P2 · · · Pd
         For each k = 1, · · · , d, Pk is constructed so that πPk = π
         π is then a stationary distribution of P, since πP1 = π implies
         πP = πP1 P2 · · · Pd = πP2 · · · Pd = · · · = π
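The composition argument is easy to check numerically on a toy example (mine): take any π and two kernels that each preserve it; their product preserves it as well:

```python
# toy check that pi P1 = pi and pi P2 = pi imply pi (P1 P2) = pi
pi = [0.2, 0.3, 0.5]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def vecmat(v, A):
    n = len(v)
    return [sum(v[i] * A[i][j] for i in range(n)) for j in range(n)]

# two kernels, each reversible w.r.t. pi (Metropolis-style moves
# between one pair of states, leaving the third state fixed)
P1 = [[0.5, 0.5, 0.0],
      [1/3, 2/3, 0.0],
      [0.0, 0.0, 1.0]]
P2 = [[1.0, 0.0, 0.0],
      [0.0, 0.5, 0.5],
      [0.0, 0.3, 0.7]]

for P in (P1, P2, matmul(P1, P2)):
    out = vecmat(pi, P)
    assert all(abs(a - b) < 1e-9 for a, b in zip(out, pi))
print("pi is stationary for P1, P2 and their product")
```

Note that each Pk alone is reducible (it never moves one of the states); only the composition explores the whole space, which is exactly the one-component-at-a-time construction.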




Conclusion

      + In this paper, Hastings gives a generalization of the
      Metropolis et al. (1953) approach.
      + He also introduced a Gibbs-sampling strategy when he
      presented the multivariate target.
      + Hastings treated the continuous case using a discretization
      analogy.
      − Little information is given on the relative merits of the
      Metropolis and Barker acceptance forms.






     Thank You For Your Attention






Bibliography

  [1]: W. K. Hastings (1970). Monte Carlo Sampling Methods Using
  Markov Chains and Their Applications.
  [2]: Christian P. Robert (2010). Introducing Monte Carlo Methods
  with R.
  [3]: Kenneth Lange (2010). Numerical Analysis for Statisticians.
  [4]: Siddhartha Chib (1995). Understanding the Metropolis–Hastings
  Algorithm.
  [5]: Robert Gray (2001). Advanced Statistical Computing.





Random orthogonal matrices

          Hastings suggests an interesting chain on the space of n × n
          orthogonal matrices (HᵀH = I , det(H) = 1).
          The proposal stage of Hastings' algorithm consists of choosing
          at random two indices i and j and an angle θ ∈ [0, 2π].
          The proposed replacement for the current rotation matrix H is
          then H′ = Eij (θ) H.
          Eij (θ) coincides with the identity matrix except for the entries
          in rows and columns i and j (a rotation by θ in the (i, j) plane).
          Since Eij (θ)−1 = Eij (−θ), the transition density is symmetric
          and the induced Markov chain is reversible.
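A sketch (mine, under the assumption of a uniform target on the rotation group, for which the symmetric proposal is always accepted) of the proposal step, checking that H stays orthogonal:

```python
import math, random

random.seed(0)

def plane_rotation(n, i, j, theta):
    # E_ij(theta): identity except for a rotation by theta in the (i, j) plane
    E = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    E[i][i] = math.cos(theta); E[i][j] = -math.sin(theta)
    E[j][i] = math.sin(theta); E[j][j] = math.cos(theta)
    return E

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

n = 4
H = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]  # start at I

# a few chain steps: random plane (i, j), random angle theta
for _ in range(10):
    i, j = random.sample(range(n), 2)
    theta = random.uniform(0.0, 2.0 * math.pi)
    H = matmul(plane_rotation(n, i, j, theta), H)

# H^T H should still be the identity after every update
Ht = [list(col) for col in zip(*H)]
HtH = matmul(Ht, H)
for r in range(n):
    for c in range(n):
        assert abs(HtH[r][c] - (1.0 if r == c else 0.0)) < 1e-9
print("H stays orthogonal under the rotation proposals")
```

Since each Eij (θ) has determinant 1, the chain also never leaves the rotation group.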


Estimating Pi using Monte Carlo methods (SAS output)

                                                       Problem: Estimate PI using Monte Carlo
                                                       integration.
                                                       Strategy: the equation of a circle with radius 1 is
                                                       x² + y² = 1, which can be written y = √(1 − x²).
                                                       Area of this circle = pi.
                                                       Area of this circle in the first quadrant = pi/4.
                                                       Generate Ux ∼ Uniform(0, 1) and Uy ∼ Uniform(0, 1).
                                                       Check to see if Uy ≤ √(1 − Ux²).

                                                       The proportion of generated points for which this
                                                       condition is true is an estimate of pi/4.
                                                       Based on 10,000 simulated points using SAS:
                                                       PI (SE) = 3.1056 (0.016)
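The same estimator is easy to reproduce outside SAS; a short Python sketch (mine, with a fixed seed, so the figures differ from the slide's SAS run):

```python
import math, random

random.seed(42)
n = 10_000
# count points with U_y <= sqrt(1 - U_x^2), i.e. falling under the quarter circle
hits = sum(random.random() <= math.sqrt(1.0 - random.random() ** 2)
           for _ in range(n))
p_hat = hits / n                              # estimates pi / 4
pi_hat = 4 * p_hat
se = 4 * math.sqrt(p_hat * (1 - p_hat) / n)   # binomial standard error
print(pi_hat, se)
```

The standard error shrinks like 1/√n, so refining the estimate by one decimal place costs a hundredfold more points.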
                                                          back




                                                                                                      40/40
            Hastings-University of Toronto   Reading Seminar:MCMC

More Related Content

More from Christian Robert

Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Christian Robert
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Christian Robert
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihoodChristian Robert
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)Christian Robert
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerChristian Robert
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Christian Robert
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussionChristian Robert
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceChristian Robert
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsChristian Robert
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distancesChristian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferenceChristian Robert
 

More from Christian Robert (20)

restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like sampler
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
the ABC of ABC
the ABC of ABCthe ABC of ABC
the ABC of ABC
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distances
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 

Recently uploaded

Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptxmary850239
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxAnupkumar Sharma
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfphamnguyenenglishnb
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomnelietumpap1
 
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...Postal Advocate Inc.
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4MiaBumagat1
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxChelloAnnAsuncion2
 
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...Nguyen Thanh Tu Collection
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
Karra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxKarra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxAshokKarra1
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxHumphrey A Beña
 

Recently uploaded (20)

Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptxLEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
 
ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choom
 
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
 
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERP
 
Karra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxKarra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptx
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
 

Hastings paper discussion

  • 1. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Monte Carlo Sampling methods using Markov Chains and their Applications Hastings-University of Toronto Reading seminar on classics: C.P.Robert presented by:Donia Skanji December 3, 2012 1/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 2. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Outline 1 Introduction 2 Monte Carlo Principle 3 Markov Chain Theory 4 MCMC 5 Conclusion 2/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 3. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Introduction to MCMC Methods 3/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 4. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Introduction: There are several numerical problems such as Integral computing and Maximum evaluation in large dimensional spaces Monte Carlo Methods are often applied to solve integration and optimisation problems. Monte Carlo Markov chain (MCMC) is one of the most known Monte Carlo methods. MCMC methods involve a large class of sampling algorithms that have had a greatest influence on science development. 4/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 5. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Study objectif To expose some relevant theory and techniques of application related to MCMC methods ♣ To present a generalization of Metropolis sampling method. 5/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 6. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Next Steps Monte Carlo Principle 6/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 7. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Next Steps Monte Carlo Principle Markov Chain 6/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 8. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Next Steps Monte Carlo Principle To introduce: Markov Chain 6/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 9. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Next Steps Monte Carlo Principle To introduce: -MCMC Methods Markov Chain 6/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 10. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Next Steps Monte Carlo Principle To introduce: -MCMC Methods -MCMC Algorithms Markov Chain 6/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 11. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Monte Carlo Methods 7/40 Hastings-University of Toronto Reading Seminar:MCMC
  • 12. Outline Introduction Monte Carlo Principle Markov Chain Theory MCMC Conclusion Overview The idea of Monte Carlo simulation is to draw an i.i.d. set of samples{x i }N from a target density π. i=1 These N samples can be used to approximate the target density with the following empirical point-mass function: 1 N πN (x) = N i=1 δx (i) (x) For independent samples, by Law of Large numbers, one can approximate the integrals I (f ) with tractable sums IN (f ) that converge as follows: 1 N i IN (f ) = N i=1 f (x ) → I (f ) = f (x)π(x)dx a.s see example 8/40 Hastings-University of Toronto Reading Seminar:MCMC
• 13. (Figure: N samples x^(1), …, x^(N) drawn from π.)
But independent sampling from π may be difficult, especially in a high-dimensional space.
• 14. It turns out that
(1/N) Σ_{i=1}^{N} f(x^(i)) → ∫ f(x) π(x) dx (N → ∞)
still applies if we generate the samples using a Markov chain (dependent samples). The idea of MCMC is to use Markov chain convergence properties to overcome the dimensionality problems met by regular Monte Carlo methods.
But first, some revision of Markov chains on a discrete state space χ.
• 15. Markov Chain Theory
• 16. Definition: Finite Markov Chain
A Markov chain is a mathematical system that undergoes transitions from one state to another, between a finite or countable number of possible states. It is a random process usually characterized as memoryless:
P(X^(t+1) | X^(0), X^(1), …, X^(t)) = P(X^(t+1) | X^(t))
• 17. Transition Matrix
Let P = {P_ij} be the transition matrix of a Markov chain with states 0, 1, 2, …, S. Then, if X^(t) denotes the state occupied by the process at time t, we have:
Pr(X^(t+1) = j | X^(t) = i) = P_ij
and if π^(t) denotes the distribution of the state at time t, it evolves as π^(t+1) = π^(t) · P.
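The update π^(t+1) = π^(t) · P can be sketched in a few lines of Python (the 2-state matrix below is a hypothetical illustration, not an example from the talk); iterating the update drives the distribution toward the stationary π:

```python
def step(dist, P):
    """One distribution update: pi_{t+1}[j] = sum_i pi_t[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical 2-state chain; solving pi = pi * P by hand
# (0.1*pi0 = 0.5*pi1, pi0 + pi1 = 1) gives pi = (5/6, 1/6).
P = [[0.9, 0.1],
     [0.5, 0.5]]
dist = [1.0, 0.0]        # start in state 0 with certainty
for _ in range(100):     # repeated updates converge to the stationary pi
    dist = step(dist, P)
```

After a hundred updates `dist` is numerically indistinguishable from the stationary distribution, which previews the stationarity property on the next slides.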
• 21. Properties: Stationarity / Irreducibility
Stationarity: as t → ∞, the Markov chain converges to its stationary (invariant) distribution: π = π · P.
Irreducibility: irreducible means that any state can be reached from any other state in a finite number of moves (for every i and j there is some n with P^n(i, j) > 0).
• 22. MCMC
The idea of the Markov chain Monte Carlo method is to choose the transition matrix P so that π (the target density, which is very difficult to sample from) is its unique stationary distribution.
Assume the Markov chain:
- has a stationary distribution π(X)
- is irreducible and aperiodic
Then we have an ergodic theorem.
Theorem (Ergodic Theorem): if the Markov chain (x_t) is irreducible, aperiodic and stationary, then for any function h with E|h| < ∞,
(1/N) Σ_{i=1}^{N} h(x_i) → ∫ h(x) dπ(x) as N → ∞
• 23. Summary
Recall that our goal is to build a Markov chain (X_t) using a transition matrix P so that the limiting distribution of (X_t) is the target density π, and integrals can be approximated using the ergodic theorem.
• 24. Question
How do we construct a Markov chain whose stationary distribution is the target distribution π?
Metropolis et al. (1953) showed how. The method was generalized by Hastings (1970).
• 25. Construction of the transition matrix
In order to construct a Markov chain with π as its stationary distribution, we have to consider a transition matrix P that satisfies the reversibility condition: for all i and j,
π_i p(i → j) = π_j p(j → i), i.e. π_i p_ij = π_j p_ji
This property ensures that Σ_i π_i p_ij = π_j (the definition of a stationary distribution) and hence that π is a stationary distribution of P.
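Summing the reversibility condition over i gives Σ_i π_i p_ij = Σ_i π_j p_ji = π_j, which is exactly the stationarity claim. A quick numeric check in Python (the 3-state chain below is a made-up example chosen to satisfy detailed balance, not one from the paper):

```python
def detailed_balance_holds(pi, P, tol=1e-12):
    """Check pi_i * P[i][j] == pi_j * P[j][i] for all pairs (i, j)."""
    n = len(pi)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < tol
               for i in range(n) for j in range(n))

def is_stationary(pi, P, tol=1e-12):
    """Check that pi * P == pi, i.e. pi is invariant under P."""
    n = len(pi)
    return all(abs(sum(pi[i] * P[i][j] for i in range(n)) - pi[j]) < tol
               for j in range(n))

# Hypothetical reversible 3-state chain with stationary distribution pi.
pi = [0.5, 0.3, 0.2]
P = [[0.70, 0.18, 0.12],
     [0.30, 0.50, 0.20],
     [0.30, 0.30, 0.40]]
```

Verifying detailed balance is a purely local check on pairs (i, j), which is what makes it so much more convenient than verifying stationarity directly.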
• 26. Construction of the transition matrix
How do we choose the transition matrix P so that the reversibility condition π_i P_ij = π_j P_ji is verified?
• 27. Overview
Suppose that we have a proposal matrix Q with Σ_j q_ij = 1. If it happens that Q itself satisfies the reversibility condition π_i q_ij = π_j q_ji for all i and j, then our search is over — but most likely it will not. We might find, for example, that for some i and j: π_i q_ij > π_j q_ji.
A convenient way to correct this is to reduce the number of moves from i to j by introducing a probability α_ij that the move is made.
• 28. The choice of the transition matrix
We assume that the transition matrix P has the form:
P_ij = q_ij α_ij if i ≠ j
P_ii = 1 − Σ_{j≠i} P_ij
where:
Q = {q_ij} is the proposal (or jumping) matrix of an arbitrary Markov chain on the states 0, 1, …, S, which suggests a new sample value j given a sample value i;
α_ij is the acceptance probability of the move from state i to state j.
• 29. In order to obtain the reversibility condition, we have to verify:
π_i p_ij = π_j p_ji, i.e. π_i α_ij q_ij = π_j α_ji q_ji (∗)
The probabilities α_ij and α_ji are introduced to ensure that the two sides of (∗) are in balance. In his paper, Hastings defined a generic form of the acceptance probability:
α_ij = s_ij / (1 + (π_i q_ij)/(π_j q_ji))
where s_ij is a symmetric function of i and j (s_ij = s_ji) chosen so that 0 ≤ α_ij ≤ 1 for all i and j.
With this form of P_ij and α_ij suggested by Hastings, the reversibility condition is readily verified.
• 30. The acceptance probability α: the choice of α
Recall that in this paper, Hastings defined the acceptance probability α_ij as follows:
α_ij = s_ij / (1 + (π_i q_ij)/(π_j q_ji))
For specific choices of s_ij, we recognize the acceptance probabilities suggested by both:
⊕ Metropolis et al. (1953)
⊕ Barker (1965)
• 31. The acceptance probability α: the choice of s_ij
Two choices of s_ij are given for all i and j by:
s_ij^(M) = 1 + (π_i q_ij)/(π_j q_ji) if (π_j q_ji)/(π_i q_ij) ≥ 1
s_ij^(M) = 1 + (π_j q_ji)/(π_i q_ij) if (π_j q_ji)/(π_i q_ij) ≤ 1
When q_ij = q_ji and s_ij = s_ij^(M), we have the method devised by Metropolis et al. with α_ij^(M) = min(1, π_j/π_i).
When q_ij = q_ji and s_ij = s_ij^(B) = 1, we have the method devised by Barker with α_ij^(B) = π_j/(π_i + π_j).
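For a symmetric proposal both rules depend only on the ratio r = π_j/π_i; a small Python sketch (not from the slides) makes the comparison concrete:

```python
def alpha_metropolis(r):
    """Metropolis acceptance for a symmetric proposal; r = pi_j / pi_i."""
    return min(1.0, r)

def alpha_barker(r):
    """Barker acceptance for a symmetric proposal; r = pi_j / pi_i."""
    return r / (1.0 + r)
```

Since min(1, r) ≥ r/(1 + r) for every r > 0, the Metropolis rule accepts at least as often as Barker's — one informal point of comparison, though as the following remark notes, little was settled at the time.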
• 32. Remark
In this paper, Hastings mentioned that little is known about the merits of these two choices, s_ij^(M) and s_ij^(B).
• 33. The Proposal Matrix Q: the choice of Q
It has been recognised that the choice of the proposal matrix/density is crucial to the success (rapid convergence) of an MCMC algorithm. The proposal matrix can be almost arbitrary; a good choice lets the chain reach all states frequently and ensures a high acceptance rate.
• 40. Algorithm
1 First, pick a proposal matrix Q(i, j) of an arbitrary Markov chain on the states 0, 1, …, S, which suggests a new sample value j given a sample value i.
2 Also, start with some arbitrary point i_0 as the first sample.
3 Then, to return a new sample j given the most recent sample i, we proceed as follows:
4 Generate a proposed new sample value j from the jumping distribution Q(i → j).
5 Accept the proposal with probability α(i → j):
- if the proposal is accepted, move to j and return to step 4;
- repeat until a sample of the desired size is obtained.
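The steps above can be sketched in Python. This is an illustrative discrete sampler with a symmetric ±1 random-walk proposal and a made-up 5-state target, not code from the talk; with a symmetric Q the acceptance probability reduces to min(1, π_j/π_i):

```python
import random

def metropolis_hastings(pi, n_samples, x0=0, seed=0):
    """Sample states 0..S from pi with a symmetric +/-1 random-walk
    proposal; out-of-range proposals are rejected (chain stays put)."""
    rng = random.Random(seed)
    S = len(pi) - 1
    x, chain = x0, []
    for _ in range(n_samples):
        y = x + rng.choice([-1, 1])                       # step 4: propose a neighbour
        if 0 <= y <= S and rng.random() < min(1.0, pi[y] / pi[x]):
            x = y                                         # step 5: accept the move
        chain.append(x)
    return chain

target = [0.1, 0.2, 0.4, 0.2, 0.1]   # hypothetical target on 5 states
chain = metropolis_hastings(target, 200_000)
freq = [chain.count(s) / len(chain) for s in range(5)]
```

The empirical frequencies `freq` approach `target`, and only ratios π_j/π_i are ever evaluated, so the normalizing constant of π is never needed.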
• 41. Remarks
An empirical way of checking convergence is to let two or more different chains run in parallel and see whether they concentrate on the same place.
The calculation of α does not require knowledge of the normalizing constant of π, because it appears in both the numerator and the denominator.
Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a burn-in period is typically necessary.
• 42. Example: Poisson distribution as the target distribution
Consider π as the Poisson distribution with intensity λ > 0:
π_i = e^(−λ) λ^i / i!, where i = 0, 1, 2, …
Hastings (1970) suggests the following proposal transition matrix:
q_00 = q_01 = 1/2 if i = 0
q_ij = 1/2 if j = i − 1 or j = i + 1 (for i ≥ 1)
q_ij = 0 otherwise
Q is in fact symmetric, and the algorithm reduces to that of Metropolis.
• 43. The resulting transition probabilities are:
p_ij = q_ij α_ij^(M) =
(1/2) min(1, i/λ) if j = i − 1
(1/2) min(1, λ/(i + 1)) if j = i + 1
1 − p_{i,i−1} − p_{i,i+1} if j = i
0 otherwise
For i = 0:
p_0j = (1/2) min(1, λ) if j = 1
p_0j = 1 − (1/2) min(1, λ) if j = 0
p_0j = 0 otherwise
This transition matrix is aperiodic and irreducible. In practice, if λ is small, this choice of Q seems to work fairly well and approximates π quickly.
• 52. Algorithm
Given a starting point i, we take:
j = i + 1 with probability 1/2, or j = i − 1 with probability 1/2, i.e.
q_ij = (1/2) δ_{i−1}(j) + (1/2) δ_{i+1}(j)
We calculate the Metropolis-Hastings ratio:
α_ij = min{1, π(j)/π(i)} = min{1, λ^(j−i) × i!/j!}
Let u ∼ U[0, 1]:
if u ≤ α_ij then X_{k+1} = j
else X_{k+1} = X_k = i
• 53. R implementation
library(mcsm)
fact <- function(n) { gamma(n + 1) }
poissonf <- function(n, lambda, x0) {
  x <- x0
  xn <- x0
  for (i in 1:n) {
    if (xn != 0) y <- xn + (2 * rbinom(1, 1, 0.5) - 1)   # propose i - 1 or i + 1
    else y <- rbinom(1, 1, 0.5)                          # at 0: stay or move to 1
    alpha <- min(1, lambda^(y - xn) * fact(xn) / fact(y))
    if (runif(1) < alpha) { xn <- y }                    # accept with probability alpha
    x <- c(x, xn)
  }
  x
}
• 55. Multivariate target
If the distribution π is d-dimensional and the simulated process is X(t) = {X_1(t), …, X_d(t)}, we may use the following techniques to construct the transition matrix P:
1 In the transition from t to t + 1, all co-ordinates of X(t) may be changed.
2 In the transition from t to t + 1, only one co-ordinate of X(t) may be changed, that co-ordinate being selected at random among the d co-ordinates.
3 In the transition from t to t + 1, only one co-ordinate may change, the co-ordinate being selected in a fixed rather than a random sequence.
• 56. Hastings' justification
Hastings transformed the d-dimensional problem into a sequence of one-dimensional problems. The approach is based on updating one component at a time.
♣ The transition matrix is defined as the product P = P_1 · P_2 ⋯ P_d.
For each k = 1, …, d, P_k is constructed so that π P_k = π.
π will then be a stationary distribution of P, since π P = π P_1 ⋯ P_d = π P_2 ⋯ P_d = ⋯ = π.
(See also: random orthogonal matrices.)
• 57. Conclusion
+ In this paper, Hastings gives a generalization of the Metropolis et al. (1953) approach.
+ He also introduced the Gibbs sampling strategy when presenting the multivariate target.
+ Hastings treated the continuous case using a discretization analogy.
− Little information is given about the merits of the Metropolis and Barker acceptance forms.
• 58. Thank You For Your Attention
• 59. Bibliography
[1] W.K. Hastings (1970). Monte Carlo Sampling Methods Using Markov Chains and Their Applications.
[2] Christian P. Robert (2010). Introducing Monte Carlo Methods with R.
[3] Kenneth Lange (2010). Numerical Analysis for Statisticians.
[4] Siddhartha Chib (1995). Understanding the Metropolis-Hastings Algorithm.
[5] Robert Gray (2001). Advanced Statistical Computing.
• 60. Random orthogonal matrices
Hastings suggests an interesting chain on the space of n × n orthogonal matrices (HᵀH = I, det(H) = 1).
The proposal stage of Hastings's algorithm consists of choosing at random two indices i and j and an angle θ ∈ [0, 2π].
The proposed replacement for the current rotation matrix H is then H' = E_ij(θ) · H, where E_ij(θ) coincides with the identity matrix except for some entries in rows and columns i and j.
Since E_ij(θ)^(−1) = E_ij(−θ), the transition density is symmetric and the induced Markov chain is reversible.
• 61. Estimating π using Monte Carlo methods (SAS output)
Problem: estimate π using Monte Carlo integration.
Strategy: the equation of a circle with radius 1 is x² + y² = 1, which can be written y = √(1 − x²).
The area of this circle is π; the area of the circle in the first quadrant is π/4.
Generate U_x ∼ Uniform(0, 1) and U_y ∼ Uniform(0, 1), and check whether U_y ≤ √(1 − U_x²).
The proportion of generated points for which this condition holds is an estimate of π/4.
Based on 10,000 simulated points using SAS: π̂ (SE) = 3.1056 (0.016).
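The same estimator is easy to reproduce; a minimal Python sketch (the original SAS program is not shown in the slides):

```python
import random

def estimate_pi(n=10_000, seed=0):
    """Monte Carlo estimate of pi: fraction of uniform points in the unit
    square with Uy <= sqrt(1 - Ux^2), i.e. Ux^2 + Uy^2 <= 1, times 4."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * hits / n

pi_hat = estimate_pi(100_000)
```

The standard error of the estimate is 4·√(p(1 − p)/n) with p = π/4, which for n = 10,000 gives ≈ 0.016, matching the SE reported on the slide.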