Wang–Landau algorithm
                         Improvements
                        2D Ising model
                            Conclusion




    Parallel Adaptive Wang–Landau Algorithm

                             Pierre E. Jacob

CEREMADE - Université Paris Dauphine & CREST, funded by AXA Research


                           15 November 2011


joint work with Luke Bornn (UBC), Arnaud Doucet (Oxford), Pierre Del Moral
                     (INRIA & Université de Bordeaux)




                        Pierre E. Jacob   PAWL                               1/ 18


Outline


  1   Wang–Landau algorithm

  2   Improvements
        Automatic Binning
        Parallel Interacting Chains
        Adaptive proposals

  3   2D Ising model

  4   Conclusion





Wang–Landau


 Context
     unnormalized target density π
      on a state space X

 A kind of adaptive MCMC algorithm
      It iteratively generates a sequence Xt .
      The stationary distribution is not π itself.
      At each iteration a different stationary distribution is targeted.






Wang–Landau


 Partition the space
 The state space X is cut into d bins:

                 X = ⋃_{i=1}^{d} X_i    and    X_i ∩ X_j = ∅  for all i ≠ j


 Goal
        The generated sequence spends the same time in each bin Xi ,
        within each bin Xi the sequence is asymptotically distributed
        according to the restriction of π to Xi .





Wang–Landau


 Stationary distribution
 Define the mass of π over Xi by:

                          ψ_i = ∫_{X_i} π(x) dx

 The stationary distribution of the WL algorithm is:

                      π_ψ(x) ∝ π(x) × 1 / ψ_{J(x)}

 where J(x) is the index such that x ∈ X_{J(x)}.
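As a sanity check, ψ_i and π_ψ can be computed numerically on a toy one-dimensional target. This sketch (a hypothetical bimodal density with two bins split at x = 0; not part of the slides) verifies that π_ψ puts equal mass on each bin:

```python
import numpy as np

# Hypothetical unnormalized bimodal target on the real line.
def pi(x):
    return np.exp(-0.5 * (x + 2.0) ** 2) + np.exp(-0.5 * (x - 2.0) ** 2)

grid = np.linspace(-10.0, 10.0, 20001)
dx = grid[1] - grid[0]

# Bin index J(x): bin 0 for x < 0, bin 1 otherwise.
J = (grid >= 0.0).astype(int)

# psi_i = integral of pi over bin X_i (Riemann sum on the grid).
psi = np.array([pi(grid[J == i]).sum() * dx for i in range(2)])

# Biased target pi_psi(x) ∝ pi(x) / psi_{J(x)}: each bin carries equal mass.
pi_psi = pi(grid) / psi[J]
mass = np.array([pi_psi[J == i].sum() * dx for i in range(2)])
print(mass)  # both bin masses equal 1 by construction
```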





Wang–Landau

 Example with a bimodal, univariate target density: π and two πψ
 corresponding to different partitions.

  [Figure: log-density (y-axis) against X (x-axis) in three panels:
  the original density with partition lines, the target biased by X,
  and the target biased by log density.]






Wang–Landau


 Plugging estimates
 In practice we cannot compute ψi analytically. Instead we plug in
 estimates θt (i) of ψi at iteration t, and define the distribution πθt
 by:
                      π_{θ_t}(x) ∝ π(x) × 1 / θ_t(J(x))

  Metropolis–Hastings
  At iteration t, the algorithm performs a Metropolis–Hastings step
  targeting π_{θ_t}, generating a new point X_t.





Wang–Landau




 Estimate of the bias
 The update of the estimated bias θt (i) is done according to:

               θ_t(i) ← θ_{t−1}(i) [1 + γ_t (1{X_t ∈ X_i} − d⁻¹)]

  with (γ_t) a decreasing sequence of step sizes, e.g. γ_t = 1/t.
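One Wang–Landau iteration is thus a Metropolis–Hastings move on π_{θ_t} followed by the multiplicative update of θ. A minimal single-chain sketch on a hypothetical bimodal 1D target (two bins, γ_t = 1/t; an illustration, not the PAWL implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):
    # hypothetical bimodal target: mixture of two unit-variance Gaussians
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

d = 2
bin_of = lambda x: 0 if x < 0.0 else 1   # J(x): two bins split at x = 0
theta = np.ones(d)                        # running estimates theta_t(i) of psi_i
x = -3.0
counts = np.zeros(d)

for t in range(1, 20001):
    # Metropolis-Hastings step targeting pi_theta(x) ∝ pi(x) / theta(J(x))
    y = x + rng.normal(scale=2.0)
    log_ratio = (log_pi(y) - np.log(theta[bin_of(y)])) \
              - (log_pi(x) - np.log(theta[bin_of(x)]))
    if np.log(rng.uniform()) < log_ratio:
        x = y
    # bias update: theta_t(i) <- theta_{t-1}(i)[1 + gamma_t(1{X_t in X_i} - 1/d)]
    gamma_t = 1.0 / t
    indicator = np.zeros(d)
    indicator[bin_of(x)] = 1.0
    theta *= 1.0 + gamma_t * (indicator - 1.0 / d)
    counts[bin_of(x)] += 1

print(counts / counts.sum())  # roughly equal occupation of the two bins
```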






Wang–Landau




 Result
 In the end we get:
     a sequence Xt asymptotically following πψ ,
     as well as estimates θt (i) of ψi .






Automatic Binning
  Goals: easily move from one bin to another, and maintain approximate
  uniformity within each bin. If a bin is non-uniform, split it.
            [Figure: histograms of the log density within a bin,
            (a) before the split and (b) after the split;
            x-axis log density, y-axis frequency.]



Parallel Interacting Chains


  N chains (X_t^{(1)}, . . . , X_t^{(N)}) instead of one.
       targeting the same biased distribution πθt at iteration t,
       sharing the same estimated bias θt at iteration t.

  The update of the estimated bias becomes:
         θ_t(i) ← θ_{t−1}(i) [1 + γ_t ( (1/N) Σ_{j=1}^{N} 1{X_t^{(j)} ∈ X_i} − d⁻¹ )]
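The interacting-chains update replaces the single-chain indicator with its empirical average over the N chains, which vectorizes naturally. A sketch of just this update (bin memberships at iteration t taken as given; sizes d = 4, N = 8 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 8                              # number of bins, number of chains
theta = np.ones(d)                       # shared estimated bias theta_t
bins_t = rng.integers(0, d, size=N)      # J(X_t^(j)) for each chain j

gamma_t = 0.1
# empirical occupancy (1/N) sum_j 1{X_t^(j) in X_i}, for every bin i at once
occupancy = np.bincount(bins_t, minlength=d) / N
theta *= 1.0 + gamma_t * (occupancy - 1.0 / d)
print(theta)
```

Since the occupancies sum to one, the correction terms (occupancy − 1/d) sum to zero, so starting from θ = (1, …, 1) the updated values still sum to d.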






Adaptive proposals

  For continuous state spaces
  We can use an adaptive random-walk proposal whose scale σ_t is learned
  along the iterations to target a given acceptance rate.

  Robbins-Monro stochastic approximation update

                 σt+1 = σt + ρt (2I(A > 0.234) − 1)

  Or alternatively

                         Σt = δ × Cov (X1 , . . . , Xt )
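Both adaptation rules are cheap to implement. Below is a sketch of the Robbins–Monro rule from the slide with a stand-in constant acceptance rate A = 0.4 (in the algorithm, A would come from the chain's recent acceptances), followed by the covariance rule, taking the classical scaling δ = 2.38²/dim as an assumed choice:

```python
import numpy as np

# Robbins-Monro update of the random-walk scale: grow sigma while the
# acceptance rate A exceeds the 0.234 target, shrink it otherwise.
sigma = 1.0
for t in range(1, 1001):
    A = 0.4                               # stand-in acceptance rate (above target)
    rho_t = 1.0 / t                       # decreasing step size
    sigma += rho_t * (2.0 * float(A > 0.234) - 1.0)
# sigma has increased: too many acceptances call for bigger proposed moves

# Alternative: scale the empirical covariance of the chain's history.
history = np.random.default_rng(2).normal(size=(500, 2))  # stand-in past states
delta = 2.38 ** 2 / history.shape[1]      # classical scaling (an assumption here)
Sigma_t = delta * np.cov(history, rowvar=False)
print(sigma, Sigma_t.shape)
```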




2D Ising model
  Higdon (1998), JASA 93(442)
  Target density
  Consider a 2D Ising model, with posterior density
          π(x|y) ∝ exp( α Σ_i I[y_i = x_i] + β Σ_{i∼j} I[x_i = x_j] )
  with α = 1, β = 0.7.

      The first term (likelihood) encourages states x which are
      similar to the original image y .
      The second term (prior) favors states x for which
      neighbouring pixels are equal, like a Potts model.
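Both terms of this posterior are easy to evaluate on a pixel grid. A sketch of the unnormalized log posterior for small binary images, with the nearest-neighbour pairs i ∼ j taken as horizontal and vertical adjacencies (names are illustrative):

```python
import numpy as np

alpha, beta = 1.0, 0.7   # values from the slide

def log_post(x, y):
    """Unnormalized log pi(x | y) for {0,1} images x (state) and y (data)."""
    likelihood = alpha * np.sum(x == y)                 # matches with the image y
    prior = beta * (np.sum(x[:, :-1] == x[:, 1:])       # horizontal neighbour pairs
                    + np.sum(x[:-1, :] == x[1:, :]))    # vertical neighbour pairs
    return likelihood + prior

y = np.zeros((3, 3), dtype=int)
x_flip = y.copy()
x_flip[1, 1] = 1
print(log_post(y, y), log_post(x_flip, y))
```

On the 3×3 all-zero image, the perfectly matching all-equal state scores 1·9 + 0.7·12 = 17.4, and flipping the centre pixel lowers both the likelihood and the prior term.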


2D Ising models




         (a) Original Image                 (b) Focused Region of Image





2D Ising models

  [Figure: snapshots at iterations 300,000 to 500,000 (every 50,000) of the
  states explored (pixels on/off, axes X1 and X2) by Metropolis–Hastings
  (top row) and Wang–Landau (bottom row).]




  Figure: Spatial model example: states explored over 200,000 iterations
  for Metropolis-Hastings (top) and proposed algorithm (bottom).





2D Ising models
  [Figure: average state (pixel values from 0.4 to 1.0, axes X1 and X2)
  under Metropolis–Hastings (left) and Wang–Landau (right).]




  Figure: Spatial model example: average state explored with
  Metropolis-Hastings (left) and Wang-Landau after importance sampling
  (right).



Conclusion


  Automatic binning
  We still have to define a range.

  Parallel Chains
  In practice it is more efficient to use N chains for T iterations
  instead of 1 chain for N × T iterations.

  Adaptive Proposals
  Convergence results with fixed proposals are already challenging,
  and making the proposal adaptive might add a layer of complexity.





Bibliography


      Article: An Adaptive Interacting Wang-Landau Algorithm for
      Automatic Density Exploration, L. Bornn, P.E. Jacob, P. Del
      Moral, A. Doucet, available on arXiv.
      Software: PAWL, an R package, available on CRAN:
                       install.packages("PAWL")
  References:
      F. Wang, D. Landau, Physical Review E, 64(5):56101
      Y. Atchadé, J. Liu, Statistica Sinica, 20:209–233





Parallel Adaptive Wang Landau - GDR November 2011
