Bayesian Core: A Practical Approach to Computational Bayesian Statistics

7   Image analysis
      Computer Vision and Classification
      Image Segmentation
The k-nearest-neighbor method



       The k-nearest-neighbor (knn) procedure has been used in data
       analysis and machine learning communities as a quick way to
       classify objects into predefined groups.

       This approach requires a training dataset where both the class y
       and the vector x of characteristics (or covariates) of each
       observation are known.




The k-nearest-neighbor method



The training dataset $(y_i, x_i)_{1 \le i \le n}$ is used by the
k-nearest-neighbor procedure to predict the value of $y_{n+1}$ given a
new vector of covariates $x_{n+1}$ in a very rudimentary manner.

The predicted value of $y_{n+1}$ is simply the most frequent class found
amongst the k nearest neighbors of $x_{n+1}$ in the set $(x_i)_{1 \le i \le n}$.
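
As a concrete illustration, a minimal NumPy sketch of this majority-vote rule follows; the function name `knn_predict`, the Euclidean metric, and the toy data are choices of this sketch, not prescriptions of the book.

```python
import numpy as np

def knn_predict(x_new, X, y, k):
    """Predict the class of x_new by majority vote among the k nearest
    neighbors of x_new within the training covariates X (labels y)."""
    dist = np.linalg.norm(X - x_new, axis=1)   # Euclidean distances to x_new
    nearest = np.argsort(dist)[:k]             # indices of the k nearest points
    labels, counts = np.unique(y[nearest], return_counts=True)
    return labels[np.argmax(counts)]           # most frequent class wins

# toy usage: two Gaussian clouds in the plane
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.repeat([0, 1], 20)
print(knn_predict(np.array([2.5, 2.5]), X, y, k=5))
```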




The k-nearest-neighbor method




       The classical knn procedure does not involve much calibration and
       requires no statistical modeling at all!

       There exists a Bayesian reformulation.




vision dataset (1)



       vision: 1373 color pictures, described by 200 variables rather than
       by the whole table of 500 × 375 pixels

       Four classes of images: class C1 for motorcycles, class C2 for
       bicycles, class C3 for humans and class C4 for cars




vision dataset (2)



       Typical issue in computer vision problems: build a classifier to
       identify a picture pertaining to a specific topic without human
       intervention

We use about half of the images (684 pictures) to construct the
training dataset and we save the 689 remaining images to test the
performance of our procedures.




vision dataset (3)




A probabilistic version of the knn methodology (1)


Symmetrization of the neighborhood relation: if $x_i$ belongs to the
k-nearest-neighborhood of $x_j$ and $x_j$ does not belong to the
k-nearest-neighborhood of $x_i$, the point $x_j$ is added to the set of
neighbors of $x_i$.

Notation: $i \sim_k j$

The transformed set of neighbors is then called the symmetrized
k-nearest-neighbor system.
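
The symmetrized system is easy to build from the covariate matrix; here is a brute-force sketch (the helper name `symmetrized_knn` and the pairwise-distance computation are illustrative assumptions).

```python
import numpy as np

def symmetrized_knn(X, k):
    """For each i, return the symmetrized k-nearest-neighbor set: j ~k i
    whenever j is among the k nearest of i OR i is among the k nearest of j."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)             # a point is not its own neighbor
    knn = np.argsort(dist, axis=1)[:, :k]      # plain k nearest of each point
    neighbors = [set(knn[i]) for i in range(n)]
    for i in range(n):
        for j in knn[i]:
            neighbors[j].add(i)                # add the missing reverse links
    return neighbors
```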




A probabilistic version of the knn methodology (2)




A probabilistic version of the knn methodology (3)

                                                                                                   

                                                             exp β                ICj (yℓ )      Nk 
                                                                           ℓ∼k i
             P(yi = Cj |y−i , X, β, k) =                                                            ,
                                                           G
                                                                exp β                ICg (yℓ )    Nk 
                                                          g=1                 ℓ∼k i

       Nk being the average number of neighbours over all xi ’s, β > 0
       and

           y−i = (y1 , . . . , yi−1 , yi+1 , . . . , yn )                 and X = {x1 , . . . , xn } .
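
This full conditional translates directly into code; the sketch below assumes classes coded 0, ..., G-1, the symmetrized neighbor sets from the previous sketch, and the average neighborhood size `Nk`.

```python
import numpy as np

def knn_full_conditional(i, y, neighbors, beta, G, Nk):
    """Vector of probabilities P(y_i = C_g | y_{-i}, X, beta, k), g = 1..G,
    with neighbors[i] the symmetrized k-nn set of x_i."""
    counts = np.zeros(G)
    for l in neighbors[i]:                     # sum over l ~k i of I_{C_g}(y_l)
        counts[y[l]] += 1
    w = np.exp(beta * counts / Nk)
    return w / w.sum()                         # normalize over the G classes
```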


A probabilistic version of the knn methodology (4)

$\beta$ grades the influence of the prevalent neighborhood class;
the probabilistic knn model is conditional on the covariate matrix $X$;
the frequencies $n_1/n, \ldots, n_G/n$ of the classes within the
training set are representative of the marginal probabilities
$p_1 = \mathbb{P}(y_i = C_1), \ldots, p_G = \mathbb{P}(y_i = C_G)$.

If the marginal probabilities $p_g$ are known and different from $n_g/n$,
the various classes are reweighted according to their true frequencies.


Bayesian analysis of the knn probabilistic model

       From a Bayesian perspective, given a prior distribution π(β, k)
       with support [0, βmax ] × {1, . . . , K}, the marginal predictive
       distribution of yn+1 is

\[
\mathbb{P}(y_{n+1} = C_j \mid x_{n+1}, y, X)
  = \sum_{k=1}^{K} \int_{0}^{\beta_{\max}}
    \mathbb{P}(y_{n+1} = C_j \mid x_{n+1}, y, X, \beta, k)\,
    \pi(\beta, k \mid y, X)\,\mathrm{d}\beta\,,
\]


       where π(β, k|y, X) is the posterior distribution given the training
       dataset (y, X).



Unknown normalizing constant


Major difficulty: it is impossible to compute $f(y \mid X, \beta, k)$.

Use instead the pseudo-likelihood made of the product of the full
conditionals,

\[
\hat f(y \mid X, \beta, k)
  = \prod_{g=1}^{G} \prod_{i : y_i = C_g}
    \mathbb{P}(y_i = C_g \mid y_{-i}, X, \beta, k)\,.
\]
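
On the log scale (for numerical stability) the pseudo-likelihood is a sum of log full conditionals; this sketch reuses the hypothetical `knn_full_conditional` defined above.

```python
import numpy as np

def log_pseudo_likelihood(y, neighbors, beta, G, Nk):
    """log of the pseudo-likelihood f-hat(y | X, beta, k): sum over i of the
    log full conditional probability of the observed class y_i."""
    return sum(
        np.log(knn_full_conditional(i, y, neighbors, beta, G, Nk)[y[i]])
        for i in range(len(y))
    )
```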




Pseudo-likelihood approximation

Pseudo-posterior distribution

\[
\hat\pi(\beta, k \mid y, X) \propto \hat f(y \mid X, \beta, k)\, \pi(\beta, k)
\]

Pseudo-predictive distribution

\[
\hat{\mathbb{P}}(y_{n+1} = C_j \mid x_{n+1}, y, X)
  = \sum_{k=1}^{K} \int_{0}^{\beta_{\max}}
    \mathbb{P}(y_{n+1} = C_j \mid x_{n+1}, y, X, \beta, k)\,
    \hat\pi(\beta, k \mid y, X)\,\mathrm{d}\beta\,.
\]




MCMC implementation (1)

       MCMC approximation is required

       Random walk Metropolis–Hastings algorithm

       Both β and k are updated using random walk proposals:

(i) for $\beta$, the logistic transformation

\[
\beta^{(t)} = \beta_{\max} \exp(\theta^{(t)}) \big/ \bigl(\exp(\theta^{(t)}) + 1\bigr)\,,
\]

in order to be able to simulate a normal random walk on the $\theta$'s,
$\tilde\theta \sim \mathcal{N}(\theta^{(t)}, \tau^2)$;


MCMC implementation (2)

(ii) for $k$, we use a uniform proposal on the $r$ neighbors of $k^{(t)}$,
namely on $\{k^{(t)} - r, \ldots, k^{(t)} - 1, k^{(t)} + 1, \ldots, k^{(t)} + r\} \cap \{1, \ldots, K\}$.

Using this algorithm, we can thus derive the most likely class
associated with a covariate vector $x_{n+1}$ from the approximated
probabilities,

\[
\frac{1}{M} \sum_{i=1}^{M}
  \mathbb{P}\bigl(y_{n+1} = l \mid x_{n+1}, y, X, (\beta^{(i)}, k^{(i)})\bigr)\,.
\]
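
One sweep of this random walk Metropolis–Hastings sampler could be sketched as follows. The sketch reuses the hypothetical `log_pseudo_likelihood` above, assumes precomputed neighborhood systems `neighbors_by_k[k]` and average sizes `Nk_by_k[k]` for each candidate k, includes the Jacobian of the logistic transform so that the prior on β remains uniform on [0, βmax], and ignores the mild asymmetry of the proposal on k at the boundaries.

```python
import numpy as np

def mh_step(theta, k, y, neighbors_by_k, Nk_by_k, G, beta_max, K, tau2, r, rng):
    """One update of (theta, k), with beta = beta_max e^theta / (e^theta + 1)."""
    def log_post(th, kk):
        beta = beta_max * np.exp(th) / (np.exp(th) + 1.0)
        log_jac = th - 2.0 * np.log1p(np.exp(th))   # keeps the prior on beta uniform
        return log_pseudo_likelihood(y, neighbors_by_k[kk], beta, G,
                                     Nk_by_k[kk]) + log_jac
    # (i) normal random walk on theta
    theta_prop = rng.normal(theta, np.sqrt(tau2))
    if np.log(rng.uniform()) < log_post(theta_prop, k) - log_post(theta, k):
        theta = theta_prop
    # (ii) uniform proposal on the r neighbors of k within {1, ..., K}
    cand = [kk for kk in range(k - r, k + r + 1) if kk != k and 1 <= kk <= K]
    k_prop = int(rng.choice(cand))
    if np.log(rng.uniform()) < log_post(theta, k_prop) - log_post(theta, k):
        k = k_prop
    return theta, k
```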




MCMC implementation (3)




[Figure: $\beta$ and $k$ sequences and histograms for the knn
Metropolis–Hastings sampler based on 20,000 iterations, with
$\beta_{\max} = 15$, $K = 83$, $\tau^2 = 0.05$ and $r = 1$.]


Image Segmentation

The underlying structure of the “true” pixels is denoted by x,
while the observed image is denoted by y.

       Both objects x and y are arrays, with each entry of x taking a
       finite number of values and each entry of y taking real values.

       We are interested in the posterior distribution of x given y
       provided by Bayes’ theorem, π(x|y) ∝ f (y|x)π(x).

       The likelihood, f (y|x), describes the link between the observed
       image and the underlying classification.



Menteith dataset (1)



       The Menteith dataset is a 100 × 100 pixel satellite image of the
       lake of Menteith.

       The lake of Menteith is located in Scotland, near Stirling, and
       offers the peculiarity of being called “lake” rather than the
       traditional Scottish “loch.”




Menteith dataset (2)



[Figure: satellite image of the lake of Menteith.]

Random Fields



       If we take a lattice I of sites or pixels in an image, we denote by
       i ∈ I a coordinate in the lattice. The neighborhood relation is
       then denoted by ∼.

       A random field on I is a random structure indexed by the lattice
       I, a collection of random variables {xi ; i ∈ I} where each xi takes
       values in a finite set χ.




Markov Random Fields


Let $x_{n(i)}$ be the set of values taken by the neighbors of $i$.

A random field is a Markov random field (MRF) if the conditional
distribution of any pixel given the other pixels only depends on the
values of the neighbors of that pixel; i.e., for $i \in I$,

\[
\pi(x_i \mid x_{-i}) = \pi(x_i \mid x_{n(i)})\,.
\]




Ising Models

       If pixels of the underlying (true) image x can only take two colors
       (black and white, say), x is then binary, while y is a grey-level
       image.

       We typically refer to each pixel xi as being foreground if xi = 1
       (black) and background if xi = 0 (white).

       We have

\[
\pi(x_i = j \mid x_{-i}) \propto \exp(\beta n_{i,j})\,, \qquad \beta > 0\,,
\]

where $n_{i,j} = \sum_{\ell \in n(i)} \mathbb{I}_{x_\ell = j}$ is the number
of neighbors of $x_i$ with color $j$.

Ising Models

The Ising model is defined via full conditionals

\[
\pi(x_i = 1 \mid x_{-i})
  = \frac{\exp(\beta n_{i,1})}{\exp(\beta n_{i,0}) + \exp(\beta n_{i,1})}\,,
\]

and the joint distribution satisfies

\[
\pi(x) \propto \exp\Bigl(\beta \sum_{j \sim i} \mathbb{I}_{x_j = x_i}\Bigr)\,,
\]


       where the summation is taken over all pairs (i, j) of neighbors.



Simulating from the Ising Models




       The normalizing constant of the Ising Model is intractable except
       for very small lattices I...

       Direct simulation of x is not possible!




Ising Gibbs Sampler

Algorithm (Ising Gibbs Sampler)

Initialization: For $i \in I$, generate independently

\[
x_i^{(0)} \sim \mathcal{B}(1/2)\,.
\]

Iteration $t$ ($t \ge 1$):
  1. Generate $u = (u_i)_{i \in I}$, a random ordering of the elements of $I$.
  2. For $1 \le \ell \le |I|$, update $n_{u_\ell,0}^{(t)}$ and $n_{u_\ell,1}^{(t)}$, and generate

\[
x_{u_\ell}^{(t)} \sim
  \mathcal{B}\!\left(\frac{\exp\bigl(\beta n_{u_\ell,1}^{(t)}\bigr)}
  {\exp\bigl(\beta n_{u_\ell,0}^{(t)}\bigr) + \exp\bigl(\beta n_{u_\ell,1}^{(t)}\bigr)}\right).
\]
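
A direct, unvectorized Python transcription of this Gibbs sampler is given below (it is slow on a 100 × 100 lattice but follows the algorithm step by step; function and argument names are of this sketch only).

```python
import numpy as np

def ising_gibbs(beta, size=100, n_iter=1000, rng=None):
    """Gibbs sampler for the Ising model with a four-neighbor neighborhood."""
    rng = rng or np.random.default_rng()
    x = rng.integers(0, 2, (size, size))       # x_i^(0) ~ B(1/2)
    for _ in range(n_iter):
        for s in rng.permutation(size * size): # random ordering of the sites
            i, j = divmod(s, size)
            nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < size and 0 <= b < size]
            n1 = sum(nbrs)                     # neighbors with color 1
            n0 = len(nbrs) - n1                # neighbors with color 0
            p1 = np.exp(beta * n1) / (np.exp(beta * n0) + np.exp(beta * n1))
            x[i, j] = rng.uniform() < p1       # Bernoulli(p1) draw
    return x
```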



Ising Gibbs Sampler (2)
[Figure: simulations from the Ising model with a four-neighbor neighborhood
structure on a 100 × 100 array after 1,000 iterations of the Gibbs sampler;
β varies in steps of 0.1 from 0.3 to 1.2.]
Potts Models


If there are $G$ colors and $n_{i,g}$ denotes the number of neighbors of
$i \in I$ with color $g$ ($1 \le g \le G$), that is,
$n_{i,g} = \sum_{j \sim i} \mathbb{I}_{x_j = g}$, then the full conditional
distribution of $x_i$ is chosen to satisfy
$\pi(x_i = g \mid x_{-i}) \propto \exp(\beta n_{i,g})$.

This choice corresponds to the Potts model, whose joint density is given by

\[
\pi(x) \propto \exp\Bigl(\beta \sum_{j \sim i} \mathbb{I}_{x_j = x_i}\Bigr)\,.
\]




Simulating from the Potts Models (1)


Algorithm (Potts Metropolis–Hastings Sampler)

Initialization: For $i \in I$, generate independently

\[
x_i^{(0)} \sim \mathcal{U}(\{1, \ldots, G\})\,.
\]

Iteration $t$ ($t \ge 1$):
  1. Generate $u = (u_i)_{i \in I}$, a random ordering of the elements of $I$.
  2. For $1 \le \ell \le |I|$, generate

\[
\tilde x_{u_\ell}^{(t)} \sim
  \mathcal{U}\bigl(\{1, \ldots, x_{u_\ell}^{(t-1)} - 1,\,
  x_{u_\ell}^{(t-1)} + 1, \ldots, G\}\bigr)\,,
\]




Simulating from the Potts Models (2)



Algorithm (continued)

compute the $n_{u_\ell,g}^{(t)}$ and

\[
\rho_\ell = \Bigl(\exp\bigl(\beta n_{u_\ell,\tilde x}^{(t)}\bigr) \big/
  \exp\bigl(\beta n_{u_\ell,x_{u_\ell}}^{(t)}\bigr)\Bigr) \wedge 1\,,
\]

and set $x_{u_\ell}^{(t)}$ equal to $\tilde x_{u_\ell}$ with probability $\rho_\ell$.
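
A minimal transcription of the two steps above, with colors coded 0, ..., G-1 (names and defaults are illustrative; the sampler is unvectorized and therefore slow at full size).

```python
import numpy as np

def potts_mh(beta, G, size=100, n_iter=1000, rng=None):
    """Metropolis-Hastings sampler for the G-color Potts model: propose a
    uniformly chosen different color, accept with probability rho."""
    rng = rng or np.random.default_rng()
    x = rng.integers(0, G, (size, size))       # x_i^(0) uniform over colors
    for _ in range(n_iter):
        for s in rng.permutation(size * size): # random ordering of the sites
            i, j = divmod(s, size)
            nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < size and 0 <= b < size]
            prop = rng.integers(0, G - 1)      # uniform over the other colors
            if prop >= x[i, j]:
                prop += 1
            rho = min(1.0, np.exp(beta * (nbrs.count(prop) - nbrs.count(x[i, j]))))
            if rng.uniform() < rho:            # accept/reject the new color
                x[i, j] = prop
    return x
```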




Simulating from the Potts Models (3)
[Figure: simulations from the Potts model with four grey levels and a
four-neighbor neighborhood structure based on 1,000 iterations of the
Metropolis–Hastings sampler; β varies in steps of 0.1 from 0.3 to 1.2.]
Posterior Inference (1)
The prior on $x$ is a Potts model with $G$ categories,

\[
\pi(x \mid \beta) = \frac{1}{Z(\beta)}
  \exp\Bigl(\beta \sum_{i \in I} \sum_{j \sim i} \mathbb{I}_{x_j = x_i}\Bigr)\,,
\]

where $Z(\beta)$ is the normalizing constant.

Given $x$, we assume that the observations in $y$ are independent normal
random variables,

\[
f(y \mid x, \sigma^2, \mu_1, \ldots, \mu_G)
  = \prod_{i \in I} \frac{1}{(2\pi\sigma^2)^{1/2}}
    \exp\Bigl(-\frac{1}{2\sigma^2}(y_i - \mu_{x_i})^2\Bigr)\,.
\]



Posterior Inference (2)



Priors

\[
\beta \sim \mathcal{U}([0, 2])\,,
\qquad
\mu = (\mu_1, \ldots, \mu_G) \sim \mathcal{U}(\{\mu\,;\ 0 \le \mu_1 \le \cdots \le \mu_G \le 255\})\,,
\qquad
\pi(\sigma^2) \propto \sigma^{-2}\, \mathbb{I}_{(0,\infty)}(\sigma^2)\,,
\]

the last prior corresponding to a uniform prior on $\log \sigma$.




Posterior Inference (3)


Posterior distribution

\[
\pi(x, \beta, \sigma^2, \mu \mid y) \propto \pi(\beta, \sigma^2, \mu)
  \times \frac{1}{Z(\beta)}
  \exp\Bigl(\beta \sum_{i \in I} \sum_{j \sim i} \mathbb{I}_{x_j = x_i}\Bigr)
  \times \prod_{i \in I} \frac{1}{(2\pi\sigma^2)^{1/2}}
  \exp\Bigl(-\frac{1}{2\sigma^2}(y_i - \mu_{x_i})^2\Bigr)\,.
\]




Full conditionals (1)

                                                                                               
                                                                                  1            
           P(xi = g|y, β, σ 2 , µ) ∝ exp                     β         Ixj =g −        (yi − µg )2 ,
                                                                                 2σ 2            
                                                                 j∼i

       can be simulated directly.


                              ng =           Ixi =g       and sg =                Ixi =g yi
                                       i∈I                                 i∈I

       the full conditional distribution of µg is a truncated normal
       distribution on [µg−1 , µg+1 ] with mean sg /ng and variance σ 2 /ng
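
A sketch of the resulting µg update, using scipy's truncated normal and assuming every class is currently represented in x (classes coded 0, ..., G-1, with 0 and 255 as the end truncation bounds).

```python
import numpy as np
from scipy.stats import truncnorm

def sample_mu(g, x, y, mu, sigma2, rng):
    """Draw mu_g from its normal full conditional with mean s_g/n_g and
    variance sigma2/n_g, truncated to [mu_{g-1}, mu_{g+1}]."""
    mask = (x == g)
    n_g, s_g = mask.sum(), y[mask].sum()
    mean, sd = s_g / n_g, np.sqrt(sigma2 / n_g)
    lo = mu[g - 1] if g > 0 else 0.0
    hi = mu[g + 1] if g < len(mu) - 1 else 255.0
    a, b = (lo - mean) / sd, (hi - mean) / sd  # standardized truncation bounds
    return truncnorm.rvs(a, b, loc=mean, scale=sd, random_state=rng)
```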


Full conditionals (2)


The full conditional distribution of $\sigma^2$ is an inverse gamma
distribution with parameters $|I|/2$ and $\sum_{i \in I} (y_i - \mu_{x_i})^2/2$.

The full conditional distribution of $\beta$ is such that

\[
\pi(\beta \mid y) \propto \frac{1}{Z(\beta)}
  \exp\Bigl(\beta \sum_{i \in I} \sum_{j \sim i} \mathbb{I}_{x_j = x_i}\Bigr)\,.
\]
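
The σ² draw is a one-liner once the residual sum of squares is available, using the fact that b divided by a Gamma(a, 1) variable is an inverse-gamma IG(a, b) draw (a sketch with assumed names; β itself has no closed-form conditional and is handled by path sampling below).

```python
import numpy as np

def sample_sigma2(x, y, mu, rng):
    """Draw sigma^2 ~ IG(|I|/2, sum_i (y_i - mu_{x_i})^2 / 2)."""
    scale = ((y - mu[x]) ** 2).sum() / 2.0     # mu[x]: per-pixel class mean
    return scale / rng.gamma(y.size / 2.0)     # IG draw via a Gamma draw
```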




Path sampling Approximation (1)

\[
Z(\beta) = \sum_{x} \exp\{\beta S(x)\}\,,
\qquad\text{where } S(x) = \sum_{i \in I} \sum_{j \sim i} \mathbb{I}_{x_j = x_i}\,.
\]

Differentiating,

\[
\frac{\mathrm{d}Z(\beta)}{\mathrm{d}\beta}
  = Z(\beta) \sum_{x} S(x)\, \frac{\exp(\beta S(x))}{Z(\beta)}
  = Z(\beta)\, \mathbb{E}_\beta[S(X)]\,,
\]

so that

\[
\frac{\mathrm{d} \log Z(\beta)}{\mathrm{d}\beta} = \mathbb{E}_\beta[S(X)]\,.
\]

Path sampling Approximation (2)




Path sampling identity

\[
\log\{Z(\beta_1)/Z(\beta_0)\} = \int_{\beta_0}^{\beta_1} \mathbb{E}_\beta[S(x)]\,\mathrm{d}\beta\,.
\]




Path sampling Approximation (3)



For a given value of $\beta$, $\mathbb{E}_\beta[S(X)]$ can be approximated
from an MCMC sequence.

The integral itself can be approximated by computing the value of
$f(\beta) = \mathbb{E}_\beta[S(X)]$ for a finite number of values of $\beta$
and approximating $f(\beta)$ by a piecewise-linear function $\hat f(\beta)$
for the intermediate values of $\beta$.
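
Putting the last two slides together, a crude but self-contained sketch: estimate E_β[S(X)] at each grid point by averaging S over Metropolis sweeps of the Potts sampler, then integrate the piecewise-linear interpolant with the trapezoidal rule (lattice size, grid, burn-in and sweep counts are illustrative).

```python
import numpy as np

def S(x):
    """S(x): number of four-neighbor pairs (i, j) with x_j == x_i."""
    return int((x[1:, :] == x[:-1, :]).sum() + (x[:, 1:] == x[:, :-1]).sum())

def estimate_ES(beta, G, size, n_burn, n_keep, rng):
    """Crude MCMC estimate of E_beta[S(X)] for the Potts model."""
    x = rng.integers(0, G, (size, size))
    total = 0.0
    for t in range(n_burn + n_keep):
        for s in rng.permutation(size * size):        # one Metropolis sweep
            i, j = divmod(s, size)
            nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < size and 0 <= b < size]
            prop = rng.integers(0, G - 1)
            if prop >= x[i, j]:
                prop += 1
            if rng.uniform() < min(1.0, np.exp(beta * (nbrs.count(prop)
                                                       - nbrs.count(x[i, j])))):
                x[i, j] = prop
        if t >= n_burn:
            total += S(x)
    return total / n_keep

def log_Z_ratio(beta0, beta1, G, size=20, n_grid=11, rng=None):
    """Path-sampling estimate of log Z(beta1)/Z(beta0) by the trapezoidal
    rule applied to the piecewise-linear estimate of E_beta[S(X)]."""
    rng = rng or np.random.default_rng()
    grid = np.linspace(beta0, beta1, n_grid)
    f = np.array([estimate_ES(b, G, size, 50, 100, rng) for b in grid])
    return float(np.sum(np.diff(grid) * (f[1:] + f[:-1]) / 2.0))
```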




Path sampling Approximation (4)

[Figure: approximation of f(β) for the Potts model on a 100 × 100 image, a
four-neighbor neighborhood, and G = 6, based on 1500 MCMC iterations after
burn-in.]
Posterior Approximation (1)
[Figure: dataset Menteith, sequences of the µg's based on 2000 iterations of
the hybrid Gibbs sampler (read row-wise from µ1 to µ6).]

Posterior Approximation (2)


[Figure: histograms of the µg's.]


Posterior Approximation (3)




[Figure: raw plots and histograms of the σ²'s and β's based on 2000
iterations of the hybrid Gibbs sampler.]

Image segmentation (1)
Based on $(x^{(t)})_{1 \le t \le T}$, an estimator of $x$ needs to be derived
from an evaluation of the consequences of wrong allocations.

Two common ways, corresponding to two loss functions:

\[
L_1(x, \hat x) = \sum_{i \in I} \mathbb{I}_{x_i \ne \hat x_i}\,,
\qquad
L_2(x, \hat x) = \mathbb{I}_{x \ne \hat x}\,.
\]

Corresponding estimators:

\[
\hat x_i^{\mathrm{MPM}} = \arg\max_{1 \le g \le G} \mathbb{P}^\pi(x_i = g \mid y)\,,
\quad i \in I\,,
\qquad
\hat x^{\mathrm{MAP}} = \arg\max_{x} \pi(x \mid y)\,.
\]
Image segmentation (2)



$\hat x^{\mathrm{MPM}}$ and $\hat x^{\mathrm{MAP}}$ are not available in
closed form!

Approximation

\[
\hat x_i^{\mathrm{MPM}} = \arg\max_{g \in \{1, \ldots, G\}}
  \sum_{j=1}^{N} \mathbb{I}_{x_i^{(j)} = g}\,,
\]

based on a simulated sequence $x^{(1)}, \ldots, x^{(N)}$.
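
This approximation amounts to taking, pixel by pixel, the most frequent class among the simulated images; a short sketch (classes coded 0, ..., G-1):

```python
import numpy as np

def mpm_estimate(samples, G):
    """MPM estimate from x^(1), ..., x^(N): per-pixel modal class."""
    counts = np.zeros((G,) + samples[0].shape)
    for x in samples:
        for g in range(G):
            counts[g] += (x == g)              # posterior frequency of class g
    return counts.argmax(axis=0)               # pixel-wise most frequent class
```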




Image segmentation (3)




[Figure: (top) segmented image based on the MPM estimate produced after 2000
iterations of the Gibbs sampler and (bottom) the observed image.]
the ABC of ABCthe ABC of ABC
the ABC of ABC
 

Último

Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfchloefrazer622
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...Sapna Thakur
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...fonyou31
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 

Último (20)

Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdf
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 

Bayesian KNN Approach for Image Classification

  • 8. A probabilistic version of the knn methodology (1)
  Symmetrization of the neighborhood relation: if $x_i$ belongs to the k-nearest-neighborhood of $x_j$ but $x_j$ does not belong to the k-nearest-neighborhood of $x_i$, then $x_j$ is added to the set of neighbors of $x_i$.
  Notation: $i \sim_k j$.
  The transformed set of neighbors is then called the symmetrized k-nearest-neighbor system.
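As an illustration of the symmetrization step, here is a minimal NumPy sketch (not part of the slides; the function name, the Euclidean metric, and the set-based representation are assumptions made for the example):

```python
import numpy as np

def symmetrized_knn(X, k):
    """Build the symmetrized k-nearest-neighbor system i ~_k j.

    If x_j has x_i among its k nearest neighbors but not conversely,
    j is added to the neighbor set of i, as described on the slide.
    """
    n = X.shape[0]
    # Pairwise Euclidean distances (illustrative choice of metric).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # a point is not its own neighbor
    knn = [set(np.argsort(d[i])[:k]) for i in range(n)]
    sym = [set(s) for s in knn]
    for i in range(n):
        for j in knn[i]:
            sym[j].add(i)                     # symmetrize the relation
    return [np.array(sorted(s)) for s in sym]
```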
  • 9. A probabilistic version of the knn methodology (2)
  [figure-only slide; no recoverable content]
  • 10. A probabilistic version of the knn methodology (3)
  $$P(y_i = C_j \mid y_{-i}, X, \beta, k) = \frac{\exp\Bigl(\beta \sum_{\ell \sim_k i} I_{C_j}(y_\ell) \big/ N_k\Bigr)}{\sum_{g=1}^{G} \exp\Bigl(\beta \sum_{\ell \sim_k i} I_{C_g}(y_\ell) \big/ N_k\Bigr)},$$
  where $N_k$ is the average number of neighbors over all $x_i$'s, $\beta > 0$, $y_{-i} = (y_1, \ldots, y_{i-1}, y_{i+1}, \ldots, y_n)$ and $X = \{x_1, \ldots, x_n\}$.
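A direct transcription of this full conditional into NumPy, which also yields the pseudo-likelihood used a few slides below. This is a sketch under the assumptions that classes are coded 0, ..., G-1 and that the symmetrized neighbor sets and the average neighborhood size $N_k$ are precomputed:

```python
import numpy as np

def knn_conditional(i, y, neighbors, beta, G, Nk):
    """P(y_i = C_g | y_{-i}, X, beta, k) for g = 0, ..., G-1."""
    counts = np.array([(y[neighbors[i]] == g).sum() for g in range(G)])
    w = np.exp(beta * counts / Nk)            # exp(beta * sum_l I_{C_g}(y_l) / N_k)
    return w / w.sum()

def log_pseudo_likelihood(y, neighbors, beta, G, Nk):
    """Log of the product of full conditionals over the training points."""
    return float(sum(np.log(knn_conditional(i, y, neighbors, beta, G, Nk)[y[i]])
                     for i in range(len(y))))
```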
  • 11. A probabilistic version of the knn methodology (4)
  $\beta$ grades the influence of the prevalent neighborhood class;
  the probabilistic knn model is conditional on the covariate matrix $X$;
  the frequencies $n_1/n, \ldots, n_G/n$ of the classes within the training set are representative of the marginal probabilities $p_1 = P(y_i = C_1), \ldots, p_G = P(y_i = C_G)$;
  if the marginal probabilities $p_g$ are known and differ from the $n_g/n$, the various classes are reweighted according to their true frequencies.
  • 12. Bayesian analysis of the knn probabilistic model
  From a Bayesian perspective, given a prior distribution $\pi(\beta, k)$ with support $[0, \beta_{\max}] \times \{1, \ldots, K\}$, the marginal predictive distribution of $y_{n+1}$ is
  $$P(y_{n+1} = C_j \mid x_{n+1}, y, X) = \sum_{k=1}^{K} \int_0^{\beta_{\max}} P(y_{n+1} = C_j \mid x_{n+1}, y, X, \beta, k)\, \pi(\beta, k \mid y, X)\, d\beta,$$
  where $\pi(\beta, k \mid y, X)$ is the posterior distribution given the training dataset $(y, X)$.
  • 13. Unknown normalizing constant
  Major difficulty: it is impossible to compute $f(y \mid X, \beta, k)$.
  Use instead the pseudo-likelihood made of the product of the full conditionals,
  $$\hat{f}(y \mid X, \beta, k) = \prod_{g=1}^{G} \prod_{i : y_i = C_g} P(y_i = C_g \mid y_{-i}, X, \beta, k).$$
  • 14. Pseudo-likelihood approximation
  Pseudo-posterior distribution:
  $$\pi(\beta, k \mid y, X) \propto \hat{f}(y \mid X, \beta, k)\, \pi(\beta, k).$$
  Pseudo-predictive distribution:
  $$P(y_{n+1} = C_j \mid x_{n+1}, y, X) = \sum_{k=1}^{K} \int_0^{\beta_{\max}} P(y_{n+1} = C_j \mid x_{n+1}, y, X, \beta, k)\, \pi(\beta, k \mid y, X)\, d\beta.$$
  • 15. MCMC implementation (1)
  An MCMC approximation is required: a random walk Metropolis–Hastings algorithm in which both $\beta$ and $k$ are updated using random walk proposals:
  (i) for $\beta$, the logistic transformation
  $$\beta^{(t)} = \beta_{\max} \exp(\theta^{(t)}) \big/ \bigl(\exp(\theta^{(t)}) + 1\bigr)$$
  makes it possible to simulate a normal random walk on the $\theta$'s, $\tilde{\theta} \sim \mathcal{N}(\theta^{(t)}, \tau^2)$;
  • 16. MCMC implementation (2)
  (ii) for $k$, a uniform proposal on the $r$ neighbors of $k^{(t)}$, namely on $\{k^{(t)} - r, \ldots, k^{(t)} - 1, k^{(t)} + 1, \ldots, k^{(t)} + r\} \cap \{1, \ldots, K\}$.
  Using this algorithm, we can thus derive the most likely class associated with a covariate vector $x_{n+1}$ from the approximated probabilities,
  $$\frac{1}{M} \sum_{i=1}^{M} P\bigl(y_{n+1} = l \mid x_{n+1}, y, X, (\beta^{(i)}, k^{(i)})\bigr).$$
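A sketch of the corresponding random-walk Metropolis–Hastings sampler. Everything illustrative is labeled as such: the starting values are arbitrary, `log_pseudo_post(beta, k)` stands for the log pseudo-posterior of the previous slides, the change-of-variable Jacobian is an addition needed when the prior is specified on $\beta$ rather than $\theta$, and the boundary asymmetry of the uniform proposal on $k$ is ignored:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_mh(log_pseudo_post, beta_max=15.0, K=83, tau2=0.05, r=1, T=20_000):
    """Random-walk MH on (beta, k) with a logistic transform for beta."""
    theta, k = 0.0, 10                        # arbitrary starting values
    beta = beta_max / (1.0 + np.exp(-theta))
    lp = log_pseudo_post(beta, k)
    chain = []
    for _ in range(T):
        # (i) normal random walk on theta, mapped back to beta
        theta_new = theta + np.sqrt(tau2) * rng.standard_normal()
        beta_new = beta_max / (1.0 + np.exp(-theta_new))
        lp_new = log_pseudo_post(beta_new, k)
        # change-of-variable Jacobian: d(beta)/d(theta) = beta (1 - beta/beta_max)
        jac = (np.log(beta_new * (1.0 - beta_new / beta_max))
               - np.log(beta * (1.0 - beta / beta_max)))
        if np.log(rng.uniform()) < lp_new - lp + jac:
            theta, beta, lp = theta_new, beta_new, lp_new
        # (ii) uniform proposal on the r neighbors of k within {1, ..., K}
        cand = [j for j in range(k - r, k + r + 1) if 1 <= j <= K and j != k]
        k_new = int(rng.choice(cand))
        lp_new = log_pseudo_post(beta, k_new)
        if np.log(rng.uniform()) < lp_new - lp:
            k, lp = k_new, lp_new
        chain.append((beta, k))
    return chain
```

The predictive probability of each class for a new $x_{n+1}$ is then the average of the full-conditional probabilities over the $(\beta^{(i)}, k^{(i)})$ chain, as in the displayed formula.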
  • 17. MCMC implementation (3)
  [figure: trace plots and histograms of the $\beta$ and $k$ sequences for the knn Metropolis–Hastings sampler, based on 20,000 iterations, with $\beta_{\max} = 15$, $K = 83$, $\tau^2 = 0.05$ and $r = 1$]
  • 18. Image Segmentation
  The underlying structure of the "true" pixels is denoted by $x$, while the observed image is denoted by $y$. Both objects $x$ and $y$ are arrays, with each entry of $x$ taking a finite number of values and each entry of $y$ taking real values.
  We are interested in the posterior distribution of $x$ given $y$ provided by Bayes' theorem, $\pi(x \mid y) \propto f(y \mid x)\, \pi(x)$. The likelihood, $f(y \mid x)$, describes the link between the observed image and the underlying classification.
  • 19. Menteith dataset (1)
  The Menteith dataset is a 100 × 100 pixel satellite image of the lake of Menteith. The lake of Menteith is located in Scotland, near Stirling, and offers the peculiarity of being called "lake" rather than the traditional Scottish "loch."
  • 20. Menteith dataset (2)
  [figure: satellite image of the lake of Menteith]
  • 21. Random Fields
  Take a lattice $I$ of sites or pixels in an image and denote by $i \in I$ a coordinate in the lattice; the neighborhood relation is denoted by $\sim$. A random field on $I$ is a random structure indexed by the lattice $I$: a collection of random variables $\{x_i;\ i \in I\}$ where each $x_i$ takes values in a finite set $\chi$.
  • 22. Markov Random Fields
  Let $x_{n(i)}$ be the set of values taken by the neighbors of $i$. A random field is a Markov random field (MRF) if the conditional distribution of any pixel given the other pixels depends only on the values of the neighbors of that pixel; i.e., for $i \in I$,
  $$\pi(x_i \mid x_{-i}) = \pi(x_i \mid x_{n(i)}).$$
  • 23. Ising Models
  If the pixels of the underlying (true) image $x$ can take only two colors (black and white, say), $x$ is binary, while $y$ is a grey-level image. We typically refer to each pixel $x_i$ as being foreground if $x_i = 1$ (black) and background if $x_i = 0$ (white). We have
  $$\pi(x_i = j \mid x_{-i}) \propto \exp(\beta n_{i,j}), \qquad \beta > 0,$$
  where $n_{i,j} = \sum_{\ell \in n(i)} I_{x_\ell = j}$ is the number of neighbors of $x_i$ with color $j$.
  • 24. Ising Models
  The Ising model is defined via the full conditionals
  $$\pi(x_i = 1 \mid x_{-i}) = \frac{\exp(\beta n_{i,1})}{\exp(\beta n_{i,0}) + \exp(\beta n_{i,1})},$$
  and the joint distribution satisfies
  $$\pi(x) \propto \exp\Bigl(\beta \sum_{j \sim i} I_{x_j = x_i}\Bigr),$$
  where the summation is taken over all pairs $(i, j)$ of neighbors.
  • 25. Simulating from the Ising Models
  The normalizing constant of the Ising model is intractable except for very small lattices $I$... Direct simulation of $x$ is not possible!
  • 26. Ising Gibbs Sampler
  Algorithm (Ising Gibbs Sampler)
  Initialization: for $i \in I$, generate independently $x_i^{(0)} \sim \mathcal{B}(1/2)$.
  Iteration $t$ ($t \geq 1$):
  1. Generate $u = (u_i)_{i \in I}$, a random ordering of the elements of $I$.
  2. For $1 \leq \ell \leq |I|$, update $n_{u_\ell,0}^{(t)}$ and $n_{u_\ell,1}^{(t)}$, and generate
  $$x_{u_\ell}^{(t)} \sim \mathcal{B}\left(\frac{\exp\bigl(\beta n_{u_\ell,1}^{(t)}\bigr)}{\exp\bigl(\beta n_{u_\ell,0}^{(t)}\bigr) + \exp\bigl(\beta n_{u_\ell,1}^{(t)}\bigr)}\right).$$
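The algorithm translates almost line by line into Python. The sketch below assumes a four-neighbor structure and uses pure-Python loops, so reduce `shape` and `T` for a quick test (the slides use a 100 × 100 array and 1,000 iterations):

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_gibbs(beta, shape=(100, 100), T=1000):
    """Gibbs sampler for the Ising model with a four-neighbor structure."""
    rows, cols = shape
    x = rng.integers(0, 2, size=shape)        # x_i^(0) ~ Bernoulli(1/2)
    sites = [(i, j) for i in range(rows) for j in range(cols)]
    for _ in range(T):
        for idx in rng.permutation(len(sites)):   # random ordering of I
            i, j = sites[idx]
            nbrs = [x[a, b]
                    for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < rows and 0 <= b < cols]
            n1 = int(sum(nbrs))               # neighbors currently equal to 1
            n0 = len(nbrs) - n1               # neighbors currently equal to 0
            p1 = np.exp(beta * n1) / (np.exp(beta * n0) + np.exp(beta * n1))
            x[i, j] = rng.uniform() < p1      # Bernoulli(p1) draw
    return x
```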
  • 27. Ising Gibbs Sampler (2)
  [figure: simulations from the Ising model with a four-neighbor neighborhood structure on a 100 × 100 array after 1,000 iterations of the Gibbs sampler; $\beta$ varies in steps of 0.1 from 0.3 to 1.2]
  • 28. Potts Models
  If there are $G$ colors and $n_{i,g}$ denotes the number of neighbors of $i \in I$ with color $g$ ($1 \leq g \leq G$), that is, $n_{i,g} = \sum_{j \sim i} I_{x_j = g}$, then the full conditional distribution of $x_i$ is chosen to satisfy
  $$\pi(x_i = g \mid x_{-i}) \propto \exp(\beta n_{i,g}).$$
  This choice corresponds to the Potts model, whose joint density is given by
  $$\pi(x) \propto \exp\Bigl(\beta \sum_{j \sim i} I_{x_j = x_i}\Bigr).$$
  • 29. Simulating from the Potts Models (1)
  Algorithm (Potts Metropolis–Hastings Sampler)
  Initialization: for $i \in I$, generate independently $x_i^{(0)} \sim \mathcal{U}(\{1, \ldots, G\})$.
  Iteration $t$ ($t \geq 1$):
  1. Generate $u = (u_i)_{i \in I}$, a random ordering of the elements of $I$.
  2. For $1 \leq \ell \leq |I|$, generate
  $$\tilde{x}_{u_\ell} \sim \mathcal{U}\bigl(\{1, \ldots, x_{u_\ell}^{(t-1)} - 1,\ x_{u_\ell}^{(t-1)} + 1, \ldots, G\}\bigr),$$
  • 30. Simulating from the Potts Models (2)
  Algorithm (continued)
  compute the $n_{u_\ell,g}^{(t)}$ and
  $$\rho_\ell = \Bigl(\exp\bigl(\beta n_{u_\ell,\tilde{x}_{u_\ell}}^{(t)}\bigr) \big/ \exp\bigl(\beta n_{u_\ell,x_{u_\ell}}^{(t)}\bigr)\Bigr) \wedge 1,$$
  and set $x_{u_\ell}^{(t)}$ equal to $\tilde{x}_{u_\ell}$ with probability $\rho_\ell$.
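Likewise for the Potts sampler, a sketch with colors coded 0, ..., G-1 and the same four-neighbor lattice (defaults are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def potts_mh(beta, G, shape=(100, 100), T=1000):
    """Metropolis-Hastings sampler for the G-color Potts model."""
    rows, cols = shape
    x = rng.integers(0, G, size=shape)        # x_i^(0) ~ U({0, ..., G-1})
    sites = [(i, j) for i in range(rows) for j in range(cols)]
    for _ in range(T):
        for idx in rng.permutation(len(sites)):
            i, j = sites[idx]
            nbrs = [x[a, b]
                    for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < rows and 0 <= b < cols]
            new = int(rng.integers(0, G - 1)) # uniform over the other G-1 colors
            if new >= x[i, j]:
                new += 1
            n_new = sum(v == new for v in nbrs)
            n_old = sum(v == x[i, j] for v in nbrs)
            rho = min(1.0, np.exp(beta * (n_new - n_old)))
            if rng.uniform() < rho:           # accept with probability rho
                x[i, j] = new
    return x
```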
  • 31. Simulating from the Potts Models (3)
  [figure: simulations from the Potts model with four grey levels and a four-neighbor neighborhood structure, based on 1,000 iterations of the Metropolis–Hastings sampler; $\beta$ varies in steps of 0.1 from 0.3 to 1.2]
  • 32. Posterior Inference (1)
  The prior on $x$ is a Potts model with $G$ categories,
  $$\pi(x \mid \beta) = \frac{1}{Z(\beta)} \exp\Bigl(\beta \sum_{i \in I} \sum_{j \sim i} I_{x_j = x_i}\Bigr),$$
  where $Z(\beta)$ is the normalizing constant. Given $x$, we assume that the observations in $y$ are independent normal random variables,
  $$f(y \mid x, \sigma^2, \mu_1, \ldots, \mu_G) = \prod_{i \in I} \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\Bigl(-\frac{1}{2\sigma^2} (y_i - \mu_{x_i})^2\Bigr).$$
  • 33. Posterior Inference (2)
  Priors:
  $$\beta \sim \mathcal{U}([0, 2]), \qquad \mu = (\mu_1, \ldots, \mu_G) \sim \mathcal{U}(\{\mu;\ 0 \leq \mu_1 \leq \cdots \leq \mu_G \leq 255\}), \qquad \pi(\sigma^2) \propto \sigma^{-2} I_{]0,\infty[}(\sigma^2),$$
  the last prior corresponding to a uniform prior on $\log \sigma$.
  • 34. Posterior Inference (3)
  Posterior distribution:
  $$\pi(x, \beta, \sigma^2, \mu \mid y) \propto \pi(\beta, \sigma^2, \mu) \times \frac{1}{Z(\beta)} \exp\Bigl(\beta \sum_{i \in I} \sum_{j \sim i} I_{x_j = x_i}\Bigr) \times \prod_{i \in I} \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\Bigl(-\frac{1}{2\sigma^2} (y_i - \mu_{x_i})^2\Bigr).$$
  • 35. Full conditionals (1)
  $$P(x_i = g \mid y, \beta, \sigma^2, \mu) \propto \exp\Bigl(\beta \sum_{j \sim i} I_{x_j = g} - \frac{1}{2\sigma^2} (y_i - \mu_g)^2\Bigr)$$
  can be simulated directly. With
  $$n_g = \sum_{i \in I} I_{x_i = g} \quad \text{and} \quad s_g = \sum_{i \in I} I_{x_i = g}\, y_i,$$
  the full conditional distribution of $\mu_g$ is a truncated normal distribution on $[\mu_{g-1}, \mu_{g+1}]$ with mean $s_g / n_g$ and variance $\sigma^2 / n_g$.
  • 36. Full conditionals (2)
  The full conditional distribution of $\sigma^2$ is an inverse gamma distribution with parameters $|I|/2$ and $\sum_{i \in I} (y_i - \mu_{x_i})^2 / 2$. The full conditional distribution of $\beta$ is such that
  $$\pi(\beta \mid x) \propto \frac{1}{Z(\beta)} \exp\Bigl(\beta \sum_{i \in I} \sum_{j \sim i} I_{x_j = x_i}\Bigr).$$
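A sketch of the $\mu_g$ and $\sigma^2$ updates, using SciPy's `truncnorm` and `invgamma` for the truncated-normal and inverse-gamma draws; the handling of empty classes and the 0/255 boundary values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import invgamma, truncnorm

rng = np.random.default_rng(0)

def update_mu_sigma2(y, x, mu, sigma2, G, mu_max=255.0):
    """Draw mu_1 <= ... <= mu_G and sigma^2 from their full conditionals."""
    y, x = np.ravel(y), np.ravel(x)
    mu = np.array(mu, dtype=float)
    for g in range(G):
        mask = x == g
        n_g, s_g = int(mask.sum()), float(y[mask].sum())
        if n_g == 0:
            continue                          # empty class: leave mu_g unchanged
        lo = mu[g - 1] if g > 0 else 0.0      # ordering constraint on the mu_g's
        hi = mu[g + 1] if g < G - 1 else mu_max
        m, s = s_g / n_g, np.sqrt(sigma2 / n_g)
        # truncated normal on [mu_{g-1}, mu_{g+1}], mean s_g/n_g, variance sigma2/n_g
        mu[g] = truncnorm.rvs((lo - m) / s, (hi - m) / s, loc=m, scale=s,
                              random_state=rng)
    resid = y - mu[x]
    # inverse gamma with parameters |I|/2 and sum_i (y_i - mu_{x_i})^2 / 2
    sigma2 = invgamma.rvs(a=y.size / 2.0, scale=float((resid ** 2).sum()) / 2.0,
                          random_state=rng)
    return mu, sigma2
```

The $\beta$ update requires the intractable $Z(\beta)$, which the path sampling approximation of the next slides handles.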
  • 37. Path sampling approximation (1)
  $$Z(\beta) = \sum_{x} \exp\{\beta S(x)\}, \qquad \text{where } S(x) = \sum_{i \in I} \sum_{j \sim i} I_{x_j = x_i}.$$
  Differentiating,
  $$\frac{dZ(\beta)}{d\beta} = \sum_{x} S(x) \exp(\beta S(x)) = Z(\beta)\, E_\beta[S(X)],$$
  hence
  $$\frac{d}{d\beta} \log Z(\beta) = E_\beta[S(X)].$$
  • 38. Path sampling approximation (2)
  Path sampling identity:
  $$\log\{Z(\beta_1)/Z(\beta_0)\} = \int_{\beta_0}^{\beta_1} E_\beta[S(X)]\, d\beta.$$
  • 39. Path sampling approximation (3)
  For a given value of $\beta$, $E_\beta[S(X)]$ can be approximated from an MCMC sequence. The integral itself can be approximated by computing $f(\beta) = E_\beta[S(X)]$ for a finite number of values of $\beta$ and approximating $f(\beta)$ by a piecewise-linear function $\hat{f}(\beta)$ for the intermediate values of $\beta$.
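A sketch of this approximation: $S(x)$ is evaluated on each MCMC draw, its averages over a grid of $\beta$ values define $\hat{f}(\beta)$, and integrating the piecewise-linear interpolant (the trapezoidal rule) gives $\log Z(\beta)$ up to the constant $\log Z(\beta_0)$. The grid and the reuse of the Potts sampler above are illustrative choices:

```python
import numpy as np

def S(x):
    """S(x): number of identical neighbor pairs on the four-neighbor lattice
    (each unordered pair counted once; double it if the ordered-pair sum
    of the slides is the convention required)."""
    return int((x[1:, :] == x[:-1, :]).sum() + (x[:, 1:] == x[:, :-1]).sum())

def log_Z(beta_grid, S_means):
    """log Z(beta) - log Z(beta_grid[0]) at each grid point, via the trapezoidal
    rule applied to the piecewise-linear interpolant of f(beta) = E_beta[S(X)]."""
    beta_grid = np.asarray(beta_grid, float)
    S_means = np.asarray(S_means, float)
    increments = np.diff(beta_grid) * (S_means[:-1] + S_means[1:]) / 2.0
    return np.concatenate(([0.0], np.cumsum(increments)))
```

In practice, `S_means[m]` would be the average of `S(x)` over post-burn-in draws from the Potts sampler run at `beta_grid[m]`.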
  • 40. Path sampling approximation (3)
  [figure: approximation of $f(\beta)$ for the Potts model on a 100 × 100 image, a four-neighbor neighborhood, and $G = 6$, based on 1,500 MCMC iterations after burn-in]
  • 41. Posterior Approximation (1)
  [figure: Menteith dataset, sequences of the $\mu_g$'s based on 2,000 iterations of the hybrid Gibbs sampler (read row-wise from $\mu_1$ to $\mu_6$)]
  • 42. Posterior Approximation (2)
  [figure: histograms of the $\mu_g$'s]
  • 43. Posterior Approximation (3)
  [figure: raw plots and histograms of the $\sigma^2$'s and $\beta$'s based on 2,000 iterations of the hybrid Gibbs sampler]
  • 44. Image segmentation (1)
  Based on $(x^{(t)})_{1 \leq t \leq T}$, an estimator of $x$ needs to be derived from an evaluation of the consequences of wrong allocations. Two common choices correspond to the two loss functions
  $$L_1(x, \hat{x}) = \sum_{i \in I} I_{x_i \neq \hat{x}_i} \qquad \text{and} \qquad L_2(x, \hat{x}) = I_{x \neq \hat{x}},$$
  with corresponding estimators
  $$\hat{x}_i^{MPM} = \arg\max_{1 \leq g \leq G} P^{\pi}(x_i = g \mid y),\ i \in I, \qquad \hat{x}^{MAP} = \arg\max_{x} \pi(x \mid y).$$
  • 45. Image segmentation (2)
  $\hat{x}^{MPM}$ and $\hat{x}^{MAP}$ are not available in closed form! Approximation:
  $$\hat{x}_i^{MPM} = \arg\max_{g \in \{1, \ldots, G\}} \sum_{j=1}^{N} I_{x_i^{(j)} = g},$$
  based on a simulated sequence $x^{(1)}, \ldots, x^{(N)}$.
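The MPM approximation is a per-pixel majority vote across the simulated $x^{(j)}$'s; a minimal sketch, with colors coded 0, ..., G-1:

```python
import numpy as np

def mpm_estimate(draws, G):
    """Per-pixel most frequent color over an (N, rows, cols) array of draws."""
    counts = np.stack([(draws == g).sum(axis=0) for g in range(G)])
    return counts.argmax(axis=0)              # argmax_g of sum_j I(x_i^(j) = g)
```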
  • 46. Image segmentation (3)
  [figure: (top) segmented image based on the MPM estimate produced after 2,000 iterations of the Gibbs sampler and (bottom) the observed image]