Background Modeling and Foreground Detection for Video Surveillance:
Recent Advances and Future Directions

Thierry BOUWMANS
Associate Professor
MIA Lab - University of La Rochelle - France
Plan
       Introduction
       Fuzzy Background Subtraction
       Background Subtraction via a Discriminative
        Subspace Learning: IMMC
       Foreground Detection via Robust Principal
        Component Analysis (RPCA)
       Conclusion - Perspectives


2
Goal
Detection of moving objects in a video sequence.
   Pixels are classified as:
      Background (B)      Foreground (F)

   Séquence Pets 2006: Image 298 (720 x 576 pixels)
3
Background Subtraction Process (flow diagram; a schematic loop sketch follows this slide)

   Video → Background Initialization (batch algorithm over the first N images, t ≤ N)
         → Foreground Detection: F(t) compared with the new frame I(t+1) from frame N+1 onwards (classification task)
         → Foreground Mask
   Background Maintenance: the model is updated incrementally for t ≥ N (t = t+1, incremental algorithm)
4
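The process on this slide can be summarized as a simple loop. The sketch below is minimal and hypothetical: the median-based initialization, the absolute-difference test and the running-average maintenance are placeholders standing in for the actual models discussed later in the talk, not code from it.

```python
import numpy as np

def background_subtraction(frames, N=100, threshold=30, alpha=0.05):
    """Generic background subtraction loop (sketch).

    frames: iterable of grayscale images as 2-D numpy arrays.
    N: number of frames used for the batch initialization.
    threshold: absolute-difference threshold for the foreground test.
    alpha: learning rate of the incremental background maintenance.
    """
    frames = iter(frames)

    # 1. Background initialization: batch algorithm over the first N images
    #    (here a simple temporal median stands in for the real model).
    training = [next(frames) for _ in range(N)]
    background = np.median(np.stack(training), axis=0).astype(np.float32)

    # 2. For t > N: foreground detection followed by background maintenance.
    for frame in frames:
        frame = frame.astype(np.float32)

        # Foreground detection: classification of each pixel as B or F.
        mask = np.abs(frame - background) > threshold

        # Background maintenance: update only pixels classified as background.
        background[~mask] = (1 - alpha) * background[~mask] + alpha * frame[~mask]

        yield mask  # foreground mask for the current frame

# Usage: for mask in background_subtraction(video_frames): ...
```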
Related Applications
        Video surveillance
        Optical Motion Capture
        Multimedia Applications




Séquence Danse [Mikic 2002] and Séquence Jump [Mikic 2002] – University of California, San Diego
Projet Aqu@theque – Université de La Rochelle
Projet ATON                     Séquence Diego
5
On the importance of background subtraction

   Acquisition → Background Subtraction → Processing
   (examples of downstream processing: Convex Hull, Tracking, Pattern Recognition)

6
Challenges
Critical situations which generate false detections:
   - Shadows, illumination variations, …

   Source: Séquence Pets 2006, Image 0298 (720 x 576 pixels)
7
Multimodal Backgrounds

   Rippling Water      Water Surface      Camera Jitter      Waving Trees

   Source: http://perception.i2r.a-star.edu.sg/bk_model/bk_index.html


8
Statistical Background Modeling




       Background Subtraction Web Site: References (553),
        datasets (10) and codes (27).

          Source: http://sites.google.com/site/backgroundsubtraction/Home.html
          (6256 Visitors, Source Google Analytics).

9
Plan
        Introduction
        Fuzzy Background Subtraction
        Background Subtraction via a Discriminative
         Subspace Learning: IMMC
        Foreground Detection via Robust Principal
         Component Analysis (RPCA)
        Conclusion - Perspectives


10
Fuzzy Background Subtraction

        A survey in Handbook on Soft Computing for Video
         Surveillance, Taylor and Francis Group [HSCVS
         2012]
        Three approaches developed at the MIA Lab:
            Background modeling by Type-2 Fuzzy Mixture of
             Gaussians Model [ISVC 2008].
            Foreground Detection using the Choquet Integral
             [WIAMIS 2008][FUZZ’IEEE 2008]
            Fuzzy Background Maintenance [ICIP 2008]

11
Weakness of the original MOG
1. False detections due to the matching test
   (illustration: three Gaussian modes with matching intervals kσ1, kσ2, kσ3)

12
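For reference, the matching test of the standard MOG referred to here checks whether the new pixel value falls within a fixed number of standard deviations of a Gaussian mode; this fixed factor k is what causes the false detections illustrated above. In the usual notation:

```latex
% Pixel X_t matches the i-th Gaussian mode when it lies within k standard deviations of its mean
% (k is a fixed constant, typically around 2.5, in the original MOG of Stauffer and Grimson):
\[
  \text{match}(X_t, i) \iff \lvert X_t - \mu_{i,t} \rvert \le k\,\sigma_{i,t},
  \qquad i = 1,\dots,K .
\]
```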
Weakness of the original MOG
2. False detections due to the presence of outliers in the training step
   (illustration: exact distribution; the estimated mean μ lies between μmin and μmax)


13
Mixture of Gaussians with uncertainty on:
   - the mean (T2 FMOG-UM)
   - the variance (T2 FMOG-UV)      [Zeng 2006]



14
Mixture of Gaussians with uncertainty on
      the mean
      (T2 FMOG-UM)




X_{t,c}: intensity vector in the RGB color space


15
Mixture of Gaussians with uncertainty on
      the variance
      (T2 FMOG-UV)




X_{t,c}: intensity vector in the RGB color space


16
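The equations of the two type-2 fuzzy variants are not reproduced on these slides. As an assumption, the interval parameterization commonly attributed to [Zeng 2006] bounds the uncertain mean and the uncertain standard deviation as follows, with k_m and k_v the factors quoted later in the result slides (k_m = 2, k_v = 0.9):

```latex
% Assumed interval parameterization of the type-2 fuzzy MOG (T2 FMOG); treat the exact
% bounds as an assumption, not as the formulas of the talk.
\[
  \text{T2 FMOG-UM:}\quad \tilde{\mu} \in \bigl[\, \mu - k_m\,\sigma,\ \mu + k_m\,\sigma \,\bigr],
  \qquad k_m \in [0,3],
\]
\[
  \text{T2 FMOG-UV:}\quad \tilde{\sigma} \in \bigl[\, k_v\,\sigma,\ \sigma / k_v \,\bigr],
  \qquad k_v \in (0,1].
\]
```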
Classification B/F by T2-FMOG
     Matching test:




Classification B/F then proceeds as in the standard MOG.
17
Results on the “SHAH” dataset
      (160 x 128 pixels) – Camera Jitter
     Video at http://sites.google.com/site/t2fmog/




                 Original sequence                        MOG




            T2 FMOG-UM (km=2)                        T2 FMOG-UV (kv=0.9)
18
Results on the “SHAH” dataset
   (160 x 128 pixels) – Camera Jitter

Method        Error Type   Image 271   Image 373   Image 410   Image 465   Total Error   Variation in %
MOG           FN                   0        1120        4818        2050
              FP                2093        4124        2782        1589         18576
T2-FMOG-UM    FN                   0        1414        6043        2520
              FP                 203         153         252          46         10631          42.77
T2-FMOG-UV    FN                   0         957        2217        1069
              FP                3069        1081        1119        1158         10670          42.56




19
Results on the “SHAH” dataset
       (160 x 128 pixels) – Camera Jitter




                              [Stauffer 1999]




 [Bowden 2001] – Initialization                 [Zivkovic 2004] – K is variable
20
Results on the sequence “CAMPUS”
      (160 x 128 pixels) – Waving Trees
     Video at http://sites.google.com/site/t2fmog/




                 Original Sequence                          MOG




            T2 FMOG-UM (km=2)                        T2 FMOG-UV (kv=0.9)
21
Results on the sequence “Water Surface” (160 x 128 pixels) – Water Surface
     Video at http://sites.google.com/site/t2fmog/




                 Original Sequence                          MOG




            T2 FMOG-UM (km=2)                        T2 FMOG-UV (kv=0.9)
22
Fuzzy Foreground Detection:
   Features: color, edge, stereo features, motion features, texture.

   Multiple features: more robustness in the presence of illumination changes, shadows and multimodal backgrounds.




23
Choice of the features
      Color (3 components)
      Texture (Local Binary Pattern [Heikkila – PAMI 2006])


For each feature, a similarity (S) is computed from its value in the background image and its value in the current image.



24
Aggregation of the Color and Texture features with the Choquet Integral (flow diagram)

   BG(t) and I(t+1)
     → Color Features and Texture Features
     → Similarity measures S_C,1, S_C,2, S_C,3 for the Color and S_T for the Texture
     → Fuzzy Integral
     → Classification B/F
     → Foreground Mask

25
How to compute S for the Color and the Texture?

Background image: texture T_F and color components C_{F,k}; current image: texture T_I and color components C_{I,k}, with 0 ≤ T, C ≤ 255 and k one of the color components.

For the Color (and, with T_F and T_I, for the Texture):

$$ S_{C,k} = \begin{cases} \dfrac{C_{F,k}}{C_{I,k}} & \text{if } C_{F,k} < C_{I,k} \\[4pt] 1 & \text{if } C_{F,k} = C_{I,k} \\[4pt] \dfrac{C_{I,k}}{C_{F,k}} & \text{if } C_{I,k} < C_{F,k} \end{cases} \qquad 0 \le S \le 1 $$

A small code sketch of this ratio follows this slide.

26
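A direct transcription of the similarity above, as an illustrative sketch (function name and types are placeholders): the similarity of a feature is the ratio of the smaller to the larger of its background and current values.

```python
def similarity(background_value, current_value):
    """Ratio of the smaller to the larger value, so 0 <= S <= 1 and S = 1 when both are equal."""
    if background_value == current_value:
        return 1.0
    lo, hi = sorted((float(background_value), float(current_value)))
    return lo / hi

# Per pixel: one similarity per color component (S_C,k) plus one for the texture (S_T),
# e.g. the LBP value [Heikkila - PAMI 2006]; these are the inputs of the fuzzy integral.
```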
Fuzzy operators
   «Sugeno Integral» (ordinal) and «Choquet Integral» (cardinal)

   - Uncertainty and imprecision
   - Great flexibility
   - Fast and simple operations




27
Data Fusion using the Choquet Integral

Fuzzy measures: defined on the subsets of X = {x1, x2, x3}:
   {x1}, {x2}, {x3}, {x1,x2}, {x1,x3}, {x2,x3}, {x1,x2,x3}

Choquet integral: aggregation of the criteria values with respect to these fuzzy measures (a sketch of the discrete Choquet integral follows this slide).

28
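A minimal sketch of the discrete Choquet integral used for the fusion. The fuzzy-measure values in the example are hypothetical placeholders, not the measures chosen or learned in the talk.

```python
from typing import Dict, FrozenSet

def choquet_integral(values: Dict[str, float], mu: Dict[FrozenSet[str], float]) -> float:
    """Discrete Choquet integral of criterion scores `values` w.r.t. the fuzzy measure `mu`.

    mu must be defined on every subset of criteria that appears as a 'tail' of the
    ascending ordering of the scores (with mu(X) = 1 and mu(empty set) = 0 by convention).
    """
    order = sorted(values, key=values.get)      # criteria sorted by increasing score
    total, previous = 0.0, 0.0
    for i, criterion in enumerate(order):
        tail = frozenset(order[i:])             # criteria whose score is >= the current one
        total += (values[criterion] - previous) * mu[tail]
        previous = values[criterion]
    return total

# Hypothetical example with three criteria (e.g. two color similarities and one texture similarity):
scores = {"x1": 0.8, "x2": 0.4, "x3": 0.6}
measure = {
    frozenset({"x1"}): 0.6, frozenset({"x2"}): 0.3, frozenset({"x3"}): 0.1,
    frozenset({"x1", "x2"}): 0.9, frozenset({"x1", "x3"}): 0.7, frozenset({"x2", "x3"}): 0.4,
    frozenset({"x1", "x2", "x3"}): 1.0,
}
c_mu = choquet_integral(scores, measure)        # compared with a threshold Th for the B/F decision
```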
Fuzzy Foreground Detection

     Classification using the Choquet integral

If Cμ(x,y) < Th then (x,y) ∈ Background
                else (x,y) ∈ Foreground

where Th is a constant threshold and Cμ(x,y) is the value of the Choquet integral for the pixel (x,y).



29
Aggregation Color, Texture
   Aqu@thèque (384 x 288 pixels) - Ohta color space

   Integral        Choquet    Sugeno
   Color space     Ohta       Ohta
   S(A,B)          0.40       0.27

   a) Current image   b) Ground truth   c) Choquet integral   d) Sugeno integral
   Comparison between the Sugeno and Choquet integrals [Zhang 2006]
30
Aggregation Colors, Texture: Ohta, YCrCb, HSV
   Aqu@thèque (384 x 288 pixels)

Values of the fuzzy measures μ (Texture and Color criteria x1, x2, x3):

          {x1}   {x2}   {x3}   {x1,x2}   {x1,x3}   {x2,x3}   X={x1,x2,x3}
          0.6    0.3    0.1    0.9       0.7       0.4       1
          0.5    0.4    0.1    0.9       0.6       0.5       1
          0.5    0.3    0.2    0.8       0.7       0.5       1
          0.5    0.39   0.11   0.89      0.61      0.5       1
          0.53   0.34   0.13   0.87      0.66      0.47      1
   (rows labeled in the original figure as Choquet - Ohta, Choquet - YCrCb, Choquet - HSV)

Evaluation of the Choquet integral for different color spaces:

   Integral / Color Space    Ohta    YCrCb   HSV
   S(A,B)                    0.40    0.42    0.30
31
Aggregation Color, Texture
        VS-Pets 2003 (720 x 576)




     Current Image         Choquet - YCrCb   Sugeno – Ohta [Zhang 2006]




32
Aggregation Colors: Pets 2006 (384 x 288 pixels)
   Original sequence, Ground truth

   Foreground masks for the OR operator, the Sugeno Integral and the Choquet Integral,
   each with the YCrCb, Ohta and HSV color spaces.
33
Fuzzy Background Maintenance
   - Non-selective rule
   - Selective rule

Here, the idea is to adapt very quickly a pixel classified as background and very slowly a pixel classified as foreground.
34
Fuzzy adaptive rule



                           and



Combination of the update rules of the selective scheme (one possible formulation is sketched after this slide).



35
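The precise fuzzy rule is the one given in [ICIP 2008]; the following is only a plausible reading of the combination described above, written as a hedged sketch. The assumption is that the fuzzy foreground membership of the pixel (for example derived from the Choquet integral value) modulates the learning rate, so that background-like pixels adapt quickly and foreground-like pixels adapt slowly.

```latex
% Assumed form of the fuzzy adaptive maintenance rule: the learning rate is weighted by
% how background-like the pixel is (1 minus its foreground membership).
\[
  B_{t+1}(x,y) = \bigl(1 - \alpha(x,y)\bigr)\, B_t(x,y) + \alpha(x,y)\, I_t(x,y),
  \qquad
  \alpha(x,y) = \alpha_{\max}\,\bigl(1 - \mu_F(x,y)\bigr),
\]
% where \mu_F(x,y) \in [0,1] is the fuzzy foreground membership of the pixel, so that
% \alpha \approx \alpha_{\max} for background pixels and \alpha \approx 0 for foreground pixels.
```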
Results on the Wallflower dataset
 Sequence Time of Day



                        Original Image 1850          Ground Truth




         No selective rule          Selective rule              Fuzzy adaptive rule

Similarity measure

                No selective   Selective   Fuzzy adaptive
   S(A,B) %          58.40        57.08          58.96


36
Computation Time

Algorithm           Frames/Second
T2-FMOG-UM               11
T2-FMOG-UV               12
MOG                      20
Choquet integral         31
Sugeno integral          22
OR                       40

Resolution 384 x 288, RGB, Pentium 1.66 GHz, 1 GB RAM

37
Assessments - Perspectives
   Fuzzy Background Modeling by T2-FMOG
      - Multimodal Backgrounds
      - Using fuzzy approaches in other statistical models
   Fuzzy Foreground Detection using multi-features
      - Using more than two features
      - Fuzzy measures by learning
   Fuzzy Background Maintenance

38
Plan
        Introduction
        Fuzzy Background Subtraction
        Background Subtraction via a Discriminative
         Subspace Learning: IMMC
        Foreground Detection via Robust Principal
         Component Analysis (RPCA)
        Conclusion - Perspectives


39
Background Modeling and Foreground Detection
         via a Discriminative Subspace Learning (MIA Lab)
        Reconstructive subspace learning models (PCA, ICA, IRT)
         [RPCS 2009]

Assumption: the main information contained in the training sequence is the background, meaning that the foreground has a low contribution.

However, this assumption only holds when the moving objects are either small or far away from the camera.


40
Discriminative Subspace Learning

        Advantages
More efficient; often gives better classification results.
            Robust supervised initialization of the background
            Incremental update of the eigenvectors and eigenvalues.
        Approach developed at the MIA Lab:
            Background initialization via MMC [MVA 2012]
            Background maintenance via Incremental Maximum
             Margin Criterion (IMMC) [MVA 2012]


41
Background Subtraction via Incremental
         Maximum Margin Criterion
Denote the training video sequence S = {I1, ..., IN}, where It is the frame at time t and N is the number of training frames.

Let each pixel (x,y) be characterized by its intensity in the grey scale, and assume that we have the ground truth corresponding to this training video sequence, i.e. we know for each pixel its class label, which can be foreground or background.



42
Background Subtraction via Incremental
         Maximum Margin Criterion
        Thus, we compute respectively the inter-class scatter matrix Sb
         and the intra-class scatter matrix Sw:




where c = 2,
   Ī is the mean of the intensity of the pixel (x,y) over the training video,
   Īi is the mean of the samples belonging to class i,
   pi is the prior probability of a sample belonging to class i (Background, Foreground).
(The standard definitions are sketched after this slide.)
43
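The scatter matrices themselves are not rendered on the slide; the standard definitions they refer to, written in the notation of the slide (c = 2 classes, per-pixel intensities over the N training frames), are given below as an assumption consistent with the text.

```latex
% Standard inter-class and intra-class scatter matrices (c = 2: background, foreground),
% computed per pixel over the training frames; \bar{I} is the overall mean intensity,
% \bar{I}_i the mean of class i and p_i its prior probability.
\[
  S_b = \sum_{i=1}^{c} p_i \,\bigl(\bar{I}_i - \bar{I}\bigr)\bigl(\bar{I}_i - \bar{I}\bigr)^{\!\top},
  \qquad
  S_w = \sum_{i=1}^{c} p_i \,
        \mathbb{E}\!\left[\bigl(I - \bar{I}_i\bigr)\bigl(I - \bar{I}_i\bigr)^{\!\top}
        \,\middle|\, I \in \text{class } i\right].
\]
```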
Background Subtraction via Incremental
         Maximum Margin Criterion
        Batch Maximum Margin Criterion algorithm.

        Extract the first leading eigenvectors that correspond to the
         background. The corresponding eigenvalues are contained
         in the matrix LM and the leading eigenvectors in the matrix
         ΦM .

The current image It can be approximated by the mean background and a weighted sum of the leading eigenbackgrounds ΦM.
44
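For reference, the criterion that the batch step optimizes and the eigen-problem it reduces to, in the standard Maximum Margin Criterion formulation (the specific variant of [MVA 2012] may differ in details):

```latex
% Maximum Margin Criterion: find the projection W that maximizes the margin between classes.
\[
  J(W) = \operatorname{tr}\!\bigl(W^{\top}(S_b - S_w)\,W\bigr),
\]
% maximized (under W^\top W = I) by taking the columns of W = \Phi_M as the leading
% eigenvectors of (S_b - S_w); the corresponding eigenvalues form \Lambda_M.
```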
Background Subtraction via Incremental
         Maximum Margin Criterion
The coordinates wt of the current image It in the leading eigenbackground space can be computed by projection onto ΦM.

When wt is back-projected onto the image space, the background image is created.




45
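A standard eigenbackground formulation of these two steps, consistent with the slide (treat the exact centering term as an assumption):

```latex
% Projection of the current image onto the leading eigenbackgrounds, then back-projection:
\[
  w_t = \Phi_M^{\top}\bigl(I_t - \bar{I}\bigr),
  \qquad
  B_t = \Phi_M\, w_t + \bar{I},
\]
% where \bar{I} is the mean background image and \Phi_M holds the leading eigenvectors.
```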
Background Subtraction via Incremental
         Maximum Margin Criterion


        Foreground detection




        Background maintenance via IMMC


46
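The foreground detection itself is the usual per-pixel test on the reconstruction error (T is a threshold parameter):

```latex
% Per-pixel foreground test on the distance between the current image and the
% reconstructed background image:
\[
  F_t(x,y) =
  \begin{cases}
    1 & \text{if } \bigl|\, I_t(x,y) - B_t(x,y) \,\bigr| > T,\\[2pt]
    0 & \text{otherwise.}
  \end{cases}
\]
```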
Principle - Illustration
   (illustration: the Current Image decomposed into IBackground and IForeground, giving the Background image and the Foreground mask)

47
Results on the Wallflower dataset




Original image, ground truth, SG, MOG, KDE, PCA, INMF, IRT, IMMC (30), IMMC (100)
48
Assessments - Perspectives
   Advantages
      - Robust supervised initialization of the background.
      - Incremental update of the eigenvectors and eigenvalues.
   Disadvantages
      - Needs ground truth in the training step.

   Other Discriminative Subspace Learning methods such as LDA.
49
Plan
        Introduction
        Fuzzy Background Subtraction
        Background Subtraction via a Discriminative
         Subspace Learning: IMMC
        Foreground Detection via Robust Principal
         Component Analysis (RPCA)
        Conclusion - Perspectives


50
Foreground Detection via Robust Principal
         Component Analysis
PCA (Oliver et al. 1999): not robust to outliers.
        Robust PCA (Candes et al. 2011): Decomposition
         into low-rank and sparse matrices

        Approach developed at the MIA Lab:
            Validation [ICIP 2012][ICIAR 2012][ISVC 2012]
            RPCA via Iterative Reweighted Least Squares [BMC
             2012]


51
Robust Principal Component Analysis
Candes et al. (ACM 2011) proposed a convex optimization to address the robust PCA problem. The observation matrix A is assumed to be represented as:

   A = L + S

where L is a low-rank matrix and S is a sparse matrix with a small fraction of nonzero entries.




52             http://perception.csl.illinois.edu/matrix-rank/home.html
Robust Principal Component Analysis
This research seeks to solve for L with the following optimization problem:

   min_{L,S} ||L||_* + λ ||S||_1   subject to   A = L + S

where ||.||_* and ||.||_1 are the nuclear norm (i.e. the l1-norm of the singular values) and the l1-norm, respectively, and λ > 0 is a balancing parameter.

Under these minimal assumptions, this approach, called Principal Component Pursuit (PCP), exactly recovers the low-rank and the sparse matrices.
53
Algorithms for solving PCP
Time required to solve a 1000 x 1000 (10^6 entry) RPCA problem:

Algorithm   Accuracy    Rank   ||E||_0    # iterations   Time (sec)
IT          5.99e-006    50    101,268      8,550        119,370.3
DUAL        8.65e-006    50    100,024        822          1,855.4
APG         5.85e-006    50    100,347        134          1,468.9
APGP        5.91e-006    50    100,347        134             82.7
ALMP        2.07e-007    50    100,014         34             37.5
ADMP        3.83e-007    50     99,996         23             11.8
(roughly a 10,000x speedup from IT to ADMP)

Source: Z. Lin, Y. Ma, “The Pursuit of Low-dimensional Structures in High-dimensional (Visual) Data: Fast and Scalable Algorithms”.

The time required is acceptable for ADM, but is it for background modeling and foreground detection?
54
Application to Background Modeling and
     Foreground Detection
n is the number of pixels in a frame (10^6)
m is the number of frames considered (200)
Computation time is 200 x 12 s = 40 minutes!
(A small PCP sketch on such a frame matrix follows this slide.)




     Source: http://perception.csl.illinois.edu/matrix-rank/home.html
55
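To make the slide's back-of-the-envelope estimate concrete: the frames are vectorized into the columns of A (here n ≈ 10^6 pixels, m = 200 frames), and PCP splits A into a low-rank background part L and a sparse foreground part S. The sketch below is a generic inexact-ALM style solver written from the PCP formulation of Candes et al.; it is not the code evaluated in the talk, and the penalty-parameter heuristic is an assumption.

```python
import numpy as np

def soft_threshold(X, tau):
    """Soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value thresholding, the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def pcp(A, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit:  min ||L||_* + lam*||S||_1  s.t.  A = L + S  (inexact ALM sketch)."""
    n, m = A.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(n, m))     # default weight from Candes et al.
    mu = mu if mu is not None else 0.25 * n * m / np.abs(A).sum()  # heuristic penalty parameter (assumption)
    L = np.zeros_like(A)
    S = np.zeros_like(A)
    Y = np.zeros_like(A)                                           # Lagrange multipliers
    norm_A = np.linalg.norm(A, "fro")
    for _ in range(max_iter):
        L = svd_threshold(A - S + Y / mu, 1.0 / mu)                # low-rank update
        S = soft_threshold(A - L + Y / mu, lam / mu)               # sparse update
        residual = A - L - S
        Y += mu * residual                                         # dual update
        if np.linalg.norm(residual, "fro") <= tol * norm_A:
            break
    return L, S

# Usage sketch: stack m grayscale frames (h x w) as columns of A, run PCP, and
# threshold |S| to obtain the foreground masks.
# A = np.column_stack([frame.ravel().astype(np.float64) for frame in frames])
# L, S = pcp(A)
# masks = [(np.abs(S[:, j]) > 25).reshape(h, w) for j in range(S.shape[1])]  # hypothetical threshold
```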
PCP and its application to Background
         Modeling and Foreground Detection
        Only visual validations are provided!!!

        Limitations:

            Spatio-temporal aspect: None!
            Real Time Aspect: PCP takes 40 minutes with the
             ADM!!!
            Incremental Aspect: PCP is a batch algorithm. For
             example, (Candes et al. 2011) collected 200 images.

56
PCP and its variants

     How to improve PCP?

               Algorithms for solving PCP (17 Algorithms)
               Incremental PCP (5 papers)
               Real-Time PCP (2 papers)

     Validation for background modeling and foreground detection
       (3 papers) [ICIP 2012][ICIAR 2012][ISVC 2012]
 Source: T. Bouwmans, Foreground Detection using Principal Component Pursuit: A Survey, under preparation.



57
PCP and its variants




 Source: T. Bouwmans, Foreground Detection using Principal Component Pursuit: A Survey, under preparation.



58
Validation Background Modeling and Foreground
        Detection: Qualitative Evaluation

            Original image
            Ground truth

            PCA
            RSL

            PCP-EALM
            PCP-IADM
            PCP-LADM

            PCP-LSADM

            BPCP-IALM


59   Source: ICIP 2012, ICIAR 2012, ISVC 2012
Validation Background Modeling and Foreground
        Detection : Quantitative Evaluation
     F-Measure




     Block PCP gives the best performance!


60   Source: ICIP 2012, ICIAR 2012, ISVC 2012
PCP and its application to Background
         Modeling and Foreground Detection
Recent improvements:
   - BPCP (Tang and Nehorai (2012)): spatial, but not incremental and not real time!
   - Recursive Robust PCP (Qiu and Vaswani (2012)): incremental, but not real time!
   - Real Time Implementation on GPU (Anderson et al. (2012)): real time, but not incremental!

What can we do?
   Research on real-time incremental robust PCP!




61
Conclusions - Perspectives
        Fuzzy Background Subtraction
        Background Subtraction via a Discriminative Subspace
         Learning: IMMC
        Foreground Detection via Robust Principal Component
         Analysis (RPCA)

        Fuzzy Learning Rate
        Other Discriminative Subspace Learning methods such
         as LDA
        Incremental and real time RPCA


62
Publications
Chapter                           Fuzzy Background Subtraction
     T. Bouwmans, “Background Subtraction For Visual Surveillance: A Fuzzy Approach”,
     Handbook on Soft Computing for Video Surveillance, Taylor and Francis Group, Chapter
     5, March 2012.

International Conferences :

     F. El Baf, T. Bouwmans, B. Vachon, “Fuzzy Statistical Modeling of Dynamic
     Backgrounds for Moving Object Detection in Infrared Videos”, CVPR 2009 Workshop ,
     pages 1-6, Miami, USA, 22 June 2009.

     F. El Baf, T. Bouwmans, B. Vachon, “Type-2 Fuzzy Mixture of Gaussians Model:
     Application to Background Modeling”, ISVC 2008, pages 772-781, Las Vegas, USA,
     December 2008

     F. El Baf, T. Bouwmans, B. Vachon, “A Fuzzy Approach for Background Subtraction”,
     ICIP 2008, San Diego, California, U.S.A, October 2008.

F. El Baf, T. Bouwmans, B. Vachon, “Fuzzy Integral for Moving Object Detection”, FUZZ-IEEE 2008, Hong Kong, China, June 2008.

     F. El Baf, T. Bouwmans, B. Vachon, “Fuzzy Foreground Detection for Infrared Videos”,
     CVPR 2008 Workshop , pages 1-6, Anchorage, Alaska, USA, 27 June 2008.

     F. El Baf, T. Bouwmans, B. Vachon, “Foreground Detection using the Choquet Integral”,
     International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS
     2008, pages 187-190, Klagenfurt, Austria, May 2008.
Publications
                                 Background Subtraction via IMMC
Journal

     D. Farcas, C. Marghes, T. Bouwmans, “Background Subtraction via Incremental
     Maximum Margin Criterion: A discriminative approach” , Machine Vision and
     Applications , March 2012.

International Conferences :

     C. Marghes, T. Bouwmans, "Background Modeling via Incremental Maximum Margin
     Criterion", International Workshop on Subspace Methods, ACCV 2010 Workshop
     Subspace 2010, Queenstown, New Zealand, November 2010.

     D. Farcas, T. Bouwmans, "Background Modeling via a Supervised Subspace Learning",
     International Conference on Image, Video Processing and Computer Vision, IVPCV
     2010, pages 1-7, Orlando, USA , July 2010.
Publications
Chapter                           Foreground Detection via RPCA
     C. Guyon, T. Bouwmans, E. Zahzah, “Robust Principal Component Analysis for
     Background Subtraction: Systematic Evaluation and Comparative Analysis”, INTECH,
     Principal Component Analysis, Book 1, Chapter 12, page 223-238, March 2012.

International Conferences :

C. Guyon, T. Bouwmans, E. Zahzah, “Foreground Detection via Robust Low Rank Matrix Factorization including Spatial Constraint with Iterative Reweighted Regression”, International Conference on Pattern Recognition, ICPR 2012, Tsukuba, Japan, November 2012.

C. Guyon, T. Bouwmans, E. Zahzah, “Moving Object Detection via Robust Low Rank Matrix Decomposition with IRLS scheme”, International Symposium on Visual Computing, ISVC 2012, pages 665-674, Rethymnon, Crete, Greece, July 2012.

     C. Guyon, T. Bouwmans, E. Zahzah, “Moving Object Detection by Robust PCA solved
     via a Linearized Symmetric Alternating Direction Method”, International Symposium on
     Visual Computing, ISVC 2012, pages 427-436, Rethymnon, Crete, Greece, July 2012.

     C. Guyon, T. Bouwmans, E. Zahzah, "Foreground Detection by Robust PCA solved via a
     Linearized Alternating Direction Method", International Conference on Image Analysis
     and Recognition, ICIAR 2012, pages 115-122, Aveiro, Portugal, June 2012.

     C. Guyon, T. Bouwmans, E. Zahzah, "Foreground detection based on low-rank and
     block-sparse matrix decomposition", IEEE International Conference on Image
     Processing, ICIP 2012 , Orlando, Florida, September 2012.

Mais conteúdo relacionado

Mais procurados

第13回 配信講義 計算科学技術特論A(2021)
第13回 配信講義 計算科学技術特論A(2021)第13回 配信講義 計算科学技術特論A(2021)
第13回 配信講義 計算科学技術特論A(2021)RCCSRENKEI
 
Estimating Human Pose from Occluded Images (ACCV 2009)
Estimating Human Pose from Occluded Images (ACCV 2009)Estimating Human Pose from Occluded Images (ACCV 2009)
Estimating Human Pose from Occluded Images (ACCV 2009)Jia-Bin Huang
 
A Physical Approach to Moving Cast Shadow Detection (ICASSP 2009)
A Physical Approach to Moving Cast Shadow Detection (ICASSP 2009)A Physical Approach to Moving Cast Shadow Detection (ICASSP 2009)
A Physical Approach to Moving Cast Shadow Detection (ICASSP 2009)Jia-Bin Huang
 
A Novel Approach for Tracking with Implicit Video Shot Detection
A Novel Approach for Tracking with Implicit Video Shot DetectionA Novel Approach for Tracking with Implicit Video Shot Detection
A Novel Approach for Tracking with Implicit Video Shot DetectionIOSR Journals
 
Optical Computing for Fast Light Transport Analysis
Optical Computing for Fast Light Transport AnalysisOptical Computing for Fast Light Transport Analysis
Optical Computing for Fast Light Transport AnalysisMatthew O'Toole
 
image segmentation
image segmentationimage segmentation
image segmentationarpanmankar
 
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...IRJET Journal
 
Research Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and ScienceResearch Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and Scienceresearchinventy
 
Pore Geometry from the Internal Magnetic Fields
Pore Geometry from the Internal Magnetic FieldsPore Geometry from the Internal Magnetic Fields
Pore Geometry from the Internal Magnetic FieldsAlexander Sagidullin
 
SSII2021 [OS3-01] 設備や環境の高品質計測点群取得と自動モデル化技術
SSII2021 [OS3-01] 設備や環境の高品質計測点群取得と自動モデル化技術SSII2021 [OS3-01] 設備や環境の高品質計測点群取得と自動モデル化技術
SSII2021 [OS3-01] 設備や環境の高品質計測点群取得と自動モデル化技術SSII
 
Learning Moving Cast Shadows for Foreground Detection (VS 2008)
Learning Moving Cast Shadows for Foreground Detection (VS 2008)Learning Moving Cast Shadows for Foreground Detection (VS 2008)
Learning Moving Cast Shadows for Foreground Detection (VS 2008)Jia-Bin Huang
 
Primal-Dual Coding to Probe Light Transport
Primal-Dual Coding to Probe Light TransportPrimal-Dual Coding to Probe Light Transport
Primal-Dual Coding to Probe Light TransportMatthew O'Toole
 
SSII2018企画: センシングデバイスの多様化と空間モデリングの未来
SSII2018企画: センシングデバイスの多様化と空間モデリングの未来SSII2018企画: センシングデバイスの多様化と空間モデリングの未来
SSII2018企画: センシングデバイスの多様化と空間モデリングの未来SSII
 
MINIMUM ENDMEMBER-WISE DISTANCE CONSTRAINED NONNEGATIVE MATRIX FACTORIZATION ...
MINIMUM ENDMEMBER-WISE DISTANCE CONSTRAINED NONNEGATIVE MATRIX FACTORIZATION ...MINIMUM ENDMEMBER-WISE DISTANCE CONSTRAINED NONNEGATIVE MATRIX FACTORIZATION ...
MINIMUM ENDMEMBER-WISE DISTANCE CONSTRAINED NONNEGATIVE MATRIX FACTORIZATION ...grssieee
 
A fast search algorithm for large
A fast search algorithm for largeA fast search algorithm for large
A fast search algorithm for largecsandit
 
All optical image processing using third harmonic generation for image correl...
All optical image processing using third harmonic generation for image correl...All optical image processing using third harmonic generation for image correl...
All optical image processing using third harmonic generation for image correl...M. Faisal Halim
 

Mais procurados (19)

第13回 配信講義 計算科学技術特論A(2021)
第13回 配信講義 計算科学技術特論A(2021)第13回 配信講義 計算科学技術特論A(2021)
第13回 配信講義 計算科学技術特論A(2021)
 
Estimating Human Pose from Occluded Images (ACCV 2009)
Estimating Human Pose from Occluded Images (ACCV 2009)Estimating Human Pose from Occluded Images (ACCV 2009)
Estimating Human Pose from Occluded Images (ACCV 2009)
 
A Physical Approach to Moving Cast Shadow Detection (ICASSP 2009)
A Physical Approach to Moving Cast Shadow Detection (ICASSP 2009)A Physical Approach to Moving Cast Shadow Detection (ICASSP 2009)
A Physical Approach to Moving Cast Shadow Detection (ICASSP 2009)
 
Background Subtraction Algorithm for Moving Object Detection Using Denoising ...
Background Subtraction Algorithm for Moving Object Detection Using Denoising ...Background Subtraction Algorithm for Moving Object Detection Using Denoising ...
Background Subtraction Algorithm for Moving Object Detection Using Denoising ...
 
A Novel Approach for Tracking with Implicit Video Shot Detection
A Novel Approach for Tracking with Implicit Video Shot DetectionA Novel Approach for Tracking with Implicit Video Shot Detection
A Novel Approach for Tracking with Implicit Video Shot Detection
 
Optical Computing for Fast Light Transport Analysis
Optical Computing for Fast Light Transport AnalysisOptical Computing for Fast Light Transport Analysis
Optical Computing for Fast Light Transport Analysis
 
image segmentation
image segmentationimage segmentation
image segmentation
 
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...
 
Research Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and ScienceResearch Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and Science
 
Pore Geometry from the Internal Magnetic Fields
Pore Geometry from the Internal Magnetic FieldsPore Geometry from the Internal Magnetic Fields
Pore Geometry from the Internal Magnetic Fields
 
SSII2021 [OS3-01] 設備や環境の高品質計測点群取得と自動モデル化技術
SSII2021 [OS3-01] 設備や環境の高品質計測点群取得と自動モデル化技術SSII2021 [OS3-01] 設備や環境の高品質計測点群取得と自動モデル化技術
SSII2021 [OS3-01] 設備や環境の高品質計測点群取得と自動モデル化技術
 
פוסטר דר פרידמן
פוסטר דר פרידמןפוסטר דר פרידמן
פוסטר דר פרידמן
 
Learning Moving Cast Shadows for Foreground Detection (VS 2008)
Learning Moving Cast Shadows for Foreground Detection (VS 2008)Learning Moving Cast Shadows for Foreground Detection (VS 2008)
Learning Moving Cast Shadows for Foreground Detection (VS 2008)
 
Primal-Dual Coding to Probe Light Transport
Primal-Dual Coding to Probe Light TransportPrimal-Dual Coding to Probe Light Transport
Primal-Dual Coding to Probe Light Transport
 
SSII2018企画: センシングデバイスの多様化と空間モデリングの未来
SSII2018企画: センシングデバイスの多様化と空間モデリングの未来SSII2018企画: センシングデバイスの多様化と空間モデリングの未来
SSII2018企画: センシングデバイスの多様化と空間モデリングの未来
 
Background subtraction
Background subtractionBackground subtraction
Background subtraction
 
MINIMUM ENDMEMBER-WISE DISTANCE CONSTRAINED NONNEGATIVE MATRIX FACTORIZATION ...
MINIMUM ENDMEMBER-WISE DISTANCE CONSTRAINED NONNEGATIVE MATRIX FACTORIZATION ...MINIMUM ENDMEMBER-WISE DISTANCE CONSTRAINED NONNEGATIVE MATRIX FACTORIZATION ...
MINIMUM ENDMEMBER-WISE DISTANCE CONSTRAINED NONNEGATIVE MATRIX FACTORIZATION ...
 
A fast search algorithm for large
A fast search algorithm for largeA fast search algorithm for large
A fast search algorithm for large
 
All optical image processing using third harmonic generation for image correl...
All optical image processing using third harmonic generation for image correl...All optical image processing using third harmonic generation for image correl...
All optical image processing using third harmonic generation for image correl...
 

Destaque

Real Time Detection of Moving Object Based on Fpga
Real Time Detection of Moving Object Based on FpgaReal Time Detection of Moving Object Based on Fpga
Real Time Detection of Moving Object Based on Fpgaiosrjce
 
A Moving Target Detection Algorithm Based on Dynamic Background
A Moving Target Detection Algorithm Based on Dynamic BackgroundA Moving Target Detection Algorithm Based on Dynamic Background
A Moving Target Detection Algorithm Based on Dynamic BackgroundChittipolu Praveen
 
Air pollution monitoring system using mobile gprs sensors array
Air pollution monitoring system using mobile gprs sensors arrayAir pollution monitoring system using mobile gprs sensors array
Air pollution monitoring system using mobile gprs sensors arraySaurabh Giratkar
 
2008-12 WMO GURME - Air Pollution Monitoring
2008-12 WMO GURME - Air Pollution Monitoring2008-12 WMO GURME - Air Pollution Monitoring
2008-12 WMO GURME - Air Pollution Monitoringurbanemissions
 
Air pollution monitoring system using mobile gprs sensors array ppt
Air pollution monitoring system using mobile gprs sensors array pptAir pollution monitoring system using mobile gprs sensors array ppt
Air pollution monitoring system using mobile gprs sensors array pptSaurabh Giratkar
 
AIR POLLUTION MONITORING USING RS
AIR POLLUTION MONITORING USING RSAIR POLLUTION MONITORING USING RS
AIR POLLUTION MONITORING USING RSAbhiram Kanigolla
 
Air quality monitoring system
Air quality monitoring systemAir quality monitoring system
Air quality monitoring systemPravin Shinde
 
Air quality sampling and monitoring m5
Air quality sampling and monitoring m5Air quality sampling and monitoring m5
Air quality sampling and monitoring m5Bibhabasu Mohanty
 
Slideshare Powerpoint presentation
Slideshare Powerpoint presentationSlideshare Powerpoint presentation
Slideshare Powerpoint presentationelliehood
 

Destaque (12)

Background subtraction
Background subtractionBackground subtraction
Background subtraction
 
Real Time Detection of Moving Object Based on Fpga
Real Time Detection of Moving Object Based on FpgaReal Time Detection of Moving Object Based on Fpga
Real Time Detection of Moving Object Based on Fpga
 
A Moving Target Detection Algorithm Based on Dynamic Background
A Moving Target Detection Algorithm Based on Dynamic BackgroundA Moving Target Detection Algorithm Based on Dynamic Background
A Moving Target Detection Algorithm Based on Dynamic Background
 
Air pollution monitoring system using mobile gprs sensors array
Air pollution monitoring system using mobile gprs sensors arrayAir pollution monitoring system using mobile gprs sensors array
Air pollution monitoring system using mobile gprs sensors array
 
2008-12 WMO GURME - Air Pollution Monitoring
2008-12 WMO GURME - Air Pollution Monitoring2008-12 WMO GURME - Air Pollution Monitoring
2008-12 WMO GURME - Air Pollution Monitoring
 
Air pollution monitoring system using mobile gprs sensors array ppt
Air pollution monitoring system using mobile gprs sensors array pptAir pollution monitoring system using mobile gprs sensors array ppt
Air pollution monitoring system using mobile gprs sensors array ppt
 
AIR POLLUTION MONITORING USING RS
AIR POLLUTION MONITORING USING RSAIR POLLUTION MONITORING USING RS
AIR POLLUTION MONITORING USING RS
 
Monitoring of air pollution
Monitoring of air pollutionMonitoring of air pollution
Monitoring of air pollution
 
Air quality monitoring system
Air quality monitoring systemAir quality monitoring system
Air quality monitoring system
 
Deep Learning for Computer Vision: Segmentation (UPC 2016)
Deep Learning for Computer Vision: Segmentation (UPC 2016)Deep Learning for Computer Vision: Segmentation (UPC 2016)
Deep Learning for Computer Vision: Segmentation (UPC 2016)
 
Air quality sampling and monitoring m5
Air quality sampling and monitoring m5Air quality sampling and monitoring m5
Air quality sampling and monitoring m5
 
Slideshare Powerpoint presentation
Slideshare Powerpoint presentationSlideshare Powerpoint presentation
Slideshare Powerpoint presentation
 

Semelhante a BMC 2012 - Invited Talk

Landuse Classification from Satellite Imagery using Deep Learning
Landuse Classification from Satellite Imagery using Deep LearningLanduse Classification from Satellite Imagery using Deep Learning
Landuse Classification from Satellite Imagery using Deep LearningDataWorks Summit
 
Large scale landuse classification of satellite imagery
Large scale landuse classification of satellite imageryLarge scale landuse classification of satellite imagery
Large scale landuse classification of satellite imagerySuneel Marthi
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Chapter-05c-Image-Restoration-(Reconstruction-from-Projections).ppt
Chapter-05c-Image-Restoration-(Reconstruction-from-Projections).pptChapter-05c-Image-Restoration-(Reconstruction-from-Projections).ppt
Chapter-05c-Image-Restoration-(Reconstruction-from-Projections).pptVSUDHEER4
 
igarss11benedek.pdf
igarss11benedek.pdfigarss11benedek.pdf
igarss11benedek.pdfgrssieee
 
A Novel Background Subtraction Algorithm for Dynamic Texture Scenes
A Novel Background Subtraction Algorithm for Dynamic Texture ScenesA Novel Background Subtraction Algorithm for Dynamic Texture Scenes
A Novel Background Subtraction Algorithm for Dynamic Texture ScenesIJMER
 
A Framework of Secured and Bio-Inspired Image Steganography Using Chaotic Enc...
A Framework of Secured and Bio-Inspired Image Steganography Using Chaotic Enc...A Framework of Secured and Bio-Inspired Image Steganography Using Chaotic Enc...
A Framework of Secured and Bio-Inspired Image Steganography Using Chaotic Enc...Varun Ojha
 
"What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applic...
"What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applic..."What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applic...
"What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applic...Edge AI and Vision Alliance
 
Threshold adaptation and XOR accumulation algorithm for objects detection
Threshold adaptation and XOR accumulation algorithm for  objects detectionThreshold adaptation and XOR accumulation algorithm for  objects detection
Threshold adaptation and XOR accumulation algorithm for objects detectionIJECEIAES
 
Fundamentals of Image processing.ppt
Fundamentals of Image processing.pptFundamentals of Image processing.ppt
Fundamentals of Image processing.pptssuser9a00df
 
Video Compression Advanced.pdf
Video Compression Advanced.pdfVideo Compression Advanced.pdf
Video Compression Advanced.pdfSMohiuddin1
 

Semelhante a BMC 2012 - Invited Talk (20)

Landuse Classification from Satellite Imagery using Deep Learning
Landuse Classification from Satellite Imagery using Deep LearningLanduse Classification from Satellite Imagery using Deep Learning
Landuse Classification from Satellite Imagery using Deep Learning
 
Background Subtraction Based on Phase and Distance Transform Under Sudden Ill...
Background Subtraction Based on Phase and Distance Transform Under Sudden Ill...Background Subtraction Based on Phase and Distance Transform Under Sudden Ill...
Background Subtraction Based on Phase and Distance Transform Under Sudden Ill...
 
Large scale landuse classification of satellite imagery
Large scale landuse classification of satellite imageryLarge scale landuse classification of satellite imagery
Large scale landuse classification of satellite imagery
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Chapter-05c-Image-Restoration-(Reconstruction-from-Projections).ppt
Chapter-05c-Image-Restoration-(Reconstruction-from-Projections).pptChapter-05c-Image-Restoration-(Reconstruction-from-Projections).ppt
Chapter-05c-Image-Restoration-(Reconstruction-from-Projections).ppt
 
Image denoising using curvelet transform
Image denoising using curvelet transformImage denoising using curvelet transform
Image denoising using curvelet transform
 
igarss11benedek.pdf
igarss11benedek.pdfigarss11benedek.pdf
igarss11benedek.pdf
 
IROS 2013 talk
IROS 2013 talkIROS 2013 talk
IROS 2013 talk
 
A Novel Background Subtraction Algorithm for Dynamic Texture Scenes
A Novel Background Subtraction Algorithm for Dynamic Texture ScenesA Novel Background Subtraction Algorithm for Dynamic Texture Scenes
A Novel Background Subtraction Algorithm for Dynamic Texture Scenes
 
Perceptual Video Coding
Perceptual Video Coding Perceptual Video Coding
Perceptual Video Coding
 
Pixel rf
Pixel rfPixel rf
Pixel rf
 
A Framework of Secured and Bio-Inspired Image Steganography Using Chaotic Enc...
A Framework of Secured and Bio-Inspired Image Steganography Using Chaotic Enc...A Framework of Secured and Bio-Inspired Image Steganography Using Chaotic Enc...
A Framework of Secured and Bio-Inspired Image Steganography Using Chaotic Enc...
 
"What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applic...
"What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applic..."What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applic...
"What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applic...
 
Threshold adaptation and XOR accumulation algorithm for objects detection
Threshold adaptation and XOR accumulation algorithm for  objects detectionThreshold adaptation and XOR accumulation algorithm for  objects detection
Threshold adaptation and XOR accumulation algorithm for objects detection
 
21cm cosmology with ML
21cm cosmology with ML21cm cosmology with ML
21cm cosmology with ML
 
Fundamentals of Image processing.ppt
Fundamentals of Image processing.pptFundamentals of Image processing.ppt
Fundamentals of Image processing.ppt
 
rmsip98.ppt
rmsip98.pptrmsip98.ppt
rmsip98.ppt
 
Video Compression Advanced.pdf
Video Compression Advanced.pdfVideo Compression Advanced.pdf
Video Compression Advanced.pdf
 
Foreground Detection : Combining Background Subspace Learning with Object Smo...
Foreground Detection : Combining Background Subspace Learning with Object Smo...Foreground Detection : Combining Background Subspace Learning with Object Smo...
Foreground Detection : Combining Background Subspace Learning with Object Smo...
 
Ph.D. Presentation
Ph.D. PresentationPh.D. Presentation
Ph.D. Presentation
 

Último

Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodJuan lago vázquez
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobeapidays
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsRoshan Dwivedi
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Principled Technologies
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingEdi Saputra
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...DianaGray10
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businesspanagenda
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProduct Anonymous
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century educationjfdjdjcjdnsjd
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc
 

Último (20)

Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 

BMC 2012 - Invited Talk

  • 1. Background Modeling and Foreground Detection for Video Surveillance: Recent Advances and Future Directions Thierry BOUWMANS Associate Professor MIA Lab - University of La Rochelle - France
  • 2. Plan  Introduction  Fuzzy Background Subtraction  Background Subtraction via a Discriminative Subspace Learning: IMMC  Foreground Detection via Robust Principal Component Analysis (RPCA)  Conclusion - Perspectives 2
  • 3. Goal  Detection of moving objects in video sequence.  Pixels are classified as: Background(B) Foreground (F) Séquence Pets 2006 : Image298 (720 x 576 pixels) 3
  • 4. Background Subtraction Process Incremental Algorithm t >N Background Maintenance t ≥N t=t+1 Batch Algorithm t ≤N N images Video Background F(t) Foreground Initialization I(t+1) Detection Foreground N+1 Mask Classification task 4
  • 5. Related Applications  Video surveillance  Optical Motion Capture  Multimedia Applications Séquence Danse [Mikic 2002] – Université de Californie SanJump [Mikic 2002] Projet Aqu@theque – Université de La Rochelle Projet ATON Séquence Diego 5
  • 6. On the importance of the background subtraction Background Processing Subtraction Acquisition Convex Hull Tracking 6 Pattern Recognition
  • 7. Challenges  Critical situations which generate false detections :  Shadows Illumination variations… - Source : Séquence Pets 2006 Image 0298 (720 x 576 pixels) 7
  • 8. Multimodal Backgrounds Rippling Water Camera Waving Water Surface Jitter Trees Source: http://perception.i2r.a-star.edu.sg/bk_model/bk_index.html 8
  • 9. Statistical Background Modeling  Background Subtraction Web Site: References (553), datasets (10) and codes (27). Source: http://sites.google.com/site/backgroundsubtraction/Home.html (6256 Visitors, Source Google Analytics). 9
  • 10. Plan  Introduction  Fuzzy Background Subtraction  Background Subtraction via a Discriminative Subspace Learning: IMMC  Foreground Detection via Robust Principal Component Analysis (RPCA)  Conclusion - Perspectives 10
  • 11. Fuzzy Background Subtraction  A survey in Handbook on Soft Computing for Video Surveillance, Taylor and Francis Group [HSCVS 2012]  Three approaches developed at the MIA Lab:  Background modeling by Type-2 Fuzzy Mixture of Gaussians Model [ISVC 2008].  Foreground Detection using the Choquet Integral [WIAMIS 2008][FUZZ’IEEE 2008]  Fuzzy Background Maintenance [ICIP 2008] 11
  • 12. Weakness of the original MOG 1. False detections due to the matching test kσ1 kσ 2 kσ 3 12
  • 13. Weakness of the original MOG 2. False detections due to the presence of outliers in the training step Exact distribution μ μ min μ max 13
  • 14. Mixture of Gaussians with uncertainty on : the mean and the variance [Zeng 2006] (T2 FMOG-UM) (T2 FMOG-UV) 14
  • 15. Mixture of Gaussians with uncertainty on the mean (T2 FMOG-UM). X_{t,c}: intensity vector in the RGB color space 15
  • 16. Mixture of Gaussians with uncertainty on the variance (T2 FMOG-UV). X_{t,c}: intensity vector in the RGB color space 16
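  A hedged sketch of the interval type-2 membership bounds behind T2 FMOG-UM and T2 FMOG-UV: following the usual reading of [Zeng 2006], the mean is assumed to vary in [μ - k_m σ, μ + k_m σ] and the standard deviation in [k_v σ, σ / k_v]; the function names and the exact bound expressions below are assumptions, not the authors' code.

    import numpy as np

    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def t2_um_bounds(x, mu, sigma, km=2.0):
        """Lower/upper membership for a Gaussian with uncertain mean,
        mu assumed to lie in [mu - km*sigma, mu + km*sigma]."""
        lo, hi = mu - km * sigma, mu + km * sigma
        upper = 1.0 if lo <= x <= hi else gauss(x, lo if x < lo else hi, sigma)
        lower = min(gauss(x, lo, sigma), gauss(x, hi, sigma))
        return lower, upper

    def t2_uv_bounds(x, mu, sigma, kv=0.9):
        """Lower/upper membership for a Gaussian with uncertain variance,
        sigma assumed to lie in [kv*sigma, sigma/kv]."""
        return gauss(x, mu, kv * sigma), gauss(x, mu, sigma / kv)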
  • 17. Classification B/F by T2-FMOG Matching test: Classification B/F as the MOG ⇒ 17
  • 18. Results on the “SHAH” dataset (160 x 128 pixels) – Camera Jitter Video at http://sites.google.com/site/t2fmog/ Original sequence MOG T2 FMOG-UM (km=2) T2 FMOG-UV (kv=0.9) 18
  • 19. Results on the “SHAH” dataset (160 x 128 pixels) – Camera Jitter
    Method        Error type   Image 271   Image 373   Image 410   Image 465   Total error   Variation (%)
    MOG           FN           0           1120        4818        2050
                  FP           2093        4124        2782        1589        18576
    T2-FMOG-UM    FN           0           1414        6043        2520
                  FP           203         153         252         46          10631         42.77
    T2-FMOG-UV    FN           0           957         2217        1069
                  FP           3069        1081        1119        1158        10670         42.56
  • 20. Results on the “SHAH” dataset (160 x 128 pixels) – Camera Jitter [Stauffer 1999] [Bowden 2001] – Initialization [Zivkovic 2004] – K is variable 20
  • 21. Results on the sequence “CAMPUS” (160 x 128 pixels) – Waving Trees Video at http://sites.google.com/site/t2fmog/ Original Sequence MOG T2 FMOG-UM (km=2) T2 FMOG-UV (kv=0.9) 21
  • 22. Results on the sequence “Water Surface” (160 x 128 pixels) – Water Surface Video at http://sites.google.com/site/t2fmog/ Original Sequence MOG T2 FMOG-UM (km=2) T2 FMOG-UV (kv=0.9) 22
  • 23. Fuzzy Foreground Detection :  Features: color, edge, stereo features, motion features, texture.  Multiple features: More robustness in presence of illumination changes, shadows and multimodal backgrounds 23
  • 24. Choice of the features  Color (3 components)  Texture (Local Binary Pattern [Heikkila – PAMI 2006]) For each feature, a similarity measure (S) is computed from its value in the background image and its value in the current image. 24
  • 25. Aggregation of the Color and Texture features with the Choquet Integral (diagram: BG(t) and I(t+1) → color similarity measures S_C,1, S_C,2, S_C,3 and texture similarity measure S_T → fuzzy integral → classification B/F → foreground mask) 25
  • 26. How to compute S for the Color and the Texture? For each pixel, the background image gives C_{B,k} and T_B, the current image gives C_{I,k} and T_I, with 0 ≤ C, T ≤ 255 and k one of the color components. The similarities (0 ≤ S ≤ 1) are
    $$S_{C,k} = \begin{cases} C_{B,k}/C_{I,k} & \text{if } C_{B,k} < C_{I,k} \\ 1 & \text{if } C_{B,k} = C_{I,k} \\ C_{I,k}/C_{B,k} & \text{if } C_{I,k} < C_{B,k} \end{cases} \qquad S_T = \begin{cases} T_B/T_I & \text{if } T_B < T_I \\ 1 & \text{if } T_B = T_I \\ T_I/T_B & \text{if } T_I < T_B \end{cases}$$
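  A minimal sketch of the ratio similarities defined above; the helper name and the way it would be called per pixel are assumptions.

    def similarity(b, i):
        """Ratio similarity between a background value b and a current value i
        (a colour component in [0, 255] or an LBP texture code), kept in [0, 1]."""
        if b == i:
            return 1.0
        return min(b, i) / max(b, i)

    # Per pixel: S_Ck = similarity(C_Bk, C_Ik) for each colour component k,
    #            S_T  = similarity(T_B, T_I) for the LBP texture code.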
  • 27. Fuzzy operators: the Sugeno integral and the Choquet integral  Handle uncertainty and imprecision  Great flexibility  Fast and simple operations  The Sugeno integral is suited to ordinal aggregation, the Choquet integral to cardinal aggregation 27
  • 28. Data Fusion using the Choquet Integral  Fuzzy measures: X = {x1, x2, x3}, with measures defined on the subsets {x1}, {x2}, {x3}, {x1, x2}, {x1, x3}, {x2, x3}  Choquet integral 28
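  The speaker notes (note 28 below) mention that a lambda fuzzy measure is used to derive the measures of all subsets from the singleton densities; when the densities sum to 1, as for the measures on slide 31, the measure is simply additive (λ = 0). A hedged sketch of that construction, with placeholder densities; function names are assumptions.

    import numpy as np
    from itertools import combinations
    from scipy.optimize import brentq

    def sugeno_lambda(g):
        """Non-zero root of prod(1 + lam*g_i) = 1 + lam (Sugeno lambda-measure)."""
        g = np.asarray(g, dtype=float)
        if abs(g.sum() - 1.0) < 1e-9:
            return 0.0                                   # additive case
        f = lambda lam: np.prod(1.0 + lam * g) - 1.0 - lam
        return brentq(f, -1.0 + 1e-9, -1e-9) if g.sum() > 1 else brentq(f, 1e-9, 1e9)

    def lambda_measure(names, g):
        """Measure of every subset from the singleton densities g_i,
        using g(A U {x}) = g(A) + g_x + lam * g(A) * g_x."""
        lam = sugeno_lambda(g)
        mu = {frozenset(): 0.0}
        for r in range(1, len(names) + 1):
            for comb in combinations(range(len(names)), r):
                m = 0.0
                for i in comb:
                    m = m + g[i] + lam * m * g[i]
                mu[frozenset(names[i] for i in comb)] = m
        return mu

    # Example with placeholder densities: lambda_measure(['x1', 'x2', 'x3'], [0.5, 0.4, 0.1])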
  • 29. Fuzzy Foreground Detection  Classification using the Choquet integral: if C_μ(x, y) ≥ Th then (x, y) ∈ Background, else (x, y) ∈ Foreground, where Th is a constant threshold and C_μ(x, y) is the value of the Choquet integral for the pixel (x, y) (the fused similarities are close to 1 for background pixels) 29
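  A hedged sketch of the discrete Choquet integral over the three criteria and of the thresholding above; the fuzzy measure is passed as a dictionary over frozensets (for example, one built by lambda_measure above), and the threshold value and function names are placeholders.

    def choquet_integral(scores, mu):
        """Discrete Choquet integral of criterion scores {name: value in [0, 1]}
        with respect to a fuzzy measure mu given as {frozenset of names: measure}."""
        items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending scores
        names = [name for name, _ in items]
        total, prev = 0.0, 0.0
        for idx, (_, value) in enumerate(items):
            subset = frozenset(names[idx:])        # criteria whose score >= value
            total += (value - prev) * mu[subset]
            prev = value
        return total

    def classify_pixel(s_c1, s_c2, s_t, mu, th=0.67):
        """Background if the fused similarity C_mu reaches the threshold Th."""
        c = choquet_integral({'x1': s_c1, 'x2': s_c2, 'x3': s_t}, mu)
        return 'background' if c >= th else 'foreground'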
  • 30. Aggregation Color, Texture  Aqu@thèque (384 x 288 pixels) - Ohta color space  Comparison between the Sugeno and Choquet integrals [Zhang 2006]: Choquet - Ohta: S(A,B) = 0.40; Sugeno - Ohta: S(A,B) = 0.27  a) Current image b) Ground truth c) Choquet integral d) Sugeno integral 30
  • 31. Aggregation Colors, Texture: Ohta, YCrCb, HSV  Aqu@thèque (384 x 288 pixels)  Fuzzy measure values μ are assigned to the color/texture criteria {x1}, {x2}, {x3} and to their unions {x1, x2}, {x1, x3}, {x2, x3}, X = {x1, x2, x3} for Choquet - Ohta, Choquet - YCrCb and Choquet - HSV (table of measure values on the original slide)  Evaluation of the Choquet integral for different color spaces: S(A,B) = 0.40 (Ohta), 0.42 (YCrCb), 0.30 (HSV) 31
  • 32. Aggregation Color, Texture  VS-Pets 2003 (720 x 576) Current Image Choquet - YCrCb Sugeno – Ohta [Zhang 2006] 32
  • 33. Aggregation Colors : Pets 2006 (384 x 288 pixels) Original sequence Ground truth OR Sugeno Integral Choquet Integral YCrCb Ohta HSV 33
  • 34. Fuzzy Background maintenance  Non-selective (blind) rule  Selective rule: the idea is to adapt very quickly a pixel classified as background and very slowly a pixel classified as foreground. 34
  • 35. Fuzzy adaptive rule: combination of the update rules of the selective scheme 35
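  A hedged sketch contrasting the selective rule with one plausible reading of the fuzzy adaptive rule, in which the Choquet integral output graduates the learning rate instead of switching it crisply; the learning-rate values and the exact modulation used in [ICIP 2008] are assumptions.

    import numpy as np

    def selective_update(B, I, fg_mask, alpha_bg=0.05, alpha_fg=0.001):
        """Selective rule: fast adaptation for background pixels, slow for foreground."""
        alpha = np.where(fg_mask, alpha_fg, alpha_bg)
        return (1.0 - alpha) * B + alpha * I

    def fuzzy_adaptive_update(B, I, choquet, alpha=0.05):
        """Illustrative fuzzy rule: the Choquet result in [0, 1], close to 1 when the
        pixel looks like background, smoothly scales the learning rate."""
        a = alpha * choquet
        return (1.0 - a) * B + a * I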
  • 36. Results on the Wallflower dataset  Sequence Time of Day, original image 1850, ground truth, no selective rule, selective rule, fuzzy adaptive rule  Similarity measure S(A,B) %: No selective 58.40, Selective 57.08, Fuzzy adaptive 58.96 36
  • 37. Computation Time (resolution 384 x 288, RGB, Pentium 1.66 GHz, 1 GB RAM)
    Algorithm          Frames/second
    T2-FMOG-UM         11
    T2-FMOG-UV         12
    MOG                20
    Choquet integral   31
    Sugeno integral    22
    OR                 40
  • 38. Assessment and Perspectives  Fuzzy Background Modeling by T2-FMOG: multimodal backgrounds; using fuzzy approaches in other statistical models  Fuzzy Foreground Detection using multiple features: using more than two features; fuzzy measures obtained by learning  Fuzzy Background Maintenance 38
  • 39. Plan  Introduction  Fuzzy Background Subtraction  Background Subtraction via a Discriminative Subspace Learning: IMMC  Foreground Detection via Robust Principal Component Analysis (RPCA)  Conclusion - Perspectives 39
  • 40. Background Modeling and Foreground Detection via a Discriminative Subspace Learning (MIA Lab)  Reconstructive subspace learning models (PCA, ICA, IRT) [RPCS 2009]  Assumption: The main information contained in the training sequence is the background meaning that the foreground has a low contribution.  However, this assumption is only verified when the moving objects are either small or far away from the camera. 40
  • 41. Discriminative Subspace Learning  Advantages  More efficient and often give better classification results.  Robust supervised initialization of the background  Incremental update of the eigenvectors and eigenvalues.  Approach developed at the MIA Lab:  Background initialization via MMC [MVA 2012]  Background maintenance via Incremental Maximum Margin Criterion (IMMC) [MVA 2012] 41
  • 42. Background Subtraction via Incremental Maximum Margin Criterion  Denote the training video sequence S = {I1, ..., IN}, where It is the frame at time t and N is the number of training frames.  Let each pixel (x, y) be characterized by its intensity in the grey scale, and assume that we have the ground truth corresponding to this training video sequence, i.e., we know for each pixel its class label, which can be foreground or background. 42
  • 43. Background Subtraction via Incremental Maximum Margin Criterion  Thus, we compute respectively the inter-class scatter matrix Sb and the intra-class scatter matrix Sw:
    $$S_b = \sum_{i=1}^{c} p_i (\bar{I}_i - \bar{I})(\bar{I}_i - \bar{I})^T, \qquad S_w = \sum_{i=1}^{c} p_i \, E\big[(I - \bar{I}_i)(I - \bar{I}_i)^T \mid \text{class } i\big]$$
    where c = 2, Ī is the mean of the intensity of the pixel (x, y) over the training video, Ī_i is the mean of the samples belonging to class i, and p_i is the prior probability of a sample belonging to class i (Background, Foreground). 43
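  A minimal numpy sketch of the two-class scatter matrices above, written for generic samples X (rows) with labels y in {0, 1}; how the samples and labels are actually built from the training frames and their ground-truth masks follows [MVA 2012] and is not reproduced here.

    import numpy as np

    def scatter_matrices(X, y):
        """Inter-class (Sb) and intra-class (Sw) scatter matrices for two classes.
        X: (n_samples, d) samples as rows; y: labels in {0, 1} (background/foreground)."""
        d = X.shape[1]
        mean = X.mean(axis=0)
        Sb = np.zeros((d, d))
        Sw = np.zeros((d, d))
        for c in (0, 1):
            Xc = X[y == c]
            p = len(Xc) / len(X)                       # prior probability p_i
            mc = Xc.mean(axis=0)
            diff = (mc - mean)[:, None]
            Sb += p * (diff @ diff.T)
            Sw += p * np.cov(Xc, rowvar=False, bias=True)
        return Sb, Sw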
  • 44. Background Subtraction via Incremental Maximum Margin Criterion  Batch Maximum Margin Criterion algorithm: extract the first leading eigenvectors, which correspond to the background. The corresponding eigenvalues are contained in the matrix L_M and the leading eigenvectors in the matrix Φ_M.  The current image It can then be approximated by the mean background and a weighted sum of the leading eigenbackgrounds Φ_M. 44
  • 45. Background Subtraction via Incremental Maximum Margin Criterion  The coordinates of the current image It in the leading eigenbackground space can be computed as w_t = Φ_M^T (I_t - μ_B), where μ_B is the mean background.  When w_t is back-projected onto the image space, the background image is created: B_t = μ_B + Φ_M w_t. 45
  • 46. Background Subtraction via Incremental Maximum Margin Criterion  Foreground detection  Background maintenance via IMMC 46
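  A hedged numpy sketch of the batch pipeline of slides 44-46: leading eigenvectors of (Sb - Sw) as eigenbackgrounds, projection and back-projection of the current frame, and foreground detection by thresholding the reconstruction error. The incremental IMMC update and the background maintenance rule are not shown; the threshold value and the function names are assumptions.

    import numpy as np

    def mmc_eigenbackgrounds(Sb, Sw, M):
        """Leading eigenvectors of (Sb - Sw), the directions maximizing the
        Maximum Margin Criterion tr(W^T (Sb - Sw) W)."""
        vals, vecs = np.linalg.eigh(Sb - Sw)       # symmetric, ascending eigenvalues
        order = np.argsort(vals)[::-1][:M]
        return vals[order], vecs[:, order]         # L_M, Phi_M

    def background_image(I_t, mu_B, Phi_M):
        """Project the current frame onto the eigenbackground space (w_t) and
        back-project to obtain the background image B_t."""
        w_t = Phi_M.T @ (I_t - mu_B)
        return mu_B + Phi_M @ w_t

    def foreground_mask(I_t, B_t, T=30.0):
        """Foreground detection by thresholding the reconstruction error."""
        return np.abs(I_t - B_t) > T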
  • 47. Principle - Illustration  Current image, I_Background, I_Foreground, background image, foreground mask 47
  • 48. Results on the Wallflower dataset Original image, ground truth , SG, MOG, KDE, PCA, INMF, IRT, IMMC (30), IMMC (100) 48
  • 49. Assessment and Perspectives  Advantages: robust supervised initialization of the background; incremental update of the eigenvectors and eigenvalues.  Disadvantages: needs ground truth in the training step.  Perspective: other discriminative subspace learning methods such as LDA. 49
  • 50. Plan  Introduction  Fuzzy Background Subtraction  Background Subtraction via a Discriminative Subspace Learning: IMMC  Foreground Detection via Robust Principal Component Analysis (RPCA)  Conclusion - Perspectives 50
  • 51. Foreground Detection via Robust Principal Component Analysis  PCA (Oliver et al 1999): Not robust to outliers.  Robust PCA (Candes et al. 2011): Decomposition into low-rank and sparse matrices  Approach developed at the MIA Lab:  Validation [ICIP 2012][ICIAR 2012][ISVC 2012]  RPCA via Iterative Reweighted Least Squares [BMC 2012] 51
  • 52. Robust Principal Component Analysis  Candes et al. (ACM 2011) proposed a convex optimization to address the robust PCA problem. The observation matrix A is assumed to be represented as A = L + S, where L is a low-rank matrix and S is a sparse matrix with a small fraction of nonzero entries. 52 http://perception.csl.illinois.edu/matrix-rank/home.html
  • 53. Robust Principal Component Analysis  This research seeks to solve for L with the following optimization problem:
    $$\min_{L,S} \;\|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad A = L + S$$
    where ||.||_* and ||.||_1 are the nuclear norm (the l1-norm of the singular values) and the l1-norm, respectively, and λ > 0 is a balancing parameter.  Under minimal assumptions, this approach, called Principal Component Pursuit (PCP), exactly recovers the low-rank and the sparse matrices. 53
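  A hedged numpy sketch of one standard way to solve this problem, an inexact augmented Lagrange multiplier scheme built from singular value thresholding and soft thresholding; the parameter choices below follow common practice and are assumptions, not the exact settings of the solvers compared on the next slide.

    import numpy as np

    def _shrink(X, tau):
        """Soft thresholding (proximal operator of the l1 norm)."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def _svt(X, tau):
        """Singular value thresholding (proximal operator of the nuclear norm)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * _shrink(s, tau)) @ Vt

    def rpca_pcp(A, lam=None, tol=1e-7, max_iter=500):
        """PCP via an inexact augmented Lagrangian:
        min ||L||_* + lam * ||S||_1  subject to  A = L + S."""
        m, n = A.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        norm_A = np.linalg.norm(A, 'fro')
        S = np.zeros_like(A)
        Y = np.zeros_like(A)                    # Lagrange multipliers
        mu = 1.25 / np.linalg.norm(A, 2)        # penalty parameter
        rho = 1.5
        for _ in range(max_iter):
            L = _svt(A - S + Y / mu, 1.0 / mu)
            S = _shrink(A - L + Y / mu, lam / mu)
            Z = A - L - S                       # residual of the constraint
            Y += mu * Z
            mu *= rho
            if np.linalg.norm(Z, 'fro') / norm_A < tol:
                break
        return L, S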
  • 54. Algorithms for solving PCP  Time required to solve a 1000 x 1000 (10^6 entries) RPCA problem:
    Algorithm   Accuracy    Rank   ||E||_0    # iterations   Time (sec)
    IT          5.99e-006   50     101,268    8,550          119,370.3
    DUAL        8.65e-006   50     100,024    822            1,855.4
    APG         5.85e-006   50     100,347    134            1,468.9
    APGP        5.91e-006   50     100,347    134            82.7
    ALMP        2.07e-007   50     100,014    34             37.5
    ADMP        3.83e-007   50     99,996     23             11.8
    (about a 10,000x speedup from IT to ADMP)
    Source: Z. Lin, Y. Ma, “The Pursuit of Low-dimensional Structures in High-dimensional (Visual) Data: Fast and Scalable Algorithms”
    The time required is still acceptable for ADM, but what about background modeling and foreground detection?
  • 55. Application to Background Modeling and Foreground Detection  n is the number of pixels in a frame (10^6), m is the number of frames considered (200)  Computation time is 200 * 12 s = 40 minutes!!! Source: http://perception.csl.illinois.edu/matrix-rank/home.html 55
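  For illustration only, reusing the rpca_pcp sketch above: the m training frames are stacked as the columns of A, the low-rank part L plays the role of the background and the foreground masks come from thresholding the sparse part S; the shapes, the threshold value and the function name are placeholders.

    import numpy as np

    def foreground_masks(frames, thresh=25.0):
        """frames: list of m grayscale images (h x w).  Returns one boolean mask per frame."""
        h, w = frames[0].shape
        A = np.stack([f.ravel() for f in frames], axis=1).astype(float)   # n x m
        L, S = rpca_pcp(A)            # low-rank background, sparse moving objects
        return [np.abs(S[:, t]).reshape(h, w) > thresh for t in range(len(frames))]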
  • 56. PCP and its application to Background Modeling and Foreground Detection  Only visual validations are provided!!!  Limitations:  Spatio-temporal aspect: None!  Real Time Aspect: PCP takes 40 minutes with the ADM!!!  Incremental Aspect: PCP is a batch algorithm. For example, (Candes et al. 2011) collected 200 images. 56
  • 57. PCP and its variants How to improve PCP?  Algorithms for solving PCP (17 Algorithms)  Incremental PCP (5 papers)  Real-Time PCP (2 papers) Validation for background modeling and foreground detection (3 papers) [ICIP 2012][ICIAR 2012][ISVC 2012] Source: T. Bouwmans, Foreground Detection using Principal Component Pursuit: A Survey, under preparation. 57
  • 58. PCP and its variants Source: T. Bouwmans, Foreground Detection using Principal Component Pursuit: A Survey, under preparation. 58
  • 59. Validation Background Modeling and Foreground Detection: Qualitative Evaluation Original image Ground truth PCA RSL PCP-EALM PCP-IADM PCP-LADM PCP-LSADM BPCP-IALM 59 Source: ICIP 2012, ICIAR 2012, ISVC 2012
  • 60. Validation Background Modeling and Foreground Detection : Quantitative Evaluation F-Measure Block PCP gives the best performance! 60 Source: ICIP 2012, ICIAR 2012, ISVC 2012
  • 61. PCP and its application to Background Modeling and Foreground Detection  Recent improvements:  BPCP (Tang and Nehorai (2012)): spatial but not incremental and not real time!  Recursive Robust PCP (Qiu and Vaswani (2012)): incremental but not real time!  Real-time implementation on GPU (Anderson et al. (2012)): real time but not incremental!  What can we do?  Research on real-time incremental robust PCP! 61
  • 62. Conclusion and Perspectives  Fuzzy Background Subtraction  Background Subtraction via a Discriminative Subspace Learning: IMMC  Foreground Detection via Robust Principal Component Analysis (RPCA)  Perspectives: fuzzy learning rate; other discriminative subspace learning methods such as LDA; incremental and real-time RPCA 62
  • 63. Publications Chapter Fuzzy Background Subtraction T. Bouwmans, “Background Subtraction For Visual Surveillance: A Fuzzy Approach”, Handbook on Soft Computing for Video Surveillance, Taylor and Francis Group, Chapter 5, March 2012. International Conferences : F. El Baf, T. Bouwmans, B. Vachon, “Fuzzy Statistical Modeling of Dynamic Backgrounds for Moving Object Detection in Infrared Videos”, CVPR 2009 Workshop , pages 1-6, Miami, USA, 22 June 2009. F. El Baf, T. Bouwmans, B. Vachon, “Type-2 Fuzzy Mixture of Gaussians Model: Application to Background Modeling”, ISVC 2008, pages 772-781, Las Vegas, USA, December 2008 F. El Baf, T. Bouwmans, B. Vachon, “A Fuzzy Approach for Background Subtraction”, ICIP 2008, San Diego, California, U.S.A, October 2008. F. El Baf, T. Bouwmans, B. Vachon. " Fuzzy Integral for Moving Object Detection ", IEEE-FUZZY 2008 , Hong Kong, China, June 2008. F. El Baf, T. Bouwmans, B. Vachon, “Fuzzy Foreground Detection for Infrared Videos”, CVPR 2008 Workshop , pages 1-6, Anchorage, Alaska, USA, 27 June 2008. F. El Baf, T. Bouwmans, B. Vachon, “Foreground Detection using the Choquet Integral”, International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2008, pages 187-190, Klagenfurt, Austria, May 2008.
  • 64. Publications Background Subtraction via IMMC Journal D. Farcas, C. Marghes, T. Bouwmans, “Background Subtraction via Incremental Maximum Margin Criterion: A discriminative approach” , Machine Vision and Applications , March 2012. International Conferences : C. Marghes, T. Bouwmans, "Background Modeling via Incremental Maximum Margin Criterion", International Workshop on Subspace Methods, ACCV 2010 Workshop Subspace 2010, Queenstown, New Zealand, November 2010. D. Farcas, T. Bouwmans, "Background Modeling via a Supervised Subspace Learning", International Conference on Image, Video Processing and Computer Vision, IVPCV 2010, pages 1-7, Orlando, USA , July 2010.
  • 65. Publications Chapter Foreground Detection via RPCA C. Guyon, T. Bouwmans, E. Zahzah, “Robust Principal Component Analysis for Background Subtraction: Systematic Evaluation and Comparative Analysis”, INTECH, Principal Component Analysis, Book 1, Chapter 12, page 223-238, March 2012. International Conferences : C. Guyon, T. Bouwmans. E. Zahzah, “Foreground Detection via Robust Low Rank Matrix Factorization including Spatial Constraint with Iterative Reweighted Regression”, International Conference on Pattern Recognition, ICPR 2012, Tsukuba, Japan, November 2012. C. Guyon, T. Bouwmans. E. Zahzah, “Moving Object Detection via Robust Low Rank Matrix Decomposition with IRLS scheme”, International Symposium on Visual Computing, ISVC 2012,pages 665–674, Rethymnon, Crete, Greece, July 2012. C. Guyon, T. Bouwmans, E. Zahzah, “Moving Object Detection by Robust PCA solved via a Linearized Symmetric Alternating Direction Method”, International Symposium on Visual Computing, ISVC 2012, pages 427-436, Rethymnon, Crete, Greece, July 2012. C. Guyon, T. Bouwmans, E. Zahzah, "Foreground Detection by Robust PCA solved via a Linearized Alternating Direction Method", International Conference on Image Analysis and Recognition, ICIAR 2012, pages 115-122, Aveiro, Portugal, June 2012. C. Guyon, T. Bouwmans, E. Zahzah, "Foreground detection based on low-rank and block-sparse matrix decomposition", IEEE International Conference on Image Processing, ICIP 2012 , Orlando, Florida, September 2012.

Editor's Notes

  1. Fida EL BAF My name is Thierry BOUWMANS. My talk is about recent advances and future directions for background modeling and foreground detection. I will particularly focus on the methods that I have developed at the MIA Lab over the last five years.
  2. Fida EL BAF First, I will introduce the main challenges in background modeling and foreground detection. Then, I will present the three main approaches that I developed at my lab using fuzzy tools, discriminative subspace learning and recent advances in robust PCA. Then, I will conclude with some perspectives.
  3. Fida EL BAF The goal of background modeling and foreground detection is to detect moving objects in video sequences. For this, pixels need to be classified as background or foreground, as can be seen in the picture. White pixels correspond to foreground and black pixels correspond to background.
  4. Fida EL BAF This classification is usually achieved by the background subtraction process. It is defined by 3 main steps: The background initialization, which generates the first background image from N images. The foreground detection, which uses the background image and the new current image at time N+1 to decide whether a pixel corresponds to FG or BG by thresholding the decision rule. The background maintenance, which updates the background image with the recent changes that can occur in the scene. That is why we need to update the background image with the arrival of each new frame. For that, 3 pieces of information are used: 1) the BG(t), 2) the new frame I(t+1) and 3) the foreground mask. It is important to note that the training step may be a batch task, the foreground detection is a classification task, and the background maintenance needs an incremental algorithm.
  5. Fida EL BAF The related applications are the following: 1) Video surveillance, to detect cars and track them. 2) Optical motion capture, to detect silhouettes and construct an avatar. 3) Multimedia applications such as Aqu@theque, developed at La Rochelle. Here, we need to detect fish in a tank with moving algae and challenging illumination changes.
  6. Fida EL BAF The first step of many video analysis systems is the segmentation of the foreground objects from the background. So, false detections at this step affect the following steps: tracking for video surveillance, pattern recognition for multimedia applications such as Aqu@theque, and convex hull computation for motion capture.
  7. Fida EL BAF What are the challenges for such a system? We recall that the goal is to classify pixels as foreground or background. But structural background changes, illumination changes or shadows can generate a false classification, as we can see in this picture.
  8. Fida EL BAF Multimodal backgrounds are the most challenging ones. We can see in these pictures some examples and the false detections. Many algorithms have been developed to deal with these challenges.
  9. Fida EL BAF Statistical background modeling has attracted much attention. These models can be categorized as follows: Gaussian models, support vector models and subspace learning models. Gaussian models are more adaptable to dynamic backgrounds, whereas subspace learning models are better suited to illumination changes. However, none of these background models can correctly handle both dynamic backgrounds and illumination changes. More information is available at the Background Subtraction Web Site, where you can find references, links to codes and links to datasets.
  10. Fida EL BAF Now , I will present how fuzzy theory can be used in background modeling and foreground detection.
  11. Fida EL BAF I will focus on the fuzzy approaches developed at the MIA Lab. The other ones can be found in my chapter on fuzzy approaches. Fuzzy tools can be used at the background modeling step, the foreground detection step and the background maintenance step.
  12. Fida EL BAF The most used model in background modeling is the Mixture of Gaussians, but this model presents some weaknesses, for example here: At the left you have the initial estimated Gaussian. During the initialization process, all the data are used to build the Gaussian: the data that are in this interval and some data that are outside this interval. But over time, only data that are in this interval are used to update the Gaussian. So, the Gaussian becomes wider over time, as can be seen in this illustration. This causes false detections over time.
  13. Fida EL BAF Furthermore, the presence of outliers in the training step leads to an inexact estimation of the Gaussian. In this example, we can see that there are uncertainties on the mean and the variance of the Gaussians. So, we can use fuzzy theory to deal with this uncertainty.
  14. Fida EL BAF Here, we can see how we can model uncertainty on the mean and the variance. They vary within intervals with uniform possibilities. The shaded region is the footprint of uncertainty (FOU). The thick solid and dashed lines denote the lower and upper membership functions.
  15. Fida EL BAF Here, we can see the distribution with the uncertainty on the mean with X which is the intensity vector in the red green blue color space.
  16. Fida EL BAF Here, we can see the distribution with the uncertainty on the variance. For these two cases, the learning and update steps are similar to the original MOG except that we introduce uncertainty with km and kv.
  17. Fida EL BAF For the foreground detection, the matching test is different. The measure H is used to quantify the uncertainty related to X. This measure is then thresholded to obtain the foreground mask. This measure avoids the first weakness of the Mixture of Gaussians.
  18. Fida EL BAF Here, we present some results obtained by this fuzzy approach. The best results were obtained with the values km=2 and kv=0.9. We can see that we have fewer false positives with the fuzzy approach.
  19. Fida EL BAF This is confirmed by the false negatives and false positives. The fuzzy approach outperforms the original one.
  20. Fida EL BAF We have tested this fuzzy approach on two other variants, proposed by Bowden and Zivkovic. We can see that in each case the results are improved.
  21. Fida EL BAF These results show the robustness of the proposed algorithm against waving trees.
  22. Fida EL BAF These results show the robustness of T2 FMOG-UM against water surfaces. So, the fuzzy approach is pertinent for background modeling. Now, we will see how we can use fuzzy tools for foreground detection.
  23. Fida EL BAF The features commonly used to compare the background and the current image are color, edge, stereo and texture features. These features have different properties which allow critical situations like illumination changes, motion changes and structural background changes to be handled differently. In general, they are used separately and the most used is color, but the use of more than one feature can improve the results.
  24. Fida EL BAF Color features are often very discriminative features of objects, but they have several limitations in the presence of illumination changes, camouflage and shadows. Background subtraction methods that rely only on color information will most probably fail to correctly detect moving objects whose color is similar to that of the background. To solve these problems, some authors proposed to use other features like edge, texture and stereo in addition to the color features. In our work we have adopted the same scheme, but which features should be chosen? For example, stereo deals with camouflage, but two cameras are needed. Edges handle local illumination changes and the ghost left when waking foreground objects begin to move. Texture is appropriate for illumination changes and for shadows, which are a main challenge in our work. So, in addition to the intensity of each color component, we chose to use texture information when modeling the background, and the Local Binary Pattern developed by Heikkila was selected as the texture measure because of its robustness to illumination changes and shadows. On the other hand, the proposed features are very fast to compute, which is an important property from the practical implementation point of view. Now that the features are chosen, how do we integrate the information that they hold to detect FG objects? In general, a simple subtraction is made between the current and the background images to detect regions corresponding to the foreground. Another way to establish this comparison consists in defining a similarity measure between pixels at the same location in the current and BG images. Pixels corresponding to BG should be similar while those corresponding to FG should not be similar.
  25. Fida EL BAF In the literature, fuzzy integrals have been widely and successfully applied to classification problems. In the context of foreground detection, these integrals seem to be good candidates for fusing sources obtained from different features. A pixel can be evaluated based on criteria or sources providing information about the state of the pixel, whether it corresponds to background or foreground. The more criteria provide information about the pixel, the more relevant the decision on the pixel's state.
  26. Fida EL BAF Here I explain how to compute the similarity measure for color and for texture. We have the background image and the current frame. For each pixel (see the pixel marked in red), after extracting the intensity of each color component and the LBP code for the texture feature, the similarity measure for the texture feature is obtained by the ratio of the texture value in the background image and the texture value in the current image, so as to always have a value between zero and one. The similarity measure for the color features is computed in the same way. Note that the value of the LBP code and the value of the color intensity are between 0 and 255.
  27. Fida EL BAF There are two fuzzy integrals that can be used to fuse the features: the Sugeno integral and the Choquet integral. They allow us to deal with uncertainty and imprecision. They offer great flexibility and they can be computed with fast and simple operations. The Choquet integral is adapted to cardinal aggregation while the Sugeno integral is more suitable for ordinal aggregation. So, the Choquet integral is well suited for foreground detection.
  28. Fida EL BAF Some color spaces allow the chrominance components to be separated from the luminance. For the chosen color space, two components x1 and x2 are chosen according to the relevant information they contain, so as to have the least sensitivity to illumination changes. For texture, x3 indicates the value of the texture feature obtained by the LBP code. With each criterion, we associate a fuzzy measure, mu(x1), mu(x2) and mu(x3), where mu(xi) is the degree of importance of the feature xi in the decision whether the pixel corresponds to BG or FG, such that the higher the mu(xi), the more important the corresponding criterion in the decision. To simplify the computation, a lambda fuzzy measure (additive) is used to compute the fuzzy measure of all subsets of criteria. By experimentation, the best results are obtained with the last measures given.
  29. Fida EL BAF The foreground detection is achieved by the following classification. The results of the Choquet integral are thresholded.
  30. Fida EL BAF The Aqu@theque dataset comes from a system dedicated to aquariums to detect and identify fish in a tank. The goal is to provide some educational information about the fish selected by the user. When testing our algorithm on this dataset, where the illumination conditions are uncontrolled, we obtained this result with the Ohta color space. When comparing our algorithm with a similar approach developed by Zhang, using the Sugeno integral with the Ohta color space, the result shows an improvement based on visual interpretation. Numerical evaluation is usually done in terms of false negatives (the number of foreground pixels that we have missed) and false positives (the number of background pixels that we have marked as foreground). The ground truth is obtained manually. First, we show a quantitative evaluation with respect to the measure derived by Li [33], which compares the detected region and the corresponding ground truth; this quantity approaches 1 when the two regions are similar and 0 when they have the least similarity. The best results are obtained by the Choquet integral. To see the progression of the performance of both algorithms, we drew the ROC curve. The overall performance of our algorithm seems to be better than the performance of the compared method on the test sequences used. The area under the curve confirms the result.
  31. Fida EL BAF At the same time, we have tested other color spaces like YCrCb and HSV with our algorithm. The Ohta and YCrCb spaces give almost similar results (S_Ohta = 0.40, S_YCrCb = 0.42), whereas the HSV space gives S_HSV = 0.30. When observing the effect of the YCrCb and Ohta spaces on the images, we noticed that YCrCb is slightly better than the Ohta space.
  32. Fida EL BAF Some other results in sports video and video surveillance applications. For each dataset, we provide a comparison with the method proposed by Zhang. The silhouettes are better detected and the illumination variations on the white border are less often detected as foreground using our method. Here again the algorithm shows robustness to illumination changes and shadows.
  33. Fida EL BAF Some other results in sports video and video surveillance applications. For each dataset, we provide a comparison with the method proposed by Zhang. The silhouettes are better detected and the illumination variations on the white border are less often detected as foreground using our method. Here again the algorithm shows robustness to illumination changes and shadows.
  34. Fida EL BAF The blind background maintenance consists of updating all the pixels with the same rule. The drawback of this scheme is that the values of pixels classified as foreground are taken into account in the computation of the new background and thus pollute the background image. To solve this problem, some authors use a selective maintenance which consists of computing the new background image with a different learning rate depending on the pixel's previous classification as foreground or background, as follows. Here, the idea is to adapt very quickly a pixel classified as background and very slowly a pixel classified as foreground. But the problem is that erroneous classification results may lead to a permanently incorrect background model.
  35. Fida EL BAF The drawback of the selective maintenance is mainly due to the crisp decision, which assigns a different rule depending on the classification as background or foreground. To solve this problem, we propose to take into account the uncertainty of the classification. This can be done by graduating the update rule using the result of the Choquet integral, as follows.
  36. Fida EL BAF This experiment shows the evaluation of the different update rules for the previous experiments. The fuzzy adaptive scheme seems to be only slightly better than the other update rules from the quantitative evaluation point of view, but it shows a clear improvement based on visual interpretation.
  37. Fida EL BAF Here, you can see some computation times for the fuzzy approaches. Their speeds are still acceptable. Furthermore, the speed can be improved by a GPU implementation.
  38. Fida EL BAF So, fuzzy tools have been applied with success to background modeling, foreground detection and background maintenance. Future work may concern using fuzzy approaches in other statistical models, using more than two features for the foreground detection, and a more adaptive learning rate.
  39. Fida EL BAF Now , I will present how discriminative subspace learning can be used for background subtraction.
  40. Fida EL BAF Reconstructive subspace learning models, such as principal component analysis (PCA), have been mainly used to model the background by significantly reducing the data's dimension. The reconstructive representations strive to be as informative as possible in terms of approximating the original data well. Their objective is mainly to encompass the variability of the training data, so they put more effort into modeling the background in an unsupervised manner than into precisely classifying pixels as foreground or background in the foreground detection.
  41. Fida EL BAF On the other hand, discriminative methods are usually less adapted to the reconstruction of data; however, they are spatially and computationally much more efficient and often give better classification results than reconstructive methods. So, we propose the use of a discriminative subspace learning model called incremental maximum margin criterion (IMMC). The objective is, first, to enable a robust supervised initialization of the background and, second, a robust classification of pixels as background or foreground. Furthermore, IMMC also allows an incremental update of the eigenvectors and eigenvalues.
  42. Fida EL BAF
  43. Fida EL BAF
  44. Fida EL BAF
  45. Fida EL BAF
  46. Fida EL BAF
  47. Fida EL BAF Here, in the first row, there are the current images. Then, we can see the images that correspond to the background and foreground classes, the background image and the foreground mask. Note that only the images which correspond to the background class are used to obtain the background image.
  48. Fida EL BAF Here, we present results on the Wallflower dataset. We can see that the proposed method outperforms the Gaussian models and the reconstructive subspace learning methods.
  49. Fida EL BAF So, discriminative approaches allow us to have a robust supervised initialization of the background and an incremental update of the eigenvectors and eigenvalues. The drawback is that the method needs ground truth images for the training step. For future research, other discriminative subspace methods can be used.
  50. Fida EL BAF Now , I will present how recent advances in robust principal component analysis can be used for foreground detection.
  51. Fida EL BAF The first method that used PCA for background modeling and foreground detection is the one proposed by Oliver et al., but this method presents several limitations and is not robust in the presence of outliers. Recent advances in robust PCA, which decomposes the data matrix into a low-rank matrix and a sparse matrix, provide a nice framework to separate the moving objects from the background. At the MIA Lab, we first evaluated this method and its variants. Then, we developed an RPCA method based on Iterative Reweighted Least Squares.
  52. Fida EL BAF In the picture, we can see the observation matrix and how it can be decomposed into low-rank and sparse parts. The low-rank matrix is clean and the sparse matrix contains the noise. Here, we can also see the main assumption made in this method: the noise has to be uniformly distributed, which is not the case for the moving objects in background modeling and foreground detection.
  53. Fida EL BAF
  54. Fida EL BAF The time requirement is a key point in real-time applications such as background modeling. Here, we can see the time required by different solvers. For the Alternating Direction Method, the time is still acceptable!
  55. Fida EL BAF When we apply this method directly to background modeling and foreground detection, we can see that the amount of data is much larger, here two hundred times larger than in the previous example. Then, the computation time becomes very expensive (forty minutes). At the left of the picture, we can see that the training images are stacked as columns in the observation matrix, so the spatial information is lost. At the right, we can see the decomposition: the low-rank part corresponds to the background and the sparse part to the foreground objects.
  56. Fida EL BAF So, the main drawbacks of PCP are that 1) only qualitative results are shown, 2) it is not real time and 3) PCP is a batch algorithm.
  57. Fida EL BAF There are several variants of PCP, as shown in the table. The stable PCP allows for the presence of noise by introducing a third term, and the constraint is different. The QPCP takes into account the quantization of the pixels to allow RPCA on real data. The Block PCP deals with entry-wise outliers by using a combined norm. The Local PCP deals with multimodal issues. A complete analysis will be provided in the following paper.
  58. Fida EL BAF There are several variants of PCP, as shown in the table. The stable PCP allows for the presence of noise by introducing a third term, and the constraint is different. The QPCP takes into account the quantization of the pixels to allow RPCA on real data. The Block PCP deals with entry-wise outliers by using a combined norm. The Local PCP deals with multimodal issues. A complete analysis will be provided in the following paper.
  59. Fida EL BAF First, we have made several qualitative and quantitative evaluations on the Wallflower dataset. PCA is the method developed by Oliver et al. RSL is a robust PCA, but it does not decompose the observation into two matrices as PCP does. The other algorithms are PCP solved by different solvers and, finally, the Block PCP.
  60. Fida EL BAF First, we can see the F-measure for each method. The block PCP outperforms the other ones.
  61. Fida EL BAF Recent advances have been made such as the followings.
  62. Fida EL BAF Fuzzy tools, discriminative subspace learning and robust PCA offer a nice framework for background modeling and foreground detection. However, they need to be investigated and improved to achieve better performance. For example, future directions may concern fuzzy learning rates, the use of other discriminative subspaces, and an incremental and real-time robust PCA.
  63. Fida EL BAF
  64. Fida EL BAF
  65. Fida EL BAF