ICCV 2009 Tutorial: Boosting and Random Forests - Part III
1. Björn Stenger, 28 Sep 2009, Kyoto. Tutorial - Part 3: Tracking Using Classification and Online Learning
2. Roadmap: Tracking by classification (on-line boosting, multiple instance learning, multi-classifier boosting, online feature selection, adaptive trees, ensemble tracking, online random forests, combining off-line & on-line); tracking by optimization.
3. Tracking by Optimization. Example: mean shift tracking [Comaniciu et al. 00]. Given the target location in frame t and the target color distribution q, in frame t+1 minimize the distance between the candidate distribution p(y) at location y and the target distribution q. Mean shift performs this iterative optimization and finds a local optimum. Extension: downweight colors that also appear in the background.
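For concreteness, below is a minimal mean-shift color-tracking step in Python/NumPy. It is a sketch, not the authors' implementation: it assumes a uint8 RGB frame, a square window of half-width half, and a quantized RGB histogram as the color model; the Epanechnikov kernel profile and the background downweighting extension are omitted, and all function names are illustrative.

import numpy as np

def bin_indices(patch, n_bins):
    """Map each RGB pixel to a flat color-histogram bin index."""
    q = patch.astype(int) // (256 // n_bins)
    return q[..., 0] * n_bins**2 + q[..., 1] * n_bins + q[..., 2]

def color_histogram(patch, n_bins=16):
    """Normalized color histogram of an image patch (the target model q)."""
    h = np.bincount(bin_indices(patch, n_bins).ravel(),
                    minlength=n_bins**3).astype(float)
    return h / h.sum()

def mean_shift_step(frame, y, half, q, n_bins=16):
    """One mean-shift iteration: shift the window center y toward pixels
    whose color is under-represented in the candidate model p relative
    to the target model q (per-pixel weights sqrt(q/p))."""
    r, c = y
    patch = frame[r-half:r+half, c-half:c+half]
    p = color_histogram(patch, n_bins)                 # candidate model
    b = bin_indices(patch, n_bins).ravel()
    w = np.sqrt(q[b] / (p[b] + 1e-10))                 # per-pixel weights
    rs, cs = np.mgrid[-half:half, -half:half]
    dr = (w * rs.ravel()).sum() / w.sum()              # weighted mean shift
    dc = (w * cs.ravel()).sum() / w.sum()
    return (int(round(r + dr)), int(round(c + dc)))

Iterating mean_shift_step until the shift is below a pixel gives the local optimum the slide refers to.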
5. Displacement Expert Tracking [Williams et al. 03]: learn a nonlinear mapping from images I to displacements δu. Off-line training, on-line tracking.
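A minimal sketch of the displacement-expert idea, assuming a grayscale float image and a square patch. Plain ridge regression stands in for the sparse Bayesian (RVM) regression used in the paper, and both function names are hypothetical: the expert is trained on artificially perturbed patches around the known target and learns the correction back to the true location.

import numpy as np

def train_displacement_expert(image, u, half, offsets):
    """Fit a linear map from patch pixels to the displacement that
    re-centers the patch on the target at u (ridge regression stands in
    for the RVM regression of Williams et al.)."""
    X, Y = [], []
    for du in offsets:                       # e.g. [(-3, 0), (2, 1), ...]
        rr, cc = u[0] + du[0], u[1] + du[1]
        X.append(image[rr-half:rr+half, cc-half:cc+half].ravel())
        Y.append([-du[0], -du[1]])           # correction back to the target
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict_displacement(image, u, half, W):
    """Apply the learned expert at the predicted location u."""
    patch = image[u[0]-half:u[0]+half, u[1]-half:u[1]+half].ravel()
    return patch.astype(float) @ W           # estimated (dr, dc) correction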
7. Online Selection of Discriminative Features [Collins et al. 03]: select the features that best discriminate between object and background from a feature pool. Discriminative score: the variance ratio measures the separability of foreground and background; the within-class variance should be small while the total variance is large.
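The variance-ratio score can be written down compactly. The sketch below follows Collins et al.'s definition: build foreground/background histograms of a 1-D feature, form the per-bin log likelihood ratio L, and score the feature by the variance of L under the combined distribution divided by the sum of the within-class variances. The bin count and smoothing constants are assumed values.

import numpy as np

def variance_ratio(fg_vals, bg_vals, n_bins=32):
    """Collins et al.'s discriminability score for a 1-D feature."""
    lo = min(fg_vals.min(), bg_vals.min())
    hi = max(fg_vals.max(), bg_vals.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(fg_vals, bins=edges, density=True)  # foreground
    q, _ = np.histogram(bg_vals, bins=edges, density=True)  # background
    L = np.log((p + 1e-6) / (q + 1e-6))      # per-bin log likelihood ratio

    def var(h):                              # variance of L under weights h
        h = h / h.sum()
        m = (h * L).sum()
        return (h * (L - m) ** 2).sum()

    # total variance large, within-class variances small => high score
    return var(0.5 * (p + q)) / (var(p) + var(q) + 1e-10)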
8. On-line Feature Selection (2) [Collins et al. 03]: on the input image, rank the features by variance ratio; run mean shift independently with each of the top-ranked features; combine the resulting location estimates with the median to obtain the new location.
9. Ensemble Tracking [Avidan 05]: use classifiers to distinguish the object (foreground) from the background in feature space. The first location is provided manually; all pixels are training data, labeled {+1, -1}. Each pixel gets an 11-dimensional feature vector: an 8-bin orientation histogram of its 5x5 neighborhood plus its 3 RGB values.
10. Ensemble Tracking [Avidan 05]: train T (=5) weak linear classifiers h and combine them into a strong classifier with AdaBoost. Build a confidence map from the classifier margins, scaling positive margins to [0, 1], and find its mode with mean shift.
11. Ensemble Tracking Update [Avidan 05]: for each new frame I_j, test the pixels x_i with the strong classifier H(x), run mean shift on the confidence map, and obtain new pixel labels y. Keep the K (=4) best (lowest-error) weak classifiers h, update their weights, and train T-K (=1) new weak classifiers.
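A sketch of the two core pieces, assuming the pixels are rows of an (n_pixels x 11) feature matrix X with labels y in {+1, -1} and AdaBoost sample weights w. The paper's weak learner is a least-squares hyperplane, approximated here with weighted least squares; function names are illustrative.

import numpy as np

def train_weak(X, y, w):
    """Weighted least-squares hyperplane as a weak classifier, returned
    together with its weighted classification error."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append bias term
    Xw = Xb * w[:, None]
    h = np.linalg.lstsq(Xw.T @ Xb, Xw.T @ y, rcond=None)[0]
    err = w[np.sign(Xb @ h) != y].sum() / w.sum()
    return h, err

def confidence_map(X, weaks, alphas, shape):
    """Per-pixel margin of the boosted strong classifier; positive
    margins are rescaled to [0, 1], and mean shift is run on the
    resulting map to find the new object location."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    margin = sum(a * np.sign(Xb @ h) for h, a in zip(weaks, alphas))
    conf = np.clip(margin, 0.0, None)                  # keep positive part
    return (conf / (conf.max() + 1e-10)).reshape(shape)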
14. AdaBoost (recap) [Freund, Schapire 97]: given training examples (x_1, y_1), ..., (x_N, y_N), with the initial sample weight split 1/2 between the positive and the negative examples, the strong classifier H is a weighted combination of weak classifiers h_1, h_2, h_3, h_4, ...
17. Online Boosting [Oza, Russell 01]: training examples (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4), ... arrive one at a time; each example is passed through the weak classifiers h_1, h_2, h_3, h_4 in sequence with an importance weight, and H remains their weighted combination.
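The Oza-Russell update can be stated in a few lines. The sketch below assumes weak learners exposing hypothetical update(x, y) and predict(x) methods; lam_sc and lam_sw accumulate the importance of correctly and wrongly classified examples, so eps is each learner's running error, and the final strong classifier weights h_m by log((1 - eps_m) / eps_m).

import numpy as np
rng = np.random.default_rng(0)

def online_boost_update(weaks, lam_sc, lam_sw, x, y):
    """One Oza-Russell step: the example passes through the weak
    learners in sequence; its importance lam grows where it is
    misclassified, mirroring AdaBoost's reweighting."""
    lam = 1.0
    for m, h in enumerate(weaks):
        k = rng.poisson(lam)                 # present the example k times
        for _ in range(k):
            h.update(x, y)                   # hypothetical learner API
        if h.predict(x) == y:
            lam_sc[m] += lam
            eps = lam_sw[m] / (lam_sc[m] + lam_sw[m])
            lam *= 1.0 / (2.0 * (1.0 - eps))  # downweight easy examples
        else:
            lam_sw[m] += lam
            eps = lam_sw[m] / (lam_sc[m] + lam_sw[m])
            lam *= 1.0 / (2.0 * eps)          # upweight hard examples
    return lam_sc, lam_sw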
19. Priming can help [Oza 01]: batch learning on the first 200 points, then online learning on the rest.
20. Online Boosting for Feature Selection [Grabner, Bischof 06]: each feature corresponds to a weak classifier; the strong classifier is a combination of simple features.
21. Selectors [Grabner, Bischof 06]: a selector chooses one feature/classifier from a pool, so selectors can themselves be seen as classifiers. Idea: perform boosting on the selectors, not on the features directly.
22. Online Feature Selection [Grabner, Bischof 06]: for each training sample, initialize its importance; then, for each selector in turn: estimate the errors of the weak classifiers in the global classifier pool, select the best one, update the selector's voting weight, and re-estimate the sample's importance before passing it to the next selector. The selectors together form the current strong classifier.
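A sketch of one selector step under this scheme. The pool interface (update/predict with a sample weight) and the exponential running-error estimate are assumptions; the importance update mirrors online boosting.

import numpy as np

def selector_update(pool, errors, x, y, lam):
    """One selector step in the style of Grabner & Bischof: update every
    feature/classifier in the shared pool with the weighted sample,
    select the current best one, and adjust the sample importance lam."""
    for i, h in enumerate(pool):
        h.update(x, y, weight=lam)                     # hypothetical API
        wrong = float(h.predict(x) != y)
        errors[i] = 0.99 * errors[i] + 0.01 * wrong    # running error estimate
    best = int(np.argmin(errors))                      # selected feature
    e = np.clip(errors[best], 1e-4, 1 - 1e-4)
    alpha = 0.5 * np.log((1 - e) / e)                  # voting weight
    lam *= 1 / (2 * (1 - e)) if pool[best].predict(x) == y else 1 / (2 * e)
    return best, alpha, lam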
39. Learning to Track with Multiple Observers [Stenger et al. 09]. Idea: learn the optimal combination of observers (trackers) in an off-line training stage; each tracker can be fixed or adaptive. Given labeled training data and an object detector, off-line training of observer combinations with observation models yields an optimal tracker for the task at hand.
40. Input: a set of observers, each returning a location estimate and a confidence value [Stenger et al. 09].
On-line classifiers: [OB] on-line boosting, [LDA] linear discriminant analysis, [BLDA] boosted LDA, [OFS] on-line feature selection.
Histogram: [MS] color-based mean shift, [C] color probability, [M] motion probability, [CM] color and motion probability.
Local features: [BOF] block-based optical flow, [KLT] Kanade-Lucas-Tomasi, [FF] flocks of features, [RT] randomized templates.
Single template: [NCC] normalized cross-correlation, [SAD] sum of absolute differences.
41. Combination Schemes [Stenger et al. 09]: find good combinations of observers automatically by evaluating all pairs and triplets, using two different combination schemes.
42. How to Measure Performance? [Stenger et al. 09] Run each tracker on all frames (don't stop after the first failure); measure the position error; declare loss of track when the error exceeds a threshold; re-initialize with the detector.
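A minimal evaluation helper under these rules. The 20-pixel threshold is an assumed value; after a loss the tracker is assumed to be re-initialized by the detector on the next frame, so the run continues and every frame is scored.

import numpy as np

def evaluate_track(pred, gt, thresh=20.0):
    """Per-frame position error with loss-of-track bookkeeping: an error
    above thresh (pixels) counts the frame as lost."""
    err = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float),
                         axis=1)
    lost = err > thresh
    return err.mean(), lost.mean()   # mean position error, fraction lost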
54. Tree Adaptation with Re-Clustering [Yeh et al. 07]: identify the affected neighborhood, remove the existing boundaries, re-cluster the points.
55. Accuracy drops when adaptation is stopped [Yeh et al. 07]. Recent accuracy is measured over a sliding window of T = 100 queries, with R(j) = 1 if the top-ranked retrieved image for query j belongs to the same group.
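For concreteness, the recent-accuracy curve is just a sliding-window mean of the per-query indicator R; a one-line sketch:

import numpy as np

def recent_accuracy(R, T=100):
    """Sliding-window accuracy over the last T queries, where R[j] = 1
    if the top-ranked retrieved image for query j is in the same group."""
    return np.convolve(np.asarray(R, float), np.ones(T) / T, mode="valid")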
57. On-line Random Forests [Saffari et al. 09]. Input: a new training example. For each tree t in the forest: draw k ~ Poisson(λ) and update tree t with the example k times; if the tree does not see the example, use it to estimate the tree's out-of-bag error. Each tree is discarded and replaced by a new one with a probability that increases with its out-of-bag error.
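A sketch of one update round, with heavy caveats: the tree interface (update/predict, a no-argument constructor for a fresh tree), the running out-of-bag estimate, and the exact form of the discard probability are all assumptions; the Poisson(1) resampling follows Oza-style online bagging as used in the paper.

import numpy as np
rng = np.random.default_rng(0)

def online_rf_update(trees, oob_err, x, y, discard_rate=0.01):
    """One online random forest step in the spirit of Saffari et al.:
    each tree sees the sample Poisson(1)-many times; trees that do not
    see it (k == 0) update their out-of-bag error estimate; trees with
    high OOB error are occasionally discarded and regrown."""
    for t, tree in enumerate(trees):
        k = rng.poisson(1.0)
        if k > 0:
            for _ in range(k):
                tree.update(x, y)                     # hypothetical tree API
        else:                                         # x is out-of-bag for t
            wrong = float(tree.predict(x) != y)
            oob_err[t] = 0.99 * oob_err[t] + 0.01 * wrong
        if rng.random() < discard_rate * oob_err[t]:  # assumed discard rule
            trees[t] = tree.__class__()               # fresh empty tree
            oob_err[t] = 0.5
    return trees, oob_err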
59. Results [Saffari et al. 09]: convergence of on-line RF classification to the batch solution on the USPS data set; tracking error of the online RF compared to online boosting.
61. References
Avidan, S., Support Vector Tracking, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Hawaii, 2001.
Avidan, S., Support Vector Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 26(8), pp. 1064-1072, 2004.
Avidan, S., Ensemble Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 29(2), pp. 261-271, 2007.
Avidan, S., Ensemble Tracking, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, USA, 2005.
Babenko, B., Yang, M.-H., Belongie, S., Visual Tracking with Online Multiple Instance Learning, Proc. CVPR, 2009.
Basak, J., Online Adaptive Decision Trees, Neural Computation, Vol. 16(9), pp. 1959-1981, September 2004.
Collins, R. T., Liu, Y., Leordeanu, M., On-Line Selection of Discriminative Tracking Features, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 27(10), pp. 1631-1643, October 2005.
Collins, R. T., Liu, Y., On-Line Selection of Discriminative Tracking Features, Proc. ICCV, pp. 346-352, October 2003.
Comaniciu, D., Ramesh, V., Meer, P., Kernel-Based Object Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 25(5), pp. 564-575, 2003.
Comaniciu, D., Ramesh, V., Meer, P., Real-Time Tracking of Non-Rigid Objects Using Mean Shift, Proc. CVPR, Hilton Head Island, South Carolina, Vol. 2, pp. 142-149, 2000.
Dietterich, T. G., Lathrop, R. H., Lozano-Perez, T., Solving the Multiple Instance Problem with Axis-Parallel Rectangles, Artificial Intelligence, Vol. 89, pp. 31-71, 1997.
Freund, Y., Schapire, R. E., A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting, Journal of Computer and System Sciences, Vol. 55(1), pp. 119-139, August 1997.
Grabner, H., Leistner, C., Bischof, H., Semi-Supervised On-line Boosting for Robust Tracking, Proc. ECCV, 2008.
Grabner, H., Roth, P. M., Bischof, H., Eigenboosting: Combining Discriminative and Generative Information, Proc. CVPR, 2007.
Grabner, H., Grabner, M., Bischof, H., Real-Time Tracking via On-line Boosting, Proc. BMVC, Vol. 1, pp. 47-56, 2006.
Grabner, H., Bischof, H., On-line Boosting and Vision, Proc. CVPR, Vol. 1, pp. 260-267, 2006.
Keeler, J. D., Rumelhart, D. E., Leow, W.-K., Integrated Segmentation and Recognition of Hand-Printed Numerals, Proc. NIPS 3, pp. 557-563, Denver, Colorado, USA, 1990.
Kim, T.-K., Cipolla, R., MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features, Proc. NIPS, Vancouver, Canada, December 2008.
Kim, T.-K., Woodley, T., Stenger, B., Cipolla, R., Online Multiple Classifier Boosting for Object Tracking, Technical Report CUED/F-INFENG/TR631, Department of Engineering, University of Cambridge, June 2009.
62. References & Code
Li, Y., Ai, H., Lao, S., Kawade, M., Tracking in Low Frame Rate Video: A Cascade Particle Filter with Discriminative Observers of Different Lifespans, Proc. CVPR, 2007.
Matthews, I., Ishikawa, T., Baker, S., The Template Update Problem, Proc. BMVC, 2003.
Matthews, I., Ishikawa, T., Baker, S., The Template Update Problem, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 26(6), pp. 810-815, June 2004.
Okuma, K., Taleghani, A., De Freitas, N., Little, J., Lowe, D. G., A Boosted Particle Filter: Multitarget Detection and Tracking, Proc. ECCV, May 2004.
Oza, N. C., Online Ensemble Learning, Ph.D. thesis, University of California, Berkeley.
Oza, N. C., Russell, S., Online Bagging and Boosting, Eighth Int. Workshop on Artificial Intelligence and Statistics, pp. 105-112, Key West, FL, USA, January 2001.
Oza, N. C., Russell, S., Experimental Comparisons of Online and Batch Versions of Bagging and Boosting, Proc. ACM SIGKDD, San Francisco, California, 2001.
Saffari, A., Leistner, C., Santner, J., Godec, M., Bischof, H., On-line Random Forests, 3rd IEEE ICCV Workshop on On-line Computer Vision, 2009.
Stalder, S., Grabner, H., Van Gool, L., Beyond Semi-Supervised Tracking: Tracking Should Be as Simple as Detection, but not Simpler than Recognition, Proc. ICCV'09 Workshop on On-line Learning for Computer Vision, 2009.
Stenger, B., Woodley, T., Cipolla, R., Learning to Track with Multiple Observers, Proc. CVPR, Miami, June 2009.
Viola, P. A., Platt, J., Zhang, C., Multiple Instance Boosting for Object Detection, Proc. NIPS, 2005.
Williams, O., Blake, A., Cipolla, R., Sparse Bayesian Regression for Efficient Visual Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), August 2005.
Williams, O., Blake, A., Cipolla, R., A Sparse Probabilistic Learning Algorithm for Real-Time Tracking, Proc. ICCV, October 2003.
Woodley, T., Stenger, B., Cipolla, R., Tracking Using Online Feature Selection and a Local Generative Model, Proc. BMVC, Warwick, September 2007.
Yeh, T., Lee, J., Darrell, T., Adaptive Vocabulary Forests for Dynamic Indexing and Category Learning, Proc. ICCV, 2007.
Code:
Severin Stalder, Helmut Grabner: Online Boosting, Semi-supervised Online Boosting, Beyond Semi-Supervised Online Boosting, http://www.vision.ee.ethz.ch/boostingTrackers/index.htm
Boris Babenko: MILTrack, http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml
Amir Saffari: Online Random Forests, http://www.ymer.org/amir/software/online-random-forests/