Presentation of Personal Work
Yosen Chen
Yosen Chen, Copyright Reserved
Human Machine Interaction
Basic Visual Tracker
Multi-Cue Particle Filter Tracker
Consistent Labeling in Visual Surveillance
System
Multi-Human Tracker
Multi-Camera Multi-Human Tracker
Consistent Labeling with People Re-Entering
Problem
Outline
2
Implementation Tool
Microsoft Visual C++ with OpenCV Library
IPC (Interprocess Communication) Library
open source @ Carnegie Mellon University
Yosen Chen, Copyright Reserved 3
Yosen Chen, Copyright Reserved
Human Machine Interaction
Basic Visual Tracker
Multi-Cue Particle Filter Tracker
Consistent Labeling in Visual Surveillance
System
Multi-Human Tracker
Multi-Camera Multi-Human Tracker
Consistent Labeling with People Re-Entering
Problem
Outline
4
Intelligent Interactive Engine
(System diagram) Webcams send image data to the image-processing module and the central server, which shows messages & animation on the display.
Our goal is to construct an interactive engine which utilizes
target motion as instruction input to the server.
Yosen Chen, Copyright Reserved 5
Yosen Chen, Copyright Reserved
Human Machine Interaction
Basic Visual Tracker
Multi-Cue Particle Filter Tracker
Consistent Labeling in Visual Surveillance
System
Multi-Human Tracker
Multi-Camera Multi-Human Tracker
Consistent Labeling with People Re-Entering
Problem
Outline
6
Basic Visual Tracking Framework (Color)
Yosen Chen, Copyright Reserved
(Block diagram) The color tracker loops over each real-time camera image:
1. Feature extraction: convert the RGB frame to HSV (Hue / Saturation / Value) and apply 256-to-16 color quantization to the hue channel.
2. Similarity evaluation: build a reference 16-color histogram from the reference image, then backproject the reference colors onto the input hue image to obtain a probability distribution figure, i.e. a per-pixel color similarity score.
3. Probability peak search/climbing on the probability distribution, followed by target information estimation (position, size, ...).
4. If iterative: an iterative search (motion prediction) uses the previous and current estimates to predict the region of interest in the next camera view, and the loop repeats.
7
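As a rough companion to steps 1-3 above, the sketch below shows hue-histogram backprojection followed by MeanShift peak climbing with the OpenCV C++ API; the 16-bin quantization matches the slide, but the initial window, normalization, and termination criteria are assumptions for illustration, not the settings used in this work.

==================== C++ sketch: color backprojection + MeanShift ====================
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, hsv, backproj, hist;
    cv::Rect window(100, 100, 80, 80);            // assumed initial region of interest

    const int bins = 16;                          // 256-to-16 hue quantization
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};
    int channels[] = {0};                         // use the Hue channel only

    // Build the reference 16-color hue histogram from the initial window.
    if (!cap.read(frame)) return 1;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::Mat roi(hsv, window);
    cv::calcHist(&roi, 1, channels, cv::Mat(), hist, 1, &bins, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Reference color backprojection -> probability distribution figure.
        cv::calcBackProject(&hsv, 1, channels, hist, backproj, ranges);
        // Probability peak climbing (MeanShift) inside the predicted window.
        cv::meanShift(backproj, window,
                      cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));
        cv::rectangle(frame, window, cv::Scalar(0, 255, 0), 2);
        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;          // Esc to quit
    }
    return 0;
}
=======================================================================================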
Demo Video: Ball Tracking
(Second half of the ball-chasing sequence)
Yosen Chen, Copyright Reserved 8
Peak climbing: MeanShift
Related Works
Varieties of Feature Extraction/Evaluation
Feature evaluation: template matching
Background subtraction
Reference color model: YCrCb, RGB
Varieties of Peak Search/Climbing
Gravity center
Max-Likelihood
MeanShift
Yosen Chen, Copyright Reserved
(Cb-Cr color-space figure)
9
Summary
Advantages:
Simple & fast → low-cost
Each pixel (RGB-to-HSV conversion, backprojection) is processed independently → good for parallel processing
Drawbacks:
Hard to combine with more object features
 EX: edge information (object shape, contour)
 Not every object feature is suitable for a 2D probability distribution figure
Hard to cope with a higher-dimension target state
 EX: object orientation (rotation angle)
Yosen Chen, Copyright Reserved
10
Yosen Chen, Copyright Reserved
Human Machine Interaction
Basic Visual Tracker
Multi-Cue Particle Filter Tracker
Consistent Labeling in Visual Surveillance
System
Multi-Human Tracker
Multi-Camera Multi-Human Tracker
Consistent Labeling with People Re-Entering
Problem
Outline
11
SIR+SIS Particle Filter Tracking
Yosen Chen, Copyright Reserved
(Block diagram) Each state candidate (particle) is X_t^(m) = (x, y, r)_t^(m), m = 1, ..., N.
SIS branch (Sequential Importance Sampling particle filter): initial state estimation on the real-time frame by color backprojection + filtering + best contour finding (possible-region sampling from color), then state prediction, state evaluation, and finding the best candidate. These particles are generated from the current image.
SIR branch (Sampling Importance Resampling particle filter): resample the last candidate set {state candidate, weight} based on the weights, then state prediction, state evaluation, and finding the best candidate. These particles come from the last candidate set.
State candidate prediction: draw X_t^(m) from p(X_t^(m) | X_{t-1}^(m)).
Multi-cue candidate evaluation: weight each candidate by the observation likelihood p(O_t | X_t^(m)) using color, shape, and motion cues, where O_t is the current frame.
State candidate resampling: map the weighted set {X_t^(m), p(O_t | X_t^(m))} to an equally weighted set {X_t^(k)}.
The best state candidate gives the tracking result on frame O_t.
12
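A minimal sketch of one predict-weight-resample step, assuming the likelihood function p(O_t | X_t) is supplied by the multi-cue evaluation on the next slide; the Gaussian motion noise and the resampling details are assumptions, not the exact scheme used in the deck.

==================== C++ sketch: one SIR particle-filter step ====================
#include <algorithm>
#include <functional>
#include <random>
#include <vector>

// One particle: state (x, y, r) plus importance weight.
struct Particle { double x, y, r, w; };

// One SIR step: predict -> weight by p(O_t | X_t) -> pick best -> resample to equal weights.
Particle sirStep(std::vector<Particle>& particles, std::mt19937& rng,
                 const std::function<double(const Particle&)>& likelihood) {
    std::normal_distribution<double> noiseXY(0, 5), noiseR(0, 2);   // assumed motion noise

    // 1. State candidate prediction: X_t ~ p(X_t | X_{t-1}).
    for (auto& p : particles) { p.x += noiseXY(rng); p.y += noiseXY(rng); p.r += noiseR(rng); }

    // 2. Multi-cue candidate evaluation: weight by the observation likelihood.
    double sum = 0;
    for (auto& p : particles) { p.w = likelihood(p); sum += p.w; }
    for (auto& p : particles) p.w /= sum;

    // Best candidate = tracking result for this frame.
    Particle best = *std::max_element(particles.begin(), particles.end(),
        [](const Particle& a, const Particle& b) { return a.w < b.w; });

    // 3. Resampling: draw N particles proportionally to weight, reset to equal weight.
    std::vector<double> cdf;
    double acc = 0;
    for (const auto& p : particles) { acc += p.w; cdf.push_back(acc); }

    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<Particle> resampled;
    for (size_t i = 0; i < particles.size(); ++i) {
        size_t k = std::lower_bound(cdf.begin(), cdf.end(), u(rng)) - cdf.begin();
        Particle q = particles[std::min(k, particles.size() - 1)];
        q.w = 1.0 / particles.size();
        resampled.push_back(q);
    }
    particles.swap(resampled);
    return best;
}
==================================================================================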
Multi-Cue Likelihood Evaluation
Yosen Chen, Copyright Reserved
(Block diagram) For each state candidate X_t^(m) = (x, y, r, θ)_t^(m), the observation O_t is scored with three cues:
Color similarity: extract the inner and outer color histograms h_I(i) and h_O(i) around the candidate and compare them with the reference color model h_REF(i) by correlation-based comparison → color weight.
Shape similarity: along the sampled contour rays, measure the shape distances d_i (up to d_max), the edge strengths g_i, the maximum matched size over the N rays, and the contour symmetry (d_i versus d_{N+1-i}) → shape weight.
Motion continuity (logic check): compare the state difference (Δx, Δy, Δr, Δθ) with the last tracking result; high continuity scores higher than low continuity → motion weight.
The three weights are combined into the overall candidate score p(O_t | X_t^(m)).
13
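The slides state that the color, shape, and motion weights are combined into one candidate score but not the exact rule; the sketch below assumes a weighted product, with the per-cue scores and exponents as placeholders.

==================== C++ sketch: combining the cue weights ====================
#include <cmath>

// Hypothetical per-cue scores in [0, 1] for one candidate; in the tracker they come
// from histogram correlation, contour matching, and the state-continuity check.
struct CueScores { double color, shape, motion; };

// Combine the cue weights into the overall candidate score p(O_t | X_t^(m)).
// A weighted product is assumed here; the exponents tune the influence of each cue.
double candidateScore(const CueScores& s,
                      double wColor = 1.0, double wShape = 1.0, double wMotion = 1.0) {
    return std::pow(s.color, wColor) * std::pow(s.shape, wShape) * std::pow(s.motion, wMotion);
}
===============================================================================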
Demo Video: 4D Bottle Tracking
(x, y, r, θ)
Yosen Chen, Copyright Reserved 14
Candidate group behavior
Bottle tracking video
Blue: SIS (from current color projection)
Green: SIR (from last candidate set)
Red: Best candidate as tracking result
Yosen Chen, Copyright Reserved 15
Interaction System
(State flowchart) The interaction system switches among four states: Interaction Triggering, Object Checking, Tracking & Interaction, and Losing Handling.
Start: initialize/reset all parameters.
Interaction Triggering: run motion detection by optical flow on the incoming image; if no motion is detected, keep waiting.
Object Checking: once motion is detected, show the checking box and record the target information and features; if the target info is not robust, return to triggering.
Tracking & Interaction: run the tracking algorithm on each image and feed the result to the visual-reality interacting program while the quality index stays above the threshold.
Losing Handling: when the quality index drops below the threshold, wait until the user puts the target in the "Losing Box"; once the target is put back, tracking resumes.
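A compact sketch of that four-state switch as a C++ state machine; the state names mirror the flowchart, while the boolean inputs and the transition taken when the target is put back are simplifications of the diagram.

==================== C++ sketch: interaction-state switching ====================
// The four interaction-system states from the flowchart.
enum class Mode { Triggering, Checking, TrackingInteraction, LosingHandling };

// One control-loop step; the booleans stand in for the flowchart tests
// (motion detected, target info robust, quality index above threshold, target put back).
Mode step(Mode m, bool motionDetected, bool targetRobust,
          bool qualityAboveThreshold, bool targetPutBack) {
    switch (m) {
        case Mode::Triggering:
            return motionDetected ? Mode::Checking : Mode::Triggering;
        case Mode::Checking:
            return targetRobust ? Mode::TrackingInteraction : Mode::Triggering;
        case Mode::TrackingInteraction:
            return qualityAboveThreshold ? Mode::TrackingInteraction : Mode::LosingHandling;
        case Mode::LosingHandling:
            // Assumed: once the user puts the target back, tracking resumes.
            return targetPutBack ? Mode::TrackingInteraction : Mode::LosingHandling;
    }
    return Mode::Triggering;
}
=================================================================================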
Demo Video:
Interactive Game – Brick Breaker
Yosen Chen, Copyright Reserved
Interaction system: 4 status switching
4 degrees of freedom: location x & y, size, rotation angle
16
(Panels: Motion Interaction Function / Interactive Game Example)
Summary
High flexibility:
State dimension
Motion model
Easy to add new features & tune weights in the tracker for different applications and scenarios
Trade-offs:
Feature description vs. computation time
Impoverishment (too concentrated) vs. degeneracy (too sparse)
 Impoverishment: many particles concentrate on the high-probability region
 Degeneracy: many particles are evaluated in low-probability regions
 Number of particles ~ complexity
Yosen Chen, Copyright Reserved 17
Yosen Chen, Copyright Reserved
Human Machine Interaction
Basic Visual Tracker
Multi-Cue Particle Filter Tracker
Consistent Labeling in Visual Surveillance
System
Multi-Human Tracker
Multi-Camera Multi-Human Tracker
Consistent Labeling with People Re-Entering
Problem
Outline
18
Related applications:
Visual surveillance
Remote monitoring
Intelligent security system
Introduction
Yosen Chen, Copyright Reserved 19
Yosen Chen, Copyright Reserved
Human Machine Interaction
Basic Visual Tracker
Multi-Cue Particle Filter Tracker
Consistent Labeling in Visual Surveillance
System
Multi-Human Tracker
Multi-Camera Multi-Human Tracker
Consistent Labeling with People Re-Entering
Problem
Outline
20
Multi-Human Tracking Flow
Yosen Chen, Copyright Reserved
(Block diagram) The real-time captured image feeds two paths:
SIS face detection: skin-color blob finding → shape & location filter → ROI location sampling → face detector (OpenCV) → create a new tracker.
SIR head tracking: each tracker runs prediction → evaluation → resampling, with a state joint evaluation shared across trackers, producing the multi-human tracking result.
Losing-track detector (score, counter): a badly evaluated tracker is passed to the losing-track detector and re-checked by the face detector; if the person is gone, the tracker is deleted.
1. SIS face detector → SIR human trackers
2. Multi-human state joint evaluation
3. Badly evaluated tracker → losing-track detector → face detector → delete tracker
21
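A rough sketch of the tracker-lifecycle part of this flow (create a tracker per detected face, delete it when the losing-track detector's counter runs out); Detection and HeadTracker are placeholder types, and the score/counter thresholds are assumptions.

==================== C++ sketch: tracker creation and deletion ====================
#include <memory>
#include <vector>

// Placeholder types standing in for the face-detector output and the per-person SIR tracker.
struct Detection { double x, y; };
struct HeadTracker {
    int losingCounter = 0;
    explicit HeadTracker(const Detection&) { /* initialize particles around the detection */ }
    double update() { return 1.0; }          // prediction/evaluation/resampling; returns best score
};

void trackingStep(std::vector<std::unique_ptr<HeadTracker>>& trackers,
                  const std::vector<Detection>& newFaces,
                  double scoreThreshold, int maxLosingFrames) {
    // 1. SIS face detector -> create SIR human trackers.
    for (const auto& d : newFaces)
        trackers.push_back(std::make_unique<HeadTracker>(d));

    // 2. Run every tracker (the joint evaluation across trackers is omitted in this sketch).
    // 3. Badly evaluated tracker -> losing-track detector (score + counter) -> delete tracker.
    for (auto it = trackers.begin(); it != trackers.end();) {
        double score = (*it)->update();
        (*it)->losingCounter = (score < scoreThreshold) ? (*it)->losingCounter + 1 : 0;
        if ((*it)->losingCounter > maxLosingFrames)
            it = trackers.erase(it);         // person has left -> delete the tracker
        else
            ++it;
    }
}
===================================================================================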
Multi-Cue Likelihood Evaluation for Head Tracking
Yosen Chen, Copyright Reserved
(Illustrations) Inner/outer skin-color and hair-color masks distinguish high from low color likelihood; the state difference between (x, y, r)_{t-1} and (x, y, r)_t distinguishes high from low motion continuity; shape distance, number of matched points, and edge strength g_i are measured along sampled contour normals n_1, n_2, ..., n_13, together with contour symmetry.
Color: masked skin-color comparison, masked hair-color comparison
Texture: texture SAD similarity
Edge: shape distance, number of matched points, edge strength, contour symmetry fitting
Motion: state continuity
22
Face Detection Implementation Details
Yosen Chen, Copyright Reserved
SIS Face Detection Flow
=======================Pseudo code========================
cvCalcBackProject;
cvEqualizeHist + cvThreshold;
cvResize + cvErode + cvDilate;
cvFindContours(LinkedList detect_contour);
for( ; detect_contour != 0 ; detect_contour = detect_contour->next )
{
if (Area < threshold) continue;
Use 8 points average to approximate center (xA, yA);
if (distance between (xA, yA) and every existing target T > threshold)
{
calculate geometric center (xC, yC), deviation σX, σY of detect_contour;
if ((σX > 0.7*σY) && (σY > 0.8*σX) &&
(σX /sqrt(Area) < 0.3) && (σY /sqrt(Area) < 0.4))
{
Draw k random samples (x, y) ~ N((xC, yC), (σX, σY));
where k = (Area/MIN_AREA).
if (frontalFaceTest(x, y)->detect() || profileFaceTest(x, y)->detect())
return location (x, y) as start point of new tracker;
}
}
}
return NOT_DETECTED;
Skin color
blob finding
SIS Face detection
Shape &
location filter
ROI location
Sampling
Face detector
(OpenCV)
Losing Track
Detector
Create tracker
Delete tracker
23
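The pseudocode above uses the legacy OpenCV C API; the sketch below reworks the blob-to-cascade verification step with the C++ API (cv::findContours plus cv::CascadeClassifier), assuming a binary skin mask and an already-loaded cascade. The area threshold, window size, and the omitted deviation/distance filters are simplifications, not the deck's exact values.

==================== C++ sketch: skin-blob filtering + cascade verification ====================
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Look for a new face inside the skin-color blobs and return its start point.
bool detectNewFace(const cv::Mat& skinMask, const cv::Mat& gray,
                   cv::CascadeClassifier& frontalFace, cv::Point& start,
                   double minArea = 400.0) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(skinMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        double area = cv::contourArea(c);
        if (area < minArea) continue;                        // blob too small to be a head

        cv::Moments m = cv::moments(c);                      // geometric center of the blob
        cv::Point center(static_cast<int>(m.m10 / m.m00), static_cast<int>(m.m01 / m.m00));

        // The shape & location filter from the pseudocode (deviation ratios,
        // distance to existing targets) would go here; omitted for brevity.

        // Verify with the OpenCV cascade detector inside a window around the blob.
        int half = static_cast<int>(std::sqrt(area));
        cv::Rect roi(center.x - half, center.y - half, 2 * half, 2 * half);
        roi &= cv::Rect(0, 0, gray.cols, gray.rows);         // clip to the image
        if (roi.area() == 0) continue;
        std::vector<cv::Rect> faces;
        frontalFace.detectMultiScale(gray(roi), faces);
        if (!faces.empty()) { start = center; return true; } // start point of a new tracker
    }
    return false;
}
================================================================================================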
Face Detection Demo
Yosen Chen, Copyright Reserved
Skin color backproject
Skin color
blob finding
SIS Face detection
Shape &
location filter
ROI location
Sampling
Face detector
(OpenCV)
Losing Track
Detector
Create tracker
Delete tracker
24
Skin color backproject (down-size & morphology)
Last detecting region
Blue region: Last detected face
Red region: color is matched with skin color.
Tracking System GUI panel
People Overlapping Problem
Yosen Chen, Copyright Reserved
When people get close to each other…
State candidates are attracted by other people
because trackers don’t know each other!!
Prediction
Evaluation
(individual)
Resampling
Original Head Trackers
25
Solutions of People Overlapping
Yosen Chen, Copyright Reserved
Prediction
Evaluation
(Individual
+ joint)
Resampling
Joint state check
Depth order
check
Body occlusion
handling
Target information
storage
Modified Head Trackers for People Overlapping
For people re-entering
problem
1. Use joint evaluation to avoid candidate merging when people overlap
2. Solution #1 (left): give a higher joint score to candidates farther from other people → reduces the risk of candidate merging, but also loses tracking accuracy
3. Solution #2 (right): kill candidates that are too close to other people → better tracking accuracy
26
resVideo_FurtherIsGood2.mpg resVideo_CloseThenLowWeight2.mpg
Implementation
Yosen Chen, Copyright Reserved
Modified Head Trackers for People Overlapping: Prediction → Evaluation (individual + joint) → Resampling, with a joint state check and a depth order check.
Grouping (2nd-order tree): image distance check — if Δx² + Δy² < threshold, group the targets together (e.g. target1 ... target5 linked as root and leaves, with pointers to target1, target2, ...).
Candidate Evaluation (individual + joint):
Individual: color, shape, motion (as previously mentioned)
Joint: distances to all other targets in the same group, so a candidate is not attracted by other close targets. Example: Target A candidate 1 (far from Target B's last best candidate) keeps its score, while Target A candidate 2 (near B) receives a high penalty because it is too close to B.
Depth order: flatten the 2nd-order tree into an array, then quick-sort by depth features.
Depth features (for the best candidate of each target): texture changing rate, contour matching score, distance from the camera to the target ground point. (Image source order: 1 = front, 2 = rear.)
27
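A small sketch of the joint-evaluation penalty and the depth-order sort described above; the zero-score penalty and the sort direction (smaller camera distance = front) are assumptions for illustration.

==================== C++ sketch: joint penalty and depth ordering ====================
#include <algorithm>
#include <vector>

// Simplified placeholders for one candidate and one tracked target.
struct Candidate { double x, y, individualScore; };
struct Target {
    Candidate best;            // last best candidate of this target
    double depthFeature;       // e.g. distance from the camera to the target ground point
};

// Joint evaluation: penalize a candidate that falls too close to any other target
// in the same group (Solution #2: candidates near other people are killed).
double jointScore(const Candidate& c, const std::vector<Target>& group,
                  const Target* self, double minDist) {
    for (const auto& t : group) {
        if (&t == self) continue;
        double dx = c.x - t.best.x, dy = c.y - t.best.y;
        if (dx * dx + dy * dy < minDist * minDist) return 0.0;   // high penalty
    }
    return c.individualScore;
}

// Depth order: sort the targets of one group by their depth features (front first),
// so body occlusion can be handled in the right order.
void sortByDepth(std::vector<Target>& group) {
    std::sort(group.begin(), group.end(),
              [](const Target& a, const Target& b) { return a.depthFeature < b.depthFeature; });
}
======================================================================================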
Depth Order Evaluation with Heights
Depth order estimation with height knowledge is accurate even when people are not overlapping.
Yosen Chen, Copyright Reserved 28
Summary of Multi-Human Tracker
We extend the concept of single-object tracking to a multi-object tracker
We propose an entire tracking system:
face detection → multi-human trackers → detection of people leaving
We address the people-overlapping problem
A robust multi-human tracker is the basis of the correspondence process within the multi-camera tracking system
Yosen Chen, Copyright Reserved 29
Yosen Chen, Copyright Reserved
Human Machine Interaction
Basic Visual Tracker
Multi-Cue Particle Filter Tracker
Consistent Labeling in Visual Surveillance
System
Multi-Human Tracker
Multi-Camera Multi-Human Tracker
Consistent Labeling with People Re-Entering
Problem
Outline
30
Multi-Camera Tracking System
Yosen Chen, Copyright Reserved
(Block diagram) Local camera #A runs a multi-head tracker on its captured image and produces observations z^A_{1t} and z^A_{2t}; local camera #B does the same and produces z^B_{1t} and z^B_{2t}.
The central server performs multi-camera multi-observation correspondence: it identifies the relationship of observations in different cameras and groups them into person hypotheses, e.g. H_{1t} = {z^A_{2t}, z^B_{1t}} for Person #1 and H_{2t} = {z^A_{1t}, z^B_{2t}} for Person #2.
Local trackers: multi-human tracking and occlusion handling
Central server: observation correspondence across cameras
31
Problem of Observation Correspondence across Cameras
"Are these cameras tracking the same people?" How many people exist in the surveillance area?
Let z^j_{rt} = (O^j_{rt}, L^j_{rt}) be the rth observation tracked by camera j at time t, where O^j_{rt} is the image observation and L^j_{rt} is the 3D line through the image head center.
(Illustration) On the ground plane, camera j yields observations O^j_{1t}, O^j_{2t}, O^j_{3t} with head lines L^j_{1t}, L^j_{2t}, L^j_{3t}, and camera k yields O^k_{1t}, O^k_{2t}, O^k_{3t} with head lines L^k_{1t}, L^k_{2t}, L^k_{3t}.
Yosen Chen, Copyright Reserved 32
Determine observation correspondence using Multi-Feature Similarity
Multi-Feature Observation Similarity contains:
Head line intersection verification
Body color matching by blocks
Confidence level
Label history consistency
How to Determine Observation Correspondence across Cameras
Correspondence Flow (Camera#A vs. Camera#B): calculate the 3D geometric intersection (No, if there is no reasonable intersection) → 2D body image block allocation → body color comparison by block (No, if the color is unmatched) → confidence level → label history consistency → Yes, the two observations are from the same person.
Yosen Chen, Copyright Reserved 33
3D Geometric Correspondence across Cameras
2D position on the image plane → 3D head line in real space; 3D ground point in real space → 2D position on the image plane.
(Illustration) Observation r lies on the image plane of camera j and observation q on the image plane of camera k; each head line leaves the optic center, passes through the head point at the person's height, and meets the ground plane at the estimated ground point.
Calibrated camera system: the image 2D↔3D coordinate transfer is constructed off-line.
Yosen Chen, Copyright Reserved 34
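A minimal sketch of that off-line 2D↔3D transfer under a standard pinhole model: back-project an image point to a 3D ray (the head line) and intersect it with a horizontal plane (the ground plane, or the head-height plane). The intrinsics/extrinsics layout is an assumption; the actual calibration code is not shown in the slides.

==================== C++ sketch: pinhole 2D-to-3D back-projection ====================
#include <array>

// fx, fy, cx, cy are intrinsics; R (3x3, row-major) and t map world -> camera.
struct Vec3 { double x, y, z; };
using Mat3 = std::array<std::array<double, 3>, 3>;

// Back-project image point (u, v): ray origin = camera center -R^T t,
// ray direction = R^T K^{-1} (u, v, 1). This is the "head line" of an observation.
void imageToRay(const Mat3& R, const Vec3& t,
                double fx, double fy, double cx, double cy,
                double u, double v, Vec3& origin, Vec3& dir) {
    Vec3 dc{(u - cx) / fx, (v - cy) / fy, 1.0};            // K^{-1} (u, v, 1)
    auto rTrans = [&](const Vec3& p) {                     // multiply by R^T
        return Vec3{R[0][0]*p.x + R[1][0]*p.y + R[2][0]*p.z,
                    R[0][1]*p.x + R[1][1]*p.y + R[2][1]*p.z,
                    R[0][2]*p.x + R[1][2]*p.y + R[2][2]*p.z};
    };
    dir = rTrans(dc);
    origin = rTrans(Vec3{-t.x, -t.y, -t.z});               // camera center in world coordinates
}

// Estimated ground point (or head point): intersect the head line with the
// horizontal plane z = planeZ (0 for the ground, a known height for the head).
Vec3 rayPlaneZ(const Vec3& origin, const Vec3& dir, double planeZ) {
    double s = (planeZ - origin.z) / dir.z;
    return {origin.x + s * dir.x, origin.y + s * dir.y, planeZ};
}
======================================================================================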
How to Implement 3D-2D Mapping
Yosen Chen, Copyright Reserved
Class: Plane3D
Member functions:
isPointOnPlane
getDistFromPoint
getProjPointFromPoint
Class: line3D
Member functions:
getPositionFromX,Y,Z
isPointOnLine
getDistFromPoint
getProjPointFromPoint
getLinePointOnPlane
getMinDistPointFromLine
Class: CameraLine3D
Member functions:
setByImageLocation
setBy3DPosition
How to verify the mapping accuracy?
(Illustration) A CameraLine3D is derived from the 3D optic point (≈ the camera location) through a point on the 2D image plane; a reference plane in 3D space is defined by its 3D normal vector and a reference 3D plane center at (0, 0).
35
(CAM1/CAM2 views) Locate 3D points/lines on the 2D image planes of both cameras to verify the mapping.
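The sketch below mirrors what Line3D::getLinePointOnPlane and Line3D::getMinDistPointFromLine presumably compute (line-plane intersection and the minimum distance between two 3D lines, used for the head-line intersection verification); it is an illustrative reimplementation, not the original class code.

==================== C++ sketch: line-plane and line-line geometry ====================
#include <cmath>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 add(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 mul(V3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Line3D  { V3 p, d; };    // point on the line + direction
struct Plane3D { V3 c, n; };    // reference plane center + normal vector

// Intersection of a line with a plane (cf. getLinePointOnPlane):
// e.g. a head line with the ground plane, giving the estimated ground point.
V3 linePointOnPlane(const Line3D& l, const Plane3D& pl) {
    double s = dot(sub(pl.c, l.p), pl.n) / dot(l.d, pl.n);
    return add(l.p, mul(l.d, s));
}

// Minimum distance between two 3D lines (cf. getMinDistPointFromLine), used to
// verify whether two head lines from different cameras "intersect".
double minDistBetweenLines(const Line3D& a, const Line3D& b) {
    V3 n = cross(a.d, b.d);
    double len = std::sqrt(dot(n, n));
    if (len < 1e-9) {                                   // parallel: point-to-line distance
        V3 m = cross(sub(b.p, a.p), a.d);
        return std::sqrt(dot(m, m) / dot(a.d, a.d));
    }
    return std::fabs(dot(sub(b.p, a.p), n)) / len;
}
=======================================================================================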
3D Geometric-based Body Image Block Allocation
Human body model based on head tracking, used for matching body color: the human body is separated into 4 blocks between the head and the ground point.
(Illustration) For observation z^j_{rt} in camera j the ground point is observed, while for observation z^k_{qt} in camera k the ground point falls outside the image boundary and is unobserved.
Yosen Chen, Copyright Reserved 36
For some observations, not all the body blocks are observable.
Here we take the number of observable blocks as the confidence level of the correspondence result.
Observation Correspondence across Cameras
Matching body color (region-based body color comparison); example objects with Conf = 3 and Conf = 2 (the number of observable body blocks).
Body occlusion handling (based on depth order and 3D-2D block mapping): for the person whose depth order is Front, the body appearance is available for the correspondence process; for the person whose depth order is Rear, the body appearance is not available (the color turns darker).
Yosen Chen, Copyright Reserved 37
Body Occlusion Handling
Yosen Chen, Copyright Reserved 38
Prediction
Evaluation
(Individual
+ joint)
Resampling
Joint state check
Depth order
check
Body occlusion
handling
Target information
storage
Modified Head Trackers for People Overlapping
Labeling Consistency with Confidence Level
Yosen Chen, Copyright Reserved
Observation correspondence process should follow rules stated below
Rule1. Claiming “match” for one pair will also imply claiming “collision” for all
corresponding residual pairs, with the same confidence level
Rule2. Only higher-confidence decisions can enforce changes of lower-confidence ones
Example confidence matrices (rows: observations 1-3 of one camera; columns: observations A-C of the other; a positive entry claims "match", a negative entry claims "collision", and the magnitude is the confidence level):

Case 1 - match between 2 and C with Conf = 4, implying Conf = -4 collisions:
     A    B    C
1    0    0   -4
2   -4   -4    4
3    0    0   -4

Case 2 - observation C is removed; only the Conf = -4 collisions of observation 2 remain:
     A    B    C
1    0    0    0
2   -4   -4    0
3    0    0    0

Case 3 - two remaining columns with mixed confidences (Conf = 3 match between 3 and A, Conf = -3 and Conf = -4 collisions):
     A    B
1   -3    0
2   -4   -4
3    3   -3
39
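A small sketch of how Rule 1 and Rule 2 could act on such a confidence matrix: claiming a match writes the positive confidence into the matched cell and the negative confidence into the rest of its row and column, and only entries of strictly lower confidence are overwritten. How equal confidences are tie-broken is an assumption, since the slides do not say.

==================== C++ sketch: confidence-matrix update rules ====================
#include <cstdlib>
#include <vector>

// conf[r][c] > 0 means "match", < 0 means "collision"; |conf[r][c]| is the confidence level.
using ConfMatrix = std::vector<std::vector<int>>;

// Claim a match between row r and column c at the given confidence level.
void claimMatch(ConfMatrix& conf, int r, int c, int level) {
    auto weaker = [&](int current) { return std::abs(current) < level; };

    // Rule 2: a lower-confidence decision cannot change a higher-confidence entry.
    if (!weaker(conf[r][c]) && conf[r][c] != level) return;
    conf[r][c] = level;

    // Rule 1: the match implies a collision, at the same confidence level,
    // for every other pair in the same row and the same column.
    for (int j = 0; j < static_cast<int>(conf[r].size()); ++j)
        if (j != c && weaker(conf[r][j])) conf[r][j] = -level;
    for (int i = 0; i < static_cast<int>(conf.size()); ++i)
        if (i != r && weaker(conf[i][c])) conf[i][c] = -level;
}
====================================================================================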
Yosen Chen, Copyright Reserved
Human Machine Interaction
Basic Visual Tracker
Multi-Cue Particle Filter Tracker
Consistent Labeling in Visual Surveillance
System
Multi-Human Tracker
Multi-Camera Multi-Human Tracker
Consistent Labeling with People Re-Entering
Problem
Outline
40
Consistent Labeling with People Re-Entering
Yosen Chen, Copyright Reserved
(Block diagram) As in the multi-camera tracking system, local cameras #A and #B each run a multi-head tracker on their captured images and send observations z^A_{1t}, z^A_{2t}, z^B_{1t}, z^B_{2t} to the central server, which performs multi-camera multi-observation correspondence to identify the relationship of observations in different cameras (e.g. H_{1t} = {z^A_{2t}, z^B_{1t}} for Person #1 and H_{2t} = {z^A_{1t}, z^B_{2t}} for Person #2).
In addition, the resulting objects X_{1t}, X_{2t}, X_{3t} are associated with target databases.
41
Consistent Labeling Using Target Database
Hierarchical association with target databases: the objects {X_it}, i = 1, ..., n, are associated with the target databases {H_ht}, h = 1, ..., m.
Association processes follow matching confidence orders: the object confidence level L^i_OBJ (from the observation) is compared against the target confidence levels L^i_TAR (stored in the databases), working through them in descending order from high to low.
If the association succeeds, update the related database; else, create a new database.
PS: the database association process is only run for objects with confidence level ≥ 2.
Yosen Chen, Copyright Reserved 42
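A rough sketch of that hierarchical association loop: keep only objects with confidence level ≥ 2, try the databases in descending confidence order, update on success and create a new database otherwise. The placeholder types, the exact ordering details, and the associates() test (the next slide's height/color/history check) are assumptions.

==================== C++ sketch: hierarchical database association ====================
#include <algorithm>
#include <functional>
#include <vector>

// Simplified placeholders for the deck's observed objects X_it and target databases H_ht.
struct ObservedObject { int confidence; /* height, block colors, history, ... */ };
struct TargetDatabase { int confidence; /* adaptive color model, height, ... */ };

void associateWithDatabases(std::vector<ObservedObject> objects,
                            std::vector<TargetDatabase>& databases,
                            const std::function<bool(const ObservedObject&,
                                                     const TargetDatabase&)>& associates) {
    // Only objects with confidence level >= 2 enter the association process.
    objects.erase(std::remove_if(objects.begin(), objects.end(),
                      [](const ObservedObject& o) { return o.confidence < 2; }),
                  objects.end());

    // Compare against databases in descending order of target confidence level.
    std::sort(databases.begin(), databases.end(),
              [](const TargetDatabase& a, const TargetDatabase& b) {
                  return a.confidence > b.confidence;
              });

    for (const auto& obj : objects) {
        auto it = std::find_if(databases.begin(), databases.end(),
                      [&](const TargetDatabase& db) { return associates(obj, db); });
        if (it != databases.end()) {
            // Association success: update the related database (color model, height, history).
        } else {
            databases.push_back(TargetDatabase{obj.confidence});   // create a new database
        }
    }
}
=======================================================================================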
Association Process
Association process: determine whether an observed object and a database entry are from the same person
The features are similar to those of the observation correspondence process:
People height
Body color comparison by blocks
Association history consistency
Yosen Chen, Copyright Reserved
Observation-Database Association Flow: check the height difference (No, if the difference > threshold) → body color comparison by block (No, if the color is unmatched) → association history consistency → Yes, the observation (e.g. X_{2t} from the camera observations) and this database set (e.g. H_{1t}, people information already stored in the databases) are from the same person.
43
How to Model & Update Database
Adaptive Gaussian Mixture Model
3 color candidates (histogram & weight) describe the color of 1 body block
The candidates' histograms & weights are updated based on the current color of MATCHED observations
Color comparison with the observed color (defines MATCHED or not):
Bhattacharyya distance to the 3 candidates
3 weighted Gaussian kernels
Yosen Chen, Copyright Reserved 44
(Illustration: adaptive Gaussian mixture model over a color histogram, 1-D for demonstration) Each of Candidate #1, #2, #3 has a candidate color, a candidate weight, and a candidate variance; candidate color & weight are updated based on observations. Inspired by [Stauffer, CVPR 2000].
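A minimal sketch of that per-block adaptive mixture update: compare the observed block histogram with the three candidates by Bhattacharyya similarity, blend the matched candidate and raise its weight, or replace the weakest candidate when nothing matches. The bin count, match threshold, and learning rate are assumptions.

==================== C++ sketch: 3-candidate color model update ====================
#include <algorithm>
#include <array>
#include <cmath>

constexpr int kBins = 16;                        // assumed histogram size
using Histogram = std::array<double, kBins>;

struct ColorCandidate { Histogram hist{}; double weight = 0.0; };
struct BlockModel { std::array<ColorCandidate, 3> candidates; };  // 3 candidates per body block

double bhattacharyya(const Histogram& a, const Histogram& b) {
    double s = 0;
    for (int i = 0; i < kBins; ++i) s += std::sqrt(a[i] * b[i]);
    return s;                                    // 1.0 for identical normalized histograms
}

// Compare the observed block color with the 3 candidates and update the model.
void updateBlock(BlockModel& m, const Histogram& observed,
                 double matchThreshold = 0.8, double alpha = 0.05) {
    ColorCandidate* best = nullptr;
    double bestSim = 0;
    for (auto& c : m.candidates) {
        double sim = bhattacharyya(c.hist, observed);
        if (sim > bestSim) { bestSim = sim; best = &c; }
    }
    if (best != nullptr && bestSim > matchThreshold) {
        // MATCHED: blend the candidate histogram toward the observation, raise its weight.
        for (int i = 0; i < kBins; ++i)
            best->hist[i] = (1 - alpha) * best->hist[i] + alpha * observed[i];
        for (auto& c : m.candidates)
            c.weight = (1 - alpha) * c.weight + (&c == best ? alpha : 0.0);
    } else {
        // Unmatched: replace the lowest-weight candidate with the observed color.
        auto& worst = *std::min_element(m.candidates.begin(), m.candidates.end(),
            [](const ColorCandidate& a, const ColorCandidate& b) { return a.weight < b.weight; });
        worst.hist = observed;
        worst.weight = alpha;
    }
}
====================================================================================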
Database Operation Demonstration
Consistent Labeling Using Target Database
(Panels: histogram candidates, Camera#2 histogram, Camera#1 histogram)
Matched with camera#2
Matched with camera#1
Yosen Chen, Copyright Reserved 45
Consistent Labeling Using Target Database
Database Demonstration (cont’)
Yosen Chen, Copyright Reserved 46
Multi-People Correspondence
Yosen Chen, Copyright Reserved 47
1. Multi-people depth order
2. Body occlusion handling
3. Multi-camera multi-people correspondence is correct
Multi-People Re-entering
Yosen Chen, Copyright Reserved 48
System Operation Flow:
1. Observation correspondence & body occlusion handling
2. Associate with the current databases → (create, then) update new information
3. Return people IDs (shown by color) to each tracker of each camera
Future Works…
Extend to a 3rd camera
Challenges in communication load, algorithm complexity, and tracking/labeling stability
Performance evaluation of the target database
Comparison with related works
More challenges:
Outdoor surveillance (human body detection)
Uncalibrated multi-camera system (training)
Moving camera (rotation-aware 3D-2D mapping)
Yosen Chen, Copyright Reserved 49
Thank You
BACKUP SLIDES
Yosen Chen, Copyright Reserved 51
Motion Detection: Optical Flow
(Modified from OpenCV Sample Code)
(Block diagram) The input gray-scale image goes through optical point extraction and then optical point mapping & filtering against the stored previous gray-scale image (optical flow); the motion detection criteria fire the interaction trigger, while the reset criteria trigger a reset.
Optical Flow:
1. A technique to create edge points and then map them between adjacent images (edge tracking).
2. Better suited to analyzing group behavior than individual points (per-point estimates are not accurate).
3. Can be utilized as a motion detection tool.
Reset criteria:
1. Not enough effective optical points on screen
2. Time out, then refresh
Motion detection criteria:
1. Average velocity of the moving points > threshold
2. Number of moving points > threshold
Moving points: points with velocity > 3 × the average velocity
Yosen Chen, Copyright Reserved 52
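A compact OpenCV C++ sketch of those criteria using sparse Lucas-Kanade optical flow; the feature-count, velocity, and point-count thresholds are placeholders, and the average is taken over all tracked points rather than only the moving ones, which is a simplification of the slide.

==================== C++ sketch: optical-flow motion detection ====================
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Decide whether motion is present between two consecutive gray-scale frames.
bool motionDetected(const cv::Mat& prevGray, const cv::Mat& gray,
                    double velThreshold = 2.0, int countThreshold = 20) {
    std::vector<cv::Point2f> prevPts, nextPts;
    std::vector<uchar> status;
    std::vector<float> err;

    // Optical point extraction on the previous frame.
    cv::goodFeaturesToTrack(prevGray, prevPts, 300, 0.01, 10);
    if (prevPts.empty()) return false;                    // relates to the reset criterion

    // Optical point mapping between adjacent images.
    cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

    // Velocity of each successfully tracked point and the overall average.
    std::vector<double> vel(prevPts.size(), 0.0);
    double sum = 0; int n = 0;
    for (size_t i = 0; i < prevPts.size(); ++i) {
        if (!status[i]) continue;
        double dx = nextPts[i].x - prevPts[i].x, dy = nextPts[i].y - prevPts[i].y;
        vel[i] = std::sqrt(dx * dx + dy * dy);
        sum += vel[i]; ++n;
    }
    if (n == 0) return false;
    double avg = sum / n;

    // Moving points: velocity > 3 * average velocity.
    int moving = 0;
    for (size_t i = 0; i < vel.size(); ++i)
        if (status[i] && vel[i] > 3 * avg) ++moving;

    return avg > velThreshold && moving > countThreshold;
}
===================================================================================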
BACKUP SLIDES END
Yosen Chen, Copyright Reserved 53