6. Specific Object Recognition Using Local Features
4/16/2018
①Extract local regions (patches) from images
②Describe the patches by d-dimensional vectors
③Make correspondences between similar patches
④Calculate similarity between the images
[Figure: two images with three matched patches (similarity = 3). A local feature consists of a position (x, y), orientation θ, scale σ, and a feature vector f (e.g., 128-dim SIFT).]
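As a minimal sketch of steps ②–④, the following matches two sets of descriptors by nearest neighbor and counts the correspondences as the similarity score. The function name, toy descriptors, and distance threshold are illustrative assumptions, not from the slides:

```python
import numpy as np

def match_local_features(desc_a, desc_b, max_dist=0.4):
    """Count correspondences between two sets of L2-normalized
    local descriptors (one descriptor per row).
    max_dist is a hypothetical matching threshold."""
    # pairwise Euclidean distances between all descriptor pairs
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nearest = d.argmin(axis=1)  # nearest patch in B for each patch in A
    matched = d[np.arange(len(desc_a)), nearest] < max_dist
    return int(matched.sum())   # similarity = number of correspondences

# toy 4-dim "descriptors" for two images
a = np.eye(4)
b = np.vstack([np.eye(3, 4), [[0.5, 0.5, 0.5, 0.5]]])
print(match_local_features(a, b))  # → 3
```

Three of the four patches of image A find a close counterpart in image B, so the similarity is 3, matching the figure above.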
12. Which One Should You Use?
• For accuracy:
– SIFT or Hessian-Affine detector + RootSIFT descriptor
• For speed:
– ORB detector + ORB descriptor
• Local Feature Detectors, Descriptors, and Image Representations: A Survey
https://arxiv.org/abs/1607.08368
13. RootSIFT [Arandjelovic+, CVPR’12]
• The Hellinger kernel works better than Euclidean distance for comparing histograms such as SIFT
• Hellinger kernel (Bhattacharyya's coefficient) for L1-normalized histograms x and y:
  H(x, y) = Σ_i √(x_i y_i)
• Explicit feature map of x into x′:
  – L1-normalize x
  – take the element-wise square root of x to give x′
  – x′ is then automatically L2-normalized, since Σ_i (√x_i)² = Σ_i x_i = 1
• Computing the Euclidean distance in the feature-map space is equivalent to the Hellinger distance in the original space:
  ‖x′ − y′‖² = ‖x′‖² + ‖y′‖² − 2 x′ᵀy′ = 2 − 2 H(x, y)
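The mapping above is easy to implement. A minimal numpy sketch (the function name `rootsift` and toy histograms are my own), which also checks numerically that the squared Euclidean distance between mapped vectors equals 2 − 2·H(x, y):

```python
import numpy as np

def rootsift(x, eps=1e-12):
    """Map a SIFT histogram x to RootSIFT: L1-normalize, then take
    element-wise square roots. The result is automatically
    L2-normalized, since sum(sqrt(x_i)^2) = sum(x_i) = 1."""
    x = np.asarray(x, dtype=float)
    x = x / (np.abs(x).sum() + eps)  # L1 normalization
    return np.sqrt(x)                # element-wise square root

# Euclidean distance in RootSIFT space vs. Hellinger kernel
x = np.array([4.0, 1.0, 3.0, 2.0])
y = np.array([2.0, 2.0, 5.0, 1.0])
xp, yp = rootsift(x), rootsift(y)
hellinger = np.sum(np.sqrt((x / x.sum()) * (y / y.sum())))
# ||x' - y'||^2 should equal 2 - 2 * H(x, y)
print(np.linalg.norm(xp - yp) ** 2, 2 - 2 * hellinger)
```

In practice this two-line transform is applied on top of a standard SIFT pipeline with no other changes to the matching code.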
14. Large-scale Object Recognition
[Figure: a query image is compared against every reference image by explicit distance calculation between their local features.]
Explicit feature matching requires a high computational cost and memory footprint.
Bag-of-visual words!
15. Bag-of-Visual Words [Sivic+, ICCV’03]
• Offline:
– Collect a large number of training vectors
– Run a clustering algorithm (e.g., k-means)
– Centroids of the clusters = visual words (VWs)
• Online:
– All features are assigned to their nearest visual words
– An image is represented by the frequency histogram of VWs
– (Dis)similarity is defined by the distance between histograms
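The online step can be sketched as follows. The toy vocabulary and feature values here are invented for illustration; a real vocabulary would come from k-means over training descriptors:

```python
import numpy as np

def bovw_histogram(features, vwords):
    """Assign each local feature to its nearest visual word and
    return the normalized frequency histogram (the BoVW vector)."""
    # squared distances from every feature to every visual word
    d = ((features[:, None, :] - vwords[None, :, :]) ** 2).sum(axis=2)
    assign = d.argmin(axis=1)  # nearest-VW index per feature
    hist = np.bincount(assign, minlength=len(vwords)).astype(float)
    return hist / hist.sum()   # L1-normalized histogram

# toy vocabulary of 3 visual words in 2-D (assumed learned offline)
vwords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
feats = np.array([[0.1, 0.0], [0.9, 0.1], [0.0, 0.9], [0.1, 0.1]])
print(bovw_histogram(feats, vwords))  # → [0.5 0.25 0.25]
```

Two image histograms can then be compared with any histogram distance (e.g., L2 after normalization), instead of matching raw features pairwise.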
[Figure: local features quantized to visual words VW1, VW2, …, VWn; each image is represented by a frequency histogram over the visual words V = {v_i | 1 ≤ i ≤ N}.]
16. Bag-of-Visual Words [Sivic+, ICCV’03]
[Figure: in the indexing step, reference features are quantized to their nearest VW (VW1, VW2, …, VWk, …, VWn); in the search step, query features are quantized the same way and matched only against reference features assigned to the same VW.]
Matching can be performed in O(1) with an inverted index.
17. Overall Pipeline
[Figure: the complete retrieval pipeline.
Offline, each reference image goes through (1) feature detection, (2) feature description, and (3) quantization; every feature (image ID, position (x, y), scale σ, orientation θ) is stored in the inverted index under its visual word (v1, …, vw, …, vN).
Online, the query image goes through the same steps (1)–(3), then (4) voting: for each quantized query feature, the image IDs stored under the same visual word are obtained and their accumulated scores are incremented. The images with the top-K scores are retrieved, and (5) geometric verification separates inliers from outliers to produce the final results.]
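The indexing and voting steps (3)–(4) can be sketched as below; geometric verification (5) is omitted, and all names and toy data are illustrative assumptions:

```python
from collections import defaultdict

def build_inverted_index(ref_quantized):
    """ref_quantized: {image_id: list of VW ids of its features}.
    Returns the inverted index {vw_id: list of image ids},
    one entry per reference feature (offline step)."""
    index = defaultdict(list)
    for image_id, vw_ids in ref_quantized.items():
        for vw in vw_ids:
            index[vw].append(image_id)
    return index

def vote(index, query_vw_ids, top_k=2):
    """For each quantized query feature, obtain image IDs from the
    inverted index and accumulate scores; return the top-K images."""
    scores = defaultdict(int)
    for vw in query_vw_ids:
        for image_id in index.get(vw, []):
            scores[image_id] += 1
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# toy database: 3 reference images, features already quantized to VW ids
refs = {1: [3, 5, 5, 9], 2: [5, 9, 9], 3: [1, 2]}
index = build_inverted_index(refs)
print(vote(index, [5, 9, 9]))  # query features quantized to VWs 5, 9, 9 → [2, 1]
```

Each query feature touches only the (short) list of its own visual word, which is why the lookup cost per feature is independent of the database size.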
24. Average Query Expansion [Chum+, ICCV’07]
• Obtain the top m (m < 50) verified results of the original query
• Construct a new query from the average of these results
• Without geometric verification, QE degrades accuracy!
[Figure: query image → verified results → averaged new query.]
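A minimal sketch of average query expansion over BoVW vectors. Names and toy vectors are assumptions; here the original query is included in the average and the new query is re-normalized:

```python
import numpy as np

def average_query_expansion(query_vec, verified_vecs, m=50):
    """Average the BoVW vector of the original query with those of
    its top verified results (m < 50) to form a new query."""
    top = verified_vecs[:m]
    new_q = (query_vec + top.sum(axis=0)) / (len(top) + 1)
    return new_q / np.linalg.norm(new_q)  # re-normalize the new query

q = np.array([1.0, 0.0, 1.0, 0.0])
verified = np.array([[1.0, 1.0, 1.0, 0.0],
                     [1.0, 0.0, 1.0, 1.0]])
print(average_query_expansion(q, verified))
```

The averaged query picks up visual words that the verified results contain but the original query missed, which is what improves recall.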
25. Multiple Image Resolution Expansion [Chum+, ICCV’07]
[Figure: ROIs of the query image and of the first verified results, shown at varying resolutions.]
• Calculate the relative change in resolution
• Construct an average query for each resolution (→ new query 1, new query 2, new query 3)
27. Discriminative Query Expansion [Arandjelovic+, CVPR’12]
• Train a linear SVM classifier
– Use verified results as positive training data
– Use low-ranked images as negative training data
• Rank images by their signed distance from the decision boundary
– Reranking can be efficient with an inverted index!