(c) 2004 F. Estrada & A. Jepson & D. Fleet
Local Features Tutorial
Nov. 8, 2004
References:

- Matlab SIFT tutorial (from course webpage)
- Lowe, David G., "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, Vol. 60, No. 2, 2004, pp. 91-110.
Local Features Tutorial
Previous week: View-based models for object recognition

- The problem: Build a model that captures general properties of eye appearance that we can use to identify eyes (though the approach is general, and does not depend on the particular object class).
- Generalized model of eye appearance based on PCA. Images taken from the same pose and normalized for contrast.
- Demonstrated to be useful for classification. Key property: the model can find instances of eyes it has never seen before.
Today: Local features for object recognition

- The problem: Obtain a representation that allows us to find a particular object we've encountered before (i.e., find Paco's mug, as opposed to find a mug).
- Local features based on the appearance of the object at particular interest points.
- Features should be reasonably invariant to illumination changes, and ideally also to scaling, rotation, and minor changes in viewing direction.
- In addition, we can use local features for matching; this is useful for tracking and 3D scene reconstruction.
Key properties of a good local feature:

- Must be highly distinctive; a good feature should allow for correct object identification with a low probability of mismatch. Question: How do we identify image locations that are distinctive enough?
- Should be easy to extract.
- Invariance: a good local feature should be tolerant to
  - image noise,
  - changes in illumination,
  - uniform scaling,
  - rotation, and
  - minor changes in viewing direction.
  Question: How do we construct the local feature to achieve invariance to the above?
- Should be easy to match against a (large) database of local features.
SIFT features
Scale Invariant Feature Transform (SIFT) is an approach for detecting and extracting local feature descriptors that are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint.

Detection stages for SIFT features:

- Scale-space extrema detection
- Keypoint localization
- Orientation assignment
- Generation of keypoint descriptors

In the following pages we'll examine these stages in detail.
Scale-space extrema detection
Interest points for SIFT features correspond to local extrema of difference-of-Gaussian filters at different scales. Given a Gaussian-blurred image

  L(x, y, σ) = G(x, y, σ) ∗ I(x, y),

where

  G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))

is a variable-scale Gaussian, the result of convolving an image with a difference-of-Gaussian filter, G(x, y, kσ) − G(x, y, σ), is given by

  D(x, y, σ) = L(x, y, kσ) − L(x, y, σ),    (1)

which is just the difference of the Gaussian-blurred images at scales σ and kσ.
Figure 1: Diagram showing the blurred images at different scales, and the computation of the difference-of-Gaussian images (from Lowe, 2004; see ref. at the beginning of the tutorial).
The first step toward the detection of interest points is the convolution of the image with Gaussian filters at different scales, and the generation of difference-of-Gaussian images from the difference of adjacent blurred images.
The convolved images are grouped by octave (an octave corresponds to a doubling of the value of σ), and the value of k is selected so that we obtain a fixed number of blurred images per octave (for s intervals per octave, k = 2^(1/s)). This also ensures that we obtain the same number of difference-of-Gaussian images per octave.

Note: The difference-of-Gaussian filter provides an approximation to the scale-normalized Laplacian of Gaussian, σ²∇²G. The difference-of-Gaussian filter is in effect a tunable bandpass filter.
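To make this concrete, here is a minimal MATLAB sketch (not the tutorial's own code) that builds one octave of blurred images and their DoGs. The input image I is assumed to already be a grayscale double array; the constants follow Lowe's suggested values.

% Minimal sketch: one octave of Gaussian and DoG images.
% Assumes I is a grayscale image stored as a double array.
s      = 3;               % intervals per octave
k      = 2^(1/s);         % scale factor between adjacent images
sigma0 = 1.6;             % base scale of the octave (Lowe's suggestion)
nBlur  = s + 3;           % s+3 blurred images yield s+2 DoG images

gauss = cell(1, nBlur);
for i = 1:nBlur
    sigma  = sigma0 * k^(i-1);
    r      = ceil(3 * sigma);                % kernel radius
    [x, y] = meshgrid(-r:r, -r:r);
    G = exp(-(x.^2 + y.^2) / (2 * sigma^2));
    G = G / sum(G(:));                       % unit-sum Gaussian kernel
    gauss{i} = conv2(I, G, 'same');          % L(x,y,sigma) = G * I
end

dog = cell(1, nBlur - 1);
for i = 1:nBlur - 1
    dog{i} = gauss{i+1} - gauss{i};          % D = L(k*sigma) - L(sigma)
end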
Figure 2: Local extrema detection: the marked pixel is compared against its 26 neighbors in a 3×3×3 neighborhood that spans adjacent DoG images (from Lowe, 2004).
Interest points (called keypoints in the SIFT framework) are identified as local maxima or minima of the DoG images across scales. Each pixel in the DoG images is compared to its 8 neighbors at the same scale, plus the 9 corresponding neighbors at each of the two adjacent scales. If the pixel is a local maximum or minimum, it is selected as a candidate keypoint.
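A small helper function sketching this test in MATLAB, using the dog cell array from the previous sketch (ties count as extrema here, and boundary checks are omitted; a real implementation handles both):

% Is pixel (r,c) of DoG level i an extremum of its 26 neighbors?
function flag = isExtremum(dog, i, r, c)
    v     = dog{i}(r, c);
    block = cat(3, dog{i-1}(r-1:r+1, c-1:c+1), ...
                   dog{i}(r-1:r+1, c-1:c+1), ...
                   dog{i+1}(r-1:r+1, c-1:c+1));
    flag  = (v >= max(block(:))) || (v <= min(block(:)));
end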
For each candidate keypoint:

- Interpolation of nearby data is used to accurately determine its position.
- Keypoints with low contrast are removed.
- Responses along edges are eliminated.
- The keypoint is assigned an orientation.

To determine the keypoint orientation, a gradient orientation histogram is computed in the neighborhood of the keypoint (using the Gaussian image at the closest scale to the keypoint's scale). The contribution of each neighboring pixel is weighted by the gradient magnitude and by a Gaussian window with a σ that is 1.5 times the scale of the keypoint. Peaks in the histogram correspond to dominant orientations. A separate keypoint is created for the direction corresponding to the histogram maximum, and for any other direction within 80% of the maximum value. All the properties of the keypoint are measured relative to the keypoint orientation; this provides invariance to rotation.
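A rough MATLAB sketch of the orientation computation follows. Here L is assumed to be the Gaussian image at the keypoint's scale, (r,c) the keypoint location, and sigma its scale; the neighborhood radius is a simplification, and the 36-bin histogram follows Lowe's choice.

% Gradient orientation histogram around a keypoint (36 bins of 10 degrees).
nbins   = 36;
oriHist = zeros(1, nbins);
w       = round(3 * 1.5 * sigma);       % neighborhood radius (assumption)
for i = r-w : r+w
    for j = c-w : c+w
        dx  = L(i, j+1) - L(i, j-1);    % pixel differences as the gradient
        dy  = L(i+1, j) - L(i-1, j);
        mag = sqrt(dx^2 + dy^2);
        ang = mod(atan2(dy, dx), 2*pi);
        g   = exp(-((i-r)^2 + (j-c)^2) / (2 * (1.5*sigma)^2));  % Gaussian weight
        b   = min(floor(ang / (2*pi/nbins)) + 1, nbins);
        oriHist(b) = oriHist(b) + mag * g;
    end
end
% Peaks in oriHist give the dominant orientation(s); a separate keypoint
% is created for every peak within 80% of the global maximum.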
SIFT feature representation
Once a keypoint orientation has been selected, the feature descriptor is computed as a set of orientation histograms on 4×4 pixel neighborhoods. The orientation histograms are relative to the keypoint orientation, and the orientation data comes from the Gaussian image closest in scale to the keypoint's scale. Just like before, the contribution of each pixel is weighted by the gradient magnitude, and by a Gaussian with σ 1.5 times the scale of the keypoint.
Figure 3: SIFT feature descriptor (from Lowe, 2004)
Histograms contain 8 bins each, and each descriptor contains a 4×4 array of histograms around the keypoint. This leads to a SIFT feature vector with 4 × 4 × 8 = 128 elements. This vector is normalized to unit length to enhance invariance to changes in illumination.
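A simplified MATLAB sketch of the descriptor assembly: mag and ori are assumed to hold gradient magnitudes and orientations for the 16×16 patch around the keypoint, already rotated to the keypoint orientation; the Gaussian weighting and the interpolation between bins used in the full method are omitted.

% Accumulate the 4x4 grid of 8-bin histograms and normalize.
desc = zeros(4, 4, 8);
for i = 1:16
    for j = 1:16
        ci = ceil(i / 4);                             % 4x4 subregion row
        cj = ceil(j / 4);                             % 4x4 subregion column
        b  = min(floor(ori(i,j) / (pi/4)) + 1, 8);    % 8 bins of 45 degrees
        desc(ci, cj, b) = desc(ci, cj, b) + mag(i,j);
    end
end
desc = desc(:);               % 4*4*8 = 128-element feature vector
desc = desc / norm(desc);     % normalize for illumination invariance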
SIFT feature matching
- Find the nearest neighbor in a database of SIFT features from training images.
- For robustness, use the ratio of the distance to the nearest neighbor over the distance to the second-nearest neighbor.
- Finding the neighbor with minimum Euclidean distance is an expensive search.
- Use an approximate, fast method to find the nearest neighbor with high probability (a brute-force sketch of the ratio test follows this list).
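In the sketch below, d1 and d2 are assumed to be m×128 and n×128 descriptor matrices with unit-length rows; the 0.8 ratio threshold is a common choice, and a real system would replace the linear scan with an approximate nearest-neighbor search.

% Match descriptors in d1 against d2 with the distance-ratio test.
matches = zeros(0, 2);
for i = 1:size(d1, 1)
    d = sqrt(sum((d2 - d1(i,:)).^2, 2));   % Euclidean distances to all of d2
    [ds, idx] = sort(d);
    if ds(1) < 0.8 * ds(2)                 % unambiguous nearest neighbor?
        matches(end+1, :) = [i, idx(1)];   %#ok<AGROW>
    end
end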
Recognition using SIFT features
- Compute SIFT features on the input image.
- Match these features to the SIFT feature database.
- Each keypoint specifies 4 parameters: 2D location, scale, and orientation.
- To increase recognition robustness: use the Hough transform to identify clusters of matches that vote for the same object pose (a rough sketch of this voting step follows the list).
- Each keypoint votes for the set of object poses that are consistent with the keypoint's location, scale, and orientation.
- Locations in the Hough accumulator that accumulate at least 3 votes are selected as candidate object/pose matches.
- A verification step matches the training image for the hypothesized object/pose to the image using a least-squares fit to the hypothesized location, scale, and orientation of the object.
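A rough MATLAB sketch of the pose voting under stated assumptions: each row of M holds a match [mx my ms mo ix iy is io], i.e., a model keypoint's location, scale, and orientation followed by the matched image keypoint's, and the bin sizes are illustrative.

% Coarse Hough voting over similarity poses, one vote per match.
votes = containers.Map('KeyType', 'char', 'ValueType', 'double');
for i = 1:size(M, 1)
    dOri   = mod(M(i,8) - M(i,4), 2*pi);   % predicted rotation
    dScale = M(i,7) / M(i,3);              % predicted scale ratio
    R = [cos(dOri) -sin(dOri); sin(dOri) cos(dOri)];
    t = [M(i,5); M(i,6)] - dScale * R * [M(i,1); M(i,2)];  % translation
    key = sprintf('%d_%d_%d_%d', round(dOri/(pi/6)), ...   % 30-degree bins
                  round(log2(dScale)), round(t(1)/64), round(t(2)/64));
    if votes.isKey(key)
        votes(key) = votes(key) + 1;
    else
        votes(key) = 1;
    end
end
% Bins holding at least 3 votes become candidate object/pose hypotheses,
% each then verified with a least-squares fit.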
SIFT matlab tutorial
Gaussian-blurred images and difference-of-Gaussian images

Figure 4: Gaussian and DoG images grouped by octave.
Keypoint detection
Figure 5: a) Maxima of DoG across scales. b) Remaining keypoints after removal of low-contrast points. c) Remaining keypoints after removal of edge responses.
Final keypoints with selected orientation and scale
Figure 6: Extracted keypoints; arrows indicate scale and orientation.
Warped image and extracted keypoints
Figure 7: Warped image and extracted keypoints.

The Hough transform of matched SIFT features yields the transformation that aligns the original and warped images:
Computed affine transformation from rotated image to original image:

>> disp(aff);
    0.7060   -0.7052  128.4230
    0.7057    0.7100 -128.9491
         0         0    1.0000

Actual transformation from rotated image to original image:

>> disp(A);
    0.7071   -0.7071  128.6934
    0.7071    0.7071 -128.6934
         0         0    1.0000
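As a quick sanity check we can run on the matrix above (assuming aff holds the computed transformation, and noting that cos 45° = sin 45° ≈ 0.7071 in A), the rotation angle recovered from the computed transform is within a few hundredths of a degree of the 45-degree rotation that was actually applied:

>> atan2(aff(2,1), aff(1,1)) * 180 / pi   % rotation angle of aff, in degrees
ans =
   44.9878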
Matching and alignment of different views using local features.
Figure 8: Two views of Wadham College and the affine transformation used for alignment. Panels: Original View, Reference View, Aligned View, and Reference minus Aligned View.
Object recognition with SIFT
Figure 9: Cellphone examples with different poses and occlusion. Each row shows the panels: Image, Model, Location.
Figure 10: Book example; what happens when we match similar features outside the object? Each row shows the panels: Image, Model, Location.
Closing Comments
- SIFT features are reasonably invariant to rotation, scaling, and illumination changes.
- We can use them for matching and object recognition, among other things.
- They are robust to occlusion: as long as we can see at least 3 features from the object, we can compute its location and pose.
- Matching is efficient on-line; recognition can be performed in close-to-real time (at least for small object databases).
Questions:

- Do local features solve the object recognition problem?
- How about distinctiveness? How do we deal with false positives outside the object of interest? (See Figure 10.)
- Can we learn new object models without photographing them under special conditions?
- How does this approach compare to the object recognition method proposed by Murase and Nayar? Recall that their model consists of a PCA basis for each object, generated from images taken under diverse illumination and viewing directions, and a representation of the manifold described by the training images in this eigenspace (see the tutorial on Eigen Eyes).