
Computer Vision (M.EEC)

Feature detection and description

Andry Maykol Pinto
[email protected]
Outline

1. Feature-based techniques
   Edge detection
   Keypoints
2. References


Feature-based techniques

Features are specific structures in the image, such as points, edges or objects. They can be classified into two main categories:

Edge features are indicators of object boundaries and occlusion events in the image sequence. They are matched based on their orientation and local appearance.

Keypoint features are described by the appearance of patches of pixels surrounding the point location.


Edge-based detectors include Sobel and Canny.

Corner-based detectors include Harris, KLT, Shi-Tomasi and FAST.

Blob-based detectors include SIFT, SURF, ORB, BRISK, FREAK and MSER.

source: Ehab

Edge detection

Edges/contours represent boundaries of objects/scenes that carry important semantic associations.

Edges occur at boundaries between regions of different color, intensity, or texture, which mathematically is captured by the local gradient:

$$J(x, y) = \nabla I(x, y) = \left(\frac{dI}{dx}, \frac{dI}{dy}\right)(x, y)$$

where J points in the direction of steepest ascent in the intensity function, and the edge orientation is the direction perpendicular to the local contour.


Image derivatives amplify noise, since they accentuate high frequencies.

A low-pass filter is often applied before computing the gradient.
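As a minimal sketch of this smooth-then-differentiate pipeline, assuming OpenCV (opencv-python) and an illustrative image path:

```python
import cv2
import numpy as np

# Load a grayscale image (path is illustrative).
img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Low-pass (Gaussian) filter to suppress high-frequency noise
# before differentiation.
smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)

# First-order derivatives via Sobel operators.
gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)

# Gradient magnitude and orientation (perpendicular to the local edge).
magnitude = np.sqrt(gx**2 + gy**2)
orientation = np.arctan2(gy, gx)
```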



Smoothing then differentiating means computing the derivative of the smoothed image, $\frac{d}{dx}(h * I)$, where $h$ is the smoothing kernel and $I$ the image. This can be optimized!


By the derivative theorem of convolution:

$$\frac{d}{dx}(h * I) = \left(\frac{d}{dx} h\right) * I \tag{1}$$

so a single convolution with the derivative of the kernel, $\frac{dh}{dx}$, replaces the two-step smooth-then-differentiate computation.
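A sketch of this optimization with SciPy: gaussian_filter1d with order=1 convolves directly with the derivative of a Gaussian kernel, i.e. it computes $(\frac{dh}{dx}) * I$ in a single pass (the sigma value is illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# One pass with a derivative-of-Gaussian kernel (order=1) equals
# smoothing with a Gaussian and then differentiating: (d/dx h) * I.
def gradient_x(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    return gaussian_filter1d(image.astype(np.float32), sigma, axis=1, order=1)

def gradient_y(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    return gaussian_filter1d(image.astype(np.float32), sigma, axis=0, order=1)
```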


To detect the peak of the gradient (the edge location), consider the second derivative:

$$\frac{d^2}{dx^2}(h * I) \tag{2}$$

which, by the same property, equals $\left(\frac{d^2}{dx^2} h\right) * I$: a single convolution with the second derivative of the kernel, $\frac{d^2 h}{dx^2}$. Edges then correspond to zero-crossings of this response.


where the Laplacian operator is given by:

$$\nabla^2 I = \frac{d^2}{dx^2} I + \frac{d^2}{dy^2} I \tag{3}$$
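A short Laplacian-of-Gaussian sketch with OpenCV; the zero-crossing test below is a simplified illustration, not a full edge detector:

```python
import cv2
import numpy as np

# Laplacian of Gaussian (LoG): smooth first, then apply the Laplacian.
img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)
log_response = cv2.Laplacian(smoothed, cv2.CV_32F, ksize=3)

# Edges lie at zero-crossings of the LoG response; a simple test
# looks for sign changes between horizontal/vertical neighbours.
sign = np.sign(log_response)
zero_cross = (np.diff(sign, axis=0, prepend=sign[:1]) != 0) | \
             (np.diff(sign, axis=1, prepend=sign[:, :1]) != 0)
```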

Edge-based techniques: detecting edges

The Prewitt Edge Detector calculates derivatives of the image in the X and Y directions using the operator (1, 1, 1) : (−1, −1, −1).

source: medium

The Sobel Edge Detector uses another operator, (1, 2, 1) : (−1, −2, −1), which gives more weight to the central row/column.
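Both 3x3 kernels can be built as the outer product of a smoothing vector and a central-difference vector; a sketch using OpenCV's generic 2D filter (Prewitt has no dedicated OpenCV function):

```python
import cv2
import numpy as np

# x-derivative kernels: outer product of smoothing and difference vectors.
prewitt_x = np.outer([1, 1, 1], [-1, 0, 1]).astype(np.float32)
sobel_x   = np.outer([1, 2, 1], [-1, 0, 1]).astype(np.float32)

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
edges_prewitt = cv2.filter2D(img, cv2.CV_32F, prewitt_x)
edges_sobel   = cv2.filter2D(img, cv2.CV_32F, sobel_x)
```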


The Canny Edge Detector is a multi-stage algorithm that includes noise reduction, finding the intensity gradient of the image, non-maximum suppression, and hysteresis thresholding.
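Minimal usage sketch (the hysteresis thresholds are illustrative):

```python
import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# Gradients above 200 are strong edges; those between 100 and 200
# are kept only if connected to a strong edge (hysteresis).
edges = cv2.Canny(img, threshold1=100, threshold2=200)
```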

Keypoint-based techniques

Keypoint features are points of interest. They can be used to find a sparse set of corresponding locations in different images.

source: cc.gatech.edu


Applications include: object recognition, image registration, visual tracking, motion-based segmentation, visual odometry, 3D reconstruction, etc.

Feature-based correspondence techniques can:
1. find features in one image that can be accurately tracked using a local search technique such as correlation or least squares;
2. independently detect features in all the images and match features based on their descriptors.


What is the most appropriate approach when a large amount of motion or appearance change is expected?


Keypoints: architecture

The keypoint detection and matching pipeline can be represented by:
1. feature detection/extraction;
2. feature description;
3. feature matching;
4. feature tracking (alternative to feature matching).

source: Szeliski


Keypoints: properties

Goal: to find image locations that have a meaning and/or are good for correspondences.

A feature is a point that is expressive in texture; desirable properties include:
Distinctiveness
Locality
Quantity
Accuracy
Efficiency
Repeatability
Invariance
Robustness

Keypoints: corner detection

A corner is an intersection of two (or more) edges, and a blob is a patch of pixels that share similar properties.

Harris corner detection analyses the Sum of Squared Differences (SSD) for a pixel displacement (u, v):

$$\mathrm{SSD}(u, v) = \sum_{(x,y) \in \Omega} \left(I(x, y) - I(x + u, y + v)\right)^2 \tag{4}$$

The displaced image can be approximated by a 1st-order Taylor expansion:

$$I(x + u, y + v) \approx I(x, y) + I_x(x, y)\,u + I_y(x, y)\,v \tag{5}$$


Thus,

$$\mathrm{SSD}(u, v) \approx \sum_{(x,y) \in \Omega} \left(I_x(x, y)\,u + I_y(x, y)\,v\right)^2 \tag{6}$$

where $I_x = \frac{dI(x,y)}{dx}$ and $I_y = \frac{dI(x,y)}{dy}$ are the image derivatives in the x and y directions, respectively.

Equation 6 can be rewritten as:

$$\mathrm{SSD}(u, v) \approx \sum_{(x,y) \in \Omega} \begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix} \tag{7}$$


The corner matrix M is given by:

$$M = \sum_{(x,y) \in \Omega} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \tag{8}$$

M is symmetric and can be decomposed into:

$$M = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R \tag{9}$$

which describes an ellipse where $\lambda_1$ and $\lambda_2$ are the eigenvalues (axis lengths), and R defines the eigenvectors (orientation) of M.


Remember that eigenvectors x and eigenvalues $\lambda$ of a square matrix A satisfy:

$$A x = \lambda x \tag{10}$$

The eigenvalues of the 2x2 matrix M can be found from:

$$\det(M - \lambda I) = \det \begin{bmatrix} m_{11} - \lambda & m_{12} \\ m_{21} & m_{22} - \lambda \end{bmatrix} = 0 \tag{11}$$

$$\lambda_{1,2} = \frac{1}{2} \left( m_{11} + m_{22} \pm \sqrt{4 m_{12} m_{21} + (m_{11} - m_{22})^2} \right) \tag{12}$$

After this, the eigenvectors can be found by solving, for each $\lambda_i$:

$$\begin{bmatrix} m_{11} - \lambda_i & m_{12} \\ m_{21} & m_{22} - \lambda_i \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 0 \tag{13}$$
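A quick numerical check with NumPy (the matrix values are illustrative):

```python
import numpy as np

# Example corner matrix M accumulated over a window (values illustrative).
M = np.array([[4.0, 1.0],
              [1.0, 2.0]])

# eigh suits the symmetric M: eigenvalues come back in ascending
# order, with the corresponding eigenvectors as columns.
eigenvalues, eigenvectors = np.linalg.eigh(M)
print(eigenvalues)   # lambda_1, lambda_2 (ellipse axis lengths)
print(eigenvectors)  # ellipse orientation
```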


The cornerness function R can be defined as:

$$R = \det(M) - k\,\mathrm{trace}(M)^2 \tag{14}$$

where k is a constant ranging between 0.04 and 0.15, with:

$$\det(M) = \lambda_1 \lambda_2, \qquad \mathrm{trace}(M) = \lambda_1 + \lambda_2$$

Corner when R is large (both $\lambda_1$ and $\lambda_2$ are large).
Edge when R < 0 ($\lambda_1 \gg \lambda_2$, or vice-versa).
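A minimal Harris sketch with OpenCV (window size, aperture and threshold are illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# blockSize: window Omega; ksize: Sobel aperture; k: Harris constant.
R = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)

# Keep locations whose cornerness is a sizeable fraction of the maximum.
corners = R > 0.01 * R.max()
```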


Harris corner detection example (demo figure).


The Shi-Tomasi Corner Detector is similar to the Harris corner detector, except for the cornerness function R, and it can select the N strongest corners:

$$R = \min(\lambda_1, \lambda_2) \tag{15}$$

A point is classified as a corner when R is greater than a threshold.
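A minimal Shi-Tomasi sketch via OpenCV's goodFeaturesToTrack (parameter values are illustrative):

```python
import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# Keep up to N=100 corners whose min-eigenvalue score exceeds
# qualityLevel times the best score, at least 10 px apart.
corners = cv2.goodFeaturesToTrack(img, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)
```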

Properties of corner detectors under geometric and photometric changes:
Rotation invariant (the eigenvalues are unchanged by rotation);
Not scale invariant;
Robust to illumination changes.


Shi-Tomasi corner detection example (demo figure).


Keypoints: blob detection

The Scale Invariant Feature Transform (SIFT) estimates feature vectors that are invariant to scale, translation, rotation and illumination. Feature detection and description with SIFT comprise:
Scale-space extrema detection;
Keypoint localization;
Orientation assignment;
Local descriptor creation.


SIFT uses the scale-normalized LoG filter:
The Laplacian of Gaussian (LoG) is computed for the image;
The LoG acts as a blob detector with radius proportional to σ;
The LoG filter is highly peaked at the center;
σ acts as a scaling parameter.

The local maxima across scale and space give a list of (x, y, σ) values, meaning there is a potential keypoint at (x, y) at scale σ.


Implementing SIFT resorts to the Difference of Gaussians (DoG), which approximates the LoG.

DoG Pyramid:
Scale-space is separated into octaves;
An octave is a set of images where the blurring is sequentially increased, doubling across the octave (see the sketch below).

source: Guohui

The scale-space of an image is a function L(x, y, σ) produced by convolving the image with a Gaussian kernel (blurring) at different scales.

The images are searched for local extrema over scale and space:
subpixel accuracy for each keypoint (using a second-order Taylor expansion);
keypoints lying on edges are filtered out.


Orientation assignment provides invariance to rotation:
The gradient magnitude and direction are determined in a neighbourhood of the keypoint;
The size of the neighbourhood is proportional to the scale of that keypoint;
An orientation histogram is created in polar coordinates, with 36 bins covering 360 degrees;
The magnitude and angle of the gradient at each pixel are calculated.

The highest peak (dominant gradient direction) in the histogram is assigned to the keypoint.
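A sketch of the 36-bin, magnitude-weighted histogram, assuming gx and gy are gradient patches around the keypoint (SIFT additionally Gaussian-weights the votes, omitted here):

```python
import numpy as np

def dominant_orientation(gx: np.ndarray, gy: np.ndarray) -> float:
    # Gradient magnitude and angle (degrees, wrapped to [0, 360)).
    magnitude = np.sqrt(gx**2 + gy**2)
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0

    # 36-bin orientation histogram; each pixel votes with its magnitude.
    hist, edges = np.histogram(angle, bins=36, range=(0.0, 360.0),
                               weights=magnitude)
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])  # bin centre, degrees
```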

Feature descriptor: a 16x16 neighbourhood around the keypoint is considered.

All gradient orientations are described relative to the keypoint orientation.

The keypoint neighbourhood is divided into 4x4 regions, and orientation histograms are computed along 8 directions. All histograms are concatenated into a 4x4x8 = 128-dimensional vector.

These 128 bin values represent the descriptor vector.


SIFT feature detector example (demo figure).
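A minimal OpenCV usage sketch (the image path is illustrative):

```python
import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute the 128-dimensional descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Draw keypoints with scale (circle size) and orientation.
vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
```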


Keypoints: feature matching

Feature matching: establishing correspondences between images of the same scene/object, based on a set of interest points associated with image descriptors from each image.

The descriptor of each feature in the first set is matched against all features in the second set using some distance calculation.

Some approaches:
Brute-Force Matcher
FLANN Matcher
K-D Trees, etc.
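A brute-force matching sketch with Lowe's ratio test, building on the SIFT sketch above (image paths and the 0.75 ratio are illustrative):

```python
import cv2

sift = cv2.SIFT_create()
img1 = cv2.imread("image1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image2.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matcher with L2 distance (suitable for SIFT);
# knnMatch returns the 2 nearest candidates for the ratio test.
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
```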


Potential applications: structure from motion, augmented reality, video stabilization, stitching, motion and egomotion estimation.

Keypoints: image registration

Compute a transformation that relates a set of features from two (or more) images.

E.g., the planar homography relates the transformation between two image planes (up to a scale factor).

Consider corresponding points $(x_1, y_1)$ in the first image and $(x_2, y_2)$ in the second image.

 1      2
x x h11 h12 h13 x
s y 1 = H y  = h21 h22 h23 y 2 (16)
1 1 h31 h32 h33 1
2 + h2 + h2 + h2 + h2 + h2 + h2 + h2 + h2 = 1 or
where h11 12 13 21 22 23 31 32 33
h33 = 1.

The homography matrix is a 3x3 matrix with 8 DoF (degrees of


freedom) and it is estimated up to a scale.
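Continuing the matching sketch above, H can be estimated robustly with RANSAC (the 5-pixel reprojection threshold is illustrative):

```python
import cv2
import numpy as np

# Coordinates of the "good" matches from the previous sketch.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Robust estimation with RANSAC; mask flags the inlier correspondences.
H, mask = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

# Warp image 2 into image 1's frame, e.g. for panorama stitching.
warped = cv2.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))
```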


Panorama stitching for underwater scenarios (demo figure).


References

Ehab Salahat and Murad Qasaimeh, "Recent Advances in Features Extraction and Description Algorithms: A Comprehensive Survey", https://arxiv.org/pdf/1703.06376.pdf

Guohui Wang et al., "Workload Analysis and Efficient OpenCL-based Implementation of SIFT Algorithm on a Smartphone", IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2013. DOI: 10.1109/GlobalSIP.2013.6737002

J. Shi and C. Tomasi, "Good Features to Track", 9th IEEE Conference on Computer Vision and Pattern Recognition, 1994.

C. Harris and M. Stephens, "A Combined Corner and Edge Detector", Proceedings of the 4th Alvey Vision Conference, 1988.

Andry Maykol Pinto et al. (2014), "Enhancing dynamic videos for surveillance and robotic applications: The robust bilateral and temporal filter", Signal Processing: Image Communication, Elsevier.

Chapter 13 of "Robotics, Vision and Control", Peter Corke, Springer. DOI: 10.1007/978-3-642-20144-8

Chapter 4 of "Computer Vision: Algorithms and Applications", Richard Szeliski, Springer, 2010.

Thank you!

Andry Maykol Pinto


[email protected]
