
K. N. Toosi University of Technology
Fundamentals of Computer Vision - Spring 1397
Instructor: B. Nasihatkon

Final Exam

Name:            ID:
Khordad 1397 - June 2

Question 1 - Harris Corner Detection (20 points)


Consider the following image I:

    0   0   1   4   9
    1   0   5   7  11
    1   4   9  12  16
    3   8  11  14  16
    8  10  15  16  20

and the differentiation kernels

    d/dx: [-1  0  1]        d/dy: [-1  0  1]^T

Compute the Harris matrix

    H = ∑_{(x,y)∈W} [ I_x²      I_x·I_y ]
                    [ I_x·I_y   I_y²    ]

for the 3 by 3 highlighted window (the central 3 by 3 block of the image). In the above formula I_x = dI/dx, I_y = dI/dy, and W is the window highlighted in the image.

A) First, compute the derivatives using the differentiation kernels shown above. No normalization (division by 2) is needed. (5 points)

    I_x = dI/dx        I_y = dI/dy
    X  X  X  X  X      X  X  X  X  X
    X  4  7  6  X      X  4  8  8  X
    X  8  8  7  X      X  8  6  7  X
    X  8  6  5  X      X  6  6  4  X
    X  X  X  X  X      X  X  X  X  X

(X marks border pixels where the central-difference kernels do not fit.)
B) Now compute the Harris Matrix based on the derivative matrices. (10 points).

∑_{(x,y)∈W} I_x(x,y)² = 4² + 7² + 6² + 8² + 8² + 7² + 8² + 6² + 5² = 403

∑_{(x,y)∈W} I_y(x,y)² = 4² + 8² + 8² + 8² + 6² + 7² + 6² + 6² + 4² = 381

∑_{(x,y)∈W} I_x(x,y)·I_y(x,y) = 4·4 + 7·8 + 6·8 + 8·8 + 8·6 + 7·7 + 8·6 + 6·6 + 5·4 = 385

H = [ 403  385 ]
    [ 385  381 ]

C) Compute the Harris cornerness score C = det(H) − k·trace(H)² for k = 0.04. What do we have here? A corner? An edge? Or a flat area? Why? (5 points)

det(H) = 403·381 − 385² = 5318,   trace(H) = 403 + 381 = 784

C = det(H) − k·trace(H)² = 5318 − 0.04 · 784² = −19268.24

A negative Harris score indicates an edge.
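
As a cross-check, the whole computation of parts A-C fits in a few lines of NumPy (the library choice is ours; the exam itself asks for hand computation):

```python
import numpy as np

# The 5x5 image from the question.
I = np.array([[0,  0,  1,  4,  9],
              [1,  0,  5,  7, 11],
              [1,  4,  9, 12, 16],
              [3,  8, 11, 14, 16],
              [8, 10, 15, 16, 20]], dtype=float)

# Central differences with kernel [-1, 0, 1], no division by 2.
# They are only defined on the interior 3x3 block, which is exactly
# the highlighted window W.
Ix = I[1:-1, 2:] - I[1:-1, :-2]    # dI/dx
Iy = I[2:, 1:-1] - I[:-2, 1:-1]    # dI/dy

H = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
print(H)                                      # [[403. 385.] [385. 381.]]

k = 0.04
C = np.linalg.det(H) - k * np.trace(H) ** 2
print(C)                                      # about -19268.24 -> edge
```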



Question 2 - Scale Space / SIFT Detection (20 points)
The matrices below are the output of applying Gaussian filters with different bandwidths for a single octave in the SIFT detection algorithm. There is a single SIFT keypoint in the scale space. Your job is to find it. (This is before removing the edges and low-contrast points, and before sub-pixel tuning.) Report the x, y, and scale of the keypoint. As a reference, the highlighted cell at scale=2 has the scale-space location (x=3, y=1, scale=2).

To find the keypoint, you first need to build the Difference of Gaussian images in the scale space. The keypoints are found at the locations of extrema in scale space, as explained in class. Fill in the Difference of Gaussian values and locate the keypoint. Why is this a SIFT keypoint?

Hint: The keypoints do not exist in the first and last scale.

Gaussian filtered images:

scale = 1      scale = 2      scale = 3      scale = 4      scale = 5
25 22 20 17    25 20 20 16    24 18 20 14    22 15 20 12    20 10 20  8
25 28 19 17    25 30 19 16    25 32 19 15    24 35 18 14    20 37 10  8
20 19 19 17    19 17 19 16    18 16 19 14    16 16 18 13    16  8 20 10
15 15 15 15    13 13 14 14    12 12 13 13    11 11 11 12    12 10  9 10

Difference of Gaussian images (filled in; each is the difference of two consecutive Gaussian images):

scales 1, 2    scales 2, 3    scales 3, 4    scales 4, 5
 0  2  0  1     1  2  0  2     2  3  0  2     2  5  0  4
 0 -2  0  1     0 -2  0  1     1 -3  1  1     4 -2  8  6
 1  2  0  1     1  1  0  2     2  0  1  1     0  8 -2  3
 2  2  1  1     1  1  1  1     1  1  2  1    -1  1  2  2

The keypoint is located at x=1, y=1, scale=3: its DoG value −3 is smaller than all 26 of its scale-space neighbours, so it is a minimum of the Difference of Gaussian in scale space, which is what makes it a SIFT keypoint.

Compared to its neighbours at scale 2:            −3 < 1, 2, 0, 0, −2, 0, 1, 1, 0
Compared to its neighbours at scale 4:            −3 < 2, 5, 0, 4, −2, 8, 0, 8, −2
Compared to its neighbours in the same scale (3): −3 < 2, 3, 0, 1, 1, 2, 0, 1
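
The extremum search can also be automated. Below is a minimal NumPy sketch (the stacked-array layout is our own choice, not part of the exam) that rebuilds the DoG images and reports every cell strictly larger or smaller than all 26 of its scale-space neighbours:

```python
import numpy as np

# The five Gaussian-filtered images of the octave, stacked as (scale, y, x).
G = np.array([
    [[25, 22, 20, 17], [25, 28, 19, 17], [20, 19, 19, 17], [15, 15, 15, 15]],
    [[25, 20, 20, 16], [25, 30, 19, 16], [19, 17, 19, 16], [13, 13, 14, 14]],
    [[24, 18, 20, 14], [25, 32, 19, 15], [18, 16, 19, 14], [12, 12, 13, 13]],
    [[22, 15, 20, 12], [24, 35, 18, 14], [16, 16, 18, 13], [11, 11, 11, 12]],
    [[20, 10, 20,  8], [20, 37, 10,  8], [16,  8, 20, 10], [12, 10,  9, 10]],
], dtype=float)

D = G[:-1] - G[1:]   # four Difference of Gaussian images

# A keypoint must beat all 26 neighbours in its 3x3x3 scale-space cube,
# so it cannot lie in the first or last DoG scale or on the image border.
for s in range(1, D.shape[0] - 1):
    for y in range(1, D.shape[1] - 1):
        for x in range(1, D.shape[2] - 1):
            cube = D[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
            v = D[s, y, x]
            if (v == cube.min() or v == cube.max()) and (cube == v).sum() == 1:
                # the exam labels DoG scales starting at 1
                print(f"keypoint at x={x}, y={y}, scale={s + 1}, DoG value {v}")
```

Running it prints the single keypoint at x=1, y=1, scale=3 with DoG value −3, matching the hand computation.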



Question 3 - Image Transformations (25 points)
A) The image below on the left has undergone an affine transformation y = Mx + t to create the image on the right. The locations of the transformed points A′, B′, and D′ are marked in the transformed image. Calculate the affine transformation (the 2 by 2 matrix M and the vector t) from the point correspondences (A, A′), (B, B′), and (D, D′). (10 points)

B) Compute the coordinates of C′ and E′. (5 points)
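
The coordinates of A, B, D and A′, B′, D′ are given in the figure (not reproduced here), but the mechanics of parts A and B are fixed: three correspondences give six linear equations in the six unknowns of M and t. Here is a sketch with hypothetical coordinates (the src and dst arrays are placeholders, not the exam's actual points):

```python
import numpy as np

# Hypothetical correspondences; the exam's actual A, B, D and A', B', D'
# are read off the (omitted) figure.
src = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)   # A, B, D
dst = np.array([[2, 1], [3, 2], [1, 2]], dtype=float)   # A', B', D'

# y = M x + t gives two linear equations per correspondence in the
# unknowns (m11, m12, m21, m22, t1, t2).
rows, rhs = [], []
for (x1, x2), (y1, y2) in zip(src, dst):
    rows.append([x1, x2, 0, 0, 1, 0]); rhs.append(y1)
    rows.append([0, 0, x1, x2, 0, 1]); rhs.append(y2)
sol = np.linalg.solve(np.array(rows), np.array(rhs))

M, t = sol[:4].reshape(2, 2), sol[4:]
print("M =\n", M, "\nt =", t)

# Part B: with M and t known, any further point maps directly.
C = np.array([1.0, 1.0])    # hypothetical C
print("C' =", M @ C + t)
```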

K. N. Toosi University of Technology


C) Apply the following homography transformation to the input image of part A (the image on the left). Derive the corresponding transformed points A′, B′, C′, D′, E′, and draw the output image. (10 points)
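
The homography matrix for part C is in the omitted figure, so the sketch below uses a placeholder H. What it illustrates is the step that distinguishes part C from part A: points pass through homogeneous coordinates and are divided by the third coordinate.

```python
import numpy as np

# Placeholder homography; the exam's actual matrix is in the omitted figure.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.1, 0.0, 1.0]])

def apply_homography(H, p):
    """Map a 2D point through H in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]   # projective division; this is what an affine map lacks

print(apply_homography(H, (4.0, 2.0)))   # [2.857... 1.428...]
```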

Question 4 - RANSAC (15 points)


We want to build a panorama, using homographies for stitching consecutive images. We find a number of matches between the keypoints of two consecutive images, out of which at most 40 percent are outliers. We run the RANSAC algorithm for 100 iterations. Derive a lower bound on the probability of getting at least one all-inlier sample (and hence correctly estimating the homography).

Here is the relation between the probability of getting at least one sample with all inliers (p), the minimum number of point matches needed to compute the transformation (s), the proportion of outliers (e), and the number of iterations (N):

(1 − p) = (1 − (1 − e)^s)^N

Homography ⇒ s = 4
e ≤ 0.40 ⇒ 1 − e ≥ 0.6 ⇒ (1 − e)^4 ≥ 0.6^4 ⇒ 1 − (1 − e)^4 ≤ 1 − 0.6^4
1 − p = (1 − (1 − e)^4)^100 ≤ (1 − 0.6^4)^100 ⇒ p ≥ 1 − (1 − 0.6^4)^100 ≈ 1 − 9.37 × 10^−7
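
A quick numeric check of the bound, using only the numbers already in the derivation:

```python
# RANSAC success bound: 1 - p = (1 - (1 - e)^s)^N
s, N, e = 4, 100, 0.40   # homography sample size, iterations, worst-case outlier ratio

failure = (1 - (1 - e) ** s) ** N
print(f"failure probability <= {failure:.3e}")     # ~9.37e-07
print(f"success probability p >= {1 - failure:.9f}")
```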



Question 5 - SVM (20 points)
We intend to train a 2-class SVM on the data points below. The data is linearly separable. Your task is to determine the support vectors and compute the optimal w and b for the SVM classifier. Hint: find all potential sets of support vectors; for each of them, compute w and b, and choose the one with the widest margin.

The support vectors from the circle class must be a subset of {A, B}; otherwise the two classes are not separated by a line. The support vectors from the square class must be a subset of {E, G}; otherwise the two classes are not separated by a line.

There are only two possibilities:

Support Vectors = {A, B, E}, and
Support Vectors = {A, E, G}.

For the other two cases, {A, B, G} and {B, E, G}, the data is not linearly separable.

Let w = [u, v]^T.

First case: Support Vectors = {A, B, E}

w^T A + b = 1  ⇒ u·1 + v·1 + b = 1  ⇒ u + v + b = 1      (I)
w^T B + b = 1  ⇒ u·2 + v·2 + b = 1  ⇒ 2u + 2v + b = 1    (II)
w^T E + b = −1 ⇒ u·2 + v·0 + b = −1 ⇒ 2u + b = −1        (III)

(II) − (III) ⇒ 2v = 2 ⇒ v = 1
2·(I) − (II) ⇒ b = 1
(III) ⇒ 2u + b = −1 ⇒ 2u = −2 ⇒ u = −1

w = [u, v]^T = [−1, 1]^T ⇒ ||w|| = √2 ⇒ Margin(1) = 2/||w|| = √2

Second case: Support Vectors = {A, E, G}

w^T A + b = 1  ⇒ u·1 + v·1 + b = 1  ⇒ u + v + b = 1      (I)
w^T E + b = −1 ⇒ u·2 + v·0 + b = −1 ⇒ 2u + b = −1        (II)
w^T G + b = −1 ⇒ u·4 + v·1 + b = −1 ⇒ 4u + v + b = −1    (III)

(III) − (I) ⇒ 3u = −2 ⇒ u = −2/3
(II) ⇒ 2·(−2/3) + b = −1 ⇒ b = 1/3
(I) ⇒ v = 1 − u − b = 4/3

w = [u, v]^T = [−2/3, 4/3]^T ⇒ ||w|| = 2√5/3
⇒ Margin(2) = 2/||w|| = 3√5/5 < √2 = Margin(1)



Margin(1) > Margin(2) ⇒ Support Vectors = {A, B, E}, w = [−1, 1]^T, b = 1
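
The solution can be double-checked numerically. The coordinates implied by the equations above are A = (1, 1), B = (2, 2) for the circle class (label +1) and E = (2, 0), G = (4, 1) for the square class (label −1); the sketch below re-solves the active constraints of case 1 and verifies that every point respects its margin:

```python
import numpy as np

# Point coordinates read off the equations above (the figure is omitted).
pts = {"A": (1, 1), "B": (2, 2), "E": (2, 0), "G": (4, 1)}
labels = {"A": +1, "B": +1, "E": -1, "G": -1}

# Case 1: support vectors {A, B, E}; their constraints are active,
# i.e. w^T x + b equals the label. Unknowns: u, v, b.
A_mat = np.array([[1, 1, 1],    # u + v + b   = +1  (A)
                  [2, 2, 1],    # 2u + 2v + b = +1  (B)
                  [2, 0, 1]],   # 2u + b      = -1  (E)
                 dtype=float)
u, v, b = np.linalg.solve(A_mat, np.array([1.0, 1.0, -1.0]))
w = np.array([u, v])
print("w =", w, " b =", b, " margin =", 2 / np.linalg.norm(w))  # [-1. 1.] 1.0 sqrt(2)

# Feasibility: every training point must satisfy label * (w^T x + b) >= 1.
for name, p in pts.items():
    assert labels[name] * (w @ np.array(p) + b) >= 1 - 1e-9, name
print("all margin constraints hold, so {A, B, E} is the optimal solution")
```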
