Lecture 11
Motion and Optical Flow

We live in a moving world


• Perceiving, understanding and predicting motion is an important part of our daily lives
Motion and perceptual organization
• Even “impoverished” motion data can evoke a strong percept

G. Johansson, “Visual Perception of Biological Motion and a Model for Its Analysis,” Perception and Psychophysics 14, 201-211, 1973.
Seeing motion from a static picture?

[Link]
More examples
How is this possible?
• The true mechanism is yet to be revealed
• fMRI data suggest that the illusion is related to some component of eye movements
• We don’t expect computer vision to “see” motion from these patterns
The cause of motion
• Three factors in the imaging process
– Light
– Object
– Camera
• Varying any of them causes motion
– Static camera, moving objects (surveillance)
– Moving camera, static scene (3D capture)
– Moving camera, moving scene (sports, movies)
– Static camera, moving objects, moving light (time lapse)
Recovering motion
• Feature tracking
– Extract visual features (corners, textured areas) and “track” them over multiple frames

• Optical flow
– Recover image motion at each pixel from spatio-temporal image brightness variations (optical flow)

Two problems, one registration method (a quick code sketch of both follows):

B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.
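A minimal OpenCV sketch of the two recipes, assuming two consecutive grayscale frames stored in the hypothetical files frame0.png and frame1.png and illustrative parameter values. The sparse tracker uses OpenCV's pyramidal Lucas-Kanade routine; the dense field here uses OpenCV's Farnebäck method, which is a different algorithm from the Lucas-Kanade approach developed in the rest of the lecture.

```python
import cv2
import numpy as np

# Assumed inputs: two consecutive grayscale frames (hypothetical filenames).
frame0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# --- Feature tracking: detect good corners, then track them into the next frame ---
corners = cv2.goodFeaturesToTrack(frame0, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
tracked, status, err = cv2.calcOpticalFlowPyrLK(frame0, frame1, corners, None,
                                                winSize=(15, 15), maxLevel=3)
good_old = corners[status.ravel() == 1]   # points successfully tracked
good_new = tracked[status.ravel() == 1]

# --- Dense optical flow: one (u, v) vector per pixel (Farnebäck, not Lucas-Kanade) ---
flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
u, v = flow[..., 0], flow[..., 1]
```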
Feature tracking
• Challenges
– Figure out which features can be tracked
– Efficiently track across frames
– Some points may change appearance over time (e.g., due to rotation, moving into shadows, etc.)
– Drift: small errors can accumulate as the appearance model is updated
– Points may appear or disappear: need to be able to add/delete tracked points
What is Optical Flow?


Definitions

The optical flow is a velocity field in the image which transforms one image into the next image in a sequence. [Horn & Schunck]

[Figure: frame #1 + flow field = frame #2]


Ambiguity of optical flow

[Figure: Frame 1 with flow (1), the true motion, and an ambiguous alternative flow (2)]
What is Optical Flow?

[Figure: points {p_i} with velocity vectors {v_i} in frame I(t), mapping to frame I(t+1)]

Optical flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. [Wikipedia]
Optical Flow Research: Timeline

1981: Seminal papers (Horn & Schunck; Lucas & Kanade)
1992: Benchmark, Barron [Link]
1998: Benchmark, Galvin [Link]
Now: many more methods
Optical Flow

[Figure: frames I(x,y,t) and I(x,y,t+1)]

• Given two subsequent frames, estimate the point translation
• Key assumptions of the Lucas-Kanade tracker
– Brightness constancy: projection of the same point looks the same in every frame
– Small motion: points do not move very far
– Spatial coherence: points move like their neighbors
The brightness constancy constraint

[Figure: frames I(x,y,t) and I(x,y,t+1)]

• Brightness Constancy Equation:

  I(x, y, t) = I(x + u, y + v, t + 1)

Take the Taylor expansion of I(x+u, y+v, t+1) at (x, y, t) to linearize the right side
(I_x is the image derivative along x, I_t is the difference over frames):

  I(x + u, y + v, t + 1) ≈ I(x, y, t) + I_x · u + I_y · v + I_t
  I(x + u, y + v, t + 1) − I(x, y, t) ≈ I_x · u + I_y · v + I_t

So:

  I_x · u + I_y · v + I_t ≈ 0   →   ∇I · [u v]ᵀ + I_t = 0
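As a quick numerical sanity check of this linearization, the sketch below builds a smooth synthetic image, shifts it by a known sub-pixel motion, and verifies that I_x·u + I_y·v + I_t is close to zero for the true (u, v). The blob image, the shift, and the use of np.gradient for the derivatives are illustrative choices, not part of the lecture.

```python
import numpy as np

# Smooth synthetic image: a Gaussian blob, and a copy shifted by a known sub-pixel (u, v).
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w].astype(float)
u_true, v_true = 0.3, -0.2
I0 = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 50.0)                    # I(x, y, t)
I1 = np.exp(-((xx - 32.0 - u_true) ** 2 + (yy - 32.0 - v_true) ** 2) / 50.0)  # I(x, y, t+1)

# Derivatives: spatial gradients of I0 and the frame difference.
Iy, Ix = np.gradient(I0)     # np.gradient returns derivatives along rows, then columns
It = I1 - I0

# Brightness constancy residual: Ix*u + Iy*v + It should be near zero for the true motion.
residual = Ix * u_true + Iy * v_true + It
print("mean |residual| at the true (u, v):", np.abs(residual).mean())
```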
The brightness constancy constraint
Can we use this equation to recover image motion (u, v) at each pixel?

  ∇I · [u v]ᵀ + I_t = 0

• How many equations and unknowns per pixel?
• One equation, two unknowns (u, v)
Optical Flow
How does this show up visually?
Known as the “Aperture Problem”
Aperture Problem Exposed

Motion along just an edge is ambiguous


The aperture problem

[Figure: actual motion vs. perceived motion]
Lucas and Kanade
B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In
Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.

• How to get more equations for a pixel?
• Spatial coherence constraint
– Assume the pixel’s neighbors have the same (u, v)
– If we use a 5×5 window, that gives us 25 equations per pixel
Least squares problem
• Least squares problem: matching patches across images
• Overconstrained linear system A d = b, where each row of A is [I_x(p_i)  I_y(p_i)], d = [u v]ᵀ, and each entry of b is −I_t(p_i)

Least squares solution for d given by the normal equations (AᵀA) d = Aᵀb:

  [ Σ I_x I_x   Σ I_x I_y ] [u]     [ Σ I_x I_t ]
  [ Σ I_x I_y   Σ I_y I_y ] [v] = − [ Σ I_y I_t ]

The summations are over all pixels in the K × K window
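A minimal sketch of this solve for a single pixel, assuming derivative images Ix, Iy, It like those computed in the earlier snippet; the 5×5 window (half = 2) and the conditioning threshold are illustrative choices.

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It, x, y, half=2):
    """Estimate (u, v) at pixel (x, y) from a (2*half+1) x (2*half+1) window."""
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)   # 25 x 2 for a 5x5 window
    b = -It[win].ravel()                                       # 25 values of -It

    AtA = A.T @ A
    if np.linalg.cond(AtA) > 1e4:        # near-singular: flat region or pure edge
        return None
    u, v = np.linalg.solve(AtA, A.T @ b)  # normal equations (A^T A) d = A^T b
    return u, v

# Example: motion at the centre of the synthetic blob frames from the earlier snippet.
print(lucas_kanade_window(Ix, Iy, It, x=32, y=32))   # roughly (0.3, -0.2)
```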

Criteria for Harris corner detector

The flow is reliably estimated where AᵀA is invertible and well conditioned, which is the same criterion used by the Harris corner detector; where it is not, the aperture problem appears.

[Figure: corners (good), lines (aperture problem), flat regions (ill-conditioned)]
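A sketch of that check: classify a window by the eigenvalues of AᵀA computed from the derivative images; the threshold tau is an illustrative assumption.

```python
import numpy as np

def classify_window(Ix, Iy, x, y, half=2, tau=1e-2):
    """Classify a window as 'corner', 'edge', or 'flat' from the eigenvalues of A^T A."""
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    ix, iy = Ix[win].ravel(), Iy[win].ravel()
    AtA = np.array([[ix @ ix, ix @ iy],
                    [ix @ iy, iy @ iy]])
    lam_min, lam_max = np.linalg.eigvalsh(AtA)    # eigenvalues in ascending order
    if lam_min > tau:
        return "corner"   # both eigenvalues large: flow is well determined
    if lam_max > tau:
        return "edge"     # one large eigenvalue: the aperture problem
    return "flat"         # both small: textureless region, flow unrecoverable

# With the Gaussian-blob example from above:
print(classify_window(Ix, Iy, x=32, y=32))   # near the blob centre: "corner"
print(classify_window(Ix, Iy, x=5, y=5))     # far from the blob: "flat"
```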


Errors in Lucas-Kanade
• What are the potential causes of errors in this procedure?
– Suppose AᵀA is easily invertible
– Suppose there is not much noise in the image
• Errors occur when our assumptions are violated:
– Brightness constancy is not satisfied
– The motion is not small
– A point does not move like its neighbors
– window size is too large
– what is the ideal window size?
Improving accuracy
• Recall our small motion assumption: the first-order Taylor expansion of I(x+u, y+v, t+1)
• This is not exact
– To do better, we need to add higher order terms back in
– The Lucas-Kanade method does one iteration of Newton’s method
• Better results are obtained via more iterations
Iterative Refinement
• Iterative Lucas-Kanade Algorithm
1. Estimate velocity at each pixel by solving the Lucas-Kanade equations
2. Warp I(t-1) towards I(t) using the estimated flow field (use image warping techniques)
3. Repeat until convergence (see the sketch below)
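A sketch of this loop for a dense flow field, assuming some per-pixel flow estimator estimate_flow(I0, I1) (for example, lucas_kanade_window from above applied at every pixel) and using scipy.ndimage.map_coordinates for the warp; the iteration count and boundary handling are illustrative choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, u, v):
    """Warp `image` towards the next frame under flow (u, v): sample at (y - v, x - u)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(image, [yy - v, xx - u], order=1, mode="nearest")

def iterative_lk(I0, I1, estimate_flow, n_iters=5):
    """Iterative refinement: warp I0 by the current flow, re-estimate, accumulate."""
    u = np.zeros(I0.shape, dtype=float)
    v = np.zeros(I0.shape, dtype=float)
    for _ in range(n_iters):
        I0w = warp(I0, u, v)             # warp the first frame towards the second
        du, dv = estimate_flow(I0w, I1)  # residual flow between warped frame and I1
        u, v = u + du, v + dv            # accumulate the update
    return u, v
```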
Tracking in the 1D case:

[Figure: 1D signals I(x, t) and I(x, t+1); a point p moves with unknown velocity v]

With I_t the temporal derivative at p and I_x the spatial derivative:

  I_x = ∂I/∂x |_t,   I_t = ∂I/∂t |_{x=p},   v ≈ −I_t / I_x

Assumptions:
• Brightness constancy
• Small motion
Tracking in the 1D case:
Iterating helps refine the velocity vector.

[Figure: temporal derivative I_t at the 2nd iteration; the same estimate can be kept for the spatial derivative I_x]

  v ← v_previous − I_t / I_x

Converges in about 5 iterations.
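A toy 1D version of this update, assuming a smooth synthetic signal shifted by a known amount; np.interp performs the 1D warp, and the shift, the point p, and the iteration count are illustrative.

```python
import numpy as np

def track_1d(I0, I1, p, n_iters=5):
    """Estimate the displacement v of point p between 1D signals I0 and I1."""
    x = np.arange(len(I0), dtype=float)
    Ix = np.gradient(I0)[p]                    # spatial derivative: kept fixed across iterations
    v = 0.0
    for _ in range(n_iters):
        I0_shifted = np.interp(x - v, x, I0)   # warp I0 by the current estimate
        It = I1[p] - I0_shifted[p]             # temporal derivative at this iteration
        v = v - It / Ix                        # v <- v_previous - It / Ix
    return v

# Synthetic check: a smooth bump translated by 0.7 samples.
x = np.arange(200, dtype=float)
I0 = np.exp(-((x - 100.0) ** 2) / 200.0)
I1 = np.exp(-((x - 100.0 - 0.7) ** 2) / 200.0)
print(track_1d(I0, I1, p=95))                  # approaches 0.7 after a few iterations
```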
Optical Flow: Aliasing
Temporal aliasing causes ambiguities in optical flow because
images can have many pixels with the same intensity.
I.e., how do we know which ‘correspondence’ is correct?

[Figure: actual vs. estimated shift. Left: nearest match is correct (no aliasing); right: nearest match is incorrect (aliasing)]
To overcome aliasing: coarse-to-fine estimation.
Revisiting the small motion assumption

• Is this motion small enough?
– Probably not; it’s much larger than one pixel
– How might we solve this problem?
Reduce the resolution!

Coarse-to-fine optical flow estimation

[Figure: Gaussian pyramid of image 1 (t) and Gaussian pyramid of image 2 (t+1); run iterative L-K at the coarsest level, then warp & upsample and run iterative L-K again at each finer level]


A Few Details
• Top Level
– Apply L-K to get a flow field representing the flow from the first frame to the second frame.
– Apply this flow field to warp the first frame toward the second frame.
– Rerun L-K on the new warped image to get a flow field from it to the second frame.
– Repeat till convergence.
• Next Level
– Upsample the flow field to the next level as the first guess of the flow at that level.
– Apply this flow field to warp the first frame toward the second frame.
– Rerun L-K and warping till convergence as above.
• Etc. (a code sketch of this pyramid loop follows)
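A compact sketch of the pyramid loop, reusing the warp and iterative_lk helpers sketched earlier together with a generic per-level flow estimator; cv2.pyrDown builds the Gaussian pyramid and cv2.resize upsamples the flow. The number of levels and the factor-of-2 flow scaling are simplifications.

```python
import cv2
import numpy as np

def coarse_to_fine_lk(I0, I1, estimate_flow, levels=3):
    """Pyramidal L-K: estimate flow at the coarsest level, then upsample, warp, and refine."""
    # Gaussian pyramids (index 0 = full resolution, last = coarsest).
    pyr0, pyr1 = [np.float32(I0)], [np.float32(I1)]
    for _ in range(levels - 1):
        pyr0.append(cv2.pyrDown(pyr0[-1]))
        pyr1.append(cv2.pyrDown(pyr1[-1]))

    # Zero flow as the initial guess at the coarsest level.
    u = np.zeros(pyr0[-1].shape, dtype=np.float32)
    v = np.zeros(pyr0[-1].shape, dtype=np.float32)

    for lvl in range(levels - 1, -1, -1):
        J0, J1 = pyr0[lvl], pyr1[lvl]
        if u.shape != J0.shape:
            # Upsample the flow to this level's resolution and scale the vectors by 2.
            u = 2.0 * cv2.resize(u, (J0.shape[1], J0.shape[0]))
            v = 2.0 * cv2.resize(v, (J0.shape[1], J0.shape[0]))
        # Warp the first frame by the current flow, then refine with iterative L-K.
        du, dv = iterative_lk(warp(J0, u, v), J1, estimate_flow)
        u, v = u + du, v + dv
    return u, v
```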
Coarse-to-fine optical flow estimation

[Figure: Gaussian pyramids of image 1 and image 2; a motion of u = 10 pixels at full resolution becomes u = 5, 2.5, and 1.25 pixels at successively coarser levels]
Optical flow result
The Flower Garden Video

What should the optical flow be?
Optical Flow Results

* From Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003

Summary
• Major contributions from Lucas, Tomasi, Kanade
– Tracking feature points
– Optical flow

• Key ideas
– By assuming brightness constancy, truncated Taylor expansion leads to simple and fast patch matching across frames
– Coarse-to-fine registration
