
Image Restoration: Comprehensive Lecture Notes

1. Introduction to Image Restoration

1.1 Image Restoration vs. Enhancement


Image Enhancement:

Subjective process

Improves visual appearance

No specific model of degradation

Techniques: histogram equalization, sharpening, etc.

Goal: Make image more suitable for specific application

Image Restoration:

Objective process

Mathematical/probabilistic model of degradation

Attempts to recover original image

Based on degradation model

Goal: Reverse known degradation

1.2 Applications
Medical Imaging:

Remove blur from X-rays, MRI, CT scans

Reduce noise in ultrasound images

Enhance diagnostic accuracy

Astronomy:

Correct atmospheric turbulence effects

Remove telescope aberrations

Enhance planetary and stellar images

Forensics:

Restore blurred surveillance footage

Enhance license plates


Recover details from damaged images

Photography:

Remove motion blur

Reduce camera shake

Correct lens distortions

Satellite Imaging:

Atmospheric correction

Sensor noise reduction

Motion compensation

2. Image Degradation/Restoration Process Model

2.1 Mathematical Model


The degradation process can be modeled as:

Spatial Domain Model:

g(x, y) = h(x, y) * f(x, y) + η(x, y)

Frequency Domain Model:

G(u, v) = H(u, v) · F(u, v) + N(u, v)

Where:

f(x, y): Original undegraded image

g(x, y): Degraded/observed image

h(x, y): Degradation function (Point Spread Function - PSF)

η(x, y): Additive noise

*: Convolution operator

H(u, v): Degradation transfer function (Fourier transform of h)

F(u, v), G(u, v), N(u, v): Fourier transforms of f, g, η

2.2 Degradation/Restoration Block Diagram

Original Image → [Degradation] → Degraded Image → [Restoration] → Restored Image

f(x,y) → [degradation function h(x,y)] → + noise η(x,y) → g(x,y) → [restoration filter(s)] → f̂(x,y)

Forward Process (Degradation):

f(x,y) → Degradation Function h(x,y) → Add Noise η(x,y) → g(x,y)

Inverse Process (Restoration):

g(x,y) → Estimate Degradation → Apply Inverse/Filter → f̂(x,y)

2.3 Types of Degradation


1. Blur:

Motion blur (camera or object movement)

Out-of-focus blur

Atmospheric turbulence

Lens aberrations

2. Noise:

Sensor noise

Transmission errors

Quantization noise

Environmental interference

3. Geometric Distortions:

Lens distortion

Perspective distortion

Sensor misalignment

4. Radiometric Distortions:

Non-uniform illumination

Sensor non-linearity

Atmospheric effects

2.4 Linear Position-Invariant (LPI) Systems


Definition: A system is Linear and Position-Invariant if:

Linearity:

If g₁ = H[f₁] and g₂ = H[f₂]


Then H[a·f₁ + b·f₂] = a·g₁ + b·g₂

Position Invariance (Shift Invariance):

If g(x,y) = H[f(x,y)]
Then g(x-a, y-b) = H[f(x-a, y-b)]

Implications:

Degradation same at all image locations

Can be modeled by convolution

Frequency domain analysis applicable

Restoration easier than non-linear cases

Convolution Property: For LPI systems:

g(x, y) = h(x, y) * f(x, y)


G(u, v) = H(u, v) · F(u, v)

Point Spread Function (PSF):

h(x, y) is the PSF

Response to point source (delta function)

Characterizes blur completely for LPI system
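As a quick illustration of the convolution property (not part of the original notes), the NumPy sketch below blurs a test array with a 5×5 box PSF in the spatial domain and checks that the result matches multiplication by the PSF's transfer function in the frequency domain. The test image, PSF, and circular boundary handling are assumptions for the demonstration.

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
f = rng.random((64, 64))                       # test "image"
h = np.ones((5, 5)) / 25.0                     # 5x5 box-blur PSF

# Spatial-domain model: g = h * f (circular convolution)
g_spatial = convolve2d(f, h, mode='same', boundary='wrap')

# Frequency-domain model: G = H . F
h_pad = np.zeros_like(f)
h_pad[:5, :5] = h
h_pad = np.roll(h_pad, (-2, -2), axis=(0, 1))  # center the PSF at the origin
g_freq = np.real(np.fft.ifft2(np.fft.fft2(h_pad) * np.fft.fft2(f)))

print(np.allclose(g_spatial, g_freq))          # True: the two models agree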

3. Noise Models
Noise is random variation in image intensity values. Understanding noise characteristics is crucial for effective
restoration.

3.1 Noise Probability Density Functions

Gaussian (Normal) Noise

Most common noise model in image processing.

Probability Density Function:


p(z) = (1/(σ√(2π))) · e^(-(z-μ)²/(2σ²))

Where:

μ: Mean (average intensity)

σ²: Variance

σ: Standard deviation

Properties:

Bell-shaped curve

68% of values within μ ± σ

95% within μ ± 2σ

99.7% within μ ± 3σ

Sources:

Electronic circuit noise (thermal noise)

Sensor noise

Amplifier noise

Natural phenomena approximation

Characteristics:

Additive noise

Zero mean (μ = 0) for many cases

Described by Central Limit Theorem

Example:

μ = 0 (zero mean)
σ² = 100 (variance)
Most noise values between -30 and +30
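A minimal NumPy sketch (illustrative; it assumes an 8-bit grayscale input) for adding zero-mean Gaussian noise with the parameters of the example above:

import numpy as np

def add_gaussian_noise(image, mean=0.0, sigma=10.0):
    # sigma = 10 corresponds to the variance of 100 used in the example
    noise = np.random.normal(mean, sigma, image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)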

Rayleigh Noise

PDF:

p(z) = (2/b)(z - a)e^(-(z-a)²/b) for z ≥ a


p(z) = 0 for z < a
Where:

a: Location parameter

b: Scale parameter

Mean: μ = a + √(πb/4) Variance: σ² = b(4-π)/4

Applications:

Range imaging

Radar imaging

Ultrasound imaging

Erlang (Gamma) Noise

PDF:

p(z) = (aᵇzᵇ⁻¹e^(-az))/((b-1)!) for z ≥ 0

Where:

a: positive parameter; b: positive integer (required by the (b-1)! term)

Mean: μ = b/a Variance: σ² = b/a²

Applications:

Laser imaging

Specific sensor types

Exponential Noise

Special case of Erlang (b=1)

PDF:

p(z) = ae^(-az) for z ≥ 0

Mean: μ = 1/a Variance: σ² = 1/a²

Applications:

Laser scanners

Photon counting detectors


Uniform Noise

PDF:

p(z) = 1/(b-a) for a ≤ z ≤ b


p(z) = 0 otherwise

Mean: μ = (a+b)/2 Variance: σ² = (b-a)²/12

Applications:

Quantization noise

Random number generation

Theoretical studies

Characteristics:

Equally likely values in range

Simple to generate

Less common in real images

Impulse (Salt-and-Pepper) Noise

PDF:

p(z) = Pa for z = a (pepper - dark)


p(z) = Pb for z = b (salt - bright)
p(z) = 0 otherwise

where Pa + Pb ≤ 1

Characteristics:

Discrete distribution

Only two intensity values (typically 0 and 255)

Appears as isolated bright and dark pixels

Pa: Probability of pepper (dark pixels)

Pb: Probability of salt (bright pixels)

Sources:

Transmission errors (bit errors)


Analog-to-digital conversion errors

Dead or stuck sensor elements

Sharp, sudden disturbances

Example:

Original pixel values: [100, 102, 98, 101, 99]


With salt-and-pepper: [0, 102, 255, 101, 99]  (0 = pepper, 255 = salt)

If Pa = Pb:

Called bipolar impulse noise

Equal salt and pepper

If Pa = 0 or Pb = 0:

Called unipolar impulse noise

Only salt or only pepper
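A hedged NumPy sketch of corrupting an 8-bit image with salt-and-pepper noise at probabilities Pa and Pb (function name and default probabilities are illustrative assumptions):

import numpy as np

def add_salt_pepper(image, pa=0.05, pb=0.05):
    # pa: probability of pepper (0), pb: probability of salt (255)
    noisy = image.copy()
    r = np.random.rand(*image.shape)
    noisy[r < pa] = 0                          # pepper: dark impulses
    noisy[(r >= pa) & (r < pa + pb)] = 255     # salt: bright impulses
    return noisy

Setting pa = pb gives bipolar impulse noise; setting either to zero gives unipolar noise, as described above.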

3.2 Periodic Noise


Characteristics:

Regular, spatial frequency pattern

Appears as sinusoidal variations

Often visible as regular patterns

Sources:

Electrical interference (50/60 Hz)

Mechanical vibrations

Electromagnetic interference (EMI)

Analog-to-digital conversion artifacts

Scanning mechanisms

Mathematical Model:

η(x, y) = A sin(2π(u₀x + v₀y) + φ)

Where:
A: Amplitude

(u₀, v₀): Frequency components

φ: Phase

Visualization:

Regular stripe patterns

Grid-like patterns

Moiré patterns

3.3 Noise Parameter Estimation


From Image Histogram:

Mean:

μ = Σ zᵢp(zᵢ)

Variance:

σ² = Σ (zᵢ - μ)²p(zᵢ)

From Flat Region: Select uniform area in image and compute statistics directly:

μ ≈ (1/MN) Σ Σ g(x,y)
σ² ≈ (1/MN) Σ Σ [g(x,y) - μ]²

Signal-to-Noise Ratio (SNR):

SNR = 10 log₁₀(σ²_signal / σ²_noise) dB

Higher SNR = Better image quality
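The following sketch estimates noise statistics from a manually chosen flat region and computes SNR in decibels. The region coordinates are assumed to be supplied by the user after inspecting the image; this is an illustration of the estimation procedure, not code from the notes.

import numpy as np

def estimate_noise_stats(image, region):
    # region = (r0, r1, c0, c1): a visually uniform patch of the image
    r0, r1, c0, c1 = region
    patch = image[r0:r1, c0:c1].astype(np.float64)
    # In a flat region, patch.var() approximates the noise variance
    return patch.mean(), patch.var()

def snr_db(signal_var, noise_var):
    return 10.0 * np.log10(signal_var / noise_var)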

4. Restoration in Presence of Noise Only


When degradation is primarily noise without blur (H = identity):

g(x, y) = f(x, y) + η(x, y)

4.1 Spatial Filtering Techniques

Mean Filters
Arithmetic Mean Filter

Formula:

f̂(x, y) = (1/mn) Σ Σ g(s, t)


(s,t)∈Sxy

Where Sxy is the m×n neighborhood around (x, y)

Algorithm:

1. Define neighborhood size (e.g., 3×3, 5×5)

2. For each pixel, compute average of neighbors

3. Replace center pixel with average

Example (3×3 filter):

Noisy region: Filter: Result:


100 50 110 1/9 1/9 1/9
90 105 95 * 1/9 1/9 1/9 = 97
105 95 100 1/9 1/9 1/9

Center pixel: (100+50+110+90+105+95+105+95+100)/9 ≈ 97

Properties:

Reduces Gaussian noise effectively

Simple and fast

Blurs edges and details

Larger kernel = more smoothing, more blur

Best for: Gaussian noise reduction
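A short sketch of the arithmetic mean filter using scipy.ndimage.uniform_filter, which averages each size×size neighborhood; treat it as illustrative rather than the notes' reference implementation.

import numpy as np
from scipy.ndimage import uniform_filter

def arithmetic_mean_filter(image, size=3):
    # Replace each pixel with the average of its size x size neighborhood
    return uniform_filter(image.astype(np.float64), size=size)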

Geometric Mean Filter

Formula:

f̂(x, y) = [∏ g(s, t)]^(1/mn)


(s,t)∈Sxy

Properties:

Product of all pixels raised to 1/(kernel size)


Less smoothing than arithmetic mean

Better detail preservation

More computationally intensive

Comparison (arithmetic vs. geometric mean):

Arithmetic mean: smoother result, more noise reduction, less edge preservation

Geometric mean: comparable smoothing with better preservation of image detail

Harmonic Mean Filter

Formula:

f̂(x, y) = mn / [Σ Σ 1/g(s, t)]


(s,t)∈Sxy

Properties:

Works well for salt noise (bright pixels)

Fails for pepper noise (zero values)

Better edge preservation than arithmetic mean

Best for: Salt noise, Gaussian noise

Contraharmonic Mean Filter

Formula:

f̂(x, y) = [Σ Σ g(s,t)^(Q+1)] / [Σ Σ g(s,t)^Q]


(s,t)∈Sxy (s,t)∈Sxy

Where Q is the order of the filter.

Order Selection:

Q > 0: Eliminates pepper noise (dark pixels)

Q < 0: Eliminates salt noise (bright pixels)

Q = 0: Reduces to arithmetic mean

Q = -1: Reduces to harmonic mean

Example:
For pepper noise: Q = +1.5
For salt noise: Q = -1.5

Properties:

Versatile filter controlled by Q

Effective for impulse noise

Can target specific noise type

Wrong Q can worsen opposite noise type

Caution: Must choose Q based on noise type
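An illustrative contraharmonic mean sketch; the small epsilon added to the image is an assumption to keep negative powers of Q well defined at zero-valued (pepper) pixels.

import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean(image, size=3, Q=1.5):
    # Q > 0 removes pepper, Q < 0 removes salt, Q = 0 is the arithmetic mean
    img = image.astype(np.float64) + 1e-8      # avoid division/power problems at 0
    num = uniform_filter(img ** (Q + 1), size=size)
    den = uniform_filter(img ** Q, size=size)
    # Ratio of neighborhood means equals ratio of neighborhood sums
    return num / den

Because the same window is used for numerator and denominator, the ratio of local means equals the ratio of local sums in the formula above.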

Order-Statistics Filters

Median Filter

Most popular order-statistics filter.

Algorithm:

1. Extract neighborhood values


2. Sort values in ascending order
3. Select middle (median) value
4. Replace center pixel with median

Example (3×3 neighborhood):

Noisy pixels: [100, 50, 255, 90, 105, 95, 105, 0, 100]
Sorted: [0, 50, 90, 95, 100, 100, 105, 105, 255]

Median = 100

Properties:

Excellent for impulse noise (salt-and-pepper)

Preserves edges much better than mean filters

Non-linear filter

Removes outliers effectively

Less effective for Gaussian noise

Example with Salt-and-Pepper:


Original: Noisy: Median filtered:
100 100 100 100 0 100 100 100 100
100 100 100 100 255 100 100 100 100
100 100 100 100 100 100 100 100 100

Noise pixels (0, 255) replaced with neighborhood median (100)

Advantages:

Excellent edge preservation

Removes impulse noise effectively

No new intensity values introduced

Simple to implement

Disadvantages:

Computationally more expensive than mean

Less effective for Gaussian noise

May remove fine details with large kernels

Applications:

Preprocessing for edge detection

Medical image enhancement

Document processing
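In practice the median filter is usually applied through a library routine; a minimal sketch (noisy_image is an assumed 8-bit grayscale array), plus a check of the 3×3 example above:

import numpy as np
from scipy.ndimage import median_filter

def median_denoise(noisy_image, size=3):
    # Replace each pixel by the median of its size x size neighborhood
    return median_filter(noisy_image, size=size)

# Example neighborhood from the notes:
window = np.array([100, 50, 255, 90, 105, 95, 105, 0, 100])
print(np.median(window))   # 100.0, the middle value of the sorted list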

Max Filter

Formula:

f̂(x, y) = max{g(s, t)}


(s,t)∈Sxy

Properties:

Selects brightest pixel in neighborhood

Reduces pepper noise (dark pixels)

Brightens image

Good for finding bright features

Example:
Neighborhood: [100, 50, 110, 90, 0, 95, 105, 95, 100]
Max = 110

Min Filter

Formula:

f̂(x, y) = min{g(s, t)}


(s,t)∈Sxy

Properties:

Selects darkest pixel in neighborhood

Reduces salt noise (bright pixels)

Darkens image

Good for finding dark features

Midpoint Filter

Formula:

f̂(x, y) = [max{g(s,t)} + min{g(s,t)}] / 2


(s,t)∈Sxy

Properties:

Average of maximum and minimum

Works well for Gaussian and uniform noise

Combines order-statistics with averaging

Alpha-Trimmed Mean Filter

Algorithm:

1. Sort neighborhood pixels


2. Remove d/2 smallest and d/2 largest values
3. Average remaining mn - d pixels

Formula:

f̂(x, y) = (1/(mn-d)) Σ gr(s, t)


Where gr(s,t) represents the remaining pixels after trimming.

Parameter d:

d = 0: Standard arithmetic mean

d = mn-1: Median filter

0 < d < mn-1: Hybrid behavior

Properties:

Combines mean and median filtering

Robust to multiple noise types

Useful for mixed noise (Gaussian + impulse)

Trims outliers before averaging

Example (3×3, d=4):

Values: [0, 50, 90, 95, 100, 100, 105, 105, 255]
Remove 2 smallest: [0, 50] ✗
Remove 2 largest: [105, 255] ✗
Average remaining: (90+95+100+100+105)/5 = 98

Best for: Combination of Gaussian and impulse noise
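A sketch of the alpha-trimmed mean using scipy.ndimage.generic_filter; d is assumed even and smaller than the number of pixels in the window, matching the example above.

import numpy as np
from scipy.ndimage import generic_filter

def alpha_trimmed_mean(image, size=3, d=4):
    # Drop the d/2 lowest and d/2 highest values in each window, average the rest
    def trimmed(window):
        w = np.sort(window)
        return w[d // 2 : len(w) - d // 2].mean()
    return generic_filter(image.astype(np.float64), trimmed, size=size)

With d = 0 this reduces to the arithmetic mean; with d = mn - 1 it behaves like the median filter, as noted above.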

4.2 Adaptive Filters


Adaptive Local Noise Reduction Filter

Adapts to local image variance.

Algorithm:

σ²_L = Local variance in neighborhood Sxy


σ²_η = Overall noise variance (estimated)

f̂(x, y) = g(x, y) - (σ²_η/σ²_L)[g(x, y) - m_L]

where m_L = local mean

Behavior:

If σ²_L >> σ²_η (edge region): Minimal filtering, preserves edges

If σ²_L ≈ σ²_η (flat region): Strong filtering, reduces noise


Advantages:

Preserves edges while reducing noise

Adapts to local image content

Better than fixed filters
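A possible vectorized implementation of this filter; clipping the ratio σ²_η/σ²_L at 1 (used when the noise variance estimate exceeds the local variance) is a common convention assumed here.

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_local_noise_filter(g, noise_var, size=7):
    # f_hat = g - (noise_var / local_var) * (g - local_mean)
    g = g.astype(np.float64)
    local_mean = uniform_filter(g, size=size)
    local_var = uniform_filter(g * g, size=size) - local_mean ** 2
    ratio = np.minimum(noise_var / np.maximum(local_var, 1e-8), 1.0)  # clip at 1
    return g - ratio * (g - local_mean)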

Adaptive Median Filter

Adjusts window size based on local noise density.

Algorithm:

Stage A:
A1 = z_med - z_min
A2 = z_med - z_max
If A1 > 0 AND A2 < 0, go to Stage B
Else increase window size
If window ≤ S_max, repeat Stage A
Else output z_med

Stage B:
B1 = z_xy - z_min
B2 = z_xy - z_max
If B1 > 0 AND B2 < 0, output z_xy
Else output z_med

Where:

z_min, z_max, z_med: minimum, maximum, median in window

z_xy: intensity at (x, y)

S_max: maximum allowed window size

Properties:

Handles higher noise densities than standard median

Preserves details better

Removes impulse noise while maintaining sharpness

Variable window size

Advantages over Standard Median:

Less distortion in non-impulse pixels

Better edge preservation


Handles varying noise densities
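A straightforward, unoptimized sketch of the Stage A / Stage B logic above (pixel-by-pixel loops; reflect padding at the borders is an assumption):

import numpy as np

def adaptive_median_filter(image, s_max=7):
    # s_max: largest allowed (odd) window size
    img = image.astype(np.float64)
    pad = s_max // 2
    padded = np.pad(img, pad, mode='reflect')
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            s = 3
            while True:
                half = s // 2
                win = padded[i + pad - half : i + pad + half + 1,
                             j + pad - half : j + pad + half + 1]
                z_min, z_max, z_med = win.min(), win.max(), np.median(win)
                if z_min < z_med < z_max:      # Stage A: median is not an impulse
                    z_xy = img[i, j]
                    # Stage B: keep the pixel if it is not an impulse, else use the median
                    out[i, j] = z_xy if z_min < z_xy < z_max else z_med
                    break
                s += 2                          # grow the window
                if s > s_max:
                    out[i, j] = z_med
                    break
    return out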

5. Periodic Noise Reduction by Frequency Domain Filtering


Periodic noise appears as discrete spikes in Fourier spectrum.

5.1 Fourier Transform Review


2D Discrete Fourier Transform (DFT):

F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) e^(-j2π(ux/M + vy/N))

for u = 0,1,...,M-1; v = 0,1,...,N-1

Inverse DFT:

f(x, y) = (1/MN) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u, v) e^(j2π(ux/M + vy/N))

Magnitude Spectrum:

|F(u, v)| = √[R²(u, v) + I²(u, v)]

Phase Spectrum:

φ(u, v) = tan⁻¹[I(u, v) / R(u, v)]

5.2 Periodic Noise in Frequency Domain


Characteristics:

Appears as bright spikes/peaks in magnitude spectrum

Located symmetrically about origin

Frequency corresponds to spatial periodicity

Example:

Spatial domain: regular striped noise pattern
Frequency domain: energy concentrated in bright spikes at ±(u₀, v₀)

5.3 Bandreject Filters

Remove frequencies in specified band.

Ideal Bandreject Filter:

H(u, v) = 0 if D₀ - W/2 ≤ D(u,v) ≤ D₀ + W/2


H(u, v) = 1 otherwise

where D(u, v) = √[(u - M/2)² + (v - N/2)²]

Parameters:

D₀: Center frequency to reject

W: Width of rejection band

Butterworth Bandreject Filter:

H(u, v) = 1 / [1 + [D(u,v)·W / (D²(u,v) - D₀²)]^(2n)]

Gaussian Bandreject Filter:

H(u, v) = 1 - e^(-[(D²(u,v) - D₀²) / (D(u,v)·W)]²)

Properties:

Smoother transition than ideal

No ringing artifacts

Better spatial domain characteristics

5.4 Bandpass Filters


Pass only frequencies in specified band (complement of bandreject).

Formula:

H_BP(u, v) = 1 - H_BR(u, v)

Applications:

Isolate periodic components

Analyze specific frequency bands


5.5 Notch Filters
Reject specific frequencies (typically periodic noise).

Ideal Notch Filter:

H(u, v) = 0 if D_k(u, v) ≤ D₀ for any notch center k

H(u, v) = 1 otherwise

where D_k(u, v) is the distance from (u, v) to the k-th notch center.

Multiple Notches: For periodic noise at (u₀, v₀) and (-u₀, -v₀):

H(u, v) = ∏ H_k(u, v) × H_{-k}(u, v)

Notch Reject Filter Design:

1. Identify spike locations in frequency domain

2. Design notch centered at each spike

3. Apply symmetric notches (due to conjugate symmetry)

4. Multiply with image spectrum

5. Inverse transform

Example:

Noise at (30, 40) and (-30, -40) in spectrum


Create notches at both locations
H(u,v) = 0 in circular regions around these points

Butterworth Notch Filter:

H_NR(u,v) = ∏ [1 / (1 + [D₀/D_k(u,v)]^(2n))]

Where D_k is distance from notch center k.

Gaussian Notch Filter:

H_NR(u,v) = ∏ [1 - e^(-D²_k(u,v) / (2D₀²))]

Optimum Notch Filtering:

Instead of rejecting whole frequency bands, estimate the interference pattern itself and subtract a weighted copy of it:

1. Isolate the noise with a notch pass filter: N̂(u, v) = H_NP(u, v) · G(u, v)

2. Obtain the spatial interference pattern: η(x, y) = IFFT[N̂(u, v)]

3. Subtract a locally weighted copy: f̂(x, y) = g(x, y) - w(x, y) · η(x, y)

where the weighting function w(x, y) is chosen, neighborhood by neighborhood, to minimize the local variance of the restored image f̂(x, y).

5.6 Filtering Procedure


Complete Algorithm:

1. Compute 2D FFT of degraded image


G(u, v) = FFT[g(x, y)]

2. Identify periodic noise frequencies


- Visually inspect magnitude spectrum
- Look for symmetric bright spikes
- Note spike locations

3. Design appropriate filter H(u, v)


- Notch filter for isolated spikes
- Bandreject for band of frequencies

4. Multiply spectrum by filter


F̂ (u, v) = H(u, v) · G(u, v)

5. Compute inverse FFT


f̂(x, y) = IFFT[F̂ (u, v)]

6. Take real part of result


f̂(x, y) = Real[f̂(x, y)]

Example:

Image with 60 Hz AC interference:


1. FFT shows spikes at ±60 Hz location
2. Design notch filter to reject 60 Hz ± small band
3. Apply filter in frequency domain
4. IFFT back to spatial domain
Result: Clean image without interference
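A sketch of steps 1-6 using the Gaussian notch-reject form from Section 5.5; the notch locations (given as offsets from the center of the shifted spectrum) and the notch radius d0 are assumed to be identified beforehand by inspecting the magnitude spectrum.

import numpy as np

def gaussian_notch_reject(shape, centers, d0=10.0):
    # Notches at each (u0, v0) in `centers` and at the conjugate-symmetric (-u0, -v0)
    M, N = shape
    u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing='ij')
    H = np.ones(shape)
    for (u0, v0) in centers:
        for (a, b) in [(u0, v0), (-u0, -v0)]:
            D2 = (u - a) ** 2 + (v - b) ** 2
            H *= 1.0 - np.exp(-D2 / (2.0 * d0 ** 2))
    return H

def remove_periodic_noise(g, centers, d0=10.0):
    # Steps 1-6: FFT, multiply by the notch filter, inverse FFT, take the real part
    G = np.fft.fftshift(np.fft.fft2(g))
    H = gaussian_notch_reject(g.shape, centers, d0)
    f_hat = np.fft.ifft2(np.fft.ifftshift(H * G))
    return np.real(f_hat)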

6. Linear, Position-Invariant Degradations

6.1 Complete Degradation Model

G(u, v) = H(u, v) · F(u, v) + N(u, v)


Goal: Recover F(u, v) from G(u, v) knowing (or estimating) H(u, v)

6.2 Estimating the Degradation Function


Method 1: Observation

If available, image a similar scene

Compare degraded and known original

Estimate H from ratio

Method 2: Experimentation

Use impulse (bright dot) or edge in scene

Observe blur pattern

PSF ≈ observed blur of impulse

Method 3: Mathematical Modeling

Atmospheric Turbulence:

H(u, v) = e^(-k[(u-M/2)² + (v-N/2)²]^(5/6))

Motion Blur:

H(u, v) = (T/π(ua+vb)) sin(π(ua+vb)) e^(-jπ(ua+vb))

Where:

a, b: motion components in x, y

T: exposure duration

Uniform Out-of-Focus Blur:

h(r) = 1/(πR²) for r ≤ R


h(r) = 0 for r > R

where r = √(x² + y²), R = blur radius
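Illustrative constructions of the atmospheric-turbulence and motion-blur transfer functions above, evaluated on centered frequency coordinates; the default parameter values are assumptions, not values from the notes.

import numpy as np

def atmospheric_turbulence_H(M, N, k=0.0025):
    # H(u,v) = exp(-k[(u-M/2)^2 + (v-N/2)^2]^(5/6))
    u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing='ij')
    return np.exp(-k * (u ** 2 + v ** 2) ** (5.0 / 6.0))

def motion_blur_H(M, N, a=0.1, b=0.1, T=1.0):
    # H(u,v) = (T / (pi(ua+vb))) sin(pi(ua+vb)) exp(-j pi(ua+vb))
    u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing='ij')
    s = np.pi * (u * a + v * b)
    # np.sinc(x) = sin(pi x)/(pi x), which also handles the (ua+vb) -> 0 limit
    return T * np.sinc(u * a + v * b) * np.exp(-1j * s)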

Method 4: Blind Deconvolution

Estimate H and F simultaneously

Iterative optimization
No prior knowledge of H needed

Computationally intensive

6.3 Restoration Filters

The two main filters for this model, inverse filtering (Section 7) and minimum mean square error (Wiener) filtering (Section 8), are developed below.

7. Inverse Filtering
Simplest restoration approach.

7.1 Theory
If noise is negligible (N(u,v) ≈ 0):

G(u, v) = H(u, v) · F(u, v)

Therefore:
F̂ (u, v) = G(u, v) / H(u, v)

Complete Model with Noise:

F̂ (u, v) = G(u, v) / H(u, v) = F(u, v) + N(u, v)/H(u, v)

7.2 Problems
1. Division by Zero:

H(u, v) may be zero or very small at some frequencies

Division causes numerical instability

Result: extreme noise amplification

2. Noise Amplification:

Small values of H(u, v) amplify noise term

N(u, v)/H(u, v) becomes very large

Dominates restored image

Example:

If H(u₀, v₀) = 0.01 and N(u₀, v₀) = 1:


N/H = 1/0.01 = 100 (amplified 100×)

7.3 Modified Inverse Filter


Limit Filter to Reliable Frequencies:
H⁻¹(u, v) = 1/H(u, v) if |H(u, v)| > threshold
H⁻¹(u, v) = 0 otherwise

Radial Limit:

F̂ (u, v) = G(u, v) / H(u, v) for D(u, v) ≤ D₀


F̂ (u, v) = 0 otherwise

7.4 Practical Implementation

1. Compute FFTs: G(u,v) = FFT[g(x,y)]


2. Obtain or estimate H(u,v)
3. Define threshold T
4. For each (u, v):
if |H(u,v)| > T:
F̂ (u,v) = G(u,v) / H(u,v)
else:
F̂ (u,v) = 0
5. Compute f̂(x,y) = IFFT[F̂ (u,v)]

Limitations:

Still sensitive to noise

Threshold selection critical

Information lost where filter = 0

Better methods exist
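A compact sketch of the thresholded (pseudo-) inverse filter described above; H is assumed to be a centered transfer function of the same shape as the image (for example, built as in Section 6.2).

import numpy as np

def inverse_filter(g, H, threshold=0.1):
    # Divide by H only where |H| exceeds the threshold; zero elsewhere
    G = np.fft.fftshift(np.fft.fft2(g))
    F_hat = np.zeros_like(G)
    mask = np.abs(H) > threshold
    F_hat[mask] = G[mask] / H[mask]
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))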

8. Minimum Mean Square Error (Wiener) Filtering


Optimal linear filter in mean square error sense.

8.1 Theory
Objective: Minimize mean square error:

e² = E{(f - f̂)²}

Wiener Filter:

H_w(u, v) = [H*(u, v) / (|H(u, v)|² + S_η(u,v)/S_f(u,v))]


Where:

H*(u, v): Complex conjugate of H(u, v)

S_η(u, v): Power spectrum of noise = |N(u, v)|²

S_f(u, v): Power spectrum of undegraded image = |F(u, v)|²

|H(u, v)|²: H(u, v) · H*(u, v)

Restored Image:

F̂ (u, v) = H_w(u, v) · G(u, v)

8.2 Simplified Form


When Power Spectra Unknown:

Use constant ratio approximation:

S_η(u, v) / S_f(u, v) ≈ K (constant)

Parametric Wiener Filter:

H_w(u, v) = [H*(u, v) / (|H(u, v)|² + K)]

Parameter K controls smoothing:

K large: More smoothing, like lowpass filter

K small: Less smoothing, closer to inverse filter

K = 0: Reduces to inverse filter (if H ≠ 0)

8.3 Properties
Advantages:

Optimal in MSE sense

Incorporates noise information

No division by zero problems

Balances restoration and smoothing

Comparison with Inverse Filter:


Inverse Filter: 1/H(u, v)
Wiener Filter: H*(u,v) / (|H(u,v)|² + K)

When |H| is small:


Inverse → ∞ (noise amplification)
Wiener → 0 (suppression)

Behavior:

Automatically suppresses frequencies where SNR is low

Preserves frequencies where SNR is high

Adaptive to noise characteristics

8.4 Implementation

1. Obtain G(u, v) = FFT[g(x, y)]


2. Estimate or model H(u, v)
3. Estimate K = σ²_η / σ²_f or adjust iteratively
4. Compute Wiener filter:
H_w(u, v) = H*(u, v) / (|H(u, v)|² + K)
5. Compute F̂(u, v) = H_w(u, v) · G(u, v)
6. Compute f̂(x, y) = IFFT[F̂(u, v)]
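
A minimal parametric Wiener filter sketch following the steps above; H is assumed to be a centered transfer function of the image's shape, and K is assumed known or tuned by trial.

import numpy as np

def wiener_filter(g, H, K=0.01):
    # Parametric Wiener filter: H_w = conj(H) / (|H|^2 + K)
    G = np.fft.fftshift(np.fft.fft2(g))
    H_w = np.conj(H) / (np.abs(H) ** 2 + K)
    F_hat = H_w * G
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))

In practice K is either estimated as σ²_η/σ²_f (step 3) or varied over a range, keeping the visually best restoration.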
