
Assignment On

Image Restoration and Reconstruction

To
Md. Mijanur Rahman
Lecturer
Department of CSE
Southeast University

From
Sadia Akter Anika
ID: 2022200000012
Salma Sabiha Piya
ID: 2022100000057
Proshanta Ranjan Das
ID: 2023000000148
Saira Zaman
ID: 2023000000014
Contents

1. Model of Image Degradation and Image Restoration
   1.1) Linear, Position-Invariant Degradation
   1.2) Estimating The Degradation Function
      1.2.1) Estimation By Image Observation
      1.2.2) Estimation By Experimentation
      1.2.3) Estimation By Modeling
   1.3) Basic Restoration Filters
      1.3.1) Inverse Filter
      1.3.2) Wiener Filter
2. Noise Models
   2.1) Spatial and Frequency Properties of Noise
   2.2) Some Important Noise Probability Density Functions
      2.2.1) Gaussian Noise
      2.2.2) Rayleigh Noise
      2.2.3) Erlang (Gamma) Noise
      2.2.4) Exponential Noise
      2.2.5) Uniform Noise
      2.2.6) Salt and Pepper Noise
      2.2.7) Periodic Noise
      2.2.8) Estimating Noise Parameters
   2.3) Restoration in the Presence of Noise Only - Spatial Filtering
      2.3.1) Mean Filters
         2.3.1.1) Arithmetic Mean Filter
         2.3.1.2) Geometric Mean Filter
         2.3.1.3) Harmonic Mean Filter
         2.3.1.4) Contra-harmonic Mean Filter
      2.3.2) Order-Statistic Filters
         2.3.2.1) Median Filter
         2.3.2.2) Max and Min Filter
         2.3.2.3) Midpoint Filter
         2.3.2.4) Alpha-Trimmed Mean Filter
      2.3.3) Adaptive Filters
         2.3.3.1) Adaptive Local Noise Reduction Filter
         2.3.3.2) Adaptive Median Filter
   2.4) Periodic Noise Reduction Using Frequency Domain Filtering
      2.4.1) Notch Filtering
      2.4.2) Optimum Notch Filtering
3. Image Reconstruction From Projections
   3.1) Introduction
   3.2) Principles of X-Ray Computed Tomography
   3.3) Projections and the Radon Transform
   3.4) Back Projections
      3.4.1) The Fourier-Slice Theorem
         3.4.1.1) Reconstruction Using Parallel-Beam Filtered Back Projection
         3.4.1.2) Reconstruction Using Fan-Beam Filtered Back Projection
Model of Image Degradation and Image Restoration
The principal goal of restoration techniques is to improve an image in some predefined sense.
Although there are areas of overlap, image enhancement is largely a subjective process, while
restoration is for the most part an objective process. Restoration attempts to recover an image that
has been degraded by using a priori knowledge of the degradation phenomenon. Thus, restoration
techniques are oriented toward modeling the degradation and applying the inverse process in order
to recover the original image. The restoration approach usually involves formulating a criterion of
goodness that will yield an optimal estimate of the desired result, while enhancement techniques
are heuristic procedures to manipulate an image in order to take advantage of the human visual
system. Some restoration techniques are best formulated in the spatial domain, while others are
better suited for the frequency domain.
A Model of the Image Degradation/Restoration Process

Figure 5.1 shows an image degradation/restoration process. The degraded image in the spatial
domain is given by

g(x, y) = h(x, y) ⋆ f(x, y) + η(x, y)

where h(x, y) is the spatial representation of the degradation function and ⋆ denotes convolution.
The equivalent frequency domain representation is:

G(u, v) = H(u, v) F(u, v) + N(u, v)

where the terms in capital letters are Fourier transforms.
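The degradation model G(u, v) = H(u, v)F(u, v) + N(u, v) can be simulated directly with NumPy FFTs. The following is an illustrative sketch, not part of the assignment; the impulse image, 3×3 averaging PSF, and noise level are made-up toy values:

```python
import numpy as np

def degrade(f, h, noise_sigma=0.0, seed=0):
    """Simulate g = h * f + eta via the frequency domain: G = H F + N."""
    M, N = f.shape
    F = np.fft.fft2(f)
    H = np.fft.fft2(h, s=(M, N))          # zero-pad the PSF to image size
    rng = np.random.default_rng(seed)
    eta = rng.normal(0.0, noise_sigma, size=(M, N))
    g = np.real(np.fft.ifft2(H * F)) + eta
    return g, H

# toy example: a unit impulse blurred by a 3x3 averaging PSF, no noise
f = np.zeros((8, 8)); f[4, 4] = 1.0
h = np.ones((3, 3)) / 9.0
g, H = degrade(f, h)
```

Because the PSF is normalized, the total intensity of the impulse is preserved and the DC gain H(0, 0) is 1.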

• Three types of degradation that can be easily expressed mathematically:

• Relative motion of the camera and object:

  H(U, V) = sin(πVTU) / (πUV)

• Wrong lens focus:

  H(U, V) = J₁(πar) / (πar), where J₁ is the Bessel function of the first kind and r² = u² + v²

• Atmospheric turbulence:

  H(U, V) = e^{−c(u² + v²)^{5/6}}
1.1) Linear, Position-Invariant Degradation
The input-output relationship before the restoring stage is expressed as
𝒈(𝒙, 𝒚) = 𝑯[𝒇(𝒙, 𝒚)] + 𝜼(𝒙, 𝒚)
For the moment, let us assume that 𝜼(𝒙, 𝒚)=0 so that g(x,y)= H[f(x,y)]
H[af1(x,y)+bf2(x,y)]=aH[f1(x,y)]+bH[f2(x,y)]

Where a and b are scalars and f1(x,y) and f2(x,y) are any two input images.
If a = b = 1, the equation becomes

H[f1(x,y) + f2(x,y)] = H[f1(x,y)] + H[f2(x,y)]

With a slight (but equivalent) change in notation in the definition of the impulse in
Eq. (4.5-3), f(x,y) can be expressed as:

f(x, y) = ∫₋∞^∞ ∫₋∞^∞ f(α, β) δ(x − α, y − β) dα dβ

Assuming for the moment that η(x, y) = 0, the response of H to an impulse is

h(x, α, y, β) = H[δ(x − α, y − β)]

It is called the impulse response of H. In optics, the impulse becomes a point of light and
ℎ(𝑥, 𝛼, 𝑦, 𝛽) is commonly referred to as the point spread function (PSF). This name arises from
the fact that all physical optical systems blur (spread) a point of light to some degree, with the
amount of blurring being determined by the quality of the optical components.
g(x, y) = ∫₋∞^∞ ∫₋∞^∞ f(α, β) h(x − α, y − β) dα dβ + η(x, y)

1.2) Estimating The Degradation Function


There are three principal ways to estimate the degradation function for use in image restoration:
(1) Observation (2) Experimentation (3) Mathematical modeling.
These methods are discussed in the following sections. The process of restoring an image by using
a degradation function that has been estimated in some way sometimes is called blind
deconvolution, due to the fact that the true degradation function is seldom known completely.
1.2.1) Estimation By Image Observation
Suppose that we are given a degraded image without any knowledge about the degradation
function H. Based on the assumption that the image was degraded by a linear, position-invariant
process, one way to estimate H is to gather information from the image itself. For example, if the
image is blurred, we can look at a small rectangular section of the image containing sample
structures, like part of an object and the background. In order to reduce the effect of noise, we
would look for an area in which the signal content is strong. The next step would be to process the
subimage to arrive at a result that is as unblurred as possible.
Let the observed subimage be denoted by gₛ(x, y), and let the processed subimage (which in reality
is our estimate of the original image in that area) be denoted by f̂ₛ(x, y). Then

Hₛ(u, v) = Gₛ(u, v) / F̂ₛ(u, v)
1.2.2) Estimation By Experimentation
It is possible in principle to obtain an accurate estimate of the degradation. Images similar to the
degraded image can be acquired with various system settings until they are degraded as closely as
possible to the image we wish to restore. Then the idea is to obtain the impulse response of the
degradation by imaging an impulse (a small dot of light) using the same system settings. Recalling
that the Fourier transform of an impulse is a constant, it follows from Eq. (5.5-17) that

H(u, v) = G(u, v) / A

where A is a constant describing the strength of the impulse.

1.2.3) Estimation By Modeling


Degradation modeling has been used for many years because of the insight it affords into the image
restoration problem. The model can even take into account environmental conditions that cause
degradations. For example, a degradation model proposed by Hufnagel and Stanley [1964] is based
on the physical characteristics of atmospheric turbulence. This model has a familiar form:

H(u, v) = e^{−k(u² + v²)^{5/6}}

where k is a constant that depends on the nature of the turbulence.
1.3) Basic Restoration Filters
Restoration filters are a class of spatial filters used in digital image processing to improve the
quality of a degraded image. The goal is to recover an approximation of the original, undegraded
image. Unlike enhancement filters (which are subjective and improve visual appearance for human
viewers), restoration is often an objective process that attempts to reverse known or estimated
degradation (like blur or noise).
We basically divide this restoration filter in two types which is discussed below.
1.3.1) Inverse Filter
The material in this section is our first step in studying restoration of images degraded by a
degradation function H, which is given or obtained by a method. The simplest approach to
restoration is direct inverse filtering, where we compute an estimate 𝐹̂𝑠 (𝑢, 𝑣) of the transform of
the original image simply by dividing the transform of the degraded image 𝐺𝑠 (𝑢, 𝑣) by the
degradation function:
𝐺(𝑢, 𝑣)
𝐹̂ (𝑢, 𝑣) =
𝐻(𝑢, 𝑣)
Substituting the right side of Equation we get,
𝑁(𝑢, 𝑣)
𝐹̂ (𝑢, 𝑣) = 𝐹(𝑢, 𝑣) +
𝐻(𝑢, 𝑣)
One approach to get around the zero or small-value problem is to limit the filter frequencies to
values near the origin. From the discussion of Eq. we know that 𝐻(0,0) is usually the highest value
of 𝐻(𝑢, 𝑣) in the frequency domain. Thus, by limiting the analysis to frequencies near the origin,
we reduce the probability of encountering zero values.
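A minimal sketch of that radius-limited inverse filter (the function names, cutoff value, and the small threshold guarding against division by zero are illustrative assumptions):

```python
import numpy as np

def inverse_filter(g, H, radius):
    """Compute F_hat = G / H, but only at frequencies within `radius`
    of the origin, to avoid dividing by near-zero values of H."""
    M, N = g.shape
    G = np.fft.fft2(g)
    # integer frequency coordinates with the origin at (0, 0)
    u = np.fft.fftfreq(M, d=1.0 / M)[:, None]
    v = np.fft.fftfreq(N, d=1.0 / N)[None, :]
    keep = (u ** 2 + v ** 2 <= radius ** 2) & (np.abs(H) > 1e-8)
    F_hat = np.where(keep, G / np.where(np.abs(H) > 1e-8, H, 1.0), 0.0)
    return np.real(np.fft.ifft2(F_hat))
```

When H has no zeros and the radius covers every frequency, the filter recovers the input exactly.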
1.3.2) Wiener Filter
This filter is also known as Minimum Mean Square Error Filtering. In this section, we discuss an
approach that incorporates both the degradation function and statistical characteristics of noise into
the restoration process. The method is founded on considering images and noise as random
variables, and the objective is to find an estimate f̂ of the uncorrupted image f such that the mean
square error between them is minimized. This error measure is given by
e² = E{(f − f̂)²}

Where 𝐸(∗) is the expected value of the argument. It is assumed that the noise and the image
are uncorrelated; that one or the other has zero mean; and that the intensity levels in the estimate
are a linear function of the levels in the degraded image. Based on these conditions, the minimum
of the error function is given in the frequency domain by the expression:

F̂(u, v) = [ H*(u, v) S_f(u, v) / ( S_f(u, v) |H(u, v)|² + S_η(u, v) ) ] G(u, v)

        = [ (1 / H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + S_η(u, v) / S_f(u, v) ) ] G(u, v)

𝐻(𝑢, 𝑣) = degradation function


𝐻 ∗ (𝑢, 𝑣) = complex conjugate of 𝐻(𝑢, 𝑣)
𝑆𝜂 (𝑢, 𝑣) = power spectrum of the noise

𝑆𝑓 (𝑢, 𝑣) = power spectrum of the undegraded image

A number of useful measures are based on the power spectra of noise and of the undegraded
image. One of the most important is the signal to noise ratio, approximately using frequency
domain quantities such as
SNR = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} |F(u, v)|² / Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} |N(u, v)|²
This ratio gives a measure of the level of information bearing signal power to the level of
noise power.
The mean square error given in statistical form can also be approximated in terms of a
summation involving the original and restored images:

MSE = (1 / MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f(x, y) − f̂(x, y)]²

If one considers the restored image to be "signal" and the difference between this image and
the original to be noise, we can define a signal-to-noise ratio in the spatial domain as

SNR = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f̂(x, y)² / Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f(x, y) − f̂(x, y)]²
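In practice the ratio S_η/S_f in the Wiener expression is often replaced by a constant K. A minimal NumPy sketch of that simplification follows; the constant K and the test signal are assumptions for illustration:

```python
import numpy as np

def wiener_filter(g, H, K):
    """Wiener restoration: F_hat = [H* / (|H|^2 + K)] G, where the constant
    K approximates the noise-to-signal power ratio S_eta / S_f."""
    G = np.fft.fft2(g)
    W = np.conj(H) / (np.abs(H) ** 2 + K)   # equals (1/H) * |H|^2 / (|H|^2 + K)
    return np.real(np.fft.ifft2(W * G))
```

With K = 0 and a nonzero H this reduces to the inverse filter; larger K suppresses frequencies where H is weak.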
2) Noise Models:

• Noise is the unwanted random variation in pixel intensity values that corrupts an image.
• It occurs during image acquisition (camera sensor, scanner, etc.) or transmission
(channel interference, compression errors, etc.).
• Noise reduces image quality and makes processing tasks (like edge detection,
segmentation, recognition) difficult.

➢ Noise Model (Mathematical Representation)


𝒈(𝒙, 𝒚) = 𝒇(𝒙, 𝒚) + 𝜼(𝒙, 𝒚)
where:
✓ f(x,y) is the original image,
✓ g(x,y) is the noisy image
✓ η(x,y) is the noise function.

➢ Noise arises:
• During image acquisition
• Environmental conditions
• Quality of the sensing elements
• For example, two factors for a CCD sensor: light level and sensor temperature
• During image transmission

▪ Applications of Noise Models

• To simulate noisy environments for testing filters.


• To design restoration and denoising algorithms (mean filter, median filter, Wiener filter,
etc.).
• To understand real-life image acquisition problems (MRI, satellite imaging,
astrophotography).
2.1) Spatial and Frequency Properties of Noise

Spatial Properties of Noise

Noise has some general characteristics that help us model and filter it.

1. Additive vs. Multiplicative

Additive Noise:

𝒈(𝒙, 𝒚) = 𝒇(𝒙, 𝒚) + 𝜼(𝒙, 𝒚)

Common for Gaussian, Salt-and-Pepper.

Multiplicative noise:

𝒈(𝒙, 𝒚) = 𝒇(𝒙, 𝒚) ∗ 𝜼(𝒙, 𝒚)

Common in Speckle noise (radar, ultrasound).

2. Statistical Properties
   o Mean (μ): average value of noise.

     μ = E[η(x, y)]

   o Variance (σ²): strength of noise.

     σ² = E[(η(x, y) − μ)²]

o PDF (Probability Density Function): describes how noise values are distributed.

3. Stationary vs. Non-stationary


o Stationary noise: its statistics (mean, variance) don’t change over the image.
o Non-stationary noise: statistics vary across regions (e.g., shadows, illumination
problems).

4. Spatial Properties
o Noise can be independent (uncorrelated between pixels) or correlated (structured
pattern).
o Salt-and-Pepper is independent, but periodic noise is correlated.
Frequency Properties of Noise:

Noise is often analyzed in the frequency domain because filters (like notch, Wiener, Gaussian
low-pass) work there.

1. White Noise

• Noise that has equal power at all frequencies.


• Its Power Spectral Density (PSD) is flat:

S_nn(f) = N₀ / 2,   −∞ < f < ∞

where S_nn is the PSD of the noise and N₀/2 is a constant value.

Example: Gaussian noise in spatial domain looks like random dots, in frequency domain it is
uniformly spread.

2. Colored Noise

• If noise has different power at different frequencies, it is colored noise.


o Pink noise: more power in low frequencies.
o Blue noise: more power in high frequencies.

3. Periodic Noise

• Appears as repetitive patterns (e.g., stripes).


• In the frequency domain, it shows up as distinct impulses (spikes) at certain frequencies.
• Formula (example sinusoidal interference):

n(x, y) = A sin(2π(u₀x + v₀y) + φ)

A = amplitude
(u₀, v₀) = spatial frequencies of the noise
φ = phase

• Removal: Notch filters are used to cut those spikes.


2.2) Important Noise Probability Density Functions

2.2.1) Gaussian noise


➢ Detailed Intuition

• Imagine your camera sensor trying to measure brightness.


• Thermal vibrations of electrons, imperfect electronics → cause random additive disturbances.
• Most of these disturbances are small, but occasionally bigger ones occur.

p(z) = (1 / (√(2π) σ)) e^{−(z−μ)² / (2σ²)}
Example: An image pixel has value 150. Gaussian noise with mean μ=0 and variance σ²=25 is added. If
noise sample = -3: z = 150 - 3 = 147.
New pixel value = 147
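The additive mechanism in this example can be checked numerically. A quick sketch with assumed parameters (σ = 5, so σ² = 25 as in the example; the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
pixel = 150.0
sigma = 5.0                                # variance sigma^2 = 25
noise = rng.normal(0.0, sigma, size=100_000)
noisy = pixel + noise                      # z = f + eta for each sample
mean_est = noisy.mean()                    # should stay near 150 (mu = 0 noise)
var_est = noisy.var()                      # should stay near 25
```

The sample mean stays at the original pixel value because the noise has zero mean, while the sample variance matches σ².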

Shape

• Smooth symmetric bell curve.


• Noise values cluster near mean, extreme values rare.

Imaging Context

• Dominant in low-light photography, medical imaging, astronomy.


• Filtering often assumes Gaussian model (e.g., Wiener filter, Gaussian blur).

2.2.2) Rayleigh noise


Detailed Intuition

• Suppose a signal (like ultrasound or radar wave) reflects off many tiny scatterers.
• Resultant magnitude is always positive and follows Rayleigh distribution.

Mathematics:

p(z) = (2 / b)(z − a) e^{−(z−a)² / b}   for z ≥ a
p(z) = 0                                for z < a

Example (using the equivalent σ-parameterization p(z) = (z/σ²) e^{−z²/(2σ²)}):
σ = 4, z = 5 → p(5) = (5/16) · e^{−25/32} ≈ 0.143.
Shape

• Starts at zero, rises to a peak, then decays exponentially.


• Skewed — not symmetric like Gaussian.

Imaging Context

• Seen in radar images, ultrasound imaging, wireless communications (signal fading).

2.2.3) Erlang (Gamma) noise


Detailed Intuition

• Think of it as the waiting time until b independent events happen.


• Example: A detector counts photons — noise accumulates as multiple arrivals.

Mathematics:

p(z) = (aᵇ z^{b−1} / (b − 1)!) e^{−az}   for z ≥ 0
p(z) = 0                                 for z < 0

Example: a = 2, b = 2, z = 1 → p(1) = 4e^{−2} ≈ 0.541.

Shape

• For b = 1, it reduces to the exponential distribution.
• For large b, the curve looks Gaussian (Central Limit Theorem).

Imaging Context

• Multipath errors in wireless & transmission.


• Rare in direct image noise, but used in communication channel modeling.
2.2.4) Exponential noise
Detailed Intuition

• Describes processes where small values occur often, large ones rarely.
• Photon counting: most pixels get a small count, but occasionally a high one.

Formula:

p(z) = a e^{−az}   for z ≥ 0
p(z) = 0           for z < 0

Example: a = 0.2, z = 5 → p(5) = 0.2 · e^{−1} ≈ 0.0736.

Shape

• Max at 0, decays exponentially.


• Only positive values allowed.

Imaging Context

• Low-light photon imaging, radioactive decay, particle detectors.

2.2.5) Uniform noise


Detailed Intuition

• Every value within a range is equally likely.


• Example: during quantization, rounding errors distribute uniformly.

Formula:

p(z) = 1 / (b − a)   for a ≤ z ≤ b
p(z) = 0             otherwise

Example: a = 0, b = 10 → p(z) = 1/10 = 0.1

Shape: Flat horizontal line.

Imaging Context

• Digitization errors (scanners, cameras, A/D converters).


• Unrealistic in nature, but simple to model.
2.2.6) Salt and Pepper Noise
Detailed Intuition

• Random pixels flip to black (0) or white (255).


• Happens due to transmission errors, faulty sensor bits, or dead pixels.

Formula:

p(z) = Pa   for z = a
p(z) = Pb   for z = b
p(z) = 0    otherwise

• If b > a, gray level b will appear as a light dot;
• If either Pa or Pb is zero, the impulse noise is called unipolar.
• If neither probability is zero (bipolar), and especially if they are approximately equal,
  it is called salt-and-pepper noise.

Example: Pixel value = 120, noise probability = 0.1 → 5% chance it becomes 0, 5% chance it
becomes 255, 90% chance it stays 120.
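A short sketch of simulating bipolar (salt-and-pepper) impulses; the probabilities Pa = Pb = 0.05 and the function name are assumptions matching the example above:

```python
import numpy as np

def add_salt_pepper(img, p_salt=0.05, p_pepper=0.05, seed=0):
    """Flip a random fraction of pixels to 0 (pepper) or 255 (salt)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    r = rng.random(img.shape)          # one uniform draw per pixel
    out[r < p_pepper] = 0              # lowest p_pepper fraction -> pepper
    out[r > 1 - p_salt] = 255          # highest p_salt fraction -> salt
    return out
```

On a flat 120-valued image, roughly 5% of pixels become 0, 5% become 255, and the rest stay 120.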

Shape

• Two sharp spikes at extreme values (0 & 255).

Imaging Context

• Very destructive — easy to spot visually.


• Best removed with median filters or adaptive filters.

2.2.7) Periodic Noise

Detailed Intuition

• Comes from repeating interference:


o Electrical hum (50/60 Hz),
o Mechanical vibration,
o Scanner jitter.
• Appears as repetitive patterns (stripes, waves).
Definition: Caused by electrical or mechanical interference. Appears as regular stripes or patterns.

η(x, y) = A sin(2π(u₀x + v₀y) + φ)

Example: A = 20, u₀ = 0.05, v₀ = 0.1, φ = 0, (x, y) = (10, 20): η(10, 20) = 20 sin(2π · 2.5) = 0.

Shape

• In spatial domain: visible stripes.


• In frequency domain: distinct spikes at fixed frequencies.

Imaging Context

• Common in satellite images, scanned photos.


• Removed using notch filters in Fourier domain.

2.2.8) Estimating Noise Parameters:

Detailed Intuition

• The parameters of periodic noise are typically found by inspecting the Fourier spectrum;
  the parameters of the other noise PDFs can be estimated from the histogram of a small patch S
  of the image with reasonably constant background intensity.
• If pₛ(zᵢ), i = 0, 1, …, L − 1, denotes the normalized histogram of the intensity levels zᵢ in S, then

Mean: z̄ = Σ_{i=0}^{L−1} zᵢ pₛ(zᵢ)

Variance: σ² = Σ_{i=0}^{L−1} (zᵢ − z̄)² pₛ(zᵢ)

• The shape of the histogram identifies the closest PDF match, and z̄ and σ² supply its parameters.
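These histogram-based estimates can be computed in a few lines. A minimal sketch; the synthetic Gaussian patch stands in for a flat region of a real noisy image:

```python
import numpy as np

def noise_stats(patch, levels=256):
    """Estimate mean and variance from the normalized histogram p_s(z_i)
    of an (ideally constant-background) integer-valued image patch."""
    hist = np.bincount(patch.ravel(), minlength=levels)
    p = hist / hist.sum()              # normalized histogram p_s(z_i)
    z = np.arange(levels)
    mean = (z * p).sum()               # z_bar = sum z_i p_s(z_i)
    var = ((z - mean) ** 2 * p).sum()  # sigma^2 = sum (z_i - z_bar)^2 p_s(z_i)
    return mean, var

rng = np.random.default_rng(0)
patch = np.clip(np.round(rng.normal(100, 10, size=(400, 500))), 0, 255).astype(int)
z_bar, sigma2 = noise_stats(patch)
```

For this synthetic patch the estimates land near the true μ = 100 and σ² = 100.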

2.3) Restoration in the Presence of Noise Only - Spatial Filtering
Definition:
Image restoration is the process of recovering an original image from a degraded or noisy
version. Degradation can be due to noise, blur, or other distortions.

• Goal: Remove noise while preserving edges and important details.


• Types of degradation in images:
1. Noise: Random variation in pixel intensity.
2. Blur: Motion blur, defocus, atmospheric blur.
3. Geometric distortions: Scaling, rotation, or skewing.

When the degradation is mainly noise, special filters are used to restore images. These filters
fall into three main categories:

1. Mean Filters (Linear filters)


2. Order-Statistic Filters (Non-linear filters)
3. Adaptive Filters

2.3.1. Mean Filters (Linear Filters)

Concept:
Mean filters smooth the image by replacing each pixel with some kind of average of its
neighbors. They are linear because they satisfy the principle of superposition.

General Formula:

g(x, y) = (1 / mn) Σ_{i=−a}^{a} Σ_{j=−b}^{b} f(x + i, y + j)

• g(x, y): the output pixel value at position (x, y).
• f(x + i, y + j): the input pixel value at position (x + i, y + j) in the neighborhood.
2.3.1.1) Arithmetic Mean Filter
How it works:

• Replaces a pixel with the average of all pixels in its neighborhood.


• Smooths Gaussian noise efficiently.

Formula:

f̂(x, y) = (1 / mn) Σ_{(s,t)∈S_xy} g(s, t)

Example: Window values [100, 102, 98, 101, 99, 100, 97, 103, 102] → Mean ≈ 100

Uses / Use Cases:

• Removing Gaussian noise in medical imaging (X-rays, MRI).


• Pre-processing images before segmentation or edge detection.
• Reducing low-level sensor noise in photos.

Advantages:

1. Simple to implement.
2. Efficient and fast for small windows.
3. Works well for Gaussian noise.

Disadvantages:

1. Blurs edges and fine details.


2. Not effective for salt-and-pepper noise.
3. Can cause over-smoothing if window size is large.
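A sliding-window sketch of the arithmetic mean filter; edge padding and the default 3×3 window are implementation choices, not specified in the text:

```python
import numpy as np

def arithmetic_mean_filter(img, m=3, n=3):
    """Replace each pixel with the average of its m x n neighborhood
    (image borders are replicated before filtering)."""
    pad_y, pad_x = m // 2, n // 2
    padded = np.pad(img.astype(float), ((pad_y, pad_y), (pad_x, pad_x)), mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(m):
        for j in range(n):
            out += padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out / (m * n)
```

On the 3×3 window from the example above (values 100, 102, 98, 101, 99, 100, 97, 103, 102), the center output is 902/9 ≈ 100.2.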

2.3.1.2) Geometric Mean Filter


How it works:

• Multiplies all pixels in the neighborhood and takes the nth root.

Formula:

f̂(x, y) = [ Π_{(s,t)∈S_xy} g(s, t) ]^{1/mn}
Uses:

• Better for preserving image detail than arithmetic mean.


• Useful in satellite imaging where small texture details are important.

Advantages:

1. Preserves edge sharpness better than arithmetic mean.


2. Reduces Gaussian noise moderately.
3. Handles multiplicative noise better.

Disadvantages:

1. Cannot handle zero-valued pixels (black pixels result in zero output).


2. Slower than arithmetic mean.
3. Less effective for impulsive (salt-and-pepper) noise.

2.3.1.3) Harmonic Mean Filter


Formula:

f̂(x, y) = mn / Σ_{(s,t)∈S_xy} (1 / g(s, t))

Example: Values [2,4,8] → Result ≈ 3.43

Uses:

• Removes salt noise (bright spots).


• Common in astronomy and remote sensing images with bright noise points.

Advantages:

1. Very effective against bright (salt) noise.


2. Simple computation.
3. Maintains darker regions better than arithmetic mean.

Disadvantages:

1. Poor for pepper noise (dark spots).


2. May reduce contrast.
3. Not suitable for mixed noise.
2.3.1.4) Contra-Harmonic Mean Filter
Formula:

f̂(x, y) = Σ_{(s,t)∈S_xy} g(s, t)^{Q+1} / Σ_{(s,t)∈S_xy} g(s, t)^{Q}

where Q is called the order of the filter.

• Q > 0: Removes pepper noise (dark spots).


• Q < 0: Removes salt noise (bright spots).

Uses:

• Salt-and-pepper noise removal in document scanning, X-ray imaging.

Advantages:

1. Flexible; can target specific noise type.


2. Can reduce high-density noise with proper Q.
3. Better than simple mean filters for impulsive noise.

Disadvantages:

1. Selecting the correct Q is critical.


2. Not effective for mixed noise with high density.
3. Can still blur edges.
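The sign of Q selects which noise type is removed, as the bullets above state. A sketch of the filter; the ε guard against 0^Q for negative Q and the 3×3 window are assumptions:

```python
import numpy as np

def contraharmonic(img, Q, m=3):
    """Contra-harmonic mean filter of order Q over an m x m window."""
    pad = m // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    num = np.zeros(img.shape)
    den = np.zeros(img.shape)
    eps = 1e-12                         # avoids 0 ** (negative Q)
    for i in range(m):
        for j in range(m):
            w = padded[i:i + img.shape[0], j:j + img.shape[1]]
            num += np.power(w + eps, Q + 1)
            den += np.power(w + eps, Q)
    return num / den
```

With Q = 1.5 a pepper pixel in a flat 100-valued region is pulled back to 100; with Q = −1.5 a salt pixel is pulled down toward the background.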
2.3.2) Order-Statistic Filters (Non-linear)
Concept:
Order-statistic filters sort pixels in a neighborhood and pick a value based on a rule. Non-linear
filtering is excellent for salt-and-pepper noise.

2.3.2.1) Median Filter


How it works:

• Sorts the neighborhood pixels.


• Replaces the center pixel with the median value.

Formula:

𝒇̂(𝒙, 𝒚) = 𝒎𝒆𝒅𝒊𝒂𝒏(𝒔,𝒕)𝝐𝑺𝒙𝒚 {𝒈(𝒔, 𝒕)}

Example: Values [97,98,99,100,100,101,102,102,103] → Median=100.

Uses / Use Cases:

• Removing salt-and-pepper noise in digital photographs.


• Pre-processing in OCR (optical character recognition).
• Reducing noise in medical imaging.

Advantages:

1. Preserves edges better than mean filters.


2. Very effective for impulsive noise.
3. Simple concept, widely used.

Disadvantages:

1. Computationally expensive for large windows.


2. May slightly alter thin lines.
3. Not ideal for Gaussian noise.
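A compact sketch of the median filter (edge padding and the 3×3 default are implementation choices):

```python
import numpy as np

def median_filter(img, m=3):
    """Replace each pixel with the median of its m x m neighborhood."""
    pad = m // 2
    padded = np.pad(img, pad, mode='edge')
    # stack every shifted view, then take the median across the window axis
    views = [padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(m) for j in range(m)]
    return np.median(np.stack(views), axis=0)
```

On the window from the example (97, 98, 99, 100, 100, 101, 102, 102, 103) the output is 100, and a single 255 impulse on a flat 100-valued image is removed entirely.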
2.3.2.2) Max and Min Filter
How it works:

• Max filter: Replace pixel with maximum → removes dark (pepper) noise.
• Min filter: Replace pixel with minimum → removes bright (salt) noise.

Formula:

𝒇̂(𝒙, 𝒚) = 𝒎𝒂𝒙(𝒔,𝒕)𝝐𝑺𝒙𝒚 {𝒈(𝒔, 𝒕)}

𝒇̂(𝒙, 𝒚) = 𝒎𝒊𝒏(𝒔,𝒕)𝝐𝑺𝒙𝒚 {𝒈(𝒔, 𝒕)}

Uses:

• Max filter: Astronomy images with dark specks.


• Min filter: Images with bright salt noise.

Advantages:

1. Effective for single-type noise.


2. Simple to implement.
3. Works well for high-contrast noise.

Disadvantages:

1. Only works for one type of noise at a time.


2. Can distort edges.
3. Poor for mixed noise.
2.3.2.3) Midpoint Filter
Formula:

f̂(x, y) = (1/2) [ max_{(s,t)∈S_xy}{g(s, t)} + min_{(s,t)∈S_xy}{g(s, t)} ]

Uses:

• Reduces uniform noise in satellite and aerial images.

Advantages:

1. Smooths moderate uniform noise.


2. Preserves general image structure.
3. Simple computation.

Disadvantages:

1. Blurs edges.
2. Less effective for impulsive noise.
3. Not suitable for images with varying intensity patterns.

2.3.2.4) Alpha-Trimmed Mean Filter


How it works:

• Sort the pixels in the window, delete the d/2 lowest and d/2 highest intensity values, and
  average the remaining mn − d pixels.

Formula:

f̂(x, y) = (1 / (mn − d)) Σ_{(s,t)∈S_xy} g_r(s, t)

where g_r(s, t) denotes the remaining mn − d pixels.

Uses:

• Mixed noise (Gaussian + salt-and-pepper) in remote sensing or MRI images.


Advantages:

1. Flexible for mixed noise.


2. Preserves edges better than simple mean.
3. Reduces extreme values effectively.

Disadvantages:

1. Needs correct parameter d.


2. Computation more complex than mean.
3. Less effective for high-density impulsive noise.

2.3.3. Adaptive Filters


Concept:
Adaptive filters change behavior based on local statistics like local mean and variance.
Excellent for non-uniform noise.

2.3.3.1) Adaptive Local Noise Reduction Filter


Formula:

f̂(x, y) = g(x, y) − (σ²_η / σ²_L) [g(x, y) − m_L]

where σ²_η is the noise variance, σ²_L is the local variance over the window S_xy, and m_L is the
local mean. If σ²_η > σ²_L, the ratio is clamped to 1, so the filter then returns the local mean.

Example: g = 120, m_L = 100, σ²_L = 25, σ²_η = 10 → f̂ = 120 − (10/25)(120 − 100) = 112

Uses:

• Preserves edges in satellite images and medical images.


• Removes Gaussian noise while maintaining detail.

Advantages:

1. Preserves edges better than linear filters.


2. Works on non-uniform noise.
3. Flexible and adaptive to local image properties.

Disadvantages:

1. Requires knowledge of noise variance.


2. More computationally expensive.
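A vectorized sketch of the adaptive local noise reduction filter; the 3×3 window and the clamping of the variance ratio at 1 are the usual conventions, assumed here:

```python
import numpy as np

def adaptive_local(img, noise_var, m=3):
    """f_hat = g - (sigma_eta^2 / sigma_L^2)(g - m_L), with the ratio
    clamped to 1 wherever the local variance falls below the noise variance."""
    pad = m // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    views = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(m) for j in range(m)])
    m_L = views.mean(axis=0)                       # local mean
    var_L = views.var(axis=0)                      # local variance
    ratio = np.minimum(noise_var / np.maximum(var_L, 1e-12), 1.0)
    return img - ratio * (img - m_L)
```

Two limiting cases follow directly from the formula: with zero noise variance the image is returned unchanged, and on a flat region the filter returns the local mean.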
2.3.3.2) Adaptive Median Filter
How it works:

• Expands window size until a median not considered noise is found.


• Removes high-density salt-and-pepper noise.

Formula:

𝒛𝒎𝒊𝒏 = minimum intensity value in 𝑺𝒙𝒚

𝒛𝒎𝒂𝒙 = maximum intensity value in 𝑺𝒙𝒚

𝒛𝒎𝒆𝒅 = median of intensity values in 𝑺𝒙𝒚

𝒛𝒙𝒚 = intensity value at coordinates (x,y)

𝑺𝒎𝒂𝒙 = maximum allowed size of 𝑺𝒙𝒚

Stage A: A1 = z_med − z_min and A2 = z_med − z_max.
If A1 > 0 AND A2 < 0, go to Stage B; else increase the window size.
If the window size ≤ S_max, repeat Stage A; else output z_med.

Stage B: B1 = z_xy − z_min and B2 = z_xy − z_max.
If B1 > 0 AND B2 < 0, output z_xy; else output z_med.

Uses:

• Images heavily corrupted with salt-and-pepper noise (security cameras, scanned


documents).

Advantages:

1. Very effective against high-density impulsive noise.


2. Preserves edges.
3. Automatically adapts to noise density.

Disadvantages:

1. Slower due to varying window size.


2. Complex implementation.
3. Not efficient for Gaussian noise.
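The two-stage procedure above can be sketched pixel by pixel; the maximum window size S_max = 7 and the clipping of windows at the image border are assumptions:

```python
import numpy as np

def adaptive_median(img, s_max=7):
    """Adaptive median filter: grow the window until the median is not an
    impulse (Stage A), then keep the pixel unless it is itself an impulse (Stage B)."""
    out = img.astype(float).copy()
    M, N = img.shape
    for y in range(M):
        for x in range(N):
            s = 3
            while True:
                half = s // 2
                w = img[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
                zmin, zmax, zmed = w.min(), w.max(), np.median(w)
                if zmin < zmed < zmax:                 # Stage A: median is not an impulse
                    if zmin < img[y, x] < zmax:        # Stage B: pixel is not an impulse
                        out[y, x] = img[y, x]
                    else:
                        out[y, x] = zmed
                    break
                s += 2                                 # grow the window
                if s > s_max:
                    out[y, x] = zmed
                    break
    return out
```

On a smooth gradient image a single salt impulse is replaced by the window median while clean interior pixels pass through unchanged.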
2.4) Periodic Noise Reduction Using Frequency Domain Filtering
Definition:
Periodic noise is a type of noise that repeats at regular intervals in an image (like stripes, hums,
or interference patterns). It often arises from electronic interference, power line hums, or
repetitive mechanical vibrations.

Method:

• Convert image to the frequency domain using Fourier Transform (FT).


• Identify frequencies corresponding to noise.
• Suppress these frequencies using Notch Filters.
• Transform back to spatial domain.

2.4.1. Notch Filtering


Definition:
A Notch Filter is a frequency domain filter that removes (rejects) a narrow band of frequencies
while leaving other frequencies mostly unaffected.

• Types:
o Notch Reject Filter (NRF): Removes unwanted frequencies.
o Notch Pass Filter (NPF): Keeps only desired frequencies, removes all others (less
common for noise removal).

Mathematical Form (Butterworth Notch Reject Filter):

H(u, v) = Π_{k=1}^{K} [ 1 / (1 + (D₀ / D_k(u, v))^{2n}) ]

D₀ = cutoff frequency (notch radius)
n = filter order
K = number of noise frequencies
D_k(u, v) = distance from (u, v) to the k-th notch center

Working:

1. Transform image to frequency domain using FFT.


2. Locate noise spikes in the spectrum.
3. Apply notch reject filter at those spikes.
4. Inverse FFT to return to spatial domain.
Example: Removes periodic noise at frequencies (±30, ±40)

Uses / Use Cases:

• Removing horizontal or vertical stripe noise in scanned documents.


• Reducing power line hum in medical images (EEG, MRI).
• Cleaning periodic interference in satellite images.

Advantages:

1. Can remove specific periodic noise without affecting other frequencies.


2. Preserves overall image details better than spatial filters.
3. Flexible: can reject multiple frequencies using multiple notches.

Disadvantages:

1. Requires accurate location of noise frequency.


2. Narrow notch may leave residual noise; wide notch may remove image details.
3. More computationally expensive than simple spatial filtering.
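The transfer function of a Butterworth notch reject filter can be built directly on the frequency grid. A sketch, assuming a centered spectrum and notches given as symmetric (u_k, v_k) pairs; D₀ = 5 and order n = 2 are arbitrary choices:

```python
import numpy as np

def butterworth_notch_reject(shape, centers, d0=5.0, n=2):
    """Butterworth notch-reject H(u, v). `centers` lists (u_k, v_k) notch
    offsets from the spectrum center; each is applied with its symmetric pair."""
    M, N = shape
    u = np.arange(M)[:, None] - M / 2
    v = np.arange(N)[None, :] - N / 2
    H = np.ones(shape)
    for uk, vk in centers:
        for cu, cv in ((uk, vk), (-uk, -vk)):     # conjugate-symmetric pair
            Dk = np.sqrt((u - cu) ** 2 + (v - cv) ** 2)
            H *= 1.0 / (1.0 + (d0 / np.maximum(Dk, 1e-9)) ** (2 * n))
    return H

H = butterworth_notch_reject((64, 64), centers=[(10, 0)])
```

H falls to nearly zero at both notch centers and stays close to 1 far away from them, which is exactly the "remove a narrow band, pass the rest" behavior described above.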

2.4.2. Optimum Notch Filtering


Definition:
Optimum Notch Filtering is an enhanced form of notch filtering that minimizes image
distortion while removing periodic noise.

• Unlike ideal notch filters (which have sharp cutoffs), optimum notch filters use smooth
transition functions to reduce ringing artifacts.

Mathematical Form (optimum frequency-domain weighting):

H(u, v) = S_xx(u, v) / (S_xx(u, v) + S_nn(u, v))

S_xx(u, v) = power spectrum estimate of the original image
S_nn(u, v) = power spectrum estimate of the noise

Working:

• Provides a gradual suppression of noise frequencies.


• Reduces ringing artifacts (common in ideal notch filters).
• Maintains more natural appearance of the restored image.
Uses / Use Cases:

• High-quality restoration of scanned photographs.


• Removing periodic noise in astronomical and satellite images.
• Medical imaging where artifact suppression is critical.

Advantages:

1. Minimizes ringing artifacts in restored image.


2. Preserves fine details better than ideal notch filters.
3. Can handle multiple noise frequencies smoothly.

Disadvantages:

1. Slightly more computationally intensive.


2. Requires careful tuning of D0 and n.
3. If parameters are wrong, residual noise may remain.
3. Image Reconstruction From Projections
3.1) Introduction

Image reconstruction from projections refers to the process of estimating an unknown 2D image
or 3D object from its projection data. A projection is essentially a line integral of the object’s
internal density along a certain direction. By taking projections from many angles and combining
them, it becomes possible to reconstruct the original object.

This is the foundation of tomography (from Greek: tomos = slice, graphein = to write). The most
common application is X-ray Computed Tomography (CT), where an X-ray source rotates
around a patient, and detectors record projections at multiple angles. Using reconstruction
algorithms, doctors can “slice” the human body virtually and see the internal organs non-
invasively.

Historically, Johann Radon (1917) introduced the Radon transform, proving mathematically that
a 2D function can be reconstructed from its line integrals. Later, this theory was applied to imaging
technology, leading to the invention of CT scanners in the 1970s, for which Sir Godfrey Hounsfield
won the Nobel Prize.

3.2) Principles of X-Ray Computed Tomography

3.2.1 How CT Works:

1. X-ray Source: Emits X-rays that pass through the body.


2. Interaction with Tissue: X-rays are attenuated depending on tissue density. Dense tissues
like bone attenuate more than soft tissues.
3. Detectors: Measure the intensity of X-rays after attenuation.
4. Projection Formation: Each detector reading corresponds to the line integral of the tissue
attenuation coefficients along that path.
5. Rotation: By rotating the source and detectors around the body, projections are collected
from multiple angles.
6. Reconstruction: Using mathematical algorithms (inverse Radon transform, Fourier slice
theorem, back projection), the internal image is reconstructed.

Mathematical Form:

𝑷𝜽 (𝒕) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝒇(𝒙, 𝒚) 𝜹(𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽 − 𝒕) 𝒅𝒙 𝒅𝒚

𝑷𝜽 (𝒕) = projection at angle 𝜽
𝒕 = distance from origin to the line of integration
𝜹 = Dirac delta function that enforces the line integration

Advantages:

1. Cross-Sectional Imaging
o Produces detailed slice images of internal structures, eliminating the overlap of
tissues seen in plain X-rays.
2. High Spatial Resolution
o Can distinguish fine anatomical details, especially useful in detecting small lesions,
fractures, or vascular changes.
3. 3D Reconstruction
o Multiple slices can be combined to create 3D models of organs, bones, or tumors,
aiding surgical planning and diagnostics.
4. Non-Invasive & Quick
o Provides internal information without surgery.
o Scan times are fast, important in trauma and emergency settings.
5. Versatility
o Used in many fields: medicine (diagnosis, treatment planning), industry
(nondestructive testing), geophysics, archaeology.
6. Quantitative Information
o Provides Hounsfield Units (CT numbers) that reflect tissue density, useful for
distinguishing soft tissues, blood, bone etc.
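Point 6 above refers to the Hounsfield scale, which is a linear rescaling of the measured linear attenuation coefficient μ against that of water, so that water reads 0 HU and air reads −1000 HU. A minimal sketch (the μ_water value is an assumed typical figure, not from the text):

```python
def hounsfield(mu, mu_water=0.19):
    """CT number in Hounsfield Units for a linear attenuation coefficient
    mu (cm^-1). Water maps to 0 HU; air (mu ~ 0) maps to -1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water

print(hounsfield(0.19))  # water -> 0.0
print(hounsfield(0.0))   # air   -> -1000.0
```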

Disadvantages:

1. Radiation Dose
o Higher than plain X-rays; repeated scans increase risk of radiation-related effects.
2. Cost and Accessibility
o CT scanners are expensive to purchase and maintain, not always available in low-
resource settings.
3. Artifact Susceptibility
o Motion, metal implants, and beam-hardening can cause streaks or distortions in the
image.
4. Contrast Agent Risks
o Some scans require contrast media, which can cause allergic reactions or kidney
issues in vulnerable patients.
5. Limited Soft-Tissue Contrast
o While CT is better than X-ray, MRI generally provides superior soft tissue contrast
(e.g., for brain or spinal cord imaging).
6. Not Real-Time
o Unlike ultrasound or fluoroscopy, CT does not provide live imaging — it captures
static snapshots.
3.3) Projection and the Radon Transform
Definition:
A projection is the shadow or summation of image values along a straight line.
If you imagine shining a light through an object, the detector measures how much light passes →
that’s a projection.
Transforms:
a) Radon Transform:
If 𝒇(𝒙, 𝒚) is a 2-D image, then its projection at angle 𝜽 is:

𝑷𝜽 (𝒔) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝒇(𝒙, 𝒚) 𝜹(𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽 − 𝒔) 𝒅𝒙 𝒅𝒚

𝒔 = position along the projection (detector axis)
𝜽 = projection angle

We illustrate how to use the Radon transform to obtain an analytical expression for the projection of a circular object of constant amplitude 𝑨 and radius 𝒓. For 𝜽 = 0°, the line of integration is vertical and 𝝆 plays the role of 𝒙:

𝒈(𝝆, 𝜽) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝒇(𝒙, 𝒚) 𝜹(𝒙 − 𝝆) 𝒅𝒙 𝒅𝒚 = ∫_{−∞}^{∞} 𝒇(𝝆, 𝒚) 𝒅𝒚

By symmetry, the projection is the same for every 𝜽. 𝒈(𝝆, 𝜽) = 0 when |𝝆| > 𝒓; for |𝝆| ≤ 𝒓 the integral reduces to:

𝒈(𝝆, 𝜽) = ∫_{−√(𝒓²−𝝆²)}^{√(𝒓²−𝝆²)} 𝑨 𝒅𝒚 = 2𝑨√(𝒓² − 𝝆²)
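The chord-length result for the disc can be sanity-checked numerically by rasterizing the disc and summing one column; the grid size and the sample position ρ = 0.3 are arbitrary choices for this sketch.

```python
import numpy as np

A, r = 2.0, 1.0                     # disc amplitude and radius
n = 2001
xs = np.linspace(-1.5, 1.5, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
disc = np.where(X ** 2 + Y ** 2 <= r ** 2, A, 0.0)

rho = 0.3
col = np.argmin(np.abs(xs - rho))   # column closest to x = rho
numeric = disc[:, col].sum() * dx   # discrete line integral along y
analytic = 2 * A * np.sqrt(r ** 2 - rho ** 2)

print(abs(numeric - analytic) / analytic < 0.01)  # within 1% on this grid
```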

The key objective of CT is to obtain a 3-D representation of a volume from its projections. To obtain a formal expression for a back-projected image from the Radon transform, let us begin with a single point 𝒈(𝝆ⱼ, 𝜽ₖ) of the complete projection 𝒈(𝝆, 𝜽ₖ) for a fixed rotation angle 𝜽ₖ. Back-projecting this projection over the image plane gives:

𝒇𝜽ₖ(𝒙, 𝒚) = 𝒈(𝝆, 𝜽ₖ) = 𝒈(𝒙 𝒄𝒐𝒔𝜽ₖ + 𝒚 𝒔𝒊𝒏𝜽ₖ, 𝜽ₖ)


Example:
Let the 2×2 image (pixel size = 1) be

𝑓 = [1 2; 3 4]

with coordinates at the pixel centers (𝑥, 𝑦) ∈ {(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)}.
For a parallel beam at 𝜃 = 0°, the projection 𝑃₀° is evaluated at 𝑡 ∈ {0.5, 1.5}:
𝑡 = 0.5: 1 + 3 = 4
𝑡 = 1.5: 2 + 4 = 6
Thus, 𝑃₀° = [4, 6].
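The sums in the example can be reproduced with simple axis sums, since at θ = 0° the rays run along y (down the columns) and at θ = 90° along x (across the rows); a minimal NumPy sketch:

```python
import numpy as np

# The 2x2 image from the example; rows index y, columns index x.
f = np.array([[1, 2],
              [3, 4]])

P0 = f.sum(axis=0)    # theta = 0 deg: sum along y -> [4, 6]
P90 = f.sum(axis=1)   # theta = 90 deg: sum along x -> [3, 7]

print(P0.tolist(), P90.tolist())  # [4, 6] [3, 7]
```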

Advantages:

1. Mathematical Foundation of CT
o Provides a precise framework for describing how projections (X-ray line integrals)
relate to the object.
o Essential for deriving reconstruction formulas like FBP.
2. Direct Link to Fourier Theory
o Through the Fourier Slice Theorem, the Radon transform connects projections to
the object’s Fourier transform.
o Enables analytical reconstruction approaches.
3. Generality
o Works for any dimension: 2D Radon for CT slices, 3D Radon for volumetric
imaging.
o Can be extended to fan-beam and cone-beam geometries by change of variables.
4. Well-Studied and Stable in Ideal Case
o With complete, noise-free projections over 180° or 360°, exact reconstruction is possible.
o Strong theoretical guarantees (invertibility under certain conditions).
5. Useful in Many Fields
o Beyond CT: used in tomography, image processing, geophysics, nondestructive
testing, and even machine learning (for feature extraction in certain transforms).
Disadvantages:

1. Requires Complete Data


o Exact inversion requires a full set of projections over all angles.
o Limited-angle or sparse projections lead to ill-posed problems (missing information
→ artifacts).
2. Noise Sensitivity
o Real projections contain noise. Since inversion often requires high-frequency
filtering (e.g., ramp filter in FBP), noise is amplified.
3. Not Physically Perfect
o Radon transform assumes ideal line integrals (no scatter, beam hardening, or
detector blur).
o Real CT physics violates these assumptions, causing artifacts if uncorrected.
4. Computational Issues for Large Data
o In practice, handling millions of projections and applying inverse Radon transforms
requires heavy computation (though FBP makes it efficient).
5. Approximation Needed in Discrete Case
o In continuous math, Radon inversion is exact. But with discretized detectors and
finite sampling, errors occur (aliasing, interpolation errors).
6. Ill-posedness in Practical Cases
o Small errors in projection data can cause large errors in reconstruction if not
regularized (especially with incomplete data).
3.4) Back-Projections
3.4.1) The Fourier-Slice Theorem:
This theorem links projections in the spatial domain to the frequency domain.
Statement:
The 1-D Fourier transform of a projection 𝑷𝜽 (𝒕) equals a slice through the center of the 2-D Fourier transform of the image 𝒇(𝒙, 𝒚), taken along the same angle 𝜽.

𝑮(𝝎, 𝜽) = ∫_{−∞}^{∞} 𝒈(𝝆, 𝜽) 𝒆^{−𝒋𝟐𝝅𝝎𝝆} 𝒅𝝆

= ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝒇(𝒙, 𝒚) 𝜹(𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽 − 𝝆) 𝒆^{−𝒋𝟐𝝅𝝎𝝆} 𝒅𝒙 𝒅𝒚 𝒅𝝆

= ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝒇(𝒙, 𝒚) 𝒆^{−𝒋𝟐𝝅𝝎(𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽)} 𝒅𝒙 𝒅𝒚

We recognize this expression as the 2-D Fourier transform of 𝒇(𝒙, 𝒚) evaluated at 𝒖 = 𝝎 𝒄𝒐𝒔𝜽 and 𝒗 = 𝝎 𝒔𝒊𝒏𝜽:

𝑮(𝝎, 𝜽) = 𝑭(𝝎 𝒄𝒐𝒔𝜽, 𝝎 𝒔𝒊𝒏𝜽)
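For θ = 0 the theorem is easy to check numerically: the 1-D DFT of the projection obtained by summing along y must equal the corresponding horizontal slice of the image's 2-D DFT. A small sketch on a random test image:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))                 # arbitrary test image

projection = f.sum(axis=0)             # P_0(t): line integrals along y
slice_1d = np.fft.fft(projection)      # 1-D DFT of the projection
slice_2d = np.fft.fft2(f)[0, :]        # theta = 0 slice of the 2-D DFT

print(np.allclose(slice_1d, slice_2d))  # True
```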
Advantages:

1. Mathematical Foundation of CT
o Provides a direct link between measured projections and the object’s Fourier
transform.
o Basis of analytical reconstruction methods like Filtered Back-projection (FBP).
2. Conceptual Simplicity
o Explains CT imaging in terms of Fourier analysis, making it intuitive for
understanding how projections map into frequency space.
3. Exact Reconstruction (in Ideal Case)
o With infinite, noise-free, and complete projection data, the theorem guarantees
exact image reconstruction.
4. Efficient Use in Algorithms
o Enables FFT-based implementations and frequency-domain filtering.
o Forms the backbone of fast reconstruction methods.

5. General Applicability
o Extends to higher dimensions (3D Radon ↔ 3D Fourier slices) and different
geometries (fan-beam, cone-beam with modifications).
6. Insight into Data Completeness
o By showing how projections fill Fourier space, it helps understand why incomplete
angular coverage or sparse data causes artifacts.

Disadvantages:

1. Requires Complete Projection Data


o Exact reconstruction needs projections from all angles. Missing or limited-angle
data leads to incomplete Fourier coverage → streaks and artifacts.
2. Noise Sensitivity
o Since high-frequency information comes from projections, noise in data gets
amplified, especially when reconstructing edges.
3. Idealized Assumptions
o Assumes perfect line integrals and ignores physics like beam hardening, scatter,
detector blur — which degrade real CT images.
4. Interpolation Errors in Practice
o In frequency-space methods (Fourier reconstruction), the theorem requires
interpolating polar samples (from projections) onto a Cartesian Fourier grid, which
introduces errors.
o This is why FBP is often preferred over direct Fourier reconstruction in practice.
5. High Data Requirement
o To avoid aliasing and ensure full Fourier coverage, many projections are needed →
higher radiation dose in CT.
3.4.1.1) Reconstruction Using Parallel-Beam Filtered Back-Projection
As we saw, obtaining back projections directly yields unacceptably blurred results. Fortunately, there is a straightforward solution to this problem, based simply on filtering the projections before computing the back projections.
𝒇(𝒙, 𝒚) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝑭(𝒖, 𝒗) 𝒆^{𝒋𝟐𝝅(𝒖𝒙+𝒗𝒚)} 𝒅𝒖 𝒅𝒗

If we let 𝒖 = 𝝎 𝒄𝒐𝒔𝜽 and 𝒗 = 𝝎 𝒔𝒊𝒏𝜽, the differentials become 𝒅𝒖 𝒅𝒗 = 𝝎 𝒅𝝎 𝒅𝜽, so

𝒇(𝒙, 𝒚) = ∫_{0}^{𝟐𝝅} ∫_{0}^{∞} 𝑭(𝝎 𝒄𝒐𝒔𝜽, 𝝎 𝒔𝒊𝒏𝜽) 𝒆^{𝒋𝟐𝝅𝝎(𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽)} 𝝎 𝒅𝝎 𝒅𝜽

By splitting this integral into two expressions, one for 𝜽 in the range 0° to 180° and the other for the range 180° to 360°, and using the fact that 𝑮(𝝎, 𝜽 + 180°) = 𝑮(−𝝎, 𝜽), we obtain

𝒇(𝒙, 𝒚) = ∫_{0}^{𝝅} ∫_{−∞}^{∞} |𝝎| 𝑮(𝝎, 𝜽) 𝒆^{𝒋𝟐𝝅𝝎(𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽)} 𝒅𝝎 𝒅𝜽

In terms of integration with respect to 𝝎, the term 𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽 is a constant, which we recognize as 𝝆, so

𝒇(𝒙, 𝒚) = ∫_{0}^{𝝅} [ ∫_{−∞}^{∞} |𝝎| 𝑮(𝝎, 𝜽) 𝒆^{𝒋𝟐𝝅𝝎𝝆} 𝒅𝝎 ]_{𝝆 = 𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽} 𝒅𝜽
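The derivation above translates directly into the filtered back-projection recipe: 1-D filter each projection with |ω|, then smear ("back-project") the filtered values along ρ = x cos θ + y sin θ. The toy NumPy sketch below uses nearest-neighbour sampling, an arbitrary grid size, and a simple square phantom; it is an illustration of the idea, not a clinical implementation.

```python
import numpy as np

def radon_projection(img, theta):
    """Approximate line integrals of img at angle theta (radians) by
    sampling the image on a rotated grid and summing along y."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n] - c
    xr = np.cos(theta) * xs - np.sin(theta) * ys + c
    yr = np.sin(theta) * xs + np.cos(theta) * ys + c
    xi, yi = np.rint(xr).astype(int), np.rint(yr).astype(int)
    ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
    rot = np.zeros((n, n))
    rot[ok] = img[yi[ok], xi[ok]]
    return rot.sum(axis=0)          # one value per detector bin t

def fbp(n, sino, thetas):
    """Filtered back-projection with an FFT-domain |omega| ramp filter."""
    ramp = np.abs(np.fft.fftfreq(n))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n] - c
    recon = np.zeros((n, n))
    for p, th in zip(sino, thetas):
        filt = np.real(np.fft.ifft(np.fft.fft(p) * ramp))
        t = xs * np.cos(th) + ys * np.sin(th) + c   # rho for every pixel
        recon += filt[np.clip(np.rint(t).astype(int), 0, n - 1)]
    return recon * np.pi / len(thetas)

n = 33
phantom = np.zeros((n, n))
phantom[14:19, 14:19] = 1.0                          # bright centred square
thetas = np.linspace(0, np.pi, 60, endpoint=False)
sino = [radon_projection(phantom, th) for th in thetas]
recon = fbp(n, sino, thetas)
print(recon[16, 16] > recon[2, 2])                   # bright centre recovered
```

Dropping the ramp filter in `fbp` reproduces the blurred, unfiltered back projection the text warns about.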
Advantages:

1. Mathematical Simplicity
o The formula is cleaner than fan-beam FBP (no pre-weighting, no Jacobian
denominator).
o Directly linked to the Radon transform and Fourier Slice Theorem, so it’s easy to
analyze and explain.
2. Well-Studied and Standard
o Serves as the theoretical foundation of CT reconstruction.
o Useful for education, algorithm testing, and theoretical analysis.
3. Lower Computational Complexity (Conceptually)
o Only requires 1-D filtering and back projection, without extra geometry corrections (like the fan-beam cos γ pre-weighting).
4. Noise and Artifact Understanding
o Since it’s mathematically clean, it’s easier to study the effects of noise, limited data,
and filtering in parallel-beam form.
5. Compatibility with Mathematical Theory
o Directly compatible with algorithms like Radon inversion and Fourier
reconstruction.
o Often used in research simulations before adapting to real-world fan-beam
geometry.

Disadvantages:

1. Not Realistic for Modern CT Scanners


o Modern scanners use fan-beam (or cone-beam) geometry, not parallel beams.
o A real scanner cannot capture parallel-beam data directly.
2. Requires Rebinning in Practice
o Fan-beam data from real scanners must be rebinned (interpolated) to parallel-beam
format, introducing interpolation errors and extra computation.
3. High Projection Requirement
o Like fan-beam FBP, it requires a large number of uniformly spaced projections
(high radiation dose) to produce artifact-free images.
4. Noise Sensitivity
o Ramp filter amplifies high-frequency noise. Without proper windowing,
reconstructed images can be very noisy.
5. Artifact-Prone
o Limited-angle data, sparse projections, or non-ideal conditions produce streaks and
star artifacts.
o Performs poorly with metal or motion in scans.
6. Limited Use Outside Simulation
o Primarily used in teaching, research, or when synthetic parallel-beam data is
generated.
o Rarely used directly in real medical CT reconstruction.
3.4.1.2) Reconstruction Using Fan-Beam Filtered Back-Projection
The discussion thus far has centered on parallel beams. Because of its simplicity and intuitiveness, this is the imaging geometry used traditionally to introduce computed tomography. However, modern CT systems use a fan-beam geometry.
We begin by noticing that the parameters of a line 𝑳(𝝆, 𝜽) are related to the parameters of a fan-beam ray by

𝜽 = 𝜷 + 𝜶
𝝆 = 𝑫 𝒔𝒊𝒏𝜶

where 𝑫 is the distance from the source to the origin. The convolution back-projection formula for the parallel-beam imaging geometry is given next. Without loss of generality, suppose that we focus attention on objects that are encompassed within a circular area of radius 𝑻 about the origin of the plane:

𝒇(𝒙, 𝒚) = (1/2) ∫_{0}^{𝟐𝝅} ∫_{−𝑻}^{𝑻} 𝒈(𝝆, 𝜽) 𝒔(𝒙 𝒄𝒐𝒔𝜽 + 𝒚 𝒔𝒊𝒏𝜽 − 𝝆) 𝒅𝝆 𝒅𝜽

         = (1/2) ∫_{0}^{𝟐𝝅} ∫_{−𝑻}^{𝑻} 𝒈(𝝆, 𝜽) 𝒔(𝒓 𝒄𝒐𝒔(𝜽 − 𝝋) − 𝝆) 𝒅𝝆 𝒅𝜽

where (𝒓, 𝝋) are the polar coordinates of (𝒙, 𝒚).

This expression is nothing more than the parallel-beam reconstruction formula written in polar coordinates. However, integration still is with respect to 𝝆 and 𝜽. To integrate with respect to 𝜶 and 𝜷 requires a transformation of coordinates:

𝒇(𝒓, 𝝋) = (1/2) ∫_{−𝜶}^{𝟐𝝅−𝜶} ∫_{−𝐬𝐢𝐧⁻¹(𝑻/𝑫)}^{𝐬𝐢𝐧⁻¹(𝑻/𝑫)} 𝒈(𝑫 𝒔𝒊𝒏𝜶, 𝜶 + 𝜷) 𝒔[𝒓 𝐜𝐨𝐬(𝜷 + 𝜶 − 𝝋) − 𝑫 𝒔𝒊𝒏𝜶] 𝑫 𝒄𝒐𝒔𝜶 𝒅𝜶 𝒅𝜷

After carrying out the remaining algebra and the appropriate changes of variables for the angles, we arrive at the fan-beam filtered back-projection formula:

𝒇(𝒙, 𝒚) = ∫_{0}^{𝟐𝝅} 𝒒̃(𝜸𝜷(𝒙, 𝒚), 𝜷) / (𝑹 − 𝒙 𝒄𝒐𝒔𝜷 − 𝒚 𝒔𝒊𝒏𝜷)² 𝒅𝜷
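The change of variables θ = β + α, ρ = D sin α that opened this section is also the basis of "rebinning" fan-beam rays into equivalent parallel-beam ones; a one-function sketch:

```python
import math

def fan_to_parallel(alpha, beta, D):
    """Map a fan-beam ray (fan angle alpha, source angle beta, source-to-
    origin distance D) to its parallel-beam parameters (rho, theta)."""
    return D * math.sin(alpha), beta + alpha

rho, theta = fan_to_parallel(0.0, 0.5, 100.0)
print(rho, theta)   # 0.0 0.5 -> the central ray maps to rho = 0, theta = beta
```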
Advantages:

1. Computational Efficiency
o FBP is much faster than iterative reconstruction methods since it mainly involves
filtering (FFT-based convolution) and back projection.
o Suitable for real-time or near-real-time CT imaging.
2. Simplicity of Implementation
o The algorithm is mathematically well-established and relatively straightforward to
implement compared to iterative methods.
3. Direct Reconstruction
o Produces an image directly from projection data without requiring multiple
iterations.
o Good for quick diagnostics and situations where computational resources are
limited.
4. Geometric Fit to Modern CT Scanners
o Most clinical CT scanners use fan-beam geometry. FBP naturally adapts to that
geometry (with pre-weighting and Jacobian corrections).
5. Stable and Reproducible
o Provides consistent results without risk of non-convergence (unlike iterative
reconstructions).
o Noise amplification is predictable and can be controlled with filters.
Disadvantages:

1. Noise Sensitivity
o The ramp filter amplifies high-frequency noise. Even with windowing (e.g., Hann
or Shepp–Logan), noise remains a challenge.
2. Artifact-Prone
o Streaks, star artifacts, and beam-hardening effects are more pronounced, especially
in sparse data or with metal objects in the scan.
3. High Dose Requirement
o To reduce noise and artifacts, a relatively high number of projections (and thus
higher radiation dose) is needed for good image quality.
4. Limited Flexibility
o Assumes complete or near-complete angular coverage.
o For limited-angle scans, missing data, or sparse-view CT, FBP performs poorly
compared to iterative methods.
5. Approximate Physical Modeling
o Does not model scatter, detector blur, or nonlinear effects (beam hardening, noise
statistics).
o Iterative reconstruction can incorporate these physics for better accuracy.
6. Short-Scan Complexity
o In fan-beam geometry, if only a short scan (<360°) is acquired, additional weighting (e.g., Parker weighting) is required to correct for angular undersampling.
