Digital Image Processing
UNIT-I
LIGHT AND THE ELECTROMAGNETIC SPECTRUM IN DIP:
In digital image processing, light and the electromagnetic spectrum are fundamental
because digital images are formed by capturing and digitizing electromagnetic radiation
from various parts of the spectrum, not just visible light. This includes radio waves for
MRI, microwaves for radar imaging, infrared for thermal imaging, and X-rays for
medical and security scans,
which are then converted into digital data by sensors like Charge-Coupled Devices
(CCDs).
The electromagnetic spectrum and image acquisition
The spectrum:
The electromagnetic spectrum is the full range of electromagnetic radiation, from low-
energy radio waves to high-energy gamma rays. Visible light is just a narrow portion of
this spectrum, between about 390 and 700 nanometers.
Image formation:
Digital images can be acquired from any part of this spectrum. Different wavelengths are
used for different applications:
Visible light: Used for everyday photography.
Infrared: Used for thermal imaging and night vision.
X-rays: Used in medicine for bone imaging and in security to scan luggage.
Radio waves and microwaves: Used for Magnetic Resonance Imaging (MRI) and radar imaging, respectively.
Sensor function:
Digital sensors, such as CCDs, detect this electromagnetic energy. They measure the
intensity of the radiation at various points and convert this information into a numerical
format. This process involves two key steps:
Sampling: Dividing the image into a grid of pixels.
Quantization: Assigning a specific intensity value (e.g., a number from 0-255) to each
pixel.
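A minimal sketch of both steps in Python with NumPy, assuming the continuous scene is stood in for by a fine-grained array (the array names and the choice of 8 levels are illustrative):

    import numpy as np

    # Illustrative stand-in for a continuous scene: a fine-grained 2-D array.
    analog = np.random.rand(512, 512)

    # Sampling: keep every 4th value in each direction -> a coarser pixel grid.
    sampled = analog[::4, ::4]

    # Quantization: map each sample (in [0, 1]) to one of 8 discrete levels,
    # then scale the level index back to the 0-255 display range.
    levels = 8
    idx = np.clip(np.floor(sampled * levels), 0, levels - 1)
    quantized = (idx * (255.0 / (levels - 1))).astype(np.uint8)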
Key concepts
Wavelength, frequency, and energy:
These are inversely related. Higher frequencies and shorter wavelengths correspond to
higher energy photons.
Photons:
Electromagnetic radiation is composed of photons, massless particles that carry
energy.
Color perception:
In digital images, color is created by representing the intensity of different wavelengths
of visible light.
Sampling and quantization are two fundamental steps in converting an analog image
into a digital one. Sampling digitizes the spatial coordinates (x, y) by taking
measurements at discrete points, creating a grid of pixels and determining the image's
spatial resolution. Quantization digitizes the amplitude (intensity) of each sample by
assigning it to one of a finite number of discrete intensity levels, determining the image's
color or gray-level resolution.
Sampling
What it is:
Measuring the image intensity at discrete (x, y) locations, converting the continuous
scene into a grid of pixels.
What it affects:
Spatial resolution; more samples per unit area allow finer detail to be represented.
Analogy:
Sampling is like taking snapshots of a continuous scene at regular intervals; the more
snapshots you take, the more faithfully the scene is captured.
Quantization
What it is:
Mapping each sampled intensity to one of a finite set of discrete levels (for example,
256 levels in an 8-bit image).
What it affects:
Gray-level (intensity) resolution; more levels give smoother tonal gradations.
Analogy:
After taking snapshots (sampling), quantization is like assigning a specific,
predefined color to each snapshot. If you have only two colors (black and white),
you have low quantization. With more colors or shades of gray, you have higher
quantization and a more accurate representation of the original scene.
Order:
Sampling is always performed before quantization. You must first have samples
(pixels) before you can assign a discrete intensity value to them.
Interdependence:
Both processes are essential for creating a digital image. Sampling provides the
"where" (the grid of pixels), and quantization provides the "what" (the
color/intensity of each pixel).
Result:
The combination of sampling and quantization creates the digital image, which is
a 2D array of pixels, each with a specific numeric value representing its intensity.
GRAY LEVEL TRANSFORMATIONS:
Linear Transformations:
These transformations involve a linear mapping of input gray levels to output gray
levels.
Identity Transformation: The simplest form where the output gray level is
identical to the input gray level, resulting in no change to the image.
Negative Transformation: Inverts the gray levels using s = (L - 1) - r, where r is the
input value and L is the number of gray levels. This is useful for enhancing details in
dark regions.
Logarithmic Transformations:
These transformations map a narrow range of input values to a wider range of output
values.
Log Transformation: The formula is s = c * log(1 + r), where 'r' is the input pixel
value, 's' is the output pixel value, and 'c' is a constant. This transformation is used to
expand the values of darker pixels and compress the values of brighter pixels.
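A brief sketch of both transformations on an 8-bit image, assuming a NumPy array named img with values 0-255 (the names and the choice of c are illustrative):

    import numpy as np

    img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # illustrative ramp image

    # Negative transformation: s = (L - 1) - r, with L = 256 gray levels.
    negative = 255 - img

    # Log transformation: s = c * log(1 + r); choosing c = 255 / log(256)
    # maps the full input range onto the full 0-255 output range.
    c = 255.0 / np.log(256.0)
    log_img = (c * np.log1p(img.astype(np.float64))).astype(np.uint8)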
In essence, gray level transformations are a fundamental tool in image processing for
adjusting contrast and brightness and for revealing detail hidden in poorly exposed
intensity ranges.
HISTOGRAM PROCESSING:
1. Understanding Histograms:
A histogram plots the number of pixels at each intensity level; a narrow histogram
indicates low contrast, while a well-spread histogram indicates high contrast.
2. Histogram Processing Techniques:
Histogram Equalization:
This technique redistributes the pixel intensities to approximate a uniform
distribution, effectively enhancing the contrast of the image (a sketch follows this list).
Histogram Matching (or Specification):
This method transforms the histogram of one image to resemble the histogram of
another, allowing for specific contrast or tonal characteristics to be transferred.
Local Histogram Processing:
This involves applying histogram processing to smaller regions or blocks of the
image, which can be useful for enhancing details in specific areas.
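A minimal sketch of the first technique above, histogram equalization, for an 8-bit grayscale image (the function and array names are illustrative):

    import numpy as np

    def equalize(img):
        # Count how many pixels fall at each of the 256 intensity levels.
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        # Cumulative distribution, rescaled to span 0-255.
        cdf = hist.cumsum().astype(np.float64)
        cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
        # Remap every pixel through the scaled CDF.
        return cdf[img].astype(np.uint8)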
3. Benefits of Histogram Processing:
Contrast Enhancement: Improves the visibility of details by spreading out the pixel
intensities.
Brightness Adjustment: Can be used to increase or decrease the overall brightness of
an image.
Image Preprocessing: Prepares images for further analysis, such as feature extraction
in computer vision or medical image analysis.
Improved Visual Quality: Makes images more pleasing to the eye and easier to
interpret.
4. Examples of Applications:
Medical imaging (enhancing X-ray and MRI contrast), satellite image analysis, and
photographic retouching all make use of histogram processing.
Arithmetic Operations:
Addition: Averages or blends images (e.g., frame averaging for noise reduction).
Subtraction: Highlights differences between two images (e.g., change detection).
Multiplication/Division: Used for shading correction and for applying masks.
Logical Operations:
AND: Keeps only overlapping parts of two images/masks.
OR: Combines features from both images.
NOT: Inverts pixel values (useful for negative images).
XOR: Highlights differences between binary images.
Applications:
Removing or adding specific regions using masks.
Highlighting changes between two images.
Combining images for overlays.
Brightness/contrast adjustments.
SPATIAL FILTERING BASICS:
1. Pixel Values: A digital image is a grid of numeric intensity values, one per pixel.
2. Spatial Domain: Spatial filtering operates directly on these pixel values in the
spatial domain, meaning it works with the image's physical structure.
3. The Filter (Kernel/Mask): A spatial filter is a small matrix (e.g., 3x3, 5x5) that
defines the neighborhood of pixels to be considered and the operation to be
performed.
4. Sliding the Filter: The filter is moved across the image, one pixel at a time,
applying the defined operation at each location.
5. Creating a New Image: The output of the filtering operation replaces the original
pixel value, resulting in a new, filtered image.
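A minimal sketch of steps 3-5 with a 3x3 averaging kernel; scipy's convolve2d handles the sliding (the array names are illustrative):

    import numpy as np
    from scipy.signal import convolve2d

    img = np.random.randint(0, 256, (64, 64)).astype(np.float64)  # illustrative image

    # Step 3: a 3x3 box kernel in which every neighbor has equal weight.
    kernel = np.ones((3, 3)) / 9.0

    # Steps 4-5: slide the kernel over the image; each output pixel becomes
    # the weighted sum of the 3x3 neighborhood centered on it.
    filtered = convolve2d(img, kernel, mode='same', boundary='symm')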
A widely used non-linear example is the median filter, which replaces each pixel with
the median value of its neighbors; it is effective at removing salt-and-pepper noise
while preserving edges.
Applications:
Noise reduction: Smoothing filters can effectively reduce random noise in images.
Edge detection: Sharpening filters can highlight edges and boundaries in an image.
Image enhancement: Spatial filtering can improve the overall appearance of an
image by adjusting its contrast, sharpness, or other visual characteristics.
Feature extraction: Specific spatial filters can be designed to extract particular
features from an image, such as textures or patterns.
Smoothing spatial filters, also known as low-pass filters, are used in image
processing to reduce noise and blur images. They work by averaging the pixel values
in a neighborhood, effectively reducing sharp transitions in intensity that characterize
noise and fine details.
Characteristics:
Noise Reduction:
Smoothing filters are highly effective in removing random noise, which often
appears as sharp, high-frequency variations in pixel values.
Blurring:
By averaging pixel values, these filters reduce the sharpness of edges and fine
details, leading to a blurring effect.
Low-Pass Filtering:
Smoothing filters allow low-frequency components (representing gradual changes in
intensity) to pass through while attenuating high-frequency components
(representing sharp transitions).
Types of Smoothing Filters:
1. Linear Filters:
These filters perform a weighted average of pixel values within a
neighborhood. Common examples include:
Mean/Box Filter: A simple averaging filter where each pixel in the neighborhood is
given equal weight.
Weighted Average Filter: Assigns different weights to pixels in the neighborhood,
often giving more weight to the central pixel.
Gaussian Filter: Uses a Gaussian function to define the weights, resulting in a
smoother blurring effect.
2. Non-linear Filters:
These filters do not rely on simple averaging. Examples include:
Median Filter: Replaces each pixel with the median value of the pixels in its
neighborhood. This is particularly effective at removing salt-and-pepper noise
(compared with the mean filter in the sketch after this list).
Max/Min Filters: Replace each pixel with the maximum or minimum value in its
neighborhood.
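A short sketch contrasting the linear mean filter and the non-linear median filter above on salt-and-pepper noise, using scipy.ndimage (names and noise levels are illustrative):

    import numpy as np
    from scipy.ndimage import uniform_filter, median_filter

    img = np.full((64, 64), 128, dtype=np.uint8)   # flat illustrative image
    r = np.random.rand(*img.shape)
    img[r < 0.025] = 0       # "pepper": 2.5% of pixels forced dark
    img[r > 0.975] = 255     # "salt": 2.5% of pixels forced bright

    mean_out = uniform_filter(img, size=3)     # box filter: smears the impulses
    median_out = median_filter(img, size=3)    # median: removes them, keeps edges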
Applications:
Noise Reduction:
Smoothing filters are widely used to remove noise from images before further
processing or analysis.
Preprocessing for Object Extraction:
Blurring can simplify image analysis by removing fine details and making it easier
to identify larger objects.
Edge Smoothing:
Smoothing filters can reduce the jaggedness of edges, making them appear
smoother.
Anti-aliasing:
Smoothing is used in resampling to reduce aliasing artifacts, which can occur when
an image is scaled down.
SHARPENING SPATIAL FILTERS:
Differentiation:
Sharpening filters utilize derivative operators (first and second order) to identify and amplify
differences in pixel intensities.
Edge Enhancement:
By accentuating these intensity differences, sharpening filters make edges and boundaries
appear more prominent, improving image clarity.
Inversion of Smoothing:
Since smoothing filters (like averaging filters) perform integration, sharpening filters, which
are based on differentiation, essentially reverse this process.
Types of Sharpening Filters:
Derivative Filters:
These filters, such as the Sobel operator, are used to calculate image gradients, highlighting
edges and other discontinuities.
High-boost Filtering:
This technique amplifies the high-frequency components of an image, further enhancing detail
and sharpness.
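A minimal sketch of high-boost filtering via unsharp masking, assuming a float image in [0, 1] and a Gaussian blur as the smoothed version (the names and the value of k are illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    img = np.random.rand(64, 64)              # illustrative image in [0, 1]

    blurred = gaussian_filter(img, sigma=2)   # low-pass version of the image
    mask = img - blurred                      # high-frequency detail ("unsharp mask")

    k = 1.5                                   # k = 1 is unsharp masking; k > 1 boosts
    sharpened = np.clip(img + k * mask, 0.0, 1.0)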
COMBINING ENHANCEMENT METHODS:
1. Analyze the image and identify goals: Determine the primary issues with the image (e.g.,
noise, low contrast, blurry edges) and what improvements are needed.
2. Select complementary techniques: Choose methods that address the identified issues and
can be applied sequentially.
Noise Reduction: Apply a smoothing filter like an averaging or median filter to reduce noise.
Edge Enhancement: Use a sharpening filter like the Laplacian to highlight edges and fine
details, often applied after noise reduction.
Contrast Adjustment: Employ techniques like histogram equalization or power-law
transformations to improve the overall contrast.
Gradient Enhancement: Use filters like Sobel or Prewitt to enhance edges.
3. Apply methods sequentially: Process the image with one technique, then use the result as
the input for the next technique in the sequence.
4. Iterate and refine: Evaluate the result and repeat steps to further refine the image if the initial
combination is not satisfactory.
Examples of combined methods
Denoising + Sharpening:
o Apply a median or averaging filter to reduce noise.
o Apply a sharpening filter (e.g., the Laplacian) to the denoised result to restore edge
detail.
Contrast Stretching:
o Apply a power-law or logarithmic transformation to increase the dynamic range and contrast of
the image.
Non-linear Combination:
o Use a bilateral filter, which combines spatial and intensity information to smooth images while
preserving edges.
UNIT-III
Image degradation and restoration involves modeling the processes that corrupt an
image and then reversing those processes to recover the original, pristine image. The
degradation process is often modeled as a combination of a degradation function (like
blurring or geometric distortion) and additive noise. Image restoration aims to
estimate the original image by inverting the degradation and noise effects.
1. Degradation Model:
The degradation process is modeled as a degradation function h(x, y) acting on the
original image f(x, y), together with additive noise η(x, y):
g(x, y) = h(x, y) * f(x, y) + η(x, y)
where g(x, y) is the observed degraded image and * denotes convolution.
2. Restoration Process:
Inverse Process:
The restoration process aims to reverse the effects of the degradation function and
noise to recover an estimate of the original image, denoted as f̂(x,y).
Knowledge of Degradation:
Restoration relies on having some knowledge about the degradation function and
noise characteristics.
Techniques:
Various techniques are used for restoration, including inverse filtering, Wiener
filtering, and other methods that attempt to estimate and compensate for the
degradation and noise.
3. Examples:
Blurring:
If the degradation function is a blurring filter, the restoration process would involve
applying a deblurring filter.
Noise Reduction:
If the degradation is primarily due to noise, restoration would involve noise
reduction techniques.
2. NOISE MODELS:
A noise model in digital image processing describes the statistical distribution of
unwanted variations (noise) in an image.
Statistical Description:
Noise models use probability distributions (like Gaussian, Poisson, or uniform) to characterize the
random variations in pixel intensities.
Noise Types:
Common noise types include:
Gaussian noise: A normal distribution, often appearing as a bell-shaped curve in histograms.
Impulse noise (Salt and Pepper): Randomly scattered bright (salt) and dark (pepper) pixels.
Uniform noise: Values are uniformly distributed within a certain range.
Rayleigh noise: Often found in radar images.
Exponential noise: Can arise in various applications.
Additive vs. Multiplicative Noise:
Noise can be added to the original signal (additive) or multiplied with it (multiplicative).
Impact on Images:
Noise can degrade image quality, introducing artifacts, blurring details, and obscuring important
features.
Noise Removal:
By understanding the noise model, appropriate filtering techniques (e.g., Gaussian filter for Gaussian
noise, median filter for impulse noise) can be applied to reduce noise and improve image quality.
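A small sketch that synthesizes two of these noise types on a clean image, a common way to test the matching filters (the names and parameters are illustrative):

    import numpy as np

    clean = np.full((64, 64), 128.0)   # flat illustrative image

    # Additive Gaussian noise: zero mean, standard deviation 20.
    gaussian_noisy = np.clip(clean + np.random.normal(0.0, 20.0, clean.shape), 0, 255)

    # Impulse (salt-and-pepper) noise: 5% of pixels forced to 0 or 255.
    impulse_noisy = clean.copy()
    r = np.random.rand(*clean.shape)
    impulse_noisy[r < 0.025] = 0
    impulse_noisy[r > 0.975] = 255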
3. RESTORATION IN THE PRESENCE OF NOISE ONLY:
Image restoration, specifically in the context of noise, aims to recover the original, "clean" image
from a degraded version that contains unwanted variations in intensity, known as noise.
Image Degradation:
Images can be degraded by various factors, including noise, blurring, and geometric distortions. In
some cases, the only form of degradation is noise.
Noise:
Noise is unwanted variations in pixel intensities that obscure the true image content. It can arise from
various sources, such as sensor imperfections, transmission errors, or environmental factors.
Spatial Filtering:
When noise is the primary degradation, spatial filtering techniques are often used. These methods
involve applying a filter mask to the image, averaging or modifying pixel values within a neighborhood
to reduce the noise effect.
Restoration vs. Denoising:
While often used interchangeably, "restoration" in the context of noise implies bringing the image back
to its presumed original state, whereas "denoising" might simply aim to reduce noise without specific
regard for the original image.
Example Techniques:
Mean filters (like arithmetic mean filters) are common examples. These filters replace each pixel with
the average value of its neighboring pixels within a defined window. Other techniques include median
filters, which replace each pixel with the median value of its neighbors, and alpha-trimmed mean
filters, which discard a certain number of extreme pixel values before averaging.
Frequency Domain:
In some cases, noise characteristics (e.g., periodic noise) can be better addressed in the frequency
domain using Fourier transforms, but spatial filtering is often the primary approach when noise is the
only degradation.
4. SPATIAL FILTERING:
Spatial filtering is a digital image processing technique that modifies pixel values by applying a
filter mask to a pixel and its neighbors to perform operations like smoothing, sharpening, or
edge detection. The mask, also called a kernel, slides across the image, and at each position, a
predefined calculation is performed to determine the new value of the center pixel. Spatial
filtering is broadly classified into linear filters (like average and weighted average filters)
and non-linear filters (like median filters).
How it works
Filter mask: A small matrix of weights is used as the filter, sliding across the image.
Neighborhood operation: At each pixel, a calculation is performed using the values of the
pixels under the mask.
Pixel replacement: The result of this calculation becomes the new value for the center pixel of
the mask's current position.
Types of spatial filters
Linear filters:
The new pixel value is a weighted sum of the original pixel values within the neighborhood.
Smoothing (low-pass): Filters like the mean or box filter average pixel values to reduce noise
and blur the image.
Sharpening (high-pass): Filters that use differentiation to highlight edges and details by
emphasizing high frequencies.
Non-linear filters:
The new pixel value is determined by a non-linear function of the pixel values in the
neighborhood.
Median filter: Replaces the center pixel with the median value of its neighbors, which is very
effective at reducing salt-and-pepper noise while preserving edges.
Applications
Noise reduction:
Smoothing filters like the mean and median filters are used to reduce unwanted noise in an
image.
Image sharpening:
Filters are used to enhance edges and details, making the image appear sharper.
Edge detection:
Filters that approximate derivatives are used to find the locations of edges and sharp changes in
intensity.
5. PERIODIC NOISE REDUCTION BY FREQUENCY DOMAIN FILTERING:
Periodic noise is reduced in digital image processing by filtering it out in the frequency
domain, which is done by applying the Discrete Fourier Transform (DFT) to convert the
image, filtering the noise in the frequency domain, and then applying the inverse Discrete
Fourier Transform (IDFT) to convert back to the spatial domain. Periodic noise appears
as distinct spikes or peaks in the frequency domain, so filters like notch and band reject
are used to specifically remove these frequency components.
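A minimal sketch of the DFT -> notch reject -> IDFT pipeline; the spike offset (u0, v0) is a hypothetical placeholder that would normally be read off the displayed spectrum (all names are illustrative):

    import numpy as np

    img = np.random.rand(128, 128)   # illustrative image with periodic noise

    # Forward DFT, shifted so zero frequency sits at the center.
    F = np.fft.fftshift(np.fft.fft2(img))

    # Notch reject mask: zero small disks around the spike and its
    # conjugate-symmetric partner on the opposite side of the center.
    mask = np.ones(img.shape)
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    u0, v0 = 20, 30                  # hypothetical spike offset from the center
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    for sy, sx in [(cy + u0, cx + v0), (cy - u0, cx - v0)]:
        mask[(yy - sy) ** 2 + (xx - sx) ** 2 <= 5 ** 2] = 0

    # Filter and invert the transform back to the spatial domain.
    restored = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))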
LINEAR METHODS:
Linear methods in image processing produce output pixel values that have a linear
relationship with the input pixel values, often achieved through methods like convolution or
piecewise linear transformations. This allows for operations such as image filtering (sharpening, noise
reduction) and contrast stretching, where a change in one part of the image's intensity range is applied
consistently across the entire image or in specific defined segments.
Key concepts
Linear transformation: A transformation where the output pixel values are a linear combination
of the input values. This can be represented by the equation s = a * r + b, where r is the
input intensity, s is the output intensity, and a and b are constants controlling contrast and
brightness.
Convolution: A common linear operation where an image is filtered by applying a kernel (a small
matrix) across each pixel. The result is a new image where each pixel's value is a weighted sum
of its neighbors, as defined by the kernel. This is the basis for many filters, including those that
sharpen images or reduce noise.
Applications
Image enhancement: Used for improving the visual quality of an image, such as sharpening
edges, enhancing contrast, or reducing noise.
Image restoration: Correcting degradations like blur caused by motion or an unfocused lens.
Feature extraction: Used to find and emphasize linear features like roads, canals, or geological
lineaments in satellite or aerial imagery.
LINEAR, POSITION-INVARIANT DEGRADATIONS:
Key Concepts:
Linearity:
A linear system satisfies two properties: homogeneity (multiplying the input by a
constant also multiplies the output by that constant) and additivity (the output of the
sum of inputs is the sum of the individual outputs).
Position Invariance:
The degradation effect is the same regardless of where it occurs in the image. This
means the point spread function (PSF) is constant across the image.
Degradation Function:
This function, often represented as h(x, y), describes how the image is degraded. In
linear, position-invariant systems, this function is the PSF.
Noise:
Additive noise is a common component of image degradation models, represented as
η(x, y).
Convolution:
The mathematical operation that combines the degradation function and the original
image, essentially blurring or otherwise altering the image based on the PSF.
Mathematical Representation:
g(x, y) = h(x, y) * f(x, y) + η(x, y)
Where:
g(x, y) is the degraded image, f(x, y) is the original image, h(x, y) is the degradation
function (the PSF), η(x, y) is the additive noise, and * denotes convolution.
The same model can be represented in the frequency domain using Fourier
transforms:
G(u, v) = H(u, v) F(u, v) + N(u, v)
Where:
G, H, F, and N are the Fourier transforms of g, h, f, and η, respectively; convolution
in the spatial domain becomes multiplication in the frequency domain.
Image Restoration:
Understanding the model of invariant degradations is crucial for developing image
restoration techniques, which aim to recover the original image from the degraded
version.
Restoration Algorithms:
Algorithms like inverse filtering and Wiener filtering can be applied to address these
types of degradations, particularly when the PSF and noise characteristics are known
or can be estimated.
Blind Deconvolution:
When the PSF is unknown, the problem becomes more challenging, and techniques
like blind deconvolution are used to estimate both the original image and the PSF.
ESTIMATING THE DEGRADATION FUNCTION:
The degradation function can be estimated in three principal ways:
1. Observation:
This approach involves analyzing a degraded image to infer the degradation. It often
involves identifying a small, relatively sharp portion of the image (where the
degradation is minimal) and using its characteristics to estimate the overall
degradation function.
2. Experimentation:
This method involves creating a controlled environment similar to the one in which
the degraded image was captured. By imaging a known object (like a small, bright
dot) under similar conditions, researchers can directly measure the impulse response
of the degradation.
3. Mathematical Modeling:
This approach relies on developing mathematical models that represent the physical
processes causing the degradation. These models can be based on principles of
optics, motion, or other relevant factors.
INVERSE FILTERING:
Inverse filtering is a digital image processing technique used for image restoration,
specifically to reverse the effects of blurring and other degradations. It works by
applying the inverse of the degradation function (also known as the point spread
function or impulse response) in the frequency domain.
How it works:
1. Degradation Model:
It's assumed that the image degradation (blurring, etc.) can be modeled by a specific
function (point spread function or impulse response).
2. Frequency Domain Analysis:
The degraded image and the degradation function are transformed into the frequency
domain using the Fourier transform.
3. Inverse Filtering:
The inverse of the degradation function (1/H) is calculated. This inverse filter is then
multiplied by the transformed image.
4. Restoration:
The result is transformed back to the spatial domain using the inverse Fourier
transform, yielding a restored image.
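A minimal sketch of these four steps, assuming the blur PSF is known; the eps threshold guards against dividing by near-zero values of H (all names are illustrative):

    import numpy as np

    degraded = np.random.rand(128, 128)   # illustrative degraded image g(x, y)
    psf = np.zeros((128, 128))
    psf[:5, :5] = 1.0 / 25                # assumed 5x5 box-blur PSF h(x, y)

    G = np.fft.fft2(degraded)             # step 2: into the frequency domain
    H = np.fft.fft2(psf)

    eps = 1e-3                            # step 3: invert H only where it is safe
    H_inv = np.zeros_like(H)
    ok = np.abs(H) > eps
    H_inv[ok] = 1.0 / H[ok]
    F_hat = G * H_inv

    restored = np.real(np.fft.ifft2(F_hat))   # step 4: back to the spatial domain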
Challenges:
Noise Amplification:
If the degradation function has values close to zero, its inverse can become very
large, amplifying any noise present in the image.
Ill-posed Problem:
In cases with significant noise, inverse filtering can lead to unstable and inaccurate
image restoration.
Applications:
Image Restoration:
Recovering images degraded by motion blur, out-of-focus blur, or other known
distortions.
Bridge Monitoring:
Identifying the fundamental frequency of a bridge by analyzing vibrations caused by
vehicles.
Vocal Fold Vibration Analysis:
Estimating the glottal volume velocity waveform during speech by inverse filtering
the radiated acoustic waveform or volume velocity at the mouth.
Alternatives:
Wiener Filtering:
A more robust approach that incorporates statistical information about the noise to
minimize the mean square error, making it less susceptible to noise amplification.
Regularization Techniques:
Methods that add constraints to the solution to prevent noise amplification and
improve stability.
MINIMUM MEAN SQUARE ERROR FILTERING:
Minimum Mean Square Error (MMSE) filtering aims to find the optimal filter
that minimizes the average squared difference (mean squared error) between the
filter's output and a desired signal. This technique is widely used in signal processing,
particularly for image and speech enhancement, to reduce noise and improve signal
quality.
Core Concept:
The goal of MMSE filtering is to design a filter whose output is as close as possible
to a desired signal, where closeness is measured by the mean squared error between
the two.
MMSE filtering is often used in situations where the desired signal is corrupted by
noise or other distortions.
Key Applications:
Image Restoration:
MMSE filters, like the Wiener filter, are used to restore images degraded by blur and
noise by estimating the original, uncorrupted image.
Speech Enhancement:
MMSE filters can be applied to noisy speech signals to improve their quality and
intelligibility.
Noise Cancellation:
In applications like noise-canceling headphones, MMSE filters help remove
unwanted background noise.
Channel Equalization:
MMSE filters can mitigate the effects of inter-symbol interference in communication
systems.
How it Works:
1. Problem Formulation:
The filtering problem is formulated as finding a filter (represented by its
coefficients) that minimizes the MSE between its output and the desired signal.
2. Statistical Assumptions:
MMSE filtering often relies on statistical information about the signals involved,
such as their power spectra or correlation properties.
3. Filter Design:
The filter coefficients are chosen to minimize the MSE, often using techniques like
solving a system of linear equations or applying optimization algorithms.
4. Wiener Filter:
The Wiener filter is a prominent example of an MMSE filter, particularly for image
restoration and noise reduction (a sketch follows this list).
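A minimal sketch of Wiener deconvolution in the frequency domain, using the common simplification that the noise-to-signal power ratio is a constant K (the value of K and all names are illustrative):

    import numpy as np

    degraded = np.random.rand(128, 128)   # illustrative degraded image
    psf = np.zeros((128, 128))
    psf[:5, :5] = 1.0 / 25                # assumed known blur PSF

    G = np.fft.fft2(degraded)
    H = np.fft.fft2(psf)
    K = 0.01                              # assumed noise-to-signal power ratio

    # Wiener filter: F_hat = [ conj(H) / (|H|^2 + K) ] * G.
    # As K -> 0 this reduces to the inverse filter; a larger K suppresses
    # frequencies where noise dominates the signal.
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + K)) * G
    restored = np.real(np.fft.ifft2(F_hat))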
Limitations:
MMSE filtering requires statistical knowledge (or estimates) of the signal and noise,
such as their power spectra, and it assumes those statistics are stationary;
performance degrades when these assumptions do not hold.
CONSTRAINED LEAST SQUARES (CLS) FILTERING:
1. The Problem:
CLS filtering minimizes an error function that measures the difference between the
degraded image and a filtered version of the estimated original image.
2. The Constraint:
This minimization is performed subject to a constraint, which can be expressed as an
inequality involving a Laplacian operator or a similar measure of image roughness.
The constraint ensures that the restored image doesn't become excessively noisy or
fluctuate wildly in areas where the original image might be smooth.
3. Key Components:
In the frequency domain, the CLS filter is expressed as a function of the degradation
function, the Laplacian operator, and the parameter γ.
The formula involves dividing the Fourier transform of the degraded image by a term
that includes the degradation function and the Laplacian operator, weighted by γ.
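Putting those components together, the CLS filter is commonly written in the frequency domain as (a standard form, stated here for reference):

    F̂(u, v) = [ H*(u, v) / ( |H(u, v)|^2 + γ |P(u, v)|^2 ) ] G(u, v)

where H*(u, v) is the complex conjugate of the degradation function, P(u, v) is the Fourier transform of the Laplacian operator, G(u, v) is the transform of the degraded image, and γ is adjusted until the constraint is satisfied.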
4. The Parameter γ:
γ controls the trade-off between data fidelity and smoothness; it is adjusted (often
iteratively) until the constraint is satisfied, and setting γ = 0 reduces the filter to the
inverse filter.
5. Advantages of CLS Filtering:
It can be more effective than Wiener filtering when the noise is significant.
It doesn't require explicit knowledge of the power spectra of the original image and
noise.
It allows for control over the smoothness of the restored image.
6. Applications:
CLS filtering is used to restore images degraded by a known blur when the noise
statistics are unknown, for example in deblurring astronomical or forensic images.
GEOMETRIC MEAN FILTER:
A geometric mean filter is a type of image processing filter used to reduce noise and
smooth images, particularly those affected by Gaussian noise. It works by replacing
each pixel's value with the geometric mean of the pixel values within a defined
neighborhood. This method is known for preserving edges better than some other
smoothing filters like the arithmetic mean filter.
Here's a more detailed explanation:
How it works:
1. Neighborhood Definition:
The filter operates on a defined neighborhood around each pixel, typically a square
window (e.g., 3x3, 5x5).
2. Geometric Mean Calculation:
For each pixel, the filter calculates the geometric mean of all pixel values within its
neighborhood. The geometric mean is calculated by multiplying all the pixel values
together and then taking the nth root, where n is the number of pixels in the
neighborhood.
3. Pixel Replacement:
The original pixel's value is then replaced with the calculated geometric mean.
Mathematical Representation:
If S(x, y) represents the original image, and the filter mask is m x n pixels, then the
output image G(x, y) is calculated as:
G(x, y) = [ Π (S(i, j)) ] ^ (1 / mn) , where the product is taken over all pixels (i, j)
within the mask.
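A minimal sketch of a 3x3 geometric mean filter computed in the log domain, since averaging logarithms and exponentiating equals taking the mn-th root of the product (names are illustrative; zero-valued pixels must be avoided or offset before taking logs):

    import numpy as np
    from scipy.ndimage import uniform_filter

    img = np.random.randint(1, 256, (64, 64)).astype(np.float64)  # no zero pixels

    # The mean of logs equals the log of the mn-th root of the product, so
    # exponentiating the local average of log(img) yields the geometric mean.
    geo_mean = np.exp(uniform_filter(np.log(img), size=3))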
Advantages:
Noise Reduction: Effective at reducing Gaussian noise.
Edge Preservation: Generally better at preserving edges compared to
arithmetic mean filters.
Disadvantages:
Blurring: Larger filter sizes can lead to more blurring.
Computational Cost: Calculating the geometric mean can be computationally
more expensive than simpler filters.
Sensitivity to Low Outliers: A single zero or very dark pixel in the
neighborhood pulls the product, and hence the output, sharply toward zero.
GEOMETRIC TRANSFORMATIONS:
Translation:
Shifting a shape without changing its size or orientation. Think of sliding a piece on
a board game.
Rotation:
Turning a shape around a fixed point (the center of rotation). Imagine spinning a
wheel.
Reflection:
Creating a mirror image of a shape across a line (the line of reflection). Like seeing
your reflection in a still lake.
Dilation:
Changing the size of a shape, either enlarging or shrinking it. This is like using a
magnifying glass.
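A small sketch applying each of these transformations to an image array with scipy.ndimage (the shift, angle, and scale factor are illustrative):

    import numpy as np
    from scipy.ndimage import shift, rotate, zoom

    img = np.random.rand(64, 64)      # illustrative image

    translated = shift(img, (10, 5))  # translation: 10 rows down, 5 columns right
    rotated = rotate(img, angle=30)   # rotation: 30 degrees about the center
    reflected = img[:, ::-1]          # reflection across the vertical axis
    dilated = zoom(img, 1.5)          # dilation (scaling): enlarge by 1.5x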
APPLICATIONS:
Geometric transformations are fundamental in various fields, including:
Computer graphics: Used to manipulate objects on screen, create animations, and
render images.
Image processing: Used for image enhancement, registration, and object
recognition.
Computer-aided design (CAD): Used to design and model objects.
Robotics: Used to plan robot movements and manipulate objects.
Mathematics and geometry: Provide a framework for understanding spatial
relationships and geometric properties.
UNIT-IV
IMAGE COMPRESSION STANDARDS:
Major image compression standards include JPEG for lossy and lossless
compression, JPEG 2000 which uses wavelet transforms, and PNG and GIF for lossless
compression. These standards are crucial for ensuring compatibility and efficiency in
storing and transmitting digital images, with lossy compression reducing file size by
discarding some data, while lossless methods retain all original information.
Lossy compression
JPEG:
The most common standard for photographs, it uses a lossy compression method based
on the Discrete Cosine Transform (DCT) to separate an image into frequency
components. It is highly effective at reducing file size for web and digital photography
use.
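A tiny sketch of the DCT step on a single 8x8 block, followed by coarse quantization of the coefficients; the uniform step size q stands in for JPEG's actual quantization tables (all names are illustrative):

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.randint(0, 256, (8, 8)).astype(np.float64)  # one image block

    coeffs = dctn(block - 128, norm='ortho')   # 2-D DCT of the level-shifted block
    q = 16                                     # illustrative uniform quantizer step
    quantized = np.round(coeffs / q)           # most high-frequency terms become 0
    reconstructed = idctn(quantized * q, norm='ortho') + 128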
JPEG 2000:
An evolution of JPEG that uses the Discrete Wavelet Transform for more efficient
compression and offers both lossy and lossless compression options.
Lossless compression
PNG:
A popular format that uses lossless compression techniques, making it ideal for web
graphics and images where data integrity is critical.
GIF:
Primarily used for animations, GIF files are limited to a palette of 256 colors and use a
lossless compression algorithm.
TIFF:
A flexible format that supports various compression schemes, including lossless methods
like LZW.
Compression techniques
Discrete Cosine Transform (DCT):
Transforms image blocks into the frequency domain, where high-frequency information
(details) can be more easily compressed. This is a core component of the JPEG standard.
Quantization:
Reduces the precision of the DCT coefficients, discarding information that is less
perceptible to the human eye, which is a key step in lossy compression.
Run-Length Encoding (RLE):
Replaces sequences of the same data value with a count, such as storing "12 zeros" as a
single entry (a toy encoder follows this list).
Huffman Coding:
Assigns shorter binary codes to more frequently occurring data values, reducing the
overall size of the file.
Wavelet Transforms:
A method used in the JPEG 2000 standard that decomposes an image into different
frequency sub-bands, allowing for more effective compression, particularly at different
scales.
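As a sketch of the run-length idea referenced above, a toy encoder for a 1-D sequence of values (the function name is illustrative):

    def run_length_encode(values):
        # Collapse each run of identical values into a [value, count] pair.
        runs = []
        for v in values:
            if runs and runs[-1][0] == v:
                runs[-1][1] += 1
            else:
                runs.append([v, 1])
        return runs

    # Twelve zeros followed by two sevens compress to just two pairs.
    print(run_length_encode([0] * 12 + [7, 7]))   # [[0, 12], [7, 2]]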