Subject: Image Processing
Semester: MCA-VIII
*UNIT I*
Digital Image Processing
Digital image processing (DIP) means processing digital images by means of a digital
computer. In other words, it is the use of computer algorithms and mathematical models
to process and analyse digital images, either to enhance an image or to extract
meaningful information from it. The goals of DIP are to enhance the quality of digital
images, to extract meaningful information, and to automate image-related tasks.
Origin of Digital Image Processing
Digital image processing was first used in the newspaper industry, at a time when
images were sent by submarine cable between London and New York. In the early
1920s, the Bartlane cable picture transmission system was introduced, which
reduced the time required to transport a picture across the Atlantic from more
than a week to less than three hours. Pictures were coded by specialized printing
equipment for transmission and reconstructed at the receiving end. The initial
problems in improving the visual quality of these early digital pictures were related
to the selection of printing procedures and the distribution of intensity levels.
The history of digital image processing is coupled with the development of digital
computers. John von Neumann introduced the design of the modern digital computer
in the 1940s. Digital image processing as a field was born in the 1960s: in 1964,
pictures of the moon taken by the U.S. spacecraft Ranger 7 were processed at the
Jet Propulsion Laboratory, and this work served as the basis for future image
processing. In the 1970s, image processing came into use in medical imaging,
astronomy, and remote sensing. From the 1960s to the present day, image
processing has seen tremendous growth.
The basic steps involved in digital image processing
1. Image acquisition: This involves capturing an image using a digital camera or
scanner, or importing an existing image into a computer.
2. Image enhancement: This involves improving the visual quality of an image,
such as increasing contrast, reducing noise, and removing artifacts.
3. Image restoration: This involves removing degradation from an image, such as
blurring, noise, and distortion.
4. Image segmentation: This involves dividing an image into regions or segments,
each of which corresponds to a specific object or feature in the image.
5. Image representation and description: This involves representing an image in
a way that can be analysed and manipulated by a computer, and describing the
features of an image in a compact and meaningful way.
6. Image analysis: This involves using algorithms and mathematical models to
extract information from an image, such as recognizing objects, detecting
patterns, and quantifying features.
7. Image synthesis and compression: This involves generating new images or
compressing existing images to reduce storage and transmission requirements.
Digital image processing is widely used in a variety of applications, including
medical imaging, remote sensing, computer vision, and multimedia.
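To make the first few of these steps concrete, here is a minimal sketch in Python. It assumes the OpenCV library (cv2) is installed; the file name sample.jpg and the parameter choices are illustrative assumptions, not prescriptions.

import cv2

# 1. Image acquisition: import an existing image as a grayscale array.
img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)

# 2. Image enhancement: stretch the contrast to the full 0-255 range.
enhanced = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)

# 3. Image restoration: suppress noise with a small Gaussian blur.
restored = cv2.GaussianBlur(enhanced, (5, 5), 0)

# 4. Image segmentation: separate foreground from background using
#    Otsu's automatically chosen threshold.
_, segmented = cv2.threshold(restored, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("segmented.jpg", segmented)

The later stages (representation, analysis, compression) would build on arrays like these.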
Elements of Digital Image Processing Systems
1. Image Sensors: These are devices like cameras that capture the initial image data
by converting light energy into electrical signals.
2. Specialized Image Processing Hardware: This includes dedicated hardware like
frame grabbers or specialized processors that perform specific image processing
tasks, often at high speeds.
3. Computer: A general-purpose computer acts as the central processing unit for the
system, running the image processing software and controlling the overall operation.
4. Image Processing Software: This software contains algorithms for various image
processing tasks, including enhancement, restoration, segmentation, and more.
5. Mass Storage: This provides storage for the large image files generated during
processing, whether short-term, online, or archival.
6. Image Displays: These are the monitors or screens that display the processed
images for viewing or analysis.
7. Hardcopy Devices: These devices, like printers, create physical copies of the
processed images.
8. Network: A network connection allows for the sharing of images and processing
tasks between different components or systems.
Applications of Image Processing
1. Medical Imaging
• MRI and CT Scans: Enhancing the clarity of MRI and CT scans for better diagnosis and treatment planning.
• X-Ray Imaging: Improving the quality and detail of X-ray images to detect fractures, tumors, and other
anomalies.
• Ultrasound Imaging: Enhancing ultrasound images for more accurate visualization of internal organs and
fetal development.
2. Remote Sensing
• Satellite Imaging: Analyzing satellite images for applications like land use mapping and resource
monitoring.
• Aerial Photography: Using drones and aircraft to capture high-resolution images for mapping and
surveying.
• Environmental Monitoring: Monitoring environmental changes and natural disasters using image
analysis.
3. Photography and Image Editing
Image processing is widely used in photo enhancement and manipulation.
• How it helps: It improves photo quality, corrects lighting, removes noise, and applies filters or effects.
• Example: Mobile camera apps use image processing for features like portrait mode, night mode, or
background blur.
• Benefit: Even an average photo can be made professional-looking with the help of automated
adjustments and effects.
4. Surveillance and Security
Image processing is used in security systems for real-time facial recognition, motion detection, and license plate
reading to enhance safety and monitoring.
What is an image?
An image is defined as a two-dimensional function, F(x, y), where x and y are spatial coordinates,
and the amplitude of F at any pair of coordinates (x, y) is called the intensity of the image at that
point. When x, y, and the amplitude values of F are all finite, discrete quantities, we call the image
a digital image.
In other words, an image can be defined by a two-dimensional array specifically arranged in rows
and columns.
A digital image is composed of a finite number of elements, each of which has a particular
value at a particular location. These elements are referred to as picture elements, image
elements, or pixels; pixel is the term most widely used to denote an element of a digital image.
A pixel (short for "picture element") is the smallest unit of a digital image or display: a very
small dot that represents a single colour and is the most basic building block of a digital image.
Types of an image
1. BINARY IMAGE– As its name suggests, a binary image contains only two pixel
values, 0 and 1, where 0 refers to black and 1 refers to white. This image is also
known as a monochrome image.
2. BLACK AND WHITE IMAGE– An image which consists of only black and white
colour is called a black and white image.
3. 8-bit COLOR FORMAT– This is the most common image format. It has 256
different shades and is commonly known as a grayscale image. In this format,
0 stands for black, 255 stands for white, and 127 stands for grey.
4. 16-bit COLOR FORMAT– This is a colour image format with 65,536 different
colours, also known as high colour format. In this format the distribution of values
differs from a grayscale image: the 16 bits are divided among three channels,
Red, Green, and Blue.
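As a small illustration of these formats, the sketch below builds an 8-bit grayscale array and thresholds it into a binary image; it assumes NumPy is installed, and the threshold 127 is chosen only for illustration.

import numpy as np

# An 8-bit grayscale image: intensities from 0 (black) to 255 (white).
gray = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)

# A binary image: 0 (black) or 1 (white), obtained here by
# thresholding the grayscale image at the mid-level 127.
binary = (gray > 127).astype(np.uint8)

print(gray.dtype, gray.min(), gray.max())   # uint8 0 255
print(np.unique(binary))                    # [0 1]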
Image as a Matrix
As we know, images are represented in rows and columns; an M × N digital image
can be written in the following matrix form:

           | f(0, 0)      f(0, 1)      ...  f(0, N-1)   |
f(x, y) =  | f(1, 0)      f(1, 1)      ...  f(1, N-1)   |
           | ...          ...          ...  ...         |
           | f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) |

The right side of this equation is a digital image by definition. Every element of this
matrix is called an image element, picture element, or pixel.
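In code, this matrix view maps directly onto a two-dimensional array. A minimal sketch, assuming NumPy:

import numpy as np

# A tiny 3 x 4 digital image: each matrix entry is the
# intensity f(x, y) of one pixel.
f = np.array([[ 10,  20,  30,  40],
              [ 50,  60,  70,  80],
              [ 90, 100, 110, 120]], dtype=np.uint8)

M, N = f.shape            # M = 3 rows, N = 4 columns
print(f[0, 0], f[2, 3])   # intensities of two pixels: 10 120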
SAMPLING AND QUANTIZATION
In digital image processing, two fundamental concepts are image sampling and quantization.
These processes are crucial for converting an analog image into a digital form that can be
stored, manipulated, and displayed by computers.
What is Image Sampling?
Image sampling is the process of converting a continuous (analog) image into a discrete
(digital) image by selecting specific points from the continuous image. This involves measuring
the image at regular intervals and recording the intensity (brightness) values at those points.
How Image Sampling Works
• Grid Overlay: A grid is placed over the continuous image, dividing it into small, regular sections.
• Pixel Selection: At each intersection of the grid lines, a sample point (pixel) is chosen.
Examples of Sampling
• High Sampling Rate: A digital camera with a high megapixel count captures more details
because it samples the image at more points.
• Low Sampling Rate: An old VGA camera with a lower resolution captures less detail because
it samples the image at fewer points.
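A minimal sketch of sampling, assuming NumPy; a fine grid stands in for the continuous image, and the sampling step of 4 is an illustrative choice.

import numpy as np

# A "continuous" image stood in for by a fine 256 x 256 grid.
fine = np.fromfunction(lambda x, y: (x + y) % 256,
                       (256, 256)).astype(np.uint8)

# Sampling: keep one pixel every 4 rows and 4 columns,
# i.e. measure the image on a coarser regular grid.
step = 4
sampled = fine[::step, ::step]

print(fine.shape, "->", sampled.shape)   # (256, 256) -> (64, 64)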
What is Image Quantization?
Image quantization is the process of converting the continuous range of pixel values (intensities) into
a limited set of discrete values. This step follows sampling and reduces the precision of the sampled
values to a manageable level for digital representation.
How Image Quantization Works
• Value Range Definition: The continuous range of pixel values is divided into a finite number of
intervals or levels.
• Mapping Intensities: Each sampled pixel intensity is mapped to the nearest interval value.
• Assigning Discrete Values: The original continuous intensity values are replaced by the
discrete values corresponding to the intervals.
Examples of Quantization
• High Quantization Levels: An image with 256 levels (8 bits per pixel) can represent shades of
grey more accurately.
• Low Quantization Levels: An image with only 4 levels (2 bits per pixel) has much less detail
and appears more posterized.
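A minimal sketch of quantization, assuming NumPy; it reduces 256 grey levels (8 bits) to 4 levels (2 bits), as in the low-level example above.

import numpy as np

# Some 8-bit pixel intensities (256 possible levels).
pixels = np.array([3, 64, 100, 130, 200, 255], dtype=np.uint8)

# Quantize to 4 levels: divide the 0-255 range into 4 equal intervals
# and map each intensity to the lower bound of its interval.
levels = 4
step = 256 // levels                  # interval width = 64
quantized = (pixels // step) * step   # each value becomes 0, 64, 128 or 192

print(quantized)   # [  0  64  64 128 192 192]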
Neighbours of a pixel
A pixel p at (x, y) has four horizontal and vertical neighbours at (x+1, y), (x-1, y), (x, y+1) and (x, y-1).
These are called the 4-neighbours of p: N4(p).
A pixel p at (x, y) has four diagonal neighbours at (x+1, y+1), (x+1, y-1), (x-1, y+1) and (x-1, y-1).
These are called the diagonal neighbours of p: ND(p).
The 4-neighbours and the diagonal neighbours of p together are called the 8-neighbours of p: N8(p).
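These neighbourhoods are easy to express in code. A minimal sketch in Python; for brevity it does not clip coordinates at the image border, which a real implementation would need to do.

def n4(x, y):
    # 4-neighbours of pixel p at (x, y): N4(p)
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    # diagonal neighbours of p: ND(p)
    return [(x + 1, y + 1), (x + 1, y - 1),
            (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    # 8-neighbours of p: the union of N4(p) and ND(p)
    return n4(x, y) + nd(x, y)

print(n4(2, 2))   # [(3, 2), (1, 2), (2, 3), (2, 1)]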
Adjacency between pixels
Let V be the set of intensity values used to define adjacency.
In a binary image, V ={1} if we are referring to adjacency of pixels with value 1. In a gray-scale image, the
idea is the same, but set V typically contains more elements.
For example, in the adjacency of pixels with a range of possible intensity values 0 to 255, set V could be
any subset of these 256 values.
We consider three types of adjacency:
a) 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
b) 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
c) m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent if
1. q is in N4(p), or
2. q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
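A minimal sketch of the m-adjacency test, reusing the n4 and nd helpers from the neighbourhood sketch above; img is assumed to be a NumPy array and V a set of intensity values.

import numpy as np

def m_adjacent(img, p, q, V):
    # p and q are (row, col) tuples; both must have values from V
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(*p):                   # condition 1: q is in N4(p)
        return True
    if q in nd(*p):                   # condition 2: q is in ND(p), and...
        common = set(n4(*p)) & set(n4(*q))
        # ...N4(p) ∩ N4(q) contains no pixel whose value is from V
        return all(not (0 <= x < img.shape[0] and
                        0 <= y < img.shape[1] and
                        img[x, y] in V)
                   for (x, y) in common)
    return False

img = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, 1]], dtype=np.uint8)
V = {1}
print(m_adjacent(img, (0, 1), (1, 1), V))   # True: q is in N4(p)
print(m_adjacent(img, (1, 1), (2, 2), V))   # True: diagonal, and no common
                                            # 4-neighbour has a value from V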
Connectivity between pixels
It is an important concept in digital image processing.
It is used for establishing boundaries of objects and components of regions in an image.
Two pixels are said to be connected:
• if they are adjacent in some sense (neighbouring pixels, 4/8/m-adjacency)
• if their gray levels satisfy a specified criterion of similarity (e.g. equal intensity levels)
There are three types of connectivity on the basis of adjacency. They are:
a) 4-connectivity: Two or more pixels are said to be 4-connected if they are 4-adjacent to each other.
b) 8-connectivity: Two or more pixels are said to be 8-connected if they are 8-adjacent to each other.
c) m-connectivity: Two or more pixels are said to be m-connected if they are m-adjacent to each other.
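To make connectivity concrete, here is a minimal sketch that counts the 4-connected components of foreground pixels (value 1) in a binary image using a simple flood fill; it assumes NumPy, and the test image is an illustrative example.

import numpy as np
from collections import deque

def count_4_connected_components(img):
    # Count groups of pixels with value 1 that are linked
    # through chains of 4-adjacent pixels.
    visited = np.zeros(img.shape, dtype=bool)
    count = 0
    for sx in range(img.shape[0]):
        for sy in range(img.shape[1]):
            if img[sx, sy] == 1 and not visited[sx, sy]:
                count += 1                    # found a new component
                queue = deque([(sx, sy)])     # flood-fill it
                visited[sx, sy] = True
                while queue:
                    x, y = queue.popleft()
                    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                        if (0 <= nx < img.shape[0] and 0 <= ny < img.shape[1]
                                and img[nx, ny] == 1 and not visited[nx, ny]):
                            visited[nx, ny] = True
                            queue.append((nx, ny))
    return count

img = np.array([[1, 1, 0],
                [0, 0, 0],
                [0, 1, 1]], dtype=np.uint8)
print(count_4_connected_components(img))   # 2 components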