Image Segmentation

Image segmentation is a crucial computer vision task that separates objects and boundaries within images for analysis. It has various applications, including medicine, remote sensing, and quality control, and can be classified into techniques based on input cues and context. Key methods include boundary detection, pixel-based segmentation, and regional processing, each with specific algorithms and approaches to enhance image understanding.

image processing

Fundamentals of

IMAGE SEGMENTATION

INTRODUCTION

Image segmentation is one of the key computer vision tasks. It separates objects, boundaries, or structures within the image for more meaningful analysis.

Image segmentation plays an important role in extracting meaningful information from images, enabling computers to perceive and understand visual data much as humans view and interpret it.

EVERYDAY APPLICATIONS

Medicine: recognizing tumors, tissues, etc.
Remote Sensing: land-use classification.
Inspection/Quality Control: defect detection in products.
Special Cases: Alzheimer's patients benefit from color-based recognition.
Color Use: color is stable and helps in identifying objects clearly (more so than shape or texture).

CLASSIFICATION OF SEGMENTATION METHODS

Taxonomy of Techniques (Illustrative Tree Diagram)

Segmentation techniques can be grouped as follows:
➤ Based on Input Cues
Discontinuity-Based:
o Edge-Based Methods: Focus on sudden intensity changes (e.g., Sobel, Canny).
Similarity-Based:
o Region-Based Methods: Group pixels with similar properties (e.g., region growing, splitting/merging).
o Clustering Methods: K-means, hard/soft clustering based on similarity.
➤ Based on Context
Contextual: Relationships between features are considered (e.g., region growing).
Non-Contextual: Ignores spatial relationships (e.g., thresholding).
➤ Other Methods
Threshold-Based: Simple, grayscale-based conversion to a binary image.
Graphical Segmentation: Implied through structural methods.
Model-Based:
o Watershed: Topography-based separation.
o PDE-Based (Level-Set): Treats segmentation as solving differential equations over the image.
o ANN-Based: Uses neural networks for learning-based segmentation.

✦ Features Used in Segmentation
Shape, texture, and color are the primary cues.
Color segmentation is powerful but can be affected by visual similarity or naming confusion.
Color Spaces Used:
o Hardware-based: RGB, CMY, YIQ
o User-based: HSV, HSL, CIE-LUV, etc.

BOUNDARY DETECTION-BASED
TECHNIQUES IN IMAGE SEGMENTATION
Boundary detection is an approach in image segmentation focused on finding edges or boundaries: the sharp, significant changes in pixel intensity that typically separate different regions or objects within an image. In segmentation, boundaries help distinguish individual parts, enabling clear identification of objects or regions.

Use of Gradients and Intensity Transitions
Gradients measure how quickly pixel values (intensity) change across an image. High gradients indicate possible edges or boundaries. Intensity transitions are places where the image goes quickly from dark to light (or vice versa), usually marking the separation between objects.

Mathematical Condition for Boundary Detection
A boundary in a grayscale image is typically where the gradient magnitude is high. Mathematically, if f(x,y) is the intensity at pixel (x,y), the gradient is ∇f = (∂f/∂x, ∂f/∂y), and an edge (boundary) exists where its magnitude |∇f| = √((∂f/∂x)² + (∂f/∂y)²) is large.

(If an image were attached, it would show: original image → edge-detected image, e.g., using the Canny or Sobel operator.)

Importance of Boundary-Based Segmentation

Boundary-based segmentation, often referred to as edge detection, specifically focuses on finding the sharp transitions or discontinuities between different areas in an image. The key points about its importance include:
Precise Object Localization: Boundary-based techniques accurately identify where one object ends and another begins, which is vital for applications like medical diagnosis (e.g., tumor boundaries), industrial inspection, and object recognition.
Efficient Processing: Detecting boundaries reduces the complexity of subsequent tasks (like object tracking or classification) by providing a simplified structural outline.
Foundational for Further Analysis: Accurately detected boundaries can act as a bridge between segmentation and higher-level recognition, helping generate candidate object locations for more advanced tasks.

This image compares different boundary detection techniques on a leg segmentation task. The first panel shows the ground-truth boundaries (red), while the next three panels show boundaries detected by Pb (probability of boundary), motion-based, and combined methods. The combined approach more closely matches the ground truth, demonstrating improved segmentation by using both appearance and motion information.

PIXEL-BASED SEGMENTATION

Pixel-based segmentation (also called point-based) is the simplest way to divide or separate parts of an image. Instead of using complicated AI or deep learning models, this method uses basic image processing techniques to identify which pixels belong to which object.

Every pixel has a gray value (if it is a grayscale image). Pixel-based segmentation checks each pixel's value and decides: "Is this part of the object?" or "Is it background?"

If you set a threshold (limit), you can divide pixels into object or background:
Bright pixels → object
Dark pixels → background
(Or the other way around, depending on the case.)

Segmentation with a global threshold: (a) original image; (b) histogram; (c)-(e) upper right sector of (a) segmented with global thresholds of 110, 147, and 185, respectively.
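The bright-pixels-to-object rule above amounts to one comparison per pixel. A minimal sketch in Python with NumPy (the image values and the threshold of 128 are invented for illustration):

```python
import numpy as np

def global_threshold(image, T):
    """Classify each pixel as object (1) if it is brighter than the
    global threshold T, else background (0)."""
    return (image > T).astype(np.uint8)

# Toy 4x4 grayscale "image": a bright object on a dark background.
img = np.array([[ 10,  12,  11,  10],
                [ 12, 200, 210,  11],
                [ 10, 205, 198,  12],
                [ 11,  10,  12,  10]], dtype=np.uint8)

mask = global_threshold(img, T=128)  # 1 where the object is
```

Lowering or raising T reproduces the effect shown in panels (c)-(e) of the figure: more or fewer pixels end up labeled as object.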

PIXEL-BASED SEGMENTATION

What if Lighting Is Uneven?
Uneven lighting makes segmentation harder, so the first step is to try to fix or adjust the lighting. If fixing it is not possible, we must identify and correct for the uneven brightness using special image filters or processing.

Role of the Histogram
A histogram shows how pixel brightness values are distributed. Thresholding is the simplest image segmentation method: it works by classifying pixels based on gray-level intensity. If there is a clear difference between object and background, the histogram will look bimodal, i.e. it will have two peaks:
One peak = object pixels
One peak = background pixels
But often you get a gray area in between (intermediate values), especially for small or blurry objects.
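One common way to turn a bimodal histogram into a concrete threshold (not named on the slide, added here as an illustration) is Otsu's method, which picks the threshold that maximizes the between-class variance; the toy image is invented:

```python
import numpy as np

def otsu_threshold(image):
    """Pick a threshold from the gray-level histogram by maximizing
    the between-class variance (Otsu's method)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # one class empty: not a valid split
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Clearly bimodal toy image: dark background (~20), bright object (~200).
img = np.array([[20, 22, 21, 200],
                [23, 199, 201, 202],
                [21, 198, 200, 22]], dtype=np.uint8)
t = otsu_threshold(img)  # lands between the two histogram peaks
```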


POINT DETECTION

Detects isolated points (intensity changes at a single pixel). Uses a mask/kernel (like the Laplacian or other second-order derivatives) to highlight pixels that differ from their surroundings. Example: a high-pass filter to detect small bright spots.

(a) Point detection (Laplacian) mask. (b) X-ray image of a turbine blade with a porosity; the porosity contains a single black pixel. (c) Result of combining the mask with the image. (d) Result of thresholding the response, showing a single point (the point was enlarged to make it easier to see). (Original image courtesy of X-TEK Systems, Ltd.)
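The mask-plus-threshold procedure of panels (a)-(d) can be sketched as follows; the tiny image and the threshold T are invented stand-ins for the turbine-blade example:

```python
import numpy as np

def detect_points(image, T):
    """Flag a pixel when the absolute response of a 3x3 Laplacian-style
    point-detection mask meets or exceeds the threshold T."""
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
    img = image.astype(float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            r = (mask * img[y-1:y+2, x-1:x+2]).sum()
            out[y, x] = abs(r) >= T
    return out

# A single dark pixel inside a uniform bright region, like the porosity.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 0
pts = detect_points(img, T=400)  # only the isolated point is flagged
```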

• Abrupt, local changes can be detected by spatial differentiation.
• Since images are digital, we should use digital differentiation operators.
• First- and second-order derivatives are commonly used.
• We consider the 1-D case first and then extend the results to images.

(a) Image. (b) Horizontal intensity profile through the center of the image, including the isolated noise point. (c) Simplified profile (the points are joined by dashes for clarity). The image strip corresponds to the intensity profile, and the numbers in the boxes are the intensity values of the dots shown in the profile. The derivatives were obtained using Eqs.

• Comparing first- and second-order derivatives:
1. The first-order derivative tells us how fast the brightness is changing.
2. The second-order derivative tells us where the intensity is increasing or decreasing sharply, which is great for finding corners or isolated points.

Laplacian
The Laplacian is a second-order derivative operator used in image processing, especially in boundary detection and point detection techniques. It highlights regions of rapid intensity change, helping to detect edges, points, and fine details in an image.
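The 1-D comparison can be reproduced with simple digital differences (a sketch; the step-edge profile values are made up):

```python
import numpy as np

# A 1-D intensity profile containing an ideal step edge.
profile = np.array([5, 5, 5, 9, 9, 9], dtype=float)

# First derivative: forward difference f(x+1) - f(x).
first = np.diff(profile)

# Second derivative: f(x+1) - 2*f(x) + f(x-1), the 1-D Laplacian.
second = np.convolve(profile, [1, -2, 1], mode='valid')

# The first derivative gives a single nonzero response at the edge; the
# second derivative gives a positive/negative pair whose zero crossing
# marks the edge center.
```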

LINE DETECTION
Line detection in image processing is the
technique of identifying straight lines within an
image by analyzing a set of edge points. It aims to
find configurations where these points align to
form lines, which are important structural
features.

Line detection finds aligned points forming straight lines.
The Hough Transform maps edge points to a parameter space to detect lines.
Convolution with directional masks highlights lines of a certain orientation.
Detecting lines is critical for structuring image segmentation and analysis.
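The directional-mask idea can be shown with the horizontal member of the standard 3x3 line-mask family (the tiny test image is invented):

```python
import numpy as np

# Horizontal line-detector mask; masks for the other orientations
# (+45°, vertical, -45°) rotate the pattern of 2s.
mask = np.array([[-1, -1, -1],
                 [ 2,  2,  2],
                 [-1, -1, -1]], dtype=float)

# A one-pixel-thick horizontal bright line in a dark image.
img = np.zeros((5, 5))
img[2, :] = 10.0

h, w = img.shape
resp = np.zeros_like(img)
for y in range(1, h - 1):
    for x in range(1, w - 1):
        resp[y, x] = (mask * img[y-1:y+2, x-1:x+2]).sum()
# The response is strongly positive on the line and negative beside it.
```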

Line detection: if each mask is moved around an image, it responds more strongly to lines in its corresponding direction.

Line detection masks. Angles are with respect to the axis system in Fig. (b).

(a) Image of a wire-bond template. (b) Result of processing with the +45° line detector mask. (c) Zoomed view of the top left region of (b). (d) Zoomed view of the bottom right region of (b). (e) The image in (b) with all negative values set to zero. (f) All points in (e) whose values satisfy the condition g ≥ T, where g is the image in (e). (The points in (f) were enlarged to make them easier to see.)

EDGE DETECTION

Edge detection is a fundamental image processing technique used to identify the boundaries or edges of objects within an image by detecting sudden changes in intensity or color. These edges represent significant transitions that separate different regions or objects in the scene. An image may contain all three types of edges.

From left to right: models (ideal representations) of a step, a ramp, and a roof edge, and their corresponding intensity profiles.

A 1508 × 1970 image showing (zoomed) actual ramp (bottom, left), step (top, right), and roof edge profiles. The profiles are from dark to light, in the areas indicated by the short line segments shown in the small circles. The ramp and step profiles span 9 pixels and 2 pixels, respectively. The base of the roof edge is 3 pixels. (Original image courtesy of Dr. David R. Pickens, Vanderbilt University.)


First- and second-order derivative responses at the edge:

(a) Two regions of constant intensity separated by an ideal vertical ramp edge. (b) Detail near the edge, showing a horizontal intensity profile together with its first and second derivatives.

Common edge detectors:
Sobel Operator: detects horizontal/vertical edges.
Prewitt Operator: similar to Sobel but simpler.
Roberts Operator: detects diagonal edges.

1. The magnitude of the first derivative can be used to detect the presence of an edge at a point.
2. The sign of the second derivative indicates which side of the edge the pixel is on.
3. Zero crossings of the second derivative can be used to locate the centers of thick edges.

The second derivative produces two values for every edge in an image. An imaginary straight line joining the extreme positive and negative values of the second derivative would cross zero near the midpoint of the edge; this zero-crossing point is the center of a thick edge.
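A minimal sketch of the Sobel operator listed above, computing the gradient magnitude at each interior pixel (the test image with a vertical step edge is invented):

```python
import numpy as np

def sobel_magnitude(image):
    """Gradient magnitude from the 3x3 Sobel kernels: Gx responds to
    vertical edges, Gy to horizontal ones."""
    gx_k = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)
    gy_k = gx_k.T
    img = image.astype(float)
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y-1:y+2, x-1:x+2]
            mag[y, x] = np.hypot((gx_k * win).sum(), (gy_k * win).sum())
    return mag

# Vertical step edge: dark left half, bright right half.
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 100
mag = sobel_magnitude(img)  # large along the step, zero in flat areas
```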

STEPS IN EDGE DETECTION

1. Image smoothing for noise reduction.
2. Detection of edge points (this extracts from an image all points that are potential candidates to become edge points).
3. Edge localization (this selects from the candidate edge points only the points that are true members of the set of points comprising an edge).

Ramp edge corrupted by steadily increasing amounts of additive Gaussian noise. First column: images and intensity profiles of a ramp edge corrupted by random Gaussian noise of zero mean and standard deviations of 0.0, 0.1, 1.0, and 10.0 intensity levels, respectively. Second column: first-derivative images and intensity profiles. Third column: second-derivative images and intensity profiles.

EDGE LINKING & BOUNDARY DETECTION

The set of pixels produced by edge-detecting algorithms seldom defines a boundary completely, because of noise, breaks in the boundary, etc. Therefore, edge-detecting algorithms are typically followed by linking and other detection procedures designed to assemble edge pixels into meaningful boundaries. There are two types: local and regional processing.

LOCAL PROCESSING    REGIONAL PROCESSING



LOCAL PROCESSING

Analyse the characteristics of pixels in a small neighbourhood (3×3 or 5×5) about every point that has undergone edge detection. All points that are similar are linked, forming a boundary of pixels that share some common properties.

Two principal properties for establishing similarity of edge pixels:
• strength (magnitude) of the response of the gradient operator used to produce the edge pixel;
• direction of the gradient.

A point in the predefined neighbourhood of (x,y) is linked to the pixel at (x,y) if both magnitude and direction criteria are satisfied. This process is repeated for every location in the image.
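The two similarity tests can be sketched directly (a toy example; the magnitude/angle arrays and both tolerances are invented):

```python
import numpy as np

def link_similar(mag, ang, y, x, mag_tol, ang_tol):
    """Return the 3x3 neighbours of (y, x) whose gradient magnitude and
    direction are both within the given tolerances of the centre pixel,
    i.e. the neighbours that get linked to it."""
    linked = []
    h, w = mag.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and abs(mag[ny, nx] - mag[y, x]) <= mag_tol
                    and abs(ang[ny, nx] - ang[y, x]) <= ang_tol):
                linked.append((ny, nx))
    return linked

# Gradient magnitude and direction (degrees) around one edge pixel.
mag = np.array([[50.0, 52.0, 10.0],
                [49.0, 51.0, 11.0],
                [12.0, 50.0, 50.0]])
ang = np.array([[90.0, 92.0, 0.0],
                [88.0, 90.0, 5.0],
                [10.0, 91.0, 45.0]])
links = link_similar(mag, ang, 1, 1, mag_tol=5.0, ang_tol=10.0)
```

Note that pixel (2, 2) has a similar magnitude but fails the direction test, so it is not linked: both criteria must hold.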

LOCAL PROCESSING

Example: finding a license plate.

(a) Input image. (b) Gradient component from the vertical Sobel operator. (c) Gradient component from the horizontal Sobel operator. (d) Result of edge linking. (Courtesy of Perceptics Corporation.)

Linking points with:
Gradient value > 25.
Gradient directions that did not differ by more than 15°.

Further processing:
Short breaks are closed.
Isolated segments are removed.

REGIONAL PROCESSING

Regional processing in image processing refers to applying operations to specific regions of an image rather than the whole image. It focuses on groups of connected pixels (regions) that share similar characteristics like color, intensity, or texture.

Key Concepts of Regional Processing
Region: a group of connected pixels with similar properties.
Segmentation: the process of dividing the image into regions.
Processing: applying techniques only on the segmented regions.

Techniques:
Region Growing: starts from a seed point and adds neighboring pixels with similar values.
Region Splitting: recursively splits the image into smaller regions until homogeneity is achieved.
Region Merging: opposite of splitting; merges adjacent regions with similar properties.
Connected Component Labeling: labels groups of connected pixels (useful in binary images).
Morphological Operations: shape-based region processing using erosion, dilation, etc.
Watershed Algorithm: treats a grayscale image like a topographic surface and finds boundaries between regions.
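Region growing from the list above can be sketched in a few lines as a 4-connected flood fill; the image, seed, and tolerance are invented:

```python
import numpy as np

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    gray value is within `tol` of the seed value (one common similarity
    rule; others compare against the running region mean)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if mask[y, x]:
            continue
        mask[y, x] = True
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                stack.append((ny, nx))
    return mask

# Two homogeneous regions; the seed sits in the brighter left block.
img = np.array([[100, 101,  20,  21],
                [ 99, 102,  22,  20],
                [100, 100,  21,  22]], dtype=np.uint8)
region = region_grow(img, seed=(0, 0), tol=5)  # covers the left block only
```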

REGIONAL PROCESSING

A conceptual understanding of this idea is sufficient.

Requirements:
(1) Two starting points must be specified.
(2) All the points must be ordered.

Large distance between successive points, relative to the distance between other points ⇒ boundary segment (open curve) ⇒ end points used as starting points.
Separation between points uniform ⇒ boundary (closed curve) ⇒ extreme points used as starting points.

(a)-(d) Illustration of the iterative polygonal fit algorithm.

EDGE LINKING USING POLYGONAL APPROXIMATION

(a) A set of points in a clockwise path (the points labeled A and B were chosen as the starting vertices). (b) The distance from point C to the line passing through A and B is the largest of all the points between A and B and also passes the threshold test, so C is a new vertex. (d)-(g) Various stages of the algorithm. (h) The final vertices, shown connected with straight lines to form a polygon. Table 10.1 shows step-by-step details.
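The split step described in (b) is the core of a recursive polygonal fit. A Douglas-Peucker style sketch (the point set and threshold are invented, and the slide's exact bookkeeping may differ):

```python
import numpy as np

def farthest_point(points, a, b):
    """Index and perpendicular distance of the point farthest from the
    infinite line through a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    norm = np.linalg.norm(ab)
    best_i, best_d = -1, -1.0
    for i, p in enumerate(points):
        ap = np.asarray(p, float) - a
        d = abs(ab[0] * ap[1] - ab[1] * ap[0]) / norm  # 2-D cross product
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d

def poly_fit(points, threshold):
    """Keep the farthest point as a new vertex while its distance to the
    current chord exceeds the threshold; otherwise keep only the chord."""
    if len(points) <= 2:
        return list(points)
    i, d = farthest_point(points[1:-1], points[0], points[-1])
    if d <= threshold:
        return [points[0], points[-1]]
    i += 1  # re-index into the full list
    left = poly_fit(points[:i + 1], threshold)
    right = poly_fit(points[i:], threshold)
    return left[:-1] + right  # drop the duplicated split vertex

pts = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]  # a zigzag path
fine = poly_fit(pts, threshold=0.5)    # keeps every zigzag vertex
coarse = poly_fit(pts, threshold=3.0)  # collapses to the end points
```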

HOUGH TRANSFORM

The Hough transform in image processing is a technique used to detect simple geometric shapes in images. It works by transforming the image space into a parameter space, where the geometric shapes can be detected through the identification of patterns in the parameter space.

The Hough transform helps detect shapes even when they're broken or hidden, unlike normal edge detection. It works by turning the image into a different space where shapes are easier to spot.

Image after applying an edge detection technique; the red circles show where the line is broken. After using the Hough transform, the broken line is recovered.

How does it work?
The Hough transform works by converting each pixel in the image into a curve in a parameter space (like slope and intercept for lines). Where many curves intersect in this space indicates the presence of a line in the image.

HOUGH TRANSFORM

(a_min, a_max): expected range of slopes. (b_min, b_max): expected range of intercepts. A(i,j): the number of points in the cell at coordinates (i,j) in the ab-plane.

• The cell with the largest value shows the parameters of the line that contains the maximum number of points.
• Problem with this method: the slope a approaches infinity as the line gets perpendicular to the x-axis.
• Solution: use the normal representation of the line: x cos θ + y sin θ = ρ.

HOUGH TRANSFORM

The Hough transform is applicable to any function of the form g(v,c) = 0, where v is the vector of coordinates and c is the vector of coefficients. With 3 parameters (c1, c2, c3), the parameter space is 3-D, with cube-like cells and accumulators of the form A(i,j,k).

Algorithm
The algorithm of the Hough transform in image processing can be summarized as follows:
1. For each pixel in the image, compute all the possible curves in the parameter space that pass through that pixel.
2. For each curve in the parameter space, increment the corresponding accumulator array cell.
3. Analyze the accumulator array to detect the presence of simple geometric shapes in the image.
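The three algorithm steps, using the normal form x cos θ + y sin θ = ρ recommended earlier, can be sketched as follows (the point set and accumulator sizes are invented):

```python
import numpy as np

def hough_lines(points, n_theta=180, max_rho=100):
    """Accumulate votes in (rho, theta) space: each edge point (x, y)
    votes for every discretized line x*cos(theta) + y*sin(theta) = rho
    that passes through it."""
    thetas = np.deg2rad(np.arange(n_theta))  # 0..179 degrees
    acc = np.zeros((2 * max_rho, n_theta), dtype=int)
    cols = np.arange(n_theta)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + max_rho, cols] += 1  # offset so negative rho fits
    return acc

# Ten collinear edge points on the horizontal line y = 5.
pts = [(x, 5) for x in range(10)]
acc = hough_lines(pts)
# The cell for theta = 90 deg, rho = 5 receives all ten votes, so the
# accumulator maximum identifies the line even if it is broken.
```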

THANK YOU
22011BC028
22011BC029
22011BC030
22011BC031
22011BC032
