MEDICAL IMAGE PROCESSING LABORATORY

LABORATORY RECORD

Submitted by

RASHNI PREETHI C (2021116031)

BM5712 – MEDICAL IMAGE PROCESSING LABORATORY

SEMESTER – VII
B.E BIOMEDICAL ENGINEERING (FULL TIME)
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

COLLEGE OF ENGINEERING, GUINDY,


ANNA UNIVERSITY, CHENNAI – 25.

NOVEMBER, 2024

ANNA UNIVERSITY
COLLEGE OF ENGINEERING, GUINDY

BONAFIDE CERTIFICATE

NAME     : RASHNI PREETHI C
CLASS    : B.E BIOMEDICAL ENGINEERING
ROLL NO. : 2021116031

Certified that this is the bonafide record of work done by the above student
in the BM5712 MEDICAL IMAGE PROCESSING LABORATORY during
August 2024 – November 2024.

Signature of Lab-in-charge Signature of head of the Department

Submitted for the Practical Examination held on .

Internal Examiner
INDEX

Sl. No.  Name of the Experiment                          Page No.  Date of Experiment  Marks (Max 10)  Signature of the Staff
1.       Conversion of RGB to Grayscale Image            1         16/08/2024
2.       Histogram Equalization                          7         23/08/2024
3.       Linear Spatial Filtering                        15        30/08/2024
4.       Non-Linear Spatial Filtering                    22        06/09/2024
5.       Filtering in Frequency Domain                   31        13/09/2024
6.       Edge Detection Operator                         39        20/09/2024
7.       Image Compression using DCT                     49        27/09/2024
8.       Steganography                                   53        04/10/2024
9.       Conversion of Colour Spaces                     59        11/10/2024
10.      Medical Image Fusion                            71        18/10/2024
11.      Image Segmentation Using Watershed Transform    77        25/10/2024
12.      Feature Extraction in Medical Images            83        08/11/2024
EXPT. NO. 01
21/08/2024 CONVERSION OF RGB TO GRAYSCALE IMAGE

AIM:
To convert the RGB image to a grayscale image and display them.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
An RGB image is essentially three images layered on top of one another—red,
green, and blue—with each pixel having an 8-bit intensity value ranging from 0 to 255.
To store a single pixel of an RGB image, 8 bits of all three colors are required, for a total
of 24 bits per pixel.
On the other hand, a grayscale or gray-level image is simply an image in which
the only colors present are shades of gray. The grayscale value requires only an 8-bit
value per pixel.
In order to convert an RGB image into grayscale, the 24 bits per pixel are brought
down to 8 bits per pixel; that is, a grayscale image is about 33% of the size of an RGB image.
There are two methods of conversion, namely
 AVERAGE METHOD:
Grayscale value = (R + G + B)/3
 WEIGHTED METHOD:
Grayscale Value = 0.299R + 0.587G + 0.114B
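
A minimal sketch of both methods, assuming one of MATLAB's bundled demo images (peppers.png); working in double avoids the saturation that uint8 arithmetic causes in the plain average, and the in-built rgb2gray uses essentially the same weights as the weighted method.

rgb = imread('peppers.png');                          % demo image shipped with MATLAB (assumed available)
rgbd = im2double(rgb);                                % double precision avoids uint8 saturation
avg = (rgbd(:,:,1) + rgbd(:,:,2) + rgbd(:,:,3))/3;    % average method
wgt = 0.299*rgbd(:,:,1) + 0.587*rgbd(:,:,2) + 0.114*rgbd(:,:,3);   % weighted method
figure();
subplot(1,3,1); imshow(rgb2gray(rgb)); title('In-Built rgb2gray');
subplot(1,3,2); imshow(avg); title('Average Method');
subplot(1,3,3); imshow(wgt); title('Weighted Method');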

ALGORITHM:
STEP 1: Start
STEP 2: Read the input image from MATLAB directory.
STEP 3: Separate the red, green and blue channel values in the input image and assign
them to different variables.

STEP 4: Using average and weighted method, obtain new intensity values for the
input image and display the grayscale image.
STEP 5: Stop.

PROGRAM:
clc; clear all; close all;
I = imread('C:\Users\Admin\Downloads\pet2.jpg');
figure();
subplot(2,2,1); imshow(I); title("Original Coloured Image");
i = rgb2gray(I);
subplot(2,2,2); imshow(i); title("Grayscale Image using In-Built Function");
r = I(:,:,1); g = I(:,:,2); b = I(:,:,3);
gray1 = (r+g+b)/3;                               % average method (uint8 arithmetic)
subplot(2,2,3); imshow(gray1); title("Grayscale Image using Average Method");
gray = (0.299*r + 0.587*g + 0.114*b);            % weighted method
subplot(2,2,4); imshow(gray); title("Grayscale Image using Weighted Average Method");

OUTPUT:

INFERENCE:
Grayscale images obtained using weighted average method and average method
are of varying image clarity. The conversion of color image to grayscale image with
weighted average technique gives the grayscale image with better clarity.

This is because the weighted average method takes into account the human eye’s
sensitivity to different colors and assigns higher weights to the green and red channels,
which are more relevant to our perception of brightness.

In contrast, the average method treats all color channels equally, which might not
accurately reflect the human perception of brightness.

RESULT:
The program to convert RGB images to grayscale images has been written and
executed in MATLAB.

EXPT. NO. 02
21/08/2024 HISTOGRAM EQUALIZATION

AIM:
To enhance the poor contrast of the input image using histogram equalization
technique.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
A histogram is a commonly used graph to show a frequency distribution. It shows how often each different value in a set of data occurs.
In digital image processing, the histogram of an image is a gray scale value distribution showing the frequency of each gray scale value. In order to modify the dynamic range and contrast of an image, the intensity values in the histogram can be altered. This technique of adjusting image intensities to enhance contrast is known as histogram equalization.
It is carried out in the following steps:
A. Arrange the pixel intensities in ascending order.
B. Compute the number of occurrences of each pixel intensity value.
C. Calculate the probability density function (PDF) for each intensity value.
D. Calculate the cumulative distribution function (CDF) from the PDF.
E. Calculate the new value of each pixel by multiplying the CDF by the number of intensity levels minus one: CDF x (L-1).
F. Round off the new value to the nearest integer.
G. Map each pixel onto its new value to obtain the equalized histogram.
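
A minimal sketch of these steps, assuming the low-contrast demo image pout.tif that ships with MATLAB; the toolbox function histeq is shown alongside only for comparison.

g = imread('pout.tif');                  % low-contrast grayscale demo image (assumed available)
cnt = imhist(g);                         % occurrences of each of the 256 intensity values
pdf = cnt / numel(g);                    % probability density function
cdf = cumsum(pdf);                       % cumulative distribution function
map = uint8(round(cdf * 255));           % CDF x (L-1), rounded to the nearest integer
eq = map(double(g) + 1);                 % map every pixel onto its new intensity
figure();
subplot(2,2,1); imshow(g); title('Input Image');
subplot(2,2,2); imhist(g); title('Input Histogram');
subplot(2,2,3); imshow(eq); title('Equalized Image');
subplot(2,2,4); imhist(histeq(g)); title('Histogram after histeq');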

ALGORITHM:
STEP 1: Start
STEP 2: Read the input image from MATLAB directory.
STEP 3: Get the histogram of the input image.
STEP 4: Using the appropriate function, generate PDF out of histogram by dividing by
total number of pixels.
STEP 5: Compute the CDF from the values of PDF.
STEP 6: Initialize a matrix with the same dimensions as the input image.
STEP 7: Pad the elements of the matrix with zeros.
STEP 8: Replace the zeros in the matrix with the new CDF values.
STEP 9: Get the histogram of the new image.
STEP 10: Display the input and output images with their corresponding histograms.
STEP 11: Stop.

PROGRAM:
clc; clear all; close all;
I = imread('C:\Users\Admin\Downloads\CT3.png');
figure(); hold on;
i = rgb2gray(I);
s = size(i);
subplot(2,2,1); imshow(i); title('Low Contrast Image');
c = zeros(1,256);                        % histogram of the input image
for j = 1:s(1)
    for k = 1:s(2)
        a = i(j,k);
        c(a+1) = c(a+1)+1;
    end
end
subplot(2,2,2); stem(0:255,c); title('Low Contrast Image Histogram');

pdf = c/(s(1)*s(2));                     % probability of each intensity
t = pdf(1);
for g = 2:256                            % running sum turns the PDF into the CDF
    pdf(g) = pdf(g) + t;
    t = pdf(g);
end
cdff = round(pdf*255);                   % CDF x (L-1)

for hi = 1:s(1)
    for j = 1:s(2)
        t = i(hi,j);
        ni(hi,j) = cdff(t+1);            % map each pixel onto its new intensity
    end
end
subplot(2,2,3); imshow(uint8(ni)); title('Contrast Enhanced Image');
c = zeros(1,256);                        % histogram of the equalized image
for j = 1:s(1)
    for k = 1:s(2)
        a = ni(j,k);
        c(a+1) = c(a+1)+1;
    end
end
subplot(2,2,4); stem(0:255,c); title('Contrast Enhanced Image Histogram');

OUTPUT:

INFERENCE:
A histogram is a graphical plot showing the number of pixels present in an image at each intensity level, and it therefore acts as a tool to enhance the image contrast through histogram equalization.

Histogram equalization redistributes the intensity levels of the pixels in the image so that they cover a wider range of values, resulting in a more balanced and uniform histogram.

RESULT:
The program to perform histogram equalization has been written and executed in
MATLAB.

EXPT. NO. 03
28/08/2024 LINEAR SPATIAL FILTERING

AIM:
To create a MATLAB program to reduce the noise in the input image using linear spatial filtering techniques - Gaussian and Average filters.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
A digital image is viewed as a 2D function where the x-y plane indicates spatial position information, called the spatial domain. Filtering based on the x-y space neighborhood is called spatial domain filtering.
The filtering process in image processing is used to reduce image noise. In the spatial domain, neighborhood averaging is commonly used to achieve smoothing.
Commonly used filters include average filters and Gaussian filters. These filters are used for linear spatial filtering; that is, the result is the sum of the products of the mask coefficients with the corresponding pixels under the mask.

MASK FOR AVERAGE FILTER

1/9 1/9 1/9

1/9 1/9 1/9

1/9 1/9 1/9

MASK FOR GAUSSIAN FILTER

1/16 2/16 1/16

2/16 4/16 2/16

1/16 2/16 1/16

FORMULA:
g(x,y) = ΣΣ w(s,t) · f(x+s, y+t), where the sums run over s, t = -1, 0, 1 for a 3x3 mask w.

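A short cross-check of these masks using the toolbox functions fspecial and imfilter, assuming the MRI3.jpg file used in the program below is on the MATLAB path; the PSNR is computed against the noisy image, mirroring the psnr_f function of the program.

g = rgb2gray(imread('MRI3.jpg'));            % assumed: the image file used in the program below
noisy = imnoise(g, 'salt & pepper', 0.02);
havg = fspecial('average', 3);               % 3x3 mask of 1/9 values
hgau = fspecial('gaussian', 3, 0.85);        % 3x3 Gaussian approximation of the mask above
favg = imfilter(noisy, havg, 'replicate');
fgau = imfilter(noisy, hgau, 'replicate');
fprintf('PSNR, average filter  : %.2f dB\n', psnr(favg, noisy));
fprintf('PSNR, Gaussian filter : %.2f dB\n', psnr(fgau, noisy));
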
ALGORITHM:
STEP 1: Start
STEP 2: Read the input image from MATLAB directory.
STEP 3: Add noise to the image using inbuilt functions.
STEP 4: Design a mask for Average and Gaussian filter using appropriate function.

STEP 5: Filter the noisy image using the mask.


STEP 6: Display the original image, image with noise and filtered image for Average
and Gaussian filters and the PSNR value.

STEP 7: Stop

PROGRAM:

clc; clear all; close all;
I = imread('C:\Users\Admin\Downloads\MRI3.jpg');
m = [1,1,1;1,1,1;1,1,1];                         % average filter mask (normalized inside avg_filter)
m1 = [2,4,2;4,8,4;2,4,2];                        % Gaussian filter mask (normalized inside avg_filter)
ii = rgb2gray(I);
i = imnoise(ii,'salt & pepper',0.02);
subplot(2,2,1); imshow(I); title("Original Image");
subplot(2,2,2); imshow(i); title('Image with Salt and Pepper Noise');
[a,sa] = zero_padding(i);
avg_filter(m,sa,a,i); avg_filter(m1,sa,a,i);
figure();
i = imnoise(ii,'gaussian');
subplot(2,2,1); imshow(I); title("Original Image");
subplot(2,2,2); imshow(i); title('Image with Gaussian Noise');
[a,sa] = zero_padding(i);
avg_filter(m,sa,a,i); avg_filter(m1,sa,a,i);
figure();
i = imnoise(ii,'speckle');
subplot(2,2,1); imshow(I); title("Original Image");
subplot(2,2,2); imshow(i); title('Image with Speckle Noise');
[a,sa] = zero_padding(i);
avg_filter(m,sa,a,i); avg_filter(m1,sa,a,i);
figure();
i = imnoise(ii,'poisson');
subplot(2,2,1); imshow(I); title("Original Image");
subplot(2,2,2); imshow(i); title('Image with Poisson Noise');
[a,sa] = zero_padding(i);
avg_filter(m,sa,a,i); avg_filter(m1,sa,a,i);

function [a,sa] = zero_padding(i)
    % Pad the image with a one-pixel border of zeros
    s = size(i);
    a(s(1)+2,s(2)+2) = 0;
    sa = size(a);
    for k = 2:sa(1)-1
        for j = 2:sa(2)-1
            a(k,j) = i(k-1,j-1);
        end
    end
end

function avg_filter(m,sa,a,i)
    % Slide the 3x3 mask over the padded image and take the weighted mean
    for b = 1:sa(1)-2
        for c = 1:sa(2)-2
            x = a(b:b+2,c:c+2);
            y = x.*m;
            ys = sum(sum(y)')/sum(sum(m)');
            n(b,c) = round(ys);
        end
    end
    res = uint8(n);
    p = psnr_f(res,i);
    if sum(sum(m)') == 9
        subplot(2,2,3); imshow(res); title(sprintf('Average Filtered Image (PSNR = %g)',p));
    else
        subplot(2,2,4); imshow(res); title(sprintf('Gaussian Filtered Image (PSNR = %g)',p));
    end
end

function p = psnr_f(res,i)
    % Peak signal to noise ratio between the filtered image and the noisy image
    mse = 0;
    c = double(res)-double(i);
    cs = size(c);
    for t = 1:cs(1)
        for u = 1:cs(2)
            mse = mse + c(t,u)*c(t,u);
        end
    end
    msef = mse/(cs(1)*cs(2));
    sq = sqrt(msef);
    p = 20*log10(255/sq);
end

OUTPUT:
[Output figures: filtering of salt and pepper, Gaussian, Poisson and speckle noise, each showing the original image, the noisy image, the average filtered image and the Gaussian filtered image with their PSNR values.]

INFERENCE:
The average filter and the Gaussian filter are best suited for filtering Poisson noise from the input image, with peak signal to noise ratios of 26.18 and 28.06 respectively. For all four types of noise added to the input image, the Gaussian filter yields the higher PSNR value. This is because the Gaussian filter uses a weighted average approach.

RESULT:
Thus, the program to perform linear spatial filtering was written and executed in MATLAB.

EXPT. NO. 04
28/08/2024 NON-LINEAR SPATIAL FILTERING

AIM:
To create a MATLAB program to reduce noise from input image using non-linear
spatial filtering techniques – median filter, min filter and max filter.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:

A non-linear filter is one whose output is not a linear function of its input. The median filter is one of the most basic non-linear filters; it uses the median of the values inside a sliding window and operates by selecting the median intensity in the window. The min and max filters similarly select the minimum and maximum intensity in the window.
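
A minimal sketch of the three filters using the toolbox functions medfilt2 and ordfilt2, assuming the MRI3.jpg file used in the program below is on the MATLAB path.

g = rgb2gray(imread('MRI3.jpg'));          % assumed: the image file used in the program below
noisy = imnoise(g, 'salt & pepper', 0.02);
med = medfilt2(noisy, [3 3]);              % median of each 3x3 neighbourhood
mn = ordfilt2(noisy, 1, ones(3));          % 1st order statistic of 9 = minimum
mx = ordfilt2(noisy, 9, ones(3));          % 9th order statistic of 9 = maximum
figure();
subplot(2,2,1); imshow(noisy); title('Noisy Image');
subplot(2,2,2); imshow(med); title('Median Filtered');
subplot(2,2,3); imshow(mn); title('Min Filtered');
subplot(2,2,4); imshow(mx); title('Max Filtered');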

ALGORITHM:

STEP 1: Start
STEP 2: Read the input image.
STEP 3: Add noise to the image using inbuilt function.
STEP 4: Design a mask for median filter, min filter and max filter using appropriate
function.

STEP 5: Filter the noisy image using filter.


STEP 6: Display the original image, noisy image, filtered image and the PSNR value
of filtered and input image.
STEP 7: Stop.

PROGRAM:
clc; clear all;
I = imread('C:\Users\Admin\Downloads\MRI3.jpg');
m = [1,1,1;1,1,1;1,1,1];
ii = rgb2gray(I);
i = imnoise(ii,'salt & pepper',0.02);
figure();
subplot(1,3,1); imshow(I); title("Original Image");
subplot(1,3,2); imshow(i); title('Image with Salt and Pepper Noise');
[a,sa] = zero_padding(i);
median_filter(m,sa,a,i);
figure();
i = imnoise(ii,'gaussian');
subplot(1,3,1); imshow(I); title("Original Image");
subplot(1,3,2); imshow(i); title('Image with Gaussian Noise');
[a,sa] = zero_padding(i);
median_filter(m,sa,a,i);
figure();
i = imnoise(ii,'speckle');
subplot(1,3,1); imshow(I); title("Original Image");
subplot(1,3,2); imshow(i); title('Image with Speckle Noise');
[a,sa] = zero_padding(i);
median_filter(m,sa,a,i);
figure();
i = imnoise(ii,'poisson');
subplot(1,3,1); imshow(I); title("Original Image");
subplot(1,3,2); imshow(i); title('Image with Poisson Noise');
[a,sa] = zero_padding(i);
median_filter(m,sa,a,i);

function [a,sa] = zero_padding(i)
    s = size(i);
    a(s(1)+2,s(2)+2) = 0;
    sa = size(a);
    for k = 2:sa(1)-1
        for j = 2:sa(2)-1
            a(k,j) = i(k-1,j-1);
        end
    end
end

function median_filter(m,sa,a,i)
    % Replace each pixel with the median of its 3x3 neighbourhood
    for b = 1:sa(1)-2
        for c = 1:sa(2)-2
            x = a(b:b+2,c:c+2);
            y = x.*m;
            ys = sort([y(1,:) y(2,:) y(3,:)]);
            n(b,c) = ys(1,5);                 % 5th of 9 sorted values = median
        end
    end
    res = uint8(n);
    p = psnr_f(res,i);
    subplot(1,3,3); imshow(res); title(sprintf('Median Filtered Image (PSNR = %g)',p));
end

function p = psnr_f(res,i)
    mse = 0;
    c = double(res)-double(i);
    cs = size(c);
    for t = 1:cs(1)
        for u = 1:cs(2)
            mse = mse + c(t,u)*c(t,u);
        end
    end
    msef = mse/(cs(1)*cs(2));
    sq = sqrt(msef);
    p = 20*log10(255/sq);
end

% Min and max filters
clc; clear all;
I = imread('C:\Users\Admin\Downloads\MRI3.jpg');
m = [1,1,1;1,1,1;1,1,1];
ii = rgb2gray(I);
i = imnoise(ii,'salt & pepper',0.02);
figure();
subplot(2,2,1); imshow(I); title("Original Image");
subplot(2,2,2); imshow(i); title('Image with Salt and Pepper Noise');
[a,sa] = zero_padding(i);
min_max_filter(m,sa,a,i,1); min_max_filter(m,sa,a,i,0);
figure();
i = imnoise(ii,'gaussian');
subplot(2,2,1); imshow(I); title("Original Image");
subplot(2,2,2); imshow(i); title('Image with Gaussian Noise');
[a,sa] = zero_padding(i);
min_max_filter(m,sa,a,i,1); min_max_filter(m,sa,a,i,0);
figure();
i = imnoise(ii,'speckle');
subplot(2,2,1); imshow(I); title("Original Image");
subplot(2,2,2); imshow(i); title('Image with Speckle Noise');
[a,sa] = zero_padding(i);
min_max_filter(m,sa,a,i,1); min_max_filter(m,sa,a,i,0);
figure();
i = imnoise(ii,'poisson');
subplot(2,2,1); imshow(I); title("Original Image");
subplot(2,2,2); imshow(i); title('Image with Poisson Noise');
[a,sa] = zero_padding(i);
min_max_filter(m,sa,a,i,1); min_max_filter(m,sa,a,i,0);

function [a,sa] = zero_padding(i)
    s = size(i);
    a(s(1)+2,s(2)+2) = 0;
    sa = size(a);
    for k = 2:sa(1)-1
        for j = 2:sa(2)-1
            a(k,j) = i(k-1,j-1);
        end
    end
end

function min_max_filter(m,sa,a,i,z)
    % z = 1 gives the max filter, z = 0 gives the min filter
    for b = 1:sa(1)-2
        for c = 1:sa(2)-2
            x = a(b:b+2,c:c+2);
            y = x.*m;
            if z==1
                ys = max(max(y));
            else
                ys = min(min(y));
            end
            n(b,c) = ys;
        end
    end
    res = uint8(n);
    p = psnr_f(res,i);
    if z==1
        subplot(2,2,3); imshow(res); title(sprintf('Max Filtered Image (PSNR = %g)',p));
    else
        subplot(2,2,4); imshow(res); title(sprintf('Min Filtered Image (PSNR = %g)',p));
    end
end

function p = psnr_f(res,i)
    mse = 0;
    c = double(res)-double(i);
    cs = size(c);
    for t = 1:cs(1)
        for u = 1:cs(2)
            mse = mse + c(t,u)*c(t,u);
        end
    end
    msef = mse/(cs(1)*cs(2));
    sq = sqrt(msef);
    p = 20*log10(255/sq);
end

OUTPUT:
[Output figures: filtering of salt and pepper, Gaussian, Poisson and speckle noise, showing for each the original image, the noisy image, the median filtered image, and the min and max filtered images with their PSNR values.]

INFERENCE:
The PSNR value is obtained for the filtered images of all four types of noise, and it is inferred that the median filter, min filter and max filter are best suited to reducing Poisson noise from the input image, with the highest PSNR values of 27.02, 18.2 and 19.29 respectively.

RESULT:
Thus, the program to perform non-linear spatial filtering was written and
executed in MATLAB.

EXPT. NO. 05
11/09/2024 FILTERING IN FREQUENCY DOMAIN

AIM:

To write a MATLAB program to perform the filtering operation [LPF, HPF, BPF
and BSF] in the frequency domain.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:

Frequency domain filters are used for smoothing and sharpening an image by removing high or low frequency components. There are four types of filters:
 LOW PASS FILTER: Removes high frequency components. Used for image smoothing.
 HIGH PASS FILTER: Removes low frequency components. Used for image sharpening.
 BAND PASS FILTER: Removes very high and very low frequency components. Enhances edges while reducing noise at the same time.
 BAND STOP FILTER: Removes the frequency components within a range of values.

2D-DFT AND 2D-IDFT FORMULA:

F(u,v) = Σx Σy f(x,y) · exp(-j2π(ux + vy)/N)
f(x,y) = (1/N²) Σu Σv F(u,v) · exp(j2π(ux + vy)/N)
for 0 <= u, v, x, y <= N-1
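
A minimal sketch of one ideal low pass filter built the same way as in the program below (FFT, centre shift, circular mask, inverse FFT), assuming the spine2.jpg file used there is on the MATLAB path.

g = rgb2gray(imread('spine2.jpg'));           % assumed: the image file used in the program below
F = fftshift(fft2(double(g)));                % centre-shifted spectrum
[r, c] = size(g);
[V, U] = meshgrid(1:c, 1:r);
D = sqrt((U - r/2).^2 + (V - c/2).^2);        % distance of every frequency from the centre
H = double(D <= 75);                          % ideal low pass mask, cutoff radius 75
gf = real(ifft2(ifftshift(F .* H)));          % back to the spatial domain
figure();
subplot(1,2,1); imshow(g); title('Input Image');
subplot(1,2,2); imshow(gf, []); title('Ideal Low Pass Filtered Image');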

ALGORITHM:
STEP 1: Start.
STEP 2: Read the input image.
STEP 3: Apply DFT and get the frequency spectrum of the image.
STEP 4: Perform centering using syntax ‘fftshift()’.
STEP 5: Multiply the transformed spectrum with the low pass, high pass, band pass and band stop filter masks.
STEP 6: Decenter the image and get IDFT to bring back the image in time domain.
STEP 7: Display the input image, the filter and the output image.
STEP 8: Stop.

PROGRAM:
clc; clear all; close all;
I = imread('C:\Users\Admin\Downloads\spine2.jpg');

[s,fc] = plot_f(I);
fl = 75;                                     % cutoff radius for the low pass filter
for u = 1:s(1)
    for v = 1:s(2)
        d(u,v) = sqrt((u-(s(1)/2))^2+(v-(s(2)/2))^2);
        if d(u,v) <= fl
            lpf(u,v) = 1;
        else
            lpf(u,v) = 0;
        end
    end
end
F = fc.*lpf; subplot(2,3,4); imshow(lpf); title("Low Pass Filter");
subplot(2,3,5); imshow(F); title("After applying LPF");
fil = abs(ifft2(F)); fil1 = uint8(fil);
subplot(2,3,6); imshow(fil1); title("Low Pass Filtered Image");

[s,fc] = plot_f(I);
fu = 25;                                     % cutoff radius for the high pass filter
for u = 1:s(1)
    for v = 1:s(2)
        d(u,v) = sqrt((u-(s(1)/2))^2+(v-(s(2)/2))^2);
        if d(u,v) >= fu
            hpf(u,v) = 1;
        else
            hpf(u,v) = 0;
        end
    end
end
F = fc.*hpf; subplot(2,3,4); imshow(hpf); title("High Pass Filter");
subplot(2,3,5); imshow(F); title("After applying HPF");
fil = abs(ifft2(F)); fil1 = uint8(fil);
subplot(2,3,6); imshow(fil1); title("High Pass Filtered Image");

[s,fc] = plot_f(I);
for u = 1:s(1)
    for v = 1:s(2)
        d(u,v) = sqrt((u-(s(1)/2))^2+(v-(s(2)/2))^2);
        if d(u,v) >= fu && d(u,v) <= fl
            bpf(u,v) = 1;
        else
            bpf(u,v) = 0;
        end
    end
end
F = fc.*bpf; subplot(2,3,4); imshow(bpf); title("Band Pass Filter");
subplot(2,3,5); imshow(F); title("After applying BPF");
fil = abs(ifft2(F)); fil1 = uint8(fil);
subplot(2,3,6); imshow(fil1); title("Band Pass Filtered Image");

[s,fc] = plot_f(I);
for u = 1:s(1)
    for v = 1:s(2)
        d(u,v) = sqrt((u-(s(1)/2))^2+(v-(s(2)/2))^2);
        if d(u,v) >= fu && d(u,v) <= fl
            bsf(u,v) = 0;
        else
            bsf(u,v) = 1;
        end
    end
end
F = fc.*bsf; subplot(2,3,4); imshow(bsf); title("Band Stop Filter");
subplot(2,3,5); imshow(F); title("After applying BSF");
fil = abs(ifft2(F)); fil1 = uint8(fil);
subplot(2,3,6); imshow(fil1); title("Band Stop Filtered Image");

function [s,fc] = plot_f(I)
    % Show the input image, its DFT spectrum and the centre-shifted spectrum
    figure();
    ii = rgb2gray(I); subplot(2,3,1); imshow(ii); title("Input Image");
    f = fft2(ii); subplot(2,3,2); imshow(mat2gray(log(1+abs(f)))); title("DFT of Input Image");
    fc = fftshift(f); subplot(2,3,3); imshow(mat2gray(log(1+abs(fc)))); title("Centre shifted DFT");
    s = size(ii);
end

OUTPUT:
[Output figures: low pass, high pass, band pass and band stop filtering; each shows the input image, its DFT, the centre shifted DFT, the filter mask, the masked spectrum and the filtered image.]

INFERENCE:
It is inferred that low pass filter removes high frequency components and is used
for image smoothening. High pass filter removes low frequency components and is used
for image sharpening. Band pass filter removes very high and very low frequency
components and enhances edges while reducing noise at the same time. Band stop filter
removes the frequency components between a range of values.

RESULT:
Thus filtering operation in frequency domain has been performed using MATLAB.
EXPT. NO. 06
04/09/2024 EDGE DETECTION OPERATOR

AIM:
To perform edge detection on medical images using various edge detection operators.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
Edges in images are areas with strong intensity contrast - a jump in intensity from one pixel to the next. Edge detection methods fall into two categories:
 The gradient method detects edges by looking for maxima and minima in the first derivatives of the image.
 The Laplacian method searches for zero crossings in the second derivative of the image to find edges.
An edge has the one-dimensional shape of a ramp, and calculating the derivative of the image can highlight its location.
The 4 edge detectors used are:
 Sobel’s Edge Detector
 Prewitt’s Edge Detector
 Robert’s Edge Detector
 Canny Edge Detector
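
Each of these detectors is also available through the toolbox function edge; the following minimal sketch (assuming the e2.jpg file used in the program below) can be used to cross-check the manually implemented operators.

g = rgb2gray(imread('e2.jpg'));               % assumed: the image file used in the program below
figure();
subplot(2,3,1); imshow(g); title('Original Image');
subplot(2,3,2); imshow(edge(g,'sobel')); title('Sobel');
subplot(2,3,3); imshow(edge(g,'prewitt')); title('Prewitt');
subplot(2,3,4); imshow(edge(g,'roberts')); title('Roberts');
subplot(2,3,5); imshow(edge(g,'log')); title('LoG');
subplot(2,3,6); imshow(edge(g,'canny')); title('Canny');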

GRADIENT OPERATORS FOR EDGE DETECTION:

SOBEL'S EDGE DETECTOR
MX:              MY:
-1  0  1         -1 -2 -1
-2  0  2          0  0  0
-1  0  1          1  2  1

PREWITT'S EDGE DETECTOR
MX:              MY:
-1  0  1         -1 -1 -1
-1  0  1          0  0  0
-1  0  1          1  1  1

ROBERT'S EDGE DETECTOR
MX:        MY:
 1  0       0  1
 0 -1      -1  0

ALGORITHM:

STEP 1: Start
STEP 2: Read the input image and convert it to a grayscale image.
STEP 3: Define a function for applying the mask to the input image.
STEP 4: Apply gradient operators: Sobel's Edge Detector, Prewitt's Edge Detector, Robert's Edge Detector, Canny Edge Detector and LoG Edge Detector.
STEP 5: Display the input image and the edge detected images.
STEP 6: Stop

PROGRAM:
clc; clear; close all;
ii = imread('C:\Users\Admin\Downloads\e2.jpg');
i = rgb2gray(ii); subplot(2,2,1); imshow(i); title('Original Image');
s = size(i);
mpv = [-1,0,1;-1,0,1;-1,0,1]; mph = [-1,-1,-1;0,0,0;1,1,1];      % Prewitt masks
msv = [-1,0,1;-2,0,2;-1,0,1]; msh = [-1,-2,-1;0,0,0;1,2,1];      % Sobel masks
mrv = [1,0;0,-1]; mrh = [0,1;-1,0];                              % Roberts masks
t = 150;                                                         % threshold on the gradient magnitude
pr = filtre(s,i,mpv,mph,t,2); so = filtre(s,i,msv,msh,t,2); ro = filtre(s,i,mrv,mrh,t,1);
subplot(2,2,2); imshow(pr); title('Prewitt Edge Detection');
subplot(2,2,3); imshow(so); title('Sobel Edge Detection');
subplot(2,2,4); imshow(ro); title('Robert Edge Detection');

function su = filtre(s,i,ver,hor,t,dim)
    % Gradient magnitude from the vertical and horizontal masks, then thresholding
    for u = 1:s(1)-dim
        for v = 1:s(2)-dim
            x = i(u:u+dim,v:v+dim);
            n1 = double(x).*ver; n2 = double(x).*hor;
            y1 = sum(sum(n1)'); y2 = sum(sum(n2)');
            y = sqrt((y1)^2+(y2)^2);
            n0(u,v) = y;
        end
    end
    s1 = size(n0);
    for ri = 1:s1(1)
        for sk = 1:s1(2)
            if n0(ri,sk) <= t
                su(ri,sk) = 0;
            else
                su(ri,sk) = 255;
            end
        end
    end
end

% Canny Edge Detection
clc; clear all; close all;
I = imread('C:\Users\Admin\Downloads\e2.jpg');
m1 = [2,4,2;4,8,4;2,4,2];                                        % Gaussian smoothing mask
ii = rgb2gray(I); figure(); subplot(2,3,1); imshow(ii); title("Original Image");
[a,sa] = zero_padding(ii);
res = avg_filter(m1,sa,a);
msv = [-1,0,1;-2,0,2;-1,0,1]; msh = [-1,-2,-1;0,0,0;1,2,1];
s1 = size(res);
[n0,amat] = filtre(s1,res,msv,msh);
[nn,si] = nonmaxsup(amat,n0);
db = double_threshold(nn,0.09,0.05);
hy = hysteresis(db);

function [a,sa] = zero_padding(i)
    s = size(i);
    a(s(1)+2,s(2)+2) = 0;
    sa = size(a);
    for k = 2:sa(1)-1
        for j = 2:sa(2)-1
            a(k,j) = i(k-1,j-1);
        end
    end
end

function res = avg_filter(m,sa,a)
    % Weighted (Gaussian) smoothing of the padded image
    for b = 1:sa(1)-2
        for c = 1:sa(2)-2
            x = a(b:b+2,c:c+2);
            y = x.*m;
            ys = sum(sum(y)')/sum(sum(m)');
            n(b,c) = round(ys);
        end
    end
    res = uint8(n);
    subplot(2,3,2); imshow(res); title("Gaussian Filtered Image");
end

function [n0,amat] = filtre(s,i,ver,hor)
    % Sobel gradient magnitude and gradient direction (in degrees)
    for u = 1:s(1)-2
        for v = 1:s(2)-2
            x = i(u:u+2,v:v+2);
            n1 = double(x).*ver; n2 = double(x).*hor;
            y1 = sum(sum(n1)'); y2 = sum(sum(n2)');
            y = sqrt((y1)^2+(y2)^2);
            angle = atan2(y1,y2)*180/pi;
            amat(u,v) = angle;
            n0(u,v) = y;
        end
    end
    n1 = uint8(n0);
    subplot(2,3,3); imshow(n1); title("Sobel Edge Detection");
end

function [nn,si] = nonmaxsup(amat,n0)
    si = size(n0);
    for u = 2:si(1)-1
        for v = 2:si(2)-1
            if amat(u,v) < 0
                amat(u,v) = amat(u,v) + 180;    % fold negative directions into 0-180 degrees
            end
        end
    end
    [n01,sk] = zero_padding(n0);
    [amat1,st] = zero_padding(amat);
    for t = 2:st(1)-1
        for sv = 2:st(2)-1
            ang = amat1(t,sv);                  % quantize the gradient direction into 4 sectors
            if (ang >= 0 && ang < 22.5) || (ang >= 157.5 && ang <= 180)
                q = n01(t,sv-1); r = n01(t,sv+1);
            elseif (ang >= 22.5 && ang < 67.5)
                q = n01(t-1,sv+1); r = n01(t+1,sv-1);
            elseif (ang >= 67.5 && ang < 112.5)
                q = n01(t-1,sv); r = n01(t+1,sv);
            else                                % 112.5 <= ang < 157.5
                q = n01(t-1,sv-1); r = n01(t+1,sv+1);
            end
            if (n0(t-1,sv-1)>=q) && (n0(t-1,sv-1)>=r)
                nn(t-1,sv-1) = n0(t-1,sv-1);    % keep local maxima along the gradient direction
            else
                nn(t-1,sv-1) = 0;               % suppress non-maximum pixels
            end
        end
    end
    nn1 = uint8(nn); subplot(2,3,4); imshow(nn1); title("Image after Non Maximum Suppression");
end

function db = double_threshold(nn,htr,ltr)
    m = max(max(nn));
    ht = htr*m; lt = ht*ltr;                    % high and low thresholds
    rr = size(nn);
    for g = 1:rr(1)
        for h = 1:rr(2)
            if (nn(g,h)<=ht) && (nn(g,h)>=lt)
                db(g,h) = 25;                   % weak edge
            elseif (nn(g,h)>ht)
                db(g,h) = 255;                  % strong edge
            else
                db(g,h) = 0;
            end
        end
    end
    nn2 = uint8(db); subplot(2,3,5); imshow(nn2); title("Image after Double Threshold");
end

function hy = hysteresis(db)
    % Promote weak edges that touch a strong edge, discard the rest
    sh = size(db);
    for b = 2:sh(1)-1
        for c = 2:sh(2)-1
            if db(b,c) == 25
                if ((db(b+1,c)==255) || (db(b-1,c)==255) || (db(b,c+1)==255) || (db(b,c-1)==255) || (db(b-1,c-1)==255) || (db(b+1,c+1)==255) || (db(b-1,c+1)==255) || (db(b+1,c-1)==255))
                    db(b,c) = 255;
                else
                    db(b,c) = 0;
                end
            end
        end
    end
    hy = db;
    nn3 = uint8(hy); subplot(2,3,6); imshow(nn3); title("Canny Edge Detection");
end

OUTPUT:
[Output figures: Prewitt, Sobel and Robert edge detection of the original image; Canny edge detection stages - Gaussian filtered image, Sobel edge detection, non-maximum suppression, double threshold and the final Canny edge detection.]

INFERENCE:
It is inferred that the Canny operator performs best for edge detection. The second best edge detection operator is the Sobel operator.

RESULT:
Thus, the program to perform edge detection using these operators was written and executed using MATLAB.

EXPT. NO. 07
18/09/2024 IMAGE COMPRESSION USING DCT

AIM:
To write a MATLAB program to compute image compression using DCT.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
Image compression can be lossy or lossless (lossless formats such as PNG, lossy formats such as JPEG). Image compression is important for the efficient use of storage. The main purpose of image compression is to reduce the number of bits representing the image while preserving the image quality and the intensity levels of the pixels as much as possible, for either a grayscale or an RGB image.
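
A minimal sketch of the same idea using dct2 and idct2, assuming the brain.png file used in the program below is on the MATLAB path: keep only the top-left block of DCT coefficients, zero the rest and invert.

g = im2double(rgb2gray(imread('brain.png')));   % assumed: the image file used in the program below
D = dct2(g);
M = zeros(size(D));
M(1:round(end/4), 1:round(end/4)) = 1;          % keep roughly 1/16 of the coefficients
gc = idct2(D .* M);
figure();
subplot(1,2,1); imshow(g); title('Original Image');
subplot(1,2,2); imshow(gc); title('Reconstructed from the retained coefficients');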

ALGORITHM:
STEP 1: Start.
STEP 2: Read the input image and convert it to a grayscale image.
STEP 3: Get the size of the image and apply the DCT.
STEP 4: Set the higher frequency DCT coefficients to zero.
STEP 5: Take the inverse DCT of the compressed result.
STEP 6: Display the input and output images and calculate the compression ratio.
STEP 7: Stop.

PROGRAM:
clc; clear all; close all;
I = imread('C:\Users\Admin\Downloads\brain.png');
i = rgb2gray(I);
figure(); subplot(2,2,1); imshow(i); title("Original Image");
d = dct2(i);
subplot(2,2,2); imshow(d); title("DCT of Original Image");
s = size(d);
for u = 1:s(1)
    for v = 1:s(2)
        if u+v <= s(2)
            n(u,v) = d(u,v);                 % keep the low-frequency (top-left) coefficients
        else
            n(u,v) = 0;                      % discard the high-frequency coefficients
        end
    end
end
subplot(2,2,3); imshow(n); title("DCT of Compressed Image");
ni = idct2(n); ii = uint8(ni);
subplot(2,2,4); imshow(ii); title("Compressed Image");

imwrite(ii,'CI.png');
R = imfinfo('C:\Users\Admin\Downloads\brain.png');
R1 = R.FileSize;
disp("The size of original image:"); disp(R1);
S = imfinfo("CI.png");
S1 = S.FileSize;
disp("The size of compressed image:"); disp(S1);
CR = R1/S1; disp("The compression ratio:"); disp(CR);

OUTPUT:
[Output figure: original image, DCT of the original image, DCT of the compressed image and the compressed image, with the original size, compressed size and compression ratio printed in the command window.]

INFERENCE:
Image compression is performed on an input image and the compressed image is displayed. It is inferred from the result that the data size is reduced while the essential information of the image is retained.

RESULT:
Thus the MATLAB program to perform the image compression technique using DCT is executed.

EXPT. NO. 08
09/10/2024 STEGANOGRAPHY

AIM:
To write a MATLAB program to perform the steganography technique on medical
images.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
Image Steganography refers to the process of hiding data within an image file.
The image selected for this purpose is called the cover-image and the image obtained
after steganography is called the stego-image. Images are used as cover medium for
steganography.
A message is embedded in a digital image using an algorithm and a secret key. The stego-image is sent to the receiver. On the other side, it is processed by the extraction algorithm using the same key. During the transmission of the stego-image, unauthenticated persons can only notice the transmission of an image but cannot see the existence of the hidden message.
LSB substitution is a technique for embedding and extracting the secret information. The LSB method is based on altering the redundant bits that are least important with the bits of the secret information. The least significant bit of the bytes inside an image is changed to a bit of the secret message.
Increasing or decreasing a pixel value by changing its LSB does not change the appearance of the image, so the resultant stego-image looks almost the same as the cover image.
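
A minimal sketch of 4-bit LSB substitution with MATLAB's bitwise operators; cover.png and secret.png are hypothetical grayscale images of the same size.

cover = imread('cover.png');                    % hypothetical cover image (uint8, grayscale)
secret = imread('secret.png');                  % hypothetical secret image (uint8, grayscale, same size)
% keep the 4 MSBs of the cover and place the 4 MSBs of the secret into its 4 LSBs
stego = bitor(bitand(cover, uint8(240)), bitshift(secret, -4));
% extraction: promote the 4 LSBs of the stego-image back to the MSB positions
found = bitshift(bitand(stego, uint8(15)), 4);
figure();
subplot(1,3,1); imshow(cover); title('Cover Image');
subplot(1,3,2); imshow(stego); title('Stego-Image');
subplot(1,3,3); imshow(found); title('Recovered Secret Image');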

ALGORITHM:
STEP 1: Start
STEP 2: Read the input images from MATLAB directory.

STEP 3: Resize the image.


STEP 4: Get the number of LSB bits need to be substituted.

STEP 5: Load the message to be hidden.


STEP 6: Extract hidden image from the cover image.
STEP 7: Display the hidden image.
STEP 8: Stop

PROGRAM:
clc; clear; close all;
ii = imread('C:\Users\Admin\Downloads\mri1.jpg');
hi = rgb2gray(ii); hi = imresize(hi, [200,200]);
subplot(2,2,1); imshow(hi); title('Secret Image');
i2 = imread("C:\Users\Admin\Downloads\spine2.jpg");
fi = rgb2gray(i2); fi = imresize(fi, [200,200]);
subplot(2,2,2); imshow(fi); title('Mask Image');
hi = double(hi); fi = double(fi);
sh = size(hi); sf = size(fi);

% Bit planes of the secret image
ch1 = mod(hi, 2);                 % LSB
ch2 = mod(floor(hi/2), 2);
ch3 = mod(floor(hi/4), 2);
ch4 = mod(floor(hi/8), 2);
ch5 = mod(floor(hi/16), 2);
ch6 = mod(floor(hi/32), 2);
ch7 = mod(floor(hi/64), 2);
ch8 = mod(floor(hi/128), 2);      % MSB

% Bit planes of the cover (mask) image
cm1 = mod(fi, 2);                 % LSB
cm2 = mod(floor(fi/2), 2);
cm3 = mod(floor(fi/4), 2);
cm4 = mod(floor(fi/8), 2);
cm5 = mod(floor(fi/16), 2);
cm6 = mod(floor(fi/32), 2);
cm7 = mod(floor(fi/64), 2);
cm8 = mod(floor(fi/128), 2);      % MSB

% Four MSBs of the cover image, with the four MSBs of the secret image as the LSBs
Z = (2*(2*(2*(2*(2*(2*(2*cm8+cm7)+cm6)+cm5)+ch8)+ch7)+ch6)+ch5);
subplot(2,2,3); imshow(uint8(Z)); title('Steganographed Image');

% Bit planes of the stego-image
cs1 = mod(Z, 2);                  % LSB
cs2 = mod(floor(Z/2), 2);
cs3 = mod(floor(Z/4), 2);
cs4 = mod(floor(Z/8), 2);
cs5 = mod(floor(Z/16), 2);
cs6 = mod(floor(Z/32), 2);
cs7 = mod(floor(Z/64), 2);
cs8 = mod(floor(Z/128), 2);       % MSB

% Promote the four LSBs of the stego-image back to MSBs to recover the secret image
Zf = (2*(2*(2*(2*(2*(2*(2*cs4+cs3)+cs2)+cs1)+0)+0)+0)+0);
subplot(2,2,4); imshow(uint8(Zf)); title('Final Image');

OUTPUT:
[Output figure: secret image, mask (cover) image, steganographed image and the recovered final image.]

INFERENCE:
Thus it is inferred that steganography is used to hide data within data. The four higher bits of the cover image and the four higher bits of the secret image are combined to perform steganography. In the steganographed image, the higher bits of the secret image occupy the lower bits, so the secret image can be retrieved by taking the lower bits alone, with negligible change in image resolution.

RESULT:
Thus the program to perform steganography was written and executed using
MATLAB.

EXPT. NO. 09
09/10/2024 CONVERSION OF COLOUR SPACES

AIM:
To write a MATLAB program to perform the following color models operations.
i) RGB to CMY
ii) RGB to HSI and HSI to RGB
iii) RGB to YCbCr and YCbCr to RGB

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
Color models provide a standard way to specify a particular color, by defining a
3D coordinate system, and a subspace that contains all constructible colors within a
particular model. Any color that can be specified using a model will correspond to a
single point within the subspace it defines. Each color model is oriented towards either
specific hardware (RGB, CMY, YIQ), or image processing applications (HSI).
The RGB Model: In the RGB model, an image consists of three independent image planes, one in each of the primary colors: red, green and blue. The RGB model is used for color monitors and most video cameras.
The CMY Model: The CMY (cyan-magenta-yellow) model is a subtractive model appropriate to the absorption of colors. The CMY model is used by printing devices and filters.
The HSI Model: Hue, Saturation, Intensity. The HSI model is used for image processing applications.
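
A minimal sketch of the conversions that have direct toolbox equivalents, assuming the pet2.jpg file used in the program below; note that MATLAB provides rgb2hsv (HSV) rather than HSI, so the HSI conversion is done manually in the program.

rgb = imread('pet2.jpg');                  % assumed: the image file used in the program below
cmy = imcomplement(im2double(rgb));        % C = 1-R, M = 1-G, Y = 1-B
ycc = rgb2ycbcr(rgb);                      % RGB to YCbCr
back = ycbcr2rgb(ycc);                     % YCbCr back to RGB
figure();
subplot(2,2,1); imshow(rgb); title('RGB Image');
subplot(2,2,2); imshow(cmy); title('CMY Image');
subplot(2,2,3); imshow(ycc); title('YCbCr Image (raw channels)');
subplot(2,2,4); imshow(back); title('Back to RGB');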

COLOR CONVERSION FORMULAE (R, G, B normalized to [0,1]):

 RGB TO HSI
I = (R + G + B)/3
S = 1 - 3·min(R, G, B)/(R + G + B)
H = θ if B <= G, otherwise 360° - θ, where θ = cos⁻¹{ 0.5·[(R-G) + (R-B)] / sqrt((R-G)² + (R-B)(G-B)) }

 HSI TO RGB
If the hue angle H is between 0° and 120°:
B = I(1 - S), R = I[1 + S·cos H / cos(60° - H)], G = 3I - (R + B)
If H is between 120° and 240° (with H' = H - 120°):
R = I(1 - S), G = I[1 + S·cos H' / cos(60° - H')], B = 3I - (R + G)
If H is between 240° and 360° (with H' = H - 240°):
G = I(1 - S), B = I[1 + S·cos H' / cos(60° - H')], R = 3I - (G + B)

 RGB TO CMY
C = 1 - R, M = 1 - G, Y = 1 - B

 CMY TO RGB
R = 1 - C, G = 1 - M, B = 1 - Y

 RGB TO YCbCr
Y  =  0.299·R + 0.587·G + 0.114·B
Cb = -0.1687·R - 0.3313·G + 0.5·B + 0.5
Cr =  0.5·R - 0.4187·G - 0.0813·B + 0.5

 YCbCr TO RGB
R = Y + 1.402·(Cr - 0.5)
G = Y - 0.34414·(Cb - 0.5) - 0.71414·(Cr - 0.5)
B = Y + 1.772·(Cb - 0.5)

ALGORITHM:
STEP 1: Start
STEP 2: Read the input image.
STEP 3: Convert the input RGB image to an HSI image.
STEP 4: Write the HSI image.
STEP 5: Convert the HSI image back to an RGB image.
STEP 6: Display the input image, the HSI image and the RGB image.
STEP 7: Convert the input image to a CMY image.
STEP 8: Write the CMY image to file.
STEP 9: Convert the CMY image back to an RGB image.
STEP 10: Display the CMY converted image and the RGB image.
STEP 11: Repeat steps 7 to 10 for the YCbCr conversion.
STEP 12: Display the output.
STEP 13: Stop

PROGRAM:
clc; clear; close all;
ii = imread('C:\Users\Admin\Downloads\pet2.jpg');

% RGB TO CMY
figure(); subplot(2,3,1); imshow(ii); title("Original Image");
r = im2double(ii(:,:,1));
g = im2double(ii(:,:,2));
b = im2double(ii(:,:,3));
C = im2uint8(1. - r);
M = im2uint8(1. - g);
Y = im2uint8(1. - b);
CMY = cat(3,C,M,Y);
subplot(2,3,2); imshow(C); title("Cyan Image");
subplot(2,3,3); imshow(M); title("Magenta Image");
subplot(2,3,4.5); imshow(Y); title("Yellow Image");
subplot(2,3,5.5); imshow(CMY); title("CMY Image");

% RGB TO HSI
theta = acos((0.5*((r-g)+(r-b)))./((sqrt((r-g).^2+(r-b).*(g-b)))));
H = theta;
H(b>g) = 2*pi-H(b>g);
H = H/(2*pi);
S = 1-3.*(min(min(r,g),b))./(r+g+b);
I = (r+g+b)/3;
hsi = cat(3,H,S,I);
figure();
subplot(2,3,1); imshow(ii); title('RGB Image');
subplot(2,3,2); imshow(H); title('Hue');
subplot(2,3,3); imshow(S); title('Saturation');
subplot(2,3,4.5); imshow(I); title('Intensity');
subplot(2,3,5.5); imshow(hsi); title('HSI Image');

% HSI TO RGB
HSI = im2double(hsi);
H1 = HSI(:,:,1);
S1 = HSI(:,:,2);
I1 = HSI(:,:,3);
H1 = H1*360;
R1 = zeros(size(H1));
G1 = zeros(size(H1));
B1 = zeros(size(H1));
RGB1 = zeros([size(H1),3]);
B1(H1<120) = I1(H1<120).*(1-S1(H1<120));
R1(H1<120) = I1(H1<120).*(1+((S1(H1<120).*cosd(H1(H1<120)))./cosd(60-H1(H1<120))));
G1(H1<120) = 3.*I1(H1<120)-(R1(H1<120)+B1(H1<120));
H2 = H1-120;
R1(H1>=120&H1<240) = I1(H1>=120&H1<240).*(1-S1(H1>=120&H1<240));
G1(H1>=120&H1<240) = I1(H1>=120&H1<240).*(1+((S1(H1>=120&H1<240).*cosd(H2(H1>=120&H1<240)))./cosd(60-H2(H1>=120&H1<240))));
B1(H1>=120&H1<240) = 3.*I1(H1>=120&H1<240)-(R1(H1>=120&H1<240)+G1(H1>=120&H1<240));
H2 = H1-240;
G1(H1>=240&H1<=360) = I1(H1>=240&H1<=360).*(1-S1(H1>=240&H1<=360));
B1(H1>=240&H1<=360) = I1(H1>=240&H1<=360).*(1+((S1(H1>=240&H1<=360).*cosd(H2(H1>=240&H1<=360)))./cosd(60-H2(H1>=240&H1<=360))));
R1(H1>=240&H1<=360) = 3.*I1(H1>=240&H1<=360)-(G1(H1>=240&H1<=360)+B1(H1>=240&H1<=360));
RGB1(:,:,1) = R1; RGB1(:,:,2) = G1; RGB1(:,:,3) = B1;
RGB1 = im2uint8(RGB1);
R11 = zeros([size(H1),3]); R11(:,:,1) = R1; R11 = im2uint8(R11);
G11 = zeros([size(H1),3]); G11(:,:,2) = G1; G11 = im2uint8(G11);
B11 = zeros([size(H1),3]); B11(:,:,3) = B1; B11 = im2uint8(B11);
figure();
subplot(2,3,1); imshow(hsi); title('HSI Image');
subplot(2,3,2); imshow(R11); title('RED');
subplot(2,3,3); imshow(G11); title('GREEN');
subplot(2,3,4.5); imshow(B11); title('BLUE');
subplot(2,3,5.5); imshow(RGB1); title('RGB Image');

% RGB TO YCbCr
I = double(ii)/255;
R = I(:,:,1); G = I(:,:,2); B = I(:,:,3);
[m,n,d] = size(ii);
Y = (0.299*R)+(0.587*G)+(0.114*B)+0;
Cb = (-0.168736*R)+(-0.331264*G)+(0.5*B)+0.5;
Cr = (0.5*R)+(-0.418688*G)-(0.081312*B)+0.5;
YCbCr(:,:,1) = Y; YCbCr(:,:,2) = Cb; YCbCr(:,:,3) = Cr;
figure(); subplot(231); imshow(ii); title('RGB Image');
subplot(232); imshow(Y); title('Y');
subplot(233); imshow(Cb); title('Cb');
subplot(2,3,4.5); imshow(Cr); title('Cr');
subplot(2,3,5.5); imshow(YCbCr); title('YCbCr Image');
figure();
subplot(231); imshow(YCbCr); title('YCbCr image');

% YCbCr TO RGB
new_R = Y+(1.402*(Cr-0.5));
new_G = Y-(0.34414*(Cb-0.5))-(0.71414*(Cr-0.5));
new_B = Y+(1.772*(Cb-0.5));
RGB2 = zeros(size(YCbCr));
RGB2(:,:,1) = new_R; RGB2(:,:,2) = new_G; RGB2(:,:,3) = new_B;
RGB2 = im2uint8(RGB2);
R11 = zeros([size(Y),3]); R11(:,:,1) = new_R; R11 = im2uint8(R11);
G11 = zeros([size(Y),3]); G11(:,:,2) = new_G; G11 = im2uint8(G11);
B11 = zeros([size(Y),3]); B11(:,:,3) = new_B; B11 = im2uint8(B11);
subplot(232); imshow(R11); title('Red');
subplot(233); imshow(G11); title('Green');
subplot(2,3,4.5); imshow(B11); title('Blue');
subplot(2,3,5.5); imshow(RGB2); title('YCbCr to RGB');

OUTPUT:
[Output figures: i. RGB to CMY, ii. RGB to HSI, iii. HSI to RGB, iv. RGB to YCbCr, v. YCbCr to RGB]

INFERENCE:
It is inferred that colour images occupy more space than grayscale images. The same operations can be performed on a colour image as on a grayscale image; however, colour space images require more storage.

RESULT:
Thus the given image in the RGB colour space is converted to the CMY, HSI and YCbCr colour spaces and back to the RGB colour space using MATLAB.

EXPT. NO. 10
18/09/2024 MEDICAL IMAGE FUSION

AIM:
To write a MATLAB program for fusing two medical images.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
Medical image fusion is the process of registering and combining multiple
images from single or multiple imaging modalities to improve the imaging quality and
reduce randomness and redundancy in order to increase the clinical applicability of
medical images for diagnosis and assessment of medical problems.
The primary concept used by wavelet based image fusion is to extract the detail information from one image and inject it into another. The detail information in images is usually in the high frequencies, and wavelets have the ability to select frequencies in both space and time. The resulting fused image has the "good" characteristics, in terms of features, from both images, which improves the quality of the imaging.
The most used image fusion rule with the wavelet transform is maximum selection: compare the corresponding DWT coefficients of the two images and select the maximum of the two. While the low pass sub band is an approximation of the input image, the three detail sub bands convey information about the detail parts in the horizontal, vertical and diagonal directions. Different merging procedures are applied to the approximation and detail sub bands. The low pass sub bands are merged using a simple averaging operation since they both contain approximations of the source images.
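
The program below merges the sub bands by addition; the following minimal sketch shows the maximum-selection rule described above using the toolbox functions dwt2 and idwt2, assuming the IM1.jpg and IM2.jpg files are RGB inputs resized to the same dimensions, as arranged in the program.

A = im2double(rgb2gray(imresize(imread('IM1.jpg'), [250 250])));   % assumed RGB input, as in the program
B = im2double(rgb2gray(imresize(imread('IM2.jpg'), [250 250])));
[cA1,cH1,cV1,cD1] = dwt2(A,'haar');
[cA2,cH2,cV2,cD2] = dwt2(B,'haar');
pick = @(x,y) x.*(abs(x) >= abs(y)) + y.*(abs(x) < abs(y));        % maximum-selection rule
fused = idwt2((cA1+cA2)/2, pick(cH1,cH2), pick(cV1,cV2), pick(cD1,cD2), 'haar');
figure();
subplot(1,3,1); imshow(A); title('Image 1');
subplot(1,3,2); imshow(B); title('Image 2');
subplot(1,3,3); imshow(fused); title('Fused Image');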

ALGORITHM:
STEP 1: Start.
STEP 2: Read the two input images to be fused
STEP 3: Take DWT of Image 1.
STEP 4: Take DWT of Image 2.
STEP 5: Add the corresponding vertical, horizontal and diagonal components.
STEP 6: Take the inverse Discrete Wavelet Transform.
STEP 7: Thus fused image is obtained. Display input and output images.
STEP 8: Stop

PROGRAM:
clc; clear; close all;
ii = imread('C:\Users\Admin\Downloads\IM1.jpg'); i1 = imresize(ii, [250,250]);
i = imread('C:\Users\Admin\Downloads\IM2.jpg'); i2 = imresize(i, [250,250]);
[LoD, HiD] = wfilters('haar', 'd');
[cA1, cH1, cV1, cD1] = dwt_2d(i1, LoD, HiD);
figure(); sgtitle("Image 1");
subplot(221); imshow(uint8(cA1)); title('Approximation Coefficient');
subplot(222); imshow(cH1); title('Horizontal Component');
subplot(223); imshow(cV1); title('Vertical Component');
subplot(224); imshow(cD1); title('Diagonal Component');
[cA2, cH2, cV2, cD2] = dwt_2d(i2, LoD, HiD);
figure(); sgtitle("Image 2");
subplot(221); imshow(uint8(cA2)); title('Approximation Coefficient');
subplot(222); imshow(cH2); title('Horizontal Component');
subplot(223); imshow(cV2); title('Vertical Component');
subplot(224); imshow(cD2); title('Diagonal Component');
a = cA1+cA2; h = cH1+cH2; v = cV1+cV2; d = cD1+cD2;    % merge the corresponding sub bands
i = idwt2(a,h,v,d,'haar');                             % inverse DWT gives the fused image
figure();
subplot(1,3,1); imshow(i1); title("Image 1");
subplot(1,3,2); imshow(i2); title("Image 2");
subplot(1,3,3); imshow(uint8(i)); title("Fused Image");

function [a, h, v, d] = dwt_2d(I, LoD, HiD)
    % One level of a 2-D DWT applied to each colour channel:
    % filter the rows, downsample, then filter the columns
    s = size(I);
    for c = 1:3
        for i = 1:s(1)
            cL(i,:,c) = conv(I(i,:,c), LoD, 'same');
            ch(i,:,c) = conv(I(i,:,c), HiD, 'same');
        end
        rl = imresize(cL, 0.5); rh = imresize(ch, 0.5);
        s1 = size(rl);
        for i = 1:s1(2)
            a(:,i,c) = conv(rl(:,i,c), LoD, 'same');
            h(:,i,c) = conv(rl(:,i,c), HiD, 'same');
            v(:,i,c) = conv(rh(:,i,c), LoD, 'same');
            d(:,i,c) = conv(rh(:,i,c), HiD, 'same');
        end
    end
end

OUTPUT:
[Output figures: the approximation, horizontal, vertical and diagonal components of Image 1 and Image 2, and the two input images with the fused image.]

INFERENCE:
Thus it is inferred that two or more images can be fused into a single image using the DWT and IDWT. The resulting image provides more complementary information than any of the input images. This is helpful in studying the CT and MRI images of the organs.

RESULT:
Thus the program for medical image fusion using DWT is written and executed using MATLAB.
EXPT. NO. 11
16/10/2024 SEGMENTATION USING WATERSHED TRANSFORM

AIM:
To write a MATLAB program to segment the images using morphological
watershed algorithm.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
Watershed segmentation is a region-based technique that utilizes image
morphology. It requires selection of at least one marker (“seed” point) interior to each
object of the image, including the background as a separate object. The markers are
chosen by an operator or are provided by an automatic procedure that takes into account
the application-specific knowledge of the objects. Once the objects are marked, they can
be grown using a morphological watershed transformation.
To understand the watershed, one can think of an image as a surface where the
bright pixels represent mountaintops and the dark pixels valleys. The surface is
punctured in some of the valleys, and then slowly submerged into a water bath. The
water will pour in each puncture and start to fill the valleys. However, the water from
different punctures is not allowed to mix, and therefore the dams need to be built at the
points of first contact. These dams are the boundaries of the water basins, and also the
boundaries of image objects.
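
A minimal sketch of this marker idea using the distance transform, assuming the rbc3.jpg file used in the program below: the negated distance transform of the binary mask acts as the flooded surface, so each object centre becomes a separate catchment basin.

g = rgb2gray(imread('rbc3.jpg'));          % assumed: the image file used in the program below
bw = imbinarize(g);                        % foreground mask
D = -bwdist(~bw);                          % valleys at the centres of the objects
D(~bw) = -Inf;                             % force the background into its own basin
L = watershed(D);                          % label matrix of the catchment basins
figure();
subplot(1,2,1); imshow(bw); title('Binary Image');
subplot(1,2,2); imshow(label2rgb(L)); title('Watershed Label Matrix');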

ALGORITHM:
STEP 1: Start.
STEP 2: Read the input image.
STEP 3: Resize the image.
STEP 4: Convert RGB to gray image.
STEP 5: Convert it into binary image.
STEP 6: Use watershed algorithm to segment image.
STEP 7: Stop.

PROGRAM:
clc; clear all; close all;
a = imread('C:\Users\Admin\Downloads\rbc3.jpg');
subplot(231); imshow(a); title("Original Image");
a = rgb2gray(a);
s = size(a);
c = zeros(1,256);                          % histogram of the grayscale image
for j = 1:s(1)
    for k = 1:s(2)
        b = a(j,k);
        c(b+1) = c(b+1)+1;
    end
end
m1 = max(c(1:round(length(c)/2)));         % histogram peak in the lower half
m2 = max(c(round(length(c)/2)+1:end));     % histogram peak in the upper half
for i = 1:round(length(c)/2)
    if c(i) == m1
        c1 = i-1;
    end
    if c(i+round(length(c)/2)) == m2
        c2 = i+round(length(c)/2)-1;
    end
end
t = (c1 + c2) / 2;                         % threshold midway between the two peaks
for j = 1:s(1)
    for k = 1:s(2)
        if a(j,k) <= t
            ib(j,k) = 0;
        else
            ib(j,k) = 255;
        end
    end
end
subplot(232); imshow(ib); title("Binarized Image");

se = strel('disk',2);
yy = imdilate(ib,se);
subplot(233); imshow(yy), title('Dilated image')
se1 = strel('disk',1);
yy1 = imerode(yy,se1);
subplot(234); imshow(yy1), title('Eroded image')
jh1 = watershed(yy1);
jh2 = label2rgb(jh1);
subplot(235); imshow(jh2), title('Label Matrix as RGB Image')
subplot(236); imshow(a), hold on;
himage = imshow(jh2);
set(himage, 'AlphaData', 0.3);
title('Superimposition of label matrix on Original Image')

OUTPUT:

INFERENCE:
Watershed transformation on a grayscale image refers metaphorically to a
geological watershed which separates the adjacent drainage basins. It is primarily used
for image segmentation.

RESULT:
Thus a program to perform image segmentation using watershed algorithm is
written and executed using MATLAB.

EXPT. NO. 12
16/10/2024 FEATURE EXTRACTION IN MEDICAL IMAGES

AIM:
To write a MATLAB program to perform feature extraction in medical images.

SOFTWARE REQUIRED:
MATLAB R2023a

THEORY:
Feature extraction is a part of the dimensionality reduction process in which an initial set of raw data is divided and reduced. Feature extraction is a type of dimensionality reduction where a large number of pixels of the image are efficiently represented in such a way that the interesting parts of the image are captured effectively. Features may be specific structures in the image such as points, edges or objects. This technique is useful when there is a large data set and the amount of data to be processed must be reduced without losing any important or relevant information. It helps to reduce the amount of redundant data in the data set.
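
A minimal sketch of the same three feature groups obtained directly from the toolbox functions regionprops, graycomatrix and graycoprops, assuming the tum2.jpg file used in the program below; here the whole binarized image simply stands in for the segmented region.

g = rgb2gray(imread('tum2.jpg'));                          % assumed: the image file used in the program below
bw = imbinarize(g);                                        % stand-in for the segmented region
morph = regionprops(bw, g, 'Area', 'Eccentricity', 'MajorAxisLength', 'MinorAxisLength', 'Perimeter', 'MeanIntensity');
glc = graycomatrix(g, 'NumLevels', 32);                    % 32-level grey level co-occurrence matrix
tex = graycoprops(glc);                                    % contrast, correlation, energy, homogeneity
disp(morph(1)); disp(tex);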

ALGORITHM:
STEP 1: Start.
STEP 2: Read the input image.
STEP 3: Convert to binary image.
STEP 4: Resize the image.
STEP 5: Apply morphological filters to extract features of the image.
STEP 6: Compute textural and statistical features of the image.
STEP 7: Stop.

PROGRAM:
clc; clear; close all;
a = imread('C:\Users\Admin\Downloads\tum2.jpg');
subplot(321); imshow(a); title("Original Image");
a = rgb2gray(a);
binary = binarise(a);
subplot(322); imshow(binary); title("Binarized Image");
SE = strel('disk',4); k2 = imopen(binary,SE);
subplot(323); imshow(k2); title('Opened Image');
b = bwlabel(k2);
subplot(324); imshow(b,[]); title('Colourmap Image');
b(b~=4) = 0; b(b==4) = 1;                        % keep only connected component number 4
subplot(325); imshow(b); title('Image after removing connected components');
I = b.*double(a); subplot(326); imshow(I,[]); title('Segmented Image');
bb = im2bw(I);

disp('Morphological Features');
Area = regionprops(bb,'Area');
Area = struct2cell(Area);
Area = max(cell2mat(Area'));
disp("Area"); disp(Area);
Eccentricity = regionprops(bb,'Eccentricity');
Eccentricity = struct2cell(Eccentricity);
Eccentricity = max(cell2mat(Eccentricity'));
disp("Eccentricity"); disp(Eccentricity);
MajorAxisLength = regionprops(bb,'MajorAxisLength');
MajorAxisLength = struct2cell(MajorAxisLength);
MajorAxisLength = max(cell2mat(MajorAxisLength'));
disp("Major axis length"); disp(MajorAxisLength);
MinorAxisLength = regionprops(bb,'MinorAxisLength');
MinorAxisLength = struct2cell(MinorAxisLength);
MinorAxisLength = max(cell2mat(MinorAxisLength'));
disp("Minor axis length"); disp(MinorAxisLength);
Perimeter = regionprops(bb,'Perimeter');
Perimeter = struct2cell(Perimeter);
Perimeter = max(cell2mat(Perimeter'));
disp("perimeter"); disp(Perimeter);

disp('Statistical Features');
meanIntensity = sum(sum(I)')/Area; disp("Mean intensity"); disp(meanIntensity);
s1 = size(I); v = 0;
for s = 1:s1(1)
    for t = 1:s1(2)
        if I(s,t) > 0
            u = (I(s,t)-meanIntensity)^2;
            v = v + u;
        end
    end
end
var = v/Area;
disp("Variance"); disp(var);

disp('Textural Features');
glcms = glcm(I);
glcms(1,1) = 0;
glcmp = graycoprops(glcms)

function binary = binarise(A)
    % Threshold at sum/(0.4*x*y), i.e. 2.5 times the mean intensity
    [x, y] = size(A);
    a = double(A);
    sum = 0;
    for i = 1:x
        for j = 1:y
            sum = sum + a(i,j);
        end
    end
    threshold = sum/(0.4*x*y);
    binary = zeros(x,y);
    for i = 1:x
        for j = 1:y
            if a(i,j) >= threshold
                binary(i,j) = 1;
            else
                binary(i,j) = 0;
            end
        end
    end
end

function glcms = glcm(ii)
    % Quantize the intensities into 32 levels (bins of width 8) and count
    % co-occurrences of horizontally adjacent pixel pairs
    s = size(ii);
    glcms = zeros(32, 32);
    ii = floor(ii/8);
    for i = 1:s(1)-1
        for j = 1:s(2)-1
            m = ii(i, j);
            n = ii(i, j + 1);
            glcms(m+1, n+1) = glcms(m+1, n+1) + 1;
        end
    end
end

OUTPUT:
[Output figure: original, binarized, opened, colourmap, connected-component-removed and segmented images.]

Morphological Features
Area: 757
Eccentricity: 0.4819
Major axis length: 33.2707
Minor axis length: 29.1532
Perimeter: 96.2240

Statistical Features
Mean intensity: 203.3514
Variance: 369.7022

Textural Features
glcmp = struct with fields:
    Contrast: 42.3093
    Correlation: 0.2904
    Energy: 0.0306
    Homogeneity: 0.5487

INFERENCE:
It is inferred that the proposed algorithm can extract statistical and textural features which can be used when performing analysis of complex data to detect and isolate the desired portions of digitized images.

RESULT:
Thus the feature extraction has been performed using MATLAB.
