PRACTICAL FILE
SESSION: 2023-24
Advances in Deep Learning
Lab
(AIML308P)
III Year, VI Sem
Submitted to:
Name: Mr. Shubhankit Sudhakar
Designation: Assistant Professor

Submitted by:
Name: Shagun Srivastava
Enrollment No: 04818011621
INDEX

S.NO.  Program Name  (Date of Experiment / Date of Submission / Sign.)
1. Implement multilayer perceptron algorithm for MNIST Handwritten Digit Classification
2. Design a neural network for classifying movie reviews (Binary Classification) using IMDB dataset
3. Design a neural network for classifying newswires (Multi-class classification) using Reuters dataset
4. Design a neural network for predicting house prices using Boston Housing Price dataset
5. Build a Convolution Neural Network for MNIST Handwritten Digit Classification
6. Build a Convolution Neural Network for simple image (Dogs and Cats) Classification
7. Use a pre-trained convolutional neural network (VGG16) for image classification
8. Implement one-hot encoding of words or characters
9. Implement word embeddings for the IMDB dataset
10. Implement a Recurrent Neural Network for the IMDB movie review classification problem
EXPERIMENT 1
Aim: Implement multilayer perceptron algorithm for MNIST Handwritten Digit Classification
Theory
The multilayer perceptron (MLP) algorithm for MNIST Handwritten Digit Classification operates by learning complex relationships between pixel intensities and the corresponding digit labels. Through multiple layers of interconnected neurons, the MLP processes the input data, extracting features and gradually refining its ability to classify digits accurately. By adjusting weights and biases during training via backpropagation, the model fine-tunes its parameters to minimize classification errors and enhance performance.
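As a minimal sketch of the parameter update that backpropagation performs (illustrative only; the shapes, variable names, and learning rate below are assumptions, not part of the lab code), one gradient-descent step for a single sigmoid layer with squared-error loss looks like this:

import numpy as np

# Toy single-layer update: 784 inputs (one flattened digit), 10 output units
rng = np.random.default_rng(0)
x = rng.random(784)                  # hypothetical flattened 28x28 image
W = rng.random((10, 784)) * 0.01     # weights
b = np.zeros(10)                     # biases
y_true = np.eye(10)[3]               # one-hot label for the digit "3"

z = W @ x + b
y_pred = 1.0 / (1.0 + np.exp(-z))    # sigmoid activation

# Gradient of 0.5 * ||y_pred - y_true||^2 via the chain rule
delta = (y_pred - y_true) * y_pred * (1 - y_pred)
lr = 0.1                             # assumed learning rate
W -= lr * np.outer(delta, x)         # adjust weights against the gradient
b -= lr * delta                      # adjust biases against the gradient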
Code-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.optimizers import Adam, RMSprop
#import dataset
from keras.datasets import mnist
(x_train,y_train),(x_test,y_test)=mnist.load_data()
#plot a few sample digits from the training set
images = x_train[:36]
plt.figure(figsize=(6, 6))
for i in range(len(images)):
    plt.subplot(6, 6, i + 1)
    plt.imshow(images[i], cmap='gray')
    plt.axis('off')
plt.savefig("mnist-samples.png")   # save before show so the file is not blank
plt.show()
plt.close("all")
#building model architecture using keras
from keras.utils import to_categorical, plot_model
# compute no of labels
num_labels=len(np.unique(y_train))
# convert labels into one-hot vector
y_train=to_categorical(y_train)
y_test=to_categorical(y_test)
# resize & normalise
x_train=np.reshape(x_train,[-1,input_size])
x_train = x_train.astype('float32') / 255
x_test=np.reshape(x_test,[-1,input_size])
x_test = x_test.astype('float32') / 255
# Setting network parameter
batch_size = 128
hidden_units = 256
dropout = 0.45
# Model architecture is three-layer MLP with ReLU and dropout at each layer
model=Sequential()
model.add(Dense(hidden_units,input_dim=input_size))
model.add(Activation('relu'))
model.add(Dropout(dropout))
model.add(Dense(hidden_units))
model.add(Activation('relu'))
model.add(Dropout(dropout))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
# model summary
model.summary()
#implement MLP using keras
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train,y_train,epochs=30,batch_size=batch_size)
#model testing
loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print("\nTest accuracy: %.1f%%" % (100.0 * acc))
Output-
Conclusion -
Implementing the multilayer perceptron algorithm for MNIST Handwritten Digit Classification proves highly effective. After training on the MNIST dataset, the MLP distinguishes handwritten digits with high accuracy. By combining dense layers with iterative gradient-based optimization (here, the Adam optimizer), the model achieves robust performance, making it a useful baseline for digit recognition and related pattern classification tasks.
EXPERIMENT 2

Aim: Design a neural network for classifying movie reviews (Binary Classification) using IMDB dataset
Theory
Neural networks for classifying movie reviews operate on the principle of learning complex patterns in textual data to discern sentiment. Using algorithms inspired by the neural structure of the human brain, these networks process large amounts of labeled movie review data from the IMDB dataset. Through iterative training, the network adjusts its internal parameters to minimize prediction errors, gradually improving its ability to classify reviews as positive or negative. This approach achieves strong performance in sentiment analysis, making it a widely adopted technique in natural language processing and text classification.
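To make "minimizing prediction errors" concrete, here is a toy computation of the binary cross-entropy loss the model minimizes (the labels and probabilities below are made up for illustration):

import numpy as np

# Binary cross-entropy for three hypothetical predictions
y_true = np.array([1.0, 0.0, 1.0])   # actual labels (1 = positive review)
y_pred = np.array([0.9, 0.2, 0.6])   # predicted probabilities of "positive"
bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print("Binary cross-entropy: %.4f" % bce)   # lower is better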
Code -
from tensorflow.keras.datasets import imdb
# Load the data, keeping only the 10,000 most frequently occurring words
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# step 1: load the dictionary mappings from word to integer index
word_index = imdb.get_word_index()
# step 2: reverse word index to map integer indexes to their respective words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Step 3: decode the review, mapping integer indices to words
'''Decodes the review. Note that the indices are offset by 3
because 0, 1, and 2 are reserved indices for “padding,” “start of sequence,” and “unknown.”'''
decoded_review = ' '.join([reverse_word_index.get(i-3, '?') for i in train_data[0]])
decoded_review
# Sample output (decoded review; '?' marks words outside the 10,000-word vocabulary):
? this film was just brilliant casting location scenery story direction everyone's really suited the part
they played and you could just imagine being there robert ? is an amazing actor and now the same
being director ? father came from the same scottish island as myself so i loved the fact there was a
real connection with this film the witty remarks throughout the film were great it was just brilliant
so much that i bought the film as soon as it was released for ? and would recommend it to everyone
to watch and the fly fishing was amazing really cried at the end it was so sad and you know what
they say if you cry at a film it must have been good and this definitely was also ? to the two little
boy's that played the ? of norman and paul they were just brilliant children are often left out of the ?
list i think because the stars that play them all grown up are such a big profile for the whole film but
these children are amazing and should be praised for what they have done don't you thi
# Preparing the data
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
    # create an all-zero matrix of shape (len(sequences), 10000)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.   # set the indices of present words to 1
    return results
# Vectorize training Data
X_train = vectorize_sequences(train_data)
# Vectorize testing Data
X_test = vectorize_sequences(test_data)
X_train.shape
#vectorize labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
# Compiling the model
from tensorflow.keras import optimizers
from tensorflow.keras import losses
from tensorflow.keras import metrics
model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
loss = losses.binary_crossentropy,
metrics = [metrics.binary_accuracy])
#validating your approach
X_val = X_train[:10000]
partial_X_train = X_train[10000:]
# Labels for validation
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
history = model.fit(partial_X_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(X_val, y_val))
history_dict = history.history
history_dict.keys()
# Plotting the training and validation loss
import matplotlib.pyplot as plt
%matplotlib inline
# Plotting losses
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
plt.plot(epochs, loss_values, 'bo', label="Training Loss")
plt.plot(epochs, val_loss_values, 'b', label="Validation Loss")
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss Value')
plt.legend()
plt.show()
# Plotting the training and validation accuracy
plt.clf() #Clears the figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc_values, 'bo', label='Training acc')
plt.plot(epochs, val_acc_values, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# Making Predictions for testing data
np.set_printoptions(suppress=True)
result = model.predict(X_test)
result
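A hedged follow-up (a sketch using the arrays defined above): since the sigmoid output is a probability, thresholding at 0.5 turns each prediction into a hard label that can be compared against the true labels:

# Convert predicted probabilities to hard labels and measure test accuracy
pred_labels = (result > 0.5).astype('float32').flatten()
print("Manual test accuracy: %.4f" % np.mean(pred_labels == y_test))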
Output-
Conclusion
In conclusion, employing a neural network for binary classification of movie reviews using the IMDB dataset demonstrates remarkable efficacy. Even the small, densely connected network used here, operating on multi-hot encoded reviews, discerns sentiment with high accuracy. Through training on labeled data, the network learns patterns in the text that distinguish positive from negative sentiment with impressive precision. As a result, neural networks offer a robust solution for sentiment analysis in movie reviews, providing valuable insights for various applications in the film industry and beyond.
EXPERIMENT 3

Aim: Design a neural network for classifying newswires (Multi-class classification) using Reuters dataset.
Theory
Designing a neural network for classifying newswires using the Reuters dataset involves applying deep learning techniques to process and understand textual data. By employing a neural network (here, a densely connected feed-forward network over multi-hot encoded articles), the model learns intricate patterns within news articles in order to categorize them into multiple classes. Through iterative training on the labeled Reuters dataset, the network adjusts its parameters to minimize classification errors, optimizing its ability to assign the correct category to each newswire.
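Multi-class classification hinges on the softmax output: the network produces one probability per class, and the predicted class is the argmax. A toy illustration (the raw scores below are made up):

import numpy as np

# Softmax turns raw class scores into a probability distribution
scores = np.array([2.0, 1.0, 0.1])             # hypothetical logits for 3 classes
probs = np.exp(scores) / np.sum(np.exp(scores))
print(probs)                                    # approx. [0.659 0.242 0.099]
print("Predicted class:", np.argmax(probs))     # index of the largest probability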
Code-
import numpy as np
from keras.datasets import reuters
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline
# load a dataset
(XTrain, YTrain),(XTest, YTest) = reuters.load_data(num_words=None, test_split=0.3)
# Data preprocessing and visualization.
print('YTrain values = ',np.unique(YTrain))
print('YTest values = ',np.unique(YTest))
unique, counts = np.unique(YTrain, return_counts=True)
print('YTrain distribution = ',dict(zip(unique, counts)))
unique, counts = np.unique(YTest, return_counts=True)
print('YTest distribution = ', dict(zip(unique, counts)))
plt.figure(1)
plt.subplot(121)
plt.hist(YTrain, bins='auto')
plt.xlabel("Classes")
plt.ylabel("Number of occurrences")
plt.title("YTrain data")
plt.subplot(122)
plt.hist(YTest, bins='auto')
plt.xlabel("Classes")
plt.ylabel("Number of occurrences")
plt.title("YTest data")
plt.show()
# reuters.get_word_index() returns a dictionary mapping words to their integer indexes
WordIndex = reuters.get_word_index(path="reuters_word_index.json")
print(len(WordIndex))
IndexToWord = {}
for key, value in WordIndex.items():
    IndexToWord[value] = key
print(' '.join([IndexToWord[x] for x in XTrain[1]]))
print(YTrain[1])
MaxWords = 10000
# Tokenization of words.
Tok = Tokenizer(num_words=MaxWords)
XTrain = Tok.sequences_to_matrix(XTrain, mode='binary')
XTest = Tok.sequences_to_matrix(XTest, mode='binary')
# Preprocessing of labels
NumClasses = max(YTrain) + 1
YTrain = to_categorical(YTrain, NumClasses)
YTest = to_categorical(YTest, NumClasses)
print(XTrain[1])
print(len(XTrain[1]))
model = Sequential()
#model building
model.add(Dense(512, input_shape=(MaxWords,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(NumClasses))
model.add(Activation('softmax'))   # softmax for multi-class probabilities
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(XTrain, YTrain,
validation_data=(XTest, YTest),
epochs=15,
batch_size=64)
#Evaluate the model.
Scores = model.evaluate(XTest, YTest, verbose=1)
print('Test loss:', Scores[0])
print('Test accuracy:', Scores[1])
def plotmodelhistory(history):
    fig, axs = plt.subplots(1, 2, figsize=(15, 5))
    # summarize history for accuracy
    axs[0].plot(history.history['accuracy'])
    axs[0].plot(history.history['val_accuracy'])
    axs[0].set_title('Model Accuracy')
    axs[0].set_ylabel('Accuracy')
    axs[0].set_xlabel('Epoch')
    axs[0].legend(['train', 'validate'], loc='upper left')
    # summarize history for loss
    axs[1].plot(history.history['loss'])
    axs[1].plot(history.history['val_loss'])
    axs[1].set_title('Model Loss')
    axs[1].set_ylabel('Loss')
    axs[1].set_xlabel('Epoch')
    axs[1].legend(['train', 'validate'], loc='upper left')
    plt.show()
# list all data in history
print(history.history.keys())
plotmodelhistory(history)
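As a final sanity check (a sketch reusing the tokenized XTest and the trained model above), the predicted topic of a single newswire is the argmax of the softmax output:

probs = model.predict(XTest[:1])
print("Predicted class:", np.argmax(probs[0]))
print("True class:", np.argmax(YTest[0]))   # YTest is one-hot encoded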
Output -
Conclusion
This approach offers valuable insights for information retrieval, news recommendation systems, and other applications requiring the efficient organization and analysis of large volumes of news content.
EXPERIMENT 4
Aim: Design a neural network for predicting house prices using Boston Housing Price dataset.
Theory
Designing a neural network for predicting house prices using the Boston Housing Price dataset involves constructing a model that learns the complex relationships between the various features of a house and its corresponding price. By feeding the network input features such as the number of rooms, crime rate, and accessibility to highways, and training it on the corresponding house prices, the network adjusts its internal parameters through iterative learning to make accurate predictions.
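Because this is a regression task, training minimizes mean squared error (MSE) and tracks mean absolute error (MAE), matching the compile step below. A toy computation (the prices are made up, in $1000s):

import numpy as np

# Hypothetical true prices vs. model predictions (in $1000s)
y_true = np.array([24.0, 18.5, 32.0])
y_pred = np.array([22.0, 20.0, 30.5])
mse = np.mean((y_true - y_pred) ** 2)     # the loss the network minimizes
mae = np.mean(np.abs(y_true - y_pred))    # the metric tracked during training
print("MSE: %.3f, MAE: %.3f" % (mse, mae))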
Code-
#import necessary libraries
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.callbacks import EarlyStopping
#load the dataset
from keras.datasets import boston_housing
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
# combine train and test splits into one DataFrame so we can shuffle and split below
house_df = pd.DataFrame(np.vstack([train_data, test_data]))
house_df[13] = np.concatenate([train_targets, test_targets])
# separate the training features and the target variable
feature = house_df.iloc[:, 0:13]   # training features
target = house_df.iloc[:, 13]      # target variable (median house price)
print(feature.head())
print('\n', target.head())
#feature normalization
normalized_feature= keras.utils.normalize(feature.values)
print(normalized_feature)
# shuffle and split data into train (~80%) and test (~20%)
X_train, X_test, y_train, y_test = train_test_split(normalized_feature, target.values,
test_size=0.2, random_state=42)
print('training data shape: ',X_train.shape)
print('testing data shape: ',X_test.shape)
# get number of columns in training data
n_cols=X_train.shape[1]
# builds model
model=keras.Sequential()
model.add(keras.layers.Dense(150, activation=tf.nn.relu,
input_shape=(n_cols,)))
model.add(keras.layers.Dense(150, activation=tf.nn.relu))
model.add(keras.layers.Dense(150, activation=tf.nn.relu))
model.add(keras.layers.Dense(150, activation=tf.nn.relu))
model.add(keras.layers.Dense(150, activation=tf.nn.relu))
model.add(keras.layers.Dense(1))
#compile model
model.compile(loss='mse', optimizer='adam', metrics=[ 'mae'])# use metric as mean absolute error
#inspect the model
model.summary()
#train model and perform validation test
early_stop = EarlyStopping(monitor='val_loss', patience=15)
history = model.fit(X_train, y_train, epochs=300,
                    validation_split=0.2, verbose=1, callbacks=[early_stop])
# predict house price using the test data
test_predictions=model.predict(X_test).flatten()
print(test_predictions)
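# (Optional sketch, not in the original code) report the test-set error
# explicitly; this reuses the compiled model above (loss='mse', metrics=['mae'])
test_mse, test_mae = model.evaluate(X_test, y_test, verbose=0)
print('Test MSE: %.2f, Test MAE: %.2f' % (test_mse, test_mae))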
# visualize the predictions against a diagonal reference line
y = test_predictions   # y-axis: predicted prices
x = y_test             # x-axis: true prices
fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(x, y)
ax.set(xlim=(0, 55), ylim=(0, 55))
ax.plot(ax.get_xlim(), ax.get_ylim(), color='red')  # 45-degree diagonal
plt.xlabel('True Values')
plt.ylabel('Predicted Values')
plt.title('Evaluation Result')
plt.show()
Output-
Conclusion - Employing a neural network for house price prediction using the Boston Housing Price dataset proves effective. By leveraging the network's ability to capture intricate patterns in the data, it can accurately estimate house prices based on diverse features.
EXPERIMENT 5
Aim: Build a Convolution Neural Network for MNIST Handwritten Digit Classification.
Theory
Convolutional Neural Networks (CNNs) excel at MNIST handwritten digit classification due to their ability to extract hierarchical features from images. By employing convolutional layers to detect patterns and pooling layers to reduce dimensionality, CNNs learn representations that capture the distinctive features of handwritten digits. Through the final fully connected layer and softmax activation, the network accurately classifies digits into their respective categories.
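As a quick shape sketch (assuming 3x3 kernels, stride 1, no padding, and 2x2 pooling, roughly matching the model built below), convolution and pooling progressively shrink a 28x28 input:

# Sketch: trace feature-map shapes through a small convolution/pooling stack
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D

demo = Sequential([
    Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),  # -> (26, 26, 32)
    Conv2D(64, 3, activation='relu'),                           # -> (24, 24, 64)
    MaxPooling2D(pool_size=2),                                  # -> (12, 12, 64)
])
demo.summary()   # prints the shapes noted above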
Code-
# Import important library
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Activation, Dropout
#load a dataset
(x_train, y_train),(x_test,y_test) = mnist.load_data()
# Show Data
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Let's see in image form
img_index = 31  # select any index below 60000
plt.imshow(x_train[img_index], cmap = 'gray')
plt.show()
# Reshape the images so they work with the Keras API
# Keras accepts input data as (num_img, img_height, img_width, img_channels)
x_train = x_train.reshape(x_train.shape[0],28,28,1)
x_test = x_test.reshape(x_test.shape[0],28,28,1)
print(x_train.shape)
print(x_test.shape)
# Data Scaling & Normalization
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# Normalize the pixel values: scale to [0, 1], then center around 0
x_train = (x_train / 255) - 0.5
x_test = (x_test / 255) - 0.5
## Creating a Sequential model for CNN in Keras
num_filter = 32
num_filter1 = 64
num_filter2 = 8
filter_size = 3
filter_size1 = 5
pool_size1 = 2
model = Sequential()
model.add(Conv2D(num_filter, filter_size, strides=(1, 1), activation='relu',
                 input_shape=(28, 28, 1)))
model.add(Conv2D(num_filter1, filter_size, strides=(1,1), activation = 'relu'))
model.add(Dropout(0.25))
model.add(MaxPooling2D(pool_size=pool_size1))
model.add(Conv2D(num_filter, filter_size1, strides=(1,1), activation = 'relu'))
model.add(Conv2D(num_filter2, filter_size1, strides=(1,1), activation = 'relu'))
model.add(Dropout(0.25))
model.add(MaxPooling2D(pool_size=pool_size1))
model.add(Flatten())
model.add(Dense(10,activation = 'softmax'))
model.summary()
# Compile the model
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
# Train the model
model_history = model.fit(x_train, to_categorical(y_train), epochs=10, verbose=1,
                          validation_data=(x_test, to_categorical(y_test)))
#plot a graph
plt.plot(model_history.history['accuracy'])
plt.plot(model_history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(model_history.history['loss'])
plt.plot(model_history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Score
score = model.evaluate(x_test, to_categorical(y_test), verbose=0)
print('Test Loss', score[0])
print('Test accuracy', score[1])
Output-
Conclusion
Building a CNN for MNIST handwritten digit classification yields impressive results. The model learns to recognize intricate patterns in the images and achieves high accuracy in distinguishing between different digits. With proper tuning of hyperparameters and training on sufficient data, CNNs offer a robust and efficient solution for this classification task, showcasing the power of deep learning in image recognition.
EXPERIMENT 6
Aim: Build a Convolution Neural Network for simple image (Dogs and Cats) Classification
Theory
Building a Convolutional Neural Network (CNN) for simple image classification, such as distinguishing between dogs and cats, relies on leveraging the hierarchical patterns present in images. CNNs excel at capturing spatial features through convolutional layers, followed by pooling to downsample and extract the essential information. By incorporating multiple convolutional and pooling layers, the network learns to recognize intricate details like shapes, textures, and patterns specific to dogs and cats.
Code-
#import important library
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
from tensorflow import keras
from keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.layers import Conv2D, MaxPooling2D
#load the dataset
from tensorflow.keras.utils import image_dataset_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img
import os
import matplotlib.image as mpimg
from zipfile import ZipFile
data_path = 'dog-vs-cat-classification.zip'
with ZipFile(data_path, 'r') as zf:
    zf.extractall()
print('The data set has been extracted.')
path = 'dog-vs-cat-classification'
classes = os.listdir(path)
classes
fig = plt.gcf()
fig.set_size_inches(16, 16)
cat_dir = os.path.join('dog-vs-cat-classification/cats')
dog_dir = os.path.join('dog-vs-cat-classification/dogs')
cat_names = os.listdir(cat_dir)
dog_names = os.listdir(dog_dir)
pic_index = 210
cat_images = [os.path.join(cat_dir, fname) for fname in cat_names[pic_index-8:pic_index]]
dog_images = [os.path.join(dog_dir, fname) for fname in dog_names[pic_index-8:pic_index]]
for i, img_path in enumerate(cat_images + dog_images):
    sp = plt.subplot(4, 4, i + 1)
    sp.axis('Off')
    img = mpimg.imread(img_path)
    plt.imshow(img)
plt.show()
base_dir = 'dog-vs-cat-classification'
# Create datasets
train_datagen = image_dataset_from_directory(base_dir,
                                             image_size=(200, 200),
                                             subset='training',
                                             seed=1,
                                             validation_split=0.1,
                                             batch_size=32)
test_datagen = image_dataset_from_directory(base_dir,
                                            image_size=(200, 200),
                                            subset='validation',
                                            seed=1,
                                            validation_split=0.1,
                                            batch_size=32)
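# (Optional sketch, not part of the original lab code) image_dataset_from_directory
# yields raw pixel values in [0, 255]; scaling them to [0, 1] usually stabilizes
# training. tf.keras.layers.Rescaling is assumed available (TensorFlow 2.6+).
normalization = tf.keras.layers.Rescaling(1.0 / 255)
train_datagen = train_datagen.map(lambda x, y: (normalization(x), y))
test_datagen = test_datagen.map(lambda x, y: (normalization(x), y))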
#model building
model = tf.keras.models.Sequential([
layers.Conv2D(32, (3, 3), activation='relu', input_shape=(200, 200, 3)),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.MaxPooling2D(2, 2),
layers.Flatten(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.Dropout(0.1),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.Dropout(0.2),
layers.BatchNormalization(),
layers.Dense(1, activation='sigmoid')
])
model.summary()
keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
#compile a model
model.compile( loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'] )
#fit a model
history = model.fit(train_datagen,
epochs=10,
validation_data=test_datagen)
history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot()
history_df.loc[:, ['accuracy', 'val_accuracy']].plot()
plt.show()
from keras.preprocessing import image
test_image = image.load_img('1.jpg',target_size=(200,200))
#For show image
plt.imshow(test_image)
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image,axis=0)
# Result array: sigmoid probability that the image is a dog (classes are
# sorted alphabetically by image_dataset_from_directory: cats=0, dogs=1)
result = model.predict(test_image)
# Map the result to the class name
if result >= 0.5:
    print("Dog")
else:
    print("Cat")
Output-
Conclusion - Employing a CNN for dog vs. cat classification yields impressive results due to its ability to learn complex image features automatically. Through extensive training on labeled data, the network becomes adept at discerning between the two categories with high accuracy. This approach not only showcases the power of deep learning in image analysis tasks but also provides a practical solution for various real-world applications, from pet identification systems to broader image recognition tasks.