
School of Computer Science and Engineering
Digital Assessment I, Fall Semester 2024-25

Class Number: VL2024250102168

Name : Adarsh Kumar Priyadarshi

Reg No : 21BCE0974

Course Name : Deep Learning Lab

Course Code : BCSE332P


Question: -
TensorFlow: -

Definition: -

TensorFlow is an open-source machine learning framework developed by the Google Brain team. It is designed to facilitate the creation and training of machine learning models, particularly deep learning models.

Features: -

TensorFlow offers a rich set of features that make it a powerful tool for machine learning and deep learning applications. Some of the key features are mentioned below:

1. Extensive Library: TensorFlow provides a comprehensive library of tools, operations, and pre-built models, making it easier to build and deploy complex machine learning models.

2. Eager Execution: Eager execution allows for immediate evaluation of operations, which makes debugging and development more intuitive and straightforward (see the sketch after this list).

3. Keras API Integration: TensorFlow integrates with Keras, a high-level neural networks API, simplifying the process of building and training models with a user-friendly interface.

4. TensorFlow Serving: This feature enables the deployment of machine learning models in production environments, allowing for efficient serving of predictions.

5. TensorFlow Lite: Designed for mobile and embedded devices, TensorFlow Lite enables the deployment of lightweight models in resource-constrained environments.

6. TensorFlow.js: This JavaScript library allows you to define, train, and run machine learning models directly in the browser or in Node.js.

7. TensorFlow Extended (TFX): TFX is an end-to-end platform for deploying production machine learning pipelines, including data validation, preprocessing, model training, and serving.

8. AutoML: TensorFlow offers tools for automating the process of designing and training machine learning models, making it easier for users to build high-quality models with minimal manual effort.

9. TensorBoard: TensorBoard is a visualization tool that provides insights into the model's training process, including metrics, graphs, and other visualizations to help understand and debug models.

10. Multi-Platform Support: TensorFlow supports various platforms, including CPUs, GPUs, and TPUs, as well as different operating systems such as Windows, macOS, and Linux.
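
As an illustration of eager execution, the following minimal sketch (assuming TensorFlow 2.x, where eager execution is the default; the values are arbitrary) evaluates operations immediately and computes a gradient on the fly:

import tensorflow as tf

# TensorFlow 2.x runs eagerly by default: operations return concrete values
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

c = tf.matmul(a, b)      # evaluated immediately, no session or graph needed
print(c.numpy())         # inspect the result as a NumPy array

# Gradients can also be computed on the fly with GradientTape
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
print(tape.gradient(y, x).numpy())  # dy/dx = 2*x = 6.0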

Types: -

TensorFlow comes in various versions and specialized libraries designed to cater to different machine learning needs and deployment environments. The main types and variants of TensorFlow are:

1. TensorFlow Core:
o The main TensorFlow library, which includes tools and libraries for building and training machine learning models. It provides both high-level and low-level APIs.

2. TensorFlow Lite:
o A lightweight version of TensorFlow designed for mobile and embedded devices. TensorFlow Lite models are optimized for performance and size, making them suitable for resource-constrained environments (see the conversion sketch after this list).

3. TensorFlow.js:
o A JavaScript library that enables the training and deployment of machine learning models directly in web browsers and Node.js. It allows for in-browser model execution and real-time, client-side inference.

4. TensorFlow Extended (TFX):
o An end-to-end platform for deploying production machine learning pipelines. TFX includes components for data ingestion, validation, transformation, training, evaluation, and serving, enabling robust and scalable ML operations.

5. TensorFlow Serving:
o A flexible, high-performance serving system for deploying machine learning models in production. TensorFlow Serving makes it easy to deploy new algorithms and experiments while keeping the same server architecture and APIs.

6. TensorFlow Hub:
o A repository and library for reusable machine learning modules. TensorFlow Hub provides a wide variety of pre-trained models and embeddings that can be easily integrated into new models and applications.

7. TensorFlow Federated (TFF):
o A framework for federated learning, which allows the training of machine learning models on decentralized data located on different devices while maintaining data privacy.

8. TensorFlow Probability:
o A library for probabilistic reasoning and statistical analysis. TensorFlow Probability combines TensorFlow with probability theory to enable deep probabilistic models and probabilistic machine learning.

9. TensorFlow Quantum (TFQ):
o A library for hybrid quantum-classical machine learning, which integrates quantum computing algorithms and simulators with TensorFlow. It is designed for researchers working on quantum machine learning.

10. TensorFlow Graphics:
o A library for deep learning in computer graphics. TensorFlow Graphics includes tools and models for 3D deep learning, computer vision, and rendering tasks.
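
As a small illustration of how these variants connect, the following minimal sketch (a toy model with an arbitrary input size) converts a Keras model to TensorFlow Lite format for deployment on a mobile or embedded device:

import tensorflow as tf

# A tiny placeholder Keras model (in practice this would be a trained model)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Convert the Keras model to the TensorFlow Lite flat-buffer format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the converted model to disk for use on-device
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
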
Keras: -

Definition: -

Keras is an open-source software library that provides a Python interface for building and training neural networks. It acts as a high-level API for several deep learning frameworks, most notably TensorFlow.

Features: -

Keras offers a variety of features that make it a popular choice for building and training neural network models. Some of the key features are mentioned below:

1. User-Friendly API:
o Keras provides a simple, consistent, and intuitive API, making it easy to build and train deep learning models. Its straightforward syntax allows for rapid development and experimentation.

2. Modular and Composable:
o The library is highly modular, enabling users to build models by combining standalone modules, such as neural network layers, loss functions, optimizers, and activation functions.

3. Support for Multiple Backends:
o While Keras initially supported multiple backends like TensorFlow, Theano, and CNTK, it is now tightly integrated with TensorFlow, taking full advantage of its capabilities.

4. Model Building Methods:
o Sequential API: For creating simple, linear stacks of layers.
o Functional API: For building more complex models with non-linear topology, shared layers, and multiple inputs and outputs.

5. Pre-trained Models:
o Keras includes a suite of pre-trained models (e.g., VGG, Inception, ResNet) that can be easily used for transfer learning or as a starting point for developing custom models (see the transfer-learning sketch after this list).

6. Extensive Preprocessing Utilities:
o The library provides utilities for preprocessing image, text, and sequence data, including data augmentation for images and tokenization for text.

7. Customizable:
o Users can define their own custom layers, loss functions, metrics, and optimizers if the built-in options do not meet their needs.

8. Integration with TensorFlow:
o Since its integration with TensorFlow, Keras benefits from TensorFlow's advanced features like distributed training, TPU support, and TensorFlow Extended (TFX) for production deployment.

9. Visualization Tools:
o Keras works well with TensorBoard, TensorFlow's visualization toolkit, allowing users to visualize model training progress, metrics, and model graphs.

10. Support for Multiple GPUs:
o Keras can leverage multiple GPUs for training models, enabling faster training times for large models and datasets.
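
As an illustration of working with pre-trained models, the following minimal transfer-learning sketch (with an assumed 224x224 RGB input and a hypothetical 10-class task) reuses VGG16 as a frozen feature extractor:

import tensorflow as tf
from tensorflow.keras import layers, models

# Load VGG16 pre-trained on ImageNet, without its classification head
base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base

# Stack a small classifier on top for the hypothetical 10-class task
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(...) would then be called on the new dataset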

Types: -

Keras, as an API, can be utilized in different ways and integrated with various deep learning frameworks. The main types and implementations of Keras are:

1. Standalone Keras:
o Originally, Keras was developed as a standalone library that could work with multiple backend engines like TensorFlow, Theano, and Microsoft Cognitive Toolkit (CNTK). This version provided a flexible way to switch between different backends for computational graph execution.

2. Keras as part of TensorFlow (tf.keras):
o With the release of TensorFlow 2.0, Keras was integrated into TensorFlow as the default high-level API. tf.keras provides all the functionalities of Keras while leveraging TensorFlow's powerful features. This integration simplifies installation and ensures better compatibility and performance.

3. Keras Implementations for Specific Frameworks:
o Theano: Keras can run on Theano, which was one of the original backends supported by Keras. However, Theano is no longer actively maintained.
o Microsoft Cognitive Toolkit (CNTK): Keras can also run on CNTK, which is known for its efficiency in deep learning computations. CNTK is suitable for training large-scale, distributed models.
o PlaidML: PlaidML is another backend that aims to provide efficient computation on various hardware, including GPUs from different vendors.

4. Keras Tuner:
o Keras Tuner is an extension of Keras that provides hyperparameter tuning capabilities. It helps in finding the best hyperparameters for Keras models, simplifying the process of optimizing models.

5. Different APIs within Keras:
o Sequential API: The simplest API, used for creating models layer-by-layer in a linear stack. It is suitable for most common neural network architectures.
o Functional API: A more flexible and powerful API that allows for the creation of complex models, such as those with non-linear topology, shared layers, and multiple inputs and outputs. This API is useful for building models that cannot be represented as a simple stack of layers.
o Model Subclassing: This approach allows users to define their own models by subclassing the tf.keras.Model class and implementing the call method. It provides maximum flexibility and is useful for custom and complex model architectures. A short sketch of these three styles follows this list.
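
The following minimal sketch (arbitrary layer sizes) shows the same small two-layer network expressed with the Sequential API, the Functional API, and model subclassing:

import tensorflow as tf
from tensorflow.keras import layers, models

# 1. Sequential API: a linear stack of layers
seq_model = models.Sequential([
    layers.Dense(32, activation='relu', input_shape=(16,)),
    layers.Dense(1, activation='sigmoid')
])

# 2. Functional API: an explicit graph of layers, allowing multiple inputs/outputs
inputs = tf.keras.Input(shape=(16,))
x = layers.Dense(32, activation='relu')(inputs)
outputs = layers.Dense(1, activation='sigmoid')(x)
func_model = models.Model(inputs=inputs, outputs=outputs)

# 3. Model subclassing: maximum flexibility via a custom call() method
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = layers.Dense(32, activation='relu')
        self.out = layers.Dense(1, activation='sigmoid')

    def call(self, inputs):
        return self.out(self.hidden(inputs))

sub_model = MyModel()
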
Cloning GitHub Repository: -

Cloning a GitHub repository in Google Colaboratory (Colab) involves using Colab's built-in capabilities to run shell commands. The steps to do it are:

Steps to Clone a GitHub Repository in Google Colab:

1. Open Google Colab:
o Go to Google Colab and either create a new notebook or open an existing one.

2. Open a Code Cell:
o Insert a new code cell in your Colab notebook.

3. Install Git (if necessary):
o In most cases, Git is already installed in Colab. You can verify by running:

!git --version

4. Clone the Repository:
o Use the git clone command followed by the repository URL. For example:

!git clone https://github.com/username/repository.git

Code: -

!git --version

!git clone https://github.com/nvie/gitflow.git


Upload Data: -

Uploading data to Google Colaboratory (Colab) can be done in several ways, including directly uploading files from your local machine, using Google Drive, or accessing files from a URL. Here are the steps for each method:

Method 1: Direct File Upload

1. Open Google Colab:
o Go to Google Colab and create or open a notebook.

2. Insert a Code Cell:
o Add a new code cell to your notebook.

3. Run the Upload Code:
o Use the following code to upload files from your local machine:

from google.colab import files
uploaded = files.upload()

4. Select Files to Upload:
o Run the cell, and a file selection dialog will appear. Choose the files you want to upload. After selection, the files will be uploaded and stored in the Colab environment.

5. Access the Uploaded Files:
o The uploaded files can be accessed using their filenames. For example, if you uploaded a file named data.csv, you can read it using:

import pandas as pd

# Read the uploaded CSV file
df = pd.read_csv('data.csv')

Code: -

from google.colab import files

uploaded = files.upload()

Importing Kaggle's dataset:

Importing datasets from Kaggle into Google Colaboratory (Colab) can be done by following these steps:
Prerequisites:

• Ensure you have a Kaggle account.
• Download your Kaggle API token (a JSON file) from your account settings.

Steps to Import Kaggle Datasets in Google Colab:

1. Upload Kaggle API Token:
o First, upload your Kaggle API token (kaggle.json) to Colab. You can do this by running the following code in a Colab cell:

from google.colab import files
files.upload()

o A file selection dialog will appear. Select the kaggle.json file you downloaded from Kaggle.

2. Set Up Kaggle Configuration:
o After uploading the API token, move it into the ~/.kaggle directory that the Kaggle tool expects. Note that os.makedirs does not expand '~', so it must be expanded explicitly. Run the following code:

import os

# Create the directory for Kaggle (expanduser resolves '~' to the home directory)
os.makedirs(os.path.expanduser('~/.kaggle'), exist_ok=True)

# Move the kaggle.json file to the Kaggle directory
!cp kaggle.json ~/.kaggle/

# Set permissions for the kaggle.json file
!chmod 600 ~/.kaggle/kaggle.json

3. Install Kaggle Package:
o Make sure the Kaggle package is installed in your Colab environment (it is usually pre-installed). If not, you can install it using:

!pip install kaggle

4. Find the Dataset URL on Kaggle:
o Go to the Kaggle dataset page that you want to import. The URL will look something like this:

https://www.kaggle.com/datasets/username/dataset-name

o You need the dataset identifier, which is typically in the format username/dataset-name.

5. Download the Dataset:
o Use the following command to download the dataset:

!kaggle datasets download -d username/dataset-name

o Replace username/dataset-name with the actual identifier of the dataset you want to download.

6. Unzip the Downloaded Dataset:
o After downloading, you may need to unzip the dataset if it's in a ZIP format:

!unzip dataset-name.zip

o Adjust the filename based on the actual name of the downloaded ZIP file.

7. Access the Dataset:
o After unzipping, you can access the dataset files using standard file operations. For example:

import pandas as pd

# Load a CSV file from the dataset
df = pd.read_csv('file_name.csv')  # Replace 'file_name.csv' with the actual file name

Code: -

from google.colab import files
files.upload()

import os
# expanduser resolves '~' to the home directory; a bare '~' would create a literal '~' folder
os.makedirs(os.path.expanduser('~/.kaggle'), exist_ok=True)
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json

!pip install kaggle
!kaggle datasets download -d username/dataset-name
!unzip dataset-name.zip

import pandas as pd
df = pd.read_csv('file_name.csv')  # Replace 'file_name.csv' with the actual file name
Basic File operations: -

Performing basic file operations in Google Colaboratory (Colab) involves using Python code to manipulate files, just as you would on your local machine. Some common file operations you might need to perform in Colab:

1. Uploading Files from Your Local Machine

1. Upload Files:
o Use the following code to upload files from your local machine to Colab:

from google.colab import files
uploaded = files.upload()

o A file selection dialog will appear. Choose the files you want to upload.

2. Access Uploaded Files:
o After uploading, you can access the files using their filenames. For example, if you uploaded a file named data.csv:

import pandas as pd
df = pd.read_csv('data.csv')

2. Downloading Files to Your Local Machine

1. Download Files:
o Use the following code to download files from Colab to your local machine:

from google.colab import files
files.download('filename')

3. Reading and Writing Files

1. Read a File:
o Use standard Python file operations to read a file. For example, reading a text file:

with open('example.txt', 'r') as file:
    content = file.read()
print(content)

2. Write to a File:
o Use standard Python file operations to write to a file. For example, writing to a text file:

with open('example.txt', 'w') as file:
    file.write('Hello, world!')

4. Listing Files and Directories

1. List Files in a Directory:
o Use the os module to list files in a directory:

import os
os.listdir()

5. Moving and Renaming Files

1. Move/Rename a File:
o Use the shutil module to move or rename a file:

import shutil
shutil.move('old_filename', 'new_filename')  # Replace with actual filenames

6. Deleting Files

1. Delete a File:
o Use the os module to delete a file:

import os
os.remove('filename')  # Replace 'filename' with the actual file name

Code: -

from google.colab import files
uploaded = files.upload()

import os
print(os.listdir())

# 'example.txt' is assumed to have been uploaded in the previous step
with open('example.txt', 'r') as file:
    content = file.read()
print(content)

with open('new_file.txt', 'w') as file:
    file.write('This is a new file.')

import shutil
shutil.move('new_file.txt', 'renamed_file.txt')
print(os.listdir())

files.download('renamed_file.txt')

os.remove('renamed_file.txt')
print(os.listdir())

Neural network to classify MNIST dataset: -

Steps: -

Implementing a neural network to classify the MNIST dataset involves several steps, including data preprocessing, model building, training, evaluation, and finally, making predictions. Below is a step-by-step guide using Python and popular libraries like TensorFlow/Keras:

1. Import Necessary Libraries

import tensorflow as tf

from tensorflow.keras import layers, models

import matplotlib.pyplot as plt

import numpy as np

2. Load and Preprocess the MNIST Dataset

# Load the dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the images to the range [0, 1]
train_images = train_images / 255.0
test_images = test_images / 255.0

# Reshape images to add a channel dimension (needed for CNN)
train_images = train_images.reshape((train_images.shape[0], 28, 28, 1))
test_images = test_images.reshape((test_images.shape[0], 28, 28, 1))

3. Build the Neural Network Model

Here, we'll build a Convolutional Neural Network (CNN) for better performance on image data.

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

4. Compile the Model

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

5. Train the Model

history = model.fit(train_images, train_labels, epochs=5,
                    validation_data=(test_images, test_labels))

6. Evaluate the Model

test_loss, test_acc = model.evaluate(test_images, test_labels)

print(f'Test accuracy: {test_acc}')

7. Visualize Training History

plt.plot(history.history['accuracy'], label='accuracy')

plt.plot(history.history['val_accuracy'], label='val_accuracy')

plt.xlabel('Epoch')

plt.ylabel('Accuracy')

plt.ylim([0, 1])

plt.legend(loc='lower right')

plt.show()

8. Make Predictions

predictions = model.predict(test_images)

# To see the predicted label for the first test image

predicted_label = np.argmax(predictions[0])

print(f'Predicted label: {predicted_label}')


print(f'True label: {test_labels[0]}')

Code: -

import tensorflow as tf

from tensorflow.keras import layers, models

import matplotlib.pyplot as plt

import numpy as np

mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

train_images = train_images / 255.0
test_images = test_images / 255.0

train_images = train_images.reshape((train_images.shape[0], 28, 28, 1))
test_images = test_images.reshape((test_images.shape[0], 28, 28, 1))

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=5,
                    validation_data=(test_images, test_labels))

test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0, 1])
plt.legend(loc='lower right')
plt.show()

predictions = model.predict(test_images)
predicted_label = np.argmax(predictions[0])
print(f'Predicted label: {predicted_label}')
print(f'True label: {test_labels[0]}')

Link: https://colab.research.google.com/drive/16AkRtdk6taTBclIR5ZsntrXvC63Za3f9?usp=sharing
School of Computer Science and Engineering
Digital Assessment II, Fall Semester 2024-25

Class Number: VL2024250102168

Name : Adarsh Kumar Priyadarshi

Reg No : 21BCE0974

Course Name : Deep Learning Lab

Course Code : BCSE332P


Question: -

Colab Link :
https://colab.research.google.com/drive/1GDDUWQN9ve3grKpjWW2RTJNhNRhrSkDW?usp=sharing
Multilayer Perceptron: -

A Multilayer Perceptron (MLP) is a type of artificial neural network that consists of multiple layers of neurons, which are organized into an input layer, one or more hidden layers, and an output layer. MLPs are used for supervised learning tasks and can model complex relationships in data. Here's a breakdown of its components and functionality:

Components
1. Input Layer:
o This layer receives the input features of the dataset. Each neuron in this
layer corresponds to a feature.
2. Hidden Layers:
o MLPs can have one or more hidden layers. Each neuron in these layers applies
a transformation to the inputs it receives, typically using a weighted sum
followed by a non-linear activation function (like ReLU, sigmoid, or tanh). The
hidden layers allow the network to learn complex patterns and
representations.
3. Output Layer:
o The output layer produces the final prediction. The number of neurons in
this layer depends on the type of task (e.g., one neuron for binary
classification, multiple neurons for multi-class classification).

Functionality: -

• Forward Propagation: During the forward pass, input data is passed through the network layer by layer. Each neuron's output is computed based on its weights and biases and passed to the next layer until the final output is produced.
• Activation Functions: These functions introduce non-linearity into the model, allowing it to learn complex patterns. Common activation functions include ReLU, sigmoid, and tanh.
• Backpropagation: This is the algorithm used to train the MLP. It computes the gradient of the loss function (which measures the difference between the predicted and actual outputs) with respect to each weight by the chain rule. The weights are then updated using an optimization algorithm like stochastic gradient descent (SGD) or Adam. A minimal Keras sketch of an MLP is shown after this list.
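
For illustration, the following minimal sketch builds a small MLP with tf.keras; the layer sizes and the randomly generated data are arbitrary placeholders, not the assignment's dataset:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical data: 1000 samples, 20 features, binary labels
X = np.random.rand(1000, 20).astype('float32')
y = np.random.randint(0, 2, size=(1000,))

# Input layer -> two hidden layers with ReLU -> sigmoid output for binary classification
mlp = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(20,)),
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

# Backpropagation with the Adam optimizer minimizes binary cross-entropy
mlp.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = mlp.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
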
Training Accuracy and Validation Accuracy Curves: -
Mini Batch Gradient Descent: -

Mini-Batch Gradient Descent is a variant of gradient descent used to optimize machine learning models, particularly those involving large datasets. It combines the advantages of both batch gradient descent and stochastic gradient descent.
Overview

In machine learning, gradient descent is an optimization algorithm used to minimize the loss function by iteratively updating model parameters. Mini-Batch Gradient Descent is a compromise between the efficiency of batch gradient descent and the robustness of stochastic gradient descent.

Gradient Descent Variants

1. Batch Gradient Descent:
o Updates the model parameters based on the average gradient computed over the entire training dataset.
o Advantages: Converges smoothly to the minimum as it uses the entire dataset for each update.
o Disadvantages: Can be very slow and computationally expensive, especially for large datasets.

2. Stochastic Gradient Descent (SGD):
o Updates the model parameters based on the gradient computed from a single training example.
o Advantages: Faster updates and can handle very large datasets.
o Disadvantages: The path towards the minimum can be noisy and less stable, making convergence less predictable.

3. Mini-Batch Gradient Descent:
o Updates the model parameters based on the gradient computed from a small, random subset of the training data (mini-batch).
o Advantages: Balances the efficiency and stability of updates, leading to faster convergence compared to batch gradient descent while reducing the noise of updates compared to SGD. A minimal sketch of a mini-batch update loop is shown after this list.
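
For illustration, here is a minimal NumPy sketch of a mini-batch gradient descent loop for a simple linear-regression problem; the data and hyperparameters are arbitrary and chosen only to show the update pattern:

import numpy as np

# Synthetic linear-regression data: y = 3*x + 2 + noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.1, size=1000)

w, b = 0.0, 0.0           # parameters to learn
lr, batch_size = 0.1, 32  # learning rate and mini-batch size

for epoch in range(20):
    idx = rng.permutation(len(X))              # shuffle the data each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]  # indices of one mini-batch
        xb, yb = X[batch, 0], y[batch]
        err = (w * xb + b) - yb
        # Gradients of mean squared error w.r.t. w and b over the mini-batch
        grad_w = 2 * np.mean(err * xb)
        grad_b = 2 * np.mean(err)
        w -= lr * grad_w
        b -= lr * grad_b

print(f'Learned w={w:.2f}, b={b:.2f}')  # should approach w ~ 3, b ~ 2
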
Description of the Plots:

1. First Plot: MLP Without Mini-Batch Gradient Descent

1. Training Score: The red line indicates that the training score
remains very high (close to 1.0) across all training examples.
This suggests that the model fits the training data exceptionally
well.
2. Cross-Validation Score: The green line shows a steady increase as
more training examples are added, reflecting improved
generalization. The curve levels off at around 0.95, demonstrating
that the model performs effectively on unseen data.

2. Second Plot: MLP With Mini-Batch Gradient Descent

1. Training Score: The red line initially starts high but experiences a
slight dip before rising again. This temporary decrease suggests that
the model may briefly underfit the training data when using larger
mini-batches before stabilizing.
2. Cross-Validation Score: The green line begins at a lower level and
exhibits more fluctuations compared to the first plot. Although the
cross-validation score improves over time, it does not reach the high
level seen in the first plot, indicating some instability in the model’s
performance on new data with Mini-Batch Gradient Descent.
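
Learning curves of this kind are commonly produced with scikit-learn's learning_curve utility. The sketch below is a minimal, hypothetical example (random placeholder data and an MLPClassifier with assumed layer sizes), not the exact code behind the plots above:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier

# Placeholder data; replace with the actual dataset
X = np.random.rand(500, 20)
y = np.random.randint(0, 2, size=500)

# batch_size controls the mini-batch size used by the MLP's solver
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), batch_size=32, max_iter=500)

train_sizes, train_scores, val_scores = learning_curve(
    mlp, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 5))

plt.plot(train_sizes, train_scores.mean(axis=1), 'r-', label='Training Score')
plt.plot(train_sizes, val_scores.mean(axis=1), 'g-', label='Cross-Validation Score')
plt.xlabel('Training Examples')
plt.ylabel('Score')
plt.legend()
plt.show()
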
School of Computer Science and Engineering
Digital Assessment III, Fall Semester 2024-25

Class Number: VL2024250102168

Name : Adarsh Kumar Priyadarshi

Reg No : 21BCE0974

Course Name : Deep Learning Lab

Course Code : BCSE332P


Question: -

Link: https://colab.research.google.com/drive/1-JpM-rxVJL6B5bDKCEcu7hPkH1rmLE3u?usp=sharing

I. Face Recognition Using CNN

Code: -

#21BCE0974
# Import necessary libraries
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from keras.optimizers import Adam
from keras.callbacks import TensorBoard

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report, roc_curve, auc, accuracy_score
from tensorflow.keras.utils import to_categorical
import itertools
# Attempt to load the dataset
def load_data():
    try:
        data = np.load('ORL_faces.npz')  # Adjust the path if necessary
        return data
    except FileNotFoundError:
        print("File not found. Please upload the file or ensure the correct file path.")
        return None

# Load data (returns None if file is not found)
data = load_data()

# Check if data is loaded successfully
if data is None:
    # If running in Colab, use the upload function to upload the dataset manually
    from google.colab import files
    uploaded = files.upload()  # Prompts file upload dialog in Colab
    data = np.load(list(uploaded.keys())[0])  # Load the uploaded file dynamically

# Load and normalize train and test images


x_train = np.array(data['trainX'], dtype='float32') / 255
x_test = np.array(data['testX'], dtype='float32') / 255

# Load the labels


y_train = data['trainY']
y_test = data['testY']

# Display the shape of the training and test data


print(f'x_train shape: {x_train.shape}')
print(f'y_train shape: {y_train.shape}')
print(f'x_test shape: {x_test.shape}')
print(f'y_test shape: {y_test.shape}')
# Split training data into training and validation sets
x_train, x_valid, y_train, y_valid = train_test_split(
x_train, y_train, test_size=0.05, random_state=42
)

# Define image dimensions and batch size


im_rows, im_cols = 112, 92
batch_size = 512
input_shape = (im_rows, im_cols, 1)

# Reshape the data to fit the model


x_train = x_train.reshape(x_train.shape[0], *input_shape)
x_test = x_test.reshape(x_test.shape[0], *input_shape)
x_valid = x_valid.reshape(x_valid.shape[0], *input_shape)

print(f'Reshaped x_train: {x_train.shape}')


print(f'Reshaped x_test: {x_test.shape}')
print(f'Reshaped x_valid: {x_valid.shape}')

# Build the CNN model


model = Sequential([
Conv2D(36, kernel_size=(7, 7), activation='relu',
input_shape=input_shape),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(54, kernel_size=(5, 5), activation='relu'),
MaxPooling2D(pool_size=(2, 2)),
Flatten(),
Dense(2024, activation='relu'),
Dropout(0.5),
Dense(1024, activation='relu'),
Dropout(0.5),
Dense(512, activation='relu'),
Dropout(0.5),
Dense(20, activation='softmax') # 20 output classes
])
# Compile the model with Adam optimizer and sparse categorical crossentropy
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=Adam(learning_rate=0.0001),
    metrics=['accuracy']
)

# Display the model's architecture


model.summary()

# Train the model


history = model.fit(
x_train, y_train,
batch_size=batch_size,
epochs=250,
validation_data=(x_valid, y_valid),
verbose=2
)

# Evaluate the model on test data


test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f'Test loss: {test_loss:.4f}')
print(f'Test accuracy: {test_accuracy:.4f}')

# Plot accuracy history


plt.figure(figsize=(12, 6))
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model Accuracy over Epochs')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.grid(True)
plt.show()
# Plot loss history
plt.figure(figsize=(12, 6))
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Model Loss over Epochs')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(loc='upper left')
plt.grid(True)
plt.show()
Output: -
II. Classification of MNIST Dataset using CNN

Code: -

#21BCE0974

import keras

from keras.models import Sequential

from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout

from keras.optimizers import Adam

from keras.callbacks import EarlyStopping

import numpy as np

import pandas as pd

import seaborn as sns

import matplotlib.pyplot as plt

from sklearn.model_selection import train_test_split

from sklearn.metrics import confusion_matrix, classification_report

from sklearn.metrics import roc_curve, auc, accuracy_score

from tensorflow.keras.utils import to_categorical

from google.colab import files # If using Colab, otherwise omit this

# Upload the file manually

uploaded = files.upload() # Allows file upload in Colab or Jupyter

# Load dataset from uploaded file


dataset_path = 'ORL_faces.npz'

# Check if the file was uploaded; stop with an error if it is missing
if dataset_path not in uploaded:
    raise FileNotFoundError("File not found. Please ensure the file is uploaded correctly.")

face_data = np.load(dataset_path)
# Load and normalize training images

X_train_raw = face_data['trainX']

X_train = np.array(X_train_raw, dtype='float32') / 255.0

# Load and normalize test images

X_test_raw = face_data['testX']

X_test = np.array(X_test_raw, dtype='float32') / 255.0

# Load labels for training and test data

Y_train = face_data['trainY']

Y_test = face_data['testY']

# Show the shape of the loaded data

print(f"Training set images: {X_train.shape}")

print(f"Training set labels: {Y_train.shape}")

print(f"Test set images: {X_test.shape}")

print(f"Test set labels: {Y_test.shape}")


# Split training data into training and validation sets
X_train, X_valid, Y_train, Y_valid = train_test_split(
    X_train, Y_train, test_size=0.1, random_state=42
)

# Define image dimensions and reshape the images

image_height = 112

image_width = 92

num_channels = 1

input_shape = (image_height, image_width, num_channels)

X_train = X_train.reshape(X_train.shape[0], *input_shape)

X_valid = X_valid.reshape(X_valid.shape[0], *input_shape)

X_test = X_test.reshape(X_test.shape[0], *input_shape)

# Display reshaped data dimensions

print(f"Reshaped training set: {X_train.shape}")

print(f"Reshaped validation set: {X_valid.shape}")

print(f"Reshaped test set: {X_test.shape}")

# Build the CNN model

cnn_model = Sequential([
Conv2D(filters=32, kernel_size=(5, 5), activation='relu',
input_shape=input_shape),

MaxPooling2D(pool_size=(2, 2)),

Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),

MaxPooling2D(pool_size=(2, 2)),

Flatten(),

Dense(1024, activation='relu'),

Dropout(0.3),

Dense(512, activation='relu'),

Dropout(0.3),

Dense(256, activation='relu'),

Dropout(0.3),

Dense(20, activation='softmax') # 20 classes (outputs)

])

# Compile the model with Adam optimizer
cnn_model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=Adam(learning_rate=0.0001),
    metrics=['accuracy']
)

# Display model summary
cnn_model.summary()

# Define early stopping to avoid overfitting
early_stopping = EarlyStopping(monitor='val_loss', patience=10,
                               restore_best_weights=True)

# Train the model and track its performance
history = cnn_model.fit(
    X_train, Y_train,
    batch_size=256,
    epochs=10,
    validation_data=(X_valid, Y_valid),
    callbacks=[early_stopping],
    verbose=2
)

# Evaluate the model's performance on the test data

test_loss, test_accuracy = cnn_model.evaluate(X_test, Y_test,


verbose=0)

print(f"Test Loss: {test_loss:.4f}")

print(f"Test Accuracy: {test_accuracy:.4f}")

# Plot training & validation accuracy and loss

def plot_training_history(history):
sns.set(style="whitegrid")

fig, axes = plt.subplots(1, 2, figsize=(14, 6))

# Plot accuracy

axes[0].plot(history.history['accuracy'], label='Train Accuracy')

axes[0].plot(history.history['val_accuracy'], label='Validation
Accuracy')

axes[0].set_title('Model Accuracy')

axes[0].set_xlabel('Epoch')

axes[0].set_ylabel('Accuracy')

axes[0].legend(loc='best')

# Plot loss

axes[1].plot(history.history['loss'], label='Train Loss')

axes[1].plot(history.history['val_loss'], label='Validation Loss')

axes[1].set_title('Model Loss')

axes[1].set_xlabel('Epoch')

axes[1].set_ylabel('Loss')

axes[1].legend(loc='best')

plt.show()

plot_training_history(history)
# Make predictions on the test set

predictions = cnn_model.predict(X_test)

predicted_classes = np.argmax(predictions, axis=1)

# Display classification report

print(classification_report(Y_test, predicted_classes))

# Confusion matrix

cm = confusion_matrix(Y_test, predicted_classes)

plt.figure(figsize=(10, 7))

sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')

plt.title('Confusion Matrix')

plt.ylabel('True Label')

plt.xlabel('Predicted Label')

plt.show()



Output: -
School of Computer Science and Engineering
Digital Assessment VI, Fall Semester 2024-25

Class Number: VL2024250102168

Name : Adarsh Kumar Priyadarshi

Reg No : 21BCE0974

Course Name : Deep Learning Lab

Course Code : BCSE332P


Question: -

Link: https://colab.research.google.com/drive/1lPESgnq-WvcYvAgTJ3bizUctFbRzlRb5?usp=sharing
Link : https://colab.research.google.com/drive/1gjh1d7pTkU9i0akK72yGJ5ns4K0aVQo5?usp=sharing

I. Sentiment Analysis using LSTM

Code: -
Image Generation using GAN

Code: -
