DL LAB MANUAL

The document outlines the structure and requirements for the Deep Learning Laboratory course (AD3511) for students in the AI & DS department during the 2024-2025 academic year. It includes the course objectives, a list of experiments, course outcomes, and mappings of course outcomes to program outcomes and program specific objectives. Additionally, it provides a rubric for assessing student performance in the lab course.

RECORD NOTE BOOK

NAME OF THE STUDENT :

REGISTER NUMBER :

DEPARTMENT / SEMESTER : AI&DS / V Semester

SUBJECT CODE / TITLE : AD3511 / DEEP LEARNING LABORATORY

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

BONAFIDE CERTIFICATE

Certified that this is the bonafide record work done by

Reg.No.

studying in III Year / V Semester of Artificial Intelligence and Data Science

branch for the AD3511 – DEEP LEARNING

LABORATORY during the academic year 2024 – 2025.

Signature of the lab In-charge Signature of HOD

Submitted for the University Examination held on

Internal Examiner External Examiner


DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

Vision of the Department

Produce globally recognized AI specialists and Data analysts.

Mission of the Department

 To impart quality education and research in AI and Data Science.


 To inculcate creative and leadership skills for solving real-world problems.
 To foster innovative professionals with values for the wellbeing of society.

Program Educational Objectives (PEOs)

PEO1 - Design AI-based solutions to solve critical real-world problems.


PEO2 - Pursue higher education and research, or have a successful career as an entrepreneur.
PEO3 - Attain professional skills by ensuring lifelong learning with a sense of social values.

Program Specific Objectives (PSOs)

PSO 1 – Applying AI principles and practices for developing innovative solutions for society.

PSO 2 – Adapting emerging technologies and tools to solve existing and novel problems.
Program Outcomes (POs)

PO1 - Engineering Knowledge: Apply the knowledge of mathematics, science, engineering


fundamentals, and an engineering specialization to the solution of complex engineering problems.
PO2 - Problem Analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
PO3 – Design / Development of Solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
PO4 - Conduct Investigations of Complex Problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of the
information to provide valid conclusions.
PO5 - Modern Tool Usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with
an understanding of the limitations.
PO6 - The Engineer and Society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the
professional engineering practice.
PO7 - Environment and Sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for
sustainable development.
PO8 - Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
PO9 - Individual and Team Work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
PO10 - Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive clear
instructions.
PO11 - Project Management and Finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and leader
in a team, to manage projects and in multidisciplinary environments.
PO12 - Life-Long Learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological change.
AD3511- DEEP LEARNING LABORATORY

SYLLABUS
COURSE OBJECTIVES:

 To understand the tools and techniques to implement deep neural networks


 To apply different deep learning architectures for solving problems

 To implement generative models for suitable applications


 To learn to build and validate different models

LIST OF EXPERIMENTS

1. Solving XOR problem using DNN


2. Character recognition using CNN
3. Face recognition using CNN
4. Language modeling using RNN
5. Sentiment analysis using LSTM
6. Parts of speech tagging using Sequence to Sequence architecture
7. Machine Translation using Encoder-Decoder model
8. Image augmentation using GANs
9. Mini-project on real world applications

COURSE OUTCOMES:

CO1: Apply deep neural network for simple problems


CO2: Apply Convolution Neural Network for image processing
CO3: Apply Recurrent Neural Network and its variants for text analysis
CO4: Apply generative models for data augmentation
CO5: Develop real-world solutions using suitable deep neural networks
Course Name / Course No: AD3511- DEEP LEARNING LABORATORY / C307

COURSE NAME: DEEP LEARNING LABORATORY


NBA CODE FOR THE SUBJECT: AD307
SEMESTER: V (AY: 24-25 ODD)
CO Code    Course Outcome Description

C307.1     Apply deep neural network for simple problems
C307.2     Apply Convolution Neural Network for image processing
C307.3     Apply Recurrent Neural Network and its variants for text analysis
C307.4     Apply generative models for data augmentation
C307.5     Develop real-world solutions using suitable deep neural networks

CO’s-PO’s & PSO’s MAPPING

CO     PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2
I       3    2    1    1    1    -    -    -    3    2     3     2     3     3
II      1    3    2    2    2    -    -    -    3    2     2     2     1     3
III     3    2    1    2    1    -    -    -    2    3     1     1     2     3
IV      3    3    1    2    1    -    -    -    1    3     2     2     3     2
V       3    3    3    3    2    -    -    -    1    2     3     1     3     3

1-Low, 2-Medium, 3-High, ‘ – ‘ No Correlation


Justification for CO-PO/PSO Mapping

C307.1 (BKL: K3)
  PO1  - Substantial: Strongly mapped as students analyze the efficiency of recursive and non-recursive algorithms using engineering principles and mathematical knowledge.
  PO2  - Substantial: Strongly mapped as students identify and analyze complex problems in algorithmic efficiency, applying first principles of mathematics and engineering.
  PO3  - Substantial: Strongly mapped as students design efficient algorithms and evaluate them, considering various constraints.
  PO4  - Slight: Slightly mapped as students use research methods to conduct investigations and interpret data on algorithm efficiency.
  PO5  - Slight: Slightly mapped as students use modern tools to analyze algorithms with an understanding of their limitations.
  PO9  - Slight: Slightly mapped as students document and communicate their analysis and findings regarding algorithm efficiency.
  PO10 - Slight: Slightly mapped as students apply management principles to tasks involving algorithm analysis.
  PO11 - Moderate: Moderately mapped as students recognize the importance of lifelong learning in understanding evolving algorithmic techniques.
  PO12 - Moderate: Moderately mapped as students apply AI principles to develop innovative algorithmic solutions.
  PSO1 - Substantial: Strongly mapped as students adapt new technologies for improving algorithm efficiency.
  PSO2 - Moderate: Moderately mapped as students analyze the efficiency of recursive and non-recursive algorithms using engineering principles and mathematical knowledge.

C307.2 (BKL: K3)
  PO1  - Moderate: Moderately mapped as students apply engineering and mathematical principles to analyze brute force, divide and conquer, and other algorithmic techniques.
  PO2  - Slight: Slightly mapped as students identify key elements in different algorithmic techniques and formulate their analysis.
  PO3  - Slight: Slightly mapped as students design components using various algorithmic techniques, considering different constraints.
  PO4  - Substantial: Strongly mapped as students conduct research-based analysis of algorithmic techniques and validate their efficiency.
  PO5  - Moderate: Moderately mapped as students use modern tools to apply algorithmic techniques to solve complex problems.
  PO9  - Moderate: Moderately mapped as students communicate their analysis and findings through effective documentation and presentations.
  PO10 - Moderate: Moderately mapped as students apply management principles in tasks involving the analysis of algorithmic techniques.
  PO11 - Slight: Slightly mapped as students understand the need for continuous learning to stay updated with new algorithmic techniques.
  PO12 - Moderate: Moderately mapped as students apply AI and data science principles to develop solutions using different algorithmic techniques.
  PSO1 - Moderate: Moderately mapped as students adapt emerging technologies and tools to solve existing problems using algorithmic techniques.
  PSO2 - Moderate: Moderately mapped as students apply engineering and mathematical principles to analyze brute force, divide and conquer, and other algorithmic techniques.

C307.3 (BKL: K3)
  PO1  - Substantial: Strongly mapped as students implement and analyze problems using dynamic programming and greedy algorithmic techniques.
  PO2  - Moderate: Moderately mapped as students review and formulate problems suitable for dynamic programming and greedy methods.
  PO3  - Slight: Slightly mapped as students design system components using dynamic programming and greedy techniques.
  PO4  - Moderate: Moderately mapped as students conduct experiments and interpret data using dynamic programming and greedy methods.
  PO5  - Moderate: Moderately mapped as students use modern engineering tools to implement dynamic programming and greedy techniques.
  PO9  - Moderate: Moderately mapped as students document and present their analysis of problems solved using these techniques.
  PO10 - Slight: Slightly mapped as students manage projects involving dynamic programming and greedy algorithms.
  PO11 - Slight: Slightly mapped as students recognize the importance of lifelong learning to stay updated on new methods in dynamic programming and greedy algorithms.
  PO12 - Moderate: Moderately mapped as students apply AI principles to solve problems using dynamic programming and greedy techniques.
  PSO1 - Slight: Slightly mapped as students adapt dynamic programming and greedy methods to solve novel problems.
  PSO2 - Substantial: Strongly mapped as students implement and analyze problems using dynamic programming and greedy algorithmic techniques.

C307.4 (BKL: K3)
  PO1  - Substantial: Strongly mapped as students solve problems using iterative improvement techniques for optimization.
  PO2  - Moderate: Moderately mapped as students analyze optimization problems using iterative improvement methods.
  PO3  - Substantial: Strongly mapped as students design and implement solutions using iterative improvement techniques, considering various constraints.
  PO4  - Moderate: Moderately mapped as students conduct investigations and interpret data related to optimization problems.
  PO5  - Moderate: Moderately mapped as students use modern tools and techniques to apply iterative improvement methods for solving optimization problems.
  PO9  - Substantial: Strongly mapped as students effectively communicate their process and results in solving optimization problems.
  PO10 - Substantial: Strongly mapped as students manage projects involving the application of iterative improvement techniques.
  PO11 - Substantial: Strongly mapped as students recognize the importance of lifelong learning to keep up with advancements in optimization techniques.
  PO12 - Moderate: Moderately mapped as students apply AI principles to optimize solutions using iterative improvement techniques.
  PSO1 - Moderate: Moderately mapped as students adapt iterative improvement methods to solve novel optimization problems.
  PSO2 - Slight: Slightly mapped as students solve problems using iterative improvement techniques for optimization.

C307.5 (BKL: K3)
  PO1  - Substantial: Strongly mapped as students compute the limitations of algorithmic power and solve problems using backtracking techniques.
  PO2  - Slight: Slightly mapped as students analyze and formulate problems suitable for backtracking.
  PO3  - Moderate: Moderately mapped as students design system components or processes using backtracking techniques.
  PO4  - Substantial: Strongly mapped as students conduct research to understand the limitations of algorithms and validate backtracking techniques.
  PO5  - Substantial: Strongly mapped as students use modern tools to implement and solve problems using backtracking.
  PO9  - Moderate: Moderately mapped as students effectively communicate their findings through reports and presentations.
  PO10 - Moderate: Moderately mapped as students apply project management principles in tasks related to backtracking.
  PO11 - Moderate: Moderately mapped as students engage in continuous learning to understand the evolving nature of algorithmic limitations and backtracking techniques.
  PO12 - Moderate: Moderately mapped as students apply AI principles to solve complex problems using backtracking.
  PSO1 - Substantial: Strongly mapped as students adapt backtracking methods to solve new and existing problems.
  PSO2 - Slight: Slightly mapped as students compute the limitations of algorithmic power and solve problems using backtracking techniques.
DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

Name of Lab Course      : Deep Learning Laboratory
Semester & Year         : V & III
Name of the Student     :
Registration No         :
Name of the Evaluator   : Mrs. R.C.Subashini / AP
Marks scored out of 10  :

RUBRIC ASSESSMENT FOR LAB COURSE


Each performance indicator is assessed at Level 1 (0-1), Level 2 (1-2), or Level 3 (2-3).

Pre Lab Questions, Objectives (P-I): Explanation of the pre-lab questions and the objective of the experiment, compared to the expectation of the faculty, is not satisfactory (Level 1), partially satisfactory (Level 2), or highly satisfactory (Level 3).

Procedures (P-II): Explanation of the procedure of the experiment, compared to the expectation of the faculty, is not satisfactory (Level 1), partially satisfactory (Level 2), or highly satisfactory (Level 3).

Data / Observations (P-III): Calculation of the observed values and validation of the results of the experiment is inaccurate (Level 1), approximate (Level 2), or precise (Level 3).

Post Lab Questions, Conclusions (P-IV): Explanation of the post-lab questions and conclusions of the experiments, compared to the expectation of the faculty, is not satisfactory (Level 1), partially satisfactory (Level 2), or highly satisfactory (Level 3).
INDEX

S.No Date Experiment Page No.


ASSESSMENT SHEET

Ex. No.   Date   Experiment   P-I (2)   P-II (2)   P-III (3)   P-IV (3)   Total (10)   Page No.   Signature

TOTAL (OUT OF 10)

SIGNATURE OF THE EVALUATOR


Ex. No: 01 Solving XOR problem using DNN Date:

Aim :
To write a program for solving the XOR problem using a DNN.
Algorithm:
1. Initialize weights and biases randomly for each neuron in the network.
2. Set the learning rate and number of epochs for training.
3. Define activation function (sigmoid or other suitable function).
Activation Function: f(x) = 1 / (1 + exp(-x))

4. Create a feedforward neural network architecture:


Input Layer (2 input nodes for XOR inputs)
Hidden Layer (with 2 or more neurons)
Output Layer (1 output node for XOR output)

5. Training:
Repeat for each epoch:
For each training example (XOR input pairs: [0,0], [0,1], [1,0], [1,1]):
a. Compute the weighted sum for each neuron in the hidden layer using input and
weights.
b. Apply the activation function to the hidden layer neurons.
c. Compute the weighted sum for the output neuron using hidden layer outputs and
weights.
d. Apply the activation function to the output neuron.
e. Compute the error between the predicted output and the actual XOR output.
f. Backpropagate the error to adjust weights and biases using gradient descent:
- Update output layer weights and biases.
- Update hidden layer weights and biases.

6. Testing:
For each XOR input pair:
a. Pass the input through the trained neural network.
b. Check the output and compare it to the expected XOR output (0 or 1).

7. Adjust hyperparameters (learning rate, number of hidden neurons, epochs) as needed for
better performance.

8. Once the neural network produces accurate results for the XOR problem, it's trained and
ready to solve XOR inputs.

9. Use the trained neural network to predict XOR outputs for new inputs.
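
Before moving to the Keras program below, a minimal NumPy sketch of the training procedure described above (sigmoid activations, squared-error loss, gradient-descent backpropagation) may help. The hidden-layer size of 4 (the algorithm allows 2 or more), the learning rate, and the epoch count are illustrative choices only, not part of the prescribed program.

import numpy as np

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden (4 neurons)
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output
lr = 0.5

for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (sigmoid derivative is s * (1 - s))
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Threshold the final outputs to get binary XOR predictions
print((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int))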

Program:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

# Define the XOR input and output data


X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Build the DNN model


model = Sequential()
model.add(Dense(2, input_dim=2, activation='relu')) # Hidden layer with 2 neurons
model.add(Dense(1, activation='sigmoid')) # Output layer

# Compile the model


model.compile(loss='binary_crossentropy', optimizer=SGD(learning_rate=0.1),
metrics=['accuracy'])

# Train the model


model.fit(X, y, epochs=1000, verbose=0)

# Evaluate the model


loss, accuracy = model.evaluate(X, y)
print(f"Model accuracy: {accuracy * 100:.2f}%")

# Make predictions
predictions = model.predict(X)
print("Predictions:")
print((predictions > 0.5).astype(int)) # Threshold predictions for binary output

Output:
1/1━━━━━━━━━━━━━━━━━━━━0s 143ms/step - accuracy:
0.2500 - loss: 0.6931
Model accuracy: 25.00%
1/1━━━━━━━━━━━━━━━━━━━━0s 51ms/step
Predictions:
[[0]
[0]
[0]
[1]]

Result:
Thus the program was executed successfully.
Ex: 02 Character Recognition using CNN Date:

Aim:
To write a Python program to implement character recognition using a CNN.

Algorithm:
1. Start the program.
2. Get the relevant packages for Recognition
3. Load the MNIST handwritten digit dataset (digits 0 to 9).
4. Reshape the data for model creation.
5. Train the model and predict on the test data.
6. Predict on an external image (see the sketch after this list).
7. Stop the program.
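
Step 6 (prediction on an external image) is not included in the program below, which stops at the test set. A minimal sketch of how that step could be added once the model below has been trained is shown here; the file name digit.png is a placeholder, and the image is assumed to be a white digit on a dark background like the MNIST samples.

from tensorflow.keras.preprocessing import image as kimage
import numpy as np

img = kimage.load_img("digit.png", color_mode="grayscale", target_size=(28, 28))
x = kimage.img_to_array(img) / 255.0       # scale to [0, 1] like the training data
x = x.reshape(1, 28, 28, 1)                # add the batch dimension the CNN expects
print("Predicted digit:", np.argmax(model.predict(x)))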

Program:

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.utils import to_categorical

# Load and preprocess the MNIST dataset


(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Normalize pixel values to a [0, 1] range


X_train, X_test = X_train / 255.0, X_test / 255.0

# Reshape data to fit the CNN (28x28 pixels with 1 color channel)
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)

# One-hot encode labels for 10 classes (digits 0-9)


y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Define the CNN model


model = Sequential([
Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(64, kernel_size=(3, 3), activation='relu'),
MaxPooling2D(pool_size=(2, 2)),
Flatten(),
Dense(128, activation='relu'),
Dense(10, activation='softmax')
])

# Compile the model


model.compile(optimizer='adam', loss='categorical_crossentropy',
metrics=['accuracy'])

# Train the model


model.fit(X_train, y_train, epochs=10, batch_size=128, validation_data=(X_test,
y_test))

# Evaluate the model


test_loss, test_accuracy = model.evaluate(X_test, y_test)
print(f"Test accuracy: {test_accuracy * 100:.2f}%")

# Make predictions on the test set and display some results


predictions = model.predict(X_test)

# Display a few test images with predictions and true labels


for i in range(5):
plt.imshow(X_test[i].reshape(28, 28), cmap='gray')
plt.title(f"Predicted: {np.argmax(predictions[i])}, True: {np.argmax(y_test[i])}")
plt.show()
Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-
datasets/mnist.npz
11490434/11490434━━━━━━━━━━━━━━━━━━━━0s 0us/step
/usr/local/lib/python3.10/dist-
packages/keras/src/layers/convolutional/base_conv.py:107: UserWarning: Do not pass
an `input_shape`/`input_dim` argument to a layer. When using Sequential models,
prefer using an `Input(shape)` object as the first layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer, **kwargs)
Epoch 1/10
469/469━━━━━━━━━━━━━━━━━━━━56s 115ms/step - accuracy:
0.8665 - loss: 0.4533 - val_accuracy: 0.9828 - val_loss: 0.0560
Epoch 2/10
469/469━━━━━━━━━━━━━━━━━━━━79s 110ms/step - accuracy:
0.9835 - loss: 0.0568 - val_accuracy: 0.9868 - val_loss: 0.0396
Epoch 3/10
469/469━━━━━━━━━━━━━━━━━━━━81s 107ms/step - accuracy:
0.9877 - loss: 0.0406 - val_accuracy: 0.9893 - val_loss: 0.0335
Epoch 4/10
469/469━━━━━━━━━━━━━━━━━━━━49s 104ms/step - accuracy:
0.9903 - loss: 0.0302 - val_accuracy: 0.9871 - val_loss: 0.0381
Epoch 5/10
469/469━━━━━━━━━━━━━━━━━━━━50s 106ms/step - accuracy:
0.9931 - loss: 0.0220 - val_accuracy: 0.9899 - val_loss: 0.0327
Epoch 6/10
469/469━━━━━━━━━━━━━━━━━━━━50s 107ms/step - accuracy:
0.9941 - loss: 0.0171 - val_accuracy: 0.9915 - val_loss: 0.0284
Epoch 7/10
469/469━━━━━━━━━━━━━━━━━━━━82s 108ms/step - accuracy:
0.9951 - loss: 0.0137 - val_accuracy: 0.9909 - val_loss: 0.0317
Epoch 8/10
469/469━━━━━━━━━━━━━━━━━━━━49s 105ms/step - accuracy:
0.9965 - loss: 0.0117 - val_accuracy: 0.9887 - val_loss: 0.0383
Epoch 9/10
469/469━━━━━━━━━━━━━━━━━━━━83s 107ms/step - accuracy:
0.9966 - loss: 0.0095 - val_accuracy: 0.9895 - val_loss: 0.0349
Epoch 10/10
469/469━━━━━━━━━━━━━━━━━━━━81s 106ms/step - accuracy:
0.9971 - loss: 0.0087 - val_accuracy: 0.9906 - val_loss: 0.0282
313/313━━━━━━━━━━━━━━━━━━━━3s 9ms/step - accuracy:
0.9871 - loss: 0.0389
Test accuracy: 99.06%
313/313━━━━━━━━━━━━━━━━━━━━3s 10ms/step
Result:
Thus the program was executed successfully.
Ex. No: 3 Face Recognition using CNN Date:

Aim:
To write a Python program to implement face recognition using a CNN.

Algorithm:
1. Start the program.
2. Get the relevant packages for face recognition.
3. Reshape the data for model creation.
4. Train the model and predict on the test data.
5. Predict on an external image.
6. Stop the program.

Program:
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_lfw_people
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
import tensorflow.keras.preprocessing.image as image

# Load dataset
faces = fetch_lfw_people(min_faces_per_person=100, resize=1.0,
slice_=(slice(60, 188), slice(60, 188)), color=True)
class_count = len(faces.target_names)

# class names and shape


print("Class Names:", faces.target_names)
print("Image Data Shape:", faces.images.shape)

# Display some sample images


%matplotlib inline
sns.set()
fig, ax = plt.subplots(3, 6, figsize=(18, 10))
for i, axi in enumerate(ax.flat):
axi.imshow(faces.images[i] / 255.0) # Scale pixel values to [0, 1]
axi.set(xticks=[], yticks=[], xlabel=faces.target_names[faces.target[i]])

# Count
counts = Counter(faces.target)
names = {faces.target_names[key]: counts[key] for key in counts.keys()}
df = pd.DataFrame.from_dict(names, orient='index')
df.plot(kind='bar')
plt.title("Number of Images per Person")
plt.show()

# Limit
mask = np.zeros(faces.target.shape, dtype=bool)
for target in np.unique(faces.target):
mask[np.where(faces.target == target)[0][:100]] = 1

x_faces = faces.data[mask]
y_faces = faces.target[mask]
x_faces = np.reshape(x_faces, (x_faces.shape[0], faces.images.shape[1],
faces.images.shape[2], faces.images.shape[3]))

# Convert labels to categorical format


y_faces = to_categorical(y_faces, num_classes=class_count)
# Split dataset into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x_faces, y_faces, train_size=0.8,
stratify=y_faces, random_state=0)

# Build CNN model


model = Sequential([
Conv2D(32, (3, 3), activation='relu', input_shape=x_train.shape[1:]),
MaxPooling2D(2, 2),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D(2, 2),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D(2, 2),
Flatten(),
Dense(128, activation='relu'),
Dense(class_count, activation='softmax')
])

# Compile
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

# Train
history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=20,
batch_size=25)

# Plot training and validation accuracy


acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, '-', label='Training Accuracy')


plt.plot(epochs, val_acc, ':', label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()

# Generate a confusion matrix


from sklearn.metrics import confusion_matrix

y_predicted = model.predict(x_test)
conf_matrix = confusion_matrix(y_test.argmax(axis=1), y_predicted.argmax(axis=1))

plt.figure(figsize=(10, 8))
sns.heatmap(conf_matrix.T, square=True, annot=True, fmt='d', cbar=False, cmap='Blues',
xticklabels=faces.target_names, yticklabels=faces.target_names)
plt.xlabel('Actual Label')
plt.ylabel('Predicted Label')
plt.show()

# Test the model with an external image


img_path = '/content/george-w-bush-1.jpg'  # Replace with the path to your test image
x = image.load_img(img_path, target_size=x_train.shape[1:3])  # Resize image to match model input
plt.imshow(x)
plt.xticks([])
plt.yticks([])
plt.show()

x = image.img_to_array(x) / 255.0 # Rescale image


x = np.expand_dims(x, axis=0) # Add batch dimension
y = model.predict(x)[0]
# Print probabilities for each class
for i in range(len(y)):
print(f"{faces.target_names[i]}: {y[i]:.4f}")

OUTPUT:
Result:
Thus the program was executed successfully.
Ex. No : 4 LANGUAGE MODELING USING RNN Date:

Aim:
To write a Python program to implement language modeling using an RNN.

Algorithm:
1. Start the program.
2. Get the relevant packages for language modeling.
3. Read the sample text and split it into lines.
4. Tokenize the text and build n-gram input sequences from each line.
5. Pad the sequences, then separate each padded sequence into an input part and a target word.
6. Define the RNN model: an embedding layer followed by LSTM layers and a softmax output layer.
7. Train the model on the input/target pairs.
8. Generate new text from a seed phrase using the trained model.
9. Stop the program.
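
The preprocessing in the program below turns each line of text into n-gram training pairs. A worked example (the token ids are made up for illustration) is:

# Suppose the tokenizer maps "once upon a time" to the ids [5, 6, 2, 7].
token_list = [5, 6, 2, 7]
n_grams = [token_list[:i + 1] for i in range(1, len(token_list))]
# n_grams == [[5, 6], [5, 6, 2], [5, 6, 2, 7]]
# After pre-padding to the longest length, the model input X is each padded
# sequence without its last id, and the label y is that last id (one-hot encoded).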
Program:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Sample text data


text = """
Once upon a time, there was a young programmer who wanted to create amazing
things.
They practiced coding every day, creating small projects and learning new algorithms.
As they grew, their skills improved, and they started contributing to open-source
projects.
"""

# Preprocessing
tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
total_words = len(tokenizer.word_index) + 1

# Convert text to sequence of tokens


input_sequences = []
for line in text.split("\n"):
token_list = tokenizer.texts_to_sequences([line])[0]
for i in range(1, len(token_list)):
n_gram_sequence = token_list[:i+1]
input_sequences.append(n_gram_sequence)

# Pad sequences and separate inputs and labels


max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = np.array(pad_sequences(input_sequences,
maxlen=max_sequence_len, padding='pre'))
X, y = input_sequences[:,:-1], input_sequences[:,-1]
y = tf.keras.utils.to_categorical(y, num_classes=total_words)

# Define the RNN model


model = Sequential([
Embedding(total_words, 100, input_length=max_sequence_len - 1),
LSTM(150, return_sequences=True),
LSTM(100),
Dense(total_words, activation='softmax')
])

# Compile the model


model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
model.summary()

# Train the model


history = model.fit(X, y, epochs=200, verbose=1)

# Text generation function


def generate_text(seed_text, next_words, max_sequence_len):
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len - 1,
padding='pre')
predicted = model.predict(token_list, verbose=0)
predicted_word_index = np.argmax(predicted, axis=1)[0]
output_word = tokenizer.index_word[predicted_word_index]
seed_text += " " + output_word
return seed_text

# Example usage
seed_text = "Once upon a time"
next_words = 10
print(generate_text(seed_text, next_words, max_sequence_len))
Output:
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━
━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇
━━━━━━━━━━━━━━━━━┩
│ embedding (Embedding) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ lstm (LSTM) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ lstm_1 (LSTM) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ dense (Dense) │? │ 0 (unbuilt) │
└──────────────────────────────────────┴────────
─────────────────────┴─────────────────┘
Total params: 0 (0.00 B)
Trainable params: 0 (0.00 B)
Non-trainable params: 0 (0.00 B)
Epoch 1/200
2/2━━━━━━━━━━━━━━━━━━━━7s 29ms/step - accuracy:
0.0000e+00 - loss: 3.5840
Epoch 2/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.1118
- loss: 3.5733
Epoch 3/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.0559
- loss: 3.5643
Epoch 4/200
2/2━━━━━━━━━━━━━━━━━━━━0s 22ms/step - accuracy: 0.0839
- loss: 3.5572
Epoch 5/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.0630
- loss: 3.5491
Epoch 6/200
2/2━━━━━━━━━━━━━━━━━━━━0s 32ms/step - accuracy: 0.0839
- loss: 3.5320
Epoch 7/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.0839
- loss: 3.5062
Epoch 8/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.0735
- loss: 3.4734
Epoch 9/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.0735
- loss: 3.4287
Epoch 10/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.0839
- loss: 3.3882
Epoch 11/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.0839
- loss: 3.3500
Epoch 12/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.0839
- loss: 3.3104
Epoch 13/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.1957
- loss: 3.2830
Epoch 14/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.1573
- loss: 3.2317
Epoch 15/200
2/2━━━━━━━━━━━━━━━━━━━━0s 30ms/step - accuracy: 0.1678
- loss: 3.1714
Epoch 16/200
2/2━━━━━━━━━━━━━━━━━━━━0s 22ms/step - accuracy: 0.1573
- loss: 3.1163
Epoch 17/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.1294
- loss: 3.0763
Epoch 18/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.1014
- loss: 2.9948
Epoch 19/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.1398
- loss: 2.9977
Epoch 20/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.1118
- loss: 2.9365
Epoch 21/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.1573
- loss: 2.8617
Epoch 22/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.1645
- loss: 2.8407
Epoch 23/200
2/2━━━━━━━━━━━━━━━━━━━━0s 29ms/step - accuracy: 0.1749
- loss: 2.7446
Epoch 24/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.1749
- loss: 2.7097
Epoch 25/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.1678
- loss: 2.5829
Epoch 26/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.2412
- loss: 2.5273
Epoch 27/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.2412
- loss: 2.4467
Epoch 28/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.2029
- loss: 2.4166
Epoch 29/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.1853
- loss: 2.3107
Epoch 30/200
2/2━━━━━━━━━━━━━━━━━━━━0s 28ms/step - accuracy: 0.2204
- loss: 2.2684
Epoch 31/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.2867
- loss: 2.2096
Epoch 32/200
2/2━━━━━━━━━━━━━━━━━━━━0s 30ms/step - accuracy: 0.2867
- loss: 2.1237
Epoch 33/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.3322
- loss: 2.0782
Epoch 34/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.3147
- loss: 2.0488
Epoch 35/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.4090
- loss: 1.9603
Epoch 36/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.3043
- loss: 2.0264
Epoch 37/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.3427
- loss: 1.9290
Epoch 38/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.2971
- loss: 1.8503
Epoch 39/200
2/2━━━━━━━━━━━━━━━━━━━━0s 29ms/step - accuracy: 0.3427
- loss: 1.8667
Epoch 40/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.4929
- loss: 1.7641
Epoch 41/200
2/2━━━━━━━━━━━━━━━━━━━━0s 31ms/step - accuracy: 0.4057
- loss: 1.7534
Epoch 42/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.4545
- loss: 1.7764
Epoch 43/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.4720
- loss: 1.7404
Epoch 44/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.5000
- loss: 1.6499
Epoch 45/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.4720
- loss: 1.6377
Epoch 46/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.4337
- loss: 1.7798
Epoch 47/200
2/2━━━━━━━━━━━━━━━━━━━━0s 30ms/step - accuracy: 0.5104
- loss: 1.6135
Epoch 48/200
2/2━━━━━━━━━━━━━━━━━━━━0s 38ms/step - accuracy: 0.4512
- loss: 1.6144
Epoch 49/200
2/2━━━━━━━━━━━━━━━━━━━━0s 45ms/step - accuracy: 0.3882
- loss: 1.6681
Epoch 50/200
2/2━━━━━━━━━━━━━━━━━━━━0s 40ms/step - accuracy: 0.5559
- loss: 1.5623
Epoch 51/200
2/2━━━━━━━━━━━━━━━━━━━━0s 38ms/step - accuracy: 0.4441
- loss: 1.6093
Epoch 52/200
2/2━━━━━━━━━━━━━━━━━━━━0s 39ms/step - accuracy: 0.4825
- loss: 1.5660
Epoch 53/200
2/2━━━━━━━━━━━━━━━━━━━━0s 35ms/step - accuracy: 0.5000
- loss: 1.5230
Epoch 54/200
2/2━━━━━━━━━━━━━━━━━━━━0s 40ms/step - accuracy: 0.5526
- loss: 1.5135
Epoch 55/200
2/2━━━━━━━━━━━━━━━━━━━━0s 44ms/step - accuracy: 0.6294
- loss: 1.4114
Epoch 56/200
2/2━━━━━━━━━━━━━━━━━━━━0s 43ms/step - accuracy: 0.6398
- loss: 1.4013
Epoch 57/200
2/2━━━━━━━━━━━━━━━━━━━━0s 38ms/step - accuracy: 0.6957
- loss: 1.3729
Epoch 58/200
2/2━━━━━━━━━━━━━━━━━━━━0s 37ms/step - accuracy: 0.6573
- loss: 1.3742
Epoch 59/200
2/2━━━━━━━━━━━━━━━━━━━━0s 38ms/step - accuracy: 0.6398
- loss: 1.3274
Epoch 60/200
2/2━━━━━━━━━━━━━━━━━━━━0s 31ms/step - accuracy: 0.7029
- loss: 1.3167
Epoch 61/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.7308
- loss: 1.3420
Epoch 62/200
2/2━━━━━━━━━━━━━━━━━━━━0s 46ms/step - accuracy: 0.7133
- loss: 1.2698
Epoch 63/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.6469
- loss: 1.2536
Epoch 64/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.6924
- loss: 1.2184
Epoch 65/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.7308
- loss: 1.1643
Epoch 66/200
2/2━━━━━━━━━━━━━━━━━━━━0s 28ms/step - accuracy: 0.7029
- loss: 1.1866
Epoch 67/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.7621
- loss: 1.1167
Epoch 68/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.7692
- loss: 1.1334
Epoch 69/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.7308
- loss: 1.1132
Epoch 70/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8427
- loss: 1.0600
Epoch 71/200
2/2━━━━━━━━━━━━━━━━━━━━0s 34ms/step - accuracy: 0.7029
- loss: 1.1707
Epoch 72/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.7692 - loss:
1.0260
Epoch 73/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.7588 - loss:
1.0937
Epoch 74/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8251 - loss:
1.0503
Epoch 75/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.7763 - loss:
1.0264
Epoch 76/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.7204 - loss:
1.0726
Epoch 77/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.8322 - loss:
0.9551
Epoch 78/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8147 - loss:
0.9436
Epoch 79/200
2/2━━━━━━━━━━━━━━━━━━━━0s 30ms/step - accuracy: 0.7796 - loss:
0.9205
Epoch 80/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.8322 - loss:
0.9597
Epoch 81/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.8810 - loss:
0.8720
Epoch 82/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.8427 - loss:
0.9067
Epoch 83/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.9058
Epoch 84/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.8882 - loss:
0.8967
Epoch 85/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.8986 - loss:
0.8346
Epoch 86/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.8810 - loss:
0.8123
Epoch 87/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8810 - loss:
0.8355
Epoch 88/200
2/2━━━━━━━━━━━━━━━━━━━━0s 30ms/step - accuracy: 0.8706 - loss:
0.8047
Epoch 89/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9161 - loss:
0.8032
Epoch 90/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9265 - loss:
0.7901
Epoch 91/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.8810 - loss:
0.7846
Epoch 92/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8986 - loss:
0.7756
Epoch 93/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8986 - loss:
0.7779
Epoch 94/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.8810 - loss:
0.7669
Epoch 95/200
2/2━━━━━━━━━━━━━━━━━━━━0s 29ms/step - accuracy: 0.8810 - loss:
0.7228
Epoch 96/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9265 - loss:
0.7962
Epoch 97/200
2/2━━━━━━━━━━━━━━━━━━━━0s 30ms/step - accuracy: 0.8810 - loss:
0.7474
Epoch 98/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 0.9161 - loss:
0.7317
Epoch 99/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8602 - loss:
0.7792
Epoch 100/200
2/2━━━━━━━━━━━━━━━━━━━━0s 31ms/step - accuracy: 0.8986 - loss:
0.6991
Epoch 101/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.8706 - loss:
0.6710
Epoch 102/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.8882 - loss:
0.6977
Epoch 103/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8986 - loss:
0.6713
Epoch 104/200
2/2━━━━━━━━━━━━━━━━━━━━0s 28ms/step - accuracy: 0.9161 - loss:
0.6683
Epoch 105/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.6703
Epoch 106/200
2/2━━━━━━━━━━━━━━━━━━━━0s 30ms/step - accuracy: 0.8986 - loss:
0.6425
Epoch 107/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.8986 - loss:
0.6057
Epoch 108/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.9161 - loss:
0.6048
Epoch 109/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9545 - loss:
0.5686
Epoch 110/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9545 - loss:
0.5939
Epoch 111/200
2/2━━━━━━━━━━━━━━━━━━━━0s 23ms/step - accuracy: 0.9265 - loss:
0.5727
Epoch 112/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.5913
Epoch 113/200
2/2━━━━━━━━━━━━━━━━━━━━0s 33ms/step - accuracy: 0.9265 - loss:
0.5579
Epoch 114/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.8986 - loss:
0.5621
Epoch 115/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.5509
Epoch 116/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9265 - loss:
0.5637
Epoch 117/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9441 - loss:
0.5524
Epoch 118/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9441 - loss:
0.5461
Epoch 119/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.9441 - loss:
0.5370
Epoch 120/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.9825 - loss:
0.5415
Epoch 121/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.5137
Epoch 122/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 0.9720 - loss:
0.5187
Epoch 123/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.5122
Epoch 124/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.5017
Epoch 125/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9720 - loss:
0.4730
Epoch 126/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.5038
Epoch 127/200
2/2━━━━━━━━━━━━━━━━━━━━0s 32ms/step - accuracy: 0.9720 - loss:
0.4844
Epoch 128/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9545 - loss:
0.4841
Epoch 129/200
2/2━━━━━━━━━━━━━━━━━━━━0s 31ms/step - accuracy: 0.9720 - loss:
0.4608
Epoch 130/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 0.9720 - loss:
0.4723
Epoch 131/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.4915
Epoch 132/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.4458
Epoch 133/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 0.9265 - loss:
0.4381
Epoch 134/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.4651
Epoch 135/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9265 - loss:
0.4869
Epoch 136/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9161 - loss:
0.4631
Epoch 137/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 0.9545 - loss:
0.4328
Epoch 138/200
2/2━━━━━━━━━━━━━━━━━━━━0s 32ms/step - accuracy: 0.9441 - loss:
0.4275
Epoch 139/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9720 - loss:
0.4099
Epoch 140/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.9825 - loss:
0.4114
Epoch 141/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9441 - loss:
0.4068
Epoch 142/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9441 - loss:
0.3988
Epoch 143/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.3901
Epoch 144/200
2/2━━━━━━━━━━━━━━━━━━━━0s 38ms/step - accuracy: 0.9720 - loss:
0.4079
Epoch 145/200
2/2━━━━━━━━━━━━━━━━━━━━0s 35ms/step - accuracy: 0.9720 - loss:
0.3807
Epoch 146/200
2/2━━━━━━━━━━━━━━━━━━━━0s 36ms/step - accuracy: 0.9720 - loss:
0.3938
Epoch 147/200
2/2━━━━━━━━━━━━━━━━━━━━0s 38ms/step - accuracy: 0.9720 - loss:
0.4153
Epoch 148/200
2/2━━━━━━━━━━━━━━━━━━━━0s 36ms/step - accuracy: 0.9720 - loss:
0.3941
Epoch 149/200
2/2━━━━━━━━━━━━━━━━━━━━0s 36ms/step - accuracy: 0.9720 - loss:
0.3837
Epoch 150/200
2/2━━━━━━━━━━━━━━━━━━━━0s 38ms/step - accuracy: 0.9720 - loss:
0.3875
Epoch 151/200
2/2━━━━━━━━━━━━━━━━━━━━0s 37ms/step - accuracy: 0.9545 - loss:
0.3751
Epoch 152/200
2/2━━━━━━━━━━━━━━━━━━━━0s 43ms/step - accuracy: 0.9720 - loss:
0.3957
Epoch 153/200
2/2━━━━━━━━━━━━━━━━━━━━0s 46ms/step - accuracy: 0.9441 - loss:
0.3783
Epoch 154/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9441 - loss:
0.3978
Epoch 155/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 0.9441 - loss:
0.4134
Epoch 156/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9161 - loss:
0.4372
Epoch 157/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.9161 - loss:
0.3709
Epoch 158/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9161 - loss:
0.3626
Epoch 159/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.3587
Epoch 160/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9441 - loss:
0.3294
Epoch 161/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.3388
Epoch 162/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9545 - loss:
0.3331
Epoch 163/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 0.9720 - loss:
0.3741
Epoch 164/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 0.9441 - loss:
0.3406
Epoch 165/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9161 - loss:
0.3554
Epoch 166/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9441 - loss:
0.3656
Epoch 167/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9720 - loss:
0.3303
Epoch 168/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9545 - loss:
0.3160
Epoch 169/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9720 - loss:
0.3174
Epoch 170/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9720 - loss:
0.3161
Epoch 171/200
2/2━━━━━━━━━━━━━━━━━━━━0s 30ms/step - accuracy: 0.9441 - loss:
0.3486
Epoch 172/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9825 - loss:
0.2976
Epoch 173/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.3367
Epoch 174/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9441 - loss:
0.3185
Epoch 175/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 0.9720 - loss:
0.3014
Epoch 176/200
2/2━━━━━━━━━━━━━━━━━━━━0s 31ms/step - accuracy: 0.9825 - loss:
0.2910
Epoch 177/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9720 - loss:
0.3001
Epoch 178/200
2/2━━━━━━━━━━━━━━━━━━━━0s 28ms/step - accuracy: 0.9720 - loss:
0.2954
Epoch 179/200
2/2━━━━━━━━━━━━━━━━━━━━0s 28ms/step - accuracy: 1.0000 - loss:
0.2937
Epoch 180/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 1.0000 - loss:
0.2819
Epoch 181/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 1.0000 - loss:
0.2852
Epoch 182/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 1.0000 - loss:
0.2709
Epoch 183/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.2684
Epoch 184/200
2/2━━━━━━━━━━━━━━━━━━━━0s 36ms/step - accuracy: 0.9720 - loss:
0.2753
Epoch 185/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 1.0000 - loss:
0.2686
Epoch 186/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 1.0000 - loss:
0.2689
Epoch 187/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 1.0000 - loss:
0.2618
Epoch 188/200
2/2━━━━━━━━━━━━━━━━━━━━0s 27ms/step - accuracy: 1.0000 - loss:
0.2519
Epoch 189/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 1.0000 - loss:
0.2552
Epoch 190/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 1.0000 - loss:
0.2559
Epoch 191/200
2/2━━━━━━━━━━━━━━━━━━━━0s 26ms/step - accuracy: 1.0000 - loss:
0.2440
Epoch 192/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 1.0000 - loss:
0.2484
Epoch 193/200
2/2━━━━━━━━━━━━━━━━━━━━0s 34ms/step - accuracy: 1.0000 - loss:
0.2361
Epoch 194/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 1.0000 - loss:
0.2457
Epoch 195/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 0.9720 - loss:
0.2482
Epoch 196/200
2/2━━━━━━━━━━━━━━━━━━━━0s 36ms/step - accuracy: 0.9720 - loss:
0.2463
Epoch 197/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 1.0000 - loss:
0.2492
Epoch 198/200
2/2━━━━━━━━━━━━━━━━━━━━0s 25ms/step - accuracy: 1.0000 - loss:
0.2250
Epoch 199/200
2/2━━━━━━━━━━━━━━━━━━━━0s 24ms/step - accuracy: 0.9720 - loss:
0.2343
Epoch 200/200
2/2━━━━━━━━━━━━━━━━━━━━0s 33ms/step - accuracy: 0.9720 - loss:
0.2366
Once upon a time there was a young programmer who wanted to create amazing

Result:
Thus the program was executed successfully.
Ex.No: 5 Sentiment Analysis using LSTM Date:

Aim:
To write a Python program to implement sentiment analysis using an LSTM.
Algorithm:
1. Load the dataset, keeping the top 10,000 most frequent words.

2. Preprocess the data by padding sequences to a fixed length (200 in this case).

3. Build a simple LSTM model with an embedding layer, LSTM layer, and a dense
output layer with a sigmoid activation function for binary sentiment classification.

4. Compile and train the model on the training data.

5. Evaluate the model's accuracy on the test data.

6. Perform sentiment analysis on custom text by tokenizing, padding, and using the
trained model to make predictions.

7. Adjust the number of training epochs, batch size, model architecture, and hyperparameters to improve performance for the specific use case.
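
Step 2 above pads every review to a fixed length before it reaches the LSTM. A small illustration of what pad_sequences does with its default settings (the token ids here are made up):

from tensorflow.keras.preprocessing.sequence import pad_sequences

# Short sequences are pre-padded with zeros; long ones are truncated from the front.
print(pad_sequences([[3, 7, 9], [1, 2, 3, 4, 5, 6]], maxlen=4))
# [[0 3 7 9]
#  [3 4 5 6]]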

Program:

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout
from tensorflow.keras.datasets import imdb

# Load and preprocess the IMDb dataset


vocab_size = 10000 # Use top 10,000 words in the dataset
max_len = 200 # Max review length in words (truncate/pad as needed)

# Load the dataset


(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)

# Pad sequences to ensure uniform input length


x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

# Define the LSTM model


model = Sequential([
Embedding(vocab_size, 128, input_length=max_len),
LSTM(128, return_sequences=True),
Dropout(0.2),
LSTM(64),
Dense(1, activation='sigmoid')
])

# Compile the model


model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

# Train the model


history = model.fit(x_train, y_train, epochs=5, batch_size=64, validation_split=0.2,
verbose=1)

# Evaluate the model


test_loss, test_acc = model.evaluate(x_test, y_test, verbose=1)
print(f"Test Accuracy: {test_acc:.2f}")

# Predict sentiment for new text


def predict_sentiment(text, model, tokenizer, max_len):
# Tokenize and pad the input text
sequences = tokenizer.texts_to_sequences([text])
padded = pad_sequences(sequences, maxlen=max_len)

# Get model prediction


prediction = model.predict(padded)
sentiment = "Positive" if prediction > 0.5 else "Negative"
return sentiment

# Example usage for predicting sentiment on a new text


sample_text = "The movie was fantastic! I really enjoyed it."
sentiment = predict_sentiment(sample_text, model, imdb.get_word_index(), max_len)
print(f"Predicted Sentiment: {sentiment}")

Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-
datasets/imdb.npz
17464789/17464789━━━━━━━━━━━━━━━━━━━━0s 0us/step
Model: "sequential_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━
━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃Output Shape ┃ Param # ┃
┡ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇ ━━━━
━━━━━━━━━━━━━━━━━━━━━━━━━╇ ━━━━━━━━━━━━━━━━━┩
│ embedding_1 (Embedding) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ lstm_2 (LSTM) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ dropout (Dropout) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ lstm_3 (LSTM) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ dense_1 (Dense) │? │ 0 (unbuilt) │
└──────────────────────────────────────┴────────
─────────────────────┴─────────────────┘
Total params: 0 (0.00 B)
Trainable params: 0 (0.00 B)
Non-trainable params: 0 (0.00 B)
Epoch 1/5
313/313━━━━━━━━━━━━━━━━━━━━251s 787ms/step - accuracy: 0.7127 -
loss: 0.5322 - val_accuracy: 0.8372 - val_loss: 0.3869
Epoch 2/5
313/313━━━━━━━━━━━━━━━━━━━━259s 779ms/step - accuracy: 0.8893 -
loss: 0.2925 - val_accuracy: 0.8708 - val_loss: 0.3101
Epoch 3/5
313/313━━━━━━━━━━━━━━━━━━━━247s 787ms/step - accuracy: 0.9291 -
loss: 0.1946 - val_accuracy: 0.8626 - val_loss: 0.3258
Epoch 4/5
313/313━━━━━━━━━━━━━━━━━━━━259s 779ms/step - accuracy: 0.9516 -
loss: 0.1350 - val_accuracy: 0.8668 - val_loss: 0.3931
Epoch 5/5
313/313━━━━━━━━━━━━━━━━━━━━264s 784ms/step - accuracy: 0.9665 -
loss: 0.0982 - val_accuracy: 0.8520 - val_loss: 0.3982
782/782━━━━━━━━━━━━━━━━━━━━115s 147ms/step - accuracy: 0.8460 -
loss: 0.4178
Test Accuracy: 0.85
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-
datasets/imdb_word_index.json
1641221/1641221━━━━━━━━━━━━━━━━━━━━0s 0us/step
---------------------------------------------------------------------------

Result:
Thus the program was executed successfully.
Ex. No: 6 PARTS OF SPEECH TAGGING USING SEQUENCE TO SEQUENCE ARCHITECTURE Date:

Aim:
To write a Python program to implement parts-of-speech tagging using a sequence-to-sequence architecture.

Algorithm:

1. Import the NLTK library and download the necessary data (tokenizers
and POS tagger).
2. Define a sample text that you want to perform POS tagging on.
3. Tokenize the text into words using nltk.word_tokenize.
4. Use nltk.pos_tag to perform POS tagging on the words.
5. Finally, loop through the tagged words and print each word along with its corresponding POS tag.
6. When the program is run, it tokenizes the input text and outputs the POS tag for each word. The POS tags are labels such as 'NN' for noun, 'VB' for verb, 'JJ' for adjective, and so on, depending on the part of speech.
7. Stop the program.
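
As a quick illustration of the tags that nltk.pos_tag produces (and that the program below uses as its training targets), a short standalone check might look like this; the exact tags can vary with the NLTK version.

import nltk
nltk.download('averaged_perceptron_tagger')

print(nltk.pos_tag("Sequence models are powerful.".split()))
# e.g. [('Sequence', 'NNP'), ('models', 'NNS'), ('are', 'VBP'), ('powerful.', 'JJ')]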
Program:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import nltk
nltk.download('averaged_perceptron_tagger')

# Sample sentences for training


sentences = [
"I am learning NLP.",
"Parts of speech tagging is essential.",
"Sequence models are powerful.",
]
# tagging using nltk
target_tags = [
[tag for _, tag in nltk.pos_tag(sentence.split())]
for sentence in sentences
]

# Initialize the tokenizer


word_tokenizer = Tokenizer()
word_tokenizer.fit_on_texts(sentences)
word_index = word_tokenizer.word_index
num_words = len(word_index) + 1 # Including padding token

# Convert sentences to sequences of word indices


input_sequences = word_tokenizer.texts_to_sequences(sentences)
# sequence length for padding
max_sequence_length = max(len(seq) for seq in input_sequences)
input_sequences = pad_sequences(input_sequences, maxlen=max_sequence_length,
padding="post")

# Encode
tags = set(tag for tags in target_tags for tag in tags)
tag_to_index = {tag: i for i, tag in enumerate(tags)}
index_to_tag = {i: tag for tag, i in tag_to_index.items()}
num_tags = len(tags)

# Convert target tags to sequences


target_sequences = [[tag_to_index[tag] for tag in tag_seq] for tag_seq in target_tags]
target_sequences = pad_sequences(target_sequences, maxlen=max_sequence_length,
padding="post")
target_sequences = np.array(target_sequences)

# Encoder-Decoder model with embedding and LSTM layers


embedding_dim = 64
hidden_dim = 128

# Define encoder
encoder_inputs = Input(shape=(max_sequence_length,))
embedding_layer = Embedding(input_dim=num_words, output_dim=embedding_dim,
input_length=max_sequence_length)
encoder_embeddings = embedding_layer(encoder_inputs)
encoder_lstm = LSTM(hidden_dim, return_state=True)
_, encoder_state_h, encoder_state_c = encoder_lstm(encoder_embeddings)
encoder_states = [encoder_state_h, encoder_state_c]
# Define decoder
decoder_inputs = Input(shape=(max_sequence_length,))
decoder_embeddings = embedding_layer(decoder_inputs)
decoder_lstm = LSTM(hidden_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_embeddings, initial_state=encoder_states)
decoder_dense = Dense(num_tags, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Build the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()

# Shift target_sequences to create decoder input sequences (right-shifted)


decoder_input_sequences = np.concatenate([np.zeros((target_sequences.shape[0], 1)),
target_sequences[:, :-1]], axis=1)

# Expand the target sequences to match


target_sequences = np.expand_dims(target_sequences, -1)

# Train the model


model.fit([input_sequences, decoder_input_sequences], target_sequences, epochs=20,
batch_size=1)
def predict_tags(sentence):
    # Preprocess the input sentence
    sequence = word_tokenizer.texts_to_sequences([sentence])
    sequence = pad_sequences(sequence, maxlen=max_sequence_length, padding="post")

    # Predict using the model (for simplicity the padded word sequence is reused as the
    # decoder input; a full seq2seq inference loop would feed predicted tags back step by step)
    predictions = model.predict([sequence, sequence])
    predicted_tags = np.argmax(predictions, axis=-1)[0]

    # Convert predicted indices to tags
    return [index_to_tag[idx] for idx in predicted_tags if idx in index_to_tag]

# Test on a sample sentence


sample_sentence = "Sequence models are powerful"
predicted_tags = predict_tags(sample_sentence)
print("Sentence:", sample_sentence)
print("Predicted Tags:", predicted_tags)
OUTPUT:
Result:
Thus the program was executed successfully.
Ex.No: 7 Machine Translation using Encoder-Decoder Model Date:
Aim:
To write a python program to implement the Machine Translation using Encoder-Decoder model
Algorithm:
1. Start the program.
2. Read the input data files, which are available in the read-only data directory.
3. Set up the encoder, then set up the decoder using `encoder_states` as its initial
   state; decoder_target_data is ahead of decoder_input_data by one time step.
4. Run training.
5. For inference, reverse-lookup the token index to decode sequences back into
   something readable, and generate an empty target sequence of length 1.
6. Run the sampling loop for a batch of sequences (to simplify, a batch of size 1 is
   assumed) and sample a token at each step.
7. Exit condition: either the maximum length is hit or the stop character is found;
   update the target sequence (of length 1) at every step.
8. Update the decoder states and repeat (a sketch of this inference loop is given
   after the Output section).
9. Stop the program.
Program:
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Embedding, Dense

# Encoder
encoder_inputs = Input(shape=(None,))
encoder_embedding = Embedding(input_dim=5000, output_dim=256)(encoder_inputs)
encoder_lstm = LSTM(256, return_state=True)
_, state_h, state_c = encoder_lstm(encoder_embedding)
encoder_states = [state_h, state_c]

# Decoder
decoder_inputs = Input(shape=(None,))
decoder_embedding = Embedding(input_dim=5000, output_dim=256)(decoder_inputs)
decoder_lstm = LSTM(256, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)
decoder_dense = Dense(5000, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Summary
model.summary()

Output:
Model: "functional"

Layer (type)                 Output Shape                        Param #      Connected to
input_layer (InputLayer)     (None, None)                        0            -
input_layer_1 (InputLayer)   (None, None)                        0            -
embedding (Embedding)        (None, None, 256)                   1,280,000    input_layer[0][0]
embedding_1 (Embedding)      (None, None, 256)                   1,280,000    input_layer_1[0][0]
lstm (LSTM)                  [(None, 256), (None, 256),          525,312      embedding[0][0]
                              (None, 256)]
lstm_1 (LSTM)                [(None, None, 256), (None, 256),    525,312      embedding_1[0][0],
                              (None, 256)]                                    lstm[0][1], lstm[0][2]
dense (Dense)                (None, None, 5000)                  1,285,000    lstm_1[0][0]

Total params: 4,895,624 (18.68 MB)
Trainable params: 4,895,624 (18.68 MB)
Non-trainable params: 0 (0.00 B)
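
The listing above only defines and compiles the training model; the sampling loop of
algorithm steps 5-8 is not shown. Below is a minimal sketch of how inference-time encoder
and decoder models could be derived from the same layers. It assumes the decoder Embedding
layer was kept in a variable named decoder_embedding_layer (in the listing it is applied
anonymously) and that token indices 1 and 2 stand for the start and stop symbols; both are
assumptions, not part of the manual's program.

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input

# Encoder model: maps a source sequence to its final LSTM states
encoder_model = Model(encoder_inputs, encoder_states)

# Single-step decoder model, driven by externally supplied states
state_h_in = Input(shape=(256,))
state_c_in = Input(shape=(256,))
dec_emb = decoder_embedding_layer(decoder_inputs)      # assumed layer variable (see note above)
dec_out, h, c = decoder_lstm(dec_emb, initial_state=[state_h_in, state_c_in])
dec_out = decoder_dense(dec_out)
decoder_model = Model([decoder_inputs, state_h_in, state_c_in], [dec_out, h, c])

def decode_sequence(input_seq, start_token=1, stop_token=2, max_len=20):
    # Encode the source sentence into the initial decoder states
    h, c = encoder_model.predict(input_seq, verbose=0)
    # Empty target sequence of length 1 holding the start token
    target_seq = np.array([[start_token]])
    decoded = []
    for _ in range(max_len):
        out, h, c = decoder_model.predict([target_seq, h, c], verbose=0)
        sampled = int(np.argmax(out[0, -1, :]))        # sample the most likely token
        if sampled == stop_token:                      # exit on the stop character
            break
        decoded.append(sampled)
        target_seq = np.array([[sampled]])             # update the length-1 target sequence
    return decoded

A reverse-lookup dictionary from token index to word would then turn the returned indices
back into readable text.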

Result:
Thus the program was executed successfully.
Ex.No:8 Image Augmentation using GANs Date:

Aim:
To write a python program for the implementation of image augmentation using
Generative Adversarial Networks (GANs).
Algorithm:
1. Load the MNIST dataset and normalize the images to the range [-1, 1].
2. Define the Generator model that maps a random latent vector to a 28 x 28 image.
3. Define the Discriminator model that classifies images as real or fake.
4. Combine the two into a GAN in which the discriminator is frozen while the generator is trained.
5. Alternately train the discriminator on real and generated batches and the generator through the GAN.
6. Periodically generate images from random noise and save them; these synthetic images are the augmented samples.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Reshape, Flatten, BatchNormalization
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, LeakyReLU
from tensorflow.keras.models import Sequential
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt

# 1. Load and preprocess the MNIST dataset


def load_data():
    (X_train, _), (_, _) = mnist.load_data()
    X_train = X_train / 127.5 - 1.0  # Normalize to [-1, 1]
    X_train = np.expand_dims(X_train, axis=-1)  # Shape: (n, 28, 28, 1)
    return X_train

# 2. Define the Generator model


def build_generator(latent_dim):
    model = Sequential([
        Dense(128 * 7 * 7, activation="relu", input_dim=latent_dim),
        Reshape((7, 7, 128)),
        BatchNormalization(),
        Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        LeakyReLU(alpha=0.2),
        Conv2DTranspose(64, (4, 4), strides=(2, 2), padding="same"),
        LeakyReLU(alpha=0.2),
        Conv2D(1, (7, 7), activation="tanh", padding="same")
    ])
    return model

# 3. Define the Discriminator model


def build_discriminator(input_shape=(28, 28, 1)):
    model = Sequential([
        Conv2D(64, (4, 4), strides=(2, 2), padding="same", input_shape=input_shape),
        LeakyReLU(alpha=0.2),
        Conv2D(128, (4, 4), strides=(2, 2), padding="same"),
        LeakyReLU(alpha=0.2),
        Flatten(),
        Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# 4. Define the GAN by combining Generator and Discriminator


def build_gan(generator, discriminator):
    discriminator.trainable = False  # Freeze discriminator in GAN
    model = Sequential([generator, discriminator])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

# 5. Train the GAN


def train_gan(generator, discriminator, gan, data, latent_dim, epochs=10000,
              batch_size=128):
    half_batch = batch_size // 2

    for epoch in range(epochs):
        # Train the discriminator with real images
        idx = np.random.randint(0, data.shape[0], half_batch)
        real_images = data[idx]
        real_labels = np.ones((half_batch, 1))

        # Generate fake images
        noise = np.random.normal(0, 1, (half_batch, latent_dim))
        fake_images = generator.predict(noise)
        fake_labels = np.zeros((half_batch, 1))

        # Train the discriminator
        d_loss_real = discriminator.train_on_batch(real_images, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)

        # Train the generator via GAN
        noise = np.random.normal(0, 1, (batch_size, latent_dim))
        valid_labels = np.ones((batch_size, 1))
        g_loss = gan.train_on_batch(noise, valid_labels)

        # Print progress every 1000 epochs
        if epoch % 1000 == 0:
            d_loss = 0.5 * (d_loss_real[0] + d_loss_fake[0])
            print(f"Epoch {epoch} | D Loss: {d_loss:.4f} | G Loss: {g_loss:.4f}")
            generate_and_save_images(generator, latent_dim, epoch)

# 6. Generate and save images


def generate_and_save_images(generator, latent_dim, epoch, n_images=25):
    noise = np.random.normal(0, 1, (n_images, latent_dim))
    generated_images = generator.predict(noise)
    generated_images = 0.5 * generated_images + 0.5  # Rescale to [0, 1]

    plt.figure(figsize=(5, 5))
    for i in range(n_images):
        plt.subplot(5, 5, i + 1)
        plt.imshow(generated_images[i, :, :, 0], cmap='gray')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig(f"gan_generated_epoch_{epoch}.png")
    plt.close()

# 7. Main function to run the GAN


if __name__ == "__main__":
    latent_dim = 100  # Dimension of the latent space
    data = load_data()
    generator = build_generator(latent_dim)
    discriminator = build_discriminator()
    gan = build_gan(generator, discriminator)
    train_gan(generator, discriminator, gan, data, latent_dim)
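
Once training has finished, the generator itself performs the augmentation: it can draw new
synthetic digits and mix them into the real image pool. The short sketch below assumes
generator, latent_dim and data from the program above are still in scope; the choice of 5000
synthetic images is arbitrary, and because this GAN is unconditional the synthetic images
carry no class labels (a conditional GAN would be needed for label-preserving augmentation).

# Augmentation step: extend the real image pool with GAN-generated samples
n_new = 5000                                           # arbitrary number of synthetic images
noise = np.random.normal(0, 1, (n_new, latent_dim))
synthetic = generator.predict(noise)                   # images in [-1, 1], shape (n_new, 28, 28, 1)

augmented_data = np.concatenate([data, synthetic], axis=0)
np.random.shuffle(augmented_data)                      # mix real and synthetic samples
print("Original images:", data.shape[0], "| Augmented pool:", augmented_data.shape[0])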

Output:

Result:
Thus the program was executed successfully.
