Deep Learning Lab Manual

The document outlines the Master Record for the Deep Learning Laboratory (AD3511) at AAA College of Engineering & Technology, affiliated with Anna University. It includes course objectives, outcomes, a list of experiments, hardware and software requirements, and the vision and mission of the institution and department. The laboratory course aims to equip students with practical skills in deep learning applications and data science.

Uploaded by

balavignesh3327

AAA COLLEGE OF ENGINEERING & TECHNOLOGY

(Accredited by NAAC with ‘A’ Grade, An ISO 21001:2018 Certified Institution)


(Approved by AICTE, New Delhi & Affiliated to Anna University, Chennai)
Kamarajar Educational Road, Amathur, Sivakasi – 626 005.

DEPARTMENT OF ARTIFICIAL
INTELLIGENCE AND DATA SCIENCE

LABORATORY
MASTER RECORD

AD3511- DEEP LEARNING LABORATORY

Anna University Regulation - 2021


(V Semester B.Tech – AI&DS)

Prepared By: Ms. P. Meenadharshini
Verified By: Dr. P. Devabalan, HoD
Approved By: Dr. M. Sekar, Principal



Name of the Student : _________________________________________

Register Number : _________________________________________

Branch : ___________________

Year : _______ Semester : _______



AAA COLLEGE OF ENGINEERING AND TECHNOLOGY

VISION of AAACET

● To emerge as a Premier Institute for Quality Technical Education and Research with social responsibilities.

MISSION of AAACET

● To offer state-of-the-art infrastructure for undergraduate, postgraduate and doctoral programs.
● To provide a holistic learning ambience blended with professional ethics, leadership qualities and social responsibilities.
● To generate knowledge and research in the field of technical education and management studies.
● To inculcate innovation and creativity among the student community to become successful entrepreneurs.
● To undertake collaborative projects with academia, research centres and industries to provide cost-effective solutions.

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE


DEPARTMENT VISION

● To be a renowned leader in data science and artificial intelligence by developing technically proficient professionals through quality education and research to enhance societal well-being.

DEPARTMENT MISSION

● To offer a learner-centered environment equipped with the latest technological infrastructure in Artificial Intelligence and Data Science.
● To provide an ambiance in pursuit of excellence by upholding strong professional values and ethics, and promoting social responsibility to address real-life challenges.
● To advance innovative research and development in Artificial Intelligence, Data Science, and allied fields through collaboration with industry partners.
● To equip aspiring entrepreneurs with the skills and knowledge needed to transform their ideas into successful ventures, driving the future of technology and business.
● To enable advanced AI technologies to deliver efficient, impactful solutions that drive innovation and generate value across various sectors.

PROGRAM EDUCATIONAL OBJECTIVES (PEOs) : After 3-5 years of graduation

PEO-1: To compete on a global scale for a professional career in Artificial Intelligence and Data Science.
PEO-2: To provide industry-specific solutions for the society with effective communication and ethics.
PEO-3: To hone their professional skills through research and lifelong learning initiatives.

PROGRAM OUTCOMES (POs): At the time of graduation, our graduates will be able to:



PO-1: Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.
PO-2: Problem analysis: Identify, formulate, research literature, and analyze complex engineering problems to arrive at substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO-3: Design/development of solutions: Design solutions for complex engineering problems and design system components and processes that meet the specifications with consideration for public health and safety, and cultural, societal, and environmental considerations.
PO-4: Conduct investigations of complex problems: Use research-based knowledge including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.
PO-5: Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modeling, to complex engineering activities with an understanding of the limitations.
PO-6: The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal, and cultural issues and the consequent responsibilities relevant to professional engineering practice.
PO-7: Environment and sustainability: Understand the impact of professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.
PO-8: Ethics: Apply ethical principles and commit to professional ethics and responsibilities and the norms of engineering practice.
PO-9: Individual and team work: Function effectively as an individual, and as a member or leader in teams and in multidisciplinary settings.
PO-10: Communication: Communicate effectively with the engineering community and with society at large; comprehend and write effective reports and documentation, make effective presentations, and give and receive clear instructions.
PO-11: Project management and finance: Demonstrate knowledge and understanding of engineering and management principles and apply these to one's own work, as a member and leader in a team, and manage projects in multidisciplinary environments.
PO-12: Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
PROGRAM SPECIFIC OUTCOMES (PSOs):

PSO-1: Utilize their proficiencies in the fundamental knowledge of basic sciences, mathematics, Artificial Intelligence, data science and statistics to build systems that require management and analysis of large volumes of data.
PSO-2: Advance their technical skills to pursue pioneering research in the field of AI and Data Science and create disruptive and sustainable solutions for the welfare of ecosystems.
PSO-3: Design and model AI-based solutions to critical problem domains in the real world, and exhibit innovative thoughts and creative ideas for effective contribution towards economy building.


TABLE OF CONTENTS

S. No. Description Page No.

1. Index 7

2. Course Syllabus 8

3. Course Objectives & Course Outcomes 9

4. List of Experiments 10

5. Hardware & Software Requirements 11

6. Introduction about the Course 13

7. Introduction about the Hardware & Software 14

8. Experiment/Exercise Details 15


1. INDEX

Ex. No. | Date | Title of the Experiment/Exercise | Page No. | Marks Awarded | Signature of the faculty

1. Solving XOR problem using DNN
2. Character recognition using CNN
3. Face recognition using CNN
4. Language modeling using RNN
5. Sentiment analysis using LSTM
6. Parts of speech tagging using Sequence to Sequence architecture
7. Machine Translation using Encoder-Decoder model
8. Image augmentation using GANs
9. Mini-project on real world application

Content Beyond Syllabus

10. Build Regression Model


2. COURSE SYLLABUS

LIST OF EXPERIMENTS:

1. Solving XOR problem using DNN

2. Character recognition using CNN

3. Face recognition using CNN

4. Language modeling using RNN

5. Sentiment analysis using LSTM

6. Parts of speech tagging using Sequence to Sequence architecture

7. Machine Translation using Encoder-Decoder model

8. Image augmentation using GANs

9. Mini-project on real world application

TOTAL: 60 PERIODS

Content Beyond Syllabus:


1. Build Regression Model


3. COURSE OBJECTIVES & COURSE OUTCOMES


COURSE OBJECTIVES:

• To understand the tools and techniques to implement deep neural networks


• To apply different deep learning architectures for solving problems
• To implement generative models for suitable applications
• To learn to build and validate different models

COURSE OUTCOMES:

Upon completion of the course, the students will be able to:

CO | CO Statements | BT Level

CO1 Apply deep neural network for simple problems Apply

CO2 Apply Convolution Neural Network for image processing Apply

CO3 Apply Recurrent Neural Network and its variants for text analysis Apply

CO4 Apply generative models for data augmentation Apply

CO5 Develop real-world solutions using suitable deep neural networks Apply

COURSE ARTICULATION MATRIX:


Course Outcome | PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
CO1 3 2 2 2 3 2 3 2 2

CO2 3 2 2 2 3 2 3 3 3

CO3 3 2 2 2 3 2 3 3 3

CO4 3 2 2 2 3 2 3 3 3

CO5 3 3 3 3 3 3 3 3 3


4. LIST OF EXPERIMENTS

S.No. | LIST OF EXPERIMENTS | Course Outcome

1. Solving XOR problem using DNN (CO1)
2. Character recognition using CNN (CO2)
3. Face recognition using CNN (CO2)
4. Language modeling using RNN (CO3)
5. Sentiment analysis using LSTM (CO3)
6. Parts of speech tagging using Sequence to Sequence architecture (CO3)
7. Machine Translation using Encoder-Decoder model (CO3)
8. Image augmentation using GANs (CO4)
9. Mini-project on real-world applications (CO5)

CONTENT BEYOND SYLLABUS

10. Build Regression Model (CO1)


5. HARDWARE & SOFTWARE REQUIREMENTS


Hardware Requirements:
The following hardware specifications are recommended to efficiently run the
software tools required for this laboratory course:
• Processor:
A multi-core processor (Intel i5 or higher, or equivalent AMD processor) is
recommended to handle data manipulation and computations efficiently.
• RAM:
A minimum of 8GB of RAM is required to ensure smooth execution of
programs, especially when working with large datasets. For more intensive
computations, 16GB or more is ideal.
• Storage:
At least 20GB of free hard disk space to install necessary software and store
datasets.
• Display:
A monitor with a minimum resolution of 1280x800 pixels is recommended to
ensure that all visual outputs, such as graphs and plots, are displayed clearly.
• Input Devices: Standard keyboard and mouse.
Software Requirements:
To effectively complete the experiments and tasks in this laboratory course, the
following software packages and tools are required:
1. Python:
• Python (Version 3.x or higher) is the primary programming language used for
data analysis in this laboratory. It is open-source and provides a wide array of
libraries for data manipulation, visualization, and modeling.
• Installation: Python can be downloaded from the official website:
https://www.python.org/downloads/
2. Python Libraries:
• NumPy: A fundamental library for numerical computing in Python, which
provides support for large, multi-dimensional arrays and matrices, along with a
collection of mathematical functions to operate on these arrays. Installation:
pip install numpy
• Pandas: A powerful data manipulation library for handling structured data (like
CSV, Excel files, or SQL databases). It provides high-level data structures and
tools to work with data seamlessly. Installation: pip install pandas
• Matplotlib: A plotting library for creating static, animated, and interactive
visualizations in Python. It supports various types of plots such as line, scatter,
bar, histogram, etc. Installation: pip install matplotlib
• Seaborn: Built on top of Matplotlib, Seaborn is a statistical data visualization
library that provides a high-level interface for drawing attractive and
informative statistical graphics. Installation: pip install seaborn
• SciPy: A library used for scientific and technical computing, including
statistics, optimization, integration, and more. Installation: pip install scipy
• Statsmodels: A statistical modeling library in Python that provides classes and
functions for estimating statistical models and performing hypothesis tests.
Installation: pip install statsmodels
• Plotly: A graphing library that can create interactive plots. It is often used for
web-based visualizations. Installation: pip install plotly
• Bokeh: Another interactive plotting library for creating web-based dashboards
and data visualizations. Installation: pip install bokeh
• TensorFlow / Keras: The deep learning framework used in these experiments to
build and train the DNN, CNN, LSTM, and encoder-decoder models. Installation:
pip install tensorflow
• PyTorch: A deep learning library used in the language modeling experiment.
Installation: pip install torch
3. Package Manager:
pip (Python Package Installer) is used to install the necessary Python libraries.
Ensure pip is installed with Python, or use Anaconda.

6. INTRODUCTION ABOUT THE COURSE


The Deep Learning Laboratory course is an advanced, hands-on laboratory-based
module that complements theoretical learning in deep learning and data science. The
course focuses on the application of concepts in data manipulation, analysis, and
modeling using real-world datasets.

Data Collection and Cleaning:


Students will learn how to collect and clean data from diverse sources, including
CSV files, databases, web scraping, and APIs. Understanding the importance of data
preprocessing and handling missing values, outliers, and data inconsistencies.
Exploratory Data Analysis (EDA):
Hands-on practice in visualizing and exploring data to understand its structure
and patterns. Usage of tools like Pandas, NumPy, and Matplotlib for data
visualization and summary statistics.
Statistical Analysis:
Application of statistical methods to understand the distribution of data, identify
trends, and test hypotheses. Learning of advanced statistical techniques such as
hypothesis testing, correlation, regression analysis, and probability distributions.
Machine Learning:
Practical exposure to supervised and unsupervised machine learning algorithms (e.g.,
linear regression, decision trees, clustering). Implementation of machine learning
models for classification, regression, and clustering tasks.
Hands-on experience in training, evaluating, and validating machine learning models.
Time Series Analysis:
Exposure to time-series forecasting methods and their applications. Analyzing
and predicting trends over time using techniques such as ARIMA, moving averages,
and LSTM.
Data Visualization:
Creating effective visualizations to present findings, insights, and trends in the
data. Use of tools like Matplotlib, Seaborn, Tableau, or other visualization libraries.
Applications:
Business Analytics: Data-driven decision-making, customer segmentation,
forecasting sales, and marketing strategies. Healthcare: Predictive modeling for disease
outbreaks, patient data analysis, and healthcare management. Finance: Risk modeling,
fraud detection, and financial forecasting. Technology: Recommendation systems,
natural language processing, and AI applications.


7. INTRODUCTION ABOUT THE HARDWARE & SOFTWARE
The hardware must provide the necessary computational power to handle large
datasets, run complex algorithms, and support the software tools used in this course.
Processor (CPU): At least Intel i5 or AMD Ryzen 5 processors, but for more
demanding tasks like deep learning, an Intel i7 or AMD Ryzen 7 would be ideal.
RAM: A minimum of 8 GB of RAM is typically required, but 16 GB or more is
preferred to handle larger datasets, especially in machine learning and deep
learning tasks.

The software used in this laboratory course includes
various programming languages, libraries, tools, and frameworks that provide the
necessary functionality to process, analyze, model, and visualize data. Below is a
breakdown of essential software for the course:
1. Programming Languages: Python:
Python is the most widely used language in data science and analytics due to its
simplicity and extensive ecosystem of libraries and frameworks. Libraries such as
Pandas, NumPy, Matplotlib, Scikit-learn, TensorFlow, and Keras allow students to
perform data manipulation, analysis, visualization, machine learning, and deep
learning tasks efficiently.
2. Data Science and Machine Learning Libraries:
Pandas: For data manipulation and analysis, particularly with structured data
(CSV, Excel, SQL).
NumPy: For numerical computing, matrix operations, and handling arrays.
Matplotlib and Seaborn: For creating static, animated, and interactive
visualizations of data.
Scikit-learn: For machine learning tasks, such as classification, regression,
clustering, and model evaluation.
TensorFlow and Keras: Popular frameworks for deep learning, supporting neural
networks, natural language processing (NLP), and computer vision tasks.
Statsmodels: For statistical models and hypothesis testing.
Plotly: For creating interactive plots and visualizations.


8. EXPERIMENT/EXERCISE DETAILS
Ex.No: 01 DATE:

Solving XOR Problem Using DNN


AIM:
To implement a Deep Neural Network (DNN) in Python to solve the XOR logic gate
problem using basic NumPy operations.

EXERCISE DESCRIPTION:
This experiment demonstrates the use of a simple DNN to model the XOR function.
It uses a 2-input, 2-hidden, 1-output architecture with the sigmoid activation function.
The model is trained using gradient descent for binary classification.

TERMINOLOGIES USED:
• DNN: Deep Neural Network with hidden layers to learn complex patterns.
• XOR: A logic function that outputs 1 only when inputs differ.
• Sigmoid: An activation function to squash output between 0 and 1.
• Forward Propagation: Calculating output from inputs using weights.
• Backpropagation: Adjusting weights to minimize the error.
• Gradient Descent: Optimization method for learning by minimizing cost.

ALGORITHM:
1. Initialize input and output data for the XOR logic.
2. Define the sigmoid activation function and its derivative.
3. Randomly initialize weights and biases.
4. Perform forward propagation to get the predicted output.
5. Calculate error and apply backpropagation to update weights.
6. Repeat for several iterations (epochs).
7. After training, perform predictions on input data.
8. Display final predicted results

PROGRAM:
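The algorithm above can be sketched with plain NumPy as follows. This is an illustrative sketch, not the manual's original listing; the 4-unit hidden layer (rather than 2) and the random seed are assumptions chosen so that plain gradient descent converges reliably on XOR.

```python
import numpy as np

# XOR truth table: inputs and expected outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(a):
    # Derivative written in terms of the activation a = sigmoid(z).
    return a * (1.0 - a)

rng = np.random.default_rng(0)
hidden = 4  # assumption: 4 hidden units for reliable convergence
W1, b1 = rng.uniform(-1, 1, (2, hidden)), np.zeros((1, hidden))
W2, b2 = rng.uniform(-1, 1, (hidden, 1)), np.zeros((1, 1))
lr = 1.0

for _ in range(20000):
    # Forward propagation.
    a1 = sigmoid(X @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)
    # Backpropagation of the squared error.
    d2 = (a2 - y) * sigmoid_deriv(a2)
    d1 = (d2 @ W2.T) * sigmoid_deriv(a1)
    # Gradient-descent weight updates.
    W2 -= lr * (a1.T @ d2)
    b2 -= lr * d2.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d1)
    b1 -= lr * d1.sum(axis=0, keepdims=True)

predictions = np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)).astype(int)
for inp, out in zip(X.astype(int), predictions.ravel()):
    print(f"Input: {inp}, Output: {out}")
```

After training, the rounded network outputs reproduce the XOR truth table shown in the OUTPUT section.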


OUTPUT:
Input: [0 1], Output: 1
Input: [1 0], Output: 1
Input: [1 1], Output: 0
Input: [0 0], Output: 0

RESULT:

Thus, the Python program to implement the XOR logic using a Deep Neural
Network (DNN) was successfully executed; the network correctly classified all XOR
inputs, and the output was verified.


Ex.No: 02 DATE:

Character Recognition Using Convolutional Neural Network (CNN)

AIM:
To build and train a Convolutional Neural Network (CNN) using Python and
Keras to perform handwritten character recognition on the MNIST dataset.

PROGRAM DESCRIPTION:

In this experiment, we use a CNN to recognize handwritten digits (0–9) from


the MNIST dataset. CNNs are particularly effective for image classification tasks
due to their ability to learn spatial hierarchies. The model includes convolutional,
pooling, and fully connected layers, and uses ReLU activation and SoftMax for
multi-class classification.

TERMINOLOGIES USED:
• CNN (Convolutional Neural Network): A type of deep neural network specifically
designed for image data.
• MNIST: A dataset of 28×28 grayscale images of handwritten digits (0–9).
• Convolution Layer: Extracts features from image regions using filters.
• Pooling Layer: Downsamples feature maps to reduce computation.
• ReLU: Activation function that outputs max(0, x).
• Softmax: Converts outputs to probabilities for multi-class classification.
• Epoch: One complete pass through the entire training dataset.

ALGORITHM:
1. Import the MNIST dataset and preprocess the input data.
2. Build a CNN model with convolution, pooling, and dense layers.
3. Compile the model using categorical crossentropy and Adam optimizer.
4. Train the model on the training data for a fixed number of epochs.
5. Evaluate the model's accuracy on the test data
6. Display the test accuracy and sample predictions.
7. Save the trained model (optional).

PROGRAM:
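The steps above can be sketched with Keras as follows. This is an illustrative stand-in for the lab's full listing: the layer sizes are assumptions, and a small random batch replaces the real images so the sketch runs without a download; in the lab, the keras.datasets.mnist.load_data() call shown in the comment supplies the actual data.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# In the lab the real data comes from:
#   (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# A small random batch stands in here so the sketch is self-contained.
x_train = np.random.rand(256, 28, 28, 1).astype("float32")
y_train = keras.utils.to_categorical(np.random.randint(0, 10, 256), 10)

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),   # feature extraction
    layers.MaxPooling2D(2),                    # downsampling
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),    # 10 digit classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=64, verbose=0)

# Per-class probabilities for a few samples; each row sums to 1.
probs = model.predict(x_train[:4], verbose=0)
```

With the real MNIST split and about five epochs, this architecture typically reaches the test accuracy reported in the OUTPUT section.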


OUTPUT:

Epoch 1/5
938/938 - 10s - accuracy: 0.92 - loss: 0.23
...
Epoch 5/5
938/938 - 4s - accuracy: 0.98 - loss: 0.05
Test Accuracy: 0.9857

RESULT:

Thus, the Python program to recognize handwritten characters using Convolutional
Neural Networks (CNN) was successfully implemented and achieved high accuracy on the
MNIST test dataset.


Ex.No: 03 DATE:

FACE RECOGNITION USING CNN


AIM:
To write a python program to implement Face recognition using CNN.

PROGRAM DESCRIPTION:
This Python program implements a Convolutional Neural Network (CNN) using
TensorFlow and Keras to classify face images from the LFW dataset. It uses 500
samples for quick execution. The model includes convolution, pooling, and dense
layers to extract facial features and classify them into predefined identities.
Performance is evaluated using a confusion matrix.

TERMINOLOGIES:
• Conv2D – Applies convolutional filters to extract image features.
• MaxPooling2D – Downsamples feature maps to reduce dimensions.
• Flatten() – Converts 2D feature maps into a 1D vector for dense layers.
• Dense – Fully connected neural network layer for classification.
• to_categorical – Converts class labels into one-hot encoded format.
• confusion_matrix – Evaluates classification performance by comparing predictions
with actual labels.

ALGORITHM:
1. Start the program.
2. Import the relevant packages for face recognition.
3. Load a 500-sample subset of face images from the LFW dataset.
4. Reshape and normalize the image data for model creation.
5. Build the CNN and train it on the training split.
6. Predict identities on the test data and compute the confusion matrix.
7. Stop the program.

PROGRAM:
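A minimal sketch of this workflow follows, under stated assumptions: random images stand in for the LFW faces so the sketch runs offline (the lab fetches the real data as noted in the comment), and the image shape, layer sizes, and five-class setup are illustrative.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# In the lab the faces come from scikit-learn, e.g.:
#   from sklearn.datasets import fetch_lfw_people
#   lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
# Random images of a similar shape stand in here so the sketch runs offline.
n_classes = 5
X = np.random.rand(200, 50, 37, 1).astype("float32")
y = np.random.randint(0, n_classes, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = keras.Sequential([
    keras.Input(shape=(50, 37, 1)),
    layers.Conv2D(16, 3, activation="relu"),  # extract facial features
    layers.MaxPooling2D(2),                   # reduce dimensions
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr, keras.utils.to_categorical(y_tr, n_classes),
          epochs=2, verbose=0)

# Evaluate with a confusion matrix, as in the experiment description.
y_pred = model.predict(X_te, verbose=0).argmax(axis=1)
cm = confusion_matrix(y_te, y_pred, labels=range(n_classes))
```

The confusion matrix rows are the true identities and the columns the predicted ones, so off-diagonal entries count misclassified faces.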


OUTPUT:
Epoch 1/5
C:\Users\rohit\anaconda3\Lib\site-packages\keras\src\layers\convolutional\base_conv.py:107: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(activity_regularizer=activity_regularizer, **kwargs)
13/13 ━━━━━━━━━━━━━━━━━━━━ 3s 158ms/step - accuracy: 0.3264 - loss: 1.5228 - val_accuracy: 0.4600 - val_loss: 1.4287
Epoch 2/5
13/13 ━━━━━━━━━━━━━━━━━━━━ 2s 152ms/step - accuracy: 0.4968 - loss: 1.3866 - val_accuracy: 0.4600 - val_loss: 1.4622
Epoch 3/5
13/13 ━━━━━━━━━━━━━━━━━━━━ 2s 148ms/step - accuracy: 0.4497 - loss: 1.4845 - val_accuracy: 0.4600 - val_loss: 1.4306
Epoch 4/5
13/13 ━━━━━━━━━━━━━━━━━━━━ 2s 153ms/step - accuracy: 0.4509 - loss: 1.4498 - val_accuracy: 0.4600 - val_loss: 1.4264
Epoch 5/5
13/13 ━━━━━━━━━━━━━━━━━━━━ 2s 145ms/step - accuracy: 0.4656 - loss: 1.4287 - val_accuracy: 0.4600 - val_loss: 1.4183
Training completed.
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
Prediction completed.


RESULT:
Thus, the program to implement the Face Recognition using CNN was
successfully executed and the output was verified.

Ex.No: 04 DATE:
LANGUAGE MODELING USING RNN
AIM:
To write a Python program to implement Language modeling using RNN.

PROGRAM DESCRIPTION:
In this experiment, we build a character-level language model with a Recurrent
Neural Network (RNN) in PyTorch. The network is trained on names grouped by language
(category) and learns to predict the next character from the current character, the
hidden state, and a one-hot category tensor, using the NLLLoss criterion. After training,
new names are generated by sampling: given a category and a starting letter, each
predicted character is fed back as the next input until an end-of-name marker is produced.

TERMINOLOGIES USED:

• RNN (Recurrent Neural Network): A neural network suited for sequential data like
text, where output depends on previous inputs.
• One-Hot Encoding: Representing categorical data (like characters or class labels) as
binary vectors.
• Category Tensor: A tensor used to represent the class (e.g., nationality) of a data
point.
• Loss Function (NLLLoss): Measures how far the model's output is from the target,
optimized during training.
• Sampling: The process of generating new data (e.g., names) by feeding predicted
characters back into the model.
• Softmax / LogSoftmax: Converts raw model outputs into probabilities for
classification.

ALGORITHM:
1. Start the program.
2. Import the relevant packages for language modeling.
3. Read the name files and split them into lines.
4. Build the category_lines dictionary, a list of names per category.
5. Define a helper to pick a random item from a list.
6. Get a random category and a random name from that category.
7. Build one-hot tensors for the category, input characters, and target characters.
8. Train the RNN, accumulating the loss over each character of a (category, name) pair.
9. Sample a new name from a category and starting letter.
10. Get multiple samples from one category and multiple starting letters.
11. Stop the program.

PROGRAM:
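The procedure above can be sketched compactly in PyTorch. A tiny in-memory dictionary stands in for the name files read from disk in the lab, and the hidden size, learning rate, and two-language dataset are assumptions; the structure (one-hot category tensor, one-hot characters, NLLLoss, and feeding predictions back during sampling) follows the algorithm.

```python
import torch
import torch.nn as nn

# Tiny in-memory stand-in for the per-language name files used in the lab.
category_lines = {"Spanish": ["Garcia", "Moreno"], "German": ["Bauer", "Wagner"]}
all_categories = list(category_lines)
all_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
n_letters = len(all_letters) + 1        # extra slot marks end-of-name (EOS)
n_categories = len(all_categories)

class RNN(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(n_categories + n_letters + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + n_letters + hidden_size, n_letters)
        self.softmax = nn.LogSoftmax(dim=1)  # log-probabilities for NLLLoss
    def forward(self, category, letter, hidden):
        combined = torch.cat((category, letter, hidden), 1)
        hidden = self.i2h(combined)
        return self.softmax(self.i2o(combined)), hidden

def one_hot(index, size):
    t = torch.zeros(1, size); t[0][index] = 1; return t

rnn, criterion = RNN(), nn.NLLLoss()
optimizer = torch.optim.SGD(rnn.parameters(), lr=0.05)
for _ in range(200):                    # a short training run for the sketch
    for ci, cat in enumerate(all_categories):
        for name in category_lines[cat]:
            hidden = torch.zeros(1, rnn.hidden_size)
            optimizer.zero_grad()
            loss = 0.0
            indices = [all_letters.index(c) for c in name] + [n_letters - 1]
            # Each step predicts the next character (or EOS) of the name.
            for inp, target in zip([None] + indices[:-1], indices):
                letter = one_hot(inp, n_letters) if inp is not None \
                    else torch.zeros(1, n_letters)
                output, hidden = rnn(one_hot(ci, n_categories), letter, hidden)
                loss = loss + criterion(output, torch.tensor([target]))
            loss.backward()
            optimizer.step()

def sample(category, start="G", max_len=10):
    # Generate a name by feeding each predicted character back as input.
    with torch.no_grad():
        ci = all_categories.index(category)
        hidden = torch.zeros(1, rnn.hidden_size)
        name, letter = start, one_hot(all_letters.index(start), n_letters)
        for _ in range(max_len):
            output, hidden = rnn(one_hot(ci, n_categories), letter, hidden)
            idx = output.argmax(1).item()
            if idx == n_letters - 1:    # EOS predicted
                break
            name += all_letters[idx]
            letter = one_hot(idx, n_letters)
        return name
```

Calling sample("German", "B") after training returns a generated name beginning with "B".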


OUTPUT:
# categories: 4 ['Russian', 'German', 'Spanish', 'Chinese']

0m 0s (200 20%) 3.9300

0m 1s (400 40%) 3.9264

0m 2s (600 60%) 3.9166

0m 3s (800 80%) 3.8827

0m 3s (1000 100%) 3.1583



RESULT:

Thus, the program to implement the Language Modeling using RNN was
successfully executed and the output was verified.


Ex.No: 05 DATE:

SENTIMENT ANALYSIS USING LSTM


AIM:
To write a Python program to implement Sentiment analysis using LSTM.

PROGRAM DESCRIPTION:
This Python program demonstrates sentiment analysis using a Recurrent Neural
Network (RNN) built with TensorFlow and Keras. It is designed to work fully offline by
using a small set of manually defined movie reviews labeled as positive or negative. The
reviews are tokenized into sequences of integers using Keras's Tokenizer, and these
sequences are padded to a fixed length to ensure uniform input size. The model
architecture consists of an embedding layer to convert word indices into dense vectors,
followed by an LSTM (Long Short-Term Memory) layer that captures temporal
dependencies in the sequence data, and a dense output layer with a sigmoid activation
for binary classification. The model is trained and validated on a split of the dataset and
then evaluated to report accuracy and loss. This implementation provides a compact and
self-contained example of how natural language processing and deep learning can be
applied to classify text sentiment without the need for internet connectivity or large
datasets.

TERMINOLOGIES USED:

• Tokenizer: A utility that converts text into sequences of integers, where each integer
represents a word's index in the vocabulary. It prepares raw text data for model
input.
• Padding: A process that ensures all sequences (reviews) have the same length by
adding zeros (or truncating) to match a defined maximum length.
• Embedding layer: Transforms word indices into dense vector representations,
capturing semantic relationships between words in a lower-dimensional space.
• LSTM (Long Short-Term Memory): A special type of recurrent neural network layer
that is capable of learning long-term dependencies, ideal for sequence data like text.
• Binary crossentropy: A loss function used for binary classification tasks, measuring
the difference between predicted probabilities and actual binary labels

ALGORITHM:
1. Start the program.
2. Import the required TensorFlow and Keras packages.
3. Define a small set of movie reviews labeled positive or negative.
4. Tokenize the reviews into integer sequences and pad them to a fixed length.
5. Build the model: an embedding layer, an LSTM layer, and a sigmoid output layer.
6. Compile the model with binary crossentropy loss and an optimizer.
7. Train the model and evaluate accuracy and loss on the validation split.
8. Predict the sentiment of the given reviews.
9. Stop the program.


PROGRAM:
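A self-contained sketch of the described pipeline follows. The eight hand-labelled reviews and the layer sizes are assumptions for illustration, and the word index is built by hand here, where the lab uses Keras's Tokenizer for the same step.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A tiny hand-labelled corpus so the sketch runs fully offline (1 = positive).
texts = ["a truly great movie", "wonderful acting and story",
         "I loved this film", "great fun from start to end",
         "a boring waste of time", "terrible plot and bad acting",
         "I hated this film", "dull slow and bad"]
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Build a word index by hand (the lab uses keras's Tokenizer for this step),
# then integer-encode and zero-pad each review to a fixed length.
vocab = {w: i + 1 for i, w in
         enumerate(sorted({w for t in texts for w in t.lower().split()}))}
max_len = 6
def encode(t):
    seq = [vocab[w] for w in t.lower().split()][:max_len]
    return seq + [0] * (max_len - len(seq))
X = np.array([encode(t) for t in texts])

model = keras.Sequential([
    keras.Input(shape=(max_len,)),
    layers.Embedding(len(vocab) + 1, 16),   # word index -> dense vector
    layers.LSTM(32),                        # captures order in the sequence
    layers.Dense(1, activation="sigmoid"),  # binary sentiment score
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=30, verbose=0)

# Sentiment probabilities in [0, 1]; values above 0.5 read as positive.
preds = model.predict(X, verbose=0).ravel()
```

With such a small corpus the scores are only indicative, which is consistent with the modest accuracy in the OUTPUT section.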


OUTPUT:

Epoch 1/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 224ms/step - accuracy: 0.6667 - loss: 0.6917 - val_accuracy: 0.5000 - val_loss: 0.6945
Epoch 2/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 50ms/step - accuracy: 0.8333 - loss: 0.6902 - val_accuracy: 0.0000e+00 - val_loss: 0.6946
Epoch 3/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 42ms/step - accuracy: 0.8333 - loss: 0.6869 - val_accuracy: 0.5000 - val_loss: 0.6945
Epoch 4/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 42ms/step - accuracy: 0.8333 - loss: 0.6857 - val_accuracy: 0.5000 - val_loss: 0.6945
Epoch 5/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 44ms/step - accuracy: 0.6667 - loss: 0.6867 - val_accuracy: 0.5000 - val_loss: 0.6945
Epoch 6/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 43ms/step - accuracy: 0.6667 - loss: 0.6851 - val_accuracy: 0.5000 - val_loss: 0.6945
Epoch 7/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 42ms/step - accuracy: 1.0000 - loss: 0.6809 - val_accuracy: 0.5000 - val_loss: 0.6944
Epoch 8/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 45ms/step - accuracy: 1.0000 - loss: 0.6796 - val_accuracy: 0.5000 - val_loss: 0.6944
Epoch 9/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 43ms/step - accuracy: 1.0000 - loss: 0.6807 - val_accuracy: 0.5000 - val_loss: 0.6944
Epoch 10/10
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 44ms/step - accuracy: 1.0000 - loss: 0.6759 - val_accuracy: 0.5000 - val_loss: 0.6944
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 203ms/step - accuracy: 0.5000 - loss: 0.6944

Loss: 0.6943531632423401
Accuracy: 0.5

RESULT:

Thus, the program to implement the Sentiment analysis using LSTM was
successfully executed, and the output was verified.

Ex.No: 06 DATE:

PARTS OF SPEECH TAGGING USING SEQUENCE TO SEQUENCE ARCHITECTURE

AIM:
To implement Parts of Speech tagging using Sequence to Sequence (Seq2Seq)
architecture.

PROGRAM DESCRIPTION:
This program demonstrates Parts of Speech (POS) tagging using a Sequence-to-Sequence
(Seq2Seq) architecture built with TensorFlow and Keras. The model is designed to map
input sentences (sequences of words) to corresponding sequences of POS tags. It uses an
encoder-decoder structure where the encoder LSTM processes the input sentence and
compresses it into context-rich states, which are then passed to the decoder LSTM to
generate a sequence of POS tags. During training, the model learns from sentence-tag
pairs using teacher forcing. For inference, the decoder generates POS tags sequentially
starting with a <sos> token and stopping at an <eos> token. The model is trained on a
small synthetic dataset of English sentences and their POS tags.

TERMINOLOGIES USED:
• POS Tagging: The process of assigning grammatical tags (e.g., noun, verb) to
words in a sentence.
• Seq2Seq Model: A neural network architecture that transforms one sequence
into another, used for tasks like translation and tagging.
• Encoder: The part of the model that reads and summarizes the input
sentence into a context vector.
• Decoder: The model component that generates the output sequence (POS
tags) using the context vector.
• Embedding Layer: Converts words or tags into dense vector representations
to capture semantic meaning.

ALGORITHM:

1. Define the input and output sequences.
2. Create a set of all unique words and POS tags in the dataset.
3. Add <sos> and <eos> tokens to target_words.
4. Create dictionaries that map words and POS tags to integers.
5. Define the maximum sequence lengths; prepare the encoder input data and the
decoder input and target data.
6. Define the encoder input and LSTM layers.
7. Define the decoder input and LSTM layers.
8. Define, compile, and train the model.
9. Define the encoder model to obtain the encoder states, and define the decoder
model with the encoder states as its initial state.
10. Define a function to perform inference and generate POS tags, then test the model.


PROGRAM:
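A minimal Seq2Seq sketch of the steps above, using the Keras functional API with teacher forcing. The sentence/tag pairs, embedding and LSTM sizes, and epoch count are illustrative assumptions, not the lab's exact listing.

```python
import numpy as np
from tensorflow.keras import layers, Model

# Toy sentence/tag pairs (illustrative assumption)
pairs = [("i love coding", "PRP VB NN"),
         ("she sings well", "PRP VB RB"),
         ("this is a pen", "DT VBZ DT NN")]
src_vocab = {w: i + 1 for i, w in enumerate(sorted({w for s, _ in pairs for w in s.split()}))}
tag_set = sorted({t for _, ts in pairs for t in ts.split()}) + ["<sos>", "<eos>"]
tag_idx = {t: i + 1 for i, t in enumerate(tag_set)}  # 0 is padding

max_src = max(len(s.split()) for s, _ in pairs)
max_tgt = max(len(t.split()) for _, t in pairs) + 1  # room for <sos>/<eos>

enc_in = np.zeros((len(pairs), max_src), dtype="int32")
dec_in = np.zeros((len(pairs), max_tgt), dtype="int32")
dec_out = np.zeros((len(pairs), max_tgt), dtype="int32")
for r, (s, ts) in enumerate(pairs):
    for c, w in enumerate(s.split()):
        enc_in[r, c] = src_vocab[w]
    tags = ts.split()
    for c, t in enumerate(["<sos>"] + tags):  # decoder input is shifted right
        dec_in[r, c] = tag_idx[t]
    for c, t in enumerate(tags + ["<eos>"]):  # decoder target ends with <eos>
        dec_out[r, c] = tag_idx[t]

units, n_tags = 32, len(tag_idx) + 1
e_in = layers.Input(shape=(None,))
# Encoder LSTM compresses the sentence into its final states (h, c)
_, h, c = layers.LSTM(units, return_state=True)(
    layers.Embedding(len(src_vocab) + 1, 16)(e_in))
d_in = layers.Input(shape=(None,))
# Decoder LSTM starts from the encoder states (teacher forcing during training)
d_seq = layers.LSTM(units, return_sequences=True)(
    layers.Embedding(n_tags, 16)(d_in), initial_state=[h, c])
out = layers.Dense(n_tags, activation="softmax")(d_seq)

model = Model([e_in, d_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit([enc_in, dec_in], dec_out, epochs=10, verbose=0)

pred = model.predict([enc_in, dec_in], verbose=0)  # per-position tag probabilities
```

For inference, the full program additionally builds separate encoder and decoder models and generates tags one step at a time from <sos> until <eos>.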


OUTPUT:
Epoch 1/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 3s 3s/step - accuracy: 0.0000e+00 - loss: 1.6484 - val_accuracy: 0.3333 - val_loss: 1.4638
Epoch 2/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 99ms/step - accuracy: 0.6667 - loss: 1.6246 - val_accuracy: 0.3333 - val_loss: 1.4635
Epoch 3/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 94ms/step - accuracy: 0.7500 - loss: 1.6003 - val_accuracy: 0.6667 - val_loss: 1.4632
Epoch 4/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 99ms/step - accuracy: 0.9167 - loss: 1.5739 - val_accuracy: 0.6667 - val_loss: 1.4629
Epoch 5/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 107ms/step - accuracy: 0.9167 - loss: 1.5440 - val_accuracy: 0.6667 - val_loss: 1.4625
Epoch 6/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 88ms/step - accuracy: 0.8333 - loss: 1.5089 - val_accuracy: 0.6667 - val_loss: 1.4620
Epoch 7/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 88ms/step - accuracy: 0.8333 - loss: 1.4672 - val_accuracy: 0.6667 - val_loss: 1.4613
Epoch 8/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 85ms/step - accuracy: 0.8333 - loss: 1.4168 - val_accuracy: 0.6667 - val_loss: 1.4605
Epoch 9/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 87ms/step - accuracy: 0.8333 - loss: 1.3562 - val_accuracy: 0.6667 - val_loss: 1.4595
Epoch 10/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 95ms/step - accuracy: 0.8333 - loss: 1.2841 - val_accuracy: 0.6667 - val_loss: 1.4584
Epoch 11/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 83ms/step - accuracy: 0.8333 - loss: 1.2012 - val_accuracy: 0.6667 - val_loss: 1.4574
Epoch 12/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 91ms/step - accuracy: 0.8333 - loss: 1.1117 - val_accuracy: 0.5000 - val_loss: 1.4576
Epoch 13/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 98ms/step - accuracy: 0.9167 - loss: 1.0234 - val_accuracy: 0.5000 - val_loss: 1.4600
Epoch 14/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 88ms/step - accuracy: 0.9167 - loss: 0.9431 - val_accuracy: 0.6667 - val_loss: 1.4659
Epoch 15/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 86ms/step - accuracy: 0.9167 - loss: 0.8741 - val_accuracy: 0.6667 - val_loss: 1.4756
Epoch 16/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 86ms/step - accuracy: 0.9167 - loss: 0.8158 - val_accuracy: 0.6667 - val_loss: 1.4890
Epoch 17/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 87ms/step - accuracy: 0.9167 - loss: 0.7663 - val_accuracy: 0.6667 - val_loss: 1.5057
Epoch 18/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 93ms/step - accuracy: 0.9167 - loss: 0.7252 - val_accuracy: 0.6667 - val_loss: 1.5255
Epoch 19/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.9167 - loss: 0.6922 - val_accuracy: 0.6667 - val_loss: 1.5478
Epoch 20/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 87ms/step - accuracy: 0.9167 - loss: 0.6668 - val_accuracy: 0.6667 - val_loss: 1.5719
Epoch 21/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 85ms/step - accuracy: 0.9167 - loss: 0.6490 - val_accuracy: 0.6667 - val_loss: 1.5967
Epoch 22/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 81ms/step - accuracy: 0.9167 - loss: 0.6378 - val_accuracy: 0.5000 - val_loss: 1.6211
Epoch 23/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - accuracy: 0.8333 - loss: 0.6290 - val_accuracy: 0.5000 - val_loss: 1.6434
Epoch 24/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 93ms/step - accuracy: 0.8333 - loss: 0.6157 - val_accuracy: 0.5000 - val_loss: 1.6624
Epoch 25/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 133ms/step - accuracy: 0.8333 - loss: 0.5921 - val_accuracy: 0.5000 - val_loss: 1.6778
Epoch 26/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 85ms/step - accuracy: 0.8333 - loss: 0.5562 - val_accuracy: 0.6667 - val_loss: 1.6903
Epoch 27/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 85ms/step - accuracy: 0.9167 - loss: 0.5104 - val_accuracy: 0.6667 - val_loss: 1.7001
Epoch 28/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 83ms/step - accuracy: 1.0000 - loss: 0.4616 - val_accuracy: 0.6667 - val_loss: 1.7071
Epoch 29/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 1.0000 - loss: 0.4168 - val_accuracy: 0.6667 - val_loss: 1.7107
Epoch 30/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 125ms/step - accuracy: 1.0000 - loss: 0.3772 - val_accuracy: 0.6667 - val_loss: 1.7115
Epoch 31/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 96ms/step - accuracy: 1.0000 - loss: 0.3438 - val_accuracy: 0.6667 - val_loss: 1.7116
Epoch 32/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 95ms/step - accuracy: 1.0000 - loss: 0.3183 - val_accuracy: 0.6667 - val_loss: 1.7129
Epoch 33/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 98ms/step - accuracy: 1.0000 - loss: 0.2951 - val_accuracy: 0.6667 - val_loss: 1.7165
Epoch 34/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 95ms/step - accuracy: 1.0000 - loss: 0.2699 - val_accuracy: 0.6667 - val_loss: 1.7231
Epoch 35/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 114ms/step - accuracy: 1.0000 - loss: 0.2466 - val_accuracy: 0.6667 - val_loss: 1.7324
Epoch 36/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 86ms/step - accuracy: 1.0000 - loss: 0.2283 - val_accuracy: 0.5000 - val_loss: 1.7429
Epoch 37/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 91ms/step - accuracy: 1.0000 - loss: 0.2134 - val_accuracy: 0.5000 - val_loss: 1.7537
Epoch 38/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 103ms/step - accuracy: 1.0000 - loss: 0.1987 - val_accuracy: 0.5000 - val_loss: 1.7655
Epoch 39/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 107ms/step - accuracy: 1.0000 - loss: 0.1816 - val_accuracy: 0.5000 - val_loss: 1.7793
Epoch 40/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 111ms/step - accuracy: 1.0000 - loss: 0.1656 - val_accuracy: 0.5000 - val_loss: 1.7957
Epoch 41/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 106ms/step - accuracy: 1.0000 - loss: 0.1522 - val_accuracy: 0.5000 - val_loss: 1.8137
Epoch 42/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 105ms/step - accuracy: 1.0000 - loss: 0.1411 - val_accuracy: 0.5000 - val_loss: 1.8317
Epoch 43/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 101ms/step - accuracy: 1.0000 - loss: 0.1315 - val_accuracy: 0.5000 - val_loss: 1.8480
Epoch 44/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 91ms/step - accuracy: 1.0000 - loss: 0.1216 - val_accuracy: 0.6667 - val_loss: 1.8617
Epoch 45/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 101ms/step - accuracy: 1.0000 - loss: 0.1129 - val_accuracy: 0.6667 - val_loss: 1.8733
Epoch 46/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 84ms/step - accuracy: 1.0000 - loss: 0.1066 - val_accuracy: 0.6667 - val_loss: 1.8836
Epoch 47/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 189ms/step - accuracy: 1.0000 - loss: 0.1019 - val_accuracy: 0.6667 - val_loss: 1.8939
Epoch 48/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 90ms/step - accuracy: 1.0000 - loss: 0.0974 - val_accuracy: 0.6667 - val_loss: 1.9048
Epoch 49/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 1.0000 - loss: 0.0916 - val_accuracy: 0.6667 - val_loss: 1.9162
Epoch 50/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 91ms/step - accuracy: 1.0000 - loss: 0.0856 - val_accuracy: 0.6667 - val_loss: 1.9270
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 131ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 126ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 30ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 31ms/step

Input: I love coding
Predicted POS Tags: PRP VB NNP

1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 32ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 31ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 31ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step

Input: This is a pen
Predicted POS Tags: DT VBZ DT NN

1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 30ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 30ms/step

Input: She sings well
Predicted POS Tags: PRP VB NNP

2025-07-18 21:27:34.112108: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-07-18 21:27:35.167947: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-07-18 21:27:37.579811: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.

RESULT:

Thus, the program to implement the Parts of speech tagging using Sequence
to Sequence architecture was successfully executed and the output was verified.

Ex.No: 07 DATE:

MACHINE TRANSLATION USING ENCODER-DECODER MODEL
AIM:
To implement Machine Translation using the Encoder-Decoder model in Python.

PROGRAM DESCRIPTION:
Machine translation using an Encoder-Decoder model is a key technique in natural
language processing (NLP). It translates sentences from one language to another using a
two-part neural network:
• The encoder processes an input sentence in the source language and creates a
context vector representing its meaning.
• The decoder uses this context to generate the corresponding sentence in the target
language.
• The model is trained on pairs of source and target language sentences. During
inference, the encoder summarizes the source sentence and the decoder translates
it word-by-word using previously predicted words and encoder states.

TERMINOLOGIES USED:

• Encoder: Part of the model that processes the input sentence into a fixed
representation.
• Decoder: Translates the encoder’s output into the target language sentence.
• LSTM (Long Short-Term Memory): A type of RNN used for handling long-term
dependencies in sequence data.
• Tokenization: The process of converting words into numerical indices.
• Embedding: Converts word indices into dense vector representations for training
neural networks.

ALGORITHM:
1. Define the input and output sequences.
2. Create a set of all unique words in the input and target sequences.
3. Add <sos> and <eos> tokens to target_words.
4. Create dictionaries to map words to integers.
5. Define the maximum sequence lengths.
6. Prepare the encoder input data.
7. Prepare the decoder input and target data.
8. Define the encoder input and LSTM layers.
9. Define the decoder input and LSTM layers.
10. Define, compile, and train the model.
11. Define the encoder model to obtain the encoder states.
12. Define the decoder model with the encoder states as its initial state.
13. Define a function to perform inference and generate translations.
14. Test the model.


PROGRAM:
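The algorithm above can be sketched as follows. The English–German pairs, layer sizes, and epoch count are illustrative assumptions, and for brevity the greedy decoder re-runs the combined training model (feeding each predicted word back in) instead of building separate encoder and decoder inference models.

```python
import numpy as np
from tensorflow.keras import layers, Model

# Toy English-German pairs (illustrative assumption, not the lab's dataset)
pairs = [("i love coding", "ich liebe das coden"),
         ("this is a pen", "das ist ein stift"),
         ("she sings well", "sie singt gut")]
src_vocab = {w: i + 1 for i, w in enumerate(sorted({w for s, _ in pairs for w in s.split()}))}
tgt_tokens = sorted({w for _, t in pairs for w in t.split()}) + ["<sos>", "<eos>"]
tgt_idx = {w: i + 1 for i, w in enumerate(tgt_tokens)}  # 0 is padding
idx_tgt = {i: w for w, i in tgt_idx.items()}

max_src = max(len(s.split()) for s, _ in pairs)
max_tgt = max(len(t.split()) for _, t in pairs) + 1  # room for <sos>/<eos>

enc_in = np.zeros((len(pairs), max_src), dtype="int32")
dec_in = np.zeros((len(pairs), max_tgt), dtype="int32")
dec_out = np.zeros((len(pairs), max_tgt), dtype="int32")
for r, (s, t) in enumerate(pairs):
    for c, w in enumerate(s.split()):
        enc_in[r, c] = src_vocab[w]
    words = t.split()
    for c, w in enumerate(["<sos>"] + words):  # decoder input is shifted right
        dec_in[r, c] = tgt_idx[w]
    for c, w in enumerate(words + ["<eos>"]):  # decoder target ends with <eos>
        dec_out[r, c] = tgt_idx[w]

units, vocab_out = 32, len(tgt_idx) + 1
e_in = layers.Input(shape=(None,))
# Encoder: the final LSTM states summarize the source sentence
_, h, c = layers.LSTM(units, return_state=True)(
    layers.Embedding(len(src_vocab) + 1, 16)(e_in))
d_in = layers.Input(shape=(None,))
d_seq = layers.LSTM(units, return_sequences=True)(
    layers.Embedding(vocab_out, 16)(d_in), initial_state=[h, c])
out = layers.Dense(vocab_out, activation="softmax")(d_seq)

model = Model([e_in, d_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit([enc_in, dec_in], dec_out, epochs=30, verbose=0)

def translate(sentence):
    """Greedy decoding: feed each predicted word back as the next decoder input."""
    e = np.zeros((1, max_src), dtype="int32")
    for i, w in enumerate(sentence.split()[:max_src]):
        e[0, i] = src_vocab.get(w, 0)
    d = np.zeros((1, max_tgt), dtype="int32")
    d[0, 0] = tgt_idx["<sos>"]
    words = []
    for t in range(max_tgt - 1):
        nxt = int(model.predict([e, d], verbose=0)[0, t].argmax())
        w = idx_tgt.get(nxt, "<eos>")
        if w == "<eos>":
            break
        words.append(w)
        d[0, t + 1] = nxt
    return " ".join(words)

translation = translate("i love coding")
```

With only three training pairs the translations are unreliable, which is consistent with the repeated words visible in the output below.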


OUTPUT:

Epoch 1/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 2s 2s/step - accuracy: 0.0000e+00 - loss: 1.9238 - val_accuracy: 0.0000e+00 - val_loss: 1.2898
Epoch 2/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 80ms/step - accuracy: 0.2500 - loss: 1.8943 - val_accuracy: 0.0000e+00 - val_loss: 1.2952
Epoch 3/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.6250 - loss: 1.8638 - val_accuracy: 0.0000e+00 - val_loss: 1.3013
Epoch 4/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.6250 - loss: 1.8305 - val_accuracy: 0.0000e+00 - val_loss: 1.3083
Epoch 5/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 76ms/step - accuracy: 0.6250 - loss: 1.7927 - val_accuracy: 0.0000e+00 - val_loss: 1.3164
Epoch 6/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.6250 - loss: 1.7486 - val_accuracy: 0.0000e+00 - val_loss: 1.3261
Epoch 7/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.6250 - loss: 1.6960 - val_accuracy: 0.0000e+00 - val_loss: 1.3377
Epoch 8/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 76ms/step - accuracy: 0.5000 - loss: 1.6327 - val_accuracy: 0.0000e+00 - val_loss: 1.3516
Epoch 9/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 76ms/step - accuracy: 0.5000 - loss: 1.5561 - val_accuracy: 0.0000e+00 - val_loss: 1.3685
Epoch 10/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 76ms/step - accuracy: 0.5000 - loss: 1.4636 - val_accuracy: 0.0000e+00 - val_loss: 1.3891
Epoch 11/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 74ms/step - accuracy: 0.5000 - loss: 1.3534 - val_accuracy: 0.0000e+00 - val_loss: 1.4141
Epoch 12/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.5000 - loss: 1.2257 - val_accuracy: 0.0000e+00 - val_loss: 1.4439
Epoch 13/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.5000 - loss: 1.0863 - val_accuracy: 0.0000e+00 - val_loss: 1.4774
Epoch 14/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 74ms/step - accuracy: 0.5000 - loss: 0.9492 - val_accuracy: 0.0000e+00 - val_loss: 1.5115
Epoch 15/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 72ms/step - accuracy: 0.5000 - loss: 0.8324 - val_accuracy: 0.0000e+00 - val_loss: 1.5438
Epoch 16/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.5000 - loss: 0.7427 - val_accuracy: 0.0000e+00 - val_loss: 1.5741
Epoch 17/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.5000 - loss: 0.6744 - val_accuracy: 0.0000e+00 - val_loss: 1.6041
Epoch 18/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 74ms/step - accuracy: 0.7500 - loss: 0.6228 - val_accuracy: 0.0000e+00 - val_loss: 1.6354
Epoch 19/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.5869 - val_accuracy: 0.0000e+00 - val_loss: 1.6685
Epoch 20/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 76ms/step - accuracy: 0.7500 - loss: 0.5638 - val_accuracy: 0.0000e+00 - val_loss: 1.7030
Epoch 21/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 78ms/step - accuracy: 0.7500 - loss: 0.5455 - val_accuracy: 0.0000e+00 - val_loss: 1.7384
Epoch 22/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.5243 - val_accuracy: 0.0000e+00 - val_loss: 1.7753
Epoch 23/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - accuracy: 0.7500 - loss: 0.4997 - val_accuracy: 0.0000e+00 - val_loss: 1.8145
Epoch 24/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 76ms/step - accuracy: 0.7500 - loss: 0.4755 - val_accuracy: 0.0000e+00 - val_loss: 1.8561
Epoch 25/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.4530 - val_accuracy: 0.0000e+00 - val_loss: 1.8997
Epoch 26/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.7500 - loss: 0.4261 - val_accuracy: 0.0000e+00 - val_loss: 1.9452
Epoch 27/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.7500 - loss: 0.3879 - val_accuracy: 0.0000e+00 - val_loss: 1.9938
Epoch 28/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.7500 - loss: 0.3382 - val_accuracy: 0.0000e+00 - val_loss: 2.0479
Epoch 29/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.7500 - loss: 0.2836 - val_accuracy: 0.0000e+00 - val_loss: 2.1101
Epoch 30/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.2328 - val_accuracy: 0.0000e+00 - val_loss: 2.1819
Epoch 31/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - accuracy: 0.7500 - loss: 0.1928 - val_accuracy: 0.0000e+00 - val_loss: 2.2612
Epoch 32/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.1648 - val_accuracy: 0.0000e+00 - val_loss: 2.3419
Epoch 33/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.7500 - loss: 0.1434 - val_accuracy: 0.0000e+00 - val_loss: 2.4173
Epoch 34/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 74ms/step - accuracy: 0.7500 - loss: 0.1236 - val_accuracy: 0.0000e+00 - val_loss: 2.4850
Epoch 35/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - accuracy: 0.7500 - loss: 0.1074 - val_accuracy: 0.0000e+00 - val_loss: 2.5471
Epoch 36/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 72ms/step - accuracy: 0.7500 - loss: 0.0976 - val_accuracy: 0.0000e+00 - val_loss: 2.6094
Epoch 37/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 74ms/step - accuracy: 0.7500 - loss: 0.0927 - val_accuracy: 0.0000e+00 - val_loss: 2.6780
Epoch 38/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.7500 - loss: 0.0881 - val_accuracy: 0.0000e+00 - val_loss: 2.7562
Epoch 39/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.0809 - val_accuracy: 0.0000e+00 - val_loss: 2.8428
Epoch 40/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.7500 - loss: 0.0725 - val_accuracy: 0.0000e+00 - val_loss: 2.9317
Epoch 41/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 72ms/step - accuracy: 0.7500 - loss: 0.0653 - val_accuracy: 0.0000e+00 - val_loss: 3.0141
Epoch 42/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - accuracy: 0.7500 - loss: 0.0599 - val_accuracy: 0.0000e+00 - val_loss: 3.0824
Epoch 43/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 76ms/step - accuracy: 0.7500 - loss: 0.0556 - val_accuracy: 0.0000e+00 - val_loss: 3.1343
Epoch 44/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.0516 - val_accuracy: 0.0000e+00 - val_loss: 3.1738
Epoch 45/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 73ms/step - accuracy: 0.7500 - loss: 0.0483 - val_accuracy: 0.0000e+00 - val_loss: 3.2077
Epoch 46/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.0461 - val_accuracy: 0.0000e+00 - val_loss: 3.2426
Epoch 47/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 72ms/step - accuracy: 0.7500 - loss: 0.0446 - val_accuracy: 0.0000e+00 - val_loss: 3.2823
Epoch 48/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 88ms/step - accuracy: 0.7500 - loss: 0.0428 - val_accuracy: 0.0000e+00 - val_loss: 3.3268
Epoch 49/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 75ms/step - accuracy: 0.7500 - loss: 0.0408 - val_accuracy: 0.0000e+00 - val_loss: 3.3730
Epoch 50/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - accuracy: 0.7500 - loss: 0.0387 - val_accuracy: 0.0000e+00 - val_loss: 3.4166
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 124ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 31ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 30ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 30ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step

Input: I love coding
Translated Text: liebe das Coden Coden ist

1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 31ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step

Input: This is a pen
Translated Text: ist ein Stift Coden ist

1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 30ms/step

Input: She sings well
Translated Text: ist ein Stift Coden ist

2025-07-18 21:37:13.862762: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-07-18 21:37:14.876939: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-07-18 21:37:17.263196: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.

RESULT:

Thus, the Machine Translation using the Encoder-Decoder model was successfully
executed and the output was verified.

Ex.No: 08 DATE:

IMAGE AUGMENTATION USING GANs


AIM:
To implement Image augmentation using GANs.

PROGRAM DESCRIPTION:
Image augmentation using Generative Adversarial Networks (GANs) is a technique that
leverages the power of GANs to generate new, realistic images that are variations of
existing images. This approach is commonly used in computer vision tasks, such as
image classification and object detection, to increase the diversity and size of training
datasets.
1. Generative Adversarial Networks (GANs): GANs consist of two neural networks: a
generator and a discriminator. The generator network takes random noise as
input and generates synthetic images. The discriminator network tries to
distinguish between real and synthetic images. During training, the generator
aims to produce images that are indistinguishable from real ones, while the
discriminator tries to get better at telling them apart.
2. Image Augmentation with GANs: You train a GAN on this dataset, where the
generator learns to generate images similar to those in the dataset, and the
discriminator learns to distinguish real images from generated ones.
3. Generating Augmented Images: Once the GAN is trained, you can use the
generator to create new, synthetic images. To augment an image from your
dataset, you feed it to the generator, and the generator produces a new image.
These generated images are typically variations of the original images,
introducing changes in aspects like style, lighting, perspective, or other factors
that the GAN has learned from the training data.

TERMINOLOGIES USED:

• Generator: A neural network that creates synthetic images from random noise.
• Discriminator: A neural network that classifies images as real or fake.
• Latent Space: The input noise vector space from which the generator creates data.
• Conv2DTranspose: A layer that upsamples images, used in generators.
• Binary Crossentropy: The loss function used to train both generator and
discriminator.

ALGORITHM:

1. Load the MNIST dataset.
2. Normalize and reshape the images.
3. Define the generator network.
4. Define the discriminator network.
5. Compile the discriminator.
6. Combine the generator and discriminator into a single GAN model.
7. Set the hyperparameters; each iteration of the training loop then performs the following steps:
8. Generate a batch of fake images.
9. Train the discriminator.
10. Train the generator.
11. Print the progress and save sample images.


PROGRAM:
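The training step described above can be sketched as follows. To keep the sketch self-contained and offline, random stand-in "real" images replace the MNIST batch (the lab loads MNIST and normalizes it to [-1, 1]); the network sizes and batch size are illustrative assumptions, and only one discriminator/generator step is shown rather than the full loop.

```python
import numpy as np
from tensorflow.keras import layers, models

latent_dim = 32  # size of the random noise vector (latent space)

# Generator: noise -> 28x28 image; Conv2DTranspose upsamples 7x7 -> 14x14 -> 28x28
generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(7 * 7 * 64, activation="relu"),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
])

# Discriminator: image -> probability that the image is real
discriminator = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 4, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# One illustrative training step on stand-in "real" images
batch = 16
real = np.random.uniform(-1, 1, (batch, 28, 28, 1)).astype("float32")
noise = np.random.normal(size=(batch, latent_dim)).astype("float32")
fake = generator.predict(noise, verbose=0)

# Discriminator step: real images labelled 1, generated images labelled 0
d_loss_real = discriminator.train_on_batch(real, np.ones((batch, 1)))
d_loss_fake = discriminator.train_on_batch(fake, np.zeros((batch, 1)))

# Generator step: freeze the discriminator, then train through the combined GAN,
# labelling generated images as "real" so the generator learns to fool it
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")
g_loss = gan.train_on_batch(noise, np.ones((batch, 1)))
```

In the full experiment, these steps repeat for many epochs over MNIST batches, with losses printed and sample images saved periodically.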


OUTPUT:

Epoch: 90 Discriminator Loss: 0.03508808836340904

Generator Loss: 1.736445483402349e-06

RESULT:

Thus, the Image augmentation using GANs was successfully executed and
the output was verified.

Ex.No: 10 DATE:

BUILD REGRESSION MODEL


AIM:
To implement Regression Models by using Python Programming.

PROGRAM DESCRIPTION:
Regression analysis is a supervised learning technique used to model the relationship
between a dependent variable (target) and one or more independent variables (features). It
is widely used for predicting continuous values, such as house prices, stock prices, and
sales forecasting.

TERMINOLOGIES USED:

1. Dependent and Independent Variables

Dependent Variable (Target Variable): The variable we want to predict (e.g., house price,
sales).

Independent Variables (Features): The variables used to make predictions (e.g., square
footage, number of rooms).

2. Mean Squared Error (MSE)

MSE is a common metric to evaluate the performance of a regression model. It measures
the average squared difference between actual and predicted values. A lower MSE
indicates a better model fit.

3. Coefficient of Determination (R² Score)

The R² score indicates how well the independent variables explain the variability of the
dependent variable.

• R² = 1 means the model perfectly predicts the target.
• R² close to 0 means the model performs poorly.

ALGORITHM:

1. Gather and prepare the data by handling missing values and errors.
2. Exploratory data analysis: Understand the relationships within the data through
visualizations and summary statistics.
3. Model selection and training: Choose a suitable regression algorithm and train it
on a portion of the data.
4. Model evaluation: Test the trained model on new data to assess its accuracy.
5. Deployment: Use the final, refined model to make predictions.

PROGRAM:
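A minimal sketch of the pipeline above using scikit-learn (an assumption; the exact library, dataset, and feature meanings in the lab listing may differ). Synthetic data with a known linear relationship stands in for a real dataset, and the model is evaluated on a held-out test split with MSE and R².

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Step 1-2: gather/prepare data; here, one synthetic feature with a known
# linear relationship plus Gaussian noise
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(100, 1))              # independent variable
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, 100)  # dependent variable

# Step 3: hold out a test split and train a linear regression model
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# Step 4: evaluate on unseen data
pred = model.predict(X_test)
mse = mean_squared_error(y_test, pred)  # average squared error
r2 = r2_score(y_test, pred)             # fraction of variance explained
```

Because this synthetic data really is linear, R² comes out close to 1 here; the negative R² in the output below indicates that the lab's model fit its (much harder) data worse than a constant mean predictor would.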


OUTPUT:

Mean Squared Error: 2.04


Coefficient of Determination (R²): -4.68

RESULT:

Thus, the program to build a Regression model was successfully executed, and the
output was verified.
