
Deep Learning

Lab Manual
Department of Computer Science and Engineering
The NorthCap University, Gurugram

DEEP LEARNING
Lab Manual CSL312

Dr. Poonam Chaudhary

Dr. Shaveta Arora

Mrs. Ruchika Saini

Department of Computer Science and Engineering

The NorthCap University, Gurugram-122001, India
Session 2025-26

Published by:

School of Engineering and Technology
Department of Computer Science & Engineering
The NorthCap University, Gurugram

• Laboratory Manual is for internal circulation only

© Copyright Reserved

No part of this Practical Record Book may be reproduced, used, or stored without prior permission of The NorthCap University.

Copying or facilitating copying of lab work comes under cheating and is considered use of unfair means. Students indulging in copying or facilitating copying shall be awarded zero marks for that experiment. Frequent cases of copying may lead to disciplinary action. Attendance in lab classes is mandatory.
Labs are open till 7 PM upon request. Students are encouraged to make full use of the labs beyond normal lab hours.

PREFACE
The Deep Learning Lab Manual is designed to meet the course and programme requirements of the NCU curriculum for B.Tech third-year students of the CSE branch. The aim of the lab work is to give students practical experience with basic lab skills. It also provides the space and scope for self-study so that students can come up with new and creative ideas.

The lab manual is written on a “teach yourself” pattern, and it is expected that students who come with proper preparation will be able to perform the experiments without any difficulty. A brief introduction to each experiment, with information about self-study material, is provided. The laboratory exercises will help students gain a deeper, technical understanding of Artificial Neural Networks, Convolutional Neural Networks, Autoencoders, Recurrent Neural Networks and Long Short-Term Memory (LSTM). The manual also includes the concepts of Transfer Learning along with implementation on the Google Cloud Platform, and discusses various ways to better understand the applications and real-time projects of deep learning with respect to the latest industry scenario. Students are expected to come thoroughly prepared for the lab. General discipline, safety guidelines and report writing are also discussed.

The lab manual is a part of the curriculum of The NorthCap University, Gurugram. A teacher's copy of the experimental results and answers to the questions is available as sample guidelines.

We hope that the lab manual will be useful to students of the Computer Science & Engineering branch, and the authors request readers to kindly forward their suggestions and constructive criticism for further improvement of the workbook.

The authors express deep gratitude to the Members of the Governing Body, NCU, for their encouragement and motivation.

Authors
The NorthCap University
Gurugram, India

CONTENTS

S. No. | Details
– | Syllabus
1 | Introduction
2 | Lab Requirements
3 | General Instructions
4 | List of Experiments
5 | List of Flip Experiments
6 | List of Projects
7 | Rubrics
8 | Annexure 1 (Format of Lab Report)
9 | Annexure 2 (Format of Project Report)



SYLLABUS
1. Department: Department of Computer Science and Engineering
2. Course Name: Deep Learning
3. Course Code: CSL312
4. L-T-P: 2-0-4
5. Credits: 4
6. Type of Course (check one): Programme Core / Programme Elective / Open Elective

7. Pre-requisite(s), if any: Basic understanding of the concepts of Machine Learning

8. Frequency of offering (check one): Odd / Even / Either semester / Every semester
9. Brief Syllabus:
Introduction to ANN, Building an ANN, Evaluating, Improving and Tuning the ANN, CNN Introduction-
Building a CNN, Evaluating, Improving and Tuning the CNN, RNN Introduction - Building a RNN
Evaluating, Improving and Tuning the RNN, Autoencoder Fundamentals, Building an Autoencoder,
Types of Autoencoders, LSTM, LSTM Applications, Introduction to NLP (Natural Language Processing),
Introduction to Text Classification, Deep Learning for NLP.

Total Lecture, Tutorial and Practical Hours for this course (taking 15 teaching weeks per semester): 90 hours
The class size is a maximum of 30 learners.
Lectures: 30 hours
Tutorials: 0 hours
Lab Work (Practice): 45 hours
10. Course Outcomes (COs)
On successful completion of this course students will be able to:

CO 1: Define and apply concepts of Artificial Neural Networks on real-world data. Students will also be able to differentiate deep learning from shallow learning. [L1, L3]

CO 2: Describe, implement and analyze Convolutional Neural Networks for image datasets. Students will be able to describe the concepts of Convolutional Neural Networks and their architecture, implement the model for predictive analysis, and analyze its performance on real-world datasets. [L2, L3, L4]

CO 3: Identify, describe, apply and determine Natural Language Processing techniques for textual datasets. Students will be able to identify the applications of natural language processing, describe the various steps involved in the natural language processing process, and determine the best process for handling textual data for real-world applications.

CO 4: Explain, apply and compare various sequential models for time-series data. Students will be able to explain the requirement of sequential models for handling time-series data, apply the models for prediction, and compare their performance on various applications. [L2, L3, L4]

CO 5: Characterize, use and categorize various Autoencoders and Generative Models for unsupervised deep learning. Students will be able to characterize different autoencoders and Generative Models and their usage, and put them into various categories for unsupervised learning. [L2, L3, L4]
11. UNIT WISE DETAILS No. of Units: 5

Unit Number: 1 Title: Introduction to ANN and Deep Learning No. of hours: 10

Content Summary:
Overview of Machine Learning and Neural Networks. Building an ANN, Activation Functions,
Evaluating, Improving and Tuning the ANN. Loss functions, Gradient Descent, Back propagation,
Hyperparameter tuning. Introduction to Deep Learning, Optimisers, Momentum.

Unit Number: 2 Title: Deep Learning for Image Processing No. of hours: 10

Content Summary:
Basics of Image Processing, Introduction to Tensorflow and Keras, Introduction to CNN, Building a
CNN: Convolution layers, Activation functions, Pooling, Flattening, Full Connection, Evaluating, Tuning
the CNN, Dropout to prevent Overfitting, CNN applications, Transfer Learning models.

Unit Number: 3 Title: Natural Language Processing No. of hours: 5

Content Summary:
Introduction to NLP (Natural Language Processing), NLTK and Spacy basics, Tokenization, Stemming,
Lemmatization, Stop Words, Bag of Words and Bag of N grams, Word Embeddings.

Unit Number: 4 Title: Models for Sequential Analysis No. of hours: 10

Content Summary:
Introduction – Recurrent Neural Network (RNN), Vanishing Gradient, RNN limitations, Introduction to
Long Short-Term Memory (LSTM), LSTM Variations, Gated Recurrent Neural Networks (GRU),
Application of these architectures to natural language processing and time series.
viii

Deep Learning Lab Manual


(CSL312)
2025-26

Unit Number: 5 Title: Autoencoders and Generative AI No. of hours: 10

Content Summary:
Autoencoder, Training an Autoencoder. Types of Autoencoders. Introduction to Generative AI,
Differences between generative and discriminative models, Generative Models: GANs, VAEs,
Transformers. Emerging Applications of Generative AI.
12. Brief Description of Self-learning components by students (through books/resource
material etc.):
Supplementary MOOC Courses
1. https://onlinecourses.nptel.ac.in/noc21_cs35/preview
2. https://onlinecourses.nptel.ac.in/noc21_cs05/preview

13. Advanced Learning Components


1. Various Real-time Projects using ANN, CNN and RNN
2. Project-based Research Paper
3. Presentation on different topics of Deep Learning
14. Books Recommended :

Text Books:
 Francois Chollet, Deep Learning with Python, Manning Publications, First Edition, 2018
 Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT Press, First Edition, 2016

Reference Books:
 Stephen Boyd, Convex Optimization, Cambridge University Press, First Edition, 2015
 David Foster, Generative Deep Learning: Teaching Machines to Dream, O'Reilly Media

Reference Websites: (nptel, swayam, coursera, edx, udemy, lms, official documentation
weblink)
 https://medium.com/intro-to-artificial-intelligence/deep-learning-series-1-intro-to-deep-learning-abb1780ee20
 https://towardsdatascience.com/introducing-deep-learning-and-neural-networks-deep-learning-for-rookies-1-bd68f9cf5883
 https://www.coursera.org/learn/neural-networks-deep-learning
 www.lms.ncuindia.edu/lms
 https://www.coursera.org/learn/introduction-to-generative-ai#modules
 https://www.v7labs.com/blog/generative-ai-guide#h2

ebooks:
 https://www.pdfdrive.com/introduction-to-deep-learning-using-r-a-step-by-step-guide-to-learning-and-implementing-deep-learning-models-using-r-e158252417.html
 https://www.pdfdrive.com/learn-keras-for-deep-neural-networks-a-fast-track-approach-to-modern-deep-learning-with-python-e185770502.html

Interview/Placement related commonly asked questions:


 https://www.analyticsvidhya.com/blog/2020/04/comprehensive-popular-deep-learning-interview-questions-answers/
 https://towardsdatascience.com/50-deep-learning-interview-questions-part-1-2-8bbc8a00ec61
 edureka.co/blog/interview-questions/deep-learning-interview-questions/
 https://www.javatpoint.com/deep-learning-interview-questions

1. INTRODUCTION

That ‘learning is a continuous process’ cannot be overemphasized. The theoretical knowledge gained during lecture sessions needs to be strengthened through practical experimentation; thus, practical work forms an integral part of the learning process.

The purpose of conducting experiments can be stated as follows:

1. Understand the concepts of neural networks and deep learning.

2. Implement Convolutional Neural Networks.

3. Implement other deep learning architectures: Autoencoders, Recurrent Neural Networks, LSTM and its variations.

4. Implement Natural Language Processing, Text Classification and Deep Learning for NLP.

5. Understand other deep learning topics such as the transfer learning approach for various areas, the Google Cloud Platform and Cloud AutoML.

2. LAB REQUIREMENTS

S. No. | Requirements | Details
1 | Software Requirements | Anaconda Navigator: Jupyter Notebook
2 | Operating System | Any operating system
3 | Hardware Requirements | 8 GB RAM (recommended), 2.60 GHz processor (recommended)
4 | Required Bandwidth | NA

3. GENERAL INSTRUCTIONS

a. General discipline in the lab

 Students must turn up in time and contact concerned faculty for the experiment
they are supposed to perform.
 Students will not be allowed to enter late in the lab.
 Students will not leave the class till the period is over.
 Students should come prepared for their experiment.
 Experimental results should be entered in the lab report format and certified/signed
by concerned faculty/ lab Instructor.
 Students must get the connection of the hardware setup verified before switching on
the power supply.
 Students should maintain silence while performing the experiments. If any necessity
arises for discussion amongst them, they should discuss with a very low pitch
without disturbing the adjacent groups.
 Violating the above code of conduct may attract disciplinary action.
 Damaging lab equipment or removing any component from the lab may invite
penalties and strict disciplinary action.

b. Attendance

 Attendance in the lab class is compulsory.


 Students should not attend a different lab group/section other than the one assigned
at the beginning of the session.
 On account of illness or family problems, if a student misses his/her lab classes, he/she may be assigned a different group to make up the losses, in consultation with the concerned faculty/lab instructor, or he/she may work in the lab during spare/extra hours to complete the experiment. No attendance will be granted for such cases.

c. Preparation and Performance

 Students should come to the lab thoroughly prepared on the experiments they are
assigned to perform on that day. Brief introduction to each experiment with
information about self-study reference is provided on LMS.
 Students must bring the lab report to each practical class, with written records of the last experiments performed, complete in all respects.

 Each student is required to write a complete report of the experiment he/she has performed and bring it to the lab class for evaluation in the next working lab. Sufficient space in the workbook is provided for independent writing of theory, observations, calculations and conclusions.
 A zero-tolerance policy applies to copying/plagiarism. Zero marks will be awarded if work is found to be copied; repeat offences will lead to disciplinary action.
 Refer to Annexure 1 for the Lab Report Format.

4. LIST OF EXPERIMENTS

Sr. No. | Title of the Experiment | Software Required | Unit Covered | Time Required
1 | To explore the basic features of the TensorFlow and Keras packages. | Python | 1 | 1 hr
2 | To build an ANN model to convert temperature in degrees Celsius to Fahrenheit. | Python | 1 | 1 hr
3 | To build an ANN model for a regression problem on a house prediction dataset. | Python | 1 | 1 hr
4 | To build an ANN model for a classification problem on breast cancer classification to see the effect of (a) early stopping and (b) dropouts. | Python | 1 | 1 hr
5 | To build an advanced ANN classification model for churn modelling data with (a) cross validation, (b) grid search and (c) checkpoint. | Python | 1 | 1 hr
6 | To perform Convolutional Neural Networks for image classification on the MNIST dataset. | Python | 2 | 1 hr
7 | To create a CNN model with a dataset containing images of cats and dogs for image classification. | Python | 2 | 1 hr
8 | To build an image classifier with Keras and Convolutional Neural Networks for the Fashion MNIST dataset. | Python | 2 | 2 hrs
9 | To train a CNN model to classify images from the CIFAR-10 database. | Python | 2 | 2 hrs
10 | To implement transfer learning using the pre-trained model (VGG16) on an image dataset. | Python | 2 | 2 hrs
11 | (a) To perform tokenization and stemming on text data using NLTK; (b) to perform lemmatization and remove stopwords on text data using NLTK; (c) to perform lemmatization and remove stopwords on text data using spaCy. | Python | 4 | 2 hrs
12 | To create Bag-of-Words (BOW) and Bag-of-n-grams using (a) bag-of-words with the Count Vectorizer, (b) bag-of-n-grams with the Count Vectorizer and (c) bag-of-words with the Tf-Idf Vectorizer. | Python | 4 | 2 hrs
13 | To create a recurrent neural network model on an alcohol sales dataset. | Python | 3 | 2 hrs
14 | To implement an RNN model for stock price prediction. | Python | 3 | 1 hr
15 | To create an RNN model and predict miles travelled by vehicles. | Python | 3 | 1 hr
16 | To use LSTM to predict the future weather of a city using weather data from several other cities. | Python | 3 | 1 hr
17 | To implement autoencoders for dimensionality reduction. | Python | – | 2 hrs
18 | Using the MNIST dataset, improve an autoencoder's performance using convolutional layers. | Python | 3 | 2 hrs
19 | To build and train a GAN for generating handwritten digits: train the GAN on the MNIST dataset and generate digit images that look like handwritten digit images. | Python | 5 | 2 hrs
20 | To load a pre-trained Large Language Model (LLM), the GPT-2 model (originally invented by OpenAI), fine-tune it to a specific text style, and generate text based on users' input. | Python | 5 | 2 hrs

Value Added Experiments

Sr. No. | Title of the Experiment | Software Required | Unit Covered | Time Required
1 | To vectorize text using different hashing techniques. | Python | 2 | 4 hrs
2 | To apply regression and do prediction on an insurance dataset. | Python | 2 | 2 hrs
3 | Project – Autoencoders: Denoise Image Data with Convolution. | Python | 4 | 4 hrs
4 | Project – Simpsons Character Classifier. | Python | 3, 4, 5 | 4 hrs
5 | Unsupervised Autoencoder Anomaly Detection with Shapley Explanations. | Python | 4 | 4 hrs
6 | Build your own miniature GPT. | Python | 4 | 4 hrs

5. LIST OF FLIP EXPERIMENTS

Exp. No. | Title of the Experiment
1 | To design and implement a backpropagation application using a neural network.
2 | To study the concept of backpropagation and implement it for wine classification.
3 | To study the concept of pattern matching and implement it for crab classification.
4 | To study the ImageNet, GoogLeNet and ResNet convolutional neural networks.
5 | To write a program to implement classification of linearly separable data with a perceptron.
6 | To study the use of Long Short-Term Memory / Gated Recurrent Units to predict stock prices based on historic data.

6. LIST OF PROJECTS

Sr No. Project Title

1. Build your own emoji using deep learning using the following dataset or
some other dataset.

2. The MNIST handwritten digit classification problem is a standard dataset


used in computer vision and deep learning. It is a dataset of 60,000 small
square 28×28 pixel grayscale images of handwritten single digits between 0
and 9. The task is to classify a given image of a handwritten digit into one of
10 classes representing integer values from 0 to 9, inclusively.
Students should be able to work on the following:
1. Importing MNIST Handwritten Digit Classification Dataset
2. Modelling Evaluation Methodology
3. How to Develop a Baseline Model
4. How to Develop an Improved Model
5. How to Finalize the Model and Make Prediction

3. Using Convolutional Neural Networks, develop an accurate method for breast cancer classification.

4. In this project, build a chatbot using deep learning techniques. The chatbot will be trained on a dataset which contains categories (intents), patterns and responses. Students should use a special recurrent neural network (LSTM) to classify which category the user's message belongs to, and then give a random response from the list of responses.

5. Create an algorithm to distinguish dogs from cats. Students can refer to this dataset or take any other dataset. In this Keras project, they will discover how to build and train a convolutional neural network for classifying images of cats and dogs.

6. Image Classification with CIFAR-10 Dataset

7. Diabetes prediction using PIMA India Diabetes Dataset

8. Multivariate time series prediction with LSTM



7. RUBRICS

Evaluation Scheme

Type of Course: Theory + Practical (L-T-P / L-0-P)

Particular | Allotted Range of Marks
Minor Test | 15%
Major Test | 35%
Continuous Evaluation through Class Tests/Practice/Assignments/Presentation/Quiz | 10%
Online Quiz | 5%
Lab Work | 35%

Pass Criteria: Must secure 30% marks out of the combined marks of the Major Test plus Minor Test, with overall 40% marks in total.

Exams | Marks
Major Exam | 70
Internal Assessment (50):
  Quiz 1: CO1 | 10
  Quiz 2: CO2 | 10
  Quiz 3: CO3 | 10
  Class Test: CO4 | 10
  Quiz 4: CO5 | 10
Project-based Research Paper (80) | Refer to the Gantt Chart for Project*

*Gantt Chart for Project

Task/Phase | Timeline | Marks
1. Topic Selection | Weeks 1-2 | 5
2. Literature Review and Gap Identification | Weeks 3-6 | 10
3. Problem Statement Finalization | Weeks 3-5 | 5
4. Methodology Design | Weeks 6-8 | 10
5. Data Collection/Preparation | Weeks 6-9 | 5
6. Model Development | Weeks 9-12 | 10
7. Experimentation & Results | Weeks 11-13 | 10
8. Paper Writing & Presentation | Weeks 13-14 | 20
9. Viva & Final Submission | Week 15 | 5
Total Marks | | 80

    | PO1 | PO2 | PO3 | PO4 | PO5 | PO6 | PO7 | PO8 | PO9 | PO10 | PO11 | PO12 | PSO1 | PSO2 | PSO3
CO1 | 2 | 3 | 1 | 3 | 1 | 1 | 1 | - | - | - | - | 2 | 2 | 1 | 1
CO2 | 2 | 3 | 3 | 3 | 2 | 2 | 2 | 1 | - | - | - | 2 | 3 | 1 | 2
CO3 | 2 | 3 | 3 | 3 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 3 | 3 | 1 | 2
CO4 | 2 | 3 | 3 | 3 | 2 | 2 | 2 | 1 | 2 | 2 | 2 | 3 | 3 | 1 | 2
CO5 | 2 | 3 | 3 | 3 | 2 | 2 | 1 | 2 | 2 | 2 | 2 | 3 | 3 | 1 | 2

Annexure 1

DEEP LEARNING
(CSL 312)

LAB PRACTICAL REPORT

Faculty name: Mrs. Ruchika Saini
Student name: Tanveer Singh

Roll no: 21CSU124

Semester: 7

Group: DS – VII – DB

Department of Computer Science and Engineering


The NorthCap University, Gurugram-122001, India
Session 2024-25

EXPERIMENT NO. 1

Student Name and Roll Number: Tanveer Singh Bindra & 21CSU124
Semester /Section: 7th semester / DS-B
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

To understand the basic features of TensorFlow and Keras packages and to know how to build models using
deep learning.
Outcome:

Students will familiarize themselves with the concepts of the TensorFlow and Keras packages, which will help them in building deep learning models.
Problem Statement:

To explore the basic features of TensorFlow and Keras packages.


Background Study:

TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible
ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML
and developers easily build and deploy ML powered applications. Keras is a deep learning API written in
Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on
enabling fast experimentation.
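
As an illustration (not part of the original manual), the following minimal sketch shows the kind of exploration this experiment expects: creating tensors, running eager operations, and building a tiny Sequential model. The layer sizes are arbitrary choices.

```python
# Illustrative sketch only: exploring a few basic TensorFlow/Keras features.
import tensorflow as tf

# Tensors and eager operations
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.ones_like(a)
print(tf.matmul(a, b))        # matrix multiplication
print(tf.reduce_mean(a))      # simple aggregation

# A tiny Keras model built with the Sequential API (layer sizes are arbitrary)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()               # inspect layers and parameter counts
```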

Question Bank:

1. What are the benefits of TensorFlow over other libraries? Explain.


2. Which client languages are supported in TensorFlow?
3. What are the loaders of TensorFlow?
4. Can word embedding be used in TensorFlow? Name two models used in
word embedding?
5. Is keras a library?
6. What is flatten layer in keras?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

1. 🧠 Benefits of TensorFlow over Other Libraries


TensorFlow offers several advantages that make it stand out among machine learning frameworks:
 Scalability: Easily scales across CPUs, GPUs, and TPUs for distributed training.
 Visualization: TensorBoard provides powerful tools for graph visualization and debugging.
 Flexibility: Supports both low-level and high-level APIs, allowing fine control or rapid prototyping.
 Cross-platform Deployment: Models can be deployed on mobile, web, and cloud platforms.
 Community Support: Backed by Google and a large developer community, ensuring frequent updates and rich
documentation.
 Language Compatibility: Works with Python, C++, JavaScript, and more.

2. Client Languages Supported in TensorFlow


TensorFlow supports multiple client languages for building and training models:
 Python (primary and most feature-rich)
 C++
 JavaScript (via TensorFlow.js)
 Java
 Go
These allow TensorFlow to be used across diverse environments, from web apps to embedded systems.

3. 📦 Loaders in TensorFlow
TensorFlow provides several data loaders and utilities for handling datasets:
 tf.data.Dataset API: For building input pipelines from arrays, files, or streaming data.
 tf.keras.utils.image_dataset_from_directory: Loads image data from folders.
 tf.keras.utils.get_file: Downloads and caches files.
 tensorflow_datasets (TFDS): Offers ready-to-use datasets like MNIST, CIFAR, IMDB.
These loaders simplify preprocessing and batching for training models.

4. 🔤 Word Embedding in TensorFlow


Yes, TensorFlow supports word embeddings, which convert words into dense vector representations.
Two popular models used for word embedding:
 Word2Vec: Captures semantic relationships using skip-gram or CBOW architectures.
 GloVe (Global Vectors for Word Representation): Based on matrix factorization of word co-occurrence
statistics.
TensorFlow also allows training custom embeddings using the Embedding layer in Keras.

5. 📚 Is Keras a Library?
Yes, Keras is a high-level deep learning library that runs on top of TensorFlow. It simplifies model creation with
intuitive APIs and is now tightly integrated into TensorFlow as tf.keras.

6. 🧱 What is Flatten Layer in Keras?


The Flatten layer in Keras converts a multi-dimensional tensor into a 1D vector. It’s typically used to transition from
convolutional layers (which output 2D feature maps) to dense layers in a neural network.
Example:
model.add(Flatten())
This is essential before feeding data into fully connected layers.

EXPERIMENT NO. 2

Student Name and Roll Number: Tanveer Singh Bindra and 21CSU124
Semester /Section: 7th semester / DS-B
Link to Code:
Date:

Faculty Signature:

Marks

Objective:

1. To study Artificial Neural Networks.


2. Build models using Artificial Neural Networks.
Outcome:

Students will be able to understand how to build models using Artificial Neural Networks.
Problem Statement:

Using Artificial Neural Networks implement the following:


1. To build an ANN model to convert temperature in degrees Celsius to Fahrenheit.
2. To build an ANN model for a regression problem on a house prediction dataset.
3. To build an ANN model for a classification problem on breast cancer classification.
Background Study:

An artificial neural network (ANN) is a computing system designed to simulate the way the human brain analyzes and processes information. It is the foundation of artificial intelligence (AI) and solves problems that would prove impossible or difficult by human or statistical standards.
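
A minimal sketch of the first task (Celsius to Fahrenheit) is given below; the single Dense neuron and the Adam optimizer are illustrative choices, not the prescribed solution.

```python
# Hedged sketch: a single Dense neuron learning the Celsius-to-Fahrenheit mapping.
import numpy as np
import tensorflow as tf

celsius = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float).reshape(-1, 1)
fahrenheit = celsius * 1.8 + 32                      # ground-truth targets

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mean_squared_error")
model.fit(celsius, fahrenheit, epochs=500, verbose=0)

print(model.predict(np.array([[100.0]])))            # should be close to 212
print(model.layers[0].get_weights())                 # weight ≈ 1.8, bias ≈ 32
```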

Question Bank:

1. Explain Biological Neural Network and Artificial Neural network?


2. Define Activation function?
3. How is ANN useful in making a machine intelligent?

4. Explain Supervised and Un-Supervised learning?


5. What is a bias?
6. How can we help artificial neurons in learning?
7. How is ANN useful in making a machine intelligent?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

A detailed answer key for the question bank on neural networks and machine learning fundamentals:

1. 🧠 Biological vs Artificial Neural Networks

Feature | Biological Neural Network | Artificial Neural Network (ANN)
Origin | Found in human and animal brains | Designed by humans for computational tasks
Structure | Neurons connected via synapses | Nodes (neurons) connected via weights
Signal Transmission | Electrochemical signals | Numerical values and mathematical functions
Learning Mechanism | Neuroplasticity, experience-based adaptation | Training via algorithms like backpropagation
Purpose | Cognitive functions like thinking, sensing | Tasks like classification, prediction, detection

2. ⚡ Activation Function

An activation function determines whether a neuron should be activated or not. It introduces non-
linearity into the network, allowing it to learn complex patterns.

Common types:

 Sigmoid: Outputs between 0 and 1

 ReLU (Rectified Linear Unit): Outputs 0 for negative inputs, input itself for positive

 Tanh: Outputs between -1 and 1

3. 🤖 How ANN Makes Machines Intelligent

Artificial Neural Networks help machines:

 Learn from data: Mimic human learning by adjusting weights

 Recognize patterns: In images, speech, or text

 Generalize: Make predictions on unseen data

 Adapt: Improve performance with more training

They enable tasks like facial recognition, language translation, and autonomous driving.

4. 🧩 Supervised vs Unsupervised Learning

Type | Supervised Learning | Unsupervised Learning
Data | Labelled (input-output pairs) | Unlabelled (only input)
Goal | Learn mapping from input to output | Discover hidden patterns or structure
Examples | Classification, regression | Clustering, dimensionality reduction
Algorithms | Decision Trees, SVM, Neural Networks | K-Means, PCA, Autoencoders

5. ⚖️What is a Bias?

In ANN, bias is an additional parameter added to the input of a neuron. It helps:

 Shift the activation function

 Improve model flexibility

 Allow better fitting of data

Mathematically, it’s like the intercept in a linear equation.

6. 🧠 Helping Artificial Neurons Learn

We help artificial neurons learn by:

 Training with data: Feeding examples and adjusting weights

 Using loss functions: Measure error between prediction and truth

 Applying optimization algorithms: Like gradient descent to minimize error

 Regularization: Prevent overfitting and improve generalization

7. 🔁 (Duplicate) How ANN Makes Machines Intelligent

This is a repeat of question 3. You can either remove it or rephrase it to ask about real-world applications
of ANN, such as:

 Fraud detection

 Medical diagnosis

 Natural language processing



EXPERIMENT NO. 3

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To build ANN model using breast cancer dataset.


2. To understand the concept of early Stopping and dropouts
Outcome:

Students will familiarize themselves with breast cancer classification and build ANN models using this dataset.
Problem Statement:

To build an ANN model for classification problem on breast cancer classification to see the effect of:
1. Early Stopping
2. Dropouts
Background Study:

Dropout
A single model can be used to simulate having a large number of different network architectures by
randomly dropping out nodes during training. This is called dropout and offers a very computationally
cheap and remarkably effective regularization method to reduce overfitting and improve generalization
error in deep neural networks of all kinds.

Early Stopping
A major challenge in training neural networks is how long to train them.

Too little training will mean that the model will underfit the train and the test sets. Too much training will
mean that the model will overfit the training dataset and have poor performance on the test set.

A compromise is to train on the training dataset but to stop training at the point when performance on
a validation dataset starts to degrade. This simple, effective, and widely used approach to training
neural networks is called early stopping.
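
The sketch below illustrates both ideas on the scikit-learn breast cancer data, which is an assumed stand-in for the lab dataset; the layer sizes, dropout rate and patience value are arbitrary choices.

```python
# Hedged sketch: Dropout + EarlyStopping on the scikit-learn breast cancer data.
import tensorflow as tf
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train, X_val = scaler.fit_transform(X_train), scaler.transform(X_val)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(30, activation="relu", input_shape=(X_train.shape[1],)),
    tf.keras.layers.Dropout(0.5),                    # drop 50% of units each update
    tf.keras.layers.Dense(15, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop when validation loss stops improving and keep the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=25,
                                              restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=600, callbacks=[early_stop], verbose=0)
```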
Question Bank:

1. What is a good patience for early stopping?



2. Which loss is minimum for early stopping?


3. What are the two main benefits of early stopping?
4. How does dropout work in neural networks?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 4

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To build an ANN classification model with churn modelling data.


2. To understand and apply the k-fold cross-validation, grid search and checkpoint methods.
Outcome:

Students will be able to build an ANN classification model on churn modelling data using the k-fold cross-validation, grid search and checkpoint methods.
Problem Statement:

To build an advance ANN classification model for churn modelling data with:
1. K-fold cross validation
2. Grid Search
3. Checkpoint
Background Study:

Cross validation is used to evaluate the performance of the model with the current combination of
hyperparameters. The process of K-Fold Cross-Validation is straightforward. You divide the data into K
folds. Out of the K folds, K-1 sets are used for training while the remaining set is used for testing. The
algorithm is trained and tested K times, each time a new set is used as testing set while remaining sets are
used for training. Finally, the result of the K-Fold Cross-Validation is the average of the results obtained
on each set.

Grid-search is used to find the optimal hyperparameters of a model which results in the most ‘accurate’
predictions.

Checkpointing Neural Network Models


Application checkpointing is a fault tolerance technique for long running processes.

It is an approach where a snapshot of the state of the system is taken in case of system failure. If there is a
problem, not all is lost. The checkpoint may be used directly, or used as the starting point for a new run,
picking up where it left off.

When training deep learning models, the checkpoint is the weights of the model. These weights can be used
to make predictions as is, or used as the basis for ongoing training.
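
A hedged sketch of k-fold cross-validation combined with checkpointing in Keras follows. The random arrays are placeholders standing in for the churn-modelling dataset, and the model size is arbitrary; a grid search can be layered on top by looping over candidate hyperparameter values, as noted in the final comment.

```python
# Hedged sketch: 5-fold cross-validation of a small Keras classifier with a
# ModelCheckpoint callback; random arrays stand in for the churn-modelling data.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

def build_model(n_features, units=16):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(units, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

X = np.random.rand(1000, 10).astype("float32")       # placeholder features
y = (np.random.rand(1000) > 0.5).astype("float32")   # placeholder labels

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    model = build_model(X.shape[1])
    # Save only the best weights (lowest validation loss) seen during this fold.
    ckpt = tf.keras.callbacks.ModelCheckpoint("best_fold.weights.h5",
                                              monitor="val_loss",
                                              save_best_only=True,
                                              save_weights_only=True)
    model.fit(X[train_idx], y[train_idx], validation_split=0.1,
              epochs=20, callbacks=[ckpt], verbose=0)
    scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])

print("Mean CV accuracy:", np.mean(scores))
# A grid search can be layered on top by repeating this loop for each candidate
# value of `units` (or other hyperparameters) and keeping the best mean score.
```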
Question Bank:

1. What are checkpoints in deep learning?


2. What is a checkpoint in TensorFlow?
3. What is the cross validation in machine learning?
4. What is meant by cross validation?
5. How do you use model checkpoints?
6. How do I load checkpoints in TensorFlow?
7. What is grid search technique?
8. What is a grid search in machine learning?
9. How do you do a grid search in keras?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 5

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand the concept of Convolutional Neural Networks.

2. To build CNN models on the MNIST dataset.
Outcome:

Students will be able to construct a CNN classification model for image classification.
Problem Statement:

To apply Convolutional Neural Networks for image classification on the MNIST dataset.


Background Study:

The MNIST handwritten digit classification problem is a standard dataset used in computer vision and deep
learning.
A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and differentiate one from the other. A CNN uses multilayer perceptrons to do its computational work. CNNs need relatively little pre-processing compared to other image classification algorithms; the network learns the filters that in traditional algorithms were hand-engineered. So, for image processing tasks, CNNs are the best-suited option.
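
A possible minimal CNN for MNIST is sketched below; the filter counts and number of epochs are illustrative choices, not the prescribed architecture.

```python
# Hedged sketch: a small CNN for MNIST digit classification in Keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0     # add a channel axis and scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```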

Question Bank:

1. What is convolutional neural network?


2. What is convolutional neural network and how it works?

3. How does CNN work?


4. What are convolutional neural networks used for?
5. How do you create a CNN for Mnist handwritten digit classification?
6. What is CNN Mnist?
7. What is the best accuracy on Mnist dataset?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 6

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand how to create an image classification model based on a Convolutional Neural Network (CNN) step by step.
2. To train a convolutional neural network for classifying images of cats and dogs.
Outcome:

Students would be able to build and train a convolutional neural network for classifying images of cats and dogs.
Problem Statement:

To create CNN model with dataset containing images of cats and dogs for image classification.
Background Study:

Cats vs Dogs classification is a fundamental Deep Learning project for beginners. We are given a set of
dog and cat images. The task is to build a model to predict the category of an animal: dog or cat?
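
One possible way to set this up, assuming the images are stored in class-named sub-folders under `data/train` (a hypothetical path), is sketched below; Keras infers the labels from the folder names.

```python
# Hedged sketch: labels are inferred from sub-folder names under data/train
# (the directory layout and image size are assumptions).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(150, 150), batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(150, 150), batch_size=32,
    validation_split=0.2, subset="validation", seed=42)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(150, 150, 3)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary output: cat vs dog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```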

Question Bank:

1. How do you classify a dog and a cat?


2. Which type of algorithm is used to differentiate between an image into a cat vs dog image?
3. How do I use CNN photo classification?
4. What is the total number of training images for cats and dogs in the training set?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 7

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand and implement LeNet CNN Architecture.


2. Train MNIST Dataset on LeNet Model.
Outcome:

Students would be able to build and train a LeNet model on the MNIST dataset.
Problem Statement:

To create a LeNet model on the MNIST dataset.


Background Study:

The LeNet architecture was first introduced by LeCun et al. in their 1998 paper, Gradient-Based Learning
Applied to Document Recognition. As the name of the paper suggests, the authors’ implementation of LeNet
was used primarily for OCR and character recognition in documents.

The LeNet architecture is straightforward and small, (in terms of memory footprint), making it perfect for
teaching the basics of CNNs — it can even run on the CPU (if your system does not have a suitable
GPU), making it a great “first CNN”.

However, if you do have GPU support and can access your GPU via Keras, you will enjoy extremely fast
training times (in the order of 3-10 seconds per epoch, depending on your GPU).
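
A LeNet-5 style model can be sketched in Keras as follows; the tanh activations and average pooling follow the spirit of the original design, and the exact sizes are illustrative rather than a definitive reproduction.

```python
# Hedged sketch of a LeNet-5 style network; tanh and average pooling follow the
# spirit of the original paper, and the exact sizes are illustrative.
import tensorflow as tf

def build_lenet(input_shape=(28, 28, 1), num_classes=10):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(6, (5, 5), activation="tanh", padding="same",
                               input_shape=input_shape),
        tf.keras.layers.AveragePooling2D((2, 2)),
        tf.keras.layers.Conv2D(16, (5, 5), activation="tanh"),
        tf.keras.layers.AveragePooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120, activation="tanh"),
        tf.keras.layers.Dense(84, activation="tanh"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_lenet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # trainable on MNIST with model.fit(x_train, y_train, ...)
```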
Question Bank:

1. What is CNN LeNet?


2. What is LeNet used for?
3. How many layers does LeNet-5 have?
4. How do you use LeNet in keras?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 8

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand and implement the AlexNet model.
2. To train the CIFAR-10 dataset on the AlexNet model.
Outcome:

Students would be able to build and train an AlexNet model on the CIFAR-10 dataset.
Problem Statement:

To create an AlexNet model on the CIFAR-10 dataset.


Background Study:

CIFAR-10 dataset consists of 60,000 RGB images of size 32x32. The images belong to objects of 10 classes
such as frogs, horses, ships, trucks etc. The dataset is divided into 50,000 training images and 10,000 testing
images. Among the training images, we used 49,000 images for training and 1000 images for validation.

AlexNet was designed by Alex Krizhevsky together with Ilya Sutskever and his supervisor Geoffrey E. Hinton, and it won the 2012 ImageNet competition. It was after that year that more and deeper neural networks were proposed, such as VGG and GoogLeNet. The authors trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes. The official model reaches a top-1 accuracy of 57.1% and a top-5 accuracy of 80.2%, which is already quite outstanding compared with traditional machine learning classification algorithms.
Question Bank:

1. How do you implement AlexNet?


2. How do you use AlexNet in TensorFlow?
3. How do I load a cifar10 dataset?
4. How do you make a CNN model from scratch?

Student Work Area

Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 9

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

The objective is to identify (predict) different fashion products from the given images using a CNN model
on Fashion-MNIST Dataset.
Outcome:

Students would be able to build and train CNN model for the Fashion MNIST dataset.
Problem Statement:

To build an image classifier with Keras and Convolutional Neural Networks for the Fashion MNIST dataset.
Background Study:

The Fashion-MNIST clothing classification problem is a new standard dataset used in computer vision and
deep learning. Fashion-MNIST is a dataset of Zalando’s fashion article images —consisting of a training set
of 60,000 examples and a test set of 10,000 examples. Each instance is a 28×28 grayscale image, associated
with a label.
Question Bank:

1. How do you load fashion Mnist dataset using keras?


2. How do I create a CNN image classification?
3. How do you use Mnist fashion?
4. Which CNN model is best for image classification?
Student Work Area
Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 10

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

To implement a deep learning model for image classification using CIFAR-10 dataset
Outcome:

Students would be able to do classification of images from the CIFAR-10 dataset.


Problem Statement:

To train a CNN model to classify images from the CIFAR-10 database.


Background Study:

The CIFAR-10 dataset is commonly used in deep learning for testing image classification models. It has 60,000 colour images comprising 10 different classes. The image size is 32x32, and the dataset has 50,000 training images and 10,000 test images.
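
A minimal sketch of loading CIFAR-10 from Keras and normalizing it is shown below; scaling to [0, 1] is one common choice, and per-channel standardisation is an alternative.

```python
# Hedged sketch: loading CIFAR-10 from Keras and scaling pixel values to [0, 1]
# (per-channel standardisation is an alternative normalisation).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
print(x_train.shape, y_train.shape)   # (50000, 32, 32, 3) (50000, 1)
```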
Question Bank:

1. How do you normalize a Cifar-10 dataset?


2. How do you train the image classification model?
3. Which CNN architecture is best for image classification?
4. What highest accuracy is achieved by the model?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 11

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand the concept of autoencoders.


2. To use autoencoders for dimensionality reduction.
Outcome:

Students would be able to reduce dimensionality of dataset using autoencoders.


Problem Statement:

To implement autoencoders for dimensionality reduction.


Background Study:

Autoencoder is a type of neural network where the output layer has the same dimensionality as the input
layer. In simpler words, the number of output units in the output layer is equal to the number of input units
in the input layer. An autoencoder replicates the data from the input to the output in an unsupervised manner
and is therefore sometimes referred to as a replicator neural network.

The autoencoders reconstruct each dimension of the input by passing it through the network. It may seem
trivial to use a neural network for the purpose of replicating the input, but during the replication process, the
size of the input is reduced into its smaller representation. The middle layers of the neural network have a
fewer number of units as compared to that of input or output layers. Therefore, the middle layers hold the
reduced representation of the input. The output is reconstructed from this reduced representation of the input.
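
A minimal dense autoencoder sketch follows; MNIST is used as an assumed example dataset and the 32-dimensional bottleneck is an arbitrary choice.

```python
# Hedged sketch: a dense autoencoder compressing 784-dimensional MNIST images
# to a 32-dimensional code (dataset and bottleneck size are assumptions).
import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
x_test = x_test.reshape(-1, 784) / 255.0

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(128, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(32, activation="relu")(encoded)    # bottleneck
decoded = tf.keras.layers.Dense(128, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(decoded)

autoencoder = tf.keras.Model(inputs, decoded)
encoder = tf.keras.Model(inputs, encoded)      # reused for dimensionality reduction
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))

codes = encoder.predict(x_test)                # 32-dimensional representation
print(codes.shape)                             # (10000, 32)
```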

Question Bank:

1. Can Autoencoders be used for dimensionality reduction?


2. Which of the Autoencoder is the most effective for dimensionality reduction of the data?

3. What are Autoencoders good for?


4. What are 3 ways of reducing dimensionality?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 12

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:
Marks:

Objective:

To understand how autoencoders can be used for image classification.


Outcome:

Students would be able to use autoencoders for image classification.


Problem Statement:

To apply autoencoders on an image dataset.


Background Study:

An autoencoder can be defined as a neural network whose primary purpose is to learn the underlying manifold or feature space of the dataset. An autoencoder tries to reconstruct its inputs at its outputs. Unlike other non-linear dimension reduction methods, autoencoders do not strive to preserve a single property such as distance (MDS) or topology (LLE). An autoencoder generally consists of two parts: an encoder, which transforms the input to a hidden code, and a decoder, which reconstructs the input from the hidden code.
Question Bank:

1. Why do we need Autoencoders?


2. What are the 3 essential components of an Autoencoder?
3. What are the types of Autoencoders?
4. How is Autoencoder implemented?
5. What accuracy is achieved for the considered dataset?
Student Work Area
Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 13

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

To understand how to improve the performance of autoencoders using convolutional layers.


Outcome:

Students would be able to do the practical implementation of classification using the convolutional neural
network and convolutional autoencoder.
Problem Statement:

Using MNIST dataset, improve autoencoder's performance using convolutional layers.


Background Study:

The Fashion-MNIST dataset consists of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images, and the test set has 10,000 images. Fashion-MNIST is a replacement for the original MNIST dataset; the image dimensions and the training and test splits are similar to the original MNIST dataset.

Autoencoders are a widely used unsupervised application of neural networks whose original purpose is to find latent, lower-dimensional state-spaces of datasets, but they are also capable of solving other problems, such as image denoising, enhancement or colourization.
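
A convolutional autoencoder along these lines might look like the sketch below; MNIST is used as an assumed example and the filter counts are illustrative.

```python
# Hedged sketch: a convolutional autoencoder on MNIST; pooling halves the spatial
# size in the encoder and upsampling mirrors it in the decoder.
import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(16, (3, 3), activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2), padding="same")(x)            # 14x14
x = tf.keras.layers.Conv2D(8, (3, 3), activation="relu", padding="same")(x)
encoded = tf.keras.layers.MaxPooling2D((2, 2), padding="same")(x)       # 7x7 code

x = tf.keras.layers.Conv2D(8, (3, 3), activation="relu", padding="same")(encoded)
x = tf.keras.layers.UpSampling2D((2, 2))(x)                             # 14x14
x = tf.keras.layers.Conv2D(16, (3, 3), activation="relu", padding="same")(x)
x = tf.keras.layers.UpSampling2D((2, 2))(x)                             # 28x28
decoded = tf.keras.layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)

conv_autoencoder = tf.keras.Model(inputs, decoded)
conv_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
conv_autoencoder.fit(x_train, x_train, epochs=5, batch_size=128,
                     validation_data=(x_test, x_test))
```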

Question Bank:

1. How can I improve my Autoencoder performance?


2. How can I improve my convolutional neural network performance?
3. How do you improve the accuracy of a CNN model?
4. How can I improve my deep learning performance?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 14

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:
1. To understand the concept of recurrent neural networks.
2. To build an RNN model on a sales dataset.
Outcome:

Students would be able to develop an RNN model on a sales dataset.


Problem Statement:

To create a recurrent neural network model on an alcohol sales dataset.


Background Study:

A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes
form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behaviour.
Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable
length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected
handwriting recognition or speech recognition.
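
A hedged sketch of framing a univariate series as supervised sliding windows and fitting a simple RNN follows; the synthetic sine series and the 12-step window are placeholder assumptions standing in for the sales data.

```python
# Hedged sketch: turning a univariate series into sliding windows and fitting a
# simple RNN; the synthetic sine series stands in for the sales data.
import numpy as np
import tensorflow as tf

series = np.sin(np.arange(0, 100, 0.1)).astype("float32")   # placeholder series

def make_windows(data, window=12):
    X = np.array([data[i:i + window] for i in range(len(data) - window)])
    y = data[window:]
    return X[..., None], y            # RNNs expect (samples, timesteps, features)

X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

# One-step-ahead forecast from the last observed window
print(model.predict(series[-12:].reshape(1, 12, 1)))
```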
Question Bank:

1. How do you create an RNN model?


2. What makes something a recurrent neuron?
3. What is recurrent model?
4. What is recurrent neural network RNN sequence modeling?
5. What is recurrent neural network used for?
6. What is RNN and its application?
7. What is recurrent neural network used for?
8. What is RNN and CNN?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 15

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

To understand how to apply RNN model to different datasets.


Outcome:

Students would be able to implement RNN models for varied applications.


Problem Statement:

1. To implement RNN model for stock price prediction


2. To create a RNN model and predict miles travelled by vehicles.
Background Study:

RNNs are very powerful because they combine two properties:

– A distributed hidden state that allows them to store a lot of information about the past efficiently.
– Non-linear dynamics that allow them to update their hidden state in complicated ways.

With enough neurons and time, RNNs can compute anything that can be computed by your computer.

Question Bank:

1. What is RNN and its application?


2. What is recurrent neural network in artificial intelligence?
3. Can RNN be used for text classification?
4. Which neural network is best for text classification?
5. Are recurrent neural networks being best suited for text processing?
6. How do you predict RNN?
7. How many types of recurrent neural networks are there in deep learning?
8. Is RNN more powerful than CNN?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 16

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand the concept of LSTM model.


2. To apply LSTM model to different datasets.
Outcome:

Students would be able to implement LSTM models on varied datasets.


Problem Statement:

Using LSTM to predict the future weather of a city using weather-data from several other cities.
Background Study:

Long Short-Term Memory (LSTM) is a kind of recurrent neural network. In an RNN, the output from the last step is fed as input to the current step. LSTM was designed by Hochreiter & Schmidhuber. It tackles the long-term dependency problem of RNNs, in which an RNN cannot use information stored far back in the sequence and can only give accurate predictions from recent information; as the gap length increases, an RNN does not give efficient performance. LSTM can, by default, retain information for long periods of time. It is used for processing, predicting and classifying on the basis of time-series data.
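
A hedged sketch of such an LSTM follows; the array shapes (window length, number of cities) and the random placeholder data are assumptions standing in for the real weather dataset.

```python
# Hedged sketch: an LSTM that maps a window of readings from several cities to
# the target city's next temperature; shapes and data are placeholder assumptions.
import numpy as np
import tensorflow as tf

n_samples, window, n_cities = 500, 24, 5
X = np.random.rand(n_samples, window, n_cities).astype("float32")
y = np.random.rand(n_samples).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, n_cities)),
    tf.keras.layers.Dense(1),            # next-step temperature of the target city
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, validation_split=0.1, verbose=0)
```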

Question Bank:

1. Why is Lstm good for text classification?


2. Can RNN predict stock price?
3. How does machine learning predict stock prices?
4. How does Lstm work in stock predictions?
5. What does an Lstm layer do?
6. What is the main advantage of recurrent neural networks?
7. How do you use Lstm?
8. What are some common problems with Lstm?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 17

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand the concept of Natural Language processing.


2. To apply tokenization and stemming on text data.
Outcome:

Students would be able to do implement tokenization and stemming on text data.


Problem Statement:

To perform tokenization and stemming on text data using NLTK.


Background Study:

NLTK contains a module called tokenize, which provides two main kinds of tokenization:
 Word tokenization: we use the word_tokenize() method to split a sentence into tokens or words.
 Sentence tokenization: we use the sent_tokenize() method to split a document or paragraph into sentences.
Stemming is the process of reducing morphological variants of a word to a common root/base form. Stemming programs are commonly referred to as stemming algorithms or stemmers. A stemming algorithm reduces the words “chocolates”, “chocolatey” and “choco” to the root word “chocolate”, and “retrieval”, “retrieved” and “retrieves” reduce to the stem “retrieve”.
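
A minimal NLTK sketch of word/sentence tokenization and Porter stemming follows; the example sentence is arbitrary.

```python
# Hedged sketch: word/sentence tokenization and Porter stemming with NLTK.
import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.stem import PorterStemmer

nltk.download("punkt")        # tokenizer models (newer NLTK may also need "punkt_tab")

text = "Chocolates were retrieved quickly. Retrieval of chocolatey treats is fun."
print(sent_tokenize(text))    # list of sentences
tokens = word_tokenize(text)  # list of word tokens
print(tokens)

stemmer = PorterStemmer()
print([stemmer.stem(t) for t in tokens])   # e.g. "retrieved" -> "retriev"
```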

Question Bank:

1. What is NLTK used for?


2. Is NLTK an API?

3. What is NLTK package?


4. What is import NLTK in Python?
5. How do you use tokenization in Python?
6. How do you Tokenize a word in Python?
7. How does tokenization work in NLP?
8. What does Word_tokenize () function in NLTK do?
9. What is stemming in NLTK?
10. What is the best stemming algorithm?
11. Is stemming or Lemmatization better?
12. What is Porter stemming in Python?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 18

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand the concept of lemmatization and stopwords for efficient text classification.
2. To use NLTK and spacy to implement lemmatization and remove stopwords for text classification.
Outcome:

Students would be able to apply lemmatization and remove stopwords on text data using both NLTK and
spacy libraries.
Problem Statement:

1. To perform lemmatization and remove stopwords on text data using NLTK


2. To perform lemmatization and remove stopwords on text data using Spacy.
Background Study:

The spaCy library is one of the most popular NLP libraries along with NLTK. The basic difference
between the two libraries is the fact that NLTK contains a wide variety of algorithms to solve one problem
whereas spaCy contains only one, but the best algorithm to solve a problem.

NLTK was released back in 2001, while spaCy is relatively new and was developed in 2015. In this course we will mostly be dealing with spaCy, owing to its state-of-the-art nature. However, we will also use NLTK when it is easier to perform a task using NLTK rather than spaCy.
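
A minimal sketch of lemmatization and stopword removal with both libraries follows; it assumes the `en_core_web_sm` model (installed via `python -m spacy download en_core_web_sm`) and the NLTK corpora have been downloaded, and the example sentences are arbitrary.

```python
# Hedged sketch: lemmatization and stopword removal with spaCy and NLTK
# (assumes `python -m spacy download en_core_web_sm` and the NLTK corpora).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The striped bats were hanging on their feet and eating fishes")

# spaCy: keep the lemma of every token that is not a stopword or punctuation.
print([tok.lemma_ for tok in doc if not tok.is_stop and not tok.is_punct])

# NLTK equivalent: WordNet lemmatizer + the English stopword list.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords")
nltk.download("wordnet")

lemmatizer = WordNetLemmatizer()
stops = set(stopwords.words("english"))
words = ["striped", "bats", "were", "hanging", "on", "their", "feet"]
print([lemmatizer.lemmatize(w) for w in words if w not in stops])
```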

Question Bank:

1. Is spaCy better than NLTK?



2. How do you use spaCy in NLP?
3. What is spaCy used for?
4. What algorithm does spaCy use?
5. What is the difference between NLTK and spaCy?
6. What is the use of NLTK?
7. What does spaCy stand for?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 19

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:

Faculty Signature:

Marks:

Objective:

1. To understand the concept of Part-of-Speech (POS) tagging.


2. To understand the concept of Named Entity Recognition and Sentence Segmentation.
Outcome:

Students would be able to understand the components of Part-of-Speech (POS) tagging for text classification.
Problem Statement:

To explore following components of Part-of-Speech (POS) tagging:


1. Visualization of POS
2. Named Entity Recognition
3. Sentence Segmentation
Background Study:

Part-of-speech (POS) tagging refers to assigning a part of speech to each individual word in a sentence. Unlike phrase matching, which is performed at the sentence or multi-word level, POS tagging is performed at the token level.

POS tags are properties of words that define their main context, function, and usage in a sentence. Some of the commonly used parts of speech are listed below, followed by a short illustrative spaCy sketch.

Nouns: define an object or entity.

Verbs: define an action.

Adjectives and Adverbs: act as modifiers, quantifiers, or intensifiers in a sentence.
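The sketch below illustrates all three components of the problem statement with spaCy; the en_core_web_sm model and the example sentence are assumptions, and the displacy call is an optional visualization step.

# Illustrative sketch: POS tags, named entities, and sentence segmentation with spaCy.
import spacy
from spacy import displacy

nlp = spacy.load('en_core_web_sm')          # assumes the small English model is installed
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion. The deal may close next year.")

for token in doc:                           # token-level part-of-speech tags
    print(token.text, token.pos_, token.tag_)

for ent in doc.ents:                        # named entities (ORG, GPE, MONEY, DATE, ...)
    print(ent.text, ent.label_)

for sent in doc.sents:                      # sentence segmentation
    print(sent.text)

# displacy.serve(doc, style='dep')          # optional: visualize the POS/dependency parse in a browser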

Question Bank:

1. What do we tag in POS tagging?


2. What is part of speech tagging in NLTK?
3. What is part of speech tagging in Python?
4. How is POS tagging done?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 20

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand the concept of Bag-of-Words (BOW) and Bag-of-n-grams.


2. To understand the concept of Tf-Idf Vectorizer and Count Vectorizer.
Outcome:

Students would be able to apply Bag-of-Words (BOW) and Bag-of-n-grams for text classification.
Problem Statement:

To create Bag-of-Words (BOW) and Bag-of-n-grams representations using the following:

1. Bag-of-words using the Count Vectorizer
2. Bag-of-n-grams using the Count Vectorizer
3. Bag-of-words using the Tf-Idf Vectorizer
Background Study:

Bag of words is a Natural Language Processing technique for text modelling. In technical terms, it is a method of feature extraction from text data, and it offers a simple and flexible way of extracting features from documents.

A bag of words is a representation of text that describes the occurrence of words within a document. We keep track only of word counts and disregard grammatical detail and word order. It is called a “bag” of words because any information about the order or structure of the words in the document is discarded: the model is only concerned with whether known words occur in the document, not where they occur.
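A small scikit-learn sketch of the three representations asked for in the problem statement is shown below; the toy corpus is invented purely for illustration.

# Illustrative sketch: bag-of-words, bag-of-n-grams, and TF-IDF with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs can be friends",
]

bow = CountVectorizer()                       # unigram counts: plain bag-of-words
X_bow = bow.fit_transform(corpus)

ngrams = CountVectorizer(ngram_range=(1, 2))  # unigrams + bigrams: bag-of-n-grams
X_ngrams = ngrams.fit_transform(corpus)

tfidf = TfidfVectorizer()                     # TF-IDF weighted bag-of-words
X_tfidf = tfidf.fit_transform(corpus)

print(bow.get_feature_names_out())            # vocabulary (get_feature_names() on older scikit-learn)
print(X_bow.toarray())                        # one row per document, one column per word
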
Question Bank:

1. How do you make a bag of words?
2. How do you make a bag of words in Python?
3. How do you make a bag of words using NLTK?
4. Is the Count Vectorizer a bag-of-words model?
5. How do you use the Count Vectorizer?
6. How do you calculate a bag of words?
7. What is the Count Vectorizer?
8. What is the Tf-Idf Vectorizer?

9. How do I use the Tf-Idf Vectorizer in Python?
10. Which is better: the Count Vectorizer or the Tf-Idf Vectorizer?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 21

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:

Faculty Signature:

Marks:

Objective:

1. To understand the concept of TFIDF Vectorizer.


2. To use the TFIDF Vectorizer for spam detection.
Outcome:

Students would be able to perform spam detection using the TFIDF Vectorizer.


Problem Statement:

To build an NLP model for spam detection using the TFIDF Vectorizer.


Background Study:

In recent times, the internet and social media have become the fastest and easiest ways to obtain information, and messages, reviews, and opinions have become significant sources of it. The Short Message Service (SMS) remains one of the most widely used means of communication, and as dependence on mobile devices has grown, so has the number of attacks in the form of SMS spam. Advances in artificial intelligence now allow us to extract meaningful information from such data; in particular, Natural Language Processing techniques can be used to detect spam messages and provide valuable insights.
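One possible outline of such a model is sketched below. The file name spam.csv and its column names (label, message) are assumptions about the SMS spam dataset being used, and Multinomial Naive Bayes is just one reasonable classifier choice on top of the TF-IDF features.

# Illustrative sketch: TF-IDF features + Multinomial Naive Bayes for SMS spam detection.
# The dataset path and column names below are assumptions; adjust to the dataset in use.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

df = pd.read_csv('spam.csv', encoding='latin-1')[['label', 'message']]

X_train, X_test, y_train, y_test = train_test_split(
    df['message'], df['label'], test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer(stop_words='english')
X_train_tfidf = vectorizer.fit_transform(X_train)   # fit the vocabulary on training text only
X_test_tfidf = vectorizer.transform(X_test)

clf = MultinomialNB()
clf.fit(X_train_tfidf, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test_tfidf)))
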
Question Bank:

1. How do I make a spam classifier?


2. What is “ham” in spam detection?
3. What is a spam classifier?
4. How do I use the TFIDF Vectorizer in Python?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 22

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To understand the concept of Transfer Learning.


2. To apply transfer learning using the pre-trained model (MobileNet V2) for image classification.
Outcome:

Students would be able to classify images using the concept of transfer learning.
Problem Statement:

To implement transfer learning using the pre-trained model (MobileNet V2) for image classification.
Background Study:

Transfer learning is an approach in which a model trained on one machine learning task is reused as the starting point for a different task. It is used across many deep learning domains, including image classification, natural language processing, and even gaming; the ability to adapt an already trained model to another task is extremely valuable.

The MobileNet V2 model was developed at Google and is pre-trained on the ImageNet dataset, which contains about 1.4 million web images spanning 1000 classes.
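A minimal Keras sketch of this idea follows: the pre-trained MobileNet V2 base is frozen and only a new classification head is trained. The 160x160 input size, the two-class head, and the commented-out training call are placeholders to adapt to the actual image dataset.

# Illustrative sketch: transfer learning with MobileNetV2 as a frozen feature extractor.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

base = MobileNetV2(input_shape=(160, 160, 3), include_top=False, weights='imagenet')
base.trainable = False                        # freeze the ImageNet-trained convolutional base

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),          # pool feature maps into a single vector
    layers.Dropout(0.2),
    layers.Dense(2, activation='softmax')     # new head; replace 2 with the number of classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Images should be preprocessed with tf.keras.applications.mobilenet_v2.preprocess_input
# before training, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # trains only the new head
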
Question Bank:

1. How do you use MobileNet for image classification?


2. Which pre-trained model is best for image classification?
3. What is MobileNet V2?
4. What is the MobileNet model?
5. How do I perform transfer learning in TensorFlow?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

EXPERIMENT NO. 23

Student Name and Roll Number:


Semester /Section:
Link to Code:
Date:
Faculty Signature:

Marks:

Objective:

1. To explore the use of transfer learning for the classification of images.


2. To apply transfer learning using the pre-trained model (VGG16) for image classification.
Outcome:

Students would be able to use the pre-trained VGG16 model for transfer learning.
Problem Statement:

To implement transfer learning using the pre-trained model (VGG16) on image dataset.
Background Study:

Transfer learning is simply the process of reusing a model that has already been trained on one dataset to train on, and make predictions for, a new dataset.

“A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-
scale image-classification task.”

VGG16 is a convolutional neural network model proposed by K. Simonyan and A. Zisserman from the
University of Oxford in the paper “Very Deep Convolutional Networks for Large-Scale Image Recognition”.
The model achieves 92.7% top-5 test accuracy in ImageNet, which is a dataset of over 14 million images
belonging to 1000 classes.
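The sketch below shows one common way to reuse VGG16 in Keras: load the ImageNet weights without the top layers, freeze them, and attach a new classifier. The 10-class head and layer sizes are assumptions to tailor to the chosen image dataset.

# Illustrative sketch: transfer learning with a frozen pre-trained VGG16 base in Keras.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                   # keep the ImageNet-trained filters fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')    # replace 10 with the number of classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
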
Question Bank:

1. How do I use VGG16 for transfer learning?


2. Which pre-trained model is best for image classification?
3. How do I use transfer learning in Keras?
4. How do I import VGG16?

Student Work Area


Algorithm/Flowchart/Code/Sample Outputs

Annexure 2

DEEP LEARNING
(CSL 312)

PROJECT REPORT

Faculty Name:

Student Name:

Roll No.:

Semester:

Group:

Department of Computer Science and Engineering


The NorthCap University, Gurugram- 122001, India
Session 2025-26

CONTENTS

S. No.   Details                                                                    Page No.

1        Project Description

2        Problem Statement

3        Analysis
         3.1. Hardware Requirements
         3.2. Software Requirements

4        Design
         4.1. Data/Input Output Description
         4.2. Algorithmic Approach / Algorithm / DFD / ER Diagram / Program Steps

5        Implementation and Testing (Stage/Module wise)

6        Output (Screenshots)

7        Conclusion and Future Scope
