Deep Learning Lab Manual 21CSU124
Lab Manual
Department of Computer Science and Engineering
The NorthCap University, Gurugram
DEEP LEARNING
Lab Manual CSL312
2025-26
Published by:
© Copyright Reserved
Copying or facilitating copying of lab work constitutes cheating and is considered use of unfair means. Students indulging in copying or facilitating copying shall be awarded zero marks for that experiment. Frequent cases of copying may lead to disciplinary action. Attendance in lab classes is mandatory.
Labs are open until 7 PM upon request. Students are encouraged to make full use of the labs beyond normal lab hours.
PREFACE
The Deep Learning Lab Manual is designed to meet the course and programme requirements of the NCU curriculum for B.Tech third-year students of the CSE branch. The purpose of the lab work is to give students brief practical experience of basic lab skills. It provides the space and scope for self-study so that students can come up with new and creative ideas.
The lab manual is written on a "teach yourself" pattern, and students who come with proper preparation are expected to be able to perform the experiments without any difficulty. A brief introduction to each experiment, with information about self-study material, is provided. The laboratory exercises will help students gain a deeper, technical understanding of Artificial Neural Networks, Convolutional Neural Networks, Autoencoders, Recurrent Neural Networks and Long Short-Term Memory (LSTM). The manual also covers Transfer Learning and the use of the Google Cloud Platform, and discusses various ways to better understand the applications and real-time projects of deep learning with respect to the latest industry scenario. Students are expected to come thoroughly prepared for the lab. General discipline, safety guidelines and report writing are also discussed.
The lab manual is a part of the curriculum of The NorthCap University, Gurugram. A teacher's copy of the experimental results and answers to the questions is available as a sample guideline.
We hope that the lab manual will be useful to students of the Computer Science & Engineering branch, and the authors request readers to kindly forward their suggestions and constructive criticism for further improvement of the workbook.
The authors express deep gratitude to the Members of the Governing Body, NCU, for their encouragement and motivation.
Authors
The NorthCap University
Gurugram, India
CONTENTS
Syllabus
1 Introduction
2 Lab Requirements
3 General Instructions
4 List of Experiments
6 List of Projects
7 Rubrics
SYLLABUS
1. Department: Department of Computer Science and Engineering
2. Course Name: Deep Learning
3. Course Code: CSL312
4. L-T-P: 2-0-4
5. Credits: 4
6. Type of Course (check one): Programme Core / Programme Elective / Open Elective
Total Lecture, Tutorial and Practical Hours for this course (taking 15 teaching weeks per semester): 90 hours
Lectures: 30 hours; Tutorials: 0 hours; Lab Work (Practice): 45 hours
The class size is a maximum of 30 learners.
10. Course Outcomes (COs)
On successful completion of this course students will be able to:
CO 1: Define and apply concepts of Artificial Neural Networks on real-world data. Students will also be able to differentiate deep learning from shallow learning. [L1, L3]
… describe the various steps involved in the natural language processing process, and determine the best process for handling textual data for real-world applications.
CO 4: Explain, apply and compare various sequential models for time-series data. Students will be able to explain the need for sequential models for handling time-series data, apply the models for prediction, and compare their performance on various applications. [L2, L3, L4]
CO 5: Characterize, use and categorize various Autoencoders and Generative Models for unsupervised deep learning. Students will be able to characterize different autoencoders and generative models, describe their usage, and place them into various categories for unsupervised learning. [L2, L3, L4]
11. UNIT WISE DETAILS No. of Units: 5
Unit Number: 1 Title: Introduction to ANN and Deep Learning No. of hours: 10
Content Summary:
Overview of Machine Learning and Neural Networks. Building an ANN, Activation Functions,
Evaluating, Improving and Tuning the ANN. Loss functions, Gradient Descent, Back propagation,
Hyperparameter tuning. Introduction to Deep Learning, Optimisers, Momentum.
Unit Number: 2 Title: Deep Learning for Image Processing No. of hours: 10
Content Summary:
Basics of Image Processing, Introduction to Tensorflow and Keras, Introduction to CNN, Building a
CNN: Convolution layers, Activation functions, Pooling, Flattening, Full Connection, Evaluating, Tuning
the CNN, Dropout to prevent Overfitting, CNN applications, Transfer Learning models.
Content Summary:
Introduction to NLP (Natural Language Processing), NLTK and Spacy basics, Tokenization, Stemming,
Lemmatization, Stop Words, Bag of Words and Bag of N grams, Word Embeddings.
Content Summary:
Introduction to Recurrent Neural Networks (RNN), Vanishing Gradient, RNN limitations, Introduction to
Long Short-Term Memory (LSTM), LSTM Variations, Gated Recurrent Units (GRU),
Application of these architectures to natural language processing and time series.
Content Summary:
Autoencoder, Training an Autoencoder. Types of Autoencoders. Introduction to Generative AI,
Differences between generative and discriminative models, Generative Models: GANs, VAEs,
Transformers. Emerging Applications of Generative AI.
12. Brief Description of Self-learning components by students (through books/resource
material etc.):
Supplementary MOOC Courses
1. https://onlinecourses.nptel.ac.in/noc21_cs35/preview
2. https://onlinecourses.nptel.ac.in/noc21_cs05/preview
Text Books:
Francois Chollet, Deep Learning with Python, Manning Publications, First Edition, 2018
Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT Press, First Edition, 2016
Reference Books:
Stephen Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press, 2004
David Foster, Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play, O'Reilly Media, First Edition, 2019
Reference Websites: (nptel, swayam, coursera, edx, udemy, lms, official documentation
weblink)
https://medium.com/intro-to-artificial-intelligence/deep-learning-series-1-intro-to-deep-learning-abb1780ee20
https://towardsdatascience.com/introducing-deep-learning-and-neural-networks-deep-learning-for-rookies-1-bd68f9cf5883
https://www.coursera.org/learn/neural-networks-deep-learning
www.lms.ncuindia.edu/lms
https://www.coursera.org/learn/introduction-to-generative-ai#modules
https://www.v7labs.com/blog/generative-ai-guide#h2
ebooks:
https://www.pdfdrive.com/introduction-to-deep-learning-using-r-a-step-by-step-guide-to-learning-and-implementing-deep-learning-models-using-r-e158252417.html
https://www.pdfdrive.com/learn-keras-for-deep-neural-networks-a-fast-track-approach-to-modern-deep-learning-with-python-e185770502.html
1. INTRODUCTION
4. Implementation of Natural Language Processing, Text Classification, and Deep Learning for NLP.
5. Understand other deep learning topics such as the transfer learning approach for various areas, the Google Cloud Platform, and Cloud AutoML.
2. LAB REQUIREMENTS
Hardware Requirements: 8 GB RAM (recommended), 2.60 GHz processor (recommended)
Required Bandwidth: NA
3. GENERAL INSTRUCTIONS
Students must turn up on time and contact the concerned faculty for the experiment they are supposed to perform.
Students will not be allowed to enter late in the lab.
Students will not leave the class till the period is over.
Students should come prepared for their experiment.
Experimental results should be entered in the lab report format and certified/signed by the concerned faculty/lab instructor.
Students must get the connection of the hardware setup verified before switching on
the power supply.
Students should maintain silence while performing the experiments. If any discussion amongst them becomes necessary, they should speak in a very low voice without disturbing the adjacent groups.
Violating the above code of conduct may attract disciplinary action.
Damaging lab equipment or removing any component from the lab may invite
penalties and strict disciplinary action.
b. Attendance
Students should come to the lab thoroughly prepared on the experiments they are
assigned to perform on that day. Brief introduction to each experiment with
information about self-study reference is provided on LMS.
Students must bring the lab report to each practical class, with written records of the last experiments performed, complete in all respects.
4. LIST OF EXPERIMENTS
6. LIST OF PROJECTS
1. Build your own emoji with deep learning, using the following dataset or some other dataset.
4. In this project, build a chatbot using deep learning techniques. The chatbot will be trained on a dataset that contains categories (intents), patterns and responses. Students should use a special recurrent neural network (LSTM) to classify which category the user's message belongs to, and then return a random response from the list of responses for that category.
5. Create an algorithm to distinguish dogs from cats. Students can refer to this dataset or take any other dataset. In this Keras project, they will discover how to build and train a convolutional neural network for classifying images of cats and dogs.
7. RUBRICS
Evaluation Scheme
Major Exam: 70 marks
Internal Assessment (50 marks):
Quiz 1 (CO1): 10 marks
Quiz 2 (CO2): 10 marks
Quiz 3 (CO3): 10 marks
Class Test (CO4): 10 marks
Quiz 4 (CO5): 10 marks
Annexure 1
DEEP LEARNING
(CSL 312)
Semester: 7
Group: DS – VII – DB
EXPERIMENT NO. 1
Student Name and Roll Number: Tanveer Singh Bindra & 21CSU124
Semester /Section: 7th semester / DS-B
Link to Code:
Date:
Faculty Signature:
Marks:
Objective:
To understand the basic features of TensorFlow and Keras packages and to know how to build models using
deep learning.
Outcome:
Students will familiarize themselves with the concepts of the TensorFlow and Keras packages, which will help them in building deep learning models.
Problem Statement:
TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible
ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML
and developers easily build and deploy ML powered applications. Keras is a deep learning API written in
Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on
enabling fast experimentation.
Question Bank:
3. 📦 Loaders in TensorFlow
TensorFlow provides several data loaders and utilities for handling datasets:
tf.data.Dataset API: For building input pipelines from arrays, files, or streaming data.
tf.keras.utils.image_dataset_from_directory: Loads image data from folders.
tf.keras.utils.get_file: Downloads and caches files.
tensorflow_datasets (TFDS): Offers ready-to-use datasets like MNIST, CIFAR, IMDB.
These loaders simplify preprocessing and batching for training models.
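As a quick illustration, the sketch below builds a small input pipeline with the tf.data API; the folder path in the commented image-loading call is a hypothetical layout (data/<class_name>/<image>.jpg), and the batch sizes are arbitrary choices.

import tensorflow as tf

# Build a pipeline from an in-memory tensor: shuffle, batch and prefetch.
numbers = tf.data.Dataset.from_tensor_slices(tf.range(10))
pipeline = numbers.shuffle(buffer_size=10).batch(4).prefetch(tf.data.AUTOTUNE)
for batch in pipeline:
    print(batch.numpy())

# Hypothetical image folder arranged as data/<class_name>/<image>.jpg:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "data", image_size=(128, 128), batch_size=32)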
5. 📚 Is Keras a Library?
Yes, Keras is a high-level deep learning library that runs on top of TensorFlow. It simplifies model creation with
intuitive APIs and is now tightly integrated into TensorFlow as tf.keras.
EXPERIMENT NO. 2
Student Name and Roll Number: Tanveer Singh Bindra and 21CSU124
Semester /Section: 7th semester / DS-B
Link to Code:
Date:
Faculty Signature:
Marks
Objective:
Students will be able to understand how to build models using Artificial Neural Networks.
Problem Statement:
An artificial neural network (ANN) is a computing system designed to simulate the way the human brain analyzes and processes information. It is the foundation of artificial intelligence (AI) and solves problems that would prove impossible or difficult by human or statistical standards.
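A minimal sketch of such a network in tf.keras follows; the synthetic data, layer sizes and training settings are illustrative assumptions, not part of the assignment.

import numpy as np
import tensorflow as tf

# Synthetic binary-classification data: 200 samples with 8 features each.
X = np.random.rand(200, 8).astype("float32")
y = (X.sum(axis=1) > 4).astype("float32")

# Small feed-forward ANN: two hidden ReLU layers and a sigmoid output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))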
Question Bank:
A detailed answer key for the question bank on neural networks and machine learning fundamentals follows:
2. ⚡ Activation Function
An activation function determines whether a neuron should be activated or not. It introduces non-linearity into the network, allowing it to learn complex patterns.
Common types:
ReLU (Rectified Linear Unit): outputs 0 for negative inputs and the input itself for positive inputs.
Sigmoid: squashes inputs into the range (0, 1); commonly used for binary outputs.
Tanh: squashes inputs into the range (-1, 1).
Softmax: converts a vector of scores into a probability distribution over classes.
They enable tasks like facial recognition, language translation, and autonomous driving.
Examples: classification and regression (supervised learning); clustering and dimensionality reduction (unsupervised learning).
5. ⚖️ What is a Bias?
A bias is a learnable constant added to a neuron's weighted sum before the activation function is applied. It shifts the activation, allowing the model to fit data that the weighted inputs alone cannot, much like the intercept term in linear regression.
Examples of real-world applications include fraud detection and medical diagnosis.
EXPERIMENT NO. 3
Marks:
Objective:
Students will familiarize themselves with breast cancer classification and with building ANN models on this dataset.
Problem Statement:
To build an ANN model for classification problem on breast cancer classification to see the effect of:
1. Early Stopping
2. Dropouts
Background Study:
Drop Out
A single model can be used to simulate having a large number of different network architectures by
randomly dropping out nodes during training. This is called dropout and offers a very computationally
cheap and remarkably effective regularization method to reduce overfitting and improve generalization
error in deep neural networks of all kinds.
Early Stopping
A major challenge in training neural networks is how long to train them.
Too little training will mean that the model will underfit the training and test sets. Too much training will
mean that the model will overfit the training dataset and have poor performance on the test set.
A compromise is to train on the training dataset but to stop training at the point when performance on
a validation dataset starts to degrade. This simple, effective, and widely used approach to training
neural networks is called early stopping.
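The sketch below shows how Dropout layers and an EarlyStopping callback are typically added to a Keras model; the network size, dropout rate, patience value and the use of scikit-learn's built-in breast cancer dataset are assumptions for demonstration.

import tensorflow as tf
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Load the breast cancer dataset and split it into training/validation sets.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# ANN with a dropout layer after each hidden layer to reduce overfitting.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop training once validation loss has not improved for 5 consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                              restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=100, batch_size=32, callbacks=[early_stop], verbose=0)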
Question Bank:
EXPERIMENT NO. 4
Marks:
Objective:
Students will be able to build an ANN classification model on churn modelling data using the k-fold method, grid search and the checkpoint method.
Problem Statement:
To build an advanced ANN classification model for churn modelling data with:
1. K-fold cross validation
2. Grid Search
3. Checkpoint
Background Study:
Cross validation is used to evaluate the performance of the model with the current combination of
hyperparameters. The process of K-Fold Cross-Validation is straightforward. You divide the data into K
folds. Out of the K folds, K-1 sets are used for training while the remaining set is used for testing. The
algorithm is trained and tested K times, each time a new set is used as testing set while remaining sets are
used for training. Finally, the result of the K-Fold Cross-Validation is the average of the results obtained
on each set.
Grid-search is used to find the optimal hyperparameters of a model which results in the most ‘accurate’
predictions.
Checkpointing is an approach where a snapshot of the state of the system is taken in case of system failure. If there is a
problem, not all is lost. The checkpoint may be used directly, or used as the starting point for a new run,
picking up where it left off.
When training deep learning models, the checkpoint is the weights of the model. These weights can be used
to make predictions as is, or used as the basis for ongoing training.
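A minimal sketch combining these three ideas follows; the synthetic data stands in for the churn-modelling dataset, the hyperparameter grid is illustrative, and the grid search is written as a plain loop rather than through any particular wrapper library.

import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# Placeholder data standing in for the churn-modelling features/labels.
X = np.random.rand(500, 10).astype("float32")
y = (np.random.rand(500) > 0.5).astype("float32")

def build_model(units=16):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(10,)),
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Grid search over one hyperparameter, evaluated with 5-fold cross-validation.
for units in [8, 16, 32]:
    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X):
        model = build_model(units)
        # Checkpoint: save the best model seen on the validation fold.
        ckpt = tf.keras.callbacks.ModelCheckpoint("best_model.keras",
                                                  monitor="val_loss", save_best_only=True)
        model.fit(X[train_idx], y[train_idx],
                  validation_data=(X[test_idx], y[test_idx]),
                  epochs=20, batch_size=32, callbacks=[ckpt], verbose=0)
        scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
    print(f"units={units}: mean CV accuracy {np.mean(scores):.3f}")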
Question Bank:
EXPERIMENT NO. 5
Marks:
Objective:
Students will be able to construct a CNN classification model for image classification.
Problem Statement:
The MNIST handwritten digit classification problem is a standard dataset used in computer vision and deep
learning.
A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and differentiate one from the other. A CNN combines convolutional layers with multilayer-perceptron-style fully connected layers to perform its computations. CNNs require relatively little pre-processing compared to other image classification algorithms: the network learns the filters that in traditional algorithms were hand-engineered. This makes CNNs the best-suited option for image processing tasks.
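A minimal sketch of such a model on MNIST follows; the layer sizes, dropout rate and training settings are illustrative choices, not prescribed by the experiment.

import tensorflow as tf

# Load MNIST, scale pixels to [0, 1] and add a channel dimension.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Small CNN: convolution -> pooling -> convolution -> pooling -> dense classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))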
Question Bank:
EXPERIMENT NO. 6
Marks:
Objective:
Students would be able to build and train a convolutional neural network for classifying images of cats and dogs.
Problem Statement:
To create CNN model with dataset containing images of cats and dogs for image classification.
Background Study:
Cats vs Dogs classification is a fundamental Deep Learning project for beginners. We are given a set of
dog and cat images. The task is to build a model to predict the category of an animal: dog or cat?
Question Bank:
EXPERIMENT NO. 7
Marks:
Objective:
Students would be able to build and train a LeNet model on the MNIST dataset.
Problem Statement:
The LeNet architecture was first introduced by LeCun et al. in their 1998 paper, Gradient-Based Learning
Applied to Document Recognition. As the name of the paper suggests, the authors’ implementation of LeNet
was used primarily for OCR and character recognition in documents.
The LeNet architecture is straightforward and small, (in terms of memory footprint), making it perfect for
teaching the basics of CNNs — it can even run on the CPU (if your system does not have a suitable
GPU), making it a great “first CNN”.
However, if you do have GPU support and can access your GPU via Keras, you will enjoy extremely fast
training times (in the order of 3-10 seconds per epoch, depending on your GPU).
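A hedged sketch of a LeNet-style model in Keras follows; it keeps the spirit of the original architecture but substitutes modern ReLU activations and max pooling for the original tanh activations and average pooling, so treat the layer choices as assumptions.

import tensorflow as tf

# LeNet-style CNN for 28x28 grayscale MNIST digits.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(6, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(84, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()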
Question Bank:
EXPERIMENT NO. 8
Marks:
Objective:
Students would be able to build and train an AlexNet model on the CIFAR-10 dataset.
Problem Statement:
CIFAR-10 dataset consists of 60,000 RGB images of size 32x32. The images belong to objects of 10 classes
such as frogs, horses, ships, trucks etc. The dataset is divided into 50,000 training images and 10,000 testing
images. Among the training images, we used 49,000 images for training and 1000 images for validation.
AlexNet was designed by Alex Krizhevsky together with Ilya Sutskever and his advisor Geoffrey E. Hinton, and it won the 2012 ImageNet competition. It was after that year that more and deeper neural networks were proposed, such as VGG and GoogLeNet. The authors trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes. The publicly released reference model reaches a top-1 accuracy of about 57.1% and a top-5 accuracy of about 80.2%, which was already quite outstanding compared with traditional machine learning classification algorithms.
Question Bank:
Student Work Area
Algorithm/Flowchart/Code/Sample Outputs
EXPERIMENT NO. 9
Marks:
Objective:
The objective is to identify (predict) different fashion products from the given images using a CNN model
on Fashion-MNIST Dataset.
Outcome:
Students would be able to build and train CNN model for the Fashion MNIST dataset.
Problem Statement:
To build an image classifier with Keras and Convolutional Neural Networks for the Fashion MNIST dataset.
Background Study:
The Fashion-MNIST clothing classification problem is a new standard dataset used in computer vision and
deep learning. Fashion-MNIST is a dataset of Zalando's fashion article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each instance is a 28×28 grayscale image, associated with a label from one of 10 classes.
Question Bank:
EXPERIMENT NO. 10
Marks:
Objective:
To implement a deep learning model for image classification using CIFAR-10 dataset
Outcome:
The CIFAR-10 dataset is commonly used in deep learning for testing image classification models. It has 60,000 colour images spanning 10 different classes. The image size is 32x32, and the dataset has 50,000 training images and 10,000 test images.
Question Bank:
EXPERIMENT NO. 11
Marks:
Objective:
Autoencoder is a type of neural network where the output layer has the same dimensionality as the input
layer. In simpler words, the number of output units in the output layer is equal to the number of input units
in the input layer. An autoencoder replicates the data from the input to the output in an unsupervised manner
and is therefore sometimes referred to as a replicator neural network.
The autoencoders reconstruct each dimension of the input by passing it through the network. It may seem
trivial to use a neural network for the purpose of replicating the input, but during the replication process, the
size of the input is reduced to a smaller representation. The middle layers of the neural network have fewer units than the input and output layers; therefore, the middle layers hold the reduced representation of the input. The output is reconstructed from this reduced representation of the input.
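A minimal sketch of a dense autoencoder on MNIST follows; the 32-unit bottleneck, layer sizes and training settings are illustrative assumptions.

import tensorflow as tf

# Flattened MNIST digits scaled to [0, 1]; labels are not needed for an autoencoder.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
x_test = x_test.reshape(-1, 784) / 255.0

# Encoder compresses 784 inputs into a 32-unit code; decoder reconstructs 784 outputs.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),   # bottleneck / reduced representation
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Input and target are the same: the network learns to replicate its input.
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))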
Question Bank:
EXPERIMENT NO. 12
Objective:
An autoencoder can be defined as a neural network whose primary purpose is to learn the underlying
manifold or the feature space in the dataset. An autoencoder tries to reconstruct the inputs at the outputs.
Unlike other non-linear dimensionality reduction methods, autoencoders do not strive to preserve a single property such as distance (MDS) or topology (LLE). An autoencoder generally consists of two parts: an encoder, which transforms the input into a hidden code, and a decoder, which reconstructs the input from the hidden code.
Question Bank:
EXPERIMENT NO. 13
Marks:
Objective:
Students would be able to do the practical implementation of classification using the convolutional neural
network and convolutional autoencoder.
Problem Statement:
The Fashion-MNIST dataset consists of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images, and the test set has 10,000 images. Fashion-MNIST is intended as a drop-in replacement for the original MNIST dataset: the image dimensions and the training and test splits are the same as in the original MNIST dataset.
Autoencoders are a widely used unsupervised application of neural networks whose original purpose is to find latent lower-dimensional state-spaces of datasets, but they are also capable of solving other problems, such as image denoising, enhancement or colourization.
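A hedged sketch of a convolutional autoencoder for 28x28 Fashion-MNIST images follows; the filter counts and the use of strided Conv2D/Conv2DTranspose layers are illustrative design choices.

import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Encoder downsamples with strided convolutions; decoder upsamples back to 28x28x1.
conv_autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
conv_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
conv_autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                     validation_data=(x_test, x_test))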
Question Bank:
EXPERIMENT NO. 14
Marks:
Objective:
1. To understand the concept of recurrent neural network.
2. To use autoencoders for dimensionality reduction.
Outcome:
A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes
form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behaviour.
Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable
length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected
handwriting recognition or speech recognition.
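As a minimal illustration of processing a sequence with an RNN, the sketch below predicts the next value of a sine wave from the previous 20 values; the toy task, window length and layer sizes are all assumptions for demonstration.

import numpy as np
import tensorflow as tf

# Toy sequence task: predict the next point of a sine wave from a 20-step window.
series = np.sin(np.arange(0, 100, 0.1))
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]  # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.SimpleRNN(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.predict(X[:1]))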
Question Bank:
EXPERIMENT NO. 15
Marks:
Objective:
Question Bank:
EXPERIMENT NO. 16
Marks:
Objective:
Using LSTM to predict the future weather of a city using weather-data from several other cities.
Background Study:
Long Short-Term Memory (LSTM) is a kind of recurrent neural network. In an RNN, the output from the previous step is fed as input to the current step. LSTM was designed by Hochreiter & Schmidhuber and tackles the long-term dependency problem of RNNs: a plain RNN cannot use information stored far back in the sequence and gives accurate predictions only from recent information, and its performance degrades as the gap length increases. An LSTM can, by design, retain information for long periods of time. It is used for processing, predicting and classifying on the basis of time-series data.
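A minimal sketch of an LSTM time-series model follows; the synthetic temperature-like series, window size and layer width are placeholders for the real multi-city weather data, which is not specified here.

import numpy as np
import tensorflow as tf

# Synthetic daily "temperature" series standing in for real weather data.
temps = 20 + 10 * np.sin(np.arange(1000) * 2 * np.pi / 365) + np.random.randn(1000)
window = 30
X = np.array([temps[i:i + window] for i in range(len(temps) - window)])[..., None]
y = temps[window:]

# LSTM regression model: predict the next day's value from the previous 30 days.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.1, verbose=0)
print("next-day prediction:", float(model.predict(X[-1:])[0, 0]))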
Question Bank:
EXPERIMENT NO. 17
Marks:
Objective:
NLTK contains a module called tokenize, which provides two main sub-categories of tokenizers:
Word tokenize: the word_tokenize() method splits a sentence into tokens or words.
Sentence tokenize: the sent_tokenize() method splits a document or paragraph into sentences.
Stemming is the process of reducing morphological variants of a word to a common root/base form. Stemming programs are commonly referred to as stemming algorithms or stemmers. A stemming algorithm reduces the words “chocolates”, “chocolatey” and “choco” to the root word “chocolate”, and reduces “retrieval”, “retrieved” and “retrieves” to the stem “retrieve”.
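A minimal sketch using NLTK follows; the sample sentence is arbitrary, and the punkt tokenizer data must be downloaded once before word_tokenize/sent_tokenize will work.

import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.stem import PorterStemmer

nltk.download("punkt")  # one-time download of the tokenizer models

text = "Chocolates were retrieved quickly. The retrieval was surprisingly easy."
print(sent_tokenize(text))    # split into sentences
tokens = word_tokenize(text)  # split into words and punctuation
print(tokens)

stemmer = PorterStemmer()
print([stemmer.stem(t) for t in tokens])  # e.g. 'retrieval' -> 'retriev'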
Question Bank:
EXPERIMENT NO. 18
Marks:
Objective:
1. To understand the concept of lemmatization and stopwords for efficient text classification.
2. To use NLTK and spaCy to implement lemmatization and remove stopwords for text classification.
Outcome:
Students would be able to apply lemmatization and remove stopwords on text data using both the NLTK and spaCy libraries.
Problem Statement:
The spaCy library is one of the most popular NLP libraries along with NLTK. The basic difference
between the two libraries is the fact that NLTK contains a wide variety of algorithms to solve one problem
whereas spaCy contains only one, but the best algorithm to solve a problem.
NLTK was released back in 2001, while spaCy is relatively new and was developed in 2015. In these experiments we will mostly be dealing with spaCy, owing to its state-of-the-art nature. However, we will also use NLTK when it is easier to perform a task with NLTK rather than spaCy.
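A minimal sketch with spaCy follows; it assumes the small English model en_core_web_sm has been installed (python -m spacy download en_core_web_sm), and the sample sentence and printed output are only illustrative.

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The striped bats were hanging on their feet and eating best juicy leaves")

# Lemmatize every token and drop stop words and punctuation.
filtered = [token.lemma_ for token in doc if not token.is_stop and not token.is_punct]
print(filtered)  # e.g. ['striped', 'bat', 'hang', 'foot', 'eat', 'good', 'juicy', 'leaf']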
Question Bank:
EXPERIMENT NO. 19
Faculty Signature:
Marks:
Objective:
Students would be able to understand components of Part-of-Speech (POS) tagging for text classification.
Problem Statement:
Parts-of-speech tagging simply refers to assigning parts of speech to individual words in a sentence. This means that, unlike phrase matching, which is performed at the sentence or multi-word level, parts-of-speech tagging is performed at the token level.
Parts-of-speech tags are properties of words that define their main context, function, and usage in a sentence. Some of the commonly used parts-of-speech tags are:
Adjectives and Adverbs: these act as modifiers, quantifiers, or intensifiers in a sentence.
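A minimal sketch of POS tagging with spaCy follows, again assuming the en_core_web_sm model is installed; the example sentence is arbitrary.

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox quickly jumps over the lazy dog")

# Print each token with its coarse POS tag and its detailed tag.
for token in doc:
    print(f"{token.text:10} {token.pos_:6} {token.tag_}")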
Question Bank:
EXPERIMENT NO. 20
Marks:
Objective:
Students would be able to apply the Bag-of-Words (BOW) and Bag-of-N-grams models for text classification.
Problem Statement:
Bag of words is a Natural Language Processing technique of text modelling. In technical terms, we can say
that it is a method of feature extraction with text data. This approach is a simple and flexible way of
extracting features from documents.
A bag of words is a representation of text that describes the occurrence of words within a document. We
just keep track of word counts and disregard the grammatical details and the word order. It is called a “bag”
of words because any information about the order or structure of words in the document is discarded. The
model is only concerned with whether known words occur in the document, not where in the document.
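A minimal sketch using scikit-learn's CountVectorizer follows; the toy corpus is arbitrary, and setting ngram_range=(1, 2) produces a bag of unigrams and bigrams rather than plain bag of words.

from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

# Bag of words would use ngram_range=(1, 1); here we also include bigrams.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(corpus)

print(vectorizer.get_feature_names_out())  # learned vocabulary of 1- and 2-grams
print(X.toarray())                         # word/ngram counts per document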
Question Bank:
EXPERIMENT NO. 21
Faculty Signature:
Marks:
Objective:
In recent times, the internet and social media have become the fastest and easiest ways to get information.
Today messages, reviews and opinions have become a significant source of information. In this era, Short
message service or SMS is considered one of the most powerful means of communication. As the
dependence on mobile devices has drastically increased over the period of time it has led to an increased
number of attacks in the form of SMS spam. Thanks to advancements in technology, we are now able to extract meaningful information from such data using various artificial intelligence techniques. In order to deal with such problems, Natural Language Processing, a part of data science, is used to provide valuable insights.
Question Bank:
EXPERIMENT NO. 22
Marks:
Objective:
Students would be able to classify images using the concept of transfer learning.
Problem Statement:
To implement transfer learning using the pre-trained model (MobileNet V2) for image classification.
Background Study:
Transfer Learning is an approach where we use one model trained on a machine learning task and
reuse it as a starting point for a different job. Multiple deep learning domains use this approach,
including Image Classification, Natural Language Processing, and even Gaming! The ability to adapt a
trained model to another task is incredibly valuable.
The MobileNet V2 model was developed at Google and pre-trained on the ImageNet dataset, which contains 1.4M images and 1000 classes of web images.
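A minimal sketch of feature-extraction transfer learning with MobileNetV2 follows; the 160x160 input size, the binary head and the frozen base are typical but illustrative choices, and train_ds/val_ds stand for hypothetical tf.data image datasets prepared earlier.

import tensorflow as tf

# Pre-trained MobileNetV2 without its ImageNet classification head.
base = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained convolutional base

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(160, 160, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds/val_ds: your prepared datasets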
Question Bank:
EXPERIMENT NO. 23
Marks:
Objective:
Students would be able to use a pre-trained model (VGG16) via transfer learning.
Problem Statement:
To implement transfer learning using the pre-trained model (VGG16) on image dataset.
Background Study:
Transfer learning is simply the process of using a pre-trained model that has been trained on a dataset for
training and predicting on a new given dataset.
“A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-
scale image-classification task.”
VGG16 is a convolutional neural network model proposed by K. Simonyan and A. Zisserman from the
University of Oxford in the paper “Very Deep Convolutional Networks for Large-Scale Image Recognition”.
The model achieves 92.7% top-5 test accuracy in ImageNet, which is a dataset of over 14 million images
belonging to 1000 classes.
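A hedged sketch of using VGG16 as a frozen feature extractor with a new classification head follows; the 10-class head, the 224x224 input size and the dense layer width are illustrative assumptions, and preprocess_input applies the VGG-specific normalisation.

import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# VGG16 convolutional base pre-trained on ImageNet, without the top dense layers.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained weights fixed

inputs = tf.keras.Input(shape=(224, 224, 3))
x = preprocess_input(inputs)          # VGG-style channel-wise mean subtraction
x = base(x, training=False)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(256, activation="relu")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # hypothetical 10-class head

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # with an image dataset of your choice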
Question Bank:
Annexure 2
DEEP LEARNING
(CSL 312)
PROJECT REPORT
Roll No.:
Semester:
Group:
CONTENTS
1 Project Description
2 Problem Statement
3 Analysis
4 Design
6 Output (Screenshots)