MLR Institute of Technology (Autonomous) R22
DEEP LEARNING
IV B. TECH- I SEMESTER
Course Code: A6AI17    Category: PCC
Hours / Week: L 3, T 0, P 0    Credits: 3
Maximum Marks: CIE 40, SEE 60, Total 100
Contact Classes: 60    Tutorial Classes: Nil    Practical Classes: Nil    Total Classes: 60
Course Objectives
The course should enable the students to:
1. Learn deep learning techniques and their applications.
2. Acquire knowledge of neural network architectures, deep learning methods, and algorithms.
3. Understand CNN and RNN algorithms and their applications.
Course Outcomes
At the end of the course, the student will be able to:
1. Understand various learning models.
2. Design and develop various neural network architectures.
3. Understand approximate reasoning using Convolutional Neural Networks.
4. Analyze and design deep learning algorithms for different applications.
5. Apply CNN and RNN techniques to solve different applications.
UNIT-I BASICS Classes: 12
Historical Trends in Deep Learning, McCulloch-Pitts Neuron, Thresholding Logic, Perceptron, Single-Layer Perceptron,
Multi-Layer Perceptron, Representation Power of MLPs, Maximum Likelihood Estimation, Sigmoid Neurons, Gradient
Descent, Feedforward Neural Networks, Curse of Dimensionality.
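To give a concrete feel for the Unit-I topics of thresholding logic, the perceptron, and its learning rule, the following is a minimal illustrative sketch (not prescribed by the syllabus; all function names and parameters are the author's own choices) of a single-layer perceptron trained on the AND function:

```python
# Illustrative sketch: single-layer perceptron with the classic
# perceptron learning rule, trained on the AND truth table.

def step(z):
    """Thresholding logic: fire (1) if the weighted sum exceeds 0."""
    return 1 if z > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = target - y
            # Perceptron update: move weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND truth table
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron converges; XOR would not, which motivates the multi-layer perceptrons covered later in the unit.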
UNIT-II INTRODUCTION TO DEEP LEARNING Classes: 12
Learning Algorithms and Motivation of Deep Learning, Gradient-Based Learning, Multi-Layer Perceptron, Back-
propagation, Vanishing Gradient Problem, Capacity, Overfitting and Underfitting, Activation Functions: ReLU, LReLU,
EReLU; Regularization: Dropout, DropConnect; Optimization Methods for Neural Networks:
Adagrad, Adadelta, RMSProp, Adam, NAG.
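Among the Unit-II optimization methods, Adam combines momentum-style first-moment estimates with RMSProp-style second-moment scaling. The following is an illustrative one-dimensional sketch (not from the syllabus; `adam_minimise` and the toy loss are the author's assumptions) minimising f(w) = (w − 3)²:

```python
import math

# Illustrative sketch: scalar Adam optimizer on a toy quadratic loss.
def adam_minimise(grad, w0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=500):
    w, m, v = w0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g        # first-moment (momentum) estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

# Gradient of f(w) = (w - 3)^2 is 2(w - 3); the minimum is at w = 3.
w_star = adam_minimise(lambda w: 2 * (w - 3), w0=0.0)
print(w_star)
```

The bias-correction terms matter early in training, when the moving averages m and v are still biased toward their zero initialization.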
UNIT-III AUTOENCODERS & REGULARIZATION Classes: 12
Autoencoders: Autoencoders, Regularized Autoencoders, Denoising Autoencoders, Representational Power, Layer
Size and Depth of Autoencoders, Stochastic Encoders and Decoders, Contractive Autoencoders.
Regularization: Bias-Variance Tradeoff, L2 Regularization, Early Stopping, Dataset Augmentation,
Parameter Sharing and Tying, Injecting Noise at Input, Ensemble Methods, Dropout, Greedy Layer-wise Pre-training,
Better Activation Functions, Better Weight Initialization Methods, Batch Normalization.
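The effect of L2 regularization from this unit can be seen directly in the gradient-descent update: the penalty 0.5·λ·w² adds λ·w to the gradient, shrinking weights toward zero each step. A minimal illustrative sketch (the names `gd_step` and `lam` are the author's assumptions, not syllabus terminology):

```python
# Illustrative sketch: weight decay from L2 regularization.
def gd_step(w, grad, lr=0.1, lam=0.0):
    # The L2 penalty 0.5 * lam * w**2 contributes lam * w to the gradient.
    return w - lr * (grad + lam * w)

# With a zero data gradient, only the regularizer acts on the weight:
w_plain = w_reg = 1.0
for _ in range(100):
    w_plain = gd_step(w_plain, grad=0.0, lam=0.0)  # unregularized: unchanged
    w_reg = gd_step(w_reg, grad=0.0, lam=0.5)      # decays toward zero
print(w_plain, w_reg)
```

After 100 steps the regularized weight is (1 − lr·λ)¹⁰⁰ ≈ 0.006, while the unregularized weight stays at 1.0, illustrating the bias-variance tradeoff: regularization trades some fit (bias) for smaller, less variance-prone weights.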
UNIT-IV CONVOLUTIONAL NEURAL NETWORKS Classes: 12
Overview of Convolutional Neural Network Architecture: Motivation, Layers, Kernels, Convolution Operation, Padding,
Stride, Pooling, Non-linear Layer, Stacking Layers; Popular CNN Architectures: LeNet, AlexNet, ZFNet, VGGNet.
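The convolution operation central to this unit slides a kernel over the input and takes a weighted sum at each position. A minimal illustrative sketch (valid convolution, stride 1, no padding; as in most CNN libraries this is technically cross-correlation, since the kernel is not flipped):

```python
# Illustrative sketch: valid 2-D convolution (cross-correlation),
# stride 1, no padding. Output size is (H - kH + 1) x (W - kW + 1).
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
k = [[1, 0],
     [0, -1]]  # simple diagonal-difference kernel
print(conv2d(img, k))  # → [[-4, -4], [-4, -4]]
```

Padding the input would preserve the spatial size, and a stride greater than 1 would skip positions, downsampling the output, which is exactly the role of the Padding and Stride topics above.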
UNIT-V RECURRENT NEURAL NETWORKS Classes: 12
Recurrent Neural Networks, Bidirectional RNNs, Encoder-Decoder Sequence-to-Sequence Architectures, Deep
Recurrent Networks, Recursive Neural Networks, Long Short-Term Memory Networks.
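The defining feature of the recurrent networks in this unit is the hidden-state recurrence h_t = tanh(W_x·x_t + W_h·h_{t−1} + b). A minimal one-dimensional illustrative sketch (weights and names are the author's assumptions, not syllabus material):

```python
import math

# Illustrative sketch: forward pass of a vanilla (scalar) RNN.
# The hidden state h carries information from earlier time steps.
def rnn_forward(xs, w_x=0.5, w_h=0.8, b=0.0):
    h = 0.0
    states = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)  # recurrence over time
        states.append(h)
    return states

# A single impulse at t=0 leaves a decaying trace in later hidden states.
states = rnn_forward([1.0, 0.0, 0.0])
print(states)
```

With |w_h| < 1 the impulse fades geometrically, a small-scale view of the vanishing-gradient problem that motivates the LSTM networks listed above.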
Text Books:
1. Goodfellow, I., Bengio, Y., and Courville, A., "Deep Learning", MIT Press, 2016.
Reference Books:
1. Tom M. Mitchell, "Machine Learning", McGraw Hill, 1997.
2. Stephen Marsland, "Machine Learning: An Algorithmic Perspective", CRC Press, 2009.
3. LiMin Fu, "Neural Networks in Computer Intelligence", McGraw-Hill, 1994.