Deep Learning (Micro Syllabus)

The document outlines the Deep Learning course offered at MLR Institute of Technology, detailing course objectives, outcomes, and a structured syllabus divided into five units. Key topics include neural network architectures, deep learning algorithms, convolutional and recurrent neural networks, and various learning models. The course aims to equip students with practical skills in designing and applying deep learning techniques across different applications.


MLR Institute of Technology (Autonomous) R22

DEEP LEARNING

IV B.TECH - I SEMESTER

Course Code: A6AI17 | Category: PCC | Hours/Week (L-T-P): 3-0-0 | Credits: 3
Maximum Marks: CIE 40 | SEE 60 | Total 100
Contact Classes: 60 | Tutorial Classes: Nil | Practical Classes: Nil | Total Classes: 60

Course Objectives
The course should enable the students to:
1. Learn deep learning techniques and their applications.
2. Acquire knowledge of neural network architectures and of deep learning methods and algorithms.
3. Understand CNN and RNN algorithms and their applications.

Course Outcomes
At the end of the course, the student will be able to:

1. Understand various learning models.
2. Design and develop various neural network architectures.
3. Understand approximate reasoning using convolutional neural networks.
4. Analyze and design deep learning algorithms for different applications.
5. Apply CNN and RNN techniques to solve problems in different applications.

UNIT-I BASICS Classes: 12


Historical Trends in Deep Learning: history of deep learning; McCulloch-Pitts Neuron: definition, properties, limitations, significance; Thresholding Logic: how thresholding works, limitations of thresholding logic, uses of threshold logic; Perceptron: definition, structure of a perceptron, key components, significance of the perceptron, the learning rate η, the bias input, the perceptron learning algorithm; Single-Layer Perceptron: structure and significance of a single-layer perceptron; Multilayer Perceptron (MLP): structure of an MLP, layers, activation functions in MLPs, amount of training data, limitations of MLPs, when to stop learning, classification with the MLP, representation power of MLPs; maximum likelihood estimation; sigmoid neurons; Gradient Descent: definition, purpose; feedforward neural networks; the curse of dimensionality.
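
To make the perceptron learning algorithm of this unit concrete, a minimal NumPy sketch follows; the AND-gate data, the epoch count, and the zero initialization are illustrative choices, not part of the syllabus.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
t = np.array([0, 0, 0, 1], dtype=float)                      # AND-gate targets

eta = 0.1        # the learning rate η of the syllabus
w = np.zeros(2)  # weights
b = 0.0          # the bias input

for epoch in range(20):
    for x, target in zip(X, t):
        y = 1.0 if np.dot(w, x) + b > 0 else 0.0  # thresholding logic
        w += eta * (target - y) * x               # perceptron update rule
        b += eta * (target - y)

print([int(np.dot(w, x) + b > 0) for x in X])     # -> [0, 0, 0, 1]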

UNIT-II INTRODUCTION TO DEEP LEARNING Classes: 12


Learning algorithms and the motivation for deep learning: traditional machine learning vs. deep learning; Gradient-Based Learning: cost functions; batch, mini-batch, and stochastic gradient descent (SGD); learning conditional statistics; the multilayer perceptron; Backpropagation: computational graphs, the chain rule of calculus, requirements of an activation function, backpropagation in a fully connected MLP, the vanishing-gradient problem; capacity, overfitting, and underfitting; Activation Functions: ReLU, Leaky ReLU, EReLU, Tanh, Sigmoid, Softmax; Regularization: dropout, DropConnect; Optimization Methods for Neural Networks: Adagrad, Adadelta, RMSProp, Adam, NAG.
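
As an illustration of gradient-based learning and backpropagation, the sketch below trains a one-hidden-layer MLP on XOR with full-batch gradient descent in NumPy; the dataset, layer sizes, learning rate, and cross-entropy cost are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)       # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)   # hidden -> output
eta = 0.5                                             # learning rate

for step in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    y = sigmoid(h @ W2 + b2)
    dy = y - t                                # cross-entropy cost + sigmoid output
    dh = (dy @ W2.T) * h * (1.0 - h)          # chain rule through the hidden layer
    W2 -= eta * h.T @ dy; b2 -= eta * dy.sum(axis=0)  # gradient-descent updates
    W1 -= eta * X.T @ dh; b1 -= eta * dh.sum(axis=0)

print(y.round(2).ravel())                     # typically approaches [0, 1, 1, 0]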

UNIT-III AUTOENCODERS & REGULARIZATION Classes: 12


Autoencoders: autoencoders and their purpose, regularized autoencoders, denoising autoencoders, representational power, layer size and depth of autoencoders, stochastic encoders and decoders, contractive encoders, applications of autoencoders.
Regularization: the bias-variance tradeoff, L1 regularization, L2 regularization, early stopping, dataset augmentation (image and text data augmentation), parameter sharing and tying, injecting noise at the input, ensemble methods, dropout, greedy layer-wise pre-training, better activation functions, better weight-initialization methods, batch normalization.
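
A compact sketch of a denoising autoencoder with an L2 penalty, assuming PyTorch is available; the layer sizes, noise level, and random stand-in batch are hypothetical placeholders for real image data.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())
model = nn.Sequential(encoder, decoder)

# weight_decay adds the L2 regularization penalty discussed in this unit
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
loss_fn = nn.MSELoss()

x = torch.rand(32, 784)                      # stand-in for a batch of flattened images
for step in range(100):
    noisy = x + 0.3 * torch.randn_like(x)    # corrupt the input (denoising setup)
    recon = model(noisy)
    loss = loss_fn(recon, x)                 # reconstruct the clean input
    opt.zero_grad(); loss.backward(); opt.step()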

UNIT-IV CONVOLUTIONAL NEURAL NETWORKS Classes: 12


Overview of Convolutional Neural Networks: architecture, motivation; Layers: convolutional layer, activation layer, pooling layer, batch normalization layer, dropout layer, fully connected (FC) layer; kernels and the convolution operation; Padding: no padding, zero padding, full padding; stride; Pooling: max pooling, average pooling; the non-linear layer; stacking layers; Popular CNN Architectures: LeNet, AlexNet, ZFNet, VGGNet.
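
The layer types listed above can be stacked into a LeNet-flavoured network; the PyTorch sketch below is illustrative only, assuming 28x28 single-channel inputs and made-up channel counts.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2),  # zero padding keeps 28x28
    nn.BatchNorm2d(6),                          # batch normalization layer
    nn.ReLU(),                                  # activation (non-linear) layer
    nn.MaxPool2d(2),                            # max pooling, stride 2 -> 14x14
    nn.Conv2d(6, 16, kernel_size=5),            # no padding -> 10x10
    nn.ReLU(),
    nn.AvgPool2d(2),                            # average pooling -> 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),                 # fully connected (FC) layer
    nn.ReLU(),
    nn.Dropout(0.5),                            # dropout layer
    nn.Linear(120, 10),                         # FC output layer
)

print(model(torch.randn(1, 1, 28, 28)).shape)   # torch.Size([1, 10])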

UNIT-V RECURRENT NEURAL NETWORKS Classes: 12


Recurrent Neural Networks: concept and architecture of RNNs, differences between feedforward and recurrent networks, techniques to mitigate vanishing gradients, applications of RNNs, variants of recurrent neural networks; Bidirectional RNNs: the concept of bidirectional processing, architecture (forward and backward passes), advantages, applications; Encoder-Decoder Sequence-to-Sequence Architectures: the concept of sequence-to-sequence learning, architecture of Seq2Seq models, roles of the encoder and decoder networks, training Seq2Seq models with RNNs and LSTMs, applications; Deep Recurrent Networks: stacking multiple RNN layers, benefits and challenges of deep RNNs, training strategies for deep RNNs; Recursive Neural Networks: tree-structured recursive networks, applications; Long Short-Term Memory (LSTM) Networks: structure of an LSTM, LSTM forward and backward propagation, LSTM gates, applications of LSTMs.
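
A skeletal LSTM encoder-decoder (Seq2Seq) in PyTorch, showing how the encoder's final state seeds the decoder; the vocabulary size, dimensions, and random batches are placeholders, and the teacher-forced training loop is omitted.

import torch
import torch.nn as nn

emb = nn.Embedding(100, 32)               # toy shared vocabulary of 100 tokens
encoder = nn.LSTM(32, 64, batch_first=True)
decoder = nn.LSTM(32, 64, batch_first=True)
out = nn.Linear(64, 100)                  # project decoder states to token logits

src = torch.randint(0, 100, (8, 12))      # batch of 8 source sequences, length 12
tgt = torch.randint(0, 100, (8, 10))      # batch of 8 target sequences, length 10

_, (h, c) = encoder(emb(src))             # encoder summarizes the source sequence
dec_out, _ = decoder(emb(tgt), (h, c))    # decoder starts from the encoder state
logits = out(dec_out)                     # next-token scores per position
print(logits.shape)                       # torch.Size([8, 10, 100])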

Text Books:

1. Goodfellow, I., Bengio, Y., and Courville, A., "Deep Learning", MIT Press, 2016.

Reference Books:

1. Tom M. Mitchell, "Machine Learning", McGraw-Hill, 1997.
2. Stephen Marsland, "Machine Learning: An Algorithmic Perspective", CRC Press, 2009.
3. LiMin Fu, "Neural Networks in Computer Intelligence", McGraw-Hill, 1994.
