Important Topics in Deep Learning (with Brief Explanation)
1. Introduction to Deep Learning
• A subset of machine learning that uses neural networks with many layers to learn
from data.
• Loosely inspired by the structure and function of the human brain.
2. Artificial Neural Networks (ANN)
• Basic unit: the neuron (the classic single-neuron model is the perceptron).
• Consists of input layer, hidden layers, and output layer.
• Used for tasks like classification, regression, etc.
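A minimal sketch of such a network in PyTorch (one of the frameworks from topic 15); the layer sizes here are arbitrary examples, not values from these notes:

```python
import torch.nn as nn

# A small multi-layer perceptron: input layer -> hidden layer -> output layer.
# The sizes (4 inputs, 8 hidden units, 3 outputs) are arbitrary examples.
model = nn.Sequential(
    nn.Linear(4, 8),   # input -> hidden
    nn.ReLU(),         # activation function (see topic 3)
    nn.Linear(8, 3),   # hidden -> output (e.g. 3 classes)
)
```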
3. Activation Functions
• Transform a neuron's weighted input into its output, deciding how strongly it "fires" and introducing non-linearity.
• Common types:
o Sigmoid: Smooth, squashes values between 0 and 1.
o ReLU (Rectified Linear Unit): Outputs max(0, x); cheap to compute and helps gradients flow in deep networks.
o Tanh, Leaky ReLU, and Softmax (which turns raw scores into class probabilities for classification); see the sketches below.
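Minimal NumPy sketches of three of these, assuming 1-D inputs for simplicity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # squashes values into (0, 1)

def relu(x):
    return np.maximum(0.0, x)          # zero for negatives, identity otherwise

def softmax(x):
    e = np.exp(x - np.max(x))          # subtract max for numerical stability
    return e / e.sum()                 # non-negative outputs that sum to 1

print(relu(np.array([-2.0, 0.5])))     # [0.  0.5]
```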
4. Forward and Backward Propagation
• Forward Propagation: Data flows through the network to make predictions.
• Backward Propagation: The prediction error is propagated back through the network to compute gradients, which Gradient Descent then uses to update the weights.
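One forward/backward pass on a single linear neuron, sketched with PyTorch autograd (all numbers are arbitrary examples):

```python
import torch

x = torch.tensor([1.0, 2.0])                       # inputs
w = torch.tensor([0.5, -0.3], requires_grad=True)  # weights to learn
target = torch.tensor(1.0)

y = (w * x).sum()            # forward propagation: compute the prediction
loss = (y - target) ** 2     # squared error (see topic 5)
loss.backward()              # backward propagation: fills w.grad with dloss/dw

with torch.no_grad():
    w -= 0.1 * w.grad        # one gradient-descent update (see topic 6)
```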
5. Loss Functions
• A loss function measures how far the prediction is from the actual result.
• Examples:
o MSE (Mean Squared Error): For regression.
o Cross-Entropy Loss: For classification.
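Both losses sketched with PyTorch's built-ins; the numbers are arbitrary examples:

```python
import torch
import torch.nn as nn

# MSE for regression: the mean of squared differences.
pred = torch.tensor([2.5, 0.0])
target = torch.tensor([3.0, -0.5])
print(nn.MSELoss()(pred, target))          # (0.5^2 + 0.5^2) / 2 = 0.25

# Cross-entropy for classification: takes raw scores (logits) and a class index.
logits = torch.tensor([[2.0, 0.5, 0.1]])   # scores for 3 classes, batch of 1
label = torch.tensor([0])                  # the true class is class 0
print(nn.CrossEntropyLoss()(logits, label))
```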
6. Gradient Descent & Optimization Algorithms
• Optimizers update weights to minimize loss.
• Types:
o SGD (Stochastic Gradient Descent)
o Adam (Adaptive Moment Estimation)
o RMSProp
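A sketch of one training step in PyTorch; swapping optimizers only changes one line, and the learning rates shown are arbitrary examples:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Alternatives with the same interface:
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)

x, y = torch.randn(8, 4), torch.randn(8, 1)   # a dummy batch
loss = nn.MSELoss()(model(x), y)
optimizer.zero_grad()   # clear gradients from the previous step
loss.backward()         # backpropagate to compute fresh gradients
optimizer.step()        # update the weights to reduce the loss
```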
7. Convolutional Neural Networks (CNNs)
• Mainly used for image processing and computer vision.
• Layers include:
o Convolution Layer
o Pooling Layer
o Flatten Layer
o Fully Connected Layer
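That layer order sketched in PyTorch, assuming 28x28 grayscale inputs (the channel counts and class count are arbitrary examples):

```python
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer: 28x28 -> 14x14
    nn.Flatten(),                                # flatten layer
    nn.Linear(16 * 14 * 14, 10),                 # fully connected layer (10 classes)
)
```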
8. Recurrent Neural Networks (RNNs)
• Used for sequential data (like time series, speech, or text).
• Maintains a memory of previous inputs.
• Problem: vanishing gradients over long sequences (mitigated by LSTM/GRU; see topic 9).
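The "memory" is just a hidden state carried between steps; a bare-bones NumPy sketch (the weight shapes are arbitrary examples):

```python
import numpy as np

W_x = np.random.randn(8, 4)   # input -> hidden weights
W_h = np.random.randn(8, 8)   # hidden -> hidden (the recurrent memory path)
h = np.zeros(8)               # initial hidden state

for x in np.random.randn(5, 4):       # a sequence of 5 inputs
    h = np.tanh(W_x @ x + W_h @ h)    # new state mixes input with old state
```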
9. Long Short-Term Memory (LSTM) & GRU
• Gated variants of RNNs that preserve information across long sequences (long-term dependencies).
• Used in language modeling, stock price prediction, etc.
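In PyTorch both are drop-in layers; the sizes below are arbitrary examples:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)
x = torch.randn(2, 5, 4)       # batch of 2 sequences, 5 steps, 4 features each
out, (h_n, c_n) = lstm(x)      # gates decide what to keep or forget over time

# GRU: same interface, but a single hidden state instead of (h, c).
gru = nn.GRU(input_size=4, hidden_size=8, batch_first=True)
```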
10. Transfer Learning
• Using a pre-trained model (like VGG, ResNet) on a new problem to save time and
resources.
• Common in image classification and NLP.
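A common pattern sketched with torchvision (the weights API of recent versions; the 5-class output head is an arbitrary example):

```python
import torch.nn as nn
from torchvision import models

# Load a ResNet pre-trained on ImageNet and freeze its learned features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace only the final layer, which will be trained on the new problem.
model.fc = nn.Linear(model.fc.in_features, 5)
```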
11. Autoencoders
• Neural networks that learn efficient data representations (encoding).
• Useful for dimensionality reduction and anomaly detection.
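A minimal sketch, assuming flattened 28x28 images (784 values) compressed to a 32-dimensional code; the sizes are arbitrary examples:

```python
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())     # compress to the code
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())  # reconstruct the input
autoencoder = nn.Sequential(encoder, decoder)
# Trained to reproduce its own input, e.g. nn.MSELoss()(autoencoder(x), x).
```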
12. Generative Adversarial Networks (GANs)
• Two competing networks: a Generator that produces fake samples and a Discriminator that tries to tell them apart from real data (sketched below).
• Used to generate realistic images, text, and more.
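A structural sketch of the two networks in PyTorch (layer sizes are arbitrary examples; the training loop is omitted):

```python
import torch.nn as nn

# Generator: random noise in, fake sample out (e.g. a flattened 28x28 image).
generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())

# Discriminator: sample in, probability that it is real out.
discriminator = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

# Training alternates: the discriminator learns to spot fakes,
# while the generator learns to fool it.
```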
13. Hyperparameter Tuning
• Adjusting settings that are not learned during training, such as the learning rate, batch size, and number of layers, to improve performance.
• Techniques: Grid Search, Random Search, Bayesian Optimization.
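A bare-bones random-search sketch; train_and_evaluate is a hypothetical stub standing in for a real training run, and the search-space values are arbitrary examples:

```python
import random

def train_and_evaluate(lr, batch_size):
    # Stub standing in for a real training run; returns a fake validation score.
    return random.random()

search_space = {"lr": [1e-4, 1e-3, 1e-2], "batch_size": [16, 32, 64]}

best_score, best_params = -1.0, None
for _ in range(10):                      # 10 random trials
    params = {k: random.choice(v) for k, v in search_space.items()}
    score = train_and_evaluate(**params)
    if score > best_score:
        best_score, best_params = score, params
```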
14. Regularization Techniques
• Prevent overfitting.
• Dropout: Randomly disables neurons during training so the network does not over-rely on any single unit.
• L1/L2 Regularization: Add a penalty on weight magnitudes to the loss (L1 encourages sparsity; L2 shrinks weights toward zero); both are sketched below.
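Both techniques sketched in PyTorch; the dropout rate and penalty strength are arbitrary examples:

```python
import torch
import torch.nn as nn

# Dropout: zeroes a random 50% of activations, during training only.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(8, 1))

# L2 regularization is commonly applied as "weight decay" in the optimizer.
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-4)
```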
15. Frameworks and Libraries
• Most common:
o TensorFlow
o Keras
o PyTorch
o Theano (older; no longer actively developed)
• These tools provide APIs to build and train deep learning models.
16. Applications of Deep Learning
• Computer Vision (face recognition, object detection)
• NLP (language translation, chatbots)
• Speech Recognition (e.g., Siri, Alexa)
• Healthcare (cancer detection)
• Self-driving cars