
🤖 Deep Learning – Summary Notes

📘 I. What is Deep Learning?


Deep Learning is a subset of Machine Learning that uses neural networks with many layers
(deep neural networks) to learn from large amounts of data.

It excels in:

● Image recognition
● Natural language processing
● Speech recognition
● Autonomous systems

🧠 II. Key Concepts in Deep Learning


| Term | Description |
| --- | --- |
| Artificial Neural Network (ANN) | Network of connected nodes inspired by the human brain |
| Neuron (Node) | Basic unit of a neural network |
| Layer | A set of neurons; input, hidden, and output layers |
| Activation Function | Determines the output of a node (adds non-linearity) |
| Forward Propagation | Data flows from input to output |
| Loss Function | Measures error between prediction and target |
| Backpropagation | Algorithm to update weights using gradients |
| Epoch | One full pass over the training dataset |
| Batch Size | Number of samples processed at once |
| Learning Rate | Step size during optimization |

🏗️ III. Structure of a Neural Network


● Input Layer – Takes input features
● Hidden Layers – Perform computations
● Output Layer – Produces final predictions

🔁 Training Steps:
1. Initialize weights
2. Forward propagation
3. Compute loss
4. Backpropagation (using gradients)
5. Update weights (optimization)
6. Repeat
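
A minimal sketch of these six steps in PyTorch. The toy data, layer sizes, and learning rate are illustrative assumptions, not values from these notes:

```python
import torch
import torch.nn as nn

# Toy data: 100 samples, 4 input features, binary labels (illustrative)
X = torch.randn(100, 4)
y = torch.randint(0, 2, (100,)).float().unsqueeze(1)

# Step 1: weights are initialized automatically when layers are created
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):          # Step 6: repeat
    pred = model(X)              # Step 2: forward propagation
    loss = loss_fn(pred, y)      # Step 3: compute loss
    optimizer.zero_grad()        # clear gradients from the previous step
    loss.backward()              # Step 4: backpropagation (gradients)
    optimizer.step()             # Step 5: update weights
```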

🔄 IV. Types of Deep Learning Models


| Model Type | Purpose / Example |
| --- | --- |
| Feedforward Neural Network (FNN) | Basic model, no memory |
| Convolutional Neural Network (CNN) | Image processing, object detection |
| Recurrent Neural Network (RNN) | Sequence data, time series |
| Long Short-Term Memory (LSTM) | RNN variant with gating for longer-term memory |
| Transformer | NLP; the architecture behind GPT and BERT |
| Autoencoder | Data compression, anomaly detection |
| GAN (Generative Adversarial Network) | Generates new data (e.g., images) |
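
To make one row of this table concrete, here is a minimal CNN sketch in PyTorch. The input shape (28×28 grayscale, as in MNIST) and channel counts are assumptions for illustration:

```python
import torch.nn as nn

# Minimal CNN: conv -> pool -> conv -> pool -> fully connected classifier
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x28x28 -> 16x14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 16x14x14 -> 32x14x14
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x14x14 -> 32x7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # 10 output classes
)
```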

🧮 V. Activation Functions
| Function | Use / Formula |
| --- | --- |
| Sigmoid | 1 / (1 + e^-x), squashes output between 0 and 1 |
| Tanh | Squashes output between -1 and 1 |
| ReLU (Rectified Linear Unit) | max(0, x), fast and popular |
| Leaky ReLU | Allows a small gradient for x < 0 |
| Softmax | Converts outputs into probabilities (multi-class) |
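
A sketch of these formulas in NumPy, to make them concrete (frameworks ship built-in versions, e.g., torch.sigmoid and torch.softmax):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))           # squashes to (0, 1)

def tanh(x):
    return np.tanh(x)                     # squashes to (-1, 1)

def relu(x):
    return np.maximum(0, x)               # max(0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)  # small slope for x < 0

def softmax(x):
    e = np.exp(x - np.max(x))             # subtract max for numerical stability
    return e / e.sum()                    # probabilities summing to 1
```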

⚙️ VI. Optimization Algorithms


| Optimizer | Description |
| --- | --- |
| SGD (Stochastic Gradient Descent) | Updates weights from each mini-batch |
| Momentum | Accumulates past gradients to smooth and accelerate updates |
| Adam | Per-parameter adaptive learning rates; a popular default |
| RMSProp | Scales the learning rate by a moving average of squared gradients |
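
In PyTorch, each of these is a one-line construction; the model and learning rates below are illustrative, not tuned values:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)  # placeholder model for illustration

# Each optimizer is constructed the same way; pick one per training run
sgd      = torch.optim.SGD(model.parameters(), lr=0.01)
momentum = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
adam     = torch.optim.Adam(model.parameters(), lr=0.001)
rmsprop  = torch.optim.RMSprop(model.parameters(), lr=0.001)
```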

📉 VII. Loss Functions


| Task Type | Common Loss Functions |
| --- | --- |
| Classification | Cross-Entropy Loss |
| Regression | MSE (Mean Squared Error), MAE (Mean Absolute Error) |
| Binary Classification | Binary Cross-Entropy |
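
A sketch of these losses in PyTorch with dummy tensors (shapes and values are illustrative):

```python
import torch
import torch.nn as nn

# Multi-class classification: raw logits + integer class labels
logits  = torch.randn(8, 3)                     # batch of 8, 3 classes
targets = torch.randint(0, 3, (8,))
ce_loss = nn.CrossEntropyLoss()(logits, targets)

# Regression: predicted vs. true continuous values
pred, true = torch.randn(8, 1), torch.randn(8, 1)
mse_loss = nn.MSELoss()(pred, true)
mae_loss = nn.L1Loss()(pred, true)              # MAE is called L1Loss in PyTorch

# Binary classification: logits + 0/1 targets (numerically stable variant)
bin_logits  = torch.randn(8, 1)
bin_targets = torch.randint(0, 2, (8, 1)).float()
bce_loss = nn.BCEWithLogitsLoss()(bin_logits, bin_targets)
```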


📊 VIII. Common Deep Learning Tasks
| Task | Example Application |
| --- | --- |
| Image Classification | Recognize cats vs. dogs |
| Object Detection | Detect faces in images |
| Speech Recognition | Convert audio to text |
| Text Generation | Chatbots, story writing |
| Translation | English to French text |
| Style Transfer | Artistic filters for photos |

🧠 IX. Deep Learning Frameworks


| Framework | Description |
| --- | --- |
| TensorFlow | Developed by Google; widely used in production |
| PyTorch | Developed by Meta (Facebook); research-friendly |
| Keras | High-level API for TensorFlow |
| MXNet | Scalable; used by Amazon |
| JAX | High-performance numerical computing (Google) |

🔍 X. Overfitting & Underfitting


| Term | Description | Solutions |
| --- | --- | --- |
| Overfitting | Model memorizes training data | Dropout, more data, regularization |
| Underfitting | Model is too simple | Use a deeper model, train longer |

🧪 XI. Regularization Techniques
| Technique | Purpose |
| --- | --- |
| Dropout | Randomly ignores neurons during training |
| L1 / L2 Regularization | Penalize large weights |
| Early Stopping | Stop training when validation loss increases |
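
A sketch of all three techniques in PyTorch. Layer sizes and hyperparameters are illustrative, and evaluate_on_validation is a hypothetical stand-in; early stopping is typically hand-rolled, as core PyTorch has no built-in for it:

```python
import torch
import torch.nn as nn

# Dropout: randomly zeroes activations during training (p is the drop probability)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(8, 1))

# L2 regularization: weight_decay adds a penalty on large weights
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-4)

def evaluate_on_validation(model):
    # Hypothetical stand-in: in practice, compute the loss over a held-out set
    return torch.rand(1).item()

# Early stopping: a common hand-rolled pattern
best_loss, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    # ... training step would go here ...
    val_loss = evaluate_on_validation(model)
    if val_loss < best_loss:
        best_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # stop once validation loss hasn't improved for `patience` epochs
```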

⚖️ XII. Evaluation Metrics


For Classification:

● Accuracy
● Precision
● Recall
● F1 Score
● Confusion Matrix
● AUC-ROC Curve

For Regression:

● MSE / RMSE
● MAE
● R² Score
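
All of these metrics are available in scikit-learn; a sketch with made-up labels and predictions:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, roc_auc_score,
                             mean_squared_error, mean_absolute_error, r2_score)

# Classification metrics (labels and scores below are illustrative)
y_true, y_pred = [0, 1, 1, 0, 1], [0, 1, 0, 0, 1]
print(accuracy_score(y_true, y_pred))     # fraction of correct predictions
print(precision_score(y_true, y_pred))    # TP / (TP + FP)
print(recall_score(y_true, y_pred))       # TP / (TP + FN)
print(f1_score(y_true, y_pred))           # harmonic mean of precision and recall
print(confusion_matrix(y_true, y_pred))   # counts per (true, predicted) pair
print(roc_auc_score(y_true, [0.2, 0.9, 0.4, 0.1, 0.8]))  # needs scores, not labels

# Regression metrics
y_true_r, y_pred_r = [3.0, 2.5, 4.0], [2.8, 2.7, 3.6]
print(mean_squared_error(y_true_r, y_pred_r))   # MSE (take the square root for RMSE)
print(mean_absolute_error(y_true_r, y_pred_r))  # MAE
print(r2_score(y_true_r, y_pred_r))             # R², 1.0 is a perfect fit
```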

🧠 XIII. Quick Identification Clues


| Keyword / Phrase | Likely Concept |
| --- | --- |
| "Many layers" | Deep Neural Network |
| "Image recognition" | CNN |
| "Sequential data" | RNN / LSTM |
| "Attention mechanism" | Transformer |
| "Generated images" | GAN |
| "Non-linear transformation" | Activation function |
| "Overfitting prevention" | Dropout, Regularization |
| "Backpropagation" | Gradient-based learning |
| "One-hot encoding" | Classification input |
| "Softmax output" | Multi-class classification |

💡 XIV. Tips for Studying Deep Learning


● Understand basic ML first.
● Start small: build a neural net from scratch.
● Use PyTorch or TensorFlow for projects.
● Practice with MNIST, CIFAR-10, IMDB datasets.
● Focus on intuition before code: understand why before how.
