Deep Learning
Why deep learning ?
[Figure: comparison of an ML pipeline without deep learning vs. an ML pipeline with deep learning]
What is a neuron ?
Different parts of the human brain have well-defined roles; the brain
coordinates activities such as laughing, singing, and calculation.
Some neurons will not fire for certain tasks.
Activation functions
A rule for filtering which neurons fire for a task, based on some condition
Sigmoid : maps any real number to a probability-like value in [0,1]
Softmax : extends the sigmoid to multi-class classification
Tanh : maps any real number into [-1,1]
ReLU : returns the maximum of { 0, num }
LeakyReLU : a modification of ReLU that allows a small negative slope
…
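As a sketch, the five activations above can be written in NumPy (the 0.01 leak factor for LeakyReLU is a common default, not specified on the slide):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    # Extends the sigmoid to a vector of class scores;
    # outputs are positive and sum to 1
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

def tanh(x):
    # Squashes any real number into (-1, 1)
    return np.tanh(x)

def relu(x):
    # max(0, x): passes positives through, zeroes out negatives
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but keeps a small slope (alpha * x) for x < 0
    return np.where(x > 0, x, alpha * x)
```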
PERCEPTRON
A man-made abstraction of a neuron
A PERCEPTRON
[Diagram: a perceptron with inputs x1, x2, x3 and weights w1, w2, w3; the weighted inputs w1x1, w2x2, w3x3 are added and compared with a threshold, giving an output of 1 or 0]
This is a device that makes decisions by weighing up evidence
An illustration with unmarried people
A : Rich or not
B : Religious alignment
C : Tall or not
Rank these 3 criteria in order of importance
A  B  C  RESPONSE
1  0  1  I WILL
0  1  0  NEVER
0  1  1  I WILL
0  0  1  NEVER
1  0  1  NEVER
1  0  0  I WILL
0  1  0  I WILL
Imagine using a model to respond to a proposal
Assign a weight to each criterion
Choose a threshold value for decision making
Compute the weighted sum of the inputs and compare it
with the threshold value
Make your decision based on that comparison
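The steps above can be sketched as a tiny scoring function. The weights and threshold here are hypothetical choices (one possible ranking of the criteria), not values given on the slide:

```python
# Hypothetical weights for the three criteria
# (A: rich or not, B: religious alignment, C: tall or not)
weights = {"A": 5, "B": 1, "C": 2}
threshold = 4

def respond(a, b, c):
    # Weighted sum of the criteria, compared with the threshold
    score = weights["A"] * a + weights["B"] * b + weights["C"] * c
    return "I WILL" if score > threshold else "NEVER"
```

With this particular ranking, only criterion A clears the threshold on its own; a different ranking of the criteria simply means different weights.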
The perceptron rule
X = { x1 , x2 , x3 , x4 , … , xn }
W = { w1 , w2 , w3 , w4 , … , wn }
Choose a threshold value b
If W·X is greater than b, output = 1; else output = 0
W·X = w1x1 + w2x2 + w3x3 + ⋯ + wnxn
The bias measures how easy it is to make the neuron fire
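The rule translates directly into a few lines of Python (a pure-Python sketch; the function name is my own):

```python
def perceptron(x, w, b):
    # The perceptron rule: compute W·X = w1*x1 + ... + wn*xn,
    # then output 1 if it exceeds the threshold b, else 0
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if dot > b else 0
```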
A perceptron takes 2 inputs x1 and x2 with respective weights w1 = -2 and
w2 = -2. This perceptron will output 1 if its computation results in a number
above zero, and will output 0 otherwise.
i) Suggest a value for the bias to make it function like a NAND gate
ii) Create a simple network of perceptrons to perform bitwise addition
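One possible answer sketch, not the only solution: for part (i), a bias of b = 3 makes the sum -2x1 - 2x2 + 3 positive for every input pair except (1, 1), which is exactly a NAND gate; for part (ii), NAND is universal, so a handful of NAND perceptrons can be wired into a half adder:

```python
def nand_perceptron(x1, x2):
    # Part (i): with w1 = w2 = -2, the bias b = 3 gives NAND behaviour:
    # -2*x1 - 2*x2 + 3 > 0 for all input pairs except (1, 1)
    return 1 if (-2 * x1 - 2 * x2 + 3) > 0 else 0

def half_adder(x1, x2):
    # Part (ii): five NAND perceptrons wired as a half adder
    # perform one-bit addition, returning (sum bit, carry bit)
    a = nand_perceptron(x1, x2)
    bit_sum = nand_perceptron(nand_perceptron(x1, a), nand_perceptron(x2, a))
    carry = nand_perceptron(a, a)
    return bit_sum, carry
```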
A small problem
A small change in the weight
can completely flip the output
Why is it so ?
F(-4) = 0, F(-2) = 0 : insignificant change far from the threshold
F(-0.00001) = 0, F(0.3654) = 1 : significant change near the threshold
With respect to changes in the weights and bias, the output either does not
change at all or changes completely.
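The discontinuity is easy to see with a hard-threshold step function (a minimal sketch of the behaviour the values above illustrate):

```python
def step(z):
    # A perceptron's hard-threshold output (threshold folded into z)
    return 1 if z > 0 else 0

# Far from the threshold, moving the input changes nothing:
far = (step(-4), step(-2))             # both outputs are 0
# Near the threshold, a tiny nudge flips the output entirely:
near = (step(-0.00001), step(0.3654))  # output jumps from 0 to 1
```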
The sigmoid neuron
A teachable neuron with learning abilities
A sigmoid neuron
[Diagram: a sigmoid neuron with inputs x1, x2, x3 and weights w1, w2, w3; the weighted inputs are added and compared with a threshold, and the output is a value between 0 and 1]
This is achieved using our old friend, the sigmoid function:
Δoutput ≈ Σᵢ (∂output/∂wᵢ) Δwᵢ + (∂output/∂b) Δb
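A quick numeric check of this relation for a single sigmoid neuron σ(wx + b): the finite-difference slope of the output with respect to w should match the analytic partial derivative σ'(z)·x. The test point values are arbitrary assumptions, not from the slides:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def output(x, w, b):
    # A single sigmoid neuron on one input
    return sigmoid(w * x + b)

# Arbitrary test point
x, w, b, dw = 1.0, 0.5, 0.0, 1e-6

# Finite-difference estimate of d(output)/dw
numeric = (output(x, w + dw, b) - output(x, w, b)) / dw

# Analytic partial derivative: sigma'(z) * x, with sigma'(z) = sigma(z) * (1 - sigma(z))
s = output(x, w, b)
analytic = s * (1 - s) * x
```

Because the sigmoid is smooth, a small change in w produces a proportionally small change in the output, which is what makes the neuron teachable.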
Neural Networks
A simplified representation of the human brain
[Diagram: inputs x1-x4 with weights w1-w4 form the Input Layer, feeding a Hidden Layer, which feeds an Output Layer that emits 1 or 0]
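A minimal forward pass through such a network might look like this; the layer sizes (4 inputs, 3 hidden neurons, 1 output) and the random weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # hidden layer: 3 neurons, 4 inputs each
b1 = rng.normal(size=3)
W2 = rng.normal(size=(1, 3))   # output layer: 1 neuron, 3 hidden inputs
b2 = rng.normal(size=1)

def forward(x):
    # Each layer is a bank of sigmoid neurons: weighted sum, bias, squash
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

y = forward(np.array([1.0, 0.0, 1.0, 0.5]))  # a value in (0, 1)
```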
Gradient Descent