Week 9-10: RNN
Recurrent Neural Networks

Lecture Outline
Motivation: Sequence Modeling
RNN variants:
• Long Short Term Memory (LSTM) networks, Gated recurrent units (GRUs)
• Bi-directional Sequence Learning
• Recursive Neural Networks (RecNNs): TreeRNNs and TreeLSTMs
• Deep, Multi-tasking and Generative RNNs (overview)
Convolutional vs Recurrent Neural Networks
CNN / FF-Nets
• each output is independent of the others (it depends only on the current input)
• feed-forward nets do not remember past input data at test time, unlike recurrent networks
RNN
• performs well when the input data are interdependent in a sequential pattern
• exploits the correlation between the previous input and the next input
• introduces a bias based on the previous output
Introduction
A vanilla RNN (Recurrent Neural Network) is a type of neural network used for processing sequential data.
It is the simplest type of RNN: the hidden state at the current time step is determined by the input at the current time step and the hidden state from the previous time step.
Recurrent neural networks
• Dates back to (Rumelhart et al., 1986)
• A family of neural networks for handling sequential data, which involves variable-length inputs or outputs
Sequential data
• Batch data: many sequences with different lengths 𝜏
• Label: can be a scalar, a vector, or even a sequence
• Examples: sentiment analysis, machine translation
Example: machine translation
More complicated sequential data
𝑠^(𝑡+1) = 𝑓(𝑠^(𝑡), 𝑥^(𝑡+1); 𝜃)
Figure from Deep Learning, Goodfellow, Bengio and Courville
Compact view
𝑠^(𝑡+1) = 𝑓(𝑠^(𝑡), 𝑥^(𝑡+1); 𝜃)
Figure from Deep Learning, Goodfellow, Bengio and Courville
Compact view
(square: one-step time delay)
𝑠^(𝑡+1) = 𝑓(𝑠^(𝑡), 𝑥^(𝑡+1); 𝜃)
Key: the same 𝑓 and 𝜃 are used for all time steps
Figure from Deep Learning, Goodfellow, Bengio and Courville
Recurrent neural networks
(Figure: unrolled RNN showing input, state, output, loss, and label at each time step)
Math formulas:
• Gradient at 𝐿^(𝑡): the total loss is the sum of the losses at the different time steps
• Gradient at 𝑜^(𝑡)
• Gradient at 𝑠^(𝜏) (the final time step)
• Gradient at 𝑠^(𝑡) (intermediate time steps)
• Gradient at parameter 𝑉
Motivation: Need for Sequential Modeling
(Figure: action example — "punching", which can only be recognized from a sequence of observations)
Motivation: Need for Sequential Modeling
(Figure: an FF-net / CNN treats sentence1 "market falls into bear territory" and the scrambled sentence2 "bear falls into market territory" alike, while a sequential model (RNN) captures word ordering and syntactic & semantic information.)
Motivation: Need for Sequential Modeling
(Figure, continued: the direction of information flow matters — word ordering carries syntactic & semantic information that the sequential model (RNN) exploits but the FF-net / CNN does not.)
Motivation: Need for Sequential Modeling
(Figure: encoder–decoder machine translation — the German input "pankaj lebt in münchen" is encoded and then decoded into the Hindi output "पंकज म्यूनिच में रहता है", i.e., "Pankaj lives in Munich".)
Motivation: Need for Sequential Modeling
RNN
- performs well when the input data are interdependent in a sequential pattern
- exploits the correlation between the previous input and the next input
- introduces a bias based on the previous output
CNN/FF-Nets
- each output is independent of the others
- feed-forward nets do not remember past input data at test time, unlike recurrent networks
Motivation: Need for Sequential Modeling
• ℎ_𝑡: hidden unit (state)
• 𝑥_𝑡: input
• 𝑜_𝑡: output
• 𝑊_ℎℎ: shared (hidden-to-hidden) weight parameter
• 𝑊_ℎ𝑜: weight parameter between the hidden layer and the output
• 𝜃: parameters in general
• 𝑔_𝜃: non-linear function
• 𝐿_𝑡: loss between the RNN outputs and the true outputs
• 𝐸_𝑡: cross-entropy loss
Long-Term and Short-Term Dependencies
Foundation of Recurrent Neural Networks
Goal
• model long-term dependencies
• connect previous information to the present task
• model sequences of events with loops, allowing information to persist
(Figure: action example — "punching")
Foundation of Recurrent Neural Networks
(Figure: a recurrent neural network tagging the input sequence "Pankaj lives in Munich" over time — one-hot input layer fed through Wxh, a hidden layer connected through time via Whh, and an output/softmax layer via Who producing the labels "person, other, other, location".)
(Vanilla) Recurrent Neural Network
Process a sequence of vectors x by applying a recurrence at every time step:
ℎ_𝑡 = 𝑔_𝜃(ℎ_{𝑡−1}, 𝑥_𝑡)
i.e., the new hidden state at time step t is some function 𝑔 with parameters (Whh, Wxh) applied to the old hidden state at time step t−1 and the input vector at time step t.
(Figure: the feedback mechanism / internal-state loop, unfolded in time — the same Whh connects h_0 … h_T, Wxh feeds in x_t, and Who produces o_t.)
Remark: the same function g and the same set of parameters W are used at every time step.
(Vanilla) Recurrent Neural Network
Process a sequence of vectors x by applying a recurrence at every time step:
ℎ_𝑡 = 𝑔_𝜃(ℎ_{𝑡−1}, 𝑥_𝑡)
ℎ_𝑡 = tanh(𝑊_ℎℎ ℎ_{𝑡−1} + 𝑊_𝑥ℎ 𝑥_𝑡)
𝑜_𝑡 = softmax(𝑊_ℎ𝑜 ℎ_𝑡)
(Figure: the same unfolded vanilla RNN as above, annotated with these equations.)
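A minimal numpy sketch of the recurrence above; the names Whh, Wxh, Who follow the slides, while the sizes and random values are illustrative assumptions:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# illustrative sizes (assumptions): hidden dim 4, input dim 3, output dim 2
rng = np.random.default_rng(0)
Whh = rng.normal(scale=0.1, size=(4, 4))   # hidden-to-hidden weights
Wxh = rng.normal(scale=0.1, size=(4, 3))   # input-to-hidden weights
Who = rng.normal(scale=0.1, size=(2, 4))   # hidden-to-output weights

def rnn_step(h_prev, x_t):
    # h_t = tanh(Whh h_{t-1} + Wxh x_t),  o_t = softmax(Who h_t)
    h_t = np.tanh(Whh @ h_prev + Wxh @ x_t)
    o_t = softmax(Who @ h_t)
    return h_t, o_t

h = np.zeros(4)
for x in rng.normal(size=(5, 3)):          # a toy sequence of 5 input vectors
    h, o = rnn_step(h, x)                  # same Whh, Wxh, Who at every step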
Recurrent Neural Network: Probabilistic Interpretation
RNN as a generative model: it induces a set of procedures to model the conditional distribution of x_{t+1} given x_{<=t}, for all t = 1, …, T.
Think of the output at each step as the probability distribution of the next symbol; the sequence starts with <bos> and ends when <eos> is generated.
(Figure: generative RNN unrolled from <bos> through x_0 … x_T to <eos>.)
Details: https://www.cs.cmu.edu/~epxing/Class/10708-17/project-reports/project10.pdf
RNN: Computational Graphs
(Figure: a computational graph with an initial state A_0, a sequence of inputs x_1, x_2, x_3, the same g_θ applied at each step to produce the next state, and a sequence of outputs o_1, o_2, o_3.)
RNN: Different Computational Graphs
(Figure: one-to-one, one-to-many, many-to-one, and many-to-many configurations of inputs x_1 … x_4 and outputs o_1 … o_4.)
Backpropagation through time (BPTT) in RNN
Training recurrent networks via BPTT
Backpropagation through time (BPTT) in RNN
Training recurrent networks via BPTT
(Figure: unrolled network h_1, h_2, h_3 with inputs x_1, x_2, x_3 — direction of the forward pass, left to right.)
Backpropagation through time (BPTT) in RNN
Training recurrent networks via BPTT
(Figure: the same unrolled network, now showing the direction of the backward pass — the gradient flows from h_3 back to h_2 and h_1 via the partial derivatives ∂h_3/∂h_2 and ∂h_2/∂h_1.)
Backpropagation through time (BPTT) in RNN
Training recurrent networks via BPTT
Cross-entropy loss:
𝐸_𝑡(𝑜_𝑡′, 𝑜_𝑡) = −𝑜_𝑡′ log 𝑜_𝑡
𝐸(𝑜′, 𝑜) = Σ_𝑡 𝐸_𝑡(𝑜_𝑡′, 𝑜_𝑡) = −Σ_𝑡 𝑜_𝑡′ log 𝑜_𝑡
where 𝑜_𝑡′ are the truth values.
(Figure: losses E_1, E_2, E_3 attached to the outputs o_1, o_2, o_3 of the unrolled network; the gradient flows backwards through h_3, h_2, h_1.)
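A small sketch of the loss above: per-step cross-entropy between the predicted distribution o_t and the one-hot truth o_t′, summed over the sequence (the toy arrays are assumptions):

import numpy as np

def cross_entropy(o_true, o_pred):
    # E_t(o_t', o_t) = -sum(o_t' * log(o_t))
    return -np.sum(o_true * np.log(o_pred))

# toy predictions and one-hot labels for a 3-step sequence (assumed values)
o_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.3, 0.3, 0.4]])
o_true = np.eye(3)[[0, 1, 2]]

E = sum(cross_entropy(t, p) for t, p in zip(o_true, o_pred))  # E = sum_t E_t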
Backpropagation through time (BPTT) in RNN
Gradient of the loss at step 3 w.r.t. the output weights 𝑊_ℎ𝑜:
∂𝐸_3/∂𝑊_ℎ𝑜 = 𝑜_3′(𝑜_3 − 1) × ℎ_3
where × denotes the outer product and 𝑜_3 = softmax(𝑊_ℎ𝑜 ℎ_3).
(Figure: gradient flow from E_3 through o_3 and h_3.)
Backpropagation through time (BPTT) in RNN
How? Proof (offline):
𝐸_3(𝑜_3′, 𝑜_3) = −𝑜_3′ log 𝑜_3, with 𝑜_3 = softmax(𝑧_3) and 𝑧_3 = 𝑊_ℎ𝑜 ℎ_3
Write 𝑜_3 = e^{𝑧_3} / Ω with Ω = Σ_𝑖 e^{𝑧_𝑖}, so that log 𝑜_3 = 𝑧_3 − log Ω.
Then
∂log(𝑜_3)/∂𝑧_3 = 1 − (1/Ω) ∂Ω/∂𝑧_3 = 1 − e^{𝑧_3}/Ω = 1 − 𝑜_3,  i.e.  ∂𝑜_3/∂𝑧_3 = 𝑜_3(1 − 𝑜_3).
Therefore
∂𝐸_3/∂𝑧_3 = −𝑜_3′ (1/𝑜_3) ∂𝑜_3/∂𝑧_3 = −𝑜_3′(1 − 𝑜_3) = 𝑜_3′(𝑜_3 − 1),
and with ∂𝑧_3/∂𝑊_ℎ𝑜 = ℎ_3,
∂𝐸_3/∂𝑊_ℎ𝑜 = (∂𝐸_3/∂𝑧_3)(∂𝑧_3/∂𝑊_ℎ𝑜) = 𝑜_3′(𝑜_3 − 1) × ℎ_3.
1. http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vani
2. https://stats.stackexchange.com/questions/235528/backpropagation-with-softmax-cross-entropy
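The derivation above can be sanity-checked numerically: the analytic gradient ∂E_3/∂z_3 = o_3 − o_3′ should match a finite-difference estimate (the toy logits and one-hot label are assumptions):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(z, o_true):
    return -np.sum(o_true * np.log(softmax(z)))

z = np.array([0.5, -1.0, 2.0])       # assumed logits z3 = Who @ h3
o_true = np.array([0.0, 1.0, 0.0])   # one-hot truth o3'

analytic = softmax(z) - o_true       # dE3/dz3 = o3 - o3'
numeric = np.zeros_like(z)
eps = 1e-6
for i in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps; zm[i] -= eps
    numeric[i] = (loss(zp, o_true) - loss(zm, o_true)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)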
Backpropagation through time (BPTT) in RNN
Writing gradients in a sum-of-products form (summing E_3 back to t = 1):
∂𝐸/∂θ = Σ_{1≤𝑡≤3} ∂𝐸_𝑡/∂θ
e.g.  ∂𝐸_3/∂𝑊_ℎℎ = Σ_{𝑘=1}^{3} (∂𝐸_3/∂ℎ_3)(∂ℎ_3/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ)
Since ℎ_3 depends on ℎ_2 and ℎ_2 depends on ℎ_1, therefore
∂ℎ_3/∂ℎ_1 = (∂ℎ_3/∂ℎ_2)(∂ℎ_2/∂ℎ_1).
In general,
∂𝐸_𝑡/∂𝑊_ℎℎ = Σ_{1≤𝑘≤𝑡} (∂𝐸_𝑡/∂ℎ_𝑡)(∂ℎ_𝑡/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ)
(Figure: losses E_1, E_2, E_3 on the unrolled network; gradient flow through W_ho and W_hh, backwards via partial derivatives.)
Backpropagation through time (BPTT) in RNN
As above,
∂𝐸_𝑡/∂𝑊_ℎℎ = Σ_{1≤𝑘≤𝑡} (∂𝐸_𝑡/∂ℎ_𝑡)(∂ℎ_𝑡/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ)
where the term that transports the error in time from step t back to step k is
∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} ∂ℎ_𝑖/∂ℎ_{𝑖−1} = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]
∂ℎ_𝑡/∂ℎ_𝑘 is a Jacobian matrix. The Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function. It generalizes the concept of a derivative to functions with multiple inputs and outputs, showing how the output changes with each input.
Backpropagation through time (BPTT) in RNN
∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} ∂ℎ_𝑖/∂ℎ_{𝑖−1} = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]
(The Jacobian is a product of the weight matrix 𝑊_ℎℎ^𝑇 and the derivative of the activation function diag[𝑔′(ℎ_{𝑖−1})]; it transports the error in time from step t back to step k.)
Backpropagation through time (BPTT) in RNN
∂𝐸_𝑡/∂𝑊_ℎℎ = Σ_{1≤𝑘≤𝑡} (∂𝐸_𝑡/∂ℎ_𝑡)(∂ℎ_𝑡/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ),  with  ∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]
Repeated multiplication of the same Jacobian matrix leads to vanishing and exploding gradients.
BPTT: Gradient Flow
∂𝐸_3/∂𝑊_ℎℎ = Σ_{𝑘=1}^{3} (∂𝐸_3/∂ℎ_3)(∂ℎ_3/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ)
           = (∂𝐸_3/∂𝑜_3)(∂𝑜_3/∂ℎ_3)(∂ℎ_3/∂𝑊_ℎℎ)
           + (∂𝐸_3/∂𝑜_3)(∂𝑜_3/∂ℎ_3)(∂ℎ_3/∂ℎ_2)(∂ℎ_2/∂𝑊_ℎℎ)
           + (∂𝐸_3/∂𝑜_3)(∂𝑜_3/∂ℎ_3)(∂ℎ_3/∂ℎ_2)(∂ℎ_2/∂ℎ_1)(∂ℎ_1/∂𝑊_ℎℎ)
(Figure: the three gradient paths from E_3 back through h_3, h_2, h_1 along W_hh.)
Backpropagation through time (BPTT) in RNN
Code snippet for forward propagation (shown offline, before the BPTT code); a sketch follows below.
https://cs224d.stanford.edu/lectures/CS224d-Lecture8.pdf
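The linked slide's own snippet is not reproduced here; the following is a minimal sketch of forward propagation through time under the same equations (parameter arrays are assumed to be supplied by the caller):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(xs, h0, Whh, Wxh, Who):
    """Run the RNN over a whole sequence, caching states for BPTT."""
    hs, os = {-1: h0}, {}
    for t, x in enumerate(xs):
        hs[t] = np.tanh(Whh @ hs[t - 1] + Wxh @ x)   # h_t = tanh(Whh h_{t-1} + Wxh x_t)
        os[t] = softmax(Who @ hs[t])                 # o_t = softmax(Who h_t)
    return hs, os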
Backpropagation through time (BPTT) in RNN
Code snippet for backpropagation w.r.t. time (shown offline):
∂𝐸_𝑡/∂𝑊_ℎℎ = Σ_{𝑘=1}^{𝑡} (∂𝐸_𝑡/∂ℎ_𝑡)(∂ℎ_𝑡/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ)
The error signal delta_t is initialised at step t and carried backwards through ∂ℎ_𝑡/∂ℎ_{𝑡−1}; the contributions are accumulated as A + B + ⋯ until the end of the dependency.
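A hedged sketch of the accumulation just described (not the slide's own code): the gradient w.r.t. Whh collects one contribution per step k, carrying the error backwards through ∂h_k/∂h_{k-1}; hs is assumed to be the state cache returned by the forward() sketch above, and dE_dh_t the gradient arriving at h_t:

import numpy as np

def bptt_whh(hs, dE_dh_t, t, Whh):
    """Accumulate dE_t/dWhh = sum_k (dE_t/dh_t)(dh_t/dh_k)(dh_k/dWhh)."""
    dWhh = np.zeros_like(Whh)
    delta = dE_dh_t                              # gradient arriving at h_t
    for k in range(t, -1, -1):
        dtanh = delta * (1.0 - hs[k] ** 2)       # back through tanh: diag[g'(.)]
        dWhh += np.outer(dtanh, hs[k - 1])       # direct dependence of h_k on Whh
        delta = Whh.T @ dtanh                    # carry the error back to h_{k-1}
    return dWhh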
Challenges in Training an RNN: Vanishing Gradients
(Numerical example: matrices A, B = A^10, C whose entries span many orders of magnitude, from about 1e−1 up to about 1e+10.)
∂𝐸_5/∂𝜃: the gradient explodes, i.e., becomes NaN due to very large numbers.
Challenges in Training an RNN: Exploding Gradients
(Numerical example: matrices A, B = A^10, C with entries up to the order of 1e+10.)
Large increase in the norm of the gradient during training:
∂𝐸_5/∂𝜃 — the gradient explodes, i.e., becomes NaN due to very large numbers, caused by the exponential explosion of the long-term components.
Vanishing Gradient in Long-term Dependencies
(Figure: a long unrolled chain h_1, h_2, h_3, … with inputs x_1, x_2, x_3, … and the partial derivatives ∂h_2/∂h_1, ∂h_3/∂h_2, … linking the states over many time steps.)
In practice, as the length of the sequence increases, the probability of training being successful decreases drastically.
Why?
Vanishing Gradient in Long-term Dependencies
Why?
ℎ_𝑡 = 𝑔_𝑊(ℎ_{𝑡−1}, 𝑥_𝑡)
ℎ_𝑡 = tanh(𝑊_ℎℎ ℎ_{𝑡−1} + 𝑊_𝑥ℎ 𝑥_𝑡)
𝑜_𝑡 = softmax(𝑊_ℎ𝑜 ℎ_𝑡)
Expansion:
ℎ_𝑡 = 𝑊_ℎℎ f(ℎ_{𝑡−1}) + (some other terms)
    = 𝑊_ℎℎ^𝑡 ℎ_0 + (some other terms)
Vanishing Gradient in Long-term Dependencies
Writing gradients in a sum-of-products form:
∂𝐸/∂θ = Σ_{1≤𝑡≤3} ∂𝐸_𝑡/∂θ
∂𝐸_𝑡/∂𝑊_ℎℎ = Σ_{1≤𝑘≤𝑡} (∂𝐸_𝑡/∂ℎ_𝑡)(∂ℎ_𝑡/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ)
With ℎ_𝑡 = 𝑊_ℎℎ f(ℎ_{𝑡−1}) + …, the term ∂ℎ_𝑡/∂ℎ_𝑘 is a product of Jacobian matrices.
Vanishing Gradient in Long-term Dependencies
∂𝐸_3/∂𝑊_ℎℎ = Σ_{1≤𝑘≤3} (∂𝐸_3/∂ℎ_3)(∂ℎ_3/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ)
∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} ∂ℎ_𝑖/∂ℎ_{𝑖−1} = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]   (a product of Jacobian matrices)
Mechanics behind Vanishing and Exploding Gradients
∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} ∂ℎ_𝑖/∂ℎ_{𝑖−1} = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]
Consider the identity activation function and a diagonalizable 𝑊_ℎℎ:
𝑊_ℎℎ = 𝑄 𝛬 𝑄^{−1}
where 𝑄 is the matrix composed of the eigenvectors of 𝑊_ℎℎ and 𝛬 is the diagonal matrix with the eigenvalues placed on the diagonal.
Computing powers of 𝑊_ℎℎ then amounts to computing powers of the eigenvalues:
𝑊_ℎℎ^𝑛 = 𝑄 𝛬^𝑛 𝑄^{−1}
Example with eigenvalues 0.618 and 1.618 on the diagonal:
𝛬 = diag(0.618, 1.618)  →  𝛬^{10} ≈ diag(0.0081, 122.99)
Eigenvalues below 1 shrink towards 0 (vanishing gradients); eigenvalues above 1 blow up (exploding gradients).
Bengio et al., "On the difficulty of training recurrent neural networks." (2012)
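A tiny numpy illustration of the point above, using the eigenvalues 0.618 and 1.618 from the slide and an assumed orthonormal eigenvector matrix:

import numpy as np

Q = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # assumed orthonormal eigenvectors
Lam = np.diag([0.618, 1.618])                           # eigenvalues from the slide
W = Q @ Lam @ Q.T                                       # Whh = Q Lam Q^{-1}

Wn = np.linalg.matrix_power(W, 10)                      # Whh^n = Q Lam^n Q^{-1}
print(np.linalg.eigvalsh(Wn))   # ~[0.0081, 122.99]: the small eigenvalue vanishes, the large one explodes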
Mechanics behind Vanishing and Exploding Gradients
∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]
With the identity activation and 𝑊_ℎℎ^𝑛 = 𝑄 𝛬^𝑛 𝑄^{−1} (eigenvalue 0.618 → 0.0081 vanishes, 1.618 → 122.99 explodes), we need tight conditions on the eigenvalues during training to prevent the gradients from vanishing or exploding.
Bengio et al., "On the difficulty of training recurrent neural networks." (2012)
Mechanics behind Vanishing and Exploding Gradients
∂𝐸_𝑡/∂𝑊_ℎℎ = Σ_{1≤𝑘≤𝑡} (∂𝐸_𝑡/∂ℎ_𝑡)(∂ℎ_𝑡/∂ℎ_𝑘)(∂ℎ_𝑘/∂𝑊_ℎℎ),  with  ∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]
To find a sufficient condition for when gradients vanish, compute an upper bound for the term ∂ℎ_𝑖/∂ℎ_{𝑖−1}, i.e. find an upper bound for the norm of the Jacobian:
‖∂ℎ_𝑖/∂ℎ_{𝑖−1}‖ ≤ ‖𝑊_ℎℎ^𝑇‖ ‖diag[𝑔′(ℎ_{𝑖−1})]‖
Mechanics behind Vanishing and Exploding Gradients
Let's find an upper bound for the term ‖𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]‖.
Property of the matrix norm (offline proof):
‖𝑀‖_2 = sqrt(𝜆_max(𝑀^∗𝑀)) = 𝛾_max(𝑀)
where the spectral norm ‖𝑀‖_2 of a complex matrix 𝑀 is defined as max{‖𝑀𝑥‖_2 : ‖𝑥‖ = 1}.
The norm of a matrix is equal to the largest singular value of the matrix and is related to the largest eigenvalue (spectral radius).
Let 𝑥 = 𝑎_1𝑒_1 + … + 𝑎_𝑛𝑒_𝑛 (a linear combination of eigenvectors); then ‖𝑥‖ = ‖Σ_{𝑖=1}^{𝑛} 𝑎_𝑖𝑒_𝑖‖ = (Σ_{𝑖=1}^{𝑛} 𝑎_𝑖^2)^{1/2}.
Mechanics behind Vanishing and Exploding Gradients
Using the characteristic equation to find the matrix's eigenvalues:
𝐵𝑥 = Σ_{𝑖=1}^{𝑛} 𝑎_𝑖 𝐵𝑒_𝑖 = Σ_{𝑖=1}^{𝑛} 𝑎_𝑖 𝜆_𝑖 𝑒_𝑖
Mechanics behind Vanishing and Exploding Gradients
Consider 𝑥_0 = 𝑒_{𝑗_0}, so that ‖𝑥‖ = 1 and 𝐵𝑒_{𝑗_0} = 𝜆_{𝑗_0}𝑒_{𝑗_0}; then ‖𝑀‖ ≥ ⟨𝑥, 𝐵𝑥⟩ = 𝜆_{𝑗_0}, where 𝜆_{𝑗_0} is the largest eigenvalue … equation (2)
  = max_{1≤𝑗≤𝑛} 𝜆_𝑗, where 𝜆_𝑗 are the eigenvalues of 𝐵 = 𝑀^∗𝑀.
Combining (1) and (2) gives us
‖𝑀‖_2 = sqrt(𝜆_max(𝑀^∗𝑀)) = 𝛾_max(𝑀) … equation (3)
Remarks:
The spectral norm of a matrix is equal to the largest singular value of the matrix and is related to the largest eigenvalue (spectral radius).
If the matrix is square symmetric, the singular value equals the spectral radius.
Mechanics behind Vanishing and Exploding Gradients
Let's use these properties:
∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} ∂ℎ_𝑖/∂ℎ_{𝑖−1} = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})]
‖∂ℎ_𝑖/∂ℎ_{𝑖−1}‖ ≤ ‖𝑊_ℎℎ^𝑇‖ ‖diag[𝑔′(ℎ_{𝑖−1})]‖
Mechanics behind Vanishing and Exploding Gradients
Let's use these properties:
‖∂ℎ_𝑖/∂ℎ_{𝑖−1}‖ ≤ ‖𝑊_ℎℎ^𝑇‖ ‖diag[𝑔′(ℎ_{𝑖−1})]‖ ≤ 𝛾_𝑊 𝛾_𝑔
where 𝛾_𝑊 is the largest singular value of 𝑊_ℎℎ and 𝛾_𝑔 is an upper bound on ‖diag[𝑔′(·)]‖ (𝛾_𝑔 = ¼ for the sigmoid); 𝛾_𝑊 𝛾_𝑔 is therefore an upper bound for the norm of the Jacobian.
It follows that
‖∂ℎ_𝑡/∂ℎ_𝑘‖ ≤ (𝛾_𝑊 𝛾_𝑔)^{𝑡−𝑘}
Mechanics behind Vanishing and Exploding Gradients
Sufficient condition for vanishing gradients:
‖∂ℎ_𝑡/∂ℎ_𝑘‖ ≤ (𝛾_𝑊 𝛾_𝑔)^{𝑡−𝑘}
If 𝛾_𝑊 𝛾_𝑔 < 1 and (t−k) → ∞, then the long-term contributions go to 0 exponentially fast with t−k (power iteration method).
Therefore, a sufficient condition for the vanishing gradient to occur is 𝛾_𝑊 < 1/𝛾_𝑔, i.e. 𝛾_𝑊 < 4 for the sigmoid and 𝛾_𝑊 < 1 for tanh.
Mechanics behind Vanishing and Exploding Gradients
Necessary condition for exploding gradients:
If 𝛾_𝑊 𝛾_𝑔 > 1 and (t−k) → ∞, then the gradient explodes.
Therefore, a necessary condition for the exploding gradient to occur is 𝛾_𝑊 > 1/𝛾_𝑔, i.e. 𝛾_𝑊 > 4 for the sigmoid and 𝛾_𝑊 > 1 for tanh.
Vanishing Gradient in Long-term Dependencies
What have we concluded with the upper bound of the derivative from the recurrent step?
∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑡≥𝑖>𝑘} 𝑊_ℎℎ^𝑇 diag[𝑔′(ℎ_{𝑖−1})],   ‖∂ℎ_𝑖/∂ℎ_{𝑖−1}‖ ≤ 𝛾_𝑊 𝛾_𝑔,   ‖∂ℎ_𝑡/∂ℎ_𝑘‖ ≤ (𝛾_𝑊 𝛾_𝑔)^{𝑡−𝑘}
How? If we multiply the same term 𝛾_𝑊 𝛾_𝑔 < 1 again and again, the overall number becomes very small (i.e., almost equal to zero).
Repeated matrix multiplications lead to vanishing and exploding gradients.
Vanishing Gradient in Long-term Dependencies
Total gradient = (gradient due to long-term dependencies, ≪ 1) + (gradient due to short-term dependencies, < 1)
The gradients no longer depend on the distant past inputs, since the near-past inputs dominate the gradient.
Remark: The gradients due to short-term dependencies (the just-previous steps) dominate the gradients due to long-term dependencies.
This means the network will tend to focus on short-term dependencies, which is often not desired — the problem of the vanishing gradient.
Exploding Gradient in Long-term Dependencies
What have we concluded with the upper bound of the derivative from the recurrent step?
‖∂ℎ_𝑡/∂ℎ_𝑘‖ ≤ (𝛾_𝑊 𝛾_𝑔)^{𝑡−𝑘}
How? If we multiply the same term 𝛾_𝑊 𝛾_𝑔 > 1 again and again, the overall number explodes, and hence the gradient explodes.
Repeated matrix multiplications lead to vanishing and exploding gradients.
Exploding Gradient in Long-term Dependencies
Total gradient = (terms ≫ 1) + (terms ≫ 1) + … = a very large number, i.e., NaN — the problem of the exploding gradient.
Vanishing vs Exploding Gradients
‖∂ℎ_𝑡/∂ℎ_𝑘‖ ≤ (𝛾_𝑊 𝛾_𝑔)^{𝑡−𝑘}, for tanh or linear activation.
Dealing with Exploding Gradients
Dealing with Exploding Gradients: Gradient Clipping
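The slide's figure is not reproduced; a common form of gradient clipping, rescaling the gradients whenever their global norm exceeds a threshold, can be sketched as follows (the threshold value is an assumption):

import numpy as np

def clip_gradients(grads, threshold=5.0):
    """Rescale the list of gradient arrays if their global norm exceeds the threshold."""
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if norm > threshold:
        grads = [g * (threshold / norm) for g in grads]
    return grads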
Dealing with Vanishing Gradient
• As discussed, the gradient vanishes due to the recurrent part of the RNN equations:
ℎ_𝑡 = 𝑊_ℎℎ ℎ_{𝑡−1} + (some other terms)
• What if the largest eigenvalue of the parameter matrix becomes exactly 1? The gradient would no longer vanish, but in this case the memory just grows.
Activation functions (revisit)
Long Short-Term Memory (LSTM): Gating Mechanism
Gates:
• a way to optionally let information through
• composed of a sigmoid neural net layer and a pointwise multiplication operation
• remove or add information to the cell state
3 gates in the LSTM: input gate, output gate, and forget gate, acting on the current cell state (illustrated with the "Clouds" example).
Motivation:
• create a self-loop path through which the gradient can flow
• the self loop corresponds to an eigenvalue of the Jacobian that is slightly less than 1
Key ingredients:
• Cell state — transports the information through the units
• Gates — optionally allow information passage
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Long Short-Term Memory (LSTM): Step by Step
Long Short-Term Memory (LSTM): Step by Step
Forget Gate:
• decides what information to throw away or remember from the previous cell state
• decision maker: a sigmoid layer (the forget gate layer)
• the output of the sigmoid lies between 0 and 1: 0 means forget, 1 means keep
𝑓_𝑡 = sigmoid(𝜃_{𝑥𝑓} 𝑥_𝑡 + 𝜃_{ℎ𝑓} ℎ_{𝑡−1} + 𝑏_𝑓)
• looks at h_{t−1} and x_t, and outputs a number between 0 and 1 for each number in the cell state C_{t−1}
Long Short-Term Memory (LSTM): Step by Step
Input Gate: selectively updates the cell state based on the new input.
• a multiplicative input gate unit protects the memory contents stored in the cell from perturbation by irrelevant inputs
𝑖_𝑡 = sigmoid(𝜃_{𝑥𝑖} 𝑥_𝑡 + 𝜃_{ℎ𝑖} ℎ_{𝑡−1} + 𝑏_𝑖)
• a tanh layer creates a vector of new candidate values, 𝐶̃_𝑡, that could be added to the state
• in the next step, we'll combine these two to create an update to the state
Long Short-Term Memory (LSTM): Step by Step
Cell Update:
• update the old cell state, C_{t−1}, into the new cell state C_t
• multiply the old state by f_t, forgetting the things we decided to forget earlier
• add i_t ∗ 𝐶̃_𝑡, the new candidate values, scaled by how much we decided to update each state value
𝐶_𝑡 = 𝑓_𝑡 ∗ 𝐶_{𝑡−1} + 𝑖_𝑡 ∗ 𝐶̃_𝑡
Long Short-Term Memory (LSTM): Step by Step
Output Gate: the output is a filtered version of the cell state.
• decides which part of the cell we want as our output, in the form of the new hidden state
• a multiplicative output gate protects other units from perturbation by currently irrelevant memory contents
• a sigmoid layer decides what parts of the cell state go to the output; apply tanh to the cell state and multiply it by the output of the sigmoid gate, so only the decided parts are output
𝑜_𝑡 = sigmoid(𝜃_{𝑥𝑜} 𝑥_𝑡 + 𝜃_{ℎ𝑜} ℎ_{𝑡−1} + 𝑏_𝑜)
ℎ_𝑡 = 𝑜_𝑡 ∗ tanh(𝐶_𝑡)
Dealing with Vanishing Gradients in LSTM
As seen, the gradient vanishes due to the recurrent part of the RNN equations:
ℎ_𝑡 = 𝑊_ℎℎ ℎ_{𝑡−1} + (some other terms)
How does the LSTM tackle the vanishing gradient? Answer: the forget gate.
The forget gate parameters take care of the vanishing gradient problem: along the cell state the activation function is effectively the identity, and therefore the problem of the vanishing gradient is addressed.
The derivative of the identity function is, conveniently, always one. So if f = 1, information from the previous cell state can pass through this step unchanged.
LSTM code snippet
(Table: LSTM parameters and their dimensions — not reproduced.)
LSTM code snippet
Code snippet for the LSTM unit: the LSTM forward-pass equations and the shapes of the gates (shown offline). A sketch follows below.
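The slide's own snippet is not reproduced here; the following is a minimal sketch of one LSTM forward step under the equations above. The parameter names and sizes are illustrative assumptions:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, p):
    """One LSTM step; p is a dict of parameter matrices and biases."""
    f_t = sigmoid(p['Wxf'] @ x_t + p['Whf'] @ h_prev + p['bf'])      # forget gate
    i_t = sigmoid(p['Wxi'] @ x_t + p['Whi'] @ h_prev + p['bi'])      # input gate
    C_tilde = np.tanh(p['Wxc'] @ x_t + p['Whc'] @ h_prev + p['bc'])  # candidate values
    C_t = f_t * C_prev + i_t * C_tilde                               # cell update
    o_t = sigmoid(p['Wxo'] @ x_t + p['Who'] @ h_prev + p['bo'])      # output gate
    h_t = o_t * np.tanh(C_t)                                         # new hidden state
    return h_t, C_t

rng = np.random.default_rng(0)
dim_x, dim_h = 3, 4                                                  # illustrative sizes
p = {name: rng.normal(scale=0.1, size=(dim_h, dim_x if name.startswith('Wx') else dim_h))
     for name in ['Wxf', 'Whf', 'Wxi', 'Whi', 'Wxc', 'Whc', 'Wxo', 'Who']}
p.update({b: np.zeros(dim_h) for b in ['bf', 'bi', 'bc', 'bo']})
h, C = lstm_step(rng.normal(size=dim_x), np.zeros(dim_h), np.zeros(dim_h), p)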
Gated Recurrent Unit (GRU)
• Like the LSTM, the GRU attempts to solve the vanishing gradient problem in RNNs.
Gates: the update gate and the reset gate — these 2 vectors decide what information should be passed to the output.
Update Gate:
• determines how much of the past information (from previous time steps) needs to be passed along to the future
• learns to copy information from the past so that the gradient does not vanish
𝑧_𝑡 = sigmoid(𝑊_𝑧 𝑥_𝑡 + 𝑈_𝑧 ℎ_{𝑡−1})
Gated Recurrent Unit (GRU)
Reset Gate:
• models how much of the information the unit should forget
Memory content:
ℎ′_𝑡 = tanh(𝑊 𝑥_𝑡 + 𝑟_𝑡 ⊙ 𝑈 ℎ_{𝑡−1})
Final memory at the current time step:
ℎ_𝑡 = 𝑧_𝑡 ⊙ ℎ_{𝑡−1} + (1 − 𝑧_𝑡) ⊙ ℎ′_𝑡
Dealing with Vanishing Gradients in the Gated Recurrent Unit (GRU)
We had a product of Jacobians (offline):
∂ℎ_𝑡/∂ℎ_𝑘 = Π_{𝑗=𝑘+1}^{𝑡} ∂ℎ_𝑗/∂ℎ_{𝑗−1}, with each factor bounded by 𝛼
where 𝛼 depends upon the weight matrix and the derivative of the activation function.
Now, for the GRU:
∂ℎ_𝑗/∂ℎ_{𝑗−1} = 𝑧_𝑗 + (1 − 𝑧_𝑗) ∂ℎ′_𝑗/∂ℎ_{𝑗−1}
and ∂ℎ_𝑗/∂ℎ_{𝑗−1} = 1 for 𝑧_𝑗 = 1, so the gate can pass the gradient through unchanged.
Code snippet of a GRU unit (a sketch follows below)
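The slide's snippet is not reproduced; a minimal sketch of one GRU step under the equations above. The final interpolation follows the convention used on the slide, and the parameter names and sizes are assumptions:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, p):
    z_t = sigmoid(p['Wz'] @ x_t + p['Uz'] @ h_prev)            # update gate
    r_t = sigmoid(p['Wr'] @ x_t + p['Ur'] @ h_prev)            # reset gate
    h_tilde = np.tanh(p['W'] @ x_t + r_t * (p['U'] @ h_prev))  # candidate memory content
    h_t = z_t * h_prev + (1.0 - z_t) * h_tilde                 # final memory at time t
    return h_t

rng = np.random.default_rng(0)
dim_x, dim_h = 3, 4                                            # illustrative sizes
p = {'Wz': rng.normal(size=(dim_h, dim_x)), 'Uz': rng.normal(size=(dim_h, dim_h)),
     'Wr': rng.normal(size=(dim_h, dim_x)), 'Ur': rng.normal(size=(dim_h, dim_h)),
     'W':  rng.normal(size=(dim_h, dim_x)), 'U':  rng.normal(size=(dim_h, dim_h))}
h = gru_step(rng.normal(size=dim_x), np.zeros(dim_h), p)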
Comparing LSTM and GRU
LSTM over GRU: one feature the LSTM has is controlled exposure of the memory content, which the GRU lacks.
In the LSTM unit, the amount of memory content that is seen, or used by the other units in the network, is controlled by the output gate. The GRU, on the other hand, exposes its full content without any control.
Bi-directional RNNs
• Bidirectional Recurrent Neural Networks (BRNN)
- connect two hidden layers of opposite directions to the same output
- the output layer can get information from past (backwards) and future (forward) states simultaneously
(Figure: forward and backward hidden states feeding the same outputs.)
Bi-directional RNNs
Bidirectional Recurrent Neural Networks (BRNN)
Gupta, 2015 (Master Thesis). Deep Learning Methods for the Extraction of Relations in Natural Language Text.
Gupta and Schütze, 2018. LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation.
Vu et al., 2016. Combining recurrent and convolutional neural networks for relation classification.
Recursive Neural Networks (RecNNs): TreeRNN or TreeLSTM
http://www.iro.umontreal.ca/~bengioy/talks/gss2012-YB6-NLP-recursive.pdf
Recursive Neural Networks (RecNNs): TreeRNN or TreeLSTM
Applications:
• represent the meaning of longer phrases
• map phrases into a vector space
• sentence parsing
• scene parsing
Recursive Neural Networks (RecNNs): TreeRNN or TreeLSTM
Application: relation extraction within and across sentence boundaries, i.e., document-level relation extraction.
Gupta et al., 2019. Neural Relation Extraction Within and Across Sentence Boundaries.
Deep and Multi-tasking RNNs
RNN in Practice: Training Tips
Attention Mechanism: Attentive RNNs
Context vector:
• takes all the encoder cells' outputs as input
• computes the probability distribution over the source-language words for each word in the decoder (e.g., 'Je')
https://medium.com/syncedreview/a-brief-overview-of-attention-mechanism-13c578ba9129
Attention Mechanism: Attentive RNNs
How does it work?
Idea: compute a context vector for every output/target word t (during decoding).
For each target word t:
1. generate scores between each encoder state h_s and the target state h_t
2. apply softmax to normalize the scores into attention weights (the probability distribution conditioned on the target state)
A sketch follows below.
https://medium.com/syncedreview/a-brief-overview-of-attention-mechanism-13c578ba9129
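A minimal sketch of the two steps above for one decoder state h_t against all encoder states h_s; a simple dot-product score is an assumption (Bahdanau- and Luong-style attention use learned scoring functions):

import numpy as np

def attention_context(h_t, H_src):
    """H_src: (num_source_steps, dim) encoder states; h_t: (dim,) decoder state."""
    scores = H_src @ h_t                      # 1. score each encoder state against h_t
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # 2. softmax -> attention weights
    context = weights @ H_src                 # weighted sum of encoder states = context vector
    return context, weights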
Explainability/Interpretability of RNNs
Visualization:
• Visualize output predictions: LISA
• Visualize neuron activations: sensitivity analysis
Further details:
- Gupta et al., 2018. "LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation". https://arxiv.org/abs/1808.01591
- Andrej Karpathy, blog on "The Unreasonable Effectiveness of Recurrent Neural Networks"
- Hendrick et al., "Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks"
Explainability/Interpretability of RNNs
Visualize output predictions: LISA
https://www.researchgate.net/publication/328956863_LISA_Explaining_RNN_Judgments_via_Layer-wIse_Semantic_Accumulation_and_Example_to_Pattern_Transformation_Analyzing_and_Interpreting_RNNs_for_NLP
Full paper: Gupta et al., 2018. "LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation". https://arxiv.org/abs/1808.01591
Explainability/Interpretability of RNNs
Visualize neuron activations via heat maps, i.e., sensitivity analysis.
The figure plots the sensitivity score: each row corresponds to the saliency score for the corresponding word representation, with each grid cell representing one dimension.
All three models assign high sensitivity to "hate" and dampen the influence of other tokens. The LSTM offers a clearer focus on "hate" than the standard recurrent model, but the bi-directional LSTM shows the clearest focus, attaching almost zero emphasis to words other than "hate". This is presumably due to the gating structures in LSTMs and Bi-LSTMs that control information flow, making these architectures better at filtering out less relevant information.
RNNs in Topic Trend Extraction (Dynamic Topic Evolution): RNN-RSM
References, Resources and Further Reading
RNN lecture (Ian Goodfellow): https://www.youtube.com/watch?v=ZVN14xYm7JA
Andrew Ng, lecture on RNNs: https://www.coursera.org/lecture/nlp-sequence-models/why-sequence-models-0h7gT
Recurrent Highway Networks (RHN)
LSTMs for Language Models (Lecture 07)
Bengio et al., "On the difficulty of training recurrent neural networks." (2012)
Geoffrey et al., "Improving Performance of Recurrent Neural Network with ReLU nonlinearity"
Geoffrey et al., "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units"
Cooijmans, Tim, et al. "Recurrent batch normalization." (2016)
Dropout: A Probabilistic Theory of Deep Learning, Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk
Semeniuta, Severyn, and Barth, 2016. "Recurrent dropout without memory loss"
Andrej Karpathy, blog on "The Unreasonable Effectiveness of Recurrent Neural Networks"
Ilya Sutskever et al., 2014. "Sequence to Sequence Learning with Neural Networks"
Bahdanau et al., 2014. "Neural Machine Translation by Jointly Learning to Align and Translate"
Hierarchical Attention Networks for Document Classification, 2016
Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification, 2016
Good resource: http://slazebni.cs.illinois.edu/spring17/lec20_rnn.pdf
References, Resources and Further Reading
Lectures from the course "Neural Networks for Machine Learning" by Geoffrey Hinton
Lecture by Richard Socher: https://cs224d.stanford.edu/lectures/CS224d-Lecture8.pdf
Understanding LSTM: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Recursive NN: http://www.iro.umontreal.ca/~bengioy/talks/gss2012-YB6-NLP-recursive.pdf
Attention: https://medium.com/syncedreview/a-brief-overview-of-attention-mechanism-13c578ba9129
Gupta, 2015. Master Thesis on "Deep Learning Methods for the Extraction of Relations in Natural Language Text"
Gupta et al., 2016. Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction
Vu et al., 2016. Combining recurrent and convolutional neural networks for relation classification
Vu et al., 2016. Bi-directional recurrent neural network with ranking loss for spoken language understanding
Gupta et al., 2018. Deep Temporal-Recurrent-Replicated-Softmax for Topical Trends over Time
Gupta et al., 2018. LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation
Gupta et al., 2018. Replicated Siamese LSTM in Ticketing System for Similarity Learning and Retrieval in Asymmetric Texts
Gupta et al., 2019. Neural Relation Extraction Within and Across Sentence Boundaries
Talk/slides: https://vimeo.com/277669869