Lecture 05-Learning Rule

The document discusses learning rules for artificial neural networks (ANNs), focusing on the Perceptron Learning Rule and Hebbian Learning Rule. It explains how neurons adapt their weights based on input, output, and desired responses, with examples illustrating weight adjustments. The Perceptron rule is supervised, while Hebbian learning is unsupervised and feed-forward.


LEARNING RULES of ANNs

• A neuron is considered to be an adaptive element: its weights are modifiable depending on the input it receives, its output value, and the associated teacher response.
Learning Signal
• In general, the learning signal r is a function of the weight vector Wi, the input X, and sometimes the teacher's desired response di:

r = r(Wi, X, di)

The General Learning Rule
• The increment of the weight vector Wi according to the general learning rule is:

ΔWi = C r(Wi, X, di) X

where C is the learning rate.

• The weight vector adapted at time t becomes, at the next instant:

Wi(t+1) = Wi(t) + C r(Wi(t), X(t), di(t)) X(t)
• For the k'th step:

Wi(k+1) = Wi(k) + C r(Wi(k), X(k), di(k)) X(k)
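As an illustration only (not part of the original lecture), here is a minimal Python/NumPy sketch of this general rule; the function name general_update and the idea of passing the learning signal r in as a callable are assumptions of this sketch:

    import numpy as np

    def general_update(w, x, c, learning_signal, d=None):
        # One step of the general rule: w <- w + c * r(w, x, d) * x,
        # where r is the scalar learning signal and d is the optional
        # teacher (desired) response; d stays None for unsupervised rules.
        r = learning_signal(w, x, d)
        return w + c * r * x

The perceptron and Hebbian rules below are special cases of this sketch, obtained by choosing r = di – oi and r = f(Wiᵗ X), respectively.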
Perceptron Learning Rule
• The learning signal is the difference between the desired response and the neuron's actual response.
• Learning is therefore supervised, and the learning signal is equal to:

r = di – oi
where di is the desired response and oi = sgn(Wiᵗ X) is the actual response.
Weight adjustments in this method are:

ΔWi = C (di – oi) X = C [di – sgn(Wiᵗ X)] X
[Figure: Perceptron learning rule diagram]
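A minimal Python/NumPy sketch of this update, assuming the bipolar activation oi = sgn(Wiᵗ X); np.sign stands in for sgn, and the helper name perceptron_update is my own:

    import numpy as np

    def perceptron_update(w, x, d, c):
        # Perceptron rule: dW = c * (d - sgn(w^t x)) * x.
        # When the response is already correct, d - o = 0 and the
        # weights are left unchanged.
        o = np.sign(w @ x)  # actual bipolar response (+1 or -1)
        return w + c * (d - o) * x

Note that np.sign returns 0 when the net input is exactly zero, a case the example below does not encounter.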
Example
Assume the network shown in the figure above, with the initial weight vector

W1 = [1, -1, 0, 0.5]ᵗ,

needs to be trained using the set of three input vectors below:

X1 = [1, -2, 0, -1]ᵗ, X2 = [0, 1.5, -0.5, -1]ᵗ, X3 = [-1, 1, 0.5, -1]ᵗ

The learning constant is C = 0.1. The desired responses for X1, X2, X3 are d1 = -1, d2 = -1, and d3 = 1, respectively.
Solution
• Step 1: Input is X1, desired output is d1:

net1 = W1ᵗ X1 = 2.5

Since o1 = sgn(2.5) = 1 ≠ d1, a correction is performed, and the updated weight vector is:

W2 = W1 + C (d1 – o1) X1 = W1 – 0.2 X1 = [0.8, -0.6, 0, 0.7]ᵗ
• Step 2: Input is X2, desired output is d2. For the present weight vector W2, we compute the activation value net2 as follows:

net2 = W2ᵗ X2 = -1.6

Correction is not performed in this step, since o2 = sgn(-1.6) = -1 = d2; thus W3 = W2.
• Step 3: Input is X3, desired output is d3, present weight vector is W3. Computing net3 we obtain:

net3 = W3ᵗ X3 = -2.1

Since o3 = sgn(-2.1) = -1 ≠ d3, a correction is performed. The updated weight values are:

W4 = W3 + C (d3 – o3) X3 = W3 + 0.2 X3 = [0.6, -0.4, 0.1, 0.5]ᵗ
This terminates the sequence of learning steps, unless the training set is recycled.
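Under the vector values assumed above, the three training steps can be verified with a short script (an illustrative check, not part of the lecture):

    import numpy as np

    w = np.array([1.0, -1.0, 0.0, 0.5])          # W1
    inputs = [np.array([1.0, -2.0, 0.0, -1.0]),  # X1
              np.array([0.0, 1.5, -0.5, -1.0]),  # X2
              np.array([-1.0, 1.0, 0.5, -1.0])]  # X3
    desired = [-1.0, -1.0, 1.0]
    c = 0.1

    for x, d in zip(inputs, desired):
        net = w @ x
        w = w + c * (d - np.sign(net)) * x  # no-op when sgn(net) == d
        print(net, w)
    # nets: 2.5, -1.6, -2.1; final w: [0.6, -0.4, 0.1, 0.5]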
Hebbian Learning Rule
• The Hebbian learning rule is a feed-forward, unsupervised learning rule.
• The learning signal is equal to the neuron's output.

r = oi = f(Wiᵗ X)

• The increment ΔWi becomes:

ΔWi = C f(Wiᵗ X) X
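A corresponding Python/NumPy sketch, with the activation f passed in as a parameter (hebbian_update is my own name; no teacher signal appears because the rule is unsupervised):

    import numpy as np

    def hebbian_update(w, x, c=1.0, f=np.sign):
        # Hebbian rule: dW = c * f(w^t x) * x.
        # The learning signal is the neuron's own output o = f(net).
        o = f(w @ x)
        return w + c * o * x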
EXAMPLE
• Assume the network shown in the figure, with the initial weight vector

W1 = [1, -1, 0, 0.5]ᵗ

and input X1 = [1, -2, 1.5, 0]ᵗ. The learning constant is c = 1 and the output is f(net) = sgn(net).

net1 = W1ᵗ X1 = 3, so O1 = sgn(3) = 1

ΔW1 = C r X1 = C O1 X1 = X1

The updated weights are:

W2 = W1 + ΔW1 = W1 + X1 = [2, -3, 1.5, 0.5]ᵗ
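With the values assumed in this example, a single Hebbian update reproduces W2 (illustrative check):

    import numpy as np

    w1 = np.array([1.0, -1.0, 0.0, 0.5])
    x1 = np.array([1.0, -2.0, 1.5, 0.0])
    o1 = np.sign(w1 @ x1)    # net1 = 3, so o1 = 1
    w2 = w1 + 1.0 * o1 * x1  # c = 1 gives w2 = w1 + x1
    print(w2)                # [ 2.  -3.   1.5  0.5]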
