Application of Soft Computing Techniques
Dr. I. Jacob Raglend
Professor, Department of EE, VIT Vellore
Numerical Example
[Network diagram: inputs 0.3 and 0.1 feed two hidden neurons, which feed one output neuron; connection weights as listed below; target T_q = 0.8.]
◼ Given, input pattern x_h = [0.3, 0.1]^T
◼ Target T_q = 0.8
◼ Weights between input and hidden layer:
w_{hp.j} = [[0.1, 0.4], [0.2, 0.3]]
◼ Weights between output and hidden layer:
w_{pq.k} = [0.2, 0.3]^T
I_{p.j} = [w_{hp.j}]^T x_h = [[0.1, 0.2], [0.4, 0.3]] [0.3, 0.1]^T = [0.05, 0.15]^T

O_{p.j} = f(I_{p.j}) = [1/(1 + e^{-0.05}), 1/(1 + e^{-0.15})]^T = [0.5125, 0.5374]^T
I_{q.k} = [w_{pq.k}]^T O_{p.j} = [0.2, 0.3] [0.5125, 0.5374]^T = 0.2637

Output O_{q.k} = f(I_{q.k}) = 1/(1 + e^{-0.2637}) = 0.5655
The squared error signal E_q^2 = (T_q - O_{q.k})^2 = (0.8 - 0.5655)^2 = 0.0550
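◼ The forward pass above can be checked numerically. Below is a minimal NumPy sketch (not from the slides; variable names such as I_p and O_p are my own) that reproduces the hidden activations, the output, and the squared error:

    import numpy as np

    x_h  = np.array([0.3, 0.1])              # given input pattern
    T_q  = 0.8                               # given target
    w_hp = np.array([[0.1, 0.4],
                     [0.2, 0.3]])            # input-to-hidden weights
    w_pq = np.array([0.2, 0.3])              # hidden-to-output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    I_p = w_hp.T @ x_h                       # [0.05, 0.15]
    O_p = sigmoid(I_p)                       # [0.5125, 0.5374]
    I_q = w_pq @ O_p                         # 0.2637
    O_q = sigmoid(I_q)                       # 0.5655
    E2  = (T_q - O_q) ** 2                   # 0.0550
    print(I_p, O_p, I_q, O_q, E2)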
Modification of weights between output and hidden layer
Let λ = 1 and η = 0.6.

δ_{pq.k} = 2 (T_q - O_{q.k}) O_{q.k} (1 - O_{q.k}) = 2 (0.8 - 0.5655)(0.5655)(1 - 0.5655) = 0.1152

Δw_{pq.k} = -η_{pq} ∂E_q^2/∂w_{pq.k} = -η_{pq} δ_{pq.k} O_{p.j} = [-0.0354, -0.0371]^T
w_{pq.k}(N+1) = w_{pq.k}(N) + Δw_{pq.k} = [0.2, 0.3]^T + [-0.0354, -0.0371]^T = [0.1646, 0.2629]^T
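◼ A small sketch of this update step (again with my own variable names; the sign of Δw follows the values printed above):

    import numpy as np

    eta  = 0.6
    T_q, O_q = 0.8, 0.5655                         # target and output from the forward pass
    O_p  = np.array([0.5125, 0.5374])              # hidden activations
    w_pq = np.array([0.2, 0.3])

    delta_pq = 2 * (T_q - O_q) * O_q * (1 - O_q)   # error signal, 0.1152
    dw_pq    = -eta * delta_pq * O_p               # [-0.0354, -0.0371]
    w_pq_new = w_pq + dw_pq                        # [0.1646, 0.2629]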
Modification of weights between input and hidden layer
∂E_q^2/∂w_{hp.j} = Σ_{q=1}^{r} (-2)(T_q - O_{q.k}) O_{q.k}(1 - O_{q.k}) w_{pq.k} O_{p.j}(1 - O_{p.j}) x_h

w_{pq.k} δ_{pq.k} = [0.2, 0.3]^T × 0.1152 = [0.0230, 0.0346]^T

Let DD = w_{pq.k} δ_{pq.k} O_{p.j}(1 - O_{p.j}) = [(0.0230)(0.5125)(1 - 0.5125), (0.0346)(0.5374)(1 - 0.5374)]^T = [0.0057, 0.0086]^T
Let HH = x_h [DD]^T = [0.3, 0.1]^T [0.0057, 0.0086] = [[0.0017, 0.0026], [0.0006, 0.0009]]
Δw_{hp.j} = -η_{hp} HH = [[-0.0010, -0.0015], [-0.0003, -0.0005]]

w_{hp.j}(N+1) = w_{hp.j}(N) + Δw_{hp.j} = [[0.1, 0.4], [0.2, 0.3]] + [[-0.0010, -0.0015], [-0.0003, -0.0005]] = [[0.0990, 0.3985], [0.1997, 0.2995]]
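◼ The same step in NumPy (a sketch with my own names; np.outer builds the HH matrix from x_h and DD):

    import numpy as np

    eta  = 0.6
    x_h  = np.array([0.3, 0.1])
    O_p  = np.array([0.5125, 0.5374])
    w_pq = np.array([0.2, 0.3])
    delta_pq = 0.1152                        # from the output-layer step
    w_hp = np.array([[0.1, 0.4],
                     [0.2, 0.3]])

    DD = w_pq * delta_pq * O_p * (1 - O_p)   # [0.0057, 0.0086]
    HH = np.outer(x_h, DD)                   # [[0.0017, 0.0026], [0.0006, 0.0009]]
    dw_hp    = -eta * HH                     # [[-0.0010, -0.0015], [-0.0003, -0.0005]]
    w_hp_new = w_hp + dw_hp                  # [[0.0990, 0.3985], [0.1997, 0.2995]]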
▪ With the updated weights, the error is calculated again. Iterations are carried out until the error is less than the tolerance.
▪ Once the weights are adjusted, the network is trained.
◼ Continue this process until the sum squared error is less than the tolerance value or the maximum number of iterations is reached.
◼ In this example the number of iterations is taken as 100.
◼ Output = 0.7912, which is close to the target value of 0.8, and the sum squared error is 0.0088.
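◼ The whole procedure can be put in a loop, as sketched below. This is not the slides' code: it uses the standard gradient-descent sign convention (weights move so that the squared error decreases), η = 0.6, a tolerance, and at most 100 iterations, so the output should climb toward the 0.8 target as reported above:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x_h, T_q = np.array([0.3, 0.1]), 0.8
    w_hp = np.array([[0.1, 0.4], [0.2, 0.3]])
    w_pq = np.array([0.2, 0.3])
    eta, tol, max_iter = 0.6, 1e-4, 100

    for n in range(max_iter):
        O_p = sigmoid(w_hp.T @ x_h)                    # forward pass
        O_q = sigmoid(w_pq @ O_p)
        if (T_q - O_q) ** 2 < tol:                     # stop at tolerance
            break
        delta_pq = 2 * (T_q - O_q) * O_q * (1 - O_q)   # output error signal
        delta_hp = w_pq * delta_pq * O_p * (1 - O_p)   # back-propagated signal
        w_pq = w_pq + eta * delta_pq * O_p             # descent step, each layer
        w_hp = w_hp + eta * np.outer(x_h, delta_hp)

    print(n + 1, O_q)                                  # output approaches 0.8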
[Figure: Error plot for feed-forward backpropagation neural network — sum squared error (y-axis, 0 to 0.25) versus number of iterations (x-axis, 0 to 100).]
FACTORS THAT INFLUENCE BPN TRAINING
❑ Bias
❑ Momentum
❑ Stability
❑ Adjusting Coefficient in Sigmoidal Term
❑ Dealing with Local Minima
❑ Learning Constants
Features of ANN
◼ Neural networks are systems that are capable of learning by example.
◼ An ANN has the ability to utilize examples taken from data and to organize the information into a useful form.
◼ Typically, this form constitutes a model that
represents the relationship between the input
and output variables.
◼ The memory of the neural network may be both distributed and associative.
◼ Neural networks are also fault-tolerant, since
the information storage is distributed over all
the weights.
◼ Neural networks are good pattern
recognizers, even when the information
comprising the patterns is noisy, sparse, or
incomplete.
Neural Network Applications
◼ May provide a model for massively parallel computation
◼ More successful than approaches that "parallelize" traditional serial algorithms
◼ Can compute any computable function
◼ Can do everything a normal digital computer
can do