MATLAB PROGRAMS
1 a) Write a MATLAB program for a Hebb net to classify two-dimensional input patterns in bipolar form with given targets.
Hebb Network
The Hebb learning rule is a simple one. Donald Hebb stated in 1949 that learning in the brain is performed by changes in the synaptic gap. According to the Hebb rule, the weight vector increases in proportion to the product of the input and the learning signal; here the learning signal is equal to the neuron's output. In Hebb learning, if two interconnected neurons are 'on' simultaneously, then the weights associated with these neurons are increased by modifying their synaptic gap (strength). The weight update in the Hebb rule is given by
wi(new) = wi(old) + xi*y
Algorithm of Hebb Net
1. Initialize all weights: w[i] = 0, and the bias: bias = 0.
2. For each input training vector and target output pair, s:t, do steps 3-5.
3. Set activations for the input units: x[i] = s[i], i = 1 to n.
4. Set activation for the output unit: y = t.
5. Adjust the weights (for each i) and bias:
w[i](new) = w[i](old) + x[i]*y
bias(new) = bias(old) + y
As a result,
w(new) = w(old) + Δw
The Hebb rule can be used for pattern association, pattern categorization, pattern classification and a range of other tasks.
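For instance, a single update of this rule can be written directly in MATLAB (a minimal sketch, separate from the lab program below):

% One Hebb update for a single bipolar pair s = [1 -1] with target t = 1
x = [1 -1]; y = 1;      % input activations and output activation
w = [0 0]; b = 0;       % initial weights and bias
w = w + x*y;            % w becomes [1 -1]
b = b + y;              % b becomes 1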
%Hebb network to classify two dimensional input patterns
clear;
clc;
%input patterns
E=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 1 1 1];
F=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 -1 -1 -1];
X(1,1:20)=E;
X(2,1:20)=F;
W(1:20)=0;
t=[1 -1];
b=0;
for i=1:2
W=W+X(i,1:20)*t(i);
b=b+t(i);
end
disp('Weight matrix');
disp(W);
disp('Bias');
disp(b);
Output:
Weight matrix
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2
Bias
0
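Note that the learned weights are W = E*1 + F*(-1) = E - F: the two patterns agree in their first 17 components, so those weights cancel to 0, and they differ (by 2) only in the last three positions. The bias is 0 because the two targets sum to zero.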
1 b) Generate the XOR function and the ANDNOT function using a McCulloch-Pitts Neural Network.
McCulloch-Pitts Neural Network
The earliest model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It is a neuron with a set of inputs I1, I2, I3, ..., Im and one output y. The linear threshold gate simply classifies the set of inputs into two different classes, so the output y is binary. Such a function can be described mathematically using these equations:
Sum = W1 I1 + W2 I2 + ... + Wm Im
y = f(Sum), where f(Sum) = 1 if Sum >= θ and 0 otherwise
W1, W2, W3, ..., Wm are weight values normalized in the range of either (0, 1) or (-1, 1) and associated with each input line, Sum is the weighted sum of the inputs, and θ is a threshold constant. The function f is a linear step function at threshold θ.
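As a quick check (a minimal sketch, separate from the lab program below), the response of an MP neuron over all four input pairs can be computed in vectorized MATLAB; the weights W1 = 1, W2 = -1 and threshold theta = 1 used here are the ones the ANDNOT program below converges to:

% Vectorized McCulloch-Pitts neuron over all four input pairs
x1 = [0 0 1 1]; x2 = [0 1 0 1];
W1 = 1; W2 = -1; theta = 1;
y = double(x1*W1 + x2*W2 >= theta)  % displays 0 0 1 0, the ANDNOT outputs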
%ANDNOT function Using McCulloch-Pitts neuron
clear;
clc;
%Getting Weights and threshold value
disp('Enter Weights');
W1=input('Weight W1=');
W2=input('Weight W2=');
disp('Enter Threshold Value');
theta=input('theta=');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 0 1 0];
con=1;
while con
zin=x1*W1+x2*W2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('Output of Net');
disp(y);
if y==z
con=0;
else
disp('Net is not learning, enter another set of weights and threshold value');
W1=input('Weight W1=');
W2=input('Weight W2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for ANDNOT function');
disp('Weights of Neuron');
disp(W1);
disp(W2);
disp('Threshold Value');
disp(theta);
Output:
Enter Weights
Weight W1=1
Weight W2=1
Enter Threshold Value
theta=0.1
Output of Net
0 1 1 1
Net is not learning, enter another set of weights and threshold value
Weight W1=1
Weight W2=-1
theta=1
Output of Net
0 0 1 0
McCulloch-Pitts Net for ANDNOT function
Weights of neuron
1
-1
Threshold Value
1
%XOR function using McCulloch-Pitts neuron
clear;
clc;
%Getting Weights and threshold value
disp('Enter the weights');
w11=input('Weight w11=');
w12=input('Weight w12=');
w21=input('Weight w21=');
w22=input('Weight w22=');
v1=input('Weight v1=');
v2=input('Weight v2=');
disp('Enter the threshold value');
theta=input('theta=');
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 1 1 0];
con=1;
while con
zin1=x1*w11+x2*w21;
zin2=x1*w12+x2*w22;
for i=1:4
if zin1(i)>=theta
y1(i)=1;
else y1(i)=0;
end
if zin2(i)>=theta
y2(i)=1;
else y2(i)=0;
end
end
yin=y1*v1+y2*v2;
for i=1:4
if yin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('output of net=');
disp(y);
if y==z
con=0;
else
disp('Net is not learning, enter another set of weights and threshold value');
w11=input('Weight w11=');
w12=input('Weight w12=');
w21=input('Weight w21=');
w22=input('Weight w22=');
v1=input('Weight v1=');
v2=input('Weight v2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for XOR function');
disp('Weights of neuron Z1');
disp(w11);
disp(w21);
disp('Weights of neuron Z2');
disp(w12);
disp(w22);
disp('Weights of output neuron Y');
disp(v1);
disp(v2);
disp('Threshold value =');
disp(theta);
Output:
Enter the weights
Weight w11=1
Weight w12=-1
Weight w21=-1
Weight w22=1
Weight v1=1
Weight v2=1
Enter the threshold value
theta=1
output of net=
0 1 1 0
McCulloch-Pitts Net for XOR function
Weights of neuron Z1
1
-1
Weights of neuron Z2
-1
1
Weights of output neuron Y
1
1
Threshold value =
1
2) Write a MATLAB program to apply Back Propagation Network for Pattern
Recognition Network.
Back Propagation Network
The Back Propagation learning algorithm is one of the most important developments in neural networks (Bryson and Ho, 1969; Werbos, 1974; LeCun, 1985; Parker, 1985; Rumelhart, 1986). This learning algorithm is applied to multilayer feed-forward networks consisting of processing elements with continuous differentiable activation functions. Networks trained with the back-propagation learning algorithm are also called Back Propagation Networks (BPNs).
Back Propagation Network Algorithm
Step 0: Initialize weights and learning rate (take some small random values).
Step 1: Perform Steps 2-9 when stopping condition is false.
Step 2: Perform Steps 3-8 for each training pair.
Feed-Forward (Phase 1)
Step 3: Each input unit xi (i = 1 to n) receives the input signal xi and sends it to all the hidden units.
Step 4: Each hidden unit zj (j = 1 to p) sums its weighted input signals to calculate its net input:
zinj = v0j + Σi xi vij (i = 1 to n)
Calculate the output of the hidden unit by applying the activation function (binary or bipolar sigmoid) over zinj: zj = f(zinj). Send this output signal from the hidden unit to the units of the output layer.
Step 5: Each output unit yk (k = 1 to m) calculates its net input:
yink = w0k + Σj zj wjk (j = 1 to p)
Apply the activation function to compute the output signal: yk = f(yink).
Back-Propagation of Error (Phase 2)
Step 6: Each output unit yk (k = 1 to m) receives a target pattern corresponding to the input training pattern and computes the error-correction term:
δk = (tk − yk) f'(yink)
The derivative f'(yink) can be calculated as in Section 2.3.3. On the basis of the calculated error-correction term, update the change in weights and bias:
Δwjk = α δk zj,  Δw0k = α δk
Also, send δk backwards to the hidden layer.
Step 7: Each hidden unit zj (j = 1 to p) sums its delta inputs from the output units:
δinj = Σk δk wjk (k = 1 to m)
The term δinj is multiplied by the derivative of f(zinj) to calculate the error term:
δj = δinj f'(zinj)
On its basis, update the change in weights and bias: Δvij = α δj xi, Δv0j = α δj.
Weight Updating (Phase 3)
Step 8: Each output unit updates its weights and bias, wjk(new) = wjk(old) + Δwjk, w0k(new) = w0k(old) + Δw0k; each hidden unit likewise updates vij(new) = vij(old) + Δvij, v0j(new) = v0j(old) + Δv0j.
Step 9: Check for the stopping condition (for example, a target error level or a fixed number of epochs).
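The listing that follows implements these steps element by element for a single training pair. The same computation can be written compactly in vectorized form; the sketch below performs one training step with the same initial values the program uses (an illustration, not part of the lab listing):

% One vectorized BPN training step (binary sigmoid)
x = [0 1]';                                % input vector (column)
t = 1;                                     % target
v = [0.7 -0.4; -0.2 0.3]; vb = [0.4 0.6]'; % input-to-hidden weights and biases
w = [0.5; 0.1]; wb = -0.3;                 % hidden-to-output weights and bias
alpha = 0.25;                              % learning rate
f = @(s) 1./(1 + exp(-s));                 % binary sigmoid
z = f(v'*x + vb);                          % Step 4: hidden-layer outputs
y = f(w'*z + wb);                          % Step 5: network output
delk = (t - y)*y*(1 - y);                  % Step 6: output error term
delj = (w*delk).*z.*(1 - z);               % Step 7: hidden error terms
w = w + alpha*delk*z;    wb = wb + alpha*delk;  % Step 8: update w, wb
v = v + alpha*(x*delj'); vb = vb + alpha*delj;  %         update v, vb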
%back propagation network
clear all;
clc;
disp('Back Propagation Network');
v=[0.7 -0.4;-0.2 0.3]
x=[0 1]
t=[1]
w=[0.5;0.1]
t1=0
wb=-0.3
vb=[0.4 0.6]
alpha=0.25
e=1;
temp=0;
while (e<=3)
e
for i=1:2
for j=1:2
temp=temp+(v(j,i)*x(j));
end
zin(i)=temp+vb(i);
temp1=exp(-zin(i));
fz(i)=(1/(1+temp1)); % binary sigmoid activation
z(i)=fz(i);
fdz(i)=fz(i)*(1- fz(i));
temp=0;
end
for k=1
for j=1:2
temp=temp+z(j)*w(j,k);
end
yin(k)=temp+wb(k);
fy(k)=(1/(1+exp(-yin(k)))); % binary sigmoid activation
y(k)=fy(k);
temp=0;
end
for k=1
fdy(k)=fy(k)*(1- fy(k));
delk(k)=(t(k)-y(k))*fdy(k);
end
for k=1
for j=1:2
dw(j,k)=alpha*delk(k)*z(j);
end
dwb(k)=alpha*delk(k);
end
for j=1:2
for k=1
delin(j)=delk(k)*w(j,k);
end
delj(j)=delin(j)*fdz(j);
end
for i=1:2
for j=1:2
dv(i,j)=alpha*delj(j)*x(i);
end
dvb(i)=alpha*delj(i);
end
for k=1
for j=1:2
w(j,k)=w(j,k)+dw(j,k);
end
wb(k)=wb(k)+dwb(k);
end
w,wb
for i=1:2
for j=1:2
v(i,j)=v(i,j)+dv(i,j);
end
vb(i)=vb(i)+dvb(i);
end
v,vb
te(e)=e;
e=e+1;
end
Output:
Back Propagation Network
v = 0.7000 -0.4000
-0.2000 0.3000
x=0 1
t=1
w = 0.5000
0.1000
t1 = 0
wb = -0.3000
vb = 0.4000 0.6000
alpha = 0.2500
e =1
w = 0.5167
0.1274
wb = -0.2699
v = 0.7000 -0.4000
-0.1963 0.3002
vb = 0.4037 0.6002
e =2
w = 0.5300
0.1450
wb = -0.2329
v = 0.7000 -0.4000
-0.1919 0.3014
vb = 0.4081 0.6014
e=3
w = 0.5392
0.1563
wb = -0.1978
v = 0.7000 -0.4000
-0.1883 0.3025
vb = 0.4117 0.6025
3) Develop a Kohonen Self-Organizing feature map for an image recognition problem.
The KSOM (also called a feature map or Kohonen map) is an unsupervised ANN algorithm (Kohonen et al., 1996). It is usually presented as a two-dimensional grid or map whose units (nodes or neurons) become tuned to different input data patterns. The principal goal of the KSOM is to transform an incoming signal pattern of arbitrary dimension into a two-dimensional discrete map. This mapping roughly preserves the most important topological and metric relationships of the original data elements, implying that not much information is lost during the mapping.
Kohonen Self-Organizing Map Algorithm
Step 1 − Initialize the weights, the learning rate α and the neighborhood topological scheme.
Step 2 − Continue step 3-9, when the stopping condition is not true.
Step 3 − Continue step 4-6 for every input vector x.
Step 4 − Calculate the square of the Euclidean distance for j = 1 to m:
D(j) = Σi (xi − wij)^2
Step 5 − Obtain the winning unit J where D(j) is minimum.
Step 6 − Calculate the new weight of the winning unit by the following relation:
wij(new) = wij(old) + α [xi − wij(old)]
Step 7 − Update the learning rate α by the following relation:
α(t + 1) = 0.5 α(t)
Step 8 − Reduce the radius of topological scheme.
Step 9 − Check for the stopping condition for the network.
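Steps 4-6 for one input vector can be written compactly as follows (a minimal sketch using the first training pattern and the initial weights of the program below; the implicit expansion in w - x requires MATLAB R2016b or later):

% Winner search and weight update for one input vector (Steps 4-6)
x = [1 1 0 0]';                           % one input pattern, as a column
w = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3]; % one column of weights per cluster unit
alpha = 0.6;                              % learning rate
D = sum((w - x).^2, 1);                   % squared Euclidean distance to each unit
[~, J] = min(D);                          % winning unit J
w(:,J) = w(:,J) + alpha*(x - w(:,J));     % move the winner toward the input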
clear all;
clc;
disp('Kohonen Self organizing feature maps');
disp('The input patterns are');
x=[1 1 0 0; 0 0 0 1; 1 0 0 0;0 0 1 1]
t=1;
alpha(t)=0.6;
e=1;
disp('Since we have 4 input patterns and 2 cluster units are to be formed, the weight matrix is');
w=[0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3];
disp('the learning rate of this epoch is');
alpha
while(e<=3)
i=1;
j=1;
k=1;
m=1;
disp('Epoch =');
e
while(i<=4)
for j=1:2
temp=0;
for k=1:4
temp=temp+(w(k,j)-x(i,k))^2; % squared Euclidean distance
end
D(j)=temp
end
if(D(1)<D(2))
J=1;
else
J=2;
end
disp('The winning unit is');
J
disp('Weight updation');
for m=1:4
w(m,J)=w(m,J) + (alpha(e) * (x(i,m)-w(m,J)));
end
w
i=i+1;
end
temp=alpha(e);
e=e+1;
alpha(e)=(0.5*temp);
disp('Epoch completed');
disp('Learning rate updated for next epoch');
alpha(e)
end
Output:
Kohonen Self organizing feature maps
The input patterns are
x=1 1 0 0
0 0 0 1
1 0 0 0
0 0 1 1
Since we have 4 input patterns and 2 cluster units are to be formed, the weight matrix is
the learning rate of this epoch is
alpha = 0.6000
Epoch =
e=1
D = 0.2000
D = 0.2000 0.2000
The winning unit is
J =1
Weight updation
w=
0.6800 0.8000
0.8400 0.4000
0.2000 0.7000
0.3600 0.3000
D =1.0800 0.2000
D = 1.0800 1.2000
The winning unit is
J =1
Weight updation
w=
0.2720 0.8000
0.3360 0.4000
0.0800 0.7000
0.7440 0.3000
D = 0.4320 1.2000
D = 0.4320 1.2000
The winning unit is
J=1
Weight updation
w =0.7088 0.8000
0.1344 0.4000
0.0320 0.7000
0.2976 0.3000
D = -0.8272 1.2000
D =-0.8272 0.2000
The winning unit is
J= 1
Weight updation
w = 0.2835 0.8000
0.0538 0.4000
0.6128 0.7000
0.7190 0.3000
Epoch completed
Learning rate updated for next epoch
ans = 0.3000
Epoch =
e= 2
D = -0.3309 0.2000
D = -0.3309 0.2000
The winning unit is
J =1
Weight updation
w = 0.4985 0.8000
0.3376 0.4000
0.4290 0.7000
0.5033 0.3000
D = 0.7684 0.2000
D = 0.7684 1.2000
The winning unit is
J=1
Weight updation
w = 0.3489 0.8000
0.2363 0.4000
0.3003 0.7000
0.6523 0.3000
D = 0.5379 1.2000
D =0.5379 1.2000
The winning unit is
J=1
Weight updation
w = 0.5442 0.8000
0.1654 0.4000
0.2102 0.7000
0.4566 0.3000
D = -0.6235 1.2000
D = -0.6235 0.2000
The winning unit is
J =1
Weight updation
w = 0.3810 0.8000
0.1158 0.4000
0.4471 0.7000
0.6196 0.3000
Epoch completed
Learning rate updated for next epoch
ans = 0.1500
Epoch =
e =3
D = -0.4364 0.2000
D = -0.4364 0.2000
The winning unit is
J=1
Weight updation
w = 0.4738 0.8000
0.2484 0.4000
0.3801 0.7000
0.5267 0.3000
D = 0.6290 0.2000
D = 0.6290 1.2000
The winning unit is
J =1
Weight updation
w = 0.4028 0.8000
0.2112 0.4000
0.3231 0.7000
0.5977 0.3000
D = 0.5347 1.2000
D = 0.5347 1.2000
The winning unit is
J= 1
Weight updation
w = 0.4923 0.8000
0.1795 0.4000
0.2746 0.7000
0.5080 0.3000
D = -0.5455 1.2000
D = -0.5455 0.2000
The winning unit is
J=1
Weight updation
w = 0.4185 0.8000
0.1526 0.4000
0.3834 0.7000
0.5818 0.3000
Epoch completed
Learning rate updated for next epoch
ans = 0.0750
4) Write a MATLAB program to implement a Discrete Hopfield Network and test the input pattern.
Discrete Hopfield Network
A Hopfield network that operates in a discrete fashion is called a discrete Hopfield network. In other words, the input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, -1) in nature. The network has symmetrical weights with no self-connections, i.e., wij = wji and wii = 0.
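Note that the listing below follows the textbook program and keeps the diagonal of X'*X. If one wants the weights to satisfy wii = 0 exactly, the diagonal can be cleared after the outer product (a small sketch):

% Weight matrix with zero diagonal, matching wii = 0
X = [1 -1 -1 -1; -1 1 1 -1; -1 -1 -1 1];   % the three stored patterns
W = X'*X;
W = W - diag(diag(W));                     % remove self-connections

For bipolar patterns this only shifts every energy value by the same constant, so the energy comparisons made in the program are unaffected.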
%discrete hopfield
clear all;
clc;
disp('Discrete Hopfield Network');
theta=0;
X=[1 -1 -1 -1;-1 1 1 -1;-1 -1 -1 1];
%calculating Weight Matrix
W=X'*X
%calculating Energy
k=1;
while(k<=3)
temp=0;
for i=1:4
for j=1:4
temp=temp+(X(k,i)*W(i,j)*X(k,j));
end
end
E(k)=(-0.5)*temp;
k=k+1;
end
%Energy function for the 3 samples
E
%Test for given pattern [-1 1 -1 -1]
disp('Given input pattern for testing');
x1=[-1 1 -1 -1]
temp=0;
for i=1:4
for j=1:4
temp=temp+(x1(i)*W(i,j)*x1(j));
end
end
SE=(-0.5)*temp
disp('By Synchronous Updation method');
disp('The net input calculated is');
yin=x1*W
for i=1:4
if(yin(i)>theta)
y(i)=1;
elseif(yin(i)==theta)
y(i)=yin(i);
else
y(i)=-1;
end
end
disp('The output calculated for net input is');
y
temp=0;
for i=1:4
for j=1:4
temp=temp+(y(i)*W(i,j)*y(j));
end
end
SE=(-0.5)*temp
n=0;
for i=1:3
if (SE==E(i))
n=0;
k=i;
else
n=n+1;
end
end
if(n==3)
disp('Pattern is not associated with any input pattern');
else
disp('The test pattern');
x1
disp('is associated with');
X(k,:)
end
Output:
Discrete Hopfield Network
W=
3 -1 -1 -1
-1 3 3 -1
-1 3 3 -1
-1 -1 -1 3
E=
-10 -12 -10
Given input pattern for testing
x1 =
-1 1 -1 -1
SE =
-2
By Synchronous Updation method
The net input calculated is
yin =
-2 2 2 -2
The output calculated for net input is
y=
-1 1 1 -1
SE =
-12
The test pattern
x1 =
-1 1 -1 -1
is associated with
ans =
-1 1 1 -1
5) Develop a simple Ant Colony Optimization Problem with MATLAB to find the
optimum path.
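In this simplified version, on every iteration each of the 80 ants takes a small random step from the current point; an ant's pheromone is taken as the reciprocal of its remaining distance to the target (5, 5), so the ant with the strongest pheromone (the shortest remaining distance) defines the next point on the path. The loop stops once the error (distance to the target) stops decreasing.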
xx=[3, 5]; %initial point (3, 3). we have to move to (5, 5)
yy=[3, 5];
plot(xx(1), yy(1), 'ko', 'LineWidth',4);
hold on;
startingptxxx=xx(1);
startingptyyy=yy(1);
error=1; %initialize with any random values
prev_error=2e7; %such that prev_error>>error
times=1;
savex(1)=startingptxxx;
savey(1)=startingptyyy;
total_ants=80; %greater the number of ants, more optimum the path
while error<prev_error %if error>prev_error it means
%that you have surpassed your destination
if times>1
prev_error=error; %so that this does not execute for the
end %first iteration of the while loop
times=times+1;
for i=1:1:total_ants
startingptx(i)=startingptxxx+rand*0.5; %each ant randomly takes any
startingpty(i)=startingptyyy+rand*0.5; % position near its starting point
plot(startingptx(i), startingpty(i), 'go', 'LineWidth',2)
hold on;
end
for i=1:1:total_ants
dist(i)=sqrt((5-startingptx(i))^2+(5-startingpty(i))^2);
e(i)=dist(i); %greater the distance greater the error
end
for i=1:1:total_ants
pheromone(i)=1/e(i);
end
bestpath=find(pheromone==max(pheromone));
if e(bestpath)<prev_error
startingptxxx=startingptx(bestpath);
startingptyyy=startingpty(bestpath);
plot(startingptxxx, startingptyyy, 'ro', 'LineWidth',4)
hold on;
savex(times)=startingptxxx;
savey(times)=startingptyyy;
end
error=e(bestpath);
end
plot(savex, savey, 'r', 'LineWidth',2)
Output:
antcolonyoptimization
bestpath
bestpath = 77
6) Solve feature selection problem using Artificial Bee Colony Optimization.
% Main.m
abc;
%RouletteWheelSelection
function i=RouletteWheelSelection(P)
r=rand;
C=cumsum(P);
i=find(r<=C,1,'first');
end
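For example (an illustrative usage, not part of the lab listing), with the probabilities below the function returns index 1, 2 or 3 with probability 0.2, 0.5 and 0.3 respectively:

P = [0.2 0.5 0.3];            % selection probabilities (sum to 1)
i = RouletteWheelSelection(P) % most often returns 2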
% ABC.m
clc;
clear;
close all;
%% Problem Definition
CostFunction=@(x) Sphere(x); % Cost Function
nVar=5; % Number of Decision Variables
VarSize=[1 nVar]; % Decision Variables Matrix Size
VarMin=-10; % Decision Variables Lower Bound
VarMax= 10; % Decision Variables Upper Bound
%% ABC Settings
MaxIt=200; % Maximum Number of Iterations
nPop=100; % Population Size (Colony Size)
nOnlooker=nPop; % Number of Onlooker Bees
L=round(0.6*nVar*nPop); % Abandonment Limit Parameter (Trial Limit)
a=1; % Acceleration Coefficient Upper Bound
%% Initialization
% Empty Bee Structure
empty_bee.Position=[];
empty_bee.Cost=[];
% Initialize Population Array
pop=repmat(empty_bee,nPop,1);
% Initialize Best Solution Ever Found
BestSol.Cost=inf;
% Create Initial Population
for i=1:nPop
pop(i).Position=unifrnd(VarMin,VarMax,VarSize);
pop(i).Cost=CostFunction(pop(i).Position);
if pop(i).Cost<=BestSol.Cost
BestSol=pop(i);
end
end
% Abandonment Counter
C=zeros(nPop,1);
% Array to Hold Best Cost Values
BestCost=zeros(MaxIt,1);
%% ABC Main Loop
for it=1:MaxIt
% Recruited Bees
for i=1:nPop
% Choose k randomly, not equal to i
K=[1:i-1 i+1:nPop];
k=K(randi([1 numel(K)]));
% Define Acceleration Coeff.
phi=a*unifrnd(-1,+1,VarSize);
% New Bee Position
newbee.Position=pop(i).Position+phi.*(pop(i).Position-pop(k).Position);
% Evaluation
newbee.Cost=CostFunction(newbee.Position);
% Comparison
if newbee.Cost<=pop(i).Cost
pop(i)=newbee;
else
C(i)=C(i)+1;
end
end
% Calculate Fitness Values and Selection Probabilities
F=zeros(nPop,1);
MeanCost = mean([pop.Cost]);
for i=1:nPop
F(i) = exp(-pop(i).Cost/MeanCost); % Convert Cost to Fitness
end
P=F/sum(F);
% Onlooker Bees
for m=1:nOnlooker
% Select Source Site
i=RouletteWheelSelection(P);
% Choose k randomly, not equal to i
K=[1:i-1 i+1:nPop];
k=K(randi([1 numel(K)]));
% Define Acceleration Coeff.
phi=a*unifrnd(-1,+1,VarSize);
% New Bee Position
newbee.Position=pop(i).Position+phi.*(pop(i).Position-pop(k).Position);
% Evaluation
newbee.Cost=CostFunction(newbee.Position);
% Comparison
if newbee.Cost<=pop(i).Cost
pop(i)=newbee;
else
C(i)=C(i)+1;
end
end
% Scout Bees
for i=1:nPop
if C(i)>=L
pop(i).Position=unifrnd(VarMin,VarMax,VarSize);
pop(i).Cost=CostFunction(pop(i).Position);
C(i)=0;
end
end
% Update Best Solution Ever Found
for i=1:nPop
if pop(i).Cost<=BestSol.Cost
BestSol=pop(i);
end
end
% Store Best Cost Ever Found
BestCost(it)=BestSol.Cost;
% Display Iteration Information
disp(['Iteration ' num2str(it) ': Best Cost = ' num2str(BestCost(it))]);
end
%% Results
figure;
%plot(BestCost,'LineWidth',2);
semilogy(BestCost,'LineWidth',2);
xlabel('Iteration');
ylabel('Best Cost');
grid on;
% Sphere.m
function z=Sphere(x)
z=sum(x.^2);
end
Output:
(The script prints the best cost at each of the 200 iterations and then displays a semilog plot of Best Cost versus Iteration.)
7) Implementation of minimum spanning tree using Particle Swarm Optimization.
The PSO algorithm consists of just three steps:
1. Evaluate fitness of each particle
2. Update individual and global bests
3. Update velocity and position of each particle
These steps are repeated until some stopping condition is met.
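Step 3, the core of the algorithm, updates each particle as below (a minimal sketch using the same inertia weight and learning coefficients the program sets; pbest and gbest stand for the personal and global best positions):

% One PSO velocity and position update for a single particle
w = 0.2; c1 = 0.7; c2 = 1.0;                 % inertia and learning coefficients
x = rand(1,5); vel = zeros(1,5);             % particle position and velocity
pbest = x; gbest = rand(1,5);                % placeholder personal/global bests
vel = w*vel + c1*rand(1,5).*(pbest - x) ...  % pull toward personal best
            + c2*rand(1,5).*(gbest - x);     % pull toward global best
x = x + vel;                                 % move the particle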
Main.m
% Minimum Spanning Tree using PSO
clc;
clear;
close all;
%% Problem Definition
model=CreateModel();
CostFunction=@(xhat) MyCost(xhat,model); % Cost Function
nVar=model.n*(model.n-1)/2; % Number of Decision Variables
VarSize=[1 nVar]; % Decision Variables Matrix Size
VarMin=0; % Lower Bound of Variables
VarMax=1; % Upper Bound of Variables
%% PSO Parameters
MaxIt=250; % Maximum Number of Iterations
nPop=250; % Population Size (Swarm Size)
w=0.2; % Inertia Weight
wdamp=1; % Inertia Weight Damping Ratio
c1=0.7; % Personal Learning Coefficient
c2=1.0; % Global Learning Coefficient
% Constriction Coefficients
% phi1=2.05;
% phi2=2.05;
% phi=phi1+phi2;
% chi=2/(phi-2+sqrt(phi^2-4*phi));
% w=chi; % Inertia Weight
% wdamp=1; % Inertia Weight Damping Ratio
% c1=chi*phi1; % Personal Learning Coefficient
% c2=chi*phi2; % Global Learning Coefficient
% Velocity Limits
VelMax=0.1*(VarMax-VarMin);
VelMin=-VelMax;
mu = 0.1; % Mutation Rate
%% Initialization
empty_particle.Position=[];
empty_particle.Cost=[];
empty_particle.Sol=[];
empty_particle.Velocity=[];
empty_particle.Best.Position=[];
empty_particle.Best.Cost=[];
empty_particle.Best.Sol=[];
particle=repmat(empty_particle,nPop,1);
BestSol.Cost=inf;
for i=1:nPop
% Initialize Position
particle(i).Position=unifrnd(VarMin,VarMax,VarSize);
% Initialize Velocity
particle(i).Velocity=zeros(VarSize);
% Evaluation
[particle(i).Cost, particle(i).Sol]=CostFunction(particle(i).Position);
% Update Personal Best
particle(i).Best.Position=particle(i).Position;
particle(i).Best.Cost=particle(i).Cost;
particle(i).Best.Sol=particle(i).Sol;
% Update Global Best
if particle(i).Best.Cost<BestSol.Cost
BestSol=particle(i).Best;
end
end
BestCost=zeros(MaxIt,1);
%% PSO Main Loop
for it=1:MaxIt
for i=1:nPop
% Update Velocity
particle(i).Velocity = w*particle(i).Velocity ...
+c1*rand(VarSize).*(particle(i).Best.Position-particle(i).Position) ...
+c2*rand(VarSize).*(BestSol.Position-particle(i).Position);
% Apply Velocity Limits
particle(i).Velocity = max(particle(i).Velocity,VelMin);
particle(i).Velocity = min(particle(i).Velocity,VelMax);
% Update Position
particle(i).Position = particle(i).Position + particle(i).Velocity;
% Velocity Mirror Effect
IsOutside=(particle(i).Position<VarMin | particle(i).Position>VarMax);
particle(i).Velocity(IsOutside)=-particle(i).Velocity(IsOutside);
% Apply Position Limits
particle(i).Position = max(particle(i).Position,VarMin);
particle(i).Position = min(particle(i).Position,VarMax);
% Evaluation
[particle(i).Cost, particle(i).Sol] = CostFunction(particle(i).Position);
% Mutation
for k=1:2
NewParticle=particle(i);
NewParticle.Position=Mutate(particle(i).Position, mu);
[NewParticle.Cost, NewParticle.Sol]=CostFunction(NewParticle.Position);
if NewParticle.Cost<=particle(i).Cost || rand < 0.1
particle(i)=NewParticle;
end
end
% Update Personal Best
if particle(i).Cost<particle(i).Best.Cost
particle(i).Best.Position=particle(i).Position;
particle(i).Best.Cost=particle(i).Cost;
particle(i).Best.Sol=particle(i).Sol;
% Update Global Best
if particle(i).Best.Cost<BestSol.Cost
BestSol=particle(i).Best;
end
end
end
% Local Search based on Mutation
for k=1:5
NewParticle=BestSol;
NewParticle.Position=Mutate(BestSol.Position, mu);
[NewParticle.Cost, NewParticle.Sol]=CostFunction(NewParticle.Position);
if NewParticle.Cost<=BestSol.Cost
BestSol=NewParticle;
end
end
BestCost(it)=BestSol.Cost;
disp(['Iteration ' num2str(it) ': Best Cost = ' num2str(BestCost(it))]);
w=w*wdamp;
% Plot Best Solution
figure(1);
PlotSolution(BestSol.Sol,model);
pause(0.01);
end
%% Results
figure;
plot(BestCost,'LineWidth',2);
xlabel('Iteration');
ylabel('Best Cost');
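Note that the script relies on four helper functions (CreateModel, MyCost, Mutate and PlotSolution) that are not reproduced in this manual. As one possibility, Mutate could re-randomize a fraction mu of the position entries; the following is only a plausible sketch of such a helper, not the original code:

% Mutate.m (hypothetical sketch -- the original helper is not listed here)
function y = Mutate(x, mu)
    y = x;
    flag = rand(size(x)) < mu;        % pick roughly a fraction mu of entries
    y(flag) = rand(1, nnz(flag));     % replace them with fresh values in [0,1]
end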
Output:
(The script prints the best cost at each iteration, redraws the current best spanning tree in figure 1 as it improves, and finally plots Best Cost versus Iteration.)