The Particle Swarm Optimization Algorithm
Nebojša Trpković
[email protected]
10th Dec 2010
Problem Definition
optimization of continuous nonlinear functions
finding the best solution in a problem space
Example
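The original slide shows the example as a figure. As a stand-in, here is one commonly used continuous nonlinear test function, the Rastrigin function (my choice of illustration, not taken from the slides): it is riddled with local minima and has a global minimum of 0 at the origin.

```python
import math

def rastrigin(x):
    """Rastrigin function: continuous, nonlinear, many local minima,
    global minimum of 0 at x = (0, ..., 0)."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

print(rastrigin([0.0, 0.0]))  # 0.0 (the global minimum)
print(rastrigin([1.0, 2.0]))  # ~5.0 (a non-optimal point)
```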
Importance
function optimization
artificial neural network training
fuzzy system control
Existing Solutions
Ant Colony Optimization (ACO)
limited to discrete problems
Genetic Algorithms (GA)
slow convergence
Particle Swarm Optimization
In the simplest terms:
a computational method
that optimizes a problem
by iteratively trying to improve a candidate solution
with regard to a given measure of quality
Particle Swarm Optimization
Facts:
developed by Russell C. Eberhart and James Kennedy in 1995
inspired by the social behavior of bird flocking and fish schooling
similar to evolutionary techniques such as Genetic Algorithms (GA)
Particle Swarm Optimization
Benefits:
faster convergence
fewer parameters to tune
easier searching in very large problem spaces
Particle Swarm Optimization
Basic principle:
let the particle swarm move towards the best position in the search space, remembering each particle's best known position and the swarm's (global) best known position
Velocity Change
x_i – specific particle's position
p_i – particle's (personal) best known position
g – swarm's (global) best known position
v_i – particle's velocity

v_i ← v_i + φ_p r_p (p_i − x_i) + φ_g r_g (g − x_i)

The three terms are the inertia, cognitive, and social components; φ_p and φ_g are acceleration coefficients, and r_p, r_g are random numbers drawn uniformly from [0, 1] at each update.
Position Change
x_i – specific particle's position
v_i – particle's velocity

x_i ← x_i + v_i
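As a concrete sketch of the two update rules for a single particle (identifier names are mine, and the coefficient defaults of 2.0 are the classical constants from the 1995 paper, not mandated by the slides):

```python
import random

def step(x, v, p, g, phi_p=2.0, phi_g=2.0):
    """One PSO step for a single particle: velocity change, then position change.

    x: current position, v: velocity,
    p: particle's best known position, g: swarm's best known position.
    """
    new_v = []
    for d in range(len(x)):
        r_p, r_g = random.random(), random.random()   # fresh randomness per dimension
        cognitive = phi_p * r_p * (p[d] - x[d])       # pull towards the personal best
        social = phi_g * r_g * (g[d] - x[d])          # pull towards the global best
        new_v.append(v[d] + cognitive + social)       # inertia + cognitive + social
    new_x = [x[d] + new_v[d] for d in range(len(x))]  # x_i <- x_i + v_i
    return new_x, new_v
```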
Algorithm
For each particle
    Initialize particle
End

Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the particle's best fitness value in history,
            set the current value as the new personal best fitness value
    End
    Choose the particle with the best fitness value of all particles; if that fitness
        value is better than the current global best, set it as the new global best
    For each particle
        Calculate particle velocity according to the velocity change equation
        Update particle position according to the position change equation
    End
While maximum iterations or minimum error criterion is not attained
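Below is a minimal, runnable sketch of this loop in Python, minimizing the Rastrigin function from the earlier example. The parameter values, bounds, velocity clamp, and helper names are my assumptions, not the authors' reference implementation (early PSO variants did use a v_max clamp to keep velocities bounded).

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.12, 5.12),
        phi_p=2.0, phi_g=2.0, v_max=None):
    """Minimal PSO for minimization, following the pseudocode above."""
    lo, hi = bounds
    if v_max is None:
        v_max = hi - lo  # crude velocity clamp, as in early PSO variants
    # Initialize each particle with a random position and velocity
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[random.uniform(lo - hi, hi - lo) for _ in range(dim)] for _ in range(n_particles)]
    ps = [x[:] for x in xs]              # personal best positions
    p_fit = [f(x) for x in xs]           # personal best fitness values
    best = min(range(n_particles), key=lambda i: p_fit[i])
    g, g_fit = ps[best][:], p_fit[best]  # global best position and fitness

    for _ in range(iters):
        for i in range(n_particles):
            fit = f(xs[i])
            if fit < p_fit[i]:           # better personal fitness found
                p_fit[i], ps[i] = fit, xs[i][:]
        best = min(range(n_particles), key=lambda i: p_fit[i])
        if p_fit[best] < g_fit:          # better global fitness found
            g_fit, g = p_fit[best], ps[best][:]
        for i in range(n_particles):
            for d in range(dim):         # velocity change, then position change
                r_p, r_g = random.random(), random.random()
                vs[i][d] += (phi_p * r_p * (ps[i][d] - xs[i][d])
                             + phi_g * r_g * (g[d] - xs[i][d]))
                vs[i][d] = max(-v_max, min(v_max, vs[i][d]))
                xs[i][d] += vs[i][d]
    return g, g_fit
```

For example, `best_x, best_fit = pso(rastrigin, dim=2)` will typically return a point near the origin, though convergence on such a multimodal function is not guaranteed on every run.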
Single Particle
Parameter Selection
Different ways to choose parameters:
proper balance between exploration and exploitation
(avoiding premature convergence to a local optimum, yet still ensuring a good rate of convergence to the optimum)
putting all attention on exploration
(making searches in vast problem spaces possible)
automation by meta-optimization
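As one concrete illustration of the trade-off (the presets below are my own; only the 2.0/2.0 pair is the classical choice from the 1995 paper), the acceleration coefficients of the pso() sketch above can be shifted to weight individual exploration against swarm-driven exploitation:

```python
# Hypothetical presets for the pso() sketch above; only the 2.0/2.0
# pair comes from the 1995 paper, the rest are illustrative.
BALANCED = dict(phi_p=2.0, phi_g=2.0)  # classic choice: equal pull
EXPLORE = dict(phi_p=2.5, phi_g=1.5)   # favor personal bests: wider search
EXPLOIT = dict(phi_p=1.5, phi_g=2.5)   # favor the global best: faster, riskier convergence

best_x, best_fit = pso(rastrigin, dim=2, **BALANCED)
```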
Avoiding Local Optima
adding a randomization factor to the velocity calculation
adding random momentum at specific iterations
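A sketch of how both ideas could be wired into the velocity update (the perturbation scheme, the noise scale, and the every-k-iterations kick are my assumptions, not from the slides):

```python
import random

def perturbed_velocity(v, cognitive, social, noise=0.1,
                       iteration=0, kick_every=50, kick_scale=2.0):
    """Velocity update for one dimension, with extra randomization to help
    the swarm escape local optima; the exact scheme is an assumption."""
    new_v = v + cognitive + social
    new_v += random.uniform(-noise, noise)  # small randomization factor
    if kick_every and iteration > 0 and iteration % kick_every == 0:
        new_v += random.uniform(-kick_scale, kick_scale)  # occasional random momentum
    return new_v
```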
Swarm
Conclusion
"This algorithm belongs ideologically to that philosophical school that allows wisdom to emerge rather than trying to impose it, that emulates nature rather than trying to control it, and that seeks to make things simpler rather than more complex."
James Kennedy and Russell Eberhart
References
Wikipedia
http://www.wikipedia.org/
Swarm Intelligence
http://www.swarmintelligence.org/
Application of a particle swarm optimization algorithm for determining optimum well location and type, Jerome Onwunalu and Louis J. Durlofsky, 2009
Particle Swarm Optimization, James Kennedy and Russell Eberhart, 1995
http://www.engr.iupui.edu/~shi/Coference/psopap4.html
Robot Swarm driven by Particle Swarm Optimization algorithm, thinkfluid
http://www.youtube.com/watch?v=RLIA1EKfSys