Article
OPTIMUS: Self-Adaptive Differential Evolution with
Ensemble of Mutation Strategies for Grasshopper
Algorithmic Modeling
Cemre Cubukcuoglu 1,2 , Berk Ekici 1 , Mehmet Fatih Tasgetiren 3, * and Sevil Sariyildiz 1
1 Faculty of Architecture and the Built Environment, Chair of Design Informatics,
Delft University of Technology, Julianalaan 134, 2628 BL Delft, The Netherlands
2 Department of Interior Architecture and Environmental Design, Faculty of Architecture, Yasar University,
Universite Caddesi, No: 37-39, Agacli Yol, Bornova, Izmir 35100, Turkey
3 Department of Industrial and Systems Engineering, Istinye University, Maltepe Mahallesi, Cirpici Yolu B Ck.
No: 9, Zeytinburnu, Istanbul 34010, Turkey
* Correspondence: [email protected]
Received: 7 May 2019; Accepted: 8 July 2019; Published: 12 July 2019
Abstract: Most architectural design problems are basically real-parameter optimization problems, so any type of evolutionary or swarm algorithm can be used in this field. However, little attention is paid to using optimization methods within computer-aided design (CAD) programs. In this paper, we present Optimus, which is a new optimization tool for grasshopper algorithmic modeling in Rhinoceros CAD software. Optimus implements a self-adaptive differential evolution algorithm with an ensemble of mutation strategies (jEDE). We made an experiment using standard test problems in the literature and some of the test problems proposed in IEEE CEC 2005. We reported the minimum, maximum, average, standard deviations, and number of function evaluations of five replications for each function. Experimental results on the benchmark suite showed that Optimus (jEDE) outperforms other optimization tools, namely Galapagos (genetic algorithm), SilverEye (particle swarm optimization), and Opossum (RBFOpt), by finding better results for 19 out of 20 problems. For only one function, Galapagos presented a slightly better result than Optimus. Ultimately, we presented an architectural design problem and compared the tools for testing Optimus in the design domain. We reported the minimum, maximum, average, and number of function evaluations of one replication for each tool. Galapagos and SilverEye presented infeasible results, whereas Optimus and Opossum found feasible solutions. However, Optimus discovered a much better fitness result than Opossum. As a conclusion, we discuss the advantages and limitations of Optimus in comparison to other tools. The target audience of this paper is frequent users of parametric design modelling, e.g., architects, engineers, and designers. The main contribution of this paper is summarized as follows. Optimus showed that near-optimal solutions of architectural design problems can be improved by testing different types of algorithms with respect to the no-free-lunch theorem. Moreover, Optimus facilitates implementing different types of algorithms due to its modular system.
1. Introduction
Figure 2. Existing single-objective optimization plug-ins in grasshopper (GH).
1.3.1. Galapagos
Galapagos [16] is one of the first optimization plug-ins released for GH. The tool provides two heuristic optimization algorithms, which are GA [8] and SA [10]. The author of the tool suggests SA for rough landscape navigation, whereas the evolutionary solver is suggested for finding reliable intermediate solutions. The majority of design optimization papers in the literature utilized the Galapagos tool in dealing with energy [17–19], daylight [20,21], both energy and daylight [22,23], and structure [24–26].
1.3.2. Goat
Goat [27] uses the NLopt library [28] in the graphical user interface of Galapagos. The tool considers a mathematically-rigorous approach (gradient-free optimization algorithms) to reach fast and deterministic results. Goat provides several optimization algorithms as well. These are constrained optimization by linear approximation (COBYLA), bound optimization by quadratic approximation (BOBYQA), the subplex algorithm (Sbplx), the dividing rectangles algorithm (DIRECT), and controlled random search 2 (CRS2). Very recently, Goat has been used for building energy optimization [14] and structure optimization [29,30].
1.3.3. Silvereye
In contrast to the gradient-free optimization and evolutionary computation plug-ins, Silvereye [15] is one of the swarm intelligence optimization plug-ins released for GH. The tool considers ParticleSwarmOptimization.dll, which is a shared library containing an implementation of the core version of PSO. In the literature, Silvereye has been used in design optimization problems focusing on energy [14], microclimate [31], and structure [29].
1.3.4. Opossum
Opossum [32] is the first model-based optimization tool for GH. The solver is based on an open-source library for black-box optimization with costly function evaluations (RBFOpt) [33], such as energy and daylight simulations. The RBFOpt library uses radial basis functions with local search to discover satisfactory solutions with a small number of function evaluations. Opossum has been used in several design problems to deal with daylight [34], structure [29,30], and energy [14].
1.3.5. Dodo
Dodo [35] is a plug-in based on different implementations of optimization algorithms. These are non-linear gradient-free optimization based on the NLopt library [28], a stochastic gradient descent algorithm, and swarm optimization. In addition, Dodo also provides several supervised and unsupervised neural network algorithms.
The performance of an optimization algorithm depends on the nature of the problem. In other words, one algorithm can outperform another algorithm in a specific problem. Thus, the no free lunch theorem (NFLT) [39] argues that there is no global optimization algorithm which can present the best results for all real-world and benchmark problems.

In the field of architecture, testing different types of algorithms for the same architectural design problem is not a common approach. One of the most important reasons for this fact is that the computer-aided design (CAD) tools of architects do not include optimization algorithms in a wide range. According to the current state of the art [40], only 3% of the total users of optimization tools are architects in the domain of building performance optimization. This fact clearly shows that there is little attention on using optimization methods within CAD programs. Therefore, this paper introduces a new optimization solver, called Optimus, with the significant features listed below:

• Compatible with parametric design models created in GH [16] algorithmic modeling for Rhinoceros [41] CAD software.
• Supports the PCA framework outlined in the previous section.
• Implements a self-adaptive [42] differential evolution algorithm with ensemble of mutation strategies [43] (jEDE), explained in Section 3.
• Presents the highest performance when compared to other optimization algorithms available in GH reviewed in Section 1.3.
	o The performance of the Optimus is tested by a benchmark suite, which consists of standard single-objective unconstrained problems and some of the test problems proposed in the IEEE Congress on Evolutionary Computation 2005 (CEC 2005) [44]. Problem formulations and optimization results of these benchmarks are given in Section 4.1.
	o Finally, Optimus is tested with a design optimization problem. The problem formulation and optimization results are given in Section 4.2.

The development process of the Optimus is illustrated in Figure 3.
organisms. EAs are very effective in dealing with NP-hard problems. Some examples of EAs are Genetic Algorithms (GAs) [8,47], Genetic Programming (GP) [48], Evolution Strategy (ES) [49], and DE [7].

DE, which was introduced by Storn and Price [7], is potentially one of the most powerful stochastic real-parameter optimization algorithms in the literature. The algorithm can converge very fast in solving real-world problems such as in scheduling [50], optics [51], communication [6], power systems [52], pattern recognition [53], and recently in architectural design [54,55]. A recent survey by Das and Suganthan [56,57] clearly explained the history of DE and its success. The DE algorithm has many advantages. The simple code structure of the algorithm facilitates its implementation. Another advantage is that the number of control parameters in DE is few: the crossover rate (CR), the mutation rate (MR), and the population size (NP). In classical DE, these control parameters are constant during the whole optimization process. However, a simple change in the MR or CR generation strategies can significantly improve the performance of the algorithm. Therefore, some variants of DE in the literature focus on parameter settings, as presented in [42,58]. Moreover, DE can tackle large-scale and computationally expensive optimization problems, because the space complexity of DE is low, as mentioned in [59].
The basic DE algorithm consists of four sequential steps:

1. Initialization for generating the initial target population once at the beginning.
2. Reproduction with mutation for generating the mutant population by using the target population.
3. Reproduction with crossover for generating the trial population by using the mutant population.
4. Selection to choose the next generation among trial and target populations using a one-to-one comparison. In each generation, individuals of the current population become the target population.
2.1.1. Initialization
In the basic DE algorithm, the initial target population has NP individuals, each a D-dimensional real-parameter vector. Each vector is obtained randomly and uniformly within the search space restricted by the given minimum and maximum bounds: $\left[ x_{ij}^{\min}, x_{ij}^{\max} \right]$. Thus, the initialization of the j-th component of the i-th vector can be defined as:

$x_{ij}^{0} = x_{ij}^{\min} + \left( x_{ij}^{\max} - x_{ij}^{\min} \right) \times r$, (1)

where $x_{ij}^{0}$ is the j-th component of the i-th target individual at generation $g = 0$, and $r$ is a uniform random number in the range [0, 1].
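As a minimal sketch, Equation (1) can be written in C# as follows; the class and method names are illustrative assumptions, not the actual Optimus source:

```csharp
using System;

static class DeInitialization
{
    // Equation (1): each component pop[i][j] is drawn uniformly between its bounds.
    public static double[][] InitPop(int np, double[] xmin, double[] xmax, Random rnd)
    {
        int d = xmin.Length;
        var pop = new double[np][];
        for (int i = 0; i < np; i++)
        {
            pop[i] = new double[d];
            for (int j = 0; j < d; j++)
                pop[i][j] = xmin[j] + (xmax[j] - xmin[j]) * rnd.NextDouble();
        }
        return pop;
    }
}
```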
2.1.2. Mutation
The difference vector in the mutation operator is one of the main strengths of DE [7]. Hence, DE differs from other EAs since it relies on a difference vector with a scale factor MR. The mutation process is the first step to generate new solutions. In order to obtain the mutant population, two individuals are randomly chosen from the target population. The weighted difference of these individuals is added to a third individual from the target population, as in Equation (2):
$v_{ij}^{g} = x_{kj}^{g-1} + MR \times \left( x_{lj}^{g-1} - x_{mj}^{g-1} \right)$, (2)

where $k$, $l$, $m$ are three randomly chosen individuals from the target population such that $k \ne l \ne m \ne i \in (1, \ldots, NP)$ and $j = 1, \ldots, D$. $MR > 0$ is a mutation scale factor influencing the differential variation between two individuals. $v_{ij}^{g}$ is the mutant population in generation $g$.
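A C# sketch of Equation (2) follows (illustrative names only; the index checks enforce the condition $k \ne l \ne m \ne i$):

```csharp
using System;

static class DeMutation
{
    // Equation (2): v_i = x_k + MR * (x_l - x_m), with k, l, m, i mutually distinct.
    public static double[] Mutate(double[][] pop, int i, double mr, Random rnd)
    {
        int np = pop.Length, d = pop[0].Length;
        int k, l, m;
        do k = rnd.Next(np); while (k == i);
        do l = rnd.Next(np); while (l == i || l == k);
        do m = rnd.Next(np); while (m == i || m == k || m == l);

        var v = new double[d];
        for (int j = 0; j < d; j++)
            v[j] = pop[k][j] + mr * (pop[l][j] - pop[m][j]);
        return v;
    }
}
```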
2.1.3. Crossover
To obtain the trial population, a binomial crossover is applied to each variable. If a randomly and uniformly generated number $r \in [0, 1]$ is less than or equal to the crossover rate (CR), the individual from the mutant population is chosen; otherwise, the target individual is chosen. Simply, the trial population is generated by recombining mutant individuals with their corresponding target individuals as follows:
$u_{ij}^{g} = \begin{cases} v_{ij}^{g} & \text{if } r_{ij}^{g} \le CR \text{ or } j = D_j \\ x_{ij}^{g-1} & \text{otherwise} \end{cases}$, (3)

where the index $D_j$ is a randomly chosen dimension from $(j = 1, \ldots, D)$. It makes sure that at least one parameter of the trial population $u_{ij}^{g}$ will be different from the target population $x_{ij}^{g-1}$. $CR$ is a user-defined crossover constant in the range [0, 1], and $r_{ij}^{g}$ is a uniform random number in the interval [0, 1], whereas $u_{ij}^{g}$ is the trial population at generation $g$.
When the trial population is obtained, parameter values might violate the search boundaries. In that case, the solution must be restricted to the search range: parameter values that violate the search range are randomly and uniformly re-generated as in Equation (4):

$x_{ij}^{g} = x_{ij}^{\min} + \left( x_{ij}^{\max} - x_{ij}^{\min} \right) \times r$. (4)
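Equations (3) and (4) can be sketched together in C#: the binomial crossover picks each trial component from the mutant or the target vector, and any out-of-bound value is re-drawn uniformly (illustrative names, not the Optimus source):

```csharp
using System;

static class DeCrossover
{
    // Equation (3): binomial crossover; index dj guarantees at least one mutant component.
    // Equation (4): out-of-range components are re-initialized uniformly within bounds.
    public static double[] Trial(double[] x, double[] v, double cr,
                                 double[] xmin, double[] xmax, Random rnd)
    {
        int d = x.Length;
        var u = new double[d];
        int dj = rnd.Next(d);
        for (int j = 0; j < d; j++)
        {
            u[j] = (rnd.NextDouble() <= cr || j == dj) ? v[j] : x[j];
            if (u[j] < xmin[j] || u[j] > xmax[j])
                u[j] = xmin[j] + (xmax[j] - xmin[j]) * rnd.NextDouble();
        }
        return u;
    }
}
```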
2.1.4. Selection
For the next generation, the selection process is realized, which is based on the survival of the fittest among the trial and target populations. The individual that has the lower fitness value is chosen according to a one-to-one comparison, as in Equation (5):

$x_{i}^{g} = \begin{cases} u_{i}^{g} & \text{if } f\left(u_{i}^{g}\right) \le f\left(x_{i}^{g-1}\right) \\ x_{i}^{g-1} & \text{otherwise} \end{cases}$. (5)
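A short C# sketch of the one-to-one selection in Equation (5); in practice the fitness values would be cached rather than recomputed for every comparison:

```csharp
static class DeSelection
{
    // Equation (5): keep the trial vector only if it does not worsen the fitness.
    public static double[] Select(double[] target, double fTarget,
                                  double[] trial, double fTrial)
        => fTrial <= fTarget ? trial : target;
}
```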
In the self-adaptive procedure [42], the mutation and crossover rates are re-generated for each individual, where $r_j$, $j \in \{1, 2, 3, 4\}$, are uniform random numbers in the range [0, 1]. $t_1$ and $t_2$ represent the probabilities to adjust the MR and CR. They are taken as $t_1 = t_2 = 0.1$, $MR_l = 0.1$, and $MR_u = 0.9$.
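Given the constants above, a hedged C# sketch of this update, assuming the standard jDE rule [42], is:

```csharp
using System;

static class SelfAdaptiveParams
{
    // Standard jDE rule [42]: with probability t1 re-draw MR in [MRl, MRl + MRu],
    // with probability t2 re-draw CR in [0, 1]; otherwise keep the old values.
    public static (double mr, double cr) Update(double mr, double cr, Random rnd)
    {
        const double t1 = 0.1, t2 = 0.1, mrl = 0.1, mru = 0.9;
        double r1 = rnd.NextDouble(), r2 = rnd.NextDouble(),
               r3 = rnd.NextDouble(), r4 = rnd.NextDouble();
        if (r2 < t1) mr = mrl + r1 * mru;
        if (r4 < t2) cr = r3;
        return (mr, cr);
    }
}
```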
The following mutation strategies ($M_i$) are considered, as in Equations (8)–(10). In $M_1$, the individuals that form the mutant population are randomly selected. In $M_2$ and $M_3$, the strategies benefit from the information of the best solution ($x_{best}$) so far.

if $STR_i = 0$, $M_1$: $v_{i}^{j,t+1} = x_{k}^{j,t} + MR_i \times \left( x_{l}^{j,t} - x_{m}^{j,t} \right)$ (8)

if $STR_i = 1$, $M_2$: $v_{i}^{j,t+1} = x_{best}^{j,t} + MR_i \times \left( x_{l}^{j,t} - x_{m}^{j,t} \right)$ (9)

if $STR_i = 2$, $M_3$: $v_{i}^{j,t+1} = x_{i}^{j,t} + MR_i \times \left( x_{best}^{j,t} - x_{i}^{j,t} \right) + F \times \left( x_{k}^{j,t} - x_{l}^{j,t} \right)$, (10)

where $k$, $l$, $m$ are three randomly selected individuals from the target population such that $k \ne l \ne m \ne i \in (1, \ldots, NP)$ and $j = 1, \ldots, D$. $MR_i > 0$ is a mutation scale factor; in our jEDE, it is generated by using the self-adaptive procedure. $STR_i$ is the strategy used in each population to choose different mutation strategies. The pseudo code of the jEDE is given in Algorithm 1.
Algorithm 1. The self-adaptive differential evolution algorithm with ensemble of mutation strategies.

1: Set parameters $g = 0$, $NP = 100$, $M_{max} = 4$
2: Establish initial population randomly
3: $P^{g} = \left\{ x_{1}^{g}, \ldots, x_{NP}^{g} \right\}$ with $x_{i}^{g} = \left\{ x_{i1}^{g}, \ldots, x_{iD}^{g} \right\}$
4: Assign a mutation strategy to each individual randomly
5: $M_i = rand() \,\%\, M_{max}$ for $i = 1, \ldots, NP$
6: Evaluate population and find $x_{best}^{g}$
7: $f(P^{g}) = \left\{ f\left(x_{1}^{g}\right), \ldots, f\left(x_{NP}^{g}\right) \right\}$
8: Assign $CR[i] = 0.5$ and $F[i] = 0.9$ to each individual
9: Repeat the following for each individual $x_{i}^{g}$
10: Obtain $v_{i}^{g} = M_i\left(x_{i}^{g}\right)$
11: Obtain $u_{i}^{g} = CR_i\left(x_{i}^{g}, v_{i}^{g}\right)$
12: Obtain $x_{i}^{g} = u_{i}^{g}$ if $f\left(u_{i}^{g}\right) \le f\left(x_{i}^{g-1}\right)$, else $x_{i}^{g} = x_{i}^{g-1}$
13: If $f\left(u_{i}^{g}\right) > f\left(x_{i}^{g-1}\right)$, $M_i = rand() \,\%\, M_{max}$
14: If $f\left(x_{i}^{g}\right) \le f\left(x_{best}^{g}\right)$, $x_{best}^{g} = x_{i}^{g}$
15: Update $F_i$ and $CR_i$
16: If the stopping criterion is not met, go to Lines 9–15
17: Else stop and return $x_{best}$
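A hedged C# sketch of the strategy ensemble in Equations (8)–(10), as used in Line 10 of Algorithm 1 (class and method names are illustrative; only the three listed strategies are shown, although Algorithm 1 draws STRi as rand() % Mmax with Mmax = 4):

```csharp
using System;

static class EnsembleMutation
{
    // One mutant vector per Equations (8)-(10); stri selects the strategy,
    // mri is the self-adaptive mutation factor, fScale is the F factor of M3.
    public static double[] Mutant(double[][] pop, double[] xbest, int i, int stri,
                                  double mri, double fScale, Random rnd)
    {
        int np = pop.Length, d = pop[0].Length;
        int k, l, m;
        do k = rnd.Next(np); while (k == i);
        do l = rnd.Next(np); while (l == i || l == k);
        do m = rnd.Next(np); while (m == i || m == k || m == l);

        var v = new double[d];
        for (int j = 0; j < d; j++)
            v[j] = stri switch
            {
                0 => pop[k][j] + mri * (pop[l][j] - pop[m][j]),          // M1, Eq. (8)
                1 => xbest[j] + mri * (pop[l][j] - pop[m][j]),           // M2, Eq. (9)
                _ => pop[i][j] + mri * (xbest[j] - pop[i][j])
                               + fScale * (pop[k][j] - pop[l][j]),       // M3, Eq. (10)
            };
        return v;
    }
}
```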
3. Optimus
Optimus is a new optimization plug-in (https://www.food4rhino.com/app/optimus) developed for GH. The beta version (1.0.0) of the plug-in implements the self-adaptive [42] differential evolution algorithm with ensemble of mutation strategies [43] (jEDE). The algorithm is coded in C#, which is one of the available programming languages for the custom scripting component in GH. Optimus is based on a modular approach with many C# items. Every step of the optimization process can be observed within these items. Figure 4 shows the eight components of Optimus, which are categorized under three main titles, as follows:
1. Optimus consists of
	a. jEDE (generating the mutant and trial populations with the jEDE algorithm)
	b. One-to-one (enabling selection process for next generation)
2. Initialize consists of
	a. GetBound (taking the boundaries of design variables in D dimensions)
	b. InitPop (generating initial population for NP population size in D dimensions)
3. Utilities consists of
	a. xBest (finding the chromosome that has the lowest fitness value in the population)
	b. Merge (collecting the fitness, population, xBest and optimization parameters)
	c. Unmerge (separating the fitness, population, xBest and optimization parameters)

Figure 4. Components of Optimus v1.0.0 (beta).

To manage the main loop of the Optimus, the HoopSnake component [60] is used for enabling feedback loops (recursive executions) within the GH.

In the Optimus, each of the eight components is transferred to GH clusters by defining collections of inputs and outputs, as shown in Figure 5. This gives flexibility to the user for improving the C# code according to his/her own implementation purposes. As a next step, each cluster is converted to GH user object files, which are provided in the supplementary materials. In this way, Optimus can be used as a plug-in of the GH.

Figure 5. Example cluster implementation for self-adaptive differential evolution algorithm with ensemble of mutation strategies (jEDE) component.
The Optimus loop is constructed with the following steps:

1. Place GetBound on the GH canvas and connect with number sliders.
2. Define the population size.
3. Get InitPop for initialization using population size and output of GetBound.
4. Evaluate initial fitness using the output of InitPop.
5. Internalize the initial fitness.
6. Place xBest on the GH canvas.
7. Get Merge and connect with internalized initial fitness and outputs of InitPop and xBest.
8. Connect Merge with starting input (S) of HoopSnake.
9. Place UnMerge on the GH canvas and connect with feedback output (F) of HoopSnake.
10. Get jEDE and connect outputs of UnMerge, InitPop, GetBound.
11. Evaluate trial fitness using the output of jEDE.
12. Get One-to-One and connect with initial fitness, trial fitness and outputs of jEDE.
13. Place xBest and connect outputs of One-to-one for updating the new best chromosome.
14. Get Merge and connect outputs of jEDE, One-to-one, and xBest.
15. Complete the loop by connecting the output of Merge to the data input (D) of the HoopSnake.

These steps are visualized in Figure 6 and an example file is shared in the supplementary materials.

Figure 6. Visualization of the Optimus loop.
To save some computation time, Optimus does not update the number sliders in the GH. During the optimization process of the design problem, the geometry is generated with the initial and trial populations. For this reason, NP geometries are observed in each iteration. During the initial stage, various types of design alternatives are generated. However, when the Optimus has converged, similar geometries are observed, as shown in Figure 7.

Figure 7. Examples of initial and optimized populations.
4. Experiments

In this section, we explain how the experiments were performed to test the performance of the Optimus in comparison to the other chosen plug-ins. As mentioned before, the evaluations are done by using standard benchmark problems in the literature and one architectural design problem proposed by the authors.
4.1. Benchmark Suite

The performance of the Optimus (jEDE algorithm) was firstly tested by using the following benchmark suite, which consists of 20 test functions. The first ten functions in the benchmark suite are classical test problems that have been commonly used in the literature. The remaining ten functions are taken from the benchmark suite presented in the CEC 2005 Special Session on Real-Parameter Optimization. These functions have been modified from the classical test problems in order to locate their global optimum under some circumstances, such as a shifted and/or rotated landscape, optimum placed on bounds, Gaussian noise and/or bias added, etc. [44]. This fact makes these functions more difficult to solve than the classical test functions. In our test suite, F1 to F5 are unimodal, and F6 to F10 are multimodal functions. All the benchmark functions are minimization problems. Our benchmark suite is presented in Table 1. These functions are summarized in Appendix A.
Table 1. Benchmark suite.

Notation	Function
Fsph	Sphere Function
Fros	Rosenbrock's Function
Fack	Ackley's Function
Fgrw	Griewank's Function
Fras	Rastrigin's Function
Fsch	Generalized Schwefel's Problem 2.26
Fsal	Salomon's Function
Fwht	Whitley's Function
Fpn1	Generalized Penalized Function 1
Fpn2	Generalized Penalized Function 2
F1	Shifted Sphere Function
F2	Shifted Schwefel's Problem 1.2
F3	Shifted Rotated High Conditioned Elliptic Function
F4	Shifted Schwefel's Problem 1.2 with Noise in Fitness
F5	Schwefel's Problem 2.6 with Global Optimum on Bounds
F6	Shifted Rosenbrock's Function
F7	Shifted Rotated Griewank's Function without Bounds
F8	Shifted Rotated Ackley's Function with Global Optimum on Bounds
F9	Shifted Rastrigin's Function
F10	Shifted Rotated Rastrigin's Function
For example, when solving Fsph, the Optimus (jEDE) approximately realized 5844 FES/minute, whereas GA made 1643 FES/minute, PSO made 1205 FES/minute, and RBFOpt made 85 FES/minute. These facts explicitly imply the superiority of Optimus over the other components in solving the benchmark suite.
Table 2. Comparison of Optimus, Opossum, Silvereye, Galapagos (D = 30, NP = 30, termination: 30 min).
Figure 9. Convergence graphs of CEC2005 benchmark functions.
4.2. Design Optimization Problem

In this part, we present a design problem for a frame structure, which has a span of 30 m by 25 m. Before generating the frame structure, we executed a base surface, which controls the shape of the design. Therefore, a curved surface having 7.5 m height, five axes, and 25 controlling points is generated. Afterwards, 65 parameters are defined for the 5 axes to change the shape of the base surface, and thus the shape of the frame structure. Points on the ground level have two parameters for the x and y directions, whereas the other points have three parameters for changing the positions in all directions. In the next step, using the base surface, we generated the frame structure using the truss structure component provided by the LunchBox [61] plugin, which consists of several components using different geometrical shapes. In this component, the structure on the base surface is controlled by three parameters. These are the division amounts in the u and v directions, and the depth of the truss system. Finally, the generated frame structure is combined with the Karamba 3D plug-in [62], which provides evaluation of parametric structural models in GH. The structural evaluation process is described in the following lines. Development of the parametric model is illustrated in Figure 10.
To evaluate each generated structure alternative, each line in the structure model is defined as a beam, whereas each point located on the ground level is defined as a support point. In addition, the cross-section and its parameters are also defined by Karamba just before the evaluation process. For the problem on hand, a rectangular cross-section is defined using three parameters. These are the height and the upper and lower widths of the cross-section. To finalize the necessary inputs, we defined gravity and lateral loads. As the gravity load, we considered the distribution of the total mass for each intersecting point. For the lateral load, we applied 2 kN on each intersecting point, as well. The material type of the model is assigned as steel. An example of an evaluated model is shown in Figure 11.
To sum up, 70 parameters are used to define the design optimization problem. This corresponds to approximately 1.333 × 10^177 alternatives in the search space. The reason for giving such freedom is to investigate the impact of the design on the performance. Moreover, design problems are different than mathematical benchmark problems. A design problem can have more parameters due to an optimization task (e.g., optimizing the layout scheme and the façade design of a building to find the most desirable plan scheme with the lowest energy consumption). Properties of the design parameters are given in Table 3. To show the divergence of the design problem, some of the frame structure alternatives are illustrated in Figure 12.
Table 3. Properties of design parameters.

Notation	Design Parameter	Min	Max	Type
x1–x13	Coordinates of control points in axis 1	−2.00	2.00	Continuous
x14–x26	Coordinates of control points in axis 2	−2.00	2.00	Continuous
x27–x39	Coordinates of control points in axis 3	−2.00	2.00	Continuous
x40–x52	Coordinates of control points in axis 4	−2.00	2.00	Continuous
x53–x65	Coordinates of control points in axis 5	−2.00	2.00	Continuous
x66	Division amount on u direction	3	10	Discrete
x67	Division amount on v direction	3	10	Discrete
x68	Depth of the truss	0.50	1.00	Continuous
x69	Height of the cross-section	10.00	30.00	Continuous
x70	Upper and lower width of the cross-section	10.00	30.00	Continuous
$v = \frac{F}{K}$, (13)

where $F$ is the loading force, and $K$ is the bending stiffness of the frame structure. To compare the optimization results of Optimus with the others, we defined a penalty function by combining $m$ and $v$ as follows:

$m = \begin{cases} m & \text{if } v \le 0.1 \\ 100 \times m & \text{otherwise} \end{cases}$. (14)

To sum up, for safety reasons, the final design should have a displacement of at most 0.1 m. For this reason, a displacement result smaller than 0.1 m is a feasible solution. In addition, for minimizing construction cost, the objective is defined as the minimization of the mass. Therefore, the final design should have the smallest amount of steel usage. The higher the mass, the lower the displacement, and the other way around.

4.2.1. Experimental Setup and Evaluation Criteria

The design optimization problem ran on a computer that had an Intel Core i7-6500U CPU @ 2.50 GHz with 16 GB RAM. The number of dimensions D was taken as 70, whereas the population size NP was taken as 50. As in the CEC 2005 benchmark problems, the termination criterion was determined as 30 min for each run. For the instance on hand, only one replication was carried out for each optimization tool. To evaluate the performance of the different optimization tools, we report f(x)_min, which is the minimum fitness value of the design problem, and g(x), which is the constraint value of the minimum fitness. The maximum number of fitness evaluations (FES) within 30 min for each tool is also recorded, which means how many times the fitness is evaluated during the optimization process. This problem definition is also available in the supplementary materials as a GH file to contribute to the evaluation of further developed architectural design optimization tools.
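As a sketch, the penalized fitness of Equation (14) can be written as a small C# helper; the mass m and displacement v would come from the Karamba model in the actual GH definition, and the names here are illustrative assumptions:

```csharp
static class DesignFitness
{
    // Equation (14): feasible designs (displacement <= 0.1 m) keep their mass
    // as the fitness; infeasible ones are penalized by a factor of 100.
    public static double PenalizedMass(double mass, double displacement)
        => displacement <= 0.1 ? mass : 100.0 * mass;
}
```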
4.2.2. Design Results

After a 30 min run for each tool in GH, an overview of the optimization results is given in Table 4. The convergence graphs of Optimus, Opossum, SilverEye, and Galapagos are also given in Figure 13. In addition, the final designs proposed by each algorithm are illustrated in Figure 14. Based on these results, Optimus and Opossum found feasible solutions, whereas SilverEye and Galapagos discovered infeasible alternatives. Looking at the feasible solutions, there is a significant difference between jEDE and RBFOpt: Optimus found a smaller mass amount than Opossum. During the optimization process, we also observed that Optimus evaluated the fitness function more than the other tools. From the point of the proposed design alternatives, Galapagos and SilverEye found bigger frame structures with smaller profile dimensions. This causes an increment in both displacement and mass. On the other hand, Opossum presented a similar size to Galapagos and SilverEye, but bigger dimension sizes for the profile. This suggests a smaller amount of displacement. However, the final solution has more than 22 tons of steel. From the point of Optimus, the final design alternative presents the smallest frame structure having small profile sizes. This gives not only an advantage on displacement, but also provides a design alternative having a smaller amount of steel compared with the other algorithms. A parametric model with 70 decision variables is a challenging optimization problem in the domain of architecture. From this perspective, the impact of the self-adaptive parameter update and the ensemble of mutation strategies can be seen for the architectural design problem.
Table 4. Comparison of Optimus, Opossum, Silvereye, Galapagos (D = 70, NP = 50, termination: 30 min).

Design problem	Optimus jEDE	Opossum RBFOpt	SilverEye PSO	Galapagos GA
f(x)_min	6.21637 × 10^3	2.25410 × 10^4	6.89062 × 10^5	6.74560 × 10^5
g(x)	0.0994	0.0996	0.6674	0.9273
FES	31,800	5102	17,100	19,000
Figure 14. Optimized results of design problem.
Results presented in Table 4 are provided in the supplementary materials, containing minimum fitness values and their corresponding chromosomes. The best result discovered by Optimus is illustrated in Figure 15.

Figure 15. Illustration of the best alternative (Optimus) result for design optimization problem.
5. Discussion

In this paper, we presented a new optimization tool called Optimus, which implements the jEDE algorithm, for GH. We tested the performance of Optimus using 20 benchmark problems and a design optimization problem against GA, PSO, and RBFOpt. Experimental results show that jEDE outperforms these algorithms by finding better fitness values. There are several reasons to explain this fact. First, Optimus uses a self-adaptive approach for producing control parameters. This gives an advantage to adapt the algorithm itself according to the nature of the problem. Secondly, Optimus presented significantly smaller fitness values than the other tools. This is related to the ensemble of mutation strategies. While the other algorithms are based on only one mutation operator, Optimus uses three mutation operators. During the search for a better solution, using more than one mutation strategy enlarges the search space. This increases the ability of finding near-optimal results. Thirdly, Optimus does not update the number sliders, which correspond to parameters, for each iteration. Instead, generated populations are directly connected to the geometry. Therefore, the required computation time for one generation is less than the other algorithms. Finally, optimal values of some problems are outside of the initialization range (e.g., F7). One may argue that this situation can be faced in architectural design problems, as well. Generating chromosome values outside of the initialization range is possible in the Optimus, whereas it is not possible in the other tools.
The RBFOpt algorithm was recently compared with metaheuristics using several building optimization problems in [12–14,32]. According to the results, RBFOpt found better solutions than the metaheuristics. In these studies, the termination criterion was defined as the number of iterations during the evaluation of the different algorithms. This gives an advantage to RBFOpt, due to its ability to discover desirable solutions with a small number of function evaluations. However, metaheuristics require more function evaluations to find the optimal solutions. In this study, we defined the termination criterion as a run time of 30 min to make a fair comparison. Results show that RBFOpt requires more computation time to execute one function evaluation than the metaheuristics. For this reason, while comparing these algorithms, run time should be considered as the termination criterion rather than iterations and/or function evaluations.
Fitness functions based on simulations (e.g., energy and daylight) may require a huge amount of time for convergence during the optimization. From the point of metaheuristics, the usage of surrogate models [63] can be a solution to overcome this drawback, because surrogate models require a smaller number of function evaluations while approximating the fitness. In this way, researchers can consider many replications in a short run time.
Furthermore, handling constraints is another important topic for architectural design optimization. Most real-world problems involve design constraints that may restrict the design alternatives during the optimization process. There are many constraint handling methods that can be practically integrated into optimization algorithms [64]. In the reviewed literature, only the Nelder–Mead optimization tool provides a constraint handling method for design optimization problems in the GH. To make a fair comparison with Galapagos, SilverEye, and Opossum, we considered a penalty function as the constraint to find the feasible solution in the design optimization problem. However, there are extremely hard constraints that must be tackled in real-world problems, e.g., equality constraints. Currently, this is not available in GH.
In addition to the advantages of Optimus mentioned above, there are some limitations, as well. Firstly, even though the modular approach provides flexibility when we want to use different algorithms, it requires more plug-in items to generate each step of the optimization. This affects the practical usage of Optimus. Another limitation is the evaluation strategy of the fitness function. In other words, Optimus creates an NP-sized list, where each element in the list has D dimensions. All these parameters (with NP × D size) are sent to the objective function for fitness evaluation. Then, we obtain NP fitness results simultaneously. In the other tools, each parameter vector is sent to the fitness function in sequence, and the fitness function is then evaluated. This fact gives an advantage to Optimus in terms of computation time, but it may not be suitable for some architectural design problems.
6. Conclusions

In conclusion, this paper presented a new optimization plug-in, called Optimus, for grasshopper algorithmic modelling (GH) in the Rhinoceros CAD program. A self-adaptive differential evolution algorithm with ensemble of mutation strategies was implemented in Optimus for architectural design optimization problems. To test the algorithm performance, an experimental design was made by using standard test problems in the literature, some of the test problems proposed in IEEE CEC 2005, and an architectural design problem. In general, Optimus outperformed the other optimization tools in GH. This paper showed that an algorithm that presents a good performance in solving real-parameter benchmark functions can also find more desirable solutions in solving architectural design problems. As future work, Optimus will be further improved by implementing different types of metaheuristic algorithms due to the NFLT. These algorithms can be variants of PSO, GA, HS, and DE. Moreover, Optimus can be updated for constrained optimization problems using the near-feasibility threshold [65,66], superiority of feasible solutions [67], and epsilon constraint [68] methods. Moreover, an ensemble of constraint handling techniques [69] can be used in Optimus, which may play a crucial role in architectural design problems. Finally, Optimus can be simply extended for multi-objective optimization problems due to its modular system.
Appendix A

$F_{sph}$: Sphere Function
$F_{sph}(x) = \sum_{i=1}^{D} x_i^2$, $F_{sph}^*(x^*) = F(0, \ldots, 0) = 0$, $-100 \le x_i \le 100$

$F_{ros}$: Rosenbrock's Function
$F_{ros}(x) = \sum_{i=1}^{D-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + (1 - x_i)^2 \right]$, $F_{ros}^*(x^*) = F(1, \ldots, 1) = 0$, $-100 \le x_i \le 100$

$F_{ack}$: Ackley's Function
$F_{ack}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{D} \sum_{i=1}^{D} x_i^2} \right) - \exp\left( \frac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i) \right) + 20 + e$, $F_{ack}^*(x^*) = F(0, \ldots, 0) = 0$, $-32 \le x_i \le 32$

$F_{grw}$: Griewank's Function
$F_{grw}(x) = \sum_{i=1}^{D} \frac{x_i^2}{4000} - \prod_{i=1}^{D} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$, $F_{grw}^*(x^*) = F(0, \ldots, 0) = 0$, $-600 \le x_i \le 600$

$F_{ras}$: Rastrigin's Function
$F_{ras}(x) = \sum_{i=1}^{D} \left( x_i^2 - 10 \cos(2\pi x_i) + 10 \right)$, $F_{ras}^*(x^*) = F(0, \ldots, 0) = 0$, $-5 \le x_i \le 5$

$F_{sch}$: Generalized Schwefel's Problem 2.26
$F_{sch}(x) = 418.9829 \, D - \sum_{i=1}^{D} x_i \sin\left( \sqrt{|x_i|} \right)$, $F_{sch}^*(x^*) = F(420.9687, \ldots, 420.9687) = 0$, $-500 \le x_i \le 500$

$F_{sal}$: Salomon's Function
$F_{sal}(x) = -\cos\left( 2\pi \sqrt{\sum_{i=1}^{D} x_i^2} \right) + 0.1 \sqrt{\sum_{i=1}^{D} x_i^2} + 1$, $F_{sal}^*(x^*) = F(0, \ldots, 0) = 0$, $-100 \le x_i \le 100$

$F_{wht}$: Whitley's Function
$F_{wht}(x) = \sum_{j=1}^{D} \sum_{i=1}^{D} \left( \frac{y_{ij}^2}{4000} - \cos\left( y_{ij} \right) + 1 \right)$, where $y_{ij} = 100 \left( x_j - x_i^2 \right)^2 + (1 - x_i)^2$
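For reference, two of these functions translate directly into C#; the following is a sketch for testing purposes, not part of the benchmark suite's official code:

```csharp
using System;

static class BenchmarkFunctions
{
    // F_sph: global minimum 0 at x = (0, ..., 0)
    public static double Sphere(double[] x)
    {
        double sum = 0.0;
        foreach (double xi in x) sum += xi * xi;
        return sum;
    }

    // F_ras: global minimum 0 at x = (0, ..., 0)
    public static double Rastrigin(double[] x)
    {
        double sum = 0.0;
        foreach (double xi in x)
            sum += xi * xi - 10.0 * Math.Cos(2.0 * Math.PI * xi) + 10.0;
        return sum;
    }
}
```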
References
1. Sariyildiz, S. Performative Computational Design, Keynote Speech. In Proceedings of the ICONARCH-I:
International Congress of Architecture-I, Konya, Turkey, 15–17 November 2012.
2. Ekici, B.; Cubukcuoglu, C.; Turrin, M.; Sariyildiz, I.S. Performative computational architecture using swarm
and evolutionary optimisation: A review. Build. Environ. 2018, 147, 356–371. [CrossRef]
3. Michalewicz, Z.; Fogel, D.B. How to Solve it: Modern Heuristics; Springer Science & Business Media: New York,
NY, USA, 2013.
4. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation
2001, 76, 60–68. [CrossRef]
5. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the MHS’95.
Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995;
pp. 39–43.
6. Storn, R. On the Usage of Differential Evolution for Function Optimization. In Proceedings of the North
American Fuzzy Information Processing, Berkeley, CA, USA, 19–22 June 1996; pp. 519–523.
7. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over
continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [CrossRef]
8. Goldberg, D.E.; Holland, J.H. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99. [CrossRef]
9. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman
problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [CrossRef]
10. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
[CrossRef] [PubMed]
11. Coello, C.A.C.; Lamont, G.B.; van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective
Problems; Springer: Berlin/Heidelberg, Germany, 2007; Volume 5.
12. Wortmann, T. Genetic Evolution vs. Function Approximation: Benchmarking Algorithms for Architectural
Design Optimization. J. Comput. Des. Eng. 2018, 6, 414–428. [CrossRef]
13. Wortmann, T.; Waibel, C.; Nannicini, G.; Evins, R.; Schroepfer, T.; Carmeliet, J. Are Genetic Algorithms Really
the Best Choice for Building Energy Optimization? In Proceedings of the Symposium on Simulation for
Architecture and Urban Design, Toronto, CA, Canada, 22–24 May 2017; p. 6.
14. Waibel, C.; Wortmann, T.; Evins, R.; Carmeliet, J. Building energy optimization: An extensive benchmark of
global search algorithms. Energy Build. 2019, 187, 218–240. [CrossRef]
15. Cichocka, J.M.; Migalska, A.; Browne, W.N.; Rodriguez, E. SILVEREYE—The Implementation of Particle
Swarm Optimization Algorithm in a Design Optimization Tool. In Proceedings of the International Conference
on Computer-Aided Architectural Design Futures, Istanbul, Turkey, 10–14 July 2017; pp. 151–169.
16. Rutten, D. Galapagos: On the logic and limitations of generic solvers. Archit. Des. 2013, 83, 132–135. [CrossRef]
17. Camporeale, P.E.; del Moyano, P.M.M.; Czajkowski, J.D. Multi-objective optimisation model: A housing
block retrofit in Seville. Energy Build. 2017, 153, 476–484. [CrossRef]
18. Calcerano, F.; Martinelli, L. Numerical optimisation through dynamic simulation of the position of trees
around a stand-alone building to reduce cooling energy consumption. Energy Build. 2016, 112, 234–243.
[CrossRef]
19. Anton, I.; Tănase, D. Informed geometries. Parametric modelling and energy analysis in early stages of
design. Energy Procedia 2016, 85, 9–16. [CrossRef]
20. Tabadkani, A.; Banihashemi, S.; Hosseini, M.R. Daylighting and visual comfort of oriental sun responsive
skins: A parametric analysis. Build. Simul. 2018, 11, 663–676. [CrossRef]
21. Lee, K.; Han, K.; Lee, J. Feasibility study on parametric optimization of daylighting in building shading
design. Sustainability 2016, 8, 1220. [CrossRef]
22. Fathy, F.; Sabry, H.; Faggal, A.A. External Versus Internal Solar Screen: Simulation Analysis for Optimal
Daylighting and Energy Savings in an Office Space. In Proceedings of the PLEA, Edinburgh, UK, 16 August 2017.
23. Lavin, C.; Fiorito, F. Optimization of an external perforated screen for improved daylighting and thermal
performance of an office space. Procedia Eng. 2017, 180, 571–581. [CrossRef]
24. Heidenreich, C.; Ruth, J. Parametric optimization of lightweight structures. In Proceedings of the 11th World
Congress on Computational Mechanics, Barcelona, Spain, 21–25 July 2014.
25. Eisenbach, P.; Grohmann, M.; Rumpf, M.; Hauser, S. Seamless Rigid Connections of Thin Concrete Shells—A
Novel Stop-End Construction Technique for Prefab Elements. In Proceedings of the IASS Annual Symposia,
Amsterdam, The Netherlands, 17–20 August 2015; Volume 2015, pp. 1–12.
26. Almaraz, A. Evolutionary Optimization of Parametric Structures: Understanding Structure and Architecture
as a Whole from Early Design Stages. Master’s Thesis, University of Coruna, La Coruña, Spain, 2015.
27. Simon. Goat. 2013. Available online: https://www.food4rhino.com/app/goat (accessed on 10 July 2019).
28. Johnson, S.G. The NLopt Nonlinear-Optimization Package. Available online: https://nlopt.readthedocs.io/en/latest/ (accessed on 10 July 2019).
29. Ilunga, G.; Leitão, A. Derivative-free Methods for Structural Optimization. In Proceedings of the 36th
eCAADe Conference, Lodz, Poland, 19–21 September 2018.
30. Austern, G.; Capeluto, I.G.; Grobman, Y.J. Rationalization and Optimization of Concrete Façade Panels.
In Proceedings of the 36th eCAADe Conference, Lodz, Poland, 19–21 September 2018.
31. Delmas, A.; Donn, M.; Grosdemouge, V.; Musy, M.; Garde, F. Towards Context & Climate Sensitive Urban
Design: An Integrated Simulation and Parametric Design Approach. In Proceedings of the 4th International
Conference On Building Energy & Environment 2018 (COBEE2018), Melbourne, Australia, 5–9 February 2018.
32. Wortmann, T. Opossum: Introducing and Evaluating a Model-based Optimization Tool for Grasshopper.
In Proceedings of the CAADRIA 2017, Hong Kong, China, 5–8 July 2017.
33. Costa, A.; Nannicini, G. RBFOpt: An open-source library for black-box optimization with costly function
evaluations. Math. Program. Comput. 2018, 10, 597–629. [CrossRef]
34. Wortmann, T. Model-based Optimization for Architectural Design: Optimizing Daylight and Glare in
Grasshopper. Technol. Archit. Des. 2017, 1, 176–185. [CrossRef]
35. Greco, L. Dodo. 2015. Available online: https://www.food4rhino.com/app/dodo (accessed on 10 July 2019).
36. Eckersley O’Callaghan’s Digital Design Group. Nelder-Mead Optimization. 2013. Available online: https://www.food4rhino.com/app/nelder-mead-optimisation-eoc (accessed on 10 July 2019).
37. Lagarias, J.C.; Reeds, J.A.; Wright, M.H.; Wright, P.E. Convergence properties of the Nelder–Mead simplex
method in low dimensions. SIAM J. Optim. 1998, 9, 112–147. [CrossRef]
38. Wrenn, G.A. An Indirect Method for Numerical Optimization Using the Kreisselmeir-Steinhauser Function; NASA:
Washington, DC, USA, 1989.
39. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1,
67–82. [CrossRef]
40. Attia, S.; Hamdy, M.; O’Brien, W.; Carlucci, S. Assessing gaps and needs for integrating building performance
optimization tools in net zero energy buildings design. Energy Build. 2013, 60, 110–124. [CrossRef]
41. Robert McNeel & Associates. Rhinoceros 3D. NURBS Modelling. 2015. Available online: https://www.rhino3d.com/ (accessed on 10 July 2019).
42. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential
evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10,
646–657. [CrossRef]
43. Mallipeddi, R.; Suganthan, P.N.; Pan, Q.-K.; Tasgetiren, M.F. Differential evolution algorithm with ensemble
of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1679–1696. [CrossRef]
44. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.-P.; Auger, A.; Tiwari, S. Problem definitions and
evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005005, 2005.
45. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An overview. Swarm Intell. 2007, 1, 33–57.
46. Sengupta, S.; Basak, S.; Peters, R. Particle Swarm Optimization: A survey of historical and recent developments
with hybridization perspectives. Mach. Learn. Knowl. Extr. 2019, 1, 157–191. [CrossRef]
47. Mirjalili, S. Genetic Algorithm. In Evolutionary Algorithms and Neural Networks; Springer: New York, NY,
USA, 2019; pp. 43–55.
48. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press:
Cambridge, MA, USA, 1992; Volume 1.
49. Knowles, J.; Corne, D. The Pareto Archived Evolution Strategy: A New Baseline Algorithm for Pareto
Multiobjective Optimisation. In Proceedings of the Congress on Evolutionary Computation (CEC99),
Washington, DC, USA, 6–9 July 1999; Volume 1, pp. 98–105.
50. Pan, Q.-K.; Tasgetiren, M.F.; Liang, Y.-C. A discrete differential evolution algorithm for the permutation
flowshop scheduling problem. Comput. Ind. Eng. 2008, 55, 795–816. [CrossRef]
51. Venu, M.K.; Mallipeddi, R.; Suganthan, P.N. Fiber Bragg grating sensor array interrogation using differential
evolution. Optoelectron. Adv. Mater. Commun. 2008, 2, 682–685.
52. Varadarajan, M.; Swarup, K.S. Differential evolution approach for optimal reactive power dispatch.
Appl. Soft Comput. 2008, 8, 1549–1561. [CrossRef]
53. Das, S.; Konar, A. Automatic image pixel clustering with an improved differential evolution. Appl. Soft Comput.
2009, 9, 226–236. [CrossRef]
54. Chatzikonstantinou, I.; Ekici, B.; Sariyildiz, I.S.; Koyunbaba, B.K. Multi-Objective Diagrid Façade Optimization
Using Differential Evolution. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC),
Sendai, Japan, 25–28 May 2015; pp. 2311–2318.
55. Cubukcuoglu, C.; Chatzikonstantinou, I.; Tasgetiren, M.F.; Sariyildiz, I.S.; Pan, Q.-K. A multi-objective
harmony search algorithm for sustainable design of floating settlements. Algorithms 2016, 9, 51. [CrossRef]
56. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput.
2011, 15, 4–31. [CrossRef]
57. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—an updated survey.
Swarm Evol. Comput. 2016, 27, 1–30. [CrossRef]
58. Tasgetiren, M.F.; Suganthan, P.N.; Pan, Q.-K.; Mallipeddi, R.; Sarman, S. An Ensemble of Differential Evolution
Algorithms for Constrained Function Optimization. In Proceedings of the IEEE Congress on Evolutionary
Computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 1–8.
59. Hansen, N.; Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput.
2001, 9, 159–195. [CrossRef] [PubMed]
60. Chatzikonstantinou, I. HoopSnake. 2012. Available online: https://www.food4rhino.com/app/hoopsnake
(accessed on 10 July 2019).
61. Miller, N. LunchBox. 2012. Available online: https://www.food4rhino.com/app/lunchbox (accessed on 10 July 2019).
62. Preisinger, C.; Heimrath, M. Karamba—A toolkit for parametric structural design. Struct. Eng. Int. 2014, 24,
217–221. [CrossRef]
63. Shan, S.; Wang, G.G. Survey of modeling and optimization strategies to solve high-dimensional design
problems with computationally-expensive black-box functions. Struct. Multidiscip. Optim. 2010, 41, 219–241.
[CrossRef]
64. Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms:
A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287. [CrossRef]
65. Tasgetiren, M.F.; Suganthan, P.N. A Multi-Populated Differential Evolution Algorithm for Solving Constrained
Optimization Problem. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation,
Vancouver, BC, Canada, 16–21 July 2006; pp. 33–40.
66. Coit, D.W.; Smith, A.E. Penalty guided genetic search for reliability design optimization. Comput. Ind. Eng.
1996, 30, 895–904. [CrossRef]
67. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng.
2000, 186, 311–338. [CrossRef]
68. Haimes, Y.V. On a bicriterion formulation of the problems of integrated system identification and system
optimization. IEEE Trans. Syst. Man Cybern. 1971, 1, 296–297.
69. Mallipeddi, R.; Suganthan, P.N. Ensemble of constraint handling techniques. IEEE Trans. Evol. Comput. 2010,
14, 561–579. [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).