
algorithms

Article
OPTIMUS: Self-Adaptive Differential Evolution with
Ensemble of Mutation Strategies for Grasshopper
Algorithmic Modeling
Cemre Cubukcuoglu 1,2, Berk Ekici 1, Mehmet Fatih Tasgetiren 3,* and Sevil Sariyildiz 1
1 Faculty of Architecture and the Built Environment, Chair of Design Informatics,
Delft University of Technology, Julianalaan 134, 2628 BL Delft, The Netherlands
2 Department of Interior Architecture and Environmental Design, Faculty of Architecture, Yasar University,
Universite Caddesi, No: 37-39, Agacli Yol, Bornova, Izmir 35100, Turkey
3 Department of Industrial and Systems Engineering, Istinye University, Maltepe Mahallesi, Cirpici Yolu B Ck.
No: 9, Zeytinburnu, Istanbul 34010, Turkey
* Correspondence: [email protected]

Received: 7 May 2019; Accepted: 8 July 2019; Published: 12 July 2019 

Abstract: Most architectural design problems are basically real-parameter optimization problems, so any type of evolutionary or swarm algorithm can be used in this field. However, little attention has been paid to using optimization methods within computer aided design (CAD) programs. In this paper, we present Optimus, a new optimization tool for grasshopper algorithmic modeling in Rhinoceros CAD software. Optimus implements a self-adaptive differential evolution algorithm with an ensemble of mutation strategies (jEDE). We made an experiment using standard test problems in the literature and some of the test problems proposed in IEEE CEC 2005. We report the minimum, maximum, average, standard deviations, and number of function evaluations of five replications for each function. Experimental results on the benchmark suite showed that Optimus (jEDE) outperforms other optimization tools, namely Galapagos (genetic algorithm), SilverEye (particle swarm optimization), and Opossum (RbfOpt), by finding better results for 19 out of 20 problems. For only one function, Galapagos presented a slightly better result than Optimus. Ultimately, we presented an architectural design problem and compared the tools for testing Optimus in the design domain. We report the minimum, maximum, average, and number of function evaluations of one replication for each tool. Galapagos and Silvereye presented infeasible results, whereas Optimus and Opossum found feasible solutions. However, Optimus discovered a much better fitness result than Opossum. As a conclusion, we discuss the advantages and limitations of Optimus in comparison to the other tools. The target audience of this paper is frequent users of parametric design modelling, e.g., architects, engineers, and designers. The main contribution of this paper is summarized as follows. Optimus showed that near-optimal solutions of architectural design problems can be improved by testing different types of algorithms with respect to the no-free-lunch theorem. Moreover, Optimus facilitates implementing different types of algorithms due to its modular system.

Keywords: grasshopper; optimization; differential evolution; architectural design; computational design; performance based design; building performance optimization; single-objective optimization; architectural design optimization; parametric design

Algorithms 2019, 12, 141; doi:10.3390/a12070141 www.mdpi.com/journal/algorithms



1. Introduction

1.1. The Necessity of Optimization in Architecture


Architectural design problems have an excessive number of design parameters. All possible combinations of design parameters correspond to thousands of different design alternatives. It is difficult to choose the desirable design within such a big search space. In addition, architectural design requires different performance aspects to be satisfied, as design objectives focus on various topics (e.g., social, economic, physiological, health, safety, structural, cultural, sustainability, etc.) [1]. Some of these performance aspects (e.g., energy and daylight) require non-linear equations, which increase the level of complexity. All of these indicate that decision-making is highly important in the early stages of the design process, because the decisions that are taken in the early design phases have a great impact on the following design stages. As a result, they influence the overall performance and the appearance of the constructed building. At this point, computational optimization techniques have become a necessity in architectural design.

Regarding computational optimization techniques, metaheuristic algorithms can play a vital role not only in presenting promising design alternatives but also in dealing with complexity [2]. On the other hand, these algorithms do not guarantee finding the global optimal solutions [3]. However, they can present near-optimal results within a reasonable time. For a decision maker, providing a near-optimal solution in a reasonable time can be more advantageous than presenting the optimal solution within an extremely long time. To discover desirable solutions, metaheuristics are usually inspired by natural processes (such as interactions within swarms and evolution over generations). Some of these metaheuristic algorithms are harmony search (HS) [4], particle swarm optimization (PSO) [5], differential evolution (DE) [6,7], genetic algorithm (GA) [8], ant colony optimization (ACO) [9], simulated annealing (SA) [10], and evolutionary algorithm (EA) [11]. According to the current state of the art, evolutionary computation and swarm optimization algorithms are the most popular metaheuristics in the architectural domain [2].

1.2. Performative Computational Architecture Framework


In order to investigate how the desirable solution can be found in the early phase of the design process, a general framework called performative computational architecture (PCA) [1,2] is considered in this paper. PCA proposes an iterative method based on form finding, performance evaluation, and optimization, as illustrated in Figure 1. The form-finding stage includes the geometric generation using parameters in algorithmic modeling environments. The performance evaluation stage comprises the numeric assessments of performance aspects to evaluate how well the form meets the objectives. The optimization stage corresponds to the metaheuristic algorithms for discovering desirable design solutions within a systematic search process.
Figure 1. Performative computational architecture (PCA) framework.

1.3. Current Optimization Tools in Grasshopper

In this section, existing single-objective optimization tools for grasshopper (GH) (available at www.food4rhino.com) are reviewed. Algorithm applications for specific optimization problems (such as topology optimization for structures) are not considered. In this context, the Galapagos, Goat, Silvereye, Opossum, Dodo, and Nelder–Mead optimization plug-ins, shown in Figure 2, are explained. Some of these plug-ins have been compared in building optimization problems in the literature, as can be found in [12–15].

Figure 2. Existing single-objective optimization plug-ins in grasshopper (GH).

1.3.1. Galapagos
Galapagos [16] is one of the first released optimization plug-ins for GH. The tool provides two heuristic optimization algorithms, which are GA [8] and SA [10]. The author of the tool suggests SA for rough landscape navigation, whereas the evolutionary solver is suggested for finding reliable intermediate solutions. The majority of design optimization papers in the literature have utilized the Galapagos tool in dealing with energy [17–19], daylight [20,21], both energy and daylight [22,23], and structure [24–26].

1.3.2. Goat
Goat [27] uses the NLopt library [28] in the graphical user interface of Galapagos. The tool considers a mathematically-rigorous approach (gradient-free optimization algorithms) to reach fast and deterministic results. Goat provides several optimization algorithms as well. These are constrained optimization by linear approximation (COBYLA), bound optimization by quadratic approximation (BOBYQA), the subplex algorithm (Sbplx), the dividing rectangles algorithm (DIRECT), and controlled random search 2 (CRS2). Very recently, Goat has been used for building energy optimization [14] and structure optimization [29,30].

1.3.3. Silvereye
Apart from gradient-free optimization and evolutionary computation, Silvereye [15] is one of the swarm intelligence optimization plug-ins released for GH. The tool considers ParticleSwarmOptimization.dll, which is a shared library containing an implementation of the core version of the PSO. In the literature, Silvereye has been used in design optimization problems focusing on energy [14], micro climate [31], and structure [29].

1.3.4. Opossum
Opossum [32] is the first model-based optimization tool for GH. The solver is based on an open-source library for black-box optimization with costly function evaluations (RBFOpt) [33], such as energy and daylight simulations. The RBFOpt library uses radial basis functions with local search while discovering satisfactory solutions with a small number of function evaluations. Opossum has been used in several design problems to deal with daylight [34], structure [29,30], and energy [14].

1.3.5. Dodo
Dodo [35] is a plug-in based on different implementations of optimization algorithms. These are non-linear gradient-free optimization based on the NLopt library [28], a stochastic gradient descent algorithm, and swarm optimization. In addition, Dodo also provides several supervised and unsupervised neural network algorithms.

1.3.6. Nelder–Mead Optimization


Nelder–Mead Optimization [36] is the first tool in GH based on the Nelder–Mead method [37], a local search-based optimization algorithm. Compared to heuristics, Nelder–Mead typically requires fewer function evaluations for computationally expensive models. In addition, the implementation of Nelder–Mead Optimization also allows considering multiple constraints using the Kreisselmeier–Steinhauser function [38].

1.4. This Study: Optimus


In the field of computer science, different types of metaheuristic algorithms have been suggested by researchers and engineers over many years to solve real-parameter optimization problems. As a common approach, the performance of each developed algorithm is tested by using a set of standard benchmark problems such as Sphere, Schwefel's, Rosenbrock's, Rastrigin's, etc. For real-world problems, this test is done by using benchmark instances. The main reason for comparing algorithms is based on
the no free lunch theorem (NFLT) [39]. The performance of an optimization algorithm depends on the nature of the problem. In other words, one algorithm can outperform another algorithm in a specific problem. Thus, NFLT argues that there is no global optimization algorithm which can present the best results for all real-world and benchmark problems.

In the field of architecture, testing different types of algorithms for the same architectural design problem is not a common approach. One of the most important reasons for this fact is that the computer aided design (CAD) tools of architects do not include a wide range of optimization algorithms. According to the current state of the art [40], only 3% of all users of optimization tools are architects in the domain of building performance optimization. This fact clearly shows that little attention is paid to using optimization methods within CAD programs. Therefore, this paper introduces a new optimization solver, called Optimus, with the significant features listed below:

• Compatible with parametric design models created in GH [16] algorithmic modeling for Rhinoceros [41] CAD software.
• Supports the PCA framework outlined in the previous section.
• Implements a self-adaptive [42] differential evolution algorithm with ensemble of mutation strategies [43] (jEDE), explained in Section 2.
• Presents the highest performance when compared to the other optimization algorithms available in GH reviewed in Section 1.3.
  ◦ The performance of the Optimus is tested on a benchmark suite, which consists of standard single-objective unconstrained problems and some of the test problems proposed in the IEEE Congress on Evolutionary Computation 2005 (CEC 2005) [44]. Problem formulations and optimization results of these benchmarks are given in Section 4.1.
  ◦ Finally, Optimus is tested with a design optimization problem. The problem formulation and optimization results are given in Section 4.2.

The development process of the Optimus is illustrated in Figure 3.

Figure 3. Optimus development process.
2. Self-Adaptive Differential Evolution Algorithm with Ensemble of Mutation Strategies

Metaheuristics are among the most used optimization methods in the domain of architectural design [2]. These algorithms can avoid local minima and maxima while coping with real parameters in large search spaces. In addition, metaheuristics can present near-optimal results in a reasonable time when compared to other direct search methods [3].

Swarm intelligence (SI) and evolutionary algorithms (EAs) are the most common sub-categories of metaheuristics. These algorithms are inspired by nature and use different search strategies. SI is based on the interactions of swarms such as flocks of birds, schools of fish, ants, and bees. Some examples of SI are Ant Colony Optimization (ACO) [9] and Particle Swarm Optimization (PSO) [45,46]. EAs are in the class of population-based metaheuristic optimization algorithms. EAs are inspired by the mechanisms of biological evolution that mimic the selection and reproduction processes of living organisms. EAs are very effective in dealing with NP-hard problems. Some examples of EAs are Genetic Algorithms (GAs) [8,47], Genetic Programming (GP) [48], Evolution Strategy (ES) [49], and DE [7].
DE, introduced by Storn and Price [7], is potentially one of the most powerful stochastic real-parameter optimization algorithms in the literature. The algorithm can converge very fast when solving real-world problems such as in scheduling [50], optics [51], communication [6], power systems [52], pattern recognition [53], and recently in architectural design [54,55]. A recent survey by Das and Suganthan [56,57] clearly explains the history of DE and its success. The DE algorithm has many advantages. The simple code structure of the algorithm facilitates its implementation. Another advantage is that the number of control parameters in DE is small: the crossover rate (CR), the mutation rate (MR), and the population size (NP). In classical DE, these control parameters are constant during the whole optimization process. However, a simple change in the MR or CR generation strategies can significantly improve the performance of the algorithm. Therefore, some variants of DE in the literature focus on parameter settings, as presented in [42,58]. Moreover, DE can tackle large-scale and computationally expensive optimization problems, because the space complexity of DE is low, as mentioned in [59].

2.1. The Basic Differential Evolution


The classical DE has four main stages; there is a recursive process among the second, third, and fourth steps, as follows:

1. Initialization for generating the initial target population once at the beginning.
2. Reproduction with mutation for generating the mutant population by using the target population.
3. Reproduction with crossover for generating the trial population by using the mutant population.
4. Selection to choose the next generation among trial and target populations using one-to-one
comparison. In each generation, individuals of the current population become the target population.

2.1.1. Initialization
In the basic DE algorithm, the initial target population has NP individuals, each a D-dimensional real-parameter vector. Each vector is obtained randomly and uniformly within the search space restricted by the given minimum and maximum bounds [x_{ij}^{min}, x_{ij}^{max}]. Thus, the initialization of the j-th component of the i-th vector can be defined as:

x_{ij}^{0} = x_{ij}^{min} + (x_{ij}^{max} - x_{ij}^{min}) \times r, (1)

where x_{ij}^{0} is the j-th component of the i-th target individual at generation g = 0, and r is a uniform random number in the range [0, 1].
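For readers who want to trace the formula in code, a minimal C# sketch of Equation (1) follows; the array-based representation and the names (InitPopulation, xMin, xMax) are illustrative assumptions, not the actual Optimus source.

```csharp
using System;

static class DEInitialization
{
    // Equation (1): x0_ij = xmin_j + (xmax_j - xmin_j) * r, with r ~ U[0, 1].
    // Returns an NP-by-D population whose components respect the given bounds.
    public static double[][] InitPopulation(int np, int d, double[] xMin, double[] xMax, Random rand)
    {
        var population = new double[np][];
        for (int i = 0; i < np; i++)
        {
            population[i] = new double[d];
            for (int j = 0; j < d; j++)
                population[i][j] = xMin[j] + (xMax[j] - xMin[j]) * rand.NextDouble();
        }
        return population;
    }
}
```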

2.1.2. Mutation
The difference vector in the mutation operator is one of the main strengths of DE [7]. Hence, DE differs from other EAs, since it relies on a difference vector with a scale factor MR. The mutation process is the first step to generate new solutions. In order to obtain the mutant population, two individuals are randomly chosen from the target population. The weighted difference of these individuals is added to a third individual from the target population, as in Equation (2):

v_{ij}^{g} = x_{kj}^{g-1} + MR \times (x_{lj}^{g-1} - x_{mj}^{g-1}), (2)

where k, l, m are three randomly chosen individuals from the target population such that k \neq l \neq m \neq i \in (1, .., NP) and j = 1, .., D. MR > 0 is a mutation scale factor influencing the differential variation between two individuals. v_{ij}^{g} is the mutant population in generation g.
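A minimal sketch of Equation (2) in C#, under the same illustrative array representation as above (names are assumptions):

```csharp
using System;

static class DEMutation
{
    // Equation (2): v_ij = x_kj + MR * (x_lj - x_mj), where k, l, m are mutually
    // distinct random indices that also differ from the current individual i.
    public static double[] Mutate(double[][] target, int i, double mr, Random rand)
    {
        int np = target.Length, d = target[i].Length;
        int k, l, m;
        do { k = rand.Next(np); } while (k == i);
        do { l = rand.Next(np); } while (l == i || l == k);
        do { m = rand.Next(np); } while (m == i || m == k || m == l);

        var v = new double[d];
        for (int j = 0; j < d; j++)
            v[j] = target[k][j] + mr * (target[l][j] - target[m][j]);
        return v;
    }
}
```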

2.1.3. Crossover
To obtain the trial population, a binomial crossover is applied to each variable. If a randomly and uniformly generated number r in [0, 1] is less than or equal to the crossover rate (CR), the component from the mutant population is chosen; otherwise, the target component is chosen. Simply, the trial population is generated by recombining mutant individuals with their corresponding target individuals as follows:

u_{ij}^{g} = \begin{cases} v_{ij}^{g} & \text{if } r_{ij}^{g} \le CR \text{ or } j = D_j \\ x_{ij}^{g-1} & \text{otherwise} \end{cases}, (3)

where the index D_j is a randomly chosen dimension from (j = 1, .., D). It makes sure that at least one parameter of the trial population u_{ij}^{g} will be different from the target population x_{ij}^{g-1}. CR is a user-defined crossover constant in the range [0, 1], r_{ij}^{g} is a uniform random number in the interval [0, 1], and u_{ij}^{g} is the trial population at generation g.

When the trial population is obtained, parameter values might violate the search boundaries. For this reason, parameter values that violate the search range are randomly and uniformly re-generated, as in Equation (4):

x_{ij}^{g} = x_{ij}^{min} + (x_{ij}^{max} - x_{ij}^{min}) \times r. (4)
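A combined C# sketch of the binomial crossover and the boundary repair in Equations (3) and (4), with illustrative names:

```csharp
using System;

static class DECrossover
{
    // Equations (3)-(4): binomial crossover followed by random re-generation
    // of any component that violates the search bounds.
    public static double[] Crossover(double[] targetI, double[] mutantI, double cr,
                                     double[] xMin, double[] xMax, Random rand)
    {
        int d = targetI.Length;
        int dj = rand.Next(d); // forced dimension D_j, so the trial differs from the target
        var u = new double[d];
        for (int j = 0; j < d; j++)
        {
            u[j] = (rand.NextDouble() <= cr || j == dj) ? mutantI[j] : targetI[j];
            if (u[j] < xMin[j] || u[j] > xMax[j])       // boundary repair, Eq. (4)
                u[j] = xMin[j] + (xMax[j] - xMin[j]) * rand.NextDouble();
        }
        return u;
    }
}
```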

2.1.4. Selection
For the next generation, the selection process is realized, which is based on the survival of the fittest among the trial and target populations. The individual that has the lower fitness value is chosen according to one-to-one comparison, as in Equation (5):

x_{i}^{g} = \begin{cases} u_{i}^{g} & \text{if } f(u_{i}^{g}) \le f(x_{i}^{g-1}) \\ x_{i}^{g-1} & \text{otherwise} \end{cases}. (5)
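The one-to-one comparison of Equation (5) in the same illustrative style; in a real implementation the fitness values would be cached rather than recomputed on every call:

```csharp
using System;

static class DESelection
{
    // Equation (5): the better of target and trial survives (minimization).
    public static double[] Select(double[] targetI, double[] trialI,
                                  Func<double[], double> fitness)
    {
        return fitness(trialI) <= fitness(targetI) ? trialI : targetI;
    }
}
```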

2.2. Self-Adaptive Approach


In this paper, a self-adaptive DE [42], the so-called jDE, is employed. The jDE is very simple, effective, and converges much faster than the basic DE, especially when the dimensionality of the problem is high or the problem is complex. In the jDE, each individual has its own MR_i and CR_i values. In this paper, they are initially taken as CR_i = 0.5 and MR_i = 0.9, and they are updated as follows:

MR_{i}^{g} = \begin{cases} MR_l + r_1 \times MR_u & \text{if } r_2 < t_1 \\ MR_{i}^{g-1} & \text{otherwise} \end{cases}, (6)

CR_{i}^{g} = \begin{cases} r_3 & \text{if } r_4 < t_2 \\ CR_{i}^{g-1} & \text{otherwise} \end{cases}, (7)

where r_j, j \in \{1, 2, 3, 4\}, are uniform random numbers in the range [0, 1]. t_1 and t_2 represent the probabilities to adjust the MR and CR. They are taken as t_1 = t_2 = 0.1, MR_l = 0.1, and MR_u = 0.9.
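A minimal sketch of the per-individual parameter update in Equations (6) and (7); names and the tuple-returning signature are assumptions for illustration:

```csharp
using System;

static class JDEParameters
{
    const double MrLower = 0.1, MrUpper = 0.9; // MR_l and MR_u
    const double T1 = 0.1, T2 = 0.1;           // adjustment probabilities t_1, t_2

    // Equations (6)-(7): each individual keeps its own MR_i and CR_i, which are
    // re-sampled with a small probability before it produces its next mutant.
    public static (double mr, double cr) Update(double mrPrev, double crPrev, Random rand)
    {
        double mr = rand.NextDouble() < T1 ? MrLower + rand.NextDouble() * MrUpper : mrPrev;
        double cr = rand.NextDouble() < T2 ? rand.NextDouble() : crPrev;
        return (mr, cr);
    }
}
```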

2.3. Ensemble Approach


In addition to the self-adaptive approach, an ensemble approach [43] is employed in the jDE, the result being the so-called jEDE. This means that, instead of using one type of mutation strategy with a fixed parameter setting as in the basic DE, each mutant individual is generated according to different mutation strategies with different parameter settings. In this approach, the strategies compete to produce better offspring according to their success in the past generations. In this paper, the following mutation strategies (M_i) are considered, as in Equations (8)–(10). In M_1, the individuals that form the mutant population are randomly selected. In M_2 and M_3, the strategies benefit from the information of the best solution so far (xbest):

if STR_i = 0:  M_1: v_{ij}^{t+1} = x_{kj}^{t} + MR_i \times (x_{lj}^{t} - x_{mj}^{t}), (8)

if STR_i = 1:  M_2: v_{ij}^{t+1} = xbest_{j}^{t} + MR_i \times (x_{lj}^{t} - x_{mj}^{t}), (9)

if STR_i = 2:  M_3: v_{ij}^{t+1} = x_{ij}^{t} + MR_i \times (xbest_{j}^{t} - x_{ij}^{t}) + F \times (x_{kj}^{t} - x_{lj}^{t}), (10)

where k, l, m are three randomly selected individuals from the target population such that k \neq l \neq m \neq i \in (1, .., NP) and j = 1, .., D. MR_i > 0 is a mutation scale factor; in our jEDE, it is generated by using the self-adaptive procedure. STR_i is the strategy assigned to each individual to choose among the different mutation strategies. The pseudo code of the jEDE is given in Algorithm 1.

Algorithm 1. The self-adaptive differential evolution algorithm with ensemble of mutation strategies.

1: Set parameters g = 0, NP = 100, Mmax = 4
2: Establish the initial population randomly:
3:   P^g = {x_1^g, .., x_NP^g} with x_i^g = {x_i1^g, .., x_iD^g}
4: Assign a mutation strategy to each individual randomly:
5:   M_i = rand() % Mmax for i = 1, .., NP
6: Evaluate the population and find xbest^g:
7:   f(P^g) = {f(x_1^g), .., f(x_NP^g)}
8: Assign CR[i] = 0.5 and F[i] = 0.9 to each individual
9: Repeat the following for each individual x_i^g:
10:   Obtain v_i^g = M_i(x_i^g)
11:   Obtain u_i^g = CR_i(x_i^g, v_i^g)
12:   Obtain x_i^g = u_i^g if f(u_i^g) <= f(x_i^(g-1)); otherwise x_i^g = x_i^(g-1)
13:   If f(u_i^g) > f(x_i^(g-1)), then M_i = rand() % Mmax
14:   If f(x_i^g) <= f(xbest^g), then xbest^g = x_i^g
15:   Update F_i^g and CR_i^g
16: If the stopping criterion is not met, go to Lines 9–15
17: Else stop and return xbest
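A C# sketch of the ensemble mutation in Equations (8)–(10); the strategy index corresponds to STR_i in the equations, and the failed-offspring re-assignment of line 13 of Algorithm 1 would happen outside this method. The switch-based structure and all names are illustrative assumptions:

```csharp
using System;

static class EnsembleMutation
{
    // Equations (8)-(10): STR_i selects how the mutant of individual i is built.
    // mrI is the self-adaptive scale factor of individual i; f is the extra
    // scale factor used by M3.
    public static double[] Mutate(double[][] x, double[] xBest, int i, int strI,
                                  double mrI, double f, Random rand)
    {
        int np = x.Length, d = x[i].Length;
        int k, l, m;
        do { k = rand.Next(np); } while (k == i);
        do { l = rand.Next(np); } while (l == i || l == k);
        do { m = rand.Next(np); } while (m == i || m == k || m == l);

        var v = new double[d];
        for (int j = 0; j < d; j++)
            v[j] = strI switch
            {
                0 => x[k][j] + mrI * (x[l][j] - x[m][j]),                            // M1, Eq. (8)
                1 => xBest[j] + mrI * (x[l][j] - x[m][j]),                           // M2, Eq. (9)
                _ => x[i][j] + mrI * (xBest[j] - x[i][j]) + f * (x[k][j] - x[l][j])  // M3, Eq. (10)
            };
        return v;
    }
}
```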

3. Optimus
Optimus is a new optimization plug-in (https://www.food4rhino.com/app/optimus) developed
for GH. The beta version (1.0.0) of the plug-in implements the self-adaptive [42] differential evolution
algorithm with ensemble of mutation strategies [43] (jEDE). The algorithm is coded in C#, which is one
of the available programming languages for custom scripting component in the GH. Optimus is based
on a modular approach with many C# items. Every step of the optimization process can be observed
within these items. Figure 4 shows the eight components of Optimus, which are categorized under
three main titles, as follows:

1. Optimus consists of

a. ReadMe (giving details about Optimus),


b. jEDE (using jEDE algorithm, generating mutant and trial populations)
c. One-to-one (enabling selection process for next generation)

2. Initialize consists of
a. GetBound (taking the boundaries of design variables in D dimensions)
b. InitPop (generating the initial population of size NP in D dimensions)

3. Utilities consists of

a. xBest (finding the chromosome that has the lowest fitness value in the population)
b. Merge (collecting the fitness, population, xBest, and optimization parameters)
c. Unmerge (separating the fitness, population, xBest, and optimization parameters)

To manage the main loop of the Optimus, the HoopSnake component [60] is used for enabling feedback loops (recursive executions) within the GH.

Figure 4. Components of Optimus v1.0.0 (beta).

In the Optimus, each of the eight components is transferred to GH clusters by defining collections of inputs and outputs, as shown in Figure 5. This gives the user the flexibility to improve the C# code for his/her own implementation purposes. As a next step, each cluster is converted to GH user object files, which are provided in the supplementary materials. In this way, Optimus can be used as a plug-in of the GH.

Figure 5. Example cluster implementation for the self-adaptive differential evolution algorithm with ensemble of mutation strategies (jEDE) component.

The user needs to follow several steps for using Optimus:
1. Place GetBound on the GH canvas and connect with number sliders.

2. Define the population size.
3. Get InitPop for initialization using the population size and the output of GetBound.
4. Evaluate the initial fitness using the output of InitPop.
5. Internalize the initial fitness.
6. Place xBest on the GH canvas.
7. Get Merge and connect with the internalized initial fitness and the outputs of InitPop and xBest.
8. Connect Merge with the starting input (S) of HoopSnake.
9. Place UnMerge on the GH canvas and connect with the feedback output (F) of HoopSnake.
10. Get jEDE and connect the outputs of UnMerge, InitPop, and GetBound.
11. Evaluate the trial fitness using the output of jEDE.
12. Get One-to-One and connect with the initial fitness, the trial fitness, and the outputs of jEDE.
13. Place xBest and connect the outputs of One-to-one for updating the new best chromosome.
14. Get Merge and connect the outputs of jEDE, One-to-one, and xBest.
15. Complete the loop by connecting the output of Merge to the data input (D) of the HoopSnake.

These steps are visualized in Figure 6, and an example file is shared in the supplementary materials.

Figure 6. Visualization of the Optimus loop.

To save some computation time, Optimus does not update number sliders in the GH. During the optimization process of the design problem, the geometry is generated with the initial and trial populations. For this reason, NP geometries are observed in each iteration. During the initial stage, various types of design alternatives are generated. However, when Optimus has converged, similar geometries are observed, as shown in Figure 7.

Figure 7. Examples of initial and optimized populations.

4. Experiments

In this section, we explain how the experiments were performed to test the performance of the Optimus in comparison to the other chosen plug-ins. As mentioned before, the evaluations are done by using standard benchmark problems in the literature and one architectural design problem proposed by the authors.

4.1. Benchmark Suite

The performance of the Optimus (jEDE algorithm) was first tested by using the following benchmark suite, which consists of 20 test functions. The first ten functions in the benchmark suite are classical test problems that have been commonly used in the literature. The remaining ten functions are taken from the benchmark suite presented in the CEC 2005 Special Session on Real-Parameter Optimization. These functions have been modified from the classical test problems in order to locate their global optimum under some circumstances, such as a shifted and/or rotated landscape, the optimum placed on bounds, Gaussian noise and/or bias added, etc. [44]. This fact makes these functions more difficult to solve than the classical test functions. In our test suite, F1 to F5 are unimodal and F6 to F10 are multimodal functions. All the benchmark functions are minimization problems. Our benchmark suite is presented in Table 1. These functions are summarized in Appendix A.

Table 1. Benchmark Suite.

Notation Function
Fsph Sphere Function
Fros Rosenbrock’s Function
Fack Ackley’s Function
F grw Griewank’s Function
Fras Rastrigin’s Function
Fsch Generalized Schwefel’s Problem 2.26
Fsal Salomon’s Function
Fwht Whitely’s Function
Fpn1 Generalized Penalized Function 1
Fpn2 Generalized Penalized Function 2
F1 Shifted Sphere Function
F2 Shifted Schwefel’s Problem 1.2
F3 Shifted Rotated High Conditioned Elliptic Function
F4 Shifted Schwefel’s Problem 1.2 With Noise in Fitness
F5 Schwefel’s Problem 2.6 With Global Optimum on Bounds
F6 Shifted Rosenbrock’s Function
F7 Shifted Rotated Griewank’s Function without Bounds
F8 Shifted Rotated Ackley’s Function with Global Optimum on Bounds
F9 Shifted Rastrigin’s Function
F10 Shifted Rotated Rastrigin’s Function

4.1.1. Experimental Setup and Evaluation Criteria


All the benchmark functions were coded in C# as individual components in the GH environment. These components are also available in the supplementary materials as GH files to support evaluations of further developed architectural design optimization tools. Furthermore, all the benchmark functions ran on a computer that has an Intel Core i7-6500U CPU @ 2.50 GHz with 16 GB RAM. Both the number of dimensions D and the population size NP are taken as 30 to limit the search space. To make a fair comparison, the termination criterion is defined as 30 min for each component. For each problem instance, five replications were carried out for each function and for each tool (thus, the total run time is 12,000 min). For evaluating the performance of the algorithms (of the components), we simply report f(x)_min, f(x)_max, and f(x)_avg, where f(x)_min is the minimum, f(x)_max is the maximum, and f(x)_avg is the average fitness value of function x after all replications for each tool. The maximum number of fitness evaluations (FES) within 30 min for each tool is also recorded, which indicates how many times the fitness is tested during each 30-min optimization process.
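To give a sense of what such a benchmark component computes, here is a minimal C# sketch of the Sphere function; it is an illustrative stand-in, not the actual GH component code from the supplementary materials.

```csharp
using System.Linq;

static class SphereBenchmark
{
    // Sphere function f(x) = sum_j x_j^2; its global minimum is 0 at the origin.
    // x is one chromosome of D real-valued parameters supplied by the optimizer.
    public static double Evaluate(double[] x) => x.Sum(v => v * v);
}
```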

4.1.2. Experimental Results


As mentioned before, Galapagos employs GA, SilverEye (v1.1.0) uses PSO, and Opossum (v1.5.0) considers RBFOpt for enabling architectural design optimization in the GH environment. We compared the results of Optimus, which uses jEDE, with those three optimization tools to present its performance. In addition, all the runs for each component and for each function were taken by the authors. Table 2 shows the fitness results of Optimus, Opossum, SilverEye, and Galapagos together with the optimal fitness values of each function. Table 2 clearly indicates the superiority of the Optimus over the other optimization tools such that it yielded the lowest minimum function values (f(x)_min) in nineteen (19) out of twenty (20) functions, significantly. On the other hand, Galapagos favors Optimus in only one function (F8) with a very small difference. The maximum (f(x)_max) and average (f(x)_avg) function values in Table 2 further justify the better performance of the Optimus in such a way that the maximum and average function values of Optimus are closer to the optimal fitness values in nineteen (19) functions than those yielded by the other tools. Furthermore, the average number of function evaluations (FES) within thirty minutes in each problem was the highest in Optimus. This clearly shows the high margins between Optimus and the other components, where Optimus was tremendously faster in each iteration.

For example, when solving Fsph , the Optimus (jEDE) approximately realized 5844 FES/minute, whereas
GA made 1643 FES/minute, PSO made 1205 FES/minute, RBFOpt made 85 FES/minute. These facts
explicitly imply the superiority of Optimus over the other components in solving the benchmark suite.

Table 2. Comparison of Optimus, Opossum, Silvereye, Galapagos (D = 30, NP = 30, termination: 30 min).

            Optimus_jEDE        Opossum_RBFOpt      SilverEye_PSO       Galapagos_GA

Fsph (Optimal: 0)
f(x)_min    0.0000000 × 10^0    1.4000000 × 10^−5   5.9000000 × 10^−5   1.1709730 × 10^0
f(x)_max    0.0000000 × 10^0    5.8000000 × 10^−5   2.7057298 × 10^1    4.4052130 × 10^0
f(x)_avg    0.0000000 × 10^0    3.6400000 × 10^−5   5.4171618 × 10^0    2.7928586 × 10^0
Std.Dev.    0.0000000 × 10^0    1.8039956 × 10^−5   1.0820072 × 10^1    1.1492298 × 10^0
FES         194,520             3225                31,560              34,260

Fros (Optimal: 0)
f(x)_min    0.0000000 × 10^0    2.7485056 × 10^1    1.6689612 × 10^1    6.0863438 × 10^3
f(x)_max    3.9866240 × 10^0    2.1030328 × 10^2    5.8965910 × 10^4    2.2859534 × 10^4
f(x)_avg    2.3919744 × 10^0    9.0892522 × 10^1    1.3859753 × 10^4    1.3060872 × 10^4
Std.Dev.    1.9530389 × 10^0    7.1919037 × 10^1    2.2886020 × 10^4    6.7095472 × 10^3
FES         149,460             882                 26,700              35,070

Fack (Optimal: 0)
f(x)_min    0.0000000 × 10^0    3.3550000 × 10^−3   1.3404210 × 10^0    5.7470000 × 10^−2
f(x)_max    1.3404210 × 10^0    2.4098540 × 10^0    3.7340120 × 10^0    1.0270860 × 10^0
f(x)_avg    2.6808420 × 10^−1   1.3795174 × 10^0    2.2482728 × 10^0    4.8037520 × 10^−1
Std.Dev.    5.3616840 × 10^−1   8.5713298 × 10^−1   9.1850828 × 10^−1   4.0392221 × 10^−1
FES         206,370             1447                38,490              28,710

Fgrw (Optimal: 0)
f(x)_min    0.0000000 × 10^0    1.5840000 × 10^−3   3.2081000 × 10^−2   3.4407200 × 10^−1
f(x)_max    0.0000000 × 10^0    1.7086000 × 10^−2   2.6292800 × 10^−1   1.0657060 × 10^0
f(x)_avg    0.0000000 × 10^0    7.6638000 × 10^−3   1.2049020 × 10^−1   8.2474220 × 10^−1
Std.Dev.    0.0000000 × 10^0    5.6121253 × 10^−3   8.1064770 × 10^−2   2.6131521 × 10^−1
FES         151,110             1089                26,610              37,410

Fras (Optimal: 0)
f(x)_min    4.9747950 × 10^0    2.5870757 × 10^1    3.3829188 × 10^1    7.0535550 × 10^0
f(x)_max    2.3879007 × 10^1    4.1789542 × 10^1    6.1687356 × 10^1    2.9072445 × 10^1
f(x)_avg    1.3332448 × 10^1    3.6218407 × 10^1    4.8355074 × 10^1    1.5404780 × 10^1
Std.Dev.    6.7363920 × 10^0    5.4349940 × 10^0    1.1424086 × 10^1    9.1077975 × 10^0
FES         206,520             4149                37,650              51,480

Fsch (Optimal: 0)
f(x)_min    2.3687705 × 10^2    1.2877414 × 10^3    4.1089621 × 10^3    1.9550066 × 10^3
f(x)_max    4.7375372 × 10^2    4.0111803 × 10^3    6.1589437 × 10^3    2.7977670 × 10^3
f(x)_avg    4.0269072 × 10^2    2.7169368 × 10^3    5.2658793 × 10^3    2.3201102 × 10^3
Std.Dev.    9.4750668 × 10^1    8.8809862 × 10^2    6.6677783 × 10^2    2.8681876 × 10^2
FES         148,140             1487                27,210              35,940

Fsal (Optimal: 0)
f(x)_min    1.9987300 × 10^−1   2.8070190 × 10^0    2.9987300 × 10^−1   1.4998750 × 10^0
f(x)_max    4.9987300 × 10^−1   4.3000810 × 10^0    4.9987300 × 10^−1   2.8375760 × 10^0
f(x)_avg    3.1987300 × 10^−1   3.4413810 × 10^0    3.7987300 × 10^−1   2.0682340 × 10^0
Std.Dev.    1.1661904 × 10^−1   5.6623101 × 10^−1   7.4833148 × 10^−2   6.1557512 × 10^−1
FES         201,720             4769                38,640              51,360

Fwht (Optimal: 0)
f(x)_min    2.3704633 × 10^1    9.6592754 × 10^2    1.8455490 × 10^2    2.3632742 × 10^5
f(x)_max    2.4040716 × 10^2    1.6904059 × 10^3    6.2776811 × 10^2    5.5055000 × 10^8
f(x)_avg    1.0789137 × 10^2    1.2610498 × 10^3    4.0698894 × 10^2    2.7440867 × 10^8
Std.Dev.    7.4951993 × 10^1    2.6984398 × 10^2    1.6923390 × 10^2    2.2575944 × 10^8
FES         146,640             728                 23,250              29,730

Fpn1 (Optimal: 0)
f(x)_min    0.0000000 × 10^0    2.9057000 × 10^−2   3.1283800 × 10^−1   1.4510000 × 10^−3
f(x)_max    0.0000000 × 10^0    9.0392970 × 10^0    1.3487570 × 10^0    1.7632000 × 10^−2
f(x)_avg    0.0000000 × 10^0    2.8243854 × 10^0    6.7680180 × 10^−1   6.0638000 × 10^−3
Std.Dev.    0.0000000 × 10^0    3.1774566 × 10^0    4.2868737 × 10^−1   6.0403379 × 10^−3
FES         203,880             1394                39,420              57,720

Fpn2 (Optimal: 0)
f(x)_min    0.0000000 × 10^0    2.0400434 × 10^1    1.0000000 × 10^−11  1.8037300 × 10^−1
f(x)_max    1.0987000 × 10^−2   2.8693232 × 10^1    9.3079800 × 10^−1   2.7208440 × 10^0
f(x)_avg    2.1974000 × 10^−3   2.5384324 × 10^1    2.2552480 × 10^−1   1.0041520 × 10^0
Std.Dev.    4.3948000 × 10^−3   3.4851206 × 10^0    3.5679494 × 10^−1   9.4298611 × 10^−1
FES         148,380             639                 29,040              41,520

F1 (Optimal: −450)
f(x)_min    −4.5000000 × 10^2   −4.4999898 × 10^2   2.6232595 × 10^2    −4.4995998 × 10^2
f(x)_max    −4.5000000 × 10^2   −4.4999478 × 10^2   8.0377273 × 10^3    −4.4988406 × 10^2
f(x)_avg    −4.5000000 × 10^2   −4.4999680 × 10^2   4.0824562 × 10^3    −4.4992015 × 10^2
Std.Dev.    0.0000000 × 10^0    1.4829922 × 10^−3   2.9460423 × 10^3    3.0428108 × 10^−2
FES         198,060             6156                45,580              66,180
Table 2. Cont.

            Optimus_jEDE        Opossum_RBFOpt      SilverEye_PSO       Galapagos_GA

F2 (Optimal: −450)
f(x)_min    −4.5000000 × 10^2   3.4590652 × 10^4    −3.8035693 × 10^2   6.8476195 × 10^3
f(x)_max    −4.5000000 × 10^2   4.7978174 × 10^4    5.2590674 × 10^2    1.2302281 × 10^4
f(x)_avg    −4.5000000 × 10^2   4.3226072 × 10^4    −1.2838464 × 10^2   1.0174618 × 10^4
Std.Dev.    0.0000000 × 10^0    4.9030645 × 10^3    3.3183646 × 10^2    1.8557926 × 10^3
FES         146,010             1061                33,840              50,160

F3 (Optimal: −450)
f(x)_min    6.0045376 × 10^4    1.5561000 × 10^7    1.9264000 × 10^6    1.1250000 × 10^7
f(x)_max    2.4850013 × 10^5    7.5084000 × 10^7    8.0820000 × 10^6    2.7772000 × 10^7
f(x)_avg    1.2857393 × 10^5    3.7380600 × 10^7    4.5525000 × 10^6    1.7212200 × 10^7
Std.Dev.    6.4175884 × 10^4    2.0647812 × 10^7    2.0206526 × 10^6    5.6281541 × 10^6
FES         205,260             1293                48,030              66,000

F4 (Optimal: −450)
f(x)_min    −4.5000000 × 10^2   2.8373782 × 10^4    −4.2715712 × 10^2   7.8877685 × 10^3
f(x)_max    −4.5000000 × 10^2   3.9404224 × 10^4    4.0484178 × 10^3    1.1191542 × 10^4
f(x)_avg    −4.5000000 × 10^2   3.2359668 × 10^4    5.1724092 × 10^2    9.4535270 × 10^3
Std.Dev.    0.0000000 × 10^0    4.0412734 × 10^3    1.7663033 × 10^3    1.2966977 × 10^3
FES         147,240             1055                35,610              53,520

F5 (Optimal: −310)
f(x)_min    9.3362001 × 10^2    4.7527012 × 10^3    7.5856684 × 10^3    1.8506721 × 10^4
f(x)_max    2.5603668 × 10^3    5.8813877 × 10^3    1.2910221 × 10^4    2.4057172 × 10^4
f(x)_avg    2.0333032 × 10^3    5.3824799 × 10^3    9.4390617 × 10^3    2.0151105 × 10^4
Std.Dev.    570.1512256         438.2070353         2031.289127         2004.100331
FES         195,720             1891                47,550              67,560

F6 (Optimal: 390)
f(x)_min    3.9000000 × 10^2    1.4168473 × 10^3    5.0073093 × 10^2    9.7856127 × 10^2
f(x)_max    3.9398662 × 10^2    1.3904779 × 10^4    4.1540000 × 10^9    9.6995775 × 10^3
f(x)_avg    3.9159465 × 10^2    8.7212119 × 10^3    9.8402870 × 10^8    5.6395846 × 10^3
Std.Dev.    1.9530389 × 10^0    4.6035484 × 10^3    1.5972516 × 10^8    3.5378379 × 10^3
FES         148,260             687                 33,540              48,810

F7 (Optimal: −180)
f(x)_min    4.5162886 × 10^3    4.5162896 × 10^3    5.8669417 × 10^3    4.5172240 × 10^3
f(x)_max    4.5162886 × 10^3    4.5162985 × 10^3    7.2432580 × 10^3    4.5290168 × 10^3
f(x)_avg    4.5162886 × 10^3    4.5162936 × 10^3    6.5251090 × 10^3    4.5222540 × 10^3
Std.Dev.    0.0000000 × 10^0    3.1911420 × 10^−3   5.4380701 × 10^2    4.7496031 × 10^0
FES         200,820             10,108              42,060              58,290

F8 (Optimal: −140)
f(x)_min    −1.1905178 × 10^2   −1.1910166 × 10^2   −1.1901775 × 10^2   −1.1940297 × 10^2
f(x)_max    −1.1902135 × 10^2   −1.1876717 × 10^2   −1.1892500 × 10^2   −1.1906700 × 10^2
f(x)_avg    −1.1903319 × 10^2   −1.1889866 × 10^2   −1.1899553 × 10^2   −1.1919711 × 10^2
Std.Dev.    1.0538581 × 10^−2   1.1070562 × 10^−1   3.5483336 × 10^−2   1.4127484 × 10^−2
FES         149,670             2018                35,580              52,020

F9 (Optimal: −330)
f(x)_min    −3.1706554 × 10^2   −2.5827181 × 10^2   −2.3690804 × 10^2   −3.1677970 × 10^2
f(x)_max    −3.1209075 × 10^2   −2.3567358 × 10^2   −1.6682751 × 10^2   −3.1164785 × 10^2
f(x)_avg    −3.1527462 × 10^2   −2.4625749 × 10^2   −1.8917798 × 10^2   −3.1499413 × 10^2
Std.Dev.    1.7117897 × 10^0    8.0334497 × 10^0    2.5285354 × 10^1    1.8617799 × 10^0
FES         212,160             5577                47,610              67,560

F10 (Optimal: −330)
f(x)_min    −2.7030257 × 10^2   −2.3528777 × 10^2   −1.3299956 × 10^2   4.0215414 × 10^1
f(x)_max    −2.2751946 × 10^2   −1.3298172 × 10^2   −8.1262211 × 10^1   1.8543670 × 10^2
f(x)_avg    −2.5139841 × 10^2   −1.8578676 × 10^2   −1.0572303 × 10^2   1.1181316 × 10^2
Std.Dev.    1.4307998 × 10^1    3.9394042 × 10^1    1.8528544 × 10^1    5.7458318 × 10^1
FES         146,820             1192                35,220              53,070

Results presented in Table 2 are provided in the supplementary materials, containing the minimum fitness values of each replication, as well as their corresponding chromosomes. Considering f(x)_min, the convergence of the standard benchmarks is presented in Figure 8, whereas the convergence of the CEC 2005 benchmarks is given in Figure 9.
Figure 8. Convergence graphs of standard benchmark functions.

Figure 9. Convergence graphs of CEC 2005 benchmark functions.
4.2. Design Optimization Problem

In this part, we present a design problem for a frame structure, which has a span of 30 m by 25 m. Before generating the frame structure, we created a base surface, which controls the shape of the design. Therefore, a curved surface having a 7.5 m height, five axes, and 25 control points is generated. Afterwards, 65 parameters are defined for the 5 axes to change the shape of the base surface, and thus the shape of the frame structure. Points on the ground level have two parameters for the x and y directions, whereas the other points have three parameters for changing the positions in all directions. In the next step, using the base surface, we generated the frame structure using the truss structure component provided by the LunchBox [61] plug-in, which consists of several components using different geometrical shapes. In this component, the structure on the base surface is controlled by three parameters. These are the division amounts in the u and v directions, and the depth of the truss system. Finally, the generated frame structure is combined with the Karamba 3D plug-in [62], which provides evaluation of parametric structural models in GH. The structural evaluation process is described below. Development of the parametric model is illustrated in Figure 10.

Figure 10. Process of parametric frame structure.

To evaluate each generated structure alternative, each line in the structure model is defined as a beam, whereas each point located on the ground level is defined as a support point. In addition, the cross-section and its parameters are also defined by Karamba just before the evaluation process. For the problem at hand, a rectangular cross-section is defined using three parameters. These are the height, and the upper and lower widths of the cross-section. To finalize the necessary inputs, we defined gravity and lateral loads. As the gravity load, we considered the distribution of the total mass for each intersecting point. For the lateral load, we applied 2 kN on each intersecting point, as well. The material type of the model is assigned as steel. An example of an evaluated model is shown in Figure 11.

Figure 11. Evaluation of the structure model.

To sum up, 70 parameters are used to define the design optimization problem. This corresponds
To sum up, 70 parameters are used to define the design optimization problem. This corresponds
approximately 1.333 × 10177 alternatives in the search space. The reason of giving such a freedom is
approximately 1.333× 10 alternatives in the search space. The reason of giving such a freedom is
to investigate the impact of the design on the performance. Moreover, design problems are different
to investigate the impact of the design on the performance. Moreover, design problems are different
than mathematical benchmark problems. A design problem can have more parameters due to an
than mathematical benchmark problems. A design problem can have more parameters due to an
optimization task (e.g., optimizing the layout scheme and the façade design of a building to find the
optimization task (e.g., optimizing the layout scheme and the façade design of a building to find the
most desirable plan scheme with the lowest energy consumption). Properties of design parameters are
most desirable plan scheme with the lowest energy consumption). Properties of design parameters
given in Table 3. To show the divergence of the design problem, some of the frame structure alternatives
are given in Table 3. To show the divergence of the design problem, some of the frame structure
are illustrated in Figure 12.
alternatives are illustrated in Figure 12.
Table 3. Properties of design parameters.

Notation   Design Parameter                              Min      Max     Type
x1–x13     Coordinates of control points in axis 1      −2.00     2.00    Continuous
x14–x26    Coordinates of control points in axis 2      −2.00     2.00    Continuous
x27–x39    Coordinates of control points in axis 3      −2.00     2.00    Continuous
x40–x52    Coordinates of control points in axis 4      −2.00     2.00    Continuous
x53–x65    Coordinates of control points in axis 5      −2.00     2.00    Continuous
x66        Division amount on u direction                3        10      Discrete
x67        Division amount on v direction                3        10      Discrete
x68        Depth of the truss                            0.50     1.00    Continuous
x69        Height of the cross-section                  10.00    30.00    Continuous
x70        Upper and lower width of the cross-section   10.00    30.00    Continuous
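To make the mixed continuous/discrete encoding concrete, the following Python sketch (our illustration, not the Optimus implementation) builds the bound arrays of Table 3 and samples one random chromosome, rounding the two discrete division genes; all names are ours.

```python
import numpy as np

# Hypothetical sketch of the Table 3 encoding: x1-x65 are control point
# coordinates, x66-x67 the u/v division amounts (discrete), and x68-x70
# the truss depth and cross-section parameters.
lower = np.array([-2.00] * 65 + [3.0, 3.0] + [0.50, 10.00, 10.00])
upper = np.array([2.00] * 65 + [10.0, 10.0] + [1.00, 30.00, 30.00])
DISCRETE = [65, 66]  # zero-based indices of x66 and x67

def random_individual(rng):
    """Sample one 70-gene chromosome uniformly within the bounds."""
    x = lower + rng.random(70) * (upper - lower)
    x[DISCRETE] = np.rint(x[DISCRETE])  # snap the discrete genes to integers
    return x

rng = np.random.default_rng(0)
x0 = random_individual(rng)  # one random design alternative
```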
Figure 12. Some alternatives of the design problem.
The objective function, which is minimizing the mass (m) subject to displacement (v), is formulated as follows:

Min(m), where m is given by

$$m = \sum_{i=1}^{j} W_i, \qquad (11)$$

where $W_i$ is the weight of the $i$th element of the frame structure and $j$ is the total number of the frame structure elements.

Subject to:

$$v \leq 0.1\,\mathrm{m}, \qquad (12)$$

$$v = \frac{F}{K}, \qquad (13)$$

where $F$ is the loading force, and $K$ is the bending stiffness of the frame structure. To compare the optimization results of Optimus with others, we defined a penalty function by combining m and v as follows:

$$m = \begin{cases} m & \text{if } v \leq 0.1 \\ 100 \cdot m & \text{otherwise.} \end{cases} \qquad (14)$$
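As a minimal sketch of Equation (14), the function below applies the penalty to the mass returned by the structural analysis; `mass_and_displacement` is a hypothetical placeholder for the Karamba evaluation, not the actual Optimus interface.

```python
def penalized_fitness(x, mass_and_displacement):
    """Evaluate Equation (14): return the mass, multiplied by 100
    when the displacement constraint v <= 0.1 m is violated."""
    m, v = mass_and_displacement(x)  # structural analysis (e.g., Karamba)
    return m if v <= 0.1 else 100.0 * m
```

Under this formulation, every infeasible alternative is penalized a hundredfold, so any feasible design dominates it during selection.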
To sum up, for safety reasons, the final design should have at most 0.1 m displacement. For this reason, a displacement result smaller than 0.1 m indicates a feasible solution. In addition, to minimize construction cost, the objective is defined as the minimization of the mass. Therefore, the final design should have the smallest amount of steel usage. The higher the mass, the lower the displacement, and the other way around.

4.2.1. Experimental Setup and Evaluation Criteria

The design optimization problem ran on a computer with an Intel Core i7-6500U CPU @ 2.50 GHz and 16 GB RAM. The number of dimensions D was taken as 70, whereas the population size NP was taken as 50. As with the CEC 2005 benchmark problems, the termination criterion was set to 30 min for each run. For the instance on hand, only one replication was carried out for each optimization tool. To evaluate the performance of the different optimization tools, we report f(x)_min, which is the minimum fitness value of the design problem, and g(x), which is the constraint value of the minimum fitness. The maximum number of fitness evaluations (FES) within 30 min for each tool is also recorded, which indicates how many times the fitness is evaluated during the optimization process. This problem definition is also available in the supplementary materials as a GH file to contribute to the evaluation of further architectural design optimization tools.
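The run-time-based evaluation protocol described above can be sketched as follows; the `optimizer.step()` interface and all names are our assumptions for illustration, not the actual Optimus API.

```python
import time

def run_with_time_budget(optimizer, budget_seconds=30 * 60):
    """Run an optimizer until the wall-clock budget expires and report
    f(x)_min, its constraint value g(x), and the FES count."""
    start = time.time()
    fes = 0
    best_f, best_g = float("inf"), None
    while time.time() - start < budget_seconds:
        f, g = optimizer.step()  # one fitness evaluation (assumed interface)
        fes += 1
        if f < best_f:
            best_f, best_g = f, g
    return best_f, best_g, fes
```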
4.2.2. Design Results
After a 30 min run for each tool in GH, an overview of the optimization results is given in Table 4. The convergence graphs of Optimus, Opossum, SilverEye and Galapagos are also given in Figure 13. In addition, the final designs proposed by each algorithm are illustrated in Figure 14. Based on these results, Optimus and Opossum found feasible solutions, whereas SilverEye and Galapagos discovered infeasible alternatives. Looking at the feasible solutions, there is a significant difference between jEDE and RbfOpt: Optimus found a smaller mass amount than Opossum. During the optimization process, we also observed that Optimus evaluated the fitness function more times than the other tools. From the point of view of the proposed design alternatives, Galapagos and SilverEye found bigger frame structures with smaller profile dimensions. This causes an increment in both displacement and mass. On the other hand, Opossum presented a similar size to Galapagos and SilverEye, but bigger dimension sizes for the profile. This suggests a smaller amount of displacement. However, the final solution has more than 22 tons of steel. From the point of view of Optimus, the final design alternative presents the smallest frame structure with small profile sizes. This gives not only an advantage on displacement, but also provides a design alternative with a smaller amount of steel compared with the other algorithms. A parametric model with 70 decision variables is a challenging optimization problem in the domain of architecture. From this perspective, the impact of the self-adaptive parameter update and the ensemble of mutation strategies can be seen for the architectural design problem.
Table 4. Comparison of Optimus, Opossum, Silvereye, Galapagos (D = 70, NP = 50, termination: 30 min).

                      Optimus jEDE      Opossum RBFOpt    SilverEye PSO     Galapagos GA
Design   f(x)_min     6.21637 × 10^3    2.25410 × 10^4    6.89062 × 10^5    6.74560 × 10^5
problem  g(x)         0.0994            0.0996            0.6674            0.9273
         FES          31,800            5102              17,100            19,000
Figure 13. Convergence graph of architectural design problem.

Figure 14. Optimized results of design problem.
Results presented in Table 4 are provided in the supplementary materials, containing the minimum fitness values and their corresponding chromosomes. The best result discovered by Optimus is illustrated in Figure 15.

Figure 15. Illustration of the best alternative (Optimus) result for design optimization problem.
5. Discussion

In this paper, we presented a new optimization tool called Optimus, which implements the jEDE algorithm, for GH. We tested the performance of Optimus using 20 benchmark problems and a design optimization problem against GA, PSO, and RBFOpt. Experimental results show that jEDE outperforms these algorithms by finding better fitness values. There are several reasons to explain this fact. First, Optimus uses a self-adaptive approach for producing control parameters. This gives an advantage to adapt the algorithm itself according to the nature of the problem. Secondly, Optimus presented significantly smaller fitness values than the other tools. This is related to the ensemble of mutation strategies. While the other algorithms are based on only one mutation operator, Optimus uses three mutation operators. During the search for a better solution, using more than one mutation strategy enlarges the search space. This increases the ability of finding near-optimal results. Thirdly, Optimus does not update number sliders, which correspond to the parameters, for each iteration. Instead, the generated populations are directly connected to the geometry. Therefore, the required computation time for one generation is less than that of the other algorithms. Finally, the optimal values of some problems are outside of the initialization range (e.g., F7). One may argue that this situation can be faced in architectural design problems, as well. Generating chromosome values outside of the initialization range is possible in Optimus, whereas it is not possible in the other tools.
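As a compact illustration of the first two mechanisms discussed above, the sketch below combines jDE-style self-adaptive F and CR values [42] with a randomly selected mutation strategy per individual. It is a minimal NumPy reading of the jEDE idea under our own assumptions (the specific strategies shown are rand/1, best/1, and current-to-best/1), not the Optimus source code.

```python
import numpy as np

def jede_step(pop, fit, F, CR, fobj, rng, bounds):
    """One generation of a self-adaptive DE with an ensemble of three
    mutation strategies (a sketch of the jEDE idea)."""
    NP, D = pop.shape
    lo, hi = bounds
    best = pop[np.argmin(fit)]
    for i in range(NP):
        # jDE-style self-adaptation: occasionally resample F and CR
        Fi = 0.1 + 0.9 * rng.random() if rng.random() < 0.1 else F[i]
        CRi = rng.random() if rng.random() < 0.1 else CR[i]
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        k = rng.integers(3)  # pick one of three mutation strategies
        if k == 0:    # DE/rand/1
            v = pop[r1] + Fi * (pop[r2] - pop[r3])
        elif k == 1:  # DE/best/1
            v = best + Fi * (pop[r1] - pop[r2])
        else:         # DE/current-to-best/1
            v = pop[i] + Fi * (best - pop[i]) + Fi * (pop[r1] - pop[r2])
        # binomial crossover with one guaranteed dimension from the mutant
        mask = rng.random(D) < CRi
        mask[rng.integers(D)] = True
        u = np.where(mask, v, pop[i])
        # for problems without hard bounds (e.g., F7), skip this clipping
        u = np.clip(u, lo, hi)
        fu = fobj(u)
        if fu <= fit[i]:  # greedy selection keeps successful F/CR values
            pop[i], fit[i], F[i], CR[i] = u, fu, Fi, CRi
    return pop, fit, F, CR
```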
The RBFOpt algorithm was recently compared with metaheuristics using several building optimization problems in [12–14,32]. According to the results, RBFOpt found better solutions than the metaheuristics. In these studies, the termination criterion was defined as the number of iterations during the evaluation of the different algorithms. This gives an advantage to RBFOpt, due to its ability to discover desirable solutions with a small number of function evaluations. However, metaheuristics require more function evaluations to find optimal solutions. In this study, we defined the termination criterion as a run time of 30 min to make a fair comparison. The results show that RBFOpt requires more computation time to execute one function evaluation than metaheuristics. For this reason, when comparing these algorithms, run time, rather than the number of iterations and/or function evaluations, should be considered as the termination criterion.
Fitness functions based on simulations (e.g., energy and daylight) may require a huge amount of time for convergence during the optimization. From the point of view of metaheuristics, the usage of surrogate models [63] can be a solution to overcome this drawback, because surrogate models require a smaller number of function evaluations while approximating the fitness. In this way, researchers can consider many replications in a short run time.
Furthermore, handling constraints is another important topic for architectural design optimization. Most real-world problems involve design constraints that may restrict the design alternatives during the optimization process. There are many constraint handling methods that can be practically integrated into optimization algorithms [64]. In the reviewed literature, only the Nelder–Mead optimization tool provides a constraint handling method for design optimization problems in GH. To make a fair comparison with Galapagos, SilverEye, and Opossum, we considered a penalty function as the constraint handling method to find feasible solutions in the design optimization problem. However, there are extremely hard constraints that must be tackled in real-world problems, e.g., equality constraints. Currently, this is not available in GH.
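For reference, a widely used example of such a method, the feasibility rule of Deb [67] (also cited in the conclusions), can be sketched as a pairwise comparison. This is an illustration of the cited technique, not something Optimus currently implements; the `(fitness, violation)` encoding is our assumption.

```python
def better(a, b):
    """Deb's feasibility rule [67], sketched for a single constraint:
    each solution is a (fitness, violation) pair, where for this problem
    violation could be max(0, v - 0.1). Returns True if a is preferred."""
    fa, ga = a
    fb, gb = b
    if ga == 0 and gb == 0:
        return fa <= fb   # both feasible: compare fitness
    if ga == 0 or gb == 0:
        return ga == 0    # a feasible solution dominates an infeasible one
    return ga <= gb       # both infeasible: smaller violation wins
```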
In addition to the advantages of Optimus mentioned above, there are some limitations as well. Firstly, even though the modular approach provides flexibility when we want to use different algorithms, it requires more plug-in items to generate each step of the optimization. This affects the practical usage of Optimus. Another limitation is the evaluation strategy of the fitness function. In other words, Optimus creates an NP-sized list, where each element in the list has D dimensions. All these parameters (of size NP × D) are sent to the objective function for fitness evaluation. Then, we obtain NP fitness results simultaneously. In the other tools, each parameter set is sent to the fitness function in sequence, and then the fitness function is evaluated. This fact gives an advantage to Optimus in terms of computation time, but it may not be suitable for some architectural design problems.
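The difference between the two evaluation strategies can be sketched as follows, with `fobj_vectorized` and `fobj` as hypothetical stand-ins for a GH fitness definition.

```python
import numpy as np

# Population-wise evaluation, as in Optimus: the whole NP x D matrix
# is handed to the objective at once, yielding NP fitness values.
def evaluate_batch(pop, fobj_vectorized):
    return fobj_vectorized(pop)

# Sequential evaluation, as in the other tools: one parameter set at a time.
def evaluate_sequential(pop, fobj):
    return np.array([fobj(x) for x in pop])
```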

6. Conclusions
In conclusion, this paper presented a new optimization plug-in, called Optimus, for Grasshopper algorithmic modelling (GH) in the Rhinoceros CAD program. A self-adaptive differential evolution algorithm with an ensemble of mutation strategies was implemented in Optimus for architectural design optimization problems. To test the algorithm performance, an experimental design was made using standard test problems in the literature, some of the test problems proposed in IEEE CEC 2005, and an architectural design problem. In general, Optimus outperformed the other optimization tools in GH. This paper showed that an algorithm that presents a good performance in solving real-parameter benchmark functions can also find more desirable solutions in solving architectural design problems.
As future work, Optimus will be further improved by implementing different types of metaheuristic algorithms, in line with the NFLT. These algorithms can be variants of PSO, GA, HS, and DE. Moreover, Optimus can be updated for constrained optimization problems using the near-feasibility threshold [65,66], Superiority of Feasibility [67], and Epsilon constraint [68] methods. In addition, an ensemble of constraint handling techniques [69] can be used in Optimus, which may play a crucial role in architectural design problems. Finally, Optimus can be simply extended for multi-objective optimization problems due to its modular system.
Supplementary Materials: The supplementary materials are available online at http://www.mdpi.com/1999-4893/12/7/141/s1.
Author Contributions: Conceptualization, C.C. and B.E.; Investigation, C.C. and B.E.; Methodology, C.C., B.E.,
M.F.T. and S.S.; Project administration, M.F.T. and S.S.; Software, C.C., B.E. and M.F.T.; Supervision, M.F.T. and
S.S.; Validation, C.C. and B.E.; Writing–original draft, C.C. and B.E.; Writing–review & editing, C.C. and B.E.
Funding: M.F.T. was partially funded by the National Natural Science Foundation of China (Grant No. 51435009).
Acknowledgments: The authors would like to thank Michela Turrin and Pirouz Nourian for their useful feedbacks
at the Chair of Design Informatics of the Faculty of Architecture and the Built Environment in TU Delft. M.F.T.
acknowledges the HUST Project in Wuhan in China.
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. Benchmark Functions. For the shifted functions F1–F10, $z = x - o$, where $o$ is the shifted global optimum; for the shifted rotated variants, $z = (x - o)M$ with a linear transformation matrix $M$, following [44].

Fsph: Sphere Function
$F_{sph}(x) = \sum_{i=1}^{D} x_i^2$
$F_{sph}(x^*) = F_{sph}(0, \ldots, 0) = 0$, $-100 \le x_i \le 100$

Fros: Rosenbrock's Function
$F_{ros}(x) = \sum_{i=1}^{D-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + (1 - x_i)^2 \right]$
$F_{ros}(x^*) = F_{ros}(1, \ldots, 1) = 0$, $-100 \le x_i \le 100$

Fack: Ackley's Function
$F_{ack}(x) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2} \right) - \exp\left( \tfrac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i) \right) + 20 + e$
$F_{ack}(x^*) = F_{ack}(0, \ldots, 0) = 0$, $-32 \le x_i \le 32$

Fgrw: Griewank's Function
$F_{grw}(x) = \sum_{i=1}^{D} \frac{x_i^2}{4000} - \prod_{i=1}^{D} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$
$F_{grw}(x^*) = F_{grw}(0, \ldots, 0) = 0$, $-600 \le x_i \le 600$

Fras: Rastrigin's Function
$F_{ras}(x) = \sum_{i=1}^{D} \left( x_i^2 - 10 \cos(2\pi x_i) + 10 \right)$
$F_{ras}(x^*) = F_{ras}(0, \ldots, 0) = 0$, $-5 \le x_i \le 5$

Fsch: Generalized Schwefel's Problem 2.26
$F_{sch}(x) = 418.9829 \, D - \sum_{i=1}^{D} x_i \sin\left( \sqrt{|x_i|} \right)$
$F_{sch}(x^*) = F_{sch}(420.9687, \ldots, 420.9687) = 0$, $-500 \le x_i \le 500$

Fsal: Salomon's Function
$F_{sal}(x) = -\cos\left( 2\pi \sqrt{\sum_{i=1}^{D} x_i^2} \right) + 0.1 \sqrt{\sum_{i=1}^{D} x_i^2} + 1$
$F_{sal}(x^*) = F_{sal}(0, \ldots, 0) = 0$, $-100 \le x_i \le 100$

Fwht: Whitley's Function
$F_{wht}(x) = \sum_{j=1}^{D} \sum_{i=1}^{D} \left( \frac{y_{ij}^2}{4000} - \cos\left( y_{ij} \right) + 1 \right)$, where $y_{ij} = 100 \left( x_j - x_i^2 \right)^2 + (1 - x_i)^2$
$F_{wht}(x^*) = F_{wht}(1, \ldots, 1) = 0$, $-100 \le x_i \le 100$

Fpn1: Generalized Penalized Function 1
$F_{pn1}(x) = \frac{\pi}{D} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_D - 1)^2 \right\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$
where $y_i = 1 + \frac{1}{4}(x_i + 1)$ and
$u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & x_i > a \\ 0 & -a \le x_i \le a \\ k (-x_i - a)^m & x_i < -a \end{cases}$
$F_{pn1}(x^*) = F_{pn1}(-1, \ldots, -1) = 0$, $-50 \le x_i \le 50$

Fpn2: Generalized Penalized Function 2
$F_{pn2}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{D-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_D - 1)^2 \left[ 1 + \sin^2(2\pi x_D) \right] \right\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$
$F_{pn2}(x^*) = F_{pn2}(1, \ldots, 1) = 0$, $-50 \le x_i \le 50$

F1: Shifted Sphere Function
$F_1(x) = \sum_{i=1}^{D} z_i^2 + f\_bias_1$
$x \in [-100, 100]^D$, $F_1(x^*) = f\_bias_1 = -450$

F2: Shifted Schwefel's Problem 1.2
$F_2(x) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} z_j \right)^2 + f\_bias_2$
$x \in [-100, 100]^D$, $F_2(x^*) = f\_bias_2 = -450$

F3: Shifted Rotated High Conditioned Elliptic Function
$F_3(x) = \sum_{i=1}^{D} \left( 10^6 \right)^{\frac{i-1}{D-1}} z_i^2 + f\_bias_3$
$x \in [-100, 100]^D$, $F_3(x^*) = f\_bias_3 = -450$

F4: Shifted Schwefel's Problem 1.2 with Noise in Fitness
$F_4(x) = \left[ \sum_{i=1}^{D} \left( \sum_{j=1}^{i} z_j \right)^2 \right] \times \left( 1 + 0.4 \, |N(0, 1)| \right) + f\_bias_4$
$x \in [-100, 100]^D$, $F_4(x^*) = f\_bias_4 = -450$

F5: Schwefel's Problem 2.6 with Global Optimum on Bounds
$F_5(x) = \max_i \{ |A_i x - B_i| \} + f\_bias_5$, $i = 1, 2, \ldots, D$
$x \in [-100, 100]^D$, $F_5(x^*) = f\_bias_5 = -310$

F6: Shifted Rosenbrock's Function
$F_6(x) = \sum_{i=1}^{D-1} \left[ 100 \left( z_i^2 - z_{i+1} \right)^2 + (z_i - 1)^2 \right] + f\_bias_6$
$x \in [-100, 100]^D$, $F_6(x^*) = f\_bias_6 = 390$

F7: Shifted Rotated Griewank's Function without Bounds
$F_7(x) = \sum_{i=1}^{D} \frac{z_i^2}{4000} - \prod_{i=1}^{D} \cos\left( \frac{z_i}{\sqrt{i}} \right) + 1 + f\_bias_7$
Initialize the population in $[0, 600]^D$; the global optimum is outside of the initialization range. $F_7(x^*) = f\_bias_7 = -180$

F8: Shifted Rotated Ackley's Function with Global Optimum on Bounds
$F_8(x) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} z_i^2} \right) - \exp\left( \tfrac{1}{D} \sum_{i=1}^{D} \cos(2\pi z_i) \right) + 20 + e + f\_bias_8$
$x \in [-32, 32]^D$, $F_8(x^*) = f\_bias_8 = -140$

F9: Shifted Rastrigin's Function
$F_9(x) = \sum_{i=1}^{D} \left( z_i^2 - 10 \cos(2\pi z_i) + 10 \right) + f\_bias_9$
$x \in [-5, 5]^D$, $F_9(x^*) = f\_bias_9 = -330$

F10: Shifted Rotated Rastrigin's Function
$F_{10}(x) = \sum_{i=1}^{D} \left( z_i^2 - 10 \cos(2\pi z_i) + 10 \right) + f\_bias_{10}$
$x \in [-5, 5]^D$, $F_{10}(x^*) = f\_bias_{10} = -330$

References
1. Sariyildiz, S. Performative Computational Design, Keynote Speech. In Proceedings of the ICONARCH-I:
International Congress of Architecture-I, Konya, Turkey, 15–17 November 2012.
Algorithms 2019, 12, 141 25 of 27

2. Ekici, B.; Cubukcuoglu, C.; Turrin, M.; Sariyildiz, I.S. Performative computational architecture using swarm
and evolutionary optimisation: A review. Build. Environ. 2018, 147, 356–371. [CrossRef]
3. Michalewicz, Z.; Fogel, D.B. How to Solve it: Modern Heuristics; Springer Science & Business Media: New York,
NY, USA, 2013.
4. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation
2001, 76, 60–68. [CrossRef]
5. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the MHS’95.
Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995;
pp. 39–43.
6. Storn, R. On the Usage of Differential Evolution for Function Optimization. In Proceedings of the North
American Fuzzy Information Processing, Berkeley, CA, USA, 19–22 June 1996; pp. 519–523.
7. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over
continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [CrossRef]
8. Goldberg, D.E.; Holland, J.H. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99. [CrossRef]
9. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman
problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [CrossRef]
10. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
[CrossRef] [PubMed]
11. Coello, C.A.C.; Lamont, G.B.; van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective
Problems; Springer: Berlin/Heidelberg, Germany, 2007; Volume 5.
12. Wortmann, T. Genetic Evolution vs. Function Approximation: Benchmarking Algorithms for Architectural
Design Optimization. J. Comput. Des. Eng. 2018, 6, 414–428. [CrossRef]
13. Wortmann, T.; Waibel, C.; Nannicini, G.; Evins, R.; Schroepfer, T.; Carmeliet, J. Are Genetic Algorithms Really
the Best Choice for Building Energy Optimization? In Proceedings of the Symposium on Simulation for
Architecture and Urban Design, Toronto, CA, Canada, 22–24 May 2017; p. 6.
14. Waibel, C.; Wortmann, T.; Evins, R.; Carmeliet, J. Building energy optimization: An extensive benchmark of
global search algorithms. Energy Build. 2019, 187, 218–240. [CrossRef]
15. Cichocka, J.M.; Migalska, A.; Browne, W.N.; Rodriguez, E. SILVEREYE—The Implementation of Particle
Swarm Optimization Algorithm in a Design Optimization Tool. In Proceedings of the International Conference
on Computer-Aided Architectural Design Futures, Istanbul, Turkey, 10–14 July 2017; pp. 151–169.
16. Rutten, D. Galapagos: On the logic and limitations of generic solvers. Archit. Des. 2013, 83, 132–135. [CrossRef]
17. Camporeale, P.E.; del Moyano, P.M.M.; Czajkowski, J.D. Multi-objective optimisation model: A housing
block retrofit in Seville. Energy Build. 2017, 153, 476–484. [CrossRef]
18. Calcerano, F.; Martinelli, L. Numerical optimisation through dynamic simulation of the position of trees
around a stand-alone building to reduce cooling energy consumption. Energy Build. 2016, 112, 234–243.
[CrossRef]
19. Anton, I.; Tãnase, D. Informed geometries. Parametric modelling and energy analysis in early stages of
design. Energy Procedia 2016, 85, 9–16. [CrossRef]
20. Tabadkani, A.; Banihashemi, S.; Hosseini, M.R. Daylighting and visual comfort of oriental sun responsive
skins: A parametric analysis. Build. Simul. 2018, 11, 663–676. [CrossRef]
21. Lee, K.; Han, K.; Lee, J. Feasibility study on parametric optimization of daylighting in building shading
design. Sustainability 2016, 8, 1220. [CrossRef]
22. Fathy, F.; Sabry, H.; Faggal, A.A. External Versus Internal Solar Screen: Simulation Analysis for Optimal
Daylighting and Energy Savings in an Office Space. In Proceedings of the PLEA, Edinburgh, UK, 16 August 2017.
23. Lavin, C.; Fiorito, F. Optimization of an external perforated screen for improved daylighting and thermal
performance of an office space. Procedia Eng. 2017, 180, 571–581. [CrossRef]
24. Heidenreich, C.; Ruth, J. Parametric optimization of lightweight structures. In Proceedings of the 11th World Congress on Computational Mechanics, Barcelona, Spain, 21–25 July 2014.
25. Eisenbach, P.; Grohmann, M.; Rumpf, M.; Hauser, S. Seamless Rigid Connections of Thin Concrete Shells—A
Novel Stop-End Construction Technique for Prefab Elements. In Proceedings of the IASS Annual Symposia,
Amsterdam, The Netherlands, 17–20 August 2015; Volume 2015, pp. 1–12.
26. Almaraz, A. Evolutionary Optimization of Parametric Structures: Understanding Structure and Architecture
as a Whole from Early Design Stages. Master’s Thesis, University of Coruna, La Coruña, Spain, 2015.
Algorithms 2019, 12, 141 26 of 27

27. Simon. Goat. 2013. Available online: https://www.food4rhino.com/app/goat (accessed on 10 July 2019).
28. Johnson, S.G. The Nlopt Nonlinear-Optimization Package. Available online: https://nlopt.readthedocs.io/en/
latest/ (accessed on 10 July 2019).
29. Ilunga, G.; Leitão, A. Derivative-free Methods for Structural Optimization. In Proceedings of the 36th
eCAADe Conference, Lodz, Poland, 19–21 September 2018.
30. Austern, G.; Capeluto, I.G.; Grobman, Y.J. Rationalization and Optimization of Concrete Façade Panels.
In Proceedings of the 36th eCAADe Conference, Lodz, Poland, 19–21 September 2018.
31. Delmas, A.; Donn, M.; Grosdemouge, V.; Musy, M.; Garde, F. Towards Context & Climate Sensitive Urban
Design: An Integrated Simulation and Parametric Design Approach. In Proceedings of the 4th International
Conference On Building Energy & Environment 2018 (COBEE2018), Melbourne, Australia, 5–9 February 2018.
32. Wortmann, T. Opossum: Introducing and Evaluating a Model-based Optimization Tool for Grasshopper.
In Proceedings of the CAADRIA 2017, Hong Kong, China, 5–8 July 2017.
33. Costa, A.; Nannicini, G. RBFOpt: An open-source library for black-box optimization with costly function
evaluations. Math. Program. Comput. 2018, 10, 597–629. [CrossRef]
34. Wortmann, T. Model-based Optimization for Architectural Design: Optimizing Daylight and Glare in
Grasshopper. Technol. Archit. Des. 2017, 1, 176–185. [CrossRef]
35. Greco, L. Dodo. 2015. Available online: https://www.food4rhino.com/app/dodo (accessed on 10 July 2019).
36. Eckersley O’Callaghan’s Digital Design Group. 2013. Nelder-Mead Optimization. Available online: https:
//www.food4rhino.com/app/nelder-mead-optimisation-eoc (accessed on 10 July 2019).
37. Lagarias, J.C.; Reeds, J.A.; Wright, M.H.; Wright, P.E. Convergence properties of the Nelder–Mead simplex
method in low dimensions. SIAM J. Optim. 1998, 9, 112–147. [CrossRef]
38. Wrenn, G.A. An Indirect Method for Numerical Optimization Using the Kreisselmeir-Steinhauser Function; NASA:
Washington, DC, USA, 1989.
39. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1,
67–82. [CrossRef]
40. Attia, S.; Hamdy, M.; O’Brien, W.; Carlucci, S. Assessing gaps and needs for integrating building performance
optimization tools in net zero energy buildings design. Energy Build. 2013, 60, 110–124. [CrossRef]
41. Robert McNeel & Associates. Rhinoceros 3D. NURBS Modelling. 2015. Available online: https://www.
rhino3d.com/ (accessed on 10 July 2019).
42. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential
evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10,
646–657. [CrossRef]
43. Mallipeddi, R.; Suganthan, P.N.; Pan, Q.-K.; Tasgetiren, M.F. Differential evolution algorithm with ensemble
of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1679–1696. [CrossRef]
44. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.-P.; Auger, A.; Tiwari, S. Problem definitions and
evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005,
2005005, 2005.
45. Blackwell, T.M.; Kennedy, J.; Poli, R. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57.
46. Sengupta, S.; Basak, S.; Peters, R. Particle Swarm Optimization: A survey of historical and recent developments
with hybridization perspectives. Mach. Learn. Knowl. Extr. 2019, 1, 157–191. [CrossRef]
47. Mirjalili, S. Genetic Algorithm. In Evolutionary Algorithms and Neural Networks; Springer: New York, NY,
USA, 2019; pp. 43–55.
48. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT press:
Cambridge, MA, USA, 1992; Volume 1.
49. Knowles, J.; Corne, D. The Pareto Archived Evolution Strategy: A New Baseline Algorithm for Pareto
Multiobjective Optimisation. In Proceedings of the Congress on Evolutionary Computation (CEC99),
Washington, DC, USA, 6–9 July 1999; Volume 1, pp. 98–105.
50. Pan, Q.-K.; Tasgetiren, M.F.; Liang, Y.-C. A discrete differential evolution algorithm for the permutation
flowshop scheduling problem. Comput. Ind. Eng. 2008, 55, 795–816. [CrossRef]
51. Venu, M.K.; Mallipeddi, R.; Suganthan, P.N. Fiber Bragg grating sensor array interrogation using differential
evolution. Optoelectron. Adv. Mater. Commun. 2008, 2, 682–685.
52. Varadarajan, M.; Swarup, K.S. Differential evolution approach for optimal reactive power dispatch.
Appl. Soft Comput. 2008, 8, 1549–1561. [CrossRef]
Algorithms 2019, 12, 141 27 of 27

53. Das, S.; Konar, A. Automatic image pixel clustering with an improved differential evolution. Appl. Soft Comput.
2009, 9, 226–236. [CrossRef]
54. Chatzikonstantinou, I.; Ekici, B.; Sariyildiz, I.S.; Koyunbaba, B.K. Multi-Objective Diagrid Façade Optimization
Using Differential Evolution. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC),
Sendai, Japan, 25–28 May 2015; pp. 2311–2318.
55. Cubukcuoglu, C.; Chatzikonstantinou, I.; Tasgetiren, M.F.; Sariyildiz, I.S.; Pan, Q.-K. A multi-objective
harmony search algorithm for sustainable design of floating settlements. Algorithms 2016, 9, 51. [CrossRef]
56. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput.
2011, 15, 4–31. [CrossRef]
57. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—an updated survey.
Swarm Evol. Comput. 2016, 27, 1–30. [CrossRef]
58. Tasgetiren, M.F.; Suganthan, P.N.; Pan, Q.-K.; Mallipeddi, R.; Sarman, S. An Ensemble of Differential Evolution
Algorithms for Constrained Function Optimization. In Proceedings of the IEEE congress on evolutionary
computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 1–8.
59. Hansen, N.; Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput.
2001, 9, 159–195. [CrossRef] [PubMed]
60. Chatzikonstantinou, I. HoopSnake. 2012. Available online: https://www.food4rhino.com/app/hoopsnake
(accessed on 10 July 2019).
61. Miller, N. LunchBox. 2012. Available online: https://www.food4rhino.com/app/lunchbox (accessed on 10 July 2019).
62. Preisinger, C.; Heimrath, M. Karamba—A toolkit for parametric structural design. Struct. Eng. Int. 2014, 24,
217–221. [CrossRef]
63. Shan, S.; Wang, G.G. Survey of modeling and optimization strategies to solve high-dimensional design
problems with computationally-expensive black-box functions. Struct. Multidiscip. Optim. 2010, 41, 219–241.
[CrossRef]
64. Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms:
A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287. [CrossRef]
65. Tasgetiren, M.F.; Suganthan, P.N. A Multi-Populated Differential Evolution Algorithm for Solving Constrained
Optimization Problem. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation,
Vancouver, BC, Canada, 16–21 July 2006; pp. 33–40.
66. Coit, D.W.; Smith, A.E. Penalty guided genetic search for reliability design optimization. Comput. Ind. Eng.
1996, 30, 895–904. [CrossRef]
67. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng.
2000, 186, 311–338. [CrossRef]
68. Haimes, Y.V. On a bicriterion formulation of the problems of integrated system identification and system
optimization. IEEE Trans. Syst. Man. Cybern. 1971, 1, 296–297.
69. Mallipeddi, R.; Suganthan, P.N. Ensemble of constraint handling techniques. IEEE Trans. Evol. Comput. 2010,
14, 561–579. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
