
Information Sciences 286 (2014) 125–146

journal homepage: www.elsevier.com/locate/ins

Multiobjective optimization based on reputation


Siwei Jiang a,b,*, Jie Zhang a, Yew-Soon Ong a
a School of Computer Engineering, Nanyang Technological University, Singapore
b Singapore Institute of Manufacturing Technology (SIMTech), Singapore

Article history: Received 21 October 2013; Received in revised form 11 July 2014; Accepted 19 July 2014; Available online 29 July 2014

Keywords: Multiobjective optimization; Multiobjective evolutionary algorithms; Adaptation; Reputation; jMetal

Abstract

To improve the robustness and ease-of-use of Evolutionary Algorithms (EAs), adaptation of evolutionary operators and control parameters shows significant advantages over fixed operators with default parameter settings. To date, many successful research efforts on adaptive EAs have been devoted to Single-objective Optimization Problems (SOPs), whereas few studies have been conducted on Multiobjective Optimization Problems (MOPs). Directly inheriting the adaptation mechanisms of SOPs in the MOPs context faces challenges due to the intrinsic differences between these two kinds of problems. To fill this gap, in this paper a novel Multiobjective Evolutionary Algorithm (MOEA) based on reputation is proposed as a unified framework for general MOEAs. The reputation concept is introduced for the first time to measure the dynamic competency of evolutionary operators and control parameters across problems and stages of the search in MOEAs. Based on the notion of reputation, individual solutions then select highly reputable evolutionary operators and control parameters. Experimental studies on 58 benchmark MOPs in jMetal confirm its superior performance over the classical MOEAs and other adaptive MOEAs.

© 2014 Elsevier Inc. All rights reserved.

1. Introduction

Many real-world problems can be formulated as Multiobjective Optimization Problems (MOPs), which involve several
conflicting objectives to be optimized simultaneously [15,61]. Multiobjective Evolutionary Algorithms (MOEAs) have been
well established as efficient approaches to solve diverse MOPs by finding a set of non-dominated solutions in a single run
[15,61,16,63,58,60,20,10,13,44]. Based on various acceptance rules for candidate solutions, the classical MOEAs can be gen-
erally categorized into Pareto dominance-based approaches (e.g., NSGAII [16], SPEA2 [63]) or Decomposition-based
approaches (e.g., MOEA/D [58]).
Nevertheless, due to the inherent properties of Evolutionary Algorithms (EAs), the performance of MOEAs is often greatly
affected by the type of evolutionary operators chosen (e.g., crossover and mutation) and the configuration of user-specific con-
trol parameters. Specifically, in classical MOEAs, a plethora of evolutionary operators have been proposed to promote conver-
gence and diversity in searching non-dominated solutions, such as Simulated Binary Crossover (SBX), polynomial mutation
[16] and strategies in Differential Evolution (DE) [48]. According to [32,2,33,43,28,56,52,50,26], various operators with user-
specific control parameters show significantly different competency in diverse MOPs. The competency of different evolution-
ary operators has been individually investigated in [32,2,33,43]. For instance, SBX operator does not possess the property of
rotational invariance since the correlation between the location of parents is lost under a rotation of the decision space. Thus,

* Corresponding author at: School of Computer Engineering, Nanyang Technological University, Singapore. Tel.: +65 9736 9610.
E-mail addresses: [email protected] (S. Jiang), [email protected] (J. Zhang), [email protected] (Y.-S. Ong).

http://dx.doi.org/10.1016/j.ins.2014.07.020
0020-0255/© 2014 Elsevier Inc. All rights reserved.

SBX is generally preferable for dealing with MOPs with independent variables [32]. On the contrary, polynomial mutation is
suitable for handling MOPs with nonlinearly dependent variables, which has been demonstrated in [2,33]. By analyzing the
properties of DE strategies, ‘‘DE/rand/2/bin’’ has better perturbation than ‘‘DE/rand/1/bin’’ due to the two-difference-vectors-
based operator [43]. In comparison to ‘‘DE/rand/1/bin’’ and ‘‘DE/rand/2/bin’’, ‘‘DE/current-to-rand/1/bin’’ being a rotation-
invariant operator, is generally more effective for solving various MOPs [43]. On the other hand, besides the evolutionary
operators, control parameters also play a critical role in optimization. Even for the same operator, the control parameters can induce different search biases. For instance, a large η_c in SBX or η_m in polynomial mutation tends to generate offspring that are near their parents, which is beneficial for promoting solution convergence in MOEAs [16,26]. According to [33], in the later stage of MOEAs, larger CR values in DE operators are better for finding diverse solutions, and smaller F values in DE operators are more suitable for finding solutions in local regions.
However, in the absence of prior knowledge and optimization experience, it is nearly impossible to identify an appropri-
ate setting for MOEAs without resorting to a trial-and-error procedure. Therefore, the aim of the present study is to enhance the robustness of MOEAs on different problems, and to improve the ease-of-use of MOEAs by reducing the user effort required to appropriately predefine parameters. Below, we briefly discuss some of the representative related works
on evolutionary operator selection and control parameter configuration, and then clearly point out their advantages and
shortcomings that inspire and motivate the proposal of our approach.
To address the concerned challenging issue, many approaches have been attempted in the past 30 years to automate the
choice of evolutionary operators [24,37,19,36,11,9], adaptation of control parameters [25,45,47,17,54,6,22,46,23,51] and
configuration of local search operators [40,39,12,41,8,38,7] on various problems, such as single-objective optimization prob-
lems, combinatorial optimization problems, etc. For surveys on some successful research efforts of adaptive EAs, the reader is
referred to [27,21,42]. However, many of the adaptive EAs have been established for solving Single-objective Optimization
Problems (SOPs). Relatively few works have been conducted in the context of Multiobjective Optimization Problems (MOPs)
[3,49,28,31,56,52]. The reason being that directly inheriting the adaptation mechanisms of SOPs in MOPs faces many chal-
lenges due to their intrinsic differences, for example, solution dominance and search improvement differ greatly between the
single and multiple objective contexts. Based on a review of the literature [24,37,19,36,11,9,25,45,47,17,54,6,22,46,23,51,40,
39,12,41,8,38,7], studies on adaptive EAs have focused on two core factors:

• Credit assignment: Learning the competency of various operators and parameters, and then biasing the choice in the form of rewards.
• Operator selection and parameter adaptation: Adaptation of the choice of operators and parameters as a form of algorithm self-configuration while the search progresses.

The credit assignment in SOPs is often defined in terms of the quality of fitness improvement, including average fitness
improvement and maximum fitness improvement [17,24,37]. In MOPs, since fitness improvement is more complex (i.e., due
to conflicting objectives), credit assignment is on the other hand derived from both the quantity and quality of non-domi-
nated solutions generated by operators and parameters [28,56,52,33].
Some approaches have been proposed to perform appropriate operator selection and parameter adaptation for effectively
improving search capabilities of MOEAs on MOPs. For instance, parameter adaptation utilizes static rules that adapt the
parameters with a Gaussian distribution [3] or a piecewise nonlinear function along with the search (often with respect
to the generation count) [49]. Others follow an online approach and adapt the parameters based on some credit assignment
criteria [31,56]. For instance, Igel et al. [31] proposed a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for MOPs
that configures its parameters, such as the step-size, matrix mean and covariance. Based on the historical records of success-
ful parameters, JADE [56] generates new parameter values of CR (i.e., crossover probability) and F (i.e., differential weight) in
‘‘DE/current-to-pbest/bin’’ by following a Normal distribution and a Cauchy distribution, respectively. In [52], jDE is a self-
adaptive method that encodes the parameters into chromosome and evolves them along with the search. In addition to
parameter adaptation, SaDE [28] and CoDE [50] also involve operator selection. Nevertheless, as both SaDE and CoDE intro-
duce new additional parameters, whose values are predefined according to previous empirical studies, the generality of
these approaches is limited. In essence, adaptation can be viewed as a decision-making process with each individual solution opting for a configuration of operators and parameters across problems and stages of the search. The core idea behind adaptation is to improve the ease-of-use of EAs by reducing user efforts (e.g., managing control parameters).
For instance, when only one control parameter is available, it is relatively easy to understand its impact on an algorithm.
However, if more than two control parameters are considered, the response surface for algorithms’ performance becomes
more complex due to the coupling effects of parameters. According to Occam's Razor [30], a simpler explanation is often easier for the research community to accept. Thus, adaptive MOEAs such as SaDE [28] and CoDE [50], where several new parameters have been introduced, tend to bring about challenges and difficulties in using the approaches appropriately.
As highlighted in [34,55], the reputation concept is a simple yet elegant way of decision support to distinguish among
various service providers under selection. In [53,14], Yao et al. introduced the reputation concept into coevolutionary learning to promote cooperation in the iterated prisoner's dilemma (IPD) game. Reputation scores are tagged with the different strategies and are calculated based on the payoffs received in previous games; a player then selects strategies based on reputation scores in future encounters.

Inspired by these recent works [34,55,53,14], in this paper, a novel MOEA based on reputation is proposed for solving
MOPs. In contrast to earlier studies on adaptive MOEAs [3,49,31,56,52] where only parameter adaptation is considered,
our approach also involves operator selection at the same time. More specifically, the reputation concept is introduced
for the first time to measure the dynamic competency of evolutionary operators and control parameters across problems
and stages of the search in MOEAs. Although SaDE [28] and CoDE [50] also study both operator selection and parameter
adaptation in the search, it is achieved with the introduction of four newly defined parameters, which are in addition to stan-
dard parameters. Note that as a result of this, certain parameters need to be predefined appropriately. In contrast, our
approach only introduces one predefined parameter (i.e., the quantization size q in parameter adaptation). In addition, we
also adapt a large variety of operators that are commonly used in the classical MOEAs (i.e., SBX, polynomial mutation
[16] and three DE strategies [48]). Moreover, our approach is proposed as a unified framework that can be applied to both
Pareto dominance-based MOEAs and Decomposition-based MOEAs. Experimental studies on 58 benchmark MOPs in jMetal
[20] confirm its superior performance over the classical MOEAs [16,63,58] and other adaptive MOEAs [28,56,52,50].
The rest of this paper is organized as follows. In Section 2, the procedure of classical MOEAs is generalized. Section 3 then
describes MOEAs with the adaptation of operators and parameters by reputation. Experimental studies are detailed in Sec-
tion 4. Finally, the conclusions and future work are summarized in Section 5.

2. The classical MOEAs

Multiobjective Optimization Problems (MOPs) involve several conflicting objectives to be optimized simultaneously [15,61]. A minimization MOP can be stated as follows:

$$\min F(\vec{x}) = (f_1(\vec{x}), \ldots, f_m(\vec{x})), \quad \text{s.t. } G(\vec{x}) \le 0,\; H(\vec{x}) = 0,\; \vec{x} \in X, \tag{1}$$

where $\vec{x} = (x_1, \ldots, x_n)$, $X$ is the decision (variable) space, $R^m$ is the objective space, and $F: X \to R^m$ consists of $m$ real-valued objective functions with constraints $G(\vec{x}) \le 0$ and $H(\vec{x}) = 0$. The feasible solution space is $X = \prod_{j=1}^{n} [L_j, U_j]$, where $L_j$ and $U_j$ are the lower and upper bounds of $x_j$, respectively.
The challenge in MOPs is to find a Pareto Set (PS) involving non-dominated solutions that are evenly scattered along the
Pareto Front (PF). Multiobjective Evolutionary Algorithms (MOEAs) have been well established as efficient approaches to
solve various MOPs with independent and dependent PSs, and different MOPs with concave, convex, linear, discrete, discon-
tinuous, unimodal and multi-modal PFs [15,61,20].
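As a concrete instance of Eq. (1), the sketch below defines the two-objective ZDT1 problem from the benchmark suite used in Section 4, following the standard ZDT1 formulation [62]; the Python function name is ours and the example is only illustrative.

```python
import numpy as np

def zdt1(x):
    """ZDT1: a two-objective MOP with n decision variables in [0, 1].

    Its Pareto Set is x1 in [0, 1] with x2 = ... = xn = 0, and its
    Pareto Front is the convex curve f2 = 1 - sqrt(f1).
    """
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

# A solution on the Pareto Set dominates one that is not on it.
print(zdt1([0.5] + [0.0] * 29))   # ~[0.5, 0.293], on the true Pareto Front
print(zdt1([0.5] + [0.5] * 29))   # dominated: same f1, much larger f2
```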

Algorithm 1. The procedure of classical MOEAs.

In general, the first population of MOEAs is randomly generated as $X_g = \{\vec{x}_{i,g} \mid i = 1, \ldots, NP;\ g = 0\}$, where $NP$ is the population size, $g$ is the generation index, and $\vec{x}_i$ is the $i$-th individual solution in the population. By applying one or several evolutionary operators on $X_g$, MOEAs produce offspring solutions $U_g = \{\vec{u}_{i,g} \mid i = 1, \ldots, NP\}$. After that, MOEAs select the offspring that survive to the next generation ($\vec{u}_{i,g} \to X_{g+1}$) based on different acceptance rules:

1. Pareto dominance-based MOEAs. These approaches use Pareto dominance together with crowding distance or clustering methods to find evenly distributed solutions. Then $\vec{u}_{i,g} \to X_{g+1}$ if $\vec{u}_{i,g} \succeq \vec{x}\ \forall \vec{x} \in X_g \cup X_{g+1}$,1 under, for example, the crowding distance (NSGAII) [16] and the neighborhood density estimator (SPEA2) [63].

2. Decomposition-based MOEAs. These algorithms decompose the multiobjective problem into a number of scalar subproblems and optimize them simultaneously. Then $\vec{u}_{i,g} \to X_{g+1}$ if $\vec{u}_{i,g} \succeq \vec{x}\ \forall \vec{x} \in (X_g \cup X_{g+1})$ under, for example, the weighted sum, Tchebycheff and boundary intersection approaches [58].

1 "$\succeq$" means "be better than or equal to".

Algorithm 1 gives the general procedure of classical MOEAs. Although some approaches (e.g., NSGAII and SPEA2) adopt
the elite population or external archive to store non-dominated solutions, we ignore them for simplicity and summarize
the core idea. In particular, the operators and parameters in Algorithm 1 are predefined before the algorithm starts and fixed
throughout the entire search process (Line 5 in Algorithm 1). Thus, the performance of MOEAs is highly dependent on the
chosen operators and their parameters (e.g., the operator ‘‘DE/rand/1/bin’’, and parameters CR and F in the operator [48]).
To generate high-quality offspring solutions in MOEAs, our work aims to automatically select operators and adapt parameters across problems and stages of the search.
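To make this baseline concrete, a minimal sketch of the generational loop summarized by Algorithm 1 is given below; the acceptance rule is passed in as a callable so that the same skeleton covers both Pareto dominance-based and Decomposition-based variants. The function names and interfaces are illustrative assumptions, not the jMetal API.

```python
import numpy as np

def classical_moea(problem, n_var, bounds, NP, G, variation, accept, rng=None):
    """Generational loop of a classical MOEA (cf. Algorithm 1).

    variation(X, g)       -> offspring matrix U produced by fixed operators/parameters.
    accept(X, FX, U, FU)  -> (X_next, FX_next), the survivors under the chosen
                             acceptance rule (Pareto dominance- or decomposition-based).
    """
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    # Random initial population X_0 in the box [lo, hi]^n_var.
    X = lo + rng.random((NP, n_var)) * (hi - lo)
    FX = np.array([problem(x) for x in X])
    for g in range(G):
        U = variation(X, g)                    # fixed operators and parameters
        FU = np.array([problem(u) for u in U])
        X, FX = accept(X, FX, U, FU)           # survival by the acceptance rule
    return X, FX
```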

3. Proposed MOEAs based on reputation

In this section, a novel Multiobjective Evolutionary Algorithm based on reputation is proposed as a unified framework for
general MOEAs. For the sake of readability, we outline the procedure of the proposed framework in what follows.

• Reputation initializing. In addition to population initialization, the initial reputation score of operators and parameters is set to the same value,2 which offers all of them an equal chance to participate in the search process.
• Offspring reproducing. Individual solutions generate offspring by choosing search configurations (operators and parameters) based on reputation scores.3 Credit Assignment rewards the selected operators and parameters based on whether the offspring can survive to the next generation. Due to the coupling relationship between operators and parameters, our framework is divided into two portions, namely Operator Selection and Parameter Adaptation by reputation.
• Reputation updating. The reputation of operators and parameters is updated based on the aggregation of the historical rewards gained in the previous generation and the new rewards in the current generation.

In the following sections, we describe the three major components of the proposed framework, namely search configuration based on reputation, operator selection by reputation, and parameter adaptation by reputation, in detail.

3.1. Search configuration based on reputation

In this paper, reputation score is used to measure the dynamic competency of evolutionary operators and control param-
eters on different problems and across stages of the search. Individuals in MOEAs then use the reputation scores to effect the
choice of operator selection and parameter adaptation.
According to [34,55], reputation is defined as what is generally said or believed about a person's or thing's character or standing. It is common for a person to prefer a reputable or famous brand due to its high acceptance by others. Inspired by this phenomenon, individual solutions $X_g$ in MOEAs shall select operators and parameters with higher reputation scores, i.e., operators and parameters that generate better offspring. We define a search configuration $c$ that is composed of an operator with a specified parameter ($\langle o, p \rangle$). In general, the search bias of an operator is determined by the setting values of its parameters [16,26,33,28,56,52]. By considering the coupling relationship between operators and parameters, the reputation of a search configuration can be formalized as:

$$R(c) = R(\langle o, p \rangle) = R(o) \cdot R(p \mid o), \tag{2}$$

where $R(o)$ is the reputation of operator $o$, and $R(p \mid o)$ means that the reputation of parameter $p$ is dependent on the selected operator $o$. If an operator is controlled by several parameters, we assume independence between these parameters, that is, $R(c) = R(o) \prod_p R(p \mid o)$.
After the reputation scores of the various search configurations are calculated, the expected probability of a configuration $c \in C$ (where $C$ is the search configuration pool) is given by:

$$E(c) = \frac{R(c)}{\sum_{c \in C} R(c)}. \tag{3}$$

The search configuration with a higher reputation score shall have a greater probability of being selected by individuals in MOEAs.
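To make Eqs. (2) and (3) concrete, the sketch below composes configuration reputations from R(o) and R(p|o) and then draws a configuration with probability proportional to its reputation. The dictionary layout and names are illustrative assumptions, and each operator is shown with a single controlled parameter for brevity.

```python
import numpy as np

def configuration_probabilities(R_op, R_param_given_op):
    """Eqs. (2)-(3): R(c) = R(o) * R(p|o); E(c) = R(c) / sum_c R(c)."""
    configs, scores = [], []
    for o, Ro in R_op.items():
        for segment, Rpo in R_param_given_op[o].items():
            configs.append((o, segment))
            scores.append(Ro * Rpo)       # product over parameters if several
    scores = np.asarray(scores)
    return configs, scores / scores.sum()

def select_configuration(configs, probs, rng):
    """Roulette-wheel selection: higher reputation -> higher selection chance."""
    return configs[rng.choice(len(configs), p=probs)]

# Hypothetical reputation scores for two operators, each with two parameter segments.
rng = np.random.default_rng(0)
R_op = {"DE/rand/1/bin": 0.7, "SBX": 0.5}
R_param = {"DE/rand/1/bin": {"F_seg1": 0.6, "F_seg2": 0.4},
           "SBX": {"eta_seg1": 0.5, "eta_seg2": 0.5}}
configs, probs = configuration_probabilities(R_op, R_param)
print(select_configuration(configs, probs, rng))
```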
From the definition of configuration reputation in Eq. (2), the two components RðoÞ and RðpjoÞ can be regarded as operator
selection and parameter adaptation by reputation, respectively. They are detailed in the following two subsections.

2 In this work, the initial reputation score is set to 0.5. The range of reputation scores is [0, 1].
3 The terms "reputation" and "reputation score" can be used interchangeably.

3.2. Operator selection by reputation

The first component of our framework is operator selection by reputation. The dynamic competency of operator o 2 O
(where O is the operator pool) is determined by reputation.
To reward the various operators, the Credit Assignment of our approach is described as follows. Suppose an individual $\vec{x}_{i,g}$ employs an operator $o$ to produce an offspring $\vec{u}_{i,g}$. Then, the offspring $\vec{u}_{i,g}$ rewards operator $o$ by:

$$\begin{cases} s_g(o) \leftarrow s_g(o) + 1 & \text{if } \vec{u}_{i,g} \to X_{g+1}, \\ f_g(o) \leftarrow f_g(o) + 1 & \text{otherwise}, \end{cases} \tag{4}$$

where $s_g(o)$ and $f_g(o)$ denote the numbers of successes and failures for operator $o$ at generation $g$, respectively. Whether $\vec{u}_{i,g}$ can survive to the next generation $g+1$ (i.e., $\vec{u}_{i,g} \to X_{g+1}$) is determined by the various acceptance rules (see Section 2).
In order to measure the dynamic competency of operators, the reputation of an operator needs to consider two parts, namely the historical information and the current statistical information. The aggregation of the historical and current information is given by:

$$\begin{cases} s_g(o) \leftarrow (1-\eta) \cdot s_{g-1}(o) + \eta \cdot s_g(o), \\ f_g(o) \leftarrow (1-\eta) \cdot f_{g-1}(o) + \eta \cdot f_g(o), \end{cases} \tag{5}$$

where $0 \le \eta \le 1$ is a predefined parameter. Setting $\eta = 0$ indicates that only historical information is considered, whereas $\eta = 1$ focuses solely on the current information. The challenge of Eq. (5) lies in adapting $\eta$ on different problems and across stages of the search.
Reputation is usually built with the Beta probability distribution for binary events [34,55]. To model the distribution of a random variable representing the unknown probability, the Beta probability density function (PDF) of operator $o$ is given as:

$$\mathrm{Beta}(o \mid \alpha, \beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, o^{\alpha-1} (1-o)^{\beta-1}, \tag{6}$$

where $\alpha$ and $\beta$ are set as $s_g(o)$ and $f_g(o)$, respectively.

The reputation score of operator $o$ at generation $g$ is then the probability expectation value, representing the relative frequency of success in the future [34,55], which is given by:

$$R_g(o) = \frac{s_g(o)}{s_g(o) + f_g(o)}, \quad o \in O. \tag{7}$$

In general, while an operator may be suitable for some specific types of problems, it might not work well for other forms of problems. Even for the same problem, the competency of an operator may vary across generations. For instance, "DE/rand/1/bin" is suitable for multi-modal problems: it has slow convergence in the earlier stage, but exhibits strong exploration in the later stage [43]. Based on the above considerations, the reputation score of an operator needs to reflect the varying competency of the operator, and it is thus necessary to update the reputation scores based on two guiding principles [34,55]:

• The reputation score is hard to build.
• The reputation score is easy to decline.

To avoid the requirement of tuning the parameter $\eta$, we propose the following aggregation functions:

$$\begin{cases} s_g(o) \leftarrow (1 - R_{g-1}(o)) \cdot s_{g-1}(o) + R_{g-1}(o) \cdot s_g(o), \\ f_g(o) \leftarrow (1 - R_{g-1}(o)) \cdot f_{g-1}(o) + R_{g-1}(o) \cdot f_g(o). \end{cases} \tag{8}$$

Eq. (8) has two benefits. First, setting $\eta$ is eliminated. Note that this is important since any adaptive scheme that introduces user-dependent parameters generally detracts from the notion of ease-of-use. At the same time, Eq. (8) meets the two guiding principles stated above. When the reputation of the operator at the previous generation $R_{g-1}(o)$ is low, the operator needs a larger number of successes $s_g(o)$ to build up its reputation at generation $g$; so $R_g(o)$ is hard to build. When $R_{g-1}(o)$ is high, $f_g(o)$ is amplified by $R_{g-1}(o)$ despite a small number of failures; thus $R_g(o)$ is easy to decline.
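A minimal sketch of the credit assignment and reputation update of Eqs. (4), (7) and (8) for a single operator follows; the class layout is our own, with the initial counts and reputation taken from the settings stated in this section (s_0 = f_0 = 1, R_0 = 0.5).

```python
class OperatorReputation:
    """Tracks s_g(o), f_g(o) and R_g(o) for one operator (Eqs. (4), (7), (8))."""

    def __init__(self):
        self.s_hist, self.f_hist = 1.0, 1.0   # initial counts s_0 = f_0 = 1
        self.R = 0.5                          # initial reputation score
        self.s_cur, self.f_cur = 0.0, 0.0     # counts collected in the current generation

    def reward(self, survived):
        # Eq. (4): count a success if the offspring enters X_{g+1}, else a failure.
        if survived:
            self.s_cur += 1
        else:
            self.f_cur += 1

    def end_generation(self):
        # Eq. (8): aggregate history and current counts, weighted by R_{g-1}(o).
        self.s_hist = (1 - self.R) * self.s_hist + self.R * self.s_cur
        self.f_hist = (1 - self.R) * self.f_hist + self.R * self.f_cur
        # Eq. (7): reputation is the Beta expectation s / (s + f).
        self.R = self.s_hist / (self.s_hist + self.f_hist)
        self.s_cur, self.f_cur = 0.0, 0.0
        return self.R
```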

3.3. Parameter adaptation by reputation

The second component of our framework is parameter adaptation by reputation. Similar to operator selection, parameter
adaptation quantizes each control parameter into several segments, and then evaluates them by means of reputation.
In MOEAs, the reputation of evolutionary operators (e.g., crossover and mutation) can be measured based on success and failure counts. However, since control parameters are commonly continuous variables, a quantization of the continuous variables is first performed to compute the reputation of parameters. Assume a control parameter $p \in P$ (where $P$ is the parameter pool) has the range $[L_p, U_p]$. The $k$-th quantized segment of parameter $p$ is:

$$\tilde{p}_k = \begin{cases} \left[L_p + (U_p - L_p)\frac{k-1}{q},\; L_p + (U_p - L_p)\frac{k}{q}\right) & \text{if } k < q, \\[4pt] \left[L_p + (U_p - L_p)\frac{q-1}{q},\; U_p\right] & \text{if } k = q, \end{cases} \tag{9}$$

where $k \in [1, q]$ and $q$ is the quantization size.
Similar to operator selection by reputation, we update the numbers of successes and failures for a parameter segment $\tilde{p} \in [\tilde{p}_1, \ldots, \tilde{p}_q]$ by:

$$\begin{cases} s_g(\tilde{p} \mid o) \leftarrow (1 - R_{g-1}(\tilde{p} \mid o)) \cdot s_{g-1}(\tilde{p} \mid o) + R_{g-1}(\tilde{p} \mid o) \cdot s_g(\tilde{p} \mid o), \\ f_g(\tilde{p} \mid o) \leftarrow (1 - R_{g-1}(\tilde{p} \mid o)) \cdot f_{g-1}(\tilde{p} \mid o) + R_{g-1}(\tilde{p} \mid o) \cdot f_g(\tilde{p} \mid o). \end{cases} \tag{10}$$

Then, the reputation of parameter segment $\tilde{p}$ based on an operator $o$ at generation $g$ is:

$$R_g(\tilde{p} \mid o) = \frac{s_g(\tilde{p} \mid o)}{s_g(\tilde{p} \mid o) + f_g(\tilde{p} \mid o)}, \quad \tilde{p} \in P. \tag{11}$$

In Eq. (9), parameter $p$'s range is quantized into $q$ segments $[\tilde{p}_1, \ldots, \tilde{p}_q]$. We assume each segment follows a Normal distribution, defined by a mean value and a standard deviation value. $V_g(\tilde{p} \mid o)$ denotes the set that stores the successful parameter values in segment $\tilde{p}$ based on operator $o$ at generation $g$. The mean of a parameter segment is updated by:

$$\mu_g(\tilde{p} \mid o) \leftarrow (1-\eta) \cdot \mu_{g-1}(\tilde{p} \mid o) + \eta \cdot \mathrm{mean}(V_g(\tilde{p} \mid o)), \tag{12}$$

where $\eta = R_{g-1}(\tilde{p} \mid o)$. For simplicity, we use the "3-sigma rule" to define the standard deviation as $\sigma = \frac{U_p - L_p}{3q}$, which enforces the values generated by the Normal distribution to cover the parameter range with a 99.7% confidence level [1]. After that, a new parameter value is generated as $p = \mathrm{Normal}(\mu_g(\tilde{p} \mid o), \sigma)$. If the new value falls below or above the bounds, it is set to $L_p$ or $U_p$, respectively.
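The sketch below puts Eqs. (9)-(12) together for one controlled parameter: the range is quantized into q segments, each segment keeps a reputation built from success/failure counts, and new values are drawn from a Normal distribution whose mean follows Eq. (12) and whose standard deviation follows the 3-sigma rule. Class and attribute names are illustrative assumptions.

```python
import numpy as np

class ParameterAdaptation:
    """Reputation-based adaptation of one continuous parameter (Eqs. (9)-(12))."""

    def __init__(self, low, high, q=3):
        self.low, self.high, self.q = low, high, q
        edges = np.linspace(low, high, q + 1)       # Eq. (9): q equal-width segments
        self.mu = (edges[:-1] + edges[1:]) / 2.0    # initial means = segment midpoints
        self.sigma = (high - low) / (3.0 * q)       # 3-sigma rule
        self.R = np.full(q, 0.5)                    # initial segment reputations
        self.s = np.ones(q)                         # aggregated success counts
        self.f = np.ones(q)                         # aggregated failure counts
        self._reset_generation()

    def _reset_generation(self):
        self.s_cur = np.zeros(self.q)
        self.f_cur = np.zeros(self.q)
        self.V = [[] for _ in range(self.q)]        # V_g(p~|o): successful values

    def sample(self, k, rng):
        """Draw a value for segment k from Normal(mu_k, sigma), clipped to the bounds."""
        return float(np.clip(rng.normal(self.mu[k], self.sigma), self.low, self.high))

    def reward(self, k, value, survived):
        if survived:
            self.s_cur[k] += 1
            self.V[k].append(value)
        else:
            self.f_cur[k] += 1

    def end_generation(self):
        R_prev = self.R.copy()
        self.s = (1 - R_prev) * self.s + R_prev * self.s_cur   # Eq. (10)
        self.f = (1 - R_prev) * self.f + R_prev * self.f_cur
        self.R = self.s / (self.s + self.f)                    # Eq. (11)
        for k, vals in enumerate(self.V):                      # Eq. (12), eta = R_{g-1}
            if vals:
                self.mu[k] = (1 - R_prev[k]) * self.mu[k] + R_prev[k] * np.mean(vals)
        self._reset_generation()
```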

Algorithm 2. The proposed MOEAs based on reputation.



3.4. Pseudocode of MOEAs based on reputation

Algorithm 2 describes the pseudocode of MOEAs based on reputation as a unified framework for both Pareto dominance-based MOEAs and Decomposition-based MOEAs. The three major phases are given as follows.

• Reputation initializing. Lines 3–8 initialize the reputation of operators and parameters. More specifically, Line 4 quantizes parameter $p$'s range ($p \in P$) into $q$ segments $[\tilde{p}_1, \ldots, \tilde{p}_q]$. In Line 5, the means and standard deviation of the parameter segments are assigned as $\mu_g(\tilde{p} \mid O) = \{L_p + (U_p - L_p)\frac{k - 0.5}{q} \mid k = 1, \ldots, q\}$ and $\sigma = \frac{U_p - L_p}{3q}$, respectively. In Line 7, the initial success and failure numbers for operators and parameter segments are set as $s_0(\cdot) = f_0(\cdot) = 1$.

• Offspring reproducing. Line 10 initializes the set $V_g(P \mid O)$ as empty. In Line 12, an individual $\vec{x}_{i,g}$ selects a search configuration $c = (o, \tilde{p})$ by $E_g(C)$. Line 13 generates a parameter $p$ by the Normal distribution; if an operator has several parameters, Line 13 is applied repeatedly until all the parameters are generated. Lines 16–19 assign rewards to the selected operators and parameters according to the different acceptance rules. If offspring $\vec{u}_{i,g}$ can survive to the next generation, parameter $p$ is stored in $V_g(\tilde{p} \mid o)$ (Line 17).

• Reputation updating. In Line 20, we first update the success and failure counts for operators using Eq. (8), and then update the reputation of operators by Eq. (7). Lines 21–22 first update the reputation of parameter segments using Eq. (11), and then calculate the means of parameter segments by Eq. (12). Line 23 updates the reputation of search configurations and their expected probabilities using Eq. (3).

When the termination criterion is satisfied (i.e., $g \ge G$), the non-dominated population is output in Line 24.

4. Experimental results and discussions

The experiments are conducted on jMetal 4.0 [20], which is a Java-based framework that is aimed at facilitating the devel-
opment of metaheuristics for solving MOPs.4 The performance metric is set as the hypervolume indicator [64,4]. The reference
sets in hypervolume are specified by the true Pareto fronts [20].

4.1. Benchmark problems and experimental settings

The benchmark problems include 58 test instances: 5 MOPs in ZDT family problems [62], 7 MOPs in DTLZ family prob-
lems [18], 18 MOPs in WFG family problems5 [29], 5 MOPs in LZ09F family problems6 [35] and 23 MOPs in CEC2009 MOEA
competition [59]. The decision variables in the Pareto Sets (PSs) of ZDT, DTLZ and WFG are independent, and those in LZ09F
and CEC2009 are dependent. The 58 MOPs have different geometrical shapes in objective space such as concave, convex, linear,
discrete, uni-modal and multi-modal Pareto Fronts (PFs).
The major experimental settings are outlined as follows.

1. Population size: In MOEA/D [58], the population size is decided by the number of weight vectors $C^{m-1}_{H+m-1}$ ($m$ is the objective number, $H$ is a predefined integer). $NP = 100, 153, 715$ for $m = 2, 3, 5$ objectives ($H = 99, 16, 9$), respectively. The other algorithms use the same population size as MOEA/D (a quick check of these counts is sketched after this list).
2. Maximum function evaluations: Max_FES = 300,000.
3. Independent runs: runs = 30.
4. Evolutionary operators in NSGAII [16] and SPEA2 [63]:
   • SBX: $p_c = 0.9$, $\eta_c = 20$.
   • Polynomial mutation: $p_m = 1/n$, $\eta_m = 20$.
5. DE operators in MOEA/D [58]: $CR = 1.0$, $F = 0.4$.
6. Update approaches in Decomposition-based MOEAs: Tchebycheff approach ($m = 2$) and Penalty-based Boundary Intersection approach ($m \ge 3$) [58].
7. Quantization size in our approach: $q = 3$.

Other parameters are set as the default values in jMetal [20].
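As referenced in item 1 above, the population sizes are simply the number of simplex-lattice weight vectors $C^{m-1}_{H+m-1}$ used by MOEA/D; a quick check in Python:

```python
from math import comb

# Number of MOEA/D weight vectors in the simplex lattice: C(H + m - 1, m - 1).
for m, H in [(2, 99), (3, 16), (5, 9)]:
    print(m, H, comb(H + m - 1, m - 1))   # -> 100, 153, 715
```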


The experimental studies involve five evolutionary operators, namely "DE/rand/1/bin", "DE/rand/2/bin", "DE/current-to-rand/1/bin" [48], "SBX", and "polynomial mutation" [16], because they have been demonstrated to be effective operators and are commonly used in the classical MOEAs. They are described as follows:

1. "DE/rand/1/bin"
$$\vec{v}_{i,g} = \vec{x}_{r1,g} + F \cdot (\vec{x}_{r2,g} - \vec{x}_{r3,g}). \tag{13}$$

4 http://jmetal.sourceforge.net.
5 WFG1-9 and WFG1-9D3 with 2 and 3 objectives, respectively.
6 We omit LZ09F2, 5, 6, 8 because they are the same as UF1, 2, 3, 8 in the CEC2009 MOEA competition, respectively.
Table 1
Hypervolume median and IQR of classical MOEAs and those extended by our framework over 30 independent runs on 58 MOPs with 300,000 FES.

MOPs NSGAII NSGAII-R SPEA2 SPEA2-R MOEA/D MOEA/D-R


ZDT1 6.60E01 ± 2.29E04 – 6.61E01 ± 3.15E04 – 6.62E01 ± 1.02E04 + 6.62E01 ± 2.76E05 + 6.61E01 ± 1.31E04 – 6.62E01 ± 4.95E07
ZDT2 3.27E01 ± 2.30E04 – 3.27E01 ± 4.02E04 – 3.28E01 ± 1.35E04  3.29E01 ± 3.40E05 + 3.28E01 ± 1.16E04 – 3.28E01 ± 1.18E08
ZDT3 5.15E01 ± 1.72E04 + 5.15E01 ± 1.95E04 + 5.16E01 ± 8.90E05 + 5.16E01 ± 2.94E05 + 5.14E01 ± 2.01E05 – 5.14E01 ± 2.17E06
ZDT4 6.61E01 ± 2.79E04 – 6.60E01 ± 3.03E04 – 6.62E01 ± 5.16E05 + 4.95E01 ± 1.67E01  6.61E01 ± 3.35E04 – 6.62E01 ± 1.05E04
ZDT6 3.98E01 ± 5.50E04 – 3.98E01 ± 3.83E04 – 4.01E01 ± 5.27E04 – 4.01E01 ± 2.74E05 + 4.01E01 ± 2.11E07 – 4.01E01 ± 4.72E09
DTLZ1 7.78E01 ± 3.13E03  7.80E01 ± 3.28E03  7.97E01 ± 4.60E04  7.97E01 ± 2.10E04  7.93E01 ± 1.24E03  4.51E01 ± 6.94E01
DTLZ2 3.94E01 ± 4.22E03 – 4.01E01 ± 4.96E03 – 4.19E01 ± 2.00E03 – 4.29E01 ± 6.77E04 + 4.09E01 ± 2.46E03 – 4.29E01 ± 1.78E06
DTLZ3 3.98E01 ± 5.79E03 + 4.00E01 ± 4.02E01 + 4.28E01 ± 8.59E04 + 4.29E01 ± 9.35E04 + 3.34E01 ± 7.03E02 – 3.93E01 ± 3.15E04
DTLZ4 3.95E01 ± 4.41E03 – 3.97E01 ± 5.24E03 – 4.13E01 ± 2.08E03 – 4.22E01 ± 8.73E04 + 4.00E01 ± 3.33E03 – 4.22E01 ± 1.52E06
DTLZ5 9.41E02 ± 1.38E04 + 9.44E02 ± 9.18E05 + 9.44E02 ± 5.45E05 + 9.47E02 ± 2.55E05 + 8.06E02 ± 6.92E05 + 8.04E02 ± 1.59E07
DTLZ6 9.52E02 ± 1.02E04 + 9.52E02 ± 1.10E04 + 9.56E02 ± 3.04E05 + 9.56E02 ± 2.16E05 + 8.12E02 ± 3.30E07  8.12E02 ± 1.20E07
DTLZ7 2.98E01 ± 2.85E03 + 3.05E01 ± 2.59E03 + 3.07E01 ± 1.76E03 + 3.12E01 ± 1.52E03 + 2.62E01 ± 3.99E03 – 2.67E01 ± 9.47E04
WFG1 6.30E01 ± 4.37E04 – 6.32E01 ± 3.56E04 – 6.32E01 ± 1.19E01 – 6.24E01 ± 1.86E03 – 6.27E01 ± 3.22E03 – 6.33E01 ± 6.43E05
WFG2 5.64E01 ± 2.70E03  5.64E01 ± 8.40E05 + 5.64E01 ± 2.79E03  5.64E01 ± 3.46E05 + 5.63E01 ± 4.84E05 – 5.63E01 ± 2.41E07
WFG3 4.41E01 ± 1.38E04 – 4.41E01 ± 1.64E04 – 4.42E01 ± 1.15E04 + 4.42E01 ± 1.62E05 + 4.42E01 ± 1.74E05 – 4.42E01 ± 2.41E09
WFG4 2.18E01 ± 3.35E04 – 2.18E01 ± 3.75E04  2.19E01 ± 2.48E04 + 2.19E01 ± 9.47E05 + 2.18E01 ± 2.07E04 – 2.18E01 ± 1.39E06
WFG5 1.95E01 ± 5.05E04  1.95E01 ± 4.53E04  1.96E01 ± 1.64E04 + 1.96E01 ± 1.11E04 + 1.95E01 ± 3.87E05 – 1.95E01 ± 6.56E05
WFG6 2.04E01 ± 5.60E03 – 2.09E01 ± 2.64E04 – 2.03E01 ± 7.41E03 – 2.10E01 ± 8.44E05 + 2.09E01 ± 3.04E05 – 2.10E01 ± 1.69E07
WFG7 2.09E01 ± 2.12E04 – 2.09E01 ± 4.54E04 – 2.10E01 ± 2.05E04 + 2.10E01 ± 9.61E05 + 2.09E01 ± 1.88E05 – 2.10E01 ± 1.11E06
WFG8 1.52E01 ± 8.90E04 – 1.52E01 ± 7.80E04 – 1.54E01 ± 9.13E04 – 1.51E01 ± 1.52E03 – 1.54E01 ± 9.26E04 – 2.07E01 ± 5.26E02
WFG9 2.38E01 ± 1.99E03 – 2.38E01 ± 1.51E03 – 2.39E01 ± 2.01E03  2.39E01 ± 1.27E03  2.40E01 ± 1.14E04  2.39E01 ± 1.03E03
WFG1D3 9.08E01 ± 3.54E03  9.20E01 ± 3.39E03 + 9.15E01 ± 1.38E03 + 7.62E01 ± 3.59E02 – 8.44E01 ± 5.55E02 – 9.07E01 ± 5.89E03
WFG2D3 9.09E01 ± 2.51E03 – 9.13E01 ± 2.37E03 – 9.19E01 ± 1.02E03 + 9.21E01 ± 6.55E04 + 9.13E01 ± 1.23E03 – 9.16E01 ± 2.82E04
WFG3D3 3.19E01 ± 1.68E03 + 3.20E01 ± 1.50E03 + 3.04E01 ± 4.39E03 + 3.09E01 ± 3.46E03 + 2.87E01 ± 7.18E04 – 2.87E01 ± 1.14E04
WFG4D3 3.89E01 ± 4.73E03 – 3.89E01 ± 4.35E03 – 4.11E01 ± 3.03E03 – 4.27E01 ± 1.18E03 + 3.80E01 ± 3.05E03 – 4.17E01 ± 1.04E03
WFG5D3 3.62E01 ± 4.77E03 – 3.66E01 ± 4.93E03 – 3.85E01 ± 1.64E03 + 3.83E01 ± 8.37E04 + 3.72E01 ± 1.64E03 – 3.81E01 ± 6.81E03
WFG6D3 3.94E01 ± 5.60E03 – 4.01E01 ± 4.81E03 – 4.21E01 ± 1.86E03 + 4.33E01 ± 5.60E04 + 4.08E01 ± 1.70E03 – 4.17E01 ± 2.29E03
WFG7D3 3.80E01 ± 6.79E03 – 3.83E01 ± 5.38E03 – 4.01E01 ± 3.17E03 – 4.08E01 ± 2.49E03 + 4.02E01 ± 1.94E03 – 4.06E01 ± 2.08E03
WFG8D3 2.59E01 ± 7.27E03 – 2.59E01 ± 5.10E03 – 2.99E01 ± 5.62E03  2.98E01 ± 4.52E03 – 2.86E01 ± 9.46E02 – 3.91E01 ± 7.33E02
WFG9D3 3.79E01 ± 4.31E03 – 3.79E01 ± 3.88E03 – 3.95E01 ± 4.55E03 – 3.96E01 ± 2.30E03 – 3.97E01 ± 1.07E03 – 4.00E01 ± 2.94E03
LZ09F1 6.49E01 ± 9.18E04 – 6.54E01 ± 8.68E04 – 6.55E01 ± 1.90E03 – 6.56E01 ± 6.01E04 – 6.61E01 ± 8.94E05 – 6.61E01 ± 6.62E05
LZ09F3 6.25E01 ± 4.11E03 – 6.47E01 ± 1.95E03 – 6.20E01 ± 6.81E03 – 6.35E01 ± 3.65E03 – 6.50E01 ± 8.90E03 – 6.53E01 ± 2.55E03
LZ09F4 6.35E01 ± 2.65E03 – 6.47E01 ± 7.24E04 – 6.32E01 ± 3.49E03 – 6.32E01 ± 4.98E03 – 6.57E01 ± 9.73E04 + 6.53E01 ± 5.27E03
LZ09F7 4.87E01 ± 6.06E02 – 6.53E01 ± 9.90E04 – 4.72E01 ± 7.53E02 – 4.83E01 ± 5.57E02 – 6.58E01 ± 1.77E03 – 6.59E01 ± 2.37E03
LZ09F9 2.43E01 ± 1.48E02 – 3.00E01 ± 7.86E03 – 2.24E01 ± 6.19E03 – 2.42E01 ± 1.11E02 – 3.21E01 ± 5.55E03 + 3.18E01 ± 4.89E03
UF1 5.67E01 ± 3.85E02 – 6.40E01 ± 1.61E02 – 5.46E01 ± 2.26E02 – 5.83E01 ± 3.27E02 – 6.57E01 ± 1.51E03 + 6.56E01 ± 2.43E03
UF2 6.34E01 ± 8.51E03 – 6.48E01 ± 1.93E03 – 6.33E01 ± 9.58E03 – 6.42E01 ± 3.45E03 – 6.45E01 ± 2.01E02 – 6.54E01 ± 2.15E03
UF3 4.82E01 ± 5.66E02 – 6.35E01 ± 1.51E02 – 4.29E01 ± 6.86E02 – 4.63E01 ± 8.06E02 – 6.31E01 ± 1.95E02 – 6.50E01 ± 8.98E03
UF4 2.65E01 ± 8.74E04 – 2.74E01 ± 2.39E03 – 2.71E01 ± 7.96E04 – 2.80E01 ± 3.86E04 + 2.30E01 ± 1.10E02 – 2.77E01 ± 1.51E03
UF5 1.68E01 ± 8.22E02 – 3.30E01 ± 2.84E02 + 1.76E01 ± 9.89E02 – 1.04E01 ± 1.59E01 – 6.89E02 ± 1.57E01 – 2.53E01 ± 3.76E02
UF6 2.35E01 ± 4.81E02  2.50E01 ± 7.13E02  2.38E01 ± 5.43E02  2.17E01 ± 1.33E01 – 2.36E01 ± 9.69E02  2.53E01 ± 5.36E02
UF7 2.85E01 ± 1.99E01 – 4.76E01 ± 3.54E03 – 4.37E01 ± 1.75E01 – 4.51E01 ± 1.03E02 – 4.88E01 ± 3.79E03  4.88E01 ± 2.45E03
UF8 1.33E01 ± 1.18E01 – 1.34E01 ± 1.51E01 – 1.58E01 ± 1.91E02 – 2.55E01 ± 8.87E02 – 2.06E01 ± 9.32E02 – 3.02E01 ± 3.12E03
UF9 3.83E01 ± 1.62E01 – 3.58E01 ± 2.16E01 – 5.72E01 ± 9.89E02  5.95E01 ± 1.05E02  5.64E01 ± 1.08E01  5.49E01 ± 1.19E01
UF10 2.20E02 ± 3.47E02 – 9.75E03 ± 4.31E02 – 3.10E02 ± 5.15E02 – 1.19E01 ± 6.66E02  7.06E02 ± 3.71E02 – 1.32E01 ± 7.59E02
UF11 0.00E00 ± 0.00E00 – 0.00E00 ± 0.00E00 – 0.00E00 ± 0.00E00 – 0.00E00 ± 0.00E00 – 0.00E00 ± 0.00E00 – 3.08E04 ± 2.86E04
UF12 0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00
UF13 0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00



CF1 4.65E01 ± 1.12E03 + 4.71E01 ± 8.56E04 + 4.62E01 ± 3.28E03 + 4.40E01 ± 6.04E03  4.39E01 ± 1.33E05  4.39E01 ± 2.00E02
CF2 5.87E01 ± 3.13E02 – 6.10E01 ± 7.49E04 – 5.64E01 ± 3.07E02 – 5.72E01 ± 2.35E02 – 6.50E01 ± 1.35E03 – 6.51E01 ± 1.17E03
CF3 9.73E02 ± 6.40E02 – 1.44E01 ± 5.14E02  9.80E02 ± 6.24E02 – 1.28E01 ± 3.08E02  1.31E01 ± 6.40E02  1.46E01 ± 6.82E02
CF4 1.74E01 ± 9.28E02 – 2.34E01 ± 6.50E02 – 2.94E02 ± 1.25E01 – 1.66E03 ± 8.83E02 – 5.46E01 ± 6.71E03 – 5.49E01 ± 4.00E03
CF5 6.65E02 ± 1.43E01 – 1.15E01 ± 1.83E01 – 0.00E00 ± 9.04E02 – 0.00E00 ± 0.00E00 – 3.81E01 ± 1.25E01  3.56E01 ± 1.72E01
CF6 4.10E01 ± 1.19E01 – 3.77E01 ± 8.60E02 – 1.65E01 ± 2.95E01 – 1.90E01 ± 1.87E01 – 6.45E01 ± 1.56E02 – 6.58E01 ± 1.96E03
CF7 1.39E01 ± 2.48E01 – 2.42E01 ± 2.10E01 – 0.00E00 ± 1.29E01 – 0.00E00 ± 0.00E00 – 4.64E01 ± 1.19E01 – 5.56E01 ± 4.47E02
CF8 4.67E02 ± 9.99E02 – 1.61E01 ± 5.50E02 – 2.32E01 ± 2.39E01  2.36E01 ± 1.69E01  2.31E01 ± 4.65E02 + 2.04E01 ± 5.07E02
CF9 1.94E01 ± 7.37E02 – 2.67E01 ± 1.52E02 – 2.88E01 ± 1.55E02 – 2.97E01 ± 1.56E02 – 2.71E01 ± 2.62E02 – 3.25E01 ± 2.91E02
CF10 0.00E00 ± 0.00E00 – 0.00E00 ± 0.00E00 – 0.00E00 ± 0.00E00 – 0.00E00 ± 0.00E00 – 1.42E01 ± 8.81E02 – 2.07E01 ± 6.46E02

+, ≈ and – indicate that the corresponding algorithm is statistically significantly better than, similar to, and worse than the last algorithm, respectively.


2. "DE/rand/2/bin"
$$\vec{v}_{i,g} = \vec{x}_{r1,g} + F \cdot (\vec{x}_{r2,g} - \vec{x}_{r3,g}) + F \cdot (\vec{x}_{r4,g} - \vec{x}_{r5,g}). \tag{14}$$

3. "DE/current-to-rand/1/bin"
$$\vec{v}_{i,g} = \vec{x}_{i,g} + \mathrm{rand}() \cdot (\vec{x}_{r1,g} - \vec{x}_{i,g}) + F \cdot (\vec{x}_{r2,g} - \vec{x}_{r3,g}). \tag{15}$$

The three DE operators above apply the binomial crossover to produce the offspring $\vec{u}_{i,g} = \{u_{i,j,g} \mid j = 1, \ldots, n\}$, $i \in [1, NP]$, where $g$ is the generation index and

$$u_{i,j,g} = \begin{cases} v_{i,j,g} & \text{if } \mathrm{rand}_j() \le CR \text{ or } j = j_{rand}, \\ x_{i,j,g} & \text{otherwise}. \end{cases}$$

4. "SBX"
$$\vec{u}_{i,g} = \begin{cases} \vec{v}_{i,g} & \text{if } \mathrm{rand}() < p_c, \\ \vec{x}_{r2,g} & \text{otherwise}, \end{cases} \tag{16}$$

where $\vec{v}_{i,g} = \frac{1}{2}\left[(\vec{x}_{r1,g} + \vec{x}_{r2,g}) - \beta_q \cdot (\vec{x}_{r1,g} - \vec{x}_{r2,g})\right]$, and

$$\beta_q = \begin{cases} (2 \cdot \mathrm{rand}())^{\frac{1}{\eta_c + 1}} & \text{if } \mathrm{rand}() \le 0.5, \\[4pt] \left(\frac{1}{2(1 - \mathrm{rand}())}\right)^{\frac{1}{\eta_c + 1}} & \text{otherwise}. \end{cases}$$

5. "Polynomial mutation"
$$u_{i,j,g} = \begin{cases} v_{i,j,g} & \text{if } \mathrm{rand}() < p_m, \\ x_{i,j,g} & \text{otherwise}, \end{cases} \tag{17}$$

where $v_{i,j,g} = x_{i,j,g} + (U_j - L_j) \cdot \delta_j$, and

$$\delta_j = \begin{cases} (2 \cdot \mathrm{rand}())^{\frac{1}{\eta_m + 1}} - 1 & \text{if } \mathrm{rand}() < 0.5, \\[4pt] 1 - \left|2 \cdot (1 - \mathrm{rand}())\right|^{\frac{1}{\eta_m + 1}} & \text{otherwise}. \end{cases} \tag{18}$$

Here, $r1, r2, r3, r4, r5 \in [1, NP]$ are random integers with $r1 \ne r2 \ne r3 \ne r4 \ne r5 \ne i$, and $\mathrm{rand}() \in [0, 1]$ is a uniform random value. The control parameter $F$ is the scaling factor, which amplifies or shrinks the difference vectors. In SBX, $\eta_c \in [0, n]$ determines the distance of the children from their parents. In polynomial mutation, $\eta_m \in [0, n]$ defines the polynomial probability distribution. $CR, p_c, p_m \in [0, 1]$ are the crossover/mutation probabilities in the DE operators, SBX and polynomial mutation, respectively. If the value $u_{i,j,g}$ exceeds the lower or upper bound, it is set to $L_j$ or $U_j$, respectively.
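As a concrete reading of Eq. (13) with binomial crossover and of Eqs. (17)-(18), the sketch below implements "DE/rand/1/bin" and polynomial mutation on real-coded vectors; for brevity it assumes a common box [lo, hi] for all variables rather than per-dimension bounds [L_j, U_j], and the function names are ours.

```python
import numpy as np

def de_rand_1_bin(X, i, F, CR, bounds, rng):
    """'DE/rand/1/bin' (Eq. (13)) followed by binomial crossover."""
    NP, n = X.shape
    r1, r2, r3 = rng.choice([r for r in range(NP) if r != i], size=3, replace=False)
    v = X[r1] + F * (X[r2] - X[r3])
    j_rand = rng.integers(n)
    mask = rng.random(n) <= CR
    mask[j_rand] = True                       # at least one gene comes from v
    u = np.where(mask, v, X[i])
    return np.clip(u, bounds[0], bounds[1])   # truncate to the box

def polynomial_mutation(x, p_m, eta_m, bounds, rng):
    """Polynomial mutation (Eqs. (17)-(18)), applied gene-wise with probability p_m."""
    lo, hi = bounds
    u = x.copy()
    for j in range(len(x)):
        if rng.random() < p_m:
            r = rng.random()
            if r < 0.5:
                delta = (2.0 * r) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - r)) ** (1.0 / (eta_m + 1.0))
            u[j] = np.clip(x[j] + (hi - lo) * delta, lo, hi)
    return u
```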
All the algorithms are evaluated based on hypervolume, which is the only single-set quality measure known to be strictly monotonic with regard to Pareto dominance [4]. The obtained results are compared using median values and interquartile ranges (IQR). In order to obtain statistically sound conclusions, the Wilcoxon rank sum test with a 95% confidence level is conducted on the experimental results. We compare all the algorithms under the following two aspects.

• Robustness. Under the Wilcoxon statistical test, the algorithm generates higher hypervolume than other approaches on different types of MOPs.
• Ease-of-use. A simple approach requires minimum effort to develop, as well as a minimum number of predefined parameters to be managed or specified by users.
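For two objectives, the hypervolume indicator used above can be computed by sorting a non-dominated set by f1 and summing the rectangles it dominates up to a reference point; the following is a minimal sketch of our own (not jMetal's implementation), assuming both objectives are minimized.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume of a 2-objective non-dominated set w.r.t. reference point `ref`
    (both objectives minimized). Larger is better."""
    P = np.asarray(points, dtype=float)
    P = P[np.all(P <= ref, axis=1)]          # keep points that dominate the reference
    P = P[np.argsort(P[:, 0])]               # sort by f1 ascending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in P:
        if f2 < prev_f2:                     # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: two non-dominated points against reference (1, 1).
print(hypervolume_2d([[0.2, 0.8], [0.6, 0.3]], ref=(1.0, 1.0)))  # 0.36
```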

4.2. Comparison with the classical MOEAs

In this experiment, the three classical MOEAs (NSGAII [16], SPEA2 [63] and MOEA/D [58]) are extended by our framework, which adapts the five evolutionary operators and their control parameters based on the concept of reputation. The extended versions of NSGAII, SPEA2 and MOEA/D are referred to as NSGAII-R, SPEA2-R and MOEA/D-R, respectively.

Table 2
Statistical results (w/t/l) of classical MOEAs versus those extended by reputation.

            NSGAII-R   SPEA2      SPEA2-R    MOEA/D     MOEA/D-R
NSGAII      35/21/2    33/14/11   37/9/12    38/8/12    44/7/7
NSGAII-R               29/6/23    32/6/20    33/7/18    41/7/10
SPEA2                             33/18/7    22/10/26   31/10/17
SPEA2-R                                      18/12/28   25/10/23
MOEA/D                                                  42/11/5
Table 3
Hypervolume median and IQR by adaptive MOEAs with Decomposition-based approaches over 30 independent runs on 58 MOPs with 300,000 FES.

MOPs MOEA/D-JADE MOEA/D-jDE MOEA/D-SaDE MOEA/D-CoDE MOEA/D-R3 MOEA/D-R


ZDT1 6.62E01 ± 1.16E05 – 6.62E01 ± 9.96E06 – 6.62E01 ± 1.18E05 – 6.62E01 ± 1.38E05 – 6.62E01 ± 9.59E07  6.62E01 ± 4.95E07
ZDT2 3.28E01 ± 4.57E06 – 3.28E01 ± 7.88E06 – 3.28E01 ± 2.82E06 – 3.28E01 ± 1.01E05 – 3.28E01 ± 3.81E08  3.28E01 ± 1.18E08
ZDT3 5.14E01 ± 1.47E05 – 5.14E01 ± 2.28E05 – 5.14E01 ± 4.69E06 – 5.14E01 ± 9.04E06 – 5.14E01 ± 6.06E06  5.14E01 ± 2.17E06
ZDT4 3.82E02 ± 3.41E01 – 3.73E03 ± 9.99E02 – 3.41E01 ± 2.86E01 – 0.00E00 ± 3.73E03 – 1.08E01 ± 3.07E01 – 6.62E01 ± 1.05E04
ZDT6 4.01E01 ± 5.94E06 – 4.01E01 ± 5.76E08 – 4.01E01 ± 4.35E07 – 4.01E01 ± 8.47E08 – 4.01E01 ± 3.80E09  4.01E01 ± 4.72E09
DTLZ1 7.98E01 ± 1.10E07  7.98E01 ± 5.41E07 + 7.98E01 ± 1.94E07 + 1.04E01 ± 1.04E01 – 7.98E01 ± 6.94E01  4.51E01 ± 6.94E01
DTLZ2 4.29E01 ± 4.87E06  4.29E01 ± 8.90E06 – 4.29E01 ± 4.33E06 + 4.29E01 ± 7.94E06  4.29E01 ± 3.20E06  4.29E01 ± 1.78E06
DTLZ3 4.29E01 ± 4.29E06 + 0.00E00 ± 0.00E00 – 0.00E00 ± 4.29E01  0.00E00 ± 0.00E00 – 0.00E00 ± 4.29E01 – 3.93E01 ± 3.15E04
DTLZ4 4.22E01 ± 3.25E06  4.22E01 ± 1.38E05  4.22E01 ± 6.49E06  4.22E01 ± 1.25E05 – 4.22E01 ± 2.19E06  4.22E01 ± 1.52E06
DTLZ5 8.04E02 ± 1.46E06  8.04E02 ± 1.61E06 + 8.04E02 ± 6.01E07  8.04E02 ± 8.49E07  8.04E02 ± 1.53E07  8.04E02 ± 1.59E07
DTLZ6 8.12E02 ± 8.31E07  8.12E02 ± 2.88E07  8.12E02 ± 2.70E07  8.12E02 ± 4.69E07  8.12E02 ± 9.58E08  8.12E02 ± 1.20E07
DTLZ7 2.67E01 ± 8.26E04  2.67E01 ± 1.16E03  2.67E01 ± 9.64E04 – 2.67E01 ± 1.07E03  2.67E01 ± 8.96E04  2.67E01 ± 9.47E04
WFG1 6.33E01 ± 2.64E05  6.25E01 ± 1.04E03 – 6.30E01 ± 2.69E03 – 6.27E01 ± 3.75E04 – 6.33E01 ± 8.35E05  6.33E01 ± 6.43E05
WFG2 5.63E01 ± 2.28E07 – 5.63E01 ± 4.76E07 + 5.63E01 ± 2.84E07  5.63E01 ± 6.73E07 + 5.63E01 ± 1.90E07  5.63E01 ± 2.41E07



WFG3 4.42E01 ± 2.06E08 – 4.42E01 ± 1.77E06 + 4.42E01 ± 5.57E07 + 4.42E01 ± 9.90E07 + 4.42E01 ± 3.20E09 + 4.42E01 ± 2.41E09
WFG4 2.18E01 ± 1.66E06  2.18E01 ± 1.45E06  2.18E01 ± 9.82E07 – 2.18E01 ± 1.70E04 – 2.18E01 ± 1.70E04  2.18E01 ± 1.39E06
WFG5 1.95E01 ± 6.23E05  1.95E01 ± 4.28E05  1.95E01 ± 3.20E05  1.95E01 ± 5.80E05  1.95E01 ± 4.13E05  1.95E01 ± 6.56E05
WFG6 2.10E01 ± 8.83E06 – 2.10E01 ± 1.54E06 – 2.10E01 ± 2.05E06 – 2.10E01 ± 4.93E06 – 2.10E01 ± 1.42E07 + 2.10E01 ± 1.69E07
WFG7 2.10E01 ± 1.33E06 – 2.10E01 ± 1.91E06 – 2.10E01 ± 1.47E06 – 2.10E01 ± 1.31E06 – 2.10E01 ± 5.30E07 + 2.10E01 ± 1.11E06
WFG8 1.54E01 ± 5.29E02 – 2.06E01 ± 5.23E02  1.55E01 ± 5.33E02  1.54E01 ± 5.30E02  1.54E01 ± 5.28E02 – 2.07E01 ± 5.26E02
WFG9 2.39E01 ± 1.33E03  2.40E01 ± 1.04E03  2.39E01 ± 9.87E04  2.40E01 ± 1.05E03 + 2.39E01 ± 1.08E03  2.39E01 ± 1.03E03
WFG1D3 8.95E01 ± 6.28E03 – 9.01E01 ± 1.92E02 – 8.96E01 ± 4.03E03 – 8.94E01 ± 1.25E02 – 9.03E01 ± 4.60E03 – 9.07E01 ± 5.89E03
WFG2D3 9.16E01 ± 3.68E04  9.16E01 ± 3.22E04  9.16E01 ± 3.26E04  9.16E01 ± 5.78E03  9.16E01 ± 9.50E05  9.16E01 ± 2.82E04
WFG3D3 2.87E01 ± 2.55E04  2.87E01 ± 3.57E04  2.87E01 ± 5.01E04  2.87E01 ± 4.10E04  2.87E01 ± 1.13E04  2.87E01 ± 1.14E04
WFG4D3 4.15E01 ± 1.18E03 – 4.13E01 ± 1.58E03 – 4.14E01 ± 1.40E03 – 4.15E01 ± 1.21E03 – 4.17E01 ± 1.06E03  4.17E01 ± 1.04E03
WFG5D3 3.81E01 ± 8.19E03  3.79E01 ± 8.72E03 – 3.81E01 ± 1.01E02  3.78E01 ± 6.86E03 – 3.81E01 ± 4.19E03  3.81E01 ± 6.81E03
WFG6D3 4.13E01 ± 2.04E03 – 4.14E01 ± 1.86E03 – 4.16E01 ± 1.81E03  4.16E01 ± 1.47E03  4.16E01 ± 1.39E03  4.17E01 ± 2.29E03
WFG7D3 4.04E01 ± 2.12E03 – 4.05E01 ± 1.66E03  4.06E01 ± 1.34E03  4.08E01 ± 8.59E04 + 4.05E01 ± 1.45E03  4.06E01 ± 2.08E03
WFG8D3 2.93E01 ± 1.04E01 – 2.91E01 ± 8.98E02 – 3.90E01 ± 1.00E01  2.97E01 ± 1.05E01  3.91E01 ± 1.03E01  3.91E01 ± 7.33E02
WFG9D3 4.03E01 ± 2.96E03 + 4.00E01 ± 3.23E03  4.03E01 ± 3.39E03 + 4.03E01 ± 3.37E03 + 3.98E01 ± 2.98E03  4.00E01 ± 2.94E03
LZ09F1 6.61E01 ± 5.17E05 + 6.61E01 ± 3.57E05 – 6.61E01 ± 1.01E04 – 6.61E01 ± 3.05E05  6.61E01 ± 7.20E05  6.61E01 ± 6.62E05
LZ09F3 6.47E01 ± 3.03E03 – 6.53E01 ± 2.82E03  6.46E01 ± 3.75E03 – 6.55E01 ± 4.72E03 + 6.54E01 ± 3.18E03  6.53E01 ± 2.55E03
LZ09F4 6.50E01 ± 6.44E03 – 6.56E01 ± 2.35E03 + 6.44E01 ± 5.66E03 – 6.57E01 ± 2.64E03 + 6.54E01 ± 5.23E03  6.53E01 ± 5.27E03
LZ09F7 6.59E01 ± 1.21E03  3.30E01 ± 6.60E01  6.59E01 ± 7.64E03  6.61E01 ± 8.55E04 + 6.59E01 ± 1.97E03  6.59E01 ± 2.37E03
LZ09F9 2.61E01 ± 1.14E02 – 3.07E01 ± 4.40E03 – 2.87E01 ± 3.50E03 – 3.13E01 ± 1.08E01 – 3.16E01 ± 3.25E03  3.18E01 ± 4.89E03
UF1 6.12E01 ± 2.11E02 – 6.50E01 ± 2.41E03 – 6.37E01 ± 8.63E03 – 6.55E01 ± 3.81E03 – 6.56E01 ± 1.75E03  6.56E01 ± 2.43E03
UF2 6.50E01 ± 2.98E03 – 6.52E01 ± 2.22E03 – 6.51E01 ± 3.13E03 – 6.53E01 ± 2.09E03 – 6.52E01 ± 4.02E03 – 6.54E01 ± 2.15E03
UF3 3.73E01 ± 3.84E02 – 6.41E01 ± 2.52E02 – 5.24E01 ± 1.10E01 – 6.36E01 ± 2.75E02 – 6.48E01 ± 2.07E02  6.50E01 ± 8.98E03
UF4 2.78E01 ± 9.90E04  2.78E01 ± 9.93E04 + 2.81E01 ± 8.36E04 + 2.77E01 ± 6.67E04 – 2.78E01 ± 6.44E04 + 2.77E01 ± 1.51E03
UF5 3.75E02 ± 1.30E01 – 2.91E01 ± 5.77E02 + 1.82E01 ± 7.61E02 – 3.88E03 ± 9.32E02 – 1.71E01 ± 1.88E01 – 2.53E01 ± 3.76E02
UF6 2.26E01 ± 1.35E01  2.31E01 ± 1.13E02 – 2.38E01 ± 4.13E02  2.10E01 ± 2.35E01  2.49E01 ± 3.59E02  2.53E01 ± 5.36E02
UF7 4.76E01 ± 8.66E03 – 4.83E01 ± 1.66E03 – 4.75E01 ± 5.77E03 – 4.86E01 ± 3.81E03 – 4.87E01 ± 2.17E03 – 4.88E01 ± 2.45E03
UF8 2.27E01 ± 5.57E02 – 1.82E01 ± 2.66E02 – 1.90E01 ± 4.71E02 – 1.92E01 ± 7.40E02 – 1.89E01 ± 5.97E02 – 3.02E01 ± 3.12E03
UF9 6.01E01 ± 1.37E01  6.09E01 ± 1.04E01  5.93E01 ± 1.36E01  5.57E01 ± 1.14E01  5.54E01 ± 9.90E02  5.49E01 ± 1.19E01
UF10 2.26E02 ± 1.14E01 – 5.34E03 ± 1.11E01 – 1.25E02 ± 1.52E01 – 2.76E02 ± 5.62E02 – 1.31E02 ± 5.32E02 – 1.32E01 ± 7.59E02
UF11 6.24E06 ± 4.16E05 – 1.39E06 ± 1.16E05 – 1.97E05 ± 4.50E05 – 5.85E06 ± 2.96E05 – 2.56E05 ± 4.32E05 – 3.08E04 ± 2.86E04
UF12 0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00
UF13 0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00  0.00E00 ± 0.00E00



CF1 4.39E01 ± 2.00E02 + 4.39E01 ± 2.00E02  4.49E01 ± 2.00E02 + 4.39E01 ± 2.00E02 + 4.39E01 ± 2.00E02 + 4.39E01 ± 2.00E02
CF2 6.48E01 ± 5.53E03 – 6.50E01 ± 9.16E04 – 6.50E01 ± 2.78E03 – 6.50E01 ± 1.36E03 – 6.50E01 ± 9.08E04 – 6.51E01 ± 1.17E03
CF3 1.26E01 ± 6.99E02  1.53E01 ± 8.58E02  1.18E01 ± 6.50E02  1.43E01 ± 8.13E02  1.50E01 ± 6.51E02  1.46E01 ± 6.82E02
CF4 4.80E01 ± 4.18E02 – 5.35E01 ± 5.02E03 – 5.17E01 ± 9.68E03 – 5.41E01 ± 1.07E02 – 5.44E01 ± 4.64E03 – 5.49E01 ± 4.00E03
CF5 1.48E01 ± 2.24E01 – 3.93E01 ± 2.46E01  3.08E01 ± 1.83E01  2.95E01 ± 1.40E01 – 3.19E01 ± 2.05E01  3.56E01 ± 1.72E01
CF6 6.52E01 ± 1.45E02 – 6.56E01 ± 3.57E03 – 6.54E01 ± 4.53E03 – 6.56E01 ± 3.46E03 – 6.57E01 ± 1.97E03  6.58E01 ± 1.96E03
CF7 4.95E01 ± 1.60E01  5.96E01 ± 1.84E02 + 4.74E01 ± 9.90E02 – 4.39E01 ± 2.11E01 – 5.77E01 ± 1.86E01  5.56E01 ± 4.47E02
CF8 1.79E01 ± 4.32E02 – 1.82E01 ± 4.68E02 – 1.91E01 ± 4.60E02 – 1.94E01 ± 4.83E02  1.90E01 ± 4.84E02  2.04E01 ± 5.07E02
CF9 2.46E01 ± 1.52E02 – 2.42E01 ± 1.87E02 – 2.49E01 ± 1.32E02 – 2.59E01 ± 1.99E02 – 2.49E01 ± 1.84E02 – 3.25E01 ± 2.91E02
CF10 9.57E02 ± 1.10E01 – 1.67E01 ± 1.68E02 – 1.67E01 ± 1.04E02 – 1.69E01 ± 7.48E02 – 1.68E01 ± 9.82E03 – 2.07E01 ± 6.46E02

+, ≈ and – indicate that the corresponding algorithm is statistically significantly better than, similar to, and worse than the last algorithm, respectively.


Table 1 reports the detailed experimental results, where each tuple tabulates the median and IQR of hypervolume over 30 independent runs on 58 MOPs with a maximum of 300,000 function evaluations (FES). Table 2 shows the win/tie/lose (w/t/l)
statistical results under the Wilcoxon rank sum test with 95% confidence level. Each tuple w/t/l means that the algorithm at
the corresponding column wins on w MOPs, ties on t MOPs, and loses on l MOPs, compared to the algorithm at the corre-
sponding row.
In Table 2, the w/t/l values between the three extended versions by our framework and the classical MOEAs are 35/21/2, 33/18/7 and 42/11/5, respectively. The extended versions show improvements over the classical MOEAs on 55.2% of the 58 MOPs, which indicates that our framework can significantly improve the robustness of classical MOEAs.
In addition, the acceptance rule that decides which solutions survive into the next population is a key factor influencing the performance of MOEAs. From Table 1, SPEA2-R exhibits higher hypervolume than MOEA/D-R on the DTLZ and WFG family problems, whose Pareto Fronts (PFs) share the similar geometric characteristic $f_1^2 + \cdots + f_m^2 = 1$. The reason is that the k-nearest neighborhood density estimator in SPEA2 makes it more suitable for solving PFs with circular ($m = 2$) and spherical ($m = 3$) shapes.

4.3. Comparison with the adaptive MOEAs

In this section, we consider several state-of-the-art adaptive DE variants (JADE [57,56], jDE [5,52], SaDE [43,28], CoDE
[50]) for comparison and use Decomposition-based MOEAs as the baseline. The goal is to verify their effectiveness against
our proposed framework.
JADE and jDE only adapt the control parameters. Each of them uses a fixed evolutionary operator, namely "DE/current-to-pbest/bin" and "DE/rand/2/bin", respectively. In JADE, the control parameters CR and F are generated with a Normal distribution ($\sigma_{CR} = 0.1$) and a Cauchy distribution ($\sigma_F = 0.1$), respectively. JADE introduces three new parameters, including another variable $c = 0.1$. In jDE, the values of CR and F are encoded as part of the individuals and are co-evolved along with the search. jDE also involves four predefined parameters, namely $F_l = 0.1$, $F_u = 0.9$, $\tau_1 = \tau_2 = 0.1$.

SaDE and CoDE select evolutionary operators and adapt control parameters simultaneously. Both of them design an operator pool that includes "DE/rand/1/bin", "DE/rand/2/bin" and "DE/current-to-rand/1/bin". In SaDE, the control parameters in DE are generated with a Normal distribution, where $\sigma_{CR} = 0.3$, $\mu_F = 0.5$ and $\sigma_F = 0.1$. SaDE includes another predefined parameter (i.e., a learning period of 50 generations). In CoDE, the three operators are combined with a set of fixed parameter settings, namely $[CR = 0.1, F = 0.1]$, $[CR = 1.0, F = 0.5]$ and $[CR = 0.2, F = 0.8]$, which were identified based on empirical studies. The first and second parameter settings encourage exploitation and exploration, respectively, while $[CR = 0.2, F = 0.8]$ is suitable for solving Pareto Sets (PSs) with interdependent variables [50].
In contrast to the previous adaptive approaches, we consider a pool of evolutionary operators comprising "DE/rand/1/bin", "DE/rand/2/bin", "DE/current-to-rand/1/bin", "SBX" and "polynomial mutation". To facilitate a fair comparison, we also implement MOEA/D-R3, which uses our proposed framework to select among only the first three operators, consistent with SaDE and CoDE. Furthermore, our framework has only one predefined parameter (the quantization size q). In the spirit of ease-of-use, this is in contrast to the previous adaptive approaches, which have at least three predefined parameters.
Table 3 reports detailed results of MOEA/D-JADE, MOEA/D-jDE, MOEA/D-SaDE, MOEA/D-CoDE, MOEA/D-R3 and MOEA/D-
R. Table 4 on the other hand, summarizes the win/tie/lose (w/t/l) statistical comparison results.
In Table 4, the w/t/l values between MOEA/D-R3 and MOEA/D-JADE, MOEA/D-jDE are 25/27/6 and 26/24/8, respectively. It
indicates that operator selection is beneficial for MOEAs to solve various MOPs. The w/t/l values between MOEA/D-R3 and
MOEA/D-SaDE, MOEA/D-CoDE are 20/28/10 and 19/30/9, respectively. Our MOEA/D-R3 not only has fewer predefined
parameters than MOEA/D-SaDE and MOEA/D-CoDE, but also is more effective than them. In addition, CoDE fixes the two
parameters (CR and F) before the algorithm starts. MOEA/D-R3 adapts the control parameters as the algorithm progresses.
It demonstrates that parameter adaptation based on reputation in our framework is able to automatically tune parameters
on various MOPs.
Furthermore, the w/t/l values between MOEA/D-R and MOEA/D-JADE, MOEA/D-jDE, MOEA/D-SaDE, MOEA/D-CoDE, MOEA/D-R3 are 33/21/4, 31/19/8, 31/21/6, 32/17/9 and 14/39/5, respectively. MOEA/D-R thus emerges as the best among all six algorithms. The selection over the two extra evolutionary operators "SBX" and "polynomial mutation" does provide benefits to the adaptive MOEAs.

Table 4
Statistical results (w/t/l) of adaptive MOEAs with Decomposition-based approaches.

              MOEA/D-jDE  MOEA/D-SaDE  MOEA/D-CoDE  MOEA/D-R3  MOEA/D-R
MOEA/D-JADE   24/22/12    19/31/8      22/27/9      25/27/6    33/21/4
MOEA/D-jDE                16/29/13     18/32/8      26/24/8    31/19/8
MOEA/D-SaDE                            15/29/14     20/28/10   31/21/6
MOEA/D-CoDE                                         19/30/9    32/17/9
MOEA/D-R3                                                      14/39/5
Table 5
Hypervolume median and IQR by MOEA/D based on DE operators with/without parameter adaptation over 30 independent runs on 58 MOPs with 300,000 FES. "1/2/3" means "DE/rand/1/bin", "DE/rand/2/bin" and "DE/current-to-rand/1/bin", respectively.

MOPs MOEA/D1 MOEA/D2 MOEA/D3 MOEA/D3-R MOEA/D1-R MOEA/D2-R


ZDT1 4.46E-01 ± 3.26E-01 – 6.59E-01 ± 1.65E-03 – 6.60E-01 ± 1.39E-03 – 6.62E-01 ± 1.30E-05 – 6.62E-01 ± 4.29E-06 ≈ 6.62E-01 ± 2.61E-07
ZDT2 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 3.28E-01 ± 1.53E-04 – 3.28E-01 ± 5.62E-06 – 3.28E-01 ± 6.84E-08 – 3.28E-01 ± 1.35E-09
ZDT3 5.13E-01 ± 1.23E-01 – 5.14E-01 ± 8.06E-05 – 5.14E-01 ± 1.03E-04 – 5.14E-01 ± 1.22E-05 – 5.14E-01 ± 5.76E-06 ≈ 5.14E-01 ± 4.54E-06
ZDT4 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 2.59E-03 – 0.00E+00 ± 0.00E+00 – 3.82E-02 ± 3.08E-01 ≈ 1.59E-01 ± 3.29E-01
ZDT6 4.01E-01 ± 1.95E-07 – 4.01E-01 ± 1.12E-07 – 4.01E-01 ± 6.16E-05 – 4.01E-01 ± 3.93E-06 – 4.01E-01 ± 8.01E-10 + 4.01E-01 ± 5.94E-09
DTLZ1 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 1.04E-01 – 7.98E-01 ± 6.94E-01 ≈ 7.98E-01 ± 5.21E-01
DTLZ2 3.92E-01 ± 8.01E-04 – 3.92E-01 ± 9.75E-04 – 3.93E-01 ± 3.95E-04 – 4.29E-01 ± 6.98E-06 ≈ 4.29E-01 ± 2.52E-06 ≈ 4.29E-01 ± 2.09E-06
DTLZ3 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00
DTLZ4 3.91E-01 ± 3.39E-03 – 3.97E-01 ± 2.53E-03 – 3.92E-01 ± 2.94E-03 – 4.22E-01 ± 5.68E-06 ≈ 4.22E-01 ± 2.22E-06 ≈ 4.22E-01 ± 1.88E-06
DTLZ5 9.15E-02 ± 4.27E-06 + 9.15E-02 ± 1.33E-05 + 9.15E-02 ± 3.22E-06 + 8.04E-02 ± 4.32E-07 + 8.04E-02 ± 3.84E-07 ≈ 8.04E-02 ± 5.35E-08
DTLZ6 9.23E-02 ± 1.22E-06 + 9.24E-02 ± 1.55E-06 + 9.24E-02 ± 1.21E-06 + 8.12E-02 ± 2.04E-07 ≈ 8.12E-02 ± 1.66E-07 ≈ 8.12E-02 ± 5.24E-08
DTLZ7 2.17E-01 ± 4.64E-03 – 2.15E-01 ± 3.24E-03 – 2.15E-01 ± 8.06E-04 – 2.67E-01 ± 1.30E-03 – 2.67E-01 ± 1.17E-03 ≈ 2.67E-01 ± 6.87E-04
WFG1 6.25E-01 ± 2.36E-02 – 6.24E-01 ± 5.53E-03 – 6.30E-01 ± 3.58E-03 – 6.33E-01 ± 1.28E-04 + 6.33E-01 ± 3.44E-05 + 6.31E-01 ± 2.17E-03
WFG2 5.62E-01 ± 2.89E-04 – 5.63E-01 ± 9.23E-06 – 5.63E-01 ± 1.68E-05 – 5.63E-01 ± 3.23E-07 ≈ 5.63E-01 ± 1.92E-07 ≈ 5.63E-01 ± 2.55E-07
WFG3 4.42E-01 ± 1.21E-05 – 4.42E-01 ± 7.09E-06 – 4.42E-01 ± 9.81E-06 – 4.42E-01 ± 3.96E-07 + 4.42E-01 ± 2.95E-09 ≈ 4.42E-01 ± 1.78E-09
WFG4 2.14E-01 ± 1.52E-03 – 2.10E-01 ± 2.79E-03 – 2.16E-01 ± 9.08E-04 – 2.18E-01 ± 1.70E-04 ≈ 2.18E-01 ± 1.71E-04 ≈ 2.18E-01 ± 1.69E-04
WFG5 1.95E-01 ± 4.24E-05 ≈ 1.95E-01 ± 1.75E-05 – 1.95E-01 ± 6.56E-05 ≈ 1.95E-01 ± 8.17E-03 + 1.95E-01 ± 4.10E-05 ≈ 1.95E-01 ± 4.39E-05
WFG6 2.09E-01 ± 1.04E-04 – 2.09E-01 ± 1.05E-05 – 2.09E-01 ± 1.03E-05 – 2.10E-01 ± 2.97E-06 – 2.10E-01 ± 9.46E-08 ≈ 2.10E-01 ± 1.34E-07
WFG7 2.09E-01 ± 2.88E-05 – 2.09E-01 ± 8.94E-06 – 2.09E-01 ± 5.34E-06 – 2.10E-01 ± 1.80E-06 – 2.10E-01 ± 4.99E-07 ≈ 2.10E-01 ± 5.82E-07
WFG8 1.51E-01 ± 1.30E-03 – 1.54E-01 ± 1.21E-03 ≈ 1.53E-01 ± 5.67E-04 – 1.53E-01 ± 1.14E-03 ≈ 1.54E-01 ± 5.31E-02 ≈ 1.54E-01 ± 5.32E-02
WFG9 2.40E-01 ± 6.98E-04 ≈ 2.38E-01 ± 4.85E-04 – 2.39E-01 ± 7.82E-04 ≈ 2.39E-01 ± 9.47E-04 ≈ 2.39E-01 ± 7.94E-04 ≈ 2.39E-01 ± 9.91E-04
WFG1D3 8.81E-01 ± 1.11E-02 – 2.97E-01 ± 9.97E-02 – 8.88E-01 ± 7.12E-03 – 8.96E-01 ± 3.78E-03 – 9.04E-01 ± 3.37E-03 + 9.01E-01 ± 4.07E-03
WFG2D3 8.89E-01 ± 3.94E-03 – 8.91E-01 ± 3.65E-03 – 8.95E-01 ± 3.89E-03 – 9.16E-01 ± 2.21E-04 ≈ 9.16E-01 ± 2.79E-04 ≈ 9.16E-01 ± 3.77E-04
WFG3D3 3.02E-01 ± 1.27E-04 + 3.02E-01 ± 1.85E-04 + 3.02E-01 ± 1.08E-04 + 2.87E-01 ± 3.36E-04 ≈ 2.87E-01 ± 1.28E-04 ≈ 2.87E-01 ± 2.30E-04
WFG4D3 3.63E-01 ± 6.78E-03 – 3.66E-01 ± 7.23E-03 – 3.71E-01 ± 5.41E-03 – 4.15E-01 ± 6.71E-04 – 4.17E-01 ± 1.03E-03 + 4.16E-01 ± 1.69E-03
WFG5D3 3.31E-01 ± 3.46E-03 – 3.35E-01 ± 3.45E-03 – 3.33E-01 ± 2.46E-03 – 3.80E-01 ± 4.74E-03 ≈ 3.81E-01 ± 5.71E-04 + 3.81E-01 ± 9.70E-04
WFG6D3 3.77E-01 ± 1.31E-03 – 3.77E-01 ± 7.92E-04 – 3.78E-01 ± 1.04E-03 – 4.15E-01 ± 1.08E-03 – 4.16E-01 ± 1.71E-03 ≈ 4.16E-01 ± 2.62E-03
WFG7D3 3.70E-01 ± 1.75E-03 – 3.69E-01 ± 2.30E-03 – 3.69E-01 ± 7.97E-04 – 4.07E-01 ± 1.39E-03 + 4.06E-01 ± 2.11E-03 ≈ 4.05E-01 ± 1.83E-03
WFG8D3 2.73E-01 ± 4.04E-03 – 2.74E-01 ± 3.60E-03 – 2.68E-01 ± 4.88E-03 – 2.93E-01 ± 1.04E-01 ≈ 2.94E-01 ± 1.03E-01 ≈ 3.90E-01 ± 1.00E-01
WFG9D3 3.65E-01 ± 4.57E-03 – 3.65E-01 ± 2.97E-03 – 3.64E-01 ± 5.23E-03 – 4.04E-01 ± 2.87E-03 + 3.99E-01 ± 4.33E-03 ≈ 3.99E-01 ± 3.49E-03
LZ09F1 4.76E-01 ± 4.84E-02 – 6.46E-01 ± 1.42E-02 – 6.29E-01 ± 4.77E-02 – 6.61E-01 ± 1.05E-04 – 6.61E-01 ± 4.62E-05 – 6.61E-01 ± 4.88E-05
LZ09F3 3.60E-03 ± 1.90E-02 – 1.95E-01 ± 1.44E-01 – 6.23E-01 ± 1.02E-01 – 6.50E-01 ± 4.21E-03 – 6.54E-01 ± 2.93E-03 – 6.56E-01 ± 1.93E-03
LZ09F4 3.30E-02 ± 5.90E-02 – 2.14E-01 ± 1.05E-01 – 6.09E-01 ± 1.95E-01 – 6.51E-01 ± 9.70E-03 – 6.53E-01 ± 4.42E-03 – 6.56E-01 ± 5.31E-03
LZ09F7 6.49E-02 ± 1.58E-01 – 4.98E-02 ± 6.35E-01 – 6.55E-01 ± 2.01E-02 – 6.57E-01 ± 7.45E-03 – 6.60E-01 ± 1.50E-03 ≈ 6.60E-01 ± 1.34E-03
LZ09F9 6.43E-03 ± 2.47E-02 – 6.55E-02 ± 4.32E-02 – 2.48E-01 ± 1.37E-02 – 2.98E-01 ± 7.04E-03 – 3.17E-01 ± 3.69E-03 ≈ 3.18E-01 ± 3.07E-03
UF1 1.61E-01 ± 1.06E-01 – 5.12E-01 ± 5.26E-02 – 5.93E-01 ± 3.79E-02 – 6.47E-01 ± 1.24E-02 – 6.54E-01 ± 3.78E-03 – 6.56E-01 ± 2.18E-03
UF2 5.26E-01 ± 2.65E-02 – 5.81E-01 ± 1.21E-02 – 6.19E-01 ± 2.92E-02 – 6.51E-01 ± 4.44E-03 – 6.52E-01 ± 3.98E-03 ≈ 6.52E-01 ± 1.97E-03
UF3 9.99E-02 ± 3.50E-02 – 2.70E-01 ± 8.73E-02 – 3.68E-01 ± 1.12E-01 – 5.14E-01 ± 6.89E-02 – 6.43E-01 ± 2.20E-02 ≈ 6.48E-01 ± 1.73E-02
UF4 2.19E-01 ± 1.40E-02 – 2.34E-01 ± 1.24E-02 – 2.29E-01 ± 1.63E-02 – 2.76E-01 ± 1.81E-03 – 2.78E-01 ± 7.49E-04 ≈ 2.78E-01 ± 7.77E-04
UF5 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 1.10E-03 ± 2.35E-02 – 2.04E-01 ± 1.50E-01 – 2.76E-01 ± 8.00E-02
UF6 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 1.03E-01 ± 1.66E-01 – 2.26E-01 ± 1.13E-01 – 2.32E-01 ± 4.35E-02 ≈ 2.45E-01 ± 3.20E-02
UF7 5.96E-02 ± 6.33E-02 – 3.61E-01 ± 9.24E-02 – 4.60E-01 ± 1.88E-02 – 4.83E-01 ± 4.85E-03 – 4.86E-01 ± 3.75E-03 ≈ 4.86E-01 ± 2.15E-03
UF8 1.47E-01 ± 4.91E-02 – 2.30E-01 ± 4.72E-02 ≈ 2.82E-01 ± 8.54E-03 + 1.90E-01 ± 5.82E-03 – 2.35E-01 ± 6.64E-02 ≈ 2.37E-01 ± 1.98E-02
UF9 2.37E-01 ± 7.16E-02 – 4.84E-01 ± 6.54E-02 – 5.38E-01 ± 9.98E-02 ≈ 5.35E-01 ± 1.35E-01 ≈ 5.97E-01 ± 1.18E-01 ≈ 5.51E-01 ± 1.08E-01
UF10 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 4.61E-03 – 2.51E-02 ± 3.75E-02 ≈ 3.43E-02 ± 1.05E-01 ≈ 3.39E-02 ± 7.69E-02
UF11 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 0.00E+00 – 2.18E-04 ± 1.57E-04 + 4.64E-06 ± 1.57E-05 ≈ 1.33E-05 ± 3.51E-05 ≈ 1.32E-05 ± 6.81E-05
UF12 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00
UF13 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00
CF1 4.39E-01 ± 6.51E-04 – 4.39E-01 ± 2.00E-02 + 4.39E-01 ± 2.43E-05 – 4.59E-01 ± 2.00E-02 ≈ 4.39E-01 ± 2.00E-02 – 4.39E-01 ± 2.00E-02
CF2 6.11E-01 ± 8.21E-02 – 6.50E-01 ± 2.04E-03 – 6.45E-01 ± 1.20E-02 – 6.50E-01 ± 1.40E-03 ≈ 6.50E-01 ± 7.37E-04 ≈ 6.50E-01 ± 8.20E-04
CF3 0.00E+00 ± 0.00E+00 – 0.00E+00 ± 1.86E-02 – 2.81E-02 ± 8.28E-02 – 1.09E-01 ± 7.04E-02 ≈ 1.44E-01 ± 7.51E-02 ≈ 1.30E-01 ± 5.73E-02
CF4 2.64E-01 ± 5.99E-02 – 4.27E-01 ± 7.33E-02 – 4.46E-01 ± 9.70E-02 – 5.35E-01 ± 6.72E-03 – 5.43E-01 ± 4.59E-03 ≈ 5.45E-01 ± 4.95E-03
CF5 0.00E+00 ± 0.00E+00 – 9.96E-02 ± 1.81E-01 – 2.10E-01 ± 2.40E-01 – 3.31E-01 ± 1.10E-01 – 3.16E-01 ± 2.67E-01 ≈ 4.07E-01 ± 2.72E-01
CF6 5.64E-01 ± 4.25E-02 – 6.54E-01 ± 3.03E-03 – 6.16E-01 ± 2.10E-02 – 6.51E-01 ± 7.69E-03 – 6.57E-01 ± 2.79E-03 ≈ 6.57E-01 ± 2.23E-03
CF7 0.00E+00 ± 0.00E+00 – 3.50E-02 ± 1.32E-01 – 3.14E-02 ± 1.45E-01 – 4.13E-01 ± 1.76E-01 ≈ 5.59E-01 ± 2.73E-01 ≈ 5.62E-01 ± 2.62E-01
CF8 2.10E-01 ± 4.81E-02 + 2.27E-01 ± 4.84E-02 + 2.30E-01 ± 3.89E-02 + 2.10E-01 ± 4.22E-02 ≈ 1.95E-01 ± 4.95E-02 ≈ 1.95E-01 ± 3.45E-02
CF9 2.97E-01 ± 2.29E-02 + 3.00E-01 ± 3.73E-02 + 3.09E-01 ± 3.18E-02 + 2.63E-01 ± 1.54E-02 + 2.51E-01 ± 1.47E-02 ≈ 2.42E-01 ± 1.71E-02
CF10 3.82E-02 ± 6.18E-02 – 1.66E-01 ± 8.80E-02 ≈ 1.66E-01 ± 7.52E-02 ≈ 1.54E-01 ± 9.11E-02 ≈ 1.65E-01 ± 2.97E-02 ≈ 1.62E-01 ± 2.73E-02

+, ≈ and – indicate that the algorithm in the corresponding column is statistically significantly better than, similar to, and worse than the algorithm in the last column, respectively.


4.4. DE operators with parameter adaptation by reputation

The proposed framework reduces to a simplified version (i.e., parameter adaptation by reputation only) when the evolutionary operator is fixed before the algorithm starts. In this section, to investigate the effectiveness of parameter adaptation by reputation, three DE operators are adopted in MOEA/D for solving various MOPs.
The classical MOEA/D [58] involves two operators, ‘‘DE/rand/1/bin’’ and ‘‘polynomial mutation’’. To avoid the impact of other operators, ‘‘polynomial mutation’’ is not used in this section. We implement three versions of MOEA/D, named MOEA/D1, MOEA/D2 and MOEA/D3, where ‘‘1/2/3’’ means that MOEA/D only adopts ‘‘DE/rand/1/bin’’, ‘‘DE/rand/2/bin’’ or ‘‘DE/current-to-rand/1/bin’’, respectively. The DE parameters in MOEA/D1-3 are fixed to CR = 1.0 and F = 0.4. These three versions are then extended by our framework and named MOEA/D1-R, MOEA/D2-R and MOEA/D3-R, respectively. Table 5 shows the detailed results of MOEA/D and MOEA/D-R with the three different DE operators, and Table 6 summarizes the win/tie/lose (w/t/l) statistical comparison results.
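As a concrete illustration of the fixed-parameter baselines described above, the sketch below shows how a ‘‘DE/rand/1/bin’’ trial vector can be generated with CR = 1.0 and F = 0.4. It is a minimal stand-alone example under our own naming, not the jMetal implementation used in the experiments, and bound repair is omitted.

```java
import java.util.Random;

/** Minimal sketch of ''DE/rand/1/bin'' trial-vector generation with fixed CR and F. */
public class DERand1Bin {
  private static final double CR = 1.0;   // crossover rate used in MOEA/D1-3
  private static final double F  = 0.4;   // scaling factor used in MOEA/D1-3
  private static final Random RNG = new Random();

  /** xi: current solution; xr1, xr2, xr3: three distinct randomly chosen solutions. */
  public static double[] trial(double[] xi, double[] xr1, double[] xr2, double[] xr3) {
    int n = xi.length;
    double[] v = new double[n];
    int jRand = RNG.nextInt(n);            // guarantees at least one mutated component
    for (int j = 0; j < n; j++) {
      if (RNG.nextDouble() < CR || j == jRand) {
        v[j] = xr1[j] + F * (xr2[j] - xr3[j]);   // DE/rand/1 mutation
      } else {
        v[j] = xi[j];                            // binomial crossover keeps xi's component
      }
    }
    return v;                              // bound repair is omitted for brevity
  }
}
```

Note that with CR = 1.0 every component is taken from the mutant vector, so the operator behaves as a pure DE/rand/1 mutation.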
In Table 6, the w/t/l values between MOEA/D3 and MOEA/D1, MOEA/D2 are 41/14/3 and 30/19/9, respectively, so MOEA/D3 clearly outperforms the other two fixed-parameter versions. From the definition in Eq. (15), ‘‘DE/current-to-rand/1/bin’’ degenerates to ‘‘DE/current/1/bin’’ when rand() = 0 and to ‘‘DE/rand/1/bin’’ when rand() = 1. The high perturbation of ‘‘DE/current-to-rand/1/bin’’ helps MOEA/D3 to maintain population diversity. The w/t/l values between MOEA/D3-R and MOEA/D3 are 40/11/7, between MOEA/D1-R and MOEA/D1 are 48/5/5, and between MOEA/D2-R and MOEA/D2 are 46/6/6. The three DE operators combined with our framework achieve higher performance than the original DE operators on 66.7% of the 58 MOPs. It is evident that parameter adaptation by reputation significantly enhances the robustness of the DE operators.
Furthermore, the statistical results in Table 6 show that the w/t/l values between MOEA/D1-R, MOEA/D2-R and MOEA/D3-R are 27/26/5 and 28/23/7, respectively. It is worth noticing that MOEA/D3 emerges as the best among the three original DE operators, whereas MOEA/D3-R is the worst among the three operators with parameter adaptation by reputation. This indicates that the uncertain perturbation (rand()) in ‘‘DE/current-to-rand/1/bin’’ is less efficient than our reputation-based framework in adapting the control parameters.

4.5. Comparison of the MOEA/D variants

In this section, two additional MOEA/D variants (called MOEA/D-rand and MOEA/D-R’) are implemented to deal with the MOPs. In contrast to the adaptive algorithms, MOEA/D-rand simply selects the evolutionary operators at random while adapting the control parameters. MOEA/D-R’ implements another version of our framework in which the parameter g in Eq. (5) is fixed to g = 0.3. In both MOEA/D-rand and MOEA/D-R’, the operator pool consists of the five evolutionary operators mentioned in Section 4.1, and the control parameters are quantized (q = 3) into segments that obey the Normal distribution, the same as in MOEA/D-R. The purpose is to compare the effectiveness of our framework against random selection and against the update scheme with a fixed g value.
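To make the contrast concrete, the sketch below juxtaposes uniform random operator selection (as in MOEA/D-rand) with reputation-proportional (roulette-wheel) selection, which is conceptually how highly reputable operators become more likely to be chosen in our framework. It is an illustrative sketch only; the array and method names are ours and it does not reproduce the exact selection and update rules of Eq. (5).

```java
import java.util.Random;

/** Illustrative contrast between random and reputation-based operator selection. */
public class OperatorSelection {
  private static final Random RNG = new Random();

  /** MOEA/D-rand style: every operator in the pool is equally likely. */
  public static int selectRandom(int numOperators) {
    return RNG.nextInt(numOperators);
  }

  /** Reputation-proportional (roulette-wheel) selection: operators with higher
      reputation scores are chosen with higher probability. */
  public static int selectByReputation(double[] reputation) {
    double sum = 0.0;
    for (double r : reputation) {
      sum += r;
    }
    double pick = RNG.nextDouble() * sum;
    double acc = 0.0;
    for (int o = 0; o < reputation.length; o++) {
      acc += reputation[o];
      if (pick <= acc) {
        return o;
      }
    }
    return reputation.length - 1;          // numerical safety fallback
  }
}
```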
Table 7 shows the detailed hypervolume results of MOEA/D-rand, MOEA/D-R’ and MOEA/D-R over 30 independent runs on the 58 MOPs with 300,000 function evaluations. Table 8 summarizes the win/tie/lose (w/t/l) statistical comparison results of these three algorithms.
In Table 8, the w/t/l values between MOEA/D-R and MOEA/D-rand are 34/20/4, which indicates that the adaptive approach based on reputation is better than simple random selection. The w/t/l values between MOEA/D-R and MOEA/D-R’ are 27/27/4, which demonstrates that our framework is better than the update scheme with a fixed g value. In addition, it is worth noticing that our framework is simpler and more efficient than MOEA/D-R’, as it replaces the predefined parameter g with the reputation scores of the previous generation, R_{g-1}(·).

4.6. Effect of Pareto sets on operator selection

An important factor that affects the search ability of Pareto dominance-based MOEAs (e.g., NSGAII [16], SPEA2 [63]) in the variable space is the interdependency among the decision variables in the Pareto Sets (PSs). In this experiment, the effect of typical PSs on different operators is investigated.
The algorithms NSGAII-R and SPEA2-R are tested on four typical problems: DTLZ1, WFG1D3, LZ09F1 and UF1. In the first two problems, DTLZ1 and WFG1D3, the decision variables in the PSs are independent, whereas in LZ09F1 and UF1 the decision variables are interdependent [35,59].

Table 6
Statistical comparison results of MOEA/D based on DE operators with/without parameter adaptation. Each entry gives the win/tie/lose (w/t/l) counts of the column algorithm against the row algorithm over the 58 MOPs.

            MOEA/D2    MOEA/D3    MOEA/D3-R   MOEA/D1-R   MOEA/D2-R
MOEA/D1     32/20/6    41/14/3    46/7/5      48/5/5      48/5/5
MOEA/D2                30/19/9    41/9/8      46/6/6      46/6/6
MOEA/D3                           40/11/7     45/6/7      44/7/7
MOEA/D3-R                                     27/26/5     28/23/7
MOEA/D1-R                                                 7/46/5

Table 7
Hypervolume median and IQR by MOEA/D variants over 30 independent runs on 58 MOPs with 300,000 FES.

MOPs MOEA/D-rand MOEA/D-R’ MOEA/D-R
ZDT1 6.62E-01 ± 1.23E-05 – 6.62E-01 ± 7.45E-08 + 6.62E-01 ± 4.95E-07
ZDT2 3.28E-01 ± 7.85E-06 – 3.28E-01 ± 9.41E-10 + 3.28E-01 ± 1.18E-08
ZDT3 5.14E-01 ± 3.07E-05 – 5.14E-01 ± 7.23E-07 ≈ 5.14E-01 ± 2.17E-06
ZDT4 6.62E-01 ± 2.02E-05 – 3.82E-02 ± 3.41E-01 – 6.62E-01 ± 1.05E-04
ZDT6 4.01E-01 ± 1.23E-07 – 4.01E-01 ± 1.65E-09 ≈ 4.01E-01 ± 4.72E-09
DTLZ1 7.98E-01 ± 1.65E-07 + 1.04E-01 ± 7.72E-01 – 4.51E-01 ± 6.94E-01
DTLZ2 4.29E-01 ± 6.12E-06 ≈ 4.29E-01 ± 1.52E-06 ≈ 4.29E-01 ± 1.78E-06
DTLZ3 4.29E-01 ± 1.91E-04 + 0.00E+00 ± 0.00E+00 – 3.93E-01 ± 3.15E-04
DTLZ4 4.22E-01 ± 1.29E-05 ≈ 4.22E-01 ± 5.78E-07 ≈ 4.22E-01 ± 1.52E-06
DTLZ5 8.04E-02 ± 5.50E-07 ≈ 8.04E-02 ± 2.04E-08 ≈ 8.04E-02 ± 1.59E-07
DTLZ6 8.12E-02 ± 3.56E-07 ≈ 8.12E-02 ± 1.13E-08 ≈ 8.12E-02 ± 1.20E-07
DTLZ7 2.67E-01 ± 5.06E-04 ≈ 2.67E-01 ± 1.16E-03 ≈ 2.67E-01 ± 9.47E-04
WFG1 6.30E-01 ± 3.31E-03 – 6.33E-01 ± 9.45E-05 ≈ 6.33E-01 ± 6.43E-05
WFG2 5.63E-01 ± 4.63E-07 + 5.63E-01 ± 1.91E-07 ≈ 5.63E-01 ± 2.41E-07
WFG3 4.42E-01 ± 2.01E-06 + 4.42E-01 ± 2.50E-09 ≈ 4.42E-01 ± 2.41E-09
WFG4 2.18E-01 ± 9.77E-07 – 2.18E-01 ± 1.70E-04 – 2.18E-01 ± 1.39E-06
WFG5 1.95E-01 ± 6.48E-05 ≈ 1.95E-01 ± 4.24E-05 ≈ 1.95E-01 ± 6.56E-05
WFG6 2.10E-01 ± 4.13E-07 – 2.10E-01 ± 2.12E-05 – 2.10E-01 ± 1.69E-07
WFG7 2.10E-01 ± 1.30E-06 – 2.10E-01 ± 5.92E-07 + 2.10E-01 ± 1.11E-06
WFG8 1.80E-01 ± 5.34E-02 ≈ 1.53E-01 ± 5.28E-02 – 2.07E-01 ± 5.26E-02
WFG9 2.39E-01 ± 1.02E-03 ≈ 2.39E-01 ± 9.43E-04 – 2.39E-01 ± 1.03E-03
WFG1D3 9.04E-01 ± 9.88E-03 ≈ 9.05E-01 ± 5.51E-03 ≈ 9.07E-01 ± 5.89E-03
WFG2D3 9.16E-01 ± 2.49E-04 ≈ 9.05E-01 ± 1.13E-02 – 9.16E-01 ± 2.82E-04
WFG3D3 2.87E-01 ± 3.85E-04 ≈ 2.87E-01 ± 1.25E-04 ≈ 2.87E-01 ± 1.14E-04
WFG4D3 4.11E-01 ± 9.05E-04 – 4.17E-01 ± 1.29E-03 + 4.17E-01 ± 1.04E-03
WFG5D3 3.79E-01 ± 1.73E-03 – 3.81E-01 ± 2.17E-03 ≈ 3.81E-01 ± 6.81E-03
WFG6D3 4.15E-01 ± 1.83E-03 – 4.14E-01 ± 2.49E-03 – 4.17E-01 ± 2.29E-03
WFG7D3 4.05E-01 ± 1.41E-03 ≈ 4.04E-01 ± 2.36E-03 – 4.06E-01 ± 2.08E-03
WFG8D3 3.90E-01 ± 1.02E-01 ≈ 2.93E-01 ± 9.85E-02 – 3.91E-01 ± 7.33E-02
WFG9D3 4.00E-01 ± 2.93E-03 ≈ 3.96E-01 ± 2.02E-03 – 4.00E-01 ± 2.94E-03
LZ09F1 6.61E-01 ± 3.86E-05 – 6.61E-01 ± 2.80E-04 – 6.61E-01 ± 6.62E-05
LZ09F3 6.46E-01 ± 6.00E-03 – 6.51E-01 ± 2.25E-03 – 6.53E-01 ± 2.55E-03
LZ09F4 6.50E-01 ± 3.86E-03 – 6.52E-01 ± 8.38E-03 ≈ 6.53E-01 ± 5.27E-03
LZ09F7 6.58E-01 ± 1.86E-03 – 6.57E-01 ± 3.98E-03 – 6.59E-01 ± 2.37E-03
LZ09F9 2.93E-01 ± 9.80E-03 – 3.17E-01 ± 4.08E-03 ≈ 3.18E-01 ± 4.89E-03
UF1 6.44E-01 ± 4.74E-03 – 6.56E-01 ± 5.15E-03 ≈ 6.56E-01 ± 2.43E-03
UF2 6.50E-01 ± 3.84E-03 – 6.51E-01 ± 3.57E-03 – 6.54E-01 ± 2.15E-03
UF3 5.64E-01 ± 7.49E-02 – 6.50E-01 ± 1.75E-02 ≈ 6.50E-01 ± 8.98E-03
UF4 2.75E-01 ± 1.00E-03 – 2.78E-01 ± 1.01E-03 ≈ 2.77E-01 ± 1.51E-03
UF5 2.20E-01 ± 4.01E-02 – 2.14E-01 ± 6.92E-02 – 2.53E-01 ± 3.76E-02
UF6 2.41E-01 ± 1.59E-01 ≈ 2.33E-01 ± 5.11E-02 ≈ 2.53E-01 ± 5.36E-02
UF7 4.82E-01 ± 1.44E-03 – 4.85E-01 ± 3.82E-03 – 4.88E-01 ± 2.45E-03
UF8 1.85E-01 ± 3.92E-02 – 1.89E-01 ± 5.64E-02 – 3.02E-01 ± 3.12E-03
UF9 5.26E-01 ± 9.47E-02 – 5.52E-01 ± 1.21E-01 ≈ 5.49E-01 ± 1.19E-01
UF10 3.70E-02 ± 1.26E-01 – 3.99E-02 ± 7.42E-02 – 1.32E-01 ± 7.59E-02
UF11 0.00E+00 ± 3.77E-07 – 3.95E-06 ± 3.66E-05 – 3.08E-04 ± 2.86E-04
UF12 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00
UF13 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00 ≈ 0.00E+00 ± 0.00E+00
CF1 4.39E-01 ± 2.00E-02 – 4.39E-01 ± 2.00E-02 ≈ 4.39E-01 ± 2.00E-02
CF2 6.50E-01 ± 1.08E-03 – 6.50E-01 ± 1.06E-03 – 6.51E-01 ± 1.17E-03
CF3 1.20E-01 ± 7.72E-02 ≈ 1.29E-01 ± 5.20E-02 ≈ 1.46E-01 ± 6.82E-02
CF4 5.30E-01 ± 7.58E-03 – 5.36E-01 ± 1.09E-02 – 5.49E-01 ± 4.00E-03
CF5 3.06E-01 ± 1.90E-01 ≈ 3.32E-01 ± 1.82E-01 ≈ 3.56E-01 ± 1.72E-01
CF6 6.56E-01 ± 3.68E-03 – 6.54E-01 ± 6.79E-03 – 6.58E-01 ± 1.96E-03
CF7 5.37E-01 ± 1.36E-01 – 4.89E-01 ± 1.55E-01 ≈ 5.56E-01 ± 4.47E-02
CF8 2.10E-01 ± 3.36E-02 ≈ 1.94E-01 ± 5.12E-02 – 2.04E-01 ± 5.07E-02
CF9 2.57E-01 ± 1.68E-02 – 2.50E-01 ± 1.16E-02 – 3.25E-01 ± 2.91E-02
CF10 1.67E-01 ± 3.36E-02 – 1.44E-01 ± 9.93E-02 – 2.07E-01 ± 6.46E-02

+, ≈ and – indicate that the algorithm in the corresponding column is statistically significantly better than, similar to, and worse than the algorithm in the last column, respectively.

Table 8
Statistical comparison results of MOEA/D variants. Each entry gives the win/tie/lose (w/t/l) counts of the column algorithm against the row algorithm over the 58 MOPs.

              MOEA/D-R’   MOEA/D-R
MOEA/D-rand   20/26/12    34/20/4
MOEA/D-R’                 27/27/4

[Fig. 1 appears here: eight panels, (a) DTLZ1 by NSGAII-R, (b) WFG1D3 by NSGAII-R, (c) LZ09F1 by NSGAII-R, (d) UF1 by NSGAII-R, (e) DTLZ1 by SPEA2-R, (f) WFG1D3 by SPEA2-R, (g) LZ09F1 by SPEA2-R, (h) UF1 by SPEA2-R; each panel plots the expected probabilities of the five operators against generations.]
Fig. 1. The expected probabilities of operators E_g(O) along generations derived by Pareto dominance-based MOEAs (NSGAII-R, SPEA2-R) on DTLZ1, WFG1D3, LZ09F1 and UF1. Operator pool is O = {‘‘DE/rand/1/bin’’, ‘‘DE/rand/2/bin’’, ‘‘DE/current-to-rand/1/bin’’, ‘‘SBX’’, ‘‘polynomial mutation’’}.

Fig. 1 shows the expected probabilities of the operators E_g(O) at different generations on the four typical MOPs, derived by NSGAII-R and SPEA2-R. All probability values of the five operators are the means of 30 independent runs. From Fig. 1, we can see that under the Pareto dominance-based MOEAs, the evolutionary operators ‘‘SBX’’ and ‘‘polynomial mutation’’ make large contributions. It is also evident that ‘‘SBX’’ fares better on independent PSs (DTLZ1 and WFG1D3) than on interdependent PSs (LZ09F1 and UF1) [32], whereas ‘‘polynomial mutation’’ is more suitable for dealing with nonlinear variable dependencies than with independent variables [2].
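For readers unfamiliar with ‘‘SBX’’, the sketch below shows the standard simulated binary crossover for a single pair of variable values with distribution index etaC. It is a generic textbook-style illustration under our own naming, not the exact jMetal code used in the experiments.

```java
import java.util.Random;

/** Textbook-style sketch of simulated binary crossover (SBX) for one variable pair. */
public class SBXCrossover {
  private static final Random RNG = new Random();

  /** Returns the two children produced from parent values p1 and p2;
      etaC is the distribution index (larger values keep children near the parents). */
  public static double[] crossover(double p1, double p2, double etaC) {
    double u = RNG.nextDouble();
    double beta = (u <= 0.5)
        ? Math.pow(2.0 * u, 1.0 / (etaC + 1.0))
        : Math.pow(1.0 / (2.0 * (1.0 - u)), 1.0 / (etaC + 1.0));
    double c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2);
    double c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2);
    return new double[] {c1, c2};          // per-variable probability and bound repair omitted
  }
}
```

Because SBX recombines each decision variable independently of the others, it tends to work better when the variables in the PS are themselves independent, which is consistent with the observation above.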

4.7. Reputation of evolutionary operators

In this section, the effectiveness (reputation) of evolutionary operators is investigated under Decomposition-based
MOEAs (i.e., MOEA/D-R) on four typical MOPs.
The operators ‘‘DE/rand/1/bin’’ and ‘‘DE/rand/2/bin’’ are quite similar. However, ‘‘DE/current-to-rand/1/bin’’ is different, and it is formulated as follows:

v_{i,g} = x_{i,g} + rand() · (x_{r1,g} − x_{i,g}) + F · (x_{r2,g} − x_{r3,g}),

where x_i is the base vector and rand() ∈ [0, 1] is a uniform random value. The operator ‘‘DE/current-to-rand/1/bin’’ generates the new offspring based on x_i, whereas ‘‘DE/rand/1/bin’’ and ‘‘DE/rand/2/bin’’ search for new offspring solutions in the global region.
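The following minimal sketch implements the mutation formulated above; the class name is ours, rand() is drawn once per trial vector, and bound repair is omitted.

```java
import java.util.Random;

/** Minimal sketch of the ''DE/current-to-rand/1'' mutation formulated above. */
public class DECurrentToRand1 {
  private static final Random RNG = new Random();

  public static double[] mutate(double[] xi, double[] xr1, double[] xr2,
                                double[] xr3, double f) {
    int n = xi.length;
    double[] v = new double[n];
    double k = RNG.nextDouble();           // rand() in [0,1]: the uncertain perturbation
    for (int j = 0; j < n; j++) {
      // k = 0 keeps the base vector xi ("DE/current/1"); k = 1 moves to xr1 ("DE/rand/1")
      v[j] = xi[j] + k * (xr1[j] - xi[j]) + f * (xr2[j] - xr3[j]);
    }
    return v;
  }
}
```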
Fig. 2 shows the expected probabilities of the evolutionary operators E_g(O) at different generations derived by MOEA/D-R on DTLZ1, WFG1D3, LZ09F1 and UF1. All probability values of the five operators are the means of 30 independent runs.

[Fig. 2 appears here: four panels, (a) DTLZ1 by MOEA/D-R, (b) WFG1D3 by MOEA/D-R, (c) LZ09F1 by MOEA/D-R, (d) UF1 by MOEA/D-R; each panel plots the expected probabilities of the five operators against generations.]
Fig. 2. The expected probabilities of operators E_g(O) along generations derived by Decomposition-based MOEAs (MOEA/D-R) on DTLZ1, WFG1D3, LZ09F1 and UF1. Operator pool is O = {‘‘DE/rand/1/bin’’, ‘‘DE/rand/2/bin’’, ‘‘DE/current-to-rand/1/bin’’, ‘‘SBX’’, ‘‘polynomial mutation’’}.

Under the Decomposition-based MOEAs, ‘‘DE/rand/1/bin’’, ‘‘DE/rand/2/bin’’ and ‘‘DE/current-to-rand/1/bin’’ are more effective than ‘‘SBX’’ and ‘‘polynomial mutation’’. The probability of ‘‘DE/current-to-rand/1/bin’’ increases in the earlier stage and then gradually decreases in the later stage, whereas the probabilities of ‘‘DE/rand/1/bin’’ and ‘‘DE/rand/2/bin’’ gradually increase as MOEA/D-R progresses. From Eq. (15), ‘‘DE/current-to-rand/1/bin’’ has a search bias toward the base vector (x_i) and a large perturbation (rand()). In the earlier stage of MOEA/D-R, it performs well owing to this biased search, but its performance gradually deteriorates in the later stage because of the uncertain perturbation. ‘‘DE/rand/1/bin’’ and ‘‘DE/rand/2/bin’’ are not biased toward any search direction but possess strong exploration. Their probabilities are low in the earlier stage because of their unbiased search; in the later stage, however, they become more effective than ‘‘DE/current-to-rand/1/bin’’ thanks to their better exploration. Thus, the reputation of the evolutionary operators modeled by our framework reflects well their true effectiveness on the various MOPs.

4.8. Reputation of control parameters

In this experiment, the different suitability (reputation) of the control parameters is examined with MOEA/D-R on two typical MOPs. Fig. 3 shows the expected probabilities of the control parameters E_g(CR|o) and E_g(F|o) at different generations derived by MOEA/D-R on DTLZ1 and UF1, where CR, F ∈ [0, 1] are quantized into three segments. All reported probability values of the control parameters are the means of 30 independent runs. Due to space limitations, we only show the reputation of the control parameters for the operator o = ‘‘DE/rand/2/bin’’.
In Fig. 3, the probability of CR ∈ [0.67, 1.00] is high in the later stage on both DTLZ1 and UF1. A larger CR induces the operator to search in a broad region, which benefits MOEA/D in maintaining the population diversity. The probability of F ∈ [0.00, 0.33) gradually increases on DTLZ1 and UF1. As the algorithm progresses, the solutions spread more evenly, which means that the difference between solutions (e.g., x_{r1,g} − x_{r2,g} in the DE operators) becomes larger. Hence, in the later stage, the operator ‘‘DE/rand/2/bin’’ needs a small F to exploit the neighboring region. Thus, the reputation of the control parameters reflects well their varying suitability across generations.
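To illustrate how a quantized control parameter can be drawn in practice, the sketch below first selects one of the q segments of [0, 1] according to its reputation-based expected probability and then samples a value inside that segment. It is a simplified illustration under our own naming; the paper associates a Normal distribution with the segments, whereas uniform within-segment sampling is used here for brevity.

```java
import java.util.Random;

/** Simplified sketch: choose one of q quantized segments of [0,1] according to its
    expected probability, then sample a parameter value inside that segment. */
public class QuantizedParameterSampler {
  private static final Random RNG = new Random();

  /** prob[k] is the expected probability of segment k for the chosen operator;
      the q = prob.length segments partition [0,1] evenly and prob sums to 1. */
  public static double sample(double[] prob) {
    int q = prob.length;
    double pick = RNG.nextDouble();
    int segment = q - 1;
    double acc = 0.0;
    for (int k = 0; k < q; k++) {
      acc += prob[k];
      if (pick <= acc) {
        segment = k;
        break;
      }
    }
    double lower = (double) segment / q;
    double upper = (double) (segment + 1) / q;
    // Uniform sampling inside the chosen segment; the Normal-distribution handling
    // described in the paper is omitted here for brevity.
    return lower + RNG.nextDouble() * (upper - lower);
  }
}
```

For example, with q = 3 and expected probabilities {0.1, 0.2, 0.7}, most sampled CR values would fall in the segment [0.67, 1.00], matching the behavior observed in Fig. 3.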

[Fig. 3 appears here: four panels showing the expected probabilities of the three CR segments and the three F segments against generations on DTLZ1 and UF1.]
Fig. 3. The expected probabilities of parameters E_g(CR|o) and E_g(F|o) along generations derived by MOEA/D-R on DTLZ1 and UF1. The two parameters CR and F are quantized into three segments, and the operator is o = ‘‘DE/rand/2/bin’’.
[Fig. 4 appears here: median and IQR of hypervolume obtained by NSGAII, SPEA2, MOEA/D and MOEA/D-R with q ∈ [1, 10].]
Fig. 4. MOEA/D-R with different values of parameter quantization q.

4.9. Discussion of parameter quantization

As shown in Section 3.3, our framework has only one predefined parameter (q), which defines the number of quantization segments for the control parameters. To investigate the impact of q, the algorithm MOEA/D-R with q ∈ [1, 10] is tested on the 58 MOPs. Fig. 4 shows the median and IQR of the hypervolume found by the classical MOEAs and MOEA/D-R over 30 independent runs. It is evident that MOEA/D-R is not very sensitive to the setting of the parameter q. The superiority of MOEA/D-R over the classical MOEAs is due to operator selection and parameter adaptation based on the concept of reputation.

5. Conclusion and future work

Adaptation on the choice of evolutionary operators and control parameters is an effective and efficient way to enhance
the robustness of Evolutionary Algorithms (EAs) on different problems, and improve the ease-of-use of EAs by reducing user
efforts to manage predefined parameters. Although many adaptive EAs have been successfully applied to Single-objective
Optimization Problems (SOPs), few research efforts have been devoted to Multiobjective Optimization Problems (MOPs).
In view of this limitation, in this paper, a novel Multiobjective Evolutionary Algorithm (MOEA) based on reputation is pro-
posed for tackling different MOPs. In particular, the reputation concept is introduced for the first time to measure the
dynamic competency of operators and parameters across problems and stages of the search. Each individual in the MOEA selects operators and parameters with probabilities correlated with their reputation. Moreover, our approach involves only one predefined parameter (q, used in parameter adaptation), which is consistent with the spirit of ease-of-use. Experimental studies on 58 benchmark MOPs in jMetal 4.0 demonstrate that the proposed framework obtains more robust solutions than the classical MOEAs and the existing adaptive MOEAs.
In the future, the successful application to MOPs is expected to attract more research efforts in the area of adaptation mechanisms. Several important aspects are summarized as follows.

• The current framework will be extended to and verified on other kinds of optimization problems, such as single-objective optimization problems, combinatorial optimization problems, expensive optimization problems, etc.
• Another interesting direction is to design adaptive approaches with reputation to choose local search operators and tune their intensity in Memetic Computation (MC) [40,42,39,41,12].
• The reward function in the current work is based on the quantity of generated non-dominated solutions. To measure the competency of operators and parameters more accurately, other credit assignment methods, for example, hypervolume improvement, will be designed and explored.
• Last but not least, most adaptive approaches consider the rewards from all individuals despite the possibility that they are located in different regions of the search space. We plan to derive the reputation scores with respect to the fitness landscapes, so that individuals can select operators and parameters by considering the reputation in similar fitness scenarios.

The Java source code of the proposed framework (MOEAs based on reputation) and of the other four adaptive MOEAs (i.e., MOEA/D-JADE, MOEA/D-jDE, MOEA/D-SaDE, MOEA/D-CoDE) is available at http://trust.sce.ntu.edu.sg/sjiang1/.

References

[1] http://en.wikipedia.org/wiki/normal_distribution.
[2] http://users.jyu.fi/miettine/kurssit/jatkoksem/karthik080310.pdf.
[3] H. Abbass, R. Sarker, C. Newton, PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems, in: Proceedings of
IEEE Congress Evolutionary Computation (CEC), vol. 2, 2001, pp. 971–978.
[4] J. Bader, E. Zitzler, HypE: an algorithm for fast hypervolume-based many-objective optimization, Evol. Comput. 19 (1) (2011) 45–76.
[5] J. Brest, S. Greiner, B. Boskovic, M. Mernik, V. Zumer, Self-adapting control parameters in differential evolution: a comparative study on numerical
benchmark problems, IEEE Trans. Evol. Comput. 10 (6) (2006) 646–657.
[6] J. Brest, V. Zumer, M. Maucec, Self-adaptive differential evolution algorithm in constrained real-parameter optimization, in: Proceedings of IEEE
Congress Evolutionary Computation (CEC), 2006, pp. 215–222.
[7] A. Caponio, G.L. Cascella, F. Neri, N. Salvatore, M. Sumner, A fast adaptive memetic algorithm for online and offline control design of pmsm drives, IEEE
Trans. Syst. Man Cybernet. Part B: Cybernet. 37 (1) (2007) 28–41.
[8] A. Caponio, F. Neri, Integrating Cross-Dominance Adaptation in Multi-Objective Memetic Algorithms, Springer, 2009.
[9] F. Caraffini, F. Neri, J. Cheng, G. Zhang, L. Picinali, G. Iacca, E. Mininno, Super-fit multicriteria adaptive differential evolution, in: Proceedings of IEEE
Congress Evolutionary Computation (CEC), 2013, pp. 1678–1685.
[10] C. Chen, Y. Chen, T. Shen, J. K. Zao, Optimizing degree distributions in LT codes by using the multiobjective evolutionary algorithm based on
decomposition, in: Proceedings of IEEE Congress Evolutionary Computation (CEC), 2010, pp. 1–8.
[11] C. Chen, Y. Chen, Q. Zhang, Enhancing MOEA/D with guided mutation and priority update for multi-objective optimization, in: Proceedings of IEEE
Congress Evolutionary Computation (CEC), 2009, pp. 209–216.
[12] X. Chen, Y. Ong, M. Lim, K. Tan, A multi-facet survey on memetic computation, IEEE Trans. Evol. Comput. 15 (5) (2011) 591–607.
[13] Y. Chen, C. Chen, Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization, Evol. Comput. 18
(2) (2010) 199–228.
[14] S. Chong, X. Yao, Multiple choices and reputation in multiagent interactions, IEEE Trans. Evol. Comput. 11 (6) (2007) 689–711.
[15] C.A. Coello Coello, Evolutionary multi-objective optimization: a historical view of the field, IEEE Comput. Intell. Mag. 1 (1) (2006) 28–36.
[16] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182–197.
[17] K. Deb, K. Sindhya, T. Okabe, Self-adaptive simulated binary crossover for real-parameter optimization, in: Proceedings of Genetic and Evolutionary
Computation Conference (GECCO), 2007, pp. 1187–1194.
[18] K. Deb, L. Thiele, M. Laumanns, E. Zitzler, Scalable test problems for evolutionary multiobjective optimization, Evol. Multiobjective Optim. (2005) 105–
145.
[19] W. Du, B. Li, Multi-strategy ensemble particle swarm optimization for dynamic optimization, Inform. Sci. 178 (15) (2008) 3096–3109.
[20] J. Durillo, A. Nebro, E. Alba, The jmetal framework for multi-objective optimization: design and architecture, in: Proceedings of IEEE Congress
Evolutionary Computation (CEC), 2010, pp. 1–8.
[21] A. Eiben, R. Hinterding, Z. Michalewicz, Parameter control in evolutionary algorithms, IEEE Trans. Evol. Comput. 3 (2) (1999) 124–141.
[22] A.E. Eiben, S.K. Smit, Parameter tuning for configuring and analyzing evolutionary algorithms, Swarm Evol. Comput. 1 (1) (2011) 19–31.
[23] A. Ghosh, S. Das, A. Chowdhury, R. Giri, An improved differential evolution algorithm with fitness-based adaptation of the control parameters, Inform.
Sci. 181 (18) (2011) 3749–3765.
[24] W. Gong, Á. Fialho, Z. Cai, Adaptive strategy selection in differential evolution, in: Proceedings of Genetic and Evolutionary Computation Conference
(GECCO), 2010, pp. 409–416.
[25] J. Grefenstette, Optimization of control parameters for genetic algorithms, IEEE Trans. Syst. Man Cybernet. 16 (1) (1986) 122–128.

[26] M. Hamdan, On the disruption-level of polynomial mutation for evolutionary multi-objective optimisation algorithms, Comput. Inform. 29 (5) (2012)
783–800.
[27] R. Hinterding, Z. Michalewicz, A. Eiben, Adaptation in evolutionary computation: a survey, in: Proceedings of IEEE Congress Evolutionary Computation
(CEC), 1997, pp. 65–69.
[28] V. Huang, A. Qin, P. Suganthan, M. Tasgetiren, Multi-objective optimization based on self-adaptive differential evolution algorithm, in: Proceedings of
IEEE Congress Evolutionary Computation (CEC), 2007, pp. 3601–3608.
[29] S. Huband, P. Hingston, L. Barone, L. While, A review of multiobjective test problems and a scalable test problem toolkit, IEEE Trans. Evol. Comput. 10
(5) (2006) 477–506.
[30] G. Iacca, F. Neri, E. Mininno, Y. Ong, M. Lim, Ockham’s razor in memetic computing: three stage optimal memetic exploration, Inform. Sci. 188 (2012)
17–43.
[31] C. Igel, N. Hansen, S. Roth, Covariance matrix adaptation for multi-objective optimization, Evol. Comput. 15 (1) (2007) 1–28.
[32] A. Iorio, X. Li, Rotationally invariant crossover operators in evolutionary multi-objective optimization, Simul. Evol. Learn. (2006) 310–317.
[33] S. Jiang, J. Zhang, Y. Ong, A multiagent evolutionary framework based on trust for multiobjective optimization, in: Proceedings of the 11th Conference
on Autonomous Agents and Multiagent Systems (AAMAS), 2012, pp. 299–306.
[34] A. Jøsang, R. Ismail, C. Boyd, A survey of trust and reputation systems for online service provision, Decis. Support Syst. 43 (2) (2007) 618–644.
[35] H. Li, Q. Zhang, Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II, IEEE Trans. Evol. Comput. 13 (2) (2009)
284–302.
[36] R. Mallipeddi, S. Mallipeddi, P.N. Suganthan, Ensemble strategies with adaptive evolutionary programming, Inform. Sci. 180 (9) (2010) 1571–1581.
[37] J. Maturana, Á. Fialho, F. Saubion, M. Schoenauer, M. Sebag, Extreme compass and dynamic multi-armed bandits for adaptive operator selection, in:
Proceedings of IEEE Congress Evolutionary Computation (CEC), 2009, pp. 365–372.
[38] F. Neri, J. Toivanen, G.L. Cascella, Y.-S. Ong, An adaptive multimeme algorithm for designing HIV multidrug therapies, IEEE/ACM Trans. Comput. Biol.
Bioinform.(TCBB) 4 (2) (2007) 264–278.
[39] Q. Nguyen, Y. Ong, M. Lim, A probabilistic memetic framework, IEEE Trans. Evol. Comput. 13 (3) (2009) 604–623.
[40] Y. Ong, A. Keane, Meta-Lamarckian learning in memetic algorithms, IEEE Trans. Evol. Comput. 8 (2) (2004) 99–110.
[41] Y. Ong, M. Lim, X. Chen, Research frontier: memetic computation-past, present & future, IEEE Comput. Intell. Mag. 5 (2) (2010) 24–31.
[42] Y. Ong, M. Lim, N. Zhu, K. Wong, Classification of adaptive memetic algorithms: a comparative study, IEEE Trans. Syst. Man Cybernet. B: Cybernet. 36
(1) (2006) 141–152.
[43] A. Qin, V. Huang, P. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput.
13 (2) (2009) 398–417.
[44] B.-Y. Qu, P.N. Suganthan, Multi-objective evolutionary algorithms based on the summation of normalized objectives and diversified selection, Inform.
Sci. 180 (17) (2010) 3170–3181.
[45] J. Schaffer, R. Caruana, L. Eshelman, R. Das, A study of control parameters affecting online performance of genetic algorithms for function optimization,
in: Proceedings of Conference Genetic Algorithms, 1989, pp. 51–60.
[46] S. K. Smit, A. Eiben, Beating the ‘world champion’ evolutionary algorithm via REVAC tuning, in: Proceedings of IEEE Congress Evolutionary
Computation (CEC), 2010, pp. 1–8.
[47] M. Srinivas, L. Patnaik, Adaptive probabilities of crossover and mutation in genetic algorithms, IEEE Trans. Syst. Man Cybernet. 24 (4) (1994) 656–667.
[48] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (4) (1997)
341–359.
[49] K. Tan, C. Goh, Y. Yang, T. Lee, Evolving better population distribution and exploration in evolutionary multi-objective optimization, Eur. J. Oper. Res.
171 (2) (2006) 463–495.
[50] Y. Wang, Z. Cai, Q. Zhang, Differential evolution with composite trial vector generation strategies and control parameters, IEEE Trans. Evol. Comput. 15
(1) (2011) 55–66.
[51] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, Q. Tian, Self-adaptive learning based particle swarm optimization, Inform. Sci. 181 (20) (2011) 4515–4538.
[52] Y. Wang, L. Wu, X. Yuan, Multi-objective self-adaptive differential evolution with elitist archive and crowding entropy-based diversity measure,
Methodol. Appl. 14 (3) (2010) 193–209.
[53] X. Yao, P. Darwen, How important is your reputation in a multi-agent environment, in: Proceedings of IEEE Conference Transactions Systems, Man, and
Cybernetics, vol. 2, 1999, pp. 575–580.
[54] J. Zhang, H. Chung, W. Lo, Clustering-based adaptive crossover and mutation probabilities for genetic algorithms, IEEE Trans. Evol. Comput. 11 (3)
(2007) 326–335.
[55] J. Zhang, R. Cohen, A comprehensive approach for sharing semantic web trust ratings, Comput. Intell. 23 (3) (2007) 302–319.
[56] J. Zhang, A. Sanderson, Self-adaptive multi-objective differential evolution with direction information provided by archived inferior solutions, in:
Proceedings of IEEE Congress Evolutionary Computation (CEC), 2008, pp. 2801–2810.
[57] J. Zhang, A. Sanderson, JADE: adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput. 13 (5) (2009) 945–958.
[58] Q. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. 11 (6) (2007) 712–731.
[59] Q. Zhang, A. Zhou, S. Zhao, P. Suganthan, W. Liu, S. Tiwari, Multiobjective optimization test instances for the cec 2009 special session and competition,
Univ. of Essex, Colchester, UK and Nanyang Technological Univ., Singapore, Special Session on Performance Assessment of Multi-Objective
Optimization Algorithms, Tech. Rep., 2008.
[60] S.-Z. Zhao, P.N. Suganthan, Q. Zhang, Decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, IEEE Trans.
Evol. Comput. 16 (3) (2012) 442–446.
[61] A. Zhou, B. Qu, H. Li, S. Zhao, P. Suganthan, Q. Zhang, Multiobjective evolutionary algorithms: a survey of the state-of-the-art, Swarm Evol. Comput. 1
(1) (2011) 32–49.
[62] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput. 8 (2) (2000) 173–195.
[63] E. Zitzler, M. Laumanns, L. Thiele, SPEA2: Improving the strength Pareto evolutionary algorithm, in: Proceedings of Evolutionary Methods for Design,
Optimisation and Control with Application to Industrial Problems (EUROGEN), 2001, pp. 95–100.
[64] E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach, IEEE Trans. Evol. Comput. 3 (4)
(1999) 257–271.
