Algorithms 16 00545
Article
Measuring the Performance of Ant Colony Optimization
Algorithms for the Dynamic Traveling Salesman Problem
Michalis Mavrovouniotis 1,2,*, Maria N. Anastasiadou 1 and Diofantos Hadjimitsis 1,2
Abstract: Ant colony optimization (ACO) has proven its adaptation capabilities on optimization
problems with dynamic environments. In this work, the dynamic traveling salesman problem
(DTSP) is used as the base problem to generate dynamic test cases. Two types of dynamic changes
for the DTSP are considered: (1) node changes and (2) weight changes. In the experiments, ACO
algorithms are systematically compared in different DTSP test cases. Statistical tests are performed
using the arithmetic mean and standard deviation of ACO algorithms, which is the standard method
of comparing ACO algorithms. To complement these comparisons, quantiles of the distribution
are also used to measure the peak-, average-, and bad-case performance of ACO algorithms. The
experimental results demonstrate advantages of using quantiles for evaluating the performance
of ACO algorithms in several DTSP test cases.
Citation: Mavrovouniotis, M.; Anastasiadou, M.N.; Hadjimitsis, D. Measuring the Performance of Ant Colony Optimization Algorithms for the Dynamic Traveling Salesman Problem. Algorithms 2023, 16, 545. [Link]
Academic Editors: Nikola Ivković and Matej Črepinšek
Received: 24 October 2023; Revised: 23 November 2023; Accepted: 25 November 2023; Published: 28 November 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ([Link]/licenses/by/4.0/).

1. Introduction
Ant colony optimization (ACO) is a mainstream swarm intelligence algorithm inspired by the foraging behavior of real ant colonies [1–3]. In ACO, a colony of artificial ants constructs solutions guided by artificial pheromone trails and heuristic information. ACO has been successfully applied to various complex optimization problems [4–6]. However, in most real-world applications, the optimization problem has a dynamic environment [7]. In particular, the environment of the optimization problem, including the objective function, constraints, search space, and input variables, may change over time.
Major challenges occur when addressing dynamic optimization problems because repeated optimization of the changing optimum is required. The optimal solution to which the algorithm has converged before a dynamic change may no longer be the optimal solution after the change. Consequently, the optimization process needs to be restarted in order to search for a new optimal solution. However, restarting the optimization process from scratch may not be very efficient for complex problems (e.g., NP-hard combinatorial problems), as they require huge computational effort and time [8].
A more efficient way to address dynamic optimization problems is to use knowledge from previously optimized environments to speed up the re-optimization process [9]. With ACO algorithms, this knowledge can be transferred via the pheromone trails generated in the previous environments. However, the pheromone trails of the previous environment may contain information directing the search process toward the previous optimal solution, which may represent a poor local optimum in the current environment [10]. Several strategies have been integrated to enhance the adaptation capabilities of ACO in dynamic optimization problems, such as the generation of new ants using immigrant schemes [11,12], population-based pheromone update strategies [13], multiple ant colonies [14], and memetic computing [15].
In this work, the traveling salesman problem (TSP) is used as the base optimization
problem to generate dynamic test cases using the benchmark generator proposed in [10].
A conventional ACO [16] and a population-based ACO [13], which use pheromone evaporation
and a memory archive, respectively, to adapt to dynamic changes, are applied to the
generated dynamic TSP (DTSP) test cases.
The most common method to evaluate the performance of ACO in dynamic environ-
ments is used in the experiments, that is, to obtain the mean and standard deviation (for
multiple independent executions) for the generated DTSPs. Statistical tests are performed
to compare the algorithms. In [17], it is argued that, for stochastic algorithms (e.g., ACO
algorithms), this way of comparison may not be very informative. This is because the
distribution of results obtained from multiple executions of an ACO algorithm may be
asymmetrical and, hence, outlier values may distort the mean. Consequently, inadequate
information regarding the average performance of the algorithms may be provided. In contrast,
quantiles of the distribution may address this issue through the use of the median as an
average performance measurement.
The rest of the paper is structured as follows. Section 2 provides a description of the
TSP as well as a description of the generation of dynamic test cases. Section 3 describes
the application of the ACO metaheuristic to the TSP. In addition, the pheromone update
policies in dynamic environments for the two ACO algorithms used in the experiments
are described. Section 4 presents the experimental setup and results. An analysis of the
obtained results is provided. Section 5 concludes this work.
where Rij is a random number drawn from a normal distribution with zero mean and a
standard deviation proportional to the initial value of the arc dij(0), Arnd(T) ⊂ A is a set of
randomly selected arcs of size ⌈mn(n − 1)⌉ whose values are subject to change at
environmental period index T, and m ∈ (0, 1) is the magnitude of change. Therefore, the
quality value of a solution s for the DTSP depends on the parameter t and, thus, is evaluated
as φ(s, t). In other words, the quality of a solution s at time t may be different at time t + 1.
For the DTSP with node changes, a newly generated set of cities, i.e., Nrnd(T), is
initialized with n random cities in the range of the actual set of cities N. A dynamic change
is generated every f function evaluations, in which exactly ⌈mn⌉ cities are randomly
chosen from Nrnd(T) to replace exactly ⌈mn⌉ randomly chosen cities from N, where f and
m define the frequency and magnitude of a dynamic change, respectively, as above.
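The two change types can be sketched in code as follows. This is an illustrative sketch only, not the exact benchmark generator of [10]; the function names, the clamping of perturbed weights to a positive minimum, and the swap-based city replacement are assumptions:

```python
import math
import random

def weight_change(d, d0, m, sigma_factor=0.2):
    """Apply one environmental change to the arc weights: perturb
    ceil(m * n * (n - 1)) randomly chosen arcs by a zero-mean Gaussian
    whose standard deviation is proportional to the initial weight."""
    n = (1 + math.isqrt(1 + 4 * len(d))) // 2     # |A| = n * (n - 1)
    for arc in random.sample(list(d), math.ceil(m * n * (n - 1))):
        noise = random.gauss(0.0, sigma_factor * d0[arc])
        d[arc] = max(1.0, d[arc] + noise)         # keep weights positive
    return d

def node_change(cities, spare, m):
    """Apply one environmental change to the node set: swap ceil(m * n)
    randomly chosen active cities with cities from the spare pool."""
    k = math.ceil(m * len(cities))
    for i, j in zip(random.sample(range(len(cities)), k),
                    random.sample(range(len(spare)), k)):
        cities[i], spare[j] = spare[j], cities[i]
    return cities, spare
```

For example, `weight_change(d, d0, 0.25)` perturbs a quarter of the arcs of the current distance matrix, while `node_change(cities, spare, 0.25)` swaps a quarter of the active cities out of the instance.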
Each ant k located at city i chooses the next city j ∈ Nik with a probability proportional to
τij^α · ηij^β, where τij and ηij are the existing pheromone trail and heuristic information
values, respectively, Nik is the set of cities that ant k has not visited yet, and α and β are the
two parameters that determine the relative influence of the pheromone trail and the heuristic
information, respectively. With probability q0, ant k chooses the next city j with the
maximum probability as follows [27]:

j = argmax_{j∈Nik} { τij^α · ηij^β }. (4)
This selection process will continue until each ant has visited all cities once and only
once, as shown in Algorithm 1. The constructed solutions will be evaluated based on
Equation (3).
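The decision rule can be sketched as follows; a minimal illustration assuming dictionary-based pheromone and heuristic matrices (the function name and data layout are not from the paper):

```python
import random

def choose_next_city(unvisited, tau, eta, i, alpha, beta, q0):
    """Pseudo-random proportional rule: with probability q0 exploit the
    best arc as in Equation (4); otherwise sample a city with probability
    proportional to tau[(i, j)]**alpha * eta[(i, j)]**beta."""
    weights = {j: (tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta)
               for j in unvisited}
    if random.random() < q0:
        return max(weights, key=weights.get)      # exploitation: argmax
    r = random.uniform(0.0, sum(weights.values()))
    acc = 0.0
    for j, w in weights.items():                  # roulette-wheel selection
        acc += w
        if acc >= r:
            return j
    return j                                      # guard against rounding
```

With q0 = 0.0 (the MMAS setting used later) the rule is purely probabilistic; with q0 > 0 it mixes in greedy exploitation.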
where s^best is the solution generated by the best ant and ∆τij^best = 1/C^best, where C^best =
φ(s^best, t) is the quality value of the DTSP solution s^best. The best ant to deposit pheromone
may be either the best-so-far ant, in which case ∆τij^best = 1/C^bs, where C^bs = φ(s^bs, t) is the
solution quality of the best-so-far ant, or the iteration-best ant, in which case ∆τij^best = 1/C^ib,
where C^ib = φ(s^ib, t) is the solution quality of the best ant of the iteration. By default, the
iteration-best ant is used to update the pheromone trails and, occasionally, the best-so-far
ant. The pheromone trail values in MMAS are kept within the interval [τmin, τmax], where τmin
and τmax are the minimum and maximum pheromone trail limits, respectively. The overall
MMAS pheromone update is presented in Algorithm 2, lines 1–20.
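The MMAS update described above (global evaporation, best-ant deposit, and trail-limit clamping) can be sketched as follows; the variable names and the complete arc-keyed dictionary layout are assumptions:

```python
def mmas_update(tau, best_tour, best_cost, rho, tau_min, tau_max):
    """One MMAS pheromone update: evaporate every trail, deposit
    1/C^best on the best ant's arcs, then clamp to [tau_min, tau_max]."""
    for arc in tau:
        tau[arc] *= (1.0 - rho)                   # global evaporation
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[(i, j)] += 1.0 / best_cost            # deposit 1/C^best
    for arc in tau:
        tau[arc] = min(tau_max, max(tau_min, tau[arc]))  # trail limits
    return tau
```

The clamping step is what distinguishes MMAS from the basic ant system: no trail can decay to zero or grow without bound.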
τij ← τij + ∆τ, ∀(i, j) ∈ s^ib,

where ∆τ = (τmax − τ0)/K is the constant pheromone value added, s^ib is the solution of
the iteration-best ant, K is the size of pop(t), and τmax and τ0 are the maximum and initial
pheromone trail values, respectively.
[Figure: two line plots; y-axis: Offline Performance (21,000–24,000); x-axis: Environmental Change T (1–10)]
Figure 1. Offline performance results (averaged over 50 runs) of MMAS without pheromone
re-initialization (Adapting) and of MMAS with pheromone re-initialization (Restarting) for each
environmental change on kroA100 with node changes, for small dynamic changes (left) and severe
dynamic changes (right).
Whenever a dynamic change occurs, the solutions represented by the current ants
in the population list may become invalid (e.g., when existing nodes are removed and
replaced with new nodes). Therefore, these solutions are repaired heuristically using the
KeepElite principle [33]. Specifically, the affected nodes are removed from the solution,
reconnecting the successor and predecessor nodes. The new nodes are inserted in the best
possible position of the solution, causing, in this way, the least increase in the solution
quality. At the same time, the corresponding pheromone trails associated with the arcs of
the affected nodes are updated.
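The KeepElite-style repair can be sketched as a cheapest-insertion heuristic; this is an illustrative reading of [33], with hypothetical names:

```python
import math

def repair_tour(tour, removed, inserted, dist):
    """KeepElite-style repair: drop removed nodes (their predecessor and
    successor are reconnected implicitly), then insert each new node at
    the position causing the least increase in tour length."""
    tour = [c for c in tour if c not in set(removed)]
    for c in inserted:
        best_pos, best_delta = 0, float("inf")
        for p in range(len(tour)):
            a, b = tour[p], tour[(p + 1) % len(tour)]
            delta = dist(a, c) + dist(c, b) - dist(a, b)  # insertion cost
            if delta < best_delta:
                best_pos, best_delta = p + 1, delta
        tour.insert(best_pos, c)
    return tour
```

The modular index makes the arc between the last and first cities a valid insertion point, so the tour is treated as cyclic.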
When pop(t) is full, the current iteration-best ant needs to replace an existing ant, say
r, in pop(t), following a negative constant update to its corresponding pheromone trails,
which is defined as follows:

τij ← τij − ∆τ, ∀(i, j) ∈ s^r,

where ∆τ = (τmax − τ0)/K is the constant deducted pheromone value (the same as the
added value) and s^r is the solution of the ant to be replaced, typically the oldest entry of
the population list pop(t). Note that pheromone evaporation is not used in the P-ACO
algorithm. However, it is able to adapt to changes because outdated pheromone trails
are removed directly from the population list.
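The P-ACO population-list bookkeeping can be sketched as follows; an illustrative sketch using a FIFO deque, not the authors' implementation:

```python
from collections import deque

def paco_update(tau, pop, new_tour, K, delta_tau):
    """P-ACO memory update: the iteration-best tour enters the FIFO
    population list and its arcs gain a constant delta_tau. If the list
    is full, the oldest tour leaves and its arcs lose the same amount."""
    if len(pop) == K:
        old = pop.popleft()                       # evict the oldest entry
        for i, j in zip(old, old[1:] + old[:1]):
            tau[(i, j)] -= delta_tau              # negative constant update
    pop.append(new_tour)
    for i, j in zip(new_tour, new_tour[1:] + new_tour[:1]):
        tau[(i, j)] += delta_tau                  # positive constant update
    return tau, pop
```

Because deposits and removals are exact constants, the pheromone matrix always equals the initial value plus the contributions of the tours currently in the list, with no evaporation needed.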
The overall application of MMAS and P-ACO algorithms for the DTSP is described
in Algorithm 3. It is worth mentioning that in both MMAS and P-ACO, the solution of
the best-so-far ant is repaired using KeepElite, and the heuristic information is updated,
reflecting the newly generated distances as in Equation (2), whenever a dynamic change
occurs (see Algorithm 3, lines 16–33).
4. Experimental Results
4.1. Experimental Setup
The dynamic benchmark generator, described in Section 2, is used to generate the
DTSP test cases. The following three stationary TSP benchmark instances are obtained from
TSPLIB (Available at [Link] (accessed
on 3 April 2023)): kroA100, rat575, and pr1002. The magnitude of change m is set to
generate slightly (i.e., m = 0.1), medium (i.e., m = 0.25 and m = 0.5), and severely (i.e., m = 0.75)
changing environments. The frequency of change f is set to 10n iterations (i.e., f = 1000 for
kroA100, f = 5750 for rat575, and f = 10,020 for pr1002) to allow sufficient time for the
ACO algorithms to converge, and it scales with the problem instance. For each type
of DTSP (i.e., weight or node changes), a total of four test cases are generated from each
static benchmark instance. Ten environmental changes are allowed for each DTSP test case.
The modified offline performance [34] is used as the performance metric, which is
defined as follows:

P̄offline = (1/E) ∑_{t=1}^{E} φ(s^bs′, t), (9)

where E is the number of observations taken, and φ(s^bs′, t) is the solution quality value of
the best-so-far solution s^bs′ since the last dynamic change.
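Equation (9) can be computed from a per-iteration cost trace as follows; the data layout (a list of per-iteration best costs and a set of change points) is an assumption:

```python
def offline_performance(costs, change_points):
    """Modified offline performance (Equation (9)): the average over all
    observations of the best-so-far cost since the last dynamic change.
    costs: best cost observed at each iteration t;
    change_points: iteration indices at which the environment changed."""
    best, acc = float("inf"), 0.0
    for t, c in enumerate(costs):
        if t in change_points:
            best = float("inf")      # reset best-so-far at each change
        best = min(best, c)
        acc += best
    return acc / len(costs)
```

The reset at each change point is what makes the metric "modified": without it, a stale best from a previous environment would mask the re-optimization behavior.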
The common ACO parameters of MMAS and P-ACO are set to α = 1, β = 5, and
ω = 25. The evaporation rate of MMAS is investigated with values ρ = {0.1, 0.2, 0.5, 0.8},
and it is set to ρ = 0.8. Furthermore, the exploitation strength of the decision rule is set
to q0 = 0.0, and the initial pheromone trail value to τ0 = 1/(ρ C^nn), where C^nn = φ(s^nn, 0)
is the solution quality generated by the nearest-neighbor heuristic. The maximum and
minimum pheromone trail limits are set to τmax = 1/(ρ C^best) and
τmin = τmax (1 − 0.05^(1/n)) / ((cand − 1) · 0.05^(1/n)), respectively, where cand is the
number of different choices available to an ant at each step. The frequency with which the
best-so-far ant occasionally deposits pheromone is set to every 25 iterations [29]. The
population list size of the P-ACO is investigated with values K = {2, 3, 5, 10}, and it is set
to K = 3. Furthermore, the initial pheromone trail value is set to τ0 = 1/(n − 1), the
exploitation strength of the decision rule is set to q0 = 0.5, and the maximum pheromone
trail limit to τmax = 1 [35].
Table 1. Mean and standard deviation results of ACO algorithms for the DTSP with node changes.
Algorithm
m⇒ 0.1 0.25 0.5 0.75
kroA100
MMAS 22,223.86 ± 102.9 22,492.60 ± 108.1 22,537.36 ± 95.2 22,472.96 ± 70.7
P-ACO 22,118.66 ± 49.0 22,350.54 ± 67.1 22,419.70 ± 54.7 22,320.62 ± 48.8
rat575
MMAS 6573.22 ± 25.1 6577.54 ± 66.1 6399.26 ± 22.5 6401.76 ± 15.3
P-ACO 6581.46 ± 25.9 6443.86 ± 20.4 6566.00 ± 43.8 6392.60 ± 19.8
pr1002
MMAS 305,793.32 ± 1719.4 315,315.60 ± 1633.72 314,533.80 ± 1364.8 315,447.28 ± 1419.8
P-ACO 308,342.64 ± 733.1 315,763.52 ± 456.0 313,705.60 ± 445.2 315,500.44 ± 490.4
Bold values indicate statistical significance.
Table 2. Mean and standard deviation results of ACO algorithms for the DTSP with weight changes.
Algorithm
m⇒ 0.1 0.25 0.5 0.75
kroA100
MMAS 20,543.36 ± 164.7 20,600.40 ± 62.0 20,222.00 ± 60.6 19,869.66 ± 58.2
P-ACO 20,462.34 ± 36.4 20,663.38 ± 64.7 20,196.60 ± 44.7 19,900.18 ± 37.9
rat575
MMAS 6621.04 ± 23.4 6465.78 ± 23.1 6404.72 ± 22.1 6399.72 ± 12.7
P-ACO 6758.06 ± 70.5 6675.62 ± 25.3 6643.98 ± 15.0 6618.88 ± 13.2
pr1002
MMAS 266,132.52 ± 1625.4 264,653.36 ± 1902.2 268,062.60 ± 1104.2 268,531.12 ± 1255.6
P-ACO 282,904.76 ± 592.3 276,911.88 ± 394.2 275,933.72 ± 382.7 275,841.04 ± 494.7
Bold values indicate statistical significance.
[Figure: two line plots; y-axis: Offline Performance; x-axis: Environmental Change T (1–10)]
Figure 2. Offline performance results (averaged over 50 runs) of MMAS and P-ACO for each
environmental change on kroA100 with node (left) and weight (right) changes.
[Figure: two line plots; y-axis: Offline Performance; x-axis: Environmental Change T (1–10)]
Figure 3. Offline performance results (averaged over 50 runs) of MMAS and P-ACO for each
environmental change on rat575 with node (left) and weight (right) changes.
[Figure: two line plots; y-axis: Offline Performance; x-axis: Environmental Change T (1–10)]
Figure 4. Offline performance results (averaged over 50 runs) of MMAS and P-ACO for each
environmental change on pr1002 with node (left) and weight (right) changes.
First, P-ACO performs significantly better than MMAS in most DTSPs with node
changes. This is because P-ACO has been designed to address DTSPs with node changes.
In particular, the ants stored in the population list when a dynamic change occurs are
repaired heuristically using change-related information (i.e., the inserted and removed
nodes). Therefore, a constant amount of pheromone will be
removed directly from the outdated trails associated with the arcs connecting the nodes to
be removed. At the same time, the pheromone trails associated with the arcs connecting the
nodes to be inserted will receive a constant amount of pheromone. This specific pheromone
update will have a direct effect on the solution construction of the next iteration. On the
other hand, the pheromone evaporation of MMAS requires more iterations to express its
effect (e.g., more than the frequency used in these experiments [10]). Also, the removal
of outdated pheromone trails is not applied locally to the affected areas, as in the P-ACO,
because the evaporation is applied globally on all arcs.
Second, MMAS performs significantly better than P-ACO in most DTSPs with weight
changes. This is because repairing the ants stored in the population list has no effect when
a dynamic change of this type occurs. In particular, the feasibility of the stored solutions
is not affected as in the case of the DTSP with node changes. Only the solution quality
will change (which will be re-evaluated); however, because a constant pheromone update
is performed, it has no effect on the P-ACO’s behavior. However, since the pheromone
trails in P-ACO are removed directly because of the constant pheromone removal, there
is a higher risk of erasing knowledge from previous environments. On the other hand,
the pheromone evaporation in MMAS is able to better adapt to these types of dynamic
changes because the decrease in the pheromone trails is performed gradually.
Table 3. Experimental results of ACO algorithms for the DTSP with node changes.
Table 4. Experimental results of ACO algorithms for the DTSP with weight changes.
Figure 5. Box plots of MMAS and P-ACO drawn between the first (Q0.25 ) and third quartile (Q0.75 )
of the distribution on kroA100 with node (left) and weight (right) changes.
Figure 6. Box plots of MMAS and P-ACO drawn between the first (Q0.25 ) and third quartile (Q0.75 )
of the distribution on rat575 with node (left) and weight (right) changes.
Figure 7. Box plots of MMAS and P-ACO drawn between the first (Q0.25 ) and third quartile (Q0.75 )
of the distribution on pr1002 with node (left) and weight (right) changes.
Another issue that the use of quantiles may address arises when the distribution of
results from multiple executions of an ACO algorithm is asymmetrical (i.e., having several
outliers, such as the distribution of MMAS in Figure 5 for kroA100 with weight changes, and
of P-ACO in Figure 6 for rat575 with weight changes), affecting the mean comparisons [17].
Therefore, the quantile Q0.50, which defines the median of the distribution, may be a more
adequate choice for measuring the average performance of ACO algorithms than the mean
values shown in Section 4.2. This effect is demonstrated when comparing the mean results
of ACO algorithms in Table 1 (and Table 2) with the corresponding median results in
Table 3 (and Table 4). Specifically, it can be observed that the comparisons of the DTSP with
weight changes are consistent, whereas the comparisons of the DTSP with node changes
have several inconsistencies as follows. P-ACO has better mean values than MMAS
in most DTSP cases of pr1002, while MMAS has better median values than P-ACO on
the same DTSP cases. The difference regarding the distribution of P-ACO and MMAS
can be observed in Figure 7, in which P-ACO has a more symmetrical and concentrated
distribution than MMAS.
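The effect of outliers on the mean, together with the quantile summary used in Tables 3 and 4, can be illustrated as follows; the run values are hypothetical:

```python
import statistics

def run_summary(results):
    """Summarize independent runs by the mean and the quantiles used in
    the comparisons: peak (Q0.10), average (Q0.50), and bad-case (Q0.90)."""
    q = statistics.quantiles(results, n=10, method="inclusive")  # deciles
    return {"mean": statistics.fmean(results),
            "Q0.10": q[0], "Q0.50": q[4], "Q0.90": q[8]}

# Hypothetical minimization results from ten runs; the last run is an outlier.
runs = [100, 101, 99, 100, 102, 98, 100, 101, 99, 140]
summary = run_summary(runs)
# The outlier pulls the mean (104.0) above every typical run, while the
# median Q0.50 (100.0) still reflects the typical run quality.
```

The same effect drives the mean/median inconsistencies reported for pr1002 above: a few bad runs can invert a mean-based ranking without affecting the median-based one.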
From the quantile comparisons in Tables 3 and 4, it can also be observed that: first,
the peak performances of both ACO algorithms are better, as expected, than the average
performance or bad-case performance, and second, the average performance (i.e., Q0.50 )
comparisons in Table 4 for the DTSP with weight changes are consistent with the peak
(i.e., Q0.10 ) and bad-case (i.e., Q0.90 ) performance comparisons since MMAS outperforms
P-ACO in all cases. In contrast, when comparing the average performance results with
the peak performance results for the DTSP with node changes in Table 3, there are some
inconsistencies. For example, MMAS has a better peak performance than P-ACO on the
pr1002 test cases, whereas P-ACO has a better average performance than MMAS on the
same test cases.
In summary, MMAS may be a better choice for the DTSP with weight changes when
the independent executions are performed in parallel. This is because it has a better peak
performance than P-ACO and, hence, is expected to obtain a solution at least as good
as Q0.10. In contrast, P-ACO may be a better choice when a single execution is performed,
because there is a higher chance of obtaining a solution as good as Q0.50.
5. Conclusions
In this work, two ACO variations are applied to optimization problems in a dynamic
environment. The TSP is used as the base problem to systematically generate dynamic test
cases. Two types of dynamic changes are considered: (1) when nodes change and (2) when
weights change. From the experiments, the following concluding remarks can be drawn. First,
P-ACO performs significantly better than MMAS in most DTSPs with node changes.
Second, MMAS is significantly better than P-ACO in DTSP with weight changes. Third,
P-ACO has better average performance than MMAS in most DTSP with node changes.
Fourth, MMAS has a better peak performance in most DTSPs with weight changes.
In general, complex problems arising in practical applications, such as training a neural
network, require huge computational power. As a consequence, the computation power for
optimizing complex problems as well as performing several independent executions (due
to the stochastic nature of swarm and evolutionary computation algorithms) in parallel
may not be available. Therefore, optimization algorithms with better average performances
may be the best choice rather than optimization algorithms with better peak performance.
For future work, it will be interesting to apply and evaluate ACO algorithms in more
practical applications, such as the agile earth observation satellites’ scheduling problem [36,37].
In fact, this particular problem can be solved as a constrained TSP [38] and is subject to several
unexpected environmental changes that may affect the scheduled tasks [39,40]. Hence, the
adaptation capabilities and evaluation of ACO demonstrated in this work may be suitable for
addressing the aforementioned problem.
References
1. Dorigo, M.; Gambardella, L.M. Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem. IEEE
Trans. Evol. Comput. 1997, 1, 53–66. [CrossRef]
2. Dorigo, M.; Caro, G.D.; Gambardella, L.M. Ant Algorithms for Discrete Optimization. Artif. Life 1999, 5, 137–172. [CrossRef]
[PubMed]
3. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed optimization by ant colonies. In Proceedings of the European Conference on
Artificial Life, Paris, France, 13 December 1991; pp. 134–142.
4. Mavrovouniotis, M.; Ellinas, G.; Polycarpou, M. Electric Vehicle Charging Scheduling Using Ant Colony System. In Proceedings
of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 2581–2588.
5. Mavrovouniotis, M.; Yang, S. Dynamic vehicle routing: A memetic ant colony optimization approach. In Automated Scheduling and
Planning; Studies in Computational Intelligence; Uyar, A., Ozcan, E., Urquhart, N., Eds.; Springer: Berlin/Heidelberg, Germany,
2013; pp. 283–301, Volume 505.
6. Mavrovouniotis, M.; Li, C.; Ellinas, G.; Polycarpou, M. Parallel Ant Colony optimization for the Electric Vehicle Routing Problem.
In Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6–9 December 2019;
pp. 1660–1667.
7. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments–a survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317.
[CrossRef]
8. Yang, S.; Jiang, Y.; Nguyen, T. Metaheuristics for dynamic combinatorial optimization problems. IMA J. Manag. Math. 2013,
24, 451–480. [CrossRef]
9. Nguyen, T.; Yang, S.; Branke, J. Evolutionary Dynamic Optimization: A survey of the state of the Art. Swarm Evol. Comput. 2012,
6, 1–24. [CrossRef]
10. Mavrovouniotis, M.; Yang, S.; Van, M.; Li, C.; Polycarpou, M. Ant Colony Optimization Algorithms for Dynamic Optimization:
A Case Study of the Dynamic Travelling Salesperson Problem [Research Frontier]. IEEE Comput. Intell. Mag. 2020, 15, 52–63.
[CrossRef]
11. Mavrovouniotis, M.; Yang, S. Interactive and non-interactive hybrid immigrants schemes for ant algorithms in dynamic
environments. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014;
pp. 1542–1549.
12. Mavrovouniotis, M.; Yang, S. An immigrants scheme based on environmental information for ant colony optimization for the
dynamic travelling salesman problem. In Proceedings of the Artificial Evolution, Angers, France, 24–26 October 2012; Volume
7401, pp. 1–12.
13. Guntsch, M.; Middendorf, M. Applying Population Based ACO to Dynamic Optimization Problems. In Proceedings of the Ant
Algorithms, Brussels, Belgium, 12–14 September 2002; Volume 2463, pp. 111–122.
14. Melo, L.; Pereira, F.; Costa, E. Multi-caste Ant Colony Algorithm for the Dynamic Traveling Salesperson Problem. In Proceedings
of the Adaptive and Natural Computing Algorithms, Lausanne, Switzerland, 4–6 April 2013; Volume 7824, pp. 179–188.
15. Mavrovouniotis, M.; Müller, F.M.; Yang, S. An ant colony optimization based memetic algorithm for the dynamic travelling
salesman problem. In Proceedings of the 2015 Genetic and Evolutionary Computation Conference (GECCO15), Madrid, Spain,
11–15 July 2015; pp. 49–56.
16. Stützle, T.; Hoos, H. Improvements on the Ant-System: Introducing the MAX-MIN Ant System. In Proceedings of the Artificial
Neural Nets and Genetic Algorithms, Norwich, UK, 1997; pp. 245–249.
17. Ivković, N.; Kudelić, R.; Črepinšek, M. Probability and Certainty in the Performance of Evolutionary and Swarm Optimization
Algorithms. Mathematics 2022, 10, 4364. [CrossRef]
18. Garey, M.; Johnson, D. Computer and Intractability: A Guide to the Theory of NP-Completeness; Freeman: San Francisco, CA, USA, 1979.
19. Flood, M.M. The Traveling-Salesman Problem. Oper. Res. 1956, 4, 61–75. [CrossRef]
20. Eyckelhof, C.; Snoek, M. Ant Systems for a Dynamic TSP. In Proceedings of the Ant Algorithms, Brussels, Belgium,
12–14 September 2002; Volume 2463, pp. 88–99.
21. Liu, J. Rank-Based Ant Colony Optimization Applied to Dynamic Traveling Salesman Problems. Eng. Optim. 2005, 37, 831–847.
[CrossRef]
22. Mavrovouniotis, M.; Van, M.; Yang, S. Pheromone modification strategy for the dynamic travelling salesman problem with
weight changes. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27
November–1 December 2017; pp. 1577–1584.
23. de Oliveira, S.M.; Bezerra, L.C.; Stützle, T.; Dorigo, M.; Wanner, E.F.; de Souza, S.R. A computational study on ant colony
optimization for the traveling salesman problem with dynamic demands. Comput. Oper. Res. 2021, 135, 105359. [CrossRef]
24. Dorigo, M. Ant colony optimization. Scholarpedia 2007, 2, 1461. [CrossRef]
25. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern.
Part B Cybern. 1996, 26, 29–41. [CrossRef] [PubMed]
26. Dorigo, M.; Birattari, M.; Stützle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [CrossRef]
27. Gambardella, L.M.; Dorigo, M. Ant-Q: A reinforcement learning approach to the traveling salesman problem. In Machine Learning
Proceedings 1995; Prieditis, A., Russell, S., Eds.; Morgan Kaufmann: San Francisco, CA, USA, 1995; pp. 252–260.
28. Stützle, T.; Hoos, H. MAX–MIN Ant System and local search for the traveling salesman problem. In Proceedings of the 1997 IEEE
International Conference on Evolutionary Computation, Indianapolis, IN, USA, 13–16 April 1997; pp. 309–314.
29. Stützle, T.; Hoos, H.H. MAX–MIN Ant System. Future Gener. Comput. Syst. 2000, 16, 889–914. [CrossRef]
30. Bonabeau, E.; Dorigo, M.; Theraulaz, G. Swarm Intelligence: From Natural to Artificial Systems; Oxford University Press: New York,
NY, USA, 1999.
31. Richter, H. Detecting change in dynamic fitness landscapes. In Proceedings of the 2009 IEEE Congress on Evolutionary
Computation, Trondheim, Norway, 18–21 May 2009; pp. 1613–1620.
32. Angus, D.; Hendtlass, T. Ant Colony Optimisation Applied to a Dynamically Changing Problem. In Proceedings of the
Developments in Applied Artificial Intelligence, Cairns, Australia, 17–20 June 2002; Volume 2358, pp. 618–627.
33. Guntsch, M.; Middendorf, M.; Schmeck, H. An Ant Colony Optimization Approach to Dynamic TSP. In Proceedings of the 3rd
Annual Conference on Genetic and Evolutionary Computation, GECCO’01, San Francisco, CA, USA, 7–11 July 2001; pp. 860–867.
34. Branke, J.; Schmeck, H. Designing Evolutionary Algorithms for Dynamic Optimization Problems. In Advances in Evolutionary
Computing; Ghosh, A., Tsutsui, S., Eds.; Natural Computing Series; Springer: Berlin/Heidelberg, Germany, 2003; pp. 239–262.
35. Oliveira, S.; Hussin, M.S.; Roli, A.; Dorigo, M.; Stützle, T. Analysis of the population-based ant colony optimization algorithm for
the TSP and the QAP. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June
2017; pp. 1734–1741.
36. Chen, X.; Reinelt, G.; Dai, G.; Wang, M. Priority-based and conflict-avoidance heuristics for multi-satellite scheduling. Appl. Soft
Comput. 2018, 69, 177–191. [CrossRef]
37. Wang, X.; Wu, G.; Xing, L.; Pedrycz, W. Agile Earth Observation Satellite Scheduling Over 20 Years: Formulations, Methods, and
Future Directions. IEEE Syst. J. 2021, 15, 3881–3892. [CrossRef]
38. Du, B.; Li, S.; She, Y.; Li, W.; Liao, H.; Wang, H. Area targets observation mission planning of agile satellite considering the drift
angle constraint. J. Astron. Telesc. Instruments Syst. 2018, 4, 047002.
39. Povéda, G.; Regnier-Coudert, O.; Teichteil-Königsbuch, F.; Dupont, G.; Arnold, A.; Guerra, J.; Picard, M. Evolutionary Approaches
to Dynamic Earth Observation Satellites Mission Planning under Uncertainty. In Proceedings of the Genetic and Evolutionary
Computation Conference, GECCO ’19, Prague, Czech Republic, 13–17 July 2019; pp. 1302–1310.
40. He, L.; Liu, X.L.; Chen, Y.W.; Xing, L.N.; Liu, K. Hierarchical scheduling for real-time agile satellite task scheduling in a dynamic
environment. Adv. Space Res. 2019, 63, 897–912. [CrossRef]