Systematic Comparison of Path Planning Algorithms
using PathBench
Hao-Ya Hsueh^a, Alexandru-Iosif Toma^b, Hussein Ali Jaafar^a, Edward Stow^b,
Riku Murai^b, Paul H.J. Kelly^b, and Sajad Saeedi^a
ARTICLE HISTORY
Compiled March 8, 2022
ABSTRACT
KEYWORDS
Path Planning, Benchmarking, Machine Learning, Robotics, Neural Networks
1. Introduction
with standardized metrics and environments.
• PathBench provides a ROS (Robot Operating System) real-time extension for
interacting with a real-world robot. Examples are provided in the project's GitHub
repository.
• Using PathBench, we provide a comparative analysis of a large suite of publicly
available classical and learning-based path planning algorithms across different
map types, datasets, and hardware systems for a point-mass agent in 2D and 3D grid
environments.²
2. Related Work
In this section, classical and learned planning algorithms, and existing simulation &
benchmarking frameworks are reviewed briefly.
² This paper is partly based on our earlier conference paper [10], extended with cross-hardware and
cross-algorithm benchmarking results.
Table 1. Capabilities comparison for existing frameworks that focus on path planning algorithms. PathBench
supports development, visualization, and benchmarking of classical and learned planning algorithms.
Platform    Sample-Based   Benchmarking   Visualization   Graph-Based   ML-Based
OpenRAVE    ✓              ✗              ✓               ✗             ✗
OMPL        ✓              ✓              ✓               ✗             ✗
MoveIt      ✓              ✓              ✓               ✗             ✗
SBPL        ✓              ✗              ✗               ✓             ✗
OOPSMP      ✓              ✓              ✓               ✗             ✗
PathBench   ✓              ✓              ✓               ✓             ✓
Figure 2. PathBench structure overview. Arrows represent information flow/usage (A ← B, read as
“A gets/uses B”). The machine learning section is responsible for training dataset generation and model
training. The Environment section controls the interaction between the agent and the map, and supplies
graphical visualization. The ROS section provides support for real-time interaction with a real physical
robot. The Evaluation section provides benchmarking methods for algorithm assessment.
ies. For instance, Delmerico and Scaramuzza have performed a benchmark comparison
of monocular visual-inertial odometry algorithms across embedded hardware systems
that are commonly adopted for flying robots [54]. The SLAMBench series of papers
conducts similar studies for a wide range of algorithms across various hardware [55–57].
The performance of path planning algorithms across hardware systems commonly used
in robotics applications should also be examined, so that system-specific algorithmic
benchmarks can be developed. PathBench facilitates cross-system evaluations and
allows users to improve and select the appropriate algorithm for a desired task. In this
paper, we perform a comparative study of classical and learned planning algorithms
using PathBench and its benchmarking features. Fig. 8 showcases scatter plots that
demonstrate trade-offs between algorithms across five different hardware configurations.
3. PathBench Platform
(a) A* (b) Discrete RRT-Connect
Figure 3. The results of different classical and learned planners in PathBench. The red entity is the agent,
light green cells show the path, and the green entity at the end of the path denotes the goal. The black entities
are obstacles, and everything else is custom display information (i.e., dark gray represents the search space in
(a) and (d), red represents the edges of the sampling algorithm in (b), and blue shows the frontiers of the A*
search space). Discretized versions of sampling-based algorithms [59] were used to ensure consistent performance
in grid space.
3.1. Simulator
The simulator is both a visualizer and an engine for developing algorithms (Fig. 3).
It supports animations and custom map display components which render the algo-
rithm’s internal data. The simulator has a map that contains different entities such as
the agent, goal and obstacles, and provides a clean interface that defines the move-
ment and interaction between them. Therefore, a map can be extended to support
various environments; however, each map has to implement its own physics engine or
use a third-party one (e.g., the pymunk physics engine or OpenAI Gym). The cur-
rent implementation supports three types of 2D/3D maps: DenseMap, SparseMap and
RosMap, corresponding to static grid map, point cloud map, and grid map with live up-
dates, respectively. Additionally, the simulator provides animations that are achieved
through key frames and synchronization primitives. The graphical framework used for
the visualization of planners and GUI is Panda3D [60]. Simulator configurations and
visualization customizations can be directly controlled within the Panda3D GUI, see
Fig. 4.
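To make the map abstraction concrete, below is a minimal sketch of a dense 2D grid map that holds
an agent, a goal, and obstacles and exposes a small movement interface. The class and method names
are illustrative only and do not correspond to the actual PathBench API.

```python
# Illustrative sketch only: names are hypothetical, not the PathBench API.
from dataclasses import dataclass, field
from typing import List, Set, Tuple

Point = Tuple[int, int]

@dataclass
class GridMap:
    """Minimal dense 2D grid map with an agent, a goal, and obstacles."""
    size: Tuple[int, int]
    agent: Point
    goal: Point
    obstacles: Set[Point] = field(default_factory=set)

    def is_free(self, cell: Point) -> bool:
        x, y = cell
        inside = 0 <= x < self.size[0] and 0 <= y < self.size[1]
        return inside and cell not in self.obstacles

    def neighbours(self, cell: Point) -> List[Point]:
        """4-connected free neighbours of a cell."""
        x, y = cell
        candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [c for c in candidates if self.is_free(c)]

    def move_agent(self, cell: Point) -> bool:
        """Move the agent if the target cell is free; report success."""
        if self.is_free(cell):
            self.agent = cell
            return True
        return False
```

A point-cloud (SparseMap) or live-updating (RosMap) variant would keep the same interface while
changing the underlying storage.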
3.2. Generator
The generator executes the following four actions, each explained briefly below.
1) Generation. The generation procedure accepts as input different hyperparameters
such as the type of generated maps, the number of generated maps, the number of
dimensions, the obstacle fill rate range, the number of obstacles range, and the minimum
and maximum room size ranges. Currently, the generator can produce four types of maps:
uniform random fill map, block map, house map, and point cloud map (see Fig. 5).
2) Labelling. The labelling procedure takes a map and converts it into training data
by picking only the specified features and labels. Features that can be labeled include
distance to goal, direction to goal, global obstacle map, local view, agent position, etc.
3) Augmentation. The augmentation procedure takes an existing training data file
and augments it with the specified extra features and labels. It is used to remove the
need for re-generating a whole training set.
4) Modification. A custom lambda function, which takes a map as input and returns
another map, can be defined to modify the underlying structure of the map (e.g.
modify the agent position, the goal position, create doors, etc.).
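As an illustration of the Modification step, the snippet below sketches a map-to-map modification
function together with an inline lambda, reusing the hypothetical GridMap class from the simulator
sketch; it is not the actual PathBench interface.

```python
# Hypothetical example of map modifications in the spirit of the Modification
# step; GridMap refers to the illustrative dataclass sketched in Section 3.1.
import dataclasses
import random

def random_start_goal(grid_map):
    """Return a copy of the map with the agent and goal moved to random free cells."""
    free = [(x, y)
            for x in range(grid_map.size[0])
            for y in range(grid_map.size[1])
            if grid_map.is_free((x, y))]
    agent, goal = random.sample(free, 2)
    return dataclasses.replace(grid_map, agent=agent, goal=goal)

# A modification can also be an inline lambda, e.g. one that only moves the goal.
relocate_goal = lambda grid_map, new_goal: dataclasses.replace(grid_map, goal=new_goal)
```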
3.3. Trainer
The training pipeline is composed of six modules, explained below briefly.
1) Data Pre-processing. Data is loaded from the specified training sets, and only
the features and labels used throughout the model are picked from the training set
and converted to a PyTorch dataset.
2) Data Splitting. The pre-processed data is shuffled and split into three categories:
training, validation and testing (Default data split is 60%, 20%, and 20%), according
to the holdout method [61,62]. The CombinedSubsets object is used to couple the
feature dataset and label dataset of the same category into a single dataset. Then, all
data is wrapped into a DataLoader object with the same batch size as the training
configuration (default size is 50). These parameters are configurable (see the sketch below).
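For reference, the default 60/20/20 holdout split and batch size of 50 can be reproduced with standard
PyTorch utilities as sketched below; CombinedSubsets is part of PathBench's own pipeline, so a plain
TensorDataset with placeholder feature and label tensors is used here instead.

```python
# Minimal sketch of the default 60/20/20 holdout split with batch size 50,
# using standard PyTorch utilities; dataset contents here are placeholders.
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

features = torch.randn(1000, 8)   # e.g. distance/direction-to-goal features
labels = torch.randn(1000, 2)     # e.g. next-step direction labels
dataset = TensorDataset(features, labels)

n_train = int(0.6 * len(dataset))
n_val = int(0.2 * len(dataset))
n_test = len(dataset) - n_train - n_val
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])

batch_size = 50
train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_set, batch_size=batch_size)
test_loader = DataLoader(test_set, batch_size=batch_size)
```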
3) Training. The training process puts the model into training mode and takes the
training DataLoader and validation DataLoader and feeds them through the model n
Figure 4. The GUI of the simulator is where the configuration of the path planners and environments and the
customization of the visualization are made interactively. The simulator configuration window is used to set up
path planning sessions, including the selection of algorithms, maps, and the start and goal points. The view editor
allows for adjustment of the simulation’s visualizations.
3.4. Analyzer
The analyzer is used to assess and compare the performance of the path planners. In
addition to manually running a simulator instance to assess an algorithm, the analyzer
supports the following analysis procedures:
• Simple Analysis. n map samples are picked from each generated map type, and m
algorithms are assessed on them. The results are averaged and printed. Barplots
and violinplots, for metrics discussed in Sec. 6, are generated with results from
Simple Analysis.
• Complex Analysis. n maps are selected (generated or hand-made), and m algo-
rithms are run on each map x times (default is 50 runs per map) with random
agent and goal positions. In the end, all n × x results are averaged and reported.
Similarly to Simple Analysis, barplots and violinplots for selected metrics can be
generated from the results (see the sketch after this list).
• Training Dataset Analysis. A training set analyzer procedure is provided to inspect
the training datasets by using the basic metrics (e.g. Euclidean distance, success
rate, map obstacle ratio, search space, total fringe, steps, etc. See the website of
the project for all metrics and statistics).
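The outline below sketches how a Complex Analysis-style run can be organized: m planners, n maps,
x random start/goal placements per map, with all n × x results averaged per metric at the end. The
function and argument names are hypothetical and do not reflect the actual analyzer API.

```python
# Illustrative outline of a Complex Analysis-style run (names hypothetical,
# not the actual analyzer API): m planners, n maps, x random start/goal pairs
# per map, with per-metric averaging at the end.
from statistics import mean

def complex_analysis(planners, maps, runs_per_map=50, randomize=None, evaluate=None):
    """`randomize(map)` places a random agent/goal pair; `evaluate(planner, map)`
    returns a dict of metrics such as {'time': ..., 'path_length': ..., 'success': ...}."""
    results = {p: [] for p in planners}
    for m in maps:
        for _ in range(runs_per_map):
            instance = randomize(m)
            for p in planners:
                results[p].append(evaluate(p, instance))
    # Average every metric over all n * x runs for each planner.
    return {
        p: {k: mean(r[k] for r in runs) for k in runs[0]}
        for p, runs in results.items()
    }
```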
4. Supported Path Planning Algorithms
The planning algorithms implemented into PathBench include the following algorithms
and categories:
• Graph-based planners such as A*, Dijkstra, CGDS [63], and wavefront;
• Discrete sampling-based algorithms such as the simple probabilistic roadmap (d-
sPRM) and d-RRT with its variations, such as d-RT, d-RRT*, and d-RRT-
Connect; additional sampling-based algorithms from the Open Motion
Planning Library (OMPL) are also included to improve the benchmarking capability
of PathBench;
• Sensory-based planning algorithms, e.g. Bug1 and Bug2 [16];
• Numerical optimization methods such as potential field algorithm [18];
• Learning-based path planning algorithms including Value Iteration Networks
(VIN) [4], Motion Planning Networks (MPNet) [6], Gated Path Planning Net-
works (GPPN) [5], Online LSTM [7], CAE-LSTM [8], Bagging LSTM [9], and
Waypoint Planning Networks (WPN) [9].
Additional classical or learning-based algorithms can easily be implemented in PathBench.
The algorithms developed inside PathBench support step-by-step planning animation,
a useful feature for debugging and educational purposes.
We have included sampling-based planners, although they were originally designed
for planning in continuous space, where they achieve competitive results. Graph-
based or discrete sampling-based algorithms also exist, where instead of sampling from
the free space, nodes of a graph are sampled for planning. Examples of such algo-
rithms include [59,64,65], d-RRT [66], and d-RRT* [67]; however, as Branicky et al.
state [64], “the performance of the RRT in discrete space is degraded by a decreased
sampling algorithms (e.g. discretized RRT-Connect and discretized sPRM) were im-
plemented within PathBench where each random sample is confined to an available
grid cell.
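The core idea of this discretization, confining each random sample to an available grid cell, can be
sketched as follows; the actual PathBench implementation may differ in its details.

```python
# Sketch of the sample-discretization idea described above (the PathBench
# implementation may differ): each random sample is snapped to a free grid
# cell rather than drawn from continuous free space.
import random

def sample_free_cell(width, height, is_free, rng=random):
    """Draw integer grid coordinates until an unoccupied cell is found."""
    while True:
        cell = (rng.randrange(width), rng.randrange(height))
        if is_free(cell):
            return cell

# Usage with a simple occupancy test:
obstacles = {(2, 3), (4, 4)}
cell = sample_free_cell(8, 8, lambda c: c not in obstacles)
```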
5. Supported Maps
The map is the environment in which simulation and benchmarking of algorithms are
performed. Grid environments in 2D and 3D are presently used within PathBench. A
Map contains different entities such as the agent, goal and obstacles, and provides a
clean interface that defines the movement and interaction between them. Therefore, a
map can be extended to support various environments. The following are supported
2D and 3D map types currently in PathBench.
(a) (b) (c)
Figure 5. The three types of generatable grid maps: (a) uniform random fill map (64 × 64 2D and 16 × 16 3D
dimensions, [0.1, 0.3] obstacle fill rate range), (b) block map (64 × 64 2D and 16 × 16 3D dimensions, [0.1,
0.3] obstacle fill rate range, [1, 6] number of obstacles range), (c) house map (64 × 64 2D and 16 × 16 3D
dimensions, [8, 15] minimum room size range, [35, 45] maximum room size range). (Note: Magenta is used
as the goal colour for all 2D maps, as the green goal is difficult to spot.)
Figure 6. External maps can be ported into PathBench for benchmarking with ease. The maps above are
from video games and a real-world city [51].
can also be generated and used in PathBench. The inclusion of point cloud maps is to
facilitate the development and support of algorithms that work exclusively with point
clouds, such as MPNet [6]. Different map types are included in PathBench, so that
map-type specific performance of path planning algorithms can be analyzed further
(See Fig. 5.)
6. Performance Metrics
1) Success Rate (%). The rate of success of finding a path, from start to goal,
demonstrates the reliability of the algorithm.
2) Path Length (cell count). The total distance taken to reach the goal showcases
the efficiency of the path generated.
3) Distance Left To Goal (cell count). The Euclidean distance left from the agent to
the goal, in case of an algorithm failure. This shows the extent of the planning failure.
4) Time (seconds). The total time taken to reach the goal. Time required for plan-
ning is an important factor for real life robotics applications.
5) Path Deviation (%). The path length difference when compared to the shortest
possible path, generated by A*.
6) Search Space (%). The amount of space that was explored and used to find the
final path to the goal.
7) Maximum Memory Consumption (MB). The maximum amount of memory used
during a path generation session. Memory usage could be a limiting factor for various
robotics settings, thus being a relevant benchmarking metric.
8) Obstacle clearance (cell count). Obstacle clearance provides the mean distance of
the agent from obstacles during traversal, where a higher value is typically preferable.
9) Smoothness of trajectory (degrees). The average angle change between consecu-
tive segments of paths shows how drastic and sudden the agent’s movement changes
could be. A lower average angle change value provides a smoother trajectory. With
discretized grid environments, smoothness of trajectory could produce misleading val-
ues due to large angle changes between each step. Smoothness of trajectory would be
a more reliable metric in continuous map types where consecutive path segments can
be defined with more clarity. PathBench has plans to explore continuous environments
for learned manipulator planning approaches in the future.
Beyond the metrics above, additional metrics can be implemented in PathBench
if required. Nowak et al. provide potential metrics that could be added, including
orientation error, number of collisions, number of narrow passages traversed, and
number of parameters to tune [72].
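As an illustration, two of the metrics above, path deviation and smoothness of trajectory, can be
computed from a path given as a list of grid cells as sketched below; the exact formulas used inside
PathBench may differ slightly.

```python
# Sketch of two of the metrics above, computed from a path given as a list of
# grid cells; the exact formulas used inside PathBench may differ slightly.
import math

def path_length(path):
    """Total Euclidean length of consecutive path segments (cell count)."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def path_deviation(path, astar_path):
    """Percentage increase over the shortest (A*) path length."""
    shortest = path_length(astar_path)
    return 100.0 * (path_length(path) - shortest) / shortest

def smoothness(path):
    """Mean absolute angle change (degrees) between consecutive segments."""
    angles = []
    for a, b, c in zip(path, path[1:], path[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        diff = abs(h2 - h1)
        angles.append(math.degrees(min(diff, 2 * math.pi - diff)))
    return sum(angles) / len(angles) if angles else 0.0
```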
7. Experimental Results
Table 2. Results of classical algorithms: On 2D 64×64 PathBench built-in maps (3000 samples), 512×512
city maps (300 samples), and video game maps with 800 to 1200 cells in dimension (300 samples). An Asus
laptop with an Intel i5-6200U CPU and an Nvidia GeForce 940MX GPU was used. The failed cases occur when
there is no valid path towards the given goal.
map type      planner              path dev. (%)   distance left if failed (cell count)   time (sec)   success rate (%)
PathBench     A* [11]              0.00            0.25                                   0.106        99.4
              Wavefront [12]       0.36            0.25                                   0.340        99.4
              Dijkstra [3]         0.00            0.25                                   0.338        99.4
              d-sPRM [14]          35.03           2.67                                   0.767        93.3
              d-RRT [13]           19.91           0.48                                   7.336        96.8
              d-RRT-Connect [73]   21.17           0.26                                   0.2458       99.33
City          A*                   0.00            0.00                                   3.816        100.0
              Wavefront            1.08            0.00                                   8.468        100.0
              Dijkstra             0.00            0.00                                   9.928        100.0
              d-sPRM               123.64          13.68                                  5.377        93.6
              d-RRT                54.98           3.43                                   39.248       95.7
              d-RRT-Connect        65.08           5.35                                   3.489        96.6
Video Games   A*                   0.00            0.00                                   31.567       100.0
and goal points are chosen randomly. Evaluations are done on maps that have never
been seen by the algorithms. To ensure that comparisons were done with the same
trajectories between systems, the pseudorandom number generator's seeds for the
sampling-based algorithms were initialized identically. We use the default values for
the holdout method since ample training data is available, but the proportions of data
used for training and validation can also be changed. The default batch size is 50, as it
fits on most GPUs.
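A minimal sketch of such seed initialization, assuming the usual Python, NumPy, and PyTorch random
sources, is shown below; the exact seeding points used by PathBench may differ.

```python
# Sketch of fixing identical pseudorandom seeds before each sampling-based run
# so that trajectories match across hardware systems (the exact seeding points
# in PathBench may differ).
import random
import numpy as np
import torch

SEED = 0

def reset_seeds(seed=SEED):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
```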
Table 3. Results of learned algorithms: On the same 2D 64×64 PathBench built-in maps (3000 samples),
512×512 city maps (300 samples), and video game maps (300 samples) from Table 2. An Asus laptop
(Intel i5-6200U CPU and Nvidia GeForce 940MX GPU) was used to generate the results.
map type      planner            path dev. (%)   distance left if failed (cell count)   time (sec)   success rate (%)
PathBench     VIN [4]            5.25            29.55                                  1.796        18.2
              MPNet [6]          10.86           21.59                                  0.296        41.2
              GPPN [5]           6.83            27.56                                  2.018        30.3
              Online LSTM [7]    0.79            11.23                                  0.162        53.8
              CAE-LSTM [8]       0.99            12.57                                  0.207        48.7
              Bagging LSTM [9]   1.62            3.99                                   0.985        78.1
              WPN [9]            2.78            0.15                                   0.817        99.4
City          VIN                100.00          184.21                                 32.612       0.0
              GPPN               100.00          184.21                                 57.633       0.0
              Online LSTM        1.05            87.51                                  3.328        16.7
Video Games   GPPN
              Online LSTM        0.00            217.10                                 16.380       8.3
              CAE-LSTM           3.05            199.60                                 27.330       5.3
              Bagging LSTM       0.41            155.89                                 123.901      20.6
              WPN                10.55           0.00                                   110.307      100.0
Table 4. Results of classical algorithms on 3D 28×28×28 PathBench built-in maps using an Asus laptop
(3000 samples).
planner          success rate (%)   path len. (cell count)   path dev. (%)   time (sec)   path smoothness (deg.)
A*               100.0              20.69                    0.00            0.475        0.28
Wavefront        100.0              21.27                    0.51            6.118        0.11
Dijkstra         100.0              20.69                    0.00            8.453        0.13
d-sPRM           100.0              36.87                    16.18           0.248        0.37
d-RRT-Connect    99.7               38.22                    17.56           0.097        0.41
parameter definitions. Thirty video game maps with height and width varying from
800 to 1200 cells were benchmarked in a similar manner. Results of benchmarking
on video game and city maps are also listed in Table 2 and Table 3. The use of
external environments in this experiment demonstrates the capability of PathBench
to incorporate additional datasets.
Figure 7. Graphical analysis of 2D benchmarking for classical and learned algorithms, produced by
PathBench’s analyzer. Results from Table 2 and Table 3 are used.
percent, and wavefront also guarantees paths close to the shortest possible lengths. d-RRT-
Connect has a much higher success rate than other sampling-based methods, while being
considerably faster. The number and type of samples taken are parameters that can be
modified to configure the behaviour of sampling-based algorithms in PathBench. Although
A* can generate the shortest path length in both 2D and 3D planning scenarios, d-
RRT-Connect is capable of planning significantly faster in 3D and larger 2D
environments. Sampling-based path planning algorithms are also often selected for planning
in higher dimensions and are the standard for manipulator motion planning. They will
be a major focus as PathBench extends to higher-dimensional planning. Machine
learning algorithms, on the other hand, experience lower success rates for all map
types. The success rates decrease greatly as maps become larger and more complex.
For example, VIN and GPPN have been shown not to scale well with the increase in map
size and could not successfully provide any paths in the city and video game datasets.
The drop in success rate of VIN and GPPN is consistent with what has been reported
in the respective papers. Meanwhile, it is notable from the benchmarking results
that GPPN outperforms VIN, suggesting that recurrent neural networks can help the
algorithms achieve better performance. WPN is an exception, with the ability to
plan at a 100% success rate for all map types when a solution is available, and it has a
notably lower path deviation. This is due to WPN being a hybrid algorithm that
incorporates A* in its waypoint planning approach. Among the learning-based algorithms,
the three LSTM algorithms and MPNet have faster planning times and higher success rates
than VIN and GPPN. However, machine learning algorithms’ path planning times
in general are higher when compared to classical approaches, especially as the map
size and complexity increase. MPNet was only tested on PathBench maps, due to
constraints of the implementation used: the publicly available version of the network
allows encoding of only a limited number of obstacles. It has a faster planning time than
VIN, GPPN, and WPN, but a higher path deviation. Performing this kind of simple
and rapid analysis is trivial in PathBench.
(a) (b)
(c) (d)
Figure 8. Scatter plots showing the performance of several classical and learned path planning algorithms.
Each marker shape shows a different algorithm, and different colors denote the various hardware systems.
Performance metrics include the path deviation, success rate of the algorithm, trajectory smoothness, and the
computation time. (Some markers overlap each other completely, due to values being the same across systems.
These overlapped markers are indicated in black, as seen in plots (c) and (d).)
the performance metrics such as success rate or trajectory smoothness.
The computation times of the algorithms on hardware systems with more powerful
CPUs are significantly faster, as expected. Systems with more advanced GPUs have
lower computation times for learning-based algorithms, such as VIN, GPPN, and WPN,
due to machine learning algorithms’ reliance on GPUs for matrix operations. A con-
sistent spread across data points for different systems in the average time metric is
observed. On the other hand, success rate (Fig. 8-b and c) and path deviation (Fig. 8-
a and d) of both classical and learning-based algorithms share a strong consistency
across the five hardware systems tested. Trajectory smoothness and obstacle clearance
performance of classical and learned algorithms are also consistent across platforms.
Specific analysis between algorithms can be done as well. A* and WPN share the
same trajectory smoothness in Fig. 8-c, as seen with the overlapped black markers.
This is due to WPN’s use of A* for planning between waypoints. Dijkstra is expected
to have a trajectory similar to A*’s; however, the existence of multiple shortest possible
paths on most maps led to a different trajectory smoothness value. On the other hand,
GPPN was built upon VIN to improve its lower rate of planning success. In Fig. 8-
b, GPPN can be observed to require more time for path computation in PathBench
while having a much higher success rate than VIN. However, GPPN’s computation
time improves significantly, surpassing VIN’s performance on most systems, if CPUs
with better processing power are used. This, combined with the consistency of other
metrics across systems, suggests that the computation time of algorithms is the most
important factor when selecting hardware systems for robotic applications.
Scatter plots like those in this section can be generated with PathBench’s analyzer
and used to benchmark and select optimal algorithms and hardware for specific tasks.
Such analysis, enabled by PathBench, allows system developers and algorithm
designers to efficiently select the right algorithm for their application, or to benchmark
and compare a new algorithm with existing ones. For instance, if a particular
application requires a higher obstacle clearance, then, according to Fig. 1, MPNet is a
better choice than A*, assuming there is access to a good GPU such as an Nvidia
GeForce RTX 2080, since MPNet is able to generate paths in ≈0.3 seconds with the
best obstacle clearance values. If no GPU is available, the algorithm still runs in under
a second, i.e. ≈0.5 seconds on a NUC. This advantage, according to Fig. 8-b, comes at
the cost of a lower success rate (≈41%) compared with other machine learning
algorithms such as WPN (≈99%) and Bagging LSTM (≈78%).
Figure 9. Visual comparison of planned paths on real-world maps. a) A* b) d-RRT-Connect c) WPN (Note:
Green represents the planned path. Grey is the explored space, and red shows the edges and nodes of the sampling
algorithm. Black Xs denote the start and goal points for (b), since the red edges cover the green path.)
8. Conclusion
Acknowledgements
Funding
This work was partially funded by DRDC-IDEaS (CPCA-0126) and EPSRC
(EP/P010040/1).
Biographical Note
Riku Murai received an M.Eng. in Computing in 2019 from Imperial College
London. He is currently a PhD student in the Department of Computing at Imperial
College London. His research interests include robotics and computer vision, in
particular the use of novel hardware and distributed computation.
Paul H. J. Kelly has been on the faculty at Imperial College London since 1989; he
holds a BSc in Computer Science from UCL (1983) and a PhD in Computer Science
from the University of London (1987). He leads Imperial’s Software Performance
Optimisation research group, working on domain-specific compiler technology.
References
[1] González D, Pérez J, Milanés V, et al. A Review of Motion Planning Techniques for
Automated Vehicles. IEEE Trans Intelligent Transportation Systems. 2016;17(4):1135–
1145.
[2] LaValle SM. Planning Algorithms. Cambridge university press; 2006.
[3] Choset HM, Hutchinson S, Lynch KM, et al. Principles of Robot Motion: theory, algo-
rithms, and implementation. MIT press; 2005.
[4] Tamar A, Levine S, Abbeel P. Value iteration networks. CoRR. 2016;abs/1602.02867.
Available from: http://arxiv.org/abs/1602.02867.
[5] Lee L, Parisotto E, Chaplot DS, et al. Gated path planning networks. ICML. 2018;.
[6] Qureshi AH, Miao Y, Simeonov A, et al. Motion planning networks: Bridging the gap
between learning-based and classical motion planners. IEEE Transactions on Robotics.
2020;:1–9.
[7] Nicola F, Fujimoto Y, Oboe R. A LSTM Neural Network applied to Mobile Robots Path
Planning. In: IEEE International Conference on Industrial Informatics (INDIN); 2018. p.
349–354.
[8] Inoue M, Yamashita T, Nishida T. Robot Path Planning by LSTM Network Under Chang-
ing Environment. In: Advances in computer communication and computational sciences.
Springer; 2019. p. 317–329.
[9] Alexandru-Iosif Toma, Hussein Ali Jaafar, Hao-Ya Hsueh, Stephen James, Daniel Lenton,
Ronald Clark, Sajad Saeedi. Waypoint Planning Networks. In: 18th Conference on Robots
and Vision (CRV), Burnaby BC Canada; 2021.
[10] Toma AI, Hsueh HY, Jaafar HA, et al. PathBench: A Benchmarking Platform for Classical
and Learned Path Planning Algorithms. In: 18th Conference on Robots and Vision (CRV),
Burnaby BC Canada; 2021.
[11] Duchoň F, Babinec A, Kajan M, et al. Path Planning with Modified A Star Algorithm
for a Mobile Robot. Procedia Engineering. 2014;96:59–69.
[12] Luo C, Krishnan M, Paulik M, et al. An Effective Trace-guided Wavefront Navigation
and Map-building Approach for Autonomous Mobile Robots. In: Intelligent Robots and
Computer Vision; Vol. 9025; 2014. p. 90250U.
[13] LaValle SM. Rapidly-exploring Random Trees: A new tool for path planning ; 1998.
[14] Kavraki LE, Svestka P, Latombe J, et al. Probabilistic roadmaps for path planning in
high-dimensional configuration spaces. Vol. 12. ; 1996.
[15] Rajko S, LaValle SM. A pursuit-evasion bug algorithm. In: ICRA; Vol. 2; 2001. p. 1954–
1960.
[16] Kamon I, Rivlin E. Sensory-based motion planning with global proofs. IEEE Transactions
on Robotics and Automation. 1997;13(6):814–822.
[17] Paull L, Saeedi S, Seto M, et al. Sensor-driven online coverage planning for autonomous
underwater vehicles. IEEE/ASME Transactions on Mechatronics. 2013;18(6):1827–1838.
[18] Hwang YK, Ahuja N. A potential field approach to path planning. IEEE Transactions on
Robotics and Automation. 1992;8(1):23–32.
[19] Ziegler J, Bender P, Schreiber M, et al. Making Bertha Drive—An Autonomous Journey
on a Historic Route. IEEE Intelligent transportation systems magazine. 2014;6(2):8–20.
[20] Kalakrishnan M, Chitta S, Theodorou E, et al. STOMP: Stochastic trajectory optimiza-
tion for motion planning. In: IEEE International Conference on Robotics and Automation;
2011. p. 4569–4574.
[21] Zucker M, Ratliff N, Dragan AD, et al. CHOMP: Covariant Hamiltonian optimization for
motion planning. The International Journal of Robotics Research. 2013;32(9-10):1164–
1193.
[22] Schulman J, Duan Y, Ho J, et al. Motion planning with sequential convex optimization
and convex collision checking. The International Journal of Robotics Research. 2014;
33(9):1251–1270.
[23] Dolgov D, Thrun S, Montemerlo M, et al. Path Planning for Autonomous Vehicles in
Unknown Semi-structured Environments. The International Journal of Robotics Research.
2010;29(5):485–501.
[24] Schwarting W, Alonso-Mora J, Rus D. Planning and decision-making for autonomous
vehicles. Annual Review of Control, Robotics, and Autonomous Systems. 2018;1:187–210.
[25] Chen N, Karl M, van der Smagt P. Dynamic Movement Primitives in Latent Space of
Time-dependent Variational Autoencoders. In: IEEE-RAS International Conference on
Humanoid Robots (Humanoids); 2016. p. 629–636.
[26] Gupta S, Davidson J, Levine S, et al. Cognitive mapping and planning for visual naviga-
tion. In: CVPR; 2017. p. 2616–2625.
[27] Qureshi AH, Yip MC. Deeply Informed Neural Sampling for Robot Motion Planning. In:
IROS; 2018. p. 6582–6588.
[28] Chamzas C, Shrivastava A, Kavraki LE. Using Local Experiences for Global Motion
Planning. arXiv:190308693. 2019;.
[29] Qureshi AH, Simeonov A, Bency MJ, et al. Motion planning networks. arXiv preprint
arXiv:180605767. 2018;.
[30] Qureshi AH, Dong J, Choe A, et al. Neural manipulation planning on constraint mani-
folds. IEEE Robotics and Automation Letters. 2020;5(4):6089–6096.
[31] Bency MJ, Qureshi AH, Yip MC. Neural Path Planning: Fixed Time, Near-Optimal Path
Generation via Oracle Imitation. arXiv preprint arXiv:190411102. 2019;.
[32] Wu K, Abolfazli Esfahani M, Yuan S, et al. TDPP-Net: Achieving three-dimensional path
planning via a deep neural network architecture. Neurocomputing. 2019;357:151 – 162.
[33] Mohammadi M, Al-Fuqaha A, Oh JS. Path planning in support of smart mobility appli-
cations using generative adversarial networks ; 2018.
[34] Choi D, jun Han S, Min K, et al. Pathgan: Local path planning with generative adversarial
networks ; 2020.
[35] Srinivas A, Jabri A, Abbeel P, et al. Universal planning networks. ICML. 2018;.
[36] Levine S, Koltun V. Guided policy search. In: ICML; 2013. p. III–1–III–9.
[37] Abbeel P, Coates A, Ng AY. Autonomous Helicopter Aerobatics Through Apprenticeship
Learning. International Journal of Robotics Research. 2010;29(13):1608–1639.
[38] Diankov R, Kuffner J. OpenRAVE: A Planning Architecture for Autonomous Robotics.
Robotics Institute, Pittsburgh, PA, Tech Rep CMU-RI-TR-08-34. 2008;79.
[39] Şucan IA, Moll M, Kavraki LE. The Open Motion Planning Library. IEEE Robotics &
Automation Magazine. 2012;19(4):72–82.
[40] Sucan IA, Chitta S. MoveIt ; 2014. https://moveit.ros.org; accessed March 8, 2022.
[41] Plaku E, Bekris KE, Kavraki LE. OOPS for Motion Planning: An Online, Open-source,
Programming System. In: ICRA; Vol. 7; 2007. p. 3711–3716.
[42] Moll M, Sucan IA, Kavraki LE. Benchmarking motion planning algorithms: An extensible
infrastructure for analysis and visualization. IEEE Robotics Automation Magazine. 2015;
22(3):96–102.
[43] Shen B, Xia F, Li C, et al. iGibson, a simulation environment for interactive tasks in large
realistic scenes. arXiv preprint. 2020;.
[44] Savva M, Kadian A, Maksymets O, et al. Habitat: A Platform for Embodied AI Research.
In: ICCV; 2019.
[45] Murali A, Chen T, Alwala KV, et al. Pyrobot: An open-source robotics framework for
research and benchmarking. CoRR. 2019;abs/1906.08236.
[46] Althoff M, Koschi M, Manzinger S. CommonRoad: Composable Benchmarks for Motion
Planning on Roads. In: IEEE Intelligent Vehicles Symposium (IV); 2017. p. 719–726.
[47] Meijer J, Lei Q, Wisse M. Performance study of single-query motion planning for grasp
execution using various manipulators. In: 2017 18th International Conference on Advanced
Robotics (ICAR); 2017. p. 450–457.
[48] Perille D, Truong A, Xiao X, et al. Benchmarking metric ground navigation. In: IEEE
International Symposium on Safety, Security and Rescue Robotics (SSRR); IEEE; 2020.
[49] Fox D, Burgard W, Thrun S. The dynamic window approach to collision avoidance. IEEE
Robotics Automation Magazine. 1997;4(1):23–33.
[50] Quinlan S, Khatib O. Elastic bands: connecting path planning and control. In: [1993]
Proceedings IEEE International Conference on Robotics and Automation; 1993. p. 802–
807 vol.2.
[51] Sturtevant N. Benchmarks for grid-based pathfinding. Transactions on Computational
Intelligence and AI in Games. 2012;4(2):144 – 148. Available from: http://web.cs.du.
edu/~sturtevant/papers/benchmarks.pdf.
[52] Radmanesh M, Kumar M, Guentert PH, et al. Overview of path-planning and ob-
stacle avoidance algorithms for UAVs: A comparative study. Unmanned Systems. 2018;
06(02):95–118.
[53] Wahab MNA, Nefti-Meziani S, Atyabi A. A comparative review on mobile robot
path planning: Classical or meta-heuristic methods? Annual Reviews in Control. 2020;
50:233–252.
[54] Delmerico J, Scaramuzza D. A benchmark comparison of monocular visual-inertial odom-
etry algorithms for flying robots. In: 2018 IEEE International Conference on Robotics and
Automation (ICRA); 2018. p. 2502–2509.
[55] Nardi L, Bodin B, Zia MZ, et al. Introducing SLAMBench, a performance and accuracy
benchmarking methodology for SLAM. In: IEEE International Conference on Robotics
and Automation (ICRA); 2015. p. 5783–5790.
[56] Bodin B, Wagstaff H, Saeedi S, et al. SLAMBench2: Multi-Objective Head-to-Head
Benchmarking for Visual SLAM. In: IEEE International Conference on Robotics and
Automation (ICRA); 2018. p. 3637–3644.
[57] Bujanca M, Gafton P, Saeedi S, et al. SLAMBench 3.0: Systematic Automated Repro-
ducible Evaluation of SLAM Systems for Robot Vision Challenges and Scene Under-
standing. In: International Conference on Robotics and Automation (ICRA); 2019. p.
6351–6358.
[58] Paszke A, Gross S, Chintala S, et al. Automatic Differentiation in PyTorch. In: NIPS
Autodiff Workshop; 2017.
[59] Morgan S, Branicky M. Sampling-based planning for discrete spaces. In: IEEE/RSJ Inter-
national Conference on Intelligent Robots and Systems (IROS); Vol. 2; 2004. p. 1938–1945.
[60] Goslin M, Mine MR. The Panda3D graphics engine. Computer. 2004;37(10):112–114.
[61] Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press; 2016.
[62] Nielsen M. Neural networks and deep learning. Determination Press; 2015.
[63] Stow E, Murai R, Saeedi S, et al. Cain: Automatic code generation for simultaneous con-
volutional kernels on focal-plane sensor-processors. In: International Workshop on Lan-
guages and Compilers for Parallel Computing (LCPC); 2021.
[64] Branicky M, Curtiss M, Levine J, et al. RRTs for nonlinear, discrete, and hybrid planning
and control. In: IEEE International Conference on Decision and Control (ICDC); 2003.
p. 657–663.
[65] Hvezda J, Kulich M, Preucil L. Improved discrete rrt for coordinated multi-robot plan-
ning. In: International Conference on Informatics in Control, Automation and Robotics;
2018. p. 171–179.
[66] Solovey K, Salzman O, Halperin D. Finding a needle in an exponential haystack: Discrete
RRT for exploration of implicit roadmaps in multi-robot motion planning. The Interna-
tional Journal of Robotics Research. 2016;35(5):501–513.
[67] Shome R, Solovey K, Dobson A, et al. dRRT*: Scalable and informed asymptotically-
optimal multi-robot motion planning. Autonomous Robots. 2020;44(3-4):443–467.
[68] Grisetti G, Stachniss C, Burgard W. Improved Techniques for Grid Mapping With Rao-
Blackwellized Particle Filters. IEEE Transactions on Robotics. 2007;23(1):34–46.
[69] Li T, Ho D, Li C, et al. Houseexpo: A large-scale 2d indoor layout dataset for learning-
based algorithms on mobile robots ; 2019.
[70] Song S, Yu F, Zeng A, et al. Semantic scene completion from a single depth image. CVPR.
2017;:1746–1754.
[71] Brewer D, Sturtevant NR. Benchmarks for pathfinding in 3d voxel space. Symposium on
Combinatorial Search (SoCS). 2018;.
[72] Nowak W, Zakharov A, Blumenthal S, et al. Benchmarks for mobile manipulation and
robust obstacle avoidance and navigation ; 2010.
[73] Kuffner JJ, LaValle SM. RRT-connect: An efficient approach to single-query path plan-
ning. In: ICRA; Vol. 2; 2000. p. 995–1001.