
Map-Based Localization For

Autonomous Vehicles

A Major Project Report (18ECP81)

Submitted by,

Bhanu Prakash Sedamkar 1RV19EC037

Muhammad Akram A 1RV19EC101

Souparna Roy 1RV19EC172

Under the guidance of


Dr. Rajani Katiyar
Assistant Professor
Dept. of ECE
RV College of Engineering

In partial fulfillment of the requirements for the degree of


Bachelor of Engineering in

Electronics and Communication Engineering


2022-23
RV College of Engineering®, Bengaluru
(Autonomous institution affiliated to VTU, Belagavi)
Department of Electronics and Communication Engineering

CERTIFICATE
Certified that the major project (18EC81) work titled Map-Based Localization For
Autonomous Vehicles is carried out by Bhanu Prakash Sedamkar (1RV19EC037),
Muhammad Akram A (1RV19EC101) and Souparna Roy (1RV19EC172), who
are bona fide students of RV College of Engineering, Bengaluru, in partial fulfillment
of the requirements for the degree of Bachelor of Engineering in Electronics and
Communication Engineering of the Visvesvaraya Technological University, Belagavi
during the year 2022-23. It is certified that all corrections/suggestions indicated for the
Internal Assessment have been incorporated in the major project report deposited in the
departmental library. The major project report has been approved as it satisfies the
academic requirements in respect of major project work prescribed by the institution for
the said degree.

Guide Head of the Department Principal

Dr. Rajani Katiyar Dr. H. V. Ravish Aradhya Dr. K. N. Subramanya

External Viva
Name of Examiners Signature with Date

1.

2.
DECLARATION

We, Bhanu Prakash Sedamkar, Muhammad Akram A and Souparna Roy


students of eighth semester B.E., Department of Electronics and Communication Engi-
neering, RV College of Engineering, Bengaluru, hereby declare that the major project
titled ‘Map-Based Localization For Autonomous Vehicles’ has been carried out
by us and submitted in partial fulfillment for the award of the degree of Bachelor of En-
gineering in Electronics and Communication Engineering during the year 2022-23.

Further, we declare that the content of the dissertation has not been submitted previously
by anybody for the award of any degree or diploma to any other university.

We also declare that any Intellectual Property Rights generated out of this project carried
out at RVCE will be the property of RV College of Engineering, Bengaluru, and we will
be among the authors of the same.

Place: Bengaluru

Date:

Name Signature

1. Bhanu Prakash Sedamkar(1RV19EC037)

2. Muhammad Akram A(1RV19EC101)

3. Souparna Roy(1RV19EC172)
ACKNOWLEDGEMENT

We are indebted to our guide, Dr. Rajani Katiyar, Assistant Professor, RV College
of Engineering, for her wholehearted support, suggestions, and invaluable advice through-
out our project work, and for her help in the preparation of this thesis.

We also express our gratitude to our panel members Dr. Sujatha Hiremath, Assis-
tant Professor and Dr. Rajani Katiyar, Assistant Professor, Department of Electron-
ics and Communication Engineering for their valuable comments and suggestions during
the phase evaluations.

Our sincere thanks to the project coordinators, Prof. Sindhu Rajendran R and
Dr. Veena Devi, for their timely instructions and support in coordinating the project.

Our gratitude to Prof. Narashimaraja P for the well-organized LaTeX template, which
made report writing easy and interesting.

Our sincere thanks to Dr. H. V. Ravish Aradhya, Professor and Head, Department
of Electronics and Communication Engineering, RVCE for the support and encourage-
ment.

We express sincere gratitude to our beloved Principal, Dr. K. N. Subramanya, for
the appreciation shown towards this project work.

We thank all the teaching staff and technical staff of Electronics and Communication
Engineering department, RVCE for their help.

Lastly, we take this opportunity to thank our family members and friends who pro-
vided all the backup support throughout the project work.
PUBLICATION DETAILS
ABSTRACT

Map-based localization in autonomous vehicles is a crucial component of their naviga-
tion. Accurate localization is essential for safe and effective autonomous driving. One of
the primary issues in map-based localization is the computational complexity involved in
real-time processing. The algorithms used for localization, NDT and ICP, require signifi-
cant computational resources, including processing power and memory. These algorithms
perform complex computations to align sensor data with pre-existing maps, estimate the
vehicle’s pose, and maintain a consistent and accurate localization.
The objectives of this project were to integrate various sensors, including GPS, IMU,
and Velodyne LiDAR sensors, within the ROS (Robot Operating System) framework.
This allowed for the fusion of sensor data, enabling a comprehensive understanding of
the vehicle’s environment. Additionally, the project implemented the LIO-SAM al-
gorithm for mapping: LIO-SAM utilizes LiDAR data to build a highly accurate
map of the surroundings and to generate a Point Cloud representation of the RVCE
dataset. This Point Cloud provides a detailed 3D representation of
the environment, capturing essential features and landmarks. Finally, to perform map-
based localization, the ICP (Iterative
Closest Point) and NDT (Normal Distributions Transform) algorithms are used to align
the observed Point Cloud data with the map generated by LIO-SAM and estimate the
vehicle’s pose accurately.
While collecting the datasets, the vehicle equipped with the sensors was traveling at
speeds between 20 km/h and 25 km/h. The datasets were collected using the Velodyne
and Ouster LiDARs in a dynamic environment inside the RVCE campus. The Velodyne
dataset has a run time of 345.71 s while the Ouster dataset has a run time of 93.18 s. Both
datasets were compared at the 23 s mark to evaluate the IMU, mapping, and GPS data
and ensure uniformity of input while executing both map-based algorithms. The GPS
accuracy was found to be higher in the case of ICP. For ICP, the units of visualization
used were FlatSquares with a size of 0.15 m. For NDT, the units of visualization used
were FlatSquares with a size of 3 m. The larger step size for NDT shows that it is more
computationally efficient, as it processes more data per unit time. This makes it suitable
for use in dynamic environments. A combination of both helped in transitioning from
move-based to map-based localization.

CONTENTS

Abstract i

List of Figures v

Abbreviations vii

1 Introduction to Map-Based Localization and Autonomous Vehicles 1


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Problem statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Brief Methodology of the project . . . . . . . . . . . . . . . . . . . . . . 16
1.7 Organization of the report . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Theory and Fundamentals of Map-based Localization 18


2.1 Prerequisites for Map-Based Localization in Autonomous Vehicles . . . . 19
2.1.1 Fundamentals of Lidar Sensing . . . . . . . . . . . . . . . . . . . 19
2.1.2 LIO-SAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.3 Map-Based Localization . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Software Tools Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3 Hardware Tools Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

3 Design and Methodology of Map-based Localization System 27


3.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2 Robot Operating System (ROS) . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 LIO-SAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.1 Design Specifications . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.2 Detailed Design Methodology . . . . . . . . . . . . . . . . . . . . 31
3.4 ICP Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4.1 Design Specifications . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4.2 Global Position Estimation . . . . . . . . . . . . . . . . . . . . . . 37

3.4.3 NDT Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4.4 Design Specifications . . . . . . . . . . . . . . . . . . . . . . . . . 40

4 Implementation of Map-based Localization 42


4.1 Software Installations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.1 Ubuntu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.2 Robot Operating System(ROS) . . . . . . . . . . . . . . . . . . . 43
4.2 Workspace Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.3 Sensor Driver Installations . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.4 LIO-SAM Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4.1 Installation of Dependencies . . . . . . . . . . . . . . . . . . . . . 48
4.4.2 Preparing LiDAR Data . . . . . . . . . . . . . . . . . . . . . . . . 49
4.4.3 Preparing IMU Data . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.4.4 Selecting Datasets & Configuring Parameters . . . . . . . . . . . 51
4.4.5 Running the LIO-SAM Package . . . . . . . . . . . . . . . . . . . 51
4.4.6 Testing LIO-SAM with RVCE Dataset . . . . . . . . . . . . . . . 52
4.5 ICP Localization Implementation . . . . . . . . . . . . . . . . . . . . . . 53
4.5.1 Creation of ICP Workspace . . . . . . . . . . . . . . . . . . . . . 53
4.5.2 Installation of Dependencies . . . . . . . . . . . . . . . . . . . . . 54
4.5.3 Parameter Configuration . . . . . . . . . . . . . . . . . . . . . . . 54
4.5.4 Running ICP Package . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5.5 Testing ICP with RVCE Dataset . . . . . . . . . . . . . . . . . . 56
4.6 NDT Localization Implementation . . . . . . . . . . . . . . . . . . . . . . 57
4.6.1 Creation of NDT Workspace . . . . . . . . . . . . . . . . . . . . . 57
4.6.2 Installation of Dependencies . . . . . . . . . . . . . . . . . . . . . 57
4.6.3 Parameter Configuration . . . . . . . . . . . . . . . . . . . . . . . 58
4.6.4 Running NDT Package . . . . . . . . . . . . . . . . . . . . . . . . 59
4.6.5 Converting pcd map to csv file . . . . . . . . . . . . . . . . . . . . 60
4.6.6 Testing NDT with RVCE Dataset . . . . . . . . . . . . . . . . . . 60

5 Results & Discussions 62


5.1 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.1.1 LIO-SAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

5.1.2 ICP Localization . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.1.3 NDT Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.2 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.2.1 LIO-SAM Using RVCE Dataset . . . . . . . . . . . . . . . . . . . 66
5.2.2 ICP Localization using RVCE Campus Dataset . . . . . . . . . . 69
5.2.3 NDT Localization using RVCE Campus Datasets . . . . . . . . . 70
5.3 Inferences drawn from the Results obtained . . . . . . . . . . . . . . . . 71

6 Conclusion and Future Scope 72


6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.2 Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.3 Learning Outcomes of the Project . . . . . . . . . . . . . . . . . . . . . . 74

Bibliography 74

A Appendix 80
A.1 Paper Submission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
A.2 Similarity report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

LIST OF FIGURES

2.1 Ubuntu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2 Robot Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Velodyne Lidar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4 Ouster Lidar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5 Xsens mti IMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6 Flir Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1 Methodology of the Project . . . . . . . . . . . . . . . . . . . . . . . . . 28


3.2 ROS Framework and its Applications . . . . . . . . . . . . . . . . . . . . 30
3.3 System Architecture of LIO-SAM . . . . . . . . . . . . . . . . . . . . . . 31
3.4 System structure for LIO-SAM . . . . . . . . . . . . . . . . . . . . . . . 32
3.5 Example of 2D Point-to-plane ICP . . . . . . . . . . . . . . . . . . . . . 35
3.6 Global Position Estimation for ICP . . . . . . . . . . . . . . . . . . . . . 37
3.7 Position of initial source cloud as estimated by the EKF. . . . . . . . . . 38
3.8 Initial placement of the subset source point cloud. . . . . . . . . . . . . . 38

4.1 Successful installation of XSens MTi IMU Drivers . . . . . . . . . . . . . 46


4.2 Successful installation of FLIR Camera Drivers . . . . . . . . . . . . . . 47
4.3 Successful installation of Velodyne LiDAR Drivers . . . . . . . . . . . . . 48
4.4 Successful installation of Ouster LiDAR Drivers . . . . . . . . . . . . . . 49
4.5 Parameter changes required for running RVCE Campus Datasets . . . . 52
4.6 The Terminator window showing LIO-SAM execution. The screens are:
a) Running roscore, b) Running rviz, c) Playing RVCE Campus dataset,
d) Running LIO-SAM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

5.1 Localization by LIO-SAM using the Park Dataset . . . . . . . . . . . . . 63


5.2 ICP localization in a forest . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.3 NDT localization for courtyard . . . . . . . . . . . . . . . . . . . . . . . 65
5.4 RVCE Campus dataset being played in rviz . . . . . . . . . . . . . . . . 66
5.5 RVCE Campus dataset being played in LIO-SAM . . . . . . . . . . . . . 67
5.6 PCD Map Generated By LIO-SAM For Velodyne Lidar . . . . . . . . . . 68

5.7 PCD Map Generated By LIO-SAM For Ouster Lidar . . . . . . . . . . . 68
5.8 ICP localization Using RVCE Dataset . . . . . . . . . . . . . . . . . . . . 69
5.9 NDT localization Using RVCE Dataset . . . . . . . . . . . . . . . . . . . 70

ABBREVIATIONS

2D 2-Dimensional

3D 3-Dimensional

AI Artificial Intelligence

EEG Electroencephalogram

EKF Extended Kalman Filter

EMG Electromyography

GNSS Global Navigation Satellite System

GPS Global Positioning System

HMI Human Machine Interface

ICP Iterative Closest Point

IMU Inertial Measurement Unit

KITTI Karlsruhe Institute of Technology and Toyota Technological Institute

LiDAR Light Detection and Ranging

LIO-SAM Lidar Inertial Odometry via Smoothing and Mapping

LOAM Lidar Odometry and Mapping

LVI-SAM Lidar Visual Inertial Odometry via Smoothing and Mapping

NDT Normal Distributions Transform

PCD Point Cloud Data

PHC Pseudo Hilbert Curve

ROS Robot Operating System

RVCE RV College of Engineering

SLAM Simultaneous Localization and Mapping

SVD Singular Value Decomposition


CHAPTER 1
INTRODUCTION TO MAP-BASED
LOCALIZATION AND AUTONOMOUS
VEHICLES
1.1 Introduction
In recent years, autonomous vehicles have emerged as groundbreaking technology with
the potential to revolutionize the transportation industry. By incorporating advanced ar-
tificial intelligence, sensor technology, and connectivity, these self-driving vehicles can
redefine mobility, improve road safety, and fundamentally transform our travel experi-
ences. Given the swift advancements witnessed in this field, it is crucial to undertake
thorough research to delve into the implications, challenges, and opportunities associated
with autonomous vehicles.
The concept of autonomous vehicles goes beyond the conventional mode of trans-
portation, where humans serve as the primary operators. Instead, these vehicles leverage
an intricate network of sensors, algorithms, and sophisticated control systems to navigate
and make decisions autonomously, without direct human intervention. By interpreting
and analyzing real-time data from their surroundings, including road conditions, traffic
patterns, and pedestrian movements, autonomous vehicles possess the ability to perceive
their environment and respond accordingly.
The integration of Artificial Intelligence (AI) algorithms is key to the operation of
autonomous vehicles, which enable them to process vast amounts of data and make com-
plex decisions. Machine learning techniques, such as deep neural networks, play a vital
role in training these vehicles to recognize and interpret various objects, traffic signs, and
behavioral patterns, facilitating safe and efficient navigation. Additionally, advancements
in sensor technology, such as Light Detection and Ranging (LiDAR), radar, and cameras,
provide autonomous vehicles with a comprehensive view of their surroundings, allowing
them to detect and react to potential hazards in real time. Map-based localization using
LiDAR is one of the important technologies that is used in these autonomous vehicles.
The accurate localization of autonomous robotic systems is a fundamental require-
ment for their successful operation in various applications, including autonomous vehicles,
unmanned aerial vehicles, and mobile robots. Map-based localization, a technique that
leverages pre-existing maps of the environment to determine the position of a robot, has
emerged as a prominent approach to address the localization challenge. By integrating
sensor measurements with a known map, map-based localization enables robots to navi-
gate and perform tasks with precision and reliability. The aim of the project is to provide
a comprehensive overview of map-based localization, exploring its underlying principles,
advancements, limitations, and potential avenues for future development.
Map-based localization involves the fusion of sensor data, such as odometry, visual
data, or range measurements, with a prior map of the environment. The map serves as
a reference, allowing the robot to estimate its position and orientation relative to known
landmarks or features. Through probabilistic inference, which involves comparing sensor
measurements with the expected measurements from the map, the robot can iteratively
refine its pose estimation and reduce localization uncertainty.
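
As a simple illustration of this probabilistic refinement, consider the following one-
dimensional sketch in Python (illustrative only, not part of the project’s implementation):
a pose predicted from odometry is fused with a map-based measurement using a
Kalman-style update, and the fused estimate always carries lower uncertainty than the
prediction.

    # Illustrative 1-D pose refinement: fuse an odometry prediction with a
    # map-based measurement (Kalman-style update); all numbers are made up.
    def fuse(pred_mean, pred_var, meas_mean, meas_var):
        k = pred_var / (pred_var + meas_var)       # gain: trust in the measurement
        mean = pred_mean + k * (meas_mean - pred_mean)
        var = (1.0 - k) * pred_var                 # uncertainty always shrinks
        return mean, var

    # Odometry predicts x = 10.0 m (variance 4.0); matching the scan against
    # the map suggests x = 10.8 m (variance 1.0). The fused estimate is pulled
    # toward the more certain map measurement: (10.64, 0.8).
    print(fuse(10.0, 4.0, 10.8, 1.0))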
Advancements in map-based localization have been driven by breakthroughs in sensor
technology, particularly in the fields of computer vision and LiDAR. Visual odometry
algorithms employ cameras to track visual features and estimate the robot’s motion, while
LiDAR-based approaches leverage laser scans to generate 3-Dimensional (3D) maps and
perform localization. The fusion of multiple sensors, such as cameras and LiDAR, has
also shown significant potential in improving localization accuracy and robustness.
However, several challenges remain in the field of map-based localization. The scal-
ability of map-based localization algorithms to large-scale environments is an ongoing
area of research. Real-time performance, computational efficiency, and the ability to
handle dynamic environments are critical factors for widespread adoption. Also, robust-
ness against changes in lighting conditions, appearance variations, and occlusions poses
additional challenges in visual-based localization approaches. Map-based localization in
outdoor environments may also be influenced by seasonal changes, environmental defor-
mations, or limited Global Positioning System (GPS) availability. These challenges will
be discussed in detail in this report.

1.2 Motivation
One of the main challenges in autonomous vehicles is the localization of the vehicle,
which must be highly accurate and reliable in a variety of environments and conditions.
This is particularly important in safety-critical applications, such as autonomous driving,
where even small errors in vehicle position estimation can have serious consequences.
Another challenge is the need for real-time performance, as autonomous vehicles must be
able to process sensor data and update their position estimates quickly and efficiently.
Map-based localization also provides several benefits:

1. Precision and Accuracy: Map-based localization enables robots to achieve high pre-
cision and accuracy in determining their position and orientation within an environ-
ment. By leveraging pre-existing maps that provide detailed information about the
surroundings, robots can utilize known landmarks, features, or geometric structures
for localization. This level of precision is crucial in applications where precise posi-
tioning is required, such as autonomous vehicles navigating complex road networks
or aerial drones conducting precise mapping missions.

2. Robustness and Reliability: Map-based localization offers robustness and reliability
in various scenarios. By relying on pre-existing maps, robots are less susceptible
to short-term sensor measurement errors or fluctuations. The fusion of sensor data
with the map provides a context for filtering out noise and outliers, enabling the
robot to maintain accurate localization even in the presence of challenging condi-
tions, such as sensor inaccuracies or temporary obstructions in the environment.

3. Efficient Exploration and Navigation: With map-based localization, robots can effi-
ciently explore and navigate their surroundings. The availability of a map provides
a priori knowledge of the environment, allowing robots to plan optimal paths, avoid
obstacles, and efficiently reach their desired destinations. This efficiency is partic-
ularly valuable in scenarios where time, energy, or resources need to be optimized,
such as in warehouse logistics, search and rescue missions, or autonomous explo-
ration of unknown environments.

4. Adaptability to Dynamic Environments: Map-based localization algorithms can
handle dynamic environments where changes occur over time. By continuously
updating the map through Simultaneous Localization and Mapping (SLAM) tech-
niques, robots can adapt to changes in the environment, including the appearance of
new obstacles, modifications in the layout, or the presence of moving objects. This
adaptability is essential in scenarios where the environment is subject to frequent
changes, such as construction sites or public spaces with high pedestrian activity.


5. Redundancy and Fault Tolerance: Map-based localization provides a redundant
source of information, enhancing system resilience and fault tolerance. In addition
to relying on real-time sensor data, the integration of a map allows robots to cross-
validate their position estimates. In situations where individual sensors may fail or
experience inaccuracies, the map acts as a backup reference, enabling the robot to
maintain localization even in challenging circumstances.

6. Scalability and Localization in GPS-Denied Environments: Map-based localization
offers scalability and the ability to localize in GPS-denied or GPS-degraded envi-
ronments. In large-scale environments, creating and utilizing maps allows robots
to navigate over extended distances while maintaining accurate localization. Fur-
thermore, in scenarios where GPS signals are unavailable or unreliable, such as
indoor environments or urban canyons, map-based localization provides a viable
alternative for precise positioning and navigation.

Overall, map-based localization provides a compelling motivation for its utilization
in autonomous robotic systems. The combination of precision, robustness, efficiency,
adaptability, redundancy, and scalability makes it an appealing approach for achieving
accurate and reliable localization, enabling robots to perform complex tasks and interact
effectively with their environments.

1.3 Problem statement


To develop a system for autonomous vehicles equipped with LiDAR, cameras, and
other sensors to provide real-time feedback to the operator or a remote monitoring system.

1.4 Objectives
The objectives of the project are

1. To integrate various sensors like GPS, Inertial Measurement Unit (IMU), and Velo-
dyne LiDAR sensors in Robot Operating System (ROS).

2. To implement the Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM)
algorithm for mapping.

3. To generate Point Cloud Data (PCD) for RV College of Engineering (RVCE) Cam-
pus dataset using the LIO-SAM algorithm.


4. To perform map-based localization using Point Cloud data generated from LIO-
SAM.

1.5 Literature Review


The research papers implemented various algorithms using different techniques. Some
have worked in real time using urban environments and some have online rosbags to
implement the algorithms.
Zahir Yılmaz, Levent Bayındır, ”Simulation of LiDAR Based Robot Detection Task
using ROS and Gazebo”[1] has proposed a model which considers a new mobile robot
equipped with a 2-Dimensional (2D) LiDAR that uses the Robot Operating System (ROS),
and this robot model is used to develop a robot detection method in the Gazebo simulation
environment[2]. Detecting surrounding
objects and distinguishing robots from objects is essential in multi-robot and swarm
robotic applications. In this paper, LiDAR data is handled in steps. The first step is the
acquisition of laser data and pre-processing. Then the second step is the segmentation
of data using the point distance-based segmentation method. The third and fourth steps
are the classification of segments by applying two levels of filtering: filtering by segment
diameter, which eradicates segments that do not fit a certain size (the LiDAR size) by
making use of the features of each segment, and filtering by segment shape, which
checks whether the remaining segments fit the LiDAR’s shape
using the circle-fitting method. The fifth step is to identify the position of kin robots relative to
the observer robot. The conclusion drawn from the experimental results is that
the accuracy depends on two main factors: the distance between the
observer robot and other objects or robots, and the amount of noise in the
LiDAR measurements.
Abdussalam A. Alajami, Nil Palau, Sergio Lopez-Soriano, Rafael Pous, ”A ROS-
based distributed multi-robot localization and orientation strategy for heterogeneous
robots” [3] have presented a distributed multi-robot localization strategy that is
based on the Robot Operating System. The paper proposed an algorithm that fuses data
from diverse sensors on two heterogeneous robots that are connected within their transform
trees to localize and measure the relative position and orientation. The proposed method
also introduces robust detection using Convolutional Neural Networks (CNN) and
accurate relative position measurements from the local cost map[4]. The algorithm is
composed of two parts: the localization part and the relative
orientation measurement part. Localization is implemented by optimization and
alignment calibration of the CNN output with the cost map in an individual robot. The
relative orientation measurement is determined by a collaborative multi-robot fusing of
diverse sensor data to align and synchronize the transform frames of both robots in their
cost maps. The results showcase that the proposed method is robust and accurate while
also maintaining simplicity and efficiency in costs.
Rapti Chaudhuri, Suman Deb, ”LiDAR Integration with ROS for SLAM Mediated
Autonomous Path Exploration” [5] has presented a theoretic and experimental analysis
of the path exploration challenges and possibilities in an indoor environment by a com-
bination of LiDAR with ROS for creating a SLAM. The algorithmic solutions have been
implemented with customized differential drive structure robot platform. There are also
studies conducted on the mapping of concerned trajectories and environmental local-
ization. This paper used a grade 3D RP LiDAR as a percept for the edging obstacle
periphery and the associated tangential data to construct the localization and mapping.
Significant analysis has been done for the visual representation of the LiDAR data in ROS
platform for performance evaluation of the path exploration and planning algorithms.
Alberto José Chinchilla Santos, José Manuel Murillo Castellanos, Alicia María Reyes
Duke, ”ROS: AN AUTONOMOUS ROBOT OPERATING SYSTEM FOR SIMULTA-
NEOUS LOCALIZATION AND MAPPING USING A 2D LiDAR SENSOR” [6] have pro-
posed a mobile robot that is capable of simultaneously locating and mapping unknown
environments using a 2D LiDAR sensor, developed based on the ROS framework.
Both limitations and advantages were encountered while implementing
a 2D LiDAR sensor in a SLAM system. The
prototype is considered an intelligent autonomous robot that
can be teleoperated to improve its versatility. Rotary encoders were selected to sup-
port the robot’s real-time localization, as these devices count the
pulses produced by the movement of the axis and transform
them into a measurable signal that feeds back to the system the
exact position of the robot. ROS was implemented as the framework due to its flexibility,
communication capacity, and the heterogeneous data that can be collected by each
component that makes up the SLAM autonomous robot.


Ruijiao Li, Mohammadreza A. Oskoei, Klaus D. McDonald-Maier, Huosheng Hu,
”ROS Based Multi-sensor Navigation of Intelligent Wheelchair”[7] has presented a ROS-
based Multi-sensor navigation for an intelligent wheelchair that can help the elderly
and disabled people. In this paper, the author proposed that users can interact with
robots and smart homes via Human Machine Interface devices. The system enables many
capabilities of autonomous navigation, and interoperation by multiple modalities and can
become interactive and collaborative for the wheelchair that has to be implemented. The
autonomous navigation system has been employed for the wheelchair. It enables a user to
point to a goal and to achieve the goal automatically in a static environment. The author
has developed some Human Machine Interface (HMI) devices like voice control, gesture
control, and Electroencephalogram (EEG)/ Electromyography (EMG)-based control,
for robots. To make the integration of HMI devices possible for robot interaction the
author has designed a node in the ROS to subscribe messages coming from the HMI
devices and to deliver command to robot action. To carry out this mission successfully
the bridge provides the user to trigger the actuators for autonomous navigation of the
robots via proper HMI. Therefore, enabling the robot to adapt to different user profiles
and preferences. The author described that future work will be focused on improving the
navigation in a more adaptive manner and wrapping the doorway passing navigator into
ROS.
Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, and Daniela Rus
in ”LIO-SAM: Tightly-coupled LiDAR Inertial Odometry via Smoothing and Mapping”
[8] have presented an algorithm to find an accurate way of locating a
vehicle using LIO-SAM. This paper proposed to use LiDAR odometry to estimate the pose
of the vehicle. This paper introduces a way to accurately estimate the pose and position
of the object in real time using LiDAR and inertial sensor measurements. LIO-SAM takes
advantage of the LiDAR sensors’ high-resolution 3D point cloud data and inertial motion
measurements. This paper employs a sliding window technique for simultaneous localization
and mapping. The keyframes are analyzed and voxel maps are formed using sub-key-
frames. This technique maintains a limited number of keyframes, reducing complexity and
ensuring accuracy.
Chao Wang, Guobao Zhang, and Ming Zhang in ”Research on improving LIO-SAM
based on Intensity Scan Context”[9] have presented an LIO-SAM framework and the
intensity scan context has been introduced to extract the keyframes and detect loop clo-
sure which can provide a closed loop factor together with IMU pre-integration factor and
LiDAR odometry factor to build the factor graph. The incremental smoothing optimiza-
tion algorithm has been implemented to get a high-precision trajectory and realize the
tightly-coupled LiDAR and IMU positioning. The results show that as the number of
keyframes is reduced, the elevation error decreases and the real-time performance
improves without decreasing the accuracy of LIO-SAM.
Kangcheng Liu, Xunkuai Zhou, and Ben M. Chen in ”An Enhanced LiDAR-Inertial
SLAM System for Robotics Localization and Mapping” [10] have proposed an improved
LiDAR-inertial localization and mapping system for unmanned ground vehicles. Com-
pared with existing LiDAR-based localization and mapping systems like Lidar Odometry
and Mapping (LOAM), there are two contributions: the first is the improvement of the ro-
bustness of the particle swarm filter-based LiDAR SLAM, while the second is the
loop closure method generated for the global optimization to improve the accuracy of
the localization of the whole system. This paper has shown, through experimental results on
data extracted at Hong Kong Science Park as well as in other complicated outdoor and
indoor circumstances, that the approach is very effective[11].
The system demonstrated high accuracy and robustness.
Guangman Lu, Hong Yang, Jie Li, Zhenfei Kuang, and Ru Yang in ”A
Lightweight Real-Time 3D LiDAR SLAM for Autonomous Vehicles in Large-Scale Ur-
ban Environment” [12] have presented a fast and lightweight 3D LiDAR simultaneous
localization and mapping (SLAM) method for the localization of autonomous vehicles in urban envi-
ronments. A novel approach based on depth information has been implemented to encode
the unordered point clouds with various resolutions, which can avoid missing dimensional
information in the projection of point clouds onto a 2D plane[13]. The PCP algorithm
is modified by dynamically selecting points according to the information provided in the
novel approach to fit the local plane, thus consuming less time. With the threshold and
the number of feature points adaptive according to distance intervals,
sparse feature points are extracted and uniformly distributed in the three-
dimensional space. The effectiveness and robustness are determined and verified by the
Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) odometry
and MVSECD benchmarks. A runtime of 21 ms is obtained for the odometry estimation, which is con-
sidered fast. Compared to several known state-of-the-art methods on the KITTI
odometry benchmark, the translation errors are reduced by at least 19% and rotation
errors by 7.1% by the implementation of the proposed approach.
Sabir Hossain, and Xianke Lin in ”Uncertainty-Aware Tightly-Coupled GPS Fused
LIO-SLAM”[14] have proposed a method that is a feasible solution for mapping an au-
tonomous delivery vehicle that will operate in areas consisting of GPS-accessed and denied
areas. In this method, the GPS odometry is combined with LiDAR inertial odometry
in a tightly-coupled manner. The final step is to fuse GPS and IMU using an Extended
Kalman filter. The proposed method can overcome long-range drift issues in all three
directions including the z direction as it uses a global positioning system. The method
proposed in this paper presents an uncertainty-aware logic-based system that makes a
positional update back to the LIO system in case of a GPS-denied situation. Then, this
method makes use of the final translational data from the fusion to create a point cloud
map without positional drift. Thus, this method gives us a solution and can produce ac-
curate maps in outdoor and semi-indoor/indoor environments. The experimental results
that were obtained on the dataset taken verified the approach and results achieved on
different datasets displayed accuracy for mapping. The quantitative comparison for LIO-
SAM is employed since this existing mapping approach also uses the GPS factor in the
given mapping approach. This approach (LIO-FUSION with uncertainty) achieved a lower
RMSE value with respect to GPS for the small campus and park datasets that LIO-SAM
provided[15].
Chunge Bai, Tao Xiao, Yajie Chen, Haoqian Wang, Fang Zhang, and Xiang Gao
in ”Faster-LIO: Lightweight Tightly Coupled LiDAR-Inertial Odometry Using Parallel
Sparse Incremental Voxels” [16] have presented a paper that proposes an incremental voxel-
based LiDAR-inertial odometry method for fast tracking of solid-state LiDAR scans.
The high tracking speed is achieved by using incremental
voxels (iVox) as the point cloud spatial data structure, which is modified from traditional
voxels and supports incremental updates and parallel approximated k-NN queries.
This paper neither considers com-
plicated tree-based structures to divide the spatial point cloud nor strict k-nearest-
neighbor queries to calculate the point matching. This paper proposed a linear iVox and
a Pseudo Hilbert Curve (PHC) iVox as two alternative underlying structures in the algo-
rithm. The results have shown that the speed of iVox reaches 100-200 Hz per scan for
solid-state LiDARs and reaches over 200 Hz for 32-line spinning LiDARs on a modern CPU
while preserving the level of accuracy.
Lanyi Han, Zhiyong Shi, and Huaiguang Wang ”A Localization and Mapping Algo-
rithm Based on Improved LVI-SAM for Vehicles in Field Environments”[17] has proposed
a localization and mapping algorithm based on the improved Lidar Visual Inertial Odom-
etry via Smoothing and Mapping (LVI-SAM) for vehicles in field environments. The pro-
posed algorithm could solve the problem of localization and mapping for vehicles in such
open, bumpy, and Global Positioning System (GPS)-denied field environments. In the
paper, the author has recognized an LVI-SAM-based LiDAR, inertial, and visual fusion
using simultaneous localization and mapping (SLAM) algorithm for accurate and reliant
localization. The method presented in this paper uses joint LiDAR pose estimation in the front end,
and the error corrections were done using the Super4PCS, Iterative
Closest Point (ICP), and Normal Distributions Transform (NDT) algorithms and their
variants. This algorithm can maintain a balance between localization accuracy and real-time per-
formance by employing a lower-frequency pose correction based on higher-frequency
pose estimation. The experimental results show that the translation error of localization
can be reduced to 4.7% and thus can create a three-dimensional point cloud of the map
in real time with high precision and high robustness in localization and mapping of the
vehicle in complex field environments[18].
Kai Dai, Bohua Sun, Guanpu Wu, Shuai Zhao, Fangwu Ma, Yufei Zhang, and Jian
Wu, ”LiDAR-Based Sensor Fusion SLAM and Localization for Autonomous Driving Ve-
hicles in Complex Scenarios”[19] have proposed a paper that focused on simultaneous
localization and mapping for autonomous vehicles. LiDAR-based SLAM is widely used
but has limitations like drift in the maps and single-sensor localization algorithms have
poor adaptability to complex scenarios. The SLAM-based online localization method
with multi-sensor fusion has been implemented in this paper. In the mapping processes
constraints like the normal distributions algorithm, loop closure detection, real-time kine-
matic (RTK) positioning, and Global Navigation Satellite System position for the front end, and a pose
estimation algorithm for the back end, have been considered and implemented to achieve
an accurate map without any drift. In the localization process, the error state Kalman
filter is combined with the LiDAR-based localization to achieve a robust and accurate
localization. The open-source KITTI dataset and field tests have been used to test
the proposed algorithm. Test results have achieved 5-10 cm mapping accuracy and 20-30
cm localization accuracy.
Donghoon Shin, Kang-moon Park, and Manbok Park in ”High Definition Map-Based
Localization Using ADAS Environment Sensors for Application to Automated Driving
Vehicles” [20] have presented a paper that proposed a high-definition map localization sys-
tem using advanced driver assistance system environment sensors for the application
of autonomous vehicles. With respect to the application of autonomous driving, it is
necessary to ensure effective and accurate localization utilizing the features installed for ADAS
purposes. The proposed algorithm has several sections: environment feature represen-
tation with low-cost sensors, digital map analysis and application, position correction
based on map-matching, designated validation gates, and Extended Kalman
Filter (EKF)-based localization filtering and fusion. A monocular vision sensor is attached
to the front of the vehicle to detect the lane information. The guardrail is detected by
the radar by distinguishing low-speed object measurements and extracting features in
several steps. The Iterative Closest Point (ICP) algorithm is used to perform an iterative
procedure on the lane and guardrail information from earlier to correct the
host vehicle position. The ICP matching calculates the transformation between the digi-
tal map and environmental features. Map-matching corrects each vehicle position and is
selected and merged based on EKF with double updating.
Zhuping Wang, Jian Fang, Xinke Dai, Hao Zhang, and Ljubo Vlacic in ”Intelligent
Vehicle Self-Localization Based on Double-Layer Features and Multilayer LiDAR” [21]
have proposed a paper that works on the intelligent vehicle self-localization scheme based
on the double-layer features and multi-layer LiDAR data. A prior map is taken that
stores one or more environmental features extracted by onboard sensors. These features
contain information about the ground environment such as curbs and lane markings and
building outlines. In order to achieve accuracy and robustness, the paper proposes two
layers of features to express the environment: the upper layer consists of a 2D
point cloud with vertical features, and the bottom layer consists of ground and curb features.
To extract the curb features, a collision-based detection method is implemented. The
vertical-projection-based detection algorithm is proposed to detect vertical feature points
and extract them from multi-layer LiDAR. Monte Carlo localization is used to extract
the exact position and estimate the location of the vehicle. The results show that the
proposed method can achieve accurate localization.
Kyu-Won Kim, Jun-Hyuck Im, Moon-Beom Heo, and Gyu-In Jee in ”Precise Vehicle
Position and Heading Estimation Using a Binary Road Marking Map”[22] have presented
a paper that recognizes a method that focuses on the binary road marking grid map based
precise vehicle localization method using 3D LiDAR. The intensity scan data was obtained
first and then it was binarized using the Otsu thresholding. The binarization of the data
was completed and the next step was to generate a binary road marking map and a
precise local map for matching. The position and heading are determined by the map
matching and navigation filter. In the experimental result, the 2D lateral RMS error was
approximately 0.05 m and the longitudinal RMS error was approximately 0.08 m. The
CDF analysis of the error showed that the lateral error comes out to be 0.21 m and the
longitudinal error 0.36 m at a 99% confidence level. Thus, the vehicle localization
using the binary road marking grid map provides very precise and accurate results.
Li Qingqing, Jorge Pena Queralta, Tuan Nguyen Gia, Zhuo Zou and Tomi Wester-
lund in ”Multi-Sensor Fusion for Navigation and Mapping in Autonomous Vehicles: Ac-
curate Localization in Urban Environments” [23] have proposed a method that uses mul-
tiple sensors to detect the pose and position of the vehicle. The authors used LiDAR, an
odometer, and GPS sensors for self-localization, and a Kalman filter to filter the curbs in the
localization. LiDAR is used to create point clouds and the odometer is used to determine
vehicle motion. To fuse the data from these sensors, the authors used the Kalman filter
and a particle filter. The Kalman filter is used to estimate the vehicle’s position, velocity,
and orientation based on the sensor data.
Xieyuanli Chen, Ignacio Vizzo, Thomas Labe, Jens Behley, and Cyrill Stachniss in ”Range
Image-based LiDAR Localization for Autonomous Vehicles” [24] have presented a range-
based localization technique to perform pose estimation using a local map of range
images. Filters are also used to remove errors in the generated point cloud.
The Extended Kalman filter fuses the data taken from the vehicle’s odometry
sensors with the ICP map-matching algorithm to build the map for the vehicle,
and localization is done with the iterative features of the ICP algorithm.
Farzeen Munir, Shoaib Azam, Ahmad Muqeem Sheri, YeongMin Ko, and Moongu
Jeon in ”Where Am I: Localization and 3D Maps for Autonomous Vehicles” [25]
presented a method for generating a 3D point cloud to localize the vehicle using 3D
LiDAR scans. The algorithm used is ICP (Iterative Closest Point) to match the scans
made by the LiDAR, and the point cloud data is downsampled using a voxel grid filter
to reduce computation time. This paper also proposed a pose graph estimation
algorithm to determine the trajectory and improve the localization of the vehicle. During
operation, the pre-built map is compared with the real-time point cloud scan and
matched against the most similar local point cloud scan in the graph.
Alwin Poulose, Minjin Baek, and Dong Seog Han in ”Point Cloud Map Generation and
Localization for Autonomous Vehicles Using 3D LiDAR Scans” [26] have presented a paper
that focuses on point cloud map generation and localization of the autonomous vehicle
using 3D LiDAR scans. This paper proposed an effective method to generate a high-
resolution 3D map of the environment while using it for the localization of the vehicle.
The proposed method uses a LiDAR sensor to scan the environment, generate a point cloud,
and process it further to create a 3D map. The authors have evaluated the method on
real-time data and showed that it achieves accurate and robust localization.
Yihuan Zhang, Liang Wang and Jun Wang, John M. Dolan ”Real-time localization
method for an autonomous vehicle using 3D LiDAR” [27] has proposed a method for lo-
calization of the vehicle using the LiDAR data for autonomous vehicles. This method uses
a particle filter to determine the pose of the vehicle, and the sensor data is preprocessed
to avoid errors using a voxel grid filter and a statistical outlier removal filter.
Liang Wang, Yihuan Zhang, and Jun Wang in ”Map-Based Localization Method for Au-
tonomous Vehicles Using 3D-LiDAR” [28] present a method that uses a pre-built map
combined with real-time LiDAR data to estimate the vehicle’s position and orientation
relative to the map. Multiple scans are made using the LiDAR sensor, and the ICP al-
gorithm is used to generate a high-resolution map. The current LiDAR scan is matched
with the pre-built map to determine the best matching points. ICP is implemented in
an iterative way, pose estimation is done using Singular Value Decomposition (SVD),
and the optimal transformation between the two point clouds is determined. The authors have
used real-time data with autonomous vehicles to demonstrate accurate localization in
various environments.
Ehsan Javanmardi, Yanlei Gu, Mahdi Javanmardi, and Shunsuke Kamijo, in ”Autonomous
Vehicle Self-Localization based on the Abstract Map and Multi-Channel LiDAR in Urban
Areas” [29], present a self-localization method for autonomous vehicles in urban
environments. The proposed method extracts features, and the LiDAR data is clustered to
generate an abstract map of the environment. The multi-channel LiDAR captures a 360-
degree view of the environment, which is further processed to remove noise and outliers
using filters. The method used here to generate an abstract map from multi-channel LiDAR
uses a probabilistic approach, the Normal Distributions Transform. Self-localization in-
volves matching the LiDAR data with the abstract map. The proposed method is a
promising approach to self-localize an autonomous vehicle in urban areas [7].
Jesse Levinson, Michael Montemerlo, and Sebastian Thrun, in ”Map-based Precision
Vehicle Localization in Urban Environments” [30], presented a localization method that
uses sensor data taken from the environment, including GPS measurements, LiDAR data,
and camera images, combined with a pre-built map. This method employs a particle
filter algorithm to estimate the pose and position of the vehicle by using a set of particles
drawn over the map and updating their weights based on the sensor data associated
with each particle. The method makes use of a road edge detection algorithm to detect
road edges and also uses a landmark detection algorithm to detect landmarks in the
road [8].
Xiaohu Lin, Fuhong Wang, Bisheng Yang, and Wanwei Zhang’s ”Autonomous Ve-
hicle Localization with Prior Visual Point Cloud Map Constraints in Global Naviga-
tion Satellite System (GNSS)-Challenged Environments” [31] has presented a localization
method that uses a prior visual point cloud map in GNSS-challenged envi-
ronments. The proposed method implements a particle filter algorithm to integrate
the visual point cloud map constraints into the localization process to improve the accuracy
and robustness. The method also employs a feature-matching algorithm that matches
the stereo camera images with the visual point cloud map to improve accuracy. This
method demonstrates high accuracy and robustness. The particle filter algorithm creates
a set of particles that are relative to the vehicle’s possible locations and their weights are
added according to the data provided by the sensor.
Jordan B. Chipka and Mark Campbell, in ”Autonomous Urban and Navigation with
Limited Information” [32], have presented a localization and navigation method
for urban environments with limited information. This method employs a particle filter
algorithm that is combined with data from other sensors, such as LiDAR and camera images, to estimate
the vehicle’s position and orientation in real time. The proposed method also employs a
road centerline extraction algorithm that extracts the road centerline from LiDAR data
and uses it to constrain the particle filter’s estimation. Additionally, the proposed method
uses a limited memory version of the algorithm to reduce computational complexity while
maintaining accuracy. The method demonstrates high accuracy and robustness in real-
world experiments, even in the presence of limited information, such as incomplete maps
and occluded road edges[33].
Overall, the papers demonstrate that 3D LiDAR is a powerful technology for achieving
accurate localization and mapping in autonomous vehicles. The proposed methods all use
some form of point cloud registration or matching to align LiDAR data with a pre-built
map and give good results in real-world experiments. The papers also highlight some
of the challenges associated with using 3D LiDAR for autonomous driving, such as the
need for accurate and up-to-date maps, and the high computational requirements of some
of the proposed methods. The main observation was that LiDAR is used to create point
clouds and the Kalman filter is used to filter the errors in curb detection. The algorithms
used are ICP (Iterative Closest Point) and NDT (Normal Distributions Transform). These
algorithms take a probabilistic and iterative approach to exactly locate
the vehicle. Sensors like odometry and GPS are used to locate the vehicle using its
motion.

1.6 Brief Methodology of the project


The project starts with integrating sensors like the IMU, GPS, and LiDAR (Velodyne
and Ouster) in ROS. The LIO-SAM[12] algorithm is then used to estimate the vehicle’s
position and orientation by optimizing the likelihood function based on the aligned 3D
LiDAR data. Then the LIO-SAM algorithm is simulated using the datasets available on
GitHub to verify that LIO-SAM is working as intended. Next, LIO-SAM is run
on the RVCE Campus dataset rosbags to create a novel set of .pcd maps. These .pcd
maps are then used for map-based localization using two different algorithms: ICP Lo-
calization and NDT Localization. These algorithms are designed to continuously update
the vehicle’s position and orientation as new data is collected, providing accurate and
real-time localization. The proposed algorithm is evaluated through simulation tests and
compared with previously reported methods to assess its accuracy and efficiency.
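
To make the scan-to-map alignment step concrete, the following is a minimal Python
sketch of ICP-based localization against a .pcd map (illustrative only: it uses the open-
source Open3D library rather than the ROS packages used in this project, and the file
names are placeholders).

    import numpy as np
    import open3d as o3d

    # Load the pre-built map (e.g., produced by LIO-SAM) and one LiDAR scan.
    # "rvce_map.pcd" and "scan.pcd" are placeholder file names.
    map_cloud = o3d.io.read_point_cloud("rvce_map.pcd")
    scan_cloud = o3d.io.read_point_cloud("scan.pcd")

    # Downsample both clouds to reduce the computational load.
    map_down = map_cloud.voxel_down_sample(voxel_size=0.2)
    scan_down = scan_cloud.voxel_down_sample(voxel_size=0.2)

    # Initial pose guess, e.g., from GPS or an EKF; identity is used here.
    init = np.eye(4)
    est = o3d.pipelines.registration.TransformationEstimationPointToPoint()

    # Iteratively align the scan to the map; result.transformation is the
    # estimated 4x4 pose of the scan (i.e., the vehicle) in the map frame.
    result = o3d.pipelines.registration.registration_icp(
        scan_down, map_down, 1.0, init, est)
    print(result.transformation)
    print(result.fitness, result.inlier_rmse)

In a live system this alignment runs repeatedly, each time seeded with the previous
pose estimate, which is what allows the localization to be updated continuously as new
scans arrive.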


1.7 Organization of the report


This report is organized as follows.

• Chapter 2 discusses the fundamentals of algorithms used in map-based localization
for the project.

• Chapter 3 discusses the design and methodology used in the project.

• Chapter 4 discusses the implementation of the project.

• Chapter 5 discusses the results and discussions on various algorithms used.

• Chapter 6 discusses the conclusion and future scope of the project.


CHAPTER 2
THEORY AND FUNDAMENTALS OF
MAP-BASED LOCALIZATION
An overview of map-based localization methods for autonomous vehicles using LIO-
SAM, NDT, and ICP localization is given in this chapter. The goal is to achieve re-
liable and precise localization in changing environments. Following a discussion of the
significance of map-based localization in autonomous navigation, the LIO-SAM frame-
work—which combines smoothing and mapping methods with lidar-inertial odometry and
mapping—is introduced. After that, the NDT (Normal Distributions Transform) algo-
rithm is investigated as a technique for precise localization by comparing lidar scans with
a prepared map. Additionally, the accuracy of localization is improved by fine-tuning the
alignment between lidar scans and the map using the Iterative Closest Point (ICP) algo-
rithm. This chapter highlights the distinct features and advantages of each approach and
their significance in achieving reliable map-based localization for autonomous vehicles.

2.1 Prerequisites for Map-Based Localization in Autonomous Vehicles

To achieve map-based localization in autonomous vehicles, several steps need to
be completed first, such as integrating the sensors and generating .pcd map files. Some
of these key prerequisites for implementing map-based localization are discussed
in the sections below.

2.1.1 Fundamentals of Lidar Sensing


1. Lidar Overview: Lidar (Light Detection and Ranging) is a remote sensing
technique that works by emitting laser pulses and measuring the time it takes for
the light to return after bouncing off objects in its path, using this to calculate distances
and produce three-dimensional models of the surrounding area.

2. Types of LiDAR Sensors and Their Characteristics: In this project, the
Velodyne VLP-32C and the Ouster OS1 64-channel LiDAR are used. Velodyne is a
well-known manufacturer in the LiDAR industry, known for its range of high-resolution
and long-range LiDAR sensors. Its mechanical LiDAR sensor, the VLP-32, has gained
recognition for its reliability and performance and is widely used in applications such
as autonomous vehicles, robotics, and mapping. Ouster specializes in solid-state LiDAR
sensors that feature a compact size and high data accuracy. Its popular models, including
the OS1 and OS2 series, offer a wide field of view and are suitable for various
applications such as robotics, mapping, and industrial automation.

3. Point Cloud Data: The 3D representations of the environment that LiDAR sensors
collect are called point clouds. These are collections of discrete points with X, Y,
and Z coordinates. Understanding point cloud data is crucial, since it serves as the
foundation for map-based localization techniques; a minimal sketch of loading such
data is given below.
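As a minimal illustration of how such data is handled in code, the sketch below loads a .pcd point cloud with PCL (the C++ point cloud library commonly used in the ROS ecosystem) and iterates over its X, Y, Z coordinates. The file name is illustrative, e.g. a map exported by LIO-SAM:

#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

int main()
{
  // Container for points with X, Y, and Z coordinates.
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);

  // "map.pcd" is an illustrative file name, not a fixed path in this project.
  if (pcl::io::loadPCDFile<pcl::PointXYZ>("map.pcd", *cloud) == -1)
  {
    std::cerr << "Could not read map.pcd" << std::endl;
    return -1;
  }

  // Each element of the cloud is one discrete 3D point in the environment.
  std::cout << "Loaded " << cloud->size() << " points" << std::endl;
  for (const pcl::PointXYZ& p : cloud->points)
    std::cout << p.x << " " << p.y << " " << p.z << std::endl;

  return 0;
}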

2.1.2 LIO-SAM
1. Introduction to LIO-SAM: Using LiDAR data, the LIO-SAM algorithm com-
bines mapping and odometry estimation to provide precise localization. It performs
simultaneous localization and mapping by optimizing the vehicle’s trajectory and
mapping the environment at the same time. LIO-SAM operates online, which
means it can provide real-time estimates of the vehicle’s pose while continuously
updating the map.

2. Features and advantages of LIO-SAM: LIO-SAM has several features and


benefits, including accurate motion estimation, efficient loop closure detection, and
the ability to handle dynamic environments. It takes advantage of the natural sparsity
of LiDAR point clouds to speed up computation and reduce memory use. LIO-SAM is
well suited for real-time applications and can scale to large-scale mapping scenarios.

3. Working Principles of LIO-SAM: LIO-SAM, or LiDAR Inertial Odometry via
Smoothing and Mapping, contains a number of key components for
robust localization using lidar data. Through feature extraction, point matching,
and motion estimation, lidar odometry estimates the vehicle's motion from
consecutive lidar scans. An environment map is constructed from the lidar data
by aligning and representing scans using scan context, while loop closure enables
odometry errors to be corrected and localization accuracy to be improved. The final
step in LIO-SAM is to match the current lidar scan with the pre-built map, utilizing
the scan context and the odometry information to determine the pose accurately.


A comprehensive lidar-based localization and mapping solution, LIO-SAM enables


robust and accurate localization in dynamic environments.

2.1.3 Map-Based Localization


A common technique in autonomous vehicles is map-based localization, which uses
a known map or environment to compare sensor readings and determine the vehicle’s
pose. The localization algorithm can determine the vehicle’s position in the environment
by comparing sensor data with the map[34]. Map-based localization enables capabilities
such as global localization (initializing the vehicle’s pose in an unknown location), loop
closure detection (identifying previously visited locations), and localization in GPS-denied
areas.

1. Iterative Closest Point (ICP) Localization Algorithm:

The Iterative Closest Point (ICP) algorithm is a widely used method for aligning
two point clouds or 3D surfaces to estimate the relative pose (translation and ro-
tation) between them. It is commonly employed in map localization, registration,
and 3D reconstruction tasks.

The ICP algorithm follows an iterative optimization process to minimize the differ-
ence between corresponding points in the two point clouds[35]. The steps involved
in the ICP algorithm are as follows:

Initially, an estimate of the relative pose between the two point clouds is obtained
using an initial guess, such as odometry data or a rough alignment. For each
point in the target point cloud, the closest point in the source point cloud is found
based on a distance metric, typically the Euclidean distance. These corresponding
point pairs form the initial correspondences. Weights can be assigned to the correspon-
dences to give more importance to certain points or to handle noise and
outliers effectively. Using the correspondences, an optimal transformation matrix
that aligns the two point clouds is estimated. This can be achieved through various
methods such as least-squares fitting or Singular Value Decomposition (SVD).

The estimated transformation is applied to the target point cloud, aligning it with
the source point cloud. The process is repeated iteratively until convergence or a
predefined threshold is reached. The algorithm checks whether the estimated transforma-
tion matrix has reached the desired accuracy or whether the change between iterations
falls below a specified threshold; if not, the algorithm repeats the correspondence
search on the point cloud, again finding the closest points by Euclidean distance.

The final transformation matrix provides the optimal alignment between the two
point clouds, representing the relative pose estimation. Overall, the ICP algorithm
offers a powerful solution for point cloud alignment and plays a crucial role in map
localization by accurately estimating the relative pose between a mobile entity and
a reference map[36].

2. Normal Distributions Transform (NDT) Localization: The Normal Distri-


butions Transform (NDT) algorithm is a method used for probabilistic scan match-
ing and registration of point clouds. It is commonly employed in map localization,
simultaneous localization and mapping (SLAM), and object recognition tasks. NDT
is known for its robustness to noise and outliers, making it suitable for environments
with sensor uncertainties[37].

The NDT algorithm represents point clouds as a probabilistic distribution, specif-


ically a Gaussian distribution. The steps involved in the NDT algorithm are as
follows:

(a) Voxelization: Voxelization is the process that converts a data structure,


storing geometric information in a contiguous domain, into a rasterized image.
The point clouds are divided into equally sized voxels to create a voxel grid
representation. Each voxel represents a local region of the environment.

(b) Feature Extraction: Feature Extraction is the process by which an algo-


rithm extracts a set of informative and non-redundant data to help the learning
for the algorithm and generalize the further steps[38].

Initially, an estimate of the relative pose between the target point cloud and the
reference map is obtained using an initial guess, such as odometry data. For each
voxel in the target point cloud, the closest voxel in the reference map is found.
Weights are assigned to the correspondences based on a similarity measure: high
weights are given to voxels with similar distributions, while low weights
are assigned to dissimilar voxels or outliers.

Using the correspondences and their weights, an optimal transformation matrix that
aligns the two point clouds is estimated. The estimated transformation is applied
to the target point cloud, aligning it with the reference map. The algorithm checks
whether the estimated transformation matrix has reached the desired accuracy or whether the
change between iterations falls below a specified threshold. If not, the algorithm
iterates again; once converged, the final transformation matrix provides the optimal
alignment between the target point cloud and the reference map, representing the
relative pose estimate[39].

The NDT algorithm is advantageous in scenarios with sparse or noisy data, as it


captures the local distribution information rather than relying solely on individual
point correspondences. Additionally, NDT provides a probabilistic representation,
enabling uncertainty modeling and incorporation into probabilistic frameworks[40].

2.2 Software Tools Used


The following is the list of software tools and how they are being used in the project.

1. Ubuntu: Ubuntu is an open-source Linux operating system; it is widely used to
integrate sensors and to install the library files needed for projects such as this one.

Figure 2.1: Ubuntu


[41]

2. ROS: Robot Operating System (ROS) is an open-source framework that provides a
collection of software libraries, tools, and drivers for building robotic applications.

Figure 2.2: Robot Operating System

[42]

2.3 Hardware Tools Used


1. Velodyne VLP32C: The Velodyne VLP-32C LiDAR sensor shown in Fig. 2.3 is intended
for use in autonomous cars and robotics applications. It employs 32 laser beams
to produce a 360-degree field of view, resulting in a rich, high-resolution point
cloud depiction of the surroundings. The VLP-32C has a range of up to 200 meters
and an accuracy of 4 centimeters. It gathers data at a rate of up to 700,000 points
per second, allowing for real-time mapping and object identification. Despite its
small size, the VLP-32C is a rugged device that can function in a variety of en-
vironments. It has become a common choice for sensing the environment in
autonomous systems and facilitating safe and efficient navigation.

Figure 2.3: Velodyne Lidar


[43]

2. Ouster OS1: The Ouster OS1 is a high-resolution LiDAR sensor manufactured by
Ouster that is intended for use in autonomous cars and mapping. The OS1 uses
solid-state lidar technology to capture comprehensive 3D point cloud data with up
to 2048 points per laser beam per rotation. It gives accurate distance readings
with a maximum range of 120 meters and an accuracy of roughly 2 centimeters.
The OS1 is capable of processing point cloud data at up to 2.6 million points per
second, allowing for real-time perception. The Ouster OS1 is a popular choice for
high-quality LiDAR sensing in a variety of applications because of its compact and
lightweight design, as well as its ability to operate in a wide range of climatic conditions.

Figure 2.4: Ouster Lidar


[44]

3. IMU: An Xsens MTi IMU is used in this project. An IMU, or inertial measurement
unit, is a device that combines multiple sensors to measure and provide information
about the motion, direction, and acceleration of an object. Consisting of accelerom-
eters, gyroscopes, and magnetometers, the IMU collects data on linear and rota-
tional motion and on changes in the magnetic field. By analyzing measurements from
these sensors, the IMU can estimate the position, speed, and direction of an object
in real time. However, IMUs are prone to errors that accumulate over time, causing
drift. To mitigate these errors, IMUs are often integrated with other sensors, such
as GPS, or used with sensor fusion algorithms to improve accuracy and provide
reliable motion tracking and orientation. IMUs are widely used in applications that
require accurate motion tracking and orientation, such as drone stabilization, vir-
tual reality systems, human motion tracking, and inertial navigation systems for
autonomous vehicles.

Figure 2.5: Xsens MTi IMU

[45]

4. Camera: A Spinnaker-compatible monocular FLIR camera is used in the project. It
combines FLIR's infrared sensor technology with the Spinnaker Software
Development Kit (SDK). This camera system allows developers to connect to FLIR
cameras through the Spinnaker SDK for control, image capture, and data streaming. With
thermal imaging capabilities, the camera can capture thermal images and detect
heat signatures, making it suitable for security, surveillance, industrial inspection,
research, and medical applications. The camera connects to the computer via
USB or another supported interface and offers a comprehensive solution for thermal
imaging and image-processing tasks.

Figure 2.6: Flir Camera


[46]

This chapter has provided an overview of map-based localization methods for autonomous
vehicles, including LIO-SAM, NDT, and ICP localization, with a focus on achieving re-
liable and precise localization in changing environments. It highlighted the
significance of map-based localization, introduced LIO-SAM, explored the NDT algo-
rithm, and discussed the use of ICP for alignment. The distinct features and advantages
of each approach were emphasized, along with the key prerequisites of software
tools and sensors required for implementing map-based localization.


CHAPTER 3
DESIGN AND METHODOLOGY OF MAP-BASED
LOCALIZATION SYSTEM
The last chapter discussed the theory behind the software and hardware tools and the al-
gorithms that have been used. In this chapter, the design and methodology of
the project are discussed in detail: the reasons for choosing LIO-SAM, the design specifi-
cations, the pre-analysis work, and the design methodology. The design of the ICP and
NDT localization algorithms is also covered to give a better understanding of the design
procedure.

3.1 Methodology

Figure 3.1: Methodology of the Project

The methodology employed in this project can be seen in the flowchart above; all of
the design and implementation is based on it.
First, ROS has to be installed and a new ROS environment set up. Then the
drivers required for the various sensors (IMU, GPS, and LiDAR) can be installed.
Once the drivers are installed successfully, they can be integrated into ROS to
provide a platform for running LIO-SAM.
LIO-SAM is an algorithm for 3D mapping of an environment. Once LIO-SAM can be
executed successfully, it is first tested with publicly available datasets. Once this is done,


certain parameters have to be changed to ensure compatibility with the RVCE Campus
datasets. After these parameters have been altered as required, LIO-SAM can be executed
with the RVCE Campus datasets to generate the required pcd maps.
Once the LIO-SAM has successfully generated the required pcd maps, the imple-
mentation of map-based localization can be started. For this purpose, two map-based
localization algorithms, ICP and NDT Localization, have been used and the results have
been compared at the end. For implementing map-based localization, the pcd maps gen-
erated by LIO-SAM have to be given as input to both algorithms. The algorithms first
load the map in rviz, where the 3D environment can be visualized. Then, once the
rosbag is played, localization happens on top of the loaded .pcd map. The
map-based algorithms are tested with both public and RVCE Campus datasets.
Finally, the results acquired from both algorithms are studied in detail. Inferences
are drawn based on the results acquired, and the validity of both algorithms is compared.
The detailed implementation of the project can be found in Chapter 4.

3.2 Robot Operating System (ROS)


The open-source Robot Operating System (ROS) framework helps researchers and devel-
opers create and reuse code for various robotics applications. Behind ROS is a global
open-source community of engineers, programmers, and enthusiasts working to improve
it, expand its usability, and make robotics accessible to all. ROS makes it easy to build
on the work of other developers efficiently without having to learn completely
different platforms, making the procedure less time-consuming and more efficient.
As a number of pre-existing works in this field have been used either directly in the
project or indirectly as part of experimental procedures, the use of ROS was essential.
Moreover, ROS provides a platform where the sensors can be integrated with the
algorithms efficiently. Also, robots and sensors that are hard to access can be easily
simulated using ROS tools like Gazebo, and the results of the localization algorithms
can be easily viewed and calibrated in another ROS tool, Rviz.
ROS1 Melodic has been used in the project, as it is the version the algorithms were
originally created for and has extensive resources available for debugging and otherwise.


Figure 3.2: ROS Framework and its Applications

3.3 LIO-SAM
LIO-SAM stands for LiDAR Inertial Odometry via Smoothing and Mapping. It is
an inertial odometry algorithm that can achieve highly accurate and real-time trajectory
estimation for robots and map building. LIO-SAM has been extensively tested in various
environments using a number of our RVCE Campus data sets. This resulted in us being
able to generate highly accurate and precise point cloud maps that can be used to perform
map-based localization.

3.3.1 Design Specifications


This algorithm maintains two factor graphs and runs about 10x faster than real time.

1. mapOptimization.cpp: The factor graph in this file optimizes the GPS
and LiDAR odometry factors and is maintained consistently throughout the entire du-
ration of the run.

2. imuPreintegration.cpp: The factor graph in this file optimizes the IMU
and LiDAR odometry factors and also helps in estimating the IMU bias. This factor
graph is reset periodically, which guarantees real-time estimation of odometry
at the IMU frequency.

Figure 3.3: System Architecture of LIO-SAM

3.3.2 Detailed Design Methodology


Sensor data from a 3D LiDAR, an IMU, and, optionally, a GPS are received by the
system. Using the observations from these sensors, the robot's state and trajectory
are estimated. This state estimation problem is formulated as a maximum a posteriori
(MAP) problem. A factor graph is used to model this problem, as it is better suited
to this task than Bayes nets. Without loss of generality, the system can also
incorporate measurements from other sensors, such as heading from a compass or altitude
from an altimeter.
Some notation used extensively in the next section has to be defined first. The body
frame of the robot is denoted B and the world frame W. For convenience in calculations,
the IMU frame is assumed to coincide with the robot body frame. Fig. 3.4 shows the
proposed system. For the purpose of building the factor graph, four different kinds of factors
and one type of variable are introduced. This variable, attached to the nodes of the graph,
captures the state of the robot at a particular moment. The four
basic factors used are:


Figure 3.4: System structure for LIO-SAM

1. IMU Preintegration Factors: The angular velocity and acceleration measurements
of the IMU are given by Eqns. (3.1) and (3.2), where ω̂_t and â_t are the raw IMU
measurements in B at time t. ω̂_t and â_t are affected by a slowly varying bias b_t and
white noise n_t. R_t^BW is the rotation matrix from W to B, and g is the gravity
vector in W.

ω̂_t = ω_t + b_t^ω + n_t^ω (3.1)

â_t = R_t^BW (a_t − g) + b_t^a + n_t^a (3.2)

Now these measurements from the IMU can be used to infer the motion of the
robot. The velocity, position, and rotation of the robot at time t + ∆t can be
computed using Eqns. (3.3), (3.4), and (3.5) below, where R_t = R_t^WB = (R_t^BW)^T:

v_{t+∆t} = v_t + g∆t + R_t (â_t − b_t^a − n_t^a) ∆t (3.3)

p_{t+∆t} = p_t + v_t ∆t + ½ g∆t² + ½ R_t (â_t − b_t^a − n_t^a) ∆t² (3.4)

R_{t+∆t} = R_t exp((ω̂_t − b_t^ω − n_t^ω) ∆t) (3.5)

Next, an IMU preintegration method is applied. Using this, the relative body
motion between two consecutive timestamps can be found. The preintegrated
measurements ∆v_ij, ∆p_ij, and ∆R_ij between times i and j are computed using
Eqns. (3.6), (3.7), and (3.8):

∆v_ij = R_i^T (v_j − v_i − g∆t_ij) (3.6)

∆p_ij = R_i^T (p_j − p_i − v_i ∆t_ij − ½ g∆t_ij²) (3.7)

∆R_ij = R_i^T R_j (3.8)

IMU preintegration is not only efficient but also provides one type of constraint for
the factor graph: the IMU preintegration factor. (A minimal numeric sketch of the
state propagation in Eqns. (3.3)-(3.5) is given at the end of this section.)

2. LiDAR Odometry Factors: Feature extraction is executed first for each fresh
LiDAR scan. In order to extract edge and planar features, the roughness of
points within a small local region is assessed: points with high roughness values are
categorized as edge features, while points with low roughness are categorized as planar
features. The edge and planar features extracted from a LiDAR scan at time i are
denoted F_i^e and F_i^p respectively. All the features extracted at time i compose an
entire LiDAR frame F_i, where F_i = {F_i^e, F_i^p}.

Using every individual LiDAR frame for computation and adding factors for each
to the graph is not computationally feasible, so the concept of keyframe selection,
widely used in the field of visual SLAM, is adopted. In this approach, a LiDAR
frame F_{i+1} is selected as a keyframe when the change in the pose of the robot
exceeds a user-defined threshold relative to the previous state x_i. The new keyframe
F_{i+1} is associated with a new robot state node x_{i+1} in the factor graph, and
the LiDAR frames between the two keyframes are discarded. Adding keyframes in
this manner achieves a balance between memory consumption and map density while
keeping the factor graph relatively sparse, which is suitable for real-time nonlinear
optimization. For this project, the position and rotation thresholds for adding a new
keyframe are chosen as 1 m and 10°.


3. GPS Factors: Even though accurate state estimation and mapping can be achieved
using only the IMU preintegration and LiDAR odometry components, the system still
exhibits drift when performing long-term navigation tasks. To solve this issue,
sensors that provide absolute measurements, such as GPS, compass, and altimeter,
can be introduced to eliminate the drift. GPS is used here, as it is a common
component of real-world navigation systems.

The GPS readings are first translated into the local Cartesian coordinate system;
when a new node is added to the factor graph, a new GPS factor is associated with it.
If the GPS signal and the LiDAR frame are not hardware-synchronized, the GPS
measurements are linearly interpolated based on the timestamp of the LiDAR frame.

Since the drift of LiDAR-inertial odometry grows relatively slowly, it is not
necessary to continuously add GPS factors whenever GPS reception is available.
In practice, a GPS factor is only added when the estimated position covariance
exceeds the received GPS position covariance.

4. Loop Closure Factors: Unlike LOAM and LIOM, loop closures can be effortlessly
incorporated into the proposed system thanks to the use of a factor graph. For
demonstration purposes, a simple but effective Euclidean distance-based loop closure
detection method is put into practice. The proposed framework is also compatible
with other loop closure detection techniques that compute a point cloud descriptor
and use it for place recognition.

When a new state x_{i+1} is added, the factor graph is first searched for prior states
that are close to x_{i+1} in Euclidean space. As illustrated in Fig. 3.4, x_3 is one of
the returned candidates. An attempt is then made to match F_{i+1} to the sub-keyframes
{F_{3−m}, ..., F_3, ..., F_{3+m}} using scan matching. Note that before scan matching,
F_{i+1} and the past sub-keyframes are transformed into W. The relative transformation
∆T_{3,i+1} is found and incorporated into the graph as a loop closure factor. Throughout
this project, the search distance for loop closures is set to 15 m from a new state
x_{i+1} and the index m to 12.

However, in practice, when GPS is the only absolute sensor available, adding loop
closure factors is found to be very helpful for correcting the drift in the robot's
altitude. This is due to the extremely unreliable height measurement provided by
GPS, which in our experiments led to altitude errors nearing 100 m in the absence
of loop closures.

The methodology explained above is the basis for the working of LIO-SAM and helps
us generate the required .pcd maps for performing map-based localization using other
algorithms.
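To make the state propagation concrete, the following is a minimal Eigen-based sketch of Eqns. (3.3)-(3.5). It is illustrative only: type and variable names are assumptions, the gravity sign convention is assumed, and bias/noise correction is assumed to have been applied to the measurements before the call.

#include <Eigen/Dense>

// Robot state as used in Eqns. (3.3)-(3.5).
struct ImuState
{
  Eigen::Vector3d p;  // position in W
  Eigen::Vector3d v;  // velocity in W
  Eigen::Matrix3d R;  // rotation from B to W
};

// Propagate the state over one IMU interval dt, given bias- and
// noise-corrected accelerometer (acc_hat) and gyroscope (gyr_hat) readings.
ImuState propagate(const ImuState& s,
                   const Eigen::Vector3d& acc_hat,
                   const Eigen::Vector3d& gyr_hat,
                   double dt)
{
  const Eigen::Vector3d g(0.0, 0.0, -9.81);  // gravity in W (sign convention assumed)

  ImuState out;
  out.v = s.v + g * dt + s.R * acc_hat * dt;                       // Eqn. (3.3)
  out.p = s.p + s.v * dt + 0.5 * g * dt * dt
              + 0.5 * s.R * acc_hat * dt * dt;                     // Eqn. (3.4)

  // Exponential map of the angular increment, Eqn. (3.5).
  const Eigen::Vector3d w = gyr_hat * dt;
  const double angle = w.norm();
  const Eigen::Vector3d axis =
      angle > 1e-12 ? Eigen::Vector3d(w / angle) : Eigen::Vector3d::UnitX();
  out.R = s.R * Eigen::AngleAxisd(angle, axis).toRotationMatrix();
  return out;
}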

3.4 ICP Localization


The ICP, or Iterative Closest Point, algorithm is used for global position estimation in the
system. In a process known as scan registration, a source point cloud is aligned to a fixed
target point cloud by determining the best possible translation and rotation to reduce
the distance between the two[3]. There are several ICP variants; point-to-plane, for
instance, has been demonstrated in [32] to have better convergence than the most
commonly used variant, point-to-point.

3.4.1 Design Specifications


1. Point-to-plane variation of ICP: ICP's point-to-plane variation is frequently
more effective than the traditional point-to-point kind at determining the ideal
transformation[32]. By estimating the surface normals for each point of the
target cloud, the tangent planes are created. The surface normal for each point in
the target cloud O is denoted n_j.

Figure 3.5: Example of 2D Point-to-plane ICP

The squared error metric is defined as the sum of the squared distances between
each point in the source cloud and the tangent plane at its corresponding point in
the target cloud:

E(R, t) = Σ_{i=1}^{N_s} [(o_{i*} − (R s_i + t)) · n_{i*}]², (3.9)

where n_{i*} is the surface normal at the corresponding point o_{i*} for the current
iteration of ICP. The rotation and translation are applied to the source cloud as

s_{i+1} = R s_i + t. (3.10)

(A minimal code sketch of this point-to-plane alignment is given at the end of this
subsection.)

2. Sources of Error: When implementing ICP, a few sources of error must be taken
into account [7]. The first is that ICP may converge to a local minimum rather
than the global minimum. Since ICP does not account for this while minimizing the
cost function, this frequently turns out to be the dominant error. To prevent this,
an initial alignment can be performed before running ICP to bring the source cloud
closer to the global minimum, using either a qualified estimate or a random selection
of different initial positions. In [25], stochastic ICP is presented as a further approach
to avoiding local minima: before each iteration of the ICP algorithm, a random
Gaussian-noise translation is applied to the source cloud to enable the algorithm to
escape from a local minimum.

The second source of error is under-constrained situations, in which there is
insufficient data to determine the position and orientation of the object in question.
Since ICP cannot uniquely identify the exact location of the object, the solution can
be seen "sliding" along one of the coordinate axes in the results.

Even when the source cloud is initially positioned in the region of attraction of
the true solution before the ICP algorithm is carried out, sensor noise is a third
source of error that can alter the outcome of the solution. The cause is that the
source and target clouds are not as identical as they ought to be. Outlier removal
algorithms can be used to reduce this error, but only if applied correctly, as badly
chosen filtering parameters can increase the error. The point cloud's outliers can
be seen as solitary points or sparse clusters of points dispersed across the cloud.
Because the ICP algorithm attempts to find correspondences for every cloud point,
these points skew the alignment [26]. This issue can be lessened by filtering the
outliers before running ICP, so that points are rejected if they are too far from the
target cloud's points.
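To illustrate the point-to-plane alignment described above, here is a minimal sketch using PCL's point-to-plane ICP variant. It is for illustration only: the localization package used in this project is built on libpointmatcher rather than PCL, and the parameter values below are placeholders, not tuned settings.

#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Point-to-plane ICP: minimizes the plane-distance metric of Eqn. (3.9).
// The normal fields of the target cloud must be filled in beforehand
// (e.g. with pcl::NormalEstimation).
Eigen::Matrix4f alignPointToPlane(
    const pcl::PointCloud<pcl::PointNormal>::Ptr& source,
    const pcl::PointCloud<pcl::PointNormal>::Ptr& target,
    const Eigen::Matrix4f& initial_guess)
{
  pcl::IterativeClosestPointWithNormals<pcl::PointNormal, pcl::PointNormal> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaxCorrespondenceDistance(1.0);  // reject far correspondences (outlier handling)
  icp.setMaximumIterations(50);
  icp.setTransformationEpsilon(1e-8);     // stop when the transform change is tiny

  pcl::PointCloud<pcl::PointNormal> aligned;
  icp.align(aligned, initial_guess);      // iteratively applies Eqn. (3.10)
  return icp.getFinalTransformation();
}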

3.4.2 Global Position Estimation

Figure 3.6: Global Position Estimation for ICP

The two sections that make up the global position estimation stage are shown in the
figure above. The inputs are the target point cloud and the processed source point cloud
from the prior pre-processing stage; the outputs are a global position estimate and its
related covariance. The two stages are discussed below:

1. Iterative Closest Point: The results of the pre-processing step are the input
data for the ICP algorithm: the target point cloud, which in this system is similar
to an elevation map of the area, and the pre-processed source point cloud created
from the LiDAR sensor mounted on the vehicle. As mentioned in the previous
chapter, a subset of points is also retrieved; in this stage, both the full clouds and
this subset are used. The initial position of the source cloud can be seen in Fig. 3.7.

To improve the conditions for the ICP method and reduce incorrect convergence, the
source cloud is initially translated to the predicted position of the vehicle during
the pre-processing stage. In the developed system, the algorithm is applied in two
stages, first to a subset of the clouds and then to the full clouds. The method has
also been extended in various ways.


Figure 3.7: Position of initial source cloud as estimated by the EKF. The green cloud is
the subset source cloud before executing the ICP algorithm. The red point cloud is after
ICP, and the blue point cloud is the fixed target subset cloud.

The subset source and target clouds containing the points with inclined surface
normals are used as input in the initial application of the ICP method, which
prevents incorrect convergence. Since it has been demonstrated in [32] that
the point-to-plane variation of ICP has better convergence, it is chosen. Once
determined, the transformation matrix that minimizes the cost function is applied
to the source cloud, ideally bringing it very near the global minimum.

Figure 3.8: Initial placement of the subset source point cloud. The green cloud is the
subset source cloud before executing the ICP algorithm. The red point cloud is after ICP,
and the blue point cloud is the fixed target subset cloud.

Fig. 3.8 shows the initial placement of the subset source point cloud. The two visible
lines in the point cloud are road ditches; these are important features, since they
provide significant details about the environment.

After the source cloud has been relocated using the best transformation matrix
determined by ICP on the subset source and target clouds shown in Fig. 3.8, the
method is run once more with the full clouds as input to account for all points.
Since the cloud has already been moved to a local or global minimum, this modifies
the cloud in place without producing a transformation as large as that produced by
the ICP run on the subset points. The main purpose of this stage is to make the
fitness-score approach used for ICP evaluation, discussed in more detail in the
following section, as precise as possible, which makes it easier to determine how
accurate the match is. (A code sketch of this two-stage scheme is given at the end
of this subsection.)

2. Uncertainty Estimation: The second and final phase of the global position
estimation procedure is finding the uncertainty of the result produced by ICP, which
is accomplished by estimating its covariance.

The position estimates provided by ICP are weighted by this covariance in the
EKF. Three covariance approaches are put to the test: the UT-based covariance
estimate, covariance from correspondences, and covariance from the Hessian. The
covariance is estimated in three dimensions because the point clouds are three-
dimensional, but only the x, y, and θ (rotation about the z-axis) elements of the
covariance matrix are selected and placed into a three-dimensional matrix. This
is because the EKF can only estimate 2D positions, as the local position estimates
generated by the present FOI system's odometry data are in 2D. The covariance is
computed after the ICP step, once the source cloud has been aligned to the global
coordinate system shown in Fig. 3.8.
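The following is a minimal PCL-based sketch of the two-stage scheme described above. It is illustrative only: the actual system uses libpointmatcher, and all cloud and variable names here are assumptions.

#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Two-stage ICP: align a feature subset first, then refine with the full clouds.
double twoStageIcp(const pcl::PointCloud<pcl::PointXYZ>::Ptr& source_subset,
                   const pcl::PointCloud<pcl::PointXYZ>::Ptr& target_subset,
                   const pcl::PointCloud<pcl::PointXYZ>::Ptr& source_full,
                   const pcl::PointCloud<pcl::PointXYZ>::Ptr& target_full,
                   const Eigen::Matrix4f& ekf_pose_guess,
                   Eigen::Matrix4f& final_transform)
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  pcl::PointCloud<pcl::PointXYZ> aligned;

  // Stage 1: subset clouds (e.g. points with inclined surface normals),
  // seeded with the EKF position prediction as the initial guess.
  icp.setInputSource(source_subset);
  icp.setInputTarget(target_subset);
  icp.align(aligned, ekf_pose_guess);
  const Eigen::Matrix4f T_subset = icp.getFinalTransformation();

  // Stage 2: full clouds, seeded with the stage-1 result.
  icp.setInputSource(source_full);
  icp.setInputTarget(target_full);
  icp.align(aligned, T_subset);
  final_transform = icp.getFinalTransformation();

  // Fitness score (mean squared distance to nearest target points),
  // used to judge how good the match is.
  return icp.getFitnessScore();
}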

This is the basic design methodology for the ICP Localization algorithm. Following
this, the project has been executed. The implementation of the same can be found in
Chapter 4 and the results can be found in Chapter 5.

3.4.3 NDT Localization


NDT, or Normal Distributions Transform, is an algorithm used for the global
position estimation of an object in a system. The design and working methodology of
the algorithm are discussed in the sections below.


3.4.4 Design Specifications


The Normal Distributions Transform (NDT) algorithm is a method used for prob-
abilistic scan matching and registration of point clouds. It is commonly employed in
map localization, simultaneous localization and mapping (SLAM), and object recogni-
tion tasks. NDT is known for its robustness to noise and outliers, making it suitable for
environments with sensor uncertainties.
The NDT algorithm represents point clouds as a probabilistic distribution, specifically
a Gaussian distribution. The steps involved in the NDT algorithm are as follows:

1. Voxelization: Voxelization is the process that converts a data structure, storing


geometric information in a contiguous domain, into a rasterized image. The point
clouds are divided into equally sized voxels to create a voxel grid representation.
Each voxel represents a local region of the environment.

2. Feature Extraction: Feature Extraction is the process by which an algorithm


extracts a set of informative and non-redundant data to help the learning for the
algorithm and generalize the further steps.

Initially, an estimate of the relative pose between the target point cloud and the
reference map is obtained using an initial guess, such as odometry data. For each voxel in
the target point cloud, the closest voxel in the reference map is found. Weights are
assigned to the correspondences based on a similarity measure: high weights are
given to voxels with similar distributions, while low weights are assigned to dissimilar
voxels or outliers.
Using the correspondences and their weights, an optimal transformation matrix that
aligns the two point clouds is estimated. The estimated transformation is applied to
the target point cloud, aligning it with the reference map. The algorithm checks whether
the estimated transformation matrix has reached the desired accuracy or whether the change
between iterations falls below a specified threshold. If not, the algorithm iterates
again; once converged, the final transformation matrix provides the optimal alignment
between the target point cloud and the reference map, representing the relative pose
estimate.
The NDT algorithm is advantageous in scenarios with sparse or noisy data, as it
captures the local distribution information rather than relying solely on individual point
correspondences. Additionally, NDT provides a probabilistic representation, enabling


uncertainty modeling and incorporation into probabilistic frameworks.
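As an illustration of the NDT registration loop described above, the following is a minimal sketch using PCL's NDT implementation; the parameter values are placeholders rather than tuned project settings.

#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>

// Register the current lidar scan against a pre-built map using NDT.
Eigen::Matrix4f ndtLocalize(const pcl::PointCloud<pcl::PointXYZ>::Ptr& scan,
                            const pcl::PointCloud<pcl::PointXYZ>::Ptr& map,
                            const Eigen::Matrix4f& odom_guess)
{
  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0f);             // voxel side length: one Gaussian per 1 m cell
  ndt.setStepSize(0.1);                // maximum line-search step
  ndt.setTransformationEpsilon(0.01);  // convergence threshold on the transform change
  ndt.setMaximumIterations(35);

  ndt.setInputSource(scan);            // current lidar scan
  ndt.setInputTarget(map);             // pre-built .pcd map; voxelized internally

  pcl::PointCloud<pcl::PointXYZ> aligned;
  ndt.align(aligned, odom_guess);      // odometry estimate as initial guess
  return ndt.getFinalTransformation();
}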


This chapter has discussed in detail the working principles of all the hardware, soft-
ware tools, and algorithms used. The methodology and reasoning behind the design of
the algorithms have also been discussed, to better explain why each algorithm was
chosen and executed in a particular manner.


CHAPTER 4
IMPLEMENTATION OF MAP-BASED
LOCALIZATION
After gaining an in-depth understanding of all the involved software, tools, and
required algorithms, the implementation of the project according to the design method-
ology can begin.

4.1 Software Installations


4.1.1 Ubuntu
Ubuntu 18.04.6 Bionic Beaver was chosen as the base operating system after careful
consideration. It provided the most user-friendly platform to test and experiment
with various algorithms, along with a plethora of resources for debugging and other
tasks. Ubuntu was dual-booted with Windows.

4.1.2 Robot Operating System (ROS)


ROS1 Melodic was chosen as the framework to run and test the algorithms. Many of
the algorithms to be worked with and tested were built on ROS Melodic, which provided
better compatibility. There is also the added advantage of included tools and software
like Rviz and Gazebo, which enabled more efficient work.
A full desktop install was done for ROS1 Melodic, including all additional libraries.
This provided the base to proceed with the creation of the catkin workspace.
The complete procedure is described below.

1. Configuring Ubuntu Repositories: The Ubuntu repositories have to be con-


figured to allow restricted, universe, and multiverse access. More details on this
procedure can be found in [30].

2. Setting up sources.list: The computer needs to be set up to accept packages
from packages.ros.org by running:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'


3. Setting up our keys: The keys for using a version of ROS on the computer have
to be set up:

sudo apt install curl  # if curl isn't already installed
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -

4. Pre-installation: To ensure that the Debian package index is up to date:

sudo apt update
sudo apt upgrade

5. Installation: A full desktop install of ROS has to be done. This includes rqt,
rviz, robot-generic libraries, 2D/3D simulators, 2D/3D perception libraries, and
other required components. The command is:

sudo apt install ros-melodic-desktop-full

Individual ROS packages can also be installed by using the command:

sudo apt install ros-melodic-<package-name>

To find available packages, the following command can be used:

apt search ros-melodic

6. Setting up the Environment: It is easier and more convenient if the ROS environment
variables are automatically added to the bash session every time a new shell is
launched:

echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
source ~/.bashrc

If there are multiple instances of ROS installed on the system, ~/.bashrc must
source only the setup.bash of the version currently being used.

7. Installing dependencies for building packages: The above steps have only
installed what is needed to run the core ROS packages. To create and manage our
own ROS workspaces, various tools and requirements that are distributed separately
have to be installed. The command for installing all dependencies and tools required
to build ROS packages is:


sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential

8. Initializing rosdep: Before any ROS tools can be used, rosdep has to be
initialized. rosdep makes it easy to install system dependencies for the source to be
compiled, and it is also required to run some core components in ROS. The steps to
install rosdep are:

sudo apt install python-rosdep

Now rosdep can be initialized using:

sudo rosdep init
rosdep update

Now ROS has been installed completely and is ready for implementing the required
algorithms. Any other libraries or packages can be installed if and when required.

4.2 Workspace Creation


Once ROS was installed, the next step was to create a workspace for all the drivers
and algorithms. This is called the catkin workspace. The steps to create the same are as
follows:

mkdir catkin_ws
cd catkin_ws
mkdir src
catkin_make

This creates the catkin workspace where all the algorithms and sensor drivers will be
stored.

source devel/setup.bash

This command sources the setup files required for the workspace from the ”devel”
folder and makes all the required files available inside the workspace. Now, sensor driver
installations can be started.


4.3 Sensor Driver Installations


The main algorithms used here are LIO-SAM, ICP Localization, and NDT
Localization. LIO-SAM requires IMU drivers, LiDAR sensor drivers, and camera
drivers, apart from other necessary libraries. Both ICP and NDT localization require
only IMU and LiDAR drivers; however, ICP localization requires many other ROS
libraries as well.

1. XSens MTi IMU Drivers: For the XSens MTi drivers to work properly, both
the drivers and the MT-SDK development suite are needed. The MT-SDK can
be downloaded from the official XSens website. The downloaded MT-SDK
files have to be extracted and installed either in the home directory or inside the
workspace. The IMU drivers package should also be cloned from GitHub to the
catkin workspace. Once this is done, the next step is to execute catkin_make.
If all the necessary conditions are satisfied, the drivers will be installed successfully.

Figure 4.1: Successful installation of XSens MTi IMU Drivers

2. FLIR Camera Drivers: For the FLIR camera to work properly, both the FLIR
camera drivers and the Spinnaker SDK Development Suite are needed. The Spinnaker
SDK can be downloaded from the official Spinnaker website. The downloaded
Spinnaker SDK files have to be extracted and installed either in the home directory
or inside the workspace. The FLIR camera drivers package should also be cloned
from GitHub to the catkin workspace. Once this is done, the next step is to execute
catkin_make. If all the necessary conditions are satisfied, the drivers will be
installed successfully.

Figure 4.2: Successful installation of FLIR Camera Drivers

3. Velodyne LiDAR Drivers: For the Velodyne LiDAR to work properly, the proper
drivers need to be installed. Some dependencies like velodyne_laserscan,
velodyne_msgs, velodyne_pointcloud, and other ROS libraries have to be in-
stalled prior to the driver installation. Next, the Velodyne LiDAR drivers package
should be cloned from GitHub to the catkin workspace. Once this is done, the next
step is to execute catkin_make. If all the necessary conditions are satisfied, the
drivers will be installed successfully.

4. Ouster LiDAR Drivers: For the Ouster LiDAR to work properly, the proper
drivers need to be installed. Some dependencies like build-essential, libeigen3-
dev, libjsoncpp-dev, libspdlog-dev, libcurl4-openssl-dev, and other ROS libraries
have to be installed prior to the driver installation. Next, the Ouster LiDAR drivers
package should be cloned from GitHub to the catkin workspace. Once this is done,
the next step is to execute catkin_make. If all the necessary conditions are satisfied,
the drivers will be installed successfully.

Once all the prerequisite drivers and ROS packages have been installed, the next step
is to build the workspace again using catkin_make. When the workspace is successfully
built without any errors, the process of implementing the LIO-SAM algorithm can be
started.

Figure 4.3: Successful installation of Velodyne LiDAR Drivers

4.4 LIO-SAM Implementation


After the sensor drivers are installed, LIO-SAM and its required packages are ready
to be installed.

4.4.1 Installation of Dependencies


The first step towards implementing LIO-SAM is the installation of all the dependen-
cies or packages that are required to run LIO-SAM properly. The following are the ROS
Melodic commands for the installation of some of the dependencies:

sudo apt-get install -y ros-melodic-navigation
sudo apt-get install -y ros-melodic-robot-localization
sudo apt-get install -y ros-melodic-robot-state-publisher

The Georgia Tech Smoothing and Mapping library (gtsam) installation steps are as
follows:

sudo add-apt-repository ppa:borglab/gtsam-release-4.0
sudo apt install libgtsam-dev libgtsam-unstable-dev

Once the above dependencies have been installed, the next step is to clone the LIO-
SAM repository in our workspace and build it. The steps are:

Figure 4.4: Successful installation of Ouster LiDAR Drivers

cd ~/catkin_ws/src
git clone https://github.com/TixiaoShan/LIO-SAM.git
cd ..
catkin_make

If catkin_make builds the workspace successfully, the next step is to move on to the
pre-execution setup for LIO-SAM.

4.4.2 Preparing LiDAR Data


For cloud deskewing, the point cloud data has to be prepared in the correct format. This
is done by the imageProjection.cpp file. The two requirements are:

1. Providing point time stamps: LIO-SAM performs point cloud deskewing using IMU
data; therefore, it is necessary to know the relative time of each point within a scan.
This information should be output directly by the most recent Velodyne ROS driver.
In this case, the point time channel is referred to as time. The deskewPoint()
method uses this relative time to determine the transformation of a point
with respect to the start of the scan. The description of the point type is found at
the top of imageProjection.cpp. The timestamp of a point should range between 0
and 0.1 seconds when the LiDAR rotates at 10 Hz. When using different LiDAR
sensors, this time channel may need to be renamed so that it represents the relative
time within a scan.


2. Providing point ring number: LIO-SAM uses this data to arrange the points
correctly in a matrix. The ring number identifies the sensor channel to which a
point belongs. The point type definition can be found at the very top of
imageProjection.cpp. This information should be output directly by the most
recent Velodyne ROS driver. Again, this channel might need to be renamed when
using different LiDAR sensors. The package currently supports only mechanical
LiDARs. A sketch of such a point type is shown below.
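The following is a minimal sketch of a PCL point type carrying both the ring and relative-time channels, along the lines of the definition at the top of imageProjection.cpp; the exact field names and types should be checked against the actual source.

#include <cstdint>
#include <pcl/point_types.h>
#include <pcl/register_point_struct.h>

// Point type with the per-point ring and relative-time channels described above.
struct VelodynePointXYZIRT
{
  PCL_ADD_POINT4D                  // float x, y, z (plus padding)
  PCL_ADD_INTENSITY;               // float intensity
  std::uint16_t ring;              // laser channel the point came from
  float time;                      // time relative to the scan start (0-0.1 s at 10 Hz)
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
} EIGEN_ALIGN16;

POINT_CLOUD_REGISTER_POINT_STRUCT(VelodynePointXYZIRT,
    (float, x, x)(float, y, y)(float, z, z)
    (float, intensity, intensity)
    (std::uint16_t, ring, ring)(float, time, time))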

4.4.3 Preparing IMU Data


1. IMU Requirement: LIO-SAM uses a 9-axis IMU that provides roll, pitch, and
yaw estimation, just like the original LOAM system. The roll and pitch estimates
are primarily used to initialize the system at the correct attitude. When GPS data
is used, the yaw estimate sets the system's initial heading correctly. Theoretically,
LIO-SAM can operate with a 6-axis IMU by using an initialization method like that
of VINS-Mono. The accuracy of the IMU measurements has a significant impact
on the system's performance: the higher the IMU data rate, the better the precision
of the system. The XSens MTi 9-axis IMU has been used here.

2. IMU Alignment: LIO-SAM converts raw IMU data from the IMU frame to the
LiDAR frame, which complies with the ROS REP-105 convention (x-forward, y-left,
z-upward). The correct extrinsic transformation must be provided in the params.yaml
file for the system to operate correctly. Because the IMU's acceleration and attitude
readings have distinct coordinate systems, there are two extrinsics; depending on the
IMU manufacturer, the two extrinsics may or may not be identical.

The extrinsicRot parameter in params.yaml indicates that the x and z acceleration
and gyro measurements must be negated in order to bring the IMU data into the
LiDAR frame.

Attitude readings may transform in a slightly different way. The IMU's attitude
measurement, q_wb, usually refers to the rotation from the IMU coordinate system
to the world coordinate system (for example, ENU). However, the algorithm needs
q_wl, the rotation from LiDAR to world. Since q_wl = q_wb * q_bl, the rotation
from LiDAR to IMU, q_bl, must also be applied. If acceleration and attitude share
the same coordinate system, extrinsicRPY in params.yaml should be specified the
same as extrinsicRot. (A small Eigen sketch of these two transformations is given
after this list.)

3. IMU Debug: The comments can be removed from the debug lines in imuHan-
dler() of imageProjection.cpp to check the results of the IMU data transformation.
The sensor array can then be rotated to verify that the readings match the movement
of the sensors.
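As a hedged illustration of the two frame alignments described above, the following Eigen sketch applies an extrinsic rotation to a raw reading and composes the attitude quaternions. The sign pattern shown corresponds to the x/z negation mentioned in the text and is specific to one particular sensor mounting, not a universal setting.

#include <Eigen/Dense>

// Rotate a raw accelerometer or gyro reading from the IMU frame into the
// LiDAR frame using the extrinsic rotation (extrinsicRot in params.yaml).
// The values below negate x and z, matching the setup described above.
Eigen::Vector3d imuToLidar(const Eigen::Vector3d& meas_imu)
{
  Eigen::Matrix3d extRot;
  extRot << -1, 0, 0,
             0, 1, 0,
             0, 0, -1;
  return extRot * meas_imu;
}

// Attitude: q_wl = q_wb * q_bl, composing the IMU's world attitude with the
// fixed IMU-to-LiDAR rotation (extrinsicRPY in params.yaml).
Eigen::Quaterniond attitudeInLidarFrame(const Eigen::Quaterniond& q_wb,
                                        const Eigen::Quaterniond& q_bl)
{
  return q_wb * q_bl;
}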

4.4.4 Selecting Datasets & Configuring Parameters


After the pre-execution setup for LIO-SAM is done, the next step is to choose the
datasets (in the form of .bag files) that will be used to test localization using LIO-SAM.
For this purpose, the publicly available Park Dataset[30] is selected.
The next step is to set up the parameters correctly for the corresponding
.bag files. The following are the setup parameters for running the Park dataset:

pointCloudTopic: "points_raw"    # Point cloud data
imuTopic: "imu_raw"              # IMU data
odomTopic: "odometry/imu"        # IMU preintegration odometry
gpsTopic: "odometry/gpsz"        # GPS odometry topic from navsat

4.4.5 Running the LIO-SAM Package


After setting up the parameters, the next step is to run the LIO-SAM package. From
inside the catkin workspace, source the setup files and then use the roslaunch com-
mand to run LIO-SAM:

cd catkin_ws
source devel/setup.bash
roslaunch lio_sam run.launch

Now LIO-SAM will open up in the Rviz window. Next, the .bag file containing the
Park Dataset has to be run by using the command:

rosbag play park_dataset.bag

The LIO-SAM program will start running and the localization happening will be
visible. The results for the same have been discussed in Chapter 5 and can be seen in
Fig. 5.1.


4.4.6 Testing LIO-SAM with RVCE Dataset


Once LIO-SAM is running properly, the next step is to test LIO-SAM using the
datasets that have been collected in the RVCE Campus using the Velodyne and Ouster
LiDARs, mounted to a car. These datasets are not publicly available.
There are also specific parameters that have to be changed in the params.yaml file
for running the RVCE datasets. The changes to the parameter file for using the datasets
with the Velodyne VLP32C LiDAR are shown in Fig. 4.5.

Figure 4.5: Parameter changes required for running RVCE Campus Datasets

After the above-mentioned setup is done, the steps to run LIO-SAM using a Termi-
nator window, which can also be seen in Fig. 4.6, are as follows:

1. Run roscore: The command roscore is run in window 1. This initializes and starts
an instance of ROS.

2. Run rviz: The command rviz is run in window 2. This opens up the rviz window
where the necessary configurations can be changed or loaded up. After playing the
rosbag file, the dataset can be seen in rviz as shown in Fig. 5.4.

3. Run RVCE Campus dataset: Using the command,

rosbag play rvce.bag

the RVCE Campus dataset is run in window 3.

4. Run LIO-SAM: When all three previous tools are running, LIO-SAM is run in
window 4 using the command,

roslaunch lio_sam run.launch

The results of this LIO-SAM implementation have been discussed in Chapter 5.

Figure 4.6: The Terminator window showing LIO-SAM execution. The screens are: a)
Running roscore, b) Running rviz, c) Playing RVCE Campus dataset, d) Running LIO-
SAM.

4.5 ICP Localization Implementation


Once the implementation of LIO-SAM is complete and the .pcd map has been gener-
ated, it is possible to move forward with the implementation of ICP Localization.

4.5.1 Creation of ICP Workspace


A separate workspace has to be created for ICP Localization. The creation of separate
workspaces for each algorithm helps in reducing clutter, makes debugging easier, and
minimizes confusion. The steps for the same are as follows:


mkdir icp_ws
cd icp_ws
mkdir src
catkin_make

4.5.2 Installation of Dependencies


The next step towards implementing ICP Localization is the installation of all the
dependencies or packages that are required to run the algorithm properly. The following
commands clone some of the dependencies into the src folder of icp_ws:

git clone https://github.com/leggedrobotics/libnabo.git
git clone https://github.com/leggedrobotics/libpointmatcher.git
git clone https://github.com/leggedrobotics/pointmatcher-ros.git

The installation of some ROS libraries is also required:

sudo apt install -y ros-melodic-pcl-ros ros-melodic-pcl-conversions ros-melodic-eigen-conversions ros-melodic-tf-conversions ros-melodic-tf2-geometry-msgs libgoogle-glog-dev

The GitHub repository has to be cloned in the src folder of the icp_ws. The commands
are:

cd icp_ws/src
git clone https://github.com/leggedrobotics/icp_localization.git

Now for building the packages:

catkin_make_isolated

Here, catkin_make_isolated has to be used instead of catkin_make because the src
folder contains some packages (such as libnabo and libpointmatcher) that are plain
CMake projects rather than catkin packages. This command builds each package in
isolation, so the non-catkin packages build without errors.

4.5.3 Parameter Configuration


There are three configuration (.yaml) files that are required for the algorithm to run
properly. They are:


1. icp.yaml: The error metric and outlier filters, among other ICP variables, are
configured in the icp.yaml file. Here, any filter that is applied to the map can be
defined.

2. input_filters.yaml: The input_filters.yaml configuration file specifies the actions
to be taken on each range sensor scan. In this file, subsampling, cropping, and
normal computation are specified.

3. params.yaml: The filtering and ICP algorithm can be configured as required by
making the necessary changes to this file.

After making the desired changes to the .yaml configuration files, there are some
parameters in other files that need to be changed as well. These parameters are:

1. icp_localization/initial_pose: This contains the initial pose of the range sensor
frame in the given map.

2. icp_localization/imu_data_topic: This is the ROS topic on which the IMU data
is published.

3. icp_localization/odometry_data_topic: This is the ROS topic on which the
odometry data is published.

4. icp_localization/num_accumulated_range_data: A certain number of point
cloud messages will be accumulated before attempting to register them in the map.
When using full scans, this value is set to 1. The LiDAR packets must first be
converted into sensor_msgs::PointCloud2 before they can be published. There is no
motion compensation in place at the moment.

5. icp_localization/range_data_topic: This is the ROS topic on which the LiDAR
data is published.

6. icp_localization/is_use_odometry: This determines if the odometry data will
be used for initial pose prediction. If set to false, the pose extrapolator will try to
use the IMU data.

7. icp_localization/is_provide_odom_frame: This indicates whether to provide
the odom frame or publish the map directly to the range sensor transformation.

8. icp_localization/gravity_vector_filter_time_constant: This is a constant that
helps in filtering the IMU measurements when estimating gravity. Smaller con-
stants produce noisier estimates but respond to changes in orientation more quickly.
Higher numbers provide smoother estimates but respond to a change in orientation
more slowly.

9. icp_localization/fixed_frame: This is the fixed frame map that is mostly used
for visualization.

10. icp_localization/min_num_odom_msgs_before_ready: This ensures that there
is a certain minimum number of messages present before starting ICP so that in-
terpolation between them is possible later.

11. calibration: This contains the calibration parameters for the sensors.
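Once the node is up (Section 4.5.4), the loaded values can be sanity-checked from the command line. A minimal sketch, assuming the parameters are loaded under the /icp_localization namespace as named above:

rosparam list | grep icp_localization
rosparam get /icp_localization/num_accumulated_range_data
rosparam get /icp_localization/is_use_odometry

If a parameter is missing or has an unexpected value, the corresponding .yaml or launch file should be revisited before playing the bag.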

4.5.4 Running ICP Package


After the necessary parameters have been altered, the algorithm is ready to be exe-
cuted. For the initial testing, the datasets that are publicly available in [48] have been
used. The Forest dataset and .bag file available online have been downloaded and put
in the data folder of the algorithm, and the correct input_filters_config_name file has
been selected too. Next, the icp_ws has to be opened in the terminal, the setup files have
to be sourced, and the roslaunch command to run ICP has to be executed:

cd icp_ws
source devel_isolated/setup.bash
roslaunch icp_localization icp_node_rosbag.launch

Now ICP will open up in the rviz window; the results have been discussed in
Chapter 5 and can be seen in Fig. 5.2.
This Forest dataset was recorded with an Ouster LiDAR. Here, instead of full scans, each
LiDAR packet is directly converted into a point cloud message and then published.
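Because the packets arrive as many small point clouds rather than assembled scans, it is worth checking the publication rate of the range data topic while the bag plays. A quick check with a standard ROS tool (/os_cloud_node/points is the Ouster topic name assumed elsewhere in this report):

rostopic hz /os_cloud_node/points

A rate far above the sensor's nominal 10-20 Hz scan rate confirms that per-packet clouds are being published, in which case num_accumulated_range_data has to be set above 1 as described in Section 4.5.3.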

4.5.5 Testing ICP with RVCE Dataset


Once ICP is running properly, the next step is to test ICP using the datasets that have
been collected in the RVCE Campus using the Velodyne and Ouster LiDARs, mounted
to a car. These datasets are not publicly available.


There are some specific parameters that have to be changed, like pcd_filename, in-
put_filters_config_name, bag_filename, and the parameter file path. Now ICP can be
run using the command:

roslaunch icp_localization icp_node_rosbag.launch

The results of this have been discussed in Chapter 5.

4.6 NDT Localization Implementation


Using the .pcd map generated by LIO-SAM, the implementation of NDT Localization
can proceed.

4.6.1 Creation of NDT Workspace


A separate workspace has to be created for NDT Localization. The creation of sep-
arate workspaces for each algorithm helps in reducing clutter, makes debugging easier,
and minimizes confusion. The steps for the same are as follows:

mkdir ndt_ws
cd ndt_ws
mkdir src
catkin_make

4.6.2 Installation of Dependencies


The next step towards implementing NDT Localization is the installation of all
the dependencies or packages that are required to run the algorithm properly. The
robot_localization package has to be installed first. The GitHub repository for the
package has to be cloned into the src folder of the ndt_ws using the command:

git clone -b melodic-devel https://github.com/cra-ros-pkg/robot_localization.git

All other dependencies should already be installed from our earlier work. The next
step is to build the workspace using:

cd ndt_ws
catkin_make


4.6.3 Parameter Configuration


There are multiple parameters that need to be changed and files that need to be
altered before the NDT algorithm can be executed. The first execution will be done
using the publicly available Courtyard Dataset from [49].

1. Configuring Map Loader: The map .pcd file and .csv files have to be moved
into ndt_localizer/map. The pcd_path parameter in the map_loader.launch file has
to be changed to our pcd file path by:

<arg name="pcd_path" default="$(find ndt_localizer)/map/courtyard_map.pcd"/>

The next step is to modify the trajectory path in ndt_localizer.launch. The trajec-
tory map is modified as a height map for initialization.

<arg name="path_file" default="$(find ndt_localizer)/map/courtyard_map.csv" doc="Mapping trajectory as height map" />

The different submap parameters also have to be configured.

<arg name="submap_size_xy" default="50.0" />
<arg name="submap_size_z" default="20.0" />
<arg name="map_switch_thres" default="25.0" />

2. Configuring Point Cloud downsample: The LiDAR point cloud topic has to
be configured in launch/point_downsample.launch. The Courtyard dataset uses
an Ouster LiDAR, so the topic is changed accordingly.

<arg name="points_topic" default="/os_cloud_node/points" />

3. Configuring Static transform: The static transform from /world to /map needs
to be configured.

<node pkg="tf2_ros" type="static_transform_publisher" name="world_to_map" args="0 0 0 0 0 0 map world" />
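The same transform can be tried out ad hoc from a terminal before committing it to the launch file. A minimal sketch using the standard tf2_ros tool (argument order: x y z yaw pitch roll parent child):

rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 map world

Publishing an identity transform like this simply declares that the map and world frames coincide.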


4. Configuring NDT Localizer: The main parameters for NDT Localizer can be
configured in ndt_localizer.launch. These parameters work well for 32- and 64-
channel LiDARs.

<arg name="trans_epsilon" default="0.05" doc="The maximum difference between two consecutive transformations in order to consider convergence" />

<arg name="step_size" default="0.1" doc="The newton line search maximum step length" />

<arg name="resolution" default="3.0" doc="The ND voxel grid resolution" />

<arg name="max_iterations" default="50" doc="The number of iterations required to calculate alignment" />

<arg name="converged_param_transform_probability" default="3.0" doc="" />

This concludes the parameter configuration for NDT. Now the algorithm is ready to
run.

4.6.4 Running NDT Package


From the ndt_ws folder in the terminal, the setup files have to be sourced and the
localizer has to be run.

cd ndt_ws
source devel/setup.bash
roslaunch ndt_localizer ndt_localizer.launch

It will take a few seconds for the map to load and become visible in rviz. An initial
pose has to be given for the current vehicle in rviz using the 2D Pose Estimate tool. This
will send an initial pose to the topic /initialpose.
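The same message can be published from the command line instead of clicking in rviz, which makes the experiment repeatable. A minimal sketch; the pose values are placeholders, and geometry_msgs/PoseWithCovarianceStamped is the message type rviz itself publishes on /initialpose:

rostopic pub --once /initialpose geometry_msgs/PoseWithCovarianceStamped \
  '{header: {frame_id: "map"},
    pose: {pose: {position: {x: 0.0, y: 0.0, z: 0.0},
                  orientation: {z: 0.0, w: 1.0}}}}'

The frame_id must match the map frame configured above, and the orientation quaternion encodes the initial heading.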
Now the rosbag is ready to be played in a separate terminal.


rosbag play --clock courtyard_wed_repeat_night_2021-03-03-19-07-18.bag
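Since the bag is replayed with --clock, the localizer nodes should consume simulated time; otherwise TF lookups can fail with extrapolation errors once the bag's timestamps diverge from wall-clock time. A standard-ROS sketch (not specific to this package), run before starting the localizer:

rosparam set use_sim_time true

With this set, nodes take their clock from the /clock topic that rosbag play --clock publishes.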

The localization will now start. The results of this have been discussed in
Chapter 5 and can be seen in Fig. 5.3.

4.6.5 Converting pcd map to csv file


For running NDT, the required inputs are a .pcd and a .csv file. Our .pcd files for the
RVCE Campus dataset, which were generated by LIO-SAM, have to be converted into .csv
files by using a script from GitHub [50].
The first step is to install a dependency required to run this script.

sudo apt install libpcl-dev

Now the GitHub repository can be cloned on our system by using:

git clone https://github.com/cristianrubioa/pcd2txt.git

Now the package can be built by using the following commands:

cd pcd2txt
mkdir build
cd build
cmake ../src/
make

Once the package is built successfully, our pcd files have to be moved into the build
folder of the package. Now, standing in the build folder in the terminal, a command has
to be executed for initiating the conversion.

cd ~/pcd2txt/build
./pcd2txt <pcd file> -o <output directory>

Now the converted .txt file can be easily converted into a .csv file by simply changing
the extension.
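Concretely, the rename is a single command (GlobalMap is the file name used for the RVCE maps later in this section):

mv GlobalMap.txt GlobalMap.csv

Only the extension expected by the map loader changes; the contents are untouched.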

4.6.6 Testing NDT with RVCE Dataset


Once the pcd and csv files are ready, the NDT localization can be implemented using
the RVCE Campus datasets. However, some more parameters have to be adjusted again
to work properly with the RVCE datasets.


The map .pcd and .csv files are named GlobalMap.pcd and GlobalMap.csv respectively.
These files have to be moved into ndt_localizer/map. The pcd_path parameter in the
map_loader.launch file has to be changed to our pcd file path by:

<arg name="pcd_path" default="$(find ndt_localizer)/map/GlobalMap.pcd"/>

The next step is to modify the trajectory path in ndt_localizer.launch. The trajectory
map is modified as a height map for initialization again.

<arg name="path_file" default="$(find ndt_localizer)/map/GlobalMap.csv" doc="Mapping trajectory as height map" />

These are all the changes that need to be made for executing NDT with the RVCE
Campus Datasets.
Now, standing in the ndt_ws folder in the terminal, the setup files have to be sourced and
the localizer has to be run.

cd ndt_ws
source devel/setup.bash
roslaunch ndt_localizer ndt_localizer.launch

It will take a few seconds for the map to load and become visible in rviz. When the
map is loaded, the rosbag is ready to be played.
The rosbag used here has been generated using the Ouster OS1 LiDAR and is named
rvce.bag. The command to run it is:

rosbag play --clock rvce.bag

The localization will now start on top of the map that has been loaded. The results
for this have been discussed in detail in Chapter 5.
This chapter discussed the implementation of the different map-based localization
algorithms and the generation of results for the same. The results of the work done in
this chapter have been discussed in detail in Chapter 5.


CHAPTER 5
RESULTS & DISCUSSIONS
The last chapter discussed the setup and implementation of the various algorithms
in detail. This chapter is a continuation of that work and the results from implementing
the algorithms have been discussed in detail here. The later parts of this chapter include
the performance comparison between the algorithms and the inferences drawn from the
results that we have achieved.

5.1 Simulation Results


This section discusses the results of implementing the algorithms using the publicly
available datasets that were used to confirm the working of the algorithms.

5.1.1 LIO-SAM
The dataset used in Fig. 5.1 is taken from GitHub [47]. The output of LIO-SAM
provides precise 3D location information within the dataset's coordinate system.
LIO-SAM generates a high-quality PCD map of the park by simultaneously optimizing
the vehicle's trajectory and creating a detailed representation of the surroundings. This
map captures the structural and spatial characteristics of the park, facilitating accurate
mapping within the environment, as shown in Fig. 5.1.

Figure 5.1: Localization by LIO-SAM using the Park Dataset


The symbol with the 3 axes in the middle of Fig. 5.1 denotes the axes of the IMU. The
concentric circles on the map are the layers of detection for the LiDAR. The RGB colors
represent the data points in 3D space that help in mapping out the environment.
From Fig. 5.1, it can be seen that the sensor is traveling along a path in a park. The
trees towards the left are clearly distinguishable, along with the path that merges with
the current path from the left. This is how the LiDAR detects the environment around
it, and the data helps in the visualization of the environment. This 3D map data is
generated and stored by LIO-SAM and is later used for performing map-based localization.

5.1.2 ICP Localization


The implementation of the ICP algorithm for map-based localization using the Forest
dataset from GitHub yielded accurate results, as shown in Fig. 5.2. The algorithm
demonstrated high localization accuracy in forest environments, effectively handling
occlusions and noise caused by dense foliage. The algorithm also adapted well to the
dynamic nature of forest environments, ensuring continuous and reliable localization.

Figure 5.2: ICP localization in a forest

This is a map-based localization algorithm. As such, the pcd map generated by LIO-
SAM has to be first given as input to the algorithm. The datasets for the simulation can
be found in [48]. The Forest dataset, which utilizes an Ouster LiDAR, has been used here.
The white-gray background represents the 3D point cloud map that has been uploaded.


It is a top-down view of a forest. The white data points indicate the trees in the forest.
The RGB spot in the middle denotes the LiDAR that is performing localization in the
environment. The extent of the RGB spot indicates the range of the LiDAR as well. The
LiDAR moves around in this environment using the data available from the pcd map and
performs localization, as can be seen in Fig. 5.2.

5.1.3 NDT Localization


An NDT localizer is loosely coupled with wheel odometry and an IMU for continuous
global localization within a pre-built point cloud map. It has been implemented for map-
based localization using the Courtyard dataset from GitHub [49] to achieve accurate
positioning within the courtyard environment. The NDT algorithm has shown its
strength in dealing with the unique characteristics of the courtyard dataset. Despite
complex structures, varying terrain, and limited GPS signal availability, the algorithm
provided accurate and reliable positioning results. It effectively captured the intricate
details of the courtyard, ensuring an accurate location estimate, and aligned the sensor
data with the reference map, resulting in precise localization, as can be seen in Fig. 5.3.

Figure 5.3: NDT localization for courtyard

The grey image in the background of Fig. 5.3 is the map that was given as input to
the algorithm. That map was originally created using a modified version of LIO-SAM.


The RGB map that can be seen on top of the input map indicates the localization that
is being performed by the algorithm. The different colors describe the intensity of the
data at various points of the map.

5.2 Experimental Results


This section discusses the results obtained from implementing the algorithms using the
RVCE Campus Datasets. The algorithms were tested rigorously using multiple datasets
taken using both Velodyne and Ouster LiDARs.

5.2.1 LIO-SAM Using RVCE Dataset


The parameters in the params.yaml file have to be modified as specified in Chapter
4 to ensure the proper working of LIO-SAM.
The command roscore is given first; it initializes the ROS Master, which acts as a
central hub for communication between different ROS nodes. Next, the command rviz
opens the rviz window, where changes can be made to obtain the desired configuration.
Then, after playing the rosbag files, the dataset can be viewed in rviz as shown in Fig. 5.4.

Figure 5.4: RVCE Campus dataset being played in rviz

To run LIO-SAM, the necessary parameters need to be changed as mentioned in Sec-
tion 4.4.6. The LIO-SAM output provides valuable information for map-based localization
and mapping. The algorithm creates a detailed map of the RVCE campus environment,
capturing its structure and features from the LiDAR sensor data, and generates a point
cloud that is a 3D representation of the campus, providing information about object
locations and properties. LIO-SAM uses the generated maps and point clouds to perform
accurate map-based localization, estimating the vehicle's position and orientation relative
to the map in real-time. The output also includes an estimated trajectory that shows the
vehicle's path through the campus, as seen in Fig. 5.5. These results offer valuable
insights for autonomous vehicles operating in the RVCE campus, facilitating reliable
localization and effective mapping.

Figure 5.5: RVCE Campus dataset being played in LIO-SAM

LIO-SAM constructs a map of the environment based on the extracted features and
the odometry estimates. It combines the information from multiple LiDAR scans to create
a 3D point cloud map, where each point represents a position in space. The mapping
process in LIO-SAM generates point cloud data by integrating the individual LiDAR
scans into a consistent and comprehensive map. The resulting point cloud represents the
geometry and structure of the environment, capturing objects, surfaces, and obstacles.
In Fig. 5.5, the real-time localization on running LIO-SAM is visible. The red areas in
the figure represent the data points nearest to the LiDAR sensor, and thus the areas
that are getting mapped with the utmost efficiency; the intensity for these points is
higher. The points in green that can be seen towards the left indicate the areas that
are farthest from the sensor, and hence not a lot of detail is captured.

Figure 5.6: PCD Map Generated By LIO-SAM For Velodyne Lidar

Fig. 5.6 shows the pcd map generated using the Velodyne LiDAR along with LIO-
SAM with the RVCE dataset. These generated point cloud data can be further used for
localization, mapping, object recognition, or other applications in autonomous navigation.
The pcd map shows the Administrative Block inside the RVCE Campus.
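For reference, the stock LIO-SAM configuration writes these maps to disk when the mapping node shuts down; the behaviour is controlled from params.yaml (parameter names as in the public LIO-SAM repository [47]; the directory shown is the repository default, resolved relative to the home directory):

# in lio_sam/config/params.yaml:
#   savePCD: true
#   savePCDDirectory: "/Downloads/LOAM/"

# after stopping LIO-SAM with Ctrl+C, inspect the saved map
pcl_viewer ~/Downloads/LOAM/GlobalMap.pcd    # pcl_viewer ships with pcl-tools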

Figure 5.7: PCD Map Generated By LIO-SAM For Ouster Lidar

Fig. 5.7 shows the pcd map generated using the Ouster LiDAR along with LIO-
SAM with the RVCE dataset. These generated point cloud data can be further used for
localization, mapping, object recognition, or other applications in autonomous naviga-
tion. The pcd map shows the main gate of the RVCE campus leading towards the Civil
Engineering Department inside the RVCE Campus.


5.2.2 ICP Localization using RVCE Campus Dataset


To run the ICP localization algorithm for the RVCE dataset, the necessary changes need
to be made as discussed earlier in Section 4.5.3. Here, the input given is the .pcd files
that were generated from running the RVCE data in LIO-SAM, and the output estimates
the vehicle's pose, which provides information about its position and orientation relative
to the pre-built map. This pose estimate is continuously updated as the vehicle moves
through the environment, allowing it to determine its location accurately, as seen in Fig.
5.8.

Figure 5.8: ICP localization Using RVCE Dataset

The pcd map shown in Fig. 5.7 has been used to run ICP as can be seen in Fig. 5.8.
The white data points in the background represent the map that has been uploaded to
the algorithm. The localization is taking place on top of this map by taking this map
as a reference for where the paths lie. The bright green circle in the middle of the RGB
area indicates the position of the vehicle fitted with the LiDAR sensor. The concentric
circles around the central position indicate the field of view of LiDAR and its effective
range. The changing color from green to blue toward the outer edges indicates that the
points that are furthest away from the LiDAR are being recorded with a lesser intensity.


5.2.3 NDT Localization using RVCE Campus Datasets


To run the NDT algorithm, a PCD map and CSV files are required as inputs for the
algorithm. To generate a CSV file, the PCD map generated from running LIO-SAM
is converted using a script found on GitHub [50], as discussed in Section 4.6.5. After
generating the CSV files, the NDT algorithm can be implemented for the RVCE Campus
Datasets by changing some parameters. The results of the NDT algorithm using the
RVCE Campus Datasets estimate the location of the vehicle, including its position and
direction on the reference map. These output results of the NDT algorithm are crucial
for evaluating the accuracy and reliability of vehicle location estimation in map-based
localization.
In addition, the NDT algorithm typically provides a covariance matrix representing
the uncertainty or confidence associated with the estimated location. This covariance
matrix describes the level of uncertainty in the estimates of position and orientation.

Figure 5.9: NDT localization Using RVCE Dataset

The pcd map shown in Fig. 5.7 has been used to run NDT, as can be seen in Fig. 5.9.
The white data points in the background represent the map that has been uploaded to
the algorithm. The localization is taking place on top of this map by taking this map
as a reference for where the paths lie. The RGB axis towards the bottom indicates the
starting position for the vehicle and also the starting point for localization. The RGB


axis towards the right of the figure indicates the current position of the vehicle that is
coming back to close the loop at the starting point. The RGB points all over the map
indicate the localization that is happening via the algorithm.

5.3 Inferences drawn from the Results obtained


Based on simulations and experimental results, significant conclusions can be drawn
about map-based localization algorithms in autonomous vehicles.
The ICP algorithm works by iteratively aligning two point clouds to find the best
transformation that minimizes the distance between corresponding points. While it is
suitable for applications with moderate complexity, ICP may encounter challenges with
dynamic objects, as the presence of moving or changing objects can disrupt the alignment
process and lead to inaccurate results.
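In standard form, each ICP iteration solves for the rigid transform that minimizes the squared distances between the currently matched point pairs, with p_i taken from the scan and q_i from the map:

(R^{*}, t^{*}) = \arg\min_{R,\,t} \sum_{i} \lVert R\,p_i + t - q_i \rVert^{2}

The correspondences are then recomputed under the new transform and the minimization is repeated until convergence, which is why moving objects that corrupt the correspondences degrade the final alignment.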
On the other hand, the NDT algorithm offers a good balance between accuracy and
computational efficiency in localization. NDT is known for its real-time localization
capabilities and can handle dynamic objects to some extent. However, its performance
may still be affected if dynamic objects cause significant changes in the environment.
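NDT, by contrast, never matches individual points. The map is divided into voxels, each summarized by a mean \mu_k and covariance \Sigma_k, and a candidate pose p is scored by how well the transformed scan points x'_i = T(p, x_i) fall within these local Gaussians:

s(p) = \sum_{i} \exp\!\left( -\tfrac{1}{2} \, (x'_i - \mu_{k(i)})^{T} \, \Sigma_{k(i)}^{-1} \, (x'_i - \mu_{k(i)}) \right)

Maximizing s(p) aligns the scan with the voxel statistics rather than with exact correspondences, so a few dynamic points perturb the score comparatively little, consistent with the behaviour noted above.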
In the case of the datasets that have been used by us, the environment is mostly static,
and thus the main disadvantage of the ICP algorithm is reduced significantly. The ICP
algorithm takes significant time to set up, as there are various parameters that have to be
changed before it can be used for a particular type of dataset. However, the setup time
when switching between different LiDARs is significantly lower.
In the case of NDT localization, the setup time required is significantly lower than for
ICP, but converting the pcd files to the equivalent csv files is an extra added step, and
changing the LiDAR sensors for the algorithm takes a long time when compared to ICP.
In conclusion, both of the algorithms that have been tested have a variety of applications.
Each of them has a particular scenario where it works best. A combination of the
two can be used to solve any kind of problem that might arise during the execution of
projects like these.
This chapter presented the results obtained from the evaluation of map-based lo-
calization algorithms in autonomous vehicles. The simulation and experimental results
provided insights into the strengths, weaknesses, and trade-offs of each algorithm. The
findings contribute to advancing autonomous driving technologies and offer valuable rec-
ommendations for future research and development.


CHAPTER 6
CONCLUSION AND FUTURE SCOPE
6.1 Conclusion
The project is focused on map-based localization in autonomous vehicles using LIO-
SAM, NDT, and ICP algorithms. The objective of the project is to integrate sensors
such as GPS, IMU, and Velodyne LiDAR sensors into the ROS framework, implement
the LIO-SAM algorithm for mapping, generate point clouds for the RVCE dataset using
LIO-SAM, and perform map-based localization using Point Cloud data.
Using the LIO-SAM algorithm, accurate and detailed point clouds were generated for
the RVCE dataset. The dataset was processed to produce point clouds, which served as
an important input for subsequent localization steps.
Finally, map-based localization was performed using the Point Cloud data generated by
LIO-SAM. The NDT and ICP algorithms played a crucial role in aligning the sensor
data with the existing map, resulting in an accurate estimation of the vehicle’s position
and direction. Integrating these algorithms with the point cloud generated by LIO-SAM
provided reliable and real-time localization results.
The project successfully achieved its objectives in map-based localization for au-
tonomous vehicles. The integration of sensors, implementation of the LIO-SAM algo-
rithm, generation of Point Clouds, and map-based localization demonstrated the effec-
tiveness of the chosen algorithms. The obtained quantitative results and performance
improvements validate the feasibility of this approach and contribute to advancements in
autonomous vehicle localization.

6.2 Future Scope


Map-based localization in autonomous vehicles using LIO-SAM, NDT, and ICP has
limitations. These include sensor limitations, environmental changes, map quality, com-
puting requirements, and scalability. However, several extensions are possible in the future.
These include combining sensors with cameras, IMUs, and radar, exploring deep learning
methods for better localization, handling dynamic environments through object tracking
and detection, updating and maintaining network maps, optimizing real-time perfor-
mance, and enabling collaboration between autonomous vehicles. These can improve the
accuracy, robustness, and efficiency of autonomous vehicle localization.


6.3 Learning Outcomes of the Project


1. Understanding of sensor integration: The process of combining different sen-
sors, such as GPS, IMU, and LiDAR, into a unified system to gather and utilize
data from multiple sources.

2. Map-based localization algorithms: The project utilized NDT and ICP algo-
rithms for map-based localization using Point Cloud data. This resulted in un-
derstanding and implementing these algorithms for aligning sensor data with the
existing map, estimating the vehicle's position and orientation, and improving lo-
calization accuracy.

3. Understanding of ROS: Robot Operating System (ROS) is an open-source
framework that provides tools and libraries for building robotic systems.

4. Understanding of LIO-SAM: By implementing the LIO-SAM algorithm for
mapping, the project enhanced the understanding of state-of-the-art mapping
algorithms in the field of autonomous vehicles.

5. Performance evaluation and optimization: The project involved quantita-
tively evaluating the performance of the implemented algorithms and techniques in
terms of accuracy, efficiency, and real-time capability.



BIBLIOGRAPHY

[1] Z. Yilmaz and L. Bayindir, "Simulation of lidar-based robot detection task using ros and gazebo," Avrupa Bilim ve Teknoloji Dergisi, pp. 513–529, 2019.

[2] M. Quigley, K. Conley, B. Gerkey, et al., "Ros: An open-source robot operating system," in ICRA Workshop on Open Source Software, Kobe, Japan, vol. 3, 2009, p. 5.

[3] A. A. Alajami, N. Palau, S. Lopez-Soriano, and R. Pous, "A ros-based distributed multi-robot localization and orientation strategy for heterogeneous robots," Intelligent Service Robotics, vol. 16, no. 2, pp. 177–193, 2023.

[4] L. Joseph, Robot Operating System (ROS) for Absolute Beginners. Springer, 2018.

[5] R. Chaudhuri and S. Deb, "Lidar integration with ros for slam mediated autonomous path exploration," pp. 225–235, 2022.

[6] A. J. C. Santos, J. M. M. Castellanos, and A. M. R. Duke, "Ros: An autonomous robot operating system for simultaneous localization and mapping using a 2d lidar sensor," in The 19th LACCEI International Multi-Conference for Engineering, Education, and Technology: "Prospective and trends in technology and skills for sustainable social development" "Leveraging emerging technologies to construct the future", 2021. doi: 10.18687/LACCEI2021.1.1.520.

[7] R. Li, M. A. Oskoei, K. D. McDonald-Maier, and H. Hu, "Ros based multi-sensor navigation of intelligent wheelchair," in 2013 Fourth International Conference on Emerging Security Technologies, IEEE, 2013, pp. 83–88.

[8] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, "Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020, pp. 5135–5142.

[9] C. Wang, G. Zhang, and M. Zhang, "Research on improving lio-sam based on intensity scan context," in Journal of Physics: Conference Series, IOP Publishing, vol. 1827, 2021, p. 012193.

[10] K. Liu, "An enhanced lidar-inertial slam system for robotics localization and mapping," arXiv preprint arXiv:2212.14209, 2022.

[11] B. Cao, R. C. Mendoza, A. Philipp, and D. Göhring, "Lidar-based object-level slam for autonomous vehicles," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2021, pp. 4397–4404.

[12] G. Lu, H. Yang, J. Li, Z. Kuang, and R. Yang, "A lightweight real-time 3d lidar slam for autonomous vehicles in large-scale urban environment," IEEE Access, vol. 11, pp. 12594–12606, 2023.

[13] K. Liu and H. Ou, "A light-weight lidar-inertial slam system with loop closing," arXiv preprint arXiv:2212.05743, 2022.

[14] S. Hossain and X. Lin, "Uncertainty-aware tightly-coupled gps fused lio-slam," arXiv preprint arXiv:2209.10047, 2022.

[15] K. Liu and H. Ou, "A light-weight lidar-inertial slam system with high efficiency and loop closure detection capacity," in 2022 International Conference on Advanced Robotics and Mechatronics (ICARM), IEEE, 2022, pp. 284–289.

[16] C. Bai, T. Xiao, Y. Chen, H. Wang, F. Zhang, and X. Gao, "Faster-lio: Lightweight tightly coupled lidar-inertial odometry using parallel sparse incremental voxels," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4861–4868, 2022.

[17] L. Han, Z. Shi, and H. Wang, "A localization and mapping algorithm based on improved lvi-sam for vehicles in field environments," Sensors, vol. 23, no. 7, p. 3744, 2023.

[18] L. Li, M. Yang, C. Wang, and B. Wang, "Road dna based localization for autonomous vehicles," in 2016 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2016, pp. 883–888.

[19] K. Dai, B. Sun, G. Wu, et al., "Lidar-based sensor fusion slam and localization for autonomous driving vehicles in complex scenarios," Journal of Imaging, vol. 9, no. 2, p. 52, 2023.

[20] D. Shin, K.-m. Park, and M. Park, "High definition map-based localization using adas environment sensors for application to automated driving vehicles," Applied Sciences, vol. 10, no. 14, p. 4924, 2020.

[21] Z. Wang, J. Fang, X. Dai, H. Zhang, and L. Vlacic, "Intelligent vehicle self-localization based on double-layer features and multilayer lidar," IEEE Transactions on Intelligent Vehicles, vol. 5, no. 4, pp. 616–625, 2020.

[22] K.-W. Kim, J.-H. Im, M.-B. Heo, and G.-I. Jee, "Precise vehicle position and heading estimation using a binary road marking map," Journal of Sensors, vol. 2019, 2019.

[23] Q. Li, J. P. Queralta, T. N. Gia, Z. Zou, and T. Westerlund, "Multi-sensor fusion for navigation and mapping in autonomous vehicles: Accurate localization in urban environments," Unmanned Systems, vol. 8, no. 03, pp. 229–237, 2020.

[24] X. Chen, I. Vizzo, T. Läbe, J. Behley, and C. Stachniss, "Range image-based lidar localization for autonomous vehicles," in 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, pp. 5802–5808.

[25] F. Munir, S. Azam, A. M. Sheri, Y. Ko, and M. Jeon, "Where am i: Localization and 3d maps for autonomous vehicles," in VEHITS, 2019, pp. 452–457.

[26] A. Poulose, M. Baek, and D. S. Han, "Point cloud map generation and localization for autonomous vehicles using 3d lidar scans," in 2022 27th Asia Pacific Conference on Communications (APCC), IEEE, 2022, pp. 336–341.

[27] Y. Zhang, L. Wang, J. Wang, and J. M. Dolan, "Real-time localization method for autonomous vehicle using 3d-lidar," in The Dynamics of Vehicles on Roads and Tracks, CRC Press, 2017, pp. 271–276.

[28] L. Wang, Y. Zhang, and J. Wang, "Map-based localization method for autonomous vehicles using 3d-lidar," IFAC-PapersOnLine, vol. 50, no. 1, pp. 276–281, 2017.

[29] E. Javanmardi, Y. Gu, M. Javanmardi, and S. Kamijo, "Autonomous vehicle self-localization based on abstract map and multi-channel lidar in urban area," IATSS Research, vol. 43, no. 1, pp. 1–13, 2019.

[30] J. Levinson, M. Montemerlo, and S. Thrun, "Map-based precision vehicle localization in urban environments," in Robotics: Science and Systems, Atlanta, GA, USA, vol. 4, 2007, p. 1.

[31] X. Lin, F. Wang, B. Yang, and W. Zhang, "Autonomous vehicle localization with prior visual point cloud map constraints in gnss-challenged environments," Remote Sensing, vol. 13, no. 3, p. 506, 2021.

[32] J. B. Chipka and M. Campbell, "Autonomous urban localization and navigation with limited information," in 2018 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2018, pp. 806–813.

[33] S. Pang, D. Kent, X. Cai, H. Al-Qassab, D. Morris, and H. Radha, "3d scan registration based localization for autonomous vehicles - a comparison of ndt and icp under realistic conditions," in 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), IEEE, 2018, pp. 1–5.

[34] T. Qin, Y. Zheng, T. Chen, Y. Chen, and Q. Su, "A light-weight semantic map for visual localization towards autonomous driving," in 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, pp. 11248–11254.

[35] L. Li, M. Yang, B. Wang, and C. Wang, "An overview on sensor map based localization for automated driving," 2017 Joint Urban Remote Sensing Event (JURSE), pp. 1–4, 2017.

[36] X. Li, S. Du, G. Li, and H. Li, "Integrate point-cloud segmentation with 3d lidar scan-matching for mobile robot localization and mapping," Sensors, vol. 20, no. 1, p. 237, 2019.

[37] R. Valencia, J. Saarinen, H. Andreasson, J. Vallvé, J. Andrade-Cetto, and A. J. Lilienthal, "Localization in highly dynamic environments using dual-timescale ndt-mcl," in 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, pp. 3956–3962.

[38] N. Akai, L. Y. Morales, E. Takeuchi, Y. Yoshihara, and Y. Ninomiya, "Robust localization using 3d ndt scan matching with experimentally determined uncertainty and road marker matching," in 2017 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2017, pp. 1356–1363.

[39] S. Pang, D. Kent, X. Cai, H. Al-Qassab, D. Morris, and H. Radha, "3d scan registration based localization for autonomous vehicles - a comparison of ndt and icp under realistic conditions," in 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), IEEE, 2018, pp. 1–5.

[40] Y.-C. Kan, L.-T. Hsu, and E. Chung, "Performance evaluation on map-based ndt scan matching localization using simulated occlusion datasets," IEEE Sensors Letters, vol. 5, no. 3, pp. 1–4, 2021.

[41] Ubuntu logo, https://upload.wikimedia.org/wikipedia/commons/thumb/7/76/Ubuntu-logo-2022.svg/768px-Ubuntu-logo-2022.svg.png?20220421195347.

[42] Ros, https://media.licdn.com/dms/image/D4D12AQFoaORVunb6bA/article-cover_image-shrink_600_2000/0/1676869008271?e=2147483647&v=beta&t=Ue5HbDYbc9MLsFeHdqe83pT-9KN7T6MT2lSR5As2pMQ.

[43] Velodyne lidar, https://cdn.shopify.com/s/files/1/1750/5061/products/Ultra_Puck_300x300.png?v=1541982401.

[44] Ouster lidar, https://msadowski.github.io/images/ouster/ouster_os1.jpg.

[45] Xsens mti, https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRVObwZlDScTytmn8B&usqp=CAU&ec=48665699.

[46] Flir camera, https://www.flir.com/globalassets/imported-assets/image/oryx-front-back.png.

[47] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, "Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping." (2020), [Online]. Available: https://github.com/TixiaoShan/LIO-SAM.

[48] E. Jevalic, "Localization using icp in a known map." (2021), [Online]. Available: https://github.com/leggedrobotics/icp_localization/pulse.

[49] K. Li Sun, M. Taher, and A. Shan, "A ros-based ndt localizer with multi-sensor state estimation." (2021), [Online]. Available: https://github.com/leggedrobotics/icp_localization/pulse.

[50] C. Rubioa, "Pcd2txt." (2021), [Online]. Available: https://github.com/cristianrubioa/pcd2txt.

APPENDIX A
APPENDIX
A.1 Paper Submission
In this section, the paper that has been submitted to the IEEE Transactions on Systems,
Man, and Cybernetics is reproduced in the subsequent pages of the report. The proof of
successful submission of the paper to the journal is shown in Figure ??.



Map-based Localization for Autonomous Vehicles
Souparna Roy Bhanu Prakash Sedamkar Muhammad Akram A Dr Rajani Katiyar
Dept. of ECE, Dept. of ECE, Dept. of ECE, Dept. of ECE,
RV College of Engineering, RV College of Engineering,) RV College of Engineering, RV College of Engineering,)
Bangalore, India- 560059. Bangalore, India- 560059. Bangalore, India- 560059. Bangalore, India- 560059.
[email protected] [email protected] [email protected] [email protected]

Abstract—Map-based localization is a technique used in sophisticated control systems to navigate and make decisions
robotics and navigation systems to determine the precise position autonomously, without direct human intervention. By inter-
and orientation of a mobile agent within a known environment, preting and analyzing real-time data from their surroundings,
utilizing a pre-existing map of that environment. It involves
comparing the sensor measurements obtained by the agent, such including road conditions, traffic patterns, and pedestrian
as visual or range sensor data, with the features and landmarks movements, autonomous vehicles possess the ability to per-
represented on the map. The objective of this paper has been to ceive their environment and respond accordingly.
perform map-based localization for a vehicle within the RVCE The advancements in map-based localization have been
campus using datasets that were generated along the campus driven by breakthroughs in sensor technology, particularly in
as well. Algorithms like LIO-SAM (LiDAR Inertial Odometry
via Smoothing and Mapping) were used to generate the PCD the fields of computer vision and LiDAR. Visual odometry al-
(Point Cloud Data) while ICP (Iterative Closest Point) and NDT gorithms employ cameras to track visual features and estimate
(Normal Distributions Transforms) localization algorithms have the robot’s motion, while LiDAR-based approaches leverage
been used to perform map-based localization. laser scans to generate 3D maps and perform localization.
The vehicle equipped with the sensors, collecting datasets, was The fusion of multiple sensors, such as cameras and LiDAR,
traveling at speeds between 20km/h to 25km/h. The datasets
were collected using the Velodyne and Ouster LiDARs in a dy- has also shown significant potential in improving localization
namic environment inside the RVCE campus. The Velodyne and accuracy and robustness.
Ouster LiDARs had a run time of 345.71s and 93.18s respectively Map-based localization is needed for high accuracy and
while collecting datasets. Both datasets were compared at the 23s reliability in a variety of environments and conditions. This
mark to evaluate the IMU, mapping, and GPS data and ensure is particularly important in safety-critical applications, such as
uniformity of input while executing both map-based algorithms.
It was found that the GPS accuracy in the case of ICP is more. autonomous driving, where even small errors in vehicle posi-
For ICP, the units of visualization used were F latSquares with tion estimation can have serious consequences. Another chal-
a size of 0.15m and for NDT, they were F latSquares with lenge is the need for real-time performance, as autonomous
a size of 3m. The large step size for NDT shows that it is vehicles must be able to process sensor data and update their
more computationally efficient as it processes more data per unit position estimates quickly and efficiently. This is also the
time, making it suitable for use in dynamic environments. A
combination of both helped in transitioning from move-based major motivation for the project.
to map-based localization. The proposed system was successfully However, several challenges remain in the field of map-
implemented and could seamlessly perform the required task based localization. The scalability of map-based localization
with minimal setup, computation power, and greater efficiency. algorithms to large-scale environments is an ongoing area
Index Terms—Map-based localization, pcd, LIO-SAM, LiDAR, of research. Real-time performance, computational efficiency,
3D Mapping, ICP, NDT, RVCE
and the ability to handle dynamic environments are critical fac-
tors for widespread adoption. Also, robustness against changes
I. I NTRODUCTION
in lighting conditions, appearance variations, and occlusions
In recent years, autonomous vehicles have emerged as poses additional challenges in visual-based localization ap-
groundbreaking technology with the potential to revolution- proaches. Map-based localization in outdoor environments
ize the transportation industry. By incorporating advanced may also be influenced by seasonal changes, environmental
artificial intelligence, sensor technology, and connectivity, deformations, or limited GPS availability. These challenges
these self-driving vehicles can redefine mobility, improve road have been addressed in this paper.
safety, and fundamentally transform our travel experiences.
Given the swift advancements witnessed in this field, it is II. L ITERATURE SURVEY
crucial to undertake thorough research to delve into the This literature review focuses primarily on the implemen-
implications, challenges, and opportunities associated with tation of map-based localization for autonomous vehicles uti-
autonomous vehicles. The concept of autonomous vehicles lizing the LIO-SAM (LiDAR Inertial Odometry via Smooth-
goes beyond the conventional mode of transportation, where ing and Mapping) framework in combination with Velodyne
humans serve as the primary operators. Instead, these vehicles and Ouster LiDAR sensors. To build extremely precise and
leverage an intricate network of sensors, algorithms, and economical maps for autonomous vehicles, the LIO-SAM
framework includes Simultaneous Localization and Mapping KITTI dataset, displaying superior translation drift (0.899%
(SLAM) methods. The use of modern LiDAR sensors, such as average) and real-time performance (10Hz). The results show
Velodyne and Ouster, enables accurate and detailed perception that NDT-LOAM is a highly precise, low-drift approach for
of the surroundings. LiDAR-based mapping and localization.
This research delves into the current state-of-the-art ap- Liu et al. [4] suggested an NDT-based real-time high-
proaches and methodologies used in map-based localization precision positioning and mapping method for large scenes.
for autonomous cars employing the LIO-SAM framework with Their approach solves LiDAR registration difficulties by en-
Velodyne and Ouster LiDAR sensors throughout this literature hancing stability, accuracy, and vehicle positioning. The ap-
review. Within this paradigm, we critically examine the NDT proach illustrates excellent durability, enhanced monitoring
and ICP algorithms’ strengths, limits, and applicability. performance, and increased accurate positioning even in sit-
The research also explores the implementation of the uations of signal fading by employing NDT properties and
NDT(Normal Distribution transform) and ICP(Iterative closest incorporating them into the SLAM framework.
point) algorithms. The NDT technique makes use of a statis-
tical model to characterize the surrounding area and calculate C. ICP
the pose of the vehicle, whereas the ICP approach iteratively Wang et al. [5] utilized 3D-LiDAR information to deliver
matches the observed point cloud data via the map data to a map-based localization approach for autonomous vehicles.
improve the localization estimation. The approach precisely determines the vehicle’s location and
A. LIO-SAM orientation relative to the map by merging an already con-
structed map with real-time LiDAR data. The ICP technique
The LIO-SAM method, developed by Shan et al. [1], reli- is used to build a high-resolution map, while SVD is used to
ably locates a vehicle in real-time by combining readings from estimate pose. The approach’s usefulness in attaining exact
inertial sensors and LiDAR odometry. The system performs localization throughout varied contexts is demonstrated by
exact pose estimation and retains a small number of keyframes empirical results using real-time data from autonomous cars.
while using the high-resolution 3D point cloud data from Li-
Shin et al. [6] stated a high-definition map localization
DAR sensors and a sliding window approach for simultaneous
technique for self-driving cars that makes use of advanced
localization and mapping, assuring accuracy and efficiency. By
driver assistance system (ADAS) environment sensors. The
offering an accurate approach for applications like mapping,
technique contains parts for representing environment features,
object identification, and autonomous navigation, this method
digital map analysis, map-based position correction, prede-
advances the area of map-based localization for autonomous
fined validation gates, and extended EKF-based localization
vehicles.
filtering and fusion. The system detects lane information with
In their study, Hossain and Lin proposed a method called
monocular vision and the quad rail with radar using numerous
”Uncertainty-Aware Tightly-Coupled GPS Fused LIO-SLAM”
feature extraction stages. The iterative closest point (ICP)
[2] which aims to accurately map autonomous delivery ve-
approach is used to align the lane and quadrant information
hicles operating in areas where GPS access is limited or
iteratively, adjusting the location of the host vehicle. Map-
denied. Through the combination of GPS and LiDAR inertial
matching and EKF with double updating improve the vehicle’s
odometry in a tightly-coupled manner and the use of an
position based on the digital map even more.
Extended Kalman filter for fusion, this method effectively
overcomes long-range drift issues and produces accurate maps III. M ETHODOLOGY
in outdoor as well as semi-indoor/indoor environments. The
experimental results demonstrated its effectiveness by achiev- The methodology employed in this project can be seen in
ing lower RMSE values when compared with GPS-based Fig. 1. All the design and implementation have been based
mapping approaches. on this methodology. ROS has to be installed and a new ROS
environment has to be set up. Then the drivers required for
B. NDT the various sensors like IMU, GPS, and LiDAR sensors can
Chen et al. [3] presented NDT-LOAM, a real-time LiDAR be installed. Once the drivers are installed successfully, the
odometry and mapping method that analyses LiDAR point drivers can be integrated into ROS to provide a platform for
cloud data for simultaneous localization and mapping. To running LIO-SAM.
improve the speed of computation, the framework separates LIO-SAM is an algorithm for 3D mapping an environment.
LiDAR SLAM into front-end odometry and back-end opti- Once LIO-SAM can be executed successfully, it is first tested
mization. To eliminate accumulated mistakes, the front end with publicly available datasets. Once this is done, certain
employs the Normal Distribution Transform (NDT) for point parameters have to be changed to ensure compatibility with
cloud registration. To analyze point clouds, the researchers the RVCE Campus datasets. After these parameters have been
employed weighted NDT in combination with Local Feature altered as required, LIO-SAM can be executed with the RVCE
Adjustment, building novel cost functions having weights Campus datasets to generate the required pcd maps.
depending on range values and surface features. NDT-LOAM Once the LIO-SAM has successfully generated the required
beat state-of-the-art algorithms like ALOAM/LOAM on the pcd maps, the implementation of map-based localization can
• imuPreintegration.cpp: The factor graph present in this
file optimizes the IMU and LiDAR odometry factor and
also helps in estimating IMU bias. The factor graph here
is reset periodically and it guarantees real-time estimation
of odometry at the IMU frequency.
The sensor data from a 3D LiDAR, an IMU, and, if desired,
a GPS are received by the system. Using the observations
from these sensors, the robot’s status and trajectory have to
be calculated. This state estimation problem is formulated as
a map a posteriori problem. A factor graph is used to model
this problem. If there is no loss of generality, this system
can easily incorporate measurements from other sensors like
heading from a compass or altitude from an altimeter.
For the purpose of building factor graphs, four different
kinds of factors and one type of variable have been introduced.
The nodes of the graph are responsible for this variable, which
captures the state of the robot at a particular moment. The four
basic factors that are used are:
• IMU Preintegration Factors: This factor focuses on find-
ing the relative motion between two consecutive timesteps
and hence, the pre-integrated time measurements ∆vij
(velocity), ∆pij (position), ∆Rij (rotation) between the
times i and j. The equations for the same are:
Fig. 1. Design Methodology Flowchart
∆vij = RiT (vj − vi − g∆tij ) (1)

1
be started. For this purpose, two map-based localization al- ∆pij = RiT (pj − pi − vi ∆tij − g∆t2 ij (2)
gorithms, ICP and NDT Localization, have been used and 2
the results have been compared at the end. For implementing
map-based localization, the pcd maps generated by LIO-SAM ∆Rij = RiT Rj (3)
have to be given as input to both algorithms. The algorithms A detailed derivation and explanation of these equations
first load up the map in rviz, where the 3D environment can be found in [7].
can be visualized. Then once the rosbag has been played, • LiDAR Odometry Factors: This factor introduces the
the localization happens on top of the uploaded pcd map. concept of Key-frames. As the computation for individual
The map-based algorithms are tested with both public and LiDAR frames and adding them to the factor graphs is not
RVCE Campus datasets. Finally, the results acquired from both efficient, Key-frames were introduced. In this approach,
algorithms are studied in detail. Inferences are drawn based a LiDAR frame Fi+1 is selected as a keyframe when the
on the results acquired, and the validity of both algorithms is change in the pose of the robot exceeds a user-defined
compared. threshold when compared to the previous state x. This
• GPS Factors: This factor is used to eliminate drift during long-term navigation tasks. Sensors that provide absolute measurements, such as a GPS, compass, or altimeter, can be introduced for this purpose. The GPS readings are first transformed into the local Cartesian coordinate system, and whenever a new node is added to the factor graph, a new GPS factor is associated with that node. If the GPS signal and the LiDAR frame are not hardware-synchronized, linear interpolation between GPS measurements is performed based on the timestamp of the LiDAR frame. Since the drift of LiDAR inertial odometry grows relatively slowly, it is not necessary to add GPS factors continuously whenever GPS reception is available; in practice, a GPS factor is added only when the estimated position covariance exceeds the covariance of the received GPS measurement.
• Loop Closure Factors: This factor implements a simple Euclidean distance-based loop closure detection method. Whenever a new state x_i+1 is added, the factor graph is first searched for prior states that are close to x_i+1 in Euclidean space. The new frame is then matched to the sub-keyframes of a candidate state using scan-matching. Throughout this project, the search distance for loop closures is set to 15 m from a new state x_i+1, and the index m, which selects how many neighboring keyframes are merged into the sub-keyframe map, is set to 12.

In practice, when GPS is the only absolute sensor available, adding loop closure factors is found to be very helpful for correcting the drift in the robot's altitude. This is due to the extremely unreliable height measurement provided by GPS, which in our experiments led to altitude errors nearing 100 m in the absence of loop closures.
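As a concrete illustration of the quantities above, the following minimal sketch (not LIO-SAM source code; the State struct and the names deltaV, deltaP, deltaR, and isNewKeyframe are illustrative assumptions) evaluates the preintegrated deltas of Eqs. (1)-(3) between two sampled states and applies the 1 m / 10° keyframe test used in this project:

#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

struct State {              // hypothetical container for one robot state
  Eigen::Matrix3d R;        // orientation in the world frame
  Eigen::Vector3d p;        // position
  Eigen::Vector3d v;        // velocity
  double t;                 // timestamp in seconds
};

// Gravity vector in the world frame (assumed z-up convention).
const Eigen::Vector3d g(0.0, 0.0, -9.81);

// Eq. (1): delta_v = R_i^T (v_j - v_i - g * dt)
Eigen::Vector3d deltaV(const State& i, const State& j) {
  const double dt = j.t - i.t;
  return i.R.transpose() * (j.v - i.v - g * dt);
}

// Eq. (2): delta_p = R_i^T (p_j - p_i - v_i * dt - 0.5 * g * dt^2)
Eigen::Vector3d deltaP(const State& i, const State& j) {
  const double dt = j.t - i.t;
  return i.R.transpose() * (j.p - i.p - i.v * dt - 0.5 * g * dt * dt);
}

// Eq. (3): delta_R = R_i^T R_j
Eigen::Matrix3d deltaR(const State& i, const State& j) {
  return i.R.transpose() * j.R;
}

// Keyframe test with this project's thresholds: a frame becomes a keyframe
// once the pose change exceeds 1 m of translation or 10 degrees of rotation.
bool isNewKeyframe(const State& prevKey, const State& cur) {
  const double trans = (cur.p - prevKey.p).norm();
  const Eigen::Matrix3d dR = deltaR(prevKey, cur);
  // Rotation angle recovered from the trace of the relative rotation matrix.
  const double cosAngle = std::clamp((dR.trace() - 1.0) / 2.0, -1.0, 1.0);
  const double angle = std::acos(cosAngle);
  return trans > 1.0 || angle > 10.0 * M_PI / 180.0;
}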
V. ICP

The ICP, or Iterative Closest Point [8], algorithm is used for global position estimation in the system. In a process known as scan registration, a source point cloud is aligned to a target point cloud by determining the translation and rotation that best reduce the distance between the two. There are several ICP variants, such as point-to-plane, that can converge better than the most commonly used variant, point-to-point. A minimal registration sketch using the Point Cloud Library (PCL) is shown below.
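The ICP localization in this project builds on an existing ROS package [9]. As a self-contained sketch of the underlying registration step only (file names and parameter values here are assumptions, not the project's configuration), PCL's IterativeClosestPoint can align a pre-processed scan to a pre-built map as follows; PCL also offers a point-to-plane variant, IterativeClosestPointWithNormals, which relates to the first bullet below:

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <iostream>

int main() {
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;
  Cloud::Ptr source(new Cloud), target(new Cloud), aligned(new Cloud);

  // Hypothetical inputs: the target is the pre-built pcd map and the
  // source is a single pre-processed LiDAR scan.
  pcl::io::loadPCDFile("map.pcd", *target);
  pcl::io::loadPCDFile("scan.pcd", *source);

  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaxCorrespondenceDistance(1.0);  // ignore pairs farther than 1 m
  icp.setMaximumIterations(50);
  icp.setTransformationEpsilon(1e-6);     // convergence threshold on the pose

  // A good initial guess (e.g., from odometry) helps avoid local minima.
  const Eigen::Matrix4f initial_guess = Eigen::Matrix4f::Identity();
  icp.align(*aligned, initial_guess);

  if (icp.hasConverged()) {
    std::cout << "fitness: " << icp.getFitnessScore() << "\n"
              << icp.getFinalTransformation() << std::endl;
  }
  return 0;
}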
• Point-to-plane variation of ICP: ICP's point-to-plane variation is frequently more effective than the traditional point-to-point kind at determining the ideal transformation. The surface normals are estimated for each point of the target cloud, and from these the tangent planes are created. The surface normal for each point in O is denoted by n_j.

• Sources of Error: When implementing ICP, a few causes of error must be taken into account. The first is that ICP may converge incorrectly to a local minimum rather than the global minimum. Since ICP does not account for this while minimizing the cost function, this frequently turns out to be the dominant error. Before using ICP, an initial alignment can be done to move the source cloud closer to the global minimum and prevent this problem; either a qualified estimate or a random selection of different initial positions can be used to accomplish this. In addition, before each iteration of the ICP algorithm, a random Gaussian noise translation can be applied to the source cloud to enable the algorithm to escape from a local minimum. The second source of error is under-constrained situations, in which there is insufficient data to determine the location and orientation of the object in question. Since ICP is then unable to identify the exact location of the object, "sliding" along one of the coordinate axes can be observed in the resulting alignments.
The two other concepts necessary for ICP are:

• Iterative Closest Point: The results of the pre-processing step are the input data used in the ICP algorithm; in other words, the target point cloud, which in this system is similar to an elevation map of the area, and the pre-processed source point cloud, which was created from the LiDAR sensor mounted on the vehicle. A subset of points is retrieved, as mentioned in the previous chapter, and in this stage both the entire cloud and this subset are utilized.

• Uncertainty Estimation: Finding the uncertainty of the result produced by ICP is the second and final phase of the global position estimation procedure, which is accomplished by estimating the covariance.

The location estimates provided by ICP are weighted by their covariance in the EKF. The UT-based covariance estimate, covariance with correspondences, and covariance with Hessian are the three covariance approaches that are put to the test. The covariance is estimated in three dimensions because the point clouds are three-dimensional, but only the x, y, and θ (rotation around the z-axis) elements of the covariance matrix are chosen and put into a 3×3 matrix. This is because the EKF can only estimate 2D positions, since the local position estimates generated by the present FOI system's odometry data are in 2D. The covariance is computed after the ICP step, once the source cloud has been aligned to the global coordinate system; a small index-selection sketch is shown below.
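The following minimal sketch shows this dimension reduction. The 6×6 ordering (x, y, z, roll, pitch, yaw) and the function name reduceTo2D are assumptions; the indices should be adapted to the estimator actually in use:

#include <Eigen/Dense>

// Reduce a full 6x6 pose covariance (x, y, z, roll, pitch, yaw) to the
// 3x3 (x, y, theta) covariance consumed by a planar EKF.
Eigen::Matrix3d reduceTo2D(const Eigen::Matrix<double, 6, 6>& cov6) {
  const int idx[3] = {0, 1, 5};  // x, y, yaw (theta = rotation about z)
  Eigen::Matrix3d cov3;
  for (int r = 0; r < 3; ++r)
    for (int c = 0; c < 3; ++c)
      cov3(r, c) = cov6(idx[r], idx[c]);
  return cov3;
}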
VI. NDT

The Normal Distributions Transform (NDT) algorithm is a method used for probabilistic scan matching and registration of point clouds. It is commonly employed in map localization, simultaneous localization and mapping (SLAM), and object recognition tasks. NDT is known for its robustness to noise and outliers, making it suitable for environments with sensor uncertainties.

The NDT algorithm represents point clouds as a probabilistic distribution, specifically a set of Gaussian distributions. The steps involved in the NDT algorithm are as follows:

• Voxelization: Voxelization is the process that converts a data structure storing geometric information in a continuous domain into a rasterized grid. The point clouds are divided into equally sized voxels to create a voxel grid representation, where each voxel represents a local region of the environment (a short PCL sketch follows this list).

• Feature Extraction: Feature extraction is the process by which the algorithm extracts a set of informative and non-redundant data to support the subsequent matching steps.
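To make the voxelization step concrete: NDT proper stores a Gaussian (a mean and covariance) per voxel, whereas PCL's VoxelGrid filter shown here illustrates only the gridding and downsampling idea. The 0.5 m leaf size is an assumed value, not taken from the project configuration:

#include <pcl/filters/voxel_grid.h>
#include <pcl/point_types.h>

// Divide the cloud into equally sized voxels, keeping one centroid per voxel.
pcl::PointCloud<pcl::PointXYZ>::Ptr
voxelize(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud(cloud);
  grid.setLeafSize(0.5f, 0.5f, 0.5f);  // voxel edge length in meters (assumed)
  pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
  grid.filter(*out);
  return out;
}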
Initially, an estimate of the relative pose between the target point cloud and the reference map is obtained from an initial guess, such as odometry data. For each voxel in the target point cloud, the closest voxel in the reference map is found, and weights are assigned to the correspondences based on a similarity measure: high weights are given to voxels with similar distributions, while low weights are assigned to dissimilar voxels or outliers.

Using the correspondences and their weights, an optimal transformation matrix that aligns the two point clouds is estimated, and this transformation is applied to the target point cloud to align it with the reference map. The algorithm then checks whether the estimated transformation has reached the desired accuracy, or whether the change between iterations falls below a specified threshold. If not, the correspondence and estimation steps are repeated with the updated pose. The final transformation matrix provides the optimal alignment between the target point cloud and the reference map, representing the relative pose estimate.
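This iterative alignment loop is implemented, for example, by PCL's NormalDistributionsTransform. The following minimal sketch (file names and parameter values are assumptions, not the project's configuration) registers a scan against the reference map starting from an odometry-based initial guess:

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>
#include <iostream>

int main() {
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;
  Cloud::Ptr source(new Cloud), target(new Cloud), aligned(new Cloud);

  pcl::io::loadPCDFile("map.pcd", *target);   // reference map (hypothetical)
  pcl::io::loadPCDFile("scan.pcd", *source);  // current LiDAR scan (hypothetical)

  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setInputSource(source);
  ndt.setInputTarget(target);
  ndt.setResolution(1.0);              // NDT voxel size in meters
  ndt.setStepSize(0.1);                // More-Thuente line-search step
  ndt.setTransformationEpsilon(0.01);  // stop when the pose change is this small
  ndt.setMaximumIterations(35);

  // Initial pose estimate, e.g., from odometry.
  const Eigen::Matrix4f initial_guess = Eigen::Matrix4f::Identity();
  ndt.align(*aligned, initial_guess);

  std::cout << "converged: " << ndt.hasConverged()
            << ", score: " << ndt.getFitnessScore() << "\n"
            << ndt.getFinalTransformation() << std::endl;
  return 0;
}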
VII. RESULTS AND DISCUSSION
The autonomous vehicle on which the Velodyne and Ouster LiDARs were first set up can be seen in Fig. 2; in the figure, it is set up with only the Ouster LiDAR.

Fig. 2. Autonomous Vehicle set up with the Ouster LiDAR, FLIR Camera, and XSens MTi IMU

All data pertaining to the sensors were cross-verified. The linear acceleration data from both LiDARs were compared and found to match, as can be seen in Fig. 3.

Fig. 3. Top: Linear Acceleration data from Velodyne sensor; Bottom: Linear Acceleration data from Ouster Sensor
The LIO-SAM algorithm was used to map the RVCE Campus environment and generate a pcd map for it. This helped in localizing the autonomous vehicle and made self-guided movement possible. The red color shows the area being mapped with higher intensity, while the green color towards the left indicates mapping with a lower intensity due to the range constraints of the LiDARs. Fig. 4 shows the execution of LIO-SAM with the RVCE Campus dataset.

Fig. 4. LIO-SAM execution in RVCE Campus

Fig. 5 shows the pcd map generated using LIO-SAM for the RVCE Campus, together with the corresponding Google Maps image of the area, using the Velodyne LiDAR; Fig. 6 shows the same using the Ouster LiDAR. The Google Maps images clearly show which part of the RVCE Campus was mapped, and the pcd files below them help to visualize the corresponding data points in 3D. The changing color from green to blue toward the outer edges indicates that the points furthest from the LiDAR are recorded with a lower intensity.

Fig. 5. RVCE Campus Google Maps photo and pcd map with Velodyne LiDAR

Fig. 6. RVCE Campus Google Maps photo and pcd map with Ouster LiDAR

The maps in Fig. 5 and Fig. 6 have been used to perform map-based localization [9]. For ICP localization, the pcd map is given as input to the algorithm. The map is loaded in the background and can be seen as the grey data points in Fig. 7. The bright green circle in the middle of the RGB area indicates the position of the vehicle fitted with the LiDAR sensor, and the concentric circles around it indicate the field of view of the LiDAR and its effective range. This output estimates the vehicle's pose, which provides information about its position and orientation relative to the pre-built map. The pose estimate is continuously updated as the vehicle moves through the environment, allowing it to determine its location accurately.

Fig. 7. ICP Localization with RVCE Campus Dataset

Fig. 8 shows NDT localization, where the pcd map and its corresponding csv file [10] are given as input to the algorithm. The map, again, can be seen as a set of white data points in the background. The RGB axis towards the bottom indicates the starting position of the vehicle, which is also the starting point for localization. The RGB axis towards the right of the figure indicates the current position of the vehicle as it comes back to close the loop at the starting point. The RGB points all over the map indicate the localization happening via the algorithm.

Fig. 8. NDT Localization with RVCE Campus Dataset

The NDT algorithm typically provides a covariance matrix representing the uncertainty or confidence associated with the estimated location; it describes the level of uncertainty in the position and orientation estimates. The results obtained with the RVCE Campus datasets estimate the location of the vehicle, including its position and orientation, on the reference map.
VIII. CONCLUSION

It was proposed to develop a map-based localization system using concepts like LiDAR odometry and multi-sensor fusion, and to deploy the system to perform localization inside the RVCE Campus. Existing frameworks like LIO-SAM have been used to provide the initial LiDAR inertial odometry, help map out the RVCE Campus, and generate pcd maps for the campus. These maps were then utilized in algorithms like ICP and NDT to perform map-based localization. All algorithms were adapted for use with our sensors and hardware. The LIO-SAM-generated maps were then successfully loaded into the map-based algorithms. This helped in successfully localizing the autonomous vehicle based on the data gathered by the sensors in real time and the path data available from the 3D map. The proposed method has been rigorously tested and thoroughly evaluated using multiple public datasets and datasets collected inside the RVCE Campus. The results show that NDT is better suited to localization in a dynamic environment, while ICP is better when multiple sensors have to be changed or replaced frequently. A combination of both helped in creating a functional system and in transitioning the existing system from move-based to map-based navigation. Future work might involve testing the algorithms on other systems, such as unmanned aerial vehicles.
REFERENCES
[1] Shan, Tixiao, et al. ”Lio-sam: Tightly-coupled lidar inertial odometry
via smoothing and mapping.” 2020 IEEE/RSJ international conference
on intelligent robots and systems (IROS). IEEE, 2020.
[2] Hossain, Sabir, and Xianke Lin. ”Uncertainty-Aware Tightly-Coupled
GPS Fused LIO-SLAM.” arXiv preprint arXiv:2209.10047 (2022).
[3] Chen, Shoubin, et al. ”NDT-LOAM: A Real-time Lidar odometry and
mapping with weighted NDT and LFA.” IEEE Sensors Journal 22.4
(2021): 3660-3671.
[4] Liu, Sijia, et al. ”Research on NDT-based positioning for autonomous
driving.” E3S Web of Conferences. Vol. 257. EDP Sciences, 2021.
[5] Wang, Liang, Yihuan Zhang, and Jun Wang. ”Map-based local-
ization method for autonomous vehicles using 3D-LIDAR.” IFAC-
PapersOnLine 50.1 (2017): 276-281.
[6] Shin, Donghoon, Kang-moon Park, and Manbok Park. ”High definition
map-based localization using ADAS environment sensors for application
to automated driving vehicles.” Applied Sciences 10.14 (2020): 4924.
[7] Forster, Christian, Luca Carlone, Frank Dellaert, and Davide Scara-
muzza. ”On-manifold preintegration for real-time visual-inertial odom-
etry.” IEEE Transactions on Robotics 33, no. 1 (2016): 1-21.
[8] Nylén, Rebecka, and Katherine Rajala. ”Development of an ICP-based
Global Localization System.” (2021).
[9] Jelavic, Edo. ”Localization using ICP in a known map.” 2021, https://github.com/leggedrobotics/icp_localization
[10] Rubioa, Cristian. ”pcd2txt.” 2021, https://github.com/cristianrubioa/pcd2txt
A.2 Similarity report

The publication report attached after this page is followed by the plagiarism/similarity report for this submission. The report indicates a similarity of up to 16% over its length, spread across 107 matched sources, which is within the acceptable range.