LiDAR Based Self-Driving Car
https://doi.org/10.22214/ijraset.2022.41213
Abstract: LiDAR, typically treated as an acronym for “light detection and ranging”, works much like sonar but uses pulsed laser light to map the distance to surrounding objects. It is used by a large number of autonomous vehicles to navigate environments in real time. Its advantages include impressively accurate depth perception, which allows LiDAR to measure the distance to an object to within a few centimetres at ranges of up to 60 metres. It is also highly suitable for 3D mapping, which means returning vehicles can then navigate the environment predictably, a significant benefit for most self-driving technologies. One of the key strengths of LiDAR is the number of areas that show potential for improvement. These include solid-state sensors, which could reduce its cost tenfold, sensor range increases of up to 200 m, and 4-dimensional LiDAR, which senses the velocity of an object as well as its position in 3D space. However, despite these exciting advances, LiDAR is still hindered by a key factor: its significant cost. LiDAR is not the only self-driving detection technology; cameras are the major rival, championed by Tesla as the best way forward. Elon Musk has described LiDAR as “a fool’s errand” and “unnecessary”. The argument runs that humans drive based only on ambient visible light, so robots should equally be able to. A camera is significantly smaller and cheaper than LiDAR (although more of them are needed), and has the advantage of seeing in better resolution and in colour, meaning it can read traffic lights and signs. However, cameras have a host of characteristics that make them tricky to use in common driving conditions. Whereas LiDAR uses near infra-red light, cameras use visible light and are thus more susceptible to issues when faced with rain, fog, or even some textures. In addition, LiDARs do not depend on ambient light, generating their own light pulses, whereas cameras are more sensitive to sudden light changes, direct sunlight and even raindrops.
Keywords: Congestion, Traffic Accident, LIDAR sensor, Global Positioning System, Electromagnetic waves
I. INTRODUCTION
Every year, approximately 1.5 million deaths worldwide are caused by road accidents. For new learners around the age of 18 who are just learning how to drive, it is frightening to think about an accident, especially one that occurs because of someone else’s mistake. On top of that, the sheer amount of traffic we have creates unnecessary frustration for nearly nine people out of ten. This was the main motivation for our project, a self-driving car, also known as an autonomous or robotic car, which can run without a driver. It is a vehicle that is capable of sensing its surroundings and navigating without any human help. These cars use a variety of techniques such as LiDAR, radar, laser light, GPS and computer vision. Because of its ability to sense its surroundings and act on that input by itself, this type of car is sometimes likened to a human brain. According to research, testing and experiments to develop self-driving cars have been going on for over 40-45 years. This type of vehicle requires a wide range of technologies and infrastructure to operate properly. With the development of technology, driverless cars have appeared, a concept pushed and supported by Google and various other companies. As development in technology increases, concern for the technology’s safety also increases; the main concern is the security of the car. The way we humans drive a car is very different from the way a robotic brain thinks. For example, consider the miscommunication between the car and the passenger: we humans can drive at our own will to make a passenger happy, but a self-driving car will only use the safest routes that are provided to it. It is also difficult to see how, if the car makes some kind of mistake because of a technology failure, such as crossing at high speed or ignoring a signal, it could correct that mistake on its own. The second most important concern is that people will never be able to trust the machine as they trust fellow humans without understanding the logic of the car and how it operates. It is important to understand the issues that surround the logic used in the car. There are many more concerns regarding robotic cars, for example: Will it be possible to change the places where the owner of the driverless car wants to go? What if I want to change where I am going? If there is a communication barrier, how can we communicate with the system or the car? For our project we have collected data from datasets released by Hesai Inc. and Ford Motor Company, having decided to build a self-driving car that uses LiDAR to sense its surroundings. We hope the dataset will be useful for robotics algorithms. Since 2016, self-driving cars have moved toward partial autonomy, with features that help drivers stay in their lane, adaptive cruise control (ACC) and the ability to self-park. Developers of self-driving cars use vast amounts of data from various datasets, together with image recognition, machine learning and neural networks, to build systems that can work in normal surroundings.
The Society of Automotive Engineers (SAE) developed an industry-standard scale from zero to five to describe the continuum of driving automation, although there are many gray areas where features might overlap. Here’s what those levels generally mean:
1) Level 0: No Automation. The driver is completely responsible for controlling the vehicle, performing tasks like steering,
braking, accelerating or slowing down. Level 0 vehicles can have safety features such as backup cameras, blind spot warnings
and collision warnings. Even automatic emergency braking, which applies aggressive braking in the event of an imminent
collision, is classified as Level 0 because it does not act over a sustained period.
2) Level 1: Driver Assistance. At this level, the automated systems start to take control of the vehicle in specific situations, but do
not fully take over. An example of Level 1 automation is adaptive cruise control, which controls acceleration and braking,
typically in highway driving. Depending on the functionality, drivers are able to take their feet off the pedals.
3) Level 2: Partial Automation. At this level, the vehicle can perform more complex functions that pair steering (lateral control)
with acceleration and braking (longitudinal control), thanks to a greater awareness of its surroundings. Level 2+: Advanced
Partial Automation. While Level 2+ is not one of the officially recognized SAE levels, it represents an important category that
delivers advanced performance at a price consumers can afford. Level 2+ includes functions where the vehicle systems are
essentially driving, but the driver is still required to monitor the vehicle and be ready to step in if needed. (By contrast, Level 3
represents a significant technology leap, as it is the first level at which drivers can disengage from the act of driving — often
referred to as “mind off.” At Level 3, the vehicle must be able to safely stop in the event of a failure, requiring much more
advanced software and hardware.) Examples of Level 2+ include highway assistance or traffic jam assistance. The ability for
drivers to take their hands off the wheel and glance away from the road ahead for a few moments makes for a much more
relaxing and enjoyable experience, so there is strong consumer interest.
4) Level 3: Conditional Automation. At Level 3, drivers can disengage from the act of driving, but only in specific situations.
Conditions could be limited to certain vehicle speeds, road types and weather conditions. But because drivers can apply their
focus to some other task, such as looking at a phone or newspaper, this is generally considered the initial entry point into autonomous driving. Nevertheless, the driver is expected to take over when the system requests it. For example, features such as traffic jam pilot mean that drivers can sit back and relax while the system handles it all: acceleration, steering and braking. In
stop-and-go traffic, the vehicle sends an alert to the driver to regain control when the vehicle gets through the traffic jam and
vehicle speed increases. The vehicle must also monitor the driver’s state to ensure that the driver resumes control, and be able
to come to a safe stop if the driver does not.
5) Level 4: High Automation. At this level, the vehicle’s autonomous driving system is fully capable of monitoring the driving
environment and handling all driving functions for routine routes and conditions defined within its operational design domain
(ODD). The vehicle may alert the driver that it is reaching its operational limits if there is, say, an environmental condition that
requires a human in control, such as heavy snow. If the driver does not respond, it will secure the vehicle automatically.
6) Level 5: Full Automation. Level 5-capable vehicles are fully autonomous. No driver is required behind the wheel at all. In fact,
Level 5 vehicles might not even have a steering wheel or gas/brake pedals. Level 5 vehicles could have “smart cabins” so that
passengers can issue voice commands to choose a destination or set cabin conditions such as temperature or choice of media. In
April 2021, the SAE published an update to its taxonomy to clarify that Levels 0-2 are “driver support features” because the
driver is still heavily involved with the vehicle operation, while Levels 3-5 are “automated driving features.” Each level of
automation requires additional layers of sensors, as the vehicles increasingly assume functions previously controlled by the
driver. For example, a Level 1 vehicle might only have one radar and one camera. A Level 5 vehicle, which must be able to
navigate any environment it encounters, will require full 360-degree sensing across multiple sensor types.
Fig 2.1 Schematic for distance measurement with the LiDAR principle. The transmitter uses a laser and the receiver a photodiode. [Courtesy: Vehicular Electronics Laboratory, Clemson University]
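Fig 2.1 illustrates a time-of-flight measurement: the laser emits a short pulse, the photodiode detects its reflection, and the distance is half the round-trip time multiplied by the speed of light. The snippet below is a minimal sketch of that calculation; the function name and the example timing value are illustrative assumptions rather than details taken from the figure.

// Minimal time-of-flight sketch for the ranging principle in Fig 2.1 (C++).
#include <iostream>

// Distance = (speed of light * round-trip time) / 2.
double distance_from_tof(double round_trip_seconds) {
    constexpr double kSpeedOfLight = 299792458.0;  // metres per second
    return kSpeedOfLight * round_trip_seconds / 2.0;
}

int main() {
    // A 400 ns round trip corresponds to a target roughly 60 m away,
    // consistent with the range quoted in the abstract.
    std::cout << distance_from_tof(400e-9) << " m\n";  // prints ~59.96
}

Doubling the target distance doubles the round-trip time, so the achievable ranging precision depends directly on how finely the receiver electronics can resolve time.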
Types of LiDAR Systems
LiDAR systems are divided into two types based on their functionality: Airborne LiDAR and Terrestrial LiDAR. Airborne LiDAR is installed on a helicopter or drone for collecting data. As soon as it is activated, Airborne LiDAR emits light towards the ground surface; the light returns to the sensor immediately after hitting an object, giving an exact measurement of its distance. Airborne LiDAR is further divided into two types, Topographic LiDAR and Bathymetric LiDAR. Unlike Airborne, Terrestrial LiDAR systems are installed on moving vehicles or tripods on the earth's surface for collecting accurate data points. These are quite common for observing highways, analyzing infrastructure or even collecting point clouds from the inside and outside of buildings. Terrestrial LiDAR systems come in two types, Mobile LiDAR and Static LiDAR. Before last year I did not have the faintest clue of what LiDAR is, what it does and why it is among the technologies decisively shaping the future. It was while researching autonomous cars that I got to know about LiDAR, and my immediate reflex was that LiDAR seemed to be a lesser-known cousin of the famous Radar. A nondescript member, but in the family after all! And now this obscure cousin was striving to carve a niche away from the shadow of its more distinguished peer, I thought, upon learning that LiDAR is being used in everything from laser scanning to 3D modelling to sensors. No, LiDAR is not the cousin of ‘Big Brother’ Radar. But I want to emphasize how deeply the term Radar has been etched in our minds and imagination, so the very first thing that anyone who has not heard of LiDAR would relate it to is invariably Radar. LiDAR, as we all know, stands for Light Detection and Ranging. It appears to be an acronym, just as Radar is for Radio Detection and Ranging. Even someone who flunked his physics tests would confidently argue that instead of radio waves, LiDAR uses light waves (not at all incorrect reasoning!) and that both are apparently acronyms. But apparently is the determiner here, and as the old wisdom goes, appearances are often deceptive. LiDAR is not a short form but rather a combination of two different words, what is called a portmanteau, like motel (motor + hotel), broadcast (broad + cast) or Brexit (Britain + exit). LiDAR was originally coined as Light + Radar, so it is a portmanteau rather than an acronym. So while Radar is not the big brother or a cousin of LiDAR, etymologically the two are literally inseparable. What is even more interesting is that the full form of LiDAR was conceived many years later; and when the full form was decided, after extensive research into its operational phenomenon, it fitted the original term, which had simply been made by combining two words. Facts are stranger than fiction.
and effectively imputing the incomplete road point cloud data that are induced by obstacle vehicles, and outperforms other interpolation algorithms and machine learning algorithms. The ApolloScape: In this work, they present a large-scale comprehensive dataset of street views. This dataset contains 1) higher scene complexities than existing datasets; 2) 2D/3D annotations and pose information; 3) various annotated lane markings; 4) video frames with instance-level annotations. In the future, they will first enlarge the dataset to one million annotated video frames with more diversified conditions, including snow, rain and foggy environments. Second, they plan to mount stereo cameras and a panoramic camera system in the near future to generate depth maps and panoramic images. In the current release, the depth information for the moving objects is still missing; they would like to produce complete depth information for both the static background and moving objects.
6) Argoverse: They focus on the ADE and FDE for a prediction horizon of 3 seconds to understand which baselines are less impacted by accumulating errors (a short sketch of how ADE and FDE are computed follows this list). Constant Velocity is outperformed by all the other baselines because it cannot capture typical driving behaviours like acceleration, deceleration, turns, etc. NN+map has lower ADE and FDE than NN because it is leveraging useful cues from the vector map. NN+map has lower error than NN+map(oracle) as well, emphasizing the multimodal nature of predictions. LSTM ED does better than NN. LSTM ED+social performs similar to LSTM ED, implying that the social context does not add significant value to forecasting. A similar observation was made on KITTI in DESIRE, wherein their model with social interaction couldn't outperform the one without it. They observe that LSTM ED+map outperforms all the other baselines for a prediction horizon of 3 sec. This proves the importance of having a vector map for distant future prediction and making multimodal predictions. Moreover, NN+map has a lower FDE than LSTM ED+social and LSTM ED for the higher prediction horizon (3 secs). UC Berkeley: In this work, they presented BDD100K, a large-scale driving video dataset with extensive annotations for heterogeneous tasks. They built a benchmark for heterogeneous multitask learning where the tasks have various prediction structures and serve different aspects of a complete driving system. Their experiments provided extensive analysis of different multitask learning scenarios: homogeneous multitask learning and cascaded multitask learning. The results presented interesting findings about allocating the annotation budget in multitask learning. They hope their work can foster future studies on heterogeneous multitask learning and shed light in this important direction.
7) Carnegie Mellon University: In this paper, they studied how the choice of sensor configuration parameters and various environmental factors affect the performance of visual localization. They conducted an extensive series of experiments using both forward-facing cameras and virtual cameras extracted from panoramic imagery. Using an information-theoretic approach, they established a relationship between the information content of image regions and the usefulness of those regions for localization. Their findings reflect the intuition that the sides of the road provide the most benefit for localization algorithms. Interestingly, many visual localization and mapping algorithms focus primarily on forward-looking Lidar.
8) Ifte Khairul Alam Bhuiyan: The report presented the Light Detection and Ranging (LiDAR) sensor and its application in autonomous driving as a potential companion for future road safety. Velodyne's HDL-64E is a high-definition LiDAR capable of acquiring a large volume of high-resolution 3-D data. The HDL-64E features a unique combination of high resolution, broad field of view, high point-cloud density and an output data rate superior to any LiDAR sensor available in the marketplace today. It is an ideal building block for applications such as autonomous vehicle navigation, infrastructure surveying, mapping display and retrieval, as well as many other applications requiring 3-D data collection. Those of us fortunate enough to be part of the ADAS and autonomous driving markets understand the critical role that LiDAR will play in vehicles of the future.
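To make the forecasting metrics referenced in the Argoverse discussion concrete, the following is a minimal sketch of how ADE (average displacement error) and FDE (final displacement error) can be computed for a single predicted trajectory. The Point struct, the function names and the sample coordinates are illustrative assumptions and are not taken from the Argoverse codebase.

// Minimal sketch of the ADE / FDE trajectory-forecasting metrics (C++).
#include <cmath>
#include <iostream>
#include <vector>

struct Point { double x, y; };

// Average Displacement Error: mean L2 distance over all prediction steps.
double ade(const std::vector<Point>& pred, const std::vector<Point>& gt) {
    double sum = 0.0;
    for (size_t i = 0; i < pred.size(); ++i)
        sum += std::hypot(pred[i].x - gt[i].x, pred[i].y - gt[i].y);
    return sum / pred.size();
}

// Final Displacement Error: L2 distance at the last prediction step only.
double fde(const std::vector<Point>& pred, const std::vector<Point>& gt) {
    return std::hypot(pred.back().x - gt.back().x, pred.back().y - gt.back().y);
}

int main() {
    // A 3-second horizon sampled at 1 Hz (3 points); the numbers are purely illustrative.
    std::vector<Point> gt   = {{1.0, 0.0}, {2.2, 0.1}, {3.6, 0.3}};
    std::vector<Point> pred = {{1.1, 0.0}, {2.0, 0.2}, {3.0, 0.6}};
    std::cout << "ADE = " << ade(pred, gt) << " m, FDE = " << fde(pred, gt) << " m\n";
}

A lower ADE means the prediction tracks the whole ground-truth trajectory closely, while FDE isolates the error at the prediction horizon, which is why the two metrics can rank baselines differently.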
V. HARDWARE COMPONENTS
A. Atmega 328p
B. Ultra-Sonic sensor
C. LiDAR (range :10 to 20 m)
D. Servo Motor
E. Motor driver
F. DC motors
G. Arduino
H. LED
I. Push Buttons
J. GPS module
The ultrasonic sensor's trigger is connected to A2 and its echo to A3. The servo motor has three connections, VCC, ground and signal: VCC and ground are connected to the corresponding pins on the Arduino, and the signal pin is connected to digital pin 7. The LiDAR has four connections: one pin is connected to the motor that controls the LiDAR's rotation speed, and its Tx is connected to the Arduino's digital Rx. The LiDAR has only a transmission pin because reception happens on the serial monitor, which shares the Arduino's Tx. The motor driver drives the 9 V motors. There are two ground pins on the driver: one ground is connected to the Uno and the other is connected to the ultrasonic sensor. Four pins, IN1, IN2, IN3 and IN4, are connected to Uno pins 9, 10, 11 and 12; IN1 and IN2 are used to control the left motor and IN3 and IN4 the right motor.
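The following is a minimal Arduino-style sketch that mirrors the pin assignments described above (ultrasonic trigger on A2, echo on A3, servo signal on digital pin 7, motor driver IN1-IN4 on pins 9-12, LiDAR data arriving on the serial Rx). The obstacle-avoidance behaviour, the serial baud rate and the distance threshold are illustrative assumptions, not the project's actual firmware.

// Minimal Arduino sketch reflecting the wiring described above (assumed behaviour).
#include <Servo.h>

const int TRIG_PIN = A2;        // ultrasonic trigger
const int ECHO_PIN = A3;        // ultrasonic echo
const int SERVO_PIN = 7;        // servo signal
const int IN1 = 9, IN2 = 10;    // left motor inputs on the motor driver
const int IN3 = 11, IN4 = 12;   // right motor inputs on the motor driver

Servo steering;

void setup() {
  Serial.begin(115200);         // LiDAR Tx feeds the Arduino Rx (baud rate assumed)
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT);
  steering.attach(SERVO_PIN);
}

// Read a distance in centimetres from the ultrasonic sensor.
long readUltrasonicCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // echo time, 30 ms timeout
  return duration / 58;                              // convert microseconds to cm
}

void driveForward() {
  digitalWrite(IN1, HIGH); digitalWrite(IN2, LOW);   // left motor forward
  digitalWrite(IN3, HIGH); digitalWrite(IN4, LOW);   // right motor forward
}

void stopMotors() {
  digitalWrite(IN1, LOW); digitalWrite(IN2, LOW);
  digitalWrite(IN3, LOW); digitalWrite(IN4, LOW);
}

void loop() {
  // LiDAR frames would be parsed from Serial here; the protocol is device-specific
  // and omitted in this sketch.
  long cm = readUltrasonicCm();
  if (cm > 0 && cm < 20) {      // obstacle closer than 20 cm (assumed threshold)
    stopMotors();
    steering.write(120);        // steer away (placeholder angle)
  } else {
    steering.write(90);         // straight ahead
    driveForward();
  }
  delay(50);
}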
VIII. BENEFITS
The map created by a LiDAR sensor is important for a self-driving vehicle, as it helps the car “see” the world around it. LiDAR
technology provides more depth and detail than other solutions, such as radar and cameras. It can also work at night. A LiDAR-created map from supplier Ushr, updated quarterly, provides the rest of the vehicle’s sensors and computers with the data to drive confidently without much driver intervention. LiDAR fills in the gaps where other sensors struggle. For example, radar is used
to detect objects that surround a car, and can determine how far away they are and how fast they’re moving. This is why automakers
use radar for parking sensors, blind spot monitors, and adaptive cruise control, but these same sensors struggle when it comes to
detecting the exact position, size and shape of an object, elements that are vital for self-driving features like pedestrian, cyclist, and
animal detection. Additionally, cameras are used for safety and driver assist systems, as they can recognize objects pretty well, but
struggle in low light and with depth perception, where LiDAR fares better.
IX. DISCUSSION
As the LiDAR sensor spins on top of the vehicle, the digital data are collected as point clouds of the surroundings. The points that come from a single emitter-detector pair over flat ground appear as a continuous circle. Fig. 1 (Digital Sensor Recorder Display of HDL-64E Data) shows such a 3D reconstruction recorded with the HDL-64E, and there are no breaks in the circular data around the car in any of the point clouds. This indicates that the laser pulse repetition rate and the upper-block to lower-block duty cycles (emitter and detector arrays) are configured properly for the sensor. A repetition rate that is too slow would result in each of the circles appearing as a dotted line. The only areas of blanking, where there is no data, are between the point clouds or where a shadowing effect occurs, i.e., where a target is in the optical transmit path and thus no information can be obtained from behind the target. The blanking behind the rear bed of the car is an example of this shadowing effect.
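The circular pattern described above follows from simple geometry: a beam fired at a fixed downward elevation angle from a fixed mounting height intersects flat ground at a constant radius as the sensor spins. The short sketch below illustrates that relationship; the mounting height and the beam angle are assumed values for illustration, not HDL-64E specifications.

// Ring radius traced on flat ground by one downward-angled, spinning beam (C++).
#include <cmath>
#include <iostream>

int main() {
    const double kPi = 3.14159265358979323846;
    const double sensor_height_m = 1.8;     // assumed mounting height above the road
    const double elevation_deg   = 10.0;    // assumed downward beam angle
    const double elevation_rad   = elevation_deg * kPi / 180.0;

    // The beam hits the ground at height / tan(angle) metres from the sensor,
    // so every firing of this emitter-detector pair lands on the same circle.
    double ring_radius_m = sensor_height_m / std::tan(elevation_rad);
    std::cout << "Ring radius: " << ring_radius_m << " m\n";  // ~10.2 m
}

Beams with shallower angles trace larger rings, and a pulse repetition rate that is too slow would leave visible gaps between consecutive points on each ring, which is exactly the dotted-line failure mode described above.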
X. RESULT
The growing population is a major transportation issue today, so we have developed an automated driving system which drives the car automatically. Our goal is to help prevent traffic accidents and save people’s time by fundamentally changing how cars are used. We have developed a technology that drives the car automatically and designed an automated vehicle focused on giving the human driver an automated driving experience. This car is capable of sensing its surroundings, navigating, and fulfilling human transportation needs without any human input. LiDAR is used for sensing the surroundings: it continuously tracks the environment and, if any obstacle is detected, the vehicle senses it and steers around to avoid the obstacle. The advantages of an autonomous car are fewer traffic collisions, increased reliability, increased roadway capacity and reduced traffic congestion. We believe that, by overcoming the current obstacles, the autonomous car will shortly become a reality and a necessity of life, since human life has to be secured with safe, efficient, cost-effective and comfortable means of transport.
REFERENCES
[1] asirt.org/Initiatives/Road-Users/Road Safety Facts/Road-Crash-Statistics.
[2] S. Sivaraman and M. M. Trivedi, “Looking at vehicles on the road: a survey of vision-based vehicle detection, tracking, and behavior analysis,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 4, pp. 1773–1795, 2013.
[3] Autonomous Vehicles, Clemson University Vehicular Electronics Lab
[4] National Oceanic and Atmospheric Administration (NOAA) – 2013
[5] Nicolas, C., Inoue, D., Matsubara, H., Ichikawa, T. and Soga, M. (2015), Development of Automotive LIDAR. Electron Comm Jpn, 98: 28–33.
doi:10.1002/ecj.11672
[6] J. Carter, K. Schmidt, et al., “An Introduction to Lidar Technology, Data, and Applications”, National Oceanic and Atmospheric Administration (NOAA), US,
June 2013.
[7] Weitkamp, Claus, ed. Lidar: range-resolved optical remote sensing of the atmosphere. Vol. 102. Springer Science & Business, 2006.
[8] “High Definition LiDAR Sensor for 3D Application”, Velodyne’s HDL-64E, White Paper/Oct 2007
[9] P. McCormack, “LIDAR System Design for Automotive / Industrial / Military Applications”, Texas Instruments.
[10] LiDAR News Magazine, Vol. 4 No. 6, Copyright 2014
[11] LIDAR for Automotive Application, First Sensor, White Paper, Jan 2012
[12] R. H. Rasshofer and K. Gresser, “Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions” Advances in Radio Science, 3, 205–
209, 2005
[13] Velodyne’s Product manual, LiDAR Comparison chart; downloaded from website in May 2016.
[14] Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334.
[15] https://www.aptiv.com/en/insights/article/what-are-the-levels-of-automated-driving
[16] https://www.truebil.com/blog/the-history-of-self-driving-cars
[17] https://www.automotiveworld.com/articles/lidars-for-self-driving-vehicles-a-technological-arms-race/