2014
Simultaneous Localization and Mapping (SLAM) algorithms require huge computational power. Most state-of-the-art implementations employ dedicated computational machines which in most cases are located off-board the robotic platform. In addition, as soon as the environment becomes large, the update rate of such algorithms is no longer suitable for real-time control. The latest implementations rely on visual SLAM, adopting a reduced number of features. However, these methods are not usable in environments with low visibility or that are completely dark. We present here a SLAM algorithm designed for mobile robots requiring reliable solutions even in harsh working conditions, where the presence of dust and darkness can compromise visibility. The algorithm has been optimized for the embedded CPUs commonly employed in lightweight robotic platforms. In this paper the proposed algorithm is introduced and its feasibility as a SLAM solution for embedded systems is proved both by a s...
2017
This paper presents a low-cost mobile robot platform to solve the SLAM problem for indoor mobile robotics applications. It deals with the necessity of building a map of the environment while simultaneously determining the location of the robot within the map. In order to solve SLAM, this work uses an existing tool called GMapping, which is based on an RBPF (Rao-Blackwellised Particle Filter) approach and is provided by ROS. The GMapping tool offers laser-based SLAM for building a map and was originally based on laser scan data and odometry information. In this work, we utilize the Microsoft Kinect sensor, a gyroscope, and wheel encoders on a mobile robot. The paper demonstrates the feasibility of the proposed approach of implementing SLAM on a small, lightweight and low-cost embedded single-board computer instead of a more expensive full-fledged PC or laptop.
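To make the sensor substitution concrete, the following is a minimal sketch of how one row of Kinect depth data can be flattened into laser-scan-style range/bearing pairs for a laser-based backend such as GMapping. The camera intrinsics used here are illustrative placeholders, and the paper itself would more likely rely on a standard ROS conversion node rather than hand-written code.

```python
import numpy as np

def depth_row_to_ranges(depth_row_m, fx, cx):
    """Convert one row of a depth image (metres) into planar range/bearing pairs.

    depth_row_m : 1-D array of depth values along the image row (metres)
    fx, cx      : horizontal focal length and principal point in pixels
                  (hypothetical intrinsics; real values come from camera calibration)
    """
    u = np.arange(depth_row_m.shape[0])          # pixel column indices
    bearings = np.arctan2(u - cx, fx)            # angle of each column's viewing ray
    # Range along the ray = depth / cos(bearing) for a forward-looking camera
    ranges = depth_row_m / np.cos(bearings)
    return bearings, ranges

# Example: a synthetic flat wall 2 m in front of a 640-pixel-wide sensor
bearings, ranges = depth_row_to_ranges(np.full(640, 2.0), fx=525.0, cx=319.5)
print(ranges.min(), ranges.max())   # ranges grow toward the image edges
```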
Journal of Engineering and Applied Science, 2024
This paper presents a low-cost system for simultaneous localization and mapping (SLAM) in unknown indoor environments. The system is based on a low-cost mobile robot platform, designed and fabricated in our control laboratory. The Rao-Blackwellized particle filter algorithm is used for the SLAM computations, an Xbox 360 Kinect module is utilized for stereo-camera imaging, and a Linux-based microcomputer (Raspberry Pi 3) is used as the main onboard processing unit. An Arduino board controls the DC motors of the mobile robot wheels. The Raspberry Pi unit is wirelessly connected to a ground station machine that processes the information sent by the robot to build the environment map and estimate its pose. ROS (Robot Operating System) is used for map visualization, data handling, and communication between different software nodes. The system has been tested virtually on a simulator and in real indoor environments, and has successfully identified objects larger than 30 cm × 30 cm × 30 cm and added them to the map. It also shows promising capability to carry out autonomous missions independently, without aid from any external sensors and at a fraction of the cost of similar systems based on lidars.
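As a concrete illustration of how wheel-encoder data on such a platform is typically turned into the odometry that feeds the particle filter, here is a minimal differential-drive dead-reckoning sketch. The encoder resolution and wheel geometry are illustrative placeholders, not values from the paper.

```python
import math

# Illustrative robot geometry (placeholders, not the paper's values)
TICKS_PER_REV = 360        # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.035       # metres
WHEEL_BASE = 0.20          # distance between the two wheels, metres

def odometry_update(pose, d_ticks_left, d_ticks_right):
    """Integrate encoder tick increments into a new (x, y, theta) pose."""
    x, y, theta = pose
    # Distance travelled by each wheel since the last update
    dl = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    ds = (dl + dr) / 2.0               # forward displacement of the robot centre
    dtheta = (dr - dl) / WHEEL_BASE    # change in heading
    # Midpoint integration of the unicycle model
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta = (theta + dtheta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
pose = odometry_update(pose, d_ticks_left=180, d_ticks_right=200)
print(pose)
```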
IJAIT (International Journal of Applied Information Technology)
Much of the research in SLAM targets desktop or laptop computers. Mounted on a robot platform such as the Pioneer, this high-computational-power hardware does all the SLAM processing. Other SLAM algorithms exploit GPU power to provide detailed map reconstruction. Yet it is desirable to deploy SLAM on a small robot without the advantage of high-performance hardware. A single-board computer with a limited power supply and low computational power is frequently the main board available in a small robot. Therefore, it is important to consider SLAM design solutions that target such systems. With this in mind, the current work presents a survey of SLAM on low-resource hardware. The main question to be answered is "How do researchers deal with hardware limitations when implementing SLAM?" A classification based on the methods used to tackle the problem is presented as the conclusion of this paper.
Journal of Robotics, 2011
Simultaneous Localization and Mapping (SLAM) is an important technique for robotic system navigation. Due to the high complexity of the algorithm, SLAM usually needs long computational time or a large amount of memory to achieve accurate results. In this paper, we present a lightweight Rao-Blackwellized particle filter- (RBPF-) based SLAM algorithm for indoor environments, which uses line segments extracted from the laser range finder as the fundamental map structure so as to reduce memory usage. Since most major structures of indoor environments are usually orthogonal to each other, we can also efficiently increase the accuracy and reduce the complexity of our algorithm by exploiting this orthogonal property of line segments; that is, we treat line segments that are parallel or perpendicular to each other in a special way when calculating the importance weight of each particle. Experimental results show that our work is capable of drawing maps in complex indoor environments, nee...
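The abstract does not give the exact weighting formula, so the sketch below only illustrates the orthogonality idea it describes: an observed line whose direction is nearly parallel or perpendicular to a map line is scored against the nearest multiple of 90°, and the resulting residual feeds a Gaussian factor of the particle's importance weight. All tolerances and function names are hypothetical.

```python
import numpy as np

def angle_residual(observed_angle, map_angle, ortho_tol=np.radians(10.0)):
    """Residual between an observed line direction and a map line direction,
    exploiting the indoor orthogonality assumption (angles in radians)."""
    diff = (observed_angle - map_angle) % (np.pi / 2)      # fold onto [0, 90 deg)
    folded = min(diff, np.pi / 2 - diff)                   # distance to nearest multiple of 90 deg
    if folded <= ortho_tol:
        return folded
    # Not close to an orthogonal configuration: fall back to the plain angular difference
    raw = (observed_angle - map_angle) % np.pi
    return min(raw, np.pi - raw)

def line_weight(observed_angle, map_angle, sigma=np.radians(5.0)):
    """Gaussian score used as one factor of a particle's importance weight."""
    r = angle_residual(observed_angle, map_angle)
    return float(np.exp(-0.5 * (r / sigma) ** 2))

print(line_weight(np.radians(91.0), np.radians(0.5)))   # near-perpendicular: high score
print(line_weight(np.radians(40.0), np.radians(0.5)))   # oblique: low score
```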
Robotics, 2022
Simultaneous localization and mapping (SLAM) techniques are widely researched, since they allow the simultaneous creation of a map and estimation of the sensors' pose in an unknown environment. Visual-based SLAM techniques play a significant role in this field, as they are based on a low-cost and small sensor system, which gives them an advantage over other sensor-based SLAM techniques. The literature presents different approaches and methods to implement visual-based SLAM systems. Among this variety of publications, a beginner in this domain may have difficulty identifying and analyzing the main algorithms and selecting the most appropriate one according to his or her project constraints. Therefore, we present the three main visual-based SLAM approaches (visual-only, visual-inertial, and RGB-D SLAM), providing a review of the main algorithms of each approach through diagrams and flowcharts, and highlighting the main advantages and disadvantages of each technique. Further...
GI_Forum, 2018
This paper introduces a low-cost Simultaneous Localization And Mapping (SLAM) implementation for generating geodata for human-navigable maps. In contrast to prevalent thinking, we maintain that navigation by people who are not mobility-impaired does not need accurate maps down to millimetres or even centimetres. Basically, there is a need only to map the boundaries of spaces and to highlight walkable places and areas of potential decisions. The SLAM system presented here consists of an Arduino-based robot and controlling SLAMTerminal software. A case study conducted at the University of Augsburg, Germany shows that the proposed SLAM implementation is capable of producing a map suitable for helping pedestrians to navigate.
This paper presents a complete Simultaneous Localization and Mapping (SLAM) solution for indoor mobile robots, addressing feature extraction, autonomous exploration, and navigation using the continuously updated map. The platform used is a Pioneer PeopleBot equipped with a SICK Laser Measurement System (LMS) and odometry. Our algorithm uses the Hough Transform to extract the major representative features of an indoor environment, such as lines and edges. Localization is accomplished using a Relative Filter, which depends directly on the perception model for the correction of error in the robot state. Our map for localization is in the form of a landmark network, whereas for navigation we use an occupancy grid. The resulting algorithm makes the approach computationally lightweight and easy to implement. Finally, we present the results of testing the algorithm in Player/Stage as well as on the PeopleBot in our Robotics and Control Lab.
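To illustrate the kind of line extraction described here (not the paper's exact pipeline, and without its Relative Filter or landmark network), the following sketch runs OpenCV's probabilistic Hough transform on a synthetic binary scan image; all thresholds are illustrative.

```python
import numpy as np
import cv2  # OpenCV

# Synthetic binary "scan image": two walls meeting at a corner
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (20, 180), (180, 180), 255, 1)   # horizontal wall
cv2.line(img, (180, 180), (180, 20), 255, 1)   # vertical wall

# Probabilistic Hough transform: returns endpoints of detected line segments
segments = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=5)

for seg in segments:
    x1, y1, x2, y2 = seg[0]
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    print(f"segment ({x1},{y1})-({x2},{y2}), angle {angle:.1f} deg")
```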
International Journal of Innovative Computing
To aid in robot navigation and environment analysis, visual SLAM systems process visual data. AMRs (autonomous mobile robots) and AGVs (autonomous guided vehicles) have been gaining popularity in recent years. These robots depend significantly on simultaneous localization and mapping (SLAM) technology to keep the factory floor free of accidents. vSLAM estimates the precise position and orientation of a sensor relative to its environment while mapping and navigating the region around it. SLAM algorithms can be used in various applications, including self-driving vehicles, mobile robots, drones, etc. Visual SLAM does not refer to a particular set of methods or software. This paper reviews some of the issues and challenges facing SLAM technology in autonomous robot applications and draws conclusions.
Journal of Intelligent Systems, 2015
We address the problems of localization, mapping, and guidance for robots with limited computational resources by combining vision with the metrical information given by the robot's odometry. In this paper, we propose a novel, lightweight, and robust topo-metric SLAM framework using appearance-based visual loop-closure detection enhanced with odometry. The main advantage of this combination is that the odometry makes loop-closure detection more accurate and reactive, while loop-closure detection enables the long-term use of odometry for guidance by correcting the drift. The guidance approach is based on qualitative localization using vision and odometry and is robust to visual sensor occlusions or changes in the scene. The resulting framework is incremental, real-time, and based on cheap sensors provided on many robots (a camera and odometry encoders). This approach is moreover particularly well suited for low-power robots, as it is not dependent on the image-processing frequency and latency and can therefore be applied using remote processing. The algorithm has been validated on a Pioneer P3DX mobile robot in indoor environments and its robustness is demonstrated experimentally for a large range of odometry noise levels.
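The paper's qualitative localization scheme is not reproduced here; the sketch below only illustrates the general idea of using a detected loop closure to spread accumulated odometry drift back along the chain of past poses. The linear distribution and function name are assumptions for illustration, not the authors' correction scheme.

```python
import numpy as np

def distribute_drift(poses_xy, closure_error_xy):
    """Linearly distribute the position error revealed by a loop closure
    over the chain of odometry poses (a simple illustration only).

    poses_xy         : (N, 2) array of odometry positions, oldest first
    closure_error_xy : 2-vector, (true position of last pose) - (estimated position)
    """
    n = len(poses_xy)
    fractions = np.linspace(0.0, 1.0, n).reshape(-1, 1)   # 0 at start, 1 at the loop closure
    return poses_xy + fractions * np.asarray(closure_error_xy)

# Drifting straight-line trajectory; the loop closure says the last pose is off by (0, -0.5)
poses = np.column_stack([np.linspace(0, 10, 11), np.linspace(0, 0.5, 11)])
corrected = distribute_drift(poses, closure_error_xy=(0.0, -0.5))
print(corrected[-1])   # last pose pulled back to (10, 0)
```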
ACECS, 2019
Visual Simultaneous Localization and Mapping (VSLAM) has seen considerable interest among the research community in recent years due to its capability to make the robot truly independent in navigation. In this paper we present a VSLAM framework to address the challenge of varying light intensity in an environment. The framework addresses this challenge by introducing an image filtering algorithm, together with the Extended Kalman Filter (EKF) algorithm for localization and mapping and the A* algorithm for navigation, into the VSLAM framework to improve the robustness of the system in a static environment. The experiments in this research were performed in simulation. Experimental results show a root mean square error (RMSE) of 0.13 m, which is small when compared with other SLAM systems from the literature. The inclusion of an image filtering algorithm has enabled the VSLAM system to navigate in a noisy environment.
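As a small, self-contained illustration of the navigation component mentioned here (only the A* part, on a binary occupancy grid with a Manhattan heuristic; the image filter and EKF are not reproduced), consider the following sketch.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected binary occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]                   # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:
            continue                                          # already expanded
        came_from[current] = parent
        if current == goal:
            path = []
            while current is not None:                        # walk parents back to the start
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), current))
    return None   # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # detours around the occupied middle row
```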