The ORB (Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features)) feature extractor is the state of the art in wide-baseline matching with sparse image features for robotic vision. All previous implementations have employed general-purpose computing hardware, such as CPUs and GPUs. This work investigates the applicability of special-purpose computing hardware, in the form of Field-Programmable Gate Arrays (FPGAs), to the acceleration of this problem. FPGAs offer lower power consumption and higher frame rates than general-purpose hardware. A working implementation on an Altera Cyclone II (a low-cost FPGA suitable for development work, and available with a camera and screen interface) is described.
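The FAST detector at the heart of ORB can be sketched in a few lines. The following is a minimal pure-Python illustration of the FAST-9 segment test at a single pixel, not the paper's FPGA implementation; a real ORB front end adds non-maximum suppression, an intensity-centroid orientation, and the BRIEF descriptor.

```python
# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0,-3),(1,-3),(2,-2),(3,-1),(3,0),(3,1),(2,2),(1,3),
          (0,3),(-1,3),(-2,2),(-3,1),(-3,0),(-3,-1),(-2,-2),(-1,-3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """True if >= n contiguous circle pixels are all brighter than
    img[y][x] + t or all darker than img[y][x] - t."""
    p = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    labels = []
    for dx, dy in CIRCLE:
        q = img[y + dy][x + dx]
        labels.append(1 if q > p + t else (-1 if q < p - t else 0))
    # Look for a contiguous run of length n, wrapping around the circle.
    doubled = labels + labels
    for sign in (1, -1):
        run = 0
        for v in doubled:
            run = run + 1 if v == sign else 0
            if run >= n:
                return True
    return False

# Tiny synthetic image: a bright 8x8 block on a dark background.
img = [[200 if (r < 8 and c < 8) else 10 for c in range(16)] for r in range(16)]
print(is_fast_corner(img, 7, 7))   # corner of the block -> True
print(is_fast_corner(img, 4, 4))   # interior of the block -> False
```

The per-pixel test is branch-heavy but touches only 16 fixed neighbours, which is exactly why FAST maps well to streaming FPGA line buffers.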
Proceedings of the 11th International Conference on Distributed Smart Cameras, 2017
Smart cameras are image/video acquisition devices that integrate image processing algorithms close to the image sensor, so they can deliver high-level information to a host computer or high-level decision process. In this context, a central issue is the implementation of complex and computationally intensive computer vision algorithms inside the camera fabric. For low-level processing, FPGA devices are excellent candidates because they support data parallelism with high data throughput. One computer vision algorithm highly promising for FPGA-based smart cameras is feature matching. Unfortunately, most previous feature matching formulations have inefficient FPGA implementations or deliver relatively poor information about the observed scene. In this work, we introduce a new feature-matching algorithm that aims for dense feature matching and, at the same time, a straightforward FPGA implementation. We propose a new mathematical formulation that addresses the feature matching task as a feature tracking problem. We demonstrate that our algorithmic formulation delivers robust feature matching with low mathematical complexity and obtains accuracy superior to previous algorithmic formulations. An FPGA architecture is laid out, and hardware acceleration strategies are discussed. Finally, we apply our feature matching algorithm in a monocular-SLAM system and show that it provides promising results in real-world applications.
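The core idea of matching-as-tracking can be illustrated very simply: for a feature in frame A, search a small window in frame B for the patch with minimum sum-of-squared-differences (SSD). This is only a hedged sketch of the general technique; the paper's actual formulation and its FPGA datapath are more elaborate, and all names below are illustrative.

```python
def ssd(a, b, ax, ay, bx, by, r=1):
    """SSD between (2r+1)x(2r+1) patches centred at (ax,ay) in a and (bx,by) in b."""
    return sum((a[ay+dy][ax+dx] - b[by+dy][bx+dx]) ** 2
               for dy in range(-r, r+1) for dx in range(-r, r+1))

def track(a, b, x, y, search=3, r=1):
    """Return the position in b that best matches the patch at (x, y) in a."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = ssd(a, b, x, y, x + dx, y + dy, r)
            if best is None or cost < best[0]:
                best = (cost, x + dx, y + dy)
    return best[1], best[2]

# Frame B is frame A shifted right by 2 pixels.
W, H = 12, 12
a = [[(3 * r + 5 * c) % 97 for c in range(W)] for r in range(H)]
b = [[a[r][max(c - 2, 0)] for c in range(W)] for r in range(H)]
print(track(a, b, 5, 5))   # -> (7, 5)
```

The brute-force window search is what makes this formulation FPGA-friendly: each candidate cost is an independent fixed-size accumulation that can be evaluated in parallel.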
2009
We present an implementation of Speeded Up Robust Features (SURF) on a Field Programmable Gate Array (FPGA). The SURF algorithm extracts salient points from an image and computes descriptors of their surroundings that are invariant to scale, rotation and illumination changes. The interest point detection and feature descriptor extraction algorithm is often used as the first stage in autonomous robot navigation, object recognition, tracking, etc. However, detection and extraction are computationally demanding and therefore cannot be used in systems with limited computational power. We took advantage of the algorithm's natural parallelism and implemented its most demanding parts in FPGA logic. Several modifications of the original algorithm have been made to increase its suitability for FPGA implementation. Experiments show that the FPGA implementation is comparable in terms of precision, speed and repeatability, but outperforms the CPU and GPU implementations in terms of power consumption. Our implementation is intended to be used in embedded systems which are limited in computational power, or as a first-stage preprocessing block, which allows the computational resources to focus on higher-level algorithms.
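SURF's speed comes from box filters evaluated on an integral image, where any rectangle sum costs four lookups regardless of its size. A minimal pure-Python sketch of that core primitive (variable names are illustrative, not taken from the paper's implementation):

```python
def integral_image(img):
    """ii[y][x] = sum of img over rows < y and cols < x (one-pixel zero border)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y), in O(1)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(box_sum(ii, 0, 0, 3, 3))  # -> 45 (whole image)
print(box_sum(ii, 1, 1, 2, 2))  # -> 28 (5+6+8+9)
```

Because every box filter reduces to four memory reads, the scale-space pyramid can be computed at any filter size without resampling, which suits an FPGA's fixed memory bandwidth budget.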
2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014
Recent developments in smartphones create an ideal platform for robotics and computer vision applications: they are small, powerful, embedded devices with low-power mobile CPUs. However, though the computational power of smartphones has increased substantially in recent years, they are still not capable of performing intense computer vision tasks in real time, at high frame rates and low latency. We present a combination of FPGA and mobile CPU to overcome the computational and latency limitations of mobile CPUs alone. With the FPGA as an additional layer between the image sensor and CPU, the system is capable of accelerating computer vision algorithms to real-time performance. Low-latency calculation allows for direct usage within control loops of mobile robots. A stereo camera setup with disparity estimation based on the semi-global matching algorithm is implemented as an accelerated example application. The system calculates dense disparity images with 752x480 pixels resolution at 60 frames per second. The overall latency of the disparity estimation is less than 2 milliseconds. The system is suitable for any mobile robot application due to its light weight and low power consumption.
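The semi-global matching (SGM) core is a dynamic-programming recurrence over disparity costs. Below is a hedged sketch of a single left-to-right aggregation path along one scanline; a real implementation like the one described sums 4-8 such paths over the whole image, and the P1/P2 penalties here are arbitrary placeholders.

```python
def sgm_path(cost, P1=1, P2=4):
    """cost[x][d]: matching cost at pixel x, disparity d.
    Returns aggregated costs L along one path (the standard SGM recurrence)."""
    n, D = len(cost), len(cost[0])
    L = [cost[0][:]]
    for x in range(1, n):
        prev = L[-1]
        m = min(prev)  # best previous cost over all disparities
        row = []
        for d in range(D):
            candidates = [prev[d],          # same disparity
                          m + P2]           # large disparity jump
            if d > 0:
                candidates.append(prev[d - 1] + P1)   # change by -1
            if d < D - 1:
                candidates.append(prev[d + 1] + P1)   # change by +1
            row.append(cost[x][d] + min(candidates) - m)
        L.append(row)
    return L

# Toy costs for 4 pixels and 3 disparity hypotheses: the data favours d=1.
cost = [[5, 0, 5],
        [4, 1, 5],
        [5, 0, 4],
        [6, 1, 5]]
L = sgm_path(cost)
print([row.index(min(row)) for row in L])  # winner-takes-all -> [1, 1, 1, 1]
```

The recurrence needs only the previous pixel's cost vector, so each path fits in a small on-chip buffer, which is why SGM is a popular FPGA target.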
Optical …, 2010
Recent research has focused on the development of mobile vision systems and algorithms suitable for very-large-scale integration implementation. These systems can be used in various applications. We propose a novel field-programmable gate array (FPGA)-based architecture for early vision. The central idea is to take into account the perceptual aspects of visual tasks inspired by biological vision systems: shape and color. For this reason, we propose an original approach based on a system implemented in an FPGA connected to a CMOS imager. The proposed algorithm implementation analysis and optimization methodology under resource constraints enables one to implement the algorithm on only one FPGA chip. To prove the proposed concept, the system was implemented and tested on an autonomous mobile platform. The implementation framework enables direct algorithm implementation in an application-specific integrated circuit. © 2010 Society of Photo-Optical Instrumentation Engineers.
RoboCup 2001: Robot Soccer World Cup V, 2002
A time-critical process in a real-time mobile robot application such as RoboCup is the determination of the robot position in the game field. Aiming at low cost and efficiency, this paper proposes the use of a field-programmable gate array device (FPGA) in the vision system of a robotic team. We describe the translation of well-known computer vision algorithms to VHDL and detail the design of a working prototype that includes image acquisition and processing. The CV algorithms used in the system include thresholding, edge detection and chain-code segmentation. Finally, we present results showing that an FPGA device provides hardware speed to user applications, delivering real-time speeds for image segmentation at an affordable cost. An efficiency comparison is made between the hardware-implemented and a software-implemented (C language) system using the same algorithms.
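The first two stages of the pipeline the paper maps to VHDL, thresholding followed by edge detection, can be sketched in software as below (chain-code segmentation omitted for brevity; the threshold value and toy image are illustrative, not from the paper):

```python
def threshold(img, t):
    """Binarise: 1 where intensity >= t, else 0."""
    return [[1 if p >= t else 0 for p in row] for row in img]

def edges(binary):
    """Mark foreground pixels with at least one background (or out-of-image)
    4-neighbour as boundary pixels."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x]:
                nb = [binary[ny][nx]
                      for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                      if 0 <= ny < h and 0 <= nx < w]
                out[y][x] = 1 if (0 in nb or len(nb) < 4) else 0
    return out

img = [[ 10,  20,  30,  20],
       [ 20, 200, 210,  30],
       [ 30, 220, 230,  20],
       [ 20,  30,  20,  10]]
b = threshold(img, 128)
for row in edges(b):
    print(row)
# All four foreground pixels lie on the boundary:
# [0, 0, 0, 0]
# [0, 1, 1, 0]
# [0, 1, 1, 0]
# [0, 0, 0, 0]
```

Both stages are purely local (a pixel and its 4-neighbourhood), so in VHDL they reduce to comparators fed from a few line buffers, one pixel per clock.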
The EyeBot M6 is the newest revision of an embedded system designed for the control of small mobile robots. Unlike previous revisions of the system, the EyeBot M6 features not only a 400 MHz CPU running a fully fledged operating system, but also a Xilinx FPGA accompanied by an SRAM and two cameras in a stereo setup. Recent advancements in FPGA fabrication have not only brought lower prices but also permit the implementation of large signal processing algorithms in FPGAs. The current revision is the first EyeBot that tries to exploit the increasing capabilities of FPGAs for image processing purposes on small robots. This project focuses both on the low-level interfacing between the FPGA and the CPU and on the internal memory bus architecture required for image processing purposes.
Sensors (Basel, Switzerland), 2018
Although some researchers have proposed Field Programmable Gate Array (FPGA) architectures for the Features from Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithms, these traditional architectures do not consider image data storage, so no image data can be reused by follow-up algorithms. This paper proposes a new FPGA architecture that considers the reuse of sub-image data. In the proposed architecture, a remainder-based method is first designed for reading the sub-image, and a FAST detector and a BRIEF descriptor are combined for corner detection and matching. Six pairs of satellite images with different textures, located in the Mentougou district, Beijing, China, are used to evaluate the performance of the proposed architecture. The ModelSim simulation results show that: (i) the proposed architecture is effective for sub-image reading from DDR3 at a minimum cost; (ii) the FPGA implemen...
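A BRIEF descriptor is simply a bit string of pairwise intensity comparisons at fixed random offsets around a keypoint, and matching reduces to a Hamming distance. The following is an illustrative sketch only, using 8 test pairs instead of the usual 256; real BRIEF also smooths the patch first, and none of these names come from the paper's architecture.

```python
import random

random.seed(0)
# Fixed comparison pattern: pairs of offsets inside a 7x7 patch.
PAIRS = [((random.randint(-3, 3), random.randint(-3, 3)),
          (random.randint(-3, 3), random.randint(-3, 3))) for _ in range(8)]

def brief(img, x, y):
    """Descriptor as an int: bit i is 1 if intensity at p_i < intensity at q_i."""
    d = 0
    for i, ((ax, ay), (bx, by)) in enumerate(PAIRS):
        if img[y + ay][x + ax] < img[y + by][x + bx]:
            d |= 1 << i
    return d

def hamming(a, b):
    """Number of differing descriptor bits."""
    return bin(a ^ b).count("1")

img = [[(7 * r * r + 3 * c) % 251 for c in range(16)] for r in range(16)]
d1 = brief(img, 8, 8)
d2 = brief(img, 8, 8)      # same keypoint -> identical descriptor
d3 = brief(img, 5, 11)     # a different keypoint
print(hamming(d1, d2))     # -> 0
```

Each descriptor bit is a single comparator on two pixel reads, and the Hamming distance is an XOR plus a popcount, which is why FAST+BRIEF pipelines are such natural FPGA candidates.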
2010
Modern FPGAs enable system designers to develop high-performance computing (HPC) applications with large amounts of parallelism. Real-time image processing is one such requirement, demanding much more processing power than a conventional processor can deliver. In this research, we implemented software- and hardware-based architectures on an FPGA to achieve real-time image processing. Furthermore, we benchmark and compare our implemented architectures with existing architectures. The operational structures of those systems consist of on-chip processors or custom vision coprocessors implemented in a parallel manner with efficient memory and bus architectures. Performance properties such as accuracy, throughput and efficiency are measured and presented.
In recent years a new category of sensor has appeared, known as the intelligent or smart sensor. This paper focuses on smart cameras used in the field of robotics, and more precisely on the extraction of features. Object tracking, autonomous navigation and 3D mapping are some example applications. The chosen features are interest points, which can be detected by several algorithms. Among these is the Harris & Stephens detector, which has a simple principle based on the calculation of multiple derivatives and gives acceptable results. Beside the imager, the smart camera contains an FPGA, on which the interest point detection algorithm is implemented. Other modules were added to the system to deliver more appropriate results.
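The Harris & Stephens response at a pixel is built from image gradients accumulated into a structure tensor over a window, then scored as R = det(M) - k * trace(M)^2. A pure-Python sketch with the usual textbook defaults (window size and k are conventional choices, not values from this paper):

```python
def harris_response(img, x, y, win=1, k=0.04):
    """Harris corner response at (x, y) from a (2*win+1)^2 gradient window."""
    sxx = sxy = syy = 0.0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            cy, cx = y + dy, x + dx
            ix = (img[cy][cx + 1] - img[cy][cx - 1]) / 2.0  # central differences
            iy = (img[cy + 1][cx] - img[cy - 1][cx]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Bright square: its corner scores higher than a point on a straight edge.
img = [[200 if (r < 5 and c < 5) else 0 for c in range(10)] for r in range(10)]
corner = harris_response(img, 4, 4)
edge = harris_response(img, 2, 4)   # on the edge, away from the corner
print(corner > edge)   # -> True (the edge point's response is negative)
```

The multiply-accumulate structure of sxx/sxy/syy maps directly onto FPGA DSP blocks, which is what makes Harris a common choice for in-camera implementation.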