Proceedings of EUSIPCO, Florence, Italy, 2006
This paper presents the implementation of a segmentation process that extracts moving objects from an image sequence taken by a static camera for real-time vision tasks. Various aspects of the underlying motion detection algorithm are explored, and modifications are made with potential improvements in extraction results and hardware efficiency. The whole system is implemented on a single low-cost FPGA chip and is capable of real-time segmentation at a very high frame rate of up to 1130 fps. In addition, to achieve real-time performance with high-resolution video streams, a dedicated hardware architecture with a streamlined data flow and memory-access reduction schemes is developed. A data-flow reduction of 38.6% is achieved by processing only one distribution at a time through the hardware. A substantial memory-bandwidth reduction of 60% is also achieved by exploiting distribution similarities between succeeding neighbouring pixels as well as word-length reduction.
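The abstract does not spell out the background model, but adaptive background differencing of this kind is commonly built on per-pixel Gaussian mixtures, which fits the reference to multiple distributions per pixel. The Python sketch below shows one plausible per-pixel match-and-update step; the constants K, ALPHA and MATCH_T and the simple foreground test are assumptions for illustration, and the paper's fixed-point data path and one-distribution-at-a-time scheduling are not reproduced.

    # Illustrative sketch only: a per-pixel mixture-of-Gaussians background test in the
    # spirit of adaptive background differencing. K, ALPHA, MATCH_T and the foreground
    # rule are assumptions, not the paper's hardware design.
    K = 3            # distributions kept per pixel
    ALPHA = 0.05     # learning rate
    MATCH_T = 2.5    # match threshold in standard deviations

    def update_pixel(x, modes):
        """modes: list of (weight, mean, var). Returns (is_foreground, updated modes)."""
        for i, (w, mu, var) in enumerate(modes):
            if (x - mu) ** 2 <= (MATCH_T ** 2) * var:      # matched an existing mode
                w = w + ALPHA * (1.0 - w)
                mu = mu + ALPHA * (x - mu)
                var = var + ALPHA * ((x - mu) ** 2 - var)
                modes[i] = (w, mu, var)
                # foreground if the matched mode carries little background evidence
                return w < 0.25, modes
        # no match: replace the weakest mode with a new one centred on x
        j = min(range(len(modes)), key=lambda k: modes[k][0])
        modes[j] = (ALPHA, float(x), 100.0)
        return True, modes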
2006 International Conference on Image Processing, 2006
This paper proposes a robust, real-time, scalable and modular Field Programmable Gate Array (FPGA) based implementation of a spatiotemporal segmentation of video objects. The goal of this work is to translate an existing object segmentation algorithm into hardware to achieve real-time performance. The proposed implementation achieved an optimum processing speed of 133 MPixels/s while utilizing minimal hardware resources. The design was successfully simulated, synthesized and tested for real-time performance on an actual hardware platform consisting of a frame grabber with a user-programmable FPGA (Xilinx Virtex-II Pro).
Applied Reconfigurable Computing, 2006
This paper describes a general-purpose system based on elementary motion and rigid-body detection that is able to efficiently segment moving objects using a sparse map of features from the visual field. FPGA implementation allows real-time image processing on an embedded system. The modular design allows other modules to be added and the system to be used in very diverse applications.
2006
A real-time vision system for images perceived in real environments, based on a reconfigurable architecture, is presented.
We describe "GW4," an efficient video segmentation algorithm designed for FPGA implementation. The algorithm detects moving foreground objects against a multimodal background; it is motivated by two well-known adaptive background differencing algorithms, Grimson's algorithm and W4. GW4 is designed specifically for implementation on reconfigurable FPGA hardware, avoiding the use of floating point numbers and transcendental operations, and operates at real-time frame rates on 640x480 video streams. We present experimental results indicating processing speeds, and superior segmentation performance to Grimson's algorithm.
2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)
Efficient and real-time segmentation of color images is important in many fields of computer vision, such as image compression, medical imaging, mapping and autonomous navigation. Being one of the most computationally expensive operations, it is usually done in software on high-performance processors. In robotic systems, however, with constrained platform dimensions and the need for portability, low power consumption and, simultaneously, real-time image segmentation, we envision hardware parallelism as the way forward to achieve higher acceleration. Field-programmable gate arrays (FPGAs) are among the best suited for this task, as they provide high computing power in a small physical area. They exceed the computing speed of software-based implementations by breaking the paradigm of sequential execution and accomplishing more operations per clock cycle through hardware-level parallelization at an architectural level. In this paper, we propose three novel architectures for the well-known Efficient Graph-Based Image Segmentation algorithm. The proposed implementations optimize time and power consumption compared to software implementations. The proposed hybrid design notably furthers acceleration, delivering at least a 2× speed gain over the other implementations, which thereby allows real-time image segmentation that can be deployed on mobile robotic systems.
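For reference, the Efficient Graph-Based Image Segmentation algorithm referred to above (Felzenszwalb and Huttenlocher) can be sketched in plain Python as follows. This is the baseline software formulation on a grayscale image, not any of the three proposed hardware architectures; the constant k is a placeholder and the usual small-component merging pass is omitted.

    import numpy as np

    class DisjointSet:
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n
        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x
        def union(self, a, b):
            a, b = self.find(a), self.find(b)
            if a == b:
                return a
            if self.size[a] < self.size[b]:
                a, b = b, a
            self.parent[b] = a
            self.size[a] += self.size[b]
            return a

    def segment_graph(img, k=300.0):
        """img: (H, W) float grayscale. Returns per-pixel component labels."""
        h, w = img.shape
        idx = lambda y, x: y * w + x
        edges = []
        for y in range(h):                      # 4-connected grid graph, weight = |difference|
            for x in range(w):
                if x + 1 < w:
                    edges.append((abs(img[y, x] - img[y, x + 1]), idx(y, x), idx(y, x + 1)))
                if y + 1 < h:
                    edges.append((abs(img[y, x] - img[y + 1, x]), idx(y, x), idx(y + 1, x)))
        edges.sort(key=lambda e: e[0])
        ds = DisjointSet(h * w)
        threshold = [k] * (h * w)               # internal difference + k / |component|
        for wgt, a, b in edges:
            ra, rb = ds.find(a), ds.find(b)
            if ra != rb and wgt <= threshold[ra] and wgt <= threshold[rb]:
                r = ds.union(ra, rb)
                threshold[r] = wgt + k / ds.size[r]
        return np.array([ds.find(i) for i in range(h * w)]).reshape(h, w)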
2003
In this paper we present the Qimera segmentation platform and describe the different approaches to segmentation that have been implemented in the system to date. Analysis techniques have been implemented for both region-based and object-based segmentation. The region-based segmentation algorithms include: a colour segmentation algorithm based on a modified Recursive Shortest Spanning Tree (RSST) approach, an implementation of a colour image segmentation algorithm based on the K-Means-with-Connectivity-Constraint (KMCC) algorithm, and an approach based on the Expectation Maximization (EM) algorithm applied in a 6D colour/texture space. A semi-automatic approach to object segmentation that uses the modified RSST approach is outlined. An automatic object segmentation approach via snake propagation within a level-set framework is also described. Illustrative segmentation results are presented in all cases. Plans for future research within the Qimera project are also discussed.
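As a simplified illustration of the colour-clustering style of segmentation used in Qimera, the sketch below runs plain k-means in RGB space; the connectivity constraint that distinguishes KMCC, the texture features, and the RSST and EM variants are deliberately left out, and k, iters and seed are placeholder parameters.

    import numpy as np

    def kmeans_color_segmentation(img, k=4, iters=10, seed=0):
        """img: (H, W, 3) RGB image. Plain k-means in colour space only;
        the connectivity constraint of KMCC is deliberately omitted here."""
        h, w, _ = img.shape
        pixels = img.reshape(-1, 3).astype(np.float64)
        rng = np.random.default_rng(seed)
        centres = pixels[rng.choice(len(pixels), size=k, replace=False)]
        for _ in range(iters):
            # assign each pixel to its nearest cluster centre
            d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # recompute centres; keep the old centre if a cluster emptied out
            for c in range(k):
                members = pixels[labels == c]
                if len(members):
                    centres[c] = members.mean(axis=0)
        return labels.reshape(h, w)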
IEEE Transactions on Nuclear Science, 2014
A multi-core FPGA-based 2D clustering implementation for real-time image processing is presented in this paper. The clustering algorithm uses a moving-window technique to reduce the time and data required for the cluster identification process. The implementation is fully generic, with an adjustable detection window size. A fundamental characteristic of the implementation is that multiple clustering cores can be instantiated. Each core can work on a different identification window and process data of independent "images" in parallel, thus increasing performance by exploiting more FPGA resources. The algorithm and implementation were developed for the Fast TracKer processor for the trigger upgrade of the ATLAS experiment, but their generic design makes them easily adjustable to other demanding image processing applications that require real-time pixel clustering.
Index Terms: clustering methods, field-programmable gate arrays, image analysis, multiprocessing systems, particle tracking.
Real-Time Object Detection and Recognition in FPGA-Based Autonomous Driving Systems, 2024
This research paper presents an innovative methodology for the identification and detection of objects in autonomous driving systems that employ field-programmable gate arrays (FPGAs). Through the integration of deep learning methodologies with FPGA hardware acceleration, the approach attains the low latency and high precision necessary for safe navigation. Data acquisition, preprocessing, and model training are carried out to refine the system's performance, and the FPGA implementation meets these objectives by employing parallel computing and hardware optimisation techniques. Experimental data show that the FPGA-based approach outperforms conventional CPU and GPU implementations in terms of power efficiency, inference latency, and detection precision. Given their strong compatibility with autonomous driving systems, widespread adoption of FPGAs for enhanced object recognition and identification in autonomous vehicles appears imminent.
Microscopy Microanalysis Microstructures, 1996
This paper proposes an image segmentation technique based on background subtraction, implemented on an FPGA. Image segmentation is an important technique in the area of image processing. A digital image is a set of quantized samples of a continuously varying function, and segmentation refers to the process of partitioning a digital image into regions. The thresholding method achieves segmentation by looking for the boundaries between regions based on discontinuities in gray levels or color properties. FPGA implementation makes the technique more useful for real-time applications.
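A minimal sketch of this kind of background-subtraction segmentation, assuming a fixed background image and a global threshold (the value 30 is a placeholder, not a figure from the paper):

    import numpy as np

    def background_subtract(frame, background, threshold=30):
        """frame, background: (H, W) uint8 grayscale images.
        Returns a binary mask, True where the pixel differs enough from the background."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        return diff > threshold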
7th International Conference on Image Processing and its Applications, 1999
Many works in image processing concern the segmentation of moving objects in sequences of images. This problem is particularly critical, since it represents the first step of many complex computer-vision processes, in applications such as object tracking, video surveillance, monitoring, and autonomous navigation. In such applications, both real-time and low-cost requirements should be satisfied. To this end, we propose a dedicated hardware solution, based on reconfigurable logic, that provides motion detection and moving-object segmentation at frame rate.
2014
The proposed work presents an FPGA-based architecture for image segmentation. It has found application in forensic science and in digital multimedia for creating image-dazzling effects. Currently, image processing algorithms are mostly limited to software implementations, which are slower due to limited processor speed. A dedicated processor for segmentation was therefore needed, which was not possible until advances in VLSI technology; now, more complex systems can be integrated on a single chip, providing a platform to run real-time algorithms in hardware. Image segmentation is an important technique in the area of image processing, with wide applications in medicine and remote sensing, to mention a few. A lot of research is in progress in various areas, resulting in many computationally efficient algorithms. There are conventional as well as improvised segmentation algorithms depending on the application; the choice of technique in most cases depends on the application and the image in question rather than on a generalized method. The proposed work uses the histogram method for segmentation. The conventional histogram method is modified to automatically determine the threshold for different regions in the image. The objective of this project is to realize the segmentation algorithm on an FPGA; FPGA implementation renders it more useful for real-time applications.
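The abstract does not specify how the modified histogram method picks its thresholds. As one standard example of deriving a threshold automatically from the histogram, the sketch below implements Otsu's method, which maximises between-class variance; it stands in only as an illustration of histogram-based threshold selection, not as the authors' multi-region scheme.

    import numpy as np

    def otsu_threshold(img):
        """img: (H, W) uint8 grayscale. Returns the gray level maximising
        between-class variance over the image histogram."""
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        prob = hist / hist.sum()
        levels = np.arange(256)
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (levels[:t] * prob[:t]).sum() / w0
            mu1 = (levels[t:] * prob[t:]).sum() / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_t, best_var = t, var_between
        return best_t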
Computer vision and image …, 2010
This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models and integrates the detection of foreground pixels with their conversion into labelled objects using a connected component labelling algorithm. The background models are based on 24-bit RGB values and 8-bit greyscale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, has been efficiently integrated with the pixel-level background subtraction to extract the pixels of a moving object as a single blob. The connected component algorithm run-length encodes the binary image output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized, as the number of run-lengths is typically less than the number of pixels.
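The run-length-based connected component labelling described above can be sketched in software roughly as follows; the union-find bookkeeping, the 8-connectivity test and the data layout are illustrative assumptions rather than the paper's hardware design.

    def run_length_encode(row):
        """row: iterable of 0/1. Returns half-open (start, end) foreground runs."""
        runs, start = [], None
        for x, v in enumerate(row):
            if v and start is None:
                start = x
            elif not v and start is not None:
                runs.append((start, x))
                start = None
        if start is not None:
            runs.append((start, len(row)))
        return runs

    def label_runs(binary_image):
        """8-connected component labelling operating on run-lengths instead of pixels."""
        parent = []                                  # union-find over run indices
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        def union(a, b):
            a, b = find(a), find(b)
            if a != b:
                parent[b] = a
        all_runs, prev = [], []                      # prev: (start, end, run_index)
        for y, row in enumerate(binary_image):
            current = []
            for s, e in run_length_encode(row):
                idx = len(parent)
                parent.append(idx)
                all_runs.append((y, s, e, idx))
                for ps, pe, pidx in prev:
                    if s <= pe and ps <= e:          # runs overlap or touch diagonally
                        union(pidx, idx)
                current.append((s, e, idx))
            prev = current
        # resolve each run to its component representative
        return [(y, s, e, find(idx)) for y, s, e, idx in all_runs]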
In this article, we present a mixed software/hardware implementation on a Xilinx MicroBlaze soft-core-based FPGA platform. The reconfigurable embedded platform is designed to support an important algorithm in image processing: region-based color image segmentation for detecting objects, based on an RGB-to-HSL transformation. The proposed work is implemented and compiled with the embedded development kit EDK 6.3i and the synthesis software ISE 6.3i available with the Xilinx Virtex-II FPGA, using the C++ language. The basic motivation of our application to radio-isotopic images and neutron tomography is to assist physicians in diagnosis by extracting regions of interest. The system is designed to be integrated as an extension to the nuclear imaging system implemented around our nuclear research reactor. The proposed design can significantly accelerate the algorithm, and the possible reconfiguration can be exploited to reach higher performance in the future; it can also be used for many image processing applications.
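The RGB-to-HSL transformation underpinning the segmentation can be illustrated with Python's standard colorsys module (which returns hue, lightness, saturation in that order); the per-pixel loop below is a readability-first sketch, not the MicroBlaze/FPGA implementation.

    import colorsys
    import numpy as np

    def rgb_to_hsl_image(img):
        """img: (H, W, 3) uint8 RGB image. Returns an (H, W, 3) float array holding
        hue, saturation and lightness in [0, 1]."""
        out = np.empty(img.shape, dtype=np.float64)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                r, g, b = img[y, x] / 255.0
                h, l, s = colorsys.rgb_to_hls(r, g, b)   # colorsys orders H, L, S
                out[y, x] = (h, s, l)
        return out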
In recent years, a new category of sensor has appeared, known as the intelligent sensor or smart sensor. This paper focuses on smart cameras used in the field of robotics, and more precisely on the extraction of features. Object tracking, autonomous navigation and 3D mapping are some application examples. The feature chosen is interest points, which can be detected by several algorithms. Among these algorithms is Harris & Stephens, which has a simple principle based on the calculation of multiple derivatives and gives acceptable results. The smart camera includes, besides the camera, an FPGA on which the interest point detection algorithm is implemented. Other modules were added to the system to deliver more appropriate results.
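For context, the Harris & Stephens corner measure the smart camera computes can be sketched as follows; this version uses simple central differences and a box window rather than Gaussian weighting, and the constants k and win are the usual placeholders.

    import numpy as np

    def harris_response(img, k=0.04, win=3):
        """img: (H, W) float grayscale. Returns the Harris & Stephens corner measure
        R = det(M) - k * trace(M)^2 using finite differences and a box window."""
        ix = np.zeros_like(img); iy = np.zeros_like(img)
        ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0      # central differences
        iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
        ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

        def box(a):                                          # sum over a win x win window
            out = np.zeros_like(a)
            r = win // 2
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
            return out

        sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
        det = sxx * syy - sxy * sxy
        trace = sxx + syy
        return det - k * trace * trace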
Object detection is one of the most important tasks in computer vision. It has multiple applications in many different fields such as face detection, video surveillance and traffic sign recognition. Most of these applications are associated with real-time performance constraints. However, the current implementations of object detection algorithms are computationally intensive and far from real-time performance. The problem is further aggravated in an embedded systems environment where most of these applications are deployed. The high computational complexity makes implementing an embedded object detection system with real-time performance a challenging task. Consequently, there is a strong need for dedicated hardware architectures capable of delivering high detection accuracy within an acceptable processing time given the available hardware resources. The presented work investigates the feasibility of implementing an object detection system on a Field Programmable Gate Array (FPGA) platform as a candidate solution for achieving real-time performance in embedded applications. A parallel hardware architecture that accelerates the execution of three algorithms is proposed. The algorithms are: Scale Invariant Feature Transform (SIFT) feature extraction, Bag of Features (BoF) and Support Vector Machine (SVM). The proposed architecture exploits different forms of parallelism inherent in the aforementioned algorithms to reach real-time constraints. A prototype of the proposed architecture is implemented on an FPGA platform and evaluated using two benchmark datasets. On average, the speedup achieved was 55.06× compared with the feature extraction algorithm implemented in pure software. The speedup achieved in the classification algorithm was 6.64×. The difference in classification accuracy between our architecture and the software implementation was less than 3%. In comparison to existing hardware solutions, our proposed hardware architecture can detect an additional 380 SIFT features in real time. Additionally, the hardware resources utilized by our architecture are less than those required by existing solutions.
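To make the classification stage concrete, the sketch below shows bag-of-features quantisation followed by a linear SVM decision value; the codebook, weights and bias are assumed to come from offline training, and none of the FPGA parallelisation described above is reflected here.

    import numpy as np

    def bof_histogram(descriptors, codebook):
        """descriptors: (N, D) local features (e.g. SIFT); codebook: (K, D) visual words.
        Returns an L1-normalised bag-of-features histogram of length K."""
        d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
        words = d.argmin(axis=1)                 # nearest visual word for each descriptor
        hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
        return hist / max(hist.sum(), 1.0)

    def linear_svm_score(hist, weights, bias):
        """Decision value of a pre-trained linear SVM; positive means 'object present'."""
        return float(np.dot(weights, hist) + bias)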
Lecture Notes in Computer Science, 2006
In this paper, an architecture based on FPGAs for real-time image processing is described. The system is composed of a high-resolution (1280×1024) CMOS sensor connected to an FPGA that is in charge of acquiring images from the sensor and controlling it. A PC sends certain orders and parameters, configured by the user, to the FPGA; the connection between the PC and the FPGA is made through the parallel port. The resolution of the captured image, as well as the selection of a window of interest inside the image, is configured by the user on the PC. Finally, a system that performs the convolution between the captured image and an n×n mask is shown.
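The final convolution stage can be illustrated with a direct n×n valid-region convolution in Python; this is only the reference computation that a hardware pipeline would implement, with no claim about the actual FPGA architecture.

    import numpy as np

    def convolve2d(img, mask):
        """img: (H, W) array, mask: (n, n) array with odd n.
        Valid-region 2D convolution of the image with the mask."""
        n = mask.shape[0]
        h, w = img.shape
        flipped = mask[::-1, ::-1]               # true convolution flips the kernel
        out = np.zeros((h - n + 1, w - n + 1), dtype=np.float64)
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(img[y:y + n, x:x + n] * flipped)
        return out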
Facing the Multicore- …, 2011
Efficient segmentation of color images is important for many applications in computer vision. Non-parametric solutions are required in situations where little or no prior knowledge about the data is available. In this paper, we present a novel parallel image segmentation algorithm which segments images in real time in a non-parametric way. The algorithm finds the equilibrium states of a Potts model in the superparamagnetic phase of the system. Our method maps perfectly onto the Graphics Processing Unit (GPU) architecture and has been implemented using the NVIDIA Compute Unified Device Architecture (CUDA) framework. For images of 256×320 pixels we obtained a frame rate of 30 Hz, which demonstrates the applicability of the algorithm to real-time video-processing tasks.
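As a rough software illustration of the Potts-model formulation (not the paper's GPU kernel or its specific equilibration scheme), the sketch below performs one Metropolis sweep of a q-state Potts model whose couplings favour equal labels for similar neighbouring pixels; q, temperature and sigma are placeholder parameters.

    import numpy as np

    def potts_sweep(labels, img, q=10, temperature=0.5, sigma=10.0, rng=None):
        """One Metropolis sweep. labels: (H, W) ints in [0, q); img: (H, W) grayscale.
        Energy is E = -sum_ij J_ij * delta(s_i, s_j) with J_ij from pixel similarity."""
        rng = rng or np.random.default_rng()
        h, w = labels.shape
        for y in range(h):
            for x in range(w):
                proposal = rng.integers(q)
                d_energy = 0.0
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        j = np.exp(-((float(img[y, x]) - float(img[ny, nx])) ** 2)
                                   / (2.0 * sigma ** 2))
                        d_energy += j * (int(labels[ny, nx] == labels[y, x])
                                         - int(labels[ny, nx] == proposal))
                # Metropolis acceptance rule
                if d_energy <= 0 or rng.random() < np.exp(-d_energy / temperature):
                    labels[y, x] = proposal
        return labels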
2007
The problem of object tracking is of considerable interest to the scientific community and is still an open and active field of research. In this paper we address the comparison of two different special-purpose architectures for object tracking based on motion and colour segmentation. On one hand, we have developed a new multi-object segmentation device based on an existing optical flow estimation system; this architecture allows video tracking of fast-moving objects using high-speed acquisition cameras. On the other hand, the second approach consists of real-time filtering of chromatic components: multi-object tracking is performed by segmenting pixel neighbourhoods according to a predefined colour. In this contribution we evaluate the two methods, comparing their performance and resource consumption, and finally we discuss which architecture fits better in different working scenarios.
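The chromatic filtering used by the second architecture can be illustrated with a simple per-pixel colour distance test; the Euclidean metric and the threshold below are assumptions for the example, not details from the paper.

    import numpy as np

    def colour_mask(frame, target_rgb, max_distance=40.0):
        """frame: (H, W, 3) uint8. Marks pixels whose colour lies within max_distance
        of a predefined target colour, a simple form of chromatic segmentation."""
        diff = frame.astype(np.float64) - np.asarray(target_rgb, dtype=np.float64)
        return np.linalg.norm(diff, axis=2) < max_distance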