2006, International Conference on Digital Telecommunications (ICDT'06)
The essential purpose of this paper is to describe, in a simple and complete way, an architecture that enables access to and processing of the individual pixels of each frame of a video signal captured in real time. The aim is to facilitate development by researchers in the field of image processing and to promote the creation of academic applications in this area.
Computer Vision Systems, 2001
arXiv: Image and Video Processing, 2019
Simultaneous processing of multiple video sources requires each pixel in a frame from one video source to be processed synchronously with the pixels at the same spatial positions in the corresponding frames from the other video sources. However, simultaneous processing is challenging because corresponding frames from different video signals arrive with time-varying delay: electrical and mechanical restrictions inside the video sources' hardware cause deviations in their frame rates. Researchers overcome these challenges either by using ready-made video processing systems or by designing and implementing a custom system tailored to their specific application. Such video processing systems lack flexibility in handling different application requirements, such as the required number of video sources and outputs, video standards, or frame rates of the input/output videos. In this paper, we present a design for a flexible simultaneous...
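The synchronization problem described above can be illustrated with a small software sketch: when two free-running sources drift apart, a fixed frame-index offset eventually pairs the wrong frames, whereas pairing each frame with the nearest-in-time frame from the other source tolerates the drift. This is only an illustration of the problem, not the paper's design; the function name and the (timestamp, frame) layout are assumptions.

```python
from bisect import bisect_left

def pair_frames(frames_a, frames_b):
    """Pair each (timestamp, frame) from source A with the
    nearest-in-time (timestamp, frame) from source B.
    Both lists must be sorted by timestamp."""
    times_b = [t for t, _ in frames_b]
    pairs = []
    for t_a, frame_a in frames_a:
        i = bisect_left(times_b, t_a)
        # Candidates: the B frame just before and just after t_a.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frames_b)]
        j = min(candidates, key=lambda k: abs(times_b[k] - t_a))
        pairs.append((frame_a, frames_b[j][1]))
    return pairs
```

With a 30 fps source and a nominally 29.97 fps source, the timestamps slowly diverge; nearest-timestamp matching keeps corresponding frames paired where index-based matching would slip.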
International Journal of Engineering Research and Technology, 2021
Nowadays, image processing is one of the more challenging tasks, and a number of models exist for implementing it. Here, the open-source tools OpenCV and MediaPipe are applied to real-time image processing. The implementation is applied to finger counting with object detection, and the results show good accuracy for the finger count.
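As a rough illustration of the finger-counting logic such a system needs, a common heuristic over MediaPipe's 21 hand landmarks compares each fingertip with the joint below it (in image coordinates, y grows downward). The sketch below is a plain-Python version of that heuristic, not the paper's code; the landmark indices follow MediaPipe's hand model, and the thumb is deliberately omitted because it is usually judged along the x-axis instead.

```python
# MediaPipe Hands landmark indices: tip/PIP pairs for the
# index, middle, ring, and little fingers (thumb excluded).
FINGER_TIPS = (8, 12, 16, 20)
FINGER_PIPS = (6, 10, 14, 18)

def count_raised_fingers(landmarks):
    """landmarks: list of 21 (x, y) pairs in image coordinates,
    where y increases downward. A finger counts as raised when
    its tip is above (smaller y than) its PIP joint."""
    return sum(
        1
        for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
        if landmarks[tip][1] < landmarks[pip][1]
    )
```

In a real pipeline the landmark list would come from MediaPipe's hand-tracking output per frame; here it is treated as plain data so the counting rule stands on its own.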
2003
This paper presents an embedded board for rapid prototyping of real time image processing algorithms. The platform is based on the Texas Instruments' TMS320C6415, a new digital signal processor designed for high performance applications. Any analog video source, such as a camcorder or a VCR, can be used as an input signal. The images are captured, processed in real time and then displayed in a window of a graphical user interface. Both a mouse and a keyboard are available to interact with the system. A software environment is also provided thus allowing the rapid implementation of the algorithms in high-level language.
UPB Scientific Bulletin, Series C: Electrical Engineering
For certain types of applications, such as video-clip processing, image processing may not terminate within its deadline. This paper presents methods for implementing image processing algorithms in real time. These methods involve the use of graphics processing units (GPUs) or digital signal processors (DSPs) and may be applied to many image processing algorithms. In general, a denoising algorithm has two phases that run sequentially: the first determines the noisy pixels, and the second applies median filtering considering only the good pixels. In all such denoising algorithms, the first phase runs multiple times depending on the noise power; the second phase may also execute more than once, depending on the specific algorithm. The methods presented include: 1) parallel processing using the GPU, 2) a DSP implementation based on a Blackfin microcomputer with support of the Visual DSP kernel (VDK), and 3) adjusting the number of iterati...
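The two-phase scheme the abstract describes (detect noisy pixels, then median-filter using only the good pixels) can be sketched in a few lines. This is a generic salt-and-pepper variant written for illustration, assuming extreme values (0 or 255) mark noisy pixels; it is not the paper's implementation.

```python
from statistics import median

def denoise_pass(img):
    """One iteration of two-phase impulse denoising on a 2-D
    list of 8-bit gray values. Phase 1 flags extreme pixels
    (0 or 255) as noisy; phase 2 replaces each noisy pixel
    with the median of its non-noisy 3x3 neighbors."""
    h, w = len(img), len(img[0])
    noisy = [[v in (0, 255) for v in row] for row in img]
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if not noisy[i][j]:
                continue
            good = [
                img[i + di][j + dj]
                for di in (-1, 0, 1)
                for dj in (-1, 0, 1)
                if (di or dj)
                and 0 <= i + di < h and 0 <= j + dj < w
                and not noisy[i + di][j + dj]
            ]
            if good:  # leave the pixel if no clean neighbor exists
                out[i][j] = int(median(good))
    return out
```

As the abstract notes, detection can be iterated for heavier noise, so a full denoiser would call `denoise_pass` repeatedly until the image stops changing.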
2005
This paper presents an inexpensive and cost-effective configurable platform for designing real-time video processing and vision systems. The platform is designed around a field-programmable gate array and provides video input and output interfaces. It has been used to implement several different image and video processing systems, namely for prototyping and for teaching courses in the area of video processing systems. Experimental results show that this platform provides enough resources and speed to implement even complex systems in real time.
2010
This paper introduces an embedded architecture and the low-level video processing algorithms developed for an intelligent node that is part of a distributed intelligent sensory network for surveillance purposes. Details of the architecture developed for this node are given, together with the low-level video processing algorithms used and the results obtained after their implementation. The video board has been developed using two DSP processors for video processing tasks, as well as an FPGA dedicated to image capture (VGA size) and to dispatching the captured frames to the DSP processors. The low-level software includes acquisition, segmentation, labeling, tracking, and classification of detected objects into three main categories: Person, Group, and Luggage. Additional features are also extracted from each object in the frame. The unit communicates the classification results and the main extracted features to upper levels using XML streaming, as well as the processed frames using a JPEG stream. All these functionalities are currently running in the built prototypes.
International Journal of Computing and Digital Systems, 2024
For real-time video processing, analysis time is a major challenge for researchers. Since digital images from cameras or other image sources can be quite large, it is common practice to divide these large images into smaller sub-images. The present study proposes a subsystem module to read and display the region of interest (ROI) of real-time video signals for static-camera applications, in preparation for background subtraction (BGS) algorithm operation. The proposed subsystem was developed in the Verilog hardware description language (HDL), synthesized, and implemented on the ZYBO Z7-10 platform. An ROI background image of 360×360 resolution was selected to test the operation of the module in real time. The subsystem consists of five basic modules, and timing analysis was used to determine its real-time performance. Multiple clock-domain frequencies are used to manage the module operations: 445.5 MHz, 222.75 MHz, 148.5 MHz, and 74.25 MHz, which are six, three, two, and one times the pixel clock frequency, respectively. These frequencies are chosen so that the five basic processing operations can be performed in real time within one pixel period. Two strategies are presented to show the effect of the trigger instant of the clock signals on system performance. The results show that the latency of the proposed ROI reading subsystem is 13.468 ns (one pixel period), which meets the requirements of real-time applications.
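The clock figures quoted above are internally consistent and can be checked with a few lines of arithmetic: the four frequencies are exact multiples of the 74.25 MHz pixel clock, and the reported 13.468 ns latency is exactly one pixel-clock period. The snippet below only verifies that arithmetic; it is not part of the Verilog design.

```python
pixel_clock_hz = 74.25e6  # pixel clock used by the subsystem

# The other clock-domain frequencies are integer multiples
# of the pixel clock: 6x, 3x, and 2x.
multiples = {445.5e6: 6, 222.75e6: 3, 148.5e6: 2, 74.25e6: 1}
for freq, m in multiples.items():
    assert abs(freq - m * pixel_clock_hz) < 1.0

# One pixel period in nanoseconds: 1 / 74.25 MHz ~= 13.468 ns,
# matching the reported subsystem latency.
period_ns = 1e9 / pixel_clock_hz
print(round(period_ns, 3))  # 13.468
```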
Lecture Notes in Computer Science, 2012
Image Processing algorithms implemented in hardware have emerged as the most viable solution for improving the performance of image processing systems. The introduction of reconfigurable devices and high level hardware programming languages has further accelerated the design of image processing in FPGA.
Journal of Signal Processing Systems, 2018
This paper describes flexible tools and techniques that can be used to efficiently design/generate quite a variety of hardware IP blocks for highly parameterized real-time video processing algorithms. The tools and techniques discussed in the paper include host software, FPGA interface IP (PCIe, USB 3.0, DRAM), high-level synthesis, RTL generation tools, synthesis automation as well as architectural concepts (e.g., nested pipelining), an architectural estimation tool, and verification methodology. The paper also discusses a specific use case to deploy the mentioned tools and techniques for hardware design of an optical flow algorithm. The paper shows that in a fairly short amount of time, we were able to implement 11 versions of the optical flow algorithm running on 3 different FPGAs (from 2 different vendors), while we generated and synthesized several thousand designs for architectural trade-off. Keywords Hardware IP generation • Real-time video processing • High-level synthesis • FPGA • Optical flow • Nested pipelining
Control Engineering Practice, 1995
This paper discusses issues in real-time image processing, including applications, approaches, and hardware. In particular, it discusses the failure of existing programming languages to support these considerations and presents requirements for any language that can support real-time image processing.
In this article, lessons learned from the design of a Java interface to digital cameras are described. The interface allows programmers to interact with a FireWire digital camera directly from within their Java programs. Two types of applications were developed. The first uses the Java 2D API directly to display the information on screen. Java networking is used to transmit the images over a network. The second application implements a datasource for the Java Media Framework. Being integrated in the JMF, a wide range of features can be used, including sending images over the network with a real-time transfer protocol. Both types of applications are compared, and their performance is evaluated. The usability of Java and the Java Media Framework to control digital cameras, and the possibilities of integrating the results from this paper in an embedded system are discussed.
Data is exploding day by day in digital technology. Nowadays multimedia data, which includes images, text, and video, is also handled by databases. Video processing plays a tremendous role in multimedia, but not all videos are the same: they can exist in a number of settings and formats. In this video processing system, video is processed for enhancement, analysis, channel splitting, and binarization using different image processing techniques. Different color systems, such as YCbCr, HSL, and RGB, are considered so that any type of video can be processed. The input video can be a stored file or a continuous stream of video sequences from a web camera or any other type of camera. With this system, the quality of the video can be improved, and special effects can be applied by means of various image processing techniques and filters. The enhancement techniques considered are filtering with correlation and convolution, adaptive smoothing, conservative smoothing, and median filtering. The analysis techniques include edge detection, histograms, and statistical analysis. The binarization methods implemented are custom threshold and ordered dither. The color filters include converting RGB to grayscale and grayscale to RGB, sepia, invert, rotate, custom color filter, Euclidean color filter, channel filter, and red, green, blue, cyan, magenta, and yellow filters; many other filters are also implemented in this system.
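Two of the simpler color filters listed above, RGB-to-grayscale conversion and inversion, reduce to a line each per pixel. The sketch below uses the ITU-R BT.601 luma weights that most image libraries apply for grayscale; it is an illustrative stand-in for the system's filters, not its actual code.

```python
def rgb_to_gray(pixel):
    """Convert an (R, G, B) triple of 0-255 values to a single
    gray level using the ITU-R BT.601 luma weights."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def invert(pixel):
    """Negative filter: reflect each channel around 255."""
    return tuple(255 - c for c in pixel)
```

The green channel carries the largest weight because the eye is most sensitive to green, which is why pure green appears brighter in grayscale than pure red or blue.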
Annual Review in Automatic Programming, 1994
In this paper we discuss issues in real-time image processing, including applications, approaches and hardware. In particular, we discuss the failure of existing programming languages to support these considerations and present requirements for any language that can support real-time image processing.
Java Image Processing Recipes, 2018
Up to now, this book has been focused on getting the reader up to speed with working on images and generated graphical art. You should now feel pretty confident with the methods introduced, and you have room for many ideas. Great! We could keep going on expanding and explaining more on the other methods from OpenCV, but we are going to do something else in Chapter 4, as we switch to real-time video analysis, applying the knowledge learned during the previous chapters to the field of video streaming.
2000
This paper presents a study of the impact of MMX technology on image processing and machine vision applications, which, because of their hard real-time constraints, is an undoubtedly challenging task. A comparison with traditional scalar code and with another parallel SIMD architecture (the IMAP-VISION board) is discussed, with emphasis on the particular programming strategies for speed optimization.
IJIRIS:: AM Publications,India, 2020
This article is about image processing systems, where image processing can be defined as processing and altering an existing image in a desired manner. The image is one of the perceptible sources in image processing applications, which include a large number of tools and techniques that help extract complex features of an image. Probably the most powerful image processing system is the human brain together with the eye: this system receives, enhances, and stores images at enormous speed. The objective of image processing is to visually enhance or statistically evaluate some aspect of an image not readily apparent in its original form. Several technologies operate on images in real time, but image processing is the real core. This paper presents an overview of the development and implementation of the operations required for quality image production, and also discusses image processing applications, tools, and techniques.
Journal of Real-Time Image Processing