2014, Proceedings of the 9th International Conference on Computer Vision Theory and Applications
The main contribution of this paper is a compact representation of the 'short tracks', or tracklets, present in a time window of a given video input, which makes it possible to analyse and detect different crowd events. First, tracklets are extracted from a time window using a particle-filter multi-target tracker. After noise removal, the tracklets are plotted into a square image by normalising their lengths to the size of the image. Different histograms are then computed over this compact representation, and different crowd events are detected via Bag-of-Words modelling. Novel video sequences can then be analysed to detect whether an abnormal or chaotic situation is present. The whole algorithm is tested on our own dataset, also introduced in the paper.
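As a rough illustration of the tracklet-plot idea, the sketch below rescales one tracklet (a short list of (x, y) points) so that its bounding box fills a fixed square grid. The grid size, function name, and rasterisation details are assumptions for the sketch, not the authors' implementation.

```python
def plot_tracklet(points, size=32):
    """Rasterise one tracklet into a size x size binary grid,
    normalising its extent to the grid as described above."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = (max(xs) - x0) or 1.0   # avoid division by zero for flat tracklets
    h = (max(ys) - y0) or 1.0
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        col = int((x - x0) / w * (size - 1))
        row = int((y - y0) / h * (size - 1))
        grid[row][col] = 1
    return grid

# A short diagonal tracklet occupies four cells of the plot
tp = plot_tracklet([(0, 0), (5, 2), (10, 4), (15, 6)])
```

Histograms (e.g. of occupied rows or columns) can then be computed over such grids to build the Bag-of-Words vocabulary.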
International Conference on Pattern Recognition 2014 (ICPR)
Tracklet plots (TPs) describe the motion patterns of a small crowd or a large group of people over a short time span. This feature can be useful in the context of Bag-of-Words modelling for the recognition of events or actions that unfold in the scene. This work describes a method in which evidence from multiple viewpoints is combined: by computing this feature for each view and synchronising the available video streams, feature-level fusion by concatenation can be applied effortlessly. The presented system is able to recognise specific events in large groups of people from multiple cameras, and performs on par with the best available single view. Furthermore, the dimension of the concatenated feature can be reduced by one order of magnitude without loss of performance.
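Feature-level fusion by concatenation, as used above, can be sketched in a few lines: synchronised per-view feature vectors for the same time window are simply joined into one descriptor. The histogram values and camera names below are invented for illustration.

```python
def fuse_views(view_features):
    """Concatenate per-view feature vectors (one per synchronised
    camera) into a single fused descriptor for the time window."""
    fused = []
    for feature in view_features:
        fused.extend(feature)
    return fused

cam1 = [0.2, 0.5, 0.3]   # e.g. a normalised TP histogram from view 1
cam2 = [0.1, 0.1, 0.8]   # the same feature computed from view 2
descriptor = fuse_views([cam1, cam2])
```

A dimensionality-reduction step (e.g. PCA) could then shrink `descriptor` by an order of magnitude, as the abstract reports, before classification.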
Advances in Intelligent Systems and Computing, 2015
Crowd monitoring is a critical application in video surveillance. Crowd events such as running, walking, merging, splitting, dispersion, and evacuation inform crowd management about the behavior of groups of people. For effective crowd management, detection of crowd events provides an early sign of the behavior of the people. However, crowd event detection from videos is a highly challenging task because of several difficulties, such as non-rigid human body motion, occlusions, the unavailability of distinguishing features due to occlusion, and the unpredictability of people's movements. In addition, video itself is high-dimensional data, and analysing it to detect events is further complicated. One way of tackling the huge volume of video data is to represent a video by a low-dimensional equivalent. However, reducing the video data size must take into account the complex data structure and the events embedded in the video. To this end, we focus on the detection of crowd events using Isometric Mapping (ISOMAP) and a Support Vector Machine (SVM). ISOMAP is used to construct a low-dimensional representation of the feature vectors, and an SVM is then used for training and classification. The proposed approach uses Haar wavelets to extract the Gray Level Co-occurrence Matrix (GLCM), and then computes four statistical features (contrast, correlation, energy, and homogeneity) at different levels of the Haar wavelet decomposition. Experimental results suggest that the proposed approach performs better than existing approaches.
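The four GLCM statistics named above follow standard definitions and can be sketched directly; the pure-Python version below uses a single horizontal offset and omits the wavelet decomposition, so it is an illustrative reading rather than the authors' code.

```python
import math

def glcm(img, levels):
    """Normalised co-occurrence counts for the offset (0, 1):
    how often gray level a is immediately left of gray level b."""
    p = [[0.0] * levels for _ in range(levels)]
    n = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            p[a][b] += 1
            n += 1
    return [[v / n for v in row] for row in p]

def glcm_stats(p):
    """Contrast, correlation, energy, and homogeneity of a GLCM."""
    size = len(p)
    cells = [(i, j, p[i][j]) for i in range(size) for j in range(size)]
    mu_i = sum(i * v for i, j, v in cells)
    mu_j = sum(j * v for i, j, v in cells)
    var_i = sum((i - mu_i) ** 2 * v for i, j, v in cells)
    var_j = sum((j - mu_j) ** 2 * v for i, j, v in cells)
    contrast = sum((i - j) ** 2 * v for i, j, v in cells)
    energy = sum(v ** 2 for i, j, v in cells)
    homogeneity = sum(v / (1 + abs(i - j)) for i, j, v in cells)
    denom = math.sqrt(var_i * var_j)
    correlation = (sum((i - mu_i) * (j - mu_j) * v for i, j, v in cells) / denom
                   if denom else 1.0)
    return contrast, correlation, energy, homogeneity

# Tiny two-level image: left half dark, right half bright
img = [[0, 0, 1, 1],
       [0, 0, 1, 1]]
contrast, correlation, energy, homogeneity = glcm_stats(glcm(img, levels=2))
```

In the full pipeline, these four numbers per wavelet level form the feature vector handed to ISOMAP and then to the SVM.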
Journal on Multimodal User Interfaces
The study of crowd behavior in public areas or during public events is receiving considerable attention in the security community, with the aim of detecting potential risks and preventing overcrowding. In this paper, we propose a novel approach for change detection, event recognition, and characterization in human crowds. It consists of modeling the time-varying dynamics of the crowd using local features. It also involves a feature-tracking step that excludes feature points on the background and extracts long-term trajectories. This benefits the later crowd event detection and recognition, since the influence of features irrelevant to the underlying crowd is removed and the tracked features undergo an implicit temporal filtering. These feature tracks are further employed to extract regular motion patterns such as speed and flow direction. In addition, they are used as observations of a probabilistic crowd function to generate fully automatic crowd density maps. Finally, the variation of these attributes (local density, speed, and flow direction) over time is used to determine the ongoing crowd behaviors. The experimental results on two different datasets demonstrate the effectiveness of our proposed approach for early detection of crowd change and its accuracy in event recognition and characterization.
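The crowd density map step can be illustrated by letting each tracked feature point deposit a Gaussian kernel on a coarse grid; the grid size and bandwidth below are assumptions, and the paper's actual probabilistic crowd function may differ.

```python
import math

def density_map(points, w, h, sigma=1.5):
    """Accumulate a Gaussian kernel at each tracked feature point
    to form a crowd density map over a w x h grid."""
    dmap = [[0.0] * w for _ in range(h)]
    for px, py in points:
        for y in range(h):
            for x in range(w):
                d2 = (x - px) ** 2 + (y - py) ** 2
                dmap[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return dmap

# Two nearby tracked points produce a single density peak
dm = density_map([(2, 2), (2, 3)], w=6, h=6)
peak = max(max(row) for row in dm)
```

Changes in this map over time, together with speed and flow direction, would then drive the event characterization described above.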
Optics and Photonics for Counterterrorism, Crime Fighting, and Defence X; and Optical Materials and Biomaterials in Security and Defence Systems Technology XI, 2014
Automatic detection of abnormal behavior in CCTV cameras is important for improving security in crowded environments, such as shopping malls, airports, and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), by the perpetrator following the victim, or by interactions with an accomplice before and after the incident (longer time scale). This paper focuses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation, and rule-based event classification. In the experiment, we focus on single-track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup in which 10 actors perform these actions. The method is also applied to all tracks generated in a crowded shopping mall over a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false-positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assist operators in finding threatening behavior and enrich the selection of videos to be observed.
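Rule-based event classification over track features, the last step of the pipeline above, can be as simple as a chain of thresholds. The feature names and threshold values below are invented for the sketch, not the paper's calibrated rules.

```python
def classify_track(avg_speed, stop_ratio, turn_angle):
    """Map simple per-track statistics to one of the single-track
    action labels (stop, turn, run, loiter, walk)."""
    if stop_ratio > 0.8:       # fraction of frames with near-zero motion
        return "stop"
    if turn_angle > 90.0:      # total heading change in degrees
        return "turn"
    if avg_speed > 3.0:        # m/s; assumed running threshold
        return "run"
    if avg_speed < 0.5:        # slow, wandering movement
        return "loiter"
    return "walk"

label = classify_track(avg_speed=4.2, stop_ratio=0.1, turn_angle=10.0)
```

Track interactions (pass, meet, merge, split) would need pairwise features, e.g. inter-track distance over time, on top of these per-track rules.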
18th International Conference on Pattern Recognition (ICPR'06), 2006
This work presents an automatic technique for the detection of abnormal events in crowds. Crowd behaviour is difficult to predict and might not be easily semantically translated. Moreover, it is difficult to track individuals in the crowd using state-of-the-art tracking algorithms. Therefore, we characterise crowd behaviour by observing the crowd optical flow and use unsupervised feature extraction to encode normal crowd behaviour. The unsupervised feature extraction applies spectral clustering to find the optimal number of models to represent normal motion patterns. The motion models are HMMs, which cope with the variable number of motion samples that might be present in each observation window. The results on simulated crowds demonstrate the effectiveness of the approach for detecting crowd emergency scenarios.
In this paper we propose a method for abnormal crowd detection and tracking. Automated analysis of crowd activities from surveillance videos is an important issue for public security, as it allows the detection of dangerous crowds and where they are headed. Public places such as shopping centres and airports are monitored using closed-circuit television in order to ensure normal operating conditions. Computer-vision-based crowd analysis algorithms can be divided into three groups: people counting, people tracking, and crowd behaviour analysis. In this paper, behaviour understanding is used for crowd behaviour analysis. These methods could lead to a better understanding of crowd activities, improved design of the built environment, and increased pedestrian safety. The experimental results show that the proposed method achieves good accuracy.
The automated analysis of crowd behavior from videos has been a rather challenging problem to address due to the complexity and density of the motion, occlusions and local noise. A novel approach for the fast and reliable detection and characterization of abnormal events in crowd motions is proposed, based on particle advection and accurate optical flow estimation. Experiments on benchmark datasets show that changes are detected reliably and faster than existing methods. Also, regions of change are localized spatially, and the events occurring in the video are characterized with accuracy.
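Particle advection over an optical-flow field, as used above, amounts to repeatedly moving sample points along the local flow vector. The synthetic uniform flow, grid size, and step count below are assumptions for the sketch; a real system would use an estimated dense flow field per frame.

```python
def advect(particles, flow, steps=5):
    """Move each particle along the local flow vector for a fixed
    number of steps; flow[y][x] is an (fx, fy) displacement."""
    h, w = len(flow), len(flow[0])
    out = []
    for x, y in particles:
        for _ in range(steps):
            fx, fy = flow[min(int(y), h - 1)][min(int(x), w - 1)]
            x, y = x + fx, y + fy
        out.append((x, y))
    return out

# Uniform rightward motion: every particle drifts right one unit per step
flow = [[(1.0, 0.0)] * 8 for _ in range(8)]
result = advect([(0.0, 4.0)], flow)
```

Abnormal events would show up as sudden changes in the advected particles' trajectories relative to the learned normal motion.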
2015
In this paper we address the problem of semantic analysis of structured/unstructured crowded video scenes. Our proposed approach relies on tracklets for motion representation. Each extracted tracklet is abstracted as a directed line segment, and a novel tracklet similarity measure is formulated based on line geometry. For analysis, we apply non-parametric clustering on the extracted tracklets. In particular, we adapt the Distance Dependent Chinese Restaurant Process (DD-CRP) to leverage the computed similarities between pairs of tracklets, which ensures spatial coherence among tracklets in the same cluster. By analyzing the clustering results, we can identify semantic regions in the scene, in particular the common pathways and their sources/sinks, without any prior information about the scene layout. Qualitative and quantitative experimental evaluation on multiple crowded-scene datasets, principally the challenging New York Grand Central Station video, demonstrates the state-of-the-art performance of our method.
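One possible reading of "tracklet abstracted as a directed line segment" with a geometry-based similarity is sketched below: segments that point the same way and lie close together score high. The exact measure fed to the DD-CRP in the paper may differ, and the bandwidth parameter is invented.

```python
import math

def as_segment(tracklet):
    """Abstract a tracklet (list of (x, y) points) as the directed
    segment from its first to its last point."""
    return tracklet[0], tracklet[-1]

def similarity(t1, t2, sigma_d=10.0):
    """Combine direction agreement (cosine between segments) with
    spatial proximity of segment midpoints."""
    (a1, b1), (a2, b2) = as_segment(t1), as_segment(t2)
    v1 = (b1[0] - a1[0], b1[1] - a1[1])
    v2 = (b2[0] - a2[0], b2[1] - a2[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1) or 1.0
    n2 = math.hypot(*v2) or 1.0
    cos = dot / (n1 * n2)
    m1 = ((a1[0] + b1[0]) / 2, (a1[1] + b1[1]) / 2)
    m2 = ((a2[0] + b2[0]) / 2, (a2[1] + b2[1]) / 2)
    dist = math.hypot(m1[0] - m2[0], m1[1] - m2[1])
    # opposing directions get zero similarity; nearby, parallel
    # segments approach one
    return max(cos, 0.0) * math.exp(-dist / sigma_d)

parallel = similarity([(0, 0), (10, 0)], [(0, 1), (10, 1)])
opposite = similarity([(0, 0), (10, 0)], [(10, 1), (0, 1)])
```

A pairwise matrix of such scores is exactly the kind of input a distance-dependent CRP can cluster without fixing the number of pathways in advance.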
This paper evaluates an automatic technique for the detection of abnormal events in crowds. Crowd behaviour is difficult to predict and might not be easily semantically translated. Moreover, it is difficult to track individuals in the crowd using state-of-the-art tracking algorithms. Therefore, we characterise crowd behaviour by observing the crowd optical flow and use unsupervised feature extraction to encode normal crowd behaviour. The unsupervised feature extraction applies spectral clustering to find the optimal number of models to represent normal motion patterns. The motion models are HMMs, which cope with the variable number of motion samples that might be present in each observation window. The results on simulated crowds analyse the robustness of the approach for detecting crowd emergency scenarios when observing the crowd at local and global levels. The results on normal real data show the effectiveness of the approach in modelling the more diverse behaviour present in normal crowds. These results improve on our previous work on the detection of anomalies in pedestrian data.