Proceedings of IEEE International Conference on Multimedia Computing and Systems
Modeling moving objects has become a topic of increasing interest in the area of video databases. Two key aspects of such modeling are spatial and temporal relationships. In this paper we introduce an innovative way to represent the trajectory of a single moving object and the relative spatio-temporal relations between multiple moving objects. The representation supports a rich set of spatial topological and directional relations. It also supports both quantitative and qualitative user queries about moving objects. Algorithms for matching trajectories and spatio-temporal relations of moving objects are designed to facilitate query processing. These algorithms can handle both exact and similarity matches. We also discuss the integration of our moving object model, based on a video model, in an object-oriented system. Some query examples are provided to further validate the expressiveness of our model.
1996
Modeling moving objects has become a topic of increasing interest in the area of video databases. Two key aspects of such modeling are object spatial and temporal relationships. In this paper we introduce an innovative way to represent the trajectory of a single moving object and the relative spatio-temporal relations between multiple moving objects. The representation supports a rich set of spatial topological and directional relations. It also supports both quantitative and qualitative user queries about moving objects. Algorithms for matching trajectories and spatio-temporal relations of moving objects are designed to facilitate query processing. These algorithms can handle both exact and similarity matches. We also discuss the integration of our moving object model, based on a video model, in an object-oriented system. Some query examples are provided to further validate the expressiveness of our model.
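As a rough illustration of the general idea behind the trajectory matching described above (this is not the paper's own representation; the point type, tolerance, and function names below are hypothetical), a trajectory can be kept as a time-stamped polyline and two trajectories compared either exactly or within a tolerance:

```python
# Minimal sketch (not the paper's actual representation): a trajectory as a
# time-stamped polyline, with an exact match and a tolerance-based similarity match.
from dataclasses import dataclass
from math import hypot

@dataclass
class TrajectoryPoint:
    frame: int      # sampling time (frame number)
    x: float        # object centroid, x coordinate
    y: float        # object centroid, y coordinate

def exact_match(a: list[TrajectoryPoint], b: list[TrajectoryPoint]) -> bool:
    """True if both trajectories visit identical positions at identical frames."""
    return a == b

def similar_match(a: list[TrajectoryPoint], b: list[TrajectoryPoint], tol: float = 5.0) -> bool:
    """True if corresponding samples are within `tol` pixels of each other."""
    if len(a) != len(b):
        return False
    return all(p.frame == q.frame and hypot(p.x - q.x, p.y - q.y) <= tol
               for p, q in zip(a, b))

if __name__ == "__main__":
    t1 = [TrajectoryPoint(i, 10 * i, 5 * i) for i in range(5)]
    t2 = [TrajectoryPoint(i, 10 * i + 2, 5 * i - 1) for i in range(5)]
    print(exact_match(t1, t2), similar_match(t1, t2))   # False True
```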
2002
Abstract In this paper, we present an efficient video data model to represent moving trajectories of video objects and spatiotemporal relationships among the video objects. A video clip is segmented into a set of common appearance intervals (CAIs). A CAI is a time interval during which video objects appear together. Transitions among CAIs record the appearance/disappearance of video objects. Depending on their properties, video objects are classified as foreground or background objects.
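A minimal sketch of what a common appearance interval might look like as a data structure, assuming a CAI is simply a frame interval plus the set of visible objects (the class and field names are illustrative, not the paper's):

```python
# Illustrative sketch only: a common appearance interval (CAI) as the set of
# video objects visible during a frame interval, plus the transition between
# two consecutive CAIs (objects appearing/disappearing).
from dataclasses import dataclass

@dataclass
class CAI:
    start_frame: int
    end_frame: int
    objects: frozenset[str]      # labels of video objects visible in the interval

def transition(prev: CAI, curr: CAI) -> tuple[frozenset[str], frozenset[str]]:
    """Return (appeared, disappeared) object sets between two consecutive CAIs."""
    appeared = curr.objects - prev.objects
    disappeared = prev.objects - curr.objects
    return appeared, disappeared

if __name__ == "__main__":
    c1 = CAI(0, 120, frozenset({"car", "pedestrian"}))
    c2 = CAI(121, 300, frozenset({"car", "cyclist"}))
    print(transition(c1, c2))   # appeared: {'cyclist'}, disappeared: {'pedestrian'}
```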
Proceedings of International Workshop on Multimedia Database Management Systems
A key aspect in video modeling is spatial relationships. In this paper we propose a spatial representation for specifying the spatial semantics of video data. Based on such a representation, a set of spatial relationships for salient objects is defined to support qualitative and quantitative spatial properties. The model captures both topological and directional spatial relationships. We present a novel way of incorporating this model into a video model, and integrating the abstract video model into an object database management system which has rich multimedia temporal operations. The integrated model is further enhanced by a spatial inference engine. The expressive power of our video model is validated by some query examples.
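The kind of qualitative relations such a model defines can be illustrated on axis-aligned bounding boxes; the relation names and decision rules below are common simplifications, not necessarily the paper's own definitions:

```python
# Rough sketch of topological and directional relations between two objects,
# computed on axis-aligned bounding boxes. Field and relation names are illustrative.
from dataclasses import dataclass

@dataclass
class Box:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def topological(a: Box, b: Box) -> str:
    if a.xmax < b.xmin or b.xmax < a.xmin or a.ymax < b.ymin or b.ymax < a.ymin:
        return "disjoint"
    if a.xmin <= b.xmin and a.ymin <= b.ymin and a.xmax >= b.xmax and a.ymax >= b.ymax:
        return "contains"
    if b.xmin <= a.xmin and b.ymin <= a.ymin and b.xmax >= a.xmax and b.ymax >= a.ymax:
        return "inside"
    return "overlap"

def directional(a: Box, b: Box) -> str:
    """Direction of a relative to b, using box centres (image coordinates, y grows downward)."""
    ax, ay = (a.xmin + a.xmax) / 2, (a.ymin + a.ymax) / 2
    bx, by = (b.xmin + b.xmax) / 2, (b.ymin + b.ymax) / 2
    horiz = "east" if ax > bx else "west" if ax < bx else ""
    vert = "south" if ay > by else "north" if ay < by else ""
    return (vert + horiz) or "same-position"

if __name__ == "__main__":
    a, b = Box(0, 0, 4, 4), Box(2, 2, 6, 6)
    print(topological(a, b), directional(a, b))   # overlap northwest
```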
2001
This paper presents a symbolic formalism for modeling and retrieving video data via the moving objects contained in the video images. The model integrates the representations of individual moving objects in a scene with the time-varying relationships between them by incorporating both the notions of object tracks and temporal sequences of PIRs (projection interval relationships). The model is supported by a set of operations which form the basis of a moving object algebra. This algebra allows one to retrieve scenes and information from scenes by specifying both spatial and temporal properties of the objects involved. It also provides operations to create new scenes from existing ones. A prototype implementation is described which allows queries to be specified either via an animation sketch or using the moving object algebra.
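PIRs are, broadly, Allen-style relations between the intervals obtained by projecting objects onto an axis; the sketch below only illustrates that general idea with a simplified relation set, not the paper's exact vocabulary:

```python
# Simplified illustration of projection interval relationships (PIRs): each
# object's bounding box is projected onto an axis and the resulting intervals
# are compared with Allen-style relations, one relation per sampled frame.
def allen(a: tuple[float, float], b: tuple[float, float]) -> str:
    """Qualitative relation between intervals a = (a1, a2) and b = (b1, b2)."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1: return "before"
    if b2 < a1: return "after"
    if (a1, a2) == (b1, b2): return "equal"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if b1 <= a1 and a2 <= b2: return "during"
    if a1 <= b1 and b2 <= a2: return "contains"
    return "overlaps"

# x-projections of two objects over three frames -> a temporal sequence of PIRs
track_a = [(10, 20), (15, 25), (30, 40)]
track_b = [(30, 40), (20, 35), (30, 40)]
print([allen(a, b) for a, b in zip(track_a, track_b)])   # ['before', 'overlaps', 'equal']
```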
1997
We propose a data model for representing moving objects in database systems. It is called the Moving Objects Spatio-Temporal (MOST) data model. We also propose Future Temporal Logic (FTL) as the query language for the MOST model, and devise an algorithm for processing FTL queries in the MOST model.
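The central idea of MOST-style dynamic attributes can be pictured as a value stored together with an update time and a rate of change, so current and future values are computed at query time; the field names and the FTL-like check below are simplified illustrations, not the model's actual syntax:

```python
# Sketch of a dynamic attribute (simplified, one-dimensional): a position is
# stored as a value plus an update time and a velocity, so its current or
# future value is derived at query time rather than stored explicitly.
from dataclasses import dataclass

@dataclass
class DynamicAttr:
    value: float       # position at time `updated`
    updated: float     # time of the last explicit update
    speed: float       # assumed constant rate of change

    def at(self, t: float) -> float:
        """Value of the attribute at (possibly future) time t."""
        return self.value + self.speed * (t - self.updated)

# A future-oriented condition such as "the object is within 5 units of x = 100
# sometime in the next 10 time units" can then be checked by evaluation:
pos = DynamicAttr(value=60.0, updated=0.0, speed=5.0)
print(any(abs(pos.at(t) - 100.0) <= 5.0 for t in range(0, 11)))   # True
```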
Proceedings of the Eleventh International Conference on Data Engineering
In this paper, we propose a graphical data model for specifying spatio-temporal semantics of video data. The proposed model segments a video clip into subsegments consisting of objects. Each object is detected and recognized, and the relevant information of each object is recorded. The motions of objects are modeled through their relative spatial relationships as time evolves. Based on the semantics provided by this model, a user can create his/her own object-oriented view of the video database. Using propositional logic, we describe a methodology for specifying conceptual queries involving spatio-temporal semantics and expressing views for retrieving various video clips. Alternatively, a user can sketch the query by exemplifying the concept. The proposed methodology can be used to specify spatio-temporal concepts at various levels of information granularity.
Lecture Notes in Computer Science, 2000
Modeling video data poses a great challenge since they do not have as clear an underlying structure as traditional databases do. We propose a graphical object-based model, called VideoGraph, in this paper. This scheme has the following advantages: (1) In addition to the semantics of individual video events, we capture their temporal relationships as well. (2) The inter-event relationships allow us to deduce implicit video information. (3) Uncertainty can also be handled by associating the video event with a temporal Boolean-like expression. This also allows us to exploit incomplete information. The above features make VideoGraph very flexible in representing various metadata types extracted from diverse information sources. To facilitate video retrieval, we also introduce a formalism for the query language based on path expressions. Query processing involves only simple traversal of the video graphs.
ACM Transactions on Database Systems, 2000
Spatio-temporal databases deal with geometries changing over time. The goal of our work is to provide a DBMS data model and query language capable of handling such time-dependent geometries, including those changing continuously which describe moving objects. Two fundamental abstractions are moving point and moving region, describing objects for which only the time-dependent position, or position and extent, are of interest, respectively. We propose to represent such time-dependent geometries as attribute data types with suitable operations, that is, to provide an abstract data type extension to a DBMS data model and query language.
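As a loose illustration of the moving-point abstraction (the paper defines abstract data types with a much richer set of operations; the sliced, linearly interpolated representation and all names below are assumptions made for the sketch):

```python
# Illustrative sketch of a "moving point" as an attribute data type: a value
# that maps time to a position, stored here as time-stamped samples with
# linear interpolation, plus one example operation on two moving points.
from bisect import bisect_right

class MovingPoint:
    def __init__(self, samples):                 # samples: [(t, x, y), ...]
        self.samples = sorted(samples)

    def at(self, t):
        """Linearly interpolated position at time t (clamped to the sampled range)."""
        ts = [s[0] for s in self.samples]
        i = bisect_right(ts, t)
        if i == 0:
            return self.samples[0][1:]
        if i == len(ts):
            return self.samples[-1][1:]
        (t0, x0, y0), (t1, x1, y1) = self.samples[i - 1], self.samples[i]
        f = (t - t0) / (t1 - t0)
        return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

def distance_at(p: "MovingPoint", q: "MovingPoint", t: float) -> float:
    """Time-dependent distance between two moving points, evaluated at time t."""
    (px, py), (qx, qy) = p.at(t), q.at(t)
    return ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5

p = MovingPoint([(0, 0, 0), (10, 10, 0)])
q = MovingPoint([(0, 0, 5), (10, 10, 5)])
print(distance_at(p, q, 5))   # 5.0
```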
Multimedia Computing and Networking 1997, 1997
One of the key aspects of videos is the temporal relationship between video frames. In this paper we propose a tree-based model for specifying the temporal semantics of video data. We present a unique way of integrating our video model into an object database management system which has rich multimedia temporal operations. We further show how temporal histories are used to model video data and explore the video objectbase using object-oriented techniques. Such a seamless integration gives a uniform interface to end users. The integrated video objectbase management system supports a broad range of temporal queries.
1997
Abstract One of the key aspects of videos is the temporal relationship between video frames. In this paper we propose a tree-based model for specifying the temporal semantics of video data. We present a unique way of integrating our video model into an object database management system which has rich multimedia temporal operations. We further show how temporal histories are used to model video data and explore the video object base using object-oriented techniques.
Moving Object Databases will play a significant role in Geospatial Information Systems, as they allow users to model continuous movements of entities in the database and perform spatio-temporal analysis. For representing and querying moving objects, an algebra with a comprehensive framework of User Defined Types, together with a set of functions on those types, is needed. Moreover, in real-world applications moving objects move along constrained environments such as transportation networks, so an additional algebra for modeling networks is needed as well. These algebras can be incorporated into any data model if their designs are based on available standards such as those of the Open Geospatial Consortium, which provides a common model for existing DBMSs. In this paper, we focus on extending a spatial data model for constrained moving objects. Static and moving geometries in our model are based on Open Geospatial Consortium standards. We also extend Structured Query Language, as a simple and expressive query language, for retrieving, querying, and manipulating spatio-temporal data related to moving objects. Finally, as a proof of concept, we implement a generator that produces data for moving objects constrained by a transportation network. Such a generator primarily aims at traffic planning applications.
2004
We present a framework for representing the trajectories of moving objects and the time-varying results of operations on moving objects. This framework supports the realization of discrete data models of moving objects databases, which incorporate representations of moving objects based on non-linear approximation functions.
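A small sketch of the idea of storing a trajectory as a non-linear approximation function rather than as raw samples, here using a quadratic fit per coordinate with NumPy; the polynomial degree and the storage layout are arbitrary illustrative choices, not the framework's actual representation:

```python
# Sketch: approximate one coordinate of a sampled trajectory by a quadratic,
# keep only the coefficients, and evaluate the stored function at any time.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = np.array([0.0, 1.2, 4.1, 8.8, 16.2])          # roughly x = t**2

coeffs = np.polyfit(t, x, deg=2)                   # compact representation of the movement
x_approx = np.polyval(coeffs, 2.5)                 # position at an unsampled time
print(coeffs.round(2), round(float(x_approx), 2))
```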
2003
Abstract Whereas earlier work on spatiotemporal databases generally focused on geometries changing in discrete steps, the emerging area of moving objects databases supports geometries changing continuously. Two important abstractions are moving point and moving region, modelling objects for which only the time-dependent position, or also the shape and extent are relevant, respectively.
International Journal of Database Management Systems, 2012
KEYWORDS: Trajectory meta-model, moving object database, space-time path, space-time ontology, event ontology.
IEEE Transactions on Multimedia, 2003
In the past few years, modeling and querying video databases have been a subject of extensive research to develop tools for effective search of videos. In this paper, we present a hierarchical approach to model videos at three levels: the object level, the frame level, and the shot level. The model captures the visual features of individual objects at the object level, visual-spatio-temporal (VST) relationships between objects at the frame level, and time-varying visual features and time-varying VST relationships at the shot level. We call the combination of the time-varying visual features and the time-varying VST relationships a content trajectory, which is used to represent and index a shot. A novel query interface that allows users to describe the time-varying contents of complex video shots, such as those of skiers, soccer players, etc., by sketch and feature specification is presented. Our experimental results prove the effectiveness of modeling and querying shots using the content trajectory approach.
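Loosely, a content trajectory can be pictured as a time-ordered sequence of per-frame records combining object features with the VST relations holding in that frame; the structure below is only a sketch with made-up field names and vocabularies, not the paper's index structure:

```python
# Sketch of a "content trajectory": for each sampled frame of a shot we keep
# the objects' visual features together with the pairwise relations holding at
# that frame, and the shot is indexed by this time-ordered sequence.
from dataclasses import dataclass, field

@dataclass
class FrameDescription:
    frame: int
    features: dict[str, dict]               # object label -> visual features (colour, size, ...)
    relations: dict[tuple[str, str], str]   # (object, object) -> qualitative relation

@dataclass
class ContentTrajectory:
    shot_id: str
    frames: list[FrameDescription] = field(default_factory=list)

shot = ContentTrajectory("shot-042")
shot.frames.append(FrameDescription(
    frame=0,
    features={"skier": {"colour": "red", "size": "small"}},
    relations={("skier", "slope"): "on-top-of"},
))
print(len(shot.frames), shot.frames[0].relations)
```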
2008
Trajectory properties are spatio-temporal properties that describe the changes of spatial (topological) relationships of one moving object with respect to regions and trajectories of other moving objects. Trajectory properties can be viewed as continuous changes of an object’s location resulting in a continuous change in the topological relationship between this object and other entities of interest. In this paper we develop a query language TQ for expressing trajectory properties. Our model and query language are based on the framework of constraint query languages. We present some preliminary complexity and expressive power results for the proposed language.
IEICE Transactions on Information and Systems
Recently, two approaches to indexing and retrieving videos have been investigated. One approach utilized the visual features of individual objects, and the other exploited the spatio-temporal relationships between multiple objects. In this paper, we integrate both approaches into a new video model, called the Visual-Spatio-Temporal (VST) model, to represent videos. The visual features are modeled in a topological approach and integrated with the spatio-temporal relationships. As a result, we define rich sets of VST relationships which support and simplify the formulation of more semantic queries. An intuitive query interface which allows users to describe VST features of video objects by sketch and feature specification is presented. The conducted experiments prove the effectiveness of modeling and querying videos by the visual features of individual objects and the VST relationships between multiple objects.
2003
Abstract The BilVideo video database management system provides integrated support for spatiotemporal and semantic queries for video. A knowledge base, consisting of a fact base and a comprehensive rule set implemented in Prolog, handles spatio-temporal queries. These queries contain any combination of conditions related to direction, topology, 3D relationships, object appearance, trajectory projection, and similarity-based object trajectories.
International Conference on Information Technology: Coding and Computing, 2004. Proceedings. ITCC 2004., 2004
This paper presents a framework for a system for the query and retrieval of video data based on video events in huge video repositories. The events are formulated using domain-independent event primitives which are represented by spatio-temporal relationships between objects in the video scenes. Complex events are expressible as combinations of simpler events. This facilitates support for event queries from a variety of points of view. In addition, the framework is expected to be adaptable to multiple domains.
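The composition of complex events from primitives can be sketched as predicates over consecutive frames combined by simple operators; the primitive, combinator, and frame representation below are all hypothetical, not the framework's actual event language:

```python
# Sketch: an event primitive is a predicate over a pair of consecutive frame
# descriptions, and combinators build complex events from simpler ones.
def approaches(a: str, b: str):
    """Primitive: the distance between a and b decreases from one frame to the next."""
    def check(prev: dict, curr: dict) -> bool:
        return curr["dist"][(a, b)] < prev["dist"][(a, b)]
    return check

def both(e1, e2):
    """Complex event: both sub-events hold over the same pair of frames."""
    return lambda prev, curr: e1(prev, curr) and e2(prev, curr)

frames = [{"dist": {("car", "gate"): 40, ("truck", "gate"): 12}},
          {"dist": {("car", "gate"): 25, ("truck", "gate"): 15}}]

event = both(approaches("car", "gate"), approaches("truck", "gate"))
print(event(frames[0], frames[1]))   # False: the truck moves away from the gate
```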