2009, International Journal of Digital Culture and Electronic Tourism
The paper presents an ontological approach to a semantic-aware information retrieval and browsing framework that facilitates user access to preferred content. Through ontologies, the system expresses the key entities and relationships describing learning material in a formal, machine-processable representation. Such an ontology-based knowledge representation can be used for content analysis and concept recognition, for reasoning processes, and for enabling user-friendly, intelligent multimedia content search and retrieval.
2017
In recent years, the growing use of video-based applications has revealed the need to extract the content of videos and to model how that content is semantically interrelated. In this paper we propose a semantic content extraction system that allows users to query and retrieve objects, events, and concepts that are extracted automatically, by building an ontology of events and related interests. The paper also discusses techniques for the semantic, ontology-based retrieval of multimedia content.
Lecture Notes in Computer Science, 2007
Ontologies are defined as the representation of the semantics of terms and their relationships. Traditionally, they consist of concepts, concept properties, and relationships between concepts, all expressed in linguistic terms. In order to support effectively video annotation and content-based retrieval the traditional linguistic ontologies should be extended to include structural video information and perceptual elements such as visual data descriptors.
2007
We outline DLMedia, an ontology-mediated multimedia information retrieval system that combines logic-based retrieval with multimedia feature-based similarity retrieval. An ontology layer may be used to define (in terms of a DLR-Lite-like description logic) the relevant abstract concepts and relations of the application domain, while a content-based multimedia retrieval system is used for feature-based retrieval.
International Journal of Multimedia Data Engineering and Management, 2011
This paper examines video retrieval based on the Query-By-Example (QBE) approach, where shots relevant to a query are retrieved from large-scale video data based on their similarity to example shots. This involves two crucial problems: first, similarity in features does not necessarily imply similarity in semantic content; second, computing the similarity of a huge number of shots to example shots is computationally expensive. The authors have developed a method that can filter out a large number of shots irrelevant to a query, based on a video ontology, a knowledge base about the concepts displayed in a shot. The method utilizes various concept relationships (e.g., generalization/specialization, sibling, part-of, and co-occurrence) defined in the video ontology. In addition, although the video ontology assumes that shots are accurately annotated with concepts, accurate annotation is difficult due to the diversity of forms and appearances of the concepts. Demps...
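The ontology-based filtering step described above can be sketched as follows. All concept names and relations here are invented for illustration (the paper's ontology is far richer): a shot survives the filter only if one of its annotated concepts equals, or is related in the ontology to, a concept of the example shots, so that the expensive feature-similarity computation runs on far fewer shots.

```python
# Hypothetical relation table: concept -> concepts related in the video
# ontology via any relation type (generalization, sibling, part-of, co-occurrence).
RELATED = {
    "car": {"vehicle", "road", "truck"},
    "truck": {"vehicle", "car", "road"},
    "kitchen": {"indoor", "food"},
}

def is_candidate(shot_concepts, example_concepts):
    """Keep a shot if any of its concepts equals or relates to an example concept."""
    for c in shot_concepts:
        if c in example_concepts or RELATED.get(c, set()) & example_concepts:
            return True
    return False

shots = {"s1": {"truck"}, "s2": {"kitchen"}, "s3": {"car"}}
example = {"car", "road"}
candidates = [s for s, cs in shots.items() if is_candidate(cs, example)]
print(candidates)  # s1 and s3 survive the filter; s2 is pruned before similarity
```

Only the surviving candidates would then go through feature-based similarity ranking against the example shots.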
2009 IEEE International Conference on Semantic Computing, 2009
This paper aims to provide a semantic-web-based video search engine. Currently, there are no scalable integration platforms to represent features extracted from videos so that they can be indexed and searched. Indexing features extracted from videos is a difficult challenge, due to the diverse nature of the features and the temporal dimension of videos. We present a semantic-web-based framework for automatic feature extraction, storage, indexing, and retrieval of videos. Videos are represented as an interconnected set of semantic resources. We also suggest a new ranking algorithm for finding related resources that could be used in a semantic-web-based search engine.
This paper provides an overview of the contents of a tutorial on the subject given by one of the authors at the WI-2013 conference. The domination of multimedia content on the web in recent times has motivated research into its semantic analysis. The tutorial aims to provide a critical overview of the technology and focuses on the application of ontologies to multimedia applications. It establishes the need for a fundamentally different approach to a representation and reasoning scheme with ontologies for the semantic interpretation of multimedia content. It introduces a new ontology representation scheme that enables reasoning with uncertain media properties of concepts in a domain context, and a language, the "Multimedia Web Ontology Language" (MOWL), to support the representation scheme. We discuss approaches to semantic modeling and ontology learning with specific reference to the probabilistic framework of MOWL, and present a couple of illustrative application examples. Further, we discuss the issues of distributed multimedia information systems and how the new ontology representation scheme can create semantic interoperability across heterogeneous multimedia data sources.
2007
The development of appropriate tools and solutions to support effective access to video content is one of the main challenges for video digital libraries. Different techniques for manual and automatic annotation and retrieval have been proposed in recent years. It is common practice to use linguistic ontologies for video annotation and retrieval: video elements are classified by establishing relationships between video contents and linguistic terms that identify domain concepts at different abstraction levels. However, although linguistic terms are appropriate for distinguishing event and object categories, they are inadequate for describing specific or complex patterns of events or video entities. In these cases, pattern specifications can be better expressed using visual prototypes, either images or video clips, that capture the essence of the event or entity. High-level concepts, expressed through linguistic terms, and pattern specifications, represented by visual prototypes, can both be organized into new extended ontologies in which images or video clips are added as specifications of linguistic terms. This paper presents algorithms and techniques that employ such enriched ontologies for video annotation and retrieval, and discusses a solution for their implementation in the soccer video domain. An unsupervised clustering method is proposed to create multimedia-enriched ontologies by defining visual prototypes that represent specific patterns of highlights and adding them as visual concepts to the ontology. An algorithm that uses multimedia-enriched ontologies to perform automatic soccer video annotation is proposed, and results for typical highlights are presented. Annotation is performed by associating occurrences of events, or entities, with higher-level concepts, checking their similarity to visual concepts that are hierarchically linked to higher-level semantics using a dynamic programming approach.
The use of reasoning on the ontology is shown to create complex queries that comprise visual prototypes of actions, their temporal evolution, and their relations.
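The prototype-based annotation step can be illustrated with a toy sketch. The prototype names, feature vectors, and threshold below are assumptions for illustration only (the paper works on real highlight clips with a dynamic programming alignment, not a single cosine comparison): a clip is labelled with the high-level concept whose visual prototype it most resembles, if the similarity clears a threshold.

```python
import math

# Hypothetical visual prototypes: concept -> representative feature vector.
PROTOTYPES = {
    "shot_on_goal": [0.9, 0.1, 0.8],
    "placed_kick": [0.2, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def annotate(clip, prototypes, threshold=0.9):
    """Return the best-matching concept, or None if nothing is similar enough."""
    best, score = None, threshold
    for concept, proto in prototypes.items():
        s = cosine(clip, proto)
        if s >= score:
            best, score = concept, s
    return best

print(annotate([0.85, 0.15, 0.75], PROTOTYPES))  # closest to "shot_on_goal"
```

Because each visual prototype is linked in the ontology to a higher-level linguistic concept, a successful match annotates the clip with that higher-level semantics as well.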
2005
A typical way to perform video annotation is to classify video elements (e.g. events and objects) according to a pre-defined ontology of the video content domain. Ontologies are defined by establishing relationships between linguistic terms that specify domain concepts at different abstraction levels. However, although linguistic terms are appropriate for distinguishing event and object categories, they are inadequate for describing specific or complex patterns of events or video entities. In these cases, pattern specifications can be better expressed using visual prototypes, either images or video clips, that capture the essence of the event or entity. Therefore, enhanced ontologies that include both visual and linguistic concepts can support video annotation down to the level of detail of pattern specification. This paper presents algorithms and techniques that employ such enriched ontologies for video annotation and retrieval, and discusses a solution for their implementation in the soccer video domain. An unsupervised clustering method is proposed to create pictorially enriched ontologies by defining visual prototypes that represent specific patterns of highlights and adding them as visual concepts to the ontology.
ACM SIGMOD Record, 1999
Providing concept-level access to video data requires video management systems tailored to the domain of the data. Effective indexing and retrieval for high-level access mandate the use of domain knowledge. This paper proposes an approach based on knowledge models for building domain-specific video information systems. The key issues in such systems are identified and discussed.
2007
Effective use of multimedia digital libraries requires efficient content annotation and retrieval tools. In this paper, multimedia ontologies that include both linguistic and dynamic visual ontologies are presented, and their implementation for the soccer video domain is shown.
Lecture Notes in Computer Science, 2004
Domain ontologies are very useful for indexing, query specification, retrieval and filtering, user interfaces, and even information extraction from audiovisual material. The dominant emerging language standard for the description of domain ontologies is OWL. We describe here a methodology and software that we have developed for the interoperability of OWL with the complete MPEG-7 MDS, so that domain ontologies described in OWL can be transparently integrated with MPEG-7 MDS metadata. This allows applications that recognize and use the MPEG-7 MDS constructs to make use of domain ontologies for tasks such as indexing, retrieval, and filtering, resulting in more effective user retrieval of and interaction with audiovisual material.
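The integration idea can be sketched with plain RDF-style triples. All URIs and the anchor point below are illustrative assumptions, not the paper's actual mapping rules: a domain-ontology class hierarchy described in OWL is anchored under an MPEG-7 MDS construct, so an MPEG-7-aware application can follow `rdfs:subClassOf` links from a domain concept up to a construct it already understands.

```python
RDFS_SUBCLASS = "rdfs:subClassOf"

triples = [
    # Domain ontology, described in OWL (hypothetical soccer domain).
    ("soccer:Player", "rdf:type", "owl:Class"),
    ("soccer:Goalkeeper", RDFS_SUBCLASS, "soccer:Player"),
    # Bridge: the domain root is anchored to an MPEG-7 MDS construct
    # (mpeg7:AgentObjectType is used here purely as an illustrative name).
    ("soccer:Player", RDFS_SUBCLASS, "mpeg7:AgentObjectType"),
]

def superclasses(cls, triples):
    """All classes reachable from cls via rdfs:subClassOf (transitive closure)."""
    result = set()
    frontier = [cls]
    while frontier:
        c = frontier.pop()
        for s, p, o in triples:
            if s == c and p == RDFS_SUBCLASS and o not in result:
                result.add(o)
                frontier.append(o)
    return result

print(superclasses("soccer:Goalkeeper", triples))
```

An application that indexes by MPEG-7 MDS types can thus classify a `soccer:Goalkeeper` instance under the MDS construct without any soccer-specific code.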
2006
Effective use of multimedia digital libraries requires efficient content annotation and retrieval tools. MOM (Multimedia Ontology Manager) is a complete system that allows the creation of multimedia ontologies, supports automatic annotation and the creation of extended text (and audio) commentaries for video sequences, and permits complex queries through reasoning on the ontology.
Nowadays, the number of video documents available on the web, such as educational courses, is increasing significantly. However, today's information retrieval systems cannot return to users (students or teachers) the parts of those videos that meet their exact needs, as expressed by a query consisting of semantic information. In this paper, we present a model of the pedagogical knowledge in such videos. This knowledge is used throughout the process of indexing and semantically searching instructional video segments. Our experimental results show that the proposed approach is promising.
Proceedings of the 13th annual ACM international conference on Multimedia - MULTIMEDIA '05, 2005
To ensure access to growing video collections, annotation is becoming more and more important. Using background knowledge in the form of ontologies or thesauri is a way to facilitate annotation in a broad domain. Current ontologies are not suitable for (semi-)automatic annotation of visual resources, as they contain little visual information about the concepts they describe. We investigate how an ontology that does contain visual information can facilitate annotation in a broad domain, and identify the requirements that a visual ontology has to meet. Based on these requirements, we create a visual ontology out of two existing knowledge corpora (WordNet and MPEG-7) by creating links between visual and general concepts. We test the performance of the ontology on 40 shots of news video and discuss the added value of each visual property.
As multimedia content on the internet increases day by day, efficient methods for retrieving these huge amounts of data are required. Video annotation is one of the most widely used methods to analyze and retrieve video data. The process of video annotation is complicated because it requires a large amount of processing to analyze the contents of a video. This paper introduces an ontology-based video annotation system in which HoG features are used to train the classifiers. The SIFT and HoG features of the images are extracted and used to train the classifiers, allowing a comparison of classifier performance. The results are analyzed to determine which feature is better for training the classifier to obtain a more accurate annotated video database. Retrieval of videos from the annotated database based on objects is also demonstrated.
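To make the HoG feature concrete, here is a deliberately simplified, stdlib-only sketch of a histogram of oriented gradients for a single cell. It is not the descriptor used by the paper or by OpenCV (real HoG uses overlapping cells, block normalization, and interpolated binning); it only shows the core idea of accumulating gradient magnitudes into orientation bins.

```python
import math

def hog_1cell(gray, bins=9):
    """Toy single-cell HoG: bin unsigned gradient orientations, weighted
    by gradient magnitude, over the interior pixels of a grayscale patch.
    (Illustrative only; real HoG adds blocks, normalization, interpolation.)"""
    h, w = len(gray), len(gray[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]   # central difference, x
            gy = gray[y + 1][x] - gray[y - 1][x]   # central difference, y
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang / 180 * bins) % bins] += mag
    return hist

# A patch with a single vertical edge: all gradient energy lands in bin 0.
patch = [[0, 0, 10, 10] for _ in range(4)]
print(hog_1cell(patch))
```

Vectors like this (one per cell, concatenated over a grid) are what would be fed to the classifier during training.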
Recent advances in digital video analysis and retrieval have made video more accessible than ever. The representation and recognition of events in a video are important for a number of tasks such as video surveillance, video browsing, and content-based video indexing. Raw data and low-level features alone are not sufficient to fulfill users' needs; a deeper understanding of the content at the semantic level is required. Current manual techniques are inefficient, subjective, costly in time, and limit querying capabilities. Here, we propose a semantic content extraction system that allows the user to query and retrieve objects, events, and concepts that are extracted automatically. We introduce an ontology-based fuzzy video semantic content model that uses spatial/temporal relations in event and concept definitions. This meta-ontology definition provides a widely applicable rule construction standard that allows the user to construct an ontology for a given domain. In addition to domain ontologies, we use additional rule definitions (without using an ontology) to define some complex situations more effectively. The proposed framework has been fully implemented and tested on three different domains, and it provides satisfactory results.
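The fuzzy spatial/temporal relations mentioned above can be sketched briefly. The membership-function shapes, the scale parameter, and the example event are assumptions for illustration, not the paper's definitions: each relation yields a degree in [0, 1], and an event definition combines the degrees of its component relations with a t-norm (here: minimum).

```python
import math

def near(distance, scale=100.0):
    """Fuzzy membership of the spatial relation 'near': 1.0 at distance 0,
    decaying smoothly with distance (exponential shape is an assumption)."""
    return math.exp(-distance / scale)

def before(t1, t2):
    """Simple temporal relation: degree 1.0 if t1 precedes t2, else 0.0."""
    return 1.0 if t1 < t2 else 0.0

def event_confidence(*degrees):
    """Fuzzy AND over the component relations of an event definition
    (minimum is a common t-norm choice)."""
    return min(degrees)

# Hypothetical event "hand-over": actor A near actor B, and A reaches
# (t=3.0) before B takes (t=5.0).
conf = event_confidence(near(20.0), before(3.0, 5.0))
print(conf)
```

A query can then rank detected events by this confidence instead of forcing a crisp yes/no decision on each spatial or temporal predicate.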
Procedia Computer Science, 2016
Numerous educational video lectures, CCTV surveillance footage, transport recordings, and other material have increased the impact of multimedia video content. To make large video databases practical, video data has to be indexed automatically so that relevant material can be searched and retrieved. An annotation is a markup reference to data in a video that improves the video's accessibility. Video annotation extracts the significant data present in a video and attaches it to the video as metadata, which benefits retrieval, browsing, analysis, searching, comparison, and categorization; it accelerates retrieval speed and ease of access, permits faster and better understanding of video content, and decreases the human time and effort needed to study videos. The proposed system provides effortless access to video data and decreases the time necessary to access and evaluate a video. Ontology-based video annotation helps the user obtain semantic information from a video, which is essential for finding the needed data within it.
Multimedia Tools and Applications, 2008
In this paper we present a framework for unified, personalized access to heterogeneous multimedia content in distributed repositories. Focusing on the semantic analysis of multimedia documents, metadata, user queries, and user profiles, it contributes to bridging the gap between the semantic nature of user queries and raw multimedia documents. The proposed approach takes visual content analysis results as input, and also analyzes and exploits associated textual annotation, in order to extract the underlying semantics, construct a semantic index, and classify documents into topics, based on a unified knowledge and semantics representation model. It may then accept user queries and, carrying out semantic interpretation and expansion, retrieve documents from the index and rank them according to user preferences, similarly to text retrieval. All processes are based on a novel semantic processing methodology employing fuzzy algebra and principles of taxonomic knowledge representation. The first part of this work, presented in this paper, deals with data and knowledge models, the manipulation of multimedia content annotations, and semantic indexing, while the second part will continue with the use of the extracted semantic information for personalized retrieval.
… information retrieval on …, 2007
This paper proposes a new method for using implicit user feedback from clickthrough data to provide personalized ranking of results in a video retrieval system. The annotation based search is complemented with a feature based ranking in our approach. The ranking algorithm uses belief revision in a Bayesian Network, which is derived from a multimedia ontology that captures the probabilistic association of a concept with expected video features. We have developed a content model for videos using discrete feature states to enable Bayesian reasoning and to alleviate on-line feature processing overheads. We propose a reinforcement learning algorithm for the parameters of the Bayesian Network with the implicit feedback obtained from the clickthrough data.
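A minimal sketch of the feedback-learning idea, with parameter names and the update rule assumed for illustration (the paper uses belief revision over a full Bayesian Network derived from a multimedia ontology, not this single-link model): each concept-feature link holds a Beta-distributed probability that a concept exhibits a discrete feature state, and clicked results reinforce or weaken that association.

```python
class ConceptFeatureLink:
    """Beta-Bernoulli estimate of P(feature state present | concept),
    updated from implicit clickthrough feedback (hypothetical model)."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha   # pseudo-count: feature observed with the concept
        self.beta = beta     # pseudo-count: feature absent with the concept

    @property
    def p(self):
        """Expected probability of the feature given the concept."""
        return self.alpha / (self.alpha + self.beta)

    def feedback(self, feature_present, lr=1.0):
        """Update from one clicked result assumed to depict the concept."""
        if feature_present:
            self.alpha += lr
        else:
            self.beta += lr

link = ConceptFeatureLink()            # uniform prior: p = 0.5
for present in [True, True, True, False]:   # four clicked results
    link.feedback(present)
print(link.p)  # the association strengthens above the prior
```

In a full system, many such updated parameters would feed back into the network used to re-rank results, personalizing the ranking without any explicit relevance judgments from the user.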
2010
In this technical demonstration we present a novel web-based tool that allows user-friendly semantic browsing of video collections, based on ontologies, concepts, concept relations, and concept clouds. The system is developed as a Rich Internet Application (RIA) to achieve a responsiveness and ease of use that cannot be obtained with other web application paradigms, and uses streaming to access and inspect the videos.