Companion Proceedings of The Web Conference 2018 (WWW '18)
Despite its importance to the Web, multimedia content is often neglected when building and designing knowledge-bases: though descriptive metadata and links are often provided for images, video, etc., the multimedia content itself is usually treated as opaque and is rarely analysed. IMGpedia is an effort to bring together the images of Wikimedia Commons (including their visual information) and relevant knowledge-bases such as Wikidata and DBpedia. The result is a knowledge-base that incorporates similarity relations between the images based on visual descriptors, as well as links to the resources of Wikidata and DBpedia that relate to each image. Using the IMGpedia SPARQL endpoint, it is then possible to perform visuo-semantic queries, combining the semantic facts extracted from the external resources with the similarity relations of the images. This paper presents a new web interface for browsing and exploring the IMGpedia dataset in a more user-friendly manner, as well as new visuo-semantic queries that can be answered using 6 million recently added links from IMGpedia to Wikidata. We also discuss future directions we foresee for the IMGpedia project.
CCS CONCEPTS: • Information systems → Multimedia databases; Wikis
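To make the flavour of such visuo-semantic queries concrete, here is a minimal sketch of querying the endpoint from Python. The endpoint URL and the imo:similar predicate are assumptions drawn from the IMGpedia papers; the live schema may differ.

```python
# A minimal sketch of a visuo-semantic query against the IMGpedia
# SPARQL endpoint. The endpoint URL, prefix, and predicate name
# (imo:similar) are assumptions based on the IMGpedia papers and may
# not match the deployed schema exactly.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://imgpedia.dcc.uchile.cl/sparql"  # assumed endpoint URL

QUERY = """
PREFIX imo: <http://imgpedia.dcc.uchile.cl/ontology#>
SELECT ?img ?similar WHERE {
  ?img imo:similar ?similar .
} LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["img"]["value"], "~", row["similar"]["value"])
```

In practice such a query would be joined with Wikidata or DBpedia facts (e.g. restricting ?img to images appearing in articles about a given topic) to obtain the combined visuo-semantic results the abstract describes.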
2016
Linked Data rarely takes into account multimedia content, which forms a central part of the Web. To explore the combination of Linked Data and multimedia, we are developing IMGpedia: we compute content-based descriptors for images used in Wikipedia articles and subsequently propose to link these descriptions with legacy encyclopaedic knowledge-bases such as DBpedia and Wikidata. On top of this extended knowledge-base, our goal is to consider a unified query system that accesses both the encyclopaedic data and the image data. We could also consider enhancing the encyclopaedic knowledge based on rules applied to co-occurring entities in images, or content-based analysis, for example. Abstracting away from IMGpedia, we explore generic methods by which the content of images on the Web can be described in a standard way and can be considered as first-class citizens on the Web of Data, allowing, for example, for combining structured queries with image similarity search. This short paper t...
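As a concrete illustration of the kind of content-based descriptor the paper mentions, the sketch below computes a grey-level histogram for an image. The bin count and normalisation are illustrative choices, not necessarily the exact parameters IMGpedia uses.

```python
# A sketch of a simple content-based visual descriptor: an L1-normalised
# grey-level histogram. Parameters (64 bins, Manhattan distance) are
# illustrative assumptions, not IMGpedia's published configuration.
import numpy as np
from PIL import Image

def grey_histogram_descriptor(path: str, bins: int = 64) -> np.ndarray:
    """Return an L1-normalised grey-level histogram for an image file."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)  # normalise so descriptors are comparable

def manhattan(a: np.ndarray, b: np.ndarray) -> float:
    """Images whose descriptors are close are candidates for a similarity link."""
    return float(np.abs(a - b).sum())
```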
2017
IMGpedia is a large-scale linked dataset that incorporates visual information of the images from the Wikimedia Commons dataset: it brings together descriptors of the visual content of 15 million images, 450 million visual-similarity relations between those images, links to image metadata from DBpedia Commons, and links to the DBpedia resources associated with individual images. In this paper we describe the creation of the IMGpedia dataset, provide an overview of its schema and statistics of its contents, offer example queries that combine semantic and visual information of images, and discuss other envisaged use-cases for the dataset.
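A plausible way to materialise similarity relations like the 450 million mentioned above is to link every image to its k nearest neighbours in descriptor space. The brute-force sketch below illustrates the idea; the value of k, the Manhattan distance, and the exhaustive search are assumptions, and at IMGpedia's scale an approximate nearest-neighbour index would be required instead.

```python
# A sketch of materialising visual-similarity links: connect each image
# to its k nearest neighbours in descriptor space. Brute force is shown
# for clarity only; 15 million images would need an approximate index.
import numpy as np

def knn_similarity_links(descriptors: np.ndarray, k: int = 10):
    """Yield (i, j) pairs where image j is among image i's k nearest neighbours."""
    descriptors = np.asarray(descriptors, dtype=np.float64)
    for i, d in enumerate(descriptors):
        dists = np.abs(descriptors - d).sum(axis=1)  # Manhattan distance to all images
        dists[i] = np.inf                            # exclude the image itself
        for j in np.argpartition(dists, k)[:k]:
            yield i, int(j)
```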
2017
IMGpedia is a linked dataset that provides a public SPARQL endpoint where users can run queries that combine the visual similarity of images from Wikimedia Commons with semantic information from existing knowledge-bases. Our demo will show example queries that capture the potential of the data currently stored in IMGpedia. We also plan to discuss potential use-cases for the dataset and ways in which we can improve both the quality of the information it captures and the expressiveness of the queries.
Semantic Web Journal, 2022
This paper presents ArtVision, a Semantic Web application that integrates computer vision APIs with the ResearchSpace platform, allowing for the matching of similar artworks and photographs across cultural heritage image collections. The field of Digital Art History stands to benefit a great deal from computer vision, as numerous projects have already made good progress in tackling issues of visual similarity, artwork classification, style detection, and gesture analysis, among others. Pharos, the International Consortium of Photo Archives, is building its platform using the ResearchSpace knowledge system, an open-source Semantic Web platform that allows heritage institutions to publish and enrich collections as Linked Open Data through CIDOC-CRM and other ontologies. Using the images and artwork data of the Pharos collections, this paper outlines the methodologies used to integrate visual-similarity data from a number of computer vision APIs, allowing users to discover similar artworks and generating canonical URIs for each artwork.
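The abstract does not name the specific vision APIs, so the sketch below substitutes a simple perceptual hash (via the imagehash library) to illustrate the matching step: near-duplicate photographs of the same artwork tend to have hashes within a small Hamming distance. The threshold is a hypothetical choice.

```python
# A sketch of cross-collection artwork matching using perceptual hashing
# as a stand-in for the (unnamed) vision APIs. The threshold of 8 bits
# is an illustrative guess, not a value from the paper.
from PIL import Image
import imagehash

def match_artworks(paths, threshold: int = 8):
    """Yield pairs of image paths whose perceptual hashes are close."""
    hashes = [(p, imagehash.phash(Image.open(p))) for p in paths]
    for i, (p1, h1) in enumerate(hashes):
        for p2, h2 in hashes[i + 1:]:
            if h1 - h2 <= threshold:  # Hamming distance between hashes
                yield p1, p2
```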
The Sixth International Conference on Knowledge Capture (K-CAP 2011), 25–29 June 2011, Banff, Alberta, Canada, 2011
Enriching knowledge bases with multimedia information makes it possible to complement textual descriptions with visual and audio information. Such complementary information can help users to understand the meaning of assertions, and in general improve the user experience with the knowledge base. In this paper we address the problem of how to enrich ontology instances with candidate images retrieved from existing Web search engines. DBpedia has evolved into a major hub in the Linked Data cloud, interconnecting millions of entities organized under a consistent ontology. Our approach taps into the Wikipedia corpus to gather context information for DBpedia instances and takes advantage of image tagging information when this is available to calculate semantic relatedness between instances and candidate images. We performed experiments with focus on the particularly challenging problem of highly ambiguous names. Both methods presented in this work outperformed the baseline. Our best method leveraged context words from Wikipedia, tags from Flickr and type information from DBpedia to achieve an average precision of 80%.
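As a rough sketch of the relatedness computation described above, one could score a candidate image by the cosine similarity between its tags (e.g. from Flickr) and the context words gathered from Wikipedia. The bag-of-words model and the ranking step here are assumptions, not the paper's exact method.

```python
# A sketch of scoring candidate images for a DBpedia instance by the
# cosine similarity between Wikipedia context words and image tags.
# The bag-of-words weighting is an assumption, not the paper's method.
from collections import Counter
from math import sqrt

def cosine_relatedness(context_words: list[str], image_tags: list[str]) -> float:
    """Cosine similarity between two bags of words, in [0, 1]."""
    a, b = Counter(context_words), Counter(image_tags)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Candidates would then be ranked by this score, optionally filtered by
# DBpedia type information to handle highly ambiguous names.
```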
2013
DBpedia, a hub of interlinked knowledge on the Web, offers researchers a rich source of domain concepts for enriching resources and extracting information. Integrating DBpedia with an ontology-based approach to image retrieval supplies images with complete and rich semantic information. The main problem in image retrieval is the semantic gap: the distance between users' high-level interpretations of an image and the low-level image features stored for indexing and querying. Ontology-based image retrieval is an effective way to bridge this gap because it focuses on capturing and presenting semantic content, which has the potential to satisfy user needs. A recent trend in ontology-based image retrieval, known as multi-modality ontology, fuses the two basic modalities of images: textual content (keywords) and visual features. In this paper, we present a framework that integrates the structured content of DBpedia resources with a multi-modality ontology-based image extraction and retrieval system, and we describe how this framework bridges the semantic gap in content-based image retrieval (CBIR). Our goal is to populate a knowledge base with online news images covering 12 sport types from BBC sport news, where each item has three main parts: the image, its caption, and the news text. The system yields high precision and covers diverse sports images for specific entities; a multi-modality ontology retrieval system with complete relational facts about entities improves retrieval precision.
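A minimal sketch of the multi-modality fusion idea: combine a textual match score with a visual match score into a single ranking score. The linear combination and the weight alpha are illustrative assumptions rather than the paper's actual ontology-based fusion mechanism.

```python
# A sketch of fusing the two image modalities into one ranking score.
# The weighted sum and alpha = 0.6 are illustrative assumptions.
def fused_score(text_sim: float, visual_sim: float, alpha: float = 0.6) -> float:
    """Weighted combination of textual and visual similarity, both in [0, 1]."""
    return alpha * text_sim + (1.0 - alpha) * visual_sim
```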
This paper describes the participation of DAEDALUS in the ImageCLEF 2010 Wikipedia Retrieval task. The main focus of our experiments is to evaluate the impact on the image retrieval process of incorporating semantic information extracted only from the textual information provided as metadata of the image itself, as compared to expanding it with contextual information gathered from the document where the image is referenced. For the semantic annotation, the DBpedia ontology and the YAGO classification schema are used. As expected, the obtained results show that, in general, the textual information attached to a given image is not able to fully represent certain features of the image. Furthermore, the use of semantic information in the process of multimedia information extraction poses two hard challenges still to solve: how to automatically extract the high-level features associated with a multimedia resource, and, once the resource has been semantically tagged, which features must be ...
… Workshop on Semantic …, 2006
Abstract. Semantic and semi-structured wiki implementations, which extend traditional, purely string-based wikis by adding machine-processable metadata, suffer from a lack of support for media management. Currently, it is difficult to maintain semantically rich metadata for both ...
2008
Semantic-based information retrieval is an area of ongoing work. In this paper we present a solution for providing semantic support to multimedia content retrieval in an e-Learning environment, where a large number of multimedia objects and information sources are often used in combination. Semantic support is provided through intelligent use of Wikipedia combined with statistical Information Extraction techniques.