2013, International Symposium/Conference on Music Information Retrieval
In this paper we present the Audio Effects Ontology for the ontological representation of audio effects in music production workflows. Designed as an extension to the Studio Ontology, its aim is to provide a framework for the detailed description and sharing of information about audio effects, their implementations, and how they are applied in real-world production scenarios. The ontology enables capturing and structuring data about the use of audio effects, and thus facilitates reproducibility of audio effect application as well as detailed analysis of music production practices. Furthermore, the ontology may inform the creation of metadata standards for adaptive audio effects that map high-level semantic descriptors to control parameter values. The ontology uses Semantic Web technologies that enable knowledge representation and sharing, and is based on modular ontology design methodologies. It is evaluated by examining how it fulfils requirements in a number of production and retrieval use cases.
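The vocabulary itself is defined in the paper; as a rough illustration of the kind of RDF data such an ontology enables, here is a minimal rdflib sketch of one reproducible effect-application event. All namespace URIs and term names (e.g. `afx:EffectApplication`, `afx:parameterSetting`) are hypothetical stand-ins, not the ontology's actual terms.

```python
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, RDFS, XSD

# Hypothetical namespaces standing in for the Audio Effects Ontology and
# the Studio Ontology; the real term URIs are defined in the papers.
AFX = Namespace("http://example.org/audio-effects#")
STUDIO = Namespace("http://example.org/studio#")

g = Graph()
g.bind("afx", AFX)
g.bind("studio", STUDIO)

# Describe one effect application event in a mixing session: a reverb
# applied to a vocal track with an explicit parameter setting, so the
# processing step can be reproduced later.
application = BNode()
setting = BNode()
g.add((application, RDF.type, AFX.EffectApplication))
g.add((application, AFX.effect, AFX.Reverb))
g.add((application, STUDIO.appliedTo, STUDIO.vocal_track_1))
g.add((application, AFX.parameterSetting, setting))
g.add((setting, RDFS.label, Literal("decay time")))
g.add((setting, AFX.value, Literal(2.4, datatype=XSD.float)))

print(g.serialize(format="turtle"))
```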
International Semantic Web Conference, 2016
This paper discusses an extension to the Audio Effect Ontology (AUFX-O) for the interdisciplinary classification of audio effect types. The ontology extension implements a unified classification system that draws on knowledge from different music-related disciplines and is designed to facilitate the retrieval of audio effect information based on low-level and semantic aspects. It extends AUFX-O, enabling communication between agents from different disciplines within the field of music creation and production. After briefly discussing the ontology, we show how it can be used to efficiently classify and retrieve effect types.
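A sketch of what such classification-based retrieval could look like with a SPARQL query over a toy dataset; the class and property names (`aufx:EffectType`, `aufx:modifiesPerceptualAttribute`) are invented for illustration, not taken from AUFX-O.

```python
from rdflib import Graph

# Toy dataset mimicking a cross-disciplinary classification: each effect
# type carries a perceptual (semantic) facet and an implementation facet.
data = """
@prefix aufx: <http://example.org/aufx#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

aufx:Flanger a aufx:EffectType ;
    rdfs:label "flanger" ;
    aufx:modifiesPerceptualAttribute aufx:Timbre ;
    aufx:implementedBy aufx:DelayLine .

aufx:Tremolo a aufx:EffectType ;
    rdfs:label "tremolo" ;
    aufx:modifiesPerceptualAttribute aufx:Loudness ;
    aufx:implementedBy aufx:AmplitudeModulation .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Retrieve effect types by the perceptual attribute they modify --
# one of the "semantic aspects" a classification system can index on.
query = """
PREFIX aufx: <http://example.org/aufx#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE {
    ?effect a aufx:EffectType ;
            aufx:modifiesPerceptualAttribute aufx:Timbre ;
            rdfs:label ?label .
}
"""
for row in g.query(query):
    print(row.label)  # -> "flanger"
```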
2016
A variety of audio feature extraction toolsets and feature datasets are used by the MIR community. Their differing conceptual organisations of features and output formats, however, present difficulties in exchanging or comparing data, while very limited means are provided to link features with content and provenance. These issues hinder research reproducibility and the use of multiple tools in combination. We propose novel Semantic Web ontologies (1) to provide a common structure for feature data formats and (2) to represent computational workflows of audio features, facilitating their comparison. The Audio Feature Ontology provides a descriptive framework for expressing different conceptualisations of content-based audio features and for designing linked data formats for them. To accommodate different views in organising features, the ontology does not impose a strict hierarchical structure, leaving this open to task- and tool-specific ontologies that derive from a common vocabulary. The ontologies are based on the analysis of existing feature extraction tools and the MIR literature, which was instrumental in guiding the design process. They are harmonised into a library of modular interlinked ontologies that describe the different entities and activities involved in music creation, production and consumption.
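As a minimal sketch of the idea of linking feature data to content and provenance, the following rdflib snippet records an onset-detection output together with the signal it describes and the tool that computed it. The namespaces and terms (`afo:OnsetFeature`, `afo:computedBy`) are invented stand-ins, not the ontology's real vocabulary.

```python
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, XSD

# Invented namespaces for illustration; the Audio Feature Ontology defines
# its own vocabulary for features, computation workflows and provenance.
AFO = Namespace("http://example.org/afo#")
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("afo", AFO)

# One onset-detection output linked to both the audio it describes and the
# tool that computed it, so results from different extractors stay comparable.
feature = EX.onsets_take1
g.add((feature, RDF.type, AFO.OnsetFeature))
g.add((feature, AFO.describesSignal, EX.take1_wav))
g.add((feature, AFO.computedBy, EX.vamp_onset_plugin))
for t in (0.52, 1.07, 1.61):
    event = BNode()
    g.add((feature, AFO.event, event))
    g.add((event, AFO.time, Literal(t, datatype=XSD.float)))

print(g.serialize(format="turtle"))
```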
2009
This paper discusses architectural aspects of a software library for unified metadata management in audio processing applications. The data incorporates editorial, production, acoustical and musicological features for a variety of use cases, ranging from adaptive audio effects to alternative metadata-based visualisation. Our system is designed to capture information prescribed by modular ontology schemas. This supports the development of intelligent user interfaces and advanced media workflows in music production environments. In an effort to reach these goals, we argue for the need for modularity and interoperable semantics in representing information. We discuss the advantages of extensible Semantic Web ontologies as opposed to using specialised but disharmonious metadata formats. Concepts and techniques permitting seamless integration with existing audio production software are described in detail.
2011
This paper introduces the Studio Ontology Framework for describing and sharing detailed information about music production. The primary aim of this ontology is to capture the nuances of record production by providing an explicit, application and situation independent conceptualisation of the studio environment. We may use the ontology to describe real-world recording scenarios involving physical hardware, or (post) production on a personal computer. It builds on Semantic Web technologies and previously published ontologies for knowledge representation and knowledge sharing.
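To make the kind of description concrete, here is a hypothetical rdflib sketch of a simple recording scenario as a signal chain; the terms (`studio:Microphone`, `studio:connectedTo`) are illustrative placeholders, not the Studio Ontology's actual vocabulary.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

# Hypothetical terms sketching the kind of studio description the ontology
# supports; the actual Studio Ontology vocabulary is defined in the paper.
STUDIO = Namespace("http://example.org/studio#")
EX = Namespace("http://example.org/session#")

g = Graph()
g.bind("studio", STUDIO)

# A minimal recording scenario: a microphone feeding a preamp feeding a DAW.
g.add((EX.mic1, RDF.type, STUDIO.Microphone))
g.add((EX.mic1, RDFS.label, Literal("ribbon mic, vocal booth")))
g.add((EX.preamp1, RDF.type, STUDIO.Preamplifier))
g.add((EX.daw, RDF.type, STUDIO.RecordingDevice))
g.add((EX.mic1, STUDIO.connectedTo, EX.preamp1))
g.add((EX.preamp1, STUDIO.connectedTo, EX.daw))

print(g.serialize(format="turtle"))
```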
Lecture Notes in Computer Science, 2016
Feature extraction algorithms in Music Informatics aim at deriving statistical and semantic information directly from audio signals. These may range from energies in several frequency bands to musical information such as key, chords or rhythm. There is an increasing diversity and complexity of features and algorithms in this domain, and applications call for a common structured representation to facilitate interoperability, reproducibility and machine interpretability. We propose a solution relying on Semantic Web technologies that is designed to serve a dual purpose: (1) to represent computational workflows of audio features and (2) to provide a common structure for feature data, enabling the use of Linked Open Data principles and technologies in Music Informatics. The Audio Feature Ontology is based on the analysis of existing tools and the music informatics literature, which was instrumental in guiding the ontology engineering process. The ontology provides a descriptive framework for expressing different conceptualisations of the audio feature extraction domain and enables designing linked data formats for representing feature data. In this paper, we discuss important modelling decisions and introduce a harmonised ontology library consisting of modular interlinked ontologies that describe the different entities and activities involved in music creation, production and publishing.
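One payoff of representing workflows explicitly is comparing the same feature computed by different tools. A hedged SPARQL sketch over toy data, with all terms (`afo:MFCC`, `afo:computedBy`) invented for illustration:

```python
from rdflib import Graph

# Toy data contrasting the same feature computed by two tools -- the kind
# of workflow provenance the ontology is meant to make explicit.
data = """
@prefix afo: <http://example.org/afo#> .
@prefix ex: <http://example.org/data#> .

ex:mfcc_a a afo:MFCC ; afo:computedBy ex:toolA ; afo:describesSignal ex:track1 .
ex:mfcc_b a afo:MFCC ; afo:computedBy ex:toolB ; afo:describesSignal ex:track1 .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Find all extractors that produced an MFCC for the same signal, so their
# outputs can be compared despite coming from different toolsets.
query = """
PREFIX afo: <http://example.org/afo#>
PREFIX ex: <http://example.org/data#>
SELECT ?tool WHERE {
    ?f a afo:MFCC ;
       afo:describesSignal ex:track1 ;
       afo:computedBy ?tool .
}
"""
for row in g.query(query):
    print(row.tool)
```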
Sound designers use large collections of sounds, either recorded by themselves or bought from commercial library providers. They have to navigate through thousands of sounds in order to find one pertinent to a task. Metadata management software is used, but all annotations are text-based and added by hand, and there is still no widely accepted vocabulary of terms that can be used for annotations. This introduces several metadata issues that make the search process complex, such as ambiguity, synonymy and relativity. This paper addresses these problems through knowledge elicitation and sound design ontology engineering.
2004
Major professional sound effects (SFX) providers offer their collections using standard text-retrieval technologies. SFX cataloging is an error-prone and labor-intensive task. The vagueness of the query specification, normally one or two words, together with the ambiguity and informality of natural language, affects the quality of the search: some relevant sounds are not retrieved and some irrelevant ones are presented to the user. The use of ontologies alleviates some of the ambiguity problems inherent in natural language, yet poses others. It is very complicated to devise and maintain an ontology that accounts for the level of detail needed in a production-size sound effects management system. To address this problem we use WordNet, an ontology that organizes real-world knowledge: for example, it relates doors to locks, to wood, and to the action of knocking. However, a fundamental issue remains: sounds without captions are invisible to users. Content-based audio tools offer perceptual ways of navigating audio collections, such as "find similar sounds", even for unlabeled content, or query-by-example. We describe the integration of semantically enhanced metadata management using WordNet together with content-based methods in a commercial sound effects management system.
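The WordNet side of this is easy to demonstrate with NLTK's real WordNet interface: a one-word query such as "door" can be expanded with synonyms and broader concepts so that differently captioned sounds are still retrieved. This is a sketch of the general query-expansion idea, not the paper's exact algorithm.

```python
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def expand_query(term: str) -> set[str]:
    """Collect lemmas of the term's noun synsets plus their hypernyms."""
    expanded = set()
    for synset in wn.synsets(term, pos=wn.NOUN):
        expanded.update(synset.lemma_names())
        for hyper in synset.hypernyms():
            expanded.update(hyper.lemma_names())
    return expanded

print(expand_query("door"))
# Typically includes e.g. "doorway", "room_access", "movable_barrier", ...
```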
The Semantic Web aims to lift the current Web into semantic repositories where heterogeneous data can be queried and different services can be mashed up. Here we report on ongoing work within the EASAIER project to enable enhanced access to sound archives by integrating archives based on the Music Ontology and providing different search results from different mashups.
Proc. AES 25th Int. Conf …, 2004
Categories or classification schemes offer ways of navigating audio content and greater control over its search and retrieval. The MPEG-7 standard provides description mechanisms and ontology management tools for multimedia documents. We have implemented a classification scheme for sound effects management, inspired by the MPEG-7 standard, on top of an existing lexical network, WordNet. WordNet is a semantic network that organizes over 100,000 concepts of the real world with links among them. We show how to extend WordNet with concepts from the specific domain of sound effects. We review some of the taxonomies for describing sounds acoustically. Mining legacy metadata from sound effects libraries further supplies us with terms. The extended semantic network includes the semantic, perceptual and sound-effects-specific terms in an unambiguous way. We show the usefulness of the approach, easing the task of the librarian and providing greater control over search and retrieval for the user.
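A minimal sketch of what "extending WordNet" can mean in RDF terms: a library-specific concept is grafted onto the network by linking it to an existing synset as its broader term. The URIs and the synset naming scheme below are illustrative assumptions, not the paper's actual representation.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, SKOS

# Sketch of grafting a sound-effects-specific term onto a general-purpose
# lexical network; URIs and the synset naming scheme are invented.
SFX = Namespace("http://example.org/sfx#")
WN = Namespace("http://example.org/wordnet/synset/")

g = Graph()
g.bind("sfx", SFX)
g.bind("skos", SKOS)

# "door creak" is SFX-library vocabulary, not a WordNet concept; anchoring
# it under the existing "door" synset keeps retrieval unambiguous.
g.add((SFX.door_creak, RDF.type, SKOS.Concept))
g.add((SFX.door_creak, RDFS.label, Literal("door creak")))
g.add((SFX.door_creak, SKOS.broader, WN["door-n-01"]))

print(g.serialize(format="turtle"))
```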
Lecture Notes in Computer Science, 2016
This paper introduces the Audio Effect Ontology (AUFX-O), building on previous theoretical models describing audio processing units and workflows in the context of music production. We discuss important conceptualisations of different abstraction layers, why they are necessary for successfully modelling audio effects, and how they are applied. We present use cases concerning the use of effects in music production projects and the creation of audio effect metadata, facilitating a linked data service that exposes information about effect implementations. In doing so, we show how our model facilitates knowledge sharing, reproducibility and analysis of audio production workflows.
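The abstraction-layer idea can be illustrated by describing an effect implementation separately from the abstract effect type it realises, as a linked data service might expose it. All term names (`aufx:EffectImplementation`, `aufx:implements`) are hypothetical stand-ins for AUFX-O's own vocabulary.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

# Hypothetical sketch of effect-implementation metadata; term names stand
# in for AUFX-O's real vocabulary.
AUFX = Namespace("http://example.org/aufx#")
EX = Namespace("http://example.org/impl#")

g = Graph()
g.bind("aufx", AUFX)

# An implementation described separately from the abstract effect type it
# realises -- one of the abstraction layers the paper motivates.
g.add((EX.myverb_v2, RDF.type, AUFX.EffectImplementation))
g.add((EX.myverb_v2, AUFX.implements, AUFX.Reverb))
g.add((EX.myverb_v2, RDFS.label, Literal("MyVerb 2.0 (LV2 plugin)")))
g.add((EX.myverb_v2, AUFX.hasParameter, EX.decay_time))
g.add((EX.decay_time, RDFS.label, Literal("decay time (s)")))

print(g.serialize(format="turtle"))
```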
Music and sound have a rich semantic structure which is clear to the composer and the listener, but which remains mostly hidden to computing machinery. Nevertheless, in recent years, the introduction of software tools for music production has enabled new opportunities for migrating this knowledge from humans to machines. A new generation of these tools may exploit the coupling of sound samples and semantic information for the creation not only of a musical, but also of a “semantic” composition. In this paper we describe an ontology-driven content annotation framework for a web-based audio editing tool. In a supervised approach, during the editing process, the graphical web interface allows the user to annotate any part of the composition with concepts from publicly available ontologies. As a test case, we developed a collaborative web-based audio sequencer that provides users with the functionality to remix audio samples from the Freesound website and subsequently annotate them. The annotation tool can load any ontology and thus gives users the opportunity to augment the work with annotations on the structure of the composition, the musical materials, and the creator’s reasoning and intentions. We believe this approach will provide several novel ways to make not only the final audio product, but also the creative process, first-class citizens of the Semantic Web.
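A rough sketch of what such an annotation could look like as RDF: a concept attached to a time region of a remixed sample. The annotation vocabulary and sample URIs below are invented examples, not the framework's actual schema.

```python
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, XSD

# Illustrative sketch of annotating a region of a web-based composition
# with a concept from a loaded ontology; all term URIs are invented.
ANNO = Namespace("http://example.org/anno#")
EX = Namespace("http://example.org/mix#")

g = Graph()
g.bind("anno", ANNO)

# Attach a semantic annotation to a time region of the remix, recording
# both the concept applied and which sample it covers.
annotation = BNode()
g.add((annotation, RDF.type, ANNO.Annotation))
g.add((annotation, ANNO.onSample, EX.freesound_12345))
g.add((annotation, ANNO.startTime, Literal(12.0, datatype=XSD.float)))
g.add((annotation, ANNO.endTime, Literal(18.5, datatype=XSD.float)))
g.add((annotation, ANNO.concept, EX.RisingTension))

print(g.serialize(format="turtle"))
```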
2010
In recent times, digital music items on the internet have evolved into an enormous information space in which we try to locate the piece of information of our choice by means of search engines. The current approach of searching for music using consumers' keywords/tags is unable to provide satisfactory search results; search and retrieval of music may be improved if music metadata is created by associating end-users' tags with acoustic metadata that is easy to extract automatically from digital music items. Based on this observation, our research objective was to investigate how music producers may be able to annotate music against MPEG-7 descriptions (with their acoustic metadata) to deliver meaningful search results. In addressing this question, we investigated the potential of multimedia ontologies to serve as a backbone for annotating music items and prospective application scenarios of semantic techno...
Proceedings of 120th AES …, 2006
We describe an information management system which addresses the needs of music analysis projects, providing a logic-based knowledge representation scheme for the many types of object in the domains of music and signal processing, including musical works and scores, ...
… (http://www.acemedia.org/) call for …, 2006
Zenodo (CERN European Organization for Nuclear Research), 2022
The use of Semantic Technologies, in particular the Semantic Web, has proven to be a great tool for describing the cultural heritage domain and artistic practices. However, the panorama of ontologies for musicological applications seems limited and restricted to specific applications. In this research, we propose HaMSE, an ontology capable of describing musical features that can assist musicological research. More specifically, HaMSE addresses issues that have affected musicological research for decades: the representation of music and the relationship between quantitative and qualitative data. To do this, HaMSE allows alignment between different music representation systems and describes a set of musicological features that enable music analysis at different granularity levels.
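A rough sketch of the cross-granularity idea: the same score event linked both to its symbolic context and to a signal-level feature. All URIs and terms below are illustrative assumptions, not HaMSE's actual vocabulary.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

# Invented terms sketching alignment between a symbolic representation and
# signal-level data, at two granularity levels.
HAMSE = Namespace("http://example.org/hamse#")
EX = Namespace("http://example.org/score#")

g = Graph()
g.bind("hamse", HAMSE)

g.add((EX.note42, RDF.type, HAMSE.Note))
g.add((EX.note42, HAMSE.inMeasure, EX.measure7))           # score granularity
g.add((EX.note42, HAMSE.alignedTo, EX.audio_region_42))    # signal granularity
g.add((EX.audio_region_42, HAMSE.onsetTime, Literal(33.2, datatype=XSD.float)))

print(g.serialize(format="turtle"))
```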
Proceedings of the International Florida Artificial Intelligence Research Society Conference, Miami Beach, 2004
The focus of this paper is the exploration of ontologies as a means for knowledge sharing in music information retrieval scenarios. Our approach is intended to support the complete “life cycle of digital music” and focuses on the implications of a digital world for all the actors involved in the process of creation, consumption and modification of such entities. As a novelty, we address legal aspects, with digital rights management being heavily entangled in such scenarios. MPEG-7 and MPEG-21 vocabularies are considered as basic ...
The extraction, representation, organisation and application of metadata about audio recordings are the concern of semantic audio analysis. Our broad interpretation, aligned with recent developments in the field, includes methodological aspects of semantic audio, such as those related to information management, knowledge representation and applications of the extracted information. In particular, we look at how Semantic Web technologies may be used to enhance information management practices in two audio-related areas: music informatics and music production.
ACM Transactions on Multimedia Computing, Communications, and Applications, 2006
2018
Existing literature has discussed the use of rule-based systems for intelligent mixing. These rules can either be explicitly defined by experts, learned from existing datasets, or a mixture of both. For such mixing rules to be transferable between different systems and shared online, we propose a representation using the Rule Interchange Format (RIF) commonly used on the Semantic Web. Systems with differing capabilities can use OWL reasoning on those mixing rule sets to determine subsets which they can handle appropriately. We demonstrate this by means of an example web-based tool which uses a logical constraint solver to apply the rules in real time to sets of audio tracks annotated with features.
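To make the condition/action shape of such a rule concrete, here is a simplified Python stand-in; the paper proposes RIF for interchange and OWL reasoning for capability matching, whereas this sketch only shows what applying one mixing rule to feature-annotated tracks looks like. All names are invented.

```python
from dataclasses import dataclass

@dataclass
class Track:
    name: str
    features: dict       # e.g. {"role": "vocal"}
    gain_db: float = 0.0

def rule_duck_music_under_vocals(tracks: list[Track]) -> None:
    """IF a vocal track is present THEN attenuate non-vocal tracks by 3 dB."""
    has_vocal = any(t.features.get("role") == "vocal" for t in tracks)
    if has_vocal:
        for t in tracks:
            if t.features.get("role") != "vocal":
                t.gain_db -= 3.0

tracks = [
    Track("lead_vox", {"role": "vocal"}),
    Track("piano", {"role": "harmony"}),
]
rule_duck_music_under_vocals(tracks)
print([(t.name, t.gain_db) for t in tracks])
# -> [('lead_vox', 0.0), ('piano', -3.0)]
```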
2016
This paper is about the Mobile Audio Ontology, a semantic audio framework for the design of novel music consumption experiences on mobile devices. The framework is based on the concept of the Dynamic Music Object, which is an amalgamation of audio files, structural and analytical information extracted from the audio, and information about how it should be rendered in real time. The Mobile Audio Ontology allows producers and distributors to specify a great variety of ways of playing back music in controlled indeterministic as well as adaptive and interactive ways. Users can map mobile sensor data, user interface controls, or autonomous control units hidden from the listener to any musical parameter exposed in the definition of a Dynamic Music Object. These mappings can also be made dependent on semantic and analytical information extracted from the audio.
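A minimal sketch of the mapping idea: a mobile sensor value rescaled onto an exposed musical parameter. The parameter names and ranges are invented; in the framework itself such mappings are defined declaratively in the ontology rather than in code.

```python
def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale a sensor reading to a parameter range, clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + max(0.0, min(1.0, t)) * (out_hi - out_lo)

# e.g. accelerometer magnitude (0..20 m/s^2) driving a playback tempo factor
accel = 7.5
tempo_factor = map_range(accel, 0.0, 20.0, 0.8, 1.2)
print(round(tempo_factor, 3))  # -> 0.95
```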