Academia.edu

Music retrieval

316 papers
13 followers
Music retrieval is the process of searching, identifying, and accessing music content from databases or digital libraries using various techniques, including content-based analysis, metadata tagging, and user queries, to facilitate efficient discovery and organization of musical works.
Abstract. Recent research has shown the Linked Data cloud to be a potentially ideal basis for improving user experience when interacting with Web content across different applications and domains. Using the explicit knowledge of datasets,... more
Chroma-based audio features are a well-established tool for analyzing and comparing music data. By identifying spectral components that differ by a musical octave, chroma features show a high degree of invariance to variations in timbre.... more
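A minimal sketch of the octave-folding idea behind chroma features, assuming a mono signal frame sampled at rate sr; the frame handling and normalisation are illustrative, not taken from the paper:

    import numpy as np

    def chroma_frame(frame, sr, ref_a4=440.0):
        """Fold the magnitude spectrum of one windowed frame into 12 pitch classes."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        chroma = np.zeros(12)
        for mag, f in zip(spectrum[1:], freqs[1:]):      # skip the DC bin
            midi = 69 + 12 * np.log2(f / ref_a4)         # frequency -> MIDI pitch number
            chroma[int(round(midi)) % 12] += mag         # octave-equivalent (chroma) bin
        return chroma / (chroma.sum() + 1e-9)            # normalise away loudness

Because components an octave apart land in the same bin, the resulting 12-dimensional vectors are largely invariant to timbre and octave doublings.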
The intentionally ambiguous expression "Popular Music Browser" reflects the two main goals of this project, which started in 1998, at Sony CSL laboratory. First, we are interested in human-centered issues related to browsing "Popular... more
by An Le
To facilitate information retrieval of large-scale music databases, the detection of musical concepts, or auto-tagging, has been an active research topic. This paper concerns the use of concept correlations to improve musical concept... more
In this paper we present the Audio Effects Ontology for the ontological representation of audio effects in music production workflows. Designed as an extension to the Studio Ontology, its aim is to provide a framework for the detailed... more
We present a measure of the similarity of the long-term structure of musical pieces. The system deals with raw polyphonic data. Through unsupervised learning, we generate an abstract representation of music -the "texture score". This... more
We consider two formulations of the computational problem of transposition-invariant, time-offset-tolerant, meter-invariant, and timescale-invariant polyphonic music retrieval. We provide algorithms for both that are scalable in the sense... more
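As a rough illustration of transposition- and time-offset-invariant matching of symbolic (onset, pitch) point sets, a translation-vector counting sketch; timescale invariance, which the paper also covers, is not handled, and the function below is an assumption rather than either of the paper's algorithms:

    from collections import Counter

    def best_translation(query, piece):
        """Vote for the (time shift, transposition) vector that aligns the most
        query notes with piece notes; notes are (onset, MIDI pitch) pairs."""
        votes = Counter()
        for qt, qp in query:
            for pt, pp in piece:
                votes[(pt - qt, pp - qp)] += 1
        (shift, transposition), matched = votes.most_common(1)[0]
        return shift, transposition, matched          # matched = aligned note count

    # The query below matches the piece two beats later, transposed up a fifth.
    piece = [(0, 60), (2, 64), (4, 67), (6, 72)]
    query = [(0, 57), (2, 60), (4, 65)]
    print(best_translation(query, piece))             # (2, 7, 3)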
Emotion-based music retrieval drives a music player according to the user's emotion. Many devices and mobile music players are used to listen to music, and a practical problem is selecting the desired music. Nowadays many devices are integrated with... more
Symmetry turns out to be a very important concept in the analysis of music perception and its relationship to speech perception. There are five or maybe six identifiable symmetries of music perception. These are invariances under six... more
Musical scores are traditionally retrieved by title, composer or subject classification. Just as multimedia computer systems increase the range of opportunities available for presenting musical information, so they also offer new ways of... more
Human listeners are able to recognize structure in music through the perception of repetition and other relationships within a piece of music. This work aims to automate the task of music analysis. Music is "explained" in terms of... more
Achieving a good balance between matching accuracy and computational efficiency is a key challenge for Query-by-Humming (QBH) systems. In this paper, we propose an n-gram based fast-match approach. Our n-gram method uses a robust... more
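A sketch of how interval n-grams can serve as a coarse fast-match filter before more expensive alignment; the index layout and scoring below are assumptions, not the robust n-gram method the paper proposes:

    from collections import defaultdict

    def interval_ngrams(pitches, n=3):
        """Pitch-interval n-grams are invariant to the key the user hums in."""
        intervals = [b - a for a, b in zip(pitches, pitches[1:])]
        return [tuple(intervals[i:i + n]) for i in range(len(intervals) - n + 1)]

    def build_index(database, n=3):
        index = defaultdict(set)
        for song_id, pitches in database.items():
            for gram in interval_ngrams(pitches, n):
                index[gram].add(song_id)
        return index

    def fast_match(query_pitches, index, n=3):
        """Rank songs by how many query n-grams they share (coarse filter)."""
        scores = defaultdict(int)
        for gram in interval_ngrams(query_pitches, n):
            for song_id in index.get(gram, ()):
                scores[song_id] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])

Only the top-ranked candidates from fast_match would then be passed to a slower, more accurate matcher.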
This paper extends the familiar "query by humming" music retrieval framework into the polyphonic realm. As humming in multiple voices is quite difficult, the task is more accurately described as "query by audio example", onto a collection... more
This paper describes a distributed collaborative wiki-based platform that has been designed to facilitate the development of Semantic Web applications. The applications designed using this platform are able to build semantic data through... more
This paper explores the extraction of melodic pitch contour from the polyphonic soundtrack of a song. The motivation for this work lies in developing automatic tools for the melody-based indexing of the database in a music retrieval... more
Music information retrieval is a field of rapidly growing commercial interest. This paper describes TANSEN, a query-by-humming based music retrieval system under development at IIT, Bombay. Named after the legendary musician (and a... more
We have explored methods for music information retrieval for polyphonic music stored in the MIDI format. These methods use a query, expressed as a series of notes that are intended to represent a melody or theme, to identify similar... more
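A common baseline for this kind of symbolic melody matching is edit distance over key-invariant interval sequences; a minimal sketch under that assumption, not necessarily the similarity measure used in the paper:

    def edit_distance(a, b):
        """Dynamic-programming edit distance between two sequences."""
        dp = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, y in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                         dp[j - 1] + 1,    # insertion
                                         prev + (x != y))  # substitution or match
        return dp[-1]

    def melodic_distance(query_pitches, candidate_pitches):
        """Compare key-invariant interval sequences rather than raw pitches."""
        def intervals(p):
            return [b - a for a, b in zip(p, p[1:])]
        return edit_distance(intervals(query_pitches), intervals(candidate_pitches))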
Most research into music information retrieval thus far has only examined music from the western tradition. However, music of other origins often conforms to different tuning systems. Therefore there are problems both in representing this... more
Accurately finding audio recordings in response to symbolic queries is one of the key challenges in the field of music information retrieval. Pitch is one of the main features of music; in this paper we propose and evaluate approaches for... more
Managing and analyzing music scores content is a known issue for digital libraries. Despite their inarguable complexity, these contents are often ignored by digital libraries, where music scores are mostly treated as simple digital... more
Jazz is a musical tradition that is just over 100 years old; unlike in other Western musical traditions, improvisation plays a central role in jazz. Modelling the domain of jazz poses some ontological challenges due to specificities in... more
Recognizing a musical excerpt without necessarily retrieving its title typically reflects the existence of a memory system dedicated to the retrieval of musical knowledge. The functional distinction between musical and verbal semantic... more
We study the problem of identifying repetitions under transposition and time-warp invariances in polyphonic symbolic music. Using a novel onset-time-pair representation, we reduce the repeating pattern discovery problem to instances of... more
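A crude sketch of transposition-invariant repetition discovery on (onset, pitch) points, in the spirit of grouping notes by shared translation vectors; time-warp invariance and the onset-time-pair representation studied in the paper are not reproduced here:

    from collections import defaultdict
    from itertools import combinations

    def translatable_patterns(notes, min_size=3):
        """Group note pairs by their (time, pitch) difference vector; each group of
        source notes is a pattern that recurs elsewhere under that translation."""
        notes = sorted(notes)
        groups = defaultdict(list)
        for a, b in combinations(notes, 2):
            vector = (b[0] - a[0], b[1] - a[1])     # time shift, transposition
            groups[vector].append(a)
        return {v: pts for v, pts in groups.items() if len(pts) >= min_size}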
The modeling of music as a language is a core issue for a wide range of applications such as polyphonic music retrieval, automatic style identification, audio to symbolic music transcription and computer-assisted composition. In this... more
The Linked Data paradigm has been used to publish a large number of musical datasets and ontologies on the Semantic Web, such as MusicBrainz, AcousticBrainz, and the Music Ontology. Recently, the MIDI Linked Data Cloud has been added to... more
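To give a flavour of how such datasets are consumed from code, a hedged sketch with SPARQLWrapper; the endpoint URL is a placeholder and the query only assumes the public Music Ontology vocabulary, not the MIDI Linked Data Cloud's own terms:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Placeholder endpoint; substitute the SPARQL endpoint of the dataset in question.
    endpoint = SPARQLWrapper("https://example.org/sparql")
    endpoint.setQuery("""
        PREFIX mo: <http://purl.org/ontology/mo/>
        PREFIX dc: <http://purl.org/dc/elements/1.1/>
        SELECT ?track ?title WHERE {
            ?track a mo:Track ;
                   dc:title ?title .
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["track"]["value"], row["title"]["value"])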
In this paper, we present a novel approach for music summarization based on music structure analysis. From the audio signal, we first extract note onsets, representing the timing and tempo of the song, and the music structure analysis can be... more
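A bare-bones spectral-flux onset strength sketch; frame sizes and the peak-picking rule are assumptions and may differ from the onset extraction used in the paper:

    import numpy as np

    def onset_strength(y, frame=1024, hop=512):
        """Half-wave-rectified spectral flux: spectral energy increases between frames."""
        window = np.hanning(frame)
        frames = np.array([y[i:i + frame] * window
                           for i in range(0, len(y) - frame, hop)])
        mags = np.abs(np.fft.rfft(frames, axis=1))
        flux = np.diff(mags, axis=0)
        return np.maximum(flux, 0).sum(axis=1)        # one value per frame step

    def pick_onsets(strength):
        """Keep local maxima that rise above a simple global threshold."""
        threshold = strength.mean() + strength.std()
        return [i for i in range(1, len(strength) - 1)
                if strength[i] > threshold
                and strength[i] >= strength[i - 1]
                and strength[i] >= strength[i + 1]]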
An immeasurable amount of multimedia information is available today - in digital archives, on the Web, in broadcast data streams, and in personal and professional databases - and this amount continues to grow. Yet, the value of that... more
Employing existing ontologies including GeoNames and the Music Ontology, we present a proof-of-concept system that demonstrates the utility of Linked Data for enhancing the application of MIR workflows, both when curating collections of... more
We describe a method that aligns polyphonic audio recordings of music to symbolic score information in standard MIDI files without the difficult process of polyphonic transcription. By using this method, we can search through a MIDI... more
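Alignment of this kind is commonly framed as dynamic time warping over frame-wise features such as chroma; a compact DTW sketch under that assumption (the cost function and features are not necessarily the authors'):

    import numpy as np

    def dtw_path(cost):
        """Dynamic time warping over a pairwise cost matrix (audio frames x score events)."""
        n, m = cost.shape
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                     acc[i, j - 1],
                                                     acc[i - 1, j - 1])
        path, i, j = [], n, m                 # backtrack to recover the warping path
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
            i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
        return path[::-1]

    # cost[i, j] could be 1 minus the cosine similarity between audio chroma frame i
    # and a chroma template synthesised from MIDI score event j.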
We apply a new machine learning tool, kernel combination, to the task of semantic music retrieval. We use 4 different types of acoustic content and social context feature sets to describe a large music corpus and derive 4 individual... more
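At its simplest, kernel combination is a weighted sum of per-feature-set Gram matrices fed to a standard kernel machine; a sketch with scikit-learn in which the weights and classifier setup are illustrative rather than the paper's learned combination:

    import numpy as np
    from sklearn.svm import SVC

    def combine_kernels(kernels, weights):
        """Convex combination of precomputed kernel matrices, one per feature set."""
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()
        return sum(w * K for w, K in zip(weights, kernels))

    # kernels: list of (n_songs, n_songs) Gram matrices, one per feature set.
    # combined = combine_kernels(kernels, [0.4, 0.3, 0.2, 0.1])
    # clf = SVC(kernel="precomputed").fit(combined, labels)   # labels: tag present or absent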
When attempting to annotate music, it is important to consider both acoustic content and social context. This paper explores techniques for collecting and combining multiple sources of such information for the purpose of building a... more
Tag clouds are a popular visualization for collaborative tags. However, they have some intrinsic problems, such as linguistic issues, high semantic density, and poor representation of hierarchical structure and semantic relations between tags.... more
Existing literature has discussed the use of rule-based systems for intelligent mixing. These rules can either be explicitly defined by experts, learned from existing datasets, or a mixture of both. For such mixing rules to be... more
This paper introduces the Audio Effect Ontology (AUFX-O) building on previous theoretical models describing audio processing units and workflows in the context of music production. We discuss important conceptualisations of different... more
Feature extraction algorithms in Music Informatics aim at deriving statistical and semantic information directly from audio signals. These may range from energies in several frequency bands to musical information such as key, chords... more
Music similarity tasks, where musical pieces similar to a query should be retrieved, are... more
In this work, a novel representation system for symbolic music is described. The proposed representation system is graph-based and could theoretically represent music both from a horizontal (contrapuntal) and from a vertical (harmonic)... more
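One way to picture a graph with both horizontal (melodic) and vertical (harmonic) relations, sketched with networkx; the node attributes and edge labels are assumptions, not the representation system proposed in the paper:

    import networkx as nx

    def music_graph(notes):
        """notes: list of (onset, duration, MIDI pitch) triples."""
        g = nx.Graph()
        for i, (onset, dur, pitch) in enumerate(notes):
            g.add_node(i, onset=onset, duration=dur, pitch=pitch)
        ordered = sorted(range(len(notes)), key=lambda i: notes[i][0])
        for a, b in zip(ordered, ordered[1:]):
            if notes[a][0] == notes[b][0]:            # sounding together: harmonic relation
                g.add_edge(a, b, relation="vertical",
                           interval=abs(notes[a][2] - notes[b][2]))
            else:                                     # succession in time: melodic relation
                g.add_edge(a, b, relation="horizontal",
                           interval=notes[b][2] - notes[a][2])
        return g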
Sonic Visualiser is a system to assist the study and comprehension of the contents of audio data, particularly of musical recordings. It is a C++ application with a Qt4 GUI that runs on Windows, Mac, and... more
One of the key missions of the Polifonia project is to build and interlink knowledge graphs of European musical cultural heritage. These interconnected music knowledge graphs act as a data backbone for the project. This deliverable is... more
Integration between different data formats, and between data belonging to different collections, is an ongoing challenge in the MIR field. Semantic Web tools have proved to be promising resources for making different types of music... more
Given two n-bit (cyclic) binary strings, A and B, represented on a circle (necklace instances), let each sequence have the same number (k) of 1's. We are interested in computing the cyclic swap distance between A and B, i.e. the... more
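A brute-force sketch, assuming (as is standard for this problem) that an optimal matching of the 1's is order-preserving around the circle, so it suffices to try the k cyclic offsets of the sorted 1-positions and sum circular displacements; the paper's algorithms are far more refined than this O(k^2) scan:

    def cyclic_swap_distance(a, b):
        """Smallest total circular displacement over all order-preserving
        cyclic matchings of the 1-positions of two equal-weight necklaces."""
        n = len(a)
        ones_a = [i for i, bit in enumerate(a) if bit == 1]
        ones_b = [i for i, bit in enumerate(b) if bit == 1]
        assert len(ones_a) == len(ones_b), "necklaces must have the same number of 1's"
        k = len(ones_a)
        if k == 0:
            return 0

        def circ(x, y):                       # circular distance on the n-cycle
            d = abs(x - y)
            return min(d, n - d)

        return min(sum(circ(ones_a[i], ones_b[(i + off) % k]) for i in range(k))
                   for off in range(k))

    # Example with n = 8, k = 3: rotating the necklace by one position costs 3 swaps.
    print(cyclic_swap_distance([1,0,0,1,0,1,0,0], [0,1,0,0,1,0,1,0]))   # 3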
The problem of music retrieval by sung query consists of building a machine capable of simulating the cognitive process of identifying a musical piece from a few sung notes of its melody. In this paper, the algorithms of pitch tracking,... more
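A toy autocorrelation pitch tracker for a monophonic sung query; the frame length, hop, and search range are assumptions, and the paper's pitch-tracking algorithm is considerably more robust:

    import numpy as np

    def estimate_pitch(frame, sr, fmin=80.0, fmax=800.0):
        """Return an f0 estimate in Hz for one voiced frame, via autocorrelation."""
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(sr / fmax), int(sr / fmin)       # lag range for plausible f0
        lag = lo + int(np.argmax(corr[lo:hi]))
        return sr / lag

    def pitch_contour(y, sr, frame=2048, hop=256):
        """Frame-wise pitch contour of a hummed or sung query signal y."""
        return [estimate_pitch(y[i:i + frame], sr)
                for i in range(0, len(y) - frame, hop)]

The resulting contour would then be quantised to notes before being matched against the melody database.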
New interactive music services have emerged, despite currently using proprietary file formats. Having a standardized file format could benefit the interoperability between these services. In this regard, the ISO/IEC Moving Picture Experts... more