1998, Proceedings of the sixth ACM international conference on Multimedia - MULTIMEDIA '98
We are interested in developing multimedia technology for enriching the listening experience of average listeners. One main issue we focus on is the design and construction of software systems in which users interact with music in various ways while preserving, as much as possible, the semantics of the original music. In this context, we are developing a line of research on music spatialization. We propose a system called MidiSpace, in which users may listen to music while controlling in real time the localization and spatialization of sound sources through a simple interface. We then introduce the problem of mixing consistency and propose a solution based on a constraint propagation mechanism. The proposed environment contains both an authoring mode, in which sound engineers or composers may specify spatialization constraints to be satisfied, and a listening mode, in which listeners can modify spatialization settings under the supervision of a constraint solver that ensures the spatialization always satisfies the constraints. We describe the architecture of the system and report on experiments done so far.
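As a rough illustration of the constraint-propagation idea in this abstract, the sketch below shows how an authored relation between two sources can survive a listener's move. The class and function names are illustrative assumptions, not MidiSpace's actual API, and a real solver would iterate to a fixed point rather than make a single pass.

```python
# A minimal sketch of constraint propagation for mixing consistency.
# Names (Source, ConstantRatioConstraint, move_source) are hypothetical.

class Source:
    def __init__(self, name, distance):
        self.name = name
        self.distance = distance  # distance to the listener's avatar

class ConstantRatioConstraint:
    """Keeps the distance ratio between two sources fixed, e.g.
    'the bass always stays twice as close as the piano'."""
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.ratio = a.distance / b.distance

    def propagate(self, moved):
        # When one side moves, recompute the other so the ratio holds.
        if moved is self.a:
            self.b.distance = self.a.distance / self.ratio
        elif moved is self.b:
            self.a.distance = self.b.distance * self.ratio

def move_source(source, new_distance, constraints):
    source.distance = new_distance
    for c in constraints:        # one propagation pass; a full solver would
        c.propagate(source)      # loop to a fixed point and detect conflicts

piano, bass = Source("piano", 4.0), Source("bass", 2.0)
constraints = [ConstantRatioConstraint(piano, bass)]
move_source(piano, 8.0, constraints)
print(bass.distance)  # 4.0 -- the authored relation survives the user's move
```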
Lecture Notes in Computer Science, 1998
We propose a system for controlling in real time the localisation of sound sources. The system, called MidiSpace, is a real-time spatializer of Midi music. We raise the issue of which interface is best suited to MidiSpace. Two interfaces are proposed: a 2D interface for controlling the positions of sound sources with a global view of the musical setup, and a 3D/VRML interface for moving the listener's avatar. We report on the design of these interfaces and their respective advantages, and conclude on the need for a mixed interface for spatialization.

We believe that the listening environments of the future can be greatly enhanced by integrating relevant models of musical perception into musical listening devices, provided we can develop appropriate software technology to exploit them. This is the basis of the research conducted on "Active Listening" at Sony Computer Science Laboratory, Paris. Active listening refers to the idea that listeners can be given some degree of control over the music they listen to, opening up different musical perceptions of a piece, as opposed to traditional listening, in which the musical medium is played back passively by a neutral device. The objective is both to increase the musical comfort of listeners and, when possible, to provide listeners with smoother paths to new music (music they do not know, or do not like). These control parameters implicitly create control spaces in which musical pieces can be listened to in various ways. Active listening is thus related to the notion of Open Form in composition [8] but differs in two respects: 1) we seek to create listening environments for existing music repertoires, rather than environments for composition or free musical exploration (such as PatchWork, OpenMusic [2], or CommonMusic [18]), and 2) we aim at creating environments in which the variations always preserve the original semantics of the music, at least when that semantics can be defined precisely. The first parameter that comes to mind when thinking about user control over music is the spatialization of sound sources. In this paper we study the implications of giving users the possibility to change the mixing of sound sources dynamically. In the next section, we review previous approaches to computer-controlled sound spatialization.
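To make the 2D interface concrete: a map of sources around a listener avatar can be reduced to per-source rendering parameters. The sketch below uses an inverse-distance attenuation and a simple sine pan law as stand-in assumptions; the paper does not specify MidiSpace's actual mapping.

```python
# Hedged sketch: mapping 2-D source/listener positions to spatializer
# parameters. The pan law and attenuation model are assumptions.
import math

def render_params(listener_xy, source_xy):
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    gain = 1.0 / max(dist, 1.0)      # simple inverse-distance attenuation
    azimuth = math.atan2(dx, dy)     # angle relative to the avatar facing "up"
    pan = math.sin(azimuth)          # -1 = hard left, +1 = hard right
    return gain, pan

print(render_params((0, 0), (3, 4)))  # source ahead-right: (0.2, 0.6)
```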
2000
The MusicSpace project aims at providing high-level user control over music spatialization, i.e. the positions of the sound sources and of the listener's avatar. This is done by introducing a constraint system into a graphical user interface that represents the sound sources and is connected to a spatializer. The constraint system makes it possible to express various sorts of properties on configurations of sound sources. When the user moves one source, through the interface or via a control language, the constraint system is activated and tries to satisfy the constraints that may have been violated. A first Midi version of MusicSpace has already been designed and proved very successful. We describe here a second version of MusicSpace which handles full-fledged multi-track audio files. We report on the design of the system and preliminary experiments.
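Handling "full-fledged multi-track audio" implies that the gains and positions produced by the interface and constraint layer must be applied to audio buffers in real time. A minimal sketch of that mixing stage follows, using NumPy and an equal-power pan law as stand-ins; the mix() helper is illustrative, not MusicSpace's implementation.

```python
# Sketch of a multi-track stereo downmix driven by per-source gain/pan.
import numpy as np

def mix(tracks, gains, pans):
    """tracks: list of mono float arrays; gains in [0,1]; pans in [-1,1]."""
    n = min(len(t) for t in tracks)
    out = np.zeros((n, 2))
    for track, gain, pan in zip(tracks, gains, pans):
        theta = (pan + 1.0) * np.pi / 4.0        # equal-power pan law
        out[:, 0] += gain * np.cos(theta) * track[:n]  # left channel
        out[:, 1] += gain * np.sin(theta) * track[:n]  # right channel
    return out

stereo = mix([np.ones(4), np.ones(4)], gains=[0.8, 0.5], pans=[-1.0, 1.0])
```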
2015
We present recent work carried out in the OpenMusic computer-aided composition environment for combining compositional processes with spatial audio rendering. We consider new modalities for manipulating sound spatialization data and processes, following both object-based and channel-based approaches, and we have developed a framework linking algorithmic processing with interactive control.
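The object-based/channel-based distinction the abstract draws can be made concrete with two small data models. These dataclasses are illustrative assumptions, not OpenMusic's classes.

```python
# Sketch contrasting the two spatialization data models.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectTrajectory:
    """Object-based: a sound source plus its time-stamped 3-D path;
    the renderer decides the speaker feeds at playback time."""
    source_id: str
    points: List[Tuple[float, float, float, float]]  # (time, x, y, z)

@dataclass
class ChannelBed:
    """Channel-based: gains are already baked per loudspeaker channel."""
    layout: str                 # e.g. "5.1", "octophonic ring"
    channel_gains: List[float]  # one gain per output channel
```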
… Conference on Digital Audio …, 2006
In an effort to find an interface better suited to musical performance, a novel approach has been developed. At the heart of this approach is the concept of physical interaction with sound in space, where sound processing occurs at various 3-D locations and the sending of sound signals from one area to another is based on physical models of sound propagation. The control is based on a gestural vocabulary that is familiar to users, involving natural spatial interactions such as translating, rotating, and pointing in 3-D. This research presents a framework for real-time control of 3-D audio and describes how to construct audio scenes that accomplish various musical tasks. The generality and effectiveness of this approach have enabled us to reimplement several conventional applications with the benefit of a substantially more powerful interface, and have further led to the conceptualization of several novel applications.
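A minimal version of the "physical model of sound propagation" mentioned above derives both arrival delay and attenuation from the source-listener distance. The function below is a hedged sketch under those assumptions, not the paper's implementation.

```python
# Sketch: distance-derived propagation delay and attenuation in 3-D.
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def propagation(source_pos, listener_pos):
    d = math.dist(source_pos, listener_pos)  # Euclidean distance (3.8+)
    delay_s = d / SPEED_OF_SOUND             # time of flight
    gain = 1.0 / max(d, 1.0)                 # inverse-distance law
    return delay_s, gain

print(propagation((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # (~0.0146 s, 0.2)
```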
In this paper, we present a new model for interactive music. Unlike most interactive systems, our model is based on file organization and does not require digital audio processing. The model includes the definition of a constraint system and its solver.
Organised Sound, 2009
Zirkonium is a flexible, non-invasive, open-source program for sound spatialisation over spherical (dome-shaped) loudspeaker setups. By non-invasive, we mean that Zirkonium offers the artist spatialisation capabilities without forcing her to change her usual way of working. This is achieved by supporting a variety of means of designing and controlling spatialisations. Zirkonium accommodates user-defined speaker distributions and offers HRTF-based headphone simulation for situations when the actual speaker setup is not available. It can acquire sound sources from files, live input, or via the so-called device mode, which allows Zirkonium to appear to other programs as an audio interface. Control data may be predefined and stored in a file, or generated elsewhere and sent over OSC. This paper details Zirkonium, its design philosophy and implementation, and how we have been using it since 2005.
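Sending control data over OSC, as the abstract describes, looks roughly like the sketch below. It requires the python-osc package; the /source/azimuth address, host, and port are hypothetical placeholders, so consult Zirkonium's documentation for its actual OSC namespace.

```python
# Hedged sketch of driving a spatializer over OSC (pip install python-osc).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 50000)  # host/port are assumptions
# source id, azimuth (rad), zenith (rad) for a dome-shaped setup
client.send_message("/source/azimuth", [1, 0.75, 0.25])
```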
2004
Real-time spatialization of sound involves not only DSP processes but also the design of usable interfaces for producing meaningful movement paths that are effectively connected with musical ideas. The program RTSPA1 addresses this problem by controlling the Session8 software and hardware (1) via MIDI data. The program can also be used to control other real-time DSP programs and devices, among them the Csound program. Future development aims to turn it into a general-purpose graphical interface for controlling the spatial location of sound and music.
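MIDI-based control of this kind typically transmits position as controller data; CC 10 is the standard MIDI pan controller. The sketch below uses the mido package, and the output port name is illustrative, not a claim about RTSPA's configuration.

```python
# Hedged sketch: sweeping a source's pan position via MIDI controller data.
import mido

out = mido.open_output("Session8")  # port name is an assumption
for step in range(128):
    # sweep from hard left (0) to hard right (127) on channel 1
    out.send(mido.Message("control_change", channel=0, control=10, value=step))
```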
2011
Background in Spatial Sound Perception and Synthesis. Spatial sound perception is an important process in how we experience sounds in our environment. It is studied in otology, audiology, psychology, neuroscience and acoustical engineering, with practical implications notably in communications, architectural acoustics, urban planning, film, media art and music. The synthesis of spatial sound properties by means of computer and loudspeaker technology is an ongoing research topic.

Background in History of Spatial Music. In spatial music, perceptual effects of spatial sound segregation, fusion and divided attention are explored. The compositional use of these properties dates back to the 16th century and the music of Willaert and Gabrieli for spatially separated instruments and choirs. Electroacoustic inventions of the 19th and 20th centuries, such as microphones and loudspeakers, and the recent increase in computing resources have created new possibilities and challenges for composers.

Aims. The aim of this project was to develop a perceptually convincing spatialization system that is flexible and easy to use for musical applications.

Main contribution. Using an interdisciplinary approach, the Virtual Microphone Control system (ViMiC) was developed and refined as a flexible spatialization system based on the concept of virtual microphones. The software was tested in multiple real-world user scenarios ranging from concert performances and sound installations to movie production and applications in education and medical research.

Implications. Our interdisciplinary development approach can guide other efforts to create user-friendly computer music tools. Due to its specific feature set, ViMiC has become a flexible tool for spatial sound rendering that can be used in a variety of scenarios and disciplines. We hope that ViMiC will motivate further creative and scientific interest in sound spatialization.
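The core of the virtual-microphone concept can be sketched with the standard first-order polar pattern, where the gain of a virtual microphone toward a source combines its directivity with a distance law. The formula is the textbook cardioid-family pattern; the function itself is an illustrative assumption, not ViMiC's implementation.

```python
# Sketch: gain of one virtual microphone toward one source.
import math

def mic_gain(pattern_a, mic_pos, mic_aim, src_pos):
    """pattern_a: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-eight."""
    to_src = [s - m for s, m in zip(src_pos, mic_pos)]
    norm = math.hypot(*to_src)
    aim_norm = math.hypot(*mic_aim)
    cos_theta = sum(a * b for a, b in zip(to_src, mic_aim)) / (norm * aim_norm)
    directivity = pattern_a + (1.0 - pattern_a) * cos_theta  # polar pattern
    return max(directivity, 0.0) / max(norm, 1.0)  # pattern times distance law

# a cardioid mic at the origin aiming along +x, source off to the side
print(mic_gain(0.5, (0, 0, 0), (1, 0, 0), (2, 2, 0)))  # ~0.30
```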