2000, Proceedings of Intl. Conf. on Auditory …
The Virtual Audio Server (VAS) is a toolkit designed for exploring problems in the creation of realistic Virtual Sonic Environments (VSE). The architecture of VAS provides an extensible framework in which new ideas can be quickly integrated into the system and experimented with.
Stereoscopic Displays and Virtual Reality Systems III, 1996
Sound represents a largely untapped source of realism in Virtual Environments (VEs). In the real world, sound constantly surrounds us and pulls us into our world. In VEs, sound enhances the immersiveness of the simulation and provides valuable information about the environment. While there has been a growing interest in integrating sound into VE interfaces, current technology has not brought about its widespread use. This, we believe, can be attributed to the lack of proper tools for modeling and rendering the auditory world. We have been investigating some of the problems which we believe are pivotal to the widespread use of sound in VE interfaces. As a result of this work, we have developed the Virtual Audio Server (VAS). VAS is a distributed, real-time spatial sound generation server. It provides high level abstractions for modeling the auditory world and the events which occur in the world. VAS supports both sampled and synthetic sound sources and provides a device independent interface to spatialization hardware. Resource management schemes can easily be integrated into the server to manage the real-time generation of sound. We are currently investigating possible approaches to this important problem.
Presence: Teleoperators and Virtual Environments, 2008
To be immersed in a virtual environment, the user must be presented with plausible sensory input including auditory cues. A virtual (three-dimensional) audio display aims to allow the user to perceive the position of a sound source at an arbitrary position in three-dimensional space despite the fact that the generated sound may be emanating from a fixed number of loudspeakers at fixed positions in space or a pair of headphones. The foundation of virtual audio rests on the development of technology to present auditory signals to the listener's ears so that these signals are perceptually equivalent to those the listener would receive in the environment being simulated. This paper reviews the human perceptual and technical literature relevant to the modeling and generation of accurate audio displays for virtual environments. Approaches to acoustical environment simulation are summarized and the advantages and disadvantages of the various approaches are presented.
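The binaural approach this review surveys, presenting signals at the listener's ears that are perceptually equivalent to those in the simulated environment, is commonly realized by convolving a mono source with a pair of head-related impulse responses (HRIRs). A minimal sketch in Python; the two-tap HRIRs here are made up for illustration (measured HRIRs are direction-dependent and typically hundreds of taps long):

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono signal with an HRIR pair, yielding a 2-channel signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy input: a unit impulse, so the output channels are just the HRIRs.
mono = np.array([1.0, 0.0, 0.0, 0.0])
hl = np.array([0.9, 0.1])   # hypothetical left-ear response
hr = np.array([0.4, 0.3])   # hypothetical right-ear response
stereo = binaural_render(mono, hl, hr)
```

In a real display, the HRIR pair would be selected (or interpolated) per source direction and updated as the head tracker reports listener motion.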
Human–Computer Interaction Series
This chapter addresses the first building block of sonic interactions in virtual environments, i.e., the modeling and synthesis of sound sources. Our main focus is on procedural approaches, which are striving to gain recognition in commercial applications and in an overall sound design workflow that remains firmly grounded in the use of samples and event-based logics. Special emphasis is placed on physics-based sound synthesis methods and their potential for improved interactivity. The chapter starts with a discussion of the categories, functions, and affordances of the sounds that we listen to and interact with in real and virtual environments. We then address perceptual and cognitive aspects, with the aim of emphasizing the relevance of sound source modeling to a user's sense of presence and embodiment in a virtual environment. Next, procedural approaches are presented and compared to sample-based approaches in terms of models, methods, and computational costs. Finally, we analyze ...
IEEE Computer Graphics and Applications, 2002
Journal of The Audio Engineering Society, 1999
The theory and techniques for virtual acoustic modeling and rendering are discussed. The creation of natural sounding audiovisual environments can be divided into three main tasks: sound source, room acoustics, and listener modeling. These topics are discussed in the context of both non-real-time and real-time virtual acoustic environments. Implementation strategies are considered, and a modular and expandable simulation software is described.
2001
This paper develops a functional analysis of the augmented reality system presented in [Natkin00]. It also presents the first elements of an experimental architecture and a real example where the system would be used. The system is based on virtual sound reality: spectators walk through a real space, indoor or outdoor, wearing headphones. They see the real space and at the same time hear a virtual sound space that is homeomorphic to the real one. This means that there is a continuous function which maps any trajectory in the real space to a trajectory in the virtual space, thus determining which sound is heard along this trajectory. The synthesis of the virtual sound along a trajectory may depend on many factors: the speed of the spectator, past movements of the spectator, current or past positions of other spectators, random events, and so on. Moreover, special rules or constraints will be added depending on the kind of application desired: quality of sound needed, maximum number of spec...
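The homeomorphism idea in this abstract can be made concrete with a small sketch. The paper specifies no particular mapping; any continuous function works, so the affine map below is purely illustrative:

```python
def real_to_virtual(pos, scale=2.0, offset=(10.0, 0.0)):
    """Map a real-space position (x, y) to a virtual-space position.
    Because the map is continuous, a listener's walked trajectory in
    the real room induces a trajectory in the virtual sound space,
    which then determines which sounds are heard along the way."""
    x, y = pos
    return (scale * x + offset[0], scale * y + offset[1])

trajectory = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]   # path walked in the real room
virtual_path = [real_to_virtual(p) for p in trajectory]
```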
1996
We have created a model of real-time audio virtual reality. This model system includes model-based sound synthesizers, geometric room-acoustics modeling, binaural auralization for headphone or loudspeaker listening, and high-quality animation. This project aims to create a virtual musical event that is highly authentic in both audio and video. To reach this goal, we extended the system with a real-time image-source algorithm for arbitrarily shaped rooms, shorter HRTF filter approximations for more efficient auralization, and a network-based distributed implementation of the audio-processing software and hardware.
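The image-source method mentioned above models early reflections by mirroring the source across each room boundary. The paper's algorithm handles arbitrarily shaped rooms in real time; the sketch below shows only the simplest case, first-order reflections in an axis-aligned shoebox room, to illustrate the geometric idea:

```python
import numpy as np

def first_order_images(src, room):
    """Reflect a source position across the six walls of a shoebox room
    [0, Lx] x [0, Ly] x [0, Lz]; returns the 6 first-order image sources.
    Each image source radiates the reflected wavefront for one wall."""
    images = []
    for axis, length in enumerate(room):
        for wall in (0.0, length):
            img = list(src)
            img[axis] = 2 * wall - src[axis]   # mirror across the wall plane
            images.append(img)
    return np.array(images)

images = first_order_images([1.0, 2.0, 1.5], room=(4.0, 5.0, 3.0))
```

Higher-order reflections follow by reflecting the images recursively; real implementations also check audibility/visibility of each image, which is where arbitrary room shapes add complexity.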
The purpose of this paper is to determine the accuracy of modern video game audio engines in reproducing realistic sonic environments through the use of different playback systems. For a number of reasons which will be thoroughly explained in the main body of the dissertation, a specific focus will be set on stereophonic systems, especially on the differences between sounds presented through headphones as opposed to sounds presented through loudspeakers.
1998
COOLVR (Complete Object Oriented Library for Virtual Reality) is a toolkit currently being developed at the Graphics, Visualization, and Usability Center (GVU) at Georgia Tech. The toolkit is written to allow programmers to easily create virtual environments (VEs) that compile cross-platform. Unlike most VE toolkits, which focus primarily on the visual senses, COOLVR aims to equally engage
Ear & Hearing, 2020
To assess perception with and performance of modern and future hearing devices with advanced adaptive signal processing capabilities, novel evaluation methods are required that go beyond already established methods. These novel methods will simulate to a certain extent the complexity and variability of acoustic conditions and acoustic communication styles in real life. This article discusses the current state and the perspectives of virtual reality technology use in the lab for designing complex audiovisual communication environments for hearing assessment and hearing device design and evaluation. In an effort to increase the ecological validity of lab experiments, that is, to increase the degree to which lab data reflect real-life hearing-related function, and to support the development of improved hearing-related procedures and interventions, this virtual reality lab marks a transition from conventional (audio-only) lab experiments to the field. The first part of the article intro...
The topic of this paper is an interactive sound reproduction system for creating virtual sonic environments and visual spaces, called the Virtual Audio Reproduction Engine for Spatial Environments (VARESE). Using VARESE, Edgard Varèse's Poème électronique was recreated within a virtual Philips Pavilion. The system draws on binaural sound reproduction principles, including spatialization techniques based on Ambisonics theory. Using headphones and a head tracker, listeners can enjoy a preset reproduction of the Poème électronique as they move freely through the virtual architectural space. While VARESE was developed specifically for use in reconstructing the Poème électronique, it is flexible enough to function as a standard interpreter of sounds and visual objects, enabling users to design their own spatializations.
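The Ambisonics foundation that VARESE builds on starts with encoding a mono source into a direction-independent channel plus directional components. A minimal first-order B-format (W, X, Y, Z) encoder, sketched here as a generic illustration rather than VARESE's actual implementation:

```python
import math

def encode_bformat(sample, azimuth, elevation):
    """First-order B-format encoding of a mono sample arriving from
    (azimuth, elevation), both in radians. W carries the omnidirectional
    component (conventionally attenuated by 1/sqrt(2)); X, Y, Z carry
    the figure-of-eight directional components."""
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# Source hard left (azimuth 90 degrees, ear height): energy lands on Y.
w, x, y, z = encode_bformat(1.0, math.radians(90), 0.0)
```

For headphone playback as in VARESE, the encoded scene is typically rotated against head-tracker data and then decoded to virtual loudspeakers that are binauralized with HRTFs.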
A Virtual Performance Studio (VPS) is a space that allows a musician to practice in a virtual version of a real performance space in order to acclimatise to the acoustic feedback received on stage before physically performing there. Traditional auralisation techniques allow this by convolving the direct sound from the instrument with the appropriate impulse response on stage. In order to capture only the direct sound from the instrument, a directional microphone is often used at small distances from the instrument. This can give rise to noticeable tonal distortion due to proximity effect and spatial sampling of the instrument's directivity function. This work reports on the construction of a prototype VPS system and goes on to demonstrate how an auralisation can be significantly affected by the placement of the microphone around the instrument, contributing to a reported 'PA effect'. Informal listening tests have suggested that there is a general preference for auralisations which process multiple microphones placed around the instrument.
2000
Virtual acoustics is a general term for the modeling of acoustical phenomena and systems with the aid of a computer. It comprises many different fields of acoustics, but in the context of this paper the term is restricted to describe a system ranging from sound source and acoustics modeling in rooms to the simulation of spatial auditory perception in humans. In other words, the virtual acoustics concept covers three major subsystems in acoustical communication: the source, the transmission, and the listener.
We present a new software system for spatial audio reproduction called SoundScape Renderer. It is currently used for rendering 2-dimensional virtual acoustic scenes using either Wave Field Synthesis, binaural rendering, or Vector Base Amplitude Panning. A further extension to higher-order Ambisonics is planned. However, in this paper we will focus on the WFS functionality. We also describe the physical setup of the WFS system installed at the Usability Laboratory of Deutsche Telekom Laboratories, the natural habitat of the SoundScape Renderer. Finally, we present a working draft for an XML file format enabling the exchange of acoustic scenes between different software systems. This Audio Scene Description Format (ASDF) is entirely open and we encourage everyone to participate in its further definition.
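Of the rendering methods this abstract lists, Vector Base Amplitude Panning is the easiest to sketch. For a 2-D loudspeaker pair, the source direction is expressed as a linear combination of the two loudspeaker direction vectors, and the resulting gains are normalized to unit power; this is a generic textbook formulation, not the SoundScape Renderer's code:

```python
import math

def vbap_2d(source_az, spk1_az, spk2_az):
    """Amplitude-panning gains for a 2-D loudspeaker pair (angles in
    radians): solve p = g1*l1 + g2*l2 for unit direction vectors,
    then normalize so that g1^2 + g2^2 = 1."""
    p = (math.cos(source_az), math.sin(source_az))
    l1 = (math.cos(spk1_az), math.sin(spk1_az))
    l2 = (math.cos(spk2_az), math.sin(spk2_az))
    det = l1[0] * l2[1] - l1[1] * l2[0]           # invert the 2x2 base matrix
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# Source straight ahead between speakers at +/-45 degrees: equal gains.
gains = vbap_2d(0.0, math.radians(45), math.radians(-45))
```

With more than two loudspeakers, the renderer picks the pair (or, in 3-D, the triplet) whose arc contains the source direction and applies the same solve.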
Proc. 2000 International Computer Music Conference, 2000
We present a case study of sound production and performance that optimize the interactivity of model-based VR systems. We analyze problems for audio presentation in VR architectures and we demonstrate solutions obtained by a model-based data-driven component architecture that supports interactive scheduling. Criteria and a protocol for coupling jMax and VSS software are described. We conclude with recommendations for diagnostic tools, sound authoring middleware, and further research on sound feedback ...
Proceedings of the 17th International Audio Mostly Conference (AM ’22), 2022
How can a virtual sound laboratory allow for new and exciting ways of sonic interaction in the context of the arts? Our project addresses this question by developing the virtual sound lab 'OpenSoundLab' (an open-source fork of 'SoundStage VR' by Logan Olson), which introduces users to the artistic and musical production of sonic media with the help of the 'Meta Quest 2' VR goggles. The aim is to combine the physical experience of working on spatial experimental systems, which is often perceived as positive and productive, with the advantages of digital tools, and thus to enable independent learning and experimentation. The virtual lab allows users to become familiar with the basics of creative sound generation and processing. Specially produced video tutorials, which can be viewed at any time within the virtual environment, play a central role here and make it possible to study in individual lab environments independently of time and place. Furthermore, 'OpenSoundLab' may serve as an open-source tool for the professional and academic community of musicians, performers, and artists alike. In our reflection, we develop the notion of 'cooking' sound while 'flowing' in a mixed environment and apply this to experimental work in a virtual sound laboratory. CCS CONCEPTS • Applied computing → Arts and humanities; Sound and music computing; Media arts; Performing arts.
2010
Sounds are (almost) always heard and perceived as parts of greater contexts. How we hear a sound depends on factors such as other sounds present, the acoustic properties of the place where the sound is heard, and the distance and direction to the sound source. Moreover, whether the sound bears any meaning to us, and what that meaning is, depends largely on the listener's interpretation of the sound, based on memories, previous experiences, and so on. When designing sounds for all sorts of applications, it is crucial not only to evaluate the sound in isolation in the design environment, but also to test the sound in the possible greater contexts where it will be used and heard. One way to do this is to sonically simulate one or more environments and use these simulations as contexts against which to test designed sounds. In this paper we report on a project in which we have developed a system for simulating the sounding dimension of physical environments. The system consists of a software...
Proceedings of the AES 35th International Conference: …, 2009
Phya is an open-source C++ library that facilitates physically motivated audio in virtual environments. A review is presented of recent developments in the context of game audio, including the launch of VFoley, a project using Phya as the basis for a fully fledged virtual sound design environment. This will enable sound designers to rapidly produce rich Foley content from within a virtual environment and to author enhanced objects for use by Phya-enabled applications.