1998
COOLVR (Complete Object Oriented Library for Virtual Reality) is a toolkit currently being developed at the Graphics, Visualization, and Usability Center (GVU) at Georgia Tech. The toolkit allows programmers to easily create virtual environments (VEs) that compile cross-platform. Unlike most VE toolkits, which focus effort primarily on the visual sense, COOLVR aims to equally engage …
Stereoscopic Displays and Virtual Reality Systems III, 1996
Sound represents a largely untapped source of realism in Virtual Environments (VEs). In the real world, sound constantly surrounds us and pulls us into our world. In VEs, sound enhances the immersiveness of the simulation and provides valuable information about the environment. While there has been a growing interest in integrating sound into VE interfaces, current technology has not brought about its widespread use. This, we believe, can be attributed to the lack of proper tools for modeling and rendering the auditory world. We have been investigating some of the problems which we believe are pivotal to the widespread use of sound in VE interfaces. As a result of this work, we have developed the Virtual Audio Server (VAS). VAS is a distributed, real-time spatial sound generation server. It provides high level abstractions for modeling the auditory world and the events which occur in the world. VAS supports both sampled and synthetic sound sources and provides a device independent interface to spatialization hardware. Resource management schemes can easily be integrated into the server to manage the real-time generation of sound. We are currently investigating possible approaches to this important problem.
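The "high level abstractions for modeling the auditory world" that the abstract describes can be pictured as a server that tracks sources and a listener and decides how each source should be rendered. The following sketch is purely illustrative; every class and method name here is hypothetical and does not come from the actual VAS API.

```python
from dataclasses import dataclass

@dataclass
class SoundSource:
    """A sound source placed at a position in the virtual world."""
    name: str
    position: tuple  # (x, y, z) in world coordinates

class AudioServer:
    """Toy stand-in for a spatial sound server: it tracks sources and a
    listener, and reports the distance-based gain it would apply before
    handing the signal to spatialization hardware."""

    def __init__(self):
        self.sources = {}
        self.listener = (0.0, 0.0, 0.0)

    def add_source(self, src: SoundSource):
        self.sources[src.name] = src

    def gain_for(self, name: str) -> float:
        # Simple inverse-distance attenuation, clamped at 1.0.
        sx, sy, sz = self.sources[name].position
        lx, ly, lz = self.listener
        d = ((sx - lx) ** 2 + (sy - ly) ** 2 + (sz - lz) ** 2) ** 0.5
        return min(1.0, 1.0 / max(d, 1e-9))

server = AudioServer()
server.add_source(SoundSource("door_slam", (3.0, 0.0, 4.0)))
# Source at distance 5 from the origin listener -> gain 1/5.
```

A real server would additionally schedule world events (collisions, triggers) against these sources and manage rendering resources, as the abstract describes.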
Proceedings of the AES 35th International Conference: …, 2009
Phya is an open source C++ library that facilitates physically motivated audio in virtual environments. A review is presented of recent developments in the context of game audio, including the launch of VFoley, a project using Phya as the basis for a fully fledged virtual sound design environment. This will enable sound designers to rapidly produce rich Foley content from within a virtual environment, and to author enhanced objects for use by Phya-enabled applications.
Pure Data Convention, …, 2007
We present a Pure Data library for managing 3-D sound and accomplishing signal processing using spatial relations. The framework is intended to support applications in the areas of immersive audio, virtual/augmented reality systems, audiovisual performance, multimodal sound installations, acoustic simulation, 3-D audio mixing, and many more.
During the last decade, Virtual Reality (VR) systems have progressed from laboratory experiments into serious and valuable tools. The number of useful applications has grown on a large scale, covering conventional use, e.g., in science, design, medicine and engineering, as well as more visionary applications such as creating virtual spaces that aim to feel real. However, the high capabilities of today's virtual reality systems are mostly limited to first-class visual rendering, which limits their suitability for fully immersive applications. For general application, VR systems should feature more than one modality in order to broaden their range of applications. The CAVE-like immersive environment run at RWTH Aachen University combines state-of-the-art visualization and auralization with almost no constraints on user interaction. In this article a summary of the concept, the features and the performance of our VR system is given. The system features a 3D sketching interface that allows controlling the application in a very natural way by simple gestures. The sound rendering engine relies on present-day knowledge of Virtual Acoustics and enables a physically accurate simulation of sound propagation in complex environments, including important wave effects such as sound scattering, airborne sound insulation between rooms and sound diffraction. Despite this realistic sound field rendering, not only spatially distributed and freely movable sound sources and receivers are supported, but also modifications and manipulations of the environment itself. The auralization concept is founded on pure FIR filtering, realized by highly parallelized non-uniformly partitioned convolutions. A dynamic crosstalk cancellation system performs the sound reproduction, delivering binaural signals to the user without the need for headphones. The significant computational complexity is handled by distributed computation on PC clusters that drive the simulation in real time even for huge audio-visual scenarios.
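The partitioned convolution mentioned in this abstract exploits the linearity of FIR filtering: a long impulse response is split into partitions, each partition filters the input independently, and the partial results are summed at the partition's delay. A minimal uniform-partition sketch (real systems use non-uniform partitions and FFTs for speed, and process the input block by block for low latency):

```python
def convolve(x, h):
    """Direct linear convolution, for reference."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def partitioned_fir(x, h, block=2):
    """Filter x with h by splitting h into `block`-sample partitions.

    Each partition is convolved with the input separately and the partial
    result is added at the partition's offset k. Because convolution is
    linear in h, the sum equals the full convolution, but each partition
    can be computed independently (and in parallel, as in the abstract)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k in range(0, len(h), block):
        seg = convolve(x, h[k:k + block])
        for i, v in enumerate(seg):
            y[k + i] += v
    return y
```

In practice each partition's convolution is done in the frequency domain, and short partitions near the start of the impulse response keep the input-to-output latency small while long partitions amortize the cost of the reverberant tail.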
Proceedings of Intl. Conf. on Auditory …, 2000
The Virtual Audio Server (VAS) is a toolkit designed for exploring problems in the creation of realistic Virtual Sonic Environments (VSE). The architecture of VAS provides an extensible framework in which new ideas can be quickly integrated into the system and experimented with.
2011
The effectiveness of virtual environments depends largely on how efficiently they recreate the real world. In the case of auditory virtual environments, the importance of accurate recreation is heightened since there are no visual cues to assist perception, as there are in audiovisual virtual environments. In this paper, we present the Immersive Audio Environment (IAE), an easily constructible and portable structure capable of 3-D sound auralization with very high spatial resolution. A novel method for acoustically positioning loudspeakers in space, which the IAE requires for the simulation of sound sources, is presented in this paper. We also present a method to calibrate the loudspeakers of an audio system when they are not all of the same type. Our contribution is a system that uses existing techniques and modifications of them, notably Vector Base Amplitude Panning (VBAP), to recreate an audio battle environment. The IAE recreates the audio effects of battle scenes and can be used as a virtual battle environment for training soldiers. The results of subjective tests show a very low error standard deviation for azimuth and elevation angles and a high correlation between user responses and true angles.
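The core of VBAP is small enough to sketch: for a pair of loudspeakers, the gains are found by expressing the unit vector toward the virtual source as a linear combination of the unit vectors toward the two loudspeakers, then normalizing for constant power. The 2-D sketch below is a generic illustration of that idea, not code from the paper:

```python
import math

def vbap_pair(source_az, spk_az1, spk_az2):
    """2-D Vector Base Amplitude Panning for one loudspeaker pair.

    Solves g1*l1 + g2*l2 = p, where l1 and l2 are unit vectors toward
    the two loudspeakers and p points toward the virtual source, then
    normalizes the gains to constant power. Angles are in degrees."""
    def unit(az):
        a = math.radians(az)
        return (math.cos(a), math.sin(a))
    p = unit(source_az)
    l1, l2 = unit(spk_az1), unit(spk_az2)
    det = l1[0] * l2[1] - l1[1] * l2[0]       # 2x2 Cramer's rule
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)                 # constant-power normalization
    return g1 / norm, g2 / norm

# A source straight ahead of a +/-45 degree pair gets equal gains.
g1, g2 = vbap_pair(0.0, 45.0, -45.0)
```

The 3-D case used for elevation works the same way with loudspeaker triplets and a 3x3 matrix inverse.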
2009
State-of-the-art synthesizers provide numerous controllers through which the user may create a great variety of sounds. The real-time control of so many parameters, though, is often problematic for the user. The goal of our work has been to study the physical and remote control of such audio parameters. For this purpose, a system was developed that processes video input, recognizes movements of the user's body parts (hands, head, etc.) and translates them into changes in audio parameters of an electronic music instrument (tone, cutoff, etc.). Basic techniques of digital audio synthesis were also studied, and a prototype synthesizer was developed that is controlled physically and remotely. The study was not limited to technical details, but also covered human perception and cognition, for both the audio and the video parts of the system's implementation.
2010
The SMART-I² aims at creating a precise and coherent virtual environment by providing users with both audio and visual accurate localization cues. Wave field synthesis, for audio rendering, and tracked passive stereoscopy, for visual rendering, individually permit high quality spatial immersion within an extended space.
Theory and Practice …, 2005
In this paper, we present a framework for experimenting with virtual environments. The architecture of the system VRECKO is designed for the rapid prototyping of techniques for human-computer interaction. The architecture is flexible, but simple. Virtual environment entities and other components can be configured at run time. We demonstrate the flexibility of the approach on several examples of experiments and tools which were realized in VRECKO.
Proceedings IEEE Virtual Reality 2002
VESS is a suite of libraries designed to aid in the creation of applications for virtual reality research. It combines the power of a scene graph library, a flexible input device library, and support for other senses (audio, haptics, etc.). It then adds higher-level libraries that integrate graphics and user input, creating a single, coherent platform that is flexible and easy to use. VESS handles the technical challenges facing virtual reality applications, leaving the developer free to focus on the details of the application itself.
IEEE Computer Graphics and Applications, 2002
Virtual Reality (VR) is on the edge of becoming the next major delivery platform for digital content. Technological progress has made accessible a variety of new tools, allowing development of the necessary hardware and software for VR interaction. The importance of audio in this process is increasing. Audio can intensify the three-dimensional experience and immersion of game players and users in other applications. This research focuses on determining and implementing the necessary elements for a system able to deliver a Virtual Auditory Display (VAD). The system structure is adapted to fit a newly emerging gaming platform: smartphones. Developing for mobile platforms requires consideration of many constraints such as memory, processor load and development components. The result is a real-time dynamic VAD, manageable by mobile devices, and able to trigger spatial auditory perception across the azimuthal plane. This study also shows how the VAD, developed through custom implementation of selected Head-Related Transfer Functions (HRTFs), is able to generate azimuthal localization of sound events in VR scenarios.
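An HRTF-based VAD renders azimuth by filtering the signal with a measured per-ear response. As a stand-in for measured HRTFs, the crude sketch below generates only the two dominant lateralization cues, interaural time difference (a Woodworth-style delay) and a simple level difference; it is an illustrative toy, not the paper's implementation.

```python
import math

def crude_binaural(signal, azimuth_deg, fs=44100, head_radius=0.0875):
    """Crude azimuthal cue generation: interaural time difference plus a
    simple interaural level difference. A real VAD instead convolves each
    ear's signal with a measured HRTF; this only shows where the
    lateralization cues come from. Returns (left, right) sample lists."""
    theta = math.radians(azimuth_deg)
    c = 343.0  # speed of sound, m/s
    itd = head_radius / c * (theta + math.sin(theta))  # seconds
    delay = int(round(abs(itd) * fs))                  # samples
    # Level difference: attenuate the far ear as the source moves aside.
    far_gain = 1.0 - 0.5 * abs(math.sin(theta))
    near = list(signal) + [0.0] * delay
    far = [0.0] * delay + [far_gain * s for s in signal]
    # Positive azimuth = source to the right: the right ear is the near ear.
    return (far, near) if azimuth_deg >= 0 else (near, far)

# For a source at 90 degrees right, the right channel leads and is louder.
left, right = crude_binaural([1.0] * 10, 90.0)
```

Measured HRTFs additionally encode the pinna and torso spectral cues that this toy model cannot reproduce, which is why the paper selects and customizes real HRTF sets.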
Presence: Teleoperators and Virtual Environments, 2008
To be immersed in a virtual environment, the user must be presented with plausible sensory input including auditory cues. A virtual (three-dimensional) audio display aims to allow the user to perceive the position of a sound source at an arbitrary position in three-dimensional space despite the fact that the generated sound may be emanating from a fixed number of loudspeakers at fixed positions in space or a pair of headphones. The foundation of virtual audio rests on the development of technology to present auditory signals to the listener's ears so that these signals are perceptually equivalent to those the listener would receive in the environment being simulated. This paper reviews the human perceptual and technical literature relevant to the modeling and generation of accurate audio displays for virtual environments. Approaches to acoustical environment simulation are summarized and the advantages and disadvantages of the various approaches are presented.
New Worlds of Learning, 2000
Acknowledgements: Conference Organising Committee, Apple University Consortium Academic and Developer's Conference 2000
Procedia Technology, 2012
This paper focuses on the behavior of sound when combining navigation, 3D objects and 3D sound in different virtual worlds displayed on multiple screens. Our project allows building different virtual environments using one or more screens, with a full navigation system and four ways of manipulation: keyboard, head tracker, remote control, and input files with different predefined tours. Furthermore, with this project we can also build a Wheatstone-type stereoscope. We have implemented three virtual worlds for the different environments: the first shows a collection of anaglyphic photographs; the second is a collection of photographs of Roman art; the third is a computing center.
1986
A real-time virtual audio reality model has been created. The system includes model-based sound synthesizers, geometrical room acoustics modeling, binaural auralization for headphone or loudspeaker listening, and high-quality animation. This paper discusses the following subsystems of the designed environment: the implementation of the audio processing software and hardware, and the design of a dedicated multiprocessor DSP hardware platform. The design goal of the overall …
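Geometrical room acoustics modeling of the kind mentioned here is commonly built on the image-source method: each wall reflection is replaced by a mirrored copy of the source, and each image's distance to the receiver gives the reflection's delay. The following is a generic first-order sketch for a shoebox room, not code from the paper:

```python
def first_order_reflections(room, src, rcv, c=343.0):
    """First-order image-source model in a shoebox room.

    Mirrors the source in each of the six walls and returns
    (delay_seconds, distance) for the direct path and the six
    first-order reflections, sorted by arrival time.
    room = (Lx, Ly, Lz); src and rcv are (x, y, z) inside the room."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    images = [tuple(src)]
    for axis in range(3):
        lo = list(src); lo[axis] = -src[axis]                   # wall at 0
        hi = list(src); hi[axis] = 2 * room[axis] - src[axis]   # wall at L
        images += [tuple(lo), tuple(hi)]
    paths = sorted(dist(img, rcv) for img in images)
    return [(d / c, d) for d in paths]

# Direct path plus six early reflections for a 4 m cubic room.
paths = first_order_reflections((4.0, 4.0, 4.0), (1.0, 1.0, 1.0), (3.0, 1.0, 1.0))
```

Higher-order reflections come from recursively mirroring the images; each path then contributes a delayed, attenuated tap to the binaural FIR filters used for auralization.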
International Journal of Human-Computer Studies, 2009
This document describes the AlloBrain, the debut content created for presentation in the AlloSphere at the University of California, Santa Barbara, and the Cosm toolkit for the prototyping of interactive immersive environments using higher-order Ambisonics and stereographic projections. The Cosm toolkit was developed in order to support the prototyping of immersive applications that involve both visual and sonic interaction design. Design considerations and implementation details of both the Cosm toolkit and the AlloBrain are described in detail, as well as the development of custom human-computer interfaces and new audiovisual interaction methodologies within a virtual environment.
2012 IEEE Virtual Reality (VR), 2012
Immersive 3D Virtual Environments (VEs) have become affordable for many research centers. However, a complete solution needs several integration steps to be fully operational. Some of these steps are difficult to accomplish and require an uncommon combination of different skills. This tutorial presents the most recent techniques developed to address this problem, from displays to software tools. The hardware in a typical VR installation combines projectors, screens, speakers, computers, tracking and I/O devices. The tutorial will discuss hardware options, explaining their advantages and disadvantages. We will cover design decisions from basic software and hardware design, through user tracking, multimodal human-computer interfaces and acoustic rendering, to how to administrate the whole solution. Additionally, we will provide an introduction to existing tracking technologies, explaining how the most common devices work, while focusing on infrared optical tracking. Finally, we briefly cover integration software and middleware developed for most VE settings.