Procedia Technology, 2012
This paper focuses on the behavior of sound when combining navigation, 3D objects, and 3D sound in different virtual worlds displayed on multiple screens. Our project allows building different virtual environments using one or more screens, with a full navigation system and four ways of interacting: keyboard, head tracker, remote control, and input files containing predefined tours. The project can also be used to build a Wheatstone-type stereoscope. We have implemented three virtual worlds for the different environments: the first shows a collection of anaglyphic photographs; the second is a collection of photographs of Roman art; the third is a computing center.
2006
This article presents the approach we followed for the creation of our virtual reality room, from both the hardware and the software points of view. Our main goal, to build a virtual reality room that is as cheap as possible and easily transportable, was constrained by our knowledge of the mechanisms of human perception. Like any virtual reality system, our room aims to immerse users in a place where they can feel the presence of the virtual objects and their own presence in the virtual environment.
Where & What, 2015
Navigation is commonly conducted mainly through information provided by visual perception. This work proposes sensory substitution as a solution for enabling navigation in situations where visual perception is limited or impeded. The approach is tested through the development of an audio-only game in which the user has to build a mental model of the playing field and perform object-recognition and position-recognition tasks using only the 3D audio output reproduced by the system. The paper describes the design and evaluation process that led to the final prototype. Compared with previous similar works, the Where&What project illustrates a different approach to using sensory substitution for navigation, focused on highly learnable audio outputs, the use of common technologies, and the creation of standard sounds usable in future related work.
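For orientation, the positional cue at the heart of such an audio-only interface can be approximated with a constant-power stereo pan that maps an object's azimuth to left/right gains. The sketch below is illustrative only: the Where&What system uses full 3D audio rendering, and the function name is an assumption, not taken from the paper.

import numpy as np

def pan_for_azimuth(azimuth_deg):
    # Constant-power stereo pan: map azimuth in [-90, 90] degrees to
    # (left, right) gains. A crude stand-in for real 3D audio rendering.
    theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)  # 0 .. pi/2
    return np.cos(theta), np.sin(theta)

print(pan_for_azimuth(0.0))   # straight ahead -> equal gains (~0.707 each)
print(pan_for_azimuth(90.0))  # hard right     -> (0.0, 1.0)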
Audio Mostly, 2006
Auditory authoring is an essential component in the design of virtual environments and describes the process of assigning sounds and voices to objects within a virtual 3D scene. In a broader sense, auditory authoring also includes the definition of dependencies between objects and different object states, as well as time- and user-dependent interactions in dynamic environments. Our system unifies these attributes within so-called auditory textures and allows an intuitive design of 3D auditory scenes for varying applications. Furthermore, it accounts for the differences in perception through the auditory channel and provides interactive and easy-to-use sonification and interaction techniques. In this paper we present the necessary concepts as well as a system for authoring 3D virtual auditory environments as they are used in computer games, augmented audio reality, and audio-based training simulations for the visually impaired; we focus especially on augmented audio reality and its associated applications. We provide details on the definition of 3D auditory environments along with techniques for their authoring and design, as well as an overview of the system itself and a discussion of several examples.
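The paper does not publish its data model, but the auditory texture it describes, sounds bound to object states plus user-dependent interaction triggers, suggests a simple mapping. The Python sketch below is a hypothetical minimal representation; all field names are assumptions.

from dataclasses import dataclass, field

@dataclass
class AuditoryTexture:
    # Hypothetical stand-in for the paper's "auditory texture": per-state
    # sound assignments plus interaction triggers for one scene object.
    object_id: str
    state_sounds: dict = field(default_factory=dict)  # object state -> sound asset
    on_interact: dict = field(default_factory=dict)   # user action  -> sound asset

    def sound_for(self, state):
        return self.state_sounds.get(state)

door = AuditoryTexture(
    object_id="door_01",
    state_sounds={"closed": "creak_idle.wav", "open": "wind_gap.wav"},
    on_interact={"use": "door_open.wav"},
)
print(door.sound_for("open"))  # -> "wind_gap.wav"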
International Journal of Human-Computer Studies, 2009
This document describes the AlloBrain, the debut content created for presentation in the AlloSphere at the University of California, Santa Barbara, and the Cosm toolkit for the prototyping of interactive immersive environments using higher-order Ambisonics and stereographic projections. The Cosm toolkit was developed in order to support the prototyping of immersive applications that involve both visual and sonic interaction design. Design considerations and implementation details of both the Cosm toolkit and the AlloBrain are described in detail, as well as the development of custom human-computer interfaces and new audiovisual interaction methodologies within a virtual environment.
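As a point of reference for the Ambisonics pipeline mentioned above: a first-order (B-format) encoder reduces to four trigonometric gain terms. The Python sketch below uses the traditional FuMa channel convention and is not taken from the Cosm toolkit itself.

import numpy as np

def encode_bformat(signal, azimuth, elevation):
    # First-order Ambisonic (B-format, FuMa) encoding of a mono signal.
    # azimuth/elevation in radians; returns the W, X, Y, Z channels.
    w = signal / np.sqrt(2.0)                          # omnidirectional
    x = signal * np.cos(azimuth) * np.cos(elevation)   # front-back
    y = signal * np.sin(azimuth) * np.cos(elevation)   # left-right
    z = signal * np.sin(elevation)                     # up-down
    return w, x, y, z

# Example: a source 90 degrees to the left, on the horizontal plane.
s = np.ones(4)
w, x, y, z = encode_bformat(s, np.pi / 2, 0.0)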
2001
We define four different tasks which are common in immersive visualization. Immersive visualization takes place in virtual environments, which provide an integrated system of 3D auditory and 3D visual display. The main objective of our research is to find the best possible ways to use audio in different tasks; in the long run, the goal is more efficient utilization of spatial audio in immersive visualization application areas. Results of our first experiment show that navigation is possible using auditory cues.
IEEE Visualization, 1990
The authors describe the real-time acoustic display capabilities developed for the virtual environment workstation (VIEW) project. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory objects or icons, can be designed using the auditory cue editor, which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with three-dimensional visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
Presence: Teleoperators and Virtual Environments, 2008
To be immersed in a virtual environment, the user must be presented with plausible sensory input including auditory cues. A virtual (three-dimensional) audio display aims to allow the user to perceive the position of a sound source at an arbitrary position in three-dimensional space despite the fact that the generated sound may be emanating from a fixed number of loudspeakers at fixed positions in space or a pair of headphones. The foundation of virtual audio rests on the development of technology to present auditory signals to the listener's ears so that these signals are perceptually equivalent to those the listener would receive in the environment being simulated. This paper reviews the human perceptual and technical literature relevant to the modeling and generation of accurate audio displays for virtual environments. Approaches to acoustical environment simulation are summarized and the advantages and disadvantages of the various approaches are presented.
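One standard headphone technique covered by such reviews is binaural rendering: filtering the source with head-related impulse responses (HRIRs) measured for the target direction, so that the signals at the ears approximate those of a real source. A minimal sketch, assuming the HRIR pair for one direction is already available as arrays:

import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    # Convolve a mono source with the left/right-ear impulse responses
    # for one direction; returns a (2, N) stereo buffer for headphones.
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

In practice the HRIRs are interpolated as the source or the listener's head moves; this sketch handles only a single static direction.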
2011
The effectiveness of virtual environments depends largely on how faithfully they recreate the real world. In the case of auditory virtual environments, accurate recreation is all the more important because there are no visual cues to assist perception, as there are in audiovisual virtual environments. In this paper, we present the Immersive Audio Environment (IAE), an easily constructible and portable structure capable of 3-D sound auralization with very high spatial resolution. We present a novel method for acoustically positioning loudspeakers in space, which the IAE requires for simulating sound sources, as well as a method for calibrating the loudspeakers of an audio system when they are not all of the same type. Our contribution is a system that combines existing techniques and modifications of them, notably Vector Based Amplitude Panning (VBAP), to recreate an audio battle environment. The IAE recreates the audio effects of battle scenes and can be used as a virtual battle environment for training soldiers. The results of subjective tests show a very low error standard deviation for azimuth and elevation angles and a high correlation between user responses and the true angles.
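The textbook formulation of VBAP (Pulkki, 1997), which the IAE builds on, computes the gains for a loudspeaker triplet by solving a 3x3 linear system in the speaker direction vectors. The sketch below is that generic formulation in Python, not the IAE's actual implementation.

import numpy as np

def vbap_gains(source_dir, speaker_dirs):
    # Solve L g = p, where the columns of L are the unit vectors toward
    # the three loudspeakers and p points toward the virtual source,
    # then normalize the gains for constant power.
    base = np.asarray(speaker_dirs, dtype=float).T
    gains = np.linalg.solve(base, np.asarray(source_dir, dtype=float))
    return gains / np.linalg.norm(gains)

# Example: speakers at +-45 degrees azimuth plus one overhead; a source
# straight ahead splits equally between the two frontal speakers.
deg = np.pi / 180.0
spk = [[np.cos(45 * deg),  np.sin(45 * deg), 0.0],
       [np.cos(-45 * deg), np.sin(-45 * deg), 0.0],
       [0.0, 0.0, 1.0]]
print(vbap_gains([1.0, 0.0, 0.0], spk))  # ~[0.707, 0.707, 0.0]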