Papers by Mike Wozniewski
Proceedings of the 2006 …, Jan 1, 2006
Traditional uses of virtual audio environments tend to focus on perceptually accurate acoustic representations. Though spatialization of sound sources is important, it is necessary to leverage control of the sonic representation when considering musical applications. The proposed framework allows for the creation of perceptually immersive scenes that function as musical instruments. Loudspeakers and microphones are modeled within the scene along with the listener/performer, creating a navigable 3D sonic space where sound sources and sinks process audio according to user-defined spatial mappings.

Proceedings of International …, Jan 1, 2006
Immersive virtual environments offer the possibility of natural interaction with a virtual scene that is familiar to users because it is based on everyday activity. The use of such environments for the representation and control of interactive musical systems remains largely unexplored. We propose a paradigm for working with sound and music in a physical context, and develop a framework that allows for the creation of spatialized audio scenes. The framework uses structures called soundNodes, soundConnections, and DSP graphs to organize audio scene content, and offers greater control compared to other representations. 3-D simulation with physical modelling is used to define how audio is processed, and can offer users a high degree of expressive interaction with sound, particularly when the rules for sound propagation are bent. Sound sources and sinks are modelled within the scene along with the user/listener/performer, creating a navigable 3-D sonic space for sound-engineering, musical creation, listening and performance.
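The soundNode/soundConnection organization described above can be pictured as a small scene graph in which each connection's gain is derived from the spatial relation between a source and a sink. The following is a minimal sketch, not the paper's implementation: the class names come from the abstract, but the positions, the 1/d rolloff, and all method names are illustrative assumptions.

```python
import math

class SoundNode:
    """A located sound source or sink in the 3-D scene (name from the paper; body is a sketch)."""
    def __init__(self, name, position):
        self.name = name
        self.position = position  # (x, y, z) in metres

class SoundConnection:
    """Directed link from a source node to a sink node.

    Gain follows a simple 1/d distance rolloff, a stand-in for the
    physical-modelling rules the framework lets users bend.
    """
    def __init__(self, source, sink):
        self.source = source
        self.sink = sink

    def gain(self):
        dx, dy, dz = (a - b for a, b in zip(self.source.position, self.sink.position))
        distance = math.sqrt(dx * dx + dy * dy + dz * dz)
        return 1.0 / max(distance, 1.0)  # clamp so gain never exceeds 1 near the source

# A tiny DSP graph: one source feeding one sink, 5 m apart.
src = SoundNode("guitar", (0.0, 0.0, 0.0))
sink = SoundNode("listener", (3.0, 4.0, 0.0))
conn = SoundConnection(src, sink)
print(conn.gain())  # distance 5 -> gain 0.2
```

Because the propagation rule lives in `SoundConnection.gain`, swapping in a non-physical mapping (the "bent" rules the abstract mentions) only touches that one method.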

… Conference on Digital Audio …, Jan 1, 2006
In an effort to find a better suited interface for musical performance, a novel approach has been discovered and developed. At the heart of this approach is the concept of physical interaction with sound in space, where sound processing occurs at various 3-D locations and sending sound signals from one area to another is based on physical models of sound propagation. The control is based on a gestural vocabulary that is familiar to users, involving natural spatial interaction such as translating, rotating, and pointing in 3-D. This research presents a framework to deal with realtime control of 3-D audio, and describes how to construct audio scenes to accomplish various musical tasks. The generality and effectiveness of this approach has enabled us to reimplement several conventional applications, with the benefit of a substantially more powerful interface, and has further led to the conceptualization of several novel applications.

… Conference on New …, Jan 1, 2008
New application spaces and artistic forms can emerge when users are freed from constraints. In the general case of human-computer interfaces, users are often confined to a fixed location, severely limiting mobility. To overcome this constraint in the context of musical interaction, we present a system to manage large-scale collaborative mobile audio environments, driven by user movement. Multiple participants navigate through physical space while sharing overlaid virtual elements. Each user is equipped with a mobile computing device, GPS receiver, orientation sensor, microphone, headphones, or various combinations of these technologies. We investigate methods of location tracking, wireless audio streaming, and state management between mobile devices and centralized servers. The result is a system that allows mobile users, with subjective 3-D audio rendering, to share virtual scenes. The audio elements of these scenes can be organized into large-scale spatial audio interfaces, thus allowing for immersive mobile performance, locative audio installations, and many new forms of collaborative sonic activity.
Proceedings of International …, Jan 1, 2007
We present a method for user-specific audio rendering of a virtual environment that is shared by multiple participants. The technique differs from methods such as amplitude differencing, HRTF filtering, and wave field synthesis. Instead we model virtual microphones within the 3-D scene, each of which captures audio to be rendered to a loudspeaker. Spatialization of sound sources is accomplished via acoustic physical modelling, yet our approach also allows for localized signal processing within the scene. In order to control the flow of sound within the scene, the user has the ability to steer audio in specific directions. This paradigm leads to many novel applications where groups of individuals can share one continuous interactive sonic space.
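One way to picture the virtual-microphone idea above: each microphone applies a gain to a source that combines distance rolloff with a directional pattern, which is also how steered audio can favour one direction over another. This is a hedged sketch under assumed models (a cardioid directivity and 1/d rolloff); the paper's actual acoustic physical modelling is not specified here, and the function name and parameters are illustrative.

```python
import math

def mic_gain(source_pos, mic_pos, mic_aim):
    """Gain a virtual microphone applies to a source signal.

    Combines 1/d distance rolloff with a cardioid directivity toward
    mic_aim (a unit vector): 1 on-axis, 0 directly behind the mic.
    A hypothetical stand-in for the paper's physical modelling.
    """
    vec = [s - m for s, m in zip(source_pos, mic_pos)]
    dist = math.sqrt(sum(v * v for v in vec))
    if dist == 0.0:
        return 1.0  # source coincident with the mic
    unit = [v / dist for v in vec]
    cos_theta = sum(u * a for u, a in zip(unit, mic_aim))
    directivity = 0.5 * (1.0 + cos_theta)  # cardioid pattern
    return directivity / max(dist, 1.0)

# A source 2 m directly on-axis: full directivity, distance gain 1/2.
print(mic_gain((2.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 0.5
# The same source directly behind the mic is rejected entirely.
print(mic_gain((-2.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 0.0
```

Rendering one such microphone per loudspeaker, rather than per listener, is what distinguishes this approach from HRTF-style binaural methods: the spatial image falls out of where the virtual microphones sit in the shared scene.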
A framework for collaborative 3d visualization and manipulation in an immersive space using an untethered bimanual gestural interface
Virtual Reality Systems and …, Jan 1, 2004
Pure Data Convention, …, Jan 1, 2007
We present a Pure Data library for managing 3-D sound and accomplishing signal processing using spatial relations. The framework is intended to support applications in the areas of immersive audio, virtual/augmented reality systems, audiovisual performance, multimodal sound installations, acoustic simulation, 3-D audio mixing, and many more.

Mobile Music Workshop …, Jan 1, 2008
We demonstrate that musical performance can take place in a large-scale augmented reality setting. With the use of mobile computers equipped with GPS receivers, we allow a performer to navigate through an outdoor space while interacting with an overlaid virtual audio environment. The scene is segregated into zones, with attractive forces that keep the virtual representation of the performer locked in place, thus overcoming the inaccuracies of GPS technology. Each zone is designed with particular musical potential, provided by a spatial arrangement of interactive audio elements that surround the user in that location. A subjective 3-D audio rendering is provided via headphones, and users are able to input audio at their locations, steering their sound towards sound effects of interest. An objective 3-D rendering of the entire scene can be provided to an audience in a concert hall or gallery space nearby.
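The zone-attraction idea above can be reduced to a simple rule: while a noisy GPS fix lies within a zone's capture radius, the performer's virtual position is pinned to that zone's centre. A minimal sketch, assuming a hypothetical 15 m capture radius and illustrative zone coordinates; the paper's actual attraction model is not specified here.

```python
import math

def snap_to_zone(gps_pos, zone_centres, capture_radius=15.0):
    """Pin a noisy GPS fix to the nearest zone centre when within
    capture_radius (metres); otherwise return the raw fix unchanged.
    Radius and zone layout are illustrative assumptions."""
    nearest = min(zone_centres, key=lambda c: math.dist(gps_pos, c))
    if math.dist(gps_pos, nearest) <= capture_radius:
        return nearest
    return gps_pos

# Three zones laid out in local metric coordinates (illustrative).
zones = [(0.0, 0.0), (50.0, 0.0), (0.0, 60.0)]

print(snap_to_zone((4.0, 3.0), zones))    # 5 m from (0, 0) -> snapped to the zone centre
print(snap_to_zone((25.0, 25.0), zones))  # far from every zone -> raw fix passes through
```

Because jitter inside the capture radius collapses to a single point, the interactive audio elements arranged around each zone centre stay stable relative to the performer even as the raw GPS fix wanders by several metres.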
sheefa.net
We developed a paradigm for interaction with spatial audio in virtual environments, where users are immersed in a three-dimensional sound-processing world. Audio signals may be steered along spatial pathways, and signal processing takes place at specific 3-D locations. Although a variety of new applications can be conceived with this framework, user mobility is typically limited because the current implementation is based on a centralized architecture in order to support the processing and rendering demands. This paper describes some of the core issues involved in the challenge of migrating this spatial audio architecture to an environment of wireless, mobile devices, so as to support multi-user interaction within distributed sonic spaces.
cim.mcgill.ca
We describe a computer game design that employs interface mechanisms fostering a greater sense of player immersion than is typically present in other games. The system uses a large-scale projection display, video-based body position tracking, and bimanual gestural input for interaction. We describe these mechanisms and their implementation in detail, highlighting our user-centered design process. Finally, we describe an experiment comparing our interaction mechanisms with conventional game controllers. Test subjects preferred our interface overall, finding it easier to learn and use.

IFIP Lecture Notes in …, Jan 1, 2011
Ubiquitous computing architectures enable interaction and collaboration in multi-user applications. We explore the challenges of integrating the disparate services required in such architectures and describe how we have met these challenges in the context of a real-world application that operates on heterogeneous hardware and run-time environments. As a compelling example, we consider the role of ubiquitous computing to support the needs of a distributed multi-user game, including mobility, mutual awareness, and geo-localization. The game presented here, "SoundPark", is played in a mixed-reality environment, in which the physical space is augmented with computer-generated audio and graphical content, and the players communicate frequently over a low-latency audio channel. Our experience designing and developing the game motivates significant discussion related to issues of general relevance to ubiquitous game architectures, including integration of heterogeneous components, monitoring, remote control and scalability.
Ménagerie imaginaire
Proceedings of the 7th …, Jan 1, 2007
Our work leading to this piece began with the search for a better way to interact with electronic sound during performance. Taking the current trend towards performing on stage with laptop computers, we have conceived of a radically different arrangement where performer motion is not only unrestricted, but actually serves a principal role in the performance interface. We abandon conventional