2000
Immersive projection displays have played an important role in enabling large-format virtual reality systems such as the CAVE and CAVE-like devices, as well as the various immersive desks and desktop-like displays. However, these devices have so far played only a minor role in advancing the sense of immersion for conferencing systems. The Access Grid project, led by Argonne, is exploring the use of large-scale projection-based systems as the basis for building room-oriented collaboration and semi-immersive visualization systems. We believe these multiprojector systems will become common infrastructure in the future, largely because of their value for enabling group-to-group collaboration in an environment that can also support large-format projector-based visualization. Creating a strong sense of immersion is an important goal for future collaboration technologies. Immersion in conferencing applications implies that users can rely on natural sight and audio cues to facilitate interactions with participants at remote sites. The Access Grid is a low-cost environment aimed primarily at supporting conferencing applications, but it also enables semi-immersive visualization and, in particular, remote visualization. In this paper, we describe the current state of the Access Grid project and how it relates and compares to other environments. We also discuss augmentations to the Access Grid that will enable it to support more immersive visualizations. These enhancements include stereo, higher-performance rendering support, tracking, and non-uniform projection surfaces.
Lecture Notes in Computer Science, 2005
Conventional interaction in large-screen projection-based display systems allows only a "master user" to have full control over the application. We have developed the VRGEO Demonstrator application based on an interaction paradigm that allows multiple users to share large projection-based environment displays for co-located collaboration. Following SDG systems, we introduce a collaborative interface based on tracked PDAs and integrate common device metaphors into the interface to improve users' learning experience of the virtual environment system. The introduction of multiple workspaces in a virtual environment allows users to spread out data for analysis, making more effective use of the large screen space. Two extended informal evaluation sessions with application domain experts, together with demonstrations of the system, show that our collaborative interaction paradigm improves the learning experience and interactivity of the virtual environment.
2010
We present a system for dynamic projection on large, human-scale, moving projection screens and demonstrate this system for immersive visualization applications in several fields. We have designed and implemented efficient, low-cost methods for robust tracking of projection surfaces, and a method to provide high frame rate output for computationally-intensive, low frame rate applications. We present a distributed rendering environment which allows many projectors to work together to illuminate the projection surfaces. This physically immersive visualization environment promotes innovation and creativity in design and analysis applications and facilitates exploration of alternative visualization styles and modes. The system provides for multiple participants to interact in a shared environment in a natural manner. Our new human-scale user interface is intuitive and novice users require essentially no instruction to operate the visualization.
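One common way to provide high frame rate output on top of a computationally intensive, low frame rate application is to extrapolate the tracked pose of the moving projection surface between slow application updates, so the projected image keeps following the screen. The sketch below illustrates that general idea only; it is an assumption for illustration, not the paper's actual method, and the function name is hypothetical:

```python
import numpy as np

def extrapolate_position(p0, t0, p1, t1, t_now):
    """Linearly extrapolate a tracked screen position from the two most
    recent tracker samples: p0 at time t0 and p1 at time t1 (t1 > t0).

    The fast display loop calls this every output frame, even though the
    tracker (or the slow application) only delivers samples occasionally.
    """
    velocity = (p1 - p0) / (t1 - t0)        # estimated screen velocity
    return p1 + velocity * (t_now - t1)     # predicted position at t_now
```

For example, a screen that moved from x = 0 m at t = 0 s to x = 1 m at t = 1 s would be predicted at x = 1.5 m at t = 1.5 s; more elaborate predictors (e.g., Kalman filtering) reduce overshoot when the surface changes direction.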
2002
We describe Coliseum, a desktop system for immersive teleconferencing. Five cameras attached to a desktop LCD monitor are directed at a participant. View synthesis methods produce arbitrary-perspective renderings of the participant from these video streams, and transmit them to other participants. Combining these renderings in a shared synthetic environment gives the illusion of having remote participants interacting in a common space.
Proceedings of the …, 2000
ACM CHI Workshop on …, 2006
1 INTRODUCTION We envision situation rooms and research laboratories in which all the walls are made of seamless ultra-high-resolution displays fed by data streamed over ultra-high-speed networks from distantly located visualization and storage servers, and high definition ...
IEEE Computer Graphics and Applications, 2000
In a familiar scene from an old movie, generals huddle around a large map, pushing models of tanks and infantry regiments about to indicate the current battle situation. Today, the scene might include electronic displays and networked sensing technology, but the basic form would remain the same: a small group of domain experts surround and gesture toward a common data set, hoping to achieve consensus. This mode of decision making is pervasive, ranging in use from US Marine Corps command and control applications to product design review meetings. Such applications demonstrate the need for VR systems that accommodate small groups of people working in close proximity. Yet, while non-head-mounted, immersive displays perform well for single-person work, when used by small groups they are hampered by an unacceptably large degree of distortion between the head-tracked viewpoint and an untracked collaborator's perspective. What looks like a sphere to one user will look like an egg to another [2]. Solving this problem is critical. Decision makers and designers cannot jointly view and respond to data when all but one see incorrect images.
2008
The emergence of several trends, including the increased availability of wireless networks, the miniaturization of electronics and sensing technologies, and novel input and output devices, is creating a demand for integrated, full-time displays for use across a wide range of applications, including collaborative environments. In this paper, we present and discuss emerging visualization methods we are developing, particularly as they relate to deployable displays and displays worn on the body to support mobile users, as well as optical imaging technology that may be coupled to 3D visualization in the context of medical training and guided surgery.
Proceedings of the 2004 ACM SIGGRAPH international conference on Virtual Reality continuum and its applications in industry - VRCAI '04, 2004
This paper explores the possibilities of using portable devices in multiprojection environments, such as CAVEs, Panoramas and Power Walls. We propose and implement a tool to generate graphical interfaces in a straightforward manner. These interfaces are light and can be run on PDAs. The interface application communicates transparently with a graphic cluster, via any underlying network system, which processes the events and maintains the synchrony of the rendered images in real-time. This tool is part of Glass, a library for distributed computing. We present two examples of applications: Cathedral and Celestia.
1993
Several common systems satisfy some but not all of the VR definition above. Flight simulators provide vehicle tracking, not head tracking, and do not generally operate in binocular stereo. Omnimax theaters give a large angle of view [8], occasionally in stereo, but are not interactive. Head-tracked monitors [4][6] provide all but a large angle of view. Head-mounted displays (HMD) [13] and BOOMs [9] use motion of the actual display screens to achieve VR by our definition. Correct projection of the imagery on large screens can also create a VR experience, this being the subject of this paper. This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH '92 and Supercomputing '92 for computational science data is also mentioned.
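The off-axis perspective projection the CAVE relies on can be computed directly from the screen's corner positions and the tracked eye position. The following is a minimal sketch of that standard technique (the generalized frustum construction), assuming screen corners and eye are expressed in a common tracked coordinate frame; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def off_axis_frustum(eye, pa, pb, pc, near):
    """Compute glFrustum-style bounds (left, right, bottom, top) at the
    near plane for a tracked eye looking at an arbitrarily placed screen.

    pa, pb, pc: lower-left, lower-right, and upper-left screen corners
    (world space); eye: tracked eye position; near: near-plane distance.
    """
    vr = pb - pa; vr /= np.linalg.norm(vr)          # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)          # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal (toward eye)

    va, vb, vc = pa - eye, pb - eye, pc - eye       # eye-to-corner vectors
    d = -np.dot(va, vn)                             # eye-to-screen distance

    # Project corner extents onto the screen axes and scale to the near plane.
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top
```

When the eye is centered in front of the screen this reduces to an ordinary symmetric frustum; as the head moves off-axis, the bounds become asymmetric, which is what keeps the imagery on fixed CAVE walls perspectively correct for the tracked viewer.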
Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 2017
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multiuser visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites leads to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote-collaborator awareness and parallel task coordination. We believe this research will lead to better utilization of large-scale tiled display walls for distributed group work.
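The continuous-synchronization scheme the study favors can be pictured as every local change to the shared view state being broadcast to all sites the moment it happens. The toy model below is only a hedged illustration of that idea, not the study's implementation; all names are hypothetical:

```python
class SharedViewState:
    """Toy model of continuous synchronization between display-wall sites:
    any site's change to a shared key is pushed to every site immediately,
    so all walls converge on the same view state."""

    def __init__(self, n_sites=2):
        # Each site keeps its own copy of the shared visualization state.
        self.sites = [{"pan": (0, 0), "zoom": 1.0} for _ in range(n_sites)]

    def set(self, key, value):
        # Continuous sync: broadcast the change to every site at once.
        for state in self.sites:
            state[key] = value
```

An on-demand scheme, by contrast, would let each site's copy drift until a user explicitly pulls the remote state; the study's finding is that avoiding that drift improves awareness and task coordination.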
Electronic Imaging, 2019
One of the main shortcomings of most virtual reality display systems, be they head-mounted displays or projection-based systems like CAVEs, is that they can provide the correct perspective to only a single user. This is a significant limitation that reduces the applicability of virtual reality approaches for most kinds of group collaborative work, which is becoming more and more important in many disciplines. Different approaches have been tried to present multiple images to different users at the same time, including optical barriers, optical filtering, optical routing, time multiplexing, volumetric displays, and lightfield displays. This paper describes, discusses, and compares the different approaches that have been developed, and develops an evaluation approach to identify the most promising one for different usage scenarios.
IEEE Computer, 1999
In collaborative virtual reality (VR), the goal is to reproduce a face-to-face meeting in minute detail. Teleimmersion moves beyond this idea, integrating collaborative VR with audio- and video-conferencing that may involve data mining and heavy computation. In teleimmersion, collaborators at remote sites share the details of a virtual world that can autonomously control computation, query databases, and gather results. They don't meet in a room to discuss a car engine; they meet in the engine itself. The University of Illinois at Chicago's Electronic Visualization Laboratory (EVL) has hosted several applications that demonstrate rudimentary teleimmersion. All users are members of Cavern (CAVE Research Network) [<http://www.evl.uic.edu/cavern>], a collection of participating industrial and research institutions equipped with CAVE (Cave Automated Virtual Environment) and ImmersaDesk VR systems and high-performance computing resources, including high-speed networks. There are more than 100 CAVE and ImmersaDesk installations worldwide. The pressing challenge now is how to support collaborative work among Cavern users without having them worry about the details of sustaining a collaboration. Another problem is providing both synchronous and asynchronous collaboration. The authors detail how they've built new display devices to serve as more convenient teleimmersion endpoints and how they support their international networking infrastructure with sufficient bandwidth for the needs of teleimmersive applications.
Electronic Imaging
360-degree image and movie content has gained popularity in the media and the MICE (Meetings, Incentives, Conventions, and Exhibitions) industry in the last few years. There are three main reasons for this development. First, this media form has an immersive character; second, recording and presentation technology has made significant progress in resolution and quality; third, after a decade of dynamic growth, the MICE industry is focused on a disruptive change toward more digital-based solutions. 360-degree panoramas are particularly widespread in VR and AR technology. However, despite their high immersive potential, these forms of presentation have the disadvantage that users are isolated and have no social contact during the performance. Therefore, efforts have been made to project 360-degree content in specially equipped rooms or planetariums to enable a shared experience for the audience. One application area for 360-degree panoramas and films is conference rooms in hotels, conference centers, and other venues that create an immersive environment for their clients to stimulate creativity. This work aims to give an overview of the various application scenarios and usability possibilities for such conference rooms. In particular, we consider applications in construction, control, tourism, medicine, art exhibition, architecture, music performance, education, partying, organizing and carrying out events, and video conferencing. These applications and use scenarios were successfully tested, implemented, and evaluated in the 360-degree conference room "Dortmund" in the Hotel Park Soltau in Soltau, Germany [1]. Finally, the advantages, challenges, and limitations of the proposed method are described.
2007
Low-cost computing, displays, and networking are making it possible to create collaboration environments that truly bridge distance. This paper describes EVL's work since 1992 in pursuit of collaboration environments that seamlessly merge high-resolution 2D information with 3D information, incorporating a variety of viewing and interaction modalities.
In this paper we present a framework for the design and evaluation of distributed, collaborative 3D interaction, focusing on projection-based systems. We discuss the issues of collaborative 3D interaction using audio/video for face-to-face communication and the differences that arise in rear-projection-based virtual environments.
IEEE Transactions on Visualization and Computer Graphics, 2000
Fig. 1. This figure shows some of our applications in action. From left to right: our collaborative map visualization application with two users visualizing different parts of the map at the same time on our 3 × 3 array of nine projectors; our collaborative emergency management application with two users drawing a path to a hazardous location and dispatching teams of first responders on our 3 × 3 array of nine projectors; digital graffiti drawn using our collaborative graffiti application on only six of the projectors (we deliberately did not edge-blend the projectors, to show the six projectors clearly); four children working together on our digital graffiti application on a 3 × 3 array of nine projectors.
1998
Large-scale immersive displays have an established history in planetaria and large-format film theaters. Video-based immersive theaters are now emerging, and promise to revolutionize group entertainment and education as the computational power and software applications become available to fully exploit these environments.
Proceedings of the 20th annual conference on Computer graphics and interactive techniques - SIGGRAPH '93, 1993
Journal of Imaging Science and Technology, 2018
Display systems suitable for virtual reality applications can prove useful for a variety of domains. The emergence of low-cost head-mounted displays has reinvigorated the area of virtual reality significantly. However, there are still applications where full-scale CAVE-type display systems are better suited. Moreover, the cost of most CAVE-type display systems is typically rather high, thereby making them difficult to justify in a research setting. This article provides a design for less costly display technology, combined with inexpensive input devices, that implements a virtual environment paradigm suitable for such full-scale visualization and simulation tasks. The focus is on cost-effective display technology that does not break a researcher's budget. The software framework utilizing these displays combines different visualization and graphics packages to create an easy-to-use software environment that runs readily on this display. A user study was performed to evaluate the display technology and its usefulness for virtual reality tasks using an accepted measure: presence. It was found that the display technology is capable of delivering a virtual environment in which the user feels fully immersed.