2010
We present a system for dynamic projection on large, human-scale, moving projection screens and demonstrate this system for immersive visualization applications in several fields. We have designed and implemented efficient, low-cost methods for robust tracking of projection surfaces, and a method to provide high frame rate output for computationally-intensive, low frame rate applications. We present a distributed rendering environment which allows many projectors to work together to illuminate the projection surfaces. This physically immersive visualization environment promotes innovation and creativity in design and analysis applications and facilitates exploration of alternative visualization styles and modes. The system provides for multiple participants to interact in a shared environment in a natural manner. Our new human-scale user interface is intuitive and novice users require essentially no instruction to operate the visualization.
Building a system to actively visualize extremely large data sets on large tiled displays in a real-time immersive environment involves a number of challenges. First, the system must be completely scalable to support the rendering of large data sets. Second, it must provide fast, constant frame rates regardless of user viewpoint or model orientation. Third, it must output the highest resolution imagery where it is needed. Fourth, it must have a flexible user interface to control interaction with the display. This paper presents the prototype for a system which meets all four of these criteria. It details the design of a wireless user interface used in conjunction with two different multiresolution techniques, foveated vision and progressive image composition, to generate images on a tiled display wall. The system emphasizes the parallel, multidisplay, and multiresolution features of the Metabuffer image composition hardware architecture to produce interactive renderings of large data streams with fast, constant frame rates.
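The foveated approach described above can be sketched in a few lines: render each tile of the wall at a resolution level that falls off with its distance from the user's point of focus, so the frame rate stays constant. This is a hypothetical illustration; the function name, the falloff constant, and the tile layout are assumptions, not details from the paper.

```python
# Hypothetical sketch of foveated level-of-detail selection for a tiled
# display: tiles near the focus point render at full resolution, tiles in
# the periphery render coarser. All names and thresholds are illustrative.
def lod_for_tile(tile_center, focus, max_level=3, falloff=0.25):
    """Return a resolution level: 0 = full resolution, higher = coarser."""
    dx = tile_center[0] - focus[0]
    dy = tile_center[1] - focus[1]
    dist = (dx * dx + dy * dy) ** 0.5   # distance in normalized screen units
    level = int(dist / falloff)
    return min(level, max_level)

# A row of four tiles with the focus at screen center (0.5, 0.5): the two
# central tiles get level 0 (full resolution), the outer two get level 1.
levels = [lod_for_tile((x / 4 + 0.125, 0.5), (0.5, 0.5)) for x in range(4)]
```

Progressive image composition would then refine the coarse peripheral tiles over successive frames whenever the viewpoint holds still.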
2000
Immersive projection displays have played an important role in enabling large-format virtual reality systems such as the CAVE and CAVE-like devices and the various immersive desks and desktop-like displays. However, these devices have played a minor role so far in advancing the sense of immersion for conferencing systems. The Access Grid project led by Argonne is exploring the use of large-scale projection-based systems as the basis for building room-oriented collaboration and semi-immersive visualization systems. We believe these multiprojector systems will become common infrastructure in the future, largely based on their value for enabling group-to-group collaboration in an environment that can also support large-format projector-based visualization. Creating a strong sense of immersion is an important goal for future collaboration technologies. Immersion in conferencing applications implies that the users can rely on natural sight and audio cues to facilitate interactions with participants at remote sites. The Access Grid is a low-cost environment aimed primarily at supporting conferencing applications, but it also enables semi-immersive visualization and, in particular, remote visualization. In this paper, we will describe the current state of the Access Grid project and how it relates and compares to other environments. We will also discuss augmentations to the Access Grid that will enable it to support more immersive visualizations. These enhancements include stereo, higher-performance rendering support, tracking, and non-uniform projection surfaces.
ACM CHI Workshop on …, 2006
1 INTRODUCTION: We envision situation rooms and research laboratories in which all the walls are made from seamless ultra-high-resolution displays fed by data streamed over ultra-high-speed networks from distantly located visualization and storage servers, and high-definition ...
2007
Immersive displays generally fall within three categories: small-scale, single-user displays (head-mounted displays and desktop stereoscopic displays); medium-scale displays designed for small numbers of collaborative users (CAVEs, reality centres and power walls); and large-scale displays designed for group immersion experiences (IMAX, simulator rides, domes). Small- and medium-scale displays have received by far the most attention from researchers, perhaps due to their smaller size, lower cost and easy accessibility. Large-scale immersive displays present unique technical challenges largely met by niche manufacturers offering proprietary solutions. The rapidly growing number of large-scale displays in planetariums, science centers and universities worldwide (275 theaters to date), coupled with recent trends towards more open, extensible systems and mature software tools, offers greater accessibility to these environments for research, interactive science/art application development, and visualization of complex databases for both student and public audiences. An industry-wide survey of leading-edge large-scale immersive displays and manufacturers is provided with the goal of fostering industry/academic collaborations. Research needs include advancements in immersive display design, real-time spherical rendering, real-time group interactive technologies and applications, and methods for aggregating and navigating extremely large scientific databases with embedded physical/astrophysical simulations.
IEEE Computer Graphics and Applications, 2000
In a familiar scene from an old movie, generals huddle around a large map, pushing models of tanks and infantry regiments about to indicate the current battle situation. Today, the scene might include electronic displays and networked sensing technology, but the basic form would remain the same: a small group of domain experts surround and gesture toward a common data set, hoping to achieve consensus. This mode of decision making is pervasive, ranging in use from US Marine Corps command and control applications to product design review meetings. Such applications demonstrate the need for VR systems that accommodate small groups of people working in close proximity. Yet, while non-head-mounted, immersive displays perform well for single-person work, when used by small groups they are hampered by an unacceptably large degree of distortion between the head-tracked viewpoint and an untracked collaborator's perspective. What looks like a sphere to one user will look like an egg to another. Solving this problem is critical. Decision makers and designers cannot jointly view and respond to data when all but one see incorrect images.
Proceedings of the Fourth …, 2000
2006
We report on our iGrid2005 demonstration, called the "Dead Cat Demo": an example of a highly interactive augmented reality application consisting of software services distributed over a wide-area, high-speed network. We describe our design decisions, analyse the implications of the design on application performance, and show performance measurements.
2012 IEEE International Conference on Cluster Computing, 2012
DisplayCluster is an interactive visualization environment for cluster-driven tiled displays. It provides a dynamic, desktop-like windowing system with built-in media viewing capability that supports ultra-high-resolution imagery and video content, and streaming that allows arbitrary applications from remote sources (such as laptops or remote visualization machines) to be shown. This support extends to high-performance parallel visualization applications, enabling interactive streaming and display for hundred-megapixel dynamic content. DisplayCluster also supports multi-user, multi-modal interaction via devices such as joysticks, smartphones, and the Microsoft Kinect. Further, our environment provides a Python-based scripting interface to automate any set of interactions. In this paper, we describe the features and architecture of DisplayCluster, compare it to existing tiled display environments, and present examples of how it can combine the capabilities of large-scale remote visualization clusters and high-resolution tiled display systems. In particular, we demonstrate that DisplayCluster can stream and display up to 36 megapixels in real time and as many as 144 megapixels interactively, which is 3× faster and 4× larger than other available display environments. Further, we achieve over a gigapixel per second of aggregate bandwidth streaming between a remote visualization cluster and our tiled display system.
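A quick back-of-envelope calculation shows why the streaming figures quoted above demand an aggregate, parallel path rather than a single link. This sketch assumes uncompressed 24-bit RGB frames; the actual DisplayCluster wire format may differ.

```python
# Rough bandwidth estimate for uncompressed tiled-display streaming.
# Assumes 3 bytes per pixel (24-bit RGB); real systems may compress.
def stream_rate_gbps(megapixels, fps, bytes_per_pixel=3):
    """Aggregate bandwidth in gigabits per second."""
    return megapixels * 1e6 * bytes_per_pixel * 8 * fps / 1e9

# Even the "real time" case of 36 megapixels at ~24 fps already needs
# roughly 20 Gb/s of aggregate bandwidth under these assumptions.
rate = stream_rate_gbps(36, 24)
```

At these rates, a gigapixel per second of pixel throughput corresponds to about 24 Gb/s uncompressed, which only a striped, multi-node streaming path can sustain.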
2013
Immersion is an oft-quoted but ill-defined term used to describe a viewer or participant's sense of engagement with a visual display system or participatory media. Traditionally, advances in immersive quality came at the high price of ever-escalating hardware requirements and computational budgets. But what if one could increase a participant's sense of immersion, instead, by taking advantage of perceptual cues, neuroprocessing, and emotional engagement while adding only a small, yet distinctly targeted, set of advancements to the display hardware? This thesis describes three systems that introduce small amounts of computation to the visual display of information in order to increase the viewer's sense of immersion and participation. It also describes the types of content used to evaluate the systems, as well as the results and conclusions gained from small user studies.
Proceedings of the 20th annual conference on Computer graphics and interactive techniques - SIGGRAPH '93, 1993
Several common systems satisfy some but not all of the VR definition above. Flight simulators provide vehicle tracking, not head tracking, and do not generally operate in binocular stereo. Omnimax theaters give a large angle of view [8], occasionally in stereo, but are not interactive. Head-tracked monitors [4][6] provide all but a large angle of view. Head-mounted displays (HMD) [13] and BOOMs [9] use motion of the actual display screens to achieve VR by our definition. Correct projection of the imagery on large screens can also create a VR experience, this being the subject of this paper. This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH '92 and Supercomputing '92 for computational science data is also mentioned.
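The off-axis projection the CAVE paper calls simple and straightforward can be sketched directly: given a fixed screen rectangle and a tracked eye position, the asymmetric frustum edges at the near plane are the screen edges, offset by the eye position and scaled by near/eye-distance. This is a minimal illustration in the screen's own coordinate frame; the function name and parameterization are assumptions, not the paper's code.

```python
# Minimal sketch of an off-axis (asymmetric) perspective frustum for a
# fixed screen and a tracked eye. Coordinates are relative to the screen
# center, with z the eye's perpendicular distance from the screen plane.
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Return (left, right, bottom, top) at the near plane,
    in the style of glFrustum's first four parameters."""
    ex, ey, ez = eye
    scale = near / ez                    # project screen edges onto near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top

# A centered eye yields a symmetric frustum; as the tracked head moves,
# the frustum skews so the image stays correct on the fixed screen.
l, r, b, t = off_axis_frustum((0.0, 0.0, 2.0), 4.0, 3.0, 0.1)
```

Each CAVE wall repeats this computation per frame in its own screen frame, which is why multi-screen stereo reduces to bookkeeping rather than exotic rendering.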
IEEE Transactions on Visualization and Computer Graphics, 2000
Fig. 1. This figure shows some of our applications in action. From left to right: our collaborative map visualization application with two users visualizing different parts of the map at the same time on our 3 × 3 array of nine projectors; our collaborative emergency management application with two users drawing a path to a hazardous location and dispatching teams of first responders on our 3 × 3 array of nine projectors; digital graffiti drawn using our collaborative graffiti application on only six of the projectors (we deliberately did not edge-blend the projectors to show the six projectors clearly); four children working together on our digital graffiti application on a 3 × 3 array of nine projectors.
1998
Large-scale immersive displays have an established history in planetaria and large-format film theaters. Video-based immersive theaters are now emerging, and promise to revolutionize group entertainment and education as the computational power and software applications become available to fully exploit these environments.
Large-scale displays, besides their visualization capabilities, can provide a great sense of immersion to a geographically distributed group of people engaged in collaborative work. This paper presents a system that uses remotely located wall-sized displays to offer immersive, interactive collaborative visualization and review of 3D CAD models for engineering applications.
IEEE Computer Graphics and …, 2000
Proceedings of the 2004 ACM SIGGRAPH international conference on Virtual Reality continuum and its applications in industry - VRCAI '04, 2004
This paper explores the possibilities of using portable devices in multiprojection environments, such as CAVEs, Panoramas and Power Walls. We propose and implement a tool to generate graphical interfaces in a straightforward manner. These interfaces are lightweight and can be run on PDAs. The interface application communicates transparently with a graphics cluster, via any underlying network system, which processes the events and maintains the synchrony of the rendered images in real time. This tool is part of Glass, a library for distributed computing. We present two examples of applications: Cathedral and Celestia.
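The pattern above can be illustrated with a tiny event channel: the handheld sends small UI events to every render node, and each node applies them in the same order to keep the tiled image in sync. This is a hypothetical sketch; the wire format, port, and event fields are illustrative and are not taken from the Glass library.

```python
# Hypothetical sketch of a handheld-to-cluster event channel: the portable
# device broadcasts small JSON-encoded UI events over UDP to every render
# node. Port number and message schema are illustrative assumptions.
import json
import socket

def send_event(nodes, event, port=9000):
    """Broadcast one UI event (a small dict) to every cluster node."""
    payload = json.dumps(event).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for host in nodes:
            sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return payload

# e.g. a slider on the PDA rotating the model on all display nodes at once:
msg = send_event([], {"type": "rotate", "axis": "y", "degrees": 15.0})
```

In practice a library like Glass would also sequence events so that every node renders the same frame state, rather than relying on raw datagram ordering.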
1994
This paper reviews existing graphics architecture software techniques for the visualization of numerical data. A new "next generation" 3D data visualization system based around a "steering" and virtual reality paradigm is proposed, which appears to be the most promising approach in the search for a modern solution to the problem of interacting with large, multi-dimensional datasets.
IEEE Computer, 1999
In collaborative virtual reality (VR), the goal is to reproduce a face-to-face meeting in minute detail. Teleimmersion moves beyond this idea, integrating collaborative VR with audio- and video-conferencing that may involve data mining and heavy computation. In teleimmersion, collaborators at remote sites share the details of a virtual world that can autonomously control computation, query databases and gather results. They don't meet in a room to discuss a car engine; they meet in the engine itself. The University of Illinois at Chicago's Electronic Visualization Laboratory (EVL) has hosted several applications that demonstrate rudimentary teleimmersion. All users are members of Cavern (CAVE Research Network) [http://www.evl.uic.edu/cavern], a collection of participating industrial and research institutions equipped with CAVE (Cave Automated Virtual Environment), ImmersaDesk VR systems and high-performance computing resources, including high-speed networks. There are more than 100 CAVE and ImmersaDesk installations worldwide. The pressing challenge now is how to support collaborative work among Cavern users without having them worry about the details of sustaining a collaboration. Another problem is providing both synchronous and asynchronous collaboration. The authors detail how they've built new display devices to serve as more convenient teleimmersion end-points and to support their international networking infrastructure with sufficient bandwidth to support the needs of teleimmersive applications.
Electronic Imaging, 2019
One of the main shortcomings of most virtual reality display systems, be they head-mounted displays or projection-based systems like CAVEs, is that they can only provide the correct perspective to a single user. This is a significant limitation that reduces the applicability of virtual reality approaches to most kinds of group collaborative work, which is becoming more and more important in many disciplines. Different approaches have been tried to present multiple images to different users at the same time, including optical barriers, optical filtering, optical routing, time multiplexing, volumetric displays and lightfield displays, among others. This paper describes, discusses and compares different approaches that have been developed, and develops an evaluation approach to identify the most promising one for different usage scenarios.
IEEE Computer Graphics and Applications, 2000