2016
As part of the final project for this course, I used the CRAIVE (Collaborative-Research Augmented Immersive Virtual Environment Laboratory) space to create interactive immersive visualizations based on human motion inside the space. As shown in Figure 1, the CRAIVE Lab has a 360-degree screen driven by 8 projectors, along with 6 overhead cameras used for tracking. The floor space enclosed by the screen is rectangular with curved corners, approximately 12 meters long and 10 meters wide. The screen is approximately 5 meters high.
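To make the geometry concrete, a minimal sketch in Python of mapping a tracked floor position to a horizontal pixel column on the surrounding screen, assuming the screen is approximated as a cylinder centered on the floor; the 8 × 1920 px resolution is an illustrative assumption, not a measured CRAIVE parameter.

```python
import math

# The 360-degree screen is treated as a cylinder centered on the floor -- a
# simplifying assumption, since the real footprint is a rectangle with
# curved corners. The resolution is hypothetical: 8 projectors at 1920 px.
SCREEN_PIXELS_WIDE = 8 * 1920

def floor_to_screen_column(x: float, y: float) -> int:
    """Map a tracked floor position (x, y), with the origin at the room
    center, to a horizontal pixel column on the surrounding screen by
    taking the person's bearing from the center."""
    angle = math.atan2(y, x)                   # -pi .. pi
    frac = (angle + math.pi) / (2 * math.pi)   # 0 .. 1 around the cylinder
    return int(frac * SCREEN_PIXELS_WIDE) % SCREEN_PIXELS_WIDE

# A person standing 3 m "east" of the room center maps halfway around:
print(floor_to_screen_column(3.0, 0.0))  # 7680
```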
2017
We describe interfaces and visualizations in the CRAIVE (Collaborative Research Augmented Immersive Virtual Environment) Lab, an interactive, human-scale immersive environment at Rensselaer Polytechnic Institute. We describe the physical infrastructure and software architecture of the CRAIVE-Lab, and present two immersive scenarios within it. The first is “person following”, which allows a person walking inside the immersive space to be tracked by simple objects on the screen. This was implemented as a proof of concept of the overall system, which includes visual tracking from an overhead array of cameras, communication of the tracking results, and large-scale projection and visualization. The second scenario, “smart presentation”, features multimedia on the screen that reacts to the position of a person walking around the environment by playing or pausing automatically, and additionally supports real-time speech-to-text transcription. Our goal is to continue research in natural human intera...
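At its core, the “smart presentation” behavior reduces to proximity-triggered media control. A minimal sketch of that logic follows; the names are illustrative, not the CRAIVE-Lab API.

```python
from dataclasses import dataclass
import math

@dataclass
class MediaZone:
    """A floor region with associated media that reacts to a tracked
    person's position (names here are illustrative only)."""
    name: str
    cx: float       # zone center on the floor, meters
    cy: float
    radius: float   # activation radius, meters
    playing: bool = False

def update_zone(zone: MediaZone, px: float, py: float) -> None:
    # Play when the tracked person enters the zone, pause when they leave.
    inside = math.hypot(px - zone.cx, py - zone.cy) <= zone.radius
    if inside and not zone.playing:
        zone.playing = True
        print(f"play {zone.name}")
    elif not inside and zone.playing:
        zone.playing = False
        print(f"pause {zone.name}")

zone = MediaZone("intro_video", cx=2.0, cy=1.0, radius=1.5)
for px, py in [(5.0, 5.0), (2.3, 1.2), (2.4, 0.8), (6.0, 6.0)]:
    update_zone(zone, px, py)   # prints: play intro_video, pause intro_video
```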
Electronic Imaging
360-degree image and movie content has gained popularity in the media and the MICE (Meetings, Incentives, Conventions, and Exhibitions) industry in the last few years. There are three main reasons for this development. First, this media form is inherently immersive; second, recording and presentation technology has made significant progress in resolution and quality; third, after a decade of dynamic growth, the MICE industry is focusing on a disruptive change toward more digital-based solutions. 360-degree panoramas are particularly widespread in VR and AR technology. However, despite their high immersive potential, these forms of presentation have the disadvantage that users are isolated and have no social contact during the performance. Therefore, efforts have been made to project 360-degree content in specially equipped rooms or planetariums to enable a shared experience for the audience. One application area for 360-degree panoramas and films is conference rooms in hotels, conference centers, and other venues that create an immersive environment for their clients to stimulate creativity. This work aims to overview the various application scenarios and usability possibilities for such conference rooms. In particular, we consider applications in construction, control, tourism, medicine, art exhibition, architecture, music performance, education, partying, organizing and carrying out events, and video conferencing. These applications and use scenarios were successfully tested, implemented, and evaluated in the 360-degree conference room "Dortmund" in the Hotel Park Soltau in Soltau, Germany [1]. Finally, the advantages, challenges, and limitations of the proposed method are described.
International journal of …, 2006
2014
This paper presents a computer vision system that supports non-instrumented, location-based interaction of multiple users with digital representations of large-scale artifacts. The proposed system is based on a camera network that observes multiple humans in front of a very large display. The acquired views are used to volumetrically reconstruct and track the humans robustly and in real time, even in crowded scenes and challenging human configurations. Given the frequent and accurate monitoring of humans in space and time, a dynamic and personalized textual/graphical annotation of the display can be achieved based on the location and the walk-through trajectory of each visitor. The proposed system has been successfully deployed in an archaeological museum, offering its visitors the capability to interact with and explore a digital representation of an ancient wall painting. This installation permits an extensive evaluation of the proposed system in terms of tracking robustness, comp...
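Volumetric reconstruction from a calibrated camera network is commonly implemented as a visual-hull (space-carving) test. The sketch below shows the generic form of that test, not the paper's own implementation.

```python
import numpy as np

def visual_hull(voxels, cameras, masks):
    """Keep a voxel only if it projects into the foreground silhouette of
    every camera -- the classic space-carving test behind visual-hull
    reconstruction.

    voxels : (N, 3) candidate points in world coordinates
    cameras: list of 3x4 projection matrices P = K [R | t]
    masks  : list of binary foreground images, one per camera
    """
    voxels = np.asarray(voxels, dtype=float)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])  # (N, 4)
    keep = np.ones(len(voxels), dtype=bool)
    for P, mask in zip(cameras, masks):
        uvw = homo @ P.T                      # project into the image plane
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        fg = np.zeros(len(voxels), dtype=bool)
        fg[inside] = mask[v[inside], u[inside]] > 0
        keep &= fg                            # carve away non-silhouette space
    return voxels[keep]
```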
Proceedings of the 23rd annual ACM symposium on User interface software and technology, 2010
Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.
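The core calibration idea here, depth cameras and projectors sharing 3D real-world coordinates, can be sketched as two transforms: back-project a depth pixel to world space, then project the world point into a projector's image. The following is a generic calibrated-pair sketch, not LightSpace's code.

```python
import numpy as np

def depth_pixel_to_world(u, v, z, K_cam, T_cam_to_world):
    """Back-project a depth pixel (u, v, depth z in meters) into world
    coordinates, given camera intrinsics K and a 4x4 camera-to-world
    transform from calibration."""
    fx, fy = K_cam[0, 0], K_cam[1, 1]
    cx, cy = K_cam[0, 2], K_cam[1, 2]
    p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
    return T_cam_to_world @ p_cam

def world_to_projector(p_world, K_proj, T_world_to_proj):
    """Project a homogeneous world point into projector pixel coordinates,
    so graphics can be drawn 'onto' the physical surface at that point."""
    p = T_world_to_proj @ p_world
    u = K_proj[0, 0] * p[0] / p[2] + K_proj[0, 2]
    v = K_proj[1, 1] * p[1] / p[2] + K_proj[1, 2]
    return u, v

# Round trip with toy calibration: identity extrinsics, shared intrinsics.
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
p = depth_pixel_to_world(400, 200, 1.5, K, np.eye(4))
print(world_to_projector(p, K, np.eye(4)))  # (400.0, 200.0)
```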
Advances in Transdisciplinary Engineering, 2021
Virtual Reality (VR) and Augmented Reality (AR) are becoming useful tools for providing a visual representation of a product, design, or environment without the cost of building a prototype. Other applications include maintenance support, procedure development, and workstation design. Currently, VR and AR systems can only display visual and audio cues. In workstation design, e.g., a ship’s bridge, aircraft flight deck, command centre, or maintenance facility, it is important for design engineers to determine whether the human operator can reach the gauges, switches, and controls. To meet ergonomic and human-factors design criteria, the human operator must be able to interact easily with all the displays, instruments, equipment, and controls. This impacts the design and location of seats, instrument panels, and windows. VR and AR devices, such as controllers and gloves, can track the position of the human hand and fingers in a virtual environment but are not able to detect re...
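As an illustration of the reach question raised here, a crude first-pass check treats reach as a sphere around the shoulder joint. Real ergonomic criteria also model joint limits and posture, and the arm-length value below is an assumption.

```python
import numpy as np

def can_reach(shoulder, target, arm_length=0.74):
    """Crude first-pass reach test: is the control within a sphere of
    radius arm_length around the shoulder joint? The 0.74 m default is
    an assumed adult arm-plus-hand reach, not a standard value."""
    return np.linalg.norm(np.asarray(target) - np.asarray(shoulder)) <= arm_length

print(can_reach([0.0, 1.4, 0.0], [0.5, 1.2, 0.4]))  # True: ~0.67 m away
```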
Lecture Notes in Computer Science, 2005
Conventional interaction in large screen projection-based display systems only allows a "master user" to have full control over the application. We have developed the VRGEO Demonstrator application based on an interaction paradigm that allows multiple users to share large projection-based environment displays for co-located collaboration. Following SDG systems, we introduce a collaborative interface based on tracked PDAs and integrate common device metaphors into the interface to improve users' learning experience of the virtual environment system. The introduction of multiple workspaces in a virtual environment allows users to spread out data for analysis, making use of the large screen space more effectively. Two extended informal evaluation sessions with application domain experts, together with demonstrations of the system, show that our collaborative interaction paradigm improves the learning experience and interactivity of the virtual environment.
ACM Transactions on Multimedia Computing, Communications, and Applications, 2005
Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of having all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility---participants may move around the shared space---and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, an...
Multimedia Tools and Applications, 2017
The case of mixed-reality projector-camera systems is considered and, in particular, those which employ hand-held boards as interactive displays. This work focuses upon the accurate, robust, and timely detection and pose estimation of such boards, to achieve high-quality augmentation and interaction. The proposed approach operates a camera in the near infrared spectrum to filter out the optical projection from the sensory input. However, the monochromaticity of input restricts the use of color for the detection of boards. In this context, two methods are proposed. The first regards the pose estimation of boards which, being computationally demanding and frequently used by the system, is highly parallelized. The second uses this pose estimation method to detect and track boards, being efficient in the use of computational resources so that accurate results are provided in real-time. Accurate pose estimation facilitates touch detection upon designated areas on the boards and high-quality projection of visual content upon boards. An implementation of the proposed approach is extensively and quantitatively evaluated, as to its accuracy and efficiency. This evaluation, along with usability and pilot application investigations, indicate the suitability of the proposed approach for use in interactive, mixed-reality applications.
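Board pose estimation from known corner points in a single (here, near-infrared) image is the classic Perspective-n-Point problem. The sketch below solves it with OpenCV's solvePnP; all numeric values are hypothetical, and the paper's own parallelized estimator is not shown.

```python
import numpy as np
import cv2

BOARD_W, BOARD_H = 0.40, 0.30   # assumed board size in meters
object_points = np.array([      # board corners in the board's own frame
    [0, 0, 0], [BOARD_W, 0, 0],
    [BOARD_W, BOARD_H, 0], [0, BOARD_H, 0],
], dtype=np.float64)

# Corner locations detected in the near-infrared image (hypothetical values).
image_points = np.array([
    [320, 240], [520, 250], [515, 400], [315, 390]
], dtype=np.float64)

K = np.array([[800, 0, 640],    # assumed pinhole intrinsics
              [0, 800, 360],
              [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)              # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 board-to-camera rotation
    print("board position in camera frame (m):", tvec.ravel())
```

With the board's pose in hand, touch detection reduces to checking whether a fingertip lies on designated regions of the board plane, and projection content can be pre-warped to land on the board.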
1993
Several common systems satisfy some but not all of the VR definition above. Flight simulators provide vehicle tracking, not head tracking, and do not generally operate in binocular stereo. Omnimax theaters give a large angle of view [8], occasionally in stereo, but are not interactive. Head-tracked monitors [4][6] provide all but a large angle of view. Head-mounted displays (HMD) [13] and BOOMs [9] use motion of the actual display screens to achieve VR by our definition. Correct projection of the imagery on large screens can also create a VR experience, this being the subject of this paper. This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH '92 and Supercomputing '92 for computational science data is also mentioned.
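The off-axis projection the paper calls simple and straightforward can be written compactly from the tracked eye position and three screen corners, following Kooima's well-known generalized-perspective-projection formulation; this is a sketch consistent with, but not copied from, the CAVE's code.

```python
import numpy as np

def off_axis_frustum(pa, pb, pc, pe, near, far):
    """Asymmetric (off-axis) frustum for a fixed screen and a tracked eye.

    pa, pb, pc: lower-left, lower-right, upper-left screen corners (world)
    pe        : tracked eye position (world)

    Note: the full formulation also rotates world coordinates into the
    screen's basis and translates by the eye position; only the frustum
    itself is computed here.
    """
    pa, pb, pc, pe = (np.asarray(p, dtype=float) for p in (pa, pb, pc, pe))
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal

    va, vb, vc = pa - pe, pb - pe, pc - pe            # eye-to-corner vectors
    d = -np.dot(va, vn)                               # eye-to-screen distance
    left = np.dot(vr, va) * near / d                  # extents at near plane
    right = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top = np.dot(vu, vc) * near / d

    # Standard glFrustum-style matrix built from the asymmetric extents.
    P = np.zeros((4, 4))
    P[0, 0] = 2 * near / (right - left); P[0, 2] = (right + left) / (right - left)
    P[1, 1] = 2 * near / (top - bottom); P[1, 2] = (top + bottom) / (top - bottom)
    P[2, 2] = -(far + near) / (far - near)
    P[2, 3] = -2 * far * near / (far - near)
    P[3, 2] = -1.0
    return P

# 2 m x 2 m screen in the z = -1 plane, eye offset 0.2 m to the right:
print(off_axis_frustum([-1, -1, -1], [1, -1, -1], [-1, 1, -1],
                       [0.2, 0.0, 0.0], near=0.1, far=100.0))
```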
ACM SIGGRAPH Computer Graphics, 1997
2020
The iCinema Centre for Interactive Cinema Research at UNSW has created a versatile virtual reality theatre that, by combining real-time 360-degree omnistereo projection with surround audio and marker-less motion tracking, provides a highly immersive and interactive environment for up to 20 users. The theatre, codenamed AVIE, serves as the Centre's principal platform for experiments in interactive and emergent narrative, artificial intelligence, human-computer interfaces, virtual heritage, panoramic video and real-time computer graphics, as well as our primary platform for public exhibition of iCinema projects. This paper briefly discusses the design of the system, technical challenges, novel features and current and future applications of the system. We believe our system to be the first and only 360-degree cylindrical stereo virtual reality theatre constructed to date.
1998
Large-scale immersive displays have an established history in planetaria and large-format film theaters. Video-based immersive theaters are now emerging, and promise to revolutionize group entertainment and education as the computational power and software applications become available to fully exploit these environments.
2012
The HybridDesk is a compact, immersive solution for visualization and interaction using low-cost products. The screens are arranged in a small setup where a user can interact with a virtual environment while seated at a desk, and even a standard mouse, keyboard, and monitor can be used. Unlike a regular CAVE, the head is tracked outside the display cube, and with the support of a Wiimote the interaction can be controlled with good precision. Several tests were conducted until the best and cheapest combination of technologies was found. Home projectors and mirrors were used with a semiautomatic calibration, and a simple video splitter produces the video source for the four projectors installed.
Proceedings of the ACM symposium on Virtual reality software and technology - VRST '05, 2005
Large screen projection-based display systems are very often not used by a single user alone, but shared by a small group of people. We have developed an interaction paradigm allowing multiple users to share a virtual environment in a conventional single-view stereoscopic projection-based display system, with each of the users handling the same interface and having a full first-person experience of the environment. Multi-viewpoint images allow the use of spatial interaction techniques for multiple users in a conventional projection-based display. We evaluate the effectiveness of multi-viewpoint images for ray selection and direct object manipulation in a qualitative usability study and show that interaction with multi-viewpoint images is comparable to fully head-tracked (single-user) interaction. Based on ray casting and direct object manipulation, using tracked PDAs as a common interaction device, we develop a technique for co-located multi-user interaction in conventional projection-based virtual environments. Evaluation of the VRGEO Demonstrator, an application for the review of complex 3D geoseismic data sets in the oil-and-gas industry, shows that this paradigm allows multiple users to each have a full first-person experience of a complex, interactive virtual environment.
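Ray selection from a tracked hand-held device reduces to a ray-object intersection test. A generic ray/sphere sketch follows; the VRGEO interaction code itself is not shown here.

```python
import numpy as np

def ray_sphere_select(origin, direction, center, radius):
    """Ray-casting selection test: does a ray from a tracked hand-held
    device hit a sphere-bounded virtual object? Returns the distance to
    the nearest hit, or None on a miss."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    oc = np.asarray(origin, float) - np.asarray(center, float)
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                      # ray misses the object
    t = -b - np.sqrt(disc)               # nearest intersection distance
    return t if t >= 0 else None

# Device at the user's hand, pointing roughly at an object 2 m away:
print(ray_sphere_select([0, 1.2, 0], [0, 0, -1], [0.1, 1.3, -2.0], 0.3))
```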
Human-Computer Interaction Consortium Conference, 1998
Proc. of 2nd …, 1998
The PIT ("Protein Interactive Theater") is a dual-screen, stereo display system and workspace designed specifically for two-person, seated, local collaboration. Each user has a correct head-tracked stereo view of a shared virtual space containing a 3D model under study. The model appears to occupy the same location in lab space for each user, allowing them to augment their verbal communication with gestures. This paper describes our motivation and goals in designing the PIT workspace, the design and fabrication of the hardware and software components, and initial user testing. Motivation and design goals The PIT display system is the most recent of a series of displays used by the GRIP molecular graphics project at UNC [Brooks et al., 1990]. While we intend for the PIT to be generally applicable to other application domains, our initial use is in molecular graphics applications such as protein fitting. The PIT design is motivated by observations gathered over the years by collaborating with biochemists to develop molecular graphics systems. The following were our primary goals in developing the PIT. High quality 3D display. To provide strong 3D depth cues, we employ the technique of head-tracked stereo display. For each user, a stereo image is displayed and updated in real time according to the perspective projection determined by the positions of the user's eyes. The stereo and motion parallax cues give users the illusion of a stable 3D scene located in front of them and fixed in laboratory space. The users wear LCD shutter glasses and tracking sensors. High-resolution images (four images per frame, each rendered at 1280 × 492 pixels) are displayed on two large rear-projection screens oriented at 90 degrees to each other, with one screen corresponding to each user. Rendering is performed on a Silicon Graphics Onyx workstation with InfiniteReality™ graphics. We allow for decoupling the application's display and simulation loops so that they run as separate processes, in order to maintain a high display update rate that is independent of the complexity of computations the application may be performing. The display screens may also be oriented at 120 degrees to each other for applications desiring a highresolution, wide field-of-view panoramic view for a single user. Including a second user. Over the years, we have observed that our users, who are biochemists and other scientists, quite often work in pairs to conduct their experiments. To allow close collaboration between the two users we wanted to provide the second user with a view equal in quality and realism to
2007
Immersive displays generally fall within three categories: small-scale, single-user displays (head-mounted displays and desktop stereoscopic displays); medium-scale displays designed for small numbers of collaborative users (CAVEs, reality centres, and power walls); and large-scale displays designed for group immersion experiences (IMAX, simulator rides, domes). Small- and medium-scale displays have received by far the most attention from researchers, perhaps due to their smaller size, lower cost, and easy accessibility. Large-scale immersive displays present unique technical challenges, largely met by niche manufacturers offering proprietary solutions. The rapidly growing number of large-scale displays in planetariums, science centers, and universities worldwide (275 theaters to date), coupled with recent trends towards more open, extensible systems and mature software tools, offers greater accessibility to these environments for research, interactive science/art application development, and visualization of complex databases for both student and public audiences. An industry-wide survey of leading-edge large-scale immersive displays and manufacturers is provided with the goal of fostering industry/academic collaborations. Research needs include advancements in immersive display design, real-time spherical rendering, real-time group interactive technologies and applications, and methods for aggregating and navigating extremely large scientific databases with embedded physical/astrophysical simulations.
2010
We present a system for dynamic projection on large, human-scale, moving projection screens and demonstrate this system for immersive visualization applications in several fields. We have designed and implemented efficient, low-cost methods for robust tracking of projection surfaces, and a method to provide high frame rate output for computationally-intensive, low frame rate applications. We present a distributed rendering environment which allows many projectors to work together to illuminate the projection surfaces. This physically immersive visualization environment promotes innovation and creativity in design and analysis applications and facilitates exploration of alternative visualization styles and modes. The system provides for multiple participants to interact in a shared environment in a natural manner. Our new human-scale user interface is intuitive and novice users require essentially no instruction to operate the visualization.
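One common way to keep projection locked onto a moving screen despite tracking and rendering latency is to extrapolate the tracked pose to the moment the frame actually lands on the surface. The constant-velocity sketch below is illustrative only, not the paper's method.

```python
import numpy as np

def predict_position(p_prev, t_prev, p_curr, t_curr, t_render):
    """Constant-velocity extrapolation of a tracked screen's position to
    the time the next frame will hit the surface, compensating for the
    latency between tracking sample and projected output."""
    v = (np.asarray(p_curr) - np.asarray(p_prev)) / (t_curr - t_prev)
    return np.asarray(p_curr) + v * (t_render - t_curr)

# Screen moved 5 cm in 20 ms; predict 15 ms past the latest sample:
print(predict_position([0.00, 1.0, 2.0], 0.000,
                       [0.05, 1.0, 2.0], 0.020, 0.035))  # [0.0875 1. 2.]
```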