2015
Perspective projection in combination with head tracking is widely used in immersive virtual environments to support users with correct spatial perception of the virtual world. However, most projection-based ...
Proceedings of the 20th annual conference on Computer graphics and interactive techniques - SIGGRAPH '93, 1993
Several common systems satisfy some but not all of the VR definition above. Flight simulators provide vehicle tracking, not head tracking, and do not generally operate in binocular stereo. Omnimax theaters give a large angle of view [8], occasionally in stereo, but are not interactive. Head-tracked monitors [4][6] provide all but a large angle of view. Head-mounted displays (HMD) [13] and BOOMs [9] use motion of the actual display screens to achieve VR by our definition. Correct projection of the imagery on large screens can also create a VR experience, this being the subject of this paper. This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH '92 and Supercomputing '92 for computational science data is also mentioned.
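The off-axis perspective projection this abstract calls simple and straightforward can be illustrated with a short sketch. The example below builds an asymmetric frustum for a head-tracked wall screen assumed to lie in the plane z = 0 of tracker coordinates; the function name and coordinate conventions are illustrative, not taken from the paper.

```python
import numpy as np

def off_axis_frustum(eye, screen_lo, screen_hi, near, far):
    """Asymmetric (off-axis) frustum for a head-tracked wall screen.

    Assumes the screen is the axis-aligned rectangle in the plane z = 0
    with x/y corners (screen_lo, screen_hi), and the tracked eye sits at
    `eye` with eye[2] > 0. All names here are illustrative.
    """
    d = eye[2]                      # perpendicular eye-to-screen distance
    # Frustum extents at the near plane, scaled from the screen extents
    # as seen from the (generally off-center) eye position.
    left   = (screen_lo[0] - eye[0]) * near / d
    right  = (screen_hi[0] - eye[0]) * near / d
    bottom = (screen_lo[1] - eye[1]) * near / d
    top    = (screen_hi[1] - eye[1]) * near / d
    # Standard OpenGL-style glFrustum matrix.
    return np.array([
        [2*near/(right-left), 0, (right+left)/(right-left), 0],
        [0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0],
        [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0, 0, -1, 0]])
```

For stereo, the same construction is applied twice, offsetting the tracked head position by half the interocular distance along the head's lateral axis for each eye.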
Computer Graphics Forum, 2007
In projection-based Virtual Reality (VR) systems, typically only one head-tracked user views stereo images rendered from the correct view position. For other users, who are presented with a distorted image that moves with the first user's head motion, it is difficult to correctly view and interact with 3D objects in the virtual environment. In close-range VR systems, such as the Virtual Workbench, distortion effects are especially large because objects are within close range and users are relatively far apart. On these systems, multiuser collaboration proves to be difficult. In this paper, we analyze the problem and describe a novel, easy-to-implement method to prevent and reduce image distortion and its negative effects on close-range interaction task performance. First, our method combines a shared camera model and view-distortion compensation. It minimizes the overall distortion for each user, while important user-personal objects such as interaction cursors, rays, and controls remain distortion-free. Second, our method retains co-location for interaction techniques to make interaction more consistent. We performed a user experiment on our Virtual Workbench to analyze user performance under distorted view conditions with and without the use of our method. Our findings demonstrate the negative impact of view distortion on task performance and the positive effect our method introduces. This indicates that our method can enhance the multiuser collaboration experience on close-range, projection-based VR systems.
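The core idea of a shared camera model combined with distortion-free personal objects can be sketched as follows: the shared scene is rendered from a compromise eye position, while each user's cursor, ray, and controls are rendered from that user's own tracked eye. This is a minimal sketch of the idea only; `render_frame`, `renderer.draw`, and the user fields are hypothetical names, and the paper's exact weighting scheme may differ.

```python
import numpy as np

def shared_view_position(eye_positions, weights=None):
    """Compromise camera position for the shared scene (a sketch of the
    'shared camera model' idea; here simply a weighted mean of eyes)."""
    eyes = np.asarray(eye_positions, dtype=float)
    w = np.ones(len(eyes)) if weights is None else np.asarray(weights, float)
    return (w[:, None] * eyes).sum(axis=0) / w.sum()

def render_frame(scene, users, renderer):
    # Shared world: one compromise view minimizes overall distortion.
    shared_eye = shared_view_position([u.eye for u in users])
    renderer.draw(scene, eye=shared_eye)
    for u in users:
        # Personal objects (cursor, ray, controls) stay distortion-free
        # by rendering them from the user's own tracked eye position.
        renderer.draw(u.personal_objects, eye=u.eye)
```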
2011
The display units integrated in today's head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view of the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations imposed by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering the 3D scene, is set up according to the display's field of view. A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. This distortion has the potential to negatively or positively affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks.
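The relation between geometric and display FOV can be made precise with the standard mini-/magnification factor; the notation below is ours, consistent with the abstract's description rather than quoted from it.

```latex
% Mini-/magnification factor m (our notation):
%   m > 1 minifies the imagery, m < 1 magnifies it, m = 1 is undistorted.
m = \frac{\tan\left(\mathrm{GFOV}/2\right)}{\tan\left(\mathrm{DFOV}/2\right)}
```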
2010 25th International Conference of Image and Vision Computing New Zealand, 2010
Virtual environments (VE) are gaining in popularity and are increasingly used for teamwork training purposes, e.g., for medical teams. We have identified two shortcomings of modern VEs: First, nonverbal communication channels are essential for teamwork but are not supported well. Second, view control in VEs is usually done manually, requiring the user to learn the controls before being able to effectively use them. We address those two shortcomings by using an inexpensive webcam to track the user's head. The rotational movement is used to control the head movement of the user's avatar, thereby conveying head gestures and adding a nonverbal communication channel. The translational movement is used to control the view of the VE in an intuitive way. Our paper presents the results of a user study designed to investigate how well users were able to use our system's advantages.
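The two mappings described above, rotation to the avatar's head and translation to the local view, can be sketched in a few lines; `pose`, `avatar`, and `camera` are hypothetical objects standing in for the webcam tracker output and the VE's scene graph.

```python
def apply_head_pose(pose, avatar, camera, rot_gain=1.0, trans_gain=1.0):
    """Split a webcam-tracked head pose into the two channels the paper
    describes: rotation drives the avatar's head (nonverbal channel),
    translation drives the local view. All names are illustrative."""
    yaw, pitch, roll = pose.rotation           # degrees, from the tracker
    avatar.set_head_rotation(rot_gain * yaw,
                             rot_gain * pitch,
                             rot_gain * roll)
    dx, dy, dz = pose.translation              # metres, relative to rest pose
    camera.set_offset(trans_gain * dx, trans_gain * dy, trans_gain * dz)
```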
Proceedings of the Workshop on Virtual Environments 2003, 2003
Six-sided fully immersive projective displays present complex and novel problems for tracking systems. Existing tracking technologies typically require equipment that is placed in locations, or attached to the user, in a way that is suitable for typical displays of five or fewer walls but that would interfere with the immersive experience within a fully enclosed display. This paper presents a novel vision-based tracking technology for fully immersive projective displays. The technology relies on the operator wearing a set of laser diodes arranged in a specific configuration and then visually tracking the projection of these lasers on the external walls of the display, outside of the user's view. This approach places minimal hardware on the user, and no visible tracking equipment is placed within the immersive environment. This paper describes the basic visual tracking system, including the hardware and software infrastructure.
Tracking for virtual environments is necessary to record the position and the orientation of real objects in physical space and to allow spatial consistency between real and virtual objects. This paper presents a top-down classification of tracking technologies aimed more specifically at head tracking, organized in accordance with their physical principles of operation. Six main principles were identified: time of flight (TOF), spatial scan, inertial sensing, mechanical linkages, phase-difference sensing, and direct-field sensing. We briefly describe each physical principle and present implementations of that principle. Advantages and limitations of these implementations are discussed and summarized in tabular form. A few hybrid technologies are then presented and general considerations of tracking technology are discussed.
With the recent introduction of low-cost head-mounted displays (HMDs), prices of HMD-based virtual reality setups have dropped considerably. In various application areas, personal head-mounted displays can be utilized by groups of users to deliver different context-sensitive information to individual users. We present a hardware setup that allows attaching 12 or more HMDs to a single PC. Finally, we demonstrate a collaborative, educational augmented-reality application used simultaneously by six students wearing HMDs driven by a single PC at interactive frame rates.
2011
System latency (time delay) and its visible consequences are fundamental Virtual Environment (VE) deficiencies that can hamper user perception and performance. To address this, we present an immersive simulation system that improves upon current latency measurement and minimization techniques. The hardware used for latency measurement and minimization is assembled from low-cost, portable equipment, most of it commonly found in an academic facility, without any reduction in measurement accuracy. We present a custom-made mechanism for measuring and minimizing end-to-end head-tracking latency in an immersive VE. The mechanism is based on an oscilloscope comparing two signals. One is generated by the head-tracker movement and reported by a shaft encoder attached to a servo motor moving the tracker. The other is generated by the visual consequences of this movement in the VE and reported by a photodiode attached to the computer monitor. Visualization and application-level control of latency in the VE were implemented using the XVR platform. Minimization processes resulted in an almost 50% reduction of the initially measured latency. The described mechanism for measuring and minimizing VE latency is essential to guide system countermeasures such as predictive compensation. The system presented in this paper will be used to investigate the effect of latency on spatial awareness states.
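Offline, the same two digitized signals can be compared by cross-correlation to recover the lag between motion and its visual consequence. The sketch below shows that estimation step under the assumption that both signals are sampled at a common rate; it illustrates the measurement principle, not the authors' oscilloscope-based implementation.

```python
import numpy as np

def estimate_latency(tracker_sig, photodiode_sig, sample_rate_hz):
    """Estimate end-to-end latency as the lag maximizing the
    cross-correlation between the zero-mean motion and display signals."""
    a = np.asarray(tracker_sig, float)
    b = np.asarray(photodiode_sig, float)
    a -= a.mean()
    b -= b.mean()
    corr = np.correlate(b, a, mode="full")   # positive lag: b delayed vs. a
    lag = np.argmax(corr) - (len(a) - 1)     # lag in samples
    return lag / sample_rate_hz              # latency in seconds
```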
Proceedings IEEE Virtual Reality 2002, 2002
Virtual reality displays introduce spatial distortions that are very hard to correct because of the difficulty of precisely modelling the camera from the nodal point of each eye. How significant are these distortions for spatial perception in virtual reality? In this study we used a helmet-mounted display and a mechanical head tracker to investigate the tolerance to errors between head motions and the resulting visual display. The relationship between the head movement and the associated updating of the visual display was adjusted by subjects until the image was judged as stable relative to the world. Both rotational and translational movements were tested, and the relationship between the movements and the direction of gravity was varied systematically. Typically, for the display to be judged as stable, subjects needed the visual world to be moved in the opposite direction of the head movement by an amount greater than the head movement itself, during both rotational and translational head movements, although a large range of movement was tolerated and judged as appearing stable. These results suggest that it is not necessary to model the visual geometry accurately, and suggest circumstances in which tracker drift can be corrected by jumps in the display that will pass unnoticed by the user.
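The adjusted relationship can be written as a gain between head movement and the compensating display motion; the notation below is ours, not the paper's.

```latex
% Display motion as a gain g on head movement (our notation):
%   g = 1 is geometrically correct; stability was typically judged
%   at g > 1, with a broad range of g still appearing stable.
\theta_{\mathrm{display}} = -\,g\,\theta_{\mathrm{head}}
```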
ACM SIGGRAPH Computer Graphics, 1997
Proceedings of the 20th Conference on Computer Aided Architectural Design Research in Asia (CAADRIA)
The process of projecting 3D scenes onto a two-dimensional (2D) surface results in the loss of depth cues, which are essential for an immersive experience of the scenes. Various solutions have been proposed to address this problem, but there are still fundamental issues that need to be addressed in the existing approaches for compensating the change in the 2D image due to the change in the observer's position. Existing studies use head-coupled perspective, stereoscopy, and motion parallax to achieve a realistic image representation, but a truly natural image could not be perceived because of inaccuracies in the calculations. This paper describes in detail an implementation of a technique to correctly project a 3D virtual environment model onto a 2D surface to yield a more natural interaction with the virtual world. The proposed method overcomes the inaccuracies in existing head-coupled perspective viewing and can be used with common stereoscopic displays to naturally represent virtual architecture.
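Head-coupled perspective of this kind is commonly implemented as a generalized off-axis projection computed from the tracked eye position and the screen's corner points. The sketch below follows that well-known formulation for an arbitrarily oriented screen, generalizing the axis-aligned example given earlier; it is not the paper's own code.

```python
import numpy as np

def head_coupled_projection(pa, pb, pc, eye, near, far):
    """Off-axis projection for an arbitrarily oriented screen.
    pa, pb, pc: lower-left, lower-right, upper-left screen corners;
    eye: tracked eye position (all in world coordinates).
    A sketch of the head-coupled-perspective math, not the paper's code."""
    pa, pb, pc, eye = (np.asarray(v, float) for v in (pa, pb, pc, eye))
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # normal, toward eye
    va, vb, vc = pa - eye, pb - eye, pc - eye         # eye-to-corner vectors
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])
    # Rotate world into the screen basis, then move the eye to the origin.
    M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn])
    T = np.eye(4); T[:3, 3] = -eye
    return P @ M @ T
```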
ACM Transactions on Graphics, 2011
Computer graphics systems provide sophisticated means to render a virtual 3D space to a 2D display surface by applying planar geometric projections. In a realistic viewing condition, the perspective applied for rendering should appropriately account for the viewer's location relative to the image. As a result, an observer would not be able to distinguish between a rendering of a virtual environment on a computer screen and a view "through" the screen at an identical real-world scene. Until now, little effort has been made to identify perspective projections that human observers judge to be realistic.
MediaTropes, 2016
With the upcoming generation of virtual reality HMDs, new virtual worlds, scenarios, and games are created especially for them. These are no longer bound to a remote screen or a relatively static user, but to an HMD as a more immersive device. This article discusses requirements for virtual scenarios implemented in new-generation HMDs to achieve a comfortable user experience. Furthermore, the effects of positional tracking are introduced and the relation between the user’s virtual and physical body is analyzed. The observations made are exemplified by existing software prototypes. They indicate how the term “virtual reality,” with all its loaded connotations, may be reconceptualized to express the peculiarities of HMDs in the context of gaming, entertainment, and virtual experiences.
2006
As public interest in new forms of media grows, museums and theme parks select real-time Virtual Reality productions as their presentation medium. Based on three-dimensional graphics, interaction, sound, music, and intense storytelling, they mesmerize their audiences. The Foundation of the Hellenic World (FHW), having opened three different Virtual Reality theaters to the public so far, is in the process of building a new dome-shaped Virtual Reality theater with a capacity of 130 people. This fully interactive theater will present new experiences in immersion to its visitors. In this paper we present the challenges encountered in developing productions for such a large spherical display system, as well as in building the underlying real-time display and support systems.
Figure 1. A dome VR theater (FHW).
tcnj.edu
A system is presented which provides head-tracking services that can be implemented into 3D games and simulations. This creates an enhanced illusion of depth based on the real-world position of the user and gives the ability to look around the virtual world by physically turning your ...
2010
Immersive multi-projection environments are becoming affordable for many research centers, but these solutions need several integration steps to be fully operational, and some of these steps are difficult and not common knowledge. This paper presents the most recent techniques involved in multi-projection solutions, from projection to computer-cluster software. The hardware in these VR (Virtual Reality) installations is a combination of projectors, screens, speakers, computers, and tracking devices.
Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology, 2015
Figure 1: Orthogonal and perspective views of the virtual environment used in the experiment (a). Render capture of the left eye for Equirectangular (b), Hammer (c) and Perspective (d) projections respectively. Notice that it is possible to see the virtual body (including the nose) with the non-planar projections.
Real-time gaze tracking is a promising interaction technique for virtual environments. Immersive projection-based virtual reality systems such as the CAVE allow users a wide range of natural movements. Unfortunately, most head and eye movement measurement techniques are of limited use during free head and body motion. An improved head-eye tracking system is proposed and developed for use in immersive applications with free head motion. The system is based upon a head-mounted video-based eye tracking system and a ...
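The combination step of such a head-eye tracker can be sketched as transforming the eye-in-head gaze direction by the tracked head pose; the function below is an illustrative sketch, not the proposed system.

```python
import numpy as np

def world_gaze_ray(head_pos, head_rot, gaze_dir_eye):
    """Combine head pose and eye-in-head gaze into a world-space gaze ray.
    head_rot: 3x3 head orientation matrix from the head tracker;
    gaze_dir_eye: unit gaze direction from the eye tracker, expressed in
    head coordinates. Returns (ray origin, unit ray direction)."""
    direction = np.asarray(head_rot, float) @ np.asarray(gaze_dir_eye, float)
    return np.asarray(head_pos, float), direction / np.linalg.norm(direction)
```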