Real-time gaze tracking is a promising interaction technique for virtual environments. Immersive projection-based virtual reality systems such as the CAVE allow users a wide range of natural movements. Unfortunately, most head and eye movement measurement techniques are of limited use during free head and body motion. An improved head-eye tracking system is proposed and developed for use in immersive applications with free head motion. The system is based upon a head-mounted video-based eye tracking system and a ...
Lecture Notes in Computer Science, 2007
In this paper, we propose a new face and eye gaze tracking method that works by attaching gaze tracking devices to stereoscopic shutter glasses. The proposed method offers six advantages over previous work. First, when used with stereoscopic VR systems, it lets users feel more immersed and comfortable. Second, by capturing reflected eye images with a hot mirror, we were able to increase eye gaze accuracy in the vertical direction. Third, by attaching an infrared-passing filter and using an IR illuminator, we were able to obtain robust gaze tracking performance irrespective of environmental lighting conditions. Fourth, we used a simple 2D eye gaze estimation method based on the detected pupil center and a 'geometric transform' process. Fifth, to prevent gaze positions from being unintentionally moved by natural eye blinking, we discriminated between different kinds of eye blinking by measuring pupil size; this information was also used for button clicking or mode toggling. Sixth, the final gaze position was calculated by vector summation of the face and eye gaze positions, allowing for natural face and eye movements. Experimental results showed that the face and eye gaze estimation error was less than one degree.
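As a hedged illustration of the 2D 'geometric transform' mapping mentioned above, the sketch below fits a perspective transform from pupil-center coordinates (recorded while the user fixates four known screen corners) to screen coordinates. The calibration values and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import cv2

# Pupil centers recorded while the user looked at the four screen corners
# (illustrative values), paired with the corresponding screen corners.
pupil_corners = np.float32([[210, 150], [390, 148], [395, 290], [205, 295]])
screen_corners = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

H = cv2.getPerspectiveTransform(pupil_corners, screen_corners)

def pupil_to_screen(pupil_xy):
    """Map a detected pupil center to a screen-space gaze point."""
    p = np.float32([[pupil_xy]])  # shape (1, 1, 2) as cv2 expects
    return cv2.perspectiveTransform(p, H)[0, 0]

print(pupil_to_screen((300.0, 220.0)))
```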
2009
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged when tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements displayed, while the eyes remained static, and with both eye gaze and head movement information displayed. The results show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results of the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques developed to convert the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and its ongoing experiments.
2017 IEEE Winter Conference on Applications of Computer Vision (WACV), 2017
We present a novel, automatic eye gaze tracking scheme inspired by the smooth pursuit eye motion that occurs while playing mobile games or watching virtual reality content. Our algorithm continuously calibrates an eye tracking system for a head-mounted display. This eliminates the need for an explicit calibration step and automatically compensates for small movements of the headset with respect to the head. The algorithm finds correspondences between corneal motion and screen-space motion, and uses these to generate Gaussian Process Regression models. A combination of those models provides a continuous mapping from corneal position to screen-space position. Accuracy is nearly as good as that achieved with an explicit calibration step.
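The regression stage described above can be sketched as follows: a Gaussian Process Regression from 2D corneal position to 2D screen position, trained on correspondences gathered during smooth pursuit. The data here are synthetic, and the paper's feature extraction and model-combination steps are not reproduced; this is only a minimal sketch of the mapping.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
corneal = rng.uniform(-1, 1, size=(200, 2))              # tracked corneal positions
screen = 500 * corneal + 5 * rng.normal(size=(200, 2))   # matched screen-space motion

# Fit a GP mapping corneal position -> screen position.
kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel).fit(corneal, screen)

gaze_xy = gpr.predict(np.array([[0.1, -0.2]]))           # continuous gaze estimate
print(gaze_xy)
```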
Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment in the QuickTime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
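One of the basic computations this primer covers, the angular distance between the gaze and a virtual object, can be sketched as below, assuming the gaze direction and object position are expressed in the same world coordinate frame. The function name and arguments are illustrative; the toolbox's actual API is not reproduced here.

```python
import numpy as np

def angular_distance_deg(gaze_dir, eye_pos, object_pos):
    """Angle (degrees) between the gaze ray and the eye-to-object ray."""
    to_object = np.asarray(object_pos, dtype=float) - np.asarray(eye_pos, dtype=float)
    g = np.asarray(gaze_dir, dtype=float)
    g = g / np.linalg.norm(g)
    o = to_object / np.linalg.norm(to_object)
    cos_theta = np.clip(np.dot(g, o), -1.0, 1.0)  # guard against rounding
    return np.degrees(np.arccos(cos_theta))

# Example: gaze straight ahead, object slightly off-axis.
print(angular_distance_deg([0, 0, 1], [0, 0, 0], [0.1, 0.0, 2.0]))
```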
2013 International Conference on Information Technology and Electrical Engineering (ICITEE), 2013
The usage of Nvidia 3D Vision® is increasing rapidly, ranging from gaming to research purposes. However, researchers in human-computer interaction and virtual reality are constrained by hardware configuration, since current commercial gaze tracking systems are not specifically designed to be used with Nvidia 3D Vision®. In this paper, we present a novel prototype of gaze tracking headgear that can be used with Nvidia 3D Vision®. We explain the design considerations and detailed implementation of our gaze tracking headgear. We also evaluate our gaze tracking system by measuring gaze accuracy on a stereoscopic display. Experimental results show that the average gaze estimation error is less than one degree of visual angle.
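For accuracy evaluations like the one above, on-screen gaze error in pixels is commonly converted to degrees of visual angle using the screen's pixel density and the viewing distance. The sketch below shows that conversion; the geometry values are illustrative assumptions, not the paper's setup.

```python
import math

def pixel_error_to_degrees(err_px, px_per_cm=37.8, viewing_dist_cm=60.0):
    """Convert an on-screen gaze error (pixels) to visual angle (degrees)."""
    err_cm = err_px / px_per_cm                 # error on the screen surface
    return math.degrees(math.atan2(err_cm, viewing_dist_cm))

# Example: a 40-pixel error at ~96 dpi viewed from 60 cm.
print(pixel_error_to_degrees(40.0))
```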
Using a head-mounted display, head and eye movements were recorded with tracking devices (magnetic and infrared) that capture the 6 degrees of freedom associated with the position and orientation of the head, and 2 degrees of freedom from one eye. We measured the continuous deviation of the line of sight from a pre-selected area on a virtual stimulus, and analyzed the mathematical properties of the emergent perceptual and motor patterns. These analyses help to explain how behavioral dynamics, and especially the motor coordination underlying perception in virtual reality (VR), are organized.
2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008
We propose a gaze estimation method that substantially relaxes the practical constraints of most conventional methods. Gaze estimation research has a long history, and many systems, including some commercial schemes, have been proposed. However, the application domain of gaze estimation is still limited (e.g., measurement devices for HCI studies, input devices for VDT work) due to the limitations of such systems. First, users must be close to the system (or must wear it), since most systems employ IR illumination and/or stereo cameras. Second, users are required to perform manual calibration to obtain geometrically meaningful data. These limitations prevent applications that capture and utilize useful human gaze information in daily situations. In our method, inspired by a bundle adjustment framework, the parameters of a 3D head-eye model are robustly estimated by minimizing pixel-wise re-projection errors between single-camera input images and eye model projections over multiple frames with adjacently estimated head poses. Since this process runs automatically, users do not need to be aware of it. Using the estimated parameters, 3D head poses and gaze directions for newly observed images can be determined directly in the same error-minimization manner. This mechanism enables robust gaze estimation from low-resolution single-camera images without user-aware preparation tasks (i.e., calibration). Experimental results show the proposed method achieves 6° accuracy with QVGA (320 × 240) images. The proposed algorithm is also independent of observation distance; we confirmed that our system works with long-distance observations (10 meters).
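The core idea, estimating model parameters by minimizing pixel-wise reprojection error over multiple frames, can be illustrated with a deliberately tiny toy problem: recovering a single 3D point from its projections in frames with known relative poses. The paper's full head-eye model and bundled-adjustment formulation are not reproduced; the focal length, poses, and observations below are made-up values.

```python
import numpy as np
from scipy.optimize import least_squares

f = 300.0  # assumed focal length in pixels (pinhole camera)

def project(point_3d, cam_t):
    """Pinhole projection of a world point seen from a translated camera."""
    x, y, z = point_3d - cam_t
    return np.array([f * x / z, f * y / z])

# Known per-frame camera offsets (stand-ins for the estimated head poses)
cam_translations = np.array([[0, 0, 0], [1.0, 0, 0], [0, 1.0, 0]], dtype=float)
observed = np.array([[10.0, 5.0], [-10.3, 4.9], [9.8, -15.2]])  # pupil detections

def residuals(params):
    """Pixel-wise reprojection error of one model point over all frames."""
    return np.concatenate([project(params, t) - obs
                           for t, obs in zip(cam_translations, observed)])

result = least_squares(residuals, x0=np.array([0.5, 0.25, 15.0]))
print(result.x)  # estimated 3D model parameters
```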
Optics and Lasers in Engineering, 2002
This work presents an eye-tracking and pupil size-measuring device that interfaces with a computer for applications in psychometry, ophthalmology, physiology, and virtual reality (VR) systems. The system utilizes a charge-coupled device (CCD) camera, appropriate lenses, a PC with a frame grabber, and a DSP unit together with various types of VR equipment, i.e., an HMD, simulator, or LCD projection device. The digital signal processing unit is used to calculate the average brightness and contrast of the VR video image. The CCD camera with various attachments can be mounted on various VR systems to capture an image of the human eye for testing; an image capture card and a personal computer are used to analyze the test image. From the digital eye image, the computer obtains the pupil size and a trace of the tested eye. A pattern recognition program and five measurement parameters are used to locate the pupil, calculate its coordinates, and analyze the physical condition of the user. These data can be plotted against the average brightness and contrast of the VR video image in real time; this information is shown on the screen of a personal computer and used for cross-link analysis. This eye-tracking interface can determine the position of a subject's pupil and map that position onto a display point on a computer screen. The pupil size and location data versus the average brightness and contrast of the VR video image are computed in real time.
Smart Graphics, 2000
This paper describes hardware and software requirements for the development of a gaze-contingent virtual reality system which incorporates several cues to presence. The user's gaze direction, as well as head position and orientation, are tracked to allow dynamic level-of-detail changes for rendering. Users can see themselves, rather than representations thereof, within blue-screened virtual environments, and limited ...
Proceedings of the Workshop on Virtual Environments 2003, 2003
Six-sided fully immersive projective displays present complex and novel problems for tracking systems. Existing tracking technologies typically require equipment placed in locations, or attached to the user, in a way that is suitable for typical displays of five or fewer walls but would interfere with the immersive experience within a fully enclosed display. This paper presents a novel vision-based tracking technology for fully immersive projective displays. The technology relies on the operator wearing a set of laser diodes arranged in a specific configuration and then visually tracking the projection of these lasers on the external walls of the display, outside of the user's view. This approach places minimal hardware on the user, and no visible tracking equipment is placed within the immersive environment. This paper describes the basic visual tracking system, including the hardware and software infrastructure.
2002
This paper presents a study of the application of eye tracking technology in the context of social interaction in a virtual environment. We evaluate the reliability and precision of gaze tracking in two different virtual reality applications. In spite of the known drawbacks, the technology ...
Most available remote eye gaze trackers have two characteristics that hinder them from being widely used as important computer input devices for human-computer interaction. First, they have to be calibrated for each user individually; second, they have low tolerance for head movement and require users to hold their heads unnaturally still. In this paper, by exploiting the eye's anatomy, we propose two novel solutions that allow natural head movement and reduce the calibration procedure to a single session per individual.
2018
It is the eye's visual perception that is predominantly deluded in virtual reality. Yet the eyes of the observer, despite being the fastest perceivable moving body part, have received relatively little attention as an interaction modality. Eye tracking technology in head-mounted displays has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. In this work, we explore three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles; (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit; and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
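The smooth-pursuit idea behind a technique like Radial Pursuit can be sketched as follows: the selected object is the one whose recent trajectory correlates best with the recent gaze trajectory. This is a generic pursuit-correlation sketch, not the paper's implementation; the window length and threshold are assumptions.

```python
import numpy as np

def pursuit_score(gaze_traj, object_traj):
    """Mean Pearson correlation of x and y motion over a sliding window.

    Both trajectories are (N, 2) arrays of positions sampled at the same times.
    """
    rx = np.corrcoef(gaze_traj[:, 0], object_traj[:, 0])[0, 1]
    ry = np.corrcoef(gaze_traj[:, 1], object_traj[:, 1])[0, 1]
    return (rx + ry) / 2.0

def select_object(gaze_traj, object_trajs, threshold=0.8):
    """Return the index of the best-matching object, or None if none match."""
    scores = [pursuit_score(gaze_traj, t) for t in object_trajs]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```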
Proc. of ENACTIVE/07, 4
Optics Express
Head pose is used to approximate a user's line of sight for real-time image rendering and interaction in most 3D visualization applications using head-mounted displays (HMDs). However, the eye often reaches an object of interest before most head movements complete, so it is highly desirable to integrate eye-tracking capability into HMDs. While the added complexity of an eye-tracked HMD (ET-HMD) imposes challenges on designing a compact, portable, and robust system, the integration offers opportunities to improve eye-tracking accuracy and robustness. In this paper, based on the modeling of an eye imaging and tracking system, we examine the challenges and identify parametric requirements for video-based pupil-glint tracking methods in an ET-HMD design, and predict how these parameters may affect tracking accuracy, resolution, and robustness. We further present novel methods and associated algorithms that effectively improve eye-tracking accuracy and extend the tracking range.
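As background for the pupil-glint approach named above, a common formulation maps the vector from the corneal glint to the pupil center to screen coordinates with a low-order polynomial fitted during calibration. The sketch below shows that standard mapping, not the paper's method; the coefficient values are invented stand-ins for a real calibration.

```python
import numpy as np

def poly_features(dx, dy):
    """Second-order polynomial features of the pupil-glint vector."""
    return np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])

# One 6-vector of fitted coefficients per screen axis (illustrative values).
coef_x = np.array([960.0, 85.0, 2.0, 0.5, -0.3, 0.1])
coef_y = np.array([540.0, 1.5, 80.0, 0.4, 0.2, -0.2])

def gaze_point(pupil_xy, glint_xy):
    """Estimate the on-screen gaze point from pupil and glint centers."""
    dx, dy = pupil_xy[0] - glint_xy[0], pupil_xy[1] - glint_xy[1]
    phi = poly_features(dx, dy)
    return phi @ coef_x, phi @ coef_y

print(gaze_point((320.0, 240.0), (318.0, 242.0)))
```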
Optics & Laser Technology, 2007
In this paper, a novel design that combines an eye-tracking device with a head gesture control module is discussed. In the eye-tracking mode, the user wears goggles, and two tiny CCD cameras capture the eye image with the help of a video capture card. In the head gesture control mode, a light source projector is turned on, and the CCD camera detects the position of the light source. The locations of the spots on the screen and of the pupil in the eye image are calculated, compared with the previous point, and subsequently mapped to a point on the screen. A movement increment-coordinate control scheme is also discussed, which can improve the ease of use of the computer.
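A minimal sketch of increment-coordinate control as described above: the cursor is moved by the frame-to-frame change in pupil position (scaled by a gain), rather than by an absolute pupil-to-screen mapping. The gain and dead-zone values are illustrative assumptions, not figures from the paper.

```python
GAIN = 8.0        # pixels of cursor motion per pixel of pupil motion (assumed)
DEAD_ZONE = 1.5   # ignore jitter below this pupil displacement, in pixels

cursor = [960.0, 540.0]   # start at screen center
prev_pupil = None

def update_cursor(pupil_xy):
    """Advance the cursor by the increment in pupil position since last frame."""
    global prev_pupil
    if prev_pupil is not None:
        dx = pupil_xy[0] - prev_pupil[0]
        dy = pupil_xy[1] - prev_pupil[1]
        if abs(dx) > DEAD_ZONE or abs(dy) > DEAD_ZONE:
            cursor[0] += GAIN * dx
            cursor[1] += GAIN * dy
    prev_pupil = pupil_xy
    return tuple(cursor)
```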
ICST Transactions on Scalable Information Systems, 2021
In this paper, a novel eye-tracking device is designed that uses tiny CCD cameras to capture the eye image with the help of a video capture card. In the head gesture control mode, a light source projector is turned on and the CCD camera detects the position of the light source. The locations of the spots on the screen and of the pupil in the eye image are calculated, compared with the previous point, and subsequently mapped to a point on the screen. The movement increment-coordinate control scheme, which can improve the ease of use of the computer, is also discussed. We investigate the use of non-rigid head fixation using a helmet that constrains only general head orientation and allows some freedom of movement. Device results are simulated in software, which achieves excellent timing performance through the use of live data streaming instead of the traditionally employed data-storage mode for processing analog eye position data.
Informatica, 2012
In this paper, a gaze tracking system based on an adaptively changing gray-level threshold, which automatically detects the pupil position in two-dimensional data, is proposed. The system detects a closed eye using a normalized accumulative luminosity function; detecting a closed eye allows a voluntarily selected command to be confirmed in a more natural way. The presented technique allows recalibration without the intervention or help of assistive persons. Natural head motion is evaluated using signals generated by an orientation sensor. In two experiments, the proposed system is tested in different behavior modes with respect to the speed and accuracy of the system.
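A hedged sketch of the adaptive gray-level thresholding idea above: the threshold is raised until the segmented dark region has a plausible pupil area, and the pupil center is taken as the region centroid. The paper's exact adaptation rule and luminosity function are not reproduced; the area bounds and threshold range are assumptions.

```python
import cv2

def detect_pupil(gray, min_area=200, max_area=5000):
    """Return (cx, cy) of the pupil, or None if the eye appears closed."""
    for thresh in range(20, 120, 5):          # adaptively raise the threshold
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area = cv2.contourArea(c)
            if min_area < area < max_area:    # dark blob of plausible pupil size
                m = cv2.moments(c)
                return m["m10"] / m["m00"], m["m01"] / m["m00"]
    return None  # no pupil-sized dark blob found: treat as a closed eye
```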
2009
Tracking the user's visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communication with virtual and real agents benefit greatly from analysis of the user's visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers; hence they are restricted to the interpretation of two-dimensional fixations relative to a defined area of projection.