2008
Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important step toward future 3D interaction systems that support multimodality, in order to increase efficiency and usability.
A good 3D user interface, combined with suitable 3D interaction techniques, is essential for increasing the realism and ease of interaction in Virtual Environments (VEs). 3D user interface systems often use multiple input devices, especially tracking devices. The search for new metaphors and techniques for 3D interaction adapted to the navigation task, independently of the devices used, is therefore an important step toward future 3D interaction systems that support multimodality, in order to increase efficiency and usability.
1999
Hardware and software advances are making real-time 3D graphics part of all mainstream computers. World Wide Web sites encoded in the Virtual Reality Modeling Language or other formats allow users across the Internet to share virtual 3D "worlds". As the supporting software and hardware become increasingly powerful, the usability of current 3D navigation interfaces becomes the limiting factor to the widespread application of 3D technologies. In this paper, we analyze the human-factors issues in designing a usable navigation interface, such as interface metaphor, integration and separation of multiple degrees of freedom, mode switching, isotonic versus isometric control, seamless merger of 3D navigation devices with GUI pointing and scrolling devices, and two-handed input. We propose a dual-joystick navigation interface design based on a real-world metaphor (the bulldozer), and present an experimental evaluation. Results showed that the proposed bulldozer interface outperformed the status quo mouse-mapping interface in maze-travelling and free-flying tasks by 25% to 50%. Limitations of and possible future improvements to the bulldozer interface are also presented.
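The bulldozer metaphor maps the two joysticks like the two tracks of a tracked vehicle: pushing both sticks forward moves ahead, pushing them in opposite directions turns in place. A minimal sketch of such a differential-drive mapping, assuming joystick axes in [-1, 1] (function name, gains, and constants are illustrative assumptions, not details from the paper):

```python
import math

def bulldozer_step(x, y, heading, left_axis, right_axis,
                   track_gain=2.0, base_width=1.0, dt=0.1):
    """Advance a 2D pose given left/right joystick axes in [-1, 1],
    treating the axes as the speeds of a bulldozer's two tracks."""
    v_left = track_gain * left_axis
    v_right = track_gain * right_axis
    v = (v_left + v_right) / 2.0             # forward speed
    omega = (v_right - v_left) / base_width  # turn rate
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading
```

Equal deflection on both sticks yields straight travel; opposite deflection rotates on the spot, mirroring how a tracked vehicle steers.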
HCITaly, at Firenze, 2001
MELECON 2006 - 2006 IEEE Mediterranean Electrotechnical Conference, 2006
The way the user interacts with the environment is one of the main characteristics that increases the sense of presence inside a virtual world. However, interaction can also be the weak point of virtual environment-based applications. In this paper we present a very brief taxonomy of 3D interaction for virtual environments and discuss why interaction is, in our opinion, a key element in new developments of virtual reality technology.
Journal of Universal Computer Science, 2008
We present an alternative interface that allows users to perceive new sensations in virtual environments. Gaze-based interaction in virtual environments creates the feeling of controlling objects with the mind, arguably translating into a more intense immersion sensation. ...
2012 IEEE Symposium on 3D User Interfaces (3DUI), 2012
In this paper we introduce the Drag'n Go technique for navigating multi-scale 3D virtual environments. This new technique has its roots in the point-of-interest (POI) [10] approach, in which the user selects a target to reach. The biggest difference between the two is that with Drag'n Go the user keeps full control of their position relative to the target as well as their traveling speed. The technique requires only 2D input and can consequently be used with a wide range of devices such as the mouse, touch screens, or pen screens. We conducted a preliminary experiment showing that Drag'n Go is an efficient and well-liked method for touch-based devices and a competitive approach for mouse-based devices.
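At its core, a POI-style technique like this can be sketched as interpolating the camera along the ray toward the selected target, with the user's 2D drag controlling the fraction of the distance covered. A minimal sketch under that assumption (the function name and parameters are illustrative, not the paper's actual formulation):

```python
def dragngo_position(cam, target, drag):
    """Move the camera toward a point of interest.
    cam, target: 3D points; drag: fraction in [0, 1] derived
    from the user's 2D drag distance (0 = stay, 1 = arrive)."""
    drag = min(max(drag, 0.0), 1.0)
    return tuple(c + drag * (t - c) for c, t in zip(cam, target))
```

Because the user sets `drag` continuously, they retain control of both position relative to the target and effective travel speed, unlike a classic one-shot POI jump.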
Nowadays, stereoscopic 3D technology is used in a variety of applications including, but not limited to, architecture, design, games, and medicine. Technologies that build on virtual reality concepts tend to be generic and to provide users with realistic interaction with the environment. To control stereoscopic images, conventional methods rely on devices such as the keyboard and/or mouse for the input control signal. However, it is challenging for users to control stereoscopic images with these devices, and it is difficult to obtain the desired accuracy due to device errors. Moreover, these methods do not allow users to experience the content in a realistic manner. In this paper, a gaze-tracking (line-of-sight) interface is developed to control stereoscopic 3D content. The proposed method is natural, intuitive, and more efficient than conventional keyboard-and-mouse methods. Controlling various 3D content, including S3D games, tele-operated surgery, military training, and pilot simulation, becomes achievable through the proposed method.
Work: A Journal of …, 2012
Previous studies suggest significant differences between navigating virtual environments in a life-like walking manner (i.e., using treadmills or walk-in-place techniques) and virtual navigation (i.e., flying while actually standing). The latter option, which usually involves hand-centric devices (e.g., joysticks), is the most common in Virtual Reality-based studies, mostly due to lower cost, space, and technology demands. Recently, however, new interaction devices originally conceived for videogames have become available, offering interesting potential for research. This study explored the potential of the Nintendo Wii Balance Board as a navigation interface for a Virtual Environment presented in an immersive Virtual Reality system. Comparing participants' performance during a simulated emergency egress allows determining the adequacy of such an alternative navigation interface on the basis of empirical results. Forty university students participated in this study. Results show that participants were more efficient when performing navigation tasks with the joystick than with the Balance Board. However, there were no significant differences in behavioral compliance with exit signs. Therefore, this study suggests that, at least for tasks similar to those studied, the Balance Board has good potential as a navigation interface for Virtual Reality systems.
IEEE Computer Graphics and Applications, 1999
The allure of immersive technologies is undeniable.
Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05, 2005
This article presents a User Interface (UI) framework for multimodal interactions targeted at immersive virtual environments. Its configurable input and gesture processing components provide an advanced behavior graph capable of routing continuous data streams asynchronously. The framework introduces a Knowledge Representation Layer which augments objects of the simulated environment with Semantic Entities as a central object model that bridges and interfaces Virtual Reality (VR) and Artificial Intelligence (AI) representations. Specialized node types use these facilities to implement required processing tasks like gesture detection, preprocessing of the visual scene for multimodal integration, or translation of movements into multimodally initialized gestural interactions. A modified Augmented Transition Network (ATN) approach accesses the knowledge layer as well as the preprocessing components to integrate linguistic, gestural, and context information in parallel. The overall framework emphasizes extensibility, adaptivity, and reusability, e.g., by utilizing persistent and interchangeable XML-based formats to describe its processing stages.
2009
Navigation in virtual environments is a complex task that imposes a high cognitive load on the user: it consists of maintaining knowledge of one's current position and orientation while moving through the space. In this paper, we present a novel approach for navigation in 3D virtual environments. The method is based on the principle of skiing: the idea is to give the user total control of navigation speed and rotation using their two hands. The technique enables user-steered exploration by determining the direction and speed of motion from the positions of the user's hands. A speed-control module lets the user easily adjust speed via the angle between the hands, while the direction of motion is given by the axis orthogonal to the segment joining the two hands. A user study shows the efficiency of the method in performing exploration tasks in complex, large-scale 3D environments. Furthermore, we propose an experimental protocol to show that this technique offers a high level of navigation guidance and control, achieving significantly better performance than simple navigation techniques.
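The geometry described here (travel direction orthogonal to the hand-to-hand segment, speed modulated by the angle between the hands) can be sketched roughly as follows. Treating each hand tracker as providing a position and a yaw angle, and making the speed fall off as the "wedge" between the hands widens (as in a snowplough stop in skiing), are illustrative assumptions, not details from the paper:

```python
import math

def ski_step(lpos, rpos, lyaw, ryaw, max_speed=5.0):
    """lpos, rpos: (x, y) hand positions on a horizontal plane;
    lyaw, ryaw: hand yaw angles in radians (assumed available).
    Returns a 2D velocity vector."""
    dx, dy = rpos[0] - lpos[0], rpos[1] - lpos[1]
    norm = math.hypot(dx, dy) or 1.0
    # forward direction: the hand-to-hand segment rotated by -90 degrees
    fwd = (dy / norm, -dx / norm)
    # snowplough-style braking: wider wedge angle -> slower motion
    wedge = abs(lyaw - ryaw)
    speed = max_speed * max(0.0, 1.0 - wedge / math.pi)
    return fwd[0] * speed, fwd[1] * speed
```

Holding the hands parallel gives full speed; angling them apart brakes smoothly to a stop, keeping both speed and heading under continuous two-handed control.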
Lecture Notes in Computer Science, 2013
3D environments have been used in many applications. Besides the keyboard and mouse, best suited to desktop environments, other devices have emerged for specific use in immersive environments. The lack of standardization in the use and control mapping of these devices makes the design task more challenging. We performed an exploratory study involving beginner and advanced users of three devices in 3D environments: keyboard-mouse, Wiimote, and Flystick. Navigation in this kind of environment is done through three tools: Fly, Examine, and Walk. The study results showed how interaction in virtual reality environments is affected by the navigation mechanism, the device, and the user's previous experience. The results may be used to inform the future design of virtual reality environments.
IEEE Virtual Reality (VR), 2019
Travel in a real environment is a common task that human beings conduct easily and subconsciously. However, transposing this task to virtual environments (VEs) remains challenging due to input devices and techniques. Building on the well-described sensory conflict theory, we present a semiautomatic travel method based on path-planning algorithms and gaze-directed control, aiming to reduce the generation of conflicting signals that may confuse the central nervous system. Since gaze-directed control is user-centered and path planning is goal-oriented, our semiautomatic technique makes up for the deficiencies of each, producing smoother and less jerky trajectories.
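One simple way to combine user-centered gaze steering with goal-oriented path planning is a weighted blend of the two direction vectors. This is only one plausible reading of such a semiautomatic scheme; the blend and its `alpha` weight are illustrative assumptions, not the paper's actual method:

```python
import math

def blend_direction(gaze_dir, path_dir, alpha=0.5):
    """Blend the user's 2D gaze direction with the planner's
    next-waypoint direction. alpha=1.0 -> pure gaze steering,
    alpha=0.0 -> fully automatic path following."""
    vx = alpha * gaze_dir[0] + (1.0 - alpha) * path_dir[0]
    vy = alpha * gaze_dir[1] + (1.0 - alpha) * path_dir[1]
    n = math.hypot(vx, vy) or 1.0
    return vx / n, vy / n
```

Averaging the two inputs before normalizing smooths out abrupt gaze flicks while still letting the user bias travel away from the planned path.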
2001
In this paper we present a multimodal interface for navigating arbitrary virtual VRML worlds. Conventional haptic devices like the keyboard, mouse, joystick, and touchscreen can be freely combined with special Virtual Reality hardware like the spacemouse, data glove, and position tracker. As a key feature, the system additionally provides intuitive input via command and natural speech utterances as well as dynamic head and hand gestures. The communication of the interface components is based on the abstract formalism of a context-free grammar, allowing the representation of device-independent information. Taking into account the current system context, user interactions are combined in a semantic unification process and mapped onto a model of the viewer's functionality vocabulary. To integrate the continuous multimodal information stream we use a straightforward rule-based approach and a new technique based on evolutionary algorithms. Our navigation interface has been extensively evaluated in usability studies, obtaining excellent results.
Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '01, 2001
We present a task-based taxonomy of navigation techniques for 3D virtual environments, used to categorize existing techniques, drive exploration of the design space, and inspire new techniques. We briefly discuss several new techniques, and describe in detail one new technique, Speed-coupled Flying with Orbiting. This technique couples control of movement speed to camera height and tilt, allowing users to seamlessly transition between local environment views and global overviews. Users can also orbit specific objects for inspection. Results from two competitive user studies suggest users performed better with Speed-coupled Flying with Orbiting than with the alternatives, with performance further enhanced by a large display.
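The core coupling — faster travel raises and tilts the camera toward a global overview, slower travel returns it to a local ground-level view — can be sketched as a simple mapping from speed to camera height and downward tilt. All constants and names below are illustrative assumptions, not values from the paper:

```python
def speed_coupled_camera(speed, min_h=2.0, max_h=60.0,
                         min_tilt=10.0, max_tilt=80.0, max_speed=30.0):
    """Map travel speed to (camera height, downward tilt in degrees).
    Speed is clamped to [0, max_speed] and interpolated linearly:
    standing still -> low, nearly level camera (local view);
    full speed -> high, steeply tilted camera (overview)."""
    t = min(max(speed / max_speed, 0.0), 1.0)
    height = min_h + t * (max_h - min_h)
    tilt = min_tilt + t * (max_tilt - min_tilt)
    return height, tilt
```

Because the same control that sets speed also sets the viewpoint, the local-to-overview transition needs no explicit mode switch.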
In this research, we implemented a 3D interactive interface for city navigation and used an infrared 3D tracker as the interaction input device in a VR CAVE. The design of the 3D interface was evaluated with a cognitive approach while navigating with a handheld sensor in the VR CAVE. Based on the results of the cognitive experiment, revised design guidelines are proposed for future 3D navigation interfaces.
vrais, 1997
2011 IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems Proceedings, 2011
This paper evaluates two different interfaces for navigation in a 3D environment. The first is a 'Click-to-Move' style interface that uses a mouse. The second is a gaming-style interface that uses both mouse and keyboard (i.e., the 'WASD' keys) for view direction and movement, respectively. In the user study, participants were asked to navigate a supermarket environment and collect a specific subset of items. Results revealed significant differences only for some of the sub-measures. Nevertheless, some revealing observations were made regarding user behavior and interaction with the user interface.
… VR Workshop: New Directions in 3D …, 2005
Computer Animation and Virtual Worlds
In the last decade, an extraordinary acceleration has been observed in immersive technologies, including virtual and augmented reality (VR/AR), as well as in innovative experience design. VR interfaces have widely explored various 3D interaction-based approaches. This technology encourages users to adopt diverse input devices (e.g., VR headsets). However, such devices are still not very reliable, and can be inconvenient and invasive to the user's comfort. Furthermore, users may have an inefficient experience when interacting with objects in a virtual environment. Fortunately, there exist innovative interaction methods called Natural User Interfaces (NUIs). These interfaces are increasingly introduced in Human Machine Interaction (HMI) systems and bring gesture recognition. They have become more useful, improving the user's engagement and sense of presence and providing more stimulating, user-friendly, and non-obtrusive interaction methods. In this paper, we present an efficient 3D interaction technique that uses gesture recognition for 3D interaction tasks (navigation, selection, manipulation, and application control). This new approach, called Zoom-fwd, allows fast and precise interaction with distant and occluded objects. The Zoom-fwd technique is a software solution to many problems, whether related to hardware or to software. A user study was carried out to investigate the impact of this 3D interaction technique on the completion time of the different tasks; in this study, we compare the Zoom-fwd technique with another 3D interaction technique.