2003
In this paper, we discuss design suggestions for realizing remote pointing in distant collaborations. In daily life, we use pointing to anchor what we say to the real world; in a virtual world, we anchor our conversation to that virtual world. This anchoring is necessary to establish mutual understanding between participants, but the elements that matter for it are not well understood. We show that pointing toward a desk space can be done either with the eyes alone or with both the eyes and a hand, without a significant difference in pointing accuracy under the face-to-face condition. Also, participants judge pointed locations mainly from the hand cue, even when the pointer points with both her eyes and her hand. Therefore, avatars should represent hand information accurately, but the relation between the eyes and the hand does not require a representation as accurate as the hand information. Stereoscopic images can recreate 73% of the information provided by face-to-face pointing, and a more accurate recreation of face-to-face pointing requires fine-tuning the system to each user. We show that it is possible to achieve high pointing accuracy without tuning the system to each user if we use a simple rod as a remote virtual finger.
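As a rough illustration of the geometry behind desk-space pointing, the sketch below (hypothetical values and an assumed setup, not the authors' system) casts two rays, one from the eye through the fingertip and one along the hand, and intersects each with the desk plane; a remote "virtual finger" rod could simply be drawn along whichever ray the system trusts.

```python
# Minimal sketch (assumed geometry): estimate the location pointed at on a desk plane
# by intersecting a pointing ray with the desk. Two rays are compared: eye-through-fingertip
# (what pointers tend to use) and wrist-to-fingertip (the "hand cue" observers rely on).
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where the ray origin + t*direction (t >= 0) hits the plane."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the desk
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * direction if t >= 0 else None

# Desk modelled as the horizontal plane z = 0 (assumption).
desk_point = np.array([0.0, 0.0, 0.0])
desk_normal = np.array([0.0, 0.0, 1.0])

eye = np.array([0.0, -0.40, 0.55])        # metres, hypothetical pose
wrist = np.array([0.10, -0.25, 0.35])
fingertip = np.array([0.15, -0.15, 0.30])

eye_ray_target = ray_plane_intersection(eye, fingertip - eye, desk_point, desk_normal)
hand_ray_target = ray_plane_intersection(wrist, fingertip - wrist, desk_point, desk_normal)

print("eye-finger ray hits desk at ", eye_ray_target)
print("wrist-finger ray hits desk at", hand_ray_target)
print("disagreement between cues: %.3f m" % np.linalg.norm(eye_ray_target - hand_ray_target))
```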
arXiv (Cornell University), 2023
Figure 1: Collaborating remotely in a shared 3D workspace: A) Veridical Face-to-face Remote Meeting: Kate and George can see each other but have opposing points-of-view of the workspace. Kate indicates the optimal position for the component she is designing using proximal pointing gestures. George does not understand the location Kate is referring to as it is occluded. Kate needs to explain it through words or ask George to come by her side, which brings added complexity to the collaboration task. B) MAGIC Remote Meeting: Kate and George both share the "illusion" of standing on opposite sides of the workspace but are in fact sharing the same point-of-view. Kate's representation is manipulated so that George sees her arm pointing to the location she intends to convey. George understands where she is referring to, without any added effort.
2011
HP Laboratories HPL-2010-201. Keywords: augmented reality, remote collaboration, computer vision, natural interaction, immersive experiences. Video conferencing systems are designed to deliver a collaboration experience that is as close as possible to actually meeting in person. Current systems, however, do a poor job of integrating the video streams with the shared collaboration content presented to users. Real and virtual content are unnaturally separated, leading to problems with nonverbal communication and the overall conference experience. Methods of interacting with shared content are typically limited to pointing with a mouse, which is not a natural component of face-to-face human conversation. This paper presents a natural and intuitive method for sharing digital content within a meeting using augmented reality and computer vision. Real and virtual content is seamlessly integrated into the collaboration space. We develop new vision-based methods for interacting with inserted digital content, including target finding and gesture-based control. These improvements let us deliver an immersive collaboration experience using natural gesture- and object-based interaction.
CHI '19 Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019
When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer's perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.
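A hedged sketch of the core idea of re-posing a pointing avatar (an assumed reconstruction, not the paper's actual warping algorithm): since observers extrapolate along the arm-to-fingertip vector while the pointer aligns the fingertip with the eye-referent line, moving the avatar's fingertip so that the arm vector itself aims at the referent makes the observed extrapolation land on the intended target.

```python
# Re-pose the fingertip so the shoulder->fingertip vector points at the referent,
# keeping the apparent arm length unchanged. All positions are hypothetical.
import numpy as np

def warp_fingertip(shoulder, fingertip, referent):
    """Return a warped fingertip so the shoulder->fingertip vector aims at the referent."""
    arm_length = np.linalg.norm(fingertip - shoulder)   # preserve the arm length
    aim = referent - shoulder
    return shoulder + arm_length * aim / np.linalg.norm(aim)

shoulder = np.array([0.0, 1.40, 0.0])      # hypothetical joint positions, metres
fingertip = np.array([0.55, 1.30, 0.20])
referent = np.array([2.00, 0.80, 1.50])    # the out-of-reach target being referred to

print("warped fingertip:", warp_fingertip(shoulder, fingertip, referent))
```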
pedagogie.ec-nantes.fr
This paper proposes using virtual reality to enhance the perception of actions by distant users on a shared application. Here, distance may refer either to space (e.g. in a remote synchronous collaboration) or to time (e.g. during playback of recorded actions). Our approach consists of immersing the application in a virtual inhabited 3D space and mimicking user actions by animating avatars whose motion is either generated from user actions on the shared application or obtained from motion capture by a computer vision system we briefly describe. We illustrate this approach with two applications, one for remote collaboration on a shared application and the other for playback of recorded sequences of user actions. We suggest this could be a low-cost enhancement for telepresence.
Proceedings of the 10th European conference on Interactive tv and video - EuroiTV '12, 2012
In this paper we present a comparative study of free-hand pointing and an absolute remote pointing device. Unimanual and bimanual interaction were tested, as well as a static reference system (spatial coordinates are fixed in the space in front of the TV) and a novel body-aligned reference system (coordinates are bound to the current position of the user). We conducted a point-and-click experiment with 12 participants. We identified the preferred interaction areas for left- and right-handed users in terms of hand preference and preferred spatial areas of interaction. In bimanual interaction, users relied more on the dominant hand, switching hands only when necessary. Even though the remote pointing device was faster than free-hand pointing, it was less accepted, probably due to its low precision.
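A minimal sketch of the two reference systems described above, under assumed geometry rather than the study's implementation: with a static reference the interaction volume is a fixed box in room coordinates in front of the TV, while with a body-aligned reference the same box is re-anchored to the user's current torso position, so cursor coordinates follow the user around the room.

```python
# Map a 3D hand position to normalised screen coordinates under the two (assumed) reference systems.
import numpy as np

def normalise_in_box(hand, box_origin, box_size):
    """Map a 3D hand position to normalised [0, 1]^2 screen coordinates (x, y)."""
    rel = (hand - box_origin) / box_size
    return np.clip(rel[:2], 0.0, 1.0)

box_size = np.array([0.6, 0.4, 0.4])                 # interaction volume, metres (assumption)
static_origin = np.array([-0.3, 1.0, 1.5])           # fixed box in front of the TV
user_torso = np.array([0.8, 1.1, 2.0])               # the user has stepped sideways
body_aligned_origin = user_torso + np.array([-0.3, -0.1, -0.5])  # box re-anchored to the user

hand = np.array([1.0, 1.25, 1.7])
print("static reference cursor:      ", normalise_in_box(hand, static_origin, box_size))
print("body-aligned reference cursor:", normalise_in_box(hand, body_aligned_origin, box_size))
```

With the static reference the cursor ends up pinned to the screen edge once the user moves away from the calibrated spot, whereas the body-aligned reference keeps the hand mapped inside the box.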
Lecture Notes in Computer Science, 2013
A collaboration scenario in which a remote helper guides a local worker in real time in performing a task on physical objects is common in a wide range of industries, including health, mining, and manufacturing. An established ICT approach to supporting this type of collaboration is to provide a shared visual space and some form of remote gesture, generally presented in 2D video form. Recent research in tele-presence has indicated that technologies supporting co-presence and immersion not only improve the process of collaboration but also improve the spatial awareness of the remote participant. We therefore propose a novel approach to developing a 3D system based on a 3D shared space and 3D hand gestures. A proof-of-concept system for remote guidance called HandsIn3D has been developed. This system uses a head-tracked stereoscopic HMD that allows the helper to be immersed in the virtual 3D space of the worker's workspace. The system captures the helper's hands in 3D and fuses them into the shared workspace. This paper introduces HandsIn3D and presents a user study demonstrating the feasibility of our approach.
2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2021
Figure 1: The three uni-directional Cross-Reality pinpointing techniques explored in the first study (from left to right): no visual feedback, with audio only via voice clips; a highlight outlining the object (in white); and a three-dimensional arrow. Figure 2: Two of the techniques explored in the second study in the context of bi-directional Cross-Reality pinpointing. Left: a pointing action from the tablet user, seen from the perspective of the user in VR. Inspired by Ibayashi et al. [12], we added a pulse element that travels through the scene and triggers vibrotactile feedback when passing through the VR user, making him or her aware of the action even when facing another direction. Right: a pointing action by the VR user (tracked via the handheld controllers), and how this is represented to the tablet user.
Personal and Ubiquitous Computing, 2020
Collaborative virtual environments allow remote users to work together in a shared 3D space. To take advantage of the possibilities offered by such systems, their design must allow the users to interact and communicate efficiently. One open question in this field concerns the avatar fidelity of remote partners, which can impact communication between the remote users, particularly when performing collaborative spatial tasks. In this paper, we present an experimental study comparing the effects of two partner avatars on collaboration during spatial tasks. The first avatar was based on a 2.5D streamed point cloud and the second on a 3D preconstructed avatar replicating the remote user's movements. These avatars differ in their fidelity levels, described through two components: visual and kinematic fidelity. Collaborative performance was evaluated through the efficacy of completing two spatial communication tasks, a pointing task and a spatial guidance task. The results indicate that the streamed point-cloud avatar permitted a significant improvement in collaborative performance for both tasks. The subjective evaluation suggests that these differences in performance can mainly be attributed to the higher kinematic fidelity of this representation compared to the 3D preconstructed avatar. We conclude that, when designing spatial collaborative virtual environments, it is important to achieve a high kinematic fidelity of the partner's representation, while a moderate visual fidelity of this representation can suffice.
2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2013
In this research, we explore using pointing and drawing in a remote collaboration system. Our application allows a local user with a tablet to communicate with a remote expert on a desktop computer. We compared performance in four conditions: (1) Pointers on Still Image, (2) Pointers on Live Video, (3) Annotation on Still Image, and (4) Annotation on Live Video. We found that drawing annotations require fewer inputs on the expert side and impose less cognitive load on the local worker side. In a follow-on study, we compared conditions (2) and (4) using a more complicated task. We found that pointing input requires good verbal communication to be effective and that drawing annotations need to be erased after each step of a task is completed.
2000
This paper proposes a gesture-based direct manipulation interface that can be used for data transfer among informational artifacts. "Grasp and Drop (Throw)" by hand gestures allows a user to grasp an object on a computer screen and drop (throw) it on other artifacts without touching them. Using the interface, a user can operate artifacts in the mixed reality world in a seamless manner and learn this interaction style easily. Based on this interaction technique, we developed a prototype presentation system using Microsoft PowerPoint, a wall-size screen, computer screens, and a printer. The presentation system with gestures allows a presenter to navigate through PowerPoint slides and transfer a slide from one computer screen to another. We conducted an experiment evaluating the gestural interaction style and analyzed user satisfaction with a questionnaire. The results show that the overall mean successful recognition rate is 96.9% and that the system is easy to learn.
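A minimal sketch of the grasp-and-drop interaction logic (a hypothetical state machine, not the paper's system): a recognised grasp gesture over a source screen picks up the object under the hand, and a subsequent drop or throw gesture over another artifact transfers it there.

```python
# Hypothetical grasp-and-drop session state; gesture recognition is assumed to happen upstream.
class GraspAndDrop:
    def __init__(self):
        self.held_object = None

    def on_gesture(self, gesture, artifact, obj_under_hand=None):
        """React to a recognised hand gesture performed over a given artifact."""
        if gesture == "grasp" and self.held_object is None:
            self.held_object = obj_under_hand
            print(f"grasped {obj_under_hand!r} from {artifact}")
        elif gesture in ("drop", "throw") and self.held_object is not None:
            print(f"released {self.held_object!r} onto {artifact} (via {gesture})")
            self.held_object = None

session = GraspAndDrop()
session.on_gesture("grasp", "laptop screen", obj_under_hand="slide 12")
session.on_gesture("throw", "wall display")
```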
IEEE Transactions on Multimedia, 2000
Keywords: eye contact; gaze awareness; immersive experiences; natural interaction; remote collaboration. Conventional telepresence systems allow remote users to see one another and interact with shared media, but users cannot make eye contact, and gaze awareness with respect to shared media and documents is lost. In this paper, we describe a remote collaboration system based on a see-through display that creates an experience where local and remote users are seemingly separated only by a vertical sheet of glass. Users can see each other and the media displayed on the shared surface. Face detectors are applied to the local and remote video streams to introduce an offset in the video display so as to bring the local user's face, the local camera, and the remote user's face image into collinearity. This ensures that, when the local user looks at the remote user's image, the camera behind the see-through display captures an image with the "Mona Lisa effect," where the eyes of the image appear to follow the viewer. Experiments show that, for one-on-one meetings, our system is capable of capturing and delivering realistic, genuine eye contact as well as accurate gaze awareness with respect to shared media.
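A hedged sketch of the collinearity idea (assumed geometry and values, not the paper's code): the remote user's face image should be drawn where the line from the local user's eyes to the camera crosses the display, so that looking at the remote face means looking into the camera; the offset is the difference between that intersection point and where the remote face is currently rendered.

```python
# Compute the on-screen shift that brings local face, camera, and remote face image into line.
import numpy as np

def render_offset(local_face, camera, remote_face_on_screen, screen_z=0.0):
    """Return the (x, y) shift to apply to the remote face image on the display plane z = screen_z."""
    direction = camera - local_face
    t = (screen_z - local_face[2]) / direction[2]
    desired = (local_face + t * direction)[:2]       # where the eye-to-camera line pierces the display
    return desired - remote_face_on_screen

local_face = np.array([0.10, 0.05, 0.60])            # metres in front of the display (assumption)
camera = np.array([0.00, 0.00, -0.10])               # camera sits just behind the display
remote_face_on_screen = np.array([-0.15, 0.02])      # current position of the remote face image

print("shift remote face image by", render_offset(local_face, camera, remote_face_on_screen), "m")
```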
Distributed, Ambient and Pervasive Interactions, 2016
In this paper, we present a unified framework for remote collaboration using interactive augmented reality (AR) authoring and hand tracking methods. The proposed framework enables a local user to organize AR digital contents to create a shared working environment and to collaborate with multiple distant users. To develop the framework, we combine two core technologies: (i) an interactive AR authoring method utilizing a smart input device for making a shared working space, and (ii) a hand-augmented object interaction method that tracks two hands in the egocentric camera view. We implemented a prototype of the proposed remote collaboration framework to test its feasibility in an indoor environment. In the end, we expect our framework to enable collaboration with a sense of co-presence with remote users in a user-friendly AR working space.
ami-lab.org
Traditional interfaces for gaming and entertainment still follow the GUI or keyboard, mouse, and monitor form very closely. In addition, voice recognition and vision tracking technologies are used in some computer games. However, these interfaces do not provide the ability for remote users to touch and feel each other. Therefore, we propose a system that aims to improve social and tangible interaction between remote persons and that can be applied to gaming and entertainment. It has a mobile input interface device which senses the force of human touch and a haptic wearable suit which simulates human touch on the body of the wearer.
Virtual Reality, 2002
We describe a design approach, Tangible Augmented Reality, for developing face-to-face collaborative Augmented Reality (AR) interfaces. Tangible Augmented Reality combines Augmented Reality techniques with Tangible User Interface elements to create interfaces in which users can interact with spatial data as easily as real objects. Tangible AR interfaces remove the separation between the real and virtual worlds and so enhance natural face-to-face communication. We present several examples of Tangible AR interfaces and results from a user study that compares communication in a collaborative AR interface to more traditional approaches. We find that in a collaborative AR interface people use behaviors that are more similar to unmediated face-to-face collaboration than in a projection screen interface.
2007 IEEE Symposium on 3D User Interfaces, 2007
We have implemented an augmented reality videoconferencing system that inserts virtual graphics overlays into the live video stream of remote conference participants. The virtual objects are manipulated using a novel interaction technique cascading bimanual tangible interaction and eye tracking. User studies show that our user interface enriches remote collaboration by offering hitherto unexplored ways of collaborative object manipulation, such as gaze-controlled ray picking of remote physical and virtual objects.
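An illustrative sketch of gaze-controlled ray picking (an assumed, simplified scheme, not the study's implementation): cast a ray from the tracked eye along the gaze direction and select the nearest object whose bounding sphere the ray intersects.

```python
# Pick the closest object hit by the gaze ray, using bounding spheres for intersection tests.
import numpy as np

def pick(gaze_origin, gaze_dir, objects):
    """objects: list of (name, centre, radius). Return the name of the closest hit, or None."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best = None
    for name, centre, radius in objects:
        to_centre = centre - gaze_origin
        along = np.dot(to_centre, gaze_dir)              # distance along the ray to the closest point
        if along < 0:
            continue                                      # object is behind the viewer
        perp = np.linalg.norm(to_centre - along * gaze_dir)
        if perp <= radius and (best is None or along < best[0]):
            best = (along, name)
    return best[1] if best else None

objects = [("virtual cube", np.array([0.2, 0.0, 1.0]), 0.10),
           ("physical mug", np.array([0.5, -0.1, 1.5]), 0.08)]
print(pick(np.zeros(3), np.array([0.2, 0.0, 1.0]), objects))
```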
Lecture Notes in Computer Science, 2002
This paper describes the development of a natural interface to a virtual environment. The interface is a natural pointing gesture that replaces the pointing devices normally used to interact with virtual environments. The pointing gesture is estimated in 3D using kinematic knowledge of the arm during pointing and monocular computer vision. The latter is used to extract the 2D position of the user's hand and map it into 3D. Off-line tests of the system show promising results, with an average error of 76 mm when pointing at a screen 2 m away. The implementation of a real-time system is currently in progress and is expected to run at 25 Hz.
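A hedged sketch of one way to lift a 2D hand detection to 3D using arm kinematics (an assumed reconstruction, not necessarily the paper's method): the camera ray through the detected hand pixel is intersected with a sphere of radius equal to the arm length, centred at the known shoulder, assuming the arm is outstretched while pointing; the pointing direction is then shoulder to hand.

```python
# Back-project a hand pixel onto the arm-reach sphere around the shoulder (pinhole camera at origin).
import numpy as np

def hand_3d_from_pixel(pixel, focal_px, principal_pt, shoulder, arm_length):
    """Intersect the camera ray through `pixel` with the arm-reach sphere; return the 3D hand point."""
    d = np.array([(pixel[0] - principal_pt[0]) / focal_px,
                  (pixel[1] - principal_pt[1]) / focal_px,
                  1.0])
    d /= np.linalg.norm(d)
    # Solve |t*d - shoulder|^2 = arm_length^2 for the farther intersection t.
    b = -2.0 * np.dot(d, shoulder)
    c = np.dot(shoulder, shoulder) - arm_length**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                       # the ray misses the reachable sphere
    t = (-b + np.sqrt(disc)) / 2.0
    return t * d

shoulder = np.array([0.20, -0.30, 2.00])  # metres in camera coordinates (assumption)
hand = hand_3d_from_pixel(pixel=(420, 310), focal_px=800.0, principal_pt=(320, 240),
                          shoulder=shoulder, arm_length=0.65)
print("hand in 3D:", hand)
print("pointing direction:", (hand - shoulder) / np.linalg.norm(hand - shoulder))
```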
Human-Computer …, 2004
This article considers tools to support remote gesture in video systems being used to complete collaborative physical tasks, that is, tasks in which two or more individuals work together manipulating three-dimensional objects in the real world. We first discuss the process of conversational grounding during collaborative physical tasks, particularly the role of two types of gestures in the grounding process: pointing gestures, which are used to refer to task objects and locations, and representational gestures, which are used to represent the form of task objects and the nature of actions to be used with those objects. We then consider ways in which both pointing and representational gestures can be instantiated in systems for remote collaboration on physical tasks. We present the results of two studies that use a "surrogate" approach to remote gesture, in which images are intended to express the meaning of gestures through visible embodiments, rather than direct views of the hands. In Study 1, we compare performance with a cursor-based …
2009
We present ConnectBoard, a new system for remote collaboration where users experience natural interaction with one another, seemingly separated only by a vertical, transparent sheet of glass. It overcomes two key shortcomings of conventional video communication systems: the inability to seamlessly capture natural user interactions, like using hands to point and gesture at parts of shared documents, and the inability of users to look into the camera lens without taking their eyes off the display. We solve these problems by placing the camera behind the screen, where the remote user is virtually located. The camera sees through the display to capture images of the user. As a result, our setup captures natural, frontal views of users as they point and gesture at shared media displayed on the screen between them. Users also never have to take their eyes off their screens to look into the camera lens. Our novel optical solution based on wavelength multiplexing can be easily built with off-the-shelf components and does not require custom electronics for projector-camera synchronization.
Proceedings of the First International Conference on …, 2002
In this paper, we introduce a new pointing device for ubiquitous computing environments. The user's eye is an integral part of the system. This relatively simple system makes it possible to realize novel features such as the "tele-click".
2001
Remote pointing is an interaction style for presentation systems, interactive TV, and other systems where the user is positioned an appreciable distance from the display. A variety of technologies and interaction techniques exist for remote pointing. This paper presents an empirical evaluation and comparison of two remote pointing devices. A standard mouse is used as a base-line condition. Using the ISO metric throughput (calculated from users' speed and accuracy in completing tasks) as the criterion, the two remote pointing devices performed poorly, demonstrating 32% and 65% worse performance than the mouse. Qualitatively, users indicated a strong preference for the mouse over the remote pointing devices. Implications for the design of present and future systems for remote pointing are discussed.
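For reference, the throughput criterion mentioned above is typically computed in the ISO 9241-9 style: effective target width is derived from the spread of selection endpoints, the effective index of difficulty uses the Shannon formulation, and throughput is that index divided by movement time. The sketch below uses made-up numbers purely for illustration, not the study's data.

```python
# Worked example of ISO 9241-9 style throughput (bits per second).
import math

def throughput(distance_m, endpoint_sd_m, movement_time_s):
    """Throughput from movement amplitude, endpoint spread, and movement time."""
    w_e = 4.133 * endpoint_sd_m                      # effective target width
    id_e = math.log2(distance_m / w_e + 1.0)         # effective index of difficulty (bits)
    return id_e / movement_time_s

print("mouse:          %.2f bit/s" % throughput(0.30, 0.008, 0.90))
print("remote pointer: %.2f bit/s" % throughput(0.30, 0.020, 1.40))
```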