2007, Aslib Proceedings
https://doi.org/10.1108/00012530710817618
The motivation for this research is the emergence of mobile information systems where information is disseminated to mobile individuals via handheld devices. A key distinction between mobile and desktop computing is the significance of the relationship between the spatial location of an individual and the spatial location associated with information accessed by that individual. Given a set of spatially referenced documents retrieved from a mobile information system, this set can be presented using alternative interfaces of which two presently dominate: textual lists and graphical two-dimensional maps. The purpose of this paper is to explore how mixed reality interfaces can be used for the presentation of information on mobile devices.
The purpose of this study is to determine the parameters of the best possible spatial user interface design for the online shopping experience in mixed reality. In the first phase of the study, ten existing augmented and virtual reality shopping applications are examined, and the spatial relationships of the interfaces used in these experiences and the possibilities of natural interaction offered to the user are compared. In the next phase of the study, the interfaces are evaluated in order to propose a three-dimensional spatial interface, powered by mixed reality, that improves the spatial relationships and naturalness of these interfaces.
Journal of Management Information Systems, 2007
Knowledge of objects, situations, or locations in the environment can be productive, useful, or even life-critical for mobile augmented reality (AR) users. Users may need assistance with (1) dangers, obstacles, or situations requiring attention; (2) visual search; (3) task sequencing; and (4) spatial navigation. The omnidirectional attention funnel is a general-purpose AR interface technique that rapidly guides attention to any tracked object, person, or place in the space. The attention funnel dynamically directs user attention with strong bottom-up spatial attention cues. In a study comparing the attention funnel to other attentional techniques such as highlighting and audio cueing, search speed increased by over 50 percent, and perceived cognitive load decreased by 18 percent. The technique provides a general three-dimensional cursor for a wide array of applications requiring visual search, emergency warnings, alerts to specific objects or obstacles, or three-dimensional navigation to objects in space.
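The core geometric idea behind an attention funnel (a chain of AR patterns curving from the user's view axis out to the target) can be illustrated with a minimal sketch. This is a hypothetical simplification using a quadratic Bezier curve, not the authors' implementation; the function name and parameters are invented for illustration:

```python
import numpy as np

def attention_funnel_waypoints(head_pos, view_dir, target_pos, n=8):
    """Sample n pattern positions along a curve that starts on the
    user's view axis and bends toward a tracked target.

    Illustrative sketch only: a quadratic Bezier whose control point
    lies along the gaze direction, so the funnel initially hugs the
    view axis and ends exactly on the target.
    """
    head_pos = np.asarray(head_pos, dtype=float)
    target_pos = np.asarray(target_pos, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)

    # Curve start: a point slightly in front of the viewer.
    start = head_pos + 0.5 * view_dir
    # Control point: further along the gaze, keeping the curve
    # tangent to the view direction at first.
    control = head_pos + 2.0 * view_dir

    pts = []
    for i in range(1, n + 1):
        t = i / n
        # Quadratic Bezier interpolation between start, control, target.
        p = (1 - t) ** 2 * start + 2 * (1 - t) * t * control + t ** 2 * target_pos
        pts.append(p)
    return pts

# Example: viewer at eye height looking down -z, target off to the side.
pts = attention_funnel_waypoints([0, 1.7, 0], [0, 0, -1], [3, 1.0, -5])
```

Each returned point could anchor one funnel pattern, oriented to face the viewer; the last pattern lands on the target itself.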
This dissertation deals mainly with the discipline of Human-Computer Interaction (HCI), with particular attention to the role it plays in the domain of modern mobile devices. Mobile devices today offer crucial support for a plethora of daily activities for nearly everyone. Ranging from checking business mail while traveling, to accessing social networks in a mall, to carrying out business transactions out of the office, to using all kinds of online public services, mobile devices play the important role of connecting people while physically apart. Modern mobile interfaces are therefore expected to improve the user's interaction experience with the surrounding environment and offer different adaptive views of the real world. The goal of this thesis is to enhance the usability of mobile interfaces for spatial data. Spatial data are data in which the spatial component plays an important role in clarifying the meaning of the data themselves. Nowadays, this kind of data is widespread in mobile applications: it is present in games, map applications, mobile community applications and office automation. In order to enhance the usability of spatial data interfaces, my research investigates two major issues: (1) enhancing the visualization of spatial data on small screens, and (2) enhancing text-input methods. I selected the Design Science Research approach to investigate these research questions. The idea underlying this approach is "you build an artifact to learn from it"; in other words, researchers clarify what is new in their design. The new knowledge derived from the artifact is presented in the form of interaction design patterns in order to support developers in dealing with issues of mobile interfaces. The thesis is organized as follows. Initially I present the broader context, the research questions and the approaches used to investigate them. Then the results are split into two main parts.
In the first part I present a visualization technique called Framy, designed to support users in visualizing geographical data in mobile map applications. I also introduce a multimodal extension of Framy obtained by adding sounds and vibrations, and then present the process that turned this multimodal interface into a means for visually impaired users to interact with Framy. Some projects involving the design principles of Framy are shown in order to demonstrate the adaptability of the technique to different contexts. The second part concerns text-input methods, focusing in particular on virtual keyboards for mobile devices. A new kind of virtual keyboard called TaS provides users with a more efficient and effective input system than the traditional QWERTY keyboard. Finally, in the last chapter, the knowledge acquired is formalized in the form of interaction design patterns.
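Framy-style interfaces indicate off-screen geographic data by colouring portions of a frame along the screen border. The underlying bookkeeping can be sketched as follows; this is a hypothetical, axis-aligned simplification (invented function name and viewport convention), not the dissertation's implementation:

```python
def offscreen_counts(points, viewport):
    """Count off-screen points per viewport edge.

    viewport is (xmin, ymin, xmax, ymax) in map coordinates.
    The count for each edge could drive the colour intensity of the
    corresponding frame segment, hinting how much data lies that way.
    """
    xmin, ymin, xmax, ymax = viewport
    counts = {"top": 0, "bottom": 0, "left": 0, "right": 0}
    for x, y in points:
        if xmin <= x <= xmax and ymin <= y <= ymax:
            continue  # visible on screen; no indicator needed
        # How far the point overshoots the viewport on each axis.
        dx = xmin - x if x < xmin else (x - xmax if x > xmax else 0)
        dy = ymin - y if y < ymin else (y - ymax if y > ymax else 0)
        # Attribute the point to the edge with the larger overshoot.
        if dx >= dy:
            counts["left" if x < xmin else "right"] += 1
        else:
            counts["bottom" if y < ymin else "top"] += 1
    return counts

# Example: one visible point, three off-screen in different directions.
counts = offscreen_counts([(5, 5), (-2, 5), (5, 20), (15, 5)],
                          (0, 0, 10, 10))
```

A real implementation would also map counts to colour saturation and handle diagonal (corner) regions, but the edge-attribution step above is the essential part.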
Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia - MUM '10, 2010
With advanced sensor technologies and tools for content creation, current mobile devices possess features for providing information services based on the user's location. There are several services for geographically pinned user-generated content that focus on providing information to users in unfamiliar locations. However, information needs regarding location-based content services in the familiar everyday context have so far received little research attention. Our research entailed a 12-day user study in which nine participants kept a diary, reporting needs for annotations of locations they had come across in various day-to-day situations. Based on our results, we present design implications for annotation services that take into account user needs in various daily situations. The results show that such services should support flexible control over the visibility of annotations, notifications about selected annotations within the vicinity, and easy remote annotation. In addition, the system should support collective and living annotations that can be contributed to by several users.
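Two of the design implications (visibility control and vicinity notifications) can be sketched as a simple data model plus a proximity filter. This is a hypothetical illustration (the `Annotation` class, `nearby` function and visibility values are invented), not the study's system:

```python
import math
from dataclasses import dataclass

@dataclass
class Annotation:
    lat: float
    lon: float
    text: str
    visibility: str  # e.g. "private", "group", or "public"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(annotations, lat, lon, radius_m=100.0, viewer="public"):
    """Annotations within radius_m that the viewer is allowed to see."""
    return [a for a in annotations
            if a.visibility in ("public", viewer)
            and haversine_m(a.lat, a.lon, lat, lon) <= radius_m]

# Example: one annotation at the user's position, one a few km away.
anns = [Annotation(60.170, 24.940, "good cafe here", "public"),
        Annotation(60.200, 24.940, "far away note", "public")]
hits = nearby(anns, 60.170, 24.940, radius_m=100.0)
```

The visibility field models the "flexible control" implication, and running `nearby` on a position update would drive the vicinity notifications.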
… : Teleoperators & Virtual …, 2002
In this paper we describe two explorations in the use of hybrid user interfaces for collaborative geographic data visualization. Our first interface combines three technologies: Augmented Reality (AR), immersive Virtual Reality, and computer-vision-based hand and object tracking. Wearing a lightweight display with a camera attached, users can look at a real map and see three-dimensional virtual terrain models overlaid on it. From this AR interface they can fly in and experience the model immersively, or use free-hand gestures or physical markers to change the data representation. Building on this work, our second interface explores alternative interaction techniques, including a zoomable user interface, paddle interactions and pen annotations. We describe the system hardware and software, and the implications for GIS and spatial science applications.
IEEE Pervasive Computing, 2011
A Situative Space Model for Mobile Mixed-Reality Computing
As more computational devices are put into place, designing the user experience as an ensemble of interactions will become paramount. No single vendor should assume that their suite of devices and systems will become dominant. Consequently, it's important to begin planning and designing how real/virtual devices will cooperatively interoperate. Such interoperation minimally requires communication protocol agreements and behavior standards for devices. But as a prerequisite for these, we need to begin understanding, as a community, what we want the overall experience of ubiquitously emplaced devices to be. We are designing more than just widgets; by creating many devices that communicate, we are creating entire environments. The range of options is completely open to us: we can create a world where we live in a cacophony of many voices, each looking for a piece of user attention, or we can design a world of devices and systems that gently support our work and play, being continually responsive and sensitive to our preferred styles of interaction, announcement and intrusion. (Dan Russell and Mark Weiser)
2007
To study how users of mobile maps construct the referential relationship between points in the virtual space and the surrounding physical space, a semi-natural field study was conducted in a built city environment, utilizing a photorealistic interactive map displayed on a PDA screen. The subjects (N=8) were shown a target (a building) in either the physical or the virtual space and asked to point to the corresponding object in the other space. Two viewports, "2D" (top-down) and "3D" (street-level), were compared, both allowing the users to move freely. 3D surpassed 2D in performance (e.g., 23% faster task completion) and workload measures (e.g., a 53% difference in NASA-TLX scores), particularly in non-proximal search tasks where the target lay outside the initial screen. Verbal protocols and search pattern visualizations revealed that subjects utilized varied perceptual-interactive search strategies, many of which were specific to the viewport type. These strategies are presented and discussed in terms of the theory of pragmatic and epistemic action (Kirsh & Maglio, 1994). It is argued that the purpose of such strategies was not to find the target directly by transforming either of the two spaces (pragmatic action), but to make the task's processing requirements better aligned with the available cognitive resources (epistemic action). Implications for interface design are discussed with examples.
2008 12th International Conference Information Visualisation, 2008
In this paper, we argue for an integrated approach combining innovations in MSI with the latest advances in visualization. To demonstrate the potential of this joint approach, we present a series of three related projects and highlight in each case the research challenges and results in the field of mobile HCI and mobile application development, as well as the corresponding advances in the underlying server-side rendering algorithms.
Nowadays, spatial data are widespread in mobile applications: they are present in games, map applications, web community applications and office automation. However, this kind of spatial information potentially needs a large display area, and the hardware constraint of limited screen dimensions creates many usability challenges. Our investigation of these challenges over the last few years has led us to general usability principles that a well-designed interface should adopt. In this paper we describe the path we followed to derive those principles from the experience gained in developing mobile interfaces for different application domains. The principles are formalized in terms of two interaction design patterns specific to mobile interfaces managing spatial data. They extend existing HCI patterns and are completed, as usual, with concrete examples of their applications.
2011
…ing Positions for Indoor Location Based Services
An Open Standardized Platform for Dual Reality Applications
Context and Location-Awareness for an Indoor Mobile Guide
Indoor Positioning: Group Mutual Positioning
Management Dashboard in a Retail Scenario
Mixed Reality Simulation Framework for Multimodal Remote Sensing
Virtual Environments for Testing Location-Based Applications
Virtually Centralized, Globally Dispersed: A Sametime 3D Analysis
Proceedings of The IEEE, 2012
Research in Learning Technology, 2018
IOP Conference Series: Materials Science and Engineering
Proceedings of the …, 2009
International Journal of Electrical and Computer Engineering (IJECE), 2019
Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
Proceedings of the International Working Conference on Advanced Visual Interfaces - AVI '12, 2012
Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005
Proceedings of the working conference on …, 2006
IEEE Computer Graphics and Applications, 1998
Computers & Graphics, 1996