2000, IEEE Multimedia
Pen-centric computing enhances user interface design by facilitating natural, fluid input methods that extend beyond traditional WIMP (Windows, Icons, Menus, Pointing device) interfaces. This paper discusses various applications of pen input technologies across fields such as healthcare, design, and education, emphasizing their adaptability and potential for improving usability and accessibility. It also reviews prototypes developed by the Brown University pen-centric computing group and examines ongoing challenges and research opportunities in advancing this technology.
Proceedings of the workshop on Advanced visual interfaces - AVI '96, 1996
Sitt Sen Chok and Kim Marriott, Monash University, Victoria, Australia. 1. INTRODUCTION. Recent hardware advances have brought interactive graphic tablets and pen-based notepad computers into the marketplace. This new technology, however, while offering great potential, has not yet been very successful. One of the main reasons is that software for pen-based computers is still immature: gesture recognition is poor and few applications make use of the new capabilities of the pen. The aim of the PENGUINS project is to provide tools that help in the development of software for pen-based computers which takes full advantage of the pen's new capabilities. The project is based on the intelligent pen and paper metaphor for human-computer interaction with pen-based computers. In this interaction metaphor, the user communicates with the computer using an application-specific visual language composed of handwritten text and diagrams. This contrasts with the usual state of affairs, in which the input of diagrams is a cumbersome, indirect process, requiring sophisticated menu-based graphic editors with many complex modes. Intelligent pen and paper, however, promises that such input will be able to be given in free form, modelessly, in any order and place, and modified using natural gesture commands. Users will thus be able to express themselves directly in the lingua franca of their application domain using visual languages such as flowcharts, PERT charts, entity-relationship diagrams, statecharts, electric circuit diagrams, musical notation, structural chemical formulae, or mathematical equations. However, before the potential of the intelligent pen and paper metaphor can be fully realized, integrated software tools are needed which support the construction of a user interface based on a particular visual language. The PENGUINS toolkit provides such tools.
We provide an overview of the Penguins project. The aim of this project is to develop tools that facilitate the development of software for pen-based graphics editors. The project is based on the intelligent pen and paper metaphor for human-computer interaction. In this metaphor, the user communicates with the computer using an application-specific visual language composed of handwritten text and diagrams. As the diagram is drawn with a pen, the underlying graphic editor parses the diagram, performing error correction and collecting geometric constraints which capture the relationships between diagram components. During manipulation these constraints are maintained by the editor in order to preserve the semantics of the diagram. The Penguins system contains tools which, given a grammatical specification of a visual language, automatically construct a core user interface for that visual language which embodies the intelligent pen and paper metaphor. This core user interface consists of a tokenizer, constraint solver, layout controller and constraint-based graphics editor together with an incremental parser for the specified visual language. The specification language is based on constraint multiset grammars.
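The parse-then-maintain cycle described above can be sketched as follows. This is a minimal illustration, not the Penguins implementation: the `Box` class and the simple offset-propagation "solver" are invented placeholders for the real constraint solver and layout controller.

```python
# Toy sketch of constraint maintenance in a diagram editor: as the
# user drags one shape, the geometric constraints collected during
# parsing are re-solved so attached shapes follow. Names are
# illustrative assumptions, not the Penguins API.

class Box:
    def __init__(self, x, y):
        self.x, self.y = x, y

def connector_constraint(a, b, dx, dy):
    """Keep box b offset from box a by (dx, dy)."""
    def solve():
        b.x, b.y = a.x + dx, a.y + dy
    return solve

# Parsing the drawn diagram would collect constraints like this one.
start, end = Box(0, 0), Box(40, 0)
constraints = [connector_constraint(start, end, 40, 0)]

# During direct manipulation, the editor re-solves after each move,
# preserving the semantics of the diagram.
start.x, start.y = 10, 5
for solve in constraints:
    solve()

print((end.x, end.y))  # the connected box follows: (50, 5)
```

A real solver would handle cyclic and over-constrained systems; the point here is only that manipulation triggers re-solving rather than breaking the drawn relationships.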
2012 Sixth International Conference on Research Challenges in Information Science (RCIS), 2012
Several algorithms have been developed for pen-based gesture recognition. Yet their integration into the mainstream engineering of interactive systems suffers from several shortcomings: they are hard to compare with each other, determining which one is most suitable in which situation remains a research problem, their performance varies widely depending on contextual parameters that are hard to predict, and fine-tuning them in a real interactive application is a challenge. To address these shortcomings, we developed UsiGesture, an engineering method and software support platform that accommodates multiple algorithms for pen-based gesture recognition so that they can be integrated in a straightforward way into interactive computing systems. The method provides designers and developers with support for the following steps: defining a dataset that is appropriate for an interactive system (e.g., made of commands, symbols, or characters); determining the most suitable gesture recognition algorithm depending on contextual variables (e.g., user, platform, environment); fine-tuning the parameters of this algorithm by multi-criteria optimization (e.g., system speed and recognition rate vs. human distinguishability and perception); and incorporating the fine-tuned algorithm into an integrated development environment for engineering interactive systems.
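The "determine the most suitable algorithm" step can be sketched as a weighted multi-criteria comparison over a labeled dataset. The recognizers, weights, and scoring function below are invented placeholders to show the shape of the idea, not the actual UsiGesture platform.

```python
# Hedged sketch of choosing among gesture recognizers by a weighted
# score combining recognition rate and speed, in the spirit of the
# multi-criteria selection step described above. All names and
# weights are illustrative assumptions.
import time

def evaluate(recognizer, dataset):
    """Return (accuracy, seconds per gesture) on a labeled dataset."""
    start = time.perf_counter()
    correct = sum(recognizer(strokes) == label for strokes, label in dataset)
    elapsed = time.perf_counter() - start
    return correct / len(dataset), elapsed / len(dataset)

def pick_best(recognizers, dataset, w_acc=0.8, w_speed=0.2):
    """Weighted sum over the criteria; higher score is better."""
    scored = []
    for name, rec in recognizers.items():
        acc, sec = evaluate(rec, dataset)
        scored.append((w_acc * acc - w_speed * sec, name))
    return max(scored)[1]

# Dummy recognizers over a toy dataset of (stroke-features, label) pairs.
dataset = [((0, 1), "tap"), ((1, 0), "swipe")]
recognizers = {
    "always_tap": lambda s: "tap",                       # 50% accurate
    "first_feature": lambda s: "swipe" if s[0] else "tap",  # 100% accurate
}
print(pick_best(recognizers, dataset))  # → first_feature
```

In practice the criteria would also include human factors such as distinguishability, which require user studies rather than offline measurement.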
2025
The use of pens in human-computer interaction has been investigated since Ivan Sutherland's Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook on potential future challenges and directions for pen-based human-computer interaction.
Proceedings of Balisage: The Markup Conference 2019
Many users of today's pen computers have an ambivalent attitude towards these devices. On the one hand, they like the ease of use, especially in the beginning. On the other hand, after some time, they often feel hampered by the systems, since the user interfaces do not reflect the users' individual skills, experiences, and preferences. Pen interfaces treat all users in the same way, like novices, and becoming an expert or 'power' user is quite difficult. In this paper, we report on the gedric approach (Geißler, 1995) to this problem and evaluate an application with a so-called pen-centric user interface (Geißler, to appear). We show that such an interface efficiently supports experienced as well as novice users. By having the freedom to choose from two popular interaction styles, menus and gestures, and to mix them arbitrarily, gedrics support a wide range of user preferences and skills. This results not only in efficient individual working styles but also in high user satisfaction.
Proceedings of the 20th …, 2007
Information cannot be found if it is not recorded. Existing rich graphical application approaches interfere with user input in many ways, forcing complex interactions to enter simple information, requiring complex cognition to decide where the data should be stored, and limiting the kind of information that can be entered to what can fit into specific applications' data models. Freeform text entry suffers from none of these limitations but produces data that is hard to retrieve or visualize. We describe the design and ...
Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '97, 1997
An experimental GUI paradigm is presented, based on the design goals of maximizing the amount of screen used for application data, reducing the degree to which the UI diverts visual attention from the application data, and increasing the quality of input. In pursuit of these goals, we integrated the non-standard UI technologies of multi-sensor tablets, toolglass, transparent UI components, and marking menus. We describe a working prototype of our new paradigm, the rationale behind it, and our experiences introducing it into an existing application. Finally, we present some of the lessons learned: prototypes are useful for breaking the barriers imposed by conventional GUI design, and some of their ideas can still be retrofitted seamlessly into products. Furthermore, the added functionality is measured not only in terms of user performance but also by the quality of interaction, which allows artists to create new graphic vocabularies and graphic styles.
2009
Continuous interaction is critically important in pen-based systems, and seamless mode switching can substantially improve the fluency of interaction. An interface that combines seamless and continuous operation has the potential to increase operating efficiency and keep users' attention focused. In this paper, we present a seamless and continuous operation paradigm based on the pen's multiple input parameters. A prototype supporting seamless and continuous (SC) operation was designed to compare its performance with MS Word 2007. Subjects were asked to select target components, activate command menus, and color the targets in a given flowchart in each system. The experimental results show that the SC operation paradigm outperformed the standard MS Word interaction in both operation speed and cursor footprint length (CFL).
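Seamless mode switching from a pen's extra input channels can be sketched as a simple mapping from continuous parameters to editing modes. The thresholds, channel names, and modes below are illustrative assumptions in the spirit of the SC paradigm, not the prototype's actual design.

```python
# Minimal sketch of seamless mode switching driven by a pen's
# multiple input parameters (here: pressure and a barrel button),
# avoiding an explicit menu round-trip for every mode change.
# Thresholds and mode names are invented for illustration.

def mode_for(pressure, barrel_button):
    """Map continuous pen parameters to an editing mode."""
    if barrel_button:
        return "command"   # barrel button invokes the command menu
    if pressure < 0.1:
        return "hover"     # tracking only, no ink
    if pressure < 0.7:
        return "select"    # light touch selects components
    return "ink"           # firm pressure draws or colors

# A stream of (pressure, button) samples switches modes fluidly,
# without lifting the pen to visit a toolbar.
samples = [(0.05, False), (0.4, False), (0.9, False), (0.9, True)]
print([mode_for(p, b) for p, b in samples])
# → ['hover', 'select', 'ink', 'command']
```

Keeping the mode a pure function of the current pen state is one way to make switches reversible and continuous, which is what shortens the cursor footprint relative to toolbar round-trips.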