26. UIST 2013: St. Andrews, UK
- Shahram Izadi, Aaron J. Quigley, Ivan Poupyrev, Takeo Igarashi:
The 26th Annual ACM Symposium on User Interface Software and Technology, UIST'13, St. Andrews, United Kingdom, October 8-11, 2013. ACM 2013, ISBN 978-1-4503-2268-3
Keynote address
- Raffaello D'Andrea:
Humans and the coming machine revolution. 1-2
Hardware
- Robert Xiao, Chris Harrison, Karl D. D. Willis, Ivan Poupyrev, Scott E. Hudson:
Lumitrack: low cost, high precision, high speed tracking with projected m-sequences. 3-12
- Lining Yao, Ryuma Niiyama, Jifei Ou, Sean Follmer, Clark Della Silva, Hiroshi Ishii:
PneUI: pneumatically actuated soft composite materials for shape changing interfaces. 13-22
- Mustafa Emre Karagozler, Ivan Poupyrev, Gary K. Fedder, Yuri Suzuki:
Paper generators: harvesting energy from touching, rubbing and sliding. 23-30
- Makoto Ono, Buntarou Shizuki, Jiro Tanaka:
Touch & activate: adding interactivity to existing objects using active acoustic sensing. 31-40
- Christian Holz, Patrick Baudisch:
Fiberio: a touchscreen that senses fingerprints. 41-50
Mobile
- Xiaojun Bi, Shumin Zhai:
Bayesian touch: a statistical criterion of target selection with finger touch. 51-60
- Philip Quinn, Sylvain Malacria, Andy Cockburn:
Touch scrolling transfer functions. 61-70
- Daniel Spelmezan, Caroline Appert, Olivier Chapuis, Emmanuel Pietriga:
Controlling widgets with one power-up button. 71-74
- Kerry Shih-Ping Chang, Brad A. Myers, Gene M. Cahill, Soumya Simanta, Edwin J. Morris, Grace A. Lewis:
Improving structured data entry on mobile devices. 75-84
- Shiri Azenkot, Cynthia L. Bennett, Richard E. Ladner:
DigiTaps: eyes-free number entry on touchscreens with minimal audio feedback. 85-90
- Sunjun Kim, Geehyuk Lee:
Haptic feedback design for a virtual button along force-displacement curves. 91-96
Visualization & video
- John Brosz, Miguel A. Nacenta, Richard Pusch, Sheelagh Carpendale, Christophe Hurter:
Transmogrification: causal manipulation of visualizations. 97-106
- Dongwook Yoon, Nicholas Chen, François Guimbretière:
TextTearing: opening white space for digital ink annotation. 107-112
- Steve Rubin, Floraine Berthouzoz, Gautham J. Mysore, Wilmot Li, Maneesh Agrawala:
Content-based tools for editing audio stories. 113-122
- Daniel Jackson, James Nicholson, Gerrit Stoeckigt, Rebecca Wrobel, Anja Thieme, Patrick Olivier:
Panopticon: a parallel video overview system. 123-130
- James Tompkin, Fabrizio Pece, Rajvi Shah, Shahram Izadi, Jan Kautz, Christian Theobalt:
Video collections in panoramic contexts. 131-140
- Pei-Yu Chi, Joyce Liu, Jason Linder, Mira Dontcheva, Wilmot Li, Björn Hartmann:
DemoCut: generating concise instructional videos for physical demonstrations. 141-150
Crowd & creativity
- Walter S. Lasecki, Rachel Wesley, Jeffrey Nichols, Anand Kulkarni, James F. Allen, Jeffrey P. Bigham:
Chorus: a crowd-powered conversational assistant. 151-162
- Shahriyar Amini, Yang Li:
CrowdLearner: rapidly creating mobile recognizers using crowdsourcing. 163-172
- Juho Kim, Haoqi Zhang, Paul André, Lydia B. Chilton, Wendy E. Mackay, Michel Beaudouin-Lafon, Robert C. Miller, Steven P. Dow:
Cobi: a community-informed conference scheduling tool. 173-182
- Emmanuel Iarussi, Adrien Bousseau, Theophanis Tsandilas:
The drawing assistant: automated drawing guidance and feedback from photographs. 183-192
- Siddhartha Chaudhuri, Evangelos Kalogerakis, Stephen Giguere, Thomas A. Funkhouser:
AttribIt: content creation with semantic attributes. 193-202
- Junichi Yamaoka, Yasuaki Kakehi:
dePENd: augmented handwriting system using ferromagnetism of a ballpoint pen. 203-210
Sensing
- Adiyan Mujibiya, Jun Rekimoto:
Mirage: exploring interaction modalities using off-body static electric field sensing. 211-220
- Kian Peen Yeo, Suranga Nanayakkara, Shanaka Ransiri:
StickEar: making everyday objects respond to sound. 221-226
- Andrea Colaço, Ahmed Kirmani, Hye Soo Yang, Nan-Wei Gong, Chris Schmandt, Vivek K. Goyal:
Mime: compact, low power 3D gesture sensing for interaction with head mounted displays. 227-236
- Ke-Yu Chen, Kent Lyons, Sean White, Shwetak N. Patel:
uTrack: 3D input using two magnetic sensors. 237-244
- Simon Olberding, Nan-Wei Gong, John Tiab, Joseph A. Paradiso, Jürgen Steimle:
A cuttable multi-touch sensor. 245-254
- Li-Wei Chan, Rong-Hao Liang, Ming-Chang Tsai, Kai-Yin Cheng, Chao-Huai Su, Mike Y. Chen, Wen-Huang Cheng, Bing-Yu Chen:
FingerPad: private and subtle interaction using fingertips. 255-260
Vision
- Ken Pfeuffer, Mélodie Vidal, Jayson Turner, Andreas Bulling, Hans Gellersen:
Pursuit calibration: making gaze calibration less tedious and more flexible. 261-270
- Brian A. Smith, Qi Yin, Steven K. Feiner, Shree K. Nayar:
Gaze locking: passive eye contact detection for human-object interaction. 271-280
- Matei Negulescu, Yang Li:
Open project: a lightweight framework for remote sharing of mobile applications. 281-290
- Xing-Dong Yang, Khalad Hasan, Neil D. B. Bruce, Pourang Irani:
Surround-see: enabling peripheral vision on smartphones during active use. 291-300
- Vinitha Khambadkar, Eelke Folmer:
GIST: a gestural interface for remote nonvisual spatial perception. 301-310
- Fraser Anderson, Tovi Grossman, Justin Matejka, George W. Fitzmaurice:
YouMove: enhancing movement training with an augmented reality mirror. 311-320
GUI
- Sylvain Malacria, Joey Scarr, Andy Cockburn, Carl Gutwin, Tovi Grossman:
Skillometers: reflective widgets that motivate and help users to improve performance. 321-330
- Gilles Bailly, Antti Oulasvirta, Timo Kötzing, Sabrina Hoppe:
MenuOptimizer: interactive optimization of menu systems. 331-342
- Clemens Zeidler, Christof Lutteroth, Wolfgang Stürzlinger, Gerald Weber:
The Auckland layout editor: an improved GUI layout specification process. 343-352
- Hsiang-Sheng Liang, Kuan-Hung Kuo, Po-Wei Lee, Yu-Chien Chan, Yu-Chin Lin, Mike Y. Chen:
SeeSS: seeing what I broke - visualizing change impact of cascading style sheets (CSS). 353-356
Applications and games
- Jörg Schweitzer, Ralf Dörner:
Capturing on site laser annotations with smartphones to document construction work. 357-362
- Ethan Fast, Colleen Lee, Alex Aiken, Michael S. Bernstein, Daphne Koller, Eric Smith:
Crowd-scale interactive formal reasoning and analytics. 363-372
- Masato Miyauchi, Takashi Kimura, Takuya Nojima:
A tongue training system for children with Down syndrome. 373-376
- Eric Butler, Adam M. Smith, Yun-En Liu, Zoran Popovic:
A mixed-initiative tool for designing level progressions in games. 377-386
- Yupeng Zhang, Teng Han, Zhimin Ren, Nobuyuki Umetani, Xin Tong, Yang Liu, Takaaki Shiratori, Xiang Cao:
BodyAvatar: creating freeform 3D avatars using first-person body gestures. 387-396
- Miran Kim, Jeff Angermann, George Bebis, Eelke Folmer:
ViziCal: accurate energy expenditure prediction for playing exergames. 397-404
- Patrick Baudisch, Henning Pohl, Stefanie Reinicke, Emilia Wittmers, Patrick Lühne, Marius Knaust, Sven Köhler, Patrick Schmidt, Christian Holz:
Imaginary reality gaming: ball games without a ball. 405-410
Tangible and fabrication
- Sungjae Hwang, Myungwook Ahn, KwangYun Wohn:
MagGetz: customizable passive tangible controllers on and around conventional mobile devices. 411-416
- Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, Hiroshi Ishii:
inFORM: dynamic physical affordances and constraints through shape and object actuation. 417-426
- Jun Rekimoto:
Traxion: a tactile interaction device with virtual force sensation. 427-432
- Amit Zoran, Roy Shilkrot, Joseph A. Paradiso:
Human-computer interaction for hybrid carving. 433-440
- Daniel Saakes, Thomas Cambazard, Jun Mitani, Takeo Igarashi:
PacCAM: material capture and interactive 2D packing for efficient material usage on CNC cutting machines. 441-446
- Valkyrie Savage, Colin Chang, Björn Hartmann:
Sauron: embedded single-camera sensing of printed physical user interfaces. 447-456
- Eric Brockmeyer, Ivan Poupyrev, Scott E. Hudson:
PAPILLON: designing curved display surfaces with printed optics. 457-462
Development
- Salman Ahmad, Sepandar D. Kamvar:
The dog programming language. 463-472
- Brian Burg, Richard Bailey, Amy J. Ko, Michael D. Ernst:
Interactive record/replay for web application debugging. 473-484
- Shiry Ginosar, Luis Fernando De Pombo, Maneesh Agrawala, Björn Hartmann:
Authoring multi-stage code examples with editable code histories. 485-494
- Kuat Yessenov, Shubham Tulsiani, Aditya Krishna Menon, Robert C. Miller, Sumit Gulwani, Butler W. Lampson, Adam Kalai:
A colorful approach to text processing by example. 495-504
Haptics
- Tom Carter, Sue Ann Seah, Benjamin Long, Bruce W. Drinkwater, Sriram Subramanian:
UltraHaptics: multi-point mid-air haptic feedback for touch surfaces. 505-514
- Jack Lindsay, Iris Jiang, Eric C. Larson, Richard J. Adams, Shwetak N. Patel, Blake Hannaford:
Good vibrations: an evaluation of vibrotactile impedance matching for low power wearable applications. 515-520
- Karen Vanderloock, Vero Vanden Abeele, Johan A. K. Suykens, Luc Geurts:
The skweezee system: enabling the design and the programming of squeeze interactions. 521-530
- Seung-Chan Kim, Ali Israr, Ivan Poupyrev:
Tactile rendering of 3D features on touch surfaces. 531-538
- Masayasu Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, Michita Imai:
SenSkin: adapting skin as a soft interface. 539-544