


13th ICMI 2011: Alicante, Spain
- Hervé Bourlard, Thomas S. Huang, Enrique Vidal, Daniel Gatica-Perez, Louis-Philippe Morency, Nicu Sebe: Proceedings of the 13th International Conference on Multimodal Interfaces, ICMI 2011, Alicante, Spain, November 14-18, 2011. ACM 2011, ISBN 978-1-4503-0641-6
Keynote address 1
- David A. Forsyth: Still looking at people. 1-2
Oral session 1: affect
- Héctor Pérez Martínez, Georgios N. Yannakakis: Mining multimodal sequential patterns: a case study on affect detection. 3-10
- Daniel McDuff, Rana El Kaliouby, Rosalind W. Picard: Crowdsourced data collection of facial responses. 11-18
- Florian Lingenfelser, Johannes Wagner, Elisabeth André: A systematic discussion of fusion techniques for multi-modal affect recognition tasks. 19-26
- Ravi Kiran Sarvadevabhatla, Mitchel Benovoy, Sam Musallam, Victor Ng-Thow-Hing: Adaptive facial expression recognition using inter-modal top-down context. 27-34
Special session 1: multimodal interaction: brain-computer interfacing
- Anton Nijholt, Brendan Z. Allison, Robert J. K. Jacob: Brain-computer interaction: can multimodality help? 35-40
- Hayrettin Gürkök, Gido Hakvoort, Mannes Poel: Modality switching and performance in a thought and speech controlled computer game. 41-48
- Nils Hachmeister, Hannes Riechmann, Helge J. Ritter, Andrea Finke: An approach towards human-robot-human interaction using a hybrid brain-computer interface. 49-52
- Thorsten O. Zander, Marius David Klippel, Reinhold Scherer: Towards multimodal error responses: a passive BCI for the detection of auditory errors. 53-56
- Andreas Pusch, Anatole Lécuyer: Pseudo-haptics: from the theoretical foundations to practical system design guidelines. 57-64
Poster session
- Martin Pielot, Benjamin Poppinga, Wilko Heuten, Susanne Boll: 6th senses for everyone!: the value of multimodal feedback in handheld navigation aids. 65-72
- Yi Yang, Yuru Zhang, Zhu Hou, Betty Lemaire-Semail: Adding haptic feedback to touch screens at the right time. 73-80
- Prasenjit Dey, Muthuselvam Selvaraj, Bowon Lee: Robust user context analysis for multimodal interfaces. 81-88
- Ramadevi Vennelakanti, Prasenjit Dey, Ankit Shekhawat, Phanindra Pisupati: The picture says it all!: multimodal interactions and interaction metadata. 89-96
- Lode Hoste, Bruno Dumas, Beat Signer: Mudra: a unified multimodal interaction framework. 97-104
- Stefano Carrino, Alexandre Péclat, Elena Mugellini, Omar Abou Khaled, Rolf Ingold: Humans and smart environments: a novel multimodal interaction approach. 105-112
- Simon F. Worgan, Ardhendu Behera, Anthony G. Cohn, David C. Hogg: Exploiting petri-net structure for activity classification and user instruction within an industrial setting. 113-120
- Mathias Baglioni, Eric Lecolinet, Yves Guiard: JerkTilts: using accelerometers for eight-choice selection on mobile devices. 121-128
- Vicent Alabau, Luis Rodríguez-Ruiz, Alberto Sanchís, Pascual Martínez-Gómez, Francisco Casacuberta: On multimodal interactive machine translation using speech recognition. 129-136
- Alexandra Barchunova, Robert Haschke, Mathias Franzius, Helge J. Ritter: Multimodal segmentation of object manipulation sequences with product models. 137-144
- Akos Vetek, Saija Lemmelä: Could a dialog save your life?: analyzing the effects of speech interaction strategies while driving. 145-152
- Dan Bohus, Eric Horvitz: Decisions about turns in multiparty conversation: from perception to action. 153-160
- Alessandro Soro, Samuel Aldo Iacolina, Riccardo Scateni, Selene Uras: Evaluation of user gestures in multi-touch interaction: a case study in pair-programming. 161-168
- Louis-Philippe Morency, Rada Mihalcea, Payal Doshi: Towards multimodal sentiment analysis: harvesting opinions from the web. 169-176
- David Warnock, Marilyn Rose McGee-Lennon, Stephen A. Brewster: The impact of unwanted multimodal notifications. 177-184
- Natalie Ruiz, Ronnie Taib, Fang Chen: Freeform pen-input as evidence of cognitive load and expertise. 185-188
- Teemu Tuomas Ahmaniemi: Acquisition of dynamically revealed multimodal targets. 189-192
- Katri Salminen, Veikko Surakka, Jukka Raisamo, Jani Lylykangas, Johannes Pystynen, Roope Raisamo, Kalle Mäkelä, Teemu Tuomas Ahmaniemi: Emotional responses to thermal stimuli. 193-196
- Jesús González-Rubio, Daniel Ortiz-Martínez, Francisco Casacuberta: An active learning scenario for interactive machine translation. 197-200
- Nimrod Raiman, Hayley Hung, Gwenn Englebienne: Move, and I will tell you who you are: detecting deceptive roles in low-quality data. 201-204
- Jan-Philip Jarvis, Felix Putze, Dominic Heger, Tanja Schultz: Multimodal person independent recognition of workload related biosignal patterns. 205-208
- Verónica Romero, Alejandro Héctor Toselli Rossi, Enrique Vidal: Study of different interactive editing operations in an assisted transcription system. 209-212
- Igor Jauk, Ipke Wachsmuth, Petra Wagner: Dynamic perception-production oscillation model in human-machine communication. 213-216
- Martin Halvey, Graham A. Wilson, Yolanda Vazquez-Alvarez, Stephen A. Brewster, Stephen A. Hughes: The effect of clothing on thermal feedback perception. 217-220
- Sashikanth Damaraju, Andruid Kerne: Comparing multi-touch interaction techniques for manipulation of an abstract parameter space. 221-224
- Afshin Ameri Ekhtiarabadi, Batu Akan, Baran Çürüklü, Lars Asplund: A general framework for incremental processing of multimodal inputs. 225-228
Keynote address 2
- Marc O. Ernst: Learning in and from humans: recalibration makes (the) perfect sense. 229-230
Oral session 2: social interaction
- Hayley Hung, Ben J. A. Kröse: Detecting F-formations as dominant sets. 231-238
- Chreston A. Miller, Francis K. H. Quek: Toward multimodal situated analysis. 239-246
- Xavier Alameda-Pineda, Vasil Khalidov, Radu Horaud, Florence Forbes: Finding audio-visual events in informal social gatherings. 247-254
- Ligia Maria Batrinca, Nadia Mana, Bruno Lepri, Fabio Pianesi, Nicu Sebe: Please, tell me about yourself: automatic personality assessment using short self-presentations. 255-262
Oral session 3: gesture and touch
- Gilles Bailly, Dong-Bach Vo, Eric Lecolinet, Yves Guiard: Gesture-aware remote controls: guidelines and interaction technique. 263-270
- Radu-Daniel Vatavu: The effect of sampling rate on the performance of template-based gesture recognizers. 271-278
- Zahoor Zafrulla, Helene Brashear, Thad Starner, Harley Hamilton, Peter Presti: American sign language recognition with the kinect. 279-286
- Chi-Hsia Lai, Matti Niinimäki, Koray Tahiroglu, Johan Kildal, Teemu Tuomas Ahmaniemi: Perceived physicality in audio-enhanced force input. 287-294
Demo session and DSS poster session
- Silvia Gabrielli, Rosa Maimone, Michele Marchesoni, Jesús Muñoz: BeeParking: an ambient display to induce cooperative parking behavior. 295-298
- María José Castro Bleda, Salvador España Boquera, David Llorens, Andrés Marzal, Federico Prat, Juan Miguel Vilar, Francisco Zamora-Martínez: Speech interaction in a multimodal tool for handwritten text transcription. 299-302
- Daniel Sonntag, Marcus Liwicki, Markus Weber: Digital pen in mammography patient forms. 303-306
- Anirudh Sharma, Sriganesh Madhvanath, Ankit Shekhawat, Mark Billinghurst: MozArt: a multimodal interface for conceptual 3D modeling. 307-310
- Luis A. Leiva, Mauricio Villegas, Roberto Paredes: Query refinement suggestion in multimodal image retrieval with relevance feedback. 311-314
- Tomás Pérez-García, José Manuel Iñesta Quereda, Pedro J. Ponce de León, Antonio Pertusa: A multimodal music transcription prototype: first steps in an interactive prototype development. 315-318
- Kenji Mase, Kosuke Niwa, Takafumi Marutani: Socially assisted multi-view video viewer. 319-322
Special session 2: long-term socially perceptive and interactive robot companions: challenges and future perspectives
- Ruth Aylett, Ginevra Castellano, Bogdan Raducanu, Ana Paiva, Marc Hanheide: Long-term socially perceptive and interactive robot companions: challenges and future perspectives. 323-326
- Astrid M. von der Pütten, Nicole C. Krämer, Sabrina C. Eimler: Living with a robot companion: empirical study on the interaction with an artificial health advisor. 327-334
- Raquel Ros, Marco Nalin, Rachel Wood, Paul Baxter, Rosemarijn Looije, Yiannis Demiris, Tony Belpaeme, Alessio Giusti, Clara Pozzi: Child-robot interaction in the wild: advice to the aspiring experimenter. 335-342
- Emilie Delaherche, Mohamed Chetouani: Characterization of coordination in an imitation task: human evaluation and automatically computable cues. 343-350
Keynote address 3
- Matthias R. Mehl: The sounds of social life: observing humans in their natural habitat. 351-352
Oral session 4: ubiquitous interaction
- Trinh Minh Tri Do, Jan Blom, Daniel Gatica-Perez: Smartphone usage in the wild: a large-scale analysis of applications and context. 353-360
- Julie R. Williamson, Andrew Crossan, Stephen A. Brewster: Multimodal mobile interactions: usability studies in real world settings. 361-368
- Pierre-Alain Avouac, Philippe Lalanda, Laurence Nigay: Service-oriented autonomic multimodal interaction in a pervasive environment. 369-376
- Hannes Baumann, Thad Starner, Hendrik Iben, Anna Lewandowski, Patrick Zschaler: Evaluation of graphical user-interfaces for order picking using head-mounted displays. 377-384
Oral session 5: virtual and real worlds
- Gregor Mehlmann, Birgit Endrass, Elisabeth André: Modeling parallel state charts for multithreaded multimodal dialogues. 385-392
- David Vázquez, Antonio M. López, Daniel Ponsa, Javier Marín: Virtual worlds and active learning for human detection. 393-400
- Hung-Hsuan Huang, Naoya Baba, Yukiko I. Nakano: Making virtual conversational agent aware of the addressee of users' utterances in multi-user conversation using nonverbal information. 401-408
- Ellen C. Haas, Krishna S. Pillalamarri, Chris Stachowiak, Gardner McCullough: Temporal binding of multimodal controls for dynamic map displays: a systems approach. 409-416
