Papers by Christophe Jouffrais

Background: The shading of an object provides an important cue for recognition, especially for determining its 3D shape. However, the neuronal mechanisms that allow the recovery of 3D shape from shading are poorly understood. The aim of our study was to determine the neuronal basis of 3D-shape-from-shading coding in area V4 of the awake macaque monkey. Results: We recorded the responses of V4 cells to stimuli presented parafoveally while the monkeys fixated a central spot. We used a set of stimuli made of 8 different 3D shapes illuminated from 4 directions (from above, the left, the right and below) and different 2D controls for each stimulus. The results show that V4 neurons present a broad selectivity to 3D shape and illumination direction, but without a preference for a unique illumination direction. However, 3D shape and illumination direction selectivities are correlated, suggesting that V4 neurons can use the direction of illumination present in complex patterns of shading present ...
In this paper we describe a brainstorming session with visually impaired users, a sighted locomotion trainer, as well as sighted and blind researchers. This brainstorming session was part of a larger project on designing accessible guidance systems for visually impaired people. In this session we specifically addressed the design of an accessible route calculation tool. In a method story, we describe how this session took place and report our insights from this experience on adapting brainstorming to a non-visual world.

2010 3rd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, 2010
This paper deals with the modeling of the activity of premotor neurons associated with the execution of a visually guided reaching movement in primates. We address this question from a robotics point of view, by considering a simplified kinematic model of the head, eye and arm joints. Using the formalism of visual servoing, we show that the hand controller depends on the direction of the head and the eye as soon as the hand-target difference vector is expressed in eye-centered coordinates. Based on this result, we propose a new interpretation of previous electrophysiological recordings in monkeys, showing the existence of a gaze-related modulation of the activity of premotor neurons during reaching. This approach sheds new light on this phenomenon, which has so far not been clearly understood.
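
As a rough illustration of the coordinate-frame argument (not the paper's actual model, which uses a full head-eye-arm kinematic chain and visual servoing), re-expressing the hand-target difference vector in an eye-centered frame makes the resulting command depend on head and eye orientation. The 2D frames, angles and positions below are assumptions chosen for clarity.

```python
import numpy as np

def rot(angle_rad):
    """2D rotation matrix (illustrative; the paper uses a full 3D kinematic model)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s], [s, c]])

# Hypothetical positions in a body-fixed frame (metres).
target_body = np.array([0.40, 0.20])
hand_body   = np.array([0.30, -0.10])

# Hypothetical head and eye orientations (radians).
head_angle = np.deg2rad(10.0)
eye_angle  = np.deg2rad(-5.0)

# Hand-target difference vector expressed in eye-centered coordinates:
# applying the inverse head and eye rotations makes the resulting command
# depend on gaze direction, not only on the hand and target positions.
diff_body = target_body - hand_body
diff_eye = rot(-eye_angle) @ rot(-head_angle) @ diff_body

print(diff_body, diff_eye)  # same physical vector, different coordinates
```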

Proceedings of the 10th international ACM SIGACCESS conference on Computers and accessibility - Assets '08, 2008
There are many electronic devices for the visually impaired but few actually get used on a daily basis. This is due in part to the fact that many devices fail to address the real needs of their users. In this study, we begin with a review of the existing literature, followed by a survey of 54 blind people, which suggests that one function would be especially useful in a new device: the ability to localize objects. We have tested the possibility of using a sound rendering system to indicate a particular spatial location, and propose to couple this with a biologically inspired image processing system that can locate visual patterns that correspond to particular objects and places. We believe that such a system can address one of the major problems faced by the visually impaired, namely their difficulty in localizing specific objects.
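
The abstract does not detail the sound rendering itself; purely as an illustrative sketch of how a detected object location could be turned into a spatial auditory cue, the mapping below converts an azimuth into crude interaural time and level differences (a real system would typically use binaural/HRTF rendering). All constants and function names are assumptions.

```python
import numpy as np

def azimuth_to_stereo_cues(azimuth_deg, fs=44100, max_itd_s=0.0007, max_ild_db=10.0):
    """Map a source azimuth (-90 left .. +90 right) to crude interaural cues.

    Returns (delay_samples_left, delay_samples_right, gain_left, gain_right).
    The constants are ballpark values, not those of the actual system.
    """
    x = np.sin(np.deg2rad(azimuth_deg))          # -1 (far left) .. +1 (far right)
    itd = max_itd_s * x                          # positive: right ear leads
    delay_l = int(round(max(0.0,  itd) * fs))    # delay the ear farther from the source
    delay_r = int(round(max(0.0, -itd) * fs))
    ild_db = max_ild_db * x
    gain_l = 10 ** (-max(0.0,  ild_db) / 20.0)   # attenuate the far ear
    gain_r = 10 ** (-max(0.0, -ild_db) / 20.0)
    return delay_l, delay_r, gain_l, gain_r

print(azimuth_to_stereo_cues(30.0))  # object detected 30 degrees to the right
```
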
Proceedings of the 22nd Conference on l'Interaction Homme-Machine, 2010
In this article we present an assistive system for tetraplegic people. To carry out this project, we followed a participatory approach with patients and occupational therapists. Based on the different prototypes tested, we defined a modular and highly reconfigurable architecture, so that the system can adapt to patients' needs and abilities and acquire usage data for practitioners. This system is currently being deployed in the physical medicine and rehabilitation department of the Toulouse University Hospital (CHU de Toulouse).

Frontiers in Neuroscience, 2014
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in the reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions consistently overestimated toward more lateral positions, while no significant distance perception was observed, indicating a deficiency of the binaural rendering condition relative to the real-source condition. Various potential reasons for this discrepancy are discussed, with several proposals for improving distance perception in peripersonal virtual environments.

The number of visually impaired people worldwide is estimated at 285 million. Analysis of the activity limitations and participation restrictions of these people shows that new interactive technologies can provide relevant answers for improving their autonomy. Over the past decade, we have launched a set of fundamental and applied research projects aimed at designing assistive devices for visually impaired people. The domains concerned are orientation and mobility, document accessibility (geographic maps), accessibility of mobile devices (smartphones and tablets), and the simulation of prosthetic vision (retinal or cortical implants). In order to work sustainably on the design of new assistive technologies with visually impaired people, as well as with specialized trainers and teachers, we created a joint research laboratory with a specialized education center for visually impaired people (Centre d'Education Spécialisé pour Déficients Visuels).

Considerable improvements in speech synthesis and recognition technology have made it possible to create reading and writing interfaces for people with dyslexia and dysorthographia, the spoken channel allowing them to work around their impairment. The assistive tools we propose rely on a different principle: facilitating reading by segmenting the text in various ways. Since the software can flexibly highlight, on demand, different units such as syllables or phonemes, the choice made by the dyslexic reader depends on his or her specific difficulties. In general, as shown by recordings of eye movements during reading, typical readers display highly structured eye movements (Aghababian & Nazir, 2000), whereas dyslexic readers fixate each letter and frequently regress (McKeben et al., 2004), which seems to reflect a difficulty in structuring the sequence of letters that constitutes writing. In the scientific literature, most authors distinguish between phonological disorders (deficits in the categorical perception of speech sounds would make it difficult to associate phonemes and graphemes unambiguously) and visuo-attentional disorders (size and stability of the attentional span, difficulty engaging and disengaging attention as quickly as a typical reader) (for a review, see the INSERM collective expert report, "Dyslexie, Dysorthographie, Dyscalculie", 2007). This distinction led us to propose two different segmentations, one based on the phonological decomposition of words (phonemes) and the other on the morphological decomposition of words (syllables). In order to experimentally evaluate the effect of segmented word presentation on reading, a study is under way; it compares reading aloud of isolated words by typical readers and by dyslexic readers. Recording the accuracy and latency of responses, together with observing the eye movements of both groups of readers under different conditions, should make it possible to assess the effect of the different word display modes.
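
The display tool itself is not specified at the implementation level; a minimal sketch of the "salient units on demand" principle is given below, assuming the syllable or phoneme segmentation is already available (e.g., from a pronunciation lexicon). The function name, markup, and example splits are illustrative only.

```python
# Minimal sketch of the segmented-display idea: given a word whose syllable
# (or phoneme/grapheme) boundaries are already known -- the lexicon providing
# them is assumed, not described in the paper -- render the word with
# alternating emphasis so each unit stands out visually.

def highlight_units(units, mark_open="<b>", mark_close="</b>"):
    """Alternately wrap segmentation units (syllables or phonemes) in markup."""
    out = []
    for i, unit in enumerate(units):
        out.append(f"{mark_open}{unit}{mark_close}" if i % 2 == 0 else unit)
    return "".join(out)

# Example: the French word "ordinateur" split into syllables (hand-coded here).
print(highlight_units(["or", "di", "na", "teur"]))  # <b>or</b>di<b>na</b>teur
print(highlight_units(["m", "ai", "s", "on"]))      # illustrative grapheme-level split
```
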
2011 4th IFIP International Conference on New Technologies, Mobility and Security, 2011
CHI '12 Extended Abstracts on Human Factors in Computing Systems, 2012

Lecture Notes in Computer Science, 2014
As touch screens become widespread, making them more accessible to visually impaired people is an important task. Touch displays offer poor accessibility to visually impaired people. One possibility for making them more accessible without sight is gestural interaction. Yet there are still few studies on gestural interaction for visually impaired people. In this paper we present a comprehensive summary of existing projects investigating accessible gestural interaction. We also highlight the limits of current approaches and propose directions for future work. Then, we present the design of an interactive map prototype that includes both a raised-line map overlay and gestural interaction for accessing different types of information (e.g., opening hours, distances). Preliminary results of our project show that basic gestural interaction techniques can be successfully used in interactive maps for visually impaired people.

Proceedings of the 22nd Conference on l'Interaction Homme-Machine, 2010
Participatory design is a process for designing interactive systems that involves users throughout the development process. However, it presupposes that users have all of their physical abilities, in particular vision. The methods and tools used are often not suited to visually impaired people. In this article we present a participatory design approach that includes blind people. We start from an analysis of the problem, highlighting the accessibility limits of existing design methods and the specific characteristics of blind users that must be taken into account. We then present how we adapted these methods to include visually impaired users in the participatory design cycle of our project. Finally, we conclude with recommendations and proposals for future work.
ACM International Conference on Interactive Tabletops and Surfaces - ITS '10, 2010
Multimodal interactive maps are a solution for providing the blind with access to geographic information. Current projects use a tactile map set down on a monotouch display with additional sound output. In our current project we investigate the use of multitouch displays for this purpose. In this paper, we outline our requirements concerning an appropriate multitouch tactile device and we present a first prototype. We conclude with propositions for future work.

Lecture Notes in Computer Science, 2012
Multimodal interactive maps are a solution for presenting spatial information to visually impaired people. In this paper, we present an interactive multimodal map prototype that is based on a tactile paper map, a multi-touch screen and audio output. We first describe the different steps for designing an interactive map: drawing and printing the tactile paper map, choice of multitouch technology, interaction techniques and the software architecture. Then we describe the method used to assess user satisfaction. We provide data showing that an interactive map, although based on a single, elementary double-tap interaction, has been met with a high level of user satisfaction. Interestingly, satisfaction is independent of a user's age, previous visual experience or Braille experience. This prototype will be used as a platform to design advanced interactions for spatial learning.
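
A minimal sketch of the double-tap interaction described above is given below; the timing and distance thresholds, map regions, and speech call are placeholders rather than the prototype's actual implementation (which combines a raised-line overlay, a multi-touch screen and audio output).

```python
import time

DOUBLE_TAP_MAX_INTERVAL = 0.4   # seconds; threshold is an assumption
DOUBLE_TAP_MAX_DISTANCE = 30    # pixels; threshold is an assumption

# Hypothetical map elements: axis-aligned boxes with a spoken description.
map_regions = [
    {"name": "town hall", "box": (100, 100, 200, 180)},
    {"name": "train station", "box": (320, 240, 420, 300)},
]

_last_tap = None  # (timestamp, x, y)

def speak(text):
    print(f"[TTS] {text}")   # stand-in for the text-to-speech output

def on_tap(x, y):
    """Call on every tap; announces the map region under a double tap."""
    global _last_tap
    now = time.time()
    if _last_tap is not None:
        t0, x0, y0 = _last_tap
        close_in_time = (now - t0) <= DOUBLE_TAP_MAX_INTERVAL
        close_in_space = abs(x - x0) <= DOUBLE_TAP_MAX_DISTANCE and abs(y - y0) <= DOUBLE_TAP_MAX_DISTANCE
        if close_in_time and close_in_space:
            for region in map_regions:
                x1, y1, x2, y2 = region["box"]
                if x1 <= x <= x2 and y1 <= y <= y2:
                    speak(region["name"])
                    break
            _last_tap = None
            return
    _last_tap = (now, x, y)

on_tap(150, 140)
on_tap(152, 141)   # second tap shortly after -> announces "town hall"
```
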
Human–Computer Interaction, 2014
Interactions, 2013
Demo Hour highlights new prototypes and projects that exemplify innovation and novel forms of interaction. Leah Maestri, Editor

Virtual Reality, 2012
Finding one's way to an unknown destination, navigating complex routes, finding inanimate objects; these are all tasks that can be challenging for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed towards increasing the autonomy of visually impaired users in known and unknown environments, exterior and interior, large scale and small scale, through a combination of a Global Navigation Satellite System (GNSS) and rapid visual recognition with which the precise position of the user can be determined. Relying on geographical databases and visually identified objects, the user is guided to his or her desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of the new type of detection and localization device are presented in relation to guidance directives developed through participative design with potential users and educators for the visually impaired. A fundamental concept in this project is the belief that this type of assistive device is able to solve one of the major problems faced by the visually impaired: their difficulty in localizing specific objects.
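
As a simplified illustration of the head-centered guidance principle (the actual NAVIG system fuses GNSS and visual recognition and uses full 3D spatialized audio), the sketch below re-expresses the destination's bearing relative to the user's head heading on a local planar approximation; the function names, coordinates and flat-Earth assumption are illustrative only.

```python
import math

def head_relative_bearing(user_xy, head_heading_deg, target_xy):
    """Bearing of the target relative to the user's head, in degrees.

    0 = straight ahead, positive = to the right. Uses a local planar
    (x east, y north) approximation rather than full geodetic coordinates.
    """
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    bearing_world = math.degrees(math.atan2(dx, dy))           # 0 = north, clockwise
    rel = (bearing_world - head_heading_deg + 180.0) % 360.0 - 180.0
    return rel

# User at the origin, head turned 30 degrees east of north, target 20 m north-east:
# the sound source for guidance would be rendered about 15 degrees to the right.
print(head_relative_bearing((0.0, 0.0), 30.0, (20.0, 20.0)))
```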

Psychological Research, 2011
It has been assumed (Lederman et al. 1990, Perception & Psychophysics) that a visual imagery process is involved in the haptic identification of raised-line drawings of common objects. The finding of significant correlations between visual imagery ability and performance on picture-naming tasks was taken as experimental evidence in support of this assumption. However, visual imagery measures came from self-report procedures, which can be unreliable. The present study therefore used an objective measure of visuospatial imagery abilities in sighted participants and compared three groups of high, medium and low visuospatial imagers on their accuracy and response times in identifying raised-line drawings by touch. Results revealed between-group differences on accuracy, with high visuospatial imagers outperforming low visuospatial imagers, but not on response times. These findings lend support to the view that visuospatial imagery plays a role in the identification of raised-line drawings by sighted adults.
The Journal of the Acoustical Society of America, 2008