2005
Pointing and Describing in Referential Communication
Adrian Bangerter ([email protected]), Eric Chevalley ([email protected])
Groupe de Psychologie Appliquée, University of Neuchâtel, Fbg. de l'Hôpital 106, 2000 Neuchâtel, Switzerland

Introduction
It is unclear what role pointing gestures play in referential communication. They may serve to identify referents of deictic expressions (e.g., when uttering John is right here and pointing, the pointing gesture identifies the referent of right here). But pointing may also focus addressee attention on a sub-region of shared visual space, thereby facilitating concurrent descriptions (Bang...

Two types of gesture were coded: elbow resting on table (Type 1) and elbow raised (Type 2). Type 1 gestures involve little movement from a resting position and may be automatic. Type 2 gestures involve extensive arm and hand movement and are thus probably intended to communicate. All variables had high inter-rater agreement (all κs > .71).
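The κ values reported above are Cohen's kappa, which corrects raw inter-rater agreement for agreement expected by chance. A minimal sketch of the computation in Python, using invented codings of the two gesture types by two hypothetical raters (the numbers are illustrative, not from the study):

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings: 1 = elbow resting (Type 1), 2 = elbow raised (Type 2).
a = [1, 1, 2, 2, 1, 2, 1, 1, 2, 2]
b = [1, 1, 2, 2, 1, 2, 1, 2, 2, 2]
print(cohens_kappa(a, b))  # 0.8 for these invented data; the study reports all kappas > .71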
Jarmołowicz-Nowikow, E. (2012). Are pointing gestures induced by communicative intention? In the LNCS Springer volume "Behavioural Cognitive Systems".
The aim of the paper is to present some ideas and observations on the communicative intentionality of pointing gestures. The material under study consists of twenty "origami" dialogue tasks, half of them recorded in the mutual visibility (MV) condition and half in the lack of visibility (LV) condition. Two participants took part in each dialogue session: an Instruction Giver (IG) and an Instruction Follower (IF). The analysis focuses on selected features of pointing gestures as well as on the semantic aspects of the verbal expressions realised concurrently with them: the semantic content of these expressions, the preceding context of the utterances, the place in gesture space where the pointing gestures are realised, and the spatial perspective of their realisation are taken into consideration as potential cues of communicative intention.
Journal of Pragmatics, 2007
This article presents a study of a set of pointing gestures produced together with speech in a corpus of video-recorded "locality description" interviews in rural Laos. In a restricted set of the observed gestures (we did not consider gestures with special hand shapes, gestures with arc/tracing motion, or gestures directed at referents within physical reach), two basic formal types of pointing gesture are observed: B-points (large movement, full arm, eye gaze often aligned) and S-points (small movement, hand only, casual articulation). Taking the approach that speech and gesture are structurally integrated in composite utterances, we observe that these types of pointing gesture have distinct pragmatic functions at the utterance level. One type of gesture (usually "big" in form) carries primary, informationally foregrounded information (for saying "where" or "which one"). Infants perform this type of gesture long before they can talk. The second type of gesture (usually "small" in form) carries secondary, informationally backgrounded information which responds to a possible but uncertain lack of referential common ground. We propose that the packaging of the extra locational information into a casual gesture is a way of adding extra information to an utterance without it being on-record that the added information was necessary. This is motivated by the conflict between two general imperatives of communication in social interaction: a social-affiliational imperative not to provide more information than necessary ("Don't over-tell"), and an informational imperative not to provide less information than necessary ("Don't under-tell").
"In the present paper, an attempt is made to track some potential cues to intentionality that may accompany pointing gestures in task-oriented dialogues. Twenty “origami” dialogues are analysed, half of them in the “mutual visibility” (MV) and half in the “lack of mutual visibility” (LV) condition. Pointing gestures are extracted and annotated for their internal structure. Accompanying gaze directions are tagged and the verbal content of utterances is transcribed orthographically. A modification to the description of gestural phrase structure for pointing gestures is proposed. It has been found that in mutual visibility condition, speakers tend to gesture in the location which is potentially more visible to their conversational partners and that the pointing holds they perform are substantially longer than in the case of lack of mutual visibility. However, in many cases, the gesturer did not care to track the behaviour of his/her conversational partner in order to make sure whether his/her gesture was noticed. Our analyses of more complex dialogue structures do not show any clear tendencies but allow for hypothesizing that not all pointings are driven by communicative intentions. Some of them seem to be performed with only little or no attention directed to the addressee."
In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer's communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
2011
In the present paper, an attempt is made to track some potential cues to intentionality that may accompany pointing gestures in task-oriented dialogues. Twenty "origami" dialogues are analysed, half of them in the "mutual visibility" (MV) and half in the "lack of mutual visibility" (LV) condition. Pointing gestures are extracted and annotated for their internal structure. Accompanying gaze directions are tagged and the verbal content of utterances is transcribed orthographically. A modification to the description of gestural phrase structure for pointing gestures is proposed. It has been found that in the mutual visibility condition, speakers tend to gesture in the location which is potentially more visible to their conversational partners, and that the pointing holds they perform are substantially longer than under lack of mutual visibility. However, in many cases the gesturer did not track the behaviour of his/her conversational partner to check whether the gesture was noticed. Our analyses of more complex dialogue structures do not show any clear tendencies but allow for the hypothesis that not all pointings are driven by communicative intentions. Some of them seem to be performed with little or no attention directed to the addressee.
2012
Current semantic theory on indexical expressions claims that demonstratively used indexicals such as this lack a referent-determining meaning and instead rely on an accompanying demonstration act such as a pointing gesture. While this view makes it possible to set up a sound logic of demonstratives, the direct-referential role assigned to pointing gestures has never been scrutinized thoroughly in semantics or pragmatics. We investigate the semantics and pragmatics of co-verbal pointing from a foundational perspective, combining experiments, statistical investigation, computer simulation, and theoretical modeling techniques in a novel manner. We evaluate various referential hypotheses with a corpus of object identification games set up in experiments in which body-movement tracking techniques were used extensively to generate precise pointing measurements. Statistical investigation and computer simulations show that especially distal areas in the pointing domain falsify the semantic direct-referential hypotheses concerning pointing gestures. As an alternative, we propose that reference involving pointing rests on a default inference, which we specify using the empirical data. These results raise numerous problems for classical semantics-pragmatics interfaces: we argue for a pre-semantic pragmatics in order to account for inferential reference, in addition to classical post-semantic Gricean pragmatics.
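The contrast between the two readings can be made concrete. Under a strict direct-referential reading, the referent is whatever the extrapolated pointing ray hits exactly; under a default-inference reading, the referent is inferred from the gesture plus context. A minimal Python sketch of one such inference, assuming (purely for illustration, not as the paper's actual model) that the referent defaults to the candidate object closest in direction to the pointing ray, with invented coordinates:

import math

def angle_to_ray(origin, direction, obj):
    """Angular deviation (radians) between the pointing ray and an object."""
    ox, oy = obj[0] - origin[0], obj[1] - origin[1]
    dot = ox * direction[0] + oy * direction[1]
    norm = math.hypot(ox, oy) * math.hypot(*direction)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

# Invented scene: pointing origin, pointing direction, and candidate objects.
origin = (0.0, 0.0)
direction = (1.0, 0.2)
objects = {"cup": (2.0, 0.1), "book": (2.0, 1.5), "pen": (1.0, 0.5)}

# A direct-referential reading would demand an exact hit; the default-inference
# reading picks the candidate that minimizes deviation from the ray.
referent = min(objects, key=lambda name: angle_to_ray(origin, direction, objects[name]))
print(referent)  # "cup": closest in direction, even though the ray misses it exactly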
In both signed and spoken languages, pointing occurs to direct the addressee's attention to the entity one is talking about. These entities may be present or absent in the physical context of the conversation. In this paper we focus on pointing directed to non-speaker/non-addressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal, NGT) and spoken Dutch. Our main goal is to show that the semantic-pragmatic function of pointing signs and pointing gestures might be very different. The distinction will be characterized in terms of anchoring and identifying. While pointing signs can have both functions, pointing gestures appear to lack the anchoring option.
Psychological Research, 2016
In everyday communication, people often point. However, a pointing act is often misinterpreted as indicating a different spatial referent position than the one intended by the pointer. It has been suggested that this happens because pointers put the tip of the index finger close to the line joining the eye to the referent, whereas the person interpreting the pointing act extrapolates the vector defined by the arm and index finger. As this vector crosses the eye-referent line, it suggests a different referent position than the one that was meant. In this paper, we test this hypothesis by manipulating the geometry underlying the production and interpretation of pointing gestures. In Experiment 1, we compared naïve pointer-observer dyads with dyads in which the discrepancy between the vectors defining the production and interpretation of pointing acts had been reduced. As predicted, this reduced pointer-observer misunderstandings compared to the naïve control group. In Experiment 2, we tested whether pointers elevate their arms more steeply than necessary to orient them toward the referent, because they visually steer their index fingertips onto the referents in their visual field. Misunderstandings between pointers and observers were smaller when pointers pointed without visual feedback. In sum, the results support the hypothesis that misunderstandings between (naïve) pointers and observers result from different spatial rules describing the production and interpretation of pointing gestures. Furthermore, we suggest that instructions that reduce the discrepancy between these spatial rules can improve communication with pointing gestures.
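The hypothesized geometry can be written down directly: the pointer places the fingertip on the eye-referent line, while the observer extrapolates the shoulder-fingertip vector. A minimal Python sketch in a 2D side view, with all positions invented for illustration:

def extrapolate_to_wall(p0, p1, wall_x):
    """Extend the ray from p0 through p1 until it reaches x = wall_x."""
    t = (wall_x - p0[0]) / (p1[0] - p0[0])
    return (wall_x, p0[1] + t * (p1[1] - p0[1]))

# Invented 2D side view (x = distance, y = height), in meters.
eye = (0.0, 1.6)
shoulder = (0.0, 1.4)
referent = (4.0, 1.0)  # target on a wall 4 m away

# Production rule: the pointer puts the fingertip on the eye-referent line,
# at roughly arm's length (0.7 m out).
t = 0.7 / 4.0
fingertip = (0.7, eye[1] + t * (referent[1] - eye[1]))

# Interpretation rule: the observer extrapolates the shoulder-fingertip vector.
seen = extrapolate_to_wall(shoulder, fingertip, wall_x=4.0)
print(fingertip, seen)  # the inferred referent (y ~ 1.94) lies well above the intended one (y = 1.0)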
2016
Language and gesture are thought to be tightly interrelated and co-expressive behaviours (McNeill, 1992; 2005) that, when used in communication, are often referred to as composite signals/utterances (Clark, 1996; Enfield, 2009). Linguistic research has typically focussed on the structure of language, largely ignoring the effect gesture can have on the production and comprehension of utterances. In the linguistic literature, gesture is shoehorned into the communicative process rather than being an integral part of it (Wilson and Wharton, 2006; Wharton, 2009), which is at odds with the fact that gesture regularly plays a role that is directly connected to the semantic content of, in Gricean terms, “what is said” (Kendon, 2004; Grice, 1989). In order to explore these issues, this thesis investigates the effect of manual gestures on interaction at several different points during production and comprehension, based on the Clarkian Action Ladder (Clark, 1996). It focusses on the top two l...
Pointing gestures are pervasive in human referring actions, and are often combined with spoken descriptions. Combining gesture and speech naturally to refer to objects is an essential task in multimodal NLG systems. However, the way gesture and speech should be combined in a referring act remains an open question. In particular, it is not clear whether, in planning a pointing gesture in conjunction with a description, an NLG system should seek to minimise the redundancy between them, e.g. by letting the pointing gesture indicate locative information, with other, nonlocative properties of a referent included in the description. This question has a bearing on whether the gestural and spoken parts of referring acts are planned separately or arise from a common underlying computational mechanism. This paper investigates this question empirically, using machine-learning techniques on a new corpus of dialogues involving multimodal references to objects. Our results indicate that human pointing strategies interact with descriptive strategies. In particular, pointing gestures are strongly associated with the use of locative features in referring expressions.
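A minimal sketch of the kind of learning setup this describes, assuming scikit-learn and a toy feature encoding (whether the spoken description uses locative features); the corpus features, labels, and learner in the actual study may well differ:

from sklearn.linear_model import LogisticRegression

# Invented training data: each row encodes a referring act as
# [uses_locative_feature, n_nonlocative_properties, referent_distance_bin].
X = [
    [1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1],  # locative descriptions
    [0, 2, 0], [0, 3, 1], [0, 2, 1], [0, 3, 0],  # purely property-based ones
]
# Label: did the speaker also produce a pointing gesture?
y = [1, 1, 1, 0, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)
# A positive weight on the first feature would reflect the reported association
# between pointing gestures and locative features in the description.
print(model.coef_)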
Frontiers in Communication, 2021
When people speak or sign, they not only describe using words but also depict and indicate. How are these different methods of communication integrated? Here, we focus on pointing and, in particular, on commonalities and differences in how pointing is integrated into language by speakers and signers. One aspect of this integration is semantic: how pointing is integrated with the meaning conveyed by the surrounding language. Another aspect is structural: how pointing as a manual signal is integrated with other signals, vocal in speech or manual in sign. We investigated both of these aspects of integration in a novel pointing elicitation task. Participants viewed brief live-action scenarios and then responded to questions about the locations and objects involved. The questions were designed to elicit utterances in which pointing would serve different semantic functions, sometimes bearing the full load of reference ('load-bearing points') and other times sharing this load with lexical resources ('load-sharing points'). The elicited utterances also provided an opportunity to investigate issues of structural integration. We found that, in both speakers and signers, pointing was produced with greater arm extension when it was load-bearing, reflecting a common principle of semantic integration. However, the duration of the points patterned differently in the two groups. Speakers' points tended to span across words (or even bridge over adjacent utterances), whereas signers' points tended to slot in between lexical signs. Speakers and signers thus integrate pointing into language according to common principles, but in a way that reflects the differing structural constraints of their language. These results shed light on how language users integrate gradient, less conventionalized elements with those elements that have been the traditional focus of linguistic inquiry.
Frontiers in Psychology, 2015
Pointing toward concrete objects is a well-known and efficient communicative strategy. Much less is known about the communicative effectiveness of abstract pointing, where the pointing gestures are directed to "empty space." McNeill's (2003) observations suggest that abstract pointing can be used to establish referents in gesture space without the referents being physically present. Recently, however, it has been shown that abstract pointing typically provides information redundant with the uttered speech, suggesting a very limited communicative value (So et al., 2009). As a first approach to tackling this issue, we wanted to know whether perceivers are sensitive to this gesture cue at all or whether it is completely discarded as irrelevant add-on information. Sensitivity to, for instance, a gesture-speech mismatch would suggest a potential communicative function of abstract pointing. Therefore, we devised a mismatch paradigm in which participants watched a vi...
2015
In dialogue, repeated references contain fewer words (which are also acoustically reduced) and fewer gestures than initial ones. In this paper, we describe three experiments studying to what extent gesture reduction is comparable to other forms of linguistic reduction. Since previous studies showed conflicting findings for gesture rate, we systematically compare two measures of gesture rate: gesture rate per word and per semantic attribute (Experiment I). In addition, we ask whether repetition impacts the form of gestures, by manual annotation of a number of features (Experiment I), by studying gradient differences using a judgment test (Experiment II), and by investigating how effective initial and repeated gestures are at communicating information (Experiment III). The results revealed no reduction in terms of gesture rate per word, but a U-shaped reduction pattern for gesture rate per attribute. Gesture annotation showed no reliable effects of repetition on gesture form, yet participants judged gestures from repeated references as less precise than those from initial ones. Despite this gradient reduction, gestures from initial and repeated references were equally successful in communicating information. Besides effects of repetition, we found systematic effects of visibility on gesture production, with more, longer, larger and more communicative gestures when participants could see each other. We discuss the implications of our findings for gesture research and for models of speech and gesture production.
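Because words and semantic attributes shrink at different rates across repeated references, the two normalizations can point in opposite directions. A toy Python illustration with invented counts:

# Invented counts for an initial and a repeated reference to the same object.
references = {
    "initial":  {"gestures": 4, "words": 20, "attributes": 5},
    "repeated": {"gestures": 2, "words": 8,  "attributes": 4},
}

for name, r in references.items():
    per_word = r["gestures"] / r["words"]
    per_attribute = r["gestures"] / r["attributes"]
    print(name, round(per_word, 2), round(per_attribute, 2))

# initial:  0.20 gestures/word, 0.80 gestures/attribute
# repeated: 0.25 gestures/word, 0.50 gestures/attribute
# Fewer gestures overall, yet the per-word rate rises while the
# per-attribute rate falls: the two measures can tell different stories.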
A fundamental property of language is that it allows us to establish triadic joint attention to a referent, for instance by the use of spatial demonstratives. Traditional accounts of demonstrative choice focused on the physical proximity of the referent to the interlocutors. However, recent work taking into account the multimodal context in which spatial demonstrative use is generally embedded shows that such accounts are too simplistic. Using a controlled elicitation task, we here tested the differential roles of visual joint attention, physical proximity of a referent, and use of a pointing gesture in demonstrative choice in Dutch. It was found that 'proximal' demonstratives were used in a speaker-anchored way to refer to objects nearby the speaker. 'Distal' demonstratives were used for referents not nearby the speaker, but also in an addressee-anchored way, i.e. when the referent was in the addressee's focus of visual attention. Pointing gestures were closely ...
We investigated the use of iconic and deictic gestures during the communication of spatial information. Expert structural geologists were asked to explain one portion of a geologic map. Spatial gestures used in each expert’s response were coded as deictic (indicating an object in the conversational space), iconic (depicting an aspect of an object or event), or both deictic and iconic (indicating an object in the conversational space by depicting an aspect of that object). Speech paired with each gesture was coded for whether or not it referred to complex spatial properties (e.g. shape and orientation of an object). Results indicated that when communicating spatial information, people occasionally use gestures that are both deictic and iconic, and that these gestures tend to occur when complex spatial information is not provided in speech. These results suggest that existing classifications of gesture are not exclusive, especially for spatial discourse.
PLoS One, 2011
Communicative pointing is a human-specific gesture which allows sharing information about a visual item with another person. It sets up a three-way relationship between a subject who points, an addressee, and an object. Yet psychophysical and neuroimaging studies have focused on non-communicative pointing, which implies a two-way relationship between a subject and an object without the involvement of an addressee, and makes such a gesture comparable to touching or grasping. Thus, experimental data on the communicative function of pointing remain scarce. Here, we examine whether the communicative value of pointing modifies both its behavioral and neural correlates by comparing pointing with or without communication. We found that when healthy participants pointed repeatedly at the same object, the communicative interaction with an addressee induced a spatial reshaping of both the pointing trajectories and the endpoint variability. Our finding supports the hypothesis that a change in reference frame occurs when pointing conveys a communicative intention. In addition, measurement of regional cerebral blood flow using H₂¹⁵O PET showed that pointing when communicating with an addressee activated the right posterior superior temporal sulcus and the right medial prefrontal cortex, in contrast to pointing without communication. Such a right-hemisphere network suggests that the communicative value of pointing is related to processes involved in taking another person's perspective. This study brings to light the need for future studies on communicative pointing and its neural correlates by unraveling the three-way relationship between subject, object, and addressee.
Cognitive Science
A long-standing debate in the study of human communication centers on the degree to which communicators tune their communicative signals (e.g., speech, gestures) for specific addressees, as opposed to taking a neutral or egocentric perspective. This tuning, called recipient design, is known to occur under special conditions (e.g., when errors in communication need to be corrected), but several researchers have argued that it is not an intrinsic feature of human communication, because that would be computationally too demanding. In this study, we contribute to this debate by studying a simple communicative behavior, communicative pointing, under conditions of successful (error-free) communication. Using an information-theoretic measure, called legibility, we present evidence of recipient design in communicative pointing. The legibility effect is present early in the movement, suggesting that it is an intrinsic part of the communicative plan. Moreover, it is reliable only from the viewpoint of the addressee, suggesting that the motor plan is tuned to the addressee. These findings suggest that recipient design is an intrinsic feature of human communication.
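"Legibility" quantifies how early an observer can infer the intended target from the unfolding movement. A minimal Python sketch of one common formalization (goal inference from partial trajectories, assumed here for illustration; it is not necessarily the exact measure used in the study):

import math

def goal_posterior(partial, start, goals):
    """P(goal | partial trajectory), assuming movement toward a goal is efficient."""
    tip = partial[-1]
    scores = []
    for g in goals:
        # Cost so far plus cost-to-go, compared with the direct-path cost.
        so_far = sum(math.dist(a, b) for a, b in zip(partial, partial[1:]))
        scores.append(math.exp(-(so_far + math.dist(tip, g)) + math.dist(start, g)))
    total = sum(scores)
    return [s / total for s in scores]

# Invented 2D pointing movement toward goal A, with B as a competitor.
start, goal_a, goal_b = (0.0, 0.0), (1.0, 1.0), (1.0, -1.0)
trajectory = [(0.0, 0.0), (0.3, 0.35), (0.6, 0.65)]
print(goal_posterior(trajectory, start, [goal_a, goal_b]))
# A legible movement drives P(true goal) high early in the trajectory.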
Frontiers in Psychology, 2019
Production studies show that anaphoric reference is bimodal. Speakers can introduce a referent in speech by also using a localizing gesture, assigning a specific locus in space to it. Referring back to that referent, speakers then often accompany a spoken anaphor with a localizing anaphoric gesture (i.e., indicating the same locus). Speakers thus create visual anaphoricity in parallel to the anaphoric process in speech. In the current perception study, we examine whether addressees are sensitive to localizing anaphoric gestures and specifically to the (mis)match between recurrent use of space and spoken anaphora. The results of two reaction time experiments show that, when a single referent is gesturally tracked, addressees are sensitive to the presence of localizing gestures, but not to their spatial congruence. Addressees thus seem to integrate gestural information when processing bimodal anaphora, but their use of locational information in gestures is not obligatory in every discourse context.
Psychonomic Bulletin & Review, 2020