2013, Discourse Processes
The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on representations of animate entities in German narrative discourse. We found that gestural viewpoint systematically varies depending on discourse context: speakers predominantly use character viewpoint in maintained contexts and observer viewpoint in reintroduced contexts. Thus, gestural viewpoint seems to function as a cohesive device in narrative discourse. The findings expand on and provide further evidence for the coordination between speech and gesture at the discourse level, which is crucial to understanding the tight link between the two modalities.
We examine a corpus of narrative data to determine which types of events evoke character viewpoint gestures and which evoke observer viewpoint gestures. We consider early claims that character viewpoint tends to occur with transitive utterances and with utterances that are causally central to the narrative. We argue that the structure of the event itself must be taken into account: some events cannot plausibly evoke both types of gesture. We show that linguistic structure (transitivity), event structure (visuo-spatial and motoric properties), and discourse structure all play a role. We apply these findings to a recent model of embodied language production, the Gestures as Simulated Action framework.
Frontiers in Psychology
The literature on bimodal discourse reference has shown that gestures are sensitive to referents' information status in discourse. Gestures occur more often with new referents/first mentions than with given referents/subsequent mentions. However, because not all new entities at first mention occur with gestures, the current study examines whether gestures are sensitive to a difference in information status between brand-new and inferable entities and to variation in nominal definiteness. Unexpectedly, the results show that gestures are more frequent with inferable referents (hearer new but discourse old) than with brand-new referents (hearer new and discourse new). The findings reveal new aspects of the relationship between gestures and speech in discourse, specifically suggesting a complementary (disambiguating) function for gestures in the context of first-mentioned discourse entities. The results thus highlight the multi-functionality of gestures in relation to speech.
Frontiers in Psychology, 2019
Production studies show that anaphoric reference is bimodal. Speakers can introduce a referent in speech by also using a localizing gesture, assigning a specific locus in space to it. Referring back to that referent, speakers then often accompany a spoken anaphor with a localizing anaphoric gesture (i.e., indicating the same locus). Speakers thus create visual anaphoricity in parallel to the anaphoric process in speech. In the current perception study, we examine whether addressees are sensitive to localizing anaphoric gestures and specifically to the (mis)match between recurrent use of space and spoken anaphora. The results of two reaction time experiments show that, when a single referent is gesturally tracked, addressees are sensitive to the presence of localizing gestures, but not to their spatial congruence. Addressees thus seem to integrate gestural information when processing bimodal anaphora, but their use of locational information in gestures is not obligatory in every discourse context.
During narrative retelling, speakers shift between different viewpoints to reflect how they conceptualize the events that unfolded. These viewpoints can be indicated through gestural means as well as through verbal ones. Studies of co-speech gestures have inferred viewpoint from gesture form, i.e., how entities are mapped onto the (primarily manual) articulators, but the merits of this approach have not been discussed. The present study argues that viewpoint is more than gestural form. Despite connections between the two, many other factors may influence a gesture's form.
Frontiers in Communication
In this paper, we investigate whether and how perspective taking at the linguistic level interacts with perspective taking at the level of co-speech gestures. In an experimental rating study, we compared test items clearly expressing the perspective of an individual participating in the event described by the sentence with test items clearly expressing the speaker's or narrator's perspective. Each test item was videotaped in two different versions: in one version, the speaker performed a co-speech gesture enacting the event described by the sentence from a participant's point of view (i.e., a character viewpoint gesture); in the other version, she performed a co-speech gesture depicting the event as if observed from a distance (i.e., an observer viewpoint gesture). Both versions of each test item were shown to participants, who then had to decide which of the two versions they found more natural. Based on the expe...
Topics in Cognitive Science, 2014
Gestural viewpoint research suggests that several dimensions determine which perspective a narrator takes, including properties of the event described. Events can evoke gestures from the point of view of a character (CVPT), an observer (OVPT), or both perspectives. CVPT and OVPT gestures have been compared to constructed action (CA) and classifiers (CL) in signed languages. We ask how CA and CL, as represented in ASL productions, compare to previous results for CVPT and OVPT from English-speaking co-speech gesturers. Ten ASL signers described cartoon stimuli from Parrill (2010). Events shown by Parrill to elicit a particular gestural strategy (CVPT, OVPT, both) were coded for signers' instances of CA and CL. CA was divided into three categories: CA-torso, CA-affect, and CA-handling. Signers used CA-handling the most when gesturers used CVPT exclusively. Additionally, signers used CL the most when gesturers used OVPT exclusively and CL the least when gesturers used CVPT exclusively.
2017
WORKSHOP | MARCH 30-31, 2017 | LUND UNIVERSITY
CONNECTING DISCOURSE IN SPEECH AND GESTURE

STUDYING THE PRAGMATIC FUNCTIONS OF SPEAKER GESTURES: HISTORICAL NOTES AND CURRENT UNDERSTANDINGS
Adam Kendon, University College London, UK
An account of why, until recently, the pragmatic functions of gestures received relatively little attention in modern gesture studies, with some suggestions for a framework for a contemporary understanding.

COHESION IS HEARD AND SEEN: CROSS-LINGUISTIC DIFFERENCES IN GESTURES REFERRING TO THE SAME ENTITIES IN SUSTAINED DISCOURSE
Emanuela Campisi (Lund University Humanities Lab; University of Catania, Italy) & Marianne Gullberg (Lund University Humanities Lab; Centre for Languages and Literature, Lund University)
For communication to be successful, speakers must refer to entities coherently across discourse, differentiating between referents introduced for the first time, maintained across longer stretches, and reintroduced after a gap (Givón, 1983; Hickmann & Hendriks, 1999). Interestingl...
Journal of Psycholinguistic Research, 2013
Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions, or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers with different levels of English proficiency were asked to retell a story in English; their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations to speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both the type of gesture and the proficiency of the speaker need to be considered when accounting for how gesture and speech are used in a narrative context.