Papers by Susan Goldin-Meadow

The Resilience of Structure Built Around the Predicate: Homesign Gesture Systems in Turkish and American Deaf Children
Journal of Cognition and Development, 2014
Deaf children whose hearing losses prevent them from accessing spoken language, and whose hearing parents have not exposed them to sign language, develop gesture systems, called homesigns, that have many of the properties of natural language (the so-called resilient properties of language). We explored the resilience of structure built around the predicate (in particular, how manner and path are mapped onto the verb) in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children's gestures. Although co-speech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language.
Journal of Memory and …, Jan 1, 2004

Speakers of all ages spontaneously gesture as they talk. These gestures predict children’s milestones in vocabulary and sentence structure. We ask whether gesture serves a similar role in the development of narrative skill. Children were asked to retell a story conveyed in a wordless cartoon at age five and then again at six, seven, and eight. Children’s narrative structure in speech improved across these ages. At age five, many of the children expressed a character’s viewpoint in gesture, and these children were more likely to tell better-structured stories at the later ages than children who did not produce character-viewpoint gestures at age five. In contrast, framing narratives from a character’s perspective in speech at age five did not predict later narrative structure in speech. Gesture thus continues to act as a harbinger of change even as it assumes new roles in relation to discourse.

Developmental …, Jan 1, 2010
Children with pre- or perinatal brain injury (PL) exhibit marked plasticity for language learning. Previous work has focused mostly on the emergence of earlier-developing skills, such as vocabulary and syntax. Here we ask whether this plasticity for earlier-developing aspects of language extends to more complex, later-developing language functions by examining the narrative production of children with PL. Using an elicitation technique that involves asking children to create stories de novo in response to a story stem, we collected narratives from 11 children with PL and 20 typically developing (TD) children. Narratives were analysed for length, diversity of the vocabulary used, use of complex syntax, complexity of the macro-level narrative structure and use of narrative evaluation. Children’s language performance on vocabulary and syntax tasks outside the narrative context was also measured. Findings show that children with PL produced shorter stories, used less diverse vocabulary, produced structurally less complex stories at the macro-level, and made fewer inferences regarding the cognitive states of the story characters. These differences in the narrative task emerged even though children with PL did not differ from TD children on vocabulary and syntax tasks outside the narrative context. Thus, findings suggest that there may be limitations to the plasticity for language functions displayed by children with PL, and that these limitations may be most apparent in complex, decontextualized language tasks such as narrative production.

Human brain …, Jan 1, 2009
Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker’s message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, she made semantically unrelated hand movements. In the third, she kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.
Once Is Not Enough: Standards of Well-Formedness in Manual Communication Created over Three Different Timespans
Language, 1993
JENNY L. SINGLETON and JILL P. MORFORD (University of Illinois, Urbana-Champaign), SUSAN GOLDIN-MEADOW (The University of Chicago). Natural languages are characterized by standards of well-formedness. These internal standards are likely to be, ...

Human Development, 2001
The stories that children hear not only offer them a model for how to tell stories, but they also serve as a window into their cultural worlds. What would happen if children were unable to hear what surrounds them? Would such children have any sense that events can be narrated and, if so, would they narrate those events in a culturally appropriate manner? To explore this question, we examined children who did not have access to conventional language: deaf children whose profound hearing deficits prevented them from acquiring the language spoken around them, and whose hearing parents had not yet exposed them to a conventional sign language. We observed 8 deaf children of hearing parents in two cultures, 4 European-American children from either Chicago or Philadelphia, and 4 Taiwanese children from Taipei, all of whom invented gesture systems to communicate. All 8 children used their gestures to recount stories, and those gestured stories were of the same types, and of the same structure, as those told by hearing children. Moreover, the deaf children seemed to produce culturally specific narrations despite their lack of a verbal language model, suggesting that these particular messages are so central to the culture as to be instantiated in nonverbal as well as verbal practices.

Cognitive Processing, 2013
Gestures are common when people convey spatial information, for example, when they give directions or describe motion in space. Here, we examine the gestures speakers produce when they explain how they solved mental rotation problems (Shepard and Metzler in Science 171:701-703, 1971). We asked whether speakers gesture differently while describing their problems as a function of their spatial abilities. We found that low-spatial individuals (as assessed by a standard paper-and-pencil measure) gestured more to explain their solutions than high-spatial individuals. While this finding may seem surprising, finer-grained analyses showed that low-spatial participants used gestures more often than high-spatial participants to convey "static only" information but less often than high-spatial participants to convey dynamic information. Furthermore, the groups differed in the types of gestures used to convey static information: high-spatial individuals were more likely than low-spatial individuals to use gestures that captured the internal structure of the block forms. Our gesture findings thus suggest that encoding block structure may be as important as rotating the blocks in mental spatial transformation.

Language learning and development : the official journal of the Society for Language Development, Apr 1, 2013
Handshape works differently in nouns vs. a class of verbs in American Sign Language (ASL), and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself (object handshapes) and handshapes representing how the object is handled (handling handshapes) appear in both nouns and a particular type of verb, classifier predicates, in ASL. When used as nouns, object and handling handshapes are phonemic; that is, they are specified in dictionary entries and do not vary with grammatical context. In contrast, when used as classifier predicates, object and handling handshapes do vary with grammatical context for both morphological and syntactic reasons. We ask here when young deaf children learning ASL acquire the word class distinction signaled by handshape. Specifically, we determined the age at which children systematically vary object vs. handling handshapes as a function of grammatical context in classifier predicates, ...

Natural Language & Linguistic Theory, 2012
Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. These first indicators of phonology appear to be present in individuals developing a gesture system without benefit of a linguistic community. Finally, we propose that iconicity, morphology, and phonology each play an important role in the system of sign language classifiers to create the earliest markers of phonology at the morphophonological interface.

Cognition, 2011
Deaf children whose hearing losses are so severe that they cannot acquire spoken language, and whose hearing parents have not exposed them to sign language, use gestures called homesigns to communicate. Homesigns have been shown to contain many of the properties of natural languages. Here we ask whether homesign has structure building devices for negation and questions. We identify two meanings (negation, question) that correspond semantically to propositional functions, that is, to functions that apply to a sentence (whose semantic value is a proposition, φ) and yield another proposition that is more complex (¬φ for negation; ?φ for question). Combining φ with ¬ or ? thus involves sentence modification. We propose that these negative and question functions are structure building operators, and we support this claim with data from an American homesigner. We show that: (a) each meaning is marked by a particular form in the child's gesture system (side-to-side headshake for negation, manual flip for question); (b) the two markers occupy systematic, and different, positions at the periphery of the gesture sentences (headshake at the beginning, flip at the end); and (c) the flip is extended from questions to other uses associated with the wh-form (exclamatives, referential expressions of location) and thus functions like a category in natural languages. If what we see in homesign is a language creation process, and if negation and question formation involve sentential modification, then our analysis implies that homesign has at least this minimal sentential syntax. Our findings thus contribute to ongoing debates about properties that are fundamental to language and language learning.
Gesturing has a larger impact on problem-solving than action, even when action is accompanied by words