2017, Routledge eBooks
This chapter explores the phonology of sign languages, highlighting that this field, while relatively young, demonstrates significant structure and organization akin to spoken language phonology. Key topics discussed include the development of phonological models, the impact of modality on phonology, and the ongoing relationship between sign languages and gesture. It emphasizes the importance of viewing sign languages as distinct linguistic systems and reviews innovative models that enhance our understanding of their phonological characteristics.
The Routledge Handbook of Phonological Theory, 2018
Language and Linguistics Compass, 2012
Visually perceivable and movable parts of the body-the hands, facial features, head, and upper body-are the articulators of sign language. It is through these articulators that words are formed, constrained, and contrasted with one another, and that prosody is conveyed. This article provides an overview of the way in which phonology is organized in the alternative modality of sign language.
Oxford Research Encyclopedia of Linguistics, 2017
Sign language phonology is the abstract grammatical component where primitive structural units are combined to create an infinite number of meaningful utterances. Although the notion of phonology is traditionally based on sound systems, phonology also includes the equivalent component of the grammar in sign languages, because it is tied to the grammatical organization, and not to particular content. This definition of phonology helps us see that the term covers all phenomena organized by constituents such as the syllable, the phonological word, and the higher-level prosodic units, as well as the structural primitives such as features, timing units, and autosegmental tiers, and it does not matter if the content is vocal or manual. Therefore, the units of sign language phonology and their phonotactics provide opportunities to observe the interaction between phonology and other components of the grammar in a different communication channel, or modality. This comparison allows us to bet...
Annual Review of Linguistics
Comparing phonology in spoken language and sign language reveals that core properties, such as features, feature categories, the syllable, and constraints on form, exist in both naturally occurring language modalities. But apparent ubiquity can be deceptive. The features themselves are quintessentially different, and key properties, such as linearity and arbitrariness, although universal, occur in inverse proportions to their counterparts, simultaneity and iconicity, in the two modalities. Phonology does not appear full blown in a new sign language, but it does gradually emerge, accruing linguistic structure over time. Sign languages suggest that the phonological component of the language faculty is a product of the ways in which the physical system, cognitive structure, and language use among people interact over time.
Phonology, 1993
The study of phonological structure and patterns across languages is seen by contemporary phonologists as a way of gaining insight into language as a cognitive system. Traditionally, phonologists have focused on spoken languages. More recently, we have observed a growing interest in the grammatical system underlying signed languages of the deaf. This development in the field of phonology provides a natural laboratory for investigating language universals. As grammatical systems, in part, reflect the modality in which they are expressed, the comparison of spoken and signed languages permits us to separate those aspects of grammar which are modality-dependent from those which are shared by all human languages. On the other hand, modality-dependent characteristics must also be accounted for by a comprehensive theory of language. Comparing languages in two modalities is therefore of theoretical importance for both reasons: establishing modality-independent linguistic universals, and acc...
Fenlon, Jordan, Kearsy Cormier, Robert Adam & Bencie Woll. under review. Issues in determining phonological structure of sign languages in usage-based lexicons: The case of BSL SignBank. (submitted 11 October 2016). Check for updates before citing. Abstract: Sign languages have a sublexical level of organisation in which a fixed number of contrastive units combine to form all the signs that make up the core native lexicon of a sign language. Most accounts of sign language phonology have identified these units using a parameter-based approach that acknowledged handshape, location, movement and orientation as significant parameters and/or a featural approach. These aspire to account for all the possible phonological contrasts within a given (or multiple) sign language(s). However, much of this work has been based on signs from relatively small datasets which are not organised on the basis of lexical contrast. Here, we outline an ongoing attempt to identify the minimal features required for an adequate phonological description of a sign language lexicon within a corpus-based lemmatised lexical database of British Sign Language (BSL), issues that arise in doing so, and implications of these issues for phonological theory.
2000
This chapter addresses two issues that concern sign language phonology. The first issue is how iconicity influences phonology in SLs. The second arises often in the minds of linguists working on verbal languages: To what extent are the levels of structure and the dimensions of variation the same in signed and verbal languages?
linguistics.uconn.edu
Lingua, 1996
The signs of sign language consist phonetically of hand configurations, locations on the body or in space, and movements. Some models claim that dynamic movements and static locations are the sequential segments of sign language, and even that movements are analogous to vowels. Others claim that movements are redundant, or in any case should not be represented as fully-fledged sequential segments. The present study measures movements against stringent phonological and morphological criteria for featurehood and classhood, in light of the current controversy over their status. Data from American Sign Language and from Israeli Sign Language support the claims made here, among them, that there is a set of phonologically contrastive features of movement which is phonetically coherent, and that these features constitute a class that is referred to in a blocking constraint on multiple inflection and other processes. It is shown that the distinction between sequences of dynamic movements and static elements in signs is exploited in templatic morphology in both sign languages. While this analysis supports the claim that movements are phonologically significant at the underlying level, it suggests that their linear position need not be lexically specified. 1. The controversy: From the earliest days of sign language linguistics, it has been accepted that there are three categories of phonological features: hand configuration, location, and movement. [Acknowledgment: I am very grateful to Harry van der Hulst and to anonymous reviewers for their helpful comments on this paper. Thanks as well to participants for their questions and comments at the following conferences where earlier versions of this paper were presented: the Workshop on Sign Language Phonology and
2015
Sign languages offer a unique and informative perspective on the question of the origin of phonological and phonetic features. Here I review research showing that signs are composed of distinctive features which can be discretely listed and which are organized hierarchically. In these ways sign language feature systems are comparable to those of spoken language. However, the inventory of features and aspects of their organization, while similar across sign languages, are completely unlike those of spoken languages, calling into question claims about innateness of features for either modality. Studies of a young village sign language, Al-Sayyid Bedouin Sign Language (ABSL), demonstrate that phonological structuring is not in evidence at the outset, but rather self-organizes gradually (Sandler et al. 2011). However, our new research shows that signature phonetic features of ABSL already can be detected when ABSL signers use signs from Israeli Sign Language. This ABSL ‘accent’ points t...
Transactions of the Philological Society, 2010
Journal of Phonetics, 2006
In this chapter we will propose that the set of phonological features needed for sign languages is much smaller than what is usually proposed or assumed. Even though it has been recognized (since Stokoe's seminal work) that phonological features must capture only those properties of signs that are distinctive in the language, all subsequent models for sign language phonology typically encode a lot of phonetic detail that, on closer study, isn't really distinctive (in a phonological sense). In this chapter, we argue that the non-distinctive nature of these phonetic properties is due to two sources: (a) phonetic predictability (to be accounted for in terms of phonetic implementation rules) and (b) iconicity (to be accounted for in terms of lexical pre-specification). The two routes in (a) and (b) allow us to 'clean up' the phonology which, as a result, can be shown to be quite restricted and non-random, i.e. in accordance with structural principles that appear to play a crucial role in spoken language phonology as well. A case study involving the notion place of articulation is provided. Our claims are based on a study of signs from Sign Language of the Netherlands (SLN), and, in particular on a database (SignPhon) that contains over 3000 signs, provided with a detailed phonetic/ phonological encoding.
Sign Language Studies, 1974
The American Sign Language of the deaf (ASL) has a level of structure which is analogous to phonology. The natural basis for both lexical description and analysis of variation is the articulatory dynamics of the hands and body. I have chosen to avoid the term cherology, found in Stokoe (1960) and Stokoe, Casterline, and Croneberg (1965), for two reasons: (a) to avoid confusion between Stokoe's structural analysis and the present study, which is in a generative phonological framework, and (b) to avoid using a new term where a familiar one seems both adequate and appropriate.
2008
The research program developed by Peter MacNeilage seeks to derive aspects of phonological organization from fundamental physical properties of the speech system, and from there to arrive at reasonable hypotheses about the evolution of speech. Speech is the dominant medium for the transmission of natural human language, and characterizing its organization is clearly very important for our understanding of language as a whole. Speech is not the only medium available to humans, however, and a comprehensive theory of the nature and evolution of language has much to gain by investigating the form of language in the other natural language modality: sign language, the focus of this chapter. Like spoken languages, sign languages have syllables, the unit that will form the basis for comparison here. As a prosodic unit of organization within the word, sign language syllables bear certain significant similarities to those of spoken language. Such similarities help to shed light on universal properties of linguistic organization, regardless of modality. Yet the form and organization of syllables in the two modalities are quite different, and I will argue that these differences are equally illuminating. The similarities show that spoken and signed languages reflect the same cognitive system in a nontrivial sense. But the differences confirm that certain key aspects of phonological structure must indeed be derived from the physical transmission system, resulting in phonological systems that are in some ways distinct.
Natural Language and Linguistic Theory, 1986
Mouth Actions in Sign Languages
sandlersignlab.haifa.ac.il
Developmental Science, 2021
Children's gaze behavior reflects emergent linguistic knowledge and real-time language processing of speech, but little is known about naturalistic gaze behaviors while watching signed narratives. Measuring gaze patterns in signing children could uncover how they master perceptual gaze control during a time of active language learning. Gaze patterns were recorded using a Tobii X120 eye tracker, in 31 non-signing and 30 signing hearing infants (5–14 months) and children (2–8 years) as they watched signed narratives on video. Intelligibility of the signed narratives was manipulated by presenting them naturally and in video-reversed (“low intelligibility”) conditions. This video manipulation was used because it distorts semantic content, while preserving most surface phonological features. We examined where participants looked, using linear mixed models with Language Group (non-signing vs. signing) and Video Condition (Forward vs. Reversed), controlling for trial order. Non-signing infants and children showed a preference to look at the face as well as areas below the face, possibly because their gaze was drawn to the moving articulators in signing space. Native signing infants and children demonstrated resilient, face-focused gaze behavior. Moreover, their gaze behavior was unchanged for video-reversed signed narratives, similar to what was seen for adult native signers, possibly because they already have efficient highly focused gaze behavior. The present study demonstrates that human perceptual gaze control is sensitive to visual language experience over the first year of life and emerges early, by 6 months of age. Results have implications for the critical importance of early visual language exposure for deaf infants. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=2ahWUluFAAg.