2012
Humans instinctively form words by weaving patterns of meaningless speech elements. Moreover, we do so in specific, regular ways. We contrast dogs and gods, and favour blogs over lbogs. We begin forming sound-patterns at birth and, like songbirds, we do so spontaneously, even in the absence of an adult model. We even impose these phonological patterns on invented cultural technologies such as reading and writing. But why are humans compelled to generate phonological patterns? And why do different phonological systems - signed and spoken - share aspects of their design? Drawing on findings from a broad range of disciplines including linguistics, experimental psychology, neuroscience and comparative animal studies, Iris Berent explores these questions and proposes a new hypothesis about the architecture of the phonological mind.
Current Directions in Psychological Science, 2017
Why do humans drink and drive but fail to rdink and rdive? Here, I suggest that these regularities could reflect abstract phonological principles that are active in the minds and brains of all speakers. In support of this hypothesis, I show that (a) people converge on the same phonological preferences (e.g., dra over rda) even when the relevant structures (e.g., dra, rda) are unattested in their language and that (b) such behavior is inexplicable by purely sensorimotor pressures or experience with similar syllables. Further support for the distinction between phonology and the sensorimotor system is presented by their dissociation in dyslexia, on the one hand, and the transfer of phonological knowledge from speech to sign, on the other. A detailed analysis of the phonological system can elucidate the functional architecture of the typical mind/brain and the etiology of speech and language disorders.
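The preference for dra over rda instantiates the sonority sequencing principle: onsets that rise in sonority toward the vowel are cross-linguistically preferred to onsets that fall. As a rough illustration of how such a preference can be computed without any language-specific statistics, here is a minimal sketch in Python; the sonority scale values and the helper name are assumptions made for the demonstration, not taken from the paper:

```python
# Minimal sketch of a sonority-based preference score (illustrative;
# the scale values below are a common textbook convention, not Berent's).

SONORITY = {  # higher = more sonorous
    "p": 1, "t": 1, "k": 1, "b": 1, "d": 1, "g": 1,   # stops
    "f": 2, "s": 2, "v": 2, "z": 2,                   # fricatives
    "m": 3, "n": 3,                                   # nasals
    "l": 4, "r": 4,                                   # liquids
}

def onset_rise(onset: str) -> int:
    """Sonority change across a two-consonant onset.
    A large rise (dr: 4 - 1 = +3) is preferred; a fall (rd: -3) is not."""
    return SONORITY[onset[-1]] - SONORITY[onset[0]]

# Reproduces the preference hierarchy without any language-specific
# experience: dr (+3) > bn (+2) > bd (0) > rd (-3).
for onset in ["dr", "bn", "bd", "rd"]:
    print(onset, onset_rise(onset))
```

Sorting candidate onsets by this score yields the cross-linguistic hierarchy the abstract describes, even for clusters a speaker has never encountered, which is the crux of the argument for abstract principles.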
Biolinguistics, 2010
Biolinguistics aims to shed light on the specifically biological nature of human language, focusing on five foundational questions: (1) What are the properties of the language phenotype? (2) How does language ability grow and mature in individuals? (3) How is language put to use? (4) How is language implemented in the brain? (5) What evolutionary processes led to the emergence of language? These foundational questions are used here to frame a discussion of important issues in the study of language: whether our linguistic capacity is the result of direct selective pressure or of developmental or biophysical constraints, and whether the neural/computational components entering into language are unique to human language or shared with other cognitive systems. Advances in theoretical linguistics, psycholinguistics, comparative animal behavior and psychology, and genetics/genomics can now place these longstanding questions in a new light, while raising challenges for future research.
The Handbook of Language Emergence, 2015
2007
This special session discusses whether there is a biological grounding of phonology. Reviewing current and past work from different speech research disciplines, we suggest that (1) biological factors provide the limits and frame of reference for phonology, (2) phonology is shaped by optimization processes that take into account the nonlinear relations between different representations of speech (acoustics, articulation, speech perception), and (3) sociolinguistic factors and communicative usage affect, for instance, speech acquisition and sound change. The first of these suggestions is biological in nature, whereas the last reflects the non-biological side of speech.
While morphosyntax and semantics have been studied from a functional and cognitive perspective, much less emphasis has been placed on phonological phenomena in these frameworks. This paper proposes a rethinking of phonology, arguing that (i) the lexical representations of words have phonetic substance that is gradually changed by phonetic processes; (ii) the spread of these phonetic changes is at least partly accounted for by the way particular items are used in discourse; (iii) the study of exceptions, marginal phenomena, and subphonemic detail is important to understanding how phonological information is stored and processed; (iv) generalizations at the morphological and lexical level have radically different properties than generalizations at the phonetic level, with the former having a cognitive or semantic motivation and the latter a motor or physical motivation; and (v) the best way to model the interaction of generalizations with the lexicon is not to separate rules from lists of items, but to conceive of generalizations as patterns or schemas that emerge from the organization of stored lexical units.
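Point (v), that generalizations are schemas emerging from stored units rather than rules applied to lists, is the core of exemplar models. The following sketch illustrates the idea under simplifying assumptions of my own (a single phonetic dimension; a schema as the mean of stored tokens); it is not code from the paper:

```python
from collections import defaultdict

# Illustrative exemplar lexicon: every experienced token is stored in
# phonetic detail, and the "schema" for a word is just the center of
# its stored cloud, so generalizations emerge from storage itself.
# Representing a token by one phonetic value (e.g., vowel duration
# in ms) is a simplifying assumption for this sketch.

class ExemplarLexicon:
    def __init__(self):
        self.tokens = defaultdict(list)  # word -> stored phonetic tokens

    def hear(self, word: str, value: float) -> None:
        self.tokens[word].append(value)  # no rule/list separation

    def schema(self, word: str) -> float:
        vals = self.tokens[word]
        return sum(vals) / len(vals)     # emergent generalization

lex = ExemplarLexicon()
for v in [120, 118, 115, 111, 108]:      # gradual phonetic reduction in use
    lex.hear("going", v)
print(lex.schema("going"))               # the schema drifts with usage: 114.4
```

Because the representation just is the token cloud, points (i) and (ii), gradual phonetic change driven by patterns of use, fall out of the same mechanism.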
Complutense Journal of English Studies, 2017
While the arbitrariness of the sign has occupied a central space in linguistic theory for a century, counter-evidence to this basic tenet has been mounting. Recent findings from cross-linguistic studies on spoken languages suggest that, contrary to purely arbitrary distributions of phonological content, languages often exhibit systematic and regular phonological and sub-phonological patterns of form-meaning mappings. To date, studies of distributional tendencies of this kind have not been conducted for signed languages. In an investigation of phoneme distribution in American Sign Language (ASL) and Língua Brasileira de Sinais (Libras), tokens of the claw-5 handshape were extracted and analyzed for whether the handshape contributed to the overall meaning of the sign. The data suggest that the claw-5 handshape is not randomly distributed across the lexicon, but clusters around six form-meaning patterns: convex-concave, unitary elements, non-compact matter, hand-as-hand, touch, and interlocking. Interestingly, feature-level motivations were uncovered as the source of the mappings. These findings are considered within a new cognitive framework to better understand how and why sub-morphemic units develop and maintain motivated form-meaning mappings. The model proposed here, Embodied Cognitive Phonology, builds on cognitive and usage-based approaches but incorporates theories of embodiment to address the source of the claw-5 mappings. Embodied Cognitive Phonology provides a unifying framework for understanding the perceived differences in phonological patterning and organization across the modalities. Both language-internal and language-external sources of motivation contribute to the emergence of form-meaning mappings. Arbitrariness is argued to be but one possible outcome of the process of emergence and schematization of phonological content, and it exists alongside motivation as a legitimate state of linguistic units of all sizes of complexity. Importantly, because language is dynamic, these states are not fixed but in continuous flux, as language users reinvent and reinterpret form and meaning over time.
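The distributional claim, that claw-5 tokens cluster around a few form-meaning patterns rather than spreading randomly across the lexicon, amounts to a goodness-of-fit test over coded corpus tokens. A sketch of that analysis follows; the token counts are invented placeholders, not the ASL/Libras figures reported here:

```python
# Illustrative tally of claw-5 tokens by coded form-meaning pattern,
# tested against a uniform (random) spread. Counts are invented
# placeholders, NOT the corpus figures from the ASL/Libras study.

observed = {
    "convex-concave": 41,
    "unitary elements": 33,
    "non-compact matter": 27,
    "hand-as-hand": 22,
    "touch": 18,
    "interlocking": 12,
    "no discernible mapping": 9,
}

n = sum(observed.values())          # total coded tokens
expected = n / len(observed)        # what a random spread predicts per cell

chi_sq = sum((o - expected) ** 2 / expected for o in observed.values())
print(f"chi-square = {chi_sq:.1f} on {len(observed) - 1} df")
# Values far above the p = .05 critical value (about 12.6 for 6 df)
# are what "not randomly distributed" means in practice.
```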
2016
This dissertation uses corpus data from ASL and Libras (Brazilian Sign Language) to investigate the distribution of a series of static and dynamic handshapes across the two languages. While traditional phonological frameworks treat handshape distribution as a facet of well-formedness constraints and articulatory ease (Brentari, 1998), the data analyzed here suggest that the majority of handshapes cluster around schematic form-meaning mappings. Furthermore, these schematic mappings are shown to be motivated by both language-internal and language-external construals of formal articulatory properties and embodied experiential gestalts. Usage-based approaches to phonology (Bybee, 2001) and cognitively oriented constructional approaches (Langacker, 1987) have recognized that phonology is not modular. Instead, phonology is expected to interact with all levels of grammar, including semantic association. In this dissertation I begin to develop a cognitive model of phonology which views...
2011
Despite our strong intuitions that language is represented in memory using some kind of alphabet, phones and phonemes appear to play almost no psychological role in human speech perception, production or memory. Instead, evidence shows that people store linguistic material with a rich, detailed auditory and sensory-motor code that tends, in its details, to be unique for each speaker. The obvious phonological discreteness of languages reflects conventional categories of pronunciation but not discrete symbols. In learning to read, we all master the Speech-Letter Blend, so that letters can be effortlessly interpreted as speech when reading. This mapping between letters and speech, requiring many years of training, is apparently achieved in the Visual Word Form Area of cortex. The notion of a phoneme is actually a conceptual blend of letters and speech. Linguistics, for at least the last century, has almost invariably subscribed to the following assumption: Speech in any language consis...
This paper will discuss the ways in which Cognitive Grammar (CG) has integrated the fundamental concepts of phonology. Phonology has traditionally been the neglected stepchild of CG, in part because the initial excitement of CG revolved around the insightful semantic analyses of what had previously been thought to be purely syntactic or arbitrary lexical puzzles, and phonology is, by definition, about meaningless units. Additionally, phonology deals with the coordination of motor activity and auditory perception, areas that initially did not seem to lend themselves to the conceptual tools of CG. Unlike much work on syntax and semantics, research within several of the generative and functional traditions turns out to be adaptable to CG's view of the nature of phonological processing. The primary source of useful insights is the work of the Natural Phonologists (Donegan and Stampe), who argued for non-modular cognitive realism and against Chomskyan innatism independently from CG theorists. Furthermore, some work by recent offshoots of Generative Phonology turns out to have useful things to say about how CG might understand the structure of sounds in language. The earliest writing on the subject (Nathan) examined ways in which the core categorization concepts in early CG (radial prototype categories, image schema transformations, basic level categorization) could be utilized to explain traditional phonological concepts such as the phoneme, allophones and (natural) phonological processes. In more recent research, the conceptual tools of usage-based models have also been used to account for some aspects of phonological behavior, leading some researchers to question the relevance even of such traditional phonological constructs as the phoneme/allophone contrast (Bybee) or the distinction between abstract phonemic representations and representations of individual instances of particular utterances (Pierrehumbert). Additionally, the question of the boundary between phonology and morphology has been raised, with some researchers (such as the present author) arguing for a sharp, functional boundary based on the varying cognitive and physiological resources involved, while others (Nesset, Kemmer) argue for a more generalized schematization model covering all aspects of phonological as well as morphological and syntactic generalizations. After reviewing the relevant issues, this paper argues that evidence from early aspects of child language acquisition, such as the onset and development of babbling (MacNeilage), the embodied nature of perception (Johnson), and other research on the acquisition and processing of complex motor skills, shows that phonology requires a more active conception of the storage and production of heard instances. Phonology is based upon speakers' knowledge of the nature and capabilities of their vocal tracts, and deals with how speakers actively construct utterances based on their knowledge of their individual language(s)' conventionalized responses to those physiological and acoustic constraints.
Poznań Studies in Contemporary Linguistics, 2000
This paper compares and contrasts the theories of Natural Phonology and Phonology as Human Behavior in general and shows how each theory views the notion of language universals in particular. The concepts of combinatory phonology, phonotactics, and diachronic, developmental, clinical and evolutionary phonology will be discussed as measures of defining and determining the concept of language universals. The author maintains that biological, physiological, cognitive, psychological, sociological and other universals of human behavior are merely reflected in language rather than being specific "language universals" per se.
Phonological Knowledge: …, 2000
Language is primarily an auditory system of symbols. In so far as it is articulated it is also a motor system, but the motor aspect is clearly secondary to the auditory. In normal individuals the impulse to speech first takes effect in the sphere of auditory imagery and is then transmitted to the motor nerves that control the organs of speech. The motor processes and the accompanying motor feelings are not, however, the end, the final resting point. They are merely a means and a control leading to auditory perception in both speaker and hearer... Hence, the cycle of speech...begins and ends in the realm of sounds (Sapir 1921: 17-18).
PLoS ONE, 2013
All spoken languages encode syllables and constrain their internal structure. But whether these restrictions concern the design of the language system, broadly, or speech, specifically, remains unknown. To address this question, here, we gauge the structure of signed syllables in American Sign Language (ASL). Like spoken syllables, signed syllables must exhibit a single sonority/energy peak (i.e., movement). Four experiments examine whether this restriction is enforced by signers and nonsigners. We first show that Deaf ASL signers selectively apply sonority restrictions to syllables (but not morphemes) in novel ASL signs. We next examine whether this principle might further shape the representation of signed syllables by nonsigners. Absent any experience with ASL, nonsigners used movement to define syllable-like units. Moreover, the restriction on syllable structure constrained the capacity of nonsigners to learn from experience. Given brief practice that implicitly paired syllables with sonority peaks (i.e., movement), a natural phonological constraint attested in every human language, nonsigners rapidly learned to selectively rely on movement to define syllables, and they also learned to partly ignore it in the identification of morpheme-like units. Remarkably, nonsigners failed to learn an unnatural rule that defines syllables by handshape, suggesting they were unable to ignore movement in identifying syllables. These findings indicate that signed and spoken syllables are subject to a shared phonological restriction that constrains phonological learning in a new modality. These conclusions suggest the design of the phonological system is partly amodal.
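The core restriction, one sonority/energy peak (movement) per syllable, can be stated as a peak-counting rule over a sign's movement profile. The sketch below is illustrative only; encoding a sign as a sequence of numeric energy samples is my assumption, not the paper's representation:

```python
# Illustrative sketch: if every syllable must carry exactly one
# sonority/energy peak, then syllable count equals the number of
# local maxima in a sign's movement profile. Encoding a sign as
# numeric energy samples is an assumption made for this sketch.

def count_syllables(energy: list[float]) -> int:
    """Count local maxima (movement peaks) in a movement-energy profile."""
    peaks = 0
    for i in range(1, len(energy) - 1):
        if energy[i - 1] < energy[i] >= energy[i + 1]:
            peaks += 1
    return peaks

monosyllabic = [0.1, 0.5, 0.9, 0.4, 0.1]   # one movement peak
disyllabic = [0.1, 0.8, 0.2, 0.7, 0.1]     # two movement peaks
print(count_syllables(monosyllabic), count_syllables(disyllabic))  # 1 2
```

On this view, the nonsigners' behavior amounts to applying the same peak-counting rule to a modality they have never experienced, which is the sense in which the restriction is amodal.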
Ecological Psychology, 2010
It is proposed that a language, in a rich, high-dimensional form, is part of the cultural environment of the child learner. A language is the product of a community of speakers who develop its phonological, lexical and phrasal patterns over many generations. The language emerges from the joint behavior of many agents in the community acting as a complex adaptive system. Its form only roughly approximates the low-dimensional structures that our traditional phonology highlights. Those who study spoken language have attempted to approach it as an internal knowledge structure, rather than as a communal institution or set of conventions for coordination of activity. We also find it very difficult to avoid being deceived into seeing language in the form employed by our writing system, as letters, words and sentences. But our writing system is a further set of conventions that approximate the high-dimensional spoken language in a consistent and regularized graphical form.
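The claim that a language's conventions emerge from the joint behavior of many agents, with no central coordinator, is standardly illustrated with agent-based models such as the naming game. The sketch below is offered as an illustration of "complex adaptive system", not as the author's own simulation:

```python
import random

# Minimal naming game: a shared convention emerges from many local,
# pairwise interactions with no global coordinator. A standard toy
# model of a complex adaptive system, not the author's own model.

random.seed(1)
N = 50
vocab = [set() for _ in range(N)]           # each agent's candidate forms

for _ in range(20_000):
    speaker, hearer = random.sample(range(N), 2)
    if not vocab[speaker]:
        vocab[speaker].add(f"w{speaker}")   # speaker invents a form
    word = random.choice(sorted(vocab[speaker]))
    if word in vocab[hearer]:               # success: both agents align
        vocab[speaker] = {word}
        vocab[hearer] = {word}
    else:                                   # failure: hearer adopts it
        vocab[hearer].add(word)

surviving = {w for v in vocab for w in v}
print(f"{len(surviving)} convention(s) left after 20,000 interactions")
```

Typical runs collapse dozens of invented forms into a single shared convention; the regularity is a population-level outcome rather than any individual's internal rule, which is the point of treating language as a communal institution.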
Journal of Psycholinguistic Research, 2022
Across languages, certain syllables are systematically preferred to others (e.g., plaf > ptaf). Here, we examine whether these preferences arise from motor simulation. On the simulation account, ill-formed syllables (e.g., ptaf) are disliked because their motor plans are harder to simulate. Four experiments compared sensitivity to the syllable structure of labial- vs. coronal-initial speech stimuli (e.g., plaf > pnaf > ptaf vs. traf > tmaf > tpaf) while participants (English vs. Russian speakers) lightly bit on their lips or tongues. Results suggested that the perception of these stimuli was selectively modulated by motor stimulation (e.g., stimulating the tongue differentially affected sensitivity to labial vs. coronal stimuli). Remarkably, stimulation did not affect sensitivity to syllable structure. This dissociation suggests that some (e.g., phonetic) aspects of speech perception rely on motor simulation and are hence embodied; others (e.g., phonology), however, are possibly abstract. These conclusions speak to the role of embodiment in the language system and to the separation between phonology and phonetics, specifically.