2011
Adults' phonotactic learning is affected by perceptual biases. One such bias concerns the learning of constraints affecting groups of sounds: all else being equal, learning a constraint affecting a natural class (a set of sounds sharing some phonetic characteristic) is easier than learning a constraint affecting an arbitrary set of sounds. This perceptual bias could be a given, for example the result of innately guided learning; alternatively, it could be due to human learners' experience with sounds. Using artificial grammars, we investigated whether such a bias arises in development, or whether it is present as soon as infants can learn phonotactics. Seven-month-old English-learning infants fail to generalize a phonotactic pattern involving fricatives and nasals, which do not form a coherent phonetic group, but succeed with the natural class of oral and nasal stops. In this paper, we report an experiment that explored whether those results also follow in a cohort of 4-month-old...
Journal of Phonetics, 1993
Language Acquisition, 2011
Across languages, onsets with large sonority distances are preferred to those with smaller distances (e.g., bw>bd>lb; Greenberg, 1978). Optimality theory (Prince & Smolensky, 2004) attributes such facts to grammatical restrictions that are universally active in all grammars. To test this hypothesis, here we examine whether children extend putatively universal sonority restrictions to onsets unattested in their language. Participants (M=4;04 years) were presented with pairs of auditory words that were either identical (e.g., lbif→lbif) or epenthetically related (e.g., lbif→lebif) and were asked to judge their identity. Results showed that, as with adults, children's ability to detect epenthetic distortions was monotonically related to sonority distance (bw>bd>lb), and their performance could not be explained by several statistical and phonetic factors. These findings suggest that sonority restrictions are active in early childhood and that their scope is broad. A large body of research demonstrates the sensitivity of young children and adults to the sound pattern of novel linguistic forms that they have never heard before. Infants as young as nine months of age, for instance, favor syllables like blif over lbif despite no experience with either (e.g., Friederici & Wessels 1993). But while the productivity of language is widely recognized, its source is contentious. Some authors attribute linguistic generalizations to domain-general mechanisms, including statistical learning and articulatory and auditory preferences (e.g., Blevins 2004; Bybee & McClelland 2005; MacNeilage 2008). Others, however, assert that productivity might be constrained, in part, by grammatical principles that are potentially specific to language (e.g., Jakobson 1968; Prince & Smolensky 1993/2004; de Lacy 2006a). In this view, all phonological grammars include a universal set of grammatical well-formedness restrictions called markedness constraints. Markedness constraints, for example, disfavor syllables such as lbif relative to blif. Because the variant lbif incurs a more severe violation of markedness constraints, grammars are less likely to admit this marked variant than its unmarked counterpart, blif. Consequently, marked structures are less likely to emerge in typology, they are disfavored as the output of active phonological alternations, and they are more difficult to master in language acquisition. And while marked syllables, such as lbif, are typically ones that are also harder to articulate and perceive on purely phonetic grounds, by hypothesis, grammatical markedness constraints are independently represented in the grammar, irreducible to their analog phonetic precursors (de Lacy 2006b). Moreover, markedness
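To make the sonority-distance ordering concrete, here is a minimal sketch in Python. The five-point sonority scale (stops < fricatives < nasals < liquids < glides) is a common convention assumed for illustration; the work discussed here does not commit to these exact numeric values.

```python
# A minimal sketch of sonority distance for two-consonant onsets,
# assuming a common five-point sonority scale:
# stops (1) < fricatives (2) < nasals (3) < liquids (4) < glides (5).
SONORITY = {
    "b": 1, "d": 1, "g": 1,   # stops
    "v": 2, "z": 2,           # fricatives
    "m": 3, "n": 3,           # nasals
    "l": 4, "r": 4,           # liquids
    "w": 5, "j": 5,           # glides
}

def sonority_distance(onset: str) -> int:
    """Sonority rise from the first to the second consonant of a CC onset."""
    return SONORITY[onset[1]] - SONORITY[onset[0]]

# Larger rises are cross-linguistically preferred: bw (+4) > bd (0) > lb (-3).
for onset in ("bw", "bd", "lb"):
    print(onset, sonority_distance(onset))
```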
Speech errors typically respect the speaker's implicit knowledge of language-wide phonotactics (e.g., /ŋ/ cannot be a syllable onset in English). Previous work demonstrated that adults can learn novel, experimentally induced phonotactic constraints by producing syllable strings in which the allowable position of a phoneme depends on another phoneme within the sequence (e.g., /t/ can only be an onset if the medial vowel is /i/), but not earlier than the second day of training. Thus far, no such work has been done with children. In the current 4-day experiment, a group of Dutch-speaking adults and nine-year-old children were asked to rapidly recite sequences of novel word forms (e.g., kieng nief siet hiem) that were consistent with the phonotactics of spoken Dutch. In the experiment, some consonants (i.e., /t/ and /k/) were restricted to onset or coda position depending on the medial vowel (i.e., /i/ or "ie" versus /øː/ or "eu"). Speech errors in adults revealed a learning effect for the novel constraints on the second day of training, consistent with earlier findings. A post-hoc analysis at the trial level showed that learning was statistically reliable after an exposure of 120 sequence trials (including a consolidation period). Children, however, began learning the constraints on the first day; more precisely, the effect reached significance after an exposure of only 24 sequences. These findings indicate that children are rapid implicit learners of novel phonotactics, which bears important implications for theorizing about developmental sensitivities in language learning.
Trends in Cognitive Sciences, 2000
… of the 25th …, 2001
Many studies on developmental speech perception (e.g. Werker & Tees 1984) have documented changes in speech perception that occur during an infant's first year of life. These changes are generally understood to reflect the phonemic structure of the native language. There is little research, however, on the phonological abstractness of the initial phonetic categories acquired in infancy. One study by Jusczyk and colleagues (1999) found that 9-month-old infants are sensitive to whether or not a set of sounds shares phonological features, indicating that by this young age infants have already developed featural representations. The question we ask in the present research is whether phonetic categories are initially represented as bearing abstract, contrastive features, or whether additional information or experience is required before a learner develops featural representations.
2016
Last year, my colleague Ian Howard and I published a paper in the Journal of Phonetics (Messum & Howard 2015) that discussed the mechanism by which young children learn to pronounce the speech sounds of their mother tongue (L1). The longstanding assumption has been that they do this by some form of imitation. We argued that on current evidence it is more likely that they do this through a mirroring process, with their caregivers as the 'mirror' in which infants and young children discover the linguistic significance of their vocal actions. This matters for the learning of second language (L2) pronunciation because many of our teaching practices are implicitly based on the idea that learning to produce sounds by listening first and then trying to copy what we have heard is 'natural' (or even that it is the only possible way for the production of new sounds to be learnt). If it is not natural, then we might want to reconsider our use of 'listen first' approaches for teaching speech sounds. These approaches are not notably successful, and there is at least one well-developed and proven alternative. This article summarises the 2015 paper, concentrating on the parts of it that will be of most interest to Speak Out! readers. The paper was written for a special issue of the journal examining how speech is represented in the brain, hence the paper's title: Creating the cognitive form of phonological units: the speech sound correspondence problem in infancy could be solved by mirrored vocal interactions rather than by the imitation of speech sounds. In a second article, Roslyn Young and I will examine the nature of L2 speech sound learning and the different approaches taken to teaching sounds.
Cognition, 2008
We explore whether infants can learn novel phonological alternations on the basis of distributional information. In Experiment 1, two groups of 12-month-old infants were familiarized with artificial languages whose distributional properties exhibited either stop or fricative voicing alternations. At test, infants in the two exposure groups had different preferences for novel sequences involving voiced and voiceless stops and fricatives, suggesting that each group had internalized a different familiarization alternation. In Experiment 2, 8.5-month-olds exhibited the same patterns of preference. In Experiments 3 and 4, we investigated whether infants' preferences were driven solely by preferences for sequences of high transitional probability. Although 8.5-month-olds in Experiment 3 were sensitive to the relative probabilities of sequences in the familiarization stimuli, only 12-month-olds in Experiment 4 showed evidence of having grouped alternating segments into a single functional category. Taken together, these results suggest a developmental trajectory for the acquisition of phonological alternations using distributional cues in the input.
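As a rough illustration of the transitional-probability computations controlled for in Experiments 3 and 4, the sketch below estimates forward transitional probabilities over adjacent segments. The corpus strings are hypothetical, not the study's actual stimuli.

```python
from collections import Counter

def forward_tps(corpus):
    """Estimate P(next segment | current segment) by relative frequency."""
    bigrams, contexts = Counter(), Counter()
    for word in corpus:
        for x, y in zip(word, word[1:]):
            bigrams[(x, y)] += 1
            contexts[x] += 1
    return {pair: n / contexts[pair[0]] for pair, n in bigrams.items()}

# Hypothetical familiarization strings (not the study's actual stimuli):
corpus = ["bada", "pata", "badi", "pati", "bada"]
tps = forward_tps(corpus)
print(tps[("b", "a")])  # 1.0 here: "b" is always followed by "a"
```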
Developmental Science, 2001
While there are many theories of the development of speech perception, there are few data on speech perception in human newborns. This paper examines the manner in which newborns responded to a set of stimuli that define one surface of the adult vowel space. Experiment 1 used a preferential listening/habituation paradigm to discover how newborns divide that vowel space. Results indicated that there were zones of high preference flanked by zones of low preference. The zones of high preference approximately corresponded to areas where adults readily identify vowels. Experiment 2 presented newborns with pairs of vowels from the zones found in Experiment 1. One member of each pair was the most preferred vowel from a zone, and the other member was the least preferred vowel from the adjacent zone of low preference. The pattern of preference was preserved in Experiment 2. However, a comparison of Experiments 1 and 2 indicated that habituation had occurred in Experiment 1. Experiment 3 tested the hypothesis that the habituation seen in Experiment 1 was due to processes of categorization, by using a familiarization preference paradigm. The results supported the hypothesis that newborns categorized the vowel space in an adult-like manner, with vowels perceived as relatively good or poor exemplars of a vowel category.
Many studies have shown that during the first year of life infants start learning the prosodic, phonetic and phonotactic properties of their native language. In parallel, infants start associating sound sequences with semantic representations. However, the question of how these two processes interact remains largely unanswered. The current study explores whether (and when) the relative phonotactic probability of a sound sequence in the native language has an impact on infants' word learning. We exploit the fact that Labial-Coronal (LC) words are more frequent than Coronal-Labial (CL) words in French, and that French-learning infants prefer LC over CL sequences at 10 months of age, to explore the possibility that LC structures might be learned more easily, and thus at an earlier age, than CL structures. Eye movements of French-learning 14- and 16-month-olds were recorded while they watched animated cartoons in a word learning task. The experiment involved four trials testing LC sequences and four trials testing CL sequences. Our data reveal that 16-month-olds were able to learn both the LC and the CL words, while 14-month-olds were only able to learn the LC words, which are the words with the more frequent phonotactic pattern. The present results provide evidence that infants' knowledge of their native language's phonotactic patterns influences their word learning: words with a frequent phonotactic structure can be acquired at an earlier age than those with a lower probability. Developmental changes are discussed and integrated with previous findings.
Cognition, 2010
Recent research has suggested that consonants and vowels serve different roles during language processing. While statistical computations are preferentially made over consonants but not over vowels, simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we tested whether a lifelong experience with language is necessary for vowels to become the preferred target for structural generalizations. We presented 11-month-old infants with a series of CVCVCV nonsense words in which all vowels were arranged according to an AAB rule (the first and second vowels were the same, while the third vowel was different). During the test, we presented infants with new words whose vowels either followed or violated the aforementioned rule. We found that infants readily generalized this rule when it was implemented over the vowels. However, when the same rule was implemented over the consonants, infants could not generalize it to new instances. These results parallel those found with adult participants and demonstrate that several years of experience learning a language are not necessary for functional asymmetries between consonants and vowels to appear.
Proceedings of the 27th …, 2003
2020
Humans learn much about their language while still in the womb. Prenatal exposure has been repeatedly shown to affect newborn infants’ processing of the prosodic characteristics of native language speech. Little is known about whether and how prenatal exposure affects infants’ perception of speech sound segments. Here we simulated prenatal learning of vowels in two virtual fetuses whose mothers spoke (slightly) different languages. The learners were two-layer neural networks and were each exposed to vowel tokens sampled from an existent five-vowel language (Spanish and Czech, respectively). The input acoustic properties approximated the speech signal that could possibly be heard in the intrauterine environment, and the learners’ auditory system was relatively immature. Without supervision, the virtual fetuses came to warp the continuous acoustic signal into “proto-categories” that were specific to their linguistic environment. Both learners came to create two categorization patterns...
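The paper's exact architecture is not reproduced here, but the toy sketch below illustrates the general idea of unsupervised, distribution-driven category formation in a two-layer network: output units compete for each input token, the winner moves toward it, and units settle on the dense regions of the input. The (F1, F2)-like values, unit count, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vowel tokens as (F1, F2)-like values in arbitrary units;
# a real simulation would use womb-filtered acoustics, as in the paper.
tokens = np.concatenate([
    rng.normal(loc=[3.0, 12.0], scale=0.5, size=(200, 2)),  # one vowel region
    rng.normal(loc=[7.0, 9.0], scale=0.5, size=(200, 2)),   # another region
])
rng.shuffle(tokens)

# Output layer: a few units with random initial weights.
weights = rng.uniform(2.0, 13.0, size=(5, 2))
lr = 0.05  # learning rate (an illustrative choice)

# Unsupervised competitive learning: the nearest unit moves toward each token.
for x in tokens:
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    weights[winner] += lr * (x - weights[winner])

# Winning units end up parked on the dense regions ("proto-categories").
print(np.round(weights, 1))
```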
Cognition, 2007
Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language (Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49-63). In an artificial language learning manipulation, Maye, Werker, and Gerken (Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101-B111) found that infants change their speech sound categories as a function of the distributional properties of the input. For such a distributional learning mechanism to be functional, however, it is essential that the input speech contain distributional cues to support such perceptual learning. To test this, we recorded Japanese and English mothers teaching words to their infants. Acoustic analyses revealed language-specific differences in the distributions of the cues used by mothers (or cues present in the input) to distinguish the vowels. The robust availability of these cues in maternal speech adds support to the hypothesis that distributional learning is an important mechanism whereby infants establish native language phonetic categories.
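A hedged sketch of how distributional cues in the input can support category learning: if productions of an acoustic cue are bimodal, a two-category model fits the data better than a one-category model. This uses scikit-learn's Gaussian mixtures as a stand-in for the infant's distributional learning mechanism; the cue values are simulated, not the study's measurements.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Simulated maternal productions of one acoustic cue (e.g., vowel duration
# in ms); two overlapping modes stand in for two vowel categories.
cue = np.concatenate([rng.normal(80, 8, 300), rng.normal(130, 8, 300)])
X = cue.reshape(-1, 1)

# Compare one-category vs. two-category accounts of the same data by BIC:
for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, round(gm.bic(X)))  # lower BIC for k=2 when the input is bimodal
```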
Studies in Language Companion Series, 1994
Traduction et Langues , 2008
This study aims at a better understanding of human speech processing, and specifically explores the task that infants face while learning their native language. It draws on 30 years of research asking which developments in early infancy allow word learning to proceed rapidly before two years of age. Infants are born with perceptual biases that facilitate attention to speech and the encoding of its properties; over the first several months of life, these biases increasingly conform to native-language patterns. We suggest that word learning is another bootstrapping phenomenon in developmental research. This does not mean it can be reduced to perception and learning. Instead, we argue that perceptual learning provides a foundation upon which abstract linguistic units can be built. Just as phonological patterns act as cues to morphological and syntactic structure, and just as naive concepts allow infants to learn more complex ones, perceptual learning allows the segmentation and representation of word forms that, once mapped to concepts, bootstrap the process of word learning and lead to a qualitative improvement in its efficiency.
Language Learning and Development, 2005
In this article, we present a summary of recent research linking speech perception in infancy to later language development, as well as a new empirical study examining that linkage. Infant phonetic discrimination is initially language universal, but a decline in phonetic discrimination occurs for nonnative phonemes by the end of the 1st year. Exploiting this transition in phonetic perception between 6 and 12 months of age, we tested the hypothesis that the decline in nonnative phonetic discrimination is associated with native-language phonetic learning. Using a standard behavioral measure of speech discrimination in infants at 7 months and measures of their language abilities at 14, 18, 24, and 30 months, we show (a) a negative correlation between infants' early native and nonnative phonetic discrimination skills and (b) that native- and nonnative-phonetic discrimination skills at 7 months differentially predict future language ability. Better native-language discrimination at 7 months predicts accelerated later language abilities, whereas better nonnative-language discrimination at 7 months predicts reduced later language abilities. The discussion focuses on (a) the theoretical connection between speech perception and language development and (b) the implications of these findings for the putative "critical period" for phonetic learning.
British journal of psychology (London, England : 1953), 2016
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating 'perceptual narrowing' in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches.
Infant Behavior & Development, 2016
In their first year, infants' perceptual abilities zoom in on only those speech sound contrasts that are relevant for their language. Infants' lexicons do not yet contain sufficient minimal pairs to explain this phonetic categorization process. Therefore, researchers suggested a bottom-up learning mechanism: infants create categories aligned with the frequency distributions of sounds in their input. Recent evidence shows that this bottom-up mechanism may be complemented by the semantic context in which speech sounds occur, such as simultaneously present objects. To test this hypothesis, we investigated whether discrimination of a non-native vowel contrast improves when sounds from the contrast were paired consistently or randomly with two distinct visually presented objects, while the distribution of speech tokens suggested a single broad category. This was assessed in two ways: computationally, namely in a neural network simulation, and experimentally, namely in a group of 8-month-old infants. The neural network, trained with a large set of sound-meaning pairs, revealed that two categories emerge only if sounds are consistently paired with objects. A group of 49 real 8-month-old infants did not immediately show sensitivity to the pairing condition; a later test at 18 months with some of the same infants, however, showed that this sensitivity at 8 months interacted with their vocabulary size at 18 months. This interaction can be explained by the idea that infants with larger future vocabularies are more positively influenced by consistent training (and/or more negatively influenced by inconsistent training) than infants with smaller future vocabularies. This suggests that consistent pairing with distinct visual objects can help infants to discriminate speech sounds even when the auditory information does not signal a distinction. Together our results give computational as well as experimental support for the idea that semantic context plays a role in disambiguating phonetic auditory input.
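The study's actual network is not reproduced here; the toy sketch below only illustrates why consistent pairing carries category information. When a single broad acoustic distribution hides two vowel types, grouping tokens by a consistently paired visual object separates their acoustic means, whereas random pairing does not. All values are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# One broad acoustic distribution hiding two vowel types (simulated values).
sound = np.concatenate([rng.normal(-0.5, 1.0, 300), rng.normal(0.5, 1.0, 300)])
vowel = np.repeat([0, 1], 300)  # the hidden category of each token

def object_split(obj):
    """Group tokens by co-occurring visual object; compare acoustic means."""
    return abs(sound[obj == 0].mean() - sound[obj == 1].mean())

consistent = vowel                     # each vowel always with "its" object
inconsistent = rng.permutation(vowel)  # objects assigned at random

print(round(object_split(consistent), 2))    # ~1.0: objects separate the sounds
print(round(object_split(inconsistent), 2))  # ~0.0: no acoustic separation
```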
Recently, several studies have argued that infants capitalize on the statistical properties of natural languages to acquire the linguistic structure of their native language, but the kinds of constraints which apply to statistical computations remain largely unknown. Here we explored French-learning infants' perceptual preference for labial-coronal (LC) words over coronal-labial (CL) words (e.g. preferring bat over tab) to determine whether this phonotactic preference is based on the acquisition of the statistical properties of the input based on a single phonological feature (i.e. place of articulation), multiple features (i.e. place and manner of articulation), or individual consonant pairs. Results from four experiments revealed that infants had a labial-coronal bias for nasal sequences (Experiment 1) and for all plosive sequences (Experiments 2 and 4) but a coronal-labial bias for all fricative sequences (Experiments 3 and 4), independently of the frequencies of individual consonant pairs. These results establish for the first time that constellations of multiple phonological features, defining broad consonant classes, constrain the early acquisition of phonotactic regularities of the native language.
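As an illustration of what statistics computed over feature classes (rather than individual consonant pairs) look like, the sketch below tallies labial-coronal versus coronal-labial sequences in a hypothetical mini-lexicon of CVC words; the place-feature table and word list are assumptions for illustration only, not actual French counts.

```python
# Place features for a handful of consonants (an illustrative assumption).
PLACE = {"b": "labial", "p": "labial", "m": "labial",
         "d": "coronal", "t": "coronal", "n": "coronal"}

def lc_cl_counts(words):
    """Tally labial->coronal vs. coronal->labial consonant pairs in CVC words."""
    counts = {"LC": 0, "CL": 0}
    for w in words:
        pair = (PLACE.get(w[0]), PLACE.get(w[2]))
        if pair == ("labial", "coronal"):
            counts["LC"] += 1
        elif pair == ("coronal", "labial"):
            counts["CL"] += 1
    return counts

# Hypothetical mini-lexicon, not actual French data:
print(lc_cl_counts(["bat", "mad", "pan", "tab", "dam"]))  # {'LC': 3, 'CL': 2}
```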