2000, Trends in Cognitive Sciences
2011
Adults' phonotactic learning is affected by perceptual biases. One such bias concerns learning of constraints affecting groups of sounds: all else being equal, learning constraints affecting a natural class (a set of sounds sharing some phonetic characteristic) is easier than learning a constraint affecting an arbitrary set of sounds. This perceptual bias could be a given, for example, the result of innately guided learning; alternatively, it could be due to human learners' experience with sounds. Using artificial grammars, we investigated whether such a bias arises in development, or whether it is present as soon as infants can learn phonotactics. Seven-month-old English-learning infants fail to generalize a phonotactic pattern involving fricatives and nasals, which do not form a coherent phonetic group, but succeed with the natural class of oral and nasal stops. In this paper, we report an experiment that explored whether those results also follow in a cohort of 4-month-old...
Nature Reviews Neuroscience, 2004
Infants learn language with remarkable speed, but how they do it remains a mystery. New data show that infants use computational strategies to detect the statistical and prosodic patterns in language input, and that this leads to the discovery of phonemes and words. Social interaction with another human being affects speech learning in a way that resembles communicative learning in songbirds. The brain's commitment to the statistical and prosodic patterns that are experienced early in life might help to explain the long-standing puzzle of why infants are better language learners than adults. Successful learning by infants, as well as constraints on that learning, are changing theories of language acquisition.
STATISTICAL LEARNING: Acquisition of knowledge through the computation of information about the distributional frequency with which certain items occur in relation to others, or probabilistic information in sequences of stimuli, such as the odds (transitional probabilities) that one unit will follow another in a given language.
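The transitional probabilities defined above can be made concrete with a minimal sketch. The syllable stream and the two "words" (pa-do and ti-ku) are invented for illustration; the point is that word-internal transitions are highly predictable while transitions across word boundaries are not.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next | current) from bigram counts over a syllable stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unit_counts = Counter(syllables[:-1])
    return {(a, b): c / unit_counts[a] for (a, b), c in pair_counts.items()}

# Invented stream built from two "words", pa-do and ti-ku, in varying order:
# word-internal transitions are perfectly predictable, word boundaries less so.
stream = ["pa", "do", "ti", "ku", "pa", "do", "pa", "do", "ti", "ku", "ti", "ku"]
tp = transitional_probabilities(stream)
print(tp[("pa", "do")])  # 1.0  (word-internal transition)
print(tp[("do", "ti")])  # ~0.67 (word-boundary transition)
```

A dip in transitional probability, as at the do-ti boundary here, is the kind of statistical cue the studies above propose infants use to segment words.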
European Review, 2008
The study of language acquisition during the first year of life is reviewed. We identified three areas that have contributed to our understanding of how the infant copes with linguistic signals to attain the most basic properties of its native language. Distributional properties present in the incoming utterances may allow infants to extract word candidates in the speech stream, as shown in the impoverished conditions of artificial grammar studies. This procedure is important because it would work well for most natural languages. We also highlight another important mechanism that allows infants to induce structure from very scarce data. In fact, humans tend to project structural conjectures after being presented with only a few utterances. Finally, we illustrate constraints on processing that derive from perceptual and memory functions that arose much earlier during the evolutionary history of the species. We conclude that all of these mechanisms are important for infants to gain access to their native language.
The study intends to give an overview of the observational and experimental methods in today's psycholinguistics, which sees language acquisition as a life-long experience from fetus to adolescent and even beyond. It also offers an informative guide to the history and evolution of empirical, applied psycholinguistic techniques, aiming to map and describe background mechanisms of language processing, perception, production, and acquisition, giving us an insight into fetal sensitivity to speech input, and to the intricacies of language processing both in the preverbal and in the verbal stages.
British Journal of Psychology, 2016
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating 'perceptual narrowing' in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches.
The Journal of the Acoustical Society of America, 1978
In order to investigate the nature of some processes in speech acquisition, synthetic speechlike stimuli were played to groups of English and French children between two and fourteen years of age. The acoustic parameters varied were voice onset time and first-formant transition. Three stages were observed in the development of children's labeling behavior. These were called scattered labeling, progressive labeling, and categorical labeling, respectively. Individual response patterns were examined. The first stage (scattered labeling) includes mostly children of two to three years of age for the English and up to about four for the French. Children label most confidently those stimuli closest in physical terms to those of their natural speech environment. All stimuli with intermediate VOT values are labeled quasirandomly. Progressive labeling behavior is found mostly amongst children aged three and four for the English, up to about seven for the French. Children's response curves go progressively, almost linearly, from one type of label (voiced) to the other (voiceless): response follows stimulus continuum. Categorical labeling becomes the dominant pattern only at the age of five to six for the English, one or two years later for the French. This development was found to be highly significant (p < 0.003 for both English and French, using Kendall's tau measure of correlation). English children learn to make use of the F1 transition feature around five years, whereas French children never use it as a voicing cue. French children will have fewer features than English children at their disposal: this may account for the later age at which French children, as a group, reach the various labeling behavior stages, and for labeling curves being less sharply categorical for French than for English children.
These findings indicate that categorical labeling for speech sounds is not innate but learned through a relatively slow process which is to a certain extent language specific. The implications of the results are discussed in the light of previous work in the field.
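The Kendall's tau correlation used above to relate age to labeling stage can be illustrated with a minimal sketch (the tau-a variant, with no tie correction). The ages and stage scores below are invented for illustration and are not the study's data.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        sign = (x[i] - x[j]) * (y[i] - y[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Invented data: child age (years) vs. labeling-stage score (1 = scattered,
# 2 = progressive, 3 = categorical). Ties keep tau-a below 1 even for a
# perfectly monotone relation.
ages = [2, 3, 4, 5, 6]
stage = [1, 1, 2, 3, 3]
print(kendall_tau(ages, stage))  # 0.8
```

A tau near 1 with a small p-value, as reported above, indicates that labeling stage rises consistently with age.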
Annual Review of Psychology, 1999
To comprehend and produce language, we must be able to recognize the sound patterns of our language and the rules for how these sounds "map on" to meaning. Human infants are born with a remarkable array of perceptual sensitivities that allow them to detect the basic properties that are common to the world's languages. During the first year of life, these sensitivities undergo modification reflecting an exquisite tuning to just that phonological information that is needed to map sound to meaning in the native language. We review this transition from language-general to language-specific perceptual sensitivity that occurs during the first year of life and consider whether the changes propel the child into word learning. To account for the broad-based initial sensitivities and subsequent reorganizations, we offer an integrated transactional framework based on the notion of a specialized perceptual-motor system that has evolved to serve human speech, but which functions in concert with other developing abilities. In so doing, we highlight the links between infant speech perception, babbling, and word learning.
Cognition, 2007
Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49–63]. In an artificial language learning manipulation, Maye, Werker, and Gerken [Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101-B111] found that infants change their speech sound categories as a function of the distributional properties of the input. For such a distributional learning mechanism to be functional, however, it is essential that the input speech contain distributional cues to support such perceptual learning. To test this, we recorded Japanese and English mothers teaching words to their infants. Acoustic analyses revealed language-specific differences in the distributions of the cues used by mothers (or cues present in the input) to distinguish the vowels. The robust availability of these cues in maternal speech adds support to the hypothesis that distributional learning is an important mechanism whereby infants establish native language phonetic categories.
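The distributional-learning idea behind the Maye, Werker, and Gerken result can be sketched minimally: a bimodal distribution of an acoustic cue supports two categories, a unimodal distribution only one. The VOT values and bin width below are invented for illustration, and a simple histogram peak count stands in for the infant's category formation.

```python
from collections import Counter

def histogram_modes(values, bin_width):
    """Bin the values and count local maxima in the histogram:
    a bimodal input suggests two categories, a unimodal input one."""
    bins = Counter(round(v / bin_width) for v in values)
    lo, hi = min(bins), max(bins)
    counts = [bins.get(i, 0) for i in range(lo, hi + 1)]
    modes = 0
    for i, c in enumerate(counts):
        left = counts[i - 1] if i > 0 else 0
        right = counts[i + 1] if i < len(counts) - 1 else 0
        if c > left and c > right:
            modes += 1
    return modes

# Invented VOT values (ms): bimodal exposure clusters near 10 and 70,
# unimodal exposure clusters near 40.
bimodal = [8, 10, 12, 10, 9, 68, 70, 72, 70, 71]
unimodal = [38, 40, 42, 40, 39, 41, 40, 38, 42, 40]
print(histogram_modes(bimodal, 10))   # 2
print(histogram_modes(unimodal, 10))  # 1
```

The acoustic analyses described above ask precisely whether maternal speech supplies such bimodal cue distributions for the infant to exploit.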
Infancy, 2011
Languages instantiate many different kinds of dependencies, some holding between adjacent elements and others holding between non-adjacent elements. During the past decades, many studies have shown how infants' initial language-general abilities change into abilities that are attuned to the language they are acquiring. These studies have shown that during the second half of their first year of life, infants become sensitive to the prosodic, phonetic and phonotactic properties of their mother tongue holding between adjacent elements. However, at present, no study has established sensitivity to non-adjacent phonological dependencies, which are a key feature of human languages. Therefore, the present dissertation investigates whether infants are able to detect, learn and use non-adjacent phonotactic dependencies. The Labial-Coronal bias, corresponding to the prevalence of structures starting with a labial consonant followed by a coronal consonant (LC, e.g. bat) over the opposite pattern (CL, e.g. tab), was used to explore infants' sensitivity to non-adjacent phonological dependencies. Our results establish that by 10 months of age French-learning infants are sensitive to non-adjacent phonological dependencies (experimental part 1.1). In addition, we explored the level of generalization of these acquisitions. Frequency analyses of the French lexicon showed that the LC bias is clearly present for plosive and nasal sequences but not for fricatives. The results of a series of experiments suggest that infants' preference patterns are not guided by overall cumulative frequencies in the lexicon, or by frequencies of individual pairs, but by consonant classes defined by manner of articulation (experimental part 1.2). Furthermore, we explored whether the LC bias is triggered by maturational constraints or by exposure to the input.
To do so, we first tested the emergence of the LC bias in a population with maturational differences, namely infants born prematurely (± 3 months before term), and compared their performance to a group of full-term infants matched in maturational age and a group of full-term infants matched in chronological age. Our results indicate that the pattern of preterm 10-month-olds much more closely resembles that of the full-term 10-month-olds (same listening age) than that of the full-term 7-month-olds (same maturational age; experimental part 1.3). Secondly, we tested a population learning a language with no LC bias in its lexicon, namely Japanese-learning infants. This set of experiments failed to show any preference for either LC or CL structures in Japanese-learning infants (experimental part 1.4). Taken together, these results suggest that the LC bias is triggered by exposure to the linguistic input and not by maturational constraints alone. Finally, we explored whether, and if so when, phonological acquisitions during the first year of life constrain early lexical development at the level of word segmentation and word learning. Our results show that words with frequent phonotactic structures are segmented (experimental part 2.1) and learned (experimental part 2.2) at an earlier age than words with a less frequent phonotactic structure. These results suggest that prior phonotactic knowledge can constrain later lexical acquisition even when it involves a non-adjacent dependency.
Acquisition of non-adjacent phonological dependencies: From speech perception to lexical acquisition. Nayeli González Gómez, 2012.
Early speech perception. Many studies have shown that during the second half of the first year of life many changes occur in infants' initial speech perception abilities. More importantly, the kinds of changes that happen in this period seem to be specifically linked to the input to which infants are exposed.
In this section, we review the literature on this topic, highlighting the kinds of changes that occur during this period at the segmental and suprasegmental levels.
Prosodic information. Prosody refers to the suprasegmental properties of language, including the stress, rhythm and intonation of speech. Developmental research at this level investigates whether, and if so when, infants react to differences in tones, stress patterns, rhythms and other prosodic dimensions.
Initial abilities. Many studies have shown that sensitivity to prosodic properties can be found very early in life, even before birth. Different studies have shown that near-term fetuses are able to distinguish low from high musical notes (Lecanuet, Granier
2014
Words are made up of speech sounds. Almost all accounts of child speech development assume that children learn the pronunciation of first language (L1) speech sounds by imitation, most claiming that the child performs some kind of auditory matching to the elements of ambient speech. However, there is evidence to support an alternative account and we investigate the non-imitative child behavior and well-attested caregiver behavior that this account posits using Elija, a computational model of an infant. Through unsupervised active learning, Elija began by discovering motor patterns, which produced sounds. In separate interaction experiments, native speakers of English, French and German then played the role of his caregiver. In their first interactions with Elija, they were allowed to respond to his sounds if they felt this was natural. We analyzed the interactions through phonemic transcriptions of the caregivers’ utterances and found that they interpreted his output within the framework of their native languages. Their form of response was almost always a reformulation of Elija’s utterance into well-formed sounds of L1. Elija retained those motor patterns to which a caregiver responded and formed associations between his motor pattern and the response it provoked. Thus in a second phase of interaction, he was able to parse input utterances in terms of the caregiver responses he had heard previously, and respond using his associated motor patterns. This capacity enabled the caregivers to teach Elija to pronounce some simple words in their native languages, by his serial imitation of the words’ component speech sounds. Overall, our results demonstrate that the natural responses and behaviors of human subjects to infant-like vocalizations can take a computational model from a biologically plausible initial state through to word pronunciation. This provides support for an alternative to current auditory matching hypotheses for how children learn to pronounce.
Developmental Science, 2009
Acoustical Science and Technology, 2007
Annual Review of Psychology, 2010
During the first year of life, infants pass important milestones in language development. We review some of the experimental evidence concerning these milestones in the domains of speech perception, phonological development, word learning, morphosyntactic acquisition, and bilingualism, emphasizing their interactions. We discuss them in the context of their biological underpinnings, introducing the most recent advances not only in language development, but also in neighboring areas such as genetics and the comparative research on animal communication systems. We argue for a theory of language acquisition that integrates behavioral, cognitive, neural, and evolutionary considerations and proposes to unify previously opposing theoretical stances, such as statistical learning, rule-based nativist accounts, and perceptual learning theories.
2016
Recent years have seen a proliferation of adult phonological-learning studies ("artificial-language" experiments) employing a wide array of experimental tasks, instructions, and materials (reviewed in Moreton & Pater 2012a,b), in the hope of gaining experimental access to the inductive processes underlying first- or second-language acquisition. But there has been little investigation into what is actually going on in these experiments. Do different experimental situations engage different learning processes? If so, do those processes have different inductive biases? How are they related to the processes involved in L1 and L2 acquisition? Answers to these questions have implications both for methodology in particular and for cognitive science in general. Studies of non-linguistic (mainly visual) pattern learning have led psychologists to hypothesize two concurrent learning processes that have different properties and that are facilitated by different experimental conditions (Ashby et al., ...
Child Development, 2000
In this paper, we draw on recent developments in several areas of cognitive science that suggest that the lexicon is at the core of grammatical generalizations at several different levels of representation. Evidence comes from many sources, including recent studies on language processing in adults and on language acquisition in children. Phonological behavior is influenced very early by pattern frequency in the lexicon of the ambient language, and we propose that phonological acquisition might provide the initial bootstrapping into grammatical generalization in general. The phonological categories over which pattern frequencies are calculated, however, are neither transparently available in the audiovisual signal nor deterministically fixed by the physiological and perceptual capacities of the species. Therefore, we need several age-appropriate models of how the lexicon can influence a child's interactions with the ambient language over the course of phonological acquisition.
Journal of Experimental Psychology: Human Perception and Performance, 2003
Infants' long-term memory for the phonological patterns of words versus the indexical properties of talkers' voices was examined in 3 experiments using the Headturn Preference Procedure. Infants were familiarized with repetitions of 2 words and tested on the next day for their orientation times to 4 passages, 2 of which included the familiarized words. At 7.5 months of age, infants oriented longer to passages containing familiarized words when these were produced by the original talker. At 7.5 and 10.5 months of age, infants did not recognize words in passages produced by a novel female talker. In contrast, 7.5-month-olds demonstrated word recognition in both talker conditions when presented with passages produced by both the original and the novel talker. The findings suggest that talker-specific information can prime infants' memory for words and facilitate word recognition across talkers.
2016
Last year, my colleague Ian Howard and I published a paper in the Journal of Phonetics (Messum & Howard 2015) that discussed the mechanism by which young children learn to pronounce the speech sounds of their mother tongue (L1). The longstanding assumption has been that they do this by some form of imitation. We argued that on current evidence it is more likely that they do this through a mirroring process; with their caregivers as the 'mirror' in which infants and young children discover the linguistic significance of their vocal actions. This matters for the learning of second language (L2) pronunciation because many of our teaching practices are implicitly based on the idea that learning to produce sounds by listening first and then trying to copy what we have heard is 'natural' (or even that it is the only possible way for the production of new sounds to be learnt). If it is not natural, then we might want to reconsider our use of 'listen first' approaches for teaching speech sounds. These approaches are not notably successful and there is at least one well-developed and proven alternative. This article summarises the 2015 paper, concentrating on the parts of it that will be of most interest to Speak Out! readers. The paper was written for a special issue of the journal which was examining how speech is represented in the brain, hence the paper's title: Creating the cognitive form of phonological units: the speech sound correspondence problem in infancy could be solved by mirrored vocal interactions in infancy rather than by the imitation of speech sounds. In a second article, Roslyn Young and I will examine the nature of L2 speech sound learning and the different approaches taken to teaching sounds.