
NIH Public Access Author Manuscript
Cognition. Author manuscript; available in PMC 2011 March 1.
Published in final edited form as: Cognition. 2010 March; 114(3): 356–371. doi:10.1016/[Link].2009.10.009.

Implicit Statistical Learning in Language Processing: Word Predictability is the Key

Christopher M. Conway,
Department of Psychology, Saint Louis University, St. Louis, MO
Althea Bauernschmidt,
Department of Psychological Sciences, Purdue University
Sean Huang, and
Indiana University School of Medicine
David B. Pisoni
Department of Psychological and Brain Sciences, Indiana University, Bloomington

Abstract
Fundamental learning abilities related to the implicit encoding of sequential structure have been
postulated to underlie language acquisition and processing. However, there is very little direct
evidence to date supporting such a link between implicit statistical learning and language. In three
experiments using novel methods of assessing implicit learning and language abilities, we show that
sensitivity to sequential structure -- as measured by improvements to immediate memory span for
structurally-consistent input sequences -- is significantly correlated with the ability to use knowledge
of word predictability to aid speech perception under degraded listening conditions. Importantly, the
association remained even after controlling for participant performance on other cognitive tasks,
including short-term and working memory, intelligence, attention and inhibition, and vocabulary
knowledge. Thus, the evidence suggests that implicit learning abilities are essential for acquiring
long-term knowledge of the sequential structure of language -- i.e., knowledge of word predictability
-- and that individual differences in such abilities impact speech perception in everyday situations.
These findings provide a new theoretical rationale linking basic learning phenomena to specific
aspects of spoken language processing in adults, and may furthermore indicate new fruitful directions
for investigating both typical and atypical language development.

Keywords
statistical learning; implicit learning; speech perception; word predictability; individual differences;
language processing

Understanding the role that learning and memory abilities play in language acquisition and
processing remains an important challenge in the cognitive sciences. Towards this end, a major
advance has been in recognizing that language consists of complex, highly variable patterns

Correspondence concerning this article should be addressed to Christopher M. Conway, Department of Psychology, 3511 Laclede Ave.,
Saint Louis University, St. Louis, MO 63103. cconway6@[Link].
Portions of this research were presented at the 29th Annual Meeting of the Cognitive Science Society, Nashville, TN, August, 2007.
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting
proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could
affect the content, and all legal disclaimers that apply to the journal pertain.

occurring in sequence, and as such can be described in terms of statistical or distributional


relations among language units (Redington & Chater, 1997). Due to the probabilistic nature of
language, rarely is a spoken utterance completely unpredictable; most often, the next word in
a sentence will depend on the preceding context of the sentence (Rubenstein, 1973). Put another
way, what a language speaker considers to be a “meaningful” sentence can be quantified in
terms of how much the preceding context constrains or predicts the next spoken word (Miller
& Selfridge, 1950). Due to the apparent importance of context and word predictability in
language, sensitivity to such probabilistic relations among language units likely is crucial for
successful language learning and understanding.

It is not surprising, then, that it is now widely accepted that general abilities related to learning about complex structured patterns -- i.e., implicit statistical learning1 -- are important for language processing (Altmann, 2002; Conway & Christiansen, 2005; Conway & Pisoni, 2008; Gupta & Dell, 1999; Kirkham, Slemmer, Richardson, & Johnson, 2007; Kuhl, 2004; Pothos, 2007; Reber, 1967; Saffran, 2003; Turk-Browne, Junge, & Scholl, 2005; Ullman, 2004). Implicit learning is thought to be important for word segmentation (Saffran, Aslin, & Newport, 1996), word learning (Graf Estes, Evans, Alibali, & Saffran, 2007; Mirman, Magnuson, Graf Estes, & Dixon, 2008), the learning of phonotactic (Chambers, Onishi, & Fisher, 2003) and orthographic (Pacton, Perruchet, Fayol, & Cleeremans, 2001) regularities, aspects of speech production (Dell, Reed, Adams, & Meyer, 2000), and the acquisition of syntax (Gómez & Gerken, 2000; Ullman, 2004). What is more surprising, however, is that
despite the voluminous work on implicit learning, few if any studies have demonstrated a direct
causal link between implicit learning abilities and everyday language competence. Although
there is some evidence suggesting that implicit learning is disturbed in certain language-
impaired clinical populations (e.g., Evans, Saffran, & Robe-Torres, 2009; Howard, Howard, Japikse, & Eden, 2006; Plante, Gomez, & Gerken, 2002; Tomblin, Mainela-Arnold, & Zhang,
2007), other studies have revealed no such relationship between implicit learning and language
processing, and the reason for the discrepancy is not entirely clear (for additional discussion,
see Conway, Karpicke, & Pisoni, 2007).

We propose that if implicit learning supports language, then it ought to be possible to


demonstrate an empirical association between individual differences in implicit learning
abilities in healthy adults and some measure of language processing. However, a challenge lies
in choosing language and implicit learning tasks that purportedly tap into the same underlying
processes. Toward this end, we use Elman’s (1990) now classic paper as a theoretical
foundation, in which a connectionist model – a simple recurrent network (SRN) –was shown
to represent sequential order implicitly in terms of the effect it had on processing. The SRN
had a context layer that served to give it a memory for previous internal states. This memory,
coupled with the network’s learning algorithm, gave the SRN the ability to learn about structure
in sequential input, enabling it to predict the next element in a sequence, based on the preceding
context. Elman (1990) and many others since have used the SRN successfully to model both
language learning and processing (Christiansen & Chater, 1999) and, interestingly enough,
implicit learning (Cleeremans, 1993).
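To make this prediction mechanism concrete, the following is a minimal Python sketch of an Elman-style SRN, offered purely as an illustration rather than as the model used in the work cited above; the layer sizes, learning rate, one-step training scheme, and the toy training sequence are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class ElmanSRN:
    """Minimal simple recurrent network: the context layer is a copy of the
    previous hidden state, giving the network a memory for preceding elements."""

    def __init__(self, n_elements=4, n_hidden=10, lr=0.1):
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_elements))
        self.W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.5, (n_elements, n_hidden))
        self.lr = lr
        self.n_hidden = n_hidden

    def train_sequence(self, seq):
        """seq: list of element indices (0..n_elements-1); the network learns
        to predict element t+1 from the elements up to and including t."""
        context = np.zeros(self.n_hidden)
        n = self.W_in.shape[1]
        for cur, nxt in zip(seq[:-1], seq[1:]):
            x = np.eye(n)[cur]                      # one-hot current element
            target = np.eye(n)[nxt]                 # one-hot next element
            h = np.tanh(self.W_in @ x + self.W_ctx @ context)
            y = softmax(self.W_out @ h)             # predicted next-element distribution
            # One-step backprop: the context is treated as a fixed extra input,
            # so no backpropagation through time is required.
            d_out = y - target
            d_h = (self.W_out.T @ d_out) * (1 - h ** 2)
            self.W_out -= self.lr * np.outer(d_out, h)
            self.W_in -= self.lr * np.outer(d_h, x)
            self.W_ctx -= self.lr * np.outer(d_h, context)
            context = h                             # context layer copies the hidden state

srn = ElmanSRN()
for _ in range(100):
    srn.train_sequence([0, 1, 2, 3, 0, 1, 2, 3])    # toy sequence with repeating structure
```

After repeated exposure, the output distribution given a particular context comes to favor the elements that typically follow it, which is the sense in which prediction of the next element emerges from learning sequential structure.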

The crucial commonality between implicit (sequence) learning and language learning and
processing may be the ability to encode and represent sequential input, using preceding context
to implicitly predict upcoming units. To directly test this hypothesis, we explore whether
individual differences in implicit learning abilities are related to how well one is able to use

1We consider implicit learning and statistical learning to refer to the same underlying phenomenon: inducing structure from input
following exposure to multiple exemplars (and see Perruchet & Pacton, 2006). For brevity, we use the term “implicit learning” throughout
the remainder of this paper.


sentence context – i.e., word predictability -- to guide spoken language perception under
degraded listening conditions.

Word Predictability in Spoken Language Perception


Previous work has shown that knowledge of the sequential probabilities in language can enable
a listener to better identify – and perhaps even implicitly predict – the next word that will be
spoken (Miller, Heise, & Lichten, 1951; Onnis, Farmer, Baroni, Christiansen, & Spivey, in
press; Rubenstein, 1973; cf. Bar, 2007). This use of top-down knowledge becomes especially
apparent when the speech signal is perceptually degraded, which is the case in many real-world
situations. When ambient noise degrades parts of a spoken utterance, the listener must rely on
long-term knowledge of the sequential regularities in language to implicitly predict the next
word that will be spoken based on the previous spoken words, thus improving speech
perception and comprehension (Elliott, 1995; Kalikow, Stevens, & Elliott, 1977; McClelland,
Mirman, & Holt, 2006; Miller, et al., 1951; Pisoni, 1996).

For example, consider the following two sentences, which end with highly predictable and
non-predictable endings, respectively:
1. Her entry should win first prize.
2. The arm is riding on the beach.

When these two sentences are presented to participants under degraded listening conditions,
long-term knowledge of language structure can improve perception of the final word in
sentence (1) but not in (2). We argue then, that performance on the first type of sentence ought
to be more closely associated with fundamental implicit learning abilities, because it relies on
one’s knowledge of word predictability that accrued implicitly over many years of exposure
to language. On the other hand, performance on the second type of sentence simply relates to
how well one perceives speech in noise, where knowledge of word predictability is less useful.
Bilger and Rabinowitz (1979) further suggested a metric for how well an individual subject can make use of context and word predictability in spoken language. Their metric is computed by subtracting how well one perceives the final word in low- or zero-predictability sentences (sentences of type 2) from how well one perceives the final word in high-predictability sentences (sentences of type 1). This difference score provides a means of assessing how well an individual can use word predictability, based on the sentence context, to aid speech perception.

We propose that implicit learning abilities are used to implicitly encode the word order
regularities of language, which, once learned, can be used to improve speech perception under
degraded listening conditions. In the current study, we directly tested this hypothesis by
assessing adult participants on both implicit learning and speech perception tasks. In the
implicit learning tasks, learning was assessed by improvements to immediate memory span for
statistically-consistent, structured sequences (Botvinick, 2005; Conway et al., 2007; Jamieson
& Mewhort, 2005; Karpicke & Pisoni, 2004; Miller & Selfridge, 1950). This method for
measuring implicit learning, based on improvement in the capacity of immediate memory, is
arguably superior to the methods typically used because the dependent measure is indirect
(Redington & Chater, 2002); that is, it does not require an explicit judgment from the
participant. In the speech perception tasks, which also rely on an indirect processing measure,
we use the difference score suggested by Bilger and Rabinowitz (1979), in which speech
perception performance for highly predictable sentences and zero-predictability sentences
under degraded listening conditions is measured (Elliott, 1995; Kalikow et al., 1977). Based
on the preceding considerations, we predict that performance on the implicit learning task will
be correlated with the difference score which reflects sensitivity to word predictability in
spoken sentence perception.


Experiment 1
In the first experiment, participants engaged in a visual implicit learning task that indirectly
assessed learning through improvements to immediate memory span for sequences containing
redundant statistical structure (Conway et al., 2007; Karpicke & Pisoni, 2004; Miller &
Selfridge, 1950). Participants also completed a speech perception in noise task that used
degraded sentences varying in the predictability of the final word. If implicit learning abilities
are important for acquiring long-term knowledge of sequential probabilities of words in
sentences, we should expect that performance on the learning task will be positively correlated
with the difference score metric derived from the speech perception task.

Method
Participants—Twenty-three undergraduate students (age 18–22 years old) at Indiana
University received course credit for their participation. All subjects were native speakers of
English and reported no history of a hearing loss, speech impairment, or other cognitive/
perceptual/motor impairments at the time of testing.

Apparatus—For the implicit learning task, a Magic Touch® touch-sensitive monitor


displayed visual sequences and recorded participant responses. For the sentence perception
task, digital audio recordings were played through Beyer Dynamic DT-100 headphones.

Stimulus Materials
Visual implicit learning task: We used two artificial grammars to generate the stimuli (cf.
Jamieson & Mewhort, 2005). These grammars, depicted in Table 1, specify the probability of
a particular element occurring given the preceding element. The grammar on the left was used
to create constrained sequences whereas the control grammar on the right was used to create
pseudorandom (unconstrained) sequences (all stimuli are listed in Appendix A). For each
sequence, the starting element (1–4) was randomly determined and then the probabilities were
used to determine each subsequent element, until a desired length was reached.

The constrained grammar was used to generate 48 unique sequences for the learning phase and
20 sequences for the test phase. The control grammar was used to generate 20 sequences for the test phase as well.
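As an illustration of this generation procedure, the sketch below treats the grammar as a first-order Markov process; the transition probabilities, and the sequence lengths of 4-8, are placeholders rather than the values in Table 1 (which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder transition matrices standing in for Table 1.
# Entry [i, j] = probability that element j+1 follows element i+1.
constrained = np.array([
    [0.10, 0.60, 0.20, 0.10],
    [0.10, 0.10, 0.70, 0.10],
    [0.20, 0.10, 0.10, 0.60],
    [0.60, 0.20, 0.10, 0.10],
])
unconstrained = np.full((4, 4), 0.25)        # pseudorandom control grammar

def generate_sequence(transitions, length):
    """Pick a random starting element (1-4), then sample each subsequent
    element from the probabilities conditioned on the preceding element."""
    seq = [int(rng.integers(1, 5))]
    while len(seq) < length:
        probs = transitions[seq[-1] - 1]
        seq.append(int(rng.choice([1, 2, 3, 4], p=probs)))
    return seq

# 48 constrained learning sequences and 20 test sequences of each type
# (lengths 4-8 here are an assumption for illustration).
learning = [generate_sequence(constrained, int(rng.integers(4, 9))) for _ in range(48)]
test_constrained = [generate_sequence(constrained, int(rng.integers(4, 9))) for _ in range(20)]
test_control = [generate_sequence(unconstrained, int(rng.integers(4, 9))) for _ in range(20)]
```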

Auditory-only sentence perception task: We used 50 English sentences that varied in terms
of the final word’s predictability (Kalikow et al., 1977): 25 high-predictability sentences with
a final target word that is predictable given the preceding context of the sentence; 25 zero-
predictability sentences with a final target word that is not predictable (see Appendix B). The
two sets of sentences were balanced in terms of length and word frequency (for details, see
Clopper & Pisoni, 2006). All 50 sentences were spoken by a male speaker and were acoustically
degraded by processing them with a sinewave vocoder ([Link]) that reduced
the signal to 6 spectral channels.
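For readers unfamiliar with channel vocoding, the sketch below illustrates the general technique of reducing speech to a small number of spectral channels with sine carriers; the filter bank spacing, filter orders, envelope cutoff, and frequency range are assumptions and are not the settings of the software actually used to prepare the stimuli.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def sine_vocode(x, fs, n_channels=6, f_lo=100.0, f_hi=7000.0, env_cutoff=50.0):
    """Crude sine-wave vocoder: band-pass the signal into n_channels, extract
    each band's amplitude envelope, and use it to modulate a sine carrier at
    that band's centre frequency. Only the coarse spectral envelope survives,
    which is what makes the speech 'degraded'."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(bp, x)
        env = np.abs(hilbert(band))                                  # amplitude envelope
        lp = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
        env = sosfiltfilt(lp, env)                                   # smooth the envelope
        out += env * np.sin(2 * np.pi * np.sqrt(lo * hi) * t)        # sine carrier
    return out * (np.abs(x).max() / (np.abs(out).max() + 1e-12))     # roughly match peak level
```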

Procedure—All participants completed the implicit learning task first and the sentence
perception task second.

Visual implicit learning task: Input sequences consisted of colored squares (red, blue, yellow,
green) appearing one at a time, in one of four possible quadrants on the screen (upper left,
upper right, lower left, lower right). The task was to reproduce each sequence immediately
following presentation by touching the colored squares displayed on the touch-sensitive
monitor in the correct order. Figure 1 is a depiction of the task. No feedback was given. For
each participant, the mapping of color to screen location was randomly determined, as was the


mapping between the four sequence elements (1–4) to each of the four quadrants/colors;
however, for each subject, the mapping remained consistent across all trials.

Unbeknownst to participants, the task consisted of two parts, a learning phase and a test phase,
which differed only in terms of the sequences used. In the learning phase, the 48 constrained
learning sequences were presented once each, in random order. After completing the learning
phase, the experiment seamlessly transitioned into the test phase, which used the 20 novel
constrained and 20 unconstrained test sequences, presented in random order, once each.

Auditory-only sentence perception task: In the speech perception task, participants were told
they would listen to spoken sentences that were distorted, making them difficult to perceive.
Their task was to identify the last word in each sentence and write that word down on a sheet
of paper provided to them. Sentences were presented using a self-paced format. The 50
sentences described above were presented in random order.

Results
Data from three participants were excluded from the final analyses because their performance
on one or both tasks was greater than two standard deviations from the mean, leaving a total
of twenty participants included. This was done in order to reduce the undesirable effect that
outliers might have on the correlation results.

In the implicit learning task, a sequence was scored as correct if the participant reproduced it correctly in its entirety. Span scores for the "grammatical" (i.e., constrained) and "ungrammatical" (i.e., pseudorandom) test sequences were calculated using a weighted span method, in which the number of correct test sequences at a given length was multiplied by that length, and the resulting scores at all lengths were then summed. Equations 1 and 2 show the grammatical (Gspan) and ungrammatical (Uspan) span scores, respectively.

Gspan = Σ_L (gc × L)    (1)

Uspan = Σ_L (uc × L)    (2)

In Equation 1, gc refers to the number of grammatical test sequences correctly reproduced at a given length, and L refers to the length. For example, if a participant correctly reproduced 4 grammatical test sequences of length 4, 4 of length 5, 3 of length 6, 2 of length 7, and 1 of length 8, then the Gspan score would be computed as (4*4 + 4*5 + 3*6 + 2*7 + 1*8) = 76.

The Uspan score is calculated in the same manner, using the number of ungrammatical test sequences correctly reproduced at a given length, uc.

For each subject we also calculated a learning score (Equation 3), which is the difference between the grammatical and ungrammatical span scores. The LRN score measures the extent to which sequence memory spans improved for constrained sequences relative to pseudorandom sequences.

LRN = Gspan − Uspan    (3)
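The scoring in Equations 1-3 can be expressed compactly in code; the sketch below reproduces the worked Gspan example from the text, while the ungrammatical counts shown are hypothetical.

```python
def weighted_span(correct_by_length):
    """Weighted span (Equations 1 and 2): the number of test sequences
    reproduced entirely correctly at each length, multiplied by that length
    and summed across lengths."""
    return sum(length * n_correct for length, n_correct in correct_by_length.items())

# Worked example from the text: 4 correct at length 4, 4 at length 5,
# 3 at length 6, 2 at length 7, and 1 at length 8.
gspan = weighted_span({4: 4, 5: 4, 6: 3, 7: 2, 8: 1})   # -> 76
uspan = weighted_span({4: 3, 5: 2, 6: 1, 7: 0, 8: 0})   # hypothetical counts -> 28
lrn = gspan - uspan                                      # LRN score (Equation 3) -> 48
```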


For the sentence perception task, a sentence was scored correct if the participant wrote down the correct final word. For each participant, a word predictability difference score was calculated (Equation 4), which is the proportion of correctly identified target words in high-predictability sentences (HighPredcorr) minus the proportion of correctly identified target words in zero-predictability sentences (ZeroPredcorr). This difference score reflects the participant's ability to make use of context and word predictability to better perceive degraded speech.

Word predictability difference score = HighPredcorr − ZeroPredcorr    (4)

A summary of the descriptive statistics is shown in Table 2. The average implicit learning score
was significantly greater than 0 (t(25) = 2.2, p < .05), demonstrating that as a group, participants
showed better memory for predictable test sequences compared to pseudorandom sequences.
For the sentence perception task, participants’ correct identification of high-predictability and
zero-predictability target words was 74.0% and 55.0%, respectively. The word predictability
difference score was significantly greater than 0 (t(19)=6.87; p<.001), showing that participants
were better on the task when context was present (i.e., when the final word in the sentence was
highly predictable based on the preceding context).

We next computed a Pearson correlation between implicit learning and the word predictability
score. If implicit learning is associated with long-term knowledge of word predictability in
sentences, we would expect these two scores to be significantly positively correlated. This in
fact was the case (r=.458, p<.05).
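Computationally, this analysis is just Equation 4 followed by a Pearson correlation across participants; the sketch below shows the mechanics using simulated scores only, not the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated per-participant scores (illustrative only, not the real data).
n = 20
lrn = rng.normal(10, 8, n)                                  # implicit learning (LRN) scores
high_pred = np.clip(0.60 + 0.01 * lrn + rng.normal(0, 0.08, n), 0, 1)
zero_pred = np.clip(rng.normal(0.55, 0.10, n), 0, 1)
word_pred = high_pred - zero_pred                           # Equation 4 difference score

r, p = stats.pearsonr(lrn, word_pred)
print(f"r = {r:.3f}, p = {p:.3f}")
```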

Experiment 2
Experiment 1 demonstrated a statistically significant correlation between visual implicit
learning and the use of knowledge of word order predictability in auditory-only speech
perception. Experiment 2 served two purposes. First, it was designed to replicate the main
finding of Experiment 1 but with a change in both the sensory modalities of the two
experimental tasks (see Table 3) and the type of underlying structure used to generate the input
sequences in the implicit learning task. If a significant correlation is still found between the
two tasks even with these relatively dramatic changes, it would provide a convincing replication
of the results of Experiment 1. Second, and perhaps more importantly, several additional
measures were collected from participants in this study in order to determine whether there is
a third mediating variable – such as general language abilities or intelligence -- responsible for
the observed correlation. Observing a correlation between the two tasks even after partialing
out the common sources of variance associated with these other measures would provide
additional support for the conclusion that implicit learning is directly associated with
knowledge of word order predictability in language, rather than being mediated by a third
underlying factor.

Method
Participants—Twenty-two undergraduate students (age 20–25 years old) at Indiana
University received monetary compensation for their participation. All subjects were native
speakers of English and reported no history of a hearing loss, speech impairment, or other
cognitive/perceptual/motor impairment at the time of testing.

Apparatus—For the implicit learning task, Beyer Dynamic DT-100 headphones were used
to present auditory sequences and a head-mounted microphone was used to record the
participants’ spoken responses. For the audiovisual spoken sentence perception task, the video


display of the talker’s face was presented on a Sony brand computer screen and the auditory
signal was played through the headphones.

Stimulus Materials
Auditory implicit learning task: An artificial grammar (Reber, 1967) was used to generate
the stimuli (see Figure 2 and Appendix C): 18 sequences for the learning phase and 16
additional sequences for the test phase. Sixteen ungrammatical test sequences were also created
by randomizing each grammatical test sequence and making sure that none of them were
grammatical with respect to the grammar.
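A common way to implement this foil-construction step is shuffle-and-check against the grammar, sketched below; the finite-state grammar shown is a placeholder standing in for Figure 2, which is not reproduced here.

```python
import random

random.seed(0)

# Placeholder finite-state grammar standing in for Figure 2.
# transitions[state] -> list of (element, next_state); next_state None marks a legal end.
TRANSITIONS = {
    0: [(1, 1), (2, 2)],
    1: [(1, 1), (3, 3)],
    2: [(4, 2), (3, 4)],
    3: [(2, 4), (4, None)],
    4: [(1, None), (4, 3)],
}

def is_grammatical(seq):
    """Track every state the grammar could be in after each element."""
    states = {0}
    for element in seq:
        states = {nxt for s in states if s is not None
                  for el, nxt in TRANSITIONS[s] if el == element}
        if not states:
            return False
    return None in states          # must be able to stop at a legal end point

def make_ungrammatical(grammatical_seq, max_tries=1000):
    """Randomize the elements of a grammatical sequence until the result is
    no longer grammatical, mirroring the procedure described in the text."""
    foil = list(grammatical_seq)
    for _ in range(max_tries):
        random.shuffle(foil)
        if not is_grammatical(foil):
            return foil
    raise ValueError("could not derive an ungrammatical foil")
```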

Four spoken nonwords ("tiz", "neb", "dup", and "lok") were recorded from a 22-year-old female speaker and used to create four sound files, roughly 700 msec in duration. These
nonwords were mapped onto each of the four elements (1–4) of the grammar, randomly
determined for each participant. For example, if the mapping for a particular participant was
1 = “lok”; 2 = “neb”; 3 = “dup”; 4 = “tiz”, then the sequence 1-4-1-3 would be translated into
the nonword sequence, “LOK-TIZ-LOK-DUP”.

Audiovisual sentence perception task: For the audiovisual sentence perception task, we used
25 high-predictability and 25 zero-predictability sentences, listed in Appendix D. The same
female speaker used for the implicit learning task was video recorded speaking all 50 sentences.
The recordings were then converted into video clips using Final Cut Pro HD for the Macintosh.

The audio portion of the clips was then processed digitally and degraded to 2 spectral channels.2

Procedure—The experimental tasks for this experiment took place in a sound-attenuated


booth (Industrial Acoustics Company). All participants did the auditory implicit learning task
first, the audiovisual sentence perception task second, and the language and intelligence
assessments last.

Auditory implicit learning task: The procedure was identical to the one used in Experiment
1 except that auditory sequences were presented instead of visual color patterns. Each nonword
sequence was presented in the clear through the headphones at a level of 66–67 dB (SPL). The
task was to verbally repeat each sequence immediately following presentation (see Figure 3).
The timing parameters were identical to those used in Experiment 1. Sequence presentation
was self-paced. Unbeknownst to participants, the task consisted of two parts, a learning phase
and a test phase, which differed only in terms of the sequences used. In the learning phase, the
18 learning sequences were presented twice each, in random order. After completing the
learning phase, the experiment seamlessly transitioned into the test phase, which used the 16
grammatical and 16 ungrammatical test sequences, presented in random order, once each.

Audiovisual sentence perception task: The procedure was identical to the auditory-only
version of the task used in Experiment 1 except that participants watched a video of a person
speaking the sentences and the auditory (but not visual) signal was distorted. Like Experiment
1, the participants’ task was to identify the last word in each sentence and write it down on
paper.

Language assessment: Following the two experimental tasks, participants were given the
Reading/Vocabulary and Reading/Grammar subtests of the Test of Adolescent and Adult
Language (TOAL-3; Hammill, Brown, Larsen, & Wiederholt, 1994). These subtests were used

2Pilot studies revealed that in the audiovisual speech perception task, only 2 spectral channels were needed to achieve performance
comparable to the 6-channel audio-only speech perception task.


to assess receptive language abilities, specifically vocabulary and knowledge of grammar. In


the Reading/Vocabulary subtest, participants read three stimulus words which all relate to a
common concept. From four possible responses, the participant chooses the two words that are
associated more closely with the three stimulus words. In the Reading/Grammar subtest, the
participant reads five sentences that are meaningfully similar but syntactically different and
then selects the two that most nearly have the same meaning. Participants completed a total of
30 Reading/Vocabulary items and 25 Reading/Grammar items and received a standardized,
age-normed score for each subtest.

Intelligence test: Participants completed a brief (15-minute) online intelligence test


([Link]) consisting of 30 questions. Upon completion, the test provides a
standardized, age-normed score. The test has been found to be moderately correlated with other standardized tests of intelligence, including the Raven Progressive Matrices (r = .42) and the Wechsler Scales (r = .32).

Results
As in Experiment 1, outliers on the implicit learning or sentence perception tasks (2 participants
with scores > +/− 2 S.D.) were excluded from the final analyses, leaving a total of twenty
participants included.

A summary of the descriptive statistics for the measures used in Experiment 2 is presented
in Table 4. The implicit learning score was significantly greater than 0 (t(19) = 3.9, p < .01),
demonstrating that participants had better memory for test sequences generated from the
grammar compared to random sequences. For the sentence perception task, participants’
correct identification of high-predictability and zero-predictability target words was 68.0% and
37.0%, respectively; the word predictability difference score was significantly greater than 0
(t(19)=9.2; p<.001).

Like Experiment 1, the correlation between the implicit learning and word predictability scores
was positive, and nearly statistically significant (r=.423, p=.06, 2-tailed). Although non-
significant, the strength of this correlation is strikingly similar to Experiment 1, suggesting that
both sensory modality and the type of artificial grammar used have negligible effects on the
nature of the association between implicit learning abilities and knowledge of word
predictability.

We also computed additional correlations that partialed out the combined variance due to the TOAL-3 Reading/Vocabulary, Reading/Grammar, and intelligence scores, which actually resulted in a numerically stronger and statistically significant correlation (r = .503, p < .05, 2-tailed). Thus, it appears that the association between implicit learning and knowledge of word
predictability is mediated neither by general linguistic knowledge nor global intelligence.

Finally, the implicit learning task provides a “built-in” measure of short-term memory for
sequences in terms of each participant’s memory spans for the ungrammatical sequences in
the test phase. Because the ungrammatical sequences contain no internal structure, this score
presumably reflects short term, immediate recall. When controlling for ungrammatical
sequence memory spans, the correlation between implicit learning and the word predictability
difference score is also statistically significant, r=.511, p<.05.
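Partialing out covariates in this way is equivalent to regressing both scores on the covariates and correlating the residuals; the sketch below illustrates that computation with simulated scores rather than the actual data.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariates):
    """Partial correlation of x and y controlling for the covariates:
    regress each variable on the covariates and correlate the residuals."""
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

# Illustrative call with simulated scores (not the real data).
rng = np.random.default_rng(2)
n = 20
vocab, grammar, iq = rng.normal(size=(3, n))                # TOAL-3 and IQ covariates
lrn = rng.normal(size=n)                                    # implicit learning scores
word_pred = 0.4 * lrn + rng.normal(scale=0.9, size=n)       # word predictability scores
r, p = partial_corr(lrn, word_pred, [vocab, grammar, iq])
```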

Experiments 1 and 2 Combined


Figure 4 shows a scatterplot of the data from both Experiments 1 and 2 combined (using
standardized z-scores for the implicit learning task). As can be seen from the plot, the overall


correlation between implicit learning and the word predictability difference score is positive
and statistically significant (r = .418, p < .01, 2-tailed).

Experiment 3
Experiments 1 and 2 demonstrated that implicit learning abilities are associated with the ability
to use knowledge of word order predictability to aid speech perception. This association
remained strong even after controlling for the common variance associated with linguistic
knowledge, general intelligence, and short-term sequence memory capacity. As a final
replication and extension of these findings, we next include, in addition to a visual implicit
learning and auditory-only sentence perception task, measures of immediate verbal recall and
working memory (i.e., forward and backward digit spans), nonverbal intelligence as measured
by the Raven Standard Progressive Matrices (Raven, Raven, & Court, 2000), and attention and
inhibition as measured by the Stroop Color and Word Test (Golden & Freshwater, 2002).

Method
Participants—Sixty-four undergraduate students (age 18–32 years old) at Indiana University
received monetary compensation for their participation. All subjects were native speakers of
English and reported no history of a hearing loss, speech impairment, or other cognitive/
perceptual/motor impairment at the time of testing.

Apparatus—Experiment 3 used the same equipment as used in Experiment 1.

Stimulus Materials
Visual implicit learning task: The stimuli for the visual implicit learning task were identical
to those used in Experiment 1.

Auditory-only sentence perception task: Like Experiment 1, the sentence perception task
incorporated two types of sentences: high-predictability and zero-predictability (18 of each,
see Appendix E). Half of each sentence type were spoken by a female speaker and half by a
male speaker. All sentences were degraded by reducing them to 6 spectral channels.

Procedure—All participants engaged in the implicit learning task first, the sentence
perception task second, and the remaining assessments last.

Visual implicit learning task: The procedure for the visual learning task was identical to that
used in Experiment 1.

Auditory-only sentence perception task: The procedure for the auditory-only sentence
perception task was identical to that used in Experiment 1, with the only exception being that
there were 36 sentences total, which were presented to participants in random order.

Forward and backward digit spans: The procedure and materials followed that outlined in
Wechsler (1991). In the forward digit span task, subjects were presented with lists of pre-
recorded spoken digits with lengths (2–10) that became progressively longer. The subjects’
task was to repeat each sequence aloud. In the backwards digit span task, subjects were also
presented with lists of spoken digits with lengths that became progressively longer, but they
were asked to repeat the sequence in reverse order. Digits were played over headphones and
recorded by a desk-mounted microphone. Subjects received a weighted score based on the
number of sequences correctly recalled at each length for each digit span task. Generally, the
forward digit span task is thought to reflect the involvement of processes that maintain and
store verbal items in short-term memory for a brief period of time, whereas the backward digit


span task reflects the operation of controlled attention and higher-level executive processes
that manipulate and process the verbal items held in memory (Rosen & Engle, 1997).

Raven Standard Progressive Matrices: The Raven Standard Progressive matrices are a series
of nonverbal reasoning tasks in which participants are asked to identify which of the given
pictures will best complete the larger pattern in the matrix. The difficulty of the test item
increases as the test goes on, so that each of the 5 subsets is progressively more difficult than
the last. Subjects received either the odd half or the even half of a 60 item set taken from Raven
et al. (2000). Responses were scored by total number of test items correct (out of 30).

Stroop Color and Word Test: We used the paper test created by Golden and Freshwater (2002). In this classic task, which measures the participant's ability to inhibit the automatic tendency to read a word rather than attend to its ink color, participants are asked to go through three lists aloud. The first list contains the words 'red', 'green', and 'blue' in random order, printed in black ink. The second list consists of xxx's printed in red, blue, or green ink in random order. The last list contains the words 'red', 'green', and 'blue' printed in an ink color that is incongruent with the word. Each list is 100 items long. Responses are scored as the number of items completed aloud in 45 seconds. A standardized interference score is calculated as per Golden and Freshwater (2002), which represents how well a participant is able to selectively attend to the ink color and inhibit the automatic reading response.

Results
As in Experiment 1, outliers on the implicit learning or sentence perception tasks (1 participant
with a score > +/− 2 S.D.) were excluded from the final analyses, leaving a total of 59
participants.

A summary of the descriptive statistics for the measures used in Experiment 3 is presented
in Table 5. The implicit learning score was significantly greater than 0 (t(58) = 7.12, p < .001).
For the sentence perception task, participants’ correct identification of both high-predictability
and zero-predictability target words was 79.0%; unlike Experiments 1 and 2, performance was not better for high-predictability versus zero-predictability sentences. Although there were no
differences as a group on the two sentence types, there was still a range of individual variability,
with some participants showing improvements for the high-predictability sentences, and others
not. Therefore, the primary question remains: are participants who show the best implicit
learning abilities also the ones who demonstrate the best use of sentence context to aid speech
perception?

Figure 6 shows a scatterplot of the implicit learning and word predictability difference scores.
Like the previous experiments, the correlation between these scores was positive and
statistically significant (r=.308, p<.05, 2-tailed). The correlation remained positive and
statistically significant even when controlling for the combined common variance associated
with the forward and backward digit spans, Stroop interference score, and nonverbal
intelligence (r = .285, p<.05, 2-tailed).

We also conducted a step-wise multiple regression analysis using the word predictability
difference score as the dependent measure, and the following scores as predictors: implicit
learning, forward digit span, backward digit span, Stroop interference score, and nonverbal
intelligence. The results showed that only implicit learning (β = 0.31, p < .05) was a significant predictor. None of the other measures were significant predictors: forward digit span (β = 0.14, n.s.), backward digit span (β = −0.12, n.s.), Stroop interference score (β = 0.02, n.s.), and nonverbal intelligence (β = −0.11, n.s.). The overall model fit was R² = 0.31.
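For reference, standardized coefficients of this general kind can be obtained by z-scoring the outcome and the predictors and fitting ordinary least squares; the sketch below (with simulated data) shows that computation only, not the step-wise selection procedure reported above.

```python
import numpy as np

def standardized_betas(y, X):
    """z-score the outcome and each predictor, fit ordinary least squares,
    and return the standardized coefficients plus the model R^2."""
    z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)
    zy, zX = z(y), z(X)
    betas, *_ = np.linalg.lstsq(zX, zy, rcond=None)
    r2 = 1.0 - np.sum((zy - zX @ betas) ** 2) / np.sum(zy ** 2)
    return betas, r2

# Illustrative call with simulated data; predictor columns stand for implicit
# learning, forward span, backward span, Stroop interference, and nonverbal IQ.
rng = np.random.default_rng(3)
X = rng.normal(size=(59, 5))
y = 0.3 * X[:, 0] + rng.normal(scale=0.95, size=59)
betas, r2 = standardized_betas(y, X)
```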


In sum, these results provide converging evidence that the ability to implicitly learn visual
sequential patterns is specifically associated with the ability to use knowledge of word
predictability to guide speech perception. Importantly, the association between implicit
learning and knowledge of word predictability does not appear to be mediated by short-term
or working memory, attention and inhibition, or nonverbal intelligence.

General Discussion
The data from these three experiments show that performance on an implicit learning task is
significantly correlated with performance on a spoken language measure that assesses
sensitivity to word predictability in speech. The implicit learning tasks involved observing and
reproducing visual color or auditory nonword sequences; a learning score was calculated for
each individual by measuring the improvement to immediate serial recall for sequences with
consistent statistical structure. The spoken language tasks involved perceiving degraded
sentences that varied on the predictability of the final word; a difference score reflecting the
use of sentence context (word predictability) to better perceive the final word was calculated
by subtracting performance on zero-predictability sentences from performance on highly
predictable sentences. A significant correlation between these two scores was found in all three
experiments, even after controlling for sources of variance associated with intelligence
(Experiments 2 and 3), short-term and working memory (Experiment 3), attention and
inhibition (Experiment 3), and knowledge of vocabulary and syntax (Experiment 2). We
conclude that the common factor involved in both tasks -- and which mediated the observed
correlations -- is sensitivity to the underlying statistical structure contained in sequential
patterns, independent of general memory, intelligence, or linguistic abilities. We propose that
superior implicit learning abilities result in more detailed and robust representations of the
word order probabilities in spoken language. Having a more detailed veridical representation
of word predictability in turn can improve how well one can rely on top-down knowledge to
help implicitly predict, and therefore perceive, the next word spoken in a sentence.

The role of top-down knowledge in influencing subsequent processing of input sequences has
been mechanistically explored using a recurrent neural network model (Botvinick & Plaut,
2006). Unlike other models, Botvinick and Plaut’s (2006) model captures key findings in the
domain of immediate serial recall while also simulating the role that background knowledge
(previous learning) has on the processing of current sequences, in a manner analogous to our
implicit learning task. We imagine that this model, too, could be used to capture the data on
our speech perception task, showing that background knowledge of word predictability
improves processing3. The model of Botvinick and Plaut (2006), then, appears to offer an
explicit instantiation of the cognitive process that we have identified as being important: using
previous knowledge to implicitly predict upcoming items in a sequence. In the terms of
Botvinick and Plaut (2006), this involves the “decoding” of imperfectly specified sequence
representations through the use of long-term knowledge of sequential regularities. This model
thus provides a computational link between sequence learning and semantic context effects,
and is therefore a nice complement to the current findings with human participants.

An important finding from our experiments is that performance on the implicit learning task
did not correlate simply with any task requiring verbalizing input (i.e., digit span tasks) or
engaging linguistic knowledge (TOAL subtests). That is, implicit learning did not correlate
with conventional measures of Reading/Vocabulary, Reading/Grammar, or digit spans. This
suggests that despite the implicit learning task involving input patterns that are easy to
verbalize4, performance on the learning task is not simply due to general language processing

3Of course, the dependent variable in our task is word identification, not recall, but at an abstract level, the underlying mechanism –
relying on background knowledge to improve subsequent processing – may be fundamentally the same.


abilities. Instead, implicit learning as assessed by this task is involved in language processing
in a highly specific way: acquiring knowledge about the predictability of items in a sequence.

Our sentence perception task involved materials that varied in word predictability, or semantic
context (Kalikow, et al., 1977). There is also evidence that implicit learning is important for
other aspects of language processing too, such as syntax (Ullman, 2004) and phonotactics
(Chambers, Onishi, & Fisher, 2003). From our perspective, both syntax and phonotactics can
be described in terms of the predictability of items (words and phonemes, respectively) in a
spoken sequence, and thus, these processing domains may be two additional aspects of
language that depend upon implicit learning. However, given our present results, it may be the
case that an association between implicit learning and syntax or phonotactics will best be
revealed only when the language tasks rely on an implicit processing measure, not one that
requires an explicit judgment. Toward this purpose, it should be possible to create an analogue
of the present degraded speech perception task but to manipulate the underlying syntax or
phonotactics of the sentences, rather than semantic context, while measuring improvements to
speech perception.

In addition to exploring the role of implicit learning in other aspects of language processing,
such as syntax and phonotactics, additional future work might fruitfully explore the role of
implicit learning in language development specifically. For instance, a longitudinal design with
typically-developing children could help determine if implicit learning abilities predict
subsequent speech and language abilities assessed several years later (see Bernhardt, Kemp,
& Werker, 2007; Gathercole & Baddeley, 1989; Newman, Bernstein Ratner, Jusczyk, Jusczyk,
& Dow, 2006; Tsao, Liu, & Kuhl, 2004). Such a finding would provide support for the
hypothesis that implicit learning plays a causal role in language development. The current
approach is also promising for exploring whether break-downs in implicit learning can help
explain the underlying factors contributing to certain language and communication disorders,
such as specific language impairment, dyslexia, and the language delays associated with
profound congenital hearing loss in children (Conway et al., 2007; Conway, Pisoni, &
Kronenberger, in press).

Implicit statistical learning has often been studied under the guiding assumption that it is
important for language acquisition and processing (but see Casillas, in press). The present work
bolsters the claim that general learning mechanisms are important for language, consistent with
other recent evidence in neurophysiology (Christiansen, Conway, & Onnis, 2007; Friederici,
Steinhauer, & Pfeifer, 2002) and clinical neuropsychology (Evans et al., 2009; Howard et al.,
2006; Menghini, Hagberg, Caltagirone, Petrosini, & Vicari, 2006; Plante et al., 2002; Tomblin
et al., 2007; Ullman, 2004; Vicari, Marotta, Menghini, Molinari, & Petrosini, 2003). However,
to our knowledge, no other studies -- apart from our own preliminary findings (Conway et al.,
2007) -- have uncovered an association between individual differences in implicit learning and
spoken language comprehension. One recent exception is Misyak and Christiansen (2007),
who showed that adults’ implicit learning performance was correlated with reading
comprehension abilities. In fact, individual differences in implicit learning have been a topic

4The visual learning tasks involved sequences of colors that can be easily encoded into a verbal format. When using visual implicit learning stimuli that were not as easily verbalizable, Conway et al. (2007) did not find a significant correlation with speech perception for high-predictability sentences. This suggests that although the association between implicit learning and knowledge of word predictability does not appear to depend upon the sensory modality of the input, it may indeed depend upon whether the implicit learning task incorporates input that is easy to verbalize, i.e., encode into phonological and lexical representations in immediate memory. In turn, this suggests a possible dissociation between phonological and non-phonological forms of implicit learning. In fact, there is some reason to believe that implicit learning may be at least partly mediated by a number of separate, specialized neurocognitive mechanisms (Conway & Christiansen, 2006). Goschke, Friederici, Kotz, and van Kampen (2001) showed that Broca's aphasics perform normally on a spatiomotor implicit sequence learning task but are significantly impaired on one involving phonological sequences. We suggest that there may exist partially non-overlapping verbal and non-verbal implicit learning components, in a manner similar to Baddeley's (1986) theory of working memory.


that has not been explored in great depth in the past (though see Feldman, Kerr, & Streissguth,
1995; Karpicke & Pisoni, 2004; Reber, Walkenfeld, & Hernstadt, 1991). Thus, the present
results suggest that studying individual differences in implicit learning may in fact be a fruitful
direction for future research, in the same way that it has been in other cognitive domains.

Finally, although not our primary aim, our data also showed that implicit learning task performance did not correlate with measures of verbal short-term or working memory as assessed by the forward and backward digit spans (Experiment 3). Indeed, that implicit learning
in this sequence reproduction task appears to be independent of verbal memory spans suggests
that although serial order memory may be necessary in order to learn sequential patterns, it
may not be sufficient. That is, the ability to encode and hold a series of items in immediate
memory surely is necessary in order to learn about sequence structure; however, something
else in addition – i.e., mechanisms involved in learning the underlying regularities – may be
needed, as well. The exact relation between immediate memory capacity and implicit learning
is an area in need of additional exploration (see Frensch & Miner, 1994; Karpicke & Pisoni,
2004). In fact, counter-intuitively, some research suggests that a smaller memory capacity may actually be beneficial for learning complex input because it acts as a filter to reduce the complexity of the problem space, making it more manageable (Elman, 1993; Kareev,
Lieberman, & Lev, 1997; Newport, 1990). Using the model of Botvinick and Plaut (2006) once
more as a mechanistic framework, one could test the effect that larger or smaller sequence
spans (see their footnote 9, p. 213) may have on the model’s ability to learn sequence structure
(i.e., implicit learning).

In sum, we have presented empirical evidence showing that variation in implicit learning
abilities in adulthood is directly related to sensitivity to word predictability in speech perception -- specifically, sentence perception under degraded listening conditions. The
correlation between the two tasks is striking given their apparent dissimilarity on the surface:
one task involves using long-term knowledge of semantics and sentence context to guide
speech perception, whereas the other task has to do with short-term learning and sensitivity to
visual sequential patterns where there is no explicit semantic system. Everyday speech
communication is characterized by the use of context-based redundancy to facilitate real-time
comprehension; thus, these findings may be important for elucidating the underlying
mechanisms involved in language processing and development, as well as for understanding
and treating language and communication disorders.

Acknowledgments
This project was supported by the following grants from the National Institute on Deafness and Other Communication
Disorders: R03DC009485 and T32DC00012.

We wish to thank Lauren Grove for her help in data collection and manuscript preparation, and Luis Hernandez for
his technical assistance.

References
Altmann GTM. Statistical learning in infants. Proceedings of the National Academy of Sciences
2002;99:15250–15251.
Baddeley, AD. Working memory. Oxford, UK: Oxford University Press; 1986.
Bar M. The proactive brain: Using analogies and associations to generate predictions. Trends in Cognitive
Sciences 2007;11:280–289. [PubMed: 17548232]
Bernhardt BM, Kemp N, Werker JF. Early word-object associations and later language development.
First Language 2007;27:315–328.
Bilger RC, Rabinowitz WM. Relationships between high- and low-probability SPIN scores. The Journal
of the Acoustical Society of America 1979;65(S1):S99.


Botvinick MM. Effects of domain-specific knowledge on memory for serial order. Cognition
2005;97:135–151. [PubMed: 16226560]
Botvinick MM, Plaut DC. Short-term memory for serial order: A recurrent neural network model.
Psychological Review 2006;113:201–233. [PubMed: 16637760]


Casillas G. The insufficiency of three types of learning to explain language acquisition. Lingua. in press.
Chambers KE, Onishi KH, Fisher C. Infants learn phonotactic regularities from brief auditory experience.
Cognition 2003;87:B69–B77. [PubMed: 12590043]
Christiansen MH, Chater N. Toward a connectionist model of recursion in human linguistic performance.
Cognitive Science 1999;23:157–205.
Christiansen, MH.; Conway, CM.; Onnis, L. Neural responses to structural incongruencies in language
and statistical learning point to similar underlying mechanisms. In: McNamara, DS.; Trafton, JG.,
editors. Proceedings of the 29th Annual Meeting of the Cognitive Science Society. Austin, TX:
Cognitive Science Society; 2007. p. 173-178.
Cleeremans, A. Mechanisms of implicit learning: Connectionist models of sequence learning. Cambridge,
MA: MIT Press; 1993.
Clopper CG, Pisoni DB. The Nationwide Speech Project: A new corpus of American English dialects.
Speech Communication 2006;48:633–644.
Conway CM, Christiansen MH. Modality-constrained statistical learning of tactile, visual, and auditory
sequences. Journal of Experimental Psychology 2005;31:24–39. [PubMed: 15641902]
Conway CM, Christiansen MH. Statistical learning within and between modalities: Pitting abstract
against stimulus-specific representations. Psychological Science 2006;17:905–912. [PubMed:
17100792]
Conway CM, Karpicke J, Pisoni DB. Contribution of implicit sequence learning to spoken language
processing: Some preliminary findings from normal-hearing adults. Journal of Deaf Studies and Deaf
Education 2007;12:317–334. [PubMed: 17548805]
Conway CM, Pisoni DB. Neurocognitive basis of implicit learning of sequential structure and its relation
to language processing. Annals of the New York Academy of Sciences 2008;1145:113–131.
[PubMed: 19076393]
Conway CM, Pisoni DB, Kronenberger WG. The importance of sound for cognitive sequencing abilities:
The auditory scaffolding hypothesis. Current Directions in Psychological Science. in press.
Dell GS, Reed KD, Adams DR, Meyer AS. Speech errors, phonotactic constraints, and implicit learning:
A study of the role of experience in language production. Journal of Experimental Psychology:
Learning, Memory, & Cognition 2000;26:1355–1367.
Elliott LL. Verbal auditory closure and the speech perception in noise (SPIN) test. Journal of Speech,
Language, and Hearing Research 1995;38:1363–1376.
Elman JL. Finding structure in time. Cognitive Science 1990;14:179–211.
Elman JL. Learning and development in neural networks: The importance of starting small. Cognition
1993;48:71–99. [PubMed: 8403835]
Evans JL, Saffran JR, Robe-Torres K. Statistical learning in children with specific language impairment.
Journal of Speech, Language, and Hearing Research 2009;52:321–335.


Feldman J, Kerr B, Streissguth AP. Correlational analyses of procedural and declarative learning
performance. Intelligence 1995;20:87–114.
Frensch PA, Miner CS. Effects of presentation rate and individual differences in short-term memory
capacity on an indirect measure of serial learning. Memory and Cognition 1994;22(1):95–110.
Friederici AD, Steinhauer K, Pfeifer E. Brain signatures of artificial language processing: Evidence
challenging the critical period hypothesis. Proceedings of the National Academy of Sciences
2002;99:529–534.
Gathercole SE, Baddeley AD. Evaluation of the role of phonological STM in the development of
vocabulary in children: A longitudinal study. Journal of Memory and Language 1989;28:200–213.
Golden, CJ.; Freshwater, SM. The Stroop color and word test. Wood Dale, IL: Stoelting Co; 2002.
Goschke T, Friederici AD, Kotz SA, van Kampen A. Procedural learning in Broca’s aphasia: Dissociation
between the implicit acquisition of spatio-motor and phoneme sequences. Journal of Cognitive
Neuroscience 2001;13:370–388. [PubMed: 11371314]


Graf Estes K, Evans JL, Alibali MW, Saffran JR. Can infants map meaning to newly segmented words?
Psychological Science 2007;18:254–260. [PubMed: 17444923]
Grunow H, Spaulding TJ, Gómez RL, Plante E. The effects of variation on learning word order rules by

adults with and without language-based learning disabilities. Journal of Communication Disorders
2006;39:158–170. [PubMed: 16376369]
Gupta, P.; Dell, GS. The emergence of language from serial order and procedural memory. In:
MacWhinney, B., editor. The emergence of language. Hillsdale, NJ: Lawrence Erlbaum Associates;
1999. p. 447-481.
Hammill, DD.; Brown, VL.; Larsen, SC.; Wiederholt, JL. Test of adolescent and adult language:
Assessing linguistic aspects of listening, speaking, reading, and writing. 3. Austin, TX: Pro-Ed; 1994.
Howard JH Jr, Howard DV, Japikse KC, Eden GF. Dyslexics are impaired on implicit higher-order
sequence learning, but not on implicit spatial context learning. Neuropsychologia 2006;44:1131–
1144. [PubMed: 16313930]
Jamieson RK, Mewhort DJK. The influence of grammatical, local, and organizational redundancy on
implicit learning: An analysis using information theory. Journal of Experimental Psychology:
Learning, Memory, & Cognition 2005;31:9–23.
Kalikow DN, Stevens KN, Elliott LL. Development of a test of speech intelligibility in noise using
sentence materials with controlled word predictability. Journal of the Acoustical Society of America
1977;61:1337–1351. [PubMed: 881487]
Kareev Y, Lieberman I, Lev M. Through a narrow window: Sample size and the perception of correlation.
Journal of Experimental Psychology: General 1997;126:278–287.
Karpicke JD, Pisoni DB. Using immediate memory span to measure implicit learning. Memory &
Cognition 2004;32(6):956–964.
Kirkham NZ, Slemmer JA, Richardson DC, Johnson SP. Location, location, location: Development of
spatiotemporal sequence learning in infancy. Child Development 2007;78:1559–1571. [PubMed:
17883448]
Kuhl PK. Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience
2004;5:831–843.
McClelland JL, Mirman D, Holt LL. Are there interactive processes in speech perception? Trends in
Cognitive Sciences 2006;10:363–369. [PubMed: 16843037]
Menghini D, Hagberg GE, Caltagirone C, Petrosini L, Vicari S. Implicit learning deficits in dyslexic
adults: An fMRI study. Neuroimage 2006;33:1218–1226. [PubMed: 17035046]
Miller GA, Heise GA, Lichten W. The intelligibility of speech as a function of the context of the test
materials. Journal of Experimental Psychology 1951;41:329–335. [PubMed: 14861384]
Miller GA, Selfridge JA. Verbal context and the recall of meaningful material. American Journal of
Psychology 1950;63:176–185. [PubMed: 15410879]
Mirman D, Magnuson JS, Graf Estes K, Dixon JA. The link between statistical segmentation and word
learning in adults. Cognition 2008;108:271–280. [PubMed: 18355803]
Misyak, JB.; Christiansen, MH. Extending statistical learning farther and further: Long-distance dependencies and individual differences in statistical learning and language. In: McNamara, DS.; Trafton, JG., editors. Proceedings of the 29th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2007. p. 1307-1312.
Newman R, Bernstein Ratner N, Jusczyk AM, Jusczyk PW, Dow KA. Infants’ early ability to segment
the conversational speech signal predicts later language development: A retrospective analysis.
Developmental Psychology 2006;42:643–655. [PubMed: 16802897]
Newport EL. Maturational constraints on language learning. Cognitive Science 1990;14:11–28.
Onnis L, Farmer T, Baroni M, Christiansen MH, Spivey MJ. Generalizable distributional regularities aid
fluent language processing: The case of semantic valence tendencies. Special Issue of the Italian
Journal of Linguistics. in press.
Pacton S, Perruchet P, Fayol M, Cleeremans A. Implicit learning out of the lab: The case of orthographic
regularities. Journal of Experimental Psychology: General 2001;130:401–426. [PubMed: 11561917]
Perruchet P, Pacton S. Implicit learning and statistical learning: One phenomenon, two approaches.
Trends in Cognitive Sciences 2006;10:233–238. [PubMed: 16616590]

Pisoni DB. Word identification in noise. Language and Cognitive Processes 1996;11:681–687.
Plante E, Gomez R, Gerken L. Sensitivity to word order cues by normal and language/learning disabled
adults. Journal of Communication Disorders 2002;35:453–462. [PubMed: 12194564]
Pothos EM. Theories of artificial grammar learning. Psychological Bulletin 2007;133:227–244. [PubMed: 17338598]
Raven, J.; Raven, JC.; Court, JH. Standard progressive matrices. Harcourt Assessment; San Antonio, TX: 2000.
Reber AS. Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior 1967;6:855–863.
Reber AS, Walkenfeld FF, Hernstadt R. Explicit and implicit learning: Individual differences and IQ. Journal of Experimental Psychology 1991;17:888–896. [PubMed: 1834770]
Redington M, Chater N. Probabilistic and distributional approaches to language acquisition. Trends in Cognitive Sciences 1997;1:273–281.
Redington, M.; Chater, N. Knowledge representation and transfer in artificial grammar learning (AGL). In: French, RM.; Cleeremans, A., editors. Implicit learning and consciousness: An empirical, philosophical, and computational consensus in the making. Hove, East Sussex: Psychology Press; 2002. p. 121-143.
Rosen VM, Engle RW. Forward and backward serial recall. Intelligence 1997;25:37–47.
Rubenstein, H. Language and probability. In: Miller, GA., editor. Communication, language, and
meaning: Psychological perspectives. New York: Basic Books; 1973. p. 185-195.
Saffran JR. Statistical language learning: Mechanisms and constraints. Current Directions in Psychological Science 2003;12:110–114.
Saffran JR, Aslin RN, Newport EL. Statistical learning by 8-month-old infants. Science 1996;274:1926–
1928. [PubMed: 8943209]
Tomblin JB, Mainela-Arnold E, Zhang X. Procedural learning in adolescents with and without specific
language impairment. Language Learning and Development 2007;3:269–293.
Tsao FM, Liu HM, Kuhl PK. Speech perception in infancy predicts language development in the second
year of life: A longitudinal study. Child Development 2004;75:1067–1084. [PubMed: 15260865]
Turk-Browne NB, Junge JA, Scholl BJ. The automaticity of visual statistical learning. Journal of
Experimental Psychology: General 2005;134:522–564.
Ullman MT. Contributions of memory circuits to language: The declarative/procedural model. Cognition
2004;92:231–270. [PubMed: 15037131]
Vicari S, Marotta L, Menghini D, Molinari M, Petrosini L. Implicit learning deficit in children with developmental dyslexia. Neuropsychologia 2003;41:108–114. [PubMed: 12427569]
Wechsler, D. Wechsler intelligence scale for children. 3. The Psychological Corporation; San Antonio, TX: 1991.

Figure 1.
Depiction of the visual implicit learning task used in Experiments 1 and 3, similar to that used in previous work (Conway et al., 2007; Karpicke & Pisoni, 2004). Participants view a sequence of colored squares (700-msec duration, 500-msec ISI) appearing on the computer screen (top) and then, 2000 msec after sequence presentation, attempt to reproduce the sequence by pressing the touch panels in the correct order (bottom). The next sequence begins 2000 msec after their response.


Figure 2.
Artificial grammar used in Experiment 2. Each numeral was mapped onto one of four spoken
nonwords.

Figure 3.
Depiction of the auditory implicit learning task used in Experiment 2. Participants listen to nonword sequences (700-msec duration, 500-msec ISI) through headphones and then, 2000 msec after sequence presentation, attempt to reproduce the sequence verbally by speaking into a microphone. The next sequence begins 2000 msec after their response.

Figure 4.
Scatterplot of data from Experiments 1 and 2 (n=40). The x-axis displays the implicit learning
scores; the y-axis displays the word predictability difference scores for the spoken sentence
perception task. The best-fit line was drawn using SPSS 16.0.

Figure 6.
Scatterplot of data from Experiment 3 (n=59). The x-axis displays the implicit learning scores;
the y-axis displays the word predictability difference scores for the spoken sentence perception
task. The best-fit line was drawn using SPSS 16.0.

Table 1
Constrained and Control Grammars used in Experiment 1

                        Constrained grammar (n+1)      Control grammar (n+1)
Colors/locations (n)     1     2     3     4            1     2     3     4
1                        0.0   0.5   0.5   0.0          0.0   0.33  0.33  0.33
2                        0.0   0.0   1.0   0.0          0.33  0.0   0.33  0.33
3                        0.5   0.0   0.0   0.5          0.33  0.33  0.0   0.33
4                        1.0   0.0   0.0   0.0          0.33  0.33  0.33  0.0

Note. Grammars show transition probabilities from position n of a sequence to position n+1 of a sequence for four colors labeled 1–4.
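To make the transition probabilities in Table 1 concrete, the following is a minimal sketch, in Python, of how sequences could be sampled from the constrained grammar; it is illustrative only and is not the authors' stimulus-generation procedure. The probability values are taken from Table 1, whereas the function name, the uniformly random choice of starting color, and the use of the standard random module are assumptions made for the example.

import random

# Transition probabilities for the constrained grammar in Table 1:
# each key is the color at position n; the inner dictionary gives the
# probability of each color at position n+1 (zero-probability entries omitted).
CONSTRAINED = {
    1: {2: 0.5, 3: 0.5},
    2: {3: 1.0},
    3: {1: 0.5, 4: 0.5},
    4: {1: 1.0},
}

def generate_sequence(length, grammar, start=None):
    # The starting color is drawn uniformly at random (an assumption;
    # Table 1 does not specify how sequences begin).
    current = start if start is not None else random.choice(list(grammar))
    sequence = [current]
    while len(sequence) < length:
        successors = list(grammar[current].keys())
        weights = list(grammar[current].values())
        current = random.choices(successors, weights=weights, k=1)[0]
        sequence.append(current)
    return sequence

# Example usage: print three length-5 sequences sampled from the constrained grammar.
random.seed(1)
for _ in range(3):
    print("-".join(str(c) for c in generate_sequence(5, CONSTRAINED)))

Because color 2 can only be followed by 3 and color 4 only by 1 under the constrained grammar, sequences sampled in this way resemble the grammatical items listed in Appendix A (e.g., 3-4-1-2-3), whereas the control grammar permits any non-repeating transition with equal probability (0.33).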

Table 2
Summary of Descriptive Statistics for the Measures used in Experiment 1

                                    Observed score range
Measure        M       SD       Minimum    Maximum
GramSpan       90.00   11.96    67.00      109.00
UngramSpan     60.05   15.50    32.00      84.00
LRN            29.95   13.95    5.00       58.00
HighPred       0.74    0.11     0.56       0.92
ZeroPred       0.55    0.08     0.40       0.64
PredDiff       0.19    0.12     −0.08      0.40

Note. GramSpan, grammatical sequence span; UngramSpan, ungrammatical sequence span; LRN, implicit learning score; HighPred, proportion of high-predictability sentences correct; ZeroPred, proportion of zero-predictability sentences correct; PredDiff, word predictability difference score.

Table 3
Input and Output Modalities used in Experiments 1–3

                     Experiments 1 & 3                 Experiment 2
Modality/Format      IL                 SP             IL               SP
Input Modality       V (color/space)    A (words)      A (nonwords)     A/V (words)
Output Response      manual             written        spoken           written

Note. IL = implicit learning task; SP = sentence perception task; V = visual; A = auditory; A/V = audiovisual.

Table 4
Summary of Descriptive Statistics for the Measures used in Experiment 2

                                      Observed score range
Measure          M        SD       Minimum    Maximum
GramSpan         28.60    19.85    5.00       74.00
UngramSpan       22.60    19.45    0.00       75.00
LRN              6.00     6.79     −7.00      16.00
HighPred         0.64     0.22     0.21       0.96
ZeroPred         0.34     0.14     0.08       0.68
PredDiff         0.30     0.14     −0.20      0.52
TOAL-Vocab       13.15    2.03     9.00       15.00
TOAL-Grammar     12.85    1.39     10.00      135.00
IQ               117.60   13.01    93.00      135.00

Note. GramSpan, grammatical sequence span; UngramSpan, ungrammatical sequence span; LRN, implicit learning score; HighPred, proportion of high-predictability sentences correct; ZeroPred, proportion of zero-predictability sentences correct; PredDiff, word predictability difference score; TOAL-Vocab, vocabulary as assessed by the TOAL-3 Reading/Vocabulary subscale; TOAL-Grammar, knowledge of grammar as assessed by the TOAL-3 Reading/Grammar subscale; IQ, intelligence as assessed by [Link].

Table 5
Summary of Descriptive Statistics for the Measures used in Experiment 3

                                    Observed score range
Measure        M       SD       Minimum    Maximum
GramSpan       88.34   25.23    8.00       120.00
UngramSpan     69.63   27.50    4.00       120.00
LRN            17.68   19.08    −18.00     54.00
HighPred       0.79    0.07     0.61       0.94
ZeroPred       0.79    0.11     0.50       1.00
PredDiff       0.00    0.11     −0.22      0.22
FWdigit        57.29   19.81    14.00      108.00
BWdigit        32.32   19.38    4.00       74.00
Raven          27.59   2.37     17.00      30.00
Stroop         57.66   7.89     39.00      80.00

Note. GramSpan, grammatical sequence span; UngramSpan, ungrammatical sequence span; LRN, implicit learning score; HighPred, proportion of high-predictability sentences correct; ZeroPred, proportion of zero-predictability sentences correct; PredDiff, word predictability difference score; FWdigit, forward digit span; BWdigit, backward digit span; Raven, non-verbal intelligence as measured by the Raven Progressive Matrices; Stroop, Stroop interference score.

Appendix A
Learning and Test Sequences used for Experiments 1 and 3 Visual ISL Task

Sequence Length Learning Sequence Test Sequence (C) Test Sequence (UC)

3 4-1-2
1-3-4
2-3-4
3-4-1
4-1-3
1-3-1
1-2-3
2-3-1
4 4-1-2-3 1-2-3-1 1-4-3-4
3-4-1-3 1-3-4-1 2-1-2-4
3-1-3-1 4-1-3-4 4-2-4-3
2-3-1-2 4-1-3-1 1-4-1-2
1-2-3-4
2-3-1-3
3-4-1-2
2-3-4-1
5 2-3-1-3-4 4-1-3-4-1 4-1-2-3-2
1-2-3-4-1 2-3-4-1-2 1-2-1-3-1
3-4-1-3-1 3-4-1-2-3 1-4-2-3-1
4-1-2-3-1 4-1-3-1-3 3-1-4-2-4
2-3-4-1-3
3-4-1-3-4
4-1-2-3-4
2-3-1-2-3
6 2-3-4-1-3-4 2-3-4-1-2-3 2-1-4-1-3-2
3-1-3-4-1-3 3-4-1-2-3-1 4-1-3-4-3-2
1-2-3-4-1-2 2-3-1-3-1-2 3-1-2-3-4-3
2-3-1-2-3-1 3-1-3-1-3-1 2-1-2-1-3-4
1-3-4-1-2-3
4-1-2-3-4-1
1-3-1-2-3-4
2-3-4-1-3-1
7 4-1-3-4-1-3-1 3-4-1-3-4-1-3 2-3-1-2-3-1-3
3-1-3-1-3-1-3 1-2-3-4-1-2-3 2-3-2-4-3-4-2
1-2-3-4-1-3-1 3-1-3-1-2-3-4 4-3-4-2-3-2-4
4-1-2-3-4-1-2 2-3-1-2-3-1-2 4-3-1-3-2-4-3
3-4-1-3-1-2-3
2-3-1-3-1-3-4
1-3-1-3-4-1-2
3-1-3-4-1-2-3

8 2-3-4-1-3-4-1-2 3-4-1-2-3-1-2-3 1-3-1-4-3-1-2-4
1-3-1-3-4-1-2-3 2-3-1-2-3-1-3-1 4-3-4-2-3-4-2-4
3-1-3-4-1-3-4-1 1-2-3-4-1-2-3-4 2-4-2-1-2-1-2-3
3-4-1-3-4-1-3-4 3-4-1-2-3-4-1-2 2-3-2-3-1-4-2-4
4-1-2-3-1-3-4-1
2-3-4-1-2-3-1-3
1-2-3-4-1-2-3-1
3-1-3-1-3-4-1-2

Note: (C), constrained; (UC), unconstrained
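As a complement to the generation sketch following Table 1, here is a brief, again purely illustrative, Python sketch of how one could verify that a test sequence in Appendix A is consistent with the constrained grammar: a sequence counts as constrained only if every adjacent transition has nonzero probability in Table 1. The ALLOWED mapping and the function name are assumptions made for the example, not part of the original materials.

# Nonzero-probability successors under the constrained grammar of Table 1.
ALLOWED = {1: {2, 3}, 2: {3}, 3: {1, 4}, 4: {1}}

def is_constrained(sequence):
    # True only if every adjacent pair of colors is an allowed transition.
    return all(b in ALLOWED[a] for a, b in zip(sequence, sequence[1:]))

# Examples drawn from the length-4 test items in Appendix A:
print(is_constrained([1, 2, 3, 1]))   # constrained test item -> True
print(is_constrained([1, 4, 3, 4]))   # unconstrained test item -> False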



Appendix B
Sentences used for Experiment 1 Auditory-Only Spoken Sentence Perception Task

High Predictability Zero Predictability

Eve was made from Adam’s rib. The bread gave hockey loud aid.
Greet the heroes with loud cheers. The problem hoped under the bay.
He rode off in a cloud of dust. The cat is digging bread on its beak.
Her entry should win first prize. The arm is riding on the beach.
Her hair was tied with a blue bow. Miss Smith was worn by Adam’s blade.
He’s employed by a large firm. The turn twisted the cards.
Instead of a fence, plant a hedge. Jane ate in the glass for a clerk.
I’ve got a cold and a sore throat. Nancy was poured by the cops.
Keep your broken arm in a sling. Mr. White hit the debt.
Maple syrup was made from sap. The first man heard a feast.
She cooked him a hearty meal. The problems guessed their flock.
Spread some butter on your bread. The coat is talking about six frogs.
The car drove off the steep cliff. It was beaten around with glue.
They tracked the lion to his den. The stories covered the glass hen.
Throw out all this useless junk. The ship was interested in logs.
Wash the floor with a mop. Face the cop through the notch.
The lion gave an angry roar. The burglar was parked by an ox.
The super highway has six lanes. For a bloodhound he had spoiled pie.
To store his wood, he built a shed. Water the worker between the pole.
Unlock the door and turn the knob. The chimpanzee on his checkers wore a scab.
We heard the ticking of the clock. Miss Brown charged her wood of sheep.
Playing checkers can be fun. Tom took the elbow after a splash.
That job was an easy task. We rode off in our tent.
The bloodhound followed the trail. The king shipped a metal toll.
He was scared out of his wits. David knows long wheels.

Appendix C
Learning and Test Sequences used for Experiment 2 Auditory ISL Task

Sequence Length Learning Sequence Test Sequence (G) Test Sequence (UG)

4 1-4-1-3
3-4-2-4
5 1-1-4-1-3 1-1-4-3-1 3-1-4-1-3
1-4-3-1-3 1-4-1-2-4 1-1-2-4-4
3-2-4-1-3 3-3-4-1-3 4-3-1-3-3
3-3-4-3-1 3-4-1-3-1 4-1-3-3-1
6 1-1-4-2-3-1 1-4-3-1-2-4 1-2-4-3-4-1
3-2-1-4-1-3 3-2-4-1-2-4 4-2-2-4-1-3
3-3-4-1-2-4 3-3-4-3-1-3 3-3-1-4-3-3
3-4-1-2-3-1 3-4-2-4-1-3 3-4-1-4-3-2
7 1-1-4-3-3-1-3 1-1-4-2-2-3-1 1-2-4-3-1-2-1
3-2-1-4-3-2-4 3-2-1-4-1-2-4 1-1-2-3-4-3-2
3-3-4-3-3-1-3 3-3-4-2-2-3-1 1-2-3-3-2-3-4
3-4-2-4-3-2-4 3-4-2-4-1-2-4 2-4-3-4-1-4-2
8 1-1-4-3-3-1-2-4 1-4-3-1-2-4-3-1 4-2-1-1-3-1-4-3
3-2-1-4-3-3-1-3 3-3-4-2-2-3-2-4 4-4-3-3-2-2-3-2
3-3-4-3-3-1-2-4 3-4-1-3-2-4-1-3 3-1-2-4-3-4-3-1
3-4-2-4-2-3-2-4 3-4-2-4-3-3-1-3 3-1-4-2-3-3-4-3

Note: (G), grammatical; (UG), ungrammatical



Appendix D
Sentences used for Experiment 2 Audiovisual Spoken Sentence Perception Task

High Predictability Zero Predictability

A bicycle has two wheels. A duck talks like an ant.


Ann works in the bank as a clerk. All the sailors were in bloom.
Banks kept their money in a vault. For your football I prescribed a cake.
Bob was cut by the jackknife’s blade. Lubricate the kitten to chew the draft.
Break the dry bread into crumbs. Mr. White swam with their mugs.
Cut the meat into small chunks. My teacher has a glad screen.
It was stuck together with glue. Peter played and swabbed a white bruise.
Kill the bugs with this spray. Ruth robbed the hay.
Paul hit the water with a splash. She parked the camera with brief sheets.
Paul was arrested by the cops. She tracked a bill in her cap.
Raise the flag up the pole. Spread some soup from the pad.
Ruth had a necklace of glass beads. Stir the class into strips.
The bird of peace is the dove. Syrup is a fine sport.
The boat sailed across the bay. The prickly bath knew about the track.
The bride wore a white gown. The rosebush slept along the coast.
The cigarette smoke filled his lungs. The sailor was followed from old wheat.
The cow gave birth to a calf. The sand was filled with pine.
The nurse gave him first aid. The stale game was spoken by steam.
The poor man was deeply in debt. The steamship employed his crop.
The shepherds guarded their flock. The steep man sat with the wax.
The wedding banquet was a feast. The storm was trained by a dive.
The witness took a solemn oath. The turn twisted the cards.
Tree trunks are covered with bark. The useless knees escaped with the hive.
Watermelons have lots of seeds. They milked a frightened entry of gin.
We swam at the beach at high tide. This bear won’t drive in the lock.

Appendix E
Sentences used for Experiment 3 Auditory-Only Spoken Sentence Perception Task

High Predictability Zero Predictability

He was scared out of his wits. Mr. White hit the debt.
Greet the heroes with loud cheers. The problem hoped under the bay.
He rode off in a cloud of dust. Nancy was poured by the cops.
Her entry should win first prize. Jane ate in the glass for a clerk.
Her hair was tied with a blue bow. The ship was interested in logs.
He’s employed by a large firm. The turn twisted the cards.
Instead of a fence, plant a hedge. The stories covered the glass hen.
I’ve got a cold and a sore throat. It was beaten around with glue.
Keep your broken arm in a sling. The problems guessed their flock.
The baby slept in his crib. Discuss the sailboat on the bend.
The candle flame melted the wax. Face the cop through a notch.
The drowning man let out a yell. The king shipped a metal toll.
The fruit was shipped in wooden crates. The low woman was gladly in the calf.
The furniture was made of pine. The old cloud broke his lungs.
The honey bees swarmed round the hive. Water the worker between the poles.
The little girl cuddled her doll. We scared a bomb of clever geese.
The lonely bird searched for its mate. They milked a frightened entry of gin.
The railroad train ran off the track. The rosebush slept along the coast.