Papers by George Kachergis

Frontiers in Neurorobotics, 2014
Robots are increasingly capable of performing everyday human activities such as cooking, cleaning, and doing the laundry. This requires the real-time planning and execution of complex, temporally extended sequential actions under high degrees of uncertainty, which provides many challenges to traditional approaches to robot action control. We argue that important lessons in this respect can be learned from research on human action control. We provide a brief overview of available psychological insights into this issue and focus on four principles that we think could be particularly beneficial for robot control: the integration of symbolic and subsymbolic planning of action sequences, the integration of feedforward and feedback control, the clustering of complex actions into subcomponents, and the contextualization of action-control structures through goal representations.
We introduce a class of artificial stimuli that lack preexperimental associations or encoding strategies. In a set of recognition memory experiments using these stimuli, we manipulate the similarity between studied items and between targets and foils, thus investigating the effects of pure perceptual similarity. We also assign values to studied items in order to induce encoding strategies that might emphasize encoding distinctive or overlapping features. Applying a stochastic signal detection model to these data, we find that blocked presentation and increased category size lead to poorer encoding of individual items, indicating that participants fail to encode distinctive features when list homogeneity is increased. Further, items assigned a negative value are encoded more poorly, a sign that participants may attempt to find overlapping features among negative items.
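The paper's stochastic signal detection model is richer than the classical two-parameter summary, but the core sensitivity measure such models build on can be sketched with equal-variance Gaussian signal detection. The specific hit and false-alarm rates below are made up for illustration, not taken from the experiments:

```python
from statistics import NormalDist

# Equal-variance Gaussian signal detection: d' = z(hit rate) - z(FA rate).
# A minimal sketch of the sensitivity measure underlying such models;
# the rates here are illustrative, not the paper's data.

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

# More similar targets and foils -> more false alarms -> lower d':
print(round(d_prime(0.8, 0.2), 2))  # 1.68
print(round(d_prime(0.8, 0.4), 2))  # 1.09
```

In this framing, the finding that blocked presentation and larger categories hurt encoding would show up as reduced d' when foils share features with studied items.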
We describe a model designed to learn word-concept pairings using a combination of semantic space models. We compare various semantic space models to each other as well as to extant word-learning models in the literature and find that not only do semantic space models require fewer underlying assumptions, they perform at least on par with existing associative models. We also demonstrate that semantic space models correctly predict different word-concept pairings from existing models and can be combined with existing models to perform better than either model can individually.

Many people have had the experience of knowing what song will play next on an album (even one heard only a few times). Conversely, many people fail to recognize an acquaintance encountered in an unfamiliar context. Associations can likely form simply because items appear nearby in time, and not only due to semantic similarity. Using surprise recognition testing, we examine the automatic storage of associations between successively encountered words on a list of incidentally studied words. Many modern memory models assume storage of such associations, but with little evidence as yet (e.g., REM-II; Mueller & Shiffrin, 2006). We find evidence for sequential associations, which are further improved by shared semantics or study context. We also find improved accuracy and response time for old words preceded by old words, and for new words preceded by new words, regardless of the previous response.

The serial reaction time (SRT) task, which measures how participants' keypress responses speed up as a repeating stimulus sequence is learned, is popular in implicit and motor learning research, and may help us understand the basic learning mechanisms underlying the acquisition of complex skills (e.g., riding a bike). However, complex action sequences are not simple stimulus-response chains, but rather require representing sequential context in order to learn. Moreover, human actions are continuous, temporally extended movements that are not fully measured in the discrete button presses of the SRT task. Using a novel movement adaptation of the SRT task in which spatial locations are both stimuli and response options, participants were trained to move the mouse cursor to a continuous sequence of stimuli. We replicate the classic Nissen and Bullemer SRT results with the trajectory SRT paradigm and show sequential context effects (predictive bends in response trajectories) that promise to reveal cognitive processes underlying sequential action learning.
Philosophical Transactions of the Royal Society B: Biological Sciences, 2014
People can learn a large number of word-referent pairs solely from their co-occurrences across short sequences of (individually) ambiguous trials (Yu & Smith, 2007). Here we discuss and model three important factors:
- frequency: the number of times a pair appears during training
- contextual diversity (CD): the number of times a pair appears with (and could thus be confused with) other pairs
- within-trial ambiguity: the number of pairs per trial
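The three factors above are properties of the training set, and can be computed directly from a trial list. A minimal sketch, with a made-up toy design (the trial format and variable names are illustrative, not the authors' code):

```python
from collections import Counter
from itertools import combinations

# Each trial presents a set of word-referent pairs (here, pair IDs).
trials = [
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
    {"a", "b"},
]

# frequency: number of times each pair appears during training
frequency = Counter(p for trial in trials for p in trial)

# contextual diversity (CD): number of distinct other pairs
# a given pair co-occurs with (and could be confused with)
co_occurs = {p: set() for p in frequency}
for trial in trials:
    for p, q in combinations(trial, 2):
        co_occurs[p].add(q)
        co_occurs[q].add(p)
cd = {p: len(others) for p, others in co_occurs.items()}

# within-trial ambiguity: number of pairs per trial
ambiguity = [len(trial) for trial in trials]

print(frequency["a"], cd["a"], ambiguity[0])  # 3 2 2
```

Pair "a" appears on three trials (frequency 3) alongside two distinct other pairs (CD 2), and each trial here presents two pairs (within-trial ambiguity 2).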

Decision makers are sometimes faced with aggregating advice from multiple advisers without knowing what information is driving each adviser's opinion. Following Yu (2006, 2007), we conducted an experiment in which participants first learned to estimate the probability of a disease based on multiple test results. Next, subjects made the same judgments solely on the basis of probabilities given by multiple advisers who may have only received partial information. Experimental results confirm previous findings that decision makers give extreme estimates when advisers are in agreement and compromise estimates when advisers are in disagreement. Unlike previously proposed models that can only account for extreme or compromise estimates but not both, we develop a new Bayesian model that explains both types of judgments. This model provides a rational explanation of information aggregation by assuming that decision makers use the probability estimates of advisers to infer underlying data before making probability judgments.
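One standard way to make this kind of rational aggregation concrete is log-odds pooling: treat each adviser's stated probability as reflecting evidence in that adviser's partial data, and sum the evidence. This is a simplified sketch of the general idea, not the paper's exact model; the prior and adviser probabilities are illustrative:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def aggregate(adviser_probs, prior=0.5):
    # Each adviser's log-odds minus the prior log-odds approximates the
    # evidence contributed by that adviser's (partial) data; summing
    # assumes advisers saw conditionally independent test results.
    evidence = sum(logit(p) - logit(prior) for p in adviser_probs)
    return inv_logit(logit(prior) + evidence)

# Agreement -> an estimate more extreme than any single adviser:
print(round(aggregate([0.7, 0.7]), 3))  # 0.845
# Disagreement -> a compromise between the advisers:
print(aggregate([0.7, 0.3]))  # 0.5
```

This reproduces the qualitative pattern in the abstract: agreeing advisers yield an estimate more extreme than either individual report, while disagreeing advisers yield a compromise.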

Cross-situational learning, the ability to learn word meanings across multiple scenes consisting of multiple words and referents, is thought to be an important tool for language acquisition. The ability has been studied in infants, children, and adults, and yet there is much debate about the basic storage and retrieval mechanisms that operate during cross-situational word learning. It has been difficult to uncover the learning mechanics in part because the standard experimental paradigm, which presents a few words and objects on each of a series of training trials, measures learning only at the end of training after several occurrences of each word-object pair. Thus, the exact learning moment, and its current and historical context, cannot be investigated directly. This paper offers a version of the cross-situational learning task in which a response is made each time a word is heard, as well as in a final test. We compare this to the typical cross-situational learning task, and examine how well the response distributions match two recent computational models of word learning.

Proceedings of the 36th Annual Conference of the Cognitive Science Society, 2014
The process of learning a language requires that long-term memory stores the meanings of thousands of words encountered across a variety of situations. These word meanings form a network of associations that, influenced by environmental factors such as word frequency and contextual diversity, cause behavioral effects on measures such as lexical decision and naming times. We investigate the development of recognition priming as a function of explicit knowledge after repeated training and testing on a novel vocabulary. By varying the word frequency and contextual diversity of the training input, and examining learning trajectories as well as semantic knowledge effects, we shed light on which environmental factors most influence performance in language acquisition. Contextual diversity and entropy (the uncertainty about a word's referents) are the two strongest factors predicting primed recognition times, and play a role along with frequency and context familiarity in predicting explicit learning.
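The entropy measure used here, uncertainty about a word's referents, is the Shannon entropy of the distribution of referents the word has co-occurred with. A small sketch with made-up co-occurrence counts:

```python
import math
from collections import Counter

# Shannon entropy (in bits) over a word's referent distribution.
# The co-occurrence counts below are invented for illustration.

def referent_entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A word seen equally often with 4 referents is maximally uncertain:
print(referent_entropy(Counter(a=5, b=5, c=5, d=5)))  # 2.0
# A word almost always seen with one referent has low entropy:
print(round(referent_entropy(Counter(a=9, b=1)), 2))  # 0.47
```

On this measure, higher within-trial ambiguity and greater contextual diversity both tend to raise a word's referent entropy until the learner's input disambiguates it.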

Frontiers in Cognitive Science, 2014
For decades, implicit learning researchers have examined a variety of cognitive tasks in which people seem to automatically extract structure from the environment. Similarly, recent statistical learning studies have shown that people can learn word-object mappings from the repeated co-occurrence of words and objects in individually ambiguous situations. In light of this, the goal of the present paper is to investigate whether adult cross-situational learners require an explicit effort to learn word-object mappings, or if it may take place incidentally, only requiring attention to the stimuli. In two implicit learning experiments with incidental tasks directing participants' attention to different aspects of the stimuli, we found evidence of learning, suggesting that cross-situational learning mechanisms can operate incidentally, without explicit effort. However, performance was superior under explicit study instructions, indicating that strategic processes also play a role. Moreover, performance under instruction to learn word meanings did not differ from performance at counting co-occurrences, which may indicate these tasks engage similar strategies.

IEEE Conference on Development and Learning / EpiRob 2014, 2014
The serial reaction time (SRT) task measures learning of a repeating stimulus sequence as a speed-up in keypresses, and is used in implicit and motor learning research that aims to explain complex skill acquisition (e.g., learning to type). However, complex skills involve continuous, temporally extended movements that are not fully measured in the discrete button presses of the SRT task. Using a movement adaptation of the SRT task in which spatial locations are both stimuli and response options, participants were trained to move the cursor to a continuous sequence of stimuli. Elsewhere we replicated [1] with the trajectory SRT paradigm. The current study extends it to the problem of learning complex actions, composed of recurring short sequences of movements that may be rearranged like words. Reaction time and trajectory deflection analyses show that subjects exhibit within-word improvements relative to unpredictable between-word transitions, suggesting that participants learn to segment the sequence according to the statistics of the input.
Previous research shows that people can use the co-occurrence of words and objects in ambiguous situations (i.e., containing multiple words and objects) to learn word meanings during a brief passive training period (Yu & Smith, 2007). However, learners in the world are not completely passive but can affect how their environment is structured by moving their heads, eyes, and even objects. These actions can indicate attention to a language teacher, who may then be more likely to name the attended objects.

Proceedings of the 34th Annual Conference of the Cognitive Science Society, 2012
Cognitive scientists have begun collecting the trajectories of hand movements as participants make decisions in experiments. These response trajectories offer a fine-grained glimpse into ongoing cognitive processes. For example, difficult decisions show more hesitation and deflection from the optimal path than easy decisions. However, many summary statistics used for trajectories throw away much information, or are correlated and thus partially redundant. To alleviate these issues, we introduce Gaussian process regression for the purpose of modeling trajectory data collected in psychology experiments. Gaussian processes are a well-developed statistical model that can find parametric differences in trajectories and their derivatives (e.g., velocity and acceleration) rather than a summary statistic. We show how Gaussian process regression can be implemented hierarchically across conditions and subjects, and used to model the actual shape and covariance of the trajectories. Finally, we demonstrate how to construct a generative hierarchical Bayesian model of trajectories using Gaussian processes.
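The core machinery here, the Gaussian process posterior mean under an RBF kernel, can be sketched compactly. This toy version handles exactly two training points so the kernel matrix can be inverted in closed form; the data, noise level, and length scale are illustrative, and the paper's hierarchical model is far richer:

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential (RBF) covariance between two input points.
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_star, noise=1e-6):
    # Kernel matrix K + noise*I for two training points,
    # inverted in closed form (2x2).
    a = rbf(x_train[0], x_train[0]) + noise
    b = rbf(x_train[0], x_train[1])
    c = rbf(x_train[1], x_train[1]) + noise
    det = a * c - b * b
    # alpha = (K + noise*I)^{-1} y
    alpha = [(c * y_train[0] - b * y_train[1]) / det,
             (-b * y_train[0] + a * y_train[1]) / det]
    # Posterior mean at x_star: k(x_star, X) . alpha
    k_star = [rbf(x_star, x_train[0]), rbf(x_star, x_train[1])]
    return k_star[0] * alpha[0] + k_star[1] * alpha[1]

# With near-zero noise the posterior mean interpolates the observations:
print(round(gp_posterior_mean([0.0, 2.0], [0.0, 1.0], 0.0), 3))  # 0.0
print(round(gp_posterior_mean([0.0, 2.0], [0.0, 1.0], 2.0), 3))  # 1.0
```

Fitting a GP per trajectory (and sharing kernel hyperparameters across subjects or conditions) is what lets the approach compare whole trajectory shapes and their derivatives instead of scalar summaries.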

ICDL/EpiRob, 2012
Research has shown that people can learn many nouns (i.e., word-referent mappings) from a short series of ambiguous situations containing multiple word-referent pairs. Associative models assume that people accomplish such cross-situational learning by approximately tracking which words and referents co-occur. However, some researchers posit that learners hypothesize only a single referent for each word, and retain and test this hypothesis unless it is disconfirmed. To compare these two views, we fit two models to individual learning trajectories in a cross-situational word-learning task, in which each trial presents four objects and four spoken words, yielding 16 possible word-object pairings per trial. The model that maintains a single hypothesis for each word does not fit as well as the associative model that roughly learns the co-occurrence structure of the data using competing attentional biases for familiar pairings and uncertain stimuli. We conclude that language acquisition is likely supported by memory, not sparse hypotheses.
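A stripped-down associative baseline makes the co-occurrence-tracking idea concrete: accumulate word-object counts across trials and answer with the most-associated object at test. The full model adds competing attentional biases for familiar pairings and uncertain stimuli; this sketch (with invented word/object labels and 2x2 trials rather than the task's 4x4) only illustrates the memory component:

```python
from collections import defaultdict

# Association strengths between words and objects, built up across trials.
assoc = defaultdict(float)

def study(words, objects):
    # Strengthen every word-object pairing present on the trial;
    # individual trials are ambiguous, but structure emerges over trials.
    for w in words:
        for o in objects:
            assoc[(w, o)] += 1.0

def choose(word, choices):
    # At test, pick the object most strongly associated with the word.
    return max(choices, key=lambda o: assoc[(word, o)])

study(["w1", "w2"], ["o1", "o2"])
study(["w1", "w3"], ["o1", "o3"])
print(choose("w1", ["o1", "o2", "o3"]))  # o1 (co-occurred twice)
```

A single-hypothesis learner, by contrast, would store only one candidate referent for "w1" and discard the competing counts that let this learner disambiguate across trials.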