Cognitive Science
The contents and structure of semantic memory have been the focus of much recent research, with major advances in the development of distributional models, which use word co-occurrence information as a window into the semantics of language. In parallel, connectionist modeling has extended our knowledge of the processes engaged in semantic activation. However, these two lines of investigation have rarely been brought together. Here, we describe a processing model based on distributional semantics in which activation spreads throughout a semantic network, as dictated by the patterns of semantic similarity between words. We show that the activation profile of the network, measured at various time points, can successfully account for response times in lexical and semantic decision tasks, as well as for subjective concreteness and imageability ratings. We also show that the dynamics of the network are predictive of performance in relational semantic tasks, such as similarity/relatedness rating. Our results indicate that bringing together distributional semantic networks and spreading activation provides a good fit to both automatic lexical processing (as indexed by lexical and semantic decisions) and more deliberate processing (as indexed by ratings), above and beyond what has been reported for previous models that take into account only the similarity resulting from network structure.
The contents and structure of semantic networks have been the focus of much recent research, with major advances in the development of distributional models. In parallel, connectionist modeling has extended our knowledge of the processes engaged in semantic activation. However, these two lines of investigation have rarely been brought together. Here, starting from a standard textual model of semantics, we allow activation to spread throughout its associated semantic network, as dictated by the patterns of semantic similarity between words. We find that the activation profile of the network, measured at various time points, can successfully account for response times in the lexical decision task, as well as for subjective concreteness and imageability ratings.
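A minimal sketch of the spreading-activation dynamics described above. The words, similarity values, and retention parameter are invented for illustration and are not taken from the paper's actual model: activation starts at a cue word and diffuses along similarity-weighted links, with the network's activation profile read off after a chosen number of steps.

```python
import numpy as np

# Toy symmetric similarity matrix over four hypothetical words;
# all values are illustrative, not from a real distributional model.
words = ["dog", "cat", "leash", "piano"]
S = np.array([
    [1.0, 0.8, 0.6, 0.1],
    [0.8, 1.0, 0.3, 0.1],
    [0.6, 0.3, 1.0, 0.1],
    [0.1, 0.1, 0.1, 1.0],
])

# Row-normalize similarities into spreading weights.
W = S / S.sum(axis=1, keepdims=True)

def spread(cue_idx, steps, retention=0.5):
    """Iteratively spread activation from a cue word: at each step a
    node keeps `retention` of its activation and passes the rest to
    its neighbors in proportion to similarity."""
    a = np.zeros(len(words))
    a[cue_idx] = 1.0
    for _ in range(steps):
        a = retention * a + (1 - retention) * (W.T @ a)
    return a

act = spread(words.index("dog"), steps=3)
```

Reading out `act` at different step counts corresponds to measuring the activation profile at different time points; semantically related words ("cat") accumulate more activation than unrelated ones ("piano").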
Behavior Research Methods, 2012
In this article, we describe the most extensive set of word associations collected to date. The database contains over 12,000 cue words for which more than 70,000 participants generated three responses in a multiple-response free association task. The goal of this study was (1) to create a semantic network that covers a large part of the human lexicon, (2) to investigate the implications of a multiple-response procedure by deriving a weighted directed network, and (3) to show how measures of centrality and relatedness derived from this network predict both lexical access in a lexical decision task and semantic relatedness in similarity judgment tasks. First, our results show that the multiple-response procedure yields a more heterogeneous set of responses, which leads to better predictions of lexical access and semantic relatedness than do single-response procedures. Second, the directed nature of the network leads to a decomposition of centrality that primarily depends on the number of incoming links, or in-degree, of each node, rather than on its set size or number of outgoing links. Both findings indicate that adequate representation formats and sufficiently rich data derived from word associations represent a valuable source of information for both lexical and semantic processing.
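The weighted directed network described here can be sketched as follows. The cue-response pairs are invented stand-ins for free-association data; the point is the distinction the abstract draws between in-degree (distinct incoming links) and set size (distinct outgoing links):

```python
from collections import defaultdict

# Hypothetical cue -> response pairs from a multiple-response
# free-association task (invented for illustration).
responses = [
    ("bread", "butter"), ("bread", "butter"), ("bread", "toast"),
    ("butter", "bread"), ("toast", "bread"), ("jam", "bread"),
]

# Weighted directed edges: weight = response frequency for that cue.
edges = defaultdict(int)
for cue, resp in responses:
    edges[(cue, resp)] += 1

in_degree = defaultdict(int)   # distinct incoming links per node
out_degree = defaultdict(int)  # "set size": distinct outgoing links
for (cue, resp) in edges:
    out_degree[cue] += 1
    in_degree[resp] += 1
```

In this toy network "bread" has in-degree 3 (received from three different cues) but set size 2, illustrating how the two centrality components can dissociate in a directed network.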
Journal of experimental psychology. General, 2016
Similarity plays an important role in organizing the semantic system. However, given that similarity cannot be defined on purely logical grounds, it is important to understand how people perceive similarities between different entities. Despite this, the vast majority of studies focus on measuring similarity between closely related items; little is known about how people evaluate concepts that are only weakly related. In this article, we present 4 experiments showing that there are reliable and systematic patterns in how people evaluate the similarities between very dissimilar entities. We present a semantic network account of these similarities, showing that a spreading-activation mechanism defined over a word association network naturally makes correct predictions about weak similarities, whereas simpler models based on the direct neighbors of word pairs, derived from the same network, cannot.
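The contrast drawn here between direct-neighbor models and mechanisms that exploit indirect paths can be shown on a toy association network (the words and links below are invented): two words with no direct link still receive a nonzero similarity once paths through shared associates are counted.

```python
import numpy as np

# Toy word-association adjacency matrix (invented): 1 means one word
# was given as an associate of the other. "lemon" and "canary" share
# no direct link, but both connect to "yellow".
words = ["lemon", "yellow", "canary", "sun"]
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

i, j = words.index("lemon"), words.index("canary")
direct = A[i, j]            # direct-neighbor model: no link at all
two_step = (A @ A)[i, j]    # number of length-2 paths via shared associates
```

A direct-neighbor model predicts zero similarity for "lemon"/"canary", while a path-based (spreading) account picks up the weak relation through "yellow".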
Big data in cognitive science: From methods to insight, 2016
The mental lexicon contains the knowledge about words acquired over a lifetime. A central question is how this knowledge is structured and changes over time. Here we propose to represent this lexicon as a network consisting of nodes that correspond to words and links reflecting associative relations between two nodes, based on free association data. A network view of the mental lexicon is inherent to many cognitive theories, but the predictions of a working model strongly depend on a realistic scale, covering most words used in daily communication. Combining a large network with recent methods from network science allows us to answer questions about its organization at different scales simultaneously, such as: How efficiently and robustly is lexical knowledge represented, given the global network architecture? What are the organizing principles of words in the mental lexicon (i.e., thematic versus taxonomic)? How does the local connectivity with neighboring words explain why certain words are processed more efficiently than others? Networks built from word associations are specifically suited to address prominent psychological phenomena such as developmental shifts, individual differences in creativity, or clinical states like schizophrenia. We also discuss future challenges and ways in which this proposal complements other perspectives.
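One standard network-science measure of the local connectivity mentioned above is the local clustering coefficient: the fraction of a word's neighbors that are themselves connected. A minimal sketch on an invented undirected network (words and edges are illustrative only):

```python
# Toy undirected association network; adjacency stored as sets.
graph = {
    "dog": {"cat", "bone", "leash"},
    "cat": {"dog", "bone"},
    "bone": {"dog", "cat"},
    "leash": {"dog"},
}

def clustering(node):
    """Local clustering coefficient: realized links among a node's
    neighbors divided by the number of possible links among them."""
    nbrs = graph[node]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs
                if a < b and b in graph[a])
    possible = len(nbrs) * (len(nbrs) - 1) / 2
    return links / possible
```

Words embedded in densely interconnected neighborhoods (high clustering) are one candidate explanation for why some words are processed more efficiently than others.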
Distributional models of semantics are a popular way of capturing the similarity between words or concepts. More recently, such models have also been used to generate properties associated with a concept; model-generated properties are typically compared against collections of semantic feature norms. In the present paper, we propose a novel way of testing the plausibility of the properties generated by a distributional model using data from a visual world experiment. We show that model-generated properties, when embedded in a sentential context, bias participants' expectations towards a semantically associated target word in real time. This effect is absent in a neutral context that contains no relevant properties.
Psychological Review, 1975
This paper presents a spreading-activation theory of human semantic processing, which can be applied to a wide range of recent experimental results. The theory is based on Quillian's theory of semantic memory search and semantic preparation, or priming. In conjunction with this, several of the misconceptions concerning Quillian's theory are discussed. A number of additional assumptions are proposed for his theory in order to apply it to recent experiments. The present paper shows how the extended theory can account for results of several production experiments by Loftus, Juola and Atkinson's multiple-category experiment, Conrad's sentence-verification experiments, and several categorization experiments on the effect of semantic relatedness and typicality by Holyoak and Glass, Rips, Shoben, and Smith, and Rosch. The paper also provides a critique of the Smith, Shoben, and Rips model for categorization judgments.
The choice to represent co-occurrence statistics directly as matrices produces prima facie incompatible semantic spaces. We lose sight of the fact that different semantic spaces actually rely on the same kind of underlying distributional information. This results in the development of ad hoc models geared towards specific aspects of meaning: taxonomic similarity, relation identification, selectional preferences, etc.
Computational Linguistics, 2010
Research into corpus-based semantics has focused on the development of ad hoc models that treat single tasks, or sets of closely related tasks, as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus. As an alternative to this "one task, one model" approach, the Distributional Memory framework extracts distributional information once and for all from the corpus, in the form of a set of weighted word-link-word tuples arranged into a third-order tensor. Different matrices are then generated from the tensor, and their rows and columns constitute natural spaces to deal with different semantic problems. In this way, the same distributional information can be shared across tasks such as modeling word similarity judgments, discovering synonyms, concept categorization, predicting selectional preferences of verbs, solving analogy problems, classifying relations between word pairs, harvesting qualia structures with patterns or example pairs, predicting the typical properties of concepts, and classifying verbs into alternation classes. Extensive empirical testing in all these domains shows that a Distributional Memory implementation performs competitively against task-specific algorithms recently reported in the literature for the same tasks, and against our implementations of several state-of-the-art methods. The Distributional Memory approach is thus shown to be tenable despite the constraints imposed by its multi-purpose nature.
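The core move of the Distributional Memory framework, extracting word-link-word tuples once and matricizing the resulting third-order tensor into task-specific spaces, can be sketched as follows. The tuples and weights below are invented toy data, not from the actual corpus extraction:

```python
import numpy as np

# Toy weighted word-link-word tuples (weights invented), in the
# spirit of the Distributional Memory third-order tensor.
tuples = {
    ("dog", "subj_of", "bark"): 5.0,
    ("dog", "obj_of",  "walk"): 3.0,
    ("cat", "subj_of", "meow"): 4.0,
    ("cat", "obj_of",  "walk"): 2.0,
}

w1 = sorted({t[0] for t in tuples})          # first-word index
lw = sorted({(t[1], t[2]) for t in tuples})  # (link, word) column index

# Matricize the tensor into a word x (link, word) matrix, whose rows
# form one natural space for word-similarity tasks.
M = np.zeros((len(w1), len(lw)))
for (a, link, b), wgt in tuples.items():
    M[w1.index(a), lw.index((link, b))] = wgt
```

Other matricizations of the same tensor (e.g. word-pair by link) yield spaces suited to relation classification and analogy, which is how one extraction serves many tasks.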
2010
Semantic memory is the cognitive system devoted to the storage and retrieval of conceptual knowledge. Empirical data indicate that semantic memory is organized in a network structure. Everyday experience shows that word search and retrieval processes provide fluent and coherent speech, i.e., are efficient. This implies either that semantic memory encodes, besides thousands of words, different kinds of links for different relationships (introducing greater complexity and storage costs), or that the structure evolves so as to facilitate the differentiation of long-lasting semantic relations from incidental, phenomenological ones. Assuming the latter possibility, we explore a mechanism to disentangle the underlying semantic backbone comprising conceptual structure (the extraction of categorical relations between pairs of words) from the rest of the information present in the structure. To this end, we first present and characterize an empirical data set modeled as a network; then we simulate stochastic cognitive navigation on this topology. We schematize this latter process as uncorrelated random walks from node to node, which converge to a feature-vector network. In doing so, we both introduce a novel mechanism for information retrieval and point to the problem of category formation in close connection with linguistic and non-linguistic experience.
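An uncorrelated random walk of the kind described above can be sketched on a toy network (the words and edges are invented for illustration): at each step the walker jumps to a uniformly chosen neighbor, and visit counts over a long walk reflect the network's topology.

```python
import random
random.seed(0)  # fixed seed so the sketch is reproducible

# Toy undirected association network (edges invented).
graph = {
    "fruit":  ["apple", "banana"],
    "apple":  ["fruit", "red"],
    "banana": ["fruit", "yellow"],
    "red":    ["apple"],
    "yellow": ["banana"],
}

def random_walk(start, length):
    """Uncorrelated random walk: at each step jump to a uniformly
    chosen neighbor of the current node; return visit counts."""
    node = start
    visits = {w: 0 for w in graph}
    for _ in range(length):
        node = random.choice(graph[node])
        visits[node] += 1
    return visits

visits = random_walk("fruit", 10_000)
```

Over many steps the visit frequencies approach the walk's stationary distribution (proportional to node degree), so well-connected hub words are visited far more often than peripheral ones.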
Proceedings of the twenty …, 2005
In recent studies of semantic representation, two distinct sources of information from which we can learn word meanings have been described. We refer to these as attributional and distributional information sources. Attributional information describes the attributes or features associated with referents of words, and is acquired from our interactions with the world. Distributional information describes the distribution of words across different linguistic contexts, and is acquired through our use of language. While previous work has concentrated on the role of one source, to the exclusion of the other, in this paper we study the role of both sources in combination. We describe a general framework based on probabilistic generative models for modelling both sources of information, and how they can be integrated to learn semantic representation. We provide examples comparing the structures learned by three models: one using attributional information alone, one using distributional information alone, and one combining both sources.