1992
1. Summary. This project comprised two related research efforts: (A) high-level connectionist cognitive modeling, and (B) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically realistic models of local neural circuits, and to understand the computational behavior of such models.
1988
Abstract: The difficulties that continually show up in connectionist modelling attempts directed towards high-level cognitive processing and knowledge representation include the inter-related problems of generative capacity, representational adequacy, variable binding, multiple instantiation of schemata (concepts, frames, etc.), rapid construction and modification of information structures, task control, and recursive processing.
Science, 2006
Biologically Based Computational Models of High-Level Cognition
Proceedings of the International Joint Conference on Neural Networks, 2003.
We present a model cortical column consisting of recurrently connected, continuous-time sigmoid activation units that provides a building block for neural models of complex cognition. Recent progress with a hybrid neural/symbolic cognitive model of problem-solving [9] prompted us to investigate the adequacy of these columns for the construction of purely neural cognitive models. Here we examine the computational power of networks of columns and show that every Turing machine maps in a straightforward fashion onto such a network. Furthermore, several hierarchical structures composed of columns that are critical in this mapping promise to provide biologically plausible models of timing circuits, gating mechanisms, activation-based short-term memory, and simple if-then rules that will likely be necessary in neural models of higher cognition.
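As an illustration of the kind of building block described above, here is a minimal Python sketch (not the authors' implementation) of a recurrently connected column of continuous-time sigmoid activation units, integrated with the Euler method; the weight matrix, biases, time constant and column size are arbitrary assumptions chosen for demonstration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_column(W, b, x0, tau=10.0, dt=1.0, steps=500):
    # Euler integration of tau * dx/dt = -x + sigmoid(W @ x + b)
    # for a recurrently connected column of continuous-time sigmoid units.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + (dt / tau) * (-x + sigmoid(W @ x + b))
    return x

# Hypothetical 3-unit column with mutual excitation (weights are illustrative).
W = np.array([[0.0, 1.5, 0.0],
              [1.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
b = np.array([-0.5, -0.5, -0.5])
print(simulate_column(W, b, x0=[0.9, 0.1, 0.1]))  # settled activation pattern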
2001
This article describes computer models that simulate the neural networks of the brain, with the goal of understanding how cognitive functions (perception, memory, thinking, language, etc.) arise from their neural basis. Many neural network models have been developed over the years, focusing on many different levels of analysis, from engineering to relatively low-level biology to cognition.
Current Opinion in Neurobiology, 2014
Computational neuroscience has focused largely on the dynamics and function of local circuits of neuronal populations dedicated to a common task, such as processing a common sensory input, storing its features in working memory, choosing between a set of options dictated by controlled experimental settings, or generating the appropriate actions. Most current circuit models suggest mechanisms for computations that can be captured by networks of simplified neurons connected via simple synaptic weights. In this article I review the progress of this approach and its limitations. It is argued that new experimental techniques will yield data that might challenge the present paradigms in that they will (1) demonstrate the computational importance of microscopic structural and physiological complexity and specificity; (2) highlight the importance of models of large brain structures engaged in a variety of tasks; and (3) reveal the necessity of coupling the neuronal networks to chemical and environmental variables.
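One concrete instance of the simplified-neuron circuits referred to above is a two-population rate model that chooses between two options through mutual inhibition. The sketch below is a generic winner-take-all illustration, not a model taken from the article; all weights and parameters are assumed for demonstration.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def decide(I1, I2, w_self=0.5, w_inh=1.5, tau=20.0, dt=1.0, steps=2000):
    # Two mutually inhibiting rate units: the unit receiving the stronger
    # input wins the competition and suppresses the other.
    r = np.zeros(2)
    I = np.array([I1, I2], dtype=float)
    for _ in range(steps):
        drive = w_self * r - w_inh * r[::-1] + I
        r = r + (dt / tau) * (-r + relu(drive))
    return r

print(decide(0.55, 0.45))  # first unit stays active, second is suppressed
print(decide(0.40, 0.60))  # second unit wins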
Nature Reviews Neuroscience, 2021
Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative and hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning, to the implementation of inhibition and control, along with neuroanatomical properties including area structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, based on these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.

An important step towards addressing the neural substrate was taken by so-called localist models of cognition and language 8-12, which filled the boxes of modular models with single artificial 'neurons' thought to locally represent cognitive elements 13 such as perceptual features and percepts, phonemes, word forms, meaning features, concepts and so on (Fig. 1a). The 1:1 relationship between the artificial neuron-like computational-algorithmic implementations and the entities postulated by cognitive theories made it easy to connect the two types of models. However, the notion that individual neurons each carry major cognitive functions is controversial today and difficult to reconcile with evidence from neuroscience research 14,15. This is not to dispute the great specificity of some neurons' responses 16, but rather to highlight the now dominant view that even these very specific cells "do not act in isolation but are part of cell assemblies representing familiar concepts", objects or other entities 17,18. A further limitation of the localist models was that they did not systematically address the mechanisms underlying the formation of new representations and their connections.

Auto-associative networks. Neuroanatomical observations suggest that the cortex is characterized by ample intrinsic and recurrent connectivity between its neurons and can therefore be seen as an associative memory 19,20. This position inspired a family of artificial neural networks called 'auto-associative networks' or 'attractor networks' 21-32. Auto-associative network models implement sets of neurons with connections between their members, so that each neuron interlinks with several or even all of the other neurons included in the set. This contrasts with the hetero-associative networks discussed below, where connections run between sub-populations of network neurons without any connections within each neuron pool. To simulate the effect of learning in auto-associative networks, so-called learning rules are included that change the connection weights between neurons as a consequence of their prior activity. For example, biologically founded unsupervised Hebbian learning, which strengthens connections between co-activated neurons 5, is frequently applied and leads to the formation of strongly connected cell assemblies within a weakly connected auto-associative neuron pool (Fig. 2b). These cell assemblies can function as distributed network correlates or representations of perceptual, cognitive or 'mixed' context-dependent perceptual-cognitive states 6,30,32-34. Therefore, the observations that cortical neurons work together in groups and that representations are distributed across such groups 14,18 can both be accommodated by this artificial network type, along with learning mechanisms, thus overcoming major shortcomings of localist networks. Additional cognitively relevant features of auto-associative networks include the ability of a cell assembly to fully activate after only partial stimulation, a possible mechanism for Gestalt completion; that is, the recognition of an object (such as a cat) given only partial input (tail and paws). The mechanism is illustrated in Fig. 2b, where stimulation of neurons α and β is sufficient for activating the cell assembly formed by neurons α-to-γ. Furthermore, auto-associative networks integrate the established observations that cortical neural codes can be sparse (that is, only a small fraction of available neurons respond to a given (complex) stimulus) 15,18,22,35,36, and that some (other) neurons respond to elementary and frequently occurring features of several stimuli (thus behaving in a less-sparse manner) 37. The reason for this lies in cell assembly overlap; that is, the possibility that two or more such circuits can share neurons while remaining functionally separate. This is illustrated in Fig. 2b by the 'overlap neuron' of cell assemblies α-to-γ and γ-to-ε. Auto-associative networks can model a wide spectrum of cognitive processes, ranging from object, word and concept recognition to navigation, syntax processing, memory, planning and
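To make the cell-assembly and pattern-completion ideas above concrete, here is a minimal Hopfield-style auto-associative sketch with Hebbian learning; it is a generic textbook construction rather than code from the cited work, and the pattern count, network size and cue size are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 100, 3

# Hypothetical binary (+1/-1) activity patterns standing in for cell assemblies.
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian learning: strengthen connections between co-activated neurons.
W = np.zeros((n_units, n_units))
for p in patterns:
    W += np.outer(p, p) / n_units
np.fill_diagonal(W, 0.0)  # no self-connections

def complete(cue, sweeps=5):
    # Asynchronous updates drive a partial cue toward the nearest stored
    # pattern: the network analogue of Gestalt completion from partial input.
    s = cue.astype(float).copy()
    for _ in range(sweeps):
        for i in rng.permutation(n_units):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

cue = patterns[0].astype(float).copy()
cue[60:] = 0.0                             # stimulate only part of one assembly
recovered = complete(cue)
print(np.mean(recovered == patterns[0]))   # ~1.0: full assembly reactivates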
Encyclopedia of Artificial Intelligence
1992
Abstract: We developed several novel representational and processing techniques for use in connectionist systems designed for high-level AI-like applications such as common-sense reasoning and natural language understanding. The techniques were used, for instance, in a connectionist system (Composit/SYLL) that implements Johnson-Laird's mental-model theory of human syllogistic reasoning.
Frontiers in Bioscience-Landmark
Sigact News, 1991
Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of its impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, one that analyzes the strengths and weaknesses of connectionist approaches and establishes links to other disciplines, such as statistics or control theory.
2003
9. Conclusion. This article describes computer models that simulate the neural networks of the brain, with the goal of understanding how cognitive functions (perception, memory, thinking, language, etc.) arise from their neural basis. Many neural network models have been developed over the years, focusing on many different levels of analysis, from engineering to low-level biology to cognition. Here, we consider models that try to bridge the gap between biology and cognition.
Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its computational credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation (no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition). We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the "conventional" account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks are not genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
Artificial Neural Networks -- Comparison of 3 Connectionist Models
2009
If we want to explain cognitive processes by means of connectionist networks, these networks have to correspond with cognitive systems and their underlying biological mechanisms in several respects. The question of the biological and cognitive plausibility of connectionist models arises from two different aspects. First, there is the aspect of biology: one has to have a fair understanding of biological and cognitive mechanisms in order to represent them in a model. Second, there is the aspect of modeling: one has to know how to construct a model that represents precisely what we are aiming at. Computer power and modeling techniques have improved dramatically over the past 20 years, so the plausibility problem is being addressed in more adequate ways as well. Connectionist models are often used for representing different aspects of natural language, and their biological plausibility has sometimes been questioned in the past. Today, the field of computational neuroscience offers several acceptable possibilities for modeling higher cognitive functions, and language is among them. This paper presents some existing connectionist networks that model natural language and discusses their explanatory power and plausibility with respect to the biological and cognitive systems they represent.
Metaphilosophy, 1997
In our book “Neural Engineering: Representation, Transformations and Dynamics”, MIT Press 2003, Chris Eliasmith and I present a unified framework that describes the function of neurobiological systems through the application of the quantitative tools of systems engineering. Our approach is not revolutionary, but more evolutionary in nature, building on many current and generally disparate approaches to neuronal modeling. The basic premise is that the principles of information processing apply to neurobiological systems.
Mathematical Perspectives on Neural Networks, 1996
1989
Many researchers interested in connectionist models accept that such models are "neurally inspired" but do not worry too much about whether their models are biologically realistic. While such a position may be perfectly justifiable, the present paper attempts to illustrate how biological information can be used to constrain connectionist models. Two particular areas are discussed. The first section deals with visual information processing in the primate and human visual system. It is argued that the speed with which visual information is processed imposes major constraints on the architecture and operation of the visual system. In particular, it seems that a great deal of processing must depend on a single bottom-up pass. The second section deals with biological aspects of learning algorithms. It is argued that although there is good evidence for certain coactivation-related synaptic modification schemes, other learning mechanisms, including back-propagation, are not currently supported by experimental data.
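To make the contrast in the second section concrete, the sketch below compares a coactivation-based (Hebbian-style) weight change, which uses only locally available pre- and postsynaptic activity, with a backpropagation-style change for the same weights, which requires an error signal computed elsewhere in the network. This is a generic illustration of the two rule types discussed above, not code from the paper; all variables are hypothetical.

import numpy as np

def hebbian_update(W, pre, post, lr=0.01):
    # Coactivation rule: each weight changes according to the activity of
    # its own pre- and postsynaptic units only (locally available signals).
    return W + lr * np.outer(post, pre)

def backprop_update(W, hidden, output_error, lr=0.01):
    # Backprop-style rule for the same weights: the change depends on an
    # error signal propagated back from elsewhere in the network, which is
    # the step the paper notes lacks direct experimental support.
    return W - lr * np.outer(output_error, hidden)

pre = np.array([1.0, 0.0, 1.0])    # presynaptic activity (hypothetical)
post = np.array([1.0, 1.0])        # postsynaptic activity (hypothetical)
W = np.zeros((2, 3))
print(hebbian_update(W, pre, post))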