1982, Cognitive Science
Much of the progress in the fields constituting cognitive science has been based upon the use of explicit information processing models, almost exclusively patterned after conventional serial computers. An extension of these ideas to massively parallel, connectionist models appears to offer a number of advantages. After a preliminary discussion, this paper introduces a general connectionist model and considers how it might be used in cognitive science. Among the issues addressed are: stability and noise-sensitivity, distributed decision-making, time and sequence problems, and the representation of complex concepts.
Behavioral and Brain Sciences, 1988
A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models. Higher-level analyses of these connectionist models reveal subtle relations to symbolic models. Parallel connectionist memory and linguistic processes are hypothesized to give rise to processes that are describable at a higher level as sequential rule appli...
2018
Computers are used in every sphere of life, and their role grows day by day as newer technologies are developed. Artificial intelligence is at the heart of many exciting innovations, and representation forms a vital part of any AI application: if the representation is correct, half of the work is done. The connectionist approach is one way to represent and identify objects in the AI field, and it has been successfully implemented in many real-life areas. The approach is based on the links and state of an object at any time: an object has meaning with respect to its state and its links at a particular instant. This gives it many advantages for representation in the AI field. Keywords: artificial intelligence, connectionist approach, symbolic learning, neural network.
Artificial Neural Networks -- Comparison of 3 Connectionist Models
Metaphilosophy, 1997
1988
Abstract: The difficulties that continually show up in connectionist modelling attempts directed towards high-level cognitive processing and knowledge representation include the inter-related problems of generative capacity, representational adequacy, variable binding, multiple instantiation of schemata (concepts, frames, etc.), rapid construction and modification of information structures, task control, and recursive processing.
2001
Connectionist approaches to cognitive modeling make use of large networks of simple computational units, which communicate by means of simple quantitative signals. Higher-level information processing emerges from the massivelyparallel interaction of these units by means of their connections, and a network may adapt its behavior by means of local changes in the strength of the connections.
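The ingredients named in this abstract can be illustrated with a minimal sketch (all weights, inputs, and the learning rate are made-up values, not taken from the paper): each simple unit communicates only a quantitative signal, higher-level behavior comes from the weighted interactions, and adaptation is a purely local change in connection strength.

```python
import numpy as np

# Squashing function turning a weighted sum into a bounded activity level.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

inputs = np.array([1.0, 0.0, 1.0])          # activity of three sending units
weights = np.array([[0.2, -0.1, 0.4],       # connection strengths into
                    [0.0,  0.3, -0.2]])     # two receiving units

# Each receiving unit sums its weighted inputs and squashes the result.
activation = sigmoid(weights @ inputs)

# Local, Hebbian-style adaptation: strengthen connections between
# co-active units; no global supervisor is involved.
learning_rate = 0.1
weights += learning_rate * np.outer(activation, inputs)
```

The point of the sketch is that both processing (the matrix–vector product) and learning (the outer-product update) are local and numerical, which is what lets higher-level behavior emerge from massively parallel interaction.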
Nips, 1987
A general method, the tensor product representation, is described for the distributed representation of value/variable bindings. The method allows the fully distributed representation of symbolic structures: the roles in the structures, as well as the fillers for those roles, can be arbitrarily non-local. Fully and partially localized special cases reduce to existing cases of connectionist representations of structured data; the tensor product representation generalizes these and the few existing examples of fully distributed representations of structures. The representation saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; it enables analysis of the interference of symbolic structures stored in associative memories; and it leads to characterization of optimal distributed representations of roles and a recirculation algorithm for learning them.
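The core of the tensor product idea can be shown numerically in a few lines (the role and filler vectors below are arbitrary illustrative choices; only the linear independence of the roles matters): a binding is the outer product of a filler and a role, a structure is the superposition of its bindings, and fillers are recovered exactly via dual (unbinding) vectors.

```python
import numpy as np

# Illustrative distributed role and filler vectors.
role_subject = np.array([1.0, 0.0, 1.0])
role_object = np.array([0.0, 1.0, -1.0])
filler_john = np.array([0.5, 1.0, 0.0, 0.5])
filler_mary = np.array([1.0, 0.0, 1.0, 0.0])

# A value/variable binding is the outer (tensor) product of filler and role;
# a structure is the sum (superposition) of its bindings.
structure = (np.outer(filler_john, role_subject)
             + np.outer(filler_mary, role_object))

# Unbinding: the pseudoinverse of the role matrix gives dual vectors u_i
# with r_j . u_i = delta_ij, so each filler comes back cleanly from the
# superposed structure as long as the roles are linearly independent.
roles = np.stack([role_subject, role_object])   # shape (2, 3)
duals = np.linalg.pinv(roles)                   # shape (3, 2)
recovered_subject = structure @ duals[:, 0]
recovered_object = structure @ duals[:, 1]
```

With non-independent roles the recovery degrades rather than failing outright, which is the "graceful saturation" the abstract refers to.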
Proceedings of the 24th annual meeting on Association for Computational Linguistics -, 1986
Biological Cybernetics, 1988
Pattern classification using connectionist (i.e., neural network) models is viewed within a statistical framework. A connectionist network's subjective beliefs about its statistical environment are derived. This belief structure is the network's "subjective" probability distribution. Stimulus classification is interpreted as computing the "most probable" response for a given stimulus with respect to the subjective probability distribution. Given the subjective probability distribution, learning algorithms can be analyzed and designed using maximum likelihood estimation techniques, and statistical tests can be developed to evaluate and compare network architectures. The framework is applicable to many connectionist networks including those of
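The statistical reading above can be sketched with a single logistic unit (the data, learning rate, and iteration count are made up for illustration): the unit's output is taken as a "subjective" probability P(class = 1 | stimulus), learning is gradient ascent on the log-likelihood (maximum likelihood estimation), and classification returns the most probable response.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # stimuli
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # true class labels

w = np.zeros(2)                               # connection weights
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # subjective probabilities
    w += 0.1 * X.T @ (y - p) / len(y)         # gradient of the log-likelihood

# Classification = choosing the most probable response under the
# network's subjective probability distribution.
predictions = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = (predictions == y.astype(bool)).mean()
```

The same derivation is what lets learning rules be analyzed as maximum likelihood estimators and architectures be compared with standard statistical tests.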
1992
Abstract: We developed several novel representational and processing techniques for use in connectionist systems designed for high-level AI-like applications such as common-sense reasoning and natural language understanding. The techniques were used, for instance, in a connectionist system (Composit/SYLL) that implements Johnson-Laird's mental-model theory of human syllogistic reasoning.
Behavioral and Brain Sciences, 1990
Connectionist models provide a promising alternative to the traditional computational approach that has for several decades dominated cognitive science and artificial intelligence, although the nature of connectionist models and their relation to symbol processing remains controversial. Connectionist models can be characterized by three general computational features: distinct layers of interconnected units, recursive rules for updating the strengths of the connections during learning, and “simple” homogeneous computing elements. Using just these three features one can construct surprisingly elegant and powerful models of memory, perception, motor control, categorization, and reasoning. What makes the connectionist approach unique is not its variety of representational possibilities (including “distributed representations”) or its departure from explicit rule-based models, or even its preoccupation with the brain metaphor. Rather, it is that connectionist models can be used to explo...
Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its computational credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the ‘‘conventional’’ account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks are not genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
Computer Standards & Interfaces, 1994
Connectionist networks are not simply opaque black boxes with useful and efficient computational properties. Rather, they have important internal structure that must be harnessed if their full potential as a novel computing technology is to be realised. First, the importance and usefulness of opening the black box is discussed and research is reviewed on how internal representations have been studied and used in the Cognitive Science literature. In the second section, a simple method for the geometrical analysis of decision space is presented. This shows connectionist networks as transparent boxes in which their computational properties are clear. The paper finishes with an example of how decision space diagrams can be useful for investigating under what circumstances it is best to adapt old weights for use in novel tasks.
Human and Machine Vision II, 1986
Students of human and machine vision share the belief that massively parallel processing characterizes early vision. For higher levels of visual organization, considerably less is known and there is much less agreement about the best computational view of the processing. This paper lays out a computational framework in which all levels of vision can be naturally carried out in highly parallel fashion. One key is the representation of all visual information needed for high level processing as discrete parameter values which can be represented by units. Two problems that appear to require sequential attention are described and their solutions within the basically parallel structure are presented. Some simple program results are included.
2002
One of the most pervading concepts underlying computational models of information processing in the brain is linear input integration of rate-coded uni-variate information by neurons. After a suitable learning process this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic of cognitive processes, a more complex, multi-variate dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking neuron connectionist system.
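A leaky integrate-and-fire neuron is one minimal illustration of the dynamic, time-based coding contrasted here with static rate coding (all parameters below are arbitrary): the neuron's state evolves continuously, and information is carried by when it spikes rather than by a fixed real-valued weight alone.

```python
# Leaky integrate-and-fire sketch with made-up parameters.
dt, tau = 1.0, 10.0           # time step and membrane time constant
v_thresh, v_reset = 1.0, 0.0  # spike threshold and reset potential
v, spikes = 0.0, []

for t in range(50):
    i_in = 0.15                     # constant input current
    v += dt * (-v / tau + i_in)     # leaky integration of the input
    if v >= v_thresh:               # threshold crossing emits a spike
        spikes.append(t)
        v = v_reset                 # membrane potential resets
```

With this constant drive the neuron fires at regular intervals; varying the input would shift the spike times, which is exactly the kind of temporal structure a static weight vector cannot express.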
connectionism (n.) An application in linguistics of a computational framework for modelling cognitive functions, based on numerical computation rather than symbol manipulation. A connectionist network (or neural network) is devised which models the kinds of structures and processes thought to operate in the brain: the processing units in the network are called 'neurons' (in an abstract sense) or 'nodes', each being excited or inhibited (according to certain numerical formulae) by information obtained from the other units to which it is connected. The pattern of neuronal activity represents the data being processed by the network. A particular interpretation (e.g. of speech input data) is likely to depend on the activity pattern of a large number of related units ('distributed representation'), the properties of which can be demonstrated only through statistical analysis. Because all the processing units compute at the same time, the approach is also known as parallel distributed processing. This approach contrasts with the view that people process sentences by transforming representations according to a set of rules, and rejects the notion that speakers internalize grammars, in the generative sense. Areas of application include the modelling of the non-discrete and statistical properties of language use, and the study of language processing within psycholinguistics, neurolinguistics, and computational linguistics (e.g. automatic speech recognition).
BRAIN. Broad Research in Artificial Intelligence …, 2011
The symbolic information-processing paradigm in cognitive psychology has met a growing challenge from neural network models over the past two decades. While neuropsychological evidence has been of great utility to theories concerned with information processing, the real question is whether the less rigid connectionist models provide valid, or sufficient, information concerning complex cognitive structures. In this work, we discuss the theoretical implications that neuropsychological data poses for modelling cognitive systems.
Knowledge-Based Systems, 1995
The relationship between symbolism and connectionism has been one of the major issues in recent artificial intelligence research. An increasing number of researchers from each side have tried to adopt the desirable characteristics of the other approach. A major open question in this field is the extent to which a connectionist architecture can accommodate basic concepts of symbolic inference, such as a dynamic variable binding mechanism and a rule and fact encoding mechanism involving n-ary predicates. One of the current leaders in this area is the connectionist rule-based system proposed by Shastri and Ajjanagadde. The paper demonstrates that the mechanism for variable binding which they advocate is fundamentally limited, and it shows how a reinterpretation of the primitive components and corresponding modifications of their system can extend the range of inference which can be supported. Our extension hinges on the basic structural modification of the network components and further modifications of the rule and fact encoding mechanism. These modifications allow the extended model to have more expressive power in dealing with symbolic knowledge, as in the unification of terms across many groups of unifying arguments.
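The dynamic variable binding mechanism at issue here, temporal synchrony in the spirit of Shastri and Ajjanagadde, can be caricatured in a few lines (the entities, argument names, and phase slots are purely illustrative, not the actual system): an argument node and an entity node firing in the same phase of an oscillatory cycle count as bound.

```python
# Caricature of temporal-synchrony binding: each active entity owns a
# phase slot in the cycle, and an argument node fires in phase with the
# entity currently bound to it. Names are hypothetical examples.
entity_phase = {"John": 0, "Mary": 1}       # entity nodes and their phases
arg_phase = {"giver": 0, "recipient": 1}    # argument nodes and their phases

# Binding = phase coincidence between argument and entity.
bindings = {arg: next(e for e, ph in entity_phase.items() if ph == p)
            for arg, p in arg_phase.items()}
```

Because each entity must monopolize a phase slot, the number of simultaneously bound entities is limited by the number of distinguishable phases, which is one source of the limitation the paper analyzes.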
1991
Two general information-encoding techniques called 'relative-position encoding' and 'pattern-similarity association' are presented. They are claimed to be a convenient basis for the connectionist implementation of complex, short-term information processing of the sort needed in commonsense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high-level cognitive processing.
Green offers us two options: either connectionist models are literal models of brain activity; or they are mere instruments, with little or no ontological significance. According to Green, only the first option renders connectionist models genuinely explanatory. I think there is a third possibility. Connectionist models are not literal models of brain activity, but neither are they mere instruments. They are abstract, idealised models of the brain, that are capable of providing genuine explanations of cognitive phenomena.