1998, Minds and Machines
Explicitness has usually been approached from two points of view, labelled by Kirsh the structural and the process views, which make opposing assumptions about when information counts as explicit. In this paper, we offer an intermediate view that retains intuitions from both. We establish three conditions for explicit information that preserve a structural requirement together with a notion of explicitness as a continuous dimension. A problem with earlier accounts was their disconnection from psychological work on the issue. We review studies by Karmiloff-Smith and by Shanks and St. John to show that the proposed conditions have psychological grounds. Finally, we examine the problem of explicit rules in connectionist systems in the light of our framework.
Behavioral and Brain Sciences, 1999
Stability of activation, while it may be necessary for information to become available to consciousness, is not sufficient to produce phenomenal experience. We suggest that consciousness involves access to information and that access makes information symbolic. From this perspective, implicit representations exist, and are best thought of as sub-symbolic. Crucially, such representations can be causally efficacious in the absence of consciousness.
Issues in Applied Linguistics, 2010
In previous issues of IAL (cf. Fantuzzi, 1992, 1993), it was argued that connectionist explanations are too vague to qualify as theories of cognitive functions. Much of the argument hinges on the claim that hidden unit activation patterns of connectionist networks are currently too difficult to analyze, and that such opacity renders connectionist accounts virtually ineffective.
Procedural Meaning: Problems and Perspectives, 2011
Ai Magazine, 1988
Behavioral and Brain Sciences, 1994
Behavioral and Brain Sciences, 1993
Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency, as though these inferences were a reflexive response of their cognitive apparatus. Furthermore, these inferences are drawn with reference to a large body of background knowledge. This remarkable human ability seems paradoxical given the complexity of reasoning reported by researchers in artificial intelligence. It also poses a challenge for cognitive science and computational neuroscience: How can a system of simple and slow neuronlike elements represent a large body of systematic knowledge and perform a range of inferences with such speed? We describe a computational model that takes a step toward addressing the cognitive science challenge and resolving the artificial intelligence paradox. We show how a connectionist network can encode millions of facts and rules involving n-ary predicates and variables and perform a class of inferences in a few hundred milliseconds. Efficient reasoning requires the rapid representation and propagation of dynamic bindings. Our model (which we refer to as SHRUTI) achieves this by representing (1) dynamic bindings as the synchronous firing of appropriate nodes, (2) rules as interconnection patterns that direct the propagation of rhythmic activity, and (3) long-term facts as temporal pattern-matching subnetworks. The model is consistent with recent neurophysiological evidence that synchronous activity occurs in the brain and may play a representational role in neural information processing. The model also makes specific psychologically significant predictions about the nature of reflexive reasoning. It identifies constraints on the form of rules that may participate in such reasoning and relates the capacity of the working memory underlying reflexive reasoning to biological parameters such as the lowest frequency at which nodes can sustain synchronous oscillations and the coarseness of synchronization.
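To make the binding mechanism concrete, here is a minimal sketch, not the authors' SHRUTI implementation, of binding by temporal synchrony: a role node and an entity node count as bound when they fire in the same phase of a rhythmic cycle. All node names and the phase bookkeeping below are illustrative assumptions.

```python
# Minimal illustration of variable binding via temporal synchrony.
import itertools

class Node:
    def __init__(self, name):
        self.name = name
        self.phase = None  # phase slot in which this node fires (None = silent)

def bind(role, entity, phase):
    """Bind a role to an entity by assigning both nodes the same firing phase."""
    role.phase, entity.phase = phase, phase

def bound_pairs(roles, entities):
    """Read out bindings: a role/entity pair is bound iff the nodes fire in synchrony."""
    return [(r.name, e.name)
            for r, e in itertools.product(roles, entities)
            if r.phase is not None and r.phase == e.phase]

# Example: represent "John gives Mary a book" with three role-entity bindings.
giver, recipient, given = Node("giver"), Node("recipient"), Node("given-object")
john, mary, book = Node("John"), Node("Mary"), Node("book")

bind(giver, john, phase=0)
bind(recipient, mary, phase=1)
bind(given, book, phase=2)

print(bound_pairs([giver, recipient, given], [john, mary, book]))
# [('giver', 'John'), ('recipient', 'Mary'), ('given-object', 'book')]
```

In the full model, the number of distinct phases per oscillation cycle limits how many bindings can be held at once, which is what ties working-memory capacity to the biological parameters mentioned above.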
The most fundamental question in the philosophy of information, "What is information?", has not yet received a definite answer free from commonly recognized deficiencies. In my earlier work I proposed a definition of information as an identification of the variety. The definition is based on the concept of the one-many relation, a philosophical theme as old as philosophy itself. The rich tradition of the theme, established through centuries of philosophical discourse, stands in clear contrast to the common-sense concepts, such as "uncertainty", usually invoked in attempts to set foundations for the concept of information. An identification of the variety can take two basic forms: a selection of one out of many in the variety, or the structure uniting the variety (many) into one. The distinction between these forms of identification leads to the distinction between selective and structural information. However, since every occurrence of one type of information is always accompanied by the other, selective and structural information can be considered simply different manifestations of a uniform concept of information. Selective information can easily be identified with the concept of information in its usual understanding. The structural manifestation of information has usually been considered in the context of the integration of information. In the present paper the analysis of the concept of information based on the one-many relation is carried out from three perspectives. First, the philosophical aspects of information are considered. Then, the concept of information is identified in a selection of very different domains. For instance, Hutcheson's concept of beauty, dominant in classical aesthetics since the 18th century and understood as "unity in variety," provides an example of an idea very close to structural information. The integration of neuronal activity in the brain, considered a basis for consciousness by Edelman and his collaborators, can also be viewed as an example of structural information in a different domain. Finally, an attempt is made to identify a mathematical formalism that reflects the distinction between selective and structural information.
Rethinking Fodor and Pylyshyn's Systematicity Challenge, 2014
The systematicity debate initially turned on the issue of the best explanation for the systematicity of cognition, then a property taken so much for granted that it barely required anything more than cursory exemplification. Connectionists challenged the idea that a "language of thought" of atomic constituents, plus formal combinatorial rules, was the only (or best) way to account for that claimed property of cognition. In these post-cognitivist times, we think that the proper reaction to Fodor and Pylyshyn's (1988) challenge is to deny that cognition is systematic in general. Systematicity seems a property intrinsically dependent upon language rather than cognition in general, since the typical examples of systematicity are in fact syntax-bound; in addition, when we examine nonverbal cognition, we do not find the kind of systematicity required by the argument. Current post-cognitivist approaches to cognition, which emphasize embodiment and dynamic interaction, in their turn also challenge the cognitivist assumption that the explanandum a theory of cognition has to account for includes systematicity as a basic property of cognitive processes.
Behavioral and Brain Sciences, 1988
A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models. Higher-level analyses of these connectionist models reveal subtle relations to symbolic models. Parallel connectionist memory and linguistic processes are hypothesized to give rise to processes that are describable at a higher level as sequential rule application.
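As a minimal sketch, under my own assumptions rather than the paper's, of the kind of system described here: a set of numerical units whose sub-conceptual activations evolve by parallel continuous dynamics rather than by symbol manipulation. The weight values and update rule below are illustrative only.

```python
# Toy continuous dynamical system of interacting numerical units.
import numpy as np

rng = np.random.default_rng(0)
n_units = 8
W = rng.normal(scale=0.5, size=(n_units, n_units))  # interconnection weights

def step(u, dt=0.05):
    """One Euler step of du/dt = -u + W @ tanh(u): leaky units with
    nonlinear, massively parallel interactions."""
    return u + dt * (-u + W @ np.tanh(u))

u = rng.normal(size=n_units)   # sub-conceptual feature activations
for _ in range(200):           # iterate; activity settles into a bounded pattern
    u = step(u)
print(np.round(u, 3))
```

Higher-level, approximately rule-like descriptions would be read off the trajectories of such a system, not built into its update rule.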
Arxiv preprint quant-ph/0404068, 2004
In this article I review the connectionist framework for modeling psychological processes, and I examine the role of connectionist models in empirical psychology. I illustrate how modeling can reveal the empirical implications of general principles, and I point out that the connectionist framework is particularly apt for formalizing certain proposed processing principles. The framework has led to the discovery of new classes of explanations for basic findings; it has led to unified accounts of disparate or contradictory phenomena; and it has shed light on the relevance of certain types of evidence for basic questions about the nature of the processing system.
1990
The character of computational modelling of cognition depends on an underlying theory of representation. Classical cognitive science has exploited the syntax/semantics theory of representation that derives from logic. But this has had the consequence that the kind of psychological explanation supported by classical cognitive science is conceptualist: psychological phenomena are modelled in terms of relations that hold between concepts, and between the sensors/effectors and concepts. This kind of explanation is inappropriate for the Proper Treatment of Connectionism. Is there an alternative theory of representation that retains the advantages of classical theory, but which does not force psychological explanation into the conceptualist mould? I outline such an alternative by introducing an experience-based notion of nonconceptual content and by showing how a complex construction out of nonconceptual content can satisfy classical constraints on cognition. The psychologically fundamental structure of cognition is not the structure that holds between concepts, but, rather, the structure within concepts. The theory of the representational structure within concepts allows psychological phenomena to be explained as the progressive emergence of objectivity. This can be modelled computationally by means of the computational processes of a perspective-dependence-reducing transformer. This device may be thought of as a generalisation of a cognitive map, which includes the processes of map-formation and map use. It forms computational structures which take nonconceptual contents as inputs and yield nonconceptual contents as outputs, but do so in a way which makes the resulting capacity of the system less and less dependent on any particular perspectives, yielding satisfactory performance from any point of view.
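A toy sketch of the perspective-reduction idea, my own construal rather than the author's device: egocentric observations of a landmark, taken from different vantage points, are transformed into a single allocentric (viewpoint-independent) location, as a cognitive map would do. The coordinate conventions and values below are illustrative assumptions.

```python
# Reducing perspective dependence: from egocentric views to one allocentric map entry.
import numpy as np

def egocentric_to_allocentric(agent_pos, agent_heading, ego_obs):
    """Rotate an egocentric offset by the agent's heading and translate by its
    position, yielding a perspective-independent map coordinate."""
    c, s = np.cos(agent_heading), np.sin(agent_heading)
    R = np.array([[c, -s], [s, c]])
    return agent_pos + R @ ego_obs

landmark = np.array([3.0, 4.0])   # ground-truth allocentric location

# Two different perspectives on the same landmark...
views = [
    (np.array([0.0, 0.0]), 0.0),
    (np.array([1.0, 1.0]), np.pi / 2),
]
for pos, heading in views:
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])
    ego = R.T @ (landmark - pos)   # what the agent observes from this viewpoint
    # ...map onto the same perspective-independent content:
    print(egocentric_to_allocentric(pos, heading, ego))  # -> [3. 4.] both times
```

The point of the illustration is only that content which starts out viewpoint-dependent can be systematically transformed so that downstream performance no longer depends on any particular perspective.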
Synthese, 2001
ABSTRACT. In Representations without Rules, Connectionism and the Syntactic Argument, Kenneth Aizawa argues against the view that connectionist nets can be un-derstood as processing representations without the use of representation-level rules, and he ...
Cognitive Science: A Multidisciplinary Journal, 2008
The remarkable successes of the physical sciences have been built on highly general quantitative laws, which serve as the basis for understanding an enormous variety of specific physical systems. How far is it possible to construct universal principles in the cognitive sciences, in terms of which specific aspects of perception, memory, or decision making might be modelled? Following Shepard (e.g., 1987), it is argued that some universal principles may be attainable in cognitive science. Here we propose two examples: the simplicity principle, which states that the cognitive system prefers patterns that provide simpler explanations of the available data, and the scale-invariance principle, which states that many cognitive phenomena are independent of the scale of relevant underlying physical variables, such as time, space, luminance, or sound pressure.
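A small illustrative check, my own example rather than the paper's, of what scale invariance amounts to formally: a power law keeps its shape when the underlying variable is rescaled, f(kx) = k^a f(x), so behaviour looks the same at every scale up to a multiplicative constant. The exponent and values below are arbitrary.

```python
# Power laws are scale-invariant: rescaling x only rescales f by a constant factor.
def power_law(x, a=-1.5):
    return x ** a

k = 10.0            # rescale the physical variable (e.g., time) by a factor k
for x in (1.0, 2.0, 5.0):
    lhs = power_law(k * x)
    rhs = (k ** -1.5) * power_law(x)
    assert abs(lhs - rhs) < 1e-12   # shape preserved up to an overall constant
print("power law is scale-invariant up to a multiplicative constant")
```

An exponential function fails this test, which is why power-law regularities (e.g., in forgetting curves) are taken as signatures of scale invariance.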
I discuss a connectionist model, based on Elman's (1990, 1991) Simple Recurrent Network, of the acquisition of complex syntactic structure. While not intended as a detailed model of the process children go through in acquiring natural languages, the model helps clarify concepts that may be useful for understanding the development of complex abilities. It provides evidence that connectionist learning can produce stage-wise development emergently. It is consistent with prior work on connectionist models emphasizing their capability of computing in ways that are not possible within the symbolic paradigm (Siegelmann, 1999). On the other hand, it suggests that one mechanism of the symbolic paradigm (a pushdown automaton) may be identified with an attractor of the learning process. Thus, the model provides a concrete example of what might be called "emergence of new conceptual structure during development" and suggests that we need to use both dynamical systems theory and symbolic computation theory to make sense of it.
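For readers unfamiliar with the architecture, here is a minimal sketch of an Elman-style Simple Recurrent Network, illustrative rather than the paper's model: the hidden state at one step feeds back as context for the next, letting the network carry information across a sequence. Sizes, weights, and the toy sequence are assumptions for the example.

```python
# Minimal Elman-style Simple Recurrent Network (forward pass only).
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid, n_out = 4, 8, 4      # e.g., one-hot symbols in, next-symbol scores out
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))   # recurrent (context) weights
W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))

def run_sequence(xs):
    """Process a sequence of input vectors, returning output activations."""
    h = np.zeros(n_hid)           # context starts empty
    outputs = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)   # new hidden state from input + context
        y = W_hy @ h                       # prediction for the next symbol
        outputs.append(y)
    return outputs

# Example: a toy sequence of three one-hot symbols.
sequence = [np.eye(n_in)[i] for i in (0, 2, 1)]
for y in run_sequence(sequence):
    print(np.round(y, 3))
```

The paper's claim about a pushdown automaton as an attractor concerns what such recurrent dynamics converge to under learning, not anything built explicitly into this forward pass.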
2019
Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the most rich and distinctly human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production. Consonant with much previous philosophical work on connectionism, I argue that a core principle, that representations which are proximal in a vector space have similar semantic values, is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena.
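A toy illustration, my own rather than the dissertation's, of the core principle just stated: semantic relatedness is read off geometrically from a learned vector space, here via cosine similarity over made-up embedding vectors.

```python
# Geometric proximity in a vector space as a proxy for semantic similarity.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical learned embeddings for three concepts (values invented for illustration).
vectors = {
    "cat":  np.array([0.9, 0.1, 0.3]),
    "dog":  np.array([0.8, 0.2, 0.35]),
    "idea": np.array([0.1, 0.9, 0.0]),
}

print(cosine(vectors["cat"], vectors["dog"]))   # high: semantically close
print(cosine(vectors["cat"], vectors["idea"]))  # low: semantically distant
```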