2004, Physica A: Statistical Mechanics and its Applications
A feed-forward neural net with adaptable synaptic weights and fixed (zero or non-zero) threshold potentials is studied, in the presence of a global feedback signal that can take only two values, depending on whether the output of the network in reaction to its input is right or wrong. It is found, on the basis of four biologically motivated assumptions, that only two forms of learning are possible, Hebbian and anti-Hebbian learning. Hebbian learning should take place when the output is right, while there should be anti-Hebbian learning when the output is wrong. For the anti-Hebbian part of the learning rule a particular choice is made, which guarantees an adequate average neuronal activity without the need to introduce, by hand, control mechanisms like extremal dynamics. A network with realistic, i.e., non-zero threshold potentials is shown to perform its task of realizing the desired input-output relations best if it is sufficiently diluted, i.e., if only a relatively low fraction of all possible synaptic connections is realized.
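To make the two-valued feedback concrete, here is a minimal sketch (our illustration, not the paper's exact rule) of a feed-forward layer that learns Hebbian-style on a "right" signal and anti-Hebbian-style on a "wrong" signal; the learning rate, network sizes, and the simple sign-flipped anti-Hebbian form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W = rng.normal(0.0, 0.1, size=(n_out, n_in))   # adaptable synaptic weights
theta = np.zeros(n_out)                        # fixed threshold potentials

def step(x):
    """Binary neuron outputs for input vector x."""
    return (W @ x - theta > 0).astype(float)

def update(x, y, feedback, eta=0.01):
    """feedback = +1 if the network's output was right, -1 if wrong."""
    global W
    if feedback > 0:
        W += eta * np.outer(y, x)   # Hebbian: strengthen co-active pairs
    else:
        W -= eta * np.outer(y, x)   # anti-Hebbian: weaken co-active pairs

x = rng.random(n_in)
update(x, step(x), feedback=+1)
```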
Neuroscience, 1997
A central problem in learning theory is how the vertebrate brain processes reinforcing stimuli in order to master complex sensorimotor tasks. This problem belongs to the domain of supervised learning, in which errors in the response of a neural network serve as the basis for modification of synaptic connectivity in the network and thereby train it on a computational task. The model presented here shows how a reinforcing feedback can modify synapses in a neuronal network according to the principles of Hebbian learning. The reinforcing feedback steers synapses towards long-term potentiation or depression by critically influencing the rise in postsynaptic calcium, in accordance with findings on synaptic plasticity in mammalian brain. An important feature of the model is the dependence of modification thresholds on the previous history of reinforcing feedback processed by the network. The learning algorithm trained networks successfully on a task in which a population vector in the motor output was required to match a sensory stimulus vector presented shortly before. In another task, networks were trained to compute coordinate transformations by combining different visual inputs. The model continued to behave well when simplified units were replaced by single-compartment neurons equipped with several conductances and operating in continuous time. This novel form of reinforcement learning incorporates essential properties of Hebbian synaptic plasticity and thereby shows that supervised learning can be accomplished by a learning rule similar to those used in physiologically plausible models of unsupervised learning. The model can be crudely correlated to the anatomy and electrophysiology of the amygdala, prefrontal and cingulate cortex and has predictive implications for further experiments on synaptic plasticity and learning processes mediated by these areas.
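A hedged sketch of the mechanism described above: a scalar "calcium" proxy driven by pre- and postsynaptic activity and boosted by reinforcement decides between potentiation and depression, while the modification threshold tracks the history of reinforcement. All constants, the threshold-update rule, and the two-zone LTP/LTD split are illustrative assumptions, not the paper's equations.

```python
def plasticity(pre, post, reward, w, state, eta=0.05, tau=0.9):
    """reward in [0, 1]; state['theta'] is the history-dependent threshold."""
    # threshold drifts with the running history of reinforcement (assumed form)
    state["theta"] = tau * state["theta"] + (1 - tau) * (1.0 - reward)
    c = pre * post * (1.0 + reward)     # reinforcement boosts the calcium proxy
    if c > state["theta"]:
        w += eta * pre * post           # high calcium: long-term potentiation
    elif c > 0.5 * state["theta"]:
        w -= eta * pre * post           # intermediate calcium: depression
    return w

state = {"theta": 0.5}
w = plasticity(pre=1.0, post=0.8, reward=1.0, w=0.1, state=state)
```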
Artificial neural networks (ANNs) are usually homogeneous with respect to the learning algorithms used. On the other hand, recent physiological observations suggest that in biological neurons synapses undergo changes according to local learning rules. In this study we present a biophysically motivated learning rule which is influenced by the shape of the correlated signals and results in a learning characteristic that depends on the dendritic site. We investigate this rule in a biophysical model as well as in the equivalent artificial neural network model. As a consequence of our local rule we observe that transitions from differential Hebbian to plain Hebbian learning can coexist at the same neuron. Thus, such a rule could be used in an ANN to create synapses with entirely different learning properties at the same network unit in a controlled way.
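A minimal sketch of such a site-dependent rule, under our own assumptions: a single parameter site in [0, 1] (standing in for dendritic position) interpolates between differential Hebbian learning, driven by the temporal derivative of the postsynaptic signal, and plain Hebbian learning.

```python
def local_update(pre, post, post_prev, site, eta=0.01, dt=1.0):
    """Weight change at one synapse; site = 0 gives purely differential
    Hebbian learning, site = 1 gives plain Hebbian learning (assumed mixing)."""
    d_post = (post - post_prev) / dt   # temporal derivative of postsynaptic signal
    return eta * pre * (site * post + (1.0 - site) * d_post)
```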
Artificial Neural Networks - Architectures and Applications, 2013
Cahiers du Centre de Recherche Viabilité, Jeux, Contrôle, 1998
Experimental results on the parieto-frontal cortical network clearly show that (1) in all parietal and frontal cortical areas involved in reaching, more than one signal influences the activity of individual neurons for learning a large set of visual-to-motor transformations, and (2) these neurons enjoy gating properties that can be simply modeled by "tensor products" of vectorial inputs, known in the language of neural networks as Σ-Π units.
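For readers unfamiliar with the term, a minimal Sigma-Pi (Σ-Π) unit of the kind referred to above computes a weighted sum of products of its vectorial inputs, which is what produces the multiplicative gating; the pairwise pairing scheme below is one illustrative choice.

```python
import numpy as np

def sigma_pi(x, y, W):
    """Output = sum_ij W[i, j] * x[i] * y[j]: each input gates the other."""
    return np.sum(W * np.outer(x, y))

rng = np.random.default_rng(6)
x, y = rng.random(4), rng.random(4)   # two vectorial inputs (e.g., visual and postural signals)
W = rng.normal(size=(4, 4))
print(sigma_pi(x, y, W))
```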
2003
We demonstrate that our recently introduced stochastic Hebb-like learning rule [7] is capable of learning the problem of timing in general network topologies generated by the algorithm of Watts and Strogatz. We compare our results with a learning rule proposed by Bak and Chialvo [2, 4] and obtain not only significantly better convergence behavior but also, through the introduction of an additional degree of freedom, a dependence on the presentation order of the patterns to be learned, which allows the neural network to select the next pattern itself, whereas the learning rule of Bak and Chialvo remains unaffected. This dependence offers bidirectional communication between the neuronal and behavioural levels and hence completes the action-perception cycle, which is a characteristic of any living being with a brain.
Sci
Artificial neural networks in their various different forms convincingly dominate machine learning of the present day. Nevertheless, the manner in which these networks are trained, in particular using end-to-end backpropagation, presents a major limitation in practice and hampers research, as well as raises questions as regards the very fundamentals of the learning algorithm design. Motivated by these challenges and the contrast between the phenomenology of biological (natural) neural networks that artificial ones are inspired by and the learning processes underlying the former, there has been an increasing amount of research on the design of biologically plausible means of training artificial neural networks. In this paper we (i) describe a biologically plausible learning method which takes advantage of various biological processes, such as Hebbian synaptic plasticity, and includes both supervised and unsupervised elements, (ii) conduct a series of experiments aimed at elucidating the advantages and disadvantages of the described biologically plausible learning as compared with end-to-end backpropagation, and (iii) discuss the findings, which should serve as a means of illuminating the algorithmic fundamentals of interest and directing future research. Amongst our findings is the greater resilience of biologically plausible learning to data scarcity, which conforms to our expectations, but also its lesser robustness to additive, zero-mean Gaussian noise.
2002
It has been demonstrated that one of the most striking features of the nervous system, so-called 'plasticity' (i.e., high adaptability at different structural levels), is primarily based on Hebbian learning, which is a collection of slightly different mechanisms that modify the synaptic connections between neurons. The changes depend on neural activity and assign a special dynamic behavior to the neural networks. From a structural point of view, it is an open question what network structures may emerge in such dynamic systems under 'sustained' conditions, when input to the system is only noise. In this paper we present and study 'HebbNets', networks with random noise input in which structural changes are exclusively governed by neurobiologically inspired Hebbian learning rules. We show that Hebbian learning is able to develop a broad range of network structures, including scale-free small-world networks.
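A toy loop in the spirit of a HebbNet (our reading, not the authors' code): under pure noise input, weights on existing edges grow with activity correlations, the weakest edge is pruned, and a random edge is added, so any emerging structure is driven by the Hebbian dynamics alone. Sizes, rates, and the prune-and-add rewiring step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
W = 0.05 * (rng.random((n, n)) < 0.1)      # sparse random initial weights
np.fill_diagonal(W, 0.0)

for t in range(1000):
    x = rng.normal(size=n)                 # pure noise input
    a = np.tanh(W @ x)                     # neural activity
    W += 0.01 * np.outer(a, a) * (W > 0)   # Hebbian growth on existing edges only
    W = np.clip(W, 0.0, None)
    existing = np.argwhere(W > 0)
    if len(existing) == 0:
        break
    i, j = existing[np.argmin(W[W > 0])]   # prune the weakest existing edge
    W[i, j] = 0.0
    k, l = rng.integers(n, size=2)         # add a random new edge (rewiring)
    if k != l and W[k, l] == 0:
        W[k, l] = 0.05
```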
Lecture Notes in Computer Science, 2009
Spiking neural P systems and artificial neural networks are computational devices which share a biological inspiration based on the flow of information among neurons. In this paper we present a first model for Hebbian learning in the framework of spiking neural P systems by using concepts borrowed from neuroscience and artificial neural network theory.
Journal of Mathematical Analysis and Applications, 2005
We study the dynamical behavior of a discrete time dynamical system which can serve as a model of a learning process. We determine fixed points of this system and basins of attraction of attracting points. This system was studied by Fernanda Botelho and James J. Jamison in [A learning rule with generalized Hebbian synapses, J. Math. Anal. Appl. 273 (2002) 529-547], but those authors used its continuous counterpart to describe basins of attraction.
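We do not reproduce the Botelho-Jamison system itself here; as a stand-in, the sketch below iterates a related discrete-time Hebbian dynamical system (Oja's rule), whose attracting fixed points are the unit principal eigenvectors of the correlation matrix, illustrating how fixed points and basins of attraction can be probed numerically.

```python
import numpy as np

C = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # correlation matrix (assumed data)
w = np.array([1.0, 0.0])     # initial weight vector

# discrete-time Oja iteration: w <- w + eta * (C w - (w^T C w) w)
for _ in range(2000):
    w = w + 0.05 * (C @ w - (w @ C @ w) * w)

eigvals, eigvecs = np.linalg.eigh(C)
print(w, eigvecs[:, -1])     # w converges to +/- the principal eigenvector
```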
Scientific Reports
Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that excitatory (E) and inhibitory (I) drives in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active and not subject to stochastic external or internal drives. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, that can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron's input-output non-linear function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has hitherto been considered as given, arises naturally in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.
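The flux rule itself is derived in the paper; as a schematic stand-in, the sketch below combines a generic self-limiting Hebbian term on centered activities with the standard homeostatic adaptation of each neuron's bias toward a target mean rate. All constants and the particular self-limiting factor (1 - w^2) are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
W = rng.normal(0, 0.1, (n, n))
np.fill_diagonal(W, 0.0)
b = np.zeros(n)                  # per-neuron bias of the transfer function
x = rng.random(n)
r_target = 0.3                   # target average firing rate

for t in range(2000):
    x = 1.0 / (1.0 + np.exp(-(W @ x + b)))        # rate-encoding neurons
    c = x - x.mean()                              # centered activities
    W += 1e-3 * np.outer(c, c) * (1.0 - W**2)     # self-limiting Hebbian growth
    np.fill_diagonal(W, 0.0)
    b += 1e-2 * (r_target - x)                    # homeostatic bias adaptation

print(x.mean())                  # mean rate settles near r_target
```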
Journal of Mathematical Analysis and Applications, 2002
We study the convergence behavior of a learning model with generalized Hebbian synapses.
Physical Review E, 1999
A correlation-based ("Hebbian") learning rule at a spike level with millisecond resolution is formulated, mathematically analyzed, and compared with learning in a firing-rate description. The relative timing of presynaptic and postsynaptic spikes influences synaptic weights via an asymmetric "learning window." A differential equation for the learning dynamics is derived under the assumption that the time scales of learning and neuronal spike dynamics can be separated. The differential equation is solved for a Poissonian neuron model with stochastic spike arrival. It is shown that correlations between input and output spikes tend to stabilize structure formation. With an appropriate choice of parameters, learning leads to an intrinsic normalization of the average weight and the output firing rate. Noise generates diffusion-like spreading of synaptic weights.
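The asymmetric learning window is commonly written in the exponential form below (the paper analyzes a general window; the amplitudes and time constants here are illustrative): a presynaptic spike shortly before a postsynaptic one potentiates the synapse, while the reverse order depresses it.

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=1.05, tau_plus=17.0, tau_minus=34.0):
    """Weight change for a pre/post spike pair with dt = t_post - t_pre (ms)."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),     # pre before post: LTP
                    -a_minus * np.exp(dt / tau_minus))   # post before pre: LTD

print(stdp_window(np.linspace(-100, 100, 9)))
```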
Bio-Inspired Systems: …, 2009
We present an evolving neural network model in which synapses appear and disappear stochastically according to bio-inspired probabilities. These are in general nonlinear functions of the local fields felt by neurons (akin to electrical stimulation) and of the global average field (representing total energy consumption). We find that initial degree distributions then evolve towards stationary states which can either be fairly homogeneous or highly heterogeneous, depending on parameters. The critical cases, which can result in scale-free distributions, are shown to correspond, under a mean-field approximation, to nonlinear drift-diffusion equations. We show how appropriate choices of parameters yield good quantitative agreement with published experimental data concerning synaptic densities during brain development (synaptic pruning).
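A sketch under stated assumptions: per time step, each potential synapse appears with probability p_add and each existing one disappears with probability p_del, both nonlinear (here sigmoidal, our choice) in a crude local field (in-degree) and in the global average field. The rates and sigmoidal forms are illustrative, not the paper's fitted functions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
A = rng.random((n, n)) < 0.05              # adjacency: which synapses exist
sigma = lambda z: 1.0 / (1.0 + np.exp(-z))

for t in range(100):
    h = A.sum(axis=1).astype(float)        # crude local field ~ in-degree
    H = h.mean()                           # global average field
    p_add = 1e-4 * sigma(h - H)[:, None]   # growth favored above the mean field
    p_del = 1e-3 * sigma(H - h)[:, None]   # pruning favored below the mean field
    A = A | (rng.random((n, n)) < p_add)   # stochastic synaptic growth
    A = A & (rng.random((n, n)) >= p_del)  # stochastic synaptic pruning

print(A.sum(axis=1).mean(), A.sum(axis=1).std())   # resulting degree statistics
```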
We describe our initial attempts to reconcile powerful neural network learning rules derived from computational principles with learning rules derived “bottom-up” from biophysical mechanisms. Using a biophysical model of synaptic plasticity (Shouval, Bear, and Cooper, 2002), we generated numerical synaptic learning rules and compared them to the performance of a Hebbian learning rule in a previously studied neural network model of self-organized learning.
Journal of Physiology-Paris, 2007
The aim of the present paper is to study the effects of Hebbian learning in random recurrent neural networks with biological connectivity, i.e. sparse connections and separate populations of excitatory and inhibitory neurons. We furthermore consider that the neuron dynamics may occur at a (shorter) time scale than synaptic plasticity and consider the possibility of learning rules with passive forgetting. We show that the application of such Hebbian learning leads to drastic changes in the network dynamics and structure. In particular, the learning rule contracts the norm of the weight matrix and yields a rapid decay of the dynamics complexity and entropy. In other words, the network is rewired by Hebbian learning into a new synaptic structure that emerges with learning on the basis of the correlations that progressively build up between neurons. We also observe that, within this emerging structure, the strongest synapses organize as a small-world network. The second effect of the decay of the weight matrix spectral radius consists in a rapid contraction of the spectral radius of the Jacobian matrix. This drives the system through the "edge of chaos" where sensitivity to the input pattern is maximal. Taken together, this scenario is remarkably predicted by theoretical arguments derived from dynamical systems and graph theory.
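The generic shape of Hebbian learning with passive forgetting, as described, is W <- (1 - lambda) W + alpha x x^T; the sketch below shows how the forgetting term contracts the weight matrix norm (and hence the spectral radius) during learning. The sparse mask, constants, and tanh rate dynamics are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
mask = rng.random((n, n)) < 0.1       # sparse biological connectivity
W = rng.normal(0, 0.3, (n, n)) * mask
x = rng.random(n)

lam, alpha = 0.01, 0.005              # forgetting rate and learning rate
for t in range(1000):
    x = np.tanh(W @ x)                                  # recurrent rate dynamics
    W = (1 - lam) * W + alpha * np.outer(x, x) * mask   # Hebb + passive forgetting

print(np.max(np.abs(np.linalg.eigvals(W))))             # contracted spectral radius
```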
Neural Computation, 1991
We show that a form of synaptic plasticity recently discovered in slices of the rat visual cortex can support an error-correcting learning rule. The rule increases weights when both pre- and postsynaptic units are highly active, and decreases them when presynaptic activity is high and postsynaptic activation is less than the threshold for weight increment but greater than a lower threshold. We show that this rule corrects false positive outputs in feedforward associative memory, that in an appropriate opponent-unit architecture it corrects misses, and that it performs better than the optimal Hebbian learning rule reported by .
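The two-threshold rule transcribes almost directly into code; the threshold values and learning rate below are illustrative.

```python
def rule(pre, post, w, theta_hi=0.8, theta_lo=0.3, eta=0.05):
    """Error-correcting rule with an LTP threshold and a lower LTD bound."""
    if pre > theta_hi and post > theta_hi:
        return w + eta   # both highly active: increase the weight
    if pre > theta_hi and theta_lo < post < theta_hi:
        return w - eta   # pre high, post in the intermediate window: decrease
    return w

w = rule(pre=0.9, post=0.9, w=0.5)   # -> 0.55 (potentiation)
w = rule(pre=0.9, post=0.5, w=w)     # -> 0.50 (depression)
```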
1993
Interest in the ANN field has recently focused on dynamical neural networks for performing temporal operations, as more realistic models of biological information processing, and to extend ANN learning techniques. While this represents a step towards realism, it is important to note that individual neurons are complex dynamical systems, interacting through nonlinear, nonmonotonic connections. The result is that the ANN concept of learning, even when applied to a single synaptic connection, is a nontrivial subject.
2003
We present a novel stochastic Hebb-like learning rule for neural networks. This learning rule is stochastic with respect to the selection of the time points at which a synaptic modification is induced by pre- and postsynaptic activation. Moreover, the learning rule affects not only the synapse between the pre- and postsynaptic neurons, which is called homosynaptic plasticity, but also further remote synapses of the pre- and postsynaptic neurons. This form of plasticity, called heterosynaptic plasticity, has recently attracted the interest of experimental investigations. Our learning rule gives a qualitative explanation of this kind of synaptic modification.
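Our schematic reading of the rule, with probabilities and amplitudes as assumptions: when a stochastically gated plasticity event fires, the activated (homosynaptic) connection changes most, and smaller changes spread to the other synapses of the same pre- and postsynaptic neurons (heterosynaptic plasticity).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
W = rng.normal(0, 0.1, (n, n))   # W[i, j]: synapse from neuron j to neuron i

def stochastic_update(i, j, pre, post, p=0.2, eta=0.05, spread=0.2):
    """With probability p, modify synapse (i, j) and spread a smaller change."""
    if rng.random() < p:             # plasticity induced only at random time points
        dw = eta * pre * post
        W[i, j] += dw                # homosynaptic change
        W[i, :] += spread * dw / n   # heterosynaptic: other inputs to neuron i
        W[:, j] += spread * dw / n   # heterosynaptic: other outputs of neuron j

stochastic_update(2, 7, pre=1.0, post=0.5)
```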
Ninth Workshop on Virtual Intelligence/Dynamic Neural Networks, 1999
This paper presents the double loop feedback model, which is used for structure and data flow modelling through reinforcement learning in an artificial neural network. We first consider physiological arguments suggesting that loops and double loops are widespread in the exchange flows of the central nervous system. We then demonstrate that the double loop pattern, named a mental object, works as a functional memory unit, and we describe the main properties of a double loop resonator built with the classical Hebb's law learning principle on a feedforward basis. In this model, we show how some mental objects aggregate into building blocks and what the properties of such blocks are. We propose the mental object block as the structure representing a concept in a neural network. We show how the local application of Hebb's law at the cell level leads to the concept of functional organization cost at the network level (upward effect), which explains spontaneous reorganization of mental blocks (downward effect). In this model, the simple Hebbian learning paradigm appears to have emergent effects in both upward and downward directions.