Cahiers du Centre de Recherche Viabilité, Jeux, Contrôle, 1998
Experimental results on the parieto-frontal cortical network clearly show that (1) in all parietal and frontal cortical areas involved in reaching, more than one signal influences the activity of individual neurons, supporting the learning of a large set of visual-to-motor transformations, and (2) these neurons enjoy gating properties that can be simply modeled by "tensor products" of vectorial inputs, known in the language of neural networks as Sigma-Pi (Σ-Π) units.
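A minimal sketch of such a Sigma-Pi (gating) unit, assuming two input vectors (say, a visual signal and a gating posture signal) and a third-order weight tensor; the names and shapes here are illustrative, not the paper's:

```python
import numpy as np

def sigma_pi_layer(x, y, W):
    """Sigma-Pi units: each output k sums over products of input pairs,
    out[k] = sum_ij W[k, i, j] * x[i] * y[j],
    i.e. the weight tensor contracted with the tensor (outer) product x ⊗ y,
    so y multiplicatively gates the response to x."""
    return np.einsum('kij,i,j->k', W, x, y)

rng = np.random.default_rng(0)
x = rng.normal(size=5)            # e.g. a retinal (visual) input vector
y = rng.normal(size=4)            # e.g. a gating eye-position/posture signal
W = rng.normal(size=(3, 5, 4))    # hypothetical learned weight tensor
print(sigma_pi_layer(x, y, W))    # 3 gated output activities
```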
Frontiers in Neuroscience, 2019
Neurons in the dorsal pathway of the visual cortex are thought to be involved in motion processing. The first site of motion processing is the primary visual cortex (V1), which encodes the direction of motion in local receptive fields; higher-order motion processing takes place in the middle temporal area (MT), and complex motion properties such as optic flow are processed in still higher cortical areas such as the medial superior temporal (MST) area. In this study, a hierarchical neural field network model of motion processing is presented. The model architecture has an input layer followed by either one neural field (NF) or a cascade of two: the first, NF1, represents V1, while the second, NF2, represents MT. A special feature of the model is that the lateral connections in the neural fields are trained by asymmetric Hebbian learning, giving the neural field the ability to process sequential information in motion stimuli. The model was trained on various traditional moving patterns such as bars, squares, gratings, plaids, and random dot stimuli. For bar stimuli, the model had only a single NF, whose neurons developed a direction map of the moving bar stimuli. Training a network with two NFs on moving squares and moving plaids, we show that, while the neurons in NF1 respond to the direction of component motion (such as gratings and edges), the neurons in NF2 (analogous to MT) respond to the direction of pattern motion (plaids, square objects). In a third study, a network with two NFs was simulated using random dot stimuli (RDS) with translational motion, and we show that the NF2 neurons can encode the direction of the concurrent dot motion (also called translational flow motion) independently of the dot configuration. This translational RDS flow motion is decoded by a simple perceptron network (a layer above NF2) with an accuracy of 100% on the training set and 90% on the test set, demonstrating that the proposed network can generalize to new dot configurations. Moreover, the response properties of the model under different input stimuli closely resemble many known features of the neurons reported in electrophysiological studies.
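The asymmetric Hebbian rule for the lateral connections can be sketched as follows; this is a generic temporal-order Hebbian update under loose assumptions, not the paper's exact equations:

```python
import numpy as np

def asymmetric_hebbian_update(L, r_prev, r_now, eta=0.01):
    """Asymmetric (temporal) Hebbian rule for lateral weights L:
    L[i, j] += eta * r_now[i] * r_prev[j], strengthening j -> i when
    j fires just before i.  Unlike symmetric Hebbian learning, L[i, j]
    and L[j, i] evolve differently, so the field encodes motion direction."""
    L += eta * np.outer(r_now, r_prev)
    np.fill_diagonal(L, 0.0)              # no self-connections
    return L

N = 8
L = np.zeros((N, N))
# toy "moving bar": an activity bump shifts one neuron per time step
for t in range(N - 1):
    r_prev = np.eye(N)[t]                 # activity at step t
    r_now = np.eye(N)[t + 1]              # activity at step t+1
    L = asymmetric_hebbian_update(L, r_prev, r_now, eta=0.1)
print(np.round(L, 2))   # entries L[i+1, i] dominate: the learned direction
```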
Zeitschrift für Naturforschung C, Journal of Biosciences
We present a simplified binocular neural network model of the primary visual cortex with separate ON/OFF pathways and modifiable afferent as well as intracortical synaptic couplings. Random as well as natural image stimuli drive the weight adaptation, which follows Hebbian learning rules stabilized by constant-norm and constant-sum constraints. The simulations consider the development of orientation and ocular dominance maps under different conditions concerning stimulus patterns and lateral couplings. With random input patterns, realistic orientation maps with ±1/2 vortices mostly develop, and plastic lateral couplings self-organize, on average, into Mexican-hat-type structures. Using natural greyscale images as input patterns, realistic orientation maps develop as well, and the lateral coupling profiles of the cortical neurons represent the two-point correlations of the input image used.
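The stabilized Hebbian updates described above can be sketched as below; these are minimal illustrative forms, and the paper's exact normalization scheme may differ:

```python
import numpy as np

def hebb_constant_norm(w, x, y, eta=0.01):
    """Hebbian increment followed by renormalization to constant L2 norm,
    which prevents unbounded weight growth."""
    w = w + eta * y * x
    return w / np.linalg.norm(w)

def hebb_constant_sum(w, x, y, eta=0.01):
    """Hebbian increment followed by rescaling to a constant weight sum
    (divisive variant shown; subtractive variants are also common)."""
    w = w + eta * y * x
    return w / w.sum()                    # keeps sum(w) == 1

rng = np.random.default_rng(1)
w = np.abs(rng.normal(size=16)); w /= w.sum()
for _ in range(100):
    x = np.abs(rng.normal(size=16))       # toy ON-pathway input patch
    y = float(w @ x)                      # linear postsynaptic response
    w = hebb_constant_sum(w, x, y)
print(w.sum())                            # stays at 1.0
```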
Formal Aspects of Computing, 1996
In this paper we address the question of how interactions affect the formation and organization of receptive fields in a network composed of interacting neurons with Hebbian-type learning. We show how to partially decouple single cell effects from network effects, and how some phenomenological models can be seen as approximations to these learning networks. We show that the interaction affects the structure of receptive fields. We also demonstrate how the organization of different receptive fields across the cortex is influenced by the interaction term, and that the type of singularities depends on the symmetries of the receptive fields.
Lecture Notes in Computer Science, 2009
Spiking neural P systems and artificial neural networks are computational devices which share a biological inspiration based on the flow of information among neurons. In this paper we present a first model for Hebbian learning in the framework of spiking neural P systems by using concepts borrowed from neuroscience and artificial neural network theory.
Neural Networks, 2004
A toy model of a neural network in which both Hebbian learning and reinforcement learning occur is studied. The problem of 'path interference', which causes the neural net to quickly forget previously learned input-output relations, is tackled by adding a Hebbian term (proportional to the learning rate η) to the reinforcement term (proportional to ρ) in the learning rule. It is shown that the number of learning steps is reduced considerably if 1/4 < η/ρ < 1/2, i.e., if the Hebbian term is neither too small nor too large compared to the reinforcement term.
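A hedged sketch of such a combined rule (variable names hypothetical): the weight change is a Hebbian term scaled by η plus a reward-gated term scaled by ρ, with the abstract's regime 1/4 < η/ρ < 1/2:

```python
import numpy as np

def combined_update(w, x, y, reward, eta, rho):
    """Learning rule combining a Hebbian term (weight eta) with a
    reinforcement term (weight rho):
        dw = eta * y*x  +  rho * reward * y*x
    The Hebbian part consolidates the current input-output mapping and
    so reduces 'path interference' (forgetting of earlier associations)."""
    return w + eta * y * x + rho * reward * y * x

rng = np.random.default_rng(2)
w = rng.normal(size=4) * 0.1
x = rng.normal(size=4)
y = np.tanh(w @ x)                    # postsynaptic activity
reward = 1.0 if y > 0 else -1.0       # toy reinforcement signal
# abstract's regime: eta/rho = 0.3, inside (1/4, 1/2)
w = combined_update(w, x, y, reward, eta=0.03, rho=0.1)
```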
Neuroscience, 1997
A central problem in learning theory is how the vertebrate brain processes reinforcing stimuli in order to master complex sensorimotor tasks. This problem belongs to the domain of supervised learning, in which errors in the response of a neural network serve as the basis for modification of synaptic connectivity in the network and thereby train it on a computational task. The model presented here shows how a reinforcing feedback can modify synapses in a neuronal network according to the principles of Hebbian learning. The reinforcing feedback steers synapses towards long-term potentiation or depression by critically influencing the rise in postsynaptic calcium, in accordance with findings on synaptic plasticity in mammalian brain. An important feature of the model is the dependence of modification thresholds on the previous history of reinforcing feedback processed by the network. The learning algorithm trained networks successfully on a task in which a population vector in the motor output was required to match a sensory stimulus vector presented shortly before. In another task, networks were trained to compute coordinate transformations by combining different visual inputs. The model continued to behave well when simplified units were replaced by single-compartment neurons equipped with several conductances and operating in continuous time. This novel form of reinforcement learning incorporates essential properties of Hebbian synaptic plasticity and thereby shows that supervised learning can be accomplished by a learning rule similar to those used in physiologically plausible models of unsupervised learning. The model can be crudely correlated to the anatomy and electrophysiology of the amygdala, prefrontal and cingulate cortex and has predictive implications for further experiments on synaptic plasticity and learning processes mediated by these areas.
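One way to read this mechanism into code, as a sketch under loose assumptions rather than the paper's equations: reinforcing feedback modulates a calcium-like postsynaptic variable, whose level relative to LTD/LTP thresholds decides the sign of the Hebbian change, and the thresholds track the reinforcement history:

```python
def plasticity_step(w, pre, post, reinforcement, theta_ltd, theta_ltp,
                    eta=0.01, theta_rate=0.001):
    """Reinforcement-steered Hebbian plasticity via a calcium-like variable:
    feedback boosts (or suppresses) the postsynaptic 'calcium' signal;
    calcium above theta_ltp drives LTP, between theta_ltd and theta_ltp
    drives LTD, below theta_ltd leaves the synapse unchanged.  The
    thresholds slowly track the running reinforcement history."""
    calcium = post * (1.0 + reinforcement)    # feedback modulates the Ca rise
    if calcium > theta_ltp:
        w += eta * pre * post                 # long-term potentiation
    elif calcium > theta_ltd:
        w -= eta * pre * post                 # long-term depression
    theta_ltp += theta_rate * reinforcement   # history-dependent thresholds
    theta_ltd += theta_rate * reinforcement
    return w, theta_ltd, theta_ltp

w, lo, hi = 0.5, 0.2, 0.8
w, lo, hi = plasticity_step(w, pre=1.0, post=0.9, reinforcement=1.0,
                            theta_ltd=lo, theta_ltp=hi)   # rewarded pairing -> LTP
```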
The Generalized Hebbian Algorithm has been proposed for training linear feedforward neural networks and has been proven to cause the weights to converge to the eigenvectors of the input distribution (Sanger 1989a, b). For an input distribution given by 2D Gaussian-smoothed white noise inside a Gaussian window, some of the masks learned by the Generalized Hebbian Algorithm resemble edge and bar detectors. Since these do not match the form of the actual eigenvectors of this distribution, we seek an explanation of the development of the masks prior to complete convergence to the correct solution. Analysis in the spatial and spatial-frequency domains sheds light on this development, and shows that the masks which occur tend to be localized in the spatial-frequency domain, reminiscent of one of the properties of 2D Gabor filters proposed as a model for the receptive fields of cells in primate visual cortex.
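For reference, a standard implementation of the Generalized Hebbian Algorithm (Sanger's rule); the input here is plain Gaussian noise as a placeholder for the paper's smoothed-noise patches:

```python
import numpy as np

def gha_step(W, x, eta=0.001):
    """One step of the Generalized Hebbian Algorithm (Sanger 1989):
        dW = eta * ( y x^T - LT(y y^T) W ),   y = W x,
    where LT(.) keeps the lower-triangular part.  The rows of W converge
    to the leading eigenvectors of the input covariance, in order."""
    y = W @ x
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 64)) * 0.1    # 4 units, 8x8 input patches
for _ in range(20000):
    x = rng.normal(size=64)           # stand-in for the smoothed-noise input
    W = gha_step(W, x)
print(np.round(W @ W.T, 2))           # rows become near-orthonormal
```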
Advances in Artificial Life, ECAL 2013, 2013
Hebbian learning is a classical unsupervised learning algorithm used in neural networks. Its particularity is to transcribe the correlations between a pair of neurons into their connecting synapse. From this idea, we created a robotic task in which two sensory modalities indicate the same target, in order to find out whether a neural network equipped with Hebbian learning could naturally exploit the relation between those modalities. Another question we explored is the difference, in terms of learning, between a feedforward neural network (FNN) and a spiking neural network (SNN). Our results indicate that an FNN can partially exploit the relation between the modalities and the task when receiving feedback from a teacher. We also found that an SNN could not complete the task because of the nature of the Hebbian learning modeled.
Journal of Mathematical Analysis and Applications, 2002
We study the convergence behavior of a learning model with generalized Hebbian synapses.
Physical Review E, 1999
A correlation-based ('Hebbian') learning rule at a spike level with millisecond resolution is formulated, mathematically analyzed, and compared with learning in a firing-rate description. The relative timing of presynaptic and postsynaptic spikes influences synaptic weights via an asymmetric 'learning window'. A differential equation for the learning dynamics is derived under the assumption that the time scales of learning and neuronal spike dynamics can be separated. The differential equation is solved for a Poissonian neuron model with stochastic spike arrival. It is shown that correlations between input and output spikes tend to stabilize structure formation. With an appropriate choice of parameters, learning leads to an intrinsic normalization of the average weight and the output firing rate. Noise generates diffusion-like spreading of synaptic weights.
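The asymmetric learning window is commonly written as a pair of exponentials; a minimal sketch with hypothetical parameter values and sign convention:

```python
import numpy as np

def learning_window(dt, A_plus=1.0, A_minus=-1.0,
                    tau_plus=10.0, tau_minus=10.0):
    """Asymmetric 'learning window' W(dt), with dt = t_post - t_pre in ms:
    a pre-before-post pairing (dt > 0) potentiates, post-before-pre
    (dt < 0) depresses, each decaying exponentially with |dt|."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    A_minus * np.exp(dt / tau_minus))

print(learning_window([-20, -5, 0, 5, 20]))   # depression ... potentiation
```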