1993
Interest in the ANN field has recently focused on dynamical neural networks for performing temporal operations, as more realistic models of biological information processing, and as a way to extend ANN learning techniques. While this represents a step towards realism, it is important to note that individual neurons are complex dynamical systems, interacting through nonlinear, nonmonotonic connections. The result is that the ANN concept of learning, even when applied to a single synaptic connection, is a nontrivial subject.
International Journal of Intelligent Systems, 1995
Traditionally, associative memory models are based on point attractor dynamics, where a memory state corresponds to a stationary point in state space. However, biological neural systems seem to display a rich and complex dynamics whose function is still largely unknown. We use a neural network model of the olfactory cortex to investigate the functional significance of such dynamics, in particular with regard to learning and associative memory. The model uses simple network units, corresponding to populations of neurons connected according to the structure of the olfactory cortex. All essential dynamical properties of this system are reproduced by the model, especially oscillations at two separate frequency bands and aperiodic behavior similar to chaos. By introducing neuromodulatory control of gain and connection weight strengths, the dynamics can change dramatically, in accordance with the effects of acetylcholine, a neuromodulator known to be involved in attention and learning in animals. With computer simulations we show that these effects can be used for improving associative memory performance by reducing recall time and increasing fidelity. The system is able to learn and recall continuously as the input changes, mimicking a real world situation of an artificial or biological system in a changing environment.
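To make the idea of neuromodulatory gain control concrete, here is a generic excitatory/inhibitory population pair (Wilson-Cowan style) with a gain parameter standing in for neuromodulation. It is not the olfactory cortex model described above; every parameter value is an illustrative assumption.

```python
# Generic excitatory/inhibitory population pair (Wilson-Cowan style) with a
# gain parameter standing in for neuromodulatory control. Not the olfactory
# cortex model of the paper; all parameter values are illustrative assumptions.
import numpy as np

def simulate(gain, T=4000, dt=0.001):
    wee, wei, wie, wii = 12.0, 10.0, 12.0, 2.0      # population coupling strengths
    tau_e, tau_i, drive = 0.01, 0.02, 1.5           # time constants and external input
    f = lambda x: 1.0 / (1.0 + np.exp(-gain * (x - 2.0)))   # gain-modulated activation
    E, I = 0.1, 0.1
    trace = []
    for _ in range(T):
        E += dt / tau_e * (-E + f(wee * E - wei * I + drive))
        I += dt / tau_i * (-I + f(wie * E - wii * I))
        trace.append(E)
    return np.array(trace)

low_gain, high_gain = simulate(gain=0.8), simulate(gain=1.6)
# comparing the two traces shows how a single gain parameter reshapes the
# collective excitatory/inhibitory dynamics
```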
arXiv (Cornell University), 2021
The success of state-of-the-art machine learning is essentially all based on different variations of gradient descent algorithms that minimize some version of a cost or loss function. A fundamental limitation, however, is the need to train these systems in either supervised or unsupervised ways by exposing them to typically large numbers of training examples. Here, we introduce a fundamentally novel conceptual approach to machine learning that takes advantage of a neurobiologically derived model of dynamic signaling, constrained by the geometric structure of a network. We show that MNIST images can be uniquely encoded and classified by the dynamics of geometric networks with nearly state-of-the-art accuracy in an unsupervised way, and without the need for any training.
2019
One of the major challenges of computational cognitive neuroscience is to apply models of neural information processing to complex tasks. Hierarchical learning architectures like deep convolutional networks can be trained to solve tasks efficiently, but utilize simple mechanisms of activity integration and output generation. On the other hand, biologically plausible models of activation dynamics incorporate detailed mechanisms of changing membrane potentials and axonal firing properties. Making such elaborate models trainable requires learning of internal model parameters. Here, we propose to apply supervised learning and train a model of canonical cortical circuits via backpropagation through time. We train the model to settle to target equilibrium values, to generate oscillations, and to solve a contour completion task.
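A minimal sketch of this kind of training setup: a small rate-based recurrent circuit trained with backpropagation through time (PyTorch) so that its activity settles to a chosen target equilibrium. The sizes, time constants, and targets are assumptions; this is not the canonical cortical circuit model of the paper.

```python
# Minimal sketch (assumed sizes, time constants and targets; not the paper's
# canonical cortical circuit): a small rate-based recurrent circuit trained
# with backpropagation through time to settle to a target equilibrium.
import torch

n_units, n_steps, dt, tau = 8, 100, 1.0, 10.0
W = torch.nn.Parameter(0.1 * torch.randn(n_units, n_units))
b = torch.nn.Parameter(torch.zeros(n_units))
target = torch.linspace(0.1, 0.9, n_units)       # desired equilibrium rates
opt = torch.optim.Adam([W, b], lr=1e-2)

for epoch in range(500):
    r = torch.zeros(n_units)                     # initial rates
    for _ in range(n_steps):                     # unroll the rate dynamics
        r = r + (dt / tau) * (-r + torch.sigmoid(r @ W + b))
    loss = ((r - target) ** 2).mean()            # penalize only the final (settled) state
    opt.zero_grad()
    loss.backward()                              # gradients flow back through time
    opt.step()

print(float(loss))                               # should be small after training
```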
Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain’s ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-firing-rate (GLIFR) model, which is a rate-equivalent of the generalized-leaky-integrate-and-fire model. The GLIFR model has multiple dynamical mechanisms which add to the complexity of its activity while maintaining differentiability. We focus on the role of after-spike currents, currents induced or modulated by neuronal spikes, in producing rich temporal dynamics. We use machine learning techniques to learn both synaptic weights and parameters underlying intrinsic dynamics to solve temporal tasks. The GLIFR model allows us to use standard gradient...
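As an illustration of how an after-spike-like current can enrich a differentiable rate unit, here is a schematic single-unit simulation in the spirit of the GLIFR description; it does not reproduce the published GLIFR equations, and all parameters are assumptions.

```python
# Schematic single rate unit with an after-spike-like adaptation current.
# Illustrative only (assumed parameters); not the published GLIFR equations.
import numpy as np

dt, T = 1.0, 300
tau_v, tau_a = 20.0, 100.0          # membrane and adaptation-current time constants
k_a = 0.5                           # how strongly firing drives the adaptation current
v, a = 0.0, 0.0
rates = []
for t in range(T):
    I_ext = 1.0 if 50 <= t < 250 else 0.0          # step input
    r = 1.0 / (1.0 + np.exp(-(v - 0.5) / 0.1))     # smooth, differentiable rate
    a += dt / tau_a * (-a + k_a * r)               # adaptation current tracks the rate
    v += dt / tau_v * (-v + I_ext - a)             # and feeds back negatively on the voltage
    rates.append(r)
# 'rates' rises with the step input and then sags as the adaptation current builds up
```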
Physica A: Statistical Mechanics and its Applications, 2004
A feed-forward neural net with adaptable synaptic weights and fixed, zero or non-zero threshold potentials is studied, in the presence of a global feedback signal that can only have two values, depending on whether the output of the network in reaction to its input is right or wrong. It is found, on the basis of four biologically motivated assumptions, that only two forms of learning are possible, Hebbian and Anti-Hebbian learning. Hebbian learning should take place when the output is right, while there should be Anti-Hebbian learning when the output is wrong. For the Anti-Hebbian part of the learning rule a particular choice is made, which guarantees an adequate average neuronal activity without the need of introducing, by hand, control mechanisms like extremal dynamics. A network with realistic, i.e., non-zero threshold potentials is shown to perform its task of realizing the desired input-output relations best if it is sufficiently diluted, i.e. if only a relatively low fraction of all possible synaptic connections is realized.
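The two learning regimes can be illustrated with a toy reward-gated perceptron: Hebbian updates when the global feedback signal says the output is right, sign-flipped (anti-Hebbian) updates when it is wrong. The particular anti-Hebbian form chosen in the paper is not reproduced here; this sketch uses generic assumed parameters.

```python
# Toy sketch of the two regimes described above: Hebbian updates when a global
# two-valued feedback signal says the output is right, anti-Hebbian (sign-flipped)
# updates when it is wrong. Generic assumed parameters; not the paper's
# particular anti-Hebbian choice.
import numpy as np

rng = np.random.default_rng(0)
n_in, eta = 20, 0.05
w = rng.normal(scale=0.1, size=n_in)               # adaptable synaptic weights
X = rng.choice([0.0, 1.0], size=(500, n_in))       # random binary input patterns
y = (X @ rng.normal(size=n_in) > 0).astype(float)  # arbitrary target input-output relation

for x, target in zip(X, y):
    out = float(x @ w > 0)                         # binary output unit
    feedback_right = (out == target)               # global feedback: right or wrong
    hebb = x * out                                 # pre/post coincidence term
    w += eta * hebb if feedback_right else -eta * hebb
```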
Journal of Medical …, 2009
Learning is a process that involves multiple timescales. In biology, changes lasting from milliseconds to seconds and from hours to days are the main mediators of short-term and long-term memory formation. Memory formation is neither static nor restricted to a single phase of life: whether our actions succeed or fail, we learn from them and acquire knowledge that makes it easier to handle similar events in the future. Continuous learning in a dynamic environment is therefore a necessary requirement for research interested in studying phenomena such as addiction, stress, and noise in such dynamic learning environments. This research proposes a new approach to modelling the nervous system with the intention of implementing learning in a dynamic environment.
Neural Computation, 2000
Experimental data show that biological synapses behave quite differently from the symbolic synapses used in all common artificial neural network models. Biological synapses are dynamic, i.e., their "weight" changes on a short time scale by several hundred percent, depending on the past input to the synapse. In this article we address the question of how this inherent synaptic dynamics (which should not be confused with long-term "learning") affects the computational power of a neural network. In particular, we analyze computations on temporal and spatio-temporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer, such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. Our characterization result provides, for all nonlinear filters that are approximable by Volterra series, a new complexity hierarchy which is related to the cost of implementing such filters in neural systems.
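To make "dynamic synapse" concrete, the sketch below implements a common phenomenological model of short-term synaptic dynamics (Tsodyks-Markram-style depression and facilitation). It is meant only to show how synaptic efficacy can change by large factors on a fast time scale as a function of recent presynaptic spikes; it is not necessarily the exact synapse model analyzed in the paper, and the parameters are assumptions.

```python
# Common phenomenological model of short-term synaptic dynamics
# (Tsodyks-Markram-style depression/facilitation). Illustrative only; the
# parameters are assumptions, not taken from the paper.
import numpy as np

def dynamic_synapse(spike_times, U=0.2, tau_rec=0.5, tau_fac=0.3):
    """Return the effective efficacy of each presynaptic spike."""
    x, u, last_t = 1.0, U, None            # resources, utilization, previous spike time
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover
            u = U + (u - U) * np.exp(-dt / tau_fac)       # facilitation decays
        eff = u * x                          # amplitude transmitted by this spike
        x -= eff                             # deplete resources
        u += U * (1.0 - u)                   # facilitate
        efficacies.append(eff)
        last_t = t
    return efficacies

print(dynamic_synapse([0.0, 0.05, 0.10, 0.15, 0.60]))   # a burst, then a late spike
```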
New Journal of Physics, 2007
kind of dynamical behavior in a controllable fashion and in a manner applicable to a variety of starting systems. Viz., we are interested in neural networks that generate transient-state dynamics in the form of a meaningful time series of states approaching predefined attractor ruins arbitrarily closely.
The European Physical Journal Special Topics, 2007
This paper presents an overview of some techniques and concepts from dynamical systems theory used for the analysis of dynamical neural network models. In a first section, we describe the dynamics of the neuron, starting from the Hodgkin-Huxley description, which is in some sense the canonical description of the "biological neuron". We discuss some models reducing the Hodgkin-Huxley model to a two-dimensional dynamical system while keeping one of the main features of the neuron: its excitability. We then present examples of phase diagrams and bifurcation analysis for the Hodgkin-Huxley equations. Finally, we end this section with a dynamical-systems analysis of nerve impulse propagation along the axon. In a second section, we consider neuron couplings, with a brief description of synapses, synaptic plasticity, and learning. We also briefly discuss the delicate issue of causal action from one neuron to another when complex feedback effects and nonlinear dynamics are involved. The third section presents the limit of weak coupling and the use of normal form techniques to handle this situation. We then consider several examples of recurrent models with different types of synaptic interactions (symmetric, cooperative, random). We introduce various techniques coming from statistical physics and dynamical systems theory. A last section is devoted to a detailed example of a recurrent model, for which we analyze the dynamics in depth and discuss the effect of learning on the neuron dynamics. We also present recent methods allowing the analysis of the nonlinear effects of the neural dynamics on signal propagation and causal action. An appendix presenting the main notions of dynamical systems theory useful for the comprehension of the chapter has been added for the convenience of the reader.
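As an example of the kind of two-dimensional reduction of Hodgkin-Huxley excitability discussed above, here is a forward-Euler simulation of the FitzHugh-Nagumo model, a standard textbook reduction; it is not taken from the paper, and the parameter values are the usual illustrative ones.

```python
# FitzHugh-Nagumo: a standard two-dimensional reduction of Hodgkin-Huxley-type
# excitability, integrated with forward Euler. Illustrative textbook parameters,
# not values from the paper.
import numpy as np

a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, T = 0.1, 2000
v, w = -1.0, -0.5
trace = []
for _ in range(T):
    dv = v - v**3 / 3.0 - w + I        # fast voltage-like variable
    dw = eps * (v + a - b * w)         # slow recovery variable
    v, w = v + dt * dv, w + dt * dw
    trace.append(v)
# 'trace' shows repetitive spiking for this value of the injected current I
```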
The first few pages of any good introductory book on neurocomputing contain a cursory description of neurophysiology and how it has been abstracted to form the basis of artificial neural networks as we know them today. In particular, artificial neurons simplify considerably the behavior of their biological counterparts. It is our view that in order to gain a better understanding of how biological systems learn and remember it is necessary to have accurate models on which to base computerized experimentation. In this paper we describe an artificial neuron that is more realistic than most other models used currently. The model is based on conventional artificial neural networks (and is easily computerized) and is currently being used in our investigations into learning and memory.
arXiv (Cornell University), 2019
The task of the brain is to look for structure in the external input. We study a network of integrate-and-fire neurons with several types of recurrent connections that learns the structure of its time-varying feedforward input by attempting to efficiently represent this input with spikes. The efficiency of the representation arises from incorporating the structure of the input into the decoder, which is implicit in the learned synaptic connectivity of the network. While in the original work of [Boerlin, Machens, Denève 2013] and [Brendel et al., 2017] the structure learned by the network to make the representation efficient was the low dimensionality of the feedforward input, in the present work it is its temporal dynamics. The network achieves the efficiency by adjusting its synaptic weights in such a way that, for any neuron in the network, the recurrent input cancels the feedforward input most of the time. We show that if the only temporal structure the input possesses is that it changes slowly on the time scale of neuronal integration, the dimensionality of the network dynamics is equal to the dimensionality of the input. However, if the input follows a linear differential equation of the first order, the efficiency of the representation can be increased by increasing the dimensionality of the network dynamics relative to the dimensionality of the input. If there is only one type of slow synaptic current in the network, the increase is twofold, while if there are two types of slow synaptic currents that decay with different rates and whose amplitudes can be adjusted separately, it is advantageous to make the increase threefold. We numerically simulate the network with synaptic weights that imply the most efficient input representation in the above cases. We also propose a learning rule by means of which the corresponding synaptic weights can be learned.
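A compact sketch of the spike-coding idea referenced above, following the general Boerlin-Machens-Denève construction: neurons spike so that a linear decoder tracks the input, and fixed recurrent weights of the form -D^T D cancel most of the feedforward drive. The network sizes, constants, and the one-spike-per-step rule are assumptions for illustration, not the higher-dimensional networks analyzed in the paper, and no learning rule is included.

```python
# Compact sketch of the spike-coding scheme: neurons spike so that a linear
# decoder tracks the input, and fixed recurrent weights (-D^T D) cancel most
# of the feedforward drive. Sizes and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_dim = 40, 2
dt, lam = 1e-3, 10.0                               # time step, decoder decay rate
D = 0.1 * rng.standard_normal((n_dim, n_neurons))  # decoding weights
thresh = 0.5 * np.sum(D**2, axis=0)                # spiking thresholds
V = np.zeros(n_neurons)                            # voltages ~ projected decoding error
r = np.zeros(n_neurons)                            # filtered spike trains
errs = []
for step in range(5000):
    t = step * dt
    x = np.array([np.sin(2*np.pi*t), np.cos(2*np.pi*t)])               # slow input
    xdot = 2*np.pi*np.array([np.cos(2*np.pi*t), -np.sin(2*np.pi*t)])
    V += dt * (-lam * V + D.T @ (xdot + lam * x))                       # feedforward drive
    r -= dt * lam * r
    i = int(np.argmax(V - thresh))                 # at most one spike per time step
    if V[i] > thresh[i]:
        V -= D.T @ D[:, i]                         # fast recurrent cancellation
        r[i] += 1.0
    errs.append(np.linalg.norm(x - D @ r))         # decoding error stays small once tracking
```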
PLoS ONE, 2011
Competition between synapses arises in some forms of correlation-based plasticity. Here we propose a game-theory-inspired model of synaptic interactions whose dynamics is driven by competition between synapses in their weak and strong states, which are characterized by different timescales. The learning of inputs and memory are meaningfully definable in an effective description of networked synaptic populations. We study, numerically and analytically, the dynamic responses of the effective system to various signal types, particularly with reference to an existing empirical motor adaptation model. The dependence of the system-level behavior on the synaptic parameters and the signal strength is brought out in a clear manner, thus illuminating issues such as those of optimal performance and the functional role of multiple timescales.
Artificial Neural Networks - Architectures and Applications, 2013
Biological Cybernetics, 1982
Massively parallel (neural-like) networks are receiving increasing attention as a mechanism for expressing information processing models. By exploiting powerful primitive units and stability-preserving construction rules, various workers have been able to construct and test quite complex models, particularly in vision research. But all of the detailed technical work was concerned with the structure and behavior of fixed networks. The purpose of this paper is to extend the methodology to cover several aspects of change and memory.
Computational Intelligence, 1987
This paper presents an overview and analysis of learning in Artificial Neural Systems (ANSs). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANSs is then described and compared with classical machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analysed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomised, and where possible, the limitations inherent to specific classes of rules are outlined.
1970
In this paper we present a class of nonlinear neural network models and an associated learning algorithm that always converges to a set of network parameters (e.g., the connection weights) such that the error between the network trajectories and the desired trajectories vanishes, for all initial conditions and system inputs. Our models are the well known class of additive neural networks. We show that additive networks are one instance of the class of models whose dynamics can be decomposed into gradient and Hamiltonian portions. Furthermore, we use the Hamiltonian potential function to prove that additive networks possess bounded-input bounded-state stability for some minor restrictions on the node output functions. Also we present a condition for persistent excitation which guarantees that the parameter estimates converge to the actual parameter values. Since parameter convergence means that the model accurately reproduces the system outputs for all inputs, it is analogous to the notion of good generalization. Lastly, we discuss the relationship between parameter convergence and model controllability.
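The additive model referred to above has the standard form dx_i/dt = -a_i x_i + sum_j w_ij f(x_j) + I_i. The short simulation below integrates these dynamics with bounded (tanh) node output functions; the sizes, weights, and inputs are assumptions, and the paper's parameter-learning algorithm is not included.

```python
# Forward-Euler simulation of an additive neural network,
#   dx_i/dt = -a_i x_i + sum_j w_ij f(x_j) + I_i ,
# with bounded (tanh) node outputs. Sizes, weights and inputs are illustrative
# assumptions; the paper's learning algorithm is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
n, dt, T = 5, 0.01, 5000
a = np.ones(n)                        # decay rates
W = rng.standard_normal((n, n))       # connection weights (the learnable parameters)
I = rng.standard_normal(n)            # constant external inputs
x = np.zeros(n)
for _ in range(T):
    x += dt * (-a * x + W @ np.tanh(x) + I)
# with bounded f(.) = tanh, the state remains bounded (bounded-input bounded-state)
print(x)
```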
Journal of Statistical Physics, 1986
A new learning mechanism is proposed for networks of formal neurons analogous to Ising spin systems; it brings such models substantially closer to biological data in three respects: first, the learning procedure is applied initially to a network with random connections (which may be similar to a spin-glass system), instead of starting from a system void of any knowledge (as in the Hopfield model); second, the resultant couplings are not symmetrical; third, patterns can be stored without changing the sign of the coupling coefficients. It is shown that the storage capacity of such networks is similar to that of the Hopfield network, and that it is not significantly affected by the restriction of keeping the couplings' signs constant throughout the learning phase. Although this approach does not claim to model the central nervous system, it provides new insight on a frontier area between statistical physics, artificial intelligence, and neurobiology.
Theory in Biosciences, 2006
We study a learning rule based upon the temporal correlation (weighted by a learning kernel) between incoming spikes and the internal state of the postsynaptic neuron, building upon previous studies of spike-timing-dependent synaptic plasticity ([2, 3, 6]). Our learning rule for the synaptic weight $w_{ij}$ is
$$\dot{w}_{ij}(t) = \epsilon \int_{-\infty}^{\infty} \left[ \frac{1}{T_l} \int_{t-T_l}^{t} \sum_{\mu} \delta(\tau + s - t_{j,\mu})\, u(\tau)\, d\tau \right] \Gamma(s)\, ds,$$
where the $t_{j,\mu}$ are the arrival times of spikes from the presynaptic neuron $j$ and the function $u(t)$ describes the state of the postsynaptic neuron $i$. Thus, the spike-triggered average contained in the inner integral is weighted by a kernel $\Gamma(s)$, the learning window, which is positive for negative and negative for positive values of the time difference $s$ between post- and presynaptic activity. An antisymmetry assumption for the learning window enables us to derive analytical expressions for a general class of neuron models and to study the changes in input-output relationships following from synaptic weight changes. This is a genuinely non-linear effect ([16]).
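A direct numerical reading of the rule: for each presynaptic spike time and lag s, the postsynaptic state u is sampled at the spike time minus s, weighted by the learning window Gamma(s), and averaged over the recent window of length T_l. The antisymmetric window, the state u(t), and the spike train used below are assumed illustrative choices, not those analyzed in the paper.

```python
# Discretized evaluation of the kernel-weighted spike-triggered rule above.
# The window Gamma, the state u(t) and the spike train are assumed choices.
import numpy as np

dt, T_l, eps, tau = 1e-3, 0.5, 0.01, 0.02
lags = np.arange(-0.1, 0.1, dt)
Gamma = -np.sign(lags) * np.exp(-np.abs(lags) / tau)   # antisymmetric learning window
t_grid = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * 5 * t_grid)                     # postsynaptic state (assumed)
pre_spikes = np.arange(0.05, 1.0, 0.11)                # presynaptic spike times (assumed)

def dw(t):
    """Instantaneous weight change dw_ij/dt at time t (discretized integrals)."""
    total = 0.0
    for s_k, g in zip(lags, Gamma):                    # outer integral over the lag s
        tau_eval = pre_spikes - s_k                    # delta functions fire at t_mu - s
        valid = ((tau_eval > t - T_l) & (tau_eval <= t)
                 & (tau_eval >= t_grid[0]) & (tau_eval <= t_grid[-1]))
        total += g * np.interp(tau_eval[valid], t_grid, u).sum() * dt
    return eps * total / T_l

print(dw(0.8))
```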
Lecture Notes in Computer Science, 2010
Memory is often considered to be embedded in one of the attractors of a neural dynamical system, which provides an appropriate output depending on the initial state specified by an input. However, memory is recalled only in the presence of external inputs; without such inputs, neural states do not provide the memorized outputs. Hence, each memory does not necessarily correspond to an attractor of the dynamical system without input, but rather to an attractor of the dynamical system with input. With this background, we propose that memory recall occurs when the neural activity changes to an appropriate output activity upon the application of an input. We introduce a neural network model that enables learning of such memories. After the learning process is complete, the neural dynamics is shaped so that it changes to the desired target with each input. This change is analyzed as a bifurcation in a dynamical system. Conditions on the timescales of synaptic plasticity are obtained to achieve the maximal memory capacity.