2019
…
166 pages
I want to thank the people of México whose contributions allowed the National Council for Science and Technology (CONACyT) to sponsor my PhD studies. I would like to show my deepest appreciation to my colleagues John V. Woods, Simon Davidson, Michael Hopkins, Luis Plana and Jim Garside for the diverse topics we had the chance to discuss. Their knowledge has nurtured my understanding of a broad range of topics. My eternal gratitude goes to James Knight, Alan Stokes, Andrew Rowley and Oliver Rhodes for their help in disentangling my view of the SpiNNaker software stack. Many thanks to my lab partners Qian Liu, Petruţ Bogdan, Mantas Mikaitis, Robert James, Patrick Camilleri and James Knight for the talks and collaborations which made my research possible. The path towards the PhD would not have been bearable without the presence of my friends. I would like to thank
Interface focus, 2018
State-of-the-art computer vision systems use frame-based cameras that sample the visual scene as a series of high-resolution images. These are then processed by convolutional neural networks built from neurons with continuous outputs. Biological vision systems use a quite different approach, where the eyes (cameras) sample the visual scene continuously, often with a non-uniform resolution, and generate neural spike events in response to changes in the scene. The resulting spatio-temporal patterns of events are then processed through networks of spiking neurons. Such event-based processing offers advantages in terms of focusing constrained resources on the most salient features of the perceived scene, and those advantages should also accrue to engineered vision systems based upon similar principles. Event-based vision sensors, and event-based processing exemplified by the SpiNNaker (Spiking Neural Network Architecture) machine, can be used to model the biological vision pathway at vari...
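As a rough illustration of the event-based sampling described in this abstract (not code from the paper), the sketch below derives ON/OFF events from the change in log-intensity between two frames; the function name and threshold are hypothetical, and a real event sensor does this asynchronously per pixel rather than from discrete frames.

```python
import numpy as np

def frame_to_events(prev_frame, new_frame, threshold=0.15):
    """Emit ON/OFF events where log-intensity changed by more than `threshold`.

    Hypothetical DVS-style illustration: static regions produce no events,
    so processing effort concentrates on the parts of the scene that change.
    """
    eps = 1e-6
    delta = np.log(new_frame + eps) - np.log(prev_frame + eps)
    on_y, on_x = np.where(delta > threshold)     # brightness increased
    off_y, off_x = np.where(delta < -threshold)  # brightness decreased
    events = [(x, y, +1) for x, y in zip(on_x, on_y)]
    events += [(x, y, -1) for x, y in zip(off_x, off_y)]
    return events

# Example: a mostly static scene yields very little data
prev = np.full((4, 4), 0.5)
new = prev.copy()
new[1, 2] = 0.9                      # one pixel brightens
print(frame_to_events(prev, new))    # -> [(2, 1, 1)]
```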
Proceedings of the IEEE, 2000
This paper describes the design of a massively parallel computer that is suitable for computational neuroscience modeling of large-scale spiking neural networks in biological real time.
Cognitive Science, 2004
Computing in Science & Engineering, 2000
SpiNNaker is a massively parallel architecture with more than a million processing cores that can model up to 1 billion spiking neurons in biological real time.
Frontiers in Neuroscience, 2013
In this paper we present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network that, operating together with recently proposed PolyChronous Networks (PCN), enables rapid, unsupervised, scale and rotation invariant object recognition using efficient spatio-temporal spike coding. The RPN has been developed as a hardware solution linking previously implemented neuromorphic vision and memory structures capable of delivering end-to-end high-speed, low-power and low-resolution recognition for mobile and autonomous applications where slow, highly sophisticated and power hungry signal processing solutions are ineffective. Key aspects in the proposed approach include utilising the spatial properties of physically embedded neural networks and propagating waves of activity therein for information processing, using dimensional collapse of imagery information into amenable temporal patterns and the use of asynchronous frames for information binding.
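A minimal sketch of the "dimensional collapse" idea mentioned above, under my own simplifying assumptions rather than the RPN circuit itself: a wave expanding from the image centroid reaches each ring of pixels at a later time step, so the activity summed per ring becomes a 1-D temporal signature that is unchanged (up to pixelation) by rotation about the centroid.

```python
import numpy as np

def radial_temporal_code(image, n_rings=16):
    """Collapse a 2-D binary image into a 1-D temporal pattern.

    Hypothetical illustration only: each ring of pixels around the centroid
    contributes to one time step of the signature.
    """
    ys, xs = np.nonzero(image)
    cy, cx = ys.mean(), xs.mean()                    # centroid of the shape
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)     # radius of each active pixel
    ring = (r / (r.max() + 1e-9) * n_rings).astype(int)
    ring = np.minimum(ring, n_rings - 1)
    return np.bincount(ring, minlength=n_rings)      # activity per time step

img = np.zeros((32, 32), dtype=int)
img[10:22, 15:17] = 1                                # a vertical bar
print(radial_temporal_code(img))
print(radial_temporal_code(np.rot90(img)))           # rotated input, same signature
```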
Proceedings 17th International Conference, ICONIP 2010., 2010
This paper describes a closed-loop robotic system which calculates its position by means of a silicon retina sensor. The system uses an artificial neural network to determine the direction in which to move the robot in order to maintain a line-following trajectory. We introduce a pure "end-to-end" neural system in place of the algorithms typically executed by a standard DSP/CPU. Computation is performed solely using spike events; from the silicon neural input sensors, through to the artificial neural network computation and motor output ...
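For intuition about how retina events can drive steering in such a closed loop, here is a toy stand-in (not the paper's spiking controller): it simply counts events on the left and right halves of the field of view and turns toward the imbalance. Function name, image width and gain are assumptions for the example.

```python
def steering_from_events(events, image_width=128, gain=0.01):
    """Toy event-driven steering rule, for illustration only.

    Counts silicon-retina events left and right of the image centre and
    turns toward the side with more activity, which tends to keep a line
    centred in the field of view.
    """
    left = sum(1 for x, y, polarity in events if x < image_width // 2)
    right = len(events) - left
    return gain * (right - left)        # >0 steer right, <0 steer left

# Example: more events on the right half -> positive (rightward) command
events = [(100, 40, 1), (101, 41, -1), (20, 60, 1)]
print(steering_from_events(events))     # 0.01 * (2 - 1) = 0.01
```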
2008
This paper describes our recent efforts to develop biologically-inspired spiking neural network software (called JSpike) for vision processing. The ultimate goal is object recognition with both scale and translational invariance. This paper describes the initial software development effort, including code performance and memory requirement results. The software includes the neural network, image capture code, and graphical display programs. All the software is written in Java. The CPU time requirements for very large networks scale with the number of synapses, but even on a laptop computer billions of synapses can be simulated. While our initial application is image processing, the software is written to be very general and usable for processing other sensor data and for data fusion.
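To illustrate why simulation cost scales with the number of synapses rather than with neurons squared, here is a generic event-driven update loop (a sketch under my own assumptions, not JSpike's actual code): each spike only touches the outgoing synapses of the neuron that fired.

```python
import numpy as np

def step(v, spikes, targets, weights, threshold=1.0, decay=0.9):
    """One event-driven update of a leaky integrate-and-fire population.

    Hypothetical sketch: spike delivery costs one addition per synapse of
    each spiking neuron, so time and memory grow with total synapse count.
    """
    v *= decay                                       # membrane leak
    for pre in spikes:                               # deliver each spike
        np.add.at(v, targets[pre], weights[pre])     # one add per synapse
    fired = np.where(v >= threshold)[0]
    v[fired] = 0.0                                   # reset fired neurons
    return v, fired

rng = np.random.default_rng(0)
n = 1000
targets = [rng.integers(0, n, size=100) for _ in range(n)]   # 100 synapses/neuron
weights = [rng.uniform(0.0, 0.2, size=100) for _ in range(n)]
v, fired = step(np.zeros(n), spikes=[1, 2, 3], targets=targets, weights=weights)
print(len(fired), "neurons fired")
```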
Neural Networks, 2001
In this paper, we investigate the relation between Artificial Neural Networks (ANNs) and networks of populations of spiking neurons. The activity of an artificial neuron is usually interpreted as the firing rate of a neuron or neuron population. Using a model of the visual cortex, we will show that this interpretation runs into serious difficulties. We propose to interpret the activity of an artificial neuron as the steady state of a cross-inhibitory circuit, in which one population codes for 'positive' artificial neuron activity and another for 'negative' activity. We will show that with this interpretation it is possible, under certain circumstances, to transform conventional ANNs (e.g. trained with 'backpropagation') into biologically plausible networks of spiking populations. However, in general, the use of biologically motivated spike response functions introduces artificial neurons that behave differently from the ones used in the classical ANN paradigm.
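The positive/negative population interpretation can be made concrete with a small sketch (simplified; it shows the encoding, not the cross-inhibition dynamics themselves): a signed artificial-neuron activity is split across two non-negative firing rates, and the original value is recovered from their difference.

```python
import numpy as np

def split_activity(a, max_rate=100.0):
    """Encode a signed activation as rates of a 'positive' and a 'negative' population."""
    pos_rate = max_rate * np.maximum(a, 0.0)
    neg_rate = max_rate * np.maximum(-a, 0.0)
    return pos_rate, neg_rate

def recover_activity(pos_rate, neg_rate, max_rate=100.0):
    """The artificial-neuron value corresponds to the rate difference."""
    return (pos_rate - neg_rate) / max_rate

for a in (-0.8, 0.0, 0.3):
    pos, neg = split_activity(a)
    print(a, "->", (pos, neg), "->", recover_activity(pos, neg))
```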
Lecture Notes in Computer Science, 2012
Over the past 15 years, we have developed software image processing systems that attempt to reproduce the sorts of spike-based processing strategies used in biological vision. The basic idea is that sophisticated visual processing can be achieved with a single wave of spikes by using the relative timing of spikes in different neurons as an efficient code. While software simulations are certainly an option, it is now becoming clear that it may well be possible to reproduce the same sorts of ideas in specific hardware. Firstly, several groups have now developed spiking retina chips in which the pixel elements send the equivalent of spikes in response to particular events such as increases or decreases in local luminance. Importantly, such chips are fully asynchronous, allowing image processing to break free of the standard frame based approach. We have recently shown how simple neural network architectures can use the output of such dynamic spiking retinas to perform sophisticated tasks by using a biologically inspired learning rule based on Spike-Time Dependent Plasticity (STDP). Such systems can learn to detect meaningful patterns that repeat in a purely unsupervised way. For example, after just a few minutes of training, a network composed of a first layer of 60 neurons and a second layer of 10 neurons was able to form neurons that could effectively count the number of cars going by on the different lanes of a freeway. For the moment, this work has just used simulations. However, there is a real possibility that the same processing strategies could be implemented in memristor-based hardware devices. If so, it will become possible to build intelligent image processing systems capable of learning to recognize significant events without the need for conventional computational hardware.
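For reference, a standard pair-based STDP weight update of the kind this line of work relies on is sketched below; the parameters are illustrative textbook values, not the rule used in the paper.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: strengthen if pre fires before post, weaken otherwise.

    Illustrative only; time constants and amplitudes are arbitrary choices.
    """
    dt = t_post - t_pre
    if dt > 0:                                   # causal pair: potentiate
        w += a_plus * np.exp(-dt / tau)
    else:                                        # anti-causal pair: depress
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))   # weight grows
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))   # weight shrinks
```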
2000
Recent research has shown that the speed of image processing achieved by the human visual system is incompatible with conventional neural network approaches that use standard coding schemes based on firing rate. An alternative is to use networks of asynchronously firing spiking neurones and use the order of firing across a population of neurones as a code. In this paper we summarize results that demonstrate a number of advantages of such coding schemes: (1) they allow very efficient transmission of information, (2) they are intrinsically invariant to variations in stimulus intensity and contrast, (3) they can be used in very large scale processing architectures to solve difficult problems including categorisation of objects in natural scenes, and (4) they are particularly suited for implementation in low-cost multi-processor hardware.
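A minimal sketch of the order-of-firing code summarised above, under the common assumption that stronger inputs fire earlier: only the rank order is kept, so any monotonic change of the stimulus (global intensity or contrast scaling) leaves the code unchanged.

```python
import numpy as np

def firing_order(intensities):
    """Rank-order code: return neuron indices sorted by firing time (brightest first).

    Toy illustration; invariant to monotonic intensity or contrast changes.
    """
    return np.argsort(-np.asarray(intensities, dtype=float))

stimulus = np.array([0.2, 0.9, 0.5, 0.7])
print(firing_order(stimulus))                  # [1 3 2 0]
print(firing_order(0.5 * stimulus + 0.1))      # same order at lower contrast
```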