2017, Springer eBooks
The Random Neural Network (RNN) is a recurrent spiking neuronal model that has been used for learning and dynamic optimisation of large-scale network systems. Here we use the RNN to construct dense blocks of spiking neuronal cells in conjunction with Deep Learning to mimic the stochastic behaviour of biological neurons in mammalian brains. Together with prior work on extreme learning machines (ELM), we construct multilayer architectures (MLA) that exploit dense clusters of RNNs for Deep Learning and evaluate their performance on large visual recognition datasets. The results obtained indicate that this approach can reach and exceed the levels of performance that have been previously reported. Finally, we develop an incremental learning algorithm to train such RNN-ELM multilayer architectures for the purpose of handling big data.
This paper introduces techniques for Deep Learning in conjunction with spiking random neural networks that closely resemble the stochastic behaviour of biological neurons in mammalian brains. The paper introduces clusters of such random neural networks and obtains the characteristics of their collective behaviour. Combining this model with previous work on extreme learning machines, we develop multilayer architectures that structure Deep Learning architectures as a "front end" of one or two layers of random neural networks, followed by an extreme learning machine. The approach is evaluated on a standard, and large, visual character recognition database, showing that the proposed approach can attain and exceed the performance of techniques that were previously reported in the literature.
This paper develops multi-layer classifiers and auto-encoders based on the Random Neural Network. Our motivation is to build robust classifiers that can be used in systems applications such as Cloud management for the accurate detection of states that can lead to failures. Using an idea concerning soma-to-soma interactions between natural neuronal cells, we discuss a basic building block constructed of clusters of densely packed cells whose mathematical properties are based on G-Networks and the Random Neural Network. These mathematical properties lead to a transfer function that can be exploited for large arrays of cells. Based on this mathematical structure we build multi-layer networks. In order to evaluate the level of classification accuracy that can be achieved, we test these auto-encoders and classifiers on a widely used standard database of handwritten characters.
Recent work demonstrated the value of multiple clusters of spiking Random Neural Networks (RNN) with dense soma-to-soma interactions in deep learning. In this paper we return to the original, simpler structure and investigate the power of single RNN cells for deep learning. First, we consider three approaches: single cells, twin cells, and multi-cell clusters. This first part shows that RNNs with only positive parameters can conduct convolution operations similar to those of the convolutional neural network. We then develop a multi-layer architecture of single-cell RNNs (MLSRNN), and show that this architecture achieves comparable or better classification at lower computation cost than conventional deep-learning methods.
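For readers unfamiliar with the model, the classical RNN steady state that these cluster and single-cell constructions build on can be computed by a simple fixed-point iteration. The sketch below is illustrative only, assuming the standard Gelenbe formulation with excitatory and inhibitory weight matrices; the variable names and iteration count are assumptions, not the papers' own code.

```python
import numpy as np

def rnn_fixed_point(W_plus, W_minus, Lambda, lam, r, n_iter=200):
    """Iterate the Random Neural Network steady-state equations.

    q[i] = lambda_plus[i] / (r[i] + lambda_minus[i]), where the excitatory
    and inhibitory arrival rates at neuron i combine external Poisson
    arrivals with spikes from other neurons weighted by their own
    excitation probabilities (Gelenbe's RNN model).
    """
    n = len(r)
    q = np.zeros(n)
    for _ in range(n_iter):
        lam_plus = Lambda + q @ W_plus    # total excitatory arrival rates
        lam_minus = lam + q @ W_minus     # total inhibitory arrival rates
        q = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
    return q
```

The clipped ratio keeps each excitation probability in [0, 1]; in the papers above, the resulting input-to-q mapping is what serves as the layer's transfer function.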
Artificial Neural Network (ANN) based techniques have dominated state-of-the-art results in most problems related to computer vision, audio recognition, and natural language processing in the past few years, resulting in strong industrial adoption from all leading technology companies worldwide. One of the major obstacles that have historically delayed large-scale adoption of ANNs is the huge computational and power costs associated with training and testing (deploying) them. In the meantime, Neuromorphic Computing platforms have recently achieved remarkable performance running bio-realistic Spiking Neural Networks at high throughput and very low power consumption, making them a natural alternative to ANNs if they could match their classification performance. Here, we propose using the Random Neural Network (RNN), a spiking neural network with both theoretically and practically appealing properties, as a general-purpose classifier that can match the classification power of ANNs on a number of tasks while enjoying all the features of being a spiking neural network. This is demonstrated on a number of real-world classification datasets.
2019
Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, typical shallow spiking network architectures have limited capacity for expressing complex representations, while training a very deep spiking network has not been successful so far. Diverse methods have been proposed to get around this issue, such as converting off-line trained deep Artificial Neural Networks (ANNs) to SNNs. However, the ANN-to-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. On the other hand, it is still a difficult problem to directly train deep SNNs using input spike events due to the discontinuous and non-differentiable nature of the spike signals. To overcome this problem, we propose using a differentiable (but approximate) activation for Leaky Integrate-and-Fire (LIF) spiking neurons to train deep convolutional SNNs with input spike events using a spike-based backpropagation algorithm. Our experiments show the effectiveness of t...
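The core idea of such spike-based backpropagation is to keep the hard threshold in the forward pass but substitute a smooth function for its derivative in the backward pass. A minimal sketch, assuming a fast-sigmoid-style surrogate and an Euler-discretized LIF neuron; the threshold, time constant, and surrogate shape are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

THRESHOLD = 1.0  # assumed firing threshold

def spike_forward(v):
    """Hard threshold: emit a spike when the membrane potential crosses THRESHOLD."""
    return (v >= THRESHOLD).astype(v.dtype)

def spike_surrogate_grad(v, alpha=10.0):
    """Smooth stand-in for the derivative of the non-differentiable spike.

    A fast-sigmoid-style surrogate: sharp near the threshold, near zero far
    from it. The paper's exact approximation may differ.
    """
    return 1.0 / (1.0 + alpha * np.abs(v - THRESHOLD)) ** 2

def lif_step(v, i_in, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire membrane with reset."""
    v = v + (dt / tau) * (-v + i_in)   # leaky integration of input current
    s = spike_forward(v)
    v = v * (1.0 - s)                  # reset to zero after a spike
    return v, s
```

During training, gradients flow through `spike_surrogate_grad` wherever `spike_forward` was applied, which is what makes end-to-end backpropagation through spike events possible.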
PLOS ONE, 2015
Deep networks have inspired a renaissance in neural network use, and are becoming the default option for difficult tasks on large datasets. In this report we show that published deep network results on the MNIST handwritten digit dataset can straightforwardly be replicated (error rates below 1%, without use of any distortions) with shallow 'Extreme Learning Machine' (ELM) networks, with a very rapid training time (∼10 minutes). When we used distortions of the training set we obtained error rates below 0.6%. To achieve this performance, we introduce several methods for enhancing ELM implementation, which individually and in combination can significantly improve performance, to the point where it is nearly indistinguishable from deep network performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90 percent of weights equal to zero, which is a potential advantage for hardware implementations. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST suggest that the ease of use and accuracy of ELM should cause it to be given greater consideration as an alternative to deep networks applied to more challenging datasets.
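The random 'receptive field' trick amounts to zeroing input weights outside a random patch for each hidden unit, then solving the output weights in closed form. A rough sketch under assumed patch-size bounds and ridge regularization (the paper's exact settings may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def receptive_field_weights(n_hidden, img_h=28, img_w=28, min_size=5, max_size=15):
    """Input weights where each hidden unit sees only one random image patch,
    leaving most entries zero (a sparse 'receptive field' weight matrix)."""
    W = np.zeros((img_h * img_w, n_hidden))
    for j in range(n_hidden):
        h = rng.integers(min_size, max_size + 1)   # random patch height
        w = rng.integers(min_size, max_size + 1)   # random patch width
        top = rng.integers(0, img_h - h + 1)
        left = rng.integers(0, img_w - w + 1)
        mask = np.zeros((img_h, img_w), dtype=bool)
        mask[top:top + h, left:left + w] = True
        W[mask.ravel(), j] = rng.standard_normal(mask.sum())
    return W

def train_elm(X, T, W, ridge=1e-3):
    """Closed-form ELM readout: solve hidden-to-output weights by regularized
    least squares instead of iterative backpropagation."""
    H = np.tanh(X @ W)                               # random hidden layer
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ T)
    return beta
```

With `X` holding flattened images and `T` one-hot labels, prediction is simply `np.tanh(X_test @ W) @ beta`; the sparsity of `W` is what the paper highlights as hardware-friendly.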
Deep neural networks often require very large volumes of training examples, whereas children can learn concepts such as hand-written digits from few examples. The goal of this project is to develop a deep spiking neural network that can learn from few training trials. Using known neuronal mechanisms, a spiking neural network model is developed and trained to recognize hand-written digits by presenting one to four training examples for each digit taken from the MNIST database. The model detects and learns geometric features of the images from the MNIST database. In this work, a novel biological back-propagation based learning rule is developed and used to train the network to detect basic features of different digits. For this purpose, randomly initialized synaptic weights between the layers are updated. By using a neuroscience-inspired mechanism named synaptic pruning and a predefined threshold, some of the synapses are deleted during training. Hence, information channels ...
ArXiv, 2016
Benchmarks and datasets play an important role in the evaluation of machine learning algorithms and neural network implementations. Traditional image datasets such as MNIST are applied to evaluate the efficiency of different training algorithms in neural networks. This demand is different in Spiking Neural Networks (SNNs), as they require spiking inputs. It is widely believed that, in the biological cortex, the timing of spikes is irregular. Poisson distributions provide adequate descriptions of this irregularity when generating appropriate spikes. Here, we introduce a spike-based version of MNIST (the handwritten digits dataset) using a Poisson distribution, and show the Poissonian property of the generated streams. We introduce a new version of evt_MNIST which can be used for neural network evaluation.
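Poisson (rate) coding of pixel intensities is straightforward to sketch: each pixel spikes independently per time step with probability proportional to its brightness, so spike timing stays irregular while the rate carries the intensity. The step count and peak rate below are illustrative assumptions, not the evt_MNIST specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_train(image, n_steps=100, max_rate=0.5):
    """Encode 8-bit pixel intensities as independent Bernoulli spike streams.

    Brighter pixels spike more often; over many steps the spike counts
    approximate a Poisson process whose rate tracks the intensity.
    """
    p = (image.astype(float) / 255.0) * max_rate        # per-step spike prob
    return (rng.random((n_steps,) + image.shape) < p).astype(np.uint8)

# usage: spikes has shape (n_steps, 28, 28) for a single MNIST digit
# spikes = poisson_spike_train(mnist_digit)
```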
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Spiking neural networks (SNNs) have shown clear advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency, due to their event-driven nature and sparse communication. However, the training of deep SNNs is not straightforward. In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, which is referred to as progressive tandem learning. By studying the equivalence between ANNs and SNNs in the discrete representation space, a primitive network conversion method is introduced that takes full advantage of spike count to approximate the activation value of ANN neurons. To compensate for the approximation errors arising from the primitive network conversion, we further introduce a layer-wise learning method with an adaptive training scheduler to fine-tune the network weights. The progressive tandem learning framework also allows hardware constraints, such as limited weight precision and fan-in connections, to be progressively imposed during training. The SNNs thus trained have demonstrated remarkable classification and regression capabilities on large-scale object recognition, image reconstruction, and speech separation tasks, while requiring at least an order of magnitude reduced inference time and synaptic operations than other state-of-the-art SNN implementations. It, therefore, opens up a myriad of opportunities for pervasive mobile and embedded devices with a limited power budget.
Big Data and Cognitive Computing
Deep neural networks with rate-based neurons have exhibited tremendous progress in the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNNs), despite their capability to handle temporal data, their energy efficiency, and their low latency. This could be because the benchmarking techniques for SNNs are based on the methods used for evaluating deep neural networks, which do not provide a clear evaluation of the capabilities of SNNs. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization in suitable hardware, which imposes additional temporal and resource constraints upon ongoing projects. This review aims to provide an overview of the current real-world applications of SNNs and identifies steps to accelerate research involving SNNs in the future.
Probability in the Engineering and Informational Sciences, 2019
Artificial Neural Network (ANN) based techniques have dominated state-of-the-art results in most problems related to computer vision, audio recognition, and natural language processing in the past few years, resulting in strong industrial adoption from all leading technology companies worldwide. One of the major obstacles that have historically delayed large-scale adoption of ANNs is the huge computational and power costs associated with training and testing (deploying) them. In the meantime, Neuromorphic Computing platforms have recently achieved remarkable performance running more bio-realistic Spiking Neural Networks at high throughput and very low power consumption, making them a natural alternative to ANNs. Here, we propose using the Random Neural Network (RNN), a spiking neural network with both theoretically and practically appealing properties, as a general-purpose classifier that can match the classification power of ANNs on a number of tasks while enjoying all the features of a spiking neural network. This is demonstrated on a number of real-world classification datasets.
Frontiers in Neuroscience, 2020
Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications. In recent years, there have been several proposals focused on supervised (conversion, spike-based gradient descent) and unsupervised (spike-timing-dependent plasticity) training methods to improve the accuracy of SNNs on large-scale tasks. However, each of these methods suffers from scalability, latency and accuracy limitations. In this paper, we propose novel algorithmic techniques of modifying the SNN configuration with backward residual connections, stochastic softmax and hybrid artificial-and-spiking neuronal activations to improve the learning ability of the training methodologies to yield competitive accuracy, while yielding large efficiency gains over their artificial counterparts. Note that artificial counterparts refer to conventional deep learning/artificial neural networks. Our techniques apply to VGG/Residual architectures, and are compatible with all forms of training methodologies. Our analysis reveals that the proposed solutions yield near state-of-the-art accuracy with significant energy efficiency and reduced parameter overhead, translating to hardware improvements on complex visual recognition tasks such as the CIFAR10 and ImageNet datasets.
Extreme learning machines (ELMs) basically give answers to two fundamental learning problems: (1) Can the fundamentals of learning (i.e., feature learning, clustering, regression and classification) be achieved without tuning hidden neurons (including biological neurons), even when the output shapes and function modelling of these neurons are unknown? (2) Does there exist a unified framework for feedforward neural networks and feature space methods? ELMs, which have built some tangible links between machine learning techniques and biological learning mechanisms, have recently attracted increasing attention from researchers in widespread research areas. This paper provides an insight into ELMs in three aspects, viz: random neurons, random features and kernels. It also shows that, in theory, ELMs (with the same kernels) tend to outperform support vector machines and their variants in both regression and classification applications, with much easier implementation.
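The unified-framework claim has a concrete form for the kernel case: with a kernel in place of random hidden neurons, the ELM solution reduces to regularized kernel least squares. A hedged sketch, assuming an RBF kernel and illustrative values of `gamma` and the regularization constant `C`:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.05):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_elm_fit(X, T, C=100.0):
    """Kernel ELM: no explicit hidden weights at all; the output mapping is
    solved as regularized kernel least squares over the training kernel
    matrix (cf. the unified ELM framework with kernels)."""
    Omega = rbf_kernel(X, X)
    alpha = np.linalg.solve(np.eye(len(X)) / C + Omega, T)
    return alpha

def kernel_elm_predict(X_test, X_train, alpha):
    """Predict by combining test-vs-train kernel values with the solution."""
    return rbf_kernel(X_test, X_train) @ alpha
```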
2016 International Joint Conference on Neural Networks (IJCNN), 2016
We present a spike-based unsupervised regenerative learning scheme to train Spiking Deep Networks (SpikeCNN) for object recognition problems using biologically realistic leaky integrate-and-fire neurons. The training methodology is based on the Auto-Encoder learning model, wherein the hierarchical network is trained layer-wise using the encoder-decoder principle. Regenerative learning uses spike-timing information and inherent latencies to update the weights and learn representative levels for each convolutional layer in an unsupervised manner. The features learnt from the final layer in the hierarchy are then fed to an output layer. The output layer is trained with supervision by showing a fraction of the labeled training dataset and performs the overall classification of the input. Our proposed methodology yields 0.92%/29.84% classification error on the MNIST/CIFAR10 datasets, which is comparable with state-of-the-art results. The proposed methodology also introduces sparsity in the hierarchical feature representations on account of event-based coding, resulting in computationally efficient learning.
Neuro-Inspired Computational Elements Conference
Intuitive and easy-to-use application programming interfaces such as Keras have played a large part in the rapid acceleration of machine learning with artificial neural networks. Building on our recent work translating ANNs to SNNs and directly training classifiers with e-prop, we here present the mlGeNN interface as an easy way to define, train and test spiking neural networks on our efficient GPU-based GeNN framework. We illustrate the use of mlGeNN by investigating the performance of a number of one- and two-layer recurrent spiking neural networks trained to recognise hand gestures from the DVS gesture dataset with the e-prop learning rule. We find that not only is mlGeNN vastly more convenient to use than the lower-level PyGeNN interface, the new freedom to effortlessly and rapidly prototype different network architectures also gave us an unprecedented overview of how e-prop compares to other recently published results on the DVS gesture dataset across architectural details.
2016 6th International Conference on Computer and Knowledge Engineering (ICCKE), 2016
Understanding brain mechanisms and their problem-solving techniques is the motivation of many emerging brain-inspired computation methods. In this paper, respecting the deep architecture of the brain and the spiking model of biological neural networks, we propose a spiking deep belief network to evaluate the ability of deep spiking neural networks in a face recognition application on the ORL dataset. To overcome the challenge of using spiking neural networks in a deep learning algorithm, the Siegert model is utilized as an abstract neuron model. Although there are state-of-the-art classic machine learning algorithms for face detection, this work is mainly focused on demonstrating the capabilities of brain-inspired models in this area, which can be serious candidates for future hardware-oriented deep learning implementations. Accordingly, the proposed model, because it uses the leaky integrate-and-fire neuron model, is well suited to efficient neuromorphic platforms for accelerators and hardware implementation.
Neural Computing and Applications
This is the Special Issue on "Emerging Applications of Deep Learning (DL) and Spiking Artificial Neural Networks" of the Springer Neural Computing and Applications journal. It includes seventeen high-quality scientific research papers presenting innovative research in the above areas. The accepted papers were selected from a large pool of submissions after a peer review process, based on their level of novelty and quality. Deep Learning belongs to the Machine Learning approaches, and it can be successfully applied even to complex modeling cases with vast amounts of diverse or unstructured data, especially in the domains of Image Classification and Natural Language Processing. DL is employed in several diverse fields of our everyday life, like self-driving cars, fake news detection, medical applications, and even social media. On the other hand, Spiking Neural Networks are inspired by the actual function of the brain, and more specifically by the emission, communication and processing of pulses known as spikes. They have been proven to work quite efficiently in control systems (e.g., in Robotics), in cybersecurity modeling, and more generally in the development of neuromorphic systems. The first paper is entitled "Affective Analysis of patients in Homecare Video Assisted Telemedicine using Computational Intelligence," and it is authored by Antonis Kallipolitis, Michael Galliakis both from the
2020
Computation using brain-inspired spiking neural networks (SNNs) with neuromorphic hardware may offer orders of magnitude higher energy efficiency compared to current analog neural networks (ANNs). Unfortunately, training SNNs with the same number of layers as state-of-the-art ANNs remains a challenge. To our knowledge, the only method which is successful in this regard is supervised training of an ANN and then converting it to an SNN. In this work we directly train deep SNNs using backpropagation with a surrogate gradient and find that, due to the implicitly recurrent nature of feed-forward SNNs, the exploding or vanishing gradient problem severely hinders their training. We show that this problem can be solved by tuning the surrogate gradient function. We also propose using batch normalization from the ANN literature on the input currents of SNN neurons. Using these improvements, we show that it is possible to train an SNN with the ResNet50 architecture on the CIFAR100 and Imagenette object recognition data...
Neural Networks, 2019
In recent years, Spiking Neural Networks (SNNs) have demonstrated great successes in completing various Machine Learning tasks. We introduce a method for learning image features by locally connected layers in SNNs using the spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via competitive inhibitory interactions to learn features from different locations of the input space. These Locally-Connected SNNs (LC-SNNs) manifest key topological features of the spatial interaction of biological neurons. We explore a biologically inspired n-gram classification approach allowing parallel processing over various patches of the image space. We report the classification accuracy of simple two-layer LC-SNNs on two image datasets, which matches state-of-the-art performance and constitutes the first such results to date. LC-SNNs have the advantage of fast convergence to a dataset representation, and they require fewer learnable parameters than other SNN approaches with unsupervised learning. Robustness tests demonstrate that LC-SNNs exhibit graceful degradation of performance despite the random deletion of large amounts of synapses and neurons.
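The STDP rule driving this kind of unsupervised feature learning can be sketched as a pair-based update with exponentially decaying eligibility traces; the parameter values and trace bookkeeping below are assumptions for illustration, since the LC-SNN paper's exact rule may differ in detail:

```python
import numpy as np

def stdp_update(w, pre_trace, post_trace, pre_spikes, post_spikes,
                a_plus=0.01, a_minus=0.012, w_max=1.0):
    """Pair-based STDP: potentiate when a post-spike follows recent
    presynaptic activity, depress when a pre-spike follows recent
    postsynaptic activity.

    `pre_trace`/`post_trace` are exponentially decaying records of recent
    spikes (decayed between calls by the simulation loop, not here).
    """
    # potentiation: presynaptic trace paired with current post spikes
    w += a_plus * np.outer(pre_trace, post_spikes)
    # depression: current pre spikes paired with the postsynaptic trace
    w -= a_minus * np.outer(pre_spikes, post_trace)
    return np.clip(w, 0.0, w_max)   # keep weights in a bounded range
```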
Deep neural networks such as Convolutional Networks (ConvNets) and Deep Belief Networks (DBNs) represent the state-of-the-art for many machine learning and computer vision classification problems. To overcome the large computational cost of deep networks, spiking deep networks have recently been proposed, given the specialized hardware now available for spiking neural networks (SNNs). However, this has come at the cost of performance losses due to the conversion from analog neural networks (ANNs) without a notion of time, to sparsely firing, event-driven SNNs. Here we analyze the effects of converting deep ANNs into SNNs with respect to the choice of parameters for spiking neurons such as firing rates and thresholds. We present a set of optimization techniques to minimize performance loss in the conversion process for ConvNets and fully connected deep networks. These techniques yield networks that outperform all previous SNNs on the MNIST database to date, and many networks here are close to maximum performance after only 20 ms of simulated time. The techniques include using rectified linear units (ReLUs) with zero bias during training, and using a new weight normalization method to help regulate firing rates. Our method for converting an ANN into an SNN enables low-latency classification with high accuracies already after the first output spike, and compared with previous SNN approaches it yields improved performance without increased training time. The presented analysis and optimization techniques boost the value of spiking deep networks as an attractive framework for neuromorphic computing platforms aimed at fast and efficient pattern recognition.
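One conversion technique of this kind, data-based weight normalization, can be sketched as rescaling each layer by the maximum ReLU activation observed on training data, so that spiking rates stay in range without changing the function the network computes. A minimal sketch with assumed data structures (a list of weight matrices and per-layer activation samples), not the paper's exact implementation:

```python
import numpy as np

def normalize_weights(weights, activations_per_layer):
    """Data-based weight normalization for ANN-to-SNN conversion.

    Each layer is rescaled so its maximum observed activation becomes 1;
    dividing by the layer's own maximum and multiplying by the previous
    layer's factor keeps the overall input-output mapping unchanged.
    """
    scaled = []
    prev_factor = 1.0
    for W, acts in zip(weights, activations_per_layer):
        factor = acts.max()                      # max activation on training data
        scaled.append(W * prev_factor / factor)  # rescale this layer's weights
        prev_factor = factor
    return scaled
```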