Neural networks are relatively crude electronic models based on the neural structure of the brain, which learns from experience. The brain is natural proof that some problems beyond the scope of current computers are solvable by small, energy-efficient packages. In this paper we present the fundamentals of neural network topologies, activation functions, and learning algorithms, classified by whether information flows in one direction or in both. We outline the main features of a number of popular neural networks and provide an overview of their topologies and learning capabilities.
2013
Abstract. We present a model of a bidirectional three-layer neural network with sigmoidal units, which can be trained to learn arbitrary mappings. We introduce a bidirectional activation-based learning algorithm (BAL), inspired by O'Reilly's supervised Generalized Recirculation (GeneRec) algorithm that has been designed as a biologically plausible alternative to standard error backpropagation. BAL shares several features with GeneRec, but differs from it by being completely bidirectional regarding the activation propagation and the weight updates. In pilot experiments, we test the learning properties of BAL using three artificial data sets with binary patterns of increasing complexity.
1996
The neural network model (NN), comprised of relatively simple computing elements operating in parallel, offers an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. Owing to the amount of research developed in the area, many types of networks have been defined. The one of interest here is the multi-layer perceptron, as it is one of the simplest and is considered a powerful representation tool whose complete potential has not been adequately exploited and whose limitations have yet to be specified in a formal and coherent framework. This dissertation addresses the theory of generalisation performance and architecture selection for the multi-layer perceptron; a subsidiary aim is to compare and integrate this model with existing data analysis techniques and exploit its potential by combining it with certain constructs from computational geometry, creating a reliable, coherent network design process which conforms t...
Computers & Mathematics with Applications, 1996
The presented technical report is a preliminary English translation of selected revised sections from the first part of the book Theoretical Issues of Neural Networks [75] by the first author, which represents a brief introduction to neural networks. This work does not cover a complete survey of neural network models; the exposition here focuses more on the original motivations and on a clear technical description of several basic model types. It can be understood as an invitation to a deeper study of this field. Thus, the respective background is prepared for those who have not met this phenomenon yet, so that they can appreciate the subsequent theoretical parts of the book. In addition, this can also be profitable for those engineers who want to apply neural networks in their area of expertise. The introductory part does not require deeper preliminary knowledge; it contains many pictures, and the mathematical formalism is reduced to the lowest degree in the first chapter, being used only for a technical description of neural network models in the following chapters. We will come back to the formalization of some of these introduced models within their theoretical analysis. The first chapter makes an effort to describe and clarify the neural network phenomenon. It contains a brief survey of the history of neurocomputing and explains the neurophysiological motivations which led to the mathematical model of a neuron and a neural network. It shows that a particular model of a neural network can be determined by means of the architectural, computational, and adaptive dynamics that describe the evolution of the specific neural network parameters in time. Furthermore, it introduces neurocomputers as an alternative to the classical von Neumann computer architecture, and the appropriate areas of their application are discussed. The second chapter deals with the classical models of neural networks.
First, the historically oldest model, the network of perceptrons, is briefly mentioned. Further, the most widely applied model in practice, the multi-layered neural network with the back-propagation learning algorithm, is described in detail. The respective description, besides various variants of this model, contains implementation comments as well. The explanation of the linear model MADALINE, adapted according to the Widrow rule, follows. The third chapter concentrates on the neural network models that are exploited as autoassociative or heteroassociative memories. The principles of adaptation according to Hebb's law are explained on the example of the linear associator neural network. The next model is the well-known Hopfield network, motivated by physical theories, which is a representative of the cyclic neural networks. The analog version of this network can be used for heuristic solving of optimization tasks (e.g. the traveling salesman problem). By the physical analogy, a temperature parameter is introduced into the Hopfield network and thus a stochastic model, the so-called Boltzmann machine, is obtained. The information from this part of the book can be found in any monograph or in survey articles concerning neural networks. For its composition we drew mainly from the works [16, 24, 26, 27, 35, 36, 45, 73].
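The Hebb-law storage and cyclic recall that this chapter describes can be sketched in a few lines. This is an illustrative minimal Hopfield network, not code from the book; the patterns and the synchronous update schedule are arbitrary choices for demonstration.

```python
import numpy as np

# Store bipolar (+1/-1) patterns via the Hebb rule: the weight matrix is a
# sum of outer products of the stored patterns, with self-loops removed.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])

W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                    # Hopfield networks have no self-connections

def recall(x, steps=5):
    """Iterate the threshold update until the state settles."""
    x = x.copy()
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)   # synchronous sign update, for simplicity
    return x

cue = patterns[0].copy()
cue[0] = -cue[0]                          # corrupt one bit of the first pattern
print(recall(cue))                        # the network restores the stored pattern
```

Real implementations usually update units asynchronously, which guarantees convergence to an energy minimum; the synchronous version above is kept only for brevity.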
Digital Systems, 2018
Due to the recent trend of intelligent systems and their ability to adapt to varying conditions, deep learning has become very attractive to many researchers. In general, neural networks are used to implement different stages of processing systems based on learning algorithms, by controlling their weights and biases. This chapter introduces neural network concepts, with a description of the major elements comprising a network. It also describes different types of learning algorithms and activation functions, with examples. These concepts are detailed in standard applications. The chapter will be useful for undergraduate students, and even for postgraduate students who have only a basic background in neural networks.
Soft Computing-A Fusion of Foundations, …, 2007
1995
The Workshop is designed to serve as a regular forum for researchers from universities and industry who are interested in interdisciplinary research on neural networks for signal processing applications. NNSP'95 offers a showcase for current research results in key areas, including learning algorithms, network architectures, speech processing, image processing, computer vision, adaptive signal processing, medical signal processing, digital communications and other applications. Our deep appreciation is extended to Prof. Abu-Mostafa of Caltech, Prof. John Moody of Oregon Graduate Institute, Prof. S.Y. Kung, of Princeton U., Prof. Michael I. Jordan of MIT and Dr. Vladimir Vapnik of AT&T Bell Labs, for their insightful plenary talks. Thanks to Dr. Gary Kuhn of Siemens Corporate Research for organizing a wonderful evening panel discussion on "Why Neural Networks are not Dead". Our sincere thanks go to all the authors for their timely contributions and to all the members of the Program Committee for the outstanding and high-quality program. We would like to thank the other members of the Organizing Committee: Finance Chair Dr. Judy Franklin of
The basic idea behind a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer so you can get it to learn things, recognize patterns, and make decisions in a humanlike way. The amazing thing about a neural network is that you don't have to program it to learn explicitly: it learns all by itself, just like a brain! But it isn't a brain. It's important to note that neural networks are (generally) software simulations: they're made by programming very ordinary computers, working in a very traditional fashion with their ordinary transistors and serially connected logic gates, to behave as though they're built from billions of highly interconnected brain cells working in parallel. This paper proposes applications of neural networks in engineering science: robots that can see, feel, and predict the world around them, improved stock prediction, self-driving cars, and much more.
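The "simulated brain cell" idea above reduces to a few lines of ordinary serial code. This is a generic sketch, not this paper's model; the weights and bias are arbitrary illustrative values.

```python
import math

def neuron(inputs, weights, bias):
    """One simulated 'brain cell': a weighted sum of inputs passed
    through a sigmoid squashing function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A neuron with two inputs; a network is just many of these wired together,
# simulated one after another on conventional hardware.
out = neuron([1.0, 0.0], [0.5, -0.3], 0.1)
print(round(out, 3))   # sigmoid(0.6), about 0.646
```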
Artificial neural networks may probably be the single most successful technology in the last two decades which has been widely used in a large variety of applications. The purpose of this book is to provide recent advances of architectures, methodologies, and applications of artificial neural networks. The book consists of two parts: the architecture part covers architectures, design, optimization, and analysis of artificial neural networks; the applications part covers applications of artificial neural networks in a wide range of areas including biomedical, industrial, physics, and financial applications. Thus, this book will be a fundamental source of recent advances and applications of artificial neural networks. The target audience of this book includes college and graduate students, and engineers in companies.
The purpose of this chapter is to introduce a powerful class of mathematical models: artificial neural networks. This is a very general term that includes many different systems and various types of approaches, from both statistics and computer science. The analogy with the brain is not very detailed, but it serves to introduce the concept of parallel and distributed computing. The chapter also demonstrates the applications connecting artificial neural networks and wireless networks: the two have a strong connection that can be investigated and explained properly, and there are mathematical models that describe both in detail. Many organizations use artificial neural networks together with sensor technology for a wide range of purposes. We then analyze in detail a widely applied type of artificial neural network: the feed-forward network with the error back-propagation algorithm. We illustrate the architecture of the models, the main learning methods, and data representation, showing how to build a typical artificial neural network. Our aim is not to examine them all (it would be a very long discussion), but to understand the basic functionality and the possible implementations of this powerful tool. We initially introduce neural networks by analogy with the human brain.
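The feed-forward network with error back-propagation that the chapter analyses can be sketched minimally as follows. This is a generic illustration, not the chapter's own code; the layer sizes, learning rate, epoch count, and XOR task are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

# One hidden layer of 4 sigmoid units, one sigmoid output unit.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)        # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)        # output-layer activations
    return H, Y

lr = 1.0
for _ in range(5000):
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)      # output delta (squared-error loss)
    dH = (dY @ W2.T) * H * (1 - H)  # error propagated back to the hidden layer
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

print(np.round(forward(X)[1].ravel(), 2))   # outputs should approach [0, 1, 1, 0]
```

The two delta lines are the heart of back-propagation: the output error is multiplied by the sigmoid derivative, then pushed backwards through the output weights to assign blame to the hidden units.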
Preface. We have made this report on the topic of neural networks, and have tried our best to elucidate (clarify) all the relevant details of the topic to be included in the report, beginning with a general view of the subject. Our efforts and the wholehearted co-operation of everyone involved have ended on a successful note. We express our sincere gratitude to MR UGWUNNA CHARLES O., who has been there for us throughout the preparation of this topic. We thank
2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society
We present here a new scheme to construct a neural network architecture based on the physiological properties of the biological neuron, in order to enhance its performance. The new scheme divides every hidden layer into two parts, to facilitate the processing of 0 and 1 separately, and reduces the total number of interconnections considerably. The first part consists of units that receive signals only from '1-state' units of the immediately lower layer and are responsible for producing excitation of units in the output layer, i.e. the '1 states'; the second part also consists of units that receive signals only from '1-state' units of the immediately lower layer, but these are responsible for producing inhibition of units in the output layer, i.e. the '0 states'. The resulting architecture converges faster, produces more reliable results, and reduces the computational burden considerably when compared to fully connected neural networks.
1991
This thesis deals mainly with the development of new learning algorithms and the study of the dynamics of neural networks. We develop a method for training feedback neural networks. Appropriate stability conditions are derived, and learning is performed by the gradient descent technique. We develop a new associative memory model using Hopfield's continuous feedback network. We demonstrate some of the storage limitations of the Hopfield network, and develop alternative architectures and an algorithm for designing the associative memory. We propose a new unsupervised learning method for neural networks. The method is based on applying repeatedly the gradient ascent technique on a defined criterion function. We study some of the dynamical aspects of Hopfield networks. New stability results are derived. Oscillations and synchronizations in several architectures are studied, and related to recent findings in biology. The problem of recording the outputs of real neural networks is con...
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2021
The purpose of this study is to familiarise the reader with the foundations of neural networks. Artificial Neural Networks (ANNs) are algorithm-based systems that are modelled after Biological Neural Networks (BNNs). Neural networks are an effort to use the human brain's information processing skills to address challenging real-world AI issues. The evolution of neural networks and their significance are briefly explored. ANNs and BNNs are contrasted, and their qualities, benefits, and disadvantages are discussed. The drawbacks of the perceptron model and their improvement by the sigmoid neuron and ReLU neuron are briefly discussed. In addition, we give a bird's-eye view of the different Neural Network models. We study neural networks (NNs) and highlight the different learning approaches and algorithms used in Machine Learning and Deep Learning. We also discuss different types of NNs and their applications. A brief introduction to Neuro-Fuzzy and its applications with a comprehensive review of NN technological advances is provided.
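The progression from the perceptron to the sigmoid and ReLU neurons mentioned above comes down to the choice of activation function. A minimal illustration (the sample inputs are arbitrary):

```python
import math

def perceptron(z):
    return 1 if z >= 0 else 0        # hard threshold: no useful gradient to learn from

def sigmoid(z):
    return 1 / (1 + math.exp(-z))    # smooth and differentiable, but saturates for large |z|

def relu(z):
    return max(0.0, z)               # cheap, and avoids saturation for z > 0

for z in (-2.0, 0.5):
    print(perceptron(z), round(sigmoid(z), 3), relu(z))
```

The non-differentiable step is what limits the perceptron model: gradient-based learning needs the smooth derivatives that the sigmoid and ReLU provide.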
The traditional computation techniques of programming were not capable enough to solve "hard" problems like pattern recognition, prediction, compression, optimization, classification and machine learning. In order to solve such problems, interest in developing intelligent computation systems became stronger. To develop such intelligent systems, innumerable advances have been made by researchers. Inspired by the human brain's neural networks, researchers from various disciplines designed the Artificial Neural Network (ANN). These artificial neurons are characterized on the basis of architecture, training or learning method, and activation function. The neural network architecture is the arrangement of neurons to form layers and the connection scheme formed between and within the layers. Neural network architectures are broadly classified into feed-forward and feedback architectures, which in turn contain single and multiple layers. The feed-forward networks provide a unidirectional signal flow, whereas in feedback networks the signals can flow in both directions. These neural network architectures are trained through various learning algorithms to produce the most efficient solutions to computation problems. In this paper, we present neural network architectures that play a crucial role in modeling intelligent systems.
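The feed-forward versus feedback distinction drawn above can be shown in a few lines. This is a generic sketch, not the paper's own models; the weights are random illustrative values.

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Feed-forward: the signal flows one way, input -> hidden -> output.
def feed_forward(x, W_in, W_out):
    return sigmoid(sigmoid(x @ W_in) @ W_out)

# Feedback (recurrent): the layer's output is fed back into the same
# units on every step, so signals flow in both directions over time.
def feedback(x, W_rec, steps=10):
    state = x.copy()
    for _ in range(steps):
        state = sigmoid(state @ W_rec)
    return state

rng = np.random.default_rng(1)
x = rng.normal(size=3)
print(feed_forward(x, rng.normal(size=(3, 4)), rng.normal(size=(4, 2))))
print(feedback(x, rng.normal(size=(3, 3))))
```

In the feed-forward case one pass through the weights produces the answer; in the feedback case the network evolves a state over time, which is what lets cyclic models such as the Hopfield network settle into stored memories.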
Information Sciences, 2015
Artificial Neural Networks (ANN) have been the center of attention of researchers for several decades. They have been used efficiently to provide global and reliable solutions to real-world problems in a wide range of scientific fields. Numerous types of ANN architectures and learning schemes have been used for pattern recognition, classification and regression cases. We now stand at the third generation, where networks of spiking neurons have demonstrated high computational power, mainly in pattern recognition. Hybrid approaches have been deployed by combining ANNs with Evolutionary Computing, or other techniques, in order to provide optimized solutions. Current research varies from the development of new and more efficient learning algorithms, to networks capable of responding to temporally varying patterns, to their implementation in hardware. This Special Issue aims at highlighting recent and timely modeling applications of ANNs with the use of innovative learning algorithms or architectures. The INS Journal made an open call for original scientific contributions, and the scientific community responded very well.
Corr, 2005
The Artificial Neural Network (ANN) is a functional imitation of a simplified model of the biological neuron; the goal is to construct useful 'computers' for real-world problems and to reproduce intelligent data evaluation techniques like pattern recognition, classification and generalization using simple, distributed and robust processing units called artificial neurons. ANNs are fine-grained parallel implementations of non-linear static and dynamic systems. The intelligence of an ANN and its capability to solve hard problems emerge from its high degree of connectivity, which gives the network its high computational power through a massively parallel, distributed structure. The current resurgence of interest in ANNs is largely because ANN algorithms and architectures can be implemented in VLSI technology for real-time applications. The number of ANN applications has increased dramatically in the last few years, fired by both theoretical and application successes in a variety of disciplines. This paper presents a survey of the research and explosive development of many ANN-related applications. A brief overview of ANN theory, models and applications is presented. Potential areas of application are identified and future trends are discussed.