A Feedforward Neural Network (FFNN) is a class of Artificial Neural Network (ANN) in which the links among the units do not form a directed cycle. ANNs, akin to the vast network of neurons in the brain (the human central nervous system), are usually presented as systems of interconnected "neurons" which exchange messages with each other. These connections have numeric weights that can be adjusted based on experience, making neural networks adaptive to their inputs and capable of learning. This paper presents a comprehensive review of FFNNs with emphasis on implementation issues that have been addressed by previous approaches. We also propose a theoretical model with potentially superior performance in terms of convergence speed, efficient and effective computation, and generality compared with state-of-the-art models.
2013
A feedforward neural network is an artificial neural network where connections between the units do not form a directed cycle. This is different from recurrent neural networks. The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), and to the output nodes. There are no cycles or loops in the network. A feedforward neural network is a biologically inspired classification algorithm. It consists of a (possibly large) number of simple neuron-like processing units, organized in layers. Every unit in a layer is connected with all the units in the previous layer. These connections are not all equal: each connection may have a different strength or weight. The weights on these connections encode the knowledge of the network. Often the units in a neural network are also called nodes. Data enters at the inputs and passes through...
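To make the architecture described above concrete, here is a minimal sketch of a single forward pass through one hidden layer. It is purely illustrative: the layer sizes, the random weights, and the choice of a logistic (sigmoid) activation are assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(z):
    # Logistic activation: squashes each unit's weighted input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Information flows in one direction only: input -> hidden -> output.
    h = sigmoid(W1 @ x + b1)   # each hidden unit is connected to every input
    y = sigmoid(W2 @ h + b2)   # each output unit is connected to every hidden unit
    return y

rng = np.random.default_rng(0)
d, q, m = 3, 4, 1              # arbitrary sizes: 3 inputs, 4 hidden units, 1 output
W1, b1 = rng.normal(size=(q, d)), np.zeros(q)
W2, b2 = rng.normal(size=(m, q)), np.zeros(m)
print(forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2))
```

The weight matrices W1 and W2 play the role of the connection strengths mentioned above; training would adjust their entries, but no training is shown here.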
1996
The neural network model (NN), comprised of relatively simple computing elements operating in parallel, offers an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. Due to the amount of research developed in the area, many types of networks have been defined. The one of interest here is the multi-layer perceptron, as it is one of the simplest and is considered a powerful representation tool whose complete potential has not been adequately exploited and whose limitations need yet to be specified in a formal and coherent framework. This dissertation addresses the theory of generalisation performance and architecture selection for the multi-layer perceptron; a subsidiary aim is to compare and integrate this model with existing data analysis techniques and exploit its potential by combining it with certain constructs from computational geometry, creating a reliable, coherent network design process which conforms t...
1997
Abstract: There has been enormous interest over the past decade in the use of artificial neural networks (ANNs) for data processing applications. By and large, this interest has been well-founded. ANNs, however, offer no panacea to the data analyst. They are as prone to misuse as any other method and the details of their functioning are often clouded in mystique. This has made a firm understanding of their functioning difficult. Presents a brief introduction to the most widely applied class of ANN, the feed-forward network.
1991
Acknowledgments: The authors would like to thank Shilin Hu and Ying Li for their assistance with the material of Section 4. We would also like to thank the reviewers for their helpful suggestions, which contributed significantly to the overall quality of the paper. Abstract: Theoretical results and practical experience indicate that feedforward networks are very good at approximating a wide class of functional relationships. Training networks to approximate functions takes place by using exemplars to find interconnect weights that maximize some goodness of fit criterion. Given finite data sets it can be important in the training process to take advantage of any a priori information regarding the underlying functional relationship to improve the approximation and the ability of the network to generalize. This paper describes methods for incorporating a priori information of this type into feedforward networks. Two general approaches, one based upon architectural constraints and a second upon ...
IEEE Transactions on Neural Networks, 2001
Artificial Neural Networks - Architectures and Applications, 2013
Proceedings of the XII CAEPIA
We present an analysis of the computational capabilities of feed-forward neural networks focusing on the role of the output function. The space of configurations that implement a given target function is analyzed for small size networks when different output functions are considered. The generalization complexity and other relevant properties for some complex and useful linearly separable functions are also analyzed. The results indicate that efficient output functions are those with a similar Hamming weight as the target output that at the same time have a high complexity.
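As a rough illustration of the Hamming-weight idea, the short sketch below counts how many input patterns map to 1 for a few 3-input Boolean functions; the specific functions are hypothetical examples, not the ones analyzed in the paper.

```python
from itertools import product

def hamming_weight(fn, n):
    # Number of the 2**n input patterns on which the Boolean function outputs 1.
    return sum(fn(bits) for bits in product((0, 1), repeat=n))

target = lambda b: int(sum(b) >= 2)            # 3-input majority, weight 4
candidates = {
    "AND": lambda b: int(all(b)),              # weight 1
    "OR":  lambda b: int(any(b)),              # weight 7
    "XOR": lambda b: sum(b) % 2,               # weight 4, closest to the target
}
print("target:", hamming_weight(target, 3))
for name, fn in candidates.items():
    print(name, hamming_weight(fn, 3))
```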
As a machine learning algorithm, the neural network has been widely used in various research projects to solve critical problems. The concept of neural networks is inspired by the human brain. The paper explains the basic concept of neural networks so that a non-specialist can understand it and make use of the algorithm to solve various tedious and complex problems. The paper demonstrates the design and implementation of a complete neural network along with the code. It also presents various ANN architectures together with their advantages, disadvantages, and applications.
Neural Networks, 1991
Theoretical results and practical experience indicate that feedforward networks are very good at approximating a wide class of functional relationships. Training networks to approximate functions takes place by using exemplars to find interconnect weights that maximize some goodness of fit criterion. Given finite data sets it can be important in the training process to take advantage of any a priori information regarding the underlying functional relationship to improve the approximation and the ability of the network to generalize. This paper describes methods for incorporating a priori information of this type into feedforward networks. Two general approaches, one based upon architectural constraints and a second upon connection weight constraints, form the basis of the methods presented. These two approaches can be used either alone or in combination to help solve specific training problems. Several examples covering a variety of types of a priori information, including information about curvature, interpolation points, and output layer interrelationships, are presented.
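The abstract does not give the exact formulation of the weight-constraint approach, but one simple reading is a penalty term added to the goodness-of-fit criterion. The sketch below, offered only as an assumption-laden illustration, uses a tiny one-hidden-unit network and a soft constraint forcing the fit through a known interpolation point (x_known, y_known).

```python
import numpy as np

def predict(w, x):
    # Toy one-hidden-unit network: tanh hidden unit, linear output (illustrative only).
    w1, b1, w2, b2 = w
    return w2 * np.tanh(w1 * x + b1) + b2

def cost(w, X, Y, x_known=1.0, y_known=0.0, lam=10.0):
    # Ordinary goodness-of-fit term over the training exemplars ...
    fit = np.mean((predict(w, X) - Y) ** 2)
    # ... plus a penalty pulling the network through a known interpolation point,
    # one way a priori information can be folded into the training criterion.
    return fit + lam * (predict(w, x_known) - y_known) ** 2

X = np.linspace(-1.0, 1.0, 20)
Y = np.sin(np.pi * X)                 # synthetic target; sin(pi * 1) = 0 matches the prior
w0 = np.array([1.0, 0.0, 1.0, 0.0])
print(cost(w0, X, Y))
```

Any gradient-based optimizer could then minimize this composite cost; the penalty weight lam trades off fitting the exemplars against honoring the prior knowledge.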
Over the years, advances in computation have given rise to new technologies. Such is the case with artificial neural networks, which have provided various solutions to industry. The artificial neural network was motivated by models of the biological brain. It can create its own organization or representation of the information it receives during learning.
Journal of Mathematical Psychology, 1997
Journal of Intelligent and Robotic Systems, 1998
Abstract. The generalization ability of feedforward neural networks (NNs) depends on the size of the training set and the features of the training patterns. Theoretically, the best classification property is obtained if all possible patterns are used to train the network, which is practically ...
Artificial Neural Networks are composed of a large number of simple computational units operating in parallel, and so they have the potential to provide fault tolerance. One extremely motivating property of the biological neural networks of more developed organisms, humans and other animals, is their tolerance to injury or damage to individual neurons. In the case of biological neural networks, a solution tolerant to the loss of neurons has a high priority, since a graceful degradation of performance is very important to the survival of the organism. We propose a simple modification of the training procedure commonly used with the Back-Propagation algorithm in order to increase the tolerance of the feedforward multi-layered ANN to internal hardware failures such as the loss of hidden units.
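The preview does not spell out the proposed modification, so the following is only a hypothetical sketch of one common way to encourage this kind of tolerance: randomly disabling hidden units during training forward passes so that the learned weights do not depend critically on any single unit.

```python
import numpy as np

def forward_with_failures(x, W1, b1, W2, b2, p_fail=0.1, rng=None):
    # Hypothetical illustration: each hidden unit "fails" (outputs zero) with
    # probability p_fail during training, simulating internal hardware faults.
    rng = rng or np.random.default_rng()
    h = np.tanh(W1 @ x + b1)
    alive = (rng.random(h.shape) >= p_fail).astype(float)   # surviving hidden units
    return W2 @ (h * alive) + b2
```

A network trained against such simulated failures tends to spread its knowledge across many hidden units, so losing one of them degrades performance gracefully rather than catastrophically.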
A dynamical system model is derived for feedforward neural networks with one layer of hidden nodes. The model is valid in the vicinity of flat minima of the cost function that arise due to the formation of clusters of redundant hidden nodes with nearly identical outputs. The derivation is carried out for networks with an arbitrary number of hidden and output nodes and is, therefore, a generalization of previous work valid for networks with only two hidden nodes and one output node. The Jacobian matrix of the system is obtained, whose eigenvalues characterize the evolution of learning. Flat minima correspond to critical points of the phase plane trajectories, and the bifurcation of the eigenvalues signifies their abandonment. Following the derivation of the dynamical model, we show that identification of the hidden node clusters using unsupervised learning techniques enables the application of a constrained algorithm (Dynamically Constrained Back Propagation, DCBP) whose purpose is to facilitate prompt bifurcation of the eigenvalues of the Jacobian matrix and, thus, accelerate learning. DCBP is applied to standard benchmark tasks either autonomously or as an aid to other standard learning algorithms in the vicinity of flat minima. Its application leads to a significant reduction in the number of required epochs for convergence.
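The clustering step mentioned above is not specified in this preview; purely as an illustration, the sketch below groups hidden units whose output vectors over the training set are nearly identical, using a simple greedy distance threshold in place of whatever unsupervised technique the paper actually employs.

```python
import numpy as np

def redundant_hidden_clusters(H, tol=1e-2):
    # H: (n_patterns, n_hidden) matrix of hidden-unit outputs over the training set.
    # Greedily group hidden units whose output columns are nearly identical.
    n_hidden = H.shape[1]
    clusters, assigned = [], [False] * n_hidden
    for i in range(n_hidden):
        if assigned[i]:
            continue
        group, assigned[i] = [i], True
        for j in range(i + 1, n_hidden):
            if not assigned[j] and np.max(np.abs(H[:, i] - H[:, j])) < tol:
                group.append(j)
                assigned[j] = True
        clusters.append(group)
    return clusters

H = np.array([[0.10, 0.10, 0.90],
              [0.40, 0.40, 0.20],
              [0.80, 0.80, 0.50]])
print(redundant_hidden_clusters(H))   # units 0 and 1 form a redundant cluster: [[0, 1], [2]]
```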
1996
List of Figures: 1.1 Detailed (top) and symbolic (bottom) representations of the artificial neuron. 1.2 Examples of activation functions a(.): logistic and hyperbolic tangent. 1.3 The conventional 2-layer d:q:1 feedforward network with d inputs, q hidden neurons, and a single output. 1.4 Example of a good fit (a), an under-fit (b), and an over-fit (c) of training data; the first case results in good generalisation, whereas the latter two in poor. 2.1 Universal approximation in one dimension. 3.1 Two choices for the discretising function Q(w). 3.2 Frequency distribution of numbers generated by tan(RND). 3.4 Comparison of the actual error term and its approximation in the range [-3,3]. 3.5 The black-hole function. 4.1 The set of decision boundaries of an integer [-3,3] weight 2-input perceptron with offset; some of the possible decision boundaries lie outside the {(-1,-1),(1,1)} square and are therefore not shown. 4.2 Linearly separable data sets with decision boundaries at gradually varying angles. 4.3 IWN minimum error as a function of the number of hidden neurons for the data sets shown in Figure 4.2.
IEEE Transactions on Neural Networks, 2004