Papers by Francisco Maldonado
Letters: Upper bound on pattern storage in feedforward networks
Neurocomputing, Oct 1, 2008
A Pseudogenetic Algorithm for MLP Design based upon the Schmidt Procedure
Nci, 2003
Abstract: Based on the natural law of selection, a pseudogenetic algorithm is defined in this paper for designing multilayer perceptrons (MLPs). The algorithm is based on two operations: combine and prune. A non-heuristic pruning algorithm is defined from the Schmidt ...
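A hypothetical sketch of a combine-and-prune loop of the kind this abstract describes; `train_mlp`, `combine`, `prune_schmidt`, and `validation_error` are placeholders of our own, not the authors' actual implementation.

```python
def pseudogenetic_design(population, n_generations,
                         train_mlp, combine, prune_schmidt, validation_error):
    """Evolve a small population of MLPs using only the combine and prune operations."""
    for _ in range(n_generations):
        population = [train_mlp(net) for net in population]    # fit every candidate
        population.sort(key=validation_error)                  # fittest candidates first
        parents = population[: max(2, len(population) // 2)]   # selection step

        children = []
        for a, b in zip(parents[0::2], parents[1::2]):
            child = combine(a, b)          # combine: merge hidden units of two parents
            child = prune_schmidt(child)   # prune: Schmidt-based removal of weak units
            children.append(train_mlp(child))

        population = parents + children    # next generation
    return min(population, key=validation_error)
```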
Sensor Calibration by Neural Network in a Smart Wireless System
2006 IEEE Autotestcon, 2006
Self-learning and neural network adaptation by embedded collaborative learning engine (eCLE) — An overview
The 2013 International Joint Conference on Neural Networks (IJCNN), 2013
Optimized neuro genetic fast estimator (ONGFE) for efficient distributed intelligence instantiation within embedded systems
The 2013 International Joint Conference on Neural Networks (IJCNN), 2013
Enhancing vibration analysis by embedded sensor data validation technologies
2012 IEEE AUTOTESTCON Proceedings, 2012
Distributed intelligent health monitoring with the coremicro Reconfigurable Embedded Smart Sensor Node
2012 IEEE AUTOTESTCON Proceedings, 2012
Neurocomputing, 2008
Starting from the strict interpolation equations for multivariate polynomials, an upper bound is developed for the number of patterns that can be memorized by a nonlinear feedforward network. A straightforward proof by contradiction is presented for the upper bound. It is shown that the hidden activations do not have to be analytic. Networks, trained by conjugate gradient, are used to demonstrate the tightness of the bound for random patterns. Based upon the upper bound, small multilayer perceptron models are successfully demonstrated for large support vector machines.
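As a rough illustration of the flavor of such a bound (a back-of-the-envelope counting heuristic of ours, not the paper's theorem or proof): exact memorization of N_v patterns by a network with M outputs and L_w adjustable weights imposes M N_v interpolation equations on L_w unknowns.

```latex
% Counting heuristic only (our sketch, not the paper's exact bound).
% Exact memorization requires one equation per output per pattern,
%   y_m(\mathbf{x}_p; \mathbf{w}) = t_{p,m}, \quad p = 1,\dots,N_v,\ m = 1,\dots,M,
% i.e. M N_v equations in the L_w adjustable weights.  For generic targets this
% suggests solvability only when the unknowns are at least as numerous:
\[
  M \, N_v \;\le\; L_w
  \qquad\Longrightarrow\qquad
  N_v \;\lesssim\; \frac{L_w}{M}.
\]
```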
Enhancing vibration analysis by embedded sensor data validation technologies
IEEE Instrumentation & Measurement Magazine, 2000
Motion Equations of Mobile robots
This summary presents a motion model for mobile robots (MR), the synchro-drive-and-steering-wheel vehicle, and its application to two mobile robot configurations, taking their kinematic constraints into account. 1. Model: Both polar and parametric representations of the path (ζ) are considered. In both cases the inputs are the speed (v), the angle of the velocity (θ), and a function of the angular velocity (ω). The outputs are the states: the position of the robot, the components of the velocity, and the distance traveled along the path.
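A minimal numerical sketch of the kind of kinematic model the summary describes, assuming a unicycle-style parametrization with inputs v and ω and outputs position, velocity components, and distance traveled; the discretization and variable names below are our assumptions.

```python
import numpy as np

# Minimal kinematic sketch (assumed form): state = (x, y, theta, s),
# inputs are the speed v(t) and the angular rate omega(t).
def simulate(v_fn, omega_fn, t_end, dt=0.01, x0=0.0, y0=0.0, theta0=0.0):
    x, y, theta, s = x0, y0, theta0, 0.0
    trajectory = []
    for t in np.arange(0.0, t_end, dt):
        v, omega = v_fn(t), omega_fn(t)
        vx, vy = v * np.cos(theta), v * np.sin(theta)   # velocity components
        x += vx * dt                                    # position update
        y += vy * dt
        theta += omega * dt                             # heading update
        s += abs(v) * dt                                # distance traveled along the path
        trajectory.append((t, x, y, theta, vx, vy, s))
    return np.array(trajectory)

# Example: constant speed, slowly turning robot.
path = simulate(v_fn=lambda t: 0.5, omega_fn=lambda t: 0.2, t_end=10.0)
```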
Flairs, 2006
In this paper, three approaches are presented for generating and validating sequences of different-size neural nets. First, a growing method is given along with several weight initialization methods and their properties. Then a one-pass pruning method is presented which utilizes orthogonal least squares. Based upon this pruning approach, a one-pass validation method is discussed. Finally, a training method that combines growing and pruning is described.
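A small sketch of how a one-pass validation curve could follow from a hidden-unit ordering: once the units are ranked by usefulness, the validation error for every network size can be read off by re-solving only the linear output layer. The least-squares re-solve and function names below are our assumptions, not the paper's exact procedure.

```python
import numpy as np

def validation_curve(H_train, t_train, H_val, t_val, order):
    """Validation error vs. network size from a single hidden-unit ordering.

    H_train, H_val : (Nv, Nh) hidden-activation matrices; t_* : target vectors;
    order          : hidden-unit indices ranked by usefulness (most useful first).
    """
    errors = []
    for k in range(1, len(order) + 1):
        cols = order[:k]
        # Re-solve only the linear output layer for the k most useful units.
        w, *_ = np.linalg.lstsq(H_train[:, cols], t_train, rcond=None)
        e_val = np.mean((H_val[:, cols] @ w - t_val) ** 2)
        errors.append((k, e_val))
    return errors  # one point per network size, computed in a single sweep
```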
In designing feedforward neural networks, one often trains a large network and then prunes less useful hidden units. In this paper, two non-heuristic pruning algorithms are derived from the Schmidt procedure. In both, orthonormal systems of basis functions are found, ordered, pruned, and mapped back to the original network. In the first algorithm, the orthonormal basis functions are found and ordered one at a time. In optimal pruning, the best subset of orthonormal basis functions is found for each size network. Linear dependency of basis functions is considered and computational cost is analyzed. Simulation results are given.
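A schematic illustration of the ordered-orthonormal-basis idea described above, using classical Gram–Schmidt on the hidden-unit activation columns. This is a simplified sketch under our own assumptions, not the paper's exact algorithm (which also maps the pruned weights back to the original network).

```python
import numpy as np

def order_hidden_units(H, t):
    """Greedily order hidden-unit activation columns by error reduction.

    H : (Nv, Nh) matrix of hidden activations over the training set.
    t : (Nv,) target vector.
    Returns unit indices in order of decreasing usefulness (Schmidt-style sketch).
    """
    remaining = list(range(H.shape[1]))
    basis, order = [], []
    residual = t.astype(float).copy()
    while remaining:
        best_i, best_gain, best_q = None, -np.inf, None
        for i in remaining:
            q = H[:, i].astype(float).copy()
            for b in basis:                  # orthogonalize against the chosen basis
                q -= (q @ b) * b
            norm = np.linalg.norm(q)
            if norm < 1e-10:                 # (near-)linearly dependent column: skip
                continue
            q /= norm
            gain = (residual @ q) ** 2       # error reduction if this unit is kept
            if gain > best_gain:
                best_i, best_gain, best_q = i, gain, q
        if best_i is None:
            break
        residual -= (residual @ best_q) * best_q
        basis.append(best_q)
        order.append(best_i)
        remaining.remove(best_i)
    return order  # prune by keeping only the first k indices of this ordering
```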
A common way of designing feedforward networks is to obtain a large network and then to prune less useful hidden units. Here, two non-heuristic pruning algorithms are derived from the Schmidt procedure. In both, orthonormal systems of basis functions are found, ordered, pruned, and mapped back to the original network. In the first algorithm, the orthonormal basis functions are found and ordered one at a time. In optimal pruning, the best subset of orthonormal basis functions is found for each size network. Simulation results are shown.

Neurocomputing, 2008
In order to facilitate complexity optimization in feedforward networks, several algorithms are developed that combine growing and pruning. First, a growing scheme is presented which iteratively adds new hidden units to fully trained networks. Then, a non-heuristic one-pass pruning technique is presented, which utilizes orthogonal least squares. Based upon pruning, a one-pass approach is developed for generating the validation error versus network size curve. A combined approach is described in which networks are continually pruned during the growing process. As a result, the hidden units are ordered according to their usefulness, and the least useful units are eliminated. Examples show that networks designed using the combined method have less training and validation error than growing or pruning alone. The combined method exhibits reduced sensitivity to the initial weights and generates an almost monotonic error versus network size curve. It is shown to perform better than two well-known growing methods: constructive backpropagation and cascade correlation.
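A high-level sketch of a combined grow-then-prune loop of the kind described above; `add_hidden_units`, `train`, `order_and_prune`, and `validation_error` are placeholders of ours, not the authors' implementation.

```python
# Schematic combined growing-and-pruning loop: grow in small steps, retrain,
# keep hidden units ordered by usefulness, and drop the least useful ones
# before the next growth step.
def design_network(net, n_grow_steps, step, train, add_hidden_units,
                   order_and_prune, validation_error):
    history = []
    for _ in range(n_grow_steps):
        net = add_hidden_units(net, step)   # growing: append new hidden units
        net = train(net)                    # retrain the enlarged network
        net = order_and_prune(net)          # order units, remove the least useful
        history.append((net, validation_error(net)))
    # Return the candidate with the lowest validation error seen during the sweep.
    best_net, _ = min(history, key=lambda p: p[1])
    return best_net, history
```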