1998, Perspectives in Neural Computing
Los Alamos National Laboratory, an affirmative action/equal opportunity employer, is operated by the University of California for the U.S. Department of Energy under contract W-7405-ENG-36. By acceptance of this article, the publisher recognizes that the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or to allow others to do so, for U.S. Government purposes. The Los Alamos National Laboratory requests that the publisher identify this article as work performed under the auspices of the U.S. Department of Energy.

DISCLAIMER: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
Biological and Artificial Intelligence Environments, 2005
One way of using entropy criteria in learning systems is to minimize the entropy of the error between two variables: typically, one is the output of the learning system and the other is the target. This framework has been used for regression. In this paper we show how to use minimization of the error entropy for classification. Minimizing the entropy of the error implies a constant value for the errors, which in general does not imply that the errors are zero. In regression, this problem is solved by shifting the final result so that its average equals the average of the desired target. We prove that, under mild conditions, this algorithm, when used in a classification problem, makes the error converge to zero and can thus be used for classification.
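The mechanics described above can be sketched numerically. The following is a minimal, hypothetical illustration (not the paper's exact algorithm): a linear model is trained by gradient ascent on the Parzen-window estimate of the quadratic information potential of the errors (equivalent to minimizing Rényi's quadratic error entropy), and since entropy is blind to a constant offset in the errors, the output is then shifted so the mean error is zero. The function names and hyperparameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_potential(e, sigma=1.0):
    # Parzen estimate of the quadratic information potential V(e);
    # minimizing Renyi's quadratic entropy H2(e) = -log V(e) is the
    # same as maximizing V(e), and both are invariant to shifting e.
    d = e[:, None] - e[None, :]
    return np.exp(-d**2 / (4 * sigma**2)).mean()

def mee_fit(X, t, lr=0.5, epochs=300, sigma=1.0):
    # Hypothetical linear-model trainer: gradient ascent on V(e) with
    # e = t - X @ w, followed by the mean shift the abstract describes
    # (entropy alone cannot see a constant offset in the errors).
    N, d_in = X.shape
    w = np.zeros(d_in)
    for _ in range(epochs):
        e = t - X @ w
        D = e[:, None] - e[None, :]
        K = np.exp(-D**2 / (4 * sigma**2))
        grad = ((D * K).sum(axis=1) @ X) / (sigma**2 * N**2)
        w += lr * grad
    b = (t - X @ w).mean()  # shift so the average error is zero
    return w, b

X = rng.normal(size=(60, 2))
t = X @ np.array([0.8, -0.6]) + 3.0  # constant offset: invisible to entropy
w, b = mee_fit(X, t)
resid = t - (X @ w + b)
```

On this noiseless example the errors collapse to a constant and the final shift removes it, which is exactly the two-step picture the abstract gives for regression.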
Artificial neural networks are currently one of the most commonly used classifiers, and over recent years they have been successfully applied in many practical domains, including banking and finance, health and medicine, and engineering and manufacturing. A large number of error functions have been proposed in the literature to achieve better predictive power. However, only a few works employ Tsallis statistics, although the method itself has been successfully applied in other machine learning techniques. This paper examines a q-generalized function based on Tsallis statistics as an alternative error measure in neural networks. To validate different performance aspects of the proposed function and to identify its strengths and weaknesses, an extensive simulation was prepared based on an artificial benchmarking dataset. The results indicate that the Tsallis entropy error function can be successfully introduced in neural networks, yielding satisfactory results and handling class imbalance, noise in the data, and non-informative predictors.
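The abstract does not spell out the loss, so here is a hedged sketch of one standard q-generalization: replacing the logarithm in cross-entropy with the Tsallis q-logarithm ln_q(x) = (x^(1-q) - 1)/(1 - q), which recovers ordinary cross-entropy as q → 1. The function names and the choice q = 1.5 are assumptions for illustration, not necessarily the paper's exact error measure.

```python
import numpy as np

def log_q(x, q):
    # Tsallis q-logarithm; recovers ln(x) as q -> 1.
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def tsallis_loss(p, t, q=1.5, eps=1e-12):
    # Hypothetical q-generalized cross-entropy, -sum(t * ln_q(p)):
    # the standard q-deformation from Tsallis statistics, averaged
    # over samples.
    p = np.clip(p, eps, 1.0)
    return -(t * log_q(p, q)).sum(axis=-1).mean()

t_onehot = np.array([[1.0, 0.0]])
loss_good = tsallis_loss(np.array([[0.9, 0.1]]), t_onehot)  # confident, correct
loss_bad = tsallis_loss(np.array([[0.2, 0.8]]), t_onehot)   # confident, wrong
```

Tuning q reweights how strongly confident mistakes are punished relative to ordinary cross-entropy, which is one plausible route to the robustness to imbalance and noise reported above.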
Entropy
Measuring the predictability and complexity of time series using entropy is an essential tool in designing and controlling nonlinear systems. However, existing methods have drawbacks related to the strong dependence of entropy on the parameters of the methods. To overcome these difficulties, this study proposes a new method for estimating the entropy of a time series using the LogNNet neural network model. The LogNNet reservoir matrix is filled with time series elements according to our algorithm. The accuracy of classifying images from the MNIST-10 database is taken as the entropy measure, denoted NNetEn. The novelty of this entropy calculation is that the time series is involved in mixing the input information in the reservoir. Greater complexity in the time series leads to higher classification accuracy and higher NNetEn values. We introduce a new time series characteristic, called time series learning inertia, that determines the learning rate of the neural n...
1997
Knowledge and Information Systems, 2000
In this paper, an additional entropy penalty term is used to steer the direction of the hidden nodes' activations during learning. A state of minimum entropy means that most nodes are operating in the non-linear zones (i.e. saturation zones) near the extreme ends of the sigmoid curve. As training proceeds, redundant hidden nodes' activations are pushed towards their extreme values, corresponding to a low-entropy state with maximum information, while some relevant nodes remain active in the linear zone. As training progresses, more nodes move into the saturation zones. The early creation of such nodes may impair generalisation performance. To prevent the network from being driven into saturation before it can really learn, an entropy cycle is proposed in this paper to dampen the creation of such inactive nodes in the early stages of training. At the end of training, these inactive nodes can be eliminated without affecting the performance of the original network. The concept has been successfully applied for pruning in two classification problems. The experiments indicate that redundant nodes are pruned, resulting in optimal network topologies.
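The penalty term described above can be written down directly: the binary entropy of each sigmoid activation is maximal at 0.5 and vanishes at saturation, so adding it (weighted) to the task loss pushes redundant nodes to the extremes. A minimal sketch, with function names and the annealing schedule assumed for illustration:

```python
import numpy as np

def activation_entropy(a, eps=1e-12):
    # Mean binary entropy of sigmoid activations a in (0, 1): maximal
    # (ln 2) at a = 0.5, near zero when nodes saturate at 0 or 1.
    a = np.clip(a, eps, 1.0 - eps)
    return -(a * np.log(a) + (1.0 - a) * np.log(1.0 - a)).mean()

def penalized_loss(task_loss, hidden_acts, lam):
    # Hypothetical combined objective; in an "entropy cycle" lam would
    # be kept small (or zero) early in training and raised later, so
    # saturation is not forced before the network has learned.
    return task_loss + lam * activation_entropy(hidden_acts)
```

After training, nodes whose entropy term is near zero are saturated (effectively constant) and are the candidates for pruning.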
We explore the role of entropy manipulation during learning in supervised multiple layer perceptron classifiers. Entropy maximization [1][2] or mutual information maximization [3] is the criterion for unsupervised blind signal separation or feature extraction. In contrast, we show that for a 2-layer MLP classifier, conditional entropy minimization in the internal layer is a necessary condition for error minimization in the mapping from the input to the output. The relationship between entropy and the expected volume and mass of a convex hull constructed from n sample points is examined. We show that minimizing the expected hull volume may have more desirable gradient dynamics when compared to minimizing entropy. We show that entropy by itself has some geometrical invariance with respect to expected hull volumes. We develop closed-form expressions for the expected convex hull mass and volumes in R^1 and relate these to error probability. Finally, we show that learning in an MLP may be accomplished solely by minimization of the conditional expected hull volumes and the expected volume of the "intensity of collision."
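The one-dimensional case mentioned above has a particularly simple geometry: in R^1 the convex hull of n sample points is just the interval [min, max], so its "volume" is the sample range. For n i.i.d. Uniform(0, 1) samples the expected range has the classical closed form (n - 1)/(n + 1). The paper's general mass/volume expressions are not reproduced here; this is only a Monte Carlo check of that one standard identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Expected 1-D hull volume (sample range) of n Uniform(0, 1) points,
# estimated by Monte Carlo and compared with the closed form.
n = 10
samples = rng.random((50_000, n))
mc_volume = (samples.max(axis=1) - samples.min(axis=1)).mean()
closed_form = (n - 1) / (n + 1)  # classical expected range
```

As n grows the expected hull swells toward the full support, which is the kind of volume-vs-sample-count behavior the entropy comparison in the paper turns on.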
ArXiv, 2021
Measuring the predictability and complexity of time series is an essential tool in designing and controlling nonlinear systems. Different entropy measures exist in the literature for analyzing the predictability and complexity of time series. However, the existing methods have drawbacks related to a strong dependence of entropy on the parameters of the methods, as well as on the length and amplitude of the time series. To overcome these difficulties, this study proposes a new method for estimating the entropy of a time series using the LogNNet neural network model. The LogNNet reservoir matrix is filled with the time series elements according to our algorithm. The network is trained on the MNIST-10 dataset and the classification accuracy is calculated. The accuracy is taken as the entropy measure, denoted NNetEn. The novelty of this entropy calculation is that the time series is involved in mixing the input information in the reservoir. The greater complexity of the time serie...
Algorithms
Entropy measures are effective features for time series classification problems. Traditional entropy measures, such as Shannon entropy, use a probability distribution function. However, for effective separation of time series, new entropy estimation methods are required to characterize the chaotic dynamics of the system. Our concept of Neural Network Entropy (NNetEn) is based on the classification of special datasets in relation to the entropy of the time series recorded in the reservoir of the neural network. NNetEn estimates the chaotic dynamics of time series in an original way and does not rely on probability distribution functions. We propose two new classification metrics: R2 Efficiency and Pearson Efficiency. The efficiency of NNetEn is verified on the separation of two chaotic time series of sine mapping using dispersion analysis. For two close dynamic time series (r = 1.1918 and r = 1.2243), the F-ratio reached a value of 124, reflecting the high efficiency of the...
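The NNetEn idea described in these abstracts — fill a reservoir matrix with the time series, train only a readout on a fixed classification task, and use held-out accuracy as the entropy proxy — can be caricatured in a few lines. This toy version is an assumption-laden sketch, not the published LogNNet/MNIST-10 procedure: the task is a synthetic two-class problem, the readout is plain least squares, and all names and sizes are invented for illustration.

```python
import numpy as np

def nnet_en_toy(series, n_in=16, n_res=8, n_per_class=200, seed=1):
    # Toy NNetEn-style estimator: the reservoir matrix is filled
    # row-by-row with the time series, a least-squares readout is
    # trained on a fixed synthetic two-class task, and held-out
    # accuracy serves as the entropy proxy (richer series -> richer
    # mixing -> higher accuracy).
    rng = np.random.default_rng(seed)
    W = np.resize(np.asarray(series, dtype=float), (n_res, n_in))
    W = W / (np.abs(W).max() + 1e-12)          # keep the mixing bounded
    v = 0.7 * np.tile([1.0, -1.0], n_in // 2)  # class-mean offset
    X = rng.normal(size=(2 * n_per_class, n_in))
    y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]
    X[y == 1] += v
    H = np.tanh(X @ W.T)                       # reservoir mixing
    H = np.hstack([H, np.ones((len(H), 1))])   # bias column
    idx = rng.permutation(len(H))
    tr, te = idx[: len(H) // 2], idx[len(H) // 2 :]
    coef, *_ = np.linalg.lstsq(H[tr], y[tr], rcond=None)
    return ((H[te] @ coef > 0.5) == (y[te] > 0.5)).mean()

# A constant series mixes nothing; a logistic-map series mixes richly.
x, chaotic = 0.3, []
for _ in range(128):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)
en_const = nnet_en_toy(np.ones(128))
en_chaos = nnet_en_toy(np.array(chaotic))
```

The constant series produces identical reservoir rows, so the readout has nothing useful to work with; the chaotic series yields diverse rows and a higher accuracy, mirroring the claim that greater time-series complexity gives higher NNetEn values.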
2014
There are many methods for determining classification accuracy. This paper shows the significance of the entropy of training signatures in classification. The entropy of training signatures of a raw digital image represents the heterogeneity of the brightness values of the pixels in different bands. This implies that an image comprising a homogeneous land-use/land-cover (lu/lc) category will be associated with nearly the same reflectance values, resulting in a very low entropy value. On the other hand, an image characterized by diverse lu/lc categories will consist of widely differing reflectance values, due to which the entropy of such an image will be relatively high. This concept leads to analyses of classification accuracy. Although entropy has been used many times in RS and GIS, its use in determining classification accuracy is a new approach.
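The homogeneous-versus-diverse contrast above is just the Shannon entropy of a band's brightness histogram. A minimal sketch, with the function name, bin count, and 8-bit range assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def band_entropy(band, bins=256):
    # Shannon entropy (bits) of a band's brightness histogram over an
    # assumed 8-bit range; zero for a single brightness value, up to
    # log2(bins) for a uniform spread.
    hist, _ = np.histogram(band, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

e_homogeneous = band_entropy(np.full((32, 32), 120))         # one lu/lc class
e_mixed = band_entropy(rng.integers(0, 256, size=(32, 32)))  # diverse scene
```

The single-class patch scores zero bits while the mixed patch approaches the 8-bit maximum, which is the low-entropy/high-entropy distinction the paper builds its accuracy analysis on.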
Lecture Notes in Computer Science, 2012
BMC Neuroscience, 2009
IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), 1999
ASEAN Journal on Science and Technology for Development
IEEE Transactions on Systems, Man, and Cybernetics, 1991
Neurocomputing, 2019
IEEE Transactions on Neural Networks, 1997
IEEE Transactions on Biomedical Engineering, 2008
International Journal of Computational Intelligence Systems, 2023
Nonlinear Dynamics
IEEE Transactions on Industrial Electronics, 2000
Journal of Statistical Mechanics: Theory and Experiment, 2013