2005, European Journal of Physics
Physics concepts have often been borrowed and independently developed by other fields of science. A significant example in this perspective is that of entropy in information theory. The aim of this paper is to provide a short and pedagogical introduction to the use of data compression techniques for estimating the entropy and other relevant quantities in information theory and algorithmic information theory. We consider in particular the LZ77 algorithm as a case study and discuss how a zipper can be used for information extraction.
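For illustration, here is a minimal sketch (not the paper's own code) of the core idea: the output length of an LZ77-based compressor, here Python's zlib, gives an upper bound on the entropy per character of a symbolic sequence.

# Minimal sketch (not the paper's code): estimating the entropy per character
# of a symbolic sequence from the length of its zlib (LZ77 + Huffman) output.
import random
import zlib

def compression_entropy_estimate(sequence: str) -> float:
    """Upper-bound estimate of the entropy per character, in bits."""
    data = sequence.encode("utf-8")
    compressed = zlib.compress(data, 9)
    # 8 bits per compressed byte, spread over the original characters.
    return 8 * len(compressed) / len(sequence)

if __name__ == "__main__":
    random.seed(0)
    ordered = "ab" * 50_000                                        # highly regular
    noisy = "".join(random.choice("ab") for _ in range(100_000))   # ~1 bit/char
    print(f"ordered: {compression_entropy_estimate(ordered):.3f} bits/char")
    print(f"random : {compression_entropy_estimate(noisy):.3f} bits/char")

The regular sequence compresses to a small fraction of a bit per character, while the coin-flip sequence stays close to 1 bit per character, which is the pattern the paper exploits.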
Entropy, 2018
We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin's theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with more multi-dimensional objects t...
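As a rough illustration of the aggregation rule described above, the sketch below splits a binary string into blocks and sums, over the distinct blocks, a CTM value plus the logarithm of the block's multiplicity. The CTM table here is a made-up toy dictionary, not the precomputed values used by the authors.

# Schematic sketch of the BDM aggregation rule, with a *hypothetical* toy CTM
# table standing in for the real precomputed Coding Theorem Method values.
import math
from collections import Counter

# Hypothetical CTM estimates (bits) for 4-bit blocks; real tables come from
# enumerating small Turing machines, not from these made-up numbers.
TOY_CTM = {"0000": 3.0, "1111": 3.0, "0101": 3.5, "1010": 3.5}
DEFAULT_CTM = 6.0  # fallback for blocks missing from the toy table

def bdm(bits: str, block_size: int = 4) -> float:
    """Block Decomposition Method estimate: sum over distinct blocks of
    CTM(block) + log2(multiplicity of that block)."""
    blocks = [bits[i:i + block_size] for i in range(0, len(bits), block_size)]
    counts = Counter(b for b in blocks if len(b) == block_size)
    return sum(TOY_CTM.get(b, DEFAULT_CTM) + math.log2(n) for b, n in counts.items())

if __name__ == "__main__":
    print(bdm("0000" * 8))               # one repeated block: low estimate
    print(bdm("0011" * 4 + "0110" * 4))  # more distinct blocks: higher estimate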
Journal of Computer and Communications
In this paper, we analyze the complexity and entropy of different data compression algorithms: LZW, Huffman, fixed-length code (FLC), and Huffman after using fixed-length code (HFLC). We test these algorithms on files of different sizes and conclude that LZW performs best at all the compression scales we tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression remains an important research topic with many applications. We therefore suggest continuing research in this field, either combining two techniques to obtain a better one, or using another source mapping (Hamming), such as embedding a linear array into a hypercube, together with good techniques such as Huffman coding, to reach good results.
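A small sketch of the kind of comparison described, though not the paper's own test harness: for a single input it reports the size of a fixed-length code, the Shannon entropy lower bound, and the output size of zlib, which stands in here for the LZ family.

# Rough sketch (not the paper's test harness) comparing three size estimates
# for the same data: a fixed-length code, the Shannon entropy lower bound,
# and an LZ-family compressor (zlib used here as a stand-in for LZW).
import math
import zlib
from collections import Counter

def report(data: bytes) -> None:
    n = len(data)
    counts = Counter(data)
    flc_bits = n * math.ceil(math.log2(len(counts)))       # fixed-length code
    entropy_bits = n * -sum((c / n) * math.log2(c / n) for c in counts.values())
    lz_bits = 8 * len(zlib.compress(data, 9))               # LZ-family proxy
    print(f"fixed-length: {flc_bits} bits, "
          f"entropy bound: {entropy_bits:.0f} bits, zlib: {lz_bits} bits")

if __name__ == "__main__":
    report(b"abracadabra" * 1000)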
2014
Complexity is a catchword of certain extremely popular and rapidly developing interdisciplinary new sciences, often called accordingly the sciences of complexity. It is often closely associated with another notably popular but ambiguous word, information; information, in turn, may justly be called the central new concept of the whole 20th-century science. Moreover, the notion of information is regularly coupled with a key concept of thermodynamics, viz. entropy. And as if this was not enough, it is quite usual to add one more, at present extraordinarily popular, notion, namely chaos, and wed it with the above-mentioned concepts. It is my aim in this paper to critically analyse this conceptual mess from a logical and philosophical point of view, concentrating on the concepts of complexity and information and the question concerning the true relation between them. I shall focus especially on the so-called algorithmic information theory, which has lately become extraordinarily popular, especially in theoreti...
Chaos, Solitons & Fractals, 2003
The adoption of the Kolmogorov-Sinai (KS) entropy is becoming a popular research tool among physicists, especially when applied to a dynamical system fitting the conditions of validity of the Pesin theorem. The study of time series that are a manifestation of system dynamics whose rules are either unknown or too complex for a mathematical treatment is still a challenge, since the KS entropy is not computable, in general, in that case. Here we present a plan of action based on the joint action of two procedures, both related to the KS entropy, but compatible with computer implementation through fast and efficient programs. The former procedure, called Compression Algorithm Sensitive To Regularity (CASToRe), establishes the amount of order by the numerical evaluation of algorithmic compressibility. The latter, called Complex Analysis of Sequences via Scaling AND Randomness Assessment (CASSANDRA), establishes the complexity degree through the numerical evaluation of the strength of an anomalous effect, namely the departure of the diffusion process generated by the observed fluctuations from ordinary Brownian motion. The CASSANDRA algorithm shares with CASToRe a connection with the Kolmogorov complexity. This makes both algorithms especially suitable to study the transition from dynamics to thermodynamics, and the case of non-stationary time series as well. The benefit of the joint action of these two methods is proven by the analysis of artificial sequences with the same main properties as the real time series to which the joint use of these two methods will be applied in future research work.
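The anomalous-diffusion part of this programme can be sketched generically (this is not the authors' CASSANDRA code): build diffusion trajectories from the fluctuations, estimate the entropy S(t) of the displacement distribution at increasing times t, and read off the scaling exponent delta from S(t) ≈ A + delta ln t, with delta = 0.5 for ordinary Brownian motion.

# Generic sketch (not the authors' CASSANDRA code) of a diffusion-scaling check:
# turn a fluctuation series into window displacements, estimate the differential
# entropy S(t) of their distribution, and fit S(t) ~ A + delta * ln(t).
import numpy as np

def diffusion_entropy(series: np.ndarray, window: int, bins: int = 60) -> float:
    """Differential entropy (nats) of sums over windows of length `window`."""
    csum = np.concatenate(([0.0], np.cumsum(series)))
    displacements = csum[window:] - csum[:-window]        # overlapping window sums
    hist, edges = np.histogram(displacements, bins=bins, density=True)
    widths = np.diff(edges)
    keep = hist > 0
    return float(-(hist[keep] * widths[keep] * np.log(hist[keep])).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noise = rng.normal(size=100_000)                      # uncorrelated fluctuations
    windows = [10, 100, 1000]
    entropies = [diffusion_entropy(noise, w) for w in windows]
    # Slope of S(t) against ln(t): close to 0.5 for ordinary Brownian scaling.
    delta = np.polyfit(np.log(windows), entropies, 1)[0]
    print(f"estimated scaling exponent delta = {delta:.2f}")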
IEEE Communications Magazine, 2000
Artificial Life, 2015
In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules. (We do not refer here to computational complexity, which is a very well-refined concept.)
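The experiment on elementary cellular automata can be illustrated with a crude stand-in for the article's measure: run a few rules and compare the compressed size of their space-time diagrams, using zlib output length as a complexity proxy.

# Illustrative sketch (not the article's measure): evolve elementary cellular
# automata and compare a compression-based complexity proxy (zlib output size)
# of their space-time patterns. Rule 250 is ordered, rule 30 is chaotic.
import zlib

def eca_run(rule: int, width: int = 256, steps: int = 256) -> bytes:
    """Evolve an elementary CA from a single central 1 and return the
    space-time diagram as a flat byte string of '0'/'1' characters."""
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append(bytes(48 + c for c in row))   # ASCII '0'/'1'
        row = [table[(row[(i - 1) % width] << 2) |
                     (row[i] << 1) |
                     row[(i + 1) % width]]
               for i in range(width)]
    return b"".join(rows)

if __name__ == "__main__":
    for rule in (250, 110, 30):
        pattern = eca_run(rule)
        size = len(zlib.compress(pattern, 9))
        print(f"rule {rule:3d}: compressed size {size} bytes")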
Entropy, 2013
What is information? What role does information entropy play in this information-exploding age, especially in understanding the emergent behaviors of complex systems? To answer these questions, we discuss the origin of information entropy, the difference between information entropy and thermodynamic entropy, and the role of information entropy in complexity theories, including chaos theory and fractal theory, and speculate on new fields in which information entropy may play important roles.
A detailed entropy analysis using the recently introduced technique of 'lumping' is performed on some DNA sequences. On the basis of this, we first report here a negative answer to the question 'can DNA sequences at the level of nucleotides be generated by a deterministic finite automaton with essentially a small number of states, in the statistical limit?'. What is observed in all cases is an almost linear scaling of the block entropies, up to the numerical precision, close to that of a mixing ergodic system with a very high topological entropy. The basic result we report here is that all the examined biological sequences appear to be very little compressible (they lie close to the incompressible limit). The topological entropy of coding regions appears to be even higher than that of non-coding regions.
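A minimal sketch of block-entropy analysis by lumping (non-overlapping windows) on a four-letter alphabet; the random sequence below is only a stand-in for real nucleotide data.

# Minimal sketch of block-entropy analysis by 'lumping' (non-overlapping
# windows), of the kind applied to nucleotide sequences; the random test
# sequence here is a stand-in, not real DNA.
import math
import random
from collections import Counter

def block_entropy_lumping(seq: str, n: int) -> float:
    """Shannon entropy H(n), in bits, of non-overlapping blocks of length n."""
    blocks = [seq[i:i + n] for i in range(0, len(seq) - n + 1, n)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    random.seed(1)
    seq = "".join(random.choice("ACGT") for _ in range(400_000))
    for n in range(1, 7):
        h = block_entropy_lumping(seq, n)
        # For an incompressible sequence, H(n) grows almost linearly:
        # H(n)/n stays near 2 bits per symbol.
        print(f"n={n}: H(n)={h:.3f} bits, H(n)/n={h / n:.3f} bits/symbol")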
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theories have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between 'structural' (meaningful) and 'random' information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam's razor in inductive inference. We end by discussing some of the philosophical implications of the theory.
2001
Kolmogorov complexity, K(x), is the optimal compression bound of a given string x. This incomputable yet fundamental property of information has vast implications and applications in the areas of network and system optimization, security, bioinformatics, and emergence (see [1], [2] for an introduction to Kolmogorov complexity and [2], [3] and [7] for some applications). An ideal approach for compressing information with a known distribution is Huffman coding, which approaches the entropy of the source distribution.
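To illustrate the closing remark, here is a short sketch that builds a Huffman code for a made-up source distribution and compares its average code length with the Shannon entropy of the source.

# Brief sketch: build a Huffman code for a known symbol distribution and
# compare its average code length with the Shannon entropy of the source
# (the distribution below is made up for illustration).
import heapq
import math

def huffman_code_lengths(probs: dict) -> dict:
    """Return the code length (bits) per symbol for a Huffman code over `probs`."""
    heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # made-up source
    lengths = huffman_code_lengths(probs)
    avg_len = sum(probs[s] * lengths[s] for s in probs)
    entropy = -sum(p * math.log2(p) for p in probs.values())
    print(f"average Huffman length: {avg_len:.3f} bits/symbol")
    print(f"source entropy:         {entropy:.3f} bits/symbol")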