2001
The problem of information assurance is approached from the point of view of Kolmogorov complexity and minimum message length criteria. Several theoretical results are obtained, possible applications are discussed, and a new metric for measuring complexity is introduced. The use of Kolmogorov-complexity-like metrics as conserved parameters to detect abnormal system behavior is explored.
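To make the idea of a conserved complexity parameter concrete, here is a minimal sketch (not from the paper; the compressor choice, the `tolerance` threshold, and all names are assumptions) that uses compressed size as a computable stand-in for Kolmogorov complexity and flags samples that drift from a baseline:

```python
import zlib

def compression_complexity(data: bytes) -> float:
    """Approximate Kolmogorov complexity by compression ratio (an upper bound)."""
    return len(zlib.compress(data, 9)) / max(len(data), 1)

def is_abnormal(sample: bytes, baseline: float, tolerance: float = 0.05) -> bool:
    """Flag a sample whose complexity ratio drifts from the conserved baseline."""
    return abs(compression_complexity(sample) - baseline) > tolerance

# Hypothetical usage: establish a baseline from normal traffic, then monitor.
normal = [b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n"] * 10
baseline = sum(compression_complexity(s) for s in normal) / len(normal)
print(is_abnormal(b"\x8f\x02\xa9~q" * 40, baseline))  # noisy payload -> likely True
```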
International Journal of Engineering Sciences & Research Technology, 2012
This paper describes various application issues of Kolmogorov complexity. Information assurance, network management, and active networks are areas where Kolmogorov complexity has been applied. Our main focus is to show its importance in various domains, including computer virus detection.
This paper presents a proposal for the application of Kolmogorov complexity to the characterization of systems and processes and the evaluation of computational models. The methodology developed represents a theoretical tool for solving problems in systems science. Two applications of the methodology, both developed by the authors, are presented to illustrate the proposal: the first relates to the software development process, the second to computer animation models. Finally, a third application of the method, aimed at characterizing dynamical systems with chaotic behavior, is briefly introduced; it clearly demonstrates the potential of the methodology.
The Computer Journal, 1999
The question of why and how probability theory can be applied to real-world phenomena has been discussed for several centuries. When algorithmic information theory was created, it became possible to discuss these problems in a more specific way. In particular, Li and Vitányi [6], Rissanen [3], and Wallace and Dowe [7] have discussed the connection between Kolmogorov (algorithmic) complexity and the minimum description length (minimum message length) principle. In this note we try to point out a few simple observations that (we believe) are worth keeping in mind while discussing these topics.
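The connection in question is usually summarized as follows (a standard formulation from the literature, not a quotation from this note): the MDL/MML principle picks the hypothesis minimizing a two-part code length, and Kolmogorov complexity bounds the best such two-part description:

```latex
% MDL/MML: choose the hypothesis H minimizing total description length
H_{\mathrm{MDL}}(x) = \operatorname*{arg\,min}_{H \in \mathcal{H}} \bigl[ L(H) + L(x \mid H) \bigr]
% Kolmogorov complexity lower-bounds every two-part code, up to a constant:
K(x) \le \min_{H} \bigl[ K(H) + K(x \mid H) \bigr] + O(1)
```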
Artificial life, 2015
In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules.
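As a hedged illustration of the sort of experiment mentioned (the article's actual measure combines Shannon's and Kolmogorov's ideas; the sketch below computes only a Shannon block entropy over an elementary cellular automaton run, and all names are hypothetical):

```python
import math
from collections import Counter

def eca_step(row, rule=110):
    """One step of an elementary cellular automaton with periodic boundaries."""
    n = len(row)
    return [(rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
            for i in range(n)]

def block_entropy(row, k=3):
    """Shannon entropy (bits) of the distribution of length-k blocks in a row."""
    blocks = Counter(tuple(row[i:i + k]) for i in range(len(row) - k + 1))
    total = sum(blocks.values())
    return -sum((c / total) * math.log2(c / total) for c in blocks.values())

row = [0] * 40 + [1] + [0] * 40          # single seed cell
for t in range(100):
    row = eca_step(row, rule=110)
print(f"block entropy after 100 steps: {block_entropy(row):.3f} bits")
```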
2010
This is a short introduction to Kolmogorov complexity and information theory. The interested reader is referred to the literature, especially the textbooks [CT91] and [LV97], which cover the fields of information theory and Kolmogorov complexity in depth and with all the necessary rigor. They are easy to read and require only a minimum of prior knowledge.
2000
This document contains lecture notes of an introductory course on Kolmogorov complexity. They cover basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), notion of randomness (Martin-Löf randomness, Mises–Church randomness), Solomonoff universal a priori probability and their properties (symmetry of information, connection between a priori probability and prefix complexity, criterion of randomness in terms of complexity) and applications (incompressibility method in computational complexity theory, incompleteness theorems).
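Two of the properties listed can be stated compactly (standard formulations, not quoted from the notes):

```latex
% Symmetry of information (plain complexity, up to logarithmic precision):
C(x, y) = C(x) + C(y \mid x) + O(\log C(x, y))
% Levin-Schnorr criterion of randomness in terms of prefix complexity:
\omega \text{ is Martin-L\"of random} \iff \exists c \;\forall n:\; K(\omega_1 \dots \omega_n) \ge n - c
```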
The Computer Journal, 1999
We briefly discuss the origins, main ideas and principal applications of the theory of Kolmogorov complexity.
Measures of Complexity, 2015
Algorithmic information theory studies description complexity and randomness and is now a well-known field of theoretical computer science and mathematical logic. There are several textbooks and monographs devoted to this theory [4, 1, 5, 2, 7] where one can find detailed expositions of many difficult results as well as historical references. However, it seems that a short survey of its basic notions and of the main results relating these notions to each other is missing. This report attempts to fill this gap and covers the basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), Solomonoff universal a priori probability, notions of randomness (Martin-Löf randomness, Mises-Church randomness), and effective Hausdorff dimension. We prove their basic properties (symmetry of information, connection between a priori probability and prefix complexity, criterion of randomness in terms of complexity, complexity characterization of effective dimension) and show some applications (incompressibility method in computational complexity theory, incompleteness theorems). It is based on the lecture notes of a course at Uppsala University given by the author [6].
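The complexity characterization of effective dimension mentioned above is usually stated as follows (a standard result, not quoted from the report):

```latex
% Effective Hausdorff dimension of an infinite binary sequence \omega:
\dim_{H}^{\mathrm{eff}}(\omega) = \liminf_{n \to \infty} \frac{K(\omega_1 \dots \omega_n)}{n}
```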
2008
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the Kolmogorov-Chaitin complexity of a string s. Some attempts have been made to arrive at a framework stable enough for a concrete definition of K, independent of any constant under a programming language, by appealing to the naturalness of the language in question. The aim of this paper is to present an approach to overcoming the problem by looking at a set of models of computation converging in output probability distribution, such that naturalness can be inferred, thereby providing a framework for a stable definition of K under the set of convergent models of computation.
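The additive constant at issue comes from the invariance theorem (standard statement, not specific to this paper):

```latex
% For any two universal machines U and V there is a constant c_{U,V},
% independent of the string s, such that
\bigl| K_U(s) - K_V(s) \bigr| \le c_{U,V} \quad \text{for all } s
```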
2001
Unless vulnerabilities can be identified and measured, the information assurance of a system can never be properly designed or guaranteed. Results from a study on complexity evolving within an information system, using Mathematica, Swarm, and a new Java complexity probe toolkit, are presented in this paper. An underlying definition of information security is hypothesized based upon the attacker and defender as reasoning entities, capable of learning to outwit one another.
Kolmogorov complexity (K) is an incomputable function. It can be approximated from above, but not to within any guaranteed precision, and it cannot be approximated from below. By restricting the source of the data to a specific model class, we can construct a computable function κ that approximates K in a probabilistic sense: the probability that the error is greater than k decays exponentially with k. We apply the same method to the normalized information distance (NID) and discuss conditions that affect the safety of the approximation.
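In practice the NID is itself commonly approximated with real compressors via the normalized compression distance (NCD); a minimal sketch using zlib (the choice of compressor and the example strings are assumptions, not from the paper):

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size: a computable upper-bound proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, the practical stand-in for the NID."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 4
b = b"the quick brown fox leaps over the lazy cat" * 4
print(f"NCD(a, b) = {ncd(a, b):.3f}")   # closer to 0 => more similar
```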
Kolmogorov complexity and Shannon entropy are conceptually different measures. However, for any recursive probability distribution, the expected value of Kolmogorov complexity equals its Shannon entropy, up to a constant. We study whether a similar relationship holds for Rényi and Tsallis entropies of order α, showing that it only holds for α = 1. Regarding a time-bounded analogue of this relationship, we show that a similar result holds for some distributions. We prove that, for the universal time-bounded distribution m^t(x), Tsallis and Rényi entropies converge if and only if α is greater than 1. We also establish the uniform continuity of these entropies.
2011
The notion of Kolmogorov complexity (=the minimal length of a program that generates some object) is often useful as a kind of language that allows us to reformulate some notions and therefore provide new intuition. In this survey we provide (with minimal comments) many different examples where notions and statements that involve Kolmogorov complexity are compared with their counterparts not involving complexity.
2013
Engineers like to think that they produce something different from that of a chaotic system. The Eiffel Tower is fundamentally different from the same components lying in a heap on the ground. Mt. Rushmore is fundamentally different from a random mountainside. But engineers lack a good method for quantifying this idea. This has led some to reject the idea that engineered or designed systems can be detected. Various methods have been proposed, each of which has various faults: some have trouble distinguishing noise from data, some are subjective, etc. For this study, conditional Kolmogorov complexity is used to measure the degree of specification of an object. The Kolmogorov complexity of an object is the length of the shortest computer program required to describe that object. Conditional Kolmogorov complexity is Kolmogorov complexity with access to a context. The program can extract information from the context in a variety of ways, allowing more compression. The more compressible an object is given the context, the more specified it is.
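A hedged sketch of the idea of compression with access to a context: zlib accepts a preset dictionary, so the "context" can be supplied as a dictionary and the compressed size compared with and without it (this illustrates the approximation of conditional complexity; it is not the paper's actual method, and all names are hypothetical):

```python
import zlib

def compressed_size(data: bytes, context: bytes = b"") -> int:
    """Compressed size of data, optionally using context as a preset dictionary."""
    if context:
        comp = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=context)
    else:
        comp = zlib.compressobj(9)
    return len(comp.compress(data) + comp.flush())

obj = b"four score and seven years ago our fathers brought forth"
ctx = b"four score and seven years ago"   # shared context
print("K(obj)     ~", compressed_size(obj))
print("K(obj|ctx) ~", compressed_size(obj, ctx))  # smaller: the context helps
```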
2021
It is well known that normality can be described as incompressibility via finite automata. Still, the statement and the proof of this result as given by Becher and Heiber (2013) in terms of "lossless finite-state compressors" do not follow the standard scheme of the Kolmogorov complexity definition (an automaton is used for compression, not decompression). We modify this approach to make it more similar to the traditional Kolmogorov complexity theory (and simpler) by explicitly defining the notion of automatic Kolmogorov complexity and using its simple properties. Using this characterization and a sufficient condition for normality in terms of Kolmogorov complexity derived from it, we provide easy proofs for classical results about normal sequences (Champernowne, Wall, Piatetski-Shapiro, Besicovitch, Copeland, Erdős et al.). Then we extend this approach to finite state dimension. We show that the block entropy definition of the finite state dimension remains the same if non-aligned blocks are used.
In addition to the equations, physicists use the following additional difficult-to-formalize property: the initial conditions and the values of the parameters must not be abnormal. We describe a natural formalization of this property and show that this formalization is in good accordance with theoretical physics. At present, this formalization has mainly been applied to the foundations of physics. However, more practical applications are potentially possible.
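One standard complexity-based way to make "not abnormal" precise (a common formalization in this line of work, not necessarily the authors' exact one) is via randomness deficiency:

```latex
% Randomness deficiency of a string x of length n:
d(x) = n - C(x \mid n)
% x counts as "not abnormal" when d(x) is small, i.e., x is incompressible/typical.
```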
Fundamentals of Computation Theory
It is well known that normality (all factors of a given length appear in an infinite sequence with the same frequency) can be described as incompressibility via finite automata. Still, the statement and proof of this result as given by Becher and Heiber [4] in terms of "lossless finite-state compressors" do not follow the standard scheme of the Kolmogorov complexity definition (the automaton is used for compression, not decompression). We modify this approach to make it more similar to the traditional Kolmogorov complexity theory (and simpler) by explicitly defining the notion of automatic Kolmogorov complexity and using its simple properties. Other known notions of description complexity related to finite automata (Shallit-Wang [13], Calude-Salomaa-Roblot [6]) are discussed (see the last section). As a byproduct, this approach provides simple proofs of classical results about normality (equivalence of the definitions with aligned occurrences and with all occurrences, Wall's theorem saying that a normal number remains normal when multiplied by a rational number, and Agafonov's result saying that normality is preserved by automatic selection rules).
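For reference, the standard definition of normality that these results concern (binary case):

```latex
% A sequence \omega is normal (base 2) if every finite block w occurs
% with the limiting frequency 2^{-|w|}:
\lim_{n \to \infty} \frac{\#\{\, i \le n : \omega_i \dots \omega_{i+|w|-1} = w \,\}}{n} = 2^{-|w|}
\quad \text{for every finite block } w
```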
Entropy, 2011
Kolmogorov complexity and Shannon entropy are two conceptually different information measures, as the former is based on the size of programs and the latter on probability distributions. However, it is known that, for any recursive probability distribution, the expected value of Kolmogorov complexity equals its Shannon entropy, up to a constant that depends only on the distribution. We study whether a similar relationship holds for Rényi and Tsallis entropies of order α, showing that it only holds for Rényi and Tsallis entropies of order 1 (i.e., for Shannon entropy). Regarding a time-bounded analogue of this relationship, we show that, for distributions such that the cumulative probability distribution is computable in time t(n), the expected value of time-bounded Kolmogorov complexity (where the allotted time is nt(n) log(nt(n))) is in the same range as the unbounded version. So, for these distributions, Shannon entropy captures the notion of computationally accessible information. We prove that, for the universal time-bounded distribution m^t(x), Tsallis and Rényi entropies converge if and only if α is greater than 1.
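For reference, the entropies being compared are (standard definitions, with the expected-complexity relation stated for a recursive distribution P):

```latex
% Shannon, Rényi (order \alpha) and Tsallis (order \alpha) entropies of P:
H(P) = -\sum_x P(x) \log P(x), \quad
H_\alpha(P) = \frac{1}{1-\alpha} \log \sum_x P(x)^{\alpha}, \quad
T_\alpha(P) = \frac{1}{\alpha-1} \Bigl( 1 - \sum_x P(x)^{\alpha} \Bigr)
% both reduce to H(P) as \alpha \to 1; the relation studied is
\sum_x P(x)\, K(x) = H(P) + c_P, \quad c_P \text{ depending only on } P
```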