2010
This is a short introduction to Kolmogorov complexity and information theory. The interested reader is referred to the literature, especially the textbooks [CT91] and [LV97], which cover the fields of information theory and Kolmogorov complexity in depth and with all the necessary rigor. They are very readable and require only a minimum of prior knowledge.
The Computer Journal, 1999
We briefly discuss the origins, main ideas and principal applications of the theory of Kolmogorov complexity.
Measures of Complexity, 2015
Algorithmic information theory studies description complexity and randomness and is now a well-known field of theoretical computer science and mathematical logic. There are several textbooks and monographs devoted to this theory [4, 1, 5, 2, 7] where one can find detailed expositions of many difficult results as well as historical references. However, it seems that a short survey of its basic notions and of the main results relating these notions to each other is missing. This report attempts to fill this gap and covers the basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), Solomonoff universal a priori probability, notions of randomness (Martin-Löf randomness, Mises-Church randomness), and effective Hausdorff dimension. We prove their basic properties (symmetry of information, connection between a priori probability and prefix complexity, criterion of randomness in terms of complexity, complexity characterization for effective dimension) and show some applications (incompressibility method in computational complexity theory, incompleteness theorems). It is based on the lecture notes of a course at Uppsala University given by the author [6].
2000
This document contains lecture notes of an introductory course on Kolmogorov complexity. They cover basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), notion of randomness (Martin-Löf randomness, Mises–Church randomness), Solomonoff universal a priori probability and their properties (symmetry of information, connection between a priori probability and prefix complexity, criterion of randomness in terms of complexity) and applications (incompressibility method in computational complexity theory, incompleteness theorems).
The Computer Journal, 1999
The question of why and how probability theory can be applied to real-world phenomena has been discussed for several centuries. When algorithmic information theory was created, it became possible to discuss these problems in a more specific way. In particular, Li and Vitányi [6], Rissanen [3], and Wallace and Dowe [7] have discussed the connection between Kolmogorov (algorithmic) complexity and the minimum description length (minimum message length) principle. In this note we try to point out a few simple observations that (we believe) are worth keeping in mind while discussing these topics.
International Journal of Engineering Sciences & Research Technology, 2012
This paper describes various application issues of Kolmogorov complexity. Information assurance, network management, and active networks are areas where Kolmogorov complexity has been applied. Our main focus is to show its importance in various domains, including computer virus detection.
This paper presents a proposal for the application of Kolmogorov complexity to the characterization of systems and processes, and the evaluation of computational models. The methodology developed represents a theoretical tool to solve problems from systems science. Two applications of the methodology are presented in order to illustrate the proposal, both of which were developed by the authors. The first one is related to the software development process, the second to computer animation models. In the end a third application of the method is briefly introduced, with the intention of characterizing dynamic systems of chaotic behavior, which clearly demonstrates the potentials of the methodology.
2019
Romashchenko and Zimand~\cite{rom-zim:c:mutualinfo} have shown that if we partition the set of pairs $(x,y)$ of $n$-bit strings into combinatorial rectangles, then $I(x:y) \geq I(x:y \mid t(x,y)) - O(\log n)$, where $I$ denotes mutual information in the Kolmogorov complexity sense, and $t(x,y)$ is the rectangle containing $(x,y)$. We observe that this inequality can be extended to coverings with rectangles which may overlap. The new inequality essentially states that in case of a covering with combinatorial rectangles, $I(x:y) \geq I(x:y \mid t(x,y)) - \log \rho - O(\log n)$, where $t(x,y)$ is any rectangle containing $(x,y)$ and $\rho$ is the thickness of the covering, which is the maximum number of rectangles that overlap. We discuss applications to communication complexity of protocols that are nondeterministic, or randomized, or Arthur-Merlin, and also to the information complexity of interactive protocols.
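For readers skimming the notation, the quantities above are the standard ones (recalled here as background, not as a claim specific to this paper): up to $O(\log n)$ precision, the algorithmic mutual information is $I(x:y) = K(x) + K(y) - K(x,y)$ and $I(x:y \mid t) = K(x \mid t) + K(y \mid t) - K(x,y \mid t)$, while a combinatorial rectangle is a product set $A \times B$ with $A, B \subseteq \{0,1\}^n$, so that a partition or covering of the set of pairs by rectangles assigns to each pair $(x,y)$ at least one rectangle $t(x,y)$ containing it.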
Lecture Notes in Computer Science, 2009
The question of how and why mathematical probability theory can be applied to the "real world" has been debated for centuries. We try to survey the role of algorithmic information theory (Kolmogorov complexity) in this debate.
Arxiv preprint cs/0410002
We compare the elementary theories of Shannon information and Kolmogorov complexity, the extent to which they have a common purpose, and where they are fundamentally different. We discuss and relate the basic notions of both theories: Shannon entropy versus Kolmogorov complexity, the relation of both to universal coding, Shannon mutual information versus Kolmogorov ('algorithmic') mutual information, probabilistic sufficient statistic versus algorithmic sufficient statistic (related to lossy compression in the Shannon theory versus meaningful information in the Kolmogorov theory), and rate distortion theory versus Kolmogorov's structure function. Part of the material has appeared in print before, scattered through various publications, but this is the first comprehensive systematic comparison. The last mentioned relations are new.
Artificial life, 2015
In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules.
2020
There is a parallelism between Shannon information theory and algorithmic information theory. In particular, the same linear inequalities are true for Shannon entropies of tuples of random variables and Kolmogorov complexities of tuples of strings (Hammer et al., 1997), as well as for sizes of subgroups and projections of sets (Chan, Yeung, Romashchenko, Shen, Vereshchagin, 1998--2002). This parallelism started with the Kolmogorov-Levin formula (1968) for the complexity of pairs of strings with logarithmic precision. Longpre (1986) proved a version of this formula for space-bounded complexities. In this paper we prove an improved version of Longpre's result with a tighter space bound, using Sipser's trick (1980). Then, using this space bound, we show that every linear inequality that is true for complexities or entropies, is also true for space-bounded Kolmogorov complexities with a polynomial space overhead.
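As a concrete reminder of the statements involved (standard formulations assumed here, not results of this paper): the Kolmogorov-Levin formula says that $K(x,y) = K(x) + K(y \mid x) + O(\log n)$ for $n$-bit strings $x$ and $y$, and a typical linear inequality valid in both settings is $2K(x,y,z) \le K(x,y) + K(x,z) + K(y,z) + O(\log n)$, whose Shannon counterpart $2H(X,Y,Z) \le H(X,Y) + H(X,Z) + H(Y,Z)$ holds exactly.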
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between `structural' (meaningful) and `random' information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam's razor in inductive inference. We end by discussing some of the philosophical implications of the theory.
Kolmogorov complexity and Shannon entropy are conceptually different measures. However, for any recursive probability distribution, the expected value of Kolmogorov complexity equals its Shannon entropy, up to a constant. We study whether a similar relationship holds for Rényi and Tsallis entropies of order α, showing that it holds only for α = 1. Regarding a time-bounded analogue of this relationship, we show that a similar result holds for some distributions. We prove that, for the universal time-bounded distribution m^t(x), the Tsallis and Rényi entropies converge if and only if α is greater than 1. We also establish the uniform continuity of these entropies.
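For orientation, the standard definitions assumed in this summary are as follows (the additive constant depends on the distribution): for a recursive distribution $P$, $\sum_x P(x)K(x) = H(P) + O(1)$ with $H(P) = -\sum_x P(x)\log P(x)$; the Rényi entropy of order $\alpha$ is $H_\alpha(P) = \frac{1}{1-\alpha}\log\sum_x P(x)^\alpha$ and the Tsallis entropy is $S_\alpha(P) = \frac{1}{\alpha-1}\bigl(1 - \sum_x P(x)^\alpha\bigr)$, both of which recover the Shannon entropy in the limit $\alpha \to 1$ (with the appropriate choice of logarithm base).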
2008
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the Kolmogorov-Chaitin complexity of a string s. Some attempts have been made to arrive at a framework stable enough for a concrete definition of K, independent of the additive constant associated with a particular programming language, by appealing to the naturalness of the language in question. The aim of this paper is to present an approach to overcome the problem by looking at a set of models of computation converging in output probability distribution, such that naturalness can be inferred, thereby providing a framework for a stable definition of K under the set of convergent models of computation.
2011
The notion of Kolmogorov complexity (=the minimal length of a program that generates some object) is often useful as a kind of language that allows us to reformulate some notions and therefore provide new intuition. In this survey we provide (with minimal comments) many different examples where notions and statements that involve Kolmogorov complexity are compared with their counterparts not involving complexity.
Theoretical Computer Science, 2002
Kolmogorov's very first paper on algorithmic information theory (Kolmogorov, Problemy Peredachi Informatsii 1(1) (1965), 3) was entitled "Three approaches to the definition of the quantity of information". These three approaches were called combinatorial, probabilistic and algorithmic. Trying to establish formal connections between the combinatorial and algorithmic approaches, we prove that every linear inequality involving Kolmogorov complexities can be translated into an equivalent combinatorial statement. (Note that the same linear inequalities are true for Kolmogorov complexities and Shannon entropy; see Hammer et al., Proceedings of CCC'97, Ulm.) Entropy (complexity) proofs of combinatorial inequalities given in Llewellyn and Radhakrishnan (personal communication) and Hammer and Shen (Theory Comput. Syst. 31 (1998) 1) can be considered as special cases of (and natural starting points for) this translation.
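A typical instance of this translation, recalled here as a standard illustration rather than a new claim: the linear inequality $2K(x,y,z) \le K(x,y) + K(x,z) + K(y,z) + O(\log n)$ corresponds to the combinatorial fact that every finite set $A \subseteq X \times Y \times Z$ satisfies $|A|^2 \le |A_{XY}| \cdot |A_{XZ}| \cdot |A_{YZ}|$, where $A_{XY}$, $A_{XZ}$ and $A_{YZ}$ denote the three two-dimensional projections of $A$.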
ACM SIGACT News, 2021
This formula can be informally read as follows: the i-th message m_i brings us log(1/p_i) "bits of information" (whatever this means) and appears with frequency p_i, so H is the expected amount of information provided by one random message (one sample of the random variable). Moreover, we can construct an optimal uniquely decodable code that requires about H bits per message on average (at most H + 1, to be exact), and it encodes the i-th message by approximately log(1/p_i) bits, following the natural idea of using short codewords for frequent messages. This fits well with the informal reading of the formula given above, and it is tempting to say that the i-th message "contains log(1/p_i) bits of information." Shannon himself succumbed to this temptation [46, p. 399] when he wrote about entropy estimates and considered Basic English and James Joyce's book "Finnegans Wake" as two extreme examples of high and low redundancy in English texts. But, strictly speak...
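To make the coding claim above concrete, here is a minimal sketch in Python (my own illustration under the standard setup, not code from the paper): it computes H for a toy distribution and compares it with the average codeword length L of a Huffman code, which satisfies H <= L < H + 1.

import heapq, math

def shannon_entropy(probs):
    # H = sum_i p_i * log2(1/p_i): expected information per message, in bits.
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

def huffman_code_lengths(probs):
    # Return the codeword length of each symbol in a Huffman code for probs.
    # Heap entries: (probability, tie-breaking counter, symbols in this subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:  # every symbol in the merged subtree gets one bit deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]  # toy distribution, chosen only for illustration
H = shannon_entropy(probs)
L = sum(p * l for p, l in zip(probs, huffman_code_lengths(probs)))
print(f"entropy H = {H:.3f} bits, Huffman average length L = {L:.3f} bits")  # H <= L < H + 1

For this dyadic distribution the two quantities coincide exactly (H = L = 1.75 bits); for general distributions the average length exceeds the entropy by less than one bit.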
Kolmogorov complexity (K) is an incomputable function. It can be approximated from above, but not to within any given precision, and it cannot be approximated from below. By restricting the source of the data to a specific model class, we can construct a computable function κ to approximate K in a probabilistic sense: the probability that the error is greater than k decays exponentially with k. We apply the same method to the normalized information distance (NID) and discuss conditions that affect the safety of the approximation.
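For reference, one common formulation of the normalized information distance mentioned here (stated as background, up to logarithmic precision) is $\mathrm{NID}(x,y) = \max\{K(x \mid y), K(y \mid x)\} / \max\{K(x), K(y)\}$, so a probabilistically safe approximation of $K$ yields a corresponding approximation of the NID.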
Information and Computation, 2011
We apply recent results on extracting randomness from independent sources to "extract" Kolmogorov complexity. For any α, ε > 0, given a string x with K(x) > α|x|, we show how to use a constant number of advice bits to efficiently compute another string y, |y| = Ω(|x|), with K(y) > (1 − ε)|y|. This result holds for both classical and space-bounded Kolmogorov complexity.
Lecture Notes in Computer Science, 2006
Multisource information theory in the Shannon setting is well known. In this article we try to develop its algorithmic information theory counterpart and use it as a general framework for many interesting questions about Kolmogorov complexity.