2020
There is a parallelism between Shannon information theory and algorithmic information theory. In particular, the same linear inequalities are true for Shannon entropies of tuples of random variables and Kolmogorov complexities of tuples of strings (Hammer et al., 1997), as well as for sizes of subgroups and projections of sets (Chan, Yeung, Romashchenko, Shen, Vereshchagin, 1998--2002). This parallelism started with the Kolmogorov-Levin formula (1968) for the complexity of pairs of strings with logarithmic precision. Longpre (1986) proved a version of this formula for space-bounded complexities. In this paper we prove an improved version of Longpre's result with a tighter space bound, using Sipser's trick (1980). Then, using this space bound, we show that every linear inequality that is true for complexities or entropies, is also true for space-bounded Kolmogorov complexities with a polynomial space overhead.
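For reference, the Kolmogorov–Levin formula mentioned above can be stated as $C(x,y) = C(x) + C(y \mid x) + O(\log n)$ for strings $x, y$ of length at most $n$. A "linear inequality for complexities" is then a statement of the form $\sum_{I \subseteq \{1,\dots,k\}} \lambda_I \, C(x_I) \ge -O(\log n)$, where $x_I$ denotes the sub-tuple of strings indexed by $I$; this is only a schematic restatement added for orientation, not text from the abstract.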
2019
Romashchenko and Zimand~\cite{rom-zim:c:mutualinfo} have shown that if we partition the set of pairs $(x,y)$ of $n$-bit strings into combinatorial rectangles, then $I(x:y) \geq I(x:y \mid t(x,y)) - O(\log n)$, where $I$ denotes mutual information in the Kolmogorov complexity sense, and $t(x,y)$ is the rectangle containing $(x,y)$. We observe that this inequality can be extended to coverings with rectangles which may overlap. The new inequality essentially states that in case of a covering with combinatorial rectangles, $I(x:y) \geq I(x:y \mid t(x,y)) - \log \rho - O(\log n)$, where $t(x,y)$ is any rectangle containing $(x,y)$ and $\rho$ is the thickness of the covering, which is the maximum number of rectangles that overlap. We discuss applications to communication complexity of protocols that are nondeterministic, or randomized, or Arthur-Merlin, and also to the information complexity of interactive protocols.
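As a quick sanity check (added here, not part of the abstract): a partition is a covering of thickness $\rho = 1$, so $\log \rho = 0$ and the covering inequality $I(x:y) \geq I(x:y \mid t(x,y)) - \log \rho - O(\log n)$ specializes exactly to the Romashchenko–Zimand bound $I(x:y) \geq I(x:y \mid t(x,y)) - O(\log n)$ for partitions.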
2000
This document contains lecture notes of an introductory course on Kolmogorov complexity. They cover basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), notion of randomness (Martin-Löf randomness, Mises–Church randomness), Solomonoff universal a priori probability and their properties (symmetry of information, connection between a priori probability and prefix complexity, criterion of randomness in terms of complexity) and applications (incompressibility method in computational complexity theory, incompleteness theorems).
Journal of Computer and System Sciences, 2000
It was mentioned by Kolmogorov that the properties of algorithmic complexity and Shannon entropy are similar. We investigate one aspect of this similarity. Namely, we are interested in linear inequalities that are valid for Shannon entropy and for Kolmogorov complexity.
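One concrete member of this family of inequalities, added for illustration: subadditivity holds in both settings, namely $H(\xi, \eta) \le H(\xi) + H(\eta)$ for random variables $\xi, \eta$, and $C(x, y) \le C(x) + C(y) + O(\log n)$ for strings $x, y$ of length at most $n$.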
Measures of Complexity, 2015
Algorithmic information theory studies description complexity and randomness and is now a well-known field of theoretical computer science and mathematical logic. There are several textbooks and monographs devoted to this theory [4, 1, 5, 2, 7] where one can find detailed expositions of many difficult results as well as historical references. However, it seems that a short survey of its basic notions and of the main results relating these notions to each other is missing. This report attempts to fill this gap and covers the basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), Solomonoff universal a priori probability, notions of randomness (Martin-Löf randomness, Mises-Church randomness), and effective Hausdorff dimension. We prove their basic properties (symmetry of information, connection between a priori probability and prefix complexity, the criterion of randomness in terms of complexity, and the complexity characterization of effective dimension) and show some applications (the incompressibility method in computational complexity theory, incompleteness theorems). It is based on the lecture notes of a course at Uppsala University given by the author [6].
Arxiv preprint cs/0410002
We compare the elementary theories of Shannon information and Kolmogorov complexity, the extent to which they have a common purpose, and where they are fundamentally different. We discuss and relate the basic notions of both theories: Shannon entropy versus Kolmogorov complexity, the relation of both to universal coding, Shannon mutual information versus Kolmogorov ('algorithmic') mutual information, probabilistic sufficient statistic versus algorithmic sufficient statistic (related to lossy compression in the Shannon theory versus meaningful information in the Kolmogorov theory), and rate distortion theory versus Kolmogorov's structure function. Part of the material has appeared in print before, scattered through various publications, but this is the first comprehensive systematic comparison. The last mentioned relations are new.
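For orientation, the two notions of mutual information being compared can be written side by side (standard definitions, not quoted from the abstract): $I(\xi ; \eta) = H(\xi) + H(\eta) - H(\xi, \eta)$ for random variables, versus $I(x : y) = C(x) + C(y) - C(x, y)$ for strings, the latter coinciding with $C(x) - C(x \mid y)$ up to $O(\log n)$ terms by symmetry of information.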
2014
We show that almost all known lower bound methods for communication complexity are also lower bounds for the information complexity. In particular, we define a relaxed version of the partition bound of Jain and Klauck [JK10] and prove that it lower bounds the information complexity of any function. Our relaxed partition bound subsumes all norm-based methods (e.g. the γ_2 method) and rectangle-based methods (e.g. the rectangle/corruption bound, the smooth rectangle bound, and the discrepancy bound), except the partition bound. Our result uses a new connection between rectangles and zero-communication protocols where the players can either output a value or abort. We prove the following compression lemma: given a protocol for a function f with information complexity I, one can construct a zero-communication protocol that has non-abort probability at least 2^{-O(I)} and that computes f correctly with high probability conditioned on not aborting. Then, we show how such a zero-communication protocol relates to the relaxed partition bound. We use our main theorem to resolve three of the open questions raised by Braverman [Bra12]. First, we show that the information complexity of the Vector in Subspace Problem [KR11] is Ω(n^{1/3}), which, in turn, implies that there exists an exponential separation between quantum communication complexity and classical information complexity. Moreover, we provide an Ω(n) lower bound on the information complexity of the Gap Hamming Distance Problem.
2010
This is a short introduction to Kolmogorov complexity and information theory. The interested reader is referred to the literature, especially the textbooks [CT91] and [LV97], which cover the fields of information theory and Kolmogorov complexity in depth and with all the necessary rigor. They are well worth reading and require only a minimum of prior knowledge.
Theoretical Computer Science, 2002
Kolmogorov's very first paper on algorithmic information theory (Kolmogorov, Problemy peredachi informatsii 1(1) (1965), 3) was entitled "Three approaches to the definition of the quantity of information". These three approaches were called combinatorial, probabilistic and algorithmic. Trying to establish formal connections between the combinatorial and algorithmic approaches, we prove that every linear inequality involving Kolmogorov complexities can be translated into an equivalent combinatorial statement. (Note that the same linear inequalities are true for Kolmogorov complexities and Shannon entropy, see Hammer et al. (Proceedings of CCC'97, Ulm).) Entropy (complexity) proofs of combinatorial inequalities given in Llewellyn and Radhakrishnan (Personal Communication) and Hammer and Shen (Theory Comput. Syst. 31 (1998) 1) can be considered as special cases (and natural starting points) for this translation.
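A standard example of this translation (stated here for illustration, not quoted from the paper): the complexity inequality $2\,C(x,y,z) \le C(x,y) + C(x,z) + C(y,z) + O(\log n)$ corresponds to the combinatorial fact that every finite set $A \subseteq X \times Y \times Z$ satisfies $|A|^2 \le |A_{XY}| \cdot |A_{XZ}| \cdot |A_{YZ}|$, where $A_{XY}$, $A_{XZ}$, $A_{YZ}$ denote the three two-dimensional projections of $A$.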
Theory of Computing Systems, 2012
We prove the formula C(a,b) = K(a | C(a,b)) + C(b | a, C(a,b)) + O(1) that expresses the plain complexity of a pair in terms of prefix-free and plain conditional complexities of its components. The well-known formula from Shannon information theory states that H(ξ, η) = H(ξ) + H(η | ξ). Here ξ, η are random variables and H stands for the Shannon entropy. A similar formula for algorithmic information theory was proven by Kolmogorov and Levin [5] and says that C(a,b) = C(a) + C(b | a) + O(log n), where a and b are binary strings of length at most n and C stands for Kolmogorov complexity (as defined initially by Kolmogorov [4]; now this version is usually called plain Kolmogorov complexity). Informally, C(u) is the minimal length of a program that produces u, and C(u | v) is the minimal length of a program that transforms v to u; the complexity C(u,v) of a pair (u,v) is defined as the complexity of some standard encoding of this pair. This formula implies that I(a : b) = I(b : a) + O(log n), where I(u : v) is the amount of information in u about v, defined as C(v) − C(v | u); this property is often called "symmetry of information". The term O(log n), as was noted in [5], cannot be replaced by O(1). Later Levin found an O(1)-exact version of this formula that uses the so-called prefix-free version of complexity: K(a,b) = K(a) + K(b | a, K(a)) + O(1); this version, reported in [2], was also discovered by Chaitin [1]. In the definition of prefix-free complexity we restrict ourselves to self-delimiting programs: reading a program from left to right, the interpreter determines where it ends. See, e.g., [7] for the definitions and proofs of these results. In this note we provide a somewhat similar formula for plain complexity (also with O(1)-precision): Theorem 1. C(a,b) = K(a | C(a,b)) + C(b | a, C(a,b)) + O(1). Proof. The proof is not difficult after the formula is presented. The ≤-inequality is a generalization of the inequality C(x,y) ≤ K(x) + C(y) and can be proven in the same way. Assume that p is a self-delimiting program that maps C(a,b) to a, and q is a (not necessarily self-delimiting) program that maps a and C(a,b) to b. B. Bauwens is supported by Fundação para a Ciência e a Tecnologia grant SFRH/BPD/75129/2010, and is also partially supported by project CSI 2 (PTDC/EIAC/099951/2008). A. Shen is supported in part by NAFIT ANR-08-EMER-008-01 and RFBR 09-01-00709a grants. The authors are grateful to their colleagues for interesting discussions and to the anonymous referees for useful comments.
The Computer Journal, 1999
The question of why and how probability theory can be applied to real-world phenomena has been discussed for several centuries. When algorithmic information theory was created, it became possible to discuss these problems in a more specific way. In particular, Li and Vitányi [6], Rissanen [3], and Wallace and Dowe [7] have discussed the connection between Kolmogorov (algorithmic) complexity and the minimum description length (minimum message length) principle. In this note we try to point out a few simple observations that (we believe) are worth keeping in mind while discussing these topics.
Artificial Life, 2015
In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules. (We do not refer here to computational complexity, which is a very well-refined concept.)
53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS'12), 2012
We show that almost all known lower bound methods for communication complexity are also lower bounds for the information complexity. In particular, we define a relaxed version of the partition bound of Jain and Klauck and prove that it lower bounds the information complexity of any function. Our relaxed partition bound subsumes all norm-based methods (e.g. the γ_2 method) and rectangle-based methods (e.g. the rectangle/corruption bound, the smooth rectangle bound, and the discrepancy bound), except the partition bound.
2003
(1) In the two-party communication complexity model, we show that the tribes function on n inputs [6] has two-sided error randomized complexity Ω(n), while its nondeterministic complexity and co-nondeterministic complexity are both Θ(√n).
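For orientation, one common two-party formulation of tribes (assumed here; the paper's exact variant may differ): with $n = m^2$, Alice holds $x \in \{0,1\}^{m \times m}$, Bob holds $y \in \{0,1\}^{m \times m}$, and $\mathrm{TRIBES}(x,y) = \bigwedge_{i=1}^{m} \bigvee_{j=1}^{m} (x_{i,j} \wedge y_{i,j})$, an AND of $m$ set-intersection blocks of width $m$.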
Kolmogorov complexity and Shannon entropy are conceptually different measures. However, for any recursive probability distribution, the expected value of Kolmogorov complexity equals its Shannon entropy, up to a constant. We study whether a similar relationship holds for Rényi and Tsallis entropies of order α, showing that it only holds for α = 1. Regarding a time-bounded analogue of this relationship, we show that a similar result holds for some distributions. We prove that, for the universal time-bounded distribution m^t(x), the Tsallis and Rényi entropies converge if and only if α is greater than 1. We also establish the uniform continuity of these entropies.
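For reference, the quantities involved (standard definitions, added for convenience): for a recursive distribution $P$ one has $\sum_x P(x)\,K(x) = H(P) + O(1)$, with the constant depending on $P$, while the Rényi and Tsallis entropies of order $\alpha \neq 1$ are $H_\alpha(P) = \frac{1}{1-\alpha} \log \sum_x P(x)^\alpha$ and $S_\alpha(P) = \frac{1}{\alpha-1}\left(1 - \sum_x P(x)^\alpha\right)$, both recovering the Shannon entropy in the limit $\alpha \to 1$.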
Proceedings of Computational Complexity. Twelfth Annual IEEE Conference, 1997
The paper investigates connections between linear inequalities that are valid for Shannon entropies and for Kolmogorov complexities.
Journal of the ACM
We show that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings x and y is equal, up to logarithmic precision, to the length of the longest shared secret key that two parties, one having x and the complexity profile of the pair and the other one having y and the complexity profile of the pair, can establish via a probabilistic protocol with interaction on a public channel. For ℓ > 2, the longest shared secret that can be established from a tuple of strings (x_1, ..., x_ℓ) by ℓ parties, each one having one component of the tuple and the complexity profile of the tuple, is equal, up to logarithmic precision, to the complexity of the tuple minus the minimum communication necessary for distributing the tuple to all parties. We establish the communication complexity of secret key agreement protocols that produce a secret key of maximal length, for protocols with public randomness. We also show that if the communication complexity drops below the established threshold then only very short secret keys can be obtained.
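Restated in the notation used elsewhere in this list (added for convenience, not quoted from the paper): writing $I(x:y) = C(x) + C(y) - C(x,y)$ for the algorithmic mutual information, the two-party result says that the longest secret key obtainable over a public channel has length $I(x:y)$ up to $O(\log n)$ terms, assuming each party also knows the complexity profile of the pair.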
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between `structural' (meaningful) and `random' information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam's razor in inductive inference. We end by discussing some of the philosophical implications of the theory.
ArXiv, 2020
It is known that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings x and y is equal to the length of the longest shared secret key that two parties can establish via a probabilistic protocol with interaction on a public channel, assuming that the parties hold as their inputs x and y respectively. We determine the worst-case communication complexity of this problem for the setting where the parties can use private sources of random bits. We show that for some x, y the communication complexity of the secret key agreement does not decrease even if the parties have to agree on a secret key whose size is much smaller than the mutual information between x and y. On the other hand, we discuss examples of x, y such that the communication complexity of the protocol declines gradually with the size of the derived secret key. The proof of the main result uses spectral properties of appropriate graphs and the expander mixing lemma, as well as information theo...
Lecture Notes in Computer Science, 2014
Kolmogorov complexity (K) is an incomputable function. It can be approximated from above but not to arbitrary given precision and it cannot be approximated from below. By restricting the source of the data to a specific model class, we can construct a computable function κ to approximate K in a probabilistic sense: the probability that the error is greater than k decays exponentially with k. We apply the same method to the normalized information distance (NID) and discuss conditions that affect the safety of the approximation.
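For reference, the normalized information distance mentioned above is commonly defined (in one standard form) as $\mathrm{NID}(x,y) = \frac{\max\{K(x \mid y),\, K(y \mid x)\}}{\max\{K(x),\, K(y)\}}$, so a safe approximation of NID has to control the error in each of the underlying complexity terms.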