2012
We present several applications of simple topological arguments (such as the non-contractibility of a sphere and similar results) to Kolmogorov complexity. It turns out that discrete versions of these results can be used to prove the existence of strings with prescribed complexity with O(1)-precision (instead of the usual O(log n)-precision). In particular, we improve an earlier result of M. Vyugin and show that for every n and for every string x of complexity at least n + O(log n) there exists a string y such that both C(x | y) and C(y | x) are equal to n + O(1). We also show that for a given tuple of strings x_i (assuming they are almost independent) there exists another string y such that conditioning on y halves the complexities of all x_i, with O(1)-precision.
2011
The notion of Kolmogorov complexity (=the minimal length of a program that generates some object) is often useful as a kind of language that allows us to reformulate some notions and therefore provide new intuition. In this survey we provide (with minimal comments) many different examples where notions and statements that involve Kolmogorov complexity are compared with their counterparts not involving complexity.
Measures of Complexity, 2015
Algorithmic information theory studies description complexity and randomness and is now a well-known field of theoretical computer science and mathematical logic. There are several textbooks and monographs devoted to this theory [4, 1, 5, 2, 7] where one can find detailed expositions of many difficult results as well as historical references. However, it seems that a short survey of its basic notions and main results relating these notions to each other is missing. This report attempts to fill this gap and covers the basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), Solomonoff universal a priori probability, notions of randomness (Martin-Löf randomness, Mises-Church randomness), and effective Hausdorff dimension. We prove their basic properties (symmetry of information, connection between a priori probability and prefix complexity, criterion of randomness in terms of complexity, complexity characterization for effective dimension) and show some applications (incompressibility method in computational complexity theory, incompleteness theorems). It is based on the lecture notes of a course at Uppsala University given by the author [6].
2008
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the Kolmogorov-Chaitin complexity of a string s. Some attempts have been made to arrive at a framework stable enough for a concrete definition of K, independent of any constant under a programming language, by appealing to the naturalness of the language in question. The aim of this paper is to present an approach to overcoming the problem by looking at a set of models of computation converging in output probability distribution, such that naturalness can be inferred, thereby providing a framework for a stable definition of K under the set of convergent models of computation.
1998
We extend the notion of Kolmogorov complexity to the computational model of real Turing machines. Following the lines of the classical theory, we show an Invariance Theorem and explore the notion of incompressibility. We explicitly exhibit incompressible infinite strings and real numbers. These results suggest that the complexity notion, on real machines, has much in common with ordinary randomness. © 1998 Elsevier B.V. All rights reserved.
arXiv (Cornell University), 2000
Given a reference computer, Kolmogorov complexity is a well defined function on all binary strings. In the standard approach, however, only the asymptotic properties of such functions are considered because they do not depend on the reference computer. We argue that this approach can be more useful if it is refined to include an important practical case of simple binary strings. Kolmogorov complexity calculus may be developed for this case if we restrict the class of available reference computers. The interesting problem is to define a class of computers which is restricted in a natural way modeling the real-life situation where only a limited class of computers is physically available to us. We give an example of what such a natural restriction might look like mathematically, and show that under such restrictions some error terms, even logarithmic in complexity, can disappear from the standard complexity calculus.
Topology and its Applications, 2010
Farber introduced a notion of topological complexity TC(X) that is related to robotics. Here we introduce a series of numerical invariants TC_n(X), n = 2, 3, …, such that TC_2(X) = TC(X) and TC_n(X) ≤ TC_{n+1}(X). For these higher complexities, we define their symmetric versions that can also be regarded as higher analogs of the symmetric topological complexity.
Journal of Computer and System Sciences, 2011
We continue an investigation into resource-bounded Kolmogorov complexity [ABK+06], which highlights the close connections between circuit complexity and Levin's time-bounded Kolmogorov complexity measure Kt (and other measures with a similar flavor), and also exploits derandomization techniques to provide new insights regarding Kolmogorov complexity. The Kolmogorov measures that have been introduced have many advantages over other approaches to defining resource-bounded Kolmogorov complexity (such as much greater independence from the underlying choice of universal machine that is used to define the measure) [ABK+06]. Here, we study the properties of other measures that arise naturally in this framework. The motivation for introducing yet more notions of resource-bounded Kolmogorov complexity is twofold:
• to demonstrate that other complexity measures such as branching-program size and formula size can also be discussed in terms of Kolmogorov complexity, and
• to demonstrate that notions such as nondeterministic Kolmogorov complexity and distinguishing complexity [BFL02] also fit well into this framework.
The main theorems that we provide using this new approach to resource-bounded Kolmogorov complexity are:
• A complete set (R_KNt) for NEXP/poly defined in terms of strings of high Kolmogorov complexity.
• A lower bound, showing that R_KNt is not in NP ∩ coNP.
• New conditions equivalent to the conditions "NEXP ⊆ nonuniform NC^1" and "NEXP ⊆ L/poly".
• Theorems showing that "distinguishing complexity" is closely connected to both FewEXP and to EXP.
• Hardness results for the problems of approximating formula size and branching program size.
Information Processing Letters, 2005
We derive the coincidence of Lutz's constructive dimension and Kolmogorov complexity for sets of infinite strings from Levin's early result on the existence of an optimal left computable cylindrical semi-measure M, via simple calculations. © 2004 Elsevier B.V. All rights reserved.
2019
Romashchenko and Zimand~\cite{rom-zim:c:mutualinfo} have shown that if we partition the set of pairs $(x,y)$ of $n$-bit strings into combinatorial rectangles, then $I(x:y) \geq I(x:y \mid t(x,y)) - O(\log n)$, where $I$ denotes mutual information in the Kolmogorov complexity sense, and $t(x,y)$ is the rectangle containing $(x,y)$. We observe that this inequality can be extended to coverings with rectangles which may overlap. The new inequality essentially states that in case of a covering with combinatorial rectangles, $I(x:y) \geq I(x:y \mid t(x,y)) - \log \rho - O(\log n)$, where $t(x,y)$ is any rectangle containing $(x,y)$ and $\rho$ is the thickness of the covering, which is the maximum number of rectangles that overlap. We discuss applications to communication complexity of protocols that are nondeterministic, or randomized, or Arthur-Merlin, and also to the information complexity of interactive protocols.
Theoretical Computer Science, 2002
We consider, for a real number, the Kolmogorov complexities of its expansions with respect to different bases. It is shown that, for plain and self-delimiting Kolmogorov complexity, the complexities of the prefixes of its expansions with respect to different bases r and b are related in a way that depends only on the relative information of one base with respect to the other.
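In the special case of multiplicatively dependent bases, the relation between prefixes is purely combinatorial: each base-9 digit of a real packs exactly two of its base-3 digits. A small sketch (the example value 1/7 and the helper name are mine):

```python
def digits(num: int, den: int, base: int, k: int) -> list:
    """First k digits of the base-`base` expansion of num/den (0 <= num < den)."""
    out = []
    for _ in range(k):
        num *= base
        out.append(num // den)
        num %= den
    return out

d3 = digits(1, 7, 3, 12)  # base-3 expansion of 1/7
d9 = digits(1, 7, 9, 6)   # base-9 expansion of 1/7

# Each base-9 digit encodes two consecutive base-3 digits, so a prefix
# of n base-9 digits determines exactly 2n base-3 digits.
assert all(d9[i] == 3 * d3[2 * i] + d3[2 * i + 1] for i in range(6))
print(d3, d9)
```

For multiplicatively independent bases no such digit-by-digit translation exists, which is where the complexity-theoretic relation in the abstract becomes nontrivial.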
2020
There is a parallelism between Shannon information theory and algorithmic information theory. In particular, the same linear inequalities are true for Shannon entropies of tuples of random variables and Kolmogorov complexities of tuples of strings (Hammer et al., 1997), as well as for sizes of subgroups and projections of sets (Chan, Yeung, Romashchenko, Shen, Vereshchagin, 1998--2002). This parallelism started with the Kolmogorov-Levin formula (1968) for the complexity of pairs of strings with logarithmic precision. Longpre (1986) proved a version of this formula for space-bounded complexities. In this paper we prove an improved version of Longpre's result with a tighter space bound, using Sipser's trick (1980). Then, using this space bound, we show that every linear inequality that is true for complexities or entropies, is also true for space-bounded Kolmogorov complexities with a polynomial space overhead.
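One of the linear inequalities referred to above is subadditivity, H(X, Y) ≤ H(X) + H(Y), which by the stated parallelism also holds for Kolmogorov complexities of pairs of strings (up to logarithmic terms). A numerical sanity check on a randomly chosen joint distribution (the distribution is mine, for illustration):

```python
import itertools
import math
import random

def H(p):
    """Shannon entropy (in bits) of a distribution given as {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# A random joint distribution over pairs (x, y) with x, y in {0, 1, 2}.
random.seed(0)
w = {(x, y): random.random() for x, y in itertools.product(range(3), range(3))}
total = sum(w.values())
pxy = {k: v / total for k, v in w.items()}
px = {x: sum(pxy[(x, y)] for y in range(3)) for x in range(3)}
py = {y: sum(pxy[(x, y)] for x in range(3)) for y in range(3)}

# Subadditivity: H(X, Y) <= H(X) + H(Y).
assert H(pxy) <= H(px) + H(py) + 1e-9
print(H(px), H(py), H(pxy))
```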
Theoretical Computer Science, 2012
We present a measure of string complexity, called I-complexity, computable in linear time and space. It counts the number of different substrings in a given string. The least complex strings are the runs of a single symbol, the most complex are the de Bruijn strings. Although the I-complexity of a string is not the length of any minimal description of the string, it satisfies many basic properties of classical description complexity. In particular, the number of strings with I-complexity up to a given value is bounded, and most strings of each length have high I-complexity.
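The substring-counting idea behind I-complexity can be illustrated with a naive quadratic sketch (this is not the paper's linear-time algorithm, just a direct check of the definition; example strings are mine):

```python
def distinct_substrings(s: str) -> int:
    """Number of distinct non-empty substrings of s.
    Naive O(n^2)-space sketch; the paper achieves linear time and space."""
    return len({s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)})

# A run of one symbol is least complex: its only substrings are its prefixes.
print(distinct_substrings("aaaaaaaa"))   # 8 distinct substrings
# A string of pairwise distinct symbols has the maximum n(n+1)/2 substrings.
print(distinct_substrings("abcdefgh"))   # 36 distinct substrings
```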
Theory of Computing Systems, 2012
We prove the formula C(a, b) = K(a | C(a, b)) + C(b | a, C(a, b)) + O(1) that expresses the plain complexity of a pair in terms of prefix-free and plain conditional complexities of its components. The well known formula from Shannon information theory states that H(ξ, η) = H(ξ) + H(η | ξ). Here ξ, η are random variables and H stands for the Shannon entropy. A similar formula for algorithmic information theory was proven by Kolmogorov and Levin [5] and says that C(a, b) = C(a) + C(b | a) + O(log n), where a and b are binary strings of length at most n and C stands for Kolmogorov complexity (as defined initially by Kolmogorov [4]; now this version is usually called plain Kolmogorov complexity). Informally, C(u) is the minimal length of a program that produces u, and C(u | v) is the minimal length of a program that transforms v to u; the complexity C(u, v) of a pair (u, v) is defined as the complexity of some standard encoding of this pair. This formula implies that I(a : b) = I(b : a) + O(log n), where I(u : v) is the amount of information in u about v, defined as C(v) − C(v | u); this property is often called "symmetry of information". The term O(log n), as was noted in [5], cannot be replaced by O(1). Later Levin found an O(1)-exact version of this formula that uses the so-called prefix-free version of complexity: K(a, b) = K(a) + K(b | a, K(a)) + O(1); this version, reported in [2], was also discovered by Chaitin [1]. In the definition of prefix-free complexity we restrict ourselves to self-delimiting programs: reading a program from left to right, the interpreter determines where it ends. See, e.g., [7] for the definitions and proofs of these results. In this note we provide a somewhat similar formula for plain complexity (also with O(1)-precision): Theorem 1. C(a, b) = K(a | C(a, b)) + C(b | a, C(a, b)) + O(1). Proof. The proof is not difficult after the formula is presented.
The ≤-inequality is a generalization of the inequality C(x, y) ≤ K(x) + C(y) and can be proven in the same way. Assume that p is a self-delimiting program that maps C(a, b) to a, and q is a (not necessarily self-delimiting) program that maps a and C(a, b) to b. B. Bauwens is supported by Fundação para a Ciência e a Tecnologia grant SFRH/BPD/75129/2010, and is also partially supported by project CSI 2 (PTDC/EIAC/099951/2008). A. Shen is supported in part by projects NAFIT ANR-08-EMER-008-01 grant and RFBR 09-01-00709a. The authors are grateful to their colleagues for interesting discussions and to the anonymous referees for useful comments.
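The Shannon chain rule H(ξ, η) = H(ξ) + H(η | ξ) quoted in this abstract can be verified numerically on a toy joint distribution (the table of probabilities below is mine, chosen for exact dyadic entropies):

```python
import math

def H(dist):
    """Shannon entropy (in bits) of {outcome: probability}."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

# Joint distribution of (ξ, η) on a 2x2 table.
p = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}

px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}  # marginal of ξ
# H(η | ξ) = Σ_x p(x) · H(η | ξ = x)
Hcond = sum(px[x] * H({y: p[(x, y)] / px[x] for y in (0, 1)}) for x in (0, 1))

# Chain rule: H(ξ, η) = H(ξ) + H(η | ξ), exactly (no O(log n) error term,
# unlike its Kolmogorov-Levin analogue for plain complexity).
assert abs(H(p) - (H(px) + Hcond)) < 1e-9
print(H(p), H(px), Hcond)
```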
2010
This is a short introduction to Kolmogorov complexity and information theory. The interested reader is referred to the literature, especially the textbooks [CT91] and [LV97], which cover the fields of information theory and Kolmogorov complexity in depth and with all the necessary rigor. They are very readable and require only a minimum of prior knowledge.
1995
We study the set of resource-bounded Kolmogorov random strings R^t = {x : K^t(x) ≥ |x|} for t a time-constructible function such that t(n) ≥ n^2 and t(n) ∈ 2^{n^{O(1)}}. We show that the class of sets that Turing reduce to R^t has measure 0 in EXP with respect to the resource-bounded measure introduced by [17]. From this we conclude that R^t is not Turing-complete for EXP. This contrasts with the resource-unbounded setting, where R is Turing-complete for co-RE. We show that the class of sets to which R^t bounded-truth-table reduces has p_2-measure 0 (therefore, measure 0 in EXP). This answers an open question of Lutz, giving a natural example of a language that is not weakly complete for EXP and that reduces to a measure 0 class in EXP. It follows that the sets that are p-btt-hard for EXP have p_2-measure 0. 1 Introduction. One of the main questions in complexity theory is the relation between complexity classes, such as for example P, NP, and EXP. It is well known that ...
Annals of Pure and Applied Logic, 2004
We investigate the initial segment complexity of random reals. Let K(·) denote prefix-free Kolmogorov complexity. A natural measure of the relative randomness of two reals α and β is to compare the complexities K(α ↾ n) and K(β ↾ n). It is well known that a real α is 1-random iff there is a constant c such that for all n, K(α ↾ n) > n − c. We ask what else can be said about the initial segment complexity of random reals. Thus, we study the fine behaviour of K(α ↾ n) for random α. Following work of Downey, Hirschfeldt and LaForte, we say that α ≤_K β iff there is a constant O(1) such that for all n, K(α ↾ n) ≤ K(β ↾ n) + O(1). We call the equivalence classes under this measure of relative randomness K-degrees. We give proofs that there is a random real α so that lim sup_n K(α ↾ n) − K(Ω ↾ n) = ∞, where Ω is Chaitin's random real. One is based upon (unpublished) work of Solovay, and the other exploits a new idea. Further, based on this new idea, we prove there are uncountably many K-degrees of random reals by proving that μ({β : β ≤_K α}) = 0. As a corollary to the proof we can prove there is no largest K-degree. Finally we prove that if n ≠ m then the initial segment complexities of the natural n- and m-random sets (namely ∅^{(n−1)} and ∅^{(m−1)}) are different. The techniques introduced in this paper have already found a number of other applications.
The Computer Journal, 1999
We briefly discuss the origins, main ideas and principal applications of the theory of Kolmogorov complexity.
2013
The famous Gödel incompleteness theorem states that for every consistent sufficiently rich formal theory T there exist true statements that are unprovable in T . Such statements would be natural candidates for being added as axioms, but how can we obtain them? One classical (and well studied) approach is to add to some theory T an axiom that claims the consistency of T . In this paper we discuss another approach motivated by Chaitin's version of Gödel's theorem where axioms claiming the randomness (or incompressibility) of some strings are probabilistically added, and show that it is not really useful, in the sense that this does not help us to prove new interesting theorems. This result (cf. [She06]) answers a question recently asked by Lipton [LR11]. The situation changes if we take into account the size of the proofs: randomly chosen axioms may help making proofs much shorter (unless NP=PSPACE). This result partially answers the question asked in [She06]. We then study the axiomatic power of the statements of type "the Kolmogorov complexity of x exceeds n" (where x is some string, and n is some integer) in general. They are Π1 (universally quantified) statements of Peano arithmetic. We show (Theorem 5) that by adding all true statements of this type, we obtain a theory that proves all true Π1-statements, and also provide a more detailed classification. In particular, as Theorem 7 shows, to derive all true Π1-statements it is enough to add one statement of this type for each n (or even for infinitely many n) if strings are chosen in a special way. On the other hand, one may add statements of this type for most x of length n (for every n) and still obtain a weak theory (Theorem 10). We also study other logical questions related to "random axioms" (hierarchy with respect to n, Theorem 8 in Section 3.3, independence in Section 3.6, etc.). Finally, we consider a theory that claims Martin-Löf randomness of a given infinite binary sequence. 
This claim can be formalized in different ways. We show that different formalizations are closely related but not equivalent, and study their properties.
2010
We study the relationship between complexity cores of a language and the descriptional complexity of the characteristic sequence of the language based on Kolmogorov complexity. Intuitively, a complexity core is a set of hard instances of a language, i.e. instances which cannot be decided in polynomial time. Kolmogorov complexity measures the information content of a string by the length of the shortest program which prints the string. Time-bounded Kolmogorov complexity looks at the length of a shortest program which prints the string within a specified time bound. We prove that a recursive set A has a complexity core if for all constants c, the computational depth (the difference between time-bounded and unbounded Kolmogorov complexities) of the characteristic sequence of A up to length n is larger than c infinitely often. We also show that if a language has a complexity core of exponential density, then it cannot be accepted in average polynomial time, when the strings are distributed according to a time bounded version of the universal distribution.