While Kolmogorov complexity is the accepted absolute measure of information content in an individual finite object, a similarly absolute notion is needed for the information distance between two individual objects, for example, two...
As an increasing number of protein structures become available, the need for algorithms that can quantify the similarity between protein structures increases as well. Thus, the comparison of proteins' structures, and their clustering...
The algorithmic theory of randomness is well developed when the underlying space is the set of finite or infinite sequences and the underlying probability distribution is the uniform distribution or a computable distribution. These...
In the study of nonlinear physical systems, one encounters apparently random or chaotic behavior, although the systems may be completely deterministic. Applying techniques from symbolic dynamics to maps of the interval, we compute two...
Effective fractal dimension was defined by Lutz (2003) in order to quantitatively analyze the structure of complexity classes, but then interesting connections of effective dimension with information theory were also found, justifying the...
An algorithmic information theoretic method is presented for object-level summarization of meaningful changes in image sequences. Object extraction and tracking data are represented as an attributed tracking graph (ATG), whose connected...
In contrast with software-generated randomness (called pseudo-randomness), quantum randomness can be proven incomputable; that is, it is not exactly reproducible by any algorithm. We provide experimental evidence of incomputability: an...
Four general approaches to the metaphysics of causation are current in Australasian philosophy. One is a development of the regularity theory (attributed to Hume) that uses counterfactuals (Lewis, 1973; 1994). A second is based in the...
This article investigates emergence and complexity in complex systems that can share information on a network. To this end, we use a theoretical approach from information theory, computability theory, and complex networks. One key studied...
Deep learning and other similar machine learning techniques have a huge advantage over other AI methods: they do function when applied to real-world data, ideally from scratch, without human intervention. However, they have several...
We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ("universal probability") as a starting point, and define complexity (an operator) as its negative logarithm.
In spite of being ubiquitous in life sciences, the concept of information is harshly criticized. Uses of the concept other than those derived from Shannon's theory are denounced as pernicious metaphors. We perform a computational...
This document contains lecture notes of an introductory course on Kolmogorov complexity. They cover basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), notion of randomness (Martin-Löf...
This paper is devoted to large scale aspects of the geometry of the space of isometry classes of Riemannian metrics, with a 2-sided curvature bound, on a fixed compact smooth manifold of dimension at least five. Using a mix of tools from...
The concept of entropy plays a major part in communication theory. The Shannon entropy is a measure of uncertainty with respect to an a priori probability distribution. In algorithmic information theory the information content of a message...
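As a hedged illustration of the Shannon side of this contrast, a minimal sketch computing the entropy of an empirical symbol distribution; the function name and example strings are illustrative, not taken from the paper:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy, in bits per symbol, of the empirical symbol distribution."""
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in Counter(message).values())

print(shannon_entropy("abababab"))  # two equiprobable symbols carry 1.0 bit each
```

Unlike algorithmic information content, this value depends only on the symbol frequencies, not on any structure in the ordering of the symbols.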
Using frequency distributions of daily closing price time series of several financial market indexes, we investigate whether the bias away from an equiprobable sequence distribution found in the data, predicted by algorithmic information...
In a sampling problem, we are given an input x ∈ {0,1}^n and asked to sample approximately from a probability distribution D_x over poly(n)-bit strings. In a search problem, we are given an input x ∈ {0,1}^n and asked to find a...
Ray Solomonoff invented the notion of universal induction, featuring an aptly termed "universal" prior probability function over all possible computable environments [9]. The essential property of this prior was its ability to...
Kolmogorov's very first paper on algorithmic information theory (Kolmogorov, Problemy peredachi informatsii 1(1) (1965), 3) was entitled "Three approaches to the definition of the quantity of information". These three approaches were...
We further deconstruct Heraclitean Quantum Systems giving a model for a universe using pregeometric notions in which the end-game problem is overcome by means of self-referential noise. The model displays self-organisation with the...
The main topic of the present work is universal machines for plain and prefix-free description complexity and their domains. We characterise when an r.e. set W is the domain of a universal plain machine in terms of the description...
We discuss certain analogies between quantization and discretization of classical systems on manifolds. In particular, we will apply the quantum dynamical entropy of Alicki and Fannes to numerically study the footprints of chaos in...
A Bayesian prior over first-order theories is defined. It is shown that the prior can be approximated, and the relationship to previously studied priors is examined.
We show that Kolmogorov complexity and its estimators, such as universal codes (or data compression methods), can be applied to hypothesis testing in the framework of classical mathematical statistics. The methods for identity testing and...
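The idea of using compressors as computable stand-ins for Kolmogorov complexity in hypothesis testing can be sketched as follows; zlib, the fixed slack, and the decision rule are illustrative assumptions, not the paper's actual test:

```python
import os
import zlib

def compressible(data: bytes, slack: int = 16) -> bool:
    """Evidence against the 'uniform i.i.d. source' hypothesis: a sample from a
    uniform source should be incompressible, so a compressed length noticeably
    below the raw length (slack absorbs header overhead) suggests structure."""
    return len(zlib.compress(data, 9)) + slack < len(data)

structured = b"0123456789" * 100  # highly regular sample: compresses well
random_like = os.urandom(1000)    # OS entropy source: should not compress
print(compressible(structured), compressible(random_like))
```

Any lossless compressor gives only an upper bound on the Kolmogorov complexity, so this test can detect structure but can never certify randomness.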
Recent large scale experiments have shown that the Normalized Information Distance, an algorithmic information measure, is among the best similarity metrics for melody classification. This paper proposes the use of this distance as a...
Recently, genetic programming has been proposed to model agents' adaptive behavior in a complex transition process where uncertainty cannot be formalized within the usual probabilistic framework. However, this approach has not been widely...
The main contribution of this paper is to design an Information Retrieval (IR) technique based on Algorithmic Information Theory (using the Normalized Compression Distance-NCD), statistical techniques (outliers), and novel organization of...
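A minimal sketch of the NCD named above, assuming zlib as the compressor; the compressor choice, function names, and example strings are illustrative, not the paper's setup:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length, a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar objects, near 1 for unrelated ones."""
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b_ = b"lorem ipsum dolor sit amet, consectetur adipiscing" * 20
print(ncd(a, a) < ncd(a, b_))  # an object is closer to itself than to unrelated text
```

Because it only requires a compressor, the same distance works unchanged on melodies, documents, or any other byte-encoded objects.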
The concept of effective complexity of an object as the minimal description length of its regularities was initiated by Gell-Mann and Lloyd. The regularities are modeled by means of ensembles, which are probability distributions...
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the...
In a genetic algorithm, fluctuations of the entropy of a genome over time are interpreted as fluctuations of the information that the genome's organism is storing about its environment, this being reflected in more complex organisms. The...
In this article, we will show that uncomputability is a relative property not only of oracle Turing machines, but also of subrecursive classes. We will define the concept of a Turing submachine, and a recursive...
All science is founded on the assumption that the physical universe is ordered. Our aim is to challenge this hypothesis using arguments from algorithmic information theory.
Let v, w be infinite 0-1 sequences, and m a positive integer. We say that w is m-embeddable in v if there exists an increasing sequence (n_i : i ≥ 0) of integers with n_0 = 0, such that 1 ≤ n_i − n_{i−1} ≤ m and w(i) = v(n_i) for all i ≥ 1. Let...
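The embeddability condition above can be checked on finite prefixes by tracking the set of positions in v reachable after matching each bit of w; the function name and the 0-based indexing convention are illustrative assumptions:

```python
def is_m_embeddable(w: str, v: str, m: int) -> bool:
    """Check whether the finite 0-1 word w is m-embeddable in the finite word v:
    positions n_1 < n_2 < ... in v (0-based here, so the virtual start -1 plays
    the role of n_0 = 0) with gaps 1 <= n_i - n_{i-1} <= m and w[i] == v[n_i]."""
    reachable = {-1}  # positions in v matched so far; -1 is the virtual start
    for bit in w:
        reachable = {p for q in reachable
                     for p in range(q + 1, min(q + m, len(v) - 1) + 1)
                     if v[p] == bit}
        if not reachable:
            return False
    return True

print(is_m_embeddable("111", "010101", 2))  # gaps of 2 reach every '1': True
print(is_m_embeddable("111", "010101", 1))  # gap 1 forces consecutive positions: False
```

Tracking the whole reachable set (rather than greedily taking the earliest match) matters, because committing to one position can cut off matches that a later position in the same window would still reach.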
The sophistication of a string measures how much structural information it contains. We introduce naive sophistication, a variant of sophistication based on randomness deficiency. Naive sophistication measures the minimum number of bits...
Generalized Information (GI) is a measurement of the degree to which a program can be said to generalize a dataset. It is calculated by creating a program to model the data set, measuring the Active Information in the model, and...
This work presents a theoretical investigation of incompressible multidimensional networks defined by a generalized graph representation. In particular, we study the incompressibility (i.e., algorithmic randomness) of snapshot-dynamic...
Broadly speaking, there are two approaches to quantifying information. The first, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number...
Network complexity, network information content analysis, and lossless compressibility of graph representations have played an important role in network analysis and network modeling. As multidimensional networks, such as...
This article presents a theoretical investigation of computation beyond the Turing barrier from emergent behavior in distributed systems. In particular, we present an algorithmic network that is a mathematical model of a networked...
Our aim is to experimentally study the possibility of distinguishing between quantum sources of randomness, recently proved to be theoretically incomputable, and some well-known computable sources of pseudo-randomness. Incomputability is a...
Solomonoff induction is known to be universal, but incomputable. Its approximations, namely the Minimum Description (or Message) Length (MDL) principles, are adopted in practice in efficient but non-universal forms. Recent attempts...
In 1975, Chaitin introduced his celebrated Omega number, the halting probability of a universal Chaitin machine, a universal Turing machine with a prefix-free domain. The Omega number's bits are algorithmically random: there is no reason...
In this article, we investigate limitations of importing methods based on algorithmic information theory from monoplex networks into multidimensional networks (such as multilayer networks) that have a large number of extra dimensions...
We show in this article that uncomputability is also a relative property of subrecursive classes built on a recursive relative incompressible function, which acts as a higher-order "yardstick" of irreducible information for the respective...
Previous work has shown that perturbation analysis in software space can produce candidate computable generative models and uncover possible causal properties from the finite description of an object or system quantifying the algorithmic...
In this paper, we analyze the problem of prediction in physics from the computational viewpoint. We show that physical paradigms like Laplace determinism, statistical determinism, etc., can be naturally explained by this computational...
As one of the main subjects of investigation in data science, network science has demonstrated a wide range of applications in real-world network analysis and modeling. For example, the pervasive presence of structural or...
This letter provides a proof that Active Information and Generalized Information are both Specified Complexity models, and therefore the mathematics of Specified Complexity can be used to analyze them both.