2012, Lecture Notes in Computer Science
Partial metric spaces generalise metric spaces, allowing non-zero self-distance. This is needed to model computable partial information, but falls short in an important respect. The cost of computing information, such as processor time or memory used, is rarely expressible in domain theory, yet contemporary theories of algorithms incorporate precise control over the cost of computing resources. Complexity theory in Computer Science has dramatically advanced through an intelligent understanding of algorithms over discrete, totally defined data structures such as directed graphs, without using partially defined information. So we have an unfortunate longstanding separation of partial metric spaces for modelling partially defined computable information from the complexity theory of algorithms for costing totally defined computable information. To bridge that separation we seek an intelligent theory of cost for partial metric spaces. As examples we consider the cost of computing a double negation ¬¬p in two-valued propositional logic, the cost of computing negation as failure in logic programming, and a cost model for the hiaton time delay.
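To make the first example concrete, here is a minimal sketch of a step-counting evaluator for propositional negation, under the assumption of one cost unit per connective evaluated; the tuple representation and the cost model are our illustrative choices, not the paper's formalism. Although ¬¬p and p have the same truth table, the evaluator assigns them different costs.

```python
# Sketch (our assumptions): formulas are variables (strings) or
# ("not", subformula) tuples; each negation costs one unit.

def eval_counted(formula, env, cost=0):
    """Evaluate a formula; return (truth value, accumulated cost)."""
    if isinstance(formula, str):            # propositional variable
        return env[formula], cost
    op, arg = formula
    if op == "not":
        value, cost = eval_counted(arg, env, cost)
        return not value, cost + 1          # one unit per negation
    raise ValueError(f"unknown connective: {op}")

env = {"p": True}
print(eval_counted("p", env))                      # (True, 0)
print(eval_counted(("not", ("not", "p")), env))    # (True, 2)
```

The two formulas are semantically equal but computationally distinct, which is exactly the distinction a theory of cost for partial metric spaces needs to capture.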
2021
We consider the complexity (in terms of the arithmetical hierarchy) of the various quantifier levels of the diagram of a computably presented metric structure. As the truth value of a sentence of continuous logic may be any real in [0, 1], we introduce two kinds of diagrams at each level: the closed diagram, which encapsulates weak inequalities of the form φ ≤ r, and the open diagram, which encapsulates strict inequalities of the form φ < r. We show that the closed and open Σ_N diagrams are Π^0_{N+1} and Σ^0_N respectively, and that the closed and open Π_N diagrams are Π^0_N and Σ^0_{N+1} respectively. We then introduce effective infinitary formulas of continuous logic and extend our results to the hyperarithmetical hierarchy. Finally, we demonstrate that our results are optimal.
Mathematical and Computer Modelling, 2010
Keywords: weightable quasi-metric; Hausdorff quasi-pseudo-metric; Pompéiu quasi-pseudo-metric; hyperspace; specialization order; information order.
It is well known that both weightable quasi-metrics and the Hausdorff distance provide efficient tools in several areas of Computer Science. This fact suggests, in a natural way, the problem of when the upper and lower Hausdorff quasi-pseudo-metrics of a weightable quasi-metric space (X, d) are weightable. Here we discuss this problem. Although the answer is negative in general, we show, however, that it is positive for several nice classes of (nonempty) subsets of X. Since the construction of these classes depends, to a large degree, on the specialization order of the quasi-metric d, we are able to apply our results to some distinguished quasi-metric models that appear in theoretical computer science and information theory, like the domain of words, the interval domain and the complexity space.
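For orientation, here is a small sketch, under our own assumptions, of the lower and upper Hausdorff quasi-pseudo-metrics computed for finite subsets of a weightable quasi-metric space: we take d(x, y) = max(y − x, 0) on the nonnegative reals, which is weightable with weight w(x) = x. The function names and the choice of which formula is called "lower" versus "upper" are ours; conventions vary in the literature.

```python
# Sketch (our assumptions): Hausdorff quasi-pseudo-metrics of finite sets
# under the weightable quasi-metric d(x, y) = max(y - x, 0), weight w(x) = x.

def d(x, y):
    return max(y - x, 0.0)

def lower_hausdorff(A, B):
    # H-(A, B) = sup over a in A of inf over b in B of d(a, b)
    return max(min(d(a, b) for b in B) for a in A)

def upper_hausdorff(A, B):
    # H+(A, B) = sup over b in B of inf over a in A of d(a, b)
    return max(min(d(a, b) for a in A) for b in B)

A, B = [0.0, 1.0], [2.0, 3.0]
print(lower_hausdorff(A, B))  # 2.0: every point of A must reach into B
print(upper_hausdorff(B, A))  # 0.0: the asymmetry of the quasi-metric shows
```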
Information and Computation, 1998
The synergy between logic and computational complexity has gained importance and vigor in recent years, cutting across areas such as proof theory, finite model theory, computation theory, applicative programming, database theory, and philosophical logic. This volume is the outcome of a Workshop on Logic and Computational Complexity (LCC), organized to bring together researchers in this growing interdisciplinary field, so as to foster and enhance collaborations and to facilitate the discovery of conceptual bridges and unifying principles.
ACM Transactions on …, 2003
We investigate the expressive power and computational properties of two different types of languages intended for speaking about distances. First, we consider a first-order language FM, the two-variable fragment of which turns out to be undecidable in the class of distance spaces validating the triangle inequality as well as in the class of all metric spaces. Yet, this two-variable fragment is decidable in various weaker classes of distance spaces. Second, we introduce a variable-free 'modal' language MS which, when interpreted in metric spaces, has the same expressive power as the two-variable fragment of FM. We determine natural and expressive fragments of MS which are decidable in various classes of distance spaces validating the triangle inequality, in particular the class of all metric spaces.
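To illustrate the flavour of such distance operators, here is a sketch of evaluating an operator of the form "φ holds somewhere within distance a" over a finite metric space. The concrete syntax, the operator name, and the example space are our assumptions for illustration, not the exact language MS of the paper.

```python
# Sketch (our assumptions): a single 'bounded somewhere' distance operator
# evaluated over a finite metric space given as a distance table.

def exists_within(points, dist, a, phi_set):
    """Points x such that some y with dist(x, y) <= a satisfies phi."""
    return {x for x in points
            if any(dist[x][y] <= a for y in phi_set)}

points = {"u", "v", "w"}
dist = {"u": {"u": 0, "v": 1, "w": 4},
        "v": {"u": 1, "v": 0, "w": 3},
        "w": {"u": 4, "v": 3, "w": 0}}
phi = {"w"}                                  # phi holds exactly at w
print(exists_within(points, dist, 3, phi))   # {'v', 'w'}
```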
Theoretical Computer Science, 2011
Continuous first-order logic is used to apply model-theoretic analysis to analytic structures (e.g. Hilbert spaces, Banach spaces, probability spaces, etc.). Classical computable model theory is used to examine the algorithmic structure of mathematical objects that can be described in classical first-order logic. The present paper shows that probabilistic computation (sometimes called randomized computation) and continuous logic stand in a similarly close relationship.
Information Processing Letters, 1997
Fixed Point Theory and Applications, 2014
In Cerdà-Uguet et al. (Theory Comput. Syst. 50:387-399, 2012), a new mathematical fixed point technique, using the so-called Baire partial quasi-metric space, was introduced with the aim of providing the asymptotic complexity of a class of recursive algorithms. The aforementioned technique has the advantage of requiring fewer calculations than the original quasi-metric technique given by Schellekens (Electron. Notes Theor. Comput. Sci. 1:211-232, 1995). In this paper we continue the study, started in Cerdà-Uguet et al., of the use of partial quasi-metric spaces for the asymptotic complexity analysis of algorithms. Concretely, our main purpose is to prove that the Baire partial quasi-metric space is an appropriate mathematical framework for discussing, via fixed point arguments, the asymptotic complexity of a general class of recursive algorithms to which all the algorithms analyzed in Cerdà-Uguet et al. belong. The obtained results are illustrated by applying them to yield the complexity of two celebrated recursive algorithms which do not belong to the class discussed in Cerdà-Uguet et al.
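For readers unfamiliar with the underlying space, here is a sketch of the classical Baire partial metric on finite words, p(x, y) = 2^(−l) with l the length of the longest common prefix; finite words have non-zero self-distance p(x, x) = 2^(−|x|), which is what makes the space "partial". The paper works with a partial quasi-metric variant; this simplified symmetric version and its names are ours.

```python
# Sketch (our assumptions): the Baire partial metric on finite words.

def common_prefix_len(x, y):
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return n

def baire_p(x, y):
    return 2.0 ** -common_prefix_len(x, y)

print(baire_p("001", "0010"))  # 0.125: common prefix "001"
print(baire_p("001", "001"))   # 0.125: self-distance is non-zero
print(baire_p("0", "1"))       # 1.0:   no common prefix
```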
Monographs in Theoretical Computer Science An EATCS Series, 2002
2007 IEEE International Conference on Research, Innovation and Vision for the Future, 2007
A Generalized Galois Lattices formalism for computing contextual categorization allows metrics to evaluate complexity and efficiency, as well as methods for simplifying or complicating the external object at hand. Such methods are adapted to virtual environments and augmented reality devices, for which it is simple to change the distribution of features over categories; for real-world objects, and the human operators that operate on them, the online computation allows a survey of the complexity level and a "simplify it first" planning of operations. As a matter of fact, as noticed by [3], we do not have a scientific definition of complexity. For some authors, the term "complexity" is generally avoided as an overused and poorly defined word, except in specific systems; they advocate in favor of terms such as "diversity" or "complication", and use "complexity" for real non-modeled world entities.
2010
Interval temporal logics formalize reasoning about interval structures over (usually) linearly ordered domains, where time intervals are the primitive ontological entities and truth of formulae is defined relative to time intervals, rather than time points. In this paper, we introduce and study Metric Propositional Neighborhood Logic (MPNL) over natural numbers. MPNL features two modalities referring, respectively, to an interval that is "met by" the current one and to an interval that "meets" the current one, plus an infinite set of length constraints, regarded as atomic propositions, to constrain the lengths of intervals. We argue that MPNL can be successfully used in different areas of artificial intelligence to combine qualitative and quantitative interval temporal reasoning, thus providing a viable alternative to well-established logical frameworks such as Duration Calculus. We show that MPNL is decidable in double exponential time and expressively complete with respect to a well-defined subfragment of the two-variable fragment FO²[N, =, <, s] of first-order logic for linear orders with successor function, interpreted over natural numbers. Moreover, we show that MPNL can be extended in a natural way to cover full FO²[N, =, <, s], but, unexpectedly, the latter (and hence the former) turns out to be undecidable.
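As a sketch of the semantics, the following brute-force evaluator computes the two neighborhood modalities over intervals [i, j] with 0 ≤ i ≤ j ≤ N: ⟨A⟩φ holds at [i, j] when φ holds at some interval the current one meets, i.e. some [j, k], and its transpose ⟨Ā⟩φ when φ holds at some [k, i]. The set-based representation, the bound N, and the name len_eq standing in for the length constraints are our assumptions.

```python
# Sketch (our assumptions): MPNL-style neighborhood modalities evaluated
# by exhaustive enumeration of intervals over {0, ..., N}.

N = 5
intervals = [(i, j) for i in range(N + 1) for j in range(i, N + 1)]

def len_eq(k):
    return {(i, j) for (i, j) in intervals if j - i == k}

def meets(phi_set):        # <A>phi at [i, j]: phi at some [j, k]
    return {(i, j) for (i, j) in intervals
            if any((j, k) in phi_set for k in range(j, N + 1))}

def met_by(phi_set):       # <Abar>phi at [i, j]: phi at some [k, i]
    return {(i, j) for (i, j) in intervals
            if any((k, i) in phi_set for k in range(0, i + 1))}

# intervals immediately followed by a unit-length interval:
print(sorted(meets(len_eq(1))))
```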
Lecture Notes in Computer Science, 2003
In the paper we present a purely logical approach to estimating the computational complexity of potentially intractable problems. The approach is based on descriptive complexity and second-order quantifier elimination techniques. We illustrate the approach on the transversal hypergraph problem, TRANSHYP, which has attracted a great deal of attention and whose exact complexity has remained open for over twenty years. Given two hypergraphs, G and H, TRANSHYP amounts to checking whether G = H^d, where H^d is the transversal hypergraph of H. In the paper we provide a logical characterization of the minimal transversals of a given hypergraph and prove that checking whether G ⊆ H^d is tractable. For the opposite inclusion the problem still remains open. However, we interpret the resulting quantifier sequences in terms of determinism and bounded nondeterminism. The results give better upper bounds than those known from the literature, e.g., in the case when the hypergraph H has a sub-logarithmic number of hyperedges and (for the deterministic case) all hyperedges have cardinality bounded by a function sub-linear with respect to the maximum of the sizes of G and H.
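The tractable direction is easy to see operationally: G ⊆ H^d amounts to checking that every hyperedge of G is a minimal transversal of H, which takes polynomial time. The sketch below shows this check under our own representation (hypergraphs as sets of frozensets); the function names are ours.

```python
# Sketch (our assumptions): polynomial-time check that G is contained in
# the transversal hypergraph H^d, i.e. every edge of G is a minimal
# transversal of H.

def is_transversal(t, H):
    return all(t & e for e in H)          # t hits every hyperedge of H

def is_minimal_transversal(t, H):
    if not is_transversal(t, H):
        return False
    # minimality: removing any vertex must break some hyperedge
    return all(not is_transversal(t - {v}, H) for v in t)

def included_in_dual(G, H):
    return all(is_minimal_transversal(g, H) for g in G)

H = {frozenset({1, 2}), frozenset({2, 3})}
G = {frozenset({2}), frozenset({1, 3})}   # the minimal transversals of H
print(included_in_dual(G, H))             # True
```

The hard direction, H^d ⊆ G, is precisely where the intractability of TRANSHYP concentrates, since it quantifies over all minimal transversals rather than over the given edges of G.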
Journal of the ACM, 1971
The purpose of this paper is to outline the theory of computational complexity which has emerged as a comprehensive theory during the last decade. This theory is concerned with the quantitative aspects of computations and its central theme is the measuring of the difficulty of computing functions. The paper concentrates on the study of computational complexity measures defined for all computable functions and makes no attempt to survey the whole field exhaustively nor to present the material in historical order. Rather it presents the basic concepts, results, and techniques of computational complexity from a new point of view from which the ideas are more easily understood and fit together as a coherent whole.
International Journal of Modern Nonlinear Theory and Application, 2022
This paper uses the concept of algorithmic efficiency to present a unified theory of intelligence. Intelligence is defined informally, formally, and computationally. We introduce the concept of dimensional complexity in algorithmic efficiency and deduce that an optimally efficient algorithm has zero time complexity, zero space complexity, and an infinite dimensional complexity. This algorithm is used to generate the number line.
IEEE Transactions on Information Theory, 2000
Effective complexity measures the information content of the regularities of an object. It has been introduced by M. Gell-Mann and S. Lloyd to avoid some of the disadvantages of Kolmogorov complexity, also known as algorithmic information content. In this paper, we give a precise formal definition of effective complexity and rigorous proofs of its basic properties. In particular, we show that incompressible binary strings are effectively simple, and we prove the existence of strings that have effective complexity close to their lengths. Furthermore, we show that effective complexity is related to Bennett's logical depth: If the effective complexity of a string x exceeds a certain explicit threshold then that string must have astronomically large depth; otherwise, the depth can be arbitrarily small.
Lecture Notes in Computer Science, 2012
The metric dimension of a graph G is the size of a smallest subset L ⊆ V (G) such that for any x, y ∈ V (G) there is a z ∈ L such that the graph distance between x and z differs from the graph distance between y and z. Even though this notion has been part of the literature for almost 40 years, the computational complexity of determining the metric dimension of a graph is still very unclear. Essentially, we only know the problem to be NP-hard for general graphs, to be polynomial-time solvable on trees, and to have a log n-approximation algorithm for general graphs. In this paper, we show tight complexity boundaries for the Metric Dimension problem. We achieve this by giving two complementary results. First, we show that the Metric Dimension problem on bounded-degree planar graphs is NP-complete. Then, we give a polynomial-time algorithm for determining the metric dimension of outerplanar graphs.
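The definition translates directly into a brute-force procedure: find the smallest landmark set L whose vector of BFS distances separates every pair of vertices. The sketch below is exponential in |V(G)|, consistent with the NP-hardness above, and is meant only to make the definition concrete; representation and names are ours.

```python
# Sketch (our assumptions): brute-force metric dimension of a small
# connected graph given as an adjacency-list dict.

from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    V = list(adj)
    dist = {v: bfs_dist(adj, v) for v in V}
    for k in range(1, len(V) + 1):
        for L in combinations(V, k):
            sig = {v: tuple(dist[l][v] for l in L) for v in V}
            if len(set(sig.values())) == len(V):   # all vertices separated
                return k, L
    return None

# a path on 4 vertices: a single endpoint landmark resolves everything
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension(adj))   # (1, (0,))
```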
Normalized information distance (NID) uses the theoretical notion of Kolmogorov complexity, which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program. This practical application is called 'normalized compression distance' and it is trivially computable. It is a parameter-free similarity measure based on compression, and is used in pattern recognition, data mining, phylogeny, clustering, and classification. The complexity properties of its theoretical precursor, the NID, have been open. We show that the NID is neither upper semicomputable nor lower semicomputable.
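The practical measure is indeed trivially computable: with zlib standing in for the real-world compressor (our choice for illustration), the standard formula NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)) becomes a few lines.

```python
# Sketch: normalized compression distance, with zlib as the compressor.

import zlib

def C(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abcabcabcabc" * 20
b = b"abcabcabcabc" * 20
c = bytes(range(256)) * 2
print(ncd(a, b))   # near 0: the files share all their regularities
print(ncd(a, c))   # closer to 1: little shared structure
```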
IEEE Transactions on Software Engineering, 1976
This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains how the graph-theory concepts apply and gives an intuitive explanation of the graph concepts in programming terms. The control graphs of several actual Fortran programs are then presented to illustrate the correlation between intuitive complexity and the graph-theoretic complexity. Several properties of the graph-theoretic complexity are then proved which show, for example, that complexity is independent of physical size (adding or subtracting functional statements leaves complexity unchanged) and complexity depends only on the decision structure of a program.
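This is McCabe's cyclomatic complexity, v(G) = E − N + 2P for a control graph with E edges, N nodes, and P connected components. The sketch below computes it and demonstrates the size-independence property: a straight-line statement adds one node and one edge, leaving v(G) unchanged. The edge-list representation is our assumption.

```python
# Sketch: cyclomatic complexity v(G) = E - N + 2P from a control graph.

def cyclomatic(edges, num_components=1):
    nodes = {u for e in edges for u in e}
    return len(edges) - len(nodes) + 2 * num_components

# if/else diamond: entry -> then|else -> exit
diamond = [("entry", "then"), ("entry", "else"),
           ("then", "exit"), ("else", "exit")]
print(cyclomatic(diamond))   # 2: one decision, two independent paths

# inserting a straight-line statement on one branch leaves v(G) unchanged
longer = [("entry", "then"), ("entry", "else"), ("then", "stmt"),
          ("stmt", "exit"), ("else", "exit")]
print(cyclomatic(longer))    # still 2
```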