2008, arXiv (Cornell University)
Simulation and bisimulation metrics for stochastic systems provide a quantitative generalization of the classical simulation and bisimulation relations. These metrics capture the similarity of states with respect to quantitative specifications written in the quantitative µ-calculus and related probabilistic logics. We first show that the metrics provide a bound for the difference in long-run average and discounted average behavior across states, indicating that the metrics can be used both in system verification and in performance evaluation. For turn-based games and MDPs, we provide a polynomial-time algorithm for the computation of the one-step metric distance between states. The algorithm is based on linear programming; it improves on the previously known exponential-time algorithm based on a reduction to the theory of reals. We then present PSPACE algorithms for both the decision problem and the problem of approximating the metric distance between two states, matching the best known algorithms for Markov chains. For the bisimulation kernel of the metric, our algorithm works in time O(n^4) for both turn-based games and MDPs, improving the previously best known O(n^9 · log(n)) time algorithm for MDPs. For a concurrent game G, we show that computing the exact distance between states is at least as hard as computing the value of concurrent reachability games and the square-root-sum problem in computational geometry. We show that checking whether the metric distance is bounded by a rational r can be done via a reduction to the theory of real closed fields, involving a formula with three quantifier alternations, yielding O(|G|^O(|G|^5)) time complexity, improving the previously known reduction, which yielded O(|G|^O(|G|^7)) time complexity. These algorithms can be iterated to approximate the metrics using binary search.
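To make the linear-programming step concrete, here is a minimal sketch (assumed names and data, not the paper's implementation): the one-step distance between two states can be phrased as an optimal-transport problem over their successor distributions, with the current state distances as ground cost, and solved with an off-the-shelf LP solver such as scipy's.

```python
# Minimal sketch (assumed names, not the paper's implementation): the one-step
# distance between two states is the optimal-transport (Kantorovich) cost
# between their successor distributions, with the current state distances d
# as the ground cost.  This is a plain linear program.
import numpy as np
from scipy.optimize import linprog

def one_step_distance(p, q, d):
    """p, q: successor distributions over n states; d: n x n current distances."""
    n = len(p)
    c = d.reshape(-1)                       # minimise sum_ij flow[i,j] * d[i,j]
    A_eq, b_eq = [], []
    for i in range(n):                      # row marginals: sum_j flow[i,j] = p[i]
        row = np.zeros((n, n)); row[i, :] = 1.0
        A_eq.append(row.reshape(-1)); b_eq.append(p[i])
    for j in range(n):                      # column marginals: sum_i flow[i,j] = q[j]
        col = np.zeros((n, n)); col[:, j] = 1.0
        A_eq.append(col.reshape(-1)); b_eq.append(q[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun

# Example: successor distributions of two states over a 3-state system.
p = np.array([0.5, 0.5, 0.0])
q = np.array([0.4, 0.1, 0.5])
d = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
print(one_step_distance(p, q, d))           # transport cost under the current d
```

Iterating such a one-step update over all state pairs is one standard way to approximate the full metric, which is what makes the polynomial-time one-step computation useful.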
22nd Annual IEEE Symposium on Logic in Computer Science (LICS 2007), 2007
We consider two-player games played over finite state spaces for an infinite number of rounds. At each state, the players simultaneously choose moves; the moves determine a successor state. It is often advantageous for players to choose probability distributions over moves, rather than single moves. Given a goal (e.g., "reach a target state"), the question of winning is thus a probabilistic one: "what is the maximal probability of winning from a given state?".
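As a minimal illustration of why randomised moves matter (an illustration only, not the paper's algorithm): even in a single round, the best probability the row player can guarantee is the value of a zero-sum matrix game, computable by linear programming. The payoff matrix and the use of scipy below are assumptions.

```python
# Hypothetical sketch: value of a one-shot zero-sum matrix game, where entries
# of M are the row player's winning probabilities.  Computed by LP.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Return (value, optimal mixed strategy) for the row player."""
    m, n = M.shape
    # maximise v subject to: x is a distribution and, for every column j,
    # sum_i x_i * M[i, j] >= v.  Variables are (x_1..x_m, v); linprog minimises -v.
    c = np.zeros(m + 1); c[-1] = -1.0
    A_ub = np.hstack([-M.T, np.ones((n, 1))])       # v - x^T M[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# "Matching pennies"-style game: no pure move guarantees a win, but the
# uniform mixture guarantees winning with probability 1/2.
value, strategy = matrix_game_value(np.array([[1.0, 0.0], [0.0, 1.0]]))
print(value, strategy)   # 0.5, [0.5, 0.5]
```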
Lecture Notes in Computer Science, 2004
We extend the basic system relations of trace inclusion, trace equivalence, simulation, and bisimulation to a quantitative setting in which propositions are interpreted not as boolean values, but as real values in the interval [0, 1]. Trace inclusion and equivalence give rise to asymmetrical and symmetrical linear distances, while simulation and bisimulation give rise to asymmetrical and symmetrical branching distances. We study the relationships among these distances, and we provide a full logical characterization of the distances in terms of quantitative versions of LTL and µ-calculus. We show that, while trace inclusion (resp. equivalence) coincides with simulation (resp. bisimulation) for deterministic boolean transition systems, linear and branching distances do not coincide for deterministic quantitative transition systems. Finally, we provide algorithms for computing the distances, together with matching lower and upper complexity bounds. In this paper, we use the term "distance" in a generic way, applying it to quantities that are traditionally called pseudo-metrics and quasi-pseudo-metrics .
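For reference, one standard form of the asymmetric branching (simulation) distance is the least fixpoint sketched below; this is an illustrative sketch only, and the paper's exact definition may differ (for instance in discounting or in how propositions are valued). Here [p](s) denotes the value of proposition p at state s and s → s' ranges over transitions.

```latex
% Hedged sketch of a standard asymmetric branching (simulation) distance,
% not necessarily the paper's exact definition.
\[
  d(s,t) \;=\; \max\Bigl(\, \max_{p}\bigl|\,[p](s)-[p](t)\,\bigr| ,\;
               \max_{s \to s'} \; \min_{t \to t'} \; d(s',t') \Bigr)
\]
% The symmetric branching (bisimulation) distance is obtained by additionally
% maximising over the two directions.
```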
Annals of Mathematics and Artificial Intelligence
Rational verification refers to the problem of checking which temporal logic properties hold of a concurrent/multiagent system, under the assumption that agents in the system choose strategies that form a game-theoretic equilibrium. Rational verification can be understood as a counterpart to model checking for multiagent systems, but while classical model checking can be done in polynomial time for some temporal logic specification languages such as CTL, and in polynomial space with LTL specifications, rational verification is much harder: the key decision problems for rational verification are 2EXPTIME-complete with LTL specifications, even when using explicit-state system representations. Against this background, our contributions in this paper are threefold. First, we show that the complexity of rational verification can be greatly reduced by restricting specifications to GR(1), a fragment of LTL that can represent a broad and practically useful class of response properties of reactive systems. In particular,...
IEEE Transactions on Software Engineering, 2000
We extend the classical system relations of trace inclusion, trace equivalence, simulation, and bisimulation to a quantitative setting in which propositions are interpreted not as boolean values, but as elements of arbitrary metric spaces. Trace inclusion and equivalence give rise to asymmetrical and symmetrical linear distances, while simulation and bisimulation give rise to asymmetrical and symmetrical branching distances. We study the relationships among these distances, and we provide a full logical characterization of the distances in terms of quantitative versions of LTL and µ-calculus. We show that, while trace inclusion (resp. equivalence) coincides with simulation (resp. bisimulation) for deterministic boolean transition systems, linear and branching distances do not coincide for deterministic metric transition systems. Finally, we provide algorithms for computing the distances over finite systems, together with a matching lower complexity bound.
Acta Informatica, 2001
We tackle the problem of non-robustness of simulation and bisimulation when dealing with probabilistic processes. It is important to ignore tiny deviations in probabilities because these often come from experience or estimations. A few approaches have been proposed to treat this issue, for example metrics to quantify the non-bisimilarity (or closeness) of processes. Relaxing the definition of simulation and bisimulation is another avenue, which we follow. We define a new semantics for a known simple logic for probabilistic processes and show that it characterises a notion of ε-simulation. We also define two-player games that correspond to these notions: the existence of a winning strategy for one of the players determines ε-(bi)simulation. Of course, for all the notions defined, letting ε = 0 gives back the usual notions of logical equivalence, simulation and bisimulation. However, in contrast to what happens when ε = 0, two-way ε-simulation for ε > 0 is not equal to ε-bisimulation. Next we give a polynomial-time algorithm to compute a naturally derived metric: the distance between states s and t is defined as the smallest ε such that s and t are ε-equivalent. Finally, we show that most of these notions can be extended to deal with probabilistic systems that allow non-determinism as well.
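The derived metric lends itself to a simple approximation scheme; the sketch below is hypothetical (the paper gives an exact polynomial-time algorithm) and assumes some decision procedure eps_equivalent(s, t, eps), for instance one of the game- or flow-based checks.

```python
# Hypothetical sketch of the derived metric described above: the distance
# between s and t is the smallest eps for which they are eps-equivalent.
# eps_equivalent is an assumed decision procedure; eps-equivalence is
# monotone in eps, so binary search converges to the smallest such eps.
def derived_distance(s, t, eps_equivalent, tol=1e-6):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if eps_equivalent(s, t, mid):
            hi = mid            # already equivalent at mid: try a smaller eps
        else:
            lo = mid            # need a larger tolerance
    return hi
```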
We consider iterated games in which the likelihood of interaction at each stage is a function of the distance between the players. Divide-the-dollar is played on a circle and on a lattice, and we compare various update strategies and matchup rules.
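A toy sketch of the kind of spatial iterated game described above (an assumption-laden illustration: it simplifies distance-dependent interaction to nearest neighbours on the circle, and the demand levels, round count, and imitate-the-best update rule are chosen for illustration only):

```python
# Hypothetical sketch: divide-the-dollar on a circle with an imitate-the-best
# neighbour update rule.  A pair splits the dollar only if their demands are
# compatible (sum at most 1); otherwise both get nothing.
import random

DEMANDS = [0.3, 0.5, 0.7]           # possible claims on the dollar
N, ROUNDS = 30, 200                 # ring size and number of rounds

agents = [random.choice(DEMANDS) for _ in range(N)]

def payoff(d1, d2):
    return d1 if d1 + d2 <= 1.0 else 0.0

for _ in range(ROUNDS):
    # each agent plays against its two neighbours on the circle
    scores = [payoff(agents[i], agents[(i - 1) % N]) +
              payoff(agents[i], agents[(i + 1) % N]) for i in range(N)]
    # imitate-the-best-neighbour update (one of many possible matchup/update rules)
    agents = [agents[max([(i - 1) % N, i, (i + 1) % N], key=lambda j: scores[j])]
              for i in range(N)]

print(agents)   # in spatial models, fair demands (0.5) tend to spread
```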
2011
We present relaxed notions of simulation and bisimulation on Probabilistic Automata (PA) that allow some error epsilon. When epsilon is zero we retrieve the usual notions of bisimulation and simulation on PAs. We give logical characterisations of these notions by choosing suitable logics which differ from the elementary ones, L with negation and L without negation, by the modal operator. Using flow networks, we show how to compute the relations in PTIME. This allows the definition of an efficiently computable non-discounted distance between the states of a PA. A natural modification of this distance is introduced to obtain a discounted distance, which weakens the influence of long-term transitions. We compare our notions of distance to others previously defined and illustrate our approach on various examples. We also show that our distance is not expansive with respect to process algebra operators. Although L without negation is a suitable logic to characterise epsilon-(bi)simulation on deterministic PAs, it is not for general PAs; interestingly, we prove that it does characterise weaker notions, called a priori epsilon-(bi)simulation, which we prove to be NP-hard to decide.
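The flow-network step can be sketched as follows; this is one standard way to phrase an epsilon-matching between two distributions as a maximum-flow problem, using networkx, and the function names, scaling, and exact acceptance condition are assumptions rather than the paper's definitions.

```python
# Hypothetical sketch: check whether distribution mu can be matched to nu,
# up to an eps-mass, through pairs of states deemed related.  Probabilities
# are scaled to integers so that max-flow works on integral capacities.
import networkx as nx

def eps_matched(mu, nu, related, eps, scale=10**6):
    """mu, nu: dicts state -> probability; related(s, t) -> bool."""
    g = nx.DiGraph()
    for s, p in mu.items():
        g.add_edge("src", ("L", s), capacity=int(p * scale))
        for t in nu:
            if related(s, t):
                g.add_edge(("L", s), ("R", t), capacity=int(p * scale))
    for t, q in nu.items():
        g.add_edge(("R", t), "snk", capacity=int(q * scale))
    flow_value, _ = nx.maximum_flow(g, "src", "snk")
    # accept if all but at most an eps-mass of mu is routed to related states
    return flow_value >= (1.0 - eps) * scale

# Toy example: two distributions over {a, b}, identity relation.
mu = {"a": 0.6, "b": 0.4}
nu = {"a": 0.5, "b": 0.5}
print(eps_matched(mu, nu, lambda s, t: s == t, eps=0.1))    # True
print(eps_matched(mu, nu, lambda s, t: s == t, eps=0.05))   # False
```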
2021
We consider the complexity (in terms of the arithmetical hierarchy) of the various quantifier levels of the diagram of a computably presented metric structure. As the truth value of a sentence of continuous logic may be any real in [0, 1], we introduce two kinds of diagrams at each level: the closed diagram, which encapsulates weak inequalities of the form φ ≤ r, and the open diagram, which encapsulates strict inequalities of the form φ < r. We show that the closed and open Σ_N diagrams are Π^0_{N+1} and Σ^0_N respectively, and that the closed and open Π_N diagrams are Π^0_N and Σ^0_{N+1} respectively. We then introduce effective infinitary formulas of continuous logic and extend our results to the hyperarithmetical hierarchy. Finally, we demonstrate that our results are optimal.
Lecture Notes in Computer Science, 2012
Partial metric spaces generalise metric spaces, allowing non-zero self-distance. This is needed to model computable partial information, but falls short in an important respect. The present cost of computing information, such as processor time or memory used, is rarely expressible in domain theory, but contemporary theories of algorithms incorporate precise control over the cost of computing resources. Complexity theory in Computer Science has dramatically advanced through an intelligent understanding of algorithms over discrete totally defined data structures such as directed graphs, without using partially defined information. So we have an unfortunate longstanding separation of partial metric spaces for modelling partially defined computable information from the complexity theory of algorithms for costing totally defined computable information. To bridge that separation we seek an intelligent theory of cost for partial metric spaces. As examples we consider the cost of computing a double negation ¬¬p in two-valued propositional logic, the cost of computing negation as failure in logic programming, and a cost model for the hiaton time delay.
2000
We introduce a family of languages intended for representing knowledge and reasoning about metric (and more general distance) spaces. While the simplest language can speak only about distances between individual objects and Boolean relations between sets, the more expressive ones are capable of capturing notions such as ‘somewhere in (or somewhere out of) the sphere of a certain radius’, ‘everywhere in a certain ring’, etc. The computational complexity of the satisfiability problem for formulas in our languages ranges from NP-completeness to undecidability and depends on the class of distance spaces in which they are interpreted. Besides the class of all metric spaces, we consider, for example, the spaces ℝ × ℝ and ℕ × ℕ with their natural metrics.
ArXiv, 2020
The purpose of this paper is to explore the question "to what extent could we produce formal, machine-verifiable, proofs in real algebraic geometry?" The question has been asked before but as yet the leading algorithms for answering such questions have not been formalised. We present a thesis that a new algorithm for ascertaining satisfiability of formulae over the reals via Cylindrical Algebraic Coverings [Abraham, Davenport, England, Kremer, Deciding the Consistency of Non-Linear Real Arithmetic Constraints with a Conflict Driven Search Using Cylindrical Algebraic Coverings, 2020] might provide traces and outputs that allow the results to be more susceptible to machine verification than those of competing algorithms.
well-behaved, where p is the length of the sequence B. We have also reviewed the other weaker and stronger conditions of triangularity and have introduced a new weaker condition here to reformulate the strategy for deciding the metricity of a d(B).
Theoretical Computer Science, 2005
Given a string x and a language L, the Hamming distance of x to L is the minimum Hamming distance of x to any string in L. The edit distance of a string to a language is analogously defined.
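For the special case of a regular language, this distance admits a simple dynamic program over (position in x, automaton state); the sketch below is an illustration under that assumption, not taken from the paper, with the DFA encoding chosen for convenience.

```python
# Hypothetical sketch: Hamming distance of a string x to the language of a DFA,
# by dynamic programming over the DFA states.  At each position we either keep
# x's symbol (cost 0) or substitute another symbol (cost 1).
def hamming_distance_to_dfa(x, alphabet, delta, start, accepting):
    """delta: dict (state, symbol) -> state.  Returns min #mismatches, or None."""
    INF = float("inf")
    dist = {start: 0}                       # cheapest cost to reach each state
    for ch in x:
        new_dist = {}
        for state, cost in dist.items():
            for sym in alphabet:
                nxt = delta[(state, sym)]
                extra = 0 if sym == ch else 1
                if cost + extra < new_dist.get(nxt, INF):
                    new_dist[nxt] = cost + extra
        dist = new_dist
    return min((c for s, c in dist.items() if s in accepting), default=None)

# Example: L = strings over {a, b} with an even number of b's.
delta = {("even", "a"): "even", ("even", "b"): "odd",
         ("odd", "a"): "odd", ("odd", "b"): "even"}
print(hamming_distance_to_dfa("abb", {"a", "b"}, delta, "even", {"even"}))  # 0
print(hamming_distance_to_dfa("ab", {"a", "b"}, delta, "even", {"even"}))   # 1
```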
2012 Ninth International Conference on Quantitative Evaluation of Systems, 2012
We study the problem of determining approximate equivalences in Markov Decision Processes with rewards using bisimulation metrics. We provide an extension of the framework previously introduced in Ferns et al. (2004), which computes iteratively improving approximations to bisimulation metrics using exhaustive pairwise state comparisons. The similarity between states is determined using the Earth Mover's Distance, as extensively studied in optimization and machine learning. We address two computational limitations of the above framework: first, all pairs of states have to be compared at every iteration, and second, convergence is proven only under exact computations. We extend their work to incorporate "on-the-fly" methods, which allow computational effort to focus first on pairs of states where the impact is expected to be greater. We prove that a method similar to asynchronous dynamic programming converges to the correct value of the bisimulation metric. The second relaxation is based on applying heuristics to obtain approximate state comparisons, building on recent work on improved algorithms for computing Earth Mover's Distance. Finally, we show how this approach can be used to generate new algorithmic strategies, based on existing prioritized sweeping algorithms used for prediction and control in MDPs.
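For orientation, the exhaustive baseline that this line of work improves upon can be sketched as the following fixed-point iteration (a hypothetical sketch: the constants cR and cT, the MDP encoding, and the emd argument, which could be the transport LP sketched earlier in this listing, are assumptions, and the paper's on-the-fly and heuristic refinements are deliberately not shown).

```python
# Hypothetical sketch of the exhaustive Ferns et al.-style iteration:
#   d(s,t) <- max_a ( cR * |r[a][s] - r[a][t]| + cT * EMD_d(P[a][s], P[a][t]) ),
# where EMD_d is the Earth Mover's Distance under the current metric d.
import numpy as np

def bisim_metric(P, r, emd, cR=1.0, cT=0.9, iters=50):
    """P[a][s]: successor distribution (array); r[a][s]: reward; emd(p, q, d)."""
    n = len(r[0])
    d = np.zeros((n, n))
    for _ in range(iters):
        new_d = np.zeros((n, n))
        for s in range(n):
            for t in range(n):
                new_d[s, t] = max(
                    cR * abs(r[a][s] - r[a][t]) + cT * emd(P[a][s], P[a][t], d)
                    for a in range(len(P)))
        d = new_d
    return d
```

The point of the paper is precisely to avoid this all-pairs, exact update: asynchronous ("on-the-fly") updates and approximate Earth Mover's Distance computations focus effort where it matters most.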
Logical Methods in Computer Science, 2008
We consider two-player games played over finite state spaces for an infinite number of rounds. At each state, the players simultaneously choose moves; the moves determine a successor state. It is often advantageous for players to choose probability distributions over moves, rather than single moves. Given a goal (e.g., "reach a target state"), the question of winning is thus a probabilistic one: "what is the maximal probability of winning from a given state?".
Journal of Automated Reasoning, 2007
In many AI fields, the problem of finding a solution that is as close as possible to a given configuration has to be faced. This paper addresses this problem in a propositional framework. The decision problem distance-sat, which consists in determining whether a propositional formula admits a model that disagrees with a given partial interpretation on at most d variables, is introduced. The complexity of distance-sat and of several restrictions of it are identified. Two algorithms based on the well-known Davis/Logemann/Loveland search procedure for the satisfiability problem sat are presented so as to solve distance-sat for CNF formulas. Their computational behaviours are compared with the ones offered by sat solvers on sat encodings of distance-sat instances. The empirical evaluation allows for drawing firm conclusions about the respective performances of the algorithms and for relating the difficulty of distance-sat to the difficulty of sat from the practical side.
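To pin down the decision problem itself (not the paper's DPLL-based solvers), here is a deliberately simple sketch: plain recursive branching with a distance budget; the CNF encoding and variable names are assumptions for illustration.

```python
# Hypothetical sketch of DISTANCE-SAT: does the CNF formula have a model that
# disagrees with the given partial interpretation on at most d variables?
def distance_sat(clauses, variables, partial, d, assignment=None):
    """clauses: list of lists of literals (+v / -v); partial: dict var -> bool."""
    if assignment is None:
        assignment = {}
    # fail if some clause is already falsified by the current partial assignment
    if any(all(assignment.get(abs(l)) is not None and
               assignment[abs(l)] != (l > 0) for l in c) for c in clauses):
        return False
    if d < 0:
        return False                        # distance budget exceeded
    unassigned = [v for v in variables if v not in assignment]
    if not unassigned:
        return True                         # all variables set, no clause falsified
    v = unassigned[0]
    for value in (True, False):
        cost = 1 if v in partial and partial[v] != value else 0
        if distance_sat(clauses, variables, partial, d - cost,
                        {**assignment, v: value}):
            return True
    return False

# (x1 or x2) and (not x1 or x3), partial interpretation {x1: False, x3: False}
clauses = [[1, 2], [-1, 3]]
print(distance_sat(clauses, [1, 2, 3], {1: False, 3: False}, d=0))  # True: x2=True
print(distance_sat([[1], [-1]], [1], {1: True}, d=1))               # False: unsat
```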
Theoretical Computer Science, 2011
Continuous first-order logic is used to apply model-theoretic analysis to analytic structures (e.g. Hilbert spaces, Banach spaces, probability spaces, etc.). Classical computable model theory is used to examine the algorithmic structure of mathematical objects that can be described in classical first-order logic. The present paper shows that probabilistic computation (sometimes called randomized computation) and continuous logic stand in a similar close relationship.
National Conference on Artificial Intelligence, 1999
In many AI fields, the problem of finding a solution that is as close as possible to a given configuration has to be faced. This paper addresses this problem in a propositional framework. The decision problem DISTANCE-SAT, which consists in determining whether a propositional CNF formula admits a model that disagrees with a given partial interpretation on at most d variables, is introduced. The complexity of DISTANCE-SAT and of several restrictions of it are identified. Two algorithms based on the well-known Davis/Putnam search procedure are presented so as to solve DISTANCE-SAT. Their empirical evaluation enables deriving firm conclusions about their respective performances and relating the difficulty of DISTANCE-SAT to the difficulty of SAT from the practical side.