2007, HAL (Le Centre pour la Communication Scientifique Directe)
This paper is devoted to establishing sharp bounds for deviation probabilities of partial sums Σ_{i=1}^n f(X_i), where X = (X_n)_{n∈ℕ} is a positive recurrent Markov chain and f is a real-valued function defined on its state space. Combining the regenerative method with the Esscher transformation, these estimates are shown in particular to generalize probability inequalities proved in the i.i.d. case to the Markovian setting for (not necessarily uniformly) geometrically ergodic chains.
Statistics & Probability Letters, 2014
We consider a Markov chain X_n with a spectral gap in the space L²(π). Assume that f is a bounded real-valued function on X. Then the probabilities of large deviations of the sums S_n = Σ_{k=1}^n f(X_k) satisfy Hoeffding-type inequalities. These bounds depend only on the stationary mean πf, the spectral gap, and the end-points of the support of f. We generalize the results of [LP04] in two directions: the state space is general, and we do not assume reversibility.
The Annals of Applied Probability, 2007
Let S_N be the sum of vector-valued functions defined on a finite Markov chain. An analogue of the Bernstein–Hoeffding inequality is derived for the probability of large deviations of S_N, relating this probability to the spectral gap of the Markov chain. Examples suggest that this inequality is better than alternative inequalities if the chain has a sufficiently large spectral gap and the function is high-dimensional.
Advances in Applied Probability, 2001
In this paper, we obtain Markovian bounds on a function of a homogeneous discrete-time Markov chain. To derive such bounds, we use well-known results on stochastic majorization of Markov chains and the Rogers–Pitman lumpability criterion. The proposed method of comparison between functions of Markov chains is not equivalent to the generalized coupling method for Markov chains, although it yields the same kind of majorization. We derive necessary and sufficient conditions for the existence of our Markovian bounds. We also discuss the choice of the geometric invariant related to the lumpability condition that we use.
Statistics & Probability Letters, 2012
We establish a simple variance inequality for U-statistics whose underlying sequence of random variables is an ergodic Markov chain. The constants in this inequality are explicit and depend on computable bounds on the mixing rate of the Markov chain. We apply this result to derive the strong law of large numbers for U-statistics of a Markov chain under conditions which are close to being optimal.
Proceedings. International Symposium on Information Theory, 2005. ISIT 2005., 2005
We develop explicit, general bounds for the probability that the normalized partial sums of a function of a Markov chain on a general alphabet will exceed the steady-state mean of that function by a given amount. Our bounds combine simple information-theoretic ideas with techniques from optimization and some fairly elementary tools from analysis. In one direction, we obtain a general bound for the important class of Doeblin chains; this bound is optimal, in the sense that in the special case of independent and identically distributed random variables it essentially reduces to the classical Hoeffding bound. In another direction, motivated by important problems in simulation, we develop a series of bounds in a form particularly suited to these problems, which apply to the more general class of "geometrically ergodic" Markov chains.
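The i.i.d. reduction mentioned here is easy to make concrete: for n independent samples taking values in [a, b], the classical Hoeffding bound on the probability that the sample mean exceeds the true mean by ε is exp(-2nε²/(b-a)²). A minimal sketch evaluating it (the function name is ours):

```python
import math

def hoeffding_bound(n, eps, a, b):
    """Classical Hoeffding tail bound for i.i.d. variables in [a, b]:
    P(S_n / n - mean >= eps) <= exp(-2 n eps^2 / (b - a)^2)."""
    return math.exp(-2.0 * n * eps**2 / (b - a) ** 2)

# For n = 1000 samples in [0, 1] and deviation eps = 0.05,
# the bound is exp(-5), roughly 0.0067:
print(hoeffding_bound(1000, 0.05, 0.0, 1.0))
```

The Markov-chain bounds discussed in these abstracts degrade this exponent by a factor depending on the chain's mixing properties, recovering the i.i.d. rate in the limit of instantaneous mixing.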
We observe that the technique of Markov contraction can be used to establish measure concentration for a broad class of non-contracting chains. In particular, geometric ergodicity provides a simple and versatile framework. This leads to a short, elementary proof of a general concentration inequality for Markov and hidden Markov chains (HMMs), which supersedes some of the known results and easily extends to other processes such as Markov trees. As applications, we give a Dvoretzky–Kiefer–Wolfowitz-type inequality and a uniform Chernoff bound. All of our bounds are dimension-free and hold for countably infinite state spaces.
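Geometric ergodicity, the framework used here, can be seen concretely in the two-state case: for transition probabilities p (from state 0) and q (from state 1), the total variation distance to stationarity decays geometrically at rate |1 - p - q|. A short numerical sketch (transition probabilities are illustrative):

```python
import numpy as np

p, q = 0.2, 0.5                      # illustrative transition probabilities
P = np.array([[1 - p, p],
              [q, 1 - q]])
pi = np.array([q, p]) / (p + q)      # stationary distribution

def tv_dist(n):
    """Total variation distance to stationarity after n steps,
    started from state 0."""
    row = np.linalg.matrix_power(P, n)[0]
    return 0.5 * np.abs(row - pi).sum()

# Consecutive distances shrink by the factor |1 - p - q| = 0.3:
for n in (1, 2, 3):
    print(tv_dist(n + 1) / tv_dist(n))   # each ratio is approximately 0.3
```

Here the decay rate |1 - p - q| is exactly the modulus of the second eigenvalue of P, which is the quantity controlling the concentration bounds in several of the papers listed above.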
Linear Algebra and its Applications, 1998
In several papers Meyer, singly and with coauthors, established the usefulness of the group generalized inverse in the study and computations of various aspects of Markov chains. Here we are interested in those results which concern bounds on the condition number of the chain and on the error in the computation of the stationary distribution vector. We show that a lemma due to Paz can be used to improve, sometimes by a factor of 2, some of the constants in the bounds obtained in the aforementioned papers.
Electronic Communications in Probability, 2015
We explore a method introduced by Chatterjee and Ledoux in a paper on eigenvalues of principal submatrices. The method provides a tool to prove concentration of measure in cases where there is a Markov chain meeting certain conditions, and where the spectral gap of the chain is known. We provide several additional applications of this method. These applications include results on operator compressions using the Kac walk on SO(n) and a Kac walk coupled to a thermostat, and a concentration of measure result for the length of the longest increasing subsequence of a random walk distributed under the invariant measure for the asymmetric exclusion process.
The Annals of Applied Probability, 2004
We build optimal exponential bounds for the probabilities of large deviations of sums Σ_{k=1}^n f(X_k), where (X_k) is a finite reversible Markov chain and f is an arbitrary bounded function. These bounds depend only on the stationary mean E_π f, the end-points of the support of f, the sample size n, and the second largest eigenvalue λ of the transition matrix.
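The second largest eigenvalue λ that these bounds depend on is directly computable for a finite reversible chain. A small sketch using a symmetric transition matrix, which is reversible with respect to the uniform distribution (the matrix values are illustrative):

```python
import numpy as np

# A symmetric stochastic matrix: reversible w.r.t. the uniform distribution.
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Reversibility makes the eigenvalues real; sort them in decreasing order.
eigs = np.sort(np.linalg.eigvalsh(P))[::-1]

lam = eigs[1]            # second largest eigenvalue (largest is always 1)
print(lam)               # the spectral gap is 1 - lam
```

For this matrix the eigenvalues are 1, 0.3, and 0.1, so λ = 0.3; a smaller λ (larger spectral gap) tightens the exponential bounds described above.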
Linear Algebra and its Applications, 2010
This paper is devoted to perturbation analysis of denumerable Markov chains. Bounds are provided for the deviation between the stationary distribution of the perturbed and nominal chain, where the bounds are given by the weighted supremum norm. In addition, bounds for the perturbed stationary probabilities are established. Furthermore, bounds on the norm of the asymptotic decomposition of the perturbed stationary distribution are provided, where the bounds are expressed in terms of the norm of the ergodicity coefficient, or the norm of a special residual matrix. Refinements of our bounds for Doeblin Markov chains are considered as well. Our results are illustrated with a number of examples.
The Annals of Applied Probability, 2003
Consider the partial sums {S_t} of a real-valued functional F(Φ(t)) of a Markov chain {Φ(t)} with values in a general state space. Assuming only that the Markov chain is geometrically ergodic and that the functional F is bounded, the following conclusions are obtained:
The Annals of Applied Probability, 2014
First, under a geometric ergodicity assumption, we provide limit theorems and probability inequalities for bifurcating Markov chains, introduced by Guyon to detect cellular aging from cell lineage data, thus completing the work of Guyon. These probability inequalities are then applied to derive a first result on the moderate deviation principle for a functional of bifurcating Markov chains with a restricted range of speed, but with a function which can be unbounded. Next, under a uniform geometric ergodicity assumption, we provide deviation inequalities for bifurcating Markov chains and apply them to derive a second result on the moderate deviation principle for bounded functionals of bifurcating Markov chains with a larger range of speed. As statistical applications, we provide superexponential convergence in probability, deviation inequalities (in the Gaussian setting or the bounded setting), and a moderate deviation principle for least squares estimators of the parameters of a first-order bifurcating autoregressive process.
2014
We prove a large deviation principle for a class of empirical processes in C[0, ∞) associated with additive functionals of Markov processes that were shown to have a martingale decomposition representation.
Electronic Journal of Probability, 2002
We prove a self-normalized large deviation principle for sums of Banach-space-valued functions of a Markov chain. Self-normalization applies to situations in which a full large deviation principle is not available. We follow the lead of Dembo and Shao [DemSha98b], who state partial large deviation principles for independent and identically distributed random sequences.
Journal of Mathematical Sciences, 2017
The aim of this paper is to investigate the stability of Markov chains with general state space. We present new conditions for the strong stability of Markov chains after a small perturbation of their transition kernels. Also, we obtain perturbation bounds with respect to different quantities.
Journal of Applied Probability, 2012
In this paper we study the functional central limit theorem (CLT) for stationary Markov chains with a self-adjoint operator and general state space. We investigate the case when the variance of the partial sum is not asymptotically linear in n, and establish that conditional convergence in distribution of partial sums implies the functional CLT. The main tools are maximal inequalities that are further exploited to derive conditions for tightness and convergence to the Brownian motion.
Mathematische Zeitschrift, 2004
Department of Mathematics, University of Illinois, Urbana, IL 61801, USA (e-mail: [email protected]); Department of Mathematics, University of Zagreb, Zagreb, Croatia (e-mail: [email protected]). Received: 16 October 2002; in final form: 16 May 2003.