1986, Linear Algebra and its Applications
Techniques for updating the stationary distribution of a finite irreducible Markov chain following a rank one perturbation of its transition matrix are discussed. A variety of situations where such perturbations may arise are presented together with suitable procedures for the derivation of the related stationary distributions.
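The rank-one updating setting described above can be sketched numerically. The sketch below is not the paper's own derivation; the 3-state chain, the perturbation, and the helper names are illustrative. It updates the stationary distribution of T' = T + a b^T (with b^T 1 = 0, so T' remains stochastic) using the group inverse of I - T:

```python
import numpy as np

def stationary(T):
    """Stationary row vector: solve pi (I - T) = 0 subject to pi 1 = 1."""
    n = T.shape[0]
    A = np.vstack([(np.eye(n) - T).T[:-1], np.ones(n)])
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

def group_inverse(T, pi):
    """Group inverse of A = I - T:  A# = (I - T + 1 pi)^(-1) - 1 pi."""
    n = T.shape[0]
    one_pi = np.outer(np.ones(n), pi)
    return np.linalg.inv(np.eye(n) - T + one_pi) - one_pi

def rank_one_update(pi, A_sharp, a, b):
    """Exact stationary vector of T' = T + outer(a, b), where b @ 1 = 0.

    From pi'(I - T) = pi' a b^T and A# 1 = 0 one gets
    pi' = pi + s * (b A#) with s = (pi a) / (1 - b A# a).
    """
    beta = b @ A_sharp
    s = (pi @ a) / (1.0 - beta @ a)
    return pi + s * beta

T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
pi = stationary(T)
# perturb row 0 only: a = e_0, and b sums to zero so T' stays stochastic
a = np.array([1.0, 0.0, 0.0])
b = np.array([-0.1, 0.05, 0.05])
pi_new = rank_one_update(pi, group_inverse(T, pi), a, b)
```

The update costs only a vector-matrix product once A# is available, which is the point of such procedures when many perturbations of the same chain are examined.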
Proceedings of the 9th …, 2007
We consider the stationary distribution of a finite, irreducible, homogeneous Markov chain. Our aim is to perturb the transition probability matrix, using approximations to find regions of feasibility and optimality for a given basis when the chain is optimized using linear programming. We also explore the application of perturbation bounds and analyze their effects on the construction of optimal policies.
2002
Stationary distributions of perturbed finite irreducible discrete time Markov chains are intimately connected with the behaviour of associated mean first passage times. This interconnection is explored through the use of generalized matrix inverses. Some interesting qualitative results regarding the nature of the relative and absolute changes to the stationary probabilities are obtained together with some improved bounds.
Electronic Journal of Linear Algebra, 2004
For an irreducible stochastic matrix T of order n, a certain condition number κ_j(T) that measures the sensitivity of the j-th entry of the corresponding stationary distribution under perturbation of T is considered. A lower bound on κ_j is produced in terms of the directed graph of T, and the case of equality in that lower bound is characterized. All directed graphs D such that κ_j(T) is bounded from above as T ranges over the set of irreducible stochastic matrices having directed graph D are also characterized. For those D for which κ_j is bounded, a tight upper bound on κ_j is given in terms of information contained in D.
2002
We obtain results on the sensitivity of the invariant measure and other statistical quantities of a Markov chain with respect to perturbations of the transition matrix. We use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
arXiv: Probability, 2014
An algorithm for estimating the quasi-stationary distribution of finite state space Markov chains was established in a previous paper. This paper proves that a similar algorithm works for general state space Markov chains under very general assumptions.
Advances in Applied Probability, 2016
In this paper we provide a perturbation analysis of finite time-inhomogeneous Markov processes. We derive closed-form representations for the derivative of the transition probability at time t, with t > 0. Elaborating on this result, we derive simple gradient estimators for transient performance characteristics, either taken at some fixed point in time t or integrated over a time interval [0, t]. Bounds for transient performance sensitivities are presented as well. Finally, we identify a structural property of the derivative of the generator matrix of a Markov chain that leads to a significant simplification of the estimators.
Journal of Applied Probability, 2008
In this paper we consider discrete-time multidimensional Markov chains having a block transition probability matrix which is the sum of a matrix with repeating block rows and a matrix of upper-Hessenberg, quasi-Toeplitz structure. We derive sufficient conditions for the existence of the stationary distribution, and outline two algorithms for calculating the stationary distribution.
Linear Algebra and its Applications, 2007
Let T ∈ R^{n×n} be an irreducible stochastic matrix with stationary distribution vector π. Set A = I − T, and define the quantity κ_3(T) ≡ (1/2) max_{j=1,…,n} π_j ‖A_j^{−1}‖_∞, where A_j, j = 1, …, n, are the (n−1)×(n−1) principal submatrices of A obtained by deleting the jth row and column of A. Results of Cho and Meyer, and of Kirkland, show that κ_3 provides a sensitive measure of the conditioning of π under perturbation of T. Moreover, it is known that κ_3(T) ≥ (n−1)/(2n). In this paper, we investigate the class of irreducible stochastic matrices T of order n such that κ_3(T) = (n−1)/(2n), for such matrices correspond to Markov chains with desirable conditioning properties. We identify some restrictions on the zero-nonzero patterns of such matrices, and construct several infinite classes of matrices for which κ_3 is as small as possible.
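The quantity κ_3(T) can be computed directly from its definition. A minimal numpy sketch follows; the example matrices are illustrative and not taken from the paper. Note that the two-state flip chain attains the lower bound (n − 1)/(2n) = 1/4:

```python
import numpy as np

def stationary(T):
    """Stationary row vector: solve pi (I - T) = 0 subject to pi 1 = 1."""
    n = T.shape[0]
    A = np.vstack([(np.eye(n) - T).T[:-1], np.ones(n)])
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

def kappa3(T):
    """kappa_3(T) = (1/2) max_j pi_j * ||A_j^{-1}||_inf, with A = I - T."""
    n = T.shape[0]
    pi = stationary(T)
    A = np.eye(n) - T
    vals = []
    for j in range(n):
        keep = [i for i in range(n) if i != j]
        A_j = A[np.ix_(keep, keep)]    # delete row j and column j
        vals.append(pi[j] * np.linalg.norm(np.linalg.inv(A_j), np.inf))
    return 0.5 * max(vals)

# the two-state flip chain attains kappa_3 = (n - 1) / (2n) = 1/4
flip = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
k = kappa3(flip)
```

For larger chains the same loop verifies numerically that κ_3 never falls below (n − 1)/(2n).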
Stochastic Models, 2010
This paper studies the light-tailed asymptotics of the stationary tail probability vectors of a Markov chain of M/G/1 type. Almost all related studies have focused on the typical case, where the transition block matrices in the non-boundary levels have a dominant impact on the decay rate of the stationary tail probability vectors and their decay is aperiodic. In this paper, we study not only the typical case but also atypical cases such that the stationary tail probability vectors decay periodically and/or their decay rate is determined by the tail distribution of jump sizes from the boundary level. We derive light-tailed asymptotic formulae for the stationary tail probability vectors by locating the dominant poles of the generating function of the sequence of those vectors. Further we discuss the positivity of the dominant terms of the obtained asymptotic formulae.
2002
In an earlier paper (Hunter, 2002) it was shown that mean first passage times play an important role in determining bounds on the relative and absolute differences between the stationary probabilities in perturbed finite irreducible discrete time Markov chains. Further, when two perturbations of the transition probabilities in a single row are carried out, the differences between the stationary probabilities in the unperturbed and perturbed situations are easily expressed in terms of a reduced number of mean first passage times. Using this procedure we provide an updating procedure for mean first passage times to determine changes in the stationary distributions under successive perturbations. Simple procedures for determining both stationary distributions and mean first passage times in a finite irreducible Markov chain are also given. The techniques used in the paper are based upon the application of generalized matrix inverses.
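One standard route to mean first passage times via a generalized inverse (not necessarily the exact procedure of the paper) uses the fundamental matrix Z = (I − T + 1π)^{-1}: m_ij = (z_jj − z_ij)/π_j for i ≠ j, and the mean return time m_jj = 1/π_j. A sketch with an illustrative two-state chain whose closed forms are known:

```python
import numpy as np

def stationary(T):
    """Stationary row vector: solve pi (I - T) = 0 subject to pi 1 = 1."""
    n = T.shape[0]
    A = np.vstack([(np.eye(n) - T).T[:-1], np.ones(n)])
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

def mean_first_passage(T):
    """M[i, j] = (z_jj - z_ij) / pi_j for i != j; M[j, j] = 1 / pi_j."""
    n = T.shape[0]
    pi = stationary(T)
    Z = np.linalg.inv(np.eye(n) - T + np.outer(np.ones(n), pi))
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]
    np.fill_diagonal(M, 1.0 / pi)
    return M

# two-state chain with hold probabilities 1-a and 1-b:
# known closed forms m_12 = 1/a, m_21 = 1/b, m_jj = 1/pi_j
a_p, b_p = 0.3, 0.2
T2 = np.array([[1 - a_p, a_p],
               [b_p, 1 - b_p]])
M2 = mean_first_passage(T2)
```

The two-state case makes the formula easy to sanity-check against first principles before applying it to larger chains.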
Advances in Applied Probability, 1999
We consider a singularly perturbed (finite state) Markov chain and provide a complete characterization of its fundamental matrix. In particular, we obtain a formula for the regular part that is simpler than an earlier formula of Schweitzer, and the singular part is obtained via a reduction process similar to Delebecque's reduction for the stationary distribution. In contrast to previous approaches, we work with aggregate Markov chains of much smaller dimension than the original chain, an essential feature for practical computation. An application to mean first-passage times is also presented.
Linear Algebra and its Applications, 2016
Computational procedures for the stationary probability distribution, the group inverse of the Markovian kernel and the mean first passage times of a finite irreducible Markov chain are developed using perturbations. The derivation of these expressions involves the solution of systems of linear equations and, structurally, inevitably involves the inverses of matrices. By using a perturbation technique, starting from a simple base where no such derivations are formally required, we update a sequence of matrices, formed by linking the solution procedures via generalized matrix inverses and utilising matrix and vector multiplications. Four different algorithms are given, some modifications are discussed, and numerical comparisons made using a test example. The derivations are based upon the ideas outlined in Hunter,
Electronic Journal of Linear Algebra, 2003
Let T be an irreducible stochastic matrix with stationary vector π^T. The conditioning of π^T under perturbation of T is discussed by providing an attainable upper bound on the absolute value of the derivative of each entry in π^T with respect to a given perturbation matrix. Connections are made with an existing condition number for π^T, and the results are applied to the class of Markov chains arising from a random walk on a tree.
Perturbation analysis of Markov chains provides bounds on the effect that a change in a Markov transition matrix has on the corresponding stationary distribution. This paper compares and analyzes bounds found in the literature for finite and denumerable Markov chains and introduces new bounds based on series expansions. We discuss a series of examples to illustrate the applicability and numerical efficiency of the various bounds. Specifically, we address the question on how the bounds developed for finite Markov chains behave as the size of the system grows. In addition, we provide for the first time an analysis of the relative error of these bounds. For the case of a scaled perturbation we show that perturbation bounds can be used to analyze stability of a stable Markov chain with respect to perturbation with an unstable chain.
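The series-expansion idea can be illustrated for a finite chain. With A# the group inverse of I − T and Δ a zero-row-sum perturbation, the identity π'(I − T) = π'Δ yields π' = π(I − ΔA#)^{-1} = π Σ_k (ΔA#)^k, so truncating the series gives successively better approximations. The chain and perturbation below are illustrative, not taken from the paper:

```python
import numpy as np

def stationary(T):
    """Stationary row vector: solve pi (I - T) = 0 subject to pi 1 = 1."""
    n = T.shape[0]
    A = np.vstack([(np.eye(n) - T).T[:-1], np.ones(n)])
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

def group_inverse(T, pi):
    """Group inverse of A = I - T:  A# = (I - T + 1 pi)^(-1) - 1 pi."""
    n = T.shape[0]
    one_pi = np.outer(np.ones(n), pi)
    return np.linalg.inv(np.eye(n) - T + one_pi) - one_pi

def perturbed_series(pi, A_sharp, Delta, order):
    """Truncation of pi' = pi * sum_{k >= 0} (Delta A#)^k after `order` terms."""
    M = Delta @ A_sharp
    out = pi.copy()
    term = pi.copy()
    for _ in range(order):
        term = term @ M
        out = out + term
    return out

T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
Delta = np.array([[-0.05, 0.05, 0.00],
                  [0.00, 0.00, 0.00],
                  [0.02, 0.00, -0.02]])   # rows sum to zero: T + Delta is stochastic
pi = stationary(T)
A_sharp = group_inverse(T, pi)
exact = stationary(T + Delta)
err1 = np.abs(perturbed_series(pi, A_sharp, Delta, 1) - exact).sum()
err5 = np.abs(perturbed_series(pi, A_sharp, Delta, 5) - exact).sum()
```

The first-order truncation is the familiar estimate π' ≈ π + πΔA#, and the error shrinks geometrically with the truncation order when ΔA# is a contraction.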
2011
Questions are posed regarding the influence that the column sums of the transition probabilities of a stochastic matrix (with row sums all one) have on the stationary distribution, the mean first passage times and the Kemeny constant of the associated irreducible discrete time Markov chain. Some new relationships, including some inequalities, and partial answers to the questions, are given using a special generalized matrix inverse that has not previously been considered in the literature on Markov chains.
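The Kemeny constant referred to above can be computed from the fundamental matrix as K = trace(Z) − 1, where Z = (I − T + 1π)^{-1}, and its defining property, that Σ_{j≠i} π_j m_ij takes the same value for every starting state i, checked numerically. The 3-state chain is illustrative:

```python
import numpy as np

def stationary(T):
    """Stationary row vector: solve pi (I - T) = 0 subject to pi 1 = 1."""
    n = T.shape[0]
    A = np.vstack([(np.eye(n) - T).T[:-1], np.ones(n)])
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
n = T.shape[0]
pi = stationary(T)
Z = np.linalg.inv(np.eye(n) - T + np.outer(np.ones(n), pi))
K = np.trace(Z) - 1.0          # Kemeny constant

# mean first passage times with zero diagonal: m_ij = (z_jj - z_ij) / pi_j
M = (np.diag(Z)[None, :] - Z) / pi[None, :]
# sum_{j != i} pi_j * m_ij should equal K for every starting state i
row_vals = M @ pi
```

The constancy of row_vals follows from trace(Z) − Σ_j z_ij with Z having unit row sums, which is why K is independent of the starting state.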
INFOR: Information Systems and Operational Research, 1997
In this paper, based on probabilistic arguments, we obtain an explicit solution of the stationary distribution for a discrete time Markov chain with an upper Hessenberg time stationary transition probability matrix. Our solution then leads to a numerically stable and efficient algorithm for computing stationary probabilities. Two other expressions for the stationary distribution are also derived, which lead to two alternative algorithms. Numerical analysis of the algorithms is given, which shows the reliability and efficiency of the algorithms. Examples of applications are provided, including results for a discrete time state dependent batch arrival queueing model. The idea used in this paper can be generalized to deal with Markov chains with a more general structure.
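The structural fact behind such algorithms can be sketched for a finite upper Hessenberg chain: since P[i, j] = 0 whenever i > j + 1, the balance equation for column j involves only states 0, …, j + 1, so an unnormalized stationary vector follows by forward recursion. This is a minimal sketch, not the paper's own algorithm, and the 4-state matrix is illustrative:

```python
import numpy as np

def stationary_hessenberg(P):
    """Stationary vector of an irreducible upper Hessenberg stochastic P.

    Balance for column j:  x[j+1] * P[j+1, j] = x[j] - sum_{i <= j} x[i] P[i, j].
    Irreducibility forces P[j+1, j] > 0, so the recursion is well defined.
    """
    n = P.shape[0]
    x = np.zeros(n)
    x[0] = 1.0
    for j in range(n - 1):
        x[j + 1] = (x[j] - x[:j + 1] @ P[:j + 1, j]) / P[j + 1, j]
    return x / x.sum()

P = np.array([[0.3, 0.3, 0.2, 0.2],
              [0.4, 0.3, 0.2, 0.1],
              [0.0, 0.5, 0.3, 0.2],
              [0.0, 0.0, 0.6, 0.4]])
pi = stationary_hessenberg(P)
```

The recursion costs O(n^2) operations and never forms a matrix inverse, which is the source of the numerical stability claims for algorithms of this type.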
arXiv (Cornell University), 2014
We consider a class of discrete time Markov chains with state space [0, 1] and the following dynamics. At each time step, first the direction of the next transition is chosen at random with probability depending on the current location. Then the length of the jump is chosen independently as a random proportion of the distance to the respective end point of the unit interval, the distributions of the proportions being fixed for each of the two directions. Chains of this kind have been the subject of a number of studies and are of interest for some applications. Under simple broad conditions, we establish the ergodicity of such Markov chains and then derive closed form expressions for the stationary densities of the chains when the proportions are beta distributed with the first parameter equal to 1. Examples demonstrating the range of stationary distributions for processes described by this model are given, and an application to a robot coverage algorithm is discussed.
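The dynamics described can be simulated directly. A minimal sketch, assuming a constant right-move probability p(x) = 1/2 and Beta(1, b) jump proportions; with b = 1 both proportions are Uniform(0, 1), so the stationary law is symmetric about 1/2 (illustrative parameters, not the paper's):

```python
import numpy as np

def simulate(steps, right_prob, beta_b_right, beta_b_left, seed=0):
    """Simulate the [0, 1]-valued chain:
    move right:  x' = x + V * (1 - x),  V ~ Beta(1, beta_b_right)
    move left:   x' = x - V * x,        V ~ Beta(1, beta_b_left)
    """
    rng = np.random.default_rng(seed)
    x = 0.5
    path = np.empty(steps)
    for t in range(steps):
        if rng.random() < right_prob:
            v = rng.beta(1.0, beta_b_right)
            x = x + v * (1.0 - x)
        else:
            v = rng.beta(1.0, beta_b_left)
            x = x - v * x
        path[t] = x
    return path

# symmetric uniform-proportion case: empirical mean should be near 1/2
path = simulate(100_000, 0.5, 1.0, 1.0)
```

A histogram of `path` gives an empirical view of the stationary density that the paper derives in closed form for the Beta(1, ·) case.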
Advances in Applied Probability, 2005
In this paper, we consider the asymptotic behavior of stationary probability vectors of Markov chains of GI/G/1 type. The generating function of the stationary probability vector is explicitly expressed by the R-measure. This expression of the generating function is more convenient for asymptotic analysis than those in the literature. The RG-factorization of both the repeating row and the Wiener-Hopf equations for the boundary row are used to provide the necessary spectral properties. The stationary probability vector of a Markov chain of GI/G/1 type is shown to be light tailed if the blocks of the repeating row and the blocks of the boundary row are light tailed. We derive two classes of explicit expressions for the asymptotic behavior, the geometric tail and the semigeometric tail, based on the repeating row, the boundary row, or the minimal positive solution of a crucial equation involved in the generating function, and discuss the singularity classes of the stationary probabilit...
Advances in Applied Probability, 2005
In this paper, we provide a novel approach to studying the heavy-tailed asymptotics of the stationary probability vector of a Markov chain of GI/G/1 type, whose transition matrix is constructed from two matrix sequences referred to as a boundary matrix sequence and a repeating matrix sequence, respectively. We first provide a necessary and sufficient condition under which the stationary probability vector is heavy tailed. Then we derive the long-tailed asymptotics of the R-measure in terms of the RG-factorization of the repeating matrix sequence, and a Wiener-Hopf equation for the boundary matrix sequence. Based on this, we are able to provide a detailed analysis of the subexponential asymptotics of the stationary probability vector.