2013, Mathematical Problems in Engineering
This paper presents a class of new accelerated restarted GMRES methods for computing the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method, showing how to periodically combine GMRES and a vector extrapolation method into a more efficient scheme that improves the convergence rate on Markov chain problems. Numerical experiments demonstrate the efficiency of the new algorithm on several typical Markov chain problems.
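As a rough illustration of the extrapolation half of such a hybrid (not the paper's exact algorithm), the sketch below alternates power steps with minimal polynomial extrapolation (MPE) over a window of iterates; the function names and the window size are our own choices.

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation from iterates x_0..x_k (columns of X)."""
    U = np.diff(X, axis=1)                        # differences u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()                          # coefficients summing to one
    return X[:, :gamma.size] @ gamma

def extrapolated_power(P, window=8, cycles=50):
    """Alternate power iterations with an MPE extrapolation step."""
    x = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(cycles):
        X = [x]
        for _ in range(window):
            x = P.T @ x                           # one power step: x <- P^T x
            X.append(x)
        x = mpe(np.column_stack(X))
        x = np.abs(x) / np.abs(x).sum()           # guard against tiny negatives, renormalize
    return x
```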
Proceedings of the International Multiconference on Computer Science and Information Technology, 2010
The authors consider the impact of matrix structure on the convergence behavior of the GMRES projection method for solving the large sparse linear systems that arise in Markov chain modeling. Studying experimental results, we investigate the number of steps and the rate of convergence of the GMRES method and of GMRES with IWZ preconditioning. The motivation is to better understand the convergence characteristics of Krylov subspace methods and the relationship between the Markov model, the nonzero structure of the associated coefficient matrix, and the convergence of the preconditioned GMRES method.
2011 Federated Conference on Computer Science and Information Systems (FedCSIS), 2011
This paper reviews and compares preconditioners based on incomplete factorizations of matrices describing Markov chains. Three preconditioners are considered: ILU(0), ILU3, and IWZ(0). The first two (ILU(0), ILU3) are based on the LU factorization; the latter (IWZ(0)) on the WZ factorization. The preconditioners are investigated with respect to their ability to decrease the number of iterations in a projection method, namely GMRES(m). To choose the best preconditioner for such methods, the authors introduce a measure called iteration speed-up (p) and some of its relatives, and define a function (Is) giving the average number of restarts needed to achieve a given accuracy for matrices from a given set. These measures are studied for two different families of matrices describing Markov chains to compare the influence of the examined incomplete preconditioners on GMRES(m).
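A minimal sketch of the setup such preconditioners plug into, assuming P is a SciPy sparse matrix and using SciPy's ILUT-type spilu as a stand-in for ILU(0)/ILU3/IWZ(0), which SciPy does not provide: the singular system is regularized by replacing one equation with the normalization, then GMRES(m) runs with the incomplete factorization as preconditioner.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

def stationary_gmres_ilu(P, m=30):
    """Preconditioned GMRES(m) for the stationary vector of sparse transition matrix P."""
    n = P.shape[0]
    A = (sp.identity(n) - P.T).tolil()
    A[-1, :] = 1.0                      # replace the last equation by sum(pi) = 1
    A = A.tocsc()
    b = np.zeros(n); b[-1] = 1.0
    ilu = spilu(A, drop_tol=1e-4)       # incomplete LU, a stand-in for ILU(0)
    M = LinearOperator((n, n), matvec=ilu.solve)
    pi, info = gmres(A, b, M=M, restart=m)
    return pi / pi.sum()
```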
Applied Mathematics and Computation, 2015
In this paper, we develop new methods for approximating the dominant eigenvector of column-stochastic matrices. We analyze the Google matrix and present an averaging scheme with a linear rate of convergence in terms of the 1-norm distance. To extend this convergence result to the general case, we assume the existence of a positive row in the matrix. Our new numerical scheme, the Reduced Power Method (RPM), can be seen as a proper averaging of the power iterates of a reduced stochastic matrix. We also analyze the usual Power Method (PM) and obtain convenient conditions for its linear rate of convergence with respect to the 1-norm.
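For reference, a bare-bones power method of the kind analyzed here, tracking the 1-norm distance between successive iterates (our own illustration, not the paper's RPM):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=10000):
    """Power iteration for the dominant eigenvector of a column-stochastic A."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)             # start from the uniform distribution
    for _ in range(max_iter):
        y = A @ x                       # columns of A sum to one, so sum(y) == 1
        if np.abs(y - x).sum() < tol:   # 1-norm distance between iterates
            return y
        x = y
    return x
```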
Special Matrices, 2016
This article describes an accurate procedure for computing the mean first passage times of a finite irreducible Markov chain and a Markov renewal process. The method is a refinement of the procedure of Kohlas (Zeit. fur Oper. Res., 30, 197–207, 1986). The technique is numerically stable in that it does not involve subtractions. Algebraic expressions for the special cases of one, two, three, and four states are derived. A consequence of the procedure is that the stationary distribution of the embedded Markov chain need not be derived in advance but can be found accurately from the computed mean first passage times. MatLab is utilized to carry out the computations, using some test problems from the literature.
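The subtraction-free flavor described here is in the spirit of the classical GTH algorithm; a compact sketch for the stationary distribution (our own illustration, not the paper's refined procedure):

```python
import numpy as np

def gth(P):
    """GTH algorithm: stationary vector of irreducible P without subtractions."""
    A = np.array(P, dtype=float)
    n = A.shape[0]
    for k in range(n - 1, 0, -1):       # fold state k into the remaining chain
        s = A[k, :k].sum()
        A[:k, k] /= s
        A[:k, :k] += np.outer(A[:k, k], A[k, :k])
    x = np.zeros(n)
    x[0] = 1.0
    for k in range(1, n):               # back substitution
        x[k] = x[:k] @ A[:k, k]
    return x / x.sum()
```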
FUDMA JOURNAL OF SCIENCES
The evolution of this model is represented by transitions from one state to the next. The physical or mathematical behavior of the system can also be illustrated by identifying all of its possible states and explaining how it transitions between them. We investigate iterative solution approaches for the stationary distribution of Markov chains, which begin with an initial estimate of the solution vector and move closer to the true solution with each iteration. Our goal is to compute the stationary distribution of a Markov chain using the power iterative method, which leaves the transition matrices unchanged and saves time, while accounting for the discretization effect and convergence. Matrix operations such as multiplication with one or more vectors, along with the lower, diagonal, and upper parts of a matrix, are applied with the help of several existing Markov chain laws, theorems, formulas, and the normalization principle. For the illustrative exampl...
Numerical Linear Algebra with Applications, 2011
This special issue contains a selection of papers from the Sixth International Workshop on the Numerical Solution of Markov Chains, held in Williamsburg, Virginia on September 16-17, 2010. The papers cover a broad range of topics including perturbation theory for absorbing chains, bounding techniques, steady-state and transient solution methods, multilevel algorithms, preconditioning, and applications.
Nig. J. Pure & Appl. Sci. Faculty of Physical Sciences and Faculty of Life Sciences, Univ. of Ilorin, Nigeria, 2022
The evolution of a system is represented by transitions from one state to the next, and the system's physical or mathematical behavior can also be depicted by defining all of the states it can occupy and demonstrating how it moves between them. In this study, iterative solution methods for the stationary distribution of Markov chains were investigated, which start with an initial estimate of the solution vector and then alter it so that it gets closer and closer to the true solution with each step or iteration. These methods involve matrix operations such as multiplication with one or more vectors, which leaves the transition matrices unchanged and saves time. Our goal is to use the Successive Overrelaxation (SOR) and block numerical iterative solution methods to compute the solutions. With the help of some existing Markov chain laws, theorems, and formulas, the normalization principle and matrix operations such as lower, upper, and diagonal matrices are used. The stationary distribution vector is π = (0.2, 0.2, 0.2, 0.2, 0.2), and it was observed that all subsequent iterations yield exactly the same result as π, which shows that the block iterative method requires only a single iteration to obtain the solution to full machine precision.
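A minimal dense sketch of the SOR part of such a study (the names and the relaxation factor are illustrative), solving (I - P)^T x = 0 with re-normalization after each sweep:

```python
import numpy as np

def sor_stationary(P, omega=1.2, tol=1e-12, max_sweeps=1000):
    """SOR sweeps on (I - P)^T x = 0, normalized to a probability vector."""
    A = np.eye(P.shape[0]) - P.T
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] - omega * sigma / A[i, i]
        x /= x.sum()                    # keep the iterate a probability vector
        if np.abs(x - x_old).sum() < tol:
            break
    return x
```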
Numerical Linear Algebra with Applications, 2011
This paper describes multilevel methods for the calculation of the stationary probability vector of large, sparse, irreducible Markov chains. In particular, several recently proposed significant improvements to the multilevel aggregation method of Horton and Leutenegger are described and compared. Furthermore, we propose a very simple improvement of that method using an over-correction mechanism. We also compare with more traditional iterative methods for Markov chains such as weighted Jacobi, two-level aggregation/disaggregation, and preconditioned stabilized biconjugate gradient and generalized minimal residual method. Numerical experiments confirm that our improvements lead to significant speedup, and result in multilevel methods that are competitive with leading iterative solvers for Markov chains.
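For readers unfamiliar with the two-level aggregation/disaggregation baseline mentioned here, a toy sweep might look as follows (the grouping and the single power-step smoother are our own simplifications):

```python
import numpy as np

def coarse_stationary(C):
    """Direct solve for the stationary vector of a small stochastic matrix C."""
    G = C.shape[0]
    M = (np.eye(G) - C).T
    M[-1, :] = 1.0                      # normalization replaces the last equation
    b = np.zeros(G); b[-1] = 1.0
    return np.linalg.solve(M, b)

def iad_sweep(P, pi, groups):
    """One aggregation/disaggregation sweep for stationary estimate pi;
    groups is a list of index arrays partitioning the states."""
    C = np.zeros((len(groups), len(groups)))
    for a, Ia in enumerate(groups):     # aggregate with current weights
        w = pi[Ia] / pi[Ia].sum()
        for b_, Ib in enumerate(groups):
            C[a, b_] = w @ P[np.ix_(Ia, Ib)].sum(axis=1)
    xi = coarse_stationary(C)
    new = pi.copy()
    for a, Ia in enumerate(groups):     # disaggregate the coarse solution
        new[Ia] = xi[a] * pi[Ia] / pi[Ia].sum()
    new = new @ P                       # one power step as a smoother
    return new / new.sum()
```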
Computations with Markov Chains, 1995
We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
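A sketch of the worklist idea (our reconstruction; the threshold and data structures are illustrative): a state is re-enqueued only when an update to a value its equation depends on is large enough to matter.

```python
import numpy as np
from collections import deque

def adaptive_gauss_seidel(P, eta=1e-12):
    """Worklist-driven Gauss-Seidel for pi (I - P) = 0, with A = (I - P)^T."""
    A = np.eye(P.shape[0]) - P.T
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    work = deque(range(n))
    queued = np.ones(n, dtype=bool)
    while work:
        i = work.popleft(); queued[i] = False
        new = -(A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        changed = abs(new - x[i]) > eta
        x[i] = new
        if changed:                     # wake up equations that depend on x[i]
            for j in np.nonzero(A[:, i])[0]:
                if j != i and not queued[j]:
                    work.append(j); queued[j] = True
    return x / x.sum()
```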
Dutse Journal of Pure and Applied Sciences
The physical or mathematical behaviour of this model may be represented by describing all the different states it may occupy and by indicating how it moves among these states. In this study, the stationary distribution of Markov chains was solved using iterative methods that begin with an initial estimate of the solution vector and then modify it in a way that brings it closer and closer to the real solution with each step or iteration. These methods also involve matrix operations such as multiplication with one or more vectors, which preserves the transition matrices while speeding up the process. We computed the solutions using the Jacobi and Gauss-Seidel iterative methods in order to shed more light on the solutions of the stationary distribution of a Markov chain. This was done with the aid of several existing laws, theorems, and formulas of Markov chains and the application of the normalization principle and matrix operations such as lower, upper, and diagonal matrices...
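A minimal Jacobi counterpart to the SOR sketch above (Gauss-Seidel differs only in using already-updated entries within a sweep; it is the omega = 1 case of the SOR sketch):

```python
import numpy as np

def jacobi_stationary(P, tol=1e-12, max_sweeps=100000):
    """Jacobi sweeps on (I - P)^T x = 0, normalized each sweep."""
    A = np.eye(P.shape[0]) - P.T
    D = np.diag(A)
    R = A - np.diag(D)                  # off-diagonal part of A
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(max_sweeps):
        y = -(R @ x) / D                # x_i <- -sum_{j != i} a_ij x_j / a_ii
        y /= y.sum()
        if np.abs(y - x).sum() < tol:
            return y
        x = y
    return x
```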
SIAM Journal on Scientific Computing, 2011
This work concerns the development of an Algebraic Multilevel method for computing stationary vectors of Markov chains. We present an efficient Bootstrap Algebraic Multilevel method for this task. In our proposed approach, we employ a multilevel eigensolver, with interpolation built using ideas based on compatible relaxation, algebraic distances, and least squares fitting of test vectors. Our adaptive variational strategy for computation of the state vector of a given Markov chain is then a combination of this multilevel eigensolver and associated multilevel preconditioned GMRES iterations. We show that the Bootstrap AMG eigensolver by itself can efficiently compute accurate approximations to the state vector. An additional benefit of the Bootstrap approach is that it yields an accurate interpolation operator for many other eigenmodes. This in turn allows for the use of the resulting AMG hierarchy to accelerate the MLE steps using standard multigrid correction steps. Further, we mention that our method, unlike other existing multilevel methods for Markov Chains, does not employ any special processing of the coarse-level systems to ensure that stochastic properties of the fine-level system are maintained there. The proposed approach is applied to a range of test problems, involving non-symmetric stochastic M-matrices, showing promising results for all problems considered.
Linear Algebra and its Applications
A survey of a variety of computational procedures for finding the mean first passage times in Markov chains is presented. The author recently developed a new accurate computational technique, an Extended GTH Procedure (Hunter, Special Matrices, 2016), similar to that developed by Kohlas (Zeit. fur Oper. Res., 1986). In addition, the author has recently developed a variety of new perturbation techniques for finding key properties of Markov chains, including the mean first passage times (Hunter, Linear Algebra and its Applications, 2016). These recently developed procedures are compared with other procedures, including the standard matrix inversion technique using the fundamental matrix (Kemeny and Snell, 1960), some simple generalized matrix inverse techniques developed by Hunter (Asia Pacific J. Oper. Res., 2007), and the FUND technique (with some modifications) of Heyman (SIAM J. Matrix Anal. and Appl., 1995). MatLab is used to compute errors when the techniques are applied to some test problems from the literature. The results exhibit a preference for the author's accurate procedure.
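The fundamental-matrix baseline of Kemeny and Snell that these procedures are compared against can be sketched as follows, using m_ij = (z_jj - z_ij)/pi_j off the diagonal and the mean return times 1/pi_j on it (our own illustration):

```python
import numpy as np

def mfpt_fundamental(P):
    """Mean first passage times via the Kemeny-Snell fundamental matrix."""
    n = P.shape[0]
    A = (np.eye(n) - P).T
    A[-1, :] = 1.0                      # normalization row for the stationary solve
    b = np.zeros(n); b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]
    np.fill_diagonal(M, 1.0 / pi)       # diagonal holds mean return times
    return M
```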
2009
In this paper we propose a functional description of Markov chains (MCs) using recursive stochastic equations and factor distributions instead of the state transition matrix P. This new modeling method is very intuitive as it separates the functional behavior of the system under study from probabilistic factors. We present the "forward algorithm" to calculate consecutive state distributions x_n. It is numerically equivalent to the well-known vector-matrix multiplication method x_{n+1} = x_n · P, but it can be faster and require less memory. We compare the operation and efficiency of the power method and MC simulation. Then, we propose several optimization techniques to speed up the computation of the stationary state distribution based on consecutive state distributions, to accelerate the forward algorithm, and to save its memory requirements. The presented concept has been implemented in a tool including all optimization methods. To make this paper easily accessible to novices, a tuto...
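A tiny sketch of the functional idea in our own notation: the next distribution is accumulated by pushing probability through a transition function f(state, w) whose random factor w has distribution q, which is numerically the same as x_{n+1} = x_n · P without ever forming P.

```python
def forward_step(x, f, q):
    """One forward step: x maps states to probabilities, q maps factor
    values to probabilities, and f(state, w) is the transition function."""
    y = {}
    for s, ps in x.items():
        for w, pw in q.items():
            t = f(s, w)
            y[t] = y.get(t, 0.0) + ps * pw
    return y

# Example: a random walk on {0, 1, 2, 3} reflected at the ends.
step = lambda s, w: min(max(s + w, 0), 3)
x = {0: 1.0}
for _ in range(5):
    x = forward_step(x, step, {-1: 0.5, +1: 0.5})
```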
Journal of Scientific Computing, 2022
Discrete-time discrete-state finite Markov chains are versatile mathematical models for a wide range of real-life stochastic processes. One of the most common tasks in studies of Markov chains is computation of the stationary distribution. Without loss of generality, and drawing our motivation from applications to large networks, we interpret this problem as one of computing the stationary distribution of a random walk on a graph. We propose a new controlled, easily distributed algorithm for this task, briefly summarized as follows: at the beginning, each node receives a fixed amount of cash (positive or negative), and at each iteration, some nodes receive a 'green light' to distribute their wealth or debt proportionally to the transition probabilities of the Markov chain; the stationary probability of a node is computed as the ratio of the cash distributed by this node to the total cash distributed by all nodes together. Our method includes as special cases a wide range of known, very different, and previously disconnected methods, including power iterations, Gauss-Southwell, and online distributed algorithms. We prove exponential convergence of our method, demonstrate its high efficiency, and derive scheduling strategies for the green light that achieve a convergence rate faster than state-of-the-art algorithms.
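A round-robin instance of the scheme described above, one of many admissible green-light schedules, can be sketched as:

```python
import numpy as np

def cash_distribution(P, sweeps=500):
    """Estimate the stationary vector by repeatedly redistributing cash."""
    n = P.shape[0]
    cash = np.full(n, 1.0 / n)          # initial cash at each node
    total = np.zeros(n)                 # total cash ever distributed by each node
    for _ in range(sweeps):
        for i in range(n):              # round-robin green light
            c = cash[i]
            total[i] += c
            cash[i] = 0.0
            cash += c * P[i]            # push cash along transition probabilities
    return total / total.sum()
```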
Publication interne n° 1038 | July 1996 | 15 pages. Abstract: Evaluation studies of computer systems often deal with the analysis of random phenomena that arise when contention for system resources implies an overall behavior that is not predictable in a deterministic fashion. In these cases, the statistical regularities that are nevertheless present allow the construction of a probabilistic model of the observed system. In this paper we address a performance comparison between two stable approaches for computing some steady-state measures for Markov chains. Both approaches prove particularly suitable when the infinitesimal generator of the Markov chain is ill-conditioned. Our analysis is carried out by means of a few numerical case studies, dealing with different structures and different dimensions of the infinitesimal generators themselves.
Applied Numerical Mathematics, 2002
We consider the parallel computation of the stationary probability distribution vector of ergodic Markov chains with large state spaces by preconditioned Krylov subspace methods. The parallel preconditioner is obtained as an explicit approximation, in factorized form, of a particular generalized inverse of the generator matrix of the Markov process. Graph partitioning is used to parallelize the whole algorithm, resulting in a two-level method.
This article identifies and assesses the effectiveness of two different approaches to solving the systems of linear equations that arise when modeling computer networks and systems with Markov chains. The paper considers both a hybrid of direct methods and a classic iterative method. Two varieties of Gaussian elimination are considered as examples of direct methods: the LU factorization method and the WZ factorization method. The Gauss-Seidel iterative method is also discussed. The approach relies on preconditioning and division of the matrix into blocks, where the blocks are solved by direct methods. The paper presents the impact of the combined methods on both the time and the accuracy of determining the probability vector for particular networks and computer systems.
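The direct-method baseline (sketched here with SciPy's LU; the WZ variant is analogous but not available in SciPy) reduces to one factorization of the normalized system:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def stationary_direct_lu(P):
    """Direct LU solve of (I - P)^T x = 0 with the last equation
    replaced by the normalization sum(x) = 1."""
    n = P.shape[0]
    A = (np.eye(n) - P).T
    A[-1, :] = 1.0
    b = np.zeros(n); b[-1] = 1.0
    return lu_solve(lu_factor(A), b)
```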
2006
In Bertail & Clemencon (2005a) a novel methodology for bootstrapping general Harris Markov chains has been proposed, which crucially exploits their renewal properties (when eventually extended via the Nummelin splitting technique) and has theoretical properties that surpass other existing methods within the Markovian framework (moving block bootstrap, sieve bootstrap, etc.). This paper is devoted to discussing practical issues related to the implementation of this specific resampling method and to presenting various simulation studies investigating its performance and comparing it to other bootstrap resampling schemes standing as natural candidates in the Markov setting.
Computational Statistics & Data Analysis, 2008
In Bertail & Clémençon (2005a) a novel methodology for bootstrapping general Harris Markov chains has been proposed, which crucially exploits their renewal properties (when eventually extended via the Nummelin splitting technique) and has theoretical properties that surpass other existing methods within the Markovian framework (moving block bootstrap, sieve bootstrap, etc.). This paper is devoted to discussing practical issues related to the implementation of this specific resampling method and to presenting various simulation studies for investigating its finite-sample performance and comparing it to other bootstrap resampling schemes standing as natural candidates in the Markov setting.
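For a finite chain, where every state is an atom, the regeneration-based resampling idea can be sketched without the Nummelin splitting step (function and argument names are ours):

```python
import numpy as np

def regenerative_bootstrap(path, atom, n_boot, seed=None):
    """Resample regeneration blocks of an observed trajectory, where a
    block is the segment between consecutive visits to `atom`."""
    rng = np.random.default_rng(seed)
    hits = [t for t, s in enumerate(path) if s == atom]
    blocks = [path[hits[i]:hits[i + 1]] for i in range(len(hits) - 1)]
    samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(blocks), size=len(blocks))
        samples.append([s for i in idx for s in blocks[i]])
    return samples
```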
Stochastic Models, 2012
In this paper we revisit Newton's iteration as a method to find the G or R matrix in M/G/1-type and GI/M/1-type Markov chains. We start by reconsidering the method proposed in [14], which required O(m^6 + N m^4) time per iteration, and show that it can be reduced to O(N m^4), where m is the block size and N the number of blocks. Moreover, we show how this method can further reduce the time complexity to O(N r^3 + N m^2 r^2 + m^3 r) when A_0 has rank r < m. In addition, we consider the case where [A_1 A_2 ... A_N] is of rank r < m and propose a new Newton's iteration method which is proven to converge quadratically and has a time complexity of O(N m^3 + N m^2 r^2 + m r^3) per iteration. The computational gains in all cases are illustrated through numerical examples.
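For context, G is the minimal nonnegative solution of G = sum_{i=0}^{N} A_i G^i; the simple, linearly convergent functional iteration that Newton's method accelerates looks like this (our sketch, not the paper's algorithm):

```python
import numpy as np

def functional_iteration_G(A, tol=1e-12, max_iter=100000):
    """Fixed-point iteration G <- sum_i A[i] @ G^i for M/G/1-type chains.
    A is the list [A0, A1, ..., AN] of m x m nonnegative blocks."""
    m = A[0].shape[0]
    G = np.zeros((m, m))
    for _ in range(max_iter):
        Gk = np.zeros((m, m))
        power = np.eye(m)               # G^0
        for Ai in A:
            Gk += Ai @ power
            power = power @ G           # next power of the current G
        if np.abs(Gk - G).max() < tol:
            return Gk
        G = Gk
    return G
```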