2002, Applied Numerical Mathematics
We consider the parallel computation of the stationary probability distribution vector of ergodic Markov chains with large state spaces by preconditioned Krylov subspace methods. The parallel preconditioner is obtained as an explicit approximation, in factorized form, of a particular generalized inverse of the generator matrix of the Markov process. Graph partitioning is used to parallelize the whole algorithm, resulting in a two-level method.
2008 International Multiconference on Computer Science and Information Technology, 2008
The authors consider the use of parallel iterative methods for solving large sparse linear systems resulting from Markov chains on a computer cluster. A combination of the Jacobi and Gauss-Seidel iterative methods is examined in a parallel version. Results of experiments for sparse systems with over 3 × 10^7 equations and about 2 × 10^8 nonzeros, obtained from a Markovian model of a congestion control mechanism, are reported.
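The Jacobi/Gauss-Seidel combination described above can be illustrated in a few lines. The sketch below is a minimal, sequential Python illustration with an invented 3-state generator; the paper's cluster implementation on systems with 10^7 equations is far beyond this toy. For an ergodic CTMC with generator Q, the stationary vector satisfies πQ = 0 with Σπ_i = 1, and each sweep solves the i-th balance equation for π_i:

```python
# Hybrid Jacobi / Gauss-Seidel sweeps for pi Q = 0, sum(pi) = 1.
# Q is a small CTMC generator (rows sum to zero) invented for illustration.

def jacobi_sweep(Q, pi):
    """One Jacobi sweep: every pi_i is updated from the previous iterate."""
    n = len(pi)
    return [sum(pi[j] * Q[j][i] for j in range(n) if j != i) / -Q[i][i]
            for i in range(n)]

def gauss_seidel_sweep(Q, pi):
    """One Gauss-Seidel sweep: updates are used as soon as they are computed."""
    n = len(pi)
    pi = pi[:]
    for i in range(n):
        pi[i] = sum(pi[j] * Q[j][i] for j in range(n) if j != i) / -Q[i][i]
    return pi

def stationary(Q, jacobi_sweeps=5, gs_sweeps=100):
    n = len(Q)
    pi = [1.0 / n] * n
    for _ in range(jacobi_sweeps):
        pi = jacobi_sweep(Q, pi)
        s = sum(pi); pi = [p / s for p in pi]      # renormalize each sweep
    for _ in range(gs_sweeps):
        pi = gauss_seidel_sweep(Q, pi)
        s = sum(pi); pi = [p / s for p in pi]
    return pi

Q = [[-2.0,  1.0,  1.0],
     [ 1.0, -3.0,  2.0],
     [ 2.0,  1.0, -3.0]]
pi = stationary(Q)   # exact answer for this Q is [7/16, 4/16, 5/16]
```

A few cheap Jacobi sweeps (which parallelize trivially) followed by Gauss-Seidel refinement is one natural way to read "a combination" of the two methods; the paper's own parallel splitting may differ.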
Advances in Engineering Software, 2010
Two-stage methods in which the inner iterations are accomplished by an alternating method are developed. Convergence of these methods is shown in the context of solving singular and nonsingular linear systems. These methods are suitable for parallel computation. Experiments related to finding the stationary probability distribution of Markov chains are performed. These experiments demonstrate that the parallel implementation of these methods can solve singular systems of linear equations in substantially less time than the sequential counterparts. The inner alternating iteration takes the form $z^{(k+\frac{1}{2})} = P^{-1} Q z^{(k)} + P^{-1}(N x^{(l)} + b)$, $z^{(k+1)} = R^{-1} S z^{(k+\frac{1}{2})} + R^{-1}(N x^{(l)} + b)$, $k = 0, 1, \ldots, s(l) - 1$, with $z^{(0)} = x^{(l)}$.
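A two-stage scheme of this kind can be sketched as follows, assuming an outer splitting A = M − N whose inner solves use the alternating forward/backward triangular splittings M = P − Q and M = R − S. The matrices, the choice N = I, and the fixed inner sweep count are invented for illustration and do not reproduce the paper's parallel implementation:

```python
# Two-stage iteration for A x = b with outer splitting A = M - N.
# Each outer step approximately solves M z = N x + b by alternating inner
# sweeps  z <- P^{-1}(Q z + c),  z <- R^{-1}(S z + c),
# where M = P - Q (P lower triangular) and M = R - S (R upper triangular).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def forward_solve(L, b):          # L lower triangular
    x = [0.0] * len(b)
    for i in range(len(b)):
        x[i] = (b[i] - sum(L[i][j] * x[j] for j in range(i))) / L[i][i]
    return x

def backward_solve(U, b):         # U upper triangular
    n = len(b)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

def two_stage(A, b, inner_sweeps=3, outer_iters=200):
    n = len(b)
    # Invented splitting: M = A + I, N = I, so A = M - N.
    M = [[A[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    N = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [[M[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]
    Q = [[P[i][j] - M[i][j] for j in range(n)] for i in range(n)]
    R = [[M[i][j] if j >= i else 0.0 for j in range(n)] for i in range(n)]
    S = [[R[i][j] - M[i][j] for j in range(n)] for i in range(n)]
    x = [0.0] * n
    for _ in range(outer_iters):
        c = [nv + bv for nv, bv in zip(matvec(N, x), b)]   # c = N x + b
        z = x[:]                                            # z^(0) = x^(l)
        for _ in range(inner_sweeps):
            z = forward_solve(P, [qv + cv for qv, cv in zip(matvec(Q, z), c)])
            z = backward_solve(R, [sv + cv for sv, cv in zip(matvec(S, z), c)])
        x = z
    return x

A = [[ 4.0, -1.0, -1.0],
     [-1.0,  4.0, -1.0],
     [-1.0, -1.0,  4.0]]
b = [2.0, 2.0, 2.0]
x = two_stage(A, b)   # exact solution of this system is [1, 1, 1]
```

If x equals the true solution, then c = M x, and both triangular sweeps leave z unchanged, so the exact solution is a fixed point regardless of how many inner sweeps s(l) are taken.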
2011 Federated Conference on Computer Science and Information Systems (FedCSIS), 2011
This paper is a review and comparison of preconditioners based on incomplete factorizations of matrices describing Markov chains. Three preconditioners are considered: ILU(0), ILU3, and IWZ(0). The first two (ILU(0), ILU3) are based on the LU factorization; the latter (IWZ(0)) on the WZ factorization. The preconditioners are investigated with respect to their ability to decrease the number of iterations in a projection method, namely GMRES(m). To choose the best preconditioner for such methods, the authors introduce a measure called iteration speed-up (p) and some of its relatives, and define a function (Is) giving the average number of restarts needed to achieve a given accuracy for matrices from a given set. These measures are studied for two different cases of matrices describing Markov chains to compare the influence of the examined incomplete preconditioners on GMRES(m).
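As an illustration of the kind of preconditioner compared here, the sketch below computes an ILU(0) factorization, i.e. an LU factorization that allows no fill-in outside the nonzero pattern of A. The small test matrix is invented; the IWZ(0) variant built on the WZ factorization is not shown:

```python
# ILU(0): LU factorization restricted to the nonzero pattern of A.
# Entries of L (unit lower) and U (upper) are stored in place of A.

def ilu0(A):
    n = len(A)
    A = [row[:] for row in A]                      # work on a copy
    pattern = {(i, j) for i in range(n) for j in range(n) if A[i][j] != 0.0}
    pattern |= {(i, i) for i in range(n)}          # always keep the diagonal
    for i in range(1, n):
        for k in range(i):
            if (i, k) not in pattern:
                continue
            A[i][k] /= A[k][k]                     # multiplier l_ik
            for j in range(k + 1, n):
                if (i, j) in pattern:              # update only on the pattern
                    A[i][j] -= A[i][k] * A[k][j]
    return A, pattern

A = [[ 4.0, -1.0,  0.0, -1.0],
     [-1.0,  4.0, -1.0,  0.0],
     [ 0.0, -1.0,  4.0, -1.0],
     [-1.0,  0.0, -1.0,  4.0]]
F, pattern = ilu0(A)

# Reassemble L and U: the product L @ U matches A exactly on the pattern;
# off the pattern it may differ -- that difference is the dropped fill-in.
n = len(A)
L = [[1.0 if i == j else (F[i][j] if j < i else 0.0) for j in range(n)] for i in range(n)]
U = [[F[i][j] if j >= i else 0.0 for j in range(n)] for i in range(n)]
LU = [[sum(L[i][k] * U[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
```

In a preconditioned GMRES(m) run, each application of such a preconditioner is just one forward solve with L and one backward solve with U per iteration.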
Numerical Linear Algebra with Applications, 2011
This special issue contains a selection of papers from the Sixth International Workshop on the Numerical Solution of Markov Chains, held in Williamsburg, Virginia on September 16-17, 2010. The papers cover a broad range of topics including perturbation theory for absorbing chains, bounding techniques, steady-state and transient solution methods, multilevel algorithms, preconditioning, and applications.
Proceedings of the International Multiconference on Computer Science and Information Technology, 2010
The authors consider the impact of the structure of the matrix on the convergence behavior of the GMRES projection method for solving large sparse linear systems resulting from Markov chain modeling. Studying experimental results, we investigate the number of steps and the rate of convergence of the GMRES method and of the IWZ-preconditioned GMRES method. The motivation is to better understand the convergence characteristics of Krylov subspace methods and the relationship between the Markov model, the nonzero structure of the coefficient matrix associated with the model, and the convergence of the preconditioned GMRES method.
Numerical Linear Algebra with Applications, 2001
Let M_T be the mean first passage matrix for an n-state ergodic Markov chain with a transition matrix T. We partition T as a 2 × 2 block matrix and show how to reconstruct M_T efficiently by using the blocks of T and the mean first passage matrices associated with the non-overlapping Perron complements of T. We present a schematic diagram showing how this method for computing M_T can be implemented in parallel. We analyse the asymptotic number of multiplication operations necessary to compute M_T by our method and show that, for large problems, the number of multiplications is reduced by about 1/8, even if the algorithm is implemented in serial. We present five examples of moderate size (of orders 20–200) and give the reduction in the total number of flops (as opposed to multiplications) in the computation of M_T. The examples show that when the diagonal blocks in the partitioning of T are of equal size, the reduction in the number of flops can be much better than 1/8.
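As background for what M_T contains: a direct, non-partitioned way to compute mean first passage times solves, for each target state j, the system m_ij = 1 + Σ_{k≠j} t_ik m_kj. The sketch below does exactly that with dense Gaussian elimination on an invented 2-state chain; the paper's point is that the partitioned Perron-complement route needs fewer multiplications than such direct computation at scale:

```python
# Mean first passage matrix of an ergodic chain, computed column by column:
# for a fixed target j, solve (I - T_j) m = 1, where T_j is T with column j zeroed.

def gauss_solve(A, b):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))   # pivot row
        A[k], A[p] = A[p], A[k]; b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def mean_first_passage(T):
    n = len(T)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # coefficient matrix I - T with column j zeroed out
        A = [[(1.0 if i == k else 0.0) - (T[i][k] if k != j else 0.0)
              for k in range(n)] for i in range(n)]
        col = gauss_solve(A, [1.0] * n)
        for i in range(n):
            M[i][j] = col[i]
    return M

T = [[0.50, 0.50],
     [0.25, 0.75]]
M = mean_first_passage(T)
# For this chain m_12 = 2 and m_21 = 4, and the diagonal entries are the
# mean return times m_jj = 1/pi_j: m_11 = 3, m_22 = 1.5.
```

Solving n dense linear systems of order n costs O(n^4) this way, which is why structured methods such as the one above matter for large chains.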
Mathematical Problems in Engineering, 2013
This paper presents a class of new accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method, showing how to periodically combine the GMRES and vector extrapolation methods into a more efficient one that improves the convergence rate on Markov chain problems. Numerical experiments demonstrate the efficiency of the new algorithm on several typical Markov chain problems.
The computation of stationary distributions of Markov chains is an important task in the simulation of stochastic models. The linear systems arising in such applications involve non-symmetric M-matrices, making algebraic multigrid methods a natural choice for solving these systems. In this paper we investigate extensions and improvements of the bootstrap algebraic multigrid framework for solving these systems. This is achieved by reworking the bootstrap setup process to use singular vectors instead of eigenvectors in constructing interpolation and restriction. We formulate a result concerning the convergence speed of GMRES for singular systems and experimentally justify why rapid convergence of the proposed method can be expected. We demonstrate its fast convergence and the favorable scaling behaviour for various test problems.
IFAC Proceedings Volumes, 1993
A new method for computing the steady-state probability distribution of ergodic Markov chains is presented. Starting with some ergodic chain, the steady-state distribution of a new Markov chain is computed by updating the steady-state distribution of the old one via a simple updating formula. Experiments on a sample of large band matrices are reported.
Applied Mathematics and Computer Science, 2009
The article considers the effectiveness of various methods used to solve the systems of linear equations which emerge while modeling computer networks and systems with Markov chains, and the practical influence of the applied methods on accuracy. The paper considers some hybrids of both direct and iterative methods. Two varieties of Gauss elimination are considered as examples of direct methods: the LU factorization method and the WZ factorization method.
Linear Algebra and its Applications, 2016
Computational procedures for the stationary probability distribution, the group inverse of the Markovian kernel, and the mean first passage times of a finite irreducible Markov chain are developed using perturbations. The derivation of these expressions involves the solution of systems of linear equations and, structurally, inevitably the inverses of matrices. By using a perturbation technique, starting from a simple base where no such derivations are formally required, we update a sequence of matrices, formed by linking the solution procedures via generalized matrix inverses and utilising matrix and vector multiplications. Four different algorithms are given, some modifications are discussed, and numerical comparisons are made using a test example. The derivations are based upon the ideas outlined in Hunter.
… , University of Cambridge, Tech. Rep. UCAM- …, 2005
Lecture Notes in Control and Information Sciences, 2006
For the numerous applications of Markov chains (in particular MCMC methods), the problem of detecting an instant at which the convergence takes place is crucial. The 'cut-off phenomenon', or abrupt convergence, provides an answer to this problem. When a sample of Markov chains, or more generally of exponentially converging processes, is simulated in parallel, it remains far from its stationary distribution until a deterministic instant, and approaches it exponentially fast afterwards. The cut-off instant is explicitly known, and can be detected algorithmically using appropriate stopping times. The technique is illustrated on the Ornstein-Uhlenbeck diffusion.
Performance Evaluation, 2017
State based analysis of stochastic models for performance and dependability often requires the computation of the stationary distribution of a multidimensional continuous-time Markov chain (CTMC). The infinitesimal generator underlying a multidimensional CTMC with a large reachable state space can be represented compactly in the form of a block matrix in which each nonzero block is expressed as a sum of Kronecker products of smaller matrices. However, solution vectors used in the analysis of such Kronecker-based Markovian representations require memory proportional to the size of the reachable state space. This implies that memory allocated to solution vectors becomes a bottleneck as the size of the reachable state space increases. Here, it is shown that the hierarchical Tucker decomposition (HTD) can be used with adaptive truncation strategies to store the solution vectors during Kronecker-based Markovian analysis compactly and still carry out the basic operations including vector-matrix multiplication in Kronecker form within Power, Jacobi, and Generalized Minimal Residual methods. Numerical experiments on multidimensional problems of varying sizes indicate that larger memory savings are obtained with the HTD approach as the number of dimensions increases.
Applied Mathematics and Computation, 2015
In this paper, we develop new methods for approximating the dominant eigenvector of column-stochastic matrices. We analyze the Google matrix and present an averaging scheme with a linear rate of convergence in terms of the 1-norm distance. To extend this convergence result to the general case, we assume the existence of a positive row in the matrix. Our new numerical scheme, the Reduced Power Method (RPM), can be seen as a proper averaging of the power iterates of a reduced stochastic matrix. We also analyze the usual Power Method (PM) and obtain convenient conditions for its linear rate of convergence with respect to the 1-norm.
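A minimal sketch of the plain Power Method analyzed alongside RPM, applied to a tiny invented Google-type matrix G = αP + (1−α)/n · E (column-stochastic, damping α = 0.85), with convergence monitored in the 1-norm, the natural norm for probability vectors:

```python
# Power Method for the dominant (stationary) eigenvector of a
# column-stochastic matrix, with convergence measured in the 1-norm.

def power_method(G, tol=1e-12, max_iters=10_000):
    n = len(G)
    x = [1.0 / n] * n                 # start from the uniform distribution
    for _ in range(max_iters):
        y = [sum(G[i][j] * x[j] for j in range(n)) for i in range(n)]
        if sum(abs(yi - xi) for yi, xi in zip(y, x)) < tol:
            return y
        x = y
    return x

# Tiny invented link matrix P (columns sum to 1), damped into the Google matrix G.
alpha, n = 0.85, 3
P = [[0.0, 0.5, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 0.5, 0.0]]
G = [[alpha * P[i][j] + (1 - alpha) / n for j in range(n)] for i in range(n)]
x = power_method(G)                   # stationary vector: G x = x, sum(x) = 1
```

Because the columns of G sum to one, each iterate keeps Σx_i = 1, so no renormalization is needed; the damping makes G strictly positive, which guarantees the linear 1-norm convergence discussed in the abstract.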
The article identifies and assesses the effectiveness of two different methods applied to solve the systems of linear equations that arise while modeling computer networks and systems with Markov chains. The paper considers both a hybrid of direct methods and a classic iterative method. Two varieties of Gauss elimination are considered as examples of direct methods: the LU factorization method and the WZ factorization method; the Gauss-Seidel iterative method is also discussed. The approach relies on preconditioning and on dividing the matrix into blocks, with the blocks solved by direct methods. The paper presents the impact of the combined methods on both the time and the accuracy of determining the probability vector for particular networks and computer systems.
Proceedings of the 39th IEEE Conference on Decision and Control (Cat. No.00CH37187), 2000
This paper deals with a class of ergodic control problems for systems described by Markov chains with strong and weak interactions. These systems are composed of a set of m subchains that are weakly coupled. Using results recently established by Abbad et al., one formulates a limit control problem the solution of which can be obtained via an associated nondifferentiable convex programming (NDCP) problem. The technique used to solve the NDCP problem is the Analytic Center Cutting Plane Method (ACCPM), which implements a dialogue between, on one hand, a master program computing the analytic center of a localization set containing the solution and, on the other hand, an oracle proposing cutting planes that reduce the size of the localization set at each main iteration. The interesting aspect of this implementation comes from two characteristics: (i) the oracle proposes cutting planes by solving reduced-size Markov Decision Problems (MDP) via a linear program (LP) or a policy iteration method; (ii) several cutting planes can be proposed simultaneously through a parallel implementation on m processors. The paper concentrates on these two aspects and shows, on a large-scale MDP obtained from the numerical approximation "à la Kushner-Dupuis" of a singularly perturbed hybrid stochastic control problem, the significant computational speed-up obtained.