2005
Solving systems of linear equations is crucial in scientific computing, particularly in the context of Continuous Time Markov Chains (CTMCs). This paper presents a parallel iterative solution method for the large sparse linear systems that arise in CTMC analysis. It discusses the limitations of existing explicit state-space methods in the face of state-space explosion and introduces techniques that improve computational efficiency and widen the range of solvable models. Empirical evaluations show that the proposed method yields significant improvements in memory and time efficiency for large-scale CTMCs, and the paper offers insights into potential applications and future research directions in performance modelling.
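The numerical kernel behind such methods is an iterative solve of the singular system pi Q = 0. As a point of reference only (not the paper's own implementation), a minimal Jacobi-style sketch in Python/SciPy might look like this, assuming an ergodic chain with generator Q; in a parallel setting the matrix-vector product R @ x is the part distributed across processors:

```python
# A minimal Jacobi-style sketch for the CTMC steady-state equation
# pi Q = 0 (solved here as Q^T x = 0), assuming an ergodic chain whose
# generator Q is given as a SciPy sparse matrix. Illustrative only.
import numpy as np
import scipy.sparse as sp

def jacobi_steady_state(Q, tol=1e-10, max_iter=100000):
    A = sp.csr_matrix(Q).T            # solve A x = 0 with A = Q^T
    d = A.diagonal()                  # negative exit rates, nonzero if ergodic
    R = A - sp.diags(d)               # off-diagonal part of A
    n = A.shape[0]
    x = np.full(n, 1.0 / n)           # start from the uniform distribution
    for _ in range(max_iter):
        x_new = -(R @ x) / d          # Jacobi update: x_i = -(R x)_i / a_ii
        x_new /= x_new.sum()          # renormalise to a probability vector
        if np.linalg.norm(x_new - x, np.inf) < tol:
            break
        x = x_new
    return x_new
```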
… , University of Cambridge, Tech. Rep. UCAM- …, 2005
Applied Numerical Mathematics, 2002
We consider the parallel computation of the stationary probability distribution vector of ergodic Markov chains with large state spaces by preconditioned Krylov subspace methods. The parallel preconditioner is obtained as an explicit approximation, in factorized form, of a particular generalized inverse of the generator matrix of the Markov process. Graph partitioning is used to parallelize the whole algorithm, resulting in a two-level method.
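For illustration, a hedged sketch of the overall Krylov approach: the paper's preconditioner is an explicit factorized approximate inverse of the generator, but here SciPy's off-the-shelf ILU stands in for it, purely as an assumption. Replacing one equation of the singular system by the normalisation sum(x) = 1 is a common device and also an assumption of this sketch:

```python
# GMRES with a stand-in ILU preconditioner for the stationary vector of an
# ergodic chain with generator Q; the paper's factorized approximate
# inverse preconditioner is replaced here by SciPy's spilu for brevity.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def krylov_stationary(Q):
    n = Q.shape[0]
    A = sp.csr_matrix(Q).T.tolil()
    A[n - 1, :] = np.ones(n)          # last equation becomes sum(x) = 1
    A = A.tocsc()
    b = np.zeros(n)
    b[n - 1] = 1.0
    ilu = spla.spilu(A, drop_tol=1e-4)             # stand-in preconditioner
    M = spla.LinearOperator((n, n), matvec=ilu.solve)
    x, info = spla.gmres(A, b, M=M)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return x / x.sum()
```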
2008 International Multiconference on Computer Science and Information Technology, 2008
The authors consider the use of parallel iterative methods for solving large sparse linear equation systems resulting from Markov chains on a computer cluster. A combination of Jacobi and Gauss-Seidel iterative methods is examined in a parallel version. Results of experiments are reported for sparse systems with over 3 × 10^7 equations and about 2 × 10^8 nonzeros, obtained from a Markovian model of a congestion control mechanism.
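The Jacobi/Gauss-Seidel combination can be pictured as follows: within each partition the sweep is Gauss-Seidel, while values outside the partition are taken from the previous iteration (Jacobi), so the partitions are mutually independent and each can run on a separate cluster node. A dense NumPy sketch of one such sweep, where the partition `blocks` and the matrix A are illustrative assumptions, not the paper's setup:

```python
# One hybrid sweep: Gauss-Seidel ordering inside each block, Jacobi
# (previous-iteration) values outside it, making blocks independent.
import numpy as np

def hybrid_sweep(A, b, x_old, blocks):
    x_new = x_old.copy()
    for block in blocks:               # independent: one worker per block
        x_loc = x_old.copy()           # outside the block: old Jacobi values
        for i in block:
            s = A[i, :] @ x_loc - A[i, i] * x_loc[i]
            x_loc[i] = (b[i] - s) / A[i, i]   # fresh Gauss-Seidel values
        x_new[block] = x_loc[block]
    return x_new
```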
1994
In this paper we present two symbolic algorithms to compute the steady-state probabilities for very large finite state machines. These algorithms, based on Algebraic Decision Diagrams (ADDs), an extension of BDDs that allows arbitrary values to be associated with the terminal nodes of the diagrams, determine the steady-state probabilities by regarding finite state machines as homogeneous, discrete-parameter Markov chains with finite state spaces, and by solving the corresponding Chapman-Kolmogorov equations. We have implemented two solution techniques: one based on the Gauss-Jacobi iteration, and the other on simple matrix multiplication. We report the experimental results obtained for problems with over 10^8 unknowns in irreducible form.
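The second technique, simple matrix multiplication, amounts to the power method on the chain's transition matrix P; a minimal dense sketch, assuming P is row-stochastic (in the paper P is encoded as an ADD rather than stored explicitly):

```python
# Power method pi_{k+1} = pi_k P for the stationary distribution of a
# discrete-parameter chain; each step is one Chapman-Kolmogorov update.
import numpy as np

def power_method(P, tol=1e-12, max_iter=100000):
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        pi_next = pi @ P              # one Chapman-Kolmogorov step
        if np.linalg.norm(pi_next - pi, 1) < tol:
            return pi_next
        pi = pi_next
    return pi
```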
Computations with Markov Chains, 1995
We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
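A hedged sketch of the dynamic-set strategy: a worklist of states to visit, which grows when an update changes a value noticeably and shrinks as states are processed. Function and parameter names below are illustrative assumptions, not the paper's exact algorithm; A and b are the dense coefficient matrix and right-hand side of the steady-state system:

```python
# Adaptive Gauss-Seidel: after updating a state whose value changed more
# than a threshold, reschedule the states that depend on it.
import numpy as np
from collections import deque

def adaptive_gauss_seidel(A, b, x, threshold=1e-12, max_updates=10**7):
    n = len(b)
    scheduled = np.ones(n, dtype=bool)
    work = deque(range(n))            # initially every state is scheduled
    updates = 0
    while work and updates < max_updates:
        i = work.popleft()
        scheduled[i] = False
        old = x[i]
        s = A[i, :] @ x - A[i, i] * x[i]
        x[i] = (b[i] - s) / A[i, i]   # ordinary Gauss-Seidel update
        updates += 1
        if abs(x[i] - old) > threshold:
            for j in np.nonzero(A[:, i])[0]:   # states affected by x[i]
                if j != i and not scheduled[j]:
                    work.append(j)
                    scheduled[j] = True
    return x
```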
Journal of Scientific Computing, 2022
Discrete-time discrete-state finite Markov chains are versatile mathematical models for a wide range of real-life stochastic processes. One of the most common tasks in the study of Markov chains is the computation of the stationary distribution. Without loss of generality, and drawing our motivation from applications to large networks, we interpret this problem as one of computing the stationary distribution of a random walk on a graph. We propose a new controlled, easily distributed algorithm for this task, briefly summarized as follows: at the beginning, each node receives a fixed amount of cash (positive or negative), and at each iteration, some nodes receive a 'green light' to distribute their wealth or debt proportionally to the transition probabilities of the Markov chain; the stationary probability of a node is computed as the ratio of the cash distributed by this node to the total cash distributed by all nodes together. Our method includes as special cases a wide range of known, very different, and previously disconnected methods, including power iterations, Gauss-Southwell, and online distributed algorithms. We prove exponential convergence of our method, demonstrate its high efficiency, and derive scheduling strategies for the green light that achieve a faster convergence rate than state-of-the-art algorithms.
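A minimal sketch of the scheme as described, for a walk with dense transition matrix P and a simple round-robin green-light schedule (one choice among the scheduling strategies the paper analyses; uniform positive initial cash is a simplification here):

```python
# Cash-distribution algorithm: a scheduled node passes its current cash to
# neighbours proportionally to its transition probabilities; the stationary
# estimate is each node's distributed cash over the total distributed.
import numpy as np

def cash_distribution(P, n_rounds=100000):
    n = P.shape[0]
    cash = np.full(n, 1.0 / n)        # fixed initial cash at every node
    distributed = np.zeros(n)         # total cash each node has passed on
    for t in range(n_rounds):
        i = t % n                     # green light: round-robin schedule
        c = cash[i]
        distributed[i] += c
        cash[i] = 0.0
        cash += c * P[i]              # spread proportionally to P's row i
    return distributed / distributed.sum()
```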
Analytical performance models of complex and heterogeneous systems are often solved by analysing the underlying Markovian models. We consider performance models based on Continuous Time Markov Chains (CTMCs) and their solution, that is, the computation of the steady-state distribution, from which a set of performance indices can be derived efficiently. This paper presents a tool that is able to decide whether a set of cooperating CTMCs yields a product-form stationary distribution. In this case, the tool computes the unnormalised steady-state distribution. The algorithm underlying the tool was presented in [10] and exploits recent advances in the theory of product-form models, such as the Reversed Compound Agent Theorem (RCAT). In this paper, we focus on the peculiarities of the formalism adopted to describe the interacting CTMCs and on aspects of the software design that may be of interest to the performance community.
Journal of Information Science and Engineering, 2002
The objective of this work is the analysis and prediction of the performance of irregular codes, mainly in their parallel implementations. In particular, this paper focuses on parallel iterative solvers for sparse matrices as a relevant case study of this kind of code. An efficient library of solvers and preconditioners was developed using HPF and MPI as parallel platforms. For this library, models that characterize and predict the execution behavior of the methods, preconditioners and kernels were introduced. To present the results of these models, a visualization tool with an easy-to-use GUI was implemented. Finally, results of the prediction models for the codes of the parallel library are presented using the visualization tool.
Studies in classification, data analysis, and knowledge organization, 2022
Markov Decision Processes (MDPs) are useful for solving real-world probabilistic planning problems. However, finding an optimal solution in an MDP can take an unreasonable amount of time when the number of states in the MDP is large. In this paper, we present a way to decompose an MDP into Strongly Connected Components (SCCs) and to find dependency chains for these SCCs. We then propose a variant of the Topological Value Iteration (TVI) algorithm, called parallel chained TVI (pcTVI), which solves independent chains of SCCs in parallel, leveraging modern multicore computer architectures. The performance of our algorithm was measured by comparing it to the baseline TVI algorithm on a new probabilistic planning domain introduced in this study. Our pcTVI algorithm achieved a speedup factor of 20 over traditional TVI (on a computer with 32 cores).
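A sketch of the decomposition skeleton behind this kind of algorithm: compute the SCCs of the MDP's transition graph, derive the dependencies between components, and hand every component whose successor components are already solved to a process pool. Here solve_scc is a placeholder for value iteration restricted to one component, and the layer-by-layer dispatch is a simplified stand-in for the paper's chain scheduling:

```python
# SCC decomposition plus parallel dispatch over the condensation DAG.
# adjacency: sparse boolean matrix with an edge s -> t if some action
# can move the system from state s to state t.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from concurrent.futures import ProcessPoolExecutor

def solve_scc(states):
    # placeholder: run value iteration restricted to these states
    return {int(s): 0.0 for s in states}

def parallel_scc_solve(adjacency):
    adjacency = csr_matrix(adjacency)
    n_scc, label = connected_components(adjacency, connection='strong')
    comps = [np.nonzero(label == c)[0] for c in range(n_scc)]
    # the component of s depends on the components of states s can reach
    deps = {c: set() for c in range(n_scc)}
    for s, t in zip(*adjacency.nonzero()):
        if label[s] != label[t]:
            deps[label[s]].add(label[t])
    values, solved = {}, set()
    with ProcessPoolExecutor() as pool:
        while len(solved) < n_scc:
            ready = [c for c in range(n_scc)
                     if c not in solved and deps[c] <= solved]
            for res in pool.map(solve_scc, [comps[c] for c in ready]):
                values.update(res)
            solved.update(ready)
    return values
```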
Electronic Notes in …, 2002
Advances in Engineering Software, 2010
Two-stage methods in which the inner iterations are accomplished by an alternating method are developed. Convergence of these methods is shown in the context of solving singular and nonsingular linear systems. These methods are suitable for parallel computation. Experiments related to finding the stationary probability distribution of Markov chains are performed. These experiments demonstrate that the parallel implementation of these methods can solve singular systems of linear equations in substantially less time than their sequential counterparts. The inner iteration takes the form

$$z^{(k+\frac{1}{2})} = P^{-1} Q\, z^{(k)} + P^{-1}\left(N x^{(l)} + b\right), \qquad z^{(k+1)} = R^{-1} S\, z^{(k+\frac{1}{2})} + R^{-1}\left(N x^{(l)} + b\right),$$

for $k = 0, 1, \ldots, s(l) - 1$, with $z^{(0)} = x^{(l)}$.
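A dense NumPy transcription of the displayed scheme, assuming the outer splitting A = M - N and the inner alternating splittings M = P - Q = R - S are supplied, with s inner sweeps per outer step; in practice P and R would be factored once and the solves parallelised:

```python
# Two-stage iteration: outer steps update x^(l); each outer step runs s
# inner alternating sweeps through the P- and R-splittings.
import numpy as np

def two_stage(b, P, Q, R, S, N, s, outer_iters, x0):
    x = x0.copy()
    for _ in range(outer_iters):
        rhs = N @ x + b                          # fixed during inner sweeps
        z = x.copy()                             # z^(0) = x^(l)
        for _ in range(s):
            z = np.linalg.solve(P, Q @ z + rhs)  # z^(k+1/2)
            z = np.linalg.solve(R, S @ z + rhs)  # z^(k+1)
        x = z                                    # x^(l+1) = z^(s)
    return x
```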
2017 Annual IEEE International Systems Conference (SysCon), 2017
Modeling the dynamic, time-varying behavior of systems and processes is a common design and analysis task in the systems engineering community. A popular method for performing such analysis is the use of Markov chains. Additionally, automated methods may be used to determine new state values for a system under observation or test. Unfortunately, the state-transition space of a Markov chain grows exponentially with the size of the modeled system, which limits the use of Markov chains for dynamic analysis. We present results on the use of an efficient data structure, the algebraic decision diagram (ADD), for the representation of Markov chains, together with an accompanying prototype analysis tool. Experimental results from the prototype ADD-based analysis tool indicate that the ADD is a viable structure for the automated modeling of Markov chains consisting of hundreds of thousands of states, making automated Markov chain analysis of extremely large state spaces a viable technique for system and process modeling and analysis.
Performance Evaluation, 2011
We propose a new approximate numerical algorithm for the steady-state solution of general structured ergodic Markov models. The approximation uses a state-space encoding based on multiway decision diagrams and a transition rate encoding based on a new class of edge-valued decision diagrams. The new method retains the favorable properties of a previously proposed Kronecker-based approximation, while eliminating the need for a Kronecker-consistent model decomposition. Removing this restriction allows greater utilization of event locality, which facilitates the generation of both the state space and the transition rate matrix, thus extending the applicability of the algorithm to larger and more complex models.
This article identifies and assesses the effectiveness of two different approaches to solving the systems of linear equations that arise when modeling computer networks and systems with Markov chains: a hybrid of direct methods and a classic iterative method. Two varieties of Gaussian elimination are considered as examples of direct methods: the LU factorization method and the WZ factorization method. The Gauss-Seidel iterative method is also discussed. The paper addresses preconditioning and division of the matrix into blocks, where the blocks are solved with direct methods, and examines the impact of the combined methods on both the time and the accuracy of determining the probability vector for particular networks and computer systems.
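A hedged sketch of such a hybrid: a block Gauss-Seidel outer iteration whose diagonal blocks are factored once by a direct method and reused in every sweep. SciPy's sparse LU (splu) stands in here for the LU and WZ factorizations compared in the paper, and `blocks` is assumed to be a list of index arrays partitioning the states:

```python
# Block Gauss-Seidel with direct solves on the diagonal blocks: each
# block's factorization is computed once and reused across sweeps.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_gauss_seidel(A, b, blocks, sweeps=50):
    A = sp.csr_matrix(A)
    x = np.zeros(len(b))
    factors = [spla.splu(sp.csc_matrix(A[blk][:, blk])) for blk in blocks]
    for _ in range(sweeps):
        for lu, blk in zip(factors, blocks):
            rows = A[blk]                            # equations of this block
            off = rows @ x - rows[:, blk] @ x[blk]   # coupling to other blocks
            x[blk] = lu.solve(b[blk] - off)          # direct solve in block
    return x
```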
Applied Mathematics and Computer Science, 2009
The article considers the effectiveness of various methods used to solve the systems of linear equations which emerge when modeling computer networks and systems with Markov chains, and the practical influence of the methods applied on accuracy. The paper considers some hybrids of both direct and iterative methods. Two varieties of Gaussian elimination, the LU factorization and the WZ factorization, are considered as examples of direct methods.