2010
Simulation is an interesting alternative for solving Markovian models. However, compared to analytical and numerical solutions, it suffers from a lack of precision in the results due to the very nature of simulation: the choice of samples through pseudorandom generation. This paper proposes a different way to simulate Markovian models, using a Bootstrap-based statistical method to minimize the effect of sample choices. The effectiveness of the proposed method, called Bootstrap simulation, is compared to numerical solution results for a set of examples described in the Stochastic Automata Networks modeling formalism.
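The abstract does not spell out the algorithm itself, so the following is only a minimal sketch of the underlying idea: bootstrap resampling applied to batch estimates drawn from a simulated chain. The 3-state transition matrix and all parameters are illustrative assumptions, not the SAN examples from the paper.

```python
import numpy as np

# Hypothetical 3-state chain; the transition matrix P is an assumption,
# not one of the SAN models used in the paper.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.2, 0.3, 0.5]])
rng = np.random.default_rng(42)

def simulate(P, n_steps, x0=0):
    """Generate one trajectory and return the visit frequency of each state."""
    counts = np.zeros(P.shape[0])
    x = x0
    for _ in range(n_steps):
        x = rng.choice(P.shape[0], p=P[x])
        counts[x] += 1
    return counts / n_steps

# Collect k independent batch estimates of the stationary distribution.
batches = np.array([simulate(P, 5_000) for _ in range(30)])

# Bootstrap: resample the batches with replacement many times and average,
# which smooths out the effect of any particular pseudorandom sample choice.
boot = np.array([batches[rng.integers(0, len(batches), len(batches))].mean(axis=0)
                 for _ in range(1_000)])
print("estimate:", boot.mean(axis=0))
print("std. error:", boot.std(axis=0))
```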
Lecture Notes in Computer Science, 2008
The solution of continuous- and discrete-time Markovian models is still challenging, mainly when we model large complex systems, for example, to obtain performance indexes of parallel and distributed systems. However, iterative numerical algorithms, even when well fitted to a multidimensional structured representation of Markov chains, still face the state space explosion problem. Discrete-event simulations can estimate the stationary distribution based on long-run trajectories and are also alternative methods to estimate performance indexes of models. Perfect simulation algorithms directly build steady-state samples, avoiding the warm-up period and the initial-state bias of forward simulations. This paper introduces the concepts of backward coupling and the advantages of monotonicity properties and component-wise characteristics to simulate Stochastic Automata Networks (SAN). The main contribution is a novel technique to solve SAN descriptions originally unsolvable by iterative methods due to large state spaces. This method is extremely efficient when the state space is large and the model has dynamic monotonicity, because it is then possible to contract the reachable state space into a smaller set of maximal states. Component-wise characteristics also contribute to the state space reduction by extracting extremal states of the model's underlying chain. The efficiency of this technique applied to sample generation using perfect simulation is compared to the overall efficiency of an iterative numerical method to predict performance indexes of SAN models.
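As a concrete illustration of backward coupling, here is a minimal sketch of coupling from the past for a monotone chain. The bounded birth-death chain and its probabilities are assumptions chosen for illustration; the paper's actual algorithm operates on SAN descriptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10                 # states 0..N (assumption: a monotone birth-death chain)
p_up, p_dn = 0.3, 0.4

def update(x, u):
    """Monotone update rule: the same uniform u drives every copy."""
    if u < p_up:
        return min(x + 1, N)
    if u < p_up + p_dn:
        return max(x - 1, 0)
    return x

def cftp():
    """Coupling from the past: sandwich the whole state space between the
    minimal and maximal states; when they coalesce at time 0, the common
    value is an exact sample from the stationary distribution."""
    T = 1
    us = []
    while True:
        # extend the random input stream further into the past,
        # reusing the random numbers already drawn for later times
        us = list(rng.uniform(size=T - len(us))) + us
        lo, hi = 0, N
        for u in us:
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:
            return lo
        T *= 2

samples = [cftp() for _ in range(10_000)]
print(np.bincount(samples, minlength=N + 1) / len(samples))
```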
2010
The Stochastic Automata Networks (SAN) formalism provides a compact and modular description of Markovian models. Moreover, SAN is suitable for deriving performance indices for systems analysis and interpretation using iterative numerical solutions based on a descriptor and a probability vector the size of the state space. Depending on the size of the model, this operation is computationally onerous and sometimes impracticable. An alternative way to compute indices from a model is simulation, mainly because it requires only a pseudorandom generator and transition functions between states to create a trajectory. The sampling process can differ for each technique, establishing rules to collect samples for further statistical analysis. Simulation techniques often demand large numbers of samples in order to calculate statistically relevant performance indices. We focus our attention on the parallelization of sampling techniques to generate more samples in less time, drawing considerations about the impact on result accuracy.
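A minimal sketch of the parallelization idea: independent trajectories generated concurrently, each with its own seed, with the per-worker estimates pooled afterwards. The 3-state chain and the process count are assumptions, not the paper's setup.

```python
import numpy as np
from multiprocessing import Pool

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.2, 0.3, 0.5]])   # illustrative 3-state chain, not a SAN model

def worker(seed, n_steps=100_000):
    """Each worker simulates an independent trajectory with its own seed
    and returns the visit frequencies of the states."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(P.shape[0])
    x = 0
    for _ in range(n_steps):
        x = rng.choice(P.shape[0], p=P[x])
        counts[x] += 1
    return counts / n_steps

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # distinct seeds keep the parallel sample streams uncorrelated
        results = pool.map(worker, range(4))
    print("pooled estimate:", np.mean(results, axis=0))
```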
International Journal of Modelling and Simulation, 1982
The paper discusses current limitations of Markov methods when compared with Monte-Carlo techniques and suggests how these limitations may be overcome. A top-down Markov modelling strategy supported by interactive graphics software, which is currently under development, is described. The use of Markov processes in conjunction with Monte-Carlo models for fault-tolerant simulation is also outlined.
2010 22nd International Symposium on Computer Architecture and High Performance Computing, 2010
The solution of state-based stochastic models is usually a demanding application, so it is a natural candidate for high-performance techniques. We are particularly interested in the speedup of Bootstrap Simulation of structured Markovian models. This approach is a quite recent development in the performance evaluation area, and it brings a considerable improvement in result accuracy, despite the intrinsic effect of randomness in simulation experiments. Unfortunately, Bootstrap Simulation has a higher computational cost than other alternatives. We present experiments with different options to optimize the parallel solution of Bootstrap Simulation applied to three practical examples described in the Stochastic Automata Networks (SAN) formalism. This paper's contribution resides in the discussion of theoretical implementation issues, the obtained speedup, and the actual processing and communication times for all experiments. Additionally, we suggest future work to improve the proposed solution even further and discuss some interesting insights for the parallelization of similar applications.
Markov Chain Monte Carlo Simulations and Their Statistical Analysis, 2004
This article is a tutorial on Markov chain Monte Carlo simulations and their statistical analysis. The theoretical concepts are illustrated through many numerical assignments from the author's book on the subject. Computer code (in Fortran) is available for all subjects covered and can be downloaded from the web.
Annals of Statistics, 1996
The Markov chain simulation method has been successfully used in many problems, including some that arise in Bayesian statistics. We give a self-contained proof of the convergence of this method in general state spaces under conditions that are easy to verify.
2009
The aim of this paper is to assist researchers in understanding the dynamics of simulation models that have been implemented and can be run on a computer, i.e., computer models.
Econometrica, 2003
This study compares restricted and unrestricted methods of bootstrap data generating processes (DGPs) for statistical inference. It used hypothetical datasets simulated from a normal distribution with different ability levels, analyzed using different bootstrap DGPs. In practice, it is advisable to use the restricted parametric bootstrap DGP models and thereafter check that the kernel density of the empirical distributions is close to normal (at least not too skewed). In total, 21,600 scenarios were each replicated 200 times using bootstrap DGPs and kernel density methods. The analysis was carried out using the R statistical package. The results show that when the distribution of a test is skewed, all the scores need to be taken into account, no matter how small the sample size and the bootstrap level are. Across all the conditions considered, models HR5UR and HPN5UR yielded much larger bias and standard error, while the smallest bias values were associated with models HR5R (0.0619) and HPN5R (0.0624). The results confirm that bootstrap DGPs are vital in statistical inference.
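The paper's specific model labels (HR5R, HPN5UR, and so on) are not defined in the abstract, so the sketch below only contrasts the two generic families of bootstrap DGPs: parametric (resampling from a fitted distribution) and nonparametric (resampling the observed data). All data and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=50)   # illustrative sample

def parametric_boot(data, B=2_000):
    """Parametric DGP: resample from the fitted normal distribution."""
    mu, sigma = data.mean(), data.std(ddof=1)
    return np.array([rng.normal(mu, sigma, size=len(data)).mean()
                     for _ in range(B)])

def nonparametric_boot(data, B=2_000):
    """Nonparametric DGP: resample the observed values with replacement."""
    return np.array([rng.choice(data, size=len(data), replace=True).mean()
                     for _ in range(B)])

for name, dist in [("parametric", parametric_boot(data)),
                   ("nonparametric", nonparametric_boot(data))]:
    print(f"{name}: bias={dist.mean() - data.mean():+.4f}, se={dist.std():.4f}")
```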
2004
The most widely used mathematical tools to model the behavior of fault-tolerant computer systems are regenerative Markov processes [1], [2], [10]. Many stationary performance measures of such systems can be written explicitly in terms of the stationary distribution of a Markov process. One familiar form of such measures is μ = Σ_{s∈E} f(s) π_s, where f is a function of state such that E[|f(Z)|] = Σ_{s∈E} |f(s)| π_s < ∞, Z is an irreducible discrete-time Markov chain with a finite state space E, and π = (π_s)_{s∈E} is its stationary distribution. For example, if F ⊆ E is the subset of non-operational states of the system, then μ = Σ_{s∈E} I_F(s) π_s, where I_A(x) = 1 if x is in set A and I_A(x) = 0 otherwise, is the steady-state unavailability of the system. Such a measure is studied for a large Markovian model in [10]. When analytical computation of μ is very difficult or almost ...
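A minimal sketch of the quantity defined above: compute the stationary distribution π of a small DTMC and sum it over a set F of non-operational states to obtain the steady-state unavailability μ. The 4-state matrix and the choice of F are assumptions for illustration.

```python
import numpy as np

# Illustrative 4-state chain; states 2 and 3 are assumed non-operational.
P = np.array([[0.90, 0.05, 0.03, 0.02],
              [0.10, 0.80, 0.05, 0.05],
              [0.50, 0.00, 0.40, 0.10],
              [0.60, 0.00, 0.00, 0.40]])

# Stationary distribution: solve pi = pi P with sum(pi) = 1 by replacing
# one balance equation with the normalization constraint.
A = np.vstack([(P.T - np.eye(4))[:-1], np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

F = [2, 3]                                   # non-operational states
print("unavailability mu =", pi[F].sum())    # mu = sum_{s in F} pi_s
```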
2009
In this paper we propose a functional description of Markov chains (MCs) using recursive stochastic equations and factor distributions instead of the state transition matrix P. This new modeling method is very intuitive, as it separates the functional behavior of the system under study from probabilistic factors. We present the "forward algorithm" to calculate consecutive state distributions x_n. It is numerically equivalent to the well-known vector-matrix multiplication method x_{n+1} = x_n · P, but it can be faster and require less memory. We compare the operation and efficiency of the power method and MC simulation. Then, we propose several optimization techniques to speed up the computation of the stationary state distribution based on consecutive state distributions, to accelerate the forward algorithm, and to reduce its memory requirements. The presented concept has been implemented in a tool including all optimization methods. To make this paper easily accessible to novices, a tuto...
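To make the functional description concrete, here is a minimal sketch of a forward algorithm driven by a recursive stochastic equation x_{n+1} = g(x_n, w) together with a factor distribution for w, rather than an explicit matrix P. The bounded random walk and its noise distribution are assumptions, not the paper's tool.

```python
import numpy as np

K = 5                                   # states 0..K-1
w_dist = {+1: 0.3, -1: 0.4, 0: 0.3}     # factor distribution of the noise w

def g(x, w):
    """Functional behavior of the system: a bounded random walk."""
    return min(max(x + w, 0), K - 1)

def forward_step(xn):
    """Push the state distribution one step through g and the factor
    distribution; numerically equivalent to x_{n+1} = x_n * P."""
    out = np.zeros(K)
    for x, px in enumerate(xn):
        for w, pw in w_dist.items():
            out[g(x, w)] += px * pw
    return out

x = np.zeros(K)
x[0] = 1.0                              # start in state 0
for _ in range(200):
    x = forward_step(x)
print("approximate stationary distribution:", x.round(4))
```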
Electronic Notes in Theoretical Computer Science, 2016
For systems that are suitable to be modelled by continuous Markov chains, dependability analysis is not always straightforward. When such systems are large and complex, it is usually impossible to compute their dependability measures exactly. An alternative is to estimate them by simulation, typically Monte Carlo simulation. But for highly reliable systems, standard simulation cannot reach satisfactory accuracy levels (measured by the variance of the estimator) within reasonable computing times. Conditional Monte Carlo with Intermediate Estimations (CMIE) is a simulation method aimed at making accurate estimations of dependability measures of highly reliable Markovian systems. The basis of CMIE is introduced, the unbiasedness of the corresponding estimator is proven, and its variance is shown to be lower than the variance of the standard estimator. A variant of the basic scheme, applicable to large and highly reliable multicomponent systems, is introduced. Some experimental results are shown.
Operations Research, 2008
We introduce and study a randomized quasi-Monte Carlo method for the simulation of Markov chains up to a random (and possibly unbounded) stopping time. The method simulates n copies of the chain in parallel, using a (d + 1)-dimensional, highly uniform point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. The general idea is to obtain a better approximation of the state distribution, at each step of the chain, than with standard Monte Carlo. The technique can be used in particular to obtain a low-variance unbiased estimator of the expected total cost when state-dependent costs are paid at each step. It is generally more effective when the state space has a natural order related to the cost function.
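The following is a heavily simplified sketch of the array-RQMC idea for a chain driven by one uniform per step (d = 1): keep n copies, sort them by state at each step, and drive the sorted array with a randomized stratified point set instead of i.i.d. uniforms. The toy recurrence is an assumption; the paper uses general (d + 1)-dimensional highly uniform point sets.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024                   # number of parallel copies of the chain

def step(x, u):
    """One transition of an illustrative chain on [0, 1): an autoregressive
    recurrence driven by one uniform per step (d = 1)."""
    return (0.7 * x + 0.3 * u) % 1.0

def array_rqmc(n_steps):
    """Array-RQMC sketch: sort the copies by state at each step and feed the
    sorted array with randomized stratified uniforms, which gives a better
    approximation of the state distribution than plain Monte Carlo."""
    x = np.full(n, 0.5)
    for _ in range(n_steps):
        x.sort()                                       # order the copies by state
        u = (np.arange(n) + rng.uniform(size=n)) / n   # stratified uniforms
        x = step(x, u)
    return x

x = array_rqmc(100)
print("estimated mean state:", x.mean())   # lower variance than plain MC
```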
Information Technology and Control, 2004
The paper considers a method for constructing numerical models for systems described by Markov processes with discrete states and continuous time. The approach of the consequence embedding of Markov chains is used for computing stationary probabilities of Markov processes. The behavior of the system, described with the piecewise-linear aggregate approach, is used for generating the system of Kolmogorov equations. An example of a numerical model for a data transmission tract with adaptive commutation is presented.
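As a minimal illustration of the Kolmogorov balance equations mentioned above, the sketch below solves πQ = 0 with the normalization Σ π_s = 1 for a small continuous-time chain. The generator matrix is an illustrative assumption, not the data-transmission model from the paper.

```python
import numpy as np

# Illustrative generator matrix Q of a 3-state CTMC; rows sum to zero.
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.4, -0.6,  0.2],
              [ 0.1,  0.1, -0.2]])

# Stationary probabilities solve the Kolmogorov balance equations
# pi Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print("stationary probabilities:", pi)
```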
This paper introduces a general class of mathematical models, Markov chain models, which are appropriate for modeling phenomena in the physical and life sciences, medicine, engineering, and the social sciences. Applications of Markov chains are quite common and have become a standard tool of decision making. What matters in predicting the future of the system is its present state, not the path by which the system got to its present state. Two methods are presented that exemplify the flexibility of this approach: the regular Markov chain and the absorbing Markov chain. The long-term trend in absorbing Markov chains depends on the initial state; indeed, changing the initial state can change the final result. This property distinguishes absorbing Markov chains from regular Markov chains, where the final result is independent of the initial state. The problems are formulated using the Wolfram Mathematical Programming System.
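For the absorbing case, the long-run behavior follows from the fundamental matrix. A minimal sketch, assuming an illustrative chain in canonical form with two transient and two absorbing states: the absorption probabilities B = NR and the expected times to absorption t = N1 both vary with the starting row, which is exactly the initial-state dependence described above.

```python
import numpy as np

# Canonical form of an absorbing chain: Q holds transient-to-transient
# probabilities, R transient-to-absorbing (values are illustrative).
Q = np.array([[0.2, 0.5],
              [0.4, 0.3]])
R = np.array([[0.3, 0.0],
              [0.1, 0.2]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities per start state
t = N @ np.ones(2)                 # expected steps before absorption

print("absorption probabilities:\n", B)
print("expected time to absorption:", t)
```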
Statistica sinica, 1997
The Bayesian bootstrap for Markov chains is the Bayesian analogue of the bootstrap method for Markov chains. We construct a random-weighted empirical distribution, based on i.i.d. exponential random variables, to simulate the posterior distribution of the transition probability, the stationary probability, and the first hitting time of a specific state, of a finite-state ergodic Markov chain. Large-sample theory is developed, showing that with a matrix beta prior on the transition probability, the Bayesian bootstrap procedure is second-order consistent for approximating the pivot of the posterior distributions of the transition probability. The small-sample properties of the Bayesian bootstrap are also discussed by a simulation study.
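A minimal sketch of the random-weighting idea, assuming an illustrative two-state chain: each observed transition receives an i.i.d. exponential weight, and row-wise normalization yields one posterior draw of the transition matrix (equivalently, Dirichlet-distributed rows).

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative observed trajectory of a 2-state chain (not data from the paper).
path = rng.choice(2, size=500, p=[0.6, 0.4])

def bayesian_boot_draw(path, K=2):
    """One posterior draw: weight each observed transition with an i.i.d.
    exponential variable and normalize row-wise, which yields
    Dirichlet-distributed rows of the transition matrix."""
    P = np.zeros((K, K))
    for a, b in zip(path[:-1], path[1:]):
        P[a, b] += rng.exponential()
    return P / P.sum(axis=1, keepdims=True)

draws = [bayesian_boot_draw(path) for _ in range(1_000)]
print("posterior mean of P:\n", np.mean(draws, axis=0))
```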
2011
The solution of Markovian models is usually non-trivial to perform using iterative methods, so it is well suited to simulation approaches and high-performance implementations. The Bootstrap simulation method is a novel simulation technique for Markovian models that brings a considerable improvement in result accuracy, notwithstanding its higher computational cost when compared to other simulation alternatives. In this paper, we present three parallel implementations of the Bootstrap simulation algorithm, exploiting a multi-core SMP cluster. We discuss some practical implementation issues regarding processing and communication demands, and present an analysis of speedup and efficiency considering different model sizes and simulation trajectory lengths. Finally, we point out future improvements to achieve even better results in terms of accuracy.
Communications in Information and Systems, 2007
Many problems modeled by Markov decision processes (MDPs) have very large state and/or action spaces, leading to the well-known curse of dimensionality that makes solution of the resulting models intractable. In other cases, the system of interest is complex enough that it is not feasible to explicitly specify some of the MDP model parameters, but simulated sample paths can be readily generated (e.g., for random state transitions and rewards), albeit at a non-trivial computational cost. For these settings, we have developed various sampling and population-based numerical algorithms to overcome the computational difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches presented in this survey include multi-stage adaptive sampling, evolutionary policy iteration and evolutionary random policy search.
Proceedings of the World Congress on …, 2009
Computer networks belong to a class of physical systems that can be studied effectively by means of discrete-event simulation models. The computer network subject of the simulation is a closed Gordon-Newell network and a Cox-Cox network with a ...
2014
The dissertation was prepared in 2009-2014 at the Institute of Mathematics and Informatics of Vilnius University. Scientific supervisor: prof. habil. dr. Leonidas Sakalauskas (Vilnius University, physical sciences, informatics - 09 P). The dissertation is defended before the Council of the Informatics science field of Vilnius University: Chairman: prof. habil. dr. Gintautas Dzemyda (Vilnius University, physical sciences, informatics - 09 P). Members: prof. dr. Romualdas Kliukas (Vilnius Gediminas Technical University, technological sciences, informatics engineering - 07 T); prof. dr. Audrius Lopata (Vilnius University, physical sciences, informatics - 09 P); prof. dr. Gediminas Stepanauskas (Vilnius University, physical sciences, mathematics - 01 P); prof. habil. dr. Rimantas Šeinauskas (Kaunas University of Technology, physical sciences, informatics - 09 P). Opponents: prof. dr. Kęstutis Dučinskas (Klaipėda University, physical sciences, mathematics - 01 P); doc. dr. Olga Kurasova (Vilnius University, physical sciences, informatics - 09 P). The dissertation will be defended at a public meeting of the Council of the Informatics science field of Vilnius University on 26 November 2014 at 1 p.m. in auditorium 203 of the Institute of Mathematics and Informatics of Vilnius University. Address: Akademijos g. 4, LT-08663 Vilnius, Lithuania. The summary of the dissertation was distributed on 24 October 2014. The dissertation is available for review at the Vilnius University library.