1997, Statistica Sinica
The Bayesian bootstrap for Markov chains is the Bayesian analogue of the bootstrap method for Markov chains. We construct a random-weighted empirical distribution, based on i.i.d. exponential random variables, to simulate the posterior distributions of the transition probability, the stationary probability, and the first hitting time of a specific state of a finite-state ergodic Markov chain. Large-sample theory is developed, showing that with a matrix beta prior on the transition probability, the Bayesian bootstrap procedure is second-order consistent for approximating the pivot of the posterior distribution of the transition probability. The small-sample properties of the Bayesian bootstrap are also examined in a simulation study.
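The abstract describes the weighting scheme but gives no code; the following minimal Python sketch (function name, data handling, and summarization step are my own) illustrates the idea: each observed transition receives an i.i.d. Exp(1) weight, and each row of the weighted transition-count matrix is renormalized, giving one draw from the approximate posterior of the transition probability matrix.

```python
import numpy as np

def bayesian_bootstrap_transitions(states, n_states, n_draws=1000, rng=None):
    """Bayesian-bootstrap-style draws of a finite-state transition matrix.

    Each observed transition (X_t, X_{t+1}) gets an i.i.d. Exp(1) weight; the
    weighted counts in every row are renormalized, so the row weights are
    effectively Dirichlet over the observed transitions (mimicking a posterior
    under a row-wise Dirichlet / matrix beta prior).  Unvisited rows stay NaN.
    """
    rng = rng or np.random.default_rng(0)
    states = np.asarray(states)
    pairs = np.stack([states[:-1], states[1:]], axis=1)
    draws = np.full((n_draws, n_states, n_states), np.nan)
    for d in range(n_draws):
        w = rng.exponential(1.0, size=len(pairs))          # i.i.d. exponential weights
        P = np.zeros((n_states, n_states))
        np.add.at(P, (pairs[:, 0], pairs[:, 1]), w)        # weighted transition counts
        rows = P.sum(axis=1, keepdims=True)
        np.divide(P, rows, out=draws[d], where=rows > 0)   # renormalize visited rows
    return draws
```

Repeating the draw and summarizing the resulting matrices (for example with row-wise means and quantiles) approximates posterior inference for the transition probabilities; draws of the stationary distribution can then be obtained from the leading left eigenvector of each sampled matrix.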
Journal of Statistical Planning and Inference, 2005
Journal of Applied Probability, 2002
We consider the estimation of Markov transition matrices by Bayes’ methods. We obtain large and moderate deviation principles for the sequence of Bayesian posterior distributions.
2009
While a large portion of the literature on Markov chain bootstrap methods (possibly of order higher than one) has focused on the correct estimation of the transition probabilities, little or no attention has been devoted to the problem of estimating the dimension of the transition probability matrix. Indeed, it is usual to assume that the Markov chain has a one-step memory property and that the state space cannot be clustered and coincides with the distinct observed values. In this paper we question the appropriateness of such a standard approach. In particular, we advance a method to jointly estimate the order of the Markov chain and identify a suitable clustering of the states. Indeed, in several real-life applications the "memory" of many processes extends well over the last observation; in those cases a correct representation of past trajectories requires a significantly richer set than the state space. On the contr...
Computational Statistics & Data Analysis, 2006
1998 Winter Simulation Conference. Proceedings (Cat. No.98CH36274)
We describe a new estimator of the stationary density of a Markov chain on a general state space. The new estimator is easier to compute, converges faster, and empirically gives visually superior estimates compared with more standard estimators such as nearest-neighbour or kernel density estimators.

PETER W. GLYNN received his Ph.D. from Stanford University, after which he joined the faculty of the Department of Industrial Engineering at the University of Wisconsin-Madison. In 1987, he returned to Stanford, where he currently holds the Thomas Ford Chair in the Department of Engineering-Economic Systems and Operations Research. He was a co-winner of the 1993 Outstanding Simulation Publication Award sponsored by the TIMS College on Simulation. His research interests include discrete-event simulation, computational probability, queueing, and general theory for stochastic systems.
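The abstract does not spell out the estimator. As an illustration only (my own construction, not necessarily the authors'), one estimator in this spirit averages a known one-step transition density over the observed chain, $\hat\pi(y) = n^{-1}\sum_i p(X_i, y)$, which is consistent because $\pi(y) = \int \pi(dx)\, p(x, y)$; it requires $p(x, y)$ in closed form. The AR(1) example below is likewise my own choice.

```python
import numpy as np

def lookahead_density_estimate(chain, transition_density, grid):
    """Estimate the stationary density pi(y) on `grid` by averaging the known
    one-step transition density over the observed chain:
        pi_hat(y) = (1/n) * sum_i p(X_i, y)."""
    chain = np.asarray(chain)
    return np.array([np.mean([transition_density(x, y) for x in chain]) for y in grid])

if __name__ == "__main__":
    # AR(1) chain X_{t+1} = a X_t + eps, eps ~ N(0, 1), whose transition
    # density p(x, y) is the N(a x, 1) density
    a, n = 0.7, 5_000
    rng = np.random.default_rng(0)
    x, chain = 0.0, []
    for _ in range(n):
        x = a * x + rng.standard_normal()
        chain.append(x)
    p = lambda x, y: np.exp(-0.5 * (y - a * x) ** 2) / np.sqrt(2 * np.pi)
    grid = np.linspace(-4, 4, 9)
    # estimates should be close to the N(0, 1/(1 - a^2)) stationary density
    print(lookahead_density_estimate(chain, p, grid))
```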
arXiv: Probability, 2014
An algorithm for estimating the quasi-stationary distribution of finite state space Markov chains was established in a previous paper. This paper proves that a similar algorithm works for general state space Markov chains under very general assumptions.
2006
Bernoulli, 2006
Working papers do not reflect the position of INSEE and are solely the responsibility of their authors.
Computational Statistics & Data Analysis, 2008
2003
In this paper, we show how the original bootstrap method introduced by Datta & McCormick (1993), namely the regeneration-based bootstrap, for approximating the sampling distribution of sample mean statistics in the atomic Markovian setting, may be modified so as to be second-order correct. We prove that the drawback of the original construction mainly lies in a wrong estimation of the skewness of the sampling distribution, and that it is possible to correct it by suitable standardization of the regeneration-based bootstrap statistic and recentering of the bootstrap distribution. An asymptotic result establishing the second-order accuracy of this bootstrap estimate up to $O(n^{-1}\log(n))$ (close to the rate obtained in an i.i.d. setting) is also stated under weak moment assumptions.
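For concreteness, here is a minimal Python sketch of the plain regeneration-based bootstrap that the paper starts from, assuming a chain with a known atom (a state the chain returns to): the trajectory is cut into blocks between successive visits to the atom, the blocks are resampled with replacement, and the mean of $f$ is recomputed on each pseudo-trajectory. The paper's second-order corrections (studentization and recentering of the bootstrap statistic) are not reproduced here.

```python
import numpy as np

def regeneration_bootstrap_means(chain, f, atom, n_boot=2000, rng=None):
    """Regeneration-based bootstrap distribution of the sample mean of f.

    Blocks are the segments between consecutive visits to `atom`; they are
    (approximately) i.i.d., so resampling them with replacement and recomputing
    the mean of f mimics the sampling variability of the original estimator."""
    rng = rng or np.random.default_rng(0)
    chain = np.asarray(chain)
    hits = np.flatnonzero(chain == atom)
    if len(hits) < 2:
        raise ValueError("need at least two visits to the atom")
    blocks = [chain[hits[i]:hits[i + 1]] for i in range(len(hits) - 1)]
    sums = np.array([f(b).sum() for b in blocks])      # block sums of f
    lens = np.array([len(b) for b in blocks])          # block lengths
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=len(blocks))   # resample blocks
        boot[b] = sums[idx].sum() / lens[idx].sum()            # mean on pseudo-trajectory
    return boot
```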
Cornell University - arXiv, 2019
Markov chain Monte Carlo (MCMC) methods generate samples that are asymptotically distributed according to a target distribution of interest as the number of iterations goes to infinity. Various theoretical results provide upper bounds on the distance between the target and the marginal distribution after a fixed number of iterations. These upper bounds are derived on a case-by-case basis and typically involve intractable quantities, which limits their use for practitioners. We introduce L-lag couplings to generate computable, non-asymptotic upper bound estimates for the total variation or the Wasserstein distance of general Markov chains. We apply L-lag couplings to the tasks of (i) determining MCMC burn-in, (ii) comparing different MCMC algorithms with the same target, and (iii) comparing exact and approximate MCMC. Lastly, we (iv) assess the bias of sequential Monte Carlo and self-normalized importance samplers.
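A hedged Python sketch of the idea for a finite-state chain with a known transition matrix (a toy construction of my own; the paper targets general MCMC kernels): two copies of the chain are advanced with a maximal one-step coupling, the lagged meeting time $\tau$ is recorded, and the Monte Carlo average of $\max(0, \lceil(\tau - L - t)/L\rceil)$ is used as a computable upper bound estimate on the total variation distance at iteration $t$, following the L-lag coupling construction described in the abstract.

```python
import numpy as np

def coupled_step(P, x, y, rng):
    """One maximally coupled transition for two copies of a finite-state chain.
    Chains that have met move together (faithful coupling); otherwise, with
    probability sum_z min(P[x,z], P[y,z]) both move to the same state drawn
    from the overlap, else they move independently from the residuals."""
    if x == y:
        z = rng.choice(len(P), p=P[x])
        return z, z
    overlap = np.minimum(P[x], P[y])
    a = overlap.sum()
    if rng.random() < a:
        z = rng.choice(len(P), p=overlap / a)
        return z, z
    px = (P[x] - overlap) / (1 - a)
    py = (P[y] - overlap) / (1 - a)
    return rng.choice(len(P), p=px), rng.choice(len(P), p=py)

def meeting_time(P, x0, lag=1, max_iter=10_000, rng=None):
    """Meeting time tau of an L-lag coupling: X runs `lag` steps ahead of Y."""
    rng = rng or np.random.default_rng()
    x = x0
    for _ in range(lag):                       # advance X by L steps first
        x = rng.choice(len(P), p=P[x])
    y, t = x0, lag
    while x != y and t < max_iter:
        x, y = coupled_step(P, x, y, rng)
        t += 1
    return t                                    # first t with X_t == Y_{t-L}

def tv_upper_bound(P, x0, t, lag=1, n_rep=500, rng=None):
    """Monte Carlo estimate of E[max(0, ceil((tau - lag - t)/lag))], an upper
    bound estimate on TV(law of X_t, stationary distribution)."""
    rng = rng or np.random.default_rng(0)
    taus = np.array([meeting_time(P, x0, lag, rng=rng) for _ in range(n_rep)])
    return np.mean(np.maximum(0, np.ceil((taus - lag - t) / lag)))
```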
Annals of Statistics, 1996
The Markov chain simulation method has been successfully used in many problems, including some that arise in Bayesian statistics. We give a self-contained proof of the convergence of this method in general state spaces under conditions that are easy to verify.
Econometrica, 2003
This study compares the restricted and unrestricted methods of bootstrap data generating processes (DGPs) for statistical inference. It uses hypothetical datasets simulated from a normal distribution with different ability levels. Data were analyzed using different bootstrap DGPs. In practice, it is advisable to use the restricted parametric bootstrap DGP models and thereafter check the kernel density of the empirical distributions, which should be close to normal (at least not too skewed). In total, 21,600 scenarios were replicated 200 times using bootstrap DGPs and kernel density methods. The analysis was carried out using the R statistical package. The results show that in a situation where the distribution of a test is skewed, all the scores need to be taken into account, no matter how small the sample size and the bootstrap level are. Across all the conditions considered, models HR5UR and HPN5UR yielded much larger bias and standard error, while the smallest bias values were associated with models HR5R (0.0619) and HPN5R (0.0624). The results confirm that bootstrap DGPs are vital in statistical inference.
$\|P^m(x, \cdot) - \Pi(\cdot)\|_{TV} \to 0$ as $m \to \infty$.
2020
Note this is a random variable with expected value $\pi(f)$ (i.e. the estimator is unbiased) and standard deviation of order $O(1/\sqrt{N})$. Then, by the CLT, the error $\hat\pi(f) - \pi(f)$ has a limiting normal distribution as $N \to \infty$. Therefore we can compute $\pi(f)$ by computing samples (plus some regression techniques?). But the problem is that if $\pi_u$ is complicated, then it is very difficult to simulate i.i.d. random variables from $\pi(\cdot)$. The MCMC solution is to construct a Markov chain on $\mathcal{X}$ which has $\pi(\cdot)$ as a stationary distribution, i.e.
$$\int_{\mathcal{X}} \pi(dx)\, P(x, dy) = \pi(dy).$$
Then for large $n$ the distribution of $X_n$ will be approximately stationary. We can set $Z_1 = X_n$ and obtain $Z_2, Z_3, \ldots, Z_n$ by repeating the procedure.

Remark. In practice, instead of starting a fresh Markov chain every time, we use the successive $X_n$'s, for example $(N - B)^{-1} \sum_{i=B+1}^{N} f(X_i)$ where $B$ is a burn-in. We tend to ignore the dependence problem, as many of the mathematical issues are similar in either implementation.

Remark. There are other methods of estimation, such as rejection sampling and importance sampling, but MCMC algorithms are the most widely applied.

2 MCMC and its construction

This section explains how an MCMC algorithm is constructed. We first introduce reversibility.

Definition. A Markov chain on state space $\mathcal{X}$ is reversible with respect to a probability distribution $\pi(\cdot)$ on $\mathcal{X}$ if
$$\pi(dx)\, P(x, dy) = \pi(dy)\, P(y, dx), \qquad x, y \in \mathcal{X}.$$

Proposition. If a Markov chain is reversible with respect to $\pi(\cdot)$, then $\pi(\cdot)$ is a stationary distribution for the chain.

Proof. By reversibility,
$$\int_{x \in \mathcal{X}} \pi(dx)\, P(x, dy) = \int_{x \in \mathcal{X}} \pi(dy)\, P(y, dx) = \pi(dy) \int_{x \in \mathcal{X}} P(y, dx) = \pi(dy).$$

The simplest way to construct an MCMC algorithm which satisfies reversibility is the Metropolis-Hastings algorithm.

2.1 The Metropolis-Hastings algorithm. Suppose that $\pi(\cdot)$ has a (possibly unnormalized) density $\pi_u$. Let $Q(x, \cdot)$ be essentially any other Markov chain whose transitions also have a (possibly unnormalized) density, i.e. $Q(x, dy) \propto q(x, y)\, dy$. First choose some $X_0$. Then, given $X_n$, generate a proposal $Y_{n+1}$ from $Q(X_n, \cdot)$. At the same time, flip an independent biased coin with probability of heads equal to $\alpha(X_n, Y_{n+1})$, where
$$\alpha(x, y) = \min\left\{1, \frac{\pi_u(y)\, q(y, x)}{\pi_u(x)\, q(x, y)}\right\} \quad \text{if } \pi_u(x)\, q(x, y) \neq 0,$$
and $\alpha(x, y) = 1$ when $\pi_u(x)\, q(x, y) = 0$. If the coin comes up heads, we accept the proposal and set $X_{n+1} = Y_{n+1}$; if tails, we reject the proposal and set $X_{n+1} = X_n$. Then we replace $n$ by $n + 1$ and repeat. The reason for this choice of $\alpha(x, y)$ is explained by the following result.

Proposition. The Metropolis-Hastings algorithm produces a Markov chain $\{X_n\}$ which is reversible with respect to $\pi(\cdot)$.

Proof. We want to show that for any $x, y \in \mathcal{X}$, $\pi(dx)\, P(x, dy) = \pi(dy)\, P(y, dx)$. For $x \neq y$, $\pi(dx)\, P(x, dy) \propto \pi_u(x)\, q(x, y)\, \alpha(x, y)\, dx\, dy = \min\{\pi_u(x)\, q(x, y),\ \pi_u(y)\, q(y, x)\}\, dx\, dy$, which is symmetric in $x$ and $y$.

... where $\bar{Y}_i = \frac{1}{J} \sum_j Y_{ij}$. The Gibbs sampler then proceeds by updating the $K + 3$ variables according to the above conditional distributions. This is feasible since the conditional distributions are all easily simulated (IG and N).
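To make the recipe above concrete, here is a minimal random-walk Metropolis sketch in Python (a special case of Metropolis-Hastings with a symmetric Gaussian proposal, so $q(y, x) = q(x, y)$ and the acceptance probability reduces to $\min\{1, \pi_u(y)/\pi_u(x)\}$); the target, proposal scale, and burn-in choices below are illustrative, not from the notes.

```python
import numpy as np

def metropolis_hastings(log_pi_u, x0, n_iter, proposal_scale=1.0, rng=None):
    """Random-walk Metropolis: symmetric Gaussian proposal, so the acceptance
    ratio is pi_u(y)/pi_u(x).  `log_pi_u` is the log of a possibly
    unnormalized target density."""
    rng = rng or np.random.default_rng(0)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    chain = np.empty((n_iter, x.size))
    log_px = log_pi_u(x)
    for n in range(n_iter):
        y = x + proposal_scale * rng.standard_normal(x.size)  # proposal Y_{n+1} ~ Q(X_n, .)
        log_py = log_pi_u(y)
        # accept with probability alpha(x, y) = min(1, pi_u(y)/pi_u(x))
        if np.log(rng.random()) < log_py - log_px:
            x, log_px = y, log_py
        chain[n] = x
    return chain

if __name__ == "__main__":
    # estimate moments of a standard normal target, discarding a burn-in B
    log_pi_u = lambda x: -0.5 * np.sum(x ** 2)      # unnormalized N(0, 1) log-density
    draws = metropolis_hastings(log_pi_u, x0=5.0, n_iter=20_000)
    B = 2_000
    print(draws[B:].mean(), draws[B:].var())        # ergodic averages (N - B)^{-1} sum f(X_i)
```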
2010
Simulation is an interesting alternative for solving Markovian models. However, compared to analytical and numerical solutions, it suffers from a lack of precision in the results due to the very nature of simulation, namely the choice of samples through pseudorandom generation. This paper proposes a different way to simulate Markovian models by using a bootstrap-based statistical method to minimize the effect of sample choices. The effectiveness of the proposed method, called Bootstrap simulation, is compared to numerical solution results for a set of examples described using the Stochastic Automata Networks modeling formalism.
Bootstrapping time series is one of the most acknowledged tools for making forecasts and studying the statistical properties of an evolutive phenomenon. The idea underlying this procedure is to replicate the phenomenon on the basis of an observed sample. One of the most important classes of bootstrap procedures is based on the assumption that the sampled phenomenon evolves according to a Markov chain. Such an assumption does not apply when the process takes values in a continuous set, as frequently happens for time series related to economic and financial variables. In this paper we apply Markov chain theory for bootstrapping continuous processes, relying on the idea of discretizing the support of the process and suggesting Markov chains of order k to model the evolution of the time series under study. The difficulty of this approach is that, even for small k, the number of rows of the transition probability matrix is too large, and this leads to a bootstrap procedure of high complexity. In many practical cases such complexity is not fully justified by the information really required to replicate a phenomenon satisfactorily. In this paper we propose a methodology to reduce the number of rows without losing "too much" information on the process evolution. This requires a clustering of the rows that preserves as much as possible the "law" that originally generated the process. The novel aspect of our work is the use of Mixed Integer Linear Programming for formulating and solving the problem of clustering similar rows in the original transition probability matrix. Even though it is well known that this problem is computationally hard, in our application medium-size real-life instances were solved efficiently. Our empirical analysis, which is carried out on two time series of prices from the German and the Spanish electricity markets, shows that the use of the aggregated transition probability matrix does not affect the bootstrapping procedure, since the characteristic features of the original series are maintained in the resampled ones.
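A minimal Python sketch of the bootstrap scheme the paper builds on, under simplifying assumptions of my own (quantile binning for the discretization, no row aggregation): the support is discretized into a finite state space, an order-k transition law is estimated from the observed k-tuples, and a bootstrap path is simulated from it, mapping each state back to an observed value in its bin. The MILP-based clustering of rows, which is the novel contribution of the paper, is not reproduced here.

```python
import numpy as np

def markov_bootstrap(series, n_bins=10, k=2, horizon=None, rng=None):
    """Order-k Markov chain bootstrap of a continuous time series:
    discretize the support into `n_bins` quantile states, estimate the
    order-k transition law from observed k-tuples, simulate a new state
    path, and map each state back to an observed value in its bin."""
    rng = rng or np.random.default_rng(0)
    series = np.asarray(series, dtype=float)
    horizon = horizon or len(series)
    # discretize: state = quantile bin of each observation
    edges = np.quantile(series, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(series, edges)
    # estimate order-k transition counts: key = last k states
    counts = {}
    for t in range(k, len(states)):
        key = tuple(states[t - k:t])
        counts.setdefault(key, np.zeros(n_bins))[states[t]] += 1
    # simulate the bootstrap path on the discretized states
    path = list(states[:k])
    for _ in range(horizon - k):
        c = counts.get(tuple(path[-k:]))
        if c is None or c.sum() == 0:           # unseen history: restart from data
            path.append(rng.choice(states))
        else:
            path.append(rng.choice(n_bins, p=c / c.sum()))
    # map each state back to a value drawn from the observations in that bin
    bins = [series[states == s] for s in range(n_bins)]
    return np.array([rng.choice(bins[s]) if len(bins[s]) else np.nan for s in path])
```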