2008, Applied Mathematics Letters
This work deals with the diffusion approximation of the first integral of a stochastic dynamic system switched by a semi-Markov process in a series scheme. The results are obtained by a singular perturbation approach using the compensating operator of the extended Markov renewal process.
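The abstract does not display the compensating operator itself; for orientation, one commonly used form (in the Korolyuk–Limnios style, which is an assumption here since the abstract does not fix its notation) for a Markov renewal process (x_n, τ_n) with semi-Markov kernel Q(x, dy, ds) = P(x, dy) G_x(ds) and mean sojourn time m(x) is the following.

```latex
% One commonly used definition of the compensating operator of the extended
% Markov renewal process (Korolyuk--Limnios style). The kernel factorization
% Q(x,dy,ds) = P(x,dy) G_x(ds) and the notation m(x) for the mean sojourn time
% are assumptions for this sketch, not taken from the abstract.
\[
  \mathbf{L}\,\varphi(x,t)
  \;=\;
  \frac{1}{m(x)}
  \left[
    \int_0^{\infty} G_x(ds) \int_X P(x,dy)\,\varphi(y,\,t+s)
    \;-\;
    \varphi(x,t)
  \right],
  \qquad
  m(x) = \int_0^{\infty} s\, G_x(ds).
\]
```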
arXiv (Cornell University), 2022
The coefficients of stochastic differential equations with Markovian switching (SDEwMS) additionally depend on a Markov chain, and there is no notion of differentiating such functions with respect to the Markov chain. In particular, this implies that the Itô-Taylor expansion for SDEwMS is not a straightforward extension of the Itô-Taylor expansion for stochastic differential equations (SDEs). Further, higher-order numerical schemes for SDEwMS are not available in the literature, perhaps because of the absence of such an Itô-Taylor expansion. In this article, we first overcome these challenges and derive the Itô-Taylor expansion for SDEwMS, under some suitable regularity assumptions on the coefficients, by developing new techniques. Second, we demonstrate an application of our first result on the Itô-Taylor expansion to the numerical approximation of SDEwMS. We derive an explicit scheme for SDEwMS using the Itô-Taylor expansion and show that the strong rate of convergence of our scheme is equal to γ ∈ {n/2 : n ∈ N} under some suitable Lipschitz-type conditions on the coefficients and their derivatives. It is worth mentioning that the design and analysis of the Itô-Taylor expansion and of the γ ∈ {n/2 : n ∈ N}-order scheme for SDEwMS become much more complex and involved due to the entangling of continuous dynamics and discrete events. Finally, our results coincide with the corresponding results on SDEs when the state space of the Markov chain is a singleton.
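The abstract does not reproduce the scheme itself; as a point of reference, a minimal Euler–Maruyama-type sketch (strong order 1/2, not the paper's higher-order γ-scheme) for an SDE with Markovian switching is given below. The drift b, diffusion sigma, generator Q and all numerical values are illustrative assumptions.

```python
import numpy as np

# Minimal Euler-Maruyama sketch (strong order 1/2) for an SDE with Markovian
# switching:  dX_t = b(X_t, a_t) dt + sigma(X_t, a_t) dW_t,
# where a_t is a continuous-time Markov chain with generator Q.  The drift,
# diffusion and generator below are illustrative only; higher-order gamma-schemes
# would add further Ito-Taylor terms involving the chain.

rng = np.random.default_rng(0)

Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])                       # generator of a two-state chain
b = lambda x, a: -x if a == 0 else 0.5 * x        # regime-dependent drift
sigma = lambda x, a: 0.3 if a == 0 else 1.0       # regime-dependent diffusion

def euler_maruyama_switching(x0, a0, T, n_steps):
    dt = T / n_steps
    x, a = x0, a0
    for _ in range(n_steps):
        # one-step transition of the chain, first-order approximation exp(Q dt) ~ I + Q dt
        p = np.eye(2)[a] + Q[a] * dt
        a_next = rng.choice(2, p=p)
        # Euler-Maruyama update with the regime frozen over the step
        x = x + b(x, a) * dt + sigma(x, a) * rng.normal(0.0, np.sqrt(dt))
        a = a_next
    return x, a

print(euler_maruyama_switching(x0=1.0, a0=0, T=1.0, n_steps=1000))
```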
Ukrainian Mathematical Journal, 2005
We consider an evolutionary system switched by a semi-Markov process. For this system we obtain nonhomogeneous diffusion approximation results in which the initial process is compensated by the averaging function in the averaging approximation scheme.
Semi-Markov Models and Applications, 1999
Stochastic processes with semi-Markov switches (or in a semi-Markov environment) and general switching processes are considered. In the case of an asymptotically ergodic environment, functional Averaging Principle and Diffusion Approximation type theorems for the trajectory of the process are proved. In the case of an asymptotically consolidated environment, convergence to a solution of a differential or stochastic differential equation with Markov switches is studied. Applications to the analysis of random movements with fast semi-Markov switches and semi-Markov queueing systems under heavy traffic conditions are considered.
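A minimal simulation sketch of the averaging-principle statement for fast switching: as the switching speeds up, the switched path approaches the solution of the averaged ODE. The two-state Markov (rather than semi-Markov) environment, the rates and the right-hand side f below are illustrative assumptions; the paper treats general semi-Markov switches and the diffusion-approximation correction as well.

```python
import numpy as np

# Averaging principle sketch for a process with fast switching:
#   dx/dt = f(x, y(t/eps)),   y a two-state Markov chain with stationary law pi,
# so that, as eps -> 0, x(.) approaches the averaged ODE dx/dt = sum_i pi_i f(x, i).

rng = np.random.default_rng(1)

rates = np.array([3.0, 6.0])                          # exit rates of states 0 and 1
pi = np.array([rates[1], rates[0]]) / rates.sum()     # stationary distribution
f = lambda x, i: (-2.0 * x + 1.0) if i == 0 else (x - 0.5)

def switched_path(eps, T=2.0, dt=1e-3, x0=0.0):
    x, y = x0, 0
    t, t_next = 0.0, rng.exponential(eps / rates[0])
    while t < T:
        while t >= t_next:                            # fast switching: sojourns ~ eps
            y = 1 - y
            t_next += rng.exponential(eps / rates[y])
        x += f(x, y) * dt                             # Euler step of the driven ODE
        t += dt
    return x

def averaged_path(T=2.0, dt=1e-3, x0=0.0):
    x = x0
    for _ in range(int(T / dt)):
        x += (pi[0] * f(x, 0) + pi[1] * f(x, 1)) * dt
    return x

print("eps = 0.1  :", switched_path(0.1))
print("eps = 0.01 :", switched_path(0.01))
print("averaged   :", averaged_path())
```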
Journal of Statistical Physics, 1986
We introduce singular perturbation methods for constructing asymptotic approximations to the mean first passage time for Markov jump processes. Our methods are applied directly to the integral equation for the mean first passage time and do not involve the use of diffusion approximations. An absorbing interval condition is used to properly account for the possible jumps of the process over the boundary, which leads to a Wiener-Hopf problem in a neighborhood of the boundary. A model of unimolecular dissociation is considered to illustrate our methods.
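For comparison with the integral-equation approach described above, the elementary finite-state computation of the mean first passage time of a Markov jump process (a linear system built from the generator) can be sketched as follows; the generator and the absorbing set are hypothetical, and this reduction is not the paper's method.

```python
import numpy as np

# Elementary finite-state baseline for the mean first passage time (MFPT) of a
# Markov jump process: with generator Q and absorbing set A, the MFPT tau(i)
# satisfies  sum_j Q[i, j] * tau(j) = -1  for i outside A, and tau = 0 on A.

Q = np.array([[-2.0, 1.5, 0.5],
              [1.0, -3.0, 2.0],
              [0.0, 0.0, 0.0]])      # state 2 is absorbing in this example

transient = [0, 1]                   # states outside the absorbing set A = {2}
A_sub = Q[np.ix_(transient, transient)]
tau = np.linalg.solve(A_sub, -np.ones(len(transient)))
print(dict(zip(transient, tau)))     # MFPT from states 0 and 1 into state 2
```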
Theory of Probability and Mathematical Statistics, 2014
This paper is a continuation of the paper [A. Yu. Veretennikov and A. M. Kulik, Diffusion approximation for systems with weakly ergodic Markov perturbations. I, Theory Probab. Math. Statist. 87 (2012), 13-29]. Some corollaries of the general results are given for several particular cases of independent interest. An example of a process that solves a stochastic differential equation with Lévy noise is considered; we show that the assumptions imposed on the process can be verified effectively.
Cybernetics and Systems Analysis, 1998
The development of a broad class of stochastic systems can be described in terms of stochastic processes whose behavior spontaneously changes (switches) at certain time moments. These switching points are random functionals of the previous path. Such processes arise in the theory of queueing systems and networks, in branching and migration phenomena, in the analysis of stochastic dynamical systems with random errors, and in other applications. The description of these models relies on a special class of discrete-event stochastic processes introduced in [1-3], where they are called switching processes. A switching process is a two-component process (x(t), ζ(t)), t ≥ 0, with values in the space (X, R^r) for which there is a sequence of time moments t_1 < t_2 < ... such that x(t) = x(t_k) on each time interval [t_k, t_{k+1}) and the behavior of the process ζ(t) on this interval depends only on the values (x(t_k), ζ(t_k)). The moments t_k are called switching points; x(t) is the discrete switching component. Such processes are describable in terms of constructive characteristics [1-3]. They are useful for the analysis of the asymptotic behavior of stochastic systems with "fast" and "rare" switching events [2-7]. Switching processes are a natural generalization of classes of Markov processes homogeneous in the second component [8], processes with independent increments and semi-Markov switchings [2, 3, 9], Markov aggregates [8], Markov processes with semi-Markovian random interventions [10], and Markovian and semi-Markovian evolutions [11-14]. Limit theorems on the convergence of one switching process to another (in the class of switching processes) have been proved in [2, 3]. These results have led to a theory of asymptotic state aggregation of nonhomogeneous Markov and semi-Markov processes [3].

In this article we consider limit theorems on the convergence of the path of a switching process to the solution of some ordinary differential equation (the averaging principle) and the convergence of the normalized deviation to some diffusion process (the diffusion approximation) for an important subclass of switching processes, the so-called semi-Markov recurrent processes (SMRP) with additional dependence on the current switching point. The approach to the study of recursive algorithms with random response time is based on representation of the original process as a superposition of a recurrent embedded process and an accumulated-time counter process, followed by application of limit theorems for discrete-time recursive stochastic algorithms and superpositions of random functions. We first prove some generalizations of theorems that constitute the averaging principle and the diffusion approximation for SMRP [6]. Given are independent families of random vectors {(ξ_k(s, t), τ_k(s, t)), s ∈ R^m, t ≥ 0}, k ≥ 0, with values in R^m × [0, ∞) whose characteristic functions are B_{R^m}-measurable (B_{R^m} is the Borel σ-algebra in R^m), and the initial value s_0. Define the sequences …
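A small sketch of the recurrent construction just described: a piecewise-constant path driven by jump sizes ξ_k and sojourn times τ_k, with the next value and the next switching point depending on the current value. The particular distributions used are placeholders, not taken from the article.

```python
import numpy as np

# Recurrent construction of a semi-Markov recurrent process (SMRP) path:
#   S_{k+1} = S_k + xi_k(S_k),   t_{k+1} = t_k + tau_k(S_k),
# with the path piecewise constant, S(t) = S_k on [t_k, t_{k+1}).

rng = np.random.default_rng(2)

def xi(s):                           # jump of the embedded process, depending on the current value
    return 0.1 * (1.0 - s) + 0.05 * rng.normal()

def tau(s):                          # sojourn time until the next switching point
    return rng.exponential(0.2 + 0.1 * abs(s))

def smrp_path(s0=0.0, T=10.0):
    times, values = [0.0], [s0]
    t, s = 0.0, s0
    while t < T:
        t, s = t + tau(s), s + xi(s)
        times.append(t)
        values.append(s)
    return np.array(times), np.array(values)

times, values = smrp_path()
print(times[:5])
print(values[:5])
```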
SIAM Journal on Control and Optimization, 1991
The Markov chain approximation method is a widely used numerical approach to computing optimal controls and value functions for general nonlinear jump diffusions, with a possible reflecting boundary. We extend the method to models with singular controls, where the control increment has the form g(x(t−))dH(t), which we call state dependent owing to the multiplier g(x). For the most part, past work concerned the case where g(•) is constant. There are major differences in the properties of and treatment of the two cases. Owing to the possibility of "multiple simultaneous impulses," H(•) must be interpreted in a generalized sense, and the analysis done in a "stretched-out" time scale, analogous to that previously used by the author and colleagues.
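For orientation, here is a minimal sketch of the classical Markov chain approximation method for a one-dimensional controlled diffusion with ordinary (non-singular, constant-multiplier) controls; the state-dependent singular-control extension developed in the paper is not reproduced, and all model data are hypothetical.

```python
import numpy as np

# Markov chain approximation method (classical setting) for the 1D controlled
# diffusion  dx = u dt + sigma dW  on [0, 1], discounted running cost x^2 + 0.5 u^2.
# Grid transition probabilities and interpolation intervals follow the standard
# locally consistent construction; the boundary is handled by reflection.

sigma, beta, h = 0.5, 1.0, 0.02
xs = np.arange(0.0, 1.0 + h, h)
controls = (-1.0, 0.0, 1.0)
V = np.zeros_like(xs)

for _ in range(2000):                       # value iteration on the approximating chain
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        best = np.inf
        for u in controls:
            D = sigma**2 + h * abs(u)       # normalising factor
            p_up = (sigma**2 / 2 + h * max(u, 0.0)) / D
            p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / D
            dt = h**2 / D                   # interpolation interval of the chain
            i_up, i_dn = min(i + 1, len(xs) - 1), max(i - 1, 0)   # reflect at the edges
            cost = (x**2 + 0.5 * u**2) * dt + np.exp(-beta * dt) * (
                p_up * V[i_up] + p_dn * V[i_dn])
            best = min(best, cost)
        V_new[i] = best
    V = V_new

print(V[::10])                              # approximate value function on the grid
```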
Methodology and Computing in Applied Probability, 2004
This paper presents the numerical solution of the evolution equation of a homogeneous semi-Markov process (HSMP) with a general quadrature method. Furthermore, results are given that justify this approach by proving that the numerical solution tends to the solution of the evolution equation of the continuous-time HSMP. The results obtained generalize classical results on the numerical solution of integral equations by applying them to particular kinds of integral equation systems. A method for obtaining the discrete-time HSMP is shown by applying a very particular quadrature formula for the discretization. Following that, the problem of obtaining the continuous-time HSMP from the discrete one is considered. In addition, the discrete-time HSMP is presented in matrix form, and it is proved that the solution of the evolution equation of this process always exists. Afterwards, an algorithm for solving the discrete-time HSMP is given. Finally, a simple application of the HSMP to a real-data social security example is given.
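A compact sketch of the discrete-time HSMP evolution equation solved recursively, in the spirit of the quadrature approach described above; the two-state kernel used (embedded chain P, geometric sojourn times) is an illustrative assumption, not data from the paper.

```python
import numpy as np

# Discrete-time HSMP evolution equation solved recursively: with cumulative
# semi-Markov kernel Q(t), increments B(s) = Q(s) - Q(s-1) and H_i(t) = sum_j Q_ij(t),
# the interval transition probabilities satisfy
#   Phi(t) = diag(1 - H(t)) + sum_{s=1}^{t} B(s) @ Phi(t - s).

n_states, T = 2, 20
P = np.array([[0.0, 1.0],
              [0.6, 0.4]])                       # embedded jump chain
def G(i, t):                                     # sojourn-time c.d.f. in state i
    return 1.0 - (0.7 if i == 0 else 0.5) ** t

Q = np.array([[[P[i, j] * G(i, t) for j in range(n_states)]
               for i in range(n_states)] for t in range(T + 1)])   # Q[t, i, j]
B = np.diff(Q, axis=0)                           # B[s-1] = Q(s) - Q(s-1)
H = Q.sum(axis=2)                                # H[t, i]

Phi = np.zeros((T + 1, n_states, n_states))
for t in range(T + 1):
    Phi[t] = np.diag(1.0 - H[t])                 # no jump up to time t
    for s in range(1, t + 1):
        Phi[t] += B[s - 1] @ Phi[t - s]          # first jump at time s, then evolve t - s
    # each row of Phi[t] sums to one by construction

print(Phi[T])                                    # interval transition probabilities at t = T
```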
Journal of Applied Probability, 2007
For a Markov renewal process where the time parameter is discrete, we present a novel method for calculating the asymptotic variance. Our approach is based on the key renewal theorem and is applicable even when the state space of the Markov chain is countably infinite.
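As a finite-state cross-check (not the key-renewal-theorem method of the paper), the asymptotic variance of an additive functional of an ergodic Markov chain can be computed from the fundamental matrix; the chain and reward below are illustrative.

```python
import numpy as np

# Asymptotic variance of n^{-1/2} * sum_{k<n} (f(X_k) - pi f) for an ergodic
# chain with transition matrix P, stationary law pi and reward f:
#   sigma^2 = 2 <fbar, Z fbar>_pi - <fbar, fbar>_pi,
# where fbar = f - pi f and Z = (I - P + 1 pi)^{-1} is the fundamental matrix.

P = np.array([[0.2, 0.8],
              [0.5, 0.5]])
f = np.array([1.0, 3.0])

w, v = np.linalg.eig(P.T)                        # stationary law: left Perron vector
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

fbar = f - pi @ f
Z = np.linalg.inv(np.eye(2) - P + np.outer(np.ones(2), pi))
sigma2 = 2 * pi @ (fbar * (Z @ fbar)) - pi @ (fbar ** 2)
print("asymptotic variance:", sigma2)
```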
Journal of Applied Mathematics, 2015
Weak convergence of semi-Markov processes in the diffusion approximation scheme is studied in this paper. The problem is not new and has been studied in many papers using convergence of random processes. Unlike other studies, in this paper we use the concept of the compensating operator. It allows us to obtain sufficient conditions for weak convergence under conditions on the local characteristics of the original semi-Markov process.