
A Crash Intro.

to
Stochastic Differential Equations
and Poisson Processes
School of Economics
Yonsei University
Kyongchae Jung

In order to define stochastic differential equations, I first illustrate the Brownian
motion process, a fundamental continuous-path process. Given its importance in default
modeling, I also introduce the Poisson process, to some extent the purely-jump analogue of
Brownian motion. Brownian motions and Poisson processes are among the most important
random processes of probability.

These notes are far from being complete or fully rigorous, in that I privilege the
intuitive aspect, but I give references for the reader who is willing to deepen her knowledge
on such matters.

I note that the understanding, and subsequent implementation, of most of the essential
and important issues in interest rate modeling does not require excessively exotic tools of
stochastic calculus. The basic paradigms, risk-neutral valuation and change of numeraire, in
fact, essentially involve Ito's formula and the Girsanov theorem. I therefore introduce such
results quickly and intuitively. I do not insist upon more fundamental questions, since most of
the problems to be addressed in practice can very often be solved with the basic tools above.

1. From Deterministic to Stochastic Differential Equations

Here I present a quick and informal introduction to SDE's. I consider the scalar case
to simplify exposition.

I consider a probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)$. The usual interpretation of this space as
an experiment can help intuition. The generic experiment result is denoted by $\omega \in \Omega$; $\Omega$
represents the set of all possible outcomes of the random experiment, and the $\sigma$-field $\mathcal{F}$
represents the set of events $A \subset \Omega$ with which we shall work. The $\sigma$-field $\mathcal{F}_t$ represents the
information available up to time $t$. We have $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ for all $s \le t$, meaning that "the
information increases in time", never exceeding the whole set of events $\mathcal{F}$. The family of
$\sigma$-fields $(\mathcal{F}_t)_{t \ge 0}$ is called a filtration.

If the experiment result is $\omega$ and $\omega \in A \in \mathcal{F}$, we say that the event $A$ occurred. If
$\omega \in A \in \mathcal{F}_t$, we say that the event $A$ occurred at a time smaller than or equal to $t$.

I use the symbol $E$ to denote expectation, and $E\{\,\cdot \mid \mathcal{F}_t\}$ denotes expectation conditional
on the information contained in $\mathcal{F}_t$.

I begin with a simple example. Consider a population growth model. Let $X_t = X_t(\omega) \in \mathbb{R}$,
$t \ge 0$, be the population at time $t \ge 0$. The simplest model for the population growth is
obtained by assuming that the growth rate $dX_t/dt$ is proportional to the current population.
This can be translated into the differential equation:

$$ dX_t = K X_t \, dt, $$

where $K$ is a real constant. Now suppose that, due to some complications, it is no longer
realistic to assume the initial condition $X_0$ to be a deterministic constant. Then we may decide
to let $X_0$ be a random variable $X_0(\omega)$, and to model the population growth by the differential
equation:

$$ dX_t(\omega) = K X_t(\omega) \, dt, \quad X_0 = X_0(\omega). $$

The solution of this last equation is $X_t(\omega) = X_0(\omega) e^{K t}$. Note that $X_t(\omega)$ is a
random variable, but all its randomness comes from the initial condition $X_0(\omega)$. For each
experiment result $\omega$, the map $t \mapsto X_t(\omega)$ is called the path of $X$ associated to $\omega$.

As a further step, suppose that not even $K$ is known for certain, but that also our
knowledge of $K$ is perturbed by some randomness, which we model as the "increment" of a
stochastic process $W_t(\omega)$, $t \ge 0$, so that

$$ dX_t(\omega) = K X_t(\omega) \, dt + X_t(\omega) \, dW_t(\omega), \quad X_0 = X_0(\omega), \ t \ge 0. \quad (1) $$

Here, $dW_t(\omega)$ represents a noise process that adds randomness to $K$.

Equation (1) is an example of a stochastic differential equation (SDE). More generally, a
SDE is written as

$$ dX_t(\omega) = b(t, X_t(\omega)) \, dt + \sigma(t, X_t(\omega)) \, dW_t(\omega), \quad X_0 = X_0(\omega). \quad (2) $$

The function $b$, corresponding to the deterministic part of the SDE, is called the drift. The
function $\sigma$ (or sometimes its square $\sigma^2$) is called the diffusion coefficient. Note that the
randomness enters the differential equation from two sources: the "noise term" $\sigma(\cdot)\,dW$
and the initial condition $X_0(\omega)$.

Usually, the solution $X$ of the SDE is also called a diffusion process, because some
particular SDE's can be used to arrive at a model of physical diffusion. In general the paths
$t \mapsto X_t(\omega)$ of a diffusion process $X$ are continuous.

2. Brownian Motion

The process whose "increments" $dW_t$ are candidates for representing the noise
process in (2) is the Brownian motion $W$. This process has important properties: it has stationary
and independent Gaussian increments "$dW_t$", or more precisely, $W_0 = 0$ and for any $0 \le u \le s \le t$:

$$ W_t - W_s \ \text{independent of} \ W_u \quad \text{(independent increments)} \quad (3) $$

$$ W_t - W_s \sim W_{t-s} - W_0 \quad \text{(stationary increments)} \quad (4) $$

$$ W_t - W_s \sim \mathcal{N}(0, t - s) \quad \text{(Gaussian increments)} \quad (5) $$

The above assumptions imply intuitively that, for example, $W_{t+\Delta t} - W_t$ is
independent of the history of $W$ up to time $t$. Therefore, $W_{t+\Delta t} - W_t$ can assume any
value independently of $W_u$, $u \le t$.

The definition of Brownian motion also requires the paths $t \mapsto W_t(\omega)$ to be continuous.
It turns out that the properties listed above imply that, although the paths are continuous, they
are (almost surely) nowhere differentiable. In fact, the paths have unbounded variation, and
hence

$$ \frac{dW_t}{dt} $$

does not exist.
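The unbounded variation can be checked numerically. Below is a minimal sketch (assuming NumPy; the grid sizes and seed are arbitrary choices) showing that the average difference quotient $|W_{t+\Delta t} - W_t| / \Delta t$ blows up as the mesh shrinks, consistently with nowhere-differentiable paths:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0

# Difference quotients |W(t+dt) - W(t)| / dt over finer and finer grids.
# Since the increment has standard deviation sqrt(dt), the quotient is of
# size 1/sqrt(dt) and blows up as dt -> 0.
quotients = []
for n in (10, 1000, 100000):
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    quotients.append(np.mean(np.abs(dW)) / dt)

print(quotients)
```

Each refinement of the grid by a factor 100 multiplies the average quotient by roughly 10, as the $1/\sqrt{\Delta t}$ scaling predicts.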

3. Stochastic Integrals

Since $dW_t/dt$ does not exist, what meaning can we give to equation (2)? The answer
relies on rewriting (2) in integral form:

$$ X_t = X_0 + \int_0^t b(s, X_s) \, ds + \int_0^t \sigma(s, X_s) \, dW_s, \quad (6) $$

so that, from now on, all differential equations involving terms like $dW$ are meant as integral
equations, in the same way as (2) will be an abbreviation for (6).

However, we are not done yet, since we have to deal with the new problem of
defining an integral like $\int_0^t \sigma(s, X_s)\,dW_s$. A priori it is not possible to define it as a
Stieltjes integral on the paths, since they have unbounded variation. Nonetheless, under some
"reasonable" assumptions that we do not mention, it is still possible to define such integrals a
la Stieltjes. The price to be paid is that the resulting integral will depend on the chosen points
of the sub-partitions (whose mesh tends to zero) used in the limit that defines the integral.
More specifically, consider the following definition. Take an interval $[0, t]$ and consider the
following dyadic partition of $[0, t]$ depending on an integer $n$,

$$ t_i^n := \min\left( \frac{i}{2^n}, \, t \right), \quad i = 0, 1, 2, \ldots $$

Notice that from a certain $\bar{i}$ on all terms collapse to $t$, i.e., $t_i^n = t$ for all $i \ge \bar{i}$. For
each $n$ we have such a partition, and when $n$ increases the partition contains more elements,
giving a better discrete approximation of the continuous interval $[0, t]$. Then define the integral
as

$$ \int_0^t \sigma(s, X_s) \, dW_s := \lim_{n \to \infty} \sum_{i=0}^{\infty} \sigma\left(\tau_i^n, X_{\tau_i^n}\right) \left( W_{t_{i+1}^n} - W_{t_i^n} \right), $$

where $\tau_i^n$ is any point in the interval $[t_i^n, t_{i+1}^n]$. Now, by choosing $\tau_i^n = t_i^n$ (initial point of the
subinterval) we have the definition of the Ito integral, whereas by taking $\tau_i^n = (t_i^n + t_{i+1}^n)/2$
(middle point) we obtain a different result, the Stratonovich integral.

The Ito integral has interesting probabilistic properties (for example, it is a martingale,
an important type of stochastic process that will be briefly defined below), but leads to a
calculus where the standard chain rule is not preserved since there is a non-zero contribution
of the second order terms. On the contrary, although probabilistically less interesting, the
Stratonovich integral does preserve the ordinary chain rule, and is preferable from the
viewpoint of properties of the paths.

To better understand the difference between these two definitions, we can resort to
the following classical example of a stochastic integral computed both with the Ito calculus and
the Stratonovich calculus:

$$ \text{Ito} \ \to \ \int_0^t W_s \, dW_s = \frac{W_t^2 - t}{2} $$

$$ \text{Stratonovich} \ \to \ \int_0^t W_s \circ dW_s = \frac{W_t^2}{2} $$

To distinguish between the two definitions, a symbol "$\circ$" is often introduced to denote
the Stratonovich version as follows:

$$ \int_0^t W_s \circ dW_s. $$

In differential notation, one then has

$$ \text{Ito} \ \to \ d(W_t^2) = 2 W_t \, dW_t + dt \quad (7) $$

$$ \text{Stratonovich} \ \to \ d(W_t^2) = 2 W_t \circ dW_t \quad (8) $$

In the Ito version, the "$dt$" term originates from second order effects, which are not negligible
as in ordinary calculus. Note that the first integral is a martingale (so that, for example, it
has constant expected value equal to zero, which is an important probabilistic property), but
does not satisfy the formal rules of calculus, as the second one instead does (which is not a
martingale).
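The two integrals above can be reproduced with discrete sums along a simulated path. The sketch below (NumPy assumed; step count and seed are arbitrary) uses the left endpoint for the Ito sum, and the average of the two endpoints for the Stratonovich sum, which for this integrand converges to the same limit as the midpoint rule:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
T = 1.0
dt = T / n
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
dW = np.diff(W)

# Left-endpoint sums give the Ito integral of W dW ...
ito = np.sum(W[:-1] * dW)
# ... while averaging the endpoints gives the Stratonovich integral
# (here the sum even telescopes exactly to W_T^2 / 2).
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)

WT = W[-1]
print(ito, (WT**2 - T) / 2)   # Ito:          (W_T^2 - T) / 2
print(strat, WT**2 / 2)       # Stratonovich:  W_T^2 / 2
```

The gap between the two sums is exactly half of the sum of squared increments, which by the quadratic-variation property discussed below is approximately $t$.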

In general, stochastic integrals are defined a la Lebesgue rather than a la
Riemann-Stieltjes. One defines the stochastic integral for increasingly more sophisticated
integrands (indicators, simple functions, ...), and then takes the limit in a suitable sense.

4. Martingales, Driftless SDEs and Semimartingales

In our discussion above, we have mentioned the concept of martingale. To give a quick
idea, consider a process $X$ satisfying the following measurability and integrability conditions.

Measurability: $\mathcal{F}_t$ includes all the information on $X$ up to time $t$, usually expressed in the
literature by saying that $X$ is adapted to $(\mathcal{F}_t)_{t \ge 0}$;

Integrability: the relevant expected values exist.

A martingale is a process satisfying these two conditions and such that the following
property holds for each $s \le t$:

$$ E\{ X_t \mid \mathcal{F}_s \} = X_s. $$

This definition states that, if we consider $s$ as the present time, the expected value at a future
time $t$ given the current information is equal to the current value. This is, among other
things, a picture of a "fair game", where it is not possible to gain or lose on average. It turns
out that the martingale property is also suited to model the absence of arbitrage in
mathematical finance. To avoid arbitrage, one requires that certain fundamental processes of
the economy be martingales, so that there are no "safe" ways to make money from nothing out
of them.

Consider an SDE admitting a unique solution (resulting in a diffusion process, as
stated above). This solution is a martingale when the equation has zero drift. In other terms,
the solution of the SDE (2) is a martingale when $b(t, x) = 0$ for all $t$ and $x$:

$$ dX_t = \sigma(t, X_t) \, dW_t, \quad X_0 = X_0(\omega). $$

Therefore, in diffusion-processes language, martingale means driftless diffusion process.

A submartingale is a similar process $X$ satisfying instead

$$ E\{ X_t \mid \mathcal{F}_s \} \ge X_s. $$

This means that the expected value of the process grows in time, and that averages of future
values of the process given the current information always exceed (or at least are equal to)
the current value.

Similarly, a supermartingale satisfies

$$ E\{ X_t \mid \mathcal{F}_s \} \le X_s, $$

so that the expected value of the process decreases in time, and averages of future values of
the process given the current information are always smaller than (or at most equal to)
the current value.

Supermartingales and submartingales are particular cases of the larger family of
semimartingales, i.e. processes that can be decomposed as the sum of a (local) martingale and
a finite-variation process.

5. Quadratic Variation

The quadratic variation of a stochastic process $X$ with continuous paths $t \mapsto X_t(\omega)$ is
defined as follows (using the dyadic partition introduced above):

$$ \langle X \rangle_t := \lim_{n \to \infty} \sum_{i=0}^{\infty} \left( X_{t_{i+1}^n} - X_{t_i^n} \right)^2. $$

Intuitively this could be written as a "second order" integral:

$$ \langle X \rangle_t = \int_0^t (dX_s)^2, $$

or, even more intuitively, in the differential form

$$ d\langle X \rangle_t = dX_t \, dX_t. $$

It is easy to check that a process $X$ whose paths $t \mapsto X_t(\omega)$ are differentiable for almost all $\omega$
satisfies $\langle X \rangle_t = 0$. In case $W$ is a Brownian motion, it can be proved, instead, that

$$ \langle W \rangle_t = t, \quad \text{for each } t, $$

which can be written in a more informal way as

$$ dW_t \, dW_t = dt. $$

Again, this comes from the fact that the Brownian motion moves so quickly that second order
effects are not negligible. Instead, a process whose trajectories are differentiable cannot move
so quickly, and therefore its second order effects do not contribute.
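A quick numerical check of both claims (a hedged sketch, with arbitrary grid size and seed): summing squared increments along a Brownian path gives approximately $t$, while the same sum for the differentiable path $s \mapsto s$ is negligible:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2**18
T = 1.0

# Quadratic variation along a fine grid: sum of squared increments.
dW = rng.normal(0.0, np.sqrt(T / n), n)   # Brownian increments
qv_brownian = np.sum(dW**2)

# Same computation for the differentiable path s -> s.
t_grid = np.linspace(0.0, T, n + 1)
qv_smooth = np.sum(np.diff(t_grid)**2)

print(qv_brownian)  # close to T = 1
print(qv_smooth)    # equals T^2 / n, vanishing as the grid refines
```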

In case the process $X$ is equal to the deterministic process $t \mapsto t$, so that $X_t = t$, we
immediately retrieve the classical result from (deterministic) calculus:

$$ dt \, dt = 0. $$

6. Quadratic Covariation

One can also define the quadratic covariation of two processes $X$ and $Y$ with
continuous paths as

$$ \langle X, Y \rangle_t := \lim_{n \to \infty} \sum_{i=0}^{\infty} \left( X_{t_{i+1}^n} - X_{t_i^n} \right) \left( Y_{t_{i+1}^n} - Y_{t_i^n} \right). $$

Intuitively this could be written as a "second order" integral:

$$ \langle X, Y \rangle_t = \int_0^t dX_s \, dY_s, $$

or, in differential form,

$$ d\langle X, Y \rangle_t = dX_t \, dY_t. $$

It is then easy to check that, denoting again by $t$ the deterministic process $t \mapsto t$,

$$ \langle W, t \rangle_t = 0, \quad \text{for each } t, $$

which can be informally written as

$$ dW_t \, dt = 0. $$

7. Solution to a General SDE

Let us go back to our general SDE, and let us take time-homogeneous coefficients for
simplicity:

$$ dX_t = b(X_t) \, dt + \sigma(X_t) \, dW_t, \quad X_0 = X_0(\omega). \quad (9) $$

Under which conditions does it admit a unique solution in the Ito sense? Standard theory tells
us that it is enough to have both the $b$ and $\sigma$ coefficients satisfy Lipschitz continuity (and
linear growth, which does not follow automatically in the time-inhomogeneous case or with
local Lipschitz continuity only). These sufficient conditions are valid for deterministic differential
equations as well, and can be weakened, especially in dimension one. Typical examples
showing how, without Lipschitz continuity or linear growth, existence and uniqueness of
solutions can fail are the following:

$$ dX_t = X_t^2 \, dt, \quad X_0 = 1 \ \Rightarrow \ X_t = \frac{1}{1 - t}, \quad t \in [0, 1) $$

(explosion in finite time) and

$$ dX_t = 3 X_t^{2/3} \, dt, \quad X_0 = 0 \ \Rightarrow \ X_t = (t - a)^3 \, 1_{\{t \ge a\}}, \quad t \in [0, \infty), $$

for any positive $a$ (no uniqueness).

The proof of the fact that existence and uniqueness of a solution to a SDE are
guaranteed by Lipschitz continuity and linear growth of the coefficients is similar in spirit to
the proof for deterministic equations.
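In practice the SDE (9) is usually simulated rather than solved explicitly. The following is a minimal sketch of the Euler-Maruyama discretisation (a scheme not discussed in these notes; NumPy assumed, and the coefficients $b(x) = -x$, $\sigma(x) = 1$ are an illustrative Ornstein-Uhlenbeck-type choice, for which the exact mean of $X_T$ is $x_0 e^{-T}$):

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n_steps, n_paths, rng):
    """Euler-Maruyama discretisation of dX = b(X) dt + sigma(X) dW."""
    dt = T / n_steps
    X = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + b(X) * dt + sigma(X) * dW
    return X

rng = np.random.default_rng(3)
# Illustrative coefficients: b(x) = -x, sigma(x) = 1.
XT = euler_maruyama(lambda x: -x, lambda x: np.ones_like(x),
                    x0=1.0, T=1.0, n_steps=500, n_paths=20_000, rng=rng)
print(XT.mean(), np.exp(-1.0))  # sample mean vs. exact mean x0 * e^{-T}
```

Lipschitz coefficients are exactly what makes this scheme converge; for the pathological drifts above (quadratic growth, non-Lipschitz at zero) it can explode or pick one of many solutions.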

8. Interpretation of the Coefficients of the SDE

We conclude by presenting an interpretation of the drift and diffusion coefficient of a
SDE. For deterministic differential equations such as

$$ dX_t = b(X_t) \, dt, $$

with a smooth function $b$, one clearly has

$$ \lim_{h \to 0} \frac{X_{t+h} - X_t}{h} = b(X_t), \qquad \lim_{h \to 0} \frac{(X_{t+h} - X_t)^2}{h} = 0. $$

The "analogous" relations for the SDE

$$ dX_t = b(X_t) \, dt + \sigma(X_t) \, dW_t $$

are the following:

$$ \lim_{h \to 0} E\left\{ \frac{X_{t+h} - X_t}{h} \,\Big|\, \mathcal{F}_t \right\} = b(X_t), $$

$$ \lim_{h \to 0} E\left\{ \frac{(X_{t+h} - X_t)^2}{h} \,\Big|\, \mathcal{F}_t \right\} = \sigma^2(X_t). $$

The second limit is non-zero because of the "infinite velocity" of Brownian motion, while the
first limit is the analogue of the deterministic case.

9. Ito's Formula

Now we are ready to introduce the famous Ito formula, which gives the chain rule for
differentials in a stochastic context.

For deterministic differential equations such as

$$ dX_t = b(X_t) \, dt, $$

given a smooth transformation $f(t, x)$, one can write the evolution of $f(t, X_t)$ via the chain rule:

$$ df(t, X_t) = \frac{\partial f}{\partial t}(t, X_t) \, dt + \frac{\partial f}{\partial x}(t, X_t) \, dX_t. \quad (10) $$

We already observed in (7) that whenever a Brownian motion is involved, such a fundamental
rule of calculus needs to be modified. The general formulation of the chain rule for stochastic
differential equations is the following. Let $f(t, x)$ be a smooth function and let $X_t$ be the
unique solution of the stochastic differential equation (9). Then, Ito's formula reads as

$$ df(t, X_t) = \frac{\partial f}{\partial t}(t, X_t) \, dt + \frac{\partial f}{\partial x}(t, X_t) \, dX_t + \frac{1}{2} \frac{\partial^2 f}{\partial x^2}(t, X_t) \, dX_t \, dX_t, \quad (11) $$

or, in a more compact notation,

$$ df(t, X_t) = \frac{\partial f}{\partial t} \, dt + \frac{\partial f}{\partial x} \, dX_t + \frac{1}{2} \frac{\partial^2 f}{\partial x^2} \, d\langle X \rangle_t. $$

Comparing equation (11) with its deterministic counterpart (10), we notice that the extra term

$$ \frac{1}{2} \frac{\partial^2 f}{\partial x^2}(t, X_t) \, dX_t \, dX_t $$

appears in our stochastic context, and this is the term due to the Ito integral.

The term $dX_t \, dX_t$ can be developed algebraically by taking into account the rules on the
quadratic variation and covariation seen above:

$$ dt \, dt = 0, \quad dt \, dW_t = 0, \quad dW_t \, dW_t = dt. $$

We thus obtain

$$ dX_t \, dX_t = \left( b(X_t) \, dt + \sigma(X_t) \, dW_t \right)^2 = \sigma^2(X_t) \, dt, $$

so that

$$ df(t, X_t) = \left( \frac{\partial f}{\partial t}(t, X_t) + b(X_t) \frac{\partial f}{\partial x}(t, X_t) + \frac{1}{2} \sigma^2(X_t) \frac{\partial^2 f}{\partial x^2}(t, X_t) \right) dt + \sigma(X_t) \frac{\partial f}{\partial x}(t, X_t) \, dW_t. $$

10. Stochastic Leibnitz Rule

Also the classical Leibnitz rule for differentiation of a product of functions is modified,
analogously to the chain rule. The related formula can be derived as a corollary of Ito's
formula in two dimensions, and is reported below.

For deterministic and differentiable functions $A_t$ and $B_t$ we have the deterministic
Leibnitz rule

$$ d(A_t B_t) = A_t \, dB_t + B_t \, dA_t. $$

For two diffusion processes (and more generally semimartingales) $X_t$ and $Y_t$ we have
instead

$$ d(X_t Y_t) = X_t \, dY_t + Y_t \, dX_t + dX_t \, dY_t, $$

or, in more compact notation,

$$ d(X_t Y_t) = X_t \, dY_t + Y_t \, dX_t + d\langle X, Y \rangle_t. $$

11. Linear SDEs with Deterministic Diffusion Coefficient

A SDE is said to be linear if both its drift and diffusion coefficients are first order
polynomials (or affine functions) in the state variable. We here consider the particular case:

$$ dX_t = (\alpha_t + \beta_t X_t) \, dt + v_t \, dW_t, \quad X_0 = x_0, \quad (12) $$

where $\alpha$, $\beta$, $v$ are deterministic functions of time that are regular enough to ensure existence
and uniqueness of a solution.

It can be shown that a stochastic integral of a deterministic function is the same both
in the Stratonovich and in the Ito sense. As a consequence, by writing (12) in integral form we
see that the same equation holds in the Stratonovich sense:

$$ dX_t = (\alpha_t + \beta_t X_t) \, dt + v_t \circ dW_t, \quad X_0 = x_0, $$

so that we can solve it by ordinary calculus for linear differential equations. We obtain

$$ X_t = e^{\int_0^t \beta_u \, du} \left( x_0 + \int_0^t e^{-\int_0^s \beta_u \, du} \, \alpha_s \, ds + \int_0^t e^{-\int_0^s \beta_u \, du} \, v_s \, dW_s \right). $$
A remarkable fact is that the distribution of the solution $X_t$ is normal at each time $t$.
Intuitively, this holds since the last stochastic integral is a limit of a sum of independent
normal random variables. Indeed, we have

$$ X_t \sim \mathcal{N}\left( e^{\int_0^t \beta_u \, du} \left( x_0 + \int_0^t e^{-\int_0^s \beta_u \, du} \, \alpha_s \, ds \right), \ \ e^{2 \int_0^t \beta_u \, du} \int_0^t e^{-2 \int_0^s \beta_u \, du} \, v_s^2 \, ds \right). $$

The major examples of models based on a SDE like (12) are those of Vasicek and of Hull
and White.
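As a sanity check of the normal law above, one can simulate (12) with constant coefficients and compare the sample mean and variance with the closed-form ones. A hedged sketch (NumPy assumed; the constants are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
# Illustrative constants for dX = (a + b*X) dt + v dW (equation (12))
a, b, v, x0, T = 0.5, -1.0, 0.2, 0.0, 1.0
n_steps, n_paths = 1000, 50_000
dt = T / n_steps

# Euler discretisation of the linear SDE
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X = X + (a + b * X) * dt + v * rng.normal(0.0, np.sqrt(dt), n_paths)

# Closed-form mean and variance for constant coefficients
mean_th = x0 * np.exp(b * T) + (a / -b) * (1.0 - np.exp(b * T))
var_th = v**2 * (1.0 - np.exp(2.0 * b * T)) / (-2.0 * b)
print(X.mean(), mean_th)
print(X.var(), var_th)
```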
12. Lognormal Linear SDEs

Another interesting example of linear SDE is that where the diffusion coefficient is a
first order homogeneous polynomial in the underlying variable. This SDE can be obtained as an
exponential of a linear equation with deterministic diffusion coefficient. Indeed, let us take
$Y_t = \exp(X_t)$, where $X_t$ evolves according to (12), and write, by Ito's formula,

$$ dY_t = e^{X_t} \, dX_t + \frac{1}{2} e^{X_t} \, dX_t \, dX_t = Y_t (\alpha_t + \beta_t \ln Y_t) \, dt + v_t Y_t \, dW_t + \frac{1}{2} v_t^2 Y_t \, dt, $$

so that

$$ dY_t = Y_t \left( \alpha_t + \frac{1}{2} v_t^2 + \beta_t \ln Y_t \right) dt + v_t Y_t \, dW_t. $$

As a consequence, the process $Y$ has a lognormal marginal density. A major example of a model
based on such a SDE is the Black and Karasinski model.

13. Geometric Brownian Motion

The geometric Brownian motion is a particular case of a process satisfying a lognormal
linear SDE. Its evolution is defined according to

$$ dS_t = \mu S_t \, dt + \sigma S_t \, dW_t, $$

where $\mu$ and $\sigma$ are positive constants. To check that $S$ is indeed a lognormal process, one
can compute $d \ln S_t$ via Ito's formula and obtain

$$ d \ln S_t = \left( \mu - \frac{1}{2} \sigma^2 \right) dt + \sigma \, dW_t. $$

From the seminal work of Black and Scholes on, processes of this type are frequently used in
option pricing theory to model general asset price dynamics. Notice that this process is a
submartingale, in that clearly

$$ E\{ S_t \mid \mathcal{F}_s \} = S_s \, e^{\mu (t - s)} \ge S_s. $$

Finally, notice also that by setting $\bar{S}_t = S_t e^{-\mu t}$, we obtain

$$ d\bar{S}_t = \sigma \bar{S}_t \, dW_t. $$

Therefore, since the drift of this last SDE is zero, $\bar{S}_t = S_t e^{-\mu t}$ is a martingale.
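These facts can be illustrated by sampling the exact solution $S_T = S_0 \exp((\mu - \sigma^2/2) T + \sigma W_T)$ implied by the $d \ln S_t$ dynamics above. A minimal sketch (NumPy assumed; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
S0, mu, sigma, T = 1.0, 0.05, 0.3, 2.0  # illustrative parameters
n_paths = 200_000

# Exact solution implied by d ln S: S_T = S0 exp((mu - sigma^2/2) T + sigma W_T)
WT = rng.normal(0.0, np.sqrt(T), n_paths)
ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * WT)

print(ST.mean(), S0 * np.exp(mu * T))     # submartingale: E[S_T] = S0 e^{mu T} >= S0
print((ST * np.exp(-mu * T)).mean(), S0)  # discounted process is a martingale
```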

14. Square-Root Processes


An interesting case of non-linear SDE is given by

$$ dX_t = k (\theta - X_t) \, dt + \sigma \sqrt{X_t} \, dW_t, \quad X_0 = x_0. \quad (13) $$

A process following such dynamics is commonly referred to as a square-root process. Major
examples of models based on these dynamics are the Cox, Ingersoll and Ross instantaneous
interest rate model and a particular case of the constant-elasticity-of-variance (CEV) model for
stock prices:

$$ dS_t = \mu S_t \, dt + \sigma \sqrt{S_t} \, dW_t. $$

In general, square-root processes are naturally linked to non-central chi-square distributions. In
particular, there are simplified versions of (13) for which the resulting process $X$ is strictly
positive and analytically tractable, as in the case of the Cox, Ingersoll and Ross model.
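A hedged simulation sketch of (13) follows (NumPy assumed; parameters are illustrative and chosen with $2 k \theta > \sigma^2$, the classical condition for strict positivity). Since the discretised process can dip slightly below zero, the square-root argument is floored at zero, a common "full truncation" fix that is a numerical device rather than part of the model:

```python
import numpy as np

rng = np.random.default_rng(6)
# Illustrative parameters with 2*k*theta = 0.16 > sigma^2 = 0.04
k, theta, sigma, x0, T = 2.0, 0.04, 0.2, 0.02, 1.0
n_steps, n_paths = 1000, 50_000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    Xp = np.maximum(X, 0.0)  # full-truncation floor for the discretised process
    X = X + k * (theta - Xp) * dt \
          + sigma * np.sqrt(Xp) * rng.normal(0.0, np.sqrt(dt), n_paths)

# Exact mean of the square-root process: theta + (x0 - theta) e^{-kT}
mean_th = theta + (x0 - theta) * np.exp(-k * T)
print(X.mean(), mean_th)
```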

15. The Feynman-Kac Theorem

The Feynman-Kac theorem, under certain assumptions, allows us to express the
solution of a given partial differential equation (PDE) as the expected value of a function of a
suitable diffusion process whose drift and diffusion coefficient are defined in terms of the PDE
coefficients.

Theorem [The Feynman-Kac Theorem] Given Lipschitz continuous $b(x)$ and $\sigma(x)$ and a
smooth $\phi$, the solution $V(t, x)$ of the PDE

$$ \frac{\partial V}{\partial t}(t, x) + b(x) \frac{\partial V}{\partial x}(t, x) + \frac{1}{2} \sigma^2(x) \frac{\partial^2 V}{\partial x^2}(t, x) = 0 \quad (14) $$

with terminal boundary condition

$$ V(T, x) = \phi(x) \quad (15) $$

can be expressed as the following expected value

$$ V(t, x) = E^Q \{ \phi(X_T) \mid X_t = x \}, \quad (16) $$

where the diffusion process $X$ has dynamics, starting from $x$ at time $t$, given by

$$ dX_s = b(X_s) \, ds + \sigma(X_s) \, dW_s^Q, \quad s \ge t, \quad X_t = x, \quad (17) $$

under the probability measure $Q$ under which the expectation $E^Q\{\cdot\}$ is taken. The process $W^Q$
is a standard Brownian motion under $Q$.

Notice that the terminal condition determines the function $\phi$ of the diffusion process
whose expectation is relevant, whereas the PDE coefficients determine the dynamics of the
diffusion process.

This theorem is important because it establishes a link between the PDE's of
traditional analysis and physics on the one hand, and diffusion processes in stochastic calculus
on the other. Solutions of PDE's can be interpreted as expectations of suitable transformations
of solutions of stochastic differential equations, and vice versa.
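A small Monte Carlo illustration of (16) (a sketch, with NumPy and illustrative parameters): with $b = 0$, constant $\sigma$, and $\phi(x) = x^2$, the PDE (14)-(15) has the closed-form solution $V(t, x) = x^2 + \sigma^2 (T - t)$, which the sample average of $\phi(X_T)$ should reproduce:

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, t, T, x = 0.5, 0.0, 1.0, 1.0  # illustrative parameters
n_paths = 200_000

# Diffusion (17) with b = 0 and constant sigma: X_T = x + sigma (W_T - W_t)
XT = x + sigma * rng.normal(0.0, np.sqrt(T - t), n_paths)

# With phi(x) = x^2, the PDE (14)-(15) is solved by V(t,x) = x^2 + sigma^2 (T - t)
V_mc = np.mean(XT**2)
V_exact = x**2 + sigma**2 * (T - t)
print(V_mc, V_exact)
```

One can check directly that this $V$ satisfies (14): $\partial V / \partial t = -\sigma^2$ cancels against $\tfrac{1}{2}\sigma^2 \, \partial^2 V / \partial x^2 = \sigma^2$.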

16. The Girsanov Theorem

The Girsanov theorem shows how a SDE changes due to changes in the underlying
probability measure. It is based on the fact that the SDE drift depends on the particular
probability measure $P$ in our probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)$, and that, if we change the
probability measure in a "regular" way, the drift of the equation changes while the diffusion
coefficient remains the same. The Girsanov theorem can thus be useful when we want to
modify the drift coefficient of a SDE. Indeed, suppose that we are given two measures $P$ and
$Q$ on the space $(\Omega, \mathcal{F})$. Two such measures are said to be equivalent, written $P \sim Q$, if
they share the same sets of null probability (or of probability one, which is equivalent).
Therefore, two measures are equivalent when they agree on which events of $\mathcal{F}$ hold almost
surely. Accordingly, a proposition holds almost surely under $P$ if and only if it holds almost
surely under $Q$. Similar definitions apply also for the restrictions of the measures to $\mathcal{F}_t$, thus
expressing equivalence of the two measures up to time $t$.

When two measures are equivalent, it is possible to express the first in terms of the
second through the Radon-Nikodym derivative. Indeed, there exists a martingale $\rho_t$ on
$(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)$ such that

$$ Q(A) = \int_A \rho_t(\omega) \, dP(\omega), \quad A \in \mathcal{F}_t, $$

which can be written in a more concise form as

$$ \rho_t = \left. \frac{dQ}{dP} \right|_{\mathcal{F}_t}. $$

The process $\rho$ is called the Radon-Nikodym derivative of $Q$ with respect to $P$, restricted to
$\mathcal{F}_t$.

When in need of computing the expected value of an integrable random variable $X$, it
may be useful to switch from one measure to another equivalent one. Indeed, it is possible to
prove that the following equivalence holds:

$$ E^Q \{ X \} = \int_\Omega X \, dQ = \int_\Omega X \frac{dQ}{dP} \, dP = E^P \left\{ X \frac{dQ}{dP} \right\}, $$

where $E^P$ and $E^Q$ denote expected values with respect to the probability measures $P$ and $Q$,
respectively. More generally, when dealing with conditional expectations, we can prove that

$$ E^Q \{ X \mid \mathcal{F}_t \} = \frac{E^P \{ X \rho_T \mid \mathcal{F}_t \}}{\rho_t}. $$


Theorem [The Girsanov theorem] Consider again the stochastic differential equation, with
Lipschitz coefficients,

$$ dX_t = b(X_t) \, dt + \sigma(X_t) \, dW_t, \quad X_0, $$

under $P$. Let a new drift $b^*(x)$ be given, and assume $(b^*(x) - b(x)) / \sigma(x)$ to be bounded.
Define the measure $Q$ by

$$ \left. \frac{dQ}{dP} \right|_{\mathcal{F}_T} = \exp \left( - \frac{1}{2} \int_0^T \left( \frac{b^*(X_s) - b(X_s)}{\sigma(X_s)} \right)^2 ds + \int_0^T \frac{b^*(X_s) - b(X_s)}{\sigma(X_s)} \, dW_s \right). $$

Then $Q$ is equivalent to $P$. Moreover, the process $W^*$ defined by

$$ dW_t^* = - \frac{b^*(X_t) - b(X_t)}{\sigma(X_t)} \, dt + dW_t $$

is a Brownian motion under $Q$, and

$$ dX_t = b^*(X_t) \, dt + \sigma(X_t) \, dW_t^*, \quad X_0. $$

As already noticed, this theorem is fundamental when we wish to change the drift of
a SDE. It is now clear that we can do this by defining a new probability measure $Q$, via a
suitable Radon-Nikodym derivative, in terms of the difference "desired drift minus given drift".

In mathematical finance, a classical example of application of the Girsanov theorem is
when one moves from the "real-world" asset price dynamics

$$ dS_t = \mu S_t \, dt + \sigma S_t \, dW_t $$

to the risk-neutral ones

$$ dS_t = r S_t \, dt + \sigma S_t \, dW_t^*. $$

This is accomplished by setting

$$ \left. \frac{dQ}{dP} \right|_{\mathcal{F}_T} = \exp \left( - \frac{1}{2} \int_0^T \left( \frac{r - \mu}{\sigma} \right)^2 ds + \int_0^T \frac{r - \mu}{\sigma} \, dW_s \right). \quad (18) $$

We finally stress that above we assumed boundedness for simplicity, but less stringent
assumptions are possible for the theorem to hold.
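The change of measure can also be used computationally: instead of simulating under $Q$, one simulates under $P$ and reweights by the Radon-Nikodym derivative. A minimal sketch (NumPy assumed; $b = 0$, $\sigma = 1$ and a constant new drift $b^* = \theta$ are all illustrative choices), where the Girsanov density reduces to $\exp(-\theta^2 T / 2 + \theta W_T)$ and $E^Q\{X_T\} = \theta T$:

```python
import numpy as np

rng = np.random.default_rng(8)
theta, T = 0.7, 1.0  # illustrative new drift and horizon
n_paths = 200_000

WT = rng.normal(0.0, np.sqrt(T), n_paths)  # X_T = W_T under P (zero drift)

# Girsanov density for constant (b* - b)/sigma = theta:
# dQ/dP = exp(-theta^2 T / 2 + theta W_T)
weights = np.exp(-0.5 * theta**2 * T + theta * WT)

print(weights.mean())         # E_P[dQ/dP] = 1
print((weights * WT).mean())  # E_Q[X_T] = theta * T under the new drift
```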

17. A Crash Intro. to Poisson Processes

Given their importance in default modeling, and the growing interest of the financial
community in jump-diffusion models, we cannot close the lecture note without
mentioning Poisson processes, which are the purely-jump analogue of the Brownian motion
with which we started the lecture note.

A time-homogeneous Poisson process is a unit-jump increasing, right-continuous
process $N_t$ with stationary independent increments and $N_0 = 0$.

One notices immediately that the Poisson process shares the properties expressed by
(3) and (4) with the Brownian motion process. Actually, if we substituted "unit-jump
increasing, right continuous" with "continuous" we would obtain a characterization of Brownian
motion with time-linear drift. Indeed, an important theorem of stochastic calculus due to Levy
states that a continuous process with independent and stationary increments and null initial
condition is necessarily of the form

$$ \mu t + \sigma W_t, $$

with $\mu$ and $\sigma$ deterministic constants and $W$ a Brownian motion. This shows that Poisson
processes and Brownian motions are analogous families of processes. In particular,
they are both particular cases of the larger family of Levy processes, i.e. particular cases of
processes with stationary independent increments and with right-continuous, left-limit
paths.

The first results on Poisson processes are given by the following facts:
First properties of Poisson Processes. Let $N$ be a time-homogeneous Poisson process. Then

1) There exists a positive real number $\lambda$ such that $P(N_t = 0) = e^{-\lambda t}$ for all $t$.

2) $\displaystyle \lim_{t \to 0} \frac{P(N_t > 1)}{t} = 0$.

3) $\displaystyle \lim_{t \to 0} \frac{P(N_t = 1)}{t} = \lambda$.
→ 
The first point states that the probability of having no jumps up to some given time
is an exponential function of minus that (possibly re-scaled) time. The second point tells us
that the probability of having more than one jump in an arbitrarily small time interval
goes to zero faster than the length of the interval itself. So, roughly speaking, in small intervals we can have
at most one jump. The third point tells us that the probability of having exactly one jump in a
small time, re-scaled by the time itself, tends to the constant $\lambda$ that we find in the exponent of the
exponential function found in the first point.

Also, in classical Poisson process theory, starting from the above first results, one
proves the following
Further properties of Poisson Processes. Let $N$ be a time-homogeneous Poisson
process. Then

$$ P(N_t - N_s = k) = \frac{(\lambda (t - s))^k}{k!} \, e^{-\lambda (t - s)}, $$

i.e. $N_t - N_s \sim \text{Poisson}(\lambda (t - s))$, with $N_t - N_s$ independent of $N_u$, $u \le s$.

This second set of properties tells us that the number of jumps of a Poisson process
follows the Poisson law (hence the name of the process).

What is amazing at first sight is that a requirement on the properties of trajectories
(unit jumps and right continuity) plus a requirement on the increments, i.e. independence and
stationarity, completely determines the law of the process. This happens with Brownian
motion (with time-linear drift) too, but the different requirement on the paths (they are to be
continuous for Brownian motion) implies a Gaussian law for the process, contrary to the
Poisson case where we get the Poisson law. Thus seemingly general requirements on the
increments (stationarity and independence) completely determine the law of the process when
paths are required to be continuous (Brownian motion, Gaussian law) or unit-jump
increasing and right continuous (Poisson process, Poisson law).

A different characterization of the Poisson process (that could also be used as a
definition) is given now:

A different characterization of Poisson processes. $N$ is a time-homogeneous Poisson
process with parameter $\lambda$ if and only if

$$ E\{ N_t - N_s \mid \mathcal{F}_s \} = \lambda (t - s) \quad \text{for all } s \le t, $$

and $N$ is unit-jump increasing and right continuous with $N_0 = 0$.

This characterization is remarkably mild at first sight, compared to the standard
definition.

At this point we may wonder whether we can have some further intuition for the
parameter $\lambda$. Actually, from the above properties we easily have the following

Interpretation of $\lambda$ as average arrival rate per unit of time. Let $N$ be a
time-homogeneous Poisson process. Then

$$ E\{ dN_t \mid \mathcal{F}_t \} = \lambda \, dt. $$

A fundamental result (also for financial applications) concerns the distribution of the
intervals of time between two jumps of the process.
Exponential distribution for the time between two jumps. Let $N$ be a time-homogeneous
Poisson process. Let $T_1, T_2, \ldots, T_k, \ldots$ be the first, second, ..., $k$-th, ... jump times of
$N$. Then $T_1, T_2 - T_1, T_3 - T_2, \ldots$, i.e. the times between any jump and the subsequent one, are
i.i.d. $\sim$ exponential($\lambda$) (or, equivalently, the random variables $\lambda T_1$, $\lambda (T_2 - T_1)$, $\lambda (T_3 - T_2)$, ..., are
i.i.d. $\sim$ exponential(1)).

In the simplest intensity models for credit derivatives the default time is modeled as
$T_1$, so that the time of the first jump becomes particularly important. An immediate important
consequence of the last property is that the probability of having the first jump in a small
time interval $[t, t + dt)$, given that this first jump did not occur before $t$, is $\lambda \, dt$, so that $\lambda$
also bears the interpretation of probability of having a new jump around $t$ given that we have
not had it before $t$. In formula,

$$ P(T_1 \in [t, t + dt) \mid T_1 \ge t) = \lambda \, dt. $$

18. Time inhomogeneous Poisson Processes

We consider now a deterministic time-varying intensity $\lambda(t)$ (also called hazard rate),
which we assume to be a strictly positive and piecewise continuous function. We define

$$ \Lambda(t) := \int_0^t \lambda(s) \, ds, $$

the cumulated intensity, cumulated hazard rate, or also hazard function.


If M_t is a standard Poisson process, i.e. a Poisson process with intensity one, then a time-inhomogeneous Poisson process N_t with intensity λ(t) is defined as

    N_t = M_{Λ(t)}.

So a time-inhomogeneous Poisson process is just a time-changed standard Poisson process.

N is still increasing by jumps of size 1, and its increments are still independent, but they are no longer identically distributed (stationary) due to the "time distortion" introduced by the possibly nonlinear Λ.

From N_t = M_{Λ(t)} we have obviously that N jumps for the first time at τ if and only if M jumps for the first time at Λ(τ). But since we know that M is a standard Poisson process, for which the first jump time is exponentially distributed with parameter 1, we have

    Λ(τ) =: ξ ∼ exponential(1).

By inverting this last equation we have

    τ = Λ^{-1}(ξ),
with ξ a standard exponential random variable. Also, we have easily

    P(τ ∈ [t, t + dt)) = P(Λ(τ) ∈ [Λ(t), Λ(t + dt))) = P(ξ ∈ [Λ(t), Λ(t + dt)))
                      = exp(−Λ(t)) − exp(−Λ(t + dt)),

i.e. the "probability of first jumping between t and t + dt" is

    exp(−∫_0^t λ(u) du) (1 − exp(−∫_t^{t+dt} λ(u) du)) ≈ λ(t) exp(−∫_0^t λ(u) du) dt
(where the final approximation is good for small exponents). Following this, we have that the "probability of first jumping between t and u given that one has not jumped before t" is

    P(τ ∈ [t, u) | τ ≥ t) = (exp(−Λ(t)) − exp(−Λ(u))) / exp(−Λ(t))
                          = 1 − exp(−∫_t^u λ(s) ds)
                          ≈ ∫_t^u λ(s) ds = (u − t) · (1/(u − t)) ∫_t^u λ(s) ds

(where, again, the final approximation is good for small exponents). The last term is a sort of time-averaged intensity between t and u, multiplied by the length of the interval.
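The quality of the small-exponent approximation 1 − exp(−∫_t^u λ(s) ds) ≈ λ(t)(u − t) is easy to check numerically. A small sketch, assuming a hypothetical affine hazard rate λ(t) = 0.03 + 0.01 t (my illustrative choice, not from the notes); the error shrinks as the interval width shrinks:

```python
import math

lam = lambda t: 0.03 + 0.01 * t        # hypothetical affine hazard rate

def integral_lam(a, b, n=10000):
    """Midpoint rule for the integral of lam over [a, b]."""
    h = (b - a) / n
    return sum(lam(a + (k + 0.5) * h) for k in range(n)) * h

t = 2.0
errs = []
for width in (1.0, 0.1, 0.01):
    # exact conditional probability of first jumping in [t, t + width)
    exact = 1.0 - math.exp(-integral_lam(t, t + width))
    errs.append(abs(exact - lam(t) * width))   # vs. the lam(t)*(u - t) shortcut
```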
It is easy to show, along the same lines, that

    P(τ ∈ [t, t + dt) | τ ≥ t) = λ(t) dt:

"the probability that the first jump occurs in the (arbitrarily small) next dt instants, given that we had no jump so far, is λ(t) dt."
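The representation τ = Λ^{-1}(ξ) also gives the standard recipe for sampling the first jump (default) time: draw ξ ∼ exponential(1) and invert the cumulated hazard. A sketch under the assumption of a piecewise-constant λ (the knots and levels below are hypothetical):

```python
import math, random

# Hypothetical piecewise-constant hazard: level lam_vals[i] on [knots[i], knots[i+1])
knots = [0.0, 1.0, 3.0]            # last piece extends to +infinity
lam_vals = [0.02, 0.05, 0.08]

def cum_hazard(t):
    """Lambda(t): integral of lam(u) du over [0, t]."""
    H = 0.0
    for i, lo in enumerate(knots):
        if t <= lo:
            break
        hi = knots[i + 1] if i + 1 < len(knots) else float("inf")
        H += lam_vals[i] * (min(t, hi) - lo)
    return H

def first_jump_time(xi):
    """tau = Lambda^{-1}(xi): invert the piecewise-linear cumulated hazard."""
    H = 0.0
    for i, lo in enumerate(knots):
        hi = knots[i + 1] if i + 1 < len(knots) else float("inf")
        piece = lam_vals[i] * (hi - lo)   # hazard accumulated on this piece
        if H + piece >= xi:
            return lo + (xi - H) / lam_vals[i]
        H += piece

rng = random.Random(1)
taus = [first_jump_time(rng.expovariate(1.0)) for _ in range(50000)]
surv_2y = sum(1 for tau in taus if tau > 2.0) / len(taus)
# should be close to P(tau > 2) = exp(-Lambda(2)) = exp(-(0.02 + 0.05))
```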

Notice that a fundamental fact from probability tells us that ξ is independent of all possible Brownian motions in the same probability space where the Poisson process is defined, and also of the intensity itself when this is assumed to be stochastic, as we are going to assume now.

19. Doubly Stochastic Poisson Processes (or Cox Processes)

The intensity, besides being time varying, can also be stochastic: in that case it is assumed to be at least an F_t-adapted and right continuous (and thus progressive) process, denoted by λ_t, and the cumulated intensity or hazard process is the random variable

    Λ(t) = ∫_0^t λ_s ds. We assume λ_t > 0.

We recall again that "F_t-adapted" means essentially that, given the information F_t, we know the path of λ from 0 to t.

A Poisson process with stochastic intensity λ_t is called a doubly stochastic Poisson process, or Cox process. The term "doubly stochastic" is due to the fact that, besides having stochasticity in the jump component, we also have stochasticity in the probability of jumping, i.e. in the intensity. We observe that in the definition of a Cox process as a Poisson process with stochastic intensity it is implicit that, conditional on the path of λ, we still have a Poisson process structure, and all the facts we have seen for the case with deterministic intensity λ(t) still hold, conditional on λ and replacing λ(t) with λ_t.

We have that, for Cox processes, the first jump time can be represented as

    τ = Λ^{-1}(ξ),    Λ(t) = ∫_0^t λ_s ds.

Notice once again that here not only ξ is random (and still independent of anything else, λ included), but Λ itself is stochastic. With Cox processes we have

    P(τ ∈ [t, t + dt) | τ ≥ t, F_t) = λ_t dt.

This reads, if t = "now":

"The probability that the process first jumps in (a small) time dt, given that it has not jumped so far and given the F_t information, is λ_t dt."

Under standard assumptions one can show that

    P(τ ≥ t) = E[P(τ ≥ t | λ)] = E[P(ξ ≥ Λ(t) | λ)] = E[exp(−Λ(t))] = E[exp(−∫_0^t λ_s ds)],

which, in a financial context where τ is typically a default time, is completely analogous to the bond price formula P(0, t) = E[exp(−∫_0^t r_s ds)] in a short-rate model, with the intensity λ replacing the short interest rate r.
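This conditioning argument is easy to check by Monte Carlo: averaging exp(−Λ(T)) over simulated intensity paths must agree with directly simulating τ = Λ^{-1}(ξ) and counting survivals. A sketch with a hypothetical driftless lognormal (hence positive) intensity; the dynamics and parameters are illustrative assumptions, not a model from the notes:

```python
import math, random

rng = random.Random(42)
lam0, sigma = 0.05, 0.3        # hypothetical initial intensity and its volatility
T, n_steps, n_paths = 5.0, 50, 20000
dt = T / n_steps

surv_direct, surv_formula = 0, 0.0
for _ in range(n_paths):
    lam, H = lam0, 0.0
    for _ in range(n_steps):
        H += lam * dt                          # Euler step for Lambda(T)
        # driftless lognormal intensity: d(lam) = sigma * lam * dW
        lam *= math.exp(sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
                        - 0.5 * sigma * sigma * dt)
    xi = rng.expovariate(1.0)                  # xi ~ exponential(1), independent of lam
    if xi > H:                                 # tau > T  iff  xi > Lambda(T)
        surv_direct += 1
    surv_formula += math.exp(-H)               # P(tau > T | lam path) = e^{-Lambda(T)}

surv_direct /= n_paths
surv_formula /= n_paths                        # both estimate E[exp(-Lambda(T))]
```

The second estimator has lower variance because it integrates out ξ analytically, exactly the conditioning step used in the formula above.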

Cox processes thus allow us to carry the interest-rate technology and paradigms into default modeling. But again, ξ is independent of all default-free market quantities (of r, of λ, ...) and represents an external source of randomness that makes reduced-form models incomplete.

20. Compound Poisson processes

A generalization of the time-homogeneous Poisson process, different from the time-inhomogeneous Poisson process and from the Cox process, is the compound Poisson process.

A compound Poisson process is obtained by taking a time-homogeneous Poisson process and replacing its jumps of unit size 1 with jumps of size distributed according to i.i.d. random variables Y_1, Y_2, ⋯, Y_j, ⋯, all independent of the basic Poisson process.

Indeed, a time-homogeneous Poisson process M with intensity λ can be trivially written as

    M_t = Σ_{j=1}^{M_t} 1.

Now replace the "1"s by i.i.d. non-negative random variables

    Y_1, Y_2, ⋯, Y_j, ⋯,

all distributed according to a distribution function F and independent of M. We obtain

    H_t = Σ_{j=1}^{M_t} Y_j.

The process H is a compound Poisson process.
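A compound Poisson path is straightforward to simulate from this definition: draw the Poisson jump times, then add an i.i.d. jump size at each. A sketch, with exponential jump sizes as an arbitrary illustrative choice for F; by Wald's identity, E[H_T] = λ T E[Y]:

```python
import random

def compound_poisson_value(lam, horizon, draw_jump, rng):
    """H_horizon = sum of the M_horizon i.i.d. jump sizes Y_j, where M is a
    homogeneous Poisson process with intensity lam."""
    t, total = 0.0, 0.0
    while True:
        t += rng.expovariate(lam)          # exponential inter-arrival gaps
        if t > horizon:
            return total
        total += draw_jump()               # add Y_j at jump time T_j

rng = random.Random(7)
lam, horizon, mean_jump = 3.0, 2.0, 0.5
draw = lambda: rng.expovariate(1.0 / mean_jump)   # Y_j ~ exponential, E[Y] = 0.5
vals = [compound_poisson_value(lam, horizon, draw, rng) for _ in range(20000)]
mean_H = sum(vals) / len(vals)             # E[H_T] = lam * T * E[Y] = 3.0
```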

21. Jump-diffusion Processes

The interest in compound Poisson processes is given by their possible use as "jumpy shock" processes in jump-diffusion models, as opposed to the continuous shock process given by Brownian motion.

In general, a candidate jump-diffusion process is written as

    dX_t = μ(t, X_t) dt + σ(t, X_t) dW_t + dH_t.

Here dH_t denotes the jump increments of the compound Poisson process H. If we call T_1, T_2, T_3, ⋯, the first, second, third, ... jump times of the basic process M, then

    dH_t = Y_j if t = T_j for some j, and dH_t = 0 otherwise.

We see that the shock dH_t in dX_t is always finite (rather than infinitesimal/small of order "dt" or "√dt") or null.

Clearly, the more we increase λ, the more frequent the jumps in the system. Also, the larger the values implied by the distribution F, the larger the jump sizes. These degrees of freedom allow for a large variety of situations.
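An Euler discretization makes the structure of dX_t concrete: per small step, a Gaussian diffusive shock of order √dt, plus, with probability λ dt, a finite jump Y. All coefficients below (linear drift and volatility, Gaussian jump sizes) are illustrative assumptions, not a model from the notes:

```python
import math, random

rng = random.Random(3)
mu, sigma = 0.02, 0.2          # hypothetical drift and volatility coefficients
lam = 0.5                      # intensity of the driving Poisson process
T, n_steps = 1.0, 250
dt = T / n_steps

def jump_diffusion_path(x0):
    """Euler scheme for dX = mu*X dt + sigma*X dW + dH: each step adds a
    Gaussian shock of order sqrt(dt) and, with probability lam*dt, a finite jump."""
    x = x0
    for _ in range(n_steps):
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += mu * x * dt + sigma * x * dw      # continuous (diffusive) part
        if rng.random() < lam * dt:            # a jump of M in this step
            x += rng.gauss(-0.1, 0.05)         # jump size Y: finite, not O(dt)
    return x

paths = [jump_diffusion_path(1.0) for _ in range(5000)]
mean_XT = sum(paths) / len(paths)  # the ODE for the mean gives ~0.97 at T = 1
```

Approximating the Poisson increment by a Bernoulli(λ dt) draw per step is the usual first-order scheme; for small dt the chance of two jumps in one step is negligible.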

Finally, we notice that the above-mentioned Levy processes have been characterized as limits of compositions of independent families of compound Poisson processes and Brownian motions. The basic mathematical framework for reaching Levy processes thus includes the above compound Poisson process and the Brownian motion. Obviously, in their basic formulation Levy processes incorporate the Brownian motion and the Poisson process as particular cases, but not the jump-diffusion and Cox processes in general. The financial community is now considering processes with Levy shocks, or Levy processes under stochastic time changes. These processes encompass a large family of earlier models based on jump-diffusions and stochastic volatility.
