
The Method of Multiple Scales

A. Salih
Department of Aerospace Engineering
Indian Institute of Space Science and Technology, Thiruvananthapuram
1 September 2014

1 Introduction
Some natural processes have more than one characteristic length or time scale associated with them;
for example, a turbulent flow contains the various length scales of the turbulent eddies along with the
length scale of the objects over which the fluid flows. The failure to recognize a dependence on more
than one space/time scale is a common source of nonuniformity in perturbation expansions.
The method of multiple scales (also called the multiple-scale analysis) comprises techniques used
to construct uniformly valid approximations to the solutions of perturbation problems in which the
solutions depend simultaneously on widely different scales. This is done by introducing fast-scale and
slow-scale variables for an independent variable, and subsequently treating these variables, fast and
slow, as if they are independent.
We will begin by describing the straightforward expansion method and the Poincaré-Lindstedt
method for the linear damped oscillator. We then describe the method of multiple scales for the same
problem.

2 The Linear Damped Oscillator


We consider the differential equation for the linear damped mass-spring system with no external forces.
The equation for displacement y(τ) is

m y'' + c y' + k y = 0                                                            (1a)

where 'prime' denotes differentiation with respect to τ. If initially the mass is released from a
positive displacement yi with no initial velocity, we have the following initial conditions:

y(0) = yi ,   y'(0) = 0                                                           (1b)


We assume here that the damping is weak: c ≪ √(mk). Choosing yi and √(m/k) as the characteristic
distance and characteristic time respectively, we define the following dimensionless variables:

x = y / yi ,   t = τ / √(m/k)
Under this change of variables the dimensionless form of the differential equation (1a) and initial
conditions (1b) become
x'' + 2ε x' + x = 0                                                               (2a)

x(0) = 1,   x'(0) = 0                                                             (2b)

where

ε = c / (2√(mk)) ≪ 1
is a dimensionless parameter. This equation corresponds to a linear oscillator with weak damping,
where the time variable has been scaled by the period of the undamped system. This is the classical
example used to illustrate the method of multiple scales.

The exact solution


The exact solution of system (2) is given by
x(t) = e^(−εt) [ cos(√(1−ε²) t) + (ε/√(1−ε²)) sin(√(1−ε²) t) ]                    (3)
Note that if the oscillation is undamped, i.e., if ε = 0, we have the following exact solution,

x(t) = cos t

where both the amplitude and phase of the oscillation remain constant. However, in the presence of
damping, (3) shows that both amplitude and phase change with time. In fact, the amplitude drifts on
the time scale ε^(−1), while the phase drifts on the longer time scale ε^(−2). Note that both the
amplitude and phase time scales (ε^(−1) and ε^(−2)) are much longer than the time scale of 1 for the
basic oscillation. Of course, in this example there is not much amplitude left by the time the phase
has slipped significantly.
By looking at the solution (3), we can say that

x = cos t + O(ε)   for t = O(1)                                                   (4a)

holds uniformly for t = O(1), but is not uniformly valid for t = O(1/ε). If we are interested in times
which are O(1/ε), then the combination εt must be preserved in the exponential function. Then it is
uniformly valid to state that

x = e^(−εt) cos t + O(ε)   for t = O(1/ε)                                         (4b)

If we are interested in values of t which are O(1/ε²), then (4b) is no longer valid. In this case terms
of the form ε²t must be preserved in the cosine function appearing in (3). Using the binomial
expansion, we have

√(1−ε²) = 1 − ε²/2 − ε⁴/8 − ε⁶/16 − ···

Thus

x = e^(−εt) cos((1 − ε²/2) t) + O(ε)   for t = O(1/ε²)                            (4c)

That is, (4c) is uniformly valid for t = O(1/ε²).
Notice that if we are concerned only with uniformly valid leading order expansions then the second
member of the bracket in (3) never contributes since it is uniformly of O(ε) for all t .
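These validity estimates are easy to check numerically. The following Python sketch (an illustration added for this discussion; the function names are ours) compares the exact solution (3) with the approximations (4b) and (4c):

```python
import math

def x_exact(t, eps):
    # Exact solution (3) of x'' + 2*eps*x' + x = 0, x(0) = 1, x'(0) = 0
    w = math.sqrt(1.0 - eps**2)                     # damped frequency sqrt(1 - eps^2)
    return math.exp(-eps * t) * (math.cos(w * t) + (eps / w) * math.sin(w * t))

def x_4b(t, eps):
    # Approximation (4b): uniformly valid for t = O(1/eps)
    return math.exp(-eps * t) * math.cos(t)

def x_4c(t, eps):
    # Approximation (4c): frequency corrected, uniformly valid for t = O(1/eps^2)
    return math.exp(-eps * t) * math.cos((1.0 - eps**2 / 2.0) * t)

eps = 0.1
t1 = 1.0 / eps            # t = O(1/eps)
t2 = 1.0 / eps**2         # t = O(1/eps^2)

err_4b_t1 = abs(x_exact(t1, eps) - x_4b(t1, eps))   # stays O(eps)
err_4b_t2 = abs(x_exact(t2, eps) - x_4b(t2, eps))   # phase error of (4b) at long times
err_4c_t2 = abs(x_exact(t2, eps) - x_4c(t2, eps))   # (4c) still tracks the phase
```

At t = O(1/ε) the error of (4b) is O(ε), while at t = O(1/ε²) the frequency correction in (4c) makes it markedly more accurate than (4b), relative to the small amplitude that survives.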

3 Straightforward Expansion
We will first develop a straightforward expansion for (2) and discuss its nonuniformity. We look for a
straightforward asymptotic expansion of the solution as ε → 0,

x(t) ∼ x0(t) + ε x1(t) + ε² x2(t) + ···                                           (5)

Substituting (5) into the differential equation (2a) yields

(x0'' + ε x1'' + ε² x2'' + ···) + 2ε (x0' + ε x1' + ε² x2' + ···) + (x0 + ε x1 + ε² x2 + ···) = 0

Collecting coefficients of equal powers of ε gives

(x0'' + x0) + ε (x1'' + 2x0' + x1) + ε² (x2'' + 2x1' + x2) + ··· = 0

Equating coefficients of like powers of ε to zero gives a sequence of linear differential equations:

x0'' + x0 = 0,        x0(0) = 1,   x0'(0) = 0                                     (6a)
x1'' + x1 = −2x0',    x1(0) = 0,   x1'(0) = 0                                     (6b)
x2'' + x2 = −2x1',    x2(0) = 0,   x2'(0) = 0                                     (6c)

The respective initial conditions are also shown alongside the equations above. The equation (6a)
is the unperturbed problem obtained by setting ε = 0. It is the governing equation of a harmonic
oscillator with unit angular frequency. The solution is

x0 = cos t

Then (6b) becomes

x1'' + x1 = 2 sin t,    x1(0) = 0,   x1'(0) = 0                                   (7)
The solution of the nonhomogeneous differential equation (7) is given by

x1 = xc + x p

where xc is the general solution of the corresponding homogeneous equation (the complementary
function of (7))

x1'' + x1 = 0

and xp is a particular solution of (7). We have

xc = A cos t + B sin t

The right-hand side of differential equation (7) is of the same form as the general solution of the
corresponding homogeneous equation, so a trial particular solution of the form

xp = C1 t cos t + C2 t sin t

must be sought. The constants C1 and C2 can be found by the method of undetermined coefficients.
Substituting xp into (7) yields

−2C1 sin t + 2C2 cos t = 2 sin t

Equating like terms gives C1 = −1 and C2 = 0. Thus the general solution of (7) is

x1 = A cos t + B sin t − t cos t

Applying the initial conditions on x1 gives A = 0 and B = 1. Thus, the solution of (7) is given by

x1 = sin t − t cos t

Therefore, a two-term approximate solution of (2a) takes the form

x(t) = cos t + ε (sin t − t cos t)                                                (8)

The straightforward expansion is not valid when t = O(1/ε) due to the presence of secular terms. It
can be shown that the secular terms become worse for higher-order expansions: the two-term
approximation has a linear secular term, whereas the three-term approximation would have a
quadratic secular term.
The two-term expansion (8) can be constructed from the exact solution (3) by expanding the
exponential, square root, and trigonometric functions. Nonuniformities are generated in forming the
expansions of the exponential term e^(−εt) and the trigonometric functions cos(√(1−ε²) t) and
sin(√(1−ε²) t) in powers of ε. We note that the straightforward expansion (8) forces the frequency to
be unity, independent of the damping. In fact, the presence of the damping changes the frequency
from 1 to √(1−ε²). Thus, any expansion procedure that does not account for the dependence of the
frequency on ε will fail for large t.
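The breakdown of (8) at t = O(1/ε) can be seen numerically. The following Python sketch (an added illustration; the names are ours) evaluates the exact solution (3) and the two-term expansion (8):

```python
import math

def x_exact(t, eps):
    # Exact solution (3)
    w = math.sqrt(1.0 - eps**2)
    return math.exp(-eps * t) * (math.cos(w * t) + (eps / w) * math.sin(w * t))

def x_straightforward(t, eps):
    # Two-term straightforward expansion (8): cos t + eps (sin t - t cos t)
    return math.cos(t) + eps * (math.sin(t) - t * math.cos(t))

eps = 0.05
# For t = O(1), the expansion error is small ...
err_short = abs(x_exact(2.0, eps) - x_straightforward(2.0, eps))
# ... but for t = O(1/eps) the secular term eps*t*cos(t) has grown to O(1)
# while the exact solution has decayed, so the "approximation" is useless.
err_long = abs(x_exact(3.0 / eps, eps) - x_straightforward(3.0 / eps, eps))
```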

4 Poincaré-Lindstedt Method
We will now apply the Poincaré-Lindstedt method to the initial value problem (2) to see whether the
method is capable of avoiding the secular term that ruined the approximation when a straightforward
application of the regular perturbation method was used. The important observation from the earlier
analysis is that the breakdown in the straightforward expansion is due to its failure to account for
the dependence of the frequency on the damping. To account for the fact that the frequency is a
function of ε, we let

ρ = ωt

where ρ is called the strained coordinate and ω is a constant that depends on ε. Then we need to
change the independent variable from t to ρ. Using the chain rule, we transform the derivatives
according to

d/dt = (dρ/dt) d/dρ = ω d/dρ

d²/dt² = ω (dρ/dt) d²/dρ² = ω² d²/dρ²

Hence, (2) becomes


ω² x'' + 2εω x' + x = 0                                                           (9a)

x(0) = 1,   x'(0) = 0                                                             (9b)

where x = x(ρ) and the primes now indicate derivatives with respect to ρ. We now try expanding x
and ω in powers of ε, i.e.,

ω = 1 + ε ω1 + ε² ω2 + ···                                                        (10)
x = x0(ρ) + ε x1(ρ) + ε² x2(ρ) + ···                                              (11)

Note that the first term in (10) is unity, which is the unperturbed (undamped) frequency. Substituting
(10) into differential equation (9a) yields

(1 + ε ω1 + ε² ω2 + ···)² x'' + 2ε (1 + ε ω1 + ε² ω2 + ···) x' + x = 0            (12)

Now, substituting the expansion (11) into differential equation (12) and the initial conditions (9b)
gives

(1 + ε ω1 + ε² ω2 + ···)² (x0'' + ε x1'' + ε² x2'' + ···)
    + 2ε (1 + ε ω1 + ε² ω2 + ···)(x0' + ε x1' + ε² x2' + ···) + (x0 + ε x1 + ε² x2 + ···) = 0

and

x0(0) + ε x1(0) + ε² x2(0) + ··· = 1,   x0'(0) + ε x1'(0) + ε² x2'(0) + ··· = 0
The differential equation above can be written as

x000 + x0 + ε(x100 + 2ω1 x000 + 2x00 + x1 ) + ε 2 (x200 + x2 + 2ω2 x000 + ω12 x000 + 2ω1 x100 + 2ω1 x00 + 2x10 ) + · · · = 0
(13)

x000 + x0 = 0, x0 (0) = 1, x00 (0) = 0 (14a)


x100 + x1 = −2ω1 x000 − 2x00 , x1 (0) = 0, x10 (0) = 0 (14b)
x200 + x2 = −2ω2 x000 − ω12 x000 − 2ω1 x100 − 2ω1 x00 − 2x10 , x2 (0) = 0, x20 (0) = 0 (14c)

The O(1) system (14a) has the solution

x0 (ρ) = cos ρ (15)

Then the O(ε) equation (14b) becomes

x1'' + x1 = 2ω1 cos ρ + 2 sin ρ,    x1(0) = 0,   x1'(0) = 0                       (16)

The solution of nonhomogeneous differential equation (16) is given by

x1 = xc + x p

where xc is the general solution of the homogeneous equation given by

xc = A cos ρ + B sin ρ

and x p is a particular solution of (16) given by

x p = C1 ρ cos ρ + C2 ρ sin ρ

The constants C1 and C2 can be found by the method of undetermined coefficients. Substituting x p
into (16) and equating like terms gives C1 = −1 and C2 = ω1 . Thus the general solution of (16) is

x1 = A cos ρ + B sin ρ − ρ cos ρ + ω1 ρ sin ρ

Applying the initial conditions on x1 gives A = 0 and B = 1. Thus, the solution of (16) is given by

x1(ρ) = sin ρ − ρ cos ρ + ω1 ρ sin ρ                                              (17)
Note that the above solution for x1 contains two secular terms, which make the expansion break down
at large ρ. The secular term ω1 ρ sin ρ can be eliminated by setting ω1 = 0; however, the secular term
ρ cos ρ cannot be eliminated, as it does not contain any adjustable parameter.
Therefore, a two-term approximate solution of (2a) takes the form

x(ρ) = x0 (ρ) + εx1 (ρ) = cos ρ + ε(sin ρ − ρ cos ρ) (18)

If we continue further by setting ω1 = 0 in (14c), it can easily be shown that the solution of (14c)
forces the condition ω2 = 0. This shows that our attempt to expand ω in powers of ε has failed and,
consequently, we get ρ = t. Thus, the Poincaré-Lindstedt method has failed to yield a perturbation
approximation for the linear damped oscillator (2). The reason for the failure of this technique to
yield a uniform solution is our insistence on a uniform solution having a constant amplitude, as in
(15). Since the amplitude is e^(−εt) according to the exact solution (3), the only constant-amplitude
uniform solution is the one attained after a long time, i.e., the steady state. Therefore, although the
Poincaré-Lindstedt technique is effective in determining periodic solutions, it is incapable of
determining transient responses.
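This failure can also be exhibited numerically: with ω1 = ω2 = 0 forced by the analysis, the strained coordinate collapses to ρ = t and the Poincaré-Lindstedt result (18) reduces exactly to the straightforward expansion (8), secular term and all. A short Python sketch (an added illustration; the names are ours):

```python
import math

def x_straightforward(t, eps):
    # Two-term straightforward expansion (8)
    return math.cos(t) + eps * (math.sin(t) - t * math.cos(t))

def x_lindstedt(t, eps, w1=0.0, w2=0.0):
    # Two-term Poincare-Lindstedt result (18) in the strained coordinate rho = omega*t;
    # the analysis forces omega1 = omega2 = 0, so rho = t
    rho = (1.0 + eps * w1 + eps**2 * w2) * t
    return math.cos(rho) + eps * (math.sin(rho) - rho * math.cos(rho))

eps = 0.05
# With omega1 = omega2 = 0 the two expansions coincide identically
gap = max(abs(x_lindstedt(t, eps) - x_straightforward(t, eps))
          for t in (0.5 * i for i in range(200)))
```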

5 The Method of Multiple Scales


Any asymptotic expansion of (3) must simultaneously depict both the decaying and oscillatory
behaviors of the solution in order to be uniformly valid for t = O(1/ε^k). It is clear that the
Poincaré-Lindstedt method fails to achieve this: it provides a way to construct asymptotic
approximations of periodic solutions, but it cannot be used to obtain solutions that evolve
aperiodically on a slow time scale. The method of multiple scales is a more general approach that
involves two key ideas. The first is the idea of introducing scaled space and time coordinates to
capture the slow modulation of the pattern, and treating these as separate variables in addition to
the original variables that must be retained to describe the pattern state itself. This is essentially
the idea of multiple scales. The second is the use of what are known as solvability conditions in the
formal derivation.
We note from the analytical solution (3) that the functional dependence of x on t and ε is not
disjoint, because x depends on the combination εt as well as on t and ε individually. Thus, in place
of x = x(t; ε), we write

x = x̂(t, εt; ε)

We return to the regular expansion (8) and rewrite it as

x(t) = cos t + ε sin t − εt cos t                                                 (19)

As with the analytical solution, the regular expansion also shows that x depends on the combination
εt as well as on t and ε individually. The trouble with the naive regular expansion is that the small
damping changes both the amplitude of the oscillation on a time scale ε^(−1) and the phase of the
oscillation on a time scale ε^(−2) by the slow accumulation of small effects. Thus the oscillator has
three processes, each acting on its own time scale. First, there is the basic oscillation on the time
scale of 1, from the inertia causing the system to overshoot the equilibrium position. Then there is a
small drift in the amplitude on the time scale of ε^(−1) and, finally, a very small drift in the phase
on the time scale of ε^(−2), due to the small friction. We recognize these three time scales by
introducing three time variables.

T0 = t    − the fast time of the oscillation
T1 = εt   − the slower time of the amplitude drift
T2 = ε²t  − the even slower time of the phase drift

The rapidly changing features will then be combined into factors which are functions of T0 , while the
slowly changing features will then be combined into factors which are functions of T1 and T2 . Thus
we look for a solution of the form
x(t; ε) = x(T0 , T1 , T2 ; ε)

In general, if we choose n time scales for the expansion, we look for a solution of the form

x(t; ε) = x(T0 , T1 , T2 , · · · Tn ; ε)

where the time scales are defined as

T0 = t,   T1 = εt,   T2 = ε²t,   ···,   Tn = εⁿt

Thus, instead of determining x as a function of t, we determine x as a function of T0, T1, ···, Tn.
Note that as the real time t increases, the fast time T0 increases at the same rate, while the slower
times Ti increase more slowly. Using the chain rule, we have

d/dt = (∂/∂T0)(∂T0/∂t) + (∂/∂T1)(∂T1/∂t) + (∂/∂T2)(∂T2/∂t) + ···
     = ∂/∂T0 + ε ∂/∂T1 + ε² ∂/∂T2 + ···                                           (20a)

d²/dt² = ∂²/∂T0² + 2ε ∂²/∂T0∂T1 + ε² (2 ∂²/∂T0∂T2 + ∂²/∂T1²) + ···                (20b)
Hence, (2) becomes

∂²x/∂T0² + 2ε ∂²x/∂T0∂T1 + ε² (2 ∂²x/∂T0∂T2 + ∂²x/∂T1²)
    + 2ε (∂x/∂T0 + ε ∂x/∂T1 + ε² ∂x/∂T2) + x + ··· = 0                            (21a)

x = 1,   ∂x/∂T0 + ε ∂x/∂T1 + ε² ∂x/∂T2 + ··· = 0   for T0 = T1 = T2 = ··· = 0     (21b)
We note that when t = 0, all T0 , T1 , etc. are zero. The benefits of introducing the multiple time
variables are not yet apparent. In fact, it appears that we have made the problem harder since the
original ordinary differential equation has been turned into a partial differential equation. This is true,
but experience with this method has shown that the disadvantages of including this complication are
far outweighed by the advantages.
It should be pointed out that the solution of (21) is not unique and that we need to impose more
conditions for uniqueness on the solution. This freedom will enable us to prevent secular terms from
appearing in the expansion (at least over the time scales we are using). We now seek an asymptotic
approximation for x of the form

x(t) ≡ x(T0 , T1 , · · · , Tn ; ε) ∼ x0 (T0 , T1 , · · · , Tn ) + εx1 (T0 , T1 , · · · , Tn ) + ε 2 x2 (T0 , T1 , · · · , Tn ) + · · · (22)

It must be understood that there are actually only two independent variables, t and ε, in (22); the
Ti are functions of these two, and so are not independent. Nevertheless, the principal steps in
finding the coefficients xn are carried out as though T0, T1, ···, Tn and ε were independent
variables. This is one reason why these steps cannot be justified rigorously in advance, but are
merely heuristic. Secondly, it must be remarked that (22) is a generalized asymptotic expansion,
since ε enters both through the gauges (which are just the powers of ε) and also through the
coefficients xn by way of the Ti. Although there is no general theorem allowing the differentiation
of a generalized asymptotic expansion term by term, it is nevertheless reasonable to construct the
coefficients of (22) on the assumption that such differentiation is possible, and then to justify the
resulting series by direct error estimation afterwards.

5.1 The first-order two-scale expansion


Before proceeding further, we will first assume that there are only two time scales (T0 and T1 ) involved
in the present problem. The scales are defined as

T0 = t, T1 = εt

Thus, instead of determining x as a function of t , we determine x as a function of T0 , T1 . Note that


the time T0 must increase a great deal before the time T1 will change appreciably, when ε is small.
With this, the differential equation and initial conditions given in (21) become

∂²x/∂T0² + 2ε ∂²x/∂T0∂T1 + ε² ∂²x/∂T1² + 2ε (∂x/∂T0 + ε ∂x/∂T1) + x + ··· = 0     (23a)

x = 1,   ∂x/∂T0 + ε ∂x/∂T1 = 0   for T0 = T1 = 0                                  (23b)
We seek an asymptotic approximation for x of the form

x(t) ≡ x(T0 , T1 ; ε) ∼ x0 (T0 , T1 ) + εx1 (T0 , T1 ) (24)

Substituting this into (23a) yields the following:

∂²x0/∂T0² + ε ∂²x1/∂T0² + 2ε ∂²x0/∂T0∂T1 + 2ε ∂x0/∂T0 + x0 + ε x1 + ··· = 0

Collecting coefficients of equal powers of ε gives

(∂²x0/∂T0² + x0) + ε (∂²x1/∂T0² + 2 ∂²x0/∂T0∂T1 + 2 ∂x0/∂T0 + x1) = 0             (25)

Equating coefficients of like powers of ε to zero gives the following sequence of linear partial
differential equations:

O(1):   ∂²x0/∂T0² + x0 = 0                                                        (26a)

O(ε):   ∂²x1/∂T0² + x1 = −2 ∂²x0/∂T0∂T1 − 2 ∂x0/∂T0                               (26b)

It should be remembered that even the step of equating coefficients of equal powers of ε , used in
passing from (25) to (26), is not justified by any theorem about generalized asymptotic expansions
(since there is no uniqueness theorem for such expansions). It is instead a heuristic assumption used
to arrive at a candidate for an approximate solution, whose validity is to be determined afterwards by
error analysis.
The respective initial conditions for (26a) and (26b) are given by

x0 = 1,   ∂x0/∂T0 = 0                       for T0 = T1 = 0                       (27a)

x1 = 0,   ∂x1/∂T0 = −∂x0/∂T1                for T0 = T1 = 0                       (27b)

Since T0 and T1 are being treated (temporarily) as independent, the differential equation (26a) is
actually a ‘partial’ differential equation for a function x0 of two variables T0 and T1 . However, since no
derivatives with respect to T1 appear in (26a), it may be regarded instead as an ‘ordinary’ differential
equation for a function of T0 regarding T1 as merely an auxiliary parameter. Therefore the general
solution of (26a) may be obtained from the general solution of the corresponding ordinary differential
equation just by letting the arbitrary constants become arbitrary functions of T1 . Thus the general
solution of (26a) can be written as

x0 = A0 (T1 ) cos T0 + B0 (T1 ) sin T0 (28)

in which A0 and B0 are constant as far as the fast T0 variations are concerned, but are allowed to vary
over the slow T1 time. The initial conditions give

A0 (0) = 1 and B0 (0) = 0 (29)

We have used all of the information contained in (26a) & (27a), and the functions A0 and B0 are
still undetermined except for their initial values (29). In order to complete the determination of these
functions, and hence of x0 , we must consider the next order of approximation, i.e., O(ε). This is
accomplished by considering the equation (26b). From (28), we have

∂x0/∂T0 = −A0(T1) sin T0 + B0(T1) cos T0

and

∂²x0/∂T0∂T1 = ∂/∂T1 (∂x0/∂T0) = −(∂A0/∂T1) sin T0 + (∂B0/∂T1) cos T0
Substituting the above relations in (26b), we obtain

∂²x1/∂T0² + x1 = 2 (∂A0/∂T1 + A0) sin T0 − 2 (∂B0/∂T1 + B0) cos T0                (30)

Since both the right-hand side of (30) and the complementary function of this equation contain terms
proportional to sin T0 & cos T0 , the particular solution of x1 will have secular terms in it. Thus, to
obtain a uniform expansion each of the coefficients of sin T0 & cos T0 must independently vanish. The
vanishing of these coefficients yields the condition for the determination of A0 and B0 . Hence

∂A0/∂T1 + A0 = 0                                                                  (31)

∂B0/∂T1 + B0 = 0                                                                  (32)
∂ T1

Equations (31) and (32) represent the conditions to avoid secular terms in x1. The solutions of (31)
and (32) are

A0 = a0 e^(−T1)                                                                   (33)
B0 = b0 e^(−T1)                                                                   (34)

where a0 and b0 are constants of integration. To obtain x0, we substitute (33) and (34) in (28) to
obtain

x0 = a0 e^(−T1) cos T0 + b0 e^(−T1) sin T0                                        (35)

We can now impose the initial conditions for x0 given in (27a), repeated below:

x0(0, 0) = 1,   ∂x0/∂T0 (0, 0) = 0

Imposing these on the general solution (35) yields

a0 = 1 and b0 = 0

and thus we obtain the solution

x0 = e^(−T1) cos T0

Note that we did not evaluate x1 but merely ensured that secular terms are avoided, so that we may
write

x = e^(−T1) cos T0 + O(ε)                                                         (36)

In terms of the original variables, x becomes

x = e^(−εt) cos t + O(ε)                                                          (37)

which is uniformly valid for t = O(1/ε) and agrees with the exact solution (3) to O(ε).
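It is worth verifying this claim numerically. The following Python sketch (an added illustration; the names are ours) measures the maximum error of the leading-order two-scale approximation (37) over the interval 0 ≤ t ≤ 1/ε:

```python
import math

def x_exact(t, eps):
    # Exact solution (3)
    w = math.sqrt(1.0 - eps**2)
    return math.exp(-eps * t) * (math.cos(w * t) + (eps / w) * math.sin(w * t))

def x_two_scale(t, eps):
    # Leading-order two-scale approximation (37): exp(-T1) cos(T0), T0 = t, T1 = eps*t
    return math.exp(-eps * t) * math.cos(t)

eps = 0.1
ts = [i * (1.0 / eps) / 400 for i in range(401)]      # 0 <= t <= 1/eps
max_err = max(abs(x_exact(t, eps) - x_two_scale(t, eps)) for t in ts)
```

The maximum error stays O(ε) over the whole interval, unlike the straightforward expansion (8), whose secular term grows without bound.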

5.2 Higher-order approximations
Different strategies for finding higher-order multiple-scale approximations are available in the
literature. One simple strategy is to continue with the two-scale method using T0 = t and T1 = εt,
writing the solution in the form x(t) ∼ x0(T0, T1) + ε x1(T0, T1) + ε² x2(T0, T1) + ···. The theory
of higher-order approximations by this method is fairly well understood. This form continues to use
the two time scales and is applicable to a wide variety of problems. The purpose of this strategy is
to improve the accuracy of the first approximation to any order by computing higher-order terms in
the series (22), without attempting to increase the length of time O(1/ε) for which the approximation
is valid. In another strategy, three scales T0 = t, T1 = εt, and T2 = ε²t are used, and the solution
is written x(t) ∼ x0(T0, T1, T2) + ε x1(T0, T1, T2) + ε² x2(T0, T1, T2). A variation of this method
(the "short form") omits one time scale in each successive term; for instance, a three-scale
three-term solution would look like x(t) ∼ x0(T0, T1, T2) + ε x1(T0, T1) + ε² x2(T0).
At least in theory, the first strategy is always successful at achieving its goal. However, carrying
out the solution in practice requires solving certain differential equations in order to eliminate
secular terms; these differential equations are in general nonlinear, and therefore may not have
"closed form" solutions (that is, explicit solutions in terms of elementary functions). The second
strategy, using three time scales, is more general and ambitious but less satisfactory. Its aim is
not only to improve the asymptotic order of the error estimate, but also to extend the validity of
the approximation to "longer" intervals of time, that is, expanding intervals of length O(1/ε²) or
longer. This form tries to improve the accuracy to second order and, at the same time, to remain
valid over times of length O(1/ε²). It should be pointed out that these methods were originally
developed by heuristic reasoning only, and there does not yet exist a fully adequate rigorous theory
explaining their range of validity.

5.2.1 The second-order three-time scale expansion


Here we seek an asymptotic approximation for x of the form

x(t) ≡ x(T0 , T1 , T2 ; ε) ∼ x0 (T0 , T1 , T2 ) + εx1 (T0 , T1 , T2 ) + ε 2 x2 (T0 , T1 , T2 ) (38)

for the three time scales

T0 = t,   T1 = εt,   T2 = ε²t
Substituting (38) into (21a) yields the following:

∂²x0/∂T0² + ε ∂²x1/∂T0² + ε² ∂²x2/∂T0² + 2ε ∂²x0/∂T0∂T1 + 2ε² ∂²x1/∂T0∂T1 + 2ε² ∂²x0/∂T0∂T2 + ε² ∂²x0/∂T1²
    + 2ε (∂x0/∂T0 + ε ∂x0/∂T1 + ε ∂x1/∂T0) + x0 + ε x1 + ε² x2 = 0
Collecting coefficients of equal powers of ε gives

(∂²x0/∂T0² + x0) + ε (∂²x1/∂T0² + 2 ∂²x0/∂T0∂T1 + 2 ∂x0/∂T0 + x1)
    + ε² (∂²x2/∂T0² + 2 ∂²x1/∂T0∂T1 + 2 ∂²x0/∂T0∂T2 + ∂²x0/∂T1² + 2 ∂x0/∂T1 + 2 ∂x1/∂T0 + x2) = 0   (39)

Equating coefficients of like powers of ε to zero gives the following sequence of linear partial
differential equations:

O(1):    ∂²x0/∂T0² + x0 = 0                                                       (40a)

O(ε):    ∂²x1/∂T0² + x1 = −2 ∂²x0/∂T0∂T1 − 2 ∂x0/∂T0                              (40b)

O(ε²):   ∂²x2/∂T0² + x2 = −2 ∂²x1/∂T0∂T1 − 2 ∂²x0/∂T0∂T2 − ∂²x0/∂T1² − 2 ∂x0/∂T1 − 2 ∂x1/∂T0   (40c)
The respective initial conditions for (40) are given by

x0 = 1,   ∂x0/∂T0 = 0                              for T0 = T1 = T2 = 0           (41a)

x1 = 0,   ∂x1/∂T0 = −∂x0/∂T1                       for T0 = T1 = T2 = 0           (41b)

x2 = 0,   ∂x2/∂T0 = −∂x1/∂T1 − ∂x0/∂T2             for T0 = T1 = T2 = 0           (41c)
It is clear that to solve (40c), we need the solutions of (40a) and (40b). The general solution of (40a)
can be written as
x0 = A0 (T1 , T2 ) cos T0 + B0 (T1 , T2 ) sin T0 (42)
in which A0 and B0 are constant as far as the fast T0 variations are concerned, but are allowed to vary
over the slow times T1 and T2 . The initial conditions give

A0 (0, 0) = 1 and B0 (0, 0) = 0 (43)

Here the functions A0 and B0 are still undetermined except for their initial values (43). In order to
complete the determination of these functions, and hence of x0 , we must consider the next order of
approximation, i.e., O(ε). This is accomplished by considering the equation (40b). From (42), we
have
∂x0/∂T0 = −A0(T1, T2) sin T0 + B0(T1, T2) cos T0
and

∂²x0/∂T0∂T1 = ∂/∂T1 (∂x0/∂T0) = −(∂A0/∂T1) sin T0 + (∂B0/∂T1) cos T0
Substituting the above relations in (40b), we obtain

∂²x1/∂T0² + x1 = 2 (∂A0/∂T1 + A0) sin T0 − 2 (∂B0/∂T1 + B0) cos T0                (44)
Since both the right-hand side of (44) and the complementary function of this equation contain terms
proportional to sin T0 & cos T0, the particular solution for x1 will contain secular terms. Thus, to
obtain a uniform expansion, each of the coefficients of sin T0 & cos T0 must independently vanish.
The vanishing of these coefficients yields the conditions for the determination of A0 and B0. Hence

∂A0/∂T1 + A0 = 0                                                                  (45)

∂B0/∂T1 + B0 = 0                                                                  (46)

Equations (45) and (46) represent the conditions to avoid secular terms in x1. The solutions of (45)
and (46) are

A0 = a0(T2) e^(−T1)                                                               (47)
B0 = b0(T2) e^(−T1)                                                               (48)

where a0 & b0 are the integration constants and are functions of T2. They are determined by
eliminating the terms that produce secular terms in the second-order problem for x2. To obtain x0,
we substitute (47) and (48) in (42) to obtain

x0 = a0(T2) e^(−T1) cos T0 + b0(T2) e^(−T1) sin T0                                (49)

so that

∂x0/∂T0 = −a0 e^(−T1) sin T0 + b0 e^(−T1) cos T0

and

∂²x0/∂T0∂T1 = ∂/∂T1 (∂x0/∂T0) = a0 e^(−T1) sin T0 − b0 e^(−T1) cos T0

Substitution of the above derivatives into (40b) yields the following equation for x1:

∂²x1/∂T0² + x1 = 0                                                                (50)

Since this is a homogeneous equation, the general solution is given by

x1 = A1 (T1 , T2 ) cos T0 + B1 (T1 , T2 ) sin T0 (51)

Having determined x0 and x1, each term on the right-hand side of (40c) can be evaluated as follows:

∂x0/∂T1 = −a0 e^(−T1) cos T0 − b0 e^(−T1) sin T0

∂²x0/∂T1² = a0 e^(−T1) cos T0 + b0 e^(−T1) sin T0

∂²x0/∂T0∂T2 = ∂/∂T2 (∂x0/∂T0) = −(∂a0/∂T2) e^(−T1) sin T0 + (∂b0/∂T2) e^(−T1) cos T0

∂x1/∂T0 = −A1 sin T0 + B1 cos T0

∂²x1/∂T0∂T1 = ∂/∂T1 (∂x1/∂T0) = −(∂A1/∂T1) sin T0 + (∂B1/∂T1) cos T0
Substituting the above relations in (40c), we get

∂²x2/∂T0² + x2 = 2 [(∂A1/∂T1) sin T0 − (∂B1/∂T1) cos T0] + 2 [(∂a0/∂T2) e^(−T1) sin T0 − (∂b0/∂T2) e^(−T1) cos T0]
    − [a0 e^(−T1) cos T0 + b0 e^(−T1) sin T0] + 2 [a0 e^(−T1) cos T0 + b0 e^(−T1) sin T0] + 2 (A1 sin T0 − B1 cos T0)
Rearranging the above equation, we obtain

∂²x2/∂T0² + x2 = 2 [∂A1/∂T1 + A1 + (∂a0/∂T2) e^(−T1) + (1/2) b0 e^(−T1)] sin T0
    − 2 [∂B1/∂T1 + B1 + (∂b0/∂T2) e^(−T1) − (1/2) a0 e^(−T1)] cos T0              (52)
The terms on the right-hand side of (52) produce secular terms because the particular solution is of
the form

x2p = −[∂A1/∂T1 + A1 + (∂a0/∂T2) e^(−T1) + (1/2) b0 e^(−T1)] T0 cos T0
    − [∂B1/∂T1 + B1 + (∂b0/∂T2) e^(−T1) − (1/2) a0 e^(−T1)] T0 sin T0            (53)
Therefore, in order to eliminate these secular terms, we must have the following conditions:

∂A1/∂T1 + A1 = −(∂a0/∂T2 + (1/2) b0) e^(−T1)   and   ∂B1/∂T1 + B1 = −(∂b0/∂T2 − (1/2) a0) e^(−T1)   (54)

It may be noted that it is not required to solve for x2 in order to arrive at (54); one needs only to
inspect (52) and eliminate the terms that produce secular terms. The general solutions of (54) are

A1(T1, T2) = a1(T2) e^(−T1) − (∂a0/∂T2 + (1/2) b0) T1 e^(−T1)
                                                                                  (55)
B1(T1, T2) = b1(T2) e^(−T1) − (∂b0/∂T2 − (1/2) a0) T1 e^(−T1)
where a1 and b1 are integration constants as far as derivatives with respect to T1 are concerned.
Substituting for A1 and B1 into (51), we obtain

x1 = [a1 − (∂a0/∂T2 + (1/2) b0) T1] e^(−T1) cos T0 + [b1 − (∂b0/∂T2 − (1/2) a0) T1] e^(−T1) sin T0   (56)

Also, we have the following equation for x0:

x0 = a0 e^(−T1) cos T0 + b0 e^(−T1) sin T0                                        (57)

Therefore, as T1 → ∞, although x0 and x1 → 0, ε x1 becomes O(x0) as t increases to O(1/ε²). Thus
the expansion x0 + ε x1 breaks down for t as large as O(1/ε²) unless the coefficients of T1 in the
brackets in (56) vanish; i.e., unless

∂a0/∂T2 + (1/2) b0 = 0
                                                                                  (58)
∂b0/∂T2 − (1/2) a0 = 0
Equation (58) is a pair of coupled equations for a0 and b0. To solve this system, we proceed as
follows. Differentiating the first of (58) with respect to T2 yields

∂²a0/∂T2² + (1/2) ∂b0/∂T2 = 0

Now, using the second of (58), this can be written as

∂²a0/∂T2² + (1/4) a0 = 0                                                          (59)

Equation (59) is a homogeneous second-order equation with constant coefficients, and its general
solution can be written as

a0(T2) = a00 cos(T2/2) + b00 sin(T2/2)                                            (60)

where a00 and b00 are the integration constants. In a similar manner we can obtain

b0 (T2 ) = c00 cos(T2 /2) + d00 sin(T2 /2) (61)

It is easy to see that the simultaneous system of equations (58) can be satisfied only when

c00 = −b00 and d00 = a00

Therefore, equation (61) becomes

b0 (T2 ) = −b00 cos(T2 /2) + a00 sin(T2 /2) (62)
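As a quick check (an added illustration; the constants a00 and b00 below are arbitrary sample values), the solutions (60) and (62) can be verified against the coupled system (58) by finite differences:

```python
import math

a00, b00 = 0.7, -0.3                 # arbitrary illustrative integration constants

def a0(T2):
    # General solution (60)
    return a00 * math.cos(T2 / 2) + b00 * math.sin(T2 / 2)

def b0(T2):
    # Matching solution (62), with c00 = -b00 and d00 = a00
    return -b00 * math.cos(T2 / 2) + a00 * math.sin(T2 / 2)

def ddT2(f, T2, h=1e-6):
    # Central finite-difference derivative with respect to T2
    return (f(T2 + h) - f(T2 - h)) / (2 * h)

# Residuals of the coupled system (58): da0/dT2 + b0/2 = 0 and db0/dT2 - a0/2 = 0
res1 = max(abs(ddT2(a0, T2) + 0.5 * b0(T2)) for T2 in (0.3 * i for i in range(20)))
res2 = max(abs(ddT2(b0, T2) - 0.5 * a0(T2)) for T2 in (0.3 * i for i in range(20)))
```

Both residuals vanish to within finite-difference accuracy, confirming that (60) and (62) satisfy (58).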

With these results, the equation for x0 (57) becomes

x0 = [a00 cos(T2 /2) + b00 sin(T2 /2)] e−T1 cos T0 + [−b00 cos(T2 /2) + a00 sin(T2 /2)] e−T1 sin T0
= a00 e−T1 [cos T0 cos(T2 /2) + sin T0 sin(T2 /2)] − b00 e−T1 [sin T0 cos(T2 /2) − cos T0 sin(T2 /2)]

Using the following trigonometric identities

cos(α − β ) = cos α cos β + sin α sin β


sin(α − β ) = sin α cos β − cos α sin β

the equation for x0 can be written as

x0 = a00 e−T1 cos(T0 − T2 /2) − b00 e−T1 sin(T0 − T2 /2) (63)

With the aid of equations (58), the expressions for A1 and B1 given by equation (55) become

A1(T1, T2) = a1(T2) e^(−T1)
                                                                                  (64)
B1(T1, T2) = b1(T2) e^(−T1)

and the equation for x1 (56) becomes

x1 = a1 e^(−T1) cos T0 + b1 e^(−T1) sin T0                                        (65)

The functions a1(T2) and b1(T2) can be determined by carrying out the expansion to third order; the
result is

a1(T2) = a11 cos(T2/2) + b11 sin(T2/2)
                                                                                  (66)
b1(T2) = −b11 cos(T2/2) + a11 sin(T2/2)

where a11 and b11 are the integration constants. With this, the equation for x1 becomes

x1 = a11 e^(−T1) cos(T0 − T2/2) − b11 e^(−T1) sin(T0 − T2/2)                      (67)

Hence the asymptotic approximation x = x0 + ε x1 is given by

x = e^(−T1) [a00 cos(T0 − T2/2) − b00 sin(T0 − T2/2) + ε (a11 cos(T0 − T2/2) − b11 sin(T0 − T2/2))]   (68)

We can now impose the initial conditions to determine the constants in the equations for x0 and
x1 . Applying the conditions (41a) gives

a00 = 1 and b00 = 0

Thus (63) becomes


x0 = e−T1 cos(T0 − T2 /2) (69)
Applying the conditions (41b) gives

a11 = 0 and b11 = −1

Thus (67) becomes


x1 = e−T1 sin(T0 − T2 /2) (70)
Hence the asymptotic approximation for x = x0 + εx1 is given by

x = e−T1 [cos(T0 − T2 /2) + ε sin(T0 − T2 /2)] (71)

In terms of the original variables, x becomes

x = e^(−εt) [ cos(t − (1/2)ε²t) + ε sin(t − (1/2)ε²t) ]                           (72)

which is uniformly valid for t = O(1/ε²) and agrees with the exact solution (3) to O(ε²). Here we
have used three time scales, so this solution is valid for times as large as t = O(1/ε²), and the
accuracy of the approximation has been improved to second order by computing up to second-order
terms in the series (22).
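The improvement over the first-order result (37) can be confirmed numerically. The following Python sketch (an added illustration; the names are ours) measures the maximum error of (72) over 0 ≤ t ≤ 1/ε²:

```python
import math

def x_exact(t, eps):
    # Exact solution (3)
    w = math.sqrt(1.0 - eps**2)
    return math.exp(-eps * t) * (math.cos(w * t) + (eps / w) * math.sin(w * t))

def x_three_scale(t, eps):
    # Second-order three-scale approximation (72)
    phase = t - 0.5 * eps**2 * t          # T0 - T2/2 expressed in terms of t
    return math.exp(-eps * t) * (math.cos(phase) + eps * math.sin(phase))

eps = 0.1
ts = [i * (1.0 / eps**2) / 400 for i in range(401)]   # 0 <= t <= 1/eps^2
max_err = max(abs(x_exact(t, eps) - x_three_scale(t, eps)) for t in ts)
```

The maximum error stays below ε² over the whole interval, in line with the stated O(ε²) agreement.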

6 Bibliography
1. Bellman, R., Perturbation Techniques in Mathematics, Engineering & Physics, Dover Publica-
tions (2003).
2. Bender, C. M. and Orszag, S. A., Advanced Mathematical Methods for Scientists and Engineers:
Asymptotic Methods and Perturbation Theory, Springer (1999).
3. Bush, A. W., Perturbation Methods for Engineers and Scientists, CRC Press (1992).
4. de Bruijn, N. G., Asymptotic Methods in Analysis, 3rd ed., Dover Publications (1981).
5. Hinch, E. J., Perturbation Methods, Cambridge Univ. Press (1995).
6. Holmes, M. H., Introduction to Perturbation Methods, Springer-Verlag (1995).
7. Kevorkian, J. and Cole, J. D., Perturbation Methods in Applied Mathematics, Springer-Verlag
(1981).

8. Kevorkian, J. and Cole, J. D., Multiple Scale and Singular Perturbation Methods, Springer
(1996).
9. Murdock, J. A., Perturbation Methods: Theory and Methods, Wiley-Interscience (1991).
10. Nayfeh, A. H., Introduction to Perturbation Techniques, Wiley-VCH (1993).
11. Nayfeh, A. H., Perturbation Methods, John Wiley (2000).
12. Simmonds, J. G. and Mann, J. E., A First Look at Perturbation Theory, 2nd ed., Dover Publi-
cations (1998).
13. Wasow, W. R., Asymptotic Expansions for Ordinary Differential Equations, Dover Publications
(1987).
