Numerical Solutions of PDEs
There's no sense in being precise when you don't even know what you're talking about.
- John von Neumann (1903-1957)
Most of the book has dealt with finding exact solutions to some generic problems. However, most problems of interest cannot be solved exactly. The heat, wave, and Laplace equations are linear partial differential equations and can be solved using separation of variables in geometries in which the Laplacian is separable. However, once we introduce nonlinearities, or complicated non-constant coefficients, into the equations, some of these methods do not work. Even when separation of variables or the method of eigenfunction expansions gave us exact results, the computation of the resulting series had to be done on a computer and inevitably one could only use a finite number of terms of the expansion. So, it is often useful to be able to solve differential equations numerically.
In this chapter we will introduce the idea of numerical solutions of partial
differential equations. However, we will first begin with a discussion of the
solution of ordinary differential equations in order to get a feel for some
common problems in the solution of differential equations and the notion
of convergence rates of numerical schemes. Then, we turn to the finite
difference method and the ideas of stability. Other common approaches
may be added later.
We begin with Euler's Method. Consider the initial value problem
$$\frac{dy}{dx} = f(x, y), \quad y(x_0) = y_0.$$
We seek approximations to the solution at the equally spaced points
$$x_n = x_0 + n\Delta x, \quad n = 0, 1, \dots, N.$$
[Figure 10.1: The solution curve through $(x_0, y_0)$ and its tangent line, shown over the points $x_0$, $x_1$, $x_2$ on the $x$-axis.]
The first step of Euler's Method is to use the initial condition. We represent this as a point on the solution curve, $(x_0, y(x_0)) = (x_0, y_0)$, as shown in Figure 10.1. The next step is to develop a method for obtaining approximations to the solution for the other $x_n$'s.
We first note that the differential equation gives the slope of the tangent line at $(x, y(x))$ of the solution curve, since the slope is the derivative, $y'(x)$. From the differential equation the slope is $f(x, y(x))$. Referring to Figure 10.1, we see the tangent line drawn at $(x_0, y_0)$. We look now at $x = x_1$. The vertical line $x = x_1$ intersects both the solution curve and the tangent line passing through $(x_0, y_0)$. This is shown by a heavy dashed line.
While we do not know the solution at x = x1 , we can determine the
tangent line and find the intersection point that it makes with the vertical.
As seen in the figure, this intersection point is in theory close to the point
on the solution curve. So, we will designate y1 as the approximation of the
solution y( x1 ). We just need to determine y1 .
The idea is simple. We approximate the derivative in the differential
equation by its difference quotient:
$$\frac{dy}{dx} \approx \frac{y_1 - y_0}{x_1 - x_0} = \frac{y_1 - y_0}{\Delta x}. \tag{10.2}$$
Since the slope of the tangent to the curve at $(x_0, y_0)$ is $y'(x_0) = f(x_0, y_0)$,
we can write
$$\frac{y_1 - y_0}{\Delta x} \approx f(x_0, y_0). \tag{10.3}$$
Solving this equation for $y_1$, we obtain
$$y_1 = y_0 + \Delta x\, f(x_0, y_0). \tag{10.4}$$
Repeating the process at $(x_1, y_1)$ gives the next approximation,
$$y_2 = y_1 + \Delta x\, f(x_1, y_1). \tag{10.5}$$
Continuing in this fashion, we arrive at Euler's Method: setting $y_0 = y(x_0)$, compute
$$y_n = y_{n-1} + \Delta x\, f(x_{n-1}, y_{n-1}), \quad n = 1, \dots, N. \tag{10.6}$$
Example 10.1. Use Euler's Method to obtain an approximation of $y(1)$ for the initial value problem $y' = x + y$, $y(0) = 1$, using $\Delta x = 0.5$, i.e., three points.
We next carry out Euler’s Method systematically by setting up a table for the
needed values. Such a table is shown in Table 10.1. Note how the table is set up.
There is a column for each xn and yn . The first row is the initial condition. We also
made use of the function f ( x, y) in computing the yn ’s from (10.6). This sometimes
makes the computation easier. As a result, we find that the desired approximation
is given as y2 = 2.5.
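Explicitly, with $f(x, y) = x + y$, $y_0 = 1$, and $\Delta x = 0.5$, the two steps of (10.6) are
$$y_1 = y_0 + \Delta x\, f(x_0, y_0) = 1 + 0.5\,(0 + 1) = 1.5,$$
$$y_2 = y_1 + \Delta x\, f(x_1, y_1) = 1.5 + 0.5\,(0.5 + 1.5) = 2.5.$$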
Is this a good result? Well, we could make the spatial increments smaller. Let's repeat the procedure for $\Delta x = 0.2$, or $N = 5$. The results are in Table 10.2. Now we see that the approximation is $y_5 = 2.97664$. So, it looks like the value is near 3, but we cannot say much more. Decreasing $\Delta x$ further shows that we are beginning to converge to a solution. We see this in Table 10.3.
Of course, these values were not done by hand. The last computation
would have taken 1000 lines in the table, or at least 40 pages! One could
use a computer to do this. A simple code in Maple would look like the
following:
> restart:
> f:=(x,y)->y+x;                  # slope function f(x,y)
> a:=0: b:=1: N:=100: h:=(b-a)/N; # endpoints, steps, step size
> x[0]:=0: y[0]:=1:               # initial condition
for i from 1 to N do              # Euler iteration (10.6)
  y[i]:=y[i-1]+h*f(x[i-1],y[i-1]):
  x[i]:=x[0]+h*(i):
od:
evalf(y[N]);
In this case we can compare against the exact solution, which is easily found as
$$y(x) = 2e^x - x - 1.$$
(The reader can verify this.) So, the value we are seeking is
$$y(1) = 2e - 2 = 3.4365636\dots.$$
Thus, even the last numerical solution was off by about 0.00027.
Adding a few extra lines for plotting, we can visually see how well the
approximations compare to the exact solution. The Maple code for doing
such a plot is given below.
> with(plots):
> Data:=[seq([x[i],y[i]],i=0..N)]:
> P1:=pointplot(Data,symbol=DIAMOND):
> Sol:=t->-t-1+2*exp(t);
> P2:=plot(Sol(t),t=a..b,y=0..Sol(b)):
> display({P1,P2});

[Figure 10.2: A comparison of the results of Euler's Method to the exact solution for $y' = x + y$, $y(0) = 1$ and $N = 10$.]
Why would we use a numerical method when we have the exact solution? Exact solutions can serve as test cases for our methods. We can make sure our code works before applying it to problems whose solution is not known.
[Figure 10.3: A comparison of the results of Euler's Method to the exact solution for $y' = x + y$, $y(0) = 1$ and $N = 100$.]

There are many other methods for solving first order equations. One commonly used method is the fourth order Runge-Kutta method. Such methods can also be applied to higher order equations, such as the second order equation
$$y'' = f(x, y).$$
This is a larger class of equations than the second order constant coefficient equation. We can turn this into a system of two first order differential equations by letting $u = y$ and $v = y' = u'$. Then, $v' = y'' = f(x, u)$. So, we have the first order system
$$\begin{aligned}
u' &= v,\\
v' &= f(x, u). \tag{10.7}
\end{aligned}$$
We will not go further into the Runge-Kutta Method here. You can find
more about it in a numerical analysis text. However, we will see that systems
of differential equations do arise naturally in physics. Such systems are
often coupled equations and lead to interesting behaviors.
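As a quick illustration (a sketch, not from the text), such a system can be handed directly to a solver once it is in the form (10.7). Here we assume the example $f(x, y) = -y$, whose solution with $y(0) = 1$, $y'(0) = 0$ is $\cos x$, and use MATLAB's ode45, which appears later in this chapter:

% Sketch: solve y'' = f(x,y) via the first order system (10.7).
% Assumed example: f(x,y) = -y, a simple harmonic oscillator.
f = @(x,y) -y;                    % assumed right-hand side
rhs = @(x,u) [u(2); f(x,u(1))];   % u(1) = y, u(2) = y'; so u' = v, v' = f(x,u)
[x,u] = ode45(rhs,[0 10],[1; 0]); % initial conditions y(0) = 1, y'(0) = 0
plot(x,u(:,1));                   % numerical y(x), approximately cos(x)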
We now return to the initial value problem
$$\frac{dy}{dt} = f(t, y), \quad y(a) = y_0, \quad t \in [a, b],$$
and divide $[a, b]$ into $N$ equally spaced subintervals with grid points
$$t_i = a + ih, \quad i = 0, 1, \dots, N, \quad t_0 = a, \quad t_N = b,$$
where
$$h = \frac{b - a}{N}.$$
We then seek the numerical solutions
$$\tilde{y}_i \approx y(t_i), \quad i = 1, 2, \dots, N,$$
with $\tilde{y}_0 = y(t_0) = y_0$. Figure 10.4 graphically shows how these quantities are related.

[Figure 10.4: The interval $[a, b]$ is divided into $N$ equally spaced subintervals. The exact solution $y(t_i)$ is shown with the numerical solution, $\tilde{y}_i$, with $t_i = a + ih$, $i = 0, 1, \dots, N$.]

Euler's Method can be derived using the Taylor series expansion of the solution $y(t_i + h)$ about $t = t_i$ for $i = 1, 2, \dots, N$. This is given by
$$y(t_{i+1}) = y(t_i + h) = y(t_i) + y'(t_i)\,h + \frac{h^2}{2}y''(\xi_i), \quad \xi_i \in (t_i, t_{i+1}). \tag{10.9}$$
Here the term $\frac{h^2}{2}y''(\xi_i)$ captures all of the higher order terms and represents the error made using a linear approximation to $y(t_i + h)$.
Dropping the remainder term, noting that $y'(t) = f(t, y)$, and defining the resulting numerical approximations by $\tilde{y}_i \approx y(t_i)$, we have Euler's Method, $\tilde{y}_{i+1} = \tilde{y}_i + h f(t_i, \tilde{y}_i)$. The local truncation error, which measures how well the exact solution satisfies the scheme, is
$$\tau_{i+1}(h) = \frac{y(t_{i+1}) - \tilde{y}_i}{h} - f(t_i, y_i).$$
A simple computation gives
$$\tau_{i+1}(h) = \frac{h}{2}y''(\xi_i), \quad \xi_i \in (t_i, t_{i+1}).$$
Since the local truncation error is of order $h$, this scheme is said to be of order one. More generally, for a numerical scheme of the form $\tilde{y}_{i+1} = \tilde{y}_i + h F(t_i, \tilde{y}_i)$, the local truncation error is defined by
$$\tau_{i+1}(h) = \frac{y(t_{i+1}) - \tilde{y}_i}{h} - F(t_i, y_i).$$
The accumulation of these errors leads to the global error. In fact, one can show that if $f$ is continuous, satisfies the Lipschitz condition
$$|f(t, y_2) - f(t, y_1)| \le L|y_2 - y_1|$$
in the domain of interest, and $|y''(t)| \le M$ for $t \in [a, b]$, then
$$|y(t_i) - \tilde{y}_i| \le \frac{hM}{2L}\left(e^{L(t_i - a)} - 1\right), \quad i = 0, 1, \dots, N.$$
Furthermore, if one introduces round-off errors, bounded by δ, in both the
initial condition and at each step, the global error is modified as
$$|y(t_i) - \tilde{y}_i| \le \frac{1}{L}\left(\frac{hM}{2} + \frac{\delta}{h}\right)\left(e^{L(t_i - a)} - 1\right) + |\delta_0|\,e^{L(t_i - a)}, \quad i = 0, 1, \dots, N.$$
Then for small enough steps h, there is a point when the round-off error
will dominate the error. [See Burden and Faires, Numerical Analysis for the
details.]
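To see the first order behavior concretely, here is a small MATLAB sketch (not from the text) that applies Euler's Method to $y' = t + y$, $y(0) = 1$, and prints the error in $y(1)$ as the step size is halved; the error should roughly halve with it, until round-off eventually interferes:

% Sketch: empirical check that Euler's Method is first order in h.
f = @(t,y) t + y;
exact = 2*exp(1) - 2;            % y(1) for the exact solution 2e^t - t - 1
for N = [10 20 40 80]
   h = 1/N; y = 1; t = 0;
   for i = 1:N                   % Euler steps across [0,1]
      y = y + h*f(t,y);
      t = t + h;
   end
   fprintf('N = %3d, error = %.6f\n', N, abs(exact - y));
end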
Can we improve upon Euler’s Method? The natural next step towards
finding a better scheme would be to keep more terms in the Taylor series
expansion. This leads to Taylor series methods of order n.
Taylor series methods of order $n$ take the form
$$\tilde{y}_{i+1} = \tilde{y}_i + h T^{(n)}(t_i, \tilde{y}_i), \quad i = 0, 1, \dots, N-1,$$
where
$$T^{(n)}(t, y) = f(t, y) + \frac{h}{2}f'(t, y) + \cdots + \frac{h^{n-1}}{n!}f^{(n-1)}(t, y).$$
We note that for n = 1, we retrieve Euler’s Method as a special case. We
demonstrate a third order Taylor’s Method in the next example.
Example 10.2. Apply the third order Taylor's Method to
$$\frac{dy}{dt} = t + y, \quad y(0) = 1,$$
and obtain an approximation for $y(1)$ for $h = 0.1$.
The third order Taylor's Method takes the form
$$\tilde{y}_{i+1} = \tilde{y}_i + h T^{(3)}(t_i, \tilde{y}_i),$$
where
$$T^{(3)}(t, y) = f(t, y) + \frac{h}{2}f'(t, y) + \frac{h^2}{3!}f''(t, y)$$
and $f(t, y) = t + y(t)$.
In order to set up the scheme, we need the first and second derivative of f (t, y) :
$$\begin{aligned}
f'(t, y) &= \frac{d}{dt}(t + y) = 1 + y' = 1 + t + y, \tag{10.14}\\
f''(t, y) &= \frac{d}{dt}(1 + t + y) = 1 + y' = 1 + t + y. \tag{10.15}
\end{aligned}$$
Inserting these expressions into the scheme, we have
$$\begin{aligned}
\tilde{y}_{i+1} &= \tilde{y}_i + h\left[(t_i + y_i) + \frac{h}{2}(1 + t_i + y_i) + \frac{h^2}{3!}(1 + t_i + y_i)\right]\\
&= \tilde{y}_i + h(t_i + y_i) + h^2\left(\frac{1}{2} + \frac{h}{6}\right)(1 + t_i + y_i),\\
\tilde{y}_0 &= y_0, \tag{10.16}
\end{aligned}$$
for $i = 0, 1, \dots, N-1$.
In Figure 10.5 we show the results comparing Euler's Method, the 3rd Order Taylor's Method, and the exact solution for $N = 10$. In Table 10.4 we provide the numerical values. The relative error in Euler's Method is about 7% and that of the 3rd Order Taylor's Method is about 0.006%. Thus, the 3rd Order Taylor's Method is significantly better than Euler's Method.
In the last section we provided some Maple code for performing Euler’s
method. A similar code in MATLAB looks like the following:
a=0;
b=1;
N=10;
h=(b-a)/N;

% Slope function
f = inline('t+y','t','y');
sol = inline('2*exp(t)-t-1','t');

% Initial Condition
t(1)=0;
y(1)=1;

% Euler's Method
for i=2:N+1
   y(i)=y(i-1)+h*f(t(i-1),y(i-1));
   t(i)=t(i-1)+h;
end
A simple modification can be made for the 3rd Order Taylor's Method by replacing the Euler's Method part of the preceding code by

% Taylor's Method, Order 3
y(1)=1;
h3 = h^2*(1/2+h/6);
for i=2:N+1
   y(i)=y(i-1)+h*f(t(i-1),y(i-1))+h3*(1+t(i-1)+y(i-1));
   t(i)=t(i-1)+h;
end

[Figure 10.5: Numerical results for Euler's Method (filled circle) and 3rd Order Taylor's Method (open circle) for solving Example 10.2 as compared to exact solution (solid line).]
While the accuracy in the last example seemed sufficient, we have to remember that we only stopped at one unit of time. How can we be confident that the scheme would work as well if we carried out the computation for much longer times? For example, if the time unit were only a second, then one would need 86,400 times longer to predict a day forward. Of course, the
scale matters. But, often we need to carry out numerical schemes for long times and we hope that the scheme not only converges to a solution, but that it converges to the solution to the given problem. Also, the previous example was relatively easy to program because we could provide a relatively simple form for $T^{(3)}(t, y)$ with a quick computation of the derivatives of $f(t, y)$. This is not always the case and higher order Taylor methods in this form are not typically used. Instead, one can approximate $T^{(n)}(t, y)$ by evaluating the known function $f(t, y)$ at selected values of $t$ and $y$, leading to Runge-Kutta methods.
As we had seen in the last section, we can use higher order Taylor methods to derive numerical schemes for solving
$$\frac{dy}{dt} = f(t, y), \quad y(a) = y_0, \quad t \in [a, b], \tag{10.17}$$
using a scheme of the form
$$T^{(n)}(t, y) = y'(t) + \frac{h}{2}y''(t) + \cdots + \frac{h^{n-1}}{n!}y^{(n)}(t).$$
In this section we will find approximations of T (n) (t, y) which avoid the
need for computing the derivatives.
For example, we could approximate
$$T^{(2)}(t, y) = f(t, y) + \frac{h}{2}\frac{df}{dt}(t, y)$$
by
$$T^{(2)}(t, y) \approx a f(t + \alpha, y + \beta)$$
for selected values of a, α, and β. This requires use of a generalization of
Taylor’s series to functions of two variables. In particular, for small α and β
we have
∂f ∂f
a f (t + α, y + β) = a f (t, y) + (t, y)α + (t, y) β
∂t ∂y
2 2 ∂2 f
1 ∂ f 2 ∂ f 2
+ (t, y)α + 2 (t, y)αβ + 2 (t, y) β
2 ∂t2 ∂t∂y ∂y
+ higher order terms. (10.19)
Furthermore, we need $\frac{df}{dt}(t, y)$. Since $y = y(t)$, this can be found using a generalization of the Chain Rule from Calculus III:
$$\frac{df}{dt}(t, y) = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial y}\frac{dy}{dt}.$$
Thus,
$$T^{(2)}(t, y) = f(t, y) + \frac{h}{2}\left[\frac{\partial f}{\partial t} + \frac{\partial f}{\partial y}\frac{dy}{dt}\right].$$
Comparing this expression to the linear (Taylor series) approximation of $a f(t + \alpha, y + \beta)$, we have
$$\begin{aligned}
T^{(2)} &\approx a f(t + \alpha, y + \beta)\\
f + \frac{h}{2}\frac{\partial f}{\partial t} + \frac{h}{2}f\frac{\partial f}{\partial y} &\approx a f + a\alpha\frac{\partial f}{\partial t} + a\beta\frac{\partial f}{\partial y}. \tag{10.20}
\end{aligned}$$
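Matching the coefficients of $f$, $\frac{\partial f}{\partial t}$, and $\frac{\partial f}{\partial y}$ on the two sides suggests taking
$$a = 1, \qquad a\alpha = \frac{h}{2}, \qquad a\beta = \frac{h}{2}f(t, y),$$
so that
$$T^{(2)}(t, y) \approx f\left(t + \frac{h}{2},\; y + \frac{h}{2}f(t, y)\right).$$
This choice gives the Midpoint Method,
$$\tilde{y}_{i+1} = \tilde{y}_i + h f\left(t_i + \frac{h}{2},\; \tilde{y}_i + \frac{h}{2}f(t_i, \tilde{y}_i)\right),$$
which is implemented in the following code.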
% Midpoint Method
y(1)=1;
for i=2:N+1
k1=h/2*f(t(i-1),y(i-1));
k2=h*f(t(i-1)+h/2,y(i-1)+k1);
y(i)=y(i-1)+k2;
t(i)=t(i-1)+h;
end
Example 10.3. Compare the Midpoint Method with the 2nd Order Taylor’s Method
for the problem
$$y' = t^2 + y, \quad y(0) = 1, \quad t \in [0, 1]. \tag{10.22}$$
The solution to this problem is $y(t) = 3e^t - 2 - 2t - t^2$. In order to implement the 2nd Order Taylor's Method, we need
$$T^{(2)} = f(t, y) + \frac{h}{2}f'(t, y) = t^2 + y + \frac{h}{2}(2t + t^2 + y). \tag{10.23}$$
The results of the implementation are shown in Table 10.5.
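For comparison, a minimal MATLAB sketch of the 2nd Order Taylor update for this example (assuming the same setup of f, t, y, h, and N as in the earlier scripts) might look like the following:

% Sketch: 2nd Order Taylor's Method for y' = t^2 + y, following (10.23)
y(1)=1;
for i=2:N+1
   T2 = t(i-1)^2 + y(i-1) + h/2*(2*t(i-1) + t(i-1)^2 + y(i-1));
   y(i) = y(i-1) + h*T2;
   t(i) = t(i-1) + h;
end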
There are other ways to approximate higher order Taylor polynomials. For example, we can approximate $T^{(3)}(t, y)$ using four parameters by
$$T^{(3)}(t, y) \approx a f(t, y) + b f(t + \alpha, y + \beta).$$
Table 10.5: Numerical values for 2nd Order Taylor's Method, Midpoint Method, exact solution, and errors for solving Example 10.3 with N = 10.

Exact     Taylor    Error     Midpoint  Error
1.0000    1.0000    0.0000    1.0000    0.0000
1.1055    1.1050    0.0005    1.1053    0.0003
1.2242    1.2231    0.0011    1.2236    0.0006
1.3596    1.3577    0.0019    1.3585    0.0010
1.5155    1.5127    0.0028    1.5139    0.0016
1.6962    1.6923    0.0038    1.6939    0.0023
1.9064    1.9013    0.0051    1.9032    0.0031
2.1513    2.1447    0.0065    2.1471    0.0041
2.4366    2.4284    0.0083    2.4313    0.0053
2.7688    2.7585    0.0103    2.7620    0.0068
3.1548    3.1422    0.0126    3.1463    0.0085
There are three equations and four unknowns. Therefore, there are many second order methods. Two classic methods are given by the modified Euler method ($a = b = \frac{1}{2}$, $\alpha = \beta = h$) and Heun's method ($a = \frac{1}{4}$, $b = \frac{3}{4}$, $\alpha = \beta = \frac{2}{3}h$).

The Fourth Order Runge-Kutta Method, which is most often used, is given by the scheme
$$\begin{aligned}
\tilde{y}_0 &= y_0,\\
k_1 &= h f(t_i, \tilde{y}_i),\\
k_2 &= h f\left(t_i + \frac{h}{2},\; \tilde{y}_i + \frac{1}{2}k_1\right),\\
k_3 &= h f\left(t_i + \frac{h}{2},\; \tilde{y}_i + \frac{1}{2}k_2\right),\\
k_4 &= h f(t_i + h,\; \tilde{y}_i + k_3),\\
\tilde{y}_{i+1} &= \tilde{y}_i + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4), \quad i = 0, 1, \dots, N-1. \tag{10.24}
\end{aligned}$$
Again, we can test this on Example 10.3 with N = 10. The MATLAB
implementation is given by
a=0;
b=1;
N=10;
h=(b-a)/N;

% Slope function
f = inline('t^2+y','t','y');
sol = inline('-2-2*t-t^2+3*exp(t)','t');

% Initial Condition
t(1)=0;
y(1)=1;

% RK4 Method
y1(1)=1;
for i=2:N+1
   k1=h*f(t(i-1),y1(i-1));
   k2=h*f(t(i-1)+h/2,y1(i-1)+k1/2);
   k3=h*f(t(i-1)+h/2,y1(i-1)+k2/2);
   k4=h*f(t(i-1)+h,y1(i-1)+k3);
   y1(i)=y1(i-1)+(k1+2*k2+2*k3+k4)/6;
   t(i)=t(i-1)+h;
end
MATLAB has built-in ODE solvers, such as ode45 for a fourth order Runge-Kutta method. Its implementation is given by

[t,y]=ode45(f,[0 1],1);

[Note: MATLAB has built-in ODE solvers, as do other software packages, like Maple and Mathematica. You should also note that there are currently open source packages, such as the Python based NumPy and Matplotlib, or Octave, of which some packages are contained within the Sage Project.]
In this case f is given by an inline function like in the above RK4 code.
The time interval is entered as [0, 1] and the 1 is the initial condition, y(0) =
1.
However, ode45 is not a straightforward RK4 implementation. It is a hybrid method in which 4th and 5th order methods are combined, allowing for adaptive step sizes to handle subintervals of the integration region which need more care. In this case, it implements a fourth order Runge-Kutta-Fehlberg method. Running this code for the above example actually results in values for N = 41 and not N = 10. If we wanted to have the routine output numerical solutions at specific times, then one could use the following form:
tspan=0:h:1;
[t,y]=ode45(f,tspan,1);
In Table 10.6 we show the solutions which result for Example 10.3, comparing the RK4 snippet above with ode45. As you can see, RK4 is much better than the previous implementation of the second order RK (Midpoint) Method. However, the MATLAB routine is two orders of magnitude better than RK4.
There are many ODE solvers in MATLAB. These are typically useful if RK4 is having difficulty solving particular problems. For the most part, one is fine using RK4, especially as a starting point. For example, there is ode23, which is similar to ode45 but combines a second and third order scheme. Applying it to Example 10.3, we obtain the results in Table 10.7. We compare these to the second order Runge-Kutta (Midpoint) method. The code snippets are shown below.
Table 10.6: Numerical values for Fourth Order Runge-Kutta Method, ode45, exact solution, and errors for solving Example 10.3 with N = 10.

Exact     RK4       Error        ode45     Error
1.0000    1.0000    0.0000       1.0000    0.0000
1.1055    1.1055    4.5894e-08   1.1055    -2.5083e-10
1.2242    1.2242    1.2335e-07   1.2242    -6.0935e-10
1.3596    1.3596    2.3850e-07   1.3596    -1.0954e-09
1.5155    1.5155    3.9843e-07   1.5155    -1.7319e-09
1.6962    1.6962    6.1126e-07   1.6962    -2.5451e-09
1.9064    1.9064    8.8636e-07   1.9064    -3.5651e-09
2.1513    2.1513    1.2345e-06   2.1513    -4.8265e-09
2.4366    2.4366    1.6679e-06   2.4366    -6.3686e-09
2.7688    2.7688    2.2008e-06   2.7688    -8.2366e-09
3.1548    3.1548    2.8492e-06   3.1548    -1.0482e-08
% Second Order Runge-Kutta (Midpoint) Method
for i=2:N+1
   k1=h*f(t(i-1),y1(i-1));
   k2=h*f(t(i-1)+h/2,y1(i-1)+k1/2);
   y1(i)=y1(i-1)+k2;
   t(i)=t(i-1)+h;
end

% ode23 at the same output times
tspan=0:h:1;
[t,y]=ode23(f,tspan,1);
Table 10.7: Numerical values for Second Order Runge-Kutta (Midpoint) Method, ode23, exact solution, and errors for solving Example 10.3 with N = 10.

Exact     Midpoint  Error     ode23     Error
1.0000    1.0000    0.0000    1.0000    0.0000
1.1055    1.1053    0.0003    1.1055    2.7409e-06
1.2242    1.2236    0.0006    1.2242    8.7114e-06
1.3596    1.3585    0.0010    1.3596    1.6792e-05
1.5155    1.5139    0.0016    1.5154    2.7361e-05
1.6962    1.6939    0.0023    1.6961    4.0853e-05
1.9064    1.9032    0.0031    1.9063    5.7764e-05
2.1513    2.1471    0.0041    2.1512    7.8665e-05
2.4366    2.4313    0.0053    2.4365    0.0001
2.7688    2.7620    0.0068    2.7687    0.0001
3.1548    3.1463    0.0085    3.1547    0.0002
We have seen several numerical schemes for solving initial value problems. There are other methods, or combinations of methods, which aim to refine the numerical approximations efficiently as if the step size in the current methods were taken to be much smaller. Some methods extrapolate solutions to obtain information outside of the solution interval. Others use one scheme to get a guess to the solution while refining, or correcting, this to obtain better solutions as the iteration through time proceeds. Such methods are described in courses in numerical analysis and in the literature. At this point we will apply these methods to several physics problems before continuing with analytical solutions.
10.3 The Finite Difference Method

We now turn to numerical solutions of partial differential equations, beginning with the one dimensional heat equation,
$$u_t = k u_{xx}.$$
The finite difference method is based upon replacing the derivatives in the equation by difference approximations. Recall that the time derivative is defined by the limit
$$\frac{\partial u}{\partial t} = \lim_{\Delta t \to 0} \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t}.$$
For small, finite $\Delta t$ we drop the limit and use the forward difference approximation
$$\frac{\partial u}{\partial t} \approx \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t}. \tag{10.25}$$
Similarly, the forward difference approximation for the spatial derivative is
$$\frac{\partial u}{\partial x} \approx \frac{u(x + \Delta x, t) - u(x, t)}{\Delta x}.$$
Then,
$$\frac{\partial u_x}{\partial x} \approx \frac{u_x(x + \Delta x, t) - u_x(x, t)}{\Delta x}.$$
We need to approximate the terms in the numerator. It is customary to use a backward difference approximation. This is given by letting $\Delta x \to -\Delta x$ in the forward difference form,
$$\frac{\partial u}{\partial x} \approx \frac{u(x, t) - u(x - \Delta x, t)}{\Delta x}. \tag{10.26}$$
Applying this at the two needed points, we have
$$u_x(x, t) \approx \frac{u(x, t) - u(x - \Delta x, t)}{\Delta x},$$
and
$$u_x(x + \Delta x, t) \approx \frac{u(x + \Delta x, t) - u(x, t)}{\Delta x}.$$
Thus, the second derivative can be approximated as
$$\begin{aligned}
\frac{\partial^2 u}{\partial x^2} &= \frac{\partial u_x}{\partial x}\\
&\approx \frac{u_x(x + \Delta x, t) - u_x(x, t)}{\Delta x}\\
&\approx \frac{1}{\Delta x}\left[\frac{u(x + \Delta x, t) - u(x, t)}{\Delta x} - \frac{u(x, t) - u(x - \Delta x, t)}{\Delta x}\right]\\
&= \frac{u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)}{(\Delta x)^2}. \tag{10.27}
\end{aligned}$$
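As a quick numerical check of (10.27) (a sketch, not from the text), one can compare the central difference against a function whose second derivative is known, say $u(x) = \sin \pi x$ with $u''(x) = -\pi^2 \sin \pi x$:

% Sketch: check the central difference approximation (10.27).
u = @(x) sin(pi*x);
dx = 0.01; x = 0.3;
approx = (u(x+dx) - 2*u(x) + u(x-dx))/dx^2;
exact = -pi^2*sin(pi*x);
fprintf('approx = %.5f, exact = %.5f\n', approx, exact)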
Inserting the forward difference approximation in time (10.25) and the central difference approximation (10.27) into the heat equation and solving for $u(x, t + \Delta t)$, we obtain
$$u(x, t + \Delta t) = u(x, t) + \alpha\left[u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)\right],$$
where $\alpha = k\frac{\Delta t}{(\Delta x)^2}$. In this equation we have a way to determine the solution at position $x$ and time $t + \Delta t$ given that we know the solution at the three positions $x - \Delta x$, $x$, and $x + \Delta x$ at time $t$.
Introducing a grid of points $x_i$ and times $t_j$,
$$x_i = a + i\Delta x, \quad i = 0, 1, \dots, N,$$
$$t_j = j\Delta t, \quad j = 0, 1, 2, \dots,$$
and writing $u_{i,j}$ for the approximation of $u(x_i, t_j)$, the scheme takes the form
$$u_{i,j+1} = u_{i,j} + \alpha\left(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}\right). \tag{10.31}$$
Equation (10.31) is the finite difference scheme for solving the heat equation. This equation is represented by the stencil shown in Figure 10.6. The black circles represent the four terms in the equation, $u_{i,j}$, $u_{i-1,j}$, $u_{i+1,j}$, and $u_{i,j+1}$.
[Figure 10.6: The stencil for the explicit scheme: the value $u_{i,j+1}$ at time level $j+1$ is computed from $u_{i-1,j}$, $u_{i,j}$, and $u_{i+1,j}$ at level $j$.]
To start the scheme, we need the initial condition,
$$u(x, 0) = f(x),$$
or $u_{i,0} = f(x_i)$ for $i = 0, 1, \dots, N$, and the boundary conditions. If these are the homogeneous Dirichlet conditions
$$u(a, t) = 0, \qquad u(b, t) = 0,$$
i.e., $u_{0,j} = 0$ and $u_{N,j} = 0$ for $j = 0, 1, \dots$, then we can fill in the rest of the missing data points as seen in Figure 10.11.
We could also use Neumann conditions. For example, let
$$u_x(a, t) = 0.$$
Using the forward difference approximation,
$$\left.\frac{\partial u}{\partial x}\right|_{x=a} \approx \frac{u(a + \Delta x, t) - u(a, t)}{\Delta x} = 0.$$
Then, $u(a + \Delta x, t) = u(a, t)$, or $u_{0,j} = u_{1,j}$, so the boundary value can be filled in from the nearest interior value.
As an example, consider the heat equation with $k = 1$ on $[0, 1]$ with homogeneous Dirichlet boundary conditions and initial condition
$$u(x, 0) = \sin \pi x.$$
The following MATLAB code implements the explicit scheme for this problem.
% Initialize Data
% Length of Rod, Time Interval
% Number of Points in Space, Number of Time Steps
L=1;
T=0.1;
k=1;
N=10;
M=50;
dx=L/N;
dt=T/M;
alpha=k*dt/dx^2;
% Position
for i=1:N+1
x(i)=(i-1)*dx;
end
% Initial Condition
for i=1:N+1
u0(i)=sin(pi*x(i));
end
for j=1:M
for i=2:N
u1(i)=u0(i)+alpha*(u0(i+1)-2*u0(i)+u0(i-1));
end
u1(1)=0;
u1(N+1)=0;
u0=u1;
end
% Plot solution
plot(x, u1);
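Since the exact solution for this initial condition is $u(x, t) = e^{-k\pi^2 t}\sin \pi x$, a couple of extra lines (a sketch) overlay it on the numerical result at the final time $T$:

% Compare with the exact solution u(x,t) = exp(-k*pi^2*t)*sin(pi*x) at t = T
hold on
plot(x, exp(-k*pi^2*T)*sin(pi*x), '--');
hold off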
We now consider the errors introduced by these approximations. Recall the forward difference approximation in time,
$$\frac{\partial u}{\partial t} \approx \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t}. \tag{10.34}$$
This can be derived from the Taylor series expansion of $u(x, t + \Delta t)$ about $\Delta t = 0$,
$$u(x, t + \Delta t) = u(x, t) + \frac{\partial u}{\partial t}(x, t)\,\Delta t + \frac{1}{2!}\frac{\partial^2 u}{\partial t^2}(x, t)\,(\Delta t)^2 + O((\Delta t)^3).$$
Solving for $\frac{\partial u}{\partial t}(x, t)$, we obtain
$$\frac{\partial u}{\partial t}(x, t) = \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t} - \frac{1}{2!}\frac{\partial^2 u}{\partial t^2}(x, t)\,\Delta t + O((\Delta t)^2).$$
We see that we have obtained the forward difference approximation (10.25) with the added benefit of knowing something about the error terms introduced in the approximation. Namely, when we approximate $u_t$ with the forward difference approximation (10.25), we are making an error of
$$E(x, t, \Delta t) = -\frac{1}{2!}\frac{\partial^2 u}{\partial t^2}(x, t)\,\Delta t + O((\Delta t)^2).$$
We have truncated the Taylor series to obtain this approximation and we say
that
$$\frac{\partial u}{\partial t} = \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t} + O(\Delta t) \tag{10.35}$$
is a first order approximation in ∆t.
452 partial differential equations
In a similar manner, we can obtain the truncation error for the $u_{xx}$ term. However, instead of starting with the approximation we used in Equation (10.27), we will derive a term using the Taylor series expansion of $u(x + \Delta x, t)$ about $\Delta x = 0$. Namely, we begin with the expansion
$$u(x + \Delta x, t) = u(x, t) + u_x(x, t)\,\Delta x + \frac{1}{2!}u_{xx}(x, t)\,(\Delta x)^2 + \frac{1}{3!}u_{xxx}(x, t)\,(\Delta x)^3 + \frac{1}{4!}u_{xxxx}(x, t)\,(\Delta x)^4 + \dots. \tag{10.36}$$
We want to solve this equation for $u_{xx}$. However, there are some obstructions, like needing to know the $u_x$ term. So, we seek a way to eliminate lower order terms. One way is to note that replacing $\Delta x$ by $-\Delta x$ gives
$$u(x - \Delta x, t) = u(x, t) - u_x(x, t)\,\Delta x + \frac{1}{2!}u_{xx}(x, t)\,(\Delta x)^2 - \frac{1}{3!}u_{xxx}(x, t)\,(\Delta x)^3 + \frac{1}{4!}u_{xxxx}(x, t)\,(\Delta x)^4 + \dots. \tag{10.37}$$
Adding these Taylor series, we have
$$u(x + \Delta x, t) + u(x - \Delta x, t) = 2u(x, t) + u_{xx}(x, t)\,(\Delta x)^2 + \frac{2}{4!}u_{xxxx}(x, t)\,(\Delta x)^4 + \dots.$$
Solving for $u_{xx}$, we find
$$u_{xx}(x, t) = \frac{u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)}{(\Delta x)^2} + O((\Delta x)^2).$$
Thus, the central difference approximation (10.27) is a second order approximation in $\Delta x$.
10.4 Stability
When one implements the scheme with $\alpha = k\Delta t/(\Delta x)^2$, one finds that the solution goes crazy when $\alpha$ is too big. In other words, if you try to push the individual time steps too far into the future, then something goes haywire. We can determine the onset of instability by looking at the solution of this equation for $u_{m,j}$. [Note: We changed index $i$ to $m$ to avoid confusion later in this section.]
The scheme is actually what is called a partial difference equation for $u_{m,j}$. We could write it in terms of differences, such as $u_{m+1,j} - u_{m,j}$ and $u_{m,j+1} - u_{m,j}$. The time indices in the scheme are at most one unit apart and the spatial indices at most two units apart. We can see this in the stencil in Figure 10.6. So, this is a second order partial difference equation, similar to the idea that the heat equation is a second order partial differential equation. The heat equation can be solved using the method of separation of variables. The difference scheme can also be solved in a similar fashion. We will show how this can lead to product solutions.
We begin by assuming that $u_{m,j} = X_m T_j$, a product of functions of the indices $m$ and $j$. Inserting this guess into the finite difference equation, we have
$$\begin{aligned}
u_{m,j+1} &= u_{m,j} + \alpha\left(u_{m+1,j} - 2u_{m,j} + u_{m-1,j}\right),\\
X_m T_{j+1} &= X_m T_j + \alpha\left[X_{m+1} - 2X_m + X_{m-1}\right]T_j,\\
\frac{T_{j+1}}{T_j} &= \frac{\alpha X_{m+1} + (1 - 2\alpha)X_m + \alpha X_{m-1}}{X_m}. \tag{10.41}
\end{aligned}$$
Noting that the left side depends only on $j$ and the right side only on $m$, both sides must equal a constant, which we call $\lambda$.
The first equation is a simple first order difference equation and can be
solved by iteration:
$$\begin{aligned}
T_{j+1} &= \lambda T_j\\
&= \lambda(\lambda T_{j-1}) = \lambda^2 T_{j-1}\\
&= \lambda^3 T_{j-2}\\
&\;\;\vdots\\
&= \lambda^{j+1} T_0. \tag{10.44}
\end{aligned}$$
The second equation gives the spatial equation $\alpha X_{m+1} + (1 - 2\alpha - \lambda)X_m + \alpha X_{m-1} = 0$, which can be solved by inserting $X_m = \xi^m$ with $\xi = e^{i\beta\Delta x}$. Dividing out a factor of $\xi^{m-1}$, we have
$$\begin{aligned}
0 &= \alpha \xi^2 + (1 - 2\alpha - \lambda)\xi + \alpha\\
&= \alpha e^{2i\beta\Delta x} + (1 - 2\alpha - \lambda)e^{i\beta\Delta x} + \alpha\\
&= e^{i\beta\Delta x}\left[\alpha\left(e^{i\beta\Delta x} + e^{-i\beta\Delta x}\right) + (1 - 2\alpha - \lambda)\right]\\
&= e^{i\beta\Delta x}\left[2\alpha\cos(\beta\Delta x) + (1 - 2\alpha - \lambda)\right].
\end{aligned}$$
Therefore,
$$\lambda = 2\alpha\cos(\beta\Delta x) + 1 - 2\alpha. \tag{10.46}$$
Thus, we have found product solutions of the form
$$u_{m,j} = \lambda^j e^{i\beta m \Delta x},$$
where
$$\lambda = 2\alpha\cos(\beta\Delta x) + 1 - 2\alpha.$$
For the solution to remain bounded, or stable, we need $|\lambda| \le 1$ for all values of $\beta$. Therefore, we have the inequality
$$-1 \le 2\alpha\cos(\beta\Delta x) + 1 - 2\alpha \le 1.$$
Since $\cos(\beta\Delta x) \le 1$, the right inequality always holds for $\alpha > 0$. The left inequality is most restrictive when $\cos(\beta\Delta x) = -1$, requiring $4\alpha \le 2$. Thus, the explicit scheme is stable provided
$$\alpha = k\frac{\Delta t}{(\Delta x)^2} \le \frac{1}{2}.$$