ENGS203P, Block 2 (Topics 3 & 4):
Introduction to Second-Order Partial
Differential Equations
Lecturer: Prof. Ajoy Velayudhan, Department of Biochemical Engineering
December 7, 2015
1 Review and Scope
1.1 Review
From MMA I and material already covered in MMA II, these are the topics you
should be familiar with:
1. Partial differentiation: You should thoroughly understand partial differen-
tiation of a function of several variables, and appreciate the difference
between independent and dependent variables. This material was covered
in MMA I.
2. Series: You should understand the expansion of an arbitrary function in a
Fourier series, appreciate the importance of the completeness of the basis
functions (such as the sine and cosine functions), and understand the
difference between convergence in the mean and pointwise convergence.
This material was covered in previous lectures in MMA II.
3. Integral transforms: You should understand how to apply Laplace and
Fourier transforms to ODEs (we will cover their extension to PDEs in
this section), how to invert functions in the transformed domain using
tables and simple rules, and how to apply these transforms to convolution
integrals. This material was covered in previous lectures in MMA II.
1.2 Scope of current block
The topics to be covered in Block 2 are:
1. Brief introduction to PDEs as the most common mathematical model for
engineering problems. (ODEs often represent the lumped approximation
of a PDE model.) Introduce basic terminology (linear versus nonlinear,
homogeneous versus inhomogeneous, etc.).
2. Classify PDEs into elliptic, parabolic, and hyperbolic; solve one problem
of each kind on a finite domain by the method of separation of variables.
3. Solve one infinite-domain problem (the heat equation) using Fourier Trans-
forms.
4. Outline the numerical solution of one nonlinear PDE using Matlab.
1.2.1 Learning objectives
At the completion of this block of lectures, the diligent student should be able
to:
1. Classify a PDE with respect to the following criteria: linear/nonlinear,
homogeneous/inhomogeneous, elliptic/parabolic/hyperbolic (including
regions of applicability, if appropriate).
2. Use the method of separation of variables to solve appropriate linear PDEs.
3. Use Laplace and Fourier transforms to solve appropriate linear PDEs.
2 Introduction
Partial differential equations (PDEs) involve at least two independent variables.
Thus, with x and y as the independent variables, and u as the only dependent
variable, the following equation is a PDE:

∂u/∂x + ∂u/∂y = 1.    (1)

Such equations are very common in engineering and science. Earlier, we had
studied ordinary differential equations (ODEs), in which there was only one
independent variable.
Notation: we will sometimes suppress the function dependence of u on x
and y for simplicity and write u rather than u(x, y), but you should always
remember which are the independent, and which the dependent, variables.
Linearity: A PDE is linear when it contains no product of dependent-variable
terms. Thus, u ∂u/∂x is a nonlinear term, because it is the product of the
dependent variable u and a function of the dependent variable ∂u/∂x. However,
x²y³ ∂u/∂x is a linear term, because only nonlinearities of the dependent
variable matter for the purposes of this definition. Nonlinearities of the
independent variables, such as x² and y³ in this example, do not count.
Nonlinear PDEs usually do not have analytical solutions, and often numerical
methods are needed to treat them. We will focus on the subset of linear
second-order PDEs with constant coefficients; although this is a substantial
restriction, many practical problems lie within our scope. Different solutions
to a linear equation can be combined to produce another solution to that
equation (the principle of superposition), and this feature will be critical in
the solution method to be developed.
Homogeneity: Just as for ODEs, if every term in the equation for a PDE con-
tains the dependent variable in some form, it is a homogeneous PDE. Otherwise,
it is an inhomogeneous PDE, as exemplified by Equation 1.
3 Classification of PDEs

A linear second-order PDE in two independent variables can be written as

A(x, y) ∂²u/∂x² + B(x, y) ∂²u/∂x∂y + C(x, y) ∂²u/∂y² + F(x, y, u, u_x, u_y) = 0    (2)

where x and y are the independent variables and u is the dependent variable.
Note that we have denoted ∂u/∂x by u_x, and ∂u/∂y by u_y; we will use these
convenient abbreviations often. Note also that we have not specified the form
of the first-order derivatives u_x and u_y, or of the dependent variable u; these
have all been captured in the general function F(x, y, u, u_x, u_y). This is
because linear PDEs can be classified based only on the coefficients of the
second-order terms. We calculate the discriminant D(x, y) ≡ [B(x, y)]² −
4A(x, y)C(x, y). If D > 0 in the domain of interest, the PDE is called
hyperbolic; if D = 0 in the domain of interest, the PDE is called parabolic;
if D < 0 in the domain of interest, the PDE is called elliptic. Since D is in
general a function of x and y, its value can change over the domain of interest,
and so can the classification of the PDE. But we are primarily interested in
PDEs with constant coefficients; then A, B, and C are constants, and so is D;
hence the classification of such a PDE will not change over the domain.
We shall see that these three classes of PDEs (hyperbolic, parabolic, and
elliptic) have fundamentally different behaviour.
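The discriminant test is easy to mechanise. The sketch below (in Python; the function name classify_pde is our choice, not course notation) applies it to the three classical examples discussed in the following sections:

```python
def classify_pde(A, B, C):
    """Classify A*u_xx + B*u_xy + C*u_yy + F(x, y, u, u_x, u_y) = 0
    by the sign of the discriminant D = B^2 - 4AC."""
    D = B ** 2 - 4 * A * C
    if D > 0:
        return "hyperbolic"
    elif D == 0:
        return "parabolic"
    return "elliptic"

# Laplace's equation u_xx + u_yy = 0: A = C = 1, B = 0, so D = -4
print(classify_pde(1, 0, 1))        # elliptic
# Diffusion equation u_t = D u_xx, writing y = t: only one second-order term
print(classify_pde(1, 0, 0))        # parabolic
# Wave equation u_tt = c^2 u_xx: A = c^2, B = 0, C = -1, so D = 4c^2
print(classify_pde(2.0 ** 2, 0, -1))  # hyperbolic
```

Since D depends only on the second-order coefficients, the lower-order terms in F play no role in the classification, exactly as stated above.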
4 Elliptic PDEs
The classical example of an elliptic PDE is Laplace's equation (LE):

∂²u/∂x² + ∂²u/∂y² = 0,    (3)

where the independent variables x and y could be the distances from the origin
of a two-dimensional surface, and the single dependent variable u could be
the temperature at any point in that surface. In this case, LE represents the
conservation of (thermal) energy in a system where only thermal diffusion is
important. As with the other linear PDEs considered above, we can solve the
LE by separation of variables.
Consider a thin square plate of side L located so that one vertex is at the
origin of coordinates, and two edges lie along the X and Y axes. The behaviour
along the Z axis can be neglected because the plate is thin. The three edges
x = 0, x = L, y = 0 are all maintained at zero temperature, u = 0. The fourth
edge, y = L, is maintained at u = f(x), where f is an arbitrary continuous
function. Thus four boundary conditions (BCs) have been specified. We want
to calculate the steady-state temperature at any point within the plate.
As usual, we assume that u(x, y) = g(x)·h(y), which when substituted into
the PDE gives g″(x)·h(y) + g(x)·h″(y) = 0. (Justifying this approach is non-
trivial, and some discussion is presented in Section 5.1.) Dividing throughout
by g(x)·h(y), which is never zero within the plate, we get

g″(x)/g(x) + h″(y)/h(y) = 0,   or, equivalently,   g″(x)/g(x) = −h″(y)/h(y).

Now, the left-hand side of this last equation depends only on x, while the
right-hand side depends only on y. The only way for this equation to be true
generally is for each side separately to equal a constant:

g″(x)/g(x) = −h″(y)/h(y) = K,

where K is a constant. Since there are two equal signs in this last expression,
we have two equations: g″(x)/g(x) = K, and −h″(y)/h(y) = K. A substantial
simplification has been achieved; the original PDE has now been replaced by
two ODEs. We solve these linear ODEs separately. Note that g(x) will have to
satisfy the two boundary conditions for which x is specified, and that h(y) will
have to satisfy the two boundary conditions for which y is specified.
The first equation can be rewritten as g″(x) = K g(x), which is a constant-
coefficient linear second-order ODE, whose solution depends on whether the
constant K is positive, negative, or zero (all these solutions have been previously
discussed in Maths I). We consider each case in turn.
1. Case 1: K > 0. Then the equation for g(x) has the solution g(x) =
A·e^(Px) + B·e^(−Px), where A and B are constants of integration, and
the constant P is defined as P = √K. For the BC at x = 0, we get
0 = g(0) = A + B, or B = −A. For the BC at x = L, we get 0 = g(L) =
A·e^(PL) + B·e^(−PL) = A(e^(PL) − e^(−PL)). This leads to two subcases:
either A = 0, or the term in brackets is zero. If A is zero, so is B, and the
solution g(x) becomes identically zero. This is obviously not physically
appropriate. In the second subcase, where the term in brackets must be
zero, it follows that P = 0, and so K = 0. But this contradicts the
assumption that K is strictly positive. So Case 1 does not lead to a viable
solution, and can be rejected.
2. Case 2: K = 0. Then the equation for g(x) becomes g″(x) = K g(x) = 0,
which has the simple solution g(x) = A + Bx, where A and B are again
arbitrary constants of integration. For the BC at x = 0, we get 0 = g(0) =
A. For the BC at x = L, we get 0 = g(L) = BL, which leads to B = 0. So
both A and B are zero, resulting in g(x) becoming identically zero. Again,
this is physically inappropriate, and Case 2 can be rejected.
3. Case 3: K < 0, and we write K = −W, where W is positive. Then
the equation for g(x) becomes g″(x) = −W g(x), which has the solution
g(x) = A·sin(Px) + B·cos(Px), where the positive constant P is defined
as P = √W. For the BC at x = 0, we get 0 = g(0) = B. For the BC
at x = L, we get 0 = g(L) = A·sin(PL). Again there are two subcases:
either A = 0, or the sine term is zero. If A = 0, then the solution becomes
identically zero, which can be rejected. If the sine term is zero, there is an
infinity of possible solutions, of the form P_n L = nπ, where n is any positive
integer (1, 2, 3, ...). Each of these values of P_n has its own constant of
integration A_n, leading to the solution g(x) = Σ_{n=1}^{∞} A_n sin(nπx/L).
Recall that the principle of superposition applies to linear problems, allowing
us to include all these solutions and thus produce the most general and
inclusive solution.
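Before turning to h(y), it is reassuring to verify Case 3 numerically. The sketch below (in Python; L = 1 and n = 3 are arbitrary assumed values) uses a central finite difference to confirm that g(x) = sin(nπx/L) satisfies g″ = K g with K = −(nπ/L)², together with both boundary conditions:

```python
import math

L = 1.0   # assumed plate side
n = 3     # any positive integer works
P = n * math.pi / L

def g(x):
    """Case 3 solution with B = 0: g(x) = sin(P x), P = n*pi/L."""
    return math.sin(P * x)

# Central-difference check that g'' = K g with K = -P**2 (i.e. K < 0) ...
d = 1e-5
x0 = 0.3
g2 = (g(x0 + d) - 2 * g(x0) + g(x0 - d)) / d ** 2
print(abs(g2 - (-P ** 2) * g(x0)) < 1e-4)          # True

# ... and that both boundary conditions g(0) = g(L) = 0 are satisfied.
print(abs(g(0.0)) < 1e-12, abs(g(L)) < 1e-12)      # True True
```

Any other positive integer n passes the same check, which is exactly the infinity of eigenfunctions found above.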
Having calculated the solution for g(x), we now turn to the ODE for h(y),
which becomes h″(y) = −K h(y) = (P_n)² h(y), where P_n = nπ/L, as just
determined above. Because of the positive multiplying factor, the solution for
h(y) must take the form h(y) = C_n·e^(P_n y) + D_n·e^(−P_n y), where the C_n
and D_n are constants of integration. Applying the first BC, at y = 0, gives
0 = h(0) = C_n + D_n, or D_n = −C_n. Thus the solution calculated so far can
be written as:

u(x, y) = g(x)·h(y) = Σ_{n=1}^{∞} A_n sin(nπx/L) · C_n (e^(P_n y) − e^(−P_n y)).

From the definition of the hyperbolic sine function sinh, this solution can be
rewritten as:

u(x, y) = Σ_{n=1}^{∞} A_n sin(nπx/L) · 2C_n sinh(nπy/L).

There remains one final BC, at y = L. Applying it results in

f(x) = u(x, L) = Σ_{n=1}^{∞} 2A_n C_n sinh(nπ) sin(nπx/L).

For convenience, we define Q_n ≡ 2A_n C_n sinh(nπ), allowing the previous result
to be written in simplified form as f(x) = Σ_{n=1}^{∞} Q_n sin(nπx/L). But we
know from the theory of Fourier series that the coefficients Q_n are then given by

Q_n = (2/L) ∫₀^L f(x) sin(nπx/L) dx.

The solution is therefore

u(x, y) = Σ_{n=1}^{∞} { (2/L) ∫₀^L f(x) sin(nπx/L) dx } · sin(nπx/L) · sinh(nπy/L) / sinh(nπ).

It can easily be shown that this solution satisfies the original PDE and the BCs,
and the reader is encouraged to verify these claims.
An immediate generalisation of this result is to apply the Laplace Equation
to a rectangle, 0 < x < a, 0 < y < b, with the same BCs. The result is provided
below without derivation:

u(x, y) = Σ_{n=1}^{∞} { (2/a) ∫₀^a f(x) sin(nπx/a) dx } · sin(nπx/a) · sinh(nπy/a) / sinh(nπb/a).

Applying this result to specific functions f(x) is straightforward; the first
Workshop showed that the solution for f(x) = A, where A is a constant, is:

u(x, y) = (4A/π) Σ_{k=1}^{∞} [1/(2k − 1)] sin((2k − 1)πx/L) · sinh((2k − 1)πy/L) / sinh((2k − 1)π)

for the case of a square of side L. Similarly, the solution for f(x) = A for the
rectangle 0 < x < a, 0 < y < b is:

u(x, y) = (4A/π) Σ_{k=1}^{∞} [1/(2k − 1)] sin((2k − 1)πx/a) · sinh((2k − 1)πy/a) / sinh((2k − 1)πb/a).
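The constant-edge-data series for the square is easy to evaluate numerically. The sketch below (in Python; the function name, the truncation at 100 terms, and L = A = 1 are our choices) also exploits a symmetry check: superposing the four problems that each hold one edge at A gives the constant solution u ≡ A, so by symmetry the value at the centre of the square must be A/4:

```python
import math

def u_square(x, y, L=1.0, A=1.0, terms=100):
    """Series solution of Laplace's equation on a square of side L with
    u = A on the edge y = L and u = 0 on the other three edges."""
    total = 0.0
    for k in range(1, terms + 1):
        m = 2 * k - 1   # only odd harmonics survive for a constant edge value
        total += (math.sin(m * math.pi * x / L) / m) \
                 * math.sinh(m * math.pi * y / L) / math.sinh(m * math.pi)
    return (4.0 * A / math.pi) * total

print(u_square(0.5, 0.0))            # 0.0 on the bottom edge (sinh(0) = 0)
print(round(u_square(0.5, 0.5), 4))  # 0.25, i.e. A/4 at the centre
```

Note that the truncation is kept modest deliberately: for very large n, sinh(nπ) overflows double precision, so a practical implementation works with the ratio of exponentials or caps the number of terms, as done here.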
5 Parabolic PDEs

The classical example of a parabolic PDE is the diffusion equation:

∂u/∂t = D ∂²u/∂x²    (4)

where the distance x and the time t are the independent variables, and, for
instance, the single dependent variable u could be the concentration of a diffus-
ing species. The parameter D has the dimensions of a diffusivity [e.g., metres
squared per second], and is assumed to be constant. When the dependent vari-
able u is the temperature of a system, this PDE is called the heat equation;
the form of the PDE does not change whether diffusion of heat or mass is in-
volved. The problem is defined for a body of length L (only the distance in
the X-direction is considered, which is reasonable when the dimension in the X
direction is much larger than in the Y and Z directions, e.g., for a long thin rod).
The initial condition (IC) is: at t = 0, u(x, t) = h(x), where h(x) is a given
function. The boundary conditions (BCs) are simply that the concentration
tends to zero at the boundaries: as x → 0, u → 0; and as x → L, u → 0.
We solve this problem by separation of variables, which assumes that the
solution is a product of two functions, u(x, t) = f(x)·g(t), where f depends
only on x and g only on t; thus the dependence of the solution on the space
and time variables has been separated. (Justifying this approach is non-trivial,
and some discussion is presented in Section 5.1.) Then,

∂u/∂x = (df/dx)·g(t),   ∂²u/∂x² = (d²f/dx²)·g(t),   ∂u/∂t = f·(dg/dt).

Note that all derivatives on the right-hand sides are 'total' derivatives, while
those on the left are partial derivatives. Substituting these expressions into the
PDE results in

f·(dg/dt) = D·(d²f/dx²)·g(t),   or   g′/(D·g) = f″/f,

where g′ ≡ dg/dt and f″ ≡ d²f/dx². Now we note that the left-hand side of the
last equation is only a function of time, and the right-hand side is only a
function of space. This can only be true generally if both sides are constant.
Thus, we have

g′/(D·g) = f″/f = −λ²,

where the constant has been denoted by −λ² for convenience; this will be
justified later. Again, a substantial simplification has been achieved; the
original PDE has now been replaced by two ODEs: dg/dt = −λ² D g(t), and
d²f/dx² = −λ² f(x). Both are easily solved. We begin with the second
equation, which has the solution f(x) = A sin(λx) + B cos(λx), where A and B
are constants. From the first BC, f(x)·g(t) = 0 at x = 0; if we exclude the
physically uninteresting case where g(t) = 0 for all t, we are left with
f(0) = 0, or 0 = A sin(0) + B cos(0) = B; therefore B = 0. Then, applying the
second BC gives A sin(λL) = 0, which implies that λL = nπ, where n can be
any integer including zero. However, if n = 0, then λ = 0, and it follows that
dg/dt = 0, so that there is no time dependence; this is physically unreasonable,
and so n = 0 can be rejected. Similarly, if n were negative, λ would become
negative, which is physically unreasonable. We conclude that the eigenvalue λ
can only take on one of the following values: π/L, 2π/L, 3π/L, ... Each value
of λ will have its corresponding coefficient A, leading to
f(x) = Σ_{n=1}^{∞} A_n sin(λ_n x).
We now turn to the other ODE, g′/(D·g) = −λ² = −n²π²/L². It follows that
g(t) = C exp(−n²π²Dt/L²), where C is an arbitrary constant of integration. So
the solution is u(x, t) = f(x)·g(t) = Σ_{n=1}^{∞} A_n sin(λ_n x) · C exp(−n²π²Dt/L²),
and the constant C can be absorbed into the constants A_n. Thus,

u(x, t) = Σ_{n=1}^{∞} A_n sin(λ_n x) exp(−n²π²Dt/L²).

In order to determine the constants A_n, we use the IC: h(x) = u(x, 0) =
Σ_{n=1}^{∞} A_n sin(λ_n x). It can be seen that this is a Fourier sine series, and
the coefficients can therefore be determined from well-established theory:

A_n = (2/L) ∫₀^L h(x) sin(nπx/L) dx.

Using these values of A_n in the expression for u(x, t) gives us the final result:

u(x, t) = Σ_{n=1}^{∞} { (2/L) ∫₀^L h(x) sin(nπx/L) dx } sin(nπx/L) exp(−n²π²Dt/L²).
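The final series can be checked against a case with a known closed form. In the sketch below (Python; the function name, the trapezoidal quadrature for the coefficients, and D = L = 1 are our choices), the initial profile h(x) = sin(πx/L) excites only the n = 1 mode, so the series should reduce to sin(πx/L) exp(−π²Dt/L²):

```python
import math

def heat_series(x, t, h, D=1.0, L=1.0, terms=50, quad=400):
    """Separation-of-variables series for the diffusion equation on 0 < x < L
    with u = 0 at both ends and initial profile h(x); the Fourier sine
    coefficients A_n are computed with a simple trapezoidal rule."""
    dx = L / quad
    total = 0.0
    for n in range(1, terms + 1):
        # A_n = (2/L) * integral from 0 to L of h(x) sin(n pi x / L) dx
        An = 0.0
        for i in range(quad + 1):
            xi = i * dx
            w = 0.5 if i in (0, quad) else 1.0
            An += w * h(xi) * math.sin(n * math.pi * xi / L) * dx
        An *= 2.0 / L
        total += An * math.sin(n * math.pi * x / L) \
                 * math.exp(-(n * math.pi / L) ** 2 * D * t)
    return total

# h(x) = sin(pi x) excites only the n = 1 mode, so the series should
# reduce to the closed form sin(pi x) exp(-pi^2 t) (with D = L = 1).
u = heat_series(0.3, 0.1, lambda x: math.sin(math.pi * x))
exact = math.sin(math.pi * 0.3) * math.exp(-math.pi ** 2 * 0.1)
print(abs(u - exact) < 1e-5)
```

The same function handles any reasonable initial profile h(x); the orthogonality of the sine modes does all the work.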
5.1 Notes on the separation-of-variables method

The assumption of a product decomposition of the dependent variable (u(x, t) =
f(x)·g(t)) is interesting. It is clear that functions like x³ sin(log(t)) follow this
pattern, but what about simple functions like x + t? The detailed answer to this
question is quite complex, and we will not address it here. (The answer is dis-
cussed in Whittaker and Watson, and in Morse and Feshbach.) We will only point
out that it can be shown that the eigenfunctions produced by the separation
of variables can describe any physically relevant function h(x) (Birkhoff and
Rota, 1978); this is technically called completeness. This at least provides some
reassurance, in the sense that our approach can cope with any initial condition.
For the parabolic problem, using the same dash to represent differentiation
with respect to t in one case and x in the other is an abuse of notation, but is
easy to keep track of here, because derivatives of f are always with respect to
distance, and derivatives of g are always with respect to time. We chose the
constant −λ² in the calculation above to be a negative square. The negative
value ensured that the time-dependence of the concentration was a decreasing
function. This is physically appropriate; if the concentration kept increasing
with time, it would eventually become unbounded, which is clearly physically
wrong (an infinite concentration would fail to satisfy the conservation of mass).
6 Hyperbolic PDEs

The classical example of a hyperbolic PDE is the wave equation:

∂²u/∂t² = c² ∂²u/∂x²    (5)

where the distance x and the time t are the independent variables, and, for
instance, the single dependent variable u could be a wave-function in quantum
mechanics, or the displacement of a wave in classical physics. The parameter
c has the dimensions of a velocity; in the wave displacement example, it is the
velocity of the wave.
We will solve the wave equation for the displacement u(x, t) of a very long
string, with the following initial and boundary conditions:

u(0, t) = f(t) ≡ sin(t) for 0 ≤ t ≤ 2π, and 0 otherwise    (6)
lim_{x→∞} u(x, t) = 0    (7)
u(x, 0) = 0    (8)
∂u/∂t (x, 0) = 0.    (9)
Take the Laplace transform of the wave equation with respect to t, transforming
u(x, t) into U(x, s). This gives:

s² U(x, s) − s u(x, 0) − ∂u/∂t (x, 0) = c² ∂²U(x, s)/∂x².

Using the two initial conditions (Equations 8 and 9) simplifies this to:

∂²U/∂x² − (s²/c²) U = 0    (10)

Since this equation contains only a derivative with respect to x, it can be re-
garded as an ordinary differential equation for U(x, s) considered as a function
of x. This is a simple linear homogeneous second-order ODE, with the general
solution:

U(x, s) = A(s) e^(sx/c) + B(s) e^(−sx/c)    (11)

where the 'constants' A and B are independent of x, but may depend on s.
These constants must be determined by applying the two boundary conditions.
We begin with Equation (7), the condition as x → ∞:

lim_{x→∞} U(x, s) = lim_{x→∞} ∫₀^∞ e^(−st) u(x, t) dt = ∫₀^∞ e^(−st) lim_{x→∞} u(x, t) dt = 0,

where we have assumed that the limit and the integral can be interchanged,
which is usually true for physically motivated problems. In order to prevent
the positive exponential term in U(x, s) above from blowing up as x → ∞, it
follows that A(s) ≡ 0. We therefore have:

U(x, s) = B(s) e^(−sx/c),    (12)

where B must be determined from the other boundary condition, Equation (6).
Application of the boundary condition at x = 0 gives:

U(0, s) = F(s) = B(s)

where F(s) is the Laplace transform of the boundary function f(t). This
specifies B(s), leading to the solution in the Laplace domain:

U(x, s) = F(s) e^(−sx/c).

The shift theorem helps us invert this result, producing:

u(x, t) = f(t − x/c) H(t − x/c)

where H is the Heaviside step function. We can write this solution explicitly:

u(x, t) = 0 for 0 < t ≤ x/c;  u(x, t) = sin(t − x/c) for x/c ≤ t ≤ x/c + 2π;  u(x, t) = 0 for t ≥ x/c + 2π.

Since the calculation was formal, rather than rigorous, it is good practice to
verify that the final solution does in fact satisfy the governing PDE and the ICs
and BCs. To do that, we need to calculate derivatives of u(x, t) with respect to
x and t; only the non-zero portion needs to be calculated.
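The explicit solution is easy to evaluate. The short sketch below (Python; c = 1 is an assumed value) shows the key qualitative feature: the signal imposed at x = 0 arrives at position x only after a delay of x/c:

```python
import math

def f(t):
    """Boundary signal applied at x = 0: one sine cycle, then silence."""
    return math.sin(t) if 0.0 <= t <= 2.0 * math.pi else 0.0

def u(x, t, c=1.0):
    """Travelling-wave solution u(x, t) = f(t - x/c) H(t - x/c)."""
    tau = t - x / c
    return f(tau) if tau >= 0.0 else 0.0   # H is the Heaviside step

print(u(2.0, 1.0))                          # 0.0: the wave has not yet reached x = 2
print(round(u(2.0, 2.0 + math.pi / 2), 6))  # 1.0: the crest, a time pi/2 after arrival
```

The pulse travels to the right at speed c without changing shape, which is exactly what the argument t − x/c expresses.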
7 Unbounded domains

So far we have been focusing on finite spatial domains, because the separation-of-
variables method is most effective on such domains. While finite domains are the
most common for physical applications, it is occasionally useful to consider an
infinite-domain problem (sometimes as a convenient limiting approximation to a
finite-domain problem). The most general methods available for infinite domains
are integral transforms: either the Fourier transform or the Laplace transform,
depending on whether the relevant domain is infinite (e.g., −∞ < x < ∞) or
semi-infinite (e.g., 0 < x < ∞).
For example, the heat equation on an infinite domain can be simply solved
using the Fourier transform. We want to solve the heat equation

∂u/∂t = k ∂²u/∂x²    (13)

for the temperature u(x, t) with the initial condition:

u(x, 0) = h(x),    (14)

where h(x) is a given function; and the boundary conditions: as x → ±∞, u →
0. Note that the heat equation is formally identical to the diffusion equation
listed earlier (Equation 4). This is because diffusive processes of heat and mass
(and momentum) are mathematically identical. With care, results from one
field can be translated into another.
Note that there are several definitions of the Fourier transform in current
use, each differing from the others by constant factors of 2π or √(2π). Here we
use the definition that is commonly used in electrical engineering, which has
also been used in the previous lectures on integral transforms:

F(ξ) = f̂(ξ) = ∫_{−∞}^{∞} f(x) exp(−2πjξx) dx.
In Equation (13), taking the Fourier transform with respect to x trans-
forms u(x, t) into û(ξ, t); note that while one independent variable, x, is being
transformed into ξ, the other independent variable, t, simply gets carried across
without change (as if it were a parameter). The transformed equation is

∂û/∂t = −4π²ξ² k û    (15)

because u tends to zero at the spatial boundaries. This is an ordinary differential
equation for û(ξ, t) in the variable t, with ξ being carried along as a parameter.
Its solution is

û(ξ, t) = C e^(−4π²ξ²kt)    (16)

where C is a constant of integration. C can be evaluated by taking the Fourier
transform with respect to x of the initial condition (Equation 14): û(ξ, 0) = ĥ(ξ).
Comparing this with Equation (16) at t = 0, we see that C = ĥ(ξ), and the
solution becomes

û(ξ, t) = ĥ(ξ) e^(−4π²ξ²kt)    (17)

From tables of Fourier transforms, we find that the inverse transform of
e^(−4π²ξ²kt) is g(x, t) = e^(−x²/(4kt)) / √(4πkt). The right-hand side of
Equation (17) is therefore the product of the Fourier transforms of h(x) and
g(x, t). Inverting the result will require a convolution integral:

u(x, t) = ∫_{−∞}^{∞} [h(p)/√(4πkt)] e^(−(x−p)²/(4kt)) dp    (18)

This is the required solution. It can be shown, by performing the required partial
differentiations, to satisfy the governing equation (13). It clearly satisfies the
boundary conditions, and it can be shown that it satisfies the initial condition
(however, this is not trivial and we will not do so here; see Weinberger, p. 328).
The result is in fact quite general, and is true for all continuous and bounded
functions h(x).
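Equation (18) can be verified numerically in a case with a known answer. If the initial profile h(x) is itself the kernel g(x, t₀), then diffusing for a further time t must give g(x, t₀ + t) (the semigroup property of the heat kernel). The sketch below (Python; the truncation of the infinite range to [−20, 20] and the trapezoidal rule are our choices, with k = 1) checks this:

```python
import math

def kernel(x, t, k=1.0):
    """Heat kernel g(x, t) = exp(-x^2/(4 k t)) / sqrt(4 pi k t)."""
    return math.exp(-x * x / (4.0 * k * t)) / math.sqrt(4.0 * math.pi * k * t)

def u_conv(x, t, h, k=1.0, lim=20.0, n=4000):
    """Trapezoidal approximation of the convolution integral (18),
    truncating the infinite range to [-lim, lim]."""
    dp = 2.0 * lim / n
    total = 0.0
    for i in range(n + 1):
        p = -lim + i * dp
        w = 0.5 if i in (0, n) else 1.0
        total += w * h(p) * kernel(x - p, t, k) * dp
    return total

# Semigroup check: starting from the kernel at time t0 and diffusing for a
# further time t must reproduce the kernel at time t0 + t.
t0, t = 0.5, 1.0
approx = u_conv(0.7, t, lambda p: kernel(p, t0))
exact = kernel(0.7, t0 + t)
print(abs(approx - exact) < 1e-6)
```

The rapid decay of the Gaussian makes the truncated trapezoidal rule very accurate here; a general h(x) would need a range wide enough to contain essentially all of its mass.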
An important advantage of transform methods, such as the Laplace and
Fourier transforms, is that they can solve linear, constant-coefficient ODEs and
PDEs regardless of the form of the initial and boundary conditions and the
source term. For instance, it should be clear that including a source term in
the problem above would have produced an inhomogeneous ODE in t, which
would have been easy to solve. The method of separation of variables, on the
other hand, can generally cope with only one inhomogeneous term (either IC or
BC or source), and has to superpose different solutions to cope with multiple
inhomogeneities. However, the transform methods typically perform well on
semi-infinite or infinite domains, rather than on finite domains. (There are
finite analogues of these transforms, such as the finite Fourier transform, which
work well on finite domains, but we will not discuss them in this course.)
8 Summary
This block of lectures has outlined the solution of linear PDEs by two basic
methods: separation of variables and transform methods. While we have focused
on constant-coefficient PDEs, some other linear PDEs can be solved by these
methods. They represent a useful approach to modelling physical systems in
simple ways. As more realistic and detailed models become needed, the PDEs
tend to become more complex and nonlinear, and then numerical methods will
typically be needed. However, these analytical solutions to the simple or base-
case descriptions of physical and biological systems often give us insight into
the problem, which can then be augmented by numerical results.
9 Notation
Some common abbreviations are reported here for convenience.
ODE: Ordinary Differential Equation
PDE: Partial Differential Equation
BC: Boundary Condition
IC: Initial Condition
LE: Laplace's Equation
10 References

There are hundreds of books on PDEs, and you might want to browse through
the library to look for ones you find interesting. Only those directly related to
the discussion above are listed here.
Logan, J.D., Applied Partial Differential Equations, Springer (1998). [Gen-
eral reference.]
Kreyszig, E., Advanced Engineering Mathematics, Wiley, 8th edition (1999).
[There are many editions of this classical textbook, and most of them, especially
the later editions, are relevant to this course.]
Stroud, K.A., and Booth, D.J., Advanced Engineering Mathematics, Palgrave
Macmillan, 5th edition (2011). [Again, many editions exist of this classical work,
and most of them are relevant.]
Weinberger, H.F., A First Course in Partial Differential Equations, Wiley
(1965). [General reference.]
Birkhoff, G. and Rota, G.-C., Ordinary Differential Equations, Wiley (1978).
[Source for completeness of eigenfunctions.]
Whittaker, E.T. and Watson, G.N., A Course of Modern Analysis, 4th ed.,
paperback reprinting (1996). [Source for product solution f(x)g(y) in separation
of variables.]
Morse, P.M. and Feshbach, H., Methods of Theoretical Physics, 2 volumes,
McGraw-Hill (1953). [Source for product solution f(x)g(y) in separation of vari-
ables.]