Special Functions & Symmetries Course
Contents
1 Gamma and Beta functions 2
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Gamma function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Beta function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Other beta integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.1 Second beta integral . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.2 Third (Cauchy’s) beta integral . . . . . . . . . . . . . . . . . . . . . . 8
1.4.3 A complex contour for the beta integral . . . . . . . . . . . . . . . . 8
1.4.4 The Euler reflection formula . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.5 Double-contour integral . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Hypergeometric functions 10
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Euler’s integral representation . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Two functional relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5 Contour integral representations . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.6 The hypergeometric differential equation . . . . . . . . . . . . . . . . . . . . 15
2.7 The Riemann-Papperitz equation . . . . . . . . . . . . . . . . . . . . . . . . 17
2.8 Barnes’ contour integral for F (a, b; c; x) . . . . . . . . . . . . . . . . . . . . . 18
3 Orthogonal polynomials 18
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 General orthogonal polynomials . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Zeros of orthogonal polynomials . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4 Gauss quadrature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 Classical orthogonal polynomials . . . . . . . . . . . . . . . . . . . . . . . . 21
3.6 Hermite polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
PG course on SPECIAL FUNCTIONS AND THEIR SYMMETRIES
Thus Γ(x) is a meromorphic function equal to (x − 1)! when x is a positive integer. Euler found its representation both as an infinite integral and as a limit of a finite product. Let us derive the latter representation, following Euler's generalization of the factorial.

Let x and n be nonnegative integers. For any a ∈ C define the shifted factorial (a)_n by
$$(a)_n = a(a+1)\cdots(a+n-1), \qquad (a)_0 = 1. \eqno(1.1)$$
Then, obviously,
$$x! = \frac{(x+n)!}{(x+1)_n} = \frac{n!\,(n+1)_x}{(x+1)_n} = \frac{n!\,n^x}{(x+1)_n}\cdot\frac{(n+1)_x}{n^x}. \eqno(1.2)$$
Since
$$\lim_{n\to\infty}\frac{(n+1)_x}{n^x} = 1, \eqno(1.3)$$
we conclude that
$$x! = \lim_{n\to\infty}\frac{n!\,n^x}{(x+1)_n}. \eqno(1.4)$$
The limit exists for all x ∈ C such that x ≠ −1, −2, −3, . . ., for
$$\frac{n!\,n^x}{(x+1)_n} = \left(\frac{n}{n+1}\right)^{\!x}\prod_{j=1}^{n}\left(1+\frac{x}{j}\right)^{-1}\left(1+\frac{1}{j}\right)^{x} \eqno(1.5)$$
and
$$\left(1+\frac{x}{j}\right)^{-1}\left(1+\frac{1}{j}\right)^{x} = 1 + \frac{x(x-1)}{2j^2} + O\!\left(\frac{1}{j^3}\right). \eqno(1.6)$$
Definition 1.3 For all x ∈ C, x ≠ 0, −1, −2, . . ., the gamma function Γ(x) is defined by
$$\Gamma(x) = \lim_{k\to\infty}\frac{k!\,k^{x-1}}{(x)_k}. \eqno(1.7)$$
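As a quick numerical sanity check (not part of the course text), the limit (1.7) can be evaluated for real x > 0. The helper name `gamma_limit` is ours; the computation is done in log space via `math.lgamma`, using (x)_k = Γ(x + k)/Γ(x).

```python
import math

def gamma_limit(x, k=200_000):
    # Euler's limit (1.7): Gamma(x) = lim_{k->oo} k! k^{x-1} / (x)_k.
    # Work in log space so that k! does not overflow; the shifted
    # factorial is (x)_k = Gamma(x+k)/Gamma(x), valid here for x > 0.
    log_num = math.lgamma(k + 1) + (x - 1) * math.log(k)
    log_poch = math.lgamma(x + k) - math.lgamma(x)
    return math.exp(log_num - log_poch)

# Convergence is only O(1/k), cf. (1.6), so a few digits agree at k = 200000.
for x in (0.5, 1.7, 3.2):
    assert abs(gamma_limit(x) - math.gamma(x)) / math.gamma(x) < 1e-4
```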
Three immediate consequences are
$$\Gamma(1) = 1, \qquad \Gamma(x+1) = x\,\Gamma(x), \qquad \Gamma(n+1) = n!, \quad n = 0, 1, 2, \ldots. \eqno(1.8)$$
From the definition it follows that the gamma function has poles at zero and the negative integers, but 1/Γ(x) is an entire function with zeros at these points. Every entire function has a product representation.
Theorem 1.4
$$\frac{1}{\Gamma(x)} = x\,e^{\gamma x}\prod_{n=1}^{\infty}\left\{\left(1+\frac{x}{n}\right)e^{-x/n}\right\}, \eqno(1.9)$$
where γ is Euler's constant given by
$$\gamma = \lim_{n\to\infty}\left(\sum_{k=1}^{n}\frac{1}{k} - \log n\right). \eqno(1.10)$$
Proof.
$$\frac{1}{\Gamma(x)} = \lim_{n\to\infty}\frac{x(x+1)\cdots(x+n-1)}{n!\,n^{x-1}}$$
$$= \lim_{n\to\infty} x\left(1+\frac{x}{1}\right)\left(1+\frac{x}{2}\right)\cdots\left(1+\frac{x}{n}\right)e^{-x\log n}$$
$$= \lim_{n\to\infty}\left\{x\,e^{x\left(1+\frac12+\cdots+\frac1n-\log n\right)}\prod_{k=1}^{n}\left(1+\frac{x}{k}\right)e^{-x/k}\right\}$$
$$= x\,e^{\gamma x}\prod_{n=1}^{\infty}\left\{\left(1+\frac{x}{n}\right)e^{-x/n}\right\}. \qquad\Box$$
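The Weierstrass product (1.9) converges slowly (the tail of the log of the product behaves like x²/2N), so a truncated evaluation agrees with Python's `math.gamma` only to a few digits; `recip_gamma` below is an illustrative helper of our own, not a library routine.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant, eq. (1.10)

def recip_gamma(x, N=100_000):
    # Truncated Weierstrass product (1.9):
    # 1/Gamma(x) ~ x e^{gamma x} prod_{n<=N} (1 + x/n) e^{-x/n}
    p = x * math.exp(EULER_GAMMA * x)
    for n in range(1, N + 1):
        p *= (1 + x / n) * math.exp(-x / n)
    return p

# Works for any x except 0, -1, -2, ...; the tail error is O(x^2/N).
for x in (0.5, 2.3, -0.5):
    assert abs(recip_gamma(x) - 1 / math.gamma(x)) < 1e-4
```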
One may also speak of the beta function B(x, y), which is obtained from the integral by
analytic continuation.
Theorem 1.6
$$B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}. \eqno(1.13)$$
Proof. From the definition of the beta integral we have the following contiguous relation between three functions:
$$B(x, y+1) = B(x, y) - B(x+1, y). \eqno(1.14)$$
However, integration by parts of the integral on the left-hand side gives
$$B(x, y+1) = \frac{y}{x}\,B(x+1, y). \eqno(1.15)$$
Combining the last two we get the functional equation
$$B(x, y) = \frac{x+y}{y}\,B(x, y+1). \eqno(1.16)$$
Iterating this equation we obtain
$$B(x, y) = \frac{(x+y)_n}{(y)_n}\,B(x, y+n). \eqno(1.17)$$
Rewrite this relation as
$$B(x, y) = \frac{(x+y)_n}{n!\,n^{x+y-1}}\cdot\frac{n!\,n^{y-1}}{(y)_n}\int_0^n t^{x-1}\left(1-\frac{t}{n}\right)^{n+y-1} dt. \eqno(1.18)$$
As n → ∞, we have
$$B(x, y) = \frac{\Gamma(y)}{\Gamma(x+y)}\int_0^\infty t^{x-1}e^{-t}\,dt. \eqno(1.19)$$
Set y = 1 to arrive at
$$\frac{1}{x} = \int_0^1 t^{x-1}\,dt = B(x, 1) = \frac{\Gamma(1)}{\Gamma(x+1)}\int_0^\infty t^{x-1}e^{-t}\,dt. \eqno(1.20)$$
Hence
$$\Gamma(x) = \int_0^\infty t^{x-1}e^{-t}\,dt, \qquad \Re x > 0. \eqno(1.21)$$
This is the integral representation for the gamma function, which appears here as a byproduct. Now use it to prove the theorem for ℜx > 0 and ℜy > 0, and then use the standard argument of analytic continuation to finish the proof. □
An important corollary is an integral representation for the gamma function, which may be taken as its definition for ℜx > 0. Use it to represent explicitly the poles and the analytic continuation of Γ(x):
$$\Gamma(x) = \int_0^1 t^{x-1}e^{-t}\,dt + \int_1^\infty t^{x-1}e^{-t}\,dt = \sum_{n=0}^{\infty}\frac{(-1)^n}{(n+x)\,n!} + \int_1^\infty t^{x-1}e^{-t}\,dt.$$
The integral is an entire function, and the sum gives the poles at x = −n, n = 0, 1, 2, . . ., with the residues equal to (−1)^n/n!.
Several other useful forms of the beta integral can be derived by a change of variables. For example, take t = sin²θ in (1.12) to get
$$\int_0^{\pi/2}\sin^{2x-1}\theta\,\cos^{2y-1}\theta\,d\theta = \frac{\Gamma(x)\Gamma(y)}{2\,\Gamma(x+y)}.$$
Put x = y = 1/2. The result is
$$\Gamma(1/2) = \sqrt{\pi}.$$
The substitution t = (u − a)/(b − a) gives
$$\int_a^b (b-u)^{x-1}(u-a)^{y-1}\,du = (b-a)^{x+y-1}\,\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)},$$
which can be rewritten in the alternative form:
$$\int_a^b \frac{(b-u)^{x-1}(u-a)^{y-1}}{\Gamma(x)\,\Gamma(y)}\,du = \frac{(b-a)^{x+y-1}}{\Gamma(x+y)}.$$
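These identities are easy to probe numerically. A minimal sketch (midpoint rule; `beta_numeric` is our own helper, not a library routine) checks Theorem 1.6 on the basic beta integral:

```python
import math

def beta_numeric(x, y, N=100_000):
    # Midpoint-rule approximation of B(x,y) = int_0^1 t^{x-1}(1-t)^{y-1} dt.
    h = 1.0 / N
    s = 0.0
    for k in range(N):
        t = (k + 0.5) * h
        s += t ** (x - 1) * (1 - t) ** (y - 1) * h
    return s

# Theorem 1.6: B(x,y) = Gamma(x)Gamma(y)/Gamma(x+y)
for x, y in ((2.0, 3.0), (1.5, 2.5)):
    exact = math.gamma(x) * math.gamma(y) / math.gamma(x + y)
    assert abs(beta_numeric(x, y) - exact) < 1e-6
```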
There is a similar contour integral representing the gamma function. Let us first prove Hankel's contour integral for the reciprocal gamma function, which is one of the most beautiful and useful representations of this function. It has the following form:
$$\frac{1}{\Gamma(z)} = \frac{1}{2\pi i}\int_L s^{-z}e^{s}\,ds, \qquad z \in \mathbb{C}. \eqno(1.24)$$
The contour of integration L is the Hankel contour that runs from −∞ with arg s = −π, encircles the origin in the positive direction, and terminates at −∞, now with arg s = +π. For this we also use the notation $\int_{-\infty}^{(0+)}$. The multi-valued function s^{−z} is assumed to be real for real values of z and s, s > 0.
A proof of (1.24) follows immediately from the theory of Laplace transforms: from the well-known integral
$$\frac{\Gamma(z)}{s^z} = \int_0^\infty t^{z-1}e^{-st}\,dt,$$
(1.24) follows as a special case of the inversion formula. A direct proof follows from a special choice of the contour L: the negative real axis. When ℜz < 1 we can pull the contour onto the negative axis, where we have
$$\frac{1}{2\pi i}\left[-\int_\infty^0 (se^{-i\pi})^{-z}e^{-s}\,ds - \int_0^\infty (se^{+i\pi})^{-z}e^{-s}\,ds\right] = \frac{1}{\pi}\,\sin \pi z\;\Gamma(1-z).$$
Using the reflection formula (cf. the next subsection for a proof),
$$\Gamma(x)\,\Gamma(1-x) = \frac{\pi}{\sin \pi x}, \eqno(1.25)$$
we see that this is indeed the left-hand side of (1.24). In a final step the principle of analytic continuation is used to show that (1.24) holds for all finite complex values of z. Namely, both the left- and the right-hand side of (1.24) are entire functions of z.
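The reflection formula (1.25) can be tested directly with the standard library:

```python
import math

# Euler's reflection formula (1.25): Gamma(x) Gamma(1-x) = pi / sin(pi x),
# valid for all non-integer x (including negative x).
for x in (0.25, 0.5, 0.9, 1.3, -0.7):
    lhs = math.gamma(x) * math.gamma(1 - x)
    rhs = math.pi / math.sin(math.pi * x)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```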
Another form of (1.24) is
$$\Gamma(z) = \frac{1}{2i\sin \pi z}\int_L s^{z-1}e^{s}\,ds.$$
To prove the reflection formula (1.25), let the contour C consist of two circles about the origin of radii R and ε respectively, joined along the negative real axis from −R to −ε. Move along the outer circle in the counterclockwise direction, and along the inner circle in the clockwise direction. By the residue theorem,
$$\int_C \frac{z^{x-1}}{1-z}\,dz = -2\pi i.$$
Let R → ∞ and ε → 0, so that the first and third integrals tend to zero and the second and fourth combine to give (1.25) for 0 < x < 1. The full result follows by analytic continuation.
Here the contour starts at P, encircles the point 1 in the positive (counterclockwise) direction, returns to P, then encircles the origin in the positive direction, and returns to P. The notation 1−, 0− indicates that the path of integration is in the clockwise direction, first around 1 and then around 0. The formula is proved by the same method as Hankel's formula. Notice that it is true for any complex x and y: both sides are entire functions of x and y.
2 Hypergeometric functions
2.1 Introduction
In this Lecture we give the definition and main properties of the Gauss (F = ₂F₁) hypergeometric function and briefly mention its generalizations: the generalized hypergeometric functions $_pF_q$ and the basic (or q-) hypergeometric functions $_p\phi_q$.
Almost all of the elementary functions of mathematics, and some not very elementary ones, like the error function erf(x) and the dilogarithm Li₂(x), are special cases of the hypergeometric functions, or can be expressed as ratios of hypergeometric functions.
We will first derive Euler's fractional integral representation for the Gauss hypergeometric function F, from which many identities and transformations will follow. Then we discuss the hypergeometric differential equation as the general linear second-order differential equation having three regular singular points. We derive contiguous relations satisfied by the function F. Finally, we explain Barnes' approach to the hypergeometric functions and the Barnes-Mellin contour integral representation for the function F.
2.2 Definition
Directly from the definition of a hypergeometric series $\sum c_n$, on factorizing the polynomials in n, we obtain
$$\frac{c_{n+1}}{c_n} = \frac{(n+a_1)(n+a_2)\cdots(n+a_p)\,x}{(n+b_1)(n+b_2)\cdots(n+b_q)(n+1)}.$$
Hence, we can get a more explicit definition.
Definition 2.1 The (generalized) hypergeometric series is defined by the following series representation:
$$_pF_q\!\left(\begin{matrix} a_1, \ldots, a_p \\ b_1, \ldots, b_q \end{matrix}; x\right) = \sum_{n=0}^{\infty}\frac{(a_1)_n\cdots(a_p)_n}{(b_1)_n\cdots(b_q)_n}\,\frac{x^n}{n!}.$$
Theorem 2.3 The series $_{q+1}F_q(a_1, \ldots, a_{q+1}; b_1, \ldots, b_q; x)$ with |x| = 1 converges absolutely if $\Re\left(\sum b_i - \sum a_i\right) > 0$. The series converges conditionally if $x = e^{i\theta} \ne 1$ and $0 \ge \Re\left(\sum b_i - \sum a_i\right) > -1$, and the series diverges if $\Re\left(\sum b_i - \sum a_i\right) \le -1$.
Proof. Notice that the shifted factorial can be expressed as a ratio of two gamma functions:
$$(x)_n = \frac{\Gamma(x+n)}{\Gamma(x)}.$$
Definition 2.4 The (Gauss) hypergeometric function $_2F_1(a, b; c; x)$ is defined by the series
$$\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n}\,\frac{x^n}{n!}.$$
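For |x| < 1 the series can be summed term by term using the coefficient ratio derived above. The helper `hyp2f1` below is an illustrative name of ours, not a library function; as a sanity check, log(1 + x) = x · ₂F₁(1, 1; 2; −x) is a classical special case:

```python
import math

def hyp2f1(a, b, c, x, terms=500):
    # Partial sum of the Gauss series sum_n (a)_n (b)_n / (c)_n x^n / n!,
    # built from the term ratio (n+a)(n+b) x / ((n+c)(n+1));
    # valid for |x| < 1 (or terminating when a or b is a negative integer).
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
        if term == 0.0:
            break
    return s

x = 0.5
assert abs(x * hyp2f1(1, 1, 2, -x) - math.log(1 + x)) < 1e-12
```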
Since for ℜb > 1, ℜ(c − b) > 1 and |x| < 1 the series
$$\sum_{n=0}^{\infty}\frac{(a)_n}{n!}\,U_n(t), \qquad U_n(t) = x^n\,t^{b+n-1}(1-t)^{c-b-1},$$
converges uniformly with respect to t ∈ [0, 1], we are able to interchange the order of integration and summation for these values of b, c and x.
Now, use the beta integral to prove the result for |x| < 1. Since the integral is analytic in the cut plane, the theorem holds for x in this region as well; we also apply analytic continuation with respect to b and c in order to arrive at the conditions announced in the formulation of the theorem. □

Hence we have obtained the analytic continuation of F, as a function of x, outside the unit disc, but only when ℜc > ℜb > 0. It is important to note that we view $_2F_1(a, b; c; x)$ as a function of four complex variables a, b, c, and x instead of just x. It is easy to see that $\frac{1}{\Gamma(c)}\,_2F_1(a, b; c; x)$ is an entire function of a, b, c if x is fixed and |x| < 1, for in this case the series converges uniformly in every compact domain of the a, b, c space.
Gauss evaluated the series at the point x = 1.
Proof. Let x → 1− in Euler's integral for $_2F_1$. Then when ℜc > ℜb > 0 and ℜ(c − a − b) > 0 we get
$$\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_0^1 t^{b-1}(1-t)^{c-a-b-1}\,dt = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}.$$
The condition ℜc > ℜb > 0 may be removed by continuation. □
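Gauss's evaluation ₂F₁(a, b; c; 1) = Γ(c)Γ(c−a−b)/(Γ(c−a)Γ(c−b)) can be probed by truncating the series at x = 1. Since the terms decay only like n^{a+b−c−1}, convergence is slow, so this sketch (with an illustrative helper of ours) uses a loose tolerance:

```python
import math

def hyp2f1_at_1(a, b, c, terms=4000):
    # Partial sum of 2F1(a, b; c; 1); converges when Re(c - a - b) > 0.
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1))
    return s

a, b, c = 0.3, 0.4, 2.5   # c - a - b = 1.8 > 0, so the series converges
gauss = math.gamma(c) * math.gamma(c - a - b) / (math.gamma(c - a) * math.gamma(c - b))
assert abs(hyp2f1_at_1(a, b, c) - gauss) < 1e-4
```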
This is a polynomial identity in a, b, c. Dougall (1907) took the view that both sides of this equation are polynomials of degree n in a. Therefore, the identity is true if both sides are equal for n + 1 distinct values of a. By the same method he proved a more general identity for
$$_7F_6\!\left(\begin{matrix} a,\; 1+\tfrac12 a,\; -b,\; -c,\; -d,\; -e,\; -n \\ \tfrac12 a,\; 1+a+b,\; 1+a+c,\; 1+a+d,\; 1+a+e,\; 1+a+n \end{matrix}; 1\right).$$
The contour starts and terminates at t = 0 and encircles the point t = 1 in the positive direction. The point 1/x should be outside the contour. The many-valued functions of the integrand assume their principal branches: arg(1 − xt) tends to zero when x → 0, and arg t, arg(t − 1) are zero at the point where the contour cuts the positive real axis (to the right of 1). Observe that no condition on c is needed, whereas in (2.1) we need ℜ(c − b) > 0. The proof of the above representation runs as for (2.1), with the help of the corresponding loop integral for the beta function.
An alternative representation involves a contour encircling the point 0:
$$_2F_1\!\left(\begin{matrix} a, b \\ c \end{matrix}; x\right) = \frac{\Gamma(c)\Gamma(1-b)}{2\pi i\,\Gamma(c-b)}\int_1^{(0+)}(-t)^{b-1}(1-t)^{c-b-1}(1-xt)^{-a}\,dt, \qquad \Re c > \Re b.$$
Using the double-loop (or Pochhammer's) contour integral one can derive the following representation:
$$\frac{1}{\Gamma(c)}\,_2F_1\!\left(\begin{matrix} a, b \\ c \end{matrix}; x\right) = -\frac{e^{-i\pi c}}{4\,\Gamma(b)\Gamma(c-b)\sin \pi b\,\sin \pi(c-b)}\int^{(1+,0+,1-,0-)} t^{b-1}(1-t)^{c-b-1}(1-xt)^{-a}\,dt.$$
Here we have the following conditions: |arg(1 − x)| < π, arg t = arg(1 − t) = 0 at the starting point P of the contour, and (1 − xt)^{−a} = 1 when x = 0. Note that there are no conditions on a, b, or c.
Hence
$$\vartheta(\vartheta + c - 1)\,_2F_1(a, b; c; x) = x\,(\vartheta + a)(\vartheta + b)\,_2F_1(a, b; c; x).$$
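Since ϑ x^n = n x^n, substituting the series into the ϑ-form and comparing coefficients of x^n gives the recurrence n(n + c − 1)c_n = (n + a − 1)(n + b − 1)c_{n−1} for c_n = (a)_n(b)_n/((c)_n n!). This can be verified in exact rational arithmetic (a small sketch of ours, with arbitrary rational parameters):

```python
from fractions import Fraction as F

a, b, c = F(1, 3), F(2, 5), F(7, 4)   # arbitrary rational parameters

def coeff(n):
    # c_n = (a)_n (b)_n / ((c)_n n!) built from the term ratio
    r = F(1)
    for k in range(n):
        r *= (a + k) * (b + k) / ((c + k) * (k + 1))
    return r

# Coefficient form of the theta-equation, checked exactly
for n in range(1, 30):
    assert n * (n + c - 1) * coeff(n) == (n + a - 1) * (n + b - 1) * coeff(n - 1)
```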
$$x^{1-c}F(a-c+1, b-c+1; 2-c; x).$$
When c = 1 it does not give a new solution but, in general, the second solution of (2.4) appears to be of the form
$$F(a, b; c; x) = A\,F(a, b; a+b-c+1; 1-x) + B\,(1-x)^{c-a-b}F(c-a, c-b; c-a-b+1; 1-x) \eqno(2.6)$$
$$= C\,(-x)^{-a}F(a, 1-c+a; 1-b+a; 1/x) + D\,(-x)^{-b}F(b, 1-c+b; 1-a+b; 1/x) \eqno(2.7)$$
$$= C\,(1-x)^{-a}F(a, c-b; a-b+1; 1/(1-x)) + D\,(1-x)^{-b}F(b, c-a; b-a+1; 1/(1-x)) \eqno(2.8)$$
$$= A\,x^{-a}F(a, a-c+1; a+b-c+1; 1-1/x) + B\,x^{a-c}(1-x)^{c-a-b}F(c-a, 1-a; c-a-b+1; 1-1/x). \eqno(2.9)$$
Here
$$A = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}, \qquad B = \frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)},$$
$$C = \frac{\Gamma(c)\Gamma(b-a)}{\Gamma(b)\Gamma(c-a)}, \qquad D = \frac{\Gamma(c)\Gamma(a-b)}{\Gamma(a)\Gamma(c-b)}.$$
Since Pfaff's formula (2.2) gives a continuation of $_2F_1$ from |x| < 1 to ℜx < 1/2, (2.6) gives the continuation to ℜx > 1/2 cut along the real axis from x = 1 to x = ∞. The cut comes from the branch points of the factor (1 − x)^{c−a−b}. Analogously, (2.7) holds when |arg(−x)| < π; (2.8) holds when |arg(1 − x)| < π; (2.9) holds when |arg(1 − x)| < π and |arg x| < π.
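The connection formula (2.6) can be tested at a point where both x and 1 − x lie inside the unit disc, so that plain series summation suffices on both sides (`hyp2f1` is our illustrative series helper, not a library function):

```python
import math

def hyp2f1(a, b, c, x, terms=2000):
    # Plain series summation of 2F1(a,b;c;x); valid for |x| < 1.
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return s

a, b, c, x = 0.2, 0.3, 1.1, 0.4   # both x = 0.4 and 1-x = 0.6 are in the disc
A = math.gamma(c) * math.gamma(c - a - b) / (math.gamma(c - a) * math.gamma(c - b))
B = math.gamma(c) * math.gamma(a + b - c) / (math.gamma(a) * math.gamma(b))
lhs = hyp2f1(a, b, c, x)
rhs = (A * hyp2f1(a, b, a + b - c + 1, 1 - x)
       + B * (1 - x) ** (c - a - b) * hyp2f1(c - a, c - b, c - a - b + 1, 1 - x))
assert abs(lhs - rhs) < 1e-8
```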
In fact, this equation is a generic equation that has only three regular singularities.
Theorem 2.9 Any homogeneous linear differential equation of the second order with at
most three singularities, which are regular singular points, can be transformed into the
hypergeometric differential equation (2.4).
Proof. Let us only sketch the proof. First we consider the equation
$$\frac{d^2 f}{dz^2} + p(z)\frac{df}{dz} + q(z)f = 0$$
and assume that it has only three finite regular singular points ξ, η and ζ, with exponents (α₁, α₂), (β₁, β₂) and (γ₁, γ₂). Then we find that such an equation can always be brought into the form
$$f'' + \left(\frac{1-\alpha_1-\alpha_2}{z-\xi} + \frac{1-\beta_1-\beta_2}{z-\eta} + \frac{1-\gamma_1-\gamma_2}{z-\zeta}\right)f' \eqno(2.10)$$
$$-\left(\frac{\alpha_1\alpha_2}{(z-\xi)(\eta-\zeta)} + \frac{\beta_1\beta_2}{(z-\eta)(\zeta-\xi)} + \frac{\gamma_1\gamma_2}{(z-\zeta)(\xi-\eta)}\right)\frac{(\xi-\eta)(\eta-\zeta)(\zeta-\xi)}{(z-\xi)(z-\eta)(z-\zeta)}\,f = 0.$$
Next we introduce the following fractional linear transformation:
$$x = \frac{(\zeta-\eta)(z-\xi)}{(\zeta-\xi)(z-\eta)}, \qquad F = x^{-\alpha_1}(1-x)^{-\gamma_1} f.$$
This transformation moves the singularities to 0, 1 and ∞. The exponents at these points are (0, α₂ − α₁), (0, γ₂ − γ₁) and (α₁ + β₁ + γ₁, α₁ + β₂ + γ₁), respectively. It is easy to check that we arrive at the hypergeometric differential equation (2.4) for the function F(x) with the following parameters:
$$a = \alpha_1 + \beta_1 + \gamma_1, \qquad b = \alpha_1 + \beta_2 + \gamma_1, \qquad c = 1 + \alpha_1 - \alpha_2. \qquad\Box$$
Equation (2.10) is called the Riemann-Papperitz equation.
This can be proved by Cauchy's residue theorem. Take a rectangular contour L with vertices c ± iR and c − (N + 1/2) ± iR, where N is a positive integer. The poles of Γ(s) inside this contour are at 0, −1, . . . , −N, and the residues are (−1)^j/j!. Now let R and N tend to ∞.
The Mellin transform of the hypergeometric function is
$$\int_0^\infty x^{s-1}\,_2F_1\!\left(\begin{matrix} a, b \\ c\end{matrix}; -x\right)dx = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\,\frac{\Gamma(s)\Gamma(a-s)\Gamma(b-s)}{\Gamma(c-s)}.$$
Theorem 2.10
$$\frac{\Gamma(a)\Gamma(b)}{\Gamma(c)}\,_2F_1\!\left(\begin{matrix} a, b \\ c\end{matrix}; x\right) = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty}\frac{\Gamma(a+s)\Gamma(b+s)\Gamma(-s)}{\Gamma(c+s)}\,(-x)^s\,ds,$$
where |arg(−x)| < π. The path of integration is curved, if necessary, to separate the poles s = −a − n, s = −b − n from the poles s = n, where n is an integer ≥ 0. (Such a contour can always be drawn if a and b are not negative integers.)
3 Orthogonal polynomials
3.1 Introduction
In this lecture we discuss general properties of orthogonal polynomials and the classical orthogonal polynomials, which turn out to be hypergeometric orthogonal polynomials. One way to link the hypergeometric function to orthogonal polynomials is through a formula of Jacobi. Multiply the hypergeometric equation by x^{c−1}(1 − x)^{a+b−c} and write it in the following form:
$$\frac{d}{dx}\left[x(1-x)\,x^{c-1}(1-x)^{a+b-c}\,y'\right] = ab\,x^{c-1}(1-x)^{a+b-c}\,y.$$
From
$$\frac{d}{dx}\,_2F_1\!\left(\begin{matrix} a, b \\ c\end{matrix}; x\right) = \frac{ab}{c}\,_2F_1\!\left(\begin{matrix} a+1, b+1 \\ c+1\end{matrix}; x\right),$$
by induction,
$$\frac{d}{dx}\left[x^k(1-x)^k M\,y^{(k)}\right] = (a+k-1)(b+k-1)\,x^{k-1}(1-x)^{k-1}\,M\,y^{(k-1)},$$
where M = x^{c−1}(1 − x)^{a+b−c}.
$$\frac{d^k}{dx^k}\left[x^k(1-x)^k M\,y^{(k)}\right] = (a)_k(b)_k\,M\,y.$$
Substitute
$$y^{(k)} = \frac{(a)_k(b)_k}{(c)_k}\,_2F_1\!\left(\begin{matrix} a+k, b+k \\ c+k\end{matrix}; x\right)$$
to get
$$\frac{d^k}{dx^k}\left[x^k(1-x)^k M\,_2F_1\!\left(\begin{matrix} a+k, b+k \\ c+k\end{matrix}; x\right)\right] = (c)_k\,M\,_2F_1\!\left(\begin{matrix} a, b \\ c\end{matrix}; x\right).$$
Put b = −n, k = n; then
$$_2F_1\!\left(\begin{matrix} -n, a \\ c\end{matrix}; x\right) = \frac{x^{1-c}(1-x)^{c+n-a}}{(c)_n}\,\frac{d^n}{dx^n}\left[x^{c+n-1}(1-x)^{a-c}\right].$$
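Jacobi's formula can be checked numerically by expanding the n-th derivative with the Leibniz rule; `jacobi_rhs`, `falling` and `hyp2f1_terminating` below are illustrative helper names of ours:

```python
import math

def falling(p, k):
    # p (p-1) ... (p-k+1); appears in d^k/dx^k of x^p and (1-x)^p
    r = 1.0
    for j in range(k):
        r *= p - j
    return r

def jacobi_rhs(n, a, c, x):
    # x^{1-c}(1-x)^{c+n-a}/(c)_n * d^n/dx^n [ x^{c+n-1} (1-x)^{a-c} ],
    # the derivative expanded by the Leibniz rule.
    d = 0.0
    for k in range(n + 1):
        d += (math.comb(n, k)
              * falling(c + n - 1, k) * x ** (c + n - 1 - k)
              * falling(a - c, n - k) * (-1) ** (n - k) * (1 - x) ** (a - c - (n - k)))
    poch_c = math.gamma(c + n) / math.gamma(c)
    return x ** (1 - c) * (1 - x) ** (c + n - a) / poch_c * d

def hyp2f1_terminating(n, a, c, x):
    # 2F1(-n, a; c; x): the series terminates after n + 1 terms
    s, term = 0.0, 1.0
    for m in range(n + 1):
        s += term
        term *= (-n + m) * (a + m) / ((c + m) * (m + 1)) * x
    return s

n, a, c, x = 3, 1.3, 2.1, 0.37
assert abs(jacobi_rhs(n, a, c, x) - hyp2f1_terminating(n, a, c, x)) < 1e-10
```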
Definition 3.2 We say that a sequence of polynomials $\{p_n(x)\}_0^\infty$, where p_n(x) has exact degree n, is orthogonal with respect to the weight function w(x) if
$$\int_a^b p_n(x)\,p_m(x)\,w(x)\,dx = h_n\,\delta_{mn}.$$
Theorem 3.3 A sequence of orthogonal polynomials {p_n(x)} satisfies the three-term recurrence relation
$$p_{n+1}(x) = (A_n x + B_n)\,p_n(x) - C_n\,p_{n-1}(x),$$
where we set p₋₁(x) = 0. Here A_n, B_n, and C_n are real constants, n = 0, 1, 2, . . ., and A_{n−1}A_nC_n > 0, n = 1, 2, . . .. If the highest coefficient of p_n(x) is k_n, then A_n = k_{n+1}/k_n and C_n = (A_n/A_{n−1})(h_n/h_{n−1}).
Corollary 3.5 $p'_{n+1}(x)\,p_n(x) - p_{n+1}(x)\,p'_n(x) > 0$ for all x.
Theorem 3.7 The zeros of p_n(x) and p_{n+1}(x) separate each other.
where x_j, j = 1, . . . , n, are the zeros of the polynomial p_n(x) from the set of polynomials orthogonal with respect to the weight function w(x), and the λ_j have the form
$$\lambda_j = \int_a^b \frac{p_n(x)\,w(x)}{p'_n(x_j)\,(x-x_j)}\,dx.$$
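For n = 2 with w(x) = 1 on [−1, 1], the orthogonal polynomial is p₂(x) = x² − 1/3, whose zeros ±1/√3 give the familiar two-point Gauss rule; the weight formula above yields λ₁ = λ₂ = 1. The rule is exact for polynomials of degree ≤ 2n − 1 = 3 (a small self-contained sketch):

```python
import math

# Two-point Gauss quadrature on [-1, 1] with w(x) = 1
nodes = (-1 / math.sqrt(3), 1 / math.sqrt(3))
weights = (1.0, 1.0)

def gauss2(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Exact on cubics: int_{-1}^{1} (x^3 + x^2 + 1) dx = 2/3 + 2 = 8/3
assert abs(gauss2(lambda x: x**3 + x**2 + 1) - 8/3) < 1e-12
# Not exact on quartics: int_{-1}^{1} x^4 dx = 2/5, but the rule gives 2/9
assert abs(gauss2(lambda x: x**4) - 2/9) < 1e-12
```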
$$A(x)\,y'' + B(x)\,y' + \lambda_n\,y = 0,$$
where X is a polynomial in x with coefficients not depending on n, and K_n does not depend on x.
As was said earlier, all classical orthogonal polynomials are in fact hypergeometric polynomials, in the sense that they can be expressed in terms of the hypergeometric function. With the Jacobi polynomials expressed by (3.2), all the others appear as either special cases or as limits, passing from the hypergeometric to the confluent hypergeometric function.
Gegenbauer polynomials:
Hermite polynomials:
$$H_{2n}(x) = \frac{(-1)^n (2n)!}{n!}\,_1F_1\!\left(-n; \tfrac12; x^2\right), \eqno(3.7)$$
$$H_{2n+1}(x) = \frac{(-1)^n (2n+1)!}{n!}\,2x\,_1F_1\!\left(-n; \tfrac32; x^2\right). \eqno(3.8)$$
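Formulas (3.7)-(3.8) can be verified against the standard three-term recurrence H_{n+1} = 2x H_n − 2n H_{n−1} (a classical fact, assumed here rather than taken from the text above); the Kummer series terminates since its first parameter is −n:

```python
import math

def hermite(n, x):
    # Three-term recurrence H_{n+1} = 2x H_n - 2n H_{n-1}, H_0 = 1, H_1 = 2x
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def hyp1f1(a, b, x, terms=60):
    # Kummer's series sum_m (a)_m / (b)_m x^m / m! (terminating for a = -n)
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= (a + m) / ((b + m) * (m + 1)) * x
    return s

n, x = 2, 0.8
even = (-1)**n * math.factorial(2 * n) / math.factorial(n) * hyp1f1(-n, 0.5, x * x)
odd = (-1)**n * math.factorial(2 * n + 1) / math.factorial(n) * 2 * x * hyp1f1(-n, 1.5, x * x)
assert abs(even - hermite(2 * n, x)) < 1e-9   # (3.7)
assert abs(odd - hermite(2 * n + 1, x)) < 1e-9  # (3.8)
```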
$$\sum_{n=0}^{\infty}\frac{H_n(x)H_n(y)}{2^n\,n!}\,r^n = (1-r^2)^{-1/2}\,e^{[2xyr-(x^2+y^2)r^2]/(1-r^2)}.$$
The following integral equation for |r| < 1 can be derived from the Poisson kernel by using orthogonality:
$$\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\frac{e^{[2xyr-(x^2+y^2)r^2]/(1-r^2)}}{\sqrt{1-r^2}}\,e^{-y^2}H_n(y)\,dy = H_n(x)\,r^n.$$
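Both sides of the Poisson (Mehler) kernel can be compared numerically; the terms of the series decay geometrically for |r| < 1, so a modest truncation suffices (`hermite` and `mehler` are helper names of ours):

```python
import math

def hermite(n, x):
    # Three-term recurrence H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def mehler(x, y, r, terms=120):
    # Partial sum of the Poisson kernel sum_n H_n(x) H_n(y) r^n / (2^n n!)
    return sum(hermite(n, x) * hermite(n, y) * r**n / (2**n * math.factorial(n))
               for n in range(terms))

x, y, r = 0.4, -0.9, 0.3
closed = (1 - r * r) ** -0.5 * math.exp((2 * x * y * r - (x * x + y * y) * r * r) / (1 - r * r))
assert abs(mehler(x, y, r) - closed) < 1e-10
```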
where L is some contour in the complex k-plane, and the function ρ(k) ('spectral data') can be expressed in terms of certain integral transforms of q₁(x) and q₂(t), in order to satisfy the initial data.
This is just a simple demonstration of the method of Separation of Variables, also called the Ehrenpreis principle when applied to this kind of problem. It is interesting to note that all solutions of (4.1) can be given by (4.2) with the appropriate choice of the contour L and the function ρ(k).
$$H\psi(x_1, x_2) = h\,\psi(x_1, x_2).$$
This problem can be solved by a straightforward application of the method of SoV, without any intermediate transformations. We get
$$H_1\psi = \left(-\frac{\partial^2}{\partial x_1^2} + x_1^2\right)\psi(x_1, x_2) = h_1\,\psi(x_1, x_2),$$
$$H_2\psi = \left(-\frac{\partial^2}{\partial x_2^2} + x_2^2\right)\psi(x_1, x_2) = h_2\,\psi(x_1, x_2),$$
$$h_1 + h_2 = h.$$
Then
$$\psi(x_1, x_2) = \psi(x_1)\,\psi(x_2),$$
where
$$\left(-\frac{\partial^2}{\partial x_i^2} + x_i^2\right)\psi(x_i) = h_i\,\psi(x_i).$$
The square-integrable solutions are expressed in terms of the Hermite polynomials:
$$\psi(x_i) \in L^2(\mathbb{R}) \iff \psi(x_i) = e^{-x_i^2/2}H_{n_i}(x_i), \qquad h_i = 2n_i + 1, \quad n_i = 0, 1, 2, \ldots.$$
Notice that
$$[H_1, H_2] = 0.$$
Hence, we get the basis in L²(ℝ²) of the form
$$\psi_{n_1 n_2}(x_1, x_2) = e^{-(x_1^2+x_2^2)/2}\,H_{n_1}(x_1)\,H_{n_2}(x_2), \qquad h = 2(n_1+n_2) + 2.$$
The functions $\{\psi_{n_1 n_2}\}$ constitute an orthogonal set of functions in ℝ²:
$$\int_{\mathbb{R}^2} \psi_{n_1 n_2}(x_1, x_2)\,\psi_{m_1 m_2}(x_1, x_2)\,dx_1\,dx_2 = a_{n_1 n_2}\,\delta_{n_1 m_1}\delta_{n_2 m_2}.$$
Every function f ∈ L²(ℝ²) can be decomposed into a series with respect to these basis functions:
$$f(x_1, x_2) = \sum_{m,n=0}^{\infty} f_{mn}\,\psi_{mn}(x_1, x_2).$$
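A finite-difference sketch (central second difference; the tolerance is loose because of discretization error) confirms that ψ_n = e^{−x²/2}H_n(x) is an eigenfunction of −d²/dx² + x² with eigenvalue 2n + 1:

```python
import math

def hermite(n, x):
    # Three-term recurrence H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def psi(n, x):
    return math.exp(-x * x / 2) * hermite(n, x)

# Check (-d^2/dx^2 + x^2) psi_n = (2n + 1) psi_n via a central difference
n, x, h = 3, 0.7, 1e-4
second = (psi(n, x + h) - 2 * psi(n, x) + psi(n, x - h)) / (h * h)
lhs = -second + x * x * psi(n, x)
assert abs(lhs - (2 * n + 1) * psi(n, x)) < 1e-4
```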
where u, a, b ∈ ℝ are some parameters, and u₁, u₂ are zeros of Θ(u). Taking the residues at u = a and u = b on both sides of (4.3), we have
Theorem 4.3 The functions $\psi_{\vec\lambda}(x_1, x_2)$ are eigenfunctions of the operator H iff the {λ_i} satisfy the following algebraic equations:
$$\sum_{j\ne i}\frac{1}{\lambda_i-\lambda_j} - \frac12 + \frac{k_1/2+\frac14}{\lambda_i-a} + \frac{k_2/2+\frac14}{\lambda_i-b} = 0, \qquad i = 1, \ldots, n. \eqno(4.7)$$
The parameters λ_i have the following properties (generalized Stieltjes theorem): (i) they are simple; (ii) they lie on the real axis inside the intervals (4.6); (iii) they are the critical points of the function |G|,
$$G(\lambda_1, \ldots, \lambda_n) = \exp\!\left(-\frac12(\lambda_1+\cdots+\lambda_n)\right)\prod_{p=1}^{n}(\lambda_p-a)^{k_1/2+1/4}(\lambda_p-b)^{k_2/2+1/4}\prod_{r>p}(\lambda_r-\lambda_p). \eqno(4.8)$$
This Theorem gives another basis for the oscillator problem. In this case we have two commuting operators G₁ and G₂:
$$G_1 = -\frac{\partial^2}{\partial x_1^2} + x_1^2 + \frac{1}{a-b}\left(x_1\frac{\partial}{\partial x_2} - x_2\frac{\partial}{\partial x_1}\right)^{2},$$
$$G_2 = -\frac{\partial^2}{\partial x_2^2} + x_2^2 - \frac{1}{a-b}\left(x_1\frac{\partial}{\partial x_2} - x_2\frac{\partial}{\partial x_1}\right)^{2},$$
$$[G_1, G_2] = 0,$$
which are diagonal on this basis. Notice that
$$H = G_1 + G_2.$$
where the φ_i(y_i) are some functions of one variable and the $D^{(i)}_{y_i}$ are some ordinary differential operators. This could be done by a change of variables (coordinate transform) {x} ↦ {y}, but it could also involve an integral transform. Then we can introduce another operator $G_{y_1,y_2}$ such that
$$D^{(1)}_{y_1} - \varphi_1(y_1)\,D_{y_1,y_2} = G_{y_1,y_2} = D^{(2)}_{y_2} - \varphi_2(y_2)\,D_{y_1,y_2}.$$
Notice that D and G commute:
$$[D, G] = 0.$$
The operator G that appeared in the procedure of SoV is called the operator of a constant of separation.
The above definition extends easily to the case of more variables. The essential step is to keep bringing the operator into the 'separable form' (5.1), which allows one to introduce more and more operators of constants of separation. If one can break the operator down to a set of one-variable operators, then the separation of variables is done. This obviously requires that the number of operators G equal the number of variables minus 1, and also that they commute between themselves and with the operator D. The latter condition defines an integrable system. So we can say that a necessary condition for an operator to be separable is that it can be supplemented by a full set of mutually commuting operators (G); in other words, the operator has to belong to an integrable family of operators.
As we have seen in the previous Lecture, special functions of one variable often appear when one separates variables in linear PDEs in an attempt to find a general solution in terms of a large set of factorized partial (or separated) solutions. Usually the completeness of the set of separated solutions can also be proved, so that we can indeed expand any solution of our equation, which is a multi-variable 'special function', into a basis of separated (one-variable) special functions.
There are two aspects of this procedure. First, the separated functions of one variable will satisfy ODEs, so that we can, in principle, 'classify' the initial multi-variable special function by the procedure of separation of variables and by the type of the resulting ODEs. It is clear that, once some regularity conditions are imposed on the class of transformations allowed in a SoV, one should expect a good correspondence between the complexity of the multi-variable function and that of any of the corresponding one-variable special functions.
In the example of the isotropic harmonic oscillator, we first had a trivial separation of variables (in Cartesian coordinates), which gave us a basis as a product of Hermite polynomials. Hence we might conclude that the operator H is, in a sense, a two-dimensional analogue of the hypergeometric differential operator, because one of its separated bases is given in terms of hypergeometric functions (Hermite polynomials).
Curiously, the second separation, in elliptic coordinates, led to functions of the Heun type, which is beyond the hypergeometric class. The explanation of this seeming contradiction is that the operator H is 'degenerate' in the sense that it separates in many coordinate systems. To avoid this degeneracy, one could perturb the operator by adding terms that break one separation but still allow the other.
Generically, therefore, if an operator can be separated at all, it usually separates by a unique transformation, leading to a unique set of separated special functions of one variable.
The second aspect of the problem is understanding the sufficient conditions for separability. Which integrable operators can be separated, and which cannot? A closely related question is: what class of transformations should be allowed when trying to separate an operator?
In order to demonstrate the point about the class of transformations, take the square of the Laplace operator:
$$\left(\frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2}\right)^{2}.$$
Of course, this operator is integrable, although there is a Theorem saying that one can not
$$q_{ij} = q_i - q_j, \qquad \partial_i = \frac{\partial}{\partial q_i},$$
or, by the equivalent set, acting on Laurent polynomials in the variables $t_j = e^{2iq_j}$, j = 1, 2, 3:
$$\widetilde H_1 = -i(\partial_1+\partial_2+\partial_3),$$
$$\widetilde H_2 = -(\partial_1\partial_2+\partial_1\partial_3+\partial_2\partial_3) - g\left[\cot q_{12}(\partial_1-\partial_2) + \cot q_{13}(\partial_1-\partial_3) + \cot q_{23}(\partial_2-\partial_3)\right] - 4g^2,$$
$$\widetilde H_3 = i\,\partial_1\partial_2\partial_3 - ig\left[\cot q_{12}(\partial_1-\partial_2)\partial_3 + \cot q_{13}(\partial_1-\partial_3)\partial_2 + \cot q_{23}(\partial_2-\partial_3)\partial_1\right]$$
$$+\,2ig^2\left[(1+\cot q_{12}\cot q_{13})\partial_1 + (1-\cot q_{12}\cot q_{23})\partial_2 + (1+\cot q_{13}\cot q_{23})\partial_3\right],$$
the vacuum function being
$$\Omega(\vec q\,) = |\sin q_{12}\,\sin q_{13}\,\sin q_{23}|^{g}. \eqno(5.2)$$
where κ is a normalization coefficient to be fixed later. It is assumed in (5.6) and (5.7) that y₋ < x₋ = ξ < y₊ = x₊. The integral converges when g > 0, which will always be assumed henceforth.
The motivation for this choice of K takes its origin from considering the problem in the classical limit (g → ∞), where there exists an effective prescription for constructing a separation of variables for an integrable system from the poles of the so-called Baker-Akhiezer function.
Theorem 5.1 Let $H_k\Psi_{n_1n_2n_3} = h_k\Psi_{n_1n_2n_3}$. Then the function $\widetilde\Psi_{\vec n} = K\Psi_{\vec n}$ satisfies the differential equations
$$Q\widetilde\Psi_{\vec n} = 0, \qquad Y_j\widetilde\Psi_{\vec n} = 0, \quad j = 1, 2, \eqno(5.8)$$
where
$$Q = -i\partial_Q - h_1, \eqno(5.9)$$
$$Y_j = i\partial_{y_j}^3 + h_1\partial_{y_j}^2 - i\left(h_2 + \frac{3g(g-1)}{\sin^2 y_j}\right)\partial_{y_j} - \left(h_3 + h_1\frac{g(g-1)}{\sin^2 y_j} + 2ig(g-1)(g-2)\frac{\cos y_j}{\sin^3 y_j}\right). \eqno(5.10)$$
$$[-i\partial_Q - H_1^*]\,K = 0,$$
$$\left[i\partial_{y_j}^3 + H_1^*\partial_{y_j}^2 - i\left(\frac{3g(g-1)}{\sin^2 y_j} + H_2^*\right)\partial_{y_j} - \left(H_3^* + H_1^*\frac{g(g-1)}{\sin^2 y_j} + 2ig(g-1)(g-2)\frac{\cos y_j}{\sin^3 y_j}\right)\right]K = 0,$$
where $H_n^*$ is the Lagrange adjoint of $H_n$:
$$\int \varphi(q)\,(H\psi)(q)\,dq = \int (H^*\varphi)(q)\,\psi(q)\,dq.$$
The coefficients $c_k(\vec n; g)$ are rational functions of k, n_j and g. Moreover, $\varphi_{\vec n}(y)$ can be expressed explicitly in terms of the hypergeometric function ₃F₂ as
$$\varphi_{\vec n}(y) = t^{n_1}(1-t)^{1-3g}\,_3F_2\!\left(\begin{matrix} a_1, a_2, a_3 \\ b_1, b_2\end{matrix}; t\right), \eqno(5.14)$$
where
$$a_j = n_1 - n_{4-j} + 1 - (4-j)g, \qquad b_j = a_j + g. \eqno(5.15)$$
Note that, by virtue of Theorem 5.1, the function $\widetilde\Psi_{\vec n}(y_1, y_2; Q)$ satisfies an ordinary differential equation in each variable. Since Qf = 0 is a first-order differential equation having a unique, up to a constant factor, solution $f(Q) = e^{ih_1 Q}$, the dependence on Q is factorized. However, the differential equations $Y_j\psi(y_j) = 0$ are of third order and have three linearly independent solutions. To prove Theorem 5.3 one thus needs to study the ordinary differential equation
$$\left[i\partial_y^3 + h_1\partial_y^2 - i\left(h_2 + \frac{3g(g-1)}{\sin^2 y}\right)\partial_y - \left(h_3 + h_1\frac{g(g-1)}{\sin^2 y} + 2ig(g-1)(g-2)\frac{\cos y}{\sin^3 y}\right)\right]\psi = 0. \eqno(5.16)$$
The change of variable $t = e^{2iy}$ brings the last equation to the Fuchsian form
$$\left[\partial_t^3 + w_1\partial_t^2 + w_2\partial_t + w_3\right]\varphi = 0, \eqno(5.22)$$
where
$$w_1 = -\frac{3(g-1) + \frac12 h_1}{t} + \frac{6g}{t-1},$$
$$w_2 = \frac{(3g^2-3g+1) + \frac12(2g-1)h_1 + \frac14 h_2}{t^2} + \frac{3g(3g-1)}{(t-1)^2} - \frac{g\left(9(g-1)+2h_1\right)}{t(t-1)},$$
$$w_3 = -\frac{g^3 + \frac12 g^2 h_1 + \frac14 g h_2 + \frac18 h_3}{t^3} + \frac{\frac12 g\left((h_2+4g^2)(t-1) - (3g-1)h_1\right)}{t^2(t-1)^2}.$$
The equation (5.22) is reduced by the substitution $\varphi(t) = t^{n_1}(1-t)^{1-3g}f(t)$ to the standard ₃F₂ hypergeometric form with
$$a_1 = n_1-n_3+1-3g, \qquad a_2 = n_1-n_2+1-2g, \qquad a_3 = 1-g,$$
$$b_1 = n_1-n_3+1-2g, \qquad b_2 = n_1-n_2+1-g.$$
Proposition 5.5 Let the parameters $h_k$ be given by (5.4), (5.5) for a triplet of integers $\{n_1 \le n_2 \le n_3\}$ and $g \ne 1, 0, -1, -2, \ldots$. Then the equation (5.22) has a unique, up to a constant factor, Laurent-polynomial solution
$$\varphi(t) = \sum_{k=n_1}^{n_3} c_k(\vec n; g)\,t^k. \eqno(5.24)$$
This expression is called the operator of fractional differentiation. We will not give the details of this calculation, but only the final result. The formula for $M^{-1}: \widetilde J \mapsto J$ is
$$J(x_+, x_-; Q) = \int_{x_-}^{x_+} dy_-\;\check M(x_+, x_-; y_-)\,\widetilde J(x_+, y_-; Q), \eqno(5.28)$$
$$\check M = \check\kappa\,\frac{\left[\sin y_-\,\sin\frac{x_++y_-}{2}\,\sin\frac{x_+-y_-}{2}\right]^{3g-1}}{\left[\sin\frac{y_-+x_-}{2}\,\sin\frac{y_--x_-}{2}\right]^{g+1}\left[\sin x_1\,\sin x_2\right]^{2g-1}}, \eqno(5.29)$$
where
$$\check\kappa = \frac{\Gamma(2g)}{2\,\Gamma(-g)\,\Gamma(3g)}. \eqno(5.30)$$
The operators M (and M⁻¹) are normalized by M : 1 ↦ 1.
For the kernel of K⁻¹ we have, respectively,
$$\check K = \check\kappa\,\frac{\sin^g x_-\left[\sin y_-\,\sin\frac{x_++y_-}{2}\,\sin\frac{x_+-y_-}{2}\right]^{g-1}}{\left[\sin\frac{y_-+x_-}{2}\,\sin\frac{y_--x_-}{2}\right]^{g+1}\left[\sin x_1\,\sin x_2\right]^{g-1}}. \eqno(5.31)$$
The formulas (5.25), (5.28), (5.29) provide a new integral representation for the Jack polynomial $J_{\vec n}$ of three variables in terms of the ₃F₂ hypergeometric polynomials $\varphi_{\vec n}(y)$. It is remarkable that for positive integer g the operators K⁻¹, M⁻¹ become differential operators of order g. In particular, for g = 1 we have $K^{-1} = \partial/\partial y_-$.