Special Functions & Symmetries Course

This document provides an outline for a course on special functions and their symmetries. It will focus on hypergeometric functions, which include the gamma and beta functions. Hypergeometric functions can be defined through series representations involving rational terms or as solutions to differential equations with at most three singularities. They have integral and series representations and satisfy many relationships that can transform one into another. The course will also discuss orthogonal polynomials, separation of variables, integrable systems, and relationships between special functions and physics.

PG course on

SPECIAL FUNCTIONS AND THEIR SYMMETRIES


Vadim KUZNETSOV
22nd May 2003

Contents
1 Gamma and Beta functions
  1.1 Introduction
  1.2 Gamma function
  1.3 Beta function
  1.4 Other beta integrals
      1.4.1 Second beta integral
      1.4.2 Third (Cauchy’s) beta integral
      1.4.3 A complex contour for the beta integral
      1.4.4 The Euler reflection formula
      1.4.5 Double-contour integral

2 Hypergeometric functions
  2.1 Introduction
  2.2 Definition
  2.3 Euler’s integral representation
  2.4 Two functional relations
  2.5 Contour integral representations
  2.6 The hypergeometric differential equation
  2.7 The Riemann-Papperitz equation
  2.8 Barnes’ contour integral for F(a, b; c; x)

3 Orthogonal polynomials
  3.1 Introduction
  3.2 General orthogonal polynomials
  3.3 Zeros of orthogonal polynomials
  3.4 Gauss quadrature
  3.5 Classical orthogonal polynomials
  3.6 Hermite polynomials

4 Separation of variables and special functions
  4.1 Introduction
  4.2 SoV for the heat equation
  4.3 SoV for a quantum problem
  4.4 SoV and integrability
  4.5 Another SoV for the quantum problem

5 Integrable systems and special functions
  5.1 Introduction
  5.2 Calogero-Sutherland system
  5.3 Integral transform
  5.4 Separated equation
  5.5 Integral representation for Jack polynomials

1 Gamma and Beta functions


1.1 Introduction
This course is about special functions and their properties. Many known functions could be called special. They certainly include elementary functions like the exponential and, more generally, the trigonometric and hyperbolic functions and their inverses, logarithmic functions and polylogarithms, but the class also extends to transcendental functions like the Lamé and Mathieu functions. Usually one deals first with a special function of one variable before studying its multi-variable generalisation, which is not unique and which opens up a link with the theory of integrable systems.
We will restrict ourselves to hypergeometric functions, which are usually defined by series representations.

Definition 1.1 A hypergeometric series is a series $\sum_{n=0}^{\infty} a_n$ with $a_{n+1}/a_n$ a rational function of $n$.
Bessel, Legendre and Jacobi functions, parabolic cylinder functions, the 3j- and 6j-symbols arising in quantum mechanics, and many more classical special functions are special cases of the hypergeometric functions. However, the transcendental functions “lying in the land beyond Bessel” are outside the hypergeometric class and, thereby, will not be considered in this course.
Euler, Pfaff and Gauss first introduced and studied hypergeometric series, paying special attention to the cases when a series can be summed into an elementary function. This gives one of the motivations for studying hypergeometric series: the elementary functions and several other important functions in mathematics can be expressed in terms of hypergeometric functions.
Hypergeometric functions can also be described as solutions of special differential equations, the hypergeometric differential equations. Riemann was the first to exploit this idea, and he introduced a special symbol to classify hypergeometric functions by the singularities and exponents of the differential equations they satisfy. In this way we arrive at an alternative definition of a hypergeometric function.

Definition 1.2 A hypergeometric function is a solution of a Fuchsian differential equation which has at most three regular singularities.

Notice that transcendental special functions of the Heun class, the so-called Heun functions which are “beyond Bessel”, are defined as special solutions of a generic linear second-order Fuchsian differential equation with four regular singularities.
Of course, when we speak of 3 or 4 regular singularities, the actual number of singularities may be smaller: the equation may trivialise, or two regular singularities may merge into an irregular singular point, leading to the corresponding confluent cases of hypergeometric or Heun functions. Thus Bessel and parabolic cylinder functions are special cases of the confluent and double-confluent hypergeometric functions, while Lamé and Mathieu functions are special cases of the Heun and confluent Heun functions, respectively. As the maximal allowed number of singularities of the differential equation grows, the associated special function becomes more transcendental. A short introduction to the theory of Fuchsian equations with n regular singularities will be given later in the course.
In the first decade of the XXth century E.W. Barnes introduced yet another approach to hypergeometric functions, based on contour integral representations. Such representations are important because they can be used to derive many relations between hypergeometric functions and to study their asymptotics.
The whole class of hypergeometric functions is very distinguished compared to other special functions, because only for this class does one have explicit series and integral representations, contiguous and connection relations, summation and transformation formulas, and many other beautiful equations relating one hypergeometric function to another. This is a class of functions for which one can probably say that any meaningful formula can be written explicitly, though this does not mean that it is always easy to find. For that reason, too, this is the class of functions to start from and to put at the basis of an introductory course on special functions.
Special functions, and hypergeometric functions in particular, owe their many applications to their explicit, constructive nature. Summation formulas find their way into combinatorics; classical orthogonal polynomials give explicit bases in several important Hilbert spaces and lead to constructive harmonic analysis with applications in quantum physics and chemistry; q-hypergeometric series are related to elliptic and theta functions and therefore find application in the integration of systems of non-linear differential equations and in some areas of numerical analysis and discrete mathematics.
For this part of the course the main reference is the recent book by G.E. Andrews,
R. Askey and R. Roy “Special Functions”, Encyclopedia of Mathematics and its Applications
71, Cambridge University Press, 1999. The book by N.M. Temme “Special functions: an
introduction to the classical functions of mathematical physics”, John Wiley & Sons, Inc.,
1996, is also recommended as well as the classical reference: E.T. Whittaker and G.N. Watson
“A course of modern analysis”, Cambridge University Press, 1927.

1.2 Gamma function


The Gamma function Γ(x) was discovered by Euler in the late 1720s in an attempt to find
an analytical continuation of the factorial function. This function is a cornerstone of the
theory of special functions.

Thus Γ(x) is a meromorphic function equal to (x − 1)! when x is a positive integer. Euler
found its representation as an infinite integral and as a limit of a finite product. Let us
derive the latter representation following Euler’s generalization of the factorial.

Figure 1: The graph of the absolute value of the Gamma function of a complex variable z = x + iy, produced with the MuPAD computer algebra system. Poles are visible at z = 0, −1, −2, −3, −4, . . ..

Let x and n be nonnegative integers. For any a ∈ C define the shifted factorial (a)_n by

$$(a)_n = a(a+1)\cdots(a+n-1)\ \text{ for } n > 0, \qquad (a)_0 = 1. \tag{1.1}$$

Then, obviously,

$$x! = \frac{(x+n)!}{(x+1)_n} = \frac{n!\,(n+1)_x}{(x+1)_n} = \frac{n!\,n^x}{(x+1)_n}\cdot\frac{(n+1)_x}{n^x}. \tag{1.2}$$

Since

$$\lim_{n\to\infty}\frac{(n+1)_x}{n^x} = 1, \tag{1.3}$$

we conclude that

$$x! = \lim_{n\to\infty}\frac{n!\,n^x}{(x+1)_n}. \tag{1.4}$$

The limit exists for all x ∈ C such that x ≠ −1, −2, −3, . . ., because

$$\frac{n!\,n^x}{(x+1)_n} = \left(\frac{n}{n+1}\right)^{x}\prod_{j=1}^{n}\left(1+\frac{x}{j}\right)^{-1}\left(1+\frac{1}{j}\right)^{x} \tag{1.5}$$

and

$$\left(1+\frac{x}{j}\right)^{-1}\left(1+\frac{1}{j}\right)^{x} = 1+\frac{x(x-1)}{2j^2}+O\!\left(\frac{1}{j^3}\right). \tag{1.6}$$

Definition 1.3 For all x ∈ C, x ≠ 0, −1, −2, . . ., the gamma function Γ(x) is defined by

$$\Gamma(x) = \lim_{k\to\infty}\frac{k!\,k^{x-1}}{(x)_k}. \tag{1.7}$$

Three immediate consequences are

$$\Gamma(1) = 1, \qquad \Gamma(x+1) = x\,\Gamma(x), \qquad \Gamma(n+1) = n!. \tag{1.8}$$

From the definition it follows that the gamma function has poles at zero and the negative
integers, but 1/Γ(x) is an entire function with zeros at these points. Every entire function
has a product representation.
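Definition 1.3 lends itself to a direct numerical check: truncating the limit at a finite k reproduces Γ(x) with an error of order 1/k. A minimal Python sketch (the helper name `gamma_limit` and the choice k = 100000 are ours):

```python
import math

def gamma_limit(x, k=100000):
    """Approximate Gamma(x) by the truncated limit k! k^(x-1) / (x)_k of Definition 1.3.

    Works in logarithms to avoid overflow; this simple version assumes x > 0.
    """
    log_num = math.lgamma(k + 1) + (x - 1) * math.log(k)   # log(k! * k^(x-1))
    log_poch = sum(math.log(x + j) for j in range(k))      # log((x)_k)
    return math.exp(log_num - log_poch)

print(gamma_limit(0.5), math.gamma(0.5))   # both close to sqrt(pi) = 1.77245...
print(gamma_limit(3.7), math.gamma(3.7))
```

The agreement improves roughly linearly as k grows, in line with the O(1/j²) tail estimate (1.6).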

Theorem 1.4

$$\frac{1}{\Gamma(x)} = x\,e^{\gamma x}\prod_{n=1}^{\infty}\left\{\left(1+\frac{x}{n}\right)e^{-x/n}\right\}, \tag{1.9}$$

where γ is Euler’s constant given by

$$\gamma = \lim_{n\to\infty}\left(\sum_{k=1}^{n}\frac{1}{k} - \log n\right). \tag{1.10}$$

Proof.

$$\begin{aligned}
\frac{1}{\Gamma(x)} &= \lim_{n\to\infty}\frac{x(x+1)\cdots(x+n-1)}{n!\,n^{x-1}}\\
&= \lim_{n\to\infty}\, x\left(1+\frac{x}{1}\right)\left(1+\frac{x}{2}\right)\cdots\left(1+\frac{x}{n}\right)e^{-x\log n}\\
&= \lim_{n\to\infty}\left\{x\,e^{x\left(1+\frac12+\cdots+\frac1n-\log n\right)}\prod_{k=1}^{n}\left(1+\frac{x}{k}\right)e^{-x/k}\right\}\\
&= x\,e^{\gamma x}\prod_{n=1}^{\infty}\left\{\left(1+\frac{x}{n}\right)e^{-x/n}\right\}.
\end{aligned}$$

The infinite product in (1.9) converges because

$$\left(1+\frac{x}{n}\right)e^{-x/n} = \left(1+\frac{x}{n}\right)\left(1-\frac{x}{n}+\frac{x^2}{2n^2}-\cdots\right) = 1-\frac{x^2}{2n^2}+O\!\left(\frac{1}{n^3}\right). \tag{1.11}$$
□

1.3 Beta function

Definition 1.5 The beta integral is defined for ℜx > 0, ℜy > 0 by

$$B(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt. \tag{1.12}$$

One may also speak of the beta function B(x, y), which is obtained from the integral by analytic continuation.
PG course onSPECIAL FUNCTIONS AND THEIR SYMMETRIES 6

The beta function can be expressed in terms of gamma functions.

Theorem 1.6

$$B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}. \tag{1.13}$$

Proof. From the definition of the beta integral we have the following contiguous relation between three functions:

$$B(x,y+1) = B(x,y) - B(x+1,y), \qquad \Re x > 0,\ \Re y > 0. \tag{1.14}$$

However, integration by parts of the integral on the left-hand side gives

$$B(x,y+1) = \frac{y}{x}\,B(x+1,y). \tag{1.15}$$

Combining the last two we get the functional equation

$$B(x,y) = \frac{x+y}{y}\,B(x,y+1). \tag{1.16}$$

Iterating this equation we obtain

$$B(x,y) = \frac{(x+y)_n}{(y)_n}\,B(x,y+n). \tag{1.17}$$

Rewrite this relation as

$$B(x,y) = \frac{(x+y)_n}{n!\,n^{x+y-1}}\cdot\frac{n!\,n^{y-1}}{(y)_n}\int_0^n t^{x-1}\left(1-\frac{t}{n}\right)^{n+y-1} dt. \tag{1.18}$$

As n → ∞, we have

$$B(x,y) = \frac{\Gamma(y)}{\Gamma(x+y)}\int_0^{\infty} t^{x-1}e^{-t}\,dt. \tag{1.19}$$

Set y = 1 to arrive at

$$\frac{1}{x} = \int_0^1 t^{x-1}\,dt = B(x,1) = \frac{\Gamma(1)}{\Gamma(x+1)}\int_0^{\infty} t^{x-1}e^{-t}\,dt. \tag{1.20}$$

Hence

$$\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt, \qquad \Re x > 0. \tag{1.21}$$

This is the integral representation for the gamma function, which appears here as a byproduct. Now use it to prove the theorem for ℜx > 0 and ℜy > 0, and then use the standard argument of analytic continuation to finish the proof. □
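Theorem 1.6 can be corroborated numerically by comparing a midpoint-rule quadrature of (1.12) with the gamma-function ratio (1.13). A sketch (helper names and parameter values are ours; x, y > 1 keep the integrand bounded):

```python
import math

def beta_quad(x, y, n=20000):
    """Midpoint-rule approximation of B(x,y) = int_0^1 t^(x-1) (1-t)^(y-1) dt."""
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (x - 1) * (1.0 - (i + 0.5) * h) ** (y - 1)
                   for i in range(n))

def beta_gamma(x, y):
    """The right-hand side of (1.13)."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

print(beta_quad(2.5, 3.0), beta_gamma(2.5, 3.0))   # the two values agree closely
```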
An important corollary is an integral representation for the gamma function, which may be taken as its definition for ℜx > 0.

Corollary 1.7 For ℜx > 0

$$\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt. \tag{1.22}$$

Use it to represent explicitly the poles and the analytic continuation of Γ(x):

$$\Gamma(x) = \int_0^1 t^{x-1}e^{-t}\,dt + \int_1^{\infty} t^{x-1}e^{-t}\,dt
= \sum_{n=0}^{\infty}\frac{(-1)^n}{(n+x)\,n!} + \int_1^{\infty} t^{x-1}e^{-t}\,dt.$$

The integral is an entire function, and the sum gives the poles at x = −n, n = 0, 1, 2, . . ., with residues equal to (−1)ⁿ/n!.
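The splitting above is easy to test numerically, and it produces correct values of Γ(x) even at points to the left of the origin, such as x = −1/2; a sketch (truncation parameters are our choices):

```python
import math

def gamma_split(x, terms=40, n=20000, cutoff=40.0):
    """Gamma(x) as sum_{k>=0} (-1)^k / ((k+x) k!)  plus the entire integral over [1, inf)."""
    pole_series = sum((-1) ** k / ((k + x) * math.factorial(k)) for k in range(terms))
    h = (cutoff - 1.0) / n                 # truncate the rapidly decaying integral at t = cutoff
    integral = h * sum((1.0 + (i + 0.5) * h) ** (x - 1) * math.exp(-(1.0 + (i + 0.5) * h))
                       for i in range(n))
    return pole_series + integral

print(gamma_split(0.5), math.gamma(0.5))     # ~ 1.7725
print(gamma_split(-0.5), math.gamma(-0.5))   # ~ -3.5449
```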
Several other useful forms of the beta integral can be derived by a change of variables. For example, take t = sin²θ in (1.12) to get

$$\int_0^{\pi/2}\sin^{2x-1}\theta\,\cos^{2y-1}\theta\,d\theta = \frac{\Gamma(x)\Gamma(y)}{2\,\Gamma(x+y)}.$$

Put x = y = 1/2. The result is

$$\Gamma(1/2) = \sqrt{\pi}.$$

The substitution t = (u − a)/(b − a) gives

$$\int_a^b (b-u)^{x-1}(u-a)^{y-1}\,du = (b-a)^{x+y-1}\,\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)},$$

which can be rewritten in the alternative form

$$\int_a^b \frac{(b-u)^{x-1}(u-a)^{y-1}}{\Gamma(x)\,\Gamma(y)}\,du = \frac{(b-a)^{x+y-1}}{\Gamma(x+y)}.$$

1.4 Other beta integrals

There are several kinds of integral representations for the beta function. All of them can be brought to the form

$$\int_C [\ell_1(t)]^p\,[\ell_2(t)]^q\,dt,$$

where ℓ₁(t) and ℓ₂(t) are linear functions of t, and C is an appropriate curve. The representation (1.12) is called Euler’s first beta integral. For it, the curve consists of a line segment connecting the two zeros of the ℓ-functions. We now introduce four more beta integrals. For the second beta integral, the curve is a half line joining one zero with infinity, while the other zero is not on this line. For the third (Cauchy’s) beta integral, it is a line with the zeros on opposite sides. For the last two beta integrals, the curve is a complex contour. In the first case, it starts and ends at one zero and encircles the other zero in the positive direction. In the second case, the curve is a double loop winding around the two zeros, once in the positive direction and a second time in the negative direction.

1.4.1 Second beta integral

Set t = s/(s + 1) in (1.12) to obtain the second beta integral, with integration over a half line:

$$\int_0^{\infty}\frac{s^{x-1}}{(1+s)^{x+y}}\,ds = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}. \tag{1.23}$$

1.4.2 Third (Cauchy’s) beta integral

The beta integral due to Cauchy is defined by

$$C(x,y) = \int_{-\infty}^{\infty}\frac{dt}{(1+it)^x(1-it)^y} = \frac{\pi\,2^{2-x-y}\,\Gamma(x+y-1)}{\Gamma(x)\Gamma(y)}, \qquad \Re(x+y) > 1.$$

Proof. To prove this, first show that integration by parts gives

$$C(x,y+1) = \frac{x}{y}\,C(x+1,y).$$

Also,

$$C(x,y) = \int_{-\infty}^{\infty}\frac{(-1-it)+2}{(1+it)^{x}(1-it)^{y+1}}\,dt = 2C(x,y+1) - C(x-1,y+1).$$

The last two relations combine to give the functional equation

$$C(x,y) = \frac{2y}{x+y-1}\,C(x,y+1).$$

Iteration gives

$$C(x,y) = \frac{2^{2n}(x)_n(y)_n}{(x+y-1)_{2n}}\,C(x+n,y+n).$$

Now,

$$C(x+n,y+n) = \int_{-\infty}^{\infty}\frac{dt}{(1+t^2)^n(1+it)^x(1-it)^y}.$$

Set t → t/√n and let n → ∞. □

The substitution t = tan θ leads to the integral

$$\int_0^{\pi/2}\cos^{x+y-2}\theta\,\cos((x-y)\theta)\,d\theta = \frac{\pi\,2^{1-x-y}\,\Gamma(x+y-1)}{\Gamma(x)\Gamma(y)}, \qquad \Re(x+y) > 1.$$
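Cauchy’s beta integral can be spot-checked by brute-force quadrature. For x = y = 2 the integrand reduces to 1/(1 + t²)², whose integral is π/2, in agreement with the right-hand side π·2^{2−x−y}Γ(x+y−1)/(Γ(x)Γ(y)) = π/2. A sketch (quadrature parameters are our choices):

```python
import math

def cauchy_beta(x, y, R=200.0, n=200000):
    """Midpoint approximation of C(x,y) over [-R, R]; the tail decays like |t|^-(x+y)."""
    h = 2.0 * R / n
    total = 0.0
    for i in range(n):
        t = -R + (i + 0.5) * h
        total += (1.0 / ((1 + 1j * t) ** x * (1 - 1j * t) ** y)).real
    return total * h

def cauchy_rhs(x, y):
    return (math.pi * 2 ** (2 - x - y) * math.gamma(x + y - 1)
            / (math.gamma(x) * math.gamma(y)))

print(cauchy_beta(2, 2), cauchy_rhs(2, 2))   # both close to pi/2 = 1.5707...
```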

1.4.3 A complex contour for the beta integral

Consider the integral

$$I_{x,y} = \frac{1}{2\pi i}\int_0^{(1+)} w^{x-1}(w-1)^{y-1}\,dw,$$

with ℜx > 0 and y ∈ C. The contour starts and ends at the origin and encircles the point 1 in the positive direction. The phase of w − 1 is zero at points on the real axis larger than 1. When ℜy > 0 we can deform the contour onto the interval (0, 1); then we obtain I_{x,y} = B(x, y) sin(πy)/π. It follows that

$$B(x,y) = \frac{1}{2i\sin\pi y}\int_0^{(1+)} w^{x-1}(w-1)^{y-1}\,dw.$$

The integral is defined for any complex value of y. For y = 1, 2, . . . the integral vanishes, and this is compensated by the pole of the factor 1/sin πy in front of the integral.

There is a similar contour integral representing the gamma function. Let us first prove Hankel’s contour integral for the reciprocal gamma function, one of the most beautiful and useful representations of this function. It has the form

$$\frac{1}{\Gamma(z)} = \frac{1}{2\pi i}\int_L s^{-z}e^{s}\,ds, \qquad z \in \mathbb{C}. \tag{1.24}$$

The contour of integration L is the Hankel contour, which runs from −∞ with arg s = −π, encircles the origin in the positive direction, and terminates at −∞, now with arg s = +π. For this we also use the notation $\int_{-\infty}^{(0+)}$. The multi-valued function s^{−z} is assumed to be real for real values of z and s, s > 0.

A proof of (1.24) follows immediately from the theory of Laplace transforms: from the well-known integral

$$\frac{\Gamma(z)}{s^z} = \int_0^{\infty} t^{z-1}e^{-st}\,dt,$$

(1.24) follows as a special case of the inversion formula. A direct proof follows from a special choice of the contour L: the negative real axis. When ℜz < 1 we can pull the contour onto the negative axis, where we have

$$\frac{1}{2\pi i}\left[-\int_{\infty}^{0}(se^{-i\pi})^{-z}e^{-s}\,ds - \int_{0}^{\infty}(se^{+i\pi})^{-z}e^{-s}\,ds\right] = \frac{1}{\pi}\,\sin\pi z\,\Gamma(1-z).$$

Using the reflection formula (cf. the next subsection for a proof),

$$\Gamma(x)\Gamma(1-x) = \frac{\pi}{\sin\pi x}, \tag{1.25}$$

we see that this is indeed the left-hand side of (1.24). In a final step the principle of analytic continuation is used to show that (1.24) holds for all finite complex values of z: both the left- and the right-hand side of (1.24) are entire functions of z.

Another form of (1.24) is

$$\Gamma(z) = \frac{1}{2i\sin\pi z}\int_L s^{z-1}e^{s}\,ds.$$

1.4.4 The Euler reflection formula

The Euler reflection formula (1.25) connects the gamma function with the sine function. In a sense, it shows that 1/Γ(x) is ‘half of the sine function’. To prove the formula (1.25), set y = 1 − x, 0 < x < 1, in (1.23) to obtain

$$\Gamma(x)\Gamma(1-x) = \int_0^{\infty}\frac{t^{x-1}}{1+t}\,dt.$$

To compute the integral, consider the following contour integral:

$$\int_C \frac{z^{x-1}}{1-z}\,dz,$$

where C consists of two circles about the origin of radii R and ε respectively, joined along the negative real axis from −R to −ε. We move along the outer circle in the counterclockwise direction and along the inner circle in the clockwise direction. By the residue theorem,

$$\int_C \frac{z^{x-1}}{1-z}\,dz = -2\pi i,$$

when z^{x−1} has its principal value. Thus

$$-2\pi i = \int_{-\pi}^{\pi}\frac{iR^{x}e^{ix\theta}}{1-Re^{i\theta}}\,d\theta
+ \int_{R}^{\varepsilon}\frac{t^{x-1}e^{ix\pi}}{1+t}\,dt
+ \int_{\pi}^{-\pi}\frac{i\varepsilon^{x}e^{ix\theta}}{1-\varepsilon e^{i\theta}}\,d\theta
+ \int_{\varepsilon}^{R}\frac{t^{x-1}e^{-ix\pi}}{1+t}\,dt.$$

Let R → ∞ and ε → 0, so that the first and third integrals tend to zero while the second and fourth combine to give (1.25) for 0 < x < 1. The full result follows by analytic continuation.
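The reflection formula is easily checked against a library gamma function, including at negative non-integer arguments; a quick sketch:

```python
import math

for x in [0.25, 0.5, 0.9, 1.7, -0.3]:
    lhs = math.gamma(x) * math.gamma(1 - x)
    rhs = math.pi / math.sin(math.pi * x)
    print(f"x = {x:5.2f}:  Gamma(x)Gamma(1-x) = {lhs:.10f},  pi/sin(pi*x) = {rhs:.10f}")
```

At integer x both sides blow up: the poles of Γ(x)Γ(1 − x) match the zeros of sin πx, which is exactly the content of the formula.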

1.4.5 Double-contour integral

We have seen that it is possible to replace the integral for Γ(z) along a half line by a contour integral which converges for all values of z. A similar process can be carried out for the beta integral.

Let P be any point between 0 and 1. We have the following Pochhammer extension of the beta integral:

$$\int_P^{(1+,0+,1-,0-)} t^{x-1}(1-t)^{y-1}\,dt = \frac{-4\pi^2\, e^{\pi i(x+y)}}{\Gamma(1-x)\Gamma(1-y)\Gamma(x+y)}.$$

Here the contour starts at P, encircles the point 1 in the positive (counterclockwise) direction, returns to P, then encircles the origin in the positive direction, and returns to P. The 1−, 0− indicate that the path of integration then runs in the clockwise direction, first around 1 and then around 0. The formula is proved by the same method as Hankel’s formula. Notice that it is true for any complex x and y: both sides are entire functions of x and y.

2 Hypergeometric functions
2.1 Introduction
In this lecture we give the definition and main properties of the Gauss hypergeometric function (F = 2F1) and briefly mention its generalizations: the generalized hypergeometric functions pFq and the basic (or q-) hypergeometric functions pφq.
Almost all of the elementary functions of mathematics, and some not very elementary ones, like the error function erf(x) and the dilogarithm Li₂(x), are special cases of hypergeometric functions, or can be expressed as ratios of hypergeometric functions.
We will first derive Euler’s fractional integral representation for the Gauss hypergeometric function F, from which many identities and transformations will follow. Then we discuss the hypergeometric differential equation, the general linear second-order differential equation with three regular singular points. We derive contiguous relations satisfied by the function F. Finally, we explain the Barnes approach to hypergeometric functions and the Barnes-Mellin contour integral representation for the function F.

2.2 Definition
Directly from the definition of a hypergeometric series $\sum c_n$, on factorizing the polynomials in n, we obtain

$$\frac{c_{n+1}}{c_n} = \frac{(n+a_1)(n+a_2)\cdots(n+a_p)\,x}{(n+b_1)(n+b_2)\cdots(n+b_q)(n+1)}.$$

Hence we can give a more explicit definition.

Definition 2.1 The (generalized) hypergeometric series is defined by the series representation

$${}_pF_q\!\left(\begin{matrix}a_1,\ldots,a_p\\ b_1,\ldots,b_q\end{matrix};x\right) = \sum_{n=0}^{\infty}\frac{(a_1)_n\cdots(a_p)_n}{(b_1)_n\cdots(b_q)_n}\,\frac{x^n}{n!}.$$

Sometimes we will use the other notation pFq(a₁, . . . , a_p; b₁, . . . , b_q; x).


If we apply the ratio test to determine the convergence of the series, using

$$\left|\frac{c_{n+1}}{c_n}\right| \le \frac{|x|\,n^{p-q-1}\,(1+|a_1|/n)\cdots(1+|a_p|/n)}{\left|(1+1/n)(1+b_1/n)\cdots(1+b_q/n)\right|},$$

then we get the following theorem.

Theorem 2.2 The series pFq(a₁, . . . , a_p; b₁, . . . , b_q; x) converges absolutely for all x if p ≤ q and for |x| < 1 if p = q + 1, and it diverges for all x ≠ 0 if p > q + 1 and the series does not terminate.

Proof. It is clear that |c_{n+1}/c_n| → 0 as n → ∞ if p ≤ q. For p = q + 1, lim_{n→∞}|c_{n+1}/c_n| = |x|, and for p > q + 1, |c_{n+1}/c_n| → ∞ as n → ∞. This proves the theorem. □
The case |x| = 1 when p = q + 1 is of interest. Here we have the following conditions for convergence.

Theorem 2.3 The series ${}_{q+1}F_q(a_1,\ldots,a_{q+1};b_1,\ldots,b_q;x)$ with |x| = 1 converges absolutely if $\Re\left(\sum b_i - \sum a_i\right) > 0$. The series converges conditionally if $x = e^{i\theta} \ne 1$ and $0 \ge \Re\left(\sum b_i - \sum a_i\right) > -1$, and the series diverges if $\Re\left(\sum b_i - \sum a_i\right) \le -1$.

Proof. Notice that the shifted factorial can be expressed as a ratio of two gamma functions:

$$(x)_n = \frac{\Gamma(x+n)}{\Gamma(x)}.$$

By the definition of the gamma function,

$$\lim_{n\to\infty}\frac{\Gamma(n+x)}{\Gamma(n+y)}\,n^{y-x} = \lim_{n\to\infty}\frac{(x)_n}{(y)_n}\,n^{y-x}\,\frac{\Gamma(x)}{\Gamma(y)} = \frac{\Gamma(y)}{\Gamma(x)}\cdot\frac{\Gamma(x)}{\Gamma(y)} = 1.$$

The coefficient of the nth term in ${}_{q+1}F_q$ therefore behaves as

$$\frac{(a_1)_n\cdots(a_{q+1})_n}{(b_1)_n\cdots(b_q)_n\,n!} \sim \frac{\prod\Gamma(b_i)}{\prod\Gamma(a_i)}\;n^{\sum a_i-\sum b_i-1}$$

as n → ∞. The statements about absolute convergence and divergence follow immediately. The part of the theorem concerning conditional convergence can be proved by summation by parts. □

The 2F1 series was studied extensively by Euler, Pfaff, Gauss, Kummer and Riemann.
Examples:

$$\log(1+x) = x\,{}_2F_1\!\left(\begin{matrix}1,\,1\\2\end{matrix};-x\right);\qquad
\tan^{-1}x = x\,{}_2F_1\!\left(\begin{matrix}1/2,\,1\\3/2\end{matrix};-x^2\right);$$

$$\sin^{-1}x = x\,{}_2F_1\!\left(\begin{matrix}1/2,\,1/2\\3/2\end{matrix};x^2\right);\qquad
(1-x)^{-a} = {}_1F_0\!\left(\begin{matrix}a\\-\end{matrix};x\right);$$

$$\sin x = x\,{}_0F_1\!\left(\begin{matrix}-\\3/2\end{matrix};-x^2/4\right);\qquad
\cos x = {}_0F_1\!\left(\begin{matrix}-\\1/2\end{matrix};-x^2/4\right);\qquad
e^x = {}_0F_0\!\left(\begin{matrix}-\\-\end{matrix};x\right).$$
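The series evaluations above can be confirmed by summing partial sums of the Gauss series; `hyp2f1` below is our plain truncated-series helper, adequate for |x| comfortably below 1:

```python
import math

def hyp2f1(a, b, c, x, terms=200):
    """Partial sum of the Gauss series sum_n (a)_n (b)_n / ((c)_n n!) x^n, |x| < 1."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return s

x = 0.4
print(x * hyp2f1(1, 1, 2, -x), math.log(1 + x))          # log(1+x)
print(x * hyp2f1(0.5, 1, 1.5, -x * x), math.atan(x))     # arctan(x)
print(x * hyp2f1(0.5, 0.5, 1.5, x * x), math.asin(x))    # arcsin(x)
```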

The next set of examples uses limits:

$$e^x = \lim_{b\to\infty}{}_2F_1\!\left(\begin{matrix}1,\,b\\1\end{matrix};x/b\right);\qquad
\cosh x = \lim_{a,b\to\infty}{}_2F_1\!\left(\begin{matrix}a,\,b\\1/2\end{matrix};x^2/(4ab)\right);$$

$${}_1F_1\!\left(\begin{matrix}a\\c\end{matrix};x\right) = \lim_{b\to\infty}{}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x/b\right);\qquad
{}_0F_1\!\left(\begin{matrix}-\\c\end{matrix};x\right) = \lim_{a,b\to\infty}{}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x/(ab)\right).$$

The example of log(1 − x) = −x 2F1(1, 1; 2; x) shows that although the series converges for |x| < 1, it has a continuation as a single-valued function to the complex plane with the line joining 1 to ∞ deleted. This describes the general situation: a 2F1 function has a continuation to the complex plane with branch points at 1 and ∞.

Definition 2.4 The (Gauss) hypergeometric function 2F1(a, b; c; x) is defined by the series

$$\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n\,n!}\,x^n$$

for |x| < 1, and by continuation elsewhere.

2.3 Euler’s integral representation

Theorem 2.5 If ℜc > ℜb > 0, then

$${}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x\right) = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_0^1 t^{b-1}(1-t)^{c-b-1}(1-xt)^{-a}\,dt \tag{2.1}$$

in the x plane cut along the real axis from 1 to ∞. Here it is understood that arg t = arg(1 − t) = 0 and (1 − xt)^{−a} has its principal value.

Proof. Suppose |x| < 1. Expand (1 − xt)^{−a} by the binomial theorem:

$$\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\sum_{n=0}^{\infty}\frac{(a)_n}{n!}\,x^n\int_0^1 t^{n+b-1}(1-t)^{c-b-1}\,dt.$$

Since for ℜb > 1, ℜ(c − b) > 1 and |x| < 1 the series

$$\sum_{n=0}^{\infty} U_n(t), \qquad U_n(t) = \frac{(a)_n}{n!}\,x^n\,t^{b+n-1}(1-t)^{c-b-1},$$

converges uniformly with respect to t ∈ [0, 1], we may interchange the order of integration and summation for these values of b, c and x. Now use the beta integral to prove the result for |x| < 1. Since the integral is analytic in the cut plane, the theorem holds for x in this region as well; we also apply analytic continuation with respect to b and c to arrive at the conditions announced in the formulation of the theorem. □

Hence we have obtained the analytic continuation of F, as a function of x, outside the unit disc, but only when ℜc > ℜb > 0. It is important to note that we view 2F1(a, b; c; x) as a function of four complex variables a, b, c, and x instead of just x. It is easy to see that $\frac{1}{\Gamma(c)}\,{}_2F_1(a,b;c;x)$ is an entire function of a, b, c if x is fixed and |x| < 1, for in this case the series converges uniformly in every compact domain of the (a, b, c) space.
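Euler’s integral (2.1) can be compared with the series numerically; it also converges for x ≤ −1, where the series does not, and so furnishes the analytic continuation. A midpoint-rule sketch (helper names are ours; b and c are chosen so the integrand stays bounded at the endpoints):

```python
import math

def hyp2f1_series(a, b, c, x, terms=300):
    """Truncated Gauss series, |x| < 1."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return s

def hyp2f1_euler(a, b, c, x, n=20000):
    """Right-hand side of (2.1); requires Re c > Re b > 0 and x off the cut [1, inf)."""
    pref = math.gamma(c) / (math.gamma(b) * math.gamma(c - b))
    h = 1.0 / n
    s = sum(((i + 0.5) * h) ** (b - 1) * (1 - (i + 0.5) * h) ** (c - b - 1)
            * (1 - x * (i + 0.5) * h) ** (-a) for i in range(n))
    return pref * s * h

a, b, c, x = 0.7, 1.5, 3.0, 0.3
print(hyp2f1_euler(a, b, c, x), hyp2f1_series(a, b, c, x))   # near-identical values
```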
Gauss evaluated the series at the point x = 1.

Theorem 2.6 For ℜ(c − a − b) > 0,

$$\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n\,n!} = {}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};1\right) = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}.$$

Proof. Let x → 1− in Euler’s integral for 2F1. Then for ℜc > ℜb > 0 and ℜ(c − a − b) > 0 we get

$$\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_0^1 t^{b-1}(1-t)^{c-a-b-1}\,dt = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}.$$

The condition ℜc > ℜb > 0 may be removed by continuation. □

Corollary 2.7 (Chu-Vandermonde)

$${}_2F_1\!\left(\begin{matrix}-n,\,b\\c\end{matrix};1\right) = \frac{(c-b)_n}{(c)_n}, \qquad n = 0, 1, 2, \ldots.$$
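Since the series terminates, the Chu-Vandermonde identity can be verified exactly in rational arithmetic; a sketch using Python’s fractions module (helper names are ours):

```python
from fractions import Fraction

def poch(a, n):
    """Shifted factorial (a)_n."""
    p = Fraction(1)
    for k in range(n):
        p *= a + k
    return p

def f21_at_one(n, b, c):
    """2F1(-n, b; c; 1) as an exact finite sum; note (1)_k = k!."""
    return sum(poch(-n, k) * poch(b, k) / (poch(c, k) * poch(Fraction(1), k))
               for k in range(n + 1))

n, b, c = 5, Fraction(1, 2), Fraction(7, 3)
print(f21_at_one(n, b, c) == poch(c - b, n) / poch(c, n))   # True
```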

2.4 Two functional relations

The hypergeometric function satisfies a great number of relations. The simplest and most obvious is the symmetry a ↔ b. Let us prove two more relations:

$${}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x\right) = (1-x)^{-a}\,{}_2F_1\!\left(\begin{matrix}a,\,c-b\\c\end{matrix};\frac{x}{x-1}\right) \quad \text{(Pfaff)}, \tag{2.2}$$

$${}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x\right) = (1-x)^{c-a-b}\,{}_2F_1\!\left(\begin{matrix}c-a,\,c-b\\c\end{matrix};x\right) \quad \text{(Euler)}. \tag{2.3}$$

The first relation is proved through the change of variable t = 1 − s in Euler’s integral formula. The second relation follows by using the first relation twice.

The right-hand series in Pfaff’s transformation converges for |x/(x − 1)| < 1. This condition is implied by ℜx < 1/2, so Pfaff’s formula continues the series 2F1(a, b; c; x) to this region.
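Both transformations can be tested numerically with a truncated Gauss series; for 0 < x < 1/2 the Pfaff argument x/(x − 1) also lies inside the unit disc, so all three series below converge (helper name and parameter values are ours):

```python
def hyp2f1(a, b, c, x, terms=300):
    """Truncated Gauss series, adequate for |x| comfortably below 1."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return s

a, b, c, x = 0.7, 1.3, 2.1, 0.35
lhs = hyp2f1(a, b, c, x)
pfaff = (1 - x) ** (-a) * hyp2f1(a, c - b, c, x / (x - 1))    # right-hand side of (2.2)
euler = (1 - x) ** (c - a - b) * hyp2f1(c - a, c - b, c, x)   # right-hand side of (2.3)
print(lhs, pfaff, euler)   # three nearly identical numbers
```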
Now, let us rewrite Euler’s transformation as

$$(1-x)^{a+b-c}\,{}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x\right) = {}_2F_1\!\left(\begin{matrix}c-a,\,c-b\\c\end{matrix};x\right).$$

Equate the coefficients of xⁿ on both sides to get

$$\sum_{j=0}^{n}\frac{(a)_j(b)_j(c-a-b)_{n-j}}{j!\,(c)_j\,(n-j)!} = \frac{(c-a)_n(c-b)_n}{n!\,(c)_n}.$$

Rewrite this as:

Theorem 2.8 (Pfaff-Saalschütz)

$${}_3F_2\!\left(\begin{matrix}-n,\,a,\,b\\c,\,1+a+b-c-n\end{matrix};1\right) = \frac{(c-a)_n(c-b)_n}{(c)_n(c-a-b)_n}.$$
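The Pfaff-Saalschütz identity is likewise exact in rational arithmetic, since the series terminates; a sketch with Python’s fractions module (names and parameter values are ours):

```python
from fractions import Fraction

def poch(a, n):
    """Shifted factorial (a)_n."""
    p = Fraction(1)
    for k in range(n):
        p *= a + k
    return p

def saalschutz_lhs(n, a, b, c):
    """3F2(-n, a, b; c, 1+a+b-c-n; 1) as an exact finite sum."""
    d = 1 + a + b - c - n
    return sum(poch(-n, k) * poch(a, k) * poch(b, k)
               / (poch(c, k) * poch(d, k) * poch(Fraction(1), k))
               for k in range(n + 1))

n, a, b, c = 4, Fraction(1, 3), Fraction(5, 2), Fraction(9, 4)
rhs = poch(c - a, n) * poch(c - b, n) / (poch(c, n) * poch(c - a - b, n))
print(saalschutz_lhs(n, a, b, c) == rhs)   # True
```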
The Pfaff-Saalschütz identity can be written as

$$(c)_n(c+a+b)_n\;{}_3F_2\!\left(\begin{matrix}-n,\,-a,\,-b\\c,\,1-a-b-c-n\end{matrix};1\right) = (c+a)_n(c+b)_n.$$

This is a polynomial identity in a, b, c. Dougall (1907) took the view that both sides of this equation are polynomials of degree n in a; therefore the identity is true if both sides are equal for n + 1 distinct values of a. By the same method he proved a more general identity:

$${}_7F_6\!\left(\begin{matrix}a,\,1+\tfrac12 a,\,-b,\,-c,\,-d,\,-e,\,-n\\ \tfrac12 a,\,1+a+b,\,1+a+c,\,1+a+d,\,1+a+e,\,1+a+n\end{matrix};1\right)
= \frac{(1+a)_n(1+a+b+c)_n(1+a+b+d)_n(1+a+c+d)_n}{(1+a+b)_n(1+a+c)_n(1+a+d)_n(1+a+b+c+d)_n},$$

where 1 + 2a + b + c + d + e + n = 0 and n is a positive integer. Taking the limit n → ∞ we get

$${}_5F_4\!\left(\begin{matrix}a,\,1+\tfrac12 a,\,-b,\,-c,\,-d\\ \tfrac12 a,\,1+a+b,\,1+a+c,\,1+a+d\end{matrix};1\right)
= \frac{\Gamma(1+a+b)\Gamma(1+a+c)\Gamma(1+a+d)\Gamma(1+a+b+c+d)}{\Gamma(1+a)\Gamma(1+a+b+c)\Gamma(1+a+b+d)\Gamma(1+a+c+d)}$$

when ℜ(a + b + c + d + 1) > 0. Now take d = −a/2 to get Dixon’s summation formula

$${}_3F_2\!\left(\begin{matrix}a,\,-b,\,-c\\1+a+b,\,1+a+c\end{matrix};1\right)
= \frac{\Gamma(1+\tfrac12 a)\,\Gamma(1+a+b)\,\Gamma(1+a+c)\,\Gamma(1+\tfrac12 a+b+c)}{\Gamma(1+a)\,\Gamma(1+\tfrac12 a+b)\,\Gamma(1+\tfrac12 a+c)\,\Gamma(1+a+b+c)}.$$
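Dixon’s formula can be spot-checked in a terminating case, taking −b a negative integer so that the 3F2 is a finite sum (the parameter values are our choice):

```python
import math

def poch(a, n):
    """Shifted factorial (a)_n in floating point."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

a, b, c = 1.0, 2, 0.5          # -b = -2 terminates the series after b+1 terms
lhs = sum(poch(a, k) * poch(-b, k) * poch(-c, k)
          / (poch(1 + a + b, k) * poch(1 + a + c, k) * math.factorial(k))
          for k in range(b + 1))
g = math.gamma
rhs = (g(1 + a / 2) * g(1 + a + b) * g(1 + a + c) * g(1 + a / 2 + b + c)
       / (g(1 + a) * g(1 + a / 2 + b) * g(1 + a / 2 + c) * g(1 + a + b + c)))
print(lhs, rhs)   # both ~ 1.097142857...
```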

2.5 Contour integral representations

A more general integral representation for the 2F1 hypergeometric function is the loop integral

$${}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x\right) = \frac{\Gamma(c)\Gamma(1+b-c)}{2\pi i\,\Gamma(b)}\int_0^{(1+)} t^{b-1}(t-1)^{c-b-1}(1-xt)^{-a}\,dt, \qquad \Re b > 0.$$

The contour starts and terminates at t = 0 and encircles the point t = 1 in the positive direction. The point 1/x should be outside the contour. The many-valued functions of the integrand assume their principal branches: arg(1 − xt) tends to zero when x → 0, and arg t, arg(t − 1) are zero at the point where the contour cuts the positive real axis (to the right of 1). Observe that no condition on c is needed, whereas in (2.1) we need ℜ(c − b) > 0. The proof of the above representation runs as for (2.1), with the help of the corresponding loop integral for the beta function.

An alternative representation involves a contour encircling the point 0:

$${}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x\right) = \frac{\Gamma(c)\Gamma(1-b)}{2\pi i\,\Gamma(c-b)}\int_1^{(0+)}(-t)^{b-1}(1-t)^{c-b-1}(1-xt)^{-a}\,dt, \qquad \Re c > \Re b.$$

Using the double-loop (or Pochhammer) contour integral one can derive the following representation:

$$\frac{1}{\Gamma(c)}\,{}_2F_1\!\left(\begin{matrix}a,\,b\\c\end{matrix};x\right) = -\frac{e^{-i\pi c}}{4\,\Gamma(b)\,\Gamma(c-b)\,\sin\pi b\,\sin\pi(c-b)}
\int_P^{(1+,0+,1-,0-)} t^{b-1}(1-t)^{c-b-1}(1-xt)^{-a}\,dt.$$

Here the following conditions hold: |arg(1 − x)| < π, arg t = arg(1 − t) = 0 at the starting point P of the contour, and (1 − xt)^{−a} = 1 when x = 0. Note that there are no conditions on a, b, or c.

2.6 The hypergeometric differential equation

Let us introduce the differential operator ϑ = x d/dx. We have

$$\vartheta(\vartheta + c - 1)\,x^n = n(n+c-1)\,x^n.$$

Hence

$$\vartheta(\vartheta + c - 1)\,{}_2F_1(a,b;c;x) = x(\vartheta + a)(\vartheta + b)\,{}_2F_1(a,b;c;x).$$

In explicit form this reads

$$x(1-x)F'' + [c-(a+b+1)x]F' - abF = 0, \qquad F = F(a,b;c;x) = {}_2F_1(a,b;c;x). \tag{2.4}$$

This is the hypergeometric differential equation, which was given by Gauss.
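That the Gauss series satisfies (2.4) can be confirmed numerically by differentiating the series term by term; a sketch (names and parameter values are ours):

```python
def f_and_derivatives(a, b, c, x, terms=300):
    """Return (F, F', F'') from the Gauss series, differentiated term by term; |x| < 1."""
    coeff = 1.0
    F = dF = d2F = 0.0
    for n in range(terms):
        F += coeff * x ** n
        if n >= 1:
            dF += n * coeff * x ** (n - 1)
        if n >= 2:
            d2F += n * (n - 1) * coeff * x ** (n - 2)
        coeff *= (a + n) * (b + n) / ((c + n) * (n + 1))
    return F, dF, d2F

a, b, c, x = 0.9, 1.4, 2.2, 0.3
F, dF, d2F = f_and_derivatives(a, b, c, x)
residual = x * (1 - x) * d2F + (c - (a + b + 1) * x) * dF - a * b * F
print(residual)   # ~ 0 up to rounding
```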
It is easy to show that, in addition to F(a, b; c; x), a second solution of (2.4) is given by

    x^{1−c} F(a − c + 1, b − c + 1; 2 − c; x).

When c = 1 this does not give a new solution but, in general, the second solution of (2.4)
is of the form

    P F(a, b; c; x) + Q x^{1−c} F(a − c + 1, b − c + 1; 2 − c; x),    (2.5)

where P and Q are independent of x.


Next we observe that with the help of (2.4) and (2.5) we can express a hypergeometric
function with argument 1 − x or 1/x in terms of functions with argument x. For example,
when in (2.4) we introduce a new variable x′ = 1 − x we obtain a hypergeometric differential
equation, but now with parameters a, b and a + b − c + 1. Hence, besides the solutions in
(2.5) we have F (a, b; a + b − c + 1; 1 − x) as a solution as well. Any three solutions have to
be linearly dependent. Therefore we get

    F(a, b; a + b − c + 1; 1 − x) = P F(a, b; c; x) + Q x^{1−c} F(a − c + 1, b − c + 1; 2 − c; x).

To find P and Q we substitute x = 0 and x = 1. If we also use Pfaff's and Euler's
transformations we obtain the following list of relations:

    F(a, b; c; x) = A F(a, b; a + b − c + 1; 1 − x)
                  + B (1 − x)^{c−a−b} F(c − a, c − b; c − a − b + 1; 1 − x)           (2.6)
                  = C (−x)^{−a} F(a, 1 − c + a; 1 − b + a; 1/x)
                  + D (−x)^{−b} F(b, 1 − c + b; 1 − a + b; 1/x)                       (2.7)
                  = C (1 − x)^{−a} F(a, c − b; a − b + 1; 1/(1 − x))
                  + D (1 − x)^{−b} F(b, c − a; b − a + 1; 1/(1 − x))                  (2.8)
                  = A x^{−a} F(a, a − c + 1; a + b − c + 1; 1 − 1/x)
                  + B x^{a−c} (1 − x)^{c−a−b} F(c − a, 1 − a; c − a − b + 1; 1 − 1/x). (2.9)

Here

    A = Γ(c)Γ(c − a − b)/(Γ(c − a)Γ(c − b)),    B = Γ(c)Γ(a + b − c)/(Γ(a)Γ(b)),
    C = Γ(c)Γ(b − a)/(Γ(b)Γ(c − a)),            D = Γ(c)Γ(a − b)/(Γ(a)Γ(c − b)).
Since Pfaff's formula (2.2) gives a continuation of 2F1 from |x| < 1 to ℜx < 1/2, (2.6)
gives the continuation to ℜx > 1/2, cut along the real axis from x = 1 to x = ∞. The
cut comes from the branch points of the factor (1 − x)^{c−a−b}. Analogously, (2.7) holds when
| arg(−x)| < π; (2.8) holds when | arg(1 − x)| < π; and (2.9) holds when | arg(1 − x)| < π and
| arg x| < π.
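The connection formula (2.6) is easy to probe numerically at a point where both series converge. The sketch below is only an illustration: the parameters a = 0.3, b = 0.7, c = 1.9 and the point x = 0.25 are arbitrary generic choices (non-integer c − a − b), and the truncation length is ad hoc.

```python
from math import gamma

def hyp2f1(a, b, c, x, terms=200):
    """Truncated Gauss series via the term recurrence; |x| < 1 assumed."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
    return s

a, b, c, x = 0.3, 0.7, 1.9, 0.25
A = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
B = gamma(c) * gamma(a + b - c) / (gamma(a) * gamma(b))

lhs = hyp2f1(a, b, c, x)
rhs = (A * hyp2f1(a, b, a + b - c + 1, 1 - x)
       + B * (1 - x)**(c - a - b) * hyp2f1(c - a, c - b, c - a - b + 1, 1 - x))
assert abs(lhs - rhs) < 1e-9
```

Both sides agree to near machine precision, as the formula predicts.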

2.7 The Riemann-Papperitz equation


The hypergeometric differential equation (2.4) for the function 2 F 1 has three regular singular
points, at 0, 1 and ∞ with exponents 0, 1 − c; 0, c − a − b; and a, b respectively. Its Riemann
symbol has the following form:
 
 0 1 ∞ 
F =P 0 0 a x .
 
1−c c−a−b b

In fact, this equation is a generic equation that has only three regular singularities.

Theorem 2.9 Any homogeneous linear differential equation of the second order with at
most three singularities, which are regular singular points, can be transformed into the
hypergeometric differential equation (2.4).
Proof. Let us only sketch the proof. First we consider the equation

    d²f/dz² + p(z) df/dz + q(z) f = 0
and assume that it has only three finite regular singular points ξ, η and ζ with the exponents
(α1, α2), (β1, β2) and (γ1, γ2). Then we find that such an equation can always be brought into
the form

    f′′ + [(1 − α1 − α2)/(z − ξ) + (1 − β1 − β2)/(z − η) + (1 − γ1 − γ2)/(z − ζ)] f′    (2.10)

      − [α1α2/((z − ξ)(η − ζ)) + β1β2/((z − η)(ζ − ξ)) + γ1γ2/((z − ζ)(ξ − η))] · (ξ − η)(η − ζ)(ζ − ξ)/((z − ξ)(z − η)(z − ζ)) f = 0.
Next we introduce the following fractional linear transformation:

    x = (ζ − η)(z − ξ)/((ζ − ξ)(z − η)),

and also a 'gauge transformation' of the function f:

    F = x^{−α1} (1 − x)^{−γ1} f.

This transformation moves the singularities to 0, 1 and ∞. The exponents at these points are

    (0, α2 − α1),  (0, γ2 − γ1),  (α1 + β1 + γ1, α1 + β2 + γ1).

It is easy to check that we arrive at the hypergeometric differential equation (2.4) for the
function F (x) with the following parameters:

a = α1 + β1 + γ1 , b = α1 + β2 + γ1 , c = 1 + α1 − α2 .

□
Equation (2.10) is called the Riemann-Papperitz equation.

2.8 Barnes’ contour integral for F (a, b; c; x)


The pair of Mellin transformations (direct and inverse) is defined by

    F(s) = ∫_0^∞ x^{s−1} f(x) dx,    f(x) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} x^{−s} F(s) ds.

This holds for a suitable class of functions. For example,

    Γ(s) = ∫_0^∞ x^{s−1} e^{−x} dx,    e^{−x} = (1/(2πi)) ∫_{c−i∞}^{c+i∞} x^{−s} Γ(s) ds,    c > 0.

This can be proved by Cauchy's residue theorem. Take a rectangular contour L with vertices
c ± iR and c − (N + 1/2) ± iR, where N is a positive integer. The poles of Γ(s) inside this contour
are at 0, −1, . . . , −N, and the residue at s = −j is (−1)^j/j!. Now let R and N tend to ∞.
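The direct Mellin transform of e^{−x} can be checked against the Gamma function with nothing more than a crude quadrature. This is a sketch only: the cutoff and step count are ad hoc, and the rule below is plain composite trapezoid.

```python
import math

def mellin_of_exp(s, upper=60.0, n=120000):
    """Trapezoidal estimate of the Mellin transform of e^{-x} at s > 1.
    The integrand x^{s-1} e^{-x} vanishes at x = 0 and is negligible at x = upper."""
    h = upper / n
    total = 0.5 * upper**(s - 1) * math.exp(-upper)   # right endpoint (half weight)
    for i in range(1, n):
        x = i * h
        total += x**(s - 1) * math.exp(-x)
    return h * total

s = 3.5
approx = mellin_of_exp(s)
assert abs(approx - math.gamma(s)) < 1e-5
```

The agreement with math.gamma(3.5) ≈ 3.32335 confirms the direct transform numerically.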
The Mellin transform of the hypergeometric function is

    ∫_0^∞ x^{s−1} 2F1(a, b; c; −x) dx = (Γ(c)/(Γ(a)Γ(b))) · Γ(s)Γ(a − s)Γ(b − s)/Γ(c − s).

Theorem 2.10

    (Γ(a)Γ(b)/Γ(c)) 2F1(a, b; c; x) = (1/(2πi)) ∫_{−i∞}^{i∞} (Γ(a + s)Γ(b + s)Γ(−s)/Γ(c + s)) (−x)^s ds,

| arg(−x)| < π. The path of integration is curved, if necessary, to separate the poles s =
−a − n, s = −b − n, from the poles s = n, where n is an integer ≥ 0. (Such a contour can
always be drawn if a and b are not negative integers.)

3 Orthogonal polynomials
3.1 Introduction
In this lecture we talk about general properties of orthogonal polynomials and about classical
orthogonal polynomials, which appear to be hypergeometric orthogonal polynomials. One
way to link the hypergeometric function to orthogonal polynomials is through a formula
of Jacobi. Multiply the hypergeometric equation by x^{c−1}(1 − x)^{a+b−c} and write it in the
following form:

    d/dx [x(1 − x) x^{c−1} (1 − x)^{a+b−c} y′] = ab x^{c−1} (1 − x)^{a+b−c} y.
From

    d/dx 2F1(a, b; c; x) = (ab/c) 2F1(a + 1, b + 1; c + 1; x),
by induction,

    d/dx [x^k (1 − x)^k M y^{(k)}] = (a + k − 1)(b + k − 1) x^{k−1} (1 − x)^{k−1} M y^{(k−1)},

where M = x^{c−1}(1 − x)^{a+b−c}. Then

    d^k/dx^k [x^k (1 − x)^k M y^{(k)}] = (a)_k (b)_k M y.
Substitute

    y^{(k)} = ((a)_k (b)_k/(c)_k) 2F1(a + k, b + k; c + k; x),

to get

    d^k/dx^k [x^k (1 − x)^k M 2F1(a + k, b + k; c + k; x)] = (c)_k M 2F1(a, b; c; x).
Put b = −n, k = n; then

    2F1(−n, a; c; x) = (x^{1−c} (1 − x)^{c+n−a}/(c)_n) d^n/dx^n [x^{c+n−1} (1 − x)^{a−c}].

This is Jacobi’s formula.


Set x = (1 − y)/2, c = α + 1, and a = n + α + β + 1 to get

    2F1(−n, n + α + β + 1; α + 1; (1 − y)/2) = ((−1)^n (1 − y)^{−α} (1 + y)^{−β}/((α + 1)_n 2^n)) d^n/dy^n [(1 − y)^{n+α} (1 + y)^{n+β}].    (3.1)

Definition 3.1 The Jacobi polynomial of degree n is defined by

    P_n^{(α,β)}(x) := ((α + 1)_n/n!) 2F1(−n, n + α + β + 1; α + 1; (1 − x)/2).    (3.2)

Its orthogonality relation is as follows:

    ∫_{−1}^{+1} P_n^{(α,β)}(x) P_m^{(α,β)}(x) (1 − x)^α (1 + x)^β dx
      = (2^{α+β+1} Γ(n + α + 1) Γ(n + β + 1)/((2n + α + β + 1) Γ(n + α + β + 1) n!)) δ_{mn}.

Formula (3.1) is called the Rodrigues formula for the Jacobi polynomials.
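Since the series in (3.2) terminates, Jacobi polynomials are straightforward to evaluate directly. A minimal sketch (the test value x = 0.37 is arbitrary) which also checks the special case α = β = 0, where P_n^{(0,0)} is the Legendre polynomial:

```python
from math import factorial

def poch(q, k):
    """Pochhammer symbol (q)_k = q(q+1)...(q+k-1)."""
    p = 1.0
    for i in range(k):
        p *= q + i
    return p

def jacobi(n, alpha, beta, x):
    """P_n^{(alpha,beta)}(x) via the terminating 2F1 series in (3.2)."""
    s = sum(poch(-n, k) * poch(n + alpha + beta + 1, k)
            / (poch(alpha + 1, k) * factorial(k)) * ((1 - x) / 2)**k
            for k in range(n + 1))
    return poch(alpha + 1, n) / factorial(n) * s

# alpha = beta = 0 gives the Legendre polynomials, e.g. P_2(x) = (3x^2 - 1)/2
x = 0.37
assert abs(jacobi(2, 0, 0, x) - (3 * x**2 - 1) / 2) < 1e-12
```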

3.2 General orthogonal polynomials


Consider the linear space P of polynomials of the real variable x with real coefficients.
A set of orthogonal polynomials is defined by the interval (a, b) and by the measure
dµ(x) = w(x) dx of orthogonality. The positive function w(x), with the property that

    ∫_a^b w(x) x^k dx < ∞,    ∀k = 0, 1, 2, . . . ,

is called the weight function.



Definition 3.2 We say that a sequence of polynomials {p_n(x)}_{n=0}^∞, where p_n(x) has exact
degree n, is orthogonal with respect to the weight function w(x) if

    ∫_a^b p_n(x) p_m(x) w(x) dx = h_n δ_{mn}.

Theorem 3.3 A sequence of orthogonal polynomials {p_n(x)} satisfies the three-term recurrence relation

    p_{n+1}(x) = (A_n x + B_n) p_n(x) − C_n p_{n−1}(x)    for n ≥ 0,

where we set p_{−1}(x) = 0. Here A_n, B_n, and C_n are real constants, n = 0, 1, 2, . . ., and
A_{n−1} A_n C_n > 0, n = 1, 2, . . .. If the leading coefficient of p_n(x) is k_n, then

    A_n = k_{n+1}/k_n,    C_{n+1} = (A_{n+1}/A_n)(h_{n+1}/h_n).
An important consequence of the recurrence relation is the Christoffel-Darboux formula.

Theorem 3.4 Suppose that the p_n(x) are normalized so that h_n = 1. Then

    Σ_{m=0}^{n} p_m(y) p_m(x) = (k_n/k_{n+1}) (p_{n+1}(x) p_n(y) − p_{n+1}(y) p_n(x))/(x − y).

Corollary 3.5 p′_{n+1}(x) p_n(x) − p_{n+1}(x) p′_n(x) > 0 ∀x.

3.3 Zeros of orthogonal polynomials


Theorem 3.6 Suppose that {pn (x)} is a sequence of orthogonal polynomials with respect
to the weight function w(x) on the interval [a, b]. Then pn (x) has n simple zeros in [a, b].

Theorem 3.7 The zeros of pn (x) and pn+1 (x) separate each other.
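Theorems 3.6 and 3.7 can be illustrated with the Chebyshev polynomials, whose zeros are known in closed form. A small sketch (the degree n = 5 is an arbitrary choice):

```python
import math

def cheb_zeros(n):
    """Zeros of T_n(x) = cos(n arccos x), returned in increasing order."""
    return sorted(math.cos((2 * j - 1) * math.pi / (2 * n))
                  for j in range(1, n + 1))

n = 5
z_n, z_np1 = cheb_zeros(n), cheb_zeros(n + 1)

assert all(-1 < z < 1 for z in z_n)        # n simple zeros inside the interval
for k in range(n):                         # zeros of T_n and T_{n+1} interlace
    assert z_np1[k] < z_n[k] < z_np1[k + 1]
```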

3.4 Gauss quadrature


Theorem 3.8 There are positive numbers λ1, . . . , λn such that for every polynomial f(x)
of degree at most 2n − 1

    ∫_a^b f(x) w(x) dx = Σ_{j=1}^{n} λ_j f(x_j),

where x_j, j = 1, . . . , n, are the zeros of the polynomial p_n(x) from the set of polynomials
orthogonal with respect to the weight function w(x), and the λ_j have the form

    λ_j = ∫_a^b (p_n(x)/(p′_n(x_j)(x − x_j))) w(x) dx.
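For instance, for w(x) = 1 on [−1, 1] and n = 2 the nodes are the zeros ±1/√3 of the Legendre polynomial P_2, both weights equal 1, and the rule is exact for polynomials up to degree 2n − 1 = 3. A minimal sketch (the cubic below is an arbitrary test polynomial):

```python
import math

nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]   # zeros of P_2(x)
weights = [1.0, 1.0]                            # lambda_1 = lambda_2 = 1

def f(x):
    return 7 * x**3 - 2 * x**2 + 5 * x - 1      # degree 3 = 2n - 1

quad = sum(w * f(x) for w, x in zip(weights, nodes))
exact = -2 * (2 / 3) - 2                        # exact integral over [-1, 1]
assert abs(quad - exact) < 1e-12
```

The odd-degree terms integrate to zero by symmetry, and the two-point rule reproduces the rest exactly.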

3.5 Classical orthogonal polynomials


The orthogonal polynomials associated with the names of Jacobi, Gegenbauer, Chebyshev,
Legendre, Laguerre and Hermite are called the classical orthogonal polynomials. The
following properties are characteristic of the classical orthogonal polynomials:
(i) the family {p0n } is also an orthogonal system;
(ii) pn satisfies a second order linear differential equation

A(x)y 00 + B(x)y 0 + λn y = 0,

where A and B do not depend on n and λn does not depend on x;


(iii) there is a Rodrigues formula of the form

    p_n(x) = (1/(K_n w(x))) (d/dx)^n [w(x) X^n(x)],

where X is a polynomial in x with coefficients not depending on n, and Kn does not depend
on x.
As was said earlier, all classical orthogonal polynomials are, in fact, hypergeometric polynomials, in the sense that they can be expressed in terms of the hypergeometric function.
With the Jacobi polynomials expressed by (3.2), all the others appear to be either special
cases or limits from the hypergeometric function to the confluent hypergeometric function.
Gegenbauer polynomials:

    C_n^γ(x) = ((2γ)_n/(γ + 1/2)_n) P_n^{(γ−1/2, γ−1/2)}(x),    (3.3)

Chebyshev polynomials:

    T_n(x) = (n!/(1/2)_n) P_n^{(−1/2, −1/2)}(x),    (3.4)

Legendre polynomials:

    P_n(x) = P_n^{(0,0)}(x),    (3.5)

Laguerre polynomials:

    L_n^α(x) = (n + α choose n) 1F1(−n; α + 1; x);    (3.6)

Hermite polynomials:

    H_{2n}(x) = ((−1)^n (2n)!/n!) 1F1(−n; 1/2; x²),    (3.7)

    H_{2n+1}(x) = ((−1)^n (2n + 1)!/n!) 2x 1F1(−n; 3/2; x²).    (3.8)
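Relation (3.4) can be tested against the trigonometric definition T_n(cos t) = cos(nt). A small numerical sketch (n = 4 and x = 0.3 are arbitrary):

```python
import math

def poch(q, k):
    p = 1.0
    for i in range(k):
        p *= q + i
    return p

def jacobi(n, a, b, x):
    """P_n^{(a,b)}(x) from the terminating 2F1 series (3.2)."""
    s = sum(poch(-n, k) * poch(n + a + b + 1, k)
            / (poch(a + 1, k) * math.factorial(k)) * ((1 - x) / 2)**k
            for k in range(n + 1))
    return poch(a + 1, n) / math.factorial(n) * s

# (3.4): T_n(x) = (n!/(1/2)_n) P_n^{(-1/2,-1/2)}(x), and T_n(x) = cos(n arccos x)
n, x = 4, 0.3
tn = math.factorial(n) / poch(0.5, n) * jacobi(n, -0.5, -0.5, x)
assert abs(tn - math.cos(n * math.acos(x))) < 1e-12
```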

3.6 Hermite polynomials


Hermite polynomials are orthogonal on (−∞, +∞) with e^{−x²} as the weight function.
This weight function is its own Fourier transform:

    e^{−x²} = (1/√π) ∫_{−∞}^∞ e^{−t²} e^{2ixt} dt.    (3.9)
Hermite polynomials can be defined by the Rodrigues formula:
    H_n(x) = (−1)^n e^{x²} d^n/dx^n e^{−x²}.
It is easy to check that Hn (x) is a polynomial of degree n.
If we repeatedly differentiate (3.9) we get

    d^n e^{−x²}/dx^n = ((2i)^n/√π) ∫_{−∞}^∞ e^{−t²} t^n e^{2ixt} dt.

Hence

    H_n(x) = ((−2i)^n e^{x²}/√π) ∫_{−∞}^∞ e^{−t²} t^n e^{2ixt} dt.
It is easy now to prove the orthogonality property,

    ∫_{−∞}^∞ e^{−x²} H_n(x) H_m(x) dx = 2^n n! √π δ_{mn},

using the Rodrigues formula and integrating by parts.


The Hermite polynomials have a simple generating function:

    Σ_{n=0}^∞ (H_n(x)/n!) r^n = e^{2xr − r²}.    (3.10)

The recurrence relation has the form

    H_{n+1}(x) − 2x H_n(x) + 2n H_{n−1}(x) = 0.
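The three-term recurrence gives a convenient way to evaluate H_n numerically. The sketch below checks it against the closed form H_3(x) = 8x³ − 12x (the point x = 0.7 is arbitrary):

```python
def hermite(n, x):
    """H_n(x) from the recurrence H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * x          # H_0 and H_1
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

x = 0.7
assert abs(hermite(3, x) - (8 * x**3 - 12 * x)) < 1e-12
assert abs(hermite(2, x) - (4 * x**2 - 2)) < 1e-12
```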
From the integral representation we can derive the Poisson kernel for the Hermite polynomials:

    Σ_{n=0}^∞ (H_n(x) H_n(y)/(2^n n!)) r^n = (1 − r²)^{−1/2} e^{[2xyr − (x² + y²)r²]/(1 − r²)}.
The following integral equation for |r| < 1 can be derived from the Poisson kernel by using
orthogonality:

    (1/√π) ∫_{−∞}^∞ (e^{[2xyr − (x² + y²)r²]/(1 − r²)}/√(1 − r²)) H_n(y) dy = H_n(x) r^n.

Let r → i and we have, at least formally,

    (1/√(2π)) ∫_{−∞}^∞ e^{ixy} e^{−y²/2} H_n(y) dy = i^n e^{−x²/2} H_n(x).

Hence, e^{−x²/2} H_n(x) is an eigenfunction of the Fourier transform with eigenvalue i^n. This can
be proved by using the Rodrigues formula for H_n(x).

4 Separation of variables and special functions


4.1 Introduction
This lecture is about some applications of special functions. It will also give some answer
to the question of where special functions come from. Special functions usually appear
when solving linear partial differential equations (PDEs), like the heat equation, or when solving
spectral problems arising in quantum mechanics, like finding eigenfunctions of a Schrödinger
operator. Many equations of this kind, including many PDEs of mathematical physics, can
be solved by the method of Separation of Variables (SoV). We will give an introduction to
this very powerful method and also will see how it fits into the theory of special functions.

Definition 4.1 Separation of Variables M is a transformation which brings a function
ψ(x1, . . . , xn) of many variables into a factorized form

    M : ψ ↦ ϕ1(y1) · . . . · ϕn(yn).

The functions ϕj(yj) are usually some known special functions of one variable. The transformation M could be a change of variables from {x} to {y}, but could also be an integral
transform. Usually the function ψ satisfies a simple linear PDE.

4.2 SoV for the heat equation


Let the complex-valued function q(x, t) satisfy the heat equation

    i q_t + q_xx = 0,    x, t ∈ [0, ∞),    (4.1)

    q(x, 0) = q1(x),    q(0, t) = q2(t),
where q1 (x) and q2 (t) are given functions decaying sufficiently fast for large x and large t,
respectively.
Divide (4.1) by q(x, t) and rewrite it as

    i q_t = k² q,    q_xx = −k² q,

which is a separation of variables, since there is a factorized solution of the last two equations:

    q_k(x, t) = e^{−ikx − ik²t}.
Notice that this gives a solution to (4.1) ∀k ∈ C. Because our equation is linear, the following
integral is also a solution to the equation (4.1):

    q(x, t) = ∫_L e^{−ikx − ik²t} ρ(k) dk,    (4.2)

where L is some contour in the complex k-plane, and the function ρ(k) ('spectral data') can
be expressed in terms of certain integral transforms of q1(x) and q2(t), in order to satisfy the
initial data.
This is just a simple demonstration of the method of Separation of Variables, also called
the Ehrenpreis principle when applied to this kind of problem. It is interesting to note that
all solutions of (4.1) can be given by (4.2) with an appropriate choice of the contour L and
the function ρ(k).
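The factorized solutions q_k above can be checked directly. The sketch below verifies i q_t + q_xx = 0 at one point by central finite differences (the values of k, x, t, and the step h are arbitrary numerical choices):

```python
import cmath

def q(x, t, k=1.3):
    """Separated solution q_k(x, t) = exp(-i k x - i k^2 t)."""
    return cmath.exp(-1j * k * x - 1j * k * k * t)

x, t, h = 0.4, 0.2, 1e-4
q_t = (q(x, t + h) - q(x, t - h)) / (2 * h)
q_xx = (q(x + h, t) - 2 * q(x, t) + q(x - h, t)) / h**2
assert abs(1j * q_t + q_xx) < 1e-5
```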

4.3 SoV for a quantum problem


Now consider another simple problem that comes from quantum mechanics, namely the
linear spectral problem for the stationary Schrödinger operator describing bound states of
the 2-dimensional harmonic oscillator. That is, consider ψ(x1, x2) ∈ L²(R²) which is an
eigenfunction of the following differential operator:

    H = −(∂²/∂x1² + ∂²/∂x2²) + x1² + x2²,

    Hψ(x1, x2) = hψ(x1, x2).
This problem can be solved by the straightforward application of the method of SoV without
any intermediate transformations. We get
    H1 ψ = (−∂²/∂x1² + x1²) ψ(x1, x2) = h1 ψ(x1, x2),

    H2 ψ = (−∂²/∂x2² + x2²) ψ(x1, x2) = h2 ψ(x1, x2),

    h1 + h2 = h.
Then

    ψ(x1, x2) = ψ(x1) ψ(x2),

where

    (−∂²/∂x_i² + x_i²) ψ(x_i) = h_i ψ(x_i).
The square-integrable solutions are expressed in terms of the Hermite polynomials:

    ψ(x_i) ∈ L²(R)  ⇔  ψ(x_i) = e^{−x_i²/2} H_{n_i}(x_i),    h_i = 2n_i + 1,  n_i = 0, 1, 2, . . . .
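The 1-dimensional statement can be verified numerically. The sketch below checks −ψ′′ + x²ψ = (2n + 1)ψ for ψ = e^{−x²/2} H_n(x) at one point, using the Hermite recurrence and finite differences (the values n = 3, x = 0.6, and the step size are ad hoc):

```python
import math

def hermite(n, x):
    """H_n(x) via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def psi(n, x):
    """Oscillator eigenfunction exp(-x^2/2) H_n(x)."""
    return math.exp(-x * x / 2) * hermite(n, x)

n, x, step = 3, 0.6, 1e-4
lhs = (-(psi(n, x + step) - 2 * psi(n, x) + psi(n, x - step)) / step**2
       + x * x * psi(n, x))
assert abs(lhs - (2 * n + 1) * psi(n, x)) < 1e-4
```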

Notice that
[H1 , H2 ] = 0.
Hence, we get the basis in L²(R²) of the form

    ψ_{n1 n2}(x1, x2) = e^{−(x1² + x2²)/2} H_{n1}(x1) H_{n2}(x2),

h = 2(n1 + n2 ) + 2.
The functions {ψ_{n1 n2}} constitute an orthogonal set of functions in R²:

    ∫_{R²} ψ_{n1 n2}(x1, x2) ψ_{m1 m2}(x1, x2) dx1 dx2 = a_{n1 n2} δ_{n1 m1} δ_{n2 m2}.

Every function f ∈ L²(R²) can be decomposed into a series with respect to these basis
functions:

    f(x1, x2) = Σ_{m,n=0}^∞ f_{mn} ψ_{mn}(x1, x2).

4.4 SoV and integrability


As we have seen, SoV can provide a very constructive way of finding general solutions of
some PDEs. Of course, a PDE in question should possess some special property to allow
application of the above technique. This extra quality is called integrability. In very rough
terms it means the existence of several commuting operators, like H1 and H2 above. This new
notion will become clearer in the next example.
Now we can give a slightly more precise definition of SoV, when applied to spectral
problems like the one in the previous subsection.

Definition 4.2 SoV is a transformation of the multi-dimensional spectral problem into a


set of 1-dimensional spectral problems.

4.5 Another SoV for the quantum problem


It might be surprising at first, but SoV is not unique. To demonstrate this, let us construct
another solution of the same oscillator problem.
Consider the function Θ(u):

    Θ(u) = x1²/(u − a) + x2²/(u − b) − 1 = −(u − u1)(u − u2)/((u − a)(u − b)),    (4.3)

where u, a, b ∈ R are some parameters, and u1, u2 are the zeros of Θ(u). Taking the residues at
u = a and u = b in both sides of (4.3), we have

    x1² = (u1 − a)(u2 − a)/(b − a),    x2² = (u1 − b)(u2 − b)/(a − b).    (4.4)
The variables u1 and u2 are called elliptic coordinates in R², because by definition they
satisfy the equation

    x1²/(u − a) + x2²/(u − b) = 1    (4.5)

with the roots u1, u2 given by the equations

    u1 + u2 = a + b + x1² + x2²,    u1 u2 = ab + b x1² + a x2².

Let all variables satisfy the inequalities

a < u1 < b < u2 < ∞. (4.6)
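Relations (4.3)-(4.6) are easy to check numerically: the elliptic coordinates are the roots of the quadratic with the sum and product given above. A small sketch (the values of a, b, x1, x2 are arbitrary):

```python
import math

a, b = 1.0, 2.0
x1, x2 = 0.6, 0.8

s = a + b + x1**2 + x2**2            # u1 + u2
p = a * b + b * x1**2 + a * x2**2    # u1 * u2
disc = math.sqrt(s * s - 4 * p)
u1, u2 = (s - disc) / 2, (s + disc) / 2

assert a < u1 < b < u2                                        # ordering (4.6)
assert abs(x1**2 - (u1 - a) * (u2 - a) / (b - a)) < 1e-12     # residue relation (4.4)
assert abs(x1**2 / (u1 - a) + x2**2 / (u1 - b) - 1) < 1e-9    # (4.5) at u = u1
```

With these numbers u1 = 1.2 and u2 = 2.8, which indeed straddle b = 2 as (4.6) requires.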

Now introduce the functions

    ψ_{~λ}(x1, x2) := x1^{k1} x2^{k2} e^{−(x1² + x2²)/2} Π_{i=1}^{n} Θ(λ_i),

where k_i = 0, 1 and λ1, . . . , λn ∈ R are indeterminates.



Theorem 4.3 The functions ψ_{~λ}(x1, x2) are eigenfunctions of the operator H iff the {λ_i} satisfy the
following algebraic equations:

    Σ_{j≠i} 1/(λ_i − λ_j) − 1/2 + (k1/2 + 1/4)/(λ_i − a) + (k2/2 + 1/4)/(λ_i − b) = 0,    i = 1, . . . , n.    (4.7)

The parameters λ_i have the following properties (generalized Stieltjes theorem): i) they are
simple, ii) they lie on the real axis inside the intervals (4.6), iii) they are the
critical points of the function |G|,

    G(λ1, . . . , λn) = exp(−(λ1 + · · · + λn)/2) Π_{p=1}^{n} (λ_p − a)^{k1/2 + 1/4} (λ_p − b)^{k2/2 + 1/4} Π_{r>p} (λ_r − λ_p).    (4.8)

This Theorem gives another basis for the oscillator problem. In this case we have two
commuting operators G1 and G2 :
    G1 = −∂²/∂x1² + x1² + (1/(a − b)) (x1 ∂/∂x2 − x2 ∂/∂x1)²,

    G2 = −∂²/∂x2² + x2² − (1/(a − b)) (x1 ∂/∂x2 − x2 ∂/∂x1)²,

    [G1, G2] = 0,
which are diagonal on this basis. Notice that

H = G1 + G2 .

5 Integrable systems and special functions


5.1 Introduction
Separation of variables (SoV) for linear partial differential operators of two variables D_{x1,x2}
can be defined by the following procedure. Assume that by some transformation we could
transform the operator D_{x1,x2} into the form

    D_{x1,x2} ↦ D_{y1,y2} = (1/(ϕ1(y1) − ϕ2(y2))) (D^{(1)}_{y1} − D^{(2)}_{y2}),    (5.1)

where the ϕ_i(y_i) are some functions of one variable and the D^{(i)}_{y_i} are some ordinary differential
operators. It could be done by changing variables (a coordinate transform) {x} ↦ {y}, but it
could also involve an integral transform. Then, we can introduce another operator G_{y1,y2}
such that

    D^{(1)}_{y1} − ϕ1(y1) D_{y1,y2} = G_{y1,y2} = D^{(2)}_{y2} − ϕ2(y2) D_{y1,y2}.
Notice that D and G commute:

    [D, G] = 0.

The operator G that appears in the procedure of SoV is called the operator of a constant of
separation.
The above definition extends easily to the case of more variables. The essential step is to
keep bringing the operator into the 'separable form' (5.1), which allows one to introduce more and
more operators of constants of separation. If one can break the operator down to a set
of one-variable operators, then the separation of variables is done. This obviously requires that
the number of operators G be equal to the number of variables minus 1, and also that
they commute among themselves and with the operator D. The latter condition defines an
integrable system. So, we can say that a necessary condition for an operator to be separable is
that it can be supplemented by a full set of mutually commuting operators (G); in other
words, the operator has to belong to an integrable family of operators.
As we have seen in the previous Lecture, special functions of one variable often appear
when one separates variables in linear PDEs in an attempt to find a general solution in terms of
a large set of factorized partial (or separated) solutions. Usually, the completeness of the set
of separated solutions can also be proved, so that we can indeed expand any solution of our
equation, which is a multi-variable 'special function', into a basis of separated (one-variable)
special functions.
There are two aspects of this procedure. First, the separated functions of one variable
satisfy ODEs, so that we can, in principle, 'classify' the initial multi-variable special
function by the procedure of separation of variables and by the type of the resulting ODEs.
It is clear that when some regularity conditions are imposed on the class of transformations
allowed in a SoV, one should expect a good correspondence between the complexity of both
functions, the multivariable one and any of the corresponding one-variable special functions.
In the example of the isotropic harmonic oscillator, we first had a trivial separation of variables
(in Cartesian coordinates), which gave us a basis as a product of Hermite polynomials.
Hence, we might conclude that the operator H is, in a sense, a two-dimensional analogue of
the hypergeometric differential operator, because one of its separated bases is given in terms
of hypergeometric functions (Hermite polynomials).
Curiously, the second separation, in elliptic coordinates, led to functions of the Heun
type, which lie beyond the hypergeometric class. The explanation of this seeming contradiction is that the operator H is 'degenerate' in the sense that it separates in many coordinate
systems. To avoid this degeneracy, one could perturb the operator by adding some
terms that break one separation but still allow the other.
Therefore, generically, if an operator can be separated at all, it separates by a unique
transformation, leading to a unique set of separated special functions of one variable.
The second aspect of the problem is understanding the sufficient conditions for separability:
which integrable operators can be separated and which cannot? A very
close question is: what class of transformations should be allowed when trying to separate
an operator?
In order to demonstrate the point about the class of transformations, take the square of the
Laplace operator:

    (∂²/∂x1² + ∂²/∂x2²)².
Of course, this operator is integrable, although there is a theorem saying that one cannot
separate this operator by any coordinate transform {x} ↦ {y}:

    y1 = y1(x1, x2),    y2 = y2(x1, x2).

It means that although one can find some partial solutions in the factorized form, they will
never form a basis. There is no such statement if one allows integral transforms, which
means that this operator might still be separable in a more general sense.
It is interesting to note that the operators separable through a change of coordinates,
although very important in applications, constitute a very small subclass
of all separable operators. Correspondingly, the class of special functions of many variables
related to integrable systems is much larger than its subclass reducible to
one-variable special functions by a coordinate change of variables.
Below we give an example of an integrable system that cannot be separated by a
coordinate change of variables, but is neatly separable by a special integral transform.

5.2 Calogero-Sutherland system


In this subsection, an integral operator M is constructed performing a separation of variables
for the 3-particle quantum Calogero-Sutherland (CS) model. Under the action of M the CS
eigenfunctions (Jack polynomials for the root system A2 ) are transformed to the factorized
form ϕ(y1 )ϕ(y2 ), where ϕ(y) is a trigonometric polynomial of one variable expressed in terms
of the 3 F2 hypergeometric series. The inversion of M produces a new integral representation
for the A2 Jack polynomials.
The set of commuting differential operators defining the integrable system called the (3-particle) Calogero-Sutherland model is generated by the following partial differential operators:

    H1 = −i(∂1 + ∂2 + ∂3),

    H2 = −(∂1∂2 + ∂1∂3 + ∂2∂3) − g(g − 1)(sin^{−2} q12 + sin^{−2} q13 + sin^{−2} q23),

    H3 = i∂1∂2∂3 + ig(g − 1)(sin^{−2} q23 ∂1 + sin^{−2} q13 ∂2 + sin^{−2} q12 ∂3),

    qij = qi − qj,    ∂i = ∂/∂qi,
or, by the equivalent set, acting on Laurent polynomials in the variables tj = e^{2iqj}, j = 1, 2, 3:

    H̃1 = −i(∂1 + ∂2 + ∂3),

    H̃2 = −(∂1∂2 + ∂1∂3 + ∂2∂3) + g[cot q12 (∂1 − ∂2) + cot q13 (∂1 − ∂3) + cot q23 (∂2 − ∂3)] − 4g²,

    H̃3 = i∂1∂2∂3 − ig[cot q12 (∂1 − ∂2)∂3 + cot q13 (∂1 − ∂3)∂2 + cot q23 (∂2 − ∂3)∂1]
        + 2ig²[(1 + cot q12 cot q13)∂1 + (1 − cot q12 cot q23)∂2 + (1 + cot q13 cot q23)∂3],

the vacuum function being

    Ω(~q) = |sin q12 sin q13 sin q23|^g.    (5.2)

Their eigenvectors Ψ_{~n}, resp. J_{~n},

    Ψ_{~n}(~q) = Ω(~q) J_{~n}(~q),    (5.3)

are parametrized by the triplets of integers {n1 ≤ n2 ≤ n3} ∈ Z³, the corresponding eigenvalues being

    h1 = 2(m1 + m2 + m3),  h2 = 4(m1m2 + m1m3 + m2m3),  h3 = 8m1m2m3,    (5.4)

where

    m1 = n1 − g,  m2 = n2,  m3 = n3 + g.    (5.5)

5.3 Integral transform


We will denote the separating operator acting on Ψ~n as K, and the one acting on the Jack
polynomials J~n as M .
To describe both operators, let us introduce the following notation.
x1 = q 1 − q 3 , x 2 = q2 − q3 , Q = q3 ,
x± = x1 ± x2 , y± = y1 ± y2 .
We shall study the action of K locally, assuming that q1 > q2 > q3 and hence x+ > x− .
The operator K : Ψ(q1, q2, q3) ↦ Ψ̃(y1, y2; Q) is defined as an integral operator

    Ψ̃(y1, y2; Q) = ∫_{y−}^{y+} dξ K(y1, y2; ξ) Ψ((y+ + ξ)/2 + Q, (y+ − ξ)/2 + Q, Q)    (5.6)

with the kernel

    K = κ [ sin((ξ + y−)/2) sin((ξ − y−)/2) sin((y+ + ξ)/2) sin((y+ − ξ)/2) / (sin y1 sin y2 sin ξ) ]^{g−1},    (5.7)

where κ is a normalization coefficient to be fixed later. It is assumed in (5.6) and (5.7) that
y− < x− = ξ < y+ = x+ . The integral converges when g > 0 which will always be assumed
henceforth.
The motivation for such a choice of K comes from considering the problem in the
classical limit (g → ∞), where there exists an effective prescription for constructing a separation
of variables for an integrable system from the poles of the so-called Baker-Akhiezer function.
Theorem 5.1 Let Hk Ψ_{n1n2n3} = hk Ψ_{n1n2n3}. Then the function Ψ̃_{~n} = KΨ_{~n} satisfies the
differential equations

    Q Ψ̃_{~n} = 0,    Yj Ψ̃_{~n} = 0,  j = 1, 2,    (5.8)

where

    Q = −i∂Q − h1,    (5.9)

    Yj = i∂_{yj}³ + h1 ∂_{yj}² − i(h2 + 3g(g − 1)/sin² yj) ∂_{yj}
       − (h3 + h1 g(g − 1)/sin² yj + 2ig(g − 1)(g − 2) cos yj/sin³ yj).    (5.10)

The proof is based on the following proposition.

Proposition 5.2 The kernel K satisfies the differential equations

    [−i∂Q − H1*] K = 0,

    [i∂_{yj}³ + H1* ∂_{yj}² − i(H2* + 3g(g − 1)/sin² yj) ∂_{yj}
      − (H3* + H1* g(g − 1)/sin² yj + 2ig(g − 1)(g − 2) cos yj/sin³ yj)] K = 0,

where Hn* is the Lagrange adjoint of Hn,

    ∫ ϕ(q)(Hψ)(q) dq = ∫ (H*ϕ)(q) ψ(q) dq:

    H1* = i(∂q1 + ∂q2 + ∂q3),

    H2* = −∂q1∂q2 − ∂q1∂q3 − ∂q2∂q3 − g(g − 1)[sin^{−2} q12 + sin^{−2} q13 + sin^{−2} q23],

    H3* = −i∂q1∂q2∂q3 − ig(g − 1)[sin^{−2} q23 ∂q1 + sin^{−2} q13 ∂q2 + sin^{−2} q12 ∂q3].

The proof is given by a direct, though tedious, calculation.
To complete the proof of Theorem 5.1, consider the expressions QKΨ_{~n} and Yj KΨ_{~n}
using the formulas (5.6) and (5.7) for K. The idea is to use the fact that Ψ_{~n} is an eigenfunction of Hk and replace hk Ψ_{~n} by Hk Ψ_{~n}. After integration by parts in the variable ξ the
operators Hk are replaced by their adjoints Hk* and the result is zero by virtue of Proposition
5.2.
The following theorem gives the separation of variables.
Theorem 5.3 The function Ψ̃_{n1n2n3} is factorized:

    Ψ̃_{n1n2n3}(y1, y2; Q) = e^{ih1Q} ψ_{n1n2n3}(y1) ψ_{n1n2n3}(y2).    (5.11)

The factor ψ_{~n}(y) allows further factorization

    ψ_{~n}(y) = (sin y)^{2g} ϕ_{~n}(y),    (5.12)

where ϕ_{~n}(y) is a Laurent polynomial in t = e^{2iy}:

    ϕ_{~n}(y) = Σ_{k=n1}^{n3} t^k c_k(~n; g).    (5.13)

The coefficients c_k(~n; g) are rational functions of k, nj and g. Moreover, ϕ_{~n}(y) can be
expressed explicitly in terms of the hypergeometric function 3F2 as

    ϕ_{~n}(y) = t^{n1} (1 − t)^{1−3g} 3F2(a1, a2, a3; b1, b2; t),    (5.14)

where

    aj = n1 − n_{4−j} + 1 − (4 − j)g,    bj = aj + g.    (5.15)

Note that, by virtue of Theorem 5.1, the function Ψ̃_{~n}(y1, y2; Q) satisfies an ordinary
differential equation in each variable. Since Qf = 0 is a first-order differential equation
having a unique, up to a constant factor, solution f(Q) = e^{ih1Q}, the dependence on Q is
factorized. However, the differential equations Yj ψ(yj) = 0 are of third order and have
three linearly independent solutions. To prove Theorem 5.3 one thus needs to study the
ordinary differential equation

    [i∂_y³ + h1 ∂_y² − i(h2 + 3g(g − 1)/sin² y) ∂_y
      − (h3 + h1 g(g − 1)/sin² y + 2ig(g − 1)(g − 2) cos y/sin³ y)] ψ = 0    (5.16)

and to select its special solution corresponding to Ψ̃.


The proof will take several steps. First, let us eliminate from Ψ and Ψ̃ the vacuum factors
Ω, see (5.3), and, respectively,

    Ψ̃(y1, y2; Q) = ω(y1) ω(y2) J̃(y1, y2; Q),    ω(y) = sin^{2g} y.    (5.17)

Conjugating the operator K with the vacuum factors,

    M = ω1^{−1} ω2^{−1} K Ω : J ↦ J̃,    (5.18)

we obtain the integral operator

    J̃(y1, y2; Q) = ∫_{y−}^{y+} dξ M(y1, y2; ξ) J((y+ + ξ)/2 + Q, (y+ − ξ)/2 + Q, Q)    (5.19)
with the kernel

    M(y1, y2; ξ) = K(y1, y2; ξ) Ω((y+ + ξ)/2 + Q, (y+ − ξ)/2 + Q, Q)/(ω(y1) ω(y2))

      = κ sin ξ [sin((ξ + y−)/2) sin((ξ − y−)/2)]^{g−1} [sin((y+ + ξ)/2) sin((y+ − ξ)/2)]^{2g−1} / [sin y1 sin y2]^{3g−1}.    (5.20)
Proposition 5.4 Let S be a trigonometric polynomial in qj, i.e. a Laurent polynomial in tj =
e^{2iqj}, which is symmetric w.r.t. the transposition q1 ↔ q2. Then S̃ = MS is a trigonometric
polynomial symmetric w.r.t. y1 ↔ y2.

5.4 Separated equation


To complete the proof of Theorem 5.3 we need to learn more about the separated equation
(5.16).
Eliminating from ψ the vacuum factor ω(y) = sin^{2g} y via the substitution ψ(y) = ϕ(y)ω(y), one obtains

    [i∂_y³ + (h1 + 6ig cot y) ∂_y²
      + (−i(h2 + 12g²) + 4g h1 cot y + 3ig(3g − 1) sin^{−2} y) ∂_y
      + (−(h3 + 4g²h1) − 2ig(h2 + 4g²) cot y + g(3g − 1) h1 sin^{−2} y)] ϕ = 0.    (5.21)

The change of variable t = e^{2iy} brings the last equation to the Fuchsian form:

    [∂_t³ + w1 ∂_t² + w2 ∂_t + w3] ϕ = 0,    (5.22)

where

    w1 = −(3(g − 1) + h1/2)/t + 6g/(t − 1),

    w2 = ((3g² − 3g + 1) + (2g − 1)h1/2 + h2/4)/t² + 3g(3g − 1)/(t − 1)² − g(9(g − 1) + 2h1)/(t(t − 1)),

    w3 = −(g³ + g²h1/2 + g h2/4 + h3/8)/t³ + (g/2)((h2 + 4g²)(t − 1) − (3g − 1)h1)/(t²(t − 1)²).

The points t = 0, 1, ∞ are regular singularities with the exponents

    t ∼ 1:   ϕ ∼ (t − 1)^µ,   µ ∈ {−3g + 2, −3g + 1, 0},
    t ∼ 0:   ϕ ∼ t^ρ,         ρ ∈ {n1, n2 + g, n3 + 2g},
    t ∼ ∞:   ϕ ∼ t^{−σ},      −σ ∈ {n1 − 2g, n2 − g, n3}.

The equation (5.22) is reduced by the substitution ϕ(t) = t^{n1}(1 − t)^{1−3g} f(t) to the standard
3F2 hypergeometric form

[t∂t (t∂t + b1 − 1)(t∂t + b2 − 1) − t(t∂t + a1 )(t∂t + a2 )(t∂t + a3 )] f = 0, (5.23)

the parameters a1 , a2 , a3 , b1 , b2 being given by the formulas (5.15) which read

a1 = n1 − n3 + 1 − 3g, a2 = n1 − n2 + 1 − 2g, a3 = 1 − g,

b1 = n1 − n3 + 1 − 2g, b2 = n1 − n2 + 1 − g.

Proposition 5.5 Let the parameters hk be given by (5.4), (5.5) for a triplet of integers
{n1 ≤ n2 ≤ n3 } and g 6= 1, 0, −1, −2, . . .. Then the equation (5.22) has a unique, up to a
constant factor, Laurent-polynomial solution
    ϕ(t) = Σ_{k=n1}^{n3} t^k c_k(~n; g),    (5.24)

the coefficients ck (~n; g) being rational functions of k, nj and g.


Proof. Consider first the hypergeometric series for 3F2, which converges for |t| < 1. Using
for the aj and bj the expressions (5.15), one notes that a_{j+1} = bj + n_{4−j} − n_{3−j} and therefore

    (a_{j+1})_k/(bj)_k = (bj + k)_{n_{4−j}−n_{3−j}}/(bj)_{n_{4−j}−n_{3−j}}.

The expression

    (a2)_k (a3)_k/((b1)_k (b2)_k) = (b1 + k)_{n3−n2} (b2 + k)_{n2−n1}/((b1)_{n3−n2} (b2)_{n2−n1}) = P_{n3−n1}(k)

is thus a polynomial in k of degree n3 − n1 . So we have



X (a1 )k
3 F2 (a1 , a2 , a3 ; b1 , b2 ; t) = Pn3 −n1 (k)tk
k=0
k!

from which it follows that

3F2(a1, a2, a3; b1, b2; t) = P̃_{n3−n1}(t) (1 − t)^{3g−1},

where P̃_{n3−n1}(t) is a polynomial of degree n3 − n1 in t. Indeed, the left-hand side equals
P_{n3−n1}(t∂t) applied to the binomial series (1 − t)^{−a1}; each application of t∂t lowers the
exponent of (1 − t) by one, and −a1 − (n3 − n1) = 3g − 1.
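The Pochhammer-ratio step above rests on the identity (b + d)_k/(b)_k = (b + k)_d/(b)_d for a non-negative integer d (here d = n_{4−j} − n_{3−j}), which follows directly from (a)_k = Γ(a + k)/Γ(a). A minimal numerical sketch (the test values below are illustrative, not from the notes):

```python
from math import gamma, isclose

def poch(a, k):
    """Pochhammer symbol (a)_k = Gamma(a + k)/Gamma(a)."""
    return gamma(a + k) / gamma(a)

# If a = b + d with d a non-negative integer, then
#   (a)_k / (b)_k = (b + k)_d / (b)_d,
# a ratio that is a polynomial in k of degree d -- this is what makes
# (a2)_k (a3)_k / ((b1)_k (b2)_k) a polynomial in k of degree n3 - n1.
b, d, k = 1.7, 3, 5   # illustrative values; d plays the role of n_{4-j} - n_{3-j}
a = b + d
lhs = poch(a, k) / poch(b, k)
rhs = poch(b + k, d) / poch(b, d)
assert isclose(lhs, rhs)
```

Both sides reduce to Γ(b + d + k)Γ(b) / [Γ(b + d)Γ(b + k)], so the agreement is exact up to floating-point roundoff.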


To prove now Proposition 5.5 it is sufficient to notice that the hypergeometric series
3F2(a1, a2, a3; b1, b2; t) satisfies the same equation (5.23) as f(t); multiplying back by the
factor t^{n1}(1 − t)^{1−3g} therefore yields the Laurent polynomial ϕ(t) = t^{n1} P̃_{n3−n1}(t),
which satisfies the equation (5.22). The uniqueness follows from the fact that all the linearly
independent solutions to (5.22) are nonpolynomial, which is seen from the characteristic exponents.

Now everything is ready to finish the proof of Theorem 5.3. Since the function J̃_{n1 n2 n3}(y1, y2; Q)
satisfies (5.21) in the variables y1,2 and is a Laurent polynomial, it necessarily has the
factorized form

J̃_{n1 n2 n3}(y1, y2; Q) = e^{i h1 Q} ϕ_{n1 n2 n3}(y1) ϕ_{n1 n2 n3}(y2) (5.25)

by virtue of Proposition 5.5.

5.5 Integral representation for Jack polynomials


The formula (5.25) presents an interesting opportunity to construct a new integral
representation of the Jack polynomial J_{~n} in terms of the 3F2 hypergeometric polynomials
ϕ_{~n}(y) constructed above. To achieve this goal, it is necessary to invert explicitly the
operator M : J ↦ J̃.
It is possible to show that this problem (after changing variables) is equivalent to the
problem of finding an inverse of the following integral transform:

s̃(η−) = ∫_{η−}^{η+} [(ξ− − η−)^{g−1}/Γ(g)] s(ξ−) dξ−, (5.26)

which is known as the Riemann–Liouville integral of fractional order g. Its inversion is
formally given by changing the sign of g,

s(ξ−) = ∫_{ξ−}^{ξ+} [(η− − ξ−)^{−g−1}/Γ(−g)] s̃(η−) dη−, (5.27)

and is called the fractional differentiation operator. We will not give the details of this
calculation, just the final result. The formula for M^{−1} : J̃ ↦ J is

J(x+, x−; Q) = ∫_{x−}^{x+} dy− M̌(x+, x−; y−) J̃(x+, y−; Q), (5.28)

M̌ = κ̌ [sin y− sin((x+ + y−)/2) sin((x+ − y−)/2)]^{3g−1} / ([sin((y− + x−)/2) sin((y− − x−)/2)]^{g+1} [sin x1 sin x2]^{2g−1}), (5.29)

where

κ̌ = Γ(2g)/(2Γ(−g)Γ(3g)). (5.30)

The operators M (and M^{−1}) are normalized by M : 1 ↦ 1.
For the kernel of K^{−1} we have, respectively,

Ǩ = κ̌ sin^g x− [sin y− sin((x+ + y−)/2) sin((x+ − y−)/2)]^{g−1} / ([sin((y− + x−)/2) sin((y− − x−)/2)]^{g+1} [sin x1 sin x2]^{g−1}). (5.31)

The formulas (5.25), (5.28), (5.29) provide a new integral representation for the Jack
polynomial J_{~n} of three variables in terms of the 3F2 hypergeometric polynomials ϕ_{~n}(y).
It is remarkable that for positive integer g the operators K^{−1}, M^{−1} become differential
operators of order g. In particular, for g = 1 we have K^{−1} = ∂/∂y−.
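The fractional-integration kernel of (5.26) can be probed numerically. As a sketch (assuming, for simplicity, the Weyl variant with the upper limit pushed to ∞ rather than the finite η+ of (5.26)), the function e^{−ξ} is reproduced unchanged by the order-g integral for every g > 0:

```python
import mpmath as mp

def weyl_integral(s, g, eta):
    """Weyl-type fractional integral of order g > 0 with the kernel
       (xi - eta)^(g-1)/Gamma(g) of (5.26), upper limit at infinity."""
    return mp.quad(lambda xi: (xi - eta)**(g - 1) / mp.gamma(g) * s(xi),
                   [eta, mp.inf])

g = mp.mpf('0.5')
eta = mp.mpf('1.0')
# exp(-xi) is a fixed point of the transform for every order g,
# since int_0^oo u^(g-1) exp(-u) du = Gamma(g):
val = weyl_integral(lambda xi: mp.exp(-xi), g, eta)
print(val, mp.exp(-eta))  # both approximately 0.3678794
```

For g = 1 the kernel is identically 1/Γ(1) = 1 and the transform is ordinary integration, consistent with the remark above that its inverse is a first-order differential operator.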

References
[1] George E. Andrews, Richard Askey, and Ranjan Roy. Special functions. Cambridge
University Press, Cambridge, 1999.

[2] Willard Miller, Jr. Lie Theory and Special Functions. Academic Press, New York, 1968.
Mathematics in Science and Engineering, Vol. 43.

[3] N. Ja. Vilenkin. Special Functions and the Theory of Group Representations. American
Mathematical Society, Providence, R. I., 1968. Translated from the Russian by V. N.
Singh. Translations of Mathematical Monographs, Vol. 22.
Index
(Gauss) hypergeometric function, 11
(generalized) hypergeometric series, 10

Chebyshev polynomials, 20
Christoffel-Darboux formula, 19
classical orthogonal polynomials, 20

fractional linear transformation, 16

gamma function, 4
Gauss quadrature, 19
Gegenbauer polynomials, 20

heat equation, 22
Hermite polynomials, 20
hypergeometric differential equation, 15
hypergeometric function, 2
hypergeometric series, 1

integrable system, 26
integral transform, 22, 25

Jacobi polynomial, 18

Laguerre polynomials, 20
Legendre polynomials, 20

Rodrigues formula, 20

Separation of Variables, 22
SoV, 24
