Sequences and Series Overview
Mike Prest¹
School of Mathematics
Alan Turing Building
Room 1.120
¹ This set of notes is a slightly modified version of notes developed by Prof. J. T. Stafford and, before him, Prof. A. J. Wilkie.
Contents
0.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1 Before We Begin 4
1.1 Some Reminders about Mathematical Notation . . . . . . . . . . . . . . . . . . . 4
1.1.1 Special sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Set theory notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 Logical notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.4 Greek letters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.5 Structure of the course . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.6 Where we’re headed and some things we’ll see on the way . . . . . . . . . 6
1.1.7 Reading outside the Course Notes . . . . . . . . . . . . . . . . . . . . . . 7
1.1.8 Basic properties of the real numbers . . . . . . . . . . . . . . . . . . . . . 7
1.1.9 The Integer Part (or ‘Floor’) Function . . . . . . . . . . . . . . . . . . . . 9
I Sequences 11
2 Convergence 12
2.1 What is a Sequence? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 The Triangle Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 The Definition of Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 The Completeness Property for R . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Some General Theorems about Convergence . . . . . . . . . . . . . . . . . . . . . 19
2.6 Exponentiation - a digression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5 Divergence 37
5.1 Sequences that Tend to Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6 Subsequences 40
6.1 The Subsequence Test for Non-Convergence . . . . . . . . . . . . . . . . . . . . . 40
6.2 Cauchy Sequences and the Bolzano-Weierstrass Theorem . . . . . . . . . . . . . . 42
6.2.1 Proofs for the section - optional . . . . . . . . . . . . . . . . . . . . . . . . 42
7 L’Hôpital’s Rule 46
7.1 L’Hôpital’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
II Series 49
8 Introduction to Series 50
8.1 The Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
11 Power Series 68
11.1 The Radius of Convergence of a Power Series . . . . . . . . . . . . . . . . . . . . 69
0.1 Introduction
Maybe you can see that
1 + 1/2 + 1/4 + 1/8 + · · · + 1/2^n + · · · = 2
and even that
1 + 1/3 + 1/9 + 1/27 + · · · + 1/3^n + · · · = 3/2.
But what exactly do these formulas mean? What does it mean to add infinitely many numbers
together? Is that even meaningful?
You might recognize the numbers above as being terms of geometric progressions and know
the relevant general formula, for |x| < 1,
1 + x + x^2 + x^3 + · · · + x^n + · · · = 1/(1 − x).
But how do we prove this? And what if |x| ≥ 1?
In this course we shall answer these, and related, questions. In particular, we shall give a
rigorous definition of what it means to add up infinitely many numbers and then we shall find
rules and procedures for finding the sum in a wide range of particular cases.
Here are some more, rather remarkable, such formulas:
1 + 1/4 + 1/9 + 1/16 + · · · + 1/n^2 + · · · = π^2/6,
1 − 1/2 + 1/3 − 1/4 + · · · + (−1)^{n+1}/n + · · · = log 2,
1 + 1/2 + 1/3 + 1/4 + · · · + 1/n + · · · = ∞.
We shall prove the second and third of these formulas in this course unit, but the first one is
too difficult and will be done in your lectures on real and complex analysis in the second year.
Here, “real analysis” means the study of functions from real numbers to real numbers, from the
point of view of developing a rigorous foundation for calculus (differentiation and integration)
and for other infinite processes.1 The study of sequences and series is the first step in this
programme.
This also means there are two contrasting aspects to this course. On the one hand we will
develop the machinery to produce formulas like the ones above. On the other hand it is also
crucial to understand the theory that lies behind that machinery. This rigorous approach forms
the second aspect of the course—and is in turn the first step in providing a solid foundation for
real analysis.
¹ and “complex analysis” refers to the complex numbers, not (necessarily) complexity in the sense of “complicated”!
Chapter 1
Before We Begin
1.1.4 Greek letters
The two most commonly used Greek letters in this course are δ (delta) and ε (epsilon). They
are reserved exclusively for (usually small) positive real numbers.
Others are α (alpha), β (beta), γ (gamma), λ (lambda), θ (theta - usually an angle), η (eta)
and Σ (capital sigma - the summation sign which will be used when we come to study series in
Part II).
Tutorials: These start on 5th February and are at the following times/places; you will
be assigned to one of the five tutorial groups.
The tutorials start in Week 2 and, typically, Week n tutorials concentrate on the subject matter
introduced in Week n − 1.
I will put the weekly exercise sheets on my teaching webpage. (Complete course
notes and some links are already there.) It is very important that you work at these sheets before
the weekly tutorials; try to have a serious attempt at all the problems. But don’t waste time
going round in circles: try to recognise when you’re doing that, have a break/do something else
and, maybe, when you come back to the question you’ll see your way around what seemed to be
a problem. In general, for each lecture or tutorial hour, you should expect to spend two to three
hours working at understanding the material and testing/strengthening your understanding by
doing problems. It is not enough to read and summarise the notes; it is only by testing your
understanding by doing examples that you will really understand the material. Moreover, the
exam questions will be similar to these problems!
When you get really stuck on something, discuss it with someone! As well as me and the
people who help in the tutorials, there are your fellow-students - they are an excellent resource
and bear in mind that one of the most useful exercises you can do is to try to explain something
to someone else.
Assessment:
2. Coursework carries 20% of the course weight and will consist of:
1.1.6 Where we’re headed and some things we’ll see on the way
In Part I we aim to understand the behaviour of infinite sequences of real numbers, meaning
what happens to the terms as we go further and further on in the sequence. Do the terms all
gradually get as close as we like to a limiting value (then the sequence is said to converge to that
value) or not? The “conceptual” aim here is to really understand what this means. To do that,
we have to be precise and avoid some plausible but misleading ideas. It’s worthwhile trying to
develop, and refine, your own “pictures” of what’s going on. We also have to understand the
precise definition well enough to be able to use it when we calculate examples, though we will
gradually build up a stock of general results (the “Algebra of Limits”), general techniques and
particular cases, so that we don’t have to think so hard when faced with the next example.
Part II is about “infinite sums” of real numbers: how we can make a sensible definition of
that vague idea and then how we can calculate the value of an infinite sum - if it exists. We also
need to be able to tell whether a particular “infinite sum” does or doesn’t make sense/exist.
Sequences appear here in two ways: first as the sequence of numbers to be “added up” (and the
order of adding up does matter, as we shall see); second as a crucial ingredient in the actual
definition of an “infinite sum” (“infinite series” is the official term). What we actually do is add
up just the first n terms of such an infinite series - call this value the n-th partial sum - and
then see what happens to this sequence (note: a sequence!) of partial sums as n gets bigger and bigger. If
that sequence of partial sums converges to a limit then that limit is what we define to be the
sum of the infinite series. Hopefully that makes sense to you and seems like it should be the
right definition to use. Anyway, it works and, again, we have the conceptual aspect to get to
grips with as well as various techniques that we can (try to) use in computations of examples.
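For instance (this is just an informal illustration, not part of the course material), a few lines of Python make the idea of partial sums concrete for the geometric series from the Introduction; partial_sum is a throwaway helper name:

    # Partial sums of 1 + 1/2 + 1/4 + ... ; the n-th partial sum adds up the first n terms.
    def partial_sum(n):
        return sum(1 / 2**k for k in range(n))

    for n in [1, 2, 5, 10, 20]:
        print(n, partial_sum(n))
    # The printed values 1.0, 1.5, 1.9375, ... creep up towards 2, the sum of the infinite series.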
Here are some of the things we prove about our concept of limit: a sequence can have at
most one limit; if a sequence is increasing but never gets beyond a certain value, then it has a
limit; if a sequence is squeezed between two other sequences which have the same limit l, then
it has limit l. These properties help clarify the concept and are frequently used in arguments
and calculations. We also show arithmetic properties like: if we have two sequences, each with
a limit, and produce a new sequence by adding corresponding terms, then this new sequence
has a limit which is, as you might expect, the sum of the limits of the two sequences we started
with.
Then we turn to methods of calculating limits. We compare standard functions (polynomials,
logs, exponentials, factorials, ...) according to how quickly they grow, but according to a very
coarse measure - their “order” of growth, rather than rate of growth (i.e. derivative). That lets
us see which functions in a complicated expression for the n-th term of a sequence are most
important in calculating the limit of the sequence. There will be lots of examples, so that you
can gain some facility in computing limits, and there are various helpful results, L’Hôpital’s
Rule being particularly useful.
While the properties of sequences are, at least once you’ve absorbed the concept, quite
natural, infinite series hold quite a few surprises and really illustrate the need to be careful
about definitions in mathematics (many mathematicians made errors by mishandling infinite
series, especially before the concepts were properly worked out in the 1800s). Given an infinite
series, there are two questions: does it have a sum? (then we say that it “converges”, meaning
that the sequence of partial sums has a limit - the value of that infinite sum) and, if so, what
is the value of the sum? There are a few series (e.g. a geometric series whose ratio has absolute value < 1) where
we can quite easily compute the value but, in general, this is hard. It is considerably easier to
determine whether a series has a sum or not by comparing it with a series we already know
about. Indeed, the main test for convergence that we will use, the Ratio Test, is basically
comparison with a geometric series.
Many infinite series that turn up naturally are “alternating”, meaning that the terms are
alternately positive and negative. So, in contrast with the corresponding series where all the
terms are made positive, there’s more chance of an alternating series converging, because
the terms partly cancel each other out. Indeed, remarkably, it’s certain provided the absolute
values of the individual terms shrink monotonically to 0!
We’ll finish with power series: infinite series where each term involves some variable x - you
could think of these as “infinite polynomials in x”. Whether or not a power series converges
(i.e. has a sum), depends very much on the value of x, convergence being more likely for smaller
values of x. In fact, the picture is as good as it could be: there’s a certain “radius of convergence”
R (which could be 0 or ∞ or anything in between, depending on the series) such that, within
that radius we have convergence, outside we have divergence, and on the boundary (x = ±R)
it could go either way for each boundary point (so we have to do some more work there).
precise. One method with which you should be familiar is to use infinite decimal expansions as
described in Section 13.3 of [PJE].
Here we extract some of the basic arithmetic and order properties of the reals.
First, just as for the set of rational numbers, R is a field. That means that it satisfies the
following conditions.
(A0) ∀a, b ∈ R one can form the sum a + b and the product a · b (also written as just ab). We
have that a + b ∈ R and a · b ∈ R;
(A3) ∀a ∈ R, a + 0 = 0 + a = a;
(A7) ∀a ∈ R, a · 1 = 1 · a = a;
These axioms (A0)-(A9) list the basic arithmetic/algebraic properties that hold in R and
from which all the other such properties may be deduced. Further identities, such as x · 0 = 0,
x · (−y) = −(x · y), −(−x) = x, (x + y)(x − y) = x^2 − y^2, (x + y)^2 = x^2 + 2xy + y^2, ... follow
from these axioms. Here we are using the usual notation: for a ∈ R, a^2 is an abbreviation for
a · a (similarly a^3 is an abbreviation for a · (a · a), etc.).
Example 1.1.1. For example, let us prove the last identity, (x + y)^2 = x^2 + 2xy + y^2, above.
So, we have
(x + y)^2 = (x + y) · (x + y) = x · (x + y) + y · (x + y) = (x · x + x · y) + (y · x + y · y) = x^2 + xy + xy + y^2 = x^2 + 2xy + y^2,
using distributivity, commutativity of multiplication and the convention that 2xy abbreviates xy + xy.
Actually we have also used A1 many times here. This allowed us to ignore brackets in
expressions involving many + symbols.
Note that if we replace R in these axioms by Q then they do hold; that is, Q is also a field
(but Z is not since it fails (A8)). The point of giving a name (“field”) to an “arithmetic system”
where these hold is that many more examples appear in mathematics, so it proved to be worth
isolating these properties and investigating their general implications. Other fields that you will
have encountered by now are the complex numbers and the integers modulo 5 (more generally,
modulo any prime). But the reals and rationals have some further properties not shared by
these latter examples.
Namely, the reals form an ordered field. This means that we have a total order relation
< on R. All the properties of this relation and how it interacts with the arithmetic operations
follow from the following axioms:
Example 1.1.2. Let us prove that if x is positive (i.e. 0 < x) then −x is negative (i.e. −x <
0). So suppose that 0 < x. Then by (Ord 3), 0 + (−x) < x + (−x). Simplifying we get
−x = 0 + (−x) < x + (−x) = 0, as required.
Other facts that follow from these properties include:
∀x ∈ R, x^2 ≥ 0,
∀x, y ∈ R, x ≤ y ⇔ −x ≥ −y, et cetera.
It is left as an exercise to prove these facts using just the axioms (Ord 1–4). Also see the first
exercise sheet.
However, there are more subtle facts about the real numbers that cannot be deduced from
the axioms discussed so far. For example, consider the theorem that √2 is irrational. This
really contains two statements: first that √2 ∉ Q (that is, no rational number squares to 2; this
was proved in MATH10101/10111); second that there really is such a number in R - there is a
(positive) solution to the equation X^2 − 2 = 0 in R. And this definitely is an extra property
of the reals—simply because it is not true for the rationals (which satisfy all the algebraic and
order axioms that we listed above).
So, we need to formulate a property of R that expresses that there are no “missing numbers”
(like the “missing number” in Q where √2 should be). Of course, we have to say what “numbers”
should be there, in order to make sense of saying that some of them are “missing”. The example
of √2 might suggest that we should have “enough” numbers so as to be able to take n-th roots
of positive numbers and perhaps to solve other polynomial equations, and following that idea
does lead to another field - the field of real algebraic numbers - but we have a much stronger
condition (“completeness”) in mind here. We will introduce it in Chapter 2, just before we need
it.
∀x ∈ R, ∃n ∈ Z such that n ≤ x < n + 1.
The integer n that appears here is unique and is denoted [x]; this is called the integer part of
x. The function x 7→ [x] is called the integer part, or floor, function. Note that 0 ≤ x − [x] < 1.
Example 1.1.3. [1.47] = 1, [π] = 3, [−1.47] = −2.
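If you like to experiment, the integer part function is available in Python as math.floor (a small illustrative aside, not part of the notes); note that it rounds towards −∞, which is exactly the behaviour of [x] for negative numbers:

    import math

    # [x] is the unique integer n with n <= x < n + 1.
    for x in [1.47, math.pi, -1.47]:
        print(x, math.floor(x))   # prints 1, 3 and -2  (so [-1.47] = -2, not -1)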
Part I
Sequences
Chapter 2
Convergence
The word “sequence” suggests time, with the numbers occurring in temporal sequence:
first a1 , then a2 , et cetera. Indeed, some sequences arise this way, for instance as successive
approximations to some quantity we want to compute. Formally, a sequence is simply a function
f : N → R, where we write a_n for f(n).
We shall be interested in the long term behaviour of sequences, i.e. the behaviour of the
numbers an when n is very large; in particular, do the approximations converge on some value?
Example 2.1.2. Consider the sequence 1, 4, 9, 16, . . . , n^2, . . .. Here, the nth term is n^2. So we
write the sequence as (n^2)_{n∈N} or (n^2)_n or just (n^2). What is the nth term of the sequence
4, 9, 16, 25, . . .? What are the first few terms of the sequence ((n − 1)^2)_n?
Example 2.1.3. Consider the sequence 2, 3/2, 4/3, 5/4, . . .. Here, a_n = (n + 1)/n. The sequence is
((n + 1)/n)_{n∈N}.
Example 2.1.4. Consider the sequence −1, 1, −1, 1, −1, . . .. A precise and succinct way of
writing it is ((−1)^n)_{n∈N}. The nth term is 1 if n is even and −1 if n is odd.
Example 2.1.5. Consider the sequence ((−1)^n/3^n)_{n∈N}. The 5th term, for example, is −1/243.
Example 2.1.6. Sometimes we might not have a precise formula for the nth term but rather
a rule for generating the sequence. For example, consider the sequence 1, 1, 2, 3, 5, 8, 13, . . ., which is
specified by the rule a_1 = a_2 = 1 and, for n ≥ 3, a_n = a_{n−1} + a_{n−2}. (This is the Fibonacci
sequence.)
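A rule like this is easy to turn into a short computation; the following sketch (purely illustrative, with fibonacci a made-up helper name) generates the first few terms:

    def fibonacci(n):
        # First n terms of the sequence a_1 = a_2 = 1, a_k = a_{k-1} + a_{k-2} for k >= 3.
        terms = [1, 1]
        while len(terms) < n:
            terms.append(terms[-1] + terms[-2])
        return terms[:n]

    print(fibonacci(8))   # [1, 1, 2, 3, 5, 8, 13, 21]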
Long term behaviour: In Examples 2.1.2 and 2.1.6 the terms get huge, with no bound on
their size (we shall say that they tend to ∞).
However, for 2.1.3, the 100th term is 101/100 = 1 + 1/100, the 1000th term is 1001/1000 = 1 + 1/1000. It
looks as though the terms are getting closer and closer to 1. (Later we shall express this by
saying that ((n + 1)/n)_{n∈N} converges to 1.)
In Example 2.1.4 , the terms alternate between −1 and 1 so don’t converge to a single value.
In Example 2.1.5, the terms alternate between being positive and negative, but they are also
getting very small in absolute value (i.e. in their size when we ignore the minus sign): so
((−1)^n/3^n)_{n∈N} converges to 0.
Before giving the precise definition of “convergence” of a sequence, we require some technical
properties of the modulus (i.e. absolute value) function.
We now make the convention that unless otherwise stated, all variables (x, y, l, ε,
δ, ...) range over the set R of real numbers.
Remark: For x, y ∈ R, |x − y| is the distance from x to y along the “line” R. The triangle
inequality is saying that the sum of the lengths of any two sides of a triangle is at least as big
as the length of the third side, as is made explicit in the following corollary. (Of course we are
dealing here with rather degenerate triangles: the name really comes from the fact that in this
form, the triangle inequality is also true for points in the plane.)
Corollary 2.2.2 (Also called the Triangle Inequality). For all a, b, c, we have
|a − c| ≤ |a − b| + |b − c|.
Proof. |a − c| = |(a − b) + (b − c)| ≤ |a − b| + |b − c| (by Theorem 2.2.1 with x = (a − b) and
y = (b − c)). 2
Now we come to the definition of what it means for a sequence (an )n∈N to converge to a real
number l. We want a precise mathematical way of saying that “as n gets bigger and bigger, an
gets closer and closer to l” (in the sense that the distance between a_n and l tends towards 0).
2.3 The Definition of Convergence
Definition 2.3.1. We say that a sequence (an )n∈N converges to the real number l if the
following holds:
∀ε > 0 ∃N ∈ N such that for all n ∈ N with n ≥ N we have |a_n − l| < ε.
Example 2.3.3. Now consider the sequence ((−1)^n/3^n)_{n∈N} of 2.1.5. The claim is that
(−1)^n/3^n → 0 as n → ∞.
Proof: Let ε > 0 be given. To prove the claim we must find N ∈ N so that ∀n ≥ N,
|(−1)^n/3^n − 0| < ε. In fact N = [ε^{−1}] + 1 (which implies 1/N < ε) works here. For suppose
that n ≥ N. Then |(−1)^n/3^n − 0| = |(−1)^n/3^n| = |(−1)^n|/|3^n| (by Lemma 2.2.3(a)) = 1/3^n ≤ 1/n (since it
is easy to show (by induction) that ∀n ∈ N, 3^n ≥ n, and hence that 1/3^n ≤ 1/n). But 1/n ≤ 1/N < ε,
and we are done.
The definition of convergence is notoriously difficult for students to take in, and rather few
grasp it straight away, especially to the extent of being able to apply it. So here’s a different,
but equivalent, way of saying the same thing. Maybe one of the definitions might make more
sense to you than the other - it can be useful to look at something from different angles to
understand it. And, of course, the more examples you do, the more quickly you will get the
picture.
By Lemma 2.2.3(c) the condition |a_n − l| < ε is equivalent to saying that l − ε < a_n < l + ε.
This in turn is equivalent to saying that a_n ∈ (l − ε, l + ε). So, to say that a_n → l as n → ∞
is saying that no matter how small an interval we take around l, the terms of the sequence
(a_n)_{n∈N} will eventually lie in it, meaning that, from some point on, every term lies in that
interval. You might like to look at the formula for N (in terms of ε) in Example 2.3.2 and
check that (taking ε = 1/10), (n + 1)/n ∈ (1 − 1/10, 1 + 1/10) for all n ≥ 11, and that (taking ε = 3/500)
(n + 1)/n ∈ (1 − 3/500, 1 + 3/500) for all n ≥ 167 (so, for ε = 3/500 we can take N = 167 or any integer
≥ 167 - the definition of convergence doesn’t require us to choose the least N that will work).
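If you want to check these particular numbers by brute force, the following sketch (an informal aside; least_N is just an ad hoc name) searches for the first n from which (n + 1)/n lies in the interval:

    def least_N(eps):
        # Least n with (n + 1)/n inside (1 - eps, 1 + eps); since (n + 1)/n decreases
        # towards 1, every later term then lies in the interval too.
        n = 1
        while not (1 - eps < (n + 1) / n < 1 + eps):
            n += 1
        return n

    print(least_N(1/10))    # 11
    print(least_N(3/500))   # 167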
Example 2.3.4. This is more of a nonexample really. Consider the sequence ((−1)^n)_{n∈N} (of
2.1.4). Then there is no l such that the terms will eventually all lie in the interval (l − 1/2, l + 1/2).
This is because if l ≤ 1/2 then the interval does not contain the number 1, yet infinitely many of
the terms of the sequence are equal to 1, and if l ≥ 1/2 then the same argument applies to the
number −1. Hence there is no number l such that (−1)^n → l as n → ∞. In general, if there is
no l such that a_n → l as n → ∞ then we say that the sequence (a_n)_{n∈N} does not converge
or is divergent.
Example 2.3.5. Another nonexample. Consider the sequence (n2 )n∈N of Example 2.1.2. This
does not converge either. Here is a rigorous proof of this fact directly using the definition of
convergence.
Suppose, for a contradiction, that there is some l such that a_n → l as n → ∞. Choose ε = 1
in Definition 2.3.1. So there must exist some N ∈ N such that |n^2 − l| < 1 for all n ≥ N. In
particular |N^2 − l| < 1 and |(N + 1)^2 − l| < 1. Therefore |(N + 1)^2 − N^2| = |(N + 1)^2 − l + l − N^2| ≤
|(N + 1)^2 − l| + |l − N^2| (by the Triangle Inequality). But each of the terms in the last expression
here is less than 1, so we get that |(N + 1)^2 − N^2| < 1 + 1 = 2. However, |(N + 1)^2 − N^2| = 2N + 1,
so 2N + 1 < 2, which is absurd since N ≥ 1.
This last example is a particular case of a general theorem. Namely, if the set of terms of
a sequence is not bounded (as is certainly the case for the sequence (n2 )n∈N ) then it cannot
converge. We develop this remark precisely now.
Example 2.3.7. (1) Let S = {17, −6, −25, 25, 0}. Then S is bounded: an upper bound is 25
(or any larger number) and a lower bound is −25 (or any smaller number). In fact one can
show easily by induction on the size of X that if X is a non-empty, finite subset of R then X
is bounded.
(2) Intervals like (a, b] or [a, b], etc., for a < b are bounded above by b and below by a.
However an interval like (a, ∞) is only bounded below.
(3) Let S = {x ∈ R : x^2 < 2}. Since 1.5^2 > 2 it follows that if x^2 < 2 then −1.5 < x < 1.5. Of
course there are better bounds, but this is certainly sufficient to show that S is bounded.
Applying this to the set {an : n ∈ N} of terms of a sequence we make the following definition.
Definition 2.3.8. A sequence (an )n∈N is bounded if there exists M ∈ R+ such that for all
n ∈ N, |an | ≤ M .
Theorem 2.3.9 (Convergent implies Bounded). Suppose that (an )n∈N is a convergent sequence
(i.e. for some l, an → l as n → ∞). Then (an )n∈N is a bounded sequence.
Proof. Choose l so that an → l as n → ∞. Now take ε = 1 in the definition of convergence.
Then there is some N ∈ N such that for all n ≥ N , |an − l| < 1.
But |an | = |(an − l) + l| ≤ |an − l| + |l| (by the triangle inequality). Thus for all n ≥ N ,
|an | ≤ 1 + |l|. So if we take M = max{|a1 |, |a2 |, . . . , |aN −1 |, 1 + |l|} we have that |an | ≤ M for
all n ∈ N, as required. 2
We give one last example before we prove some more general theorems about convergence.
Example 2.3.10. Let a_n = 8n^{1/3}/(n^3 + √n). Then a_n → 0 as n → ∞.
Proof: Let ε > 0 be given. We must find N (which will depend on the given ε) such that for
all n ≥ N, |8n^{1/3}/(n^3 + √n) − 0| < ε, i.e. so that 8n^{1/3}/(n^3 + √n) < ε (since the terms are positive, we may
remove the modulus signs).
So the game is to find a decent looking upper bound for 8n^{1/3}/(n^3 + √n) so that one can easily read
off what N must be. Well, we certainly have n^3 ≥ √n (for all n), so
8n^{1/3}/(n^3 + √n) ≤ 8n^{1/3}/(√n + √n) = 4n^{1/3}/√n.
(You must get used to tricks like this: replacing an expression in the denominator by a
smaller expression and/or replacing the numerator by a larger term always increases the size of
the fraction, provided everything is positive.)
Now 4·n^{1/3}/√n = 4·n^{1/3}/n^{1/2} = 4/n^{1/2 − 1/3} = 4/n^{1/6}.
So we have that 8n^{1/3}/(n^3 + √n) ≤ 4/n^{1/6}.
So we will have the desired inequality, namely |8n^{1/3}/(n^3 + √n) − 0| < ε, provided that 4/n^{1/6} < ε
which, after rearrangement, is provided that (4/ε)^6 < n. So to complete the argument we just
take N to be any natural number greater than (4/ε)^6, say N = [(4/ε)^6] + 1.
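As an informal sanity check (not part of the proof), one can pick a particular ε, compute the N found above and confirm numerically that the terms beyond it are within ε of 0; a(n) below is just a throwaway name for the nth term:

    import math

    def a(n):
        # The nth term 8 n^(1/3) / (n^3 + sqrt(n)) of Example 2.3.10.
        return 8 * n ** (1 / 3) / (n ** 3 + math.sqrt(n))

    eps = 0.5
    N = math.floor((4 / eps) ** 6) + 1   # the N produced by the proof
    # Check a sample of terms with n >= N (the proof of course covers all of them).
    print(N, all(abs(a(n)) < eps for n in range(N, N + 1000)))   # prints 262145 True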
2.4 The Completeness Property for R

Definition 2.4.1. Let S be a non-empty subset of R and assume that S is bounded above. Then
a real number M is a supremum or least upper bound of S if the following two conditions
hold:
(i) M is an upper bound for S, i.e. ∀x ∈ S, x ≤ M;
(ii) whenever M′ < M, then M′ is not an upper bound for S.
It is easy to see that a set S cannot have more than one supremum. For if M1 and M2 were
both suprema and M1 ≠ M2, then either M1 < M2 or M2 < M1 (this is the sort of situation
where one automatically uses rule (Ord 1)). But, in either case, this contradicts condition
2.4.1(ii) above.
Thus we have proved:
Lemma 2.4.2. The supremum of a set S, if it exists, is unique and we then denote it by
sup(S).
Lemma 2.4.3. Let S be a non-empty subset of R and assume that M = sup(S) exists. Then
∀ε ∈ R+, ∃x ∈ S such that M − ε < x ≤ M.
Proof. Let ε > 0 be given. Let M′ = M − ε. Then M′ < M, so by 2.4.1(ii), M′ is not an upper
bound for S. So there must be some x ∈ S such that x > M′. Since M is an upper bound for
S (by 2.4.1(i)), we also have x ≤ M. So M′ < x ≤ M, i.e. M − ε < x ≤ M, as required. 2
Example 2.4.4. For S = {17, −6, −25, 25, 0} we clearly have that sup(S) = 25. Indeed, if S
is any non-empty, finite subset of R then it contains a greatest element and this element is
necessarily its supremum.
Example 2.4.5. However, it need not be the case that a set contains its supremum as a member.
Indeed, if a < b then we claim that sup(a, b) = b despite the fact that b ∉ (a, b).
Proof. Let us prove this. Certainly b is an upper bound for (a, b). To see that 2.4.1(ii) is also
satisfied, suppose that M′ < b. Let c = max{a, M′}. Then certainly c < b and so the average
d = (c + b)/2 of c and b satisfies M′ ≤ c < d < b. Similarly, a < d < b. So d is an element of the set
(a, b) which is strictly greater than M′. Hence M′ is not an upper bound for (a, b), as required.
2
It is now time to introduce the final property that characterises the reals. It is a succinct,
but sweeping property of R. It is strong enough to imply facts as diverse as “the number 2 has
a square root” on the one hand and apparently easy things like “N has no upper bound in R”
on the other.
Property 2.4.6. (The Completeness property of R) (A14) Any non-empty subset of R which
is bounded above has a supremum.
Remark. We will not prove the completeness property. After all, we’ve not actually defined
what real numbers are, so we can’t begin to prove that they have such-and-such a property.
The usual route is to carefully define the real numbers in terms of so-called Dedekind cuts (an
alternative approach is in terms of Cauchy sequences). The completeness axiom is then almost
part of the definition. We don’t have the time to give the details in this course but you can find
the construction in most analysis text books.
Let us list some consequences of the completeness property.
Example 2.4.7. Consider now the set S = {x ∈ R : x^2 < 2} of Example 2.3.7. We have already
noted that it is bounded above, for example by 1.5. Thus by Property 2.4.6 it has a supremum,
say s. We will see below that s^2 = 2 and so s does indeed equal √2. (Clearly s is the positive
square root of 2 since 1 ∈ S, so 1 ≤ s.)
So suppose, for a contradiction, that s^2 ≠ 2. Then either s^2 > 2 or s^2 < 2. Assume first
that s^2 > 2. What we want to do is find some (positive) number t that is small enough so that
(s − t) is still an upper bound for S. It might be tempting to take t to be the average t = (s^2 − 2)/2,
but this does not quite work. However the idea is good:
Let t = (s^2 − 2)/(4s). Then t > 0. Also t < s^2/(4s) = s/4, so t < s and s − t > 0. Then we check that
s − t is also an upper bound for S, which contradicts the fact that s is the least upper bound
for S. The key point is to compute
(s − t)^2 = s^2 − 2st + t^2 ≥ s^2 − 2st = s^2 − 2s·(s^2 − 2)/(4s)
          = s^2 − (s^2 − 2)/2 = s^2/2 + 1 > 2/2 + 1 = 2.
This does it, since if x ∈ S then x^2 < 2 < (s − t)^2. As (s − t) > 0, it follows (see the first
example sheet) that x < (s − t), contradicting the fact that s is a supremum for S.
We should also dispose of the case s^2 < 2; that is left for the exercise sheet.
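Completeness itself is not something we can check by computation, but we can at least approximate this particular supremum numerically. Here is a minimal bisection sketch (an illustration only, nothing to do with the formal development): it keeps a point of S and an upper bound of S and repeatedly halves the gap.

    # S = {x in R : x^2 < 2}; lo lies in S, hi is an upper bound of S.
    lo, hi = 1.0, 1.5
    for _ in range(50):
        mid = (lo + hi) / 2
        if mid ** 2 < 2:
            lo = mid          # mid is in S, so the supremum is at least mid
        else:
            hi = mid          # mid is an upper bound for S
    print(lo, hi)             # both are now 1.41421356..., i.e. essentially sqrt(2)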
In fact, by using a similar method (though the details are rather more complicated), one
can also show that
• for any x ∈ R with x > 0, and any natural number n, there exists y ∈ R, with y > 0, such
that y^n = x. Such a y is unique and is denoted x^{1/n} or ⁿ√x (the positive real nth root of
x). Furthermore,
• for any x ∈ R, and any odd natural number n, there exists a unique y ∈ R (of the same
sign as x) such that y^n = x. Again, we write this y as x^{1/n}.
Finally, here is a property of the reals that you probably took for granted. However let’s
prove it using the completeness property.
Example 2.4.8. N is not bounded above.
Proof. To see this suppose, for a contradiction, that N is bounded above. Then the complete-
ness property says that N has a supremum, say s = sup(N). Apply Lemma 2.4.3 with ε = 1/2.
Then the lemma guarantees that there is some x ∈ N such that s − 1/2 < x. Adding 1 to both
sides we get s + 1/2 < x + 1. But then we have x + 1 ∈ N and s < x + 1, which contradicts our
assumption that s was the supremum of, and hence an upper bound for, N. 2
Definition 2.4.10. Suppose that S is a non-empty subset of R and that S is bounded below.
Then a real number m is an infimum (or greatest lower bound) of S if
(i) m is a lower bound for S, i.e. ∀x ∈ S, m ≤ x;
(ii) whenever m′ > m, then m′ is not a lower bound for S.
Again, one can show that the infimum of a set S is unique if it exists, and we write inf(S)
for it when it does exist. One might think that we need a result like Property 2.4.6 to postulate
the existence of infima. But in fact we can deduce it directly from that property.
Theorem 2.4.11 (Existence of infima). Every non-empty subset T of R which is bounded below
has an infimum.
Proof. See the Problem Sheet for Week 1. As a hint, define T − = {−x : x ∈ T }. Then
you should prove that T − has a supremum, M say. Now show that −M is the infimum of the
original set T . 2
So, to summarise, we are not going to define the real numbers from first principles - you can
find their construction in various sources and, in any case, you are used to dealing with them
- but it is the case that everything we need about the reals follows from the axioms that we
listed above for an ordered field, together with the completeness property. Indeed, these axioms
completely characterise the reals. That is, and this is a rather remarkable fact: although there
are many different fields, and even many different ordered fields, if we add the completeness
axiom then there is just one mathematical structure which satisfies all these conditions - namely
the reals R.
Obviously x = a_N for some N ∈ N. So l − ε < a_N ≤ l.
Now for any n ≥ N we have that a_n ≥ a_N (since the sequence (a_n)_{n∈N} is increasing) and of
course we also have that a_n ≤ l (since l is certainly an upper bound for the set {a_n : n ∈ N},
being its supremum).
So, for all n ≥ N, we have l − ε < a_n ≤ l, and hence l − ε < a_n < l + ε. But (by 2.2.3(c))
this is equivalent to saying that for all n ≥ N, we have |l − a_n| < ε. Thus a_n → l as n → ∞ as
required. 2
Example 2.5.4. Let a_n = (n^2 − 1)/n^2. Then (a_n)_{n∈N} is an increasing sequence since a_n = 1 − 1/n^2 ≤
1 − 1/(n + 1)^2 = a_{n+1}. Further, 0 ≤ a_n < 1 for all n, so (a_n)_{n∈N} is also a bounded sequence.
Hence (a_n)_{n∈N} converges.
In fact it is fairly easy to show directly from the definition of convergence that, with an as in
the example above, an → 1 as n → ∞. However, Theorem 2.5.3 comes into its own in situations
where it is far from easy to guess what the limit is:
Example 2.5.5. Let a_n = ((n + 1)/n)^n = (1 + 1/n)^n. Then one can show (though it’s not partic-
ularly easy) that (a_n)_{n∈N} is an increasing, bounded sequence. So it converges. But what is its
limit? It turns out to be e (the base for natural logarithms).
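Numerically one can watch this happening; the following throwaway sketch (for illustration only) prints a few terms next to e:

    import math

    for n in [1, 10, 100, 1000, 10**6]:
        print(n, (1 + 1 / n) ** n)   # 2.0, 2.5937..., 2.7048..., 2.7169..., 2.71828...
    print(math.e)                    # 2.718281828...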
2.6 Exponentiation - a digression

Lemma 2.6.1. (a) Let y ∈ R. Then there exists a sequence (qn )n∈N which (i) is strictly
increasing, (ii) has rational terms (i.e. qn ∈ Q for each n ∈ N) and (iii) converges to y.
(b) If q and s are positive rational numbers such that q < s, and if x ≥ 1, then xq < xs .
Proof. (a) Firstly, let q_1 be any rational number strictly less than y (e.g. q_1 = [y] − 1).
Let q_2 be any rational number satisfying max{q_1, y − 1/2} < q_2 < y (using the result from
the problem sheet). Now let q_3 be any rational number satisfying max{q_2, y − 1/3} < q_3 < y (as
above).
We continue: once q_1 < q_2 < · · · < q_n < y have been constructed, we choose a rational
number q_{n+1} satisfying max{q_n, y − 1/(n + 1)} < q_{n+1} < y.
Clearly our construction has ensured that the sequence (qn )n∈N is (strictly) increasing and is
bounded above (by y). It therefore converges by the Monotone Convergence Theorem. However,
we have also guaranteed that for all n ∈ N, y − 1/n < q_n < y, from which it follows that the limit
is, in fact, y.1
(b) This is left as an exercise; see the solution to Question 4 from the Exercise sheet for
Week 3. 2
Construction of x^y (outline).
We first use the lemma to construct an increasing sequence (q_n)_{n∈N} of rational numbers that
converges to y. Part (b) of the lemma implies that (x^{q_n})_{n∈N} is a strictly increasing sequence
which is bounded above by x^N where N is any natural number greater than y (e.g. [y] + 1).
Hence, by the Monotone Convergence Theorem, there is some l such that x^{q_n} → l as n → ∞.
We define x^y to be this l.
We can extend this definition to negative y by the formula x^{−y} = 1/x^y, and we also set x^0 =
1. Finally, if 0 < x < 1, so x^{−1} ≥ 1, then we define x^y to be (x^{−1})^{−y}. We do not give any value
to x^y for negative x.
With these definitions all the usual laws for exponentiation can now be established. One
first proves them for rational exponents and then shows that the laws carry over to arbitrary
real exponents upon taking the limit described above. This latter process is made easier by
developing an “algebra of limits” which we shall do in the next chapter.
The laws of exponentiation being referred to above are as follows (where x is assumed
positive throughout).
(E1) x^{y+z} = x^y · x^z;
Since E1-E4 encapsulate all we will need to know about exponentiation, we don’t need to refer
back to our particular construction. In particular, we will use (E4) a number of times, without
particular comment.
¹ Can you see this final step? As a hint, note that for any ε > 0 there exists n ∈ N with 0 < 1/(n + 1) < ε. Now
use Lemma 2.2.3(c) as usual.
Chapter 3
We now develop a variety of methods and general results that will allow us to calculate limits
of particular sequences without always having to go back to the original definition.
Theorem 3.1.1 (The Sandwich Rule). Suppose that (a_n)_{n∈N}, (b_n)_{n∈N} and (c_n)_{n∈N} are sequences
with lim_{n→∞} a_n = ℓ = lim_{n→∞} b_n and such that, for some N,
a_n ≤ c_n ≤ b_n for all n ≥ N.
Then c_n → ℓ as n → ∞.
Proof. We write down what it means for lim_{n→∞} a_n = ℓ = lim_{n→∞} b_n. So, fix ε > 0. Then there
exists N_1 such that if n ≥ N_1 then |ℓ − a_n| < ε; equivalently, ℓ − ε < a_n < ℓ + ε. Similarly there
exists N_2 such that if n ≥ N_2 then ℓ − ε < b_n < ℓ + ε.
Now, let M = max{N, N_1, N_2}. Then the displayed inequalities combine to show that, for all n ≥ M,
ℓ − ε < a_n ≤ c_n ≤ b_n < ℓ + ε,
so |c_n − ℓ| < ε. Thus c_n → ℓ as n → ∞. 2
Remark 3.1.2. The Sandwich Rule is often applied when either (an )n∈N or (bn )n∈N is a constant
sequence: if a ≤ cn ≤ bn for sufficiently large n and bn → a as n → ∞, then cn → a as n → ∞.
(Just take an = a for all n ∈ N in 3.1.1.) Similarly, if an ≤ cn ≤ b for sufficiently large n and
an → b as n → ∞, then cn → b as n → ∞.
Definition 3.1.3. A sequence (an )n∈N is a null sequence if an → 0 as n → ∞.
Theorem 3.1.4. (i) If (an )n∈N is a null sequence then so are the sequences (|an |)n∈N and
(−|an |)n∈N .
(ii) (Sandwich rule for null sequences.) Suppose that (an )n∈N is a null sequence. Let (bn )n∈N
be any sequence such that for all sufficiently large n, |bn | ≤ |an |. Then (bn )n∈N is also a null
sequence.
Proof. (i) This follows from the definitions. For any ε > 0, by definition there exists N such
that |a_n| < ε for all n ≥ N. But then ||a_n|| = |a_n| < ε and so, by definition, lim_{n→∞} |a_n| = 0. The
same works for −|a_n| since |−|a_n|| = |a_n|.
(ii) We are given that for some N ∈ N we have |bn | ≤ |an | for all n ≥ N . It follows that for
all n ≥ N , −|an | ≤ bn ≤ |an |. But by (i) both (|an |)n∈N and (−|an |)n∈N are null sequences.
Hence by the Sandwich Rule 3.1.1 (with l = 0) it follows that (bn )n∈N is null. 2
Of course, this is really what we were doing in explicit cases like Question 2 of the Week 3
Exercise Sheet.
Example 3.1.5. (i) (1/n)_{n∈N} is null.
(ii) (1/(n^2 + n^{3/2}))_{n∈N} is null.
Proof (i) We did this as part of Example 2.3.2.
(ii) For all n ∈ N, |1/(n^2 + n^{3/2})| = 1/(n^2 + n^{3/2}) ≤ 1/n^2 = (1/n)·(1/n) ≤ 1/n. So the result follows from
Theorem 3.1.4 and Part (i). (Or you could use Question 2(a) of the Week 3 Exercise Sheet.)
Lemma 3.1.6. (a) For all m ≥ 5 we have 2^m > m^2.
(b) In particular, if a_n = 2^{−n}, then lim_{n→∞} a_n = 0.
Proof. (a) You may have seen this in the Foundations of Pure Mathematics course.
Base case: For m = 5 we have 2^5 = 32 > 25 = 5^2.
So, suppose for some k ≥ 5 we have 2^k > k^2. Then in trying to relate 2^{k+1} and (k + 1)^2 one
gets
2^{k+1} = 2 · 2^k > 2 · k^2 = (k + 1)^2 + k^2 − 2k − 1.
Now the key point is that, as k ≥ 5, we have k^2 − 2k − 1 = (k − 1)^2 − 2 > 2. So from the
displayed equation we get
2^{k+1} > (k + 1)^2 + (k^2 − 2k − 1) > (k + 1)^2 + 2.
Thus, by induction, 2^m > m^2 for all m ≥ 5.
(b) Of course, you can prove this more directly, but part (a) implies that 0 < a_n = 2^{−n} ≤ n^{−2} for all n ≥ 5.
Thus, by Theorem 3.1.4 and Question 2(a) (of Problem Sheet 2), lim_{n→∞}(a_n) = 0. 2
Example 3.1.7. (1/(2^n + 3n))_{n∈N} is null.
Proof: For all n ∈ N, we have |1/(2^n + 3n)| = 1/(2^n + 3n) ≤ 1/2^n. So the result follows from Theo-
rem 3.1.4 and Lemma 3.1.6.
Example 3.1.8. (n^3/4^n)_{n∈N} is null.
Proof: By the lemma, ∀n ≥ 5,
|n^3/4^n| = n^3/4^n = n^3/(2^n · 2^n) ≤ n^3/(n^2 · n^2) = 1/n.
So the result follows from 3.1.4 (and the fact that (1/n) is null).
3.2 The Algebra of Limits.
Hopefully you now have a feel for testing whether a given sequence is convergent or not. What
should also be apparent is that we tend to repeat the same sorts of tricks. When that happens,
one should suspect that there are general rules in play. This is the case here and we can convert
the sorts of manipulations we have been doing into theorems telling us when certain types of
functions converge (or not). The next theorem can be paraphrased as saying that: Limits of
convergent sequences satisfy the same rules of addition and multiplication as numbers.
Theorem 3.2.1. [The Algebra of Limits Theorem] Let (a_n)_{n∈N}, (b_n)_{n∈N} be sequences and let
a, b be real numbers. Suppose that lim_{n→∞} a_n = a and that lim_{n→∞} b_n = b. Then:
(i) |a_n| → |a| as n → ∞;
(ii) k·a_n → k·a as n → ∞, for any fixed k ∈ R;
(iii) a_n + b_n → a + b as n → ∞;
(iv) a_n · b_n → a·b as n → ∞;
(v) a_n/b_n → a/b as n → ∞, provided b ≠ 0;
(vi) 1/b_n → 1/b as n → ∞, provided b ≠ 0.
Remarks: (i) In many of the proofs we are going to want to change ε in the definition of
convergence, so let’s rephrase the definition of convergence as: a sequence (c_n) converges to ℓ
if, for all η > 0 there exists M such that, for all n ≥ M, |c_n − ℓ| < η.
(ii) Secondly, we will frequently use the various versions of the triangle inequality from 2.2.1
and 2.2.2, generally without explicit mention.
Proof. (i) Let ε > 0 be given. Choose N so that for all n ≥ N, |a_n − a| < ε. Now by
Lemma 2.2.3(b), ||a_n| − |a|| ≤ |a_n − a|. Hence, for all n ≥ N, ||a_n| − |a|| < ε, as required.
(ii) Firstly, if k = 0 the result is obvious since lim_{n→∞} 0 = 0. So suppose that k ≠ 0.
Let ε > 0 be given. Here we take η = ε/|k|. Then η > 0 so we may apply the Remark to
obtain M ∈ N such that for all n ≥ M, we have |a_n − a| < η. Multiplying by |k| (which is
> 0) we get that |k| · |a_n − a| < |k| · η = ε for all n ≥ M. Therefore, for n ≥ M, we have
|k·a_n − k·a| = |k(a_n − a)| < ε. This shows that k·a_n → k·a as n → ∞, as required.
(iii) Let ε > 0 be given. We take (for reasons that will soon become apparent) η = ε/2 in the
Remark applied to the two sequences. Thus we can find M_1 such that
a − η ≤ a_n ≤ a + η for all n ≥ M_1
and M_2 such that
b − η ≤ b_n ≤ b + η for all n ≥ M_2.
Since ε = 2η this means that, for N = max{M_1, M_2} and all n ≥ N,
(a + b) − ε ≤ a_n + b_n ≤ (a + b) + ε.
So a_n + b_n → a + b as n → ∞, as required.
(iv) This is harder. We first use the fact (Theorem 2.3.9) that (an )n∈N and (bn )n∈N are bounded
sequences. So we may choose B ∈ R+ such that:
Thus an bn → ab as n → ∞, as required.
(v) This is immediate from parts (iv) and (vi). Indeed, if 1/b_n → 1/b as n → ∞, then we can
apply part (iv) to conclude that a_n · (1/b_n) → a · (1/b) as n → ∞.
(vi) We start with:
Claim: There exists N_1 ∈ N such that ∀n ≥ N_1, one has |b_n| > |b|/2.
Proof of Claim: Use the Remark with η = |b|/2 (which is strictly positive as |b| ≠ 0). This gives
N_1 ∈ N such that ∀n ≥ N_1, we have |b| − |b|/2 < |b_n| < |b| + |b|/2, which is what we wanted.
We now complete the proof. So let ε > 0 be given.
Let η = ε·|b|^2/2. Then η > 0 and hence by the Remark there exists N_2 ∈ N such that, for all
n ≥ N_2, we have |b_n − b| < η. Now let N = max{N_1, N_2}. Then, for n ≥ N we have
|1/b_n − 1/b| = |(b − b_n)/(b_n·b)| = |b − b_n|/(|b_n||b|).
However, |b_n| > |b|/2 (because n ≥ N_1) and hence |b_n|·|b| > |b|^2/2. Therefore
|b − b_n|/(|b_n||b|) < 2|b − b_n|/|b|^2 < 2η/|b|^2 (because n ≥ N_2).
Since η = ε·|b|^2/2 we obtain that |1/b_n − 1/b| < ε for all n ≥ N, as required. 2
Theorem 3.2.2. Let (an )n∈N be a null sequence and let (bn )n∈N be a bounded sequence (not
necessarily convergent). Then (an · bn )n∈N is a null sequence.
Proof. Exercise. 2
Example 3.2.3. For any fixed positive real number p, 1/n^p → 0 as n → ∞.
Proof: Let ε > 0 be given. We want to show that 1/n^p < ε for all large n. But
1/n^p < ε ⇐⇒ n^p > 1/ε ⇐⇒ n > (1/ε)^{1/p},
(where the final equivalence uses E4 of Section 2.6). So, we take N to be [ε^{−1/p}] + 1. Thus, if
n ≥ N then n > ε^{−1/p} and so the above computation shows that 1/n^p < ε. Therefore |1/n^p − 0| < ε
and hence lim_{n→∞} 1/n^p = 0.
Aside: By the way, if you do not want to use E4 here, you could also use the argument from
the solutions to the Exercise sheet for Week 3. So, (using that remark) if we take q = [p] then
q ∈ Q+ and n^{−p} ≤ n^{−q}. So (by the Sandwich Theorem again) it is enough to prove Example
3.2.3 for q or, as is the same statement, for p rational. This is the case of E4 proved explicitly
on that solution sheet.
Example 3.2.4. (n^2 + n + 1)/(n^2 − n + 1) → 1 as n → ∞.
Proof: Divide top and bottom of the nth term by n^2. (This trick, and variations of it, is the
main idea in all examples like this.) Thus
(n^2 + n + 1)/(n^2 − n + 1) = (1 + 1/n + 1/n^2)/(1 − 1/n + 1/n^2).
We now apply the above example with p = 1 and p = 2 to deduce that 1/n → 0 and that
1/n^2 → 0 as n → ∞. (Of course, these are also examples we have done several times before!)
So by the Algebra of Limits Theorem 3.2.1(ii, iii)
1 + 1/n + 1/n^2 −→ 1 + 0 + 0 = 1 as n → ∞
and
1 − 1/n + 1/n^2 −→ 1 + (−1) · 0 + 0 = 1 as n → ∞.
So by using the Algebra of Limits Theorem 3.2.1(vi) again we obtain that
(1 + 1/n + 1/n^2)/(1 − 1/n + 1/n^2) → 1/1 = 1,
that is, (n^2 + n + 1)/(n^2 − n + 1) → 1 as n → ∞.
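Computing a few terms directly (an informal check only; a(n) is just an ad hoc name) shows the same behaviour:

    def a(n):
        # nth term of Example 3.2.4
        return (n**2 + n + 1) / (n**2 - n + 1)

    for n in [1, 10, 100, 10000]:
        print(n, a(n))   # 3.0, 1.2197..., 1.0202..., 1.0002...; heading towards 1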
For many functions given as polynomials or fractions of polynomials we can apply methods
as in the above example. It is however very important to use this result (and earlier results)
only for convergent sequences.
For example, we have seen that (an ) is not convergent when an = (−1)n . This is an example
of a bounded sequence that is not convergent (so the converse of Theorem 2.5.3 fails).
There are however cases where we can deduce negative statements.
Example 3.2.5. If p > 0 then (n^p) is an unbounded sequence (and hence is not convergent by
Theorem 2.3.9).
Proof. Given any ℓ we can certainly find a natural number n > ℓ^{1/p} since the natural numbers
are unbounded. However, using (E4) of Section 2.6 we have n^p > ℓ ⇐⇒ n > ℓ^{1/p}. Thus this
also proves that (n^p) is unbounded.
It is worth emphasising that in proving unboundedness (or any counterexample) it’s not
necessary that every value of n is “bad”, just that we can always find a larger “bad” value of n.
Example 3.2.6. (i) If (an ) is unbounded and (bn ) is convergent, then (an + bn ) is unbounded.
Similarly, (kan ) is unbounded whenever k 6= 0.
Proof. Since (bn ) is convergent, it is bounded, say |bn | < B for all n. But now given any ` there
exists some an with |an | > B + `. Hence by the triangle inequality, |an + bn | > `, as required.
The proof for products is similar.
Chapter 4
In the previous chapter we saw how to establish the convergence of certain sequences that were
built out of ones that we already knew about. In this chapter we build up a stock of basic
convergent sequences that will recur throughout your study of analysis.
Step II: Take nth roots in Part I and use (E4) of Section 2.6 to get (1 + x) ≥ (1 + nx)^{1/n}.
Substituting x = y/n gives 1 + y/n ≥ (1 + y)^{1/n}, which is what we want.
Step III: Fix ε > 0. For any c > 1 we can write c = 1 + y for y > 0. Now we choose
N = [y/ε] + 1, so that y/N < ε. Then for any n ≥ N we have:
|c^{1/n} − 1| = c^{1/n} − 1   since clearly c^{1/n} > 1 (or use E4 of Section 2.6)
            = (1 + y)^{1/n} − 1
            ≤ (1 + y/n) − 1   by Part II
            = y/n ≤ y/N < ε.
Step IV: So it remains to consider the case when 0 < c < 1. But then d = c^{−1} > 1 so by Part
III, d^{1/n} → 1 as n → ∞. By the Algebra of Limits Theorem 3.2.1(vi), 1/d^{1/n} → 1/1 = 1 as n → ∞.
Finally, since 1/d^{1/n} = c^{1/n}, we have shown that c^{1/n} → 1 as n → ∞. 2
For completeness, recall Question 4 from the Week 4 Example Sheet:
Lemma 4.1.2. For 0 < c < 1 we have c^n → 0 as n → ∞.
Proof. Here is a different (and much simpler) proof. Let d = 1/c > 1 and write d = 1 + x, for
x > 0. Then d^n = (1 + x)^n = 1 + nx + · · · + x^n by the binomial theorem. As all the terms are
positive, d^n > nx. Thus for any number E, we have d^n > E whenever n ≥ E/x.
Now suppose that ε is given. Then c^n < ε ⇐⇒ d^n = 1/c^n > 1/ε. So, using the computations
of the last paragraph, take N = 1 + [1/(εx)], where x = 1/c − 1. Then, for n ≥ N we have n > 1/(εx)
and so d^n > 1/ε by the last paragraph. In other words, c^n < ε, as required. 2
We now consider several sequences where it is harder to see what is going on; typically they
are of the form a_n = b_n/c_n where both b_n and c_n get large (or small) as n gets large. What matters,
for the limit, is how quickly bn grows (or shrinks to 0) in comparison with cn . Roughly, if (bn )n
has a higher “order of growth”1 than cn then the modulus of the ratio will tend to infinity and
there will be no limit; if (bn )n and (cn )n have roughly the same order of growth, then we might
get a limiting value for (an )n ; if (cn )n has the higher order of growth, then (an )n → 0 as n → ∞.
Lemma 4.1.3. Suppose that (an ) is a convergent sequence with limit `. For any integer M ,
let bn = an+M (if M happens to be negative, we just take bn = 0 or any other number for
1 ≤ n ≤ −M ). Then lim bn = `.
n→∞
Proof. The point is that (apart from starting at different places) the two sequences are really
the same, so ought to have the same limit. More formally, let ε > 0. Then we can find
N ∈ N such that |ℓ − a_n| < ε for all n ≥ N. Hence |ℓ − b_n| = |ℓ − a_{n+M}| < ε for all
n ≥ max{1, N − M}. So the sequence (b_n) converges to ℓ. 2
Lemma 4.1.4. For any c, c^n/n! → 0 as n → ∞.
Proof. Set a_n = c^n/n!. Since it suffices to prove that |a_n| → 0 (see Theorem 3.1.4), we can
replace c by |c| and assume that c > 0. In particular, a_n > 0 for all n.
Now, a_{n+1} = a_n · c/(n + 1). For all n ≥ 2c, we have c/(n + 1) < 1/2 and hence a_{n+1} = a_n · c/(n + 1) < a_n/2.
So, fix an integer N ≥ 2c. Then for all m > 0 a simple induction says that 0 < a_{m+N} < a_N/2^m =
a_N·2^{−m}.
But we know that lim_{m→∞}(2^{−m}) = 0 by Lemma 4.1.2. Thus by the Algebra of Limits The-
orem 3.2.1, lim_{m→∞} a_N·(2^{−m}) = 0 and by the Sandwich Theorem lim_{m→∞}(a_{m+N}) = 0. By Lemma 4.1.3,
lim_{n→∞}(a_n) = 0. 2
Lemma 4.1.5. n^{1/n} → 1 as n → ∞.
Proof. Let k_n = n^{1/n} − 1. Then E4 of Section 2.6 says that k_n > 0 and clearly n = (1 + k_n)^n. By
the binomial theorem
n = (1 + k_n)^n = 1 + n·k_n + (n(n − 1)/2)·k_n^2 + · · · + k_n^n.
Since all the terms are positive, n > (n(n − 1)/2)·k_n^2, and hence k_n^2 < 2n/(n(n − 1)) = 2/(n − 1). So, for all
n ≥ 2 we have that 0 < k_n < √2/√(n − 1).
Now √2/√(n − 1) → 0 as n → ∞ (exercise), so by the Sandwich Rule, k_n → 0 as n → ∞. But
n^{1/n} = 1 + k_n, so by the Algebra of Limits Theorem, n^{1/n} → 1 as n → ∞. 2
Lemma 4.1.6. Fix c with 0 < c < 1 and fix k. Then lim_{n→∞} n^k · c^n = 0.
Remark. This proof is quite subtle, and it is included for completeness. Once we have
L’Hôpital’s Theorem, we can give a very quick proof.
Proof. If k = 0 then the result is Lemma 4.1.2. Hence the result is also true for k ≤ 0 by the
Sandwich Rule (as 0 < n^k · c^n ≤ c^n if k ≤ 0).
So we may assume that k > 0.
Let us first assume that k ∈ N. Now n^{1/n} → 1 as n → ∞ (by 4.1.5). Hence, by the AoL
(Algebra of Limits Theorem), n^{1/n} · n^{1/n} → 1 · 1 = 1 as n → ∞. By repeatedly multiplying by
n^{1/n} we see that (n^{1/n})^m → 1 as n → ∞ for any positive integer m, and in particular for m = k.
Another use of AoL gives that (n^{1/n})^k · c → c as n → ∞ or, to put this another way, (n^k)^{1/n} · c → c
as n → ∞.
Now let ε = (1 − c)/2, so that ε > 0 by our assumption on c.
Choose N ∈ N so that for all n ≥ N we have that
|(n^k)^{1/n} · c − c| < ε.
Proofs: (1) Here we can break it up to get (3n)^{1/n} = 3^{1/n} · n^{1/n}. Now Lemmas 4.1.1 and 4.1.5 show
that lim_{n→∞} 3^{1/n} = 1 = lim_{n→∞} n^{1/n}. Thus by the Algebra of Limits Theorem, lim_{n→∞} (3n)^{1/n} = 1.
(2) We know from Lemma 4.1.2 that lim_{n→∞} (1/2)^n = 0, and so Theorem 3.1.4 (or 3.2.2) implies that
lim_{n→∞} (−1/2)^n = 0.
(3) This does not follow directly from our results, but think about individual terms:
n!/n^n = (1 · 2 · 3 · · · n)/(n · n · n · · · n) = (1/n)(2/n) · · · (n/n) ≤ 1/n.
Since all the terms are positive and lim_{n→∞} 1/n = 0, we can use the Sandwich Theorem to get
lim_{n→∞} n!/n^n = 0.
Here we applied Lemma 4.1.4 with c = 10^6 to deduce that 10^{6n}/n! → 0, and again with c = 2
to deduce that 2^n/n! → 0, and finally we applied AoL.
Alternative viewpoint. Another way to think about “relative orders of growth” is to draw
up a table. We will say that a_n goes to zero faster than b_n if lim_{n→∞} a_n/b_n = 0. The first table shows common
functions ordered by how quickly they slip or plunge to zero.

    1/n^n                            See 4.1.7
    1/n!                             See 4.1.4
    c^n for 0 < c < 1                See 4.1.6, and think of e^{−n} = 1/e^n
    1/n^p = n^{−p} for p > 0         you can put any polynomial in the denominator
    1/ln(n)                          see below
    c^{1/n} and n^{1/n} for c > 0    These come last as they tend to 1; see 4.1.1 and 4.1.5.

The second table shows common functions ordered by how quickly they grow.

    n^n                              See 4.1.7
    n!                               See 4.1.4
    n^p for p > 0                    See 3.2.3; you can also take any polynomial here
    ln(n)                            see below
    c^{1/n} and n^{1/n} for c > 0    These come last as they tend to 1; see 4.1.1 and 4.1.5.

These tables include ln(n), which we have yet to discuss, but you should have the intuition
that, since exponentials grow faster than any polynomial, logs grow slower than any (positive) power of n. We show this
in the next example.
Example 4.2.3. For any c > 0, we have lim_{n→∞} ln(n)/n^c = 0.
Proof: Just like Lemma 4.1.6, this is best done using L’Hôpital’s rule. You can prove it using
(a variant of) Lemma 4.1.6, but it is sufficiently messy that I will leave it to L’Hôpital’s rule.
Examples 4.2.4. Find the limits of the sequences (a_n) where:
(a) a_n = n^3/3^n   (b) a_n = 3^n/n^3   (c) a_n = (3^n + n!)/n!   (d) a_n = 3!/n^3   (e) a_n = (n^28 + 5n^7 + 1)/2^n.
Answers (a) By the table, exponentials grow faster than polynomials (or use 4.1.6), so this
tends to zero.
(b) This is really something we deal with in the next chapter, but notice it is the reciprocal of
(a). So we would expect that it tends to “1/0” = ∞. This is indeed a true statement, but can you
see how to prove it?
(c) (3^n + n!)/n! = 3^n/n! + 1. By the table, lim_{n→∞} 3^n/n! = 0 and so by the AoL Theorem lim_{n→∞} (3^n/n! + 1) =
1.
(d) Careful! This is just a constant times 1/n^3 and so (by the AoL or directly) it tends to zero.
(e) Here the table says that the limit is 0. Note that to prove it using Lemma 4.1.6, you should
write it as n^28/2^n + 5·n^7/2^n + 1/2^n. Then 4.1.6 says each fraction has limit zero and so the AoL says
that
lim_{n→∞} (n^28 + 5n^7 + 1)/2^n = 0 + 5 · 0 + 0 = 0.
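The predictions in these answers are easy to test numerically (a rough illustrative sketch; the particular values of n are chosen just to make the behaviour visible, and note that for (e) the decay only sets in for fairly large n):

    import math

    # (a) n^3 / 3^n is already tiny by n = 30:
    print(30**3 / 3**30)
    # (c) (3^n + n!)/n! = 3^n/n! + 1 is already very close to 1 by n = 30:
    print((3**30 + math.factorial(30)) / math.factorial(30))
    # (e) (n^28 + 5 n^7 + 1)/2^n also tends to 0, but only for much larger n:
    for n in [50, 200, 400]:
        print(n, (n**28 + 5 * n**7 + 1) / 2**n)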
In the next example we will use the following general result:
Lemma 4.2.5. Suppose that (an )n∈N is a convergent sequence, say with limn→∞ an = `. Let r
and s be real numbers such that r ≤ an ≤ s for all sufficiently large n ∈ N, say for all n ≥ M .
Then r ≤ ` ≤ s.
Proof. Suppose (for a contradiction) that ℓ < r. Let ε = r − ℓ and choose N_1 so that for all
n ≥ N_1, |a_n − ℓ| < ε. Set N = max{N_1, M}. Then for any n ≥ N we have
ℓ − ε < a_n < ℓ + ε = ℓ + (r − ℓ) = r.
This contradicts the hypothesis that a_n ≥ r. The same argument shows that ℓ ≤ s. 2
Here is a more complicated type of example which we will see quite a lot.
Example 4.2.6. Let the sequence (a_n)_{n∈N} be defined inductively by
a_1 = 2, and, for n ≥ 1,
a_{n+1} = (a_n^2 + 2)/(2a_n + 1).
We show that (an )n∈N converges and then find the limit.
To do this we first show that
(A) ∀n ∈ N, an ≥ 1, and
(B) ∀n ∈ N, an+1 ≤ an .
Finally,
(C) This forces (an ) to converge. Now use the AoL to solve an equation for the limit `.
Proof of A: Now obviously a_1 ≥ 1. So, suppose, for some natural number n ≥ 1, we have
a_n ≥ 1. Then a_{n+1} = (a_n^2 + 2)/(2a_n + 1), so a_{n+1} ≥ 1 ⇐⇒ (a_n^2 + 2)/(2a_n + 1) ≥ 1 ⇐⇒ a_n^2 + 2 ≥ 2a_n + 1 (here we
are using the fact that by induction 2a_n + 1 is positive because a_n is). But
a_n^2 + 2 ≥ 2a_n + 1 ⇐⇒ a_n^2 − 2a_n + 1 ≥ 0 ⇐⇒ (a_n − 1)^2 ≥ 0,
which is certainly true. So working back through these equivalences we see that a_{n+1} ≥ 1 as
required.
Remark. It is very important in this sort of argument that one has ⇐⇒ or at least ⇐ since
one is trying to prove the first statement using the validity of the final statement. If you just
have ⇒ you would not be justified in drawing these conclusions.
Proof of B: We have
a_{n+1} ≤ a_n ⇐⇒ (a_n^2 + 2)/(2a_n + 1) ≤ a_n ⇐⇒ a_n^2 + 2 ≤ 2a_n^2 + a_n ⇐⇒ 0 ≤ a_n^2 + a_n − 2.
But a_n ≥ 1 (by A), so a_n^2 ≥ 1, and hence a_n^2 + a_n ≥ 2, so we are done.
Step C. So we have now shown that (an )n∈N is a strictly decreasing sequence which is bounded
(above by 2 and below by 1). Hence, by the Monotone Convergence Theorem (or, rather, by its
variant in the Exercises for Week 3) it converges. Let its limit be `. We calculate ` explicitly
as follows.
We first note that, by Lemma 4.2.5, as an ≥ 0 for all n, then ` ≥ 0 also holds.
By repeated use of the Algebra of Limits theorem, we have a_n^2 → ℓ^2, a_n^2 + 2 → ℓ^2 + 2,
2a_n + 1 → 2ℓ + 1 and, finally,
(a_n^2 + 2)/(2a_n + 1) → (ℓ^2 + 2)/(2ℓ + 1) as n → ∞
(note that the denominator is not zero since ℓ ≥ 0). Thus lim_{n→∞} a_{n+1} = (ℓ^2 + 2)/(2ℓ + 1).
However, Lemma 4.1.3 shows that lim_{n→∞} a_{n+1} = lim_{n→∞} a_n. So putting these equations together
we see that (ℓ^2 + 2)/(2ℓ + 1) = ℓ.
We can now solve this for ℓ: rearranging the equation (ℓ^2 + 2)/(2ℓ + 1) = ℓ gives ℓ^2 + 2 = 2ℓ^2 + ℓ and
hence gives the quadratic equation ℓ^2 + ℓ − 2 = 0, or (ℓ + 2)(ℓ − 1) = 0. Thus, either ℓ = 1 or
ℓ = −2. But we know that ℓ ≥ 0, so ℓ ≠ −2 and hence ℓ = 1. Thus
lim_{n→∞} a_n = 1.
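Iterating the recursion numerically (a quick illustrative sketch only) shows the decrease towards 1 very clearly:

    a = 2.0                              # a_1
    for n in range(1, 9):
        print(n, a)                      # 2.0, 1.2, 1.0117..., then rapidly approaching 1
        a = (a * a + 2) / (2 * a + 1)    # a_{n+1} = (a_n^2 + 2)/(2 a_n + 1)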
Remarks 4.2.7. It is important to realise that the calculation of the limit in Example 4.2.6 was
valid only because we already knew that the limit existed.
To bring this point home, reflect on the following (incorrect!) proof that (−1)n → 0 as
n → ∞ (which we know to be false from Example 2.1.4): Let an = (−1)n . Then the sequence
(an )n∈N is defined inductively by a1 = −1 and an+1 = −an . Let l = limn→∞ an . Then by the
Algebra of Limits Theorem we have limn→∞ (−an ) = −l, i.e. limn→∞ an+1 = −l. But, as above,
limn→∞ an = limn→∞ an+1 , so l = −l, whence 2l = 0 and so l = 0 as required!
More subtle examples will appear on future Exercise sheets.
4.3 Newton’s Method for Finding Roots of Equations - optional
This sub-section is an aside that will not be covered in the class; as such, it is not examinable,
but you might find it interesting and/or useful.
Example 4.3.2. Let us calculate √10 by Newton’s method. We take f(x) = x^2 − 10 and x_1 = 3.
The Newton sequence for this function is defined inductively by
x_{n+1} = x_n − (x_n^2 − 10)/(2x_n),
so
x_1 = 3
x_2 = 3/2 + 5/3 = 19/6 ≈ 3.1666667
x_3 = 19/12 + 30/19 = 721/228 ≈ 3.1622807
In fact, to seven decimal places, √10 = 3.1622776.
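The same computation takes only a few lines of Python (an aside, like the rest of this subsection), and reproduces the values above:

    x = 3.0                               # x_1
    for n in range(1, 5):
        print(n, x)                       # 3.0, 3.1666..., 3.16228..., 3.16227766...
        x = x - (x * x - 10) / (2 * x)    # Newton step for f(x) = x^2 - 10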
Chapter 5
Divergence
Example: Recall from Theorem 2.3.9 that if a sequence is convergent then it is bounded. We
can turn this around and see that certain sequences are not convergent simply because they are
not bounded; for example (e^n/n^2)_{n∈N} (or some of the examples on the Exercise sheet for Week
6). To see this: we know from Lemma 4.1.6 that n^2/e^n → 0. So, given any K > 0, take ε = 1/K:
there exists N such that n^2/e^n < 1/K for n ≥ N. Taking reciprocals we get
e^n/n^2 > K for all n ≥ N.
So the sequence (e^n/n^2)_{n∈N} is unbounded and hence not convergent. Of course, this argument
is completely general, so we will make a definition and a theorem out of it.
Definition 5.1.2. We say that a sequence (an )n∈N tends to infinity as n tends to infinity
if for each positive real number K, there exists an integer N (depending on K) such that for all
n ≥ N , an > K. In this case we write an → ∞ as n → ∞, or limn→∞ an = ∞.
Example 5.1.3. The sequence ((−1)n )n∈N is divergent since there is no l ∈ R such that (−1)n → l
as n → ∞. (see Example 2.3.4). But it does not tend to infinity as n → ∞ because one may
simply take K = 1: there is clearly no N such that for all n ≥ N , (−1)n > 1.
Example 5.1.4. The sequence (√n)_{n∈N} is not bounded and so is divergent (by Theorem 2.3.9). It does, however, tend to infinity as n → ∞. For let K > 0 be given. Choose N = [K^2] + 1. Then if n ≥ N we have that n ≥ [K^2] + 1 > K^2. Hence √n > √(K^2) = K, as required.
Example 5.1.5. Here is an example one needs to be careful about.
Consider the sequence ((−1)n · n)n∈N . This is clearly not a bounded sequence, so is not
convergent. But it does not tend to infinity either. For if it did, taking K = 1 we would have
an N such that for all n ≥ N, (−1)n · n > 1. Which is absurd since half the time it is negative.
Theorem 5.1.6 (The Reciprocal Rule). (i) Let (a_n)_{n∈N} be a sequence of non-zero real numbers such that a_n → ∞ as n → ∞. Then 1/a_n → 0 as n → ∞.
(ii) Let (a_n)_{n∈N} be a sequence of non-zero real numbers such that a_n > 0 for all sufficiently large n. Assume that the sequence (1/a_n)_{n∈N} is null. Then a_n → ∞ as n → ∞.
Proof. (i) Suppose that a_n → ∞ as n → ∞ and let ε > 0 be given.
Then 1/ε > 0, so taking K = 1/ε in Definition 5.1.2, there exists N ∈ N such that for all n ≥ N, a_n > K, i.e. for all n ≥ N, a_n > 1/ε. Hence for all n ≥ N, 0 < 1/a_n < ε. Thus for all n ≥ N we have that |1/a_n − 0| < ε, which proves that 1/a_n → 0 as n → ∞.
(ii) Now suppose that 1/a_n → 0 as n → ∞. Let K > 0 be given.
Then 1/K > 0, so taking ε = 1/K we see that there exists an N such that for all n ≥ N, |1/a_n − 0| < ε. We may also take N large enough so that a_n > 0 for all n ≥ N. Thus for all n ≥ N, 0 < 1/a_n < ε = 1/K. Therefore for all n ≥ N, a_n > K, so lim_{n→∞} a_n = ∞ as required. 2
Examples 5.1.7. One may invert the special null sequences of Section 4.1. For example, we have that, for any c > 0, n!/c^n → ∞ as n → ∞. Similarly, if c > 1, we have c^n/n^k → ∞ as n → ∞. (In both cases the proof is left as an exercise.)
Consider now the sequence (n! − 8^n)_{n∈N}. We have
1/(n! − 8^n) = 1/(n!(1 − 8^n/n!)) = (1/n!) · 1/(1 − 8^n/n!) → 0 · 1/(1 − 0) = 0
(by Lemma 4.1.4 with c = 8, and AoL). Also, the fact that 1 − 8^n/n! → 1 (as n → ∞) ensures that n! − 8^n > 0 for all sufficiently large n. Thus, by the Reciprocal Rule 5.1.6(ii), n! − 8^n → ∞ as n → ∞.
We now prove an Algebra of Limits Theorem for sequences tending to infinity.
Theorem 5.1.8. (i) Suppose that (an )n∈N and (bn )n∈N both tend to infinity. Then
(a) an + bn → ∞ as n → ∞;
(b) if c > 0, then c · an → ∞ as n → ∞;
(c) an · bn → ∞ as n → ∞.
(ii) (The Infinite Sandwich Rule.) If (bn )n∈N → ∞ as n → ∞, and (an )n∈N is any sequence
such that an ≥ bn for all sufficiently large n, then an → ∞ as n → ∞.
Proof. For (i)(b), let K > 0 be given. Then K/c > 0, so there exists N ∈ N such that a_n > K/c for all n ≥ N. Hence c · a_n > K for all n ≥ N, and we are done.
The rest of the proofs are left as exercises. 2
Definition 5.1.9. We say that a sequence (an )n∈N tends to −∞ as n → ∞, if for all K < 0,
there exists N ∈ N such that for all n ≥ N , an < K. This is written: an → −∞ as n → ∞.
[This is easily seen to be equivalent to saying that −an → ∞ as n → ∞.]
Example 5.1.10. The sequence ((−1)n · n)n∈N is unbounded (so does not converge) but neither
tends to ∞ nor to −∞ as n → ∞.
Similarly (8n − n!) → −∞ as n → ∞. Why? Because this is exactly the same statement as
saying that −(8n − n!) → +∞ as n → ∞. Which we proved in Examples 5.1.7.
Questions 6A:
(These will appear on the Exercise sheet for Week 7, but are also relevant to the Coursework
Exam.)
Do the following sequences converge/diverge/tend to infinity or tend to minus infinity?
(a) (cos(nπ) √n)_{n∈N}   (b) (sin(nπ) √n)_{n∈N}
(c) (√(n^2 + 2)/√n)_{n∈N}   (d) ((n^3 + 3n)/(n^2 + 2n))_{n∈N}
(e) ((n^2 + 2n)/(n^3 + 3n))_{n∈N}   (f) (1/(√n − √(2n)))_{n∈N}
Chapter 6
Subsequences
Looking at subsequences of a given sequence gives a practical test, Theorem 6.1.3, for non-
convergence. These also feature in two of the most important ideas and results in analysis,
namely Cauchy sequences and the Bolzano-Weierstrass theorem.
Example 6.1.2. The sequence (4^n)_{n∈N} is a subsequence of the sequence (n^2)_{n∈N}: we take k_n = 2^n. Then with a_n = n^2, we have a_{k_n} = (k_n)^2 = (2^n)^2 = 2^{2n} = 4^n.
Theorem 6.1.3. Let (an )n∈N be a sequence. For any subsequence (akn )n∈N of the sequence
(an )n∈N we have:
(i) if an → l as n → ∞, then akn → l as n → ∞;
(ii) if an → ∞ as n → ∞, then akn → ∞ as n → ∞;
(iii) if an → −∞ as n → ∞ then akn → −∞ as n → ∞.
Proof. (i) Let ε > 0 be given. Choose N ∈ N so that for all n ≥ N, |a_n − l| < ε. Now an easy induction shows that k_n ≥ n for all n ∈ N. Hence, if n ≥ N then k_n ≥ N, so |a_{k_n} − l| < ε. So the same N works for the sequence (a_{k_n})_{n∈N}.
The proofs of (ii) and (iii) are similar. 2
As mentioned above, the practical importance of Theorem 6.1.3 is that it gives us a method
of proving that certain sequences do not converge. We can either find two subsequences of the
given sequence that converge to different limits, or else one subsequence that converges to either
∞ or −∞.
Example 6.1.4. The sequence ((−1)^n + 1/n^2)_{n∈N} does not converge. For suppose it does, to l say. Consider the subsequence with k_n = 2n. This is the sequence ((−1)^{2n} + 1/(2n)^2)_{n∈N}, which is (1 + 1/(2n)^2)_{n∈N}, and this converges to 1. Hence, by Theorem 6.1.3(i), we must have l = 1.
But now consider the subsequence with k_n = 2n + 1, i.e. ((−1)^{2n+1} + 1/(2n + 1)^2)_{n∈N}, which is (−1 + 1/(2n + 1)^2)_{n∈N}, and this converges to −1. Hence by 6.1.3(i) we must have l = −1, a contradiction!
Example 6.1.5. The sequence (n/4 − [n/4])_{n∈N} does not converge. For suppose it does, to l say. Consider the subsequence with k_n = 4n, i.e. (4n/4 − [4n/4])_{n∈N}. This is the sequence with all terms equal to 0, which of course converges to 0, and hence by 6.1.3(i), l = 0. However, consider now the subsequence with k_n = 4n + 1, i.e. ((4n + 1)/4 − [(4n + 1)/4])_{n∈N}. Note that (4n + 1)/4 = n + 1/4, so [(4n + 1)/4] = n. So this subsequence has all terms equal to 1/4, and therefore it converges to 1/4. So by 6.1.3(i), l = 1/4, a contradiction.
Example 6.1.6. The sequence (n sin(nπ/2))_{n∈N} neither converges, nor tends to ∞, nor tends to −∞. For consider the subsequence with k_n = 4n + 1, i.e. ((4n + 1) sin((4n + 1)π/2))_{n∈N}. Now (4n + 1) sin((4n + 1)π/2) = (4n + 1) sin(2nπ + π/2) = (4n + 1) sin(π/2) = 4n + 1. So this subsequence is (4n + 1)_{n∈N}, which tends to ∞ as n → ∞.
But now consider the subsequence with k_n = 4n, i.e. the sequence (4n sin(4nπ/2))_{n∈N} = (4n sin(2nπ))_{n∈N}. Every term here is 0 (since sin(2nπ) = 0 for all n ∈ N). So this subsequence converges to 0. So by 6.1.3(i), (ii) and (iii), the original sequence does not converge, and nor does it tend to ∞ or −∞.
Example 6.1.7. Does lim_{n→∞} ([√n] − √n) exist?
Hint: you may assume that [√(m^2 + m)] = m.
Answer: Set a_n = [√n] − √n. We should use subsequences. One subsequence is pretty obvious: if we take k_n = n^2 then a_{k_n} = [√(n^2)] − √(n^2) = n − n = 0, so this subsequence clearly has limit 0.
For the second one we use the hint and take k_n = n^2 + n; thus
a_{k_n} = [√(n^2 + n)] − √(n^2 + n) = n − √(n^2 + n).
This is now one of those cases where we use the trick (x − y) = (x − y)(x + y)/(x + y). So, in this case,
n − √(n^2 + n) = (n − √(n^2 + n))(n + √(n^2 + n))/(n + √(n^2 + n)) = (n^2 − (n^2 + n))/(n + √(n^2 + n)) = −n/(n + √(n^2 + n)) = −1/(1 + √(1 + 1/n)).
Now, for example from the last Homework set, we know that lim_{n→∞} √(1 + 1/n) = √1 = 1. Hence by the AoL the last display has limit −1/(1 + 1) = −1/2.
Since we have two subsequences of the original sequence with different limits, the original sequence cannot have a limit.
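A quick numerical illustration (a sketch, assuming nothing beyond the example itself): evaluating a_n = [√n] − √n along the two subsequences k_n = n^2 and k_n = n^2 + n suggests the two different limits 0 and −1/2.

import math

def a(n):
    # a_n = floor(sqrt(n)) - sqrt(n)
    return math.floor(math.sqrt(n)) - math.sqrt(n)

for n in (10, 100, 1000):
    print(a(n * n), a(n * n + n))   # first column -> 0, second column -> -0.5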
(Here is the proof of the hint: since clearly m ≤ √(m^2 + m), simply observe that
[√(m^2 + m)] = m ⟺ √(m^2 + m) < m + 1 ⟺ m^2 + m < (m + 1)^2 = m^2 + 2m + 1,
which is certainly true.)
Theorem 6.2.1 (The Bolzano-Weierstrass Theorem (1817)). Every bounded sequence (an ) has
a convergent subsequence.
Remarks: (1) It is important to note that the Theorem is not saying that (an ) is convergent—
which is false in general. For instance take our favourite bad example (an = (−1)n ). So, for
this example one would have to take a genuine subsequence. Two obvious examples would be
(a2 = 1, a4 = 1, a6 = 1, ...) and (a1 = −1, a3 = −1, a5 = −1, ...)
(2) The way to think about the theorem is as a generalisation of the Monotone Convergence
Theorem—which says that if one has an increasing bounded sequence then it is convergent. So,
you can probably guess how one might try to prove this theorem: given a bounded sequence
(an ) then start with a1 , and look for some ak2 ≥ a1 and induct. If this is not possible then try
to do the same with descending sequences. In fact the proof needs to be a bit more subtle, but
it develops from this idea.
We now come to the problem of giving a necessary and sufficient condition on a sequence for
it to converge, without actually knowing what the limit might be. The following definition is
crucial and will feature in many different guises in your future Analysis and Topology courses.
Definition 6.2.2. A sequence (a_n)_{n∈N} is called a Cauchy sequence if for all ε > 0, there exists N ∈ N such that for all i, j ∈ N with i, j ≥ N, we have |a_i − a_j| < ε.
So this is saying that the terms of the sequence (an )n∈N are getting closer and closer together.
Then we have a central theorem.
It is important in the definition of a Cauchy sequence that we are saying that all the terms get close together. For example, if one defined a sequence inductively by a_1 = 1 and a_n = a_{n−1} + 1/n, then certainly for all ε > 0 we can find N such that |a_n − a_{n+1}| < ε for n ≥ N. But, in fact, this sequence is not convergent (we have seen an informal proof of this already and will see it again when we consider infinite series). So, the last theorem would not be true if we just assumed that successive terms got close together.
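To see this numerically (a sketch only, using the fact that these a_n are just the harmonic partial sums): the consecutive gaps 1/(n + 1) shrink, yet the terms a_n keep growing.

# a_1 = 1, a_n = a_{n-1} + 1/n: consecutive differences shrink but a_n is unbounded.
a = 1.0
for n in range(2, 100001):
    a += 1.0 / n
    if n in (10, 100, 1000, 10000, 100000):
        print(n, a, 1.0 / (n + 1))   # a_n keeps growing; the gap to the next term is tiny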
Theorem 6.2.4. (The Bolzano-Weierstrass Theorem). Any bounded sequence (an ) pos-
sesses a convergent subsequence. In other words there exist a sequence of numbers
Examples 6.2.5. The Theorem tells us nothing about what the subsequence {a_{k_r}} is, or about its limit µ. Some exercises might give you a feel for the vagaries inherent in the argument.
For the ascending chain a_n = 1 − 1/n we have M_r = 1 for all r, and {a_{k_r}} is just some (indeed any) subsequence of {a_n}. For a sequence like a_n = (−1)^n · 1/n there are lots of possible subsequences: you could take the descending chain (a_{2n} = 1/(2n))_{n∈N}, or the ascending chain (a_{2n+1} = −1/(2n + 1))_{n∈N}, or some messier combination of the two. The limit is also not uniquely determined; just think about the case of a_n = (−1)^n.
In fact one can extend the proof of this theorem a bit to prove the following:
Challenging Exercise: Prove that every sequence (an )n∈N has a subsequence (akn )n∈N which
is either increasing (i.e. for all n ∈ N, akn ≤ akn+1 ) or decreasing (i.e. for all n ∈ N,
akn ≥ akn+1 ).
We now consider Cauchy sequences as defined in Definition 6.2.2. We begin with:
Lemma 6.2.6. Every Cauchy sequence (a_n)_{n∈N} is bounded.
Proof. Take N such that, for i, j ≥ N we have |ai − aj | < 1. In particular, taking i = N we
see that
aN − 1 < aj < aN + 1 for all j ≥ N.
Thus an ≤ max{a1 , a2 , . . . , aN −1 , aN + 1} for all n. The lower bound is similar. 2
We know that an increasing sequence (an ) has a limit if and only if it is bounded (combine
the Monotone Convergence Theorem with Theorem 2.3.9). Using Cauchy sequences we can get
a general analogue:
Theorem 6.2.7. (The Cauchy Sequence Theorem) A sequence (an ) converges if and only
if it is a Cauchy sequence.
Proof. Suppose first that (a_n) is a Cauchy sequence and let ε > 0 be given. As so often, we will use our various rules for convergence with η = ε/2. By the Bolzano-Weierstrass Theorem we can at least find a convergent subsequence, say (a_{k_r} : r ∈ N) with lim_{r→∞} a_{k_r} = µ. If we just had a bounded sequence then there would be no reason why µ should equal lim_{n→∞} a_n; just think of our favourite bad sequence ((−1)^n). So somehow we have to use the Cauchy condition.
In particular, there exists N_0 such that |a_{k_r} − µ| < η = ε/2 for all k_r ≥ N_0. Also, by the Cauchy condition, pick N_1 such that |a_i − a_j| < η = ε/2 for all i, j ≥ N_1. Set N = max{N_0, N_1}. Then there is some k_r > N and so, for this k_r, we get |a_{k_r} − a_j| < ε/2 for all j ≥ N. But we also know that |a_{k_r} − µ| < ε/2 for this k_r. By the triangle inequality these combine to give
|a_j − µ| ≤ |a_j − a_{k_r}| + |a_{k_r} − µ| < ε/2 + ε/2 = ε for all j ≥ N.
Thus lim_{n→∞} a_n = µ, as we wanted.
Conversely, suppose that (a_n) converges, say with lim_{n→∞} a_n = ν. Let ε > 0 be given. Once again we use η = ε/2. Thus there exists N such that |a_j − ν| < ε/2 for all j ≥ N. But now we find that for i, j ≥ N we have
|a_i − a_j| ≤ |a_i − ν| + |a_j − ν| < ε/2 + ε/2 = ε.
So we have a Cauchy sequence. 2
Chapter 7
L’Hôpital’s Rule
We will use L’Hôpital’s Rule, which some of you will have seen before. We can’t mathematically
justify the rule in this course unit, simply because it involves differentiation of functions and
we have not yet given the rigorous definition of when a function is differentiable nor of what
the derivative is. That will be done in the second year course on Real Analysis. But the rule is
very useful, so we will use it.
We consider two sequences (a_n)_{n∈N} and (b_n)_{n∈N}, both tending to infinity. L'Hôpital's Rule gives a method for calculating lim_{n→∞} a_n/b_n under certain circumstances.
First Version: Suppose that f and g are differentiable functions with f(n) = a_n and g(n) = b_n for all n, and assume that
lim_{n→∞} a_n = ∞ = lim_{n→∞} b_n. (†)
Then lim_{n→∞} a_n/b_n = lim_{n→∞} f′(n)/g′(n), assuming that the right-hand side exists. Here g′ = dg/dx.
Second Version: L'Hôpital's Rule also holds if you replace (†) by
lim_{n→∞} a_n = 0 = lim_{n→∞} b_n. (††)
Since we do not yet have a rigorous definition of the derivative, we’re not in a position
to have a rigorous proof of the validity of L’Hôpital’s Rule, though you may have seen some
arguments for it when studying Calculus. Anyway, we will take it as correct (which it is) and
see some examples of its use.
Example 7.1.1. Say a_n = 2n + 3 and b_n = 3n + 2. Take f(x) = 2x + 3 and g(x) = 3x + 2, so that clearly a_n/b_n = f(n)/g(n) and the hypotheses of the rule are satisfied. Thus L'Hôpital's Rule tells us that
lim_{n→∞} a_n/b_n = lim_{n→∞} f′(n)/g′(n) = lim_{n→∞} 2/3 = 2/3.
Example 7.1.2. Let a_n = 3n^2 − 4n + 2 and b_n = (n + 1)^2. Here we apply the rule to the functions f(x) = 3x^2 − 4x + 2 and g(x) = (x + 1)^2, noting that they satisfy the conditions (twice-differentiability and the latter having nonzero derivative for large enough x). We obtain
lim_{n→∞} (3n^2 − 4n + 2)/(n + 1)^2 = lim_{n→∞} (6n − 4)/(2(n + 1)).
Still, top and bottom both go to ∞ as n → ∞. But we may apply the rule again to obtain
lim_{n→∞} (6n − 4)/(2(n + 1)) = lim_{n→∞} 6/2 = 3.
Of course, in these examples one may also use the previously discussed method of dividing
top and bottom by the fastest-growing term. However, L’Hôpital’s Rule really comes into its
own when applied to examples like the following. (They do all satisfy the differentiability
condition and the condition, on the denominator, of being nonzero for large enough x, though
we do not note this explicitly.)
Example 7.1.3. Consider the sequence (log n / n)_{n∈N}. Before applying L'Hôpital's Rule we should note that log n → ∞ as n → ∞.
So we may indeed apply the Rule in this example (with f(x) = log x and g(x) = x) and we see that
lim_{n→∞} (log n)/n = lim_{n→∞} (1/n)/1 = lim_{n→∞} 1/n = 0.
Example 7.1.4. Similarly (with f(x) = log x and g(x) = log(x + 1)):
lim_{n→∞} (log n)/(log(n + 1)) = lim_{n→∞} (1/n)/(1/(n + 1)) = lim_{n→∞} (n + 1)/n = lim_{n→∞} (1 + 1/n) = 1.
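As a rough numerical check of the last two examples (a sketch only; it does not replace the rule), one can evaluate both quotients for large n:

import math

for n in (10, 1000, 100000):
    print(n, math.log(n) / n, math.log(n) / math.log(n + 1))
# The first quotient drifts towards 0 and the second towards 1, as L'Hopital's Rule predicts.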
Proof: First, pick a natural number K ≥ k. Since n^K c^n ≥ n^k c^n ≥ 0, the Sandwich Rule says we need only prove that lim_{n→∞} n^K c^n = 0. Secondly, we rewrite this as lim_{n→∞} n^K/d^n = 0 (for d = 1/c > 1). This ensures that we are in a situation where we can apply L'Hôpital's Rule, which gives
lim_{n→∞} n^K/d^n = lim_{n→∞} (K n^{K−1})/(ln(d) d^n).
By induction on K the right-hand side has limit 0 (where one starts the induction at K − 1 = 0).
The rule for logs is similar: we are discussing lim_{n→∞} ln(n)/n^c, for any c > 0. Again we can apply the rule to get
lim_{n→∞} ln(n)/n^c = lim_{n→∞} (1/n)/(c n^{c−1}) = lim_{n→∞} 1/(c n^c) = 0.
Part II
Series
Chapter 8
Introduction to Series
In this chapter we give a rigorous definition of infinite sums of real numbers. We are interested in expressions of the form
Σ_{i=1}^∞ a_i = a_1 + a_2 + · · · + a_n + · · ·
and need to make sense of such expressions. This is closely related to sequences since this "infinite sum" is (by definition! as you will see) the limit of the sequence of partial sums (s_n), where
s_n = Σ_{i=1}^n a_i = a_1 + a_2 + · · · + a_n.
You have probably seen that Σ_{i=0}^∞ 1/2^i = 2. To see that: an easy induction shows that s_n = Σ_{i=0}^n 1/2^i = 2 − 1/2^n, and then the limit of the sequence (s_n)_n of partial sums is 2.
However, there are some surprising subtleties in these series. One of the basic tricky ones is the Harmonic Series, Σ_{n=1}^∞ 1/n, which equals ∞. That might seem counterintuitive but, to see it, collect terms together to get:
Σ_{n=1}^∞ 1/n = 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + · · ·      (8.1)
              ≥ 1 + 1/2 + 1/2 + 1/2 + · · · .
So this gives an infinite sum of halves, which is therefore infinite. In contrast, we will see that
Σ_{n=1}^∞ 1/n^{1.001} < ∞.
8.1 The Basic Definitions
For real numbers a_n, an infinite series is an expression of the form
Σ_{n=1}^∞ a_n (also written a_1 + a_2 + a_3 + · · · + a_n + · · · ).
This may also be written Σ_{n≥1} a_n or even just Σ a_n.
Given such a series, we form the sequence of partial sums (s_n)_{n∈N} defined by setting
s_n = a_1 + a_2 + · · · + a_n = Σ_{i=1}^n a_i.
If s_n → s as n → ∞, we then say that the series Σ_{n=1}^∞ a_n is convergent with sum s, and write Σ_{n=1}^∞ a_n = s. If there is no real number s with this property, that is if the sequence (s_n) does not have a limit, then we say that the series Σ_{n=1}^∞ a_n is divergent. If the series is divergent but lim_{n→∞} s_n = ∞, then we say that Σ_{n=1}^∞ a_n = ∞ (similarly with −∞).
The Harmonic Series, revisited. Let's first check that the harmonic series (8.1) really does have sum ∞. What we have seen is that for any K there exists M (in fact M = 2^{2K} works) such that Σ_{n=1}^M 1/n contains (more than) a sum of 2K copies of 1/2. In other words, if m ≥ M then
s_m = Σ_{n=1}^m 1/n ≥ Σ_{n=1}^M 1/n ≥ K,
which is just what one needs to prove in order to see that Σ_{n=1}^∞ 1/n = ∞.
Remark. Sometimes (as in the next example), it is more convenient to start the series/sequence at 0, thus looking at series of the form Σ_{n=0}^∞ a_n = a_0 + a_1 + · · · .
Example 8.1.1 (Geometric series). Let r be a real number with |r| < 1. Then the series Σ_{n=0}^∞ r^n converges with sum 1/(1 − r).
Proof. Here, a_n = r^n and s_n = a_0 + a_1 + · · · + a_n = 1 + r + · · · + r^n. So
s_n = 1 + r + · · · + r^n  ⇒  r s_n = r + r^2 + · · · + r^{n+1}.
Subtracting, we obtain
(1 − r) s_n = 1 − r^{n+1}
because all the other terms cancel. Hence
s_n = (1 − r^{n+1})/(1 − r).
Now since −1 < r < 1, r^{n+1} → 0 as n → ∞ (use 4.1.2). Hence (1 − r^{n+1})/(1 − r) → (1 − 0)/(1 − r) = 1/(1 − r) (by AoL). Thus s_n → 1/(1 − r) as n → ∞. So, by definition, Σ_{n=0}^∞ r^n = 1/(1 − r). 2
Remark: This proof only works for |r| < 1. It might be tempting to put in other values of r, since the right-hand side is well defined for any r with r ≠ 1. But this gives nonsense; e.g. "Σ_{n=0}^∞ 2^n = 1/(1 − 2) = −1" is definitely not correct.
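A short numerical sketch of both points: for |r| < 1 the partial sums approach 1/(1 − r), while for r = 2 they blow up rather than approaching −1.

def partial_sum(r, n):
    # s_n = 1 + r + r^2 + ... + r^n
    return sum(r ** k for k in range(n + 1))

for r in (0.5, 1 / 3, 2):
    print(r, partial_sum(r, 50), 1 / (1 - r))
# For r = 0.5 and r = 1/3 the partial sum matches 1/(1-r) closely; for r = 2 it is huge, not -1.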
Example 8.1.2 (Using partial fractions). Consider the series Σ_{n=1}^∞ 1/(n(n + 1)).
Here a_n = 1/(n(n + 1)) and s_n = a_1 + a_2 + · · · + a_n = 1/(1·2) + 1/(2·3) + 1/(3·4) + · · · + 1/(n(n + 1)).
A trick that sometimes works is to use partial fractions on the terms. Here we use the identity
1/(x(x + 1)) = 1/x − 1/(x + 1),
valid for x ≠ 0, −1. Hence
s_n = (1/1 − 1/2) + (1/2 − 1/3) + (1/3 − 1/4) + · · · + (1/n − 1/(n + 1)).
Now all the terms cancel apart from the first and the last, so we obtain
s_n = 1 − 1/(n + 1) → 1 − 0 = 1 as n → ∞.
Thus Σ_{n=1}^∞ 1/(n(n + 1)) = 1.
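A quick numerical check of the telescoping formula (a sketch, nothing more): the partial sums s_n agree with 1 − 1/(n + 1) and approach 1.

def s(n):
    # partial sum of 1/(k(k+1)) for k = 1..n
    return sum(1.0 / (k * (k + 1)) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, s(n), 1 - 1 / (n + 1))   # the two columns agree and tend to 1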
Remark: It is very important in Example 8.1.2 that one only rearranges terms in a finite sum rather than rearranging terms in the infinite sum Σ_{n=1}^∞ 1/(n(n + 1)). The reason is that, with infinite sums, you can get all sorts of strange different answers by arranging things in different ways; the details can be found in Section 12.2.
It is quite rare for one to be able to find an exact formula for the nth partial sum (i.e. for the number s_n) as we did in these two examples. So, just as we did with the study of sequences, we need to build up a stock of general theorems that will help us to tell when a series does and does not converge without explicitly computing the partial sums. Our first result along these lines states that a series cannot converge unless the terms (i.e. the a_n) form a null sequence. Note, however, that the converse of this statement is false in general, as we saw with Σ_{n=1}^∞ 1/n. Here is a similar type of example.
Example 8.1.3 (of non-convergence). Consider the series Σ_{n=1}^∞ 1/√n. Here, a_n = 1/√n and we have shown (see Example 3.2.3) that (a_n)_{n∈N} is a null sequence. However, the series Σ_{n=1}^∞ a_n does not converge. For s_n = 1/√1 + 1/√2 + 1/√3 + · · · + 1/√n, from which we deduce that s_n ≥ (number of terms) · (smallest term) = n · 1/√n = √n.
Thus s_n ≥ √n for all n ∈ N, so s_n → ∞ as n → ∞ by the Infinite Sandwich Rule (since √n → ∞ as n → ∞). Hence, by definition, the series Σ_{n=1}^∞ 1/√n does not converge.
Theorem 8.1.4 (The nth term Test). If Σ_{n=1}^∞ a_n is a convergent series, then (a_n)_{n∈N} is a null sequence.
Proof. Suppose that Σ_{n=1}^∞ a_n = s. Let s_n = a_1 + · · · + a_n be the nth partial sum. So s_n → s as n → ∞. Hence s_{n−1} → s as well (see Lemma 4.1.3), and so by AoL, lim_{n→∞} (s_n − s_{n−1}) = s − s = 0. But s_n − s_{n−1} = a_n. So a_n → 0 as n → ∞, i.e. (a_n)_{n∈N} is a null sequence. 2
Proof. (i) For n ∈ N, let s_n = Σ_{k=1}^n a_k and t_n = Σ_{k=1}^n b_k be the partial sums, and similarly let u_n = Σ_{k=1}^n (a_k + b_k) be the partial sums for the series Σ_{n=1}^∞ (a_n + b_n).
Then, rearranging terms,
u_n = (Σ_{k=1}^n a_k) + (Σ_{k=1}^n b_k) = s_n + t_n for n ∈ N.
Chapter 9
In this chapter we establish some key facts about series with non-negative terms. The theory
of such series is very much easier than the general case where negative terms are allowed.
Theorem 9.1.2 (The Comparison Test). If 0 ≤ a_n ≤ b_n for all n ∈ N and Σ_{n=1}^∞ b_n converges, then Σ_{n=1}^∞ a_n converges.
Equivalently, if 0 ≤ a_n ≤ b_n for all n ∈ N and Σ_{n=1}^∞ a_n diverges, then Σ_{n=1}^∞ b_n diverges.
Exercise 9.1.3. Let (a_n)_{n∈N} be any sequence and let N be any positive integer. Show that the series Σ_{n=1}^∞ a_n is convergent if and only if the series Σ_{n≥N} a_n (which is the same as the series Σ_{n=1}^∞ a_{n+N−1}) is convergent.
Remark: This exercise is done in the Exercise sheet for Week 9, and essentially means that, for any given test, you need only apply the test for "large n". Let's make this precise with:
Slightly Improved Comparison Test: Let N ∈ N. If 0 ≤ a_n ≤ b_n for all n ≥ N and Σ_{n=1}^∞ b_n converges, then Σ_{n=1}^∞ a_n converges.
Proof. By (9.1.3), Σ_{n=1}^∞ b_n converges ⇒ Σ_{n≥N} b_n = Σ_{m≥1} b_{N+m−1} converges ⇒ Σ_{n≥N} a_n = Σ_{m≥1} a_{N+m−1} converges (by the Comparison Test) ⇒ Σ_{n≥1} a_n converges (by 9.1.3 again). 2
Example 9.1.4. The series Σ_{n=1}^∞ 1/n^2 is convergent. For we have that 1/(n + 1)^2 ≤ 1/(n(n + 1)) for all n ∈ N, and hence, by Example 8.1.2 and the Comparison Test, Σ_{n=1}^∞ 1/(n + 1)^2 is convergent. Since this is just the series Σ_{n≥2} 1/n^2, the convergence of the series Σ_{n=1}^∞ 1/n^2 follows from Exercise 9.1.3.
This example shows the value of the Comparison Test: it was straightforward to calculate the partial sums of the series Σ_{n=1}^∞ 1/(n(n + 1)) and hence calculate explicitly the sum of this series (thereby establishing its convergence). But it is impossible to give a neat formula for the partial sums of the series Σ_{n=1}^∞ 1/n^2, so we resort to comparing its terms with those of the series Σ_{n=1}^∞ 1/(n(n + 1)) in order to establish its convergence. In fact, as mentioned in the introduction to this course,
Σ_{n=1}^∞ 1/n^2 = π^2/6,
but the proof of this requires methods from next year's complex analysis course.
Example 9.1.5. For any p ≥ 2, the series Σ_{n=1}^∞ 1/n^p is convergent. To see this we simply observe that 1/n^p ≤ 1/n^2 for all n ∈ N (since p ≥ 2) and apply the Comparison Test to the previous example.
Similarly we have:
Example 9.1.6. For any p ≤ 1, the series Σ_{n=1}^∞ 1/n^p is divergent. For suppose that p ≤ 1 and, for a contradiction, that Σ_{n=1}^∞ 1/n^p is convergent. Since 1/n ≤ 1/n^p for all n ∈ N, we would have, by the Comparison Test, that Σ_{n=1}^∞ 1/n is convergent. But this contradicts Equation 8.1.
Remark: We've left the case 1 < p < 2 open but, later, we will show that Σ_{n=1}^∞ 1/n^p converges if p > 1 but diverges if p ≤ 1.
We now come to an important test for the convergence of series.
Theorem 9.1.7 (The Ratio Test for series). Suppose that a_n > 0 for all n ∈ N and assume that a_{n+1}/a_n → l as n → ∞.
(i) If l < 1, then Σ_{n=1}^∞ a_n is convergent.
(ii) If l > 1, then Σ_{n=1}^∞ a_n is divergent.
Remark: If l = 1, no conclusion can be drawn. For example, if a_n = 1/n^2 or a_n = 1/n, then in both cases we have that a_{n+1}/a_n → 1 as n → ∞. But in the first case Σ_{n=1}^∞ a_n converges, whereas in the second case it diverges.
Proof. (i) Suppose that l < 1. We want to compare our series with the geometric series Σ r^n for some r. We need a little room to manoeuvre, and taking r = l wouldn't give us that, so we will take a slightly bigger number for r.
So, choose r so that l < r < 1 (e.g. take r = (1 + l)/2, the average of l and 1) and set ε = r − l. Then ε > 0, so we may choose N ∈ N so that l − ε < a_{n+1}/a_n < l + ε for all n ≥ N. But ε = r − l says that l + ε = r and so, as a_n > 0, we get 0 < a_{n+1} < r · a_n for n ≥ N. In particular
a_{N+1} < r · a_N, and a_{N+2} < r · a_{N+1} < r^2 · a_N,
and so on. So (by induction) we obtain
a_{N+m} < r^m · a_N for all m ≥ 1. (*)
But as 0 < r < 1 it follows from Example 8.1.1 that Σ_{n≥0} r^n converges. Now we are basically done: by the Algebra of Infinite Sums 8.1.5(ii), a_N Σ_{n≥0} r^n converges and so, from (*) and the Slightly Improved Comparison Test, Σ_{n≥1} a_n converges, as required.
(ii) Now suppose that l > 1. In this case we choose r so that 1 < r < l and, by following the same procedure as above, we obtain an N ∈ N such that for all n ≥ N we have a_n > r^{n−N} · a_N = r^n · (a_N/r^N). But, since r > 1, this implies that a_n → ∞ as n → ∞ and, in particular, it implies that (a_n)_{n∈N} is not a null sequence. So Σ_{n=1}^∞ a_n diverges by Theorem 8.1.4. 2
Example 9.1.8. Consider the series Σ_{n=1}^∞ n^2/2^n.
Let a_n = n^2/2^n. Then
a_{n+1}/a_n = ((n + 1)^2/2^{n+1}) · (2^n/n^2) = (1/2) · (1 + 1/n)^2.
So, by AoL,
a_{n+1}/a_n → (1/2) · (1 + 0)^2 = 1/2 as n → ∞.
Since 1/2 < 1, it follows from the Ratio Test that Σ_{n=1}^∞ n^2/2^n converges.
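A small numerical sketch of this example: the ratios a_{n+1}/a_n settle near 1/2 and the partial sums level off (they in fact approach 6, though that value is not needed here).

def a(n):
    return n * n / 2.0 ** n

print([a(n + 1) / a(n) for n in (1, 2, 5, 10, 50)])   # ratios approach 0.5
print(sum(a(n) for n in range(1, 60)))                # partial sum is close to 6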
Example 9.1.9. Let x be any positive real number and consider the series Σ_{n≥0} x^n/n!.
Here, a_n = x^n/n!, so
a_{n+1}/a_n = (x^{n+1}/(n + 1)!) · (n!/x^n) = x/(n + 1).
But x/(n + 1) → 0 as n → ∞ and 0 < 1. So, by the Ratio Test, Σ_{n≥0} x^n/n! converges.
Aside: We can define e^x to be the sum Σ x^n/n!. One can then show (directly from this definition) that e^{x+y} = e^x · e^y. This justifies the notation, and also implies its consistency with our previous discussion of exponentiation (see Section 2.6).
Example 9.1.10 (The Bouncing Ball). Suppose that a rubber ball is dropped from a height of 1 metre and that each time it bounces it rises to a height of 2/3 of the previous height. How far does it travel before it stops bouncing (and yes, it does stop)?
Answer: First it drops 1 m. Then it rises up and drops 2/3 m. Then it rises up and drops (2/3)^2 m, and so on. So the total distance travelled is
1 + 2 × (2/3) + 2 × (2/3)^2 + 2 × (2/3)^3 + · · ·
= 1 + 2 × (2/3) × (1 + (2/3) + (2/3)^2 + · · ·) = 1 + (4/3) · 1/(1 − 2/3) = 5.
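A sketch that adds up the first few hundred bounces numerically; the running total settles at 5 metres, as the geometric-series calculation predicts.

total = 1.0            # the initial 1 m drop
height = 2.0 / 3.0     # height reached after the first bounce
for _ in range(200):
    total += 2 * height        # up and back down
    height *= 2.0 / 3.0
print(total)                   # prints 5.0 (to machine precision)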
For the Integral Test we consider functions f : [1, ∞) → R satisfying three conditions: (i) f(x) > 0 for all x ≥ 1; (ii) f is decreasing; (iii) f is continuous. Examples of functions f(x) satisfying (i), (ii) and (iii) are 1/x, 1/x^2, 1/(x log(1 + x)) and 2^{−x}.
Theorem 9.2.1 (The Integral Test). Let f : [1, ∞) → R be a function satisfying the three conditions above. Then the series
Σ_{n=1}^∞ f(n)
converges if and only if the sequence (∫_1^n f(x) dx)_{n∈N} converges as n → ∞.
This in turn happens if and only if the sequence (∫_1^n f(x) dx)_{n∈N} is bounded.
Before giving the proof here are some examples of how the test works.
Example 9.2.2. Consider the series Σ_{n=1}^∞ 1/n. We shall show that it diverges.
Let f : [1, ∞) → R be the function f(x) = 1/x, which clearly satisfies our three conditions. Now
∫_1^n f(x) dx = ∫_1^n (1/x) dx = [log x]_1^n = log n − log 1 = log n.
But (as in Example 7.1.3) (log n)_{n∈N} is a divergent sequence, so by the Integral Test, Σ_{n=1}^∞ 1/n is a divergent series.
Here is the important example which was left partly unresolved in the previous chapter:
Example 9.2.3. If p is a real number with p > 1, then Σ_{n=1}^∞ 1/n^p is a convergent series.
Here we take f(x) = 1/x^p. This clearly satisfies the three conditions for the Integral Test to be applicable. We have
∫_1^n f(x) dx = ∫_1^n x^{−p} dx = [x^{−p+1}/(−p + 1)]_1^n = n^{−p+1}/(−p + 1) − 1/(−p + 1) = (1/(p − 1)) (1 − 1/n^{p−1}).
Now since p > 1, we have that 1/n^{p−1} → 0 as n → ∞. So
∫_1^n (1/x^p) dx → (1/(p − 1)) (1 − 0) = 1/(p − 1) as n → ∞.
In particular, the sequence (∫_1^n (1/x^p) dx)_{n∈N} is convergent (with limit 1/(p − 1)). Hence, by the Integral Test, the series Σ_{n=1}^∞ 1/n^p is convergent (for p > 1).
Exercise 9.2.4. Use the Integral Test to prove that the series Σ_{n=1}^∞ 1/n^p is divergent if p < 1. (We also proved this in Example 9.1.6.)
Proof of the Integral Test: First of all, it is clear that ∫_1^n f(x) dx ≤ ∫_1^{n+1} f(x) dx, and so the sequence of integrals is an increasing sequence. Hence the second paragraph of the Theorem is nothing more than the Monotone Convergence Theorem.
So we look at the first assertion. The basic idea is that the integral ∫_{x=1}^n f(x) dx is the area under the curve for 1 ≤ x ≤ n. This we can approximate by summing the areas of a series of rectangles of width 1. The point of the proof is that this sum is (close to) the partial sum we want. To make this precise we can either draw it (which I will do in the lectures, since it is probably easier to understand) or give a more algebraic proof, as follows.
Let n ∈ N and consider the two rectangles A, B in the xy-plane given by
Σ_{n=1}^∞ f(n) converges ⟺ Σ_{n=K}^∞ f(n) converges ⟺ (∫_K^n f(x) dx)_{n≥K} converges.
Example 9.2.6. The series Σ_{n=2}^∞ n^2/(n^3 − 1) diverges.
Proof: Consider the function f : [2, ∞) → R defined by f(x) = x^2/(x^3 − 1).
Clearly f(x) > 0 for x ≥ 2 and f is continuous. To see that f is decreasing, we can use either calculus or algebra. Using calculus is easier: if f(x) = x^2/(x^3 − 1) then (after simplification) f′(x) = (−x^4 − 2x)(x^3 − 1)^{−2}. Clearly this is negative for x > 1 and so our function f(x) decreases.
For an algebraic proof, suppose that 2 ≤ x ≤ y. Then
x^2/(x^3 − 1) ≥ y^2/(y^3 − 1) ⟺ x^2 y^3 − x^2 ≥ y^2 x^3 − y^2 ⟺ y^2 − x^2 ≥ y^2 x^3 − x^2 y^3 ⟺ (y − x)(y + x) ≥ y^2 x^2 (x − y).
Since the last statement here is true (for 2 ≤ x ≤ y), we can reverse the bi-implications. This shows that f is indeed decreasing.
So we may apply the Integral Test (in the form of 9.2.5). Now ∫_2^n f(x) dx = ∫_2^n x^2/(x^3 − 1) dx. In order to evaluate the integral we make the substitution u = x^3 − 1. Thus du = 3x^2 dx and the new limits are u = 7 to u = n^3 − 1. Thus
∫_2^n f(x) dx = (1/3) ∫_7^{n^3−1} du/u = (1/3) [ln u]_7^{n^3−1} = (1/3)(ln(n^3 − 1) − ln 7).
Since (1/3)(ln(n^3 − 1) − ln 7) → ∞ as n → ∞, the sequence (∫_2^n f(x) dx)_{n≥2} diverges and hence so does the series Σ_{n=2}^∞ n^2/(n^3 − 1), by the Integral Test.
So we paraphrase Theorem 9.2.1 as saying that Σ_{n=1}^∞ f(n) converges ⟺ ∫_1^∞ f(x) dx converges.
(Finally, note that talking about improper integrals usually involves a somewhat different kind of limit, namely the limit of F(r) as the real number r tends to infinity. Also, if the conditions (i)–(iii) do not hold, then you have to be much more careful with the definitions and properties of improper integrals.)
Exercise 9.2.8. If f is a function to which the Integral Test applies and if it tells us that we have convergence, then we certainly cannot conclude that Σ_{n=1}^∞ f(n) = ∫_1^∞ f(x) dx.
Examine the proof of the Integral Test to show that
Σ_{n=2}^∞ f(n) ≤ ∫_1^∞ f(x) dx ≤ Σ_{n=1}^∞ f(n)
whenever either side converges. Show, in fact, that we always have strict inequality here.
Example 9.2.9. Consider the series Σ_{n=2}^∞ 1/(n ln n). To investigate convergence we take f(x) = 1/(x ln x) in the Integral Test and consider the integral ∫_2^n 1/(x ln x) dx. In order to evaluate it we make the substitution u = ln x. Then du = dx/x and the new range of integration is from u = ln 2 to u = ln n. Thus
∫_2^n 1/(x ln x) dx = ∫_{ln 2}^{ln n} du/u = [ln u]_{ln 2}^{ln n} = ln(ln n) − ln(ln 2).
Now (exercise) ln(ln n) → ∞ as n → ∞, and so the sequence (∫_2^n 1/(x ln x) dx)_{n∈N} diverges. Hence, by the Integral Test, the series Σ_{n=2}^∞ 1/(n ln n) diverges.
n ln n
Exercise 9.2.10. Let p ∈ R. Prove that the series
Σ_{n=2}^∞ 1/(n (ln n)^p)
converges if and only if p > 1.
(We take the lower limit of summation to be 3 here because ln ln 2 is negative and so (ln ln 2)^p might not be defined.)
Final Remark: In the Integral Test you do have to be careful to check that conditions (i)–(iii) hold. The trouble is that without them it is easy to get silly counterexamples. For example, suppose we take the function f(x) = 1 + cos((2x + 1)π); this function has been chosen so that f(n) = 0 for all n and f(x) ≥ 0 for all x ≥ 0. So, here Σ f(n) = 0 + · · · + 0 = 0 is certainly convergent, whereas the integral ∫_{x=1}^∞ f(x) dx = ∞.
Exercises 9.2.11. No doubt when you first saw the techniques of integration it took a while to get the intuition about which technique to use for which example. The same goes for our techniques for testing the convergence of infinite series Σ a_n (and sequences). The only way to develop this intuition is to do lots of examples. Here are some examples, with partial hints about how to approach them:
Remember the geometric series and the series Σ n^p. Can you easily compare the given sequence with these?
If there is a dominant term of the form n! or c^n, the Ratio Test will probably work. But something like Σ n!/(n + 1)! or Σ n!/(n + 2)! simplifies to Σ 1/(n + 1) or Σ 1/((n + 2)(n + 1)) respectively, so nothing is perfect. If lim_{n→∞} a_n ≠ 0 then the "Test for Divergence" (meaning 9.1.1) will work. If the function can easily be integrated, do so. If the terms alternate between positive and negative, see the next chapter.
So, what about:
(1) Σ_{n=1}^∞ (n − 1)/(2n + 1). Answer: Diverges by 9.1.1, since lim_{n→∞} a_n ≠ 0.
(2) Σ_{n=1}^∞ √(n^3 + 1)/(3n^3 + 4n^2 + 2). Answer: Converges by comparison with Σ b_n for b_n = √(n^3)/n^3 = n^{−3/2}, which converges as 3/2 > 1. Here you have to put a little work into the comparison test but, since we want to get convergence, we should show that a_n = √(n^3 + 1)/(3n^3 + 4n^2 + 2) ≤ λ b_n for some λ. So certainly
Σ_{n=1}^∞ √(n^3 + 1)/(3n^3 + 4n^2 + 2) ≤ Σ_{n=1}^∞ √(n^3 + 1)/(3n^3) = (1/3) Σ_{n=1}^∞ √(n^{−3} + n^{−6}) ≤ (2/3) Σ_{n=1}^∞ √(n^{−3}).
(4) Σ_{n=1}^∞ 2^n/n! and Σ_{n=1}^∞ n!/2^n. Answer: The Ratio Test will work to show that the first converges and the second diverges. (There is an easier test to use for the second one; do you see it?)
(5) Σ_{n=1}^∞ 1/(2 + 3^n). Answer: Since this is closely related to a geometric series, we should use the Comparison Test to compare it to Σ 1/3^n and deduce that it converges.
Chapter 10
Proof. Let
s_n = Σ_{k=1}^n (−1)^{k+1} a_k
be a typical partial sum. Then
s_{2n} = a_1 − a_2 + a_3 − a_4 + · · · + a_{2n−1} − a_{2n}
      = a_1 − (a_2 − a_3) − (a_4 − a_5) − · · · − (a_{2n−2} − a_{2n−1}) − a_{2n}.
Notice that all the terms in brackets here are non-negative since the sequence (a_n)_{n∈N} is decreasing. Also a_{2n} > 0. It follows that s_{2n} ≤ a_1 for all n ∈ N.
Also s_{2(n+1)} = s_{2n} + a_{2n+1} − a_{2n+2}. Since a_{2n+1} ≥ a_{2n+2}, we obtain that s_{2(n+1)} ≥ s_{2n} for all n ∈ N. Hence (s_{2n})_{n≥1} is an increasing sequence which is bounded above (by a_1). Therefore it converges by the Monotone Convergence Theorem. Let its limit be ℓ. So s_{2n} → ℓ as n → ∞.
Now consider the sequence (s_{2n+1})_{n≥1}. Then s_{2n+1} = s_{2n} + a_{2n+1}, and both of the sequences (s_{2n}) and (a_{2n+1}) converge. So, by the Algebra of Limits Theorem,
lim_{n→∞} s_{2n+1} = lim_{n→∞} s_{2n} + lim_{n→∞} a_{2n+1} = ℓ + 0 = ℓ.
It now follows easily in this situation that s_n → ℓ as n → ∞ as well, which (by definition) implies that Σ_{n=1}^∞ (−1)^{n+1} a_n converges, as required.
In more detail, let ε > 0 be given. Choose N_1 so that |s_{2n} − ℓ| < ε for all 2n ≥ N_1, and N_2 so that |s_{2n+1} − ℓ| < ε for all 2n + 1 ≥ N_2. Then clearly |s_m − ℓ| < ε for all m ≥ N = max{N_1, N_2}. So, yes, s_n → ℓ as n → ∞ as required. 2
Remark: Sometimes it is useful to slightly modify the series in the Alternating series. The
same argument will work (or we can use the version proved) to deduce either of the following
slightly modified versions of the theorem:
(A) Let (an )n∈N be a decreasing sequence of positive terms such that an → 0 as n → ∞.
Then the series
∞
X
(−1)n an
n=1
converges.
(B) Let (an )n≥0 be a decreasing sequence of positive terms such that an → 0 as n → ∞.
Then the series
∞
X
(−1)n an
n=0
converges.
Remark: It might be tempting to say that any alternating sum converges. But that
P is false: nLet
an be positive numbers such that an ≥ 0 for all n and (an ) is decreasing. Then n≥1 (−1) an
converges if and only if (an ) is null.
Proof. The direction ⇐ is The Alternating Series Test 10.1.1. The direction ⇒ is the “Nullity
Theorem 8.1.4.” 2
Example 10.1.2. Let p ∈ R. Then Σ_{n=1}^∞ (−1)^{n+1}/n^p converges if and only if p > 0.
Proof: If p ≤ 0, then for all n, |(−1)^{n+1}/n^p| = n^{−p} ≥ 1. So ((−1)^{n+1}/n^p)_{n∈N} is not a null sequence and hence the series Σ_{n=1}^∞ (−1)^{n+1}/n^p cannot converge, by 8.1.4.
On the other hand, if p > 0, then (1/n^p)_{n∈N} is a decreasing null sequence of positive terms. Hence the series Σ_{n=1}^∞ (−1)^{n+1}/n^p converges by the Alternating Series Test.
We end this section with a precise formula for the alternating sum Σ_{n=1}^∞ (−1)^{n+1}/n. The last step in this argument requires the definition of an integral as a limit of sums of areas. If you have not seen this before, do not worry, since it is not something required for this course. But the computation (pointed out by Carolyn Dean) is fun and interesting. This computation is not something you need to remember for this course.
Example 10.1.3. The series Σ_{n=1}^∞ (−1)^{n+1}/n converges to ln(2).
Proof. This follows from a more careful analysis of the partial sums
s_{2N} = Σ_{n=1}^{2N} (−1)^{n+1}/n = 1 − 1/2 + 1/3 − · · · + 1/(2N − 1) − 1/(2N).
Claim: s_{2N} = 1/(N + 1) + 1/(N + 2) + · · · + 1/(2N).
Proof of the Claim: It is clear that it holds for N = 1, since s_2 = 1 − 1/2 = 1/2. So suppose that it holds for some N. Then, by definition,
s_{2(N+1)} = 1 − 1/2 + 1/3 − · · · + 1/(2N + 1) − 1/(2N + 2)
          = s_{2N} + 1/(2N + 1) − 1/(2N + 2)
          = 1/(N + 1) + 1/(N + 2) + · · · + 1/(2N) + 1/(2N + 1) − 1/(2N + 2) (by the inductive hypothesis)
          = 1/(N + 2) + · · · + 1/(2N) + 1/(2N + 1) + 1/(2N + 2) (by a simple manipulation, since 1/(N + 1) − 1/(2N + 2) = 1/(2N + 2)),
which is the Claim for N + 1.
Finally, rewriting the Claim as s_{2N} = (1/N) Σ_{k=1}^N 1/(1 + k/N), this is a Riemann sum for the integral ∫_{x=0}^1 1/(1 + x) dx, where one divides the interval [0, 1] into N subdivisions. (You might not have seen this before, but it's a simple enough idea, and will be done properly next year.) So, by definition,
lim_{N→∞} s_{2N} = ∫_{x=0}^1 1/(1 + x) dx = [ln(1 + x)]_0^1 = ln(2).
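A numerical sketch of this example: the even partial sums s_{2N}, and equally the tail sums from N + 1 to 2N, creep up towards ln 2 ≈ 0.6931.

import math

def s(m):
    # partial sum of the alternating harmonic series, 1 - 1/2 + 1/3 - ...
    return sum((-1) ** (k + 1) / k for k in range(1, m + 1))

for N in (10, 100, 1000):
    print(2 * N, s(2 * N), sum(1 / k for k in range(N + 1, 2 * N + 1)))
print(math.log(2))   # both columns approach log(2) = 0.693147...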
10.2 Absolute Convergence
Suppose that we are given a series Σ_{n=1}^∞ a_n where some of the a_n are positive and some negative. Then (apart from the Alternating Series Test) there are not so many good rules for deciding whether Σ_{n=1}^∞ a_n converges or not. For example, something like
1 + 1/2 − 1/3 + 1/4 + 1/5 − 1/6 + · · ·
(where every third term is negative) is not something covered by one of our rules. The one other rule we will have is that the series Σ_{n=1}^∞ a_n does converge provided the series Σ_{n=1}^∞ |a_n| converges. Before stating the result we make a definition.
Definition 10.2.1. Let Σ_{n=1}^∞ a_n be any series. We say that Σ_{n=1}^∞ a_n is absolutely convergent if the series Σ_{n=1}^∞ |a_n| is convergent. If Σ_{n=1}^∞ a_n is convergent, but not absolutely convergent, then we say it is conditionally convergent.
For example, the series Σ_{n=1}^∞ (−1)^{n+1}/n^2 is absolutely convergent because Σ_{n=1}^∞ |(−1)^{n+1}/n^2| = Σ_{n=1}^∞ 1/n^2, which is convergent. However, the series Σ_{n=1}^∞ (−1)^{n+1}/n is conditionally convergent because it converges (see Example 10.1.2) but Σ_{n=1}^∞ |(−1)^{n+1}/n| = Σ_{n=1}^∞ 1/n, which diverges.
The definition above rather presupposes that absolute convergence is stronger than convergence, and we next give a proof of this fact.
Theorem 10.2.2. If the series Σ_{n=1}^∞ a_n is absolutely convergent, then it is convergent.
Proof. We are given that Σ_{n=1}^∞ |a_n| converges. Let
p_n = a_n if a_n ≥ 0, and p_n = 0 if a_n < 0.
Then Σ_{n=1}^∞ p_n is a series of non-negative terms and p_n ≤ |a_n| for all n ∈ N. Hence Σ_{n=1}^∞ p_n converges by the Comparison Test. Similarly, if
q_n = |a_n| if a_n < 0, and q_n = 0 if a_n ≥ 0,
then Σ_{n=1}^∞ q_n converges.
Hence, by the Algebra of Infinite Sums Theorem 8.1.5, Σ_{n=1}^∞ (p_n − q_n) is convergent. But p_n − q_n = a_n for all n ∈ N, and we are done. 2
Example 10.2.3. Σ_{n=1}^∞ (1/n^2) sin(nπ/4) converges absolutely and hence converges.
Proof: The sine terms are 1/√2, 1, 1/√2, 0, −1/√2, −1, . . ., so the Alternating Series Test does not help. However, as |sin(x)| ≤ 1, we do know that 0 ≤ |(1/n^2) sin(nπ/4)| ≤ 1/n^2 for all n. Hence Σ_{n=1}^∞ (1/n^2) sin(nπ/4) converges absolutely by the Comparison Test, and so it converges. 2
The same sort of argument means that we can modify some of our earlier tests to work for series with positive and negative terms. For example:
Theorem 10.2.4 (The Modified Ratio Test). Suppose that the a_n, for n ∈ N, are any non-zero real numbers and assume that |a_{n+1}/a_n| → l as n → ∞.
(i) If l < 1, then Σ_{n=1}^∞ a_n is absolutely convergent and hence convergent.
(ii) If l > 1, then Σ_{n=1}^∞ a_n is divergent.
(iii) If l = 1, then we still cannot conclude whether Σ_{n=1}^∞ a_n is convergent or divergent.
Proof. (i) In this case the Ratio Test 9.1.7 says that Σ_{n≥1} |a_n| converges; i.e. Σ_{n=1}^∞ a_n is absolutely convergent.
(ii) In this case we know from the proof of 9.1.7 that |a_n| → ∞ as n → ∞. So, certainly a_n does not tend to 0 as n → ∞. Thus, by Theorem 8.1.4, Σ_{n=1}^∞ a_n is divergent. 2
Example 10.2.5. Σ_{n=1}^∞ x^n (and indeed Σ_{n=1}^∞ (−1)^n x^n) converges if |x| < 1 and diverges if |x| > 1. In this case, one can also check that it diverges when |x| = 1.
Final Comments. In Exercises 9.2.11 we saw how one might approach testing series for convergence. Of course, there, the series all had positive terms. So how should one approach general series Σ_{n=1}^∞ a_n? In a sense the rules are easier, as there are really only three possibilities:
1. Is lim_{n→∞} a_n = 0? If not, then the series diverges by the nth term Test, Theorem 8.1.4.
2. Does the Alternating Series Test 10.1.1 apply? If so, use it!
3. Otherwise you had better hope that the Absolute Convergence Test (Theorem 10.2.2) applies, in which case you are back to the ideas of Chapter 9. As we will see in the next chapter, one of the most important cases where this applies is when one can use the Modified Ratio Test 10.2.4.
For examples of all these cases, see the next Exercise Sheet.
Chapter 11
Power Series
We now consider (the simplest case of) series where the nth term depends on a real variable x, and we ask for which values of x the series converges.
Definition 11.0.6. A series of the form Σ_{n=1}^∞ a_n x^n (also written Σ_{n≥1} a_n x^n) is called a power series (in the variable x).
For example, as we have mentioned before, we can write e^x = Σ_{n≥0} (1/n!) x^n as a power series. Similarly, the geometric series Σ_{n=1}^∞ x^n is a power series (with a_n = 1 for all n). As we saw in Example 10.2.5, this converges absolutely for |x| < 1 and diverges otherwise. For another example, consider
Example 11.0.7. Σ_{n=1}^∞ (1/n) x^n.
Let's use the (modified) Ratio Test to study this example. Since it is customary to write the series as Σ_{n=1}^∞ (1/n) x^n, as we have done, when using the tests we should set (say) c_n = (1/n) x^n and apply that rule to Σ_{n=1}^∞ c_n. So,
|c_{n+1}/c_n| = |x^{n+1}/(n + 1)| · |n/x^n| = |x| · n/(n + 1) → |x| as n → ∞.
Thus, if |x| < 1 the series converges absolutely, by the Modified Ratio Test 10.2.4, while it diverges if |x| > 1. When |x| = 1 we get two more cases:
If x = 1 the series is Σ_{n=1}^∞ 1/n, which we know diverges.
If x = −1 the series is Σ_{n=1}^∞ (−1)^n/n, which we know converges (by the Alternating Series Test 10.1.1 and the Remark just after that theorem).
The pattern we see in these last two examples is actually a general phenomenon: using the Ratio Test we find that, for any series Σ_{n=1}^∞ a_n x^n, there exists some number R ≥ 0 such that the series converges absolutely if |x| < R, diverges if |x| > R, and may or may not converge when |x| = R.
(In the last two examples R = 1, but we will see other examples where it can be any positive number. We will also see examples where R = 0 and R = ∞.)
11.1 The Radius of Convergence of a Power Series
The main result concerning convergence of power series is the following.
Theorem 11.1.1. Let Σ_{n=1}^∞ a_n x^n be a power series. Then there are three possibilities concerning convergence.
Case (i): Σ_{n=1}^∞ a_n x^n converges only for x = 0.
Case (ii): Σ_{n=1}^∞ a_n x^n converges for all x ∈ R.
Case (iii): There is a real number R > 0 such that
(a) Σ_{n=1}^∞ a_n x^n converges absolutely for all x with |x| < R, and
(b) Σ_{n=1}^∞ a_n x^n diverges for all x with |x| > R.
for all n ∈ N. But Σ_{n=1}^∞ M t^n is convergent (since the geometric series Σ_{n=1}^∞ t^n is convergent, as 0 < t < 1) and hence Σ_{n=1}^∞ |a_n u^n| is convergent by the Comparison Test. Thus Σ_{n=1}^∞ a_n u^n is absolutely convergent, as required. 2
Now S is not empty, because b ∈ S by Lemma 11.1.2. Also S is bounded above by c. (Since if c′ ∈ S for some c′ > c then, by definition of S, Σ_{n=1}^∞ |a_n c^n| would converge, and hence so would Σ_{n=1}^∞ a_n c^n, by 10.2.2 - a contradiction.)
Thus S is a non-empty set of real numbers which is bounded above. Hence, by the Completeness Property 2.4.6, it has a supremum, R say. We now show that this R has the desired properties.
(a) Suppose that x ∈ R and |x| < R. Then there is some y ∈ S with |x| < y (by Definition 2.4.1). But then Σ_{n=1}^∞ |a_n x^n| converges by the definition of S. That is, the series Σ_{n=1}^∞ a_n x^n converges absolutely.
(b) Suppose that x ∈ R and |x| > R. Choose any y with |x| > y > R. Suppose, for a contradiction, that Σ_{n=1}^∞ a_n x^n converges. Then, by 11.1.2, Σ_{n=1}^∞ |a_n u^n| converges for all u with |u| < |x|. In particular, this series converges for all u with |u| < y, which implies that y ∈ S. But this is impossible since y > R = sup(S).
There is one last thing to prove, namely that the R above is unique. It is left as an exercise to show that there can be at most one real number R having properties (a) and (b). 2
Definition 11.1.3. If the real number R has the property that the series Σ_{n=1}^∞ a_n x^n converges for all x with |x| < R, and diverges for all x with |x| > R, then R is called the radius of convergence (RoC for short) of the power series Σ_{n=1}^∞ a_n x^n.
We also extend this definition by putting R = 0 if Σ_{n=1}^∞ a_n x^n only converges for x = 0, and putting R = ∞ if Σ_{n=1}^∞ a_n x^n converges for all x ∈ R.
By the theorem, this definition does cover all possible cases, and Σ_{n=1}^∞ a_n x^n converges absolutely for |x| < R.
The set of all x at which the power series converges is called the interval of convergence. Certainly this contains (−R, R), and it may or may not contain the end points ±R.
Example 11.1.4. Σ_{n=1}^∞ x^n - Here we already saw that the radius of convergence (RoC) is R = 1 and the interval of convergence is (−1, 1).
Example 11.1.5. Σ_{n=1}^∞ (1/n) x^n - Here we already saw that the RoC is R = 1 and the interval of convergence is [−1, 1); that is, it converges if x = −1 but diverges if x = +1.
Example 11.1.6. Consider the series Σ_{n=1}^∞ n x^n. To find the RoC we let c_n = |n x^n|. Then Σ_{n=1}^∞ c_n is a series of positive terms and
c_{n+1}/c_n = ((n + 1)|x|^{n+1})/(n|x|^n) = (1 + 1/n) · |x| → (1 + 0) · |x| = |x| as n → ∞.
So the (Modified) Ratio Test now tells us that if |x| < 1 then Σ_{n=1}^∞ n x^n converges absolutely, whereas if |x| > 1 then it diverges. This shows that the RoC is 1. It is left for you to check that it diverges if x = ±1.
Example 11.1.7. Consider the series Σ_{n=1}^∞ ((3n)!/(n!)^3) x^n. We shall show that its radius of convergence is 1/27.
As above we let c_n = |((3n)!/(n!)^3) x^n|. Then c_n > 0 (for x ≠ 0) and
c_{n+1}/c_n = ((3n + 1)(3n + 2)(3n + 3))/((n + 1)^3) · |x|
           = ((3 + 1/n)(3 + 2/n)(3 + 3/n))/((1 + 1/n)(1 + 1/n)(1 + 1/n)) · |x|
           → ((3 + 0)(3 + 0)(3 + 0))/((1 + 0)(1 + 0)(1 + 0)) · |x| (as n → ∞)
           = 27|x|.
So, by the Ratio Test, Σ_{n=1}^∞ c_n converges for 27|x| < 1, i.e. for |x| < 1/27, and diverges for |x| > 1/27. Therefore Σ_{n=1}^∞ ((3n)!/(n!)^3) x^n converges for |x| < 1/27 and diverges for |x| > 1/27. Thus, as in the previous example (and, indeed, all examples like this), we conclude that the RoC of the series Σ_{n=1}^∞ ((3n)!/(n!)^3) x^n is 1/27.
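As a rough numerical sketch: the coefficient ratios a_{n+1}/a_n for a_n = (3n)!/(n!)^3 approach 27, which is another way of seeing that the radius of convergence is 1/27.

from math import factorial

def coeff(n):
    # a_n = (3n)!/(n!)^3, always an integer (a multinomial coefficient)
    return factorial(3 * n) // factorial(n) ** 3

for n in (5, 20, 100):
    print(n, coeff(n + 1) / coeff(n))   # ratios approach 27, so the RoC is 1/27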
Example 11.1.8. Consider the series Σ_{n=0}^∞ x^n/n!. (It is often more natural to start the series at n = 0 rather than, as we have been doing, at n = 1. This makes no difference at all to convergence issues. In particular, it does not affect the radius of convergence.)
Here we let c_n = |x^n/n!| and, just as in Example 9.1.9, we see that c_{n+1}/c_n → 0 as n → ∞, no matter what x is. Since 0 < 1, it follows that the radius of convergence of this series (the exponential series) is ∞.
Chapter 12
This chapter discusses some results that will be useful for next year even if they are getting a
little outside the scope of this year’s course. In particular, you are not required to know these
results for the exams.
Case (i): Σ_{n=1}^∞ a_n (x − α)^n converges only for x = α.
Case (ii): Σ_{n=1}^∞ a_n (x − α)^n converges for all x ∈ R.
Case (iii): There is a real number R > 0 such that
(a) Σ_{n=1}^∞ a_n (x − α)^n converges absolutely for all x with |x − α| < R, and
(b) Σ_{n=1}^∞ a_n (x − α)^n diverges for all x with |x − α| > R.
Proof. Set y = x − α. Then our series becomes Σ_{n=1}^∞ a_n y^n. So, now apply Theorem 11.1.1 to that series and you will find that the conclusion you get is exactly the present theorem. For example, if Σ_{n=1}^∞ a_n y^n has radius of convergence R, then Σ_{n=1}^∞ a_n y^n converges if |y| < R, which is the same as saying that Σ_{n=1}^∞ a_n (x − α)^n = Σ_{n=1}^∞ a_n y^n converges if |x − α| < R. Similarly it diverges if |x − α| > R. 2
Here is a rearrangement with sum ln(2)/2. To do this we use the rearrangement where we always have two negative terms in succession but only one positive one:
1 − 1/2 − 1/4 + 1/3 − 1/6 − 1/8 + 1/5 − · · ·
= 1 − 1/2 − 1/4 + 1/3 − 1/6 − 1/8 + 1/5 − 1/10 − 1/12 − · · ·
= 1/2 − 1/4 + 1/6 − 1/8 + 1/10 − 1/12 − · · ·
= (1/2)(1 − 1/2 + 1/3 − 1/4 + 1/5 − · · ·) = ln(2)/2.
In fact we can get any number we want in this way.
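A small numerical sketch of this rearrangement (one positive term followed by two negative ones): its partial sums settle near ln(2)/2 ≈ 0.3466, while the usual ordering gives ln 2 ≈ 0.6931.

import math

terms = []
for k in range(1, 2001):
    terms.append(1.0 / (2 * k - 1))      # one positive term: 1/(2k-1)
    terms.append(-1.0 / (4 * k - 2))     # two negative terms: -1/(4k-2) and -1/(4k)
    terms.append(-1.0 / (4 * k))

print(sum(terms), math.log(2) / 2)       # the rearranged sum is close to ln(2)/2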
Proposition 12.2.2. Suppose that Σ_{n≥1} a_n is a conditionally convergent series, and let α be any real number. Then there exists some rearrangement Σ_{n≥1} b_n of this series for which Σ_{n≥1} b_n = α.
Proof. (Outline) We start with some notation, essentially coming from Question 3 on the Exercise sheet for Week 11:
Notation 12.2.3. Given a series Σ_{n≥1} a_n, write
a_n^+ = a_n if a_n ≥ 0, and a_n^+ = 0 if a_n ≤ 0;  a_n^− = 0 if a_n ≥ 0, and a_n^− = a_n if a_n ≤ 0.
By that exercise sheet, the series Σ a_n^+ and Σ a_n^− are both divergent. Hence Σ a_n^+ = ∞, since all its terms are non-negative and, similarly, Σ a_n^− = −∞.
Now we construct the sequence {b_n}: since Σ a_n^+ = ∞ we can take the first few positive terms a_{k_1}, a_{k_2}, . . . , a_{k_r} so that X_1 = a_{k_1} + a_{k_2} + · · · + a_{k_r} > α. (More precisely, we choose the smallest possible r with this property.) Then, as Σ a_n^− = −∞, we can add to this the first few negative terms a_{k_{r+1}}, a_{k_{r+2}}, . . . , a_{k_s} so that
X_2 = (a_{k_1} + a_{k_2} + · · · + a_{k_r}) + a_{k_{r+1}} + a_{k_{r+2}} + · · · + a_{k_s} < α.
(Again we take s minimal with this property.) Now keep going: adding positive terms a_j to get bigger than α, then immediately adding more negative ones so that the sum becomes less than α, then immediately adding more positive terms, and so on.
Finally, we must check that the sum actually converges to α. This is the same idea as in the proof of the Alternating Series Test, but the notation is messy. More formally, we have constructed the numbers X_1 > α, then X_2 < α, then X_3 > α, etc. Let a_{ℓ_u} be the last number added to make X_u; thus, in the last paragraph, a_{ℓ_1} = a_{k_r} while a_{ℓ_2} = a_{k_s}. The key point to notice is that, as each a_{ℓ_j} is chosen minimally, |X_j − α| ≤ |a_{ℓ_j}| in each case. But, by the nth term Test, Theorem 8.1.4, |a_{ℓ_j}| → 0 as j → ∞. Therefore, by the Sandwich Theorem 3.1.4, |X_j − α| → 0 as j → ∞, which is exactly what we wanted to prove. 2
Strangely enough, for absolutely convergent series, taking rearrangements does not change
the sum.
Hence {v_t}_{t≥1} is an ascending and bounded sequence. Thus, by the Monotone Convergence Theorem, it has a limit, say B. Notice also that B ≤ A by Lemma 4.2.5. In particular this also shows that Σ b_n is absolutely convergent (if not, then Question 3 of the Exercise sheet for Week 11 would imply that Σ b_n^+ = ∞). So we can reverse this argument and see that each u_ℓ ≤ B, and hence that A ≤ B.
In other words, A = B.
Now repeat this argument for the a_n^− (with decreasing sequences of partial sums) to show that Σ_{n≥1} a_n^− = Σ_{n≥1} b_n^−. Now Σ_{n≥1} a_n = Σ_{n≥1} a_n^+ + Σ_{n≥1} a_n^− (use Question 3(a) of the Exercise sheet for Week 11, again), and similarly Σ_{n≥1} b_n = Σ_{n≥1} b_n^+ + Σ_{n≥1} b_n^−. Therefore we conclude that
Σ_{n≥1} a_n = Σ_{n≥1} a_n^+ + Σ_{n≥1} a_n^− = Σ_{n≥1} b_n^+ + Σ_{n≥1} b_n^− = Σ_{n≥1} b_n.
One significant application of this result is that it tells us what happens when we multiply power series. To set this up, recall the equation e^x · e^y = e^{x+y}. In terms of power series this should mean that
(Σ_{n≥0} x^n/n!) (Σ_{n≥0} y^n/n!) = Σ_{n≥0} (x + y)^n/n!.
("Should" because when you construct exponentials formally in next year's Real Analysis course you will proceed in the opposite direction: you define the exponential e^x as the power series Σ_{n=0}^∞ x^n/n!. This, for example, solves the problem of what exactly e is, but it does mean that rules like e^x e^y = e^{x+y} are no longer "obvious".)
Anyway, this displayed equation is true and actually works much more generally. To prove this, we start with an analogous result for series. Here it is more natural to start summing our power series at n = 0 rather than n = 1, and so the next few results will be phrased in that way.
Theorem 12.2.5. Let Σ_{n=0}^∞ a_n and Σ_{n=0}^∞ b_n be absolutely convergent series, say with Σ_{n=0}^∞ a_n = A and Σ_{n=0}^∞ b_n = B. Write Σ_{i,j≥0} a_i b_j for the infinite sum consisting of the product, in any order, of every term of the first series multiplied by every term of the second series. Then Σ_{i,j≥0} a_i b_j is absolutely convergent (in the sense that the sum Σ_{i,j≥0} |a_i b_j| is also convergent) and Σ_{i,j≥0} a_i b_j = AB.
Remark: One use of this theorem is when we need to make sense of a summation with two indices in an expression like Σ_{i,j≥0} a_i b_j. The problem is that one does not want to take something like Σ_{j=0}^∞ (Σ_{i=0}^∞ a_i b_j), since now one has an infinite sum of infinite sums, which causes a whole new set of problems! We get around this by making it into a single infinite sum Σ_{m≥0} c_m. But then one runs into the problem that there are many different ways of choosing the sum. The theorem says this does not matter, since whatever way you do it you get the same answer.
Proof. Recall from the Foundations of Pure Mathematics course that Z_{≥0} × Z_{≥0} is countable, so pick your favourite bijection φ : Z_{≥0} → Z_{≥0} × Z_{≥0}, and write it as φ(m) = (φ_1(m), φ_2(m)). Now set c_m = a_{φ_1(m)} b_{φ_2(m)}; thus the definition of Σ_{i,j≥0} a_i b_j in the statement of the theorem just means Σ_{m≥0} c_m. Then, by considering the partial sums as in the previous theorem, we see that the partial sums γ_t = Σ_{m=0}^t |c_m| form an increasing sequence of non-negative terms and certainly
γ_t ≤ X = (Σ_{n=0}^∞ |a_n|)(Σ_{n=0}^∞ |b_n|) for t ∈ N.
Therefore, by the Monotone Convergence Theorem again, the sequence {γ_t} has a limit bounded above by X. In other words
γ = Σ_{m=0}^∞ |c_m| ≤ X = (Σ_{n=0}^∞ |a_n|)(Σ_{n=0}^∞ |b_n|).
But we can now apply Theorem 12.2.4 to conclude that, however we ordered the terms, Σ_{i,j≥0} a_i b_j must give the same number. Now, one way of constructing the doubly infinite sum is for the (N + 1)^2-th partial sum to be (Σ_{n=0}^N a_n)(Σ_{n=0}^N b_n). Since
(Σ_{n=0}^N a_n)(Σ_{n=0}^N b_n) → AB as N → ∞,
it follows that in fact our sum Σ_{i,j≥0} a_i b_j equals AB, as well.
Notice that, since we only get one possible sum, Proposition 12.2.2 says the series must be absolutely convergent. 2
Corollary 12.2.6 (Cauchy Product). Let Σ_{n=0}^∞ a_n and Σ_{n=0}^∞ b_n be absolutely convergent series, say with Σ_{n=0}^∞ a_n = A and Σ_{n=0}^∞ b_n = B. Set c_n = Σ_{r=0}^n a_r b_{n−r} for each n. Then Σ_{n=0}^∞ c_n converges absolutely, with Σ_{n=0}^∞ c_n = AB.
Proof. Once again, the given sequence of partial sums Σ_{n=0}^t c_n is a subsequence of some sequence of partial sums constructed from Σ_{i,j≥0} a_i b_j. Therefore, by Theorem 6.1.3, it also converges to AB. Once again, as we only get one possible sum, Proposition 12.2.2 says the series must be absolutely convergent. 2
As you might imagine, the Cauchy Product Theorem 12.2.6 and its variants can be used to
prove lots of other product formulas.
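For a concrete, purely numerical sketch of the Cauchy product, take a_n = x^n/n! and b_n = y^n/n!: the Cauchy product coefficients c_n should then sum to e^{x+y}. The values of x, y and the truncation level N below are arbitrary sample choices.

import math

x, y, N = 1.0, 2.0, 30          # assumed sample values and truncation level
a = [x ** n / math.factorial(n) for n in range(N)]
b = [y ** n / math.factorial(n) for n in range(N)]
c = [sum(a[r] * b[n - r] for r in range(n + 1)) for n in range(N)]

print(sum(c), math.exp(x + y))  # both are approximately e^{x+y} = e^3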
Functions that can be obtained this way are called analytic (on (−R, R)).
One can now show that this analytic function f can be differentiated and that f′(x) is given by the result of differentiating the series term by term:
f′(x) = Σ_{n=0}^∞ n a_n x^{n−1}.
A fundamental fact is that the differentiated series has the same radius of convergence as the original series, namely R. So we may differentiate again:
f″(x) = Σ_{n=0}^∞ n(n − 1) a_n x^{n−2},
and so on for higher derivatives. Now the negative exponents of x do not actually appear (as their coefficients contain a factor 0, so those terms vanish) and we usually rewrite the series for the kth derivative (by changing the variable of summation from n to n + k) as
f^{(k)}(x) = Σ_{n=0}^∞ a_{n+k} (n + k)(n + k − 1)(n + k − 2) · · · (n + 1) x^n.
Now if we put x = 0 in this expression then all the terms except the first vanish, and we obtain the following formula for the coefficients of the original series in terms of the function f:
f^{(k)}(0) = k! a_k, or
a_k = f^{(k)}(0)/k!.
So, to summarise, analytic functions behave very nicely with respect to differentiation (and
integration and almost all other operations) and the coefficients an can easily be determined.
These functions therefore are very convenient to work with.
In the other direction, suppose now that, rather than a power series, we are given a function f : (−R, R) → R which can be differentiated as many times as we please. Does it follow that it is an analytic function? Certainly we know what its power series must be, by the formula above, namely
Σ_{n=0}^∞ (f^{(n)}(0)/n!) x^n.
This power series is called the Taylor series of the function f and one might guess that it
converges to f (x) for all x ∈ (−R, R). If you have calculated examples of Taylor series before,
like for ex , sin x or ln(1 + x), then this has always been the case. And, indeed, it is true with
any “reasonable” function.
However, one can easily write down functions for which the Taylor series does not converge (except at x = 0) and, rather more shockingly, there are also examples of functions f : R → R whose Taylor series converges for all x (i.e. the radius of convergence is ∞) but does not converge to f(x) ...