Math185NotesWeek3
Linear Autonomous Systems. We have seen that every system of first-order autonomous equations
$$\frac{dx}{dt} = f(x, y), \qquad \frac{dy}{dt} = g(x, y)$$
can be solved, at least in theory. But in practice, the method we have outlined is highly inefficient. We will now describe a more effective method for solving systems of the following general form:
$$\frac{dx}{dt} = a_{11} x + a_{12} y + b_1, \qquad \frac{dy}{dt} = a_{21} x + a_{22} y + b_2$$
A system of this form is said to be a linear, autonomous system of equations. Since we will only be considering autonomous systems, we will often drop the adjective "autonomous". However, it is possible to consider non-autonomous linear systems as well (in which case the coefficients $a_{ij}$ and $b_i$ would be functions of $t$, rather than constants).
Any linear system can always be written in matrix form, like this:
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$$
Often we will also use the following shorthand:
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}, \qquad A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad \vec{b} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}, \quad \vec{x} = \begin{pmatrix} x \\ y \end{pmatrix}$$
It will also be useful to distinguish between two separate kinds of linear systems. If $\vec{b} = \vec{0}$, we say that the system is homogeneous, and if $\vec{b} \neq \vec{0}$, we say that the system is inhomogeneous.
For example, the linear system
$$\frac{dx}{dt} = x + y, \qquad \frac{dy}{dt} = x - y$$
is homogeneous, because it can be written in matrix form without a constant term:
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
Similarly, the linear system
$$\frac{dx}{dt} = x + y + 1, \qquad \frac{dy}{dt} = x - y - 1$$
is inhomogeneous, because its matrix form has a nonzero constant term:
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
There are two important initial observations that must be made about linear systems:
(1) Suppose that $\vec{x}_1(t)$ and $\vec{x}_2(t)$ are two solutions of a homogeneous system:
$$\frac{d\vec{x}_i}{dt} = A\vec{x}_i, \qquad i = 1, 2.$$
If $c_1$ and $c_2$ are arbitrary constants, and
$$\vec{x}(t) = c_1 \vec{x}_1(t) + c_2 \vec{x}_2(t),$$
then $\vec{x}(t)$ is a solution of the same homogeneous system:
$$\frac{d\vec{x}}{dt} = A\vec{x}.$$
In other words, solutions of homogeneous equations can be superimposed.
(2) An equilibrium solution of an inhomogeneous system is a constant vector $\vec{x}_{eq}$ satisfying $A\vec{x}_{eq} + \vec{b} = \vec{0}$. If the matrix $A$ is not invertible, there may not be any equilibrium solutions¹ - we'll ignore this case. Assuming we can find an equilibrium solution, the general solution of the inhomogeneous system
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}$$
will be given by
$$\vec{x}(t) = \vec{x}_{eq} + \vec{x}_h,$$
where $\vec{x}_h$ is a solution of the homogeneous system,
$$\frac{d}{dt}\vec{x}_h = A\vec{x}_h.$$
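As a quick numerical sanity check (a NumPy sketch of mine, not part of the notes' method), the equilibrium of the inhomogeneous example above can be found by solving $A\vec{x}_{eq} = -\vec{b}$:

```python
import numpy as np

# Equilibrium of dx/dt = A x + b satisfies A x_eq + b = 0,
# i.e. A x_eq = -b. A and b come from the inhomogeneous example above.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, -1.0])

x_eq = np.linalg.solve(A, -b)
print(x_eq)  # equilibrium at (0, -1)

# Verify: plugging x_eq back in gives zero velocity.
assert np.allclose(A @ x_eq + b, 0.0)
```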
Therefore, the rest of this week will be focused on solving homogeneous equations. Our strategy will be to apply the superposition principle - we will obtain solutions $\vec{x}_1(t)$ and $\vec{x}_2(t)$ by "guessing", and then we will superimpose them to obtain a general solution of the homogeneous equation,
$$\vec{x}_h(t) = c_1 \vec{x}_1(t) + c_2 \vec{x}_2(t).$$
In cases where we want to solve an initial value problem
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}, \qquad \vec{x}(0) = \vec{x}_0,$$
we will write the general solution of the inhomogeneous equation in the form
$$\vec{x}(t) = \vec{x}_{eq} + c_1 \vec{x}_1(t) + c_2 \vec{x}_2(t),$$
and we will determine $c_1$ and $c_2$ by setting $t = 0$ and solving the resulting system of equations:
$$\vec{x}_0 = \vec{x}_{eq} + c_1 \vec{x}_1(0) + c_2 \vec{x}_2(0).$$
As long as the vectors $\vec{x}_1(0)$ and $\vec{x}_2(0)$ form a basis of $\mathbb{R}^2$, this will always be possible.
¹An example of a system which does not have an equilibrium solution is $\frac{dx}{dt} = 1$, $\frac{dy}{dt} = x$. It's a nice exercise to find all solutions of this system and plot them in the xy plane.
Homogeneous Linear Systems: Real Eigenvalue Case. In this section we will begin to address the problem of finding two solutions $\vec{x}_1(t)$ and $\vec{x}_2(t)$ of a linear, homogeneous, autonomous system
$$\frac{d\vec{x}}{dt} = A\vec{x},$$
such that $\vec{x}_1(0)$ and $\vec{x}_2(0)$ form a basis of $\mathbb{R}^2$.
As a concrete example, consider the homogeneous system
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
To understand what the solutions of this system look like, we first compute the slope of its velocity field,
$$\frac{dy}{dx} = \frac{-2x + y}{x - 2y},$$
and we write down its isocline equations:
$$\frac{-2x + y}{x - 2y} = C = \text{constant}.$$
Simplifying this equation, we find that all isoclines are lines through the origin:
$$-2x + y = C(x - 2y) \implies (1 + 2C)y = (2 + C)x.$$
We can see this effect when we plot the velocity field:
[Figure: velocity field of the system, with isoclines through the origin]
Notice that there are two isoclines which are everywhere parallel to the velocity field. One of these is the isocline with slope 1:
$$C = 1 \implies (1 + 2)y = (2 + 1)x \implies y = x.$$
Geometrically, the fact that the velocity field is parallel to this line tells us that our system of equations has a special solution of the form
$$\begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} = f_1(t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} f_1(t) \\ f_1(t) \end{pmatrix}.$$
To solve for the unknown function $f_1(t)$, we can substitute into both sides of our system:
$$\frac{d}{dt}\begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} = \begin{pmatrix} f_1'(t) \\ f_1'(t) \end{pmatrix}$$
$$\begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} f_1(t) \\ f_1(t) \end{pmatrix} = \begin{pmatrix} -f_1(t) \\ -f_1(t) \end{pmatrix}.$$
From this we see that $f_1(t)$ must satisfy the differential equation
$$f_1'(t) = -f_1(t),$$
and therefore must take the general form
$$f_1(t) = c_1 e^{-t}$$
for some constant $c_1$.
An important thing to notice here is that
$$\lim_{t \to \infty} \begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} = \lim_{t \to \infty} \begin{pmatrix} c_1 e^{-t} \\ c_1 e^{-t} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
This is consistent with the velocity field: along the line $y = x$, the velocity field points directly inward, towards the origin.
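This first special solution is easy to check numerically. The sketch below (a NumPy check of mine, not part of the notes) verifies that $A$ sends $(1,1)$ to $-(1,1)$, and that $e^{-t}(1,1)$ satisfies the system at a sample time:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [-2.0, 1.0]])
v = np.array([1.0, 1.0])

# Along y = x the velocity field is parallel to the line: A v = -v.
assert np.allclose(A @ v, -v)

# Check that x1(t) = e^{-t} (1,1) satisfies dx/dt = A x by comparing
# a central difference with A x at t = 0.5.
x1 = lambda t: np.exp(-t) * v
t, h = 0.5, 1e-6
assert np.allclose((x1(t + h) - x1(t - h)) / (2 * h), A @ x1(t), atol=1e-6)
print("x1(t) = e^{-t}(1,1) solves the system")
```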
By similar reasoning, we can find a solution which travels along the isocline with slope $C = -1$:
$$C = -1 \implies (1 - 2)y = (2 - 1)x \implies y = -x.$$
This solution takes the form
$$\vec{x}_2(t) = f_2(t)\begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
and the function $f_2(t)$ can be found using the same process:
$$\frac{d}{dt}\begin{pmatrix} x_2(t) \\ y_2(t) \end{pmatrix} = \begin{pmatrix} f_2'(t) \\ -f_2'(t) \end{pmatrix}$$
$$\begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} f_2(t) \\ -f_2(t) \end{pmatrix} = \begin{pmatrix} 3f_2(t) \\ -3f_2(t) \end{pmatrix}.$$
From this we see that $f_2(t)$ must satisfy the differential equation
$$f_2'(t) = 3f_2(t),$$
and therefore must take the general form
$$f_2(t) = c_2 e^{3t}$$
for some constant $c_2$. This time (for $c_2 > 0$), we have
$$\lim_{t \to \infty} f_2(t) = \infty.$$
Again, this is consistent with the velocity field, which points away from the origin on the line y = −x.
How can we generalize these solutions to other linear homogeneous systems? The answer is that we must look for solutions that start at points $(x_0, y_0)$ where the position vector,
$$\vec{v} = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix},$$
is parallel to the velocity specified by the velocity field:
$$A\vec{v} = A\begin{pmatrix} x_0 \\ y_0 \end{pmatrix}.$$
To make $A\vec{v}$ and $\vec{v}$ parallel, $\vec{v}$ must satisfy the eigenvector equation:
$$A\vec{v} = \lambda\vec{v}.$$
Given any eigenvector $\vec{v}$, we can immediately find a solution of our system, in the form
$$\vec{x}(t) = f(t)\vec{v},$$
and we can also immediately solve for $f(t)$ by substituting into the equation
$$\frac{d\vec{x}}{dt} = A\vec{x}.$$
Substituting gives $f'(t)\vec{v} = f(t)A\vec{v} = \lambda f(t)\vec{v}$, so $f'(t) = \lambda f(t)$, and therefore
$$f(t) = ce^{\lambda t}.$$
In cases where $A$ is a $2 \times 2$ matrix with a real eigenbasis $\vec{v}_1, \vec{v}_2$ (with eigenvalues $\lambda_1, \lambda_2$), this idea produces two solutions,
$$\vec{x}_1(t) = e^{\lambda_1 t}\vec{v}_1, \qquad \vec{x}_2(t) = e^{\lambda_2 t}\vec{v}_2.$$
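In practice the eigenpairs can be computed numerically. As a check (a NumPy sketch of mine, not the notes' hand computation), `numpy.linalg.eig` recovers the eigenvalues $-1$ and $3$ of our example matrix:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [-2.0, 1.0]])

# eig returns the eigenvalues and a matrix whose columns are
# (normalized) eigenvectors, in no guaranteed order.
lams, V = np.linalg.eig(A)
print(np.sort(lams))  # eigenvalues -1 and 3

# Each column satisfies the eigenvector equation A v = lam v.
for k in range(2):
    assert np.allclose(A @ V[:, k], lams[k] * V[:, k])
```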
For example, suppose we want to find a solution of the initial value problem
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}, \qquad x(0) = 2, \quad y(0) = 1.$$
For this system, we have already obtained the two special solutions
$$\vec{x}_1(t) = e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \vec{x}_2(t) = e^{3t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
Writing the general solution as $\vec{x}(t) = c_1\vec{x}_1(t) + c_2\vec{x}_2(t)$ and setting $t = 0$ gives the system of equations
$$c_1 + c_2 = 2, \qquad c_1 - c_2 = 1.$$
You can verify that the solution of this system of linear equations is
$$c_1 = \frac{3}{2}, \qquad c_2 = \frac{1}{2}.$$
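The $2 \times 2$ system for $c_1$ and $c_2$ can also be solved by machine (a NumPy sketch of mine); the columns of the matrix are $\vec{x}_1(0)$ and $\vec{x}_2(0)$, and the right-hand side is the initial condition:

```python
import numpy as np

# Columns are x1(0) = (1,1) and x2(0) = (1,-1); right side is x(0) = (2,1).
M = np.array([[1.0, 1.0],
              [1.0, -1.0]])
x0 = np.array([2.0, 1.0])

c = np.linalg.solve(M, x0)
print(c)  # c1 = 3/2, c2 = 1/2
```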
You can see that the solution starts off asymptotic to the line $y = x$, in the limit as $t \to -\infty$. It's a bit harder to see from the picture, but in the limit as $t \to \infty$ it is asymptotic to the line $y = -x$. The reason for this is that
$$\lim_{t \to \infty} \frac{3}{2}e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
and
$$\lim_{t \to -\infty} \frac{1}{2}e^{3t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
so in each of these limits only one of the two terms dominates the behavior of ~x(t).
The reason why it takes the solution much longer to approach the line $y = -x$ is that the exponential $e^{-t}$ decays to 0 at a slower rate in the limit $t \to \infty$ than the exponential $e^{3t}$ decays to 0 in the limit as $t \to -\infty$.
The relative rates of exponential decay actually become very important for systems where the eigenvalues of the coefficient matrix have the same sign. For example, consider the initial value problem
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}, \qquad x(0) = 2, \quad y(0) = 1.$$
The coefficient matrix for this system,
$$\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},$$
has the same eigenvectors as in our previous example, but the eigenvalues have the same sign:
$$\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = 3 \cdot \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = 1 \cdot \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
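Since the eigenvectors and the initial condition are unchanged, the coefficients work out the same as before, $c_1 = 3/2$ and $c_2 = 1/2$. A short numerical check (my own NumPy sketch, not part of the notes) confirms that the resulting formula satisfies both the initial condition and the differential equation:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# x(t) = (3/2) e^{3t} (1,1) + (1/2) e^{t} (1,-1)
def x(t):
    return 1.5 * np.exp(3 * t) * np.array([1.0, 1.0]) \
         + 0.5 * np.exp(t) * np.array([1.0, -1.0])

assert np.allclose(x(0.0), [2.0, 1.0])          # initial condition x(0) = (2,1)
t, h = 0.3, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)          # central-difference derivative
assert np.allclose(dxdt, A @ x(t), atol=1e-4)   # dx/dt = A x
print("solution verified")
```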
Here is a plot of the solution, together with the velocity field of the system:
[Figure: solution curve of the initial value problem, plotted over the velocity field]
In the limit as $t \to \infty$, the solution curve is nearly parallel to the line $y = x$. This is because the exponential $e^{3t}$ grows at a faster rate than the exponential $e^t$, so the first term dominates the solution in the limit as $t \to \infty$.
Similarly, in the limit as $t \to -\infty$, the exponential $e^t$ decays slower than the exponential $e^{3t}$, so as $t \to -\infty$, the second term dominates. This is also reflected in the plot: as the solution approaches the origin, it does so in a direction which is tangent to the line $y = -x$.
Homogeneous Linear Systems: Complex Eigenvalue Case. In 183 you learned that not every $2 \times 2$ matrix has a real eigenbasis. For example, consider the matrix
$$A = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}.$$
If we compute its characteristic polynomial,
$$\det(A - \lambda I) = \det\begin{pmatrix} 1 - \lambda & -2 \\ 2 & 1 - \lambda \end{pmatrix} = (1 - \lambda)^2 + 4,$$
we find that it has two complex roots:
$$(1 - \lambda)^2 + 4 = 0 \implies \lambda = 1 \pm 2i.$$
Therefore, $A$ has no real eigenvalues (and certainly no real eigenbasis).
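Numerical eigensolvers find the complex roots directly; as a quick check (a NumPy sketch of mine), `numpy.linalg.eig` returns the conjugate pair $1 \pm 2i$:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [2.0, 1.0]])

# eig returns a complex array when the eigenvalues are not real.
lams = np.linalg.eig(A)[0]
print(np.sort_complex(lams))  # conjugate pair 1 - 2i, 1 + 2i
```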
But it does have complex eigenvalues! By definition, a complex eigenvalue of a matrix $A$ is a complex number
$$\lambda = p + iq$$
such that there exists a complex vector
$$\vec{v} = \begin{pmatrix} v_1 + w_1 i \\ v_2 + w_2 i \end{pmatrix},$$
which satisfies
$$A\vec{v} = \lambda\vec{v} = (p + iq)\begin{pmatrix} v_1 + w_1 i \\ v_2 + w_2 i \end{pmatrix}.$$
If $\lambda$ is any complex root of the characteristic polynomial of $A$, it is always possible to find a complex eigenvector with eigenvalue $\lambda$, using the same methods we use to find real eigenvectors.
For example, we can find a complex eigenvector of the matrix
$$A = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix},$$
with eigenvalue
$$\lambda = 1 + 2i,$$
by solving the equation
$$\begin{pmatrix} 1 - (1 + 2i) & -2 \\ 2 & 1 - (1 + 2i) \end{pmatrix}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
or
$$\begin{pmatrix} -2i & -2 \\ 2 & -2i \end{pmatrix}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
One solution of the equation is
$$\vec{v} = \begin{pmatrix} 1 \\ -i \end{pmatrix},$$
and this is a complex eigenvector for $A$.
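The eigenvector equation is easy to verify by complex matrix arithmetic (a NumPy check of mine, not part of the notes):

```python
import numpy as np

A = np.array([[1, -2],
              [2, 1]], dtype=complex)
lam = 1 + 2j
v = np.array([1, -1j])

# Both sides of A v = lam v come out to (1 + 2i, 2 - i).
print(A @ v)
print(lam * v)
assert np.allclose(A @ v, lam * v)
```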
We can use this complex eigenvector to construct a solution of the equation
$$\frac{d\vec{x}}{dt} = A\vec{x},$$
in exactly the same way that we used real eigenvectors to construct solutions. The main difference is that the solution is a complex vector:
$$\vec{x}(t) = e^{\lambda t}\vec{v} = e^{(1+2i)t}\begin{pmatrix} 1 \\ -i \end{pmatrix}.$$
To understand why this is a valid solution, look at what happens when we plug it back in to the equation:
$$\frac{d}{dt}e^{\lambda t}\vec{v} = \lambda e^{\lambda t}\vec{v} = e^{\lambda t}\lambda\vec{v} = e^{\lambda t}A\vec{v} = Ae^{\lambda t}\vec{v}.$$
If $\lambda$ were a real number and $\vec{v}$ a real vector, these steps would show that $e^{\lambda t}\vec{v}$ is a solution. But there is one step which requires additional verification when $\lambda$ is complex:
$$\frac{d}{dt}e^{\lambda t} = \lambda e^{\lambda t}.$$
To prove this identity, first expand $\lambda$ into its real and imaginary parts:
$$\frac{d}{dt}e^{(p+iq)t} = (p + iq)e^{(p+iq)t}.$$
Then expand both sides of the equation into their real and imaginary parts:
$$\frac{d}{dt}\left[e^{pt}\cos(qt) + ie^{pt}\sin(qt)\right] = \frac{d}{dt}\left[e^{pt}\cos(qt)\right] + i\frac{d}{dt}\left[e^{pt}\sin(qt)\right]$$
$$(p + iq)\left(e^{pt}\cos(qt) + ie^{pt}\sin(qt)\right) = \left(pe^{pt}\cos(qt) - qe^{pt}\sin(qt)\right) + i\left(pe^{pt}\sin(qt) + qe^{pt}\cos(qt)\right)$$
Applying the product rule to the derivatives on the first line shows that the two sides agree, term by term.
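The identity can also be spot-checked numerically (my own sketch using Python's `cmath`, not part of the notes), by comparing a central difference of $e^{\lambda t}$ with $\lambda e^{\lambda t}$ at a sample point:

```python
import cmath

# Check d/dt e^{lam t} = lam e^{lam t} for a complex lam = p + iq.
lam = 1 + 2j
t, h = 0.7, 1e-6

numeric = (cmath.exp(lam * (t + h)) - cmath.exp(lam * (t - h))) / (2 * h)
exact = lam * cmath.exp(lam * t)
assert abs(numeric - exact) < 1e-4
print("derivative identity holds numerically")
```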
Taking real and imaginary parts of the complex solution $e^{\lambda t}\vec{v}$ produces real solutions of the form
$$\vec{x}(t) = e^{pt}\left(\cos(qt)\vec{v}_1 + \sin(qt)\vec{v}_2\right),$$
where $\vec{v}_1$ and $\vec{v}_2$ are real vectors. It's helpful to plot these trajectories in the special case
$$\vec{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \vec{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
In this case, we have
$$\vec{x}(t) = e^{pt}\begin{pmatrix} \cos(qt) \\ \sin(qt) \end{pmatrix}.$$
No matter the value of $p$, the curve will "spiral" around the origin, crossing the $x$ axis at regular intervals with period
$$T = \frac{2\pi}{q}.$$
The distance from the origin is controlled by the factor $e^{pt}$, in a way which depends on the sign of $p$:
(1) If $p < 0$, then the solution will spiral towards the origin, since
$$\lim_{t \to \infty} e^{pt} = 0.$$
(2) If $p > 0$, the solution will spiral away from the origin, since
$$\lim_{t \to \infty} e^{pt} = \infty.$$
(3) If $p = 0$, the solution will move around the origin in a circle, since
$$e^{0 \cdot t} = 1.$$
Here are pictures of all three cases:
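The three cases amount to a classification by the sign of $p$, which can be written as a small helper function (an illustrative sketch of mine, not notation from the notes):

```python
def spiral_type(p: float) -> str:
    """Classify the trajectory x(t) = e^{pt} (cos(qt), sin(qt)) by the sign of p."""
    if p < 0:
        return "spirals toward the origin"     # e^{pt} -> 0 as t -> infinity
    if p > 0:
        return "spirals away from the origin"  # e^{pt} -> infinity as t -> infinity
    return "circles the origin"                # e^{0t} = 1, constant radius

print(spiral_type(-0.5))  # spirals toward the origin
print(spiral_type(0.0))   # circles the origin
```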
We can plot the solutions in the general case by applying a linear transformation to each of the standard solutions above. For example, to sketch the trajectory
$$\vec{x}(t) = \cos(qt)\vec{v}_1 + \sin(qt)\vec{v}_2,$$
we would take the standard trajectory
$$\vec{x}_0(t) = \cos(qt)\hat{\imath} + \sin(qt)\hat{\jmath},$$
and apply the following transformation:
[Figure: the linear transformation sending $\hat{\imath}, \hat{\jmath}$ to $\vec{v}_1, \vec{v}_2$]
To guide the transformation, we draw a square with sides parallel to the $x$ and $y$ axes. When we transform this square, it becomes a parallelogram with sides parallel to $\vec{v}_1$ and $\vec{v}_2$. The circle inscribed in the square becomes an ellipse inscribed in the parallelogram!
The picture when $p \neq 0$ is similar. A spiral aligned with the $x$ and $y$ axes gets transformed to a spiral aligned with the vectors $\vec{v}_1$ and $\vec{v}_2$:
[Figure: a spiral aligned with the axes transformed to a spiral aligned with $\vec{v}_1$ and $\vec{v}_2$]
Notice that in both cases, the orientation of the spiral changed, from clockwise to counterclockwise. In
general, to determine the orientation of the spiral, we need to draw a few velocity vectors.
For example, consider the system
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -1 & -4 \\ 2 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
If we calculate the characteristic polynomial of its matrix, we get
$$\det\begin{pmatrix} -1 - \lambda & -4 \\ 2 & 3 - \lambda \end{pmatrix} = \lambda^2 - 2\lambda + 5 = (\lambda - 1)^2 + 2^2.$$
This has complex roots
$$\lambda = 1 \pm 2i,$$
and therefore the solutions will spiral outward (since the real part of the eigenvalue is $p = 1 > 0$). To determine the orientation of the spirals, we can compute the velocity vectors at the points $(\pm 1, 0)$ and $(0, \pm 1)$ and draw them:
Once we draw these, it becomes clear that the spirals must be going counterclockwise.
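The same check can be done by machine (a NumPy sketch of mine): at each of the four axis points, the sign of $x\,v_y - y\,v_x$ tells us which way the velocity rotates the point about the origin, and it is positive (counterclockwise) everywhere:

```python
import numpy as np

A = np.array([[-1.0, -4.0],
              [2.0, 3.0]])

# Velocity at each axis point; x*v_y - y*v_x > 0 means the velocity
# turns the point counterclockwise about the origin.
for x, y in [(1, 0), (0, 1), (-1, 0), (0, -1)]:
    vx, vy = A @ np.array([x, y], dtype=float)
    assert x * vy - y * vx > 0
    print((x, y), "->", (vx, vy))
```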
It’s usually pretty difficult to make accurate sketches of the solutions by hand, but it’s not hard to get a
rough idea of what they look like!