Math 18500 Week 3: Systems of First Order Linear Equations

Linear Autonomous Systems. We have seen that every system of first order autonomous equations
$$\frac{dx}{dt} = f(x, y), \qquad \frac{dy}{dt} = g(x, y)$$
can be solved, at least in theory. But in practice, the method we have outlined is highly inefficient. We will now describe a more effective method for solving systems of the following general form:
$$\frac{dx}{dt} = a_{11}x + a_{12}y + b_1, \qquad \frac{dy}{dt} = a_{21}x + a_{22}y + b_2$$
A system of this form is said to be a linear, autonomous system of equations. Since we will only be considering autonomous systems, we will often drop the adjective "autonomous". However, it is possible to consider non-autonomous linear systems as well (in which case the coefficients $a_{ij}$ and $b_i$ would be functions of $t$, rather than constants).
Any linear system can always be written in matrix form, like this:
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$$
Often we will also use the following shorthand:
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}, \qquad A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad \vec{b} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}, \quad \vec{x} = \begin{pmatrix} x \\ y \end{pmatrix}$$
It will also be useful to distinguish between two separate kinds of linear systems. If $\vec{b} = \vec{0}$, we say that the system is homogeneous, and if $\vec{b} \neq \vec{0}$, we say that the system is inhomogeneous.
For example, the linear system
$$\frac{dx}{dt} = x + y, \qquad \frac{dy}{dt} = x - y$$
is homogeneous, because it can be written in matrix form without a constant term:
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$

Likewise, the linear system
$$\frac{dx}{dt} = x + y + 1, \qquad \frac{dy}{dt} = x - y - 1$$
is inhomogeneous, because its matrix form has a nonzero constant term:
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$


There are two important initial observations that must be made about linear systems:
(1) Suppose that $\vec{x}_1(t)$ and $\vec{x}_2(t)$ are two solutions of a homogeneous system:
$$\frac{d\vec{x}_i}{dt} = A\vec{x}_i, \qquad i = 1, 2.$$
If $c_1$ and $c_2$ are arbitrary constants, and
$$\vec{x}(t) = c_1\vec{x}_1(t) + c_2\vec{x}_2(t),$$
then $\vec{x}(t)$ is a solution of the same homogeneous system:
$$\frac{d\vec{x}}{dt} = A\vec{x}.$$
In other words, solutions of homogeneous equations can be superimposed.

(2) Suppose that $\vec{x}_p(t)$ is a particular solution of an inhomogeneous system,
$$\frac{d\vec{x}_p}{dt} = A\vec{x}_p + \vec{b}.$$
If $\vec{x}_h(t)$ is any solution of the corresponding homogeneous equation,
$$\frac{d\vec{x}_h}{dt} = A\vec{x}_h,$$
and if
$$\vec{x}(t) = \vec{x}_p(t) + \vec{x}_h(t),$$
then $\vec{x}(t)$ is a solution of the original inhomogeneous equation:
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}.$$
Observation (1) is referred to as the superposition principle. The superposition principle is the key to solving linear equations - you will see it recur over and over again in this course (and 186) in various guises.
To prove the superposition principle, we just expand both sides:
$$\frac{d\vec{x}}{dt} = \frac{d}{dt}\left[c_1\vec{x}_1 + c_2\vec{x}_2\right] = c_1\frac{d\vec{x}_1}{dt} + c_2\frac{d\vec{x}_2}{dt}$$
$$A\vec{x} = A\left[c_1\vec{x}_1 + c_2\vec{x}_2\right] = c_1 A\vec{x}_1 + c_2 A\vec{x}_2$$
By assumption,
$$\frac{d\vec{x}_i}{dt} = A\vec{x}_i,$$
so the two sides of the equation
$$\frac{d\vec{x}}{dt} = A\vec{x}$$
are equal, i.e. $\vec{x} = c_1\vec{x}_1 + c_2\vec{x}_2$ is a solution of the equation.
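The superposition principle is also easy to check numerically. Here is a minimal sketch using NumPy and SciPy (the matrix, the constants $c_1$, $c_2$, and the tolerances are arbitrary illustrative choices, not part of these notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

# A sample homogeneous system dx/dt = A x
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])

def rhs(t, x):
    return A @ x

t_eval = np.linspace(0.0, 1.0, 50)

# Two solutions of the homogeneous system, starting at basis vectors
x1 = solve_ivp(rhs, (0, 1), [1.0, 0.0], t_eval=t_eval, rtol=1e-10, atol=1e-12).y
x2 = solve_ivp(rhs, (0, 1), [0.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12).y

# By superposition, c1*x1 + c2*x2 should be the solution with initial value (c1, c2)
c1, c2 = 2.0, -3.0
x = solve_ivp(rhs, (0, 1), [c1, c2], t_eval=t_eval, rtol=1e-10, atol=1e-12).y

print(np.max(np.abs(x - (c1 * x1 + c2 * x2))))  # essentially zero
```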
Observation (2) can be proved in exactly the same way. Expanding both sides, we get
$$\frac{d\vec{x}}{dt} = \frac{d}{dt}\left[\vec{x}_p + \vec{x}_h\right] = \frac{d\vec{x}_p}{dt} + \frac{d\vec{x}_h}{dt}$$
$$A\vec{x} + \vec{b} = A\left[\vec{x}_p + \vec{x}_h\right] + \vec{b} = \left(A\vec{x}_p + \vec{b}\right) + A\vec{x}_h$$
These are equal due to the assumptions
$$\frac{d\vec{x}_p}{dt} = A\vec{x}_p + \vec{b} \qquad \text{and} \qquad \frac{d\vec{x}_h}{dt} = A\vec{x}_h.$$
To exploit observation (2), we need to find a particular solution ~xp . In most cases, it’s easy to find one such
solution - we can just look for equilibrium points! This is something we should always do in any case, as a
first step towards analyzing a system of first order equations.

Given a linear inhomogeneous equation
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b},$$
we can find all equilibrium points by looking for solutions $\vec{x}(t) = \vec{x}_{eq}$ such that
$$\frac{d\vec{x}_{eq}}{dt} = 0.$$
This is equivalent to solving the matrix equation
$$A\vec{x}_{eq} + \vec{b} = 0,$$
and as long as $A$ is an invertible matrix, this equation will always have exactly one solution.¹
For example, to find the equilibrium solution of the equation
$$\frac{dx}{dt} = x + y + 1, \qquad \frac{dy}{dt} = x - y - 1,$$
we just need to solve the system of equations
$$x_{eq} + y_{eq} = -1, \qquad x_{eq} - y_{eq} = 1.$$
This can either be done by hand, or by converting the system of equations to a matrix equation,
$$\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} x_{eq} \\ y_{eq} \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix},$$
and multiplying on both sides by an inverse matrix:
$$\begin{pmatrix} x_{eq} \\ y_{eq} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}^{-1}\begin{pmatrix} -1 \\ 1 \end{pmatrix} = -\frac{1}{2}\begin{pmatrix} -1 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}.$$
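The same computation can be done numerically; here is a minimal sketch using NumPy, solving $A\vec{x}_{eq} = -\vec{b}$ directly rather than forming the inverse:

```python
import numpy as np

# The system dx/dt = A x + b from the example above
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, -1.0])

# Equilibria satisfy A x_eq + b = 0, i.e. A x_eq = -b
x_eq = np.linalg.solve(A, -b)
print(x_eq)  # [ 0. -1.]
```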

If the matrix A is not invertible, there may not be any equilibrium solutions - we’ll ignore this case.
Assuming we can find an equilibrium solution, the general solution of the inhomogeneous system
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}$$
will be given by
$$\vec{x}(t) = \vec{x}_{eq} + \vec{x}_h,$$
where $\vec{x}_h$ is a solution of the homogeneous system,
$$\frac{d}{dt}\vec{x}_h = A\vec{x}_h.$$
Therefore, the rest of this week will be focused on solving homogeneous equations. Our strategy will be to apply the superposition principle - we will obtain solutions $\vec{x}_1(t)$ and $\vec{x}_2(t)$ by "guessing", and then we will superimpose them to obtain a general solution of the homogeneous equation,
$$\vec{x}_h(t) = c_1\vec{x}_1(t) + c_2\vec{x}_2(t).$$
In cases where we want to solve an initial value problem
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}, \qquad \vec{x}(0) = \vec{x}_0,$$
we will write the general solution of the inhomogeneous equation in the form
$$\vec{x}(t) = \vec{x}_{eq} + c_1\vec{x}_1(t) + c_2\vec{x}_2(t),$$
and we will determine $c_1$ and $c_2$ by setting $t = 0$ and solving the resulting system of equations:
$$\vec{x}_0 = \vec{x}_{eq} + c_1\vec{x}_1(0) + c_2\vec{x}_2(0).$$
As long as the vectors $\vec{x}_1(0)$ and $\vec{x}_2(0)$ form a basis of the plane, this will always be possible.
¹An example of a system which does not have an equilibrium solution is $\frac{dx}{dt} = 1$, $\frac{dy}{dt} = x$. It's a nice exercise to find all solutions of this system and plot them in the $xy$ plane.

Homogeneous Linear Systems: Real Eigenvalue Case. In this section we will begin to address the problem of finding two solutions $\vec{x}_1(t)$ and $\vec{x}_2(t)$ of a linear, homogeneous, autonomous system
$$\frac{d\vec{x}}{dt} = A\vec{x},$$
such that $\vec{x}_1(0)$ and $\vec{x}_2(0)$ form a basis of the plane.
As a concrete example, consider the homogeneous system
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
To understand what the solutions of this system look like, we first compute the slope of its velocity field,
$$\frac{dy}{dx} = \frac{-2x + y}{x - 2y},$$
and we write down its isocline equations:
$$\frac{-2x + y}{x - 2y} = C = \text{constant}.$$
Simplifying this equation, we find that all isoclines are lines through the origin:
$$-2x + y = C(x - 2y) \implies (1 + 2C)y = (2 + C)x.$$
We can see this effect when we plot the velocity field:
[Figure: velocity field of the system, with isoclines through the origin]

Notice that there are two isoclines which are everywhere parallel to the velocity field. One of these is the isocline with slope 1:
$$C = 1 \implies (1 + 2)y = (2 + 1)x \implies y = x.$$
Geometrically, the fact that the velocity field is parallel to this line tells us that our system of equations has a special solution of the form
$$\begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} = f_1(t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} f_1(t) \\ f_1(t) \end{pmatrix}.$$

To solve for the unknown function $f_1(t)$, we can substitute into both sides of our system:
$$\frac{d}{dt}\begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} = \begin{pmatrix} f_1'(t) \\ f_1'(t) \end{pmatrix}$$
$$\begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} f_1(t) \\ f_1(t) \end{pmatrix} = \begin{pmatrix} -f_1(t) \\ -f_1(t) \end{pmatrix}.$$
From this we see that $f_1(t)$ must satisfy the differential equation
$$f_1'(t) = -f_1(t),$$
and therefore must take the general form
$$f_1(t) = c_1 e^{-t}$$
for some constant $c_1$.
An important thing to notice here is that
$$\lim_{t\to\infty}\begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix} = \lim_{t\to\infty}\begin{pmatrix} c_1 e^{-t} \\ c_1 e^{-t} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
This is consistent with the velocity field: along the line $y = x$, the velocity field points directly inward, towards the origin.
By similar reasoning, we can find a solution which travels along the isocline with slope $C = -1$:
$$C = -1 \implies (1 - 2)y = (2 - 1)x \implies y = -x.$$
This solution takes the form
$$\vec{x}_2(t) = f_2(t)\begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
and the function $f_2(t)$ can be found using the same process:
$$\frac{d}{dt}\begin{pmatrix} x_2(t) \\ y_2(t) \end{pmatrix} = \begin{pmatrix} f_2'(t) \\ -f_2'(t) \end{pmatrix}$$
$$\begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} f_2(t) \\ -f_2(t) \end{pmatrix} = \begin{pmatrix} 3f_2(t) \\ -3f_2(t) \end{pmatrix}.$$
From this we see that $f_2(t)$ must satisfy the differential equation
$$f_2'(t) = 3f_2(t),$$
and therefore must take the general form
$$f_2(t) = c_2 e^{3t}$$
for some constant $c_2$. This time, we have
$$\lim_{t\to\infty} f_2(t) = \infty.$$
Again, this is consistent with the velocity field, which points away from the origin on the line $y = -x$.
How can we generalize these solutions to other linear homogeneous systems? The answer is that we must look for solutions that start at points $(x_0, y_0)$ where the position vector,
$$\vec{v} = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix},$$
is parallel to the velocity specified by the velocity field:
$$A\vec{v} = A\begin{pmatrix} x_0 \\ y_0 \end{pmatrix}.$$
To make $A\vec{v}$ and $\vec{v}$ parallel, $\vec{v}$ must satisfy the eigenvector equation:
$$A\vec{v} = \lambda\vec{v}.$$
Given any eigenvector $\vec{v}$, we can immediately find a solution of our system, in the form
$$\vec{x}(t) = f(t)\vec{v},$$

and we can also immediately solve for $f(t)$ by substituting into the equation
$$\frac{d\vec{x}}{dt} = A\vec{x}.$$
Explicitly, we find that
$$f'(t)\vec{v} = Af(t)\vec{v} = f(t)\lambda\vec{v},$$
and therefore
$$f(t) = ce^{\lambda t}$$
for some constant $c$.
In cases where $A$ is a $2 \times 2$ matrix with a real eigenbasis, this idea produces two solutions,
$$\vec{x}_1(t) = c_1 e^{\lambda_1 t}\vec{v}_1, \qquad \vec{x}_2(t) = c_2 e^{\lambda_2 t}\vec{v}_2,$$
and we can obtain a general solution by superimposing them:
$$\vec{x}(t) = c_1 e^{\lambda_1 t}\vec{v}_1 + c_2 e^{\lambda_2 t}\vec{v}_2.$$

For example, suppose we want to find a solution of the initial value problem
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}, \qquad x(0) = 2, \quad y(0) = 1.$$
For this system, we have already obtained the two special solutions
$$\vec{x}_1(t) = e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \vec{x}_2(t) = e^{3t}\begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
which correspond to the following solutions of the eigenvector equation:
$$\begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = -1 \cdot \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = 3 \cdot \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
Therefore, the general solution is
$$\vec{x}(t) = c_1 e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{3t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
Setting $t = 0$, and imposing our initial conditions, we obtain
$$\vec{x}(0) = c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}.$$
You can verify that the solution of this system of linear equations is
$$c_1 = \frac{3}{2}, \qquad c_2 = \frac{1}{2}.$$
Therefore, the solution of the initial value problem is
$$\vec{x}(t) = \frac{3}{2}e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + \frac{1}{2}e^{3t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
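For comparison, here is a minimal sketch of the same computation using NumPy's eigenvalue routine. (One caveat, not in the notes: `numpy.linalg.eig` returns unit-length eigenvectors, so the constants it produces are scaled versions of $c_1 = 3/2$, $c_2 = 1/2$, but the resulting solution $\vec{x}(t)$ is the same.)

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [-2.0, 1.0]])
x0 = np.array([2.0, 1.0])

# Columns of V are eigenvectors: A V = V diag(lam)
lam, V = np.linalg.eig(A)

# Impose x(0) = c1*v1 + c2*v2, i.e. solve V c = x0
c = np.linalg.solve(V, x0)

def x(t):
    # x(t) = c1 e^{lam1 t} v1 + c2 e^{lam2 t} v2
    return V @ (c * np.exp(lam * t))

print(x(0.0))  # [2. 1.], the initial condition

# Compare against the closed form (3/2)e^{-t}(1,1) + (1/2)e^{3t}(1,-1)
t = 0.7
exact = 1.5 * np.exp(-t) * np.array([1.0, 1.0]) + 0.5 * np.exp(3 * t) * np.array([1.0, -1.0])
print(np.allclose(x(t), exact))  # True
```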

Here is a plot of the solution:
[Figure: the solution curve in the xy plane]

You can see that the solution starts off asymptotic to the line $y = x$, in the limit as $t \to -\infty$. It's a bit harder to see from the picture, but in the limit as $t \to \infty$ it is asymptotic to the line $y = -x$. The reason for this is that
$$\lim_{t\to\infty} \frac{3}{2}e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
and
$$\lim_{t\to-\infty} \frac{1}{2}e^{3t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
so in each of these limits one of the two terms dominates the behavior of $\vec{x}(t)$.
The reason why it takes the solution much longer to approach the line $y = -x$ is that the exponential $e^{-t}$ decays to 0 at a slower rate in the limit $t \to \infty$ than the exponential $e^{3t}$ decays to 0 in the limit as $t \to -\infty$.
The relative rates of exponential decay actually become very important for systems where the eigenvalues of the coefficient matrix have the same sign. For example, consider the initial value problem
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}, \qquad x(0) = 2, \quad y(0) = 1.$$
The coefficient matrix for this system,
$$\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},$$
has the same eigenvectors as in our previous example, but the eigenvalues have the same sign:
$$\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = 3 \cdot \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = 1 \cdot \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

By applying the methods above, you can find the solution:
$$\vec{x}(t) = \frac{3}{2}e^{3t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + \frac{1}{2}e^{t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
Here is a plot of the solution, together with the velocity field of the system:
[Figure: the solution curve and the velocity field]

In the limit as $t \to \infty$, the solution curve is nearly parallel to the line $y = x$. This is because the exponential $e^{3t}$ grows at a faster rate than the exponential $e^t$, so the first term dominates the solution in the limit as $t \to \infty$.
Similarly, in the limit as $t \to -\infty$, the exponential $e^t$ decays slower than the exponential $e^{3t}$, so as $t \to -\infty$, the second term dominates. This is also reflected in the plot: as the solution approaches the origin, it does so in a direction which is tangent to the line $y = -x$.

Homogeneous Linear Systems: Complex Eigenvalue Case. In 183 you learned that not every $2 \times 2$ matrix has a real eigenbasis. For example, consider the matrix
$$A = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}.$$
If we compute its characteristic polynomial,
$$\det(A - \lambda I) = \det\begin{pmatrix} 1-\lambda & -2 \\ 2 & 1-\lambda \end{pmatrix} = (1-\lambda)^2 + 4,$$
we find that it has two complex roots:
$$(1-\lambda)^2 + 4 = 0 \implies \lambda = 1 \pm 2i.$$
Therefore, $A$ has no real eigenvalues (and certainly no real eigenbasis).
But it does have complex eigenvalues! By definition, a complex eigenvalue of a matrix $A$ is a complex number
$$\lambda = p + iq$$
such that there exists a complex vector
$$\vec{v} = \begin{pmatrix} v_1 + w_1 i \\ v_2 + w_2 i \end{pmatrix}$$
which satisfies
$$A\vec{v} = \lambda\vec{v} = (p + iq)\begin{pmatrix} v_1 + w_1 i \\ v_2 + w_2 i \end{pmatrix}.$$
If $\lambda$ is any complex root of the characteristic polynomial of $A$, it is always possible to find a complex eigenvector with eigenvalue $\lambda$, using the same methods we use to find real eigenvectors.
For example, we can find a complex eigenvector of the matrix
$$A = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix},$$
with eigenvalue
$$\lambda = 1 + 2i,$$
by solving the equation
$$\begin{pmatrix} 1 - (1+2i) & -2 \\ 2 & 1 - (1+2i) \end{pmatrix}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
or
$$\begin{pmatrix} -2i & -2 \\ 2 & -2i \end{pmatrix}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
One solution of the equation is
$$\vec{v} = \begin{pmatrix} 1 \\ -i \end{pmatrix},$$
and this is a complex eigenvector for $A$.
We can use this complex eigenvector to construct a solution of the equation
$$\frac{d\vec{x}}{dt} = A\vec{x},$$
in exactly the same way that we used real eigenvectors to construct solutions. The main difference is that the solution is a complex vector:
$$\vec{x}(t) = e^{\lambda t}\vec{v} = e^{(1+2i)t}\begin{pmatrix} 1 \\ -i \end{pmatrix}.$$
To understand why this is a valid solution, look at what happens when we plug it back in to the equation:
$$\frac{d}{dt}\left[e^{\lambda t}\vec{v}\right] = \lambda e^{\lambda t}\vec{v} = e^{\lambda t}\lambda\vec{v} = e^{\lambda t}A\vec{v} = Ae^{\lambda t}\vec{v}$$
If $\lambda$ were a real number and $\vec{v}$ a real vector, these steps would show that $e^{\lambda t}\vec{v}$ is a solution. But there is one step which requires additional verification when $\lambda$ is complex:
$$\frac{d}{dt}e^{\lambda t} = \lambda e^{\lambda t}.$$

To prove this identity, first expand $\lambda$ into its real and imaginary parts:
$$\frac{d}{dt}e^{(p+iq)t} = (p + iq)e^{(p+iq)t}.$$
Then expand both sides of the equation into their real and imaginary parts:
$$\frac{d}{dt}\left[e^{pt}\cos(qt) + ie^{pt}\sin(qt)\right] = \frac{d}{dt}\left[e^{pt}\cos(qt)\right] + i\frac{d}{dt}\left[e^{pt}\sin(qt)\right]$$
$$(p + iq)\left[e^{pt}\cos(qt) + ie^{pt}\sin(qt)\right] = \left[pe^{pt}\cos(qt) - qe^{pt}\sin(qt)\right] + i\left[pe^{pt}\sin(qt) + qe^{pt}\cos(qt)\right]$$
The two expressions are equal, by virtue of the product rule.
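If you prefer, the identity can also be spot-checked numerically; a minimal sketch, with $p$, $q$, $t$, and the step size $h$ chosen arbitrarily:

```python
import numpy as np

p, q = 0.5, 2.0      # arbitrary real and imaginary parts of lambda
lam = p + 1j * q
t, h = 1.3, 1e-6     # sample time and finite-difference step

# Central difference approximation of d/dt e^{lam t}
deriv = (np.exp(lam * (t + h)) - np.exp(lam * (t - h))) / (2 * h)

print(deriv)                   # approximately lam * e^{lam t}
print(lam * np.exp(lam * t))
```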


Now let's return to the equation we were trying to solve,
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
We have found a complex solution, but this isn't very satisfying - what we actually want are real solutions. Fortunately, any complex solution leads to not one but two real solutions!
To see why, we can expand our complex solution into its real and imaginary parts:
$$\vec{x}(t) = e^{(1+2i)t}\begin{pmatrix} 1 \\ -i \end{pmatrix} = e^t(\cos 2t + i\sin 2t)\begin{pmatrix} 1 \\ -i \end{pmatrix} = \begin{pmatrix} e^t\cos(2t) \\ e^t\sin(2t) \end{pmatrix} + i\begin{pmatrix} e^t\sin(2t) \\ -e^t\cos(2t) \end{pmatrix} = \vec{x}_1(t) + i\vec{x}_2(t)$$
We know that this is a solution of the equation
$$\frac{d\vec{x}}{dt} = A\vec{x}.$$
Making the substitution $\vec{x}(t) = \vec{x}_1(t) + i\vec{x}_2(t)$, we find that
$$\frac{d\vec{x}_1}{dt} + i\frac{d\vec{x}_2}{dt} = A\vec{x}_1 + iA\vec{x}_2,$$
and comparing the real and imaginary parts of both sides,
$$\frac{d\vec{x}_1}{dt} = A\vec{x}_1, \qquad \frac{d\vec{x}_2}{dt} = A\vec{x}_2,$$
we find that both the real and imaginary parts of the complex solution are real solutions!
In conclusion, we obtain two real solutions,
$$\vec{x}_1(t) = \begin{pmatrix} e^t\cos(2t) \\ e^t\sin(2t) \end{pmatrix}$$
and
$$\vec{x}_2(t) = \begin{pmatrix} e^t\sin(2t) \\ -e^t\cos(2t) \end{pmatrix}.$$
These solutions can be used to generate arbitrary real solutions, by taking (real) linear combinations:
$$\vec{x}(t) = c_1\vec{x}_1(t) + c_2\vec{x}_2(t).$$
If we want a solution with prescribed initial values, we can set $t = 0$, and solve the equation
$$\vec{x}(0) = c_1\vec{x}_1(0) + c_2\vec{x}_2(0)$$
for $c_1$ and $c_2$.
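In code, the splitting into real and imaginary parts is exactly `np.real` and `np.imag`; here is a minimal sketch that checks both parts solve the system, using the matrix and eigenvector from this example (the test time and step size are arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [2.0, 1.0]])
lam = 1 + 2j
v = np.array([1.0, -1j])         # the complex eigenvector found above

def x_complex(t):
    return np.exp(lam * t) * v   # the complex solution e^{lam t} v

x1 = lambda t: np.real(x_complex(t))   # real part
x2 = lambda t: np.imag(x_complex(t))   # imaginary part

# Each part should satisfy dx/dt = A x; compare a numerical derivative to A x
t, h = 0.4, 1e-6
for xi in (x1, x2):
    dxdt = (xi(t + h) - xi(t - h)) / (2 * h)
    print(np.allclose(dxdt, A @ xi(t), atol=1e-5))  # True, True
```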
To complete this story, you need to understand how to visualize solutions like this. In general, if
$$\lambda = p + iq, \qquad q > 0,$$
is a complex eigenvalue of a matrix $A$, then all real solutions of the equation
$$\frac{d\vec{x}}{dt} = A\vec{x}$$
will take the general form
$$\vec{x}(t) = e^{pt}\cos(qt)\vec{v}_1 + e^{pt}\sin(qt)\vec{v}_2,$$
where $\vec{v}_1$ and $\vec{v}_2$ are real vectors. It's helpful to plot these trajectories in the special case
$$\vec{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \vec{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
In this case, we have
$$\vec{x}(t) = e^{pt}\begin{pmatrix} \cos(qt) \\ \sin(qt) \end{pmatrix}.$$
No matter the value of $p$, the curve will "spiral" around the origin, crossing the $x$ axis at regular intervals with period
$$T = \frac{2\pi}{q}.$$
The distance from the origin is controlled by the factor $e^{pt}$, in a way which depends on the sign of $p$:
(1) If $p < 0$, then the solution will spiral towards the origin, since
$$\lim_{t\to\infty} e^{pt} = 0.$$
(2) If $p > 0$, the solution will spiral away from the origin, since
$$\lim_{t\to\infty} e^{pt} = \infty.$$
(3) If $p = 0$, the solution will move around the origin in a circle, since
$$e^{0t} = 1.$$
Here are pictures of all three cases:
[Figure: three spiral trajectories, for p < 0, p > 0, and p = 0]

We can plot the solutions in the general case by applying a linear transformation to each of the standard solutions above.
For example, to sketch the trajectory
$$\vec{x}(t) = \cos(qt)\vec{v}_1 + \sin(qt)\vec{v}_2,$$
we would take the standard trajectory
$$\vec{x}_0(t) = \cos(qt)\hat{\imath} + \sin(qt)\hat{\jmath},$$
and apply the linear transformation sending $\hat{\imath} \mapsto \vec{v}_1$ and $\hat{\jmath} \mapsto \vec{v}_2$:
[Figure: the unit circle spanned by $\hat{\imath}$, $\hat{\jmath}$ mapped to an ellipse spanned by $\vec{v}_1$, $\vec{v}_2$]

To guide the transformation, we draw a square with sides parallel to the $x$ and $y$ axes. When we transform this square, it becomes a parallelogram with sides parallel to $\vec{v}_1$ and $\vec{v}_2$. The circle inscribed in the square becomes an ellipse inscribed in the parallelogram!
The picture when $p \neq 0$ is similar. A spiral aligned with the $x$ and $y$ axes gets transformed to a spiral aligned with the vectors $\vec{v}_1$ and $\vec{v}_2$:
[Figure: a standard spiral mapped to a spiral aligned with $\vec{v}_1$ and $\vec{v}_2$]
Notice that in both cases, the orientation of the spiral changed, from clockwise to counterclockwise. In general, to determine the orientation of the spiral, we need to draw a few velocity vectors.
For example, consider the system
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -1 & -4 \\ 2 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
If we calculate the characteristic polynomial of its matrix, we get
$$\det\begin{pmatrix} -1-\lambda & -4 \\ 2 & 3-\lambda \end{pmatrix} = \lambda^2 - 2\lambda + 5 = (\lambda - 1)^2 + 2^2.$$
This has complex roots
$$\lambda = 1 \pm 2i,$$
and therefore the solutions will spiral outward (since the real part of the eigenvalue is $p = 1 > 0$). To determine the orientation of the spirals, we can compute the velocity vectors at the points $(\pm 1, 0)$ and $(0, \pm 1)$ and draw them:
[Figure: velocity vectors at $(\pm 1, 0)$ and $(0, \pm 1)$]
Once we draw these, it becomes clear that the spirals must be going counterclockwise.
It’s usually pretty difficult to make accurate sketches of the solutions by hand, but it’s not hard to get a
rough idea of what they look like!
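Computing those velocity vectors is a one-line matrix multiplication; a minimal sketch:

```python
import numpy as np

A = np.array([[-1.0, -4.0],
              [2.0, 3.0]])

# Velocity vectors at (±1, 0) and (0, ±1)
for point in ([1, 0], [-1, 0], [0, 1], [0, -1]):
    print(point, "->", A @ np.array(point, dtype=float))

# At (1, 0) the velocity (-1, 2) points up and to the left,
# so the spiral turns counterclockwise.
```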

Stability. In summary, we have seen that solutions of linear autonomous systems
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}$$
can always be written in the form
$$\vec{x}(t) = \vec{x}_{eq} + c_1\vec{x}_1(t) + c_2\vec{x}_2(t),$$
where $\vec{x}_1(t)$ and $\vec{x}_2(t)$ are solutions of the associated homogeneous system
$$\frac{d\vec{x}}{dt} = A\vec{x}.$$
This is valid in all cases, as long as an equilibrium solution exists.
In the case where $A$ has two distinct real eigenvalues, the solutions $\vec{x}_i(t)$ are given by
$$\vec{x}_1(t) = e^{\lambda_1 t}\vec{v}_1, \qquad \vec{x}_2(t) = e^{\lambda_2 t}\vec{v}_2,$$
where $\lambda_1$ and $\lambda_2$ are the real eigenvalues, and $\vec{v}_1$ and $\vec{v}_2$ are the corresponding real eigenvectors.
In the case where $A$ has two complex eigenvalues, they are given by
$$\vec{x}_1(t) = e^{pt}\left(\cos(qt)\vec{v}_1 - \sin(qt)\vec{v}_2\right), \qquad \vec{x}_2(t) = e^{pt}\left(\sin(qt)\vec{v}_1 + \cos(qt)\vec{v}_2\right),$$
where $\vec{v}_1$ and $\vec{v}_2$ are the real and imaginary parts of a complex eigenvector
$$\vec{v} = \vec{v}_1 + i\vec{v}_2$$
with eigenvalue
$$\lambda = p + iq.$$
There is also a third case: $A$ could have a repeated real eigenvalue. In this case it is always possible to find solutions of the form
$$\vec{x}_1(t) = e^{\lambda t}\vec{v}_1, \qquad \vec{x}_2(t) = e^{\lambda t}\vec{v}_2 + te^{\lambda t}\vec{v}_3,$$
where $\vec{v}_1$ and $\vec{v}_3$ are parallel. We will not discuss this case in detail, but it's good to be aware of it.
Some systems have the property that both solutions of the homogeneous equation decrease in magnitude and tend towards the origin as $t \to \infty$:
$$\lim_{t\to\infty}\vec{x}_1(t) = \lim_{t\to\infty}\vec{x}_2(t) = \vec{0}.$$
For example, this happens in the real eigenvalue case if both eigenvalues are negative. It also happens in the complex eigenvalue case, if the complex eigenvalues have negative real part.
In either case, the limiting value of any solution of the inhomogeneous equation will be $\vec{x}_{eq}$:
$$\lim_{t\to\infty}\vec{x}(t) = \lim_{t\to\infty}\left[\vec{x}_{eq} + c_1\vec{x}_1(t) + c_2\vec{x}_2(t)\right] = \vec{x}_{eq} + c_1\lim_{t\to\infty}\vec{x}_1(t) + c_2\lim_{t\to\infty}\vec{x}_2(t) = \vec{x}_{eq}.$$
In the context of first order autonomous equations, we had a word for an equilibrium point with this property: we said that an equilibrium point was stable if all solutions starting at nearby values were drawn back towards the equilibrium, and approached it in the limit as $t \to \infty$.
The same concept and terminology can be applied to systems of equations as well. For linear autonomous systems, we have the following stability criterion:
Stability Criterion for Linear Systems. A linear autonomous system
$$\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}$$
has a stable equilibrium point $\vec{x}_{eq}$ if and only if every eigenvalue of $A$ has negative real part.
While we have only proved this criterion for systems of two equations with distinct real eigenvalues and/or complex eigenvalues, it is true in general for linear autonomous systems with any number of equations. There is also a generalization to nonlinear systems, which you will be guided through in the problem set.
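The criterion translates directly into a numerical test: compute the eigenvalues of $A$ and check the signs of their real parts. A minimal sketch (the first two matrices are from this week's examples; the third is an arbitrary stable example):

```python
import numpy as np

def is_stable(A):
    """Stable iff every eigenvalue of A has negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

print(is_stable(np.array([[1.0, -2.0], [-2.0, 1.0]])))   # False: eigenvalues -1, 3
print(is_stable(np.array([[-1.0, -4.0], [2.0, 3.0]])))   # False: eigenvalues 1 ± 2i
print(is_stable(np.array([[-1.0, 2.0], [-2.0, -1.0]])))  # True: eigenvalues -1 ± 2i
```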
