WTW 164
Unit 1.5: The inverse of a square matrix (part 1)
Dr HR (Maya) Thackeray
<[Link]@[Link]>
Reciprocals and systems of linear equations
To solve the linear equation 3x = 12, we multiply by 3⁻¹ to get
x = (3⁻¹)(3x) = (3⁻¹)(12) = 4.
A system of linear equations is given by Ax = b, where A is a matrix and
x and b are column vectors.
Can we multiply by A⁻¹ to get x = A⁻¹Ax = A⁻¹b, where A⁻¹A “acts like 1”?
We would then want Ax = AA⁻¹b = b, so AA⁻¹ should also “act like 1”.
If A is square, then sometimes we can do this.
(Notice that if we can do this, then we get a unique solution!)
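As a quick illustration of this idea (not part of the original slides), here is a minimal Python/numpy sketch, using a made-up 2 by 2 system, that solves Ax = b by multiplying b by A⁻¹:

    import numpy as np

    # A made-up invertible coefficient matrix and right-hand side (illustrative only).
    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])
    b = np.array([3.0, 2.0])

    A_inv = np.linalg.inv(A)        # A⁻¹, which "acts like 1" for A
    x = A_inv @ b                   # x = A⁻¹b

    print(x)                        # [1. 1.]
    print(np.allclose(A @ x, b))    # True: this x satisfies Ax = b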
Inverses
Let A be an n by n matrix.
A is called invertible, or nonsingular, if there is an n by n matrix B such
that AB = BA = Iₙ, where Iₙ is the n by n identity matrix. In this case, such
a B is unique (if AB = BA = Iₙ and AC = CA = Iₙ, then B = BIₙ = B(AC) = (BA)C = IₙC = C),
it is written A⁻¹, and it is called the inverse of A.
If A is not invertible, then it is called noninvertible or singular.
We do not write “1/A”: we cannot divide the number 1 by a matrix.
Example of an inverse
The matrix A shown on the original slide has the inverse A⁻¹ shown beside it
(the entries are not reproduced in this text version): multiplying in either
order gives AA⁻¹ = A⁻¹A = I.
Note that A⁻¹ is not the matrix obtained by taking the reciprocal of each
entry of A (we do not divide 1 by each entry).
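Because the entries of the slide's matrices are not available here, the following sketch uses a stand-in pair (A = [[2, 1], [1, 1]] with inverse [[1, −1], [−1, 2]], chosen for this illustration) to check the defining property and the entry-wise-reciprocal pitfall:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])
    A_inv = np.array([[ 1.0, -1.0],
                      [-1.0,  2.0]])

    # AA⁻¹ = A⁻¹A = I, so A_inv really is the inverse of A.
    print(np.allclose(A @ A_inv, np.eye(2)))   # True
    print(np.allclose(A_inv @ A, np.eye(2)))   # True

    # Taking the reciprocal of each entry does NOT give the inverse.
    wrong = 1.0 / A
    print(np.allclose(A @ wrong, np.eye(2)))   # False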
How to find the inverse: Gauss-Jordan
Take an n by n matrix A. We want to find A⁻¹ if it exists.
Suppose A⁻¹ = [x₁ | x₂ | … | xₙ]. The equation AA⁻¹ = Iₙ is equivalent to
A[x₁ | x₂ | … | xₙ] = [e₁ | e₂ | … | eₙ] (where eⱼ is the jth unit vector).
We want to solve the n linear systems given by Axⱼ = eⱼ.
Together, these systems have the augmented matrix [A | Iₙ].
Apply Gauss-Jordan elimination to this augmented matrix.
(A is converted to reduced row-echelon form; each elementary row operation
applied to A is also applied to the matrix on the right side.)
If the result is [Iₙ | B], then A⁻¹ exists and A⁻¹ = B; otherwise, A is singular.
We illustrate the process through an example.
Example
We find the inverse of a specific 3 by 3 matrix; its entries are not
reproduced in this text version. Starting from the augmented matrix [A | I],
the row operations shown include

R3 → R3 + 3R2
Example (continued)

R1 → R1 − R2

The inverse of the given matrix exists and is equal to the right block of
the final augmented matrix [I | A⁻¹].
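Since the worked example's matrices are not reproduced above, here is a minimal Python sketch of the Gauss-Jordan procedure itself (the function name gauss_jordan_inverse and the test matrix are made up for this illustration): it row reduces [A | Iₙ] and returns A⁻¹ when the left block becomes Iₙ, or None when A is singular.

    import numpy as np

    def gauss_jordan_inverse(A, tol=1e-12):
        # Row reduce [A | I]; return A⁻¹ if the left block becomes I, else None.
        A = np.array(A, dtype=float)
        n = A.shape[0]
        M = np.hstack([A, np.eye(n)])            # augmented matrix [A | I]
        for j in range(n):
            p = j + np.argmax(np.abs(M[j:, j]))  # choose a pivot row (partial pivoting)
            if abs(M[p, j]) < tol:
                return None                      # no pivot in this column: A is singular
            M[[j, p]] = M[[p, j]]                # swap the pivot row into place
            M[j] /= M[j, j]                      # scale so the pivot entry is 1
            for i in range(n):
                if i != j:
                    M[i] -= M[i, j] * M[j]       # clear the rest of column j
        return M[:, n:]                          # the right block is now A⁻¹

    A = np.array([[2.0, 1.0], [1.0, 1.0]])       # stand-in matrix, not the slide's
    print(gauss_jordan_inverse(A))               # [[ 1. -1.] [-1.  2.]]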


Solving systems and finding inverses
• If we want to solve Ax = b and we know A⁻¹, then we can multiply by A⁻¹
on the left to get A⁻¹Ax = A⁻¹b, so Ix = A⁻¹b, so x = A⁻¹b. This does satisfy
Ax = b, because AA⁻¹b = Ib = b.
• If we don’t know A⁻¹, then finding it via Gauss-Jordan elimination on
[A | I] takes more work than solving Ax = b by applying Gauss-Jordan
elimination to [A | b]. (The same operations are done on A; the right-hand
side of [A | I] is larger than that of [A | b].)
• Therefore, if all we want is to solve Ax = b, applying Gaussian or Gauss-
Jordan elimination to [A | b] is more efficient than finding A⁻¹ and then
multiplying Ax = b by A⁻¹. (Indeed, if A is not invertible, then we cannot
find A⁻¹ anyway.) A small comparison sketch follows below.
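The following numpy comparison (an illustrative sketch with a made-up system, not from the slides) shows the two routes giving the same answer; np.linalg.solve works from A and b directly, without ever forming A⁻¹:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])
    b = np.array([3.0, 2.0])

    # Preferred: solve Ax = b directly, without forming A⁻¹.
    x_direct = np.linalg.solve(A, b)

    # More work: first compute A⁻¹, then multiply it with b.
    x_via_inverse = np.linalg.inv(A) @ b

    print(x_direct, x_via_inverse)               # both give [1. 1.]
    print(np.allclose(x_direct, x_via_inverse))  # True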
Square coefficient matrices: Unique Solution Theorem, version 2
Theorem. Suppose that A is an n by n matrix. The following statements
are equivalent (that is, all are true or all are false).
1. For every n by 1 column vector b, the equation Ax = b has a unique
solution x.
2. A is row equivalent to an upper-triangular matrix in which all
diagonal entries are nonzero.
3. The only solution of Ax = 0 is x = 0.
4. A is invertible.
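As a small numerical sanity check of these equivalences (illustrative only, with a made-up invertible matrix), statements 1, 3, and 4 can be observed together:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])

    # Statement 4: A is invertible.
    A_inv = np.linalg.inv(A)

    # Statement 1: for any b, Ax = b has the unique solution x = A⁻¹b.
    b = np.array([5.0, -1.0])
    x = np.linalg.solve(A, b)
    print(np.allclose(x, A_inv @ b))        # True

    # Statement 3: the only solution of Ax = 0 is x = 0.
    print(np.linalg.solve(A, np.zeros(2)))  # [0. 0.]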
