Advanced Engineering Mathematics (Spring 2025), SNU CEE, Hyoseob Noh (hyoddubi1@[Link])

F31.201 Advanced Engineering Mathematics


In-Class Material: 2

Objectives

Chapter 8. Linear Algebra: Matrix Eigenvalue Problems

8.1 Eigenvalues and Eigenvectors

• Given a square matrix A of size n × n, consider the vector equation:

Ax = λx,

where x is an unknown vector and λ is an unknown scalar.

• A scalar λ for which the above equation has a nontrivial solution (x ≠ 0) is called an eigenvalue of A.

• The corresponding nonzero solution x is called an eigenvector of A associated with the eigenvalue λ.

How to find Eigenvalues and Eigenvectors

• Consider the eigenvalue equation:

Ax = λx ⟺ (A − λI)x = 0 (a homogeneous linear system)


• The system (A − λI)x = 0 has a nontrivial solution if and only if:

det(A − λI) = 0  (by Cramer's theorem)

• Definitions:

– D(λ) = det(A − λI): the characteristic determinant; expanded as a polynomial in λ, it is the characteristic polynomial of A

– D(λ) = 0: The characteristic equation of A

• The eigenvalues of A are the solutions of the characteristic equation of A.

8.1.1 Finding Eigenvalues and Eigenvectors

To find eigenvalues and eigenvectors, we solve:

(A − λI)x = 0.

A nontrivial solution exists if and only if:

det(A − λI) = 0.

This equation is called the characteristic equation of A, and det(A − λI), as a polynomial in λ, is called the characteristic polynomial.

Cramer’s Theorem

det(A − λI) = 0 if and only if (A − λI)x = 0 has a nontrivial solution.

Example 1

Find the eigenvalues and eigenvectors of the matrix:

A = \begin{pmatrix} -5 & 2 \\ 2 & -2 \end{pmatrix}.


Step 1: Eigenvalues

We solve the characteristic equation:

det(A − λI) = \begin{vmatrix} -5-λ & 2 \\ 2 & -2-λ \end{vmatrix} = (−5 − λ)(−2 − λ) − 4 = λ² + 7λ + 6 = 0.

Solving, we get:

λ₁ = −1, λ₂ = −6.

Step 2: Eigenvectors

For λ = −1:

(A + I)x = \begin{pmatrix} -4 & 2 \\ 2 & -1 \end{pmatrix} x = 0 ⇒ x = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.

For λ = −6:

(A + 6I)x = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} x = 0 ⇒ x = \begin{pmatrix} 2 \\ -1 \end{pmatrix}.
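The computation can be cross-checked numerically. The following sketch (not part of the original notes; it assumes numpy is available) verifies the characteristic polynomial and the eigenpairs of Example 1:

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [2.0, -2.0]])

# Coefficients of det(lambda*I - A): lambda^2 + 7*lambda + 6,
# matching the hand computation in Step 1.
print(np.poly(A))                    # [1. 7. 6.]

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                       # -1 and -6 (order may vary)

# Each column of eigvecs is a unit eigenvector, proportional to
# (1, 2) for lambda = -1 and to (2, -1) for lambda = -6.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```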

Theorem 1: Eigenvalues

• The eigenvalues of a square matrix are the roots of its characteristic polynomial.

• An n × n matrix has at least one eigenvalue and at most n distinct eigenvalues.

Theorem 2: Eigenvectors and Eigenspace

• If w and x are eigenvectors corresponding to the same eigenvalue λ, then w + x (provided w + x ≠ 0) and kx (for any k ∈ R, k ≠ 0) are also eigenvectors of A for λ.

• The eigenspace of λ is the set of all eigenvectors corresponding to λ, together with the zero vector.

Theorem 3: Eigenvalues of the Transpose

The eigenvalues of A and Aᵀ are the same.


Example 2:

Find the eigenvalues and eigenvectors of:

A = \begin{pmatrix} -2 & 2 & -3 \\ 2 & 1 & -6 \\ -1 & -2 & 0 \end{pmatrix}.

Characteristic equation:

det(A − λI) = −λ³ − λ² + 21λ + 45 = 0.

Solutions: λ = 5, λ = −3 (double root).

For λ = 5, Gauss elimination reduces (A − 5I)x = 0 to

\begin{pmatrix} -7 & 2 & -3 \\ 0 & -24/7 & 48/7 \\ 0 & 0 & 0 \end{pmatrix} x = 0 ⇒ x = \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}.

For λ = −3, Gauss elimination reduces (A + 3I)x = 0 to

\begin{pmatrix} 1 & 2 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} x = 0.

The eigenspace for λ = −3 is spanned by:

x₁ = \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}, \quad x₂ = \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix}.

8.1.2 Algebraic and Geometric Multiplicity

• Algebraic Multiplicity: The order of an eigenvalue as a root of the characteristic polynomial.

• Geometric Multiplicity: The number of linearly independent eigenvectors corresponding to an eigenvalue.

• Defect: Difference between algebraic and geometric multiplicity.

Example: for A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, λ = 0 is a double root of the characteristic polynomial (algebraic multiplicity 2), but there is only one linearly independent eigenvector (geometric multiplicity 1), so the defect is 1, as the sketch below illustrates.
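A minimal numerical sketch of this count (again assuming numpy; the rank-based formula, geometric multiplicity = n − rank(A − λI), is standard, but the code is illustrative rather than from the notes):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
lam = 0.0                                # double root of lambda^2 = 0
n = A.shape[0]

alg_mult = 2                             # read off from the characteristic polynomial
geom_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(geom_mult)                         # 1
print("defect =", alg_mult - geom_mult)  # 1
```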


8.2 Applications of Eigenvalue Problems

8.2.1 Example: Stretching of an Elastic Membrane

Consider:

A = \begin{pmatrix} 5 & 3 \\ 3 & 5 \end{pmatrix}.

Characteristic equation:

(5 − λ)² − 9 = 0 ⇒ λ = 8, 2.

Eigenvectors:

λ = 8: x = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad λ = 2: x = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.

The principal directions are given by the eigenvectors, and the deformation leads to an ellipse with equation:

\frac{z_1^2}{8^2} + \frac{z_2^2}{2^2} = 1.
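As a numerical cross-check (illustrative, not from the notes), the principal directions come out along the 45° axes:

```python
import numpy as np

A = np.array([[5.0, 3.0],
              [3.0, 5.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)    # 8 and 2
print(eigvecs)    # columns proportional to (1, 1) and (1, -1): the principal directions
```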


8.2.2 Example 2: Markov Processes


Markov Process Eigenvalue Problem

Consider the stochastic matrix (each column sums to 1):

A = \begin{pmatrix} 0.7 & 0.1 & 0 \\ 0.2 & 0.9 & 0.2 \\ 0.1 & 0 & 0.8 \end{pmatrix}

Find x such that Ax = x.

The solution yields λ = 1 and an eigenvector:

x = \begin{pmatrix} 2 \\ 6 \\ 1 \end{pmatrix}.

Normalized so its entries sum to 1, this eigenvector gives the long-term state probabilities (2/9, 2/3, 1/9), as the sketch below confirms.
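A sketch of this computation (assuming numpy; the normalization step is the standard probabilistic convention, spelled out here for illustration):

```python
import numpy as np

A = np.array([[0.7, 0.1, 0.0],
              [0.2, 0.9, 0.2],
              [0.1, 0.0, 0.8]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmin(np.abs(eigvals - 1.0))   # locate the eigenvalue closest to 1
x = np.real(eigvecs[:, k])
x = x / x.sum()                        # normalize so the entries sum to 1
print(x)                               # [2/9, 2/3, 1/9], i.e. (2, 6, 1)/9

# Repeated application of A drives any initial distribution to the same limit:
print(np.linalg.matrix_power(A, 200) @ np.array([1.0, 0.0, 0.0]))
```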


Example 4: Mechanical System and Eigenvalue Problem

Mass–spring systems with several masses and springs can be treated as eigenvalue problems. For instance,

consider the mechanical system in Fig. 161, which is governed by the system of ODEs

y₁″ = −3y₁ − 2(y₁ − y₂) = −5y₁ + 2y₂,  (6a)

y₂″ = −2(y₂ − y₁) = 2y₁ − 2y₂,  (6b)

where y₁ and y₂ are the displacements of the masses from rest, as shown in the figure. Primes denote derivatives with respect to time t.


 
In vector form, setting y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, we can write

y″ = Ay: \quad \begin{pmatrix} y_1'' \\ y_2'' \end{pmatrix} = \begin{pmatrix} -5 & 2 \\ 2 & -2 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.  (7)

We consider a mechanical system (e.g., two masses on springs) whose motion can be described by exponential/trigonometric solutions. We try a vector solution of the form

y = x e^{ωt}.  (8)

From the governing differential equation

ÿ = A y,

substituting y = x e^{ωt} gives

ω² x e^{ωt} = A x e^{ωt}.

Dividing by e^{ωt} (which is never zero) leads to the eigenvalue problem

A x = λ x, \quad where λ = ω².  (9)

Example:

From Example 1 in Sec. 8.1, A has eigenvalues λ₁ = −1 and λ₂ = −6. Hence

ω = ±\sqrt{λ₁} = ±i \quad and \quad ω = ±\sqrt{λ₂} = ±i\sqrt{6}.

Corresponding eigenvectors are

x₁ = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad x₂ = \begin{pmatrix} 2 \\ -1 \end{pmatrix}.  (10)

Thus from (8), we obtain four complex solutions of the form

x₁ e^{±it} = x₁(\cos t ± i \sin t), \qquad x₂ e^{±i\sqrt{6}\,t} = x₂(\cos(\sqrt{6}\,t) ± i \sin(\sqrt{6}\,t)).

Taking real and imaginary parts, we can form four real solutions:

x₁ \cos t, \quad x₁ \sin t, \quad x₂ \cos(\sqrt{6}\,t), \quad x₂ \sin(\sqrt{6}\,t).


A general real solution is any linear combination:

y = a₁ x₁ \cos t + b₁ x₁ \sin t + a₂ x₂ \cos(\sqrt{6}\,t) + b₂ x₂ \sin(\sqrt{6}\,t),

where a₁, b₁, a₂, b₂ are constants determined by the initial conditions. These describe undamped harmonic oscillations at the natural frequencies ω = 1 and ω = √6.
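The natural frequencies can be read off numerically as well (a sketch under the same numpy assumption):

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [2.0, -2.0]])
eigvals, _ = np.linalg.eig(A)
print(np.sqrt(-eigvals))     # omega = sqrt(-lambda): 1 and sqrt(6), order may vary

# Check one concrete solution y(t) = x1*cos(t): at t = 0, y'' = -x1 must equal A x1.
x1 = np.array([1.0, 2.0])
assert np.allclose(A @ x1, -x1)
```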

8.3 Symmetric, Skew-Symmetric, and Orthogonal Matrices


Definitions: Symmetric, Skew-Symmetric, and Orthogonal Matrices

• Symmetric: Aᵀ = A

• Skew-Symmetric: Aᵀ = −A

• Orthogonal: Aᵀ = A⁻¹

• Decomposition: Any real n × n matrix A can be written uniquely as the sum of a symmetric matrix R and a skew-symmetric matrix S:

A = R + S, \qquad R = \tfrac{1}{2}(A + Aᵀ), \qquad S = \tfrac{1}{2}(A − Aᵀ).

Example: Illustration

A = \begin{pmatrix} 9 & 5 & 2 \\ 2 & 3 & -8 \\ 5 & 4 & 3 \end{pmatrix} = \underbrace{\begin{pmatrix} 9.0 & 3.5 & 3.5 \\ 3.5 & 3.0 & -2.0 \\ 3.5 & -2.0 & 3.0 \end{pmatrix}}_{R} + \underbrace{\begin{pmatrix} 0 & 1.5 & -1.5 \\ -1.5 & 0 & -6.0 \\ 1.5 & 6.0 & 0 \end{pmatrix}}_{S}
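The decomposition is easy to reproduce numerically (illustrative sketch, assuming numpy):

```python
import numpy as np

A = np.array([[9.0, 5.0, 2.0],
              [2.0, 3.0, -8.0],
              [5.0, 4.0, 3.0]])
R = 0.5 * (A + A.T)     # symmetric part
S = 0.5 * (A - A.T)     # skew-symmetric part

assert np.allclose(R, R.T) and np.allclose(S, -S.T) and np.allclose(A, R + S)
print(R)
print(S)
```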

Theorem 1: Eigenvalues of Symmetric and Skew-Symmetric Matrices

• The eigenvalues of a symmetric matrix are all real.

• The eigenvalues of a skew-symmetric matrix are purely imaginary or zero.


8.3.1 Orthogonal Transformation and Orthogonal Matrices

An orthogonal transformation is a linear transformation of the form

y = Ax,

where A is an orthogonal matrix (i.e., Aᵀ = A⁻¹).

• Example: A rotation in the plane by angle θ is given by

\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \underbrace{\begin{pmatrix} \cos θ & -\sin θ \\ \sin θ & \cos θ \end{pmatrix}}_{\text{orthogonal}} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.

Theorem 2: Invariance of Inner Product


• An orthogonal transformation preserves the value of the inner product. For any a, b ∈ Rⁿ and any orthogonal n × n matrix A,

a · b = (Aa) · (Ab).

• Consequently, the length (norm) of any vector is also preserved:

∥a∥ = \sqrt{a · a} = \sqrt{(Aa) · (Aa)}.


Theorem 3: Orthonormality of Column and Row Vectors

A real square matrix A is orthogonal if and only if its column vectors (and also its row vectors) form an orthonormal system. That is, if a₁, . . . , aₙ are the column vectors of A, then

aⱼ · aₖ = aⱼᵀ aₖ = \begin{cases} 0, & j ≠ k, \\ 1, & j = k. \end{cases}

I = A^{-1} A = Aᵀ A = \begin{pmatrix} a_1^T \\ \vdots \\ a_n^T \end{pmatrix} \begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix} = \begin{pmatrix} a_1^T a_1 & a_1^T a_2 & \cdots & a_1^T a_n \\ \vdots & \vdots & \ddots & \vdots \\ a_n^T a_1 & a_n^T a_2 & \cdots & a_n^T a_n \end{pmatrix}.

Theorem 4: Determinant of an Orthogonal Matrix

The determinant of an orthogonal matrix is either +1 or −1.

Theorem 5: Eigenvalues of an Orthogonal Matrix

The eigenvalues of an orthogonal matrix A are either real or occur in complex-conjugate pairs, and all

have absolute value 1.

An Example with a Rotation–Reflection Matrix

Consider the matrix

A = \begin{pmatrix} 2/3 & 1/3 & 2/3 \\ -2/3 & 2/3 & 1/3 \\ 1/3 & 2/3 & -2/3 \end{pmatrix}.

Its characteristic polynomial turns out to be

−λ³ + \tfrac{2}{3}λ² + \tfrac{2}{3}λ − 1 = 0.

Since the characteristic polynomial has real coefficients and odd degree, at least one eigenvalue must be real; and because the eigenvalues of an orthogonal matrix have absolute value 1 (Theorem 5), we test λ = +1 and λ = −1. Substituting λ = −1 satisfies the polynomial, so λ = −1 is indeed a root.


Then we can factor out (λ + 1) from the characteristic polynomial to get

−(λ + 1)\left(λ² − \tfrac{5}{3}λ + 1\right) = 0.

Hence the other two eigenvalues are the roots of λ² − (5/3)λ + 1 = 0, which come out to

λ = \frac{5 ± i\sqrt{11}}{6},

each having absolute value 1.


Thus the three eigenvalues of this matrix are −1, (5 + i√11)/6, and (5 − i√11)/6, and the latter two have magnitude 1.
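A numerical check (sketch, assuming numpy) confirms both the orthogonality of A and the unit modulus of its eigenvalues:

```python
import numpy as np

A = np.array([[ 2/3, 1/3,  2/3],
              [-2/3, 2/3,  1/3],
              [ 1/3, 2/3, -2/3]])

assert np.allclose(A.T @ A, np.eye(3))   # A is orthogonal

eigvals = np.linalg.eigvals(A)
print(eigvals)               # -1 and (5 +/- i*sqrt(11))/6
print(np.abs(eigvals))       # [1. 1. 1.]
```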


8.4 Eigenbases. Diagonalization. Quadratic Forms


Theorem 1: Basis of Eigenvectors

If an n × n matrix A has n distinct eigenvalues, then A has a basis of eigenvectors

x₁, x₂, . . . , xₙ

for Rⁿ.

Theorem 2: Symmetric Matrices

Any real symmetric matrix admits an orthonormal basis of eigenvectors for Rⁿ.

Example:

A = \begin{pmatrix} 5 & 3 \\ 3 & 5 \end{pmatrix}

has a basis of eigenvectors \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix}, corresponding to the eigenvalues λ₁ = 8 and λ₂ = 2. (See the membrane example in Sec. 8.2.1.)

Even if not all n eigenvalues are different, a matrix A may still provide an eigenbasis for Rⁿ. See Example 2 in Sec. 8.1, where n = 3.

On the other hand, a matrix A may not have enough linearly independent eigenvectors to form a basis. For instance, for the defective matrix of Sec. 8.1.2,

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},

there is only one eigenvector up to scaling, of the form \begin{pmatrix} k \\ 0 \end{pmatrix} (with k ≠ 0 arbitrary). Hence this A cannot be diagonalized by a basis of eigenvectors.

Actually, eigenbases exist under more general conditions than those in Theorem 1. An important case is given next.


Definition (Similar Matrices, Similarity Transformation). We say Â is similar to A if

Â = P⁻¹ A P,

where P is an n × n invertible matrix. The map A ↦ Â is called a similarity transformation.

Theorem 3: Eigenvalues & Eigenvectors of Similar Matrices

• If Â is similar to A, then Â has the same eigenvalues as A.

• Moreover, if x is an eigenvector of A, then

y = P⁻¹ x

is an eigenvector of Â for the same eigenvalue.

Let

A = \begin{pmatrix} 6 & -3 \\ 4 & -1 \end{pmatrix}, \qquad P = \begin{pmatrix} 1 & 3 \\ 1 & 4 \end{pmatrix}.

Then

Â = P⁻¹ A P = \begin{pmatrix} 4 & -3 \\ -1 & 1 \end{pmatrix} A \begin{pmatrix} 1 & 3 \\ 1 & 4 \end{pmatrix} = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}.

(Here P⁻¹ was obtained from an earlier calculation in Sec. 7.8, with det P = 1.)

We see that Â has eigenvalues λ₁ = 3, λ₂ = 2. The characteristic equation of A is

(6 − λ)(−1 − λ) + 12 = λ² − 5λ + 6 = 0,

which has the same roots λ₁ = 3, λ₂ = 2. This confirms that Â and A share eigenvalues, as expected for similar matrices.

We can also check the eigenvectors. From the first component of (A − λI)x = 0:

(6 − λ)x₁ − 3x₂ = 0.


 
1
For λ = 3, this gives 3x1 − 3x2 = 0 =⇒ x1 = x2 . Hence an eigenvector is x1 = 
 .

1
For λ = 2,

(6 − 2)x1 − 3x2 = 0 =⇒ 4x1 − 3x2 = 0,


 
3
 .
so an eigenvector is x2 =  
4

By Theorem 3 (or direct calculation),

         
4 −3 1 1 4 −3 3 0
y1 = P −1 x1 = 

  =  ,
    y2 = P −1 x2 = 

  =  .
   
−1 1 1 0 −1 1 4 1

Indeed, these y₁, y₂ are eigenvectors of the diagonal matrix Â. Thus we see clearly that the columns of P are precisely the eigenvectors of A, which is the standard method for diagonalizing a matrix.
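The whole example can be replayed in a few lines (sketch, assuming numpy):

```python
import numpy as np

A = np.array([[6.0, -3.0],
              [4.0, -1.0]])
P = np.array([[1.0, 3.0],
              [1.0, 4.0]])

Ahat = np.linalg.inv(P) @ A @ P
print(Ahat)                          # diag(3, 2)
print(np.linalg.eigvals(A))          # 3 and 2, the same eigenvalues

# y = P^{-1} x maps eigenvectors of A to eigenvectors of Ahat:
x1 = np.array([1.0, 1.0])            # eigenvector of A for lambda = 3
print(np.linalg.inv(P) @ x1)         # (1, 0), an eigenvector of Ahat
```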

Theorem 4: Diagonalization of a Matrix

If an n × n matrix A has a basis of eigenvectors, then

D = X⁻¹ A X

is diagonal, with the eigenvalues of A on its main diagonal. Here, X is the matrix whose columns are the

eigenvectors of A.

Problem: Diagonalize the matrix

 
 7.3 0.2 −3.7
 
 
A = 
−11.5 1.0 .
5.5 
 
 
17.7 1.8 −9.3

Solution. The characteristic determinant gives the characteristic equation

−λ³ − λ² + 12λ = 0.


The roots (eigenvalues of A) turn out to be λ₁ = 3, λ₂ = −4, λ₃ = 0.

By applying Gauss elimination to (A − λI)x = 0 for each λ = λ₁, λ₂, λ₃, we find the corresponding eigenvectors. Then, using Gauss–Jordan elimination (Sec. 7.8, Example 1) on those eigenvectors arranged as the columns of a matrix X, we obtain its inverse:

X = \begin{pmatrix} -1 & 1 & 2 \\ 3 & -1 & 1 \\ -1 & 3 & 4 \end{pmatrix}, \qquad X^{-1} = \begin{pmatrix} -0.7 & 0.2 & 0.3 \\ -1.3 & -0.2 & 0.7 \\ 0.8 & 0.2 & -0.2 \end{pmatrix}.

Then, calculating AX and left-multiplying by X⁻¹ yields

D = X⁻¹ A X = \begin{pmatrix} -0.7 & 0.2 & 0.3 \\ -1.3 & -0.2 & 0.7 \\ 0.8 & 0.2 & -0.2 \end{pmatrix} \begin{pmatrix} -3 & -4 & 0 \\ 9 & 4 & 0 \\ -3 & -12 & 0 \end{pmatrix} = \begin{pmatrix} 3 & 0 & 0 \\ 0 & -4 & 0 \\ 0 & 0 & 0 \end{pmatrix}.

Hence A is diagonalized as A = X D X⁻¹, where D = diag(3, −4, 0).
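A numerical cross-check of this diagonalization (sketch, assuming numpy):

```python
import numpy as np

A = np.array([[  7.3, 0.2, -3.7],
              [-11.5, 1.0,  5.5],
              [ 17.7, 1.8, -9.3]])
X = np.array([[-1.0,  1.0, 2.0],
              [ 3.0, -1.0, 1.0],
              [-1.0,  3.0, 4.0]])

D = np.linalg.inv(X) @ A @ X
print(np.round(D, 10))               # diag(3, -4, 0)
assert np.allclose(A, X @ D @ np.linalg.inv(X))
```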

8.4.1 Quadratic Forms and Transformation to Principal Axes

A quadratic form Q in the variables x₁, x₂, . . . , xₙ of a vector x ∈ Rⁿ can be written as

Q = xᵀ A x = \sum_{j=1}^{n} \sum_{k=1}^{n} a_{jk} x_j x_k = a_{11} x_1^2 + a_{12} x_1 x_2 + \cdots + a_{nn} x_n^2,

where A = [aⱼₖ] is called the coefficient matrix.

• We usually assume A is symmetric. In that case, A admits an orthonormal basis of eigenvectors. Let X be the orthogonal matrix whose columns are those eigenvectors (X⁻¹ = Xᵀ).

• From Theorem 4 (Diagonalization), we have

D = X⁻¹ A X = Xᵀ A X,

which is diagonal. Consequently, setting x = Xy,

Q = xᵀ A x = (Xy)ᵀ A (Xy) = yᵀ Xᵀ A X y = yᵀ D y.

The substitution x = Xy is called the transformation to principal axes.

Theorem 5 (Principal Axes Theorem)

Under the substitution x = Xy, a quadratic form Q = xᵀ A x (with A real and symmetric) is transformed into the principal axes form

Q = yᵀ D y = λ₁ y₁² + λ₂ y₂² + · · · + λₙ yₙ²,

where λ₁, . . . , λₙ are the eigenvalues of A, and X is an orthogonal matrix whose columns are the corresponding eigenvectors.

Example 5: Quadratic Form, Symmetric Coefficient Matrix

Consider

xᵀ A x = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} 3 & 4 \\ 6 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 3x₁² + 10x₁x₂ + 2x₂².

Note that A is not symmetric as given, but one can form the corresponding symmetric matrix

C = \tfrac{1}{2}(A + Aᵀ) = \begin{pmatrix} 3 & \tfrac{4+6}{2} \\ \tfrac{6+4}{2} & 2 \end{pmatrix} = \begin{pmatrix} 3 & 5 \\ 5 & 2 \end{pmatrix},

so that

xᵀ C x = 3x₁² + 10x₁x₂ + 2x₂².

Either way, the quadratic form is Q = 3x₁² + 10x₁x₂ + 2x₂². Diagonalizing C (or directly applying the principal axes theorem) rewrites Q in the form Q = λ₁y₁² + λ₂y₂².
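That symmetrizing the coefficient matrix leaves the form unchanged is easy to verify numerically (illustrative sketch; the test vector is an arbitrary choice):

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [6.0, 2.0]])
C = 0.5 * (A + A.T)                       # [[3, 5], [5, 2]]

x = np.array([1.7, -0.4])                 # any test vector
assert np.isclose(x @ A @ x, x @ C @ x)   # the skew part contributes nothing
print(C)
```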


Example 6: Conic Section via Principal Axes

Find the type of conic section described by

Q = 17x₁² − 30x₁x₂ + 17x₂² = 128,

and transform it to its principal axes.


 
• Write Q = xᵀ A x with A = \begin{pmatrix} 17 & -15 \\ -15 & 17 \end{pmatrix}.

• The characteristic equation is

det(A − λI) = (17 − λ)² − (−15)² = (17 − λ)² − 15² = 0.

Hence λ₁ = 2 and λ₂ = 32.

• Transforming to coordinates y aligned with the corresponding eigenvectors, the quadratic form becomes

Q = 2y₁² + 32y₂².

Thus,

2y₁² + 32y₂² = 128 ⟹ \frac{y_1^2}{8^2} + \frac{y_2^2}{2^2} = 1,

which is an ellipse.
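A sketch of the principal-axes computation (assuming numpy; eigh is used because A is symmetric):

```python
import numpy as np

A = np.array([[ 17.0, -15.0],
              [-15.0,  17.0]])

eigvals, X = np.linalg.eigh(A)   # ascending eigenvalues, orthonormal eigenvectors
print(eigvals)                   # [2. 32.]

D = X.T @ A @ X                  # in y = X^T x coordinates: 2*y1^2 + 32*y2^2
print(np.round(D, 10))           # diag(2, 32)
```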

8.5 Complex Matrices and Forms

8.5.1 Notations

• For a matrix A = [aⱼₖ] with complex entries aⱼₖ = α + iβ (α, β ∈ R), we write Ā for the matrix obtained by taking the complex conjugate of each entry, i.e. āⱼₖ = α − iβ.

• Āᵀ denotes the transpose of Ā; it is called the conjugate transpose of A.
     
Example. A = \begin{pmatrix} 3+4i & 1-i \\ 6 & 2-5i \end{pmatrix} ⟹ Ā = \begin{pmatrix} 3-4i & 1+i \\ 6 & 2+5i \end{pmatrix}, \quad Āᵀ = \begin{pmatrix} 3-4i & 6 \\ 1+i & 2+5i \end{pmatrix}.


Definition: Hermitian, Skew-Hermitian, and Unitary Matrices

Let A = [aⱼₖ] be a square complex matrix.

• A is called Hermitian if Āᵀ = A (equivalently āₖⱼ = aⱼₖ).

• A is Skew-Hermitian if Āᵀ = −A.

• A is Unitary if Āᵀ = A⁻¹.

Matrix | Type           | Characteristic Equation | Eigenvalues
A      | Hermitian      | λ² − 11λ + 18 = 0       | 9, 2
B      | Skew-Hermitian | λ² − 2iλ + 8 = 0        | 4i, −2i
C      | Unitary        | λ² − iλ − 1 = 0         | ½√3 + ½i, −½√3 + ½i

Table 8.1: Examples of Hermitian, skew-Hermitian, and unitary matrices with their characteristic equations and eigenvalues.

8.5.2 Eigenvalues of Complex Matrices


Theorem 1 (Eigenvalues)

• The eigenvalues of a Hermitian matrix (and thus of a real symmetric matrix) are real.

• The eigenvalues of a skew-Hermitian matrix (and thus of a real skew-symmetric matrix) are purely imaginary or zero.

• The eigenvalues of a unitary matrix (and thus of an orthogonal matrix) all have absolute value 1.
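These statements can be spot-checked numerically; the matrices below are illustrative choices (not from the notes), constructed to be Hermitian, skew-Hermitian, and unitary respectively:

```python
import numpy as np

H = np.array([[4, 1 - 3j],
              [1 + 3j, 7]])              # Hermitian: conj(H).T == H
S = np.array([[3j, 2 + 1j],
              [-2 + 1j, -1j]])           # skew-Hermitian: conj(S).T == -S
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)     # unitary: conj(U).T @ U == I

print(np.linalg.eigvals(H))              # real
print(np.linalg.eigvals(S))              # purely imaginary
print(np.abs(np.linalg.eigvals(U)))      # [1. 1.]
```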


Theorem 2 (Invariance of Inner Product)

A unitary transformation y = Ax, with A unitary, preserves the value of the (complex) inner product

a · b = āᵀ b,

and thus preserves the norm

∥a∥ = \sqrt{āᵀ a} = \sqrt{|a₁|² + · · · + |aₙ|²}.

8.5.3 Unitary Systems


Definition: Unitary System

A unitary system is a set of complex vectors {a₁, . . . , aₙ} that satisfy

aⱼ · aₖ = āⱼᵀ aₖ = \begin{cases} 0, & j ≠ k, \\ 1, & j = k. \end{cases}

Theorem 3 (Unitary System of Column and Row Vectors)

A complex square matrix A is unitary if and only if its column vectors (and its row vectors) form a unitary system.

Theorem 4 (Determinant of a Unitary Matrix)

For a unitary matrix A,

|det(A)| = 1.

Theorem 5 (Basis of Eigenvectors)

Any Hermitian, skew-Hermitian, or unitary matrix admits a basis of eigenvectors in Cⁿ that forms a unitary system.

21
Advanced Engineering Mathematics (Spring 2025)-SNU CEE Hyoseob Noh (hyoddubi1@[Link])

8.5.4 Hermitian and Skew-Hermitian Forms

A form in the components x₁, x₂, . . . , xₙ of a vector x ∈ Cⁿ is given by

x̄ᵀ A x = \sum_{j=1}^{n} \sum_{k=1}^{n} a_{jk} \bar{x}_j x_k.

Written out term by term, this expansion is

x̄ᵀ A x = a₁₁x̄₁x₁ + a₁₂x̄₁x₂ + · · · + a₁ₙx̄₁xₙ + a₂₁x̄₂x₁ + a₂₂x̄₂x₂ + · · · + aₙₙx̄ₙxₙ.

Here A = [aⱼₖ] is called the coefficient matrix.

• The form x̄ᵀ A x is said to be Hermitian or skew-Hermitian if A is a Hermitian or a skew-Hermitian matrix, respectively.

• If A is Hermitian, then x̄ᵀ A x is always real.

• If A is skew-Hermitian, then x̄ᵀ A x is purely imaginary or zero.
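A quick numerical illustration of the Hermitian case (sketch; the matrix and vector are arbitrary choices, assuming numpy):

```python
import numpy as np

A = np.array([[4, 1 - 3j],
              [1 + 3j, 7]])              # Hermitian coefficient matrix
x = np.array([1 + 2j, -3 + 1j])

q = np.conj(x) @ A @ x                   # the form conj(x)^T A x
print(q)                                 # imaginary part is (numerically) zero
assert abs(q.imag) < 1e-12
```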
