Mathematical Preliminaries Handbook

This document provides an introduction to vector and tensor analysis. It defines vectors, coordinate systems, and tensor operations. Vectors are described by their components in different coordinate systems. Tensors are linear transformations that operate on vectors. Second-order tensors can be represented by their matrix in a given coordinate system. Properties of tensors like transposition, symmetry, and eigenvectors are discussed. Key tensor operations like multiplication and addition are defined through their action on vectors.


Chapter 1

Vectors and tensors

We assume familiarity with matrix algebra. We quickly summarize tensor analysis and introduce the notation employed in this text.

1.1 Coordinate systems

We will exclusively employ right-handed Cartesian coordinate systems defined by three mutually orthogonal unit vectors. Such a triad $\{\hat{e}_i\}$ is orthonormal, i.e.,

$$\hat{e}_i \cdot \hat{e}_j = \delta_{ij}, \qquad (1.1)$$

where $\delta_{ij}$ is the Kronecker delta, defined by

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases} \qquad (1.2)$$
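
As a quick numerical illustration (a minimal NumPy sketch; the variable names are ours, not the text's), the pairwise dot products of any orthonormal triad reproduce the Kronecker delta of (1.2):

```python
import numpy as np

# An orthonormal triad generated by QR factorization; column i of E holds
# the components of e_i in the standard basis.
E, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((3, 3)))

# (1.1): e_i . e_j = delta_ij
delta = np.array([[E[:, i] @ E[:, j] for j in range(3)] for i in range(3)])
assert np.allclose(delta, np.eye(3))
```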

The coordinate system of choice may be stationary, translating, rotating, or both translating and rotating. We will employ calligraphic capital letters to identify coordinate systems. For example, $\mathcal{S}$ can signify a coordinate system attached to a spacecraft, defined by the unit vectors $\hat{e}_i$, $i = 1, 2, 3$. The notation $\{\mathcal{S}, S, \hat{e}_i\}$ will indicate the name ($\mathcal{S}$) of the coordinate system, its origin ($S$), and the unit vectors that define it. When several coordinate systems are present, we need to differentiate their unit vectors. Typically this is done by appending primes to them; e.g., $\hat{e}'_i$, $i = 1, 2, 3$, will identify a second system, and so on.

1.2 Vectors

A vector is indicated thus: $a$. Given two coordinate systems $\{\mathcal{P}, P, \hat{e}_i\}$ and $\{\mathcal{S}, S, \hat{e}'_i\}$, the components of $a$ in $\mathcal{P}$ will be denoted by

$$a_i = a \cdot \hat{e}_i, \qquad (1.3)$$

and in $\mathcal{S}$ by $a'_i = a \cdot \hat{e}'_i$, where '$\cdot$' is the usual vector dot product. Thus we have the identities (with summation over repeated indices implied here and henceforth)

$$a = (a \cdot \hat{e}_i)\,\hat{e}_i = a_i \hat{e}_i = (a \cdot \hat{e}'_i)\,\hat{e}'_i = a'_i \hat{e}'_i. \qquad (1.4)$$

The magnitude or norm of $a$ is

$$|a| = (a \cdot a)^{1/2} = (a_i a_i)^{1/2} = (a'_i a'_i)^{1/2}. \qquad (1.5)$$

We now collect several useful formulae:

$$a \cdot b = |a|\,|b| \cos\theta = a_i b_i, \qquad (1.6a)$$
$$a \times b = -b \times a = |a|\,|b| \sin\theta\; \hat{e}_c = \varepsilon_{ijk}\, \hat{e}_i\, a_j b_k, \qquad (1.6b)$$
$$a \cdot (b \times c) = b \cdot (c \times a) = c \cdot (a \times b) = \varepsilon_{ijk}\, a_i b_j c_k \qquad (1.6c)$$
and
$$a \times (b \times c) = (a \cdot c)\, b - (a \cdot b)\, c = \varepsilon_{ijk}\varepsilon_{klm}\, \hat{e}_i\, a_j b_l c_m, \qquad (1.6d)$$

where $\theta$ is the angle between the vectors $a$ and $b$, $\hat{e}_c$ is a unit vector normal to the plane containing $a$ and $b$, and $\varepsilon_{ijk}$ is the alternating tensor defined by

$$\varepsilon_{ijk} = \begin{cases} 1 & \text{if } i, j, k \text{ are an even permutation of } 1, 2, 3, \\ -1 & \text{if } i, j, k \text{ are an odd permutation of } 1, 2, 3, \\ 0 & \text{otherwise.} \end{cases} \qquad (1.7)$$
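
These identities are easy to check numerically. The following minimal NumPy sketch (variable names are ours) builds the alternating tensor of (1.7) as a $3\times3\times3$ array and verifies (1.6b)-(1.6d) for random vectors:

```python
import numpy as np

# Alternating (Levi-Civita) tensor of (1.7).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

# (1.6b): (a x b)_i = eps_ijk a_j b_k
assert np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b))

# (1.6c): the scalar triple product
assert np.isclose(np.einsum('ijk,i,j,k', eps, a, b, c), a @ np.cross(b, c))

# (1.6d): a x (b x c) = (a.c) b - (a.b) c
assert np.allclose(np.cross(a, np.cross(b, c)), (a @ c) * b - (a @ b) * c)
```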

1.3 Tensors

A first-order tensor is simply a vector. A second-order tensor is a linear transformation that operates on a vector to produce another vector; we will see that a second-order tensor is the coordinate-independent counterpart of a matrix.

1.3.1 Second-order tensors

A second-order tensor $A$ is probed by its action ($\cdot$) on vectors. We employ the same symbol as for the dot product of vectors because of similarities between the two operations. We define the resultant $b$ of $A$'s operation (specifically, right-operation) on $a$ by $b = A \cdot a$. Similarly, a left-operation may be defined as $c = a \cdot A$. Note that, in general, $A \cdot a \neq a \cdot A$.

The addition $A + B$ and multiplication $A \cdot B$ of two tensors $A$ and $B$ result in tensors $C$ and $D$, respectively, that are defined in terms of how they operate on some vector $a$, i.e.,

$$(A + B) \cdot a = C \cdot a := A \cdot a + B \cdot a \qquad (1.8a)$$
and
$$(A \cdot B) \cdot a = D \cdot a := A \cdot (B \cdot a). \qquad (1.8b)$$

It is understood in the above that the two tensors $A$ and $B$ operate on similar kinds of vectors to produce alike vectors.
To better understand tensors, it is useful to generalize the concept of a unit vector to a tensorial basis. Such a generalization is furnished by the tensor product $a \otimes b$ of two vectors $a$ and $b$. The entity $a \otimes b$ is a second-order tensor that can act on another vector $c$ in two different ways (left- and right-operations) to yield another vector:

$$(a \otimes b) \cdot c = (c \cdot b)\, a \quad \text{and} \quad c \cdot (a \otimes b) = (c \cdot a)\, b, \qquad (1.9)$$

where the '$\cdot$' on the left-hand sides denotes a tensor operation, and the usual vector dot product on the right-hand sides. Clearly the left and right operations are not the same. Given a coordinate system $\hat{e}_i$, contrasting the computation

$$a \otimes b = a_i \hat{e}_i \otimes b_j \hat{e}_j = (a_i b_j)\, \hat{e}_i \otimes \hat{e}_j \qquad (1.10)$$

with (1.4) suggests that a tensorial basis may be constructed by taking appropriate-order tensor products of the unit vectors. We note that the above represents a weighted linear sum of the nine tensors $\hat{e}_i \otimes \hat{e}_j$. Thus, a second-order tensorial basis in the coordinate system $\hat{e}_i$ is given by the nine quantities $\hat{e}_i \otimes \hat{e}_j$. A second-order tensor $A$ may then be written as

$$A = A_{ij}\, \hat{e}_i \otimes \hat{e}_j, \qquad (1.11)$$

in terms of $A$'s components $A_{ij}$ in $\hat{e}_i$. These components, obtained by appealing to (1.9), are given by

$$A_{ij} = \hat{e}_i \cdot A \cdot \hat{e}_j, \qquad (1.12)$$

which is reminiscent of the analogous formulae for vector components; see (1.4). We will refer to the nine $A_{ij}$'s as the "matrix of $A$ in $\hat{e}_i$", denoted by $[A]$. In another coordinate system $\hat{e}'_i$, the $A'_{ij} = \hat{e}'_i \cdot A \cdot \hat{e}'_j$ constitute the "matrix of $A$ in $\hat{e}'_i$", denoted by $[A]'$. Sometimes, for more clarity, we append the name of the coordinate system in which we are finding the matrix of $A$; e.g., $[A]_{\mathcal{P}}$ is the matrix of $A$ in $\{\mathcal{P}, \hat{e}_i\}$.
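
A minimal NumPy sketch of these definitions (variable names are ours): the matrix of $a \otimes b$ in the basis $\hat{e}_i$ is the outer product of the component arrays, and (1.9) and (1.12) can be checked directly.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))

# Components (a_i b_j) of the tensor product a ⊗ b, cf. (1.10).
T = np.outer(a, b)

# Right- and left-operations of (1.9).
assert np.allclose(T @ c, (c @ b) * a)   # (a ⊗ b)·c = (c·b) a
assert np.allclose(c @ T, (c @ a) * b)   # c·(a ⊗ b) = (c·a) b

# (1.12): projecting with the basis vectors recovers the components A_ij.
e = np.eye(3)
A = rng.standard_normal((3, 3))
assert np.allclose([[e[i] @ A @ e[j] for j in range(3)] for i in range(3)], A)
```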
A second-order tensor's interactions with vectors and other second-order tensors may be obtained by repeated (if required) application of (1.9). These operations are summarized below:

$$A \cdot a = A_{ij}\, \hat{e}_i \otimes \hat{e}_j \cdot a_m \hat{e}_m = A_{ij} a_j\, \hat{e}_i, \qquad (1.13a)$$
$$a \cdot A = a_m \hat{e}_m \cdot A_{ij}\, \hat{e}_i \otimes \hat{e}_j = a_i A_{ij}\, \hat{e}_j \qquad (1.13b)$$
and
$$A \cdot B = A_{ij}\, \hat{e}_i \otimes \hat{e}_j \cdot B_{mn}\, \hat{e}_m \otimes \hat{e}_n = A_{ij} B_{jn}\, \hat{e}_i \otimes \hat{e}_n, \qquad (1.13c)$$

where the first two operations produce vectors, and the last a second-order tensor. The components of the new vectors or tensors produced above may be found by comparing coefficients, or by projection using (1.3) or (1.12). For example, if $C = A \cdot B$, then from (1.13c), applying (1.12), we obtain

$$C_{pq} = \hat{e}_p \cdot C \cdot \hat{e}_q = \hat{e}_p \cdot (A_{ij} B_{jn}\, \hat{e}_i \otimes \hat{e}_n) \cdot \hat{e}_q = \delta_{ip}\, A_{ij} B_{jn}\, \delta_{nq} = A_{pj} B_{jq}.$$
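
In components, (1.13a)-(1.13c) reduce to ordinary matrix-vector and matrix-matrix products of the tensors' matrices, which the following NumPy sketch (names are ours) makes explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 3, 3))
a = rng.standard_normal(3)

# (1.13a): (A·a)_i = A_ij a_j  -- the right-operation is the usual matrix-vector product.
assert np.allclose(np.einsum('ij,j->i', A, a), A @ a)

# (1.13b): (a·A)_j = a_i A_ij  -- the left-operation multiplies by the transpose.
assert np.allclose(np.einsum('i,ij->j', a, A), A.T @ a)

# (1.13c): (A·B)_in = A_ij B_jn  -- tensor multiplication is matrix multiplication,
# as in the computation of C_pq above.
assert np.allclose(np.einsum('ij,jn->in', A, B), A @ B)
```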

The transpose $A^T$ of a tensor $A$ is defined by the formula

$$A^T = (A^T)_{ij}\, \hat{e}_i \otimes \hat{e}_j = A_{ji}\, \hat{e}_i \otimes \hat{e}_j. \qquad (1.14)$$

This results in $[A^T] = [A]^T$, the usual transpose of matrices. From the above definition of a tensor's transpose, the following identities are easily proved (here $\operatorname{tr} A := A_{ii}$ denotes the trace of $A$):

$$(A + B)^T = A^T + B^T, \qquad (1.15a)$$
$$(A \cdot B)^T = B^T \cdot A^T, \qquad (1.15b)$$
$$\operatorname{tr} A = \operatorname{tr} A^T \qquad (1.15c)$$
and
$$(A \cdot a) \cdot b = a \cdot (A^T \cdot b), \qquad (1.15d)$$

for any two vectors $a$ and $b$. In the last identity we may drop the parentheses, as the expression makes sense in only the indicated way.
A tensor is said to be symmetric if $A = A^T$, and anti-symmetric if $A = -A^T$. An anti-symmetric tensor has at most three independent components in any coordinate system. Thus, with any anti-symmetric tensor $W$ it is possible to associate an axial vector, denoted by $w$, with the property that for all vectors $b$

$$W \cdot b = w \times b. \qquad (1.16)$$

The operations of constructing anti-symmetric tensors from vectors and of extracting axial vectors from anti-symmetric tensors are denoted by $\operatorname{sk} w$ $(= W)$ and $\operatorname{ax} W$ $(= w)$, respectively. The relationship between $w$ and $W$ may be expressed in indicial notation employing the alternating tensor of (1.7):

$$\operatorname{ax} W = w = -\tfrac{1}{2}\, \varepsilon_{ijk} W_{jk}\, \hat{e}_i \quad \text{and} \quad \operatorname{sk} w = W = -\varepsilon_{ijk} w_i\, \hat{e}_j \otimes \hat{e}_k. \qquad (1.17)$$

Employing (1.6b), it is straightforward to check that the above prescriptions for $\operatorname{ax} W$ and $\operatorname{sk} w$ satisfy (1.16).
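
A short NumPy sketch (names are ours) of the ax and sk maps of (1.17), checking the defining property (1.16):

```python
import numpy as np

# Alternating tensor, as in (1.7).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def sk(w):
    """Anti-symmetric tensor of (1.17): W_jk = -eps_ijk w_i."""
    return -np.einsum('ijk,i->jk', eps, w)

def ax(W):
    """Axial vector of (1.17): w_i = -(1/2) eps_ijk W_jk."""
    return -0.5 * np.einsum('ijk,jk->i', eps, W)

rng = np.random.default_rng(3)
w, b = rng.standard_normal((2, 3))

W = sk(w)
assert np.allclose(W, -W.T)                # W is anti-symmetric
assert np.allclose(W @ b, np.cross(w, b))  # (1.16): W·b = w × b
assert np.allclose(ax(W), w)               # ax undoes sk
```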
For most tensors, and for all tensors occurring in this book, it is possible to find three unit vectors that are simply scaled under the tensor's operation, i.e., given $A$, there (almost always) exist three vectors $\hat{v}_i$ and, correspondingly, three scalars $\lambda_i$, such that

$$A \cdot \hat{v}_i = \lambda_i \hat{v}_i \quad \text{(no sum)}. \qquad (1.18)$$

These special vectors $\hat{v}_i$ are called the eigenvectors of $A$, and the corresponding scalings $\lambda_i$ are $A$'s eigenvalues. In the coordinate system described by the three eigenvectors, the tensor's matrix is diagonal, with the tensor's eigenvalues as the diagonal entries. This simple diagonal form makes employing the eigen-coordinate system very tempting for computation. Unfortunately, there is no guarantee that the eigenvector triad is mutually orthogonal, so the coordinate system it describes may not be Cartesian. However, if the tensor is symmetric, it is always possible to diagonalize it and, moreover, the eigenvectors are orthogonal, so that the coordinate system they describe is frequently a convenient operational choice. Thus, given a symmetric tensor $S$, it is possible to find three eigenvectors $\hat{v}_i$ and corresponding eigenvalues $\lambda_i$, so that $S$ is simply

$$S = \sum_{i=1}^{3} \lambda_i\, \hat{v}_i \otimes \hat{v}_i. \qquad (1.19)$$

A symmetric $S$'s operation therefore corresponds simply to linear stretches/extensions along three mutually orthogonal eigen-directions.
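
The spectral representation (1.19) is easy to verify numerically; a minimal NumPy sketch (names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3))
S = 0.5 * (M + M.T)          # the matrix of a symmetric tensor

lam, V = np.linalg.eigh(S)   # eigenvalues and orthonormal eigenvectors (columns of V)

# The eigenvectors are mutually orthogonal ...
assert np.allclose(V.T @ V, np.eye(3))

# ... and (1.19) reconstructs S as a weighted sum of v_i ⊗ v_i.
assert np.allclose(sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(3)), S)

# In the eigen-basis the matrix of S is diagonal.
assert np.allclose(V.T @ S @ V, np.diag(lam))
```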
If all the eigenvalues of a symmetric tensor are positive, the symmetric tensor is said to be positive definite. Finally, it is important to mention that for any second-order tensor in three-dimensional space, three eigenvalues can always be found, even if the tensor is not diagonalizable, i.e., does not have three independent eigenvectors. These eigenvalues are either all real, or consist of one real eigenvalue and a complex-conjugate pair.

Frequently, and again for all tensors considered in this book, it is possible to define an inverse tensor. Thus, given $A$ taking $a$ to $b$, the inverse tensor $A^{-1}$ brings $b$ back to $a$. It is easy enough to see that a tensor and its inverse share the same eigenvectors, but reciprocal eigenvalues. Thus, if $A$ has a zero eigenvalue, its inverse does not exist. The following identities regarding inverses are easily verified:

$$A \cdot A^{-1} = A^{-1} \cdot A = \mathbf{1}, \qquad (1.20a)$$
$$(A \cdot B)^{-1} = B^{-1} \cdot A^{-1}, \qquad (1.20b)$$
$$(A^T)^{-1} = (A^{-1})^T = A^{-T} \qquad (1.20c)$$
and
$$\det A^{-1} = (\det A)^{-1}, \qquad (1.20d)$$

where $\mathbf{1}$ is the identity tensor, and $\det A$ is the determinant of a tensor, computed by forming the matrix of the tensor in any coordinate system and then taking the determinant of that matrix.
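
A quick NumPy check (names are ours) of some of these statements about inverses:

```python
import numpy as np

rng = np.random.default_rng(5)
A, B = rng.standard_normal((2, 3, 3))   # generic matrices; invertible almost surely

# (1.20a) and (1.20b)
assert np.allclose(A @ np.linalg.inv(A), np.eye(3))
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))

# The inverse has reciprocal eigenvalues (illustrated here for a symmetric tensor).
S = 0.5 * (A + A.T)
lam, _ = np.linalg.eigh(S)
lam_inv, _ = np.linalg.eigh(np.linalg.inv(S))
assert np.allclose(np.sort(1.0 / lam), lam_inv)

# (1.20d)
assert np.allclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
```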
An important class of tensors that will occur frequently in the text is that of orthogonal tensors $Q$, which have the property that, for any vector $a$,

$$|Q \cdot a| = |a|, \qquad (1.21)$$

i.e., $Q$ preserves a vector's length. From this the following properties follow:

$$Q^{-1} = Q^T \quad \text{and} \quad \det Q = \pm 1. \qquad (1.22)$$

In all applications to follow, orthogonal tensors will have determinant one. Such proper orthogonal tensors are called rotation tensors. Physically, as its name suggests, a rotation tensor represents a rotation about the origin. It may be shown (see, e.g., Knowles 1998, p. 51) that of a rotation tensor's three eigenvalues, two are complex conjugates of unit norm and the third is unity. The eigenvector corresponding to the unit eigenvalue provides the axis of rotation, and the amount of rotation is provided by the argument of the complex eigenvalue.
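
As an illustration (a sketch; the helper `rotation` and all names are ours), a rotation tensor may be constructed from an axis and an angle via Rodrigues' formula, and both may then be recovered from its eigen-decomposition as described above:

```python
import numpy as np

def rotation(axis, angle):
    """Matrix of the rotation about the unit vector `axis` by `angle` (Rodrigues' formula)."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])    # K·b = n × b
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

n, theta = np.array([1.0, 2.0, 2.0]) / 3.0, 0.7
Q = rotation(n, theta)

# Orthogonality and unit determinant, cf. (1.21)-(1.22).
assert np.allclose(Q.T @ Q, np.eye(3)) and np.isclose(np.linalg.det(Q), 1.0)

# One eigenvalue is unity and its eigenvector is the rotation axis;
# the other two are exp(±i·theta), whose argument is the rotation angle.
lam, V = np.linalg.eig(Q)
k = int(np.argmin(np.abs(lam - 1.0)))
assert np.allclose(np.cross(np.real(V[:, k]), n), 0.0)
assert np.isclose(np.abs(np.angle(lam[(k + 1) % 3])), theta)
```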
We have already mentioned the tensor product of two vectors in (1.10). Amongst other things, the tensor product helps in "tensorizing" the vector operations of taking dot and cross products, viz.,

$$a \cdot b = \operatorname{tr}(a \otimes b) \qquad (1.23a)$$
and
$$a \times b = -2\, \operatorname{ax}\, \operatorname{sk}(a \otimes b), \qquad (1.23b)$$

where $\operatorname{sk}$ applied to a second-order tensor denotes its anti-symmetric part, $\operatorname{sk} T = \tfrac{1}{2}(T - T^T)$. Some additional identities that are easily proved, and will often be used, are

$$a \otimes (A \cdot b) = (a \otimes b) \cdot A^T \qquad (1.24a)$$
and
$$a \times (b \times c) = (a \cdot c)\, b - (a \cdot b)\, c = \operatorname{tr}(a \otimes c)\, b - \operatorname{tr}(a \otimes b)\, c. \qquad (1.24b)$$
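
A brief NumPy check (names are ours) of (1.23) and (1.24a):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

ax = lambda W: -0.5 * np.einsum('ijk,jk->i', eps, W)   # axial vector, (1.17)
skw = lambda T: 0.5 * (T - T.T)                        # anti-symmetric part

rng = np.random.default_rng(6)
a, b = rng.standard_normal((2, 3))
A = rng.standard_normal((3, 3))

# (1.23a): a·b = tr(a ⊗ b)
assert np.isclose(a @ b, np.trace(np.outer(a, b)))

# (1.23b): a × b = -2 ax sk(a ⊗ b)
assert np.allclose(np.cross(a, b), -2.0 * ax(skw(np.outer(a, b))))

# (1.24a): a ⊗ (A·b) = (a ⊗ b)·A^T
assert np.allclose(np.outer(a, A @ b), np.outer(a, b) @ A.T)
```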

1.4 Coordinate transformation

We will often need to find the components of vectors and second-order tensors in one coordinate system, say $\{\mathcal{P}, P, \hat{e}_i\}$, given their components in the other, say $\{\mathcal{S}, S, \hat{e}'_i\}$. This may be done by expressing the unit vectors $\hat{e}'_i$ of $\mathcal{S}$ in terms of the $\hat{e}_j$ of $\mathcal{P}$ as

$$\hat{e}'_i = (\hat{e}'_i \cdot \hat{e}_j)\, \hat{e}_j,$$

and substituting this relationship into (1.4) and (1.12). Geometrically, because both $\mathcal{P}$ and $\mathcal{S}$ are right-handed Cartesian coordinate systems, it is possible to obtain one from the other by a rotation tensor, and we write

$$\hat{e}_i = R \cdot \hat{e}'_i, \qquad (1.25)$$

in terms of the rotation tensor $R = R'_{ij}\, \hat{e}'_i \otimes \hat{e}'_j = R_{ij}\, \hat{e}_i \otimes \hat{e}_j$. It may be shown that, in fact,

$$R_{ij} = [R]_{ij} = R'_{ij} = [R]'_{ij} \equiv \hat{e}'_i \cdot \hat{e}_j. \qquad (1.26)$$

Substituting the previous two equations into (1.3) and (1.12), we obtain the coordinate transformation rules

$$[a] = [R^T \cdot a]' = [R]^T [a]' \iff a_i = R_{ji}\, a'_j \qquad (1.27a)$$
and
$$[A] = [R^T \cdot A \cdot R]' = [R]^T [A]' [R] \iff A_{ij} = R_{ki}\, A'_{kl}\, R_{lj} \qquad (1.27b)$$

for vector and second-order tensor components, respectively.
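
The transformation rules (1.27) can be exercised numerically. The following NumPy sketch (names are ours) follows the convention of (1.26), taking the unprimed basis to be the standard one, so that $R_{ij} = \hat{e}'_i \cdot \hat{e}_j$ is simply the $j$-th component of $\hat{e}'_i$:

```python
import numpy as np

rng = np.random.default_rng(7)

# A primed basis obtained by rotating the standard (unprimed) one;
# column i of e_p holds the unprimed components of e'_i.
e_p, _ = np.linalg.qr(rng.standard_normal((3, 3)))
e_p *= np.sign(np.linalg.det(e_p))        # make the triad right-handed

# (1.26): R_ij = e'_i · e_j; with e_j the standard basis, this is e_p[j, i].
R = e_p.T

a = rng.standard_normal(3)                # components [a] in the unprimed basis
a_p = e_p.T @ a                           # components [a]':  a'_i = a · e'_i
assert np.allclose(a, R.T @ a_p)          # (1.27a): [a] = [R]^T [a]'

A = rng.standard_normal((3, 3))           # matrix [A] in the unprimed basis
A_p = e_p.T @ A @ e_p                     # matrix [A]':  A'_ij = e'_i · A · e'_j
assert np.allclose(A, R.T @ A_p @ R)      # (1.27b): [A] = [R]^T [A]' [R]
```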
