Module 2
Linear Algebra
Linear Equation
• An equation in which each variable’s highest power is 1
• x + 4 = 9
• y + 5 = 0
• In general form:
• a1x1 + a2x2 + a3x3 + ... + anxn = c
System of Linear Equation
A system (group) of two or more linear equations involving the same variables is called a system of linear equations.
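A system of linear equations can be solved numerically. A minimal sketch using NumPy (the example system here is illustrative, not from the slides):

```python
import numpy as np

# Hypothetical system:  x + 4y = 9,  2x - y = 0
A = np.array([[1.0, 4.0],
              [2.0, -1.0]])   # coefficient matrix
b = np.array([9.0, 0.0])      # constants

x = np.linalg.solve(A, b)     # unique solution since det(A) != 0
print(x)                      # [1. 2.]  ->  x = 1, y = 2
```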
Vectors vs Scalars
A quantity that has magnitude but no particular direction is
described as a scalar. A quantity that has magnitude and acts in a
particular direction is described as a vector.
V <2,3>
V <1,2>
Tensors
• ML generalization of vectors and matrices to any number of dimensions
• Scalar (dimension 0): magnitude only, e.g. x
• Vector (dimension 1): array, e.g. [x1, x2, x3]
• Matrix (dimension 2): flat table
• 3-tensor (dimension 3): 3D table, i.e. a cube
• n-tensor (dimension n): higher dimensions
Vectors
• One-dimensional array of numbers
• Denoted in lowercase, italic, bold
• Elements are arranged in order, so each element can be accessed by its index
• Elements are scalars
• Represents a point in space:
• A vector of length two represents a location in a 2D plane
• A vector of length three represents a location in a 3D cube
• A vector of length n represents a location in an n-dimensional tensor
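The bullets above map directly onto NumPy's 1-D arrays; a small sketch (values are illustrative):

```python
import numpy as np

v = np.array([2.0, 3.0, 5.0])  # a 1-D array: a length-3 vector

print(v[0])     # elements are scalars, accessed by index -> 2.0
print(v.ndim)   # 1: a single axis (one "dimension" in the tensor sense)
print(v.shape)  # (3,): a point in 3-D space
```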
Vector vs Point
P (2,3)
V <1,2>
• A Point has position in space. The only characteristic
that distinguishes one point from another is its position.
A Vector has both magnitude and direction, but no
fixed position in space. Geometrically, we draw points
as dots and vectors as line segments with arrows.
• Vectors represent a magnitude and direction from the origin
Norms
• A norm measures the size (magnitude) of a vector
• Common vector norms: the L1 norm (sum of absolute values), the L2 or Euclidean norm (square root of the sum of squares), and the max (L-infinity) norm (largest absolute value)
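The common vector norms can be computed with NumPy's `np.linalg.norm` (example vector is illustrative):

```python
import numpy as np

v = np.array([3.0, -4.0])

l1 = np.linalg.norm(v, 1)         # L1 norm: |3| + |-4| = 7
l2 = np.linalg.norm(v)            # L2 (Euclidean) norm: sqrt(9 + 16) = 5
linf = np.linalg.norm(v, np.inf)  # max norm: max(|3|, |4|) = 4
```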
Basis vectors
• Basis vectors can be scaled and combined to represent any vector in a given vector space
• i = (1, 0)
• j = (0, 1)
• Example of scaling and combining:
• v = 2i + 3j
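A quick NumPy check of the combination above, building v = 2i + 3j from the standard basis:

```python
import numpy as np

i = np.array([1.0, 0.0])  # standard basis vector i
j = np.array([0.0, 1.0])  # standard basis vector j

v = 2 * i + 3 * j         # scale each basis vector, then add
print(v)                  # [2. 3.]
```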
Orthogonal vectors
• x and y are orthogonal vectors if x^T y = 0, i.e. x . y = |x||y| cos 90° = 0
• They are at 90 degrees to each other (with non-zero norms)
• An n-dimensional space has at most n mutually orthogonal vectors
• Orthonormal vectors are orthogonal and all have unit norm
• Basis vectors are an example
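Orthogonality is easy to verify with a dot product; a sketch with illustrative vectors:

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([-2.0, 1.0])

assert np.dot(x, y) == 0          # orthogonal: dot product is zero

# Orthonormal versions: orthogonal AND unit norm
x_hat = x / np.linalg.norm(x)
y_hat = y / np.linalg.norm(y)
print(np.linalg.norm(x_hat))      # 1.0
```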
Types of Vectors
• Parallel vector
• Orthogonal vector
• Orthonormal vector
• Like vector / unlike vector
• Equal vector
• Collinear vector: lies along the same line as another vector (same or opposite direction)
• Unit vector
• Position vector: starts at the origin
• Zero vector: magnitude zero
• https://www.geeksforgeeks.org/types-of-vectors/amp/
Unit Vector
• A unit vector in a given direction is a vector with magnitude one in that direction. It is used to represent the direction of a vector.
• a = [3, 4]
• |a| = 5
• a_x = 3/5
• a_y = 4/5
• ||â|| = 1
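The normalization above, done in NumPy: divide the vector by its own norm to get a unit vector.

```python
import numpy as np

a = np.array([3.0, 4.0])
norm = np.linalg.norm(a)   # |a| = 5
a_hat = a / norm           # unit vector in the direction of a

print(a_hat)               # [0.6 0.8]  ->  (3/5, 4/5)
print(np.linalg.norm(a_hat))  # 1.0
```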
Position Vector
• A vector that starts from the origin (O) is called a position vector.
• A = 2i + 3j + k
• ||A|| = sqrt(14)
• Its unit vector A / sqrt(14) has magnitude 1
Vector Operation
Addition and Subtraction
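Vector addition and subtraction are element-wise; a minimal NumPy sketch with illustrative values:

```python
import numpy as np

a = np.array([2.0, 3.0])
b = np.array([1.0, 2.0])

print(a + b)   # element-wise sum: [3. 5.]
print(a - b)   # element-wise difference: [1. 1.]
```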
Vector Operation (Dot Product)
Output is a scalar
• a . b = |a||b| cos(theta)
• a . b = a1b1 + a2b2 + ... + anbn
• Example: (2, 2, -1) . (5, -3, 2) = 10 - 6 - 2 = 2
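The slide's example, computed with NumPy's `np.dot`:

```python
import numpy as np

a = np.array([2.0, 2.0, -1.0])
b = np.array([5.0, -3.0, 2.0])

d = np.dot(a, b)   # 2*5 + 2*(-3) + (-1)*2 = 2
print(d)           # 2.0 (a scalar, not a vector)
```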
Vector Operation (Cross Product)
Output is a vector
• a x b = |a||b| sin(theta) n
• n denotes the direction: a unit vector perpendicular to both a and b
• Example 1: a = <-4,3,0>, b = <2,0,0>
  |a| = 5, |b| = 2, |a x b| = 6
  theta = sin^-1(6/10) = 36.87°
• Example 2: a = (3,-3,1), b = (4,9,2)
  a x b = determinant of the matrix with rows (i, j, k), (3, -3, 1), (4, 9, 2)
  = -15i - 2j + 39k
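Example 2 above, checked with NumPy's `np.cross`:

```python
import numpy as np

a = np.array([3.0, -3.0, 1.0])
b = np.array([4.0, 9.0, 2.0])

c = np.cross(a, b)   # vector perpendicular to both a and b
print(c)             # [-15.  -2.  39.]  ->  -15i - 2j + 39k
```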
Vector Space
Eigenvector and Eigenvalue
• For a square matrix A, a non-zero vector v is an eigenvector with eigenvalue lambda if A v = lambda v
Issues with ML
• Inadequate training data / poor data quality: data plays a vital role in machine
learning, so too little data or low-quality data degrades the model.
• Overfitting: when a machine learning model fits its training data too closely, it starts
capturing noise and inaccuracies in the training data set. This negatively affects the performance of
the model on new data.
• Methods to reduce overfitting:
• Increase training data in a dataset.
• Reduce model complexity by selecting a model with fewer parameters
• Ridge Regularization and Lasso Regularization
• Early stopping during the training phase
• Reduce the noise
• Reduce the number of attributes in training data.
• Constraining the model.
Matrix
• A rectangular array of numbers arranged in rows and columns; an m x n matrix has m rows and n columns
Types of matrix
• Diagonal Matrix
• Zero matrix
• Upper triangular
• Lower triangular
• Identity matrix
Matrix Algebra
• Two matrices are said to be equal if their dimensions are equal and all corresponding
elements are equal.
Matrix Addition and Subtraction
• The addition or subtraction of two matrices A and B of the same size yields a matrix C of
the same size
• Cij = Aij +/- Bij
• It follows the commutative law and the associative law (for addition)
• Matrices of different sizes cannot be added or subtracted
Scalar Multiplication
• A matrix can be multiplied by a scalar (constant); every element is multiplied by that scalar
Matrix Multiplication
• Necessary condition: the number of columns of A must equal the number of rows of B; an (m x n) matrix times an (n x p) matrix gives an (m x p) matrix
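A sketch of the dimension rule with NumPy's `@` operator (matrix values are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # 2 x 2
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])   # 2 x 2: columns of A == rows of B

C = A @ B                    # matrix product (also np.matmul(A, B))
# C = [[1*5 + 2*7, 1*6 + 2*8],
#      [3*5 + 4*7, 3*6 + 4*8]] = [[19, 22], [43, 50]]
print(C)
```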
Transpose of a Matrix
• Obtained by interchanging rows and columns: (A^T)ij = Aji
Power of a Matrix
• A^n is a square matrix A multiplied by itself n times
Determinant of a Matrix
• Every square matrix has a determinant
• The determinant of a matrix is a single number
Inverse of matrix
• Scalar analogy: if k = 7, then k^-1 = 1/7
• Division of matrices is not defined; matrix inversion is used instead
• The inverse of a square matrix A, if it exists, is the unique matrix A^-1 where
• A A^-1 = identity matrix
• When the determinant of a matrix is zero, the matrix is singular
• A matrix that does not have an inverse is called a singular matrix
• A square matrix that has an inverse is called a non-singular matrix
• Square matrices have inverses except when the determinant is zero
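Inversion in NumPy, with the determinant check described above (the matrix is an illustrative non-singular example):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

det = np.linalg.det(A)        # 4*6 - 7*2 = 10, non-zero -> invertible
assert det != 0

A_inv = np.linalg.inv(A)
print(A @ A_inv)              # the 2x2 identity matrix (up to rounding)
```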
Matrix Rank
• The rank is how many of the rows are "unique": not made of other rows. (The same holds for columns.)
• The rank tells us a lot about the matrix.
• It is useful in letting us know if we have a chance of solving a system of linear equations: when the rank
equals the number of variables we may be able to find a unique solution.
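A rank computation with NumPy, using a matrix whose second row is a multiple of the first (illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row = 2 * first row: not "unique"

r = np.linalg.matrix_rank(A)
print(r)                     # 1: only one linearly independent row
```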
Positive Definite Matrix
A square matrix A is positive definite (PD) if it is symmetric and
x^T A x > 0 for all non-zero x
In other words,
x^T A x = 0 only if x = 0
OR
A symmetric matrix A whose eigenvalues are all positive is called positive definite
OR
All principal minors of A are positive.
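The eigenvalue criterion above gives a simple numerical PD test; a sketch on an illustrative symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])           # symmetric

vals = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix
is_pd = np.allclose(A, A.T) and np.all(vals > 0)

print(vals)    # [1. 3.]: all positive
print(is_pd)   # True: A is positive definite
```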
Use of SPD
• There is a vector z.
• This z will have a certain direction.
• When we multiply matrix M with z, z no longer points in the same direction. The direction of z is
transformed by M.
• If M is a positive definite matrix, the new direction will always point in “the same general” direction
(here “the same general” means less than π/2 angle change). It won’t reverse (= more than 90-degree
angle change) the original direction.
Reading : https://towardsdatascience.com/what-is-a-positive-definite-matrix-181e24085abd
Diagonalization
• A = X D X^-1
• Taking a matrix and writing it as a product of matrices, one of which has values only along its diagonal
• The matrix X is said to diagonalize A
• Works only if:
• A has unique eigenvalues, or
• If there are duplicate eigenvalues, their eigenvectors are linearly independent
• D is made of the eigenvalues
• X is made of the eigenvectors
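The factorization A = X D X^-1 can be checked directly with NumPy (the matrix below is an illustrative example with distinct eigenvalues, so it is diagonalizable):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

vals, X = np.linalg.eig(A)   # eigenvalues and eigenvector columns
D = np.diag(vals)            # diagonal matrix of eigenvalues

# Reconstruct A from its eigendecomposition: A = X D X^-1
assert np.allclose(A, X @ D @ np.linalg.inv(X))
print(sorted(vals))          # [2.0, 5.0]: distinct eigenvalues
```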
SVD
Applications of SVD:
• Image compression
• Numeric market data analysis
• Latent Semantic Indexing (LSI) for web document search
• Political spectrum analysis
• 3D image deformation using moving least-squares
• SVD and PCA for gene expression data
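A short NumPy sketch of the SVD itself, including the truncated (low-rank) reconstruction that underlies image compression (the matrix is an illustrative example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The factors exactly reconstruct A
assert np.allclose(A, U @ np.diag(s) @ Vt)

# Rank-1 approximation: keep only the largest singular value.
# Storing U[:,0], s[0], Vt[0] instead of A is the compression idea.
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
print(A1.shape)   # (3, 2), same shape as A but rank 1
```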