Meeting 1b: Review of Matrix Algebra


Matrix Algebra & Random Vector

Matrices & Vectors


A matrix is a system of numbers with n rows and p columns:

    A(n×p) = A = [ a11  a12  …  a1p
                   a21  a22  …  a2p
                    ⋮    ⋮   ⋱   ⋮
                   an1  an2  …  anp ]

A vector is a matrix in which one dimension has size one.

    Column vector:  y(n×1) = y = (y1, y2, …, yn)′

    Row vector:     x(1×p) = x = (x1, x2, …, xp)

2 Matrix algebra & Random Vector


Length of Vector & Correlation

    x(n×1) = x = (x1, x2, …, xn)′

    Lx = √(x′x) = √(x1² + x2² + ⋯ + xn²)

The standard deviation of x about the origin equals its Euclidean distance to the origin.
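As an illustration, a minimal Python sketch of the length formula Lx = √(x′x); the helper name `vector_length` is ours, not from the slides:

```python
import math

def vector_length(x):
    # L_x = sqrt(x'x) = Euclidean distance from the origin
    return math.sqrt(sum(xi * xi for xi in x))

print(vector_length([3, 4]))  # 5.0
```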



    cos θ = cos(θ2 − θ1)
          = cos θ2 cos θ1 + sin θ2 sin θ1
          = (y1/Ly)(x1/Lx) + (y2/Ly)(x2/Lx)
          = (x1 y1 + x2 y2) / (Lx Ly)

More generally, let x and y be column vectors of size k, and let x′ be the transpose of the vector x. The product of the two vectors is:

    x′y = x1 y1 + ⋯ + xk yk

Then the correlation between x and y is:

    cos θ = x′y / (Lx Ly) = x′y / (√(x′x) √(y′y))

cos θ = 0 if x′y = 0, so x and y are perpendicular if x′y = 0.
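A short Python sketch of the cosine/correlation formula; the helper names `dot` and `cos_theta` are ours:

```python
import math

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def cos_theta(x, y):
    # cos(theta) = x'y / (L_x * L_y)
    return dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y)))

print(cos_theta([1, 0], [0, 1]))  # 0.0, perpendicular vectors
print(cos_theta([1, 2], [2, 4]))  # ~1.0, parallel vectors
```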



Orthogonal & Orthonormal Vectors
• We say that two vectors are orthogonal if they are perpendicular to each other.
• We say that a set of vectors {x1, x2, …, xk} is mutually orthogonal if every pair of vectors is orthogonal: xi′xj = 0 for all i ≠ j.
• Example: the set of vectors (1, 0, −1)′, (1, √2, 1)′, (1, −√2, 1)′ is mutually orthogonal, since

    (1)(1) + (0)(√2) + (−1)(1) = 0;
    (1)(1) + (0)(−√2) + (−1)(1) = 0;
    (1)(1) + (√2)(−√2) + (1)(1) = 0

• A set of vectors S is orthonormal if every vector in S has magnitude 1 and the vectors are mutually orthogonal.
• If z′z = 1, the vector z is said to be normalized (has magnitude 1).
• A vector x can always be normalized by dividing by its length. Thus

    z = (1/√(x′x)) x = x/Lx

is normalized.
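The orthogonality and normalization checks above can be verified numerically; this Python sketch uses the same three example vectors (the helper `dot` is ours):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

vectors = [
    (1, 0, -1),
    (1, math.sqrt(2), 1),
    (1, -math.sqrt(2), 1),
]

# Every pair is orthogonal: x_i' x_j = 0 for i != j
for i in range(len(vectors)):
    for j in range(i + 1, len(vectors)):
        assert abs(dot(vectors[i], vectors[j])) < 1e-12

# Dividing each vector by its length gives z with z'z = 1
for v in vectors:
    L = math.sqrt(dot(v, v))
    z = [vi / L for vi in v]
    assert abs(dot(z, z) - 1.0) < 1e-12

print("mutually orthogonal; normalized versions have unit length")
```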
Linearly dependent & Linearly independent
• A set of vectors 𝐱1 , 𝐱 2 , … , 𝐱 𝑘 is said to be linearly dependent if there exist
constants 𝑐1 , 𝑐2 , … , 𝑐𝑘 not all zero, such that

𝑐1 𝐱1 + 𝑐2 𝐱 2 + ⋯ + 𝑐𝑘 𝐱 𝑘 = 𝟎

• Linear dependence implies that at least one vector in the set can be written as a
linear combination of the other vectors.

• Vectors of the same dimension that are not linearly dependent are said to be
linearly independent
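A numpy sketch of testing linear dependence by stacking the vectors as columns and comparing the matrix rank to the number of vectors; the example vectors here are our own (x2 = 2·x1, so c1 = 2, c2 = −1, c3 = 0 works):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([2.0, 4.0, 6.0])  # 2 * x1, so the set is dependent
x3 = np.array([1.0, 0.0, 0.0])

# rank < number of vectors  <=>  the vectors are linearly dependent
M = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(M))  # 2
```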

Projection & Length of Projection
Projection of vector x on y:

    (x′y / y′y) y = (x′y / Ly) (1/Ly) y

Length of projection of vector x on y:

    x′y / Ly = Lx (x′y / (Lx Ly)) = Lx cos θ
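A small Python sketch of the projection formulas; the helpers `dot` and `project` are ours:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def project(x, y):
    # Projection of x on y: (x'y / y'y) * y
    c = dot(x, y) / dot(y, y)
    return [c * yi for yi in y]

x = [3.0, 4.0]
y = [2.0, 0.0]
print(project(x, y))  # [3.0, 0.0]

# Length of the projection: x'y / L_y = L_x * cos(theta)
print(dot(x, y) / math.sqrt(dot(y, y)))  # 3.0
```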



Matrix Operations
Addition and multiplication of a matrix by a scalar:
• (A + B) + C = A + (B + C) → associative
• A + B = B + A → commutative
• c(A + B) = cA + cB → distributive
• (c + d)A = cA + dA
• (A + B)′ = A′ + B′
• (cd)A = c(dA)
• (cA)′ = cA′

Multiplication:
• c(AB) = (cA)B
• A(BC) = (AB)C
• A(B + C) = AB + AC
• (B + C)A = BA + CA
• (AB)′ = B′A′
• AB ≠ BA in general
• For xj such that Axj is defined:

    Σ_{j=1}^k Axj = A Σ_{j=1}^k xj
    Σ_{j=1}^k (Axj)(Axj)′ = A (Σ_{j=1}^k xj xj′) A′

Trace of a matrix, A(k×k), B(k×k):
• tr(A) = Σ_{i=1}^k aii
• tr(cA) = c tr(A)
• tr(A ± B) = tr(A) ± tr(B)
• tr(AB) = tr(BA)
• tr(A′A) = tr(AA′) = Σ_{i=1}^k Σ_{j=1}^k aij²
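The trace identities can be checked numerically; a numpy sketch with two small example matrices of our choosing:

```python
import numpy as np

A = np.array([[2.0, 3.0], [1.0, 4.0]])
B = np.array([[1.0, 0.0], [5.0, 2.0]])

print(np.trace(A))                                  # 6.0, sum of the diagonal
print(np.trace(A @ B) == np.trace(B @ A))           # True: tr(AB) = tr(BA)
print(np.isclose(np.trace(A.T @ A), (A ** 2).sum()))  # True: tr(A'A) = sum of a_ij^2
```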
Rank of Matrix
• The row rank of a matrix is the maximum number of linearly independent rows,
considered as vectors (that is, row vectors).
• The column rank of a matrix is the rank of its set of columns, considered as
vectors.
• The row rank and the column rank of a matrix are equal. Thus, the rank of a
matrix is either the row rank or the column rank.
• Example:

    A = [ 1  1   1
          2  5  −1
          0  1  −1 ]

The rows of A, written as vectors, were shown to be linearly dependent. Note that the column rank of A is also 2, since

    −2 (1, 2, 0)′ + (1, 5, 1)′ + (1, −1, −1)′ = (0, 0, 0)′

but columns 1 and 2 are linearly independent. So the rank of A is r(A) = 2.
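The same example checked with numpy:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [2, 5, -1],
              [0, 1, -1]], dtype=float)

# -2*col1 + col2 + col3 = 0, so the columns are dependent and r(A) = 2
print(-2 * A[:, 0] + A[:, 1] + A[:, 2])  # [0. 0. 0.]
print(np.linalg.matrix_rank(A))          # 2
```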
Nonsingularity
• A square matrix 𝐀 is nonsingular if
𝐀 𝑘×𝑘 𝐱 𝑘×1 = 𝟎 𝑘×1 implies that 𝐱 𝑘×1 = 𝟎 𝑘×1
• Equivalently, a square matrix is nonsingular if its rank is equal to the number of
rows (or columns) it has.

    Ax = x1 a1 + x2 a2 + ⋯ + xk ak

  where ai = (a1i, a2i, …, aki)′ is the i-th column vector, i = 1, …, k.
• The condition of nonsingularity is just the statement that the columns of A are linearly independent.
• There exists an inverse matrix A⁻¹ such that AA⁻¹ = A⁻¹A = I.
• The determinant of A is not zero: |A| ≠ 0.
• If a matrix fails to be nonsingular, it is called singular.
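These equivalent conditions can be verified for a small example matrix (our own choice) with numpy:

```python
import numpy as np

A = np.array([[2.0, 3.0], [1.0, 4.0]])

# A is nonsingular: |A| != 0, full rank, and an inverse exists
print(np.linalg.det(A))                   # ~5.0
print(np.linalg.matrix_rank(A))           # 2, equal to the number of rows

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
```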



Eigenvalue, Eigenvector & Normalized Eigenvector
(concept fundamental to multivariate statistical analysis)

• Principal components are linear combinations of a set of variables, weighted by the eigenvectors.

• The eigenvalues represent the proportion of variance accounted for by specific principal components.

• Each principal component is orthogonal to the others, producing a set of uncorrelated variables that may be used for regression purposes.



Eigenvalue, Eigenvector & Normalized Eigenvector
• A square matrix A is said to have an eigenvalue λ, with corresponding eigenvector x ≠ 0, if

    Ax = λx

  Ordinarily, we normalize x so that it has unit length; it is convenient to denote the normalized eigenvector by e = x/√(x′x). Thus e′e = 1.
• The eigenvalues are obtained from the characteristic equation:

    |A − λI| = 0



Example. Using R
Find the eigenvalues and eigenvectors of this matrix:

    A = [ 2  3
          1  4 ]

    # making the matrix
    A <- matrix(c(2, 1, 3, 4), 2, 2)
    A
    ##      [,1] [,2]
    ## [1,]    2    3
    ## [2,]    1    4

    # calculate eigenvalues & eigenvectors
    eigen(A)
    ## eigen() decomposition
    ## $values
    ## [1] 5 1
    ##
    ## $vectors
    ##            [,1]       [,2]
    ## [1,] -0.7071068 -0.9486833
    ## [2,] -0.7071068  0.3162278

The eigenvalues are λ1 = 5 and λ2 = 1, and the columns of $vectors are the corresponding normalized eigenvectors e1 and e2.



Spectral Decomposition & Positive Definite
The corresponding eigenvectors 𝒆1 , 𝒆2 , … , 𝒆𝑘 are the (normalized) solutions of the
equations 𝐀𝒆𝒊 = 𝜆𝑖 𝒆𝒊 , 𝑖 = 1, … , 𝑘,
The spectral decomposition of A is then

    A = Σ_{i=1}^k λi ei ei′
• Using the spectral decomposition, we can easily show that a 𝑘 × 𝑘 symmetric
matrix 𝐀 is a positive definite matrix if and only if every eigenvalue of 𝐀 is
positive, ∀ 𝜆𝑖 > 0.
• 𝐀 is a nonnegative definite matrix if and only if all of its eigenvalues are greater
than or equal to zero, ∀ 𝜆𝑖 ≥ 0 with the number of positive eigenvalues equal to
the rank of the matrix.
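A numpy sketch of the spectral decomposition for a symmetric example matrix of our choosing, rebuilding A from Σ λi ei ei′ and checking positive definiteness via the eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric

# eigh is for symmetric matrices: real eigenvalues, orthonormal eigenvectors
lam, E = np.linalg.eigh(A)  # columns of E are the normalized eigenvectors e_i

# Spectral decomposition: A = sum_i lambda_i * e_i e_i'
A_rebuilt = sum(lam[i] * np.outer(E[:, i], E[:, i]) for i in range(len(lam)))
print(np.allclose(A, A_rebuilt))  # True

# A is positive definite <=> every eigenvalue is positive
print(bool(np.all(lam > 0)))      # True
```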



Quadratic Form & Positive Definite
• Let x be a k × 1 vector and A a k × k symmetric matrix; then the quadratic form is defined by:

    x′Ax
• The symmetric matrix A is said to be positive definite if 𝐱 ′ 𝐀𝐱 > 0 for all
possible vectors x (except 𝐱 = 𝟎).
• Similarly, A is positive semi definite (nonnegative definite) if 𝐱 ′ 𝐀𝐱 ≥ 0 for all
possible vectors x.
• The diagonal elements 𝑎𝑖𝑖 of a positive definite matrix are positive.
• If A is positive definite, its determinant is positive.
• A positive definite quadratic form can be interpreted as a squared distance (from x to the origin):

    0 < distance² = x′Ax

• The square of the distance from x to an arbitrary fixed point μ = (μ1, μ2, …, μk)′ is given by the general expression:

    (x − μ)′ A (x − μ)
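A small numpy sketch of the quadratic form; the matrix, points, and helper name `quad_form` are our own illustrative choices:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric, eigenvalues 1 and 3 > 0

def quad_form(x, A, mu=None):
    # (x - mu)' A (x - mu); mu defaults to the origin
    if mu is not None:
        x = x - mu
    return float(x @ A @ x)

x = np.array([1.0, -1.0])
print(quad_form(x, A))  # 2.0: positive for any nonzero x, so A is positive definite

mu = np.array([1.0, 1.0])
print(quad_form(x, A, mu) > 0)  # True: squared distance from x to mu
```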
Maximization



Square Root Matrix

The spectral decomposition also allows us to express the inverse of a symmetric nonsingular matrix:

    A⁻¹ = P Λ⁻¹ P′

where P = [e1, e2, …, ek] is the matrix of normalized eigenvectors and Λ = diag(λ1, λ2, …, λk).
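A numpy sketch of this construction for a symmetric positive definite example matrix of our choosing, also showing the analogous square root matrix A^(1/2) = P Λ^(1/2) P′ that gives the slide its title:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
lam, P = np.linalg.eigh(A)              # A = P diag(lam) P'

# Inverse from the spectral decomposition: A^{-1} = P Lambda^{-1} P'
A_inv = P @ np.diag(1.0 / lam) @ P.T
print(np.allclose(A_inv, np.linalg.inv(A)))  # True

# The same idea gives a square root matrix: A^{1/2} = P Lambda^{1/2} P'
A_half = P @ np.diag(np.sqrt(lam)) @ P.T
print(np.allclose(A_half @ A_half, A))       # True
```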



Square Root Matrix

