Linear Dependence Theorems


MATH 304

Linear Algebra
Lecture 10:
Linear independence.
Basis of a vector space.
Linear independence
Definition. Let V be a vector space. Vectors
v1 , v2 , . . . , vk ∈ V are called linearly dependent if
they satisfy a relation
r1 v1 + r2 v2 + · · · + rk vk = 0,
where the coefficients r1 , . . . , rk ∈ R are not all
equal to zero. Otherwise vectors v1 , v2 , . . . , vk are
called linearly independent. That is,
r1 v1 + r2 v2 + · · · + rk vk = 0 =⇒ r1 = · · · = rk = 0.
An infinite set S ⊂ V is linearly dependent if
there are some linearly dependent vectors v1 , . . . , vk ∈ S.
Otherwise S is linearly independent.
Examples of linear independence
• Vectors e1 = (1, 0, 0), e2 = (0, 1, 0), and
e3 = (0, 0, 1) in R3 .
xe1 + y e2 + ze3 = 0 =⇒ (x, y , z) = 0
=⇒ x = y = z = 0
   
• Matrices E11 = [1 0; 0 0], E12 = [0 1; 0 0],
E21 = [0 0; 1 0], and E22 = [0 0; 0 1].
aE11 + bE12 + cE21 + dE22 = O =⇒ [a b; c d] = O
=⇒ a = b = c = d = 0
Examples of linear independence
• Polynomials 1, x, x 2 , . . . , x n .
a0 + a1 x + a2 x 2 + · · · + an x n = 0 identically
=⇒ ai = 0 for 0 ≤ i ≤ n
• The infinite set {1, x, x 2 , . . . , x n , . . . }.
• Polynomials p1 (x) = 1, p2 (x) = x − 1, and p3 (x) = (x − 1)2 .
a1 p1 (x) + a2 p2 (x) + a3 p3 (x) = a1 + a2 (x − 1) + a3 (x − 1)2
= (a1 − a2 + a3 ) + (a2 − 2a3 )x + a3 x 2 .
Hence a1 p1 (x) + a2 p2 (x) + a3 p3 (x) = 0 identically
=⇒ a1 − a2 + a3 = a2 − 2a3 = a3 = 0
=⇒ a1 = a2 = a3 = 0
Problem. Let v1 = (1, 2, 0), v2 = (3, 1, 1), and
v3 = (4, −7, 3). Determine whether the vectors
v1 , v2 , v3 are linearly independent.
We have to check whether there exist r1 , r2 , r3 ∈ R, not all
zero, such that r1 v1 + r2 v2 + r3 v3 = 0.
This vector equation is equivalent to the system
r1 + 3r2 + 4r3 = 0
2r1 + r2 − 7r3 = 0
r2 + 3r3 = 0
with augmented matrix
[1 3 4 | 0]
[2 1 −7 | 0]
[0 1 3 | 0]
The vectors v1 , v2 , v3 are linearly dependent if and
only if the matrix A = (v1 , v2 , v3 ) is singular.
We obtain that det A = 0, so the vectors are linearly dependent.
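This conclusion can be double-checked numerically; a minimal sketch using standard `numpy.linalg` calls:

```python
import numpy as np

# Columns of A are v1 = (1,2,0), v2 = (3,1,1), v3 = (4,-7,3).
A = np.array([[1.0, 3.0, 4.0],
              [2.0, 1.0, -7.0],
              [0.0, 1.0, 3.0]])

det_A = np.linalg.det(A)           # ~0: A is singular
rank_A = np.linalg.matrix_rank(A)  # 2 < 3: the columns are dependent

# Back substitution gives an explicit relation: 5 v1 - 3 v2 + v3 = 0.
assert np.allclose(A @ np.array([5.0, -3.0, 1.0]), 0.0)
```

The explicit coefficients (5, −3, 1) come from back substitution in the reduced system; any nonzero multiple of them works as well.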
Theorem The following conditions are equivalent:
(i) vectors v1 , . . . , vk are linearly dependent;
(ii) one of vectors v1 , . . . , vk is a linear combination
of the other k − 1 vectors.
Proof: (i) =⇒ (ii) Suppose that
r1 v1 + r2 v2 + · · · + rk vk = 0,
where ri ≠ 0 for some 1 ≤ i ≤ k. Then
vi = −(r1 /ri )v1 − · · · − (ri−1 /ri )vi−1 − (ri+1 /ri )vi+1 − · · · − (rk /ri )vk .
(ii) =⇒ (i) Suppose that
vi = s1 v1 + · · · + si−1 vi−1 + si+1 vi+1 + · · · + sk vk
for some scalars sj . Then
s1 v1 + · · · + si−1 vi−1 − vi + si+1 vi+1 + · · · + sk vk = 0.
Theorem Vectors v1 , v2 , . . . , vm ∈ Rn are linearly
dependent whenever m > n (i.e., the number of
coordinates is less than the number of vectors).
Proof: Let vj = (a1j , a2j , . . . , anj ) for j = 1, 2, . . . , m.
Then the vector equality t1 v1 + t2 v2 + · · · + tm vm = 0
is equivalent to the system
a11 t1 + a12 t2 + · · · + a1m tm = 0,
a21 t1 + a22 t2 + · · · + a2m tm = 0,
· · · · · · · · ·
an1 t1 + an2 t2 + · · · + anm tm = 0.
Note that the vectors v1 , v2 , . . . , vm are the columns of the
matrix (aij ). The number of leading entries in the row echelon
form is at most n. If m > n then there are free variables, so
the zero solution is not unique. Hence there is a nonzero
solution, and the vectors are linearly dependent.
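The pivot-counting argument can be illustrated numerically; a sketch with NumPy, where the four sample vectors in R3 are chosen only for illustration:

```python
import numpy as np

# m = 4 vectors in R^n with n = 3: the columns of V.
V = np.array([[1.0, 1.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0, 4.0]])

# The rank (number of leading entries) is at most n = 3 < m = 4,
# so the homogeneous system V t = 0 has free variables.
rank = np.linalg.matrix_rank(V)
assert rank < V.shape[1]
```

Since the rank is strictly less than the number of columns, the nullspace is nontrivial and the four columns must be linearly dependent, exactly as the theorem predicts.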
Example. Consider vectors v1 = (1, −1, 1),
v2 = (1, 0, 0), v3 = (1, 1, 1), and v4 = (1, 2, 4) in R3 .
Two vectors are linearly dependent if and only if
they are parallel. Hence v1 and v2 are linearly
independent.
Vectors v1 , v2 , v3 are linearly independent if and
only if the matrix A = (v1 , v2 , v3 ) is invertible.
det A = |1 1 1; −1 0 1; 1 0 1| = − |−1 1; 1 1| = −(−2) = 2 ≠ 0
(expanding along the second column).
Therefore v1 , v2 , v3 are linearly independent.
Four vectors in R3 are always linearly dependent.
Thus v1 , v2 , v3 , v4 are linearly dependent.
Problem. Show that functions e x , e 2x , and e 3x
are linearly independent in C ∞ (R).
Suppose that ae x + be 2x + ce 3x = 0 for all x ∈ R, where
a, b, c are constants. We have to show that a = b = c = 0.
Differentiate this identity twice:
ae x + be 2x + ce 3x = 0,
ae x + 2be 2x + 3ce 3x = 0,
ae x + 4be 2x + 9ce 3x = 0.
It follows that A(x)v = 0, where
A(x) = [e x e 2x e 3x ; e x 2e 2x 3e 3x ; e x 4e 2x 9e 3x ],
v = (a, b, c)T .
det A(x) = |e x e 2x e 3x ; e x 2e 2x 3e 3x ; e x 4e 2x 9e 3x |
= e x e 2x e 3x |1 1 1; 1 2 3; 1 4 9| (factoring e x , e 2x , e 3x from the columns)
= e 6x |1 1 1; 0 1 2; 0 3 8| = e 6x |1 2; 3 8| = 2e 6x ≠ 0.
Since the matrix A(x) is invertible, we obtain
A(x)v = 0 =⇒ v = 0 =⇒ a = b = c = 0
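A numerical spot check of this determinant; a sketch with NumPy that verifies det A(x) = 2e 6x at a few sample points:

```python
import numpy as np

def A(x):
    # Rows: the functions e^x, e^(2x), e^(3x) and their first two derivatives.
    return np.array([[np.exp(x),   np.exp(2*x),   np.exp(3*x)],
                     [np.exp(x), 2*np.exp(2*x), 3*np.exp(3*x)],
                     [np.exp(x), 4*np.exp(2*x), 9*np.exp(3*x)]])

for x in (0.0, 0.5, 1.0):
    # det A(x) = 2 e^(6x), in particular nonzero for every x.
    assert np.isclose(np.linalg.det(A(x)), 2*np.exp(6*x))
```

At x = 0 the matrix reduces to the Vandermonde matrix [1 1 1; 1 2 3; 1 4 9], whose determinant is 2.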
Wronskian
Let f1 , f2 , . . . , fn be smooth functions on an interval
[a, b]. The Wronskian W [f1 , f2 , . . . , fn ] is a
function on [a, b] defined by
W [f1 , f2 , . . . , fn ](x) = |f1 (x) f2 (x) · · · fn (x); f1′ (x) f2′ (x) · · · fn′ (x); . . . ; f1(n−1) (x) f2(n−1) (x) · · · fn(n−1) (x)|,
the determinant whose rows are the functions and their first n − 1 derivatives.
Theorem If W [f1 , f2 , . . . , fn ](x0 ) ≠ 0 for some
x0 ∈ [a, b] then the functions f1 , f2 , . . . , fn are
linearly independent in C [a, b].
Theorem 1 Let λ1 , λ2 , . . . , λk be distinct real
numbers. Then the functions e λ1 x , e λ2 x , . . . , e λk x
are linearly independent.
Theorem 2 The set of functions
{x m e λx | λ ∈ R, m = 0, 1, 2, . . . }
is linearly independent.
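The Wronskian of the three exponentials from the earlier problem can also be computed symbolically; a sketch using SymPy's `wronskian` helper (assuming SymPy is available):

```python
from sympy import exp, simplify, symbols, wronskian

x = symbols('x')

# W[e^x, e^(2x), e^(3x)] is 2 e^(6x), nonzero everywhere, so the
# Wronskian theorem gives linear independence (consistent with Theorem 1).
W = simplify(wronskian([exp(x), exp(2*x), exp(3*x)], x))
assert simplify(W - 2*exp(6*x)) == 0
```

This reproduces the hand computation: the Wronskian never vanishes, so one point x0 with W(x0) ≠ 0 certainly exists.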
Spanning set
Let S be a subset of a vector space V .
Definition. The span of the set S is the smallest
subspace W ⊂ V that contains S. If S is not
empty then W = Span(S) consists of all linear
combinations r1 v1 + r2 v2 + · · · + rk vk such that
v1 , . . . , vk ∈ S and r1 , . . . , rk ∈ R.
We say that the set S spans the subspace W or
that S is a spanning set for W .
Remark. If S1 is a spanning set for a vector space
V and S1 ⊂ S2 ⊂ V , then S2 is also a spanning set
for V .
Basis
Definition. Let V be a vector space. A linearly
independent spanning set for V is called a basis.
Suppose that a set S ⊂ V is a basis for V .
“Spanning set” means that any vector v ∈ V can be
represented as a linear combination
v = r1 v1 + r2 v2 + · · · + rk vk ,
where v1 , . . . , vk are distinct vectors from S and
r1 , . . . , rk ∈ R. “Linearly independent” implies that the above
representation is unique:
v = r1 v1 + r2 v2 + · · · + rk vk = r1′ v1 + r2′ v2 + · · · + rk′ vk
=⇒ (r1 − r1′ )v1 + (r2 − r2′ )v2 + · · · + (rk − rk′ )vk = 0
=⇒ r1 − r1′ = r2 − r2′ = . . . = rk − rk′ = 0
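The uniqueness of coordinates can be checked numerically: when the basis vectors form an invertible matrix, the coordinates are the unique solution of a linear system. A sketch with NumPy, using the basis {v1 , v2 , v3 } from the R3 example later in the lecture and an arbitrary illustrative vector v:

```python
import numpy as np

# Columns are the basis vectors v1 = (1,-1,1), v2 = (1,0,0), v3 = (1,1,1).
B = np.array([[1.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 5.0])  # an arbitrary vector to represent

# Unique coordinates r with r1*v1 + r2*v2 + r3*v3 = v, since B is invertible.
r = np.linalg.solve(B, v)
assert np.allclose(B @ r, v)
```

`np.linalg.solve` would raise an error for a singular B, mirroring the fact that a linearly dependent set cannot give unique coordinates.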
Examples. • Standard basis for Rn :
e1 = (1, 0, 0, . . . , 0, 0), e2 = (0, 1, 0, . . . , 0, 0),. . . ,
en = (0, 0, 0, . . . , 0, 1).
Indeed, (x1 , x2 , . . . , xn ) = x1 e1 + x2 e2 + · · · + xn en .
• Matrices [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], and [0 0; 0 1]
form a basis for M2,2 (R).
[a b; c d] = a [1 0; 0 0] + b [0 1; 0 0] + c [0 0; 1 0] + d [0 0; 0 1].
• Polynomials 1, x, x 2 , . . . , x n−1 form a basis for
Pn = {a0 + a1 x + · · · + an−1 x n−1 : ai ∈ R}.
• The infinite set {1, x, x 2 , . . . , x n , . . . } is a basis
for P, the space of all polynomials.
Bases for Rn
Let v1 , v2 , . . . , vk be vectors in Rn .
Theorem 1 If k < n then the vectors
v1 , v2 , . . . , vk do not span Rn .
Theorem 2 If k > n then the vectors
v1 , v2 , . . . , vk are linearly dependent.
Theorem 3 If k = n then the following conditions
are equivalent:
(i) {v1 , v2 , . . . , vn } is a basis for Rn ;
(ii) {v1 , v2 , . . . , vn } is a spanning set for Rn ;
(iii) {v1 , v2 , . . . , vn } is a linearly independent set.
Example. Consider vectors v1 = (1, −1, 1),
v2 = (1, 0, 0), v3 = (1, 1, 1), and v4 = (1, 2, 4) in R3 .
Vectors v1 and v2 are linearly independent (as they
are not parallel), but they do not span R3 .
Vectors v1 , v2 , v3 are linearly independent since
|1 1 1; −1 0 1; 1 0 1| = − |−1 1; 1 1| = −(−2) = 2 ≠ 0.
Therefore {v1 , v2 , v3 } is a basis for R3 .
Vectors v1 , v2 , v3 , v4 span R3 (because v1 , v2 , v3
already span R3 ), but they are linearly dependent.