
MA 106: Linear Algebra

Lecture 19

Prof. B.V. Limaye


IIT Dharwad

Tuesday, 20 February 2018



Abstract Vector Spaces

All through the last 18 lectures, we have dealt with row


vectors, column vectors and matrices. We have introduced
many interesting concepts like linear independence of vectors,
span of a set of vectors, a subspace of vectors, a basis for a
subspace, the dimension of a subspace, the nullity and the
rank of a matrix, linear transformations induced by matrices,
inner product of two vectors, orthogonality of vectors,
orthonormal basis for a subspace, orthogonal projection onto a
subspace, and so on.
Based on these concepts, we have proved some important
theorems like the Rank-Nullity theorem, the Fundamental
Theorem for Linear Systems, the Spectral Theorem, the
Projection Theorem.



The time has come to take stock of all these concepts and
results, and to ask whether the concepts we have introduced
for row vectors and column vectors can be defined in a more
general set-up, and if so, whether our results involving row
vectors and column vectors remain valid in such a framework.
Let us start from the basic notion of a linear combination. Let
us look for a set such that every linear combination of elements
in that set is also in that set. Here are some examples.
1. All column vectors of length n, that is, Kn×1 .
2. All row vectors of length n, that is, K1×n .
3. All m × n matrices, that is, Km×n .
4. All sequences of real numbers, all bounded sequences of
real numbers, all convergent sequences of real numbers.
5. All sequences of vectors in Rn , all bounded sequences of
vectors in Rn , all convergent sequences of vectors in Rn .
6. All polynomials in the indeterminate x with coefficients in
K, all polynomials of degree at most n.
7. All real-valued functions defined on a set, all bounded
functions defined on a set, all continuous functions defined on
a subset of Rn , all differentiable functions defined on an open
subset of Rn , all integrable functions defined on [a, b], all
double integrable functions defined on a bounded subset of R2 .
8. All solutions (column vectors) of a homogeneous linear
system such as A x = 0; see the sketch after this list.
9. All solutions (row vectors) of a homogeneous linear system
such as y B = 0.
10. All solutions (differentiable functions) of a homogeneous
linear differential equation such as y′(t) = a(t)y(t), t ∈ (a, b),
or more generally, of a homogeneous system of linear
differential equations such as y′(t) = A(t)y(t), t ∈ (a, b),
where y(t) = [y1 (t) · · · yn (t)]^T and A(t) := [ajk (t)].
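
To make Example 8 concrete, here is a minimal sketch in Python
(assuming NumPy and SciPy are available; the matrix A is arbitrary
illustrative data) checking numerically that a linear combination of
solutions of A x = 0 is again a solution.

    import numpy as np
    from scipy.linalg import null_space

    # An arbitrary 2x4 matrix chosen for illustration.
    A = np.array([[1., 2., 0., 1.],
                  [0., 1., 1., 3.]])

    # Columns of N form a basis for the solution set {x : A x = 0}.
    N = null_space(A)

    # Any linear combination of solutions is again a solution.
    x = 2.0 * N[:, 0] - 3.5 * N[:, 1]
    print(np.allclose(A @ x, 0.0))   # True, up to rounding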
Let K denote R or C as usual. A vector space over K is a
nonempty set V along with two algebraic operations, namely
addition (+) and scalar multiplication (·) satisfying the
following properties.
I Closure axioms
1. u + v ∈ V for all u, v ∈ V .
2. α · v ∈ V for all α ∈ K and v ∈ V .
(We shall write αv instead of α · v from now on.)
II Axioms for addition
1. u + v = v + u for all u, v ∈ V . (commutativity)
2. u +(v +w ) = (u +v )+w for all u, v , w ∈ V . (associativity)
3. There is a unique 0 ∈ V such that v + 0 = v for all v ∈ V .
4. For each v ∈ V , there is a unique u ∈ V such that v + u = 0.
(We shall write this element u as −v .)



III Axioms for scalar multiplication
1. α(βv ) = (αβ)v for all α, β ∈ K and v ∈ V .
2. α(u + v ) = αu + αv for all α ∈ K and u, v ∈ V .
3. (α + β)v = αv + βv for all α, β ∈ K and v ∈ V .
4. 1v = v for all v ∈ V .
An element of a vector space is called a vector.
We can easily check that the usual operations of addition and
scalar multiplication on each of the sets in the examples we
have given earlier satisfy the above axioms. Hence each of
them is a vector space.
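
As a small illustration of these operations, here is a sketch in
Python (the names add, scale and the sample functions are our own,
chosen for illustration) modelling the vector space of real-valued
functions on a set, as in Example 7: addition and scalar
multiplication are defined pointwise, and the axioms are inherited
from those of R.

    # Pointwise operations on real-valued functions on a set.
    def add(f, g):
        return lambda x: f(x) + g(x)      # closure under addition

    def scale(alpha, f):
        return lambda x: alpha * f(x)     # closure under scalar multiplication

    zero = lambda x: 0.0                  # the zero vector

    f = lambda x: x ** 2
    g = lambda x: 3.0 * x
    h = add(scale(2.0, f), g)             # the function 2x^2 + 3x
    print(h(1.0))                         # 5.0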
Let V be a vector space (over K), and let n ∈ N. Given
v1 , . . . , vn ∈ V and α1 , . . . , αn ∈ K, the element

α1 v1 + · · · + αn vn

of V is called the linear combination of v1 , . . . , vn with
respective coefficients α1 , . . . , αn .
Let W be a nonempty subset of V . If W satisfies the closure
axioms for the addition and scalar multiplication operations
on V , that is, if v + w ∈ W for all v , w ∈ W and αw ∈ W for
all α ∈ K and w ∈ W , then W is called a subspace of V . Thus
a subspace W of a vector space V is a vector space in its own
right; in particular, 0 ∈ W , and every linear combination of
elements of W belongs to W .
We can easily check which of the sets considered earlier
constitute subspaces of possibly larger vector spaces.
Let now W1 and W2 be subspaces of V . Then W1 ∩ W2 is a
subspace of V ; in fact it is the largest subspace of V which is
contained in both W1 and W2 . On the other hand, W1 ∪ W2 is
not a subspace of V unless W1 ⊂ W2 or W2 ⊂ W1 .
Let W1 + W2 := {w1 + w2 : w1 ∈ W1 and w2 ∈ W2 }. Then
W1 + W2 is a subspace of V ; in fact it is the smallest
subspace of V containing both W1 and W2 .
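
For subspaces of column vectors given by spanning sets, both
constructions can be computed; here is a sketch in Python (assuming
NumPy and SciPy; the matrices W1 and W2, whose columns span the two
subspaces, are illustrative data). The sum is spanned by all the
columns together, while a vector in the intersection corresponds to a
solution of W1 u = W2 v.

    import numpy as np
    from scipy.linalg import null_space

    # Columns of W1 span span{e1, e2}; columns of W2 span span{e2, e3}.
    W1 = np.array([[1., 0.], [0., 1.], [0., 0.]])
    W2 = np.array([[0., 0.], [1., 0.], [0., 1.]])

    # W1 + W2 is spanned by all columns together; its dimension is the rank.
    print(np.linalg.matrix_rank(np.hstack([W1, W2])))   # 3

    # For W1 ∩ W2, solve W1 u = W2 v, i.e. [W1  -W2][u; v] = 0.
    N = null_space(np.hstack([W1, -W2]))
    for col in N.T:
        print(W1 @ col[:W1.shape[1]])    # a spanning vector of W1 ∩ W2

Here dim(W1 + W2) = 3 and dim(W1 ∩ W2) = 1, consistent with the
formula dim W1 + dim W2 = dim(W1 + W2) + dim(W1 ∩ W2).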
In Lectures 4 and 5, we considered the vector space of column
vectors and we introduced the concepts of the span of a set of
vectors, linear dependence and independence of vectors, the
dimension of a subspace of vectors and a basis for such a
subspace. These notions carry over to an abstract vector space
without any difficulty as follows.
Let S ⊂ V . The set of all (finite) linear combinations of
elements of S is called the span of S, and we denote it by
span S .
A subset S of V is called linearly dependent if there are
v1 , . . . , vm in S and there are α1 , . . . , αm ∈ K, not all zero,
satisfying
α1 v1 + · · · + αm vm = 0.
This is the case if and only if at least one of the elements of S
is a linear combination of the other elements of S.



The following crucial result was proved for the case V := Rn×1
in Lecture 5. Exactly the same proof works in the general
case. We give it here again because of its importance.

Proposition
Let S be a subset of V having s elements and let R be a
subset of V having r elements. If S ⊂ span R and s > r , then
S is linearly dependent.

Proof.
Let S := {v1 , . . . , vs }, and suppose each element in S is a
linear combination of the r elements of R := {w1 , . . . , wr }.
Then
\[
v_j = \sum_{k=1}^{r} a_{jk} w_k \quad \text{for } j = 1, \dots, s, \text{ where } a_{jk} \in K.
\]



Let s > r . Consider the coefficient matrix A := [ajk ] ∈ Ks×r .
Then AT ∈ Kr×s . Since r < s, the linear system AT x = 0 has a
nonzero solution. Hence there are α1 , . . . , αs , not all zero,
such that
\[
A^{\mathsf T}
\begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_s \end{bmatrix}
=
\begin{bmatrix} a_{11} & \cdots & a_{s1} \\ \vdots & & \vdots \\ a_{1r} & \cdots & a_{sr} \end{bmatrix}
\begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_s \end{bmatrix}
=
\begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}
\in K^{r \times 1},
\]
that is, \(\sum_{j=1}^{s} a_{jk}\,\alpha_j = 0\) for k = 1, . . . , r . Hence
\[
\sum_{j=1}^{s} \alpha_j v_j
= \sum_{j=1}^{s} \alpha_j \Big( \sum_{k=1}^{r} a_{jk} w_k \Big)
= \sum_{k=1}^{r} \Big( \sum_{j=1}^{s} a_{jk}\,\alpha_j \Big) w_k
= 0.
\]

Since not all of α1 , . . . , αs are zero, S is linearly dependent.
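
Here is a small numerical illustration of this proof in Python (a
sketch assuming NumPy and SciPy; the spanning vectors and the
coefficient matrix are arbitrary illustrative data): s = 3 vectors
lying in the span of r = 2 vectors, and a nonzero solution of
AT x = 0 exhibiting their dependence.

    import numpy as np
    from scipy.linalg import null_space

    # r = 2 spanning vectors w1, w2 in R^4 (columns of W).
    W = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 0.]])

    # s = 3 vectors v_j = sum_k a_jk w_k; coefficient matrix A is 3x2.
    A = np.array([[1., 2.],
                  [3., 0.],
                  [1., 1.]])
    V = W @ A.T          # columns of V are v1, v2, v3

    # Since s > r, A^T x = 0 has a nonzero solution alpha.
    alpha = null_space(A.T)[:, 0]
    print(np.allclose(V @ alpha, 0.0))   # True: alpha_1 v1 + alpha_2 v2 + alpha_3 v3 = 0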



A subset S of V is called linearly independent if it is not
linearly dependent, that is,

α1 v1 + · · · + αm vm = 0 =⇒ α1 = · · · = αm = 0,

whenever v1 , . . . , vm ∈ S and α1 , . . . , αm ∈ K. This is the
case if and only if none of the elements of S is a linear combination
of the other elements of S. We also say that the elements of
S are linearly independent.
Examples
1 . Let m, n ∈ N, and let V := Km×n be the set of all m×n
matrices with entries in K with entry-wise addition and scalar
multiplication. For j = 1, . . . , m and k = 1, . . . , n, let Ejk
denote the m×n matrix whose (j, k)th entry is equal to 1 and
all other entries are equal to zero. Then the set
S := { Ejk : 1 ≤ j ≤ m, 1 ≤ k ≤ n } is linearly independent.



To see this, let αjk ∈ K be such that
\(\sum_{j=1}^{m} \sum_{k=1}^{n} \alpha_{jk} E_{jk} = O\).
(Note that the zero element of this vector space is the m×n
matrix having all its entries equal to 0.)
Then the (j, k)th entry of the matrix on the left side is αjk and
that on the right side is 0. Thus αjk = 0 for all j = 1, . . . , m
and k = 1, . . . , n. Hence the set S is linearly independent.
Next, let S1 := S ∪ {E}, where all the entries of the matrix E
are equal to 1. Then the set S1 is linearly dependent since
\(E = \sum_{j=1}^{m} \sum_{k=1}^{n} E_{jk}\).
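
A sketch of this example in Python (assuming NumPy; the sizes m and n
are arbitrary): the matrices Ejk are built directly, and the all-ones
matrix E is verified to be the sum of all of them.

    import numpy as np

    m, n = 2, 3                      # arbitrary sizes for illustration

    def E_matrix(j, k):
        # The matrix Ejk: entry (j, k) equal to 1, all others 0.
        E = np.zeros((m, n))
        E[j, k] = 1.0
        return E

    # The all-ones matrix is the sum of all Ejk, so S ∪ {E} is dependent.
    total = sum(E_matrix(j, k) for j in range(m) for k in range(n))
    print(np.array_equal(total, np.ones((m, n))))   # True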
2 . Let V := c0 denote the set of all sequences in K which
converge to 0. For j ∈ N, let ej denote the sequence whose
jth term is equal to 1 and all other terms are equal to 0.
Then the set S := { ej : j ∈ N } is linearly independent.
To see this, let \(\alpha_{j_1}, \dots, \alpha_{j_n} \in K\) be such that
\(\sum_{k=1}^{n} \alpha_{j_k} e_{j_k} = 0\). (Note that the zero element of
this vector space is the sequence having all its terms equal to 0.)
Then the \(j_k\)th term of the sequence on the left side is \(\alpha_{j_k}\)
and that on the right side is 0. Thus \(\alpha_{j_k} = 0\) for all k = 1, . . . , n.
Hence the set S is linearly independent.
Next, let S1 := S ∪ {e}, where the nth entry of the sequence e
is equal to 1/n for n ∈ N. Then the set S1 is also linearly
independent since e is not a (finite) linear combination of
elements of S.
3 . Let V := K[x] denote the set of all polynomials in the
indeterminate x with coefficients in K. Then the set
S := { x^j : j = 0, 1, 2, . . . } is linearly independent.
To see this, let \(\alpha_{j_0}, \alpha_{j_1}, \dots, \alpha_{j_n} \in K\) be such that
\(\sum_{k=0}^{n} \alpha_{j_k} x^{j_k} = 0\). (Note that the zero element of this
vector space is the polynomial having all its coefficients equal
to 0.) Then the coefficient of \(x^{j_k}\) on the left side is \(\alpha_{j_k}\)
and that on the right side is 0. Thus \(\alpha_{j_k} = 0\) for all
k = 0, 1, . . . , n. Hence the set S is linearly independent.
Next, let S1 := S ∪ {p}, where p ∈ K[x]. Then the set S1 is
linearly dependent since p ∈ span S.
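
A sketch of this example using SymPy (an illustrative check rather
than a proof; the symbol names are our own): expanding a linear
combination of 1, x, x^2 and equating every coefficient of the zero
polynomial to zero forces all coefficients to vanish.

    import sympy as sp

    x = sp.symbols('x')
    a0, a1, a2 = sp.symbols('a0 a1 a2')

    # A linear combination of 1, x, x^2 equated to the zero polynomial.
    p = a0 + a1 * x + a2 * x ** 2

    # All coefficients of p must vanish; solve for a0, a1, a2.
    print(sp.solve(sp.Poly(p, x).coeffs(), [a0, a1, a2]))
    # {a0: 0, a1: 0, a2: 0} -- only the trivial combination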
4 . Let V := C ([−π, π]) denote the set of all K-valued
continuous functions on the interval [−π, π]. For n ∈ N, let
un (t) := cos nt and vn (t) := sin nt for t ∈ [−π, π]. Then the
set S := {u1 , u2 , . . .} ∪ {v1 , v2 , . . .} is linearly independent.
(Note that the zero element of this vector space is the function
having all its values on [−π, π] equal to 0.) We shall prove
this assertion later. At present, you can think of the following
idea: If α cos t + β sin t = 0, then −α sin t + β cos t = 0.
Next, let S1 := S ∪ {w }, where w (t) := t for t ∈ [−π, π].
Then the set S1 is also linearly independent, since
w (π) ̸= w (−π), and so w ̸∈ span S.
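
The independence of finitely many of these functions can also be seen
numerically (a sketch assuming NumPy; the quadrature grid is
arbitrary): the Gram matrix of inner products \(\int_{-\pi}^{\pi} f(t)\,g(t)\,dt\)
is approximately π times the identity, and an invertible Gram matrix
forces linear independence.

    import numpy as np

    t = np.linspace(-np.pi, np.pi, 20001)
    dt = t[1] - t[0]
    funcs = [np.cos(t), np.cos(2 * t), np.sin(t), np.sin(2 * t)]

    # Gram matrix G[i, j] approximates the integral of f_i * f_j.
    G = np.array([[np.sum(f * g) * dt for g in funcs] for f in funcs])
    print(np.round(G, 3))                 # approximately pi * identity
    print(abs(np.linalg.det(G)) > 1e-6)   # invertible, hence independent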



A vector space V is said to be finite dimensional if there is a
finite subset S of V such that V = span S; otherwise the
vector space V is said to be infinite dimensional.
If a vector space V is infinite dimensional, then V is larger
than the span of any finite subset of V , and so V must
contain an infinite linearly independent subset. Conversely, if
V contains an infinite linearly independent subset, then V
must be infinite dimensional, because our crucial result says
that if a linearly independent subset S of V has s elements,
and there is a subset R of V having r elements such that
S ⊂ span R, then s ≤ r .
Examples
Let n, m ∈ N. The vector spaces Kn×1 , K1×n and Km×n are
finite dimensional, and so is the vector space of all polynomials
in the indeterminate x having degree less than or equal to n.
But the vector spaces c0 , K[x] and C ([−π, π]) are infinite
dimensional.
We shall mainly deal with finite dimensional vector spaces.
Suppose V is a finite dimensional vector space over K, and
suppose V is equal to the span of n elements of V .
If V = {0}, then ∅ is the only linearly independent subset of
V , and it has 0 elements.
Next, suppose V ̸= {0}, that is, there is at least one nonzero
element v in V . Then the singleton set {v } is a linearly
independent subset of V , and it has 1 element. Since no
linearly independent subset of V can have more than n
elements, the number of elements in any linearly independent
subset of V is between 1 and n. Hence there is r ∈ {1, . . . , n}
such that V contains a linearly independent subset having r
elements, while any subset of V having more than r elements
is linearly dependent, that is, r is the maximum number of
linearly independent elements that V can contain.



A linearly independent subset of a finite dimensional vector
space V consisting of a maximum possible number of elements
is called a basis for V .
Here is a characterization of a basis for a vector space.
Proposition
Let V be a finite dimensional vector space over K, and let
S ⊂ V . Then S is a basis for V if and only if S is linearly
independent and span S = V .
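
For V := Kn×1 and a set of n column vectors, both conditions in this
proposition can be tested at once (a sketch assuming NumPy; the
vectors are illustrative data): the vectors form a basis exactly when
the matrix having them as columns has rank n.

    import numpy as np

    # Columns are the candidate basis vectors of R^3 (illustrative data).
    B = np.array([[1., 0., 2.],
                  [0., 1., 1.],
                  [1., 1., 0.]])

    n = B.shape[0]
    print(np.linalg.matrix_rank(B) == n)   # True: independent and spanning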

We can deduce the following result.

Corollary
Let V be a finite dimensional vector space over K. Every
linearly independent subset of V can be extended to a basis
for V .



Another immediate consequence of the above proposition:

Proposition
Let S := {v1 , . . . , vr } be a basis for a vector space V , and let
v ∈ V . Then there are unique α1 , . . . , αr ∈ K such that
v = α1 v1 + · · · + αr vr .
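
For V := Kn×1 this unique representation can be computed directly (a
sketch assuming NumPy; the basis and the vector are illustrative
data): place the basis vectors as columns of a matrix B and solve
B α = v.

    import numpy as np

    # Columns of B form a basis for R^3 (illustrative data).
    B = np.array([[1., 1., 0.],
                  [0., 1., 1.],
                  [0., 0., 1.]])
    v = np.array([2., 3., 1.])

    # The unique coefficients with v = alpha_1 v1 + alpha_2 v2 + alpha_3 v3.
    alpha = np.linalg.solve(B, v)
    print(alpha)                        # [0. 2. 1.]
    print(np.allclose(B @ alpha, v))    # True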

We have given proofs of the three results stated above in
Lecture 6 for a subspace V of Rn×1 . Exactly the same proofs
work in the case of any finite dimensional vector space V , and
so we omit them here.
In fact, these results hold even for an infinite dimensional
vector space, but the proofs involve transfinite induction in
the form of the Axiom of Choice.



Examples
Earlier we considered various bases for subspaces of Kn×1 .
Now we give examples of bases of other vector spaces.
1. If {x1 , . . . , xn } is a basis for Kn×1 , then \(\{x_1^T, \dots, x_n^T\}\) is
a basis for K1×n . The dimension of K1×n is n.
2. The set { Ejk : 1 ≤ j ≤ m, 1 ≤ k ≤ n } is a basis for Km×n .
Its dimension is mn.
3. The set { x^j : j = 0, 1, 2, . . . , n } is a basis for the vector
space consisting of all polynomials of degree less than or equal
to n with coefficients in K. Its dimension is n + 1.

