VSP-0040: Subspaces Associated with Matrices
We define the row space, the column space, and the null space of a matrix, and we
prove the Rank-Nullity Theorem.
Row Space of a Matrix
Recall that in SYS-0030, we claimed that every row-echelon form of a given matrix
has the same number of nonzero rows. This result suggests that certain
characteristics of the rows of a matrix are not affected by elementary row
operations. We are now in a position to examine this question and to supply the
proof we omitted earlier.
Let $A$ be an $m\times n$ matrix. The row space of $A$, denoted by $\text{row}(A)$, is the subspace of $\mathbb{R}^n$ spanned by the rows of $A$.
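To make the definition concrete, here is a minimal SymPy sketch; the example matrix is made up for illustration and is not from the text. The method rowspace() returns a basis for the span of the rows.

from sympy import Matrix

# A made-up matrix whose second row is twice the first:
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

# rowspace() returns a basis for row(A); here the rows span a line in R^3.
print(A.rowspace())   # [Matrix([[1, 2, 3]])]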
Consider the matrix
Let $\vec{r}_1$ and $\vec{r}_2$ be the rows of $A$:
Then $\text{row}(A)$ is a plane through the origin containing $\vec{r}_1$ and $\vec{r}_2$.
We will use elementary row operations to reduce $A$ to its reduced row-echelon form $R$.
Let $\vec{\rho}_1$ and $\vec{\rho}_2$ be the rows of $R$:
What do you think $\text{row}(R)$ looks like?
The following video will help us visualize $\text{row}(R)$ and compare it to $\text{row}(A)$.
[Video: visualizing $\text{row}(R)$ and $\text{row}(A)$]
Based on what we observed in the video, we may conjecture that
$$\text{row}(R) = \text{row}(A).$$
But why does this make sense? Vectors $\vec{\rho}_1$ and $\vec{\rho}_2$ were obtained from $\vec{r}_1$ and $\vec{r}_2$ by repeated
applications of elementary row operations. At every stage of the row reduction
process, the rows of the matrix are linear combinations of $\vec{r}_1$ and $\vec{r}_2$. Thus, at every stage
of the row reduction process, the rows of the matrix lie in the span of $\vec{r}_1$ and $\vec{r}_2$. Our next
video shows a step-by-step row reduction process accompanied by sketches of the row
vectors.
[Video: step-by-step row reduction with sketches of the row vectors]
Exploration init:rowspace makes a convincing case for the following theorem.
If matrix $B$ was obtained from matrix $A$ by applying an elementary row operation to
$A$, then
$$\text{row}(B) = \text{row}(A).$$
Proof
Let $\vec{r}_1, \ldots, \vec{r}_m$ be the rows of $A$.
There are three elementary row operations. Clearly, switching the order of the
vectors $\vec{r}_1, \ldots, \vec{r}_m$ will not affect the span.
Suppose that $B$ was obtained from $A$ by multiplying the $i$th row of $A$ by a non-zero
constant $k$. We need to show that
$$\text{span}(\vec{r}_1, \ldots, k\vec{r}_i, \ldots, \vec{r}_m) = \text{span}(\vec{r}_1, \ldots, \vec{r}_i, \ldots, \vec{r}_m).$$
To do this we will assume that some vector $\vec{v}$ is in $\text{row}(B)$, and show that $\vec{v}$ is in $\text{row}(A)$. We
will then assume that some vector $\vec{w}$ is in $\text{row}(A)$ and show that $\vec{w}$ must be in $\text{row}(B)$.
Suppose that $\vec{v}$ is in $\text{row}(B)$. Then
$$\vec{v} = a_1\vec{r}_1 + \cdots + a_i(k\vec{r}_i) + \cdots + a_m\vec{r}_m.$$
But then
$$\vec{v} = a_1\vec{r}_1 + \cdots + (a_ik)\vec{r}_i + \cdots + a_m\vec{r}_m.$$
So $\vec{v}$ is in $\text{row}(A)$.
Now suppose $\vec{w}$ is in $\text{row}(A)$; then
$$\vec{w} = b_1\vec{r}_1 + \cdots + b_i\vec{r}_i + \cdots + b_m\vec{r}_m.$$
But because $k \neq 0$, we can do the following:
$$\vec{w} = b_1\vec{r}_1 + \cdots + \frac{b_i}{k}(k\vec{r}_i) + \cdots + b_m\vec{r}_m.$$
So $\vec{w}$ is in $\text{row}(B)$.
We leave it to the reader to verify that adding a scalar multiple of one row of $A$ to
another row does not change the row space. (See Practice Problem prob:proofofrowBrowA.)
If matrix $B$ was obtained from matrix $A$ by applying a sequence of elementary row
operations to $A$, then
$$\text{row}(B) = \text{row}(A).$$
Proof
This follows from repeated applications of Theorem th:rowBrowA.
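As a quick numerical sanity check, the following SymPy sketch applies one row operation of each type to a made-up matrix (not from the text) and confirms the row space is unchanged. Two matrices of the same size have the same row space exactly when they have the same reduced row-echelon form.

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 7]])

# Apply one elementary row operation of each type to obtain B from A:
B = A.elementary_row_op('n->kn', row=1, k=3)             # scale row 1 by 3
B = B.elementary_row_op('n->n+km', row=0, k=-2, row2=1)  # add -2*(row 1) to row 0
B = B.elementary_row_op('n<->m', row1=0, row2=1)         # swap the two rows

# Equal rrefs imply row(B) = row(A):
assert A.rref()[0] == B.rref()[0]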
Let
Find two distinct bases for $\text{row}(A)$.
By Corollary cor:rowArowrrefA, a basis for $\text{row}(\text{rref}(A))$ will also be a basis for $\text{row}(A)$. Row
reduction gives us:
Since the zero row contributes nothing to the span, we conclude that the nonzero
rows of $\text{rref}(A)$ span $\text{row}(A)$.
By Theorem th:rowsrreflinind of VEC-0110, the nonzero rows of $\text{rref}(A)$ are linearly independent. It follows
that the nonzero rows of $\text{rref}(A)$ form a basis for $\text{row}(A)$.
To find a second basis for $\text{row}(A)$, observe that by Corollary cor:rowequiv the row space of any
row-echelon form of $A$ will be equal to $\text{row}(A)$. Matrix $A$ has many row-echelon forms. Here is
one of them:
The nonzero rows of this row-echelon form span $\text{row}(A)$. By Theorem th:rowsofreflinind of VEC-0110, the nonzero rows
are linearly independent. Thus the nonzero rows of this row-echelon form constitute a second basis for
$\text{row}(A)$.
Our observations in Example ex:basisrowspace can be generalized to all matrices. Given any matrix
$A$:
The nonzero rows of $\text{rref}(A)$, or of any row-echelon form of $A$, are linearly independent
and span $\text{row}(A)$. (Theorem th:rowsreflinind of VEC-0110, and Corollary cor:rowequiv)
Therefore the nonzero rows of $\text{rref}(A)$, or the nonzero rows of any row-echelon form
of $A$, constitute a basis of $\text{row}(A)$. Since all bases for $\text{row}(A)$ must have the same number
of elements (Theorem th:dimwelldefined of VSP-0035), we have just proved the following
theorem.
All row-echelon forms of a given matrix have the same number of nonzero
rows.
This result was first introduced without proof in SYS-0030, where we used it to
define the rank of a matrix as the number of nonzero rows in its row-echelon
forms.
Let $A$ be a matrix. Then $\dim(\text{row}(A)) = \text{rank}(A)$.
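The following SymPy sketch, again with a made-up matrix, illustrates this equality: the size of a basis for the row space matches the rank.

from sympy import Matrix

A = Matrix([[1, 2, 1, 3],
            [2, 4, 3, 7],
            [1, 2, 2, 4]])

basis = A.rowspace()         # a basis for row(A): the nonzero rows of a row-echelon form
assert len(basis) == A.rank()
print(len(basis), A.rank())  # both are 2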
Column Space of a Matrix
Let $A$ be an $m\times n$ matrix. The column space of $A$, denoted by $\text{col}(A)$, is the subspace of $\mathbb{R}^m$ spanned
by the columns of $A$.
Let
Our goal is to find a basis for $\text{col}(B)$. To do this we need to find a linearly independent
subset of the columns of $B$ that spans $\text{col}(B)$.
Consider the linear relation:
Solving this homogeneous equation amounts to finding $\text{rref}(B)$:
We now see that (eq:init:colspaceB) has infinitely many solutions.
Observe that the homogeneous equation
has the same solution set as (eq:init:colspaceB). In particular, the two systems share a non-trivial solution of (eq:init:colspaceB) and
(eq:init:colspaceR). This means that the third column of $B$ and the third column of $R$ can be expressed as
the first column minus the second column of their respective matrices. We
conclude that the third column of $B$ can be eliminated from the spanning set for $\text{col}(B)$,
and
Having eliminated one of the vectors, we need to determine whether the remaining
three vectors are linearly independent. To do this we need to find all solutions
of
Fortunately, we do not have to start from scratch. Observe that crossing out the third
column in the previous row reduction process yields the desired reduced row-echelon
form.
This time the reduced row-echelon form tells us that (eq:init:colspaceB2) has only the trivial solution.
We conclude that the three vectors are linearly independent and
is a basis for $\text{col}(B)$.
The approach we took to find a basis for $\text{col}(B)$ in Exploration init:colspace uses the reduced
row-echelon form of $B$. It is true, however, that any row-echelon form of $B$ could have
been used in place of $\text{rref}(B)$. (Why?) We generalize the steps as follows:
Given a matrix $A$,
a basis for $\text{col}(A)$ can be found as follows:
(a)
Find $\text{rref}(A)$ (or any row-echelon form of $A$).
(b)
Identify the pivot columns of $\text{rref}(A)$ (or of the row-echelon form).
(c)
The columns of $A$ corresponding to the pivot columns of $\text{rref}(A)$ (or of the row-echelon form) form a basis
for $\text{col}(A)$.
Proof
Let $\vec{c}_1, \ldots, \vec{c}_n$ be the columns of $A$, and let $\vec{c}_1{}', \ldots, \vec{c}_n{}'$ be the columns of $\text{rref}(A)$ (or of a row-echelon form of $A$). Observe that the
equations
$$x_1\vec{c}_1 + \cdots + x_n\vec{c}_n = \vec{0} \quad\text{and}\quad x_1\vec{c}_1{}' + \cdots + x_n\vec{c}_n{}' = \vec{0}$$
have the same solution set. This means that any non-trivial relation among the
columns of $\text{rref}(A)$ (or of the row-echelon form) translates into a non-trivial relation among the columns of $A$.
Likewise, any collection of linearly independent columns of $\text{rref}(A)$ (or of the row-echelon form) corresponds to
linearly independent columns of $A$.
By Theorems th:rowsrreflinind and th:rowsofreflinind of VEC-0110, the pivot columns of $\text{rref}(A)$ (or of the row-echelon form) are linearly independent.
Therefore the corresponding columns of $A$ are linearly independent. Non-pivot
columns can be expressed as linear combinations of the pivot columns, so
they contribute nothing to the span and can be removed from the spanning
set.
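A one-line check of the key fact in this proof, in SymPy with a made-up matrix: $A\vec{x}=\vec{0}$ and $\text{rref}(A)\vec{x}=\vec{0}$ have identical solution sets.

from sympy import Matrix

A = Matrix([[1, 2, 1, 3],
            [2, 4, 3, 7],
            [1, 2, 2, 4]])

R, _ = A.rref()
# The two homogeneous systems have the same null space, hence the same solutions:
assert A.nullspace() == R.nullspace()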
The proof of Procedure proc:colspace shows that the number of basis elements for the column
space of a matrix is equal to the number of pivot columns. But the number of pivot
columns is the same as the number of pivots in a row-echelon form, which is equal to
the number of nonzero rows, which in turn is the rank of the matrix. This gives us the following
important result.
Let $A$ be a matrix. Then $\dim(\text{col}(A)) = \text{rank}(A)$.
We will return to matrix $A$ of Example ex:basisrowspace and find a basis for $\text{col}(A)$.
We begin by finding $\text{rref}(A)$:
The columns of $\text{rref}(A)$ that contain leading 1s are its pivot columns. Therefore the corresponding columns of $A$ form a basis for
$\text{col}(A)$.
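Here is the procedure carried out in SymPy on a made-up matrix (the matrix from the original example is not reproduced here); rref() reports the pivot-column indices, and the built-in columnspace() applies the same procedure internally.

from sympy import Matrix

A = Matrix([[1, 2, 1, 3],
            [2, 4, 3, 7],
            [1, 2, 2, 4]])

R, pivots = A.rref()               # rref of A and the indices of its pivot columns
print(pivots)                      # (0, 2): the first and third columns are pivot columns
basis = [A.col(j) for j in pivots] # the corresponding columns of A form a basis for col(A)

# SymPy's columnspace() carries out the same steps:
assert basis == A.columnspace()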
The Null Space
Let $A$ be an $m\times n$ matrix. The null space of $A$, denoted by $\text{null}(A)$, is the set of all vectors $\vec{x}$ in $\mathbb{R}^n$ such
that $A\vec{x} = \vec{0}$.
Find $\text{null}(A)$ if
We need to solve the equation $A\vec{x} = \vec{0}$. Row reduction gives us
We conclude that the general solution has a single free parameter. Thus $\text{null}(A)$ consists of all scalar multiples of a single vector $\vec{v}$. We might
write
$$\text{null}(A) = \{t\vec{v} \mid t \in \mathbb{R}\}$$
or
$$\text{null}(A) = \text{span}(\vec{v}).$$
Example ex:nullintro allows us to make an important observation. Note that every scalar
multiple of $\vec{v}$ is contained in $\text{null}(A)$. This means that $\text{null}(A)$ is closed under vector addition and
scalar multiplication. Recall that this property makes $\text{null}(A)$ a subspace of $\mathbb{R}^n$. This result was
first presented as Practice Problem prob:null(A)_is_subspace of VSP-0020. We now formalize it as a
theorem.
Let $A$ be an $m\times n$ matrix. Then $\text{null}(A)$ is a subspace of $\mathbb{R}^n$.
Proof
To show that $\text{null}(A)$ is closed under vector addition and scalar multiplication,
we will show that a linear combination of any two elements of $\text{null}(A)$ is contained in
$\text{null}(A)$.
Suppose $\vec{u}$ and $\vec{v}$ are in $\text{null}(A)$. Then $A\vec{u} = \vec{0}$ and $A\vec{v} = \vec{0}$. But then
$$A(a\vec{u} + b\vec{v}) = aA\vec{u} + bA\vec{v} = a\vec{0} + b\vec{0} = \vec{0}.$$
We conclude that $a\vec{u} + b\vec{v}$ is also in $\text{null}(A)$.
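A numeric spot-check of this closure property in SymPy, using a made-up matrix and an arbitrary linear combination of null-space basis vectors:

from sympy import Matrix, zeros

A = Matrix([[1, 2, 1, 3],
            [2, 4, 3, 7],
            [1, 2, 2, 4]])

u, v = A.nullspace()        # basis vectors for null(A), so A*u = A*v = 0
w = 5*u + (-3)*v            # an arbitrary linear combination a*u + b*v
assert A * w == zeros(3, 1) # A(a*u + b*v) = a*A*u + b*A*v = 0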
Find a basis for $\text{null}(A)$, where $A$ is the matrix in Example ex:basisrowspace.
Elements in the null space of $A$
are solutions to the equation $A\vec{x} = \vec{0}$.
Row reduction yields
Therefore, elements of $\text{null}(A)$ are of the form
Thus
Now, to find a basis for $\text{null}(A)$, we need to find linearly independent vectors that
span $\text{null}(A)$. Take a closer look at the vectors:
Because of the locations of the 1s and 0s, it is clear that one vector is not a scalar multiple of
the other. Therefore the two vectors are linearly independent. We conclude
that
is a basis of $\text{null}(A)$, and $\dim(\text{null}(A)) = 2$.
It is not a coincidence that the steps we used in Example ex:dimnull produced linearly
independent vectors, and it is worthwhile to try to understand why this procedure
will always produce linearly independent vectors.
Take a closer look at the elements of the null space:
The parameter $s$ in the third component of the general solution produces a $1$ in the third component of
the first vector and a $0$ in the third component of the second vector, while the parameter $t$
in the fifth component produces a $1$ in the fifth component of the second vector
and a $0$ in the fifth component of the first vector. This makes it clear that the two
vectors are linearly independent.
This pattern will hold for any number of parameters, each parameter producing a $1$
in exactly one vector and $0$s in the corresponding components of the other
vectors.
Therefore, vectors obtained in this way will always be linearly independent.
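The same 1-and-0 pattern is visible in SymPy's null-space basis (made-up matrix again): each basis vector has a $1$ in the component of its own free variable and $0$s in the components of the other free variables.

from sympy import Matrix

A = Matrix([[1, 2, 1, 3],
            [2, 4, 3, 7],
            [1, 2, 2, 4]])

R, pivots = A.rref()
free = [j for j in range(A.cols) if j not in pivots]  # free columns, here [1, 3]
for vec, j in zip(A.nullspace(), free):
    # Component j of this basis vector is 1; it is 0 in the other basis vector.
    print(j, vec.T)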
Rank and Nullity Theorem
Let $A$ be a matrix. The dimension of the null space of $A$ is called the nullity of
$A$, written $\text{nullity}(A)$.
We know that the dimension of the row space and the dimension of the column space
of a matrix are the same and are equal to the rank of the matrix (or the number of
nonzero rows in any row-echelon form of the matrix).
As we observed in Example ex:dimnull, the dimension of the null space of a matrix is equal to
the number of free variables in the solution vector of the homogeneous system
associated with the matrix. Since the number of pivots and the number of free
variables add up to the number of columns in a matrix (Theorem th:rankandsolutions of SYS-0030), we
have the following significant result.
Let $A$ be an $m\times n$ matrix. Then
$$\text{rank}(A) + \text{nullity}(A) = n.$$
We will see the geometric implications of this theorem when we study linear
transformations.
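A quick SymPy verification of the theorem on a made-up $3\times 5$ matrix:

from sympy import Matrix

A = Matrix([[1, 0, 2, 0, 1],
            [0, 1, 3, 0, 2],
            [0, 0, 0, 1, 4]])

rank = A.rank()
nullity = len(A.nullspace())     # nullity(A) = dim(null(A))
assert rank + nullity == A.cols  # n is the number of columns
print(rank, nullity, A.cols)     # 3 + 2 == 5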
Practice Problems
Let
Find $\text{rref}(A)$.
Use $\text{rref}(A)$ and the procedure outlined in Example ex:basisrowspace to find a basis for $\text{row}(A)$.
Demonstrate that the Rank-Nullity Theorem (Theorem th:matrixranknullity) holds for $A$.
Explain how you can quickly tell that the two vectors you selected for your basis are
linearly independent.
Find a basis for $\text{null}(A)$.
Basis for $\text{null}(A)$:
Demonstrate that the Rank-Nullity Theorem (Theorem th:matrixranknullity) holds for $A$.
Find a basis for $\text{null}(A)$.
Basis for $\text{null}(A)$:
Demonstrate that the Rank-Nullity Theorem (Theorem th:matrixranknullity) holds for $A$.
Suppose matrix $A$ is such that
Follow the process used in Example ex:dimnull to find a basis for $\text{null}(A)$. Explain why the basis
elements obtained in this way are linearly independent.
Basis of $\text{null}(A)$:
Let $\vec{c}_1, \vec{c}_2, \vec{c}_3$ denote the columns of $A$. Express $\vec{c}_3$ as a linear combination of $\vec{c}_1$ and $\vec{c}_2$.
Answer:
Suppose $A$ is a matrix. Which of the following statements could be true?
Suppose $A$ is a matrix. Which of the following statements could be true?
Complete the proof of Theorem th:rowBrowA by showing that adding a scalar multiple of one row
of a matrix to another row does not change the row space.