Subspaces of $\mathbb{R}^n$ Associated with Matrices

Row Space of a Matrix

Recall that in Gaussian Elimination and Rank, we claimed that every row-echelon form of a given matrix has the same number of nonzero rows. This result suggests that certain characteristics of the rows of a matrix are not affected by elementary row operations. We are now in a position to examine this question and to supply the proof we omitted earlier.

Consider a matrix $A$ with two rows, and let $\vec{v}_1$ and $\vec{v}_2$ be the rows of $A$.

Then $\text{row}(A)=\text{span}(\vec{v}_1,\vec{v}_2)$ is a plane through the origin containing $\vec{v}_1$ and $\vec{v}_2$.

We will use elementary row operations to reduce $A$ to its reduced row-echelon form $R$. Let $\vec{w}_1$ and $\vec{w}_2$ be the rows of $R$. What do you think $\text{row}(R)$ looks like?

The following video will help us visualize $\text{row}(R)$ and compare it to $\text{row}(A)$.


Based on what we observed in the video, we may conjecture that
\begin{equation*}
\text{row}(R)=\text{row}(A)
\end{equation*}

But why does this make sense? Vectors $\vec{w}_1$ and $\vec{w}_2$ were obtained from $\vec{v}_1$ and $\vec{v}_2$ by repeated applications of elementary row operations. At every stage of the row reduction process, the rows of the matrix are linear combinations of $\vec{v}_1$ and $\vec{v}_2$. Thus, at every stage of the row reduction process, the rows of the matrix lie in $\text{span}(\vec{v}_1,\vec{v}_2)$. Our next video shows a step-by-step row reduction process accompanied by sketches of the vectors.

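To see the idea in a small, hypothetical case (the row vectors below are chosen only for illustration and are not the vectors from the video), suppose $\vec{v}_1=(1,1,0)$ and $\vec{v}_2=(0,1,1)$, and add $(-1)$ times row $1$ to row $2$:
\begin{equation*}
\vec{w}_1=\vec{v}_1=(1,1,0),\qquad \vec{w}_2=\vec{v}_2-\vec{v}_1=(-1,0,1)
\end{equation*}
Both new rows are linear combinations of $\vec{v}_1$ and $\vec{v}_2$; conversely, $\vec{v}_1=\vec{w}_1$ and $\vec{v}_2=\vec{w}_1+\vec{w}_2$, so $\text{span}(\vec{w}_1,\vec{w}_2)=\text{span}(\vec{v}_1,\vec{v}_2)$.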

Exploration init:rowspace makes a convincing case for the following theorem.

Theorem th:rowBrowA. If matrix $B$ is obtained from matrix $A$ by performing a single elementary row operation, then $\text{row}(B)=\text{row}(A)$.

Proof
Let $\vec{v}_1,\ldots,\vec{v}_m$ be the rows of $A$.

There are three elementary row operations. Clearly, switching the order of the vectors $\vec{v}_1,\ldots,\vec{v}_m$ will not affect their span.

Suppose that $B$ was obtained from $A$ by multiplying the $i$th row of $A$ by a non-zero constant $k$. We need to show that
\begin{equation*}
\text{span}(\vec{v}_1,\ldots,k\vec{v}_i,\ldots,\vec{v}_m)=\text{span}(\vec{v}_1,\ldots,\vec{v}_i,\ldots,\vec{v}_m)
\end{equation*}

To do this we will assume that some vector $\vec{w}$ is in $\text{span}(\vec{v}_1,\ldots,\vec{v}_i,\ldots,\vec{v}_m)$, and show that $\vec{w}$ is in $\text{span}(\vec{v}_1,\ldots,k\vec{v}_i,\ldots,\vec{v}_m)$. We will then assume that some vector $\vec{u}$ is in $\text{span}(\vec{v}_1,\ldots,k\vec{v}_i,\ldots,\vec{v}_m)$ and show that $\vec{u}$ must be in $\text{span}(\vec{v}_1,\ldots,\vec{v}_i,\ldots,\vec{v}_m)$.

Suppose that $\vec{w}$ is in $\text{span}(\vec{v}_1,\ldots,\vec{v}_i,\ldots,\vec{v}_m)$. Then
\begin{equation*}
\vec{w}=a_1\vec{v}_1+\ldots+a_i\vec{v}_i+\ldots+a_m\vec{v}_m
\end{equation*}
But then
\begin{equation*}
\vec{w}=a_1\vec{v}_1+\ldots+\frac{a_i}{k}(k\vec{v}_i)+\ldots+a_m\vec{v}_m
\end{equation*}
So $\vec{w}$ is in $\text{span}(\vec{v}_1,\ldots,k\vec{v}_i,\ldots,\vec{v}_m)$.

Now suppose $\vec{u}$ is in $\text{span}(\vec{v}_1,\ldots,k\vec{v}_i,\ldots,\vec{v}_m)$. Then
\begin{equation*}
\vec{u}=b_1\vec{v}_1+\ldots+b_i(k\vec{v}_i)+\ldots+b_m\vec{v}_m
\end{equation*}
But because $k\neq 0$, we can do the following:
\begin{equation*}
\vec{u}=b_1\vec{v}_1+\ldots+(b_ik)\vec{v}_i+\ldots+b_m\vec{v}_m
\end{equation*}
So $\vec{u}$ is in $\text{span}(\vec{v}_1,\ldots,\vec{v}_i,\ldots,\vec{v}_m)$.
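As a concrete instance of the computations above (with hypothetical coefficients chosen only for illustration), take $m=2$, $i=2$, and $k=2$: if $\vec{w}=3\vec{v}_1+5\vec{v}_2$, then $\vec{w}=3\vec{v}_1+\frac{5}{2}(2\vec{v}_2)$; and if $\vec{u}=3\vec{v}_1+5(2\vec{v}_2)$, then $\vec{u}=3\vec{v}_1+10\vec{v}_2$.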

We leave it to the reader to verify that adding a scalar multiple of one row of $A$ to another row does not change the row space. (See Practice Problem prob:proofofrowBrowA.)

Corollary cor:rowequiv. If matrix $B$ is row equivalent to matrix $A$, then $\text{row}(B)=\text{row}(A)$.

Proof
This follows from repeated applications of Theorem th:rowBrowA.

Our observations in Example ex:basisrowspace can be generalized to all matrices. Given any matrix $A$:

(a)
The nonzero rows of $\text{rref}(A)$ are linearly independent (Why?) and span $\text{row}(A)$ (Corollary cor:rowArowrrefA).
(b)
The nonzero rows of any row-echelon form of $A$ are linearly independent (Why?) and span $\text{row}(A)$ (Corollary cor:rowequiv).

Therefore the nonzero rows of $\text{rref}(A)$, or the nonzero rows of any row-echelon form of $A$, constitute a basis of $\text{row}(A)$. Since all bases for $\text{row}(A)$ must have the same number of elements (Theorem th:dimwelldefined), we have just proved the following theorem: every row-echelon form of a given matrix $A$ has the same number of nonzero rows, and that number is $\dim(\text{row}(A))$.
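For example (with a hypothetical matrix used only for illustration), if
\begin{equation*}
\text{rref}(A)=\begin{bmatrix}1&0&2\\0&1&-1\\0&0&0\end{bmatrix}
\end{equation*}
then the nonzero rows $\begin{bmatrix}1&0&2\end{bmatrix}$ and $\begin{bmatrix}0&1&-1\end{bmatrix}$ form a basis of $\text{row}(A)$, and every row-echelon form of $A$ has exactly two nonzero rows, so $\dim(\text{row}(A))=2$.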

This result was first introduced without proof in Gaussian Elimination and Rank, where we used it to define the rank of a matrix as the number of nonzero rows in its row-echelon forms. We can now update the definition of rank as follows: the rank of a matrix $A$, denoted $\text{rank}(A)$, is the dimension of its row space,
\begin{equation*}
\text{rank}(A)=\dim(\text{row}(A))
\end{equation*}

Column Space of a Matrix

Let
\begin{equation*}
B=\begin{bmatrix}2&-1&3&1\\1&-1&2&2\\1&3&-2&-3\end{bmatrix}
\end{equation*}
Our goal is to find a basis for $\text{col}(B)$. To do this we need to find a linearly independent subset of the columns of $B$ that spans $\text{col}(B)$.

Consider the linear relation: \begin{equation} \label{eq:init:colspaceB} a_1\begin{bmatrix}2\\1\\1\end{bmatrix}+a_2\begin{bmatrix}-1\\-1\\3\end{bmatrix}+a_3\begin{bmatrix}3\\2\\-2\end{bmatrix}+a_4\begin{bmatrix}1\\2\\-3\end{bmatrix}=\vec{0} \end{equation}

Solving this homogeneous equation amounts to finding $R=\text{rref}(B)$:
\begin{equation*}
R=\begin{bmatrix}1&0&1&0\\0&1&-1&0\\0&0&0&1\end{bmatrix}
\end{equation*}
We now see that (eq:init:colspaceB) has infinitely many solutions.

Observe that the homogeneous equation

\begin{equation} \label{eq:init:colspaceR} a_1\begin{bmatrix}1\\0\\0\end{bmatrix}+a_2\begin{bmatrix}0\\1\\0\end{bmatrix}+a_3\begin{bmatrix}1\\-1\\0\end{bmatrix}+a_4\begin{bmatrix}0\\0\\1\end{bmatrix}=\vec{0} \end{equation}

has the same solution set as (eq:init:colspaceB). In particular, $a_1=1$, $a_2=-1$, $a_3=-1$, $a_4=0$ is a non-trivial solution of (eq:init:colspaceB) and (eq:init:colspaceR). This means that the third column of $B$ and the third column of $R$ can be expressed as the first column minus the second column of their respective matrices. Indeed,
\begin{equation*}
\begin{bmatrix}3\\2\\-2\end{bmatrix}=\begin{bmatrix}2\\1\\1\end{bmatrix}-\begin{bmatrix}-1\\-1\\3\end{bmatrix}
\end{equation*}
We conclude that the third column of $B$ can be eliminated from the spanning set for $\text{col}(B)$, and
\begin{equation*}
\text{col}(B)=\text{span}\left(\begin{bmatrix}2\\1\\1\end{bmatrix},\begin{bmatrix}-1\\-1\\3\end{bmatrix},\begin{bmatrix}1\\2\\-3\end{bmatrix}\right)
\end{equation*}
Having gotten rid of one of the vectors, we need to determine whether the remaining three vectors are linearly independent. To do this we need to find all solutions of

\begin{equation} \label{eq:init:colspaceB2} b_1\begin{bmatrix}2\\1\\1\end{bmatrix}+b_2\begin{bmatrix}-1\\-1\\3\end{bmatrix}+b_3\begin{bmatrix}1\\2\\-3\end{bmatrix}=\vec{0} \end{equation} Fortunately, we do not have to start from scratch. Observe that crossing out the third column in the previous row reduction process yields the desired reduced row-echelon form:
\begin{equation*}
\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}
\end{equation*}

This time the reduced row-echelon form tells us that (eq:init:colspaceB2) has only the trivial solution. We conclude that the three vectors are linearly independent and that
\begin{equation*}
\left\{\begin{bmatrix}2\\1\\1\end{bmatrix},\begin{bmatrix}-1\\-1\\3\end{bmatrix},\begin{bmatrix}1\\2\\-3\end{bmatrix}\right\}
\end{equation*}
is a basis for $\text{col}(B)$.

The approach we took to find a basis for $\text{col}(B)$ in Exploration init:colspace uses the reduced row-echelon form of $B$. It is true, however, that any row-echelon form of $B$ could have been used in place of $\text{rref}(B)$. (Why?) We generalize the steps as follows.

Procedure proc:colspace. To find a basis for the column space of a matrix $A$, reduce $A$ to a row-echelon form and locate its pivot columns; the columns of $A$ corresponding to the pivot columns form a basis for $\text{col}(A)$.

Proof
Let $\vec{b}_1,\ldots,\vec{b}_n$ be the columns of $A$, and let $\vec{b}'_1,\ldots,\vec{b}'_n$ be the columns of $\text{rref}(A)$ (or of any row-echelon form of $A$). Observe that the equations \begin{equation} a_1\vec{b}_1+\ldots +a_n\vec{b}_n=\vec{0} \end{equation} \begin{equation} a_1\vec{b}'_1+\ldots +a_n\vec{b}'_n=\vec{0} \end{equation} have the same solution set. This means that any non-trivial relation among the columns of $\text{rref}(A)$ (or a row-echelon form of $A$) translates into a non-trivial relation among the columns of $A$. Likewise, any collection of linearly independent columns of $\text{rref}(A)$ (or a row-echelon form of $A$) corresponds to linearly independent columns of $A$.

By Theorems th:rowsrreflinind and th:rowsofreflinind, the pivot columns of $\text{rref}(A)$ (or of a row-echelon form of $A$) are linearly independent. Therefore the corresponding columns of $A$ are linearly independent. Non-pivot columns can be expressed as linear combinations of the pivot columns, so they contribute nothing to the span and can be removed from the spanning set.

The proof of Procedure proc:colspace shows that the number of basis elements for the column space of a matrix equals the number of pivot columns. But the number of pivot columns is the same as the number of pivots in a row-echelon form, which in turn equals the number of nonzero rows, that is, the rank of the matrix. This gives us the following important result: for any matrix $A$,
\begin{equation*}
\dim(\text{col}(A))=\text{rank}(A)
\end{equation*}
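We can verify this for the matrix $B$ of Exploration init:colspace: $\text{rref}(B)$ has pivots in columns $1$, $2$, and $4$, so
\begin{equation*}
\dim(\text{col}(B))=3=\text{rank}(B)
\end{equation*}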

The Null Space

Example ex:nullintro allows us to make an important observation: sums and scalar multiples of vectors in $\text{null}(A)$ are again contained in $\text{null}(A)$. This means that $\text{null}(A)$ is closed under vector addition and scalar multiplication. Recall that this property makes $\text{null}(A)$ a subspace of $\mathbb{R}^n$. This result was first presented as Practice Problem prob:null(A)_is_subspace. We now formalize it as a theorem: the null space of an $m\times n$ matrix $A$ is a subspace of $\mathbb{R}^n$.

Proof
To show that $\text{null}(A)$ is closed under vector addition and scalar multiplication, we will show that a linear combination of any two elements of $\text{null}(A)$ is contained in $\text{null}(A)$.

Suppose $\vec{x}_1$ and $\vec{x}_2$ are in $\text{null}(A)$. Then $A\vec{x}_1=\vec{0}$ and $A\vec{x}_2=\vec{0}$. But then
\begin{equation*}
A(c_1\vec{x}_1+c_2\vec{x}_2)=c_1A\vec{x}_1+c_2A\vec{x}_2=c_1\vec{0}+c_2\vec{0}=\vec{0}
\end{equation*}
We conclude that $c_1\vec{x}_1+c_2\vec{x}_2$ is also in $\text{null}(A)$.
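As a quick sanity check with a hypothetical matrix (chosen only for illustration), let $A=\begin{bmatrix}1&-1\end{bmatrix}$, so that $\vec{x}_1=\begin{bmatrix}1\\1\end{bmatrix}$ and $\vec{x}_2=\begin{bmatrix}2\\2\end{bmatrix}$ are in $\text{null}(A)$. Then
\begin{equation*}
A(3\vec{x}_1-\vec{x}_2)=3A\vec{x}_1-A\vec{x}_2=3\vec{0}-\vec{0}=\vec{0}
\end{equation*}
so the combination $3\vec{x}_1-\vec{x}_2=\begin{bmatrix}1\\1\end{bmatrix}$ is also in $\text{null}(A)$.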

It is not a coincidence that the steps we used in Example ex:dimnull produced linearly independent vectors, and it is worthwhile to try to understand why this procedure will always produce linearly independent vectors.

Take a closer look at the elements of the null space: the parameter in the third component of $\vec{x}$ produces a $1$ in the third component of the first vector and a $0$ in the third component of the second vector, while the parameter in the fifth component of $\vec{x}$ produces a $1$ in the fifth component of the second vector and a $0$ in the fifth component of the first vector. This makes it clear that the two vectors are linearly independent.

This pattern will hold for any number of parameters, with each parameter producing a $1$ in exactly one vector and $0$s in the corresponding components of the other vectors.

Therefore, vectors obtained in this way will always be linearly independent.
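Here is a hypothetical example of this pattern (the entries are chosen only for illustration, not taken from Example ex:dimnull). Suppose
\begin{equation*}
\text{rref}(A)=\begin{bmatrix}1&0&2&0&1\\0&1&-1&0&3\\0&0&0&1&-2\end{bmatrix}
\end{equation*}
so that $x_3=s$ and $x_5=t$ are free variables. Back substitution gives
\begin{equation*}
\vec{x}=s\begin{bmatrix}-2\\1\\1\\0\\0\end{bmatrix}+t\begin{bmatrix}-1\\-3\\0\\2\\1\end{bmatrix}
\end{equation*}
The first vector has a $1$ in its third component where the second has a $0$, and the second has a $1$ in its fifth component where the first has a $0$, so no non-trivial linear combination of the two can equal $\vec{0}$.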

Rank and Nullity Theorem

We know that the dimension of the row space and the dimension of the column space of a matrix are the same and are equal to the rank of the matrix (or the number of nonzero rows in any row-echelon form of the matrix).

As we observed in Example ex:dimnull, the dimension of the null space of a matrix (called the nullity of the matrix) is equal to the number of free variables in the solution vector of the homogeneous system associated with the matrix. Since the number of pivots and the number of free variables add up to the number of columns in a matrix (Theorem th:rankandsolutions), we have the following significant result (the Rank-Nullity Theorem, Theorem th:matrixranknullity): for any $m\times n$ matrix $A$,
\begin{equation*}
\text{rank}(A)+\dim(\text{null}(A))=n
\end{equation*}
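For the matrix $B$ of Exploration init:colspace this checks out: $\text{rref}(B)$ has three pivots and one free variable, so
\begin{equation*}
\text{rank}(B)+\dim(\text{null}(B))=3+1=4
\end{equation*}
which is the number of columns of $B$.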

We will see the geometric implications of this theorem when we study linear transformations.

Practice Problems

Problems prob:colrowmatrixA1-prob:colrowmatrixA4

Let $A$ be the given matrix.

Find $\text{rref}(A)$.
Use $\text{rref}(A)$ and the procedure outlined in Example ex:basisrowspace to find a basis for $\text{row}(A)$.

Basis for $\text{row}(A)$:

Use Procedure proc:colspace to find a basis for $\text{col}(A)$.

Basis for $\text{col}(A)$:

Problems prob:colrowmatrixB1-prob:colrowmatrixB4

Let $B$ be the given matrix.

Find $\text{rref}(B)$.
Use $\text{rref}(B)$ and the procedure outlined in Example ex:basisrowspace to find a basis for $\text{row}(B)$.

Basis for $\text{row}(B)$:

Use Procedure proc:colspace to find a basis for $\text{col}(B)$.

Basis for $\text{col}(B)$:

Prove that
Find a basis for if
Find a basis for the column space of a matrix whose columns are the given vectors.

Problems prob:nullABC1-prob:nullABC2

This problem will refer to matrices $A$ and $B$ of Problems prob:colrowmatrixA1 and prob:colrowmatrixB1.

Find a basis for $\text{null}(A)$.

Basis for $\text{null}(A)$:

Demonstrate that the Rank-Nullity Theorem (Theorem th:matrixranknullity) holds for $A$.

Explain how you can quickly tell that the two vectors you selected for your basis are linearly independent.

Find a basis for $\text{null}(B)$.

Basis for $\text{null}(B)$:

Demonstrate that the Rank-Nullity Theorem (Theorem th:matrixranknullity) holds for $B$.

Problems prob:nullM1-prob:nullM2

Suppose matrix $M$ is such that:

Follow the process used in Example ex:dimnull to find a basis for $\text{null}(M)$. Explain why the basis elements obtained in this way are linearly independent.

Basis of $\text{null}(M)$:

Let $\vec{m}_1,\vec{m}_2,\ldots$ denote the columns of $M$. Express one of the columns of $M$ as a linear combination of two of the other columns.

Answer:

Suppose $A$ is a matrix. Which of the following statements could be true?
Suppose $A$ is a matrix. Which of the following statements could be true?
Complete the proof of Theorem th:rowBrowA by showing that adding a scalar multiple of one row of a matrix to another row does not change the row space.