Tedious Proofs Concerning Determinants

In Finding the Determinant we described the determinant as a function that assigns a scalar to every square matrix. The value of the function in the original definition was given by cofactor expansion along the first row of the matrix. We also observed, through examples, that cofactor expansion along any row or column produces the same value. Examples, however, do not constitute a sufficient proof of equivalency of different cofactor expansions. In this section we will prove that cofactor expansions along any row or column produce the same outcome. This result is known as the Laplace Expansion Theorem. We will also prove several results concerning elementary row operations.

Cofactor Expansion Along the Top Row

Let
$$A=\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{bmatrix}$$

Definition def:threebythreedet of cofactor expansion along the top row for a $3\times 3$ matrix requires three minor matrices associated with $A$.

  • $A_{11}$ is obtained from $A$ by deleting the first row and the first column of $A$.

  • $A_{12}$ is obtained from $A$ by deleting the first row and the second column of $A$.

  • $A_{13}$ is obtained from $A$ by deleting the first row and the third column of $A$.

The determinant of $A$ is given by
$$\det A = a_{11}\det A_{11} - a_{12}\det A_{12} + a_{13}\det A_{13}$$

Now we are ready for an $n\times n$ matrix. Let $A=[a_{ij}]$ be an $n\times n$ matrix. Define $A_{1j}$ to be the $(n-1)\times(n-1)$ matrix obtained from $A$ by deleting the first row and the $j$th column of $A$. We say that $A_{1j}$ is the $(1,j)$-minor of $A$.

For a $3\times 3$ matrix we have
$$\det A = a_{11}\det A_{11} - a_{12}\det A_{12} + a_{13}\det A_{13}$$
We want to follow the same pattern to define the determinant of a larger matrix. A distinct feature of this expression is the alternating sign pattern. We want to preserve this feature as we increase matrix size:
$$\det A = a_{11}\det A_{11} - a_{12}\det A_{12} + a_{13}\det A_{13} - \dots + (-1)^{1+n}a_{1n}\det A_{1n}$$

Naturally, we would like to condense this formula. To accomplish this, let
$$C_{1j} = (-1)^{1+j}\det A_{1j}$$
We will refer to $C_{1j}$ as the $(1,j)$-cofactor of $A$. When we use the cofactor notation, the expression in Definition def:toprowexpansion turns into the following:
$$\det A = a_{11}C_{11} + a_{12}C_{12} + \dots + a_{1n}C_{1n}$$

This process of computing the determinant is called the cofactor expansion along the first row.
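As an illustration (not part of the original text), the first-row expansion above can be sketched in Python. The helper names `minor` and `det` are our own choices, and a naive recursion like this is meant for clarity, not efficiency.

```python
# Sketch of cofactor expansion along the first row (illustrative only).

def minor(A, i, j):
    """The matrix obtained from A by deleting row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    # det A = a11*det(A11) - a12*det(A12) + ... with alternating signs
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))
```

For example, `det([[1, 2], [3, 4]])` evaluates to `-2`.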

Cofactor Expansion Along the First Column

As we have observed in several examples in Finding the Determinant, cofactor expansion along the first column produces the same result as cofactor expansion along the top row. We will now formalize the process of cofactor expansion along the first column for an $n\times n$ matrix and prove that this process produces the same result as our original definition of the determinant. Let $A=[a_{ij}]$ be an $n\times n$ matrix. Define $A_{i1}$ to be the $(n-1)\times(n-1)$ matrix obtained from $A$ by deleting the first column and the $i$th row of $A$. We say that $A_{i1}$ is the $(i,1)$-minor of $A$.

Define
$$C_{i1} = (-1)^{i+1}\det A_{i1}$$
to be the $(i,1)$-cofactor of $A$.

Proof of Definition Equivalence

We will now show that cofactor expansion along the first row produces the same result as cofactor expansion along the first column; that is,
$$a_{11}C_{11} + a_{12}C_{12} + \dots + a_{1n}C_{1n} = a_{11}C_{11} + a_{21}C_{21} + \dots + a_{n1}C_{n1}$$

Proof
We will proceed by induction on $n$. Clearly, the result holds for $n=2$. Just for practice you should also verify the equality for $n=3$. (See Practice Problem prob:extrainductionsteps.) We will assume that the result holds for $(n-1)\times(n-1)$ matrices and show that it must hold for $n\times n$ matrices.

You will find the following matrix a useful reference as we proceed.
$$A=\begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn}\end{bmatrix}$$

For convenience, we will refer to the Right-Hand Side (RHS) and the Left-Hand Side (LHS) of the equality we are trying to prove:
$$\underbrace{a_{11}C_{11} + a_{12}C_{12} + \dots + a_{1n}C_{1n}}_{\text{LHS}} = \underbrace{a_{11}C_{11} + a_{21}C_{21} + \dots + a_{n1}C_{n1}}_{\text{RHS}}$$

Note that the first term $a_{11}C_{11}$ is the same for LHS and RHS, so we will only need to consider the entries $a_{1j}$ and $a_{i1}$ with $i,j\geq 2$.

We will start by analyzing RHS. Consider an arbitrary entry $a_{i1}$ ($i\geq 2$) of the first column. This entry will only appear in the term $a_{i1}C_{i1}=(-1)^{i+1}a_{i1}\det A_{i1}$. We will find $\det A_{i1}$ by cofactor expansion along the first row. As we proceed, we have to pay special attention to the subscripts. Because the first column of $A$ was removed, the $(j-1)$th column of $A_{i1}$ contains entries from the $j$th column of $A$.

Note that the entry $a_{1j}$ ($j\geq 2$) will only appear in the term
$$(-1)^{1+(j-1)}a_{1j}\det\big((A_{i1})_{1,j-1}\big)$$
So, after we distribute $(-1)^{i+1}a_{i1}$, RHS will contain only one term of the form
$$(-1)^{i+1}(-1)^{j}a_{i1}a_{1j}\det\big((A_{i1})_{1,j-1}\big)$$
We will perform a similar analysis on LHS. Consider an arbitrary entry $a_{1j}$ ($j\geq 2$) of the first row. This entry will only appear in the term $a_{1j}C_{1j}=(-1)^{1+j}a_{1j}\det A_{1j}$. Invoking the induction hypothesis, we will find $\det A_{1j}$ by cofactor expansion along the first column. The entry $a_{i1}$ will only appear in the term
$$(-1)^{(i-1)+1}a_{i1}\det\big((A_{1j})_{i-1,1}\big)$$
So, after we distribute $(-1)^{1+j}a_{1j}$, LHS will contain only one term of the form
$$(-1)^{1+j}(-1)^{i}a_{1j}a_{i1}\det\big((A_{1j})_{i-1,1}\big)$$
But RHS also has only one term of this form. We now need to show that these two terms are equal. The two terms are
$$(-1)^{i+j+1}a_{i1}a_{1j}\det\big((A_{i1})_{1,j-1}\big)\quad\text{and}\quad(-1)^{i+j+1}a_{1j}a_{i1}\det\big((A_{1j})_{i-1,1}\big)$$
Observe that $(A_{i1})_{1,j-1}$ and $(A_{1j})_{i-1,1}$ are the same matrix because both were obtained from $A$ by deleting the first and the $i$th rows of $A$, and the first and the $j$th columns of $A$. Therefore the two terms are equal.

We conclude that the terms of LHS and RHS match. This establishes the desired equality.

Now we know that cofactor expansion along the first row and cofactor expansion along the first column produce the same result, so either expansion can be used to find the determinant.
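As a quick numerical sanity check (not a proof), we can compare the two expansions on a sample matrix in Python; the helper names `det_row` and `det_col` are ours, not from the text.

```python
# First-row expansion and first-column expansion agree numerically.

def minor(A, i, j):
    """Delete row i and column j (0-indexed) from A."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det_row(A):
    """Cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det_row(minor(A, 0, j)) for j in range(len(A)))

def det_col(A):
    """Cofactor expansion along the first column."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det_col(minor(A, i, 0)) for i in range(len(A)))

A = [[2, -1, 3, 0],
     [1,  0, 2, 4],
     [0,  5, 1, 1],
     [3,  2, 0, 2]]
assert det_row(A) == det_col(A)
```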

Proof of Results Concerning Elementary Row Operations

In Elementary Row Operations we observed, without proof, the following properties of the determinant. (See Theorem th:elemrowopsanddet.)

  • If $B$ is obtained from $A$ by interchanging two rows of $A$, then $\det B = -\det A$.

  • If $B$ is obtained from $A$ by multiplying one of the rows of $A$ by a non-zero constant $k$, then $\det B = k\det A$.

  • If $B$ is obtained from $A$ by adding a multiple of one row of $A$ to another row, then $\det B = \det A$.

We now prove these properties.

Proof of Theorem th:elemrowopsanddetitem:rowswapanddetSUMM
We will start by showing that the result holds if two consecutive rows are interchanged. Suppose $B$ is obtained from $A$ by swapping rows $i$ and $i+1$ of $A$; we must show that $\det B = -\det A$.

We proceed by induction on $n$. The result is not applicable for $n=1$. In Practice Problem prob:proofofrowswapanddet, you will be asked to verify that the result holds for $2\times 2$ matrices. Suppose that the result holds for $(n-1)\times(n-1)$ matrices. We need to show that it holds for $n\times n$ matrices. You may find the following diagram useful throughout the proof.
$$A=\begin{bmatrix} a_{11} & \dots & a_{1n}\\ \vdots & & \vdots\\ a_{i1} & \dots & a_{in}\\ a_{(i+1)1} & \dots & a_{(i+1)n}\\ \vdots & & \vdots\\ a_{n1} & \dots & a_{nn}\end{bmatrix},\qquad B=\begin{bmatrix} a_{11} & \dots & a_{1n}\\ \vdots & & \vdots\\ a_{(i+1)1} & \dots & a_{(i+1)n}\\ a_{i1} & \dots & a_{in}\\ \vdots & & \vdots\\ a_{n1} & \dots & a_{nn}\end{bmatrix}$$

Observe that for $k\neq i,\,i+1$ we have $b_{k1}=a_{k1}$. Because $B_{k1}$ is obtained from $A_{k1}$ by switching two rows of $A_{k1}$, our induction hypothesis gives us:
$$\det B_{k1} = -\det A_{k1}$$

For $k=i$ and $k=i+1$ we have:
$$b_{i1} = a_{(i+1)1},\qquad b_{(i+1)1} = a_{i1},\qquad B_{i1} = A_{(i+1)1},\qquad B_{(i+1)1} = A_{i1}$$

We compute the determinant of $B$ by cofactor expansion along the first column.
$$\begin{aligned}\det B &= \sum_{k=1}^{n}(-1)^{k+1}b_{k1}\det B_{k1}\\ &= \sum_{k\neq i,\,i+1}(-1)^{k+1}a_{k1}\big(-\det A_{k1}\big) + (-1)^{i+1}a_{(i+1)1}\det A_{(i+1)1} + (-1)^{i+2}a_{i1}\det A_{i1}\\ &= -\sum_{k\neq i,\,i+1}(-1)^{k+1}a_{k1}\det A_{k1} - (-1)^{(i+1)+1}a_{(i+1)1}\det A_{(i+1)1} - (-1)^{i+1}a_{i1}\det A_{i1}\\ &= -\det A\end{aligned}$$

If two non-adjacent rows are switched, then the switch can be carried out by performing an odd number of adjacent row interchanges (See Practice Problem prob:numberofrowswitches), so the result still holds.
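On a concrete matrix, the row-swap property can be checked numerically (an illustration, not a proof; the `det` helper below is our own recursive first-row expansion):

```python
# Swapping two rows negates the determinant (checked on a sample matrix).

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
B = [A[1], A[0], A[2]]      # interchange the first two rows
assert det(B) == -det(A)    # the sign of the determinant flips
```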

Proof of Theorem th:elemrowopsanddetitem:rowconstantmultanddetSUMM
We proceed by induction on $n$. Clearly the statement is true for $n=1$. Just for fun, you might want to verify directly that it holds for $2\times 2$ matrices. Now suppose the statement is true for all $(n-1)\times(n-1)$ matrices. We will show that it holds for $n\times n$ matrices.

Suppose $B$ is obtained from $A$ by multiplying the $i$th row of $A$ by a constant $k$.

We compute the determinant of $B$ by cofactor expansion along the first column. For $m\neq i$ we have $b_{m1}=a_{m1}$, and $B_{m1}$ is obtained from $A_{m1}$ by multiplying one of its rows by $k$, so the induction hypothesis gives $\det B_{m1}=k\det A_{m1}$. For $m=i$ we have $b_{i1}=ka_{i1}$ and $B_{i1}=A_{i1}$. Therefore
$$\det B = \sum_{m=1}^{n}(-1)^{m+1}b_{m1}\det B_{m1} = \sum_{m\neq i}(-1)^{m+1}a_{m1}\,k\det A_{m1} + (-1)^{i+1}ka_{i1}\det A_{i1} = k\det A$$
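The scaling property can likewise be checked on a small example (illustrative only; the `det` helper is our own recursive first-row expansion):

```python
# Multiplying one row by k multiplies the determinant by k (sample check).

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
k = 3
B = [A[0], [k * x for x in A[1]], A[2]]   # multiply the second row by k
assert det(B) == k * det(A)
```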

Before we tackle the proof of Part item:addmultotherrowdetSUMM of Theorem th:elemrowopsanddet, we will need to prove the following lemma: if $A$, $B$ and $C$ are $n\times n$ matrices that are identical except for the $i$th row, and the $i$th row of $A$ is the sum of the $i$th rows of $B$ and $C$, then $\det A = \det B + \det C$.

Proof
We will proceed by induction on $n$. We leave it to the reader to verify the cases $n=1$ and $n=2$. We will assume that the statement holds for all $(n-1)\times(n-1)$ matrices and show that it holds for $n\times n$ matrices.

You may find the following representations of $A$, $B$ and $C$ helpful. Identical entries in $A$, $B$ and $C$ are labeled $a_{pq}$.
$$A=\begin{bmatrix} a_{11} & \dots & a_{1n}\\ \vdots & & \vdots\\ b_{i1}+c_{i1} & \dots & b_{in}+c_{in}\\ \vdots & & \vdots\\ a_{n1} & \dots & a_{nn}\end{bmatrix},\quad B=\begin{bmatrix} a_{11} & \dots & a_{1n}\\ \vdots & & \vdots\\ b_{i1} & \dots & b_{in}\\ \vdots & & \vdots\\ a_{n1} & \dots & a_{nn}\end{bmatrix},\quad C=\begin{bmatrix} a_{11} & \dots & a_{1n}\\ \vdots & & \vdots\\ c_{i1} & \dots & c_{in}\\ \vdots & & \vdots\\ a_{n1} & \dots & a_{nn}\end{bmatrix}$$

Observe that $A_{i1}=B_{i1}=C_{i1}$. For $m\neq i$, the minors $A_{m1}$, $B_{m1}$ and $C_{m1}$ are identical except for one row, and that row of $A_{m1}$ is the sum of the corresponding rows of $B_{m1}$ and $C_{m1}$, so the induction hypothesis gives us
$$\det A_{m1} = \det B_{m1} + \det C_{m1}$$

We now compute the determinant of $A$ by cofactor expansion along the first column.
$$\begin{aligned}\det A &= \sum_{m\neq i}(-1)^{m+1}a_{m1}\det A_{m1} + (-1)^{i+1}(b_{i1}+c_{i1})\det A_{i1}\\ &= \sum_{m\neq i}(-1)^{m+1}a_{m1}\big(\det B_{m1}+\det C_{m1}\big) + (-1)^{i+1}b_{i1}\det B_{i1} + (-1)^{i+1}c_{i1}\det C_{i1}\\ &= \det B + \det C\end{aligned}$$
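The lemma can be illustrated on a concrete triple of matrices (a numerical check under our own recursive `det` helper, not a proof):

```python
# If A, B, C agree except in one row, and that row of A is the sum of the
# corresponding rows of B and C, then det A = det B + det C (sample check).

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
C = [[1, 2, 3], [1, 1, 1], [7, 8, 10]]
# A agrees with B and C except in the second row, which is the sum of theirs
A = [B[0], [b + c for b, c in zip(B[1], C[1])], B[2]]
assert det(A) == det(B) + det(C)
```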

We are now ready to finish the proof of Theorem th:elemrowopsanddet.

Proof of Theorem th:elemrowopsanddetitem:addmultotherrowdetSUMM
Suppose $B$ is obtained from $A$ by adding $k$ times row $i$ of $A$ to row $j$ of $A$ ($i\neq j$ and $k\neq 0$). You may find the following representations of $A$ and $B$ useful, where $R_i$ and $R_j$ denote the $i$th and $j$th rows of $A$.
$$A=\begin{bmatrix}\vdots\\ R_i\\ \vdots\\ R_j\\ \vdots\end{bmatrix},\qquad B=\begin{bmatrix}\vdots\\ R_i\\ \vdots\\ R_j+kR_i\\ \vdots\end{bmatrix}$$
We will form another matrix $C$ by replacing the $j$th row of $A$ with $k$ times the $i$th row:
$$C=\begin{bmatrix}\vdots\\ R_i\\ \vdots\\ kR_i\\ \vdots\end{bmatrix}$$
Observe that matrices $A$, $B$ and $C$ are identical except for the $j$th row, and the $j$th row of matrix $B$ is the sum of the $j$th rows of $A$ and $C$. Thus, by Lemma lemma:arowsumofbc, we have:
$$\det B = \det A + \det C$$
Since one row of $C$ is a scalar multiple of another row, we know $\det C = 0$ (see Practice Problem prob:kAdet). Therefore $\det B = \det A$.
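This property, too, can be checked on a small example (illustrative only; `det` is our own recursive first-row expansion):

```python
# Adding a multiple of one row to another leaves the determinant unchanged.

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
k = 5
B = [A[0], A[1], [a + k * r for a, r in zip(A[2], A[0])]]  # R3 -> R3 + 5*R1
assert det(B) == det(A)
```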

The Laplace Expansion Theorem

As we have seen in examples, the value of the determinant can be computed by expanding along any row or column. This result is known as the Laplace Expansion Theorem. We begin by generalizing some earlier definitions.

Given an $n\times n$ matrix $A=[a_{ij}]$, define the $(i,j)$-minor $A_{ij}$ to be the $(n-1)\times(n-1)$ matrix obtained from $A$ by deleting the $i$th row and the $j$th column of $A$.

Define the $(i,j)$-cofactor of $A$ by
$$C_{ij} = (-1)^{i+j}\det A_{ij}$$
Note that the sign $(-1)^{i+j}$ of $C_{ij}$ follows a checkerboard pattern:
$$\begin{bmatrix} + & - & + & \dots\\ - & + & - & \dots\\ + & - & + & \dots\\ \vdots & \vdots & \vdots & \ddots\end{bmatrix}$$
The Laplace Expansion Theorem states that the determinant can be computed by cofactor expansion along any row or any column: for every row index $i$ and column index $j$,
$$\det A = a_{i1}C_{i1} + a_{i2}C_{i2} + \dots + a_{in}C_{in} = a_{1j}C_{1j} + a_{2j}C_{2j} + \dots + a_{nj}C_{nj}$$

Proof
We will start by showing that cofactor expansion along column $j$ produces the same result as cofactor expansion along the first column. Observe that column $j$ can be shifted into the first column position by $j-1$ consecutive column switches. Let $B$ be the matrix obtained from $A$ by performing the necessary column switches. Then $\det B = (-1)^{j-1}\det A$, the first column of $B$ is the $j$th column of $A$, and $B_{i1}=A_{ij}$ for every $i$. Expanding $B$ along its first column gives
$$\det A = (-1)^{j-1}\det B = (-1)^{j-1}\sum_{i=1}^{n}(-1)^{i+1}a_{ij}\det A_{ij} = \sum_{i=1}^{n}(-1)^{i+j}a_{ij}\det A_{ij} = \sum_{i=1}^{n}a_{ij}C_{ij}$$

The fact that the determinant of $A$ can also be computed by cofactor expansion along any row follows from the fact that $\det A = \det A^T$. (See Theorem th:detoftrans.)
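As a final numerical illustration of the Laplace Expansion Theorem (a check on one sample matrix, not a proof; the helper names are ours), expansion along every row and every column of a matrix produces the same value:

```python
# Every row expansion and every column expansion yields the same determinant.

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def expand_row(A, i):
    # det A = sum_j a_{ij} * C_{ij}, with C_{ij} = (-1)^(i+j) det A_{ij}
    return sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j)) for j in range(len(A)))

def expand_col(A, j):
    return sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j)) for i in range(len(A)))

A = [[2, -1, 3, 0],
     [1,  0, 2, 4],
     [0,  5, 1, 1],
     [3,  2, 0, 2]]
n = len(A)
values = [expand_row(A, i) for i in range(n)] + [expand_col(A, j) for j in range(n)]
assert len(set(values)) == 1  # all eight expansions agree
```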

Practice Problems

Complete the proof of Theorem th:elemrowopsanddetitem:rowswapanddet by showing that the result holds for a $2\times 2$ matrix.
Let $R_i$ and $R_j$ be two rows of a matrix, with $i<j$. Show that the switch of $R_i$ and $R_j$ requires $2(j-i)-1$ adjacent row interchanges.