Determinants and Inverses of Nonsingular Matrices

Combining the results of Theorem th:detofsingularmatrix and Theorem th:nonsingularequivalency1 shows that the following statements about an $n\times n$ matrix $A$ are equivalent:

  • $\det{A}\neq 0$
  • $A^{-1}$ exists
  • Any equation $A\vec{x}=\vec{b}$ has a unique solution

In this section we will take a closer look at the relationship between the determinant of a nonsingular matrix $A$, the solution to the system $A\vec{x}=\vec{b}$, and the inverse of $A$.

Cramer’s Rule

We begin by establishing a formula that allows us to express the unique solution to the system $A\vec{x}=\vec{b}$ in terms of the determinant of $A$, for a nonsingular matrix $A$. This formula is called Cramer's rule.

Consider the system
\begin{align*} a_{11}x_1+a_{12}x_2&=b_1\\ a_{21}x_1+a_{22}x_2&=b_2 \end{align*}

The system can be written as a matrix equation
\begin{equation*} \begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}b_1\\b_2\end{bmatrix} \end{equation*}

Using one of our standard methods for solving systems we find that
\begin{equation*} x_1=\frac{b_1a_{22}-b_2a_{12}}{a_{11}a_{22}-a_{12}a_{21}},\quad x_2=\frac{b_2a_{11}-b_1a_{21}}{a_{11}a_{22}-a_{12}a_{21}} \end{equation*}

Observe that the denominators in the expressions for $x_1$ and $x_2$ are the same and equal to $\det{A}$, where $A$ is the coefficient matrix.

A close examination shows that the numerators of the expressions for $x_1$ and $x_2$ can also be interpreted as determinants of matrices. The numerator of the expression for $x_1$ is the determinant of the matrix that is formed by replacing the first column of $A$ with $\vec{b}$. The numerator of the expression for $x_2$ is the determinant of the matrix that is formed by replacing the second column of $A$ with $\vec{b}$. Thus, $x_1$ and $x_2$ can be written as
\begin{equation*} x_1=\frac{\det{\begin{bmatrix}b_1&a_{12}\\b_2&a_{22}\end{bmatrix}}}{\det{A}},\quad x_2=\frac{\det{\begin{bmatrix}a_{11}&b_1\\a_{21}&b_2\end{bmatrix}}}{\det{A}} \end{equation*}

Note that a unique solution to the system exists if and only if the determinant of the coefficient matrix $A$ is not zero.
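As a quick numerical sanity check, the $2\times 2$ case above can be sketched in Python (the function names `det2` and `cramer2` are our own, not from the text):

```python
def det2(a11, a12, a21, a22):
    """Determinant of the 2x2 matrix [[a11, a12], [a21, a22]]."""
    return a11 * a22 - a12 * a21

def cramer2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x1 + a12*x2 = b1, a21*x1 + a22*x2 = b2 by Cramer's rule."""
    d = det2(a11, a12, a21, a22)
    if d == 0:
        raise ValueError("coefficient matrix is singular")
    # Numerators: determinant of A with column 1 (resp. column 2) replaced by b.
    x1 = det2(b1, a12, b2, a22) / d
    x2 = det2(a11, b1, a21, b2) / d
    return x1, x2
```

For example, `cramer2(2, 1, 1, 3, 5, 10)` solves the system $2x_1+x_2=5$, $x_1+3x_2=10$ and returns `(1.0, 3.0)`.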

It turns out that the solution to any square system $A\vec{x}=\vec{b}$ can be expressed using ratios of determinants, provided that $A$ is nonsingular. The general formula for the $i$th component of the solution vector is a ratio whose denominator is $\det{A}$ and whose numerator is the determinant of the matrix obtained from $A$ by replacing its $i$th column with $\vec{b}$.

To formalize this expression, we need to introduce some notation. Given an $n\times n$ matrix $A$ and a vector $\vec{b}$ in $\mathbb{R}^n$, we use $A_i(\vec{b})$ to denote the matrix obtained from $A$ by replacing the $i$th column of $A$ with $\vec{b}$. In other words, \begin{equation} \label{eq:AiNotation} A_i(\vec{b})=\begin{bmatrix} | & |& &|&|&|&&|\\ \vec{a}_1 & \vec{a}_2&\dots &\vec{a}_{i-1}&\vec{b}&\vec{a}_{i+1}&\dots &\vec{a}_n\\ | & |& &|&|&|&&| \end{bmatrix} \end{equation} Using our new notation, we can write the $i$th component of the solution vector as
\begin{equation*} x_i=\frac{\det{A_i(\vec{b})}}{\det{A}} \end{equation*}
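The notation $A_i(\vec{b})$ translates directly into code. The following sketch (our own helper names; columns are 0-indexed in Python, while the text counts from 1) builds $A_i(\vec{b})$ and applies the general formula, computing determinants by cofactor expansion along the first row:

```python
def det(M):
    """Determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def replace_column(A, i, b):
    """Return A_i(b): a copy of A with column i replaced by the vector b."""
    return [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]

def cramer(A, b):
    """Solve A x = b componentwise via x_i = det(A_i(b)) / det(A)."""
    d = det(A)
    if d == 0:
        raise ValueError("A is singular")
    return [det(replace_column(A, i, b)) / d for i in range(len(A))]
```

For instance, `cramer([[1, 2], [3, 4]], [5, 11])` returns `[1.0, 2.0]`.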

We will work through a couple of examples before proving this result as a theorem.

We are now ready to state and prove Cramer's rule as a theorem.

Theorem (Cramer's Rule) Let $A$ be an $n\times n$ nonsingular matrix and let $\vec{b}$ be a vector in $\mathbb{R}^n$. Then the components of the unique solution $\vec{x}$ of $A\vec{x}=\vec{b}$ are given by
\begin{equation*} x_i=\frac{\det{A_i(\vec{b})}}{\det{A}},\quad i=1,\dots,n \end{equation*}

Proof
For this proof we will need to think of matrices in terms of their columns. Thus,
\begin{equation*} A=\begin{bmatrix} | & |& &|\\ \vec{a}_1 & \vec{a}_2&\dots &\vec{a}_n\\ | & |& &| \end{bmatrix} \end{equation*}
We will also need the $n\times n$ identity matrix $I$. The columns of $I$ are the standard unit vectors $\vec{e}_1,\dots,\vec{e}_n$. Recall that $A\vec{e}_j=\vec{a}_j$ for each $j$.

Similarly, $I_i(\vec{x})$ denotes the matrix obtained from $I$ by replacing the $i$th column of $I$ with $\vec{x}$. Observe that $x_i$ is the only non-zero entry in the $i$th row of $I_i(\vec{x})$, and the corresponding minor is an identity matrix of size $n-1$. Cofactor expansion along the $i$th row gives us \begin{equation} \label{eq:cramerix}\det{I_i(\vec{x})}=x_i \end{equation} Now, consider the product \begin{align*} A\Big (I_i(\vec{x})\Big )&=\begin{bmatrix} | & |& &|\\ \vec{a}_1 & \vec{a}_2&\dots &\vec{a}_n\\ | & |& &| \end{bmatrix}\begin{bmatrix} | & |& &|&|&|&&|\\ \vec{e}_1 & \vec{e}_2&\dots &\vec{e}_{i-1}&\vec{x}&\vec{e}_{i+1}&\dots &\vec{e}_n\\ | & |& &|&|&|&&| \end{bmatrix}\\ &=\begin{bmatrix} | & |& &|&|&|&&|\\ \vec{a}_1 & \vec{a}_2&\dots &\vec{a}_{i-1}&A\vec{x}&\vec{a}_{i+1}&\dots &\vec{a}_n\\ | & |& &|&|&|&&| \end{bmatrix}\\ &=\begin{bmatrix} | & |& &|&|&|&&|\\ \vec{a}_1 & \vec{a}_2&\dots &\vec{a}_{i-1}&\vec{b}&\vec{a}_{i+1}&\dots &\vec{a}_n\\ | & |& &|&|&|&&| \end{bmatrix}=A_i(\vec{b}) \end{align*}

By the multiplicative property of determinants, this gives us
\begin{equation*} \det{A}\cdot \det{I_i(\vec{x})}=\det{A_i(\vec{b})} \end{equation*}
By our earlier observation in (eq:cramerix) we have
\begin{equation*} \det{A}\cdot x_i=\det{A_i(\vec{b})} \end{equation*}
$A$ is nonsingular, so $\det{A}\neq 0$. Thus
\begin{equation*} x_i=\frac{\det{A_i(\vec{b})}}{\det{A}} \end{equation*}
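As a concrete illustration of (eq:cramerix) (an example of our own, with $n=3$ and $i=2$): expanding along the second row of $I_2(\vec{x})$, the only non-zero entry is $x_2$, its sign is $(-1)^{2+2}=+1$, and the corresponding minor is the $2\times 2$ identity matrix, so
\begin{equation*} \det{I_2(\vec{x})}=\det{\begin{bmatrix}1&x_1&0\\0&x_2&0\\0&x_3&1\end{bmatrix}}=x_2\det{\begin{bmatrix}1&0\\0&1\end{bmatrix}}=x_2 \end{equation*}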

Finding the determinant of a large matrix is computationally expensive. Because Cramer's rule requires computing many determinants, it is not a computationally efficient way of solving a system of equations. However, Cramer's rule is often used for small systems in applications arising in economics and the natural and social sciences, particularly when solving for only a subset of the variables.

Adjugate Formula for the Inverse of a Matrix

In Practice Problem prob:inverseformula we used the row reduction algorithm to show that if $A=\begin{bmatrix}a&b\\c&d\end{bmatrix}$ is nonsingular then

\begin{equation} \label{eq:twobytwoinverse}A^{-1}=\frac{1}{\det{A}}\begin{bmatrix}d&-b\\-c&a\end{bmatrix} \end{equation} This formula is a special case of a general formula for the inverse of a nonsingular square matrix. Just like the formula for a $2\times 2$ matrix, the general formula includes the coefficient $\frac{1}{\det{A}}$ and a matrix related to the original matrix $A$. We will now derive the general formula using Cramer's rule.
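The $2\times 2$ formula in (eq:twobytwoinverse) is easy to verify numerically. A minimal Python check (our own function names) multiplies $A$ by the claimed inverse and confirms the product is the identity:

```python
def inverse2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the formula (1/det A) [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

For $A=\begin{bmatrix}2&1\\1&1\end{bmatrix}$, `matmul2([[2, 1], [1, 1]], inverse2(2, 1, 1, 1))` returns the identity matrix.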

Let $A$ be an $n\times n$ nonsingular matrix. When looking for the inverse of $A$, we look for a matrix $B$ such that $AB=I$. We will think of the matrices $B$ and $I$ in terms of their columns
\begin{equation*} B=\begin{bmatrix} | & |& &|\\ \vec{b}_1 & \vec{b}_2&\dots &\vec{b}_n\\ | & |& &| \end{bmatrix},\qquad I=\begin{bmatrix} | & |& &|\\ \vec{e}_1 & \vec{e}_2&\dots &\vec{e}_n\\ | & |& &| \end{bmatrix} \end{equation*}

If $AB=I$ then we must have
\begin{equation*} A\vec{b}_j=\vec{e}_j,\quad j=1,\dots,n \end{equation*}
This gives us $n$ systems of equations. The solution vectors of these systems are the columns of $B=A^{-1}$. Thus, the $j$th column of $A^{-1}$ is the solution $\vec{b}_j$ of $A\vec{x}=\vec{e}_j$. By Cramer's rule, the $i$th component of $\vec{b}_j$ is
\begin{equation*} (\vec{b}_j)_i=\frac{\det{A_i(\vec{e}_j)}}{\det{A}} \end{equation*}

To find $\det{A_i(\vec{e}_j)}$, we can expand along the $i$th column of $A_i(\vec{e}_j)$. But the $i$th column of $A_i(\vec{e}_j)$ is the vector $\vec{e}_j$, which has 1 in the $j$th spot and zeros everywhere else. Thus
\begin{equation*} \det{A_i(\vec{e}_j)}=C_{ji} \end{equation*}
where $C_{ji}$ is the $(j,i)$-cofactor of $A$. We now have
\begin{equation*} (\vec{b}_j)_i=\frac{C_{ji}}{\det{A}} \end{equation*}
Thus,
\begin{equation*} A^{-1}=B=\frac{1}{\det{A}}\begin{bmatrix}C_{11}&C_{21}&\dots &C_{n1}\\ C_{12}&C_{22}&\dots &C_{n2}\\ \vdots &\vdots & &\vdots \\ C_{1n}&C_{2n}&\dots &C_{nn}\end{bmatrix} \end{equation*}

The transpose of the matrix of cofactors of $A$ is called the adjugate of $A$. We write
\begin{equation*} \mbox{adj}(A)=\begin{bmatrix}C_{11}&C_{21}&\dots &C_{n1}\\ C_{12}&C_{22}&\dots &C_{n2}\\ \vdots &\vdots & &\vdots \\ C_{1n}&C_{2n}&\dots &C_{nn}\end{bmatrix} \end{equation*}
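The adjugate construction can also be sketched in Python (our own helper names; indices are 0-based, so the $(j,i)$-cofactor of the text becomes `cofactor(A, j, i)`):

```python
def det(M):
    """Determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cofactor(A, i, j):
    """(i, j)-cofactor: signed determinant of the minor omitting row i, column j."""
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    return (-1) ** (i + j) * det(minor)

def adjugate(A):
    """Transpose of the matrix of cofactors: adj(A)[i][j] = cofactor(A, j, i)."""
    n = len(A)
    return [[cofactor(A, j, i) for j in range(n)] for i in range(n)]

def inverse(A):
    """A^{-1} = (1/det A) adj(A), assuming A is nonsingular."""
    d = det(A)
    if d == 0:
        raise ValueError("A is singular")
    return [[entry / d for entry in row] for row in adjugate(A)]
```

Note that this is a conceptual sketch: cofactor expansion takes exponential time, so in practice inverses are computed by row reduction instead.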

We summarize our result as a theorem.

Theorem th:adjugateinverseformula Let $A$ be a nonsingular matrix. Then
\begin{equation*} A^{-1}=\frac{1}{\det{A}}\mbox{adj}(A) \end{equation*}

Practice Problems

Problems prob:cramer1-prob:cramer2

Use Cramer’s rule to solve each of the following systems.

Answer:
Answer:
Consider the equation
(a)
Solve for using Cramer’s Rule.

Answer:

(b)
If you had to solve for all four variables, which method would you use? Why?

Problems prob:adjinverse1-prob:adjinverse2

Use Theorem th:adjugateinverseformula to find the inverse of each of the following matrices.

Answer:
Answer:
Show that the formula in (eq:twobytwoinverse) is a special case of the formula in Theorem th:adjugateinverseformula by showing that