We develop a method for finding the inverse of a square matrix, discuss when the inverse does not exist, and use matrix inverses to solve matrix equations.
MAT-0050: The Inverse of a Matrix
Definition and Properties of Matrix Inverses
Consider the equation $2x = 6$. It takes little time to recognize that the solution to this equation is $x = 3$. In fact, the solution is so obvious that we do not think about the algebraic steps necessary to find it. Let’s take a look at these steps in detail. $$2x = 6 \quad\Rightarrow\quad \frac{1}{2}(2x) = \frac{1}{2}(6) \quad\Rightarrow\quad \left(\frac{1}{2}\cdot 2\right)x = 3 \quad\Rightarrow\quad 1\cdot x = 3 \quad\Rightarrow\quad x = 3$$ This process utilizes many properties of real-number multiplication. In particular, we make use of the existence of multiplicative inverses. Every non-zero real number $a$ has a multiplicative inverse $a^{-1} = \frac{1}{a}$ with the property that $a^{-1}a = aa^{-1} = 1$. We say that $1$ is the multiplicative identity because $a \cdot 1 = 1 \cdot a = a$.
Given a matrix equation $AX = B$, we would like to follow a process similar to the one above to solve this matrix equation for $X$.
Given the set of all $n\times n$ matrices, with standard matrix multiplication, the role of the multiplicative identity is filled by the identity matrix $I_n$ because $I_nA = AI_n = A$.
Given an $n\times n$ matrix $A$, a multiplicative inverse of $A$ would have to be some $n\times n$ matrix $B$ that satisfies the following property: $$AB = BA = I_n$$
Assuming that such an inverse exists and writing it as $A^{-1}$, this is what the process of solving the equation $AX = B$ would look like: $$AX = B \quad\Rightarrow\quad A^{-1}(AX) = A^{-1}B \quad\Rightarrow\quad (A^{-1}A)X = A^{-1}B \quad\Rightarrow\quad IX = A^{-1}B \quad\Rightarrow\quad X = A^{-1}B$$
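The process above can be sketched numerically. The following NumPy illustration uses made-up matrix values; note that in practice `np.linalg.solve` is preferred over forming $A^{-1}$ explicitly.

```python
# Sketch of solving AX = B via X = A^{-1}B; the values of A and B are
# made-up illustrative choices, not taken from the text.
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # invertible, since det(A) = 2*3 - 1*5 = 1
B = np.array([[1.0],
              [2.0]])

A_inv = np.linalg.inv(A)     # A^{-1}
X = A_inv @ B                # X = A^{-1} B, as in the derivation above

# The preferred numerical route: solve AX = B directly, without
# explicitly computing A^{-1}.
X_direct = np.linalg.solve(A, B)

print(np.allclose(X, X_direct))  # True
print(np.allclose(A @ X, B))     # True
```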
The following theorem shows that matrix inverses are unique.
- Proof
- Because $B$ is an inverse of $A$, we have: $$AB = BA = I$$ Suppose there exists another matrix $B'$ such that $$AB' = B'A = I$$ Then $$B' = B'I = B'(AB) = (B'A)B = IB = B$$
Now that we know that a matrix cannot have more than one inverse, we can safely refer to the inverse of $A$ as $A^{-1}$.
We now prove several useful properties of matrix inverses.
We will prove Property item:inverseofproduct, which states that if $A$ and $B$ are invertible $n\times n$ matrices, then $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$. The remaining properties are left as exercises.
- Proof of Property item:inverseofproduct:
- We will check to see if $B^{-1}A^{-1}$ is the inverse of $AB$. $$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I$$ $$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I$$ Thus $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$.
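A quick numerical check of the product rule, with made-up invertible matrices, also highlights why the order reverses: $B^{-1}A^{-1}$ works, while $A^{-1}B^{-1}$ generally does not.

```python
# Numerical check that (AB)^{-1} = B^{-1} A^{-1}; the matrices are
# made-up invertible examples (both have determinant 1).
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 7.0]])
B = np.array([[2.0, 1.0], [7.0, 4.0]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))  # True

# Note the order reversal: inv(A) @ inv(B) is generally NOT (AB)^{-1}.
wrong = np.linalg.inv(A) @ np.linalg.inv(B)
print(np.allclose(lhs, wrong))  # False
```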
Computing the Inverse
We now turn to the question of how to find the inverse of a square matrix, or determine that the inverse does not exist.
Given a square matrix $A$, we are looking for a square matrix $B$ such that $$AB = BA = I$$ We will start by attempting to satisfy $AB = I$. Let $\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n$ be the columns of $B$, then $$AB = A\begin{bmatrix}\mathbf{b}_1 & \cdots & \mathbf{b}_n\end{bmatrix} = \begin{bmatrix}A\mathbf{b}_1 & \cdots & A\mathbf{b}_n\end{bmatrix} = \begin{bmatrix}\mathbf{e}_1 & \cdots & \mathbf{e}_n\end{bmatrix} = I$$ where each $\mathbf{e}_i$ is a standard unit vector of $\mathbb{R}^n$. This gives us a system of equations $A\mathbf{x} = \mathbf{e}_i$ for each $i = 1, \ldots, n$. If each system has a unique solution, then finding these solutions will give us the columns of the desired matrix $B$.
First, suppose that $\mbox{rref}(A) = I$, then we can use elementary row operations to carry each augmented matrix $\left[\begin{array}{c|c} A & \mathbf{e}_i \end{array}\right]$ to its reduced row-echelon form $\left[\begin{array}{c|c} I & \mathbf{b}_i \end{array}\right]$. Observe that the row operations that carry $A$ to $I$ will be the same for each $i$. We can, therefore, combine the process of solving $n$ systems of equations into a single process: $$\left[\begin{array}{c|c} A & I \end{array}\right] \rightsquigarrow \left[\begin{array}{c|c} I & B \end{array}\right]$$ Each $\mathbf{b}_i$ is a unique solution of $A\mathbf{x} = \mathbf{e}_i$, and we conclude that $B$ is a solution to $AX = I$.
By Problem prob:elemrowopsreverse of SYS-0010, we can reverse the elementary row operations to obtain $$\left[\begin{array}{c|c} I & B \end{array}\right] \rightsquigarrow \left[\begin{array}{c|c} A & I \end{array}\right]$$ But the same row operations would also give us $$\left[\begin{array}{c|c} B & I \end{array}\right] \rightsquigarrow \left[\begin{array}{c|c} I & A \end{array}\right]$$ We conclude that $BA = I$ as well, and $B = A^{-1}$.
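The procedure $\left[\begin{array}{c|c} A & I \end{array}\right] \rightsquigarrow \left[\begin{array}{c|c} I & A^{-1} \end{array}\right]$ can be sketched in code. This is a minimal Gauss-Jordan implementation (the function name and the partial-pivoting detail are our own choices, not from the text):

```python
# A minimal sketch of the [A | I] -> [I | A^{-1}] row reduction described
# above, using Gauss-Jordan elimination with partial pivoting.
import numpy as np

def inverse_by_row_reduction(A, tol=1e-12):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # the augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        if abs(M[pivot, col]) < tol:
            raise ValueError("rref(A) != I: matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]  # swap rows
        M[col] /= M[col, col]              # scale pivot row to a leading 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # clear the rest of the column
    return M[:, n:]                        # right half is now A^{-1}

A = np.array([[2.0, 1.0], [5.0, 3.0]])     # made-up invertible example
print(np.allclose(inverse_by_row_reduction(A), np.linalg.inv(A)))  # True
```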
Next, suppose that $\mbox{rref}(A) \neq I$. Then $\mbox{rref}(A)$ must contain a row of zeros. Because one of the rows of $A$ was completely wiped out by elementary row operations, one of the rows of $A$ must be a linear combination of the other rows. Suppose row $j$ is a linear combination of the other rows. Then row $j$ can be carried to a row of zeros. But then the system $A\mathbf{x} = \mathbf{e}_j$ is inconsistent. This is because $\mathbf{e}_j$ has a $1$ as the $j$th entry and zeros everywhere else. The $1$ in the $j$th spot will not be affected by the elementary row operations that wipe out row $j$, and that row of the augmented matrix will eventually look like this: $$\left[\begin{array}{cccc|c} 0 & 0 & \cdots & 0 & 1 \end{array}\right]$$ This shows that a matrix $B$ such that $AB = I$ does not exist, and $A$ does not have an inverse.
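The singular case can be seen concretely. In this made-up example the second row of $A$ is twice the first, so $\mbox{rref}(A)$ has a zero row and no inverse exists; NumPy reports this as a `LinAlgError`.

```python
# Illustration of the singular case with a made-up matrix whose second
# row is twice the first, so rref(A) has a row of zeros.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # row 2 = 2 * row 1

# Row-reducing [A | e_2] by hand: R2 <- R2 - 2*R1 gives the row
# [0 0 | 1], which is inconsistent, so A x = e_2 has no solution.

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)   # NumPy raises on a singular matrix
```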
We have just proved the following theorem: an $n\times n$ matrix $A$ is invertible if and only if $\mbox{rref}(A) = I$, in which case $\left[\begin{array}{c|c} A & I \end{array}\right] \rightsquigarrow \left[\begin{array}{c|c} I & A^{-1} \end{array}\right]$.
Inverse of a $2\times 2$ Matrix
We will conclude this section by discussing the inverse of a nonsingular $2\times 2$ matrix. Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ be a non-singular matrix. We can find $A^{-1}$ by using the row reduction method described above, that is, by computing the reduced row-echelon form of $\left[\begin{array}{c|c} A & I \end{array}\right]$. Row reduction yields the following: $$A^{-1} = \begin{bmatrix} \frac{d}{ad-bc} & \frac{-b}{ad-bc} \\ \frac{-c}{ad-bc} & \frac{a}{ad-bc} \end{bmatrix}$$
Note that the denominator of each term in the inverse matrix is the same. Factoring it out gives us the following formula for $A^{-1}$: $$A^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
Clearly, the expression for $A^{-1}$ is defined if and only if $ad - bc \neq 0$. So, what happens when $ad - bc = 0$? In Practice Problem prob:inverseformula you will be asked to fill in the steps of the row reduction procedure that produces this formula, and to show that if $ad - bc = 0$ then $A$ does not have an inverse.
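The $2\times 2$ formula translates directly into code. The helper name `inverse2x2` below is our own, and the $ad - bc = 0$ check mirrors the condition just discussed:

```python
# Sketch of the 2x2 inverse formula; inverse2x2 is a made-up helper name.
import numpy as np

def inverse2x2(a, b, c, d):
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: the matrix has no inverse")
    # A^{-1} = (1 / (ad - bc)) * [[d, -b], [-c, a]]
    return (1.0 / det) * np.array([[d, -b],
                                   [-c, a]])

A = np.array([[2.0, 1.0], [5.0, 3.0]])     # made-up example with ad - bc = 1
print(np.allclose(inverse2x2(2.0, 1.0, 5.0, 3.0), np.linalg.inv(A)))  # True
```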