Linear Systems as Matrix and Linear Combination Equations
A Linear System as a Matrix Equation
In general, a system of linear equations
can be written as a matrix equation as follows:
Solving this matrix equation (or showing that a solution does not exist) amounts to finding the reduced row-echelon form of the augmented matrix
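Since the whole solution process runs through the reduced row-echelon form, it may help to see the row reduction carried out mechanically. The following is a minimal sketch (not the text's algorithm, and the matrices below are hypothetical) that computes the reduced row-echelon form exactly using rational arithmetic:

```python
from fractions import Fraction

def rref(rows):
    """Return the reduced row-echelon form of a matrix given as a list of lists."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivot_row = 0
    for col in range(ncols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, nrows) if M[r][col] != 0), None)
        if pr is None:
            continue  # no pivot in this column; move on
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        pivot = M[pivot_row][col]
        M[pivot_row] = [x / pivot for x in M[pivot_row]]  # scale to a leading 1
        for r in range(nrows):
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                # Zero out every other entry in the pivot column.
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == nrows:
            break
    return M

# Augmented matrix for the (hypothetical) system 2x + y = 5, x - y = 1:
print(rref([[2, 1, 5], [1, -1, 1]]))  # [[1, 0, 2], [0, 1, 1]], i.e. x = 2, y = 1
```

Using `Fraction` avoids the floating-point roundoff that would otherwise blur the distinction between a zero entry and a very small one.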
ex:linsysmatrix1b The augmented matrix that corresponds to the original system and its reduced row-echelon form are
This shows that the ordered pair is a solution to the system. We conclude that it is a solution to the matrix equation in ex:linsysmatrix1a. A quick verification confirms this:
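The verification step is just a matrix-vector multiplication. Here is a small numeric sketch with a hypothetical 2×2 system (not the example from the text, whose entries are shown above):

```python
import numpy as np

# Hypothetical nonsingular system: x + 2y = 5, 3x - y = 1.
A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([5.0, 1.0])

x = np.linalg.solve(A, b)   # solves A x = b; here x = [1., 2.]

# Quick verification: multiply the solution back through A.
assert np.allclose(A @ x, b)
```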
One way to obtain a solution is to convert this to a system of equations. It is not necessary to write the system down, but it helps to think about it as you write out your solution vector.
We see that and are leading variables because they correspond to leading 1s in the reduced row-echelon form, while and are free variables. We start by assigning parameters and to and , respectively, then solve for and .
We can now write the solution vector as follows:

The solution given in (eq:generalvsparticular) is an example of a general solution because it accounts for all of the solutions to the system. Letting and take on specific values produces particular solutions. For example, is a particular solution that corresponds to , .
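The relationship between a general solution and its particular solutions can be checked numerically. In this sketch the system is hypothetical (its augmented matrix is already in reduced row-echelon form, with the third and fourth variables free), but the pattern mirrors the discussion above:

```python
import numpy as np

# Hypothetical system, already in reduced row-echelon form:
#   x1        + 2*x3 -   x4 = 4
#        x2  +   x3 + 3*x4 = 5
A = np.array([[1.0, 0.0, 2.0, -1.0],
              [0.0, 1.0, 1.0,  3.0]])
b = np.array([4.0, 5.0])

def general_solution(s, t):
    """x3 = s and x4 = t are free parameters; x1 and x2 are solved in terms of them."""
    return np.array([4 - 2*s + t, 5 - s - 3*t, s, t])

# Every choice of parameters yields a particular solution of A x = b.
for s, t in [(0.0, 0.0), (1.0, -2.0), (3.5, 0.25)]:
    assert np.allclose(A @ general_solution(s, t), b)
```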
Singular and Nonsingular Matrices
Our examples so far have involved non-square matrices. Square matrices, however, play a very important role in linear algebra, and this section will focus on them.
Observe that the left-hand side of the augmented matrix in Example ex:nonsingularintro is the identity matrix . This means that .
The elementary row operations that carried to were not dependent on the vector . In fact, the same row reduction process can be applied to the matrix equation for any vector to obtain a unique solution. Given a matrix such that , the system will never be inconsistent because we will never have a row like this: . Neither will we have infinitely many solutions because there will never be free variables. Matrices such as deserve special attention.
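The claim that such a matrix equation has a unique solution for *every* right-hand side can be tested directly. The matrix below is a hypothetical example chosen to be nonsingular (its determinant is 1):

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # det(A) = 2*3 - 1*5 = 1, so A is nonsingular

# For a nonsingular A, A x = b has exactly one solution no matter what b is.
rng = np.random.default_rng(0)
for _ in range(5):
    b = rng.standard_normal(2)
    x = np.linalg.solve(A, b)
    assert np.allclose(A @ x, b)

# Equivalently, A row-reduces to the identity; its inverse exists:
assert np.allclose(np.linalg.inv(A) @ A, np.eye(2))
```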
Nonsingular matrices have many useful properties.
We will prove the equivalence of the three statements by showing that
- Proof of item:asingular ⇒ item:uniquesolution
- Suppose . Given any vector in , the augmented matrix can be carried to its reduced row-echelon form . Uniqueness of the reduced row-echelon form guarantees that is the unique solution of .
- Proof of item:uniquesolution ⇒ item:onlytrivialsolution
- Suppose has a unique solution for all vectors . Then has a unique solution. But is always a solution to . Therefore is the only solution.
- Proof of item:onlytrivialsolution ⇒ item:asingular
- Suppose has only the trivial solution. This means that is the only solution of . But then, we know that the augmented matrix can be reduced to . The same row operations will carry to .
Not all square matrices are nonsingular. For example,

By Theorem th:nonsingularequivalency1, a matrix equation involving a singular matrix cannot have a unique solution. The following example illustrates the two scenarios that arise when solving equations that involve singular matrices.
item:infeasible When the vector is changed, the row operations that take to its reduced row-echelon form produce a in the last row of the vector on the right, which shows that the system is inconsistent.
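Both scenarios can be distinguished by comparing ranks: the system is inconsistent exactly when appending the right-hand side raises the rank of the matrix, and otherwise a singular coefficient matrix leaves free variables and hence infinitely many solutions. The matrix and the `classify` helper below are hypothetical illustrations, not part of the text:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])   # singular: the second row is twice the first
assert np.isclose(np.linalg.det(A), 0.0)

def classify(A, b):
    """Classify A x = b by comparing the rank of A with that of the augmented matrix."""
    aug = np.column_stack([A, b])
    if np.linalg.matrix_rank(aug) > np.linalg.matrix_rank(A):
        return "inconsistent"               # a row like [0 0 | 1] appears
    if np.linalg.matrix_rank(A) < A.shape[1]:
        return "infinitely many solutions"  # free variables remain
    return "unique solution"

print(classify(A, np.array([3.0, 6.0])))   # b lines up with the rows: infinitely many
print(classify(A, np.array([3.0, 5.0])))   # b does not: inconsistent
```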
A Linear System as a Linear Combination Equation
Recall that the product of a matrix and a vector can be interpreted as a linear combination of the columns of the matrix. For example,
So, is a solution to the matrix equation. We conclude that is a linear combination of the columns of , and write
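This column interpretation of the matrix-vector product is easy to confirm numerically. The 3×2 matrix here is a hypothetical stand-in for the example above:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])
x = np.array([2.0, -1.0])

# A @ x is the linear combination  x[0] * (column 0)  +  x[1] * (column 1).
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(A @ x, combo)
```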
item:linearcombofcols2b We begin by attempting to solve the matrix equation . The augmented matrix corresponding to this equation, together with its reduced row-echelon form, are
This shows that this matrix equation has no solutions. We conclude that is not a linear combination of the columns of .
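Whether a vector is a linear combination of the columns of a matrix is therefore a question about the solvability of a matrix equation, and it can be tested with a rank comparison. The matrix and helper below are hypothetical:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # column span: all vectors of the form (a, b, a + b)

def in_column_span(A, b):
    """b is a linear combination of the columns of A exactly when
    appending b to A does not increase the rank."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

assert in_column_span(A, np.array([2.0, 3.0, 5.0]))       # 2*col0 + 3*col1
assert not in_column_span(A, np.array([2.0, 3.0, 4.0]))   # 2 + 3 != 4: no combination
```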
Practice Problems