We interpret linear systems as matrix equations and as equations involving linear combinations of vectors. We define singular and nonsingular matrices.
In general, a system of linear equations
\[
\begin{array}{ccccccccc}
a_{11}x_1 &+& a_{12}x_2 &+& \cdots &+& a_{1n}x_n &=& b_1\\
a_{21}x_1 &+& a_{22}x_2 &+& \cdots &+& a_{2n}x_n &=& b_2\\
 & & & & \vdots & & & & \\
a_{m1}x_1 &+& a_{m2}x_2 &+& \cdots &+& a_{mn}x_n &=& b_m
\end{array}
\]
can be written as a matrix equation $A\mathbf{x}=\mathbf{b}$ as follows:
\[
\underbrace{\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\ a_{21}&a_{22}&\cdots&a_{2n}\\ \vdots&\vdots& &\vdots\\ a_{m1}&a_{m2}&\cdots&a_{mn}\end{bmatrix}}_{A}
\underbrace{\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}}_{\mathbf{x}}
=
\underbrace{\begin{bmatrix}b_1\\ b_2\\ \vdots\\ b_m\end{bmatrix}}_{\mathbf{b}}
\]
Solving this matrix equation (or showing that a solution does not exist) amounts to finding the reduced row-echelon form of the augmented matrix $[A\mid\mathbf{b}]$.
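Because the specific systems in this section's examples are not reproduced above, the sketch below illustrates the procedure on a made-up $2\times 2$ system ($x + 2y = 4$, $3x - y = 5$). The `rref` helper and the numbers are our own, not part of the text; exact rational arithmetic avoids floating-point round-off.

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (a list of rows) to reduced row-echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        # find a pivot for column c at or below row r
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no leading 1 in this column
        M[r], M[pivot] = M[pivot], M[r]   # swap the pivot row into place
        p = M[r][c]
        M[r] = [x / p for x in M[r]]      # scale so the leading entry is 1
        for i in range(nrows):
            if i != r and M[i][c] != 0:   # clear the rest of the column
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

# Augmented matrix of the hypothetical system x + 2y = 4, 3x - y = 5
R = rref([[1, 2, 4], [3, -1, 5]])
assert R == [[1, 0, 2], [0, 1, 1]]   # unique solution x = 2, y = 1
```

Reading off the last column of the reduced augmented matrix gives the solution directly, exactly as in the worked examples that follow.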
ex:linsysmatrix1b The augmented matrix that corresponds to the original system and its reduced row-echelon form are
This shows that the ordered pair is a solution to the system. We conclude that is a solution to the matrix equation in ex:linsysmatrix1a. A quick verification confirms this:
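Such a verification is just a matrix-vector multiplication. Since the example's actual numbers are not shown above, the snippet below checks a candidate solution against a made-up system of our own:

```python
# Hypothetical system (the original example's numbers are not reproduced here)
A = [[1, 2], [3, -1]]
b = [4, 5]
x = [2, 1]            # candidate solution to verify

# multiply the candidate back through the matrix, row by row
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
assert Ax == b        # x really does satisfy Ax = b
```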
One way to obtain a solution is to convert this to a system of equations. It is not necessary to write the system down, but it helps to think about it as you write out your solution vector.
We see that and are leading variables because they correspond to leading 1s in the reduced row-echelon form , while and are free variables. We start by assigning parameters and to and , respectively, then solve for and .
We can now write the solution vector as follows:
The solution given in (eq:generalvsparticular) is an example of a general solution because it accounts for all of the solutions to the system. Letting and take on specific values produces particular solutions. For example, is a particular solution that corresponds to , .
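The relationship between a general solution and its particular solutions can be checked mechanically. The reduced system below is our own illustration (the original example's coefficients are not reproduced above): the free variables $x_3 = s$ and $x_4 = t$ parametrize every solution, and each choice of parameters yields a particular solution.

```python
# Hypothetical reduced system with two free variables (x3 = s, x4 = t):
#   x1 + 2*x3 -   x4 = 3
#   x2 -   x3 + 2*x4 = 4
def general_solution(s, t):
    # back-substitute: leading variables x1, x2 in terms of the parameters
    return [3 - 2*s + t, 4 + s - 2*t, s, t]

def satisfies(x):
    x1, x2, x3, x4 = x
    return x1 + 2*x3 - x4 == 3 and x2 - x3 + 2*x4 == 4

assert satisfies(general_solution(0, 0))    # particular solution at s = t = 0
assert satisfies(general_solution(5, -2))   # every parameter choice works
```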
Our examples so far have involved non-square matrices. Square matrices, however, play a very important role in linear algebra, and they are the focus of the rest of this section.
Observe that the left-hand side of the row-reduced augmented matrix in Example ex:nonsingularintro is the identity matrix $I$. This means that the reduced row-echelon form of the coefficient matrix is $I$.
The elementary row operations that carried the coefficient matrix to $I$ did not depend on the vector of constants on the right-hand side. In fact, the same row reduction process can be applied to the matrix equation $A\mathbf{x}=\mathbf{b}$ for any vector $\mathbf{b}$ to obtain a unique solution. Given a matrix $A$ whose reduced row-echelon form is $I$, the system $A\mathbf{x}=\mathbf{b}$ will never be inconsistent, because we will never have a row like this: $\left[\begin{array}{cccc|c} 0 & 0 & \cdots & 0 & 1 \end{array}\right]$. Neither will it have infinitely many solutions, because there will never be free variables. Matrices such as $A$ deserve special attention.
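This independence from the right-hand side can be demonstrated numerically. The sketch below uses a made-up matrix of our own whose reduced row-echelon form is the identity, and row-reduces the augmented matrix for several different constant vectors; the `rref` helper is an illustration, not part of the text.

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (a list of rows) to reduced row-echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        p = M[r][c]
        M[r] = [x / p for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

A = [[2, 1], [5, 3]]                  # made-up matrix with rref(A) = I
for b in ([1, 0], [0, 1], [7, -4]):
    R = rref([row + [bi] for row, bi in zip(A, b)])
    # the left block always reduces to the identity, so each system
    # Ax = b has exactly one solution (the last column of R)
    assert [row[:2] for row in R] == [[1, 0], [0, 1]]
```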
Nonsingular matrices have many useful properties.
Not all square matrices are nonsingular. For example, any square matrix with a row of zeros is singular, since its reduced row-echelon form cannot be the identity matrix. By Theorem th:nonsingularequivalency1, a matrix equation involving a singular matrix cannot have a unique solution. The following example illustrates the two scenarios that arise when solving equations that involve singular matrices.
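The two scenarios can be previewed with a small computation. Below, a made-up singular matrix of our own (its second row is twice its first) is paired with two right-hand sides: one produces infinitely many solutions, the other produces none. The `rref` helper is an illustration, not part of the text.

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (a list of rows) to reduced row-echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        p = M[r][c]
        M[r] = [x / p for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

# Made-up singular matrix: second row = 2 * first row
# Scenario 1: compatible right-hand side -> a free variable,
# hence infinitely many solutions
R1 = rref([[1, 2, 3], [2, 4, 6]])
assert R1 == [[1, 2, 3], [0, 0, 0]]

# Scenario 2: incompatible right-hand side -> a row [0 0 | 1],
# hence no solutions
R2 = rref([[1, 2, 3], [2, 4, 5]])
assert R2 == [[1, 2, 0], [0, 0, 1]]
```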
Recall that the product of a matrix and a vector can be interpreted as a linear combination of the columns of the matrix: if $\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n$ are the columns of $A$, then $A\mathbf{x} = x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n$. For example,
So, is a solution to the matrix equation. We conclude that is a linear combination of the columns of , and write:
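The column interpretation is easy to confirm numerically. With a made-up matrix and vector of our own (the example's actual entries are not reproduced above), the row-by-row product and the column combination give the same result:

```python
A = [[1, 2], [3, -1]]
x = [2, 1]

# A*x computed row by row (the usual matrix-vector product)
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]

# the same vector as a linear combination of the columns of A
col1 = [row[0] for row in A]
col2 = [row[1] for row in A]
combo = [x[0] * c1 + x[1] * c2 for c1, c2 in zip(col1, col2)]

assert Ax == combo == [4, 5]   # 2*[1, 3] + 1*[2, -1] = [4, 5]
```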
item:linearcombofcols2b We begin by attempting to solve the matrix equation . The augmented matrix corresponding to this equation and its reduced row-echelon form are
This shows that this matrix equation has no solutions: the reduced row-echelon form contains a row of the form $\left[\begin{array}{ccc|c} 0 & \cdots & 0 & 1 \end{array}\right]$. We conclude that is not a linear combination of the columns of .