Matrices and vectors can be used to rewrite a system of equations as a single equation, and there are real advantages to doing this. Consider a system of $m$ linear equations in the $n$ unknowns $x_1, \dots, x_n$:
$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &= b_1\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n &= b_m
\end{aligned}
$$
Collecting the left-hand sides and the right-hand sides into single vectors produces one vector equation:
$$
\begin{bmatrix} a_{11}x_1 + \dots + a_{1n}x_n\\ \vdots\\ a_{m1}x_1 + \dots + a_{mn}x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ \vdots\\ b_m \end{bmatrix}
\qquad \text{(eqn:vec1)}
$$
Now the expression on the left in (eqn:vec1) can be written as a sum of $n$ component vectors, where the $i$-th component is obtained by setting all of the variables other than $x_i$ to zero. The result is
$$
\begin{bmatrix} a_{11}x_1\\ \vdots\\ a_{m1}x_1 \end{bmatrix}
+ \begin{bmatrix} a_{12}x_2\\ \vdots\\ a_{m2}x_2 \end{bmatrix}
+ \dots
+ \begin{bmatrix} a_{1n}x_n\\ \vdots\\ a_{mn}x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ \vdots\\ b_m \end{bmatrix}
\qquad \text{(eqn:vec2)}
$$
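For concreteness, here is how these two forms look for a small system (a made-up example for illustration, not taken from the text):
$$
\begin{aligned}
x_1 + 2x_2 &= 5\\
3x_1 + 4x_2 &= 6
\end{aligned}
\qquad\Longleftrightarrow\qquad
\begin{bmatrix} x_1 + 2x_2\\ 3x_1 + 4x_2 \end{bmatrix}
=
\begin{bmatrix} 5\\ 6 \end{bmatrix}
\qquad\Longleftrightarrow\qquad
\begin{bmatrix} x_1\\ 3x_1 \end{bmatrix}
+
\begin{bmatrix} 2x_2\\ 4x_2 \end{bmatrix}
=
\begin{bmatrix} 5\\ 6 \end{bmatrix}
$$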
Next we observe that the $i$-th component, which involves only $x_i$, can be factored as
$$
\begin{bmatrix} a_{1i}x_i\\ \vdots\\ a_{mi}x_i \end{bmatrix}
= x_i \begin{bmatrix} a_{1i}\\ \vdots\\ a_{mi} \end{bmatrix}
= x_i\,\mathbf{a}_i,
$$
where $\mathbf{a}_i$ denotes the $i$-th column of coefficients. Using this, the vector equation (eqn:vec2) may be rewritten as
$$
x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \dots + x_n\mathbf{a}_n = \mathbf{b}
\qquad \text{(eqn:vec3)}
$$
The left-hand side of this last equation, an expression of the form $x_1\mathbf{a}_1 + \dots + x_n\mathbf{a}_n$ called a *linear combination* of the vectors $\mathbf{a}_1, \dots, \mathbf{a}_n$ with scalar weights $x_1, \dots, x_n$, leads us to one of the central constructions in all of Linear Algebra.

Finally, going back to equation (eqn:vec1), we observe that the left-hand side can be written as $A\mathbf{x}$, where $A$ is the coefficient matrix
$$
A = \begin{bmatrix}
a_{11} & a_{12} & \dots & a_{1n}\\
\vdots & \vdots & & \vdots\\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{bmatrix}
$$
and $\mathbf{x}$ is the vector variable
$$
\mathbf{x} = \begin{bmatrix} x_1\\ \vdots\\ x_n \end{bmatrix},
$$
which leads to our final equivalent form of (eqn:vec1), referred to as the matrix equation associated to the system of equations:
$$
A\mathbf{x} = \mathbf{b},
$$
where $\mathbf{b}$ is the vector $\begin{bmatrix} b_1 & \dots & b_m \end{bmatrix}^T$. As with (eqn:vec1), a solution is an assignment of a particular numerical vector to $\mathbf{x}$ making the equation true, and the matrix equation is consistent iff such an $\mathbf{x}$ exists. Summarizing:

- Theorem: A system of linear equations, its associated vector equation (eqn:vec3), and its associated matrix equation $A\mathbf{x} = \mathbf{b}$ have exactly the same solutions. In particular, the system is consistent iff $\mathbf{b}$ can be written as a linear combination of the columns of $A$.
- Proof
- The only point needing verification is the last statement. But this follows from (eqn:vec3), which can be more succinctly written as $\sum_{i=1}^{n} x_i\mathbf{a}_i = \mathbf{b}$: any solution yields a particular set of values for $x_1, \dots, x_n$ to take as scalars on the left so that the resulting linear combination produces $\mathbf{b}$, while a particular linear combination which results in $\mathbf{b}$ would in turn produce a solution to (eqn:vec3).
The last part of this theorem is often called the consistency theorem for systems of equations, and we will refer to it by that name.
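For the small example above, (eqn:vec3) reads $x_1\begin{bmatrix}1\\3\end{bmatrix} + x_2\begin{bmatrix}2\\4\end{bmatrix} = \begin{bmatrix}5\\6\end{bmatrix}$, so the consistency theorem asks whether $\mathbf{b}$ is a linear combination of the columns of $A$. The sketch below checks this numerically with a rank comparison, assuming NumPy; the rank test (consistent iff appending $\mathbf{b}$ as an extra column leaves the rank of $A$ unchanged) is a standard computational restatement of the theorem, not a construction from the text above.

```python
import numpy as np

# Coefficient matrix A and right-hand side b for the example system
#   x1 + 2*x2 = 5
# 3*x1 + 4*x2 = 6
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

# Consistency test: the system is consistent exactly when b is a
# linear combination of the columns of A, i.e. when appending b as
# an extra column does not increase the rank.
augmented = np.column_stack([A, b])
consistent = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)
print(consistent)  # True: the columns of this A span the whole plane
```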
Finally, we consider the case of a matrix equation
$$
A\mathbf{x} = \mathbf{b}
\qquad \text{(eqn:inv)}
$$
when $A$ is invertible. If we assume $\mathbf{x}$ is a solution, we can multiply both sides of the equation on the left by $A^{-1}$ to get
$$
A^{-1}(A\mathbf{x}) = A^{-1}\mathbf{b}
\quad\Longrightarrow\quad
\mathbf{x} = A^{-1}\mathbf{b}.
$$
On the other hand, if we take $\mathbf{x} = A^{-1}\mathbf{b}$ and substitute into equation (eqn:inv), we get
$$
A(A^{-1}\mathbf{b}) = (AA^{-1})\mathbf{b} = \mathbf{b}.
$$
In other words, we have shown:

- Theorem: If $A$ is invertible, then the matrix equation $A\mathbf{x} = \mathbf{b}$ is consistent, with unique solution $\mathbf{x} = A^{-1}\mathbf{b}$.

The above matrix formulations give us an equivalent perspective on the augmented coefficient matrix of the original system.
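Before moving on, the last theorem is easy to check numerically. The following is a minimal sketch, again assuming NumPy and reusing the example system from above; it forms $A^{-1}\mathbf{b}$ explicitly only to mirror the derivation.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # invertible, since det(A) = -2 != 0
b = np.array([5.0, 6.0])

# Unique solution x = A^{-1} b, exactly as the theorem asserts.
x = np.linalg.inv(A) @ b
print(x)                      # [-4.   4.5]
print(np.allclose(A @ x, b))  # True: substituting x back recovers b
```

In numerical practice one would call `np.linalg.solve(A, b)` rather than forming the inverse, which is cheaper and more accurate; the explicit inverse appears here only because it is what the theorem names.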