Matrices and vectors can be used to rewrite systems of equations as a single equation, and there are advantages to doing this.

A matrix with one row is called a row vector, and a matrix with one column a column vector. The term vector, for now, will refer to a column vector. To begin with, notice that the system appearing in \eqref{eqn:sys} can be expressed as the single vector equation
\begin{equation}\label{eqn:vec1}
\begin{bmatrix}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\
\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n
\end{bmatrix}
=
\begin{bmatrix}
b_1 \\ b_2 \\ \vdots \\ b_m
\end{bmatrix}.
\end{equation}
The vector on the left consists of entries which are linear homogeneous functions in the variables $x_1, x_2, \dots, x_n$. A solution to this vector equation is exactly what it was before: an assignment of values to the variables which makes the equation true.
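As a concrete illustration (this small $2 \times 2$ system is not the system of \eqref{eqn:sys}; it is introduced here only as a running example), the system
\[
\begin{aligned}
x_1 + 2x_2 &= 5 \\
3x_1 + 4x_2 &= 11
\end{aligned}
\]
becomes the single vector equation
\[
\begin{bmatrix}
x_1 + 2x_2 \\
3x_1 + 4x_2
\end{bmatrix}
=
\begin{bmatrix}
5 \\ 11
\end{bmatrix}.
\]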

Now the expression on the left in \eqref{eqn:vec1} can be written as a sum of $n$ components, where the $j$th component is obtained by setting all of the variables other than $x_j$ to zero. The result is
\begin{equation}\label{eqn:vec2}
\begin{bmatrix}
a_{11}x_1 \\ a_{21}x_1 \\ \vdots \\ a_{m1}x_1
\end{bmatrix}
+
\begin{bmatrix}
a_{12}x_2 \\ a_{22}x_2 \\ \vdots \\ a_{m2}x_2
\end{bmatrix}
+ \cdots +
\begin{bmatrix}
a_{1n}x_n \\ a_{2n}x_n \\ \vdots \\ a_{mn}x_n
\end{bmatrix}
=
\begin{bmatrix}
b_1 \\ b_2 \\ \vdots \\ b_m
\end{bmatrix}.
\end{equation}
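In the running example above, splitting the left-hand side by variable gives
\[
\begin{bmatrix}
x_1 \\ 3x_1
\end{bmatrix}
+
\begin{bmatrix}
2x_2 \\ 4x_2
\end{bmatrix}
=
\begin{bmatrix}
5 \\ 11
\end{bmatrix}.
\]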

Next we observe that the $j$th component, which involves only $x_j$, can be factored as
\[
\begin{bmatrix}
a_{1j}x_j \\ a_{2j}x_j \\ \vdots \\ a_{mj}x_j
\end{bmatrix}
=
x_j
\begin{bmatrix}
a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj}
\end{bmatrix}.
\]
Using this, the vector equation \eqref{eqn:vec2} may be rewritten as
\begin{equation}\label{eqn:vec3}
x_1
\begin{bmatrix}
a_{11} \\ a_{21} \\ \vdots \\ a_{m1}
\end{bmatrix}
+
x_2
\begin{bmatrix}
a_{12} \\ a_{22} \\ \vdots \\ a_{m2}
\end{bmatrix}
+ \cdots +
x_n
\begin{bmatrix}
a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn}
\end{bmatrix}
=
\begin{bmatrix}
b_1 \\ b_2 \\ \vdots \\ b_m
\end{bmatrix}.
\end{equation}
The left-hand side of this last equation leads us to one of the central constructions in all of linear algebra: a \emph{linear combination}, which in words is a sum of scalar multiples of the vectors involved. The expression on the left of \eqref{eqn:vec3} is a linear combination of sorts, but one where the coefficients are scalar-valued variables rather than actual scalars. So for any assignment of values to the variables $x_1, \dots, x_n$ we get an actual linear combination.
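Factoring each component of the running example in this way produces the linear combination form
\[
x_1
\begin{bmatrix}
1 \\ 3
\end{bmatrix}
+
x_2
\begin{bmatrix}
2 \\ 4
\end{bmatrix}
=
\begin{bmatrix}
5 \\ 11
\end{bmatrix}.
\]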

Finally, going back to equation \eqref{eqn:vec1}, we observe that the left-hand side can be written as $A\mathbf{x}$, where $A$ is the coefficient matrix
\[
A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\]
and $\mathbf{x}$ is the vector variable
\[
\mathbf{x} =
\begin{bmatrix}
x_1 \\ x_2 \\ \vdots \\ x_n
\end{bmatrix}.
\]
This leads to our final equivalent form of \eqref{eqn:vec1}, referred to as the \emph{matrix equation} associated to the system of equations:
\begin{equation}\label{eqn:mat}
A\mathbf{x} = \mathbf{b},
\end{equation}
where $\mathbf{b}$ is the vector with entries $b_1, b_2, \dots, b_m$. As with \eqref{eqn:vec1}, a solution is an assignment of a particular numerical vector to $\mathbf{x}$ making the equation true, and the matrix equation is consistent if and only if such an $\mathbf{x}$ exists. Summarizing, we have

\begin{theorem}\label{thm:equiv}
The system of equations \eqref{eqn:sys}, the vector equation \eqref{eqn:vec3}, and the matrix equation \eqref{eqn:mat} all have the same set of solutions. In particular, the system is consistent if and only if $\mathbf{b}$ can be written as a linear combination of the columns of $A$.
\end{theorem}

\begin{proof}
The only point needing verification is the last statement. It follows from \eqref{eqn:vec3}, which can be written more succinctly as
\[
x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b},
\]
where $\mathbf{a}_j$ denotes the $j$th column of $A$. Any solution yields a particular set of values for $x_1, \dots, x_n$ to take as the scalars on the left, so that the resulting linear combination produces $\mathbf{b}$; conversely, a particular linear combination which results in $\mathbf{b}$ in turn produces a solution to \eqref{eqn:vec3}.
\end{proof}
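In the running example, the matrix equation reads
\[
\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
=
\begin{bmatrix}
5 \\ 11
\end{bmatrix},
\]
and the assignment $x_1 = 1$, $x_2 = 2$ solves the system. Correspondingly,
\[
1
\begin{bmatrix}
1 \\ 3
\end{bmatrix}
+
2
\begin{bmatrix}
2 \\ 4
\end{bmatrix}
=
\begin{bmatrix}
5 \\ 11
\end{bmatrix},
\]
which exhibits $\mathbf{b}$ as a linear combination of the columns of $A$, just as the consistency theorem predicts.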

The last part of this theorem is often called the \emph{consistency theorem} for systems of equations, and we will refer to it in this way.

Finally, we consider the case of a matrix equation
\begin{equation}\label{eqn:inv}
A\mathbf{x} = \mathbf{b}
\end{equation}
when $A$ is invertible. If we assume $\mathbf{x}_0$ is a solution, we can multiply both sides of the equation on the left by $A^{-1}$ to get
\[
\mathbf{x}_0 = (A^{-1}A)\mathbf{x}_0 = A^{-1}(A\mathbf{x}_0) = A^{-1}\mathbf{b}.
\]
On the other hand, if we take $\mathbf{x}_0 = A^{-1}\mathbf{b}$ and substitute it into equation \eqref{eqn:inv}, we get
\[
A(A^{-1}\mathbf{b}) = (AA^{-1})\mathbf{b} = \mathbf{b}.
\]
In other words, we have shown

\begin{theorem}\label{thm:inv}
If $A$ is invertible, then the matrix equation $A\mathbf{x} = \mathbf{b}$ has the unique solution $\mathbf{x} = A^{-1}\mathbf{b}$.
\end{theorem}
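In the running example, the coefficient matrix is invertible with
\[
A^{-1} = -\frac{1}{2}
\begin{bmatrix}
4 & -2 \\
-3 & 1
\end{bmatrix}
=
\begin{bmatrix}
-2 & 1 \\
3/2 & -1/2
\end{bmatrix},
\]
and so
\[
\mathbf{x} = A^{-1}\mathbf{b} =
\begin{bmatrix}
-2 & 1 \\
3/2 & -1/2
\end{bmatrix}
\begin{bmatrix}
5 \\ 11
\end{bmatrix}
=
\begin{bmatrix}
1 \\ 2
\end{bmatrix},
\]
recovering the solution $x_1 = 1$, $x_2 = 2$ found above.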

The matrix formulations above also give us another perspective on the augmented coefficient matrix of the original system.