We know that every non-zero vector space admits a basis. It is natural then to ask: does every non-zero inner product space admit an orthogonal basis? The answer is yes. In fact, given a basis for an inner product space, there is a systematic way to convert it into an orthogonal basis, and the same construction simultaneously yields a method for projecting a vector onto a subspace. Again, we discuss the procedure first for $\mathbb{R}^n$ equipped with its standard scalar product, then show how it naturally extends to more general inner product spaces.

Let $\{v_1, v_2, \dots, v_n\}$ be a basis for $\mathbb{R}^n$. We will give an inductive procedure for constructing an orthogonal basis $\{w_1, w_2, \dots, w_n\}$ for $\mathbb{R}^n$ from this original set.

First, some notation. Let $V_k = \operatorname{Span}\{v_1, \dots, v_k\}$ be the span of the first $k$ vectors in the set. Since any subset of a linearly independent set of vectors is linearly independent, we see that $\dim(V_k) = k$, with $V_1 \subset V_2 \subset \dots \subset V_n = \mathbb{R}^n$.

Now $\{v_1\}$ is an orthogonal basis for $V_1$, since it has only one element. We set $w_1 = v_1$, and consider the vector $v_2$. This need not be orthogonal to $w_1$, but it cannot be simply a scalar multiple of $w_1$ either, since that would imply that the set $\{v_1, v_2\}$ was linearly dependent, contradicting what we know.

So we define
\[
w_2 = v_2 - \frac{v_2 \cdot w_1}{w_1 \cdot w_1}\, w_1 .
\]

As we have just observed, $w_2 \neq 0$.

Exercise: Compute the dot product $w_1 \cdot w_2$, and confirm that it is zero. Also, verify that $\operatorname{Span}\{w_1, w_2\} = V_2$. Conclude that $\{w_1, w_2\}$ is an orthogonal basis for $V_2$.
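As a quick numerical illustration of this exercise, here is a minimal sketch in NumPy (the basis vectors are an arbitrary choice of ours, standing in for $v_1$ and $v_2$):

```python
import numpy as np

# An arbitrary non-orthogonal basis of R^2, standing in for {v_1, v_2}.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, 0.0])

w1 = v1                                  # w_1 = v_1
w2 = v2 - (v2 @ w1) / (w1 @ w1) * w1     # the defining formula for w_2

print(w1 @ w2)  # 0.0 (up to floating-point round-off)
```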

We now suppose that we have constructed an orthogonal basis $\{w_1, \dots, w_k\}$ for $V_k$. We need to show how this can be extended to an orthogonal basis for $V_{k+1}$ if $k < n$. First, for a vector $v$, we define the projection of $v$ onto $V_k$ (also called the $V_k$-component of $v$) to be
\[
\operatorname{proj}_{V_k}(v) = \frac{v \cdot w_1}{w_1 \cdot w_1}\, w_1 + \frac{v \cdot w_2}{w_2 \cdot w_2}\, w_2 + \dots + \frac{v \cdot w_k}{w_k \cdot w_k}\, w_k .
\]

Again, if $v = v_{k+1}$, then $v_{k+1} \notin V_k$ while $\operatorname{proj}_{V_k}(v_{k+1}) \in V_k$, and so their difference will not be zero. As above, we then set
\[
w_{k+1} = v_{k+1} - \operatorname{proj}_{V_k}(v_{k+1}) .
\]
The same arguments used in the previous exercise show that $\{w_1, \dots, w_{k+1}\}$ is an orthogonal basis for $V_{k+1}$.
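In code, the projection onto $V_k$ is just the sum of the one-dimensional projections along $w_1, \dots, w_k$. A minimal sketch in NumPy (the helper name `proj` and the sample vectors are our own, chosen for illustration):

```python
import numpy as np

def proj(v, ws):
    """Projection of v onto Span(ws), where ws is a list of mutually
    orthogonal, nonzero vectors w_1, ..., w_k."""
    return sum((v @ w) / (w @ w) * w for w in ws)

# Inductive step in R^3: extend the orthogonal set {w_1, w_2} by v_3.
w1 = np.array([1.0, 0.0, 0.0])
w2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 2.0, 3.0])

w3 = v3 - proj(v3, [w1, w2])   # w_{k+1} = v_{k+1} - proj_{V_k}(v_{k+1})
print(w3, w3 @ w1, w3 @ w2)    # [0. 0. 3.] 0.0 0.0
```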

Continuing in this fashion, we eventually reach the case $k + 1 = n$, at which point the algorithm is complete. Note that this procedure depends not only on the basis but also on the order in which we list the basis elements; changing the order will (most of the time) result in a different orthogonal basis for $\mathbb{R}^n$. Note also that this procedure works just as well if we start with a subspace $W$ of $\mathbb{R}^n$, together with a basis for that subspace. Summarizing: given any basis for a subspace $W$ of $\mathbb{R}^n$, the above procedure (the Gram-Schmidt process) converts it into an orthogonal basis for $W$.
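Putting the steps together, the whole procedure might be sketched as follows (a bare-bones NumPy version of the process described above, with no attention paid to numerical robustness; the name `gram_schmidt` and the sample basis are ours):

```python
import numpy as np

def gram_schmidt(vs):
    """Given linearly independent vectors v_1, ..., v_n, return the
    orthogonal vectors w_1, ..., w_n produced by the procedure above."""
    ws = []
    for v in vs:
        # Subtract from v its component in V_k = Span(w_1, ..., w_k).
        w = v - sum((v @ u) / (u @ u) * u for u in ws)
        ws.append(w)
    return ws

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
ws = gram_schmidt(vs)

# All pairwise dot products vanish (up to round-off).
print([float(ws[i] @ ws[j]) for i in range(3) for j in range(i + 1, 3)])
```

Feeding the same list in a different order will, in general, produce a different orthogonal basis, as noted above.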

The reason for calling this projection the $V_k$-component of $v$ is more or less clear, since the equation $v = \operatorname{proj}_{V_k}(v) + \bigl(v - \operatorname{proj}_{V_k}(v)\bigr)$ decomposes $v$ as a sum of i) its component in $V_k$, and ii) its component in $V_k^{\perp}$. As we have seen above, $V_k \cap V_k^{\perp} = \{0\}$, so this sum decomposition of $v$ is unique. In other words, $\operatorname{proj}_{V_k}(v)$ is the unique vector in $V_k$ whose difference with $v$ lies in $V_k^{\perp}$.
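This decomposition is easy to check numerically. In the self-contained sketch below (with an arbitrarily chosen orthogonal basis $\{w_1, w_2\}$ of a plane $V$ in $\mathbb{R}^3$), the leftover piece $q = v - \operatorname{proj}_V(v)$ is orthogonal to both basis vectors:

```python
import numpy as np

# An arbitrary orthogonal basis of a plane V in R^3.
w1 = np.array([1.0, 1.0, 0.0])
w2 = np.array([0.5, -0.5, 1.0])

v = np.array([2.0, 0.0, 1.0])

# V-component of v, and the leftover piece in the orthogonal complement.
p = (v @ w1) / (w1 @ w1) * w1 + (v @ w2) / (w2 @ w2) * w2
q = v - p

print(np.allclose(p + q, v))   # True: v = p + q
print(q @ w1, q @ w2)          # both ~0: q lies in V-perp
```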

As we are about to see, this is equivalent to saying that $\operatorname{proj}_{V_k}(v)$ is the vector in $V_k$ closest to $v$, where distance is measured by the standard Euclidean distance for $\mathbb{R}^n$ based on the scalar product.
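That minimization claim can be previewed numerically: no point of $V$ that we sample comes closer to $v$ than the projection does (a self-contained sketch, reusing the same arbitrary plane and vector as in the previous snippet):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same arbitrary orthogonal basis of a plane V in R^3, and the same v.
w1 = np.array([1.0, 1.0, 0.0])
w2 = np.array([0.5, -0.5, 1.0])
v = np.array([2.0, 0.0, 1.0])

p = (v @ w1) / (w1 @ w1) * w1 + (v @ w2) / (w2 @ w2) * w2

# Distance from v to its projection, versus to 1000 random points of V.
best = min(np.linalg.norm(v - (a * w1 + b * w2))
           for a, b in rng.uniform(-5, 5, size=(1000, 2)))
print(np.linalg.norm(v - p) <= best)   # True: no sampled point beats proj_V(v)
```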