Orthogonal Complements and Decompositions
Orthogonal Complements
We will now consider the set of vectors that are orthogonal to every vector in a given subspace. As a quick example, consider the $xy$-plane in $\mathbb{R}^3$. Clearly, every scalar multiple of the standard unit vector $\mathbf{k}$ is orthogonal to every vector in the $xy$-plane. We say that the set $\{z\mathbf{k} : z \in \mathbb{R}\}$ (the $z$-axis) is the orthogonal complement of the $xy$-plane.
Orthogonal Complement of a Subspace of $\mathbb{R}^n$ If $U$ is a subspace of $\mathbb{R}^n$, define the orthogonal complement of $U$ (pronounced "$U$-perp") by
$$U^{\perp} = \{\mathbf{x} \in \mathbb{R}^n : \mathbf{x} \cdot \mathbf{u} = 0 \text{ for all } \mathbf{u} \in U\}.$$
The following theorem collects some useful properties of the orthogonal complement; the proofs of parts th:023783a and th:023783b are left as Practice Problem prob:8_1_6.
We must show that $U^{\perp} = \{\mathbf{x} \in \mathbb{R}^n : \mathbf{x} \cdot \mathbf{x}_i = 0 \text{ for each } i\}$, where $U = \operatorname{span}\{\mathbf{x}_1, \ldots, \mathbf{x}_k\}$. To show that two sets are equal, we must show that all elements of one set are included in the other set, and then we must show the reverse inclusion.
If $\mathbf{x}$ is in $U^{\perp}$, then $\mathbf{x} \cdot \mathbf{x}_i = 0$ for all $i$ because each $\mathbf{x}_i$ is in $U$. This shows one inclusion. For the reverse inclusion, suppose that $\mathbf{x} \cdot \mathbf{x}_i = 0$ for all $i$; we need to show that $\mathbf{x}$ is in $U^{\perp}$. That is, we need to show $\mathbf{x} \cdot \mathbf{u} = 0$ for each $\mathbf{u}$ in $U$. We can write $\mathbf{u} = t_1\mathbf{x}_1 + t_2\mathbf{x}_2 + \cdots + t_k\mathbf{x}_k$, where each $t_i$ is a scalar. Then
$$\mathbf{x} \cdot \mathbf{u} = t_1(\mathbf{x} \cdot \mathbf{x}_1) + t_2(\mathbf{x} \cdot \mathbf{x}_2) + \cdots + t_k(\mathbf{x} \cdot \mathbf{x}_k) = 0,$$
as required, and the proof of equality is complete.
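The criterion just proved is easy to check computationally: to test whether a vector lies in $U^{\perp}$, it suffices to test orthogonality against a spanning set of $U$. Here is a minimal sketch in pure Python (the helper names and the example vectors are our own illustrative choices, not from the text):

```python
def dot(x, y):
    """Dot product of two vectors represented as lists of numbers."""
    return sum(a * b for a, b in zip(x, y))

def in_U_perp(x, spanning_set):
    """By the theorem above, x is in U-perp if and only if
    x is orthogonal to every vector in a spanning set of U."""
    return all(dot(x, s) == 0 for s in spanning_set)

# Example: U = span{(1, 0, 0), (0, 1, 0)} is the xy-plane in R^3,
# so U-perp is the z-axis.
xy_plane = [[1, 0, 0], [0, 1, 0]]
print(in_U_perp([0, 0, 5], xy_plane))  # True: (0, 0, 5) lies on the z-axis
print(in_U_perp([1, 0, 1], xy_plane))  # False: not orthogonal to (1, 0, 0)
```

Note that the theorem is what makes this finite check valid: we never need to test $\mathbf{x}$ against infinitely many vectors of $U$, only against a spanning set.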
Find a basis for if in .
By Theorem th:023783, $\mathbf{x}$ is in $U^{\perp}$ if and only if $\mathbf{x}$ is orthogonal to both spanning vectors of $U$; that is, if and only if $\mathbf{x}$ satisfies the corresponding homogeneous system of equations.
Using Gaussian elimination on this system gives . You are asked to confirm this in
Practice Problem prob:Uperp (which serves as a wonderful review of concepts we covered earlier
in the course!).
Some of the important subspaces we studied earlier are orthogonal complements of each other. Recall the following definitions associated with an $m \times n$ matrix $A$.
(a)
The null space of $A$, $\operatorname{null}(A) = \{\mathbf{x} \in \mathbb{R}^n : A\mathbf{x} = \mathbf{0}\}$, is a subspace of $\mathbb{R}^n$.
(b)
The row space of $A$, $\operatorname{row}(A)$, the span of the rows of $A$, is a subspace of $\mathbb{R}^n$.
(c)
The column space of $A$, $\operatorname{col}(A)$, the span of the columns of $A$, is a subspace of $\mathbb{R}^m$.
In the following GeoGebra interactive, you can change the coordinates of the vectors $\mathbf{u}$ and $\mathbf{v}$ using the sliders. (At this stage make sure that $\mathbf{u}$ and $\mathbf{v}$ are not collinear.) Let $A$ be the matrix whose rows are $\mathbf{u}$ and $\mathbf{v}$. Then $\operatorname{row}(A) = \operatorname{span}\{\mathbf{u}, \mathbf{v}\}$. RIGHT-CLICK and DRAG to rotate the coordinate system for a better view.
(a)
Follow the prompts in the interactive to visualize $\operatorname{row}(A)$ and $\operatorname{null}(A)$. What relationships do you observe between $\operatorname{row}(A)$ and $\operatorname{null}(A)$?
(b)
It is possible to "break" this interactive (for certain choices of $\mathbf{u}$ and $\mathbf{v}$). If $\mathbf{u}$ and $\mathbf{v}$ are scalar multiples of each other, then $\operatorname{row}(A)$ is a (Point / Line / Plane), and the dimension of $\operatorname{null}(A)$ is (1 / 2 / 3). The interactive does not accommodate this situation. To see what happens when $\mathbf{u}$ and $\mathbf{v}$ are scalar multiples of each other, see Practice Problem prob:brokenInteractive.
Let $A$ be an $m \times n$ matrix. Then we have:
(a)
$(\operatorname{row}(A))^{\perp} = \operatorname{null}(A)$;
(b)
$(\operatorname{col}(A))^{\perp} = \operatorname{null}(A^T)$.
Before proving this theorem, let's examine what it says about a couple of our examples. In Example ex:023829, we solved a homogeneous linear system to find the unknown vectors $\mathbf{x}$. Notice that this is equivalent to creating a matrix $A$ whose rows are the two spanning vectors of $U$, and then finding the null space of that matrix $A$. You can check that this yields a basis for $U^{\perp}$.
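The relationship $(\operatorname{row}(A))^{\perp} = \operatorname{null}(A)$ can be spot-checked numerically: every row of $A$ must have zero dot product with every null-space basis vector. Below is a small sketch in pure Python, using a hypothetical matrix of our own choosing (not the matrix from the text):

```python
def dot(x, y):
    """Dot product of two vectors represented as lists."""
    return sum(a * b for a, b in zip(x, y))

# Hypothetical 2x3 matrix; solving Ax = 0 by hand gives
# x1 = t, x2 = -2t, x3 = t, so null(A) = span{(1, -2, 1)}.
A = [[1, 0, -1],
     [0, 1, 2]]
null_basis = [[1, -2, 1]]

# Theorem th:4subspaces(a): each row of A (and hence every vector
# in row(A)) is orthogonal to each basis vector of null(A).
checks = [dot(row, n) for row in A for n in null_basis]
print(checks)  # [0, 0]
```

Because orthogonality is linear in each argument, checking the rows against a null-space basis suffices to conclude that all of $\operatorname{row}(A)$ is orthogonal to all of $\operatorname{null}(A)$.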
Let
Verify each of the statements in Theorem th:4subspaces.
We compute the reduced row echelon form of $A$ to find bases for $\operatorname{null}(A)$, $\operatorname{row}(A)$, and $\operatorname{col}(A)$. After some work we arrive at bases for $\operatorname{row}(A)$ and $\operatorname{null}(A)$. (See examples in Subspaces of $\mathbb{R}^n$ Associated with Matrices for the details.) It is easy to check that each of the basis vectors of $\operatorname{row}(A)$ is orthogonal to each of the basis vectors of $\operatorname{null}(A)$, demonstrating the first part of Theorem th:4subspaces. You will be asked to demonstrate the second part of Theorem th:4subspaces for this example in Practice Problem prob:finishex4subspaces.
We now return to the proof of Theorem th:4subspaces.
Let $\mathbf{x} \in \mathbb{R}^n$. Then $\mathbf{x}$ is in $(\operatorname{row}(A))^{\perp}$ if and only if $\mathbf{x}$ is orthogonal to every row of $A$.
But this is true if and only if $A\mathbf{x} = \mathbf{0}$, which is equivalent to saying $\mathbf{x}$ is in $\operatorname{null}(A)$, which proves th:4subspacesa. To prove th:4subspacesb, we simply replace $A$ with $A^T$, and we may apply th:4subspacesa since $\operatorname{col}(A) = \operatorname{row}(A^T)$.
Orthogonal Decomposition Theorem
Now that we have defined the orthogonal complement of a subspace, we are ready to
state the main theorem of this section. If you have studied physics or multivariable calculus, you are familiar with the idea of expressing a vector in $\mathbb{R}^3$ as the sum of its tangential and normal components. (If you haven't yet taken those courses, this section will help to prepare you for them!) The following theorem is a generalization of this idea.
Orthogonal Decomposition Theorem Let $U$ be a subspace of $\mathbb{R}^n$ and let $\mathbf{x} \in \mathbb{R}^n$. Then there exist unique vectors $\mathbf{u} \in U$ and $\mathbf{w} \in U^{\perp}$ such that $\mathbf{x} = \mathbf{u} + \mathbf{w}$.
Proof
This is an example of an "existence and uniqueness" theorem, so there are two things to prove. Suppose $\{\mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_m\}$ is an orthogonal basis for $U$; then it is easy to show that an orthogonal decomposition exists for $\mathbf{x}$. We let
$$\mathbf{u} = \frac{\mathbf{x} \cdot \mathbf{f}_1}{\lVert \mathbf{f}_1 \rVert^2}\mathbf{f}_1 + \frac{\mathbf{x} \cdot \mathbf{f}_2}{\lVert \mathbf{f}_2 \rVert^2}\mathbf{f}_2 + \cdots + \frac{\mathbf{x} \cdot \mathbf{f}_m}{\lVert \mathbf{f}_m \rVert^2}\mathbf{f}_m,$$
which is clearly in $U$, and we let $\mathbf{w} = \mathbf{x} - \mathbf{u}$. Then $\mathbf{u} + \mathbf{w} = \mathbf{x}$, so we need to see that $\mathbf{w}$ is in $U^{\perp}$.
By part th:023783c of Theorem th:023783, it suffices to show that $\mathbf{w}$ is orthogonal to each of the basis vectors $\mathbf{f}_i$. We compute, for $i = 1, 2, \ldots, m$,
$$\mathbf{w} \cdot \mathbf{f}_i = (\mathbf{x} - \mathbf{u}) \cdot \mathbf{f}_i = \mathbf{x} \cdot \mathbf{f}_i - \frac{\mathbf{x} \cdot \mathbf{f}_i}{\lVert \mathbf{f}_i \rVert^2}(\mathbf{f}_i \cdot \mathbf{f}_i) = \mathbf{x} \cdot \mathbf{f}_i - \mathbf{x} \cdot \mathbf{f}_i = 0.$$
This proves that $\mathbf{w}$ is in $U^{\perp}$.
The reason we need to prove that this decomposition is unique is that we started with a particular orthogonal basis for $U$; what would happen if we chose a different orthogonal basis?
Suppose that $\{\mathbf{g}_1, \mathbf{g}_2, \ldots, \mathbf{g}_m\}$ is another orthogonal basis of $U$, and let
$$\mathbf{u}' = \frac{\mathbf{x} \cdot \mathbf{g}_1}{\lVert \mathbf{g}_1 \rVert^2}\mathbf{g}_1 + \frac{\mathbf{x} \cdot \mathbf{g}_2}{\lVert \mathbf{g}_2 \rVert^2}\mathbf{g}_2 + \cdots + \frac{\mathbf{x} \cdot \mathbf{g}_m}{\lVert \mathbf{g}_m \rVert^2}\mathbf{g}_m.$$
As before, $\mathbf{u}'$ is in $U$ and $\mathbf{w}' = \mathbf{x} - \mathbf{u}'$ is in $U^{\perp}$, and we must show that $\mathbf{u}' = \mathbf{u}$. To see this, write the vector $\mathbf{u} - \mathbf{u}'$ as follows:
$$\mathbf{u} - \mathbf{u}' = (\mathbf{x} - \mathbf{u}') - (\mathbf{x} - \mathbf{u}) = \mathbf{w}' - \mathbf{w}.$$
This vector is in $U$ (because $\mathbf{u}$ and $\mathbf{u}'$ are in $U$) and it is in $U^{\perp}$ (because $\mathbf{w}$ and $\mathbf{w}'$ are in $U^{\perp}$), and so it must be the zero vector (it is orthogonal to itself!). This means $\mathbf{u}' = \mathbf{u}$, as desired.
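The existence half of the proof is constructive, so it translates directly into code. The sketch below (pure Python; the orthogonal basis and the vector $\mathbf{x}$ are hypothetical examples of our own) computes $\mathbf{u}$ by the projection formula from the proof and verifies that $\mathbf{w} = \mathbf{x} - \mathbf{u}$ is orthogonal to the basis of $U$:

```python
def dot(x, y):
    """Dot product of two vectors represented as lists."""
    return sum(a * b for a, b in zip(x, y))

def proj(x, orth_basis):
    """Sum of (x . f_i)/||f_i||^2 * f_i over an orthogonal basis of U."""
    u = [0.0] * len(x)
    for f in orth_basis:
        c = dot(x, f) / dot(f, f)  # coefficient of f in the formula
        u = [ui + c * fi for ui, fi in zip(u, f)]
    return u

# Hypothetical data: an orthogonal basis of a plane U in R^3, and x.
basis = [[1, 1, 0], [1, -1, 0]]
x = [3, 4, 5]

u = proj(x, basis)                      # the component of x in U
w = [xi - ui for xi, ui in zip(x, u)]   # the component in U-perp
print(u, w)  # [3.0, 4.0, 0.0] [0.0, 0.0, 5.0]
print(all(dot(w, f) == 0 for f in basis))  # True: w is in U-perp
```

Note that the formula requires an orthogonal basis; for an arbitrary basis one would first run the Gram-Schmidt procedure.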
Let $U$ be a subspace given by a spanning set, and let $\mathbf{x}$ be a given vector. Write $\mathbf{x}$ as the sum of a vector in $U$ and a vector in $U^{\perp}$.
Following the notation of Theorem th:OrthoDecomp, we will write $\mathbf{x} = \mathbf{u} + \mathbf{w}$, where $\mathbf{u}$ is in $U$ and $\mathbf{w}$ is in $U^{\perp}$. Let $\mathbf{f}_1$ and $\mathbf{f}_2$ denote the given spanning vectors. We observe that we have the good fortune that $\{\mathbf{f}_1, \mathbf{f}_2\}$ is an orthogonal basis for $U$ (otherwise, our first step would be to use the Gram-Schmidt procedure to create an orthogonal basis for $U$). We compute:
and then
This gives us
The final theorem of this section shows that projection onto a subspace $U$ of $\mathbb{R}^n$ is actually a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^n$.
If $\{\mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_m\}$ is an orthonormal basis of $U$, then
$$\operatorname{proj}_U(\mathbf{x}) = (\mathbf{x} \cdot \mathbf{f}_1)\mathbf{f}_1 + (\mathbf{x} \cdot \mathbf{f}_2)\mathbf{f}_2 + \cdots + (\mathbf{x} \cdot \mathbf{f}_m)\mathbf{f}_m \quad \text{(orthonormalUeq)}$$
by the definition of the projection. Thus $T(\mathbf{x}) = \operatorname{proj}_U(\mathbf{x})$ is a linear transformation because each term $(\mathbf{x} \cdot \mathbf{f}_i)\mathbf{f}_i$ is linear in $\mathbf{x}$.
(b)
We have that $\operatorname{im}(T)$ is a subset of $U$ by (orthonormalUeq), because each $\mathbf{f}_i$ is in $U$. But if $\mathbf{x}$ is in $U$, then $\mathbf{x} = T(\mathbf{x})$ by (orthonormalUeq) and Theorem th:fourierexpansion applied to the space $U$. This shows that $U$ is a subset of $\operatorname{im}(T)$, so $\operatorname{im}(T)$ is $U$.
Now suppose that $\mathbf{x}$ is in $U^{\perp}$. Then $\mathbf{x} \cdot \mathbf{f}_i = 0$ for each $i$ (again because each $\mathbf{f}_i$ is in $U$), so $T(\mathbf{x}) = \mathbf{0}$ by (orthonormalUeq). Hence $\mathbf{x}$ is in $\operatorname{ker}(T)$. On the other hand, Theorem th:023783 shows that $\mathbf{x} - T(\mathbf{x})$ is in $U^{\perp}$ for all $\mathbf{x}$ in $\mathbb{R}^n$, and it follows that every $\mathbf{x}$ in $\operatorname{ker}(T)$ is in $U^{\perp}$ (for such $\mathbf{x}$ we have $\mathbf{x} = \mathbf{x} - T(\mathbf{x})$). Hence $\operatorname{ker}(T)$ is $U^{\perp}$, proving th:ProjLinTran_b.
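Linearity of $T(\mathbf{x}) = \operatorname{proj}_U(\mathbf{x})$ can also be spot-checked numerically. Here is a short pure-Python sketch (the basis, vectors, and scalars are hypothetical choices of our own, and the projection helper accepts any orthogonal basis, not just an orthonormal one):

```python
def dot(x, y):
    """Dot product of two vectors represented as lists."""
    return sum(a * b for a, b in zip(x, y))

def proj(x, orth_basis):
    """Projection onto span(orth_basis); the basis is assumed orthogonal."""
    u = [0.0] * len(x)
    for f in orth_basis:
        c = dot(x, f) / dot(f, f)
        u = [ui + c * fi for ui, fi in zip(u, f)]
    return u

basis = [[1, 1, 0], [1, -1, 0]]   # orthogonal basis of a plane U in R^3
x, y = [3, 4, 5], [-1, 2, 7]
a, b = 2, -3

# Linearity: T(a*x + b*y) should equal a*T(x) + b*T(y).
lhs = proj([a * xi + b * yi for xi, yi in zip(x, y)], basis)
rhs = [a * ui + b * vi
       for ui, vi in zip(proj(x, basis), proj(y, basis))]
print(lhs == rhs)  # True
```

One numerical check is of course not a proof; the proof above is what guarantees that this equality holds for every choice of vectors and scalars.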
Solve the linear system in Example ex:023829 and use your result to find a basis for $U^{\perp}$ for the subspace $U$ given in that example.
In this problem we return to the GeoGebra interactive in Exploration exp:discoverortho, and we consider the case where the matrix $A$ has rank 1 (which Exploration exp:discoverortho could not handle). This time, the sliders define row 1 of matrix $A$, and row 2 will be 2 times row 1. Follow the prompts in the interactive to visualize $\operatorname{row}(A)$ and $\operatorname{null}(A)$. What relationships do you observe between $\operatorname{row}(A)$ and $\operatorname{null}(A)$?
In this problem you are asked to finish Example ex:4subspaces. More specifically, for the matrix $A$ of that example, show that $(\operatorname{col}(A))^{\perp} = \operatorname{null}(A^T)$. It may be helpful to consult Subspaces of $\mathbb{R}^n$ Associated with Matrices, where we found a basis for the column space of this matrix.