- (a)
- Follow the prompts in the interactive to visualize the subspace and its orthogonal complement. What relationships do you observe between them?
- (b)
- It is possible to “break” this interactive for certain choices of the two vectors. If the vectors are scalar multiples of each other, then their span is a (Point / Line / Plane), and the dimension of the orthogonal complement is (1 / 2 / 3). The interactive does not accommodate this situation. To see what happens when the two vectors are scalar multiples of each other, see Practice Problem prob:brokenInteractive.
Orthogonal Complements and Decompositions
Orthogonal Complements
We will now consider the set of vectors that are orthogonal to every vector in a given subspace. As a quick example, consider the $xy$-plane $W$ in $\RR^3$. Clearly, every scalar multiple of the standard unit vector $\vec{e}_3$ is orthogonal to every vector in the $xy$-plane. We say that the set $\{c\vec{e}_3 : c \in \RR\}$ is the orthogonal complement of $W$, denoted $W^\perp$.
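This example can be checked numerically. The following sketch (the sample scalars and plane vectors are hypothetical, chosen for illustration) verifies that every tested multiple of $\vec{e}_3$ is orthogonal to every tested vector in the $xy$-plane.

```python
# Sketch with hypothetical sample values: every scalar multiple of
# e3 = (0, 0, 1) is orthogonal to every vector (a, b, 0) in the xy-plane.

def dot(u, v):
    """Dot product of two vectors given as lists."""
    return sum(ui * vi for ui, vi in zip(u, v))

e3 = [0, 0, 1]
for c in [-2, 0.5, 7]:                        # sample scalar multiples of e3
    for a, b in [(1, 0), (3, -4), (2.5, 6)]:  # sample xy-plane vectors
        assert dot([c * t for t in e3], [a, b, 0]) == 0
print("every tested multiple of e3 is orthogonal to the xy-plane")
```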
The following theorem collects some useful properties of the orthogonal complement; the proof of th:023783a and th:023783b is left as Practice Problem prob:8_1_6.
- Proof of Part th:023783c:
- We must show that $W^\perp = \{\vec{x} \in \RR^n : \vec{x} \dotp \vec{x}_{i} = 0 \mbox{ for each } i\}$, where $W = \mbox{span}\{\vec{x}_{1}, \vec{x}_{2}, \dots, \vec{x}_{k}\}$. To show that two sets are equal, we must show that all elements of one set are included in the other set, and then we must show the reverse inclusion.

If $\vec{x}$ is in $W^\perp$, then $\vec{x} \dotp \vec{x}_{i} = 0$ for all $i$ because each $\vec{x}_{i}$ is in $W$. This shows one inclusion. For the reverse inclusion, suppose that $\vec{x} \dotp \vec{x}_{i} = 0$ for all $i$; we need to show that $\vec{x}$ is in $W^\perp$, that is, that $\vec{x} \dotp \vec{y} = 0$ for each $\vec{y}$ in $W$. We can write $\vec{y} = c_{1}\vec{x}_{1} + c_{2}\vec{x}_{2} + \dots + c_{k}\vec{x}_{k}$, where each $c_{i}$ is a scalar. Then \begin{equation*} \vec{x} \dotp \vec{y} = c_{1}(\vec{x} \dotp \vec{x}_{1}) + c_{2}(\vec{x} \dotp \vec{x}_{2})+ \dots +c_{k}(\vec{x} \dotp \vec{x}_{k}) = c_{1}0 + c_{2}0 + \dots + c_{k}0 = 0 \end{equation*} as required, and the proof of equality is complete.
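The computation above can be traced with numbers. The sketch below uses hypothetical vectors $\vec{x}_1$, $\vec{x}_2$ and a vector $\vec{x}$ orthogonal to both, and confirms that $\vec{x}$ is then orthogonal to arbitrary linear combinations of them.

```python
# Sketch with hypothetical vectors: if x is orthogonal to each spanning
# vector, it is orthogonal to every linear combination of them.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x1 = [1, 0, 2]
x2 = [0, 1, -1]
x  = [-2, 1, 1]                  # orthogonal to both x1 and x2
assert dot(x, x1) == 0 and dot(x, x2) == 0

for c1, c2 in [(1, 1), (2, -3), (-5, 4)]:
    y = [c1 * u + c2 * v for u, v in zip(x1, x2)]   # y = c1*x1 + c2*x2
    assert dot(x, y) == 0       # x.y = c1(x.x1) + c2(x.x2) = 0
print("x is orthogonal to span{x1, x2}")
```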
Some of the important subspaces we studied earlier are orthogonal complements of each other. Recall the following definitions associated with an $m \times n$ matrix $A$.
- (a)
- The null space of $A$, $\mbox{null}(A)$, is a subspace of $\RR^n$.
- (b)
- The row space of $A$, $\mbox{row}(A)$, is a subspace of $\RR^n$.
- (c)
- The column space of $A$, $\mbox{col}(A)$, is a subspace of $\RR^m$.
Before proving this theorem, let’s examine what it says about a couple of our examples. In Example ex:023829, we solved a system of equations to find the unknown vectors orthogonal to two given vectors. Notice that this is equivalent to creating a matrix $A$ whose rows are the two given vectors, and then finding the null space of that matrix $A$. You can check that the vectors we found form a basis for $\mbox{null}(A)$.
We now return to the proof of Theorem th:4subspaces.
- Proof of Theorem th:4subspaces:
- Let $\vec{x} \in \RR^n$. Then $\vec{x}$ is in $(\mbox{row}(A))^\perp$ if and only if $\vec{x}$ is orthogonal to every row of $A$. But this is true if and only if $A\vec{x} = \vec{0}$, which is equivalent to saying $\vec{x}$ is in $\mbox{null}(A)$, which proves th:4subspacesa. To prove th:4subspacesb, we simply replace $A$ with $A^T$, and we may apply th:4subspacesa since $\mbox{col}(A) = \mbox{row}(A^T)$.
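The equivalence in the proof can be illustrated concretely. In the sketch below (the matrix and vectors are hypothetical examples), a vector lies in $\mbox{null}(A)$ exactly when it is orthogonal to every row of $A$.

```python
# Sketch with a hypothetical matrix: A@n == 0 exactly when n is orthogonal
# to each row of A, illustrating (row A)^perp = null(A).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[1, 0, 2],
     [0, 1, -1]]
n = [-2, 1, 1]                      # claimed null-space vector

# n is in null(A): each entry of A@n, i.e. each row-dot-n, is zero
assert [dot(row, n) for row in A] == [0, 0]

# a vector NOT orthogonal to every row of A is not in null(A)
m = [1, 1, 1]
assert any(dot(row, m) != 0 for row in A)
```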
Orthogonal Decomposition Theorem
Now that we have defined the orthogonal complement of a subspace, we are ready to state the main theorem of this section. If you have studied physics or multi-variable calculus, you are familiar with the idea of expressing a vector in as the sum of its tangential and normal components. (If you haven’t yet taken those courses, this section will help to prepare you for them!) The following theorem is a generalization of this idea.
- Proof
- This is an example of an “existence and uniqueness” theorem, so there are two things to prove. Suppose $\{\vec{f}_{1}, \vec{f}_{2}, \dots, \vec{f}_{m}\}$ is an orthogonal basis for $W$; then it is easy to show that the orthogonal decomposition exists for $\vec{x}$. We let \begin{equation*} \vec{w} = \left (\frac{\vec{x} \dotp \vec{f}_{1}}{\norm{\vec{f}_{1}}^2}\right )\vec{f}_{1} + \left (\frac{\vec{x} \dotp \vec{f}_{2}}{\norm{\vec{f}_{2}}^2}\right )\vec{f}_{2} + \dots + \left (\frac{\vec{x} \dotp \vec{f}_{m}}{\norm{\vec{f}_{m}}^2}\right )\vec{f}_{m}, \end{equation*} which is clearly in $W$, and we let $\vec{w}^\perp = \vec{x} - \vec{w}$. We then have $\vec{x} = \vec{w} + \vec{w}^\perp$, so we need to see that $\vec{w}^\perp$ is in $W^\perp$.
By Theorem th:023783 th:023783c, it suffices to show that $\vec{w}^\perp$ is orthogonal to each of the basis vectors $\vec{f}_{i}$. We compute, for $1 \le i \le m$, \begin{align*} \vec{f}_i \dotp \vec{w}^\perp &= \vec{f}_i \dotp (\vec{x} - \vec{w}) \\ &= \vec{f}_i \dotp \vec{x} - \vec{f}_i \dotp \left (\frac{\vec{x} \dotp \vec{f}_{1}}{\norm{\vec{f}_{1}}^2}\vec{f}_{1} + \frac{\vec{x} \dotp \vec{f}_{2}}{\norm{\vec{f}_{2}}^2}\vec{f}_{2}+ \dots +\frac{\vec{x} \dotp \vec{f}_{m}}{\norm{\vec{f}_{m}}^2}\vec{f}_{m}\right ) \\ &= \vec{f}_i \dotp \vec{x} - \left (\frac{\vec{x} \dotp \vec{f}_{i}}{\norm{\vec{f}_{i}}^2}\vec{f}_i \dotp \vec{f}_{i} \right ) = \vec{f}_i \dotp \vec{x} - (\vec{x} \dotp \vec{f}_i) = 0, \end{align*} where the third equality holds because $\vec{f}_i \dotp \vec{f}_{j} = 0$ whenever $j \neq i$.
This proves that $\vec{w}^\perp$ is in $W^\perp$.
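The existence argument can be traced numerically. The sketch below (with a hypothetical orthogonal basis and vector, and exact arithmetic via `fractions.Fraction`) computes $\vec{w}$ by the projection formula and checks that $\vec{w}^\perp = \vec{x} - \vec{w}$ is orthogonal to each basis vector.

```python
# Sketch with hypothetical data: compute w = proj_W(x) from an orthogonal
# basis and verify x = w + w_perp with w_perp orthogonal to the basis.
from fractions import Fraction

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def proj(x, basis):
    """Projection of x onto span(basis); basis is assumed ORTHOGONAL."""
    w = [Fraction(0)] * len(x)
    for f in basis:
        c = Fraction(dot(x, f), dot(f, f))      # c = (x.f) / ||f||^2
        w = [wi + c * fi for wi, fi in zip(w, f)]
    return w

f1, f2 = [1, 0, 1], [1, 0, -1]      # orthogonal basis of the xz-plane
x = [3, 1, 2]

w  = proj(x, [f1, f2])
wp = [xi - wi for xi, wi in zip(x, w)]          # w_perp = x - w

assert [wi + pi for wi, pi in zip(w, wp)] == x  # x = w + w_perp
assert dot(wp, f1) == 0 and dot(wp, f2) == 0    # w_perp is in W^perp
print("w =", w, " w_perp =", wp)
```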
The reason we need to prove that this decomposition is unique is that we started with a particular orthogonal basis for $W$. What would happen if we chose a different orthogonal basis?
Suppose that $\{\vec{f}^{\prime }_{1}, \vec{f}^{\prime }_{2}, \dots, \vec{f}^{\prime }_{m}\}$ is another orthogonal basis of $W$, and let \begin{equation*} \vec{w}^{\prime } = \left (\frac{\vec{x} \dotp \vec{f}^{\prime }_{1}}{\norm{\vec{f}^{\prime }_{1}}^2}\right )\vec{f}^{\prime }_{1} + \left (\frac{\vec{x} \dotp \vec{f}^{\prime }_{2}}{\norm{\vec{f}^{\prime }_{2}}^2}\right )\vec{f}^{\prime }_{2} + \dots +\left (\frac{\vec{x} \dotp \vec{f}^{\prime }_{m}}{\norm{\vec{f}^{\prime }_{m}}^2}\right )\vec{f}^{\prime }_{m} \end{equation*} As before, $\vec{w}^{\prime }$ is in $W$ and $\vec{x} - \vec{w}^{\prime }$ is in $W^\perp$, and we must show that $\vec{w} = \vec{w}^{\prime }$. To see this, write the vector $\vec{w} - \vec{w}^{\prime }$ as follows: \begin{equation*} \vec{w} - \vec{w}^{\prime } = (\vec{x} - \vec{w}^{\prime }) - (\vec{x} - \vec{w}) \end{equation*} This vector is in $W$ (because $\vec{w}$ and $\vec{w}^{\prime }$ are in $W$) and it is in $W^\perp$ (because $\vec{x} - \vec{w}^{\prime }$ and $\vec{x} - \vec{w}$ are in $W^\perp$), and so it must be the zero vector (it is orthogonal to itself!). This means $\vec{w} = \vec{w}^{\prime }$, as desired.
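The uniqueness claim can also be checked with numbers: projecting with two different orthogonal bases of the same subspace yields the same $\vec{w}$. The bases below are hypothetical examples, both spanning the $xz$-plane.

```python
# Sketch: the projection does not depend on which orthogonal basis is used.
from fractions import Fraction

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def proj(x, basis):
    """Projection of x onto span(basis); basis assumed orthogonal."""
    w = [Fraction(0)] * len(x)
    for f in basis:
        c = Fraction(dot(x, f), dot(f, f))
        w = [wi + c * fi for wi, fi in zip(w, f)]
    return w

# two different orthogonal bases of the same subspace W (the xz-plane)
basis1 = [[1, 0, 1], [1, 0, -1]]
basis2 = [[1, 0, 0], [0, 0, 1]]

for x in [[3, 1, 2], [0, 4, -1], [5, -2, 7]]:
    assert proj(x, basis1) == proj(x, basis2)   # same w either way
print("projection is independent of the orthogonal basis chosen")
```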
This gives us the following definition.
The final theorem of this section shows that projection onto a subspace of is actually a linear transformation from to .
- (a)
- $T$ is a linear transformation. (See Introduction to Linear Transformations.)
- (b)
- $\mbox{im}(T)$ is $W$ and $\mbox{ker}(T)$ is $W^\perp$. (See Image and Kernel of a Linear Transformation.)
- (c)
- $\mbox{dim}(W) + \mbox{dim}(W^\perp) = n$.
- Proof
- If $W = \{\vec{0}\}$, then $W^\perp = \RR^n$, and so $T(\vec{x}) = \vec{0}$ for all $\vec{x}$. Thus $T$ is the zero (linear) operator, so th:ProjLinTran_a, th:ProjLinTran_b, and th:ProjLinTran_c hold. Hence assume that $W \neq \{\vec{0}\}$.
- (a)
- If $\{\vec{q}_{1}, \vec{q}_{2}, \dots, \vec{q}_{m}\}$ is an orthonormal basis of $W$, then \begin{equation} \label{orthonormalUeq} T(\vec{x}) = (\vec{x} \dotp \vec{q}_{1})\vec{q}_{1} + (\vec{x} \dotp \vec{q}_{2})\vec{q}_{2} + \dots + (\vec{x} \dotp \vec{q}_{m})\vec{q}_{m} \quad \mbox{ for all }\vec{x}\in \RR ^n \end{equation} by the definition of the projection. Thus $T$ is a linear transformation because \begin{equation*} (\vec{x} + \vec{y}) \dotp \vec{q}_{i} = \vec{x} \dotp \vec{q}_{i} + \vec{y} \dotp \vec{q}_{i} \quad \mbox{ and } \quad (r\vec{x}) \dotp \vec{q}_{i} = r(\vec{x} \dotp \vec{q}_{i}) \quad \mbox{ for each } i. \end{equation*}
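The linearity argument can be illustrated concretely. Below, $T$ is implemented from the orthonormal-basis formula (the basis $\vec{q}_1, \vec{q}_2$ is a hypothetical example), and additivity and homogeneity are checked on sample vectors.

```python
# Sketch: T(x) = (x.q1)q1 + (x.q2)q2 for a hypothetical orthonormal basis,
# checked for additivity and homogeneity on sample inputs.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

q1, q2 = [1, 0, 0], [0, 0, 1]       # orthonormal basis of a subspace W

def T(x):
    """Projection of x onto W via the orthonormal-basis formula."""
    return [dot(x, q1) * a + dot(x, q2) * b for a, b in zip(q1, q2)]

x, y, r = [3, 1, 2], [-1, 4, 0], 5

# additivity: T(x + y) = T(x) + T(y)
assert T([xi + yi for xi, yi in zip(x, y)]) == \
       [a + b for a, b in zip(T(x), T(y))]

# homogeneity: T(r x) = r T(x)
assert T([r * xi for xi in x]) == [r * a for a in T(x)]
```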
- (b)
- We have that $\mbox{im}(T)$ is a subset of $W$ by (orthonormalUeq), because each $\vec{q}_{i}$ is in $W$. But if $\vec{x}$ is in $W$, then $\vec{x} = T(\vec{x})$ by (orthonormalUeq) and Theorem th:fourierexpansion applied to the space $W$. This shows that $W$ is a subset of $\mbox{im}(T)$, so $\mbox{im}(T)$ is $W$.

Now suppose that $\vec{x}$ is in $W^\perp$. Then $\vec{x} \dotp \vec{q}_{i} = 0$ for each $i$ (again because each $\vec{q}_{i}$ is in $W$), so $T(\vec{x}) = \vec{0}$ by (orthonormalUeq), and $\vec{x}$ is in $\mbox{ker}(T)$. Hence $W^\perp$ is a subset of $\mbox{ker}(T)$. On the other hand, Theorem th:023783 shows that $\vec{x} - T(\vec{x})$ is in $W^\perp$ for all $\vec{x}$ in $\RR^n$; if $\vec{x}$ is in $\mbox{ker}(T)$, then $\vec{x} = \vec{x} - T(\vec{x})$, and it follows that $\vec{x}$ is in $W^\perp$. Hence $\mbox{ker}(T)$ is $W^\perp$, proving th:ProjLinTran_b.
- (c)
- This follows from th:ProjLinTran_a, th:ProjLinTran_b, and the Rank-Nullity theorem (Theorem th:ranknullityforT).
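The dimension count in part th:ProjLinTran_c can be seen concretely for a projection onto a plane in $\RR^3$: the image has dimension 2, the kernel has dimension 1, and $2 + 1 = 3$. The sketch below (with a hypothetical orthonormal basis of $W$ and a basis vector of $W^\perp$) checks that $T$ fixes a basis of $W$ and sends a basis of $W^\perp$ to zero.

```python
# Sketch: T fixes W (im T = W, dim 2) and kills W^perp (ker T = W^perp,
# dim 1), so the dimensions add up to n = 3, as rank-nullity predicts.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

q1, q2 = [1, 0, 0], [0, 0, 1]   # orthonormal basis of W (the xz-plane)
n_vec  = [0, 1, 0]              # basis vector of W^perp

def T(x):
    return [dot(x, q1) * a + dot(x, q2) * b for a, b in zip(q1, q2)]

assert T(q1) == q1 and T(q2) == q2      # T fixes a basis of W
assert T(n_vec) == [0, 0, 0]            # T kills a basis of W^perp

dim_W, dim_W_perp, n_ambient = 2, 1, 3
assert dim_W + dim_W_perp == n_ambient  # rank-nullity
```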
Practice Problems
Problems OrthoDecomp2-OrthoDecomp6
In each case, write $\vec{x}$ as $\vec{x} = \vec{w} + \vec{w}^\perp$, where $\vec{w}$ is in $W$ and $\vec{w}^\perp$ is in $W^\perp$.
Text Source
This section was adapted from the second part of Section 8.1 of Keith Nicholson’s Linear Algebra with Applications. (CC-BY-NC-SA)
W. Keith Nicholson, Linear Algebra with Applications, Lyryx 2018, Open Edition, p. 415–423.
Example ex:OrthogDecomp was adapted from Example 4.148 of Ken Kuttler’s A First Course in Linear Algebra. (CC-BY)
Ken Kuttler, A First Course in Linear Algebra, Lyryx 2017, Open Edition, p. 249.