- (a)
- ,
- (b)
- ,
Property prop:inner_prod_5 fails.
- (c)
- , , where is complex conjugation
Property prop:inner_prod_1 fails, as sometimes we get a complex number. However, we will return to this definition in Complex Matrices – see Definition def:025549.
- (d)
- ,
Property prop:inner_prod_5 fails.
- (e)
- ,
- (f)
- ,
Property prop:inner_prod_5 fails.
Inner Product Spaces
We have used the dot product in $\RR^n$ to compute the length of vectors (Corollary cor:length_via_dotprod) and also the angle between vectors. The goal of this section is to define an inner product on an arbitrary vector space over the real numbers. The dot product is an example of an inner product on $\RR^n$.
The next example is important in analysis.
If $\vec{v}$ is any vector, then, using Property prop:inner_prod_3 of Definition def:innerproductspace, we get \begin{equation*} \langle \vec{0}, \vec{v} \rangle = \langle \vec{0} + \vec{0}, \vec{v} \rangle = \langle \vec{0}, \vec{v} \rangle + \langle \vec{0}, \vec{v} \rangle \end{equation*} and it follows that the number $\langle \vec{0}, \vec{v} \rangle$ must be zero. This observation is recorded for reference in the following theorem, along with several other properties of inner products. The other proofs are left as Practice Problem ex:10_1_20.
If $\langle\ ,\ \rangle$ is an inner product on a space $V$, then, given $\vec{u}$, $\vec{v}$, and $\vec{w}$ in $V$, \begin{equation*} \langle r\vec{u} + s\vec{v}, \vec{w} \rangle = \langle r\vec{u}, \vec{w} \rangle + \langle s\vec{v}, \vec{w} \rangle = r\langle \vec{u}, \vec{w} \rangle + s\langle \vec{v}, \vec{w} \rangle \end{equation*} for all $r$ and $s$ in $\RR$ by Property prop:inner_prod_3 and Property prop:inner_prod_4 of Definition def:innerproductspace. Moreover, there is nothing special about the fact that there are two terms in the linear combination or that it is in the first component: \begin{equation*} \langle r_1\vec{v}_1 + r_2\vec{v}_2 + \dots + r_n\vec{v}_n, \vec{w} \rangle = r_1\langle \vec{v}_1, \vec{w} \rangle + r_2\langle \vec{v}_2, \vec{w} \rangle + \dots + r_n\langle \vec{v}_n, \vec{w} \rangle \end{equation*} and \begin{equation*} \langle \vec{v}, s_1\vec{w}_1 + s_2\vec{w}_2 + \dots + s_m\vec{w}_m \rangle = s_1\langle \vec{v}, \vec{w}_1 \rangle + s_2\langle \vec{v}, \vec{w}_2 \rangle + \dots + s_m\langle \vec{v}, \vec{w}_m \rangle \end{equation*} hold for all $\vec{v}_i$ and $\vec{w}_j$ in $V$ and all $r_1, \dots, r_n$ and $s_1, \dots, s_m$ in $\RR$. These results are described by saying that inner products “preserve” linear combinations. For example, \begin{align*} \langle 2\vec{u} - \vec{v}, 3\vec{u} + 2\vec{v} \rangle &= \langle 2\vec{u}, 3\vec{u} \rangle + \langle 2\vec{u}, 2\vec{v} \rangle + \langle -\vec{v}, 3\vec{u} \rangle + \langle -\vec{v}, 2\vec{v} \rangle \\ &= 6 \langle \vec{u}, \vec{u} \rangle + 4 \langle \vec{u}, \vec{v} \rangle -3 \langle \vec{v}, \vec{u} \rangle - 2 \langle \vec{v}, \vec{v} \rangle \\ &= 6 \langle \vec{u}, \vec{u} \rangle + \langle \vec{u}, \vec{v} \rangle - 2 \langle \vec{v}, \vec{v} \rangle \end{align*}
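The expansion above can be checked numerically. The following is a small sanity check of my own (the sample vectors and the use of the dot product on $\RR^3$ are choices made here, not from the text):

```python
# Sanity check of the expansion <2u - v, 3u + 2v> with the ordinary dot
# product on R^3 (the vectors u, v are arbitrary sample values).
def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

u = [1.0, 2.0, -1.0]
v = [0.5, -3.0, 2.0]

lhs = dot([2*p - q for p, q in zip(u, v)],    # 2u - v
          [3*p + 2*q for p, q in zip(u, v)])  # 3u + 2v
rhs = 6*dot(u, u) + dot(u, v) - 2*dot(v, v)   # 6<u,u> + <u,v> - 2<v,v>
print(abs(lhs - rhs) < 1e-12)  # True
```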
If $A$ is a symmetric $n \times n$ matrix and $\vec{x}$ and $\vec{y}$ are columns in $\RR^n$, we regard the $1 \times 1$ matrix $\vec{x}^TA\vec{y}$ as a number. If we write \begin{equation*} \langle \vec{x}, \vec{y} \rangle = \vec{x}^TA\vec{y} \quad \mbox{ for all columns } \vec{x}, \vec{y} \mbox{ in } \RR ^n \end{equation*} then Properties prop:inner_prod_1-prop:inner_prod_4 of Definition def:innerproductspace follow from matrix arithmetic (only Property prop:inner_prod_2 of Definition def:innerproductspace requires that $A$ is symmetric). Property prop:inner_prod_5 of Definition def:innerproductspace reads \begin{equation*} \vec{x}^TA \vec{x} > 0 \quad \mbox{ for all columns } \vec{x} \neq \vec{0} \mbox{ in } \RR ^n \end{equation*} and this condition characterizes the positive definite matrices (Theorem thm:024830). This proves the first assertion in the next theorem.
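As a quick numerical illustration (the matrix and the columns below are sample values of my own, not from the text), a symmetric positive definite matrix does give a symmetric, positive pairing:

```python
# For a symmetric positive definite A, <x, y> = x^T A y is symmetric in x, y
# and <x, x> > 0 for x != 0.  A, x, y below are arbitrary sample values.
def ip_A(x, y):
    # compute x^T A y for the 2x2 matrix A
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

A = [[5.0, 2.0],
     [2.0, 1.0]]  # symmetric; leading minors 5 and 1 are positive, so positive definite

x, y = [1.0, -2.0], [3.0, 0.5]
print(abs(ip_A(x, y) - ip_A(y, x)) < 1e-12)  # symmetry: True
print(ip_A(x, x) > 0 and ip_A(y, y) > 0)     # positivity for these nonzero columns: True
```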
- Proof
- Given an inner product $\langle\ ,\ \rangle$ on $\RR^n$, let $\{\vec{e}_1, \vec{e}_2, \dots, \vec{e}_n\}$ be the standard basis of $\RR^n$. If $\vec{x} = x_1\vec{e}_1 + x_2\vec{e}_2 + \dots + x_n\vec{e}_n$ and $\vec{y} = y_1\vec{e}_1 + y_2\vec{e}_2 + \dots + y_n\vec{e}_n$ are two vectors in $\RR^n$, compute $\langle \vec{x}, \vec{y} \rangle$ by adding the inner product of each term $x_i\vec{e}_i$ to each term $y_j\vec{e}_j$. The result is a double sum. \begin{equation*} \langle \vec{x}, \vec{y} \rangle = \displaystyle \sum _{i = 1}^{n} \sum _{j = 1}^{n} \langle x_i \vec{e}_i, y_j\vec{e}_j \rangle = \displaystyle \sum _{i = 1}^{n} \sum _{j = 1}^{n} x_i \langle \vec{e}_i, \vec{e}_j \rangle y_j \end{equation*} As the reader can verify, this is a matrix product: \begin{equation*} \langle \vec{x}, \vec{y} \rangle = \left [ \begin{array}{cccc} x_1 & x_2 & \cdots & x_n \\ \end{array} \right ] \left [ \begin{array}{cccc} \langle \vec{e}_1, \vec{e}_1 \rangle & \langle \vec{e}_1, \vec{e}_2 \rangle & \cdots & \langle \vec{e}_1, \vec{e}_n \rangle \\ \langle \vec{e}_2, \vec{e}_1 \rangle & \langle \vec{e}_2, \vec{e}_2 \rangle & \cdots & \langle \vec{e}_2, \vec{e}_n \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle \vec{e}_n, \vec{e}_1 \rangle & \langle \vec{e}_n, \vec{e}_2 \rangle & \cdots & \langle \vec{e}_n, \vec{e}_n \rangle \\ \end{array} \right ] \left [ \begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array} \right ] \end{equation*} Hence $\langle \vec{x}, \vec{y} \rangle = \vec{x}^TA\vec{y}$, where $A$ is the $n \times n$ matrix whose $(i, j)$-entry is $\langle \vec{e}_i, \vec{e}_j \rangle$. The fact that \begin{equation*} \langle \vec{e}_{i}, \vec{e}_{j}\rangle = \langle \vec{e}_{j}, \vec{e}_{i}\rangle \end{equation*} shows that $A$ is symmetric. Finally, $A$ is positive definite by Theorem thm:024830.
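The construction in the proof can be mimicked numerically. The sketch below uses an inner product of my own choosing on $\RR^2$ (not one from the text), rebuilds the matrix of inner products of standard basis vectors, and checks the matrix-product formula:

```python
# Sketch of the proof's construction: form the matrix A with (i, j)-entry
# <e_i, e_j> and confirm <x, y> = x^T A y.  The inner product below is a
# sample of my own choosing on R^2.
def ip(x, y):
    # sample inner product: <x, y> = 2*x1*y1 + x1*y2 + x2*y1 + 3*x2*y2
    return 2*x[0]*y[0] + x[0]*y[1] + x[1]*y[0] + 3*x[1]*y[1]

e = [[1.0, 0.0], [0.0, 1.0]]                                # standard basis
A = [[ip(e[i], e[j]) for j in range(2)] for i in range(2)]  # matrix from the proof

x, y = [1.0, -2.0], [3.0, 0.5]
xTAy = sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))
print(abs(ip(x, y) - xTAy) < 1e-12)  # True
```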
Thus, just as every linear operator on $\RR^n$ corresponds to an $n \times n$ matrix, every inner product on $\RR^n$ corresponds to a positive definite $n \times n$ matrix. In particular, the dot product corresponds to the identity matrix $I_n$.
Let $\langle\ ,\ \rangle$ be an inner product on $\RR^n$ given as in Theorem thm:030372 by a positive definite matrix $A$. If $\vec{x}$ is a column in $\RR^n$, then $\langle \vec{x}, \vec{x} \rangle = \vec{x}^TA\vec{x}$ is an expression in the variables $x_1, x_2, \dots, x_n$ called a quadratic form. For more on quadratic forms, see Section 8.8 of [Nicholson], pp. 472–482.
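For a concrete sense of a quadratic form, here is a small check with a sample $2 \times 2$ symmetric matrix of my own (the entries $a$, $b$, $c$ are illustrative values, not from the text):

```python
# For A = [[a, b], [b, c]], the quadratic form <x, x> = x^T A x expands to
# a*x1^2 + 2*b*x1*x2 + c*x2^2.  The entries are arbitrary sample values.
a, b, c = 2.0, 1.0, 3.0

def q_expanded(x1, x2):
    return a*x1*x1 + 2*b*x1*x2 + c*x2*x2        # expanded quadratic form

def q_matrix(x1, x2):
    return x1*(a*x1 + b*x2) + x2*(b*x1 + c*x2)  # x^T A x computed directly

print(abs(q_expanded(1.5, -2.0) - q_matrix(1.5, -2.0)) < 1e-12)  # True
```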
Norm and Distance
Note that Property prop:inner_prod_5 of Definition def:innerproductspace guarantees that $\langle \vec{v}, \vec{v} \rangle \geq 0$, so $\norm{ \vec{v} } = \sqrt{\langle \vec{v}, \vec{v} \rangle}$ is a real number.
The norm of a continuous function $f$ (with the inner product from Example exa:030334) is given by \begin{equation*} \norm{ f } = \sqrt{\int _{a}^{b} f(x)^2dx} \end{equation*} Hence $\norm{ f }^2$ is the area beneath the graph of $f(x)^2$ between $x = a$ and $x = b$.
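This norm can be approximated by numerical integration. In the sketch below, the function $f(x) = x$ on $[0, 1]$ is a sample choice of my own; its exact norm is $\sqrt{\int_0^1 x^2\,dx} = \sqrt{1/3}$:

```python
# Approximate ||f|| = sqrt(int_a^b f(x)^2 dx) by the midpoint rule.
# The test function f(x) = x on [0, 1] is a sample choice.
import math

def fn_norm(f, a, b, n=100000):
    # midpoint-rule approximation of sqrt(integral of f(x)^2 over [a, b])
    h = (b - a) / n
    return math.sqrt(sum(f(a + (i + 0.5) * h) ** 2 for i in range(n)) * h)

approx = fn_norm(lambda x: x, 0.0, 1.0)
exact = math.sqrt(1.0 / 3.0)
print(abs(approx - exact) < 1e-6)  # True
```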
A vector $\vec{v}$ in an inner product space $V$ is called a unit vector if $\norm{ \vec{v} } = 1$. The set of all unit vectors in $V$ is called the unit ball in $V$. For example, if $V = \RR^2$ (with the dot product) and $\vec{v} = (x, y)$, then \begin{equation*} \norm{ \vec{v} }^2 = 1 \quad \mbox{ if and only if } \quad x^2 + y^2 = 1 \end{equation*} Hence the unit ball in $\RR^2$ is the unit circle with centre at the origin and radius $1$. However, the shape of the unit ball varies with the choice of inner product.
Let $a > 0$ and $b > 0$. If $\vec{v} = (x, y)$ and $\vec{w} = (x_1, y_1)$, define an inner product on $\RR^2$ by \begin{equation*} \langle \vec{v}, \vec{w} \rangle = \frac{xx_1}{a^2} + \frac{yy_1}{b^2} \end{equation*} The reader can verify (Practice Problem ex:10_1_5) that this is indeed an inner product. In this case \begin{equation*} \norm{ \vec{v} }^2 = 1 \quad \mbox{ if and only if } \quad \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 \end{equation*} so the unit ball is the ellipse shown in the diagram.
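This can be seen numerically: points on the ellipse are exactly the unit vectors for this inner product. The semi-axes $a = 3$, $b = 2$ below are sample values of my own choosing:

```python
# Points (a*cos t, b*sin t) lie on the ellipse x^2/a^2 + y^2/b^2 = 1 and are
# unit vectors for <v, w> = x*x1/a^2 + y*y1/b^2.  a = 3, b = 2 are samples.
import math

a, b = 3.0, 2.0
def ip(v, w):
    return v[0]*w[0]/a**2 + v[1]*w[1]/b**2

for t in (0.0, 0.7, 2.5):
    v = (a*math.cos(t), b*math.sin(t))   # a point on the ellipse
    print(abs(ip(v, v) - 1.0) < 1e-12)   # True for each point
```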
Example exa:030469 graphically illustrates the fact that norms and distances in an inner product space $V$ vary with the choice of inner product on $V$.
The next theorem reveals an important and useful fact about the relationship between norms and inner products, extending the Cauchy inequality for $\RR^n$ (Theorem th:CS).
- Proof of Cauchy-Schwarz Inequality
- Write $a = \norm{ \vec{v} }$ and $b = \norm{ \vec{w} }$. Using Theorem thm:030346 we
compute: \begin{equation} \label{eq:thm10_1_4} \begin{split} \norm{ b\vec{v} - a \vec{w} }^2 &= b^2 \norm{ \vec{v} }^2 - 2ab \langle \vec{v}, \vec{w} \rangle + a^2\norm{ \vec{w} }^2 = 2ab(ab - \langle \vec{v}, \vec{w} \rangle ) \\ \norm{ b\vec{v} + a \vec{w} }^2 &= b^2 \norm{ \vec{v} }^2 + 2ab \langle \vec{v}, \vec{w} \rangle + a^2\norm{ \vec{w} }^2 = 2ab(ab + \langle \vec{v}, \vec{w} \rangle ) \\ \end{split} \end{equation}
It follows that $ab - \langle \vec{v}, \vec{w} \rangle \geq 0$ and $ab + \langle \vec{v}, \vec{w} \rangle \geq 0$, and hence that $|\langle \vec{v}, \vec{w} \rangle | \leq ab$. But then $\langle \vec{v}, \vec{w} \rangle ^2 \leq a^2b^2 = \norm{ \vec{v} }^2 \norm{ \vec{w} }^2$, as desired.
Conversely, if $\langle \vec{v}, \vec{w} \rangle ^2 = \norm{ \vec{v} }^2 \norm{ \vec{w} }^2$ then $\langle \vec{v}, \vec{w} \rangle = \pm ab$. Hence (eq:thm10_1_4) shows that $b\vec{v} - a\vec{w} = \vec{0}$ or $b\vec{v} + a\vec{w} = \vec{0}$. It follows that one of $\vec{v}$ and $\vec{w}$ is a scalar multiple of the other, even if $a = 0$ or $b = 0$.
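A quick numerical illustration of the Cauchy-Schwarz inequality with the dot product (the vectors below are arbitrary sample values of my own):

```python
# <v, w>^2 <= ||v||^2 ||w||^2 for the dot product, with equality exactly when
# one vector is a scalar multiple of the other.  Vectors are sample values.
def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

v = [1.0, 2.0, 3.0]
w = [-2.0, 0.5, 4.0]
print(dot(v, w)**2 <= dot(v, v) * dot(w, w))   # True (strict inequality here)

w2 = [2.0, 4.0, 6.0]                           # w2 = 2v: the equality case
print(abs(dot(v, w2)**2 - dot(v, v) * dot(w2, w2)) < 1e-9)  # True
```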
Another famous inequality, the so-called triangle inequality (See Triangle Inequality in the Appendix), also comes from the Cauchy-Schwarz inequality. It is included in the following list of basic properties of the norm of a vector.
- Proof
- Because $\norm{ \vec{v} } = \sqrt{\langle \vec{v}, \vec{v} \rangle}$, properties thm:030504a and thm:030504b follow immediately from thm:030346c and thm:030346d of Theorem thm:030346. As
to thm:030504c, compute \begin{equation*} \norm{ r\vec{v} } ^2 = \langle r\vec{v}, r\vec{v} \rangle = r^2\langle \vec{v}, \vec{v} \rangle = r^2\norm{ \vec{v} }^2 \end{equation*}
Hence thm:030504c follows by taking positive square roots. Finally, the fact that $\langle \vec{v}, \vec{w} \rangle \leq \norm{ \vec{v} } \norm{ \vec{w} }$ by the
Cauchy-Schwarz inequality gives \begin{align*} \norm{ \vec{v} + \vec{w} } ^2 = \langle \vec{v} + \vec{w}, \vec{v} + \vec{w} \rangle &= \norm{ \vec{v} } ^2 + 2 \langle \vec{v}, \vec{w} \rangle + \norm{ \vec{w} } ^2 \\ &\leq \norm{ \vec{v} } ^2 + 2 \norm{ \vec{v} } \norm{ \vec{w} } + \norm{ \vec{w} } ^2 \\ &= (\norm{ \vec{v} } + \norm{ \vec{w} })^2 \end{align*}
Hence thm:030504d follows by taking positive square roots.
It is worth noting that the usual triangle inequality for absolute values, \begin{equation*} | r + s | \leq |r| + |s| \mbox{ for all real numbers } r \mbox{ and } s \end{equation*} is a special case of thm:030504d where $V = \RR$ and the dot product is used.
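A small numerical check of the triangle inequality for the dot-product norm on $\RR^3$ (the vectors are sample values of my own):

```python
# ||v + w|| <= ||v|| + ||w|| for the dot-product norm.  v, w are samples.
import math

def norm(x):
    return math.sqrt(sum(t * t for t in x))

v = [3.0, -1.0, 2.0]
w = [0.5, 4.0, -2.5]
vw = [p + q for p, q in zip(v, w)]    # the sum v + w
print(norm(vw) <= norm(v) + norm(w))  # True
```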
In many calculations in an inner product space, it is required to show that some vector is zero. This is often accomplished most easily by showing that its norm is zero. Here is an example.
The norm properties in Theorem thm:030504 translate to the following properties of distance familiar from geometry. The proof is Practice Problem ex:10_1_21.
- (a)
- $d(\vec{v}, \vec{w}) \geq 0$ for all $\vec{v}$, $\vec{w}$ in $V$.
- (b)
- $d(\vec{v}, \vec{w}) = 0$ if and only if $\vec{v} = \vec{w}$.
- (c)
- $d(\vec{v}, \vec{w}) = d(\vec{w}, \vec{v})$ for all $\vec{v}$ and $\vec{w}$ in $V$.
- (d)
- $d(\vec{v}, \vec{w}) \leq d(\vec{v}, \vec{u}) + d(\vec{u}, \vec{w})$ for all $\vec{v}$, $\vec{u}$, and $\vec{w}$ in $V$.
Practice Problems
- (a)
- in where
- (b)
- in where
- (c)
- in where
- (d)
- in ,
- (a)
- ,
- (b)
- ,
- (c)
- , in where and ;
- (d)
- , in where and ;
- (a)
- (b)
-
- (c)
- (d)
- (a)
- (b)
-
- (c)
- (d)
-
By the condition, for all , . Let denote column of . If , then for all and .
- (a)
- Using Property prop:inner_prod_2: .
- (b)
- Using Property prop:inner_prod_2 and Property prop:inner_prod_4: .
- (c)
- Using Property prop:inner_prod_3: , so . The rest is Property prop:inner_prod_2.
- (d)
- Assume that $\langle \vec{v}, \vec{v} \rangle = 0$. If $\vec{v} \neq \vec{0}$ this contradicts Property prop:inner_prod_5, so $\vec{v} = \vec{0}$. Conversely, if $\vec{v} = \vec{0}$, then $\langle \vec{v}, \vec{v} \rangle = 0$ by Part 3 of this theorem.
- (a)
- Expand .
- (b)
- Expand .
- (c)
- Show that .
- (d)
- Show that .
for all , , and in .
- (a)
- Show that for all , in an inner product space .
- (b)
- If and are two inner products on that have equal associated norm functions, show that holds for all and .
- (a)
- Show that is a subspace of .
- (b)
- Let be as in (a). If with the dot product, and if , find a basis for .
- (a)
- If , then for all in .
- (b)
- for all real , , and .
- (c)
- for all vectors , and all in .
- (a)
- If $\vec{v}$ and $\vec{w}$ are nonzero vectors in an inner product space $V$, show that $-1 \leq \frac{\langle \vec{v}, \vec{w} \rangle}{\norm{ \vec{v} } \norm{ \vec{w} }} \leq 1$, and
hence that a unique angle $\theta$ exists such that $0 \leq \theta \leq \pi$
and $\cos \theta = \frac{\langle \vec{v}, \vec{w} \rangle}{\norm{ \vec{v} } \norm{ \vec{w} }}$. This angle $\theta$ is called the angle between $\vec{v}$ and $\vec{w}$.
- (b)
- Find the angle between and in with the dot product.
- (c)
- If $\theta$ is the angle between $\vec{v}$ and $\vec{w}$, show that the law of cosines is valid: \begin{equation*} \norm{ \vec{v} - \vec{w} }^2 = \norm{ \vec{v} } ^2 + \norm{ \vec{w} } ^2 - 2\norm{ \vec{v} } \norm{ \vec{w} } \cos \theta . \end{equation*}
- (a)
- Show that satisfies the conditions in Theorem thm:030504.
- (b)
- Show that does not arise from an inner product on given by a matrix . If it did, use Theorem thm:030372 to find numbers , , and such that for all and .
Text Source
This section was adapted from Section 10.1 of Keith Nicholson’s Linear Algebra with Applications. (CC-BY-NC-SA)
W. Keith Nicholson, Linear Algebra with Applications, Lyryx 2018, Open Edition, pp. 521–530.