One of the main virtues of orthogonal matrices is that they can be easily inverted—the transpose is the inverse. This fact, combined with the factorization theorem in this section, provides a useful way to simplify many matrix calculations (for example, in least squares approximation).
The importance of the QR-factorization lies in the fact that there are computer algorithms that accomplish it with good control over round-off error, making it particularly useful in matrix calculations. The factorization is a matrix version of the Gram-Schmidt process.
Suppose $A$ is an $m \times n$ matrix with linearly independent columns $\mathbf{c}_1, \mathbf{c}_2, \dots, \mathbf{c}_n$. The Gram-Schmidt algorithm can be applied to these columns to provide orthogonal columns $\mathbf{f}_1, \mathbf{f}_2, \dots, \mathbf{f}_n$, where $\mathbf{f}_1 = \mathbf{c}_1$ and

$$\mathbf{f}_k = \mathbf{c}_k - \frac{\mathbf{c}_k \cdot \mathbf{f}_1}{\|\mathbf{f}_1\|^2}\,\mathbf{f}_1 - \frac{\mathbf{c}_k \cdot \mathbf{f}_2}{\|\mathbf{f}_2\|^2}\,\mathbf{f}_2 - \cdots - \frac{\mathbf{c}_k \cdot \mathbf{f}_{k-1}}{\|\mathbf{f}_{k-1}\|^2}\,\mathbf{f}_{k-1}$$
for each $k = 2, 3, \dots, n$. Now write $\mathbf{q}_j = \frac{1}{\|\mathbf{f}_j\|}\mathbf{f}_j$ for each $j$. Then $\mathbf{q}_1, \mathbf{q}_2, \dots, \mathbf{q}_n$ are orthonormal columns, and the above equation becomes

$$\|\mathbf{f}_k\|\,\mathbf{q}_k = \mathbf{c}_k - (\mathbf{c}_k \cdot \mathbf{q}_1)\mathbf{q}_1 - (\mathbf{c}_k \cdot \mathbf{q}_2)\mathbf{q}_2 - \cdots - (\mathbf{c}_k \cdot \mathbf{q}_{k-1})\mathbf{q}_{k-1}$$

Using these equations, express each $\mathbf{c}_k$ as a linear combination of the $\mathbf{q}_j$:

$$\begin{array}{l}
\mathbf{c}_1 = \|\mathbf{f}_1\|\,\mathbf{q}_1 \\
\mathbf{c}_2 = (\mathbf{c}_2 \cdot \mathbf{q}_1)\mathbf{q}_1 + \|\mathbf{f}_2\|\,\mathbf{q}_2 \\
\quad\vdots \\
\mathbf{c}_n = (\mathbf{c}_n \cdot \mathbf{q}_1)\mathbf{q}_1 + (\mathbf{c}_n \cdot \mathbf{q}_2)\mathbf{q}_2 + \cdots + \|\mathbf{f}_n\|\,\mathbf{q}_n
\end{array}$$

These equations have a matrix form that gives the required factorization:

$$A = \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_n \end{bmatrix}
= \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_n \end{bmatrix}
\begin{bmatrix}
\|\mathbf{f}_1\| & \mathbf{c}_2 \cdot \mathbf{q}_1 & \cdots & \mathbf{c}_n \cdot \mathbf{q}_1 \\
0 & \|\mathbf{f}_2\| & \cdots & \mathbf{c}_n \cdot \mathbf{q}_2 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \|\mathbf{f}_n\|
\end{bmatrix}$$

Here the first factor $Q = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_n \end{bmatrix}$ has orthonormal columns, and the second factor $R$ is an upper triangular matrix with positive diagonal entries (and so is invertible). We record this in the following theorem: every $m \times n$ matrix $A$ with linearly independent columns has a QR-factorization $A = QR$, where $Q$ has orthonormal columns and $R$ is upper triangular with positive diagonal entries. The matrices $Q$ and $R$ in Theorem th:QR-025133 are uniquely determined by $A$; we return to this below.
The reader can verify that indeed $A = QR$.
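For readers who want to experiment, here is a minimal Python sketch of the Gram-Schmidt construction of $Q$ and $R$ described above. It assumes NumPy, and the function name `gram_schmidt_qr` is ours, not from the text; this is an illustration, not the numerically robust algorithm used in practice.

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR-factorization of A (independent columns) via classical Gram-Schmidt.

    Returns Q (orthonormal columns) and R (upper triangular, positive
    diagonal) with A = Q @ R, as in the theorem above.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        f = A[:, k].copy()
        for j in range(k):
            # R[j, k] = c_k . q_j, the coefficient in the expansion of c_k
            R[j, k] = Q[:, j] @ A[:, k]
            f -= R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(f)   # ||f_k||, positive since the columns are independent
        Q[:, k] = f / R[k, k]         # q_k = f_k / ||f_k||
    return Q, R

# Quick check on a small matrix
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(A, Q @ R))  # True
```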
If a matrix $A$ has independent rows and we apply QR-factorization to $A^T$, the result is: $A$ factors as $A = LP$, where $L$ is an invertible lower triangular matrix and $P$ has orthonormal rows. Indeed, $A^T$ has independent columns, so $A^T = QR$; transposing gives $A = R^T Q^T = LP$ with $L = R^T$ and $P = Q^T$.
Since a square matrix with orthonormal columns is orthogonal, we have: every square, invertible matrix $A$ has factorizations $A = QR$ and $A = LP$, where $Q$ and $P$ are orthogonal, $R$ is upper triangular with positive diagonal entries, and $L$ is lower triangular with positive diagonal entries.
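The LP-factorization is easy to realize computationally: QR-factor the transpose and transpose back. A brief sketch, assuming NumPy's `np.linalg.qr` (which may return negative diagonal entries in $R$, so we normalize signs to match the corollary):

```python
import numpy as np

def lp_factorization(A):
    """LP-factorization of a matrix with independent rows.

    QR-factor the transpose: A^T = QR, so A = R^T Q^T = LP with
    L = R^T lower triangular (positive diagonal) and P = Q^T
    having orthonormal rows.
    """
    Q, R = np.linalg.qr(np.asarray(A, dtype=float).T)
    # Flip signs so that L ends up with a positive diagonal.
    signs = np.sign(np.diag(R))
    Q, R = Q * signs, signs[:, None] * R
    return R.T, Q.T

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
L, P = lp_factorization(A)
print(np.allclose(A, L @ P))             # True
print(np.allclose(P @ P.T, np.eye(2)))   # rows of P are orthonormal
```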
We now take the time to prove the uniqueness of the QR-factorization.
In The Power Method and the Dominant Eigenvalue, we learned about an iterative method for computing eigenvalues. We also mentioned that a better method for approximating the eigenvalues of an invertible matrix $A$ depends on the QR-factorization of $A$. While it is beyond the scope of this book to pursue a detailed discussion of this method, we give an example and conclude with some remarks on the QR-algorithm. The interested reader is referred to J. H. Wilkinson, The Algebraic Eigenvalue Problem (Oxford, England: Oxford University Press, 1965) or G. W. Stewart, Introduction to Matrix Computations (New York: Academic Press, 1973).
The QR-algorithm uses QR-factorization repeatedly to create a sequence of matrices $A_1, A_2, A_3, \dots$ as follows: define $A_1 = A$ and factor it as $A_1 = Q_1 R_1$; then define $A_2 = R_1 Q_1$ and factor it as $A_2 = Q_2 R_2$; then define $A_3 = R_2 Q_2$, and so on.
In general, $A_k$ is factored as $A_k = Q_k R_k$ and we define $A_{k+1} = R_k Q_k$. Then $A_{k+1}$ is similar to $A_k$ [in fact, $A_{k+1} = R_k Q_k = (Q_k^{-1} A_k)Q_k = Q_k^{-1} A_k Q_k$], and hence each $A_k$ has the same eigenvalues as $A$. If the eigenvalues of $A$ are real and have distinct absolute values, the remarkable thing is that the sequence of matrices $A_1, A_2, A_3, \dots$ converges to an upper triangular matrix with these eigenvalues on the main diagonal. [See below for the case of complex eigenvalues.]
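As a sketch of the iteration just described (not the optimized algorithm used in practice), assuming NumPy:

```python
import numpy as np

def qr_algorithm(A, iterations=100):
    """Basic (unshifted) QR-algorithm sketch.

    Repeatedly factor A_k = Q_k R_k and set A_{k+1} = R_k Q_k.
    Each A_k is similar to A, and under the hypotheses in the text
    (real eigenvalues with distinct absolute values) the iterates
    approach an upper triangular matrix whose diagonal holds the
    eigenvalues of A.
    """
    Ak = np.asarray(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q   # similar to A_k: R Q = Q^{-1} (Q R) Q
    return Ak

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
print(np.round(qr_algorithm(A), 4))     # approx. upper triangular with 3, 1 on the diagonal
```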
We first learned about the concept of shifting in The Power Method and the Dominant Eigenvalue. Convergence is accelerated if, at stage $k$ of the algorithm, a number $s_k$ is chosen and $A_k - s_k I$ is factored in the form $Q_k R_k$ rather than $A_k$ itself. Then

$$Q_k^{-1} A_k Q_k = Q_k^{-1}(Q_k R_k + s_k I)Q_k = R_k Q_k + s_k I$$
so we take $A_{k+1} = R_k Q_k + s_k I$. If the shifts $s_k$ are carefully chosen, convergence can be greatly improved; a computational sketch of the shifted iteration follows the discussion of Hessenberg form below. A matrix such as

$$\begin{bmatrix}
\ast & \ast & \ast & \ast \\
\ast & \ast & \ast & \ast \\
0 & \ast & \ast & \ast \\
0 & 0 & \ast & \ast
\end{bmatrix}$$
is said to be in upper Hessenberg form, and the QR-factorizations of such matrices are greatly simplified. Given an $n \times n$ matrix $A$, a series of orthogonal matrices $H_1, H_2, \dots, H_m$ (called Householder matrices) can be easily constructed such that $B = H_m^T \cdots H_1^T A H_1 \cdots H_m$ is in upper Hessenberg form. Then the QR-algorithm can be efficiently applied to $B$ and, because $B$ is similar to $A$, it produces the eigenvalues of $A$.

If some of the eigenvalues of a real matrix $A$ are not real, the QR-algorithm converges to a block upper triangular matrix where the diagonal blocks are either $1 \times 1$ (the real eigenvalues of $A$) or $2 \times 2$ (each providing a pair of conjugate complex eigenvalues of $A$).
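Here is the shifted iteration from the discussion above as a brief Python sketch. The particular shift choice (the bottom-right entry of $A_k$) is one common heuristic, not something the text prescribes:

```python
import numpy as np

def shifted_qr_algorithm(A, iterations=100):
    """QR-algorithm with a simple shift strategy (a sketch).

    At each stage, factor A_k - s_k I = Q_k R_k, then set
    A_{k+1} = R_k Q_k + s_k I, which is similar to A_k.
    """
    Ak = np.asarray(A, dtype=float)
    n = Ak.shape[0]
    for _ in range(iterations):
        s = Ak[-1, -1]                        # heuristic shift choice
        Q, R = np.linalg.qr(Ak - s * np.eye(n))
        Ak = R @ Q + s * np.eye(n)
    return Ak

A = np.array([[3.0, 1.0], [1.0, 1.0]])  # eigenvalues 2 + sqrt(2), 2 - sqrt(2)
print(np.round(shifted_qr_algorithm(A), 4))
```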
If $A$ is symmetric, each matrix $A_k$ in the QR-algorithm is also symmetric, and the $A_k$ converge to a diagonal matrix. To see this, use induction on $k$. If $k = 1$, then $A_1 = A$ is symmetric. In general $A_{k+1} = Q_k^T A_k Q_k$, so the fact that $A_k^T = A_k$ implies $A_{k+1}^T = A_{k+1}$. The eigenvalues of $A$ are all real (Theorem cor:ews_symmetric_real), so the $A_k$ converge to an upper triangular matrix $T$. But $T$ must also be symmetric (it is the limit of symmetric matrices), so it is diagonal.
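A quick numerical illustration of this fact, again assuming NumPy:

```python
import numpy as np

# For a symmetric A, the QR-iterates stay symmetric and
# approach a diagonal matrix, as argued above.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
Ak = A
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q
print(np.round(Ak, 4))                    # nearly diagonal
print(np.allclose(Ak, Ak.T, atol=1e-8))   # still symmetric
```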
This section was adapted from Sections 8.4 and 8.5 of Keith Nicholson’s Linear Algebra with Applications. (CC-BY-NC-SA)
W. Keith Nicholson, Linear Algebra with Applications, Lyryx 2018, Open Edition, pp. 437–445.