Determine how the matrix representation depends on a choice of basis.
- Proof
- To show that (a) is true, it is enough to observe that for any coordinate vector $[v]_{\mathcal{A}}$ there is an equality $P_{\mathcal{C}\leftarrow\mathcal{B}}\,P_{\mathcal{B}\leftarrow\mathcal{A}}\,[v]_{\mathcal{A}} = P_{\mathcal{C}\leftarrow\mathcal{B}}\,[v]_{\mathcal{B}} = [v]_{\mathcal{C}} = P_{\mathcal{C}\leftarrow\mathcal{A}}\,[v]_{\mathcal{A}}$, which implies the equality of matrices $P_{\mathcal{C}\leftarrow\mathcal{B}}\,P_{\mathcal{B}\leftarrow\mathcal{A}} = P_{\mathcal{C}\leftarrow\mathcal{A}}$. The second statement is clear: $P_{\mathcal{B}\leftarrow\mathcal{B}}$ is the matrix for which left multiplication by it leaves each coordinate vector $[v]_{\mathcal{B}}$ unchanged, and the only matrix satisfying this property is the identity matrix, so $P_{\mathcal{B}\leftarrow\mathcal{B}} = I$. Finally, (c) follows from (a) and (b) when $\mathcal{C} = \mathcal{A}$: in that case $P_{\mathcal{A}\leftarrow\mathcal{B}}\,P_{\mathcal{B}\leftarrow\mathcal{A}} = P_{\mathcal{A}\leftarrow\mathcal{A}} = I$, so $P_{\mathcal{B}\leftarrow\mathcal{A}}$ is invertible with inverse $P_{\mathcal{A}\leftarrow\mathcal{B}}$. (A numerical sanity check of these three properties follows the proof.)
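The sketch below is a quick numerical check of (a), (b), and (c) in Octave/MATLAB, not a substitute for the proof. The three bases of $\mathbb{R}^2$ and the variable names (`P_BA`, `P_CB`, `P_CA`) are hypothetical choices made for illustration; the only fact used is that if the columns of a matrix hold a basis in standard coordinates, then, for example, $P_{\mathcal{B}\leftarrow\mathcal{A}} = B^{-1}A$, computed here with the backslash operator.

```matlab
% Hypothetical bases of R^2, stored as columns (in standard coordinates).
A = [1 1; 0 1];      % basis A
B = [2 0; 1 1];      % basis B
C = [1 -1; 1 1];     % basis C

% Transition matrices: [v]_B = P_BA * [v]_A, so P_BA = B^{-1} * A, etc.
P_BA = B \ A;        % P_{B<-A}
P_CB = C \ B;        % P_{C<-B}
P_CA = C \ A;        % P_{C<-A}

% (a) composition: P_{C<-B} * P_{B<-A} equals P_{C<-A}
norm(P_CB * P_BA - P_CA)        % expect (numerically) 0

% (b) P_{B<-B} is the identity matrix
norm(B \ B - eye(2))            % expect (numerically) 0

% (c) P_{B<-A} and P_{A<-B} are inverses of each other
P_AB = A \ B;        % P_{A<-B}
norm(P_BA * P_AB - eye(2))      % expect (numerically) 0
```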
For vector spaces other than $\mathbb{R}^n$ (such as the function space we looked at earlier), where the vectors do not naturally look like column vectors, we always use the above notation $[v]_{\mathcal{B}}$ when working with their coordinate representations.
However, in the case of $\mathbb{R}^n$, the vectors were defined as column vectors even before we discussed coordinate representations. So what should we do here?
The answer is that the vectors in $\mathbb{R}^n$ are, by convention, identified with their coordinate representations in the standard basis $\mathcal{E}$ for $\mathbb{R}^n$. So when we write a column vector $v$ in $\mathbb{R}^n$ with entries $v_1, \dots, v_n$, what we really mean is that $v$ is the vector in $\mathbb{R}^n$ whose coordinate representation in the standard basis is $[v]_{\mathcal{E}} = v$, with exactly those entries.
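To see why this identification costs nothing, here is the computation behind it, written with the standard basis vectors $e_1, \dots, e_n$: expanding $v$ in the standard basis uses exactly the entries of $v$,
$$v = v_1 e_1 + v_2 e_2 + \cdots + v_n e_n,$$
so the coordinates of $v$ relative to $\mathcal{E}$ are $v_1, \dots, v_n$, and therefore $[v]_{\mathcal{E}} = v$.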
Because it is very important to keep track of bases when determining base transition matrices and computing new coordinate representations, we will always use the base-subscript notation for coordinate vectors in those computations, even when the vectors are in $\mathbb{R}^n$ and are being represented in the standard basis (that is, we write $[v]_{\mathcal{E}}$ rather than just $v$).
- show that a given set of vectors $\mathcal{B} = \{u_1, u_2, \dots, u_n\}$ is a basis for $\mathbb{R}^n$,
- compute the base transition matrix $P_{\mathcal{B}\leftarrow\mathcal{E}}$,
- for $x$ in $\mathbb{R}^n$ with $[x]_{\mathcal{E}}$ given, compute the coordinate representation of $x$ with respect to the basis $\mathcal{B}$.
To perform step 1, since $\mathcal{B}$ has the right number of vectors ($n$ of them) to be a basis for $\mathbb{R}^n$, it suffices to show the vectors are linearly independent. And we know how to do this; we form the matrix $S = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix}$ and show that its columns are linearly independent, for example by checking that the reduced row echelon form of $S$ is the identity matrix (exercise: do this, using MATLAB or Octave; a sketch follows). This verifies that $\mathcal{B}$ is a basis.
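Here is one way to carry out the step 1 check in Octave/MATLAB. The vectors below are hypothetical stand-ins (the example's actual vectors are not reproduced here); only the method is the point.

```matlab
% Hypothetical candidate basis for R^3, stored as the columns of S.
u1 = [1; 2; 0];
u2 = [0; 1; 1];
u3 = [1; 0; 1];
S  = [u1 u2 u3];

% The columns are linearly independent exactly when rref(S) is the identity
% (equivalently, rank(S) == 3, or det(S) != 0).
rref(S)          % expect eye(3)
rank(S)          % expect 3
```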
Next, we look at the matrix $S$. The columns of $S$ are the coordinate representations of the vectors in $\mathcal{B}$ with respect to the standard basis $\mathcal{E}$. But $\mathcal{B}$ is a basis. So the matrix $S$ is itself a base-transition matrix. We know it must be either $P_{\mathcal{E}\leftarrow\mathcal{B}}$ or $P_{\mathcal{B}\leftarrow\mathcal{E}}$. But which one?
This is where the notation being used helps us. The columns of $S$ are the coordinate vectors $[u_1]_{\mathcal{E}}, \dots, [u_n]_{\mathcal{E}}$, each carrying the subscript $\mathcal{E}$. The rule is that this basis must match the one written on the target (left-hand) side of the transition-matrix notation, because the columns of a transition matrix record coordinates in the target basis. So the only possibility is $S = P_{\mathcal{E}\leftarrow\mathcal{B}}$. To complete the second step, we then compute, using part (c) of the theorem above, $P_{\mathcal{B}\leftarrow\mathcal{E}} = \left(P_{\mathcal{E}\leftarrow\mathcal{B}}\right)^{-1} = S^{-1}$. Finally, we can use this to compute $[x]_{\mathcal{B}}$ as $[x]_{\mathcal{B}} = P_{\mathcal{B}\leftarrow\mathcal{E}}\,[x]_{\mathcal{E}} = S^{-1}[x]_{\mathcal{E}}$.
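In Octave/MATLAB, steps 2 and 3 amount to one matrix inversion and one matrix-vector product. The numbers below continue the hypothetical data from the previous sketch and are repeated so the snippet runs on its own; in practice the backslash operator is preferable to forming the inverse explicitly.

```matlab
% Hypothetical basis matrix from the previous sketch: S = P_{E<-B}.
S = [1 0 1;
     2 1 0;
     0 1 1];

% Step 2: the base transition matrix P_{B<-E} is the inverse of S.
P_BE = inv(S);            % P_{B<-E}

% Step 3: for a hypothetical x given by its standard coordinates [x]_E,
% the B-coordinates are [x]_B = P_{B<-E} * [x]_E.
x_E = [2; -1; 3];         % [x]_E (hypothetical)
x_B = P_BE * x_E          % [x]_B

% Equivalent and numerically preferable: solve S * [x]_B = [x]_E directly.
norm(S \ x_E - x_B)       % expect (numerically) 0

% Sanity check: rebuilding x from its B-coordinates recovers [x]_E.
norm(S * x_B - x_E)       % expect (numerically) 0
```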
The following theorem combines base transition in both the domain and the range with the matrix representation of a linear transformation. It amounts to a “base-transition” for matrix representations: it tells us how the matrix representing a linear transformation changes when the bases used to represent it change.
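In the notation of this section, the identity the theorem expresses takes the following shape (this is an orientation sketch in locally defined notation, not the theorem's own statement): writing $[T]_{\mathcal{C}\leftarrow\mathcal{B}}$ for the matrix of a linear transformation $T$ computed using the basis $\mathcal{B}$ in the domain and $\mathcal{C}$ in the range, passing to new bases $\mathcal{B}'$ and $\mathcal{C}'$ gives
$$[T]_{\mathcal{C}'\leftarrow\mathcal{B}'} \;=\; P_{\mathcal{C}'\leftarrow\mathcal{C}}\,[T]_{\mathcal{C}\leftarrow\mathcal{B}}\,P_{\mathcal{B}\leftarrow\mathcal{B}'},$$
that is: convert incoming $\mathcal{B}'$-coordinates to $\mathcal{B}$-coordinates, apply $T$ in the old representation, then convert the output from $\mathcal{C}$-coordinates to $\mathcal{C}'$-coordinates.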
Note: Square matrices $A$ and $B$ which satisfy an equality of this form, namely $B = P^{-1} A P$ for some invertible matrix $P$, are called similar. This is an important relation between square matrices, and it plays a prominent role in the theory of eigenvalues and eigenvectors, as we will see later on.
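Here is a small Octave/MATLAB illustration of the square case. Both the standard matrix `A` of a transformation and the basis stored in the columns of `P` are hypothetical choices for illustration; the point is that the matrix of the same transformation in the new basis is $P^{-1}AP$, and, as the note suggests, similar matrices share their eigenvalues.

```matlab
% Hypothetical data: A is the matrix of a transformation T in the standard
% basis of R^2; the columns of P are a (hypothetical) new basis B, so P = P_{E<-B}.
A = [2 1;
     0 3];
P = [1 1;
     1 2];

% Matrix of T with respect to B: convert B-coords to standard coords (P),
% apply T (multiply by A), then convert back to B-coords (inv(P)).
A_B = P \ (A * P);        % same as inv(P)*A*P, without forming inv(P)

% Similar matrices have the same eigenvalues.
sort(eig(A))              % expect 2 and 3
sort(eig(A_B))            % expect 2 and 3 as well
```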