We define isomorphic vector spaces, discuss isomorphisms and their properties, and
prove that any vector space of dimension $n$ is isomorphic to $\mathbb{R}^n$.
LTR-0060: Isomorphic Vector Spaces
A vector space is defined as a collection of objects together with operations of
addition and scalar multiplication that follow certain rules (Definition def:vectorspacegeneral of VSP-0050).
In our study of abstract vector spaces, we have encountered spaces that appeared
very different from each other. Just how different are they? Does $\mathbb{P}^3$, a vector space
whose elements have the form $a+bx+cx^2+dx^3$, have anything in common with $\mathbb{R}^4$? Is $\mathbb{M}_{2,2}$ fundamentally
different from $\mathbb{R}^4$?
To answer these questions, we will have to look beyond the superficial appearance of
the elements of a vector space and delve into its structure. The “structure” of a
vector space is determined by how the elements of the vector space interact with each
other through the operations of addition and scalar multiplication.
Let us return to the question of what $\mathbb{P}^3$ has in common with $\mathbb{R}^4$. Consider two typical
elements of $\mathbb{P}^3$:
$$p(x)=a+bx+cx^2+dx^3\quad\text{and}\quad q(x)=e+fx+gx^2+hx^3$$
We can add these elements together
$$p(x)+q(x)=(a+e)+(b+f)x+(c+g)x^2+(d+h)x^3\tag{eq:iso1}$$
or multiply each one by a constant $k$
$$kp(x)=ka+kbx+kcx^2+kdx^3\tag{eq:iso2}$$
$$kq(x)=ke+kfx+kgx^2+khx^3\tag{eq:iso3}$$
But suppose we get tired of having to write down the powers of $x$ every time. Could we leave off the
powers of $x$ and represent $a+bx+cx^2+dx^3$ by $\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}$? If we do this, expressions (eq:iso1), (eq:iso2) and (eq:iso3) would be mimicked by the
following expressions involving vectors of $\mathbb{R}^4$:
$$\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}+\begin{bmatrix}e\\ f\\ g\\ h\end{bmatrix}=\begin{bmatrix}a+e\\ b+f\\ c+g\\ d+h\end{bmatrix},\qquad k\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}=\begin{bmatrix}ka\\ kb\\ kc\\ kd\end{bmatrix},\qquad k\begin{bmatrix}e\\ f\\ g\\ h\end{bmatrix}=\begin{bmatrix}ke\\ kf\\ kg\\ kh\end{bmatrix}$$
It appears that we should be able to switch back and forth between $\mathbb{P}^3$ and $\mathbb{R}^4$,
translating questions and answers from one space to the other and back.
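This coefficient bookkeeping is easy to mimic on a computer. Here is a minimal Python sketch; the list encoding and the helper names are our own choices for illustration:

```python
# Identify the polynomial a + bx + cx^2 + dx^3 with the coefficient list [a, b, c, d].

def poly_add(p, q):
    # Adding polynomials is entry-wise addition of coefficient lists,
    # exactly like vector addition in R^4.
    return [pi + qi for pi, qi in zip(p, q)]

def poly_scale(k, p):
    # Multiplying by a constant scales every coefficient,
    # exactly like scalar multiplication in R^4.
    return [k * c for c in p]

p = [1, 2, 0, -3]            # 1 + 2x - 3x^3
q = [4, -1, 5, 2]            # 4 - x + 5x^2 + 2x^3
print(poly_add(p, q))        # [5, 1, 5, -1]
print(poly_scale(3, p))      # [3, 6, 0, -9]
```

Notice that the functions never mention $x$ at all: only the coefficients matter.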
We begin to suspect that $\mathbb{P}^3$ and $\mathbb{R}^4$ have the same “structure”. Spaces such as $\mathbb{P}^3$ and $\mathbb{R}^4$ are
said to be isomorphic. This term is derived from the Greek “iso,” meaning “same,”
and “morphe,” meaning “form.” The term captures the idea that isomorphic vector
spaces have the same structure. Before we present a precise definition of the term, we
need to better understand what we mean by “switching back and forth” between
spaces. The following Exploration will help us formulate this vague notion in terms of
invertible linear transformations.
Recall that the set of all polynomials of degree $2$ or less, together with polynomial
addition and scalar multiplication, is a vector space, denoted by $\mathbb{P}^2$. Let $\mathcal{B}=\{1, x, x^2\}$. You should do
a quick mental check that $\mathcal{B}$ is a basis of $\mathbb{P}^2$.
Define a transformation $T:\mathbb{P}^2\rightarrow\mathbb{R}^3$ by $T(a+bx+cx^2)=\begin{bmatrix}a\\ b\\ c\end{bmatrix}$. You may have recognized $T$ as the transformation that
maps each element of $\mathbb{P}^2$ to its coordinate vector with respect to $\mathcal{B}$.
It can be shown that $T$ is linear, one-to-one, and onto.
Our goal is to investigate and illustrate what these properties mean for the transformation $T$,
and for the relationship between $\mathbb{P}^2$ and $\mathbb{R}^3$.
First, observe that $T$ being one-to-one and onto establishes “pairings” between
elements of $\mathbb{P}^2$ and $\mathbb{R}^3$ in such a way that every element of one vector space is uniquely
matched with exactly one element of the other vector space, as shown in the diagram.
Second, the fact that $T$ (and $T^{-1}$) are linear will allow us to translate questions related to
linear combinations in one of the vector spaces to equivalent questions in the other
vector space, then translate answers back to the original vector space. To make this
statement concrete, consider the following problem: find the sum of $p(x)=a_0+a_1x+a_2x^2$ and $q(x)=b_0+b_1x+b_2x^2$ in $\mathbb{P}^2$.
The answer is, of course,
$$p(x)+q(x)=(a_0+b_0)+(a_1+b_1)x+(a_2+b_2)x^2$$
Easy. But suppose for a moment that we did not know how to add polynomials, or
that we found the process extremely difficult, or maybe instead of $\mathbb{P}^2$ we had another
vector space that we did not want to deal with.
It turns out that we can use $T$ and $T^{-1}$ to answer the addition question. We will start by
applying $T$ to $p(x)$ and $q(x)$ separately:
$$T(p(x))=\begin{bmatrix}a_0\\ a_1\\ a_2\end{bmatrix},\qquad T(q(x))=\begin{bmatrix}b_0\\ b_1\\ b_2\end{bmatrix}$$
Next, we add the images of $p(x)$ and $q(x)$ in $\mathbb{R}^3$:
$$T(p(x))+T(q(x))=\begin{bmatrix}a_0+b_0\\ a_1+b_1\\ a_2+b_2\end{bmatrix}$$
This maneuver allows us to avoid the addition question in $\mathbb{P}^2$ and answer the question
in $\mathbb{R}^3$ instead. We use $T^{-1}$ to translate the answer back to $\mathbb{P}^2$:
$$T^{-1}\left(\begin{bmatrix}a_0+b_0\\ a_1+b_1\\ a_2+b_2\end{bmatrix}\right)=(a_0+b_0)+(a_1+b_1)x+(a_2+b_2)x^2$$
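The translate, add, translate-back maneuver can be sketched in Python. The dictionary encoding of polynomials and the function names below are illustrative assumptions, not part of the text:

```python
# Encode an element of P^2 as a dict {power: coefficient},
# and an element of R^3 as a list [a, b, c].

def T(p):
    # Coordinate map P^2 -> R^3 with respect to B = {1, x, x^2}.
    return [p.get(0, 0), p.get(1, 0), p.get(2, 0)]

def T_inv(v):
    # Inverse map R^3 -> P^2: rebuild the polynomial from its coordinates.
    return {i: c for i, c in enumerate(v) if c != 0}

def vec_add(u, v):
    # Addition in R^3 -- the only addition we actually perform.
    return [ui + vi for ui, vi in zip(u, v)]

p = {0: 2, 1: 3, 2: -1}   # 2 + 3x - x^2
q = {0: 1, 1: -2, 2: 4}   # 1 - 2x + 4x^2

# Translate to R^3, add there, translate back to P^2:
s = T_inv(vec_add(T(p), T(q)))
print(s)   # {0: 3, 1: 1, 2: 3}, i.e. 3 + x + 3x^2
```

No polynomial addition is ever performed: all of the arithmetic happens in $\mathbb{R}^3$.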
All of this relies on linearity. Here is a formal justification for the process. Try to
spot where linearity is used.
$$\begin{aligned}
p(x)+q(x)&=T^{-1}\Big(T\big(p(x)+q(x)\big)\Big) &&(1)\\
&=T^{-1}\Big(T\big(p(x)\big)+T\big(q(x)\big)\Big) &&(2)\\
&=T^{-1}\left(\begin{bmatrix}a_0\\ a_1\\ a_2\end{bmatrix}+\begin{bmatrix}b_0\\ b_1\\ b_2\end{bmatrix}\right) &&(3)\\
&=T^{-1}\left(\begin{bmatrix}a_0+b_0\\ a_1+b_1\\ a_2+b_2\end{bmatrix}\right) &&(4)\\
&=(a_0+b_0)+(a_1+b_1)x+(a_2+b_2)x^2 &&(5)
\end{aligned}$$
Which transition requires linearity?

From Step (1) to Step (2)
From Step (2) to Step (3)
From Step (3) to Step (4)
From Step (4) to Step (5)
Invertible linear transformations, such as transformation $T$ of Exploration init:isomorph, are
useful because they preserve the structure of interactions between elements as
we move back and forth between two vector spaces, allowing us to answer
questions about one vector space in a different vector space. In particular, any
question related to linear combinations can be addressed in this fashion.
This includes questions concerning linear independence, span, basis, and dimension.
Let $V$ and $W$ be vector spaces. If there exists an invertible linear transformation $T:V\rightarrow W$, we say
that $V$ and $W$ are isomorphic and write $V\cong W$. The invertible linear transformation $T$ is called
an isomorphism.
It is worth pointing out that if $T:V\rightarrow W$ is an isomorphism, then $T^{-1}:W\rightarrow V$, being linear and invertible,
is also an isomorphism.
Our earlier discussion suggests that $\mathbb{P}^3\cong\mathbb{R}^4$. We postpone the proof until Example ex:isomorphexample1.
We will start by finding a plausible candidate for an isomorphism. Define $T:\mathbb{P}^3\rightarrow\mathbb{R}^4$ by
$$T(a+bx+cx^2+dx^3)=\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}$$
We will first show that $T$ is a linear transformation, then verify that $T$ is invertible. Linearity follows by comparing coefficients: $T(p(x)+q(x))=T(p(x))+T(q(x))$ and $T(kp(x))=kT(p(x))$ for all polynomials $p(x)$, $q(x)$ and scalars $k$.
We can show that $T$ is one-to-one and onto, and therefore has an inverse. We can also
observe directly that $T^{-1}$ is given by
$$T^{-1}\left(\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}\right)=a+bx+cx^2+dx^3$$
We conclude that $T$ is an isomorphism, and $\mathbb{P}^3\cong\mathbb{R}^4$.
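The same flattening idea identifies $\mathbb{M}_{2,2}$ with $\mathbb{R}^4$: sending a $2\times 2$ matrix to the list of its entries is an invertible linear map. A small Python sketch, with an encoding and names chosen purely for illustration:

```python
# Flatten a 2x2 matrix [[a, b], [c, d]] into the vector [a, b, c, d], and back.

def flatten(m):
    # M_{2,2} -> R^4, reading the entries row by row.
    return [m[0][0], m[0][1], m[1][0], m[1][1]]

def unflatten(v):
    # R^4 -> M_{2,2}: the inverse map, so flatten is invertible.
    return [v[:2], v[2:]]

m = [[1, 2], [3, 4]]
assert unflatten(flatten(m)) == m   # the two maps undo each other
print(flatten(m))                   # [1, 2, 3, 4]
```

Both maps clearly respect entry-wise addition and scaling, which is exactly the linearity requirement.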
Isomorphism $T$ of Example ex:isomorphexample1 establishes the fact that $\mathbb{P}^3\cong\mathbb{R}^4$. However, there is nothing
special about $T$, as there are many other isomorphisms from $\mathbb{P}^3$ to $\mathbb{R}^4$. Just for fun, try to
construct a few other isomorphisms from $\mathbb{P}^3$ to $\mathbb{R}^4$ and verify that each one is linear and invertible.
The Coordinate Vector Isomorphism
In Exploration init:isomorph we made good use of a transformation that maps every element of $\mathbb{P}^2$ to
its coordinate vector in $\mathbb{R}^3$. We observed that this transformation is linear and
invertible, therefore it is an isomorphism. The following example generalizes this observation.
Let $V$ be an $n$-dimensional vector space, and let $\mathcal{B}$ be a basis for $V$. Then $T:V\rightarrow\mathbb{R}^n$ given by $T(\mathbf{v})=[\mathbf{v}]_{\mathcal{B}}$ is an
isomorphism. We leave the proof of this result to the reader as a practice problem.
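Computing $[\mathbf{v}]_{\mathcal{B}}$ amounts to solving a linear system whose coefficient matrix has the basis vectors as columns. A Python sketch for vectors in $\mathbb{R}^n$, using exact rational arithmetic; the solver and encoding are our own illustration:

```python
from fractions import Fraction

def solve(A, b):
    # Solve Ax = b by Gauss-Jordan elimination (A square and invertible).
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * c for a, c in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def coord_vector(v, basis):
    # Coordinate vector of v with respect to `basis` (a list of vectors in R^n):
    # solve for the weights that express v as a combination of the basis vectors.
    n = len(v)
    A = [[basis[j][i] for j in range(n)] for i in range(n)]  # columns = basis vectors
    return solve(A, v)

# The basis {1, 1+x, 1+x+x^2} of P^2, encoded by coefficient lists in R^3:
B = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(coord_vector([2, 3, -1], B))   # coordinates of 2 + 3x - x^2 are (-1, 4, -1)
```

Because the basis vectors form an invertible matrix, the map is one-to-one and onto, which is the content of the example above.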
Properties of Isomorphic Vector Spaces and Isomorphisms
In this section we will illustrate properties of isomorphisms with specific examples.
Formal proofs of properties will be presented in the next section.
In Exploration init:isomorph we defined a transformation $T:\mathbb{P}^2\rightarrow\mathbb{R}^3$ by $T(a+bx+cx^2)=\begin{bmatrix}a\\ b\\ c\end{bmatrix}$. We later observed that $T$ is an
isomorphism. We will now examine the effect of $T$ on two different bases of $\mathbb{P}^2$.
First, consider the basis $\mathcal{B}=\{1, x, x^2\}$. We have
$$T(1)=\begin{bmatrix}1\\ 0\\ 0\end{bmatrix},\qquad T(x)=\begin{bmatrix}0\\ 1\\ 0\end{bmatrix},\qquad T(x^2)=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}$$
Clearly, the images of the elements of $\mathcal{B}$ form a basis of $\mathbb{R}^3$.
Now we consider the basis $\mathcal{C}=\{1,\ 1+x,\ 1+x+x^2\}$. We have
$$T(1)=\begin{bmatrix}1\\ 0\\ 0\end{bmatrix},\qquad T(1+x)=\begin{bmatrix}1\\ 1\\ 0\end{bmatrix},\qquad T(1+x+x^2)=\begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$
It is easy to verify that these vectors are linearly independent and span $\mathbb{R}^3$, therefore the images of
the elements of $\mathcal{C}$ form a basis of $\mathbb{R}^3$.
We can try any number of bases of $\mathbb{P}^2$, and we will find that the image of each basis of $\mathbb{P}^2$
is a basis of $\mathbb{R}^3$. In general, we have the following result:
An isomorphism maps a basis
of the domain to a basis of the codomain. (We will state this result more formally as
Theorem th:bijectionsbasis in the next section.)
Isomorphisms preserve bases, but more generally, they preserve linear independence.
If $T:V\rightarrow W$ is an isomorphism, then a subset $\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}$ of $V$ is linearly independent if and only if $\{T(\mathbf{v}_1),\ldots,T(\mathbf{v}_k)\}$ is
linearly independent in $W$. (We will state and prove this result as a theorem in the next section.)
Let $V$ be a vector space, and let $\mathcal{B}=\{\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3,\mathbf{v}_4\}$ be a basis of $V$. Let
$$\mathbf{w}_1=\mathbf{v}_1+\mathbf{v}_2,\qquad \mathbf{w}_2=\mathbf{v}_2+\mathbf{v}_3+\mathbf{v}_4,\qquad \mathbf{w}_3=\mathbf{v}_1+2\mathbf{v}_2+\mathbf{v}_3+\mathbf{v}_4$$
Are $\mathbf{w}_1$, $\mathbf{w}_2$, $\mathbf{w}_3$ linearly independent?
We can approach this question head-on by considering the equation
$$a_1\mathbf{w}_1+a_2\mathbf{w}_2+a_3\mathbf{w}_3=\mathbf{0}$$
to see if the only solution is the trivial one. (See Practice Problem prob:noiso.)
Instead, we will use isomorphisms. Observe that we do not know anything about $V$
aside from the fact that it has four basis vectors. Vectors $\mathbf{w}_1$, $\mathbf{w}_2$, $\mathbf{w}_3$ are given in terms of
these basis vectors. This should give us an idea for constructing an isomorphism
between $V$ and $\mathbb{R}^4$. Consider $T:V\rightarrow\mathbb{R}^4$ such that $T(\mathbf{v})=[\mathbf{v}]_{\mathcal{B}}$. Then
$$T(\mathbf{w}_1)=\begin{bmatrix}1\\ 1\\ 0\\ 0\end{bmatrix},\qquad T(\mathbf{w}_2)=\begin{bmatrix}0\\ 1\\ 1\\ 1\end{bmatrix},\qquad T(\mathbf{w}_3)=\begin{bmatrix}1\\ 2\\ 1\\ 1\end{bmatrix}$$
By Example ex:coordmapiso, $T$ is an isomorphism. This means that $\mathbf{w}_1$, $\mathbf{w}_2$, $\mathbf{w}_3$ are linearly independent if
and only if their coordinate vectors are linearly independent. There are multiple ways
of determining whether
$$\begin{bmatrix}1\\ 1\\ 0\\ 0\end{bmatrix},\qquad \begin{bmatrix}0\\ 1\\ 1\\ 1\end{bmatrix},\qquad \begin{bmatrix}1\\ 2\\ 1\\ 1\end{bmatrix}$$
are linearly independent. One way is to find the reduced row echelon form of the matrix whose columns are these vectors.
The matrix reduces as follows:
$$\begin{bmatrix}1&0&1\\ 1&1&2\\ 0&1&1\\ 0&1&1\end{bmatrix}\rightsquigarrow\begin{bmatrix}1&0&1\\ 0&1&1\\ 0&0&0\\ 0&0&0\end{bmatrix}$$
We see that the rank of the matrix is $2$. By Theorem th:linindandrank of VEC-0110 we conclude that
the column vectors are not linearly independent. Thus, the vectors $\mathbf{w}_1$, $\mathbf{w}_2$, and $\mathbf{w}_3$ are not
linearly independent.
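A rank computation of this kind can be checked mechanically. Below is a Python sketch using exact rational row reduction; the `rank` helper and the sample coordinate vectors are illustrative data, not taken from the text:

```python
from fractions import Fraction

def rank(rows):
    # Rank of a matrix (list of rows) via row reduction over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][col] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                M[i] = [a - M[i][col] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Illustrative coordinate vectors in R^4 (the third is the sum of the first two):
cols = [[1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]]
A = [[cols[j][i] for j in range(3)] for i in range(4)]  # 4x3 matrix, columns = coords
print(rank(A))   # 2, which is less than 3, so the vectors are linearly dependent
```

Since the isomorphism preserves linear independence, a rank below the number of vectors settles the question back in $V$ as well.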
Proofs of Isomorphism Properties
Recall that a transformation $T$ is one-to-one provided that $T(\mathbf{v}_1)=T(\mathbf{v}_2)$ implies $\mathbf{v}_1=\mathbf{v}_2$.
We will show that images of linearly independent vectors under one-to-one linear
transformations are linearly independent.
Let $T:V\rightarrow W$ be a one-to-one linear transformation. Suppose $\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}$ is linearly independent in $V$.
Then $\{T(\mathbf{v}_1),\ldots,T(\mathbf{v}_k)\}$ is linearly independent in $W$.
Suppose $T:V\rightarrow W$ is an isomorphism. Then a subset $\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}$ of $V$ is linearly independent if and only
if $\{T(\mathbf{v}_1),\ldots,T(\mathbf{v}_k)\}$ is linearly independent in $W$.
We have already proved one direction of this “if and only if”
statement as Theorem th:onetoonelinind. To prove the other direction, suppose that $T(\mathbf{v}_1),\ldots,T(\mathbf{v}_k)$ are linearly
independent vectors in $W$. We need to show that this implies that $\mathbf{v}_1,\ldots,\mathbf{v}_k$ are linearly
independent in $V$. Observe that if $T$ is an isomorphism, then $T^{-1}$ is also an isomorphism.
Thus, by Theorem th:onetoonelinind, $T^{-1}(T(\mathbf{v}_1)),\ldots,T^{-1}(T(\mathbf{v}_k))$ are linearly independent. But this means that $\mathbf{v}_1,\ldots,\mathbf{v}_k$ are linearly
independent in $V$.
Let $U$, $V$, and $W$ be vector spaces. Suppose that $T:U\rightarrow V$ and $S:V\rightarrow W$ are isomorphisms. Then $S\circ T:U\rightarrow W$ is an isomorphism.