Isomorphic Vector Spaces
A vector space is defined as a collection of objects together with operations of
addition and scalar multiplication that follow certain rules (Definition def:vectorspacegeneral of Abstract
Vector Spaces). In our study of abstract vector spaces, we have encountered spaces
that appeared very different from each other. Just how different are they? Does $P^1$, a
vector space whose elements have the form $mx+b$, have anything in common with $\mathbb{R}^2$? Is
$M_{2,2}$ fundamentally different from $P^3$?
To answer these questions, we will have to look beyond the superficial appearance of
the elements of a vector space and delve into its structure. The “structure” of a
vector space is determined by how the elements of the vector space interact with each
other through the operations of addition and scalar multiplication.
Let us return to the question of what $P^1$ has in common with $\mathbb{R}^2$. Consider two typical
elements of $P^1$: \begin{equation} \label{eq:iso1} mx+b\quad \text{and}\quad qx+c \end{equation}
We can add these elements together \begin{equation} \label{eq:iso2} (mx+b)+(qx+c)=(m+q)x+(b+c) \end{equation}
or multiply each one by a scalar \begin{equation} \label{eq:iso3} k(mx+b)=kmx+kb\quad \text{and}\quad t(qx+c)=tqx+tc \end{equation}
But suppose we get tired of having to write down $x$ every time. Could we leave off the $x$
and represent $mx+b$ by $\begin{bmatrix}m\\b\end{bmatrix}$? If we do this, expressions (eq:iso1), (eq:iso2) and (eq:iso3) would be mimicked by the
following expressions involving vectors of $\mathbb{R}^2$:
\begin{equation*} \begin{bmatrix}m\\b\end{bmatrix}\quad \text{and}\quad \begin{bmatrix}q\\c\end{bmatrix} \end{equation*}
\begin{equation*} \begin{bmatrix}m\\b\end{bmatrix}+\begin{bmatrix}q\\c\end{bmatrix}=\begin{bmatrix}m+q\\b+c\end{bmatrix} \end{equation*}
\begin{equation*} k\begin{bmatrix}m\\b\end{bmatrix}=\begin{bmatrix}km\\kb\end{bmatrix}\quad \text{and}\quad t\begin{bmatrix}q\\c\end{bmatrix}=\begin{bmatrix}tq\\tc\end{bmatrix} \end{equation*}
It appears that we should be able to switch back and forth between $P^1$ and $\mathbb{R}^2$,
translating questions and answers from one space to the other and back
again.
We begin to suspect that $P^1$ and $\mathbb{R}^2$ have the same “structure”. Spaces such as $P^1$ and $\mathbb{R}^2$ are
said to be isomorphic. This term is derived from the Greek “iso,” meaning “same,”
and “morphe,” meaning “form.” The term captures the idea that isomorphic vector
spaces have the same structure. Before we present a precise definition of the term, we
need to better understand what we mean by “switching back and forth” between
spaces. The following Exploration will help us formulate this vague notion in terms of
transformations.
Recall that the set of all polynomials of degree $2$ or less, together with polynomial
addition and scalar multiplication, is a vector space, denoted by $P^2$. Let $B=\{1, x, x^2\}$. You should do
a quick mental check that $B$ is a basis of $P^2$.
Define a transformation $T:P^2\to \mathbb{R}^3$ by $T(a+bx+cx^2)=\begin{bmatrix}a\\b\\c\end{bmatrix}$. You may have recognized $T$ as the transformation that
maps each element of $P^2$ to its coordinate vector with respect to the ordered basis
$B$.
Our goal is to investigate and illustrate what these properties mean for transformation $T$,
and for the relationship between $P^2$ and $\mathbb{R}^3$.
First, observe that $T$ being one-to-one and onto establishes “pairings” between
elements of $P^2$ and $\mathbb{R}^3$ in such a way that every element of one vector space is uniquely
matched with exactly one element of the other vector space, as shown in the diagram
below.
Second, the fact that $T$ (and $T^{-1}$) are linear will allow us to translate questions related to
linear combinations in one of the vector spaces to equivalent questions in the other
vector space, then translate answers back to the original vector space. To make this
statement concrete, consider the following problem:
Let \begin{equation*} p_1(x)=3-x+2x^2\quad \text{and}\quad p_2(x)=-1+3x+x^2 \end{equation*}
find $p_1(x)+p_2(x)$.
The answer is, of course, \begin{equation*} p_1(x)+p_2(x)=2+2x+3x^2 \end{equation*}
Easy. But suppose for a moment that we did not know how to add polynomials, or
that we found the process extremely difficult, or maybe instead of $P^2$ we had another
vector space that we did not want to deal with.
It turns out that we can use $T$ and $T^{-1}$ to answer the addition question. We will start by
applying $T$ to $p_1(x)$ and $p_2(x)$ separately: \begin{equation*} T(p_1(x))=\begin{bmatrix}3\\-1\\2\end{bmatrix}\quad \text{and}\quad T(p_2(x))=\begin{bmatrix}-1\\3\\1\end{bmatrix} \end{equation*}
Next, we add the images of $p_1(x)$ and $p_2(x)$ in $\mathbb{R}^3$: \begin{equation*} \begin{bmatrix}3\\-1\\2\end{bmatrix}+\begin{bmatrix}-1\\3\\1\end{bmatrix}=\begin{bmatrix}2\\2\\3\end{bmatrix} \end{equation*}
This maneuver allows us to avoid the addition question in $P^2$ and answer the question
in $\mathbb{R}^3$ instead. We use $T^{-1}$ to translate the answer back to $P^2$: \begin{equation*} T^{-1}\left (\begin{bmatrix}2\\2\\3\end{bmatrix}\right )=2+2x+3x^2 \end{equation*}
All of this relies on linearity. Here is a formal justification for the process. Try to
spot where linearity is used. \begin{align} p_1(x)+p_2(x)&=(3-x+2x^2)+(-1+3x+x^2)\label{steplin1}\\ &=T^{-1}\left (\begin{bmatrix}3\\-1\\2\end{bmatrix}\right )+T^{-1}\left (\begin{bmatrix}-1\\3\\1\end{bmatrix}\right )\label{steplin2}\\ &=T^{-1}\left (\begin{bmatrix}3\\-1\\2\end{bmatrix}+\begin{bmatrix}-1\\3\\1\end{bmatrix}\right )\label{steplin3}\\ &=T^{-1}\left (\begin{bmatrix}2\\2\\3\end{bmatrix}\right )\label{steplin4}\\ &=2+2x+3x^2\label{steplin5} \end{align}
Invertible linear transformations, such as transformation $T$ of Exploration init:isomorph, are
useful because they preserve the structure of interactions between elements as
we move back and forth between two vector spaces, allowing us to answer
questions about one vector space in a different vector space. In particular, any
question related to linear combinations can be addressed in this fashion.
This includes questions concerning linear independence, span, basis and
dimension.
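For instance, using the transformation $T$ of Exploration init:isomorph, we can decide whether $1+x$, $1-x$, and $x^2$ are linearly independent in $P^2$ by testing their coordinate vectors in $\mathbb{R}^3$ instead: \begin{equation*} T(1+x)=\begin{bmatrix}1\\1\\0\end{bmatrix},\quad T(1-x)=\begin{bmatrix}1\\-1\\0\end{bmatrix},\quad T(x^2)=\begin{bmatrix}0\\0\\1\end{bmatrix} \end{equation*}
The matrix with these three vectors as its columns has determinant $-2\neq 0$, so the coordinate vectors are linearly independent in $\mathbb{R}^3$. Therefore $1+x$, $1-x$, $x^2$ are linearly independent in $P^2$.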
Let $V$ and $W$ be vector spaces. If there exists an invertible linear transformation $T:V\to W$, we say
that $V$ and $W$ are isomorphic and write $V\cong W$. The invertible linear transformation $T$ is called
an isomorphism.
It is worth pointing out that if $T:V\to W$ is an isomorphism, then $T^{-1}:W\to V$, being linear and invertible,
is also an isomorphism.
Our earlier discussion suggests that $P^2\cong \mathbb{R}^3$. We postpone the proof until Theorem
ex:coordmapiso.
We will start by finding a plausible candidate for an
isomorphism. Define $T:M_{2,2}\to P^3$ by \begin{equation*} T\left (\begin{bmatrix}a&b\\c&d\end{bmatrix}\right )=a+bx+cx^2+dx^3 \end{equation*}
We will first show that $T$ is a linear transformation, then verify that $T$ is invertible. For addition, \begin{align*} T\left (\begin{bmatrix}a&b\\c&d\end{bmatrix}+\begin{bmatrix}a'&b'\\c'&d'\end{bmatrix}\right )&=T\left (\begin{bmatrix}a+a'&b+b'\\c+c'&d+d'\end{bmatrix}\right )\\ &=(a+a')+(b+b')x+(c+c')x^2+(d+d')x^3\\ &=T\left (\begin{bmatrix}a&b\\c&d\end{bmatrix}\right )+T\left (\begin{bmatrix}a'&b'\\c'&d'\end{bmatrix}\right ) \end{align*} For scalar multiplication, \begin{align*} T\left (k\begin{bmatrix}a&b\\c&d\end{bmatrix}\right )&=T\left (\begin{bmatrix}ka&kb\\kc&kd\end{bmatrix}\right )\\ &=ka+kbx+kcx^2+kdx^3=k(a+bx+cx^2+dx^3)\\ &=kT\left (\begin{bmatrix}a&b\\c&d\end{bmatrix}\right ) \end{align*}
We can show that $T$ is one-to-one and onto, and therefore has an inverse. We can also
observe directly that $T^{-1}$ is given by \begin{equation*} T^{-1}(a+bx+cx^2+dx^3)=\begin{bmatrix}a&b\\c&d\end{bmatrix} \end{equation*}
We conclude that $T$ is an isomorphism, and $M_{2,2}\cong P^3$.
Isomorphism $T$ of Example ex:isomorphexample1 establishes the fact that $M_{2,2}\cong P^3$. However, there is nothing
special about $T$, as there are many other isomorphisms from $M_{2,2}$ to $P^3$. Just for fun, try to
verify that each of the following is an isomorphism.
In Exploration init:isomorph we made good use of a transformation $T$ that maps every element of $P^2$ to
its coordinate vector in $\mathbb{R}^3$. We observed that this transformation is linear and
invertible, therefore it is an isomorphism. The following example generalizes this
result.
Let $V$ be an $n$-dimensional vector space, and let $B$ be an ordered basis for $V$. Then $T:V\to \mathbb{R}^n$ given
by $T(\vec{v})=[\vec{v}]_B$ is an isomorphism.
Proof
We leave the proof of this result to the reader. (See Practice Problem
prob:verifyisomorphism.)
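For example, the informal identification of $P^1$ with $\mathbb{R}^2$ at the beginning of this section is exactly a coordinate-map isomorphism of this kind: taking $V=P^1$ with ordered basis $B=\{x,1\}$, the map \begin{equation*} T(mx+b)=[mx+b]_B=\begin{bmatrix}m\\b\end{bmatrix} \end{equation*}
is an isomorphism, so $P^1\cong \mathbb{R}^2$.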
Properties of Isomorphic Vector Spaces and Isomorphisms
In this section we will illustrate properties of isomorphisms with specific examples.
Formal proofs of properties will be presented in the next section.
In Exploration init:isomorph we defined a transformation $T:P^2\to \mathbb{R}^3$ by $T(a+bx+cx^2)=\begin{bmatrix}a\\b\\c\end{bmatrix}$. We later observed that $T$ is an
isomorphism. We will now examine the effect of $T$ on two different bases of
$P^2$.
First, consider the basis $\{1, x, x^2\}$: \begin{equation*} T(1)=\begin{bmatrix}1\\0\\0\end{bmatrix},\quad T(x)=\begin{bmatrix}0\\1\\0\end{bmatrix},\quad T(x^2)=\begin{bmatrix}0\\0\\1\end{bmatrix} \end{equation*}
Clearly, the images of the elements of $\{1, x, x^2\}$ form a basis of $\mathbb{R}^3$.
Now we consider a second basis of $P^2$.
It is easy to verify that the image vectors are linearly independent and span $\mathbb{R}^3$; therefore the images of
the elements of this basis form a basis of $\mathbb{R}^3$.
We can try any number of bases of $P^2$, and we will find that the image of each basis of $P^2$
is a basis of $\mathbb{R}^3$. In general, we have the following result:
An isomorphism maps a basis
of the domain to a basis of the codomain. (We will state this result more formally as
Theorem th:bijectionsbasis in the next section.)
Isomorphisms preserve bases, but more generally, they preserve linear independence.
If $T:V\to W$ is an isomorphism, then the subset $\{\vec{v}_1,\ldots ,\vec{v}_n\}$ of $V$ is linearly independent if and only if $\{T(\vec{v}_1),\ldots ,T(\vec{v}_n)\}$ is
linearly independent in $W$. (We will state and prove this result as Theorem
th:linindtolinindiso.)
Let $V$ be a vector space, and let $B=\{\vec{v}_1, \vec{v}_2, \vec{v}_3, \vec{v}_4\}$ be an ordered basis of $V$. Let
Are $\vec{w}_1$, $\vec{w}_2$, $\vec{w}_3$ linearly independent?
We could approach this question head-on by considering
the vector equation \begin{equation*} a_1\vec{w}_1+a_2\vec{w}_2+a_3\vec{w}_3=\vec{0} \end{equation*}
to see if the only solution is the trivial one. (See Practice Problem prob:noiso.)
Instead, we will use isomorphisms. Observe that we do not know anything about $V$
aside from the fact that it has four basis vectors. Vectors $\vec{w}_1$, $\vec{w}_2$, $\vec{w}_3$ are given in terms of
these basis vectors. This should give us an idea for constructing an isomorphism
between $V$ and $\mathbb{R}^4$. Consider $T:V\to \mathbb{R}^4$ such that $T(\vec{v})=[\vec{v}]_B$. Then
By Theorem ex:coordmapiso, $T$ is an isomorphism. This means that $\vec{w}_1$, $\vec{w}_2$, $\vec{w}_3$ are linearly independent if
and only if their coordinate vectors are linearly independent. There are multiple ways
of determining whether
are linearly independent. One way is to find the reduced row echelon form
of
The matrix reduces as follows:
We see that the rank of the matrix is less than the number of columns. We conclude that the column vectors are not
linearly independent. Thus, the vectors $\vec{w}_1$, $\vec{w}_2$ and $\vec{w}_3$ are not linearly independent.
Proofs of Isomorphism Properties
Recall that a transformation $T$ is one-to-one provided that \begin{equation*} T(\vec{v}_1)=T(\vec{v}_2) \end{equation*}
implies that \begin{equation*} \vec{v}_1=\vec{v}_2 \end{equation*}
We will show that images of linearly independent vectors under one-to-one linear
transformations are linearly independent.
Let $T:V\to W$ be a one-to-one linear transformation. Suppose $\{\vec{v}_1,\ldots ,\vec{v}_n\}$ is linearly independent in $V$.
Then $\{T(\vec{v}_1),\ldots ,T(\vec{v}_n)\}$ is linearly independent in $W$.
Proof
Suppose \begin{equation*} a_1T(\vec{v}_1)+\ldots +a_nT(\vec{v}_n)=\vec{0} \end{equation*} By linearity, \begin{equation*} T(a_1\vec{v}_1+\ldots +a_n\vec{v}_n)=\vec{0}=T(\vec{0}) \end{equation*} Because $T$ is one-to-one, we conclude that \begin{align} \label{onlytrivial}a_1\vec{v}_1+\ldots +a_n\vec{v}_n=\vec{0} \end{align}
By assumption, $\{\vec{v}_1,\ldots ,\vec{v}_n\}$ is linearly independent. Therefore $a_i=0$ for $1\leq i\leq n$.
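The one-to-one hypothesis cannot be dropped. For example, the derivative map $D:P^2\to P^2$ is linear but not one-to-one, and it fails to preserve linear independence: the set $\{1, x\}$ is linearly independent in $P^2$, yet \begin{equation*} D(1)=0\quad \text{and}\quad D(x)=1 \end{equation*}
so the image set contains the zero vector and is therefore linearly dependent.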
Recall that a transformation $T$ is onto provided that every vector of the codomain of $T$
is the image of some vector in the domain of $T$.
We will show that an onto linear transformation maps sets that span the domain to
sets that span the codomain.
Let $T:V\to W$ be an onto linear transformation. Suppose $V=\mbox{span}(\vec{v}_1,\ldots ,\vec{v}_n)$. Then $W=\mbox{span}(T(\vec{v}_1),\ldots ,T(\vec{v}_n))$.
Proof
Suppose $\vec{w}$ is an element of $W$. To show that $\{T(\vec{v}_1),\ldots ,T(\vec{v}_n)\}$ spans $W$, we will express $\vec{w}$ as
a linear combination of $T(\vec{v}_1),\ldots ,T(\vec{v}_n)$.
Because $T$ is onto, $\vec{w}=T(\vec{v})$ for some $\vec{v}$ in $V$. But $V=\mbox{span}(\vec{v}_1,\ldots ,\vec{v}_n)$. Therefore, $\vec{v}=a_1\vec{v}_1+\ldots +a_n\vec{v}_n$ for some scalar coefficients $a_1,\ldots ,a_n$. By
linearity, we have: \begin{align*} \vec{w}=T(\vec{v})&=T(a_1\vec{v}_1+\ldots +a_n\vec{v}_n)\\ &=a_1T(\vec{v}_1)+\ldots +a_nT(\vec{v}_n) \end{align*}
Thus, $\vec{w}$ is in the span of $T(\vec{v}_1),\ldots ,T(\vec{v}_n)$.
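The onto hypothesis is also essential. For example, the inclusion map $T:\mathbb{R}^2\to \mathbb{R}^3$ given by \begin{equation*} T\left (\begin{bmatrix}a\\b\end{bmatrix}\right )=\begin{bmatrix}a\\b\\0\end{bmatrix} \end{equation*}
is linear and one-to-one but not onto. The standard basis vectors $\vec{e}_1$, $\vec{e}_2$ span $\mathbb{R}^2$, but their images span only the plane of vectors whose third component is zero, not all of $\mathbb{R}^3$.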
We will now combine the results of Theorem th:onetoonelinind and Theorem th:ontospan to obtain a result about
the effect of isomorphisms on a basis.
Let $T:V\to W$ be an isomorphism. Suppose $\{\vec{v}_1,\ldots ,\vec{v}_n\}$ is a basis for $V$. Then $\{T(\vec{v}_1),\ldots ,T(\vec{v}_n)\}$ is a basis for $W$.
Suppose $T:V\to W$ is an isomorphism. Then the subset $\{\vec{v}_1,\ldots ,\vec{v}_n\}$ of $V$ is linearly independent if and only
if $\{T(\vec{v}_1),\ldots ,T(\vec{v}_n)\}$ is linearly independent in $W$.
Proof
We have already proved one direction of this “if and only if”
statement as Theorem th:onetoonelinind. To prove the other direction, suppose that $T(\vec{v}_1),\ldots ,T(\vec{v}_n)$ are linearly
independent vectors in $W$. We need to show that this implies that $\vec{v}_1,\ldots ,\vec{v}_n$ are linearly
independent in $V$. Observe that if $T$ is an isomorphism, then $T^{-1}:W\to V$ is also an isomorphism.
Thus, by Theorem th:onetoonelinind, $T^{-1}(T(\vec{v}_1)),\ldots ,T^{-1}(T(\vec{v}_n))$ are linearly independent. But this means that $\vec{v}_1,\ldots ,\vec{v}_n$ are linearly
independent.
Let $V$, $W$ and $U$ be vector spaces. Suppose that $T:V\to W$ and $S:W\to U$ are isomorphisms. Then $S\circ T:V\to U$ is an
isomorphism.
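As a concrete instance, we can compose the isomorphism $T:M_{2,2}\to P^3$ of Example ex:isomorphexample1 with the coordinate-map isomorphism $S:P^3\to \mathbb{R}^4$ (relative to the ordered basis $\{1, x, x^2, x^3\}$) from Theorem ex:coordmapiso: \begin{equation*} (S\circ T)\left (\begin{bmatrix}a&b\\c&d\end{bmatrix}\right )=S(a+bx+cx^2+dx^3)=\begin{bmatrix}a\\b\\c\\d\end{bmatrix} \end{equation*}
Since $S$ and $T$ are isomorphisms, so is $S\circ T$, and we conclude that $M_{2,2}\cong \mathbb{R}^4$.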
Let $V$ and $W$ be finite-dimensional vector spaces. Then $V\cong W$ if and only if $\mbox{dim}(V)=\mbox{dim}(W)$.
Proof
First, assume that $V\cong W$. Then there exists an isomorphism $T:V\to W$. Suppose $\mbox{dim}(V)=n$ and
let $\{\vec{v}_1,\ldots ,\vec{v}_n\}$ be a basis for $V$. By Theorem th:bijectionsbasis, $\{T(\vec{v}_1),\ldots ,T(\vec{v}_n)\}$ is a basis for $W$. Therefore $\mbox{dim}(W)=n$.
Conversely, suppose $\mbox{dim}(V)=\mbox{dim}(W)=n$, and let $B=\{\vec{v}_1,\ldots ,\vec{v}_n\}$, $C=\{\vec{w}_1,\ldots ,\vec{w}_n\}$ be bases for $V$ and $W$, respectively.
Define a linear transformation $T:V\to W$ by $T(\vec{v}_i)=\vec{w}_i$ for $1\leq i\leq n$. To show that $T$ is an isomorphism, we
need to prove that $T$ is one-to-one and onto.
Suppose $T(\vec{u})=T(\vec{v})$ for some vectors $\vec{u}$, $\vec{v}$ in $V$. We know that
\begin{equation*} \vec{u}=a_1\vec{v}_1+\ldots +a_n\vec{v}_n\quad \text{and}\quad \vec{v}=b_1\vec{v}_1+\ldots +b_n\vec{v}_n \end{equation*} for some scalars $a_i$'s and $b_i$'s. Thus,
\begin{equation*} T(a_1\vec{v}_1+\ldots +a_n\vec{v}_n)=T(b_1\vec{v}_1+\ldots +b_n\vec{v}_n) \end{equation*} By linearity of $T$,
\begin{equation*} a_1\vec{w}_1+\ldots +a_n\vec{w}_n=b_1\vec{w}_1+\ldots +b_n\vec{w}_n \end{equation*} But $\vec{w}_1,\ldots ,\vec{w}_n$ are linearly independent, so $a_i-b_i=0$ for all $i$. Therefore $a_i=b_i$ for all $i$. We conclude that
$\vec{u}=\vec{v}$.
We now show that $T$ is onto. Suppose that $\vec{w}$ is an element of $W$. Then \begin{equation*} \vec{w}=a_1\vec{w}_1+\ldots +a_n\vec{w}_n \end{equation*} for some
scalars $a_i$'s. But then \begin{equation*} \vec{w}=a_1T(\vec{v}_1)+\ldots +a_nT(\vec{v}_n)=T(a_1\vec{v}_1+\ldots +a_n\vec{v}_n) \end{equation*}
We conclude that $\vec{w}$ is an image of an element of $V$, so $T$ is onto.
From this theorem follows an important corollary that shows why we spent so much
time trying to understand $\mathbb{R}^n$ in this course.
Every $n$-dimensional vector space $V$ is isomorphic to $\mathbb{R}^n$.
The span of any two linearly independent vectors in a vector space is isomorphic to $\mathbb{R}^2$.
Recall that . Since , we conclude that is not isomorphic to .
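Dimension counting also gives a quick second proof of the fact, established in Example ex:isomorphexample1, that $M_{2,2}\cong P^3$: \begin{equation*} \mbox{dim}(M_{2,2})=4=\mbox{dim}(P^3) \end{equation*}
since the four matrices with a single entry equal to $1$ form a basis of $M_{2,2}$, while $\{1, x, x^2, x^3\}$ is a basis of $P^3$. Both spaces are therefore isomorphic to $\mathbb{R}^4$ as well.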
Practice Problems
Prove that transformation $T$ of Exploration init:isomorph is one-to-one and onto.
Verify that the transformation given by Expression eq:justforfuniso1 is an isomorphism.