We define isomorphic vector spaces, discuss isomorphisms and their properties, and prove that any vector space of dimension $n$ is isomorphic to $\mathbb{R}^n$.

LTR-0060: Isomorphic Vector Spaces

A vector space is defined as a collection of objects together with operations of addition and scalar multiplication that follow certain rules (Definition def:vectorspacegeneral of VSP-0050). In our study of abstract vector spaces, we have encountered spaces that appeared very different from each other. Just how different are they? Does $\mathbb{P}^1$, a vector space whose elements have the form $a+bx$, have anything in common with $\mathbb{R}^2$? Is $\mathbb{M}_{2,2}$ fundamentally different from $\mathbb{R}^4$?

To answer these questions, we will have to look beyond the superficial appearance of the elements of a vector space and delve into its structure. The “structure” of a vector space is determined by how the elements of the vector space interact with each other through the operations of addition and scalar multiplication.

Let us return to the question of what $\mathbb{P}^1$ has in common with $\mathbb{R}^2$. Consider two typical elements of $\mathbb{P}^1$:
$$a+bx \qquad\text{and}\qquad c+dx$$

We can add these elements together
$$(a+bx)+(c+dx)=(a+c)+(b+d)x \qquad\text{(eq:iso1)}$$
or multiply each one by a constant $k$:
$$k(a+bx)=ka+kbx \qquad\text{(eq:iso2)}$$
$$k(c+dx)=kc+kdx \qquad\text{(eq:iso3)}$$

But suppose we get tired of having to write down $x$ every time. Could we leave off the $x$ and represent $a+bx$ by $\begin{bmatrix}a\\b\end{bmatrix}$? If we do this, expressions (eq:iso1), (eq:iso2) and (eq:iso3) would be mimicked by the following expressions involving vectors of $\mathbb{R}^2$:
$$\begin{bmatrix}a\\b\end{bmatrix}+\begin{bmatrix}c\\d\end{bmatrix}=\begin{bmatrix}a+c\\b+d\end{bmatrix},\qquad
k\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}ka\\kb\end{bmatrix},\qquad
k\begin{bmatrix}c\\d\end{bmatrix}=\begin{bmatrix}kc\\kd\end{bmatrix}$$
It appears that we should be able to switch back and forth between $\mathbb{P}^1$ and $\mathbb{R}^2$, translating questions and answers from one space to the other and back again.

We begin to suspect that $\mathbb{P}^1$ and $\mathbb{R}^2$ have the same “structure”. Spaces such as $\mathbb{P}^1$ and $\mathbb{R}^2$ are said to be isomorphic. This term is derived from the Greek “iso,” meaning “same,” and “morphe,” meaning “form.” The term captures the idea that isomorphic vector spaces have the same structure. Before we present a precise definition of the term, we need to better understand what we mean by “switching back and forth” between spaces. The following Exploration will help us formulate this vague notion in terms of transformations.

Recall that the set of all polynomials of degree $2$ or less, together with polynomial addition and scalar multiplication, is a vector space, denoted by $\mathbb{P}^2$. Let $\mathcal{B}=\{1,\,x,\,x^2\}$. You should do a quick mental check that $\mathcal{B}$ is a basis of $\mathbb{P}^2$.

Define a transformation $T:\mathbb{P}^2\to\mathbb{R}^3$ by $T(a+bx+cx^2)=\begin{bmatrix}a\\b\\c\end{bmatrix}$. You may have recognized $T$ as the transformation that maps each element of $\mathbb{P}^2$ to its coordinate vector with respect to $\mathcal{B}$.
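Assuming $T$ is the coordinate-vector map just described, here is one concrete instance; the polynomial is chosen purely for illustration:

```latex
T\!\left(4 - x + 2x^2\right) \;=\; \begin{bmatrix} 4 \\ -1 \\ 2 \end{bmatrix},
\qquad
T^{-1}\!\left(\begin{bmatrix} 4 \\ -1 \\ 2 \end{bmatrix}\right) \;=\; 4 - x + 2x^2 .
```

Applying $T^{-1}$ after $T$ returns the polynomial we started with, which is the sense in which we can pass between the two spaces without losing information.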

Transformation $T$ has several nice properties:

By Theorem th:coordvectmappinglinear of LTR-0022, $T$ is linear.
It is easy to verify that $T$ is one-to-one and onto. (See Practice Problem prob:Tonetooneonto.)
By Theorem th:isomeansinvert of LTR-0035, $T$ has an inverse, $T^{-1}$.

Our goal is to investigate and illustrate what these properties mean for transformation $T$, and for the relationship between $\mathbb{P}^2$ and $\mathbb{R}^3$.

First, observe that $T$ being one-to-one and onto establishes “pairings” between elements of $\mathbb{P}^2$ and elements of $\mathbb{R}^3$ in such a way that every element of one vector space is uniquely matched with exactly one element of the other vector space, as shown in the diagram below.

Second, the fact that $T$ (and $T^{-1}$) are linear will allow us to translate questions related to linear combinations in one of the vector spaces to equivalent questions in the other vector space, then translate answers back to the original vector space. To make this statement concrete, consider the following problem:

Let $p(x)=1+2x+3x^2$ and $q(x)=4+5x+6x^2$; find $p(x)+q(x)$.

The answer is, of course,
$$p(x)+q(x)=5+7x+9x^2$$
Easy. But suppose for a moment that we did not know how to add polynomials, or that we found the process extremely difficult, or maybe instead of $\mathbb{P}^2$ we had another vector space that we did not want to deal with.

It turns out that we can use $T$ and $T^{-1}$ to answer the addition question. We will start by applying $T$ to $p(x)$ and $q(x)$ separately:
$$T(p(x))=\begin{bmatrix}1\\2\\3\end{bmatrix},\qquad T(q(x))=\begin{bmatrix}4\\5\\6\end{bmatrix}$$
Next, we add the images of $p(x)$ and $q(x)$ in $\mathbb{R}^3$:
$$\begin{bmatrix}1\\2\\3\end{bmatrix}+\begin{bmatrix}4\\5\\6\end{bmatrix}=\begin{bmatrix}5\\7\\9\end{bmatrix}$$
This maneuver allows us to avoid the addition question in $\mathbb{P}^2$ and answer the question in $\mathbb{R}^3$ instead. We use $T^{-1}$ to translate the answer back to $\mathbb{P}^2$:
$$T^{-1}\left(\begin{bmatrix}5\\7\\9\end{bmatrix}\right)=5+7x+9x^2$$
All of this relies on linearity. Here is a formal justification for the process. Try to spot where linearity is used.
$$\underbrace{p(x)+q(x)}_{(1)}
=\underbrace{T^{-1}\big(T(p(x)+q(x))\big)}_{(2)}
=\underbrace{T^{-1}\big(T(p(x))+T(q(x))\big)}_{(3)}
=\underbrace{T^{-1}\left(\begin{bmatrix}5\\7\\9\end{bmatrix}\right)}_{(4)}
=\underbrace{5+7x+9x^2}_{(5)}$$

Which transition requires linearity?

From Step (1) to Step (2)
From Step (2) to Step (3)
From Step (3) to Step (4)
From Step (4) to Step (5)
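The “translate, compute, translate back” maneuver described above can be sketched in code. The sketch below assumes the coordinate map $T(a+bx+cx^2)=(a,b,c)$ discussed in this Exploration; the polynomials $p$ and $q$ are illustrative choices.

```python
# A sketch of the back-and-forth between P^2 and R^3, assuming the
# coordinate map T(a + b x + c x^2) = [a, b, c] described above.
# The polynomials p and q are illustrative, not prescribed by the text.

def T(a, b, c):
    """Coordinate map: the polynomial a + b x + c x^2 |-> [a, b, c]."""
    return [a, b, c]

def T_inv(v):
    """Inverse map: [a, b, c] |-> the polynomial function a + b x + c x^2."""
    return lambda x: v[0] + v[1] * x + v[2] * x ** 2

p = lambda x: 1 + 2 * x + 3 * x ** 2     # p(x) = 1 + 2x + 3x^2
q = lambda x: 4 + 5 * x + 6 * x ** 2     # q(x) = 4 + 5x + 6x^2

# Add the coordinate vectors in R^3 instead of adding in P^2 ...
v = [pi + qi for pi, qi in zip(T(1, 2, 3), T(4, 5, 6))]
# ... then translate the answer back to P^2 with T_inv.
s = T_inv(v)    # the polynomial 5 + 7x + 9x^2

# The round trip agrees with ordinary polynomial addition:
assert all(abs(s(x) - (p(x) + q(x))) < 1e-12 for x in (-2, 0, 1, 3.5))
```

Because $T$ and $T^{-1}$ are linear, the detour through $\mathbb{R}^3$ produces the same answer as adding directly in $\mathbb{P}^2$.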
Invertible linear transformations, such as transformation $T$ of Exploration init:isomorph, are useful because they preserve the structure of interactions between elements as we move back and forth between two vector spaces, allowing us to answer questions about one vector space in a different vector space. In particular, any question related to linear combinations can be addressed in this fashion. This includes questions concerning linear independence, span, basis, and dimension.

It is worth pointing out that if $T:V\to W$ is an isomorphism, then $T^{-1}:W\to V$, being linear and invertible, is also an isomorphism.

Isomorphism $T$ in Example ex:isomorphexample1 establishes the fact that $\mathbb{P}^1\cong\mathbb{R}^2$. However, there is nothing special about $T$, as there are many other isomorphisms from $\mathbb{P}^1$ to $\mathbb{R}^2$. Just for fun, try to verify that each of the following is an isomorphism.
The Coordinate Vector Isomorphism

In Exploration init:isomorph we made good use of a transformation $T$ that maps every element of $\mathbb{P}^2$ to its coordinate vector in $\mathbb{R}^3$. We observed that this transformation is linear and invertible; therefore it is an isomorphism. The following example generalizes this result.
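A coordinate-vector map can be computed explicitly in small dimensions. The sketch below does this for $\mathbb{R}^2$ with a non-standard basis; the basis $\{(1,1),(1,-1)\}$ and the vector $(3,1)$ are illustrative choices, not taken from the text.

```python
# A sketch of the coordinate-vector map for R^2 with a non-standard basis.
# The basis vectors b1, b2 and the target vector are illustrative choices.

def coord_vector(b1, b2, w):
    """Solve c1*b1 + c2*b2 = w for (c1, c2) by Cramer's rule (2x2 case)."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    assert det != 0, "b1, b2 must be linearly independent (a basis)"
    c1 = (w[0] * b2[1] - b2[0] * w[1]) / det
    c2 = (b1[0] * w[1] - w[0] * b1[1]) / det
    return (c1, c2)

b1, b2 = (1, 1), (1, -1)          # a basis of R^2
c = coord_vector(b1, b2, (3, 1))  # coordinate vector of (3, 1) w.r.t. {b1, b2}

# Check: c1*b1 + c2*b2 reproduces the original vector.
assert (c[0]*b1[0] + c[1]*b2[0], c[0]*b1[1] + c[1]*b2[1]) == (3, 1)
```

The map sending each vector to its coordinate vector with respect to a fixed basis is exactly the kind of isomorphism the example above generalizes.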

Properties of Isomorphic Vector Spaces and Isomorphisms

In this section we will illustrate properties of isomorphisms with specific examples. Formal proofs of properties will be presented in the next section.

In Exploration init:isomorph we defined a transformation $T:\mathbb{P}^2\to\mathbb{R}^3$ by $T(a+bx+cx^2)=\begin{bmatrix}a\\b\\c\end{bmatrix}$. We later observed that $T$ is an isomorphism. We will now examine the effect of $T$ on two different bases of $\mathbb{P}^2$.

Let $\mathcal{B}=\{1,\,x,\,x^2\}$ and $\mathcal{C}=\{1+x,\ x+x^2,\ 1+x^2\}$. (Recall that $\mathcal{B}$ is a basis of $\mathbb{P}^2$ by Example ex:coordvectorinpolyvectspace2 of VSP-0060.)

First,
$$T(1)=\begin{bmatrix}1\\0\\0\end{bmatrix},\qquad T(x)=\begin{bmatrix}0\\1\\0\end{bmatrix},\qquad T(x^2)=\begin{bmatrix}0\\0\\1\end{bmatrix}$$
Clearly, the images of the elements of $\mathcal{B}$ form a basis of $\mathbb{R}^3$.

Now we consider $\mathcal{C}$:
$$T(1+x)=\begin{bmatrix}1\\1\\0\end{bmatrix},\qquad T(x+x^2)=\begin{bmatrix}0\\1\\1\end{bmatrix},\qquad T(1+x^2)=\begin{bmatrix}1\\0\\1\end{bmatrix}$$
It is easy to verify that these vectors are linearly independent and span $\mathbb{R}^3$, therefore the images of the elements of $\mathcal{C}$ form a basis of $\mathbb{R}^3$.

We can try any number of bases of $\mathbb{P}^2$ and we will find that the image of each basis of $\mathbb{P}^2$ is a basis of $\mathbb{R}^3$. In general, we have the following result:

Isomorphisms preserve bases, but more generally, they preserve linear independence.
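Such claims are easy to verify numerically: vectors form a basis of $\mathbb{R}^3$ exactly when the matrix having them as columns has nonzero determinant. The sketch below checks this for the coordinate images of the illustrative basis $\{1+x,\ x+x^2,\ 1+x^2\}$ of $\mathbb{P}^2$ (an example choice, not prescribed by the text).

```python
# Check that three vectors form a basis of R^3 by testing whether the
# 3x3 matrix with those vectors as columns has nonzero determinant.
# The columns are coordinate images of the polynomials 1+x, x+x^2,
# and 1+x^2 (an illustrative basis of P^2).

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# columns: T(1+x) = (1,1,0), T(x+x^2) = (0,1,1), T(1+x^2) = (1,0,1)
cols = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
m = [[cols[j][i] for j in range(3)] for i in range(3)]  # columns -> rows

d = det3(m)
assert d != 0  # nonzero determinant: the images form a basis of R^3
```

A zero determinant would instead signal that the images are linearly dependent, which (as the theorems below show) cannot happen when the transformation is an isomorphism and the inputs form a basis.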

Proofs of Isomorphism Properties

Recall that a transformation $T$ is one-to-one provided that $T(\mathbf{u})=T(\mathbf{v})$ implies that $\mathbf{u}=\mathbf{v}$.

We will show that images of linearly independent vectors under one-to-one linear transformations are linearly independent.

Suppose $T:V\to W$ is a one-to-one linear transformation, $\mathbf{v}_1,\ldots,\mathbf{v}_n$ are linearly independent vectors of $V$, and scalars $a_1,\ldots,a_n$ satisfy
$$a_1T(\mathbf{v}_1)+a_2T(\mathbf{v}_2)+\cdots+a_nT(\mathbf{v}_n)=\mathbf{0}$$

We will show that for each $i$, we must have $a_i=0$.

By linearity, we have:
$$T(a_1\mathbf{v}_1+a_2\mathbf{v}_2+\cdots+a_n\mathbf{v}_n)=a_1T(\mathbf{v}_1)+a_2T(\mathbf{v}_2)+\cdots+a_nT(\mathbf{v}_n)=\mathbf{0}$$

By Theorem th:zerotozero, $T(\mathbf{0})=\mathbf{0}$. Therefore,
$$T(a_1\mathbf{v}_1+a_2\mathbf{v}_2+\cdots+a_n\mathbf{v}_n)=T(\mathbf{0})$$

Because $T$ is one-to-one, we conclude that
$$a_1\mathbf{v}_1+a_2\mathbf{v}_2+\cdots+a_n\mathbf{v}_n=\mathbf{0}$$

By assumption, $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$ is linearly independent. Therefore $a_i=0$ for $1\le i\le n$.

Recall that a transformation $T$ is onto provided that every vector of the codomain of $T$ is the image of some vector in the domain of $T$.

We will show that an onto linear transformation maps sets that span the domain to sets that span the codomain.

Suppose $T:V\to W$ is an onto linear transformation, $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$ spans $V$, and $\mathbf{w}$ is an element of $W$. To show that $\{T(\mathbf{v}_1),\ldots,T(\mathbf{v}_n)\}$ spans $W$, we will express $\mathbf{w}$ as a linear combination of $T(\mathbf{v}_1),\ldots,T(\mathbf{v}_n)$.

Because $T$ is onto, $\mathbf{w}=T(\mathbf{v})$ for some $\mathbf{v}$ in $V$. But $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$ spans $V$. Therefore, $\mathbf{v}=a_1\mathbf{v}_1+\cdots+a_n\mathbf{v}_n$ for some scalar coefficients $a_1,\ldots,a_n$. By linearity, we have:
$$\mathbf{w}=T(\mathbf{v})=T(a_1\mathbf{v}_1+\cdots+a_n\mathbf{v}_n)=a_1T(\mathbf{v}_1)+\cdots+a_nT(\mathbf{v}_n)$$

Thus, $\mathbf{w}$ is in the span of $T(\mathbf{v}_1),\ldots,T(\mathbf{v}_n)$.

We will now combine the results of Theorem th:onetoonelinind and Theorem th:ontospan to obtain a result about the effect of isomorphisms on a basis.

Left to the reader. (See Practice Problem prob:bijectionsbasisproof)

We have already proved one direction of this “if and only if” statement as Theorem th:onetoonelinind. To prove the other direction, suppose that $T(\mathbf{v}_1),\ldots,T(\mathbf{v}_n)$ are linearly independent vectors in $W$. We need to show that this implies that $\mathbf{v}_1,\ldots,\mathbf{v}_n$ are linearly independent in $V$. Observe that if $T$ is an isomorphism, then $T^{-1}$ is also an isomorphism. Thus, by Theorem th:onetoonelinind, the vectors $T^{-1}(T(\mathbf{v}_1)),\ldots,T^{-1}(T(\mathbf{v}_n))$ are linearly independent. But this means that $\mathbf{v}_1,\ldots,\mathbf{v}_n$ are linearly independent.

The proof is left to the reader. (See Practice Problem prob:isocompisisoproof.)

Finite-dimensional Vector Spaces

First, assume that $V\cong W$. Then there exists an isomorphism $T:V\to W$. Suppose $\dim V=n$ and let $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$ be a basis for $V$. By Theorem th:bijectionsbasis, $\{T(\mathbf{v}_1),\ldots,T(\mathbf{v}_n)\}$ is a basis for $W$. Therefore $\dim W=n=\dim V$.

Conversely, suppose $\dim V=\dim W=n$, and let $\mathcal{B}=\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$, $\mathcal{C}=\{\mathbf{w}_1,\ldots,\mathbf{w}_n\}$ be bases for $V$ and $W$, respectively.

Define a linear transformation $T:V\to W$ by $T(\mathbf{v}_i)=\mathbf{w}_i$ for $1\le i\le n$. To show that $T$ is an isomorphism, we need to prove that $T$ is one-to-one and onto.

Suppose $T(\mathbf{u})=T(\mathbf{u}')$ for some vectors $\mathbf{u}$, $\mathbf{u}'$ in $V$. We know that $\mathbf{u}=a_1\mathbf{v}_1+\cdots+a_n\mathbf{v}_n$ and $\mathbf{u}'=b_1\mathbf{v}_1+\cdots+b_n\mathbf{v}_n$ for some scalars $a_i$’s and $b_i$’s. Thus,
$$T(a_1\mathbf{v}_1+\cdots+a_n\mathbf{v}_n)=T(b_1\mathbf{v}_1+\cdots+b_n\mathbf{v}_n)$$
By linearity of $T$,
$$a_1\mathbf{w}_1+\cdots+a_n\mathbf{w}_n=b_1\mathbf{w}_1+\cdots+b_n\mathbf{w}_n,\qquad\text{so}\qquad (a_1-b_1)\mathbf{w}_1+\cdots+(a_n-b_n)\mathbf{w}_n=\mathbf{0}$$
But $\mathbf{w}_1,\ldots,\mathbf{w}_n$ are linearly independent, so $a_i-b_i=0$ for all $i$. Therefore $a_i=b_i$ for all $i$. We conclude that $\mathbf{u}=\mathbf{u}'$.

We now show that $T$ is onto. Suppose that $\mathbf{w}$ is an element of $W$. Then $\mathbf{w}=a_1\mathbf{w}_1+\cdots+a_n\mathbf{w}_n$ for some scalars $a_i$’s. But then
$$\mathbf{w}=a_1T(\mathbf{v}_1)+\cdots+a_nT(\mathbf{v}_n)=T(a_1\mathbf{v}_1+\cdots+a_n\mathbf{v}_n)$$
We conclude that $\mathbf{w}$ is an image of an element of $V$, so $T$ is onto.
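As an illustration of the construction in this proof (an example not drawn from the text): $\mathbb{P}^3$ and $\mathbb{M}_{2,2}$ both have dimension $4$, so matching their standard bases term by term defines an isomorphism

```latex
a + bx + cx^2 + dx^3 \;\longmapsto\; \begin{bmatrix} a & b \\ c & d \end{bmatrix}
```

and, by the same reasoning, both spaces are isomorphic to $\mathbb{R}^4$.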

From this theorem follows a corollary that shows why we spent so much time trying to understand $\mathbb{R}^n$ in this course.

Practice Problems

Prove that transformation $T$ of Exploration init:isomorph is one-to-one and onto.
Verify that the transformation given by Expression eq:justforfuniso1 is an isomorphism.
Do Example ex:inverseimageoflinind without using isomorphisms.
Let Is linearly independent in ? Yes / No
Let Is a basis for ? Yes / No
Prove Theorem th:bijectionsbasis.
Let $V$ be a vector space, and suppose $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$ is a basis for $V$. What can we conclude about $V$?
We cannot conclude anything about $V$ because we don’t know what $V$ is.
Which of the following statements are true?
Verify that the transformation of Example ex:coordmapiso is an isomorphism.
You may find the proof of Theorem th:ndimspacesisorn helpful.
Prove that the composition of two isomorphisms is an isomorphism. (Theorem th:isocompisiso.)
Prove that a linear transformation $T:V\to W$ is one-to-one if and only if $\ker(T)=\{\mathbf{0}\}$.