In this section, we review matrices, including the determinant and the linear transformation represented by a matrix.

Matrices

We begin with the definition of a matrix. An $m \times n$ matrix $A$ is a rectangular array of real numbers with $m$ rows and $n$ columns; the entry in row $i$ and column $j$ is denoted $a_{ij}$.

Note that for an entry $a_{ij}$, the subscript describes the location of $a_{ij}$ in the matrix $A$: $i$ gives the row, and $j$ gives the column.

We can also think of a matrix as a “vector of vectors” in two different ways. If we imagine that the columns of $A$ are vectors in $\mathbb{R}^m$, then the matrix $A$ can be viewed as a vector of $n$ column vectors. If we imagine that the rows of $A$ are vectors in $\mathbb{R}^n$, then the matrix can be viewed as a vector of $m$ row vectors.
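For a concrete illustration, consider the $2 \times 3$ matrix
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}.
\]
We can view $A$ as three column vectors $\begin{pmatrix} 1 \\ 4 \end{pmatrix}$, $\begin{pmatrix} 2 \\ 5 \end{pmatrix}$, $\begin{pmatrix} 3 \\ 6 \end{pmatrix}$ in $\mathbb{R}^2$, or as two row vectors $(1, 2, 3)$ and $(4, 5, 6)$ in $\mathbb{R}^3$.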

Matrix Operations

Here, we’ll define matrix addition, scalar multiplication, and matrix multiplication.

In order to be able to add two matrices, they need to have the exact same dimensions. That is, they both need to be $m \times n$ matrices for some fixed values of $m$ and $n$. When we have two matrices $A$ and $B$ with the same dimensions, we define their sum component-wise or entry-wise: the $(i, j)$ entry of $A + B$ is $a_{ij} + b_{ij}$.
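For example, adding two $2 \times 2$ matrices entry by entry:
\[
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 0 & 5 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 1 + 0 & 2 + 5 \\ 3 + (-1) & 4 + 2 \end{pmatrix} = \begin{pmatrix} 1 & 7 \\ 2 & 6 \end{pmatrix}.
\]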

As you might expect, matrix addition has some nice properties which are inherited from addition of real numbers. We list some of them here.

  • Commutativity: $A + B = B + A$.
  • Associativity: $(A + B) + C = A + (B + C)$.
  • Identity: $A + 0 = A$, where $0$ is the $m \times n$ zero matrix.
  • Inverses: $A + (-A) = 0$, where $-A$ is the matrix whose entries are $-a_{ij}$.

We’ve seen that matrix addition works in a very natural way, and multiplying a matrix by a scalar (or real number) is similarly nice. We now define scalar multiplication for matrices: for a scalar $c$ and an $m \times n$ matrix $A$, the product $cA$ is the $m \times n$ matrix whose $(i, j)$ entry is $c \, a_{ij}$.
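For example, multiplying each entry by the scalar $2$:
\[
2 \begin{pmatrix} 1 & -3 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 2 & -6 \\ 0 & 8 \end{pmatrix}.
\]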

We now list some nice properties of scalar multiplication.

  • Distributivity over matrix addition: $c(A + B) = cA + cB$.
  • Distributivity over scalar addition: $(c + d)A = cA + dA$.
  • Compatibility: $(cd)A = c(dA)$.
  • Identity: $1A = A$.

We’ll now define matrix multiplication, which can be a bit trickier to work with than matrix addition or scalar multiplication. Matrix multiplication is stranger because it arises from the correspondence between matrices and linear transformations, in such a way that multiplication of matrices corresponds to composition of transformations. We’ll review this correspondence a bit later. For now, here are some important things to remember about matrix multiplication:

  • Not all matrices can be multiplied. In order to compute the product of two matrices $A$ and $B$, the number of columns in $A$ needs to be the same as the number of rows in $B$. If $A$ is $m \times n$ and $B$ is $n \times p$, then $AB$ is $m \times p$.
  • Matrix multiplication is not commutative. In fact, it’s possible that the matrix product $AB$ exists but the product $BA$ does not.

If $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix, then the product $AB$ is the $m \times p$ matrix whose $(i, k)$ entry is the sum $\sum_{j=1}^{n} a_{ij} b_{jk}$; that is, the $(i, k)$ entry of $AB$ is computed from row $i$ of $A$ and column $k$ of $B$. This definition can seem a bit convoluted, and it’s easier to understand how matrix multiplication works by going through an example.
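For instance, taking a pair of small $2 \times 2$ matrices:
\[
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}.
\]
Each entry comes from a row of the first matrix paired with a column of the second. Multiplying in the other order gives
\[
\begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 23 & 34 \\ 31 & 46 \end{pmatrix},
\]
which illustrates that matrix multiplication is not commutative.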

Although matrix multiplication is not commutative, it still has some nice algebraic properties. We list some of them here (each holds whenever the matrix dimensions make the products defined).

  • Associativity: $(AB)C = A(BC)$.
  • Distributivity: $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$.
  • Compatibility with scalar multiplication: $c(AB) = (cA)B = A(cB)$.

Determinants

When we have a square matrix (meaning an $n \times n$ matrix, where the number of rows and the number of columns are the same), we can compute an important number, called the determinant of the matrix. It turns out that this single number can tell us some important things about the matrix!

We begin by defining the determinant of a $2 \times 2$ matrix: for a matrix
\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},
\]
the determinant is $\det A = ad - bc$.

Note that the determinant of a matrix is just a number, not a matrix. We compute the determinant in a couple of examples.
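For example:
\[
\det \begin{pmatrix} 3 & 1 \\ 4 & 2 \end{pmatrix} = 3 \cdot 2 - 1 \cdot 4 = 2, \qquad \det \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix} = 1 \cdot 6 - 2 \cdot 3 = 0.
\]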

We’ve defined the determinant of $2 \times 2$ matrices, but we haven’t defined the determinant of a larger square matrix yet. It turns out that the determinant is defined inductively. This means that the determinant of a $3 \times 3$ matrix is defined using determinants of $2 \times 2$ matrices, the determinant of a $4 \times 4$ matrix is defined using determinants of $3 \times 3$ matrices, the determinant of a $5 \times 5$ matrix is defined using determinants of $4 \times 4$ matrices, and so on. This means in order to compute the determinant of a large square matrix, we often need to compute the determinants of many smaller matrices.

We now give the definition of the determinant of an $n \times n$ matrix $A$:
\[
\det A = \sum_{j=1}^{n} (-1)^{1+j} \, a_{1j} \det A_{1j},
\]
where $A_{1j}$ denotes the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting row $1$ and column $j$.

This definition is pretty confusing if you read through it without seeing an example, but it actually follows a nice pattern, which is easier to see with an example.
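For instance, expanding a $3 \times 3$ determinant:
\[
\det \begin{pmatrix} 1 & 2 & 0 \\ 3 & 1 & 2 \\ 0 & 1 & 1 \end{pmatrix} = 1 \det \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix} - 2 \det \begin{pmatrix} 3 & 2 \\ 0 & 1 \end{pmatrix} + 0 \det \begin{pmatrix} 3 & 1 \\ 0 & 1 \end{pmatrix} = 1(-1) - 2(3) + 0 = -7.
\]
Each entry of the first row is multiplied, with alternating signs $+, -, +$, by the determinant of the $2 \times 2$ matrix obtained by deleting that entry’s row and column.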

We sometimes call this method of computing a determinant “expanding along the first row.” This is because we can also compute the determinant of a matrix by similarly expanding along a different row, or even a column. (In general, the sign attached to the entry $a_{ij}$ in such an expansion is $(-1)^{i+j}$.)

It can be useful to think about which row or column will be easiest to expand along. In particular, choosing a row or column with a lot of zeros greatly simplifies computation.
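For instance, in the determinant
\[
\det \begin{pmatrix} 2 & 0 & 1 \\ 5 & 3 & 7 \\ 4 & 0 & 6 \end{pmatrix},
\]
expanding along the second column leaves only one nonzero term, with sign $(-1)^{2+2} = +1$:
\[
3 \det \begin{pmatrix} 2 & 1 \\ 4 & 6 \end{pmatrix} = 3(12 - 4) = 24.
\]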

One of the most powerful uses of the determinant is to tell us whether or not a matrix is invertible. Recall that an $n \times n$ matrix $A$ is invertible if there is an $n \times n$ matrix $B$ such that $AB = BA = I_n$, where $I_n$ is the $n \times n$ identity matrix.

The key fact is that a square matrix $A$ is invertible if and only if $\det A \neq 0$. This gives us a convenient way to test if a matrix is invertible, without needing to produce an explicit inverse.
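For example,
\[
\det \begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix} = 4 - 4 = 0, \qquad \det \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix} = 6 - 4 = 2,
\]
so the first matrix is not invertible, while the second is.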

Linear Transformations

One of the most important uses of matrices is to represent linear transformations. Recall the definition of a linear transformation: a function $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation if $T(u + v) = T(u) + T(v)$ and $T(cv) = cT(v)$ for all vectors $u, v$ in $\mathbb{R}^n$ and all scalars $c$.
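For example, the function $T : \mathbb{R}^2 \to \mathbb{R}^2$ given by $T(x, y) = (x + 2y, 3x)$ is a linear transformation, while $S(x, y) = (x + 1, y)$ is not, since $S(0, 0) = (1, 0) \neq (0, 0)$ and a linear transformation must send the zero vector to the zero vector.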

We can view an $m \times n$ matrix $A$ as representing a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ as follows. We write vectors as column vectors, or, equivalently, as $n \times 1$ matrices. For an input column vector $v$ in $\mathbb{R}^n$, we multiply by $A$ on the left, using matrix multiplication. This produces an $m \times 1$ matrix, or, equivalently, a column vector in $\mathbb{R}^m$. Thus, we can define a function
\[
T_A : \mathbb{R}^n \to \mathbb{R}^m, \qquad T_A(v) = Av.
\]
Using properties of matrix multiplication, we have that this is a linear transformation. Thus, we have the linear transformation $T_A$ associated to a matrix $A$.
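As a quick illustration, if
\[
A = \begin{pmatrix} 1 & 2 \\ 3 & -1 \end{pmatrix}, \qquad v = \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\]
then $T_A(v) = Av = \begin{pmatrix} 1 \cdot 1 + 2 \cdot 1 \\ 3 \cdot 1 + (-1) \cdot 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$.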

Notice that when we apply the linear transformation $T_A$ to the standard unit vectors $e_1, e_2, \ldots, e_n$, we obtain the columns of $A$ as the output vectors. This observation can be used to reconstruct a matrix from a given linear transformation.

We can see how this is useful through an example.
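For instance, suppose $T : \mathbb{R}^2 \to \mathbb{R}^2$ is the linear transformation $T(x, y) = (x + 2y, 3x - y)$. Applying $T$ to the standard unit vectors gives
\[
T(e_1) = T(1, 0) = \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \qquad T(e_2) = T(0, 1) = \begin{pmatrix} 2 \\ -1 \end{pmatrix},
\]
so $T$ is represented by the matrix
\[
A = \begin{pmatrix} 1 & 2 \\ 3 & -1 \end{pmatrix}.
\]
This is the same matrix as in the illustration above, so you can check that $Av$ agrees with $T(v)$ there.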

Although we’ve reviewed some of the most important concepts from linear algebra, there is still a lot of material that we weren’t able to include here. Make sure you refer back to your linear algebra textbook if there’s anything else you need to review!