INDEX

A hyperlinked term will take you to the section where the term is defined. A parenthetical hyperlink will take you to the specific definition or formula within that section. Use the arrows on the right to display the definition or formula directly in the index.

A

addition of vectors

adjugate of a matrix

algebraic multiplicity of an eigenvalue

associated homogeneous system (def:asshomsys)

Given any linear system $A\vec{x} = \vec{b}$, the system $A\vec{x} = \vec{0}$ is called the associated homogeneous system.

augmented matrix

B

back substitution

basis (def:basis)

A set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m\}$ is called a basis of $\mathbb{R}^n$ (or a basis of a subspace $V$ of $\mathbb{R}^n$) provided that

(a)
$\text{span}(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m) = \mathbb{R}^n$ (or $V$)
(b)
$\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m\}$ is linearly independent.

block matrices

box product

C

change-of-basis matrix (def:matlintransgenera)

The matrix of Theorem th:matlintransgeneral is called the matrix of $T$ with respect to the given ordered bases of the domain and codomain.

characteristic equation (def:chareqcharpoly)

The equation $\det(A - \lambda I) = 0$ is called the characteristic equation of $A$.
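For example, for $A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ the characteristic equation is $\det(A - \lambda I) = (2-\lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = 0$, with roots $\lambda = 1$ and $\lambda = 3$.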

characteristic polynomial

Cholesky factorization

closed under addition (def:closedunderaddition)

A set $S$ is said to be closed under addition if for each pair of elements $\vec{u}$ and $\vec{v}$ in $S$, the sum $\vec{u} + \vec{v}$ is also in $S$.

closed under scalar multiplication (def:closedunderscalarmult)

A set $S$ is said to be closed under scalar multiplication if for each element $\vec{v}$ in $S$ and for each scalar $k$, the product $k\vec{v}$ is also in $S$.

codomain of a linear transformation

coefficient matrix

cofactor expansion

column matrix (vector)

column space of a matrix (def:colspace)

Let $A$ be an $m \times n$ matrix. The column space of $A$, denoted by $\text{col}(A)$, is the subspace of $\mathbb{R}^m$ spanned by the columns of $A$.

composition of linear transformations (def:compoflintrans)

Let $V$, $U$ and $W$ be vector spaces, and let $T: V \to U$ and $S: U \to W$ be linear transformations. The composition of $S$ and $T$ is the transformation $S \circ T: V \to W$ given by $(S \circ T)(\vec{v}) = S(T(\vec{v}))$.

consistent system

convergence

coordinate vector with respect to a basis (in $\mathbb{R}^n$) (abstract vector spaces)

Cramer’s Rule

cross product (def:crossproduct)

Let $\vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$ be vectors in $\mathbb{R}^3$. The cross product of $\vec{u}$ and $\vec{v}$, denoted by $\vec{u} \times \vec{v}$, is given by
$$\vec{u} \times \vec{v} = \begin{bmatrix} u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1 \end{bmatrix}$$
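For example, if $\vec{u} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} 0 \\ 1 \\ 4 \end{bmatrix}$, then $\vec{u} \times \vec{v} = \begin{bmatrix} 2\cdot4 - 3\cdot1 \\ 3\cdot0 - 1\cdot4 \\ 1\cdot1 - 2\cdot0 \end{bmatrix} = \begin{bmatrix} 5 \\ -4 \\ 1 \end{bmatrix}$.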

D

determinant ($2 \times 2$) (def:twodetcrossprod)

A determinant is a number associated with a square matrix. For a $2 \times 2$ matrix,
$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$
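For example, $\begin{vmatrix} 3 & 1 \\ 2 & 4 \end{vmatrix} = 3\cdot4 - 1\cdot2 = 10$.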

determinant ($3 \times 3$) (def:threedetcrossprod)

A determinant is a number associated with a square matrix. For a $3 \times 3$ matrix, expanding along the first row,
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$

diagonal matrix

diagonalizable matrix (def:diagonalizable)

Let $A$ be an $n \times n$ matrix. Then $A$ is said to be diagonalizable if there exists an invertible matrix $P$ such that
$$P^{-1}AP = D$$
where $D$ is a diagonal matrix. In other words, a matrix $A$ is diagonalizable if it is similar to a diagonal matrix, $A \sim D$.
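For example, $A = \begin{bmatrix} 4 & 1 \\ 0 & 2 \end{bmatrix}$ is diagonalizable: taking $P = \begin{bmatrix} 1 & 1 \\ 0 & -2 \end{bmatrix}$, whose columns are eigenvectors of $A$, gives $P^{-1}AP = \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix}$.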

dimension (def:dimension) (also see def:dimensionabstract)

Let $V$ be a subspace of $\mathbb{R}^n$. The dimension of $V$ is the number, $m$, of elements in any basis of $V$. We write $\dim(V) = m$.

Direction vector

Distance between points in $\mathbb{R}^n$ (form:distRn)

Let $P = (p_1, p_2, \ldots, p_n)$ and $Q = (q_1, q_2, \ldots, q_n)$ be points in $\mathbb{R}^n$. The distance between $P$ and $Q$ is given by
$$d(P, Q) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \cdots + (q_n - p_n)^2}$$

Distance between point and line

divergence

domain of a linear transformation

dominant eigenvalue (def:dominant ew,ev)

An eigenvalue $\lambda_1$ of an $n \times n$ matrix $A$ is called a dominant eigenvalue if $\lambda_1$ has multiplicity $1$, and $|\lambda_1| > |\lambda_i|$ for all other eigenvalues $\lambda_i$ of $A$.

Any corresponding eigenvector is called a dominant eigenvector of $A$.

dot product (def:dotproduct)

Let $\vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$ be vectors in $\mathbb{R}^n$. The dot product of $\vec{u}$ and $\vec{v}$, denoted by $\vec{u} \cdot \vec{v}$, is given by
$$\vec{u} \cdot \vec{v} = u_1v_1 + u_2v_2 + \cdots + u_nv_n$$
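For example, $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ -1 \\ 2 \end{bmatrix} = (1)(4) + (2)(-1) + (3)(2) = 8$.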

E

eigenspace (def:eigspace)

The set of all eigenvectors associated with a given eigenvalue of a matrix is known as the eigenspace associated with that eigenvalue.

eigenvalue (def:eigen)

Let $A$ be an $n \times n$ matrix. We say that a non-zero vector $\vec{x}$ is an eigenvector of $A$ if $A\vec{x} = \lambda\vec{x}$ for some scalar $\lambda$. We say that $\lambda$ is an eigenvalue of $A$ associated with the eigenvector $\vec{x}$.

eigenvalue decomposition (def:eigdecomposition)

If we are able to diagonalize $A$, say $A = PDP^{-1}$, we say that $PDP^{-1}$ is an eigenvalue decomposition of $A$.

eigenvector (def:eigen)

Let $A$ be an $n \times n$ matrix. We say that a non-zero vector $\vec{x}$ is an eigenvector of $A$ if $A\vec{x} = \lambda\vec{x}$ for some scalar $\lambda$. We say that $\lambda$ is an eigenvalue of $A$ associated with the eigenvector $\vec{x}$.

elementary matrix (def:elemmatrix)

An elementary matrix is a square matrix formed by applying a single elementary row operation to the identity matrix.

elementary row operations (def:elemrowops)

The following three operations performed on a linear system are called elementary row operations.

(a)
Switching the order of equations (rows) $i$ and $j$: $R_i \leftrightarrow R_j$
(b)
Multiplying both sides of equation (row) $i$ by the same non-zero constant, $k$, and replacing equation $i$ with the result: $kR_i \to R_i$
(c)
Adding $k$ times equation (row) $i$ to equation (row) $j$, and replacing equation $j$ with the result: $R_j + kR_i \to R_j$

equivalence relation

equivalent linear systems (def:equivsystems)

Two systems of linear equations are said to be equivalent if they have the same solution set.

F

free variable

fundamental subspaces of a matrix

G

Gaussian elimination (def:GaussianElimination)

The process of using the elementary row operations on a matrix to transform it into row-echelon form is called Gaussian Elimination.

Gauss-Jordan elimination (def:GaussJordanElimination)

The process of using the elementary row operations on a matrix to transform it into reduced row-echelon form is called Gauss-Jordan elimination.

Gauss-Seidel method

geometric multiplicity of an eigenvalue (def:geommulteig)

The geometric multiplicity of an eigenvalue $\lambda$ is the dimension of the corresponding eigenspace.

Gerschgorin’s Disk Theorem

Gram-Schmidt Process

H

homogeneous system (def:homogeneous)

A system of linear equations is called homogeneous if the system can be written in the form $A\vec{x} = \vec{0}$; that is, if all of the constant terms are zero.

hyperplane

I

identity matrix

identity transformation (def:idtransonrn)

The identity transformation on $\mathbb{R}^n$, denoted by $\mathrm{id}_{\mathbb{R}^n}$, is the transformation that maps each element of $\mathbb{R}^n$ to itself.

In other words, $\mathrm{id}_{\mathbb{R}^n}: \mathbb{R}^n \to \mathbb{R}^n$ is the transformation such that $\mathrm{id}_{\mathbb{R}^n}(\vec{x}) = \vec{x}$ for every $\vec{x}$ in $\mathbb{R}^n$.

image of a linear transformation (def:imageofT)

Let $V$ and $W$ be vector spaces, and let $T: V \to W$ be a linear transformation. The image of $T$, denoted by $\mathrm{im}(T)$, is the set $\mathrm{im}(T) = \{T(\vec{v}) : \vec{v} \in V\}$. In other words, the image of $T$ consists of the individual images of all vectors of $V$.

inconsistent system

inner product (def:innerproductspace)

An inner product on a real vector space $V$ is a function that assigns a real number $\langle \vec{u}, \vec{v} \rangle$ to every pair $\vec{u}$, $\vec{v}$ of vectors in $V$ in such a way that the following properties are satisfied.

(a)
$\langle \vec{u}, \vec{v} \rangle$ is a real number for all $\vec{u}$ and $\vec{v}$ in $V$.
(b)
$\langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle$ for all $\vec{u}$ and $\vec{v}$ in $V$.
(c)
$\langle \vec{u} + \vec{v}, \vec{w} \rangle = \langle \vec{u}, \vec{w} \rangle + \langle \vec{v}, \vec{w} \rangle$ for all $\vec{u}$, $\vec{v}$, and $\vec{w}$ in $V$.
(d)
$\langle k\vec{u}, \vec{v} \rangle = k\langle \vec{u}, \vec{v} \rangle$ for all $\vec{u}$ and $\vec{v}$ in $V$ and all $k$ in $\mathbb{R}$.
(e)
$\langle \vec{v}, \vec{v} \rangle > 0$ for all $\vec{v} \neq \vec{0}$ in $V$.

inner product space

inverse of a linear transformation (def:inverseoflintrans)

Let $V$ and $W$ be vector spaces, and let $T: V \to W$ be a linear transformation. A transformation $S: W \to V$ that satisfies $S \circ T = \mathrm{id}_V$ and $T \circ S = \mathrm{id}_W$ is called an inverse of $T$. If $T$ has an inverse, $T$ is called invertible.

inverse of a square matrix (def:matinverse)

Let $A$ be an $n \times n$ matrix. An $n \times n$ matrix $B$ is called an inverse of $A$ if $AB = BA = I$, where $I$ is the $n \times n$ identity matrix. If such an inverse matrix exists, we say that $A$ is invertible. If an inverse does not exist, we say that $A$ is not invertible.

isomorphism (def:isomorphism)

Let $V$ and $W$ be vector spaces. If there exists an invertible linear transformation $T: V \to W$, we say that $V$ and $W$ are isomorphic and write $V \cong W$. The invertible linear transformation $T$ is called an isomorphism.

iterative methods

J

Jacobi’s method

K

kernel of a linear transformation (def:kernel)

Let $V$ and $W$ be vector spaces, and let $T: V \to W$ be a linear transformation. The kernel of $T$, denoted by $\ker(T)$, is the set $\ker(T) = \{\vec{v} \in V : T(\vec{v}) = \vec{0}\}$. In other words, the kernel of $T$ consists of all vectors of $V$ that map to $\vec{0}$ in $W$.

L

Laplace Expansion Theorem (th:laplace1)

leading entry (leading 1) (def:leadentry)

The first non-zero entry in a row of a matrix (when read from left to right) is called the leading entry. When the leading entry is 1, we refer to it as a leading 1.

leading variable

linear combination of vectors (def:lincomb)

A vector $\vec{v}$ is said to be a linear combination of vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p$ if $\vec{v} = a_1\vec{v}_1 + a_2\vec{v}_2 + \cdots + a_p\vec{v}_p$ for some scalars $a_1, a_2, \ldots, a_p$.

linear equation (def:lineq)

A linear equation in variables $x_1, x_2, \ldots, x_n$ is an equation that can be written in the form $a_1x_1 + a_2x_2 + \cdots + a_nx_n = b$, where $a_1, a_2, \ldots, a_n$ and $b$ are constants.

linear transformation (def:lin) (also see Linear Transformations of Abstract Vector Spaces)

A transformation $T: \mathbb{R}^n \to \mathbb{R}^m$ is called a linear transformation if the following are true for all vectors $\vec{u}$ and $\vec{v}$ in $\mathbb{R}^n$, and scalars $k$: $T(k\vec{u}) = kT(\vec{u})$ and $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$.

linearly dependent vectors (def:linearindependence)

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ be vectors of $\mathbb{R}^n$. We say that the set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly independent if the only solution to
$$a_1\vec{v}_1 + a_2\vec{v}_2 + \cdots + a_k\vec{v}_k = \vec{0}$$
is the trivial solution $a_1 = a_2 = \cdots = a_k = 0$.

If, in addition to the trivial solution, a non-trivial solution (not all $a_i$ are zero) exists, then we say that the set is linearly dependent.

linearly independent vectors (def:linearindependence)

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ be vectors of $\mathbb{R}^n$. We say that the set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly independent if the only solution to
$$a_1\vec{v}_1 + a_2\vec{v}_2 + \cdots + a_k\vec{v}_k = \vec{0}$$
is the trivial solution $a_1 = a_2 = \cdots = a_k = 0$.

If, in addition to the trivial solution, a non-trivial solution (not all $a_i$ are zero) exists, then we say that the set is linearly dependent.

lower triangular matrix

LU factorization

M

Magnitude of a vector (def:normrn)

Let $\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$ be a vector in $\mathbb{R}^n$; then the length, or the magnitude, of $\vec{v}$ is given by
$$\|\vec{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$
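For example, if $\vec{v} = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}$, then $\|\vec{v}\| = \sqrt{1^2 + 2^2 + 2^2} = \sqrt{9} = 3$.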

main diagonal

matrix

matrix addition (def:additionofmatrices)

Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two $m \times n$ matrices. Then the sum of matrices $A$ and $B$, denoted by $A + B$, is an $m \times n$ matrix given by $A + B = [a_{ij} + b_{ij}]$.

matrix equality (def:equalityofmatrices)

Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two $m \times n$ matrices. Then $A = B$ means that $a_{ij} = b_{ij}$ for all $i$ and $j$.

matrix multiplication (by a matrix) (def:matmatproduct)

Let $A$ be an $m \times n$ matrix whose rows are the vectors $\vec{r}_1, \vec{r}_2, \ldots, \vec{r}_m$. Let $B$ be an $n \times p$ matrix with columns $\vec{b}_1, \vec{b}_2, \ldots, \vec{b}_p$. Then the entries of the matrix product $AB$ are given by the dot products
$$AB = \begin{bmatrix} \vec{r}_1 \cdot \vec{b}_1 & \vec{r}_1 \cdot \vec{b}_2 & \cdots & \vec{r}_1 \cdot \vec{b}_p \\ \vec{r}_2 \cdot \vec{b}_1 & \vec{r}_2 \cdot \vec{b}_2 & \cdots & \vec{r}_2 \cdot \vec{b}_p \\ \vdots & \vdots & & \vdots \\ \vec{r}_m \cdot \vec{b}_1 & \vec{r}_m \cdot \vec{b}_2 & \cdots & \vec{r}_m \cdot \vec{b}_p \end{bmatrix}$$
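For example, $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} (1)(0)+(2)(1) & (1)(1)+(2)(0) \\ (3)(0)+(4)(1) & (3)(1)+(4)(0) \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}$.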

matrix multiplication (by a scalar) (def:scalarmultofmatrices)

If $A = [a_{ij}]$ and $k$ is a scalar, then $kA = [ka_{ij}]$.

matrix multiplication (by a vector) (def:matrixvectormult)

Let $A$ be an $m \times n$ matrix with columns $\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n$, and let $\vec{x}$ be an $n \times 1$ vector. The product $A\vec{x}$ is the $m \times 1$ vector given by
$$A\vec{x} = x_1\vec{a}_1 + x_2\vec{a}_2 + \cdots + x_n\vec{a}_n$$
or, equivalently, the vector whose $i$th entry is the dot product of the $i$th row of $A$ with $\vec{x}$.

matrix of a linear transformation with respect to the given bases (def:matlintransgenera)

The matrix of Theorem th:matlintransgeneral is called the matrix of $T$ with respect to the given ordered bases of the domain and codomain.

minor of a square matrix

N

norm (def:030438)

nonsingular matrix (def:nonsingularmatrix)

A square matrix $A$ is said to be nonsingular provided that $\det(A) \neq 0$. Otherwise we say that $A$ is singular.

normal vector

null space of a matrix (def:nullspace)

Let $A$ be an $m \times n$ matrix. The null space of $A$, denoted by $\text{null}(A)$, is the set of all vectors $\vec{x}$ in $\mathbb{R}^n$ such that $A\vec{x} = \vec{0}$.

nullity of a linear transformation (def:nullityT)

The nullity of a linear transformation $T$ is the dimension of the kernel of $T$.

nullity of a matrix (def:matrixnullity)

Let $A$ be a matrix. The dimension of the null space of $A$ is called the nullity of $A$.

O

one-to-one (def:onetoone)

A linear transformation $T: V \to W$ is one-to-one if $T(\vec{v}_1) = T(\vec{v}_2)$ implies that $\vec{v}_1 = \vec{v}_2$.

onto (def:onto)

A linear transformation $T: V \to W$ is onto if for every element $\vec{w}$ of $W$, there exists an element $\vec{v}$ of $V$ such that $T(\vec{v}) = \vec{w}$.

ordered basis

orthogonal basis

orthogonal complement of a subspace of $\mathbb{R}^n$ (def:023776)

If $W$ is a subspace of $\mathbb{R}^n$, define the orthogonal complement $W^\perp$ of $W$ (pronounced "$W$-perp") by
$$W^\perp = \{\vec{x} \in \mathbb{R}^n : \vec{x} \cdot \vec{y} = 0 \text{ for all } \vec{y} \in W\}$$

Orthogonal Decomposition Theorem (th:OrthoDecomp)

Let $W$ be a subspace of $\mathbb{R}^n$ and let $\vec{x}$ be a vector in $\mathbb{R}^n$. Then there exist unique vectors $\vec{w}$ in $W$ and $\vec{w}^\perp$ in $W^\perp$ such that $\vec{x} = \vec{w} + \vec{w}^\perp$.

orthogonal matrix (def:orthogonal matrices)

An $n \times n$ matrix $A$ is called an orthogonal matrix if it satisfies one (and hence all) of the conditions in Theorem th:orthogonal_matrices.

Orthogonal projection onto a subspace of (def:projOntoSubspace)

Let $W$ be a subspace of $\mathbb{R}^n$ with orthogonal basis $\{\vec{f}_1, \vec{f}_2, \ldots, \vec{f}_m\}$. If $\vec{x}$ is in $\mathbb{R}^n$, the vector
$$\text{proj}_W\vec{x} = \frac{\vec{x} \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\vec{f}_1 + \frac{\vec{x} \cdot \vec{f}_2}{\|\vec{f}_2\|^2}\vec{f}_2 + \cdots + \frac{\vec{x} \cdot \vec{f}_m}{\|\vec{f}_m\|^2}\vec{f}_m$$

is called the orthogonal projection of $\vec{x}$ onto $W$.

Orthogonal projection onto a vector (def:projection)

Let $\vec{u}$ be a vector, and let $\vec{v}$ be a non-zero vector. The projection of $\vec{u}$ onto $\vec{v}$ is given by
$$\text{proj}_{\vec{v}}\vec{u} = \left(\frac{\vec{u} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}\right)\vec{v}$$
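For example, with $\vec{u} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, we get $\text{proj}_{\vec{v}}\vec{u} = \frac{5}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 5/2 \\ 5/2 \end{bmatrix}$.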

orthogonal set of vectors (orthset)

Let $\{\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_k\}$ be a set of nonzero vectors in $\mathbb{R}^n$. Then this set is called an orthogonal set if $\vec{w}_i \cdot \vec{w}_j = 0$ for all $i \neq j$. Moreover, if $\|\vec{w}_i\| = 1$ for $i = 1, \ldots, k$ (i.e. each vector in the set is a unit vector), we say the set of vectors is an orthonormal set.

Orthogonal vectors (def:orthovectors)

Let $\vec{u}$ and $\vec{v}$ be vectors in $\mathbb{R}^n$. We say $\vec{u}$ and $\vec{v}$ are orthogonal if $\vec{u} \cdot \vec{v} = 0$.

orthogonally diagonalizable matrix (def:orthDiag)

An $n \times n$ matrix $A$ is said to be orthogonally diagonalizable if an orthogonal matrix $P$ can be found such that $P^{-1}AP = P^TAP$ is diagonal.

orthonormal basis

orthonormal set of vectors (orthset)

Let $\{\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_k\}$ be a set of nonzero vectors in $\mathbb{R}^n$. Then this set is called an orthogonal set if $\vec{w}_i \cdot \vec{w}_j = 0$ for all $i \neq j$. Moreover, if $\|\vec{w}_i\| = 1$ for $i = 1, \ldots, k$ (i.e. each vector in the set is a unit vector), we say the set of vectors is an orthonormal set.

P

parametric equation of a line (form:paramlinend)

Let $\vec{d} = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{bmatrix}$ be a direction vector for line $L$ in $\mathbb{R}^n$, and let $P = (p_1, p_2, \ldots, p_n)$ be an arbitrary point on $L$. Then the following parametric equations describe $L$:
$$x_1 = p_1 + d_1t, \quad x_2 = p_2 + d_2t, \quad \ldots, \quad x_n = p_n + d_nt$$

particular solution

partitioned matrices (block multiplication)

permutation matrix

pivot

positive definite matrix (def:024811)

A square matrix $A$ is called positive definite if it is symmetric and all its eigenvalues $\lambda$ are positive. We write $\lambda > 0$ to indicate that the eigenvalues are real and positive.

power method (and its variants)

Q

QR factorization (def:QR-factorization)

Let $A$ be an $m \times n$ matrix with independent columns. A QR-factorization of $A$ expresses it as $A = QR$, where $Q$ is $m \times n$ with orthonormal columns and $R$ is an invertible and upper triangular matrix with positive diagonal entries.

R

rank of a linear transformation (def:rankofT)

The rank of a linear transformation $T$ is the dimension of the image of $T$.

rank of a matrix (def:rankofamatrix) (th:dimofrowA)

The rank of a matrix $A$, denoted by $\text{rank}(A)$, is the number of nonzero rows that remain after we reduce $A$ to row-echelon form by elementary row operations.

For any matrix $A$, $\text{rank}(A) = \dim(\text{row}(A))$.

Rank-Nullity Theorem for linear transformations (th:ranknullityforT)

Let $T: V \to W$ be a linear transformation. Suppose $\dim(V) = n$; then $\text{rank}(T) + \text{nullity}(T) = n$.

Rank-Nullity Theorem for matrices (th:matrixranknullity)

Let $A$ be an $m \times n$ matrix. Then $\text{rank}(A) + \text{nullity}(A) = n$.
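For example, the $2 \times 3$ matrix $A = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \end{bmatrix}$ has $\text{rank}(A) = 2$ and $\text{nullity}(A) = 1$, and indeed $2 + 1 = 3 = n$.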

Rayleigh quotients

reduced row echelon form (def:rref)

A matrix that is already in row-echelon form is said to be in reduced row-echelon form if:

(a)
Each leading entry is $1$.
(b)
All entries above and below each leading $1$ are $0$.

redundant vectors (def:redundant)

Let $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ be a set of vectors in $\mathbb{R}^n$. If we can remove one vector without changing the span of this set, then that vector is redundant. In other words, if $\text{span}(\vec{v}_1, \ldots, \vec{v}_{i-1}, \vec{v}_{i+1}, \ldots, \vec{v}_k) = \text{span}(\vec{v}_1, \ldots, \vec{v}_k)$, we say that $\vec{v}_i$ is a redundant element of $\{\vec{v}_1, \ldots, \vec{v}_k\}$, or simply redundant.

row echelon form (def:ref)

A matrix is said to be in row-echelon form if:

(a)
All entries below each leading entry are 0.
(b)
Each leading entry is in a column to the right of the leading entries in the rows above it.
(c)
All rows of zeros, if there are any, are located below non-zero rows.

row equivalent matrices

row matrix (vector)

row space of a matrix (def:rowspace)

Let $A$ be an $m \times n$ matrix. The row space of $A$, denoted by $\text{row}(A)$, is the subspace of $\mathbb{R}^n$ spanned by the rows of $A$.

S

Scalar

scalar triple product

similar matrices (def:similar)

If $A$ and $B$ are $n \times n$ matrices, we say that $A$ and $B$ are similar if $B = P^{-1}AP$ for some invertible matrix $P$. In this case we write $A \sim B$.

singular matrix (def:nonsingularmatrix)

A square matrix $A$ is said to be nonsingular provided that $\det(A) \neq 0$. Otherwise we say that $A$ is singular.

singular value decomposition (SVD)

singular values (singularvalues)

Let $A$ be an $m \times n$ matrix. The singular values of $A$ are the square roots of the positive eigenvalues of $A^TA$.

skew symmetric matrix (def:symmetricandskewsymmetric)

An $n \times n$ matrix $A$ is said to be symmetric if $A^T = A$. It is said to be skew symmetric if $A^T = -A$.

span of a set of vectors (def:span)

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p$ be vectors in $\mathbb{R}^n$. The set of all linear combinations of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p$ is called the span of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p$. We write $V = \text{span}(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p)$, and we say that the vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p$ span $V$. Any vector in $V$ is said to be in the span of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p$. The set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p\}$ is called a spanning set for $V$.

spanning set

spectral decomposition - another name for eigenvalue decomposition (def:eigdecomposition)

If we are able to diagonalize $A$, say $A = PDP^{-1}$, we say that $PDP^{-1}$ is an eigenvalue decomposition of $A$.

Spectral Theorem

spectrum

The set of distinct eigenvalues of a matrix.

standard basis (def:standardbasis)

The set $\{\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n\}$ is called the standard basis of $\mathbb{R}^n$.

standard matrix of a linear transformation (def:standardmatoflintrans)

The matrix $A$ in Theorem th:matlin is known as the standard matrix of the linear transformation $T$.

Standard Position

Standard Unit Vectors (def:standardunitvecrn)

Let $\vec{e}_i$ denote the vector that has $1$ as its $i$th component and zeros elsewhere. In other words,
$$\vec{e}_i = \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix}$$
where the $1$ is in the $i$th position. We say that $\vec{e}_i$ is a standard unit vector of $\mathbb{R}^n$.

strictly diagonally dominant (def:strict_diag_dom)

Let $A = [a_{ij}]$ be the $n \times n$ matrix which is the coefficient matrix of the linear system $A\vec{x} = \vec{b}$. Let $R_i$ denote the sum of the absolute values of the non-diagonal entries in row $i$. We say that $A$ is strictly diagonally dominant if $|a_{ii}| > R_i$ for all values of $i$ from $1$ to $n$.

square matrix

subspace (def:subspaceabstract)

A nonempty subset $W$ of a vector space $V$ is called a subspace of $V$, provided that $W$ is itself a vector space when given the same addition and scalar multiplication as $V$.

subspace of $\mathbb{R}^n$ (def:subspace)

Suppose that $V$ is a nonempty subset of $\mathbb{R}^n$ that is closed under addition and closed under scalar multiplication. Then $V$ is a subspace of $\mathbb{R}^n$.

subspace test (th:subspacetestabstract)

Let $W$ be a nonempty subset of a vector space $V$. If $W$ is closed under the operations of addition and scalar multiplication of $V$, then $W$ is a subspace of $V$.

Subtraction of vectors

symmetric matrix (def:symmetricandskewsymmetric)

An $n \times n$ matrix $A$ is said to be symmetric if $A^T = A$. It is said to be skew symmetric if $A^T = -A$.

system of linear equations

T

trace of a matrix (def:trace)

The trace of an $n \times n$ matrix $A$, abbreviated $\text{tr}(A)$, is defined to be the sum of the main diagonal elements of $A$. In other words, if $A = [a_{ij}]$, then $\text{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}$. We may also write $\text{tr}(A) = \sum_{i=1}^{n} a_{ii}$.
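For example, $\text{tr}\begin{bmatrix} 1 & 5 \\ 2 & 7 \end{bmatrix} = 1 + 7 = 8$.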

transpose of a matrix (def:matrixtranspose)

Let $A = [a_{ij}]$ be an $m \times n$ matrix. Then the transpose of $A$, denoted by $A^T$, is the $n \times m$ matrix given by $A^T = [a_{ij}]^T = [a_{ji}]$.

triangle inequality

U

Unit Vector

upper triangular matrix

V

Vector

Vector equation of a line (form:vectorlinend)

Let $\vec{d}$ be a direction vector for line $L$ in $\mathbb{R}^n$, and let $P$ be an arbitrary point on $L$ with position vector $\vec{p}$. Then the following vector equation describes $L$:
$$\vec{x} = \vec{p} + t\vec{d}$$

vector space (def:vectorspacegeneral)

Let $V$ be a nonempty set. Suppose that elements of $V$ can be added together and multiplied by scalars. The set $V$, together with operations of addition and scalar multiplication, is called a vector space provided that

  • $V$ is closed under addition
  • $V$ is closed under scalar multiplication

and the following properties hold for $\vec{u}$, $\vec{v}$ and $\vec{w}$ in $V$ and scalars $c$ and $d$:

(a)
Commutative Property of Addition: $\vec{u} + \vec{v} = \vec{v} + \vec{u}$
(b)
Associative Property of Addition: $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$
(c)
Existence of Additive Identity: $\vec{u} + \vec{0} = \vec{u}$
(d)
Existence of Additive Inverse: $\vec{u} + (-\vec{u}) = \vec{0}$
(e)
Distributive Property over Vector Addition: $c(\vec{u} + \vec{v}) = c\vec{u} + c\vec{v}$
(f)
Distributive Property over Scalar Addition: $(c + d)\vec{u} = c\vec{u} + d\vec{u}$
(g)
Associative Property for Scalar Multiplication: $c(d\vec{u}) = (cd)\vec{u}$
(h)
Multiplication by $1$: $1\vec{u} = \vec{u}$

We will refer to elements of $V$ as vectors.

W

X

Y

Z

zero matrix (def:zeromatrix)

The zero matrix is the matrix having every entry equal to zero. The zero matrix is denoted by $O$.

zero transformation (def:zerotransonrn)

The zero transformation, $Z: \mathbb{R}^n \to \mathbb{R}^m$, maps every element of the domain to the zero vector.

In other words, $Z$ is a transformation such that $Z(\vec{x}) = \vec{0}$ for all $\vec{x}$ in $\mathbb{R}^n$.