INDEX

A hyperlinked term will take you to the section where the term is defined. A parenthetical hyperlink will take you to the specific definition or formula in that section. Use the arrows on the right to display the definition or formula directly in the index.

A

addition of vectors

adjugate of a matrix (the term adjoint is also sometimes used) (th:adjugateinverseformula)

The transpose of the matrix of cofactors of a matrix - it is part of a formula for the inverse of a matrix.

algebraic multiplicity of an eigenvalue

The multiplicity of an eigenvalue as a root of the characteristic equation.

associated homogeneous system (def:asshomsys)

Given any linear system $A\vec{x} = \vec{b}$, the system $A\vec{x} = \vec{0}$ is called the associated homogeneous system.

augmented matrix

Every linear system $A\vec{x} = \vec{b}$ can be written in augmented matrix form as follows: $\left[ A \mid \vec{b} \right]$. The array to the left of the vertical bar is called the coefficient matrix of the linear system and is often given a capital letter name, like $A$. The vertical array to the right of the bar is called a constant vector. We will sometimes use the notation $[A \mid \vec{b}]$ to represent an augmented matrix.

B

back substitution

When a matrix is in row-echelon form, we can compute the solution to the system by starting from the last equation and working backwards. This process is known as back substitution.
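
For instance, for the row-echelon system $x_1 + 2x_2 = 5$ and $3x_2 = 6$, back substitution gives $x_2 = 2$ from the last equation, and then $x_1 = 5 - 2(2) = 1$ from the first.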

basic variable (also called a leading variable)

When a coefficient matrix is in row echelon form, a basic variable is a variable corresponding to a column of the matrix with at least one leading entry.

basis (def:basis)

A set $\{\vec{v}_1, \ldots, \vec{v}_m\}$ of vectors is called a basis of $\mathbb{R}^n$ (or a basis of a subspace $V$ of $\mathbb{R}^n$) provided that

(a)
$\mbox{span}(\vec{v}_1, \ldots, \vec{v}_m) = \mathbb{R}^n$ (or $\mbox{span}(\vec{v}_1, \ldots, \vec{v}_m) = V$)
(b)
$\{\vec{v}_1, \ldots, \vec{v}_m\}$ is linearly independent.

block matrices

Subdividing a matrix into submatrices using imaginary horizontal and vertical lines - used to multiply matrices more efficiently.

box product

C

change-of-basis matrix (def:matlintransgenera)

The matrix $M$ of Theorem th:matlintransgeneral is called the matrix of the linear transformation $T$ with respect to ordered bases $\mathcal{B}$ and $\mathcal{C}$.

characteristic equation (def:chareqcharpoly)

The equation $\det(A - \lambda I) = 0$ is called the characteristic equation of $A$.

characteristic polynomial

The polynomial $\det(A - \lambda I)$ is called the characteristic polynomial of $A$.

Cholesky factorization

closed under addition (def:closedunderaddition)

A set $S$ is said to be closed under addition if for each pair of elements $\vec{x}$ and $\vec{y}$ in $S$, the sum $\vec{x} + \vec{y}$ is also in $S$.

closed under scalar multiplication (def:closedunderscalarmult)

A set $S$ is said to be closed under scalar multiplication if for each element $\vec{x}$ in $S$ and for each scalar $k$, the product $k\vec{x}$ is also in $S$.

codomain of a linear transformation

coefficient matrix

A coefficient matrix is a matrix whose entries are the coefficients of a system of linear equations. For the system $A\vec{x} = \vec{b}$, the coefficient matrix is $A$.

cofactor expansion

A method to compute $\det(A)$ using determinants of minor matrices associated with one row or one column of $A$.
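
For example, expanding along the first row of an $n \times n$ matrix $A$ gives \begin{equation*} \det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A_{1j}) \end{equation*} where $A_{1j}$ is the matrix obtained from $A$ by deleting row $1$ and column $j$.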

column matrix (vector)

A matrix with $m$ rows and only 1 column.

column space of a matrix (def:colspace)

Let $A$ be an $m \times n$ matrix. The column space of $A$, denoted by $\mbox{col}(A)$, is the subspace of $\mathbb{R}^m$ spanned by the columns of $A$.

composition of linear transformations (def:compoflintrans)

Let $U$, $V$, and $W$ be vector spaces, and let $T:U\rightarrow V$ and $S:V\rightarrow W$ be linear transformations. The composition of $S$ and $T$ is the transformation $S\circ T:U\rightarrow W$ given by $(S\circ T)(\vec{u}) = S(T(\vec{u}))$.

The matrix of a composition is the product of the matrices corresponding to the transformations in the composition, in the same order.

consistent system

A system of equations that has at least one solution.

convergence

when the iterates of an iterative method approach a solution

coordinate vector with respect to a basis (in $\mathbb{R}^n$) (abstract vector spaces)

Let $\mathcal{B} = \{\vec{v}_1, \ldots, \vec{v}_n\}$ be an ordered basis of a vector space $V$, and let $\vec{x}$ be a vector in $V$. Then the coordinate vector of $\vec{x}$ with respect to $\mathcal{B}$ is the column vector $[\vec{x}]_{\mathcal{B}} = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}$ such that $\vec{x} = c_1\vec{v}_1 + \cdots + c_n\vec{v}_n$.

Cramer’s Rule

A method of solving systems of equations that uses determinants.
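
For a system $A\vec{x} = \vec{b}$ with $\det(A) \neq 0$, Cramer's Rule gives \begin{equation*} x_i = \frac{\det(A_i)}{\det(A)} \end{equation*} where $A_i$ is the matrix obtained from $A$ by replacing its $i$th column with $\vec{b}$.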

cross product (def:crossproduct)

Let $\vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$ be vectors in $\mathbb{R}^3$. The cross product of $\vec{u}$ and $\vec{v}$, denoted by $\vec{u} \times \vec{v}$, is given by \begin{equation*} \vec{u} \times \vec{v} = \begin{bmatrix} u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1 \end{bmatrix} \end{equation*}
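
For example, applying this formula to the standard unit vectors gives $\vec{e}_1 \times \vec{e}_2 = \vec{e}_3$.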

D

determinant

A function that assigns a scalar output, denoted $\det(A)$, to each square matrix $A$ - it is nonzero if and only if $A$ is invertible. Geometrically speaking, the determinant of a square matrix is the factor by which area (or volume, or hypervolume) is scaled by the corresponding linear transformation.

diagonal matrix

A matrix $A = [a_{ij}]$ where $a_{ij} = 0$ whenever $i \neq j$.

diagonalizable matrix (def:diagonalizable)

Let $A$ be an $n \times n$ matrix. Then $A$ is said to be diagonalizable if there exists an invertible matrix $P$ such that \begin{equation*} P^{-1}AP=D \end{equation*} where $D$ is a diagonal matrix. In other words, a matrix $A$ is diagonalizable if it is similar to a diagonal matrix, $D$.

dimension (def:dimension) (also see def:dimensionabstract)

Let $V$ be a subspace of $\mathbb{R}^n$. The dimension of $V$ is the number, $m$, of elements in any basis of $V$. We write $\dim(V) = m$.

Dimensions of a matrix

An $m \times n$ matrix is a matrix with $m$ rows and $n$ columns.

Direction vector

Distance between points in $\mathbb{R}^n$ (form:distRn)

Let $P = (p_1, \ldots, p_n)$ and $Q = (q_1, \ldots, q_n)$ be points in $\mathbb{R}^n$. The distance between $P$ and $Q$ is given by \begin{equation*} d(P,Q) = \sqrt{(p_1 - q_1)^2 + \cdots + (p_n - q_n)^2} \end{equation*}

Distance between point and line

divergence

when the iterates of an iterative method fail to approach a solution

domain of a linear transformation

dominant eigenvalue (def:dominant ew,ev)

An eigenvalue $\lambda$ of an $n \times n$ matrix $A$ is called a dominant eigenvalue if $\lambda$ has multiplicity $1$, and \begin{equation*} |\lambda | > |\mu | \quad \mbox{ for all eigenvalues } \mu \neq \lambda \end{equation*} Any corresponding eigenvector is called a dominant eigenvector of $A$.

dot product (def:dotproduct)

Let $\vec{u} = \begin{bmatrix} u_1 \\ \vdots \\ u_n \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix}$ be vectors in $\mathbb{R}^n$. The dot product of $\vec{u}$ and $\vec{v}$, denoted by $\vec{u} \cdot \vec{v}$, is given by \begin{equation*} \vec{u} \cdot \vec{v} = u_1v_1 + u_2v_2 + \cdots + u_nv_n \end{equation*}
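
For example, in $\mathbb{R}^3$, $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = (1)(4) + (2)(5) + (3)(6) = 32$.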

E

eigenspace (def:eigspace)

If $\lambda$ is an eigenvalue of an $n \times n$ matrix $A$, the set of all eigenvectors associated to $\lambda$ along with the zero vector is the eigenspace associated to $\lambda$. The eigenspace is a subspace of $\mathbb{R}^n$.

eigenvalue (def:eigen)

Let $A$ be an $n \times n$ matrix. We say that a scalar $\lambda$ is an eigenvalue of $A$ if $A\vec{x} = \lambda\vec{x}$ for some nonzero vector $\vec{x}$. We say that $\vec{x}$ is an eigenvector of $A$ associated with the eigenvalue $\lambda$.

eigenvalue decomposition (def:eigdecomposition)

If we are able to diagonalize $A$, say $A = PDP^{-1}$, we say that $PDP^{-1}$ is an eigenvalue decomposition of $A$.

eigenvector (def:eigen)

Let $A$ be an $n \times n$ matrix. We say that a non-zero vector $\vec{x}$ is an eigenvector of $A$ if $A\vec{x} = \lambda\vec{x}$ for some scalar $\lambda$. We say that $\lambda$ is an eigenvalue of $A$ associated with the eigenvector $\vec{x}$.

elementary matrix (def:elemmatrix)

An elementary matrix is a square matrix formed by applying a single elementary row operation to the identity matrix.

elementary row operations (def:elemrowops)

The following three operations performed on a linear system are called elementary row operations.

(a)
Switching the order of equations (rows) $i$ and $j$.
(b)
Multiplying both sides of equation (row) $i$ by the same non-zero constant, $k$, and replacing equation (row) $i$ with the result.
(c)
Adding $k$ times equation (row) $j$ to equation (row) $i$, and replacing equation (row) $i$ with the result.

equivalence relation

equivalent linear systems (def:equivsystems)

Linear systems are called equivalent if they have the same solution set.

F

free variable

When a linear system is in row-echelon form, the variables corresponding to columns that do not have any leading coefficients (if there are any) are known as free variables.

fundamental subspaces of a matrix (See also Orthogonal Complements and Decompositions.)

$\mbox{null}(A)$ is the orthogonal complement of $\mbox{row}(A)$, and $\mbox{null}(A^T)$ is the orthogonal complement of $\mbox{col}(A)$.

G

Gauss-Jordan elimination (def:GaussJordanElimination)

The process of using the elementary row operations on a matrix to transform it into reduced row-echelon form is called Gauss-Jordan elimination.

Gauss-Seidel method

An iterative method for solving linear systems that is a refinement of the Jacobi method: newly computed values of the variables are used immediately in subsequent computations, which typically gives quicker convergence.
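
One common form of the Gauss-Seidel update for $A\vec{x} = \vec{b}$ is \begin{equation*} x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \Big) \end{equation*} where the superscript indicates the iteration number; note how the most recently computed values $x_j^{(k+1)}$ are used as soon as they are available.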

Gaussian elimination (def:GaussianElimination)

The process of using the elementary row operations on a matrix to transform it into row-echelon form is called Gaussian Elimination.

geometric multiplicity of an eigenvalue (def:geommulteig)

The geometric multiplicity of an eigenvalue $\lambda$ is the dimension of the eigenspace corresponding to $\lambda$.

Gershgorin disk (th:Gershgorin)

A circle in the complex plane which has a diagonal entry of a matrix as its center and the sum of the absolute values of the other entries in that row (or column) as its radius.

Gershgorin’s Theorem (th:Gershgorin)

Gershgorin’s theorem says that the eigenvalues of an $n \times n$ matrix can be found in the region of the complex plane consisting of the union of the Gershgorin disks.

Gram-Schmidt process (th:GS)

An iterative process which constructs an orthogonal basis for a subspace. The idea is to build the orthogonal set one vector at a time, by taking a vector not in the span of the vectors in the current iteration of the set, and subtracting its orthogonal projection onto each of those vectors.
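
As an illustration, starting from linearly independent vectors $\vec{v}_1, \ldots, \vec{v}_m$, the process can be written as \begin{equation*} \vec{f}_1 = \vec{v}_1, \qquad \vec{f}_k = \vec{v}_k - \sum_{i=1}^{k-1} \frac{\vec{v}_k \cdot \vec{f}_i}{\vec{f}_i \cdot \vec{f}_i}\, \vec{f}_i \quad (k = 2, \ldots, m) \end{equation*} so that $\{\vec{f}_1, \ldots, \vec{f}_m\}$ is an orthogonal set with the same span.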

H

homogeneous system (def:homogeneous)

A system of linear equations is called homogeneous if each equation in the system has a constant term of zero, so that the system can be written as a matrix equation $A\vec{x} = \vec{0}$.

hyperplane

I

identity matrix

A square matrix with ones as diagonal entries and zeros for the remaining entries.

identity transformation (def:idtransonrn)

The identity transformation on $\mathbb{R}^n$, denoted by $\mbox{id}_{\mathbb{R}^n}$, is a transformation that maps each element of $\mathbb{R}^n$ to itself.

In other words, $\mbox{id}_{\mathbb{R}^n}:\mathbb{R}^n \rightarrow \mathbb{R}^n$ is a transformation such that $\mbox{id}_{\mathbb{R}^n}(\vec{x}) = \vec{x}$ for all $\vec{x}$ in $\mathbb{R}^n$.

image of a linear transformation (def:imageofT)

Let $V$ and $W$ be vector spaces, and let $T:V\rightarrow W$ be a linear transformation. The image of $T$, denoted by $\mbox{im}(T)$, is the set $\mbox{im}(T) = \{T(\vec{v}) : \vec{v} \mbox{ in } V\}$. In other words, the image of $T$ consists of individual images of all vectors of $V$.

inconsistent system

A system of equations that has no solution.

inner product (def:innerproductspace)

An inner product on a real vector space $V$ is a function that assigns a real number $\langle \vec{u}, \vec{v} \rangle$ to every pair $\vec{u}$, $\vec{v}$ of vectors in $V$ in such a way that the following properties are satisfied.

(a)
$\langle \vec{u}, \vec{v} \rangle$ is a real number for all $\vec{u}$ and $\vec{v}$ in $V$.
(b)
$\langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle$ for all $\vec{u}$ and $\vec{v}$ in $V$.
(c)
$\langle \vec{u} + \vec{v}, \vec{w} \rangle = \langle \vec{u}, \vec{w} \rangle + \langle \vec{v}, \vec{w} \rangle$ for all $\vec{u}$, $\vec{v}$, and $\vec{w}$ in $V$.
(d)
$\langle k\vec{u}, \vec{v} \rangle = k\langle \vec{u}, \vec{v} \rangle$ for all $\vec{u}$ and $\vec{v}$ in $V$ and all $k$ in $\mathbb{R}$.
(e)
$\langle \vec{v}, \vec{v} \rangle \geq 0$ for all $\vec{v}$ in $V$, with $\langle \vec{v}, \vec{v} \rangle = 0$ only when $\vec{v} = \vec{0}$.

inner product space

inverse of a linear transformation (def:inverseoflintrans)

Let $V$ and $W$ be vector spaces, and let $T:V\rightarrow W$ be a linear transformation. A transformation $S:W\rightarrow V$ that satisfies $S\circ T = \mbox{id}_V$ and $T\circ S = \mbox{id}_W$ is called an inverse of $T$. If $T$ has an inverse, $T$ is called invertible.

inverse of a square matrix (def:matinverse)

Let $A$ be an $n \times n$ matrix. An $n \times n$ matrix $B$ is called an inverse of $A$ if $AB = BA = I$, where $I$ is an $n \times n$ identity matrix. If such an inverse matrix exists, we say that $A$ is invertible. If an inverse does not exist, we say that $A$ is not invertible. The inverse of $A$ is denoted by $A^{-1}$.

isomorphism (def:isomorphism)

Let $V$ and $W$ be vector spaces. If there exists an invertible linear transformation $T:V\rightarrow W$, we say that $V$ and $W$ are isomorphic and write $V \cong W$. The invertible linear transformation $T$ is called an isomorphism.

iterative method

A technique where we repeat the same procedure (called an iteration) many times (usually using a computer), and we obtain approximate solutions which we hope “converge to” the actual solution.

J

Jacobi’s method

An iterative method for solving a system of equations where one variable is isolated in each equation in order to compute the corresponding coordinate of the next iterate.
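
For example, for $A\vec{x} = \vec{b}$ the Jacobi update can be written as \begin{equation*} x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \Big) \end{equation*} where every coordinate of the new iterate is computed from the previous iterate only.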

K

kernel of a linear transformation (def:kernel)

Let $V$ and $W$ be vector spaces, and let $T:V\rightarrow W$ be a linear transformation. The kernel of $T$, denoted by $\mbox{ker}(T)$, is the set $\mbox{ker}(T) = \{\vec{v} \mbox{ in } V : T(\vec{v}) = \vec{0}\}$. In other words, the kernel of $T$ consists of all vectors of $V$ that map to $\vec{0}$ in $W$.

L

Laplace Expansion Theorem (th:laplace1)

The determinant of a matrix can be computed using cofactor expansion along ANY row or ANY column.

leading entry (leading 1) (def:leadentry)

The first non-zero entry in a row of a matrix (when read from left to right) is called the leading entry. When the leading entry is 1, we refer to it as a leading 1.

leading variable (also called a basic variable)

When a coefficient matrix is in row echelon form, a leading variable is a variable corresponding to a column of the matrix with at least one leading entry.

linear combination of vectors (def:lincomb)

A vector $\vec{v}$ is said to be a linear combination of vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ if $\vec{v} = c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_k\vec{v}_k$ for some scalars $c_1, c_2, \ldots, c_k$.

linear equation (def:lineq)

A linear equation in variables $x_1, x_2, \ldots, x_n$ is an equation that can be written in the form $a_1x_1 + a_2x_2 + \cdots + a_nx_n = b$ where $a_1, a_2, \ldots, a_n$ and $b$ are constants.

linear transformation (def:lin) (also see Linear Transformations of Abstract Vector Spaces)

A transformation $T:\mathbb{R}^n \rightarrow \mathbb{R}^m$ is called a linear transformation if the following are true for all vectors $\vec{u}$ and $\vec{v}$ in $\mathbb{R}^n$, and scalars $k$. \begin{equation} T(k\vec{u})= kT(\vec{u}) \end{equation} \begin{equation} T(\vec{u}+\vec{v})= T(\vec{u})+T(\vec{v}) \end{equation}

linearly dependent vectors (def:linearindependence)

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ be vectors of $\mathbb{R}^n$. We say that the set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly independent if the only solution to \begin{equation} c_1\vec{v}_1+c_2\vec{v}_2+\ldots +c_k\vec{v}_k=\vec{0} \end{equation} is the trivial solution $c_1 = c_2 = \cdots = c_k = 0$.

If, in addition to the trivial solution, a non-trivial solution (not all $c_i$ are zero) exists, then we say that the set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly dependent.

linearly independent vectors (def:linearindependence)

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ be vectors of $\mathbb{R}^n$. We say that the set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly independent if the only solution to \begin{equation} c_1\vec{v}_1+c_2\vec{v}_2+\ldots +c_k\vec{v}_k=\vec{0} \end{equation} is the trivial solution $c_1 = c_2 = \cdots = c_k = 0$.

If, in addition to the trivial solution, a non-trivial solution (not all $c_i$ are zero) exists, then we say that the set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly dependent.

lower triangular matrix

LU factorization

A factorization $A = LU$ where $L$ is lower triangular and $U$ is upper triangular with ones on the diagonal (called unit upper triangular). It is useful for solving $A\vec{x} = \vec{b}$.
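
For example, once $A = LU$ is known, the system $A\vec{x} = \vec{b}$ can be solved in two triangular steps: solve $L\vec{y} = \vec{b}$ by forward substitution, then solve $U\vec{x} = \vec{y}$ by back substitution.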

M

Magnitude of a vector (def:normrn)

Let $\vec{v} = \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix}$ be a vector in $\mathbb{R}^n$, then the length, or the magnitude, of $\vec{v}$ is given by \begin{equation*} \lVert \vec{v} \rVert = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2} \end{equation*}

main diagonal

matrix

A rectangular array of numbers. It has $m$ rows and $n$ columns for some positive integers $m$ and $n$.

matrix addition (def:additionofmatrices)

Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two $m \times n$ matrices. Then the sum of matrices $A$ and $B$, denoted by $A + B$, is an $m \times n$ matrix given by \begin{equation*} A + B = [a_{ij} + b_{ij}] \end{equation*}

matrix equality (def:equalityofmatrices)

Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two $m \times n$ matrices. Then $A = B$ means that $a_{ij} = b_{ij}$ for all $i$ and $j$.

matrix factorization (see LU factorization, eigenvalue decomposition (def:eigdecomposition), QR factorization (def:QR-factorization), and singular value decomposition (SVD))

Representing a matrix as a product of two or more matrices.

matrix multiplication (by a matrix) (def:matmatproduct)

Let $A$ be an $m \times n$ matrix whose rows are the vectors $\vec{r}_1, \vec{r}_2, \ldots, \vec{r}_m$. Let $B$ be an $n \times p$ matrix with columns $\vec{b}_1, \vec{b}_2, \ldots, \vec{b}_p$. Then the matrix product $AB$ is an $m \times p$ matrix with entries given by the dot products \begin{equation*} (AB)_{ij} = \vec{r}_i \cdot \vec{b}_j \end{equation*}

matrix multiplication (by a scalar) (def:scalarmultofmatrices)

If $A = [a_{ij}]$ and $k$ is a scalar, then $kA = [ka_{ij}]$.

matrix multiplication (by a vector) (def:matrixvectormult)

Let $A$ be an $m \times n$ matrix with columns $\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n$, and let $\vec{x}$ be an $n \times 1$ vector. The product $A\vec{x}$ is the $m \times 1$ vector given by \begin{equation*} A\vec{x} = x_1\vec{a}_1 + x_2\vec{a}_2 + \cdots + x_n\vec{a}_n \end{equation*} or, equivalently, by taking the dot product of each row of $A$ with $\vec{x}$.

matrix of a linear transformation with respect to the given bases (def:matlintransgenera)

The matrix $M$ of Theorem th:matlintransgeneral is called the matrix of the linear transformation $T$ with respect to ordered bases $\mathcal{B}$ and $\mathcal{C}$.

matrix powers (exp:motivate_diagonalization)

If $A$ is a square matrix then we can define $A^n$ to be the result of multiplying $A$ by itself $n$ times.
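
For example, if $A$ is diagonalizable, say $A = PDP^{-1}$, then \begin{equation*} A^k = PD^kP^{-1} \end{equation*} and $D^k$ is obtained simply by raising each diagonal entry of $D$ to the $k$th power.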

minor of a square matrix

N

negative of a matrix (th:propertiesofaddition)

The additive inverse of a matrix, formed by multiplying the matrix by the scalar $-1$.

norm (def:030438)

nonsingular matrix (def:nonsingularmatrix)

A square matrix $A$ is said to be nonsingular provided that $\mbox{rref}(A) = I$, the identity matrix. Otherwise we say that $A$ is singular.

normal vector

null space of a matrix (def:nullspace)

Let $A$ be an $m \times n$ matrix. The null space of $A$, denoted by $\mbox{null}(A)$, is the set of all vectors $\vec{x}$ in $\mathbb{R}^n$ such that $A\vec{x} = \vec{0}$. It is a subspace of $\mathbb{R}^n$.

nullity of a linear transformation (def:nullityT)

The nullity of a linear transformation $T$, denoted $\mbox{nullity}(T)$, is the dimension of the kernel of $T$.

nullity of a matrix (def:matrixnullity)

Let $A$ be a matrix. The dimension of the null space of $A$ is called the nullity of $A$.

O

one-to-one (def:onetoone)

A linear transformation $T:V\rightarrow W$ is one-to-one if $T(\vec{v}_1) = T(\vec{v}_2)$ implies that $\vec{v}_1 = \vec{v}_2$.

onto (def:onto)

A linear transformation $T:V\rightarrow W$ is onto if for every element $\vec{w}$ of $W$, there exists an element $\vec{v}$ of $V$ such that $T(\vec{v}) = \vec{w}$.

ordered basis

A basis in which the elements appear in a specific fixed order. Establishing an order is necessary because a coordinate vector with respect to a given basis relies on the order in which the basis elements appear.

orthogonal basis

A set of orthogonal vectors that spans a subspace. (Any orthogonal set of vectors must be linearly independent by Theorem orthbasis.)

orthogonal complement of a subspace (def:023776)

If $W$ is a subspace of $\mathbb{R}^n$, we define the orthogonal complement $W^\perp$ as the set of all vectors orthogonal to every vector in $W$, i.e., \begin{equation*} W^\perp = \{\vec{x} \in \RR ^n \mid \vec{x} \dotp \vec{y} = 0 \mbox{ for all } \vec{y} \in W\} \end{equation*}

Orthogonal Decomposition Theorem (th:OrthoDecomp)

Let $W$ be a subspace of $\mathbb{R}^n$ and let $\vec{x}$ be a vector in $\mathbb{R}^n$. Then there exist unique vectors $\vec{w}$ in $W$ and $\vec{w}^\perp$ in $W^\perp$ such that $\vec{x} = \vec{w} + \vec{w}^\perp$.

orthogonal matrix (def:orthogonal matrices)

An $n \times n$ matrix $A$ is called an orthogonal matrix if its columns form an orthonormal set. This will happen if and only if its rows form an orthonormal set. Note also that $A$ is an orthogonal matrix if and only if it is an invertible matrix such that $A^{-1} = A^T$.

Orthogonal projection onto a subspace of $\mathbb{R}^n$ (def:projOntoSubspace)

Let $W$ be a subspace of $\mathbb{R}^n$ with orthogonal basis $\{\vec{f}_1, \vec{f}_2, \ldots, \vec{f}_m\}$. If $\vec{x}$ is in $\mathbb{R}^n$, the vector \begin{equation} \vec{w}=\mbox{proj}_W(\vec{x}) = \mbox{proj}_{\vec{f}_1}\vec{x} + \mbox{proj}_{\vec{f}_2}\vec{x} + \dots + \mbox{proj}_{\vec{f}_m}\vec{x} \end{equation} is called the orthogonal projection of $\vec{x}$ onto $W$.

Orthogonal projection onto a vector (def:projection)

Let $\vec{u}$ be a vector, and let $\vec{v}$ be a non-zero vector. The projection of $\vec{u}$ onto $\vec{v}$ is given by \begin{equation*} \mbox{proj}_{\vec{v}}\vec{u} = \frac{\vec{u} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}\, \vec{v} \end{equation*}
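
For example, projecting $\vec{u} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$ onto $\vec{v} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ gives $\mbox{proj}_{\vec{v}}\vec{u} = \frac{3}{1}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \end{bmatrix}$.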

orthogonal set of vectors (orthset)

Let $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ be a set of nonzero vectors in $\mathbb{R}^n$. Then this set is called an orthogonal set if $\vec{v}_i \cdot \vec{v}_j = 0$ for all $i \neq j$. Moreover, if $\lVert \vec{v}_i \rVert = 1$ for $i = 1, \ldots, k$ (i.e. each vector in the set is a unit vector), we say the set of vectors is an orthonormal set.

Orthogonal vectors (def:orthovectors)

Let $\vec{u}$ and $\vec{v}$ be vectors in $\mathbb{R}^n$. We say $\vec{u}$ and $\vec{v}$ are orthogonal if $\vec{u} \cdot \vec{v} = 0$.

orthogonally diagonalizable matrix (def:orthDiag)

An $n \times n$ matrix $A$ is said to be orthogonally diagonalizable if an orthogonal matrix $P$ can be found such that $P^{-1}AP = P^TAP$ is diagonal.

orthonormal basis

A set of orthonormal vectors that spans a subspace. (Any orthogonal set of vectors must be linearly independent by Theorem orthbasis.)

orthonormal set of vectors (orthset)

Let $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ be a set of nonzero vectors in $\mathbb{R}^n$. Then this set is called an orthogonal set if $\vec{v}_i \cdot \vec{v}_j = 0$ for all $i \neq j$. Moreover, if $\lVert \vec{v}_i \rVert = 1$ for $i = 1, \ldots, k$ (i.e. each vector in the set is a unit vector), we say the set of vectors is an orthonormal set.

P

parametric equation of a line (form:paramlinend)

Let $\vec{d} = \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix}$ be a direction vector for line $L$ in $\mathbb{R}^n$, and let $P_0 = (p_1, \ldots, p_n)$ be an arbitrary point on $L$. Then the following parametric equations describe $L$: \begin{equation*} x_1 = p_1 + td_1, \quad x_2 = p_2 + td_2, \quad \ldots, \quad x_n = p_n + td_n, \qquad t \in \mathbb{R} \end{equation*}

particular solution

partitioned matrices (block multiplication)

Subdividing a matrix into submatrices using imaginary horizontal and vertical lines - used to multiply matrices more efficiently.

permutation matrix

A matrix formed by permuting the rows of the identity matrix.

pivot

In Gaussian elimination, an entry chosen to become a leading coefficient used to get zeros in the remaining rows.

positive definite matrix (def:024811)

A square matrix is called positive definite if it is symmetric and all its eigenvalues are positive. We write $A > 0$ when $A$ is positive definite, that is, when its eigenvalues are real and positive.

power method (and its variants)

The power method is an iterative method for computing the dominant eigenvalue of a matrix. Its variants can compute the smallest eigenvalue or the eigenvalue closest to some target.
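
In its basic form, the iteration can be written as \begin{equation*} \vec{x}_{k+1} = \frac{A\vec{x}_k}{\lVert A\vec{x}_k \rVert} \end{equation*} which, for a suitable starting vector, lines up with a dominant eigenvector as $k$ grows; the dominant eigenvalue can then be estimated from the Rayleigh quotient $\frac{\vec{x}_k \cdot A\vec{x}_k}{\vec{x}_k \cdot \vec{x}_k}$.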

properties of determinants

(a)
The determinant of a triangular matrix is the product of its diagonal entries.
(b)
The determinant of a matrix is equal to the determinant of its transpose.
(c)
The determinant of the inverse of a matrix is the reciprocal of the determinant of the matrix.
(d)
A matrix with a zero row has determinant zero.
(e)
Interchanging two rows of a matrix changes the sign of its determinant.
(f)
A matrix with two identical rows has determinant zero.
(g)
Multiplying a row of a matrix by a constant $k$ changes the determinant by a factor of $k$.
(h)
Multiplying an $n \times n$ matrix by a constant $k$ changes the determinant by a factor of $k^n$.
(i)
Adding a multiple of one row of a matrix to another row does not change the determinant.
(j)
A matrix is singular if and only if its determinant is zero.
(k)
The determinant of a product is equal to the product of the determinants.

properties of orthogonal matrices

If $A$ is an orthogonal matrix, then...

(a)
$A^{-1} = A^T$ is orthogonal,
(b)
$\det A = \pm 1$,
(c)
if $\lambda$ is an eigenvalue of $A$, then $|\lambda| = 1$,
(d)
the product of $A$ with any other orthogonal matrix will be an orthogonal matrix (i.e. orthogonal matrices are closed under matrix multiplication),

and

(e)
multiplication by $A$ is a length-preserving and angle-preserving linear transformation.

properties of similar matrices

Similar matrices must have the same...

(a)
determinant,
(b)
rank,
(c)
trace,
(d)
characteristic polynomial,

and

(e)
eigenvalues.

Q

QR factorization (def:QR-factorization)

Let $A$ be an $m \times n$ matrix with independent columns. A QR-factorization of $A$ expresses it as $A = QR$ where $Q$ is $m \times n$ with orthonormal columns and $R$ is an invertible and upper triangular matrix with positive diagonal entries.

R

rank of a linear transformation (def:rankofT)

The rank of a linear transformation $T$, denoted $\mbox{rank}(T)$, is the dimension of the image of $T$.

rank of a matrix (def:rankofamatrix) (th:dimofrowA)

The rank of a matrix $A$, denoted by $\mbox{rank}(A)$, is the number of nonzero rows that remain after we reduce $A$ to row-echelon form by elementary row operations.

For any matrix $A$, $\mbox{rank}(A) = \dim(\mbox{row}(A)) = \dim(\mbox{col}(A))$.

Rank-Nullity Theorem for linear transformations (th:ranknullityforT)

Let $T:V\rightarrow W$ be a linear transformation. Suppose $\dim(V) = n$, then \begin{equation*} \mbox{rank}(T) + \mbox{nullity}(T) = n \end{equation*}

Rank-Nullity Theorem for matrices (th:matrixranknullity)

Let $A$ be an $m \times n$ matrix. Then \begin{equation*} \mbox{rank}(A) + \mbox{nullity}(A) = n \end{equation*}

Rayleigh quotients

reduced row echelon form (def:rref)

A matrix that is already in row-echelon form is said to be in reduced row-echelon form if:

(a)
Each leading entry is $1$.
(b)
All entries above and below each leading $1$ are $0$.

redundant vectors (def:redundant)

Let $\{\vec{v}_1, \ldots, \vec{v}_k\}$ be a set of vectors in $\mathbb{R}^n$. If we can remove one vector without changing the span of this set, then that vector is redundant. In other words, if $\mbox{span}(\vec{v}_1, \ldots, \vec{v}_{i-1}, \vec{v}_{i+1}, \ldots, \vec{v}_k) = \mbox{span}(\vec{v}_1, \ldots, \vec{v}_k)$, we say that $\vec{v}_i$ is a redundant element of $\{\vec{v}_1, \ldots, \vec{v}_k\}$, or simply redundant.

row echelon form (def:ref)

A matrix is said to be in row-echelon form if:

(a)
All entries below each leading entry are 0.
(b)
Each leading entry is in a column to the right of the leading entries in the rows above it.
(c)
All rows of zeros, if there are any, are located below non-zero rows.

row equivalent matrices

Two matrices and are said to be row equivalent if there is a sequence of elementary row operations that converts to .

row matrix (vector)

A matrix with only 1 row and $n$ columns.

row space of a matrix (def:rowspace)

Let $A$ be an $m \times n$ matrix. The row space of $A$, denoted by $\mbox{row}(A)$, is the subspace of $\mathbb{R}^n$ spanned by the rows of $A$.

S

Scalar

scalar triple product

scalar multiple of a matrix (def:scalarmultofmatrices)

If $A = [a_{ij}]$ and $k$ is a scalar, then $kA = [ka_{ij}]$.

similar matrices (def:similar)

If $A$ and $B$ are $n \times n$ matrices, we say that $A$ and $B$ are similar, if $B = P^{-1}AP$ for some invertible matrix $P$. In this case we write $A \sim B$.

singular matrix (def:nonsingularmatrix)

A square matrix $A$ is said to be singular provided that $\mbox{rref}(A)$ is NOT the identity matrix. If instead $\mbox{rref}(A) = I$, we say that $A$ is nonsingular.

singular value decomposition (SVD)

singular values (singularvalues)

Let $A$ be an $m \times n$ matrix. The singular values of $A$ are the square roots of the positive eigenvalues of $A^TA$.

skew symmetric matrix (def:symmetricandskewsymmetric)

An $n \times n$ matrix $A$ is said to be symmetric if $A^T = A$. It is said to be skew symmetric if $A^T = -A$.

span of a set of vectors (def:span)

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ be vectors in $\mathbb{R}^n$. The set of all linear combinations of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ is called the span of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$. We write $V = \mbox{span}(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k)$, and we say that the vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ span $V$. Any vector in $V$ is said to be in the span of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$. The set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is called a spanning set for $V$.

spanning set

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ be vectors in $\mathbb{R}^n$. The set of all linear combinations of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ is called the span of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$. We write $V = \mbox{span}(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k)$, and we say that the vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ span $V$. Any vector in $V$ is said to be in the span of $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$. The set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is called a spanning set for $V$.

spectral decomposition - another name for eigenvalue decomposition (def:eigdecomposition)

If we are able to diagonalize $A$, say $A = PDP^{-1}$, we say that $PDP^{-1}$ is an eigenvalue decomposition of $A$.

Spectral Theorem

If $A$ is a real $n \times n$ matrix, then $A$ is symmetric if and only if $A$ is orthogonally diagonalizable.

spectrum

The set of distinct eigenvalues of a matrix.

square matrix

A matrix with the same number of rows and columns.

standard basis (def:standardbasis)

The set $\{\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n\}$ is called the standard basis of $\mathbb{R}^n$.

standard matrix of a linear transformation (def:standardmatoflintrans)

Let $T:\mathbb{R}^n \rightarrow \mathbb{R}^m$ be a linear transformation. Then the matrix \begin{equation*} \label{matlintrans} A=\begin{bmatrix} | & |& &|\\ T(\vec{e}_1) & T(\vec{e}_2)&\dots &T(\vec{e}_n)\\ |&| & &| \end{bmatrix} \end{equation*} is known as the standard matrix of the linear transformation $T$.

Standard Position

Standard Unit Vectors (def:standardunitvecrn)

Let $\vec{e}_i$ denote a vector that has $1$ as the $i$th component and zeros elsewhere. In other words, $\vec{e}_i = \begin{bmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}^T$, where the $1$ is in the $i$th position. We say that $\vec{e}_i$ is a standard unit vector of $\mathbb{R}^n$.

strictly diagonally dominant (def:strict_diag_dom)

Let $A = [a_{ij}]$ be the $n \times n$ matrix which is the coefficient matrix of the linear system $A\vec{x} = \vec{b}$. Let $R_i$ denote the sum of the absolute values of the non-diagonal entries in row $i$. We say that $A$ is strictly diagonally dominant if $|a_{ii}| > R_i$ for all values of $i$ from $1$ to $n$.

subspace (def:subspaceabstract)

A nonempty subset $W$ of a vector space $V$ is called a subspace of $V$, provided that $W$ is itself a vector space when given the same addition and scalar multiplication as $V$.

subspace of $\mathbb{R}^n$ (def:subspace)

Suppose that $V$ is a nonempty subset of $\mathbb{R}^n$ that is closed under addition and closed under scalar multiplication. Then $V$ is a subspace of $\mathbb{R}^n$.

subspace test (th:subspacetestabstract)

Let $W$ be a nonempty subset of a vector space $V$. If $W$ is closed under the operations of addition and scalar multiplication of $V$, then $W$ is a subspace of $V$.

Subtraction of vectors

symmetric matrix (def:symmetricandskewsymmetric)

An $n \times n$ matrix $A$ is said to be symmetric if $A^T = A$. It is said to be skew symmetric if $A^T = -A$.

system of linear equations

A finite set of linear (degree 1) equations each with the same variables.

T

trace of a matrix (def:trace)

The trace of an $n \times n$ matrix $A$, abbreviated $\mbox{tr}(A)$, is defined to be the sum of the main diagonal elements of $A$. In other words, if $A = [a_{ij}]$, then \begin{equation*} \mbox{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} \end{equation*} We may also write $\mbox{tr}(A) = \sum_{i=1}^{n} a_{ii}$.

transpose of a matrix (def:matrixtranspose)

Let $A$ be an $m \times n$ matrix. Then the transpose of $A$, denoted by $A^T$, is the $n \times m$ matrix given by switching the rows and columns: \begin{equation*} A^{T} = \begin{bmatrix} a _{ij}\end{bmatrix}^{T}= \begin{bmatrix} a_{ji} \end{bmatrix} \end{equation*}

triangle inequality

U

Unit Vector

upper triangular matrix

V

Vector

Vector equation of a line (form:vectorlinend)

Let $\vec{d}$ be a direction vector for line $L$ in $\mathbb{R}^n$, and let $P_0$ be an arbitrary point on $L$ with position vector $\vec{p}_0$. Then the following vector equation describes $L$: \begin{equation*} \vec{x} = \vec{p}_0 + t\vec{d}, \qquad t \in \mathbb{R} \end{equation*}

vector space (def:vectorspacegeneral)

Let $V$ be a nonempty set. Suppose that elements of $V$ can be added together and multiplied by scalars. The set $V$, together with operations of addition and scalar multiplication, is called a vector space provided that

  • $V$ is closed under addition
  • $V$ is closed under scalar multiplication

and the following properties hold for $\vec{u}$, $\vec{v}$ and $\vec{w}$ in $V$ and scalars $k$ and $p$:

(a)
Commutative Property of Addition: $\vec{u} + \vec{v} = \vec{v} + \vec{u}$
(b)
Associative Property of Addition: $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$
(c)
Existence of Additive Identity: there exists $\vec{0}$ in $V$ such that $\vec{u} + \vec{0} = \vec{u}$
(d)
Existence of Additive Inverse: for each $\vec{u}$ in $V$ there exists $-\vec{u}$ in $V$ such that $\vec{u} + (-\vec{u}) = \vec{0}$
(e)
Distributive Property over Vector Addition: $k(\vec{u} + \vec{v}) = k\vec{u} + k\vec{v}$
(f)
Distributive Property over Scalar Addition: $(k + p)\vec{u} = k\vec{u} + p\vec{u}$
(g)
Associative Property for Scalar Multiplication: $k(p\vec{u}) = (kp)\vec{u}$
(h)
Multiplication by $1$: $1\vec{u} = \vec{u}$

We will refer to elements of $V$ as vectors.

W

X

Y

Z

zero matrix (def:zeromatrix)

The zero matrix is the matrix having every entry equal to zero. The zero matrix is denoted by $O$.

zero transformation (def:zerotransonrn)

The zero transformation, $Z:V\rightarrow W$, maps every element of the domain to the zero vector.

In other words, $Z$ is a transformation such that $Z(\vec{v}) = \vec{0}$ for all $\vec{v}$ in $V$.
