INDEX
A hyperlinked term will take you to the section where the term is defined; a parenthetical hyperlink will take you to the specific definition or formula within that section.
Every linear system
$$\begin{matrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m \end{matrix}$$
can be written in the augmented matrix form as follows:
$$\left[\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right]$$
The array to the left of the vertical bar is called the coefficient matrix of the linear system and is often given a capital letter name, like $A$. The vertical array to the right of the bar is called a constant vector.
We will sometimes use the notation $[A \mid \mathbf{b}]$ to represent an augmented matrix.
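As a concrete illustration, here is a minimal sketch (assuming NumPy is available; the system shown is made up for the example) of assembling an augmented matrix by appending the constant vector to the coefficient matrix:

```python
import numpy as np

# Coefficient matrix A and constant vector b for a hypothetical 2 x 2 system:
#   x + 2y = 5
#  3x + 4y = 6
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[5.0],
              [6.0]])

# The augmented matrix [A | b] places the constant vector after the columns of A.
augmented = np.hstack([A, b])
print(augmented)
```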
When a matrix is in row-echelon form, we can compute the solution to the system by
starting from the last equation and working backwards. This process is known as back
substitution.
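A minimal sketch of back substitution, assuming NumPy; the function name back_substitution and the example system are illustrative only:

```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for an upper triangular U with nonzero diagonal,
    working from the last equation up to the first."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the terms already known, then divide by the pivot
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
b = np.array([3.0, 7.0, 8.0])
print(back_substitution(U, b))   # agrees with np.linalg.solve(U, b)
```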
When a coefficient matrix is in row echelon form, a basic variable is a variable
corresponding to a column of the matrix with at least one leading entry.
A function that assigns a scalar output, denoted $\det(A)$, to each square matrix $A$; it is nonzero if and only if $A$ is invertible. Geometrically speaking, the determinant of a square matrix is the factor by which area (or volume or hypervolume) is scaled by the corresponding linear transformation.
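For instance, the following sketch (assuming NumPy) compares the area of the image of the unit square under a hypothetical $2 \times 2$ matrix with the value returned by np.linalg.det:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# The unit square's edge vectors e1, e2 map to the columns of A, and the
# parallelogram spanned by the columns has area |a11*a22 - a12*a21|.
c1, c2 = A[:, 0], A[:, 1]
parallelogram_area = abs(c1[0] * c2[1] - c1[1] * c2[0])

print(parallelogram_area)   # 5.0 -- the unit square's area is scaled by |det A|
print(np.linalg.det(A))     # about 5.0 (floating point)
```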
Let $A$ be an $n \times n$ matrix. Then $A$ is said to be diagonalizable if there exists an invertible matrix $P$ such that
$$P^{-1}AP = D$$
where $D$ is a diagonal matrix. In other words, a matrix $A$ is diagonalizable if it is similar to a diagonal matrix, $D$.
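A short sketch, assuming NumPy, of diagonalizing a hypothetical matrix: np.linalg.eig supplies the eigenvector matrix $P$ and the eigenvalues that form $D$, and we check that $P^{-1}AP = D$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# If the eigenvectors are independent (P invertible), A is diagonalizable
# and P^{-1} A P reproduces the diagonal matrix D.
print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True
```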
An eigenvalue $\lambda_1$ of an $n \times n$ matrix $A$ is called a dominant eigenvalue if $\lambda_1$ has multiplicity $1$, and
$$|\lambda_1| > |\lambda_i| \quad \text{for all } i \neq 1.$$
Any corresponding eigenvector is called a dominant eigenvector of $A$.
If $\lambda$ is an eigenvalue of an $n \times n$ matrix $A$, the set of all eigenvectors associated to $\lambda$ along with the zero vector is the eigenspace associated to $\lambda$. The eigenspace is a subspace of $\mathbb{R}^n$.
Let $A$ be an $n \times n$ matrix. We say that a scalar $\lambda$ is an eigenvalue of $A$ if
$$A\mathbf{x} = \lambda\mathbf{x}$$
for some nonzero vector $\mathbf{x}$. We say that $\mathbf{x}$ is an eigenvector of $A$ associated with the eigenvalue $\lambda$.
Let $A$ be an $n \times n$ matrix. We say that a non-zero vector $\mathbf{x}$ is an eigenvector of $A$ if
$$A\mathbf{x} = \lambda\mathbf{x}$$
for some scalar $\lambda$. We say that $\lambda$ is an eigenvalue of $A$ associated with the eigenvector $\mathbf{x}$.
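A quick numerical check of the definition, assuming NumPy; the matrix, the vector, and the eigenvalue below are chosen for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])            # a claimed eigenvector
lam = 3.0                           # the corresponding eigenvalue

# A v should equal lambda * v if (lam, v) really is an eigenpair of A.
print(A @ v)                        # [3. 3.]
print(np.allclose(A @ v, lam * v))  # True
```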
When a linear system is in row-echelon form, the variables corresponding to columns
that do not have any leading coefficients (if there are any) are known as free
variables.
An iterative method for solving linear systems that is a refinement of the Jacobi method: newly computed values of the variables are used immediately, within the same iteration, which often gives quicker convergence.
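One possible sketch of a Gauss-Seidel sweep, assuming NumPy; the function name gauss_seidel and the test system are illustrative only:

```python
import numpy as np

def gauss_seidel(A, b, x0, iterations=25):
    """Solve the i-th equation for x_i, reusing the components already
    updated earlier in the same sweep."""
    x = x0.astype(float)
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])   # strictly diagonally dominant, so the iteration converges
b = np.array([6.0, 8.0, 9.0])
print(gauss_seidel(A, b, np.zeros(3)))   # close to np.linalg.solve(A, b)
```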
A circle in the complex plane which has a diagonal entry of a matrix as its center and
the sum of the absolute values of the other entries in that row (or column) as its
radius.
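A small sketch, assuming NumPy, that lists the Gershgorin circles of a hypothetical matrix (using rows) and the eigenvalues they enclose:

```python
import numpy as np

def gershgorin_disks(A):
    """Return (center, radius) pairs, one per row: the center is the diagonal
    entry and the radius is the sum of the absolute values of the other
    entries in that row.  Every eigenvalue lies in the union of these disks."""
    disks = []
    for i in range(A.shape[0]):
        center = A[i, i]
        radius = np.sum(np.abs(A[i, :])) - np.abs(A[i, i])
        disks.append((center, radius))
    return disks

A = np.array([[5.0, 1.0, 0.5],
              [1.0, -2.0, 0.5],
              [0.0, 1.0, 3.0]])
print(gershgorin_disks(A))     # [(5.0, 1.5), (-2.0, 1.5), (3.0, 1.0)]
print(np.linalg.eigvals(A))    # each eigenvalue falls inside some disk
```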
An iterative process which constructs an orthogonal basis for a subspace. The idea is
to build the orthogonal set one vector at a time, by taking a vector not in the span of
the vectors in the current iteration of the set, and subtracting its orthogonal
projection onto each of those vectors.
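A minimal sketch of this process, assuming NumPy; the function name gram_schmidt and the starting vectors are illustrative only:

```python
import numpy as np

def gram_schmidt(vectors):
    """Build the orthogonal set one vector at a time by subtracting
    projections onto the vectors found so far."""
    orthogonal = []
    for v in vectors:
        w = v.astype(float)
        for u in orthogonal:
            w -= (np.dot(v, u) / np.dot(u, u)) * u   # subtract projection of v onto u
        orthogonal.append(w)
    return orthogonal

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
us = gram_schmidt(vs)
print(np.dot(us[0], us[1]), np.dot(us[0], us[2]), np.dot(us[1], us[2]))  # all ~0
```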
Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be a linear transformation. The image of $T$, denoted by $\operatorname{im}(T)$, is the set
$$\operatorname{im}(T) = \{ T(\mathbf{v}) : \mathbf{v} \in V \}.$$
In other words, the image of $T$ consists of the individual images of all vectors of $V$.
An inner product on a real vector space $V$ is a function that assigns a real number $\langle \mathbf{u}, \mathbf{v} \rangle$ to every pair $\mathbf{u}$, $\mathbf{v}$ of vectors in $V$ in such a way that the following properties are satisfied: symmetry, additivity and scalar homogeneity in each argument, and positive definiteness ($\langle \mathbf{v}, \mathbf{v} \rangle \ge 0$, with equality only when $\mathbf{v} = \mathbf{0}$).
Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be a linear transformation. A transformation $S : W \to V$ that satisfies $S \circ T = \mathrm{id}_V$ and $T \circ S = \mathrm{id}_W$ is called an inverse of $T$. If $T$ has an inverse, $T$ is called invertible.
Let $A$ be an $n \times n$ matrix. An $n \times n$ matrix $B$ is called an inverse of $A$ if
$$AB = BA = I$$
where $I$ is an $n \times n$ identity matrix. If such an inverse matrix exists, we say that $A$ is invertible. If an inverse does not exist, we say that $A$ is not invertible. The inverse of $A$ is denoted by $A^{-1}$.
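A quick numerical check of this definition, assuming NumPy; the matrix below is chosen for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])        # determinant is -1, so A is invertible
B = np.linalg.inv(A)

I = np.eye(2)
# B deserves the name A^{-1} exactly when both products give the identity.
print(np.allclose(A @ B, I) and np.allclose(B @ A, I))   # True
```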
Let $V$ and $W$ be vector spaces. If there exists an invertible linear transformation $T : V \to W$, we say that $V$ and $W$ are isomorphic and write $V \cong W$. The invertible linear transformation $T$ is called an isomorphism.
A technique where we repeat the same procedure (called an iteration) many times
(usually using a computer), and we obtain approximate solutions which we hope
“converge to” the actual solution.
An iterative method for solving a system of equations where one variable is isolated in each equation in order to compute the corresponding coordinate of the next iterate.
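One possible sketch of the Jacobi iteration, assuming NumPy; the function name jacobi and the test system are illustrative only:

```python
import numpy as np

def jacobi(A, b, x0, iterations=50):
    """Every coordinate of the next iterate is computed from the previous
    iterate only (contrast with Gauss-Seidel)."""
    x = x0.astype(float)
    D = np.diag(A)               # diagonal entries a_ii
    R = A - np.diag(D)           # off-diagonal part of A
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(jacobi(A, b, np.zeros(3)))   # close to np.linalg.solve(A, b)
```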
Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be a linear transformation. The kernel of $T$, denoted by $\ker(T)$, is the set
$$\ker(T) = \{ \mathbf{v} \in V : T(\mathbf{v}) = \mathbf{0} \}.$$
In other words, the kernel of $T$ consists of all vectors of $V$ that map to $\mathbf{0}$ in $W$.
The first non-zero entry in a row of a matrix (when read from left to right) is called
the leading entry. When the leading entry is 1, we refer to it as a leading 1.
When a coefficient matrix is in row echelon form, a leading variable is a variable
corresponding to a column of the matrix with at least one leading entry.
Let $A$ be an $m \times n$ matrix whose rows are vectors $\mathbf{r}_1, \ldots, \mathbf{r}_m$. Let $B$ be an $n \times p$ matrix with columns $\mathbf{c}_1, \ldots, \mathbf{c}_p$. Then the matrix product $AB$ is an $m \times p$ matrix with entries given by the dot products
$$(AB)_{ij} = \mathbf{r}_i \cdot \mathbf{c}_j.$$
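A short check of this description, assuming NumPy; the matrices below are chosen for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2 x 3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # 3 x 2

# Entry (i, j) of AB is the dot product of row i of A with column j of B.
product = np.array([[np.dot(A[i, :], B[:, j]) for j in range(B.shape[1])]
                    for i in range(A.shape[0])])
print(product)
print(np.allclose(product, A @ B))   # True
```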
A basis in which the elements appear in a specific fixed order. Establishing an order
is necessary because a coordinate vector with respect to a given basis relies on the
order in which the basis elements appear.
An $n \times n$ matrix $Q$ is called an orthogonal matrix if its columns form an orthonormal set. This will happen if and only if its rows form an orthonormal set. Note also that $Q$ is an orthogonal matrix if and only if it is an invertible matrix such that $Q^{-1} = Q^T$.
Let $\{ \mathbf{v}_1, \ldots, \mathbf{v}_k \}$ be a set of nonzero vectors in $\mathbb{R}^n$. Then this set is called an orthogonal set if $\mathbf{v}_i \cdot \mathbf{v}_j = 0$ for all $i \neq j$. Moreover, if $\| \mathbf{v}_i \| = 1$ for $i = 1, \ldots, k$ (i.e. each vector in the set is a unit vector), we say the set of vectors is an orthonormal set.
The power method is an iterative method for computing the dominant eigenvalue of a matrix. Its variants can compute the smallest eigenvalue or the eigenvalue closest to some target.
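A minimal sketch of the power method, assuming NumPy; the function name power_method, the iteration count, and the test matrix are illustrative only:

```python
import numpy as np

def power_method(A, iterations=100):
    """Repeatedly apply A to a vector and normalize; the vector tends toward a
    dominant eigenvector, and the Rayleigh quotient estimates the dominant
    eigenvalue."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(iterations):
        x = A @ x
        x = x / np.linalg.norm(x)
    eigenvalue = x @ A @ x          # Rayleigh quotient, since ||x|| = 1
    return eigenvalue, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_method(A)
print(lam)                          # about 3, the dominant eigenvalue of A
```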
Let $A$ be an $m \times n$ matrix with independent columns. A QR-factorization of $A$ expresses it as
$$A = QR$$
where $Q$ is $m \times n$ with orthonormal columns and $R$ is an invertible and upper triangular matrix with positive diagonal entries.
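For illustration (assuming NumPy), np.linalg.qr computes a factorization of this kind; note that NumPy does not force the diagonal entries of $R$ to be positive, so signs may differ from the convention above:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])          # 3 x 2 with independent columns

# "Reduced" QR: Q is 3 x 2 with orthonormal columns, R is 2 x 2 upper triangular.
Q, R = np.linalg.qr(A)

print(np.allclose(Q.T @ Q, np.eye(2)))   # columns of Q are orthonormal
print(np.allclose(Q @ R, A))             # A = QR
```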
Let $\{ \mathbf{v}_1, \ldots, \mathbf{v}_k \}$ be a set of vectors in $\mathbb{R}^n$. If we can remove one vector without changing the span of this set, then that vector is redundant. In other words, if
$$\operatorname{span}(\mathbf{v}_1, \ldots, \mathbf{v}_{i-1}, \mathbf{v}_{i+1}, \ldots, \mathbf{v}_k) = \operatorname{span}(\mathbf{v}_1, \ldots, \mathbf{v}_k)$$
we say that $\mathbf{v}_i$ is a redundant element of $\{ \mathbf{v}_1, \ldots, \mathbf{v}_k \}$, or simply redundant.
Let $\mathbf{v}_1, \ldots, \mathbf{v}_k$ be vectors in $\mathbb{R}^n$. The set of all linear combinations of $\mathbf{v}_1, \ldots, \mathbf{v}_k$ is called the span of $\mathbf{v}_1, \ldots, \mathbf{v}_k$. We write
$$\operatorname{span}(\mathbf{v}_1, \ldots, \mathbf{v}_k) = \{ a_1 \mathbf{v}_1 + \cdots + a_k \mathbf{v}_k : a_1, \ldots, a_k \in \mathbb{R} \}$$
and we say that the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ span this set. Any vector in $\operatorname{span}(\mathbf{v}_1, \ldots, \mathbf{v}_k)$ is said to be in the span of $\mathbf{v}_1, \ldots, \mathbf{v}_k$. The set $\{ \mathbf{v}_1, \ldots, \mathbf{v}_k \}$ is called a spanning set for $\operatorname{span}(\mathbf{v}_1, \ldots, \mathbf{v}_k)$.
Spectral decomposition - another name for eigenvalue decomposition.
If we are able to diagonalize $A$, say $A = PDP^{-1}$, we say that $A = PDP^{-1}$ is an eigenvalue decomposition of $A$.
Let $\mathbf{e}_i$ denote a vector that has $1$ as the $i$th component and zeros elsewhere. In other words,
$$\mathbf{e}_i = \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix}$$
where the $1$ is in the $i$th position. We say that $\mathbf{e}_i$ is a standard unit vector of $\mathbb{R}^n$.
Let $A = [a_{ij}]$ be the $n \times n$ matrix which is the coefficient matrix of the linear system $A\mathbf{x} = \mathbf{b}$. Let
$$R_i = \sum_{j \neq i} |a_{ij}|$$
denote the sum of the absolute values of the non-diagonal entries in row $i$. We say that $A$ is strictly diagonally dominant if
$$|a_{ii}| > R_i$$
for all values of $i$ from $1$ to $n$.
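A small sketch, assuming NumPy, that checks this condition row by row; the helper name is_strictly_diagonally_dominant is illustrative only:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| over j != i, for every row i."""
    diag = np.abs(np.diag(A))
    off_diag_sums = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

print(is_strictly_diagonally_dominant(np.array([[4.0, 1.0, 1.0],
                                                [1.0, 5.0, 2.0],
                                                [1.0, 2.0, 6.0]])))  # True
print(is_strictly_diagonally_dominant(np.array([[1.0, 2.0],
                                                [3.0, 4.0]])))       # False
```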
A nonempty subset $W$ of a vector space $V$ is called a subspace of $V$, provided that $W$ is itself a vector space when given the same addition and scalar multiplication as $V$.
Let $V$ be a nonempty set. Suppose that elements of $V$ can be added together and multiplied by scalars. The set $V$, together with operations of addition and scalar multiplication, is called a vector space provided that
$V$ is closed under addition
$V$ is closed under scalar multiplication
and the following properties hold for $\mathbf{u}$, $\mathbf{v}$ and $\mathbf{w}$ in $V$ and scalars $c$ and $d$:
$\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$ (commutativity of addition)
$(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$ (associativity of addition)
There exists a zero vector $\mathbf{0}$ in $V$ such that $\mathbf{u} + \mathbf{0} = \mathbf{u}$
For every $\mathbf{u}$ in $V$ there exists $-\mathbf{u}$ in $V$ such that $\mathbf{u} + (-\mathbf{u}) = \mathbf{0}$
$c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
$(c + d)\mathbf{u} = c\mathbf{u} + d\mathbf{u}$
$c(d\mathbf{u}) = (cd)\mathbf{u}$
$1\mathbf{u} = \mathbf{u}$