Matrix algebra uses three different types of operations: matrix addition, scalar multiplication, and matrix multiplication.

Matrix Addition If $A$ and $B$ are both $m \times n$ matrices, the sum $A + B$ is the $m \times n$ matrix whose entries are given by $(A + B)_{ij} = A_{ij} + B_{ij}$. Note that the dimensions of $A + B$ are the same as those of (both) $A$ and $B$. If $A$ and $B$ do not have the same dimensions, then the sum $A + B$ is not defined; in other words, it doesn't exist.
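For instance (an illustrative example, with entries chosen here rather than taken from the text), two $2 \times 2$ matrices are added entrywise:

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}.$$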
Scalar Multiplication If $A$ is an $m \times n$ matrix and $\alpha$ a scalar, the scalar product of $\alpha$ with $A$ is the $m \times n$ matrix $\alpha A$ whose entries are given by $(\alpha A)_{ij} = \alpha A_{ij}$. There are no restrictions on the dimensions of $A$ in order for this operation to be defined; the scalar product $\alpha A$ always exists.
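Again with entries of our own choosing, scaling a matrix multiplies every entry by the scalar:

$$3 \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 9 & 12 \end{bmatrix}.$$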
Matrix Multiplication This is the most complicated of the three operations. If $A$ is $m \times n$ and $B$ is $p \times q$, then in order for the product $AB$ to be defined, we require that $n = p$; this condition is expressed by saying that the internal dimensions agree. In this case the product $AB$ is the $m \times q$ matrix whose entries are given by $(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$. The dimensions of the product $AB$ are $m \times q$. If the internal dimensions of $A$ and $B$ do not agree, then the product $AB$ doesn't exist.
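As a worked instance of the defining formula (with entries of our own choosing), a $2 \times 3$ matrix times a $3 \times 1$ matrix yields a $2 \times 1$ matrix; the internal dimensions are both $3$:

$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \cdot 1 + 2 \cdot 0 + 3 \cdot 2 \\ 4 \cdot 1 + 5 \cdot 0 + 6 \cdot 2 \end{bmatrix} = \begin{bmatrix} 7 \\ 16 \end{bmatrix}.$$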
The two different types of products may both be viewed as extensions of ordinary multiplication of real numbers. In fact, if we identify $1 \times 1$ matrices with scalars, then in that dimension the two product operations agree and correspond to ordinary multiplication. It is important to note that matrix multiplication can be performed whenever the sum of products appearing on the right-hand side of the defining equation makes sense algebraically; in other words, for more than just numerical matrices. This fact will come into play later on.
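Concretely (a one-line illustration of our own): for $1 \times 1$ matrices, $\begin{bmatrix} 2 \end{bmatrix} \begin{bmatrix} 3 \end{bmatrix} = \begin{bmatrix} 6 \end{bmatrix}$, which is also the scalar product $2 \begin{bmatrix} 3 \end{bmatrix}$; both reduce to the ordinary product $2 \cdot 3 = 6$.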
The operations of matrix algebra satisfy properties similar to those of addition and multiplication of real numbers. The following theorem lists those properties for real-valued matrices (that is, matrices whose entries are real numbers). In each case, the expression on the left is defined iff that on the right is also defined.
- (3.2.1) $A + B = B + A$ (commutativity of addition);
- (3.2.2) $(A + B) + C = A + (B + C)$ (associativity of addition);
- (3.2.3) $\alpha(A + B) = \alpha A + \alpha B$ (scalar multiplication distributes over matrix addition);
- (3.2.4) $A(B + C) = AB + AC$ (matrix multiplication left-distributes over matrix addition);
- (3.2.5) $(A + B)C = AC + BC$ (matrix multiplication right-distributes over matrix addition);
- (3.2.6) $\alpha(\beta A) = (\alpha\beta) A$ (associativity of scalar multiplication);
- (3.2.7) $(AB)C = A(BC)$ (associativity of matrix multiplication).
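By way of illustration (the matrices here are our own choices, not from the text), one can check property (3.2.4) numerically: with

$$A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix},$$

both $A(B + C)$ and $AB + AC$ work out to $\begin{bmatrix} 7 & 9 \\ 3 & 4 \end{bmatrix}$. Of course, a numerical check is not a proof; the proofs below proceed entrywise.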
This theorem is proven by showing that, in each case, the matrix on the left has the same $(i,j)$ entry as the one on the right. However, as with any proof, one needs to be clear from the beginning exactly what one is allowed to assume as being true. In this case, we have i) the definition of what it means for two matrices to be equal, ii) the explicit definition of each operation, and iii) the corresponding properties of addition and multiplication for real numbers (which will be taken as axioms for this proof). To see how this works, let's verify the first equality.
- Proof of (3.2.1) For all indices $i, j$ we have $(A + B)_{ij} = A_{ij} + B_{ij} = B_{ij} + A_{ij} = (B + A)_{ij}$, where the first and third equalities hold by the definition of matrix addition, and the second by commutativity of addition for real numbers. Since all entries agree, $A + B = B + A$.
Notice that the proof consists of a sequence of equalities, beginning with the left-hand side of the equation we wish to verify and ending with the right-hand side of that equation. Moreover, each equality in the sequence is justified by either a definition or an axiom. Not all of the equalities are that easy; some may require more steps. To illustrate a more involved proof, we will verify property (3.2.7) (probably the most difficult to prove of the properties listed).
- Proof of (3.2.7) In this proof, $i, j, k, l$ will be used as indices (the reason for using four different indices will become apparent). For all $i, j$ we have

$$((AB)C)_{ij} = \sum_{l} (AB)_{il} C_{lj} = \sum_{l} \Big( \sum_{k} A_{ik} B_{kl} \Big) C_{lj} = \sum_{k} A_{ik} \Big( \sum_{l} B_{kl} C_{lj} \Big) = \sum_{k} A_{ik} (BC)_{kj} = (A(BC))_{ij},$$

where the first, second, fourth, and fifth equalities hold by the definition of matrix multiplication, and the third holds because distributivity allows the finite double sum to be expanded and reordered. Since all entries agree, $(AB)C = A(BC)$.
Before moving on to considering equations, we introduce a few more matrix operations.
The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $A^T$ whose entries are given by $(A^T)_{ij} = A_{ji}$; in other words, the rows of $A^T$ are the columns of $A$. The way this operation relates to the algebraic operations defined above is described by the next theorem: whenever the left-hand sides are defined, $(A + B)^T = A^T + B^T$, $(\alpha A)^T = \alpha A^T$, and $(AB)^T = B^T A^T$ (note the reversal of order in the last identity).

Also, one can concatenate matrices. Specifically, if two matrices $A$ and $B$ have the same number of rows, their horizontal concatenation $[A \mid B]$ is formed by placing the columns of $B$ to the right of those of $A$; if they have the same number of columns, their vertical concatenation is formed by stacking the rows of $B$ below those of $A$. In particular, any matrix is
- the horizontal concatenation of its columns, and
- the vertical concatenation of its rows.
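As a small illustration (entries ours), a $2 \times 2$ matrix written as the horizontal concatenation of its columns:

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \left[\, \begin{bmatrix} 1 \\ 3 \end{bmatrix} \,\middle|\, \begin{bmatrix} 2 \\ 4 \end{bmatrix} \,\right].$$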
In what follows, the horizontal type of concatenation will be used much more often than the vertical one; for that reason, concatenation (direction unspecified) will refer to horizontal concatenation. The following exercise will help the reader better understand how concatenation interacts with the algebraic operations and the transpose. It is not an exhaustive list.
- (a) With respect to sums, if $A_1, A_2, B_1, B_2$ all have the same number of rows, and if the pairs $A_1, B_1$ and $A_2, B_2$ have the same number of columns as well, then $[A_1 \mid A_2] + [B_1 \mid B_2] = [A_1 + B_1 \mid A_2 + B_2]$.
- (b) With respect to products, if $A$ is a matrix where the number of columns of $A$ equals the number of rows of $[B_1 \mid B_2]$, then $A[B_1 \mid B_2] = [AB_1 \mid AB_2]$.
- (c) With respect to transpose, one has $[A_1 \mid A_2]^T = \begin{bmatrix} A_1^T \\ A_2^T \end{bmatrix}$, the vertical concatenation of $A_1^T$ and $A_2^T$.
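To see item (b) in action (with matrices of our own choosing), let

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 2 \\ 0 \end{bmatrix}.$$

Then $AB_1 = \begin{bmatrix} 3 \\ 7 \end{bmatrix}$ and $AB_2 = \begin{bmatrix} 2 \\ 6 \end{bmatrix}$, while $A[B_1 \mid B_2] = \begin{bmatrix} 3 & 2 \\ 7 & 6 \end{bmatrix} = [AB_1 \mid AB_2]$; in other words, multiplying $A$ into a concatenation multiplies it into each block.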
Finally, we discuss the identity matrix and inverses. Recall that a number $a$ is invertible if there is another number $b$ such that $ab = ba = 1$. A similar notion exists for matrices. To explain it, we first need to define the matrix equivalent of the number "1".

The $n \times n$ identity matrix $I_n$ is the square matrix whose diagonal entries are $1$ and whose off-diagonal entries are $0$; that is, $(I_n)_{ij} = 1$ if $i = j$ and $(I_n)_{ij} = 0$ if $i \neq j$. It acts as a multiplicative identity: $I_m A = A = A I_n$ for any $m \times n$ matrix $A$.

In most cases the dimension of $I_n$ will not be indicated, as it will be uniquely determined by the manner in which it is being used. For example, if it appears as a term in a matrix product, then its dimension is assumed to be the one which makes the product well-defined. This rule applies throughout what follows.
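As a quick illustration (with entries of our own choosing), multiplying by the identity leaves a matrix unchanged:

$$I_2 \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.$$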
A matrix $A$ is called invertible if there exists a matrix $B$ such that $AB = I$ and $BA = I$; such a $B$ is called an inverse of $A$. A priori, it seems there is no dimensional restriction on a matrix for it to be invertible. However, the following theorem clarifies the situation: an invertible matrix is necessarily square. The reason why it is true will become clear later on when we discuss the rank of a matrix.

Next, we show that a matrix can have at most one inverse.
- Proof
- Suppose $B_1$, $B_2$ are two possibly distinct inverses of the matrix $A$. Then, using associativity (3.2.7) together with the defining property of the identity matrix, we have $B_1 = B_1 I = B_1 (A B_2) = (B_1 A) B_2 = I B_2 = B_2$. Hence the inverse, when it exists, is unique.
Given this, we will refer to the inverse of an invertible matrix $A$, and write it as $A^{-1}$. An alternative term for invertible is non-singular (so singular is equivalent to non-invertible). Two important questions are:
- under what conditions is a square matrix non-singular? And
- if a matrix is non-singular, how can one find its inverse?
These questions are answered by the following theorem. Let $A$ be a square matrix, and let $R$ denote its reduced row echelon form. Then exactly one of the following two possibilities occurs:

- $R = I$, which happens precisely when $A$ is non-singular;
- the bottom row of $R$ is entirely zero, which happens precisely when $A$ is singular.
Moreover, when $A$ is non-singular, row-reducing the concatenation $[A \mid B]$ yields $[I \mid A^{-1}B]$ for any matrix $B$ with the same number of rows as $A$. Specifically, if $B = I$ then row-reducing $[A \mid I]$ yields $[I \mid A^{-1}]$, which gives a practical method for computing the inverse.
- Proof
- If $A$ is already in reduced row echelon form, then $R = A$. But by the previous theorem, it is invertible iff $A = I$. Thus if $A$ satisfies both properties, $A = I$.
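To illustrate the method described in the theorem (the matrix below is our own choice, not from the text), we compute the inverse of $A = \begin{bmatrix} 1 & 2 \\ 1 & 3 \end{bmatrix}$ by row-reducing $[A \mid I]$:

$$\left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 1 & 3 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & 1 & -1 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{cc|cc} 1 & 0 & 3 & -2 \\ 0 & 1 & -1 & 1 \end{array}\right],$$

where we first subtracted row 1 from row 2, then subtracted twice row 2 from row 1. Hence $A^{-1} = \begin{bmatrix} 3 & -2 \\ -1 & 1 \end{bmatrix}$, as one can verify by checking that $A A^{-1} = I$.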