In Section ?? we discussed one method for solving systems of first order constant coefficient linear differential equations. We saw that such systems can be solved by putting the coefficient matrix in Jordan normal form and then computing the exponential of the Jordan normal form matrix.

In this section we describe a second and a third approach to solving linear systems; the second method is based on finding the generalized eigenvectors that put $A$ into Jordan normal form and then computing solutions directly from this information, while the third method is based on deriving a formula for the exponential $e^{tA}$ in the original coordinates. The advantage of the second method is that it is not necessary to perform the similarity transformation that puts the matrix in Jordan normal form, and the advantage of the third method is that it is not necessary even to compute the eigenvectors of $A$. Be forewarned, however, that all of these methods require substantial calculations.

Let $A$ be an $n \times n$ matrix. All methods for solving the system

$$\dot{X} = AX$$

begin by finding the eigenvalues of $A$. This can be done either analytically (sometimes) or numerically (using MATLAB). Then the methods diverge. In the first and second methods, we need to find the eigenvectors and, if need be, the generalized eigenvectors of $A$; in the third method, we need to perform tedious calculations involving partial fractions and matrix multiplications. With any of these methods, the calculations simplify enormously when the eigenvalues are simple. Indeed, this simplification also occurs in the Jordan normal form method of Section ??.
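The eigenvalue step can be carried out numerically; the text uses MATLAB's `eig`, and the same computation can be sketched in Python with NumPy (the matrix below is a made-up example, not one from the text):

```python
import numpy as np

# Hypothetical 3x3 example matrix (not from the text).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, -1.0]])

# numpy.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

# A is upper triangular, so its eigenvalues are the diagonal entries 2, 3, -1.
# Each eigenpair satisfies A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Whether the eigenvalues come out analytically or numerically, the subsequent steps of each method are the same.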

A Method Based on Eigenvectors

In this method we find a basis for solutions of (??). First, we review the simpler case when there is a basis of eigenvectors and then we consider (part of) the case when there is a deficiency of eigenvectors.

A Complete Set of Eigenvectors

The simplest case in solving (??) occurs when $A$ has a basis of eigenvectors $v_1,\ldots,v_n$ corresponding to the (not necessarily distinct) eigenvalues $\lambda_1,\ldots,\lambda_n$. We showed how to find a basis for the solutions of (??) in Section ?? but review the results here. Each eigenvector $v_j$ generates the solution $X_j(t) = e^{\lambda_j t} v_j$ to (??), and the general solution is:

$$X(t) = \alpha_1 e^{\lambda_1 t} v_1 + \cdots + \alpha_n e^{\lambda_n t} v_n,$$

where the scalar $\alpha_j$ is real when $\lambda_j$ is real and complex when $\lambda_j$ is not real.

The initial value problem $\dot{X} = AX$, $X(0) = X_0$, is then solved by finding scalars $\alpha_1,\ldots,\alpha_n$ so that

$$X_0 = \alpha_1 v_1 + \cdots + \alpha_n v_n.$$
The solution of (??) is a well understood linear algebra problem.
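Solving for the scalars amounts to one call to a linear solver: the matrix of the system has the eigenvectors as its columns. A minimal sketch in Python/NumPy (rather than the text's MATLAB), with a made-up matrix and initial condition:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])       # hypothetical matrix with eigenvalues 1 and 3
X0 = np.array([1.0, 1.0])        # initial condition X(0)

eigvals, V = np.linalg.eig(A)    # columns of V are the eigenvectors v_1, v_2
alpha = np.linalg.solve(V, X0)   # scalars with alpha_1 v_1 + alpha_2 v_2 = X0

def X(t):
    # general solution evaluated with the computed scalars:
    # X(t) = sum_j alpha_j e^{lambda_j t} v_j
    return (V * np.exp(eigvals * t)) @ alpha

assert np.allclose(X(0.0), X0)
# check the ODE X' = AX at t = 0.5 by a centered difference
h = 1e-6
assert np.allclose((X(0.5 + h) - X(0.5 - h)) / (2 * h), A @ X(0.5), atol=1e-4)
```

The solve succeeds precisely because the eigenvectors form a basis, which is the standing assumption of this subsection.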

Complex Eigenvalues

The only complication occurs when some of the eigenvalues are complex. If $\lambda$ is a complex eigenvalue of the real matrix $A$, so is $\bar{\lambda}$. If $v$ is a complex eigenvector corresponding to $\lambda$, then $\bar{v}$ is a complex eigenvector corresponding to $\bar{\lambda}$. With this choice of eigenvectors, $\bar{\alpha}$ is the scalar corresponding to the eigenvector $\bar{v}$ where $\alpha$ is the scalar corresponding to the eigenvector $v$.

More precisely, let $\lambda = \sigma + i\tau$ be an eigenvalue of $A$ and let $v = w + iu$ be a corresponding eigenvector. We claim that

$$X_1(t) = \mathrm{Re}\left(e^{\lambda t} v\right) \quad\text{and}\quad X_2(t) = \mathrm{Im}\left(e^{\lambda t} v\right)$$

are solutions of the homogeneous equation (??). Verifying that $X_1$ and $X_2$ are solutions proceeds as follows. The real solutions corresponding to the pair of eigenvalues $\lambda$ and $\bar{\lambda}$ are $\alpha e^{\lambda t} v + \bar{\alpha} e^{\bar{\lambda} t} \bar{v}$ for all complex scalars $\alpha$. If we set $\alpha = \frac{1}{2}$, then using Euler's formula, we obtain the solution

$$X_1(t) = e^{\sigma t}\left(\cos(\tau t)\, w - \sin(\tau t)\, u\right).$$

Similarly, setting $\alpha = -\frac{i}{2}$ leads to the solution $X_2(t) = e^{\sigma t}\left(\sin(\tau t)\, w + \cos(\tau t)\, u\right)$.
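These two real solutions can be checked numerically. The sketch below (Python/NumPy rather than MATLAB; the 2×2 matrix is a made-up example with complex eigenvalues) builds the real and imaginary parts of a complex eigensolution and verifies that each satisfies $\dot{X} = AX$ by a centered difference:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [2.0,  1.0]])              # hypothetical matrix with eigenvalues 1 +/- 2i

eigvals, V = np.linalg.eig(A)
lam, v = eigvals[0], V[:, 0]             # one eigenvalue sigma + i tau and eigenvector w + i u

def X1(t):
    return np.real(np.exp(lam * t) * v)  # real part of e^{lambda t} v

def X2(t):
    return np.imag(np.exp(lam * t) * v)  # imaginary part of e^{lambda t} v

# both real functions solve X' = AX, since A is real
h = 1e-6
for X in (X1, X2):
    t = 0.3
    deriv = (X(t + h) - X(t - h)) / (2 * h)
    assert np.allclose(deriv, A @ X(t), atol=1e-4)
```

The key point the code exploits is that differentiation and multiplication by the real matrix $A$ both commute with taking real and imaginary parts.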

Two Examples with a Complete Set of Eigenvectors

Next we consider two examples. The first has distinct eigenvalues, some of which are complex, while the second has real eigenvalues, one of which is multiple.

(a) Find all the solutions of the linear system of ODEs

(b) As a second example, consider the system where

We use this information to find linearly independent solutions to $\dot{X} = AX$. See Theorem ??.

The theory of Section ?? tells us that solutions of (??) have the form $X(t) = e^{\lambda_0 t} p(t)$, where each coordinate of $p(t)$ is a polynomial of degree at most $k-1$ and $k$ is the multiplicity of the eigenvalue $\lambda_0$. See Lemma ?? in Section ??. Using the product rule, compute

$$\dot{X}(t) = e^{\lambda_0 t}\left(\lambda_0 p(t) + \dot{p}(t)\right).$$

On the other hand, from (??),

$$\dot{X}(t) = AX(t) = e^{\lambda_0 t} A p(t).$$

It follows that $X(t)$ is a solution if and only if

$$\dot{p}(t) = (A - \lambda_0 I_n)\, p(t).$$

Writing $p(t) = v_0 + t v_1$ and equating coefficients, since $1$ and $t$ are linearly independent functions, (??) is equivalent to

$$(A - \lambda_0 I_n) v_0 = v_1 \quad\text{and}\quad (A - \lambda_0 I_n) v_1 = 0.$$

Next, choose $v_0$ such that $(A - \lambda_0 I_n)^2 v_0 = 0$ but $(A - \lambda_0 I_n) v_0 \neq 0$. By setting $v_1 = (A - \lambda_0 I_n) v_0$, both equations are satisfied. Then the function

$$X(t) = e^{\lambda_0 t}\left(v_0 + t v_1\right)$$

is a solution of (??). Moreover, since $v_0$ and $v_1$ are linearly independent, the solutions obtained in this way are linearly independent. We have proved:

For example, when , we have found three linearly independent solutions:

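The generalized-eigenvector construction can be checked on a concrete deficient matrix. The sketch below (Python/NumPy; the 2×2 matrix with a single eigenvalue and a one-dimensional eigenspace is a made-up example) picks $v_0$ with $(A - \lambda_0 I)^2 v_0 = 0$, sets $v_1 = (A - \lambda_0 I) v_0$, and verifies that $e^{\lambda_0 t}(v_0 + t v_1)$ solves $\dot{X} = AX$:

```python
import numpy as np

lam0 = 2.0
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])          # single eigenvalue 2 with a one-dimensional eigenspace

N = A - lam0 * np.eye(2)            # nilpotent here: N @ N = 0
v0 = np.array([0.0, 1.0])           # generalized eigenvector: N^2 v0 = 0, N v0 != 0
v1 = N @ v0                         # v1 = (A - lam0 I) v0 is a genuine eigenvector
assert np.allclose(N @ v1, 0)

def X(t):
    # the polynomial-times-exponential solution from the text
    return np.exp(lam0 * t) * (v0 + t * v1)

# verify X' = AX by a centered difference at an arbitrary time
h = 1e-6
t = 0.7
deriv = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(deriv, A @ X(t), atol=1e-4)
```

Together with the eigenvector solution $e^{\lambda_0 t} v_1$, this gives a full set of linearly independent solutions for this matrix.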
An Example

Consider the system , where

where $q_j(\lambda)$ is a polynomial with $\deg q_j < k_j$. Partial fractions state that $\frac{1}{p_A(\lambda)}$ can be written as a sum of expressions of the form

$$\frac{a}{(\lambda - \lambda_j)^{\ell}}$$

for scalars $a$. Putting these terms over a common denominator proves (??).

Define the polynomials

$$p_j(\lambda) = \frac{p_A(\lambda)}{(\lambda - \lambda_j)^{k_j}}\, q_j(\lambda).$$
For instance, if with , and then
We can now state one method for computing $e^{tA}$.

Note that this matrix exponential consists of functions that are linear combinations of $t^{\ell} e^{\lambda_j t}$ where $0 \le \ell < k_j$. These are just the type of terms that appeared in our discussion of solutions of equations in Jordan normal form in Section ??.

Proof
Multiplying (??) by $p_A(\lambda)$ yields the identity

$$1 = p_1(\lambda) + \cdots + p_s(\lambda).$$

Identity (??) is valid for every number $\lambda$. This is possible only if, for each power $\lambda^{\ell}$ with $\ell \ge 1$, the sum of all coefficients of terms with that power is zero. Therefore, we can substitute the matrix $A$ for $\lambda$ in (??) and obtain

$$I_n = p_1(A) + \cdots + p_s(A).$$

We now compute

$$e^{tA} = e^{\lambda_j t} e^{t(A - \lambda_j I_n)} = e^{\lambda_j t} \sum_{\ell=0}^{\infty} \frac{t^{\ell}}{\ell!} (A - \lambda_j I_n)^{\ell}.$$

Multiplying this identity by $p_j(A)$ yields

$$p_j(A)\, e^{tA} = e^{\lambda_j t} \sum_{\ell=0}^{\infty} \frac{t^{\ell}}{\ell!}\, p_j(A) (A - \lambda_j I_n)^{\ell}.$$

Now we use the Cayley-Hamilton theorem to observe that

$$p_j(A)(A - \lambda_j I_n)^{k_j} = q_j(A)\, p_A(A) = 0.$$

Hence $p_j(A)(A - \lambda_j I_n)^{\ell} = 0$ for all $\ell \ge k_j$ and therefore

$$p_j(A)\, e^{tA} = e^{\lambda_j t} \sum_{\ell=0}^{k_j - 1} \frac{t^{\ell}}{\ell!}\, p_j(A) (A - \lambda_j I_n)^{\ell}.$$

Finally, we sum over the index $j$ and use (??) to conclude that

$$e^{tA} = \sum_{j=1}^{s} e^{\lambda_j t}\, p_j(A) \sum_{\ell=0}^{k_j - 1} \frac{t^{\ell}}{\ell!} (A - \lambda_j I_n)^{\ell},$$

which proves the theorem.
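The Cayley-Hamilton step used in the proof — every matrix satisfies its own characteristic polynomial, $p_A(A) = 0$ — is easy to confirm numerically. A sketch in Python/NumPy (the matrix is a made-up example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])    # hypothetical example matrix

# np.poly returns the characteristic polynomial coefficients,
# highest degree first: lambda^3 + c2 lambda^2 + c1 lambda + c0.
coeffs = np.poly(A)

# Evaluate p_A at the matrix A (Horner's rule with matrix arithmetic).
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(3)

assert np.allclose(P, 0, atol=1e-8)   # Cayley-Hamilton: p_A(A) = 0
```

It is this identity that kills every term of the exponential series from order $k_j$ onward once the series is multiplied by $p_j(A)$.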

As a special case, consider an $n \times n$ matrix $A$ that has $n$ distinct eigenvalues $\lambda_1,\ldots,\lambda_n$. Then $k_j = 1$ for each $j$, implying that the polynomial $q_j$ has degree zero and is a constant independent of $\lambda$. Since $k_j = 1$, the only term in the inner sum is $\ell = 0$, and $\frac{t^0}{0!}(A - \lambda_j I_n)^0 = I_n$, so we obtain

$$e^{tA} = \sum_{j=1}^{n} e^{\lambda_j t}\, p_j(A).$$

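In the distinct-eigenvalue case, partial fractions give $p_j(\lambda)$ as the Lagrange basis polynomial $\prod_{i \neq j} \frac{\lambda - \lambda_i}{\lambda_j - \lambda_i}$, so the corollary can be checked against a direct matrix exponential. A sketch in Python (NumPy/SciPy; the matrix is a made-up example):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [0.0, -1.0]])            # hypothetical matrix with distinct eigenvalues 2, -1
eigvals = np.linalg.eigvals(A)
n = len(eigvals)
t = 0.5

# e^{tA} = sum_j e^{lambda_j t} p_j(A), where for distinct eigenvalues
# p_j(A) = prod_{i != j} (A - lambda_i I) / (lambda_j - lambda_i).
E = np.zeros_like(A, dtype=complex)
for j in range(n):
    P = np.eye(n, dtype=complex)
    for i in range(n):
        if i != j:
            P = P @ (A - eigvals[i] * np.eye(n)) / (eigvals[j] - eigvals[i])
    E += np.exp(eigvals[j] * t) * P

# agrees with scipy's general-purpose matrix exponential
assert np.allclose(E.real, expm(t * A), atol=1e-9)
```

This is the computation the text carries out by hand in the next example, with the $p_j(A)$ found via partial fractions.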
An Example with Distinct Eigenvalues

We revisit the example in (??), that is, As we discussed previously, the eigenvalues of this matrix are: , , and . Therefore, the characteristic polynomial of is and

Moreover, using partial fractions, we can write So

Next, using MATLAB, compute

Corollary ?? states that Using MATLAB, we obtain

Therefore,

A distinct advantage of this direct method for computing $e^{tA}$ comes when solving an initial value problem. Then we can write the solution directly as $X(t) = e^{tA} X_0$. For example, the solution to the initial value problem when is: Compare this result with (??).
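Once $e^{tA}$ is in hand, solving the initial value problem reduces to one matrix-vector product. A sketch in Python/SciPy (matrix and initial condition are made-up examples):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # hypothetical coefficient matrix
X0 = np.array([1.0, 0.0])          # initial condition X(0)

def X(t):
    return expm(t * A) @ X0        # solution of X' = AX, X(0) = X0

assert np.allclose(X(0.0), X0)
# verify the ODE at t = 1 by a centered difference
h = 1e-6
t = 1.0
assert np.allclose((X(t + h) - X(t - h)) / (2 * h), A @ X(t), atol=1e-4)
```

Here `expm` plays the role of the hand computation in the text; in MATLAB the analogous built-in is `expm(t*A)`.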

An Example with Multiple Eigenvalues

As an example, recall the matrix (??) The eigenvalues of (using eig(A)) are Therefore, the characteristic polynomial is Using partial fractions, we write It follows that and

Thus, using MATLAB, we calculate and From Theorem ?? (again with the help of MATLAB), it follows that
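When an eigenvalue is multiple, the theorem truncates the exponential series at order $k_j - 1$. For a matrix with a single eigenvalue $\lambda_0$ of multiplicity $n$ (so $p_1(A) = I_n$), the formula reduces to $e^{tA} = e^{\lambda_0 t} \sum_{\ell=0}^{n-1} \frac{t^{\ell}}{\ell!} (A - \lambda_0 I_n)^{\ell}$, which can be checked directly. A Python/SciPy sketch with a made-up matrix:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam0 = 2.0
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])    # hypothetical matrix: single eigenvalue 2, multiplicity 3
n = 3
N = A - lam0 * np.eye(n)           # nilpotent by Cayley-Hamilton: N^3 = 0

t = 0.4
E = np.zeros((n, n))
for ell in range(n):               # the series stops at ell = n - 1
    E += (t ** ell / factorial(ell)) * np.linalg.matrix_power(N, ell)
E *= np.exp(lam0 * t)

# agrees with scipy's general-purpose matrix exponential
assert np.allclose(E, expm(t * A), atol=1e-9)
```

The truncation is exact, not an approximation: all higher powers of $A - \lambda_0 I_n$ vanish.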

Exercises

In Exercises ?? – ?? consider the system of differential equations corresponding to the given matrix and construct a complete set of linearly independent solutions.

.
.
.
.

In Exercises ?? – ?? use Corollary ?? to compute for the given matrix.

.
.
.
.

In Exercises ?? – ?? solve the initial value problem for the given system of ODEs with initial condition .

.
.

In Exercises ?? – ?? use Theorem ?? and MATLAB to compute for the given matrix.

.
.
.
.