The general constant coefficient system of differential equations has the form
$$\begin{aligned}
\dot x_1 &= c_{11}x_1 + \cdots + c_{1n}x_n\\
&\;\;\vdots\\
\dot x_n &= c_{n1}x_1 + \cdots + c_{nn}x_n,
\end{aligned}$$
where the coefficients $c_{ij}$ are constants. Suppose that (??) satisfies the initial conditions $x_1(0) = x_{1,0}$, …, $x_n(0) = x_{n,0}$.

Using matrix multiplication of a matrix and a vector, we can rewrite these differential equations in a compact form. Consider the coefficient matrix $C = (c_{ij})$ and the vectors of initial conditions and unknowns
$$X_0 = \begin{pmatrix} x_{1,0}\\ \vdots\\ x_{n,0}\end{pmatrix}
\quad\text{and}\quad
X(t) = \begin{pmatrix} x_1(t)\\ \vdots\\ x_n(t)\end{pmatrix}.$$
Then (??) has the compact form
$$\dot X = CX, \qquad X(0) = X_0.$$
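The compact form $\dot X = CX$, $X(0) = X_0$ can be checked numerically. The sketch below assumes an illustrative symmetric matrix $C$ and initial vector $X_0$ (not values from the text), builds the solution from the eigendecomposition of $C$, and verifies that its derivative equals $CX(t)$:

```python
import numpy as np

# Illustrative symmetric coefficient matrix C and initial condition X0
# (assumed values, not taken from the text).
C = np.array([[-1.0, 3.0],
              [3.0, -1.0]])
X0 = np.array([1.0, 0.0])

# For symmetric C = Q diag(w) Q^T, the solution of dX/dt = C X, X(0) = X0,
# is X(t) = Q diag(exp(t w)) Q^T X0.
w, Q = np.linalg.eigh(C)

def X(t):
    return Q @ (np.exp(t * w) * (Q.T @ X0))

# Check dX/dt = C X(t) with a central finite difference.
t, h = 0.5, 1e-6
dXdt = (X(t + h) - X(t - h)) / (2 * h)
print(np.allclose(dXdt, C @ X(t), atol=1e-4))
```

The same construction works for any symmetric coefficient matrix; the finite difference is only a sanity check on the closed form.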
In Section ??, we plotted the phase space picture of the planar system of differential equations
$$\dot X = CX,$$
where $C$ is the $2\times 2$ coefficient matrix considered there. In those calculations we observed that there is a solution to (??) that stays on the main diagonal for each moment in time. Note that a vector $\binom{x}{y}$ is on the main diagonal if it is a scalar multiple of $\binom{1}{1}$. Thus a solution that stays on the main diagonal for all time must have the form
$$X(t) = u(t)\begin{pmatrix}1\\1\end{pmatrix}$$
for some real-valued function $u(t)$. When a function of form (??) is a solution to (??), it satisfies
$$\dot u(t)\begin{pmatrix}1\\1\end{pmatrix} = \dot X(t) = CX(t) = u(t)\,C\begin{pmatrix}1\\1\end{pmatrix}.$$

A calculation shows that $C\binom{1}{1}$ is a scalar multiple of $\binom{1}{1}$; write
$$C\begin{pmatrix}1\\1\end{pmatrix} = \lambda_1\begin{pmatrix}1\\1\end{pmatrix}.$$
Hence $\dot u(t)\binom{1}{1} = u(t)\lambda_1\binom{1}{1}$. It follows that the function $u(t)$ must satisfy the differential equation $\dot u = \lambda_1 u$, whose solutions are $u(t) = \alpha e^{\lambda_1 t}$ for some scalar $\alpha$.
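This calculation can be mirrored numerically. The matrix below is an assumed example that happens to have $\binom{1}{1}$ as an eigenvector (it is not necessarily the matrix from the text):

```python
import numpy as np

# Hypothetical symmetric coefficient matrix with (1,1)^T as an eigenvector
# (an assumed example, not the matrix from the text).
C = np.array([[-1.0, 3.0],
              [3.0, -1.0]])
v = np.array([1.0, 1.0])

# C v is a scalar multiple of v; the multiplier is lambda_1.
lam1 = (C @ v)[0] / v[0]
print(lam1)                                  # C v = lam1 * v

# X(t) = alpha * exp(lam1 * t) * v satisfies dX/dt = C X:
alpha, t = 2.0, 0.7
X = alpha * np.exp(lam1 * t) * v
dXdt = alpha * lam1 * np.exp(lam1 * t) * v   # derivative of the ansatz
print(np.allclose(dXdt, C @ X))
```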

Similarly, we also saw in our MATLAB experiments that there was a solution that for all time stayed on the anti-diagonal, the line $y = -x$. Such a solution must have the form
$$X(t) = v(t)\begin{pmatrix}1\\-1\end{pmatrix}.$$
A similar calculation shows that $v(t)$ must satisfy the differential equation $\dot v = \lambda_2 v$, where $\lambda_2$ is the scalar with $C\binom{1}{-1} = \lambda_2\binom{1}{-1}$. Solutions to this equation all have the form $v(t) = \beta e^{\lambda_2 t}$ for some real constant $\beta$.

Thus, using matrix multiplication, we are able to prove analytically that there are solutions to (??) of exactly the type suggested by our MATLAB experiments. However, even more is true, and this extension is based on the principle of superposition that was introduced for algebraic equations in Section ??.

Superposition in Linear Differential Equations

Consider a general linear differential equation of the form
$$\dot X = AX,$$
where $A$ is an $n\times n$ matrix. Suppose that $X_1(t)$ and $X_2(t)$ are solutions to (??) and that $\alpha, \beta$ are scalars. Then
$$X(t) = \alpha X_1(t) + \beta X_2(t)$$
is also a solution. We verify this fact using the linearity of matrix multiplication. Calculate
$$\frac{dX}{dt} = \alpha\frac{dX_1}{dt} + \beta\frac{dX_2}{dt} = \alpha AX_1(t) + \beta AX_2(t) = A\bigl(\alpha X_1(t) + \beta X_2(t)\bigr) = AX(t).$$
So superposition is valid for solutions of linear differential equations.
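The superposition principle is easy to verify numerically as well. Here the matrix $A$, its two eigen-solutions, and the scalars are all assumed illustrative values:

```python
import numpy as np

# Hypothetical matrix A with two known solutions X1, X2 of dX/dt = A X.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
v1, lam1 = np.array([1.0, 0.0]), 1.0   # A v1 = 1 * v1
v2, lam2 = np.array([1.0, 1.0]), 3.0   # A v2 = 3 * v2

alpha, beta, t = 1.5, -0.5, 0.4
# Superposition: X(t) = alpha X1(t) + beta X2(t)
X    = alpha * np.exp(lam1 * t) * v1 + beta * np.exp(lam2 * t) * v2
dXdt = alpha * lam1 * np.exp(lam1 * t) * v1 + beta * lam2 * np.exp(lam2 * t) * v2
print(np.allclose(dXdt, A @ X))        # the combination is again a solution
```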

Initial Value Problems

Suppose that we wish to find a solution to (??) satisfying the initial conditions
$$X(0) = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix}.$$
Then we can use the principle of superposition to find this solution in closed form. Superposition implies that for each pair of scalars $\alpha, \beta$, the functions
$$X(t) = \alpha e^{\lambda_1 t}\begin{pmatrix}1\\1\end{pmatrix} + \beta e^{\lambda_2 t}\begin{pmatrix}1\\-1\end{pmatrix}$$
are solutions to (??), built from the diagonal and anti-diagonal solutions found above. Moreover, for a solution of this form,
$$X(0) = \alpha\begin{pmatrix}1\\1\end{pmatrix} + \beta\begin{pmatrix}1\\-1\end{pmatrix} = \begin{pmatrix}\alpha + \beta\\ \alpha - \beta\end{pmatrix}.$$
Thus we can solve our prescribed initial value problem if we can solve the system of linear equations
$$\begin{aligned}\alpha + \beta &= x_0\\ \alpha - \beta &= y_0.\end{aligned}$$
This system is solved by $\alpha = \tfrac{1}{2}(x_0 + y_0)$ and $\beta = \tfrac{1}{2}(x_0 - y_0)$. Thus
$$X(t) = \tfrac{1}{2}(x_0+y_0)\,e^{\lambda_1 t}\begin{pmatrix}1\\1\end{pmatrix} + \tfrac{1}{2}(x_0-y_0)\,e^{\lambda_2 t}\begin{pmatrix}1\\-1\end{pmatrix}$$
is the desired closed-form solution.
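The bookkeeping for the initial value problem amounts to one small linear solve. In this sketch $x_0, y_0$ are assumed values, and the columns of $M$ are the eigenvectors $\binom{1}{1}$ and $\binom{1}{-1}$ from the diagonal and anti-diagonal solutions:

```python
import numpy as np

# Solve for alpha, beta so that alpha*(1,1) + beta*(1,-1) = (x0, y0).
x0, y0 = 3.0, 1.0                      # assumed initial values
M = np.array([[1.0, 1.0],
              [1.0, -1.0]])            # columns are the eigenvectors (1,1), (1,-1)
alpha, beta = np.linalg.solve(M, np.array([x0, y0]))
print(alpha, beta)                     # alpha = (x0+y0)/2, beta = (x0-y0)/2
```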

Eigenvectors and Eigenvalues

We emphasize that just knowing that there are two lines in the plane that are invariant under the dynamics of the system of linear differential equations is sufficient information to solve these equations. So it seems appropriate to ask: When does a system of linear differential equations have a line that is invariant under its dynamics? This question is equivalent to asking: When is there a nonzero vector $v$ and a nonzero real-valued function $u(t)$ such that $X(t) = u(t)v$ is a solution to (??)?

Suppose that $X(t) = u(t)v$ is a solution to the system of differential equations $\dot X = AX$. Then $u(t)$ and $v$ must satisfy
$$\dot u(t)v = \dot X(t) = AX(t) = u(t)Av.$$
Since $u$ is nonzero, choose $t_0$ with $u(t_0) \neq 0$; then $Av = \frac{\dot u(t_0)}{u(t_0)}v$. It follows that $Av$ and $v$ must lie on the same line through the origin. Hence
$$Av = \lambda v$$
for some real number $\lambda$. Geometrically, the matrix $A$ maps an eigenvector onto a multiple of itself; that multiple is the eigenvalue.

Note that scalar multiples of eigenvectors are also eigenvectors. More precisely: if $v$ is an eigenvector of the matrix $A$ with eigenvalue $\lambda$ and $\gamma$ is a nonzero scalar, then $\gamma v$ is also an eigenvector of $A$ with eigenvalue $\lambda$.

Proof
By assumption, $Av = \lambda v$ and $v$ is nonzero. Now calculate
$$A(\gamma v) = \gamma Av = \gamma\lambda v = \lambda(\gamma v).$$
The lemma follows from the definition of eigenvector.
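A numerical instance of the lemma, using an assumed matrix and eigenvector:

```python
import numpy as np

# Illustrative matrix with eigenvector v and eigenvalue lam (assumed values).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v, lam = np.array([1.0, 1.0]), 3.0     # A v = 3 v

gamma = -4.0                           # any nonzero scalar
w = gamma * v                          # a scalar multiple of v
print(np.allclose(A @ w, lam * w))     # w is again an eigenvector, same eigenvalue
```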

It follows from (??) and (??) that if $v$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $X(t) = u(t)v$ is a solution to (??) precisely when $\dot u = \lambda u$. Thus we have returned to our original linear differential equation, which has solutions $u(t) = \alpha e^{\lambda t}$ for all constants $\alpha$.

We have proved the following theorem: if $v$ is an eigenvector of the $n\times n$ matrix $A$ with eigenvalue $\lambda$, then $X(t) = e^{\lambda t}v$ is a solution to $\dot X = AX$.

Finding eigenvalues and eigenvectors from first principles, even for $2\times 2$ matrices, is not a simple task. We end this section with a calculation illustrating that real eigenvalues need not exist. In Section ??, we present a natural method for computing eigenvalues (and eigenvectors) of $2\times 2$ matrices. We defer the discussion of how to find eigenvalues and eigenvectors of $n\times n$ matrices until Chapter ??.

An Example of a Matrix with No Real Eigenvalues

Not every matrix has real eigenvalues and eigenvectors. Recall the linear system of differential equations whose phase plane is pictured in Figure ??. That phase plane showed no evidence of an invariant line, and indeed there is none. Call the $2\times 2$ coefficient matrix in that example $B$. We ask: Is there a real number $\lambda$ and a nonzero vector $v$ such that
$$Bv = \lambda v?$$

Equation (??) implies that $(B - \lambda I_2)v = 0$. If the matrix $B - \lambda I_2$ is row equivalent to the identity matrix $I_2$, then the only solution of this linear system is $v = 0$. To have a nonzero solution, the matrix $B - \lambda I_2$ must not be row equivalent to $I_2$. Row reduce: dividing the first row by its leading entry and then subtracting the appropriate multiple of the first row from the second produces an upper triangular matrix, which fails to be row equivalent to $I_2$ exactly when its lower right-hand entry is zero. For the matrix $B$ of this example, that condition is a quadratic equation in $\lambda$ with no real roots, so it holds for no real number $\lambda$. This example shows that the question of whether a given matrix has a real eigenvalue and a real eigenvector, and hence whether the associated system of differential equations has a line that is invariant under its dynamics, is a subtle one.
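Numerically, the absence of real eigenvalues shows up as complex output from an eigenvalue routine. The matrix below is a 90-degree rotation chosen for illustration (the text's matrix is not reproduced here), but any planar matrix without an invariant line behaves the same way:

```python
import numpy as np

# A matrix with no real eigenvalues: rotation by 90 degrees.
# (An assumed example; the matrix from the text is not reproduced here.)
B = np.array([[0.0, -1.0],
              [1.0, 0.0]])
eigvals = np.linalg.eigvals(B)
print(eigvals)                              # purely imaginary eigenvalues
print(np.all(np.abs(eigvals.imag) > 0))     # no real eigenvalue, no invariant line
```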

Questions concerning eigenvectors and eigenvalues are central to much of the theory of linear algebra. We discuss this topic for $2\times 2$ matrices in Section ?? and Chapter ??, and for general square matrices in Chapters ?? and ??.

Exercises

Write the system of linear ordinary differential equations
in matrix form.
Show that all solutions to the system of linear differential equations
are linear combinations of the two solutions
Consider where Let and let
  • Show that and are solutions to (??).
  • Show that is a solution to (??).
  • Use the principle of superposition to verify that is a solution to (??).
  • Using the general solution found in part (c), find a solution to (??) such that

Find a solution to where and Hint: Observe that are eigenvectors of .
Let Show that are eigenvectors of . What are the corresponding eigenvalues?
Let Show that has no real eigenvectors.
Suppose that $A$ is an $n\times n$ matrix with zero as an eigenvalue. Show that $A$ is not invertible. Hint: Assume that $A$ is invertible and compute $A^{-1}Av$, where $v$ is an eigenvector of $A$ corresponding to the zero eigenvalue.
Remark: In fact, $A$ is invertible if all of the eigenvalues of $A$ are nonzero. See Corollary ?? of Chapter ??.
Consider the matrix and vector given by Use map to compute , , etc. by a repeated use of the Map button in the MAP Display window. What do you observe? What happens if you start the iteration process with a different choice for , and, in particular, for an that is close to ?

In Exercises ?? – ??, use map to find an (approximate) eigenvector for the given matrix. Hint: Choose a vector in map and repeatedly click on the button Map until the vector maps to a multiple of itself. You may wish to use the Rescale feature in the MAP Options; then the length of the vector is rescaled to one after each use of the command Map. In this way, you can avoid overflows in the computations while still being able to see the directions in which the vectors are moved by the matrix mapping. The coordinates of the new vector obtained by applying map can be viewed in the Vector input window.
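For readers without the map tool, the button-clicking procedure described above is power iteration, and can be sketched in a few lines; the matrix and starting vector below are assumed examples, and the normalization step plays the role of map's Rescale option:

```python
import numpy as np

# Power iteration: what repeated use of the Map button computes.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # assumed example matrix
v = np.array([1.0, 0.2])               # any starting vector not on an eigenline

for _ in range(50):
    v = A @ v
    v = v / np.linalg.norm(v)          # rescale to length one to avoid overflow

lam = v @ (A @ v)                      # Rayleigh quotient estimates the eigenvalue
print(np.round(v, 6), round(lam, 6))   # v approaches the dominant eigenvector
```

The iterates align with the eigenvector of largest absolute eigenvalue, which is exactly the direction one sees the vectors settle into in the MAP Display window.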

.
.
Use MATLAB to verify that solutions to the system of linear differential equations
are linear combinations of the two given solutions. More concretely, proceed as follows:
  • By superposition, the general solution to the differential equation has the form . Find constants and such that .
  • Graph the second component of this solution using the MATLAB plot command.
  • Use pplane5 to compute a solution via the Keyboard input starting at and then use the y vs t command in pplane5 to graph this solution.
  • Compare the results of the two plots.
  • Repeat steps (a)–(d) using the initial vector .