We introduce antiderivatives.

How many antiderivatives does $f(x) = x^2$ have? (None, one, or infinitely many?)

The functions $\frac{x^3}{3}$, $\frac{x^3}{3}+1$, $\frac{x^3}{3}-\pi$, and so on, are all antiderivatives of $f(x) = x^2$, so there are infinitely many.

It is clear that any function $F(x) = \frac{x^3}{3} + C$, where $C$ is a constant, is an antiderivative of $f(x) = x^2$. Why? Because
$$F'(x) = \left(\frac{x^3}{3} + C\right)' = x^2.$$

Could there exist any other antiderivative of $f$, call it $G$, such that $G$ is not a sum of $F$ and a constant? In that case, for all $x$ we would have
$$G'(x) = F'(x) = x^2, \quad\text{and therefore}\quad (G - F)'(x) = 0.$$
Recall: any function whose derivative is equal to $0$ on some interval is equal to a
constant on that interval. This means that the function $G - F$ is equal to some constant $C$. This
implies that $G(x) = F(x) + C$, for all $x$.
The Family of Antiderivatives

If $F$ is an antiderivative of $f$, then the function $f$ has a whole **family of antiderivatives**.
Each antiderivative of $f$ is the sum of $F$ and some constant $C$.

So, when we write $F(x) + C$, we denote the entire family of antiderivatives of $f$. Alternative notation for this family of antiderivatives was introduced by G.W. Leibniz (1646-1716):

Let $f$ be a function. The family of *all* antiderivatives of $f$ is denoted by
$$\int f(x)\,dx.$$

It follows that
$$\int f(x)\,dx = F(x) + C,$$
where $F$ is any antiderivative of $f$ and $C$ is an arbitrary constant.
This is called the **indefinite integral** of $f$.
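This notation is also what computer algebra systems use. A minimal sketch with Python's sympy (assuming sympy is installed); note that software returns just one member of the family, so the constant $C$ must be remembered by hand:

```python
import sympy as sp

x = sp.symbols('x')

# integrate() returns one antiderivative from the family;
# the arbitrary constant C is omitted by convention.
F = sp.integrate(x**2, x)
print(F)  # x**3/3
```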

Now we are ready to "integrate" some famous functions.

Here are some basic antiderivatives. Note that each of these examples comes directly from our knowledge of basic derivatives.
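Each entry in a table of basic antiderivatives can be checked mechanically by differentiating it. A short sympy sketch over a few standard entries (the particular entries below are illustrative):

```python
import sympy as sp

x = sp.symbols('x')

# Each pair (F, f) claims that F is an antiderivative of f;
# differentiating F must give back f.
table = [
    (x**3 / 3, x**2),
    (sp.sin(x), sp.cos(x)),
    (-sp.cos(x), sp.sin(x)),
    (sp.exp(x), sp.exp(x)),
    (sp.log(x), 1 / x),
]

for F, f in table:
    assert sp.simplify(sp.diff(F, x) - f) == 0
print("all entries check out")
```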

It may seem that one could simply memorize these antiderivatives and antidifferentiating
would be as easy as differentiating. This is **not** the case. The issue comes up when
trying to combine these functions. When taking derivatives we have the
*product rule* and the *chain rule*. The analogues of these two rules are much
more difficult to deal with when taking antiderivatives. However, not all is
lost.

Consider the following example.

Find an antiderivative of the function $f$ defined by
$$f(x) = x^2 + \cos x.$$

It is easy to recognize an antiderivative: we just have to differentiate it, and check
whether $F'(x) = f(x)$, for all $x$ in $(-\infty, \infty)$.

Notice that the function $f$ is the sum of the two functions $g$ and $h$, where $g(x) = x^2$ and $h(x) = \cos x$, for $x$ in $(-\infty, \infty)$.

We know antiderivatives of both functions: $G(x) = \frac{x^3}{3}$ and $H(x) = \sin x$, for $x$ in $(-\infty, \infty)$, are antiderivatives of $g$ and $h$, respectively. So, in this example we see that the function $G(x) + H(x) = \frac{x^3}{3} + \sin x$ is an antiderivative of $f$. In other words, "the sum of antiderivatives is an antiderivative of a sum".

Is this true in general?

Let's check whether this rule holds for any eligible pair of functions $f$ and $g$ defined on some interval $I$.

Let $F$, $f$, $G$, and $g$ be four functions defined on some
interval $I$ such that $F$ is an antiderivative of $f$ and $G$ is an antiderivative of $g$,
i.e.
$$F'(x) = f(x) \quad\text{and}\quad G'(x) = g(x), \quad\text{for all } x \text{ in } I.$$
Then, by the sum rule for derivatives,
$$(F + G)'(x) = F'(x) + G'(x) = f(x) + g(x), \quad\text{for all } x \text{ in } I.$$
We have proved the following theorem.

**The Sum Rule for Antiderivatives** If $F$ is an antiderivative of $f$ and $G$ is an antiderivative
of $g$, then $F + G$ is an antiderivative of $f + g$.

We can write equivalently, using indefinite integrals,
$$\int \left(f(x) + g(x)\right)\,dx = \int f(x)\,dx + \int g(x)\,dx.$$

Find an antiderivative of the function $f(x) = x^3 + \sin x$.

Since $\left(\frac{x^4}{4}\right)' = x^3$ and $(-\cos x)' = \sin x$, it follows that $F(x) = \frac{x^4}{4} - \cos x$ is an antiderivative of $f$. To summarize: the sum of antiderivatives is an antiderivative of a sum.
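The sum rule can be checked symbolically as well; a sympy sketch with the illustrative pair $f(x) = x^2$, $g(x) = \cos x$:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2
g = sp.cos(x)

# Integrating the sum and summing the integrals should agree
# (up to an arbitrary constant, which sympy omits).
lhs = sp.integrate(f + g, x)
rhs = sp.integrate(f, x) + sp.integrate(g, x)
assert sp.simplify(lhs - rhs) == 0
print("sum rule verified for this pair")
```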

Next, we will try to prove an analogue of the constant multiple rule for derivatives. Let’s consider the following example.

Find an antiderivative of the function $f$, where
$$f(x) = 6x^2,$$
for all $x$ in some interval $I$.

It is easy to recognize an antiderivative: we just have to differentiate it, and check
whether $F'(x) = f(x)$, for all $x$ in $I$.

Notice, in this example the function $f$ is a constant multiple of $g$,
$$f(x) = 6\,g(x),$$
where $g(x) = x^2$. On the other hand, we know that the function $G$, defined by $G(x) = \frac{x^3}{3}$, is an antiderivative of $g$.

If we differentiate the function $6G$, we get that
$$(6G)'(x) = 6\,G'(x) = 6x^2 = f(x), \quad\text{for } x \text{ in } I.$$

In other words, "a constant multiple of an antiderivative is an antiderivative of a constant multiple of a function." Is this always true?

Let's check whether this rule holds for any constant $k$ and any eligible function $g$ defined on some interval $I$.

Let $k$ be a constant, let $g$ be a function defined on some
interval $I$ and let $G$ be an antiderivative of $g$, i.e.
$$G'(x) = g(x), \quad\text{for all } x \text{ in } I.$$
Then, by the constant multiple rule for derivatives,
$$(kG)'(x) = k\,G'(x) = k\,g(x), \quad\text{for all } x \text{ in } I.$$
We have proved the following theorem.

**The Constant Multiple Rule for Antiderivatives** If $F$ is an antiderivative of $f$, and $k$ is a constant, then $kF$ is an
antiderivative of $kf$.

We can write equivalently, using indefinite integrals,
$$\int k\,f(x)\,dx = k\int f(x)\,dx.$$

Find an antiderivative of the function $f(x) = 4\cos x$.

Since $(4\sin x)' = 4\cos x$, it follows that $F(x) = 4\sin x$ is an antiderivative of $f$. To summarize: the constant multiple of an antiderivative is an antiderivative of a constant multiple of a function.

Let's put these rules and our knowledge of basic derivatives to work. The sum rule for antiderivatives allows us to integrate term-by-term. Let's see an
example of this.
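Term-by-term integration can be sketched in sympy; the integrand $3x^2 + 4\cos x$ below is an illustrative choice of our own:

```python
import sympy as sp

x = sp.symbols('x')

# Term by term: pull out the constants, integrate each basic piece.
#   ∫(3x² + 4cos x) dx = 3∫x² dx + 4∫cos x dx = x³ + 4 sin x + C
F = sp.integrate(3 * x**2 + 4 * sp.cos(x), x)

# Differentiating the result recovers the integrand.
assert sp.diff(F, x) == 3 * x**2 + 4 * sp.cos(x)
print(F)
```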

While the sum rule for antiderivatives allows us to integrate term-by-term, we cannot
integrate *factor-by-factor*, meaning that in general
$$\int f(x)\,g(x)\,dx \neq \left(\int f(x)\,dx\right)\left(\int g(x)\,dx\right).$$
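A one-line counterexample in sympy, taking $f(x) = g(x) = x$:

```python
import sympy as sp

x = sp.symbols('x')

# ∫x·x dx = x³/3, but (∫x dx)·(∫x dx) = (x²/2)² = x⁴/4 — not equal.
product_integral = sp.integrate(x * x, x)
product_of_integrals = sp.integrate(x, x) * sp.integrate(x, x)
assert sp.simplify(product_integral - product_of_integrals) != 0
print(product_integral, "vs", product_of_integrals)
```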

Unfortunately, we cannot tell you how to compute every antiderivative; we view them
as a sort of *puzzle*. Later we will learn a handful of techniques for computing
antiderivatives. However, a robust and simple way to compute antiderivatives is
guess-and-check.

Tips for guessing antiderivatives

- (a) If possible, express the function that you are integrating in a form that is convenient for integration.
- (b) Make a guess for the antiderivative.
- (c) Take the derivative of your guess.
- (d) Note how the above derivative is different from the function whose antiderivative you want to find.
- (e) Change your original guess by **multiplying** by constants or by **adding** in new functions.
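The steps above can be sketched with sympy doing the differentiation; the target $\cos(2x)$ is an illustrative choice:

```python
import sympy as sp

x = sp.symbols('x')
target = sp.cos(2 * x)

# (b) First guess: sin(2x).
guess = sp.sin(2 * x)
# (c) Differentiate the guess: we get 2·cos(2x).
# (d) It differs from the target cos(2x) by a factor of 2 ...
assert sp.diff(guess, x) == 2 * sp.cos(2 * x)
# (e) ... so fix the guess by multiplying by 1/2.
guess = sp.sin(2 * x) / 2
assert sp.diff(guess, x) == target
print("guess-and-check succeeded:", guess)
```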

Computing antiderivatives is a place where insight and rote computation meet.
We cannot teach you a method that will always work. Moreover, merely
*understanding* the examples above will probably not be enough for you to
become proficient in computing antiderivatives. You must practice, practice,
practice!

Differential equations show you relationships between functions and their rates of change.

A *differential equation* is simply an equation with a derivative in it. Here is an
example:
$$y' = y.$$
When a mathematician solves a differential equation, they are finding *functions*
satisfying the equation.

We can directly check that any function $y = Ce^x$, where $C$ is a constant, is a solution to our differential equation $y' = y$.
Could there be any others? It turns out that these are the *only* solutions. But
showing that we didn't miss any is a bit tricky.

Well, suppose we have some mysterious function $y$ and all we know is that $y' = y$. Let's
define a new function $g(x) = \frac{y}{e^x}$. Since our denominator is never $0$, the quotient rule tells
us that
$$g'(x) = \frac{y'e^x - y\,e^x}{\left(e^x\right)^2} = \frac{y' - y}{e^x} = 0.$$
But we know that a function with a $0$ derivative is constant, so $g(x) = C$.
Plugging this back into our formula for $g$ tells us that $\frac{y}{e^x} = C$. Now we rearrange to get
$y = Ce^x$.

This shows us that any solution to our differential equation *must* be one of the
functions we’ve already found. Our argument relies on some features that are special
to this problem; proving that we’ve found all the possible solutions to an arbitrary
differential equation is a very difficult task!
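For this particular equation, the same family of solutions can be recovered with sympy's ODE solver (a sketch, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Ask sympy for all solutions of y' = y; it returns the family C1·e^x,
# matching the argument above.
solution = sp.dsolve(sp.Eq(y(x).diff(x), y(x)), y(x))
print(solution)  # Eq(y(x), C1*exp(x))
```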

The family of functions $y = Ce^x$ is called the **general solution** of the differential equation. Since there are
infinitely many solutions to a differential equation, we can impose an additional
condition (say $y(0) = 2$), called an **initial condition**. When we are asked to find a function
that satisfies both the differential equation (DE) and the initial condition (IC), this is
called an **initial value problem** (IVP). Let's try one out.

Solve the initial value
problem (IVP):
$$y' = y, \qquad y(0) = 2.$$

The figure below shows several solutions of the differential equation.
The figure suggests that the only solution to the initial value problem is the function $y = 2e^x$.
We can verify this result.

Since all solutions of the differential equation have the form $y = Ce^x$ and $y(0) = 2$, it follows that
$$Ce^0 = 2.$$
Therefore, $C = 2$. So, the **unique** solution to the given initial value problem is the function
$y = 2e^x$.

Solve the initial value problem (IVP):
$$y' = 2x, \qquad y(1) = 2.$$

First, we have to solve the differential equation. The solution $y$ is clearly an
antiderivative of $2x$,
$$y = x^2 + C,$$
where $C$ is an arbitrary constant. We have found the general
solution of the DE and this family of solutions is illustrated by the figure below.
Now we must find the solution that also satisfies the initial condition $y(1) = 2$.

Since $y(1) = 1^2 + C = 2$, it follows that $C = 1$. Therefore, the function $y = x^2 + 1$ is the solution of the initial value problem and it is shown in the figure above.
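Initial value problems of this kind can also be handed to sympy's `dsolve` via its `ics` argument; the problem below ($y' = 2x$ with $y(1) = 2$) is an illustrative choice:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y' = 2x with the initial condition y(1) = 2 in one call;
# ics pins down the arbitrary constant, giving the unique solution.
ivp = sp.dsolve(sp.Eq(y(x).diff(x), 2 * x), y(x), ics={y(1): 2})
print(ivp)  # Eq(y(x), x**2 + 1)
```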