
We introduce antiderivatives.

Computing derivatives is not too difficult. At this point, you should be able to take the derivative of almost any function you can write down. However, undoing derivatives is much harder. This process of undoing a derivative is called taking an antiderivative.
How many antiderivatives does $f(x) = 2x$ have?
- none
- one
- infinitely many
Recall: Any function whose derivative is equal to $0$ on some interval is equal to a constant on that interval. So if $G$ is any antiderivative of $f(x)=2x$, then $\left (G(x)-x^2\right )'=G'(x)-2x=0$, which means that $G(x)-x^2=C$ for some constant $C$. This implies that $G(x)=x^2+C$, for all $x$.
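To make the role of the constant concrete, here is a quick numerical spot check (the helper `numeric_derivative` is ours, not from the text): every function of the form $x^2+C$ has derivative $2x$, no matter which constant $C$ we pick.

```python
def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# Every G(x) = x^2 + C, whatever the constant C, has derivative 2x.
for C in (0.0, -3.0, 7.5):
    G = lambda x, C=C: x * x + C
    for x in (0.5, 1.0, 2.0):
        assert abs(numeric_derivative(G, x) - 2 * x) < 1e-4
```

The constant disappears under differentiation, which is exactly why a single $f$ has a whole family of antiderivatives.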

So, when we write $F(x)+C$, we denote the entire family of antiderivatives of $f$. Alternative notation for this family of antiderivatives was introduced by G.W. Leibniz (1646-1716):

It follows that \[\int f(x)\,dx = F(x)+C,\] where $F$ is any antiderivative of $f$ and $C$ is an arbitrary constant.

Now we are ready to “integrate” some famous functions.

Fill out these basic antiderivatives. Note each of these examples comes directly from our knowledge of basic derivatives.

It may seem that one could simply memorize these antiderivatives and antidifferentiating would be as easy as differentiating. This is not the case. The issue comes up when trying to combine these functions. When taking derivatives we have the product rule and the chain rule. The analogues of these two rules are much more difficult to deal with when taking antiderivatives. However, not all is lost.

Consider the following example.

It is easy to recognize an antiderivative: we just have to differentiate it, and check whether $H'(x)=h(x)$, for all $x$ in $I$.

Notice that the function $h$ is the sum of the two functions $f$ and $g$, where $f(x)=\cos {x}$ and $g(x)=\frac {1}{x}$, for $x$ in $I$.

We know antiderivatives of both functions: $F(x)=\sin {x}$ and $G(x)=\ln {x}$, for $x$ in $I$, are antiderivatives of $f$ and $g$, respectively. So, in this example we see that the function $F+G$ is an antiderivative of $f+g$. In other words, “the sum of antiderivatives is an antiderivative of a sum”.

Is this true in general?

Let’s check whether this rule holds for any eligible pair of functions $f$ and $g$ defined on some interval $I$.

Let $f$, $g$, $F$ and $G$ be four functions defined on some interval $I$ such that $F$ is an antiderivative of $f$ and $G$ is an antiderivative of $g$, i.e.

$F'(x)=f(x)$ and $G'(x)=g(x)$ for all $x$ in some interval $I$.

Find an antiderivative of the function $f+g$.

Differentiate the function $F+G$.
Since $\left (F(x)+G(x)\right )'=F'(x)+G'(x)=f(x)+g(x)$, it follows that $F+G$ is an antiderivative of $f+g$.
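The proof can also be spot-checked numerically on the earlier example, $F(x)=\sin x$ and $G(x)=\ln x$, by comparing the derivative of $F+G$ against $f(x)+g(x)=\cos x+\frac{1}{x}$ (the helper below is our sketch, not part of the text):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

F = math.sin                       # antiderivative of f(x) = cos(x)
G = math.log                       # antiderivative of g(x) = 1/x (for x > 0)
sum_FG = lambda x: F(x) + G(x)

for x in (0.5, 1.0, 2.0, 3.0):
    expected = math.cos(x) + 1.0 / x   # f(x) + g(x)
    assert abs(numeric_derivative(sum_FG, x) - expected) < 1e-5
```

The check agrees at every sample point, matching the algebraic proof above.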

To summarize: The sum of antiderivatives is an antiderivative of a sum.

We have proved the following theorem.

Next, we will try to prove an analogue of the constant multiple rule for derivatives. Let’s consider the following example.

It is easy to recognize an antiderivative: we just have to differentiate it, and check whether $H'(x)=h(x)$, for all $x$ in $I$.

Notice that in this example the function $h$ is a constant multiple of $f$, namely $h(x)=5f(x)$, where $f(x)=\sec ^{2}{x}$. On the other hand, we know that the function $F$, defined by $F(x)=\tan {x}$, is an antiderivative of $f$.

If we differentiate the function $5F$, we get that

$\left (5F(x)\right )'=\left (5\tan {x}\right )'=5\left (\tan {x}\right )'=5\sec ^{2}{x}=5f(x)$, for $x$ in $I$.

In other words, “a constant multiple of an antiderivative is an antiderivative of a constant multiple of a function.” Is this always true?

Let’s check whether this rule holds for any constant $k$ and any eligible function $f$ defined on some interval $I$.

Let $k$ be a constant, let $f$ be a function defined on some interval $I$ and let $F$ be an antiderivative of $f$, i.e.

$F'(x)=f(x)$, for all $x$ in some interval $I$.

Find an antiderivative of the function $k f$.

Differentiate the function $kF$.
Since $\left (kF(x)\right )'=kF'(x)=kf(x)$, it follows that $kF$ is an antiderivative of $kf$.

To summarize: The constant multiple of an antiderivative is an antiderivative of a constant multiple of a function.

We have proved the following theorem.

Let’s put these rules and our knowledge of basic derivatives to work. The sum rule for antiderivatives allows us to integrate term-by-term. Let’s see an example of this.
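The constant multiple rule can be spot-checked the same way, using the earlier example $k=5$ and $F(x)=\tan x$, whose derivative should match $5\sec ^{2}{x}$ (a numerical sketch of ours, not part of the text):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

k = 5.0
F = math.tan                        # antiderivative of f(x) = sec^2(x)
kF = lambda x: k * F(x)

for x in (0.2, 0.5, 1.0):           # stay inside (-pi/2, pi/2)
    sec2 = 1.0 / math.cos(x) ** 2   # f(x) = sec^2(x)
    assert abs(numeric_derivative(kF, x) - k * sec2) < 1e-4
```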

### Computing antiderivatives

Unfortunately, we cannot tell you how to compute every antiderivative; we view them as a sort of puzzle. Later we will learn a handful of techniques for computing antiderivatives. However, a robust and simple way to compute antiderivatives is guess-and-check.

Tips for guessing antiderivatives

(a) If possible, express the function that you are integrating in a form that is convenient for integration.
(b) Make a guess for the antiderivative.
(c) Take the derivative of your guess.
(d) Note how the above derivative is different from the function whose antiderivative you want to find.
(e) Change your original guess by multiplying by constants or by adding in new functions.
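The guess-and-check loop can be walked through in code. The specific example here, $\int \cos (2x)\,dx$, is ours: guess $\sin (2x)$, differentiate to get $2\cos (2x)$, notice the extra factor of $2$, and repair the guess by multiplying by $\tfrac {1}{2}$.

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

target = lambda x: math.cos(2 * x)       # we want an antiderivative of this

first_guess = lambda x: math.sin(2 * x)  # step (b): make a guess
# steps (c)/(d): its derivative is 2*cos(2x) -- off by a factor of 2
x0 = 0.7
assert abs(numeric_derivative(first_guess, x0) - 2 * target(x0)) < 1e-5

fixed_guess = lambda x: 0.5 * math.sin(2 * x)  # step (e): repair the constant
for x in (0.0, 0.7, 1.3):
    assert abs(numeric_derivative(fixed_guess, x) - target(x)) < 1e-5
```

So $\frac {1}{2}\sin (2x)+C$ is the family of antiderivatives of $\cos (2x)$.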

### Final thoughts

Computing antiderivatives is a place where insight and rote computation meet. We cannot teach you a method that will always work. Moreover, merely understanding the examples above will probably not be enough for you to become proficient in computing antiderivatives. You must practice, practice, practice!

### Differential equations

Differential equations express relationships between functions and their rates of change.

A differential equation is simply an equation with a derivative in it. Here is an example:

Which one is a differential equation?
- $x^2+3x-1=0$
- $f'(x)=x^2+3x-1$
- $f(x)=x^2+3x-1$

When a mathematician solves a differential equation, they are finding functions satisfying the equation.

Which of the following functions solve the differential equation $f'(x)=f(x)$?

- $f(x) = \sin (x)$
- $f(x) = x^2$
- $f(x) = e^x$
- $f(x) = 6 e^x$
- $f(x) = e^{-x}$
- $f(x) = \tan (x)$

The function $f(x)=Ce^x$ is called the general solution of the differential equation $f'(x)=f(x)$. Since there are infinitely many solutions to a differential equation, we can impose an additional condition (say $f(0)=1$), called an initial condition. When we are asked to find a function $f$ that satisfies both the differential equation (DE) and the initial condition (IC), this is called an initial value problem (IVP). Let’s try one out.
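Both claims can be spot-checked numerically (our sketch, not part of the text): each member of the family $Ce^x$ satisfies $f'(x)=f(x)$, and the initial condition $f(0)=1$ singles out $C=1$, since $f(0)=Ce^0=C$.

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# Every f(x) = C*e^x satisfies the DE f'(x) = f(x).
for C in (1.0, 6.0, -2.5):
    f = lambda x, C=C: C * math.exp(x)
    for x in (0.0, 1.0, 2.0):
        assert abs(numeric_derivative(f, x) - f(x)) < 1e-4

# The initial condition f(0) = 1 forces C = 1, since f(0) = C*e^0 = C.
f = lambda x: 1.0 * math.exp(x)
assert f(0.0) == 1.0
```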