We introduce antiderivatives.
It is clear that any function $F(x) + C$, where $C$ is a constant, is an antiderivative of $f$, provided that $F$ is an antiderivative of $f$. Why? Because $\frac{d}{dx}\bigl(F(x) + C\bigr) = F'(x) + 0 = f(x)$.
Could there exist any other antiderivative of $f$, call it $G$, such that $G$ is not a sum of $F$ and a constant? In that case, for all $x$ we would have $G'(x) = f(x) = F'(x)$, and hence $\bigl(G(x) - F(x)\bigr)' = 0$ on the interval in question. But a function whose derivative is zero on an interval must be constant there, so $G(x) - F(x)$ is a constant; that is, $G$ is the sum of $F$ and a constant after all.
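For instance (a pair chosen here just for illustration), both $x^2$ and $x^2 + 7$ are antiderivatives of $2x$, since each differentiates to $2x$, and indeed the two differ by the constant $7$.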
If $F$ is an antiderivative of $f$, then the function $f$ has a whole family of antiderivatives. Each antiderivative of $f$ is the sum of $F$ and some constant $C$.
So, when we write $F(x) + C$, we denote the entire family of antiderivatives of $f$. Alternative notation for this family of antiderivatives was introduced by G.W. Leibniz (1646-1716): \[ \int f(x)\,dx = F(x) + C. \]
This is called the indefinite integral of $f$.
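For example (an integrand chosen here just as an illustration), since $\frac{d}{dx}\,\sin x = \cos x$, the whole family of antiderivatives of $\cos x$ is written compactly as \[ \int \cos x\,dx = \sin x + C. \]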
Now we are ready to “integrate” some famous functions.
Fill out these basic antiderivatives. Note each of these examples comes directly from our knowledge of basic derivatives.
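For reference, here is a representative selection of such basic antiderivatives (the particular entries are standard choices, listed here for illustration); each one can be verified by differentiating the right-hand side:
\[ \int x^n\,dx = \frac{x^{n+1}}{n+1} + C \ (n \neq -1), \qquad \int \frac{1}{x}\,dx = \ln|x| + C, \qquad \int e^x\,dx = e^x + C, \]
\[ \int \cos x\,dx = \sin x + C, \qquad \int \sin x\,dx = -\cos x + C, \qquad \int \sec^2 x\,dx = \tan x + C, \]
\[ \int \frac{1}{1+x^2}\,dx = \arctan x + C, \qquad \int \frac{1}{\sqrt{1-x^2}}\,dx = \arcsin x + C. \]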
It may seem that one could simply memorize these antiderivatives and antidifferentiating would be as easy as differentiating. This is not the case. The issue comes up when trying to combine these functions. When taking derivatives we have the product rule and the chain rule. The analogues of these two rules are much more difficult to deal with when taking antiderivatives. However, not all is lost.
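To see the kind of difficulty that arises, consider a product such as $x\cos x$ (an example chosen here only to illustrate the point). A naive guess for an antiderivative is $\frac{x^2}{2}\sin x$, the product of antiderivatives of the two factors, but checking by differentiation gives \[ \frac{d}{dx}\Bigl(\frac{x^2}{2}\sin x\Bigr) = x\sin x + \frac{x^2}{2}\cos x \neq x\cos x, \] so the product of antiderivatives is generally not an antiderivative of the product. Sums, on the other hand, behave well.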
Consider the following example. Suppose the function $h$ is the sum of two functions, $h(x) = f(x) + g(x)$, for all $x$ in some interval $I$, and suppose we know antiderivatives of both pieces: $F$ and $G$, defined for $x$ in $I$, are antiderivatives of $f$ and $g$, respectively. In this example we see that the function $F + G$ is an antiderivative of $f + g$. In other words, “the sum of antiderivatives is an antiderivative of a sum”.
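For a concrete instance (the functions here are chosen only for illustration), take $h(x) = x + \cos x$ on $I = (-\infty, \infty)$. We know that $F(x) = \frac{x^2}{2}$ is an antiderivative of $x$ and that $G(x) = \sin x$ is an antiderivative of $\cos x$, and indeed \[ \frac{d}{dx}\Bigl(\frac{x^2}{2} + \sin x\Bigr) = x + \cos x, \] so $\frac{x^2}{2} + \sin x$ is an antiderivative of $h$.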
Is this true in general?
Let’s check whether this rule holds for any eligible pair of functions $f$ and $g$ defined on some interval $I$.
Suppose $F$ and $G$ are antiderivatives of $f$ and $g$, that is, $F'(x) = f(x)$ and $G'(x) = g(x)$ for all $x$ in some interval $I$.
Find an antiderivative of the function $f + g$.
Since $(F + G)'(x) = F'(x) + G'(x) = f(x) + g(x)$, it follows that $F + G$ is an antiderivative of $f + g$. To summarize: The sum of antiderivatives is an antiderivative of a sum.
We can write equivalently, using indefinite integrals, \[ \int \bigl(f(x) + g(x)\bigr)\,dx = \int f(x)\,dx + \int g(x)\,dx. \]
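For example (with integrands chosen here just for illustration), \[ \int \bigl(x^2 + e^x\bigr)\,dx = \int x^2\,dx + \int e^x\,dx = \frac{x^3}{3} + e^x + C. \]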
Next, we will try to prove an analogue of the constant multiple rule for derivatives. Let’s consider the following example: the function $h$ is a constant multiple of a function $f$, \[ h(x) = k\,f(x), \quad \text{for $x$ in some interval $I$}, \] where $k$ is a constant. On the other hand, we know that the function $F$, defined for $x$ in $I$, is an antiderivative of $f$.
If we differentiate the function $kF$, we get that \[ (kF)'(x) = k\,F'(x) = k\,f(x) = h(x), \quad \text{for $x$ in $I$}. \]
In other words, “a constant multiple of an antiderivative is an antiderivative of a constant multiple of a function.” Is this always true?
Let’s check whether this rule holds for any constant $k$ and any eligible function $f$ defined on some interval $I$.
Suppose $F$ is an antiderivative of $f$, that is, $F'(x) = f(x)$, for all $x$ in some interval $I$.
Find an antiderivative of the function $k\,f$.
Since $(kF)'(x) = k\,F'(x) = k\,f(x)$, it follows that $kF$ is an antiderivative of $k\,f$. To summarize: The constant multiple of an antiderivative is an antiderivative of a constant multiple of a function. We can write equivalently, using indefinite integrals, \[ \int k\,f(x)\,dx = k \int f(x)\,dx. \]
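For example (an integrand chosen here purely for illustration), since $\frac{d}{dx}\,(5\sin x) = 5\cos x$, we have \[ \int 5\cos x\,dx = 5\int \cos x\,dx = 5\sin x + C. \]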
Unfortunately, we cannot tell you how to compute every antiderivative; we view them as a sort of puzzle. Later we will learn a handful of techniques for computing antiderivatives. However, a robust and simple way to compute antiderivatives is guess-and-check.
Tips for guessing antiderivatives
Computing antiderivatives is a place where insight and rote computation meet. We cannot teach you a method that will always work. Moreover, merely understanding the examples above will probably not be enough for you to become proficient in computing antiderivatives. You must practice, practice, practice!
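Here is one small sketch of guess-and-check (the integrand is chosen only as an example). Suppose we want an antiderivative of $\cos(2x)$. A natural first guess is $\sin(2x)$; checking, $\frac{d}{dx}\,\sin(2x) = 2\cos(2x)$, which is twice what we want, so we adjust the guess to $\frac{1}{2}\sin(2x)$ and check again: $\frac{d}{dx}\bigl(\frac{1}{2}\sin(2x)\bigr) = \cos(2x)$. Hence \[ \int \cos(2x)\,dx = \frac{1}{2}\sin(2x) + C. \]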
Differential equations express relationships between functions and their rates of change.
A differential equation is simply an equation with a derivative in it. Here is an example:
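One simple possibility (used here as a running illustration) is \[ f'(x) = x, \] which asks for the functions $f$ whose derivative is the function $x$.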
When a mathematician solves a differential equation, they are finding functions satisfying the equation.
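For the illustrative equation above, solving means finding every function $f$ with $f'(x) = x$. One such function is $\frac{x^2}{2}$, and by our discussion of antiderivatives every solution must have the form \[ f(x) = \frac{x^2}{2} + C \] for some constant $C$.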
This shows us that any solution to our differential equation must be one of the functions we’ve already found. Our argument relies on some features that are special to this problem; proving that we’ve found all the possible solutions to an arbitrary differential equation is a very difficult task!
A formula that describes every member of this family of solutions at once, by means of an arbitrary constant $C$, is called a general solution of the differential equation. Since there are infinitely many solutions to a differential equation, we can impose an additional condition (say, a prescribed value of the unknown function at one point), called an initial condition. When we are asked to find a function that satisfies both the differential equation (DE) and the initial condition (IC), this is called an initial value problem (IVP). Let’s try one out.
Since all solutions of the differential equation have the same form up to an additive constant $C$, we solve an initial value problem by substituting the initial condition into the general solution and solving for $C$. The initial condition singles out exactly one value of $C$, and therefore exactly one member of the family of solutions: this is the unique solution of the initial value problem. Graphically, it is the one curve from the family of solutions that passes through the point specified by the initial condition.
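Here is a small worked instance (the equation and the initial condition are chosen only for illustration). Consider the initial value problem \[ f'(x) = 2x, \qquad f(0) = 3. \] The general solution of the differential equation is $f(x) = x^2 + C$. Substituting the initial condition gives $f(0) = 0^2 + C = 3$, so $C = 3$. Therefore the unique solution of this initial value problem is \[ f(x) = x^2 + 3. \]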