We study Taylor and Maclaurin series.
- The Taylor series of $f(x)$, centered at $x=c$, is $\displaystyle\sum_{n=0}^{\infty} \frac{f^{(n)}(c)}{n!}(x-c)^n$.
- Setting $c=0$ gives the Maclaurin series of $f(x)$: $\displaystyle\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}x^n$.
The difference between a Taylor polynomial and a Taylor series is that the former is a polynomial, containing only a finite number of terms, whereas the latter is a series, a summation of an infinite set of terms, any number of which (including an infinite number) may be zero. When creating the Taylor polynomial of degree $n$ for a function $f$ at $x=c$, we needed to evaluate $f$, and the first $n$ derivatives of $f$, at $x=c$. When creating the Taylor series of $f$, it helps to find a pattern that describes the $n$th derivative of $f$ at $x=c$. Time for examples!
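As a quick illustration of finding such a pattern (a sketch, separate from the examples referenced here), take $f(x) = e^x$. Every derivative is $f^{(n)}(x) = e^x$, so $f^{(n)}(0) = 1$ for all $n$, and the Maclaurin series is
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}x^n = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots.$$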
Let’s see an example that is not centered at $x=0$:
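The particular function used in this example may differ, but as a sketch of the idea, consider $f(x) = \frac{1}{x}$ centered at $x=1$. Here $f^{(n)}(x) = \frac{(-1)^n\, n!}{x^{n+1}}$, so $f^{(n)}(1) = (-1)^n\, n!$, and the Taylor series centered at $x=1$ is
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(1)}{n!}(x-1)^n = \sum_{n=0}^{\infty} (-1)^n (x-1)^n = 1 - (x-1) + (x-1)^2 - \cdots.$$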
Finally, sometimes Taylor’s formula may not be the best way to compute the Taylor series.
Hmm. This is getting messy. Let’s try to find the Taylor series via known power series. We know that
$$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n \quad\text{for}\quad -1<x<1,$$
so setting $x = -x^2$ we now have
$$\frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n}$$
when $-1 < -x^2 < 1$, and hence when $-1 < x < 1$. Since
$$\frac{d}{dx}\arctan(x) = \frac{1}{1+x^2},$$
we can find the desired power series by integrating. Write with me
$$\arctan(x) = \int \frac{dx}{1+x^2} = \int \sum_{n=0}^{\infty} (-1)^n x^{2n}\, dx = C + \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{2n+1};$$
since $\arctan(0)=0$, we see $C=0$, and we have our desired power series, which converges with radius of convergence $R=1$. However, note the interval of convergence may be different, and it is in this case. First note that our power series can be written in summation notation as
$$\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{2n+1}.$$
If $x=1$ or $x=-1$, we can see that this series is
$$\sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1} \quad\text{or}\quad \sum_{n=0}^{\infty} \frac{(-1)^{n+1}}{2n+1}.$$
In both cases, the series converges by the alternating series test. Hence the interval of convergence is $[-1,1]$.
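One pleasant consequence of this expansion (granting, as above, that the function in question is $\arctan(x)$ and that the series still represents it at the endpoint $x=1$) is the Leibniz formula
$$\frac{\pi}{4} = \arctan(1) = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots.$$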
Above we implicitly used the following theorem:
This is just saying that if you know a power series for a function, then using Taylor’s formula will do nothing but give you the power series.
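As a sketch of what this is saying, take $f(x) = \frac{1}{1-x} = \sum_{n=0}^{\infty} x^n$ for $|x|<1$. Computing derivatives directly gives
$$f^{(n)}(x) = \frac{n!}{(1-x)^{n+1}}, \qquad\text{so}\qquad \frac{f^{(n)}(0)}{n!} = 1,$$
and Taylor’s formula reproduces exactly the coefficients of the power series we started with.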
Here is a somewhat unsatisfying example:
- (a) Compute the Maclaurin series of $f$.
- (b) Find the radius of convergence.
- (c) Is the Maclaurin series for $f$ equal to $f$ on the interval of convergence?
A more satisfying example is the following:
It is within your power to show that $f$ is infinitely differentiable everywhere, and to prove that $f^{(n)}(0) = 0$ for all $n$. This is quite involved, and we will not do it here. If you have the gumption and the willpower, it would make a fantastic exercise.
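The classic function exhibiting this behavior, and quite possibly the one intended here, is
$$f(x) = \begin{cases} e^{-1/x^2} & x \ne 0,\\ 0 & x = 0,\end{cases}$$
for which $f^{(n)}(0) = 0$ for every $n$; for instance,
$$f'(0) = \lim_{h\to 0} \frac{e^{-1/h^2} - 0}{h} = 0.$$
So the Maclaurin series is identically zero, even though $f(x) > 0$ for every $x \ne 0$.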
We will find that “most of the time” they are equal, but we need to consider the conditions that allow us to conclude this. Taylor’s theorem states that the error between a function $f(x)$ and its $n$th-degree Taylor polynomial $p_n(x)$ is $R_n(x)$:
$$f(x) = p_n(x) + R_n(x),$$
and that
$$|R_n(x)| \le \frac{M}{(n+1)!}\,|x-c|^{n+1},$$
where $M$ is the maximum value of $\left|f^{(n+1)}(z)\right|$ on the interval between $c$ and $x$. If $R_n(x)$ goes to $0$ for each $x$ in an interval $I$ as $n$ approaches infinity, we conclude that the function is equal to its Taylor series expansion. This leads us to our next theorem:
We’ll work a representative example of this theorem to see what is going on; the general case is much the same.
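To give a flavor of such an argument (this may not be the exact example worked here), take $f(x) = \sin x$. Every derivative of $\sin x$ is $\pm\sin x$ or $\pm\cos x$, so $\left|f^{(n+1)}(z)\right| \le 1$ for all $z$, and Taylor’s theorem gives
$$|R_n(x)| \le \frac{1}{(n+1)!}\,|x|^{n+1} \longrightarrow 0 \quad\text{as } n \to \infty$$
for every fixed $x$. Hence $\sin x$ equals its Maclaurin series, $\sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}$, for all real $x$.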
There is good news. A function that is equal to its Taylor series, centered at any point $c$ in the domain of $f$, is said to be an analytic function, and most, if not all, functions that we encounter within this course are analytic functions. Generally speaking, any function that one creates with elementary functions (polynomials, exponentials, trigonometric functions, etc.) that is not piecewise defined is probably analytic. For most functions, we assume the function is equal to its Taylor series on the series’ interval of convergence and only check the remainder (as above) when we suspect something may not work as expected.