
We can approximate smooth functions with polynomials.

### Polynomials can approximate some functions

In our study of mathematics, we’ve found that some functions are easier to work with than others. For instance, if you are doing calculus, polynomials are typically “easy” to work with because they are easy to differentiate and integrate. Other functions, like $f(x) = e^x$, are more difficult to work with. However, there are polynomials that mimic the behavior of $f$ near zero.

Above we see a graph of $f$ along with an approximating polynomial. As we see, this polynomial approximates $f$ very well near zero. There are times when we would much rather work with a polynomial than any other type of function. Many of the ideas we discuss in this section should remind you of our work with linear approximation. In the case of linear approximation, we replaced a difficult function with a line, and used this line to approximate the function. Now, we want to replace our complicated function with a polynomial. This leads us to a question.

#### How do we produce approximating polynomials?

Cutting straight to the point, the approximating polynomials we’ll discuss are called Taylor polynomials and Maclaurin polynomials.

Consider $f(x) = e^x$. Which of the following is a Maclaurin polynomial for $e^x$?
- $1+x+x^2+x^3+x^4$
- $1+x$
- $1+x+\frac {x^2}{2!} + \frac {x^3}{3!}+ \frac {x^4}{4!}$
- $1-\frac {x^2}{2!} +\frac {x^4}{4!}$
- $x-\frac {x^3}{3!}$
- $1$
Check out a plot of $e^x$ along with the plots of the correct answers above:
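As a quick numerical sketch (our own check, not part of the text; `maclaurin_exp` is a name we’ve introduced), we can evaluate the degree-$n$ Maclaurin polynomial of $e^x$ and compare it with the true value:

```python
import math

def maclaurin_exp(x, n):
    """Evaluate the degree-n Maclaurin polynomial of e^x:
    1 + x + x^2/2! + ... + x^n/n!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# Near zero, even a low-degree polynomial is accurate:
print(maclaurin_exp(0.5, 4))  # close to e^0.5 ≈ 1.6487
print(math.exp(0.5))
```

Raising `n` shrinks the error rapidly, thanks to the factorials in the denominators.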
Consider $f(x) = \ln (x)$. Which of the following is a Taylor polynomial centered at $1$ for $f$?
- $x-1$
- $1+x$
- $1+x+\frac {x^2}{2!} + \frac {x^3}{3!}+ \frac {x^4}{4!}$
- $(x-1)-\frac {(x-1)^2}{2}+\frac {(x-1)^3}{3}$
- $(x-1)-\frac {(x-1)^2}{2}+\frac {(x-1)^3}{3}-\frac {(x-1)^4}{4}$
- $1$
Check out a plot of $\ln (x)$ along with the plots of the correct answers above.

In the previous example, why did we choose to use $x=1$ for the center? There are several reasons. We often like to use $x=0$ for the center of a Taylor polynomial (which is why such polynomials have a special name). In the case of $f(x) = \ln (x)$, however, $f(0)$ is undefined, so we must choose a different center. Since we have to evaluate the function at the center, any center we choose should be a value at which we know, and can easily compute, the function’s value. In the case of $f(x) = \ln (x)$, the value we know best is $f(1) = 0$, so $x=1$ is a good choice for the center. Of course, in general we can choose any center we like, but this will give us a different polynomial (and may make our calculations more difficult).

Using $p_6(x)$, what is the approximate value of $\ln (1.5)$ to 6 decimal places?
Since the degree-$6$ Taylor polynomial centered at $1$, $p_6(x) = (x-1)-\frac {(x-1)^2}{2}+\frac {(x-1)^3}{3}-\frac {(x-1)^4}{4}+\frac {(x-1)^5}{5}-\frac {(x-1)^6}{6}$, approximates $\ln (x)$ well near $x=1$, we approximate $\ln (1.5) \approx p_6(1.5)$:

$$p_6(1.5) = 0.5 - \frac {0.5^2}{2} + \frac {0.5^3}{3} - \frac {0.5^4}{4} + \frac {0.5^5}{5} - \frac {0.5^6}{6} \approx 0.404688.$$
This is a good approximation as a calculator shows that $\ln (1.5) \approx 0.4055.$
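To double-check the arithmetic, here is a short numerical sketch (our own code, with $p_6$ written out term by term):

```python
import math

def p6(x):
    # Degree-6 Taylor polynomial of ln(x) centered at 1.
    u = x - 1
    return u - u**2/2 + u**3/3 - u**4/4 + u**5/5 - u**6/6

print(p6(1.5))        # ≈ 0.404688
print(math.log(1.5))  # ≈ 0.405465
```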

Using $p_6(x)$, what is the approximate value of $\ln (2)$?
We approximate $\ln (2)$ with $p_6(2)$:

$$p_6(2) = 1 - \frac {1}{2} + \frac {1}{3} - \frac {1}{4} + \frac {1}{5} - \frac {1}{6} \approx 0.616667.$$
This approximation is not terribly impressive: a handheld calculator shows that $\ln (2) \approx 0.693147$. Surprisingly enough, even the $20$th-degree Taylor polynomial fails to approximate $\ln (x)$ well for $x>2$. We’ll soon discuss why this is.
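We can see this failure numerically (a sketch of our own; `taylor_ln` is a helper name we’ve introduced):

```python
import math

def taylor_ln(x, n):
    """Degree-n Taylor polynomial of ln(x) centered at 1."""
    return sum((-1)**(k + 1) * (x - 1)**k / k for k in range(1, n + 1))

print(taylor_ln(2.0, 6))    # ≈ 0.616667, versus ln(2) ≈ 0.693147
print(taylor_ln(2.5, 20))   # wildly wrong: raising the degree does not help for x > 2
print(math.log(2.5))
```

For $x > 2$ the terms $(x-1)^k/k$ grow without bound, so adding more of them only makes things worse.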

You may be wondering how exactly Taylor polynomials and Maclaurin polynomials approximate these functions. Here’s the idea: suppose you have two functions $f$ and $g$, each as differentiable as we need them to be. If for some specific value $x=c$ we have that

$$f(c) = g(c), \quad f'(c) = g'(c), \quad f''(c) = g''(c), \quad \dots, \quad f^{(n)}(c) = g^{(n)}(c),$$

then it makes sense that $f(x) \approx g(x)$ for all $x$ near $c$. The more derivatives we use, the better our approximation (usually) is. In that sense, we are just working with a better version of linear approximation – we could call this polynomial approximation! The Taylor and Maclaurin polynomials are “cooked up” so that their value and the values of their derivatives equal the values of the related function at $x=c$. Check it out: here is the third Maclaurin polynomial for $f(x) = \sin (x)$:

$$p_3(x) = x - \frac {x^3}{3!}.$$

Note that in the case of sine, $p_3$ shares the function’s value at $x=0$ and shares the first $4$ derivatives there, though the $5$th derivative is different. Let’s see a graph to help us understand what is going on.

We can see that $p_3$ is a good approximation for $\sin (x)$ near $x=0$. Next, we quantify exactly how good our approximation is.
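A quick numerical comparison (our own sketch) shows the same picture the graph does: the approximation is excellent near $0$ and degrades as $x$ grows.

```python
import math

def p3(x):
    # Third Maclaurin polynomial of sin(x): x - x^3/3!.
    return x - x**3 / 6

for x in (0.1, 0.5, 1.0, 2.0):
    print(x, math.sin(x), p3(x))  # very close near 0, drifting apart as x grows
```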

### Taylor’s theorem

Again, let’s get to the point by stating Taylor’s theorem (which is a generalization of the mean value theorem): if $f$ is differentiable $n+1$ times on an interval containing $c$ and $x$, then

$$f(x) = p_n(x) + R_n(x), \qquad R_n(x) = \frac {f^{(n+1)}(z)}{(n+1)!}(x-c)^{n+1}$$

for some $z$ between $c$ and $x$. Consequently,

$$|R_n(x)| \le \frac {\max |f^{(n+1)}(z)|}{(n+1)!}\,|x-c|^{n+1},$$

where the maximum is taken over $z$ between $c$ and $x$.

The first part of Taylor’s theorem states that $f(x) = p_n(x) + R_n(x)$, where $p_n(x)$ is the $n$th order Taylor polynomial and $R_n(x)$ is the remainder, or error, in the Taylor approximation. The second part gives bounds on how big that error can be. If the $(n+1)$th derivative is large, the error may be large; if $x$ is far from $c$, the error may also be large. However, the $(n+1)!$ term in the denominator tends to ensure that the error gets smaller as $n$ increases.

The following example computes error estimates for the approximations of $\ln (1.5)$ and $\ln (2)$ made earlier.
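A sketch of how such an estimate goes (the specific numbers here are our own computation, using the Lagrange form of the remainder, $R_n(x) = \frac {f^{(n+1)}(z)}{(n+1)!}(x-c)^{n+1}$): for $f(x) = \ln (x)$ we have $f^{(n+1)}(x) = (-1)^n \frac {n!}{x^{n+1}}$, so on $[1, 1.5]$ the seventh derivative is largest in magnitude at $z=1$, where $|f^{(7)}(z)| = 6!$. Taylor’s theorem then gives
$$|R_6(1.5)| \le \frac {6!}{7!}(0.5)^7 = \frac {(0.5)^7}{7} \approx 0.00112,$$
consistent with the actual error $|\ln (1.5) - p_6(1.5)| \approx 0.00078$. The same bound at $x=2$ is only $|R_6(2)| \le \frac {1}{7} \approx 0.143$, which is why the approximation of $\ln (2)$ was so much worse.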

We practice again. This time, we use Taylor’s theorem to find an $n$ that guarantees our approximation is within a certain amount of the actual value.
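As a sketch of how such a computation might go (our own example): suppose we want $p_n(2)$ to approximate $\ln (2)$ to within $0.001$. With $f(x) = \ln (x)$ centered at $c=1$, we have $|f^{(n+1)}(z)| = \frac {n!}{z^{n+1}} \le n!$ on $[1,2]$, so
$$|R_n(2)| \le \frac {n!}{(n+1)!}\,|2-1|^{n+1} = \frac {1}{n+1}.$$
Requiring $\frac {1}{n+1} < 0.001$ forces $n \ge 1000$: the Taylor polynomial of $\ln (x)$ converges very slowly at $x=2$, which helps explain the poor approximations seen earlier.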

### Connections to differential equations

Our final example gives a brief introduction for using Taylor polynomials to solve differential equations.

Taylor polynomials are very useful approximations in two basic situations:

(a) When $f$ is known, but perhaps “hard” to compute directly. For instance, we can define $y=\cos (x)$ either as the ratio of sides of a right triangle (“adjacent over hypotenuse”) or with the unit circle. However, neither of these provides a convenient way of computing $\cos (2)$. A Taylor polynomial of sufficiently high degree can provide a reasonable method of computing such values using only operations usually hard-wired into a computer ($+$, $-$, $\times$ and $\div$). However, even though Taylor polynomials could be used in calculators and computers to calculate values of trigonometric functions, in practice they generally aren’t: other, more efficient and accurate methods have been developed, such as the CORDIC algorithm.

(b) When $f$ is not known, but information about its derivatives is known. This occurs more often than one might think, especially in the study of differential equations.
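As a small illustration of situation (b) (our own example; the equation $y'=y$ with $y(0)=1$ is an assumption, not the example from the text), a differential equation itself can determine the Taylor coefficients of the unknown solution:

```python
import math

# Solve y' = y, y(0) = 1 with a Maclaurin polynomial.
# Writing y = a0 + a1*x + a2*x^2 + ..., matching coefficients in y' = y
# forces (k+1) * a_{k+1} = a_k.
def taylor_coeffs(n):
    a = [1.0]                      # a0 = y(0) = 1
    for k in range(n):
        a.append(a[k] / (k + 1))   # a_{k+1} = a_k / (k+1)
    return a

coeffs = taylor_coeffs(6)
y = lambda x: sum(c * x**k for k, c in enumerate(coeffs))
print(y(1.0))         # ≈ e ≈ 2.71828, since the true solution is e^x
print(math.exp(1.0))
```

The recursion recovers the coefficients $1/k!$ of $e^x$, the true solution, without ever evaluating $e^x$ itself.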