We can approximate smooth functions with polynomials.

1 Polynomials can approximate some functions

In our study of mathematics, we’ve found that some functions are easier to work with than others. For instance, if you are doing calculus, typically polynomials are “easy” to work with because they are easy to differentiate and integrate. Other functions, like

\[ f(x) = \begin{cases} \frac {\sin (x)}{x} &\text {if $x\ne 0$}\\ 1 &\text {if $x=0$} \end{cases} \]

are more difficult to work with. However, there are polynomials that mimic the behavior of \(f\) near zero.

Above we see a graph of \(f\) along with the polynomial

\[ 1-\frac {x^2}{6}+\frac {x^4}{120}. \]

As we see, this polynomial approximates \(f\) very well near zero. There are times when we would much rather work with a polynomial than any other type of function. Many of the ideas we discuss in this section should remind you of our work with linear approximation. In the case of linear approximation, we replaced a difficult function with a line, and used this line to approximate the function. Now, we want to replace our complicated function with a polynomial. This leads us to a question.
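To make this concrete, here is a quick numerical check, a minimal sketch in Python (the sample points are an arbitrary choice):

```python
import math

def f(x):
    # The function from above: sin(x)/x, extended continuously by f(0) = 1.
    return math.sin(x) / x if x != 0 else 1.0

def p(x):
    # The polynomial 1 - x^2/6 + x^4/120 shown above.
    return 1 - x**2 / 6 + x**4 / 120

# Compare f and p at a few points; the agreement is best near zero.
for x in [0.0, 0.5, 1.0, 2.0]:
    print(f"x = {x}: f(x) = {f(x):.6f}, p(x) = {p(x):.6f}")
```

Near zero the two values agree to several decimal places, and the agreement slowly degrades as \(x\) moves away from zero.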

1.1 How do we produce approximating polynomials?

Cutting straight to the point, the approximating polynomials we’ll discuss are called Taylor polynomials and Maclaurin polynomials.

Consider \(f(x) = e^x\). Which of the following is a Maclaurin polynomial for \(e^x\)?
\(1+x+x^2+x^3+x^4\)
\(1+x\)
\(1+x+\frac {x^2}{2!} + \frac {x^3}{3!}+ \frac {x^4}{4!}\)
\(1-\frac {x^2}{2!} +\frac {x^4}{4!}\)
\(x-\frac {x^3}{3!}\)
\(1\)
Check out a plot of \(e^x\) along with the plots of the correct answers above:
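If you want a numerical feel for how well a degree-four polynomial can track \(e^x\), here is a small sketch in Python evaluating the choice \(1+x+\frac {x^2}{2!} + \frac {x^3}{3!}+ \frac {x^4}{4!}\) (the evaluation point \(x=0.5\) is an arbitrary choice):

```python
import math

def p4(x):
    # The degree-4 polynomial 1 + x + x^2/2! + x^3/3! + x^4/4!.
    return sum(x**k / math.factorial(k) for k in range(5))

x = 0.5
print(p4(x))        # value of the polynomial
print(math.exp(x))  # value of e^x for comparison
```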
Consider \(f(x) = \ln (x)\). Which of the following is a Taylor polynomial centered at \(1\) for \(f\)?
\(x-1\)
\(1+x\)
\(1+x+\frac {x^2}{2!} + \frac {x^3}{3!}+ \frac {x^4}{4!}\)
\((x-1)-\frac {(x-1)^2}{2}+\frac {(x-1)^3}{3}\)
\((x-1)-\frac {(x-1)^2}{2}+\frac {(x-1)^3}{3}-\frac {(x-1)^4}{4}\)
\(1\)
Check out a plot of \(\ln (x)\) along with the plots of the correct answers above.

In the previous example, why did we choose to use \(x=1\) for the center? There are several reasons. We often like to use \(x=0\) for the center of a Taylor polynomial (which is why such polynomials have a special name). In the case of \(f(x) = \ln (x)\), however, \(f(0)\) is undefined. So, we must choose a different center. Since we have to evaluate the function at the center, any center point we choose should be a value for which we know and can easily compute the function value. In the case of \(f(x) = \ln (x)\), the value we know best is \(f(1) = 0\), so \(x=1\) is a good choice for the center. Of course, in general we can choose any center we like, but this will give us a different polynomial (and may make our calculations more difficult.)
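For reference in the computations below, the derivatives of \(\ln (x)\) at \(x=1\) produce the degree \(n\) Taylor polynomial centered at \(1\):
\[ p_n(x) = (x-1)-\frac {(x-1)^2}{2}+\frac {(x-1)^3}{3}-\cdots +(-1)^{n+1}\frac {(x-1)^n}{n}, \]
so the polynomial \(p_6\) used below is this expression with \(n=6\).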

Using \(p_6(x)\), what is the approximate value of \(\ln (1.5)\) to 6 decimal places?
\[ p_6(1.5) = \answer [tolerance=.000001]{.404688} \]
Since \(p_n(x)\) approximates \(\ln (x)\) well near \(x=1\), we approximate \(\ln (1.5) \approx p_6(1.5)\): \begin{align*} p_6(1.5) &= (1.5-1)-\frac 12(1.5-1)^2+\frac 13(1.5-1)^3-\frac 14(1.5-1)^4+\cdots \\ &\cdots +\frac 15(1.5-1)^5-\frac 16(1.5-1)^6\\ &=\frac {259}{640}\\ &\approx 0.404688. \end{align*}

This is a good approximation as a calculator shows that \(\ln (1.5) \approx 0.4055.\)

Using \(p_6(x)\), what is the approximate value of \(\ln (2)\)?
\[ p_6(2) = \answer {0.616667} \]
We approximate \(\ln (2)\) with \( p_6(2)\): \begin{align*} p_6(2) &= (2-1)-\frac 12(2-1)^2+\frac 13(2-1)^3-\frac 14(2-1)^4+\cdots \\ &\cdots +\frac 15(2-1)^5-\frac 16(2-1)^6\\ &= 1-\frac 12+\frac 13-\frac 14+\frac 15-\frac 16 \\ &= \frac {37}{60}\\ &\approx 0.616667. \end{align*}

This approximation is not terribly impressive: a handheld calculator shows that \(\ln (2) \approx 0.693147\). Surprisingly enough, even the \(20\)th degree Taylor polynomial fails to approximate \(\ln (x)\) well for \(x>2\). We’ll soon discuss why this is.
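To see this failure numerically, here is a minimal sketch in Python using the formula for \(p_n\) given above (the evaluation points are chosen for illustration):

```python
import math

def p(x, n):
    # Degree-n Taylor polynomial of ln(x) centered at 1:
    # the sum of (-1)^(k+1) (x-1)^k / k for k = 1, ..., n.
    return sum((-1)**(k + 1) * (x - 1)**k / k for k in range(1, n + 1))

for x in [1.5, 2.0, 2.5, 3.0]:
    print(f"x = {x}: p_20(x) = {p(x, 20):.4f}, ln(x) = {math.log(x):.4f}")
```

For \(x\le 2\) the polynomial tracks \(\ln (x)\) reasonably well, but beyond \(x=2\) its values are nowhere near \(\ln (x)\).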

You may be wondering how exactly Taylor polynomials and Maclaurin polynomials approximate these functions. Here’s the idea: suppose you have two functions \(f\) and \(g\), each as differentiable as we need them to be. If for some specific value \(x=c\) we have that \begin{align*} f(c) &= g(c)\\ f'(c) &= g'(c)\\ f''(c) &= g''(c)\\ f'''(c) &= g'''(c)\\ &\vdots \end{align*}

then it is reasonable to expect that

\[ f(x) \approx g(x) \]

for \(x\) near \(c\). The more derivatives we use, the better our approximation (usually) is. In that sense, we are just working with a better version of linear approximation; we could call this polynomial approximation! The Taylor and Maclaurin polynomials are “cooked up” so that their value and the values of their derivatives match those of the related function at \(x=c\). Check it out: here we see the third Maclaurin polynomial for \(f(x) = \sin (x)\):

\[ p_3(x) = x - \frac {x^3}{3!} \]

we see \begin{align*} p_3(0) = 0 &= f(0) = \sin (0)\\ p_3'(0) = 1 &= f'(0) = \cos (0)\\ p_3''(0) = 0 &= f''(0) = -\sin (0)\\ p_3'''(0) = -1 &= f'''(0) = -\cos (0)\\ p_3^{(4)}(0) = 0 &= f^{(4)}(0) = \sin (0)\\ p_3^{(5)}(0) = 0 &\ne f^{(5)}(0) = \cos (0) \end{align*}
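If you would like to check these values mechanically, here is a small symbolic sketch (it uses the sympy library, which the text itself does not reference):

```python
import sympy as sp

x = sp.symbols('x')
p3 = x - x**3 / 6       # the third Maclaurin polynomial of sin(x)
f = sp.sin(x)

# Compare the k-th derivatives of p3 and sin at x = 0 for k = 0, ..., 5.
for k in range(6):
    pk = p3 if k == 0 else sp.diff(p3, x, k)
    fk = f if k == 0 else sp.diff(f, x, k)
    print(k, pk.subs(x, 0), fk.subs(x, 0), pk.subs(x, 0) == fk.subs(x, 0))
```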

Note that in the case of sine,

\[ p_3(x) = x - \frac {x^3}{3!} \]

shares the function’s value at \(x=0\) and shares the first \(4\) derivatives, though the \(5\)th derivative is different. Let’s see a graph to help us understand what is going on.

We can see that \(p_3\) is a good approximation for \(\sin (x)\) near \(x=0\). Next, we quantify exactly how good our approximation is.

2 Taylor’s theorem

Again, let’s get to the point by stating Taylor’s theorem (which is a generalization of the mean value theorem): suppose \(f\) is a function whose first \(n+1\) derivatives exist on an interval \(I\) containing \(c\), and let \(x\) be in \(I\). Then there is a number \(z\) between \(c\) and \(x\) such that \begin{align*} f(x) &= f(c) + f'(c)(x-c) + \frac {f''(c)}{2!}(x-c)^2 + \cdots + \frac {f^{(n)}(c)}{n!}(x-c)^n + R_n(x),\\ \text {where}\quad R_n(x) &= \frac {f^{(n+1)}(z)}{(n+1)!}(x-c)^{n+1}. \end{align*} Moreover, \[ |R_n(x)| \le \frac {\max \left |f^{(n+1)}(z)\right |}{(n+1)!}\,|x-c|^{n+1}, \] where the maximum is taken over \(z\) between \(c\) and \(x\).

The first part of Taylor’s theorem states that \(f(x) = p_n(x) + R_n(x)\), where \(p_n(x)\) is the \(n\)th order Taylor polynomial and \(R_n(x)\) is the remainder, or error, in the Taylor approximation. The second part gives bounds on how big that error can be. If the \((n+1)\)th derivative is large, the error may be large; if \(x\) is far from \(c\), the error may also be large. However, the \((n+1)!\) term in the denominator tends to ensure that the error gets smaller as \(n\) increases.

The following example computes error estimates for the approximations of \(\ln (1.5)\) and \(\ln (2)\) made earlier.
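Here is a sketch of that error computation in Python. It relies on the fact that the \((n+1)\)th derivative of \(\ln (x)\) is \(\pm n!/x^{n+1}\), which on the interval \([1,x]\) is largest in absolute value at \(z=1\):

```python
def ln_error_bound(x, n):
    # For f(x) = ln(x) centered at c = 1, |f^(n+1)(z)| = n!/z^(n+1) is
    # largest at z = 1 on [1, x], so Taylor's theorem bounds the error by
    # n!/(n+1)! * (x-1)^(n+1) = (x-1)^(n+1)/(n+1).
    return (x - 1)**(n + 1) / (n + 1)

print(ln_error_bound(1.5, 6))  # about 0.0011, a bound on |ln(1.5) - p_6(1.5)|
print(ln_error_bound(2.0, 6))  # about 0.14, a bound on |ln(2) - p_6(2)|
```

Both bounds are consistent with what we observed: the approximation of \(\ln (1.5)\) was off by roughly \(0.0008\), while the approximation of \(\ln (2)\) was off by roughly \(0.08\).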

We practice again. This time, we use Taylor’s theorem to find an \(n\) that guarantees our approximation is within a given tolerance.
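As a sketch of how such a search might go (the target accuracy of \(0.0001\) for \(\ln (1.5)\) is just an illustrative choice):

```python
# Find the smallest n for which Taylor's theorem guarantees that the
# degree-n Taylor polynomial of ln(x) centered at 1 approximates ln(1.5)
# to within 0.0001; as above, the error is bounded by (x-1)^(n+1)/(n+1).
x, tolerance = 1.5, 0.0001
n = 1
while (x - 1)**(n + 1) / (n + 1) >= tolerance:
    n += 1
print(n)  # prints 9 for this tolerance
```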

3 Connections to differential equations

Our final example gives a brief introduction to using Taylor polynomials to solve differential equations.
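As one illustration of the idea (not necessarily the worked example intended here), consider the equation \(y' = y\) with \(y(0)=1\). The equation itself tells us every derivative of the unknown solution at \(0\): \(y'(0) = y(0) = 1\), and differentiating the equation repeatedly gives \(y''(0) = y'(0) = 1\), \(y'''(0) = 1\), and so on. The Maclaurin polynomial built from this information,
\[ p_4(x) = 1 + x + \frac {x^2}{2!} + \frac {x^3}{3!} + \frac {x^4}{4!}, \]
approximates the actual solution \(y = e^x\) even though we never solved the equation directly.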

Taylor polynomials are very useful approximations in two basic situations:

(a)
When \(f\) is known, but perhaps “hard” to compute directly. For instance, we can define \(y=\cos (x)\) as either the ratio of sides of a right triangle (“adjacent over hypotenuse”) or with the unit circle. However, neither of these provides a convenient way of computing \(\cos (2)\). A Taylor polynomial of sufficiently high degree can provide a reasonable method of computing such values using only operations usually hard-wired into a computer (\(+\), \(-\), \(\times \) and \(\div \)); a short sketch of such a computation appears after this list. However, even though Taylor polynomials could be used in calculators and computers to calculate values of trigonometric functions, in practice they generally aren’t. Other more efficient and accurate methods have been developed, such as the CORDIC algorithm.
(b)
When \(f\) is not known, but information about its derivatives is known. This occurs more often than one might think, especially in the study of differential equations.
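Here is the kind of computation item (a) has in mind, a minimal sketch in Python that uses only the four arithmetic operations inside the loop (stopping at degree \(12\) is an arbitrary choice):

```python
# Approximate cos(2) with the Maclaurin polynomial of cosine,
# the sum of (-1)^k x^(2k) / (2k)! for k = 0, ..., 6,
# built using only +, -, * and /.
x = 2.0
term, total = 1.0, 1.0                      # the k = 0 term
for k in range(1, 7):
    # Each term is the previous one times -x^2 / ((2k-1)(2k)).
    term *= -x * x / ((2 * k - 1) * (2 * k))
    total += term
print(total)  # about -0.41615, agreeing with cos(2) to roughly six decimal places
```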