
There is a nice result for approximating the remainder for series that converge by the integral test.

When we have a convergent geometric series or a convergent telescoping series, we can find an explicit formula for the terms in the sequence of remainders since we can find an explicit formula for the terms in the sequence of partial sums. One of the other important convergence tests we have studied so far is the integral test.

The key to proving that a series converges by the integral test is to note that if the terms of $\{a_n\}_{n=n_0}^{\infty}$ are eventually positive, then the sequence of partial sums $\{s_n\}_{n=n_0}^{\infty}$ will be eventually increasing.

When $n_0=1$, and we have the assumptions for the integral test, the picture to keep in mind is below.

When the improper integral converges, it can be used to establish an upper bound for $\{s_n\}_{n=n_0}^{\infty}$. This means that $\{s_n\}_{n=n_0}^{\infty}$ will be bounded and monotonic and thus have a limit, which we can determine without finding an explicit formula for $s_n$! From the picture, it should also be clear that the series and the improper integral do not have the same value, since the series is represented by the sum of the areas of all of the rectangles, whereas the improper integral is represented by the area under the curve.
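To make the comparison concrete, here is a minimal numerical sketch. The function $f(x)=1/x^2$ is an assumed example (not the figure from the text), chosen because its integrals can be computed by hand: $\int_1^{n+1} x^{-2} \d x = 1 - \tfrac{1}{n+1}$.

```python
# Hedged sketch (assumed example f(x) = 1/x^2): each term 1/k^2 is the area
# of a width-1 rectangle sitting above the decreasing curve on [k, k+1], so
# the partial sum exceeds the corresponding integral -- the two quantities
# are related but not equal.
n = 1000
s_n = sum(1.0 / k**2 for k in range(1, n + 1))   # sum of rectangle areas
integral = 1.0 - 1.0 / (n + 1)                    # int_1^{n+1} x^{-2} dx, by hand
assert s_n > integral
```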

As such, we make the following important observation.

When the assumptions for the integral test are met, we can use the integral test to determine if a series converges, but we cannot ever use it to find the value to which the series converges!

What, then, should we do? Thankfully, the integral test comes with a nice remainder result, which we will state and then explore in the context of a familiar example.

### Remainders and the Integral Test

Suppose the hypotheses of the integral test are met: $a_k = f(k)$, where $f$ is a positive, decreasing function on $[n_0,\infty)$ and $\int_{n_0}^{\infty} f(x) \d x$ converges. Then the remainder $r_n = \sum_{k=n_0}^{\infty} a_k - s_n$ satisfies

$$\int_{n+1}^{\infty} f(x) \d x \leq r_n \leq \int_{n}^{\infty} f(x) \d x,$$

and consequently

$$s_n + \int_{n+1}^{\infty} f(x) \d x \leq \sum_{k=n_0}^{\infty} a_k \leq s_n + \int_{n}^{\infty} f(x) \d x.$$

Note that we actually have two results here.

The first set of inequalities gives bounds for the error.

• The inequality $\int _{n+1}^{\infty } f(x) \d x \leq r_n$ gives us a lower bound for the error; this means we know that the error made using $\sum _{k=n_0}^n a_k$ is at least the value of $\int _{n+1}^{\infty } f(x) \d x$. That is, if we use $\sum _{k=n_0}^n a_k$ to approximate $\sum _{k=n_0}^\infty a_k$:
$$\sum _{k=n_0}^\infty a_k - \sum _{k=n_0}^n a_k = r_n \geq \int _{n+1}^{\infty } f(x) \d x.$$

• The inequality $r_n \leq \int _{n}^{\infty } f(x) \d x$ gives us an upper bound for the error; this means we know that the error we make in our approximation can be no more than the value of $\int _{n}^{\infty } f(x) \d x$. That is, if we use $\sum _{k=n_0}^n a_k$ to approximate $\sum _{k=n_0}^\infty a_k$:
$$\sum _{k=n_0}^\infty a_k - \sum _{k=n_0}^n a_k = r_n \leq \int _{n}^{\infty } f(x) \d x.$$

The second result gives us a sharper estimate for the value of the series than the usual estimate using just $s_n$.

Since we have a positivity assumption on the terms of the series, note that this means that for every $n$, $s_n$ will be an underestimate. Since we have a result for the minimum error made, we can add it to the value of $s_n$ to obtain the smallest possible value of the series.
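As an illustration of how the two bounds bracket the value of a series, here is a hedged numerical sketch. The series $\sum_{k=1}^{\infty} 1/k^2$ (exact value $\pi^2/6$) is an assumed example, chosen so that the tail integral $\int_n^{\infty} x^{-2} \d x = 1/n$ can be computed by hand; the helper names `partial_sum` and `tail_integral` are hypothetical.

```python
import math

def partial_sum(n):
    """s_n = sum of 1/k^2 for k = 1..n (assumed example series)."""
    return sum(1.0 / k**2 for k in range(1, n + 1))

def tail_integral(n):
    """int_n^inf x^{-2} dx = 1/n, computed analytically."""
    return 1.0 / n

n = 10
s_n = partial_sum(n)
lower = s_n + tail_integral(n + 1)   # s_n plus the minimum possible error
upper = s_n + tail_integral(n)       # s_n plus the maximum possible error

exact = math.pi**2 / 6               # known value, used only to check the bracket
assert lower <= exact <= upper
```

Note that $s_n$ alone (about $1.5498$ for $n=10$) is indeed an underestimate of $\pi^2/6 \approx 1.6449$, while the bracket $[\,\text{lower},\,\text{upper}\,]$ pins the value down far more tightly.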

Before commenting further, let’s explore how this test works in the context of an example we have seen in previous sections.

Wait a minute! We wanted to approximate the series in the last example to within $.01$ of its actual value, but the actual bounds for the error led us to find

We thus actually know the value to within $.0001$ (and can verify this since we are given the actual value of the series). What’s going on here?

• The inequality $\int _{n+1}^{\infty } f(x) \d x \leq r_n$ gives us the minimum possible error made.
• The inequality $r_n \leq \int _{n}^{\infty } f(x) \d x$ gives us the maximum possible error made.

Since we have a positivity assumption on the terms, note that this means that for every $n$, $s_n$ will be an underestimate. This leads to an important observation: if we want to approximate a convergent series to within $\epsilon$ of its actual value, we can require that the difference between the upper estimate for the series and the lower estimate be no more than $\epsilon$. Note that for every $n$,
$$s_n + \int _{n+1}^{\infty } f(x) \d x \leq \sum _{k=n_0}^{\infty } a_k \leq s_n + \int _{n}^{\infty } f(x) \d x.$$

If the difference between the left and right sides of the inequality is less than $\epsilon$, then the infinite series must be within $\epsilon$ of both its minimum possible value (the left-hand side) and its maximum possible value (the right-hand side).

Now, note that

$$\left(s_n + \int _{n}^{\infty } f(x) \d x\right) - \left(s_n + \int _{n+1}^{\infty } f(x) \d x\right) = \int _{n}^{\infty } f(x) \d x - \int _{n+1}^{\infty } f(x) \d x = \int _{n}^{n+1} f(x) \d x.$$

In going from the penultimate step to the last, we are justified in subtracting the integrals because both converge. In practice, this greatly reduces the number of terms needed to approximate the value of a series.

To help gain some intuition and see how this process works, let’s see how this sharpens our estimate for $N$ in the last example.
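Without quoting the example itself, a small sketch suggests the scale of the improvement. The series $\sum_{k=1}^{\infty} 1/k^2$ with $\epsilon = .01$ is an assumption here, and the $N$ values below are computed by this sketch, not taken from the text.

```python
# Hedged comparison for the assumed series sum 1/k^2 with eps = 0.01.
eps = 0.01

# Using s_N alone: the error satisfies r_N <= int_N^inf x^{-2} dx = 1/N,
# so we need 1/N <= eps.
N_plain = 1
while 1.0 / N_plain > eps:
    N_plain += 1

# Using both bounds: the bracket has width int_N^{N+1} x^{-2} dx
# = 1/N - 1/(N+1) = 1/(N(N+1)), so we need 1/(N(N+1)) <= eps.
N_sharp = 1
while 1.0 / (N_sharp * (N_sharp + 1)) > eps:
    N_sharp += 1

assert N_plain == 100   # 100 terms needed with the crude bound
assert N_sharp == 10    # only 10 terms needed with the bracket
```

Under these assumptions, requiring the bracket width to be at most $\epsilon$ cuts the number of terms from $100$ to $10$.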

### Summary

When we can establish that a series $\sum _{k=n_0}^{\infty } a_k$ converges, but we do not have an explicit formula for $s_n$, we cannot find the exact value of the series. However, we can sometimes approximate it by considering the sequence of remainders. There are two important types of questions we have asked about remainders thus far.

• How bad is the error made when we approximate a convergent infinite series by its first several terms?
• How many terms must we use to approximate the value of a convergent series to within a desired precision?

In some cases, we use a formula or bounds for the remainder to answer both of these questions, but when the series meets the hypotheses of the integral test, we have a much more efficient way to estimate its value. We attack the previous questions in the following manner.

• To approximate $\sum _{k=n_0}^{\infty } a_k$ using $\sum _{k=n_0}^{N} a_k$ and the error bounds, do the following.
• Compute $s_N=\sum _{k=n_0}^{N} a_k$.
• Compute the minimum error from $r_N \geq \int _{N+1}^\infty f(x) \d x$.
• Compute the maximum error from $r_N \leq \int _{N}^\infty f(x) \d x$.

The approximate value for the series is then found by putting these together.
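The three steps above can be sketched as a small function. The name `bracket_series` is a hypothetical helper, and the caller must supply `tail_integral(n)` $= \int_n^{\infty} f(x) \d x$, computed by hand; the series $\sum_{k=1}^{\infty} 1/k^2$ used to exercise it is an assumption.

```python
import math

def bracket_series(f, tail_integral, n0, N):
    """Bracket sum_{k=n0}^inf f(k) for positive, decreasing f, using the
    integral-test error bounds (hypothetical helper, not from the text)."""
    s_N = sum(f(k) for k in range(n0, N + 1))   # step 1: partial sum s_N
    min_err = tail_integral(N + 1)              # step 2: minimum error, int_{N+1}^inf f
    max_err = tail_integral(N)                  # step 3: maximum error, int_N^inf f
    return s_N + min_err, s_N + max_err         # the series lies in this interval

# Assumed example: f(x) = 1/x^2, with int_n^inf x^{-2} dx = 1/n.
lo, hi = bracket_series(lambda k: 1.0 / k**2, lambda n: 1.0 / n, 1, 50)
assert lo <= math.pi**2 / 6 <= hi
```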

• To specify the value of a convergent series $\sum _{k=n_0}^{\infty } a_k$ to within a desired precision of $\epsilon$, do the following.
• Find a value for $N$ by requiring that $\int _N^{N+1} f(x) \d x \leq \epsilon$.
• Compute $s_N=\sum _{k=n_0}^{N} a_k$.
• Approximate $\sum _{k=n_0}^{\infty } a_k$ by noting that
$$s_N + \int _{N+1}^{\infty } f(x) \d x \leq \sum _{k=n_0}^{\infty } a_k \leq s_N + \int _{N}^{\infty } f(x) \d x.$$

Choose a value in this interval as the approximate value of the series. Frequently, the midpoint provides a good choice.
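Putting the whole recipe together, here is a hedged sketch for the assumed example $\sum_{k=1}^{\infty} 1/k^2$ (exact value $\pi^2/6$), using the hand-computed integral $\int_N^{N+1} x^{-2} \d x = \tfrac{1}{N} - \tfrac{1}{N+1}$.

```python
import math

eps = 0.01

# Step 1: find N with int_N^{N+1} f(x) dx <= eps.
N = 1
while 1.0 / N - 1.0 / (N + 1) > eps:
    N += 1

# Step 2: compute s_N.
s_N = sum(1.0 / k**2 for k in range(1, N + 1))

# Step 3: the series lies in [s_N + 1/(N+1), s_N + 1/N]; take the midpoint.
estimate = s_N + 0.5 * (1.0 / (N + 1) + 1.0 / N)

# The midpoint is within eps of the true value (pi^2/6 used only to check).
assert abs(estimate - math.pi**2 / 6) <= eps
```

With these assumptions, $N = 10$ already suffices, and the midpoint estimate lands well within $.01$ of $\pi^2/6$.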