$\newenvironment {prompt}{}{} \newcommand {\ungraded }{} \newcommand {\todo }{} \newcommand {\oiint }{{\large \bigcirc }\kern -1.56em\iint } \newcommand {\mooculus }{\textsf {\textbf {MOOC}\textnormal {\textsf {ULUS}}}} \newcommand {\npnoround }{\nprounddigits {-1}} \newcommand {\npnoroundexp }{\nproundexpdigits {-1}} \newcommand {\npunitcommand }{\ensuremath {\mathrm {#1}}} \newcommand {\RR }{\mathbb R} \newcommand {\R }{\mathbb R} \newcommand {\N }{\mathbb N} \newcommand {\Z }{\mathbb Z} \newcommand {\sagemath }{\textsf {SageMath}} \newcommand {\d }{\mathop {}\!d} \newcommand {\l }{\ell } \newcommand {\ddx }{\frac {d}{\d x}} \newcommand {\zeroOverZero }{\ensuremath {\boldsymbol {\tfrac {0}{0}}}} \newcommand {\inftyOverInfty }{\ensuremath {\boldsymbol {\tfrac {\infty }{\infty }}}} \newcommand {\zeroOverInfty }{\ensuremath {\boldsymbol {\tfrac {0}{\infty }}}} \newcommand {\zeroTimesInfty }{\ensuremath {\small \boldsymbol {0\cdot \infty }}} \newcommand {\inftyMinusInfty }{\ensuremath {\small \boldsymbol {\infty -\infty }}} \newcommand {\oneToInfty }{\ensuremath {\boldsymbol {1^\infty }}} \newcommand {\zeroToZero }{\ensuremath {\boldsymbol {0^0}}} \newcommand {\inftyToZero }{\ensuremath {\boldsymbol {\infty ^0}}} \newcommand {\numOverZero }{\ensuremath {\boldsymbol {\tfrac {\#}{0}}}} \newcommand {\dfn }{\textbf } \newcommand {\unit }{\mathop {}\!\mathrm } \newcommand {\eval }{\bigg [ #1 \bigg ]} \newcommand {\seq }{\left ( #1 \right )} \newcommand {\epsilon }{\varepsilon } \newcommand {\phi }{\varphi } \newcommand {\iff }{\Leftrightarrow } \DeclareMathOperator {\arccot }{arccot} \DeclareMathOperator {\arcsec }{arcsec} \DeclareMathOperator {\arccsc }{arccsc} \DeclareMathOperator {\si }{Si} \DeclareMathOperator {\scal }{scal} \DeclareMathOperator {\sign }{sign} \newcommand {\arrowvec }{{\overset {\rightharpoonup }{#1}}} \newcommand {\vec }{{\overset {\boldsymbol {\rightharpoonup }}{\mathbf {#1}}}\hspace {0in}} \newcommand {\point }{\left (#1\right )} \newcommand {\pt 
}{\mathbf {#1}} \newcommand {\Lim }{\lim _{\point {#1} \to \point {#2}}} \DeclareMathOperator {\proj }{\mathbf {proj}} \newcommand {\veci }{{\boldsymbol {\hat {\imath }}}} \newcommand {\vecj }{{\boldsymbol {\hat {\jmath }}}} \newcommand {\veck }{{\boldsymbol {\hat {k}}}} \newcommand {\vecl }{\vec {\boldsymbol {\l }}} \newcommand {\uvec }{\mathbf {\hat {#1}}} \newcommand {\utan }{\mathbf {\hat {t}}} \newcommand {\unormal }{\mathbf {\hat {n}}} \newcommand {\ubinormal }{\mathbf {\hat {b}}} \newcommand {\dotp }{\bullet } \newcommand {\cross }{\boldsymbol \times } \newcommand {\grad }{\boldsymbol \nabla } \newcommand {\divergence }{\grad \dotp } \newcommand {\curl }{\grad \cross } \newcommand {\lto }{\mathop {\longrightarrow \,}\limits } \newcommand {\bar }{\overline } \newcommand {\surfaceColor }{violet} \newcommand {\surfaceColorTwo }{redyellow} \newcommand {\sliceColor }{greenyellow} \newcommand {\vector }{\left \langle #1\right \rangle } \newcommand {\sectionOutcomes }{} \newcommand {\HyperFirstAtBeginDocument }{\AtBeginDocument }$

There is a nice result for approximating the remainder of convergent alternating series.

### Introduction

Recall that if $\{a_n\}_{n=n_0}^{\infty }$ is a sequence of positive terms, we say the series $\sum _{k=n_0}^{\infty } (-1)^k a_k$ and $\sum _{k=n_0}^{\infty } (-1)^{k+1} a_k$ are alternating series, and we have a nice result to test these for convergence.

Note that this test gives us a way to determine that many alternating series must converge, but it does not give us information about their corresponding values. If we want to approximate such series, we must study their remainders.

### Remainders for Alternating Series

As usual, we must first establish that a series converges before we begin to think about remainders. Once we have established that an alternating series $\sum _{k=1}^{\infty } (-1)^k a_k$ converges to a value $s$, we have the usual decomposition:

$$s = \underbrace{\sum _{k=1}^{n} (-1)^k a_k}_{s_n} + \underbrace{\sum _{k=n+1}^{\infty } (-1)^k a_k}_{r_n}.$$

As before, $s_n$ is the approximate value of the infinite series and $r_n$ is the error made when using this approximation. While we typically cannot find an explicit formula for the value of the infinite series, we have a good way to establish bounds on the error made when approximating $\sum _{k=1}^{\infty } (-1)^k a_k$ by the finite sum $s_n= \sum _{k=1}^{n} (-1)^k a_k$. Let’s explore this result pictorially for a general alternating series.

Of course, if the first term of the series is negative, we have the same behavior, except that the odd partial sums are increasing and the even ones are decreasing. Try it out with an example if you are skeptical! We can now state the general result for approximating alternating series: if $\{a_n\}$ is a decreasing sequence of positive terms with $\lim _{n \to \infty } a_n = 0$, and $s$ is the value of the alternating series, then

$$|r_n| = |s - s_n| \leq a_{n+1}.$$

That is, the error made by stopping after $n$ terms is no larger than the magnitude of the first omitted term.
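The bracketing behavior of the partial sums can also be checked numerically. Here is a minimal sketch using the alternating harmonic series $\sum _{k=1}^{\infty } (-1)^{k+1}/k = \ln 2$ (my choice of series, not an example from the text): its first term is positive, so the odd partial sums decrease toward the limit while the even ones increase toward it.

```python
import math

def partial_sum(n):
    """Return s_n = sum_{k=1}^{n} (-1)^(k+1) / k for the alternating harmonic series."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

odd = [partial_sum(n) for n in (1, 3, 5, 7)]
even = [partial_sum(n) for n in (2, 4, 6, 8)]

# Odd partial sums decrease; even partial sums increase.
assert all(odd[i] > odd[i + 1] for i in range(len(odd) - 1))
assert all(even[i] < even[i + 1] for i in range(len(even) - 1))

# The limit ln(2) is squeezed between every even and every odd partial sum.
assert all(e < math.log(2) < o for e, o in zip(even, odd))
```

Since consecutive partial sums always straddle the limit, the distance from $s_n$ to the limit can never exceed the size of the next term, which is exactly the remainder bound.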

In order to gain some practice, let’s work an example.
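The worked example is interactive in the original; as a stand-in, here is a small numerical sketch using the series $\sum _{k=1}^{\infty } (-1)^{k+1}/k^2 = \pi ^2/12$ (my choice, not from the text) to check the remainder bound $|r_n| \leq a_{n+1}$ directly.

```python
import math

# The alternating series sum_{k=1}^inf (-1)^(k+1)/k^2 converges to pi^2/12.
exact = math.pi ** 2 / 12

n = 10
s_n = sum((-1) ** (k + 1) / k ** 2 for k in range(1, n + 1))

# The remainder bound says |exact - s_n| <= a_{n+1} = 1/(n+1)^2.
bound = 1 / (n + 1) ** 2
error = abs(exact - s_n)

print(f"s_10 = {s_n:.6f}, actual error = {error:.6f}, bound = {bound:.6f}")
assert error <= bound
```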

Simple enough! Let’s see if our other typical question presents any additional trouble.
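The other typical question runs in the opposite direction: given a tolerance, how many terms do we need? Since $|r_N| \leq a_{N+1}$, it suffices to find the smallest $N$ with $a_{N+1}$ at most the tolerance. A sketch, again with the series $\sum (-1)^{k+1}/k^2$ and a tolerance of $0.001$ (both my choices):

```python
# Find the smallest N so that s_N is guaranteed within tol of the series'
# value, using the remainder bound |r_N| <= a_{N+1} = 1/(N+1)^2.
tol = 0.001

N = 1
while 1 / (N + 1) ** 2 > tol:
    N += 1

# Now a_{N+1} = 1/(N+1)^2 <= tol, so |r_N| <= tol is guaranteed.
print(N)
```

Note that this $N$ is merely guaranteed to work; the actual error is often smaller, so a smaller $N$ may also suffice.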

Finally, let’s consider one more problem.

Determine if there is a value for $N$ so that $\sum _{k=1}^{N} (-1)^k \cdot \frac {k+2}{2k+1}$ is within $.01$ of the value of $\sum _{k=1}^{\infty } (-1)^k \cdot \frac {k+2}{2k+1}$. If there is such a value, give one possibility for it.

There is no such value for $N$: since $\lim _{k \to \infty } \frac {k+2}{2k+1} = \frac {1}{2} \neq 0$, the series $\sum _{k=1}^{\infty } (-1)^k \cdot \frac {k+2}{2k+1}$ diverges by the divergence test, so there is no value to approximate in the first place.
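A quick numerical check (an illustration of my own, not part of the exercise) confirms that the terms tend to $\frac {1}{2}$ rather than $0$, so the hypotheses of the alternating series test fail:

```python
# Terms a_k = (k+2)/(2k+1) of the series above, sampled at growing k.
# They approach 1/2, not 0, so the series cannot converge.
terms = [(k + 2) / (2 * k + 1) for k in (10, 100, 1000, 10000)]
print(terms)
assert abs(terms[-1] - 0.5) < 0.001
```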

### Summary

When $\{a_n\}_{n = n_0}^{\infty }$ is a decreasing sequence of positive terms with $\lim _{n \to \infty } a_n = 0$, we approximate the alternating series $\sum _{k=n_0}^{\infty } (-1)^k a_k$ by the finite sum $s_n =\sum _{k=n_0}^{n} (-1)^k a_k$. Adding more terms of a convergent series should get you closer to the actual sum! Indeed, we have a nice bound for the remainder:

$$|r_n| \leq a_{n+1}.$$

We can use this bound to approximate the series to any degree of accuracy we want!