
There are two ways to establish whether a sequence has a limit.

A sequence can be represented by the ordered list of its terms. In the previous section, we found many ways to generate such a list. Regardless of how we obtain it, there are two fundamental questions we can ask.

- Do the numbers in the list approach a finite value?
- Can I add all of the numbers in the list together and obtain a finite result?

As it turns out, the second question will be more important for us. However, as we will see in a future section, we can reduce the second question to the first one. As such, we should examine the first question in detail. We begin by giving an intuitive definition.

Given a sequence $\{a_n\}$, we say that the **limit** of the sequence is $L$, written $\lim_{n\to\infty} a_n = L$, if, as $n$ grows arbitrarily
large, $a_n$ becomes arbitrarily close to $L$.

If $L$ is finite, we say that the sequence **converges**. If there is no finite value $L$ so that $\lim_{n\to\infty} a_n = L$, then
we say that the limit **does not exist**, or equivalently that the sequence
**diverges**.

This intuitive definition of a limit can be made more precise as follows.

Suppose that $\{a_n\}$ is a sequence. We say that $\lim_{n\to\infty} a_n = L$ if for every $\varepsilon > 0$, there exists an integer $N$, such
that $|a_n - L| < \varepsilon$ for any $n \geq N$.

This definition captures the same idea as the intuitive one but makes it precise. The quantity $\varepsilon$ measures how close the terms in the sequence are to the limit $L$. We say that the limit exists and is $L$ if, no matter how small a distance $\varepsilon$ we choose, the terms in the sequence eventually become and stay within $\varepsilon$ of $L$.
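For instance, consider the sequence $a_n = \frac{1}{n}$ (an illustrative example); the precise definition can be applied directly.

```latex
% Claim: \lim_{n\to\infty} \frac{1}{n} = 0.
% Given \varepsilon > 0, choose an integer N with N > \frac{1}{\varepsilon}.
% Then for any n \geq N,
\left| a_n - 0 \right| = \frac{1}{n} \leq \frac{1}{N} < \varepsilon,
% so the terms eventually become and stay within \varepsilon of 0.
```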

The precise definition is extremely important to establish the theoretical foundations
of sequences and is used frequently in more theoretically-oriented courses. For our
purposes, however, the intuitive definition will be sufficient.

Suppose that $\{a_n\}$ is a sequence and that $\lim_{n\to\infty} a_n = L$. Intuitively, what can we say about $\lim_{n\to\infty} a_{2n}$?

- $\lim_{n\to\infty} a_{2n}$ exists, but we do not know what its value is.
- $\lim_{n\to\infty} a_{2n}$ exists, and $\lim_{n\to\infty} a_{2n} = L$.
- $\lim_{n\to\infty} a_{2n}$ may or may not exist.

One way to think about this is by noting that the sequence $\{a_n\}$ is represented by the list

$$a_1, a_2, a_3, a_4, a_5, \dots$$

while the sequence $\{a_{2n}\}$ is represented by the list below.

$$a_2, a_4, a_6, a_8, a_{10}, \dots$$

If the first sequence tends to $L$, the second sequence must also tend to $L$, since every term in the second list also appears in the first.

In the case that $\lim_{n\to\infty} a_n = \pm\infty$, we say that $\{a_n\}$ diverges. The only time we say that a sequence
converges is when the limit exists and is equal to a *finite* value.

Since sequences are functions defined on the integers, the notion of a “limit at a
specific $n$” is not very interesting, since we can explicitly compute $a_n$ for a given $n$. However,
limits at *infinity* are a different story. An important question can now be asked: given
a sequence, how do we determine if it has a limit?

There are several techniques that allow us to find limits of real-valued functions, and we have seen that if we have a sequence, we can often find a real-valued function that agrees with it on their common domains. Suppose that we have found a real-valued function $f$ that agrees with $\{a_n\}$ on their common domains, i.e. that $f(n) = a_n$ for every integer $n \geq 1$. If we know $\lim_{x\to\infty} f(x)$, can we use this to conclude something about $\lim_{n\to\infty} a_n$?

Before answering this question, consider the following cautionary example.

Let $f(x) = \sin(\pi x)$ and $a_n = f(n) = \sin(\pi n)$. Show that $\lim_{n\to\infty} a_n = 0$ but that $\lim_{x\to\infty} f(x)$ does not exist.

The sequence $\{a_n\}$ is represented by the ordered list of numbers
below.

$$\sin(\pi), \sin(2\pi), \sin(3\pi), \sin(4\pi), \dots$$

Since $\sin(n\pi) = 0$ for every integer $n$, this list is actually a list of zeroes. Since every term in the sequence is
$0$, we have $\lim_{n\to\infty} a_n = 0$. But $\lim_{x\to\infty} \sin(\pi x)$, when $x$ is real, does not exist; as $x$ becomes arbitrarily large, the values $\sin(\pi x)$
do not get closer and closer to a single value, but instead oscillate between $-1$ and
$1$.

This is shown graphically below.
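We can also check this numerically. The sketch below samples $f(x) = \sin(\pi x)$ at integer and at half-integer points; floating-point evaluation returns values that are only approximately zero at the integers.

```python
import math

# a_n = sin(pi * n) sampled at integers: every value is 0 up to roundoff.
integer_samples = [math.sin(math.pi * n) for n in range(1, 6)]
print([round(v, 12) for v in integer_samples])

# The same f(x) = sin(pi * x) at half-integer points oscillates between -1 and 1,
# which is why the limit of f(x) as x grows does not exist.
half_integer_samples = [math.sin(math.pi * (n + 0.5)) for n in range(1, 6)]
print([round(v, 12) for v in half_integer_samples])
```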

What can we conclude from the above example?

- If $\lim_{x\to\infty} f(x)$ exists, then $\lim_{n\to\infty} a_n$ exists.
- If $\lim_{x\to\infty} f(x)$ does not exist, then $\lim_{n\to\infty} a_n$ does not exist.
- If $\lim_{x\to\infty} f(x)$ does not exist, $\lim_{n\to\infty} a_n$ may still exist.

This might lead us to believe that we need to develop a whole new arsenal of techniques in order to determine if limits of sequences exist, but there is good news.

Let $\{a_n\}$ be a sequence and suppose that $f$ is a real-valued function for which $f(n) = a_n$ for all
integers $n \geq 1$. If $\lim_{x\to\infty} f(x) = L$, then $\lim_{n\to\infty} a_n = L$ as well.

If we think about the theorem a bit further, the conclusion of the theorem and the
content of the preceding example should seem reasonable. If the values of $f(x)$ become
arbitrarily close to a number $L$ for *all* arbitrarily large $x$-values, then the result should
still hold when we only consider *some* of these values, namely the integers. However, if we only know what
happens for *some* arbitrarily large $x$-values, we cannot say what happens for *all* of
them!

Remember that the converse of this theorem is not true. In the example preceding
this theorem, we have an explicit example of a function $f$ and a sequence $\{a_n\}$ with $f(n) = a_n$ where $\lim_{n\to\infty} a_n$ exists and $\lim_{x\to\infty} f(x)$ does not exist.

In practice, we use the above theorem to compute limits without explicitly exhibiting the function of a real variable from which the limit is derived.

The last example shows us that for many sequences, we can employ the same techniques that we used to compute limits previously. While algebraic techniques and L’Hopital’s rule are useful, in many of the following sections, being able to determine limits quickly is an important skill.

Let $a_n = \dfrac{3n^2 + 2n + 1}{2n^2 + 5}$. Determine if the limit of the sequence exists.

The highest degree term in the numerator is $3n^2$, while the highest degree term in the
denominator is $2n^2$. We can factor the largest terms out of both the numerator and
denominator and do a little algebra.

$$a_n = \frac{3n^2\left(1 + \frac{2}{3n} + \frac{1}{3n^2}\right)}{2n^2\left(1 + \frac{5}{2n^2}\right)} = \frac{3}{2} \cdot \frac{1 + \frac{2}{3n} + \frac{1}{3n^2}}{1 + \frac{5}{2n^2}}$$

The second factor becomes arbitrarily close to $1$ as $n$ grows larger and larger, so the limit of the sequence is completely determined by the ratio of the highest degree term in the numerator to the highest degree term in the denominator. In this case, that ratio is $\frac{3}{2}$, so $\lim_{n\to\infty} a_n = \frac{3}{2}$.

In the preceding example, we say that the highest-degree term in the numerator is the *dominant term* of the numerator, and that the highest-degree term in the denominator is the *dominant term* of the denominator, because these terms are the only ones
that are relevant when finding the limit.

The reader may notice that the last example is a special case of the Rational
Function Theorem. However, the name given to this result is not as important as the
idea it captures. When finding limits of functions, it is only necessary to consider the
dominant term. When treating quotients of functions, we only need to consider the
dominant terms in the numerator and denominator.
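As a numerical sanity check of the dominant-term idea, the sketch below uses a made-up rational sequence, $a_n = \frac{5n^3 - n}{2n^3 + 7}$ (chosen only for illustration); its dominant-term ratio is $\frac{5n^3}{2n^3} = \frac{5}{2}$.

```python
# Hypothetical rational sequence; the limit is the ratio of the dominant terms.
def a(n):
    return (5 * n**3 - n) / (2 * n**3 + 7)

# The terms approach 5/2 = 2.5 as n grows.
for n in [10, 100, 1000, 10**6]:
    print(n, a(n))
```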

Sometimes, this technique can be used to find limits where L’Hopital’s rule or an algebraic approach would be complicated.

The preceding examples illustrate that higher positive powers of $n$ grow more quickly than lower positive powers of $n$. We can introduce a little notation that succinctly captures the rate at which terms in a sequence grow.

In essence, writing $\{a_n\} \ll \{b_n\}$ says that the sequence $\{b_n\}$ grows much faster than $\{a_n\}$; more precisely, it means that $\lim_{n\to\infty} \frac{a_n}{b_n} = 0$.

Many sequences of interest involve terms other than powers of . It is often
useful to understand how different *types* of functions grow relative to each
other.

Growth rates of sequences: Let $a$ and $b$ be positive real numbers, and let $r > 1$. We have the
following relationships.

$$(\ln n)^a \ll n^b \ll r^n \ll n! \ll n^n$$

The first inequality in this theorem essentially guarantees that *any* power of $\ln n$ grows
more slowly than *any* power of $n$. For example:

$$(\ln n)^{1000} \ll n^{1/2}$$
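The sketch below illustrates these growth rates numerically with the illustrative choices $a = 3$, $b = \frac{1}{2}$ (and $b = 5$), and $r = 2$; each printed ratio tends to $0$, though the logarithm ratio decays very slowly.

```python
import math

# (ln n)^3 / n^(1/2) -> 0, but only for very large n.
for n in [10**4, 10**8, 10**12, 10**16]:
    print(n, math.log(n) ** 3 / math.sqrt(n))

# n^5 / 2^n -> 0 much more quickly.
for n in [10, 50, 100, 200]:
    print(n, n**5 / 2.0**n)
```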

This allows us to extend the *dominant term* idea to more complicated expressions.

Let $a_n = \dfrac{3^n + n^2}{n^5 + 2n}$. What is $\lim_{n\to\infty} a_n$?

By growth rates, the dominant term in the numerator is $3^n$, and the dominant term in
the denominator is $n^5$. We thus will know if $\lim_{n\to\infty} a_n$ exists by considering $\lim_{n\to\infty} \frac{3^n}{n^5}$. By growth rates,
this limit is infinite, so $\lim_{n\to\infty} a_n$ does not exist.

This can be made more explicit by the following computation, which shows exactly how the growth rates results are used. As with a previous example, it relies on factoring the dominant term out of the numerator and the denominator. These terms are determined by the growth rates results.

By the growth rates results, $n^2 \ll 3^n$ and $2n \ll n^5$, so we have:

$$a_n = \frac{3^n\left(1 + \frac{n^2}{3^n}\right)}{n^5\left(1 + \frac{2n}{n^5}\right)} = \frac{3^n}{n^5} \cdot \frac{1 + \frac{n^2}{3^n}}{1 + \frac{2}{n^4}}$$

The second factor tends to $1$, while $\frac{3^n}{n^5}$ grows without bound, so $\lim_{n\to\infty} a_n = \infty$.

Previously, when considering limits, one of our techniques was to replace complicated
functions by simpler functions. The *Squeeze Theorem* tells us one situation where this
is possible: if $a_n \leq b_n \leq c_n$ for all sufficiently large $n$ and $\lim_{n\to\infty} a_n = \lim_{n\to\infty} c_n = L$, then $\lim_{n\to\infty} b_n = L$ as well.

Let’s see an example.

The squeeze theorem is helpful in establishing a more general result about *geometric*
sequences.
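The standard result (stated here as a sketch) is: if $|r| < 1$ then $r^n \to 0$; if $r = 1$ the sequence is constant; and if $r > 1$ or $r \leq -1$ the sequence diverges. A quick numerical illustration:

```python
# Behavior of the geometric sequence r^n for several choices of r.
for r in [0.5, -0.5, 1.0, 2.0, -1.0]:
    terms = [r**n for n in range(1, 21)]
    # |r| < 1: terms shrink toward 0; r = 1: constant;
    # r = 2: terms grow without bound; r = -1: terms oscillate between -1 and 1.
    print(r, terms[-3:])
```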

Perhaps the best way to determine whether the limit of a sequence exists is to compute it. Even though we’ve been working with sequences generated by an explicit formula thus far in this section, not all sequences are defined this way. Sometimes we’ll only have a recursive description of a sequence rather than an explicit one, and sometimes we will have neither. In many of the coming sections, we will only have a recursive description of a sequence, so we want a good way to determine whether a limit exists without having to compute it directly. To do this, we introduce some terminology focused on the relationships between the terms of a sequence.

A sequence $\{a_n\}$ is called

- **increasing** if $a_n < a_{n+1}$ for all $n$,
- **nondecreasing** if $a_n \leq a_{n+1}$ for all $n$,
- **decreasing** if $a_n > a_{n+1}$ for all $n$,
- **nonincreasing** if $a_n \geq a_{n+1}$ for all $n$.

Lots of facts are true for sequences which are either increasing or decreasing; to talk about this situation without constantly saying “either increasing or decreasing,” we can introduce a single word to cover both cases.

If a sequence is increasing, nondecreasing, decreasing, or nonincreasing, it is said to be
**monotonic**.
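These conditions are easy to check on finitely many terms. The helper functions below are a sketch (the names are ours); note that a finite check can only suggest, not prove, that the full sequence is monotonic.

```python
# Check the four conditions above on a finite list of terms.
def is_increasing(terms):
    return all(a < b for a, b in zip(terms, terms[1:]))

def is_nondecreasing(terms):
    return all(a <= b for a, b in zip(terms, terms[1:]))

def is_decreasing(terms):
    return all(a > b for a, b in zip(terms, terms[1:]))

def is_nonincreasing(terms):
    return all(a >= b for a, b in zip(terms, terms[1:]))

print(is_increasing([1 - 1 / n for n in range(1, 10)]))  # a_n = 1 - 1/n is increasing
print(is_decreasing([(-1) ** n for n in range(1, 10)]))  # (-1)^n is not decreasing
```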

If an arithmetic sequence $a_n = a + d(n-1)$ is monotonic, what must be true about $a$ and
$d$?

The sign of $a$: is positive / is negative / does not matter. The sign of $d$: is positive / is negative / does not matter.

We can model an arithmetic sequence with the line $y = a + d(x - 1)$. Can a line ever increase then
decrease, or vice-versa?

If a geometric sequence $a_n = a \cdot r^{n-1}$ is monotonic, what must be true about $a$ and $r$?

The sign of $a$: is positive / is negative / does not matter. The sign of $r$: is positive / is negative / does not matter.

From our examples earlier in the section, a geometric sequence can be modeled by
an exponential function (which is always increasing or always decreasing) if the sign
of $r$ is positive. If $r$ is negative, the sign of each successive term is different from the
last.

Sometimes we want to classify sequences for which the terms do not get too big or too small.

A sequence $\{a_n\}$ is:

- **bounded above** if there is some number $M$ so that for all $n$, we have $a_n \leq M$.
- **bounded below** if there is some number $m$ so that for all $n$, we have $a_n \geq m$.
- **bounded** if it is both bounded above and bounded below.

So what does this definition actually say? Essentially, we say that a sequence is bounded above if its terms cannot become too large and positive, bounded below if its terms cannot become too large and negative, and bounded if neither can happen.

True or False: If a sequence $\{a_n\}$ is nondecreasing, it is bounded below by $a_1$.

True / False

If a sequence is nondecreasing, then its smallest value is its first element, so $a_n \geq a_1$ for all $n$.

True or False: If a sequence $\{a_n\}$ is nonincreasing, it is bounded above by $a_1$.

True / False

If a sequence is nonincreasing, then its largest value is its first element, so $a_n \leq a_1$ for all $n$.

So, what do these previous definitions have to do with the idea of a limit? Essentially, there are three reasons that a sequence may diverge:

- the terms eventually are either always positive or always negative but become arbitrarily large in magnitude.
- the terms are never eventually monotonic.
- the terms are never eventually monotonic *and* become arbitrarily large in magnitude.
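One hypothetical example sequence for each of the three failure modes:

```python
# Three sequences, one for each way a limit can fail to exist.
N = range(1, 9)
unbounded = [n**2 for n in N]                       # always positive, arbitrarily large
oscillating = [(-1) ** n for n in N]                # bounded, never eventually monotonic
oscillating_unbounded = [(-1) ** n * n for n in N]  # oscillates AND grows in magnitude
print(unbounded)
print(oscillating)
print(oscillating_unbounded)
```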

Let’s think about the terminology we introduced.

Think about the following statements and choose the correct option.

- If we know that a sequence is monotonic and its limit does not exist, then: the terms become arbitrarily large in magnitude / the terms are never eventually monotonic / the terms are never eventually monotonic and become arbitrarily large in magnitude.
- If we know that a sequence is bounded and its limit does not exist, then: the terms become arbitrarily large in magnitude / the terms are never eventually monotonic / the terms are never eventually monotonic and become arbitrarily large in magnitude.

We can now state an important theorem: every bounded, monotonic sequence converges to a finite limit.

To think about the statement of the theorem, if we have a sequence that is bounded, the only way it could diverge is if the terms are never eventually monotonic. However, if we know the sequence is also monotonic, this cannot happen! Thus, the sequence cannot diverge, so it must have a limit.

In short, bounded monotonic sequences always converge, though we can’t necessarily describe the number to which they converge. Let’s try some examples.

Given the sequence $a_n = \dfrac{n}{n+1}$ for $n \geq 1$, explain how you know that $\{a_n\}$ converges to a finite value
without computing its limit.
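As a sketch, consider the hypothetical recursively defined sequence $a_1 = 1$, $a_{n+1} = \sqrt{2 + a_n}$. One can check that it is increasing and bounded above by $2$, so the theorem guarantees that it converges even though the recursion gives no explicit formula; numerically, the terms approach $2$.

```python
import math

# a_1 = 1, a_{n+1} = sqrt(2 + a_n): increasing and bounded above by 2,
# so the bounded monotone convergence theorem guarantees a limit.
a = 1.0
for _ in range(10):
    a = math.sqrt(2 + a)
print(a)  # close to 2, the fixed point of L = sqrt(2 + L)
```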

We don’t actually need to know that a sequence is always monotonic to
apply the bounded-monotone convergence theorem. It is enough to know
that the sequence is eventually monotonic. More formally, this means that
there is some integer $N$ for which the sequence is always increasing or always
decreasing whenever $n \geq N$.

In the previous examples, we could write down a function corresponding to each sequence and apply the theorem from earlier in the section. However, this is not always possible.

Suppose that $a_n = 2 + (-1)^n$, and let $b_n = \sum_{k=1}^{n} a_k$. Determine whether the sequences $\{a_n\}$ and $\{b_n\}$ are bounded or
monotonic, and explain whether either has a limit.

The sequence $\{a_n\}$ is certainly not monotonic, but it is bounded since $1 \leq a_n \leq 3$ for all $n$. We also
can see that $\lim_{n\to\infty} a_n$ does not exist since the terms oscillate between $1$ and $3$.

For $\{b_n\}$, note that since $a_k > 0$ for all $k$, each term adds a positive amount to the sum. Thus, $\{b_n\}$ is increasing and hence monotonic.

However, since $a_k \geq 1$ for all $k$, we have the following inequality.

$$b_n = \sum_{k=1}^{n} a_k \geq n$$

Hence, $\{b_n\}$ is not bounded and $\lim_{n\to\infty} b_n$ does not exist.

We had a way to analyze $\{b_n\}$ in the above example because we had an explicit
formula for $a_n$, but how would we find such a formula for $b_n$? As it turned out, we
didn’t have to do so in order to determine that $\lim_{n\to\infty} b_n$ does not exist. We will
make arguments in the coming sections that allow us to determine whether
limits of sequences exist without relying on having explicit formulas for the
sequences.