Section 8.4 Alternating Series
Motivating Questions
What is an alternating series?
Under what conditions does an alternating series converge? Why?
How well does the \(n\)th partial sum of a convergent alternating series approximate the actual sum of the series? Why?
So far, we've considered series with exclusively nonnegative terms. Next, we consider series that have some negative terms. For instance, the geometric series
\begin{equation*} \sum_{k=0}^{\infty} 2\left(-\frac{2}{3}\right)^k \end{equation*}
has \(a = 2\) and \(r = -\frac{2}{3}\text{,}\) so that every other term alternates in sign. This series converges to
\begin{equation*} \frac{a}{1-r} = \frac{2}{1-\left(-\frac{2}{3}\right)} = \frac{6}{5}\text{.} \end{equation*}
In Preview Activity 8.4.1 and our following discussion, we investigate the behavior of similar series where consecutive terms have opposite signs.
Preview Activity 8.4.1.
Preview Activity 8.3.1 showed how we can approximate the number \(e\) with linear, quadratic, and other polynomial approximations. We use a similar approach in this activity to obtain linear and quadratic approximations to \(\ln(2)\text{.}\) Along the way, we encounter a type of series that is different than most of the ones we have seen so far. Throughout this activity, let \(f(x) = \ln(1+x)\text{.}\)
Find the tangent line to \(f\) at \(x=0\) and use this linearization to approximate \(\ln(2)\text{.}\) That is, find \(L(x)\text{,}\) the tangent line approximation to \(f(x)\text{,}\) and use the fact that \(L(1) \approx f(1)\) to estimate \(\ln(2)\text{.}\)

The linearization of \(\ln(1+x)\) does not provide a very good approximation to \(\ln(2)\) since \(1\) is not that close to \(0\text{.}\) To obtain a better approximation, we alter our approach; instead of using a straight line to approximate \(\ln(2)\text{,}\) we use a quadratic function to account for the concavity of \(\ln(1+x)\) for \(x\) close to \(0\text{.}\) With the linearization, both the function's value and slope agree with the linearization's value and slope at \(x=0\text{.}\) We will now make a quadratic approximation \(P_2(x)\) to \(f(x) = \ln(1+x)\) centered at \(x=0\) with the property that \(P_2(0) = f(0)\text{,}\) \(P'_2(0) = f'(0)\text{,}\) and \(P''_2(0) = f''(0)\text{.}\)
Let \(P_2(x) = x - \frac{x^2}{2}\text{.}\) Show that \(P_2(0) = f(0)\text{,}\) \(P'_2(0) = f'(0)\text{,}\) and \(P''_2(0) = f''(0)\text{.}\) Use \(P_2(x)\) to approximate \(\ln(2)\) by using the fact that \(P_2(1) \approx f(1)\text{.}\)
We can continue approximating \(\ln(2)\) with polynomials of larger degree whose derivatives agree with those of \(f\) at \(0\text{.}\) This makes the polynomials fit the graph of \(f\) better for more values of \(x\) around \(0\text{.}\) For example, let \(P_3(x) = x - \frac{x^2}{2}+\frac{x^3}{3}\text{.}\) Show that \(P_3(0) = f(0)\text{,}\) \(P'_3(0) = f'(0)\text{,}\) \(P''_3(0) = f''(0)\text{,}\) and \(P'''_3(0) = f'''(0)\text{.}\) Taking a similar approach to preceding questions, use \(P_3(x)\) to approximate \(\ln(2)\text{.}\)
If we used a degree \(4\) or degree \(5\) polynomial to approximate \(\ln(1+x)\text{,}\) what approximations of \(\ln(2)\) do you think would result? Use the preceding questions to conjecture a pattern that holds, and state the degree \(4\) and degree \(5\) approximations.
Subsection 8.4.1 The Alternating Series Test
Preview Activity 8.4.1 gives us several approximations to \(\ln(2)\text{.}\) The linear approximation is \(1\text{,}\) and the quadratic approximation is \(1 - \frac{1}{2} = \frac{1}{2}\text{.}\) If we continue this process, cubic, quartic (degree \(4\)), quintic (degree \(5\)), and higher degree polynomials give us the approximations to \(\ln(2)\) in Table 8.4.1.
linear  \(1\)  \(1\) 
quadratic  \(1 - \frac{1}{2}\)  \(0.5\) 
cubic  \(1 - \frac{1}{2} + \frac{1}{3}\)  \(0.8\overline{3}\) 
quartic  \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4}\)  \(0.58\overline{3}\) 
quintic  \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5}\)  \(0.78\overline{3}\) 
The pattern here shows that \(\ln(2)\) can be approximated by the partial sums of the infinite series
\begin{equation*} \sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots \end{equation*}
where the alternating signs are indicated by the factor \((-1)^{k+1}\text{.}\) We call such a series an alternating series.
Using computational technology, we find that the sum of the first 100 terms in this series is 0.6881721793. As a comparison, \(\ln(2) \approx 0.6931471806\text{.}\) This shows that even though the series (8.4.1) converges to \(\ln(2)\text{,}\) it must do so quite slowly, since the sum of the first 100 terms isn't particularly close to \(\ln(2)\text{.}\) We will investigate the issue of how quickly an alternating series converges later in this section.
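These numerical claims are easy to reproduce. As a sketch (not part of the text's development), the following computes the 100th partial sum of the alternating harmonic series and compares it to \(\ln(2)\text{:}\)

```python
import math

def alt_harmonic_partial_sum(n):
    """S_n = sum_{k=1}^{n} (-1)^(k+1) / k, the n-th partial sum of the series."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

print(round(alt_harmonic_partial_sum(100), 10))  # 0.6881721793
print(round(math.log(2), 10))                    # 0.6931471806
```

The gap between the two printed values makes the slow convergence visible: one hundred terms still miss \(\ln(2)\) in the third decimal place.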
Definition 8.4.2.
An alternating series is a series of the form
\begin{equation*} \sum_{k=0}^{\infty} (-1)^k a_k \end{equation*}
where \(a_k \gt 0\) for each \(k\text{.}\)
We have some flexibility in how we write an alternating series; for example, the series
\begin{equation*} \sum_{k=1}^{\infty} (-1)^{k+1} a_k\text{,} \end{equation*}
whose index starts at \(k = 1\text{,}\) is also alternating. As we will soon see, there are several very nice results that hold for alternating series, while alternating series can also demonstrate some unusual behavior.
It is important to remember that most of the series tests we have seen in previous sections apply only to series with nonnegative terms. Alternating series require a different test.
Activity 8.4.2.
Remember that, by definition, a series converges if and only if its corresponding sequence of partial sums converges.

Calculate the first few partial sums (to 10 decimal places) of the alternating series
\begin{equation*} \sum_{k=1}^{\infty} (-1)^{k+1}\frac{1}{k}\text{.} \end{equation*}Label each partial sum with the notation \(S_n = \sum_{k=1}^{n} (-1)^{k+1}\frac{1}{k}\) for an appropriate choice of \(n\text{.}\)
Plot the sequence of partial sums from part (a). What do you notice about this sequence?
Small hints for each of the prompts above.
Activity 8.4.2 illustrates the general behavior of any convergent alternating series. We see that the partial sums of the alternating harmonic series oscillate around a fixed number that turns out to be the sum of the series.
Recall that if \(\lim_{k \to \infty} a_k \neq 0\text{,}\) then the series \(\sum a_k\) diverges by the Divergence Test. From this point forward, we will thus only consider alternating series
\begin{equation*} \sum_{k=1}^{\infty} (-1)^{k+1} a_k \end{equation*}
in which the sequence \(a_k\) consists of positive numbers that decrease to \(0\text{.}\) The \(n\)th partial sum \(S_n\) is
\begin{equation*} S_n = \sum_{k=1}^{n} (-1)^{k+1} a_k = a_1 - a_2 + a_3 - \cdots + (-1)^{n+1} a_n\text{.} \end{equation*}
Notice that
\(S_2 = a_1 - a_2\text{,}\) and since \(a_1 \gt a_2\) we have \(0 \lt S_2 \lt S_1 \text{.}\)
\(S_3 = S_2+a_3\) and so \(S_2 \lt S_3\text{.}\) But \(a_3 \lt a_2\text{,}\) so \(S_3 \lt S_1\text{.}\) Thus, \(0 \lt S_2 \lt S_3 \lt S_1 \text{.}\)
\(S_4 = S_3-a_4\) and so \(S_4 \lt S_3\text{.}\) But \(a_4 \lt a_3\text{,}\) so \(S_2 \lt S_4\text{.}\) Thus, \(0 \lt S_2 \lt S_4 \lt S_3 \lt S_1 \text{.}\)
\(S_5 = S_4+a_5\) and so \(S_4 \lt S_5\text{.}\) But \(a_5 \lt a_4\text{,}\) so \(S_5 \lt S_3\text{.}\) Thus, \(0 \lt S_2 \lt S_4 \lt S_5 \lt S_3 \lt S_1 \text{.}\)
This pattern continues as illustrated in Figure 8.4.4 (with \(n\) odd) so that each partial sum lies between the previous two partial sums.
Note further that the absolute value of the difference between the \((n-1)\)st partial sum \(S_{n-1}\) and the \(n\)th partial sum \(S_n\) is
\begin{equation*} \left| S_n - S_{n-1} \right| = a_n\text{.} \end{equation*}
Because the sequence \(\{a_n\}\) converges to \(0\text{,}\) the distance between successive partial sums becomes as close to zero as we'd like, and thus the sequence of partial sums converges (even though we don't know the exact value to which it converges).
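The nested pattern \(0 \lt S_2 \lt S_4 \lt S_5 \lt S_3 \lt S_1\) can be checked numerically; here is a minimal sketch using the alternating harmonic series as the test case:

```python
def S(n):
    # n-th partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ...
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# even-indexed partial sums increase, odd-indexed ones decrease,
# and every partial sum lies between the previous two
assert 0 < S(2) < S(4) < S(5) < S(3) < S(1)
print(S(2), S(4), S(5), S(3), S(1))
```

The printed values climb from \(0.5\) up toward the sum from below and descend from \(1\) toward it from above, exactly the oscillation described above.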
The preceding discussion has demonstrated the truth of the Alternating Series Test.
The Alternating Series Test.
Given an alternating series \(\sum (-1)^k a_k \text{,}\) if the sequence \(\{a_k\}\) of positive terms decreases to 0 as \(k \to \infty\text{,}\) then the alternating series converges.
Note that if the limit of the sequence \(\{a_k\}\) is not 0, then the alternating series diverges.
Activity 8.4.3.
Which series converge and which diverge? Justify your answers.
\(\displaystyle \displaystyle\sum_{k=1}^{\infty} \frac{(-1)^k}{k^2+2}\)
\(\displaystyle \displaystyle\sum_{k=1}^{\infty} \frac{(-1)^{k+1}2k}{k+5}\)
\(\displaystyle \displaystyle\sum_{k=2}^{\infty} \frac{(-1)^{k}}{\ln(k)}\)
Small hints for each of the prompts above.
Subsection 8.4.2 Estimating Alternating Sums
If the series converges, the argument for the Alternating Series Test also provides us with a method to determine how close the \(n\)th partial sum \(S_n\) is to the actual sum of the series. To see how this works, let \(S\) be the sum of a convergent alternating series, so
\begin{equation*} S = \sum_{k=1}^{\infty} (-1)^{k+1} a_k\text{.} \end{equation*}
Recall that the sequence of partial sums oscillates around the sum \(S\) so that
\begin{equation*} \left| S - S_n \right| \lt \left| S_{n+1} - S_n \right| = a_{n+1}\text{.} \end{equation*}
Therefore, the value of the term \(a_{n+1}\) provides an error estimate for how well the partial sum \(S_n\) approximates the actual sum \(S\text{.}\) We summarize this fact in the statement of the Alternating Series Estimation Theorem.
Alternating Series Estimation Theorem.
If the alternating series \(\sum_{k=1}^{\infty} (-1)^{k+1}a_k\) has positive terms \(a_k\) that decrease to zero as \(k \to \infty\text{,}\) and \(S_n = \sum_{k=1}^{n} (-1)^{k+1}a_k\) is the \(n\)th partial sum of the alternating series, then
\begin{equation*} \left| \left( \sum_{k=1}^{\infty} (-1)^{k+1}a_k \right) - S_n \right| \lt a_{n+1}\text{.} \end{equation*}
Example 8.4.5.
Determine how well the \(100\)th partial sum \(S_{100}\) of
\begin{equation*} \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} \end{equation*}
approximates the sum of the series.
If we let \(S\) be the sum of the series \(\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\text{,}\) then we know that
\begin{equation*} \left| S - S_{100} \right| \lt a_{101}\text{.} \end{equation*}
Now
\begin{equation*} a_{101} = \frac{1}{101} \approx 0.0099\text{,} \end{equation*}
so the 100th partial sum is within 0.0099 of the sum of the series. We have discussed the fact (and will later verify) that
\begin{equation*} S = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} = \ln(2)\text{,} \end{equation*}
and so \(S \approx 0.693147\) while
\begin{equation*} S_{100} \approx 0.6881721793\text{.} \end{equation*}
We see that the actual difference between \(S\) and \(S_{100}\) is approximately \(0.0049750013\text{,}\) which is indeed less than \(0.0099\text{.}\)
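The same check can be run for other values of \(n\text{;}\) the sketch below confirms the bound \(|S - S_n| \lt a_{n+1} = \frac{1}{n+1}\) for the alternating harmonic series:

```python
import math

def S(n):
    # n-th partial sum of sum (-1)^(k+1)/k, which converges to ln(2)
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    error = abs(math.log(2) - S(n))
    bound = 1 / (n + 1)   # a_{n+1}, the Estimation Theorem bound
    assert error < bound
    print(n, error, bound)
```

In each case the true error is roughly half the guaranteed bound, so the theorem's estimate is safe but not tight.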
Activity 8.4.4.
Determine the number of terms it takes to approximate the sum of the convergent alternating series
to within 0.0001.
Small hints for each of the prompts above.
Subsection 8.4.3 Absolute and Conditional Convergence
A series such as
\begin{equation*} 1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots \end{equation*}
whose terms are neither all nonnegative nor alternating is different from any series that we have considered so far. The behavior of such a series can be rather complicated, but there is an important connection between a series with some negative terms and series with all positive terms.
Activity 8.4.5.

Explain why the series
\begin{equation*} 1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots \end{equation*}must have a sum that is less than the series
\begin{equation*} \sum_{k=1}^{\infty} \frac{1}{k^2}\text{.} \end{equation*} 
Explain why the series
\begin{equation*} 1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots \end{equation*}must have a sum that is greater than the series
\begin{equation*} -\sum_{k=1}^{\infty} \frac{1}{k^2}\text{.} \end{equation*} 
Given that the terms in the series
\begin{equation*} 1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots \end{equation*}converge to 0, what do you think the previous two results tell us about the convergence status of this series?
Small hints for each of the prompts above.
As the example in Activity 8.4.5 suggests, if a series \(\sum a_k\) has some negative terms but \(\sum |a_k|\) converges, then the original series, \(\sum a_k\text{,}\) must also converge. That is, if \(\sum | a_k |\) converges, then so must \(\sum a_k\text{.}\)
As we just observed, this is the case for the series (8.4.2), because the corresponding series of the absolute values of its terms is the convergent \(p\)-series \(\sum \frac{1}{k^2}\text{.}\) But there are series, such as the alternating harmonic series \(\sum (-1)^{k+1} \frac{1}{k}\text{,}\) that converge while the corresponding series of absolute values, \(\sum \frac{1}{k}\text{,}\) diverges. We distinguish between these behaviors by introducing the following language.
Definition 8.4.6.
Consider a series \(\sum a_k\text{.}\)
The series \(\sum a_k\) converges absolutely (or is absolutely convergent) provided that \(\sum | a_k |\) converges.
The series \(\sum a_k\) converges conditionally (or is conditionally convergent) provided that \(\sum | a_k |\) diverges and \(\sum a_k\) converges.
In this terminology, the series (8.4.2) converges absolutely while the alternating harmonic series is conditionally convergent.
Activity 8.4.6.

Consider the series \(\sum (-1)^k \frac{\ln(k)}{k}\text{.}\)
Does this series converge? Explain.
Does this series converge absolutely? Explain what test you use to determine your answer.

Consider the series \(\sum (-1)^k \frac{\ln(k)}{k^2}\text{.}\)
Does this series converge? Explain.
Does this series converge absolutely? Hint: Use the fact that \(\ln(k) \lt \sqrt{k}\) for large values of \(k\) and then compare to an appropriate \(p\)series.
Small hints for each of the prompts above.
Conditionally convergent series turn out to be very interesting. If the sequence \(\{a_n\}\) decreases to 0, but the series \(\sum a_k\) diverges, the conditionally convergent series \(\sum (-1)^k a_k\) is right on the borderline of being a divergent series. As a result, any conditionally convergent series converges very slowly. Furthermore, some very strange things can happen with conditionally convergent series, as illustrated in some of the exercises.
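The error bound from the Estimation Theorem makes "converges very slowly" concrete for the alternating harmonic series: guaranteeing accuracy \(10^{-k}\) forces on the order of \(10^k\) terms. A small sketch:

```python
def terms_needed(tol):
    # smallest n with a_{n+1} = 1/(n+1) <= tol, which guarantees
    # |S - S_n| < tol for the alternating harmonic series
    n = 1
    while 1 / (n + 1) > tol:
        n += 1
    return n

print(terms_needed(1e-2))  # 99
print(terms_needed(1e-4))  # 9999
```

Each additional guaranteed decimal digit multiplies the required number of terms by roughly ten, in sharp contrast to, say, a convergent geometric series, whose error shrinks by a fixed factor with every term.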
Subsection 8.4.4 Summary of Tests for Convergence of Series
We have discussed several tests for convergence/divergence of series in our sections and in exercises. We close this section of the text with a summary of all the tests we have encountered, followed by an activity that challenges you to decide which convergence test to apply to several different series.
 Geometric Series

The geometric series \(\sum ar^k\) with ratio \(r\) converges for \(-1 \lt r \lt 1\) and diverges for \(|r| \geq 1\text{.}\)
The sum of the convergent geometric series \(\displaystyle \sum_{k=0}^{\infty} ar^k\) is \(\frac{a}{1-r}\text{.}\)
 Divergence Test

If the sequence \(a_n\) does not converge to 0, then the series \(\sum a_k\) diverges.
This is the first test to apply because the conclusion is simple. However, if \(\lim_{n \to \infty} a_n = 0\text{,}\) no conclusion can be drawn.
 Integral Test

Let \(f\) be a positive, decreasing function on an interval \([c,\infty)\) and let \(a_k = f(k)\) for each positive integer \(k \geq c\text{.}\)
If \(\int_c^{\infty} f(t) \ dt\) converges, then \(\sum a_k\) converges.
If \(\int_c^{\infty} f(t) \ dt\) diverges, then \(\sum a_k\) diverges.
Use this test when \(f(x)\) is easy to integrate.
 Direct Comp. Test

(see Exercise 4 in Section 8.3)
Let \(0 \leq a_k \leq b_k\) for each positive integer \(k\text{.}\)
If \(\sum b_k\) converges, then \(\sum a_k\) converges.
If \(\sum a_k\) diverges, then \(\sum b_k\) diverges.
Use this test when you have a series with known behavior that you can compare to — this test can be difficult to apply.
 Limit Comp. Test

Let \(a_n\) and \(b_n\) be sequences of positive terms. If
\begin{equation*} \displaystyle \lim_{k \to \infty} \frac{a_k}{b_k} = L \end{equation*}for some positive finite number \(L\text{,}\) then the two series \(\sum a_k\) and \(\sum b_k\) either both converge or both diverge.
Easier to apply in general than the comparison test, but you must have a series with known behavior to compare. Useful to apply to series of rational functions.
 Ratio Test

Let \(a_k \neq 0\) for each \(k\) and suppose
\begin{equation*} \displaystyle \lim_{k \to \infty} \left| \frac{a_{k+1}}{a_k} \right| = r\text{.} \end{equation*}If \(r \lt 1\text{,}\) then the series \(\sum a_k\) converges absolutely.
If \(r \gt 1\text{,}\) then the series \(\sum a_k\) diverges.
If \(r=1\text{,}\) then the test is inconclusive.
This test is useful when a series involves factorials and powers.
 Root Test

(see Exercise 2 in Section 8.3)
Let \(a_k \geq 0\) for each \(k\) and suppose
\begin{equation*} \displaystyle \lim_{k \to \infty} \sqrt[k]{a_k} = r\text{.} \end{equation*}If \(r \lt 1\text{,}\) then the series \(\sum a_k\) converges.
If \(r \gt 1\text{,}\) then the series \(\sum a_k\) diverges.
If \(r=1\text{,}\) then the test is inconclusive.
In general, the Ratio Test can usually be used in place of the Root Test. However, the Root Test can be quick to use when \(a_k\) involves \(k\)th powers.
 Alt. Series Test

If \(a_n\) is a positive, decreasing sequence so that \(\displaystyle \lim_{n \to \infty} a_n = 0\text{,}\) then the alternating series \(\sum (-1)^{k+1} a_k\) converges.
This test applies only to alternating series — we assume that the terms \(a_n\) are all positive and that the sequence \(\{a_n\}\) is decreasing.
 Alt. Series Est.

Let \(S_n = \displaystyle \sum_{k=1}^n (-1)^{k+1} a_k\) be the \(n\)th partial sum of the alternating series \(\displaystyle \sum_{k=1}^{\infty} (-1)^{k+1} a_k\text{.}\) Assume \(a_n \gt 0\) for each positive integer \(n\text{,}\) the sequence \(a_n\) decreases to 0 and \(\displaystyle \lim_{n \to \infty} S_n = S\text{.}\) Then it follows that \(\left| S - S_n \right| \lt a_{n+1}\text{.}\)
This bound can be used to determine the accuracy of the partial sum \(S_n\) as an approximation of the sum of a convergent alternating series.
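The estimate in the box can be packaged as a small helper. The function name and the test series, \(\sum_{k=1}^{\infty} (-1)^{k+1}/k!\) (whose sum is \(1 - 1/e\)), are our own illustrative choices, not part of the text:

```python
import math

def alternating_sum(a, tol):
    """Approximate sum_{k>=1} (-1)^(k+1) a(k) by summing until a(n+1) < tol.

    If a(k) is positive and decreases to 0, the Alternating Series
    Estimation Theorem guarantees the result is within tol of the true sum.
    """
    total, k = 0.0, 1
    while a(k) >= tol:
        total += (-1) ** (k + 1) * a(k)
        k += 1
    return total

approx = alternating_sum(lambda k: 1 / math.factorial(k), 1e-8)
print(abs(approx - (1 - 1 / math.e)))  # smaller than the 1e-8 tolerance
```

Because factorials grow so fast, only about a dozen terms are needed here; compare this with the thousands of terms the alternating harmonic series requires for far less accuracy.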
Activity 8.4.7.
For (a)-(j), use appropriate tests to determine the convergence or divergence of the following series. Throughout, if a series is a convergent geometric series, find its sum.
\(\displaystyle \displaystyle\sum_{k=3}^{\infty} \ \frac{2}{\sqrt{k-2}}\)
\(\displaystyle \displaystyle\sum_{k=1}^{\infty} \ \frac{k}{1+2k}\)
\(\displaystyle \displaystyle\sum_{k=0}^{\infty} \ \frac{2k^2+1}{k^3+k+1}\)
\(\displaystyle \displaystyle\sum_{k=0}^{\infty} \ \frac{100^k}{k!}\)
\(\displaystyle \displaystyle\sum_{k=1}^{\infty} \ \frac{2^k}{5^k}\)
\(\displaystyle \displaystyle\sum_{k=1}^{\infty} \ \frac{k^3-1}{k^5+1}\)
\(\displaystyle \displaystyle\sum_{k=2}^{\infty} \ \frac{3^{k-1}}{7^k}\)
\(\displaystyle \displaystyle\sum_{k=2}^{\infty} \ \frac{1}{k^k}\)
\(\displaystyle \displaystyle\sum_{k=1}^{\infty} \ \frac{(-1)^{k+1}}{\sqrt{k+1}}\)
\(\displaystyle \displaystyle\sum_{k=2}^{\infty} \ \frac{1}{k \ln(k)}\)
Determine a value of \(n\) so that the \(n\)th partial sum \(S_n\) of the alternating series \(\displaystyle\sum_{n=2}^{\infty} \frac{(-1)^n}{\ln(n)}\) approximates the sum to within 0.001.
Small hints for each of the prompts above.
Subsection 8.4.5 Summary

An alternating series is a series whose terms alternate in sign. It has the form
\begin{equation*} \sum (-1)^k a_k \end{equation*}where \(a_k\) is a positive real number for each \(k\text{.}\)
The sequence of partial sums of a convergent alternating series oscillates around the sum of the series if the sequence of \(n\)th terms converges to 0. That is why the Alternating Series Test shows that the alternating series \(\sum_{k=1}^{\infty} (-1)^k a_k\) converges whenever the sequence \(\{a_n\}\) of \(n\)th terms decreases to 0.

The difference between the \((n-1)\)st partial sum \(S_{n-1}\) and the \(n\)th partial sum \(S_n\) of a convergent alternating series \(\sum_{k=1}^{\infty} (-1)^k a_k\) is \(\left| S_n - S_{n-1} \right| = a_n\text{.}\) Since the partial sums oscillate around the sum \(S\) of the series, it follows that
\begin{equation*} \left| S - S_n \right| \lt a_n\text{.} \end{equation*}So the \(n\)th partial sum of a convergent alternating series \(\sum_{k=1}^{\infty} (-1)^k a_k\) approximates the actual sum of the series to within \(a_n\text{.}\)
Exercises 8.4.6 Exercises
1. Testing convergence for an alternating series.
(a) Carefully determine the convergence of the series \(\sum\limits_{n=1}^{\infty} {(-1)^n\over 3n}\text{.}\) The series is
absolutely convergent
conditionally convergent
divergent
(b) Carefully determine the convergence of the series \(\sum\limits_{n=1}^{\infty} {(-1)^n\over 3^n}\text{.}\) The series is
absolutely convergent
conditionally convergent
divergent
2. Estimating the sum of an alternating series.
For the following alternating series,
\(\displaystyle \sum_{n=1}^\infty a_n = 0.5 - \frac{(0.5)^3}{3!} + \frac{(0.5)^5}{5!} - \frac{(0.5)^7}{7!} + ...\)
how many terms do you have to compute in order for your approximation (your partial sum) to be within 0.0000001 from the convergent value of that series?
3. Estimating the sum of a different alternating series.
For the following alternating series,
\(\displaystyle \sum_{n=1}^\infty a_n = 1 - \frac{(0.4)^2}{2!} + \frac{(0.4)^4}{4!} - \frac{(0.4)^6}{6!} + \frac{(0.4)^8}{8!} - ...\)
how many terms do you have to go for your approximation (your partial sum) to be within 0.0000001 from the convergent value of that series?
4. Estimating the sum of one more alternating series.
For the following alternating series,
\(\displaystyle \sum_{n=1}^\infty a_n = 1 - \frac{1}{10} + \frac{1}{100} - \frac{1}{1000} + ...\)
how many terms do you have to go for your approximation (your partial sum) to be within 1e-08 from the convergent value of that series?
5.
Conditionally convergent series converge very slowly. As an example, consider the famous formula
\begin{equation*} \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{k=0}^{\infty} (-1)^{k} \frac{1}{2k+1}\text{.} \end{equation*}
In theory, the partial sums of this series could be used to approximate \(\pi\text{.}\)
Show that the series in (8.4.3) converges conditionally.
Let \(S_n\) be the \(n\)th partial sum of the series in (8.4.3). Calculate the error in approximating \(\frac{\pi}{4}\) with \(S_{100}\) and explain why this is not a very good approximation.
Determine the number of terms it would take in the series (8.4.3) to approximate \(\frac{\pi}{4}\) to 10 decimal places. (The fact that it takes such a large number of terms to obtain even a modest degree of accuracy is why we say that conditionally convergent series converge very slowly.)
6.
We have shown that if \(\sum (-1)^{k+1} a_k\) is a convergent alternating series, then the sum \(S\) of the series lies between any two consecutive partial sums \(S_n\) and \(S_{n+1}\text{.}\) This suggests that the average \(\frac{S_n+S_{n+1}}{2}\) is a better approximation to \(S\) than is \(S_n\text{.}\)
Show that \(\frac{S_n+S_{n+1}}{2} = S_n + \frac{1}{2}(-1)^{n+2} a_{n+1}\text{.}\)

Use this revised approximation in (a) with \(n = 20\) to approximate \(\ln(2)\) given that
\begin{equation*} \ln(2) = \sum_{k=1}^{\infty} (1)^{k+1} \frac{1}{k}\text{.} \end{equation*}Compare this to the approximation using just \(S_{20}\text{.}\) For your convenience, \(S_{20} = \frac{155685007}{232792560}\text{.}\)
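A quick numerical check of this averaging idea (a sketch only; exact arithmetic via Python's `fractions` module, and the comparison to \(\ln(2)\) uses the value given above):

```python
import math
from fractions import Fraction

# exact 20th partial sum of the alternating harmonic series
S20 = sum(Fraction((-1) ** (k + 1), k) for k in range(1, 21))
assert abs(float(S20) - 155685007 / 232792560) < 1e-12  # matches the given value

# the averaged estimate from part (a) with n = 20: S_20 + (1/2)(-1)^22 a_21
averaged = float(S20) + 0.5 * (1 / 21)

print(abs(math.log(2) - float(S20)))  # error of S_20 alone: roughly 0.024
print(abs(math.log(2) - averaged))    # error of the average: roughly 0.0006
```

Averaging two consecutive partial sums cancels most of the oscillation, improving the estimate by more than an order of magnitude at no extra cost.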
7.
In this exercise, we examine one of the conditions of the Alternating Series Test. Consider the alternating series
\begin{equation*} 1 - 1 + \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{9} + \frac{1}{4} - \frac{1}{16} + \cdots\text{,} \end{equation*}
where the terms are selected alternately from the sequences \(\left\{\frac{1}{n}\right\}\) and \(\left\{-\frac{1}{n^2}\right\}\text{.}\)
Explain why the \(n\)th term of the given series converges to 0 as \(n\) goes to infinity.

Rewrite the given series by grouping terms in the following manner:
\begin{equation*} (1 - 1) + \left(\frac{1}{2} - \frac{1}{4}\right) + \left(\frac{1}{3} - \frac{1}{9}\right) + \left(\frac{1}{4} - \frac{1}{16}\right) + \cdots\text{.} \end{equation*}Use this regrouping to determine if the series converges or diverges.
Explain why the condition that the sequence \(\{a_n\}\) decreases to a limit of 0 is included in the Alternating Series Test.
8.
Conditionally convergent series exhibit interesting and unexpected behavior. In this exercise we examine the conditionally convergent alternating harmonic series \(\sum_{k=1}^{\infty} \frac{(1)^{k+1}}{k}\) and discover that addition is not commutative for conditionally convergent series. We will also encounter Riemann's Theorem concerning rearrangements of conditionally convergent series. Before we begin, we remind ourselves that
\begin{equation*} \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} = \ln(2)\text{,} \end{equation*}
a fact which will be verified in a later section.

First we make a quick analysis of the positive and negative terms of the alternating harmonic series.
Show that the series \(\sum_{k=1}^{\infty} \frac{1}{2k}\) diverges.
Show that the series \(\sum_{k=1}^{\infty} \frac{1}{2k+1}\) diverges.
Based on the results of the previous parts of this exercise, what can we say about the sums \(\sum_{k=C}^{\infty} \frac{1}{2k}\) and \(\sum_{k=C}^{\infty} \frac{1}{2k+1}\) for any positive integer \(C\text{?}\) Be specific in your explanation.

Recall addition of real numbers is commutative; that is
\begin{equation*} a + b = b + a \end{equation*}for any real numbers \(a\) and \(b\text{.}\) This property is valid for any sum of finitely many terms, but does this property extend when we add infinitely many terms together?
The answer is no, and something even more odd happens. Riemann's Theorem (after the nineteenthcentury mathematician Georg Friedrich Bernhard Riemann) states that a conditionally convergent series can be rearranged to converge to any prescribed sum. More specifically, this means that if we choose any real number \(S\text{,}\) we can rearrange the terms of the alternating harmonic series \(\sum_{k=1}^{\infty} \frac{(1)^{k+1}}{k}\) so that the sum is \(S\text{.}\) To understand how Riemann's Theorem works, let's assume for the moment that the number \(S\) we want our rearrangement to converge to is positive. Our job is to find a way to order the sum of terms of the alternating harmonic series to converge to \(S\text{.}\)

Explain how we know that, regardless of the value of \(S\text{,}\) we can find a partial sum \(P_1\)
\begin{equation*} P_1 = \sum_{k=0}^{n_1} \frac{1}{2k+1} = 1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{2n_1+1} \end{equation*}of the positive terms of the alternating harmonic series that equals or exceeds \(S\text{.}\) Let
\begin{equation*} S_1 = P_1\text{.} \end{equation*} 
Explain how we know that, regardless of the value of \(S_1\text{,}\) we can find a partial sum \(N_1\)
\begin{equation*} N_1 = -\sum_{k=1}^{m_1} \frac{1}{2k} = -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{2m_1} \end{equation*}so that
\begin{equation*} S_2 = S_1 + N_1 \leq S\text{.} \end{equation*} 
Explain how we know that, regardless of the value of \(S_2\text{,}\) we can find a partial sum \(P_2\)
\begin{equation*} P_2 = \sum_{k=n_1+1}^{n_2} \frac{1}{2k+1} = \frac{1}{2(n_1+1)+1} + \frac{1}{2(n_1+2)+1} + \cdots + \frac{1}{2n_2+1} \end{equation*}of the remaining positive terms of the alternating harmonic series so that
\begin{equation*} S_3 = S_2 + P_2 \geq S\text{.} \end{equation*} 
Explain how we know that, regardless of the value of \(S_3\text{,}\) we can find a partial sum
\begin{equation*} N_2 = -\sum_{k=m_1+1}^{m_2} \frac{1}{2k} = -\frac{1}{2(m_1+1)} - \frac{1}{2(m_1+2)} - \cdots - \frac{1}{2m_2} \end{equation*}of the remaining negative terms of the alternating harmonic series so that
\begin{equation*} S_4 = S_3 + N_2 \leq S\text{.} \end{equation*} Explain why we can continue this process indefinitely and find a sequence \(\{S_n\}\) whose terms are partial sums of a rearrangement of the terms in the alternating harmonic series so that \(\lim_{n \to \infty} S_n = S\text{.}\)
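The greedy process described in the steps above can be simulated directly. The helper below is a hypothetical sketch (the function name and target values are our own choices, not part of the exercise): while at or below the target \(S\) it spends unused positive terms \(1, \frac{1}{3}, \frac{1}{5}, \ldots\text{,}\) and while above \(S\) it spends unused negative terms \(-\frac{1}{2}, -\frac{1}{4}, \ldots\)

```python
def rearranged_partial_sums(S, num_terms):
    """Greedily rearrange the alternating harmonic series toward the target S."""
    pos, neg = 1, 2   # next unused odd / even denominators
    total = 0.0
    history = []
    for _ in range(num_terms):
        if total <= S:
            total += 1.0 / pos   # add the next positive term 1/pos
            pos += 2
        else:
            total -= 1.0 / neg   # add the next negative term -1/neg
            neg += 2
        history.append(total)
    return history

sums = rearranged_partial_sums(1.5, 200000)
print(sums[-1])  # close to 1.5, even though the usual order sums to ln(2)
```

Because the terms being spent shrink to zero, each crossing of the target overshoots by less and less, which is exactly why the rearranged partial sums converge to the prescribed value \(S\text{.}\)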
