
Applied Combinatorics

Section 9.3 Advancement Operators

Much of our motivation for solving recurrence equations comes from an analogous problem in continuous mathematics—differential equations. You don’t need to have studied these beasts before in order to understand what we will do in the remainder of this chapter, but if you have, the motivation for how we tackle the problems will be clearer. As their name suggests, differential equations involve derivatives, which we will denote using “operator” notation by \(Df\) instead of the Leibniz notation \(df/dx\text{.}\) In our notation, the second derivative is \(D^2 f\text{,}\) the third is \(D^3 f\text{,}\) and so on. Consider the following example.

Example 9.6.

Solve the equation
\begin{equation*} Df = 3f \end{equation*}
if \(f(0) = 2\text{.}\)
Solution.
Even if you’ve not studied differential equations, you should recognize that this question is really just asking us to find a function \(f\) such that \(f(0)=2\) and its derivative is three times itself. Let’s ignore the initial condition \(f(0)=2\) for the moment and focus on the meat of the problem. What function, when you take its derivative, changes only by being multiplied by \(3\text{?}\) You should quickly think of the function \(e^{3x}\text{,}\) since \(D(e^{3x}) = 3e^{3x}\text{,}\) which has exactly the property we desire. Of course, for any constant \(c\text{,}\) the function \(ce^{3x}\) also satisfies this property, and this gives us the hook we need in order to satisfy our initial condition. We have \(f(x) = ce^{3x}\) and want to find \(c\) such that \(f(0)=2\text{.}\) Now \(f(0) = c\cdot 1\text{,}\) so \(c=2\) does the trick and the solution to this very simple differential equation is \(f(x) = 2e^{3x}\text{.}\)
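As a quick check, differentiating the proposed solution confirms both the equation and the initial condition:
\begin{equation*} D(2e^{3x}) = 6e^{3x} = 3(2e^{3x}) = 3f(x),\qquad f(0) = 2e^{0} = 2. \end{equation*}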
With differential equations, we apply the differential operator \(D\) to differentiable (usually infinitely differentiable) functions. For recurrence equations, we consider the vector space \(V\) whose elements are functions from the set \(\ints\) of integers to the set \(\complexes\) of complex numbers. We then consider a function \(A:V\longrightarrow V\text{,}\) called the advancement operator, and defined by \(A f(n) = f(n+1)\text{.}\) (By various tricks and sleight of hand, we can extend a sequence \(\{a_n\colon n\geq n_0\}\) to be a function whose domain is all of \(\ints\text{,}\) so this technique will apply to our problems.) More generally, \(A^p f(n)= f(n+p)\) when \(p\) is a positive integer.

Example 9.7.

Let \(f\in V\) be defined by \(f(n)=7n-9\text{.}\) Then we apply the advancement operator polynomial \(3A^2-5A+4\) to \(f\) with \(n=0\) as follows:
\begin{equation*} (3A^2-5A+4)f(0)=3f(2) - 5f(1) +4f(0)= 3(5)-5(-2)+4(-9)=-11. \end{equation*}
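Here the individual values come from evaluating \(f\) at \(2\text{,}\) \(1\text{,}\) and \(0\text{:}\)
\begin{equation*} f(2) = 7\cdot 2 - 9 = 5,\qquad f(1) = 7\cdot 1 - 9 = -2,\qquad f(0) = 7\cdot 0 - 9 = -9. \end{equation*}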
As an analogue of Example 9.6, consider the following simple example involving the advancement operator.

Example 9.8.

Suppose that the sequence \(\{s_n\colon n\geq 0\}\) satisfies \(s_0 = 3\) and \(s_{n+1} = 2s_{n}\) for \(n\geq 0\text{.}\) Find an explicit formula for \(s_n\text{.}\)
Solution.
First, let’s write the question in terms of the advancement operator. We can define a function \(f(n) = s_n\) for \(n\geq 0\text{,}\) and then the information given becomes that \(f(0)=3\) and
\begin{equation*} Af(n) = 2f(n),\qquad n\geq 0. \end{equation*}
What function has the property that when we advance it, i.e., evaluate it at \(n+1\text{,}\) it gives twice the value that it takes at \(n\text{?}\) The first function that comes into your mind should be \(2^n\text{.}\) Of course, just like with our differential equation, for any constant \(c\text{,}\) \(c2^n\) also has this property. This suggests that if we take \(f(n) = c2^n\text{,}\) we’re well on our way to solving our problem. Since we know that \(f(0) = 3\text{,}\) we have \(f(0) = c2^0 = c\text{,}\) so \(c= 3\text{.}\) Therefore, \(s_n = f(n) = 3\cdot 2^n\) for \(n\geq 0\text{.}\) This clearly satisfies our initial condition, and now we can check that it also satisfies our advancement operator equation:
\begin{equation*} Af(n) = 3\cdot 2^{n+1} = 3\cdot 2\cdot 2^n = 2\cdot (3\cdot 2^n) = 2\cdot f(n). \end{equation*}
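Computing the first few terms directly from the recurrence also matches the formula:
\begin{equation*} s_0 = 3,\quad s_1 = 2\cdot 3 = 6,\quad s_2 = 2\cdot 6 = 12,\quad s_3 = 2\cdot 12 = 24, \end{equation*}
which are exactly \(3\cdot 2^0\text{,}\) \(3\cdot 2^1\text{,}\) \(3\cdot 2^2\text{,}\) and \(3\cdot 2^3\text{.}\)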
Before moving on to develop general methods for solving advancement operator equations, let’s say a word about why we keep talking in terms of operators and why we mentioned that we can view any sequence as a function with domain \(\ints\text{.}\) If you’ve studied any linear algebra, you probably remember learning that the set of all infinitely-differentiable functions on the real line form a vector space and that differentiation is a linear operator on those functions. Our analogy to differential equations holds up just fine here: functions from \(\ints\) to \(\complexes\) form a vector space, and \(A\) is a linear operator on that space. We won’t dwell on the technical aspects of this, and no knowledge of linear algebra is required to understand our development of techniques to solve recurrence equations. However, if you’re interested in placing everything we do on more rigorous footing, we discuss this further in Section 9.5.

Subsection 9.3.1 Constant Coefficient Equations

It is easy to see that a linear recurrence equation can be conveniently rewritten using a polynomial \(p(A)\) of the advancement operator:
\begin{equation} p(A)f=(c_0A^{k}+ c_1A^{k-1} + c_2A^{k-2} + \dots+c_k)f = g.\tag{9.3.1} \end{equation}
In (9.3.1), we intend that \(k\ge1\) is an integer, \(g\) is a fixed vector (function) from \(V\text{,}\) and \(c_0,c_1,\dots, c_k\) are constants with \(c_0,c_k\neq0\text{.}\) Note that since \(c_0\neq0\text{,}\) we can divide both sides by \(c_0\text{,}\) i.e., we may in fact assume that \(c_0=1\) whenever convenient to do so.
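For example, the recurrence \(f(n+2) = f(n+1) + f(n)\) satisfied by the Fibonacci numbers can be rewritten in the form of (9.3.1) with \(g=0\) as
\begin{equation*} (A^2 - A - 1)f = 0, \end{equation*}
so here \(k=2\text{,}\) \(c_0 = 1\text{,}\) \(c_1 = -1\text{,}\) and \(c_2 = -1\text{.}\)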

Subsection 9.3.2 Roots and Factors

The polynomial \(p(A)\) can be analyzed like any other polynomial. It has roots and factors, and although these may be difficult to determine, we know they exist. In fact, if the degree of \(p(A)\) is \(k\text{,}\) we know that over the field of complex numbers, \(p(A)\) has \(k\) roots, counting multiplicities. Note that since we assume that \(c_k\neq0\text{,}\) all the roots of the polynomial \(p\) are non-zero.
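For example, the polynomial \(A^2 - 5A + 6\) factors as
\begin{equation*} A^2 - 5A + 6 = (A-2)(A-3), \end{equation*}
so its two roots are \(2\) and \(3\text{;}\) consistent with the observation above, its constant term \(6\) is nonzero and neither root is zero.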

Subsection 9.3.3 What’s Special About Zero?

Why have we limited our attention to recurrence equations of the form \(p(A)f = g\) where the constant term in \(p\) is non-zero? Let’s consider the alternative for a moment. Suppose that the constant term of \(p\) is zero and that \(0\) is a root of \(p\) of multiplicity \(m\text{.}\) Then \(p(A) = A^mq(A)\) where the constant term of \(q\) is non-zero. And the equation \(p(A)f=g\) can then be written as \(A^mq(A)f=g\text{.}\) To solve this equation, we consider instead the simpler problem \(q(A)f=g\text{.}\) Then \(h\) is a solution of the original problem if and only if the function \(h'\) defined by \(h'(n) = h(n+m)\) is a solution to the simpler problem. In other words, solutions to the original problem are just translations of solutions to the smaller one, so we will for the most part continue to focus on advancement operator equations where \(p(A)\) has nonzero constant term, since being able to solve such problems is all we need in order to solve the larger class of problems.
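For instance, if \(p(A) = A^3 - 3A^2 = A^2(A-3)\text{,}\) then \(m=2\) and \(q(A) = A-3\text{,}\) and for any \(h\) we have
\begin{equation*} A^2(A-3)h(n) = (A-3)h(n+2) = (A-3)h'(n), \end{equation*}
so \(h\) satisfies \(A^2(A-3)h = g\) precisely when \(h'\) satisfies \((A-3)h' = g\text{.}\)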
As a special case, consider the equation \(A^m f =g\text{.}\) This requires \(f(n+m)=g(n)\text{,}\) i.e., \(f\) is just a translation of \(g\text{.}\)
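For example, if \(m=2\) and \(g(n) = n^2\text{,}\) then \(f(n) = g(n-2) = (n-2)^2\) is a solution, since
\begin{equation*} A^2 f(n) = f(n+2) = n^2 = g(n). \end{equation*}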

Reading Questions 9.3.4

1.

For this question, we consider the recurrence equation \(g(n+2) = 5g(n+1) -2g(n)\text{,}\) which we have written in function notation rather than using the subscript-heavy sequence notation. Rewrite this recurrence equation as an advancement operator equation in the style of (9.3.1): \(p(A)g = h\text{.}\)
Hint.
In Example 9.8, the form we would have arrived at by getting all copies of \(f(n)\) on one side would be \((A-2)f = 0\text{,}\) so \(p(A) = A-2\) in that example.