5.4. The Three Laws of Recursion

Like the robots of Asimov, all recursive algorithms must obey three important laws:

  1. A recursive algorithm must have a base case.

  2. A recursive algorithm must change its state and move toward the base case.

  3. A recursive algorithm must call itself, recursively.

Let’s look at each one of these laws in more detail and see how each is used in the vectsum algorithm. First, a base case is the condition that allows the algorithm to stop recursing. A base case is typically a problem that is small enough to solve directly. In the vectsum algorithm the base case is a vector of length 1.
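
The exact code appears in ActiveCode 2; a minimal sketch in that spirit (the parameter name and slicing approach here are illustrative, not necessarily the book’s exact code) might look like this:

    #include <vector>

    // Sketch of a recursive vector sum; see ActiveCode 2 for the book's version.
    // Assumes a non-empty vector, as in the text.
    int vectsum(std::vector<int> numVect) {
        if (numVect.size() == 1) {
            // Base case: a vector of length 1 is small enough to solve directly.
            return numVect[0];
        }
        // Recursive case: add the first element to the sum of the rest.
        std::vector<int> rest(numVect.begin() + 1, numVect.end());
        return numVect[0] + vectsum(rest);
    }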

To obey the second law, we must arrange for a change of state that moves the algorithm toward the base case. A change of state means that some data that the algorithm is using is modified. Usually the data that represents our problem gets smaller in some way. In the vectsum algorithm our primary data structure is a vector, so we must focus our state-changing efforts on the vector. Since the base case is a vector of length 1, a natural progression toward the base case is to shorten the vector. This is exactly what happens on line 5 of ActiveCode 2 when we call vectsum with a shorter vector.
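
To see the state change at work, here is a hypothetical trace of the sketch above on a five-element vector; each recursive call receives a vector one element shorter, until the base case is reached:

    #include <iostream>
    #include <vector>

    // Same sketch as above, repeated so this trace example stands alone.
    int vectsum(std::vector<int> numVect) {
        if (numVect.size() == 1) return numVect[0];               // base case
        std::vector<int> rest(numVect.begin() + 1, numVect.end());
        return numVect[0] + vectsum(rest);                        // shorter vector
    }

    int main() {
        // Each recursive call receives a vector one element shorter:
        //   vectsum({1,3,5,7,9}) = 1 + vectsum({3,5,7,9})
        //                        = 1 + (3 + vectsum({5,7,9}))
        //                        = 1 + (3 + (5 + vectsum({7,9})))
        //                        = 1 + (3 + (5 + (7 + vectsum({9}))))  // base case
        std::vector<int> numbers = {1, 3, 5, 7, 9};
        std::cout << vectsum(numbers) << std::endl;  // prints 25
        return 0;
    }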

The final law is that the algorithm must call itself. This is the very definition of recursion. Recursion is a confusing concept to many beginning programmers. As a novice programmer, you have learned that functions are good because you can take a large problem and break it up into smaller problems. The smaller problems can be solved by writing a function to solve each problem. When we talk about recursion it may seem that we are talking ourselves in circles. We have a problem to solve with a function, but that function solves the problem by calling itself! But the logic is not circular at all; the logic of recursion is an elegant expression of solving a problem by breaking it down into smaller and easier problems.

It is important to note that even when a recursive function obeys all three laws, it may still take an unrealistic amount of time to compute (and thus not be particularly useful). A great example of this is the Ackermann function, named after Wilhelm Ackermann, whose recursion expands at an explosive rate. Consider ackermann(4, 3): the function calls itself well over 100 billion times, and its expected completion time falls after the predicted end of the universe. It will eventually arrive at an answer, but you probably won’t be around to see it. This goes to show that the efficiency of a recursive function depends largely on how it is implemented.
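
For reference, here is a sketch of the standard two-argument Ackermann function (a common textbook formulation, not code from this book). Note that it satisfies all three laws, yet remains hopelessly impractical for even small arguments:

    #include <cstdint>

    // Standard two-argument Ackermann function. It has a base case (m == 0),
    // its arguments move toward that base case, and it calls itself; yet
    // ackermann(4, 3) would make far too many recursive calls to ever finish.
    std::uint64_t ackermann(std::uint64_t m, std::uint64_t n) {
        if (m == 0) return n + 1;                      // base case
        if (n == 0) return ackermann(m - 1, 1);        // shrink m
        return ackermann(m - 1, ackermann(m, n - 1));  // doubly recursive step
    }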

In the remainder of this chapter we will look at more examples of recursion. In each case we will focus on designing a solution to a problem by using the three laws of recursion.
