11  Recursive algorithms

12 Key concepts

12.1 Algorithmic recursion

Algorithmic recursion refers to the process of defining a problem or solution in terms of a smaller version of itself. In other words, a function or subroutine is designed to call itself repeatedly until a base case is reached. This is known as a recursive function or a recursive algorithm.

The basic idea behind algorithmic recursion is to break down a complex problem into simpler subproblems and solve them recursively. Each recursive call solves a smaller instance of the same problem until the base case is reached, at which point the solution is returned.

A classic example of a recursive algorithm is the factorial function, which calculates the product of all positive integers up to a given number. The factorial of a number n is defined as n * (n-1) * (n-2) * … * 1. The recursive implementation of the factorial function is straightforward: if n is 0 or 1, the function returns 1; otherwise, it multiplies n by the factorial of (n-1).
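The factorial definition above can be written directly as a short Python sketch:

```python
def factorial(n):
    """Compute n! recursively: n! = n * (n-1)!, with 0! = 1! = 1."""
    if n == 0 or n == 1:              # base case: stops the recursion
        return 1
    return n * factorial(n - 1)       # recursive case: smaller instance n-1
```

For example, factorial(5) evaluates to 5 * 4 * 3 * 2 * 1 = 120.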

Recursive algorithms are widely used in computer science, especially in data structures and algorithms, and can be very efficient for problems that naturally break down into smaller subproblems. However, they can also be tricky to implement correctly and efficiently, and may require careful attention to memory usage and to avoiding stack overflow.

12.2 The structure of recursive algorithms

The structure of a recursive algorithm typically consists of two parts: the base case and the recursive case.

  • Base case: This is the simplest form of the problem that can be solved without recursion. It is the termination condition that stops the recursion from continuing indefinitely. The base case is important because without it, the recursive algorithm would run forever.

  • Recursive case: This is the more complex form of the problem that can be broken down into simpler subproblems. In the recursive case, the problem is solved by recursively calling the same function with smaller instances of the problem until the base case is reached.

In general, a recursive algorithm follows these steps:

  1. Check if the current problem instance is a base case. If it is, return the solution directly.

  2. If the current problem instance is not a base case, break it down into smaller subproblems.

  3. Solve each subproblem recursively by calling the same function on the subproblem.

  4. Combine the solutions of the subproblems to get the solution to the original problem.

  5. Return the final solution.

It’s important to note that recursive algorithms can have multiple base cases and multiple recursive cases depending on the complexity of the problem. In addition, care must be taken to ensure that the recursive algorithm terminates for all possible inputs.
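The steps above can be sketched with a small divide-and-conquer example; the function name max_element below is an illustrative choice, not part of the text:

```python
def max_element(items):
    """Return the maximum of a non-empty list using divide and conquer."""
    if len(items) == 1:                      # step 1: base case, solved directly
        return items[0]
    mid = len(items) // 2                    # step 2: break into smaller subproblems
    left = max_element(items[:mid])          # step 3: solve each subproblem recursively
    right = max_element(items[mid:])
    return left if left >= right else right  # steps 4-5: combine and return
```

Here the base case is a one-element list, and the recursive case splits the list in half and combines the two sub-results with a single comparison.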

12.3 Recurrence equations

Recurrence equations, also known as recurrence relations, are equations that describe a sequence of numbers in terms of one or more of the preceding terms in the sequence. They are often used in computer science and mathematics to describe the time or space complexity of recursive algorithms.

In the context of recursive algorithms, recurrence equations are used to analyze the performance of the algorithm and determine its time or space complexity. The recurrence equation expresses the time or space required to solve a problem of size n in terms of the time or space required to solve smaller subproblems.

For example, let’s consider the recursive implementation of the Fibonacci sequence:

fibonacci(n):
  if n == 0 or n == 1:
    return n
  else:
    return fibonacci(n-1) + fibonacci(n-2)

The recurrence equation for the time complexity of this algorithm can be expressed as follows:

T(n) = T(n-1) + T(n-2) + O(1)

where T(n) is the time required to compute the nth Fibonacci number, and the O(1) term represents the constant amount of work done in each call (the comparison and the addition).

Solving the recurrence equation involves finding a closed-form expression for T(n) in terms of n. In this case, the solution is O(2^n) (more precisely, Θ(φ^n), where φ ≈ 1.618 is the golden ratio), which means that the time required to compute the nth Fibonacci number grows exponentially with n.

Recurrence equations can be useful for understanding the performance of recursive algorithms and for comparing different algorithms for the same problem. However, solving them can be challenging and often requires advanced mathematical techniques such as generating functions or the master theorem.
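The exponential growth predicted by the recurrence can be observed directly by counting the recursive calls the naive implementation makes; the call-counting variant below is our own illustrative sketch, not part of the original algorithm:

```python
def fibonacci_calls(n):
    """Return (fib(n), number of calls made) for the naive recursion."""
    if n == 0 or n == 1:
        return n, 1                      # one call for the base case
    a, ca = fibonacci_calls(n - 1)
    b, cb = fibonacci_calls(n - 2)
    return a + b, ca + cb + 1            # calls mirror T(n) = T(n-1) + T(n-2) + 1
```

For example, computing the 10th Fibonacci number (55) already takes 177 calls; each additional unit of n multiplies the call count by roughly the golden ratio, matching the exponential bound.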

12.4 Master Theorem

The master theorem is a mathematical tool used to analyze the time complexity of divide-and-conquer algorithms, which are a common type of recursive algorithm. It provides a formula for the time complexity of such algorithms in terms of their recursion depth and the size of their subproblems.

The master theorem applies to recurrences of the form T(n) = a * T(n/b) + O(n^d), where a ≥ 1 is the number of subproblems, each subproblem has size n/b (with b > 1), and O(n^d) is the cost of dividing the problem and combining the subproblem solutions. The theorem distinguishes three cases, depending on how the combining cost n^d compares with the critical exponent log_b(a):

  • If d < log_b(a), the cost is dominated by the leaves of the recursion tree, and T(n) = O(n^log_b(a)).

  • If d = log_b(a), the cost is spread evenly across the levels of the recursion tree, and T(n) = O(n^d * log(n)).

  • If d > log_b(a), the cost is dominated by the work done at the root, and T(n) = O(n^d).

For example, merge sort satisfies T(n) = 2 * T(n/2) + O(n), so a = 2, b = 2, and d = 1 = log_2(2); the second case gives T(n) = O(n log n).

The master theorem is a powerful tool for analyzing the time complexity of divide-and-conquer algorithms, and can simplify the process of determining their time complexity. However, it is important to note that the master theorem only applies to a specific class of recursive algorithms and may not be applicable to all types of recursive algorithms.
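The case analysis can be made concrete with a small sketch; the function master_case below is our own helper that classifies a recurrence T(n) = a*T(n/b) + O(n^d) and reports which asymptotic bound applies:

```python
import math

def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + O(n^d) by the master theorem."""
    crit = math.log(a, b)               # critical exponent log_b(a)
    if math.isclose(d, crit):
        return "Theta(n^d * log n)"     # work balanced across levels
    if d < crit:
        return "Theta(n^log_b(a))"      # leaves dominate
    return "Theta(n^d)"                 # root dominates
```

For instance, merge sort (a=2, b=2, d=1) falls into the balanced case, while T(n) = 4*T(n/2) + O(n) is leaf-dominated and solves to Theta(n^2).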

13 Learning outcomes

13.1 Trace and write recursive algorithms

Tracing and writing recursive algorithms can be a bit challenging, but there are some general steps you can follow to help you understand and write recursive algorithms:

  1. Identify the base case: The first step in writing a recursive algorithm is to identify the base case, which is the simplest form of the problem that can be solved without recursion. The base case is important because it stops the recursion from continuing indefinitely.

  2. Define the recursive case: Once you have identified the base case, you need to define the recursive case, which is the more complex form of the problem that can be broken down into simpler subproblems. In the recursive case, the problem is solved by recursively calling the same function with smaller instances of the problem until the base case is reached.

  3. Write the function signature: After you have identified the base case and recursive case, you can write the function signature, which includes the function name, the input parameters, and the return type.

  4. Write the function body: With the function signature in place, you can now write the function body. The function body should start by checking whether the current problem instance is a base case. If it is, the function should return the solution directly. If not, the function should break the problem down into smaller subproblems and solve each subproblem recursively by calling the same function on it.

  5. Combine the solutions: Once the recursive calls have returned their solutions, the final step is to combine the solutions of the subproblems to get the solution to the original problem. This may involve some additional processing or computation, depending on the nature of the problem.

  6. Test the algorithm: Finally, it is important to test the recursive algorithm with a range of inputs to ensure that it produces correct and expected results.

When tracing recursive algorithms, it can be helpful to draw a recursion tree to visualize the flow of execution and the order in which the recursive calls are made. This can also be useful for identifying potential problems with the algorithm, such as infinite recursion or redundant computations.
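Such a trace can also be produced programmatically; the sketch below (our own traced_factorial helper) prints each call and return with indentation showing the recursion depth:

```python
def traced_factorial(n, depth=0):
    """Factorial with a printed trace of calls and returns."""
    print("  " * depth + f"call factorial({n})")
    if n <= 1:
        result = 1                                       # base case
    else:
        result = n * traced_factorial(n - 1, depth + 1)  # recursive case
    print("  " * depth + f"return {result}")
    return result
```

Calling traced_factorial(3) prints the nested call/return sequence, making it easy to see that each call waits for its inner call to return before combining results.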

13.2 Write the recursive version of an iterative algorithm using pseudocode

Here’s an example of an iterative algorithm that computes the sum of the elements in an array:

// Iterative algorithm for computing the sum of an array
sum_array(array):
    sum = 0
    for element in array:
        sum = sum + element
    return sum

To convert this iterative algorithm into recursive form, we can use a recursive helper function that takes the current index of the array as a parameter and computes the sum of the elements from that index onward. The base case occurs when the current index is greater than or equal to the length of the array, at which point the function returns 0. The recursive case returns the sum of the current element and the result of a recursive call to the same function with the next index as a parameter.

Here’s the recursive version of the algorithm:

// Recursive algorithm for computing the sum of an array
sum_array_helper(array, index):
    if index >= len(array):
        return 0
    else:
        return array[index] + sum_array_helper(array, index + 1)
        
sum_array(array):
    return sum_array_helper(array, 0)

In this recursive algorithm, sum_array_helper is the recursive helper function that takes an array and an index as input and computes the sum of the remaining elements in the array. The sum_array function is the main recursive function that calls sum_array_helper with an initial index of 0.

When converting an iterative algorithm to a recursive algorithm, the key is to identify the base case and the recursive case, just as we did in this example. In general, the base case should be the simplest form of the problem that can be solved directly, while the recursive case should be a more complex form of the problem that can be broken down into smaller subproblems. Once you have identified the base case and the recursive case, you can write a recursive helper function that calls itself with smaller instances of the problem until the base case is reached.

It’s worth noting that not all iterative algorithms can be easily converted to recursive form, and in some cases the resulting recursive algorithm may be less efficient or more difficult to implement. However, for certain types of problems, the recursive approach can offer a more elegant and intuitive solution.


13.3 Calculate the time complexity of recursive algorithms

Calculating the time complexity of recursive algorithms can be more challenging than for iterative algorithms, since the time complexity depends on the number of recursive calls and the size of the subproblems at each level of the recursion. However, there are some general steps you can follow to calculate the time complexity of recursive algorithms:

  1. Identify the base case: The first step is to identify the base case, which is the simplest form of the problem that can be solved directly. The base case is important because it stops the recursion from continuing indefinitely.

  2. Determine the number of recursive calls: Once you have identified the base case, you need to determine the number of recursive calls made by the algorithm at each level of the recursion. This depends on the size of the subproblems being solved and the specific algorithm being used.

  3. Analyze the time complexity of each recursive call: For each recursive call, you need to analyze the time complexity of the operations being performed. This may involve analyzing loops, conditional statements, and other operations.

  4. Combine the time complexity of each level of the recursion: Once you have analyzed the time complexity of each recursive call, you need to combine the time complexity of each level of the recursion. This may involve using recurrence relations or other mathematical tools to compute the total time complexity.

  5. Simplify the time complexity if possible: Finally, you may be able to simplify the time complexity if there is a pattern or formula that describes the time complexity of each level of the recursion.

It’s worth repeating that calculating the time complexity of recursive algorithms can be more challenging than for iterative algorithms, since the time complexity can depend on many factors, including the number of recursive calls, the size of the subproblems, and the specific operations being performed. In some cases, it may be necessary to use more advanced mathematical tools, such as the master theorem, to analyze the time complexity of recursive algorithms.
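As a worked example, the recursive sum_array from Section 13.2 satisfies T(n) = T(n-1) + O(1), which unrolls to O(n). The sketch below verifies this linear behaviour by counting the recursive calls; the call-counting wrapper is our own, for illustration:

```python
def sum_array_counted(array, index=0):
    """Recursive array sum that also returns the number of calls made."""
    if index >= len(array):
        return 0, 1                            # base case: one call
    rest, calls = sum_array_counted(array, index + 1)
    return array[index] + rest, calls + 1      # T(n) = T(n-1) + O(1)
```

For an array of length n the call count is n + 1, confirming the linear bound.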