Recursion & Dynamic Programming

Recursion is when a function calls itself. Use recursion when your problem is composed of similar subproblems: smaller versions of the original problem that have the same structure. Every recursive function must have two parts, illustrated in the sketch after this list:

  • Recursive step: The part of your function that turns the given problem into a subproblem. It then solves the subproblem by calling itself.
  • Base case: At some point your subproblem will be so small (or simple) that you no longer need to "call yourself" to solve it. This is the base case. It is solved without recursion and should be easy to solve.

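Here is that sketch in Java: the classic factorial example, separate from the soda pack problem that follows.

// Computes n! = n * (n - 1) * ... * 1 for n >= 0.
int factorial(int n) {
  // Base case: 0! = 1, solved without any recursion
  if (n == 0) {
    return 1;
  }
  // Recursive step: reduce to the smaller problem factorial(n - 1)
  return n * factorial(n - 1);
}
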
Example: Sodapack

Given different size packs of soda, write a function that returns all the different combinations of pack sizes you could use to buy exactly n sodas. Each pack size can be used multiple times.

For example:

Input:
packs = [ 1, 2, 3, 5 ]
n = 5
Output:
[ 1, 1, 1, 1, 1 ]
[ 1, 1, 1, 2 ]
[ 1, 2, 2 ]
[ 1, 1, 3 ]
[ 2, 3 ]
[ 5 ]

Order of output does not matter.

Solution 1

How can we solve our current problem by first solving a smaller problem? Our current problem is packs = [ 1, 2, 3, 5 ], n = 5. Assume we can solve packs = [ 1, 2, 3, 5 ], n = 4. Then we can tack a size 1 soda pack onto each solution it gives. What if we solved packs = [ 1, 2, 3, 5 ], n = 3? Then we could tack on either a size 2 soda pack or two size 1 soda packs. And so on...

The recursive step of our recursive function is to decrease n by the size of some pack, solve the smaller problem, and then tack that pack onto each solution. Our base case is when n == 0, in which case we return a list containing a single empty combination (there is exactly one way to buy zero sodas: buy nothing).

Java Implementation (Ideone)

// Returns all combinations of pack sizes that sum to n, using the
// packs at indices start to packs.length - 1. Passing the current
// index i down as start keeps each combination in one canonical order,
// so no combination is produced twice.
List<List<Integer>> sodaPacks(int[] packs, int n, int start) {
  List<List<Integer>> result = new ArrayList<>();

  // Base case: one way to buy zero sodas (the empty combination)
  if (n == 0) {
    result.add(new ArrayList<>());
    return result;
  }

  // Recursive step: we solve a smaller problem
  for (int i = start; i < packs.length; i++) {
    if (n - packs[i] >= 0) {
      List<List<Integer>> ways = sodaPacks(packs, n - packs[i], i);
      for (List<Integer> way : ways) {
        // Tack on our solution:
        way.add(packs[i]);
        result.add(way);
      }
    }
  }
  return result;
}

The time complexity of this solution is difficult to analyze exactly, since the number of branches varies at each recursive call. A loose upper bound is O(k^n) with k = packs.length: each call branches at most k ways, and the recursion is at most n levels deep because n shrinks by at least 1 per call.

Solution 2

Let p be the first element of our packs list and let f(packs, n) be the function that solves our problem. Given a problem (that is, given packs and n), we divide it into two mutually exclusive subproblems:

  1. Solutions where p is not used: in which case we remove p from our list (because we don't use it) and run f(packs without p, n).
  2. Solutions where p is used: if we use it then we solve the smaller problem f(packs, n - p) and then tack p onto each of its solutions.

If we combine these two solutions, then we get the solution to our entire problem.

Why are these mutually exclusive? A combination can only exclude p or include p, not both.

The above describes the recursive step; what is the base case? The base case occurs when n == 0, in which case we return a single empty solution. Note that if packs is empty and n > 0, then there is no solution since there are no packs. We should never run f([], n); the implementation below uses an index i instead of shrinking the list and never recurses past the last index.

Java Implementation (Ideone)

List<List<Integer>> sodaPacks(int[] packs, int n, int i) {
  List<List<Integer>> result = new ArrayList<>();

  // Base case: one way to buy zero sodas (the empty combination)
  if (n == 0) {
    result.add(new ArrayList<>());
    return result;
  }

  // Recursive step: we solve a smaller problem.
  // For each pack, you either include it or exclude it.
  // Include it:
  if (n - packs[i] >= 0) {
    List<List<Integer>> include = sodaPacks(packs, n - packs[i], i);
    // If we include it, we must tack packs[i] onto each solution
    for (List<Integer> list : include) {
      list.add(packs[i]);
      result.add(list);
    }
  }
  // Exclude it:
  if (i + 1 < packs.length) {
    List<List<Integer>> exclude = sodaPacks(packs, n, i + 1);
    result.addAll(exclude);
  }
  return result;
}

The time complexity of this algorithm is much easier to analyze, so I will do it. At each call we make at most two recursive calls: include or exclude. In the worst case, each include step decreases n by 1 (a pack of size 1) and each exclude step advances i by 1, so the recursion tree has depth up to n + packs.length and the complexity is O(2^(n + packs.length)).

Of course, we never want to have exponential running time. Can you think of a way to optimize the algorithm using dynamic programming?

Dynamic Programming

Dynamic programming is applicable in problems where we compute the same result over and over again. In a nutshell, it means to save results so that when we need to do the same computation again, we can use our saved result instead.

Dynamic programming and recursion are related because recursive programs often repeat computations. This happens whenever your recursive function is called with the same parameters more than once. Any recursive program that repeats computations in this way can be optimized using dynamic programming.
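
As a quick illustration (the classic Fibonacci example, not part of the soda pack problem), a naive recursive fib(n) repeats the same computations, and saving results in a map removes the repeated work (assumes java.util.Map and java.util.HashMap):

// fib(n) without memoization recomputes fib(n - 2), fib(n - 3), ...
// exponentially many times; the memo map makes each value computed once.
Map<Integer, Long> fibMemo = new HashMap<>();

long fib(int n) {
  if (n <= 1) {
    return n;                            // base cases: fib(0) = 0, fib(1) = 1
  }
  if (fibMemo.containsKey(n)) {
    return fibMemo.get(n);               // reuse the saved result
  }
  long result = fib(n - 1) + fib(n - 2); // compute once...
  fibMemo.put(n, result);                // ...and save it for later
  return result;
}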

Example: Sodapack

Let's apply dynamic programming to the soda pack example from above. Notice that sodaPacks has two parameters that change during the recursion: n, the number of sodas we still need, and i (or start in Solution 1), which tells us which soda packs we are allowed to use. To apply dynamic programming, we save the result (the return value) we get when we run sodaPacks on parameters n and i. We save it in a data structure that maps each (n, i) pair to a solution. The next time we call our recursive sodaPacks method with the same parameters, we fetch our saved result from the map and return it. This technique is called memoization: we "memoize" previous results to use them later. We modify the solution accordingly:

Java Memoization Implementation (Full implementation)

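// Assumes (not shown here) a memo map declared as a field, e.g.
// Map<Params, List<List<Integer>>> memo = new HashMap<>();, and a small
// Params class holding (n, i) that overrides equals() and hashCode()
// so it can be used as a map key.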
List<List<Integer>> sodaPacks(int[] packs, int n, int i) {
  Params params = new Params(n, i);
  if (memo.containsKey(params)) {
    return memo.get(params);
  }

  List<List<Integer>> result = new ArrayList<>();

  // base case and recursive step here omitted...

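  // Note (implementation caveat, not from the text): callers mutate the
  // inner lists when they tack on packs[i], so strictly the memoized result
  // should be a deep copy; new ArrayList<>(result) only copies the outer list.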
  memo.put(params, new ArrayList<>(result));
  return result;
}

What is the time complexity of this algorithm? We will call sodaPacks() on each (n, i) pair at most once. Ignoring the cost of copying the solution lists, each call takes constant time: memoized results are returned in constant time and each base case takes constant time. How many pairs are there? There are n * packs.length pairs, therefore the time complexity is O(n * k) where k is the number of packs. Notice how much faster this algorithm is: polynomial in n rather than exponential!

This is called top-down dynamic programming because it starts with a recursive solution (as we did above). The recursive solution is naturally top-down because we start with a larger problem, break it into subproblems and solve those problems. The only difference is that with dynamic programming we memoize the solution we computed for a subproblem so we can use it later.

If you have already written a recursive solution in an interview, this is probably the easiest way to optimize it using dynamic programming. However, this is not the most efficient solution since recursive functions have overhead which makes them slower. There is a way to write an equivalent dynamic programming solution without recursion using the bottom-up approach. In the bottom-up approach we start with the base case and we build larger and larger solutions from it. Read Chapter 8 of Cracking the Coding Interview to see a bottom-up dynamic programming example.
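
As a taste of the bottom-up style, here is a minimal sketch (a simplified variant, not from the text) that counts the combinations instead of listing them, starting from the base case and building up:

// ways[m] = number of pack combinations that sum to m.
// The base case ways[0] = 1 (the empty combination) is filled first;
// larger entries are then built from smaller ones, with no recursion.
int countSodaPacks(int[] packs, int n) {
  int[] ways = new int[n + 1];
  ways[0] = 1;                        // base case: one way to buy zero sodas
  for (int pack : packs) {            // consider one pack size at a time
    for (int m = pack; m <= n; m++) { // build larger solutions from smaller
      ways[m] += ways[m - pack];      // add pack on top of a smaller solution
    }
  }
  return ways[n];
}

For packs = [ 1, 2, 3, 5 ] and n = 5 this returns 6, matching the six combinations listed earlier.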
