Dynamic Programming for Fun

I've been working at Timehop for a little over a year, and while the work is interesting and challenging, it's always a good idea to have something you do on the side purely for yourself. It can be anything: playing a musical instrument, painting, collecting samurai swords, or practicing survival skills. One of my hobbies is solving Dynamic Programming problems. I like it so much that I even decided to record a YouTube course devoted entirely to Dynamic Programming. This blog post is yet another opportunity to spread the word about the topic and, hopefully, get more people interested in this area of Computer Science.

What is Dynamic Programming?

Dynamic Programming is when we have one big problem that we don't know how to solve, and we break it down into smaller problems that we don't know how to solve either. © A. Kumok


On a more serious note, Dynamic Programming (DP) is a problem-solving method used in computer science, mathematics, and economics. With this method, a complex problem is split into simpler sub-problems, which are solved first; their solutions are then combined to obtain the solution of the original, complex problem. And I want to highlight that DP is not an algorithm or a data structure, but a method, a technique.

I like to compare Dynamic Programming with an instant camera, a type of camera which uses self-developing film to create a chemically developed print shortly after taking the picture. It's fascinating to watch it produce the print if you don't know how such a camera works internally. But even if you do, it still feels like magic. Dynamic Programming is of the same kind: a miraculous optimization technique that can reduce the runtime complexity of an algorithm from exponential to polynomial time. Exponential algorithms typically arise when you have to try every possible combination or permutation in order to solve a problem. As a quick example, let's take a look at a typical function that returns the n-th Fibonacci number.

func Fib(n int) int {
	if n == 0 {
		return 0
	}
	if n <= 2 {
		return 1
	}
	return Fib(n-1) + Fib(n-2)
}

Here is a figure showing the recursion tree when Fib is called with n = 6.

[Figure: dynamic-programing-flow.png, the recursion tree for Fib(6)]

We can notice that there are many different calls of the function with the exact same input value. In particular, Fib(3) was calculated three times from scratch, and Fib(2) was calculated five times! These re-calculations are quite expensive. Just to give you some perspective: using the Fib function above, it takes about 700ms to calculate the 40th Fibonacci number. And if we were to calculate Fib(100), it would take several thousand(!) years.

Fib(60) → 2.62 hours
Fib(80) → 1.23 years
Fib(100) → 5 thousand years
Fib(120) → 20.69 million years
Fib(140) → 84.92 billion years

As you can see, the growth in running time is enormous, which is why exponential algorithms should be avoided at all costs.
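
If you want to reproduce rough timings yourself, here is a minimal sketch using Go's time package. It assumes the naive Fib from above lives in the same package, and the exact numbers will of course depend on your machine:

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	result := Fib(40) // the naive recursive Fib from above
	fmt.Println(result, "took", time.Since(start))
}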

Memoization to the Rescue

What if, instead of calculating Fib(3) three times, we created an algorithm that calculates it once, caches the result, and accesses the cache on every subsequent Fib(3) call? That's exactly what memoization does.

func Fib(n int) int {
	memo := make([]int, n+1)
	return fibonacciHelper(n, memo)
}

func fibonacciHelper(n int, memo []int) int {
	if n == 0 {
		return 0
	}
	if n <= 2 {
		return 1
	}

	if memo[n] > 0 { // already cached? (every Fibonacci value stored here is positive)
		return memo[n]
	}
	memo[n] = fibonacciHelper(n-1, memo) + fibonacciHelper(n-2, memo)

	return memo[n]
}


The basic template is:

  • If base case, then return

  • Else

    • Check cache for answer. If answer exists, then return it

    • Otherwise recurse and store the result in a cache

    • Return the result from cache
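
As a quick illustration of the template, here is the same memoization written with a map as the cache instead of a pre-sized slice (fibMemo is my own name for the helper, not code from above):

// fibMemo follows the template: base case, cache lookup, recurse and store.
// Usage: fibMemo(50, map[int]int{})
func fibMemo(n int, cache map[int]int) int {
	// Base cases
	if n == 0 {
		return 0
	}
	if n <= 2 {
		return 1
	}
	// Check the cache for an answer; if it exists, return it
	if v, ok := cache[n]; ok {
		return v
	}
	// Otherwise recurse, store the result in the cache, and return it
	cache[n] = fibMemo(n-1, cache) + fibMemo(n-2, cache)
	return cache[n]
}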

Notice that we're making a trade here: to gain time efficiency, we give up memory space to store the solutions of all computed sub-problems.

[Figure: dynamic-programming-flow2.png, the recursion tree with memoization]

With memoization in place, we no longer need to recalculate Fib(2), Fib(3), Fib(4) and so on. If we were to calculate Fib(100), it would take less than 100 microseconds. The runtime complexity is now linear. Pretty good improvement, huh?

Recursion plus caching is what's known as Top-Down Dynamic Programming. The advantage of this approach is that the code is arguably easier to implement: as soon as the recursive solution is ready, applying memoization to it is a pretty trivial task. The disadvantage, though, is that the code is prone to a stack overflow error if the recursion tree is too deep. A workaround is to use Bottom-Up Dynamic Programming, also known as tabulation.

Tabulation

Tabulation aims to solve the same kind of problems, but completely removes recursion, saving the memory cost that recursion incurs when it builds up the call stack. In the tabulation approach to Dynamic Programming, we iteratively solve all sub-problems and, similar to the top-down approach, store the results in an array. These results are then used to solve larger problems that depend on the previously computed results.

func Fib(n int) int {
	dp := make([]int, n+2) // n+2 so that dp[1] exists even when n == 0
	dp[0] = 0
	dp[1] = 1
	for i := 2; i <= n; i++ {
		dp[i] = dp[i-1] + dp[i-2]
	}
	return dp[n]
}


In our case, we calculate the smaller values of Fib first, then build larger values from them. It's like starting at the lowest level of the recursion tree and working your way up. Notice how the return value comes from the table dp[], which is iteratively filled in by the for loop. By “iteratively,” I mean that dp[2] is calculated and stored before dp[3], dp[4], …, and dp[n]. Because dp[] is filled in this order, each sub-problem (say, n = 3) can be solved using the solutions to its preceding sub-problems (n = 2 and n = 1), since those values are already stored in dp[] by the time we need them.

One key thing about the bottom-up approach is that we can optimize the solution even further: for Fibonacci, we only ever look at the last two values, so the whole dp[] array can be replaced with two variables.

func Fib(n int) int {
	if n == 0 {
		return 0
	}
	first := 0  // Fib(0)
	second := 1 // Fib(1)
	ans := 1    // the answer for n == 1, when the loop doesn't run
	for i := 2; i <= n; i++ {
		ans = first + second
		first = second
		second = ans
	}
	return ans
}
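
A side note if you actually want to print Fib(100): Go's int is 64 bits on most platforms, and Fibonacci numbers past Fib(92) no longer fit in it, so you would switch to math/big. A minimal sketch along the lines of the two-variable loop above (fibBig is my own name for it):

import (
	"fmt"
	"math/big"
)

// fibBig computes the n-th Fibonacci number with arbitrary-precision
// integers, so the result stays exact well past Fib(92).
func fibBig(n int) *big.Int {
	if n == 0 {
		return big.NewInt(0)
	}
	prev, curr := big.NewInt(0), big.NewInt(1) // Fib(0), Fib(1)
	for i := 2; i <= n; i++ {
		prev, curr = curr, new(big.Int).Add(prev, curr)
	}
	return curr
}

func main() {
	fmt.Println(fibBig(100)) // 354224848179261915075
}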


Let's summarize and outline the pros and cons of both top-down and bottom-up approaches.

Top-down Dynamic Programming is essentially recursion enhanced with memoization. We "memoize" the computed values in a lookup table (usually an array) to avoid having to recompute them in the future.

Bottom-up Dynamic Programming is an iterative version of the same core idea of caching previously-evaluated cases. Sub-problems are solved in such an order that any time you would have needed to call the function recursively, the answer you need is already in the cache.

Top-down approach

  • Pros

    • Easier to implement

    • Only the states that are actually needed are computed; unneeded states are skipped

  • Cons

    • Easy to run out of space in the stack

    • Hard to reason about time and space complexity

Bottom-up approach

  • Pros

    • Can be space optimized

    • No recursion overhead

    • Easy to reason about time and space complexity

  • Cons

    • All states must be computed, since the loop visits every one of them

    • Usually more difficult to implement

If you want to learn more about optimization problems, feel free to subscribe to my channel, where I have a course on Dynamic Programming for Beginners.
