Dynamic Programming
September 16, 2016
Dynamic programming is a fancy name for storing intermediate results and re-using the stored results instead of re-computing them each time.
Let’s see an example of a dynamic programming problem. Once we solve the problem using dynamic programming, the formal technical definitions will be easier to follow.
Tiling problem
Problem: You are given a grid of size $n \times 2$ and $n$ tiles of size $2 \times 1$. In how many different ways can you tile the grid such that the entire grid is covered and no tiles overlap? (The tiles look identical to each other. Two tilings are considered different based on whether the tiles are placed horizontally or vertically.)
Example: There are 3 possible ways of tiling a $3 \times 2$ grid.
Tiling 1:
Three vertical tiles
__ __ __
| | | |
|__|__|__|
Tiling 2:
1 vertical tile, followed by 2 horizontal tiles
__ _____
| |_____|
|__|_____|
Tiling 3:
2 horizontal tiles, followed by 1 vertical tile
_____ __
|_____| |
|_____|__|
First solution (recursion)
In any tiling of the $n \times 2$ grid, the bottom-left square will either be covered with a vertical tile, or a horizontal tile.
- If it is covered with a vertical tile, then what remains is an $(n-1) \times 2$ grid (tiling 1 and tiling 2 above, for example). We can look at the part of the grid yet to be tiled as a smaller version of the same problem, of size $(n-1)$.
- If it is covered with a horizontal tile, the only way to cover the top-left square (and the square to its immediate right) is to use another horizontal tile, leaving us with an $(n-2) \times 2$ grid (tiling 3 above, for example). This too is a smaller version of the same problem, this time of size $(n-2)$.
Since the bottom left square can only be tiled in these two distinct ways, the number of ways to tile a $n \times 2$ grid is the sum of the number of ways to tile a $(n-1) \times 2$ grid and the number of ways to tile a $(n-2) \times 2$ grid.
Hence, we can define a function $f(n)$ = the number of ways of tiling an $n \times 2$ grid, and it satisfies $f(n) = f(n-1) + f(n-2)$.
This gives us a recursive solution. To complete it, we need a few base cases. In this case, $f(1) = 1$ (the number of ways of tiling a $1 \times 2$ grid) and $f(2) = 2$ (the number of ways of tiling a $2 \times 2$ grid).
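As a quick check, $f(3) = f(2) + f(1) = 2 + 1 = 3$, which matches the three tilings of the $3 \times 2$ grid shown above.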
Code:
def tile(n):
    # base case
    if n == 1: return 1
    if n == 2: return 2
    # recursive case
    return tile(n - 1) + tile(n - 2)
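As a quick sanity check, calling this on small inputs reproduces the example above:

print(tile(3))   # 3, matching the three tilings shown earlier
print(tile(7))   # 21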
How slow is our solution?
Now, let’s see what happens when we call tile(7). tile(7) calls tile(6) and tile(5). Each of these calls the tile function with smaller values of $n$, but notice that both of them will call tile(4). This continues till we reach tile(1) and tile(2).
The following shows the first few levels of the recursion tree.
tile(7)
/ \
tile(6) tile(5)
/ \ / \
tile(5) tile(4) tile(4) tile(3)
/ \ / \ / \ / \
tile(4) tile(3) tile(3) tile(2) tile(3) tile(2) tile(2) tile(1)
. . . .
. . . .
In total, tile(6) will be called once, tile(5) will be called 2 times, tile(4) 3 times, tile(3) 5 times, tile(2) 8 times, and so on.
In general, the size of the above recursion tree is exponential in $n$. This means that the naive recursive solution is really slow.
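To see this concretely, here is a quick sketch that instruments tile with a call counter and prints how fast the total number of calls grows:

# naive recursion instrumented with a call counter
calls = 0

def tile(n):
    global calls
    calls += 1
    if n == 1: return 1
    if n == 2: return 2
    return tile(n - 1) + tile(n - 2)

for n in [10, 20, 30]:
    calls = 0
    tile(n)
    print(n, calls)   # grows exponentially, multiplying by ~1.618 per step of n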
Dynamic Programming solution (memoization)
Notice that in the above solution, we are calculating the same value repeatedly. For example, we calculate tile(3) 5 times. There’s a simple fix: store each value the first time it is calculated.
In python:
# solution with memoization
answer = [None] * 100

def tile(n):
    if answer[n] is None:         # answer is not stored ...... (*)
        # calculate the answer ................................ (*)
        # base case
        if n == 1: result = 1
        elif n == 2: result = 2
        # recursive case
        else:
            result = tile(n - 1) + tile(n - 2)
        answer[n] = result        # store the answer ........... (*)
    return answer[n]              # return stored answer ....... (*)

print("Number of ways of tiling 7x2 is: " + str(tile(7)))
In cpp:
#include <bits/stdc++.h>
using namespace std;

int answer[1000];

int tile(int n)
{
    // base case
    if (n == 1 || n == 2)
        return n;
    // if answer[n] is already calculated, return it (*)
    if (answer[n] != 0)
        return answer[n];
    // calculate and store in answer[n] ............ (*)
    answer[n] = tile(n - 1) + tile(n - 2);
    // return the stored answer .................... (*)
    return answer[n];
}

int main()
{
    cout << "Number of ways of tiling 7x2 is: " << tile(7) << endl;
    return 0;
}
The important lines in the above code are marked with a (*). Consider what happens when tile(4) is called the first time. Since answer[4] is None, we calculate the result by calling tile(3) and tile(2), and store it in answer[4]. The next time tile(4) gets called (no matter who calls it), answer[4] is not None anymore, so tile(3) and tile(2) don’t get called again. We simply return answer[4]. (The cpp version works the same way: global arrays in cpp are zero-initialized, so the check answer[n] != 0 plays the role of the is None check.)
The new recursion tree looks as follows:
tile(7)
/ \
tile(6) tile(5)
/ \
tile(5) tile(4)
/ \
tile(4) tile(3)
. .
. .
This is because tile(5) calls tile(4) and tile(3) exactly once (at the very left of the recursion tree). Once its answer is calculated, any later call to tile(5) directly returns the value already stored in answer[5], without making more calls to tile(4) and tile(3). Hence, the entire subtree below the tile(5) on the right is missing. This saves a large amount of time.
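As an aside, in python this memoization bookkeeping can be delegated to the standard library: functools.lru_cache caches the return value for each distinct argument, giving the same effect as the answer array. A minimal sketch:

from functools import lru_cache

@lru_cache(maxsize=None)   # remember the result for every distinct n
def tile(n):
    if n == 1: return 1
    if n == 2: return 2
    return tile(n - 1) + tile(n - 2)

print("Number of ways of tiling 7x2 is: " + str(tile(7)))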
How fast is the memoization solution?
The memoization solution is O($n$), i.e. it has linear run-time. To see why, note that each node in the recursion tree gets expanded only once; after that, the solution is already stored, so later calls return immediately without expanding. Since the number of expanded nodes is at most $n$, and each expanded node has at most 2 direct children, the total number of nodes overall can’t be more than $3n$. Hence, the overall solution is O($n$).
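To verify this empirically, we can reuse the call-counting sketch from earlier on the memoized version; the printed counts grow linearly rather than exponentially:

# call counter applied to the memoized solution
calls = 0
answer = [None] * 100

def tile(n):
    global calls
    calls += 1
    if answer[n] is None:
        if n == 1: result = 1
        elif n == 2: result = 2
        else: result = tile(n - 1) + tile(n - 2)
        answer[n] = result
    return answer[n]

for n in [10, 20, 30]:
    calls = 0
    answer = [None] * 100   # reset the memo between runs
    tile(n)
    print(n, calls)         # 2n - 3 calls: linear in n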
Dynamic Programming solution (bottom-up)
Now, let us see another way to implement the above solution. We start from the base case, and then solve the problems for larger numbers. This is called the bottom-up implementation.
In python:
# bottom-up implementation
def tile(n):
    answer = [None] * (n + 1)   # assumes n >= 2
    # base case
    answer[1] = 1
    answer[2] = 2
    # recursive case
    for i in range(3, n + 1):
        answer[i] = answer[i - 1] + answer[i - 2]
    return answer[n]
In cpp:
#include <bits/stdc++.h>
using namespace std;

int answer[100];

int main()
{
    answer[1] = 1;
    answer[2] = 2;
    for (int i = 3; i < 10; i++)
        answer[i] = answer[i - 1] + answer[i - 2];
    cout << "Number of ways of tiling 7x2 is: " << answer[7] << endl;
    return 0;
}
Note that although the bottom-up solution is much simpler to implement for this problem, in general, which approach is easier to implement (memoization vs bottom-up) depends on the specific problem.
How fast is the bottom-up solution?
It is much easier to see that the bottom-up solution is O($n$). It’s just a for loop which iterates approximately $n$ times.
What does solving the problem using DP mean?
When we declare answer[i] = number of ways to tile an $i \times 2$ grid, and decide to re-use the results stored in answer[i], we are solving the problem using DP. If we choose to re-calculate the value tile(i) again and again, then we are using recursion, but not DP.
Technical definitions (theory)
Now that we have successfully solved a problem using dynamic programming, let’s introduce some technical definitions.
The two important properties a solution needs to have for it to be considered dynamic programming are:
- Optimal sub-structure: The optimal solution to a problem can be composed from optimal solutions to similar smaller problems. In our tiling example, tile(n) = tile(n-1) + tile(n-2).
- Overlapping sub-problems: If implemented naively, the same smaller problem is calculated many times when trying to calculate the original answer. In our tiling example, tile(3) was calculated 5 times when we tried to calculate tile(7).
There are in general two ways of implementing dynamic programming solutions to problems. The first method is recursive with memoization, and the second method is bottom-up and iterative.
How to solve a Dynamic programming problem
A dynamic programming solution consists of two main parts.
- State: The parameters that define the subproblems. In the tiling example, answer[i] for i in 1 .. n are the states. We defined the state as answer[i] = number of ways to tile an $i \times 2$ grid.
- Recursive formula: The relation between the larger problems and the smaller problems. In the tiling example, tile(i) = tile(i-1) + tile(i-2), or equivalently answer[i] = answer[i-1] + answer[i-2], is the recursive formula.
Note that these two parts are deeply interlinked. We need to find a state space such that (a) each state completely represents a certain position in the given problem and (b) there exists a recursive relation between the states.
The overall efficiency of the solution depends both on the total number of states and on the time taken to compute each state’s answer given the answers for all the smaller states. In the tiling example, there were O($n$) states and it took O(1) time to compute the answer for a state using the answers of the smaller states. Hence, the overall efficiency was O($n$)*O(1) = O($n$).
Conclusion
We covered a lot in this tutorial. Let’s do a short review.
- First, we saw a tiling problem and its recursive solution. We saw that the recursive solution had an exponential run-time.
- Then, we used memoization to re-use our calculations instead of repeating them. This made our solution O($n$).
- We also implemented the solution using a bottom-up iterative approach.
- We saw the formal definitions of optimal sub-structure (breaks down into smaller problems) and overlapping sub-problems (naive recursive solution repeatedly calculates the same sub-problem many times).
- Lastly, we discussed the main parts of the solution to a DP problem: the DP state and the recursive relation between the solution to the larger problem and the smaller problems.
Next steps
For trivia about how Dynamic Programming got its name, a video, and other information, check out the posts in this discussion.
The best way to get better at dynamic programming is to solve different kinds of dynamic programming problems. This is what we will be doing in the upcoming tutorials.