What is dynamic programming and how to approach problems based on dynamic programming?

Kshitij Patil
Jan 10, 2022

You might have heard that dynamic programming is mainly an optimization over plain recursion. This is true, but by itself it doesn't give much insight into dynamic programming, so let's first see what recursion is.

The process in which a function calls itself, directly or indirectly, is called recursion, and the corresponding function is called a recursive function. Using a recursive algorithm, certain problems can be solved quite easily, especially problems in which all scenarios must be considered to find the optimal solution. In recursion, we break the bigger problem down into smaller subproblems and find the solutions to those subproblems. This property is called optimal substructure: a problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its subproblems. So, in a nutshell, we break our problem down into smaller subproblems and try to find a relation between them. This relation is called the recurrence relation, and it is the key to our recursive solution. Let's consider the famous nth Fibonacci number problem. If you want to find the nth Fibonacci number, you need the (n-1)th and (n-2)th Fibonacci numbers, and when you add them you get the nth term. This is how we break our problem down and find the relation between the subproblems.

A typical recursive function can be broken down into three main sections: the base case, the recursive calls that express the recurrence relation, and the logic that combines their results. The base case is the smallest subproblem whose solution we already know. In the nth Fibonacci number problem, the base cases can be the first and second terms, as they don't change and they initiate the Fibonacci series. Because recursion considers all possible scenarios, it takes comparatively more time to execute. Also, the solution for the same values often gets calculated more than once. That is why the time complexity of a recursive solution is typically exponential.
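As an illustration, here is a minimal sketch of the plain recursive solution in C++ (the function name fib and the 1-based indexing of terms are illustrative assumptions):

```cpp
#include <iostream>

// Plain recursion: the base cases are the first two terms, and the
// recurrence relation is fib(n) = fib(n - 1) + fib(n - 2).
// The same subproblems are recomputed many times, so the running time
// grows exponentially with n.
long long fib(int n) {
    if (n == 1 || n == 2) return 1;   // base case
    return fib(n - 1) + fib(n - 2);   // recurrence relation
}

int main() {
    std::cout << fib(10) << "\n";     // prints 55
}
```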

This is where dynamic programming comes into the picture. When a problem has both properties, optimal substructure (discussed earlier) and overlapping subproblems, we can use dynamic programming. In dynamic programming we store the results of the subproblems in a data structure, which is usually an array, and whenever we need the solution for the same subproblem again, we just look it up instead of recomputing it. Let's jump into the steps involved in solving problems based on dynamic programming:

  1. Look at the options you have and define the state.
  2. Find the recurrence relation and the base case.
  3. Implement it with a recursive or an iterative approach.

Let's take a closer look. First of all, we define a state. A state is usually defined as the particular condition that something is in at a specific point in time. Similarly, in terms of dynamic programming, a state is defined by the set of variables that are required, at a particular instant, to calculate the optimal result. Using those variables, we can uniquely identify a subproblem. In the nth Fibonacci number problem, we just need one variable to express the term that we have to find. We can define a recursive function with a single parameter int i, where we need to find the ith Fibonacci number. So its state contains a single variable, i.

There are two ways to implement the DP approach. The first is recursion with memoization and the second is the iterative method. In both approaches, we store the solutions for the states in some data structure, which is usually a single- or multi-dimensional array. The dimension depends on the number of variables in the state: if the state has, say, 3 variables, then we need a 3-dimensional array. Recursion with memoization is the simpler and more intuitive approach. First of all, we initialize the data structure, most commonly with -1 (or some other value that cannot be a valid answer). Before starting to find the optimal solution for some state, we check whether we have already solved that state. If yes, we simply return the stored value; otherwise we proceed to compute the optimal solution and, while returning, store the result for that state in the data structure. Although this approach is intuitive, it makes a lot of function calls, which can overflow the call stack. To avoid this, the iterative method is used: we use the same table as in memoization, but fill it with a few for loops, one per variable in the state.
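As a rough sketch, here is how both approaches might look for the Fibonacci example in C++ (the names fibMemo and fibIter and the -1 sentinel are illustrative assumptions):

```cpp
#include <iostream>
#include <vector>

// Recursion with memoization: the table `memo` is initialized with -1,
// and each state is computed at most once.
long long fibMemo(int n, std::vector<long long>& memo) {
    if (n == 1 || n == 2) return 1;           // base case
    if (memo[n] != -1) return memo[n];        // state already solved
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

// Iterative (tabulation): fill the same table with a single for loop,
// avoiding deep recursion and a possible stack overflow.
long long fibIter(int n) {
    std::vector<long long> dp(n + 1);
    dp[1] = 1;
    if (n >= 2) dp[2] = 1;
    for (int i = 3; i <= n; ++i) dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}

int main() {
    int n = 50;
    std::vector<long long> memo(n + 1, -1);
    std::cout << fibMemo(n, memo) << " " << fibIter(n) << "\n";
}
```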

Interestingly, the asymptotic time and space complexity are the same for both methods. Dynamic programming can reduce the time complexity of a problem from exponential to linear or quadratic (depending on the number of variables in the state), or from factorial to exponential. All it costs is some extra space and bookkeeping!

Let's try to solve a good problem with the steps mentioned above. This problem comes from the well-known "Educational DP Contest" problem set on AtCoder. We have n days, and on each day we have to perform exactly one of 3 activities. We are given the score of each activity on each day, and we earn that score by performing the activity. We have to maximize the total score, with the condition that we cannot select the same activity on two consecutive days. A greedy approach will not work here, because the best activity to choose at the current moment depends on which activity was chosen last time. So it's a DP problem.

Let's see the options and define the state. On the ith day, we have 3 options: choose activity 0, activity 1, or activity 2. But this choice restricts the options for the previous day. So we have to allocate one variable for the day and one for the activity, which means our state is (i, j), and we will store the optimal solution at position (i, j). Now let's find the recurrence relation. If we select the jth activity on the ith day, we get the score for that activity, and then we have to find the optimal solution for the previous day using the remaining activities. As we have to maximize the total score, we choose the maximum of those options: dp[i][j] = score[i][j] + max(dp[i-1][k]) over all k ≠ j. That is our recurrence relation!

Recursive approach
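A minimal sketch in C++, assuming a[i][j] holds the score of activity j on day i (0-indexed) and solve(i, j) returns the best total score for days 0..i when activity j is performed on day i:

```cpp
#include <iostream>
#include <vector>
#include <algorithm>

std::vector<std::vector<long long>> a;  // a[i][j]: score of activity j on day i

// Best total score for days 0..i if activity j is performed on day i.
long long solve(int i, int j) {
    if (i == 0) return a[0][j];                          // base case: first day
    long long best = 0;
    for (int k = 0; k < 3; ++k)                          // previous day: any activity except j
        if (k != j) best = std::max(best, solve(i - 1, k));
    return a[i][j] + best;                               // recurrence relation
}

int main() {
    int n;
    std::cin >> n;
    a.assign(n, std::vector<long long>(3));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < 3; ++j) std::cin >> a[i][j];

    long long ans = 0;
    for (int j = 0; j < 3; ++j) ans = std::max(ans, solve(n - 1, j));
    std::cout << ans << "\n";  // exponential: each day branches into two allowed activities
}
```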
Recursion + memoization approach
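A sketch of the same recursion with memoization added; the dp table initialized with -1 follows the convention described earlier:

```cpp
#include <iostream>
#include <vector>
#include <algorithm>

std::vector<std::vector<long long>> a;   // a[i][j]: score of activity j on day i
std::vector<std::vector<long long>> dp;  // dp[i][j]: stored result, -1 = not computed yet

long long solve(int i, int j) {
    if (i == 0) return a[0][j];           // base case
    if (dp[i][j] != -1) return dp[i][j];  // state already solved, reuse it
    long long best = 0;
    for (int k = 0; k < 3; ++k)
        if (k != j) best = std::max(best, solve(i - 1, k));
    return dp[i][j] = a[i][j] + best;     // store while returning
}

int main() {
    int n;
    std::cin >> n;
    a.assign(n, std::vector<long long>(3));
    dp.assign(n, std::vector<long long>(3, -1));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < 3; ++j) std::cin >> a[i][j];

    long long ans = 0;
    for (int j = 0; j < 3; ++j) ans = std::max(ans, solve(n - 1, j));
    std::cout << ans << "\n";  // O(n) states, constant work per state
}
```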
Iterative approach
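A sketch of the iterative (tabulation) version, filling the same dp table day by day with nested for loops:

```cpp
#include <iostream>
#include <vector>
#include <algorithm>

int main() {
    int n;
    std::cin >> n;
    std::vector<std::vector<long long>> a(n, std::vector<long long>(3));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < 3; ++j) std::cin >> a[i][j];

    // dp[i][j]: best total score for days 0..i if activity j is done on day i.
    std::vector<std::vector<long long>> dp(n, std::vector<long long>(3));
    for (int j = 0; j < 3; ++j) dp[0][j] = a[0][j];  // base case

    for (int i = 1; i < n; ++i)
        for (int j = 0; j < 3; ++j) {
            long long best = 0;
            for (int k = 0; k < 3; ++k)
                if (k != j) best = std::max(best, dp[i - 1][k]);
            dp[i][j] = a[i][j] + best;               // same recurrence relation
        }

    std::cout << std::max({dp[n - 1][0], dp[n - 1][1], dp[n - 1][2]}) << "\n";
}
```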

Analysis: The time complexity of the plain recursive solution is exponential, as it typically is for DP problems before memoization; here each day branches into the two activities allowed after the previous day's choice, so it is on the order of 2^n, with O(n) space for the recursion stack. When we use memoization, the time complexity reduces to O(n), since the number of activities is fixed at 3 and each of the O(n) states is computed only once; the space complexity is also O(n). The same holds for the tabulation (iterative) method.


Thank you! Happy learning!

— Kshitij, Amitesh, Pranit, Vivek.
