Let's first talk about dynamic programming (DP). Like the divide-and-conquer technique, DP solves problems by combining the solutions to subproblems. The main difference is that in DP the subproblems overlap. So in DP, once we solve a subproblem, we save its answer in some data structure.

Now it's important to understand exactly what type of problems should make us think of using DP. Typically DP is applied to "optimization problems", in which a problem can have many solutions but we need only the best one.

Optimization problems have multiple solutions, and only the best one should be selected. They can be broken down into subproblems. For example, computing fibonacci(n) with naive recursion has a time complexity of O(2^n); we can bring the complexity down either with an iterative approach or with dynamic programming. The main gist is to save the solutions of overlapping subproblems in a data structure and so avoid redundant work. This technique can only be applied if the problem at hand has the property of "optimal substructure": a problem has this property if its optimal solution can be found by combining optimal solutions to subproblems of the same type, e.g. Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2). Such problems come in two flavours. In the case of Fibonacci, although it has optimal substructure, each recursive call produces only one solution. But there are also problems with optimal substructure where many candidate solutions are possible; for these we combine the subproblem answers with a function such as max or min. Always prove the optimal substructure property, using contradiction or induction. Problems with this property have subproblems that are completely independent of one another. Conversely, for problems that do not have this property, a single counterexample suffices to show that dynamic programming can't be applied.
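The Fibonacci example above can be sketched as follows; the memoized version saves each subproblem's answer in a dictionary, turning O(2^n) recursive calls into O(n):

```python
def fib(n):
    """Naive recursion: O(2^n), because the same subproblems
    (e.g. fib(n-2)) are recomputed over and over."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_memo(n, cache=None):
    """DP with memoization: each subproblem is solved once and
    its answer stored in `cache`, so the whole run is O(n)."""
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]
```

Both return the same values, but `fib_memo(50)` finishes instantly while `fib(50)` would take hours.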

Greedy algorithms, as the name suggests, are greedy at each step and try to get the best at that step; they always make the locally optimal decision. This may not always lead to the correct solution. E.g. the largest number that can be formed from the digits '3', '7', '8', '5' is 8753: with the greedy technique, at each stage we pick the largest remaining digit, i.e. 8, then 7, then 5, then 3. It is also important to remember the concept of a "safe move". Not every greedy choice leads to a correct solution, so the first greedy choice we make should be safe. This can be shown by comparing against some existing optimal solution and checking that it, too, can begin with our greedy choice. After the safe move we are left with a subproblem of the same kind, and we continue with our greedy choices. "All safe moves are greedy, but not all greedy moves are safe."
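The largest-number example can be sketched in a few lines; here picking the largest remaining digit is a safe move, since placing a larger digit in a more significant position always yields a larger number:

```python
def largest_number(digits):
    """Greedy: repeatedly take the largest remaining digit.
    Sorting in descending order performs all the greedy picks at once."""
    return ''.join(sorted(digits, reverse=True))
```

For the digits '3', '7', '8', '5' this produces '8753', matching the example above.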

With the DP approach, we go through all the possible solutions to find the optimal one. But if a problem has the greedy-choice property, we can avoid going through all possible solutions. First, however, we have to prove that the problem has this property.
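A classic illustration of why this proof matters is coin change (my example, not from the text above): greedy works for some coin systems but not others, while DP always considers every option. With coins {1, 3, 4} and amount 6, greedy takes 4+1+1 = three coins, but the optimal answer is 3+3 = two coins, which DP finds:

```python
def greedy_coins(coins, amount):
    """Greedy: repeatedly take the largest coin that still fits.
    Fast, but only correct if the greedy choice is a safe move."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """DP: for every sub-amount, try every coin and keep the best.
    Explores all possibilities, so it is guaranteed optimal."""
    INF = float('inf')
    best = [0] + [INF] * amount  # best[a] = fewest coins to make a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None
```

For coins {1, 5, 10, 25} the two functions agree, which is why greedy change-making is safe for that system; proving such a greedy-choice property is exactly the step the paragraph above calls for.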

So, in general, the time complexities in descending order are: Naive Recursion > Dynamic Programming (memoization) > Greedy.

Some great resources for understanding DP:
https://www.youtube.com/watch?v=W2ote4jCuYw&list=PLyEvk8ZeQDMVbsg7CEfT0NV3s3GkMx1vN
https://www.youtube.com/watch?v=kFUs5VUxO-s&list=PLyEvk8ZeQDMU1kCAyT7re9wO1FB4zdkEV