Dynamic Programming is a method for solving complex problems by breaking them down into simpler subproblems, solving each subproblem just once, and storing its solution for future use.
Unlike Divide and Conquer, which solves independent subproblems, Dynamic Programming targets overlapping subproblems: it stores each subproblem's solution and reuses it when building up solutions to larger problems, avoiding redundant calculations.
The two main approaches are Top-Down (Memoization) and Bottom-Up (Tabulation).
Memoization is a Top-Down approach where solutions to subproblems are cached ("memoized") so they are never recalculated. It typically uses recursion with a cache.
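A minimal sketch of memoization, using the Fibonacci sequence and Python's built-in `functools.lru_cache` as the cache (the function name `fib` is illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down: recurse as usual, but each result is cached and computed once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025
```

A plain dictionary passed down through the recursion works equally well; `lru_cache` simply automates the lookup-then-store pattern.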
Tabulation is a Bottom-Up approach where solutions to all possible subproblems are computed and stored in a table (array) iteratively, building up the solution to the main problem.
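The same problem solved with tabulation, filling the table from the smallest subproblem upward (a sketch; `fib_tab` is an illustrative name):

```python
def fib_tab(n):
    """Bottom-up: iteratively fill a table of subproblem answers from 0 to n."""
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] will hold the i-th Fibonacci number
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(50))  # 12586269025
```

Note that tabulation needs no recursion at all, and here the table could even be shrunk to the last two entries since each step only looks back two positions.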
The Fibonacci sequence is a classic example: Dynamic Programming stores the result of each Fibonacci number so it is never recomputed.
Computing the nth Fibonacci number takes O(n) time with Dynamic Programming, compared to the exponential O(2^n) time of the naive recursive approach.
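To make the exponential blow-up concrete, here is the naive recursion with a call counter added purely for instrumentation (the counter is not part of the algorithm):

```python
call_count = 0

def fib_naive(n):
    """Naive recursion: the same subproblems are recomputed exponentially often."""
    global call_count
    call_count += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

result = fib_naive(20)
print(result, call_count)  # 6765, computed with 21891 recursive calls
```

With memoization or tabulation the same answer needs only about 20 subproblem evaluations, one per distinct input.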
Dynamic Programming improves performance by storing the results of expensive function calls and reusing them when the same inputs occur again, reducing the time complexity significantly.
Common data structures include arrays, hash tables, and sometimes matrices, depending on the problem being solved.
No, Dynamic Programming requires both overlapping subproblems and optimal substructure. Without optimal substructure, the problem cannot be broken down into subproblems whose optimal solutions lead to an overall optimal solution.
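A problem exhibiting both properties is minimum coin change: the optimal answer for an amount is built from optimal answers for smaller amounts (optimal substructure), and those smaller amounts recur across many choices (overlapping subproblems). A sketch, where `min_coins` is an illustrative name:

```python
def min_coins(coins, amount):
    """Fewest coins summing to `amount`, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = fewest coins needed for amount a
    for a in range(1, amount + 1):
        for c in coins:
            # Optimal substructure: best answer for a extends a best answer for a - c.
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
```

If the problem lacked optimal substructure, filling `dp` from smaller amounts would not be guaranteed to yield an overall optimum.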