Dynamic Programming

Dynamic Programming (DP) is a problem-solving technique used to solve problems that can be broken down into overlapping subproblems. Each subproblem is solved only once, and its result is stored (usually in a table or array) to be reused later instead of recomputing it.

Dynamic Programming Principles

1. Optimal Substructure:
The optimal solution to the larger problem depends on the optimal solutions to
smaller subproblems.
2. Overlapping Subproblems:
The large problem contains subproblems that are repeated many times. The results
of these subproblems can be stored and reused to avoid redundant calculations.
3. Memoization or Tabulation:
To optimize the performance of algorithms by reducing the time complexity of
problems involving repeated calculations:
o Memoization: Storing results during recursive calls (Top-Down approach).
o Tabulation: Building results from the smallest to the largest using a table
(Bottom-Up approach).
The goal of both: Avoid recalculating the same values.
4. Design an Appropriate Data Structure to Store Results:
For example, a one-dimensional or two-dimensional array, or a HashMap,
depending on the problem's nature.
5. Formulate the Recurrence Relation:
A mathematical relationship that connects the overall solution with solutions to
smaller subproblems.
6. Identify the Base Cases:
These are known solutions to smaller problems and are used as the starting point in
both memoization and tabulation.

Steps to Solve Any Problem Using Dynamic Programming


1. Analyze the Problem:
Ensure that the problem satisfies the conditions for dynamic programming:
overlapping subproblems and optimal substructure.
Dynamic Programming 1 Dr. Rafat alhanani
2. Define the State:
Choose the variables that represent the state of the system (e.g., current position,
certain sum, number of choices, etc.).
3. Formulate the Recurrence Relation:
Build a relationship that connects each state to smaller states.
4. Identify the Base Cases:
Define the basic solutions for simple cases that do not require further
decomposition.
5. Choose the Execution Method (Memoization or Tabulation):
Depending on the nature of the problem or required execution efficiency.
6. Implement the Algorithm and Optimize Performance:
Apply the algorithm and monitor its performance. It can be optimized by reducing
memory usage or eliminating unnecessary branches.

What Makes Dynamic Programming Unique:


1. Relies on Overlapping Subproblems:
The problem is solved by breaking it down into smaller subproblems and storing
their results for later reuse.
2. Prevents Redundancy:
Instead of recalculating the same values multiple times (as in regular recursion or
Divide & Conquer), dynamic programming saves results.
3. Used When the Optimal Solution Relies on Optimal Subproblems (Optimal
Substructure + Overlapping Subproblems).

Examples of Dynamic Programming with the Six Principles Applied:

❖ Fibonacci Numbers:
The Fibonacci sequence is a series of numbers where:
• Each number is the sum of the two preceding ones.
• It starts as:
o F(0) = 0
o F(1) = 1
o Then:
o F(2) = F(1) + F(0) = 1
o F(3) = F(2) + F(1) = 2
o F(4) = F(3) + F(2) = 3
o F(5) = F(4) + F(3) = 5
o And so on...
The sequence looks like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
Applications of the Fibonacci Sequence:
1. In Engineering, Design, and Arts:
o The Golden Ratio extracted from the sequence is used in:
▪ Graphic design
▪ Architecture
▪ Logos (e.g., Twitter logo)
2. In Nature and the Environment:
o The Fibonacci sequence appears in:
▪ Leaf arrangements
▪ Petal count in flowers
▪ Growth patterns in shells and pineapples
▪ Seed distribution in sunflowers
This is because the ratio between consecutive numbers approaches the "Golden Ratio" (≈ 1.618), which frequently appears in nature.
Application: Drawing the "Fibonacci Spiral":
Draw squares with side lengths from the Fibonacci sequence, then draw an arc inside
each square, creating a shape known as the Golden Spiral.
This shape appears in:
• Shells
• Galaxies
• Roses
• Even in professional logo designs!



Simplified Code for Fibonacci with Detailed Explanation
First: The Simple Recursive Solution Without Any Optimization (Inefficient)
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n-1) + fib(n-2)
Explanation:
• This code only relies on the recursive relationship: F(n) = F(n-1) + F(n-2).
• However, it is inefficient because it recalculates the same values multiple times.
• For example, fib(5) will calculate fib(3) more than once.
Second: Using Memoization to Improve Performance by Storing Previously
Calculated Results (Using C)
In C, we use arrays or pointers to store previously computed results.
#include <stdio.h>

#define MAX 1000

// Array to store calculated results
long long memo[MAX];

// Fibonacci function using memoization
long long fib(int n) {
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    // If n has been calculated before, return the stored result
    if (memo[n] != -1)
        return memo[n];
    // Calculate Fibonacci and store the result in the array
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main() {
    // Initialize the array with -1 (i.e., not calculated yet)
    for (int i = 0; i < MAX; i++) {
        memo[i] = -1;
    }
    int n = 50; // Example to calculate Fibonacci of 50
    printf("Fibonacci of %d is: %lld\n", n, fib(n));
    return 0;
}
Code Explanation:
1. Each time we calculate fib(n), we store the result instead of recalculating it.
2. This reduces the time complexity from O(2ⁿ) to O(n).
3. The memo[] array: We use the memo array to store the results of Fibonacci
calculations.
4. Base Case: If n == 0 or n == 1, we return the result directly.
5. Checking Stored Value: If memo[n] contains a value other than -1, it means n has
been calculated before, so we use the stored result.
6. Calculation and Storage: If memo[n] has not been calculated yet, we compute it
and store it in the array.
Why did we use -1?
• To avoid mixing calculated values (like 0 or 1) with the state "not calculated yet."
Hence, we use -1 as a marker indicating that fib(n) has not been computed yet.

The Six Principles of Dynamic Programming Applied to the Fibonacci Problem


1. Optimal Substructure:
Each value in the sequence depends on the optimal solutions to the immediately
preceding values.
o Example:
F(5) = F(4) + F(3)
And then:
F(4) = F(3) + F(2)
⟵ So, the optimal solution depends on optimal solutions to smaller
problems.
2. Overlapping Subproblems:
When calculating F(5), we will calculate F(4) and F(3), and each of them will
calculate F(2) multiple times.
The same values are computed repeatedly, so we can store them to speed up
performance.
3. Memoization or Tabulation:
o Memoization: We store values during recursion (Top-Down approach).
o Time complexity is improved from O(2ⁿ) to O(n).
4. Designing an Appropriate Data Structure to Store Results:
o We used the memo[] array in the memoization approach (in Python, a
dictionary such as memo = {} would play the same role).
5. Formulating the Recurrence Relation:
The relationship is very clear:
F(n) = F(n-1) + F(n-2)
This relation is the basis for building any solution.
6. Identifying the Base Case(s):
The beginning of the sequence is well known:
F(0) = 0
F(1) = 1
These values don't require any further decomposition or calculation and are used as
starting points in all methods.
Approach for reaching step n (previewing the next example):
The number of ways to reach step n = ways to reach step n-1 + ways to reach
step n-2.

❖ Staircase Problem:
A person stands at the bottom of a staircase with n steps. In each step, they can either
climb one step or two steps. How many different ways can they reach the top (step n)?
Example: If the number of steps n = 3, the possible ways are:
• 1→1→1
• 1→2
• 2→1
So, the result is 3 different ways.
Dynamic Programming Principles in This Example:
1. Optimal Substructure: To reach step n, we need the optimal solution for n-1 and
n-2. Hence, the larger problem depends on solutions to smaller problems.
2. Overlapping Subproblems: When using the recursive approach, we calculate
climbStairs(2) multiple times, and also climbStairs(1). These are repetitive
subproblems, and thus their results can be saved.
3. Memoization or Tabulation: In this case, we use Tabulation, where we build the
results from the smallest step (0) to n.



4. Data Structure for Storing Results: We use an array dp[] to store the number of
ways to reach each step so that we can reuse the results instead of recalculating
them.
5. Recurrence Relation: dp[n] = dp[n-1] + dp[n-2] This expresses that we can reach
step n either from step n-1 (by taking one step) or from step n-2 (by taking two
steps).
6. Base Case(s):
o dp[0] = 1 (one way to stay at the bottom without climbing).
o dp[1] = 1 (one way to climb the first step).

Inefficient Code (Without Dynamic Programming)


function climbStairs($n) {
    if ($n <= 1) return 1;
    return climbStairs($n - 1) + climbStairs($n - 2);
}
Explanation:
• To reach step n, the number of ways is the sum of the ways to reach:
o step n-1 (then take one step),
o step n-2 (then take two steps).
• However, this solution recalculates the same values multiple times, leading to a
time complexity of O(2^n), which is inefficient.
Example for n = 3:
• The possible ways are:
1. 1 → 1 → 1 (one step three times).
2. 1 → 2 (one step, then two steps).
3. 2 → 1 (two steps, then one step).
Hence, there are 3 ways to reach the top.
How the Code Works:
The recursive approach calculates the ways to reach step n based on two main
possibilities:



1. Climbing from step n-1: If you are at step n-1, you can take one more step to reach
n. So, if we know the ways to reach n-1, we add that here.
2. Climbing from step n-2: If you are at step n-2, you can take two steps to reach n.
So, if we know the ways to reach n-2, we add that here.
Recurrence Relation:
To explain further:
• Ways to reach step n = Ways to reach step n-1 + Ways to reach step n-2
Example for climbStairs(3):
• climbStairs(3)
o Calls climbStairs(2)
▪ Calls climbStairs(1) [1 way]
▪ Calls climbStairs(0) [1 way]
o Calls climbStairs(1) [1 way]
So, the total number of ways to reach step 3 is ways(2) + ways(1) = 2 + 1 = 3.
Dynamic Programming Solution Using Tabulation in C

#include <stdio.h>

int climbStairs(int n) {
    if (n <= 1) return 1;
    int dp[n + 1]; // Array to store the number of ways for each step
    dp[0] = 1;     // Base case: 1 way to stay at the bottom (step 0)
    dp[1] = 1;     // Base case: 1 way to reach the first step (step 1)
    for (int i = 2; i <= n; i++) {
        dp[i] = dp[i - 1] + dp[i - 2]; // Sum of ways to reach the previous two steps
    }
    return dp[n]; // The final answer is stored at dp[n]
}

int main() {
    int n = 5;
    printf("Number of ways to reach step %d is: %d\n", n, climbStairs(n));
    return 0;
}
Code Explanation:
1. int dp[n + 1]: We create an array dp[] to store the number of ways to reach each
step from 0 to n.
2. dp[0] = 1; dp[1] = 1;: Base cases:
o dp[0] = 1 (one way to stay at the bottom without climbing).
o dp[1] = 1 (one way to climb the first step).
3. for loop:
o Starts from i = 2 because dp[0] and dp[1] are already known.
o Each value is calculated using the recurrence relation:
▪ dp[i] = dp[i - 1] + dp[i - 2]
4. Final result:
o After the loop completes, dp[n] contains the total number of ways to reach
the top.

This approach reduces the time complexity from O(2^n) to O(n), making it more efficient.

