
SUBJECT: DESIGN AND ANALYSIS OF ALGORITHMS

Unit-III: Dynamic Programming

Topics:
Introduction
The Principle of Optimality
Problem Solving using Dynamic Programming
Matrix Chain Multiplication
All-Pairs Shortest Path
0/1 Knapsack
Optimal Binary Search Tree
Travelling Salesman Problem
The dynamic programming approach is similar to divide and conquer in that it breaks the problem down into smaller and smaller sub-problems.

Unlike divide and conquer, however, these sub-problems are not solved independently. Instead, the results of the smaller sub-problems are remembered and reused for similar or overlapping sub-problems.

Dynamic programming algorithms are mostly used for solving optimization problems.

Before solving the sub-problem at hand, a dynamic programming algorithm examines the results of previously solved sub-problems.

The solutions of the sub-problems are combined to achieve the optimal final solution. Because the smallest sub-problems are solved first, this paradigm is said to use a bottom-up approach.
What is Dynamic Programming?
Dynamic Programming is mainly an optimization over plain recursion.

Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming.

The idea is simply to store the results of subproblems so that we do not have to re-compute them when they are needed later.

This simple optimization typically reduces the time complexity from exponential to polynomial.

For example, a simple recursive solution for the Fibonacci numbers takes exponential time; if we optimize it by storing the solutions of subproblems, the time complexity reduces to linear.
So we can conclude that:
•The problem should be divisible into smaller overlapping sub-problems.
•The final optimal solution can be achieved by using optimal solutions of the smaller sub-problems.
•Dynamic programming algorithms use memoization, as sketched below.
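As a minimal sketch of the memoization just described, applied to Fibonacci (this code is not from the slides; the array bound MAXN and the test value 50 are illustrative assumptions):

#include <iostream>
using namespace std;

const int MAXN = 100;
long long memo[MAXN]; // memo[i] == 0 means fib(i) has not been computed yet
                      // (safe because fib(0) = 0 is handled as a base case)

// Top-down Fibonacci: each fib(i) is computed once, so the time is linear
long long fib(int n)
{
    if (n <= 1)
        return n;       // base cases: fib(0) = 0, fib(1) = 1
    if (memo[n] != 0)
        return memo[n]; // reuse a previously stored result
    return memo[n] = fib(n - 1) + fib(n - 2);
}

int main()
{
    cout << fib(50); // answers instantly; plain recursion would take minutes
    return 0;
}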
Characteristics of Dynamic Programming Algorithms:
•In general, dynamic programming (DP) is one of the most powerful techniques for solving a certain class of problems.
•There is an elegant way to formulate the approach, the thinking process is simple, and the coding part is easy.
•Essentially, the idea is simple: after solving a problem for a given input, save the result for future use so that you do not have to re-solve it. In brief: "Remember your past."
•It is a big hint for DP if the given problem can be broken up into smaller sub-problems, these smaller subproblems can be divided into still smaller ones, and in this process you see some overlapping subproblems.
•Additionally, the optimal solutions to the subproblems contribute to the optimal solution of the given problem.
•The solutions to the subproblems are stored to avoid redundant computation, either in a table or array filled on demand (memoization) or filled in a bottom-up manner (tabulation).
•The solution to the overall problem can then be constructed from the solutions to the subproblems.
Problem Solving using Dynamic Programming
Dynamic programming is an optimization technique developed by Richard Bellman in the 1950s. Basically, dynamic programming is an optimization over plain recursion.

In plain recursion, repeated calls are made for the same sub-problem; we can remove this redundancy with the help of dynamic programming.

The idea behind dynamic programming is to store the results of sub-problems so that we do not need to re-compute a sub-problem whenever it is required again. This reduces the time complexity from exponential to polynomial (linear, in the Fibonacci case).
To solve a problem with dynamic programming, two necessary conditions must hold:

1. Overlapping Subproblems: when solutions to the same subproblems are needed repeatedly while solving the actual problem, the problem is said to have the overlapping subproblems property.

(Figure: the N-th Fibonacci number as an example of overlapping subproblems.)

2. Optimal Substructure: if the optimal solution of the given problem can be obtained from optimal solutions of its subproblems, the problem is said to have the optimal substructure property. The sketch below makes the overlap concrete.
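To see how severe the overlap actually is, here is a small sketch (not from the slides; the call counter and the test value 30 are illustrative additions) that counts how many recursive calls plain Fibonacci makes:

#include <iostream>
using namespace std;

long long calls = 0; // counts every invocation of fib()

// Plain recursion: the same subproblems are recomputed many times
long long fib(int n)
{
    ++calls;
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    cout << "fib(30) = " << fib(30) << "\n";
    cout << "recursive calls made: " << calls << "\n"; // well over two million
    return 0;
}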
Steps to solve a Dynamic Programming problem:
1. Identify that it is a Dynamic Programming problem.
2. Decide a state expression with the fewest parameters.
3. Formulate the state and transition relationships.
4. Apply tabulation (or memoization).
1) How to classify a problem as a Dynamic Programming problem:

•Typically, problems that require maximizing or minimizing certain quantities, counting problems that ask for the number of arrangements under certain conditions, and certain probability problems can be solved using Dynamic Programming.

•All dynamic programming problems satisfy the overlapping subproblems property, and most of the classic dynamic programming problems also satisfy the optimal substructure property. Once we observe these properties in a given problem, we can be sure that it can be solved using Dynamic Programming.
2) Deciding the state:
Dynamic programming problems are mostly concerned with the state and its transition. This most fundamental phase must be carried out with extreme care, because the state transition depends on the state definition you select.

State:
A state is a collection of parameters that can uniquely describe a given position or standing in the given problem. To minimise the state space, this set of parameters should be as compact as feasible.
3) Formulating a relation among the states:
This is the hardest part of a Dynamic Programming problem, and it calls for a lot of intuition, observation, and practice.

Example:
Given the 3 numbers {1, 3, 5}, the task is to count the total number of ways we can form a number N using the sum of the given three numbers (allowing repetitions and different arrangements).

The total number of ways to form 6 is 8:
1+1+1+1+1+1
1+1+1+3
1+1+3+1
1+3+1+1
3+1+1+1
3+3
1+5
5+1

Every arrangement ends with 1, 3, or 5, so the relation among the states is:
state(N) = state(N-1) + state(N-3) + state(N-5)
Following are the steps to solve the problem:
•Choose a state for the given problem: N can be used to identify any subproblem, so it is the determining factor for the state.
•The DP state will resemble state(N), where state(N) is the total number of arrangements required to form N using the elements 1, 3, and 5.
•Identify the transition relation between any two states, then compute state(N).
// Returns the number of arrangements to form 'n'
int solve(int n)
{
    // base cases
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;

    // an arrangement of n must end with 1, 3, or 5
    return solve(n - 1) + solve(n - 3) + solve(n - 5);
}
4) Adding memoization or tabulation for the state
This is the simplest portion of a dynamic programming solution. Simply storing the state solution allows us to fetch it from memory the next time that state is needed.
Adding memoization to the above code:

// dp[] must be filled with -1 before the first call (e.g. with memset)
int dp[MAXN];

// Returns the number of arrangements to form 'n'
int solve(int n)
{
    // base cases
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;

    // checking if already calculated
    if (dp[n] != -1)
        return dp[n];

    // storing the result and returning
    return dp[n] = solve(n - 1) + solve(n - 3) + solve(n - 5);
}
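A minimal driver for the memoized version (not from the slides; the bound MAXN = 1000 and the test value 6 are illustrative assumptions, and solve() is repeated so the sketch is self-contained):

#include <cstring>
#include <iostream>
using namespace std;

const int MAXN = 1000;
int dp[MAXN];

int solve(int n)
{
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;
    if (dp[n] != -1)
        return dp[n];
    return dp[n] = solve(n - 1) + solve(n - 3) + solve(n - 5);
}

int main()
{
    memset(dp, -1, sizeof dp); // mark every state as "not yet computed"
    cout << solve(6);          // prints 8, matching the enumeration above
    return 0;
}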
// C++ code for the plain recursive Fibonacci (exponential time):
#include <iostream>
using namespace std;

// Function to find the nth Fibonacci number
int fib(int n)
{
    if (n <= 1) {
        return n;
    }
    int x = fib(n - 1);
    int y = fib(n - 2);
    return x + y;
}

// Driver code
int main()
{
    int n = 5;

    // Function call
    cout << fib(n);
    return 0;
}
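The slides mention tabulation but do not show it, so here is a minimal bottom-up sketch of the same function (an illustrative addition, not the slides' code): the table is filled from the smallest subproblems upward, giving linear time.

#include <iostream>
#include <vector>
using namespace std;

// Bottom-up (tabulated) Fibonacci: O(n) time, O(n) space
int fib(int n)
{
    if (n <= 1)
        return n;

    vector<int> table(n + 1);
    table[0] = 0;
    table[1] = 1;

    // each entry depends only on the two already-computed entries before it
    for (int i = 2; i <= n; ++i)
        table[i] = table[i - 1] + table[i - 2];

    return table[n];
}

int main()
{
    cout << fib(5); // prints 5, same as the recursive version
    return 0;
}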
Greedy approach vs Dynamic Programming

Feature: Feasibility
Greedy method: In a greedy algorithm, we make whatever choice seems best at the moment, in the hope that it will lead to a globally optimal solution.
Dynamic programming: In dynamic programming, we make a decision at each step considering the current problem and the solutions to previously solved sub-problems to calculate the optimal solution.

Feature: Optimality
Greedy method: In the greedy method, sometimes there is no guarantee of getting an optimal solution.
Dynamic programming: Dynamic programming is guaranteed to generate an optimal solution, as it generally considers all possible cases and then chooses the best.

Feature: Recursion
Greedy method: A greedy method follows the problem-solving heuristic of making the locally optimal choice at each stage.
Dynamic programming: Dynamic programming is an algorithmic technique that is usually based on a recurrence formula that uses some previously calculated states.

Feature: Memoization
Greedy method: It is more efficient in terms of memory, as it never looks back or revises previous choices.
Dynamic programming: It requires a DP table for memoization, which increases its memory complexity.

Feature: Time complexity
Greedy method: Greedy methods are generally faster. For example, Dijkstra's shortest path algorithm takes O(E log V + V log V) time.
Dynamic programming: Dynamic programming is generally slower. For example, the Bellman-Ford algorithm takes O(VE) time.

Feature: Fashion
Greedy method: The greedy method computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices.
Dynamic programming: Dynamic programming computes its solution bottom-up or top-down by synthesizing it from smaller optimal sub-solutions.

Feature: Example
Greedy method: Fractional knapsack.
Dynamic programming: 0/1 knapsack problem.
0/1 Knapsack problem

Here the knapsack is like a container or a bag. Suppose we are given some items, each of which has a weight and a profit. We have to put items into the knapsack in such a way that the total value produces the maximum profit.

For example, suppose the capacity of the container is 20 kg. We have to select items in such a way that the total weight of the selected items is smaller than or equal to the capacity of the container, and the profit is maximum.

There are two types of knapsack problems:

•0/1 knapsack problem
•Fractional knapsack problem
What is the 0/1 knapsack problem?

In the 0/1 knapsack problem, each item is either taken completely or not taken at all. For example, suppose we have two items weighing 2 kg and 3 kg, respectively. If we pick the 2 kg item, we cannot take only 1 kg of it (the item is not divisible); we have to take the 2 kg item completely. So in the 0/1 knapsack problem, we either pick an item completely or do not pick it at all. The 0/1 knapsack problem is solved by dynamic programming, as sketched below.
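A hedged sketch of the standard bottom-up DP for the 0/1 knapsack (the weights, profits, and capacity below are illustrative assumptions, not data from the slides):

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// dp[i][w] = maximum profit using the first i items with capacity w
int knapsack01(const vector<int>& wt, const vector<int>& profit, int capacity)
{
    int n = wt.size();
    vector<vector<int>> dp(n + 1, vector<int>(capacity + 1, 0));

    for (int i = 1; i <= n; ++i) {
        for (int w = 0; w <= capacity; ++w) {
            dp[i][w] = dp[i - 1][w]; // skip item i
            if (wt[i - 1] <= w)      // or take it completely (the 0/1 choice)
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - wt[i - 1]] + profit[i - 1]);
        }
    }
    return dp[n][capacity];
}

int main()
{
    vector<int> wt = {2, 3, 4, 5};     // weights in kg (illustrative)
    vector<int> profit = {3, 4, 5, 6}; // profits (illustrative)
    cout << knapsack01(wt, profit, 5); // prints 7: take the 2 kg and 3 kg items
    return 0;
}

The table has (n+1) x (capacity+1) entries and each is filled in constant time, so the running time is O(n x capacity).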
