Greedy Algorithm

Introduction
 As the name suggests, greedy algorithms are short-sighted in their approach, taking decisions on the basis of information immediately at hand without worrying about the effect these decisions may have in the future.
 Thus they are easy to invent, easy to implement, and, when they work, efficient.
 Greedy algorithms are typically used to solve optimization problems.
 Examples: finding the shortest route from one node to another through a network, or finding the best order to execute a set of jobs on a computer.
 In such a context a greedy algorithm works by choosing the arc, or the job, that seems most promising at any instant; it never reconsiders this decision, whatever situation may arise later.

Change Making Problem
Making change
 Suppose we live in a country where the following coins are available: dollars (100 cents), quarters (25 cents), dimes (10 cents), nickels (5 cents) and pennies (1 cent). Our problem is to devise an algorithm for paying a given amount to a customer using the smallest possible number of coins.
 For instance, if we must pay $2.89 (289 cents), the best solution is to give the customer 10 coins: 2 dollars, 3 quarters, 1 dime and 4 pennies.
 Most of us solve this kind of problem every day without thinking twice, unconsciously using an obvious greedy algorithm: starting with nothing, at every stage we add to the coins already chosen a coin of the largest value available that does not take us past the amount to be paid.
Algorithm
 The algorithm is "greedy" because at every step it chooses the largest coin it can, without worrying whether this will prove to be a sound decision in the long run.
 Furthermore it never changes its mind: once a coin has been included in the solution, it is there for good.
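The procedure above can be sketched in a few lines of Python; the function name and interface are illustrative, and the default denominations are the coin values from the example:

```python
# Greedy change-making as described above: repeatedly take the largest
# coin that does not overshoot the amount still to be paid. Denominations
# are the example's coin values in cents (100, 25, 10, 5, 1).
def make_change(amount, denominations=(100, 25, 10, 5, 1)):
    """Return a dict mapping coin value -> number of coins used."""
    solution = {}
    for coin in denominations:                # largest coin first
        count, amount = divmod(amount, coin)  # how many of this coin fit
        if count:
            solution[coin] = count
    return solution

print(make_change(289))  # $2.89 -> 2 dollars, 3 quarters, 1 dime, 4 pennies
```

For this particular coin system the greedy choice happens to give an optimal answer; with other coin systems it can fail to do so.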
General characteristics of greedy algorithms
 Commonly, greedy algorithms and the problems they can solve are characterized by most or all of the following features.
 A set of candidates:
We have some problem to solve in an optimal way. To construct the solution of our problem, we have a set (or list) of candidates: the coins that are available, the edges of a graph that may be used to build a path, the set of jobs to be scheduled, or whatever.
 Chosen set and rejected set:
As the algorithm proceeds, we accumulate two other sets. One contains candidates that have already been considered and chosen, while the other contains candidates that have been considered and rejected.
 Solution function:
There is a function that checks whether a particular set of candidates provides a solution to our problem, ignoring questions of optimality for the time being. For instance, do the coins we have chosen add up to the amount to be paid? Do the selected edges provide a path to the node we wish to reach? Have all the jobs been scheduled?
 Feasibility function:
A second function checks whether a set of candidates is feasible, that is, whether or not it is possible to complete the set by adding further candidates so as to obtain at least one solution to our problem. Here too, we are not for the time being concerned with optimality. We usually expect the problem to have at least one solution that can be obtained using candidates from the set initially available.
Continue…
 Selection function:
Yet another function, the selection function, indicates at any time which of the remaining candidates, those that have neither been chosen nor rejected, is the most promising.

For the coin problem:
 Set of candidates: The candidates are a set of coins, representing in our example 100, 25, 10, 5 and 1 units, with sufficient coins of each value that we never run out. (However, the set of candidates must be finite.)
 Solution function: The solution function checks whether the value of the coins chosen so far is exactly the amount to be paid.
 Feasibility function: A set of coins is feasible if its total value does not exceed the amount to be paid.
 Selection function: The selection function chooses the highest-valued coin remaining in the set of candidates.
 The objective function counts the number of coins used in the solution.
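These pieces fit together into a generic greedy schema. The sketch below (the function names are illustrative, not from any library) shows the schema and instantiates it for the coin problem with a finite supply of each coin:

```python
# Generic greedy schema: repeatedly select the most promising candidate,
# keep it if the partial solution stays feasible, otherwise reject it.
def greedy(candidates, select, is_feasible, is_solution):
    candidates = list(candidates)
    solution = []                        # the chosen set
    while candidates and not is_solution(solution):
        x = select(candidates)           # most promising candidate
        candidates.remove(x)
        if is_feasible(solution + [x]):
            solution.append(x)
        # otherwise x goes to the rejected set, never to be reconsidered
    return solution if is_solution(solution) else None

# Instantiation for the coin problem (amounts in cents).
amount = 289
coins = [100] * 5 + [25] * 5 + [10] * 5 + [5] * 5 + [1] * 5  # finite set
result = greedy(coins,
                select=max,                                # highest value
                is_feasible=lambda s: sum(s) <= amount,    # never overpay
                is_solution=lambda s: sum(s) == amount)    # exact amount
print(result)
```

This linear scan of the candidate list is meant only to mirror the schema; a practical implementation would organize the candidates more efficiently.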
Graphs: Minimum spanning trees
 Let G = (N, A) be a connected, undirected graph where N is the set of nodes and A is the set of edges. Each edge has a given nonnegative length. The problem is to find a subset T of the edges of G such that all the nodes remain connected when only the edges in T are used, and the sum of the lengths of the edges in T is as small as possible. Since G is connected, at least one solution must exist.
1) Kruskal's algorithm
2) Prim's algorithm
1) Kruskal's algorithm
 The set T of edges is initially empty. As the algorithm
progresses, edges are added to T. So long as it has not
found a solution, the partial graph formed by the nodes of
G and the edges in T consists of several connected
components.
 (Initially when T is empty, each node of G forms a distinct
trivial connected component.) The elements of T included
in a given connected component form a minimum spanning
tree for the nodes in this component.
 At the end of the algorithm only one connected component
remains, so T is then a minimum spanning tree for all the
nodes of G.
Continue…
 To illustrate how this algorithm works, consider the graph in the figure below. In increasing order of length the edges are: {1, 2}, {2, 3}, {4, 5}, {6, 7}, {1, 4}, {2, 5}, {4, 7}, {3, 5}, {2, 4}, {3, 6}, {5, 7} and {5, 6}.
 The algorithm proceeds as follows.
 When the algorithm stops, T contains the chosen edges {1, 2}, {2, 3}, {4, 5}, {6, 7}, {1, 4} and {4, 7}. This minimum spanning tree is shown by the heavy lines in the figure; its total length is 17.
Algorithm
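The algorithm can be sketched as below, using a simple union-find (disjoint-set) structure to track connected components. The `(length, u, v)` edge format is an illustrative choice, and the edge lengths in the sample data are assumptions chosen to be consistent with the ordering and the total length 17 given above:

```python
# Kruskal's algorithm: examine edges in increasing order of length and
# accept an edge iff its endpoints lie in different connected components.
def kruskal(nodes, edges):
    """nodes: iterable of node labels; edges: list of (length, u, v)."""
    parent = {v: v for v in nodes}       # disjoint-set forest

    def find(x):                         # representative of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    T = []
    for length, u, v in sorted(edges):   # increasing order of length
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components: no cycle
            parent[ru] = rv              # merge the two components
            T.append((u, v, length))
    return T

# Edge lengths are assumptions consistent with the example in the text.
edges = [(1, 1, 2), (2, 2, 3), (3, 4, 5), (3, 6, 7), (4, 1, 4), (4, 2, 5),
         (4, 4, 7), (5, 3, 5), (6, 2, 4), (6, 3, 6), (7, 5, 7), (8, 5, 6)]
T = kruskal(range(1, 8), edges)
print(T, sum(l for _, _, l in T))   # six chosen edges, total length 17
```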
Prim's algorithm
 In Kruskal's algorithm the selection function chooses edges in increasing order of length without worrying too much about their connection to previously chosen edges, except that we are careful never to form a cycle.
 The result is a forest of trees that grows somewhat haphazardly, until finally all the components of the forest merge into a single tree.
 In Prim's algorithm, on the other hand, the minimum spanning tree grows in a natural way, starting from an arbitrary root. At each stage we add a new branch to the tree already constructed; the algorithm stops when all the nodes have been reached.
Continue…
Let B be a set of nodes, and T a set of edges. Initially, B contains a single arbitrary node, and T is empty. At each step Prim's algorithm looks for the shortest possible edge {u, v} such that u ∈ B and v ∈ N \ B. It then adds v to B and {u, v} to T. In this way the edges in T form at any instant a minimum spanning tree for the nodes in B. We continue thus as long as B ≠ N. Here is an informal statement of the algorithm.
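The informal statement can be sketched as follows, using a heap to find the shortest edge leaving B. The adjacency-dict format is an illustrative choice, and the edge lengths are assumptions consistent with the example graph in the text:

```python
import heapq

# Prim's algorithm: grow the tree from an arbitrary root, always adding
# the shortest edge {u, v} with u in B and v outside B.
def prim(graph, root):
    """graph: {node: [(length, neighbour), ...]}; returns the MST edges."""
    B = {root}                                # nodes reached so far
    T = []                                    # edges of the growing tree
    heap = [(l, root, v) for l, v in graph[root]]
    heapq.heapify(heap)
    while heap:
        length, u, v = heapq.heappop(heap)    # shortest candidate edge
        if v in B:                            # edge no longer leaves B
            continue
        B.add(v)
        T.append((u, v, length))
        for l, w in graph[v]:                 # new edges leaving B via v
            if w not in B:
                heapq.heappush(heap, (l, v, w))
    return T

# Illustrative edge lengths matching the example graph; root is node 1.
edges = [(1, 1, 2), (2, 2, 3), (3, 4, 5), (3, 6, 7), (4, 1, 4), (4, 2, 5),
         (4, 4, 7), (5, 3, 5), (6, 2, 4), (6, 3, 6), (7, 5, 7), (8, 5, 6)]
graph = {v: [] for v in range(1, 8)}
for l, u, v in edges:
    graph[u].append((l, v))
    graph[v].append((l, u))
T = prim(graph, 1)
print(T, sum(l for _, _, l in T))
```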
Continue…
To illustrate the algorithm, consider once again the graph in the figure below. We arbitrarily choose node 1 as the starting node. Now the algorithm might progress as follows.
Example 1
1) Find the minimum spanning tree by using Prim's algorithm and Kruskal's algorithm.

Solution by Kruskal's algorithm

Steps           Edge considered   Connected components
Initialization  -------------     {a}, {b}, {c}, {d}, {e}, {z}
1               {d,e}             {a}, {b}, {c}, {d,e}, {z}
2               {a,b}             {a,b}, {c}, {d,e}, {z}
3               {c,e}             {a,b}, {c,d,e}, {z}
4               {d,z}             {a,b}, {c,d,e,z}
5               {a,c}             {a,b,c,d,e,z}

Total minimum cost = 10

Solution by Prim's algorithm

Steps           Edge {u,v}        B
Initialization  -------------     {a}
1               {a,b}             {a,b}
2               {a,c}             {a,b,c}
3               {c,e}             {a,b,c,e}
4               {e,d}             {a,b,c,d,e}
5               {d,z}             {a,b,c,d,e,z}
Example 2
2) By Kruskal's algorithm

Steps           Edge considered   Connected components
Initialization  -------------     {A}, {B}, {C}, {D}, {E}, {F}
1               {B,D}             {A}, {B,D}, {C}, {E}, {F}
2               {B,C}             {A}, {B,C,D}, {E}, {F}
3               {C,D}             Rejected
4               {B,F}             {A}, {B,C,D,F}, {E}
5               {F,D}             Rejected
6               {A,B}             {A,B,C,D,F}, {E}
7               {E,D}             {A,B,C,D,E,F}

Total minimum cost = 56

Minimum spanning tree
Example 3
Solution

Steps           Edge considered       Connected components
Initialization  -------------------   {a} {b} {c} {d} {e} {f} {g}
1               {a,d}                 {a,d} {b} {c} {e} {f} {g}
2               {c,e}                 {a,d} {b} {c,e} {f} {g}
3               {d,f}                 {a,d,f} {b} {c,e} {g}
4               {a,b}                 {a,b,d,f} {c,e} {g}
5               {b,e}                 {a,b,c,d,e,f} {g}
6               {b,c}                 Rejected
7               {e,f}                 Rejected
8               {b,d}                 Rejected
9               {e,g}                 {a,b,c,d,e,f,g}

Total minimum cost = 30
Graph: Shortest Path
Consider now a directed graph G = (N, A) where N is the set
of nodes of G and A is the set of directed edges. Each edge
has a nonnegative length. One of the nodes is designated as
the source node. The problem is to determine the length of
the shortest path from the source to each of the other nodes of
the graph.
Example
The algorithm proceeds as follows on the graph.
Analysis of the algorithm:
Suppose Dijkstra's algorithm is applied to a graph with n nodes and a edges. Using the representation suggested up to now, the instance is given in the form of a matrix L[1..n, 1..n]. Initialization takes a time in O(n). In a straightforward implementation, choosing v in the repeat loop requires all the elements of C to be examined, so we look at n-1, n-2, ..., 2 values of D on successive iterations, giving a total time in O(n²). The inner for loop does n-2, n-3, ..., 1 iterations, for a total also in O(n²). The time required by this version of the algorithm is therefore in O(n²).
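The straightforward O(n²) version analysed above can be sketched like this; the matrix representation with float('inf') for missing edges, and the sample graph, are illustrative assumptions:

```python
# Dijkstra's algorithm, straightforward O(n^2) version: C holds the nodes
# whose shortest distance is not yet final, D the tentative distances.
def dijkstra(L, source=0):
    n = len(L)
    C = set(range(n)) - {source}
    D = list(L[source])                  # initialize from the source row
    D[source] = 0
    for _ in range(n - 2):               # the repeat loop runs n-2 times
        v = min(C, key=lambda x: D[x])   # examine all of C to choose v
        C.remove(v)
        for w in C:                      # inner loop: relax edges from v
            D[w] = min(D[w], D[v] + L[v][w])
    return D

INF = float('inf')                       # no edge between these nodes
L = [[0,   10,  INF, 30,  100],
     [INF, 0,   50,  INF, INF],
     [INF, INF, 0,   INF, 10],
     [INF, INF, 20,  0,   60],
     [INF, INF, INF, INF, 0]]
print(dijkstra(L))  # shortest distances from node 0 to every node
```

Note that the last node left in C need not be examined: its distance can no longer improve any other entry of D, which is why n-2 iterations suffice.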
The knapsack problem (1)
 We are given n objects and a knapsack. For i = 1, 2, ..., n, object i has a positive weight wi and a positive value vi.
 The knapsack can carry a weight not exceeding W. Our aim is to fill the knapsack in a way that maximizes the value of the included objects, while respecting the capacity constraint.
 In this first version of the problem we assume that the objects can be broken into smaller pieces, so we may decide to carry only a fraction xi of object i, where 0 ≤ xi ≤ 1.
Continue…
In this case, object i contributes xi wi to the total weight in the knapsack, and xi vi to the value of the load. In symbols, the problem can be stated as follows:

maximize Σ xi vi subject to Σ xi wi ≤ W,

 where vi > 0, wi > 0 and 0 ≤ xi ≤ 1 for 1 ≤ i ≤ n.
Algorithm
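A sketch of the greedy fractional-knapsack algorithm in Python. The interface is illustrative, and the sample weights and values are assumptions chosen to be consistent with the worked example that follows (five objects, W = 100, with the three solution values 146, 156 and 164):

```python
# Fractional knapsack: take objects in decreasing order of v_i / w_i,
# splitting the last object taken to fill the remaining capacity exactly.
def fractional_knapsack(weights, values, W):
    """Return (total value, list of fractions x_i in the original order)."""
    n = len(weights)
    x = [0.0] * n
    remaining = W
    order = sorted(range(n), key=lambda i: values[i] / weights[i],
                   reverse=True)                 # decreasing v_i / w_i
    total = 0.0
    for i in order:
        if remaining <= 0:
            break
        take = min(1.0, remaining / weights[i])  # fraction of object i
        x[i] = take
        total += take * values[i]
        remaining -= take * weights[i]
    return total, x

# Assumed data consistent with the worked example in the text.
total, x = fractional_knapsack([10, 20, 30, 40, 50],
                               [20, 30, 66, 40, 60], 100)
print(total, x)
```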
Example
 Here we have five objects, and W = 100.
 Different solutions:
 If we select the objects in order of decreasing value, then we choose first object 3, then object 5, and finally we fill the knapsack with half of object 4. The value of the solution obtained in this way is 66 + 60 + 40/2 = 146.
 If we select the objects in order of increasing weight, then we choose objects 1, 2, 3 and 4 in that order, and now the knapsack is full. The value of this solution is 20 + 30 + 66 + 40 = 156.
Continue…
 Finally if we select the objects in order of
decreasing vi /wi, we choose first object 3, then
object 1, next object 2, and finally we fill the
knapsack with four-fifths of object 5. Using this
tactic, the value of the solution is 20 + 30 + 66 +
0.8 x 60 = 164.
Continue…
Analysis:
 Implementation of the algorithm is straightforward. If the objects are already sorted into decreasing order of vi/wi, then the greedy loop clearly takes a time in O(n); the total time including the sort is therefore in O(n log n).
 It may be worthwhile to keep the objects in a heap with the largest value of vi/wi at the root.
 Creating the heap takes a time in O(n), while each trip round the greedy loop now takes a time in O(log n) since the heap property must be restored after the root is removed.
 Although this does not alter the worst-case analysis, it may be faster if only a few objects are needed to fill the knapsack.
Scheduling
 In this section we present two problems concerning the optimal way to schedule jobs on a single machine.
 In the first, the problem is to minimize the average time that a job spends in the system.
 In the second, the jobs have deadlines, and a job brings in a certain profit only if it is completed by its deadline: our aim is to maximize profitability.
 Both these problems can be solved using greedy algorithms.
1. Minimizing time in the system
 A single server, such as a processor, a petrol pump, or a cashier in a bank, has n customers to serve.
 The service time required by each customer is known in advance: customer i will take time ti, 1 ≤ i ≤ n. We want to minimize the average time that a customer spends in the system.
 Since n, the number of customers, is fixed, this is the same as minimizing the total time spent in the system by all the customers. In other words, we want to minimize T = Σ (time in system for customer i), summed over i = 1 to n.
Example
Suppose for example we have three customers, with t1 = 5, t2 = 10 and t3 = 3. There are six possible orders of service.
Solution
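For the three customers above, a brute-force check over all six orders (a sketch; the variable names are illustrative) confirms the greedy rule of serving customers in increasing order of service time:

```python
from itertools import permutations

t = [5, 10, 3]   # service times t1, t2, t3 from the example

def total_time(order):
    """Total time spent in the system by all customers served in 'order'."""
    elapsed, total = 0, 0
    for service in order:
        elapsed += service   # this customer leaves at time 'elapsed'
        total += elapsed     # so 'elapsed' is its time in the system
    return total

for order in permutations(t):        # all six orders of service
    print(order, total_time(order))

# Greedy: serving the shortest job first minimizes the total.
print("greedy total:", total_time(sorted(t)))  # order (3, 5, 10)
```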
2. Scheduling with deadlines
 We have a set of n jobs to execute, each of which takes unit time. At any time T = 1, 2, ... we can execute exactly one job. Job i earns us a profit gi > 0 if and only if it is executed no later than time di.
 For example, with n = 4 and the following values:
Solution
The schedules to consider and the corresponding profits are:
 The sequence 3, 2 for instance is not considered because job 2 would be executed at time t = 2, after its deadline d2 = 1. To maximize our profit in this example, we should execute the schedule 4, 1.
 In the preceding example we first choose job 1. Next, we choose job 4; the set {1, 4} is feasible because it can be executed in the order 4, 1.
 Next we try the set {1, 3, 4}, which turns out not to be feasible.
 Finally we try {1, 2, 4}, which is also infeasible.
 Our solution, optimal in this case, is therefore to execute the set of jobs {1, 4}, which can only be done in the order 4, 1. It remains to be proved that this algorithm always finds an optimal schedule and to find an efficient way of implementing it.
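The procedure just described can be sketched as follows. The profit and deadline values below are assumptions consistent with the reasoning in the text (job 2's deadline is 1, and the greedy choice order is 1, 4, 3, 2 with chosen set {1, 4}), since the original table is not reproduced here:

```python
# Greedy scheduling with deadlines: consider jobs in decreasing order of
# profit, keeping a job only if the chosen set of jobs stays feasible.
def feasible(jobs, d):
    """A set of unit-time jobs is feasible iff, sorted by deadline, the job
    placed in slot i (i = 1, 2, ...) has deadline >= i."""
    for slot, j in enumerate(sorted(jobs, key=lambda i: d[i]), start=1):
        if d[j] < slot:
            return False
    return True

def schedule(g, d):
    """g[i], d[i]: profit and deadline of job i (index 0 unused)."""
    jobs = sorted(range(1, len(g)), key=lambda i: g[i], reverse=True)
    chosen = []
    for j in jobs:                       # most profitable job first
        if feasible(chosen + [j], d):
            chosen.append(j)             # keep j
        # otherwise j is rejected and never reconsidered
    return chosen

g = [0, 50, 10, 15, 30]   # assumed profits g1..g4
d = [0, 2, 1, 2, 1]       # assumed deadlines d1..d4
print(schedule(g, d))     # the chosen set of jobs
```

With these assumed values the chosen set is {1, 4}, executable only in the order 4, 1, matching the walk-through above.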
