
DATA STRUCTURES AND ALGORITHMS

B.Tech (CSE)
MODULE-4
ALGORITHM DESIGN PARADIGMS

Dr.B.K.Tripathy
Senior Professor
School of Computer Science and Engineering
VIT University, Vellore-632014
Tamil Nadu, India
DIVIDE AND CONQUER ALGORITHMS

• In this approach we solve a problem recursively, applying three
steps at each level of the recursion:
• DIVIDE:
• The problem is divided into a number of sub-problems that are
smaller instances of the original problem
• CONQUER:
• Conquer the sub-problems by solving them recursively
• If the sizes of the sub-problems are small enough then these
can be solved directly instead of recursion
• COMBINE:
• The solutions of the sub-problems are combined into the
solution of the original problem
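• As an illustration, here is a minimal merge-sort sketch in Python (our own illustration; the function name and test array are not from the slides) showing exactly these three steps:

def merge_sort(a):
    if len(a) <= 1:                   # small enough: solve directly
        return a
    mid = len(a) // 2                 # DIVIDE into two smaller instances
    left = merge_sort(a[:mid])        # CONQUER each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0           # COMBINE the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]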
DIVIDE AND CONQUER ALGORITHMS

• Recurrences go hand in hand with divide and conquer algorithms
• They provide us a natural way to characterise the running
times of divide and conquer algorithms
• A recurrence is an equation or inequality that describes a
function in terms of its value on smaller inputs
• We have earlier dealt with a divide and conquer algorithm to
describe the MERGE-SORT algorithm
• The Master Theorem helps us in finding the complexity
estimation of specific recurrences
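• For example, MERGE-SORT divides the input into two halves and merges them in linear time, so its running time satisfies the recurrence T(n) = 2T(n/2) + Θ(n), and the Master Theorem gives T(n) = Θ(n log n)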
DIVIDE AND CONQUER ALGORITHMS- MATRIX
MULTIPLICATION
• If A = (a_ij) is an m x n matrix and B = (b_jk) is an n x p matrix, so that the number of
columns of A is the same as the number of rows of B, then we can
find their product

• If both A and B are square matrices of order n then the
product C = A x B is also a square matrix of order n and it is given by
• c_ij = sum over k = 1 to n of a_ik * b_kj, for i, j = 1, 2, ..., n
MULTIPLICATION ALGORITHM FOR SQUARE
MATRICES
• SQUARE-MATRIX-MULTIPLY (A, B)
• STEP 1: n = A.rows
• STEP 2: let C be a new n x n matrix
• STEP 3: for i = 1 to n
• STEP 4: for j = 1 to n
• STEP 5: c_ij = 0
• STEP 6: for k = 1 to n
• STEP 7: c_ij = c_ij + a_ik * b_kj
• STEP 8: return C
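• A direct Python transcription of this pseudocode (a minimal sketch with 0-indexed lists; the function name is ours):

def square_matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]   # let C be a new n x n matrix
    for i in range(n):
        for j in range(n):
            for k in range(n):        # c_ij = sum over k of a_ik * b_kj
                C[i][j] += A[i][k] * B[k][j]
    return C

print(square_matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]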
MATRIX MULTIPLICATION CONTD…

• The previous algorithm is of complexity Θ(n^3)
• A simple divide and conquer algorithm:
• We assume that the size of the matrices is an exact power of 2
• Let each of A, B and C be partitioned into four n/2 x n/2 submatrices:
• A = [A11 A12; A21 A22], B = [B11 B12; B21 B22], C = [C11 C12; C21 C22]

• So, as we have C = A x B,
• We have the following:
• C11 = A11*B11 + A12*B21
• C12 = A11*B12 + A12*B22
• C21 = A21*B11 + A22*B21
• C22 = A21*B12 + A22*B22
SQUARE-MATRIX-MULTIPLY-RECURSIVE (A, B)

• STEP 1: n = A.rows
• STEP 2: let C be a new n x n matrix
• STEP 3: if n == 1
• STEP 4: c11 = a11 * b11
• STEP 5: else partition A, B and C as in the above equations
• STEP 6: C11 = SMMR(A11, B11) + SMMR(A12, B21)
• STEP 7: C12 = SMMR(A11, B12) + SMMR(A12, B22)
• STEP 8: C21 = SMMR(A21, B11) + SMMR(A22, B21)
• STEP 9: C22 = SMMR(A21, B12) + SMMR(A22, B22)
• (writing SMMR for SQUARE-MATRIX-MULTIPLY-RECURSIVE)
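• A minimal Python/NumPy sketch of this recursion, assuming n is an exact power of 2; NumPy slices serve as the index calculations discussed on the next slide (the function name is ours):

import numpy as np

def smm_recursive(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B                  # STEP 4: c11 = a11 * b11
    h = n // 2                        # partition via index ranges (views, no copying)
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = smm_recursive(A[:h, :h], B[:h, :h]) + smm_recursive(A[:h, h:], B[h:, :h])
    C[:h, h:] = smm_recursive(A[:h, :h], B[:h, h:]) + smm_recursive(A[:h, h:], B[h:, h:])
    C[h:, :h] = smm_recursive(A[h:, :h], B[:h, :h]) + smm_recursive(A[h:, h:], B[h:, :h])
    C[h:, h:] = smm_recursive(A[h:, :h], B[:h, h:]) + smm_recursive(A[h:, h:], B[h:, h:])
    return C

print(smm_recursive(np.array([[1, 2], [3, 4]]), np.array([[5, 6], [7, 8]])))
# [[19 22]
#  [43 50]]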
SQUARE-MATRIX-MULTIPLY-RECURSIVE (A, B)

• STEP 10: return C
• A query: How do we partition the matrices?
• Solution: If we create 12 (3 x 4 each) new n/2 x n/2 matrices,
we would require Θ(n^2) time copying entries
• Instead, we use index calculations
• We identify a sub-matrix by a range of row indices and a
range of column indices of the original matrix
• The advantage is that we can compute STEP 5 in Θ(1) time
• If we represent by T(n) the time to multiply two n x n matrices
using this procedure then
SQUARE-MATRIX-MULTIPLY-RECURSIVE (A, B)

• When n = 1, T(1) = Θ(1)
• For n > 1, T(n) = 8T(n/2) + Θ(n^2)

• So, the time complexity of the SQUARE-MATRIX-MULTIPLY-
RECURSIVE algorithm is T(n) = Θ(n^3)
STRASSEN’S METHOD

• This method makes the recursion tree slightly less bushy
• Instead of performing eight recursive multiplications on (n/2) x
(n/2) matrices, it performs seven
• The cost of eliminating one matrix multiplication is several
new additions of (n/2) x (n/2) matrices
• The four steps:
• STEP 1: Divide the input matrices A and B and output matrix C
into (n/2) x (n/2) submatrices. This takes Θ(1) time by index calculation
• STEP 2: Create 10 matrices S1, S2, ..., S10, each of which is (n/2) x (n/2) and is
the sum or difference of two matrices created in step 1. We can
create all 10 matrices in Θ(n^2) time
STRASSEN’S METHOD

• STEP 3: Using the submatrices created in step 1 and the 10
matrices created in step 2, recursively compute seven
matrix products P1, P2, ..., P7. Each matrix Pi is (n/2) x (n/2)

• STEP 4: Compute the desired submatrices C11, C12, C21, C22 of the result
matrix C by adding and subtracting various combinations of
the Pi matrices. We can compute all four submatrices in Θ(n^2) time
STRASSEN’S METHOD

• The 10 matrices are defined as follows:
• S1 = B12 - B22, S2 = A11 + A12, S3 = A21 + A22, S4 = B21 - B11, S5 = A11 + A22
• S6 = B11 + B22, S7 = A12 - A22, S8 = B21 + B22, S9 = A11 - A21, S10 = B11 + B12

• Since we must add or subtract (n/2) x (n/2) matrices 10
times, this step does indeed take Θ(n^2) time
STRASSEN’S METHOD

• The product matrices are given by
• P1 = A11 * S1, P2 = S2 * B22, P3 = S3 * B11, P4 = A22 * S4
• P5 = S5 * S6, P6 = S7 * S8, P7 = S9 * S10
STRASSEN’S METHOD

• The result submatrices are then
• C11 = P5 + P4 - P2 + P6, C12 = P1 + P2, C21 = P3 + P4, C22 = P5 + P1 - P3 - P7
• Example (an illustrative instance; for A = [1 2; 3 4] and B = [5 6; 7 8] the submatrices are scalars):
• S1 = -2, S2 = 3, S3 = 7, S4 = 2, S5 = 5, S6 = 13, S7 = -2, S8 = 15, S9 = -2, S10 = 11
• P1 = -2, P2 = 24, P3 = 35, P4 = 8, P5 = 65, P6 = -30, P7 = -22
• C = [C11 C12; C21 C22] = [19 22; 43 50]
• It matches the direct computation of the multiplication of the
matrices
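• A minimal Python/NumPy sketch of Strassen's method, assuming n is an exact power of 2 (the function name is ours; submatrix views provide the index calculations):

import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B                    # base case: 1 x 1 matrices
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products on the S-matrices defined above
    P1 = strassen(A11, B12 - B22)
    P2 = strassen(A11 + A12, B22)
    P3 = strassen(A21 + A22, B11)
    P4 = strassen(A22, B21 - B11)
    P5 = strassen(A11 + A22, B11 + B22)
    P6 = strassen(A12 - A22, B21 + B22)
    P7 = strassen(A11 - A21, B11 + B12)
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = P5 + P4 - P2 + P6       # C11
    C[:h, h:] = P1 + P2                 # C12
    C[h:, :h] = P3 + P4                 # C21
    C[h:, h:] = P5 + P1 - P3 - P7       # C22
    return C

print(strassen(np.array([[1, 2], [3, 4]]), np.array([[5, 6], [7, 8]])))
# [[19 22]
#  [43 50]]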


HEAP SORT
• BUILD-MAX-HEAP is used to build a max heap on the input
array A[1..n]. n = A.length
• Since the maximum element occurs at the root position, we
put it at the position A[n] of the array by interchanging A[1]
with A[n]
• The node ‘n’ is discarded from the heap by decrementing its
size from n to n-1
• Now the max-heap property may be violated after the
interchange
• So, we call MAX-HEAPIFY(A, 1)
HEAP SORT CONTD…

• The result is a max heap A[1..n-1]


• The procedure is repeated for the heap of size n-1
• We continue like this till a heap of size 2 is reached.

• HEAPSORT (A)
• STEP 1: BUILD-MAX-HEAP (A)
• STEP 2: for i = A.length downto 2
• STEP 3: exchange A[1] with A[i]
• STEP 4: A.heap-size = A.heap-size - 1
• STEP 5: MAX-HEAPIFY (A, 1)
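• A minimal heapsort sketch in Python (0-indexed, unlike the 1-indexed pseudocode; the function names are ours), run on the example array from the next slide:

def max_heapify(a, i, heap_size):
    # Sift a[i] down until the max-heap property holds in its subtree.
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < heap_size and a[left] > a[largest]:
            largest = left
        if right < heap_size and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # BUILD-MAX-HEAP
        max_heapify(a, i, n)
    for end in range(n - 1, 0, -1):       # move the current max to the end
        a[0], a[end] = a[end], a[0]
        max_heapify(a, 0, end)

a = [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
heapsort(a)
print(a)  # [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]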
HEAPSORT- EXAMPLE

• Let the original array be


4 1 3 2 16 9 10 14 8 7

• BUILD-MAX-HEAP(A) produces the max-heap (level by level):
16 | 14 10 | 8 7 9 3 | 2 4 1
EXAMPLE CONTD…

• i = 10. Exchange A[1] with A[10], so that A[10] = 16

• Remove A[10]. A.heap-size = 9
• After MAX-HEAPIFY(A, 1) the heap is (level by level):
14 | 8 10 | 4 7 9 3 | 2 1
EXAMPLE CONTD…

• Exchange A[1] and A[9]. A[9] = 14

• heap-size = heap-size – 1 = 8
• After MAX-HEAPIFY (A, 1) the heap is (level by level):
10 | 8 9 | 4 7 1 3 | 2
EXAMPLE CONTD…

• Exchange A[1] and A[8]. A[8] = 10

• heap-size = heap-size – 1 = 7
• After MAX-HEAPIFY (A, 1) the heap is (level by level):
9 | 8 3 | 4 7 1 2
EXAMPLE CONTD…

• Exchange A[1] and A[7]. A[7] = 9

• heap-size = heap-size – 1 = 6
• After MAX-HEAPIFY (A, 1) the heap is (level by level):
8 | 7 3 | 4 2 1
EXAMPLE CONTD…

• Exchange A[1] and A[6]. A[6] = 8

• heap-size = heap-size – 1 = 5
• After MAX-HEAPIFY (A, 1) the heap is (level by level):
7 | 4 3 | 1 2
EXAMPLE CONTD…

• Exchange A[1] and A[5]. A[5] = 7

• heap-size = heap-size – 1 = 4
• After MAX-HEAPIFY (A, 1) the heap is: 4 | 2 3 | 1
• Exchange A[1] and A[4]. A[4] = 4
• heap-size = heap-size – 1 = 3
• After MAX-HEAPIFY (A, 1) the heap is: 3 | 2 1
EXAMPLE CONTD…

• Exchange A[1] and A[3]. A[3] = 3

• heap-size = heap-size – 1 = 2
• After MAX-HEAPIFY (A, 1) the heap is: 2 | 1

• Exchange A[1] and A[2]. A[2] = 2

• heap-size = heap-size – 1 = 1, leaving the single element: 1

• The sorted array is

1 2 3 4 7 8 9 10 14 16
BRUTE FORCE ALGORITHM

• In computer science, brute-force search or exhaustive search,
also known as generate and test, is a very general problem-solving technique
• It consists of systematically enumerating all possible
candidates for the solution and
• Checking whether each candidate satisfies the problem's statement

• IT IS CONSIDERED ONE OF THE EASIEST APPROACHES TO
APPLY AND IS USEFUL FOR SOLVING SMALL-SIZE INSTANCES
OF A PROBLEM
BRUTE FORCE ALGORITHM-EXAMPLE

• SELECTION SORT:
• The entire given list of n elements is scanned to find its
smallest element, which is exchanged with the first element.
• Thus, the smallest element is moved to its final position in the
sorted list.
• Then, the list is scanned again, starting with the second
element, in order to find the smallest element among the remaining
n – 1 and exchange it with the second element.
• The second smallest element is put in its final position in the
sorted list. After n-1 passes, the list is sorted.
SELECTION SORT
• ALGORITHM SELECTION-SORT (A[0..n-1])
• STEP 1: for i ← 0 to n-2
• STEP 2: do min ← i
• STEP 3: for j ← i + 1 to n-1
• STEP 4: do if A[j] < A[min]
• STEP 5: min ← j
• STEP 6: swap A[i] and A[min]

5 3 8 4 7 1 3 9 11 14

i = 0, min = 0. j = 1 to 9; if A[j] < A[min] then min = j

We see that at j = 1, A[1] < A[0], so min = 1; then j = 2, 3 and 4 do not satisfy this
condition. Move to j = 5; A[5] < A[1]. So, set min = 5. After that no element
satisfies the property. So, finally min = 5. Exchange A[0] with A[5].
Next, for i = 1 follow the same procedure and continue like that.
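• A minimal Python sketch of this selection sort (0-indexed, following the pseudocode above; the function name is ours):

def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):            # find the smallest in a[i..n-1]
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # move it to its final position

a = [5, 3, 8, 4, 7, 1, 3, 9, 11, 14]
selection_sort(a)
print(a)  # [1, 3, 3, 4, 5, 7, 8, 9, 11, 14]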
GREEDY ALGORITHMS

• A greedy algorithm always makes the choice that looks best at the moment
• It makes a locally optimal choice in the hope that this choice
will lead to a globally optimal solution
• Greedy algorithms do not always yield optimal solutions
• For many problems they do
• We will consider two problems under this category
• The Activity-selection problem
• The knapsack problem
THE ACTIVITY-SELECTION PROBLEM

• It is a problem of scheduling several competing activities that
require exclusive use of a common resource, with a goal of
selecting a maximum-size set of mutually compatible activities
• Let S = {a_1, a_2, ..., a_n} be a set of proposed activities
• They wish to use a common resource
• Each activity a_i has a start time s_i and a finish time f_i
• We have 0 <= s_i < f_i < infinity
• If selected, the activity a_i takes place during the half-open time interval [s_i, f_i)
THE ACTIVITY-SELECTION PROBLEM

• Two activities a_i and a_j are said to be compatible if the intervals [s_i, f_i)
and [s_j, f_j) do not overlap
• Equivalently, a_i and a_j are said to be compatible if s_i >= f_j or s_j >= f_i
• We assume that the activities are sorted in monotonically
increasing order of finish time; i.e. f_1 <= f_2 <= ... <= f_n

• An example of 11 such activities:

i    1  2  3  4  5  6  7  8  9  10  11
si   1  3  0  5  3  5  6  8  8  2   12
fi   4  5  6  7  9  9  10 11 12 14  16
THE ACTIVITY-SELECTION PROBLEM

• In this example
• The subset {a_3, a_9, a_11} is a subset of mutually compatible activities

• The subset {a_1, a_4, a_8, a_11} is a largest subset of mutually compatible
activities
• Another such subset is {a_2, a_4, a_9, a_11}
THE OPTIMAL SUBSTRUCTURE OF THE
ACTIVITY-SELECTION PROBLEM
• Let S_ij be the set of activities that start after activity a_i finishes
and finish before activity a_j starts
• Suppose we want to find a maximum set of mutually
compatible activities in S_ij
• Let A_ij be such a subset, which includes some activity a_k
• This generates two sub-problems
• 1. Finding the mutually compatible activities in the set S_ik
• 2. Finding the mutually compatible activities in the set S_kj
• Let A_ik = A_ij ∩ S_ik and A_kj = A_ij ∩ S_kj
• So, A_ik contains the activities in A_ij that finish before a_k starts
• A_kj contains the activities in A_ij that start after a_k finishes
THE OPTIMAL SUBSTRUCTURE OF THE
ACTIVITY-SELECTION PROBLEM
• Thus we have the relation A_ij = A_ik ∪ {a_k} ∪ A_kj
• So, the maximum-size set A_ij of mutually compatible activities
in S_ij consists of |A_ij| = |A_ik| + |A_kj| + 1 activities
• So, the optimal solution must also include optimal solutions to
the two sub-problems for S_ik and S_kj
• A greedy choice will help us
• That is, we will choose an activity that leaves the resource
available for as many other activities as possible
GREEDY CHOICE

• Of the activities we end up choosing, one of them is the activity that
finishes first
• So, our intuition tells us to choose the activity in S having the
earliest finish time
• This will leave the resource available for as many activities
as possible that follow it
• If more than one activity has the earliest finish time, we
choose any one of them
• Now, since the activities are sorted in increasing order of
their finish time, a_1 will be our greedy choice
GREEDY CHOICE

• After making the greedy choice, we have only one remaining
sub-problem to solve:
• Finding the activities that start after a_1 finishes
• Let S_1 = {a_i in S : s_i >= f_1}, which is the set of activities that start after a_1 finishes
• So, after selecting a_1 (the greedy choice), we have only to solve S_1
• So, optimal substructure tells us that if a_1 is in the optimal
solution, then an optimal solution to the original problem
consists of activity a_1 and all the activities in an optimal
solution to the sub-problem S_1
A RECURSIVE GREEDY ALGORITHM

• Inputs: (s, f, k, n)
• s and f are arrays containing the start and finish times of all the
activities
• k is the sub-problem number to be solved
• n is the size of the original problem

• Output: a maximum-size set of mutually compatible activities in
S_k = {a_i in S : s_i >= f_k}

• Assumptions:
• The n input activities are already ordered by monotonically
increasing finish time
• Start with a fictitious activity a_0 with finish time f_0 = 0
A RECURSIVE GREEDY ALGORITHM

• The initial call is RECURSIVE-ACTIVITY-SELECTOR (s, f, 0, n)
• RECURSIVE-ACTIVITY-SELECTOR (s, f, k, n)
• STEP 1: m = k + 1
• STEP 2: while m <= n and s[m] < f[k]
• STEP 3: m = m + 1
• STEP 4: if m <= n
• STEP 5: return {a_m} ∪ RECURSIVE-ACTIVITY-SELECTOR(s, f, m, n)
• STEP 6: else return the empty set
• (Steps 2-3: so long as the start time of the next activity is before the kth
activity finishes, increase m, as all these activities are not compatible
with the kth activity)
EXPLANATIONS

• The while loop in steps 2 and 3 looks for the first activity in S_k to
finish
• The loop examines a_{k+1}, a_{k+2}, ..., a_n until it finds the first activity a_m
that is compatible with a_k
• Such an activity has s_m >= f_k
• After finding such an activity, it makes a recursive call with the
new position m:
• RECURSIVE-ACTIVITY-SELECTOR(s, f, m, n)
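• A minimal Python sketch of the recursive selector, using 1-indexed arrays with the fictitious activity a_0 (f[0] = 0) described above (the function name is ours):

def recursive_activity_selector(s, f, k, n):
    m = k + 1
    while m <= n and s[m] < f[k]:   # skip activities that start before a_k finishes
        m += 1
    if m <= n:
        return [m] + recursive_activity_selector(s, f, m, n)
    return []

s = [None, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]   # index 0 is the fictitious a_0
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(recursive_activity_selector(s, f, 0, 11))  # [1, 4, 8, 11]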
ILLUSTRATION
k    0  1  2  3  4  5  6  7  8  9  10  11
sk   -  1  3  0  5  3  5  6  8  8  2   12
fk   0  4  5  6  7  9  9  10 11 12 14  16
• Starting from the fictitious activity a_0, the procedure selects a_1, skips
a_2 and a_3, selects a_4, skips a_5, a_6 and a_7, selects a_8, skips a_9 and
a_10, and finally selects a_11
AN ITERATIVE GREEDY ALGORITHM
• It is an iterative version of the procedure RECURSIVE-ACTIVITY-
SELECTOR
• It also assumes that the input activities are ordered by
monotonically increasing finish time
• It collects selected activities into a set A and returns this set
when it is done
• GREEDY-ACTIVITY-SELECTOR (s, f)
• STEP 1: n = s.length; A = {a_1}; k = 1
• STEP 2: for m = 2 to n
• STEP 3: if s[m] >= f[k]
• STEP 4: A = A ∪ {a_m}
• STEP 5: k = m
• STEP 6: return A
• (Steps 3-5: when an activity is found whose start time is not earlier than
the finish time of the kth activity, add it to A, set k to m, and proceed further)
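• A minimal Python sketch of the iterative selector (0-indexed; activities are assumed pre-sorted by finish time, and the function name is ours):

def greedy_activity_selector(s, f):
    n = len(s)
    A = [0]                # always select the first (earliest-finishing) activity
    k = 0
    for m in range(1, n):
        if s[m] >= f[k]:   # compatible with the most recently added activity
            A.append(m)
            k = m
    return A

s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print([i + 1 for i in greedy_activity_selector(s, f)])  # [1, 4, 8, 11]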
WORKING PRINCIPLE

• The variable k indexes the most recent addition to A,
corresponding to the activity a_k in the recursive version
• Since we consider the activities in order of monotonically
increasing finish time, f_k is always the maximum finish time of
any activity in A
• First we select activity a_1; we initialize A = {a_1} and set k to its index
• The for loop finds the earliest activity in S_k to finish (the set of activities
which start after a_k finishes)
• The loop considers each activity a_m in turn and adds a_m to A if it is
compatible with all previously selected activities
• Such an activity is the earliest in S_k to finish
WORKING PRINCIPLE

• To see whether activity a_m is compatible with every activity
currently in A, it is sufficient to check that its start time s_m is not
earlier than the finish time f_k of the activity most recently
added to A
• If activity a_m is compatible, then it is added to A and we
set k = m
• The set A is returned finally
DYNAMIC PROGRAMMING

• Like the Divide and Conquer method, it solves problems by combining
the solutions to sub-problems
• NOTE:
• Programming here refers to a tabular method. It is not writing
computer code
• In contrast to the Divide and Conquer technique, which partitions the
problem into disjoint sub-problems, Dynamic Programming is
applied when the sub-problems overlap
• By overlapping we mean the sub-problems share sub-sub-problems
• Divide and Conquer algorithms then do more work than
necessary: they repeatedly solve the common sub-problems
DYNAMIC PROGRAMMING

• A dynamic programming algorithm


• solves every sub-problem only once and
• saves its answer in a table
• So, it avoids re-computing the answer every time it solves
each sub-sub-problem
• Dynamic programming is applicable to optimization problems
• The optimization problems have a set of solutions
• Each solution has a value
• The solution which is optimal ( maximum or minimum) in
value is selected
• It may happen that the number of optimal solutions may be
more than one (That is solutions having same optimal value)
DYNAMIC PROGRAMMING

• STEPS IN DEVELOPING A DYNAMIC PROGRAMMING ALGORITHM
• STEP 1: Characterise the structure of an optimal solution
• STEP 2: Recursively define the value of an optimal solution
• STEP 3: Compute the value of an optimal solution in a bottom-
up fashion
• STEP 4: Construct an optimal solution from computed
information
• If we only want the value of an optimal solution and not the
solution itself, we can omit step-4
ROD CUTTING PROBLEM

• Suppose an enterprise buys long steel rods and cuts them
into shorter rods
• Then it sells the shorter rods
• The cutting process is free
• So, the enterprise wants to know the best way to cut the rods
• Suppose the enterprise knows the price p_i of a rod of length i,
i = 1, 2, ..., n
• It wants to find the maximum revenue r_n obtainable by cutting
up the rod and selling the pieces
ROD CUTTING

• Consider an example: the 8 ways of cutting a rod of length 4, with the
revenue of each (using the price table below):
• (4) -> 9; (1, 3) -> 9; (3, 1) -> 9; (2, 2) -> 10; (1, 1, 2) -> 7; (1, 2, 1) -> 7;
(2, 1, 1) -> 7; (1, 1, 1, 1) -> 4
• The best way is two pieces of length 2, with revenue 10
• A sample price table for rods: (Known beforehand)
i 1 2 3 4 5 6 7 8 9 10
pi 1 5 8 9 10 17 17 20 24 30
ROD CUTTING

• We can cut a rod of length n in 2^(n-1) ways, since there are (n-1)
intermediate positions, each of which may be cut or not (2 ways)
• A cut such as 5 = 1 + 2 + 2 means that a rod of length 5 is cut into
2 pieces of length 2 each and a rod of length 1
• Suppose an optimal decomposition cuts a rod of length n
into k pieces, of lengths i_1, i_2, ..., i_k; then n = i_1 + i_2 + ... + i_k
• Suppose n = 7 and k = 3; then we may have 7 = 2 + 2 + 3
• The corresponding revenue is r_n = p_{i_1} + p_{i_2} + ... + p_{i_k}

• In the above example, we have r_7 = p_2 + p_2 + p_3 = 5 + 5 + 8 = 18
ROD CUTTING

• The revenue function value r_n in terms of optimal revenues
from shorter rods is given by
• r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, ..., r_{n-1} + r_1)

• The first argument p_n corresponds to no cutting of the rod
• The other arguments correspond to cutting the rod into
two pieces of sizes i and n-i, for i = 1, 2, ..., n-1, and obtaining their
optimal values by further cutting
• Since we don't know beforehand which value of i gives the
optimal value, we consider all possible values and take their maximum
• We take no value of i when selling the rod without cutting
generates the highest revenue
ROD CUTTING

• Since, to find the optimal structure of the original problem,
we depend on finding the optimal structure of the two sub-
problems, and so on, we say that the rod-cutting problem exhibits
the optimal substructure property
• A Simpler Version: (recursive structure for the rod-cutting problem)
• We view the decomposition as consisting of a first piece of
length i cut off from the left end and a right-hand remainder
of length n-i
• Then we cut only the right-hand remainder part, but not the
left-hand one
• The formula now becomes
• r_n = max over 1 <= i <= n of (p_i + r_{n-i})
ROD CUTTING ALGORITHM (RECURSIVE)
• The procedure implementing the previous equation in a
top-down, straightforward recursive approach is:

• CUT-ROD(p, n): takes an array of prices p[1..n] and an integer n;
returns the maximum revenue possible for a rod of length n
• STEP 1: if n == 0
• STEP 2: return 0 (a rod of length 0 generates no revenue)
• STEP 3: q = -infinity (so that the computation of the value of q in STEP 5 is correct)
• STEP 4: for i = 1 to n
• STEP 5: q = max{q, p[i] + CUT-ROD(p, n-i)}
• STEP 6: return q
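• A direct Python transcription of this recursive CUT-ROD (exponential time, as analysed next; the price list is the table above, with p[0] unused):

def cut_rod(p, n):
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(cut_rod(p, 4))  # 10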
ANALYSIS OF THE RECURSIVE ALGORITHM

• For even moderately large values of n, this algorithm takes a lot of time
• For n = 40 it takes several minutes and may take more than
one hour
• If we increase the value of n by 1, the running time roughly doubles
• The problem is that the CUT-ROD function calls itself again
and again, i.e. it solves the same sub-problems repeatedly
ROD CUTTING PROBLEM CONTD…

• If the input size becomes large, the previous algorithm takes a
lot of time to execute
• The problem is that the CUT-ROD algorithm calls itself
recursively over and over again with the same parameter values
• For n = 4, the first cut can be at position 1, 2, 3 or 4, and the recursion
tree unfolds as:
• CUT-ROD(4) calls CUT-ROD(3), CUT-ROD(2), CUT-ROD(1) and CUT-ROD(0)
• CUT-ROD(3) calls CUT-ROD(2), CUT-ROD(1) and CUT-ROD(0)
• CUT-ROD(2) calls CUT-ROD(1) and CUT-ROD(0)
• CUT-ROD(1) calls CUT-ROD(0)
ROD CUT PROBLEM CONTD…

• CUT-ROD(p, n) calls CUT-ROD(p, n-i) for i = 1, 2, ..., n
• Or, equivalently,
• CUT-ROD(p, n) calls CUT-ROD(p, j) for each j = 0, 1, 2, ..., n-1
• At the end of the process the amount of work done becomes exponential
• If T(n) is the total number of calls made to CUT-ROD, then
• T(n) = 1 + T(0) + T(1) + ... + T(n-1), where the '1' corresponds to the root call
• It can be seen that T(n) = 2^n: T(0) = 1; T(1) = 1 + T(0) = 2; T(2) = 1 + T(0) + T(1)
= 2T(1) = 2 x 2; T(3) = 1 + T(0) + T(1) + T(2) = 2T(2) = 2 x 2 x 2 = 2^3
• Suppose it is true for n = k. Then for n = k+1,
T(k+1) = 1 + T(0) + T(1) + ... + T(k-1) + T(k) = 2T(k) = 2 x 2^k = 2^(k+1).
So, by induction we get T(n) = 2^n
• This algorithm can be converted into an efficient one by
using dynamic programming
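• A minimal bottom-up dynamic-programming sketch of rod cutting, which avoids the exponential recursion by solving each subproblem exactly once (function name ours; prices from the table above):

def bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)               # r[j] = max revenue for a rod of length j
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):   # first piece of length i, then the solved remainder
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(bottom_up_cut_rod(p, 4))   # 10
print(bottom_up_cut_rod(p, 10))  # 30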
THE KNAPSACK PROBLEM

• Both the Dynamic Programming and Greedy strategies exploit
optimal substructure
• But the problem is that one may be tempted to generate a
dynamic programming solution when a greedy solution is
sufficient
• Or maybe conversely
• The 0-1 knapsack problem:
• A thief robbing a store finds n items
• The ith item is worth v_i rupees and weighs w_i kg
• Here, v_i and w_i are integers
THE KNAPSACK PROBLEM

• The thief wants to take as valuable a load as possible


• He can carry at most W kg.s in his knapsack, for some integer
W
• Which items should he take?
• The problem is called the 0-1 knapsack problem, as the thief
has to either take an item (1) or not (0)
• He cannot take a fractional part of an item
• THE FRACTIONAL KNAPSACK PROBLEM
• The set up is same
• But the thief can take fractions of items
THE KNAPSACK PROBLEM CONTD…

• Both the knapsack problems exhibit the optimal-substructure property
• For the 0-1 knapsack problem:
• Consider the most valuable load weighing at most W kg
• If we remove an item j from this load, the remaining load must be
the most valuable load weighing at most W - w_j that the thief can
take from the remaining n-1 items
• Continue like this
• Fractional knapsack problem:
• Here, if we remove a weight w of an item j from the optimal load,
the remaining load must be the most valuable load weighing at
most W - w that the thief can take from the n-1 original items plus
the w_j - w kg of item j
SOLUTIONS TO THE KNAPSACK PROBLEMS

• Although the problems are similar, we can solve the
fractional knapsack problem by a greedy strategy, but we
cannot do so for the 0-1 knapsack problem
• Example: (Fractional Version)
• To solve the fractional problem, we first compute the value per kg,
v_i / w_i, for each item i
• Following the greedy strategy, the thief takes as much as
he can of the item having the greatest value per kg
• If the supply of that item is exhausted and he can carry more, he
takes as much as he can of the item with the next greatest value per kg
SOLUTIONS TO THE KNAPSACK PROBLEMS

• This strategy is continued until the knapsack holds its full W kg
• The items can first be sorted by value per kg (high to low)
• Then the selection can be done in one pass
• This way the greedy algorithm runs in O(n log n) time
• The greedy strategy does not work for the 0-1 knapsack problem:
• Example: three items, knapsack capacity W = 50 kg
• Item 1: 10 kg, Rs.60; Item 2: 20 kg, Rs.100; Item 3: 30 kg, Rs.120
• Greedy by value per kg takes item 1, then item 2, and item 3 no longer
fits: total = Rs.160
• The optimal 0-1 load is items 2 and 3: total = Rs.220
SOLUTIONS TO THE KNAPSACK PROBLEMS
CONTD…
• Fractional version of the same example:
• Take all of item 1 (10 kg, Rs.60), all of item 2 (20 kg, Rs.100), and
20 of the 30 kg of item 3 (Rs.80), filling the 50 kg knapsack
• Total = Rs.240, which is optimal
• (The other 0-1 loads are items 1 and 3 = Rs.180 and items 1 and 2 = Rs.160)
• The thief must select a subset of the three items shown whose weight
must not exceed 50 kg
• In the 0-1 problem, the optimal subset includes items 2 and 3
• Any solution with item 1 is suboptimal, even though item 1 has the
greatest value per kg
• For the fractional knapsack problem, taking the items in order of
greatest value per kg yields an optimal solution
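• A minimal Python sketch of this greedy strategy for the fractional knapsack (items as (value, weight) pairs; the function name is ours):

def fractional_knapsack(items, W):
    # Sort by value per kg, highest first: the O(n log n) step.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if W <= 0:
            break
        take = min(weight, W)          # take as much of this item as fits
        total += value * take / weight
        W -= take
    return total

items = [(60, 10), (100, 20), (120, 30)]  # the example above
print(fractional_knapsack(items, 50))     # 240.0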
A COMPARISON

• The greedy strategy does not work for the 0-1 knapsack problem
• Taking item 1 doesn't work in the 0-1 problem because the
thief is then unable to fill his knapsack to capacity
• The empty space lowers the effective value per kg of his load
• In the 0-1 problem, when we consider whether to include an
item in the knapsack, we must compare the solution to the sub-
problem that includes the item with the solution to the sub-problem
that excludes the item before we can make the choice
• THE PROBLEM FORMULATED IN THIS WAY GIVES RISE TO MANY
OVERLAPPING SUBPROBLEMS --- A HALLMARK OF DYNAMIC
PROGRAMMING
BACKTRACKING

• Backtracking is based on the observation that it is often
possible to reject a solution by looking at just a small portion of it
• For example:
• In computer science, the Boolean Satisfiability
Problem (sometimes called the Propositional Satisfiability
Problem and abbreviated as SATISFIABILITY or SAT) is
the problem of determining if there exists an interpretation
that satisfies a given Boolean formula
SAT PROBLEM

• The study of Boolean functions is generally concerned with the
set of truth assignments (assignments of 0 or 1 to each of the
variables) that make the function true

• Does there exist a truth assignment making the function true?
• (x ∨ y)(¬x ∨ ¬y) is satisfiable
• (0, 1) and (1, 0) are the solutions
• A formula such as (x ∨ y)(x ∨ ¬y)(¬x ∨ y)(¬x ∨ ¬y) is not satisfiable
SAT PROBLEM

• For example:
• If an instance of SAT contains the clause (x1 ∨ x2), then all
assignments with x1 = x2 = 0 (i.e., false) can be instantly eliminated

• By quickly checking and discrediting this partial assignment,
we are able to prune a quarter of the entire search space

• This is because (0, 0) is one of the four possible cases
{ (0, 0), (0, 1), (1, 0), (1, 1) }
BACK TRACKING

• Consider the Boolean formula φ(w, x, y, z) specified by the set of clauses
• (w ∨ x ∨ y ∨ z), (w ∨ ¬x), (x ∨ ¬y), (y ∨ ¬z), (z ∨ ¬w), (¬w ∨ ¬z)
• Backtracking shows that the formula is not satisfiable:

• w = 0 leaves (x ∨ y ∨ z), (¬x), (x ∨ ¬y), (y ∨ ¬z)
•   x = 0 leaves (y ∨ z), (¬y), (y ∨ ¬z); then y = 1 yields the empty clause ( ),
    and y = 0 leaves (z), (¬z), where both z = 0 and z = 1 yield ( )
•   x = 1 makes the clause (¬x) empty: ( )
• w = 1 leaves (x ∨ ¬y), (y ∨ ¬z), (z), (¬z); both z = 0 and z = 1 yield ( )
• Every branch ends in the empty clause, so φ is unsatisfiable
BACK TRACKING

• In the case of Boolean satisfiability, each node of the search
tree can be described either by a partial assignment or by the
clauses that remain when those values are plugged into the
original formula
• For instance,
• If w = 0 and x = 0, then any clause containing ¬w or ¬x is instantly
satisfied and can be dropped
• Any literal w or x is not satisfied and can be removed from its clause
• The "empty clause" ( ) rules out satisfiability
• Thus the nodes of the search tree, representing partial
assignments, are themselves SAT sub-problems
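• A minimal backtracking SAT sketch in Python, assuming an integer encoding of our own (+v for variable v, -v for its negation); it prunes a branch as soon as an empty clause appears, exactly as described above:

def simplify(clauses, lit):
    # Plug in `lit = true`: satisfied clauses drop out, the opposite
    # literal is removed from the remaining clauses.
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = [l for l in c if l != -lit]
        if not reduced:             # empty clause: this branch is dead
            return None
        out.append(reduced)
    return out

def sat(clauses, assignment=()):
    if not clauses:
        return assignment           # all clauses satisfied
    v = abs(clauses[0][0])          # branch on a variable from the first clause
    for lit in (v, -v):
        reduced = simplify(clauses, lit)
        if reduced is not None:
            result = sat(reduced, assignment + (lit,))
            if result is not None:
                return result
    return None                     # both branches failed: backtrack

# The formula above, with w, x, y, z encoded as 1, 2, 3, 4:
print(sat([[1, 2, 3, 4], [1, -2], [2, -3], [3, -4], [4, -1], [-1, -4]]))
# None (unsatisfiable)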
EIGHT QUEENS PROBLEM

• The eight queens puzzle is the problem of placing eight chess
queens on an 8×8 chessboard so that no two queens threaten each other.
• Thus, a solution requires that no two queens share the same
row, column, or diagonal.
• The eight queens puzzle is an example of the more
general n queens problem of placing n non-attacking queens
on an n × n chessboard, for which solutions exist for all natural
numbers n with the exception of n=2 and n=3
• Chess composer Max Bezzel published the eight queens
puzzle in 1848.
RECURSION AND BACK TRACKING

• Recursion is the key in backtracking programming

• As the name suggests, we backtrack to find the solution
• We start with one possible move out of many available moves
• We try to solve the problem with the selected move
• If we succeed, we print the solution; else we backtrack,
select some other move and try to solve it
• If none of the moves works out, we claim that there is no
solution for the problem
EIGHT QUEENS PROBLEM

• Franz Nauck published the first solutions in 1850
• Nauck also extended the puzzle to the n queens problem,
with n queens on a chess board of n × n squares
• In 1972, Edsger Dijkstra used this problem to illustrate the
power of what he called structured programming
• He published a highly detailed description of a depth-first
backtracking algorithm
• The eight queens problem can be quite computationally
expensive, as there are 4,426,165,368 (= C(64, 8)) possible
arrangements of eight queens on an 8×8 board, but only 92
solutions
EIGHT QUEENS PROBLEM- A SOLUTION
• [Board figure: one of the 92 solutions, with eight mutually non-attacking queens marked Q]
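• A minimal depth-first backtracking sketch for the n queens problem (our own illustration): queens are placed row by row, and any partial placement under attack is rejected immediately.

def solve_n_queens(n):
    solutions = []
    cols = []                      # cols[r] = column of the queen in row r

    def safe(r, c):
        for r2, c2 in enumerate(cols):
            if c2 == c or abs(c2 - c) == r - r2:   # same column or diagonal
                return False
        return True

    def place(r):
        if r == n:
            solutions.append(tuple(cols))
            return
        for c in range(n):
            if safe(r, c):
                cols.append(c)     # try this move...
                place(r + 1)
                cols.pop()         # ...and backtrack

    place(0)
    return solutions

print(len(solve_n_queens(8)))  # 92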