
7 Design Paradigms of Algorithm

by Sung-Hyuk Cha

from the CS608 Algorithms & Computing Theory Lecture Materials

In preparation
Draft Version 5.0.5
March 14, 2020

The Seidenberg School of Computer Science and Information Systems


Pace University

Published by Pace University Press

© Copyright 2018
Pace University retains the rights to this book.
You may not distribute this document without permission.
Preface

Algorithms are a fundamental subject of computer science. The subject touches all branches of computer science, such as artificial intelligence, compilers, computer networks, databases, data mining, graphics, operating systems, etc. Algorithms for the dictionary problem are fundamental to database management systems, DBMS in short. Many planning problems appear in operating systems, DBMSs, compilers, etc. One of the major applications of graph algorithms is computer networks.
Computational problems appearing in this textbook are drawn from a variety of domains such as elementary algebra, combinatorics, graph theory, logic, number theory, scheduling, set theory, etc. Discrete mathematics is a sufficient prerequisite, assuming one has familiarity with programming.
While some readers might be interested in the best or most efficient algorithms for certain computational problems from a problem-oriented point of view, the present book describes algorithms from an algorithm-design-paradigm-oriented point of view. If a reader is interested in a particular problem, the ‘Index of Computational Problems’ in the appendix on Page 736 is useful; it provides the respective page numbers of formal problem definitions and various algorithms. In this book, most algorithms are categorized into the design paradigms that best characterize them, regardless of whether they are naïve or innovative. It should be noted that page numbers with the prefix ‘S-’ refer to pages in the solution manual accompanying this book. Theorem, problem, and algorithm numbers with the prefix ‘S-’ can be found in the solution manual as well.
Rather than simply providing students with the best known algorithm for a certain problem, various algorithms are presented whenever a paradigm is applicable, so that readers can master each algorithm design paradigm. Beginners in computer science can train their algorithm design skills via trivial algorithms on elementary example problems. Graduate-level students can test their ability to apply an algorithm design paradigm to devise an efficient algorithm for intermediate or complex problems. The seven algorithm design paradigms are as follows:
       Name                                        Key idea                     Chapter
    1  inductive programming                       domino effect                2
    2  divide and conquer                          divide and conquer           3
    3  greedy algorithm                            greedy choice                4
    4  tabulation method (strong inductive         trade-off between time       5 & 6
       programming and memoization)                and space
    5  data structures (stack, queue, circular     use of tools                 7∼9
       array, priority queues, trees, etc.)
    6  reduction                                   utilization of other         10
                                                   algorithms
    7  randomized and/or approximate algorithm     randomness (probability)     12


Sequential iterative algorithms are referred to as “inductive programming” and introduced as the first algorithm design paradigm in Chapter 2; iterative programming is the more general term. The second algorithm design paradigm is the divide and conquer paradigm, covered in Chapter 3. Chapter 4 introduces the third paradigm, the greedy algorithm, which serves as an approximate algorithm when it fails to solve a problem exactly. One of the most famous algorithm design paradigms is dynamic programming. As it is a broad paradigm, it is divided across several chapters. One basic aspect of dynamic programming is called strong inductive programming in this book; it is often called bottom-up dynamic programming. Top-down dynamic programming, known as the memoization technique, is also presented in Chapters 5 and 6. Various elementary data structures are covered to design algorithms by combining them with other paradigms. When data structures are used, the computational time and space complexities are often improved. Various data structures are introduced in Chapters 7 through 9. Although the reduction concept is primarily used in complexity theory, it is presented in Chapter 10 as an algorithm design paradigm. Finally, probably correct and/or efficient, and probably approximately correct, algorithms are discussed at the end of this book. Approximate algorithms are described only briefly, since approximation heuristics are primarily covered in A.I. courses.
Which design paradigm is the best? It depends on the problem. A certain paradigm is best for some problems, while others perform better for different problems. For the sorting problem, “No single sorting algorithm is best for all use cases” [39]. Numerous algorithms are categorized as ‘simple’, ‘brute force’, ‘straightforward’, or ‘naïve’ algorithms. These are relative terms: when one paradigm performs worse than another, its algorithm may be categorized as a simple algorithm.
All problem solving begins with intuition, proceeds to evaluation, and is finally completed in efficient algorithms. To that end, a flow of algorithm design strategy is suggested in Figure 1, and the chapters are organized accordingly.
This book is organized as follows. Basic preliminary knowledge is recapitulated in Chapter 1: definitions of algorithms and computational problems are given, elementary summation concepts are reviewed, and asymptotic notations and their usage in algorithm analysis are introduced. Chapters 2 ∼ 6 and 10 provide five different algorithm design paradigms with numerous examples. Chapters 7 ∼ 9 deal with data structures and their usage in algorithms. Next, Chapter 11 deals with some aspects of the theory of computation, focusing on NP-completeness. Finally, Chapter 12 covers the final algorithm design paradigm, leading to randomized and approximate algorithms.
This book compiles lecture notes developed primarily for the course “CS608 Algorithms & Computing Theory,” given at Pace University since the Spring semester of 2012. Other courses that contributed some materials include a graduate course, “CS601 Data Structures & Algorithms,” offered in the Fall semester of 2001; an undergraduate course, “CS241 Data Structures & Algorithms,” offered in the Fall semester of 2011, the Spring semester of 2012, the Fall semester of 2012, the Spring semester of 2013, the Fall semester of 2013, the Spring semester of 2015, the Spring semester of 2016, the Fall semester of 2016, the Spring semester of 2017, and the Fall semester of 2017; and a doctorate course, “CS801 Advanced Algorithms,” offered in the Spring of 2015, Spring of 2016, Spring of 2017, and Spring of 2018. Taken together, this book is suitable as a text for both undergraduate and graduate level algorithm courses.
There are over 300 exercises for students to improve their algorithm design and analysis skills. Their difficulty ranges from basic to challenging, and some questions lead to open problems. The answers to most questions are available in the accompanying solution manual.

[Figure 1: Chapter plan. A flowchart for choosing a design paradigm for a computational problem: a divide recurrence leads to Divide & Conquer (Chap 3); a first-order linear recurrence leads to Inductive Programming (Chap 2); a greedy choice leads to the Greedy algorithm (Chap 4); other recurrences lead to Strong Inductive Programming (Chaps 5 & 6); reducibility to a known problem P leads to Reduction (Chap 10); NP-hard problems (Chap 11) lead to the Monte Carlo method (Chap 12). Data Structures (Chaps 7∼9) support both deterministic and approximate programs.]

It is recommended that students attempt the exercises without the solution manual, in order to improve their knowledge and skills.
Some historical figures are introduced with brief biographical statements for inspirational purposes. Photos of famous mathematicians and computer scientists are also included as footnotes.

Acknowledgments
I would like to thank many students who have provided me with their valuable feedback
and comments. To list a few, they are Yousef Alhwaiti, Rania Almajalid, Teresa Brooks,
Daniel Evans, Yu Hou, Abu Kamruzzaman, Sukun Li, Yaobin Liang, Haitao Luo, Kenneth
Mann, and Lewis E. Westfall.
Although citations to Wikipedia and Wolfram MathWorld are omitted, they provided an enormous amount of information, insight, and direction.
Contents

1 Introduction 1
1.1 Formulating Computational Problems . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Least Common Multiple . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Volume of a Frustum . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Euclid’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.1 Closed Form of Summation . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.2 Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Computational Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.1 Asymptotic Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.2 Analysis of Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.3 Search Unsorted List . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.4.4 Integer Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.4.5 Maximum Contiguous Subsequence Sum . . . . . . . . . . . . . . . . . 22
1.4.6 Analysis of Algorithms with Logarithms . . . . . . . . . . . . . . . . . 24
1.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2 Recursive and Inductive Programming 35


2.1 Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.1.1 Recursive Programming . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.1.2 Types of Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.1.3 Closed Form of a First Order Linear Recursion . . . . . . . . . . . . . 38
2.2 Inductive Programming on Arithmetic Problems . . . . . . . . . . . . . . . . 41
2.2.1 Summation Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.2.2 Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.3 n-digit Long Integer Addition . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.4 n-digit Long Integer Multiplication . . . . . . . . . . . . . . . . . . . . 44
2.2.5 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.2.6 Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.2.7 Factorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.2.8 k-Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.2.9 Fermat Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.3 Problems on List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.3.1 Prefix Sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.3.2 Number of Ascents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54


2.3.3 Element Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56


2.3.4 Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.3.5 Order Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.3.6 Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.3.7 Alternating Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.3.8 Random Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.3.9 Palindrome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.4 Linked list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.4.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.4.2 Array vs. Linked List . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.4.3 Insertion Sort with a Linked List . . . . . . . . . . . . . . . . . . . . . 73
2.5 Iterative Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.5.1 Bubble Sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.5.2 Tail Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.5.3 Quotient and Remainder . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.5.4 Square root of n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.5.5 Lexicographical Order . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.6 Theorem Proving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

3 Divide and Conquer 91


3.1 Dichotomic Divide and Conquer . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.1.1 Finding Min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.1.2 Number of Ascents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.1.3 Alternating Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.1.4 Merge Sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.1.5 Maximum Contiguous Sub-sequence Sum . . . . . . . . . . . . . . . . 99
3.1.6 Search Unsorted List . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.1.7 Palindrome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.1.8 Checking Up-Down Sequence . . . . . . . . . . . . . . . . . . . . . . . 103
3.2 Bisection With a Single Branch Call . . . . . . . . . . . . . . . . . . . . . . . 104
3.2.1 Binary Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.2.2 Bisection Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.2.3 Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.2.4 Modulo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.2.5 Quotient and Remainder Problem . . . . . . . . . . . . . . . . . . . . 112
3.3 Beyond Bisection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.3.1 Logarithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.3.2 Master Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.3.3 n-digit Long Integer Multiplication . . . . . . . . . . . . . . . . . . . . 116
3.3.4 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.4 Beyond the Master Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.4.1 Checking Greater Between Elements Sequence . . . . . . . . . . . . . 120
3.4.2 n-digit Long Integer Addition . . . . . . . . . . . . . . . . . . . . . . . 122
3.4.3 n-digit Long Integer by a Single Digit Multiplication . . . . . . . . . . 123
3.5 Iterative Divide and Conquer . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
3.5.1 Logarithm Base b . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
3.5.2 Radix r Number System . . . . . . . . . . . . . . . . . . . . . . . . . . 126

3.5.3 Merge Sort II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


3.6 Partition and Conquer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.6.1 Bit Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.6.2 Radix-select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.6.3 Radix-sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
3.6.4 Stable Counting Sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
3.7 General Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
3.7.1 Drawing a Perfect Binary Tree . . . . . . . . . . . . . . . . . . . . . . 137
3.7.2 Drawing a Fibonacci Tree . . . . . . . . . . . . . . . . . . . . . . . . . 139
3.7.3 Truth Table Construction . . . . . . . . . . . . . . . . . . . . . . . . . 140
3.7.4 Scheduling a Round Robin Tennis Tournament . . . . . . . . . . . . . 140
3.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

4 Greedy Algorithm 153


4.1 Problems on List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.1.1 Order Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.1.2 Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.1.3 Alternating Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.2 Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.2.1 Select the k Subset Sum Maximization . . . . . . . . . . . . . . . . . . 157
4.2.2 Postage Stamp Minimization Problem . . . . . . . . . . . . . . . . . . 158
4.2.3 0-1 Knapsack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.2.4 Fractional Knapsack . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.2.5 Unbounded integer knapsack . . . . . . . . . . . . . . . . . . . . . . . 166
4.2.6 Rod cutting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.2.7 Classical Optimization and Related Problems . . . . . . . . . . . . . . 168
4.3 Scheduling Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4.3.1 Activity Selection Problem . . . . . . . . . . . . . . . . . . . . . . . . 169
4.3.2 Task Distribution with a Minimum Number of Processors . . . . . . . 171
4.3.3 Multiprocessor Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . 173
4.3.4 Bin Packing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.3.5 Job Scheduling with Deadline . . . . . . . . . . . . . . . . . . . . . . . 177
4.4 Graph Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
4.4.1 Graph Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4.4.2 Vertex Cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4.4.3 Minimum Spanning Tree . . . . . . . . . . . . . . . . . . . . . . . . . . 181
4.4.4 Shortest Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
4.4.5 Traveling Salesman Problem . . . . . . . . . . . . . . . . . . . . . . . 190
4.5 Minimum Length Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
4.5.1 Huffman Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
4.5.2 Minimum Length r-ary Code . . . . . . . . . . . . . . . . . . . . . . . 197
4.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

5 Tabulation - Strong Induction 215


5.1 Strong Inductive Programming . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.1.1 Prime Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.2 Stamp Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.2.1 3-5 Stamp Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220

5.2.2 Postage stamp minimization problems . . . . . . . . . . . . . . . . . . 221


5.2.3 Unbounded Subset Sum Equality . . . . . . . . . . . . . . . . . . . . . 225
5.2.4 Unbounded Subset Sum Minimization . . . . . . . . . . . . . . . . . . 227
5.3 More Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.3.1 Rod Cutting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.3.2 Unbounded Integer Knapsack Problem . . . . . . . . . . . . . . . . . . 229
5.3.3 Weighted Activity Selection Problem . . . . . . . . . . . . . . . . . . . 231
5.4 Proving Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5.4.1 Divide Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . . 233
5.4.2 Complete Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . 235
5.4.3 Euler Zigzag Number . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.5 Memoization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.5.1 Winning Ways Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.5.2 Divide Recurrence Relations . . . . . . . . . . . . . . . . . . . . . . . . 242
5.5.3 Linear Divide Recurrence Relations . . . . . . . . . . . . . . . . . . . 244
5.6 Fibonacci and Lucas Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.6.1 Fibonacci . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.6.2 Kibonacci Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.6.3 Lucas Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.6.4 Memoization in Divide & Conquer . . . . . . . . . . . . . . . . . . . . 251
5.7 Directed Acyclic Graph Problems . . . . . . . . . . . . . . . . . . . . . . . . . 254
5.7.1 Topological Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
5.7.2 Number of paths problem . . . . . . . . . . . . . . . . . . . . . . . . . 257
5.7.3 Shortest Path Length in DAG . . . . . . . . . . . . . . . . . . . . . . . 259
5.7.4 Shortest Path Cost in Weighted DAG . . . . . . . . . . . . . . . . . . 261
5.7.5 Minimum Spanning Rooted Tree . . . . . . . . . . . . . . . . . . . . . 263
5.7.6 Critical Path Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268

6 Higher Dimensional Tabulation 293


6.1 Two Dimensional Strong Inductive Programming . . . . . . . . . . . . . . . . 293
6.1.1 Prefix Sum of Two Dimensional Array . . . . . . . . . . . . . . . . . . 293
6.1.2 Stamp Combinatorics . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.1.3 Postage Stamp Equality Minimization Problem . . . . . . . . . . . . . 298
6.1.4 0-1 Knapsack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
6.1.5 Unbounded Integer Knapsack . . . . . . . . . . . . . . . . . . . . . . . 303
6.1.6 Subset sum equality problem . . . . . . . . . . . . . . . . . . . . . . . 305
6.1.7 Unbounded Subset Product Equality Problem . . . . . . . . . . . . . . 307
6.2 Problems on Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.2.1 Longest Common Sub-sequence . . . . . . . . . . . . . . . . . . . . . . 309
6.2.2 String Edit Distance: InDel . . . . . . . . . . . . . . . . . . . . . . . . 312
6.2.3 Levenshtein Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
6.2.4 Longest Palindromic Sub-sequence . . . . . . . . . . . . . . . . . . . . 316
6.3 Problems on Combinatorics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6.3.1 Binomial Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6.3.2 Lucas Sequence Coefficient . . . . . . . . . . . . . . . . . . . . . . . . 323
6.3.3 Integer Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
6.3.4 Twelve Fold Ways of Combinatorics . . . . . . . . . . . . . . . . . . . 329

6.3.5 Integer Partition with at Most k Parts . . . . . . . . . . . . . . . . . . 329


6.4 Three Dimensional Tabulation . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.4.1 01-Knapsack with Two Constraints . . . . . . . . . . . . . . . . . . . . 334
6.4.2 Bounded Integer Partition . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340

7 Stack and Queue 355


7.1 Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7.1.1 Balancing Parenthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
7.1.2 Undo and Redo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
7.1.3 Procedure Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
7.1.4 NFA Acceptance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
7.2 Propositional Logic with Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . 364
7.2.1 Infix, Prefix, and Postfix Notations . . . . . . . . . . . . . . . . . . . . 365
7.2.2 Evaluating Postfix Expression . . . . . . . . . . . . . . . . . . . . . . . 367
7.2.3 Prefix Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
7.2.4 Infix Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
7.3 Graph Problems with Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
7.3.1 Depth First Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
7.3.2 Graph Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
7.3.3 Cycle Detection in Ungraphs . . . . . . . . . . . . . . . . . . . . . . . 377
7.3.4 Cycle Detection in Digraphs . . . . . . . . . . . . . . . . . . . . . . . . 379
7.3.5 Topological Sorting with a Stack . . . . . . . . . . . . . . . . . . . . . 381
7.4 Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
7.4.1 Breadth First Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.4.2 Shortest Path Length . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
7.4.3 Topological Sorting With a Queue . . . . . . . . . . . . . . . . . . . . 389
7.4.4 Huffman Code Using a Queue . . . . . . . . . . . . . . . . . . . . . . . 391
7.5 Circular Array in Strong Inductive Programming . . . . . . . . . . . . . . . . 394
7.5.1 Kibonacci . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
7.5.2 Postage Stamp Equality Minimization . . . . . . . . . . . . . . . . . . 399
7.5.3 Bouncing Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
7.6 Cylindrical Two Dimensional Array . . . . . . . . . . . . . . . . . . . . . . . . 402
7.6.1 0-1 Knapsack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
7.6.2 Ways of Stamping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
7.7 Rolling Cylinder on Triangle Tables . . . . . . . . . . . . . . . . . . . . . . . 407
7.7.1 Binomial Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
7.7.2 Stirling Numbers of the Second Kind . . . . . . . . . . . . . . . . . . . 408
7.7.3 Bell Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
7.7.4 Set Partition Number of at Most k Partitions . . . . . . . . . . . . . . 412
7.7.5 Set Partition Number of at Least k Partitions . . . . . . . . . . . . . . 413
7.7.6 Integer partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
7.7.7 Hopping Cylindrical Array . . . . . . . . . . . . . . . . . . . . . . . . 416
7.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417

8 Tree Data Structures for Dictionary 435


8.1 Binary Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.1.2 Depth and Height . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
8.1.3 Number of Rooted Binary Trees . . . . . . . . . . . . . . . . . . . . . 438
8.2 Binary Search Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
8.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
8.2.2 Search in BST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
8.2.3 Insertion in BST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
8.2.4 Min and Max in BST . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
8.2.5 Deletion in BST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
8.2.6 Average Case Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
8.3 AVL Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
8.3.1 Height Balanced Binary Tree . . . . . . . . . . . . . . . . . . . . . . . 447
8.3.2 Insertion in AVL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
8.3.3 Deletion in AVL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
8.4 2-3 Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
8.4.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
8.4.2 Insertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
8.4.3 Deletion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
8.5 B Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
8.5.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
8.5.2 Insertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
8.5.3 Deletion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
8.6 B+ Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
8.6.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
8.6.2 Insertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
8.6.3 Deletion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
8.7 Skip List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
8.7.1 Insertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
8.7.2 Deletion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
8.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489

9 Priority Queue 499


9.1 Heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
9.1.1 Complete Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
9.1.2 Heap Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
9.1.3 Insertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
9.1.4 Delete Min/Max Operation . . . . . . . . . . . . . . . . . . . . . . . . 506
9.1.5 Constructing a Heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.2 Problems on Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
9.2.1 Heapselect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
9.2.2 Heapsort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
9.2.3 Alternating Permutation Problem . . . . . . . . . . . . . . . . . . . . 515
9.3 Greedy Algorithms with Heap . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
9.3.1 Fractional Knapsack Problem . . . . . . . . . . . . . . . . . . . . . . . 518
9.3.2 Activity Selection Problem . . . . . . . . . . . . . . . . . . . . . . . . 520
9.3.3 Huffman Code with Heaps . . . . . . . . . . . . . . . . . . . . . . . . . 521

9.4 Min-max Heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523


9.4.1 Checking Min-max Heap . . . . . . . . . . . . . . . . . . . . . . . . . . 523
9.4.2 Insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
9.4.3 Delete min and Delete max . . . . . . . . . . . . . . . . . . . . . . . . 527
9.4.4 Construct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
9.5 Leftist Heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.5.1 Definition of a Leftist Heap . . . . . . . . . . . . . . . . . . . . . . . . 532
9.5.2 Merge Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
9.5.3 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
9.6 AVL Tree as a Priority Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
9.6.1 AVL Select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
9.7 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
9.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546

10 Reduction 555
10.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
10.2 Reduction: px ≤p py . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
10.3 Dual problem: px ≡p py . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
10.3.1 GBW ≡p LBW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
10.4 Reduction to Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
10.4.1 Order Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
10.4.2 Alternating Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . 559
10.4.3 Element Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
10.4.4 Random Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
10.4.5 Sorting-complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
10.5 Reduction to Graph Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
10.5.1 NPP-complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
10.5.2 More Sorting Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 569
10.5.3 Critical Path Problem to Longest Path Cost Problem . . . . . . . . . 571
10.5.4 Longest Increasing Sub-sequence to LPL . . . . . . . . . . . . . . . . . 572
10.5.5 Activity Selection Problem to LPC . . . . . . . . . . . . . . . . . . . . 574
10.5.6 String Matching to LPC . . . . . . . . . . . . . . . . . . . . . . . . . . 575
10.5.7 SPC-complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
10.6 Proving Lower Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
10.6.1 Lower Bound for Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . 579
10.6.2 Lower Bound for AVL Tree Construction . . . . . . . . . . . . . . . . 579
10.6.3 Convex Hull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
10.7 Multi-reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
10.7.1 Combinatorics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
10.7.2 Integer Partition Problem . . . . . . . . . . . . . . . . . . . . . . . . . 587
10.7.3 At Most or At Least Combinatorics . . . . . . . . . . . . . . . . . . . 589
10.7.4 Reduction Relations in Fibonacci Related Problems . . . . . . . . . . 595
10.8 Combining with Strong Inductive Programming . . . . . . . . . . . . . . . . . 598
10.8.1 Maximum Prefix Sum . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
10.8.2 Longest Increasing Sub-sequence . . . . . . . . . . . . . . . . . . . . . 599
10.8.3 Longest Alternating Sub-sequence . . . . . . . . . . . . . . . . . . . . 600
10.8.4 Longest Palindromic Consecutive Sub-sequence . . . . . . . . . . . . . 602
10.9 Consecutive Sub-sequence Arithmetic Problems . . . . . . . . . . . . . . . . . 604

10.9.1 minCSS ≤p MCSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604


10.9.2 minCSPp ≤p MCSPp . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
10.9.3 MCSPp ≤p MCSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
10.9.4 Kadane’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
10.10 Reduction to Matrix Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 609
10.10.1 Fibonacci . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
10.10.2 Kibonacci . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
10.10.3 Number of Path Problem . . . . . . . . . . . . . . . . . . . . . . . . . 612
10.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614

11 NP-complete 635
11.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
11.1.1 Computability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
11.1.2 Tractability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
11.1.3 Non-deterministic in Polynomial Time . . . . . . . . . . . . . . . . . . 636
11.1.4 P vs. NP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
11.1.5 NP-complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
11.1.6 NP-hard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
11.2 NP-complete Logic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
11.2.1 Combinational Circuit Satisfiability . . . . . . . . . . . . . . . . . . . 641
11.2.2 Satisfiability of CNF-3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
11.2.3 Satisfiability of CNF . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
11.2.4 NAND Gate Only Circuit Satisfiability . . . . . . . . . . . . . . . . . 654
11.3 NP-complete Set Theory Problems . . . . . . . . . . . . . . . . . . . . . . . . 655
11.3.1 Subset Sum Equality Problem . . . . . . . . . . . . . . . . . . . . . . 655
11.3.2 Unbounded Subset Sum Problem . . . . . . . . . . . . . . . . . . . . . 658
11.3.3 Set Partition Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
11.4 NP-hard Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . 660
11.4.1 Subset Sum Maximization Problem . . . . . . . . . . . . . . . . . . . . 661
11.4.2 Subset Sum Minimization Problem . . . . . . . . . . . . . . . . . . . . 661
11.4.3 01-Knapsack Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
11.5 NP-complete Scheduling Problems . . . . . . . . . . . . . . . . . . . . . . . . 664
11.5.1 Bin Packing Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
11.5.2 Multiprocessor Scheduling Problem . . . . . . . . . . . . . . . . . . . . 666
11.6 NP-hard Graph Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
11.6.1 Clique Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
11.6.2 Independent Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
11.6.3 Vertex Cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
11.6.4 Set Cover Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
11.6.5 Hamiltonian Path and Cycle . . . . . . . . . . . . . . . . . . . . . . . 676
11.6.6 Traveling Salesman Problem . . . . . . . . . . . . . . . . . . . . . . . 678
11.7 co-NP-complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
11.7.1 Fallacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
11.7.2 Tautology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
11.7.3 LEQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
11.7.4 CNF vs. DNF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
11.7.5 Frobenius Postage Stamp . . . . . . . . . . . . . . . . . . . . . . . . . 684
11.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685

12 Randomized and Approximate Algorithms 695


12.1 Las Vegas Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
12.1.1 Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
12.1.2 Quicksort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
12.1.3 Quickselect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
12.1.4 Random Permutation by Riffling . . . . . . . . . . . . . . . . . . . . . 703
12.1.5 Random Alternating Permutation . . . . . . . . . . . . . . . . . . . . 705
12.2 Monte Carlo Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
12.2.1 Top m Percent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
12.2.2 Primality Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
12.3 Approximate Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
12.3.1 Subset Sum Maximization . . . . . . . . . . . . . . . . . . . . . . . . . 714
12.3.2 Vertex Cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
12.3.3 Metric Traveling Salesman Problem . . . . . . . . . . . . . . . . . . . 716
12.3.4 Probably Approximately Correct . . . . . . . . . . . . . . . . . . . . . 718
12.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718

Bibliography 724

Index of Computational Problems 736

List of Abbreviations 768

List of Symbols and Notations 774


Chapter 1

Introduction

Figure 1.1: Rubik’s Cube puzzle.

An algorithm is a well-defined computational procedure that takes some value, or set of values, as its input and produces some value, or set of values, as its output. The term ‘algorithm’ originates from al-Khwārizmī, a mathematician who provided systematic approaches to solving linear and quadratic equations [15, p 10].
A systematic set of instructions to solve a Rubik’s Cube is an algorithm for solving the Rubik’s Cube problem. When one follows these step-by-step instructions, one should be able to correctly transform an arbitrarily shuffled cube into a cube where each side is unicolor, as depicted in Figure 1.1. Numerous algorithms exist for solving this puzzle correctly (see [178] for an extensive list). Which one is the best? Winners of various Rubik’s Cube speed-based contests often confess, “I was lucky.” It really depends on the given input. Hence, contest organizers also award a prize for the best average completion time. As such, algorithms need to be analyzed by considering the best, worst, and average cases.
There are five standard criteria that designers of algorithms uphold: Accurate, Efficient, Innovative, Optimal, and User-friendly. First, one must ensure the accuracy of an algorithm, such that the desired output value(s) are produced correctly given any arbitrary input value(s); thus, we must prove the correctness of a proposed algorithm. Second, when two or more algorithms are presented for a problem, which is the best? The most efficient algorithm is preferred. In order to compare degrees of efficiency, we must analyze the computational

Muḥammad ibn Mūsā al-Khwārizmī (780–850) was a Persian mathematician, astronomer, and geographer in the House of Wisdom in Baghdad. He was referred to as the ‘inventor of algebra’ in Renaissance Europe. One of his major contributions is ‘The Compendious Book on Calculation by Completion and Balancing’.
© Portrait is in public domain.

complexity of each algorithm. Third, many problems require innovative approaches. Numerous examples of innovative algorithms, which creatively manipulate basic algorithm design paradigms, will appear throughout this book. Next, one must strive to come up with the optimal algorithm, which outperforms all other viable algorithms. This requires proving the lower bound of the computational complexity of the problem. Finally, the ideal algorithm must be user-friendly: any user should be able to follow the instructions to find the desired outputs, and the written algorithm should be easily transferable to any conventional programming language. For this reason, pseudo code shall be used throughout this book.
The objectives of this chapter are as follows. First, readers must understand the definitions of algorithms and computational problems; formulating a computational problem before devising a suitable algorithm is extremely important. The importance of proving the correctness of an algorithm is stressed. Elementary summation concepts are reviewed as basic preliminary knowledge. Finally, asymptotic notations and their usage in algorithm analysis are introduced.

1.1 Formulating Computational Problems


Algorithms solve computational problems by transforming an input into an output.
Hence, when faced with a problem described informally or by examples, computer scientists
formally define the computational problem in terms of input and output.
Suppose a biologist wishes to find occurrences of a particular DNA segment in a long
DNA sequence. This particular problem can be generalized as a string matching problem
where a string is a sequence of elements. A string S of length n is denoted as S1∼n . The
lower case symbol, si , is used with the index as a subscription to denote the ith element in
the string S. Now the input of the problem can be presented as two strings: S1∼m and a
query string Q1∼n where m > n > 0. A computational problem must consist of an input
and an output. Figure 1.2 provides a sample example where three occurrences of the query

    Q = TAGCAT
    S = ACTAGCATAGCATACTAGCATTT
    (Q occurs in S beginning at positions 3, 8, and 16; the first two occurrences overlap.)

Figure 1.2: DNA string matching example.

The set-builder notation is helpful when formally defining the output of the DNA sequence matching problem as an instance of the exact string matching problem, which is formally defined as follows:

Problem 1.1. Exact string matching (DNA)

Input: Two strings S1∼m and Q1∼n whose elements si, qj ∈ {A, C, G, T}, where m > n > 0
Output: every i such that the sub-string Si∼i+n−1 matches Q1∼n, i.e.,
    {i ∈ {1, · · · , m − n + 1} | ∀ j ∈ {1, · · · , n}, (si+j−1 = qj)}

The symbol ‘∀’ is the universal quantifier and reads “for all” or “for each.” An extensive list of algorithms for the string matching problem can be found in [166].
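As a concrete illustration of Problem 1.1, the following is a minimal Python sketch of the naïve matching approach implied by the set-builder definition above; the function name and the 1-based output convention are choices made here for illustration, not part of the book.

    def exact_matches(S: str, Q: str) -> list[int]:
        """Return all 1-based positions i where S[i..i+n-1] equals Q,
        directly mirroring the set-builder definition in Problem 1.1."""
        m, n = len(S), len(Q)
        return [i + 1                      # convert 0-based offset to 1-based index
                for i in range(m - n + 1)
                if all(S[i + j] == Q[j] for j in range(n))]

    print(exact_matches("ACTAGCATAGCATACTAGCATTT", "TAGCAT"))  # [3, 8, 16]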

(a) word list: Bayes, Euclid, Euler, Gauss, Newton, Pascal, Turing

(b) 2 dimensional array of letters:

       1 2 3 4 5 6 7
    1  s d b a y e s
    2  g n i r u t l
    3  a n e l e a b
    4  u w e w c a s
    5  s r u s t u s
    6  s a a l a o e
    7  g p a y a b n

(c) output:

    word      position   direction
    Bayes     (1,3)      e  →
    Euler     (1,6)      sw ↙
    Euclid    (6,7)      nw ↖
    Gauss     (2,1)      s  ↓
    Newton    (2,2)      se ↘
    Pascal    (7,2)      ne ↗
    Turing    (2,6)      w  ←

Figure 1.3: Word search puzzle example.

As another example of a computational problem, consider the popular word search puzzle, hereafter referred to as WSP. The aim is to locate a set of given words in a two-dimensional grid of letters, each word running in one of eight possible directions: horizontally, vertically, or diagonally. Before attempting to formulate the puzzle as a computational problem, it is helpful to consider a toy example such as the one in Figure 1.3 (a) and (b). There are two inputs in the WSP problem. The first one is a list of n words, W1∼n, as given in Figure 1.3 (a). Let |wi| be the size of the ith word. The second input is an a × b grid of alphabetical letters, T, as shown in Figure 1.3 (b). Let Ti,j denote the cell located in the ith row and jth column of the table T. Many valid arrangements of the output exist; one possible output representation is given in Figure 1.3 (c). Although at times lengthy, it is often desirable to formulate a problem with as much detail as possible. WSP can be defined as follows:

Problem 1.2. Word search puzzle

Input: a word list W1∼n where wi ∈ English dictionary, and
an (a × b) table T whose elements Ti,j ∈ alphabet
Output: a list of positions P1∼n where pi = (xi, yi), and
a list of directions D1∼n where di ∈ {e, sw, nw, n, s, se, ne, w} such that
    ∀k ∈ {0 ∼ |wi| − 1} (Txi−k, yi   = wk+1)   if di = ‘n’
    ∀k ∈ {0 ∼ |wi| − 1} (Txi−k, yi+k = wk+1)   if di = ‘ne’
    ∀k ∈ {0 ∼ |wi| − 1} (Txi, yi+k   = wk+1)   if di = ‘e’
    ∀k ∈ {0 ∼ |wi| − 1} (Txi+k, yi+k = wk+1)   if di = ‘se’
    ∀k ∈ {0 ∼ |wi| − 1} (Txi+k, yi   = wk+1)   if di = ‘s’
    ∀k ∈ {0 ∼ |wi| − 1} (Txi+k, yi−k = wk+1)   if di = ‘sw’
    ∀k ∈ {0 ∼ |wi| − 1} (Txi, yi−k   = wk+1)   if di = ‘w’
    ∀k ∈ {0 ∼ |wi| − 1} (Txi−k, yi−k = wk+1)   if di = ‘nw’
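The eight per-direction conditions above collapse into a single rule if each direction is encoded as a (row, column) step vector. The following Python sketch checks one word placement this way; the table of step vectors and the function name are illustrative choices made here, not the book's notation.

    # (row, col) step for each of the eight directions in Problem 1.2
    STEP = {'n': (-1, 0), 'ne': (-1, 1), 'e': (0, 1), 'se': (1, 1),
            's': (1, 0),  'sw': (1, -1), 'w': (0, -1), 'nw': (-1, -1)}

    def placed_at(T, word, pos, d):
        """Check that `word` occupies grid T starting at 1-based `pos`
        running in direction `d`: T[x + k*dx][y + k*dy] == word[k]."""
        (x, y), (dx, dy) = pos, STEP[d]
        for k, ch in enumerate(word):
            r, c = x - 1 + k * dx, y - 1 + k * dy   # convert to 0-based indices
            if not (0 <= r < len(T) and 0 <= c < len(T[0])) or T[r][c] != ch:
                return False
        return True

    # grid from Figure 1.3 (b)
    T = ["sdbayes", "gnirutl", "aneleab", "uwewcas",
         "srustus", "saalaoe", "gpayabn"]
    print(placed_at(T, "bayes", (1, 3), 'e'))    # True
    print(placed_at(T, "turing", (2, 6), 'w'))   # True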

1.1.1 Least Common Multiple


The least common multiple, or simply LCM, of two positive integers is the smallest positive integer that is a multiple of both, i.e., divisible by both numbers. For example, LCM(6, 9) = 18. The problem of finding the LCM of two positive integers can be defined as an optimization problem.

Problem 1.3. Least common multiple

Input: Two positive integers a and b ∈ Z+
Output: m × a such that

    minimize   m
    subject to m × a = n × b                                    (1.1)
    where m, n ≥ 1 are integers

Note that Z+ denotes the set of all positive integers. A well-defined problem is often
helpful for deriving a straightforward algorithm. In equation (1.1), m × a is divisible by b
since m × a = n × b. To find the lowest possible value of m, start with m = 1 and increment
m by one until m × a becomes divisible by b. A pseudo code is provided below:
Algorithm 1.1. Naïve LCM

LCM(a, b)
  L = max(a, b) .......................................... 1
  S = min(a, b) .......................................... 2
  O = L .................................................. 3
  while O % S ≠ 0, (i.e., S ∤ O) ......................... 4
      O = O + L .......................................... 5
  return O ............................................... 6
The symbol ‘=’ in lines 2, 3, and 5 means assignment unless it is used in the conditional
statements such as if or while statements.
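For readers who want to run Algorithm 1.1 directly, here is a straightforward Python transcription; this is a sketch for experimentation, as the book itself uses only pseudo code.

    def lcm_naive(a: int, b: int) -> int:
        """Algorithm 1.1: step through multiples of the larger input
        until one is divisible by the smaller input."""
        L, S = max(a, b), min(a, b)
        O = L
        while O % S != 0:   # S does not divide O yet
            O += L
        return O

    print(lcm_naive(6, 9))  # 18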
Input representation is instrumental to designing an algorithm. Unless otherwise stated, the default numerical representation is the conventional decimal number, also known as the Hindu-Arabic numeral system. If the Roman numeral system were to be used, many arithmetic problems such as addition and multiplication would become extremely difficult to compute.

              Hindu-Arabic    Roman numeral
    a =       119             CXIX
    b =       42              XLII
    a + b =   161             CLXI
    a × b =   4998            MMMMCMXCVIII
According to the fundamental theorem of arithmetic [146, p 155], every positive integer greater than 1 can be written uniquely as a prime or as the product of two or more primes. This principle is often called the unique-prime-factorization theorem. If the input numbers are represented by their respective prime factors, many basic arithmetic problems such as multiplication and finding the least common multiple (LCM) or greatest common divisor (GCD) can be easily computed.

                   Decimal numbers    Prime factors
    a =            2205               2^0 × 3^2 × 5^1 × 7^2 × 11^0
    b =            1750               2^1 × 3^0 × 5^3 × 7^1 × 11^0
    a × b =        3858750            2^1 × 3^2 × 5^4 × 7^3 × 11^0    (add exponents)
    lcm(a, b) =    110250             2^1 × 3^2 × 5^3 × 7^2 × 11^0    (max exponents)
    gcd(a, b) =    35                 2^0 × 3^0 × 5^1 × 7^1 × 11^0    (min exponents)

The product of two numbers may be calculated by simply adding the respective exponent values of each unique prime factor. The LCM is determined by selecting the higher exponent value, and the GCD is computed by selecting the lower exponent value, for each unique prime factor. The performance of computing problems varies depending on how the input data is represented and organized. This issue shall be dealt with in detail in subsequent chapters. The lesson to be learned here is that defining a problem as formally as possible in terms of input and output is important.
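As a small sketch of this representation idea, the exponent rules above translate directly into code; the helper functions below are choices made here for illustration (trial-division factoring), not the book's algorithms.

    from collections import Counter

    def prime_factors(n: int) -> Counter:
        """Map each prime factor of n to its exponent (trial division)."""
        f, d = Counter(), 2
        while d * d <= n:
            while n % d == 0:
                f[d] += 1
                n //= d
            d += 1
        if n > 1:
            f[n] += 1
        return f

    def lcm_by_factors(a: int, b: int) -> int:
        fa, fb = prime_factors(a), prime_factors(b)
        result = 1
        for p in fa.keys() | fb.keys():
            result *= p ** max(fa[p], fb[p])   # max exponent for LCM (min gives GCD)
        return result

    print(lcm_by_factors(2205, 1750))  # 110250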

1.2 Correctness
The foremost criterion of algorithm design is ‘accuracy.’ An algorithm is only acceptable
if it is correct. When an algorithm is devised, it must be proven correct. All algorithms
must be vetted for accuracy before the coding stage. “The burden of proof lies with the
person claiming to have an algorithm” [2, p 2].

1.2.1 Volume of a Frustum


Consider the problem of finding the volume of a truncated pyramid, also known as a
frustum, formulated as follows:

Problem 1.4. Finding the volume of a frustum

Input: a, b, and h ∈ R+ (top side length a, base side length b, and height h)
Output: the volume of the frustum

R+ denotes the set of all positive real numbers. (The figure accompanying this problem shows an example frustum with a = 2, b = 4, and h = 6.)

A step-by-step set of instructions for computing the volume of a frustum appears in the Moscow Mathematical Papyrus [75, p 10], which dates to approximately 1850 BC [35].

    Step   Moscow Papyrus                                           General
    1      You are to square the 4, result 16.                      S1 = b^2
    2      You are to double 4, result 8.                           S2 = a × b
    3      You are to square this 2, result 4.                      S3 = a^2
    4      You are to add the 16 and the 8 and the 4, result 28.    S4 = S1 + S2 + S3
    5      You are to take one-third of 6, result 2.                S5 = (1/3)h
    6      You are to take 28 twice, result 56.                     V = S4 × S5

This step-by-step guide to converting a set of inputs into the desired output is, however, not considered to be one of the oldest algorithms, for two reasons. First, an algorithm must be applicable when generalized for any input, but the one given in the Moscow Papyrus applies only to a particular case where a = 2, b = 4, and h = 6. A pseudo code for the general case can be stated as follows:

Algorithm 1.2. Algebraic formula for the volume of a frustum

volume of frustum(a, b, h)
S1 = b × b . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
S2 = a × b . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
S3 = a × a . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
S4 = S1 + S2 + S3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
S5 = h/3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
return S4 × S5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Algorithm 1.2 may be restated as a simple formula, as in eqn (1.2).

    volume of frustum(a, b, h) = (1/3) h (a^2 + ab + b^2)               (1.2)
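A quick numerical cross-check of eqn (1.2) against the “full pyramid minus top pyramid” derivation used in the correctness proof below; this is a sketch, and the variable names are chosen here for illustration.

    def frustum_volume(a: float, b: float, h: float) -> float:
        """Eqn (1.2): V = (1/3) * h * (a^2 + a*b + b^2)."""
        return h * (a * a + a * b + b * b) / 3

    # Cross-check: similar triangles give H = b*h/(b - a),
    # and a square pyramid has volume Py(s, H) = s^2 * H / 3.
    a, b, h = 2, 4, 6
    H = b * h / (b - a)
    check = (b * b * H - a * a * (H - h)) / 3
    print(frustum_volume(a, b, h), check)  # 56.0 56.0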
Second, no correctness proof has been found for the frustum volume problem. A frustum is formed from a pyramid with base length and height (b, H), from which a smaller pyramid with base length and height (a, H − h) is excised. Making use of the fact that a frustum is a truncated pyramid, one possible correctness proof is depicted in Figure 1.4 (a). We assume that we already know that the volume of a pyramid with base length and height (b, H) is Hb^2/3.

[Figure 1.4: Proving the frustum volume formula. (a) full pyramid − top pyramid = frustum: Py(b, H) − Py(a, H − h) = Fr(a, b, h); (b) similar triangles: ∆(H, b) ∼ ∆(H − h, a).]

Theorem 1.1. Algorithm 1.2 correctly finds the volume of a frustum.

Proof. Although the height of the large pyramid, H, is unknown, a relationship between H and h can be derived since the triangles ∆(H, b) and ∆(H − h, a) are similar, as shown in Figure 1.4 (b). The symbol ‘∼’ signifies the similarity relation between two geometrical objects.

    H/b = (H − h)/a
    (b − a)H = bh                                                       (1.3)
    b^2 H = abH + b^2 h                                                 (1.4)

    volume of frustum(a, b, h) = Hb^2/3 − (H − h)a^2/3
                               = (1/3)(Hb^2 − Ha^2 + ha^2)
                               = (1/3)(abH + b^2 h − Ha^2 + ha^2)    by eqn (1.4)
                               = (1/3)(aH(b − a) + b^2 h + ha^2)
                               = (1/3)(abh + b^2 h + ha^2)           by eqn (1.3)
                               = (1/3) h (a^2 + ab + b^2)            ∎

1.2.2 Euclid’s Algorithm


The “first algorithm in history” is attributed to Euclid, who devised an efficient algorithm, along with its correctness proof, for the problem of finding the greatest common divisor (GCD) of two positive integers (see [62] for the translated English version). For example, given inputs m = 128 and n = 72, GCD(128, 72) = 8.

    divisor(128) = {1, 2, 4, 8, 16, 32, 64, 128}                          (1.5)
    divisor(72) = {1, 2, 3, 4, 6, 8, 9, 12, 18, 24, 36, 72}               (1.6)
    common divisor(128, 72) = divisor(128) ∩ divisor(72) = {1, 2, 4, 8}   (1.7)
    GCD(128, 72) = max({1, 2, 4, 8}) = 8                                  (1.8)

Its approach reflective of its name, a naïve algorithm finds the GCD of two positive integers in four steps. The first and second steps compute all integer divisors of the first and second integers, as exemplified in eqns (1.5) and (1.6), respectively. Next, the set of common divisors is computed as the intersection of the two previously retrieved sets of divisors, as shown in eqn (1.7). Finally, the GCD is evaluated by finding the maximum value of the common divisors set, as given in eqn (1.8).
The problem of finding the GCD of two positive integers can be formulated as an optimization problem as follows:
Problem 1.5. Greatest common divisor

Input: m and n ∈ Z+
Output: g such that

    maximize   g
    subject to (g | m) ∧ (g | n)
    where g ≥ 1, integer

The notation ‘a | b’ denotes “a divides b” and ‘a ∤ b’ denotes “a does not divide b.” For example, 2 | 8 and 3 ∤ 8. a | b if and only if b % a = 0, and a ∤ b if and only if b % a ≠ 0. The symbol ‘∧’ is called the logical conjunction operator and signifies ‘and.’

Euclid (approximately 300 BCE), also known as Euclid of Alexandria, was a Greek mathematician and is regarded as the “father of geometry”. His major contributions include geometric algebra, elementary number theory, and Euclid’s Elements, which covers Euclidean geometry. © Portrait is in public domain.

Another slightly better algorithm to find GCD(m, n) is to check divisibility starting from 1 up to √n, assuming m ≥ n without loss of generality. The following naïve Algorithm 1.3 reduces unnecessary divisibility checking by using the fact that if x × y = n, then min(x, y) ≤ √n. A pseudo code is stated below.
Algorithm 1.3. Naı̈ve GCD
gcd naı̈ve(m, n) √
for i = 1 to b nc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if ((i | n) ∧ ( ni | m)), return ni . . . . . . . . . . . . . . . 2
if ((i | n) ∧ (i | m)), g = i . . . . . . . . . . . . . . . . . . . .3
return g . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
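For concreteness, here is a direct Python transcription of Algorithm 1.3 (a sketch assuming m ≥ n ≥ 1, as stated above):

import math

def gcd_naive(m, n):
    # Naive GCD, Algorithm 1.3; assumes m >= n >= 1.
    g = 1
    for i in range(1, math.isqrt(n) + 1):
        if n % i == 0 and m % (n // i) == 0:
            return n // i   # n // i is the largest remaining candidate divisor
        if n % i == 0 and m % i == 0:
            g = i           # remember the largest small common divisor seen so far
    return g

For instance, gcd_naive(128, 72) returns 8, matching eqn (1.8).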
In Euclid’s element proposition VII.2 [62, p 196], a remarkably efficient algorithm for
computing the GCD is outlined. It is widely known as Euclid’s algorithm or the Euclidean
algorithm and is stated in recursion as follows:
Algorithm 1.4. Euclid’s algorithm
$$\gcd(m, n) = \begin{cases} \gcd(n,\ m \,\%\, n) & \text{if } n > 0 \\ m & \text{if } n = 0 \end{cases} \tag{1.9}$$
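For illustration, eqn (1.9) translates almost verbatim into a recursive Python function:

def gcd(m, n):
    # Euclid's algorithm, eqn (1.9): recurse on (n, m % n) until n = 0.
    return m if n == 0 else gcd(n, m % n)

Here gcd(128, 72) evaluates to 8 and gcd(21, 9) to 3, matching the traces in Figure 1.5.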

(a) gcd(128, 72) = 8:
gcd(128, 72) = gcd(72, 128 % 72) = gcd(72, 56)
gcd(72, 56) = gcd(56, 72 % 56) = gcd(56, 16)
gcd(56, 16) = gcd(16, 56 % 16) = gcd(16, 8)
gcd(16, 8) = gcd(8, 16 % 8) = gcd(8, 0)
gcd(8, 0) = 8

(b) gcd(21, 9) = 3:
gcd(21, 9) = gcd(9, 21 % 9) = gcd(9, 3)
gcd(9, 3) = gcd(3, 9 % 3) = gcd(3, 0)
gcd(3, 0) = 3

Figure 1.5: Euclid's algorithm illustrated as tiling.

GCD Problem 1.5 may be likened to finding the biggest square tile to perfectly fill up
a rectangle with positive integer width and length values. Euclid’s algorithm suggests first
using the n × n square tile to fill up as much of the rectangle as possible. If any remaining
(n × (m % n)) rectangle is left, proceed to use a ((m % n) × (m % n)) square to fill up
the remaining rectangle. If it fits perfectly, the algorithm halts and the biggest square tile
that can perfectly fill up the rectangle is found. A couple of examples are illustrated in
Figure 1.5.
This step-by-step set of instructions for computing the GCD is widely accepted as one of
the oldest algorithms because it not only solves the non-trivial problem efficiently, but also,
and more importantly, because Euclid provided its correctness proof.

Theorem 1.2. Euclid’s Algorithm 1.4 correctly finds GCD(m, n).


Proof. Assume m ≥ n without loss of generality. If not true, swap the numbers.
Case 1: When n | m, then gcd(m, n) = n since n is the biggest divisor of n itself.
Algorithm 1.4 computes gcd(m, n) = gcd(n, 0) = n.
Case 2: When n ∤ m, there must be a remainder r; m = nq + r where q ∈ Z+ and 0 < r < n.
Let g be the greatest common divisor of m and n. Since g | m and g | n, g | r because r = m − nq. Hence,
gcd(m, n) = gcd(n, r), which is equivalent to gcd(m, n) = gcd(n, m % n). □

1.3 Summation
The capital Greek letter Sigma, Σ, is used in the summation notation introduced by
Lagrange in 1772 [107, p 9]. It is used to abbreviate the summation of a sequence of
numbers starting from its subscript index value and ending at its superscript index value.
$$\sum_{i=1}^{n} f(i) = f(1) + f(2) + \cdots + f(n) \tag{1.10}$$

The summation concept is not only of great interest when finding a simpler closed formula,
but also essential to analyzing the complexity of many algorithms. Here, closed forms of
summations that appear frequently in the analysis of algorithms are reviewed.

1.3.1 Closed Form of Summation


The legendary story about ten-year-old Gauss portrays the importance of learning al-
gorithm design and analysis. Gauss’ teacher, Herr Buttner, had asked his class to add up
all the numbers from 1 to 100, expecting that it would take a long time. This problem is
known as the nth triangular number problem, or simply TRN, since it forms a triangle if
each number is represented as dots. The problem is defined as follows:
Problem 1.6. nth triangular number, TRN(n)
Input: n ∈ Z+
Output: TRN(n) = $\sum_{i=1}^{n} i$

n:      1  2  3   4   5
TRN(n): 1  3  6  10  15

Joseph-Louis Lagrange (1736-1813) was an Italian mathematician and astronomer.
He made significant contributions to the fields of analysis, number theory, as well as
both classical and celestial mechanics. © Portrait is in public domain.

Carl Friedrich Gauss (1777-1855) was a German mathematician and is often
referred to as the Prince of Mathematicians. He made significant contributions to
many fields, including number theory, algebra, statistics, analysis, differential geometry,
geodesy, geophysics, mechanics, electrostatics, astronomy, matrix theory, and optics.
© Portrait is in public domain.

Buttner expected his students to use the following naı̈ve summation algorithm, which re-
quires n number of additions.
Algorithm 1.5. Summing 1 ∼ n

summation(n)
t = 0 ............................................. 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
t = t + i ........................................3
return t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

To the teacher’s surprise, Gauss replied with the correct answer, 5050, after hardly any time
had passed by using the following triangular number formula:
Algorithm 1.6. Triangular number formula

$$\mathrm{TRN}(n) = \frac{n(n+1)}{2} \tag{1.11}$$
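A short Python check (illustrative only) confirms that the loop of Algorithm 1.5 and the closed formula of eqn (1.11) agree:

def summation(n):
    # Algorithm 1.5: n additions, Theta(n) time.
    t = 0
    for i in range(1, n + 1):
        t = t + i
    return t

def trn(n):
    # Algorithm 1.6: Gauss' closed formula, O(1) time.
    return n * (n + 1) // 2

assert summation(100) == trn(100) == 5050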
Moreover, Gauss proved the formula’s correctness.
Theorem 1.3. Algorithm 1.6 correctly produces the nth triangular number.
$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$$

Proof. The reverse of the summation order equates to the same sum as the original order.

$$\begin{array}{rcccccccc}
\sum_{i=1}^{n} i &=& 1 &+& 2 &+ \cdots +& (n-1) &+& n \\
+\ \sum_{i=1}^{n} i &=& n &+& (n-1) &+ \cdots +& 2 &+& 1 \\
\hline
2\sum_{i=1}^{n} i &=& (n+1) &+& (n+1) &+ \cdots +& (n+1) &+& (n+1)
\end{array}$$

$$\therefore\ 2\sum_{i=1}^{n} i = n(n+1) \qquad \square$$

The symbol ‘∴’ means ‘therefore.’


Consider the problem of adding the first n odd numbers as formally defined below.
Problem 1.7. nth square number, SQN(n)
Input: n ∈ Z+
Output: SQN(n) = $\sum_{i=1}^{n} (2i - 1)$

n:      1  2  3   4   5
ODD(n): 1  3  5   7   9
SQN(n): 1  4  9  16  25

This problem is known as finding the nth square number, since these numbers always form
squares. Thus, a simple closed formula is derived in Theorem 1.4.

Theorem 1.4. The sum of the first n odd numbers is the nth square number, n2 .
Proof. $\sum_{i=1}^{n}(2i-1) = 2\sum_{i=1}^{n} i - n = 2\cdot\frac{n(n+1)}{2} - n = n^2$ □

Consider the problem of adding the first n square numbers as formally defined in Prob-
lem 1.8. This problem is known as finding the nth pyramid number, or simply PRN, since
the numbers always form pyramids.
Problem 1.8. nth pyramid number, PRN(n)
Input: n ∈ Z+
Output: PRN(n) = $\sum_{i=1}^{n} i^2$

n:      1  2   3   4
SQN(n): 1  4   9  16
PRN(n): 1  5  14  30

A closed formula is derived in Theorem 1.5.


Theorem 1.5. The sum of the first n square numbers is the nth pyramid number.
$$\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \tag{1.12}$$

Proof. (by induction)

Base (n = 1) case: $\sum_{i=1}^{1} i^2 = 1 = \frac{1(1+1)(2+1)}{6}$

Inductive case: Assuming $\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$, show $\sum_{i=1}^{n+1} i^2 = \frac{(n+1)(n+2)(2n+3)}{6}$.

$$\begin{aligned}
\sum_{i=1}^{n+1} i^2 &= \sum_{i=1}^{n} i^2 + (n+1)^2 = \frac{n(n+1)(2n+1)}{6} + (n+1)^2 = (n+1)\left(\frac{n(2n+1)}{6} + (n+1)\right) \\
&= (n+1)\,\frac{2n^2 + 7n + 6}{6} = \frac{(n+1)(n+2)(2n+3)}{6} \qquad \square
\end{aligned}$$
Consider the problem of adding the first n triangular numbers as formally defined in
Problem 1.9. This problem is known as finding the nth tetrahedral number, or THN in
short, since the numbers always form tetrahedrals.
Problem 1.9. nth tetrahedral number, THN(n)
Input: n ∈ Z+
Output: THN(n) = $\sum_{i=1}^{n}\sum_{j=1}^{i} j$

n:      1  2   3   4
TRN(n): 1  3   6  10
THN(n): 1  4  10  20

A closed formula is derived in Theorem 1.6.



Theorem 1.6. The sum of the first n triangular numbers is the nth tetrahedral number.

$$\mathrm{THN}(n) = \frac{n(n+1)(n+2)}{6} \tag{1.13}$$

$$\begin{aligned}
\text{Proof. } \mathrm{THN}(n) &= \sum_{i=1}^{n}\sum_{j=1}^{i} j = \sum_{i=1}^{n}\frac{i(i+1)}{2} = \frac{1}{2}\left(\sum_{i=1}^{n} i^2 + \sum_{i=1}^{n} i\right) \\
&= \frac{1}{2}\left(\frac{n(n+1)(2n+1)}{6} + \frac{n(n+1)}{2}\right) = \frac{n(n+1)}{2}\cdot\frac{2n+4}{6} = \frac{n(n+1)(n+2)}{6} \qquad \square
\end{aligned}$$
Consider the problem of finding PTN, the number of nodes in a perfect k-ary tree of
height h. In a perfect k-ary tree, all internal nodes have exactly k children and leaf nodes
have no children. In addition, there are exactly $k^l$ nodes at level l. Hence, the
number of nodes in a perfect k-ary tree of height h may be determined by adding the number
of nodes at each level from 0 to h.
Problem 1.10. Number of nodes in a perfect k-ary tree
Input: k ∈ Z+ and h ∈ N
Output: PTN_k(h) = $\sum_{l=0}^{h} k^l$

Example (k = 2):
level l:     0  1  2   3
nodes at l:  1  2  4   8
PTN(l):      1  3  7  15

Note that N denotes the set of all natural numbers; N = {0} ∪ Z+ = {0, 1, 2, 3, · · · }. The
summation notation of Problem 1.10 is also known as a geometric series.
Theorem 1.7. A closed formula, presented as a geometric series, for the number of nodes
in a perfect k-ary tree of height h.

$$\mathrm{PTN}_k(h) = \frac{k^{h+1} - 1}{k - 1}$$

Proof. $\mathrm{PTN}_k(h) = \sum_{i=0}^{h} k^i$ by Problem 1.10's definition.

$$k\,\mathrm{PTN}_k(h) = k\sum_{i=0}^{h} k^i = \sum_{i=0}^{h} k^{i+1} = \sum_{i=1}^{h+1} k^i = \sum_{i=1}^{h} k^i + k^{h+1} = \sum_{i=0}^{h} k^i + k^{h+1} - 1 = \mathrm{PTN}_k(h) + k^{h+1} - 1$$
$$(k-1)\,\mathrm{PTN}_k(h) = k^{h+1} - 1$$
$$\therefore\ \mathrm{PTN}_k(h) = \frac{k^{h+1} - 1}{k - 1} \qquad \square$$
By setting PTNk (h) = n, the height of the perfect k-ary tree can be derived in terms of
n, conversely.

Corollary 1.1. The height of the perfect k-ary tree

$$h = \log_k((k-1)n + 1) - 1$$

Proof.
$$\begin{aligned}
n &= \frac{k^{h+1} - 1}{k - 1} && \text{by Theorem 1.7} \\
k^{h+1} &= (k-1)n + 1 && \text{by rearranging} \\
\log_k k^{h+1} &= \log_k((k-1)n + 1) && \text{by taking logarithm} \\
h &= \log_k((k-1)n + 1) - 1 && \text{the goal} \qquad \square
\end{aligned}$$
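As a quick numerical sanity check (not part of the original derivation), the two closed forms can be compared against direct summation in Python:

import math

def ptn(k, h):
    # Number of nodes in a perfect k-ary tree of height h, by definition.
    return sum(k**l for l in range(h + 1))

k, h = 3, 5
n = ptn(k, h)
assert n == (k**(h + 1) - 1) // (k - 1)              # Theorem 1.7
assert h == round(math.log((k - 1) * n + 1, k)) - 1  # Corollary 1.1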

Consider the problem of finding PTDk (h), the sum of all nodes’ depth in a perfect k-ary
tree of height h. The depth of a node in a tree is the length of the path from the node to the
root node.
Problem 1.11. Sum of depths in a perfect k-ary tree
Input: k ∈ Z+ and h ∈ N
Output: PTD_k(h) = $\sum_{l=0}^{h} l\,k^l$

Example (k = 2):
level l:             0      1      2      3
sum of depths at l:  1×0=0  2×1=2  4×2=8  8×3=24
PTD(l):              0      2      10     34

Theorem 1.8. The sum of all nodes' depth in a perfect k-ary tree of height h

$$\mathrm{PTD}_k(h) = \frac{hk^{h+2} - (h+1)k^{h+1} + k}{(k-1)^2}$$

$$\begin{aligned}
\text{Proof. } \mathrm{PTD}_k(h) &= \sum_{i=1}^{h} i k^i = 1\cdot k^1 + 2\cdot k^2 + \cdots + h\cdot k^h \\
k\cdot\mathrm{PTD}_k(h) &= 1\cdot k^2 + 2\cdot k^3 + \cdots + h\cdot k^{h+1} \\
&= (2\cdot k^2 + 3\cdot k^3 + \cdots + h\cdot k^h) + h\cdot k^{h+1} - (k^2 + k^3 + \cdots + k^h) \\
&= (1\cdot k^1 + 2\cdot k^2 + 3\cdot k^3 + \cdots + h\cdot k^h) + h\cdot k^{h+1} - (k^0 + k^1 + k^2 + \cdots + k^h) + 1 \\
&= \mathrm{PTD}_k(h) + h\cdot k^{h+1} - \frac{k^{h+1} - 1}{k-1} + 1 \\
(k-1)\,\mathrm{PTD}_k(h) &= \frac{hk^{h+2} - (h+1)k^{h+1} + k}{k-1} \\
\mathrm{PTD}_k(h) &= \frac{hk^{h+2} - (h+1)k^{h+1} + k}{(k-1)^2} \qquad \square
\end{aligned}$$
By Corollary 1.1 and Theorem 1.8, the following corollary is derived for a perfect binary
tree, i.e., k = 2.
Corollary 1.2. The sum of all nodes' depth in a perfect binary tree with n nodes, where
n = 2^{h+1} − 1 = 1, 3, 7, 15, 31, ···

$$\mathrm{PTD}'_2(n) = (n+1)\log(n+1) - 2n$$



1.3.2 Product
The capital Greek letter Pi, Π, is used to denote product notation. It provides a compact
way to represent products of a sequence of numbers starting from its subscript index value
and ending at its superscript index value.

$$\prod_{i=1}^{n} f(i) = f(1) \times f(2) \times \cdots \times f(n) \tag{1.14}$$

For example, the nth power in eqn 1.15 and factorial in eqn 1.16 functions can be represented
using the product notation when f (i) = a and f (i) = i, respectively.

$$\prod_{i=1}^{n} a = \underbrace{a \times a \times \cdots \times a}_{n} = a^n \tag{1.15}$$
$$\prod_{i=1}^{n} i = 1 \times 2 \times \cdots \times n = n! \tag{1.16}$$

The product notation may be converted to a summation notation by taking its logarithm.

Theorem 1.9. Logarithmic rule of the product series

$$\log\prod_{i=1}^{n} f(i) = \sum_{i=1}^{n}\log f(i) \tag{1.17}$$

Proof.
$$\log\prod_{i=1}^{n} f(i) = \log(f(1) \times f(2) \times \cdots \times f(n)) = \log f(1) + \log f(2) + \cdots + \log f(n) = \sum_{i=1}^{n}\log f(i) \qquad \square$$

For example, $\log n! = \log\prod_{i=1}^{n} i = \sum_{i=1}^{n}\log i$.
Consider the problem of computing SPCk , the sum of the first n products of k consecutive
numbers as formally defined below.

Problem 1.12. Sum of the first n products of k consecutive numbers

Input: n ∈ Z+ and k ∈ N
Output: SPC_k(n) = $\sum_{i=1}^{n}\prod_{j=0}^{k-1}(i+j)$

The first four sums of SPCk=2 (n) and SPCk=3 (n) are shown in Figure 1.6.

(a) SPC_{k=2}(n): the first products are $\prod_{j=0}^{1}(1+j) = 2$, $\prod_{j=0}^{1}(2+j) = 6$, $\prod_{j=0}^{1}(3+j) = 12$, $\prod_{j=0}^{1}(4+j) = 20$, so SPC_{k=2}(n) = 2, 8, 20, 40, ···

(b) SPC_{k=3}(n): the first products are $\prod_{j=0}^{2}(1+j) = 6$, $\prod_{j=0}^{2}(2+j) = 24$, $\prod_{j=0}^{2}(3+j) = 60$, $\prod_{j=0}^{2}(4+j) = 120$, so SPC_{k=3}(n) = 6, 30, 90, 210, ···

Figure 1.6: Sum of the first n products of k consecutive numbers, SPC_k(n)

For the example cases of k = 0 ∼ 3,

$$\mathrm{SPC}_{k=0}(n) = \sum_{i=1}^{n} 1 = \frac{n}{1} \tag{1.18}$$
$$\mathrm{SPC}_{k=1}(n) = \sum_{i=1}^{n} i = \frac{n(n+1)}{2} \tag{1.19}$$
$$\mathrm{SPC}_{k=2}(n) = \sum_{i=1}^{n} i(i+1) = \frac{n(n+1)(n+2)}{3} \tag{1.20}$$
$$\mathrm{SPC}_{k=3}(n) = \sum_{i=1}^{n} i(i+1)(i+2) = \frac{n(n+1)(n+2)(n+3)}{4} \tag{1.21}$$

Based on equations (1.18)∼ (1.21), which can be verified algebraically, we may develop and
prove the following general equation for any k.
Theorem 1.10. Closed formula for the sum of the first n products of k consecutive numbers.
$$\mathrm{SPC}_k(n) = \sum_{i=1}^{n}\prod_{j=0}^{k-1}(i+j) = \frac{1}{k+1}\prod_{i=0}^{k}(n+i)$$

A proof is given as an exercise in Q. 1.11 on page 28.
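A brief numerical check of Theorem 1.10 in Python (illustrative, not a substitute for the proof):

from math import prod

def spc_direct(k, n):
    # SPC_k(n) computed directly from Problem 1.12's definition.
    return sum(prod(i + j for j in range(k)) for i in range(1, n + 1))

def spc_closed(k, n):
    # Closed formula of Theorem 1.10.
    return prod(n + i for i in range(k + 1)) // (k + 1)

for k in range(5):
    for n in range(1, 10):
        assert spc_direct(k, n) == spc_closed(k, n)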

1.4 Computational Complexity


The computational complexity of an algorithm is often measured by the elapsed time
of computation and the amount of memory space required to execute the algorithm. The
elapsed time, T (n), and required space, S(n), are expressed by growth functions, i.e., non-
decreasing functions with respect to the input size n.
Figure 1.7 illustrates the plots of two growth functions. For sufficiently large n, the
growth function g(n) grows faster than f (n), i.e. an algorithm with g(n) time complexity
takes longer to compute than an algorithm with f (n) time complexity.

1.4.1 Asymptotic Notation


Asymptotic notations, also known as Landau notations, serve as a useful language not
only to compare the growth rates of functions, but also to analyze the theoretical approxi-
mation of time elapsed to execute an algorithm. Five symbols, {O, o, Θ, Ω, ω}, are used in

[Plot of f(n) = 2n + 10 and g(n) = n² + 2: beyond the crossover point n₀, g(n) exceeds f(n).]

Figure 1.7: Growth functions.


asymptotic notations - namely, big-O, little-o, theta, big-omega and little omega, respec-
tively. Their definitions are given in Table 1.1. The big-O notation was first introduced by
Bachmann and little-o was introduced by Landau [77, pp 7-8]. In [101], Knuth traced the
most prominent appearance of the big-omega notation as Titchmarsh’s magnum opus on
Riemann’s zeta function [171] and introduced the theta notation.
Asymptotic notations allow us to represent a complex function as a simplified function
solely using the dominant term and excluding constants. For example, a function f(n) =
23n² + 2n + 3 can be expressed as O(n²) because there exist c = 28 and n₀ = 1 such that
f(n) ≤ 28n² whenever n ≥ 1.
Theorem 1.11. Dominant term only excluding constants
If T (n) = c1 f1 (n) + c2 f2 (n) + · · · + ck fk (n) where f1 (n) is the largest term, c1 > 0, and each
fi (n) is a growth function for all 1 ≤ i ≤ k, then T (n) = O(f1 (n)).
Proof. We search for n₀ and c such that T(n) ≤ c f₁(n) for all n ≥ n₀.
$$T(n) \le |c_1|f_1(n) + |c_2|f_2(n) + \cdots + |c_k|f_k(n) \le |c_1|f_1(n) + |c_2|f_1(n) + \cdots + |c_k|f_1(n) = \left(\sum_{i=1}^{k}|c_i|\right)f_1(n)$$
Since there exist $c = \sum_{i=1}^{k}|c_i|$ and n₀ = 1, T(n) = O(f₁(n)). □
i=1

Paul Gustav Heinrich Bachmann (1837-1920) was a German mathematician
who worked in the field of analytic number theory, in which big-O notation was first
introduced. © Portrait is in public domain.

Edmund Georg Hermann Landau (1877-1938) was a German mathematician
who worked in the fields of number theory and complex analysis.
© Portrait is in public domain.

Table 1.1: Asymptotic notations.

Name          Notation          Informally             Formal definition
Big-O         f(n) = O(g(n))    f ≤ g (at most)        There exist positive constants c and n₀ such that f(n) ≤ c·g(n) when n ≥ n₀.
Big-omega     f(n) = Ω(g(n))    f ≥ g (at least)       There exist positive constants c and n₀ such that f(n) ≥ c·g(n) when n ≥ n₀.
Theta         f(n) = Θ(g(n))    f = g (exactly)        f(n) = O(g(n)) and f(n) = Ω(g(n)).
Little-o      f(n) = o(g(n))    f < g (less than)      f(n) = O(g(n)) and f(n) ≠ Θ(g(n)).
Little-omega  f(n) = ω(g(n))    f > g (greater than)   f(n) = Ω(g(n)) and f(n) ≠ Θ(g(n)).

Table 1.2: Frequently encountered growth functions

Function     Name          Examples                             Category
O(1)         Constant      2, 1000                              Polynomial O(n^p)
O(log n)     Logarithmic   3 log n, 7 log n + 5                 Polynomial O(n^p)
O(log² n)    Log-squared   2 log² n, 3 log² n + 5 log n + 2     Polynomial O(n^p)
O(√n)        Square root   2√n                                  Polynomial O(n^p)
O(n)         Linear        9n, 6n + 7                           Polynomial O(n^p)
O(n log n)   Linearithmic  4n log n, 3n log n + 5n + 7 log n    Polynomial O(n^p)
O(n²)        Quadratic     3n², 2n² + 6n log n + 7              Polynomial O(n^p)
O(n³)        Cubic         4n³, 2n³ + 5n² log n + 1             Polynomial O(n^p)
O(2ⁿ)        Exponential   3 × 2ⁿ, 2ⁿ + n³                      Exponential ω(n^p)
O(n!)        Factorial     n!, 7n! + 2ⁿ                         Exponential ω(n^p)
O(nⁿ)                      3nⁿ + 5n!                            Exponential ω(n^p)

Table 1.2 lists growth functions frequently encountered in algorithm analysis in increasing
growth rate order. These growth functions are categorized into two groups: polynomial and
exponential functions. While algorithms with a polynomial time complexity are executed
in reasonable time for large n, algorithms with an exponential time complexity do not run
to completion within a reasonable time period. For example, if a single operation takes one
millisecond, an algorithm with Θ(2ⁿ) time complexity would take 36,197 years for n = 50.
An algorithm with O(n!) time complexity would take 78,218,300 years for the very small
input size n = 20. Even if computer technology improves so that the time required for each
operation is reduced to a nanosecond, an algorithm with O(n!) time complexity would still
take more than 78 years to complete.
One way to prove the order of growth functions in Table 1.2 is by induction. A sample
proof for n! = ω(2n ) is stated as follows:
Theorem 1.12. n! = ω(2n )
Proof. Base case: For n₀ = 4, (4! = 24) > (2⁴ = 16).
Inductive step: Assuming that n! > 2ⁿ is true for some n ≥ 4, show (n + 1)! > 2^{n+1}:
(n + 1)! = (n + 1)·n! > (n + 1)·2ⁿ > 2 × 2ⁿ = 2^{n+1}. □
Another useful way to determine the order of two growth functions is the limit comparison
test [8, p 657], as summarized in Table 1.3. The function f(n)/g(n) converges to either 0, c,
or ∞ as n approaches ∞. The symbol ∞ means infinity.

Table 1.3: Limit comparison test and asymptotic relationships

If the limit is                  True statements                   Examples
lim_{n→∞} f(n)/g(n) = 0          f(n) = o(g(n)), f(n) = O(g(n)),   lim 1/n = 1/∞ = 0; lim 1/log n = 0;
                                 g(n) = ω(f(n)), g(n) = Ω(f(n))    lim n/n² = lim 1/n = 0
lim_{n→∞} f(n)/g(n) = c          f(n) = Θ(g(n)), g(n) = Θ(f(n)),   lim 3n/n = 3; lim n²/5n² = 1/5;
                                 f(n) = O(g(n)), f(n) = Ω(g(n)),   lim (2n² + 1)/n² = 2;
                                 g(n) = O(f(n)), g(n) = Ω(f(n))    lim log n / log_r n = log r
lim_{n→∞} f(n)/g(n) = ∞          f(n) = ω(g(n)), f(n) = Ω(g(n)),   lim 3n²/n = lim 3n = ∞;
                                 g(n) = o(f(n)), g(n) = O(f(n))    lim (n log n)/n = lim log n = ∞

1.4.2 Analysis of Algorithms


Analyzing algorithms is necessary for determining their efficiency. Analysis
is conducted in terms of the time it takes to produce the output and the space required
to perform the algorithm. Asymptotic notations are extremely useful for analyzing the
computational time and space complexities of algorithms. Computational space analysis
will be discussed from Chapter 5 onwards, while computational time complexity will be
emphasized in earlier chapters.
Constant time operations include assignments, comparisons and basic arithmetic opera-
tions on primitive data types and small fixed-size inputs. Accessing the value of a variable or
the ith cell value of an array is assumed to take constant time as well. When these constant
time operations are placed in a loop, the complexity changes. Consider some simple codes
with ‘for’ loops in Table 1.4. In the code in the leftmost column, the body of the loop is
accessed exactly n number of times and thus its computational time complexity is Θ(n). In
the middle column, the code contains outer and inner loops. The body of the inner loop is
accessed n2 number of times and thus the code’s time complexity is Θ(n2 ). In the rightmost

Table 1.4: Summations and for-loops.

Σ_{i=1}^{n} 1 = Θ(n)       Σ_{i=1}^{n} Σ_{j=1}^{n} 1 = Σ_{i=1}^{n} n = Θ(n²)     Σ_{i=1}^{n} Σ_{j=1}^{i} 1 = Σ_{i=1}^{n} i = Θ(n²)

s = 0                      s = 0                      s = 0
for i = 1 to n             for i = 1 to n             for i = 1 to n
  s = s + 1                  for j = 1 to n             for j = 1 to i
                               s = s + 1                  s = s + 1

column, the body of the loop in the code is accessed exactly n(n+1)/2 times. Thus,
the code takes Θ(n²) time.
Consider Problem 1.9 of finding the nth tetrahedral number and the following naı̈ve
Algorithm 1.7.

Algorithm 1.7. Tetrahedranum $\sum_{i=1}^{n}\sum_{j=1}^{i} j$

tetrahedranum(n)
O = 0 ............................................ 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = 1 ∼ i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
O = O + j ................................... 4
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

The symbol ‘∼’ in “for i = s ∼ b” signifies for each i from s to b incremented by one.
Since line 4 of Algorithm 1.7 takes constant time, the computational time complexity of
Algorithm 1.7 is as follows:

$$T(n) = \sum_{i=1}^{n}\sum_{j=1}^{i} c = \sum_{i=1}^{n} ci = c\sum_{i=1}^{n} i = c\,\frac{n(n+1)}{2} \in \Theta(n^2)$$

Since the nth tetrahedral number equates to the sum of the first n triangular numbers, we
may devise a slightly more efficient algorithm which utilizes the triangular number formula
in eqn (1.11) as follows:
Algorithm 1.8. Tetrahedral number $\sum_{i=1}^{n} \mathrm{Tr}(i) = \sum_{i=1}^{n}\frac{i(i+1)}{2}$

tetrahedranum(n)
O = 0 ............................................ 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
O = O + i(i + 1)/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Since line 3 of Algorithm 1.8 takes constant time and is embedded in a ‘for’ loop, the
computational time complexity of Algorithm 1.8 is
$$T(n) = \sum_{i=1}^{n} c = cn \in \Theta(n)$$

While both Algorithms 1.7 and 1.8 correctly find the nth tetrahedral number, equa-
tion (1.13) on page 12 produces the same result in constant time, O(1).
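The three approaches can be compared side by side in Python (a sketch; all three compute THN(n)):

def thn_quadratic(n):
    # Algorithm 1.7: double loop, Theta(n^2) constant-time additions.
    o = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            o += j
    return o

def thn_linear(n):
    # Algorithm 1.8: one loop over triangular numbers, Theta(n).
    return sum(i * (i + 1) // 2 for i in range(1, n + 1))

def thn_constant(n):
    # Equation (1.13): closed formula, O(1).
    return n * (n + 1) * (n + 2) // 6

assert thn_quadratic(10) == thn_linear(10) == thn_constant(10) == 220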
Frequently encountered summations in algorithm analysis are outlined in Table 1.5 on
page 33.

1.4.3 Search Unsorted List


Suppose that there are n customers with distinct ID numbers. Customers are stored in
an unsorted array. To search a customer by his or her ID number, the entire array of size
n may be searched. This search problem of whether a query, q, occurs in a list is defined
below.
Problem 1.13. Searching (unique), search(A_{1∼n}, q)
Input: A sequence A of n distinct quantifiable elements and a query element, q
Output: p if q ∈ A_{1∼n} and a_p = q; ε if q ∉ A_{1∼n}

Consider the following simple search algorithm.


Algorithm 1.9. Sequential search

search(A1∼n , q)
p = 0 ............................................. 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai = q, p = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
return p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Algorithm 1.9 finds and returns the position in the array if a query q occurs in the array.
If a query q does not appear, it returns 0. Algorithm 1.9 consists of one ‘for’ loop whose
body contains a constant amount of work. Hence, the computational time complexity is
Θ(n). If the query q happens to be the first element in the array, there is no need to check
the rest of the array. However, Algorithm 1.9 continues comparing until the end of the array.
To improve it, consider another simple search algorithm that uses a ‘while’ loop.
Algorithm 1.10. Sequential search II

search(A1∼n , q)
i = 0 ............................................. 1
Flag = F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
while Flag = F ∧ i < n . . . . . . . . . . . . . . . . . . . . . . . . . . 3
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
if ai = q, Flag = T . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if Flag = T, return i . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
else return ε or 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
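In Python, the early-exit search might be written as follows (a sketch; it returns 0 when q is absent, following the convention above):

def search(A, q):
    # Sequential search with early termination (Algorithm 1.10).
    for i, a in enumerate(A, start=1):   # 1-based position, as in the text
        if a == q:
            return i   # best case: q near the front, O(1)
    return 0           # worst case: all n elements scanned, Theta(n)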

Algorithm 1.10 terminates in constant time for the best case scenario where the query
q is near the beginning of the array. In the worst case scenario, the computational time
complexity remains Θ(n). Thus, the best way to describe the computational time complexity
of Algorithm 1.10 is O(n). The average case can be derived by adding all n query instances’
time complexities and dividing by n.
Theorem 1.13. The average time complexity of Algorithm 1.10 is Θ(n).
Proof.
$$\frac{\sum_{i=1}^{n} i}{n} = \frac{\frac{n(n+1)}{2}}{n} = \frac{n+1}{2} = \Theta(n) \qquad \square$$

1.4.4 Integer Multiplication


Consider the problem of finding the product of two integers m ∈ Z and n ∈ N. Note
that Z denotes the set of all integers. Without loss of generality, n is assumed to be a
natural number. If n is a negative integer, the inputs (m, n) become (−m, −n), such that
−n becomes a natural number.
Problem 1.14. Integer multiplication

Input: m ∈ Z and n ∈ N
Output: $m \times n = \underbrace{m + m + \cdots + m}_{n} = \sum_{i=1}^{n} m$

According to the definition above, an integer m can be added n times to
compute the product. A pseudo code for this naı̈ve algorithm is given below:
Algorithm 1.11. Naı̈ve integer multiplication
times(m, n)
p = 0 ............................................. 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
p = p + m ......................................3
return p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Algorithm 1.11 clearly takes Θ(n) time as it involves n number of additions, assuming
each addition operation takes constant time.
Ancient Egyptians, however, used a doubling method to multiply two numbers [181, p
13]. As illustrated in Figure 1.8 (a), to multiply 11 and 212, they would start by doubling
two numbers (1, 212), resulting in (2, 424). Next, they would keep doubling the resulting
pair as long as the first number does not exceed 11. Then, they identified all rows
whose first elements sum to 11. In this case, these are the first, second and last rows, i.e.,
1 + 2 + 8 = 11. Only adding these selected rows provides the answer, (11, 2332).
What is remarkable about this doubling method is its relationship to the modern binary
number representation. The selected rows correspond to the binary number representation
of 11, which is 10112 = 1110 , and each doubling step corresponds to the shift left operation of
a binary number, as shown in Figure 1.8 (a). Hence, a pseudo code of the doubling method
may be presented as in Algorithm 1.12, assuming the input arguments are in binary. In
fact, all numbers are represented in binary in modern computers.
Algorithm 1.12. Doubling method
times(m, n)
p = 0 ............................................. 1
for i = least to most significant bit of n . . . . . . . . . . . 2
if ni == 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
p = p + m ....................................4
m = shift-left(m), % i.e., m = 2 × m . . . . . . . . . 5
return p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
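A Python sketch of the doubling method; the bit operations stand in for the shift-left of line 5:

def times(m, n):
    # Doubling method: add a shifted copy of m for each 1-bit of n,
    # scanning n from least to most significant bit.
    p = 0
    while n > 0:
        if n & 1:      # current bit of n is 1
            p = p + m
        m = m << 1     # shift-left: m = 2 * m
        n = n >> 1     # advance to the next bit of n
    return p

assert times(212, 11) == 2332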
Most readers would find the answer using grade school multiplication, as shown in Fig-
ure 1.8 (b). The doubling method in Algorithm 1.12 is the binary equivalent of grade school
multiplication, as demonstrated in Figure 1.8 (c).

(a) Egyptian doubling method and binary number interpretation:
✓  1    212 = 1 × 212    000011010100
✓  2    424 = 2 × 212    000110101000
   4    848 = 4 × 212    001101010000
✓  8   1696 = 8 × 212    011010100000
  11   2332 = (1 + 2 + 8) × 212    100100011100

(b) grade school multiplication: 212 × 11 = 212 + 2120 = 2332
(c) binary number multiplication: 11010100 × 1011 = 100100011100

Figure 1.8: Doubling method illustration.

1.4.5 Maximum Contiguous Subsequence Sum


For practice analyzing algorithms with inner loops, consider the maximum contiguous
subsequence sum problem, or MCSS in short. MCSS finds the maximum value of the sum
of a contiguous subarray within a one-dimensional array of numbers. This problem is also
known as the maximum subarray problem and was first posed in [74]. For a toy example
where A = ⟨−3, 1, 3, −3, 4, −7⟩, the maximum contiguous subsequence sum is $\sum_{i=2}^{5} a_i = 5$.
The maximum subarray problem is formally defined as follows:
Problem 1.15. Maximum contiguous subsequence sum
Input: A sequence A of n real numbers
Output: $\max\left(\left\{\sum_{i=b}^{e} a_i \;\middle|\; 1 \le b \le e \le n\right\}\right)$

A naı̈ve algorithm, derived directly from the problem’s definition, is to generate all pos-
sible contiguous subsequences and compute their summations in order to find the maximum
value, as illustrated in Figure 1.9 (a). A pseudo code is given as follows:
Algorithm 1.13. Naı̈ve MCSS
naı̈ve MCSS(A1∼n )
ans = −∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = i to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
s = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
for k = i to j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
s = s + ak . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
if s > ans, ans = s . . . . . . . . . . . . . . . . . . . . . . . . 7
return ans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Algorithm 1.13 contains three ‘for’ loops: an outermost loop, one inner-middle loop,
and one innermost loop. Hence, the computational time complexity of Algorithm 1.13 is

[Figure 1.9 tabulates the sums of all contiguous subsequences a_b ∼ a_e of A = ⟨−3, 1, 3, −3, 4, −7⟩: (a) each (b, e) pair enumerated separately by the Θ(n³) Algorithm 1.13; (b) the same sums arranged in a b × e table filled left to right by the Θ(n²) Algorithm 1.14. The maximum entry is 5, at b = 2, e = 5.]

Figure 1.9: Maximum consecutive subsequence sum algorithm illustration.

Θ(n3 ). However, astute readers may notice that MCSS can be found in Θ(n2 ) time. Once
the summation of ai ∼ aj is computed, there is no need to add individual elements all over
again to find the summation of ai ∼ aj+1 .

$$\sum_{k=i}^{j+1} a_k = \sum_{k=i}^{j} a_k + a_{j+1}$$

A pseudo code with only two loops is illustrated in Figure 1.9 (b) and outlined as follows:

Algorithm 1.14. Naı̈ve MCSS II

naı̈ve MCSS(A1∼n )
ans = −∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
s = 0 ...........................................3
for j = i to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
s = s + aj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if s > ans, ans = s . . . . . . . . . . . . . . . . . . . . . . . . 6
return ans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
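A Python rendering of Algorithm 1.14 (a sketch using 0-based indexing); the running sum s is reused instead of re-adding elements:

def naive_mcss2(A):
    # Theta(n^2) MCSS: for each start i, extend the running sum s.
    ans = float("-inf")
    for i in range(len(A)):
        s = 0
        for j in range(i, len(A)):
            s = s + A[j]        # sum of A[i..j] obtained from A[i..j-1] in O(1)
            ans = max(ans, s)
    return ans

assert naive_mcss2([-3, 1, 3, -3, 4, -7]) == 5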

The computational time complexity can be viewed as the number of times the body of
the innermost loop is accessed.

Theorem 1.14. Algorithm 1.14 takes Θ(n2 ) time.


Proof.
$$T(n) = \sum_{i=1}^{n}\sum_{j=i}^{n} 1 = \sum_{i=1}^{n}\sum_{j=1}^{i} 1 = \sum_{i=1}^{n} i = \frac{n(n+1)}{2} = \Theta(n^2) \qquad \square$$

Theorem 1.15. Algorithm 1.13 takes Θ(n3 ) time.

Proof.
$$\begin{aligned}
T(n) &= \sum_{i=1}^{n}\sum_{j=i}^{n}\sum_{k=i}^{j} 1 = \sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{k=j}^{i} 1 = \sum_{i=1}^{n}\sum_{j=1}^{i}(i - j + 1) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} i - \sum_{j=1}^{i} j + \sum_{j=1}^{i} 1\right) \\
&= \sum_{i=1}^{n}\left(i^2 - \frac{i(i+1)}{2} + i\right) = \sum_{i=1}^{n}\left(\frac{1}{2}i^2 + \frac{1}{2}i\right) = \frac{1}{2}\sum_{i=1}^{n} i^2 + \frac{1}{2}\sum_{i=1}^{n} i \\
&= \frac{n(n+1)(2n+1)}{12} + \frac{n(n+1)}{4} = \frac{1}{6}n^3 + \frac{1}{2}n^2 + \frac{1}{3}n = \Theta(n^3) \qquad \square
\end{aligned}$$
In this book, MCSS is categorized as one of many consecutive subsequence arithmetic
problems included in the index of computational problems on page 736, where numerous
algorithms are listed for these problems. The rest of the consecutive subsequence arithmetic
problems are left for exercises Q 1.17 ∼ Q 1.19 and will also be dealt with in upcoming
chapters.

1.4.6 Analysis of Algorithms with Logarithms


Logarithmic functions appear frequently in the analysis of many algorithms. For exam-
ple, the computational running time complexity of Euclid’s Algorithm 1.4 is O(log n). A
proof follows directly from Lamé's theorem, which states that "the number of divisions used
by Euclid's Algorithm 1.4 to find gcd(m, n) is less than or equal to five times the number of
decimal digits in n." A proof of Lamé's theorem can be found in [146, p.348]. The number
of decimal digits in n is roughly log₁₀ n and, by Lemma 1.1, the worst case complexity is Θ(log n).
The best case scenario yields constant running time, e.g. when m = 500 and n = 100. Only
one modulo operation is necessary to determine gcd(500, 100) = 100 by Euclid’s algorithm.
Hence, the running time complexity of Euclid’s Algorithm 1.4 is O(log n).
For more examples, the computational time complexities of the naı̈ve integer multipli-
cation Algorithm 1.11 and the doubling method Algorithm 1.12 are Θ(n log n log m) and
Θ(log m log n), respectively, since the number of digits in the binary representation of a
decimal number n is log n. The computational complexity of the grade school multiplica-
tion method shall be discussed in detail on page 45 in Chapter 2. Algorithms that involve
logarithmic functions in their computational time complexities will appear primarily start-
ing from Chapter 3. In Chapters 8 and 9, algorithms related to tree-based data structures
involve logarithmic functions in their analysis as well.
In this book, the base for the logarithmic function log n is two unless otherwise specified.
In fact, logr n = Θ(log n).

Lemma 1.1. logr n = Θ(log n) for r > 1.



Proof.
$$\log_r n = \frac{\log n}{\log r} \tag{1.22}$$
First, we search for n₀ and c such that log_r n ≤ c log n for all n ≥ n₀. Since there exist
c = 1/log r and n₀ = 1, log_r n = O(log n) by eqn (1.22).
Next, we search for n₀ and c such that log n ≤ c log_r n for all n ≥ n₀. Since there exist
c = log r and n₀ = 1, log n = O(log_r n) by eqn (1.22).
∴ log_r n = Θ(log n) for r > 1. □
It is not easy to compare the logarithmic function with other growth functions. L'Hôpital's
rule, stated in equation (1.23), comes in handy when making such comparisons.
$$\lim_{n\to\infty}\frac{f(n)}{g(n)} = \lim_{n\to\infty}\frac{f'(n)}{g'(n)} \tag{1.23}$$

To show log n ∈ o(n), evaluate $\lim_{n\to\infty}\frac{\log n}{n} = \lim_{n\to\infty}\frac{1}{n\ln 2} = 0$, since $\frac{d}{dn}\log_b n = \frac{1}{n\ln b}$. The
logarithm notation 'ln' is the natural logarithm function whose base is the natural number,
e = 2.7182818....
Theorem 1.16. logᵖ n ∈ o(n) for any constant value p ≥ 1.

Proof. Base case: When p = 1, log n ∈ o(n) since $\lim_{n\to\infty}\frac{\log n}{n} = \lim_{n\to\infty}\frac{1}{n\ln 2} = 0$.
Inductive step: Assume that logᵖ n ∈ o(n) is true for p. Show log^{p+1} n ∈ o(n).
$$\lim_{n\to\infty}\frac{\log^{p+1} n}{n} = \lim_{n\to\infty}\frac{(p+1)\log^p n}{n\ln 2} = \frac{p+1}{\ln 2}\lim_{n\to\infty}\frac{\log^p n}{n} = 0 \qquad \square$$
Similarly, one can easily prove that logarithmic and log-squared functions are slower
growth functions than the square root function.
Now, deriving asymptotic notation from summation with logarithmic functions will be
presented. The proof for $\sum_{i=1}^{n}\log n = \Theta(n\log n)$ is straightforward, but proving $\sum_{i=1}^{n}\log i = \Theta(n\log n)$ appears to be quite challenging for many students.
n
X
Theorem 1.17. log i = Θ(n log n)
i=1
n
X
Proof. log i = log 1 + log 2 + · · · + log (n − 1) + log n
| {z }
i=1
n
< log n + log n + · · · + log n + log n
| {z }
n
= O(n log n)
n
X n
log i = log 1 + log 2 + · · · + log + · · · + log (n − 1) + log n
i=1 | 2 {z }
n/2
n n n
> log + · · · + log + log
| 2 {z 2 2}
n/2
26 CHAPTER 1. INTRODUCTION

n n n
= log = (log n − 1)
2 2 2
= Ω(n log n)

n
X n
X n
X
Since log i = O(n log n) and log i = Ω(n log n), log i = Θ(n log n). 
i=1 i=1 i=1

Corollary 1.3. log n! = Θ(n log n)

Proof. By equations (1.16) and (1.17), Corollary 1.3 is an equivalent variation of Theorem 1.17.
$$\log n! = \log\prod_{i=1}^{n} i = \sum_{i=1}^{n}\log i = \Theta(n\log n) \qquad \square$$

1.5 Exercises
Q 1.1. Formulate the 3 × 5 picture puzzle problem described by an example in Figure 1.10.

(a) input example (b) output example

Figure 1.10: A picture puzzle example.

Q 1.2. Consider the following growth functions:


i) 2n + 1    ii) 2n log n + n    iii) 7    iv) 2n² + 3
v) 2n + 3√n + 2    vi) 2n² + n log n + 3    vii) 5√n + 3 log n    viii) 2n³ + 5√n

Identify all functions that have the complexity of the following asymptotic functions.

a) O(n)    b) O(n log n)    c) Ω(n)    d) o(n²)
e) Θ(n)    f) ω(√n)    g) ω(n²)    h) ω(n)

Q 1.3. Consider the following two growth functions:


f(n) = 2√n + 5n log n + 5√n log n + 4
g(n) = 2n log n + 7

Which of the following statements are true?



a). f(n) = O(g(n))    b). f(n) = o(g(n))    c). f(n) = Ω(g(n))    d). f(n) = ω(g(n))    e). f(n) = Θ(g(n))
f). g(n) = Θ(f(n))    g). g(n) = O(f(n))    h). g(n) = o(f(n))    i). g(n) = Ω(f(n))    j). g(n) = ω(f(n))
Q 1.4. Consider the following two growth functions:
f(n) = 4n + 3n³ log n + 7√n log n
g(n) = 5n² log n + 3n⁴

Which of the following statements are true?

a). f(n) = O(g(n))    b). f(n) = o(g(n))    c). f(n) = Ω(g(n))    d). f(n) = ω(g(n))    e). f(n) = Θ(g(n))
f). g(n) = Θ(f(n))    g). g(n) = O(f(n))    h). g(n) = o(f(n))    i). g(n) = Ω(f(n))    j). g(n) = ω(f(n))
Q 1.5. Prove or disprove the asymptotic notations.
a). 2n³ − 3n² + 5n + 12 ∈ O(n³)
b). 2n² − 2n + 1 ∈ Θ(n²)
c). 2^{n+1} ∈ O(2ⁿ)
d). 2^{2n} ∈ O(2ⁿ)
e). 2ⁿ ∈ o(n!)
f). log(n + 1) ∈ Θ(log n)
Q 1.6. Place the following functions into increasing asymptotic order. If two or more of
the functions are of the same asymptotic order, then indicate this. Prove the correctness of
your ordering. In other words, if you claim that g(n) is greater than f (n), then show that
f (n) = O(g(n)) but f (n) 6= Θ(g(n)).

n log n,  2n²,  2 log n,  n,  n!,  4n,  2n,  ln n

Note that ln n means the natural log, i.e., logarithm base e, the natural number.
Q 1.7. Prove or disprove the asymptotic relationships.
a). $\sum_{i=1}^{n} i^3 = \Theta(n^4)$

b). $\sum_{i=1}^{n} i^p = \Theta(n^{p+1})$

c). $\sum_{i=1}^{n} i\log i = \Theta(n^2\log n)$

d). $\sum_{i=1}^{n} (\log i)^2 = \Theta(n(\log n)^2)$

e). $\sum_{i=1}^{n} \log i^2 = \Theta(n\log n)$

Q 1.8. Consider the problem of adding the first n even numbers.

a). Formulate the problem.

b). Derive a closed form.

c). Prove the correctness of your closed form algebraically.

d). Prove the correctness of your closed form using induction.

Q 1.9. Prove the following theorems by induction.

a). Theorem 1.3 on Page 10

b). Theorem 1.4 on Page 11

c). Theorem 1.6 on Page 12

d). Corollary 1.2 on Page 13

Q 1.10. Consider the number of nodes in a perfect k-ary tree of height h Problem 1.10
and the sum of all nodes’ depth in a perfect k-ary tree of height h Problem 1.11 defined on
pages 12 and 13, respectively. If k = 2, the tree is called a perfect binary tree.

a). A closed formula for PTNk=2 (n) is given in eqn (1.24).


$$\sum_{i=0}^{n} 2^i = 2^{n+1} - 1 \tag{1.24}$$

Prove it using induction.

b). Prove Theorem 1.7 on Page 12 using induction.

c). A closed formula for PTDk=2 (h) is given in eqn (1.25).


$$\sum_{i=1}^{h} i\,2^i = (h-1)2^{h+1} + 2 \tag{1.25}$$

Prove it using induction.

d). Prove Theorem 1.8 on Page 13 using induction.

Q 1.11. Consider the problem of the sum of the product of k consecutive numbers, defined
in Problem 1.12 on Page 14.

a). Consider the problem of the sum of the product of two consecutive numbers, SPCk=2 (n)
in equation (1.20) on Page 15. Prove it algebraically.

b). Consider the problem of the sum of the product of two consecutive numbers, SPCk=2 (n)
in equation (1.20) on Page 15. Prove it using induction.

c). Consider the problem of the sum of the product of three consecutive numbers, SPCk=3 (n)
in equation (1.21) on Page 15. Prove it algebraically.

d). Consider the problem of the sum of the product of three consecutive numbers, SPCk=3 (n)
in equation (1.21) on Page 15. Prove it using induction.

e). Prove the correctness of Theorem 1.10 on Page 15.

Q 1.12. Formulate the problem of finding the sum of the first nth sum of k consecutive
numbers, SSCk (n). (Hint: Instead of the product in Question 1.11, this problem is about
adding them.)

a). Formulate the problem.

b). Derive a closed formula for k = 1.

c). Derive a closed formula for k = 2.

d). Derive a closed formula for k = 3.

e). Derive a closed formula for any k.

Q 1.13. Consider the problem of finding the sum of the first n tetrahedral numbers, or
simply STH.

a). Formulate the problem.

b). Provide computational time complexities of the following algorithms to find STH(n):

Algorithm 1.15. STH-1
STH(n)
o = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = 1 ∼ i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for k = 1 ∼ j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
o = o + k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
return o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Algorithm 1.16. STH-2
STH(n)
o = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = 1 ∼ i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
o = o + j(j + 1)/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Algorithm 1.17. STH-3
STH(n)
o = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
o = o + i(i + 1)(i + 2)/6 . . . . . . . . . . . . . . . . . . . . . . . . 3
return o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

c). Derive a closed formula for STH(n).

d). Prove the correctness of your closed formula.

Q 1.14. Consider the following verifying algorithm to check whether a given sequence A1∼n
of n numbers is an up-down alternating sequence.

Algorithm 1.18. Checking up-down sequence

isUPDOWN(A1∼n )
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if i is odd and ai > ai−1 , return false . . . . . . . . 2
if i is even and ai < ai−1 , return false . . . . . . . . 3
return true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

For example, Algorithm 1.18 returns true if A = h2, 5, 1, 8, 3, 6i or h6, 8, 1, 5, 2, 3i and false
if A = h6, 5, 1, 3, 8, 2i or h1, 2, 6, 5, 8, 3i.
a). Formulate the problem of checking an up-down alternating sequence.
b). Provide the computational time complexity of Algorithm 1.18.
c). Analyze the best case running time of Algorithm 1.18 and give an example.
d). Analyze the worst case running time of Algorithm 1.18 and give an example.
Q 1.15. Consider the naı̈ve Algorithm 1.3 stated on page 8 to find gcd(m, n).
a). Assuming that computing d | n, i.e., checking mod(n, d) = 0 takes constant time, what
is the computational time complexity?
b). Assuming that computing d | n, i.e., checking mod(n, d) = 0 takes O(log n) time, what
is the computational time complexity?
Q 1.16. Consider the following algorithm to solve the problem of determining whether a
given positive integer is a prime. For example, the algorithm must return true if n is a prime
number, n ∈ {2, 3, 5, 7, · · · } and false if n is a composite number.
Algorithm 1.19. Checking primality
isprime(n)
for i = 2 ∼ ⌊√n⌋ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if n % i = 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
return false . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

a). Formulate the problem.


b). Assuming that computing d | n, i.e., checking mod(n, d) = 0 takes constant time, what
is the computational time complexity of Algorithm 1.19?
c). Assuming that computing d | n, i.e., checking mod(n, d) = 0 takes O(log n), what is
the computational time complexity of Algorithm 1.19?
d). Analyze the best case running time of Algorithm 1.19 and give an example.
e). Analyze the worst case running time of Algorithm 1.19 and give an example.
Q 1.17. Consider the minimum consecutive subsequence sum problem, minCSS in short,
which finds the consecutive subsequence of a sequence A = ha1 , a2 , · · · , an i whose sum is
the minimum of the sums of all consecutive subsequences. Let A = ha1 , a2 , · · · , an i be a
sequence of arbitrary real numbers.

a). Formulate the problem.

b). What is the minimum consecutive subsequence sum on the following toy example?

3 -1 5 -3 -3 7 4 -1

c). Derive a naı̈ve algorithm from the problem’s definition.

d). Provide the computational time complexity of the proposed algorithm in c).

Q 1.18. Consider the maximum consecutive subsequence product problem, MCSP in short,
which finds the consecutive subsequence of a sequence of arbitrary real numbers A =
ha1 , a2 , · · · , an i whose product is the maximum of the products of all consecutive subse-
quences.

a). Formulate the problem.

b). What is the maximum consecutive subsequence product on the following toy example?

-2 0 -1 2 1 -1 -2 2

c). Derive a naı̈ve algorithm from the problem’s definition.

d). Provide the computational time complexity of the proposed algorithm in c).

Q 1.19. Consider the minimum consecutive subsequence product problem, minCSP in short,
which finds the consecutive subsequence of a sequence of arbitrary real numbers A =
ha1 , a2 , · · · , an i whose product is the minimum of the products of all consecutive subse-
quences.

a). Formulate the problem.

b). What is the minimum consecutive subsequence product on the following toy example?

-2 0 -1 2 1 -1 -2 2

c). Derive a naı̈ve algorithm from the problem’s definition.

d). Provide the computational time complexity of the proposed algorithm in c).

Q 1.20. This question contains several basic mathematical facts and properties that appear
frequently in this book. Proving the following ‘Product rule of exponents’ Theorem is beyond
the scope of this book.

Theorem 1.18. Product rule of exponents

em × en = em+n where e is the natural number.

Assuming that it is a proven fact, prove the following theorems.

a). Prove the following ‘Product rule of powers’ Theorem using the above ‘Product rule
of exponents’ Theorem 1.18.

Theorem 1.19. Product rule of powers

bm × bn = bm+n

b). Prove the following ‘Product rule of logarithms’ Theorem using the above ‘Product
rule of powers’ Theorem 1.19.
Theorem 1.20. Product rule of logarithms

log (a × b) = log a + log b

c). Prove ‘Logarithmic rule of the product series’ Theorem 1.9 using induction.
$$\log\prod_{i=1}^{n} f(i) = \sum_{i=1}^{n}\log f(i) \tag{1.17}$$

Q 1.21. Consider the word search puzzle Problem 1.2 defined on page 3. Analyze the
computational time complexity of an algorithm that checks every position in eight directions
for each word.

Table 1.5: Some summations and their asymptotic notations.

#   Sum                                                            Asymptotic notation                      Comments/examples
1   Σ_{i=1}^{n} 1 = n                                              Σ_{i=1}^{n} O(1) = Θ(n)                  #8 with f(n) = 1
2   Σ_{i=1}^{n} i = n(n+1)/2                                       Σ_{i=1}^{n} Θ(i) = Θ(n²)                 Σ_{i=1}^{n} (3i + 1) = Θ(n²); #5 with p = 1
3   Σ_{i=1}^{n} i² = n(n+1)(2n+1)/6                                Σ_{i=1}^{n} Θ(i²) = Θ(n³)                Σ_{i=1}^{n} (i² + 1) = Θ(n³); #5 with p = 2
4   Σ_{i=1}^{n} i³ = n²(n+1)²/4                                    Σ_{i=1}^{n} Θ(i³) = Θ(n⁴)                Σ_{i=1}^{n} i³ = Θ(n⁴); #5 with p = 3
5   Σ_{i=1}^{n} i^p for p ≥ 0                                      Σ_{i=1}^{n} Θ(i^p) = Θ(n^{p+1})          Σ_{i=1}^{n} i⁴ = Θ(n⁵); Faulhaber's formula
6   Σ_{i=1}^{n} a^i = (a^{n+1} − a)/(a − 1)                        Σ_{i=1}^{n} Θ(a^i) = Θ(aⁿ)               If a = 2, Σ_{i=0}^{n} 2^i = Θ(2ⁿ)
7   Σ_{i=1}^{n} i·a^i = (n·a^{n+2} − (n+1)·a^{n+1} + a)/(a − 1)²   Σ_{i=1}^{n} Θ(i·a^i) = Θ(n·aⁿ)           If a = 2, Σ_{i=0}^{n} i·2^i = Θ(n·2ⁿ)
8   Σ_{i=1}^{n} f(n) = n·f(n)                                      Σ_{i=1}^{n} Θ(f(n)) = Θ(n·f(n))          Σ_{i=1}^{n} (3n + 1) = Θ(n²); Σ_{i=1}^{n} log n = Θ(n log n)
9   Σ_{i=1}^{n} log i = log ∏_{i=1}^{n} i = log n!                 Σ_{i=1}^{n} Θ(log i) = Θ(n log n)
10  Σ_{i=1}^{n} i log i                                            Σ_{i=1}^{n} Θ(i log i) = Θ(n² log n)
Chapter 2

Recursive and Inductive Programming

Understanding the concept of recursion is integral not only to formulating certain com-
putational problems but also to designing algorithms. First order linear recurrence relations
are the focus of this chapter. Other types of recursions will be dealt with in other chapters.
Induction is another concept closely related to recursion. While the concept of induction
is widely adopted for proving theorems, it is primarily introduced here for the purpose of
designing algorithms.
Although the first algorithm design paradigm we will cover, called ‘inductive program-
ming,’ does not provide the optimal algorithm for many problems, it enables one to devise
accurate algorithms for many computational problems and to better understand the proof by
induction method. Practicing this paradigm on simple problems in this chapter will sharpen
problem solving skills necessary for solving more complex problems in subsequent chapters.
The objectives of this chapter include the following: understanding the concept of recur-
sion, deriving a first order linear recurrence relation, designing algorithms by the inductive
programming paradigm, and proving theorems using the proof by induction method.


2.1 Recursion
The best joke on recursion is a play on its definition. Recursion is defined recursively
as follows: “If you still don’t get it, see recursion.” Recursion appears in many areas of
computer science, and plays an important role in algorithm design and analysis. It is also
often useful for formulating computational problems as mentioned earlier in Chapter 1.

2.1.1 Recursive Programming


A function is conventionally written with two parts separated by an equal sign. The
left side of a function equation contains the function name followed by parentheses contain-
ing an ordered list of input arguments. The right side of a function equation contains a
computational definition, such that the output can be evaluated when values are assigned
to the variables. Consider the example of the function used to compute the length of the
hypotenuse of a right triangle: hypotenuse_leng(a, b) = $\sqrt{a^2 + b^2}$. The function name is
hypotenuse_leng and the function has two input arguments, which are the leg lengths a
and b of a right triangle. When a = 3 and b = 4, the function can be evaluated as
hypotenuse_leng(3, 4) = $\sqrt{3^2 + 4^2} = 5$.
A recursive function is a function that is defined in terms of itself. Simply put, the
function name on the left hand side of the equation appears on the right hand side of the
equation. There are multiple lines on the right side of a recursive function. Usually, a basic
recursion function has two lines: a recursive call and a base case. For example, the nth
triangular number Problem 1.6, defined on page 9, can be defined by a recursive function as
follows:
$$\mathrm{TRN}(n) = \begin{cases} \mathrm{TRN}(n-1) + n & \text{if } n > 1 \text{ (recursive call)} \\ 1 & \text{if } n = 1 \text{ (base case)} \end{cases} \tag{2.1}$$
To evaluate TRN(5), the recursive definition in eqn (2.1) does not directly return the desired
value but instead returns TRN(4) + 5, which requires solving the value of TRN(4). Hence,
one must list all recursive calls of the equation until the base case is reached, and then find
the proper values by plugging in the TRN(n − 1) part of the equation backwards.
TRN(5) = TRN(4) + 5 = 15
TRN(4) = TRN(3) + 4 = 10
TRN(3) = TRN(2) + 3 = 6
TRN(2) = TRN(1) + 2 = 3
TRN(1) = 1
If not for the base case, the evaluation of a recursion function would never terminate.
In most conventional programming languages, recursive programming contains a method
that is defined by itself. A method name appears in the body of the procedure. The recursive
programming algorithm for equation (2.1) can be written as follows:
Algorithm 2.1. Recursive triangular number
TRN(n)
if n = 1, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else return TRN(n − 1) + n . . . . . . . . . . . . . . . . . . . . 2
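In Python, eqn (2.1) and Algorithm 2.1 translate directly (shown for illustration):

def trn_rec(n):
    # Recursive triangular number, eqn (2.1).
    if n == 1:
        return 1               # base case
    return trn_rec(n - 1) + n  # recursive call

assert trn_rec(5) == 15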
A compiler needs a stack to evaluate the output, as illustrated in Figure 2.1. To evaluate
TRN(3), the procedure requires invoking TRN(2). The procedure of invoking TRN(2) is

(a) push TRN(3) = TRN(2) + 3; (b) push TRN(2) = TRN(1) + 2; (c) push TRN(1) = 1;
(d) pop TRN(1), yielding TRN(2) = 1 + 2 = 3; (e) pop TRN(2), yielding TRN(3) = 3 + 3 = 6.

Figure 2.1: Stack of solving a linear recursion

stacked up on top of the TRN(3) invoking procedure. Figures 2.1 (a) ∼ (c) illustrate invoking
the recursive calls in the stack. Figures 2.1 (d) and (e) demonstrate returning to previously
invoking recursive calls and evaluating the respective values. The recursive Algorithm 2.1
takes Θ(n) time and requires Θ(n) space for the internal stack, which will be covered in
Chapter 7.

2.1.2 Types of Recursion


The simplest recurrence relation is the first order linear recurrence form, where the
recurrence part of the equation contains only the previous f (n − 1). This chapter mainly
covers first order linear recurrence relations. For each recursive call, the value n is diminished
by one until the base case is reached. If a recurrence contains f (n − x) where x > 1, it is
categorized as a higher order subtract recurrence. These appear in Chapter 4 and are mainly
dealt with in Chapter 5. In first order and higher order recurrence relations, each recursive
call reduces n by subtraction and eventually reaches the base case.

Table 2.1: Types of recursion by their diminishing function

Diminishing function    Recurrence relation examples
First order linear      T(n) = T(n−1) + O(1) ←→ T(n) ∈ Θ(n)
                        T(n) = T(n−1) + Θ(n) ←→ T(n) ∈ Θ(n²)
                        T(n) = 2T(n−1) + O(1) ←→ T(n) ∈ Θ(2ⁿ)
Higher order subtract   T(n) = T(n−2) + O(1) ←→ T(n) ∈ Θ(n)
                        T(n) = T(n−1) + T(n−2) ←→ T(n) ∈ Θ(φⁿ)
                        T(n) = Σ_{i=1}^{k} T(n−i), k ≥ 2 ←→ T(n) ∈ Ω(φⁿ)
Divide n/d              T(n) = T(n/2) + O(1) ←→ T(n) ∈ Θ(log n)
                        T(n) = 2T(n/2) + O(1) ←→ T(n) ∈ Θ(n)
                        T(n) = 2T(n/2) + Θ(n) ←→ T(n) ∈ Θ(n log n)
Modulo m % n            T(m, n) = T(n, m % n) ←→ T(m, n) = GCD(m, n)
                        T(m, n) = T(n, m % n) + O(1) ←→ T(n) ∈ O(log n)
Logarithm log n         T(n) = T(log n) + 1 ←→ T(n) ∈ Θ(log* n)
                        T(n) = T(log n) + Θ(n) ←→ T(n) ∈ Θ(n)
Square root √n          T(n) = T(√n) + O(1) ←→ T(n) ∈ Θ(log log n)
                        T(n) = √n·T(√n) + Θ(n) ←→ T(n) ∈ Θ(n log log n)
                        T(n) = √n·T(√n) + Θ(log n) ←→ T(n) ∈ Θ(log n log log n)
* Base cases are omitted for simplicity's sake.

If division is used instead of subtraction as a diminishing function, it is called a divide


recurrence relation. Chapter 3 covers divide recurrence relations. The discrete version of
divide recurrence relations is addressed in Chapter 5.
Common recurrence relation diminishing functions in algorithm design and analysis include
subtraction −, division /, modulo %, logarithm log n, and square root √n, as listed in
Table 2.1. The recurrence relation in Euclid’s Algorithm 1.4 uses the modulo function as a
diminishing function. The computational time complexity of Euclid’s Algorithm 1.4 can be
stated as a recurrence relation, T (m, n) = T (n, m % n) + O(1), and thus T (n) = O(log n).
The recurrence relation in eqn (2.2) is called the iterated logarithm and denoted log∗ n
in [42, p 55]. This log star function appears in the analysis of algorithms for Delaunay
triangulation [38] and for 3-coloring an n-cycle [46] problems.
$$\log^* n = \begin{cases} 0 & \text{if } n \le 1 \\ \log^*(\log n) + 1 & \text{if } n > 1 \end{cases} \tag{2.2}$$
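A small Python sketch of eqn (2.2):

from math import log2

def log_star(n):
    # Iterated logarithm: how many times log must be applied
    # before the value drops to 1 or below.
    if n <= 1:
        return 0
    return log_star(log2(n)) + 1

assert log_star(65536) == 4   # 65536 -> 16 -> 4 -> 2 -> 1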

Finally, recurrence relations with a square root as a diminishing function appear in the van
Emde Boas tree data structure [174]. Readers may focus on first order linear recursions in
this chapter and skip the other recurrence relations in Table 2.1 for the time being. Other
recurrence relations will be easily proven and understood when readers reach Chapter 5.

2.1.3 Closed Form of a First Order Linear Recursion


Most recurrence relation functions have equivalent closed formulas. Table 2.2 contains

Table 2.2: Closed form of first order linear recurrence relations

Recursion                                     Sum & closed form                                   Asymptotic
T(n) = T(n−1)                                 1 (or the base case)                                T(n) ∈ O(1)
T(n) = T(n−1) + c                             Σ_{i=1}^{n} c = cn                                  T(n) ∈ Θ(n)
T(n) = T(n−1) + cn                            c Σ_{i=1}^{n} i = (c/2)·n(n+1)                      T(n) ∈ Θ(n²)
T(n) = T(n−1) + f(n)                          Σ_{i=1}^{n} f(i)                                    T(n) ∈ Θ(n × f(n)) if f(n) ∈ polynomial
T(n) = T(n−1) + ⌈log n⌉                       Σ_{i=1}^{n} ⌈log i⌉ = n⌈log n⌉ − 2^⌈log n⌉ + 1      T(n) ∈ Θ(n log n)
T(n) = aT(n−1)                                ∏_{i=1}^{n−1} a = a^{n−1}                           T(n) ∈ Θ(aⁿ)
T(n) = 2T(n−1) + 1
     = T(n−1) + 2^{n−1}                       Σ_{i=1}^{n} 2^{i−1} = 2ⁿ − 1                        T(n) ∈ Θ(2ⁿ)
T(n) = 2T(n−1) + c
     = T(n−1) + (c+1)·2^{n−2}                 (c+1)·2^{n−1} − c                                   T(n) ∈ Θ(2ⁿ)
T(n) = 3T(n−1) + 1
     = T(n−1) + 3^{n−1}                       Σ_{i=1}^{n} 3^{i−1} = (3ⁿ − 1)/2                    T(n) ∈ Θ(3ⁿ)
T(n) = aT(n−1) + 1
     = T(n−1) + a^{n−1}                       Σ_{i=1}^{n} a^{i−1} = (aⁿ − 1)/(a − 1)              T(n) ∈ Θ(aⁿ)
* Base cases are omitted for simplicity's sake.

common first order linear recurrence relations with their equivalent closed formulas and
asymptotic notations.
Consider the problem of finding the nth even number. To come up with a first order
linear recursion, imagine the (n−1)th even number, E(n−1), is known and try to determine
the nth even number, E(n). All we need to do is simply add 2 to E(n − 1). The following
first order linear recurrence with a base condition and recursive call can be derived.
$$E(n) = \begin{cases} 0 & \text{if } n = 0 \\ E(n-1) + 2 & \text{if } n > 0 \end{cases} \tag{2.3}$$

The recurrence relation in eqn (2.3) is clearly equivalent to 2n. Similarly, the nth natural
number can also be defined recursively.
$$N(n) = \begin{cases} 0 & \text{if } n = 0 \\ N(n-1) + 1 & \text{if } n > 0 \end{cases} \tag{2.4}$$

Clearly, N (n) = n. Both examples satisfy the second row of Table 2.2. The base case value
need not always be zero. For example, the nth odd number, defined recursively in eqn (2.5),
has a base case value equal to one.
$$\mathrm{Odd}(n) = \begin{cases} 1 & \text{if } n = 1 \\ \mathrm{Odd}(n-1) + 2 & \text{if } n > 1 \end{cases} \tag{2.5}$$

Clearly, Odd(n) = 2n − 1. In general, a recursion of the form T(n) = T(n − 1) + c is Θ(n).

Theorem 2.1. The following linear recurrence in eqn (2.6) is equivalent to c(n − 1) + 1.
T(n) = { T(n−1) + c     if n > 1
       { 1              if n = 1        (2.6)

Proof. (by induction) Base case: when n = 1, eqn (2.6) returns 1 and c(1 − 1) + 1 = 1.
Inductive step: Assuming T(n) = c(n − 1) + 1 for n > 1, show T(n + 1) = cn + 1.

T(n + 1) = T(n) + c              by eqn (2.6)
         = c(n − 1) + 1 + c      by assumption
         = cn + 1                goal 

An ordinary induction or proof by induction can be used to prove the respective closed
forms of many first order linear recurrence relations.
Theorem 2.2. The following linear recurrence in eqn (2.7) is equivalent to (c/2) n(n + 1).

T(n) = { T(n−1) + cn    if n > 0
       { 0              if n = 0        (2.7)

Proof. (by induction) Base case: when n = 0, eqn (2.7) returns 0 and (c/2) × 0 × 1 = 0.
Inductive step: Assuming T(n) = (c/2) n(n + 1) for n > 0, show T(n + 1) = (c/2)(n + 1)(n + 2).

T(n + 1) = T(n) + c(n + 1)                by eqn (2.7)
         = (c/2) n(n + 1) + c(n + 1)      by assumption
         = (c/2)(n + 1)(n + 2)            goal 
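Closed forms like these are easy to sanity-check numerically before or after proving them. The following Python sketch (an illustrative check of ours, not part of the proof) compares the recurrence of eqn (2.7) with its claimed closed form on small instances.

def t_recurrence(n, c):
    """Evaluates T(n) = T(n - 1) + c*n with T(0) = 0, solving forward."""
    t = 0
    for i in range(1, n + 1):
        t += c * i
    return t

def t_closed(n, c):
    """Claimed closed form (c/2) * n * (n + 1); n*(n+1) is always even."""
    return c * n * (n + 1) // 2

# The recurrence and the closed form agree on every small instance tested.
assert all(t_recurrence(n, 3) == t_closed(n, 3) for n in range(100))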
A slightly complicated proof by induction is given for the following theorem:

Theorem 2.3. The following linear recurrence in eqn (2.8) is equivalent to
n⌈log n⌉ − 2^⌈log n⌉ + 1.

T(n) = { T(n−1) + ⌈log n⌉     if n > 1
       { 0                    if n = 1        (2.8)

Proof. (by induction) Base case: when n = 1, eqn (2.8) returns 0 and 1 × ⌈log 1⌉ − 2^⌈log 1⌉ + 1 = 0 − 1 + 1 = 0.
Inductive step: Assuming T(n) = n⌈log n⌉ − 2^⌈log n⌉ + 1 for n > 1,
show T(n + 1) = (n + 1)⌈log (n + 1)⌉ − 2^⌈log (n+1)⌉ + 1.
There are two cases: when n is an exact power of 2 and when it is not.
Case 1: If n is an exact power of 2, then

⌈log (n + 1)⌉ = ⌈log n⌉ + 1        (2.9)
2^⌈log n⌉ = n                      (2.10)

Using eqns (2.9) and (2.10), the inductive step to be proven for case 1 becomes eqn (2.11).

T(n + 1) = (n + 1)(⌈log n⌉ + 1) − 2^{⌈log n⌉+1} + 1
         = (n + 1)⌈log n⌉ + n + 1 − 2n + 1
         = (n + 1)⌈log n⌉ − n + 2        (2.11)

Inductive step for case 1:

T(n + 1) = T(n) + ⌈log n⌉ + 1                        by eqns (2.8) and (2.9)
         = n⌈log n⌉ − n + 1 + ⌈log n⌉ + 1            by assumption and (2.10)
         = (n + 1)⌈log n⌉ − n + 2                    goal in eqn (2.11)

Case 2: If n is not an exact power of 2, then

⌈log (n + 1)⌉ = ⌈log n⌉        (2.12)

Using eqn (2.12), the inductive step to be proven for case 2 becomes eqn (2.13).

T(n + 1) = (n + 1)⌈log n⌉ − 2^⌈log n⌉ + 1        (2.13)

Inductive step for case 2:

T(n + 1) = T(n) + ⌈log n⌉                            by eqns (2.8) and (2.12)
         = n⌈log n⌉ − 2^⌈log n⌉ + 1 + ⌈log n⌉        by assumption
         = (n + 1)⌈log n⌉ − 2^⌈log n⌉ + 1            goal in eqn (2.13)

∴ T(n) = n⌈log n⌉ − 2^⌈log n⌉ + 1. 

2.2 Inductive Programming on Arithmetic Problems

Figure 2.2: Domino effect in inductive programming.

As observed in Chapter 1 and the previous section, many theorems can be proven by
induction. While the concept of induction had been known and implicitly utilized earlier,
Pascal was the first to precisely describe the process of mathematical induction [69, p 139].
Since then, induction has been widely used to prove many theorems. Anderson made a bold
statement asserting that induction is the foundation of all correctness proofs for computer
programs in [5]. Proof by induction involves proving the base case and the inductive step.
The reason why verifying these two cases provides a sufficient proof is the domino effect, as
depicted in Figure 2.2. If the first tile is guaranteed to be knocked down by the base case,
all remaining tiles are also guaranteed to collapse by the inductive step.
The inductive programming paradigm is arguably the most fundamental and widely used
algorithm design technique, although it is not the be-all and end-all of algorithm design.
It directly implements the domino effect for solving computational problems. The basic
notion behind the inductive programming paradigm is manifested in two stages: “Think
backward but solve forward.” During the backward thinking stage, one should attempt to
come up with a first order linear recurrence relation. Then, instead of solving backward, i.e.
recursively, the inductive programming model solves a problem forward, i.e. sequentially
or iteratively using the domino effect. Algorithms based on inductive programming consist
merely of a base case and a loop for the domino effect. The inductive paradigm is also
referred to as “sequential programming” or “domino programming.”
Inductive Programming template

Backward thinking:   P(n) = { f(P(n−1))    if n > n_0
                            { P(n_0)       if n = n_0 (base case)

Forward solving:     P(n_0) ............................ base case
                     for i = (n_0 + 1) ∼ n
                         P(i) ← f(P(i−1))

The generic template above applies to most inductive programming algorithms for problems
that can be formulated by linear recurrence relations. The pseudo code is the forward solving

Blaise Pascal (1623 - 1662) was a French mathematician. One of his major contri-
butions was the “Treatise on the Arithmetical Triangle,” which describes a convenient
tabular presentation for binomial coefficients, now called Pascal’s Triangle. He also con-
structed a mechanical calculator, called Pascaline. He was the founder of probability
theory. © Portrait is in public domain.

part. Initially, it computes the base case. Sequentially, it invokes the inductive step, which
is the recurrence relation defined during the backward thinking stage.
In the remainder of this section, we will look at trivial arithmetic problems to better
understand the inductive programming paradigm. Other problems from elementary combi-
natorics will also be presented.

2.2.1 Summation Problems


Consider the summation problems introduced in Section 1.3. To come up with an algorithm
to compute the value of Σ_{i=1}^{n} f(i) by inductive programming, first think backward,
i.e. ask yourself, “Supposing that the solution for the smaller sub-problem sumf(n − 1) is
known, can we solve the larger problem, sumf(n)?” The following first order linear recurrence
relation can be derived since Σ_{i=1}^{n} f(i) = Σ_{i=1}^{n−1} f(i) + f(n) by definition.

sumf(n) = { sumf(n−1) + f(n)    if n > 0
          { 0                   if n = 0        (2.14)

Using the recurrence relation in eqn (2.14), a pseudo code using inductive programming is
presented as follows:

Algorithm 2.2. Sequential summation

sumf(n)
O = 0 ............................................ 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
O = O + f (i) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Line 1 is the base case used to get started and line 3 is the inductive step that will be
repeated. The loop enables the domino effect, i.e. the sub-problem’s solution will lead to
the solution of the larger problem. Algorithm 2.2 is illustrated in Figure 2.3 using a toy
example of sumf(4).
 n   sumf(n) = Σ_{i=1}^{n} f(i)                  = sumf(n−1) + f(n)
 0   sumf(0) = 0                                 base case
 1   sumf(1) = f(1)                              = sumf(0) + f(1)
 2   sumf(2) = f(1) + f(2)                       = sumf(1) + f(2)
 3   sumf(3) = f(1) + f(2) + f(3)                = sumf(2) + f(3)
 4   sumf(4) = f(1) + f(2) + f(3) + f(4)         = sumf(3) + f(4)

Figure 2.3: Sequential summation Algorithm 2.2 illustration.
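As a concrete sketch, Algorithm 2.2 translates to Python almost line for line. Passing f in as a function parameter is an implementation choice of ours, since the pseudo code leaves f abstract.

def sumf(n, f):
    """Sequential summation (Algorithm 2.2): returns f(1) + f(2) + ... + f(n)."""
    out = 0                      # base case: sumf(0) = 0
    for i in range(1, n + 1):    # the domino-effect loop
        out = out + f(i)         # inductive step: sumf(i) = sumf(i-1) + f(i)
    return out

print(sumf(4, lambda i: i))  # -> 10; with f(i) = i this is the 4th triangular number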

The next step is verifying the correctness of the algorithm. Proving correctness is triv-
ial by induction since the inductive programming algorithm itself is the domino effect of
induction.

Theorem 2.4. Algorithm 2.2 sumf(n) correctly produces Σ_{i=1}^{n} f(i).

Proof. (by induction)
Base case: when n = 0, (sumf(0) = 0) = (Σ_{i=1}^{0} f(i) = 0)
Inductive step: Assuming sumf(n) = Σ_{i=1}^{n} f(i), show sumf(n + 1) = Σ_{i=1}^{n+1} f(i).

sumf(n + 1) = sumf(n) + f(n + 1) = Σ_{i=1}^{n} f(i) + f(n + 1) = Σ_{i=1}^{n+1} f(i)

∴ sumf(n) correctly produces Σ_{i=1}^{n} f(i). 

All summation notations in Section 1.3, such as triangular number, square number, etc.,
can be solved by inductive programming and their correctness proofs are similar to the
proof by induction for Theorem 2.4. Case in point, the naïve Algorithm 1.5 on page 10
for computing the nth triangular number, TRN(n) = Σ_{i=1}^{n} i, is based on the inductive
programming paradigm. As a matter of fact, the definition of summation itself, Σ_{i=1}^{n},
given in eqn (1.10) on page 9 can be thought of as inductive programming. A number
of correctness proofs for algorithms based on the inductive programming paradigm in this
chapter are left as exercises, since they resemble the generic proof by induction for
Theorem 2.4.

2.2.2 Multiplication
Another simple example is the integer multiplication Problem 1.14: a × n. In order to
demonstrate inductive programming, let’s assume that n is a nonnegative integer without
loss of generality. The recurrence relation in eqn (2.15) can be understood by a student who
has not yet memorized the multiplication table. A student who has difficulty answering
the question “6 × 8?” may answer “48” immediately when prompted with the fact that
“6 × 7 = 42”. They would simply add six to 42. Hence, the following first order linear
recurrence relation for the multiplication problem can be derived.
times(a, n) = { times(a, n−1) + a    if n > 0
             { 0                     if n = 0        (2.15)

Algorithm 1.11 on page 21 is an inductive programming based algorithm that is derived
from the first order linear recurrence relation in equation (2.15).

2.2.3 n-digit Long Integer Addition


Consider the problem of adding two n-digit long positive integers.
Problem 2.1. n-digit long integer addition
Input: n-digit long integers A_{1∼n} and B_{1∼n} where a_i and b_i ∈ {0, 1, ···, 9}
Output: C_{1∼n+1} = A_{1∼n} + B_{1∼n}

Note that index 1 is the least significant digit, i.e. the rightmost digit. For a toy example
where n = 8,

A1∼n 4 1 2 4 5 3 2 9
+ B1∼n + 8 9 3 2 5 7 4 3
C1∼n+1 1 3 0 5 7 1 0 7 2
Although a sequential algorithm is commonly taught in most elementary schools, here
we will design an algorithm using inductive programming. Start by thinking backward. If
the answer for the smaller sub-problem, add(A1∼n−1 , B1∼n−1 ) is known, can we solve the
larger problem, add(A1∼n , B1∼n )? One may sketch the following:

A1∼n−1 1 2 4 5 3 2 9
+ B1∼n−1 + 9 3 2 5 7 4 3
C1∼n 1 0 5 7 1 0 7 2
an 4
+ bn 8
C1∼n+1 1 3 0 5 7 1 0 7 2
Now, a first order linear recurrence relation can be derived.

add(A_{1∼n}, B_{1∼n}) = { add(A_{1∼n−1}, B_{1∼n−1}) + 10^{n−1}(a_n + b_n + c_n)    if n > 0
                        { 0                                                        if n = 0        (2.16)

where the carry, c_n = { ⌊(A_{1∼n−1} + B_{1∼n−1})/10^{n−1}⌋ = ⌊(a_{n−1} + b_{n−1} + c_{n−1})/10⌋    if n > 1
                       { 0                                                                          if n = 1

A pseudo code which invokes equation (2.16) by inductive programming is given below:

Algorithm 2.3. Grade school n-digit long integer addition


add(A_{1∼n}, B_{1∼n})
c_1 = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
c_{i+1} = ⌊(a_i + b_i + c_i)/10⌋ . . . . . . . . . . . . . . . 3
c_i = (a_i + b_i + c_i) mod 10 . . . . . . . . . . . . . . . . . 4
return C_{1∼n+1} . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Note that the carry c_{i+1} is computed in line 3, before line 4 overwrites c_i with the output digit.

This is equivalent to the widely known elementary school algorithm, which runs in Θ(n)
time. Its correctness proof is trivial by induction.
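A runnable Python sketch of Algorithm 2.3 follows. The list-based representation, with the least significant digit at index 0, is an assumption of this sketch.

def add(A, B):
    """Grade school addition (Algorithm 2.3) of two equal-length digit lists.
    Index 0 holds the least significant digit."""
    n = len(A)
    C = [0] * (n + 1)
    carry = 0
    for i in range(n):
        s = A[i] + B[i] + carry   # digit sum plus incoming carry
        carry = s // 10           # carry out, computed before the digit
        C[i] = s % 10             # output digit
    C[n] = carry                  # final carry is the most significant digit
    return C

# 41245329 + 89325743 = 130571072, least significant digit first:
print(add([9, 2, 3, 5, 4, 2, 1, 4], [3, 4, 7, 5, 2, 3, 9, 8]))
# -> [2, 7, 0, 1, 7, 5, 0, 3, 1]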

2.2.4 n-digit Long Integer Multiplication


The method taught in most elementary schools to multiply a single digit to a n-digit
long positive integer can also be realized as an inductive programming algorithm. Consider
the following toy example:

A1∼n 4 1 2 4 5 3 2 9
× x × 3
C1∼n+1 1 2 3 7 3 5 9 8 7
Problem 2.2. n-digit long integer by a single digit multiplication, times(A1∼n , x)
Input: An n-digit long integer A1∼n and a digit x where ai and x ∈ {0, 1, · · · , 9}
Output: C1∼n+1 = A1∼n × x
Again, consider the smaller sub-problem’s solution to answer the larger original problem.

A_{1∼n−1}                          1 2 4 5 3 2 9
× x                              ×             3
C_{1∼n}                          0 3 7 3 5 9 8 7
+ a_n × x × 10^{n−1}             1 2
A_{1∼n} × x                      1 2 3 7 3 5 9 8 7
Now a recurrence relation can easily be derived.
times1(A_{1∼n}, x) = { times1(A_{1∼n−1}, x) + 10^{n−1}(a_n × x)    if n > 0
                     { 0                                           if n = 0        (2.17)

Using the recurrence relation in equation (2.17), we can begin solving from the least signifi-
cant digit to the most significant digit, n. The domino effect guarantees the correct solution.
A pseudo code by inductive programming is written as follows:
Algorithm 2.4. Grade school n-digit long integer by a single digit multiplication
times1(A_{1∼n}, x)
c_1 = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
c_{i+1} = ⌊(a_i × x + c_i)/10⌋ . . . . . . . . . . . . . . . . . 3
c_i = (a_i × x + c_i) % 10 . . . . . . . . . . . . . . . . . . . . 4
return C_{1∼n+1} . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

As in Algorithm 2.3, the carry c_{i+1} is computed before c_i is overwritten with the output digit.
Algorithm 2.4 evidently takes linear time.
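A corresponding Python sketch of Algorithm 2.4, with the same least-significant-digit-first representation as the addition example:

def times1(A, x):
    """Multiply a digit list A (index 0 = least significant) by one digit x
    (Algorithm 2.4)."""
    n = len(A)
    C = [0] * (n + 1)
    carry = 0
    for i in range(n):
        p = A[i] * x + carry      # partial product plus incoming carry
        carry = p // 10           # carry out, at most 8 for decimal digits
        C[i] = p % 10             # output digit
    C[n] = carry
    return C

print(times1([9, 2, 3, 5, 4, 2, 1, 4], 3))  # 41245329 * 3 = 123735987
# -> [7, 8, 9, 5, 3, 7, 3, 2, 1]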
Consider the problem of multiplying two m- and n-digit long positive integers. The first
argument is called the “multiplicand”, which is m digits long, and the second argument is
called the “multiplier”, which is n digits long. The resulting output is an (m + n)-digit long
integer. For example,

A1∼m 4 1 2 4 5 3 2 9
× B1∼n × 8 9 3 2 5 7 4 3
C1∼m+n 3 6 8 4 2 6 9 6 5 8 2 0 4 4 4 7
Problem 2.3. n-digit long integer multiplication
Input: m and n digit long integers A1∼m and B1∼n where ai and bi ∈ {0, 1, · · · , 9}
Output: C1∼m+n = A1∼m × B1∼n

In order to devise an algorithm by inductive programming, it is necessary to think
backward. Imagine the (m digit × (n − 1) digit) multiplication sub-problem’s solution is
given. Try to obtain the solution for the original (m digit × n digit) multiplication problem
as follows:

A_{1∼m}                              4 1 2 4 5 3 2 9
× B_{1∼n−1}                        ×   9 3 2 5 7 4 3
C_{1∼n+m−1}                      3 8 4 6 4 3 3 3 8 2 0 4 4 4 7
+ A_{1∼m} × b_n × 10^{n−1}       3 2 9 9 6 2 6 3 2
A_{1∼m} × B_{1∼n}                3 6 8 4 2 6 9 6 5 8 2 0 4 4 4 7

Now a first order linear recurrence relation can be derived.

timesn(A_{1∼m}, B_{1∼n}) =
    { timesn(A_{1∼m}, B_{1∼n−1}) + 10^{n−1} times1(A_{1∼m}, b_n)    if n > 1
    { times1(A_{1∼m}, b_1)                                          if n = 1        (2.18)

The recurrence relation in equation (2.18) invokes the (m + 1)-digit long integer addition in
Algorithm 2.3 and the single digit to m-digit long integer multiplication in Algorithm 2.4.
Using this recurrence relation, we can begin solving from the least significant digit to the
most significant digit, n. Each partial product is right-aligned with the respective digit in
the multiplier. The partial products are then summed. This method is often referred to
as the ‘grade school multiplication algorithm,’ such as in [113], since it is taught in most
elementary schools. The grade school multiplication algorithm is illustrated for the following
toy example.

A_{1∼m}                                        4 1 2 4 5 3 2 9
× B_{1∼n}                                    × 8 9 3 2 5 7 4 3
10^0 times1(A_{1∼m}, b_1)                      1 2 3 7 3 5 9 8 7
10^1 times1(A_{1∼m}, b_2)                    1 6 4 9 8 1 3 1 6
  ...                                          ...
10^{n−1} times1(A_{1∼m}, b_n)    3 2 9 9 6 2 6 3 2
A_{1∼m} × B_{1∼n}                3 6 8 4 2 6 9 6 5 8 2 0 4 4 4 7

Below is a pseudo code based on inductive programming:

Algorithm 2.5. Grade school m-digit × n-digit long integer multiplication

timesn(A_{1∼m}, B_{1∼n})
C_{1∼m+1} = times1(A_{1∼m}, b_1) . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
C_{1∼m+i} = add(C_{1∼m+i−1}, times1(A_{1∼m}, b_i) × 10^{i−1}) . . . . 3
return C_{1∼m+n} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Note that ‘×10^{i−1}’ in line 3 is the digit shift for the alignment, which takes constant time.
The domino effect guarantees the correct solution. Proving the recurrence relation in
equation (2.18) is sufficient for verifying the correctness of the inductive programming
algorithm.

Theorem 2.5. Algorithm 2.5 timesn(A_{1∼m}, B_{1∼n}) correctly produces A_{1∼m} × B_{1∼n}.

Proof. (by induction) Let B_{1∼n} = b_n × 10^{n−1} + B_{1∼n−1}; then A_{1∼m} × B_{1∼n} = A_{1∼m} × b_n ×
10^{n−1} + A_{1∼m} × B_{1∼n−1}. Thus, the recurrence relation in equation (2.18) is correct. The
domino effect can be trivially proven by induction. 

Algorithm 2.5 takes quadratic Θ(mn) time and linear Θ(m + n) space.
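Combining the two earlier sketches gives a runnable version of Algorithm 2.5. It assumes the add and times1 functions defined above, and realizes the ×10^{i−1} shift by prepending zero digits (a representation choice of ours).

def timesn(A, B):
    """Grade school multiplication (Algorithm 2.5) of digit lists A (m digits)
    and B (n digits), both least significant digit first."""
    m, n = len(A), len(B)
    C = times1(A, B[0])                       # first partial product
    for i in range(1, n):
        partial = [0] * i + times1(A, B[i])   # shift by 10^i via zero digits
        width = max(len(C), len(partial))     # pad so add() sees equal lengths
        C = add(C + [0] * (width - len(C)),
                partial + [0] * (width - len(partial)))
    return C[:m + n]                          # the product fits in m+n digits

A = [9, 2, 3, 5, 4, 2, 1, 4]   # 41245329
B = [3, 4, 7, 5, 2, 3, 9, 8]   # 89325743
print(timesn(A, B))            # digits of 3684269658204447, LSD first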

2.2.5 Matrix Multiplication


A matrix is a two dimensional rectangular array of numbers. One of the most frequent
operations carried out with matrices is matrix multiplication. Before embarking on an
algorithm to solve the matrix multiplication problem, it is necessary to begin with a dot
product of two vectors problem. A dot product problem, or simply DPD, takes two equal-
size vectors and produces the sum of the products of the corresponding entries of the two
vectors.
Problem 2.4. Dot product
Input: Two vectors A_{1∼n} and B_{1∼n} where a_i and b_i ∈ R
Output: Σ_{i=1}^{n} a_i b_i

R denotes the set of all real numbers. A first order linear recurrence of the dot product
is shown below:
DPD(A_{1∼n}, B_{1∼n}) = { a_1 × b_1                                 if n = 1
                        { DPD(A_{1∼n−1}, B_{1∼n−1}) + a_n × b_n     if n > 1        (2.19)

By mirroring the problem definition or the recurrence relation in equation (2.19), a linear
time algorithm based on the inductive programming paradigm can be devised.
Algorithm 2.6. Dot product
dot product(A1∼n , B1∼n )
D = 0 ............................................ 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
D = D + ai × bi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
The number of columns in the first matrix Al×m and the number of rows in the second
matrix Bm×n must be equal in order to make matrix multiplication possible.
Problem 2.5. Matrix multiplication
Input: Two matrices A_{l×m} and B_{m×n}
Output: C_{l×n} where c_{i,j} = Σ_{k=1}^{m} a_{i,k} b_{k,j} = DPD(A_{i,1∼m}, B_{1∼m,j})

For example, the product of a 3 × 2 matrix A and a 2 × 2 matrix B results in a 3 × 2
matrix C.

        [ a1,1  a1,2 ]   [ b1,1  b1,2 ]   [ a1,1 b1,1 + a1,2 b2,1    a1,1 b1,2 + a1,2 b2,2 ]
A × B = [ a2,1  a2,2 ] × [ b2,1  b2,2 ] = [ a2,1 b1,1 + a2,2 b2,1    a2,1 b1,2 + a2,2 b2,2 ]
        [ a3,1  a3,2 ]                    [ a3,1 b1,1 + a3,2 b2,1    a3,1 b1,2 + a3,2 b2,2 ]

To obtain the value of the ith row and jth column output cell, ci,j , we take the dot product
of the ith row vector of matrix A and the jth column vector of matrix B. Although the cells
can be computed in any order, one possible order is given here to derive its first order linear
recursion in equation (2.20) and the resulting inductive programming algorithm. Notice
that if we add one more column to B, the previously computed matrix remains the same.
Let Bm×n = [Bm×n−1 |B1∼m,n ] denote appending the nth vertical column vector, B1∼m,n
to the matrix Bm×n−1 .

A_{l×m} × B_{m×n} = A_{l×m} × [B_{m×n−1} | B_{1∼m,n}] = [A_{l×m} × B_{m×n−1} | A_{l×m} × B_{1∼m,n}]        (2.20)

For the 3 × 2 by 2 × 3 instance, the first two output columns are the previously computed
product, and the appended third column is A × B_{1∼2,3}:

[ a1,1  a1,2 ]   [ b1,1  b1,2  b1,3 ]   [ a1,1 b1,1 + a1,2 b2,1   a1,1 b1,2 + a1,2 b2,2   a1,1 b1,3 + a1,2 b2,3 ]
[ a2,1  a2,2 ] × [ b2,1  b2,2  b2,3 ] = [ a2,1 b1,1 + a2,2 b2,1   a2,1 b1,2 + a2,2 b2,2   a2,1 b1,3 + a2,2 b2,3 ]
[ a3,1  a3,2 ]                          [ a3,1 b1,1 + a3,2 b2,1   a3,1 b1,2 + a3,2 b2,2   a3,1 b1,3 + a3,2 b2,3 ]
Using the first order linear recursion in equation (2.20), an algorithm based on the inductive
programming paradigm can be formulated.
Algorithm 2.7. Matrix multiplication
MAT multiplication(Al×m , Bm×n )
create a matrix Cl×n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
for j = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 to l . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Ci,j = dot product(Ai,1∼m , B1∼m,j ) . . . . . . . . . . 4
return C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
The computational time complexity of Algorithm 2.7 is Θ(lmn). If matrices are square
n × n matrices, i.e. n = l = m, the complexity becomes Θ(n3 ) and thus Algorithm 2.7 is
said to be a cubic time algorithm.
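Algorithms 2.6 and 2.7 translate directly to Python. In this sketch, matrices are stored as lists of rows (a representation choice of ours).

def dot_product(A, B):
    """Dot product (Algorithm 2.6) of two equal-length vectors."""
    d = 0
    for a, b in zip(A, B):
        d += a * b
    return d

def mat_multiplication(A, B):
    """Matrix multiplication (Algorithm 2.7): A is l x m, B is m x n."""
    l, m, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(l)]
    for j in range(n):                      # append one output column at a time
        col = [B[k][j] for k in range(m)]   # jth column vector of B
        for i in range(l):
            C[i][j] = dot_product(A[i], col)
    return C

A = [[1, 2], [3, 4], [5, 6]]      # 3 x 2
B = [[7, 8], [9, 10]]             # 2 x 2
print(mat_multiplication(A, B))   # -> [[25, 28], [57, 64], [89, 100]]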

2.2.6 Power
Inductive programming arguments similar to those for the summation notation apply to
the product notation Π introduced in subsection 1.3.2 on page 14. Problems formulated by
Π notation include many related to permutations in elementary combinatorics.
Ordered arrangements of length n drawn from k possible elements where repetition is
allowed are known as permutations with repetition. For example, there are 10^5 possible
five-digit zip codes, where n = 5 and k = 10. There are 2^16 possible 16-bit binary codes,
where n = 16 and k = 2. Permutation problems where order matters and repetition is allowed
involve solving the power function, k^n. Although k and n need not be integers, certain
assumptions are made to illustrate the inductive programming paradigm better.

Problem 2.6. Power, pow(k, n)
Input: k ∈ Z and n ∈ N
Output: k^n = Π_{i=1}^{n} k

A first order linear recurrence can be derived as follows:

pow(k, n) = { pow(k, n−1) × k    if n > 0
            { 1                  if n = 0        (2.21)

The following Algorithm 2.8 based on inductive programming is almost identical to Algo-
rithm 2.2.

Algorithm 2.8. Sequential Power

pow(k, n)
O = 1 ............................................ 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
O = O × k ..................................... 3
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

The domino effect that occurs in Algorithm 2.8 is illustrated using a toy example of pow(k, 4)
in Figure 2.4.
 n   pow(k, n) = Π_{i=1}^{n} k        = pow(k, n−1) × k
 0   pow(k, 0) = 1                    base case
 1   pow(k, 1) = k                    = pow(k, 0) × k
 2   pow(k, 2) = k × k                = pow(k, 1) × k
 3   pow(k, 3) = k × k × k            = pow(k, 2) × k
 4   pow(k, 4) = k × k × k × k        = pow(k, 3) × k

Figure 2.4: Sequential power Algorithm 2.8 illustration.

It is sufficient at the moment to state that the computational time complexity of Algo-
rithm 2.8 is Θ(n), assuming that the multiplication operation is constant for simplicity’s
sake. If we do not make this assumption, however, the computational time complexity of
Algorithm 2.8 would be Θ(n² log² k) if the long digit multiplication Algorithm 2.5 is used.
The digit length of the output is Θ(n log k) and the digit length of k is Θ(log k). Hence,
T(n) = Σ_{i=1}^{n} Θ(i log k × log k) = Θ(n² log² k).

2.2.7 Factorial
The term permutation by definition is an act of arranging n distinct elements in a certain
order. If there are (n = 3) distinct elements in a set A = {a, b, c}, (3! = 6) arrangements are
possible, i.e. {⟨a, b, c⟩, ⟨a, c, b⟩, ⟨b, a, c⟩, ⟨b, c, a⟩, ⟨c, a, b⟩, ⟨c, b, a⟩}. This permutation problem
involves solving the factorial problem n!.

Problem 2.7. Factorial, facto(n)
Input: n ∈ N
Output: n! = Π_{i=1}^{n} i

As shown in Figure 2.5, listing the first few examples often provides insight for deriving
a first order linear recurrence relation.

 n   facto(n) = Π_{i=1}^{n} i         = facto(n−1) × n
 0   facto(0) = 1                     base case
 1   facto(1) = 1                     = facto(0) × 1
 2   facto(2) = 1 × 2                 = facto(1) × 2
 3   facto(3) = 1 × 2 × 3             = facto(2) × 3
 4   facto(4) = 1 × 2 × 3 × 4         = facto(3) × 4

Figure 2.5: Factorial Algorithm 2.9 illustration.

facto(n) = { facto(n−1) × n    if n > 0
           { 1                 if n = 0        (2.22)
An inductive programming algorithm can be devised as follows:
Algorithm 2.9. Factorial
facto(n)
O = 1 ............................................ 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
O = O × i ......................................3
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
The computational time complexity of Algorithm 2.9 is Θ(n), assuming the multiplica-
tion operation is constant.
Proving the first order linear recurrence in equation (2.22) by induction suffices for the
correctness proof of Algorithm 2.9.
Theorem 2.6. Algorithm 2.9, facto(n) correctly produces n!.
Proof. Base case: When n = 0, (facto(0) = 1) = (0! = 1)
Inductive step: Supposing that facto(n) = n! is true, show facto(n + 1) = (n + 1)!.
facto(n + 1) = facto(n) × (n + 1) = n! × (n + 1) = (n + 1)! 

2.2.8 k-Permutation
Suppose that there are k kinds of prizes in a raffle and n number of people are par-
ticipating with a single ticket each. How many winning arrangements are possible? The
order of prizes matters and repetition is not allowed. This sort of permutation without
repetition problem is called the k-permutation of n problem. The notations P(n, k) or P_k^n
on a calculator button are conventionally used to denote this problem. The abbreviation

PNK is used traditionally, but since it is confused with many other problems, such as the
integer partition Problem 6.11 in Chapter 6, we will use the abbreviation KPN to identify
the k-permutation of n problem.
Problem 2.8. k-permutation of n, KPN(n, k)
Input: n and k ∈ N
Output: P(n, k) = Π_{i=n−k+1}^{n} i

A couple of different recurrence relations can be derived. The first one is as follows:

P(n, k) = { P(n, k−1) × (n − k + 1)    if k > 0
          { 1                          if k = 0        (2.23)

If we apply the recurrence relation in equation (2.23) to designing an algorithm, we have
the following pseudo code.
Algorithm 2.10. descending factorial
KPN(n, k)
O = 1 ............................................ 1
for i = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
O = O × (n − i + 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Algorithm 2.10 computes P(n, k) = n × (n−1) × ··· × (n−k+1) in descending order.
Hence, KPN is also called the descending factorial in [165, p 8] or falling factorial power
in [102, p 50] and is denoted as n^{k̲} or (n)_k.

n^{k̲} = Π_{i=0}^{k−1} (n − i) = n × (n−1) × ··· × (n−k+1) = P(n, k)        (k factors)

Another recurrence relation of the k-permutation is:

P(n, k) = { P(n−1, k−1) × n    if k > 0
          { 1                  if k = 0        (2.24)

If we apply inductive programming to an algorithm based on equation (2.24), we have the
following pseudo code.
Algorithm 2.11. KPN by ascending permutation
KPN(n, k)
O = 1 ............................................ 1
for i = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
O = O × (n − k + i) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Algorithm 2.11 computes P(n, k) = (n−k+1) × (n−k+2) × ··· × n in ascending order.

Π_{i=n−k+1}^{n} i = (n−k+1) × (n−k+2) × ··· × (n−1) × n = P(n, k)        (k factors)
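Algorithms 2.9, 2.10, and 2.11 become three near-identical loops in Python; this sketch mirrors the pseudo code above (the function names are ours).

def facto(n):
    """Factorial (Algorithm 2.9): 1 * 2 * ... * n."""
    out = 1
    for i in range(1, n + 1):
        out *= i
    return out

def kpn_descending(n, k):
    """Descending factorial (Algorithm 2.10): n * (n-1) * ... * (n-k+1)."""
    out = 1
    for i in range(1, k + 1):
        out *= n - i + 1
    return out

def kpn_ascending(n, k):
    """Ascending variant (Algorithm 2.11): (n-k+1) * (n-k+2) * ... * n."""
    out = 1
    for i in range(1, k + 1):
        out *= n - k + i
    return out

print(facto(4))               # -> 24
print(kpn_descending(10, 3))  # -> 720 = 10 * 9 * 8
print(kpn_ascending(10, 3))   # -> 720 = 8 * 9 * 10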

2.2.9 Fermat Number


Consider the problem of finding the nth Fermat Number [143].

Problem 2.9. Fermat Number
Input: n ∈ N
Output: FM(n) = 2^{2^n} + 1

The Fermat number problem requires understanding a couple of power properties.

Lemma 2.1. Power of a Power Property

(a^p)^n = a^{pn}

Proof. Base case: When n = 1, ((a^p)^1 = a^p) = (a^{p×1} = a^p).
Inductive step: Assume (a^p)^n = a^{pn} is true. Show (a^p)^{n+1} = a^{p(n+1)}.

(a^p)^{n+1} = (a^p)^n × (a^p)^1          by Theorem 1.19 Product rule of powers
            = a^{pn} × a^p               by assumption
            = a^{pn+p} = a^{p(n+1)}      by Theorem 1.19 Product rule of powers 

Most arithmetic operators such as ‘−’ and ‘÷’ follow the left associative rule. For example,
8 ÷ 2 ÷ 2 = (8 ÷ 2) ÷ 2 = 4 ÷ 2 = 2 but 8 ÷ 2 ÷ 2 ≠ 8 ÷ (2 ÷ 2). However, the power
operator follows the right associative rule.

Property 2.1. Right Associative Power Rule

a^{n^m} = a^{(n^m)} ≠ (a^n)^m

For example, 2^{2^3} = 2^{(2^3)} = 2^8 = 256 while (2^2)^3 = 4^3 = 64. The nth Fermat number
can be found by invoking power Problem 2.6 twice as follows:

FM(n) = pow(2, pow(2, n)) + 1        (2.25)

FM(n) should not be coded as pow(pow(2, 2), n), which would result in (4^n + 1).
The nth Fermat number has the first order linear recurrence relation in equation (2.26).

Theorem 2.7. The first order linear recurrence relation in equation (2.26) is equivalent to
2^{2^n} + 1.

FM(n) = { (FM(n−1) − 1)² + 1    if n > 0
        { 3                     if n = 0        (2.26)

Pierre de Fermat (1607-1665) was a French lawyer and the most famous amateur
mathematician. His major contributions to mathematics include Fermat’s principle for
light propagation, Fermat’s little theorem and Fermat’s last theorem.
© Portrait is in public domain.

Proof. Base case: When n = 0, (FM(0) = 3) = (2^{2^0} + 1 = 3).
Inductive step: Assume FM(n) = 2^{2^n} + 1 is true. Show FM(n + 1) = 2^{2^{n+1}} + 1.

FM(n + 1) = (FM(n) − 1)² + 1                     by eqn (2.26)
          = (2^{2^n} + 1 − 1)² + 1               by assumption
          = (2^{2^n})² + 1
          = 2^{2×2^n} + 1 = 2^{2^{n+1}} + 1      by Lemma 2.1 Power of a Power 
The first order linear recurrence in equation (2.26) enables us to devise the following
inductive programming algorithm:
Algorithm 2.12. Sequential Fermat Number
FMN(n)
FN = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
FN = (FN − 1)² + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return FN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

 n   FMN(n) = (FMN(n−1) − 1)² + 1
 0   FMN(0)                       base case    = 3
 1   FMN(1) = (FMN(0) − 1)² + 1                = 5
 2   FMN(2) = (FMN(1) − 1)² + 1                = 17
 3   FMN(3) = (FMN(2) − 1)² + 1                = 257
 4   FMN(4) = (FMN(3) − 1)² + 1                = 65537
 5   FMN(5) = (FMN(4) − 1)² + 1                = 4294967297
 6   FMN(6) = (FMN(5) − 1)² + 1                = 18446744073709551617
 7   FMN(7) = (FMN(6) − 1)² + 1                = 340282366920938463463374607431768211457

Figure 2.6: Fermat Number Algorithm 2.12 illustration.

Figure 2.6 illustrates Algorithm 2.12. The computational time complexity of Algo-
rithm 2.12 is Θ(n), assuming the multiplication operation is constant.
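Since Python integers have arbitrary precision, Algorithm 2.12 runs as written; the sketch below reproduces the rows of Figure 2.6.

def fmn(n):
    """Sequential Fermat number (Algorithm 2.12): FM(n) = 2**(2**n) + 1."""
    fn = 3                        # base case: FM(0) = 3
    for _ in range(n):            # inductive step, applied n times
        fn = (fn - 1) ** 2 + 1    # FM(i) = (FM(i-1) - 1)^2 + 1
    return fn

for n in range(6):
    print(n, fmn(n))  # 3, 5, 17, 257, 65537, 4294967297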

2.3 Problems on List


To better understand the inductive programming paradigm, we will consider several
problems involving lists. A list A of size n is denoted as A_{1∼n} = ⟨a_1, a_2, ···, a_n⟩. The ith
element of the list A is a_i. An array is assumed to be a representation of a list by default
throughout this book unless otherwise stated.

2.3.1 Prefix Sum


Consider the prefix sum problem, or simply PFS, which adds all elements in a list.
Problem 2.10. Prefix sum, prefixsum(A_{1∼n})
Input: A sequence A_{1∼n} of n numbers
Output: Σ_{i=1}^{n} a_i

For a toy example where A = ⟨3, 8, 2, 3, 5⟩ and n = 5, the output is 3 + 8 + 2 + 3 +
5 = 21. To come up with an algorithm using inductive programming, first think backward,
i.e. think “Supposing that the solution for the smaller sub-problem prefixsum(A_{1∼n−1}) is
known, can we solve the larger problem, prefixsum(A_{1∼n})?” The following first order linear
recurrence relation can be derived since Σ_{i=1}^{n} a_i = Σ_{i=1}^{n−1} a_i + a_n by definition.

Lemma 2.2. First order linear recurrence of prefix sum

prefixsum(A_{1∼n}) = { prefixsum(A_{1∼n−1}) + a_n    if n > 0
                     { 0                             if n = 0        (2.27)

Next, sketch or draw the forward solving steps, i.e. the domino effect, using Lemma 2.2
on the toy sample, as in Figure 2.7.

i A1∼i recurrence Output


i=0 A1∼0 <base case> =0
i=1 3 prefixsum(A1∼1 ) = prefixsum(A1∼0 ) + a1 =3
i=2 3 8 prefixsum(A1∼2 ) = prefixsum(A1∼1 ) + a2 = 11
i=3 3 8 2 prefixsum(A1∼3 ) = prefixsum(A1∼2 ) + a3 = 13
i=4 3 8 2 3 prefixsum(A1∼4 ) = prefixsum(A1∼3 ) + a4 = 16
i=5 3 8 2 3 5 prefixsum(A1∼5 ) = prefixsum(A1∼4 ) + a5 = 21

Figure 2.7: Sequential prefix sum Algorithm 2.13 illustration.

The first order linear recurrence in Lemma 2.2 permits us to devise the following algo-
rithm based on the inductive programming paradigm:

Algorithm 2.13. Sequential prefix sum

prefix sum(A1∼n )
PS = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
PS = PS + ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return PS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Since the body of the loop contains only a single constant addition operation, the com-
putational time complexity is clearly Θ(n).

2.3.2 Number of Ascents

3 2 7 9 4 1                    1 2 3 4 7 9                  3 9 4 7 1 2
 ↘ ↗ ↗ ↘ ↘                      ↗ ↗ ↗ ↗ ↗                    ↗ ↘ ↗ ↘ ↗
NAS(⟨3, 2, 7, 9, 4, 1⟩) = 2    NAS(⟨1, 2, 3, 4, 7, 9⟩) = 5  NAS(⟨3, 9, 4, 7, 1, 2⟩) = 3
(a) arbitrary sequence         (b) ascending sorted list    (c) up-down sequence

Figure 2.8: Number of ascents problem examples.



Given a list of n unique quantifiable elements, the number of ascents problem, or simply
NAS, aims to find the number of elements that are greater than the immediate previous
element. For a toy example of A = ⟨3, 2, 7, 9, 4, 1⟩ in Figure 2.8 (a), NAS(A) = 2, since
the element 7 is greater than 2 and the element 9 is greater than 7. If A is an ascending
sorted list, NAS(A) = n − 1, as exemplified in Figure 2.8 (b). If A is an up-down sequence,
NAS(A) = ⌊n/2⌋, as shown in Figure 2.8 (c). The problem is formally defined as follows:

Problem 2.11. Number of ascents
Input: A sequence A of n unique quantifiable elements
Output: Σ_{i=2}^{n} f(i) where f(i) = { 1    if a_{i−1} < a_i
                                       { 0    otherwise

Although an inductive programming algorithm immediately follows from the problem’s
definition, try to begin with deriving a first order linear recurrence for the sake of practice.
Assuming that NAS(A_{1∼n−1}) is known, try to determine the answer for the full original list,
NAS(A_{1∼n}). The following first order linear recurrence relation can be derived:

NAS(A_{1∼n}) = { 0                     if n = 0 or 1
               { NAS(A_{1∼n−1}) + 1    if a_{n−1} < a_n and n > 1        (2.28)
               { NAS(A_{1∼n−1})        if a_{n−1} > a_n and n > 1

Instead of solving recursively using equation (2.28), an inductive programming algorithm
starts from the base case onward to n, as illustrated in Figure 2.9 with the toy example of
A = ⟨3, 2, 7, 9, 4, 1⟩ from Figure 2.8 (a). A pseudo code that solves the problem forward by
inductive programming is written below:

i A1∼i ai−1 < ai ? NAS(A1∼i )


1 3 Base 0
2 3 2 F 0
3 3 2 7 T 1
4 3 2 7 9 T 2
5 3 2 7 9 4 F 2
6 3 2 7 9 4 1 F 2

Figure 2.9: Sequential number of ascents Algorithm 2.14 illustration.

Algorithm 2.14. Sequential count ascents

NAS(A1∼n )
k = 0 .............................................1
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai−1 < ai , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
k = k + 1 .................................... 4
return k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

The computational time complexity of Algorithm 2.14 is clearly linear, Θ(n).
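A 0-indexed Python sketch of Algorithm 2.14:

def nas(A):
    """Number of ascents (Algorithm 2.14): counts i with A[i-1] < A[i]."""
    k = 0
    for i in range(1, len(A)):   # compare each element with its predecessor
        if A[i - 1] < A[i]:
            k += 1
    return k

print(nas([3, 2, 7, 9, 4, 1]))  # -> 2, as in Figure 2.8 (a)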



2.3.3 Element Uniqueness


Some problems involving a list, such as the number of ascents Problem 2.11, require
elements in the list to be unique. Checking whether the elements of a list are unique is
important. Element uniqueness, or CEU (checking element uniqueness), determines whether
all elements are unique given a list of n elements. If any two elements are equal, the output
is false. Otherwise, the output is true. For example, the string ‘UNIQUE’ is not unique
because the character ‘U’ appears twice in the string. For an alternative example, the string
‘ALGORITHM’ is unique since every character in the string appears exactly once, i.e. there
are no duplicate characters. This problem can formally be defined as follows:
Problem 2.12. Element uniqueness
Input: A sequence A_{1∼n} of n elements
Output: { True     if ∀a_x, a_y ∈ A_{1∼n}, x ≠ y implies a_x ≠ a_y
        { False    otherwise

In order to come up with an algorithm that uses inductive programming, a first order
linear recurrence relation of the problem must be derived.
Lemma 2.3. Recurrence of checking element uniqueness

is element uniq(A_{1∼n}) = { T                                                 if n = 1
                           { is element uniq(A_{1∼n−1}) ∧ (a_n ∉ A_{1∼n−1})    if n > 1

Checking (an 6∈ A1∼n−1 ) is the same as searching an in A1∼n−1 , which takes linear time.
Based on the recurrence relation in Lemma 2.3, a pseudo code for an inductive programming
algorithm can be constructed as follows:
Algorithm 2.15. Checking Element Uniqueness
is element uniq(A1∼n )
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for j = 1 ∼ i − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai = aj , return false . . . . . . . . . . . . . . . . . . . . . 3
return true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

i A1∼i (ai 6∈ A1∼i−1 ) is element uniq(A1∼i )


1 C T
2 C H T T
3 C H E T T
4 C H E C F F
5 C H E C K T F

Figure 2.10: Checking element uniqueness Algorithm 2.15 illustration.

The worst case time complexity of Algorithm 2.15 is Σ_{i=2}^{n} Σ_{j=1}^{i−1} 1 = Θ(n²). The worst case
happens when the input string is unique. Algorithm 2.15 is illustrated in Figure 2.10 on a

toy example of ‘CHECK’ where n = 5. Note that the check for the (i = 5)th character ‘K’ is
not executed by Algorithm 2.15 because the algorithm has already returned false upon
finding the duplicate of the previous character, ‘C’. The best case complexity is constant
and it occurs when the duplicate characters appear early in the string, such as in ‘LLAMA.’
Hence, the computational time complexity of Algorithm 2.15 is O(n²).
Algorithm 2.15 can be equivalently restated using the search procedure, which shall be
presented in the next subsection.
Algorithm 2.16. Checking Element Uniqueness by searching

is element uniq(A_{1∼n})
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if search(a_i, A_{1∼i−1}) ≠ null, . . . . . . . . . . . . . . . 2
return false . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
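Both versions collapse to the same nested loop in Python. The sketch below follows Algorithm 2.15 and works on any sequence, including strings.

def is_element_uniq(A):
    """Element uniqueness (Algorithm 2.15): True iff no two elements are equal.
    Worst case O(n^2) comparisons; best case O(1) on an early duplicate."""
    for i in range(1, len(A)):
        for j in range(i):        # scan the prefix A[0..i-1]
            if A[i] == A[j]:
                return False      # duplicate found: stop immediately
    return True

print(is_element_uniq("ALGORITHM"))  # -> True
print(is_element_uniq("UNIQUE"))     # -> False ('U' appears twice)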

2.3.4 Searching
Consider the problem of finding all positions where a query q occurs in a list.
Problem 2.13. Searching all occurrences, findall(A_{1∼n}, q)
Input: A sequence A of n quantifiable elements and a query element, q
Output: {x ∈ Z⁺ | a_x = q and 1 ≤ x ≤ n}

For a toy example of A = ⟨3, 8, 2, 3, 5⟩, n = 5 and q = 3, the output should be {1, 4} because
a_1 = a_4 = (q = 3). To come up with an algorithm using inductive programming,
first think backward, i.e. ask “Supposing that the solution for the smaller sub-problem
findall(A_{1∼n−1}, q) is known, can we solve the larger problem, findall(A_{1∼n}, q)?” The
following recurrence relation can be derived.
Lemma 2.4. Recurrence of findall

findall(A_{1∼n}, q) = { findall(A_{1∼n−1}, q) ∪ {n}    if a_n = q
                      { findall(A_{1∼n−1}, q)          if a_n ≠ q
                      { Ø                              if n = 0

Next, sketch or draw the forward solving steps, i.e. the domino effect, using Lemma 2.4
on the toy example, as in Figure 2.11.
Now an algorithm using the inductive programming paradigm can be written as follows:
Algorithm 2.17. Sequential search-all
findall(A1∼n , q)
O = ∅ ............................................1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if a_i equals q, O = O ∪ {i} . . . . . . . . . . . . . . . . 3
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Since the body of the loop contains one comparison and one assignment, which are
constant operations, the computational time complexity is clearly Θ(n). Proving the cor-
rectness of the algorithm based on the inductive programming paradigm is trivial because
the algorithm itself is the induction.

 i     A_{1∼i}        recurrence                                          Output
 i=0                  findall(A_{1∼0}, q) <base case>                     = Ø
 i=1   3              findall(A_{1∼1}, q) = findall(A_{1∼0}, q) ∪ {1}     = {1}
 i=2   3 8            findall(A_{1∼2}, q) = findall(A_{1∼1}, q)           = {1}
 i=3   3 8 2          findall(A_{1∼3}, q) = findall(A_{1∼2}, q)           = {1}
 i=4   3 8 2 3        findall(A_{1∼4}, q) = findall(A_{1∼3}, q) ∪ {4}     = {1, 4}
 i=5   3 8 2 3 5      findall(A_{1∼5}, q) = findall(A_{1∼4}, q)           = {1, 4}

Figure 2.11: Sequential search-all Algorithm 2.17 illustration.
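A Python sketch of Algorithm 2.17, reporting 1-based positions to match the problem statement:

def findall(A, q):
    """Sequential search-all (Algorithm 2.17): 1-based positions where q occurs."""
    out = set()
    for i, a in enumerate(A, start=1):   # i runs from 1 to n
        if a == q:
            out.add(i)
    return out

print(findall([3, 8, 2, 3, 5], 3))  # -> {1, 4}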

2.3.5 Order Statistics


Consider the problem of finding the minimum element in a list as defined below.

Problem 2.14. findmin
Input: A sequence A of n quantifiable elements
Output: x and/or a_x such that a_x ∈ A ∧ ∀a_y ∈ A, a_x ≤ a_y

For a toy example of A = ⟨3, 8, 2, 3, 1⟩ where n = 5, the output should be a_5 = 1. To
come up with an algorithm using inductive programming, first think backward, i.e. ask
“Supposing that the solution for the smaller sub-problem findmin(A_{1∼n−1}) is known, can
we solve the larger original problem, findmin(A_{1∼n})?” Suppose that findmin(A_{1∼n−1}) =
findmin(⟨3, 8, 2, 3⟩) = 2 is given and a_n = 1. Clearly, the answer is findmin(A_{1∼n}) = a_n =
1, since a_n < findmin(A_{1∼n−1}). If a_n ≥ findmin(A_{1∼n−1}), findmin(A_{1∼n}) would be equal
to findmin(A_{1∼n−1}). Hence, the following recurrence relation can be derived:

Lemma 2.5. Recurrence of findmin

findmin(A_{1∼n}) = { min(a_n, findmin(A_{1∼n−1}))    if n > 1
                   { a_1                             if n = 1

Next, sketch or draw the forward solving steps, i.e. the domino effect, using Lemma 2.5
on a toy example, as in Figure 2.12. Now a pseudo code for an algorithm using the inductive
programming paradigm can be written as follows:

Algorithm 2.18. Sequential find-min

findmin(A1∼n )
O = a1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai < O, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
O = ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

The variable O in line 3 contains findmin(A1∼i−1 ) for each iteration and is reassigned
the value ai if ai < O.

i A1∼i recurrence Output


i=1 3 findmin(A1∼1 ) <base case> = a1 = 3
i=2 3 8 findmin(A1∼2 ) = min(a2 , findmin(A1∼1 )) = a1 = 3
i=3 3 8 2 findmin(A1∼3 ) = min(a3 , findmin(A1∼2 )) = a3 = 2
i=4 3 8 2 3 findmin(A1∼4 ) = min(a4 , findmin(A1∼3 )) = a3 = 2
i=5 3 8 2 3 1 findmin(A1∼5 ) = min(a5 , findmin(A1∼4 )) = a5 = 1

Figure 2.12: Sequential find-min Algorithm 2.18 illustration.

Theorem 2.8. Algorithm 2.18 correctly finds the minimum in a list A_{1∼n}.

Proof. (by induction)
Base (n = 1) case: In a list of one element, the element a_1 itself is the minimum.
Inductive case: Assuming that Algorithm 2.18 findmin(A_{1∼n}) finds the minimum, show
that it also finds the minimum in A_{1∼n+1}. Let x = findmin(A_{1∼n}). If x ≤ a_{n+1}, then
x ≤ y for every y ∈ A_{1∼n+1}. If x > a_{n+1}, then a_{n+1} ≤ y for every y ∈ A_{1∼n+1}. 

The computational time complexity of Algorithm 2.18 is Θ(n) since there is only one for
loop and the body of the loop contains only operations that run in constant time.
Problems finding a minimum, maximum or median are special cases of the kth order
statistic problem or kth selection problem. Consider KLG, the problem of finding the kth
largest number in a list. Given a list A of n quantifiable elements, the problem aims to
find an element x ∈ A, such that k − 1 other elements in A are greater than or equal to x
and the n − k remaining elements in A are less than or equal to x. For a toy example of
A = ⟨3, 7, 2, 9, 2, 9, 8⟩ where n = 7 and k = 3, the output should be 8 because 8 is the third
largest number in the list. Let A′ be a sorted list of A in descending or nonincreasing order,
i.e. A′ = ⟨9, 9, 8, 7, 3, 2, 2⟩. Then, the kth largest number is simply a′_k. Thus, the problem
can be defined as follows:

Problem 2.15. kth order statistics KOS (kth largest element, KLG)
Input: A sequence A of n quantifiable elements and k such that 0 < k ≤ n
Output: a′_k where A′ is the sorted list of A in nonincreasing order.

This problem is of great interest not only because it is an important sub-problem in


many complex problems such as nearest neighbor, shortest path, etc., but also because it
exemplifies how various algorithm design paradigms can be applied to a problem.
Let M_k(A) be the multiset of the k largest elements of A, such that KLG_k(A) = min(M_k(A)).
For the toy example used previously, M_{k=3}(A) = {9, 8, 9}. To come up with an algorithm
using inductive programming, first think backward, i.e. ask “Supposing that the solution
to the smaller sub-problem M_k(A_{1∼n−1}) is known, can we solve M_k(A_{1∼n})?” Imagine
that we know that M_k(A_{1∼n−1} = ⟨3, 7, 2, 9, 2, 9⟩) = {9, 7, 9}. Then, can we get M_k(A_{1∼n} =
⟨3, 7, 2, 9, 2, 9, 8⟩) = {9, 8, 9}? All we need to do is find the minimum element in M_k(A_{1∼n−1}),
compare it with a_n and swap it with a_n if and only if a_n > min(M_k(A_{1∼n−1})). The base case
is when n = k, in which M_k(A_{1∼n}) = A_{1∼k}. The following recurrence relation on M_k(A_{1∼n})
can be derived formally:

Lemma 2.6. Recurrence of k largest elements

M_k(A_{1∼n}) =
    { A_{1∼n}                                               if n = k
    { M_k(A_{1∼n−1}) − {min(M_k(A_{1∼n−1}))} ∪ {a_n}        if min(M_k(A_{1∼n−1})) < a_n        (2.29)
    { M_k(A_{1∼n−1})                                        if min(M_k(A_{1∼n−1})) ≥ a_n

Next, sketch or draw the forward solving steps using Lemma 2.6 on a toy example, as
in Figure 2.13. Once the k largest elements are found, the minimum of those k elements

n A1∼n Mk (A1∼n ) min (Mk (A1∼n ))


3 3 7 2 3 7 2 2
4 3 7 2 9 3 7 9 3
5 3 7 2 9 2 3 7 9 3
6 3 7 2 9 2 9 9 7 9 7
7 3 7 2 9 2 9 8 9 8 9 8

Figure 2.13: Sequential select Algorithm 2.19 illustration.

is the output for KLG, i.e. KLG(A1∼n ) = min(Mk (A1∼n )). Now an algorithm using the
inductive programming paradigm can be written.
Algorithm 2.19. Sequential select
kth-select(A1∼n , k)
M = A1∼k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = k + 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai > min (M ), . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Margmin(M ) = ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return min (M ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Note that x = argmin(M) if M_x = min(M). Algorithm 2.19 has a computational
time complexity of Θ(kn) and requires Θ(k) space to store the k largest elements.
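A Python sketch of Algorithm 2.19; M is kept as a plain list, so each min(M) scan costs Θ(k) per step, matching the Θ(kn) bound above. (A priority queue, introduced later in the book, improves on this.)

def kth_select(A, k):
    """Sequential select (Algorithm 2.19): the kth largest element of A."""
    M = list(A[:k])                 # base case: the first k elements
    for i in range(k, len(A)):
        m = min(M)                  # minimum of the k largest seen so far
        if A[i] > m:
            M[M.index(m)] = A[i]    # replace that minimum with A[i]
    return min(M)

print(kth_select([3, 7, 2, 9, 2, 9, 8], 3))  # -> 8, as in Figure 2.13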

2.3.6 Sorting
Consider the sorting problem, which has been studied extensively in computer science.
Numerous sorting algorithms have been developed to rearrange elements in a list in either
ascending (increasing) or descending (decreasing) order. Elements must be quantifiable,
meaning that they can be ordered either numerically or lexicographically. The lexicographic
order is the alphabetical dictionary order. Sorting names in lexicographical order is exem-
plified in Fig. 2.14. Readers will observe lexicographic ordering formally as a computational
problem in another section.
The problem of sorting in ascending order is formally defined as follows:
Problem 2.16. Sorting
Input: A sequence A of n quantifiable elements
Output: a permutation A′ of A such that a′_i ≤ a′_j if 0 < i < j ≤ n

Figure 2.14: Sorting example. (a) Input example; (b) output example. (© Portraits are in public domain.)

A naı̈ve algorithm derived directly from the problem’s definition has Θ(n!) computational
time complexity. It generates all n! permutations of the input list A and verifies each
sequence.
In order to design a better algorithm using the inductive programming paradigm, imagine
you have a fully sorted hand of cards and receive another card to be added to your hand,
as depicted in Figure 2.15. One may apply this retrograde thinking to the sub-problem of
inserting an element into a sorted list.

Figure 2.15: A hand of sorted playing cards and one more card to be inserted. (a) input; (b) output.

Problem 2.17. Insert into a sorted list
Input: A sorted list S of size n and an element q
Output: A sorted list S′ of size n + 1 where q ∈ S′ and ∀x ∈ S (x ∈ S′)

Assuming that the sorted list is an array, the element q to be inserted can first be
inserted at the end of the array. The element q is then continuously swapped with the
previous element in the array under the condition that the previous element is greater than
q. A pseudo code of this algorithm is given as follows:

Algorithm 2.20. Insertion by swapping

insert sorted list I(S_{1∼n}, q)
s_{n+1} = q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
p = n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
while p ≥ 1 and s_p > s_{p+1} . . . . . . . . . . . . . . . . . . 3
swap(s_p, s_{p+1}) . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
p = p − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
return S_{1∼n+1} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

There is a slightly better algorithm that involves less swapping. First find the correct
position p to insert q in the sorted list and then shift all elements in Sp∼n by one to allocate
space to insert q. This sliding approach requires solving the problem of searching in a sorted
list, which can be formulated as follows:
Problem 2.18. Find the position to place q in a sorted list, findp(S_{1∼n}, q)
Input: A sorted list S of n elements and an element q
Output: the position p such that s_{p−1} ≤ q and s_p > q (with s_0 = −∞ and s_{n+1} = +∞ by convention)
Most readers with an introductory computer science background would immediately use
the efficient binary search algorithm to solve Problem 2.18. This algorithm will be explained
in Chapter 3. For now, Problem 2.18 can be solved in linear time using a slightly modified
Algorithm 2.17. An inductive programming algorithm need not start from left (i = 1) to
right (i = n), but may progress from right to left in reverse sequential order.
Lemma 2.7. Recurrence of findp

findp(S_{1∼n}, q) = { n + 1                  if s_n ≤ q
                    { findp(S_{1∼n−1}, q)    otherwise

Using the linear recurrence relation in Lemma 2.7, the following algorithm which takes
O(n) time can be derived.
Algorithm 2.21. Sequential search
findp(S_{1∼n}, q)
p = n + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
while p > 1 and s_{p−1} > q . . . . . . . . . . . . . . . . . . 2
p = p − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Using Algorithm 2.21, an algorithm for Problem 2.17 is given as follows:
Algorithm 2.22. Sliding and insert
insert sorted list II(S_{1∼n}, q)
p = findp(S_{1∼n}, q) . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = n down to p . . . . . . . . . . . . . . . . . . . . . . . . . 2
s_{i+1} = s_i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
s_p = q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return S_{1∼n+1} . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Figure 2.16: Visual illustration of inserting an element into a sorted list. (a) Numbers of comparisons for forward and backward search; (b) slide and insert mechanism.

If an ordinary inductive programming algorithm similar to Algorithm 2.17 is used in line
1 of Algorithm 2.22, it would take Θ(n) time. This is because Algorithm 2.17 must compare
all elements in A_{1∼p} to q and then shift all remaining elements A_{p∼n} by one as depicted in
all elements in A1∼p to q and then shift all remaining elements Ap∼n by one as depicted in
Figure 2.16 (a). Dashed arcs indicate the comparisons necessary to insert ‘Gauss’ into the
sorted list and solid arcs indicate the required shift operations. Algorithm 2.22, however,
takes O(n) time since Algorithm 2.21 is used in line 1. Only elements in Ap∼n need to both
be compared to q and shifted. This is illustrated in Figure 2.16 (a). The worst case scenario
takes Θ(n) and the best case takes O(1) time. Astute readers may notice that binary search,
which will be covered in Chapter 3, should be used instead of sequential search to find the
position to insert the new element. If binary search, which takes Θ(log n) time, is used to
find the position, the best case scenario is Θ(log n) mainly due to the number of comparisons
and the worst case is Θ(n) mainly due to sliding.
Considering Algorithm 2.22 as backward thinking, the following first order linear recur-
rence can be derived for the sorting problem:

Lemma 2.8. Sorting linear recurrence

sort(A_{1∼n}) = { a_1                                            if n = 1
                { insert sorted list II(sort(A_{1∼n−1}), a_n)    if n > 1

Now, the sorting problem can be solved forward by an inductive programming paradigm.
This inductive process can be effectively explained with playing cards. Imagine that the
dealer hands out exactly one new card at a time. The players sort their hand by placing
each new card in the proper sorted position as they receive it, starting from the first card
to the last card dealt. This approach is detailed in Algorithm 2.23 and is called insertion
sort. It is often used by people to sort bridge hands [162, p 250]. A pseudo code is provided
below.

Algorithm 2.23. insertion sort


insert sort(A1∼n )
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
A1∼i = insert sorted list II(A1∼i−1 , ai ) . . . . . . . . . 2
return A1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Alternatively, it can be extended to the following pseudo code:

insert sort(A1∼n )
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
p = findp(A1∼i−1 , ai ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
tmp = ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = i − 1 down to p . . . . . . . . . . . . . . . . . . . . . . . . . 4
aj+1 = aj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
ap = tmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return A1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

7 3 2 4 1           7 4 3 2 1           1 2 3 4 7
3 7 2 4 1           4 7 3 2 1           1 2 3 4 7
2 3 7 4 1           3 4 7 2 1           1 2 3 4 7
2 3 4 7 1           2 3 4 7 1           1 2 3 4 7
1 2 3 4 7           1 2 3 4 7           1 2 3 4 7
(a) illustration    (b) worst case      (c) best case

Figure 2.17: Insertion sort illustration when the input A is an array.

As exemplified in Figure 2.17, Algorithm 2.23 takes O(n²) time since the linear time
Algorithm 2.22 is invoked n − 1 times. The best case running time complexity is
Θ(n), which reflects the case when the input is an almost completely sorted list, as pictured
in Figure 2.17 (c).
Table 2.3 summarizes the computational time complexities of various implementations
of insertion sort algorithms. Binary search, which takes O(log n) time, will be covered in
Chapter 3 on page 106 and the linked list version will be dealt with later in this chapter on
page 73.

Table 2.3: Insertion sort complexities.

Worst case Average case Best case


Array with forward search Θ(n2 ) Θ(n2 ) Θ(n2 )
Array with backward search Θ(n2 ) Θ(n2 ) Θ(n)
Array with binary search Θ(n2 ) Θ(n2 ) Θ(n log n)
Linked list Θ(n2 ) Θ(n2 ) Θ(n)
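A 0-indexed Python sketch of the array-with-backward-search variant (Algorithms 2.21 and 2.22 fused into the main loop of Algorithm 2.23):

def insert_sort(A):
    """Insertion sort (Algorithm 2.23), backward search plus sliding."""
    A = list(A)                       # sort a copy of the input
    for i in range(1, len(A)):
        tmp = A[i]                    # the next "card" to insert
        j = i - 1
        while j >= 0 and A[j] > tmp:  # backward search for tmp's position,
            A[j + 1] = A[j]           # sliding larger elements to the right
            j -= 1
        A[j + 1] = tmp                # insert tmp at its sorted position
    return A

print(insert_sort([7, 3, 2, 4, 1]))  # -> [1, 2, 3, 4, 7], Figure 2.17 (a)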

A list is represented as a one dimensional array by default throughout this book. Another
way to represent a list is a linked list (see [2]). Unlike arrays, one has to traverse a linked list
from the beginning to access the ith element, which takes O(n) time. To insert an element
at the beginning of a linked list only takes constant time, while it takes linear time for
arrays. Deleting the first element takes constant time and linear time for linked lists and
arrays, respectively. A linked list data structure will be reviewed later in section 2.4.

2.3.7 Alternating Permutation


An alternating permutation of a sequence A of n distinct quantifiable elements is an
arrangement of those elements into an order A′ = ⟨a′_1, ···, a′_n⟩ such that no a′_i lies between
a′_{i−1} and a′_{i+1}, and a′_1 < a′_2. With a toy example of A = ⟨6, 2, 5, 1, 3, 8⟩, possible alternating
permutations include A′ = ⟨2, 5, 1, 8, 3, 6⟩ and A′ = ⟨6, 8, 1, 5, 2, 3⟩. The number of all
possible alternating permutations of n elements, E(n), also known as the
Euler zigzag number or up-down number, as well as how to determine E(n), will be covered in
Chapter 5. In this section, we consider the problem of generating an alternating permutation
or up-down sequence, formally defined as follows:

Problem 2.19. Alternating permutation or up-down
Input: A sequence A of n distinct quantifiable elements, i.e. a_i ≠ a_j if i ≠ j
Output: a permutation A′ of A such that { a′_i < a′_{i+1}    if i is odd and i < n
                                         { a′_i > a′_{i+1}    if i is even and i < n

In order to devise an algorithm based on the inductive programming paradigm, thinking
backward is key. One may sketch a toy example as shown in Figure 2.18. There are four
cases. If the solution for A_{1∼n−1} ends with an upward turn and a_n is less than a_{n−1}, there
is nothing left to do for the solution to A_{1∼n}. If not, all we need to do is swap a_{n−1} and
a_n. Since a_{n−2} < a_{n−1} < a_n, swapping a_{n−1} and a_n guarantees the up-down sequence, as
shown in Figure 2.18.

(a) case 1: n is odd and low.

(b) case 2: n is odd but high.

(c) case 3: n is even and high.

(d) case 4: n is even but low.

Figure 2.18: Four cases of inductive step for the up-down sequence problem.

There is often no need to explicitly state the backward thinking as a formal recursion.
It is simply a useful technique for creating an inductive programming algorithm to solve
forward. An algorithm using the inductive programming paradigm for producing an up-
down sequence can be written as follows:
Algorithm 2.24. Sequential up-down
convert_updown(A1∼n)
    A′ = A . . . . . . . . . . 1
    for i = 2 ∼ n . . . . . . . . . . 2
        if i is even, . . . . . . . . . . 3
            if a′i < a′i−1, swap(a′i, a′i−1) . . . . . . . . . . 4
        else, . . . . . . . . . . 5
            if a′i > a′i−1, swap(a′i, a′i−1) . . . . . . . . . . 6
    return A′ . . . . . . . . . . 7

n    ⟨A′1∼n−1, an⟩     swap(a′n−1, an)?    A′1∼n
1    6                 base                6
2    6 2               yes                 2 6
3    2 6 3             no                  2 6 3
4    2 6 3 1           yes                 2 6 1 3
5    2 6 1 3 5         yes                 2 6 1 5 3
6    2 6 1 5 3 8       no                  2 6 1 5 3 8

Figure 2.19: Sequential up-down Algorithm 2.24 illustration.

Algorithm 2.24 is illustrated in Figure 2.19 using the previous toy example. The com-
putational time complexity of Algorithm 2.24 is clearly Θ(n).
A recursive version, whose computational complexity is also Θ(n), can be stated as
follows:
Algorithm 2.25. Recursive up-down
Let A1∼n be global.
updown(n)
if n > 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
updown(n − 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if n is even and an < an−1 , swap(an , an−1 ) . . . . . . . . . 3
else if n is odd and an > an−1 , swap(an , an−1 ) . . . . . .4
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
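
For readers who want to execute the algorithm, a minimal Python sketch of the sequential up-down Algorithm 2.24 follows, assuming 0-based indexing: a 1-based position i is even exactly when the 0-based index i − 1 is odd.

    def updown(A):
        # Sequential up-down (Algorithm 2.24), Theta(n) time.
        B = list(A)                        # work on a copy, like A' in the pseudo code
        for j in range(1, len(B)):
            if (j + 1) % 2 == 0:           # even 1-based position: need a rise
                if B[j] < B[j - 1]:
                    B[j], B[j - 1] = B[j - 1], B[j]
            elif B[j] > B[j - 1]:          # odd 1-based position: need a fall
                B[j], B[j - 1] = B[j - 1], B[j]
        return B

    print(updown([6, 2, 5, 1, 3, 8]))      # [2, 6, 1, 5, 3, 8]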
Consider the problem of checking whether a given sequence A1∼n of n numbers is an
up-down sequence. It is formally defined as follows:
Problem 2.20. Checking up-down sequence

Input:  A sequence A of n quantifiable elements

Output: ⎧ true    if ∀i ∈ {1, · · · , n − 1}, ai < ai+1 when i is odd and ai > ai+1 when i is even
        ⎩ false   otherwise

In order to design an algorithm based on inductive programming, imagine the solution
of is_updown(A1∼n−1) is known. If it is false, is_updown(A1∼n) is automatically false. If it
is true, an must be compared to an−1. This backward thinking may be stated recursively.

is_updown(A1∼n) = ⎧ true                                  if n = 1
                  ⎨ is_updown(A1∼n−1) ∧ (an < an−1)       if n is odd      (2.30)
                  ⎩ is_updown(A1∼n−1) ∧ (an > an−1)       if n is even

An inductive programming algorithm can easily be derived from equation (2.30). A pseudo
code was given in Algorithm 1.18 on page 30 whose computational time complexity is clearly
O(n).

2.3.8 Random Permutation


Consider the problem of generating a random permutation of a sequence without any or-
dering constraints, such as sorting Problem 2.16 and alternating permutation Problem 2.19.
For the example string ‘RANDOM,’ some possible outputs include ‘DARMON,’ ‘NOD-
MAR,’ ‘MANDOR,’ ‘MONDAR,’ ‘RONMDA,’ ‘ADMNOR,’ etc. Any one of n! permuta-
tions of the string is a possible answer. The problem can be thought of as shuffling the
elements and is formally defined as follows:
Problem 2.21. Random permutation (Shuffling)
Input: A sequence A of n elements
Output: A01∼n ∈ AP(A1∼n ) where AP(A1∼n ) = {P1∼n | sort(P1∼n ) = sort(A1∼n )}
Although ‘S’ or other symbols are used to denote the set of all permutations in other
fields, a new symbol ‘AP’ is used to avoid confusion with other symbols in the upcoming
chapters.
The Random Permutation Problem 2.21, or simply RPP, can easily be solved by an in-
ductive programming technique if a random number generator is available. Using retrograde
thinking, suppose a random permutation of n − 1 elements, A01∼n−1 , is given. To generate
a random permutation of n elements, A01∼n , first choose an index number r randomly from
{1, · · · , n}. Then, swap the nth element with the rth element in A01∼n−1 . No swapping is
necessary if r = n. The following linear recurrence can be derived:
Lemma 2.9. Recurrence of random permutation
randperm(A1∼n) = ⎧ a1                                               if n = 1
                 ⎩ swap(r, n, append(randperm(A1∼n−1), an))         if n > 1
where r = random(1 ∼ n)
Next, sketch or draw the forward solving steps using Lemma 2.9 on a toy example, as
shown in Figure 2.20. Now an algorithm using the inductive programming paradigm can be
written.
Algorithm 2.26. Knuth shuffle
shuffle(A1∼n )
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
r = random(1 ∼ i) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
swap(ai , ar ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

n     ⟨A′1∼n−1, an⟩    r        ⟨A′1∼n−1, an⟩    r        ⟨A′1∼n−1, an⟩    r
1     R                1        R                1        R                1
2     R A              2        R A              1        R A              2
3     R A N            1        A R N            2        R A N            3
4     N A R D          1        A N R D          3        R A N D          4
5     D A R N O        2        A N D R O        1        R A N D O        5
6     D O R N A M      3        O N D R A M      6        R A N D O M      6
out   D O M N A R               O N D R A M               R A N D O M

Figure 2.20: Knuth shuffle Algorithm 2.26 illustration.

For a string of length four, ‘abcd’, any one of (4! = 24) permutations can be produced
by Algorithm 2.26, as shown in Table 2.4.

Table 2.4: Random permutations of ‘abcd’.


R              A′             R              A′             R              A′
⟨1, 1, 1, 1⟩   ⟨d, a, b, c⟩   ⟨1, 1, 3, 1⟩   ⟨d, a, c, b⟩   ⟨1, 2, 2, 1⟩   ⟨d, c, b, a⟩
⟨1, 1, 1, 2⟩   ⟨c, d, b, a⟩   ⟨1, 1, 3, 2⟩   ⟨b, d, c, a⟩   ⟨1, 2, 2, 2⟩   ⟨a, d, b, c⟩
⟨1, 1, 1, 3⟩   ⟨c, a, d, b⟩   ⟨1, 1, 3, 3⟩   ⟨b, a, d, c⟩   ⟨1, 2, 2, 3⟩   ⟨a, c, d, b⟩
⟨1, 1, 1, 4⟩   ⟨c, a, b, d⟩   ⟨1, 1, 3, 4⟩   ⟨b, a, c, d⟩   ⟨1, 2, 2, 4⟩   ⟨a, c, b, d⟩
⟨1, 1, 2, 1⟩   ⟨d, c, a, b⟩   ⟨1, 2, 1, 1⟩   ⟨d, b, a, c⟩   ⟨1, 2, 3, 1⟩   ⟨d, b, c, a⟩
⟨1, 1, 2, 2⟩   ⟨b, d, a, c⟩   ⟨1, 2, 1, 2⟩   ⟨c, d, a, b⟩   ⟨1, 2, 3, 2⟩   ⟨a, d, c, b⟩
⟨1, 1, 2, 3⟩   ⟨b, c, d, a⟩   ⟨1, 2, 1, 3⟩   ⟨c, b, d, a⟩   ⟨1, 2, 3, 3⟩   ⟨a, b, d, c⟩
⟨1, 1, 2, 4⟩   ⟨b, c, a, d⟩   ⟨1, 2, 1, 4⟩   ⟨c, b, a, d⟩   ⟨1, 2, 3, 4⟩   ⟨a, b, c, d⟩

Assuming that generating a random number takes constant time, the computational
time complexity of Algorithm 2.26 is Θ(n).
Algorithm 2.26 is widely referred to as the Knuth shuffle after Donald Knuth, as in [151,
p 32], for his work in [100]. Since it was described earlier by Ronald Fisher and Frank
Yates [61] in 1938 and formalized in [52], it is also often called the Fisher-Yates shuffle.
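
A direct Python transcription of Algorithm 2.26 is sketched below; random.randint stands in for the text's random(1 ∼ i). This is a minimal sketch for illustration; Python's standard library ships the same idea as random.shuffle.

    import random

    def knuth_shuffle(A):
        # In-place Knuth (Fisher-Yates) shuffle, Theta(n) time: after step i,
        # A[0..i] is a uniformly random permutation of its elements.
        for i in range(1, len(A)):
            r = random.randint(0, i)       # random index in {0, ..., i}, both ends inclusive
            A[i], A[r] = A[r], A[i]
        return A

    print(''.join(knuth_shuffle(list('RANDOM'))))   # one of the 6! = 720 permutations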

2.3.9 Palindrome
A palindrome is a string of elements that reads the same from left to right as it does
from right to left. Some example words include ‘Anna,’ ‘dontnod,’ ‘SOS,’ ‘tattarrattat,’ etc.
Palindromes are of great interest in many fields such as bioinformatics, poetry, and pure
mathematics.
Checking whether a sequence is a palindrome is formally defined as follows:

Problem 2.22. isPalindrome(A)

Input:  A sequence A of n elements

Output: ⎧ True    if ∀i ∈ {1, · · · , n}, ai = an−i+1
        ⎩ False   otherwise

The usual backward thinking used to derive linear recurrence relations does not apply
to this problem. Using a flexible way of backward thinking, the following linear recurrence
can be derived.

Lemma 2.10. Linear recurrence of palindrome

isPalindrome(A1∼n) = ⎧ True                     if n = 1 or (n = 2 and a1 = a2)
                     ⎨ False                    if n > 1 and a1 ≠ an
                     ⎩ isPalindrome(A2∼n−1)     if n > 2 and a1 = an

Now an algorithm using a flexible way of applying the inductive programming paradigm
can be written as follows:

Algorithm 2.27. Palindrome verify

isPalindrome(A1∼n )
for i = ⌈n/2⌉ down to 1 . . . . . . . . . . . . . . . . . . . . . . . . . 1
if ai 6= an−i+1 , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
return false . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
return true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Figure 2.21 provides additional toy examples of odd- and even-length palindromes to aid
understanding of Algorithm 2.27.

i    Ai∼n−i+1           ai =? an−i+1        Ai∼n−i+1              ai =? an−i+1
4    E                  a4 = a4             D D                   a4 = a5
3    C E C              a3 = a5             I D D I               a3 = a6
2    A C E C A          a2 = a6             V I D D I V           a2 = a7
1    R A C E C A R      a1 = a7             A V I D D I V A       a1 = a8
     (a) odd-length palindrome case         (b) even-length palindrome case

Figure 2.21: Checking palindrome algorithm illustration.

The algorithm may terminate early when an input sequence is non-palindromic. For
example, the string ‘palindrome’ is not a palindrome. Algorithm 2.27 would only compare
(a5 = ‘n’) 6= (a6 = ‘d’) and immediately return false. Hence, the computational time
complexity of Algorithm 2.27 is clearly linear, O(n), rather than Θ(n).
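
A minimal Python sketch of Algorithm 2.27 follows; it accepts any indexable sequence, such as a string or a list, and the early return realizes the O(1) best case discussed above.

    def is_palindrome(A):
        # Algorithm 2.27: compare a_i with a_{n-i+1} for i = ceil(n/2) down to 1.
        # Early termination on the first mismatch gives an O(1) best case.
        n = len(A)
        for i in range((n + 1) // 2, 0, -1):
            if A[i - 1] != A[n - i]:       # 0-based indices for a_i and a_{n-i+1}
                return False
        return True

    print(is_palindrome('racecar'), is_palindrome('palindrome'))   # True False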

2.4 Linked list


Thus far, we held the assumption that a list is an array and will continue to assume so
in the remaining chapters by default unless otherwise stated. A widely and frequently used
alternative representation of a list is a linked list. This section first defines a linked list and
conducts trade-off analysis between arrays and linked lists. The insertion sort algorithm
using a linked list as input will be presented to demonstrate the difference in computational
complexities.

2.4.1 Definition
Among numerous definitions of the word ‘list’ provided in the Oxford dictionary [167],
the definition closest to the list used in computer science would be ‘a number of connected
items or names written or printed consecutively, typically one below the other’. In more
precise terms, a list is an array of symbols drawn from an alphabet or a symbol set, Σ. For
example, a DNA string is a list with elements drawn from Σ = {A, C, G, T} and a binary
number representation is a list with elements drawn from Σ = {0, 1}.
A formal recursive definition of a list gives insight into understanding the concept of the
linked list. First, consider the problem of checking whether a string is a valid list containing
elements in Σ.

Problem 2.23. Checking a list

Input:  A sequence S1∼n of n elements and Σ = {σ1, · · · , σm}

Output: ⎧ T    if ∀si ∈ S1∼n, si ∈ Σ
        ⎩ F    otherwise

A first order linear recurrence relation to solve Problem 2.23 is stated as follows:
isList(S1∼n) = ⎧ T                               if n = 0
               ⎩ isList(S1∼n−1) ∧ (sn ∈ Σ)       if n > 0          (2.31)

Assuming that checking (sx ∈ Σ) takes linear time, O(m), an inductive programming al-
gorithm or the recursive algorithm in equation (2.31) takes O(mn) time. If the symbols in
Σ are sorted and a binary search, which will be introduced in Chapter 3, is conducted, the
computational time complexity would be O(n log m).
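
As a hedged aside, the following Python sketch adds one implementation idea beyond the recurrence: storing Σ in a hash set makes each membership test expected O(1), so the whole check runs in expected O(n + m) time.

    def is_list(S, sigma):
        # Build the alphabet once (O(m)), then test each symbol in expected O(1).
        alphabet = set(sigma)
        return all(s in alphabet for s in S)   # short-circuits on the first alien symbol

    print(is_list('ACTTAGGATT', 'ACGT'))       # True
    print(is_list('ACTTAGUGGATT', 'ACGT'))     # False: alien symbol 'U'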
By slightly altering the recurrence relation in equation (2.31), the linear recurrence
relation can be rewritten as follows:
isList(S1∼n) = ⎧ T                               if n = 0
               ⎩ (s1 ∈ Σ) ∧ isList(S2∼n)         if n > 0          (2.32)

In the recurrence relation in equation (2.32) for checking the validity of a string, a string
S1∼n is a valid list if the first element is valid, i.e. s1 ∈ Σ, and the rest of the string is a
valid list.
Let ⌢ be a concatenation or appending operator, e.g. ⟨1, 2⟩ ⌢ ⟨3, 4⟩ = ⟨1, 2, 3, 4⟩.

Definition 2.1. String concatenation operator, ⌢

A1∼n ⌢ B1∼m = C1∼n+m where ci = ⎧ ai      if i ≤ n
                                ⎩ bi−n    if n < i ≤ n + m

Using the string concatenation operator ⌢, a linked list of a string of Σ, S1∼n, can be
represented and defined recursively.

S1∼n = ⎧ ε            if n = 0
       ⎩ s1 ⌢ S2∼n    if n > 0, where (s1 ∈ Σ) ∧ isList(S2∼n)          (2.33)

Hence, a linked list is either empty or a valid symbol in Σ followed by the rest of the linked
list as recursively stated in equation (2.33).
To implement equation (2.33) on a computer, each element in the linked list needs to be
a node that consists of the actual element data and a link to the rest of the list. A linked
list L stores a reference to the beginning node. Let L.data denote the data value of a node
and L.next be the pointer to the rest of the linked list. The ending node, x, has no list
linked after it and thus x.next = nil.

2.4.2 Array vs. Linked List


Finding the value of the pth position in a list takes constant time if the list is implemented
as an array, since it is simply A[p], or A[p − 1] if the first index is 0. Unlike arrays, one has
to traverse a linked list from the beginning to access the pth element, which takes O(n) or
Θ(p) time. A recursive algorithm for accessing the pth element when the list is implemented
in a linked list is stated as follows:
LLaccess(L, p) = ⎧ L.data                       if p = 1
                 ⎩ LLaccess(L.next, p − 1)      if p > 1          (2.34)

A disadvantage of arrays, however, is that the maximum size of the array, m, must be
specified. The computational space complexity of all operations in arrays is Θ(m), while
the actual data size is n. Hence, the array implementation of a list is static. The linked list
implementation of a list is dynamic because the computational space complexities of most
operations are Θ(n).
Inserting an element at a certain position in an array is similar to the sliding and insert
Algorithm 2.22 and illustrated in Figure 2.22 (a). This takes O(n) time. Insertion at the
end of the array takes constant time, which is the best case, as long as n < m. If the array
is full, i.e. n = m, one may create a bigger array and copy all elements to the new array.
The pseudo code for insertion operation in the pth position in a linked list is stated as
follows:

Algorithm 2.28. Linked list - insert

LLinsert(L, q, p)
x = nodify(q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if p = 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
x.next = L . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
L = x .......................................... 4
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
C = L ..........................................6
for i = 2 ∼ p − 1, C = C.next . . . . . . . . . . . . . . . . 7
x.next = C.next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
C.next = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
return L . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Algorithm 2.28 is illustrated in Figure 2.22 (b). The computational time complexity is
O(n) or Θ(p). Inserting an element at the beginning of the linked list only takes constant
time.

Figure 2.22: Insert and delete operations on an array and a linked list: (a) array insert('N', p = 3), (b) linked list insert('N', p = 3), (c) array delete(p = 4), (d) linked list delete(p = 4). The array versions shift elements to open or close a gap, while the linked list versions walk to node p − 1 and redirect pointers.

Deleting the element at the pth position in an array takes O(n) time, as n − p elements
must be shifted left one by one to close the gap. This is illustrated in Figure 2.22 (c). The
best case time complexity is O(1) when p = n.
Pseudo code for the deletion operation at the pth position in a linked list is stated as
follows:

Algorithm 2.29. Linked list - delete

LLdelete(L, p)
if p = 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
L = L.next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
C = L ..........................................4
for i = 2 ∼ p − 1, C = C.next . . . . . . . . . . . . . . . . 5
C.next = C.next.next . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return L . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Algorithm 2.29 is illustrated in Figure 2.22 (d). The computational time complexity is

O(n) or Θ(p). Deleting an element at the beginning of the linked list only takes constant
time.

Table 2.5: Operation complexity comparisons between arrays and linked lists

Operation    Position     Array         Linked list    Head-tail L.L.
access       beginning    O(1)          O(1)           O(1)
             end          O(1)          Θ(n)           O(1)
             any p        O(1)          Θ(p)           Θ(p)
insertion    beginning    Θ(n)          O(1)           O(1)
             end          O(1)/Θ(n)     Θ(n)           O(1)
             any p        Θ(n − p)      Θ(p)           Θ(p)
deletion     beginning    Θ(n)          O(1)           O(1)
             end          O(1)          Θ(n)           Θ(n)
             any p        Θ(n − p)      Θ(p)           Θ(p)

Table 2.5 summarizes the computational complexities of three basic operations of list
when implemented in an array and a linked list. If pointers to both the head and tail of the
linked list are maintained, the computational time complexities of the access operation and
insertion at the end of the list are improved to O(1). Deleting an element at the end of the
list still takes Θ(n) time.
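
The pointer manipulations of Algorithms 2.28 and 2.29 can be made concrete with the following minimal Python sketch; the Node class and 1-based position parameter are modeling assumptions for illustration.

    class Node:
        # A singly linked list node: element data plus a link to the rest.
        def __init__(self, data, next=None):
            self.data, self.next = data, next

    def ll_insert(L, q, p):
        # Algorithm 2.28: insert value q at 1-based position p; returns the head.
        x = Node(q)
        if p == 1:                 # O(1) insertion at the beginning
            x.next = L
            return x
        c = L
        for _ in range(2, p):      # walk to node p - 1: Theta(p)
            c = c.next
        x.next = c.next
        c.next = x
        return L

    def ll_delete(L, p):
        # Algorithm 2.29: delete the node at 1-based position p; returns the head.
        if p == 1:                 # O(1) deletion at the beginning
            return L.next
        c = L
        for _ in range(2, p):      # walk to node p - 1: Theta(p)
            c = c.next
        c.next = c.next.next       # bypass the pth node
        return L

    # L -> I -> K -> E -> D, as in Figure 2.22
    L = Node('L', Node('I', Node('K', Node('E', Node('D')))))
    L = ll_insert(L, 'N', 3)       # L -> I -> N -> K -> E -> D
    L = ll_delete(L, 4)            # L -> I -> N -> E -> D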

2.4.3 Insertion Sort with a Linked List

S: nil                        A: 2 → 3 → 1 → 7 → 4 → nil
S: 2 → nil                    A: 3 → 1 → 7 → 4 → nil
S: 2 → 3 → nil                A: 1 → 7 → 4 → nil
S: 1 → 2 → 3 → nil            A: 7 → 4 → nil
S: 1 → 2 → 3 → 7 → nil        A: 4 → nil
S: 1 → 2 → 3 → 4 → 7 → nil    A: nil

Figure 2.23: Insertion sort illustration when the input A is a linked list.

When the input list A1∼n is represented as a linked list, the insertion sort algorithm
starts with an empty solution list, S. It removes the first element from A and inserts it in
the sorted list S. Figure 2.23 illustrates insertion sort when the input list is a linked list. A
pseudo code is provided below.
Algorithm 2.30. Insertion sort with a linked list

insert_sortLL(A1∼n)
    x = deleteLL(A, 1) . . . . . . . . . . 1
    declare a linked list, S = ⟨x⟩ . . . . . . . . . . 2

    for i = 2 ∼ n . . . . . . . . . . 3
        p = 1 . . . . . . . . . . 4
        x = deleteLL(A, 1) . . . . . . . . . . 5
        c = s1 . . . . . . . . . . 6
        while c ≠ nil and c.data < x . . . . . . . . . . 7
            c = c.next . . . . . . . . . . 8
            p = p + 1 . . . . . . . . . . 9
        insertLL(S, x, p) . . . . . . . . . . 10
    return S1∼n . . . . . . . . . . 11

Figure 2.24: Running time of insertion sort when the input A is a linked list: (a) illustration, O(n²); (b) best case, Θ(n); (c) worst case, Θ(n²).

Best and worst case computational time complexities are Θ(n) and Θ(n²), as shown in
Figure 2.24 (b) and (c), respectively. While being the worst case for the array representation,
an almost completely sorted list in descending order is the best case for sorting a linked list
in ascending order. In conclusion, we see that the performance and behavior of an algorithm
vary depending on how the input is represented.
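
A Python sketch of Algorithm 2.30 follows, assuming the same minimal Node representation as a linked list node; the pseudo code's nil appears as None.

    class Node:
        def __init__(self, data, next=None):
            self.data, self.next = data, next

    def insertion_sort_ll(head):
        # Algorithm 2.30: repeatedly remove the first node of A and splice it
        # into the sorted list S, scanning S from the front for the insertion point.
        S = None                               # the sorted list, initially empty
        while head is not None:
            x, head = head, head.next          # x = deleted first node of A
            if S is None or x.data < S.data:
                x.next, S = S, x               # O(1) insertion at the front of S
            else:
                c = S
                while c.next is not None and c.next.data < x.data:
                    c = c.next
                x.next, c.next = c.next, x
        return S

    # A = 2 -> 3 -> 1 -> 7 -> 4, as in Figure 2.23
    s = insertion_sort_ll(Node(2, Node(3, Node(1, Node(7, Node(4))))))
    while s is not None:
        print(s.data, end=' ')                 # 1 2 3 4 7
        s = s.next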

2.5 Iterative Programming


Inductive programming should not be confused with iterative programming. In contrast
to recursive programming, the term ‘iterative programming’ [132] appears in almost all
programming language books. Iterative programming is a more general term for algorithms
with loops. In this section, some iterative programming examples that are not based on the
inductive programming paradigm are presented as interesting asides.

2.5.1 Bubble Sort


Consider the following iterative sorting algorithm known as bubble sort.
Algorithm 2.31. Bubble sort
bubblesort(A1∼n )
for i = n down to 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for j = 1 to i − 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
if aj > aj+1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
swap(aj , aj+1 ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

7 3 2 4 1 → 3 7 2 4 1 → 3 2 7 4 1 → 3 2 4 7 1 → 3 2 4 1 7      (a) i = n = 5 step
3 2 4 1 7 → 2 3 4 1 7 → 2 3 4 1 7 → 2 3 1 4 7                   (b) i = 4 step
2 3 1 4 7 → 2 3 1 4 7 → 2 1 3 4 7                                (c) i = 3 step
2 1 3 4 7 → 1 2 3 4 7                                             (d) i = 2 final step

Figure 2.25: Bubble sort illustration.

As illustrated in Figure 2.25, Algorithm 2.31 compares two consecutive elements and
swaps them if the first element is greater than the next one. If this swapping is conducted
from the beginning to the end of an array, the element with the maximum value is pushed
all the way to the end of the array. By repeating this procedure, the next highest value is
pushed to the right.
The term “bubble sort” was first used by Iverson in [87] (see [12] for a comprehensive
survey) and has been used conventionally in [42, 103]. Other names for Algorithm 2.31
include exchange sort [121], shuttle sort [155], and sinking sort [25]. The bubble sort algorithm
appears in most textbooks mainly to demonstrate its inefficiency, as its computational time
complexity is Θ(n²).
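
Despite its inefficiency, bubble sort is short; a minimal Python sketch of Algorithm 2.31:

    def bubble_sort(A):
        # Algorithm 2.31: pass i pushes the maximum of A[0..i] to position i,
        # so the tail of the array is already in its final sorted place.
        for i in range(len(A) - 1, 0, -1):
            for j in range(i):
                if A[j] > A[j + 1]:
                    A[j], A[j + 1] = A[j + 1], A[j]
        return A

    print(bubble_sort([7, 3, 2, 4, 1]))   # [1, 2, 3, 4, 7]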

2.5.2 Tail Recursion


Both inductive and their respective recursive programming based algorithms presented
in this chapter have the same computational time complexities, i.e. Θ(n × f (n)) where
f (n) is the time the body of the loop or the recursive call takes to execute. As illustrated in
Figure 2.1, however, recursive programming has the drawback that each recursive procedure
must return its output to its invoking procedure. If the desired output is computed at the
end of the recursive calls, i.e. the base case, the process of returning the output value to
the invoking procedures can immediately be terminated. This kind of special recursion is
called tail recursion. This line of discussion was first raised in [164].
The tail recursion concept is primarily dealt with in designing programming languages
and compilers. Here, its concept is used purely from an algorithm design perspective. As
illustrated in Figure 2.26, while inductive programming solves the problem forward from
the base case, recursive programming invokes the sub-problem backward and then returns
to the original problem. Tail recursion solves the problem backward and terminates at the
base case.
For example, in factorial Problem 2.7, while the inductive programming based Algo-
rithm 2.9 solves forward as shown in equation (2.35), the tail recursion solves backward as
given in equation (2.36).

n! = 1 × 2 × · · · × (n − 1) × n (2.35)
n! = n × (n − 1) × · · · × 2 × 1 (2.36)

A tail recursion can be expressed with an iterative program using a loop solving back-
ward. For example, in the k-permutation of n Problem 2.8, or KPN, defined on page 51,

Figure 2.26: Problem solving directions: (a) inductive programming solves forward from the basis P(1) toward the output P(n); (b) recursive programming invokes sub-problems backward from P(n) down to the basis, then returns to P(n); (c) tail recursion solves backward from P(n) and terminates at the basis.

Algorithm 2.11 by ascending permutation and Algorithm 2.10 by descending factorial can
be regarded as an inductive programming algorithm and a tail recursion algorithm, respec-
tively, from the recursion in equation (2.24).
As another example, consider the search in a sorted list Problem 2.18 defined on page 62.
The sequential search Algorithm 2.17 is a model of inductive programming. The first order
linear recurrence relation in Lemma 2.7 is a model of recursive programming. The iterative
Algorithm 2.21 stated on page 62 is a tail recursion algorithm.
Some problems are impossible to solve forward and thus require either recursive pro-
gramming or tail recursion. Recall Euclid's Algorithm 1.4 for the greatest common divisor
problem, gcd(m, n). The recurrence relation in Euclid's Algorithm 1.4 is a canonical
example of tail recursion. When it reaches the base case, the output is found and there is no
need to return to its invoking procedures. The tail recursion version, which is the iterative
version of solving backward using a loop, is stated as follows:

Algorithm 2.32. Euclid’s algorithm II

gcd(m, n)
while n 6= 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
t = n ...........................................2
n = m % n .....................................3
m = t .......................................... 4
return m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
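
In Python, the temporary variable t in lines 2-4 of Algorithm 2.32 can be folded into one simultaneous assignment; a minimal sketch:

    def gcd(m, n):
        # Euclid's algorithm as a loop: the tail recursion unrolled.
        while n != 0:
            m, n = n, m % n
        return m

    print(gcd(48, 36))   # 12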

2.5.3 Quotient and Remainder


The quotient-remainder theorem states that given any integer n and positive integer d,
there exists a unique integer pair (q, r) such that n = dq + r and 0 ≤ r < d (see [57, p 180]
for a proof). For example, if n = 22 and d = 4, there exists a unique pair (q = 5, r = 2),
such that 22 = 4 × 5 + 2. Consider the division problem which takes a dividend n and a
divisor d as inputs and returns the quotient q as output.

Problem 2.24. div(n, d)


Input:  n ∈ N and d ∈ Z⁺ where n ≥ 0
Output: q ∈ N such that q × d + r = n where 0 ≤ r < d.

This problem can also be formulated as a maximization problem.

maximize q
subject to q × d ≤ n (2.37)
where integer q ≥ 0

The following linear recurrence can be derived:


div(n, d) = ⎧ div(n − d, d) + 1     if n ≥ d
            ⎩ 0                     if n < d          (2.38)

Algorithm 2.33 solves the problem backward iteratively instead of solving forward. It is
a backward solving iterative programming version based on the tail recursion paradigm.

Algorithm 2.33. Division I

div(n, d)
    q = 0 . . . . . . . . . . 1
    while n ≥ d . . . . . . . . . . 2
        n = n − d . . . . . . . . . . 3
        q = q + 1 . . . . . . . . . . 4
    return q . . . . . . . . . . 5

Algorithm 2.34. Division II

div2(n, d)
    q = 0, m = d . . . . . . . . . . 1
    while m ≤ n . . . . . . . . . . 2
        m = m + d . . . . . . . . . . 3
        q = q + 1 . . . . . . . . . . 4
    return q . . . . . . . . . . 5

Thus far, ‘for’ loops have been utilized to implement algorithms of the inductive program-
ming paradigm. ‘While’ loops can also be used to design algorithms modeled on inductive
programming. A pseudo code that exemplifies this is shown in Algorithm 2.34.
All three algorithms - equation (2.38) by recursive programming, Algorithm 2.33 by
tail recursion, and Algorithm 2.34 by inductive programming - take Θ(n) time, or more
specifically, Θ(n/d) time.
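
Minimal Python sketches of both division algorithms follow; in div2, m is initialized to d so that it always holds (q + 1) × d, matching the loop guard.

    def div(n, d):
        # Division I: solve backward from n by repeated subtraction (tail recursion).
        q = 0
        while n >= d:
            n -= d
            q += 1
        return q

    def div2(n, d):
        # Division II: solve forward; m always holds (q + 1) * d.
        q, m = 0, d
        while m <= n:
            m += d
            q += 1
        return q

    print(div(22, 4), div2(22, 4))   # 5 5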
The problem of finding the remainder r, also known as modulo or modulus, is denoted
by the % operator, e.g. r = n % d. It is formally stated as follows:

Problem 2.25. Modulo (n % d)

Input:  n ∈ N and d ∈ Z⁺
Output: r ∈ N such that q × d + r = n where 0 ≤ r < d and q ∈ N.

The following linear recurrence is derived:


mod(n, d) = ⎧ mod(n − d, d)     if n ≥ d
            ⎩ n                 if n < d          (2.39)

Solving forward is seemingly impossible, but a backward-solving approach works iteratively.
An iterative programming algorithm that takes a backward solving approach based on the
tail recursion paradigm is stated as follows:

Algorithm 2.35. Modulo (n % d)


mod(n, d)
while n ≥ d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
n = n − d ...................................... 2
return n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Both the recursive programming algorithm in equation (2.39) and the tail recursion
based Algorithm 2.35 take Θ(n) time, or more specifically, Θ(n/d) time.
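
A one-loop Python sketch of Algorithm 2.35:

    def mod(n, d):
        # Algorithm 2.35: repeated subtraction, Theta(n/d) iterations.
        while n >= d:
            n -= d
        return n

    print(mod(22, 4))   # 2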

2.5.4 Square root of n


Consider the problem of finding the floor of the square root of n, or simply SRN. For
example, ⌊√2⌋ = 1, ⌊√5⌋ = 2, and ⌊√11⌋ = 3.

Problem 2.26. Floor of square root of n, SRN(n) = ⌊√n⌋

Input:  n ∈ Z⁺ (n > 1 for simplicity's sake)
Output: r ∈ Z⁺ such that r² ≤ n < (r + 1)²

Since 1 ≤ ⌊√n⌋ < n, it can be realized as the search in a sorted list Problem 2.18. The
first order linear recurrence relation in eqn (2.40) can be derived. Call SRN(n, n) initially.

SRN(n, r) = ⎧ r − 1             if (r − 1)² ≤ n
            ⎩ SRN(n, r − 1)     otherwise          (2.40)
The tail recursion version and the sequential search based version using inductive program-
ming are stated in Algorithm 2.36 and Algorithm 2.37, respectively.

Algorithm 2.36. Tail recursion ⌊√n⌋

SRN(n)
    r = n . . . . . . . . . . 1
    while (r − 1)² > n . . . . . . . . . . 2
        r = r − 1 . . . . . . . . . . 3
    return r − 1 . . . . . . . . . . 4

Algorithm 2.37. Sequential square root

SRN(n)
    r = 1 . . . . . . . . . . 1
    while (r + 1)² ≤ n . . . . . . . . . . 2
        r = r + 1 . . . . . . . . . . 3
    return r . . . . . . . . . . 4

Figure 2.27: Computational time complexities of the three algorithms for ⌊√n⌋, shown on the number line from 1 to n: inductive programming scans up from 1 to r = ⌊√n⌋, while tail recursion and recursive programming scan down from n to r + 1.

While the computational time complexities of both the first order recursion based algorithm
in eqn (2.40) and the tail recursion Algorithm 2.36 are n − √n = Θ(n), the time complexity
of Algorithm 2.37 is Θ(√n). This time complexity analysis is depicted in Figure 2.27.
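
Minimal Python sketches of Algorithms 2.36 and 2.37 make the Θ(n) versus Θ(√n) contrast easy to check empirically:

    def srn_tail(n):
        # Algorithm 2.36: count down from n, Theta(n - sqrt(n)) = Theta(n) steps.
        r = n
        while (r - 1) ** 2 > n:
            r -= 1
        return r - 1

    def srn_sequential(n):
        # Algorithm 2.37: count up from 1, Theta(sqrt(n)) steps.
        r = 1
        while (r + 1) ** 2 <= n:
            r += 1
        return r

    print(srn_tail(11), srn_sequential(11))   # 3 3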

2.5.5 Lexicographical Order


Most people use dictionaries and encyclopedias to understand a new concept, notion,
or terminology. Computer scientists, however, may be inclined to utilize these resources
to construct a computational problem and devise an algorithm for it. Such a process often
helps us understand related concepts or notions more concretely. For example, the term
‘lexicographical order’ may not sound familiar to many students. Informally, it is the order in
which words are sorted in a dictionary. Here, this concept shall be understood by formulating
a problem and devising algorithms.
The following three words, {‘9’, ‘80’, ‘123’}, can be ordered in two different ways. The
first is the numerical order where ‘9’ < ‘80’ < ‘123’. The other option is the lexicographical
order where ‘123’ < ‘80’ < ‘9’. Before defining the lexicographical order, the order of
characters used in words needs to be defined first. Let Σ be a finite set of characters, which
are quantifiable symbols, such as a blank, special characters, numbers and alphabetical
letters. Although certain standard order rules such as the ASCII table order (see [11] for
the ASCII table) are mandatory, the following character order rules are sufficient to order
examples used in this book:
Definition 2.2. Character order rules

‘ ’ < ‘!’ < · · · < ‘+’ < ‘−’ < · · · < ‘∼’ < ‘0’ < · · · < ‘9’ < ‘a’ < · · · < ‘z’
(empty)    (special characters)              (numbers)        (alphabet)

Rule 1: Empty character: A space or empty character precedes all other characters.
Rule 2: Special character: Special symbols such as {+, −} precede numbers.
Rule 3: Numbers: Numbers precede alphabetical letters.
Rule 4: Case insensitive: Upper and lower case characters are treated the same.
The order among special symbols is omitted, as they hardly appear, but the fact that
‘+’ < ‘−’ will be used later in this textbook.

Figure 2.28: Lexicographical order A1∼n < B1∼m examples: (a) A1∼7 = ‘THEOREM’ vs. B1∼6 = ‘THEORY’, where the first mismatch occurs at i = 6; (b) A1∼4 = ‘ALGO’ vs. B1∼3 = ‘LEX’, the best case (i = 1); (c) A1∼3 = ‘LEX’ vs. B1∼6 = ‘LEXICO’, the worst case (i = n + 1).

A word is a string of elements drawn from Σ. Given two words A1∼n and B1∼m , the
lexicographical order problem, or simply LEX, aims to determine whether A1∼n precedes
B1∼m . In other words, the goal is to determine whether the word A1∼n comes before B1∼m
in a dictionary. This problem is formally defined as follows:
Problem 2.27. Lexicographical order

Input:  Two words A1∼n and B1∼m where ∀x ∈ A1∼n ∪ B1∼m, x ∈ Σ

Output: LEX(A1∼n, B1∼m) =
        ⎧ True    if (∃i ∈ (1 ∼ n) such that ∀j ∈ (1 ∼ i − 1), aj = bj ∧ ai < bi)
        ⎨             ∨ (n < m ∧ ∀j ∈ (1 ∼ n), aj = bj)
        ⎩ False   otherwise


The following first order linear recurrence relation in equation (2.41) defines the problem.

LEX(A1∼n, B1∼m) = ⎧ True                   if a1 < b1 ∨ (n = 0 ∧ m ≠ 0)
                  ⎨ False                  if a1 > b1 ∨ m = 0              (2.41)
                  ⎩ LEX(A2∼n, B2∼m)        if a1 = b1

The recursive algorithm in equation (2.41) compares the first character of both words. If
the first character of A is lower than that of B, it returns True immediately. If it is higher,
it returns False. If they are equal, however, the algorithm recurses on the remaining
elements of both words.
One may simply scan from the left of both words using a loop to identify the first
character that differs between the two words. A pseudo code for this tail recursion algorithm
is stated as follows:

Algorithm 2.38. Lexicographical order

LEX(A1∼n , B1∼m )
i = 1 ............................................. 1
while i ≤ n and i ≤ m and ai = bi . . . . . . . . . . . . . . . . 2
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
if i > m, return False . . . . . . . . . . . . . . . . . . . . . . . . . . 4
else if i > n, return True . . . . . . . . . . . . . . . . . . . . . . . 5
else if ai < bi , return True . . . . . . . . . . . . . . . . . . . . . 6
else, return False . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7

In Figure 2.28 (a), the sixth characters differ and all characters from the first through
the fifth are the same. The best case scenario is when the first characters differ, as shown in
Figure 2.28 (b). The best case computational complexity is O(1). The worst case scenario
is when the end of the word is reached, as indicated in Figure 2.28 (c). The worst case
computational complexity is Θ(min(n, m)). Hence, the computational time complexity of
Algorithm 2.38 is O(n) or O(min(n, m)).
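
A minimal Python sketch of Algorithm 2.38 follows. It assumes both words are already lower-cased per Rule 4; for lowercase letters and digits, Python's built-in character comparison agrees with the character order rules of Definition 2.2.

    def lex(A, B):
        # Algorithm 2.38: scan for the first position where the words differ.
        i = 0
        while i < len(A) and i < len(B) and A[i] == B[i]:
            i += 1
        if i >= len(B):        # B ended first (or the words are equal): not before
            return False
        if i >= len(A):        # A is a proper prefix of B: before
            return True
        return A[i] < B[i]     # the first differing characters decide

    print(lex('theorem', 'theory'), lex('lex', 'lexico'))   # True True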

2.6 Theorem Proving


Before wrapping up this chapter, another role of inductive programming known as the
algorithmic proof for theorem proving is introduced. Suppose that we need to prove the
following proposition: “There are infinitely many odd numbers.” If the proposition is false,
there must exist a largest odd number. Note that the nth odd number, O(n) = 2n − 1,
was defined recursively in equation (2.5). Consider the following inductive programming
algorithm to find the largest odd number:

Algorithm 2.39. Infinitely many odd numbers

largestoddnum()
o = 1 ............................................. 1
while 2 ∤ o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
o = o + 2 .......................................3
return o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Clearly, Algorithm 2.39 loops forever and does not terminate, iteratively generating the
next larger odd number. Therefore, there are infinitely many odd numbers.
Similarly, to prove that there are infinitely many triangular numbers, one can write the
following inductive programming algorithm.
Algorithm 2.40. Infinitely many triangular numbers
largestTRnum()
n = o = 1 .........................................1
while n | 2o and (n + 1) | 2o . . . . . . . . . . . . . . . . . . . . . . 2
n = n + 1 ...................................... 3
o = o + n .......................................4
return o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Since Algorithm 2.40 continually generates the next larger triangular number in never ending
succession, there are infinitely many triangular numbers.

2.7 Exercises
Q 2.1. Consider a recursive formula in eqn (2.42).

f(n) = ⎧ 2f(n − 1) + 1     if n > 0
       ⎩ 0                 if n = 0          (2.42)

a). Find the value of f (7) by hand.


b). Find the value of f (80) by computer.
c). Devise an algorithm using inductive programming.
Q 2.2. Consider a recursive formula in eqn (2.43).

f(n) = ⎧ 2f(n − 1) + n     if n > 0
       ⎩ 0                 if n = 0          (2.43)

a). Find the value of f (7) by hand.


b). Find the value of f (51) by computer.
c). Devise an algorithm using inductive programming.
Q 2.3. Consider a recursive formula in eqn (2.44).

f(n) = ⎧ f(n − 1) + 2n − 1     if n > 1
       ⎩ 1                     if n = 1          (2.44)

a). Find the value of f (7) by hand.


b). Derive a closed form of eqn (2.44).
c). Prove that your derived closed form in b) is equivalent to eqn (2.44) using induction.

d). Devise an algorithm using inductive programming.

Q 2.4. Prove the following first order linear recurrence relation is equivalent to its closed
formula. Let T (0) = 0 for all base cases.

a). T (n) = T (n − 1) + c vs. T (n) = cn

b). T (n) = T (n − 1) + 2n vs. T (n) = n(n + 1)

c). T(n) = 2T(n − 1) + 1 vs. T(n) = 2ⁿ − 1

d). T(n) = 3T(n − 1) + 1 vs. T(n) = (3ⁿ − 1)/2
Q 2.5. Prove the correctness of the algorithms based on inductive programming.

a). Algorithm 1.5 on page 10 or, equivalently, the recurrence relation in Algorithm 2.1.

b). Algorithm 1.11 on page 21 or, equivalently, the recurrence relation in eqn (2.15).

c). Algorithm 2.6 on page 47 or, equivalently, the recurrence relation in eqn (2.19).

d). Algorithm 2.8 on page 49 or, equivalently, the recurrence relation in the equation (2.21).

e). Algorithm 2.13 on page 54 or, equivalently, the recurrence relation in eqn (2.27).

Q 2.6. Consider the problem of adding the first n even numbers.

a). Derive a first order linear recurrence relation of the problem.

b). Devise an algorithm using inductive programming.

c). Prove the correctness of your algorithm in b).

Q 2.7. Consider the sum of the first n square numbers (PRN) Problem 1.8 defined on
page 11.

a). Derive a first order linear recurrence relation.

b). Devise an algorithm using inductive programming.

c). Prove the correctness of your algorithm in b).

Q 2.8. Consider the sum of the first n cubic numbers problem (SCB):

SCB(n) = ∑ⁿᵢ₌₁ i³ = 1³ + 2³ + · · · + n³

a). Formulate the problem.

b). Derive a first order linear recurrence relation.

c). Devise an algorithm using inductive programming.

d). Prove the correctness of your algorithm in c).



e). Prove the closed form in eqn (2.45) using induction.

SCB(n) = (n(n + 1)/2)²          (2.45)

Q 2.9. Consider the nth tetrahedral number Problem 1.9, or simply THN, defined on
page 11 and the problem of adding first n tetrahedral numbers, or STH in short, considered
on page 29 as an exercise in Q 1.13.
a). Derive a first order linear recurrence relation for THN.
b). Devise an algorithm using inductive programming for THN.
c). Prove the correctness of your proposed algorithm in b).
d). Derive a first order linear recurrence relation for STH.
e). Devise an algorithm using inductive programming for STH.
Q 2.10. Consider the problems regarding a perfect k-ary tree: PTNk (h), the number of
nodes in a perfect k-ary tree of height h Problem 1.10 and PTDk (h), the sum of depths in
a perfect k-ary tree of height h Problem 1.11 defined on pages 12 and 13, respectively.

a). Derive a first order linear recurrence relation for PTNk (h).
b). Devise an algorithm using inductive programming for PTNk (h).
c). Prove the correctness of your algorithm provided in b).
d). Derive a first order linear recurrence relation for PTDk (h).
e). Devise an algorithm using inductive programming for PTDk (h).
f). Prove the correctness of your algorithm provided in e).

Q 2.11. Consider the problems regarding a perfect binary tree.

a). Formulate the number of nodes in a perfect binary tree of height h problem, PTN2 (h).
b). Derive a first order linear recurrence relation for PTN2 (h).
c). Devise an algorithm using inductive programming for PTN2 (h).
d). Formulate the sum of depths in a perfect binary tree of height h problem, PTD2 (h).
e). Derive a first order linear recurrence relation for PTD2 (h).
f). Devise an algorithm using inductive programming for PTD2 (h).

Q 2.12. Consider the sum of first n floor of log problem.


Problem 2.28. Sum of first n floor of log

Input:  n ∈ Z⁺
Output: SFL(n) = ∑ⁿᵢ₌₁ ⌊log i⌋

SFL(n) can be defined recursively as in eqn (2.46).

SFL(n) = ⎧ SFL(n − 1) + ⌊log n⌋     if n > 1
         ⎩ 0                        if n = 1          (2.46)

a). Find the value of SFL(7) by hand.

b). Prove the first order linear recurrence in eqn (2.46) is equivalent to the closed form in
eqn (2.47) using induction.

(n + 1)⌊log n⌋ − 2(2^⌊log n⌋ − 1)          (2.47)

c). Derive an algorithm for SFL(n) using inductive programming.

Q 2.13. Recall SPCk Problem 1.12 defined on page 14 which is adding the first n product
of consecutive k numbers.

a). Derive a first order linear recurrence relation.

b). Devise an algorithm using inductive programming.

c). Analyze the computational time complexity of your algorithm in b).

Q 2.14. Recall SSCk problem considered on page 29 which is adding the first n sum of
consecutive k numbers.

a). Derive a first order linear recurrence relation.

b). Devise an algorithm using inductive programming.

c). Analyze the computational time complexity of your algorithm in b).

Q 2.15. Consider the prefix product problem or simply PFP which multiplies all elements
in a list.

a). Formulate the problem.

b). Derive a first order linear recurrence relation.

c). Devise an algorithm using inductive programming.

d). Prove the correctness of your algorithm provided in c).

Q 2.16. Consider the generic product problem.

∏ⁿᵢ₌₁ f(i) = f(1) × f(2) × · · · × f(n)          (1.14)

a). Derive a first order linear recurrence relation.

b). Devise an algorithm using inductive programming.

c). Prove the correctness of your algorithm in b).



Q 2.17. Consider the double factorial problems. For example, 5!! = 5 × 3 × 1 and 6!! =
6 × 4 × 2.

a). Derive a first order linear recurrence relation for the double factorial of the nth even
number Problem 2.29.

Problem 2.29. Double factorial of the nth even number (DFE), (2n)!!

Input:  n ∈ N
Output: (2n)!! = ∏ⁿᵢ₌₁ 2i

b). Devise an algorithm using inductive programming for Problem 2.29.

c). Devise an algorithm using tail recursion for Problem 2.29.

d). Derive a first order linear recurrence relation for the double factorial of the nth odd
number Problem 2.30.

Problem 2.30. Double factorial of the nth odd number (DFO), (2n − 1)!!

Input:  n ∈ Z⁺
Output: (2n − 1)!! = ∏ⁿᵢ₌₁ (2i − 1)

e). Devise an algorithm using inductive programming for Problem 2.30.

f). Devise an algorithm using tail recursion for Problem 2.30.

Q 2.18. Consider the problem of finding the maximum value in an unsorted list of n number
of values.

a). Formulate the problem.

b). Derive a first order linear recurrence relation of the problem.

c). Devise an algorithm using inductive programming.

d). Prove the correctness of your algorithm in c).

e). Analyze the computational time complexity of your algorithm in c).

Q 2.19. Consider the problem of finding the kth smallest element in an unsorted list of n
number of values.

a). Formulate the problem.

b). Derive a first order linear recurrence relation of the problem.

c). Devise an algorithm using inductive programming.

d). Analyze the computational time complexity of your algorithm in c).



Q 2.20. Consider the problem of finding the subset sum of k numbers out of n numbers
such that the subset sum of these k items is maximized.

a). Formulate the problem.

b). Devise an algorithm using inductive programming.

c). Analyze the computational time complexity of your algorithm in b).

Q 2.21. A list is a sequence of symbols which are drawn from an alphabet or symbol set,
Σ. For example, a DNA string has Σ = {A, C, G, T}. A list ⟨ACTTAGUGGATT⟩ is not a
DNA sequence, as it contains an alien symbol ‘U’.

a). Formulate the problem of checking whether a sequence is a valid list on a given Σ.

b). Derive a first order linear recurrence relation of the problem.

c). Devise an algorithm using inductive programming.

d). Analyze the computational time complexity of your algorithm in c).

e). Provide a best and worst case scenario on the computational time complexities of your
algorithm in c).

Q 2.22. Consider the problem of checking whether a given sequence A1∼n of n quantifiable
elements is in a non-decreasing order. For example, issorted_asc(⟨1, 2, 2, 3, 9⟩) returns true
and issorted_asc(⟨1, 2, 1, 4, 7, 9⟩) returns false.

a). Formulate the problem.

b). Derive a first order linear recurrence relation of the problem.

c). Devise an inductive programming algorithm for the problem.

d). Analyze the computational time complexity of the algorithm given in c).

e). Devise an iterative algorithm using tail recursion.

f). Analyze the computational time complexity of the algorithm given in e).

Q 2.23. Consider the problem of checking whether a given sequence A1∼n of n numbers
is a down-up alternating sequence. For example, isdownup(⟨5, 2, 9, 1, 4⟩) returns true and
isdownup(⟨5, 2, 1, 9, 4, 3⟩) returns false.

a). Formulate the problem.

b). Derive a first order linear recurrence relation of the problem.

c). Devise an inductive programming algorithm for the problem.

d). Analyze the computational time complexity of the algorithm given in c).

Q 2.24. Consider the down-up problem, or simply DUP, which is one of alternating permu-
tation problems modified from the up-down alternating permutation Problem 2.19 defined
on page 65.

a). Formulate the down-up problem.

b). Devise an inductive programming algorithm for the down-up problem.

c). Analyze the computational time complexity of your algorithm in b).

d). Devise a recursive algorithm for the down-up problem.

Q 2.25. Consider the problem of checking whether a given sequence A1∼n of n num-
bers is an up-up-down sequence. For example, isupupdown(⟨1, 2, 9, 3, 4⟩) returns true and
isupupdown(⟨5, 2, 1, 9, 4, 3⟩) returns false.

a). Formulate the problem.

b). Derive a first order linear recurrence relation of the problem.

c). Devise an inductive programming algorithm for the problem.

d). Analyze the computational time complexity of the algorithm given in c).

Q 2.26. Consider the up-up-down problem, or simply UUD, which is one of the alternating
permutation problems modified from the up-down alternating permutation Problem 2.19
defined on page 65.

a). Formulate the up-up-down problem.

b). Devise an inductive programming algorithm for the up-up-down problem.

c). Analyze the computational time complexity of your algorithm in b).

d). Devise a recursive algorithm for the up-up-down problem.

Q 2.27. Given a list of n unique quantifiable elements, the number of descents problem, or
simply NDS, is to find the number of elements that are less than the immediately preceding
element. For a toy example of A = ⟨3, 2, 7, 9, 4, 1⟩, NDS(A) = 3, as the element 2 is less than
3, the element 4 is less than 9, and the element 1 is less than 4.

a). Formulate the number of descents problem.

b). Derive a first order linear recurrence of the number of descents problem.

c). Devise an inductive programming algorithm for the number of descents problem.

d). Provide the computational time complexity of your proposed algorithm in c).

e). Illustrate your proposed algorithm in c) on a toy example of A = ⟨3, 2, 7, 9, 4, 1⟩.

Q 2.28. Consider the problem of searching for a query q in a list of unsorted distinct
elements. For a toy example of A = ⟨3, 2, 7, 9, 4, 1⟩ and q = 7, the output should be 3, as
a3 = q = 7. If q = 5, the output should be nothing, or ∞.

a). Formulate the problem.

b). Derive a first order linear recurrence of the problem.

c). Devise an algorithm using inductive programming.



d). Provide the computational time complexity of your proposed algorithm in c).

Q 2.29. Consider a string S1∼2n whose elements are drawn from a multiset, {1, 1, 2, 2, · · · , n,
n}. Each k ∈ {1 ∼ n} appears exactly twice in S1∼2n. Consider the greater between elements
sequence validation problem, or GBW in short. If all the numbers appearing between the
two occurrences of each k in S1∼2n are greater than k, S1∼2n is a valid greater between
elements sequence. Let such a valid sequence be simply a GBW sequence. For example,
⟨2, 2, 1, 1, 3, 3⟩ and ⟨1, 2, 3, 3, 2, 1⟩ are valid GBW sequences. ⟨1, 3, 2, 3, 2, 1⟩ is an invalid
GBW sequence because 2 appears between the 3's. ⟨3, 2, 1, 1, 2, 3⟩ is an invalid GBW sequence
because 2 and 1 appear between the 3's and 1 appears between the 2's.

a). Which of the following sequence(s) is(are) (a) valid GBW sequence(s)?

⟨1, 4, 4, 1, 2, 3, 3, 2⟩   ⟨4, 4, 1, 2, 3, 3, 1, 2⟩   ⟨1, 4, 4, 2, 2, 1, 3, 3⟩   ⟨4, 4, 2, 1, 1, 3, 3, 2⟩

b). Formulate the problem of checking whether a sequence is a valid greater between
elements sequence.

c). Derive a first order linear recurrence relation.

d). Devise an algorithm using inductive programming.

e). Provide the computational time complexity of your proposed algorithm in d).

Q 2.30. Consider a string S1∼2n whose elements are drawn from a multiset, {1, 1, 2, 2, · · · , n,
n}. Each k ∈ {1 ∼ n} appears exactly twice in S1∼2n. Consider the less between elements
sequence validation problem, or LBW in short. If all the numbers appearing between the
two occurrences of each k in S1∼2n are less than k, S1∼2n is a valid less between elements se-
quence. Let such a valid sequence be simply an LBW sequence. For example, ⟨2, 2, 1, 1, 3, 3⟩
and ⟨3, 2, 1, 1, 2, 3⟩ are valid LBW sequences. ⟨3, 1, 2, 1, 2, 3⟩ is an invalid LBW sequence
because 2 appears between the 1's. ⟨1, 2, 3, 3, 2, 1⟩ is an invalid LBW sequence because 2 and 3
appear between the 1's and 3 appears between the 2's.

a). Which of the following sequence(s) is(are) (a) valid LBW sequence(s)?

⟨4, 1, 1, 4, 3, 2, 2, 3⟩   ⟨1, 1, 4, 3, 2, 2, 4, 3⟩   ⟨4, 1, 1, 3, 3, 4, 2, 2⟩   ⟨1, 1, 3, 4, 4, 2, 2, 3⟩

b). Formulate the problem of checking whether a sequence is a valid less between elements
sequence.

c). Derive a first order linear recurrence relation.

d). Devise an algorithm using inductive programming.

e). Provide the computational time complexity of your proposed algorithm in d).

Q 2.31. Consider the kth order statistics Problem 2.15 defined on page 59 and the bubble
sort Algorithm 2.31 on page 74.

a). Devise an iterative algorithm similar to the bubble sort Algorithm 2.31 to find the kth
largest element. You may call this algorithm “bubble-select.”

b). Analyze the computational time complexity of your algorithm in a).


c). Devise an iterative algorithm similar to the bubble sort Algorithm 2.31 to find the kth
smallest element.
Q 2.32. Consider the colexicographic or colex order [9, p 177], which is a variant of the
lexicographical order. It reads finite sequences from the right to the left instead of reading
them from the left to the right. Three examples in Figure 2.29 provide insights.

Figure 2.29: Colexicographical order A1∼n < B1∼m examples: (a) A1∼4 = ‘ALGO’ vs. B1∼3 = ‘LEX’ (the i = n case); (b) A1∼3 = ‘LEX’ vs. B1∼5 = ‘COLEX’ (the i = 0 case); (c) A1∼6 = ‘LEXICO’ vs. B1∼6 = ‘MEXICO’ (the i = 1 case).

a). Formulate the problem, which is to determine whether the first word A1∼n precedes
the second word B1∼m in the colex order.
b). Derive a first order linear recurrence relation.
c). Devise an algorithm using tail recursion
d). Analyze the computational time complexity of your algorithm in c).
e). Order the following four words in the lexicographical order.

{‘induction’, ‘recursion’, ‘lexico’, ‘colex’}

f). Order the above four words listed in e) in the colexicographical order.
Q 2.33. Consider the greatest common divisor of n numbers. For example, mGCD(6, 9, 21)
= 3.
a). Formulate the greatest common divisor of multi-number problem, or mGCD in short.
b). Derive a first order linear recurrence relation.
c). Devise an algorithm using inductive programming.
Q 2.34. Consider the least common multiple of n numbers. For example, mLCM(6, 9, 21) =
126
a). Formulate the least common multiple of multi-number problem, or mLCM in short.
b). Derive a first order linear recurrence relation.
c). Devise an algorithm using inductive programming.
Q 2.35. Consider the rising factorial power problem, or simply RFP, which is often denoted
as n^k̄ or n^(k).

n^k̄ = n × (n + 1) × · · · × (n + k − 1)     (k factors)

The rising factorial power [102, p 50] is also called the ascending factorial [165, p 8] and the
Pochhammer function.

a). Formulate the rising factorial power problem.


b). Derive a first order linear recurrence relation.
c). Devise an algorithm using inductive programming.
d). Provide the computational time complexity of your proposed algorithm in c).

Q 2.36. Assuming that the m digit long integer multiplication takes Θ(m²) time, show the
computational time complexities of the following algorithms:

a). Algorithm 2.8 on page 49 for the (kⁿ) power Problem 2.6.

b). Algorithm 2.9 on page 50 for the (n!) factorial Problem 2.7.
Chapter 3

Divide and Conquer

The divide and conquer strategy has been widely utilized as a common principle in
business, military, and politics. In computer science, the divide and conquer paradigm is a
general and popular algorithm design technique that first divides a big problem into small
sub-problems recursively until the size of the problem is conquerable. Next, it combines
the solutions of the sub-problems into a solution for the big problem. This idea dates as
far back as Babylonia in 200 BC [103]. The divide and conquer paradigm often allows one
to come up with efficient algorithms for many computational problems.
The objective of this chapter is twofold. The first objective is to improve the reader's
ability to devise algorithms using the divide and conquer paradigm for various computational
problems. The second one is to understand divide recurrence relations and their closed forms
for the analysis of algorithms. Readers must be able to utilize the Master Theorem in order
to analyze divide and conquer algorithm time complexities. The Master Theorem provides
a cookbook solution to derive asymptotic notations for various divide recurrence relations.

3.1 Dichotomic Divide and Conquer


The most common form of divide and conquer algorithms is the dichotomic divide and
conquer algorithm. It can be devised for problems that can be stated by the following
generic divide recurrence relation in eqn (3.1).

P(A1∼n) = ⎧ combine(P(A1∼⌊n/2⌋), P(A⌊n/2⌋+1∼n))     if n > 1
          ⎩ basis solution                          if n = 1          (3.1)

First, the input for the original problem is divided into two half sized sub-problems which
occur recursively. Solutions of these two sub-problems must be combined to derive the
original solution. A successful divide and conquer algorithm relies on this combining step,


which often requires innovative thinking. The second line in eqn (3.1) is the basis case,
which corresponds to the conquer part. The divide recurrence relation in eqn (3.1) can be
stated as the following generic template in pseudo-code style:

Divide and Conquer Template

P(A1∼n)
    If n = 1,
        return basis solution                    ..... Conquer
    else
        solleft = P(A1∼⌊n/2⌋)                    ..... Divide
        solright = P(A⌊n/2⌋+1∼n)
        return combine(solleft, solright)        ..... Combine

3.1.1 Finding Min

Figure 3.1: A divide and conquer tree for finding a minimum. The input array ⟨5, 9, 2, 4, 0, 8, 1, 7⟩ is halved recursively down to single elements (conquer), and the minima of sibling sub-arrays are combined pairwise back up to the root, yielding 0.

To understand the divide and conquer paradigm, consider the simple Problem 2.14 of finding the minimum element in an array, defined on page 58. Drawing a divide and conquer tree on a toy example, as in Figure 3.1, is helpful not only for identifying the distributive property used to merge the sub-solutions, but also for analyzing the computational time complexity. Partial sub-arrays form the nodes of the divide and conquer tree, and the root node contains the full input array. The output for each sub-array is shown above the sub-array.
To come up with a divide and conquer algorithm, a divide recurrence relation, as in
eqn (3.2), can be derived.

Lemma 3.1. Divide recurrence of finding min

findmin(A_{1∼n}) = { a_1                                                     if n = 1
                   { min(findmin(A_{1∼⌊n/2⌋}), findmin(A_{⌊n/2⌋+1∼n}))      otherwise     (3.2)

The following divide and conquer algorithm can be devised straight from the template
using eqn (3.2). Let the list A be global and call findmin(1, n) initially.

Algorithm 3.1. Find minimum

findmin(b, e)
  if b = e, return a_b
  else
    m = ⌊(b + e − 1)/2⌋
    return min(findmin(b, m), findmin(m + 1, e))

The following distributive property in eqn (3.3) is the essence of the correctness of Algorithm 3.1.

min(A_{1∼n}) = min(min(A_{1∼x}), min(A_{x+1∼n}))  for 1 ≤ x < n     (3.3)

The computational time complexity of Algorithm 3.1, T(n), depends recursively on its two half-sized sub-problems' time complexities, T(n/2), as depicted in Figure 3.2 (c). The combining step requires only a constant-time comparison. Hence, T(n) can be stated as the divide recurrence T(n) = 2T(n/2) + O(1). Deriving such a divide recurrence is sufficient to prove that Algorithm 3.1 takes Θ(n), according to the Master Theorem, which shall be presented later in section 3.3.2. Here is a quick and simple proof by induction when n is an exact power of 2. In that case, a divide and conquer algorithm's recursive calls form a perfect binary tree, as shown in Figure 3.2 (a).

Figure 3.2: Analyzing divide and conquer time complexity: (a) full binary tree, (b) T(n), (c) 2T(n/2) + 1, (d) T(n/2) + n.

Theorem 3.1. The solution of the following divide recurrence in eqn (3.4) is T(n) = 2n − 1 when n is an exact power of 2.

T(n) = { 1               if n = 1, i.e., k = 0
       { 2T(n/2) + 1     if n = 2^k, for k > 0     (3.4)

Proof. (by induction) Basis: When k = 0, i.e., n = 2^0 = 1, (T(1) = 1 by eqn (3.4)) = (T(1) = 2 × 1 − 1 = 1).
Inductive step: Assuming T(2^k) = 2 × 2^k − 1, show T(2^{k+1}) = 2 × 2^{k+1} − 1.

T(2^{k+1}) = 2T(2^{k+1}/2) + 1                   by eqn (3.4)
           = 2T(2^k) + 1 = 2(2 × 2^k − 1) + 1    by assumption
           = 2 × 2^{k+1} − 1                     goal □

Another way to see why T(n) = 2n − 1 = Θ(n) is the relationship between the number of leaf nodes and the number of internal nodes in a full binary tree. A full binary tree is a binary tree whose internal nodes each have exactly two child nodes.

Theorem 3.2. A full binary tree with n leaf nodes contains n − 1 internal nodes.
Proof. Basis: n = 1; there is only one leaf node and no internal node.
Inductive step: Supposing that the claim is correct for n, show that a full binary tree with (n + 1) leaf nodes contains n internal nodes. Since the tree is a full binary tree, the only way to increase the number of internal nodes is to change one leaf node into an internal node by adding two new leaf nodes underneath it. Thus, the number of leaf nodes changes from n to n − 1 + 2 = n + 1 and the number of internal nodes changes from n − 1 to n. □

Figure 3.3: First seven divide recurrence trees.

Divide recurrence trees on an input of size n always form a full binary tree; the first seven such trees are provided in Figure 3.3. The number of internal nodes is always one less than the number of leaf nodes, i.e., roughly half of all nodes. The binary tree consisting of the internal nodes alone roughly corresponds to T(n/2), as shown in Figure 3.2 (c). Hence, T(n) = T(n/2) + n = 2n − 1 according to the Master Theorem, or by the following Theorem 3.3.
Theorem 3.3. The solution of the following divide recurrence in eqn (3.5) is T(n) = 2n − 1 when n is an exact power of 2.

T(n) = { 1              if n = 1, i.e., k = 0
       { T(n/2) + n     if n = 2^k, for k > 0     (3.5)
A proof by induction is left as an exercise in Q 3.1 (b), and the full proof for any positive integer n shall be covered in chapter 5 on page 276.
Although the asymptotic complexity of Algorithm 3.1 is the same as that of the inductive programming Algorithm 2.18, the latter is practically better and simpler. The divide and conquer paradigm fails to provide a faster algorithm for this particular problem.

3.1.2 Number of Ascents


Consider the number of ascents Problem 2.11, defined on page 55. Before embarking on
the algorithm by the divide and conquer paradigm, drawing a divide and conquer tree on a
toy example often provides a better insight, as given in Figure 3.4.
To combine the sub-problems' solutions, there are two cases to consider, depending on whether the last element of the left half-sized sub-problem's list is less than or greater than the first element of the right half-sized sub-problem's list. If greater, the solution is simply the sum of the two sub-solutions. If less, the boundary pair itself is one more ascent, so the solution is the sum of the two sub-solutions plus one. Hence, the following divide recurrence relation in eqn (3.6) can be derived:

0
 if n = 0 or 1
NAS(A1∼n ) = NAS(A1∼d n2 e ) + NAS(Ad n2 e+1∼n ) + 1 if ad n2 e < ad n2 e+1 and n > 1 (3.6)

NAS(A1∼d n2 e ) + NAS(Ad n2 e+1∼n ) if ad n2 e > ad n2 e+1 and n > 1


Figure 3.4: Number of ascents divide and conquer tree (toy input ⟨8, 1, 3, 7, 2, 4, 5, 6⟩).

An algorithm by the divide and conquer paradigm is stated as follows: Let A1∼n be
global and call NAS(1, n) initially.

Algorithm 3.2. Divide & conquer NAS

NAS(b, e)
  sol = 0
  if b < e
    m = ⌊(b + e)/2⌋
    sol = NAS(b, m)
    sol = sol + NAS(m + 1, e)
    if a_m < a_{m+1},
      return sol + 1
  return sol

The computational time complexity of Algorithm 3.2, T(n), depends recursively on its two half-sized sub-problems' time complexities, T(n/2). The combining step requires only a constant-time comparison. Hence, T(n) satisfies the divide recurrence T(n) = 2T(n/2) + O(1). By Theorem 3.1, T(n) = Θ(n).
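For concreteness, here is a Python sketch of Algorithm 3.2 under the same 1-based bound convention; the helper name nas is hypothetical.

def nas(A, b, e):
    # Number of ascents a_i < a_{i+1} within A[b..e] (1-based bounds).
    sol = 0
    if b < e:
        m = (b + e) // 2
        sol = nas(A, b, m) + nas(A, m + 1, e)
        if A[m - 1] < A[m]:               # the boundary pair is one more ascent
            sol += 1
    return sol

# Example: nas([8, 1, 3, 7, 2, 4, 5, 6], 1, 8) returns 5.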

3.1.3 Alternating Permutation

Consider the up-down Problem 2.19, defined on page 65. Before embarking on the
algorithm by the divide and conquer paradigm, drawing a couple of divide and conquer
trees on toy examples often provides a better insight, as given in Figure 3.5.
To combine sub-problems’ solutions, there are four cases, as illustrated in Figure 3.6.
An algorithm by the divide and conquer paradigm is stated as follows: Let A1∼n be global
and call up-down(1, n) initially.

Figure 3.5: The divide and conquer Algorithm 3.3 illustration for the up-down problem (toy inputs ⟨5, 1, 2, 0, 4, 9, 8, 7⟩ and ⟨5, 1, 2, 0, 4, 9⟩).

Figure 3.6: Four cases of merging two up-down/down-up sequences: (a) even-odd, no swap; (b) even-odd, swap; (c) odd-even, no swap; (d) odd-even, swap.

Algorithm 3.3. Divide & conquer up-down permutation

up-down(b, e)
  if b < e
    m = ⌊(b + e)/2⌋
    up-down(b, m)
    up-down(m + 1, e)
    if m is even and a_m < a_{m+1},
      swap(a_m, a_{m+1})
    if m is odd and a_m > a_{m+1},
      swap(a_m, a_{m+1})
  return

The computational time complexity of Algorithm 3.3, T(n), depends recursively on its two half-sized sub-problems' time complexities, T(n/2). The combining step requires only a constant-time comparison and a possible swap. Hence, T(n) satisfies the divide recurrence T(n) = 2T(n/2) + O(1). By Theorem 3.1, T(n) = Θ(n).
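A Python sketch of Algorithm 3.3 is shown below; it rearranges the list in place, and the parity tests mirror the merge step of the pseudo-code (the 1-based index m is translated to 0-based list positions).

def up_down(A, b, e):
    # Rearrange A[b..e] (1-based) into an up-down alternating permutation.
    if b < e:
        m = (b + e) // 2
        up_down(A, b, m)
        up_down(A, m + 1, e)
        if m % 2 == 0 and A[m - 1] < A[m]:    # m even: a_m > a_{m+1} required
            A[m - 1], A[m] = A[m], A[m - 1]
        if m % 2 == 1 and A[m - 1] > A[m]:    # m odd: a_m < a_{m+1} required
            A[m - 1], A[m] = A[m], A[m - 1]

# Example: A = [5, 1, 2, 0, 4, 9, 8, 7]; up_down(A, 1, 8) leaves A as an
# alternating sequence such as [1, 5, 0, 4, 2, 9, 7, 8].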

3.1.4 Merge Sort


A well-known case in which the divide and conquer paradigm demonstrates better performance than a plain inductive programming technique is the sorting Problem 2.16, defined on page 60. It should be noted, though, that inductive programming based algorithms, when combined with other paradigms covered in later chapters, achieve performance comparable to the one introduced here.
Before embarking on the divide and conquer algorithm for the sorting problem, it is
necessary to design an algorithm for the problem of merging two sorted lists into one sorted
list. This problem is defined as follows:

Problem 3.1. Merge two sorted lists

Input: Two sorted lists A_{1∼n_a} and B_{1∼n_b} in non-decreasing order.
Output: A non-decreasing sorted list C_{1∼n_a+n_b} such that both A_{1∼n_a} and B_{1∼n_b} are sub-sequences of C_{1∼n_a+n_b} and A_{1∼n_a} = C_{1∼n_a+n_b} − B_{1∼n_b}.

Figure 3.7 (a) and (b) provide sample inputs and the output of Problem 3.1. A first order linear recurrence relation can be derived as follows:

mergeSL(A_{1∼n_a}, B_{1∼n_b}) = { B_{1∼n_b}                                              if n_a = 0
                                { A_{1∼n_a}                                              if n_b = 0
                                { append(mergeSL(A_{1∼n_a−1}, B_{1∼n_b}), a_{n_a})       if a_{n_a} ≥ b_{n_b}     (3.7)
                                { append(mergeSL(A_{1∼n_a}, B_{1∼n_b−1}), b_{n_b})       if a_{n_a} < b_{n_b}

A pseudo code of an inductive programming algorithm is stated as follows:

Algorithm 3.4. Merge two sorted lists

mergeSL(A_{1∼n_1}, B_{1∼n_2})
  k, i, j = 1
  while i ≤ n_1 and j ≤ n_2
    if a_i < b_j
      c_k = a_i
      i = i + 1
    else
      c_k = b_j
      j = j + 1
    k = k + 1
  if i > n_1, C_{k∼n_1+n_2} = B_{j∼n_2}
  else, C_{k∼n_1+n_2} = A_{i∼n_1}
  return C_{1∼n_1+n_2}

John von Neumann (1903-1957) was a Hungarian-American scientist. His major contributions in computing include the von Neumann architecture, linear programming, the Monte Carlo method, self-replicating machines, and stochastic computing. He suggested the merge sort algorithm as early as 1945 [103, p.159].
© The photograph is in the public domain.

The inputs A_{1∼n_a} = ⟨A, D, D, O, T⟩ and B_{1∼n_b} = ⟨F, R, Y⟩ merge into the output C_{1∼n_a+n_b} = ⟨A, D, D, F, O, R, T, Y⟩.

Figure 3.7: Merging two sorted lists: (a) input, (b) output, (c) a step-by-step trace of Algorithm 3.4 with the cursors i, j, and k advancing through A, B, and C.

Algorithm 3.4 is illustrated in Figure 3.7 (c). Its computational time complexity is linear, i.e., Θ(n_1 + n_2), as only a single comparison and two assignment operations occur in each iteration (see [84] for the full description and analysis).
The following divide recurrence relation for the sorting problem can be derived utilizing
Algorithm 3.4.

Lemma 3.2. Divide recurrence of sorting

sort(A_{1∼n}) = { a_1                                                    if n = 1
                { mergeSL(sort(A_{1∼⌊n/2⌋}), sort(A_{⌊n/2⌋+1∼n}))       otherwise

The following divide and conquer algorithm can be devised straight from the template using
Lemma 3.2. Let the list A be global and call mergesort(1, n) initially.

Algorithm 3.5. Merge sort

mergesort(b, e)
  if b = e, return a_b
  else
    m = ⌊(b + e − 1)/2⌋
    return mergeSL(mergesort(b, m), mergesort(m + 1, e))

Note that mergeSL sub-routine Algorithm 3.4 needs a little modification so that it can
be called in the mergesort Algorithm 3.5. It is left for an exercise. Figure 3.8 illustrates the
mergesort Algorithm 3.5.
The computational time complexity of Algorithm 3.5 on an input of size n, T(n), consists of two T(n/2) terms for the two half-sized sub-problem calls plus the merging step, which takes Θ(n). T(n) = 2T(n/2) + n is depicted in Figure 3.9 (a). Thus, T(n) = Θ(n log n) according to the Master Theorem or the following Theorem 3.4.
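The two routines together are only a few lines in Python. The sketch below, with illustrative names, uses list slicing instead of explicit (b, e) bounds, which is the small modification of mergeSL alluded to above.

def merge_sl(L, R):
    # Merge two sorted lists into one sorted list (Algorithm 3.4).
    C, i, j = [], 0, 0
    while i < len(L) and j < len(R):
        if L[i] < R[j]:
            C.append(L[i]); i += 1
        else:
            C.append(R[j]); j += 1
    return C + L[i:] + R[j:]              # copy whichever tail remains

def mergesort(A):
    # Algorithm 3.5 on a Python list.
    if len(A) <= 1:
        return A
    m = len(A) // 2
    return merge_sl(mergesort(A[:m]), mergesort(A[m:]))

# Example: mergesort([9, 5, 2, 4, 0, 8, 7, 1]) returns [0, 1, 2, 4, 5, 7, 8, 9].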

Figure 3.8: Mergesort Algorithm 3.5 illustration (toy input ⟨9, 5, 2, 4, 0, 8, 7, 1⟩, sorted to ⟨0, 1, 2, 4, 5, 7, 8, 9⟩).

Figure 3.9: Analyzing divide and conquer time complexity: (a) T(n) = 2T(n/2) + n, (b) T(n) = n × (log n + 1); the recursion tree has log n + 1 levels with n work per level.

Theorem 3.4. The solution of the following divide recurrence in eqn (3.8) is T(n) = n log n + n when n is an exact power of 2.

T(n) = { 1              if n = 1, i.e., k = 0
       { 2T(n/2) + n    if n = 2^k, for k > 0     (3.8)

A proof by induction is left as an exercise in Q 3.1 (c), and the full proof for any positive integer n shall be covered in chapter 5 on page 277. Figure 3.9 (b) gives the insight: the height of the divide recurrence tree is log n + 1 and there are n items on each level.

3.1.5 Maximum Contiguous Sub-sequence Sum


Consider the maximum contiguous sub-sequence sum Problem 1.15, or simply MCSS, defined on page 22, which is one of the consecutive sub-sequence arithmetic problems. So as to devise a divide and conquer algorithm, one may draw the divide and conquer tree on a toy example such as ⟨−3, 1, 4, 3, −4, 7, −4, −1⟩, as shown in Figure 3.10. One may use the naïve Algorithm 1.13 on page 22 to fill out the solutions in the divide and conquer tree.
One may draw two or more toy examples to think about how to merge two sub-solutions, as it is not as trivial as in the previous problems. It would be easy if one of the two sub-solutions were always the solution for the larger original problem, as in the cases in Figure 3.11 (c) and (e). However, there are cases where the actual solution passes through the divide break point, as in Figure 3.11 (a). In order to handle these situations, the maximum contiguous

Figure 3.10: A divide and conquer algorithm illustration for the MCSS problem (toy input ⟨−3, 1, 4, 3, −4, 7, −4, −1⟩; the overall answer is 11).

sub-sequence that includes the divide break point must be found. First, the prefix sums of the right half sub-sequence are evaluated using Algorithm 2.13, described on page 54. Next, the postfix sums of the left half sub-sequence are evaluated, which can be done in linear time by reversing the prefix sum Algorithm 2.13. Next, find the maximum value of the prefix sums in the right sub-sequence and of the postfix sums in the left sub-sequence. If a maximum value is negative, as in Figure 3.11 (f), it is taken to be zero, which corresponds to an empty sub-sequence. Next, add the maximum prefix sum and the maximum postfix sum. This value is the maximum contiguous sub-sequence sum that passes through the divide break point. These processes are illustrated in Figure 3.11 (b), (d), and (f), which correspond to the toy examples in Figure 3.11 (a), (c), and (e), respectively. Finally,

Figure 3.11: Three cases of merging two sub-solutions of MCSS sub-problems: (a, b) the best sub-sequence crosses the break point, MCSS(A_{1∼8}) = max(8 + 3, 8, 3) = 11; (c, d) the left sub-solution wins, MCSS(A_{1∼8}) = max(1 + 1, 5, 3) = 5; (e, f) the right sub-solution wins, MCSS(A_{1∼8}) = max(0 + 4, 3, 5) = 5.

the solution for the original larger problem is either the left sub-problem solution, the right sub-problem solution, or the maximum contiguous sub-sequence sum that passes through the divide break point. In fact, the solution is the maximum of these three values.
A pseudo code for the divide and conquer algorithm for the MCSS problem is stated as
follows:
Algorithm 3.6. Divide & conquer MCSS

MCSS-dc(b, e)
  if b = e and a_b > 0, return a_b ................... 1
  else if b = e and a_b ≤ 0, return 0 ................ 2
  else ............................................... 3
    m = ⌊(b + e − 1)/2⌋ ............................... 4
    sol_l = MCSS-dc(b, m) ............................. 5
    sol_r = MCSS-dc(m + 1, e) ......................... 6
    ls, mls = 0 ....................................... 7
    for i = m down to b ............................... 8
      ls = ls + a_i ................................... 9
      if ls > mls, mls = ls .......................... 10
    rs, mrs = 0 ...................................... 11
    for i = m + 1 to e ............................... 12
      rs = rs + a_i .................................. 13
      if rs > mrs, mrs = rs .......................... 14
    return max(sol_l, sol_r, mls + mrs) .............. 15
Lines 7∼10 find the maximum value of the postfix sums in the left half sub-sequence, and lines 11∼14 find the maximum value of the prefix sums in the right half sub-sequence.
The computational time complexity of Algorithm 3.6, T(n), depends recursively on its two half-sized sub-problems' time complexities, 2T(n/2), plus the combining step, which takes linear time. Hence, T(n) satisfies the divide recurrence T(n) = 2T(n/2) + Θ(n). By Theorem 3.4, T(n) = Θ(n log n).
This divide and conquer idea can be applied to the remaining consecutive sub-sequence arithmetic problems, which are left as exercises.
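A direct Python transcription of Algorithm 3.6 follows; the names sol_l, mls, etc. mirror the pseudo-code, and empty sub-sequences contribute 0, as in lines 1∼2.

def mcss_dc(A, b, e):
    # Maximum contiguous sub-sequence sum of A[b..e] (1-based bounds).
    if b == e:
        return max(A[b - 1], 0)
    m = (b + e - 1) // 2
    sol_l = mcss_dc(A, b, m)
    sol_r = mcss_dc(A, m + 1, e)
    ls = mls = 0                          # best postfix sum of the left half
    for i in range(m, b - 1, -1):
        ls += A[i - 1]
        mls = max(mls, ls)
    rs = mrs = 0                          # best prefix sum of the right half
    for i in range(m + 1, e + 1):
        rs += A[i - 1]
        mrs = max(mrs, rs)
    return max(sol_l, sol_r, mls + mrs)   # line 15: best of three candidates

# Example: mcss_dc([-3, 1, 4, 3, -4, 7, -4, -1], 1, 8) returns 11.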

3.1.6 Search Unsorted List


Consider Problem 1.13 of searching unsorted distinct elements, defined on page 20. The following divide recurrence relation in eqn (3.9) can be derived easily by observing Figure 3.12 (a): the query is present if and only if it is present in either half.

search(A_{1∼n}, q) = { T                                                       if n = 1 and a_1 = q
                     { F                                                       if n = 1 and a_1 ≠ q     (3.9)
                     { search(A_{1∼⌈n/2⌉}, q) ∨ search(A_{⌈n/2⌉+1∼n}, q)      if n > 1

The computational time complexity of a divide and conquer algorithm based on eqn (3.9) would be Θ(n), as T(n) depends recursively on its two half-sized sub-problems' time complexities, 2T(n/2), and the combining step requires only a constant-time logical operation. Hence, T(n) = 2T(n/2) + O(1), and thus T(n) = Θ(n) by Theorem 3.1.
However, a divide and conquer algorithm can be O(n). First, select the middle pivot element and compare it with the query. If it is a match, stop and return m. Otherwise, call the two half-sized sub-problems excluding the middle pivot element. Note that the circled area is not visited in Figure 3.12 (b). A pseudo code can be stated as follows:

Figure 3.12: Divide and conquer trees for searching an unsorted list: (a) search q = 2 on A_{1∼8} by eqn (3.9), (b) search q = 2 on A_{1∼8} by Algorithm 3.7, where the sub-trees under a matched pivot are never visited.

Algorithm 3.7. Divide and conquer search

Let the list A_{1∼n} and q be global and call search(1, n) initially.
search(b, e)
  if b > e, return 0                       (empty range)
  else if b = e and a_b = q, return b
  else if b = e and a_b ≠ q, return 0
  else
    m = ⌊(b + e − 1)/2⌋
    if a_m = q, return m
    else
      o = search(b, m − 1)
      if o = 0, o = search(m + 1, e)
      return o

The worst case computational time complexity of Algorithm 3.7 is Θ(n) by Theorem 3.1. Since the best case time complexity of Algorithm 3.7 is constant, when the middle pivot element is a match, the computational time complexity is O(n).
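A Python sketch of Algorithm 3.7 is given below; the explicit empty-range guard (b > e), needed when the pivot is the first element of a range, is an assumption added here rather than a line from the text.

def search(A, q, b, e):
    # Position of q in the unsorted list A[b..e] (1-based), or 0 if absent.
    if b > e:
        return 0
    if b == e:
        return b if A[b - 1] == q else 0
    m = (b + e - 1) // 2
    if A[m - 1] == q:                     # lucky pivot match: stop early
        return m
    o = search(A, q, b, m - 1)
    if o == 0:
        o = search(A, q, m + 1, e)
    return o

# Example: search([8, 1, 3, 7, 2, 4, 5, 6], 2, 1, 8) returns 5.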

3.1.7 Palindrome
Consider Problem 2.22 of checking whether a list is a palindrome, defined on page 68. Recall that the typical inductive programming technique did not solve the problem in Chapter 2. Similarly, the typical dichotomic divide and conquer template does not solve the problem either; flexible halving is necessary. Figure 3.13 gives full palindrome divide trees for a couple of toy examples.
As depicted in Figure 3.14, in order for a sequence to be a palindrome, the inner half sub-sequence and the outer half sub-sequence must both be palindromes. There are three cases to consider. The first case is when n is odd, as exemplified in Figure 3.14 (a); the outer sub-sequence must then have even length. The second case is when both n and n/2 are even. The last case is when n is even but n/2 is odd.
The following pseudo-code can be derived:

Figure 3.13: Palindrome divide trees: (a) n is odd (⟨k, a, n, a, k, a, n, a, k⟩), (b) n is even but n/2 is odd (⟨n, e, v, e, r, r, e, v, e, n⟩).

Figure 3.14: Three cases of dividing palindrome sequences into inner and outer halves: (a) n is odd, (b) n/2 is even, (c) n/2 is odd.

Algorithm 3.8. Divide & conquer palindrome verification

isPalindrome(A_{1∼n})
  if n = 1, return true ............................................ 1
  else if (n = 2 ∨ n = 3) ∧ a_1 = a_n, return true ................. 2
  else if (n = 2 ∨ n = 3) ∧ a_1 ≠ a_n, return false ................ 3
  else return (isPalindrome(A_{⌊n/4⌋+1∼n−⌊n/4⌋})
           ∧ isPalindrome(concat(A_{1∼⌊n/4⌋}, A_{n−⌊n/4⌋+1∼n}))) ... 4

Note that n = 3 must be handled in the basis cases, since ⌊3/4⌋ = 0 and the recursion in line 4 would not shrink the problem.
Line 4 checks the inner half and outer half sub-sequences instead of the left half and right half sub-sequences. The worst case computational time complexity of Algorithm 3.8, T(n), depends recursively on its two half-sized sub-problems' time complexities, 2T(n/2), and the combining step requires only a constant-time logical operation. Hence, T(n) satisfies the divide recurrence T(n) = 2T(n/2) + O(1). By Theorem 3.1, T(n) = Θ(n). Since Algorithm 3.8 terminates early if the input is not a palindrome, the complexity is O(n).
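In Python, the inner/outer halving of Algorithm 3.8 can be written with slicing; the extra n = 3 basis discussed above is included.

def is_palindrome(A):
    # Flexible halving: check the inner half and the outer half.
    n = len(A)
    if n <= 1:
        return True
    if n <= 3:                            # n = 2 or 3: compare the ends
        return A[0] == A[-1]
    q = n // 4
    inner = A[q:n - q]                    # middle sub-sequence
    outer = A[:q] + A[n - q:]             # concatenated outer quarters
    return is_palindrome(inner) and is_palindrome(outer)

# Examples: is_palindrome("kanakanak") and is_palindrome("neverreven")
# both return True; is_palindrome("abca") returns False.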

3.1.8 Checking Up-Down Sequence


Another example of flexible halving is the Alternating Permutation Problem 2.19 and
Checking Up-down Sequence Problem 2.20, defined on pages 65 and 66, respectively. Note
that the divide and conquer Algorithm 3.3 for the up-down alternating permutation prob-
lem does not call two up-down half-sized problems but calls both up-down and down-up
sub-problems when necessary. However, it can be solved by calling solely the up-down
sub-problems. To illustrate the algorithm, consider the Checking Up-down Sequence Prob-
lem 2.20. Every up turn occurs at odd indices. When dividing a sequence, the second half

Figure 3.15: Checking up-down sequence divide and conquer tree (toy input ⟨2, 11, 3, 10, 7, 9, 1, 5, 4, 8, 6⟩; every sub-problem evaluates to T).

sub-problem’s sequence must start at an odd index. Hence, the following divide and conquer
algorithm can be devised with the flexible halving technique, as depicted in Figure 3.15. Call
the method, isUpDown(1, n), initially.

Algorithm 3.9. Divide & conquer up-down verification

isUpDown(b, e)
  if b = e, return true
  else if e − b = 1 ∧ a_b < a_e, return true
  else if e − b = 1 ∧ a_b > a_e, return false
  else
    ns = e − b + 1
    if ns is even ∧ ns/2 is even, m = b + ns/2 − 1
    else if ns is even ∧ ns/2 is odd, m = b + ns/2
    else if ns is odd ∧ (ns − 1)/2 is even, m = b + (ns − 1)/2 − 1
    else if ns is odd ∧ (ns − 1)/2 is odd, m = b + (ns − 1)/2
    if a_m < a_{m+1}, return false
    return isUpDown(b, m) ∧ isUpDown(m + 1, e)

The computational time complexity of Algorithm 3.9 is O(n). The best case running time is O(1), when the first mid point already violates the up-down property. The worst case running time follows the recurrence relation T(n) = 2T(n/2) + O(1), and thus T(n) = Θ(n).
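The flexible split of Algorithm 3.9 ports to Python as follows; the parity case analysis for m assumes, as in the text, that every recursive range starts at an odd global index.

def is_up_down(A, b, e):
    # Verify a_b < a_{b+1} > a_{b+2} < ... on A[b..e] (1-based bounds).
    if b == e:
        return True
    if e - b == 1:
        return A[b - 1] < A[e - 1]
    ns = e - b + 1
    h = ns // 2 if ns % 2 == 0 else (ns - 1) // 2
    m = b + h - 1 if h % 2 == 0 else b + h    # second half starts at odd index
    if A[m - 1] < A[m]:                       # m is even, so a down is required
        return False
    return is_up_down(A, b, m) and is_up_down(A, m + 1, e)

# Example: is_up_down([2, 11, 3, 10, 7, 9, 1, 5, 4, 8, 6], 1, 11) returns True.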

3.2 Bisection With a Single Branch Call


Thus far, binary divide recurrence relations have been considered. In this section, numerous problems that have linear divide recurrence relations are considered. Instead of calling two half-sized sub-problems, many problems require calling only one half-sized sub-problem. Divide recursion trees are binary or unary depending on whether the divide recurrence relations are binary or linear, respectively.

Figure 3.16: Binary search: (a) divide & conquer illustration, (b) size balanced binary search tree, (c) Algorithm 3.10 illustration where q = 10:

step  b  e  m  s_m        outcome
1     1  9  5  s_5 = 7    s_5 < q
2     6  9  8  s_8 = 12   s_8 > q
3     6  7  7  s_7 = 11   s_7 > q
4     6  6  6  s_6 = 9    s_6 ≠ q, return ε

3.2.1 Binary Search


Consider the classic problem of finding an element, q, in a sorted list, S_{1∼n}.

Problem 3.2. Search an element in a sorted list, searchl(S_{1∼n}, q)

Input: A sorted list S_{1∼n} of n unique quantifiable elements and a query element, q
Output: the position x such that s_x = q, or ε if q ∉ S

For a toy example of S = ⟨1, 2, 4, 5, 7, 9, 11, 12, 15⟩, n = 9, and q = 11, the output should be 7, since s_7 = 11. A straightforward algorithm using the inductive programming paradigm would take O(n). However, the element can be found more quickly by using the divide and conquer paradigm. Similar to searching for a word in a dictionary, compare the middle element of the sorted list for a match, and, if not a match, recursively search only the respective half of the list. As depicted in Figure 3.16 (a), to search for q = 11 in S, first compare q with the middle element, s_5. Since (q = 11) > (s_5 = 7), search S_{6∼9} only. Although the concept of binary search is natural and widely known, it was first mentioned as part of the Moore School Lectures by John Mauchly in 1946 [29, 103].
The following divide recurrence relation for the binary search can be derived:

binarysearch(S_{1∼n}, q) = { ⌊n/2⌋ + 1                                      if q = s_{⌊n/2⌋+1}
                           { binarysearch(S_{1∼⌊n/2⌋}, q)                   if q < s_{⌊n/2⌋+1}     (3.10)
                           { binarysearch(S_{⌊n/2⌋+2∼n}, q) + ⌊n/2⌋ + 1     if q > s_{⌊n/2⌋+1}
                           { ε                                              if n = 0

John William Mauchly (1907-1980) was an American physicist. His major contributions include designing ENIAC, one of the earliest electronic general-purpose computers, along with John Presper Eckert, and pioneering fundamental computer concepts including the stored program, subroutines, and programming languages.
© Photo Credit: courtesy of Charles Babbage Institute, University of Minnesota.

Eqn (3.10) can be written as the following pseudo-code. Let the sorted list S be global and call binarysearch(1, n, q) initially.

Algorithm 3.10. Binary search

binarysearch(b, e, q)
  m = ⌈(b + e)/2⌉ ..................................... 1
  if (b > e) ∨ (b = e ∧ q ≠ s_m), return ε ............ 2
  else if q = s_m, return m ........................... 3
  else if q < s_m, return binarysearch(b, m − 1, q) ... 4
  else if q > s_m, return binarysearch(m + 1, e, q) ... 5
Algorithm 3.10 is illustrated on a toy example in Figure 3.16 (c) with q = 10. Line 2 returns an error indicating that the query, q, is not in the list. Line 3 is the case where the pivot value matches the query, q. Line 4 invokes only the left side sub-problem, and line 5 invokes only the right side sub-problem.
The computational time complexity of the binary search Algorithm 3.10 is O(log n). The best case running time complexity is constant; it is as if one opened the dictionary directly to the page containing the word being sought. In the worst case, however, one has to keep halving the size of the problem, and the number of halving steps is Θ(log n).
Theorem 3.5. The average time complexity of binary search Algorithm 3.10 is Θ(log n).
Proof. Each element in the list corresponds to a node in the size balanced binary search tree, as shown in Figure 3.16 (b). The computational cost for an element is the depth of the corresponding node in the tree. Let SD(n) be the sum of all depths. It has the following recurrence relation: SD(n) = 2SD(⌊n/2⌋) + n, as depicted in Figure 3.17. SD(n) = Θ(n log n)

Figure 3.17: Sum of all elements' number of comparisons: SD(n) − n = 2SD(⌊n/2⌋).

by the Master Theorem or Theorem 3.4. The sum of the depths of all nodes divided by the number of nodes, n, is the average case:

SD(n)/n = Θ(n log n)/n = Θ(log n)

Equivalently, if each node in the size balanced binary search tree is labeled by a breadth first traversal, the depth of the ith node is ⌈log(i + 1)⌉:

(Σ_{i=1}^{n} ⌈log(i + 1)⌉)/n = Θ(n log n)/n = Θ(log n) □

3.2.2 Bisection Method

Figure 3.18: Illustration of the bisection method: each step halves the bracketing interval [b, e] around the sign change.

Consider the problem of finding the root of a polynomial function, f(x), that crosses the x axis exactly once between 0 and n. A simplified version of the problem, RTF in short, is formulated as follows:

Problem 3.3. Root finding: findroot(f(x), n)

Input: A polynomial function with a single root, f(x), and a positive integer, n.
Output: ⌊r⌋ such that f(r) = 0 where 0 ≤ r < n.

For example, if f(x) = x^2 − x − 12 and n = 10, the output is 4 because 4^2 − 4 − 12 = 0. A straightforward algorithm using inductive programming would take O(n). However, the root can be found much faster by using the divide and conquer paradigm. If the signs of the function at the beginning position and at the ending position differ, there must be a root within. As depicted in Figure 3.18, the problem can be halved by checking the sign at the middle position. The divide recurrence relation in eqn (3.11) can be derived, where f(x) is global and findroot(0, n) is called initially.

findroot(b, e) = { ε                              if f(b) × f(e) > 0
                 { ⌊(b + e)/2⌋                    if f(⌊(b + e)/2⌋) × f(⌊(b + e)/2⌋ + 1) ≤ 0
                 { findroot(b, ⌊(b + e)/2⌋)       if f(b) × f(⌊(b + e)/2⌋) ≤ 0                   (3.11)
                 { findroot(⌊(b + e)/2⌋ + 1, e)   if f(⌊(b + e)/2⌋ + 1) × f(e) ≤ 0
Eqn (3.11) can be written as the following pseudo-code:

Algorithm 3.11. Bisection method for finding a single root

findroot(b, e)
  if f(b) × f(e) > 0, return ε
  else
    m = ⌊(b + e)/2⌋
    if f(m) × f(m + 1) ≤ 0, return m
    else if f(b) × f(m) ≤ 0, return findroot(b, m)
    else (if f(m + 1) × f(e) ≤ 0), return findroot(m + 1, e)

Figure 3.18 illustrates Algorithm 3.11. The computational running time is Θ(log n).
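A Python sketch of Algorithm 3.11; f is passed as a parameter instead of being global, and None stands for the error value ε.

def findroot(f, b, e):
    # Integer bisection on [b, e], assuming f has a single sign change there.
    if f(b) * f(e) > 0:
        return None                       # no root bracketed
    m = (b + e) // 2
    if f(m) * f(m + 1) <= 0:
        return m                          # the root lies in [m, m + 1)
    if f(b) * f(m) <= 0:
        return findroot(f, b, m)
    return findroot(f, m + 1, e)

# Example: findroot(lambda x: x**2 - x - 12, 0, 10) returns 4.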

3.2.3 Power
Consider the power Problem 2.6, defined on page 49. Figure 3.19 provides a clear idea for designing a divide and conquer algorithm. Since the solutions of the left and right sub-problems are identical, only one half-sized sub-problem needs to be called, as illustrated in Figure 3.19 (b).

Figure 3.19: Power problem illustration: (a) a divide & conquer tree for a^9 (n = 9), (b) the unary call chain for 2^9 (a = 2, n = 9): 2^9 = 2^4 × 2^4 × 2 = 512, 2^4 = 2^2 × 2^2 = 16, 2^2 = 2^1 × 2^1 = 4, 2^1 = 2.

The following divide recurrence relation can be derived:

pow(a, n) = { pow(a, n/2)^2            if n is even
            { pow(a, ⌊n/2⌋)^2 × a      if n is odd     (3.12)
            { 1                        if n = 0

A divide and conquer algorithm follows directly from eqn (3.12).

Algorithm 3.12. Divide & conquer powering: a^n

pow(a, n)
  if n = 0, return 1
  else if n is even,
    o = pow(a, n/2)
    o = o × o
  else if n is odd,
    o = pow(a, (n − 1)/2)
    o = o × o × a
  return o
The correctness of Algorithm 3.12 can be proven directly from the product law of exponents:

x^a · x^b = x^{a+b}     (3.13)

The formal proof in Theorem 3.6 can be skipped for now until readers reach Chapter 5.
Theorem 3.6. Algorithm 3.12, pow(a, n), correctly produces a^n.
Proof. (by strong induction)
Basis case: When n = 0, (pow(a, 0) = 1) = (a^0 = 1).
Inductive step: Let P(n) be the proposition. Assume that P(j) is true for all positive integers j where 0 < j ≤ k. Show P(k + 1) is also true. There are two cases. If k + 1 is even, then pow(a, k + 1) = pow(a, (k+1)/2) × pow(a, (k+1)/2) is true because a^{(k+1)/2} × a^{(k+1)/2} = a^{k+1}. If k + 1 is odd, then pow(a, k + 1) = pow(a, k/2) × pow(a, k/2) × a is true because a^{k/2} × a^{k/2} × a = a^{k+1}. Hence, P(k + 1) is true. □

The computational running time is Θ(log n), since T(n) = T(n/2) + O(1), assuming that a multiplication operation takes constant time. Without this assumption, the time complexity matches that of an algorithm for the n digit long integer multiplication problem.
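A Python sketch of Algorithm 3.12 (the name power is chosen to avoid shadowing Python's built-in pow):

def power(a, n):
    # Compute a**n with Θ(log n) multiplications, following eqn (3.12).
    if n == 0:
        return 1
    o = power(a, n // 2)                  # n // 2 equals (n - 1)/2 when n is odd
    return o * o if n % 2 == 0 else o * o * a

# Example: power(2, 9) returns 512, via 2^9 = 2^4 · 2^4 · 2.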

3.2.4 Modulo
Consider the modulo arithmetic (n % d) Problem 2.25, defined on page 77. In order to
devise a divide & conquer algorithm, draw a divide & conquer tree with a toy example as
given in Figure 3.20.

Figure 3.20: Modulo illustration: the divide & conquer tree for 50 % 9 = 5, built from 25 % 9 = 7, 12 % 9 = 3, and the basis 6 % 9 = 6 at the leaves.

The first insight from the tree in Figure 3.20 is that the solution for the big problem is the sum of the two identical half-sized sub-solutions, i.e., double one sub-solution. If this sum exceeds d, one may subtract d from it once; recall that the output is bounded between 0 and d − 1. Another caveat is the case where n is odd: one must be added to the sum of the two (n−1)/2-sized sub-solutions. A divide recurrence relation can be stated as given in eqn (3.14).

mod(n, d) = { mod(2 × mod(n/2, d), d)            if n is even
            { mod(2 × mod(⌊n/2⌋, d) + 1, d)      if n is odd     (3.14)
            { n                                  if n < d

A divide and conquer algorithm follows directly from the divide recurrence relation in eqn (3.14).

Algorithm 3.13. Divide & conquer modulo (n % d)

mod(n, d)
  if n < d
    return n
  else
    if n is even
      r = mod(n/2, d)
      r = 2 × r
    else
      r = mod((n − 1)/2, d)
      r = 2 × r + 1
    if r ≥ d, r = r − d
    return r

The correctness of Algorithm 3.13 can be proven directly from the following Lemma 3.3.

Lemma 3.3. Addition property of modulo

mod(a + b, d) = mod(mod(a, d) + mod(b, d), d)

Proof. Let a = q_a d + r_a and b = q_b d + r_b.

mod(a + b, d) = mod((q_a + q_b)d + r_a + r_b, d)
              = mod(r_a + r_b, d)
              = mod(mod(a, d) + mod(b, d), d) □

The time complexity of Algorithm 3.13 is Θ(log n). To compute mod(12345, 9) = 6, only twelve steps are necessary, as outlined in Table 3.1, while Algorithm 2.35 by the tail recursion paradigm on page 78 requires 1,371 steps of computation. To compute mod(1234567, 5) = 2, Algorithms 3.13 and 2.35 require 19 and 246,913 steps, respectively.

Table 3.1: Outline of the computation of Algorithm 3.13 for mod(12345, 9) = 6.

n      mod(n, 9)  |  n     mod(n, 9)  |  n    mod(n, 9)  |  n   mod(n, 9)
12345  6          |  1543  4          |  192  3          |  24  6
6172   7          |  771   6          |  96   6          |  12  3
3086   8          |  385   7          |  48   3          |  6   6
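A Python transcription of Algorithm 3.13; only halving, doubling, a parity test, and one conditional subtraction are used.

def mod(n, d):
    # n % d by halving, following eqn (3.14); Θ(log n) recursive steps.
    if n < d:
        return n
    if n % 2 == 0:
        r = 2 * mod(n // 2, d)
    else:
        r = 2 * mod((n - 1) // 2, d) + 1
    if r >= d:                            # re-normalize into 0 .. d - 1
        r -= d
    return r

# Examples: mod(50, 9) returns 5; mod(12345, 9) returns 6.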


Consider the problem of computing mod(an , d) = an %d. Many theorems, such as Fer-
mat’s little theorem and Euler’s totient theorem in number theory (see [77] for the full
description of these theorems), require computing the function.
Problem 3.4. Modular of power: modpow(a, n, d) = mod(an , d)
Input: a, n, and d ∈ Z+ where d > 1
Output: an % d
A naïve algorithm would first compute the very big integer b = a^n and then compute mod(b, d):

modpow(a, n, d) = mod(pow(a, n), d)     (3.15)

This algorithm takes Θ(n): computing the power a^n takes only Θ(log n) multiplications if Algorithm 3.12 is used, but computing the mod takes Θ(log a^n) = Θ(n) if Algorithm 3.13 is used. However, this problem can be computed much more efficiently by the divide & conquer paradigm.

Figure 3.21: a^n modulo illustration: the divide & conquer tree for 2^10 % 11 = 1, built from the sub-solutions 2^5 % 11 = 10, 2^2 % 11 = 4, and 2^1 % 11 = 2.

Consider the divide and conquer tree in Figure 3.21. After careful observation, the following divide recurrence relation in eqn (3.16) can be derived.

modpow(a, n, d) = { mod(modpow(a, n/2, d)^2, d)           if n is even
                  { mod(modpow(a, ⌊n/2⌋, d)^2 × a, d)     if n is odd     (3.16)
                  { mod(a, d)                             if n = 1

A divide and conquer algorithm follows directly from the divide recurrence relation in eqn (3.16).

Algorithm 3.14. Divide & conquer modulo of power

modpow(a, n, d)
  if n = 1, return mod(a, d)
  else if n is even
    r = modpow(a, n/2, d)
    return mod(r × r, d)
  else
    r = modpow(a, (n − 1)/2, d)
    return mod(r × r × a, d)
The correctness of Algorithm 3.14 can be proven directly from the following Lemma 3.4.

Lemma 3.4. Product property of modulo

mod(a × b, d) = mod(mod(a, d) × mod(b, d), d)

Proof. Let a = q_a d + r_a and b = q_b d + r_b.

a × b = (q_a d + r_a) × (q_b d + r_b)
      = (q_a q_b d + q_a r_b + q_b r_a)d + r_a r_b
mod(a × b, d) = mod(r_a r_b, d)
              = mod(mod(a, d) × mod(b, d), d) □
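A Python sketch of Algorithm 3.14, using the native % operator for the small reductions:

def modpow(a, n, d):
    # a**n mod d without ever forming the big integer a**n (eqn (3.16)).
    if n == 1:
        return a % d
    r = modpow(a, n // 2, d)
    return (r * r) % d if n % 2 == 0 else (r * r * a) % d

# Example: modpow(2, 10, 11) returns 1, as in Figure 3.21.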

Figure 3.22: Divide & conquer illustration for the quotient and remainder problem: 66/9 = 7·9 + 3, built from 33/9 = 3·9 + 6, 16/9 = 1·9 + 7, and the basis 8/9 = 0·9 + 8.

3.2.5 Quotient and Remainder Problem


Consider the division Problem 2.24, defined on page 77. At first glance, it is hard to derive a divide recurrence relation for the quotient alone. However, if the problem is combined with the modulo Problem 2.25 into the quotient-remainder problem, it becomes possible. Drawing a divide and conquer tree on a small toy example usually provides an idea of how to design a divide and conquer algorithm, as given in Figure 3.22.
A pseudo code that returns both the quotient and the remainder can be stated as follows:

Algorithm 3.15. Divide and conquer quotient and remainder algorithm

quotrem(n, d)
  if n < d, return (0, n)
  else if n is even
    (q_h, r_h) = quotrem(n/2, d)
    if 2 × r_h < d
      q = 2 × q_h
      r = 2 × r_h
    else
      q = 2 × q_h + 1
      r = 2 × r_h − d
  else (if n is odd)
    (q_h, r_h) = quotrem((n − 1)/2, d)
    if 2 × r_h + 1 < d
      q = 2 × q_h
      r = 2 × r_h + 1
    else
      q = 2 × q_h + 1
      r = 2 × r_h + 1 − d
  return (q, r)
Both quotient and remainder are found in Θ(log n) by Algorithm 3.15.
Theorem 3.7. Algorithm 3.15 quotrem(n, d) correctly produces (q, r) such that n = q×d+r
where 0 ≤ r < d.

Proof. By the quotient-remainder theorem, n = q × d + r where 0 ≤ r < d. When n is even, let (q_h, r_h) be the unique integer pair such that n/2 = q_h × d + r_h. Then n = 2q_h × d + 2r_h. If 2r_h < d, (q, r) = (2q_h, 2r_h). If 2r_h ≥ d, n = (2q_h + 1) × d + 2r_h − d and, thus, (q, r) = (2q_h + 1, 2r_h − d).
When n is odd, let (q_h, r_h) be the unique integer pair such that (n − 1)/2 = q_h × d + r_h. Then n = 2q_h × d + 2r_h + 1. If 2r_h + 1 < d, (q, r) = (2q_h, 2r_h + 1). If 2r_h + 1 ≥ d, n = (2q_h + 1) × d + 2r_h + 1 − d and, thus, (q, r) = (2q_h + 1, 2r_h + 1 − d). □
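A Python sketch of Algorithm 3.15; the parity term n % 2 re-attaches the bit dropped by halving, folding the pseudo-code's even and odd branches together.

def quotrem(n, d):
    # Return (q, r) with n = q*d + r and 0 <= r < d, by halving.
    if n < d:
        return (0, n)
    qh, rh = quotrem(n // 2, d)
    q, r = 2 * qh, 2 * rh + (n % 2)
    if r >= d:
        q, r = q + 1, r - d
    return (q, r)

# Example: quotrem(66, 9) returns (7, 3), matching Figure 3.22.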

3.3 Beyond Bisection


Thus far, divide and conquer algorithms have divided a problem into halves. Henceforth, examples of algorithms that divide a large problem into more than two sub-problems are presented, where the size of each sub-problem is smaller than n/2, i.e., n/b with b > 2.

3.3.1 Logarithm
The most natural arithmetic problem to which the divide and conquer paradigm can be applied is one involving the logarithm. Consider the problem of computing the floor of a logarithm value, defined as follows:

Problem 3.5. flog_b(n)

Input: n ∈ Z⁺ and b ∈ R⁺ with b > 1
Output: ⌊log_b n⌋ = x such that x ≤ log_b n < x + 1 where x ∈ N

The quantity can be computed very efficiently by divide and conquer. To think backward, consider the following puzzle. Suppose that bacteria double every day and fill the lake they have infested after precisely 100 days. On what day was the lake half full? The answer is the 99th day. Let the size of the lake be n and let the number of days needed to fill the lake be nb(n). Since the bacteria double every day, nb(n) = nb(n/2) + 1. This is exactly the recursive definition of the logarithm base 2. What if the bacteria triple every day? Then nb(n) = nb(n/3) + 1. The backward thinking for the floor of the logarithm bases two and three is illustrated in Figure 3.23 (a) and (b), respectively.
The divide recurrence relation for the floor of the logarithm base b, where b > 1, is given in eqn (3.17), which is the divide and conquer algorithm itself. Note that the basis covers all n < b: for such n, ⌊n/b⌋ = 0 and the recursion would otherwise not terminate, as the leaves in Figure 3.23 (b) illustrate.

flog_b(n) = { flog_b(⌊n/b⌋) + 1     if n ≥ b
            { 0                     if n < b     (3.17)

The correctness of the algorithm in eqn (3.17) follows directly from the logarithmic rule of the product in Theorem 1.9.

Theorem 3.8. The algorithm in eqn (3.17) correctly produces ⌊log_b n⌋.

Proof. log_b n = log_b((n/b) · b) = log_b(n/b) + log_b b = log_b(n/b) + 1 □
b b b
Since only one recursive branch call is necessary, the time complexity of the algorithm in eqn (3.17) is Θ(log_b n) = Θ(log n).
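Eqn (3.17) is a one-line Python function; integer floor division plays the role of ⌊n/b⌋.

def flog(n, b):
    # floor(log_b n) for n >= 1 and integer b >= 2, following eqn (3.17).
    return 0 if n < b else flog(n // b, b) + 1

# Examples: flog(11, 2) returns 3; flog(22, 3) returns 2 (Figure 3.23).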

Figure 3.23: Floor of log: (a) floor of log base two (⌊log 11⌋ = 3), (b) floor of log base three (⌊log_3 22⌋ = 2); the leaves are the n < b basis cases.

3.3.2 Master Theorem


Bentley et al. provided a general method for solving divide-and-conquer recurrences
in [22], and a modernized version appears under the name “Master Theorem” in [42]. Master
Theorem takes a, b, and f (n) as inputs, and outputs its asymptotic notation for many divide
recurrence forms of T (n) = aT ( nb ) + f (n) where a ≥ 1 and b > 1. b determines the size of
the sub-problem and a is the number of the sub-problems of size nb . Finally, f (n) can be
thought of as the time complexity of merging solutions of sub-problems.

Theorem 3.9. Master Theorem

If T(n) has a divide recurrence relation of the form T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1,

Case 1: If f(n) = O(n^{log_b a − ε}) for some ε > 0, then T(n) = Θ(n^{log_b a}).
Case 2: If f(n) = Θ(n^{log_b a} log^k n) for some k ≥ 0, then T(n) = Θ(n^{log_b a} log^{k+1} n).
Case 3: If f(n) = Ω(n^{log_b a + ε}) for some ε > 0 and a·f(n/b) ≤ c·f(n) for some c < 1, then T(n) = Θ(f(n)).
For example, to derive an asymptotic notation for the divide recurrence relation T(n) = T(n/3) + O(1), where a = 1, b = 3, and f(n) = O(1), Master Theorem 3.9 can be utilized. First, try Case 1. Since 1 = ω(n^{−ε}) for any ε > 0, it cannot be Case 1. Next, try Case 2 with k = 0. Since 1 = Θ(log^0 n), T(n) = Θ(log n). Once a recurrence fits into one of the three cases, there is no need to check the other cases. Nevertheless, if we try Case 3: since 1 = o(n^{log_3 1 + ε}) for any ε > 0, it cannot be Case 3.
Consider Algorithm 3.12 on page 108 to compute 2^n. The merge step involves an n/2 digit long integer multiplication, i.e., 2^n = 2^{n/2} × 2^{n/2}; the integer 2^{n/2} is n/2 digits long since log 2^{n/2} = n/2. If a Θ(n^2) multiplication algorithm is used, the computational time complexity of Algorithm 3.12 becomes T(n) = T(n/2) + Θ(n^2). Master Theorem 3.9 allows us to find a closed asymptotic notation. First, identify the corresponding parts: a = 1, b = 2, and f(n) = Θ(n^2). Next, try Case 1. Since n^2 = ω(n^{−ε}) for any ε > 0, it cannot be Case 1. Next, try Case 2. Since n^2 ≠ Θ(n^0 log^k n) for any k ≥ 0, it cannot be Case 2. Finally, try Case 3 with ε = 2. Since n^2 = Ω(n^ε) and f(n/2) = n^2/4 ≤ cn^2 for any 1/4 ≤ c < 1, T(n) = T(n/2) + Θ(n^2) = Θ(n^2). This observation can be further generalized.
Corollary 3.1. If T(n) = T(n/2) + f(n) and f(n) = Ω(n^p) for some p > 0, then T(n) = Θ(f(n)).
Proof. Proven by Case 3 of Master Theorem 3.9 with ε = p. □

Figure 3.24: Corollary 3.1 illustration: (a) f(n) = n, T(n) = 2f(n); (b) f(n) = n^2, T(n) = (4/3)f(n); (c) f(n) = √n, T(n) ≈ 3.4143 f(n).

The special kind of recursion in Corollary 3.1 can be understood graphically using an idea similar to the famous Achilles and the Tortoise (see [148, p 19] for Zeno's paradoxes). Suppose a tortoise has T(n) work to do, and T(n) is composed of two parts, f(n) and T(n/2). When the tortoise completes the f(n) part, it still has to complete the remaining T(n/2) recursively. Adding up all of the work done by the time the basis case is reached gives T(n) ≈ 2f(n) = Θ(f(n)) when f(n) = n, as depicted in Figure 3.24. Figure 3.24 (a), (b), and (c) show the cases f(n) = n, n^2, and √n. Indeed, if T(n) = T(n/2) + f(n) with its base f(n) = 1 if n < 1, and f(n) = Θ(n^p) for some p > 0,

T(n) = (1 + 1/(2^p − 1)) f(n)     (3.18)
If f(n) = o(n^p) for every p > 0, however, Case 3 does not apply. If f(n) = O(log^p n), Case 2 sometimes applies, e.g., when p = 0 or p = 1: if f(n) = O(1) or Θ(log n), then T(n) = Θ(log n) or Θ(log^2 n), respectively, by Case 2 of Master Theorem 3.9. If f(n) = Θ(log log n), however, the Master Theorem does not apply. To be more specific, since there is no k with 0 < k < 1 such that log log n = Θ(log^k n), Master Theorem 3.9 does not cover this case, yet T(n) = Θ(log n log log n). Perhaps it seems safe to state the following:

Conjecture 3.1. If T(n) = T(n/2) + f(n) and f(n) = O(log^p n) for some p ≥ 0, then T(n) = Θ(f(n) log n).

Although Master Theorem 3.9 is not a panacea, it is an extremely useful tool for divide recurrences of the form T(n) = aT(n/b) + f(n).
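As a quick sanity checker, the hypothetical helper below applies the three cases for the common special shape f(n) = Θ(n^k); it is a sketch only, since exact float comparison of the critical exponent is fragile when log_b a is irrational.

import math

def master(a, b, k):
    # Asymptotic solution of T(n) = a*T(n/b) + Theta(n^k) by Theorem 3.9.
    c = math.log(a, b)                    # critical exponent log_b a
    if k < c:
        return f"Theta(n^{c:.3f})"        # Case 1: leaves dominate
    if k == c:
        return f"Theta(n^{k} log n)"      # Case 2 with k' = 0
    return f"Theta(n^{k})"                # Case 3: root dominates

# master(2, 2, 1) -> merge sort, Theta(n^1 log n)
# master(3, 2, 1) -> Karatsuba, Theta(n^1.585)
# master(7, 2, 2) -> Strassen, Theta(n^2.807)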

3.3.3 n-digit Long Integer Multiplication


Consider the n digit long integer multiplication Problem 2.3, defined on page 45. For convenience, let both arguments, the multiplicand and the multiplier, be n digits long; if one is shorter, leading zeros can be added. Consider the toy example in Figure 3.25 to come up with a divide and conquer algorithm.
With A = 41245329 and B = 89325743 (n = 8), the halves are A_h = 4124, A_l = 5329, B_h = 8932, and B_l = 5743. The four partial products are A_l × B_l = 30604447, A_h × B_l = 23684132, A_l × B_h = 47598628, and A_h × B_h = 36835568, so A × B = 10^8 · 36835568 + 10^4 · (23684132 + 47598628) + 30604447 = 3684269658204447.

Figure 3.25: The divide and conquer Algorithm 3.16 illustration for multiplication: (a) merging sub-solutions, (b) naïve divide & conquer tree for multiplication.

A rough divide recurrence relation for n-digit long integer multiplication, assuming n is even, is as follows:

Algorithm 3.16. n-digit long integer multiplication

times(A_{1∼n}, B_{1∼n}) =
  { A_1 × B_1                                                                                      if n = 1
  { 10^n times(A_h, B_h) + 10^{n/2} times(A_h, B_l) + 10^{n/2} times(A_l, B_h) + times(A_l, B_l)   if n > 1

If n is odd, adding a leading zero makes the digit length even.


Theorem 3.10. Algorithm 3.16 correctly produces times(A_{1∼n}, B_{1∼n}) = A × B.

Proof. Since A = 10^{n/2} A_h + A_l and B = 10^{n/2} B_h + B_l,

A × B = (10^{n/2} A_h + A_l) × (10^{n/2} B_h + B_l)
      = 10^n A_h B_h + 10^{n/2} A_h B_l + 10^{n/2} A_l B_h + A_l B_l □

Algorithm 3.16 calls four half-sized sub-problems, and three linear time integer additions are necessary to merge the four sub-solutions. Hence, T(n) = 4T(n/2) + Θ(n). According to Master Theorem 3.9, T(n) = Θ(n^2). Clearly, this simple divide and conquer algorithm fails to provide a faster algorithm than Algorithm 2.5.
Is it possible to devise an o(n^2) algorithm to multiply two n-digit long integers? In 1956, Andrey Kolmogorov conjectured that it is not possible. In 1960, however, his student, Anatolii Karatsuba, disproved this conjecture [51, p 85] and gave a brilliant divide and conquer algorithm in [93]. Karatsuba's algorithm has become a highly revered algorithm in computer science. Its central algorithmic concept is calling only three half digit length multiplication sub-problems, rather than the four in Algorithm 3.16.

With the same toy input, A_h + A_l = 9453 and B_h + B_l = 14675, and the three recursive products are A_h × B_h = 36835568, (A_h + A_l) × (B_h + B_l) = 138722775, and A_l × B_l = 30604447, which again combine to 3684269658204447.

Figure 3.26: Karatsuba's divide and conquer Algorithm 3.17 illustration.



Algorithm 3.17. Karatsuba Algorithm

A × B = { A × B                                                                                 if n = 1
        { 10^n A_h B_h + 10^{n/2}((A_h + A_l)(B_h + B_l) − A_h B_h − A_l B_l) + A_l B_l         if n > 1

This fast divide and conquer algorithm for the integer multiplication problem is illus-
trated in Figure 3.26.

Theorem 3.11. Karatsuba Algorithm 3.17 correctly produces times(A_{1∼n}, B_{1∼n}) = A × B.

Proof. Since (A_h + A_l)(B_h + B_l) − A_h B_h − A_l B_l = A_h B_l + A_l B_h,

10^n A_h B_h + 10^{n/2}((A_h + A_l)(B_h + B_l) − A_h B_h − A_l B_l) + A_l B_l
  = 10^n A_h B_h + 10^{n/2} A_h B_l + 10^{n/2} A_l B_h + A_l B_l
  = (10^{n/2} A_h + A_l) × (10^{n/2} B_h + B_l) = A × B □

Theorem 3.12. Karatsuba Algorithm 3.17 takes o(n^2) time.

Proof. Since there are only three unique half digit long sub-problems, i.e., (A_h + A_l)(B_h + B_l), A_h B_h, and A_l B_l, and only six linear time additions and subtractions are made, T(n) = 3T(n/2) + Θ(n). According to Master Theorem 3.9, T(n) = Θ(n^{log 3}) ≈ Θ(n^{1.585}) = o(n^2). □
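On Python's arbitrary-precision integers, Karatsuba's recursion can be sketched directly; divmod by a power of ten performs the digit split.

def karatsuba(x, y):
    # Multiply non-negative integers with three recursive sub-products.
    if x < 10 or y < 10:
        return x * y
    h = max(len(str(x)), len(str(y))) // 2
    p = 10 ** h
    xh, xl = divmod(x, p)                 # x = xh * 10^h + xl
    yh, yl = divmod(y, p)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll   # = xh*yl + xl*yh
    return hh * p * p + mid * p + ll

# Example: karatsuba(41245329, 89325743) == 3684269658204447 (Figure 3.26).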

3.3.4 Matrix Multiplication


Consider the matrix multiplication Problem 2.5, defined on page 47. An n × n square matrix can be divided into four n/2 × n/2 square matrices. As depicted in Figure 3.27, let A_UL, A_UR, A_LL, and A_LR represent the upper left, upper right, lower left, and lower right sub-matrices. The resulting n × n matrix, C = A × B, can be computed by eight half-sized matrix multiplication problems, as given in eqn (3.19): C = a_{1,1} × b_{1,1} if n = 1, and if n > 1,

C_UL = A_UL × B_UL + A_UR × B_LL
C_UR = A_UL × B_UR + A_UR × B_LR     (3.19)
C_LL = A_LL × B_UL + A_LR × B_LL
C_LR = A_LL × B_UR + A_LR × B_LR
If n is even, the half-sized sub-matrices are also square matrices. If n is odd, one more row and column of all zeros can be appended so that the sub-matrices are square. Although no explicit pseudo code is given here, eqn (3.19) provides a sufficient idea of the divide and conquer algorithm. It calls eight half-sized sub-problems, and the combining steps are simply matrix additions, which take Θ(n^2). The computational time complexity has the divide recurrence relation T(n) = 8T(n/2) + Θ(n^2). According to Master Theorem 3.9, it is Θ(n^3). Clearly, this is no better than the grade school matrix multiplication Algorithm 2.7, given on page 48.

Anatolii Alexeevich Karatsuba (1937-2008) was a Russian mathematician working in the fields of analytic number theory, p-adic numbers, and Dirichlet series. Among computer scientists, he is best known for the Karatsuba algorithm.
© Photo Credit: Riemann, licensed under CC BY 3.0.

In [168], however, Strassen showed an o(n^3) algorithm. Instead of eight sub-problems, Strassen reduced the number of sub-problems to seven. This results in T(n) = 7T(n/2) + Θ(n^2) and thus Θ(n^{log 7}) ≈ Θ(n^{2.807}), according to Master Theorem 3.9. Strassen's algorithm first prepares seven sub-problems in Θ(n^2) time, as listed in eqns (3.20 ∼ 3.26).

M1 = (AUL + ALR )(BUL + BLR ) (3.20)


M2 = (ALL + ALR )BUL (3.21)
M3 = AUL (BUR − BLR ) (3.22)
M4 = ALR (BLL − BUL ) (3.23)
M5 = (AUL + AUR )BLR (3.24)
M6 = (ALL − AUL )(BUL + BUR ) (3.25)
M7 = (AUR − ALR )(BLL + BLR ) (3.26)

Next, it combines sub-solutions in Θ(n2 ) as in eqn (3.27).

A × B = C where, if n > 1,

    CUL = M1 + M4 − M5 + M7
    CUR = M3 + M5                                                  (3.27)
    CLL = M2 + M4
    CLR = M1 − M2 + M3 + M6
 

The correctness of Strassen’s Algorithm can be shown algebraically.

Theorem 3.13. Strassen’s Algorithm in eqn (3.27) correctly produces the output.

Proof. Only the first two cases are shown below and the remaining ones are left for exercises.

CUL = M1 + M4 − M5 + M7
= (AUL + ALR )(BUL + BLR ) + ALR (BLL − BUL )
− (AUL + AUR )BLR + (AUR − ALR )(BLL + BLR )
= AUL BUL + AUL BLR + ALR BUL + ALR BLR + ALR BLL − ALR BUL
− AUL BLR − AUR BLR + AUR BLL + AUR BLR − ALR BLL − ALR BLR
= AUL × BUL + AUR × BLL
CUR = M3 + M5
= AUL (BUR − BLR ) + (AUL + AUR )BLR
= AUL BUR − AUL BLR + AUL BLR + AUR BLR
= AUL × BUR + AUR × BLR 
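For illustration, the seven products of eqns (3.20 ∼ 3.26) and the combination of eqn (3.27) can be rendered as a short Python sketch (an illustration rather than the book's code; it assumes NumPy arrays whose dimension n is an exact power of 2).

    import numpy as np

    def strassen(A, B):
        # Strassen's seven-multiplication scheme for n x n matrices,
        # n an exact power of 2 (eqns (3.20)-(3.27)).
        n = A.shape[0]
        if n == 1:
            return A * B
        h = n // 2
        AUL, AUR, ALL, ALR = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        BUL, BUR, BLL, BLR = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(AUL + ALR, BUL + BLR)
        M2 = strassen(ALL + ALR, BUL)
        M3 = strassen(AUL, BUR - BLR)
        M4 = strassen(ALR, BLL - BUL)
        M5 = strassen(AUL + AUR, BLR)
        M6 = strassen(ALL - AUL, BUL + BUR)
        M7 = strassen(AUR - ALR, BLL + BLR)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7        # CUL
        C[:h, h:] = M3 + M5                  # CUR
        C[h:, :h] = M2 + M4                  # CLL
        C[h:, h:] = M1 - M2 + M3 + M6        # CLR
        return C

    A = np.arange(16).reshape(4, 4)
    B = np.arange(16, 32).reshape(4, 4)
    assert (strassen(A, B) == A @ B).all()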

Volker Strassen (1936–) is a German mathematician and statistician. His major contributions include Strassen's algorithm for matrix multiplication, the Schönhage–Strassen algorithm for fast integer multiplication, and randomized algorithms such as the Solovay–Strassen primality test.
© Photo Credit: David Eppstein, licensed under CC BY 3.0, crop change was made.

(a) 8 half-sized sub-problems in the naïve divide and conquer algorithm in eqn (3.19)

(b) 7 half-sized sub-problems in Strassen's Algorithm in eqn (3.27)

Figure 3.27: Divide and conquer algorithm illustration for square matrix multiplication.

3.4 Beyond the Master Theorem


In this section, a common mistake using Master Theorem 3.9 is considered. In order
to utilize Master Theorem 3.9, the computational time complexity of the basis case in the
divide recurrence relations must be constant. Otherwise, Master Theorem 3.9 may not
provide a correct analysis. Also, the divide and conquer algorithms whose combining step
is O(f (n)) rather than Θ(f (n)) require careful attention when analyzing the computational
time complexities.

3.4.1 Checking Greater Between Elements Sequence


Recall the Greater between elements sequence validation problem, considered as an ex-
ercise in Q 2.29 on page 88. Elements in a string S1∼2n are drawn from a multiset,
{1, 1, 2, 2, · · · , n, n}. Each k ∈ {1 ∼ n} appears exactly twice in S1∼2n. If all the numbers appearing between the two occurrences of each k in S1∼2n are greater than k, then S1∼2n is a GBW sequence. For example, ⟨2, 2, 1, 1, 3, 3⟩ and ⟨1, 2, 3, 3, 2, 1⟩ are valid GBW sequences, but ⟨1, 3, 2, 3, 2, 1⟩ and ⟨3, 2, 1, 1, 2, 3⟩ are not, because the element 2 occurs between the 3's in both invalid sequences. This problem is formally defined as follows:

Problem 3.6. Is greater between elements sequence? (GBW)

Input: A sequence S1∼2n where each k ∈ {1 ∼ n} appears exactly twice.
Output: isGBW(S1∼2n) = T if ∀k ∈ {1 ∼ n} ∀l ∈ {f(k) + 1 ∼ s(k) − 1}, k < s_l; F otherwise,
where f(k) and s(k) are the indices of the first and second occurrences of k in S1∼2n.

(a) S = ⟨1, 2, 7, 8, 8, 5, 7, 5, 6, 6, 3, 3, 2, 4, 4, 1⟩   (b) S = ⟨1, 2, 5, 6, 6, 5, 3, 4, 4, 3, 2, 1⟩

Figure 3.28: Divide and conquer trees for checking a GBW sequence.

Instead of dividing the sequence S1∼2n, a possible divide and conquer strategy is to divide the n unique quantifiable elements, as illustrated in Figure 3.28. If all of the elements in the left sub-problem and the right sub-problem satisfy the greater between condition, the original problem is true. However, if one of the sub-problems is false, the original problem is also false. Consider the following divide and conquer algorithm to check whether a sequence is a GBW sequence based on this simple strategy:

Algorithm 3.18. Divide and conquer isGBW

Call isGBW(1, n) initially and let S1∼2n be global.


isGBW(b, e)
  if b = e
    return check base(S1∼2n, b)
  else
    return isGBW(b, ⌊(b + e)/2⌋) ∧ isGBW(⌊(b + e)/2⌋ + 1, e)

check base(S1∼2n, k)
  i = 1
  while s_i ≠ k, i = i + 1
  i = i + 1
  while s_i > k, i = i + 1
  if s_i ≠ k, return F
  else return T
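A minimal Python sketch of Algorithm 3.18 follows (an illustration, not the book's code; it assumes S is a 0-indexed list containing each value 1 ∼ n exactly twice).

    def is_gbw(S):
        # Algorithm 3.18: divide over the element values 1..n and check
        # the greater-between condition for each value separately.
        def check_base(k):
            i = S.index(k)                   # first occurrence of k
            j = i + 1
            while j < len(S) and S[j] > k:   # skip elements greater than k
                j += 1
            return j < len(S) and S[j] == k  # must stop at the second k

        def rec(b, e):
            if b == e:
                return check_base(b)
            m = (b + e) // 2
            return rec(b, m) and rec(m + 1, e)

        return rec(1, len(S) // 2)

    assert is_gbw([2, 2, 1, 1, 3, 3])
    assert not is_gbw([1, 3, 2, 3, 2, 1])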

The computational time complexity, T(n), of Algorithm 3.18 depends on two half-sized sub-problems, and combining the sub-solutions takes constant time. The computational time complexity looks linear, since T(n) = 2T(n/2) + O(1), but it is actually O(n²). Since the basis case does not take constant time, applying the Master Theorem is a mistake. The sub-procedure, check base(S1∼2n, k), checks whether all elements between the two occurrences of a particular k are greater than k. This clearly takes linear time, O(n). While the divide and merge parts take linear time in total, there are n basis cases and each takes linear time.
Yet, Master Theorem 3.9 is extremely useful in analyzing the computational time complexity of divide and conquer algorithms. It should be duly noted that the basis case must take constant time. If a divide and conquer algorithm's basis case takes ω(1), the divide and merge parts can still be analyzed by the Master Theorem, but the computational time complexity of the basis case multiplied by the number of basis cases invoked must be added to the total computational time complexity. Hence, the computational time complexity of Algorithm 3.18 is Θ(n) + O(n²) = O(n²).

3.4.2 n-digit Long Integer Addition


Consider the following divide and conquer algorithm for the elementary school Problem 2.1 of adding two n-digit-long positive integers:

Algorithm 3.19. Divide & conquer n digit long integer addition

Let A1∼n, B1∼n, C1∼n and O1∼n+1 be global,
call add(1, n) initially, and set on+1 = cn at the end.
add(s, e)
  if s = e, ........................................ 1
    cs = ⌊(as + bs)/10⌋ ............................ 2
    os = (as + bs) % 10 ............................ 3
  else ............................................. 4
    m = ⌊(s + e)/2⌋ ................................ 5
    add(s, m) ...................................... 6
    add(m + 1, e) .................................. 7
    while m < e ∧ cm = 1 ∧ om+1 = 9 ................ 8
      om+1 = 0 ..................................... 9
      cm+1 = 1 .................................... 10
      m = m + 1 ................................... 11
    if m < e ∧ cm = 1, om+1 = om+1 + 1 ............ 12
  return .......................................... 13
Lines 1 ∼ 3 are for the basis case, which takes constant time. Lines 6 and 7 call the half-sized problem twice. Lines 8 ∼ 12 are for the combining step, which takes O(n); line 12 handles a final non-nine carry at the merge boundary. In the worst case, the combining step takes Θ(n) due to the carry. Hence, the computational time complexity of Algorithm 3.19 seems to be O(n log n) because T(n) = 2T(n/2) + O(n). However, a tighter analysis is possible.

Theorem 3.14. The computational time complexity of Algorithm 3.19 is Θ(n).

Proof. The computational time complexity without considering the carry is T(n) = 2T(n/2) + O(1), which is Θ(n). Throughout the merge tree, the carry at any digit position can occur at most once, as illustrated in Figure 3.29. Suppose the carry at a certain position occurred twice. The first carry occurs because the sum of two single digit numbers is greater than or equal to 10. The sum of two single digit numbers cannot be greater than 18. In order for a second carry to occur due to the carry from the lower significant digit, the current value would have to be 19, which is impossible. Hence, Θ(n) + O(n) = Θ(n). □

A = 45458939, B = 54543995

Figure 3.29: Divide and conquer addition Algorithm 3.19 illustration.

3.4.3 n-digit Long Integer by a Single Digit Multiplication


Consider the following divide and conquer algorithm for another elementary school Problem 2.2 of multiplying an n-digit-long positive integer by a single digit x:

Algorithm 3.20. Divide & conquer n × 1 digit long multiplication: O1∼n+1 = A1∼n × x

Let A1∼n, C1∼n, O1∼n+1, and x be global,
call times1(1, n) initially, and set on+1 = cn at the end.
times1(s, e)
  if s = e, ........................................ 1
    cs = ⌊(as × x)/10⌋ ............................. 2
    os = (as × x) % 10 ............................. 3
  else ............................................. 4
    m = ⌊(s + e)/2⌋ ................................ 5
    times1(s, m) ................................... 6
    times1(m + 1, e) ............................... 7
    t = 10 ......................................... 8
    while m < e ∧ t ≥ 10 ........................... 9
      t = om+1 + cm ............................... 10
      om+1 = t % 10 ............................... 11
      if t ≥ 10, cm+1 = cm+1 + 1 .................. 12
      m = m + 1 ................................... 13
  return .......................................... 14

A1∼n = 28573948, x = 7

(a) The merge tree of Algorithm 3.20 on A × 7

A1∼n      2 8 5 7 3 9 4 8
× x                   × 7
C1∼n+1  1 5 3 4 2 6 2 5 0
+ B1∼n    4 6 5 9 1 3 8 6
O1∼n+1  2 0 0 0 1 7 6 3 6

(b) The reduction to addition in eqn (3.28)

Figure 3.30: Divide and conquer n × 1 digit multiplication Algorithm 3.20 illustration.

Lines 1 ∼ 3 are for the basis case, which takes constant time. Lines 6 and 7 call the half-sized problem twice. Lines 8 ∼ 13 are for the combining step, which takes O(n). In the worst case, the combining step takes Θ(n) due to the carry. Hence, the computational time complexity of Algorithm 3.20 seems to be O(n log n) because T(n) = 2T(n/2) + O(n). However, it is Θ(n). The proof of the computational time complexity of Algorithm 3.20 can be realized as a reduction to the n-digit long addition Problem 2.1:

times1(A1∼n, x) = add(C1∼n+1, B1∼n)

where ci = { 0                       if i = 1
           { ⌊(ai−1 × x)/10⌋        if 1 < i ≤ n + 1     and bi = (ai × x) % 10     (3.28)

The reduction relation in eqn (3.28) is illustrated in Figure 3.30 (b).
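The reduction can be exercised directly in Python. The sketch below is an illustration under stated assumptions (digit lists written most significant digit first, as the figure displays them): it builds B and the left-shifted carry array C of eqn (3.28) and finishes with a single grade-school addition.

    def times1(A, x):
        # Eqn (3.28): multiply the digit list A by a single digit x via
        # one addition, O = C + B, where C holds the shifted carries.
        n = len(A)
        B = [(a * x) % 10 for a in A]            # b_i = (a_i * x) % 10
        C = [(a * x) // 10 for a in A] + [0]     # carries, one place to the left
        O, carry = [0] * (n + 1), 0
        for i in range(n, -1, -1):               # right-to-left addition
            s = C[i] + (B[i - 1] if i >= 1 else 0) + carry
            O[i], carry = s % 10, s // 10
        return O

    assert times1([2, 8, 5, 7, 3, 9, 4, 8], 7) == [2, 0, 0, 0, 1, 7, 6, 3, 6]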

3.5 Iterative Divide and Conquer


Thus far, most divide and conquer algorithms have been based on backward thinking and the recursive solving method. Some problems can be tackled by solving iteratively from the bottom up instead of solving recursively. This argument is very similar to the one among recursive programming, inductive programming, and tail recursion, as depicted in Figure 2.26 on page 76 in Chapter 2. In this section, iterative divide and conquer methods, such as tail recursion D&C and bottom up D&C, are presented. Although the recursive divide and conquer method is applicable to the majority of problems, the iterative divide and conquer method can be applied to some of them.

3.5.1 Logarithm Base b

(a) Recursive D.& C.   (b) Tail recursion D.& C.   (c) Bottom up D.& C.

Figure 3.31: Divide and conquer algorithms to compute flogb(22, 3) = ⌊log₃ 22⌋ = 2.

Recall the floor of logarithm base b, ⌊log_b n⌋, Problem 3.5, defined on page 113. Although the divide recurrence relation in eqn (3.17) was illustrated as a b-ary divide tree in Figure 3.23, it is indeed a divide linear recurrence relation, as illustrated in Figure 3.31 (a). The floor of logarithm base b can be thought of as the number of times that one can keep dividing the number by b. Hence, instead of solving recursively as in eqn (3.17), it can be solved iteratively using the tail recursion technique, as illustrated in Figure 3.31 (b). Conversely, ⌊log_b n⌋ can be thought of as the number of times that one can keep multiplying by b before the product exceeds n. This technique can be categorized as a bottom up divide and conquer algorithm, as illustrated in Figure 3.31 (c). The iterative tail recursion and bottom-up divide and conquer algorithms are stated in Algorithms 3.21 and 3.22, respectively.

Algorithm 3.21. Tail recursion D&C

flogb(n, b)
  o = 0
  while n ≥ b
    o = o + 1
    n = ⌊n/b⌋
  return o

Algorithm 3.22. Bottom up D&C

flogb(n, b)
  o = −1, t = 1
  while t ≤ n
    o = o + 1
    t = t × b
  return o

The correctness and time complexities of Algorithms 3.21 and 3.22 are the same as those of the recursive divide and conquer algorithm in eqn (3.17). However, neither iterative algorithm requires the Θ(log_b n) space that the recursive programming algorithm in eqn (3.17) does.
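Both iterative algorithms translate almost verbatim into Python; the following sketch mirrors Algorithms 3.21 and 3.22.

    def flogb_tail(n, b):
        # Algorithm 3.21: count how many times n can be divided by b.
        o = 0
        while n >= b:
            o += 1
            n //= b
        return o

    def flogb_bottom_up(n, b):
        # Algorithm 3.22: count how many times b can be multiplied
        # before the product exceeds n.
        o, t = -1, 1
        while t <= n:
            o += 1
            t *= b
        return o

    assert flogb_tail(22, 3) == flogb_bottom_up(22, 3) == 2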

3.5.2 Radix r Number System

(a) Decimal number system: 1, 1.4, 1.41, 1.414, . . . , 1.414213562373 · · ·

(b) Binary number system: 1, 1.0, 1.01, 1.011, . . . , 1.01101010000 · · ·

Figure 3.32: Radix r number system.

The radix r number representation itself can be viewed as a divide and conquer paradigm. The most common way to represent numbers is the decimal number system, also known as the Hindu-Arabic numeral system. To represent a real number, it uses the divide by 10 and represent paradigm. Suppose we would like to represent √2, which is the length of the hypotenuse of a right triangle with legs of length one, as depicted in Figure 3.32 (a). This follows from the Pythagorean theorem. First, one can try to fit as many sticks of the unit length as possible. Only one can fit in this case. Then one can divide the stick into ten equal sized pieces so that the unit length of the smaller stick becomes 0.1. This process of fitting into the remaining portion and dividing the piece into ten equal sized pieces is recursively applied to represent √2. The number is represented in the tail recursive manner, as depicted in Figure 3.32.
If the unit length of a stick is instead broken into two equal sized smaller sticks, the binary number representation of √2 is derived, as shown in Figure 3.32 (b). If the half-sized stick does not fit in the remaining portion, 0 is placed. If it does, 1 is placed. This process is indeed identical to the bisection method in Algorithm 3.11, described on page 107. Note that the integer version of the bisection method in Algorithm 3.11 can be modified into the numerical analysis version of the bisection method, as described in [27, p 48], to find the root of the quadratic equation x² − 2 = 0, which is x = √2.

3.5.3 Merge Sort II

(a) Iterative merge sort Algorithm 3.23   (b) Recursive merge sort Algorithm 3.5

Figure 3.33: Merge sort algorithm illustration.

Consider the merge sort Algorithm 3.5 for the sorting Problem 2.16. It utilizes the divide recurrence relation in Lemma 3.2 and solves the problem recursively. Instead of recursion, an iterative method can be utilized. In the ith iteration, the elements are grouped into runs of 2^i elements. Next, each pair of adjacent groups from the left is merged, as illustrated in Figure 3.33 (a). It behaves slightly differently from the normal dichotomic divide and conquer version, the merge sort Algorithm 3.5, as illustrated in Figure 3.33 (b).
Iterative, or bottom up, merge sort pseudo code is given as follows:
Algorithm 3.23. Iterative Merge Sort

mergesort(A1∼n)
  s = 1; e = 1
  while s ≤ ⌈n/2⌉
    i = s
    while i < n
      if e = 1,
        Bi−s+1∼min(i+s,n) = merge(Ai−s+1∼i, Ai+1∼min(i+s,n))
      else, (e = 0)
        Ai−s+1∼min(i+s,n) = merge(Bi−s+1∼i, Bi+1∼min(i+s,n))
      i = i + 2 × s
    for i = ⌊n/s⌋ × s + 1 ∼ n
      if e = 1, bi = ai
      else, ai = bi
    s = 2 × s
    e = |e − 1| (flips 0 and 1)
  return A1∼n

The computational time complexity of Algorithm 3.23 is Θ(n log n), the same as Algo-
rithm 3.5.
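For illustration, here is a compact Python sketch of the bottom-up strategy. It allocates a fresh list per merge instead of ping-ponging between the two global buffers A and B of Algorithm 3.23, which shortens the code at the cost of extra copying; the Θ(n log n) behavior is unchanged.

    def merge(left, right):
        # standard linear-time merge of two sorted lists
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def mergesort_iterative(A):
        # bottom-up merge sort: merge adjacent runs of width 1, 2, 4, ...
        A = list(A)
        width = 1
        while width < len(A):
            for lo in range(0, len(A), 2 * width):
                A[lo:lo + 2 * width] = merge(A[lo:lo + width],
                                             A[lo + width:lo + 2 * width])
            width *= 2
        return A

    data = [9, 5, 2, 4, 0, 8, 7, 1, 6, 3]
    assert mergesort_iterative(data) == sorted(data)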

3.6 Partition and Conquer


Thus far, problems have been divided into roughly equal sized sub-problems. Here,
partitioning a large problem into unevenly sized sub-problems is considered. Most partition
based divide and conquer algorithms have unevenly sized sub-problems. They split the input
into two partitions, either randomly or by their element representation (radix). Randomized
partition algorithms are covered in Chapter 12 and radix based partition algorithms are
presented in this section.
The computational time complexity of a partition based divide and conquer algorithm
consists of the time costs for partitioning a large problem, solving sub-problems, and combin-
ing sub-solutions. While standard divide and conquer algorithms’ divide step takes constant
time, splitting a large problem of size n into partitions usually takes linear time. Another
salient difference between ordinary divide and conquer and partition based divide and con-
quer is that in partition based divide and conquer, elements in each partition share a certain
common property. This common property often assists greatly in designing algorithms.

3.6.1 Bit Partitioning


Suppose that the input elements are represented in a binary number system; each element is represented with a uniform number of bits, d. As a matter of fact, most computers internally store and use a binary (radix = 2) representation of numbers. The input, A1∼n, can be divided into two partitions where the first and second partitions contain all elements whose kth bit is 0 and 1, respectively. Let ai,k be the kth least significant bit of the ith element, where ai,k = 0 or 1. The least significant digit is the right-most digit, 1, and the most significant digit is the left-most digit, d. For example, in a binary number a2 = 1011₂, a2,4 = 1, a2,3 = 0, a2,2 = 1, and a2,1 = 1. Using these notations, the bit partitioning problem is defined formally as follows:
Problem 3.7. Bit partition

Input: A sequence A1∼n whose elements are represented in a d-digit binary system,
       and a digit k where 1 ≤ k ≤ d
Output: A′, a permutation of A such that
       ∀i, j ∈ {1, · · · , n} (if a′i,k = 0 and a′j,k = 1, then i < j)

The output can be a position p such that the kth bit of all elements in A1∼p−1 is 0 and the kth bit of all elements in Ap∼n is 1; if i ∈ {1, · · · , p − 1}, a′i,k = 0 and if i ∈ {p, · · · , n}, a′i,k = 1.

There are a couple of linear time algorithms for the bit partitioning Problem 3.7. They
are outside-in and progressive partitioning algorithms.

Input (k = d = 4): 0010, 1011, 1100, 0100, 0101, 1101, 0001, 0111, 0011

Figure 3.34: The outside-in bitwise partition Algorithm 3.24 illustration (k = d = 4).

The first, outside-in algorithm is illustrated in Figure 3.34, where k = d, the most significant digit. Two indices, i and j, are initially placed at the starting and ending positions of the input. i is incremented until it finds an element whose kth bit is 1, and j is decremented until it finds an element whose kth bit is 0. Then it swaps those elements and repeats the process until i and j meet. A pseudo code is stated as follows:

Algorithm 3.24. Outside-in bitwise partition

A1∼n is declared globally.

b partition oi(b, e, k)
  i = b and j = e ................................... 1
  while i < j ....................................... 2
    while ai,k = 0 and i < j, i = i + 1 ............. 3
    while aj,k = 1 and i < j, j = j − 1 ............. 4
    if i < j, swap(ai, aj) .......................... 5
  if i = e and ai,k = 0, return e + 1 ............... 6
  else return j ..................................... 7

Note that line 6 is necessary for a special case when all elements in the input list have their kth bit equal to 0. Returning p = e + 1 means that the second partition is empty. Similarly, if all elements have their kth bit equal to 1, i = j = b, and then the zero partition is empty. No extra line in the pseudo code is necessary for this second special case, as p = j = b. As each element in the problem is scanned exactly once by either i or j and the swap operation takes constant time, the computational time complexity of Algorithm 3.24 is Θ(n).
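A Python sketch of Algorithm 3.24 follows (0-indexed and operating in place; an illustration rather than the book's code).

    def bit_partition_oi(A, k):
        # Algorithm 3.24: move elements whose kth least significant bit
        # is 0 before those whose kth bit is 1; return the start index
        # of the one-partition.
        bit = lambda x: (x >> (k - 1)) & 1
        i, j = 0, len(A) - 1
        while i < j:
            while i < j and bit(A[i]) == 0:
                i += 1
            while i < j and bit(A[j]) == 1:
                j -= 1
            if i < j:
                A[i], A[j] = A[j], A[i]
        if bit(A[j]) == 0:        # special case: no kth bit is 1
            return len(A)
        return j

    A = [0b0010, 0b1011, 0b1100, 0b0100, 0b0101, 0b1101, 0b0001, 0b0111, 0b0011]
    p = bit_partition_oi(A, 4)
    assert all(x < 8 for x in A[:p]) and all(x >= 8 for x in A[p:])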
The second, progressive bitwise partitioning algorithm is illustrated in Figure 3.35, where k = 1, the least significant digit. Both indices, i and j, are initially placed at the starting position of the input. i is incremented until it finds an element whose kth bit is 1, and j is incremented until it finds an element whose kth bit is 0. Then it swaps those elements and repeats the process until either i or j goes beyond the ending position. A pseudo code is stated as follows:

Input (k = 1): 0010, 1011, 1100, 0100, 0101, 1101, 0001, 0111, 0011

Figure 3.35: The progressive bitwise partition Algorithm 3.25 illustration (k = 1).

Algorithm 3.25. Progressive bitwise partition

A1∼n is declared globally.

b partition pr(b, e, k)
  i = b and j = b
  while i ≤ e and j ≤ e
    while ai,k = 0 and i ≤ e, i = i + 1
    j = i + 1
    while aj,k = 1 and j ≤ e, j = j + 1
    if i ≤ e and j ≤ e,
      swap(ai, aj)
      i = j
  return i

The computational time complexity of Algorithm 3.25 is Θ(n).

3.6.2 Radix-select
Consider the kth order statistic Problem 2.15, defined on page 59. This problem can be efficiently solved on an array that uses a binary number representation. Suppose all of the numbers in the set A can be represented by d digits, e.g., d = 4 in Figure 3.36.
Starting from the most significant bit, one can partition the set into two groups, i.e., zero and
one groups. If the size of the zero group is larger than or equal to k, the kth smallest element
must be in the zero group. Conversely, if the size of the zero group is smaller than k, then
the kth smallest element must be in the one group. Ergo, only one of the groups is solved
recursively. This divide and conquer algorithm is called the radix-select Algorithm [124] and
is illustrated in Figure 3.36, where k = 5. At the end of the radix-select Algorithm, the
resulting array, A, is not sorted but all elements in A1∼k−1 are guaranteed to be less than
or equal to ak and all elements in Ak+1∼n are guaranteed to be greater than or equal to ak .
Clearly, the desired output, ak is found.
Each partition step may divide the big problem into two unevenly sized sub-problems.
Although there are many ways to partition elements according to zeros and ones, here is one
version that does not require any extra space. Unlike partition-based selection algorithms,
which shall be covered in Chapter 12, the radix-select Algorithm does not require an actual
pivot, but uses virtual pivots (e.g., 8, 4, (4+2) and (4 + 1)) for each step, as illustrated in

A in binary (k = 5): 0010, 1011, 1100, 0100, 0101, 1101, 0001, 0111, 0011;
virtual pivots per step: 1000, 0100, 0110, 0101

Figure 3.36: The binary radix select algorithm illustration.

Figure 3.36. A pseudo code for the radix-select Algorithm is stated as follows:

Algorithm 3.26. Radix-select (binary)

Let A1∼n be global and call radixselect(1, n, k, d) initially.

radixselect(b, e, k, s)
  if s = 0, return ak ..................................... 1
  else ..................................................... 2
    p = b partition oi(b, e, s) ............................ 3
    if k < p, return radixselect(b, p − 1, k, s − 1) ....... 4
    else, return radixselect(p, e, k, s − 1) ............... 5
Algorithm 3.26 invokes the bit-wise partitioning algorithm in line 3 to find the position of the beginning of the second partition. Either the outside-in or the progressive bitwise partitioning algorithm can be invoked. Then, only the respective sub-problem is solved recursively in lines 4 and 5. Line 1 is reached as the basis case when all digits have been examined.
The number of recursive calls made by the radix-select Algorithm 3.26 is exactly d. Partitioning the problem takes linear time. Hence, the computational time complexity of Algorithm 3.26 is O(dn). If the elements are all distinct, the minimum number of digits to represent all n elements is d = log n. It is safe to say that the computational time complexity of Algorithm 3.26 is O(n log n). If we assume that the number of digits is fixed and there are duplicates in the list, it can also be said to be a linear time algorithm.
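The following Python sketch illustrates Algorithm 3.26 on a 0-indexed list. As a simplifying assumption it partitions with two list comprehensions rather than the in-place bitwise partition, which keeps the control flow of lines 1 ∼ 5 easy to follow.

    def radix_select(A, k, d):
        # Algorithm 3.26: kth smallest (1-based) of non-negative integers
        # representable in d bits; recurse into one partition per bit.
        def rec(b, e, s):
            if s == 0:
                return A[k - 1]
            zeros = [x for x in A[b:e + 1] if not (x >> (s - 1)) & 1]
            ones = [x for x in A[b:e + 1] if (x >> (s - 1)) & 1]
            A[b:e + 1] = zeros + ones        # zero-partition first
            p = b + len(zeros)               # start of the one-partition
            if k - 1 < p:
                return rec(b, p - 1, s - 1)
            return rec(p, e, s - 1)
        return rec(0, len(A) - 1, d)

    A = [2, 11, 12, 4, 5, 13, 1, 7, 3]
    assert radix_select(A, 5, 4) == 5        # the data of Figure 3.36, k = 5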
Instead of a binary number system, other radix number systems can be utilized. If
the hexadecimal number system (base 16 number representation) is used to represent the
numbers, the problem can be partitioned into 16 sub-problems, and only the respective
partition is examined recursively, as illustrated in Figure 3.37.
In the general radix-select Algorithm, not all of the partitions need to be divided. Only the partition that the kth smallest element falls into needs to be separated from the part that the kth smallest element does not belong to. To do so, first construct a histogram based on the most significant place. This step takes linear time. Next, find the cumulative sum from the left until it exceeds k. The xth bin where the cumulative sum exceeds k is the partition of interest. Now the smaller sub-problem has an updated input: n = H[x] and k = k − Σ_{i=0}^{x−1} H[i]. Moving to the next most significant place and calling the smaller sub-problem, the kth smallest element can be found. A pseudo code is stated as follows, where r (≥ 2) is the radix in which the input elements are represented and ai,s denotes the sth digit of ai:
Algorithm 3.27. Radix select (general)

Let A1∼n be global and call radixselect(n, k, d) initially.

radixselect(n, k, s)
  if s = 0, return ak ....................................... 1
  else ....................................................... 2
    declare H0∼r−1 whose elements are 0 initially ............ 3
    for i = 1 ∼ n ............................................ 4
      H[ai,s] = H[ai,s] + 1 .................................. 5
    c = H[0] and i = 0 ....................................... 6
    while c < k .............................................. 7
      i = i + 1 .............................................. 8
      c = c + H[i] ........................................... 9
    p = i ................................................... 10
    j = 1 ................................................... 11
    for i = 1 ∼ H[p] ........................................ 12
      while aj,s ≠ p, j = j + 1 ............................. 13
      swap(ai, aj) and j = j + 1 ............................ 14
    return radixselect(H[p], k − (c − H[p]), s − 1) ......... 15

Lines 3 ∼ 5 build a histogram. Lines 6 ∼ 10 find the respective bin number, p, and its cumulative sum, c. Lines 11 ∼ 14 move all elements within the respective array portion whose sth digit in the radix system equals p to the beginning of the array. Finally, line 15 recursively calls a smaller sub-problem on the respective partition only.
Both O(dn) and O(n log n) are valid for the computational time complexity of Algorithm 3.27.

3.6.3 Radix-sort
If both partitions are solved recursively instead of only one, as in the radix-select Algorithm 3.26, the sorting Problem 2.16 is solved, as illustrated in Figure 3.38 (a). This algorithm is known as radix-sort, and a pseudo code is stated as follows:
Algorithm 3.28. Radix sort (Most significant digit first)

Let A1∼n be global and call radixsort(1, n, d) initially.
radixsort(b, e, s)
  if b < e, ....................................... 1
    p = b partition oi(b, e, s) .................... 2
    if p − 1 > b, radixsort(b, p − 1, s − 1) ....... 3
    if p < e, radixsort(p, e, s − 1) ............... 4
  return .......................................... 5

(a) Histograms throughout the algorithm: n1 = 10000, k1 = 1000 → n2 = 624, k2 = 335 → n3 = 44, k3 = 15 → n4 = 8, k4 = 8.

(b) Arrays throughout the algorithm.

Figure 3.37: The hexadecimal radix select algorithm illustration: 184F = 6223.

In line 2, either an outside-in or a progressive bitwise partitioning algorithm can be


invoked. The computational time complexity of Algorithm 3.28 is Θ(dn) or Θ(n log n). As
before, if the elements are all distinct, the minimum number of digits to represent all n
elements is d = log n.
The most-significant-digit-first radix-sort Algorithm 3.28 is also called a “top down” radix-sort. The sorting problem can also be solved by a “bottom up” or “least significant digit first” radix-sort. The entire list is partitioned by a respective digit, starting from the least significant digit, as illustrated in Figure 3.38 (b).

(a) Most significant digit first radix sort Algorithm 3.28 illustration.

(b) Least significant digit first radix sort Algorithm 3.29 illustration.

Figure 3.38: Radix sort algorithm illustration.

Algorithm 3.29. Radix sort (Least significant digit first)

radixsort bu(A1∼n)
  for s = 1 ∼ d, .......................... 1
    b partition sb(1, n, s) ............... 2
  return .................................. 3

The computational time complexity of Algorithm 3.29 is Θ(dn) or Θ(n log n).
Neither the outside-in bit partitioning algorithm nor the progressive bit partitioning algorithm will solve the problem, though, as they are not stable. Partitioning is said to be stable if two elements belonging to the same partition appear in the same order in the partitioned output list as they appear in the original input list. Let m(x) be the partition membership of an element x. Let idx(x, A) be the index of an element x in a list A.

Definition 3.1. A partitioning algorithm, A′ = partition(A), is stable if

∀ai, aj ∈ A, if m(ai) = m(aj) ∧ i < j, then idx(ai, A′) < idx(aj, A′).

A stable partitioning algorithm is necessary in line 2 of Algorithm 3.29. Binary representation is a special case of particular importance. A stable partitioning algorithm for inputs represented in binary is left for an exercise, and a general stable partitioning for any radix is given in the following sub-section. Figure 3.39 demonstrates the radix sort on decimal numbers. Each partition must be stable. After the first partition in Figure 3.39, two elements, 22340 and 70200, belong to the same partition, whose least significant digit equals zero. 22340 precedes 70200 in both the original input list and the partitioned list. If all pairs of elements satisfy this property, the partitioning algorithm is stable.

A       Partition  Partition  Partition  Partition  Partition
        (k = 1)    (k = 2)    (k = 3)    (k = 4)    (k = 5)
34521 22340 70200 32002 40012 10021
20478 70200 32002 40012 10021 12209
14542 34521 12209 10021 70200 14542
40012 10021 40012 70200 20478 19291
22340 19291 34521 12209 32002 20478
10021 14542 10021 19291 12209 22340
19291 40012 22340 22340 22340 32002
70200 32002 14542 20478 34521 34521
32002 20478 20478 34521 14542 40012
12209 12209 19291 14542 19291 70200

Figure 3.39: Radix sort Algorithm 3.29 illustration on decimal numbers.

3.6.4 Stable Counting Sort


Suppose there are four classes, {A, B, C, D}, where ‘A’ is the highest and ‘D’ is the lowest. If two people with different classes arrive in a line, the person with the higher class gets in first. However, for people in the same class, the first-come first-in principle must be applied. When a sorted output meets these criteria, a sorting algorithm is said to be stable.

Definition 3.2. A sorting algorithm, A′ = sort(A), is stable if

∀ai, aj ∈ A, if ai = aj ∧ i < j, then idx(ai, A′) < idx(aj, A′).

When r, the number of possible values for elements, is finite and very small, there is a
linear time stable sorting algorithm called stable counting sort, which was first introduced
in [152, p 25-28]. It can be implemented in two different ways. The first version is the
backward counting sort. First, it builds a histogram for each possible value. Indices of the
histogram are possible values in order. Next, convert the histogram into the prefix sum

array. A principal observation on this prefix sum array is that the prefix sum value contains
the last position of the respective value in the final sorted array. The backward counting
sort algorithm utilizes this prefix sum array, filling an empty array starting from the last
element in the original unsorted array one at a time. The prefix sum array provides the
location to put the element and then reduces the value by one, as illustrated in Figure 3.40
(a). A pseudo code is stated as follows:

A = ⟨C, A, B, D, A, D, C, A⟩,  H = (A: 3, B: 1, C: 2, D: 2)

(a) Backward counting sort: prefix sums C = (3, 4, 6, 8); A is scanned from i = 8 down to 1, each element being placed at slot C[ai] before the slot is decremented, yielding O = ⟨A, A, A, B, C, C, D, D⟩.

(b) Forward counting sort: shifted prefix sums C′ = (0, 3, 4, 6); A is scanned from i = 1 up to 8, each slot being incremented before the element is placed, yielding O = ⟨A, A, A, B, C, C, D, D⟩.

Figure 3.40: Counting sort algorithm illustration.

Algorithm 3.30. Backward counting sort

countsort bw(A1∼n)
  declare O1∼n ........................... 1
  declare Cv1∼vr initially 0's ........... 2
  for i = 1 ∼ n, ......................... 3
    C[ai] = C[ai] + 1 .................... 4
  for i = 2 ∼ r, ......................... 5
    C[vi] = C[vi] + C[vi−1] .............. 6
  for i = n down to 1, ................... 7
    O[C[ai]] = ai ........................ 8
    C[ai] = C[ai] − 1 .................... 9
  return O1∼n ........................... 10
3.7. GENERAL APPLICATIONS 137

Lines 2 ∼ 4 of Algorithm 3.30 build a histogram, which takes Θ(n). Lines 5 ∼ 6 compute the prefix sum, which takes Θ(r). Lines 7 ∼ 9 fill up the output array, scanning the original unsorted array backward, which clearly takes Θ(n). Hence, the computational time complexity of Algorithm 3.30 is Θ(n), assuming r ≪ n. A drawback of this approach is that it requires extra space to store the output array. The computational space complexity of Algorithm 3.30 is Θ(n).
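A Python sketch of Algorithm 3.30 follows (an illustration; the parameter values plays the role of the ordered possible values v1 ∼ vr).

    def counting_sort_backward(A, values):
        # Algorithm 3.30: stable counting sort, scanning A backward.
        C = {v: 0 for v in values}
        for a in A:                          # lines 3-4: histogram
            C[a] += 1
        total = 0
        for v in values:                     # lines 5-6: prefix sums give
            total += C[v]                    # the last slot of each value
            C[v] = total
        O = [None] * len(A)
        for a in reversed(A):                # lines 7-9: place and decrement
            O[C[a] - 1] = a
            C[a] -= 1
        return O

    assert counting_sort_backward(list("CABDADCA"), "ABCD") == list("AAABCCDD")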
Instead of scanning the original unsorted array backward, it can be sorted by scanning
forward. The prefix sum array needs a little modification, i.e., values need to be pushed to
the right and the first value is zero. A principal observation on this prefix sum array is that
the prefix sum value contains the position of the respective previous value in the final sorted
array. Hence, in order to put an element in the output array, the value of the prefix sum
array must be augmented by one and then the element is placed in the updated position,
as illustrated in Figure 3.40 (b). A pseudo code is stated as follows:
Algorithm 3.31. Forward counting sort

countsort fw(A1∼n)
  declare O1∼n ........................... 1
  declare Cv1∼vr initially 0's ........... 2
  for i = 1 ∼ n, ......................... 3
    C[ai] = C[ai] + 1 .................... 4
  for i = 2 ∼ r, ......................... 5
    C[vi] = C[vi] + C[vi−1] .............. 6
  for i = r down to 2, ................... 7
    C[vi] = C[vi−1] ...................... 8
  C[v1] = 0 .............................. 9
  for i = 1 ∼ n, ........................ 10
    C[ai] = C[ai] + 1 ................... 11
    O[C[ai]] = ai ....................... 12
  return O1∼n ........................... 13
Lines 7 ∼ 9 are added to modify the prefix sum array. Lines 10 ∼ 12 scan the original
array forward and the prefix sum array is updated before placing the element in the respec-
tive position. Both the computational time and space complexities of Algorithm 3.31 are
the same as those of Algorithm 3.30, i.e., Θ(n).
If the stable counting sort Algorithm 3.30 or 3.31 is applied from the least significant digit to the most significant digit, it becomes the radix-sort Algorithm 3.29. The example in Figure 3.39 demonstrates the radix-sort on decimal numbers.
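Putting the pieces together, the following Python sketch realizes the least-significant-digit-first radix-sort Algorithm 3.29 by running one stable backward counting pass per digit (an illustration, assuming non-negative integers with at most d digits in radix r).

    def radix_sort_lsd(A, d, r=10):
        # Algorithm 3.29 with the stable counting sort of Algorithm 3.30
        # applied once per digit, least significant digit first.
        for s in range(d):
            C = [0] * r
            for a in A:                      # histogram of digit s
                C[(a // r ** s) % r] += 1
            for v in range(1, r):            # prefix sums
                C[v] += C[v - 1]
            O = [None] * len(A)
            for a in reversed(A):            # stable backward placement
                digit = (a // r ** s) % r
                O[C[digit] - 1] = a
                C[digit] -= 1
            A = O
        return A

    data = [34521, 20478, 14542, 40012, 22340, 10021, 19291, 70200, 32002, 12209]
    assert radix_sort_lsd(data, 5) == sorted(data)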

3.7 General Applications


The idea of divide and conquer goes beyond algorithm design. Here, a couple of tree
drawing graphics problems and a couple of table construction applications are presented to
appreciate the divide and conquer paradigm.

3.7.1 Drawing a Perfect Binary Tree


Imagine that you are drawing the perfect binary tree, as shown in Figure 3.41 (a). In a perfect binary tree, every internal node has exactly two children nodes and all leaf nodes are located on the same level. It would take a long time if one drew 63 nodes and 62 edges, one at a time. Thinking backward, once one has drawn the left half sub-tree, one can copy the entire sub-tree, paste it to the right, and then combine the two with a root node to make the entire perfect binary tree. We can start with the basis case, which is simply a tree with one node, and then apply the copy, paste, and combine steps repeatedly, as depicted in Figure 3.41 (b). A big perfect binary tree can be drawn very quickly, i.e., in Θ(log n) time, since it takes T(n) = T(⌊n/2⌋) + O(1) where n is the number of nodes. The computational time complexity can also be stated in terms of the tree height: Θ(h), while the naïve algorithm, which draws nodes and edges one at a time, would take Θ(2^h).

(a) Output of a sample perfect binary tree

(b) Inductive process of perfect binary trees

Figure 3.41: Drawing perfect trees.
Recall the number of nodes in the perfect k-ary tree of height h Problem 1.10, Ptnk (h),
defined on page 12. When k = 2, the tree is a perfect binary tree. With the same line of
backward thinking, the following recurrence relation in eqn (3.29) can be derived.
Ptn2(h) = { 1                       if h = 1
          { 2Ptn2(h − 1) + 1        if h > 1          (3.29)

It can be further generalized to the perfect k-ary tree as given in eqn (3.30).
Ptnk(h) = { 1                       if h = 1
          { kPtnk(h − 1) + 1        if h > 1          (3.30)

The eqns (3.29) and (3.30) are first order linear recurrence relations in terms of the tree
height, h. However, the underlying thinking to derive the recurrence relation is the divide
and conquer paradigm. They can be stated as divide recurrence relations in terms of the
number of nodes, n.
A similar recurrence relation can be derived for the sum of depths in a perfect k-ary
tree Problem 1.11, defined on page 13. Figure 3.42 (a) provides an insight for deriving the recurrence relation of the sum of depths in a perfect binary tree in eqn (3.31).

(a) A perfect binary (k = 2) tree: Ptd2(3) = 2Ptd2(2) + 2Ptn2(2)

(b) A perfect ternary (k = 3) tree and generalization: Ptdk(h) = k · Ptdk(h − 1) + k · Ptnk(h − 1)

Figure 3.42: Deriving a recurrence for the sum of depths in a perfect k-ary tree.
Ptd2(h) = { 0                                    if h = 1
          { 2Ptd2(h − 1) + 2(2^(h−1) − 1)        if h > 1          (3.31)

By Theorem 1.7, stated on page 12, the following general recurrence relation in eqn (3.32) can be derived, as depicted in Figure 3.42 (b).

Ptdk(h) = { 0                                        if h = 1
          { kPtdk(h − 1) + (k^h − k)/(k − 1)         if h > 1          (3.32)

3.7.2 Drawing a Fibonacci Tree

(a) Fibonacci trees of height h = 0 ∼ 5

h       0  1  2  3  4   5   6   7   8   9    10   11   12   13   14
FTN(h)  1  2  4  7  12  20  33  54  88  143  232  376  609  986  1596

(b) Number of nodes in a Fibonacci tree of height h = 0 ∼ 14

Figure 3.43: Drawing Fibonacci trees.

Akin to drawing a perfect binary tree, the divide and conquer paradigm can be applied to drawing a Fibonacci tree, as depicted in Figure 3.43 (a). Instead of copying the entire tree and pasting it on the right, only the left sub-tree is copied and pasted on the right, and then both are combined under the new root node. A big Fibonacci tree can be drawn in Θ(log n), assuming copying and pasting a sub-tree takes constant time and drawing a single new root node connecting two sub-trees takes constant time. The number of nodes in the hth Fibonacci
tree, FTN(h), can be defined recursively as follows:

FTN(h) = { 0                                   if h < 0
         { 1                                   if h = 0
         { 2                                   if h = 1          (3.33)
         { FTN(h − 1) + FTN(h − 2) + 1         if h > 1

Fibonacci tree sizes for the height from zero to fourteen are given in Figure 3.43 (b).
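The recurrence in eqn (3.33) translates directly into a short Python sketch; the assertion reproduces the first six entries of the table above.

    def ftn(h):
        # Eqn (3.33): number of nodes in the Fibonacci tree of height h.
        if h < 0:
            return 0
        if h <= 1:
            return h + 1                     # FTN(0) = 1, FTN(1) = 2
        return ftn(h - 1) + ftn(h - 2) + 1

    assert [ftn(h) for h in range(6)] == [1, 2, 4, 7, 12, 20]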

3.7.3 Truth Table Construction


Moreover, the divide and conquer paradigm can even be applied to building a truth table frame of n variables, as shown in Figure 3.44. One may notice two exact copies of the sub-truth table with n − 1 variables on the top and bottom of the truth table with n variables. If a truth table with n − 1 variables is constructed, one can copy the entire table and then place the copy below it. To combine them, one more column is added on the left: ‘T’ fills the new column in the top half table and ‘F’ fills it in the bottom half table. This divide and conquer method is significantly faster than constructing the truth table row by row.

p1        p1 p2        p1 p2 p3
T    −→   T  T    −→   T  T  T
F         T  F         T  T  F
          F  T         T  F  T
          F  F         T  F  F
                       F  T  T
                       F  T  F
                       F  F  T
                       F  F  F

Figure 3.44: Building a truth table.
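The copy, paste, and combine construction can be sketched in a few lines of Python (an illustration; each row is a list of 'T'/'F' entries).

    def truth_table(n):
        # copy the (n-1)-variable table twice and prepend 'T' to the
        # top copy and 'F' to the bottom copy
        if n == 1:
            return [['T'], ['F']]
        half = truth_table(n - 1)
        return [['T'] + row for row in half] + [['F'] + row for row in half]

    assert truth_table(2) == [['T', 'T'], ['T', 'F'], ['F', 'T'], ['F', 'F']]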

3.7.4 Scheduling a Round Robin Tennis Tournament


Finally, consider the problem of scheduling a round robin tennis tournament for n = 2^k players, which appears in [2, p 310]. Each player must play all n − 1 other players, one match per day, for exactly n − 1 days. The output is a schedule table, T, that shows whom each player plays on each day. The top index row indicates the day and the left-most index column indicates the player's ID. The schedule table must have n rows corresponding to the players and n − 1 columns corresponding to the days. The cell Ti,j contains the ID of the other player that the ith player plays on day j. Several possible valid schedules for n = 4 players are given in Figure 3.45 (a). The problem of scheduling a round robin tennis tournament is formally defined as follows:

        day: 1 2 3      1 2 3      1 2 3
player 1:    2 3 4      3 2 4      2 3 4
player 2:    1 4 3      4 1 3      1 4 3
player 3:    4 2 1      1 4 2      4 1 2
player 4:    3 1 2      2 3 1      3 2 1
             T1         T2         T3

(a) Valid round robin tennis tournament schedules for n = 4

(b) Constructing a round robin tennis tournament schedule for n = 2, 4, and 8

Figure 3.45: Round-robin tournaments.

Problem 3.8. Scheduling a round robin tournament

Input: the number of players, n = 2^k where k ∈ Z+
Output: an n × (n − 1) schedule table, T, where Ti,j ∈ {1, · · · , n} such that

∀i ∈ {1, · · · , n} (Ti,1∼n−1 = {1, · · · , n} − {i})          (3.34)

∧ ∀j ∈ {1, · · · , n − 1} (T1∼n,j = {1, · · · , n})            (3.35)

The first constraint in eqn (3.34) is that the ith row player must play all other players exactly once, and i should not appear in the ith row. The second constraint in eqn (3.35) is that there are exactly n − 1 match days and each player plays exactly once on each jth column day.
A divide and conquer algorithm can be designed, as illustrated in Figure 3.45 (b). Note
that the following pseudo code assumes n is an exact power of 2.
Algorithm 3.32. Scheduling a round robin tournament

RRT(n)
  declare an n × (n − 1) table T ............................ 1
  if n = 2, ................................................. 2
    T[1][1] = 2 and T[2][1] = 1 ............................. 3
  else, i.e., (n > 2), ...................................... 4
    S(n/2)×(n/2−1) = RRT(n/2) ............................... 5
    for i = 1 ∼ n/2, ........................................ 6
      for j = 1 ∼ n/2 − 1, .................................. 7
        T[i][j] = S[i][j] ................................... 8
        T[i + n/2][j] = S[i][j] + n/2 ....................... 9
      for j = 1 ∼ n/2, ..................................... 10
        T[i + n/2][n/2 − 1 + j] = (i + j − 2) % (n/2) + 1 .. 11
        T[i][n/2 − 1 + j] = T[i + n/2][n/2 − 1 + j] + n/2 .. 12
  return T ................................................. 13

Lines 2 and 3 are the basis case: when there are only two players, they play each other on the first day. Line 8 fills the upper-left quadrant of the table, which is identical to the solution of the half-sized problem. Line 9 fills the lower-left quadrant of the table, whose cells are exactly n/2 more than the corresponding cells in the table of the half-sized problem. Line 11 fills the lower-right quadrant of the table. Players in the lower half of the table need to play against players 1 ∼ n/2, so this list can be placed in the (n/2 + 1)th row of the lower-right quadrant, and for the remaining rows the list can be rotated either clockwise or counter-clockwise. Algorithm 3.32 rotates it counter-clockwise in line 11. The upper-right quadrant of the table is exactly n/2 more than the corresponding cells in the lower-right quadrant. The resulting table is a valid round robin tennis tournament schedule.
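A 0-indexed Python sketch of Algorithm 3.32 follows (an illustration, not the book's code); the n = 4 result coincides with T3 in Figure 3.45 (a).

    def rrt(n):
        # Algorithm 3.32: T[i][j] (0-based) is the opponent of player
        # i+1 on day j+1, for n = 2^k players.
        if n == 2:
            return [[2], [1]]
        h = n // 2
        S = rrt(h)                            # half-sized schedule
        T = [[0] * (n - 1) for _ in range(n)]
        for i in range(h):
            for j in range(h - 1):
                T[i][j] = S[i][j]             # upper-left quadrant
                T[i + h][j] = S[i][j] + h     # lower-left: shifted by n/2
            for j in range(h):
                # lower-right: counter-clockwise rotation of 1..n/2
                T[i + h][h - 1 + j] = (i + j) % h + 1
                # upper-right: lower-right shifted by n/2
                T[i][h - 1 + j] = T[i + h][h - 1 + j] + h
        return T

    assert rrt(4) == [[2, 3, 4], [1, 4, 3], [4, 1, 2], [3, 2, 1]]   # T3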

3.8 Exercises
Q 3.1. Assuming n is an exact power of 2, prove the following statements regarding divide
recurrences by induction:

a). The solution of the following recurrence in eqn (3.36) is T (n) = log n.
T(n) = { 0                  if n = 1, i.e., k = 0
       { T(n/2) + 1         if n = 2^k, for k > 0          (3.36)

b). The solution of the following recurrence in eqn (3.5) is T (n) = 2n − 1.


T(n) = { 1                  if n = 1, i.e., k = 0
       { T(n/2) + n         if n = 2^k, for k > 0          (3.5)

c). The solution of the following recurrence in eqn (3.8) is T (n) = n log n + n.
T(n) = { 1                  if n = 1, i.e., k = 0
       { 2T(n/2) + n        if n = 2^k, for k > 0          (3.8)

d). The solution of the following recurrence in eqn (3.37) is T (n) = 3n − log n − 2.
T(n) = { 1                  if n = 1, i.e., k = 0
       { 2T(n/2) + log n    if n = 2^k, for k > 0          (3.37)

Q 3.2. Show the equivalent closed asymptotic notations for the following divide recurrences
using the Master Theorem 3.9.

a). T (n) = T (n/2) + O(1)



b). T (n) = T (n/2) + Θ(n)


c). T (n) = 2T (n/2) + O(1)
d). T (n) = 2T (n/2) + Θ(log n)
e). T (n) = 2T (n/2) + Θ(n)
f). T(n) = 2³T(n/2) + O(1)
g). T(n) = 2³T(n/2) + Θ(n)
h). T (n) = 3T (n/3) + Θ(n)
i). T (n) = 6T (n/2) + Θ(n2 )
j). T (n) = 8T (n/4) + 24 log n
Q 3.3. Consider the following two growth functions:

f(n) = 2√n + 5n log n + 5√n log n + 4

g(n) = { 2g(n/2) + n        if n > 1
       { 1                  if n = 1

Which of the following statement(s) is(are) true?

a). f(n) = O(g(n))
b). f(n) = o(g(n))
c). f(n) = Ω(g(n))
d). f(n) = ω(g(n))
e). f(n) = Θ(g(n))
f). g(n) = Θ(f(n))
g). g(n) = O(f(n))
h). g(n) = o(f(n))
i). g(n) = Ω(f(n))
j). g(n) = ω(f(n))
Q 3.4. Consider the following two growth functions:

f(n) = 2√n + 5n log n + 3

g(n) = { 2g(n/2) + n² + 2n + 1     if n > 1
       { 1                         if n = 1

Which of the following statement(s) is(are) true?

a). f(n) = O(g(n))
b). f(n) = o(g(n))
c). f(n) = Ω(g(n))
d). f(n) = ω(g(n))
e). f(n) = Θ(g(n))
f). g(n) = Θ(f(n))
g). g(n) = O(f(n))
h). g(n) = o(f(n))
i). g(n) = Ω(f(n))
j). g(n) = ω(f(n))
Q 3.5. Recall the problem of finding the maximum value in an unsorted list of n number
of values, considered as an exercise in Q 2.18 on page 85.

a). Derive a divide recurrence relation for the problem.


b). Devise an algorithm using the divide and conquer paradigm.

c). Illustrate your algorithm on the following toy example.

5 9 2 4 0 8 1 7

d). Analyze the computational time complexity of your divide and conquer algorithm.

Q 3.6. Consider Problem 2.13 of finding all occurrences of the query element in an unsorted
list, defined on page 57.
a). Devise an algorithm using the divide and conquer paradigm for Problem 2.13.
b). Illustrate your algorithm in a) on the following toy example with a query, q = 3:

3 8 2 3 2 3 5 8

c). Analyze the computational time complexity of your divide and conquer algorithm.
Q 3.7. Consider the element uniqueness (CEU) Problem 2.12, defined on page 56, which
checks whether all elements in a list are unique.

a). Devise an algorithm using the divide and conquer paradigm for Problem 2.12.
b). Illustrate your algorithm in a) on the following toy example:

3 8 2 5 4 7 5 8

c). Analyze the worst case computational time complexity of your divide and conquer
algorithm provided in a) and provide a worst case example.
d). Analyze the best case computational time complexity of your divide and conquer
algorithm provided in a) and provide a best case example.
e). Provide the overall computational time complexity of your divide and conquer algo-
rithm provided in a).

Q 3.8. Consider the prefix sum Problem 2.10, defined on page 53, which is simply adding
all elements in a list.
a). Derive a divide recurrence relation for the problem.
b). Devise an algorithm using the divide and conquer paradigm.
c). Illustrate your algorithm in b) on the following toy example:

3 -1 5 -3 -3 7 4 -1

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).
Q 3.9. Recall the prefix product problem, or simply PFP, considered as an exercise in
Q 2.15 on page 84, which is simply multiplying all elements in a list.
a). Derive a divide recurrence relation for the problem.

b). Derive an algorithm using the divide and conquer paradigm.

c). Illustrate your algorithm in b) on the following toy example:

2 -1 -2 1 -3 1 2 -1

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

Q 3.10. Consider the dot product Problem 2.4, defined on page 47, which is simply adding
each product of corresponding elements in two vectors.

a). Derive a divide recurrence relation for the problem.

b). Devise an algorithm using the divide and conquer paradigm.

c). Illustrate your algorithm in b) on the following toy example:

A= 4 1 2 1 2 3 0 1
B= 1 5 0 2 5 1 2 3

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

Q 3.11. Recall the number of descents problem, or simply NDS, considered as an exercise
in Q 2.27 on page 87, which counts all elements that are less than their immediately
preceding element in the list.

a). Derive a divide recurrence relation for the problem.

b). Devise an algorithm using the divide and conquer paradigm.

c). Illustrate your algorithm in b) on the following toy example:

8 5 3 7 2 4 1 6

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

Q 3.12. Recall the problem of checking whether a given sequence A1∼n of n quantifiable ele-
ments is in non-decreasing order, which was considered as an exercise in Q 2.22 on page 86.
For example, issorted asc(⟨1, 2, 2, 3, 9⟩) returns true and issorted asc(⟨1, 2, 1, 4, 7, 9⟩) returns
false.

a). Derive a divide recurrence relation for the problem.

b). Devise an algorithm using the divide and conquer paradigm. The best case running
time must be O(1).

c). Illustrate your algorithm in b) on the following toy example:

1 3 2 2 4 4 9 7

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

e). Provide a best case running time scenario.

Q 3.13. Consider the up-down alternating permutation Problem 2.19, or simply UDP,
defined on page 65.

a). Illustrate the up-down divide and conquer Algorithm 3.3, described on page 96 on the
following toy example:

A= 4 2 3 5 1 8 7 6

b). Devise a divide and conquer algorithm purely calling the up-down procedure.

c). Illustrate your algorithm in b) on the following two examples:

A = ⟨5, 1, 2, 0, 4, 9, 8⟩    and    A = ⟨5, 1, 2, 0, 4, 9⟩

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

Q 3.14. Recall the down-up alternating permutation problem and checking down-up se-
quence problem which were considered as exercises in Q 2.24 on page 86 and Q 2.23 on
page 86, respectively.

a). Devise a divide and conquer algorithm for the down-up problem.

b). Illustrate your algorithm in a) on the following toy example:

A= 2 5 1 0 4 9 8

c). Provide the computational time complexity of your divide and conquer algorithm
provided in a).

d). Devise a divide and conquer algorithm for the checking down-up sequence problem.

e). Illustrate your algorithm in d) on the following toy example:

A = 11 3 10 7 9 1 5 4 8 6

f). Provide the computational time complexity of your divide and conquer algorithm
provided in d).

Q 3.15. Recall the up-up-down alternating permutation problem and checking up-up-down
sequence problem which were considered as exercises in Q 2.26 on page 87 and Q 2.25 on
page 87, respectively.

a). Devise a divide and conquer algorithm for the up-up-down problem by halving.

b). Illustrate your algorithm in a) on the following two examples:



A = ⟨5, 1, 2, 0, 4, 9, 8⟩    and    A = ⟨5, 1, 2, 0, 4, 9⟩

c). Provide the computational time complexity of your divide and conquer algorithm
provided in a).
d). Devise a divide and conquer algorithm for the up-up-down problem by flexible halving,
such that all sub-problems are up-up-down sequences.
e). Illustrate your algorithm in d) on the following two examples:

5 1 2 0 4 7 9 8 5 1 2 0 4 9 6 7 3 8

f). Devise a divide and conquer algorithm for the checking up-up-down sequence problem.

Q 3.16. Illustrate the divide and conquer Algorithm 3.6, stated on page 101, for the max-
imum consecutive subsequence sum problem on the following toy examples:
a). 3 -1 -4 -3 4 -7 4 1

b). 3 -1 5 -3 -3 7 4 -1

c). -3 1 -5 3 3 -7 -4 1

Q 3.17. Recall the minimum consecutive sub-sequence sum problem, minCSS in short,
which was considered as an exercise in Q 1.17 on page 30. It is to find the consecutive sub-
sequence of a sequence, A = ha1 , a2 , · · · , an i whose sum is a minimum over all consecutive
sub-sequences.

a). Devise an algorithm using the divide and conquer paradigm.


b). Provide the computational time complexity of your divide and conquer algorithm
provided in a).
c). Illustrate your algorithm in a) on the following example:

3 -1 5 -3 -3 7 4 -1

d). Illustrate your algorithm in a) on the following example:

-3 1 -5 3 3 -7 -4 1

Q 3.18. Recall the maximum consecutive sub-sequence product problem, MCSP in short,
which was considered as an exercise in Q 1.18 on page 31. It is to find the consecutive
sub-sequence of a sequence of arbitrary real numbers, A = ha1 , a2 , · · · , an i whose product
is a maximum over all consecutive sub-sequences.

a). Consider the modified maximum consecutive sub-sequence product problem, MCSPp
in short, which is to find the consecutive sub-sequence of a sequence of positive real
numbers, A = ha1 , a2 , · · · , an i whose product is a maximum over all consecutive sub-
sequences. Formulate the problem.
b). Devise an algorithm for the MCSPp problem defined in a) using the divide and conquer
paradigm.

c). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

d). Illustrate your algorithm in b) on the following example:

2.0 0.5 5.0 2.0 2.5 0.2 2.5 0.5

e). Illustrate your algorithm in b) on the following example:

0.5 2.0 0.2 0.5 0.4 5.0 0.4 2.0

f). Devise an algorithm for MCSP using the divide and conquer paradigm.

g). Provide the computational time complexity of your divide and conquer algorithm
provided in f).

h). Illustrate your algorithm in f) on the following example:

-2 0 -1 2 1 -1 -2 2

Q 3.19. Recall the minimum consecutive sub-sequence product problem, minCSP in short,
which was considered as an exercise in Q 1.19 on page 31. It is to find the consecutive
sub-sequence of a sequence of arbitrary real numbers, A = ha1 , a2 , · · · , an i whose product
is a minimum over all consecutive sub-sequences.

a). Consider the modified minimum consecutive sub-sequence product problem, minCSPp
in short, which is to find the consecutive sub-sequence of a sequence of positive real
numbers, A = ha1 , a2 , · · · , an i whose product is a minimum over all consecutive sub-
sequences. Formulate the problem.

b). Devise an algorithm for minCSPp using the divide and conquer paradigm.

c). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

d). Illustrate your algorithm in b) on the following example:

2.0 0.5 5.0 2.0 2.5 0.2 2.5 0.5

e). Illustrate your algorithm in b) on the following example:

0.5 2.0 0.2 0.5 0.4 5.0 0.4 2.0

f). Devise an algorithm for minCSP using the divide and conquer paradigm.

g). Provide the computational time complexity of your divide and conquer algorithm
provided in f).

h). Illustrate your algorithm in f) on the following example:

-2 0 -1 2 1 -1 -2 2

Q 3.20. Given a sequence of quantifiable elements, A1∼n , the longest increasing con-
secutive sub-sequence problem (LICS in short) is to find the longest consecutive sub-
sequence, As∼e , of A1∼n such that ai ≤ ai+1 for every i ∈ {s ∼ e − 1}. For example,
LICS(h7, 2, 4, 6, 7, 7, 8, 5, 1i) = A2∼7 = h2, 4, 6, 7, 7, 8i.

a). Formulate the problem.


b). Devise an algorithm using the divide and conquer paradigm.
c). Provide the computational time complexity of your divide and conquer algorithm
provided in b).
d). Illustrate your algorithm in b) on the following example:

2 -3 -1 1 2 2 3 -4

Q 3.21. Given a sequence of quantifiable elements, A1∼n , the longest decreasing con-
secutive sub-sequence problem (LDCS in short) is to find the longest consecutive sub-
sequence, As∼e , of A1∼n such that ai ≥ ai+1 for every i ∈ {s ∼ e − 1}. For example,
LDCS(h7, 2, 4, 6, 7, 7, 8, 5, 1i) = A7∼9 = h8, 5, 1i.

a). Formulate the problem.


b). Devise an algorithm using the divide and conquer paradigm.
c). Provide the computational time complexity of your divide and conquer algorithm
provided in b).
d). Illustrate your algorithm in b) on the following example:

-2 3 1 -1 -2 -2 -3 4

Q 3.22. Consider the (a × n) multiplication Problem 1.14.


a). Draw a divide and conquer tree for (a × 9) where n = 9.
b). Devise an algorithm using the divide and conquer paradigm.
c). Prove the correctness of your algorithm in b). (Hint: distributive property)
d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).
Q 3.23. Consider the problem of computing the following formula: a^(p−1) % p.
a). Devise an algorithm using the divide and conquer paradigm.
b). Illustrate the algorithm in a) using a = 2 and p = 12.
c). Illustrate the algorithm in a) using a = 4 and p = 15.
d). Prove the correctness of your algorithm in a).
e). Provide the computational time complexity of your divide and conquer algorithm
provided in a).

Q 3.24. Recall the root finding Problem 3.3 defined on page 107.

a). Derive a first order linear recurrence relation.


b). Devise a sequential search based algorithm using inductive programming.
c). Provide the computational time complexity of your algorithm provided in b).
d). Convert the divide and conquer Algorithm 3.11, stated on page 107, into the tail
recursion divide and conquer form.
e). Provide the computational time complexity of your algorithm provided in d).

Q 3.25. Recall the floor of the square root of n (SRN(n) = ⌊√n⌋) Problem 2.26 defined on
page 78.

a). Devise a recursive algorithm using the divide and conquer paradigm (Hint: Bisection
method).
b). Provide the computational time complexity of your algorithm provided in a).
c). Convert your divide and conquer algorithm in a) into the tail recursion divide and
conquer form.

Q 3.26. Recall the Less between elements sequence validation problem, considered as an
exercise in Q 2.30 on page 88.

a). Devise an algorithm using the divide and conquer paradigm.


b). Illustrate your algorithm in a) on the toy example,
where n = 8 and S1∼2n = h8, 7, 2, 1, 1, 4, 2, 4, 3, 3, 6, 6, 7, 5, 5, 8i.
c). Illustrate your algorithm in a) on the toy example,
where n = 6 and S1∼2n = h6, 5, 2, 1, 1, 2, 4, 3, 3, 4, 5, 6i.
d). Provide the computational time complexity of your divide and conquer algorithm
provided in a).

Q 3.27. Recall the greatest common divisor of n numbers problem, mGCD in short, which
was considered as an exercise in Q 2.33 on page 89.

a). Derive a divide recurrence relation for the problem.


b). Devise an algorithm using the divide and conquer paradigm.
c). Illustrate your algorithm in b) on the following toy example:

180 150 300 90 75 450

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

Q 3.28. Recall the least common multiple of n numbers problem, mLCM in short, which
was considered as an exercise in Q 2.34 on page 89.

a). Derive a divide recurrence relation for the problem.

b). Devise an algorithm using the divide and conquer paradigm.

c). Illustrate your algorithm in b) on the following toy example:

60 180 30 42 98 105

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

Q 3.29. Consider the k-Permutation of n Problem 2.8, or simply KPN, defined on page 51.

a). Derive a divide recurrence relation for the problem.

b). Devise an algorithm using the divide and conquer paradigm.

c). Illustrate your algorithm in b) on P (9, 9).

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

Q 3.30. Recall the rising factorial power problem, RFP in short, which was considered
as an exercise in Q 2.35 on page 89.

    RFP(n, k) = n^(k̄) = n × (n + 1) × · · · × (n + k − 1)    (k factors)

a). Derive a divide recurrence relation for the problem.

b). Devise an algorithm using the divide and conquer paradigm.

c). Illustrate your algorithm in b) on RFP(2, 8).

d). Provide the computational time complexity of your divide and conquer algorithm
provided in b).

Q 3.31. Consider the problem of the nth power of a k × k square matrix, Mk×k .

a). Formulate the problem.

b). Derive a divide recurrence relation for the problem.

c). Devise an algorithm using the divide and conquer paradigm.

d). Illustrate your algorithm in c) on the following toy example where n = 9:

            [ 1 1 1 ]
        M = [ 1 0 0 ] ,    M^9 = ?
            [ 0 1 0 ]

e). Provide the computational time complexity of your divide and conquer algorithm
provided in c), assuming matmult(Ak×k , Bk×k ) takes Θ(k^2.8074 ) time.

f). Derive a first order linear recurrence relation for the problem.
g). Devise an inductive programming algorithm for the problem based on the recurrence
relation derived in f). You may use matmult(Ak×k , Bk×k ) for multiplying two square
matrices.

h). Provide the computational time complexity of your inductive programming algorithm
provided in g), assuming matmult(Ak×k , Bk×k ) takes Θ(k^2.8074 ) time.
Q 3.32. Finish the correctness proof for Strassen’s square matrix multiplication algorithm
in Theorem 3.13 on page 119.
Q 3.33. Consider the least-significant-digit-first Radix sort Algorithm 3.29 described on
page 134.
a). The least-significant-digit-first Radix sort Algorithm 3.29 does not produce the correct
output if the outside-in partitioning Algorithm 3.24, described on page 129, is used.
Prove the incorrectness using the counter example A = ⟨2, 11, 12, 4, 5, 13, 1, 7, 3⟩
and d = 4, where d = 4 means that each value is represented in four binary digits;
e.g., 4 is represented as 0100.

b). The least-significant-digit-first Radix sort Algorithm 3.29 does not produce the correct
output if the progressive partitioning Algorithm 3.25, described on page 130, is used.
Prove the incorrectness using the counter example A = ⟨2, 11, 12, 4, 5, 13, 1, 7, 3⟩
and d = 4.
c). Devise a stable bit partitioning algorithm so that the least-significant-digit-first Radix
sort Algorithm 3.29 produces correct outputs.
Chapter 4

Greedy Algorithm

Thus far in our algorithm design paradigms, we have devoted our attention to a backward-
thinking notion, i.e., recursive thinking. We turn now to thinking forward. The greedy algo-
rithm is a widely applied algorithm design technique that iteratively takes the best immediate,
or local, choice until the desired output is found. Although the greedy algorithm concept
had long been widely used to design algorithms, the term was first coined by Edmonds in
1971 [136].
The following generic template may apply to most greedy algorithms presented in this
chapter:

Greedy Algorithm Template


Greedyalgo(A)
  C = rank(A) .................. candidate set (optional)
  S = ∅
  while C ≠ ∅
    co = findbest(C) ........... greedy choice
    S = S ∪ {co } .............. include it in the solution
    C = C − {co } .............. update the candidate set
    C = reevaluate(C) .......... reevaluation (optional)
  return S

First, a certain greedy choice must be selected to rank the input candidates. Then, a greedy
algorithm selects the best candidate locally, without considering future consequences, and
includes it into the solution set. Some candidates that do not meet certain problem conditions
are excluded. While simple and easy to implement, greedy algorithms do not always
produce the desired or optimal solution. When a proposed greedy algorithm does not find
a solution for certain problems, it is labeled as ‘greedy approximation’ in the Index of

Computational Problems Table on page 736. Although the template can be stated recursively,
the greedy algorithms in this chapter shall mostly be stated iteratively, because the recursion
would work in a tail-recursive manner anyway.
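To make the template concrete, the following is a minimal Python rendering of it. The names find_best and feasible are illustrative placeholders for the greedy choice and a problem-specific feasibility test; they are not names used elsewhere in this book.

def greedy(candidates, find_best, feasible):
    """A minimal sketch of the generic greedy template above.

    find_best(C) implements the greedy choice on the candidate set C;
    feasible(S, c) decides whether candidate c may join the partial
    solution S (this is where unfit candidates are excluded).
    """
    C = set(candidates)          # candidate set
    S = set()                    # solution set
    while C:
        c = find_best(C)         # greedy choice
        C.remove(c)              # update the candidate set
        if feasible(S, c):
            S.add(c)             # include it in the solution
    return S

# Example: pick numbers greedily while keeping the running sum <= 10.
# greedy({7, 5, 4, 2}, max, lambda S, c: sum(S) + c <= 10) returns {7, 2}:
# 7 is taken first, 5 and 4 are rejected, then 2 fits.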
This chapter has the following two objectives: the ability to design a greedy algorithm for
various computational problems, and the ability to prove the correctness or incorrectness of
any proposed greedy algorithm. The first objective is to improve the ability to devise greedy
algorithms with various greedy choices. Computational problems are drawn from a variety
of domains such as optimization, scheduling, graph theory, set theory, etc. Readers must be
able to formulate various classical optimization problems and design their respective greedy
algorithms or approximations. Numerous scheduling problems as well as famous graph-
theoretic problems are also introduced.
The second and most important objective is to show either the correctness or incorrect-
ness of any proposed greedy algorithm. Readers must be able to determine which greedy
choices work or do not work for which problems. Proving incorrectness can be done rather
easily by a single counter-example. Proving correctness is rather harder, but is usually done
via proof by contradiction.
It should be noted that most greedy algorithms’ computational time complexities im-
prove dramatically when combined with a data structure called ‘priority queue,’ which will
be discussed in Chapter 9. In this chapter, algorithms are stated without any data structure
in order to master the greedy algorithm paradigm first.

4.1 Problems on List


4.1.1 Order Statistics
Suppose there is a greedy professor who enforces the ‘no food in class’ rule when all of
the students have brought food to class. He confiscates the food, but picks and eats the most
delicious item first. After eating it, he still feels hungry, so he takes the next most delicious
item, and so on. If he stops at the kth item, he has found the answer to the kth order statistics
Problem 2.15.
This greedy choice process is illustrated in Figure 4.1 using a toy example.

k Top k A1∼n−k max(A1∼n−k )


1 9 5 2 9 4 0 8 7 1 9
2 9 8 5 2 X 4 0 8 7 1 8
3 9 8 7 5 2 X 4 0 X 7 1 7

Figure 4.1: Selection kth max Algorithm 4.1 illustration.

Now an algorithm using the greedy algorithm paradigm can be written as follows:
Algorithm 4.1. Selection kth max
selection(A1∼n , k)
for i = 1 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
j = argmax(A1∼n ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
delete aj or (aj = −∞ or min(A1∼n ) − 1) . . . . . . . 3
return j or aj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Note that argmax(A1∼n ) returns an index j such that aj = max(A1∼n ); that is, argmax
finds the index of an element with the maximum value. Since finding the argmax of an array
in line 2 takes linear time, the complexity of Algorithm 4.1 is Θ(kn). Algorithm 4.1
assumes that 1 ≤ k ≤ n. To solve the kth smallest element problem considered on page 85,
lines 2 and 3 of Algorithm 4.1 can be replaced with argmin and aj = ∞ (or max(A1∼n ) + 1),
respectively.
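For reference, here is a short Python sketch of Algorithm 4.1, using the mark-as-−∞ variant of line 3; the function name is illustrative.

from math import inf

def selection_kth_max(A, k):
    """Greedy kth order statistic, a Python sketch of Algorithm 4.1.

    Take the current maximum k times; the kth element taken is the
    answer. The input is copied, so the caller's list is unchanged.
    """
    A = list(A)
    assert 1 <= k <= len(A)
    for _ in range(k):
        j = max(range(len(A)), key=A.__getitem__)  # argmax, linear time
        answer = A[j]
        A[j] = -inf                                # "delete" the chosen element
    return answer

# selection_kth_max([5, 2, 9, 4, 0, 8, 7, 1], 3) == 7, as in Figure 4.1.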

4.1.2 Sorting
Consider the sorting Problem 2.16, formally defined on page 60. In Algorithm 4.1, the
top k best solutions may be stored in an array, O, of size k. If we let k = n, the output would
contain the sorted list. However, a list can be sorted without an extra array. Since one best
choice element is deleted and included in the solution set for each iteration in Algorithm 4.1,
the selected element can be swapped within the original input array. If swapped with the
element in the beginning of the array, the first 1 ∼ i elements in A are the partial solution
set and the remaining i + 1 ∼ n elements are the unsorted candidate list at the ith iteration.
This is illustrated using a toy sample string ‘GREEDYALGO’ in Figure 4.2.

i S1∼i−1 ↔ Ai∼n min(Ai∼n ) swap


1 G R E E D Y A L G O min(A1∼n ) = a7 = ‘A’ swap(a1 , a7 )
2 A R E E D Y G L G O min(A2∼n ) = a5 = ‘D’ swap(a2 , a5 )
3 A D E E R Y G L G O min(A3∼n ) = a3 = ‘E’ swap(a3 , a3 )
4 A D E E R Y G L G O min(A4∼n ) = a4 = ‘E’ swap(a4 , a4 )
5 A D E E R Y G L G O min(A5∼n ) = a7 = ‘G’ swap(a5 , a7 )
6 A D E E G Y R L G O min(A6∼n ) = a9 = ‘G’ swap(a6 , a9 )
7 A D E E G G R L Y O min(A7∼n ) = a8 = ‘L’ swap(a7 , a8 )
8 A D E E G G L R Y O min(A8∼n ) = a10 = ‘O’ swap(a8 , a10 )
9 A D E E G G L O Y R min(A9∼n ) = a10 = ‘R’ swap(a9 , a10 )
10 A D E E G G L O R Y

Figure 4.2: Selection sort Algorithm 4.2 illustration.

Now, an algorithm for the sorting problem using the greedy algorithm paradigm can be
written simply as follows:

Algorithm 4.2. Selection sort


selectionsort(A, k)
for i = 1 to n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
j = argmin(Ai∼n ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
swap(ai , aj ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

This ‘selection sort’ Algorithm 4.2 is also referred to as ‘Sorting by Selection’ in [103, p
138-141]. Recall that j = argmin(A1∼n ) if aj = min(A1∼n ). If the iteration terminates at
k, it solves the kth order statistics (min) problem.
The swap operation in line 3 can be done in constant time and combines the ‘include it in
the solution’ and ‘update the candidate set’ steps of the template. Since finding the argmin

in an array in line 2 takes linear time and the swap operation in line 3 takes constant, the
computational time complexity of Algorithm 4.1 is clearly Θ(n2 ), as derived in eqn (4.1).
n
X
(n − i + 1 + O(1)) ∈ Θ(n2 ) (4.1)
i=1
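A Python sketch of Algorithm 4.2 might look as follows (illustrative naming):

def selection_sort(A):
    """In-place selection sort, a sketch of Algorithm 4.2."""
    n = len(A)
    for i in range(n - 1):
        j = min(range(i, n), key=A.__getitem__)  # argmin of A[i..n-1]
        A[i], A[j] = A[j], A[i]                  # swap into position i
    return A

# selection_sort(list("GREEDYALGO")) yields list("ADEEGGLORY"),
# as in Figure 4.2.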

4.1.3 Alternating Permutation


Algorithm 4.2 sorts an array in ascending order. To sort it in descending order, ‘argmax,’
which finds an index of a maximum value, can be used in line 2 instead of ‘argmin,’ which
finds an index of a minimum value. If minimum and maximum values are selected
alternately, the alternating permutation Problem 2.19, defined on page 65, can be solved.
A pseudo code is stated as follows:

Algorithm 4.3. Alternating permutation -up-down - greedy


up-down-greedy(A)
for i = 1 to n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if i is odd, j = argmin(Ai∼n ) . . . . . . . . . . . . . . . . 2
else j = argmax(Ai∼n ) . . . . . . . . . . . . . . . . . . . . . . .3
swap(ai , aj ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Since finding an element with either the minimum or the maximum value takes linear time
and this is executed n − 1 times, Algorithm 4.3 takes quadratic time, Θ(n²), as analyzed
in eqn (4.1).
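A Python sketch of Algorithm 4.3 follows, assuming 0-indexed positions so that odd 1-indexed positions receive minima; the function name is illustrative.

def up_down_greedy(A):
    """Greedy up-down alternating permutation, a sketch of Algorithm 4.3."""
    n = len(A)
    for i in range(n - 1):
        pick = min if i % 2 == 0 else max          # odd 1-indexed position: min
        j = pick(range(i, n), key=A.__getitem__)   # argmin/argmax of A[i..n-1]
        A[i], A[j] = A[j], A[i]
    return A

# up_down_greedy([3, 1, 2, 4, 0, 7, 6, 5]) == [0, 7, 1, 6, 2, 5, 3, 4],
# matching Figure 4.3 (b).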

(a) sample input (b) Greedy algo (c) induc. prog. & D&C
i S1∼i−1 ↔ Ai∼n min(Ai∼n ) swap
1 3 1 2 4 0 7 6 5 min(A1∼n ) = a5 = 0 swap(a1 , a5 )
2 0 1 2 4 3 7 6 5 max(A2∼n ) = a6 = 7 swap(a2 , a6 )
3 0 7 2 4 3 1 6 5 min(A3∼n ) = a6 = 1 swap(a3 , a6 )
4 0 7 1 4 3 2 6 5 max(A4∼n ) = a7 = 6 swap(a4 , a7 )
5 0 7 1 6 3 2 4 5 min(A5∼n ) = a6 = 2 swap(a5 , a6 )
6 0 7 1 6 2 3 4 5 max(A6∼n ) = a8 = 5 swap(a6 , a8 )
7 0 7 1 6 2 5 4 3 min(A7∼n ) = a8 = 3 swap(a7 , a8 )
8 0 7 1 6 2 5 3 4

(d) Algorithm 4.3 illustration

Figure 4.3: Toy example of up-down problem

For a sample input in Figure 4.3 (a), the greedy Algorithm 4.3 produces the output in
Figure 4.3 (b), whereas the inductive programming Algorithm 2.24 and divide & conquer
Algorithm 3.3 produce the output in Figure 4.3 (c). Both outputs in Figure 4.3 (b) and (c)
are valid. Figure 4.3 (d) illustrates Algorithm 4.3.

4.2 Optimization Problems

(a) Pyramidal mountain (b) Rocky mountain

Figure 4.4: Pyramidal and rocky mountain type optimization problems.

Imagine a wolf with limited sight. Its goal is to reach the top of the mountain. In
each step, it always chooses to climb upward without regard for future consequences. This
greedy approach makes it reach the top of the mountain if the optimization problem is
shaped like a pyramidal mountain as in Figure 4.4 (a). If the optimization problem is
shaped like a rocky mountain as in Figure 4.4 (b), a greedy algorithm may fail to reach the
global maximum and end up in a local maximum due to its shortsighted approach.
A greedy algorithm takes the best it can get right now without regard for future con-
sequences. The solution that it finds at the end of the algorithm may be the best solution
if the problem type is pyramidal, but may not be the optimal solution otherwise. Hence,
algorithm designers must either prove its correctness, usually by contradiction, or prove its
incorrectness by a counter-example. By the principle of the asymmetry between verification
and falsification [?], the correctness of an algorithm can never be verified by a finite number
of examples but can be falsified by only one counter-example.

4.2.1 Select the k Subset Sum Maximization


Recall the kth order statistics Problem 2.15, defined on page 59. It can be modified into
an optimization problem: pick k items out of n items such that the subset sum of these
k items is maximized. This optimization problem, which shall simply be called SKSS, can be
formally defined using the standard optimization problem formulation as in eqn (4.2).

Problem 4.1. Select k subset sum maximization (SKSS)


Input: A list A of n quantifiable elements and k ∈ Z +
Output: X = hx1 , x2 , · · · , xn i such that
n
X
maximize ai xi
i=1
n
X (4.2)
subject to xi = k
i=1
where xi = 0 or 1

Figure 4.5 (a) shows the sample output for the toy example on page 154 where k = 3.
A pseudo code, almost identical to Algorithm 4.1, is given as seen below. Only line 4,
which returns the output, is different.

Input:  A = 5 2 9 4 0 8 7 1,   k = 3
Output: X = 0 0 1 0 0 1 1 0,   Σ_{i=1}^{n} ai xi = 24
(a) Sample input and output for SKSS

k   A1∼k ↔ Ak+1∼n     swap(ak , max(Ak+1∼n ))   Σ(A1∼k )
0   5 2 9 4 0 8 7 1                                 0
1   9 2 5 4 0 8 7 1   swap(5, 9)                    9
2   9 8 5 4 0 2 7 1   swap(2, 8)                   17
3   9 8 7 4 0 2 5 1   swap(5, 7)                   24
(b) Algorithm 4.4 illustration.

Figure 4.5: A greedy algorithm for SKSS.

Algorithm 4.4. Select k subset sum max

selectionSKSS(A, k)
  for i = 1 to k . . . . . . . . . . . . . . . . . . . 1
    j = argmax(Ai∼n ) . . . . . . . . . . . . . . . . . 2
    swap(ai , aj ) . . . . . . . . . . . . . . . . . . 3
  return Σ_{i=1}^{k} ai . . . . . . . . . . . . . . . . 4
i=1
Figure 4.5 (b) illustrates the greedy select k subset sum maximization Algorithm 4.4.
The computational time complexity of Algorithm 4.4 is Σ_{i=1}^{k} (n − i + 1) ∈ Θ(kn). Thus,
the greedy Algorithm 4.1 with a small modification solves Problem 4.1, which is apparently a
pyramidal mountain type optimization problem. Most correctness proofs for greedy algorithms
utilize the proof by contradiction technique, as follows:

Theorem 4.1. Algorithm 4.4 correctly finds the optimal solution for the SKSS Problem 4.1.

Proof. (by contradiction) Let S be the solution set produced by the greedy algorithm.
Suppose there is another solution S′ such that S′ ≠ S and Σ_{i=1}^{k} s′i > Σ_{i=1}^{k} si .
Sort S and S′ in descending order. Since the greedy algorithm takes the ith largest element
of A at its ith iteration, s′i ≤ si for every i ∈ {1, · · · , k}. Hence Σ_{i=1}^{k} s′i ≤ Σ_{i=1}^{k} si ,
which contradicts the assumption. Therefore, Algorithm 4.4 correctly finds the optimal
solution. □
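A Python sketch of Algorithm 4.4 follows; it realizes the same greedy choice by ranking the indices once instead of performing k argmax passes, so the name and structure are illustrative rather than literal.

def selection_skss(A, k):
    """Greedy SKSS, a sketch of Algorithm 4.4.

    Take the k largest values; returns the 0-1 selection vector X and
    the maximized subset sum.
    """
    order = sorted(range(len(A)), key=lambda i: A[i], reverse=True)
    X = [0] * len(A)
    for i in order[:k]:
        X[i] = 1
    return X, sum(A[i] for i in order[:k])

# selection_skss([5, 2, 9, 4, 0, 8, 7, 1], 3) == ([0, 0, 1, 0, 0, 1, 1, 0], 24),
# as in Figure 4.5.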

4.2.2 Postage Stamp Minimization Problem


Suppose you want to make change for $7.47 with the fewest possible bills and coins. A
greedy approach takes the largest possible bill or coin that does not overshoot. Hence, the
output is one $5 bill, two $1 bills, one quarter, two dimes, no nickels, and two pennies. There
is no way to make the change with fewer than 8 bills or coins, so the greedy algorithm
correctly finds the optimal solution for the bill/coin change minimization problem.
This coin change problem can be generalized to the postage stamp equality minimization,
or simply PSEmin problem, and is defined as follows:

Problem 4.2. Postage stamp equality minimization problem

Input: A list A of n different value stamps and m amount
Output: Σ_{i=1}^{n} xi if ∃X = ⟨x1 , x2 , · · · , xn ⟩ such that the constraints below
        hold; False otherwise

    minimize    Σ_{i=1}^{n} xi
    subject to  Σ_{i=1}^{n} ai xi = m
    where       0 ≤ xi integer
Figure 4.6 (a) shows the sample input and output for the coin example.

Input:  A = 5 1 .25 .1 .05 .01,   m = 7.47
Output: X = 1 2 1 2 0 2
(a) Sample input and output for PSEmin

i   mi−1 − x′i × a′i = mi       X
0   basis = 7.47                0 0 0 0 0 0
1   7.47 − 1 × 5    = 2.47      1 0 0 0 0 0
2   2.47 − 2 × 1    = .47       1 2 0 0 0 0
3   .47  − 1 × .25  = .22       1 2 1 0 0 0
4   .22  − 2 × .1   = .02       1 2 1 2 0 0
5   .02  − 0 × .05  = .02       1 2 1 2 0 0
6   .02  − 2 × .01  = 0         1 2 1 2 0 2
(b) Cashier’s Algorithm 4.5 illustration.

Figure 4.6: A greedy algorithm for PSEmin.

A greedy algorithm must find a maximum or minimum value, depending on its greedy
choice, in each iteration. If the input A is already sorted, the best greedy choice can be found
in constant time in each iteration. Let sort(A, asc) and sort(A, desc) be functions that sort
A in ascending and descending order, respectively. A pseudo code for a greedy algorithm
for the postage stamp equality minimization problem, which is widely known as the Cashier’s
algorithm, is stated as follows:
Algorithm 4.5. Cashier’s algorithm

PSEmin-greedy(A1∼n , m)
  A′ = sort(A, desc) . . . . . . . . . . . . . . . . . . 1
  for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . 2
    x′i = ⌊m/a′i ⌋ . . . . . . . . . . . . . . . . . . . 3
    m = m − x′i × a′i . . . . . . . . . . . . . . . . . 4
  if m = 0, return Σ_{i=1}^{n} x′i . . . . . . . . . . . 5
  else, return False . . . . . . . . . . . . . . . . . . 6
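A Python sketch of the Cashier’s Algorithm 4.5 follows. It works on integer amounts (e.g., cents) to avoid floating-point error, which is an implementation choice rather than part of the algorithm; the function name is illustrative.

def pse_min_greedy(values, m):
    """Cashier's algorithm, a sketch of Algorithm 4.5.

    values: integer denominations; m: integer target amount. Returns
    the per-denomination counts, or None when the greedy pass cannot
    hit m exactly.
    """
    counts = []
    for v in sorted(values, reverse=True):
        x, m = divmod(m, v)       # x = floor(m / v), m = remainder
        counts.append((v, x))
    return counts if m == 0 else None

# US denominations in cents for $7.47:
# pse_min_greedy([500, 100, 25, 10, 5, 1], 747)
# -> [(500, 1), (100, 2), (25, 1), (10, 2), (5, 0), (1, 2)], 8 pieces total.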

When there is no need to reevaluate candidates, the pseudo code template can be
simplified: first sort by the greedy choice and then decide, from the beginning, whether each
candidate is selected. The computational time complexity of such greedy algorithms, domi-
nated by the initial sorting, is usually O(n log n). A step by step illustration of Algorithm 4.5
on the coin sample problem is provided in Figure 4.6.
The next step is to prove its correctness. Is it possible to make $7.47 with fewer than 8
coins? Although the answer is ‘no’ for this specific case, does Algorithm 4.5 always guarantee
an optimal solution in general? To prove its incorrectness, all we need is to find a
counter-example.

Theorem 4.2. The greedy Algorithm 4.5 does not find an optimal solution for the PSEmin.

Proof. Consider the counter-example A = ⟨44, 33, 24, 1⟩ and m = 67, as given in Figure 4.7
(a). There are four kinds of stamps and we are to find the minimum number of stamps needed
to make 67¢. Algorithm 4.5 gives a solution of 24 stamps, as shown in Figure 4.7 (b).
However, there is a better way that requires only three stamps, as given in Figure 4.7 (c).
Thus, the greedy Algorithm 4.5 does not solve the PSEmin Problem 4.2. □

(a) Four kinds of stamps: 44¢, 33¢, 24¢, and 1¢.
(b) Greedy algorithm solution: 24 stamps (one 44¢ stamp and twenty-three 1¢ stamps)
    needed to make 67¢.
(c) Optimal solution: 3 stamps needed to make 67¢.
(d) Exceeding solutions: 2 stamps (e.g., two 44¢ stamps) needed to cover 67¢.

Figure 4.7: Stamp example

Indeed, the postage stamp equality minimization problem is an NP-hard problem [4],
which shall be covered in Chapter 11. The sets of bills and coins in circulation in most
countries’ currency systems are designed such that the greedy algorithm works.
A set of coins or stamps is said to be canonical if the greedy algorithm finds an optimal
solution [120]. Interested readers may see [133] for an algorithm that checks whether a set of
items is canonical.
Instead of searching for the minimum number of stamps to make exactly 67¢, some
people simply use two 44¢ stamps, as given in Figure 4.7 (d). As long as the total amount
is greater than or equal to the required amount, the solution is acceptable. This slightly
modified postage stamp minimization problem is defined as follows:

Problem 4.3. Postage stamp minimization problem

Input: A list A of n different value stamps and m amount
Output: Σ_{i=1}^{n} xi if ∃X = ⟨x1 , x2 , · · · , xn ⟩ such that the constraints below
        hold; False otherwise

    minimize    Σ_{i=1}^{n} xi
    subject to  Σ_{i=1}^{n} ai xi ≥ m
    where       0 ≤ xi integer

The following algorithm in eqn (4.3), based on the same greedy paradigm as the Cashier’s
Algorithm 4.5, finds the minimum number of stamps to cover the required amount.

    PSmin(m, A1∼n ) = ⌈ m / max(A1∼n ) ⌉    (4.3)
Theorem 4.3. The greedy algorithm in eqn (4.3) correctly finds an optimal solution.
Proof. Let z = max(A1∼n ) and X be the solution produced by the greedy algorithm in
eqn (4.3). Suppose there is a better solution X′ such that eqn (4.4) holds:

    Σ_{i=1}^{n} x′i < Σ_{i=1}^{n} xi = ⌈m/z⌉    (4.4)

Since all elements in A1∼n are less than or equal to z, they can be stated in terms of z:
A1∼n = ⟨z − b1 , z − b2 , · · · , z − bn ⟩ where bi ≥ 0. Then

    Σ_{i=1}^{n} x′i ai = Σ_{i=1}^{n} x′i (z − bi ) = z Σ_{i=1}^{n} x′i − Σ_{i=1}^{n} bi x′i    (4.5)

Since Σ_{i=1}^{n} x′i is an integer, eqn (4.4) implies Σ_{i=1}^{n} x′i ≤ ⌈m/z⌉ − 1, and thus
z Σ_{i=1}^{n} x′i ≤ z⌈m/z⌉ − z < m. Since Σ_{i=1}^{n} bi x′i ≥ 0, eqn (4.5) gives
Σ_{i=1}^{n} ai x′i < m. This contradicts the feasibility requirement Σ_{i=1}^{n} ai x′i ≥ m.
Therefore, the greedy algorithm in eqn (4.3) correctly finds an optimal solution. □

4.2.3 0-1 Knapsack


Consider a classical optimization problem called the ‘0-1 knapsack’ problem. Given a set of
items with respective weights and values and a knapsack whose maximum capacity is m,
the 0-1 knapsack problem is to find a combination of the items that maximizes the total
value while not exceeding the knapsack’s maximum capacity.
For example, consider an algorithm APP store that has (n = 9) different APPs with
their values and sizes as shown in Figure 4.8. One day, the store is giving out the APPs for

(a) Algorithm APP store on a tablet with (m = 40M ) storage capacity.

A    BF   DA   EA   HC   KA   KR   LM   PA   SA
P    11   11    4   12    9    8   21    4   12
W     7   10    3    5   12   14   25    5   15
(b) Table of APPs, with item names abbreviated according to (a).

A    LM   HC   SA   BF   DA   KA   KR   EA   PA
P    21   12   12   11   11    9    8    4    4
W    25    5   15    7   10   12   14    3    5
X     1    1    0    1    0    0    0    1    0
(c) 48 profit made by the greedy Algorithm 4.6.

A     HC    BF    EA   DA    LM   SA   PA    KA    KR
P     12    11     4   11    21   12    4     9     8
W      5     7     3   10    25   15    5    12    14
U    2.4  1.57  1.33  1.1  0.84  0.8  0.8  0.75  0.57
X      1     1     1    1     0    1    0     0     0
(d) 50 profit made by the greedy Algorithm 4.7.

Figure 4.8: Algorithm APP store example



free. Suppose that one has a tablet PC with maximum storage space (m = 40M ). The
task is to download APPs so that the tablet’s total value is maximized. Since each APP is
either downloaded once or not selected, this problem belongs to the 0-1 knapsack problem,
which is formally defined as an optimization problem.
Problem 4.4. 0-1 Knapsack

Input: A1∼n , a list of n different items where each item is represented by its profit
       and weight, ai = (pi , wi ), and
       m, the maximum weight that a knapsack can hold.
Output: X = ⟨x1 , x2 , · · · , xn ⟩ such that

    maximize    Σ_{i=1}^{n} pi xi
    subject to  Σ_{i=1}^{n} wi xi ≤ m
    where       xi = 0 or 1
One may sort the APPs by their profits and include the most profitable APP in the solution
as long as it fits into the memory space. This greedy algorithm is stated as follows:
Algorithm 4.6. Greedy 01-knapsack I

zo-knapsack-greedyI(A, m)
  A = sort(A, desc) by P . . . . . . . . . . . . . . . . 1
  T = 0 . . . . . . . . . . . . . . . . . . . . . . . . 2
  X1∼n = 0’s . . . . . . . . . . . . . . . . . . . . . . 3
  for i = 1 to n . . . . . . . . . . . . . . . . . . . . 4
    if m − wi ≥ 0 . . . . . . . . . . . . . . . . . . . 5
      m = m − wi . . . . . . . . . . . . . . . . . . . . 6
      T = T + pi . . . . . . . . . . . . . . . . . . . . 7
      xi = 1 . . . . . . . . . . . . . . . . . . . . . . 8
  return T and X . . . . . . . . . . . . . . . . . . . . 9
Algorithm 4.6 takes O(n log n) time, as it involves sorting the list. Algorithm 4.6 is illustrated
on a toy example in Figure 4.8 (c): a profit of 48 is made by the greedy Algorithm 4.6. Is
this profit optimal? One can find a better solution easily.
Theorem 4.4. Algorithm 4.6 does not find the optimal solution.
Proof. Consider the counter example in Figure 4.9 where m = 25.

i    1   2   3        i    1   2   3
pi 140  90  70        pi 140  90  70
wi  25  10   5        wi  25  10   5
xi   1   0   0        xi   0   1   1
(a) 140 profit        (b) Optimal profit of 160

Figure 4.9: A counter example of the greedy Algorithm 4.6 where m = 25.

While the greedy Algorithm 4.6 returns a solution of 140 profit, as given in Figure 4.9 (a),
there exists another solution of 160 profit, which is more than 140, as given in Figure 4.9 (b). □

One may choose a different way of ranking candidates, i.e., a different greedy choice. One
such greedy choice is the unit profit value. The unit profit value, ui , is the profit value
divided by the weight: ui = pi /wi . Sort the APPs by their unit profit values instead of their
plain profit values, and include the APP with the highest unit profit value in the solution as
long as it fits into the memory space. The algorithm is the same as Algorithm 4.6 except for
the ranking step (lines 1 ∼ 3).
Algorithm 4.7. Greedy 01-knapsack II
zo-knapsack-greedyII(A, m)
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
ui = pi /wi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
A = sort(A, desc) by U . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
X1∼n = 0’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
T = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
if m − wi ≥ 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
m = m − wi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
T = T + pi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
xi = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
return T and X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Algorithm 4.7 is illustrated on a toy example in Figure 4.8 (d): a profit of 50 is made by
the greedy Algorithm 4.7, a better solution than the one by the greedy Algorithm 4.6.
Is this profit optimal?
Theorem 4.5. Algorithm 4.7 does not find the optimal solution.
Proof. Consider the counter example in Figure 4.10 where m = 50.

i    1   2   3        i    1   2   3
pi  60 100 120        pi  60 100 120
wi  10  20  30        wi  10  20  30
ui   6   5   4        ui   6   5   4
xi   1   1   0        xi   0   1   1
(a) 160 profit        (b) Optimal profit of 220

Figure 4.10: A counter example of the greedy Algorithm 4.7 where m = 50.

While the greedy Algorithm 4.7 returns a solution of 160 profit, as given in Figure 4.10 (a),
there exists another solution of 220 profit, which is more than 160, as given in Figure 4.10 (b). □
Both Algorithm 4.6 and Algorithm 4.7 take O(n log n) since the list of items must be
sorted first.
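A single Python sketch can cover both greedy choices, since Algorithms 4.6 and 4.7 differ only in how candidates are ranked; the key parameter below is an illustrative device for passing that ranking in.

def knapsack_greedy(items, m, key):
    """Greedy 0-1 knapsack, a sketch covering Algorithms 4.6 and 4.7.

    items: (profit, weight) pairs; key ranks the candidates, e.g.
    plain profit (Algorithm 4.6) or unit profit (Algorithm 4.7).
    """
    total = 0
    chosen = []
    for p, w in sorted(items, key=key, reverse=True):
        if w <= m:               # greedy choice: take it if it still fits
            m -= w
            total += p
            chosen.append((p, w))
    return total, chosen

items = [(60, 10), (100, 20), (120, 30)]                 # Figure 4.10, m = 50
knapsack_greedy(items, 50, key=lambda a: a[0])           # rank by profit
knapsack_greedy(items, 50, key=lambda a: a[0] / a[1])    # rank by unit profit:
# returns (160, ...), matching Figure 4.10 (a); the optimum is 220.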

4.2.4 Fractional Knapsack


Consider a variant of the previous knapsack problem where each item is divisible.
Downloading only a portion of an APP in the APP store example in Figure 4.8 would be
meaningless, but one may take portions of the items in many other applications. For
example, in the mixed fruit juice setting of Figure 4.11 (a), one would like to maximize the
amount of iron in a mixed juice cup of capacity m, given a set of n kinds of fruit juices. The
objective

Fruit apple banana blackberry blueberry grapefruit peach raspberry


amt. 200ml 180ml 50ml 80ml 190ml 100ml 120ml
Iron 0.18mg 0.22mg 0.18mg 0.13mg 0.16mg 0.15mg 0.41mg
(a) A sample input
Fruit blackb. raspb. blueb. peach banana apple grapef.
amt. 50ml 120ml 80ml 100ml 180ml 200ml 190ml
Iron 0.18mg 0.41mg 0.13mg 0.15mg 0.22mg 0.18mg 0.16mg
unit 3.6µg 3.4µg 1.6µg 1.5µg 1.2µg 0.9µg 0.8µg
x 1 1 1 1 1/6 0 0
(b) A sample output of the greedy Algorithm 4.8 where m = 380ml.

Figure 4.11: Mixed fruit juice with a maximum iron ingredient example

is similar to the 0-1 knapsack Problem 4.4; the difference is that the items are breakable.
The fractional knapsack, or continuous knapsack, problem is formally defined as follows:
Problem 4.5. Fractional knapsack

Input: A1∼n , a list of n different items where each item is represented by its profit
       and weight, ai = (pi , wi ), and
       m, the maximum weight that a knapsack can hold.
Output: X = ⟨x1 , x2 , · · · , xn ⟩ such that

    maximize    Σ_{i=1}^{n} pi xi
    subject to  Σ_{i=1}^{n} wi xi ≤ m
    where       0 ≤ xi ≤ 1

Similar to Algorithm 4.7, one can first compute the iron amount per unit volume and
include all items in full as long as it does not exceed the capacity. When it comes to the last
item that may cause overflow, one can pour the last item until the knapsack is full. This
process is illustrated in Figure 4.11 (b). This greedy algorithm, which first appears in [45],
is stated as follows:
Algorithm 4.8. Greedy Fractional knapsack

frac-knapsack-greedy(A, m)
  for i = 1 to n . . . . . . . . . . . . . . . . . . . . 1
    ui = pi /wi . . . . . . . . . . . . . . . . . . . . . 2
  A = sort(A, desc) by U . . . . . . . . . . . . . . . . 3
  T = 0 . . . . . . . . . . . . . . . . . . . . . . . . 4
  X1∼n = 0’s . . . . . . . . . . . . . . . . . . . . . . 5
  for i = 1 to n . . . . . . . . . . . . . . . . . . . . 6
    if m − wi ≥ 0 . . . . . . . . . . . . . . . . . . . 7
      m = m − wi . . . . . . . . . . . . . . . . . . . . 8
      T = T + pi . . . . . . . . . . . . . . . . . . . . 9
      xi = 1 . . . . . . . . . . . . . . . . . . . . . 10
    else . . . . . . . . . . . . . . . . . . . . . . . 11
      xi = m/wi . . . . . . . . . . . . . . . . . . . . 12
      T = T + pi × xi . . . . . . . . . . . . . . . . . 13
      break . . . . . . . . . . . . . . . . . . . . . . 14
  return T and X . . . . . . . . . . . . . . . . . . . 15

Algorithm 4.8 takes O(n log n), as it involves sorting.


Theorem 4.6. Algorithm 4.8 correctly finds an optimal solution.
Proof. Let S be the solution produced by the greedy Algorithm 4.8, sorted by unit value in
non-increasing order. Let S^e be the list obtained by breaking every item into unit elements;
e.g., if the first item s1 in S has weight w1 , the first w1 elements of S^e are unit pieces of s1 .
Clearly, S^e is sorted by unit value in non-increasing order, and the first m elements of S^e
constitute the solution given by the greedy Algorithm 4.8, whose total value is therefore the
maximum possible for the first m unit elements. Suppose there is another solution S′ whose
total profit exceeds that of S. Then there must exist a unit element x ∈ S′ with x ∉ S^e_{1∼m},
and x must replace some unit element y ∈ S^e_{1∼m}. Since S^e is sorted, every unit element
outside S^e_{1∼m}, including x, has value at most y. Replacing y with x changes the total
profit by x − y ≤ 0, so S′ cannot be strictly better than S. This is a contradiction. Therefore,
there is no better solution than the one produced by Algorithm 4.8. □
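A Python sketch of Algorithm 4.8 follows (illustrative naming):

def fractional_knapsack(items, m):
    """Greedy fractional knapsack, a sketch of Algorithm 4.8.

    items: (profit, weight) pairs. Whole items are taken in
    non-increasing order of unit profit; the first item that would
    overflow the capacity is taken fractionally, then the loop stops.
    """
    total = 0.0
    taken = []                           # (fraction taken, item)
    for p, w in sorted(items, key=lambda a: a[0] / a[1], reverse=True):
        if w <= m:
            m -= w
            total += p
            taken.append((1.0, (p, w)))
        else:
            x = m / w                    # partial amount that still fits
            total += p * x
            taken.append((x, (p, w)))
            break
    return total, taken

# Juice example of Figure 4.11: profit = iron (mg), weight = volume (ml).
juices = [(0.18, 200), (0.22, 180), (0.18, 50), (0.13, 80),
          (0.16, 190), (0.15, 100), (0.41, 120)]
total, _ = fractional_knapsack(juices, 380)
# Blackberry, raspberry, blueberry, and peach in full, then 1/6 of the
# banana juice, as in Figure 4.11 (b).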

4.2.5 Unbounded integer knapsack

missile A   blue   red   yellow
energy W       1     4        5
point P        1    18       20

Figure 4.12: k = 3 missile game example

In the 0-1 knapsack Problem 4.4, each item is either selected once or not selected.
Suppose an item can be selected more than once. This problem is known as the unbounded
integer knapsack problem.

As an example of the unbounded integer knapsack problem, consider the three missile
game depicted in Figure 4.12. There are three missiles (blue, red, and yellow) with different
points gained and energy required. The blue or red missile is fired when the blue or red
button is pressed, respectively. When both buttons are pressed, the yellow missile is fired.
One can consider the energy consumed for each missile as the weight of the missile, and
the total available energy as the maximum capacity of the knapsack. Then, this problem
becomes the unbounded knapsack problem and it can be formally defined as follows:
Problem 4.6. Unbounded integer knapsack

Input: A1∼n , a list of n different items where each item is represented by its profit
       and weight, ai = (pi , wi ), and
       m, the maximum weight that a knapsack can hold.
Output: X = ⟨x1 , x2 , · · · , xn ⟩ such that

    maximize    Σ_{i=1}^{n} pi xi
    subject to  Σ_{i=1}^{n} wi xi ≤ m
    where       0 ≤ xi , integer
As before, the unit profit value can be chosen as the greedy choice, and the following
greedy Algorithm 4.9 can be derived.
Algorithm 4.9. Greedy unbounded integer knapsack

unbounded-integer-knapsack-greedy(A, m)
  for i = 1 to n . . . . . . . . . . . . . . . . . . . . 1
    ui = pi /wi . . . . . . . . . . . . . . . . . . . . . 2
  A = sort(A, desc) by U . . . . . . . . . . . . . . . . 3
  X1∼n = 0’s . . . . . . . . . . . . . . . . . . . . . . 4
  T = 0 . . . . . . . . . . . . . . . . . . . . . . . . 5
  for i = 1 to n . . . . . . . . . . . . . . . . . . . . 6
    if m ≥ wi . . . . . . . . . . . . . . . . . . . . . 7
      xi = ⌊m/wi ⌋ . . . . . . . . . . . . . . . . . . . 8
      m = m − wi × xi . . . . . . . . . . . . . . . . . 9
      T = T + pi × xi . . . . . . . . . . . . . . . . . 10
  return X and/or T . . . . . . . . . . . . . . . . . . 11
Theorem 4.7. Algorithm 4.9 does not find the optimal solution.

Proof. Consider the following counter example where m = 10:

    weight W        1    4    5
    profit P        1   18   20
    unit profit U   1  4.5    4

While the greedy Algorithm 4.9 returns a solution of profit 38 where X = ⟨2, 2, 0⟩, there
exists another solution of higher profit 39 where X = ⟨1, 1, 1⟩. An optimal solution has
profit 40, where X = ⟨0, 0, 2⟩. □
The computational time complexity of the greedy Algorithm 4.9 is again O(n log n) as
it involves sorting.
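A Python sketch of Algorithm 4.9 follows (illustrative naming):

def unbounded_knapsack_greedy(items, m):
    """Greedy unbounded integer knapsack, a sketch of Algorithm 4.9.

    Takes as many copies of each item as fit, in non-increasing order
    of unit profit. As Theorem 4.7 shows, this is not always optimal.
    """
    total = 0
    counts = []
    for p, w in sorted(items, key=lambda a: a[0] / a[1], reverse=True):
        x = m // w                # copies of this item that still fit
        m -= w * x
        total += p * x
        counts.append(((p, w), x))
    return total, counts

missiles = [(1, 1), (18, 4), (20, 5)]     # (point, energy) from Figure 4.12
unbounded_knapsack_greedy(missiles, 10)   # profit 38; the optimum is 40.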

4.2.6 Rod cutting

(a) A sample input:

length W    5    4    3    2    1
profit P    8    7    5    4    1
unit U    1.6  1.8  1.7    2    1

(b) Seven different ways of cutting, with total profits:

p5 = 8                                  → 8
p4 + p1 = 7 + 1                         → 8
p3 + p2 = 5 + 4                         → 9
p3 + p1 + p1 = 5 + 1 + 1                → 7
p2 + p2 + p1 = 4 + 4 + 1                → 9
p2 + p1 + p1 + p1 = 4 + 1 + 1 + 1       → 7
p1 + p1 + p1 + p1 + p1 = 1 × 5          → 5

Figure 4.13: Rod cutting of size 5

Given a rod of length n units and the price of all 1 ∼ k unit length pieces, find the
most profitable cutting of the rod. This rod cutting problem, or simply RCP, which appears
in [42, p 360], is a special case of the general unbounded integer knapsack maximization
Problem 4.6 where wi = i. It is formally defined as follows:

Problem 4.7. Rod cutting

Input: A sequence P1∼k and a rod of length n ∈ N
Output: R(n) = Σ_{i=1}^{k} pi xi and/or X = ⟨x1 , x2 , · · · , xk ⟩ such that

    maximize    Σ_{i=1}^{k} pi xi
    subject to  Σ_{i=1}^{k} i xi = n
    where       0 ≤ xi integer
Suppose that there are five different unit-length prices (k = 5), as shown in Figure 4.13
(a). Note that pi is the price of a rod of length i. There are 7 different ways to cut the rod
of length (n = 5), as shown in Figure 4.13 (b). The most profitable way is the third one,
which consists of one rod of length 3 and one rod of length 2, with a total profit of 9. The
fifth one is also a most profitable way, but the problem is to find one optimal solution. A
naı̈ve algorithm would be to generate all possible ways and find a maximum profit. This
would take exponential time, as the number of ways to cut a rod of length n is the integer
partition number. A greedy algorithm similar to Algorithm 4.9 can be devised; its pseudo
code and incorrectness proof are left for exercises.

4.2.7 Classical Optimization and Related Problems


Classical but captivating optimization problems such as the postage stamp related prob-
lems, variants of the knapsack problem from [97], and subset arithmetic related problems

are summarized in Tables 4.3 and 4.4 on pages 212 and 213, respectively. The problems
that are not covered here shall appear in the forthcoming chapters, and greedy algorithms
for some problems are left for exercises.

4.3 Scheduling Problems


While scheduling problems are of great interest in many industries such as business, eco-
nomics, and transportation, they also frequently appear in operating systems, which makes
them important to computer scientists. Among the many types of scheduling problems
in [135], some basic ones are introduced in this section. Well devised greedy algorithms
solve some scheduling problems.

4.3.1 Activity Selection Problem

activity A 1 2 3 4 5 6 7 8 9 10 11
start S 7 5 2 12 0 2 8 1 6 3 9
finish F 12 8 6 14 7 13 11 5 10 9 12
select X 0 1 0 1 0 0 1 1 0 0 0
(a) A sample input
(b) An optimal solution in a time schedule plot (interval diagram over the time axis 0 ∼ 14).


activity A 5 8 3 6 10 2 9 1 7 11 4
start S 0 1 2 2 3 5 6 7 8 9 12
finish F 7 5 6 13 9 8 10 12 11 12 14
select X 1 0 0 0 0 0 0 1 0 0 1
(c) a non-optimal solution by the earliest-start-time greedy algorithm
activity A 8 3 5 2 10 9 7 1 11 6 4
start S 1 2 0 5 3 6 8 7 9 2 12
finish F 5 6 7 8 9 10 11 12 12 13 14
select X 1 0 0 1 0 0 1 0 0 0 1
(d) an optimal solution by the earliest-finish-time greedy Algorithm 4.10
activity A 5 8 3 6 10 2 9 1 7 11 4
start S 0 1 2 2 3 5 6 7 8 9 12
finish F 7 5 6 13 9 8 10 12 11 12 14
select X 0 1 0 0 0 1 0 0 0 1 1
(e) an optimal solution by the latest-start-time-first greedy Algorithm 4.10

Figure 4.14: Activity selection example

Given a set of activities represented by their start and finish times, ai = (si , fi ), the

activity selection problem, or simply ASP, is to select the maximum number of mutually
compatible activities. Two activities are compatible or non-conflicting if their intervals do
not overlap. This problem can be formulated as a maximization problem as follows:

Problem 4.8. Activity selection problem

Input: A list A1∼n of n different activities, where each activity is represented by
       its start and finish time ai = (si , fi ).
Output: X1∼n such that

    maximize    Σ_{i=1}^{n} xi
    subject to  ∀i, j ∈ {1 ∼ n} with i ≠ j: (fi xi ≤ sj xj ∨ si xi ≥ fj xj )
    where       xi = 0 or 1
For the toy input example in Figure 4.14 (a), the maximum number of compatible
activities is 4, and there are two possible sequences: ⟨8, 2, 7, 4⟩ and ⟨8, 2, 11, 4⟩, as given in
Figure 4.14 (d) and (e), respectively. To come up with a greedy algorithm, a greedy choice
must be made. Suppose the earliest start time is chosen as the greedy choice: first sort the
activities by their start times and add a′i to the solution as long as it is compatible with the
previously added one. This algorithm is illustrated in Figure 4.14 (c); only three activities
are selected while better solutions exist. Clearly, this greedy choice fails to find an optimal
solution.
Suppose the finish time is chosen as the greedy choice instead: sort the activities by their
finish times and add a′i to the solution as long as it is compatible with the previously added
one. A pseudo-code is given as follows:

Algorithm 4.10. Greedy activity selection by the earliest-finish-time

greedy-activity-selection(A)
  A′ = sort(A, asc) by F . . . . . . . . . . . . . . . . 1
  O = {a′1 } . . . . . . . . . . . . . . . . . . . . . . 2
  cf = f1′ . . . . . . . . . . . . . . . . . . . . . . . 3
  for i = 2 to n . . . . . . . . . . . . . . . . . . . . 4
    if s′i ≥ cf . . . . . . . . . . . . . . . . . . . . 5
      O = O ∪ {a′i } . . . . . . . . . . . . . . . . . . 6
      cf = fi′ . . . . . . . . . . . . . . . . . . . . . 7
  return O . . . . . . . . . . . . . . . . . . . . . . . 8

Algorithm 4.10 is illustrated in Figure 4.14 (d). This earliest-finish-time greedy choice
gives an optimal solution, but a formal correctness proof is still required.

Theorem 4.8. The greedy Algorithm 4.10 correctly finds an optimal solution.

Proof. Let A′1∼k be the solution provided by the greedy Algorithm 4.10. Suppose A′1∼k is
not optimal, i.e., there exists another solution O1∼m ≠ A′1∼k such that m > k. Assume,
without loss of generality, that A′1∼k and O1∼m are sorted in ascending order; as the activities
in a solution are mutually compatible, a solution can be sorted by either starting or finishing
time. Let x.s and x.f be the starting and finishing times of activity x. We first prove the
following eqn (4.6) by induction:

    ∀i ∈ {1 ∼ k}, a′i .f ≤ oi .f    (4.6)

Basis case: For i = 1, a′1 .f ≤ o1 .f is obvious, since the greedy algorithm always takes the
activity whose finishing time is the earliest.
Inductive case: Assume a′i .f ≤ oi .f is true for 1 ≤ i < k, and show a′i+1 .f ≤ oi+1 .f .
Since a′i .f ≤ oi .f ≤ oi+1 .s ≤ oi+1 .f , the activity oi+1 is still available when the greedy
algorithm selects a′i+1 . If a′i+1 .f > oi+1 .f , the greedy algorithm would have chosen oi+1
instead, contradicting the greedy choice of Algorithm 4.10. Hence, eqn (4.6) is true.
Now, consider the activity ok+1 , which is in O1∼m but not in A′1∼k . By eqn (4.6), a′k .f ≤
ok .f ≤ ok+1 .s, so ok+1 is compatible with every activity in A′1∼k . The greedy Algorithm 4.10
would then have selected ok+1 , but ok+1 ∉ A′1∼k . This is a contradiction. Therefore, A′1∼k
is a correct optimal solution. □
Since Algorithm 4.10 requires sorting and the remaining selection parts take linear time,
the computational time complexity is O(n log n) + Θ(n) = O(n log n).
There is another successful greedy algorithm. While greedily choosing the earliest start
time fails to find an optimal solution, the latest-start-time-first algorithm does find one.
First, sort by starting time, and then repeatedly select the compatible activity with the
latest start time; the table is filled from right to left, as illustrated in Figure 4.14 (e). A
pseudo-code utilizing the latest-start-time-first greedy choice is given as follows:
Algorithm 4.11. Greedy activity selection by the latest-start-time-first

greedy-activity-selection(A)
  A′ = sort(A, asc) by S . . . . . . . . . . . . . . . . 1
  Declare T1∼n , initially all 0’s . . . . . . . . . . . 2
  tn = 1 . . . . . . . . . . . . . . . . . . . . . . . . 3
  cs = s′n . . . . . . . . . . . . . . . . . . . . . . . 4
  for i = n − 1 down to 1 . . . . . . . . . . . . . . . 5
    if fi′ ≤ cs . . . . . . . . . . . . . . . . . . . . 6
      ti = 1 . . . . . . . . . . . . . . . . . . . . . . 7
      cs = s′i . . . . . . . . . . . . . . . . . . . . . 8
  return T or Σ_{i=1}^{n} ti . . . . . . . . . . . . . . 9

A correctness proof similar to that of Algorithm 4.10 in Theorem 4.8 can be applied to
Algorithm 4.11, but a different line of reasoning is provided in Theorem 4.9.
Theorem 4.9. The greedy Algorithm 4.11 correctly finds an optimal solution.
Proof. Imagine a strange universe where time runs backward. Start and finish times
in our universe become finish and start times in the reverse-time universe. The solution
found by Algorithm 4.10 in the reverse-time universe is identical to the solution found by
Algorithm 4.11 in our universe. □
The computational time complexity of Algorithm 4.11 is the same as that of Algo-
rithm 4.10.

4.3.2 Task Distribution with a Minimum Number of Processors


Consider that the activities in the previous activity selection Problem 4.8 are tasks to com-
plete. If there is only one processor or worker, not all tasks can be completed, as some tasks
are incompatible. If one hires n workers, one per task, all of the tasks can be done in parallel.
A greedy boss, however, would like to hire the minimum number
172 CHAPTER 4. GREEDY ALGORITHM

of workers needed to complete all tasks by distributing tasks into subsets of compatible
tasks. A toy example of this minimum number of processors, or MNP, problem is given in
Figure 4.15. At a given time, x = 3 in the time table in Figure 4.15 (c), there are three
incompatible tasks and, thus, at least three workers are necessary to complete all tasks in
parallel.

task T    1  2  3  4  5  6  7  8
start S   6  5  2  8  0  1  6  3
finish F  8  8  5  9  3  5  9  5
(a) a sample input

task T'   5  6  3  8  2  1  7  4
start S'  0  1  2  3  5  6  6  8
finish F' 3  5  5  5  8  8  9  9
P1        1  0  0  1  1  0  0  0
P2        0  1  0  0  0  1  0  1
P3        0  0  1  0  0  0  1  0
(b) a sample output

P1: tasks 5 [0, 3], 8 [3, 5], 2 [5, 8];  P2: tasks 6 [1, 5], 1 [6, 8], 4 [8, 9];  P3: tasks 3 [2, 5], 7 [6, 9]
(c) time table for distributing compatible tasks over the time axis 0 ∼ 9

Figure 4.15: Task scheduling with minimum processors

Given a set of tasks represented by their start and finish times, t_i = (s_i, f_i), the minimum
number of processors, or simply MNP, problem is to assign tasks to processors such that the
number of processors is minimized and each processor has only compatible tasks. One possible
output format is given in Figure 4.15 (b) using k different 0-1 vectors. This problem can be
formulated as a minimization problem as follows:

Problem 4.9. Minimum number of processors


Input: A list T1∼n of n different tasks, where each task is represented by its start
and finish time ti = (si , fi ).
Output: k and/or P_{1∼k,1∼n} such that

minimize k
subject to ∀j ∈ {1 ∼ n}, Σ_{i=1}^{k} p_{i,j} = 1        (4.7)
and ∀x ∈ {1 ∼ k}, ∀i, j ∈ {1 ∼ n} (p_{x,i} = p_{x,j} = 1 ∧ i ≠ j → f_i ≤ s_j ∨ s_i ≥ f_j)
where p_{i,j} = 0 or 1

The first constraint in eqn (4.7) states that each task must be performed by exactly one
processor, and the second constraint states that each processor may contain only pairwise
compatible tasks.

A greedy algorithm is to start by hiring a single worker and assigning the task with the
earliest start time. If the next task is compatible with some hired worker's tasks, no hiring
occurs and the task is assigned to an available worker. However, if it is incompatible with
every hired worker's tasks, another worker is hired. This simple greedy algorithm is stated
as follows:

Algorithm 4.12. Greedy minimum number of processors

greedy-MNP(T)
T' = sort(T, asc) by S . . . . . . . . . . . . . . . 1
P_1 = {t'_1} . . . . . . . . . . . . . . . . . . . . 2
pcf_1 = f'_1 . . . . . . . . . . . . . . . . . . . . 3
k = 1 . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . 5
  j = 1 . . . . . . . . . . . . . . . . . . . . . . 6
  while j ≤ k and s'_i < pcf_j . . . . . . . . . . . 7
    j = j + 1 . . . . . . . . . . . . . . . . . . . 8
  if j ≤ k . . . . . . . . . . . . . . . . . . . . . 9
    P_j = P_j ∪ {t'_i} . . . . . . . . . . . . . . 10
    pcf_j = f'_i . . . . . . . . . . . . . . . . . 11
  else . . . . . . . . . . . . . . . . . . . . . . 12
    k = k + 1 . . . . . . . . . . . . . . . . . . 13
    P_k = {t'_i} . . . . . . . . . . . . . . . . . 14
    pcf_k = f'_i . . . . . . . . . . . . . . . . . 15
return k and/or P_{1∼k} . . . . . . . . . . . . . . 16

Lines 6 ∼ 8 in Algorithm 4.12 search for an available processor for the ith task. If one is
found, lines 9 ∼ 11 assign the task to that processor. If not, lines 12 ∼ 15 assign the task
to a newly hired processor.

Theorem 4.10. Algorithm 4.12 correctly finds the optimal solution.

Proof. Let k be the number of processors found by the greedy Algorithm 4.12. Suppose the
answer is not optimal, i.e., there is another solution that uses o < k processors. Consider
the time x = s'_i at which the kth processor is added in Algorithm 4.12. The task t'_i overlaps
the current task on each of the other k − 1 processors, so there are k pairwise incompatible
tasks at time x. It is impossible to execute k incompatible tasks with o < k processors at
the same time. This is a contradiction. Hence, Algorithm 4.12 correctly finds the optimal
solution. □

Since Algorithm 4.12 involves sorting in line 1 and searching k processors in line 7, the
computational time complexity is O(n log n + kn).

4.3.3 Multiprocessor Scheduling


Given k processors and a set of n independent tasks represented by their running time,
find the minimum running time by distributing tasks to k processors. In this problem,
tasks are independent, i.e., the order of tasks does not matter and they can be done in
parallel. This k multiprocessor scheduling problem, or simply MPS, can be formally defined
as follows:

Input:   Task A   1   2   3   4   5   6   7   8   9   10
         time T   30  24  35  7   41  28  5   31  25  44
Output:  P1       1   0   0   1   0   1   0   0   1   0
         P2       0   0   0   0   1   0   1   0   0   1
         P3       0   1   1   0   0   0   0   1   0   0
(a) A sample input and output for MPS

P1: 44, 28, 7, 5 (= 84);  P2: 41, 30, 24 (= 95);  P3: 35, 31, 25 (= 91)
(b) Longest-running-time-first greedy Algorithm 4.13: makespan = 95

P1: 5, 25, 31, 44 (= 105);  P2: 7, 28, 35 (= 70);  P3: 24, 30, 41 (= 95)
(c) Shortest-running-time-first greedy Algorithm 4.14: makespan = 105

P1: 30, 28, 25, 7 (= 90);  P2: 44, 41, 5 (= 90);  P3: 35, 31, 24 (= 90)
(d) Optimal solution: makespan = 90

Figure 4.16: Distributing n tasks to k multiprocessors

Problem 4.10. Multiprocessor scheduling


Input: A list A of n independent tasks, where each task is represented by its running
time T = {t_1, · · · , t_n}, and k, the number of processors
Output: P_{1∼k,1∼n} and/or makespan = max_{i∈{1∼k}} Σ_{j=1}^{n} p_{i,j} t_j such that

minimize  max_{i∈{1∼k}} Σ_{j=1}^{n} p_{i,j} t_j
subject to ∀j ∈ {1 ∼ n}, Σ_{i=1}^{k} p_{i,j} = 1
where p_{i,j} = 0 or 1

A sample toy input and its output are given in Figure 4.16 (a). The total elapsed time
to complete all tasks by a certain algorithm is called the makespan.
One may choose the longest running time task first and assign it to the processor with
the lowest total time. This longest-running-time-first greedy algorithm is illustrated in
Figure 4.16 (b) and is stated in Algorithm 4.13. The computational time complexity of
Algorithm 4.13 is O(n log n + kn). The makespan of Algorithm 4.13 is 95, as given in
Figure 4.16 (b), whereas an optimal solution's makespan is 90, as given in Figure 4.16 (d).

Algorithm 4.13. Longest first greedy

longest-first-MPS(A_{1∼n}, k)
A' = sort(A, desc) . . . . . . . . . . . . . . . . . 1
P_{1∼k} = ∅ . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . 3
  c = 1 . . . . . . . . . . . . . . . . . . . . . . 4
  for j = 2 ∼ k . . . . . . . . . . . . . . . . . . 5
    if Σ_{x∈P_j} x < Σ_{x∈P_c} x, c = j . . . . . . 6
  P_c = P_c ∪ {a'_i} . . . . . . . . . . . . . . . . 7
return max_{i=1∼k}(Σ_{x∈P_i} x) or P_{1∼k} . . . . . 8

Algorithm 4.14. Shortest first greedy

shortest-first-MPS(A_{1∼n}, k)
P_{1∼k} = ∅ . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . 2
  c = argmin(A_{i∼n}) . . . . . . . . . . . . . . . 3
  swap(a_i, a_c) . . . . . . . . . . . . . . . . . . 4
  c = argmin_{j=1∼k}(Σ_{x∈P_j} x) . . . . . . . . . 5
  P_c = P_c ∪ {a_i} . . . . . . . . . . . . . . . . 6
return max_{i=1∼k}(Σ_{x∈P_i} x) or P_{1∼k} . . . . . 7

Instead of the longest-running-time-first, one may select the shortest running time first
as a greedy choice. This shortest-running-time-first greedy algorithm is identical to Algo-
rithm 4.13 except that the first line changes to sort in ascending order. However, a pseudo
code without sorting, which searches for the shortest remaining job in each iteration, is given
in Algorithm 4.14. Lines 4 ∼ 6 in Algorithm 4.13 are equivalent to line 5 in Algorithm 4.14.
The output of Algorithm 4.14 is given in Figure 4.16 (c) and its makespan is 105, which is
worse than the first greedy Algorithm 4.13 for this particular toy example. In either case,
the greedy algorithms do not solve the problem optimally, but provide approximate solutions.
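A minimal Python sketch of the longest-running-time-first greedy (Algorithm 4.13), run on the toy input of Figure 4.16; the function name is an assumption of the sketch.

def longest_first_mps(times, k):
    # Assign each task, longest first, to the currently least-loaded processor.
    loads = [0] * k
    assignment = [[] for _ in range(k)]
    for t in sorted(times, reverse=True):
        c = loads.index(min(loads))      # least-loaded processor
        assignment[c].append(t)
        loads[c] += t
    return max(loads), assignment        # makespan and the distribution

makespan, dist = longest_first_mps([30, 24, 35, 7, 41, 28, 5, 31, 25, 44], 3)
print(makespan)                          # 95, versus the optimal 90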

4.3.4 Bin Packing

Input:   i    1  2  3  4  5
         a_i  2  7  4  5  1      m = 7
Output:  B1   1  0  1  0  0
         B2   0  1  0  0  0
         B3   0  0  0  1  1
         B4   0  0  0  0  0
         B5   0  0  0  0  0
(a) Bin packing example: three bins of capacity m = 7 packed as {2, 4}, {7}, {5, 1}   (b) input & output representation

Figure 4.17: Bin packing problem example.

Suppose that each worker is not allowed to work more than m hours, e.g., m = 8. Given
a set of n independent tasks represented by their running times, a greedy boss would like
to hire the minimum number of workers to complete all tasks. Each task's running time,
t_i, is assumed to be less than or equal to m without loss of generality. Since the problem
can be analogized to distributing items of different lengths into fixed size bins, as depicted
in Figure 4.17 (a), this bounded partition problem is conventionally known as the bin
packing problem, or simply BPP, and is defined formally as follows:
Problem 4.11. Bin packing
Input: A list A of n various length items and m, the uniform bin capacity
Output: B_{1∼n} and/or Σ_{j=1}^{n} u(B_j) such that

minimize  Σ_{j=1}^{n} u(B_j)
subject to  ∀i ∈ {1 ∼ n} ∃!j ∈ {1 ∼ n}, a_i ∈ B_j
         ∧  ∀j ∈ {1 ∼ n}, Σ_{i=1}^{n} a_i b_{j,i} ≤ m
where b_{j,i} = 0 or 1 and u(B_j) = { 0 if B_j = ∅; 1 otherwise }

The function u(B_j) is 1 if the bin is used and 0 if not. b_{j,i} = 1 means that the ith
item, a_i, is assigned to the jth bin, B_j. The notation ∃! stands for the unique existential
quantification and means 'there exists exactly one.' The first constraint, ∀i ∈ {1 ∼ n} ∃!j ∈
{1 ∼ n}, a_i ∈ B_j, indicates that each element in A is assigned to exactly one bin and
implicitly implies that the B_j's partition A: A = ∪_{j=1}^{n} B_j. The second constraint,
∀j ∈ {1 ∼ n}, Σ_{i=1}^{n} a_i b_{j,i} ≤ m, indicates that each bin cannot contain more than
m in total length. A sample toy input and a possible output, where only three bins are
used, are given in Figure 4.17.
So as to design a greedy algorithm, one may sort the tasks by their length in descending
order and place each task into the feasible bin with the least remaining capacity. A pseudo
code is stated as follows:

Algorithm 4.15. best-fit-decreasing BPP

BFD-BPP(A_{1∼n}, m)
A' = sort(A, desc) . . . . . . . . . . . . . . . . . . . 1
b_{1∼n} = 0 . . . . . . . . . . . . . . . . . . . . . . 2
c = 0 . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . 4
  p = 0 . . . . . . . . . . . . . . . . . . . . . . . . 5
  for j = 1 ∼ n . . . . . . . . . . . . . . . . . . . . 6
    if b_j + a'_i ≤ m and (p = 0 or b_j > b_p), p = j . 7
  if b_p = 0, c = c + 1 . . . . . . . . . . . . . . . . 8
  b_p = b_p + a'_i . . . . . . . . . . . . . . . . . . . 9
return c and/or B_{1∼n} . . . . . . . . . . . . . . . . 10

The greedy approximate Algorithm 4.15 is referred to as ‘Best fit decreasing’ algorithm
in [90], and its computational time complexity is Θ(n2 ).
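A Python sketch of the best-fit-decreasing strategy (names are mine), run on the counter-example data used in Theorem 4.11 below:

def best_fit_decreasing(items, m):
    # Pack items (each <= m) into bins of capacity m: each item goes into
    # the feasible bin with the least remaining room, else a new bin.
    bins, contents = [], []
    for a in sorted(items, reverse=True):
        best = None
        for j, load in enumerate(bins):
            if load + a <= m and (best is None or load > bins[best]):
                best = j                 # fullest bin that still fits a
        if best is None:
            bins.append(a)               # open a new bin
            contents.append([a])
        else:
            bins[best] += a
            contents[best].append(a)
    return contents

print(best_fit_decreasing([7, 5, 4, 4, 3, 3, 2, 2], 10))
# 4 bins: [[7, 3], [5, 4], [4, 3, 2], [2]]; the optimum needs only 3.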

Theorem 4.11. Algorithm 4.15 does not always find an optimal solution.

Proof. Consider a counter-example where A = ⟨7, 5, 4, 4, 3, 3, 2, 2⟩ and m = 10. The best-fit-
decreasing greedy Algorithm 4.15 uses four bins, as illustrated in Figure 4.18 (a). However,
a better solution, indeed an optimal one, requires only three bins, as given in Figure 4.18
(b). □

Numerous greedy algorithms, as enumerated in [37, 90], have been suggested without
success.

B1 = {7, 3},  B2 = {5, 4},  B3 = {4, 3, 2},  B4 = {2}        B1 = {7, 3},  B2 = {5, 3, 2},  B3 = {4, 4, 2}

(a) Best-fit-decreasing Algorithm 4.15: 4 bins               (b) Optimal solution: 3 bins

Figure 4.18: Bin packing toy sample results

4.3.5 Job Scheduling with Deadline


Consider a set A of n tasks, where each task takes a unit time. There are only k unit
time slots available and, thus, at most k tasks can be done. Each task consists of its profit
and deadline: a_i = (p_i, d_i). If the task a_i is completed by its deadline, d_i, it pays the
profit p_i. This job scheduling with deadline, or simply JSD, problem is to maximize the
total profit by selecting and scheduling a subset of tasks that can be done before their
respective deadlines. The output is an ordered set, S, which is a k-permutation of A. Let
s_i.p and s_i.d be the profit and deadline of the ith task in S. A sample input and
output of a toy example is given in Figure 4.19 and JSD is formally defined as follows:

Problem 4.12. Job scheduling with deadline (JSD)


Input: A list A1∼n of n different tasks, where each task is represented by its profit
and deadline: ai = (pi , di ) and a positive integer k, which is the number of
slots.
Output: S_{1∼k} such that

maximize  Σ_{i=1}^{k} s_i.p
subject to ∀i ∈ {1 ∼ k}, s_i.d ≥ i
where S = a k-permutation of A
A naïve greedy Algorithm 4.6 on page 163 for the 0-1 knapsack Problem 4.4 that utilizes
the highest-profit-first greedy choice could be used for this JSD problem. The output of
this naïve greedy algorithm would be S' = ⟨2, 5, 1⟩ and its total profit is 18, as illustrated in
Figure 4.19 (c). As given in Figure 4.19 (b), the optimal solution is S = ⟨2, 3, 5⟩, whose total
profit is 20. Unlike the 0-1 knapsack Problem 4.4, the JSD problem output set is an ordered
set instead of an unordered set. Hence, a slightly different greedy algorithm, which takes
the order into account, can be designed. It utilizes the as-last-minute-as-possible greedy
choice in addition to the highest-profit-first greedy choice. Instead of including a task into
a solution set, include it as close to the deadline as possible. It is illustrated in Figure 4.19
(d). In the second step, the task a5 is placed in the 3rd slot instead of the 2nd slot. In
this way, the task a3 can be placed in the second slot. Otherwise, the task a1 with lower
profit would be selected instead of the task a3 with higher profit. A pseudo code is stated
as follows:

task A      1  2  3  4  5        time T      1  2  3
profit P    3  8  5  6  7        solution S  2  3  5
deadline D  3  1  2  1  3        profit P    8  5  7
(a) A sample input               (b) A sample output of total profit 20

After sorting by profit: A' = ⟨2 (8,1), 5 (7,3), 4 (6,1), 3 (5,2), 1 (3,3)⟩
step 1: task 2 → slot 1                           S = ⟨2, -, -⟩
step 2: task 5 → slot 2 (first free slot)         S = ⟨2, 5, -⟩
step 3: task 4 dropped (slot 1 is taken)
step 4: task 3 dropped (slots 1 and 2 are taken)
step 5: task 1 → slot 3                           S' = ⟨2, 5, 1⟩, total profit 18
(c) A naïve greedy algorithm similar to Algorithm 4.6

step 1: task 2 → slot 1 (its deadline)            S = ⟨2, -, -⟩
step 2: task 5 → slot 3 (its deadline)            S = ⟨2, -, 5⟩
step 3: task 4 dropped (slot 1 is taken)
step 4: task 3 → slot 2 (its deadline)            S = ⟨2, 3, 5⟩, total profit 20
(d) Greedy Algorithm 4.16 illustration

Figure 4.19: Job scheduling with deadline toy example where n = 5 and k = 3.

Algorithm 4.16. last minute greedy JSD

JSD(A_{1∼n}, k)
A' = sort(A, desc) by P . . . . . . . . . . . . . . 1
S_{1∼k} = 0 . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . 3
  j = min(a'_i.d, k) . . . . . . . . . . . . . . . . 4
  while j > 0 and s_j ≠ 0 . . . . . . . . . . . . . 5
    j = j − 1 . . . . . . . . . . . . . . . . . . . 6
  if j > 0, s_j = a'_i . . . . . . . . . . . . . . . 7
return S and/or Σ_{i=1}^{k} s_i.p . . . . . . . . . 8

A proof that Algorithm 4.16 finds an optimal solution can be found in [26, p 208]. Sorting
in line 1 takes O(n log n) and the loop in lines 3 ∼ 7 takes O(kn), as each task needs to find
a possible position in the solution ordered list. Hence, the computational time complexity
of Algorithm 4.16 is O(n log n + kn).
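A Python sketch of the last-minute greedy choice of Algorithm 4.16, run on the toy input of Figure 4.19; the names are illustrative.

def jsd(tasks, k):
    # tasks: list of (profit, deadline); schedule at most k unit-time
    # tasks, highest profit first, each as late as its deadline allows.
    slots = [None] * k
    for i in sorted(range(len(tasks)), key=lambda i: -tasks[i][0]):
        j = min(tasks[i][1], k) - 1      # latest feasible slot (0-based)
        while j >= 0 and slots[j] is not None:
            j -= 1                       # slide toward earlier free slots
        if j >= 0:
            slots[j] = i
    return slots, sum(tasks[i][0] for i in slots if i is not None)

print(jsd([(3, 3), (8, 1), (5, 2), (6, 1), (7, 3)], 3))
# tasks 2, 3, 5 in slots 1, 2, 3 (0-indexed [1, 2, 4]) with total profit 20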

4.4 Graph Problems


In this section, graph related problems are introduced to master the greedy algorithm
design skill. First, a graph representation is briefly reviewed. A couple of canonical graph
problems where the greedy approach works are presented. They are the minimum spanning
tree and shortest path problems. Finally, a couple of famous graph optimization problems
where the greedy approach fails to find an optimal solution are presented. They include

the traveling salesman and vertex cover problems. While the vertex cover problem takes a
simple unweighted graph as an input, the other problems take a weighted graph.

4.4.1 Graph Representation


A graph G = (V, E) is an ordered pair of a set V of vertices and a set E of edges. There
are seven vertices and ten edges in the graph in Figure 4.20 (a). An edge (v_x, v_y) is associated
with two vertices, v_x and v_y. A drawing of a graph, such as the one in Figure 4.20 (a), is for
the human visual system. To represent a graph in a computer, there are two common
representations: the adjacency matrix and the adjacency list, as given in Figure 4.20 (b)
and (c), respectively.

      v1 v2 v3 v4 v5 v6 v7
v1  [ 0  1  1  0  1  0  0 ]        v1 → {v2, v3, v5}
v2  [ 1  0  1  1  0  0  0 ]        v2 → {v1, v3, v4}
v3  [ 1  1  0  0  0  1  1 ]        v3 → {v1, v2, v6, v7}
v4  [ 0  1  0  0  0  1  0 ]        v4 → {v2, v6}
v5  [ 1  0  0  0  0  0  1 ]        v5 → {v1, v7}
v6  [ 0  0  1  1  0  0  1 ]        v6 → {v3, v4, v7}
v7  [ 0  0  1  0  1  1  0 ]        v7 → {v3, v5, v6}

(a) a sample graph (drawing omitted)   (b) adjacency matrix   (c) adjacency list

Figure 4.20: A sample graph representation

Two vertices are adjacent if there exists an edge between them. The set of vertices is
considered an ordered set: ⟨v1, v2, · · · , vn⟩. In an adjacency (n × n) matrix, M, with
respect to the order of vertices, m_{i,j} = 1 if (v_i, v_j) ∈ E and m_{i,j} = 0 otherwise. The
adjacency matrix representation requires Θ(n²) space, where the number of vertices |V| = n.
If a graph is sparse, meaning that there are very few edges, an adjacency list may be more
space efficient. An adjacency list is a one dimensional array of lists where the order of the
array corresponds to the order of vertices. For each vertex, all adjacent vertices are associated
with the vertex as a list. Checking whether there is an edge between v_x and v_y takes
constant time in an adjacency matrix, while it may take O(n) time in an adjacency list.
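The trade-off can be seen in a few lines of Python; the sketch below builds both representations for the graph of Figure 4.20, with vertices 0-indexed as an assumption of the sketch.

n = 7
edges = [(0, 1), (0, 2), (0, 4), (1, 2), (1, 3),
         (2, 5), (2, 6), (3, 5), (4, 6), (5, 6)]

# Adjacency matrix: Theta(n^2) space, O(1) edge lookup.
matrix = [[0] * n for _ in range(n)]
for x, y in edges:
    matrix[x][y] = matrix[y][x] = 1      # undirected, hence symmetric

# Adjacency list: Theta(n + |E|) space, O(deg) edge lookup.
adj = [[] for _ in range(n)]
for x, y in edges:
    adj[x].append(y)
    adj[y].append(x)

print(matrix[0][1] == 1)                 # constant time membership test
print(1 in adj[0])                       # O(deg(v1)) membership test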

4.4.2 Vertex Cover


A vertex cover of a graph is a subset, V_c, of the set of all vertices, V, such that each edge
of the graph is incident to at least one vertex in V_c. An edge is said to be incident to a
vertex v_x if one of the two vertices that compose the edge is v_x. Figure 4.21 shows some
toy examples to give a sense of the vertex cover. Three vertices {v1, v2, v5} cover all edges of
the graph in Figure 4.21 (a). There are five edges and every edge contains one of these three
vertices, as shown in Figure 4.21 (b): {(v1, v2), (v1, v5), (v2, v3), (v2, v5), (v4, v5)}. Two
vertices {v2, v5} also cover all of these edges, as shown in Figure 4.21 (c). It is the minimum
vertex cover because no subset of fewer than two vertices covers all edges. Figure 4.21 (d)
∼ (f) show a graph in which there is more than one minimum vertex cover.
Finding a minimum vertex cover is called a vertex cover problem, or simply VCP, and
it is defined formally as follows:

      v1 v2 v3 v4 v5
v1  [ 0  1  0  0  1 ]
v2  [ 1  0  1  0  1 ]
v3  [ 0  1  0  0  0 ]
v4  [ 0  0  0  0  1 ]
v5  [ 1  1  0  1  0 ]
(a) A sample adjacency matrix   (b) (c = 3) {v1, v2, v5}   (c) (c = 2) {v2, v5}

      v1 v2 v3 v4 v5
v1  [ 0  1  1  1  0 ]
v2  [ 1  0  0  1  0 ]
v3  [ 1  0  0  1  1 ]
v4  [ 1  1  1  0  0 ]
v5  [ 0  0  1  0  0 ]
(d) A sample adjacency matrix   (e) (c = 3) {v1, v3, v4}   (f) (c = 3) {v1, v2, v3}

Figure 4.21: Vertex cover examples

Problem 4.13. Vertex cover


Input: A graph G = (V, E)
Output: Vc ⊂ V such that minimize |Vc |
where ∀(vx , vy ) ∈ E, ∃vz ∈ Vc , (vx = vz ∨ vy = vz ).

Straight from the problem definition comes a naïve exponential time algorithm that generates
and tests each subset of vertices. It takes exponential time because the number of proper
subsets is exponential (Σ_{k=0}^{n−1} (n choose k) = O(2^n)) and testing each subset of size
k takes O(kn²).

[Graph drawings omitted. Each panel shows the remaining graph; the number written beside
each vertex is its current degree, and the highest-degree vertex is picked next.]

(a) V_c = {v8, · · · }   (b) V_c = {v8, v4, · · · }   (c) V_c = {v8, v4, v1, · · · }
(d) V_c = {v8, v4, v1, v6, · · · }   (e) V_c = {v8, v4, v1, v6}   (f) c = 4 vertex cover

Figure 4.22: Greedy Algorithm 4.17 illustration for vertex cover

In order to utilize the greedy algorithm paradigm, one must make a greedy choice. One
possible greedy choice is the degree of vertices. The degree of a vertex is the number
of adjacent vertices, or the number of edges incident to the vertex. For example, in
Figure 4.22 (a), deg(v1) = 3, deg(v2) = 4, · · · , and deg(v8) = 7. The degree of a vertex is
shown outside the node, while the vertex index is shown inside to identify the vertex.
A greedy algorithm includes the vertex with the highest degree in the vertex cover set
and eliminates the vertex together with all edges covered by it. Then it repeats the
process until all edges are covered, as illustrated in Figure 4.22. A pseudo code for this greedy
algorithm is stated as follows:
Algorithm 4.17. Highest-degree-first greedy algorithm for Vertex Cover
greedyVC(G)
V_c = ∅ . . . . . . . . . . . . . . . . . . . . . . 1
while E ≠ ∅ . . . . . . . . . . . . . . . . . . . . 2
  v_m = argmax_{v_x∈V}(deg(v_x)) . . . . . . . . . . 3
  V_c = V_c ∪ {v_m} . . . . . . . . . . . . . . . . 4
  V = V − {v_m} . . . . . . . . . . . . . . . . . . 5
  E = E − {(a, b) ∈ E | a = v_m ∨ b = v_m} . . . . . 6
return |V_c| or V_c . . . . . . . . . . . . . . . . 7
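A Python sketch of the highest-degree-first greedy choice of Algorithm 4.17, run on the edge set of the graph in Figure 4.21 (a) with 0-indexed vertices; the names are my own.

def greedy_vertex_cover(n, edges):
    # Repeatedly pick a maximum-degree vertex and drop its incident edges.
    remaining, cover = set(edges), []
    while remaining:
        deg = [0] * n
        for x, y in remaining:
            deg[x] += 1
            deg[y] += 1
        vm = deg.index(max(deg))         # highest-degree vertex
        cover.append(vm)
        remaining = {(x, y) for x, y in remaining if vm not in (x, y)}
    return cover

print(greedy_vertex_cover(5, [(0, 1), (0, 4), (1, 2), (1, 4), (3, 4)]))
# [1, 4], i.e., {v2, v5} of Figure 4.21 (c)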
In Figure 4.23, the highest-degree-first greedy Algorithm 4.17 is illustrated on three
different representations of an input graph. The V_c = {v2, v5} example in Figure 4.21
(a) can be represented by an adjacency matrix and an adjacency list; these are illustrated
in Figure 4.23 (a) and (b), respectively. It should also be evident that a graph can be
represented by an edge list. Figure 4.23 (c) illustrates finding V_c = {v1, v2, v3} by the
greedy Algorithm 4.17 when the graph in Figure 4.21 (d) is represented by an edge set.
So as to prove the incorrectness of the greedy Algorithm 4.17, all we need to do is find a
counter-example.
Theorem 4.12. The greedy Algorithm 4.17 does not always find an optimal solution for
the vertex cover problem.
Proof. Consider the counter-example in Figure 4.24. The greedy Algorithm 4.17 produces
(c ≥ 5) vertex covers, as illustrated in Figure 4.24 (a), because it always picks the center-
middle vertex first. However, the actual optimal solution is a (c = 4) vertex cover, which
does not include the center-middle vertex. It is given in Figure 4.24 (b). □

4.4.3 Minimum Spanning Tree


While an edge in the graphs of the previous vertex cover problem means only the existence
of a connection between two vertices, many other problems take a weighted graph as an input,
where each edge is associated with a respective weight value, as shown in Figure 4.25 (a). The
weight value of an edge is shown as close to the middle of the respective edge as possible. A
weighted graph can be represented easily in an adjacency matrix by replacing ones with the
respective weight values, as shown in Figure 4.25 (b). In an adjacency list, each weight must
be stored together with the corresponding adjacent vertex, as shown in Figure 4.25 (c).
A tree is a special graph without any cycle. A spanning tree, T , is a sub-graph of a
graph, G, such that T is a tree and all vertices must appear in the tree, ∀x ∈ V (x ∈ T ). A
spanning tree connects all n vertices in G with n − 1 edges in T . There are exactly n − 1
edges in a tree because every vertex has exactly one parent except for one vertex called the

(a) On the adjacency matrix of G in Figure 4.21 (a): each vertex's degree is its row sum
(deg = 2, 3, 1, 1, 3); pick v2 and zero out its row and column (deg = 1, -, 0, 1, 2); pick v5
and zero out its row and column (all degrees 0); finished with V_c = {v2, v5}.

(b) On the adjacency list of G in Figure 4.21 (a):
V = {v1, v2, v3, v4, v5}    V = {v1, v3, v4, v5}    V = {v1, v3, v4}
v1: 2 → {v2, v5}            v1: 1 → {v5}            v1: 0 → {}
v2: 3 → {v1, v3, v5}        v2: - ∈ V_c             v2: - ∈ V_c
v3: 1 → {v2}                v3: 0 → {}              v3: 0 → {}
v4: 1 → {v5}                v4: 1 → {v5}            v4: 0 → {}
v5: 3 → {v1, v2, v4}        v5: 2 → {v1, v4}        v5: - ∈ V_c
V_c = {}, pick v2           V_c = {v2}, pick v5     V_c = {v2, v5}, finished

(c) On the edge set of G in Figure 4.21 (d):
|E|  E                                                        deg(v1 ∼ v5)     pick
6    {(v1,v2), (v1,v3), (v1,v4), (v2,v4), (v3,v4), (v3,v5)}   3, 2, 3, 3, 1    v1
3    {(v2,v4), (v3,v4), (v3,v5)}                              -, 1, 2, 2, 1    v3
1    {(v2,v4)}                                                -, 1, -, 1, 0    v2
0    {}                                                       -, -, -, 0, 0    -

Figure 4.23: Greedy Algorithm 4.17 illustration of vertex covers



[Graph drawings omitted.]
(a) Possible (c ≥ 5) outputs by Algorithm 4.17   (b) (c = 4) optimal solution

Figure 4.24: A counter-example of the greedy Algorithm 4.17's incorrectness.

      v1 v2 v3 v4 v5
v1  [ 0  5  4  5  0 ]      v1 → {(v2, 5), (v3, 4), (v4, 5)}
v2  [ 5  0  6  0  9 ]      v2 → {(v1, 5), (v3, 6), (v5, 9)}
v3  [ 4  6  0  4  2 ]      v3 → {(v1, 4), (v2, 6), (v4, 4), (v5, 2)}
v4  [ 5  0  4  0  5 ]      v4 → {(v1, 5), (v3, 4), (v5, 5)}
v5  [ 0  9  2  5  0 ]      v5 → {(v2, 9), (v3, 2), (v4, 5)}

(a) weighted graph (drawing omitted)   (b) adjacency matrix   (c) adjacency list

Figure 4.25: A sample weighted graph representation

‘root’. Hence, a tree can be represented by a table of vertices, vx , and their corresponding
parent node, par(vx ), as given in Figure 4.26. A spanning tree, T , is a collection of n − 1
edges, (vx , par(vx )) where (vx , par(vx )) ∈ E. T can be considered as a subset of the edge
set, E.

[Tree drawings omitted.]

v_x  par(v_x)  w(v_x, par(v_x))    v_x  par  w(v_x, par)    v_x  par  w(v_x, par)
v1   v3        4                   v1   r    0              v1   r    0
v2   v3        6                   v2   v1   5              v2   v1   5
v3   r         0                   v3   v2   6              v3   v1   4
v4   v3        4                   v4   v1   5              v4   v3   4
v5   v3        2                   v5   v2   9              v5   v3   2
(a) ws(T_a) = 16                   (b) ws(T_b) = 25         (c) ws(T_c) = 15

Figure 4.26: Sample spanning trees and their representations

A minimum spanning tree, or simply MST, is a spanning tree of an undirected weighted
graph such that the sum of weights is minimized. Let w(u, v) be the weight of an edge
(u, v) ∈ E; w(v_x, par(v_x)) is the weight of the edge (v_x, par(v_x)) ∈ T. The weight of the
root node in T is zero. Let ws(T) be the sum of all weights of edges in T:

ws(T) = Σ_{(v_x, par(v_x)) ∈ T} w(v_x, par(v_x))        (4.8)

The sums of all edge weights of spanning trees in Figure 4.26 (a), (b), and (c) are 16, 25,

and 15, respectively, and an MST is one in Figure 4.26 (c). The problem of finding an MST
is formally defined as follows:
Problem 4.14. Minimum spanning tree
Input: A connected undirected weighted graph G = (V, E)
Output: a spanning tree T of G such that ws(T) = Σ_{v_x∈V} w(v_x, par(v_x)) is minimized

Since an MST is an unrooted tree, any vertex can be considered a root. One can start
from any vertex as a part of an MST and grow the tree greedily by selecting the next vertex,
which connects to the tree with the minimum edge. This greedy algorithm is depicted in
Figure 4.27. The vertex set, V , is a candidate set and the bold vertices are part of the
tree, which is a solution set. Suppose the greedy algorithm takes v1 as a root, as depicted
in Figure 4.27 (a). Other vertices’ parents are not decided yet and, thus, w(vx , par(vx ))
are all ∞, except for the root, w(v1 , par(v1 )) = 0. Next, assign v1 as parent nodes for all
vertices that are adjacent to v1 with their weights, as updated in Figure 4.27 (b). Next, pick
a vertex vx with the lowest w(vx , par(vx )) which is not in the MST yet and then update
par(vy ) and w(vy , par(vy )) = w(vx , vy ) if vy is adjacent to vx , not in the tree yet, and
w(vy , par(vy )) > w(vx , vy ). Continue this greedy approach until all vertices are part of
the MST. A pseudo code is stated below. Let T (vx ).p, T (vx ).w, and T (vx ).s be par(vx ),
w(vx , par(vx )) and a Boolean variable, whether vx is in the solution set or not.
Algorithm 4.18. Prim-Jarnik MST
greedyMST1(G)
declare T whose all T(v).w = ∞ and T(v).s = F . . . . . . . . . . 1
randomly pick v_x and set T(v_x).w = 0 . . . . . . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . 3
  p = 0 and m = ∞ . . . . . . . . . . . . . . . . . . . . . . . . 4
  for j = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . 5
    if T(v_j).w < m and T(v_j).s = F . . . . . . . . . . . . . . . 6
      p = j and m = T(v_j).w . . . . . . . . . . . . . . . . . . . 7
  T(v_p).s = T . . . . . . . . . . . . . . . . . . . . . . . . . . 8
  for j = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . 9
    if T(v_j).s = F and (v_p, v_j) ∈ E and T(v_j).w > w(v_p, v_j) . 10
      T(v_j).p = v_p and T(v_j).w = w(v_p, v_j) . . . . . . . . . 11
return T and/or Σ_{i=1}^{n} T(v_i).w . . . . . . . . . . . . . . . 12

Lines 4 ∼ 7 in Algorithm 4.18 find the vertex v_p with the lowest w(v_p, par(v_p)) that is
not in the MST yet, and line 8 moves it into the tree. Lines 9 ∼ 11 update the parents and
weights of all adjacent vertices, v_x, which are not in the MST yet, whenever T(v_x).w >
w(v_p, v_x). Albeit the computational time complexity can be improved with a data structure,
as in Chapter 9, the version in Algorithm 4.18 takes Θ(n²), or Θ(|V|²). The classical greedy
Algorithm 4.18 is well known as Prim's algorithm due to the work in [138], although it was
developed by Jarník earlier in 1930 [89].
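A Python sketch of the Θ(n²) Prim-Jarnik procedure above, run on the weighted graph of Figure 4.25 with 0-indexed vertices; the helper names are assumptions of the sketch.

INF = float("inf")

def prim(n, w, root=0):
    # w: dict of undirected edge (x, y) -> weight; returns (parents, ws(T)).
    weight = lambda x, y: w.get((x, y), w.get((y, x), INF))
    in_tree, best, parent = [False] * n, [INF] * n, [None] * n
    best[root], total = 0, 0
    for _ in range(n):
        p = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: best[v])   # cheapest vertex to attach next
        in_tree[p] = True
        total += best[p]
        for v in range(n):               # update candidate connections
            if not in_tree[v] and weight(p, v) < best[v]:
                best[v], parent[v] = weight(p, v), p
    return parent, total

w = {(0, 1): 5, (0, 2): 4, (0, 3): 5, (1, 2): 6,
     (1, 4): 9, (2, 3): 4, (2, 4): 2, (3, 4): 5}
print(prim(5, w))                        # total weight 15, as in Figure 4.26 (c)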
Lemma 4.1. (Cut property) Let T be a minimum spanning tree, and let an edge (u, v) ∈ T
partition all vertices in V into two parts, L and R, where R = V − L, u ∈ L, and v ∈ R.
Then,
∀x ∈ L, ∀y ∈ R ((x, y) ∈ E → w(x, y) ≥ w(u, v))

[Graph drawings omitted; the vertices already in the tree are shown in bold in the original.]

v_x  par  w       v_x  par  w       v_x  par  w
v1   r    0       v1   r    0       v1   r    0
v2   -    ∞       v2   v1   5       v2   v1   5
v3   -    ∞       v3   v1   4       v3   v1   4
v4   -    ∞       v4   v1   5       v4   v3   4
v5   -    ∞       v5   -    ∞       v5   v3   2
(a) Initialization, pick v1   (b) step 1 (v1)   (c) step 2 (v3)

v_x  par  w       v_x  par  w       v_x  par  w
v1   r    0       v1   r    0       v1   r    0
v2   v1   5       v2   v1   5       v2   v1   5
v3   v1   4       v3   v1   4       v3   v1   4
v4   v3   4       v4   v3   4       v4   v3   4
v5   v3   2       v5   v3   2       v5   v3   2
(d) step 3 (v5)   (e) step 4 (v4)   (f) step 5 (v2)

Figure 4.27: Prim-Jarnik’s Algorithm 4.18 illustration

Proof. Suppose ∃(x, y) ∈ E such that x ∈ L, y ∈ R, and w(x, y) < w(u, v). Consider the
spanning tree T' = T ∪ {(x, y)} − {(u, v)}. Then ws(T') < ws(T), which contradicts that T
is a minimum spanning tree. Therefore, ∀x ∈ L, ∀y ∈ R ((x, y) ∈ E → w(x, y) ≥ w(u, v)). □

Theorem 4.13. T produced by Prim-Jarnik’s Algorithm 4.18 is a minimum spanning tree.

Proof. Suppose that T is not a minimum spanning tree, i.e., there exists another spanning
tree, T', such that ws(T') < ws(T). Since T' − T ≠ ∅, there exists an edge (u, v) that is in
T' but not in T: (u, v) ∈ T' ∧ (u, v) ∉ T. Removing the edge (u, v) from T' partitions V
into two parts, L and R, since T' is a spanning tree. Because T is also a spanning tree, there
exists an edge (x, y) ∈ T such that x ∈ L and y ∈ R. Since T' is assumed to be a minimum
spanning tree, w(x, y) ≥ w(u, v) by the cut property Lemma 4.1. Prim-Jarnik's Algorithm 4.18
selected (x, y) as a least cost edge crossing L and R, so w(x, y) ≤ w(u, v). Thus, w(x, y) =
w(u, v). Now, consider the spanning tree T_2 = T' − {(u, v)} ∪ {(x, y)}. Since w(x, y) =
w(u, v), ws(T') = ws(T_2), and T_2 is another minimum spanning tree with |T_2 − T| =
|T' − T| − 1. If this process is repeated |T' − T| times, producing T_i at each step, the
final tree becomes T itself, and we have

ws(T') = ws(T_2) = ws(T_3) = · · · = ws(T)

This contradicts the assumption that ws(T') < ws(T).

∴ The T produced by Prim-Jarnik's Algorithm 4.18 is a minimum spanning tree. □
Another classical greedy algorithm for the MST Problem 4.14 is known as Kruskal's
algorithm and first appeared in [109]. Instead of greedily selecting an adjacent vertex with the
lowest connection cost, Kruskal's greedy Algorithm 4.19 chooses the edge with the lowest weight
first. It starts with every vertex as its own single-vertex tree in a forest and merges two trees
into one using the lowest weight edge connecting them. It stops when all vertices belong to
one single tree, and this tree is a minimum spanning tree.

[Graph drawings omitted. Edge weights: (v1,v4) = 1, (v2,v5) = 2, (v1,v3) = 3, (v3,v4) = 4,
(v4,v5) = 5, (v1,v2) = 6, (v2,v3) = 7, (v3,v5) = 9.]

Initialization:
T = {{v1}, {v2}, {v3}, {v4}, {v5}}
E' = ⟨(v1,v4), (v2,v5), (v1,v3), (v3,v4), (v4,v5), (v1,v2), (v2,v3), (v3,v5)⟩
S = ∅

Step 1. accept (v1,v4):  T = {{v1,v4}, {v2}, {v3}, {v5}},  S = {(v1,v4)}
Step 2. accept (v2,v5):  T = {{v1,v4}, {v2,v5}, {v3}},  S = {(v1,v4), (v2,v5)}
Step 3. accept (v1,v3):  T = {{v1,v3,v4}, {v2,v5}},  S = {(v1,v4), (v2,v5), (v1,v3)}
Step 4. reject (v3,v4):  both endpoints are in the same tree
Step 5. accept (v4,v5) & finish:  T = {{v1,v2,v3,v4,v5}},  S = {(v1,v4), (v2,v5), (v1,v3), (v4,v5)}

Figure 4.28: Kruskal’s Algorithm 4.19 illustration

This greedy algorithm is illustrated in Figure 4.28. First, edges are sorted by their
weights in ascending order. An edge (v_x, v_y) is accepted if its two endpoints come from
different trees, and it is rejected if they come from the same tree. If such an edge were not
rejected, it would add an extra edge within a tree and cause a cycle; recall that a tree has
no cycle. The algorithm stops when all vertices are connected, i.e., when exactly n − 1
merges occur. A pseudo code is given as follows:
given as follows:

Algorithm 4.19. Kruskal’s algorithm MST

greedyMST2(G)
declare t_{1∼n} . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n, t[v_i] = i . . . . . . . . . . . . . . . . . . . 2
S = ∅ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
E' = sort(E, 'asc')  % note that e'_j = (v'_{j,x}, v'_{j,y}) . . 4
j = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
for i = 1 ∼ n − 1 . . . . . . . . . . . . . . . . . . . . . . . 6
  while t[v'_{j,x}] = t[v'_{j,y}], j = j + 1 . . . . . . . . . . 7
  S = S ∪ {(v'_{j,x}, v'_{j,y})} . . . . . . . . . . . . . . . . 8
  temp = t[v'_{j,y}] . . . . . . . . . . . . . . . . . . . . . . 9
  for k = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . 10
    if t[v_k] = temp, t[v_k] = t[v'_{j,x}] . . . . . . . . . . . 11
return S . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Lines 1 ∼ 2 in Algorithm 4.19 create a forest of n trees, one per vertex. Lines 7 ∼ 8 in
Algorithm 4.19 find and include a valid edge in the solution set: if the two vertices incident
to the current edge come from the same tree in the forest, the edge would create a cycle and
thus is rejected. When they come from two different trees, the edge is added to the spanning
tree in line 8 and the two trees are merged in lines 9 ∼ 11, where each merge step takes linear
time. This set-merge process can be implemented much more efficiently with a data structure
called a disjoint set (see [42, p 561] for more information); here, a version without any special
data structure is given. The computational time complexity of this naïve implementation
of Kruskal's greedy Algorithm 4.19 is Θ(n²), or Θ(|V|²).
The correctness of Kruskal's Algorithm 4.19 can be proven in a very similar manner as
the one for Prim-Jarnik's Algorithm 4.18. It is left for an exercise.
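A Python sketch of Kruskal's procedure with the same naive linear-time set merge, run on the graph of Figure 4.28 with 0-indexed vertices; the names are my own.

def kruskal(n, weighted_edges):
    # weighted_edges: list of (w, x, y); returns the MST edge set S.
    t = list(range(n))                   # tree label of each vertex
    mst = []
    for w, x, y in sorted(weighted_edges):
        if t[x] != t[y]:                 # endpoints lie in different trees
            mst.append((x, y))
            old = t[y]
            for v in range(n):           # merge: relabel y's whole tree
                if t[v] == old:
                    t[v] = t[x]
            if len(mst) == n - 1:
                break
    return mst

edges = [(6, 0, 1), (3, 0, 2), (1, 0, 3), (7, 1, 2),
         (2, 1, 4), (4, 2, 3), (9, 2, 4), (5, 3, 4)]
print(kruskal(5, edges))                 # [(0, 3), (1, 4), (0, 2), (3, 4)]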

4.4.4 Shortest Path


Many graph problems take a directed weighted graph, also known as a weighted digraph,
where edges are directed. Vertices and edges are often referred to as nodes and arcs in a
digraph in order to distinguish directed from undirected graphs. An arc (v_x, v_y)
means that there is an arc from v_x to v_y. While an undirected graph has a symmetric
adjacency matrix, the adjacency matrix for a directed graph is not necessarily symmetric.
Nodes in rows are starting nodes and nodes in columns are ending nodes in the adjacency
matrix, as given in Figure 4.29.
Let path(v_x, v_y) be the set of all paths from v_x to v_y. For example, path(v2, v6)
includes ⟨v2, v4, v6⟩, ⟨v2, v4, v7, v6⟩, and ⟨v2, v5, v7, v6⟩ in Figure 4.29. To define a path
formally, P is a path if (p_i ∈ P → p_i ∈ V) ∧ (∀i ∈ {1, · · · , |P| − 1}, (p_i, p_{i+1}) ∈ E),
where |P| is the path length. This definition is often referred to as a walk, and a path is
defined as a walk without repeated vertices and edges in [145]. Here, both terms, 'path'
and 'walk,' are

[Drawing of the digraph omitted.]
(a) a sample weighted directed graph

      v1  v2  v3  v4  v5  v6  v7        V    adjacent weighted arcs
v1  [ 0   2   ∞   1   ∞   ∞   ∞ ]       v1 → {(v2, 2), (v4, 1)}
v2  [ ∞   0   ∞   3   10  ∞   ∞ ]       v2 → {(v4, 3), (v5, 10)}
v3  [ 4   ∞   0   ∞   ∞   5   ∞ ]       v3 → {(v1, 4), (v6, 5)}
v4  [ ∞   ∞   2   0   2   8   4 ]       v4 → {(v3, 2), (v5, 2), (v6, 8), (v7, 4)}
v5  [ ∞   ∞   ∞   ∞   0   ∞   6 ]       v5 → {(v7, 6)}
v6  [ ∞   ∞   ∞   ∞   ∞   0   ∞ ]       v6 → {}
v7  [ ∞   ∞   ∞   ∞   ∞   1   0 ]       v7 → {(v6, 1)}

(b) adjacency matrix                     (c) adjacency list

Figure 4.29: A sample weighted directed graph representation

interchangeably used. Let pc(P) be a path cost, as defined in eqn (4.9):

pc(P) = Σ_{i=1}^{|P|−1} w(p_i, p_{i+1})        (4.9)

For example, pc(⟨v2, v4, v6⟩) = w(v2, v4) + w(v4, v6) = 3 + 8 = 11. Let spc(v_x, v_y) be
the shortest path cost from v_x to v_y, as formally defined in eqn (4.10); for example,
spc(v2, v6) = 8 via the path ⟨v2, v4, v7, v6⟩.

spc(v_x, v_y) = min_{P ∈ path(v_x, v_y)} pc(P)        (4.10)

The shortest path cost problem, or simply SPC, is to find paths with the minimum path
cost from a specific node r to all other vertices. The output is a table of spc(r, vx ) for each
vertex vx ∈ V . It is defined formally as follows:
Problem 4.15. Shortest path cost
Input: A connected directed weighted graph G = (V, E) and r ∈ V
Output: ∀vx ∈ V, spc(r, vx )
One of the most famous and canonical examples of greedy algorithms is Dijkstra's
algorithm, which solves the shortest path cost Problem 4.15 and was originally described
in [50]. It starts with an initial table containing infinite costs except for the

Edsger Wybe Dijkstra (1930 - 2002) was a Dutch computer scientist and an
early pioneer in many research areas of computer science such as software engineering
and distributed computing. Among many algorithms invented by him, the algorithm for
the shortest path problem is now called Dijkstra's algorithm.
© Photo Credit: Hamilton Richards, licensed under CC BY 3.0.

source node r, as given in Figure 4.30 (a). The shortest path cost from a node to itself
is 0. Next, the candidate set, i.e., the table, is updated: the cost of each neighboring node
v_x ∈ {v_x | v_x ∈ V ∧ (v_s, v_x) ∈ E} of the selected node v_s is updated if its current
cost is more expensive than spc(r, v_s) + w(v_s, v_x). Once the candidate set is updated,
the currently selected node is eliminated from it and included in the solution set, and the
next node with the minimum cost is selected. This greedy approach is repeated until all
nodes are selected. A pseudo code is given below. Let T[v_x].c contain the current cost
toward spc(v_r, v_x), and let T[v_x].f be the flag indicating whether the node is in the
solution set or the candidate set: 'T[v_x].f = T' means that it is in the solution set, not in
the candidate set.

step                        v1   v2   v3   v4   v5   v6   v7
(a) initialization (v1)     [0]  ∞    ∞    ∞    ∞    ∞    ∞
(b) update C, select v4     [0]  2    ∞    1    ∞    ∞    ∞
(c) update C, select v2     [0]  2    3    [1]  3    9    5
(d) update C, select v3     [0]  [2]  3    [1]  3    8    5
(e) update C, select v5     [0]  [2]  [3]  [1]  3    8    5
(f) update C, select v7     [0]  [2]  [3]  [1]  [3]  6    5

(Bracketed entries are nodes already moved to the solution set; the remaining entries are
the current candidate costs.)

Figure 4.30: Dikstra’s algorithm illustration

Algorithm 4.20. Dijkstra’s algorithm SPC

greedySPC(G, v_r)
declare T_{1∼n} with ∞ cost and F flag values initially . . . 1
T[v_r].c = 0 and T[v_r].f = T . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . 3
  for each v_x where (v_r, v_x) ∈ E . . . . . . . . . . . . . 4
    if T[v_x].c > T[v_r].c + w(v_r, v_x) . . . . . . . . . . . 5
      T[v_x].c = T[v_r].c + w(v_r, v_x) . . . . . . . . . . . 6
  v_r = argmin_{T[v_x].f = F}(T[v_x].c) . . . . . . . . . . . 7
  T[v_r].f = T . . . . . . . . . . . . . . . . . . . . . . . . 8
return T.c . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Lines 4 ∼ 6 in Algorithm 4.20 update the candidate set. Line 7 greedily chooses the node
with the minimum cost among only the nodes in the candidate set. Line 8 includes it in
the solution set by switching its flag value. The pseudo code in Algorithm 4.20 takes
O(n²), or O(|V|²). The computational time complexity will improve when it is combined
with a data structure called the priority queue in Chapter 9.
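A Python sketch of the O(n²) procedure above, run on the digraph of Figure 4.29 with 0-indexed nodes; the names are assumptions of the sketch.

INF = float("inf")

def dijkstra(n, arcs, r):
    # arcs: dict (x, y) -> weight of arc x -> y; returns spc(r, v) for all v.
    cost, done = [INF] * n, [False] * n
    cost[r] = 0
    u = r
    for _ in range(n):
        done[u] = True
        for (x, y), w in arcs.items():   # update the candidate set from u
            if x == u and not done[y] and cost[y] > cost[u] + w:
                cost[y] = cost[u] + w
        candidates = [v for v in range(n) if not done[v]]
        if not candidates:
            break
        u = min(candidates, key=lambda v: cost[v])   # greedy selection
    return cost

arcs = {(0, 1): 2, (0, 3): 1, (1, 3): 3, (1, 4): 10, (2, 0): 4,
        (2, 5): 5, (3, 2): 2, (3, 4): 2, (3, 5): 8, (3, 6): 4,
        (4, 6): 6, (6, 5): 1}
print(dijkstra(7, arcs, 0))              # [0, 2, 3, 1, 3, 6, 5], as in Figure 4.30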
Theorem 4.14. Dijkstra’s Algorithm 4.20 correctly finds the shortest path cost.
Proof. Basis case (n = 1): spc(v1, v1) = 0 and Algorithm 4.20 returns a table of one node
with T[v1].c = 0.
Inductive step: Assume Algorithm 4.20 is correct for k visited nodes. Show that it is
also true for k + 1 visited nodes. V_{1∼n} = S_{1∼k} ∪ C_{k+1∼n}, where S_{1∼k} is the
solution set, assumed to be correct, and C_{k+1∼n} is the candidate set of unvisited nodes.
Let v_y ∈ C_{k+1∼n} be such that T[v_y].c is the minimum among T[v_z].c for all v_z ∈
C_{k+1∼n}. Algorithm 4.20 chooses v_y as the (k + 1)th node to be included in the solution
set, S_{1∼k+1} = S_{1∼k} ∪ {v_y}, and removes it from the candidate set, C_{k+2∼n} =
C_{k+1∼n} − {v_y}. It then updates T[v_z].c = T[v_y].c + w(v_y, v_z) for every v_z with
(v_y, v_z) ∈ E and T[v_z].c > T[v_y].c + w(v_y, v_z). Let v_x ∈ S_{1∼k} be the last node
that updated T[v_y].c; according to Algorithm 4.20, T[v_y].c = T[v_x].c + w(v_x, v_y). Let
P = ⟨v_r, · · · , v_x, v_y⟩ be the path that makes T[v_y].c = pc(P).
Now suppose that T[v_y].c ≠ spc(v_r, v_y), i.e., there exists another path, P', such that
pc(P) > pc(P') = spc(v_r, v_y). There must be a node v_w such that v_w ∈ P' but v_w ∉ P.
If v_w ∈ S_{1∼k}, it contradicts the assumption that T[v_x].c = spc(v_r, v_x). If v_w ∈
C_{k+1∼n}, it contradicts that T[v_y].c is the minimum. Therefore, S_{1∼k+1} = S_{1∼k} ∪
{v_y} is also correct. □

4.4.5 Traveling Salesman Problem


Consider the famous traveling salesman problem, or simply TSP, which is to find the
shortest route a traveling salesman can take to visit all cities. It takes a list of n cities
and the costs between cities as inputs. The input can be thought of as a weighted
complete graph, or simply a cost matrix, as given in Figure 4.31. The output is a sequence
of all cities in order such that the sum of costs between consecutive cities is minimized. The
TSP problem can be formulated as follows:
Problem 4.16. Traveling salesman problem
Input: A sequence V_{1∼n} of size n and an n × n cost matrix C_{v_1∼v_n, v_1∼v_n},
where c_{v_i, v_j} is the cost from v_i to v_j.
Output: V', a permutation of V, such that Σ_{i=1}^{n−1} c_{v'_i, v'_{i+1}} is minimized.

Immediately evident from the problem definition is a naı̈ve algorithm with Θ(n!) com-
putational time complexity that generates all n! permutations of V1∼n and computes each
sum of costs to find the minimum solution.
Consider the following greedy algorithm, assuming the cost matrix is symmetric. A
salesman may start with any city, vx , and greedily select the next city, vy , whose cost from

      v1  v2  v3  v4  v5
v1  [ 0   7   3   2   8 ]
v2  [ 7   0   7   3   7 ]
v3  [ 3   7   0   1   8 ]
v4  [ 2   3   1   0   4 ]
v5  [ 8   7   8   4   0 ]
(a) Cost matrix
(b) Greedy algo. solution: ⟨v1, v4, v3, v2, v5⟩ = 17
(c) Optimal solution: ⟨v1, v3, v4, v2, v5⟩ = 14

Figure 4.31: Traveling Salesman problem

vx is the lowest. Repeat this greedy process until all cities are visited. This simple greedy
algorithm was called the ‘nearest-neighbor method’ in [147].

Algorithm 4.21. Nearest-neighbor method for TSP

greedyNN4TSP(C)
Let U = V be the set of unvisited vertices . . . . 1
s = v_x where v_x is randomly selected . . . . . . 2
U = U − {s} . . . . . . . . . . . . . . . . . . . 3
R = ⟨s⟩ . . . . . . . . . . . . . . . . . . . . . 4
while U ≠ ∅ . . . . . . . . . . . . . . . . . . . 5
  s = argmin_{x∈U}(c_{s,x}) . . . . . . . . . . . 6
  U = U − {s} . . . . . . . . . . . . . . . . . . 7
  append s at the end of R . . . . . . . . . . . . 8
return R . . . . . . . . . . . . . . . . . . . . . 9

In Algorithm 4.21, the route R grows from one end: the city nearest to the current end
city is repeatedly appended. Clearly, the nearest-neighbor method greedy Algorithm 4.21
does not always find an optimal solution; a counter-example is given in Figure 4.32, where
starting from v1, v2, or v3 yields costs of 17, 15, and 17, respectively, all worse than the
optimal solution of cost 14 given in Figure 4.31 (c).
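A Python sketch of the nearest-neighbor method on the cost matrix of Figure 4.31 (a), 0-indexed; the function name is an assumption.

def nearest_neighbor_tsp(c, start=0):
    # c: symmetric cost matrix; repeatedly visit the nearest unvisited city.
    n = len(c)
    unvisited = set(range(n)) - {start}
    route, s = [start], start
    while unvisited:
        s = min(unvisited, key=lambda x: c[s][x])    # nearest unvisited city
        unvisited.remove(s)
        route.append(s)
    return route, sum(c[route[i]][route[i + 1]] for i in range(n - 1))

c = [[0, 7, 3, 2, 8],
     [7, 0, 7, 3, 7],
     [3, 7, 0, 1, 8],
     [2, 3, 1, 0, 4],
     [8, 7, 8, 4, 0]]
print(nearest_neighbor_tsp(c))           # ([0, 3, 2, 1, 4], 17): route of Figure 4.31 (b)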
There is another greedy algorithm for TSP that resembles Kruskal’s Algorithm 4.19.
While Kruskal’s Algorithm starts with individual trees for each sub-graph, the greedy merge
algorithm for TSP starts with individual routes for each sub-graph. Since a route is a string,
it starts with n strings that contain only one vertex. Then, two strings are greedily merged
only from end points and not from the middle point of the string. A pseudo code is stated
as follows:

Algorithm 4.22. Greedy merge method for TSP

greedyMG4TSP(C)
Declare n strings S = {s_1, · · · , s_n} where s_i = ⟨v_i⟩ . . . . 1
for i = 1 ∼ n − 1 . . . . . . . . . . . . . . . . . . . . . . . . 2
  (s_x, s_y) = argmin_{s_x, s_y ∈ S, s_x ≠ s_y} c(s_x[|s_x|], s_y[1]) . 3
  s_x = append(s_x, s_y), i.e., S = S − {s_y} . . . . . . . . . . 4
return S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

[Graph drawings omitted.]

(a) v1 is a start: ⟨v1, v4⟩ → ⟨v1, v4, v3⟩ → ⟨v1, v4, v3, v2⟩ → ⟨v1, v4, v3, v2, v5⟩; cost 2 + 1 + 7 + 7 = 17
(b) v2 is a start: ⟨v2, v4⟩ → ⟨v2, v4, v3⟩ → ⟨v2, v4, v3, v1⟩ → ⟨v2, v4, v3, v1, v5⟩; cost 3 + 1 + 3 + 8 = 15
(c) v3 is a start: ⟨v3, v4⟩ → ⟨v3, v4, v1⟩ → ⟨v3, v4, v1, v2⟩ → ⟨v3, v4, v1, v2, v5⟩; cost 1 + 2 + 7 + 7 = 17

Figure 4.32: Nearest-neighbor method Algorithm 4.21 illustration for TSP

Figure 4.33 illustrates Algorithm 4.22. Note that the edge weights differ from those of the
previous example in Figure 4.31 in order to emphasize the merge processes. The previous
example in Figure 4.31 then serves as a counter-example demonstrating the incorrectness of
Algorithm 4.22.

Theorem 4.15. The greedy merge Algorithm 4.22 does not always find an optimal solution.

Proof. A counter-example is the one given in Figure 4.31 (a). While the greedy merge method
Algorithm 4.22 produces a total cost of 17 (⟨v3, v4, v1, v2, v5⟩), a better, and indeed optimal,
solution is given in Figure 4.31 (c). □

Various other greedy algorithms have been proposed without success. In 1967 [54],
Edmonds conjectured that there is no polynomial-time algorithm for the traveling salesman
problem.

[Graph drawings omitted.]

{⟨v1⟩, ⟨v2⟩, ⟨v3, v4⟩, ⟨v5⟩} → {⟨v1⟩, ⟨v2, v5⟩, ⟨v3, v4⟩} → {⟨v1⟩, ⟨v3, v4, v2, v5⟩} → {⟨v1, v3, v4, v2, v5⟩}

Figure 4.33: Greedy merge method Algorithm 4.22 illustration for TSP

4.5 Minimum Length Code


A canonical example of a greedy algorithm that requires selecting two candidates and
updating the candidate set is a Huffman code. Suppose that there are n kinds of characters
and they are coded in a binary system. In the uniform or fixed length coding system, only
dlog ne bits are necessary to represent all n characters uniquely, as listed in Table 4.1. If
the frequencies of these characters differ, this uniform coding system may not be efficient.
To transmit a message more efficiently, Morse code is used. On the toy example, the sum
of each character’s length multiplied by its frequency is smaller in Morse code than in the
uniform coding system. One of the major problems of Morse code, however, is its ambiguity.
If a received message is ‘001000100’, the uniform code reads the message uniquely as ‘001(B)
000(A) 100(E)’. However, the message can be interpreted in many different ways in Morse
code: ‘0(E) 0(E) 100(D) 01(A) 0(E) 0(E)’, ‘0010(F) 0010(F) 0(E)’, ‘0010(F) 0(E) 0(E)
100(D)’, etc.

Table 4.1: Various coding systems

s_i   f_i   uniform   Morse   Huffman
A     2     000       01      10100
B     3     001       1000    10101
C     5     010       1010    1011
D     8     011       100     100
E     13    100       0       00
F     15    101       0010    01
G     18    110       110     11
Σ_{i=1}^{n} f_i l_i   192     187     161

Consider a different coding system called the Huffman code, discovered and described
in [85]. The Huffman code algorithm is described in the following subsection. A Huffman
coding system on the toy example is given in Table 4.1, compared with the uniform and
Morse codes. Its total length is smaller than the other coding systems'. Indeed, Huffman
proved its optimality in [85]. It uniquely reads the sample message '001000100' as '00(E)
100(D) 01(F) 00(E)'.
If symbols are placed at the leaf level in a binary tree, messages can be read uniquely. In

Node            par   freq   code
1 (A)           8     2      11110
2 (B)           8     3      11111
3 (C)           9     5      1110
4 (D)           10    8      110
5 (E)           11    13     00
6 (F)           11    15     01
7 (G)           12    18     10
8 (AB)          9     5
9 (CAB)         10    10
10 (DCAB)       12    18
11 (EF)         13    28
12 (GDCAB)      13    36
13 (EFGDCAB)    -     64
(a) Optimal & Huffman code and its binary tree: Σ_{i=1}^{n} f_i l_i = 161

Node            par   code
1 (A)           8     000
2 (B)           8     001
3 (C)           9     010
4 (D)           9     011
5 (E)           10    100
6 (F)           10    101
7 (G)           11    110
8 (AB)          12
9 (CD)          12
10 (EF)         13
11 (G)          13
12 (ABCD)       14
13 (EFG)        14
14 (ABCDEFG)    -
(b) Uniform code and its binary tree: Σ_{i=1}^{n} f_i l_i = 192

Node            par   code
1 (A)           13    0
2 (B)           12    10
3 (C)           11    110
4 (D)           10    1110
5 (E)           9     11110
6 (F)           8     111110
7 (G)           8     111111
8 (FG)          9
9 (EFG)         10
10 (DEFG)       11
11 (CDEFG)      12
12 (BCDEFG)     13
13 (ABCDEFG)    -
(c) A worst code and its binary tree: Σ_{i=1}^{n} f_i l_i = 318

[Tree drawings omitted; each table lists every node's parent, so the tree can be traced.]

Figure 4.34: Binary coding systems represented in binary coding trees



a binary tree representation, no binary code is a prefix of another. This property
enables deciphering encoded messages uniquely. Morse code cannot be represented
by a binary tree, but the Huffman and uniform codes are shown as binary code trees in
Figure 4.34 (a) and (b), respectively. Leaf nodes contain a symbol followed by its frequency.
An internal node contains the node number followed by its frequency, which is the sum of
all symbols' frequencies in its sub-tree. Each symbol's binary code can be generated by
tracing a path from the root node, where the left and right branches are 0 and 1, respectively.
For example, the symbol 'D' in Figure 4.34 (a) is '110'. There are an exponential number
of different binary code trees. The binary code tree in Figure 4.34 (c) assigns each symbol a
binary code as follows: A - 0, B - 10, C - 110, D - 1110, E - 11110, F - 111110, and G
- 111111. It reads the sample message '001000100' uniquely as '0(A) 0(A) 10(B) 0(A) 0(A)
10(B) 0(A)', but its total length, Σ_{i=1}^{n} f_i l_i = 318, is higher than the Huffman code's.
To represent a binary tree for binary coding, a table of height (2n − 1) storing the nodes
and their respective parent nodes can be used, as shown in Figure 4.34. Recall that there
are exactly n − 1 internal nodes if there are n leaf nodes.
The problem of finding a minimum length code is defined as follows:

Problem 4.17. Minimum length code

Input: A list A_{1∼n} where a_i = (s_i, f_i), and the radix r
Output: An r-ary tree T whose leaf nodes are the a_i's such that
Σ_{x∈Leaf} depth(x) × f_x is minimized.

The typical radix is binary, r = 2, by default. The ultimate output is the list of r-ary
codes for each character, but since it can be generated trivially by tracing once an r-ary tree
is provided, the output in Problem 4.17 is the r-ary tree represented in a table, as shown
in Figure 4.34.

4.5.1 Huffman Code


The Huffman code is a binary code where the most frequently appearing symbol gets
the shortest code length. This can be achieved by merging two sub-trees and reevaluating
the candidate sub-trees. First, each symbol is treated as a single node sub-tree. As depicted
in Figure 4.35, there are n single node sub-trees in the forest initially. The two least
frequent nodes are merged into a sub-tree by creating a new root node whose frequency is
the sum of the two sub-trees' frequencies. By repeating this process until there is only one
tree, an optimal solution is found.
This greedy algorithm is known as the Huffman code algorithm. The output tree is
represented as a ((2n − 1) × 2) table. The ith row represents the ith character if i ≤ n; the
ith row where n < i ≤ 2n − 1 represents an internal node, and the last, (2n − 1)th, row is
the root node of the final tree. The first and second columns hold the frequency and parent node

David Albert Huffman (1925-1999) was an American electrical engineer and a
pioneer in computer science. He is best known for his Huffman coding.
© Photo Credit: courtesy of University of California, Santa Cruz

information. A row with no parent node is the root of a sub-tree. The frequency of the
ith input character is denoted as a_i.f. A pseudo code is stated as follows:

[Figure 4.35 panels. Initialization: seven single-node sub-trees A:2, B:3, C:5, D:8, E:13, F:15, G:18. Step 1: merge A and B into node 8 (AB, frequency 5). Step 2: merge C and node 8 into node 9 (ABC, 10). Step 3: merge D and node 9 into node 10 (ABCD, 18). Step 4: merge E and F into node 11 (EF, 28). Step 5: merge G and node 10 into node 12 (ABCDG, 36).]

Figure 4.35: Illustrating the greedy Huffman code Algorithm 4.23

Algorithm 4.23. Naı̈ve greedy Huffman code


greedyHuffman(A1∼n )
declare T2n−1×2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n, T [i][1] = ai .f . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = n + 1 ∼ 2n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
minval = ∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for j = 1 ∼ i − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if T [j][2] = (0 or null) and T [j][1] < minval, . . . . . . . . 6
minval = T [j][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
minidx = j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
T [minidx][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
T [i][1] = T [minidx][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
minval = ∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
for j = 1 ∼ i − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
if T [j][2] = (0 or null) and T [j][1] < minval, . . . . . . . 13
minval = T [j][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
minidx = j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
T [minidx][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
T [i][1] = T [i][1] + T [minidx][1] . . . . . . . . . . . . . . . . . . . . . . . 17
return T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
In lines 1 and 2, the table is initialized with each symbol’s frequency. As the parent
information is left empty, all nodes are single node sub-trees in the forest. For each internal

node where the merge process occurs, the two nodes with least frequencies must be found.
Lines 4 ∼ 8 and lines 11 ∼ 15 find the two nodes with the first and second least frequencies. They get
merged into a single tree whose parent node is i, as assigned in lines 9 and 16. The frequency
of the new parent node of the two merged sub-trees is the sum of the frequencies of its two children,
as computed in lines 10 and 17.
The number of merge processes is exactly n − 1, as there are n − 1 internal nodes, indexed
from n + 1 ∼ 2n − 1, if there are n leaf nodes. If the input table is not sorted or arranged
into a special data structure, finding the least frequent candidates takes linear time and
this must be repeated n − 1 times. Hence, the computational time complexity of the naı̈ve
implementation of the greedy Huffman code Algorithm 4.23 is quadratic, Θ(n²). It
can be improved later in Chapters 7 and 9 by utilizing certain data structures.
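The table-based merge process translates almost line for line into a short program. Below is a minimal Python sketch of Algorithm 4.23 (the function and variable names are illustrative, not from the text); a parent entry of None plays the role of the empty parent information, marking a sub-tree root.

```python
# A Python sketch of the naive greedy Huffman code (Algorithm 4.23).
# T mirrors the ((2n-1) x 2) table: T[i] = [frequency, parent], where
# parent None means "no parent yet", i.e., the node is a sub-tree root.

def greedy_huffman(freqs):
    n = len(freqs)
    T = [[f, None] for f in freqs] + [[0, None] for _ in range(n - 1)]
    for i in range(n, 2 * n - 1):              # one merge per internal node
        for _ in range(2):                     # take the two least-frequent roots
            minidx = min((j for j in range(i) if T[j][1] is None),
                         key=lambda j: T[j][0])
            T[minidx][1] = i                   # attach it to internal node i
            T[i][0] += T[minidx][0]            # parent frequency = children sum
    return T

# Frequencies of A..G from Figure 4.35; the final root holds the total, 64.
table = greedy_huffman([2, 3, 5, 8, 13, 15, 18])
print(table[-1][0])                            # 64
```

The two linear scans per merge make the quadratic running time of the naı̈ve implementation plainly visible.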

4.5.2 Minimum Length r-ary Code


Symbols can be represented in a different radix system as discussed in the previous
chapter on page 126. If symbols are placed at the leaf level in an r-ary tree, messages
can be read uniquely. In an r-ary tree representation, no two codes are prefixes of
each other, and thus this property enables deciphering encoded messages uniquely. A greedy
algorithm similar to Algorithm 4.23 can be designed.
Before designing a greedy algorithm, the number of internal nodes in a k-ary tree must
be determined.

Problem 4.18. Minimum number of internal nodes in a k-ary tree


Input: number of leaf nodes n ∈ N in a k-ary tree T and k > 1
Output: minimize |M | where M is a set of internal nodes that form a k-ary tree T
together with n leaf nodes and ∀x ∈ M, (0 < |{y| par(y) = x}| ≤ k)

If there are n leaf nodes, the minimum number of internal nodes, MNK(n), satisfies
the following recurrence relation:

$$\mathrm{MNK}(n) = \begin{cases} 0 & \text{if } n = 1 \\ 1 & \text{if } 1 < n \le k \\ \mathrm{MNK}\left(n - (k-1)\left\lfloor \frac{n}{k} \right\rfloor\right) + \left\lfloor \frac{n}{k} \right\rfloor & \text{if } n > k \end{cases} \quad (4.11)$$

By grouping leaf nodes into groups of size k, the quotient ⌊n/k⌋ accounts for the number
of new internal nodes and yields a recursive sub-problem. In the recursive sub-problem, the ⌊n/k⌋
new internal nodes become leaf nodes together with the n − k⌊n/k⌋ remainder nodes: MNK(⌊n/k⌋ + n − k⌊n/k⌋).
If there is only one node, the number of internal nodes is zero. If the number of leaf nodes
is greater than 1 but at most k, a single root node, which is an internal node, suffices.
The following Theorem 4.16 provides a closed form for the minimum number of internal
nodes in a k-ary tree.

Theorem 4.16. MNK(n), the minimum number of internal nodes in a k-ary tree where n
is the number of leaf nodes, is

$$\mathrm{MNK}(n) = \left\lceil \frac{n-1}{k-1} \right\rceil \quad (4.12)$$

[Figure 4.36 panels. Initialization: eight single-node sub-trees a:1, b:2, c:3, d:4, e:4, f:5, g:5, h:7. Step 1: merge (a, b, c) into node 9 (abc, frequency 6). Step 2: merge (d, e, f) into node 10 (def, 13). Step 3: merge (g, node 9, h) into node 11 (abcgh, 18). Step 4: merge node 10 and node 11 into node 12 (abcdefgh, 31).]

Figure 4.36: Illustrating the greedy minimum length r-ary code Algorithm 4.24 where r = 3

Theorem 4.16 can be proven by a simple strong induction, which is the main topic of the
following Chapter 5. Hence, this proof is left as an exercise, Q 5.3 d), in Chapter 5.
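Although the formal proof is deferred, the recurrence in eqn (4.11) and the closed form in eqn (4.12) can be compared numerically. The following Python snippet is a quick sanity check, not a proof:

```python
from math import ceil

# Compare the recurrence (4.11) with the closed form (4.12) on small inputs.
def mnk_rec(n, k):
    if n == 1:
        return 0
    if n <= k:
        return 1
    q = n // k                          # groups of size k become internal nodes
    return mnk_rec(n - (k - 1) * q, k) + q

def mnk_closed(n, k):
    return ceil((n - 1) / (k - 1))

assert all(mnk_rec(n, k) == mnk_closed(n, k)
           for k in range(2, 6) for n in range(1, 200))
```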
Once the minimum number of internal nodes in a k-ary tree is known, the minimum
number of merges in line 3 of Algorithm 4.23 must be updated for the general r-ary code.
Note that lines 4 ∼ 8 of Algorithm 4.23 are repeated as lines 11 ∼ 15, and they have
to be repeated r times in the general minimum length r-ary code problem. The pseudo-code
becomes more succinct if lines 4 ∼ 8 are turned into a subroutine. Figure 4.36
illustrates the generalized greedy algorithm for the minimum length ternary (r = 3) code
problem. A pseudo code for a greedy minimum length r-ary code is stated as follows:

Algorithm 4.24. Greedy minimum length r-ary code


m = n + ⌈(n − 1)/(r − 1)⌉ is declared globally
Tm×2 is declared globally
greedyHuffman(A1∼n , r)
for i = 1 ∼ n, T [i][1] = ai .f . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = n + 1 ∼ m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [i][1] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 1 ∼ r . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
minidx = findminidx(i) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
T [minidx][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
T [i][1] = T [i][1] + T [minidx][1] . . . . . . . . . . . . . . . . . . . . . . 7
for i = 1 ∼ m, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
if T [i][2] = (0 or null), T [i][2] = m . . . . . . . . . . . . . . . . . . 9

Subroutine 4.1. findminidx(i)

minval = ∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for j = 1 ∼ i − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
if T [j][2] = (0 or null) and T [j][1] < minval, . . . . . . . . . . 3

minval = T [j][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
minidx = j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
return minidx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
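A minimal Python sketch of Algorithm 4.24 together with Subroutine 4.1 follows; as before, a None parent marks a sub-tree root, and the names are illustrative. Any roots left over after the last merge are attached to the final internal node, in the spirit of lines 8 and 9 of the pseudo code.

```python
from math import ceil

# A Python sketch of the greedy r-ary code (Algorithm 4.24 / Subroutine 4.1).
def greedy_rary(freqs, r):
    n = len(freqs)
    m = n + ceil((n - 1) / (r - 1))        # leaves plus internal nodes
    T = [[f, None] for f in freqs] + [[0, None] for _ in range(m - n)]
    for i in range(n, m):
        for _ in range(r):                 # findminidx, repeated r times
            roots = [j for j in range(i) if T[j][1] is None]
            if not roots:                  # fewer than r roots remain
                break
            minidx = min(roots, key=lambda j: T[j][0])
            T[minidx][1] = i
            T[i][0] += T[minidx][0]
    for j in range(m - 1):                 # remaining roots join the tree root
        if T[j][1] is None:
            T[j][1] = m - 1
    return T

def weighted_length(T, n):
    # Sum of frequency x depth over the n leaves, by following parent links.
    root, total = len(T) - 1, 0
    for i in range(n):
        j, depth = i, 0
        while j != root:
            j, depth = T[j][1], depth + 1
        total += T[i][0] * depth
    return total

T = greedy_rary([1, 2, 3, 4, 4, 5, 5, 7], 3)   # the input of Figure 4.36/4.37
print(weighted_length(T, 8))                   # 68, the greedy total length
```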

The greedy Algorithm 4.24 produces an r-ary tree in which every internal node except
possibly the root has exactly r children; the root may have fewer than r children.
Although the greedy Huffman code Algorithm 4.23 finds an optimal solution when
the code is binary (r = 2), the general greedy Algorithm 4.24 does not always find an
optimal solution if the code's arity is higher than 2.

Theorem 4.17. The greedy Algorithm 4.24 does not always find a minimum length r-ary
code.
Proof. A counter-example with a ternary (r = 3) code is given in Figure 4.37. While the greedy
Algorithm 4.24 yields a total length of 68, as shown in Figure 4.37 (a), the uniform code gives a smaller value,
62, as provided in Figure 4.37 (b). An optimal solution of 58 is given in Figure 4.37 (c). ∎

(a) Greedy solution, total length 68:
a (1) → '110', b (2) → '111', c (3) → '112', d (4) → '00', e (4) → '01', f (5) → '02', g (5) → '10', h (7) → '12'

(b) Uniform code, total length 62:
a (1) → '00', b (2) → '01', c (3) → '02', d (4) → '10', e (4) → '11', f (5) → '12', g (5) → '20', h (7) → '21'

(c) Optimal solution, total length 58:
a (1) → '000', b (2) → '001', c (3) → '01', d (4) → '02', e (4) → '10', f (5) → '11', g (5) → '12', h (7) → '2'

Figure 4.37: Ternary coding trees



4.6 Exercises
Q 4.1. Consider the following two modified alternating permutation problems: down-up
and up-up-down problems, which were considered as exercises in Q 2.24 on page 86 and
Q 2.26 on page 87, respectively.

a). Devise a greedy algorithm for the down-up problem.


b). Provide the computational time complexity of your greedy algorithm provided in a).
c). Devise a greedy algorithm for the up-up-down problem.
d). Provide the computational time complexity of your greedy algorithm provided in c).

Q 4.2. Consider the problem of selecting k items out of n numbers, such that the sum of
these k numbers is minimized.
a). Formulate the problem.
b). Design a greedy algorithm.
c). Illustrate the proposed algorithm on the following example where k = 3.

5 2 9 4 0 8 7 1

d). Provide the computational time complexity of your greedy algorithm provided in b).
e). Prove the correctness or incorrectness of the proposed algorithm in b).
Q 4.3. Consider the problem of selecting k items out of n positive numbers, such that the
subset product of these k items is maximized.

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate the proposed algorithm on the following example where k = 3.

0.5 2.0 0.2 0.5 0.4 5.0 0.4 2.0

d). Provide the computational time complexity of your greedy algorithm provided in b).
e). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.4. Consider the problem of selecting k items out of n positive numbers, such that the
subset product of these k items is minimized.

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate the proposed algorithm on the following example where k = 3.

2.0 0.5 5.0 2.0 2.5 0.2 2.5 0.5



d). Provide the computational time complexity of your greedy algorithm provided in b).
e). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.5. Suppose that there are only three kinds of stamps, A = ⟨7, 5, 1⟩, and the problem is
to make T¢ with the minimum number of stamps.

(n = 3) kinds of stamps

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate the proposed algorithm in b) where T = 10¢.
d). Prove the correctness or incorrectness of the proposed algorithm in b).
e). Provide the computational time complexity of your greedy algorithm provided in b).

Q 4.6. Suppose that there are n kinds of stamps and one would like to make T¢ with the
maximum number of stamps.

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate the proposed algorithm where A = ⟨5, 6, 7⟩ and T = 35¢.

(n = 3) kinds of stamps

d). Prove the correctness or incorrectness of the proposed algorithm in b).


e). Provide the computational time complexity of your greedy algorithm provided in b).

Q 4.7. There are (k = 3) kinds of canned foods, represented by their weights: A = ⟨4, 5, 7⟩.
Suppose that an astronaut would like to carry as many canned foods as possible, i.e., max-
imize the quantity.

4kg 5kg 7kg

a). Suppose that the spaceship safety regulation requires that the total weight of the
canned foods must be exactly nkg in order to launch safely. Formulate the problem.

b). Design a greedy algorithm for the problem in a).


c). Prove the correctness or incorrectness of the proposed algorithm in b).
d). Suppose that the spaceship safety regulation requires that the total weight of the
canned foods cannot exceed nkg in order to launch safely. Formulate the problem.
e). Design a greedy algorithm for the problem in d).
f). Prove the correctness or incorrectness of the proposed algorithm in e).
g). Provide the computational time complexity of your greedy algorithm provided in d).

Q 4.8. One needs (m = T¢) postage on an envelope. While rummaging through a drawer,
one finds only ten stamps, A = ⟨1, 44, 7, 5, 7, 1, 33, 24, 33, 44⟩. The problem is to make at
least T¢ with the available stamps.

Ten available stamps

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate the proposed algorithm in b) where T = 72¢.
d). Provide the computational time complexity of your greedy algorithm provided in b).
e). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.9. Suppose a person is moving n boxes, represented by their weights in
kilograms. The person rented a one-way truck, but according to the law, the total weight
cannot exceed a certain limit, m. The mover would like to maximize the total weight
while not exceeding the weight limit. Hence, the problem is to select items whose total
weight is at most m.

32 33
89 54 71
120 62 94

Eight items and a truck with a weight limit m = 500kg.

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate the proposed algorithm in b) where A = ⟨32, 33, 89, 54, 71, 120, 62, 94⟩ and
the weight limit m = 500kg.
d). Provide the computational time complexity of your greedy algorithm provided in b).
e). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.10. Given an integer n and a set of integers, A1∼k , the subset sum equality, or SSE,
problem is to select a subset S ⊆ A such that $\sum_{x \in S} x = n$. For example, if n = 12 and
A = {2, 3, 5, 7}, the output should be S = {2, 3, 7} or S = {5, 7}. For another example, if
n = 11 and A = {2, 3, 5, 7}, the output should be 'impossible.'

a). Formulate the problem.


b). Design a greedy algorithm.
c). Prove the correctness or incorrectness of the proposed algorithm in b).
d). Provide the computational time complexity of your greedy algorithm provided in b).

Q 4.11. Consider a problem where two or more greedy choices fail to find an optimal
solution. For example, two greedy Algorithms 4.6 and 4.7, stated on pages 163 and 164,
failed to find an optimal solution for the 01-knapsack Problem 4.4, defined on page 163.
Consider the following algorithm which combines Algorithm 4.6 and Algorithm 4.7, and
returns the maximum of two solutions.
Algorithm 4.25. Combine two greedy algorithms
zo-knapsack-greedyIII(A, m)
C1 = zo-knapsack-greedyI(A, m) . . . . . . . . . . . . . . . . . . . 1
C2 = zo-knapsack-greedyII(A, m) . . . . . . . . . . . . . . . . . . 2
return max(C1 , C2 ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

a). Prove the incorrectness of Algorithm 4.25, i.e., come up with a counter-example.
b). Consider the multiprocessor scheduling Problem 4.10, defined on page 174, and its
greedy approximate Algorithms 4.13 and 4.14. Devise a greedy algorithm which com-
bines two greedy approximate algorithms.
c). Prove the correctness or incorrectness of the greedy algorithm provided in b).
d). Consider the traveling salesman Problem 4.16, defined on page 190, and its greedy
approximate Algorithms 4.21 and 4.22. Devise a greedy algorithm which combines
two greedy approximate algorithms.
e). Prove the correctness or incorrectness of the greedy algorithm provided in d).

Q 4.12. Mr. Gordon Gekko needs at least 50,000 shares of the 'Greed' company for a takeover.
There are n sellers with different offer prices and quantities. Each offer is not
divisible, i.e., take it or leave it. Greedy Gekko would like to buy at least 50,000 shares with
the minimum amount of money.

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate your proposed greedy algorithm in b) with the following toy example:

A a1 a2 a3 a4 a5 a6 a7
P $560K $300K $620K $145K $800K $450K $189K
Q 20000 10000 20000 5000 25000 15000 7000

d). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.13. There are n distinct foods with their total fat and amount, as shown below.
One must take at least (m = 500g) of food, but would like to minimize the amount of fat.
(Hint: since one can take a portion of a certain food, this problem is a fractional knapsack
minimization problem.)

Foods
cake crab donut egg fried pastry pizza
amt. 60g 150g 40g 50g 120g 200g 150g
fat 13 5 16 20 9 41 12

a). Formulate the problem.

b). Design a greedy algorithm.

c). Illustrate your proposed greedy algorithm in b) with the above food toy example.

d). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.14. Suppose that a fast-food restaurant sells chicken nuggets in packs of 4, 6 and 9.
One has to get at least (m = 11) chicken nuggets, but not too many.


Three kinds of nugget boxes

a). Formulate the problem.

b). Design a greedy algorithm.

c). Illustrate your proposed greedy algorithm in b) where m = 11 and A = h4, 6, 9i.

d). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.15. Suppose that a fast-food restaurant sells chicken nuggets in packs of 3, 5 and 7,
whose respective prices are $6, $7, and $14. One has to purchase at least 12
chicken nuggets.


Three kinds of Nugget boxes



One can purchase 4 packs of 3 which cost $24 or purchase one pack of 5 and one pack of 7
which cost only $21. There is another better solution. Purchasing one pack of 3 and two
packs of 5 gives 13 chicken nuggets for $20. The problem in general is to minimize the cost
to purchase at least m number of items given a set of n packs, where each pack consists of
quantity and price.
a). Formulate the problem.
b). Design a greedy algorithm.
c). Illustrate your proposed greedy algorithm in b) with the above chicken nugget toy
example.
d). Prove the correctness or incorrectness of the proposed algorithm in b).
Q 4.16. Suppose that a fast-food restaurant sells chicken nuggets in packs of 4, 6 and 9.
A generous man would like to fill up a gift box with as many nuggets as possible. At most,
(m = 11) nuggets can fit into the gift box.


(n = 3) kinds of Nugget boxes (m = 11) gift box.

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate your proposed greedy algorithm in b) where m = 11 and A = h4, 6, 9i.
d). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.17. Given a set P of k integers, P = {p1 , p2 , · · · , pk }, find a multi-subset of numbers


whose product equals exactly n where a number may be used more than once. Call this
problem the unbounded subset product problem, or simply USPE. For a toy example of
P = {2, 3, 5} and n = 20, the solution is true with X = (2, 0, 1) because $2^2 \times 3^0 \times 5^1 = 20$.

a). Formulate the problem.


b). Design a greedy algorithm.
c). Illustrate your proposed greedy algorithm in b) where n = 18 and P = ⟨9, 3, 6⟩.
d). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.18. Consider the (k = 3) missile game depicted in Figure 4.12 on page 166. There are
three missiles (blue, red, and yellow) with different points gained and energy required. The
problem, V(m), is to score exactly m points using as little energy as possible. Note that if
the total points gained fall short of or exceed m, it is a loss. Exactly m points earned using the least
energy is a win.

a). Formulate the problem.



b). Design a greedy algorithm.

c). Illustrate your proposed greedy algorithm in b) with the following toy example where
m = 12.

missile blue red yellow


energy E 2 3 4
point P 1 5 6

d). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.19. Consider the rod cutting maximization Problem 4.7, or simply RCM, defined
on page 168.

a). Come up with a greedy algorithm for RCM.

b). Prove the correctness or incorrectness of the proposed algorithm in a).

c). Suppose that the profits are costs of rods. One would like to cut a rod such that the
cost is minimized. Formulate the Rod cutting minimization problem, or RCmin, in
short.

d). Come up with a greedy algorithm for RCmin.

e). Prove the correctness or incorrectness of your algorithm provided in d).

Q 4.20. There are n cargoes, each represented by its profit and length, ai = (pi , wi ).
A driver has a trailer of length m where cargoes can be placed. The driver would like to
maximize the profit while the total length of the selected cargoes must be exactly m = 40m.
If there is a gap between cargoes, it is a violation of a certain safety regulation.

1 2 3 4 5 6 7 8
10M$ 2M$ 5M$ 5M$ 1M$ 12M$ 4M$ 8M$
18m 10m 15m 10m 5m 20m 15m 15m

(a) A sample input: (n = 8)


m = 40m m = 40m
6 12M$ 1 10M$ 6 8 5
12M$ 8M$ 1M$
20m 18m 20m 15m 5m

(b) An invalid solution with 22M $ profit. (c) A valid solution with 21M $ profit.

Figure 4.38: Sample ‘cargo’ toy example

a). Formulate the problem.

b). Design a greedy algorithm.

c). Illustrate your proposed greedy algorithm in b) on the above ‘cargo’ toy example in
Figure 4.38 (a) where m = 40m.

d). Prove the correctness or incorrectness of the proposed algorithm in b).



Q 4.21. There are n cargoes containing radioactive waste. Each cargo is repre-
sented by its radioactive amount and length, ai = (pi , wi ). A driver has a trailer
of length m where cargoes can be placed. The scared driver would like to minimize the
total radioactive amount, like it matters, while the total length of the selected cargoes must
be exactly m = 40m. If there is a gap between cargoes, it is a violation of a certain safety
regulation.

cargo i:        1     2     3     4     5     6     7     8
radioactivity:  4MBq  2MBq  5MBq  5MBq  1MBq  12MBq 4MBq  8MBq
length:         18m   10m   15m   10m   5m    20m   15m   15m

(a) A sample input: (n = 8)


(b) An invalid solution with 10MBq.
(c) An invalid solution with 7MBq: cargoes 2 (2MBq, 10m), 5 (1MBq, 5m), and 1 (4MBq, 18m) fill only 33m of the (m = 40m) trailer.
(d) A valid solution with 11MBq: cargoes 2 (2MBq, 10m), 3 (5MBq, 15m), and 7 (4MBq, 15m) fill exactly 40m.

Figure 4.39: Sample ‘cargo with radioactive waste’ toy example

a). Formulate the problem.

b). Design a greedy algorithm.

c). Illustrate your proposed greedy algorithm in b) on the above ‘radioactive cargo’ toy
example in Figure 4.39 (a) where m = 40m.

d). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.22. Given a set of activities represented by their start and finish times, select the
maximum number of compatible activities. Two activities are compatible if their intervals
do not overlap.

a). Formulate the problem.

b). Come up with a greedy algorithm.

c). Illustrate your proposed greedy algorithm in b) using the following toy example:

i 1 2 3 4 5 6
si 3 5 2 4 1 6
fi 4 9 5 6 2 7

d). Prove the correctness or incorrectness of the proposed algorithm in b).

e). Provide the computational time complexity of your greedy algorithm provided in b).

Q 4.23. You are given a set of activities represented by their start and finish times, as well
as the profit: ai = (si , fi , pi ). Selecting compatible activities such that the sum of their
profits is maximized is known as a weighted activity selection problem. Two activities are
compatible if their intervals do not overlap.

a). Formulate the problem.


b). Come up with a greedy algorithm.
c). Illustrate your proposed greedy algorithm in b) using the following toy example:

i 1 2 3 4 5 6
si 3 5 2 4 1 6
fi 4 9 5 6 2 7
pi 1 5 3 4 3 2

d). Prove the correctness or incorrectness of the proposed algorithm in b).


e). Provide the computational time complexity of your greedy algorithm provided in b).

Q 4.24. Suppose that there are n unique topics that a student would like to
learn. There are m courses, where each course covers only a subset of the topics. A
certain topic may be covered by multiple courses. The goal of the student is to take the minimum
number of courses such that all topics are covered. This problem is called the minimum set
cover problem, or simply the set cover problem, or SCV in short. Given a universe set U of n
distinct elements and m subsets which contain only some of the elements, one would
like to select the minimum number of subsets such that all elements are covered.
$U = \bigcup_{i=1}^{m} S_i = \{a, b, c, d, e, f\}$

(a) Inputs:         (b) Set covers:       (c) Invalid set covers:
S1 = {a, b, c, f}   S1 ∪ S2 ∪ S3 = U      S1 ∪ S4 = {a, b, c, f} ≠ U
S2 = {a, c, d}      S1 ∪ S3 ∪ S5 = U      S1 ∪ S2 ∪ S4 = {a, b, c, d, f} ≠ U
S3 = {b, c, e}      S2 ∪ S3 ∪ S4 = U      S1 ∪ S3 ∪ S4 = {a, b, c, e, f} ≠ U
S4 = {a, f}         S3 ∪ S4 ∪ S5 = U      S2 ∪ S4 ∪ S5 = {a, c, d, e, f} ≠ U
S5 = {d, e}         S1 ∪ S5 = U           S2 ∪ S3 ∪ S5 = {a, b, c, d, e} ≠ U

Figure 4.40: Set cover example where n = 6 and m = 5

Figure 4.40 (a) shows the inputs for SCV and Figure 4.40 (b) and (c) provide some valid
and invalid sub-collections of subsets, respectively. The minimum set cover size is two.

a). Formulate the problem.


b). Come up with a greedy algorithm.
c). Illustrate your proposed greedy algorithm in b) using the toy example in Figure 4.40
(a).
d). Prove the correctness or incorrectness of the proposed algorithm in b).
e). Provide the computational time complexity of your greedy algorithm provided in b).

Q 4.25. Consider the minimum weighted set cover problem, or simply wSCV, which is a
modified problem of SCV considered earlier as an exercise in Q 4.24. In the course and topic
example in Q 4.24, the SCV was to minimize the number of courses. Suppose that costs are
associated with courses and one would like to minimize the total cost to cover all topics. If
the prices in Figure 4.40 are S1 = 5, S2 = 3, S3 = 2, S4 = 1, and S5 = 2, then the solution
S1 ∪ S5 would have a total cost of 7 while the solution S2 ∪ S3 ∪ S4 would have a total
cost of 6.

a). Formulate the wSCV problem.

b). Come up with a greedy algorithm.

c). Illustrate your proposed greedy algorithm in b) using the toy example in Figure 4.40
(a).

d). Prove the correctness or incorrectness of the proposed algorithm in b).

e). Provide the computational time complexity of your greedy algorithm provided in b).

Q 4.26. An independent set of a graph is a subset of vertices such that no two vertices
in the subset are joined by an edge of the graph.
[Toy example graph on vertices v1 ∼ v9; drawing omitted.]

The above toy example has {v2 , v3 , v5 , v7 , v9 } as an independent set. Finding a maximum
independent set is called the independent set problem, or simply IDS.

a). Formulate the problem.

b). Come up with a greedy algorithm.

c). Illustrate your proposed greedy algorithm in b) using the above toy example.

d). Prove the correctness or incorrectness of the proposed algorithm in b).

Q 4.27. Consider the minimum spanning tree Problem 4.14, or simply MST, defined on
page 184 and the following toy example graph:
[Toy example: a weighted graph on vertices v1 ∼ v6 with edge weights between 2 and 5; drawing omitted.]

a). Illustrate the Prim-Jarnik’s Algorithm 4.18 stated on page 184 to find a minimum
spanning tree on the above toy example.

b). Illustrate the Kruskal’s Algorithm 4.19 stated on page 187 to find a minimum spanning
tree on the above toy example.

c). Prove the correctness or incorrectness of the Kruskal’s Algorithm 4.19 stated on
page 187.

d). Formulate the maximum spanning tree problem, which is to find a spanning tree such
that the sum of all weights are maximized.

e). Devise a greedy algorithm similar to the Prim-Jarnik’s Algorithm 4.18.

f). Illustrate your proposed greedy algorithm in e) to find a maximum spanning tree on
the above toy example.

g). Prove the correctness or incorrectness of the proposed algorithm in e).

h). Devise a greedy algorithm similar to the Kruskal’s Algorithm 4.19.

i). Illustrate your proposed greedy algorithm in h) to find a maximum spanning tree on
the above toy example.

j). Prove the correctness or incorrectness of the proposed algorithm in h).

Q 4.28. Consider the traveling salesman Problem 4.16, or simply TSP, defined on page 190
and the following toy example:
     v1 v2 v3 v4 v5 v6
v1 [  0  4  2  7  1  3 ]
v2 [  4  0  4  3  7  5 ]
v3 [  2  4  0  4  8  6 ]
v4 [  7  3  4  0  5  8 ]
v5 [  1  7  8  5  0  7 ]
v6 [  3  5  6  8  7  0 ]

(a) Cost matrix   (b) Complete graph [drawing omitted]

a). Illustrate the nearest neighbor Algorithm 4.21 stated on page 191.

b). Illustrate the merge Algorithm 4.22 stated on page 191.

c). Notice that the nearest neighbor Algorithm 4.21 expands from one end while the string
of cities can grow from both ends. Come up with a greedy algorithm by modifying
the nearest neighbor Algorithm 4.21 so that it can grow from both ends.

d). Illustrate the two end nearest neighbor algorithm devised in c).

e). Prove the correctness or incorrectness of the two end nearest neighbor algorithm de-
vised in c).

Q 4.29. Consider the modified version of the traveling salesman Problem 4.16, or simply TSPx.
Instead of the cost matrix in TSP, a point matrix is given for TSPx. Instead of finding a path
with the minimum cost as in TSP, TSPx is to find a path with the maximum points.

a). Formulate the problem.



b). Come up with a greedy algorithm.


c). Illustrate your proposed greedy algorithm in b) using the following toy example.
     v1 v2 v3 v4 v5
v1 [  0  7  3  7  8 ]
v2 [  7  0  7  3  2 ]
v3 [  3  7  0  1  8 ]
v4 [  7  3  1  0  4 ]
v5 [  8  2  8  4  0 ]
[point matrix with its complete graph drawing omitted]

d). Illustrate your proposed greedy algorithm in b) using the following toy example.
     v1 v2 v3 v4 v5
v1 [  0  7  3  2  8 ]
v2 [  7  0  7  3  7 ]
v3 [  3  7  0  1  8 ]
v4 [  2  3  1  0  7 ]
v5 [  8  7  8  7  0 ]
[point matrix with its complete graph drawing omitted]

e). Prove the correctness or incorrectness of the proposed algorithm in b).

Table 4.2: Classical postage stamp optimization problems.

Postage stamp equality maximization (PSEmax), page 201:
  maximize $\sum_{i=1}^{n} x_i$ subject to $\sum_{i=1}^{n} a_i x_i = m$, where $0 \le x_i$ integer
Postage stamp equality minimization (PSEmin), page 159:
  minimize $\sum_{i=1}^{n} x_i$ subject to $\sum_{i=1}^{n} a_i x_i = m$, where $0 \le x_i$ integer
Postage stamp maximization (PSmax), page 201:
  maximize $\sum_{i=1}^{n} x_i$ subject to $\sum_{i=1}^{n} a_i x_i \le m$, where $0 \le x_i$ integer
Postage stamp minimization (PSmin), page 161:
  minimize $\sum_{i=1}^{n} x_i$ subject to $\sum_{i=1}^{n} a_i x_i \ge m$, where $0 \le x_i$ integer

Table 4.3: Classical knapsack optimization problems and their variations.

01-knapsack (ZOK), page 163:
  maximize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \le m$, where $x_i = 0$ or $1$
01-knapsack minimization (ZOK-min), page 203:
  minimize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \ge m$, where $x_i = 0$ or $1$
01-knapsack equality (ZOKE), page 206:
  maximize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i = m$, where $x_i = 0$ or $1$
01-knapsack equality minimization (ZOKE-min), page 207:
  minimize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i = m$, where $x_i = 0$ or $1$
Fractional knapsack (FKP), page 165:
  maximize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \le m$, where $0 \le x_i \le 1$
Fractional knapsack minimization (FKP-min), page 204:
  minimize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \ge m$, where $0 \le x_i \le 1$
Unbounded knapsack (UKP), page 167:
  maximize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \le m$, where $0 \le x_i$ integer
Unbounded knapsack minimization (UKP-min), page 204:
  minimize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \ge m$, where $0 \le x_i$ integer
Unbounded equality knapsack (UKE), page 168:
  maximize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i = m$, where $0 \le x_i$ integer
Unbounded equality knapsack minimization (UKE-min), page 205:
  minimize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i = m$, where $0 \le x_i$ integer
Rod cutting maximization (RCP), page 168:
  maximize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} i\,x_i = m$, where $0 \le x_i$ integer
Rod cutting minimization (RCmin), page 168:
  minimize $\sum_{i=1}^{n} p_i x_i$ subject to $\sum_{i=1}^{n} i\,x_i = m$, where $0 \le x_i$ integer

Table 4.4: Subset arithmetic problems.

Subset sum problems:
Subset sum equality (SSE), page 305:
  find X subject to $\sum_{i=1}^{n} a_i x_i = m$, where $x_i = 0$ or $1$
Unbounded subset sum equality (USSE), page 225:
  find X subject to $\sum_{i=1}^{n} a_i x_i = m$, where $0 \le x_i$ integer
Subset sum maximization (SSM), page 202:
  maximize $\sum_{i=1}^{n} a_i x_i$ subject to $\sum_{i=1}^{n} a_i x_i \le m$, where $x_i = 0$ or $1$
Unbounded subset sum maximization (USSM), page 205:
  maximize $\sum_{i=1}^{n} a_i x_i$ subject to $\sum_{i=1}^{n} a_i x_i \le m$, where $0 \le x_i$ integer
Subset sum minimization (SSmin), page 202:
  minimize $\sum_{i=1}^{n} a_i x_i$ subject to $\sum_{i=1}^{n} a_i x_i \ge m$, where $x_i = 0$ or $1$
Unbounded subset sum minimization (USS-min), page 227:
  minimize $\sum_{i=1}^{n} a_i x_i$ subject to $\sum_{i=1}^{n} a_i x_i \ge m$, where $0 \le x_i$ integer

Subset product problems:
Subset product equality (SPEp), page 344:
  find X subject to $\prod_{i=1}^{n} a_i^{x_i} = m$, where $x_i = 0$ or $1$
Unbounded subset product equality (USPE), page 307:
  find X subject to $\prod_{i=1}^{n} a_i^{x_i} = m$, where $0 \le x_i$ integer
Subset product maximization (SPMp), page 345:
  maximize $\prod_{i=1}^{n} a_i^{x_i}$ subject to $\prod_{i=1}^{n} a_i^{x_i} \le m$, where $x_i = 0$ or $1$
Unbounded subset product maximization (USPM), page 690:
  maximize $\prod_{i=1}^{n} a_i^{x_i}$ subject to $\prod_{i=1}^{n} a_i^{x_i} \le m$, where $0 \le x_i$ integer
Subset product minimization (SPminp), page 345:
  minimize $\prod_{i=1}^{n} a_i^{x_i}$ subject to $\prod_{i=1}^{n} a_i^{x_i} \ge m$, where $x_i = 0$ or $1$
Unbounded subset product minimization (USP-min), page 690:
  minimize $\prod_{i=1}^{n} a_i^{x_i}$ subject to $\prod_{i=1}^{n} a_i^{x_i} \ge m$, where $0 \le x_i$ integer
Chapter 5

Tabulation - Strong Induction

[(a) The Plimpton 322 Babylonian clay tablet. (b) Pages from Henry Briggs' 1617 Logarithmorum Chilias Prima. Both images are in the public domain.]

Figure 5.1: Look-up tables for various functions

Most people answer 7×6 immediately instead of adding 7 six times. This is because most
people have memorized the multiplication table, which contains the precomputed outputs.
Pre-computing and storing values is the key idea of the tabulation method. Increasing space
often results in saving time in computation.
Before the advent of computers and calculators, the tabulation method, i.e., the use
of look-up tables, was practiced to speed up computing complex functions, such as those in
trigonometry, logarithms, and statistical density functions [28]. Figure 5.1 (a) is the Plimp-
ton 322 Babylonian clay tablet containing 15 Pythagorean triples, which dates to approx-
imately 1800 B.C. [128]. Despite the different interpretations of it as in [144], one of the
main purposes of the tablet is for look-up. One of the most popular look-up tables that
had appeared in most early mathematics textbooks is the logarithm table, as shown in Fig-
ure 5.1 (b). John Napier introduced the logarithms in 1614 [127]. Moreover, various tables

John Napier (1550-1617) was a Scottish mathematician. He is best known as the


inventor of logarithms. One of his major contributions is Mirifici Logarithmorum Canonis
Descriptio (1614), which provides ninety pages of tables of numbers related to natural logarithms.
© Portrait is in public domain.


of precomputed statistical values are contained even in contemporary statistics textbooks
for the sake of fast computation.
In computer science, a look-up table is an array that replaces run-time computation with
a simpler array cell access operation, which takes constant time. Retrieving a value from
memory is significantly faster than undergoing an expensive computation or input/output
operation. The tables are pre-computed and stored in static program storage or even stored
in hardware in application-specific platforms. A drawback is that extra space for a look-up
table is necessary to save computational time.
Utilization of a look-up table in algorithm design is known as dynamic programming,
which was first introduced by Richard Bellman in 1950. It is a general algorithm design
technique for solving problems defined by or formulated as recurrences with overlapping
sub-instances. There are two distinct approaches to dynamic programming: top-down and
bottom-up [131]. Here, the bottom-up approach of dynamic programming, which solves
small problems first and builds up to larger ones, is interpreted as strong inductive program-
ming. The top-down approach, also known as memoization, can be considered backward
solving combined with a time-space trade-off paradigm. In this chapter, strong inductive
programming, which combines the inductive programming and tabulation methods, is in-
troduced to solve simple and complex problems, while most other current textbooks, such
as [71, 42], describe it as dynamic programming primarily for optimization problems. Since
the term dynamic programming is used too broadly, its concepts are divided and spread out
over this chapter, Chapter 6, and Chapter 10.
Objectives of this chapter include understanding the following concepts: proof by strong
induction, higher order recurrence relations, strong inductive programming, and memoization.
First, readers must be able to prove theorems by strong inductions. Second, students must
be able to derive higher order recurrence relations from various computational problems.
Next, one must be able to design an algorithm based on strong inductive programming.
Also, students must be able to design an algorithm based on the memoization technique.
Moreover, students must be able to analyze and compare computational time and space
complexities of strong inductive programming, memoization, and naı̈ve recursion versions.
Finally, problems on directed acyclic graphs are introduced, where strong inductive pro-
gramming can be naturally and visually applied.
Astute readers might notice that further space efficient versions are possible for most
algorithms present in this chapter. This discussion shall be intensively dealt with in Chap-
ter 7 using certain data structures. Here, mastering problem solving skills using the full
tabulation method is a primary goal.

5.1 Strong Inductive Programming


In strong induction, also known as complete induction, the inductive step in eqn (5.1)
requires assuming that P(j) is true for all j ∈ {1, · · · , k} instead of solely assuming that P(k) is

Richard Ernest Bellman (1920-1984) was an American mathematician born in


New York City. He invented the dynamic programming technique to solve optimization
problems in 1950 and presented it in the book 'Applied Dynamic Programming' [19].

© Photo Credit: unknown ownership, from National Academy of Engineering

true for the regular or ordinary induction’s inductive step, P (k) → P (k + 1) [146].

[P (1) ∧ P (2) ∧ · · · ∧ P (k)] → P (k + 1) (5.1)

One of the easiest ways to grasp the concept of strong induction is an incorrect proof
version of ordinary induction. Consider the following incorrect proof by ordinary induction
for Theorem 5.1:

Theorem 5.1. The following two first order linear recurrences are equivalent: T1(n) = T2(n).

$$T_1(n) = \begin{cases} 2T_1(n-1) + c & \text{if } n > 1 \\ 1 & \text{if } n = 1 \end{cases} \quad (5.2)$$

$$T_2(n) = \begin{cases} T_2(n-1) + (c+1)2^{n-2} & \text{if } n > 1 \\ 1 & \text{if } n = 1 \end{cases} \quad (5.3)$$

Proof. (by induction) Basis: when n = 1, both recursive formulas return 1;


(T1 (1) = 1) = (T2 (1) = 1)
Inductive step: Assuming that T1 (n) = T2 (n) is true for n > 1, show T1 (n + 1) = T2 (n + 1)
is also true.

T1(n + 1) = 2T1(n) + c                            by eqn (5.2)
          = 2T2(n) + c                            by assumption: T1(n) = T2(n)
          = 2(T2(n − 1) + (c + 1)2^{n−2}) + c     by eqn (5.3)
          = 2T2(n − 1) + (c + 1)2^{n−1} + c
          = 2T1(n − 1) + (c + 1)2^{n−1} + c       by assumption? T1(n − 1) = T2(n − 1)   (5.4)
          = T1(n) + (c + 1)2^{n−1}                by eqn (5.2)
          = T2(n) + (c + 1)2^{n−1}                by assumption: T1(n) = T2(n)
          = T2(n + 1)                             by eqn (5.3)

∴ T1 (n) = T2 (n). 

Since we only assumed T1(n) = T2(n) but not T1(n − 1) = T2(n − 1), the proof step in
eqn (5.4) is invalid. It would be correct if we assumed that T1(k) = T2(k) is true for all
k ∈ {1, · · · , n}.
In Chapter 2, ordinary induction, also known as weak induction as opposed to strong
induction, plays a central role in designing algorithms. There is no need to store the
solutions of all sub-problems; only the immediate previous solution is kept in a variable. There
are, however, problems that require storing all sub-problem solutions in an array in order to solve
them by inductive programming. This section considers problems that are formulated
by a higher order recurrence relation. It also introduces a strong inductive programming
technique that stores solutions of all sub-problems in a table in order to solve the original
problem.
The strong inductive programming technique combines the inductive programming and
tabulation methods. The heart of strong inductive programming based algorithms is the
table of solutions for all sub-problems. It starts with declaring an empty table of size n

and solves the basis case. Then, it uses inductive programming to fill out the rest of the
table sequentially. The strong induction naturally proves the correctness of most algorithms
presented henceforth in the remaining chapter. It should be noted that assuming P (n)
is true to prove P (n + 1) in proof by induction corresponds to the use of the solution of
P (n − 1) to solve P (n) in inductive programming. Likewise, assuming that all ∀i=1∼n P (i)
are true to prove P (n + 1) in proof by strong induction corresponds to storing all solutions
of ∀i=1∼n−1 P (i) in a table to solve P (n) in strong inductive programming. The following
generic template may apply to most strong inductive programming algorithms presented in
this chapter.

strong inductive programming template

StrIndProg(n, · · · )
  Declare a table T_{n0∼n}          (declare an empty table)
  T[n0] = basis                     (basis case, P(n0))
  for i = n0 + 1 ∼ n
    T[i] = f(T[1 ∼ i − 1])          (strong inductive step)
  return T[n]
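A direct Python rendering of this template might look as follows; the arguments basis and f are placeholders, not part of the text's notation:

```python
# A generic sketch of the strong inductive programming template.  `basis` is
# the solution of the base case n0, and `f(T, i)` combines any of the stored
# sub-solutions T[n0..i-1] into the solution of sub-problem i.

def str_ind_prog(n, n0, basis, f):
    T = {n0: basis}                  # the table of all sub-problem solutions
    for i in range(n0 + 1, n + 1):
        T[i] = f(T, i)               # strong inductive step
    return T[n]

# Toy usage: each entry is the sum of all previous entries, starting at T[1] = 1.
print(str_ind_prog(5, 1, 1, lambda T, i: sum(T.values())))   # 8
```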

5.1.1 Prime Factors


Euclid’s element proposition VII.32 states that every number is either prime or is mea-
sured by some prime number [62, p32]. In other words, any number is either a prime or
a composite which has a prime divisor. This statement can be written as the following
theorem, which can easily be proven by a strong induction but is difficult by an ordinary
induction [146, p250].

Theorem 5.2. Any integer n > 1 is the product of primes.

Proof. (by strong induction) Let P (n) be the proposition.


Basis case: When n = 2, 2 is the product of one prime: itself.
Inductive step: Assume that P(j) is true for all positive integers j where 2 ≤ j ≤ k. Show
P(k+1) is also true. There are two cases. If k+1 is a prime, i.e., ∀d ∈ (2 ≤ d ≤ k), d ∤ (k+1),
then P(k + 1) is true because k + 1 is the product of one prime: itself. If k + 1 is composite,
i.e., ∃d ∈ (2 ≤ d ≤ k), d | (k + 1), then k + 1 is the product of two integers: k + 1 = d × (k+1)/d.
Since both d and (k+1)/d are assumed to be products of primes, P(k + 1) is true. ∎

The proof by strong induction for Theorem 5.2 can be used as an algorithm to solve the
number of prime factors, or simply NPF, Problem 5.1.

Problem 5.1. Number of prime factors of n (NPF)

Input: a positive integer n > 1
Output: the number of prime factors of n, $\sum_{i=1}^{k} e_i$,
where $n = \prod_{i=1}^{k} p_i^{e_i}$ and each $p_i$ is a prime number.

For example, np(140) = 4 since 140 = 2 × 2 × 5 × 7. Before devising a strong inductive


programming based algorithm, it is imperative to derive a recurrence relation. Figure 5.2
provides insights for a recurrence relation, which is given as follows:

[Figure 5.2 panels (a)–(c): recursion trees showing np(60) = np(2) + np(30) = 4, np(140) = np(2) + np(70) = 4, and np(288) = np(18) + np(16) = 7.]

n      2 3 4 5 6 7 8 9 10 11 · · · 1008 · · · 2016 2017 · · ·
np(n)  1 1 2 1 2 1 3 2  2  1 · · ·    7 · · ·    8    1 · · ·
(d) Algorithm 5.1 illustration

Figure 5.2: Number of prime factors of n.

$$np(n) = \begin{cases} 1 & \text{if } \forall d \in \{2, \ldots, \lfloor\sqrt{n}\rfloor\},\ d \nmid n \\ np(d) + np(n/d) & \text{if } \exists d \in \{2, \ldots, \lfloor\sqrt{n}\rfloor\},\ d \mid n \end{cases} \quad (5.5)$$

If np(n) = 1, n is a prime number. Note that 2 | 140 and 70 | 140 since 2 × 70 = 140. If
np(2) = 1 and np(70) = 3 are already computed and stored in a table, np(140) can easily be
computed by adding np(2) and np(70). This is how the recurrence above arises.
Just like forward solving in the inductive programming in Chapter 2, the table can be filled
with the solution starting from the basis case and sequentially toward n, as illustrated in
Figure 5.2 (d). An algorithm based on the strong inductive programming paradigm can be
stated as follows:

Algorithm 5.1. Dynamic prime factors

cardinality prime factors(n)
Declare a table T of size n . . . . . . . . . . . . . . . . . . . . . . . . 1
T [2] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 3 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
   j = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
   while j ∤ i and j ≤ ⌊√i⌋ . . . . . . . . . . . . . . . . . . . . . . . . 5
      j++ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
   if j ≤ ⌊√i⌋ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
      T [i] = T [j] + T [i/j] . . . . . . . . . . . . . . . . . . . . . . . . . 8
   else T [i] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

The computational space complexity of Algorithm 5.1 is clearly Θ(n) since the solutions of
all sub-problems are stored in a table. Although a tighter bound may be possible, a safe
computational time complexity of Algorithm 5.1 is $O(n\sqrt{n}\log n)$ because checking whether
j divides i takes $O(\log i)$ and $\sum_{i=2}^{n} \sqrt{i}\log i = \Theta(n\sqrt{n}\log n)$.
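For concreteness, here is a hedged Python transcription of Algorithm 5.1 (the names are illustrative):

```python
from math import isqrt

# A Python sketch of Algorithm 5.1: T[i] stores np(i), the number of prime
# factors of i counted with multiplicity, filled from the basis upward.

def cardinality_prime_factors(n):
    T = [0] * (n + 1)
    T[2] = 1                                  # basis: 2 is prime
    for i in range(3, n + 1):
        j = 2
        while i % j != 0 and j <= isqrt(i):   # search for a divisor of i
            j += 1
        if j <= isqrt(i):                     # i is composite: i = j * (i // j)
            T[i] = T[j] + T[i // j]           # reuse the stored sub-solutions
        else:
            T[i] = 1                          # i is prime
    return T[n]

print(cardinality_prime_factors(140))         # 4, since 140 = 2 x 2 x 5 x 7
```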

5.2 Stamp Problems


Various stamp related problems that can be tackled by strong inductive programming
are presented in this section.

5.2.1 3-5 Stamp Problem


For another simple and popular example of strong induction, consider the following
Theorem 5.3:

Theorem 5.3. Every postage amount greater than 15 can be made using both 3-cent
and 5-cent stamps; i.e., 3x + 5y = n for every n > 15, where x and y are positive integers.

Proof. (by strong induction)


Basis case: When n = 16, 17, and 18, (x, y) = (2, 2), (4, 1), and (1, 3), respectively.
Inductive step: Let k ≥ 18. Assume that there exists a pair of positive integers (x, y) to
make postage of j cents for every j where 15 < j ≤ k. Show that there exists a pair of positive
integers (x′, y′) to make k + 1 cents. Clearly, there exists a pair of positive integers (x, y) for the
postage of k − 2 cents by the assumption, since 15 < k − 2 ≤ k. Then (x′, y′) = (x + 1, y)
makes k + 1 cents by adding one more 3-cent stamp. ∎

The proof by strong induction for Theorem 5.3 can be used as an algorithm to solve
Problem 5.2, which is defined as follows:

Problem 5.2. 3-5 Stamp problem


Input: postage amount n ∈ Z+ and n > 15
Output: a pair of positive integer, (x, y) such that 3x + 5y = n

(n = 24) = 3x(24) + 5y(24)  =⇒  (n = 27) = 3x(27) + 5y(27)
x(24) = 3, y(24) = 3        =⇒  x(27) = x(24) + 1 = 4, y(27) = y(24) = 3
(a) Backward thinking

amount n 16 17 18 19 20 21 22 23 24 25 26 27 · · ·
3-cent x  2  4  1  3  5  2  4  6  3  5  7  4 · · ·
5-cent y  2  1  3  2  1  3  2  1  3  2  1  3 · · ·
(b) Illustration of Algorithm 5.2

Figure 5.3: 3-5 stamp problem example

Strong inductive programming directly utilizes the proof by strong induction for Theorem 5.3.
First, higher order recurrences for x and y can be derived as follows:

$$x(n) = \begin{cases} x(n-3) + 1 & \text{if } n > 18 \\ 1 & \text{if } n = 18 \\ 4 & \text{if } n = 17 \\ 2 & \text{if } n = 16 \end{cases} \qquad y(n) = \begin{cases} y(n-3) & \text{if } n > 18 \\ 3 & \text{if } n = 18 \\ 1 & \text{if } n = 17 \\ 2 & \text{if } n = 16 \end{cases} \quad (5.6)$$

Using the higher order recurrences in eqn. (5.6), the nth postage amount can be solved
forward by storing the smaller sub-problems’ solutions in a table.
An algorithm can be stated as follows:
Algorithm 5.2. Dynamic 3-5 cent stamp

stamp3-5(n)
Declare a table T of size 2 × n whose elements are 0’s . . . 1
T [16][1] = 2, T [16][2] = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [17][1] = 4, T [17][2] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [18][1] = 1, T [18][2] = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 19 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
T [i][1] = T [i − 3][1] + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
T [i][2] = T [i − 3][2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
return (T [n][1], T [n][2]) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

The computational time complexity of Algorithm 5.2 is Θ(n). The computational space
complexity of Algorithm 5.2 is Θ(n) since solutions of all sub-problems are stored in a
table. It should be noted that the space required can be reduced to Θ(k) for most problems
presented in this chapter when a queue data structure is used, which will be discussed later
in Chapter 7. In this chapter, the full table shall be used in the algorithm to emphasize
the tabulation method and sharpen the skills of utilizing the strong inductive programming
paradigm.
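A compact Python version of Algorithm 5.2 is sketched below (a tuple-valued table replaces the 2 × n table of the pseudo code; the names are illustrative):

```python
# A Python sketch of Algorithm 5.2: T[i] = (x, y) with 3x + 5y = i.

def stamp_3_5(n):
    T = {16: (2, 2), 17: (4, 1), 18: (1, 3)}   # basis cases of eqn (5.6)
    for i in range(19, n + 1):
        x, y = T[i - 3]
        T[i] = (x + 1, y)                      # add one more 3-cent stamp
    return T[n]

print(stamp_3_5(27))                           # (4, 3): 3*4 + 5*3 = 27
```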

5.2.2 Postage stamp minimization problems


Consider the postage stamp equality minimization Problem 4.2, or simply PSEmin,
defined on page 159, where the greedy Algorithm 4.5 fails to find a minimum number of
stamps to meet the amount. Let n be the amount needed and k be the number of stamps
with different values. A sample input and output are given in Figure 5.4 (a) and (b),
respectively.
In order to utilize the strong inductive programming paradigm, the first suggested step is
to fantasize a solution, as depicted in Figure 5.4 (c). Imagine an envelope with PSEmin(n),
a minimum number of stamps, for the total amount n. Suppose one of the stamps, of amount
ax , is removed. Then the remaining postage stamps must realize PSEmin(n − ax ), the
minimum number of stamps for the total amount n − ax . If the remaining postage were not
minimal, it would contradict the assumption that PSEmin(n) = PSEmin(n − ax ) + 1
is the minimum.
Since x can be any one of k stamps, it is necessary to try them all and pick a minimum.
As depicted in Figure 5.4 (c), the solution of PSEmin(67) can be found if all k sub-solutions
of PSEmin(23), PSEmin(34), PSEmin(43), and PSEmin(66) are known. One can pick the
minimum of these k sub-solutions and add one more respective stamp to find the solution

(a) Input: A1∼k = {1, 24, 33, 44} and n = 67  =⇒  (b) Output: PSEmin(67) = 3

(c) Fantasizing (strong assumption) stage

amt n       0 1 2 3 · · · 23 24 · · · 33 34 · · · 43 44 · · · 66 67
PSEmin(n)   0 1 2 3 · · · 23  1 · · ·  1  2 · · · 11  1 · · ·  2  3
(d) Table of PSEmin(0) ∼ PSEmin(67): Algorithm 5.3 illustration

Figure 5.4: Strong inductive programming for the PSEmin problem

of PSEmin(67). The function PSEmin(n) associated with A1∼k has the following higher
order recurrence in eqn (5.7). The vital input argument, A1∼k , is omitted from the function
arguments for simplicity's sake.

$$\mathrm{PSEmin}(n) = \begin{cases} \infty & \text{if } n < 0 \\ 0 & \text{if } n = 0 \\ \min\limits_{1 \le i \le k}(\mathrm{PSEmin}(n - a_i) + 1) & \text{if } n > 0 \end{cases} \quad (5.7)$$

To obtain the value of PSEmin(x), the values of PSEmin(x − 44), PSEmin(x − 33),
PSEmin(x − 24), and PSEmin(x − 1) are necessary. If the solutions for these (k = 4) sub-problems
are previously computed and stored in a table, PSEmin(n) can be trivially solved. Just like the
inductive programming in Chapter 2, we can solve it forward, filling up the table. This
process is demonstrated in Figure 5.4 (d) to compute PSEmin(67). This is exactly the
domino effect process of strong induction. Now an algorithm using the strong inductive
programming paradigm can be written.

Algorithm 5.3. Dynamic minimum stamps

find-min-stamps(n, A1∼k )
Declare a table T0∼n whose elements are ∞ . . . . . . . . . . . . . .1
T [0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if i − aj ≥ 0 and T [i − aj ] + 1 < T [i] . . . . . . . . . . . . . . . . . 5
T [i] = T [i − aj ] + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

It should be noted that if the algorithm returns ∞, it means impossible. The computational
time complexity of Algorithm 5.3 is Θ(kn). The space complexity of Algorithm 5.3 is Θ(n)
since solutions of all sub-problems are stored in a table.
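The same logic in runnable form, as a hedged Python sketch of Algorithm 5.3:

```python
# A Python sketch of Algorithm 5.3.  float('inf') encodes "impossible".

def find_min_stamps(n, A):
    T = [0] + [float('inf')] * n
    for i in range(1, n + 1):
        for a in A:
            if i - a >= 0 and T[i - a] + 1 < T[i]:
                T[i] = T[i - a] + 1            # use one more stamp of value a
    return T[n]

print(find_min_stamps(67, [1, 24, 33, 44]))    # 3, e.g. 33 + 33 + 1
```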
Consider the postage stamp minimization Problem 4.3, or simply PSmin, defined on
page 161. Recall that the constraint "exactly equal to" is relaxed so that the total stamp
amount must be greater than or equal to the amount needed. Since the greedy algorithm in
eqn (4.3) successfully finds a minimum number of stamps to meet the amount, a strong
inductive programming algorithm may not be necessary. Nevertheless, it is discussed here
because it is helpful for proving the correctness of the greedy algorithm in eqn (4.3) using
strong induction.
The higher order recurrence in eqn (5.8) can be derived:

$$\mathrm{PSmin}(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } 0 < n \le \max(A_{1\sim k}) \\ \min\limits_{1 \le i \le k}(\mathrm{PSmin}(n - a_i) + 1) & \text{if } n > \max(A_{1\sim k}) \end{cases} \quad (5.8)$$

Only the basis case differs from that of PSEmin(n) in eqn (5.7). As depicted in Figure 5.5
(c), the solution of PSmin(67) can be found if all k sub-solutions of PSmin(23), PSmin(34),
PSmin(43), and PSmin(66) are known. One can pick the minimum of these k sub-solutions
and add one more respective stamp to find the solution of PSmin(67).
An algorithm using the strong inductive programming paradigm similar to Algorithm 5.3
can be written.

Algorithm 5.4. Dynamic PSmin

find-min-stamps(n, A1∼k )
Declare a table T0∼n whose elements are ∞ . . . . . . . . . . . . . . 1
T [0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
   for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
      if i ≤ aj , T [i] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
      else if T [i − aj ] + 1 < T [i], T [i] = T [i − aj ] + 1 . . . 6
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Algorithm 5.4 is demonstrated in Figure 5.5 (d). The space complexity of Algorithm 5.4
is Θ(n) since solutions of all sub-problems are stored in a table. The computational time
complexity of Algorithm 5.4 is Θ(kn) while that of the greedy algorithm in eqn (4.3) is only
Θ(k).
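A Python sketch of Algorithm 5.4, differing from the previous sketch only in the basis case of eqn (5.8):

```python
# A Python sketch of Algorithm 5.4: any amount i <= max(A) needs one stamp.

def find_min_stamps_at_least(n, A):
    T = [0] + [float('inf')] * n
    for i in range(1, n + 1):
        for a in A:
            if i <= a:
                T[i] = 1                       # a single stamp already covers i
            elif T[i - a] + 1 < T[i]:
                T[i] = T[i - a] + 1
    return T[n]

print(find_min_stamps_at_least(67, [1, 24, 33, 44]))   # 2 = ceil(67 / 44)
```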

(a) Input: A1∼k = {1, 24, 33, 44} and n = 67  =⇒  (b) Output: PSmin(67) = 2

(c) Fantasizing (strong assumption) stage

amt n      0 1 2 · · · 44 45 · · · 67 · · · 88 89 · · · 131 132
PSmin(n)   0 1 1 · · ·  1  2 · · ·  2 · · ·  2  3 · · ·   3   4
(d) Table of PSmin(0) ∼ PSmin(132): Algorithm 5.4 illustration

Figure 5.5: Strong inductive programming for the PSmin problem

The correctness of Algorithm 5.4 can be shown via the correctness of the greedy algorithm
in eqn (4.3) in Theorem 4.3 on page 161.

Corollary 5.1. The recurrence relation in eqn (5.8) is equivalent to $\left\lceil \dfrac{n}{\max(A_{1\sim k})} \right\rceil$.

Proof. (by strong induction) Let m = max(A1∼k ).
Basis case: When n = 0, (PSmin(0) = 0 by eqn (5.8)) = ⌈0/m⌉ = 0.
For ∀x ∈ {1, · · · , m}, (PSmin(x) = 1 by eqn (5.8)) = ⌈x/m⌉ = 1.
Inductive step: Assume that for all j where m < j ≤ n, PSmin(j) = ⌈j/m⌉. Show that
PSmin(n + 1) = ⌈(n + 1)/m⌉.

$$\mathrm{PSmin}(n+1) = \min_{1 \le i \le k}(\mathrm{PSmin}(n - a_i + 1) + 1) = \min_{1 \le i \le k}\left(\left\lceil \frac{n + 1 - a_i}{m} \right\rceil + 1\right) = \left\lceil \frac{n + 1 - m}{m} \right\rceil + 1 = \left\lceil \frac{n + 1}{m} \right\rceil \qquad \blacksquare$$

5.2.3 Unbounded Subset Sum Equality


Suppose that nuggets are sold in boxes of 6, 9, and 20 only. If one needs 21 nuggets, one
can purchase 2 boxes of six and 1 box of nine. However, if one needs 13 pieces, there is no
way to purchase exactly 13. A McNugget number is a positive integer that can be obtained
by adding together orders of McDonald's Chicken McNuggets™ [176, p 19, p 233]. Checking
whether a positive integer is a McNugget number or not can be generalized to the unbounded
subset sum equality problem.
Problem 5.3. Unbounded subset sum equality problem

Input: a set A of k positive integers and an integer n ∈ N
Output: M(n) = T if X exists, F otherwise, i.e., find X

subject to Σ_{i=1}^{k} a_i x_i = n        (5.9)
where 0 ≤ x_i integer
The input set of values can be thought of as stamp values instead of nugget quantities in
boxes. Frobenius considered this problem as a part of the Frobenius postage stamp problem.
As depicted in Figure 5.6 (b), M(n) = F if all cases obtained by subtracting each corresponding
stamp amount are false. Otherwise, if any case is true, M(n) = T, as exemplified in Figure 5.6
(a). The following higher order recurrence can be derived:

M(n) = F                           if n < 0
     = T                           if n = 0
     = ∨_{i=1∼k} M(n − a_i)        if n > 0        (5.10)

A table can be filled from the basis toward n using the recurrence relation in eqn (5.10) as
illustrated in Figure 5.6 (d).
Now an algorithm using the strong inductive programming paradigm can be stated as
follows:
Algorithm 5.5. Dynamic unbounded subset sum equality
USSE(n, A1∼k )
Declare a table T_{0∼n} . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T [0] = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [i] = F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for j = 1 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if i ≥ aj , T [i] = T [i] ∨ T [i − aj ] . . . . . . . . . . . . 6
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
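A minimal Python sketch of Algorithm 5.5 follows; the function name usse is illustrative, and the early break on the first true case is a small optional optimization of line 6.

def usse(n, values):
    # Unbounded subset sum equality M(n), eqn (5.10): True iff n is a
    # non-negative integer combination of the given values.
    T = [False] * (n + 1)
    T[0] = True
    for i in range(1, n + 1):
        for a in values:
            if i >= a and T[i - a]:
                T[i] = True
                break
    return T[n]

print(usse(40, [6, 9, 20]))                   # True,  cf. Figure 5.6 (a)
print(usse(43, [6, 9, 20]))                   # False, cf. Figure 5.6 (b)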

Ferdinand Georg Frobenius (1849-1917) was a German mathematician, best


known for the Frobenius postage stamp problem. His major contributions include the
theory of elliptic functions, differential equations, and group theory.
© Photo Credit: MFO, licensed under CC BY-SA 3.0, crop change was made.

(a) M(40) = M(34) ∨ M(31) ∨ M(20) = T    (b) M(43) = M(37) ∨ M(34) ∨ M(23) = F
n 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
M (n) T F F F F F T F F T F F T F F T
n 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
M (n) F F T F T T F F T F T T F T T F
n 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
M (n) T T F T T F T T T T T F T T T T
(d) Table of M (0) ∼ M (47): Algorithm 5.5 illustration

Figure 5.6: Strong inductive programming for the unbounded subset sum equality problem.

The computational time complexity of Algorithm 5.5 is Θ(kn). The computational space
complexity of Algorithm 5.5 is Θ(n), since solutions of all sub-problems are stored in a table.
The Frobenius postage stamp problem, or simply FSP, is a mathematical riddle that asks
for the largest postage value which cannot be placed on an envelope given a multiset
of k stamps, A_{1∼k}, where gcd(a_1, · · · , a_k) = 1. It is also referred to as the Postage Stamp
Problem in [154], the Frobenius Coin Problem, or simply the Frobenius Problem in [4]. The
largest such integer is called the Frobenius number and is denoted as g(a_1, · · · , a_k). For example,
g(6, 9, 20) = 43 and g(3, 5) = 7. Although this problem's complexity shall be discussed later in
Chapter 11, the problem is stated here because it is related to the unbounded subset sum
equality problem, and especially the McNugget number problem.

Problem 5.4. Frobenius postage stamp problem

Input: a set A of k positive integers where gcd(A_{1∼k}) = 1
Output: a positive integer n such that

maximize n
subject to ∀X (Σ_{i=1}^{k} a_i x_i ≠ n)        (5.11)
where 0 ≤ x_i integer and 0 < n integer

Theorem 5.4. Relationship between McNugget and Frobenius numbers.

g(A1∼k ) = n if M (n, A1∼k ) = F and ∀i ∈ {1 ∼ min(A1∼k )}(M (n + i, A1∼k ) = T)



For example, g(6, 9, 20) = 43 and M (43, {6, 9, 20}) = F but the next six consecutive
numbers are all McNugget numbers.
44 = 4 × 6 + 0 × 9 + 1 × 20
45 = 3 × 6 + 3 × 9 + 0 × 20
46 = 1 × 6 + 0 × 9 + 2 × 20
47 = 0 × 6 + 3 × 9 + 1 × 20
48 = 8 × 6 + 0 × 9 + 0 × 20
49 = 0 × 6 + 1 × 9 + 2 × 20
Any integer larger than 49 is a McNugget number because it can be written as a multiple of
six plus one of the above combinations. Hence, an algorithm to find the Frobenius number
is to use Algorithm 5.5 and search for the lowest integer, p, such that the min(A_{1∼k})
consecutive numbers from p to p + min(A_{1∼k}) − 1 are all McNugget numbers; the Frobenius
number is then p − 1, the largest non-McNugget number below that run. Conversely,
every number greater than the Frobenius number is a McNugget number. The following
Corollary 5.2 can be stated:
Corollary 5.2. Relationship between McNugget and Frobenius numbers II.
M (n, A1∼k ) = T if n > g(A1∼k )
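Combining Algorithm 5.5 with the consecutive-run search described above gives a straightforward, if naïve, Frobenius number finder in Python. The sketch below assumes gcd of the values is 1 and min(values) > 1; the search bound max(values)² is a loose cap chosen for simplicity, not a tight bound from the text, and the name frobenius is illustrative.

def frobenius(values):
    # Fill the table of Algorithm 5.5 up to a loose bound, find the first
    # run of min(values) consecutive representable amounts, and return
    # the amount just below that run (the Frobenius number).
    lo, bound = min(values), max(values) ** 2
    T = [False] * (bound + 1)
    T[0] = True
    for i in range(1, bound + 1):
        T[i] = any(i >= a and T[i - a] for a in values)
    run = 0
    for i in range(1, bound + 1):
        run = run + 1 if T[i] else 0
        if run == lo:                 # every amount >= i - lo + 1 is representable
            return i - lo
    return None

print(frobenius([6, 9, 20]))          # 43
print(frobenius([3, 5]))              # 7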

5.2.4 Unbounded Subset Sum Minimization


If the desired amount, n, cannot be made exactly with a given set of stamps, one may
wish to use the smallest total amount that is at least n. For example, since 22¢ or 23¢
cannot be made with A = {6, 9, 20}, one could place 24¢ to meet the minimum requirement,
but wishes not to use more than 24¢. This becomes a minimization problem, formally
defined by slightly changing eqn (5.9) as follows:
Problem 5.5. Unbounded subset sum minimization problem

Input: a set A of n positive integers and an integer m ∈ N
Output: Σ_{i=1}^{n} a_i x_i such that

minimize Σ_{i=1}^{n} a_i x_i
subject to Σ_{i=1}^{n} a_i x_i ≥ m
where 0 ≤ x_i integer
The following higher order recurrence can be derived:
USSmin(m) = 0                                          if m ≤ 0
          = min_{i=1∼n} (USSmin(m − a_i) + a_i)        if m > 0        (5.12)

A table can be filled from the basis toward m using the recurrence relation in eqn (5.12),
as illustrated in Figure 5.7.
Now, an algorithm using the strong inductive programming paradigm can be stated as
follows:

m 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
USSmin(m) 0 6 6 6 6 6 6 9 9 9 12 12 12 15 15
m 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
USSmin(m) 15 18 18 18 20 20 21 24 24 24 26 26 27 29 29
m 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44
USSmin(m) 30 32 32 33 35 35 36 38 38 39 40 41 42 44 44

Figure 5.7: Table of USSmin(0) ∼ USSmin(44): Algorithm 5.6 illustration

Algorithm 5.6. Dynamic unbounded subset sum minimization


USSmin(m, A1∼n )
Declare a table T0∼m originally ∞ . . . . . . . . . . . . . . . . 1
T [0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 to m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
if i < aj and T [i] > aj , . . . . . . . . . . . . . . . . . . . . . . . 5
T [i] = aj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
if i ≥ aj and T [i] > T [i − aj ] + aj , . . . . . . . . . . . . 7
T [i] = T [i − aj ] + aj . . . . . . . . . . . . . . . . . . . . . . . 8
return T [m] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
The computational time complexity of Algorithm 5.6 is Θ(nm). The computational space
complexity of Algorithm 5.6 is Θ(m), since solutions of all sub-problems are stored in a table.
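A minimal Python sketch of Algorithm 5.6 follows, with the two cases of lines 5–8 folded into min() calls; the name uss_min is an illustrative choice.

import math

def uss_min(m, values):
    # Unbounded subset sum minimization, eqn (5.12): smallest total of
    # stamp values that is at least m.
    T = [math.inf] * (m + 1)
    T[0] = 0
    for i in range(1, m + 1):
        for a in values:
            if i < a:                         # a single stamp overshoots i on its own
                T[i] = min(T[i], a)
            else:
                T[i] = min(T[i], T[i - a] + a)
    return T[m]

print([uss_min(m, [6, 9, 20]) for m in range(8)])   # [0, 6, 6, 6, 6, 6, 6, 9]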

5.3 More Optimization Problems


Although the strong inductive programming idea may appear platitudinous by now, it
is worth considering more problems for the sake of practice.

5.3.1 Rod Cutting


Recall the rod cutting Problem 4.7 defined on page 168. There are 7 different ways to
cut a rod of length 5, as shown in Figure 4.13 (b) on page 168. The most profitable way
is the third one, which has one rod of length 3 and another rod of length 2 whose total
profit is 9. Note that Theorem 4.7 indicates that greedy Algorithm 4.9 does not guarantee
an optimal solution.
To use strong inductive programming, first suppose that all sub-problems' solutions
are already computed and stored in a table. Imagine the solution for R(n), which is a
combination of one or more pieces. Let's take one piece of length x out, as depicted in
Figure 5.8 (a). Then, the remaining portion must yield the maximum profit, R(n − x). This
step shall be called the fantasizing stage. Since the length x can be between 1 ∼ k, all k
pieces must be examined, as depicted in Figure 5.8 (a), and one takes the maximum of
R(n − x) + p_x over all x. The following kth order recurrence can be derived:

−∞ if n < 0


R(n) = 0 if n = 0 (5.13)
 max (R(n − x) + px ) if n > 0


1≤x≤k

(a) Fantasizing stage (strong assumption): a rod of length n is split into a final piece of
length x and a remainder of length n − x; for x = 1, 2, 3, 4, 5 the piece profits are p_1 = 1,
p_2 = 4, p_3 = 5, p_4 = 7, p_5 = 8, and the remainders contribute R(n − 1), . . . , R(n − 5).


n | T_{1∼n}        | T[n] = max_x(T[n − x] + p_x)
1 | 1              | T[1] = p_1 = 1 (basis)
2 | 1 4            | T[2] = max(T[1] + p_1 = 2, T[0] + p_2 = 4, · · · , T[−3] + p_5 = −∞)
3 | 1 4 5          | T[3] = max(T[2] + p_1 = 5, T[1] + p_2 = 5, · · · , T[−2] + p_5 = −∞)
4 | 1 4 5 8        | T[4] = max(T[3] + p_1 = 6, T[2] + p_2 = 8, · · · , T[−1] + p_5 = −∞)
5 | 1 4 5 8 9      | T[5] = max(T[4] + p_1 = 9, T[3] + p_2 = 9, · · · , T[0] + p_5 = 8)
6 | 1 4 5 8 9 12   | T[6] = max(T[5] + p_1 = 10, T[4] + p_2 = 12, · · · , T[1] + p_5 = 9)

(b) Algorithm 5.7 illustration where n = 6.


Figure 5.8: Strong inductive programming for the rod cutting problem

A higher order recurrence relation of a given problem plays a central role in designing
a strong inductive programming algorithm. Using the kth order recurrence relation in
eqn (5.13), the problem can be solved forward starting from the basis and storing the solutions
in a table T_{1∼n} sequentially, as illustrated in Figure 5.8 (b) where n = 6. Now an algorithm
using the strong inductive programming paradigm can be written.
Algorithm 5.7. Dynamic rod cutting

Rod cut(n, P )
Declare a table T_{0∼n} whose elements are −∞ initially . . . 1
T [0] = 0, T [1] = p1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if i − j ≥ 0 and T [i − j] + pj > T [i] . . . . . . . . . . . 5
T [i] = T [i − j] + pj . . . . . . . . . . . . . . . . . . . . . . . . 6
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

The computational time complexity of Algorithm 5.7 is Θ(kn) and the computational space
complexity is Θ(n), since solutions of all sub-problems are stored in a table.
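The following Python sketch mirrors Algorithm 5.7 with the inner maximization written as a comprehension; prices is assumed to be a 0-indexed list where prices[x − 1] is p_x, and the name rod_cut is illustrative.

def rod_cut(n, prices):
    # Rod cutting via eqn (5.13); prices[x - 1] is the profit p_x of a
    # piece of length x.
    T = [0] * (n + 1)
    for i in range(1, n + 1):
        T[i] = max(prices[x - 1] + T[i - x]
                   for x in range(1, min(i, len(prices)) + 1))
    return T[n]

print(rod_cut(5, [1, 4, 5, 7, 8]))            # 9, cf. Figure 4.13 (b)
print(rod_cut(6, [1, 4, 5, 7, 8]))            # 12, matching Figure 5.8 (b)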

5.3.2 Unbounded Integer Knapsack Problem


Consider the unbounded integer knapsack Problem 4.6 defined on page 167, where greedy
Algorithm 4.9 fails to find an optimal solution to meet the maximum capacity. Let n be
the maximum capacity available and k be the number of items with different values and
profits. The problem is to score the most points while spending no more than the maximum

missile  | blue | red | yellow
energy E | 1    | 4   | 5
point P  | 1    | 18  | 20
(a) input: n = 9                        (b) output: U(9) = 38

(c) Fantasizing stage (strong assumption): remove the last missile x and the rest must be
optimal, U(n) = U(n − e_x) + p_x; e.g., U(9) = max(U(8) + 1 = 37, U(5) + 18 = 38,
U(4) + 20 = 38) = 38, using U(8) = 36, U(5) = 20, and U(4) = 18.

Max. Energy n    | 1 2 3 4  5  6  7  8  9  10 11 12 ···
Max. Points U(n) | 1 2 3 18 20 21 22 36 38 40 41 54 ···
(d) Algorithm 5.8 illustration

Figure 5.9: Deriving a higher order recurrence relation for the 3 missile game problem.

energy n. Consider the toy example, which was given as a counter-example in Theorem 4.7
on page 167 and is shown in Figure 5.9 (a) and (b), where the number of items k = 3 and
the maximum energy n = 9.
In order to utilize the strong inductive programming paradigm, the first suggested step
is to fantasize a solution, as depicted in Figure 5.9 (c). Imagine U (n), the highest score
obtained with less than or equal to n energy spent. Suppose the last missile x’s px point is
removed. Then, the remaining sum of missiles’ points must be U (n − ex ). If the remaining
point, U (n − ex ), is not optimal, then it contradicts our assumption that U (n) = U (n −
ex ) + px is the maximum.
Since x can be any one of k missiles, it is necessary to try them all and pick a maximum,
as depicted in Figure 5.9 (c). The following higher order recurrence can be derived:

( (
0 if n ≤ 0 U (n − e) + p if n ≥ e
U (n) = where f (n, e, p) = (5.14)
max f (n, ei , pi ) if n > 0 0 if n < e
1≤i≤k

Using the kth order recurrence relation in eqn (5.14), the problem can be solved forward,
starting from the basis and storing the solutions in a table, as illustrated in Figure 5.9 (d) for
the toy example when n = 12. Now, an algorithm using the strong inductive programming
paradigm can be written. It assumes that the items are sorted in ascending order of the
energy required.
Algorithm 5.8. Dynamic unbounded integer knapsack
Unbounded knapsack(n, P, E)
Declare a table T0∼n with 0’s initially . . . . . . . . . . . . . 1
T [0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if i − ej ≥ 0 and T [i − ej ] + pj > T [i]. . . . . . . . . .5
T [i] = T [i − ej ] + pj . . . . . . . . . . . . . . . . . . . . . . . .6
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

The computational time complexity of Algorithm 5.8 is Θ(kn) and the computational space
complexity is Θ(n), since solutions of all sub-problems are stored in a table.
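A minimal Python sketch of Algorithm 5.8; the function name unbounded_knapsack and the parallel energy/points lists are illustrative choices.

def unbounded_knapsack(n, energy, points):
    # Highest score with at most n energy, eqn (5.14); unlimited copies
    # of each item (missile) are allowed.
    T = [0] * (n + 1)
    for i in range(1, n + 1):
        for e, p in zip(energy, points):
            if i >= e and T[i - e] + p > T[i]:
                T[i] = T[i - e] + p
    return T[n]

print(unbounded_knapsack(9, [1, 4, 5], [1, 18, 20]))    # 38, as in Figure 5.9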

5.3.3 Weighted Activity Selection Problem

(a) Sample activities with profits: ten activities drawn as bars on a time line 0 ∼ 14, each
labeled "activity (profit)", e.g., activity 1 (profit 5) spans 7 ∼ 12 and activity 8 (profit 2)
spans 1 ∼ 5.


activity A 1 2 3 4 5 6 7 8 9 10
start S 7 5 2 12 0 2 8 1 6 3
finish F 12 8 6 14 7 13 11 5 10 9
profit P 5 4 5 2 4 4 2 2 4 4
select X 1 0 1 1 0 0 0 0 0 0
(b) weighted activity selection sample input and output

Figure 5.10: Weighted activity selection example

Consider the weighted activity selection problem, or simply wASP. While the goal of the
activity selection Problem 4.8 defined on page 170 is to maximize the number of activities,
the goal of wASP is to maximize the total profit of compatible activities. Each activity is
associated with its profit value as well as the starting and finishing times; ai = (si , fi , pi ).
Sample input and output are given in Figure 5.10 and selected activities are highlighted.
This wASP problem can be formulated as a maximization problem as follows:

Problem 5.6. weighted Activity Selection Problem

Input: A list A_{1∼n} of n different activities, where each activity is represented by
its starting time, finishing time, and profit: a_i = (s_i, f_i, p_i).
Output: X_{1∼n} such that

maximize Σ_{i=1}^{n} p_i x_i
subject to ∀i ≠ j ∈ {1 ∼ n} (f_i x_i ≤ s_j x_j ∨ s_i x_i ≥ f_j x_j)
where x_i = 0 or 1

Let's assume that activities are sorted by finishing time. For thinking backward, fantasize
an optimal solution and think about the last finishing activity, a′_n. There are two cases: a′_n
is included or not included in an optimal solution set. If not included, wASP(A′_{1∼n}) should
be the same as wASP(A′_{1∼n−1}) without the activity a′_n, as illustrated in Figure 5.11 (a). If
a′_n is included, any activities that are not compatible with it cannot be selected. Those activities
are the ones whose finishing time is greater than s′_n. Let k be the largest index, such that

(a) If a′_n is not included: wASP(A′_{1∼n}) = wASP(A′_{1∼n−1})
(b) If a′_n is included: wASP(A′_{1∼n}) = wASP(A′_{1∼k}) + p′_n, where a′_k is the last
activity finishing no later than s′_n

activity A′     | a′_0 | 8 | 3 | 5 | 2 | 10 | 9  | 7  | 1  | 6  | 4
start S         | 0    | 1 | 2 | 0 | 5 | 3  | 6  | 8  | 7  | 2  | 12
finish F        | 0    | 5 | 6 | 7 | 8 | 9  | 10 | 11 | 12 | 13 | 14
profit P        | 0    | 2 | 5 | 4 | 4 | 4  | 4  | 2  | 5  | 4  | 2
wASP(A′_{1∼i})  |      | 2 | 5 | 5 | 6 | 6  | 9  | 9  | 10 | 10 | 12
(c) a table of wASP(A′_{1∼i}) for i = 1 ∼ n; a′_0 = (0, 0, 0) is a sentinel.

Figure 5.11: Weighted activity selection example

f′_k ≤ s′_n. Then, wASP(A′_{1∼n}) should include the sub-solution wASP(A′_{1∼k}) as well as the
profit of a′_n, as shown in Figure 5.11 (b). The following recurrence relation can be derived:

0
 if n = 0
wASP(A01∼n ) = p01 if n = 1 (5.15)
0 0 0

max(wASP(A1∼n−1 ), wASP(A1∼k + pn )) if n > 1

where k = argmax (fi0 ) such that fi0 ≤ s0n (5.16)


i∈{1∼n−1}

Using the recurrence relation in eqn (5.15), all sub-problems’ solutions can be stored in
a table starting from the basis case when n = 1, as provided in Figure 5.11 (c). A pseudo
code for this strong inductive programming is devised as follows:

Algorithm 5.9. Dynamic weighted activity selection

wASP(A_{1∼n})
Declare a table T_{0∼n} and set T[0] = 0 . . . . . . . . . 1
A′ = sort(A, asc) by F . . . . . . . . . . . . . . . . . . . . . . . . . 2
T[1] = p′_1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
  k = searchp(F′_{1∼i−1}, s′_i) . . . . . . . . . . . . . . . . . . 5
  T[i] = max(T[i − 1], T[k] + p′_i) . . . . . . . . . . . . . . 6
return T[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

It should be noted that the ‘searchp’ sub-procedure in line 5 of Algorithm 5.9 returns a
position as defined in eqn (5.16). A slight modification in the search algorithm is required.
If q does not exist in a sequence A, it should return k such that ak < q < ak+1 . If there are
multiple matches, it should return k such that ak = q < ak+1 .
In line 2 of Algorithm 5.9, sorting by finishing time takes O(n log n). Since the ‘searchp’
sub-procedure in line 5 takes O(log n) if a binary search is used, the computational time
complexity of Algorithm 5.9 is O(n log n). The computational space complexity is Θ(n).
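A Python sketch of Algorithm 5.9 follows, using bisect_right from the standard library in place of the 'searchp' sub-procedure; the sentinel T[0] = 0 covers the case where no compatible predecessor exists, and the name wasp is illustrative.

from bisect import bisect_right

def wasp(activities):
    # Weighted activity selection via eqn (5.15); each activity is a
    # (start, finish, profit) triple.
    acts = sorted(activities, key=lambda a: a[1])       # sort by finishing time
    finishes = [f for _, f, _ in acts]
    T = [0] * (len(acts) + 1)                           # T[0] = 0 is a sentinel
    for i in range(1, len(acts) + 1):
        s, _, p = acts[i - 1]
        k = bisect_right(finishes, s, 0, i - 1)         # last activity with f' <= s
        T[i] = max(T[i - 1], T[k] + p)
    return T[-1]

acts = [(7, 12, 5), (5, 8, 4), (2, 6, 5), (12, 14, 2), (0, 7, 4),
        (2, 13, 4), (8, 11, 2), (1, 5, 2), (6, 10, 4), (3, 9, 4)]
print(wasp(acts))                                       # 12, matching Figure 5.11 (c)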

5.4 Proving Recurrence Relations


As ordinary induction proves the first order linear recurrence relation in Chapter 2, many
recurrence relations, such as divide recurrences and higher order recurrences, can be proven
using strong induction. This section demonstrates such cases. In the analysis of algorithms
thus far, continuous functions have been used. The number of steps of any algorithm is,
however, discrete, i.e., an integer. In this section, the discrete version of computational time
complexities is expressed so that strong induction can be utilized to prove their closed
formulas.

5.4.1 Divide Recurrence Relations


In Chapter 3, a closed formula for a divide recurrence relation was proven by ordinary
induction for n an exact power of 2, such as in Theorem 3.1. Here, full proofs by
strong induction for the computational time complexities of various divide and conquer
algorithms are presented.
Theorem 5.5. The solution of the following divide recurrence in eqn (5.17) is T(n) = 2n − 1
for any positive integer n.

T(n) = 1                                 if n = 1
     = T(⌈n/2⌉) + T(⌊n/2⌋) + 1           if n > 1        (5.17)

Proof. (by strong induction)

Basis: When n = 1, T(1) = 1 by eqn (5.17), and 2 × 1 − 1 = 1.
Inductive step: Assume that T(k) = 2k − 1 for all positive integers k where 1 ≤ k ≤ n. Show
T(n + 1) = 2n + 1.

T(n + 1) = T(⌈(n + 1)/2⌉) + T(⌊(n + 1)/2⌋) + 1           by eqn (5.17)
         = 2⌈(n + 1)/2⌉ − 1 + 2⌊(n + 1)/2⌋ − 1 + 1       by strong assumption
         = 2⌈(n + 1)/2⌉ + 2⌊(n + 1)/2⌋ − 1               (5.18)

if n + 1 is even: (5.18) = 2(n + 1)/2 + 2(n + 1)/2 − 1 = 2n + 1
if n + 1 is odd:  (5.18) = 2(n + 2)/2 + 2n/2 − 1 = 2n + 1    □
To find the value of T(7), first find the values of T(⌊7/2⌋) = T(3) = 5 and T(⌈7/2⌉) = T(4) =
7 in the table, L, and then use eqn (5.17): T(7) = T(3) + T(4) + 1 = 13, as illustrated in
Figure 5.12 (a). When n is even, ⌈n/2⌉ = ⌊n/2⌋ = n/2. For example, to compute T(8), it
only requires the value in L[4], and then use eqn (5.17): T(8) = T(4) + T(4) + 1 = 15, as
illustrated in Figure 5.12 (b). Clearly, L is a sequence of odd numbers; T(n) = 2n − 1.
Table L can be filled starting from the basis toward the nth cell. Hence, an algorithm
using the strong inductive programming paradigm can be written as follows:

L[1] L[2] L[3] L[4] L[5] L[6] L[7]
1    3    5    7    9    11   ?
(a) n is an odd number case

L[1] L[2] L[3] L[4] L[5] L[6] L[7] L[8]
1    3    5    7    9    11   13   ?
(b) n is an even number case


n 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
L[n] = T (n) 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29
(c) a table of T (n) in eqn (5.17) for n = 1 ∼ 15.

Figure 5.12: Strong induction programming Algorithm 5.10 illustration

Algorithm 5.10. Dynamic T (n) = 2T (n/2) + 1 recurrence relation


T (n)
Declare a table L of size n . . . . . . . . . . . . . . . . . . . . . . . . 1
L[1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if i is even . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
L[i] = 2 × L[i/2] + 1 . . . . . . . . . . . . . . . . . . . . . . . . . .5
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
L[i] = L[(i − 1)/2] + L[(i + 1)/2] + 1 . . . . . . . . . 7
return L[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
The computational time complexity of Algorithm 5.10 is Θ(n) and the computational
space complexity is Θ(n) since solutions of all sub-problems are stored in a table.
Another divide recurrence relation, which was proven by either the Master Theorem 3.9
or by ordinary induction for n an exact power of 2, is T(n) = T(n/2) + 1 = Θ(log n).
Theorem 5.6. The solution of the following divide recurrence in eqn (5.19) is T(n) =
⌊log n⌋ + 1 for any positive integer n.

T(n) = 1                     if n = 1
     = T(⌊n/2⌋) + 1          if n > 1        (5.19)

Proof. (by strong induction)

Basis: When n = 1, T(1) = 1 by eqn (5.19), and ⌊log 1⌋ + 1 = 1.
Inductive step: Assume that T(k) = ⌊log k⌋ + 1 for all positive integers k where 0 < k ≤ n. Show
T(n + 1) = ⌊log(n + 1)⌋ + 1.

T(n + 1) = T(⌊(n + 1)/2⌋) + 1                    by eqn (5.19)
         = ⌊log⌊(n + 1)/2⌋⌋ + 1 + 1              by strong assumption        (5.20)

if n + 1 is even: (5.20) = ⌊log(n + 1) − 1⌋ + 2  because log(x/2) = log x − 1
                         = ⌊log(n + 1)⌋ + 1      because ⌊x − 1⌋ = ⌊x⌋ − 1
if n + 1 is odd:  (5.20) = ⌊log(n/2)⌋ + 2        because ⌊x/2⌋ = (x − 1)/2 if x is odd
                         = ⌊log n − 1⌋ + 2       because log(x/2) = log x − 1
                         = ⌊log n⌋ − 1 + 2       because ⌊x − 1⌋ = ⌊x⌋ − 1
                         = ⌊log(n + 1)⌋ + 1      by Lemma 5.1    □

Lemma 5.1. blog kc = blog (k + 1)c if k is even.

Proof. ⌊log k⌋ ≠ ⌊log(k + 1)⌋ if and only if k + 1 is an exact power of 2. If k + 1 is an exact
power of 2, k is odd. Hence, ⌊log k⌋ = ⌊log(k + 1)⌋ if k is even.    □

L[1] L[2] L[3] L[4] L[5] L[6] L[7] L[8] L[9] L[10]
1 2 2 3 3 3 3 4 4 4

(a) L[n] = L[⌊n/2⌋] + 1
 

n 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
L[n] = T (n) 1 2 2 3 3 3 3 4 4 4 4 4 4 4 4 5 5
(b) a table of T (n) in eqn (5.19) for n = 1 ∼ 17.

Figure 5.13: Strong induction programming Algorithm 5.11 illustration

To find the value of T(15), first find the value of T(⌊15/2⌋) = T(7) = 3 in the table, L, of
all sub-solutions, and then use eqn (5.19): T(15) = T(7) + 1 = 4, as illustrated in Figure 5.13.
Table L can be filled starting from the basis. Hence, an algorithm using the strong
inductive programming paradigm can be written as follows:

Algorithm 5.11. Dynamic T (n) = T (n/2) + 1 recurrence relation

T (n)
Declare a table L of size n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
L[1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if i is even . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
L[i] = L[i/2] + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
L[i] = L[(i − 1)/2] + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
return L[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

The computational time complexity of Algorithm 5.11 is Θ(n) and the computational
space complexity is Θ(n), since solutions of all sub-problems are stored in a table.

5.4.2 Complete Recurrence Relations


Higher order recurrences that invoke all sub-problems from the basis to T (n−1) are called
complete recurrence relations. A popular complete recurrence relation that one encounters
frequently in analysis of algorithms is given in eqn (5.21).

Theorem 5.7. The solution of the following higher order recurrence in eqn (5.21) is T(n) =
2n − 1 for any positive integer n.

T(n) = 1                                     if n = 1
     = (Σ_{i=1}^{n−1} T(i)) / (n − 1) + n    if n > 1        (5.21)

Proof. (by strong induction)

Basis: When n = 1, T(1) = 1 by eqn (5.21), and 2 × 1 − 1 = 1.
Inductive step: Assume that T(j) = 2j − 1 for all positive integers j where 0 < j ≤ k. Show
T(k + 1) = 2k + 1.

T(k + 1) = (Σ_{i=1}^{k} T(i)) / k + k + 1        by eqn (5.21)
         = (Σ_{i=1}^{k} (2i − 1)) / k + k + 1     by assumption
         = (2 Σ_{i=1}^{k} i − k) / k + k + 1      by summation rules
         = (k(k + 1) − k) / k + k + 1             by Theorem 1.3
         = 2k + 1    □

To find the value of T(7), all values of T(1) ∼ T(6) are required for eqn (5.21), as
illustrated in Figure 5.14 (a). Add all six values, divide the sum by 6, and then add 7. The answer
is 13. Such full recurrences are called complete recurrences. The first 15 values are shown in

L[1] L[2] L[3] L[4] L[5] L[6]
1    3    5    7    9    11
(a) Complete recurrence for T(6)

L[1] L[2] L[3] L[4] L[5] L[6] L[7]
1    3    5    7    9    11   13
(b) Complete recurrence for T(7)
n 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
L[n] = T (n) 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29
(c) a table of T (n) in eqn (5.21) for n = 1 ∼ 15.

Figure 5.14: Strong induction programming illustration for complete recurrence relation

Figure 5.14 (c). Clearly, L is a sequence of odd numbers.


Table L can be filled starting from the basis. Hence, an algorithm using the strong
inductive programming paradigm can be written as follows:

Algorithm 5.12. Dynamic complete recurrence relation in (5.21)


T (n)
Declare a table L of size n . . . . . . . . . . . . . . . . . . . . . . . . 1
L[1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
L[i] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for j = 1 ∼ i − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
L[i] = L[i] + L[j] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
L[i] = L[i]/(i − 1) + i . . . . . . . . . . . . . . . . . . . . . . . . 7
i−1
return L[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

The computational time complexity of Algorithm 5.12 is Θ(n2 ) and the computational
space complexity is Θ(n), since solutions of all sub-problems are stored in a table.

5.4.3 Euler Zigzag Number


The number of all alternating permutations of n elements, E(n), is known as the
Euler zigzag number, or up-down number. E(n) grows exponentially as n grows. Determining
E(n) is called André's problem, as André first studied it in [6, 7], and comprehensive
studies can be found in [161, 162]. In [7], a beautiful higher order recurrence for the nth
Euler zigzag number was given, as stated in eqn (5.22).
2E_{n+1} = Σ_{i=0}^{n} C(n, i) E_i E_{n−i}        (5.22)

The recurrence relation in eqn (5.22) is a complete recurrence relation, since all values from
the 0th to the nth term are required to compute E_{n+1}. The value of the nth Euler zigzag number
can certainly be found using strong inductive programming. However, there are duplicate
computations in eqn (5.22) due to the symmetry in eqn (5.23).
   
C(n, i) E_i E_{n−i} = C(n, n − i) E_{n−i} E_i        (5.23)

Half of the computation can be eliminated, as exemplified here:

E_6 = ½(C(5,0)E_0E_5 + C(5,1)E_1E_4 + C(5,2)E_2E_3 + C(5,3)E_3E_2 + C(5,4)E_4E_1 + C(5,5)E_5E_0)
    = C(5,0)E_0E_5 + C(5,1)E_1E_4 + C(5,2)E_2E_3

E_5 = ½(C(4,0)E_0E_4 + C(4,1)E_1E_3 + C(4,2)E_2E_2 + C(4,3)E_3E_1 + C(4,4)E_4E_0)
    = C(4,0)E_0E_4 + C(4,1)E_1E_3 + ½C(4,2)E_2E_2

Based on these two symmetry case observations, as illustrated in Figure 5.15 (a) and (b),

E_0 E_1 E_2 E_3 E_4 E_5 E_6
1   1   1   2   5   16  ?
(a) Even symmetry case

E_0 E_1 E_2 E_3 E_4 E_5 E_6 E_7
1   1   1   2   5   16  61  ?
(b) Odd symmetry case


n 0 1 2 3 4 5 6 7 8 9 10 11 12
L[n] = En 1 1 1 2 5 16 61 272 1385 7936 50521 353792 2702765
(c) First 13 Euler zigzag numbers in a table

Figure 5.15: Strong induction programming Algorithm 5.13 illustration for Euler zigzag
numbers

a computationally efficient recurrence relation can be derived as follows:

E_n = 1                                                                                     if n = 0 or 1
    = Σ_{i=0}^{⌊(n−1)/2⌋} C(n − 1, i) E_i E_{n−i−1}                                         if n > 1 and n is even
    = Σ_{i=0}^{(n−1)/2 − 1} C(n − 1, i) E_i E_{n−i−1} + ½ C(n − 1, (n−1)/2) E²_{(n−1)/2}    if n > 1 and n is odd        (5.24)

Using the complete recurrence relation in eqn (5.24), a pseudo code by strong inductive
programming is as follows:
Algorithm 5.13. Dynamic Euler ZigZag Number
EulerZigZagNum(n)
Declare a table L of size n + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
L[0] = L[1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
  L[i] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
  for j = 0 ∼ ⌈(i − 1)/2⌉ − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
    L[i] = L[i] + L[j] × L[i − j − 1] × C(i − 1, j) . . . . . . . . 6
  if i is odd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
    L[i] = L[i] + ½ L[(i − 1)/2]² × C(i − 1, (i − 1)/2) . . . . 8
return L[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Algorithm 5.13 invokes C(n, k), the binomial coefficient. This binomial
coefficient problem shall be covered later in Chapter 6. For now, let's assume that the values
of C(n, k) are available in a table so that they can be accessed in constant time. Then, the
computational time complexity of Algorithm 5.13 is Θ(n²) and the computational space
complexity is Θ(n), since solutions of all sub-problems are stored in a table.
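A Python sketch of Algorithm 5.13 follows; math.comb supplies C(n, k) directly rather than from a precomputed table, which does not change the Θ(n²) bound under the constant-time assumption above. The name euler_zigzag is illustrative.

from math import comb

def euler_zigzag(n):
    # Euler zigzag numbers E_0 .. E_n via the symmetric recurrence in
    # eqn (5.24).
    L = [1] * min(n + 1, 2) + [0] * (n - 1)
    for i in range(2, n + 1):
        # j runs over 0 ~ ceil((i-1)/2) - 1; note ceil((i-1)/2) = i//2 here
        L[i] = sum(L[j] * L[i - j - 1] * comb(i - 1, j) for j in range(i // 2))
        if i % 2 == 1:                        # middle term is counted only half
            m = (i - 1) // 2
            L[i] += L[m] * L[m] * comb(i - 1, m) // 2
    return L

print(euler_zigzag(8))                        # [1, 1, 1, 2, 5, 16, 61, 272, 1385]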

5.5 Memoization
Strong inductive programming stores all sub-solutions in a look-up table by solving
forward, starting from basis cases. In some cases, not all but only part of the sub-solutions
are necessary to solve a problem. The speed of a program may be improved if only the
necessary sub-solutions are computed. This may be achieved by solving the problem
recursively, or backward. This technique is called the 'memoization' method, a term
first coined by Michie in [123]. It may be called a recursive tabulation method, as strong
inductive programming can be viewed as a forward solving tabulation method. They were
referred to as top-down and bottom-up approaches, respectively, in [42, p 347]. All problems
solved by strong inductive programming can also be solved by the memoization method.
Only a few problems are presented here and in the next section for demonstration purposes;
the others are left as exercises.

5.5.1 Winning Ways Problem


An example where one may find clear advantages of the memoization technique over strong
inductive programming is the winning ways problem. Consider the (k = 3) missile game example
given in Figure 4.12 on page 166. Supposing there is unlimited energy, how many ways are
there to earn n points with k kinds of missiles with different corresponding point values?
This combinatoric problem is called the winning ways problem, or WWP in short.
In order to formulate the problem and/or derive a higher order recurrence, it is useful to
consider a toy example. Suppose there are (k = 3) kinds of missiles as given in Figure 5.16
(a); P1∼3 = h1, 9, 11i. One may enumerate all winning scenarios for small n. Figure 5.16 (b)
enumerates all seven possible ways to earn n = 12 points and then they are partitioned by
the last missile used. As the fantasizing step, imagine what if the last missile was not fired.
If the last missile was the blue 1 point one, the five scenarios in the first partition without
the last missile are exactly the number of winning ways to get 11 points. With identical
reasoning for the rest of partitions, a partial recursion tree is depicted in Figure 5.16 (c)
and the following higher order recurrence can be derived:

W(n) = W(n − 1) + W(n − 9) + W(n − 11)    if n > 0
     = 1                                  if n = 0
     = 0                                  if n < 0        (5.25)

Now the problem can be formally formulated with the generalized recurrence relation as
follows:

Problem 5.7. Winning ways

Input: a list P of k different points and n, the total points needed
Output: W(n) in eqn (5.26)

W(n) = Σ_{i=1}^{k} W(n − p_i)    if n > 0
     = 1                         if n = 0
     = 0                         if n < 0        (5.26)

Donald Michie (1923-2007) was a British computer scientist. His major contri-
butions are in artificial intelligence and machine learning. He developed the Machine
Educable Noughts And Crosses Engine (MENACE), one of the first programs capable
of learning to play a perfect game of Tic-Tac-Toe. He was the founder of the Machine Intel-
ligence series and the Human-Computer Learning Foundation.
© Photo Credit: Petermowforth, licensed under CC BY 3.0, crop change was made.

missile | blue | red | yellow
point P | 1    | 9   | 11
(a) A sample input: n = 12 and P_{1∼3} = ⟨1, 9, 11⟩

(b) W(12) partitioned by the last missile: all seven ways to earn 12 points, grouped into
five scenarios ending with the blue 1, one ending with the red 9, and one ending with the
yellow 11

(c) Fantasizing (strong assumption) stage: W(12) = W(11) + W(3) + W(1) = 5 + 1 + 1 = 7

n    | 0 1 2 3 ··· 7 8 9 10 11 12 13 14 15 16 17 18
W(n) | 1 1 1 1 ··· 1 1 2 3  5  7  9  11 13 15 17 20
(d) a table of W(n) in eqn (5.26) for n = 0 ∼ 18

Figure 5.16: Deriving higher order recurrence for winning ways; W(12) case

It should be noted that WWP Problem 5.7 is closely related to the Kibonacci (KB2) Prob-
lem 5.9 and the Fibonacci Problem 5.8. Suppose that there are only two missiles, blue and
red, in the winning ways Problem 5.7. The blue missile gets 1 point and the red missile
gets k′ points. This special case of WWP where k = 2 and P = ⟨1, k′⟩ is closely related
to the nth Kibonacci (KB2) Problem 5.9 with a slightly different basis case. When k′ = 2,
WWP(n, ⟨1, 2⟩) is closely related to the nth Fibonacci Problem 5.8 with a different basis case.
Before stating a pseudo code, it is helpful to fill up the table by hand starting from
the basis toward n, as shown in Figure 5.16 (d). For example, W(18) = 20 can be easily
computed since W(18 − 1) = 17, W(18 − 9) = 2, and W(18 − 11) = 1 have already been
computed and stored in the table. It is worthwhile to reemphasize that a successful design
of a strong inductive programming algorithm lies in deriving a correct recurrence relation,
as in eqn (5.26). Now a trivial algorithm using the strong inductive programming paradigm
can be written as follows:

Algorithm 5.14. Dynamic winning ways

Winning ways(n, P )
Declare a table T0∼n whose elements are 0 initially . . . . . . 1
T [0] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 1 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if i − pj ≥ 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
T [i] = T [i] + T [i − pj ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

The computational time complexity of Algorithm 5.14 is Θ(kn) and the computational
space complexity is Θ(n), since the solutions of all sub-problems are stored in a table.
A recursive programming algorithm for the higher order recurrence relation, such as in
eqn (5.26), should not be implemented because it takes exponential time. A memoization
method, which is a recursive programming with a table, may be implemented though. It
always starts with declaring a global table. A global variable can be accessed from any
method or function within a program while a local variable can be accessed only within a
method or procedure where it is declared. Based on the higher order recurrence relation in
eqn (5.26), a memoization method for Problem 5.7 can be stated as follows:

Algorithm 5.15. Winning ways by memoization method

Declare a global table T_{1∼n} whose elements are 0 initially

WWP(n, P_{1∼k})
if n < 0, return 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if n = 0, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if n > 0 ∧ T[n] = 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
  for j = 1 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
    T[n] = T[n] + WWP(n − p_j, P_{1∼k}) . . . . . . . . . . . . . 5
return T[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

(a) the naïve recursive call tree vs. memoization tree for W(150, ⟨30, 50⟩): W(150) =
W(120) + W(100) = 1 + 1 = 2, with W(120) = W(90) + W(70) and W(100) = W(70) + W(50),
and so on down to W(0) = 1 and the negative arguments; the shaded repeated subtrees
(e.g., the second and third expansions of W(70), W(40), and W(20)) are eliminated by the
memoization method
n 0 1 2 3 4 ··· 10 11 ··· 20 21 ··· 30
W (n) 1 - - - - ··· 0 - ··· 0 - ··· 1
n 31 ··· 40 41 ··· 50 51 ··· 60 61 ··· 69 70
W (n) - ··· 0 - ··· 1 - ··· 1 - ··· - 0
n 71 ··· 90 91 ··· 100 101 ··· 120 121 ··· 149 150
W (n) - ··· 1 - ··· 1 - ··· 1 - ··· - 2
(b) a table for W (150, h30, 50i) by Algorithm 5.15

Figure 5.17: Recursion trees for the number of winning ways: WWP(n = 150, P = h30, 50i).

If one of the points in P is one, there is no advantage of the memoization method over the
strong inductive programming Algorithm 5.14. However, if the smallest point in the set
is greater than one, the memoization method in Algorithm 5.15 may be better. Suppose that
the blue and red missiles' points are 30 and 50, respectively. While the strong inductive
programming Algorithm 5.14 computes all sub-problems from 0 ∼ n, not all sub-problems need to
be solved. Figure 5.17 (a) illustrates the recursive call tree by the naïve recursive algorithm
in eqn (5.26) to compute W(150). The shaded subtrees are redundant recursive subtrees
that can be eliminated by the memoization method in Algorithm 5.15.
While the naı̈ve recursive algorithm in eqn (5.26) makes 39 recursive calls, only 23
recursive calls are made for the memoization method Algorithm 5.15. While the strong
inductive programming Algorithm 5.14 must compute all 150 values sequentially, only twelve
highlighted cells in the table need to be computed and stored, as illustrated in Figure 5.17
(b).
The computational space complexity of the memoization method stated in Algorithm 5.15
is Θ(n), as the full table must be declared. The computational time complexity of Algo-
rithm 5.15 is O(kn).
If all sub-solutions must be computed and stored, strong inductive programming, which
solves forward using a loop, is faster than the recursive memoization method in practice.
If not all sub-solutions are necessary, a memoization method might be faster
than strong inductive programming.
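The contrast can be seen in a few lines of Python; here functools.lru_cache plays the role of the global table of Algorithm 5.15, so only the reachable sub-problems are ever computed. The function name winning_ways is illustrative.

from functools import lru_cache

def winning_ways(n, points):
    # Winning ways W(n), eqn (5.26), solved backward (memoization).
    @lru_cache(maxsize=None)
    def W(m):
        if m < 0:
            return 0
        if m == 0:
            return 1
        return sum(W(m - p) for p in points)
    return W(n)

print(winning_ways(12, (1, 9, 11)))           # 7, as in Figure 5.16
print(winning_ways(150, (30, 50)))            # 2, as in Figure 5.17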

5.5.2 Divide Recurrence Relations


Consider the divide recurrence relation in eqn (5.17) on page 233. Figure 5.18 (a)
shows the full recursion tree to compute T (29) by eqn (5.17). Shaded subtrees represent
unnecessary repeated computation. These subtrees can be removed by the memoization
method. The pseudo code for the memoization method is stated as follows:
Algorithm 5.16. Divide recurrence relation T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 1

Declare a global table L_{1∼n} whose elements are 0 initially

T(n)
if n = 1, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if n > 1 ∧ L[n] = 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
  L[n] = T(⌈n/2⌉) + T(⌊n/2⌋) + 1 . . . . . . . . . . . . . . . . . . . 3
return L[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
It can also be stated as in eqn (5.27).

T(n) = 1                                         if n = 1
     = LT[n] = T(⌈n/2⌉) + T(⌊n/2⌋) + 1           if LT[n] = nil and n > 1
     = LT[n]                                     if LT[n] ≠ nil and n > 1        (5.27)

where LT_{1∼n} is a global table whose values are nil initially.


The memoization algorithm, stated in eqn (5.27), is illustrated in a table to compute
T(29) in Figure 5.18 (b). Arrows on the top and bottom of the table indicate ⌈n/2⌉ and
⌊n/2⌋, respectively. Only 17 recursive calls are made in the memoization method to compute
T(29), as indicated in Figure 5.18 (a). Only 9 cells of the table need to be computed
and stored to compute T(29), as indicated in Figure 5.18 (b).
The computational space complexity of the memoization method in eqn (5.27) is Θ(n),
the same as the strong inductive programming Algorithm 5.10, as both algorithms require a

table of size n. The computational time complexity of the memoization method in eqn (5.27)
is Θ(log n) while that of Algorithm 5.10 is Θ(n). While the strong inductive programming
Algorithm 5.10 computes all cells, the memoization method computes only Θ(log n) number
of cells in the table. Figure 5.18 (c) illustrates only 19 cells are computed in the memoization

(a) the naïve recursion tree for T(29) by eqn (5.17): T(29) splits into T(14) and T(15);
these split into T(7), T(7) and T(7), T(8); and so on down to T(1). Shaded subtrees mark
the unnecessary repeated computations of T(7), T(4), T(3), and T(2).

1 3 5 7 - - 13 15 - - - - - 27 29 - - - - - - - - - - - - - 57 - - -
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32

(b) a table of T (29) by the memoization algorithm in eqn (5.27)


n 1 2 3 4 5 6 7 8 9 ··· 15 16
T (n) 1 3 5 7 - - 13 15 - ··· 29 31
n 17 ··· 31 32 33 ··· 63 64 65 ··· 126 127
T (n) - ··· 61 63 - ··· 125 127 - ··· 251 253
n 128 ··· 504 505 506 ··· 1008 1009 1010 ··· 2018 2019
T (n) - ··· 1007 1009 - ··· 2015 2017 - ··· - 4037
(c) a table of T (2019) by the memoization algorithm in eqn (5.27)

level 0:            T(33)                   {T(33)}
level 1:            T(16) T(17)             {T(16), T(17)}
level 2:            T(8) T(8) T(8) T(9)     {T(8), T(9)}
level 3:            T(4) T(4) T(4) T(5)     {T(4), T(5)}
level 4:            T(2) T(2) T(2) T(3)     {T(2), T(3)}
level 5 = ⌊log n⌋:  T(1) T(1) T(1) T(2)     {T(1), T(2)}

(d) the memoization recursion tree for T(33) in eqn (5.27)

Figure 5.18: Recursion trees and tables for the divide recurrence relation in eqn (5.17).

method to compute T (2019) while Algorithm 5.10 computes all 2019 cells. Figure 5.18 (d)
provides better insights on the computational time complexity of Θ(log n). The height of the
tree is clearly blog nc. At each level, only up to four calls are made and only two consecutive
cells are computed. Clearly, the memoization method in eqn (5.27) is better than the naı̈ve
recursive programming algorithm in eqn (5.17) as well as the strong inductive programming
Algorithm 5.10.

5.5.3 Linear Divide Recurrence Relations


Both strong inductive programming and the memoization method are effective for problems
whose recurrence relations are non-linear. If the recurrence relation is linear, simple recursive
programming or tail recursion is more effective. For example, consider the linear divide
recurrence relation in eqn (5.19). Strong inductive programming Algorithm 5.11 was in-
troduced on page 235 in order to help readers understand the concept of strong induction.
Indeed, various theorems involving the divide recurrence relation in eqn (5.19), such as
Theorem 5.6 on page 234, may require strong induction. The problem with using the strong
inductive programming method for a linear recurrence relation is that both the computational
time and space complexities are Θ(n).
The computational time complexity can be reduced to Θ(log n) if a memoization
method is utilized. An algorithm by a memoization method based on eqn (5.19) is stated

1 2 - 3 - - - 4 - - - - - - - 5 - - - - - - - - - - - - - - - 6
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32

(a) a table to compute T (32) by the memoization method in eqn (5.19)


n 1 2 3 4 5 6 7 8 ··· 14 15 16 ···
T (n) 1 - 2 - - - 3 - ··· - 4 - ···
n 31 ··· 63 ··· 126 ··· 252 ··· 504 ··· 1009 ··· 2019
T (n) 5 ··· 6 ··· 7 ··· 8 ··· 9 ··· 10 ··· 11
(b) a table to compute T (2019) by the memoization method in eqn (5.19)
(c) Recursive programming in eqn (5.19) to compute T(15): T(15) = T(⌊15/2⌋) + 1 calls
T(7) = T(⌊7/2⌋) + 1, which calls T(3) = T(⌊3/2⌋) + 1, which reaches the basis case
T(1) = 1; the calls then return 1, 2, 3, 4.

(d) Tail recursion Algorithm 5.17 to compute T(9): start with o = 1; n = 9 → 4 with
o = 2; n = 4 → 2 with o = 3; n = 2 → 1 with o = 4; return o = 4.

Figure 5.19: Linear Divide Recursion Relation in eqn (5.19).



as follows:

T(n) = 1                          if n = 1
     = LT[n] = T(⌊n/2⌋) + 1       if LT[n] = nil and n > 1
     = LT[n]                      if LT[n] ≠ nil and n > 1        (5.28)

where LT_{1∼n} is a global table whose values are nil initially.


The memoization method in eqn (5.28) is illustrated in Figure 5.19 (a) and (b) to compute
T (32) and T (2019). Only blog 32c + 1 = 6 and blog 2019c + 1 = 11 cells are computed.
The memoization method in eqn (5.28) still requires Θ(n) extra space. When the recur-
sive programming algorithm is directly implemented on the linear divide recurrence relation
in eqn (5.19), the recursive call tree is unary tree and there is no need to store sub-solutions
in a table. Figure 5.19 (c) illustrates the recursive programming without a table to com-
pute T (15). Both computational time and space complexities of the recursive programming
algorithm in eqn (5.19) are Θ(log n).
The computational space complexity can be further reduced to O(1) if a tail recursion
method is used. No extra space is required to store many sub-solutions but only immediate
previous solution can be stored in a variable. Figure 5.19 (d) illustrates the tail recursion
algorithm to compute T (9). An algorithm by a tail recursion based on the linear recurrence
relation in eqn (5.19) is stated as follows:
Algorithm 5.17. Tail recursion D&C
T(n)
o = 1 ............................................. 1
while n ≥ 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
o = o + 1 .......................................3
n = bn/2c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
The computational time and space complexities of various algorithms for the linear
recurrence relation in eqn (5.19) are summarized in Table 5.1.

Table 5.1: Computational complexities for Linear divide recurrence

Strong inductive Memoization Recursive Tail


Algorithms programming method programming recursion
Algorithm 5.11 eqn (5.28) eqn (5.19) Algorithm 5.17
Time Θ(n) Θ(log n) Θ(log n) Θ(log n)
Space Θ(n) Θ(n) Θ(log n) O(1)

5.6 Fibonacci and Lucas Sequences


5.6.1 Fibonacci
Consider the problem of finding the nth Fibonacci number, which is the sum of the previous
two numbers. The problem is defined formally using the famous Fibonacci recurrence
relation given in eqn (5.29) [115].

Problem 5.8. nth Fibonacci number

Input: n ∈ N
Output:

F(n) = 0                         if n = 0
     = 1                         if n = 1
     = F(n − 1) + F(n − 2)       if n > 1        (5.29)

If one composes a recursive program directly from eqn (5.29), the program will invoke
25 recursive calls to find the sixth Fibonacci number, as depicted in Figure 5.20 (a). Lots of
redundant calls are observed.
In order to avoid massive redundant recursive calls, a memoization technique can be
used. First, declare a global table with only basis case values. Update the table cell when
its value is computed for the first time only and if a table cell is already computed, do not
make recursive calls but simply return the table value. The pseudo code is as follows:
Algorithm 5.18. Fibonacci with memoization
Declare a global table T of size n + 1
Fib(n)
if n = 0, return 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if n = 1, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if T [n] = nil, T [n] = Fib(n − 1) + Fib(n − 2) . . . . . . . . . 3
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
The following simple recursive tabulation formula in eqn (5.30) may serve as a pseudo
code as well.

F_m(n) = 0                                     if n = 0
       = 1                                     if n = 1
       = T[n] = F_m(n − 1) + F_m(n − 2)        if T[n] = nil
       = T[n]                                  if T[n] ≠ nil        (5.30)

where T_{1∼n} is a global table whose values are nil initially.


The memoization-based Algorithm 5.18 makes only 11 recursive calls, as depicted in Fig-
ure 5.20 (b). Table 5.2 compares the number of recursive calls invoked by the naïve recursive
algorithm in eqn (5.29) and the memoization recursive algorithm in eqn (5.30).
If we assume that each recursive call takes 1 nanosecond, the naïve recursive algorithm
in eqn (5.29) would take over 36,000 years to compute F(100). A simple table added to
the recursive algorithm in eqn (5.30) changes the complexity from exponential to linear.
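The same effect can be reproduced in a few lines of Python; lru_cache stands in for the explicit global table of Algorithm 5.18, and the recursion limit is raised because the memoized recursion is n deep. The name fib is illustrative.

import sys
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # nth Fibonacci number, eqn (5.29); the cache replaces the explicit
    # global table of Algorithm 5.18.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.setrecursionlimit(10_000)                 # the memoized recursion is n deep
print(fib(100))                               # 354224848179261915075, cf. Table 5.2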

Leonardo Bonacci (1170 - 1250?), a.k.a. Fibonacci, was an Italian mathematician.
Fibonacci popularized the Hindu-Arabic numeral system in the Western World. The Fibonacci
sequence, an interesting integer sequence that appeared in his book, Liber Abaci (Book
of Calculation), is named after him. © Portrait is in public domain.

(a) Naïve recursion tree: F(6) calls F(5) and F(4); F(5) calls F(4) and F(3); and so on
down to F(1) and F(0), re-expanding shared subtrees for 25 calls in total.

(b) Memoization recursion tree: Fm(6) expands only the leftmost chain Fm(5), Fm(4),
Fm(3), Fm(2), Fm(1), Fm(0); every right child is answered from the table, for 11 calls in
total.

Figure 5.20: Recursion trees for the 6th Fibonacci number

Table 5.2: Comparison of the naı̈ve recursive algorithm vs. memoization

n F (n) FRC(n) = nrc(F (n)) nrc(Fm (n))


1 1 1 1
2 1 3 3
3 2 5 5
4 3 9 7
5 5 15 9
6 8 25 11
.. .. .. ..
. . . .
10 55 177 19
25 75025 242785 49
50 12586269025 40730022147 99
75 2111485077978050 68329092458134135 149
100 354224848179261915075 1146295688027634168201 199
125 59425114757512643212875125 192303710926036844937549135 249

Let ‘nrc’ be the function that counts the number of recursive calls of a certain recursive
function. From Table 5.2, nrc(Fm (n)) = 2n − 1 can be easily observed and, thus, the
memoization Algorithm 5.18 takes Θ(n).
The number of recursive calls for the naı̈ve recursive algorithm in eqn (5.29), nrc(F (n)),
or simply FRC, has the following recurrence relation:
nrc(F(n)) = FRC(n) = 1                                    if n = 0 or 1
                   = FRC(n − 1) + FRC(n − 2) + 1          if n > 1        (5.31)

Since nrc(F (n)) > F (n) for any n > 1, the naı̈ve recursive algorithm in eqn (5.29) takes
exponential time. It should be noted that FRC in eqn (5.31) is closely related to the Fi-
bonacci tree number, or simply FTN, which was presented earlier in eqn (3.33). Recurrence
relations for FRC in eqn (5.31) and FTN in eqn (3.33) differ only in the basis case.

5.6.2 Kibonacci Number


The generalized order-k Fibonacci numbers, also known as Kibonacci numbers, have
been defined in many different ways. A couple of them are considered here and later in
the exercises. One version of the generalized Fibonacci number defines the nth Kibonacci
number as the sum of two terms: the (n − 1)th term and the (n − k)th term. It is abbreviated
as KB2, as it involves only two terms, and is recursively defined as follows:

Problem 5.9. Kibonacci number (KB2)

Input: n ∈ Z and k ∈ Z⁺
Output:

KB2(n, k) = KB2(n − 1, k) + KB2(n − k, k)    if n ≥ k
          = 1                                if 0 < n < k
          = 0                                if n ≤ 0        (5.32)

When k = 2, it is the same as the Fibonacci number in Problem 5.8. When k = 3,
it is called the Tribonacci number. When k = 8, it is called the Octonacci number. The first
values (n = 0 ∼ 15) for k = 2 ∼ 8 are given in Table 5.3.

Table 5.3: Kibonacci from Fibonacci (k = 2) to Octonacci (k = 8)


n 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ···
KB2(n, 2) 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 ···
KB2(n, 3) 0 1 1 1 2 3 4 6 9 13 19 28 41 60 88 129 ···
KB2(n, 4) 0 1 1 1 1 2 3 4 5 7 10 14 19 26 36 50 ···
KB2(n, 5) 0 1 1 1 1 1 2 3 4 5 6 8 11 15 20 26 ···
KB2(n, 6) 0 1 1 1 1 1 1 2 3 4 5 6 7 9 12 16 ···
KB2(n, 7) 0 1 1 1 1 1 1 1 2 3 4 5 6 7 8 10 ···
KB2(n, 8) 0 1 1 1 1 1 1 1 1 2 3 4 5 6 7 8 ···

An algorithm based on the strong inductive programming paradigm can be stated as


follows:

Algorithm 5.19. Dynamic Kibonacci

KB2(n, k)
Declare a table T_{0∼n} . . . . . . . . . . . . . . . . . . . . . . . . . 1
T[0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 to k − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
  T[i] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = k to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
  T[i] = T[i − 1] + T[i − k] . . . . . . . . . . . . . . . . . . . . 6
return T[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Both computational time and space complexities of Algorithm 5.19 are Θ(n). If k = 2,
it solves the Fibonacci Problem 5.8.
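A direct Python rendering of Algorithm 5.19; the function name kb2 is illustrative.

def kb2(n, k):
    # Kibonacci KB2(n, k), eqn (5.32): the (n-1)th term plus the
    # (n-k)th term, with 0 for n <= 0 and 1 for 0 < n < k.
    if n <= 0:
        return 0
    T = [0] * (n + 1)
    for i in range(1, min(k, n + 1)):
        T[i] = 1
    for i in range(k, n + 1):
        T[i] = T[i - 1] + T[i - k]
    return T[n]

print([kb2(n, 3) for n in range(16)])
# [0, 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88, 129], cf. Table 5.3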
An algorithm based on the memoization technique can be stated as in eqn (5.33), assuming
a table of size n is declared globally.



0


if n = 0
1 if 0 < n < k
KB2(n) = (5.33)
T [n] = KB2(n − 1) + KB2(n − k) if T [n] = nil


T [n] if T [n] =
6 nil

where T1∼n is a global table whose values are nil initially.

Both the computational time and space complexities of the memoization algorithm in
eqn (5.33) are Θ(n).

5.6.3 Lucas Sequences


Another way of generalizing the Fibonacci sequence in eqn (5.29) is the Lucas sequence [49,
pp. 393-411], or simply LUS, as given in eqn (5.34).

0
 if n = 0
U (n, p, q) = 1 if n = 1 (5.34)

pU (n − 1) − qU (n − 2) if n > 1


2
 if n = 0
V (n, p, q) = p if n = 1 (5.35)

pV (n − 1) − qV (n − 2) if n > 1

Lucas sequence II, or simply LUS2, has different basis cases, as given in eqn (5.35). LUS
and LUS2 are formally defined in Problems 5.10 and 5.11, respectively.
The pseudo codes of strong inductive programming and the memoization method for the Lucas
sequence Problem 5.10 are stated in Algorithms 5.20 and 5.22, respectively. The pseudo
codes of strong inductive programming and the memoization method for the Lucas sequence II
Problem 5.11 are stated in Algorithms 5.21 and 5.23, respectively. Their computational
time and space complexities are Θ(n).
The first nine closed polynomial forms of Lucas sequence and Lucas sequence II are given
in Figure 5.21 (a) and (b), respectively. The Lucas sequence and Lucas sequence II coefficients
shall be dealt with in Chapter 6 as Lucas sequence triangles. The Lucas sequence is a generalized
form of the Fibonacci number: when p = 1 and q = −1, U(n, 1, −1) = F_n. Other sequences with
different p and q values are summarized in Table 5.4, and designing algorithms for them is
left for the exercises.

Édouard Lucas (1842-1891) was a French mathematician. He is best known for his
study of the Fibonacci sequence. The related Lucas sequences and Lucas numbers are
named after him. He is also known for Lucas's primality tests and the Lucas-Lehmer
primality test. © Photograph is in public domain.

Problem 5.10. nth Lucas Sequence
Input: n ∈ N, p and q ∈ Z
Output: U(n, p, q) in eqn (5.34)

Problem 5.11. nth Lucas Sequence II
Input: n ∈ N, p and q ∈ Z
Output: V(n, p, q) in eqn (5.35)

Algorithm 5.20. Lucas sequence

LUS(n, p, q)
Declare a table T0∼n . . . . . . . . 1
T[0] = 0 . . . . . . . . 2
T[1] = 1 . . . . . . . . 3
for i = 2 ∼ n . . . . . . . . 4
  T[i] = pT[i − 1] − qT[i − 2] . . . . . . . . 5
return T[n] . . . . . . . . 6

Algorithm 5.21. Lucas sequence II

LUS2(n, p, q)
Declare a table T0∼n . . . . . . . . 1
T[0] = 2 . . . . . . . . 2
T[1] = p . . . . . . . . 3
for i = 2 ∼ n . . . . . . . . 4
  T[i] = pT[i − 1] − qT[i − 2] . . . . . . . . 5
return T[n] . . . . . . . . 6

Algorithm 5.22. Lucas sequence

Declare a global table T1∼n
LUS(n, p, q)
if n = 0, return 0 . . . . . . . . 1
if n = 1, return 1 . . . . . . . . 2
if n > 1 ∧ T[n] = 0, . . . . . . . . 3
  T[n] = pLUS(n − 1, p, q) − qLUS(n − 2, p, q) . . . . . . . . 4
return T[n] . . . . . . . . 5

Algorithm 5.23. Lucas sequence II

Declare a global table T1∼n
LUS2(n, p, q)
if n = 0, return 2 . . . . . . . . 1
if n = 1, return p . . . . . . . . 2
if n > 1 ∧ T[n] = 0, . . . . . . . . 3
  T[n] = pLUS2(n − 1, p, q) − qLUS2(n − 2, p, q) . . . . . . . . 4
return T[n] . . . . . . . . 5
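As a concrete rendering (an illustrative sketch, not the book's code), Algorithms 5.20 and 5.21 translate to Python almost line by line:

def lus(n, p, q):
    # Algorithm 5.20: U(0) = 0, U(1) = 1, U(i) = p*U(i-1) - q*U(i-2).
    T = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        T[i] = p * T[i - 1] - q * T[i - 2]
    return T[n]

def lus2(n, p, q):
    # Algorithm 5.21: V(0) = 2, V(1) = p, V(i) = p*V(i-1) - q*V(i-2).
    T = [2, p] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        T[i] = p * T[i - 1] - q * T[i - 2]
    return T[n]

For example, lus(10, 1, -1) returns the Fibonacci number F10 = 55 and lus2(10, 1, -1) returns the Lucas number L10 = 123, matching the p = 1, q = −1 row of Table 5.4.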

Figure 5.21: Lucas sequences for n = 0 ∼ 8

(a) Lucas sequence:
n : U(n, p, q)
0 : 0
1 : 1
2 : p
3 : p^2 − q
4 : p^3 − 2pq
5 : p^4 − 3p^2 q + q^2
6 : p^5 − 4p^3 q + 3pq^2
7 : p^6 − 5p^4 q + 6p^2 q^2 − q^3
8 : p^7 − 6p^5 q + 10p^3 q^2 − 4pq^3

(b) Lucas sequence II:
n : V(n, p, q)
0 : 2
1 : p
2 : p^2 − 2q
3 : p^3 − 3pq
4 : p^4 − 4p^2 q + 2q^2
5 : p^5 − 5p^3 q + 5pq^2
6 : p^6 − 6p^4 q + 9p^2 q^2 − 2q^3
7 : p^7 − 7p^5 q + 14p^3 q^2 − 7pq^3
8 : p^8 − 8p^6 q + 20p^4 q^2 − 16p^2 q^3 + 2q^4
Table 5.4: Lucas sequences

p   q    Lucas sequence U(n, p, q)                  Lucas sequence II V(n, p, q)
1   −1   Fibonacci num. FIB, eqn (5.29)             Lucas num. LUC, eqn (5.29)
2   −1   Pell num. PLN, eqn (5.67)                  Pell-Lucas num. PLL, eqn (5.70)
1   −2   Jacobsthal num. JCN, eqn (5.73)            Jacobsthal-Lucas num. JCL, eqn (5.78)
2   1    n, FVN, eqn (5.66)                         2 (constant)
3   2    Mersenne num. MSN, eqn (5.82)              Mersenne-Lucas num. MSL, eqn (5.86)
6   1    square root of square triangular num. STNr, eqn (5.94)    —

5.6.4 Memoization in Divide & Conquer


The superiority of the memoization technique over strong inductive programming is
often observed when it is combined with the divide and conquer paradigm. Here is another
beautiful algorithm for Problem 5.8 of finding the nth Fibonacci number, which combines
both memoization and the divide and conquer paradigm. The Fibonacci sequence is one of
the most intensively studied problems in number theory. Among numerous identities, the
following two equations can be observed from the Fibonacci sequence:
Theorem 5.8. Fibonacci halving identities

F_n^2 + F_{n−1}^2 = F_{2n−1}    (5.36)
F_n^2 + 2 F_n F_{n−1} = F_{2n}    (5.37)

Proof. (by strong induction) Basis: when n = 1, eqn (5.36) holds as (F_1^2 + F_0^2 = F_1) ⇒ (1 + 0 = 1) and eqn (5.37) holds as (F_1^2 + 2 F_1 F_0 = F_2) ⇒ (1 + 0 = 1).
Inductive step: Assume eqns (5.36) and (5.37) are true for all 1 ≤ k ≤ n, and show eqns (5.38) and (5.39).

F_{n+1}^2 + F_n^2 = F_{2n+1}    (5.38)
F_{n+1}^2 + 2 F_{n+1} F_n = F_{2n+2}    (5.39)

For eqn (5.38),

F_{n+1}^2 + F_n^2 = (F_n + F_{n−1})^2 + F_n^2                          by eqn (5.29)
                  = F_n^2 + 2 F_n F_{n−1} + F_{n−1}^2 + F_n^2          by binomial expansion
                  = F_{2n} + F_{2n−1}                                  by strong assumptions
                  = F_{2n+1}                                           goal by eqn (5.29)

For eqn (5.39),

F_{n+1}^2 + 2 F_{n+1} F_n = (F_n + F_{n−1})^2 + 2 (F_n + F_{n−1}) F_n  by eqn (5.29)
                          = F_n^2 + 2 F_n F_{n−1} + F_{n−1}^2 + 2 F_n^2 + 2 F_n F_{n−1}
                          = F_{2n−1} + F_{2n} + F_{2n}                 by strong assumptions
                          = F_{2n} + F_{2n+1} = F_{2n+2}               goal by eqn (5.29)

∴ eqns (5.36) and (5.37) are true by strong induction. □


Using the eqns (5.36) and (5.37) in the Fibonacci halving identity Theorem 5.8, the
following divide recurrence relationship can be derived:



F_n = 0                                      if n = 0
      1                                      if n = 1
      F_{n/2}^2 + 2 F_{n/2} F_{n/2−1}        if n is even and > 1
      F_{⌈n/2⌉}^2 + F_{⌊n/2⌋}^2              if n is odd and > 1
(5.40)

A naïve divide and conquer algorithm based on eqn (5.40) divides the problem into two roughly half-sized sub-problems. The combining step takes constant time. Hence, the computational time complexity is T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + Θ(1) and, thus, Θ(n) according to the Master Theorem 3.9. There is no superiority over the linear time strong inductive programming algorithm yet.
A recursion tree is given in Figure 5.22 (a), and there are massive redundant subtrees
identified in the shaded areas. These redundant subtrees can be eliminated by the memo-
ization technique. A pseudo code is stated as follows:

Algorithm 5.24. FIB with memoization + D&C

Declare a global table T1∼n initially nil except for T[1] = 1

Fib(n)
if n = 0, return 0 . . . . . . . . 1
if n = 1, return 1 . . . . . . . . 2
if T[n] = nil and n is even, . . . . . . . . 3
  ls = Fib(n/2 − 1) . . . . . . . . 4
  rs = Fib(n/2) . . . . . . . . 5
  T[n] = rs × rs + 2 × ls × rs . . . . . . . . 6
if T[n] = nil and n is odd, . . . . . . . . 7
  ls = Fib(⌊n/2⌋) . . . . . . . . 8
  rs = Fib(⌈n/2⌉) . . . . . . . . 9
  T[n] = ls × ls + rs × rs . . . . . . . . 10
return T[n] . . . . . . . . 11

Analyzing the computational time complexity of Algorithm 5.24, which combines the memoization technique and the divide & conquer paradigm, is quite perplexing. If n = 2k is even or n = 2k − 1 splits as o + e where o < e, the grandchildren nodes of the left and right subtrees are identical; the right grandchild nodes are already computed and stored while processing the left subtree and need not be computed again. A left-leg tree whose right subtrees have constant size is constructed, as exemplified in Figure 5.22 (a). If n = 2k + 1 splits as e + o where e < o, the grandchildren nodes of the left and right subtrees differ, but there are only three unique grandchildren nodes. Despite the perplexing analysis, the running time is surprisingly Θ(log n). Figure 5.22 (b) provides great insight: the dashed area indicates the eliminated redundant subtrees.

Theorem 5.9. Algorithm 5.24 takes Θ(log n) time.

Proof. Consider a divide recurrence tree, as exemplified in Figure 5.22 (b). At the lth level, there can be up to three unique numbers, {⌊n/2^l⌋ − 1, ⌊n/2^l⌋, ⌊n/2^l⌋ + 1}, and each subtree is visited only once. Hence, there are a constant number of nodes at each level - up to six in the worst case, to be exact. Therefore, Algorithm 5.24 takes Θ(log n) time. □

Astute readers might notice that only two Fibonacci numbers need to be computed at each level, since the third one can easily be found by the Fibonacci recurrence relation. Only line 9 in Algorithm 5.24 needs to be modified so that only the left-leg part in Figure 5.22 is computed recursively. The pseudo code for the left-leg-only algorithm is given as follows:
Figure 5.22: Divide recursion trees for Fibonacci numbers.
(a) The divide recursion tree for F31: both children of every node expand again, and massive redundant subtrees appear (the shaded areas of the original figure). (b) The divide memoization recursion tree for F33: the unique values per level are {F33}, {F16, F17}, {F7, F8, F9}, {F3, F4, F5}, {F1, F2, F3}, and {F0, F1} at levels 0 through log n = 5; the dashed area marks the eliminated redundant subtrees.



Algorithm 5.25. Left leg only FIB with memo + D&C

Declare a global table T1∼n initially nil except for T[1] = 1

Fib(n)
if n = 0, return 0 . . . . . . . . 1
if n = 1, return 1 . . . . . . . . 2
if T[n] = nil and n is even, . . . . . . . . 3
  ls = Fib(n/2 − 1) . . . . . . . . 4
  rs = Fib(n/2) . . . . . . . . 5
  T[n] = rs × rs + 2 × ls × rs . . . . . . . . 6
if T[n] = nil and n is odd, . . . . . . . . 7
  ls = Fib(⌊n/2⌋) . . . . . . . . 8
  if ⌈n/2⌉ is odd, . . . . . . . . 9
    rs = ls + T[⌊n/2⌋ − 1] . . . . . . . . 10
    T[⌈n/2⌉] = rs . . . . . . . . 11
  else rs = Fib(⌈n/2⌉) . . . . . . . . 12
  T[n] = ls × ls + rs × rs . . . . . . . . 13
return T[n] . . . . . . . . 14

Although it makes two half-sized recursive calls, the second call terminates in constant time, as its grandchildren nodes are already computed. The only troublesome case is n = 2k + 1, which splits into even + odd where the odd part is greater. Instead of making a recursive call, the algorithm returns immediately in that case, as given in lines 10 and 11 of Algorithm 5.25. Hence, Algorithm 5.25 clearly takes T(n) = T(n/2) + O(1) = Θ(log n), as only the left-leg subtrees need to be computed recursively.
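The essence of Algorithms 5.24 and 5.25 can be captured in a few lines of Python (an illustrative sketch, using a dictionary in place of the global table T1∼n):

def fib(n, T=None):
    # Memoization + divide & conquer via the halving identities (5.36) and (5.37).
    if T is None:
        T = {0: 0, 1: 1}
    if n not in T:
        h = n // 2
        ls = fib(h - 1, T)                    # F(h-1)
        rs = fib(h, T)                        # F(h)
        if n % 2 == 0:
            T[n] = rs * rs + 2 * ls * rs      # F(2h) = F(h)^2 + 2 F(h) F(h-1)
        else:
            hi = fib(h + 1, T)                # F(h+1); Algorithm 5.25 gets it as ls + F(h-1)
            T[n] = rs * rs + hi * hi          # F(2h+1) = F(h)^2 + F(h+1)^2
    return T[n]

Only Θ(log n) distinct subproblems are ever stored; fib(33) computes F33 = 3524578 after filling roughly a dozen table entries.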

5.7 Directed Acyclic Graph Problems


A directed acyclic graph, or simply DAG, is a directed graph without directed cycles.
There are either no or a finite number of paths from any vertex, vx , to another vertex, vy .
If there were directed cycles in a graph, the number of paths between certain vertices would
be infinite. An example of a directed acyclic graph with seven vertices or nodes and twelve
directed edges or arcs is shown in Figure 5.23 (a).
A directed acyclic graph representation, as given in Figure 5.23, is no different from that of the regular graphs in Chapter 4. The rows and columns in an adjacent matrix indicate the from and to vertices, respectively. For example, to represent an arc (v2, v4), the entry in v2's row and v4's column becomes 1 while the entry in v4's row and v2's column remains 0.
There are two kinds of adjacent lists. Out-going and in-coming adjacent list representa-
tions are given in Figure 5.23 (c), respectively. Let arc from(vx ) and arc to(vx ) be sets or
lists of out-going arcs from vx and in-coming arcs to vx , respectively.

arc from(vx ) = {vy | ( vx , vy ) ∈ E} (5.41)


arc to(vx ) = {vy | ( vy , vx ) ∈ E} (5.42)

For example in Figure 5.23 (a), the vertex v3 has a list of adjacent out-going vertices
arc from(v3 ) = {v4 , v5 , v6 , v7 } as {(v3 , v4 ), (v3 , v5 ), (v3 , v6 ), (v3 , v7 )} ⊂ E, and a list of ad-
jacent in-coming vertices arc to(v3 ) = {v1 , v2 } as {(v1 , v3 ), (v2 , v3 )} ⊂ E. An important
property regarding these sets or lists is their cardinality:

Σ_{vx∈V} |arc_to(vx)| = Σ_{vx∈V} |arc_from(vx)| = |E|    (5.43)

Figure 5.23: A sample directed acyclic graph and its representations

(a) a sample DAG with arcs (v1, v2), (v1, v3), (v1, v5), (v2, v3), (v2, v4), (v3, v4), (v3, v5), (v3, v6), (v3, v7), (v4, v6), (v5, v7), (v6, v7)

(b) adjacent matrix (rows are from vertices, columns are to vertices, in the order v1 ∼ v7):
v1: 0 1 1 0 1 0 0
v2: 0 0 1 1 0 0 0
v3: 0 0 0 1 1 1 1
v4: 0 0 0 0 0 1 0
v5: 0 0 0 0 0 0 1
v6: 0 0 0 0 0 0 1
v7: 0 0 0 0 0 0 0

(c) in-coming and out-going adjacent lists:
indeg(vx)  arc_to(vx)       vx   arc_from(vx)        outdeg(vx)
0          {}               v1   {v2, v3, v5}        3
1          {v1}             v2   {v3, v4}            2
2          {v1, v2}         v3   {v4, v5, v6, v7}    4
2          {v2, v3}         v4   {v6}                1
2          {v1, v3}         v5   {v7}                1
2          {v3, v4}         v6   {v7}                1
3          {v3, v5, v6}     v7   {}                  0

While the former, arc_from(vx), was used to represent a graph in Chapter 4, the latter, arc_to(vx), is more advantageous, especially in this chapter, and shall be used. The root node has an empty list; arc_to(vr) = ∅.
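As a concrete illustration (a sketch, not from the text), the sample DAG of Figure 5.23 (a) and both of its adjacent lists can be built in Python directly from eqns (5.41) and (5.42):

# The twelve arcs of the sample DAG in Figure 5.23 (a), vertices numbered 1..7.
E = [(1, 2), (1, 3), (1, 5), (2, 3), (2, 4), (3, 4), (3, 5), (3, 6), (3, 7),
     (4, 6), (5, 7), (6, 7)]
V = list(range(1, 8))

arc_from = {v: [y for (x, y) in E if x == v] for v in V}   # eqn (5.41)
arc_to   = {v: [x for (x, y) in E if y == v] for v in V}   # eqn (5.42)

# eqn (5.43): both lists account for every arc exactly once.
assert sum(len(arc_from[v]) for v in V) == len(E)
assert sum(len(arc_to[v]) for v in V) == len(E)
assert arc_to[1] == []   # v1 has in-degree 0 and can serve as the source

The same arc list E is reused in the sketches that follow in this section.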
To understand the directed acyclic graph further, consider the problem of determining
whether a directed graph is a DAG.
Problem 5.12. isDAG(G)
Input: a directed graph, G = (V, E)
Output: True if G has no directed cycle; False otherwise

Figure 5.24: Deadlock examples. (a) A poor student dilemma: a directed cycle among Education, Good Job, and Money. (b) A bumper-to-bumper traffic deadlock.

Although a directed cycle detection algorithm, such as the depth-first-search in [169], would solve the problem, a topological sort order can also be used to check whether a graph is a DAG. A directed graph is a DAG if and only if there exists a topologically sorted order of its vertices, as in eqn (5.44).

isDAG(G) = T    if ∃V′ such that isTopoSort(V′, G) = T, where V′ is a permutation of V
           F    otherwise
(5.44)

If a directed graph contains a directed cycle, the vertices cannot be ordered and, thus, a deadlock is introduced. A poor student dilemma is depicted as a directed graph with a cycle in Figure 5.24 (a): a poor student needs money to pay for education; in order to have money, a good job is required; but a good job requires a good education. Figure 5.24 (b) shows an example of a bumper-to-bumper traffic deadlock. Detecting a deadlock is of great importance, especially in operating systems [157].

5.7.1 Topological Sorting


Consider a precedence graph whose nodes are courses and arcs represent the pre-requisite
relation between courses. If a student takes one course per semester, he or she would like
to find a valid order of courses such that no pre-requisite constraints are violated. For the toy example in Figure 5.23 (a), a couple of valid topologically sorted lists are given in Figure 5.25 (a) and (b). Notice that all arcs point to the right and no arc points to the left. A couple of invalid topologically sorted lists are given in Figure 5.25 (c) and (d). There are arcs pointing to the left, and thus they are invalid.

Figure 5.25: Sample topologically sorted lists.
(a) ⟨v1, v2, v3, v4, v5, v6, v7⟩ (valid)    (b) ⟨v1, v2, v3, v4, v6, v5, v7⟩ (valid)
(c) ⟨v1, v2, v7, v3, v4, v5, v6⟩ (invalid)  (d) ⟨v2, v1, v3, v4, v7, v5, v6⟩ (invalid)

This problem is called the topological sorting problem; it was first formally studied in the context of the PERT technique for scheduling in project management in the early 1960s [88].

Problem 5.13. Topological sorting


Input: a DAG G = (V, E)
Output: a sequence T of size |V | such that ∀(vx , vy ) ∈ E, idx(vx , T ) < idx(vy , T )

Although there exist Θ(|V| + |E|) algorithms, to be discussed in Chapter 7, here is a simple greedy algorithm that repeatedly selects a node whose in-degree is 0 and updates the candidates. Since it was first described in [91], it is often called Kahn's Algorithm.

Algorithm 5.26. Topological sort (Kahn’s Algorithm)


declare a table T1∼n where n = |V | . . . . . . . . . . . . . . . 1
i = 0 ............................................. 2
for each v ∈ V whose indeg(v) = 0 . . . . . . . . . . . . . . . . 3
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
ti = v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
V = V − {v} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
E = E − {(a, b) ∈ E | a = v} . . . . . . . . . . . . . . . . . . . 7
return T1∼n or false if i 6= n . . . . . . . . . . . . . . . . . . . . . . . 8

If the graph contains a cycle, Algorithm 5.26 returns false. Algorithm 5.26 successfully
produces a topological sorted list of all vertices if the input graph is a directed acyclic graph.
To understand Algorithm 5.26 better, imagine that vertices and arcs represent courses and
pre-requisite relationships, respectively. Suppose a student must complete all courses. He
or she may start taking a course that has no pre-requisite course in the beginning. Once a
course is taken, remove it along with all its pre-requisite constraints. This results in a new
smaller topological sorting problem with remaining courses. Repeat this process until all
courses are taken, as illustrated in Figure 5.35 on page 292.
In lines 6 and 7 of Algorithm 5.26, removing the selected vertex and its out-going arcs changes the candidate vertices' in-degrees, which must be reevaluated. This step takes O(|V|) and, thus, the computational time complexity of Algorithm 5.26 is O(|V|²). If the DAG is represented by an adjacent list, it takes O(|V| + |E|).
Step by step illustrations of topological sorting Algorithm 5.26 in Figure 5.35 on page 292
also provide the sequence of the counter sub-graphs. At each ith step, the partial solution is
the solution for the respective counter sub-graph. The sequence of these counter sub-graphs
plays a central role in designing strong inductive programming algorithms for many DAG
related problems in the following remaining sub-sections. Since Figure 5.35 should be handy
for illustrating many other forthcoming algorithms as a template or worksheet, it is placed
at the end of this chapter on page 292.
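A Python sketch of Algorithm 5.26 (illustrative; it rescans the arc list on every removal, which keeps the code short at the price of the O(|V|²)-style bound discussed above):

def topological_sort(V, E):
    # Kahn's Algorithm: repeatedly output a vertex of in-degree 0 and
    # remove it together with its out-going arcs.
    indeg = {v: 0 for v in V}
    for (x, y) in E:
        indeg[y] += 1
    ready = [v for v in V if indeg[v] == 0]    # current candidates
    order = []
    while ready:
        v = ready.pop()
        order.append(v)
        for (x, y) in E:
            if x == v:                         # removing arc (v, y)
                indeg[y] -= 1
                if indeg[y] == 0:
                    ready.append(y)
    return order if len(order) == len(V) else False   # False signals a cycle

For the DAG of Figure 5.23, topological_sort(list(range(1, 8)), E) returns [1, 2, 3, 5, 4, 6, 7], one of the many valid orders; on a graph with a directed cycle it returns False.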

5.7.2 Number of paths problem


Given a DAG, the number of paths, or simply NPP, problem is to find the number of all
possible paths from a given source node vs to all other vertices. A path between s and t ver-
tices, path(s, t), can be defined as a sequence of vertices in which each vertex is adjacent to the next one, or as a sequence of directed edges (arcs) which connect s to t. For example, in Figure 5.26 (a), path(v1, v4) includes three sequences: {⟨v1, v2, v4⟩, ⟨v1, v3, v4⟩, ⟨v1, v2, v3, v4⟩} or {⟨(v1, v2), (v2, v4)⟩, ⟨(v1, v3), (v3, v4)⟩, ⟨(v1, v2), (v2, v3), (v3, v4)⟩}. For simplicity's sake, a sequence of adjacent vertices is used by default to denote a path unless otherwise noted. The number of paths from a source node to a target node is shown outside the respective target node. path(s, t) is defined recursively as in eqn (5.45):

path(s, t) ⊃ {⟨s, t⟩}                               if (s, t) ∈ E
path(s, t) ⊃ {⟨s, path(x, t)⟩ | (s, x) ∈ E}         if x ≠ s and path(x, t) ≠ ∅
path(s, t) = ∅                                      otherwise
(5.45)

The problem of finding the number of paths takes a DAG and a source node, s, as inputs.
A source node can typically be assumed to be a vertex whose in-degree equals zero. The
output is a table of all vertices, x, with their respective cardinality of the set, |path(s, x)|.

Figure 5.26: Number of Paths Problem example
(a) A sample DAG with np(v1, x) shown next to each vertex: v1:1, v2:1, v3:2, v4:3, v5:3, v6:5, v7:10. (b) Backward thinking: with np(s, x) = 6, np(s, y) = 4, and np(s, z) = 7 known at the immediate predecessors of t, np(s, t) = 6 + 4 + 7 = 17. (c) Topological order steps to solve the NPP forward. (d) Algorithm 5.27 illustration:
np(v1, v1) = 1
np(v1, v2) = np(v1, v1) = 1
np(v1, v3) = np(v1, v1) + np(v1, v2) = 2
np(v1, v4) = np(v1, v2) + np(v1, v3) = 3
np(v1, v5) = np(v1, v1) + np(v1, v3) = 3
np(v1, v6) = np(v1, v3) + np(v1, v4) = 5
np(v1, v7) = np(v1, v3) + np(v1, v5) + np(v1, v6) = 10

The problem is formally defined as follows:

Problem 5.14. Number of Paths Problem (NPP)

Input: a DAG and a source node s ∈ V


Output: a table of (x, |path(s, x)|) for ∀x ∈ V

To come up with an algorithm using strong inductive programming, first think backward.
For a target vertex t, assume that there are three arcs to t from three other vertices, and
solutions of all previous vertices are already solved and stored in a table. As depicted in
Figure 5.26 (b), immediate vertices’ solutions are known for vertices, x, y, and z. All paths
from s to t must pass through one of these three vertices. Thus np(s, t) must be the sum
of three solutions: np(s, x) + np(s, y) + np(s, z). The following recurrence relation can be
derived, with a basis case when the source and target vertices are the same: there is only one way from a node to itself.

np(s, t) = 1                              if s = t
           0                              if s ≠ t ∧ indeg(t) = 0
           Σ_{(x,t)∈E} np(s, x)           if s ≠ t ∧ indeg(t) > 0
(5.46)

To solve for a target vertex t, the recurrence relation in eqn (5.46) requires solutions for all
vertices x in arcs (x, t) that point to the vertex t. Now the problem can be solved forward,
starting from the basis case, as illustrated in Figure 5.26 (c). Note that if the vertices are solved in a topologically valid sorted order, the solutions of all predecessors of any vertex t are already computed and stored in the table. An algorithm based on the strong
inductive programming paradigm can be stated as follows:

Algorithm 5.27. Dynamic number of paths


np(DAG, s)
Declare a table T of size n whose elements are 0's initially . . . . . . . . 1
V′ = topological_sort(V) . . . . . . . . 2
T[v′1] = 1 (note v′1 = s) . . . . . . . . 3
for i = 2 to n . . . . . . . . 4
  for each (v′x, v′i) ∈ E . . . . . . . . 5
    T[v′i] = T[v′i] + T[v′x] . . . . . . . . 6
return T . . . . . . . . 7

Algorithm 5.27 is illustrated in Figure 5.26 (c) and (d) using the toy example in Figure 5.26 (a). The table is filled out in a topologically valid order, i.e., solving forward. Lines 5 and 6 identify all adjacent vertices that have an arc to the current vertex, and they can be replaced with a simple formula by eqn (5.46): T[v′i] = Σ_{(v′x, v′i)∈E} T[v′x]. Lines 4 ∼ 6 take O(|V|²) or O(|V| + |E|) if the DAG is represented by an adjacent matrix or an adjacent list, respectively. Note that n = |V| is the number of vertices in the input DAG. Hence, Algorithm 5.27 takes O(|V| + |E|), or simply O(n²).
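An illustrative Python sketch of Algorithm 5.27 (assuming, for brevity, that the vertex list is already topologically sorted with the source first):

def num_paths(V_sorted, E, s):
    # Algorithm 5.27 / eqn (5.46): fill the table in topological order.
    T = {v: 0 for v in V_sorted}
    T[s] = 1
    for v in V_sorted[1:]:
        T[v] = sum(T[x] for (x, y) in E if y == v)   # sum over in-coming arcs
    return T

E = [(1, 2), (1, 3), (1, 5), (2, 3), (2, 4), (3, 4), (3, 5), (3, 6), (3, 7),
     (4, 6), (5, 7), (6, 7)]
# Reproduces Figure 5.26 (d): {1: 1, 2: 1, 3: 2, 4: 3, 5: 3, 6: 5, 7: 10}
print(num_paths([1, 2, 3, 4, 5, 6, 7], E, 1))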

5.7.3 Shortest Path Length in DAG


The path length between two vertices is the number of arcs in a path from one vertex
to the other. The shortest path length problem in DAG, or simply SPL-dag(s, DAG), is
to find the minimum path length among all possible paths from a given source node vs to
all other vertices. For example, in Figure 5.27 (a), SPL-dag(v1 , v4 ) is 2 for the shortest
path hv1 , v2 , v4 i since there are only two arcs (v1 , v2 ) and (v2 , v4 ). All other path lengths
between v1 and v4 are equal to or greater than 2. The shortest path length problem in
DAG, SPL-dag(s, DAG), is formally defined as follows:
Problem 5.15. Shortest path length problem (SPL-dag)
Input: a DAG and a source node s ∈ V
Output: a table of (x, min(length(path(s, x)))) for ∀x ∈ V
To come up with an algorithm using strong inductive programming, first think backward.
As depicted in Figure 5.27 (b), assume that the immediate vertices' solutions are known for vertices x, y, and z. The path between s and t must pass through either x, y, or z, and spl(s, t) adds one more arc to the minimum of spl(s, x), spl(s, y), and spl(s, z); with the values 6, 4, and 7 in Figure 5.27 (b), spl(s, t) = min(6, 4, 7) + 1 = 5. Hence, the following recurrence relation can be derived, where ∞ means that there is no path from s:

spl(s, t) = 0                                  if s = t
            ∞                                  if s ≠ t ∧ indeg(t) = 0
            min_{(x,t)∈E} spl(s, x) + 1        if s ≠ t ∧ indeg(t) > 0
(5.47)

Figure 5.27: Shortest Path Length problem example
(a) A sample DAG with spl(v1, x) shown next to each vertex: v1:0, v2:1, v3:1, v4:2, v5:1, v6:2, v7:2. (b) Backward thinking. (c) Topological order steps to solve the SPL forward. (d) Algorithm 5.28 illustration:
spl(v1, v1) = 0
spl(v1, v2) = min(spl(v1, v1)) + 1 = 1
spl(v1, v3) = min(spl(v1, v1), spl(v1, v2)) + 1 = 1
spl(v1, v4) = min(spl(v1, v2), spl(v1, v3)) + 1 = 2
spl(v1, v5) = min(spl(v1, v1), spl(v1, v3)) + 1 = 1
spl(v1, v6) = min(spl(v1, v3), spl(v1, v4)) + 1 = 2
spl(v1, v7) = min(spl(v1, v3), spl(v1, v5), spl(v1, v6)) + 1 = 2

As depicted in Figure 5.27 (c), an algorithm based on the strong inductive programming
paradigm can be stated as follows:

Algorithm 5.28. Dynamic shortest path length on DAG


SPL(DAG, s)
Declare a table T1∼n whose elements are ∞'s initially . . . . . . . . 1
V′ = topological_sort(V) . . . . . . . . 2
T[v′1] = 0 (note v′1 = s) . . . . . . . . 3
for i = 2 to n . . . . . . . . 4
  for each (v′x, v′i) ∈ E . . . . . . . . 5
    if T[v′i] > T[v′x] + 1 . . . . . . . . 6
      T[v′i] = T[v′x] + 1 . . . . . . . . 7
return T . . . . . . . . 8

Algorithm 5.28 is illustrated in Figure 5.27 (c) and (d) using the toy example in Figure 5.27. Lines 5 ∼ 7 can be replaced with a simple formula by eqn (5.47): T[v′i] = min_{(v′x, v′i)∈E} T[v′x] + 1. Algorithm 5.28 takes O(|V| + |E|), or simply O(n²), by the same reasoning as that in the previous Algorithm 5.27.

5.7.4 Shortest Path Cost in Weighted DAG

Figure 5.28: A sample weighted directed acyclic graph and its representations

(a) a sample weighted DAG with arcs (v1, v2, 2), (v1, v3, 1), (v1, v5, 4), (v2, v3, 3), (v2, v4, 10), (v3, v4, 2), (v3, v5, 2), (v3, v6, 4), (v3, v7, 8), (v4, v6, 6), (v5, v7, 5), (v6, v7, 1), where the third component of each triple is the arc weight

(b) weighted adjacent matrix (rows are from vertices, columns are to vertices, in the order v1 ∼ v7):
v1:  0  2  1  ∞  4  ∞  ∞
v2:  ∞  0  3 10  ∞  ∞  ∞
v3:  ∞  ∞  0  2  2  4  8
v4:  ∞  ∞  ∞  0  ∞  6  ∞
v5:  ∞  ∞  ∞  ∞  0  ∞  5
v6:  ∞  ∞  ∞  ∞  ∞  0  1
v7:  ∞  ∞  ∞  ∞  ∞  ∞  0

(c) weighted adjacent list (in-degrees in parentheses):
(0) v1 → {(v2, 2), (v3, 1), (v5, 4)}
(1) v2 → {(v3, 3), (v4, 10)}
(2) v3 → {(v4, 2), (v5, 2), (v6, 4), (v7, 8)}
(2) v4 → {(v6, 6)}
(2) v5 → {(v7, 5)}
(2) v6 → {(v7, 1)}
(3) v7 → {}

In a weighted DAG, each arc has its weight. Let w(vx , vy ) denote the cost for the arc
(vx , vy ). For example, in Figure 5.28(a), the weighted DAG can be represented either in a
weighted adjacency matrix or a weighted adjacency list, as shown in Figure 5.28(b) and (c),
respectively.
Imagine that one must pay the cost of an arc if one has to use the arc. For a flight
planning example, the shortest path cost problem deals with minimizing the total cost of
connecting flights between two airports, while the shortest path length problem deals with
minimizing the number of connecting flights between two airports.

The shortest path cost problem in a weighted DAG, or simply SPC-wdag(s, wDAG), is to find the minimum path cost among all possible paths from a given source node vs to all other vertices. For example, in Figure 5.29 (a), SPC-wdag(v1, v4) is 3 for the shortest path ⟨v1, v3, v4⟩ since the sum of these two arc costs is w(v1, v3) + w(v3, v4) = 1 + 2 = 3. All other path costs between v1 and v4 are greater than three. The shortest path cost problem in a weighted DAG, SPC-wdag(s, wDAG), is formally defined as follows:

Problem 5.16. Shortest path cost problem (SPC-wdag)
Input: a wDAG and a source node s ∈ V
Output: a table of (x, min(sumw(path(s, x)))) for ∀x ∈ V,
where sumw(P) = Σ_{i=1}^{l−1} w(p_i, p_{i+1}) and l = length(P)

To come up with an algorithm using strong inductive programming, first think backward. As depicted in Figure 5.29 (b), assume that the immediate vertices' solutions are known for vertices x, y, and z. The path between s and t must pass through either x, y, or z, and spc(s, t) must be the minimum of spc(s, x) + w(x, t), spc(s, y) + w(y, t), and spc(s, z) + w(z, t). Hence, spc(s, t) = 8 + 1 = 9 in Figure 5.29 (b). The following recurrence relation can be derived:

spc(s, t) = 0                                         if s = t
            ∞                                         if s ≠ t ∧ indeg(t) = 0
            min_{(x,t)∈E} (spc(s, x) + w(x, t))       if s ≠ t ∧ indeg(t) > 0
(5.48)

As depicted in Figure 5.29 (c), an algorithm based on the strong inductive programming
paradigm can be stated as follows:

Algorithm 5.29. Dynamic shortest path cost

SPC(DAG, s)
Declare a table T1∼n whose elements are ∞'s initially . . . . . . . . 1
V′ = topological_sort(V) . . . . . . . . 2
T[v′1] = 0 (note v′1 = s) . . . . . . . . 3
for i = 2 to n . . . . . . . . 4
  for each (v′x, v′i) ∈ E . . . . . . . . 5
    if T[v′i] > T[v′x] + w(v′x, v′i) . . . . . . . . 6
      T[v′i] = T[v′x] + w(v′x, v′i) . . . . . . . . 7
return T . . . . . . . . 8

Algorithm 5.29 is illustrated in Figure 5.29 (c) and (d) using the toy example in Figure 5.29. Lines 5 ∼ 7 can be replaced with the following simple formula by eqn (5.48):

T[v′i] = min_{(v′x, v′i)∈E} (T[v′x] + w(v′x, v′i))

The computational time complexity of Algorithm 5.29 is O(|V| + |E|), or simply O(n²), by the same reasoning as that in the previous Algorithm 5.27.
If the input source node is another node, the output might be different. Table 5.5 shows
the shortest path costs with various source vertices.
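An illustrative Python sketch of Algorithm 5.29 for the weighted DAG of Figure 5.28, with arcs stored as (from, to, weight) triples:

import math

def spc_wdag(V_sorted, W, s):
    # Algorithm 5.29 / eqn (5.48): take the cheapest in-coming extension.
    T = {v: math.inf for v in V_sorted}
    T[s] = 0
    for v in V_sorted[1:]:
        costs = [T[x] + w for (x, y, w) in W if y == v]
        if costs:
            T[v] = min(costs)
    return T

W = [(1, 2, 2), (1, 3, 1), (1, 5, 4), (2, 3, 3), (2, 4, 10), (3, 4, 2),
     (3, 5, 2), (3, 6, 4), (3, 7, 8), (4, 6, 6), (5, 7, 5), (6, 7, 1)]
# Reproduces the first row of Table 5.5: {1: 0, 2: 2, 3: 1, 4: 3, 5: 3, 6: 5, 7: 6}
print(spc_wdag([1, 2, 3, 4, 5, 6, 7], W, 1))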
Figure 5.29: Shortest Path Cost problem example
(a) The sample weighted DAG of Figure 5.28 with spc(v1, x) shown next to each vertex: v1:0, v2:2, v3:1, v4:3, v5:3, v6:5, v7:6. (b) Backward thinking. (c) Topological order steps to solve the SPC forward. (d) Algorithm 5.29 illustration:
spc(v1, v1) = 0
spc(v1, v2) = min(spc(v1, v1) + w(v1, v2)) = 2
spc(v1, v3) = min(spc(v1, v1) + w(v1, v3), spc(v1, v2) + w(v2, v3)) = 1
spc(v1, v4) = min(spc(v1, v2) + w(v2, v4), spc(v1, v3) + w(v3, v4)) = 3
spc(v1, v5) = min(spc(v1, v1) + w(v1, v5), spc(v1, v3) + w(v3, v5)) = 3
spc(v1, v6) = min(spc(v1, v3) + w(v3, v6), spc(v1, v4) + w(v4, v6)) = 5
spc(v1, v7) = min(spc(v1, v3) + w(v3, v7), spc(v1, v5) + w(v5, v7), spc(v1, v6) + w(v6, v7)) = 6

5.7.5 Minimum Spanning Rooted Tree

A DAG may contain multiple nodes whose in-degree is zero. Let rDAG be a rooted
directed acyclic graph where there is only one node r with zero in-degree and there exists
a path to all other nodes from r. Figure 5.30 shows several spanning rooted trees of the
sample rDAG given in Figure 5.28 (a). Each arc in a spanning rooted tree can be interpreted
as a parent-of relation. Thus, there are exactly n − 1 arcs in a spanning rooted tree, as only the exceptional node, r, lacks a parent node.
Before defining a minimum spanning rooted tree problem, spanning rooted trees can be
evaluated in two different ways. The first one is the same way as the one in the regular
minimum spanning tree Problem 4.14. It is to minimize the sum of edge weights in a
spanning rooted tree and is formally defined as follows:

Table 5.5: DAG shortest path cost with different roots.


s v1 v2 v3 v4 v5 v6 v7
v1 0 2 1 3 3 5 6
v2 ∞ 0 3 5 5 7 8
v3 ∞ ∞ 0 2 2 4 5
v4 ∞ ∞ ∞ 0 ∞ 6 7
v5 ∞ ∞ ∞ ∞ 0 ∞ 5
v6 ∞ ∞ ∞ ∞ ∞ 0 1
v7 ∞ ∞ ∞ ∞ ∞ ∞ 0

Problem 5.17. Minimum spanning rooted tree with edge weight
Input: a rooted DAG G = (V, E) where v1 is the root
Output: a spanning rooted tree T of G such that Σ_{vx∈V} w(par(vx), vx) is minimized

As depicted in Figure 5.31 (b), assume that a minimum spanning rooted tree covering the vertices up to the immediate predecessors x, y, and z of t is known. A new minimum spanning rooted tree extended to t in a topologically sorted order must contain one of the arcs (x, t), (y, t), or (z, t), and it selects the one whose weight is the minimum of w(x, t), w(y, t), and w(z, t). Hence, the parent node of each vertex except for the root node can be determined by the following equation:

par(s, t) = r                                  if s = t
            argmin_{(x,t)∈E} (w(x, t))         if s ≠ t ∧ indeg(t) > 0
            not a rooted DAG                   if s ≠ t ∧ indeg(t) = 0
(5.49)

As illustrated in Figure 5.31 (c) and (d), an algorithm can be stated as follows:
Algorithm 5.30. Minimum spanning rooted tree
MSRT(DAG, s)
Declare a table T1∼n . . . . . . . . 1
V′ = topological_sort(V) . . . . . . . . 2
for i = 2 to n . . . . . . . . 3
  T[v′i] = argmin_{(v′x, v′i)∈E} w(v′x, v′i) . . . . . . . . 4
return T . . . . . . . . 5
Since Algorithm 5.30 never utilizes any previous solutions stored in the table to solve the next sub-problem, it is hard to categorize it under the strong inductive programming paradigm, even though its pseudo code resembles those of the other graph problems involving DAGs.
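An illustrative Python sketch of Algorithm 5.30, using the same (from, to, weight) arc triples; note how each parent choice is purely local, which is why the table of previous solutions is never consulted:

def msrt(V_sorted, W):
    # Algorithm 5.30 / eqn (5.49): par(v) is the tail of v's cheapest in-coming arc.
    par = {V_sorted[0]: None}                 # the root has no parent
    for v in V_sorted[1:]:
        inc = [(w, x) for (x, y, w) in W if y == v]
        par[v] = min(inc)[1]                  # argmin over in-coming arc weights
    return par

W = [(1, 2, 2), (1, 3, 1), (1, 5, 4), (2, 3, 3), (2, 4, 10), (3, 4, 2),
     (3, 5, 2), (3, 6, 4), (3, 7, 8), (4, 6, 6), (5, 7, 5), (6, 7, 1)]
# Reproduces Figure 5.31 (d): {1: None, 2: 1, 3: 1, 4: 3, 5: 3, 6: 3, 7: 6}
print(msrt([1, 2, 3, 4, 5, 6, 7], W))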
To make it more interesting and design an algorithm based on the strong inductive
programming paradigm, consider the following modified spanning rooted tree problem. This
second problem is to find a spanning rooted tree whose sum of path costs of all paths from
r to each node is minimized. Let pc(v1 , vx ) be the path cost, which is the sum of weights in
the path from the root node v1 to vx in a spanning rooted tree T .
pc(v1, vx) = Σ_{i=1}^{|P|−1} w(p_i, p_{i+1})    where P is the path in T from v1 to vx    (5.50)
Figure 5.30: Sample spanning rooted trees of the rDAG in Figure 5.28 (a) and their representations

(a) a minimum spanning rooted tree in respect of the sum of edge weights:
par(v2) = v1, par(v3) = v1, par(v4) = v3, par(v5) = v3, par(v6) = v3, par(v7) = v6;
Σ w(par(vx), vx) = 12 and Σ pc(v1, vx) = 20

(b) a spanning rooted tree:
par(v2) = v1, par(v3) = v1, par(v4) = v2, par(v5) = v1, par(v6) = v4, par(v7) = v5;
Σ w(par(vx), vx) = 28 and Σ pc(v1, vx) = 46

(c) a maximum spanning rooted tree in respect of the sum of edge weights:
par(v2) = v1, par(v3) = v2, par(v4) = v2, par(v5) = v1, par(v6) = v4, par(v7) = v3;
Σ w(par(vx), vx) = 33 and Σ pc(v1, vx) = 54

(d) a maximum spanning rooted tree in respect of the sum of path costs:
par(v2) = v1, par(v3) = v2, par(v4) = v2, par(v5) = v3, par(v6) = v4, par(v7) = v6;
Σ w(par(vx), vx) = 24 and Σ pc(v1, vx) = 63

For example, in Figure 5.30 (a), pc(v1 , v7 ) = 1 + 4 + 1 = 6. This second version of the
minimum spanning rooted tree problem is formally defined as follows:

Problem 5.18. Minimum spanning rooted tree with path cost
Input: a rooted DAG G = (V, E) where v1 is the root
Output: a spanning rooted tree T of G such that Σ_{vx∈V} pc(v1, vx) is minimized

When the sum of path costs of all vertices to the root node is used instead of the sum of
edge costs in the tree, the problem becomes the same as the shortest path cost Problem 5.16
considered on page 262. Hence, Algorithm 5.29 also solves the minimum spanning rooted
tree with path cost Problem 5.18, as illustrated in Figures 5.32 (c) and (d). As long as the
input DAG is a rooted DAG, Algorithm 5.29 connects all vertices without creating a cycle.
Figure 5.31: Minimum spanning rooted tree problem example
(a) A sample MSrT for the weighted DAG of Figure 5.28 (a). (b) Backward thinking: the parent of t is the tail of its cheapest in-coming arc. (c) Topological order steps to solve the MSrT forward. (d) Table of par(x) to solve the MSrT forward:
par(v1) = r
par(v2) = argmin(w(v1, v2)) = v1
par(v3) = argmin(w(v1, v3), w(v2, v3)) = v1
par(v4) = argmin(w(v2, v4), w(v3, v4)) = v3
par(v5) = argmin(w(v1, v5), w(v3, v5)) = v3
par(v6) = argmin(w(v3, v6), w(v4, v6)) = v3
par(v7) = argmin(w(v3, v7), w(v5, v7), w(v6, v7)) = v6

5.7.6 Critical Path Problem

Suppose that we would like to prepare a roasted turkey for a Thanksgiving party. There
are eight required tasks and they are represented with the time that they take. Certain
tasks are dependent on other tasks, and these dependencies are represented as a DAG, as
depicted in Figure 5.33. How fast can all of these tasks be completed if tasks can be done
in parallel? This problem is known as a critical path problem, or simply CPP, because it
depends on a critical path or bottleneck route in a DAG. In the roasted turkey example,
the highlighted critical path, H → P → C → S takes 186 minutes in total. Even if the task
of making gravy can be done earlier, one cannot start enjoying the meal until the turkey is
roasted.

Before formally defining the problem, sketching out its inputs and outputs, as given in Figures 5.34 (a) and (b), helps formulate the problem.
Figure 5.32: Minimum spanning rooted tree with path cost problem example
(a) A sample MSrT. (b) Backward thinking. (c) Topological order steps to solve the MSrT forward. (d) Tables of par(vx) and spc(v1, vx) filled forward; the final rows are par = ⟨r, v1, v1, v3, v3, v3, v6⟩ and spc = ⟨0, 2, 1, 3, 3, 5, 6⟩ for v1 ∼ v7.

Problem 5.19. Critical Path Problem (CPP-dag)
Input: a DAG and a table Tv1∼vn where tvx is the time to complete vx
Output: a critical path P that maximizes Σ_{x∈P} t_x, where vx and vy ∈ V and P = ⟨vx, · · · , vy⟩ if a path from vx to vy exists

To come up with an algorithm using strong inductive programming, a recurrence relation


by thinking backward is helpful. As depicted in Figure 5.34 (c), assume that immediate
vertices’ solutions are known for vertices, x, y, and z. The critical path between s and t,
cpath(s, t), must pass through either x, y, or z and it must be the maximum of cpath(s, x),
cpath(s, y), and cpath(s, z) plus time(t). Hence, cpath(s, t) = 14 in Figure 5.34 (c). The
following recurrence relation can be derived:

cpath(s, t) = time(t)                                    if s = t
              max_{(x,t)∈E} cpath(s, x) + time(t)        if s ≠ t
(5.51)
Figure 5.33: Roasted turkey example: Critical path problem. The eight tasks and their times in minutes are H (wash hands, 1), P (preheat oven, 30), M (melt butter, 1), R (remove giblets, 5), B (brush butter, 5), G (make gravy, 15), C (roast turkey, 150), and S (serve, 5); the highlighted critical path H → P → C → S takes 186 minutes in total.

As illustrated in Figure 5.34 (d) and (e), an algorithm based on the strong inductive pro-
gramming paradigm can be stated as follows: Let’s assume that the source vertex s = v1 .

Algorithm 5.31. Critical path method


Cpath(DAG, Tv1∼vn)
Declare a table P1∼n whose elements are ∞'s initially . . . . . . . . 1
V′ = topological_sort(V) . . . . . . . . 2
P[v′1] = T[v′1] . . . . . . . . 3
for i = 2 to n . . . . . . . . 4
  P[v′i] = max_{(v′x, v′i)∈E} P[v′x] + T[v′i] . . . . . . . . 5
return P . . . . . . . . 6

Algorithm 5.31 takes O(|V| + |E|), or simply O(n²), by the same reasoning as that in the previous Algorithm 5.27.
Algorithm 5.31 was developed in the late 1950s and is referred to as the critical path method in [98] or the Program Evaluation and Review Technique in the US Navy [129, p.98].
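An illustrative Python sketch of Algorithm 5.31, using the vertex times read off Figure 5.34 (a):

def critical_path(V_sorted, E, time):
    # Algorithm 5.31 / eqn (5.51): longest time-weighted path, solved forward.
    P = {V_sorted[0]: time[V_sorted[0]]}
    for v in V_sorted[1:]:
        P[v] = max(P[x] for (x, y) in E if y == v) + time[v]
    return P

E = [(1, 2), (1, 3), (1, 5), (2, 3), (2, 4), (3, 4), (3, 5), (3, 6), (3, 7),
     (4, 6), (5, 7), (6, 7)]
time = {1: 1, 2: 2, 3: 4, 4: 2, 5: 3, 6: 3, 7: 2}
# Reproduces Figure 5.34 (e): {1: 1, 2: 3, 3: 7, 4: 9, 5: 10, 6: 12, 7: 14}
print(critical_path([1, 2, 3, 4, 5, 6, 7], E, time))

Replacing max with min (and the times with arc weights) recovers the shortest path sketches of the previous subsections, which highlights how all of these DAG problems share one strong inductive template.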

5.8 Exercises
Q 5.1. Identify any mistake in the following proof by an ordinary induction and suggest
any necessary change.

a). For Theorem 3.1 on page 93, without the assumption that n is an exact power of 2.

Proof. Basis: When n = 1; T (1) = 1 by eqn. (3.4), and T (1) = 2 × 1 − 1 = 1.


Inductive step: Supposing that T(n) = 2n − 1 is true, show T(n + 1) = 2n + 1.
T(n + 1) = 2T((n + 1)/2) + 1 = 2(2 · (n + 1)/2 − 1) + 1 = 2n + 1. □

b). For Question 3.1 b) on page 94, without the assumption that n is an exact power of
2.
Figure 5.34: Critical Path method illustration
(a) Adjacent list as input, with task times in parentheses: (1) v1 → {v2, v3, v5}; (2) v2 → {v3, v4}; (4) v3 → {v4, v5, v6, v7}; (2) v4 → {v6}; (3) v5 → {v7}; (3) v6 → {v7}; (2) v7 → {}. (b) Output: cpath values 1, 3, 7, 9, 10, 12, 14 for v1 ∼ v7. (c) Backward thinking: cpath(s, t) = max(10, 7, 12) + 2 = 14. (d) Topological order steps to solve the CPP forward. (e) Algorithm 5.31 illustration:
cpath(v1, v1) = time(v1) = 1
cpath(v1, v2) = max(cpath(v1, v1)) + time(v2) = 3
cpath(v1, v3) = max(cpath(v1, v1), cpath(v1, v2)) + time(v3) = 7
cpath(v1, v4) = max(cpath(v1, v2), cpath(v1, v3)) + time(v4) = 9
cpath(v1, v5) = max(cpath(v1, v1), cpath(v1, v3)) + time(v5) = 10
cpath(v1, v6) = max(cpath(v1, v3), cpath(v1, v4)) + time(v6) = 12
cpath(v1, v7) = max(cpath(v1, v3), cpath(v1, v5), cpath(v1, v6)) + time(v7) = 14



Proof. Basis: When n = 1; T (1) = 1 by eqn. (3.5), and T (1) = 2 × 1 − 1 = 1.


Inductive step: Supposing that T (n) = 2n − 1 is true, show T (n + 1) = 2n + 1.
n+1
T (n + 1) = T ((n + 1)/2) + n + 1 = 2 − 1 + n + 1 = 2n + 1. 
2

Q 5.2. Consider the following 4-5 Stamp Theorem 5.10:

Theorem 5.10. Every amount of postage of 12 cents or more can be formed using just
4-cent and 5-cent stamps.

(illustration: postage amounts of n = 12c, 13c, and 14c formed using 4-cent and 5-cent stamps)

a). Prove Theorem 5.10 using a strong induction.

b). Formulate the 4-5 Stamp problem to find a pair of non-negative integers (x, y) such
that 4x + 5y = n where n ≥ 12.

c). Come up with a strong inductive programming algorithm for the 4-5 Stamp problem.

d). Illustrate the algorithm devised in c) to find (x, y) such that 4x + 5y = 23.

e). Analyze the computational complexities of the algorithm proposed in c).

f). What is the Frobenius number? g(4, 5) =?

Q 5.3. Prove the following theorems using strong induction.

a). The parity of the nth Fibonacci number

Theorem 5.11. The parity of the nth Fibonacci number, Fn:

odd(Fn) = 0    if n % 3 = 0
          1    otherwise
(5.52)

b). Theorem 4.16: The minimum number of internal nodes in a k-ary tree, stated on
page 197.
The closed formula in the eqn (4.12) is equivalent to the eqn (4.11).

c). The recurrence relation with a square root as a diminishing function, I.

Theorem 5.12. The recurrence relation in eqn (5.53) is equivalent to

T (n) = Θ(log log n)


T(n) = 0                if n = 2
       T(√n) + 1        if n > 2
(5.53)

d). The recurrence relation with a square root as a diminishing function, II.

Theorem 5.13. The recurrence relation in eqn (5.54) is equivalent to

T (n) = Θ(n log log n)


T(n) = 0                     if n = 2
       √n · T(√n) + √n       if n > 2
(5.54)

Q 5.4. Suppose that there are only two kinds of missiles, with 3 points for the blue missile
and 7 points for the red missile in Figure 4.12 on page 166.

a). Formulate the problem of winning ways with only two missiles to get n points, W2(n).
Hint: Problem 5.7 defined on page 239.

b). Devise a strong inductive programming algorithm to find the number of ways to get
n points.

c). Illustrate your strong inductive programming algorithm to find W2(16).

d). Analyze the computational complexities of the algorithm proposed in b).

Q 5.5. Recall the unbounded equality knapsack minimization problem, considered as an


exercise Q 4.18 on page 205 for the three missile game depicted in Figure 4.12 on page 166.
There are three missiles (blue, red, and yellow) with different points gained and energy
required. The problem, V (n), is to score exactly n points using as little energy as possible.
Note that if the points gained fall short of or exceed n, it is a loss. Exactly n points earned with the least energy is a winner.

missile blue red yellow


energy E 1 3 4
point P 1 5 6

a). Construct a table of all sub-solutions for V (n) where n = 0 ∼ 11 using the above toy
example.

b). Derive a higher order recurrence relation for the problem.

c). Devise a strong inductive programming algorithm.

d). Analyze the computational complexities of the algorithm proposed in c).

e). Devise a memoization algorithm.

f). Analyze the computational complexities of the algorithm proposed in e).

Q 5.6. Instead of the exact n points required in the above exercise Q 5.5, consider the problem, V(n), of scoring at least n points using as little energy as possible. This problem is the unbounded knapsack minimization problem, considered as an exercise Q 4.15 on page 204.

missile blue red yellow


energy E 1 3 4
point P 1 5 6

a). Construct a table of all sub-solutions for V (n) where n = 0 ∼ 11 using the above toy
example.

b). Derive a higher order recurrence relation for the problem.

c). Devise a strong inductive programming algorithm.

d). Analyze the computational complexities of the algorithm proposed in c).

e). Devise a memoization algorithm.

f). Analyze the computational complexities of the algorithm proposed in e).

Q 5.7. Consider the k missile example depicted in Figure 4.12 on page 166. Suppose we
would like to maximize the score but all n energy must be consumed. If any energy is left,
it is a loss. This problem is the unbounded knapsack equality problem, or simply UKE.

missile blue red yellow


energy E 1 3 4
point P 4 14 20

a). Formulate the problem.

b). Construct a table of all sub-solutions for UKE(n) where n = 0 ∼ 11 using the above
toy example.

c). Derive a higher order recurrence relation for the problem.

d). Devise a strong inductive programming algorithm.

e). Analyze the computational complexities of the algorithm proposed in d).

f). Devise a memoization algorithm.

g). Analyze the computational complexities of the algorithm proposed in f).

Q 5.8. Postage stamp equality minimization problem: Suppose that there are (k = 3) kinds
of stamps, A = h7, 5, 1i, and we are to make nc with the minimum number of stamps.

(k = 3) kinds of stamps

a). Derive a higher order recurrence relation for the problem.



b). Devise a strong inductive programming algorithm.


c). Illustrate the algorithm proposed in b) where n = 10c.
d). Analyze the computational complexities of the algorithm proposed in b).
e). Devise a memoization algorithm.
f). Analyze the computational complexities of the algorithm proposed in e).

Q 5.9. Postage stamp equality maximization problem: Suppose that there are (k = 3)
kinds of stamps, A = h3, 5, 7i, and we are to make nc with the maximum number of stamps.

(k = 3) kinds of stamps

a). Derive a higher order recurrence relation for the problem.


b). Devise a strong inductive programming algorithm.
c). Illustrate the algorithm proposed in b) where n = 14c.
d). Analyze the computational time and space complexities of the algorithm proposed in
b).
e). Devise a memoization algorithm.
f). Analyze the computational complexities of the algorithm proposed in e).
g). Illustrate the algorithm proposed in e) where n = 14c.
h). What is the Frobenius number? g(3, 5, 7) =?

Q 5.10. There are (k = 3) kinds of canned foods, represented by their weights: A =


h4, 5, 7i. Suppose that an astronaut would like to carry as many canned foods as possible,
i.e., maximize the quantity, but the total weight of the canned foods cannot exceed nkg in
order to launch the spaceship safely.

4kg 5kg 7kg

a). Derive a higher order recurrence relation for the problem.


b). Devise a strong inductive programming algorithm.
c). Illustrate the algorithm proposed in b) where n = 11 and A = h4, 5, 7i.
d). Analyze the computational time and space complexities of the algorithm proposed in
b).
e). Prove that the recurrence relation in a) is equivalent to ⌊n / min(A1∼k)⌋.

Q 5.11. Suppose that a fast-food restaurant sells chicken nuggets in packs of 4, 6 and 9.
One has to get at least (m = 11) chicken nuggets, but not too many.

Three kinds of McNugget boxes: 4, 6, and 9 pieces

a). Derive a higher order recurrence relation for the problem.


b). Devise a strong inductive programming algorithm.
c). Illustrate the algorithm proposed in b) where m = 11 and A = h4, 6, 9i.
d). Analyze the computational time and space complexities of the algorithm proposed in
b).
e). Devise a memoization algorithm.
f). Illustrate the algorithm proposed in e) where m = 11 and A = h4, 6, 9i.
g). Analyze the computational complexities of the algorithm proposed in e).

Q 5.12. Suppose that a fast-food restaurant sells chicken nuggets in packs of 4, 6 and 9.
One has to get at most (m = 11) chicken nuggets. This unbounded subset sum maximization
problem was considered on page 205.

Three kinds of McNugget boxes: 4, 6, and 9 pieces

a). Derive a higher order recurrence relation for the problem.


b). Devise a strong inductive programming algorithm.
c). Illustrate the algorithm proposed in b) where m = 11 and A = h4, 6, 9i.
d). Analyze the computational time and space complexities of the algorithm proposed in
b).
e). Devise a memoization algorithm.
f). Illustrate the algorithm proposed in e) where m = 11 and A = h4, 6, 9i.
g). Analyze the computational complexities of the algorithm proposed in e).

Q 5.13. Consider the rod cutting minimization problem considered in exercise 4.19 on
page 206.

a). Derive a higher order recurrence relation for the problem.


b). Devise a strong inductive programming algorithm.
c). Illustrate the algorithm proposed in b) where n = 11 and C = h3, 3, 5, 7, 8i.
d). Analyze the computational time and space complexities of the algorithm proposed in
b).
e). Devise a memoization algorithm.
f). Devise a memoization algorithm for the rod cutting (maximization) problem where a higher order recurrence relation is given in eqn (5.13) on page 228.

Q 5.14. Consider the following divide recurrence relation:


T(n) = 1                   if n = 1
       T(⌊n/2⌋) + 1        if n > 1
(5.55)

The computational time complexities for numerous algorithms, such as Algorithm 3.13 and
Algorithm 3.14, have this divide recurrence relation.

a). Prove that the solution of the divide recurrence in eqn. (5.55) is T (n) = blog nc + 1
for any positive integer n using strong induction. (Note that this proof may serve as
a full proof for eqn (3.36) in exercise Q 3.1).
b). Devise a strong inductive programming algorithm to find T (n) in eqn. (5.55).
c). Illustrate the algorithm proposed in b) for T (15).
d). Analyze the computational time and space complexities of the algorithm proposed in
b).
e). Devise a memoization algorithm to find T (n) in eqn (5.55).
f). Illustrate the algorithm proposed in e) for T (122).
g). Analyze the computational time and space complexities of the algorithm proposed in
e).

Q 5.15. Consider the following divide recurrence relation:


T(n) = 1                         if n = 1
       T(⌈(n − 1)/2⌉) + 1        if n > 1
(5.56)

The computational time complexities for the binary search Algorithm 3.10 and bisection
method 3.11 have this divide recurrence relation.

a). Devise a strong inductive programming algorithm to find T (n) in eqn. (5.56).
b). Illustrate the algorithm proposed in a) for T (15).
c). Analyze the computational time and space complexities of the algorithm proposed in
a).

d). Prove Theorem 5.14 using strong induction. (Note that this proof may serve as a full
proof for eqn (3.36) in exercise Q 3.1).
Theorem 5.14. The solution of the divide recurrence in eqn. (5.56) is T (n) =
blog nc + 1 for any positive integer n.

e). Devise a memoization algorithm to find T (n) in eqn (5.56).


f). Illustrate the algorithm proposed in e) for T (15).
g). Analyze the computational time and space complexities of the algorithm proposed in
e).
h). Illustrate the recursive programming algorithm based on eqn (5.56) for T (15).
i). Analyze the computational time and space complexities of the recursive programming
algorithm illustrated in h).
j). Devise a tail recursion algorithm based on eqn (5.56).
k). Illustrate the algorithm proposed in j) for T (9).
l). Analyze the computational time and space complexities of the algorithm proposed in
j).

Q 5.16. Consider the following divide recurrence relation:


T(n) = 1                         if n = 1
       T(⌈n/2⌉) + 2⌊n/2⌋         if n > 1
(5.57)

a). Devise a strong inductive programming algorithm to find T (n) in eqn. (5.57).
b). Illustrate the algorithm proposed in a) for T (15).
c). Analyze the computational time and space complexities of the algorithm proposed in
a).
d). Prove Theorem 5.15 using strong induction. (Note that this proof may serve as a full
proof for eqn (3.5) in exercise Q 3.1).
Theorem 5.15. The solution of the divide recurrence in eqn. (5.57) is T (n) = 2n − 1
for any positive integer n.

e). Devise a memoization algorithm to find T (n) in eqn (5.57).


f). Illustrate the algorithm proposed in e) for T (15).
g). Analyze the computational time and space complexities of the algorithm proposed in
e).
h). Illustrate the recursive programming algorithm based on eqn (5.57) for T (15).
i). Analyze the computational time and space complexities of the recursive programming
algorithm illustrated in h).

j). Devise a tail recursion algorithm based on eqn (5.57).


k). Illustrate the algorithm proposed in j) for T (9).
l). Analyze the computational time and space complexities of the algorithm proposed in
j).
Q 5.17. Consider the following divide recurrence relation:

0
 if n = 0
T (n) = 1 if n = 1 (5.58)
  n−1 
T ( 2 ) + T ( n−1
  
2 ) + 1 if n > 1

a). Devise a strong inductive programming algorithm to find T (n) in eqn. (5.58).
b). Illustrate the algorithm proposed in a) for T (15).
c). Analyze the computational time and space complexities of the algorithm proposed in
a).
d). Prove Theorem 5.16 using strong induction. (Note that this proof may serve as a full proof for Theorem 3.1 on page 93.)

Theorem 5.16. The solution of the divide recurrence in eqn. (5.58) is T(n) = n for any positive integer n.

e). Devise a memoization algorithm to find T (n) in eqn (5.58).


f). Illustrate the algorithm proposed in e) for T (61).
g). Analyze the computational time and space complexities of the algorithm proposed in
e).

Q 5.18. Consider the following divide recurrence relation:


T(n) = 1                                  if n = 1
       T(⌊n/2⌋) + T(⌈n/2⌉) + n            if n > 1
(5.59)

a). Devise a strong inductive programming algorithm to find T (n) in eqn. (5.59).
b). Illustrate the algorithm proposed in a) for T (15).
c). Analyze the computational time and space complexities of the algorithm proposed in
a).
d). Prove that the solution of the divide recurrence in eqn. (5.59) is the same as eqn (5.60) for any positive integer n using strong induction.

T(n) = n⌈log n⌉ + 2n − 2^⌈log n⌉    (5.60)

(Note that this proof may serve as a full proof for Theorem 3.4 on page 99.)

e). Devise a memoization algorithm to find T(n) in eqn (5.59).

f). Illustrate the algorithm proposed in e) for T (15).

g). Draw the memoization recursion tree for T (31) when the algorithm provided in e) is
used.

h). Analyze the computational time and space complexities of the algorithm proposed in
e).

Q 5.19. Consider the following divide recurrence relation:


T(n) = 0                                                       if n = 0
       T(⌊(n − 1)/2⌋) + T(⌈(n − 1)/2⌉) + ⌊log n⌋ + 1           if n > 0
(5.61)

a). Devise a strong inductive programming algorithm to find T (n) in eqn. (5.61).

b). Illustrate the algorithm proposed in a) for T (15).

c). Analyze the computational time and space complexities of the algorithm proposed in
a).

d). Devise a memoization algorithm to find T (n) in eqn (5.61).

e). Illustrate the algorithm proposed in d) for T (15).

f). Analyze the computational time and space complexities of the algorithm proposed in
d).

g). Prove that T (n) = Θ(n) in eqn. (5.61) using strong induction. You may use the upper
and lower bounds in eqn (5.62).

$$2n - \lfloor \log n \rfloor - 1 \le T(n) < 3n - \lfloor \log n \rfloor \tag{5.62}$$

(Note that this proof may serve as a full proof for eqn (3.37) in exercise Q 3.1).

h). Can you come up with an exact closed formula for eqn (5.61) instead of the bound in
eqn (5.62)? <Open problem>

Q 5.20. Consider the problem of finding the nth Lucas number, or LUC in short, which is defined in recursion as follows:

$$L(n) = \begin{cases} 2 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ L(n-1) + L(n-2) & \text{if } n > 1 \end{cases} \tag{5.63}$$

a). Devise a strong inductive programming algorithm to compute L(n) in eqn. (5.63).

b). Illustrate the algorithm proposed in a) for L(10), i.e., build a table from index 0 ∼ 10.

c). Analyze the computational time and space complexities of the algorithm proposed in
a).

d). Draw the recursion tree for L(6) based on the recursion in eqn (5.63) and find the
number of recursive calls, i.e., nrc(L(6)).

e). Identify any redundant subtrees in the recursion tree drawn in question d).
f). Devise a memoization algorithm to compute L(n) based on the recursion in eqn (5.63).
g). Prove the Lucas halving identities Theorem 5.17. Note that Ln = L(n).
Theorem 5.17. Lucas halving identities

$$L_n L_{n-1} + (-1)^n = L_{2n-1} \tag{5.64}$$
$$L_n^2 - 2(-1)^n = L_{2n} \tag{5.65}$$
h). Derive a divide recurrence relation based on Theorem 5.17.


i). Devise a divide and conquer algorithm with memoization technique.
j). Illustrate the algorithm provided in i) to compute L31 and L33 .
k). Provide the computational time and space complexities of the algorithm devised in i).
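For parts i) ∼ k), a minimal Python sketch of one possible divide and conquer memoization based on Theorem 5.17 follows; the function name lucas and the use of lru_cache as the memo table are illustrative choices, not the book's pseudocode:

from functools import lru_cache

@lru_cache(maxsize=None)            # the cache plays the role of the memo table
def lucas(n: int) -> int:
    # L(n) via the halving identities (5.64) and (5.65)
    if n == 0:
        return 2
    if n == 1:
        return 1
    m = (n + 1) // 2                # n = 2m - 1 when n is odd, n = 2m when even
    if n % 2 == 1:
        return lucas(m) * lucas(m - 1) + (-1) ** m   # eqn (5.64) with n := m
    return lucas(m) ** 2 - 2 * (-1) ** m             # eqn (5.65) with n := m

print(lucas(31), lucas(33))         # 3010349 7881196

Only Θ(log n) distinct arguments are ever memoized, since each call to lucas(n) reduces to the pair (m, m − 1).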
Q 5.21. Consider the following recurrence relation:

$$N(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ 2N(n-1) - N(n-2) & \text{if } n > 1 \end{cases} \tag{5.66}$$

(See [143] for details about the number.)

a). Devise a strong inductive programming algorithm to compute N (n) in eqn. (5.66).
b). Illustrate the algorithm proposed in a) for N (10), i.e., build a table from index 0 ∼
10.
c). Analyze the computational time and space complexities of the algorithm proposed in
a).
d). Devise a memoization algorithm to compute N (n) based on the recursion in eqn (5.66).
e). Prove Theorem 5.18 using strong induction.
Theorem 5.18. The solution of the recurrence relation in eqn. (5.66) is N (n) = n.

Q 5.22. Consider the problem of finding the nth Pell number, or PLN in short, which is defined in recursion as follows:

$$P(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ 2P(n-1) + P(n-2) & \text{if } n > 1 \end{cases} \tag{5.67}$$

(See [83] for details about the Pell number.)


a). Devise a strong inductive programming algorithm to compute P (n) in eqn. (5.67).
b). Illustrate the algorithms proposed in a) for P (10), i.e., build tables from index 0 ∼ 10.

c). Analyze the computational time and space complexities of the algorithm proposed in
a).

d). Draw the recursion tree for P (6) based on the recursion in eqn (5.67) and find the
number of recursive calls, i.e., nrc(P (6)).

e). Identify any redundant subtrees in the recursion tree drawn in question d).

f). Devise a memoization algorithm to compute P (n) based on the recursion in eqn (5.67).

g). Prove the Pell halving identities Theorem 5.19.


Let Pn be PLN(n).

Theorem 5.19. Pell halving identities

$$P_n^2 + P_{n-1}^2 = P_{2n-1} \tag{5.68}$$
$$2P_n^2 + 2P_n P_{n-1} = P_{2n} \tag{5.69}$$

h). Derive a divide recurrence relation based on Theorem 5.19.

i). Devise a divide and conquer algorithm with memoization technique.

j). Illustrate the algorithm provided in i) to compute P31.

k). Provide the computational time and space complexities of the algorithm devised in i).

Q 5.23. Consider the problem of finding the nth Pell-Lucas number, or PLL in short, which is defined in recursion as follows. Let Qn be PLL(n).

$$Q(n) = \begin{cases} 2 & \text{if } n = 0 \\ 2 & \text{if } n = 1 \\ 2Q(n-1) + Q(n-2) & \text{if } n > 1 \end{cases} \tag{5.70}$$

(See [83] for details about the Pell-Lucas number.)

a). Devise a strong inductive programming algorithm to compute Q(n) in eqn. (5.70).

b). Illustrate the algorithms proposed in a) for Q(10), i.e., build tables from index 0 ∼ 10.

c). Analyze the computational time and space complexities of the algorithm proposed in
a).

d). Devise a memoization algorithm to compute Q(n) based on the recursion in eqn (5.70).

e). Prove the Pell-Lucas halving identities Theorem 5.20.

Theorem 5.20. Pell-Lucas halving identities

$$Q_n Q_{n-1} + 2(-1)^n = Q_{2n-1} \tag{5.71}$$
$$Q_n^2 - 2(-1)^n = Q_{2n} \tag{5.72}$$

f). Derive a divide recurrence relation based on Theorem 5.20.


g). Devise a divide and conquer algorithm with memoization technique.
h). Illustrate the algorithm provided in g) to compute Q31 .
i). Provide the computational time and space complexities of the algorithm devised in g).
Q 5.24. Consider the problem of finding the nth Jacobsthal number, or JCN in short, which is defined in recursion as follows:

$$\mathrm{JCN}(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ \mathrm{JCN}(n-1) + 2\,\mathrm{JCN}(n-2) & \text{if } n > 1 \end{cases} \tag{5.73}$$

(See [23] for details about the Jacobsthal number.)


a). Devise a strong inductive programming algorithm to compute JCN(n) in eqn. (5.73).
b). Illustrate the algorithms proposed in a) for JCN(10), i.e., build tables from index 0 ∼
10.
c). Devise a memoization algorithm to compute JCN(n) based on the recursion in eqn (5.73).
d). Prove that the solution of the recurrence relation in eqn. (5.73) is the same as eqn (5.74) using strong induction.

$$\mathrm{JCN}_1(n) = \begin{cases} 0 & \text{if } n = 0 \\ 2\,\mathrm{JCN}_1(n-1) + 1 & \text{if } n > 0 \text{ and } n \text{ is odd} \\ 2\,\mathrm{JCN}_1(n-1) - 1 & \text{if } n > 0 \text{ and } n \text{ is even} \end{cases} \tag{5.74}$$

e). Devise an inductive programming algorithm to compute JCN(n) based on the first
order recurrence relation in eqn. (5.74).
f). Prove that the solution of the recurrence relation in eqn. (5.73) is the same as eqn (5.75) using strong induction.

$$\mathrm{JCN}_2(n) = \begin{cases} 0 & \text{if } n = 0 \\ 2^{n-1} - \mathrm{JCN}_2(n-1) & \text{if } n > 0 \end{cases} \tag{5.75}$$

g). Devise an inductive programming algorithm to compute JCN(n) based on the first
order recurrence relation in eqn. (5.75).
h). Prove the JCN halving identities Theorem 5.21. Note that Jn = JCN(n).
Theorem 5.21. JCN halving identities

$$J_n^2 + 2J_{n-1}^2 = J_{2n-1} \tag{5.76}$$
$$J_n^2 + 4J_n J_{n-1} = J_{2n} \tag{5.77}$$

i). Derive a divide recurrence relation based on Theorem 5.21.



j). Devise a divide and conquer algorithm with memoization technique.

k). Illustrate the algorithm provided in j) to compute J31.

Q 5.25. Consider the problem of finding the nth Jacobsthal-Lucas number, or JCL in short, which is defined in recursion as follows:

$$\mathrm{JCL}(n) = \begin{cases} 2 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ \mathrm{JCL}(n-1) + 2\,\mathrm{JCL}(n-2) & \text{if } n > 1 \end{cases} \tag{5.78}$$

(See [23] for details about the Jacobsthal-Lucas number.)

a). Devise a strong inductive programming algorithm to compute JCL(n) in eqn. (5.78).

b). Illustrate the algorithms proposed in a) for JCL(10), i.e., build tables from index 0 ∼
10.

c). Devise a memoization algorithm to compute JCL(n) based on the recursion in eqn (5.78).

d). Prove that the solution of the recurrence relation in eqn. (5.78) is the same as eqn (5.79) using strong induction.

$$\mathrm{JCL}_1(n) = \begin{cases} 2 & \text{if } n = 0 \\ 2\,\mathrm{JCL}_1(n-1) - 3 & \text{if } n > 0 \text{ and } n \text{ is odd} \\ 2\,\mathrm{JCL}_1(n-1) + 3 & \text{if } n > 0 \text{ and } n \text{ is even} \end{cases} \tag{5.79}$$

e). Devise an inductive programming algorithm to compute JCL(n) based on the first
order recurrence relation in eqn. (5.79).

f). Prove the JCL halving identities Theorem 5.22.


Let JLn be JCL(n).

Theorem 5.22. JCL halving identities

$$JL_n JL_{n-1} + (-1)^n JL_{n-1} + 1 = JL_{2n-1} \tag{5.80}$$
$$JL_n^2 - 2(-1)^n JL_n + 2 = JL_{2n} \tag{5.81}$$

g). Derive a divide recurrence relation based on Theorem 5.22.

h). Devise a divide and conquer algorithm with memoization technique.

i). Illustrate the algorithm provided in h) to compute JL31 .

Q 5.26. Consider the problem of finding the nth Mersenne number, or MSN in short, which is defined in recursion as follows:

$$\mathrm{MSN}(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ 3\,\mathrm{MSN}(n-1) - 2\,\mathrm{MSN}(n-2) & \text{if } n > 1 \end{cases} \tag{5.82}$$


a). Devise a strong inductive programming algorithm to compute MSN(n) in eqn. (5.82).

b). Illustrate the algorithms proposed in a) for MSN(10), i.e., build tables from index 0 ∼
10.

c). Devise a memoization algorithm to compute MSN(n) based on the recursion in eqn (5.82).

d). Prove that the solution of the recurrence relation in eqn. (5.82) is the same as the first order linear recursion in eqn (5.83) using strong induction.

$$\mathrm{MSN}_1(n) = \begin{cases} 0 & \text{if } n = 0 \\ 2\,\mathrm{MSN}_1(n-1) + 1 & \text{if } n > 0 \end{cases} \tag{5.83}$$

e). Devise an inductive programming algorithm to compute MSN(n) based on the first
order recurrence relation in eqn. (5.83).

f). Prove the MSN halving identities Theorem 5.23.


Let Mn be MSN(n).

Theorem 5.23. MSN halving identities

$$M_n M_{n-1} + M_n + M_{n-1} = M_{2n-1} \tag{5.84}$$
$$M_n^2 + 2M_n = M_{2n} \tag{5.85}$$

g). Derive a divide recurrence relation based on Theorem 5.23.

h). Devise a divide and conquer algorithm with memoization technique.

i). Illustrate the algorithm provided in h) to compute M31 .

Q 5.27. Consider the problem of finding the nth Mersenne-Lucas number, or MSL in short, which is defined in recursion as follows:

$$\mathrm{MSL}(n) = \begin{cases} 2 & \text{if } n = 0 \\ 3 & \text{if } n = 1 \\ 3\,\mathrm{MSL}(n-1) - 2\,\mathrm{MSL}(n-2) & \text{if } n > 1 \end{cases} \tag{5.86}$$

a). Devise a strong inductive programming algorithm to compute MSL(n) in eqn. (5.86).

b). Illustrate the algorithms proposed in a) for MSL(10), i.e., build tables from index 0 ∼
10.

c). Devise a memoization algorithm to compute MSL(n) based on the recursion in eqn (5.86).

d). Prove that the solution of the recurrence relation in eqn. (5.86) is the same as the first order linear recursion in eqn (5.87) using strong induction.

$$\mathrm{MSL}_1(n) = \begin{cases} 2 & \text{if } n = 0 \\ 2\,\mathrm{MSL}_1(n-1) - 1 & \text{if } n > 0 \end{cases} \tag{5.87}$$

e). Devise an inductive programming algorithm to compute MSL(n) based on the first
order recurrence relation in eqn. (5.87).

f). Prove the MSL halving identities Theorem 5.24.


Let MLn be MSL(n).

Theorem 5.24. MSL halving identities

$$ML_n ML_{n-1} - ML_n - ML_{n-1} + 2 = ML_{2n-1} \tag{5.88}$$
$$ML_n^2 - 2ML_n + 2 = ML_{2n} \tag{5.89}$$

g). Derive a divide recurrence relation based on Theorem 5.24.

h). Devise a divide and conquer algorithm with memoization technique.


i). Illustrate the algorithm provided in h) to compute ML31.

Q 5.28. Positive integers that are both square and triangular numbers are called square triangular numbers. The problem of finding the nth square triangular number, or simply STN, has the following recurrence relation:

$$S(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ 34S(n-1) - S(n-2) + 2 & \text{if } n > 1 \end{cases} \tag{5.90}$$

a). Devise a strong inductive programming algorithm to compute STN(n) in eqn. (5.90).

b). Illustrate the algorithms proposed in a) for STN(6), i.e., build tables from index 0 ∼
6.

c). Devise a memoization algorithm to compute STN(n) based on the recursion in eqn (5.90).

d). Illustrate the algorithm proposed in c) for STN(6), i.e., draw the recursion tree.

e). Prove the STN halving identities Theorem 5.25.


Let Sn be STN(n).

Theorem 5.25. STN halving identities

$$(S_n - S_{n-1})^2 = S_{2n-1} \tag{5.91}$$
$$(S_{n+1} - S_{n-1})^2 = 36S_{2n} \tag{5.92}$$

f). Derive a divide recurrence relation based on Theorem 5.25.

g). Devise a divide and conquer algorithm with memoization technique.

h). Illustrate the algorithm provided in g) to compute STN(31). You may omit the large
value computations.

Q 5.29. Consider the problem of finding the square root of the nth square triangular number, or STNr in short.

$$\mathrm{STNr}(n) = \sqrt{\mathrm{STN}(n)} \tag{5.93}$$

It is defined in recursion as follows:

$$\mathrm{STNr}(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ 6\,\mathrm{STNr}(n-1) - \mathrm{STNr}(n-2) & \text{if } n > 1 \end{cases} \tag{5.94}$$

a). Devise a strong inductive programming algorithm to compute STNr(n) in eqn. (5.94).
b). Illustrate the algorithms proposed in a) for STNr(8), i.e., build tables from index 0 ∼ 8.
c). Devise a memoization algorithm to compute STNr(n) based on the recursion in eqn (5.94).
d). Prove the STNr halving identities Theorem 5.26.
Let Sn be STNr(n).
Theorem 5.26. STNr halving identities

$$S_n^2 - S_{n-1}^2 = S_{2n-1} \tag{5.95}$$
$$S_{n+1}^2 - S_{n-1}^2 = 6S_{2n} \tag{5.96}$$

e). Derive a divide recurrence relation based on Theorem 5.26.


f). Devise a divide and conquer algorithm with memoization technique.
g). Illustrate the algorithm provided in f) to compute STNr(31).

Q 5.30. Consider the full Kibonacci number, also known as the k-generalized Fibonacci numbers [63]. It is the sum of all k previous numbers and is defined as follows:
Problem 5.20. Full Kibonacci number (KBF)
Input: n ∈ Z and k ∈ Z+
Output:

$$\mathrm{KBF}(n,k) = \begin{cases} 0 & \text{if } n \le 0 \\ 1 & \text{if } n = 1 \\ \sum_{i=1}^{k} \mathrm{KBF}(n-i,k) & \text{if } n > 1 \end{cases} \tag{5.97}$$

Note that this problem differs from the Kibonacci number Problem 5.9.

a). Devise a strong inductive programming algorithm to find KBF(n, k) in eqn (5.97).
b). Illustrate the algorithm proposed in a) where k = 8 and n = 14, which is the 14th
Octanacci number.
c). Analyze the computational time and space complexities of the algorithm proposed in
a).

d). Devise a memoization algorithm to compute KBF(n, k) in eqn (5.97).


e). Devise a strong inductive programming algorithm to find KBF2 (n, k) in eqn (5.98).


0
 if n ≤ 0
KBF2 (n, k) = 1 if n = 1 or 2 (5.98)

2KBF2 (n − 1, k) − KBF2 (n − (k + 1), k) if n > 2

f). Illustrate the algorithm proposed in e) where k = 8 and n = 14, which is the 14th
Octanacci number.
g). Analyze the computational time and space complexities of the algorithm proposed in
e).
h). Devise a memoization algorithm to compute KBF2 (n, k) in eqn (5.98).
i). Prove that eqn. (5.97) is equivalent to eqn. (5.98).

KBF(n, k) = KBF2 (n, k)

Q 5.31. Consider the following complete recurrence relation:

$$T(n) = \begin{cases} 1 & \text{if } n = 1 \\ \sum_{i=1}^{n-1} T(i) + 1 & \text{if } n > 1 \end{cases} \tag{5.99}$$

Theorem 5.27. The solution of the complete recurrence relation in eqn. (5.99) is T(n) = 2^{n-1} for any positive integer n.

a). Prove Theorem 5.27 using strong induction.


b). Devise a strong inductive programming algorithm to find T (n) in eqn (5.99).
c). Illustrate the algorithm proposed in b) for T (12). (Write a program.)
d). Analyze the computational time and space complexities of the algorithm proposed in
b).
e). Devise a memoization algorithm to find T (n) in eqn (5.99).
f). Analyze the computational time and space complexities of the algorithm proposed in
e).

Q 5.32. Consider the following complete recurrence relation:

$$T(n) = \begin{cases} 1 & \text{if } n = 0 \text{ or } 1 \\ \sum_{i=0}^{n-1} T(i) + n & \text{if } n > 1 \end{cases} \tag{5.100}$$

Theorem 5.28. The solution of the complete recurrence relation T(n) in eqn. (5.100) is in Θ(2^n) for any positive integer n.

a). Prove Theorem 5.28 (T(n) = Θ(2^n)) using strong induction.

b). Devise a strong inductive programming algorithm to find T (n) in eqn (5.100).

c). Illustrate the algorithm proposed in b) for T (12). (Write a program.)

d). Devise a memoization algorithm to compute T (n) in eqn (5.100).

e). Analyze the computational time and space complexities of the algorithm proposed in d).

f). Observing from the table provided in c), one may derive the following first order linear recurrence relation for T(n):

$$T(n) = \begin{cases} 1 & \text{if } n = 0 \text{ or } 1 \\ 4 & \text{if } n = 2 \\ 2T(n-1) + 1 & \text{if } n > 2 \end{cases} \tag{5.101}$$

Devise an inductive programming algorithm.

g). Prove that the solution of the complete recurrence relation, T (n) in eqn. (5.100) is
equivalent to that of the first order linear recurrence relation in eqn. (5.101).

Q 5.33. Consider the following complete recurrence relation:

$$T(n) = \begin{cases} 1 & \text{if } n = 1 \\ \dfrac{\sum_{i=1}^{n-1} T(i)}{n-1} + n & \text{if } n > 1 \end{cases} \tag{5.102}$$
a). Devise a strong inductive programming algorithm to find T (n) in eqn (5.102).

b). Illustrate the algorithm proposed in a) for T (12).

c). Analyze the computational time and space complexities of the algorithm proposed in
a).

d). Prove that the solution of the complete recurrence relation in eqn. (5.102) is same as
T (n) = 2n − 1 for any positive integer n using strong induction.

e). Devise a memoization algorithm to compute T (n) in eqn (5.102).

f). Analyze the computational time and space complexities of the algorithm proposed in
e).

Q 5.34. Consider the following complete recurrence relation:

$$T(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ \dfrac{2\sum_{i=0}^{n-1} T(i)}{n} + 1 & \text{if } n > 1 \end{cases} \tag{5.103}$$

a). Devise a strong inductive programming algorithm to find T (n) in eqn (5.103).
b). Illustrate the algorithm proposed in a) for T (12).
c). Analyze the computational time and space complexities of the algorithm proposed in
a).
d). Prove Theorem 5.29 using strong induction.
Theorem 5.29. The solution of the complete recurrence relation in eqn. (5.103) is T(n) = n for any positive integer n.

e). Devise a memoization algorithm to compute T (n) in eqn (5.103).


f). Analyze the computational time and space complexities of the algorithm proposed in
e).

Q 5.35. Let BPn be the set of valid balanced parentheses with n pairs of parentheses.

n | BPn | |BPn|
0 | {ε} | 1
1 | {()} | 1
2 | {()(), (())} | 2
3 | {()()(), ()(()), (())(), (()()), ((()))} | 5
4 | {()()()(), ()()(()), ()(())(), ()(()()), ()((())), (())()(), (())(()), (()())(), ((()))(), (()()()), (()(())), ((())()), ((()())), (((())))} | 14

The cardinalities of BPn are known as the Catalan numbers [163], and a recurrence relation for Cn = |BPn| is as follows:

$$C_n = \begin{cases} 1 & \text{if } n = 0 \text{ or } 1 \\ \sum_{i=0}^{n-1} C_i C_{n-i-1} & \text{if } n > 1 \end{cases} \tag{5.104}$$

a). Devise a strong inductive programming algorithm to find the nth Catalan number.
b). Illustrate the algorithm proposed in a) for C12 .
c). Analyze the computational time and space complexities of the algorithm proposed in
a).
d). The commutative property allows one to reduce some redundant computation. Eqn (5.104) is equivalent to the following eqn (5.105):

$$C_n = \begin{cases} 1 & \text{if } n = 0 \text{ or } 1 \\ 2\sum_{i=0}^{\lfloor (n-1)/2 \rfloor} C_i C_{n-i-1} & \text{if } n > 1 \text{ and } n \text{ is even} \\ 2\sum_{i=0}^{(n-1)/2 - 1} C_i C_{n-i-1} + C_{(n-1)/2}^2 & \text{if } n > 1 \text{ and } n \text{ is odd} \end{cases} \tag{5.105}$$

Devise a memoization algorithm to compute Cn based on eqn (5.105).


e). Draw the memoization recursion tree for C6 of the algorithm proposed in d) and find
the number of recursive calls, i.e., nrc(C6 ).

f). Analyze the computational time and space complexities of the algorithm proposed in
d).

g). Analyze the computational time and space complexities of the naı̈ve recursive program
in eqn (5.104).
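A hedged Python sketch of the memoization asked for in d), directly transcribing eqn (5.105); the name catalan is illustrative only:

from functools import lru_cache

@lru_cache(maxsize=None)
def catalan(n: int) -> int:
    # C_n by the symmetric recurrence in eqn (5.105)
    if n <= 1:
        return 1
    h = (n - 1) // 2
    if n % 2 == 0:   # n even: each term C_i * C_{n-i-1} pairs with its mirror
        return 2 * sum(catalan(i) * catalan(n - i - 1) for i in range(h + 1))
    # n odd: the middle term C_h^2 has no distinct mirror term
    return 2 * sum(catalan(i) * catalan(n - i - 1) for i in range(h)) + catalan(h) ** 2

print([catalan(n) for n in range(7)])   # [1, 1, 2, 5, 14, 42, 132]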

Q 5.36. Consider the following complete recurrence relation:

$$\mathrm{FM}(n) = \begin{cases} 3 & \text{if } n = 0 \\ \prod_{i=0}^{n-1} \mathrm{FM}(i) + 2 & \text{if } n > 0 \end{cases} \tag{5.106}$$

a). Devise a strong inductive programming algorithm to find FM(n) in eqn (5.106).

b). Illustrate the algorithm proposed in a) for FM(8). (Write a program.)

c). Analyze the computational time and space complexities of the algorithm proposed in a).

d). Devise a memoization algorithm to find FM(n) in eqn (5.106).

e). Eqn (5.106) is equivalent to the nth Fermat Number Problem 2.9 defined on page 52.
Prove Theorem 5.30 using induction.

Theorem 5.30. The solution of the complete recurrence relation in eqn. (5.106) is equivalent to $2^{2^n} + 1$.

Q 5.37. Recall the unbounded subset product problem, or simply USPE, considered as an
exercise in Q 4.17 on page 205.

a). Derive a higher order recurrence relation of the problem.

b). Devise a strong inductive programming based on the recurrence relation derived in a).

c). Illustrate the algorithm proposed in b) for n = 16 and P = {2, 3, 5}.

d). Analyze the computational time and space complexities of the algorithm proposed in
b).

e). Devise a memoization algorithm based on the recurrence relation derived in a).

f). Illustrate the algorithm proposed in e) for n = 32 and P = {2, 3, 5}.

g). Analyze the computational time and space complexities of the algorithm proposed in
e).

Q 5.38. Devise a memoization algorithm for the recurrence relations in a) ∼ c).

a). eqn (5.14) for the Unbounded integer knapsack Problem 4.6 on page 230.

b). eqn (5.15) for the weighted Activity selection Problem 5.6 on page 232.

c). eqn (5.24) for the Euler zigzag number or André’s problem on page 238.

Q 5.39. Suppose that the midterm requires eight topics. A certain topic requires some preliminary knowledge of other topics, as in the following precedence graph.

(Precedence graph over the eight topics: induction, recursion, strong induction, inductive programming, strong inductive programming, divide & conquer, greedy algorithm, and data structure, with all edges leading toward the midterm.)

Provide at least two topologically valid sequences to study for the midterm.

Q 5.40. Given the following directed acyclic graph, which of the following list(s) is(are) topologically valid?

(Directed acyclic graph on vertices 1 ∼ 7.)

a). ⟨6, 3, 7, 5, 1, 2, 4⟩
b). ⟨1, 3, 7, 5, 2, 6, 4⟩
c). ⟨1, 2, 4, 3, 7, 5, 6⟩
d). ⟨1, 3, 2, 6, 5, 7, 4⟩
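Since the graph drawing is not reproduced here, the following Python sketch only illustrates how such a list can be checked against any DAG given as an edge list; the edges below are hypothetical placeholders, not the actual graph of Q 5.40:

def is_topological(order, edges):
    # True iff every edge (u, v) has u appearing before v in order
    pos = {v: i for i, v in enumerate(order)}
    return all(pos[u] < pos[v] for u, v in edges)

# Hypothetical DAG for illustration only (not the graph in Q 5.40).
edges = [(1, 2), (1, 3), (3, 7), (2, 4), (7, 5), (5, 6)]
print(is_topological([1, 3, 7, 5, 2, 6, 4], edges))   # True for this stand-in graph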

Q 5.41. Consider the longest path length problem in a DAG, or simply LPL-dag(vs , DAG).
It is to find a path with the maximum length among all possible paths from a given source
node, vs , to all other vertices.

a). Formulate the problem.

b). Derive a recurrence relation.

c). Devise an algorithm based on strong inductive programming.

d). Demonstrate the algorithm proposed in c) using the DAG in Figure 5.23 (a) where
s = v1 .
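One possible strong inductive programming sketch for c), in Python, relaxing edges in a given topological order; the adjacency list in the usage line is a hypothetical stand-in, not Figure 5.23 (a):

def longest_path_lengths(topo, adj, vs):
    # LPL-dag: maximum number of edges on any path from vs to each vertex.
    # topo is a topological order of the vertices; adj[u] lists the successors of u.
    # Unreachable vertices keep the value -inf.
    L = {v: float("-inf") for v in topo}
    L[vs] = 0
    for u in topo:                     # strong induction over the topological order
        if L[u] == float("-inf"):
            continue
        for v in adj.get(u, []):
            L[v] = max(L[v], L[u] + 1)
    return L

# Hypothetical DAG for illustration only.
adj = {"v1": ["v2", "v3"], "v2": ["v4"], "v3": ["v4"], "v4": []}
print(longest_path_lengths(["v1", "v2", "v3", "v4"], adj, "v1"))   # v4 maps to 2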

Q 5.42. Consider the longest path cost problem in a weighted DAG, or simply LPC-dag(vs ,
wDAG). It is to find a path with the maximum cost among all possible paths from a given
source node, vs , to all other vertices.

a). Formulate the problem.

b). Derive a recurrence relation.

c). Devise an algorithm based on strong inductive programming.

d). Demonstrate the algorithm proposed in c) using the DAG in Figure 5.28 (a) where
s = v1 .

Q 5.43. Consider the maximum spanning rooted tree problems in a weighted rooted DAG,
which are variants of minimum spanning rooted tree problems: Problem 5.17 and Prob-
lem 5.18 defined on pages 264 and 265, respectively.

a). Formulate the problem of finding a maximum spanning rooted tree whose sum of all
edge weights in the tree is maximized.
b). Derive a recurrence relation for the problem defined in a).

c). Devise a strong inductive programming algorithm for the problem defined in a).
d). Illustrate the algorithm proposed in c) using the DAG in Figure 5.28 (a) where s = v1 .
e). Formulate the problem of finding a maximum spanning rooted tree whose sum of all
path costs in the tree is maximized.

f). Devise a strong inductive programming algorithm for the problem defined in e).
g). Illustrate the algorithm proposed in f) using the DAG in Figure 5.28 (a) where s = v1.

Q 5.44. Consider the following directed acyclic graph:

(Directed acyclic graph on vertices v1 ∼ v7.)

a). Find at least two topological orders of the graph.

b). Find the number of paths where the source node is v1. Hint: NPP Problem 5.14 defined on page 258.

c). Find the shortest path length where the source node is v1. Hint: SPL-dag Problem 5.15 defined on page 259.

d). Find the longest path length where the source node is v1. Hint: Exercise Q 5.41.

(Figure 5.35: Topological sorting Algorithm 5.26 illustration and its counter-sub-graphs, on the DAG with vertices v1 ∼ v7 above. (a) The topological sorting process and its counter-sub-graphs for i = 0 ∼ 7, growing from ∅ through {v1}, {v1, v2}, {v1, v2, v3}, ... up to {v1, v2, v3, v4, v5, v6, v7}. (b) A sequence of counter-sub-graphs. (c) A sequence of counter-sub-graphs of a wDAG.)
Chapter 6

Higher Dimensional Tabulation

This chapter presents the strong inductive programming and memoization methods introduced in Chapter 5, but with two or higher dimensional tables instead of a one dimensional table. Often, two dimensional tables of sub-solutions assist better in finding a higher order recurrence relation pattern. They provide a flash of insight for solving many problems.
This chapter has several objectives. The first objective is to design a strong inductive
programming algorithm by setting a two dimensional table for various optimization prob-
lems. The second objective is to design and demonstrate a memoization algorithm based on
a higher order recurrence relation using a two dimensional table. Next, full solutions instead
of simple cardinalities must be derived by a backtracking algorithm, based on the table con-
structed by either strong inductive programming or a memoization algorithm. Next, the
use of a two dimensional table on string matching problems is presented. Numerous prob-
lems on strings are considered in order to demonstrate strong inductive programming with
a two dimensional table, which is conventionally known as dynamic programming. Next,
classical problems on combinatorics are introduced using two dimensional tables and a mem-
oization method. Finally, problems that require a three or higher dimensional tabulation
are presented.
It should be noted that the space complexity of algorithms presented in this chapter can
be dramatically improved when a cylindrical array data structure is used. Most algorithms
shall be revised in Chapter 7 to reduce the computational space. Here, pure strong inductive
programming with two dimensional tabulation shall be the focus.

6.1 Two Dimensional Strong Inductive Programming


There are problems that are formulated by a recurrence relation with more than one
variable. When two variables are involved, often a two dimensional look-up table is necessary
to store solutions of sub-problems. Strong inductive programming needs to be combined
with the two dimensional look-up table to tackle these problems.

6.1.1 Prefix Sum of Two Dimensional Array


Consider the two dimensional prefix sum problem that can greatly aid in the under-
standing of two dimensional array manipulation. Given a two dimensional array or matrix


of numbers, the problem is to find the prefix sum of each cell. It is formally defined as
follows:

Problem 6.1. 2-dimensional prefix sum, PFS2(A1∼n,1∼m)

Input: A1∼n,1∼m, an (n × m) two dimensional array of numbers
Output: PS21∼n,1∼m, an (n × m) two dimensional array whose elements are

$$\forall x \in [1 \sim n]\ \forall y \in [1 \sim m],\quad ps2_{x,y} = \sum_{i=1}^{x}\sum_{j=1}^{y} a_{i,j}$$

For a toy sample example, consider the following (4 × 4) input array and corresponding output array in Figure 6.1 (a) and (b), respectively.

2 1 3 1
3 2 1 2
1 3 0 1
4 2 2 2
(a) A sample input A1∼4,1∼4

2  3  6  7
5  8  12 15
6  12 16 20
10 18 24 30
(b) A sample output PS21∼4,1∼4

PS2(A1∼4,1∼4) = PS2(A1∼4,1∼3) + PS2(A1∼3,1∼4) − PS2(A1∼3,1∼3) + a4,4
(ps24,4 = 30) = (ps24,3 = 24) + (ps23,4 = 20) − (ps23,3 = 16) + (a4,4 = 2)
(c) Backward thinking

Figure 6.1: 2D prefix sum

To come up with an algorithm, first attempt to derive a recurrence relation. A matrix or two dimensional array can be divided into sub-arrays, as in Figure 6.1 (c). A general recurrence relation is as follows:

$$\mathrm{PFS2}(A_{1\sim n,1\sim m}) = \begin{cases} a_{1,1} & \text{if } n = 1 \land m = 1 \\ \mathrm{PFS2}(A_{1,1\sim m-1}) + a_{1,m} & \text{if } n = 1 \land m > 1 \\ \mathrm{PFS2}(A_{1\sim n-1,1}) + a_{n,1} & \text{if } n > 1 \land m = 1 \\ \mathrm{PFS2}(A_{1\sim n,1\sim m-1}) + \mathrm{PFS2}(A_{1\sim n-1,1\sim m}) - \mathrm{PFS2}(A_{1\sim n-1,1\sim m-1}) + a_{n,m} & \text{if } n > 1 \land m > 1 \end{cases} \tag{6.1}$$

The recursive definition in eqn (6.1) means that the entry in the ith row position and jth
column of the table, ps2i,j , is obtained by three sub-problems’ solutions: ps2i,j−1 , ps2i−1,j ,
and ps2i−1,j−1, except for the basis cases. There are three basis-like cases: the first line of the recurrence relation in eqn (6.1) is the sole pure basis case, which corresponds to the upper left most cell in the table, while the second and third lines in eqn (6.1) are semi-basis, or recursive, parts which correspond to the first row and first column of the table, respectively.
the recurrence relation in eqn (6.1), the whole ps2x,y ’s can be solved from left to right and
top to bottom, as illustrated in Figure 6.2. The highlighted cells are ps2x,y ’s and other cells
are ax,y ’s. A pseudo code of this strong inductive programming is stated as follows:

(Figure 6.2: Strong inductive programming illustration for the 2D prefix sum problem. Each step fills one highlighted cell of the table:
ps23,1 = 6 = ps22,1 + a3,1
ps23,2 = 12 = ps23,1 + ps22,2 − ps22,1 + a3,2
ps23,3 = 16 = ps23,2 + ps22,3 − ps22,2 + a3,3
ps23,4 = 20 = ps23,3 + ps22,4 − ps22,3 + a3,4
ps24,1 = 10 = ps23,1 + a4,1
ps24,2 = 18 = ps24,1 + ps23,2 − ps23,1 + a4,2)

Algorithm 6.1. 2D prefix sum

Prefix_sum2(A1∼n,1∼m)
1: Declare an (n × m) table T
2: T[1][1] = a1,1
3: for j = 2 ∼ m
4:   T[1][j] = T[1][j − 1] + a1,j
5: for i = 2 ∼ n
6:   T[i][1] = T[i − 1][1] + ai,1
7:   for j = 2 ∼ m
8:     T[i][j] = T[i][j − 1] + T[i − 1][j] − T[i − 1][j − 1] + ai,j
9: return T

Both computational time and space complexity of Algorithm 6.1 are Θ(nm), as it simply
fills up the (n × m) table in top to bottom and left to right order.
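A minimal Python transcription of Algorithm 6.1, with the 1-indexed pseudocode mapped onto 0-indexed lists:

def prefix_sum2(A):
    # 2D prefix sum per eqn (6.1); A is an n-by-m list of lists
    n, m = len(A), len(A[0])
    T = [[0] * m for _ in range(n)]
    T[0][0] = A[0][0]
    for j in range(1, m):                      # first row
        T[0][j] = T[0][j - 1] + A[0][j]
    for i in range(1, n):
        T[i][0] = T[i - 1][0] + A[i][0]        # first column
        for j in range(1, m):
            T[i][j] = T[i][j - 1] + T[i - 1][j] - T[i - 1][j - 1] + A[i][j]
    return T

A = [[2, 1, 3, 1], [3, 2, 1, 2], [1, 3, 0, 1], [4, 2, 2, 2]]
print(prefix_sum2(A)[3][3])    # 30, as in Figure 6.1 (b)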

6.1.2 Stamp Combinatorics


Suppose that there are three kinds of stamps, A = ⟨1, 3, 5⟩. There are seven different ways to make 10c where the order does not matter, as enumerated in Figure 6.3 (a). The problem of counting in how many ways an amount n can be created with stamps is defined formally as follows:

Problem 6.2. Ways of creating n amount with stamps

Input: A = ⟨a1, · · · , ak⟩, a set of k different stamps where ai ∈ Z+, and n ∈ Z, a total amount needed
Output: |M|, the cardinality of M where

$$M = \{\langle x_1, \cdots, x_k \rangle \mid \sum_{i=1}^{k} x_i a_i = n \text{ where } x_i \in \mathbb{N}\}$$

It should be noted that this problem is different from the Winning Ways Problem 5.7

defined on page 239. The order of stamps does not matter in Problem 6.2, whereas it
matters in Problem 5.7.

X = ⟨10, 0, 0⟩, 10 × 1 + 0 × 3 + 0 × 5 = 10
X = ⟨7, 1, 0⟩, 7 × 1 + 1 × 3 + 0 × 5 = 10
X = ⟨4, 2, 0⟩, 4 × 1 + 2 × 3 + 0 × 5 = 10
X = ⟨1, 3, 0⟩    X = ⟨5, 0, 1⟩    X = ⟨2, 1, 1⟩    X = ⟨0, 0, 2⟩
(a) Enumeration of M(10, {1, 3, 5})

M(10, {1, 3, 5}) = 7 is partitioned into the four outputs with no 5c stamp, {⟨10, 0, 0⟩, ⟨7, 1, 0⟩, ⟨4, 2, 0⟩, ⟨1, 3, 0⟩} ⇒ M(10, {1, 3}) = 4, and the three outputs with at least one 5c stamp, {⟨5, 0, 1⟩, ⟨2, 1, 1⟩, ⟨0, 0, 2⟩}, which after removing one 5c stamp become {⟨5, 0, 0⟩, ⟨2, 1, 0⟩, ⟨0, 0, 1⟩} ⇒ M(5, {1, 3, 5}) = 3.
(b) Backward thinking

k  A1∼k       ak\n    0  1  2  3  4  5  6  7  8  9  10
1  {1}        a1 = 1  1  1  1  1  1  1  1  1  1  1  1
2  {1, 3}     a2 = 3  1  1  1  2  2  2  3  3  3  4  4
3  {1, 3, 5}  a3 = 5  1  1  1  2  2  3  4  4  5  6  7
(c) Sub-solution table with A = {1, 3, 5}.

k  A1∼k       ak\n    0  1  2  3  4  5  6  7  8  9  10
1  {5}        a1 = 5  1  0  0  0  0  1  0  0  0  0  1
2  {5, 3}     a2 = 3  1  0  0  1  0  1  1  0  1  1  1
3  {5, 3, 1}  a3 = 1  1  1  1  2  2  3  4  4  5  6  7
(d) Sub-solution table with A = {5, 3, 1}.

k  A1∼k       ak\n    0  1  2  3  4  5  6  7  8  9  10
1  {1}        a1 = 1  1  1  1  1  1  1  1  1  1  1  1
2  {1, 3}     a2 = 3  1  1  1  -  2  2  -  3  -  -  4
3  {1, 3, 5}  a3 = 5  1  -  -  -  -  3  -  -  -  -  7
(e) The memoization Algorithm 6.3 illustration.

Figure 6.3: Ways to make n = 10 cents with three kinds of stamps, A = {1, 3, 5}.

A two-dimensional table is an excellent and useful tool to solve the problem and design an algorithm. The table whose cell T[i][j] contains the solution for using a subset of stamps {a1, · · · , ai} to make j amount is given in Figure 6.3 (c).


The order of the stamp set, A, does not matter. Figure 6.3 (d) shows the table with
A = h5, 3, 1i in a different order. The final result is the same for n = 10: T [3][10] = 7.
Through careful observation, the following higher order two dimensional recurrence relation can be derived:

$$M(n, A_{1\sim k}) = \begin{cases} 1 & \text{if } n = 0 \\ 0 & \text{if } n < 0 \\ M(n - a_1, A_{1\sim k}) & \text{if } n > 0 \land k = 1 \\ M(n, A_{1\sim k-1}) + M(n - a_k, A_{1\sim k}) & \text{if } n > 0 \land k > 1 \end{cases} \tag{6.2}$$

The first line in eqn (6.2) corresponds to the first column of the table. There is only one way to make a zero amount, which is 'no stamp at all.' The second line in eqn (6.2) states that there is no way to make a negative amount with positive valued stamps. The third line in eqn (6.2) corresponds to the first row of the table: there is only one kind of stamp, so only one way is possible if n is a multiple of a1 and no way if n is not a multiple of a1. The rest of the table can be filled with the last line rule. The key recursive part, the last line rule, can be derived from backward thinking, as depicted in Figure 6.3 (b). The solution set of 'M(10, {1, 3, 5}) = 7' can be partitioned into two sets: ones without any 5c stamp and the others with at least one 5c stamp. First, for the ones without any 5c stamp, there are exactly four ways to make n = 10 with only two kinds of stamps, A1∼2 = {1, 3}: M(10, {1, 3}) = 4. Next, for the other partition with at least one 5c stamp, if one removes one 5c stamp, there are three ways to make n = 5 with A = {1, 3, 5}: M(5, {1, 3, 5}) = 3. The recurrence relation in eqn (6.2) can be equivalently rewritten as the following recurrence relation, which might be more pseudo-code friendly:



$$M(n, A_{1\sim k}) = \begin{cases} 1 & \text{if } n = 0 \land k = 1 \\ 0 & \text{if } n - a_1 < 0 \land k = 1 \\ M(n - a_1, A_{1\sim k}) & \text{if } n - a_1 \ge 0 \land k = 1 \\ M(n, A_{1\sim k-1}) & \text{if } n - a_k < 0 \land k > 1 \\ M(n, A_{1\sim k-1}) + M(n - a_k, A_{1\sim k}) & \text{if } n - a_k \ge 0 \land k > 1 \end{cases} \tag{6.3}$$

The naı̈ve recursive algorithm straight from eqns (6.2) or (6.3) takes exponential time.
Hence, a table is necessary for either memoization or strong inductive programming. A
strong inductive programming version with a two dimensional table is stated as follows:
Algorithm 6.2. 2D dynamic ways of stamping

ways_of_stamping(n, A1∼k)
1: Declare a (k × (n + 1)) table T
2: T[1][0] = 1
3: for j = 1 ∼ n
4:   if j − a1 < 0, T[1][j] = 0
5:   else T[1][j] = T[1][j − a1]
6: for i = 2 ∼ k
7:   for j = 0 ∼ n
8:     if j − ai < 0, T[i][j] = T[i − 1][j]
9:     else T[i][j] = T[i − 1][j] + T[i][j − ai]
10: return T[k][n]

The computational time complexity of Algorithm 6.2 is clearly Θ(kn). The space complexity
of Algorithm 6.2 is Θ(kn) if solutions of all sub-problems are stored in a table. Note that
only two rows of the table, which is a cylindrical array, can be used to compute the output
instead of the entire two dimensional table. The space efficient algorithm with a cylindrical
array data structure shall be dealt with later in Chapter 7.
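A Python sketch of Algorithm 6.2; the basis row is filled by the observation that a single stamp kind a1 makes exactly the multiples of a1:

def ways_of_stamping(n, A):
    # Number of multisets of stamps A summing to n (Algorithm 6.2)
    k = len(A)
    T = [[0] * (n + 1) for _ in range(k)]
    for j in range(n + 1):                 # basis row: one stamp kind only
        T[0][j] = 1 if j % A[0] == 0 else 0
    for i in range(1, k):
        for j in range(n + 1):
            T[i][j] = T[i - 1][j]          # ways without stamp A[i]
            if j - A[i] >= 0:
                T[i][j] += T[i][j - A[i]]  # ways with at least one A[i]
    return T[k - 1][n]

print(ways_of_stamping(10, [1, 3, 5]))     # 7, as in Figure 6.3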
The memoization technique can be applied, and the following pseudo code is based on
the recurrence relation in (6.2):

Algorithm 6.3. Ways of stamping - memoization

Declare a global two dimensional table, T1∼k,0∼n.

WS(n, A1∼k)
1: if n < 0, return 0
2: if T[k][n] = nil,
3:   if n = 0, return T[k][n] = 1
4:   else if n > 0 ∧ k = 1, T[k][n] = WS(n − ak, A1∼k)
5:   else if n > 0 ∧ k > 1,
6:     T[k][n] = WS(n, A1∼k−1) + WS(n − ak, A1∼k)
7: return T[k][n]

When the memoization Algorithm 6.3 is used, not all cells in the table may need to
be computed. Only parts of the table are computed, as shown Figure 6.3 (e) where A =
{1, 3, 5} to compute WS(10, {1, 3, 5}). The computational time and space complexities of
Algorithm 6.3 are O(kn) and Θ(kn), respectively.
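The same recurrence, eqn (6.2), as a top-down Python sketch mirroring Algorithm 6.3; the lru_cache over the pair (n, k) plays the role of the global table T:

from functools import lru_cache

def ways_of_stamping_memo(n, A):
    @lru_cache(maxsize=None)
    def WS(n, k):                      # M(n, A[0..k-1]) of eqn (6.2)
        if n < 0:
            return 0
        if n == 0:
            return 1
        if k == 1:
            return WS(n - A[0], 1)
        return WS(n, k - 1) + WS(n - A[k - 1], k)
    return WS(n, len(A))

print(ways_of_stamping_memo(10, [1, 3, 5]))    # 7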

6.1.3 Postage Stamp Equality Minimization Problem


Consider the Postage stamp equality minimization Problem 4.2 defined on page 159. Although only a one dimensional table was necessary to solve the problem with the strong inductive programming Algorithm 5.3 on page 223, here an algorithm with a two dimensional table is introduced for the sake of exercise. A two dimensional table, similar to Problem 6.2, is set up and filled with sub-solutions, as shown in Figure 6.4 (a) where A1∼4 = ⟨1, 3, 5, 7⟩. The ith row and jth column cell is the minimum number of stamps to make j amount with stamps {a1 ∼ ai}.
Once again, the order of the stamp set A does not matter. Figure 6.4 (b) shows the table of A in a different order. The final result is the same for n = 11: T[4][11] = 3. A certain amount n may be impossible to make. The 'X' symbol is used to denote a cell with an impossible value. The following recurrence relation in eqn (6.4) can be derived:


$$\mathrm{PSEmin}(n, A_{1\sim k}) = \begin{cases} 0 & \text{if } n = 0 \\ \infty & \text{if } n - a_1 < 0 \land k = 1 \\ \mathrm{PSEmin}(n - a_1, A_{1\sim k}) + 1 & \text{if } n - a_1 \ge 0 \land k = 1 \\ \mathrm{PSEmin}(n, A_{1\sim k-1}) & \text{if } n - a_k < 0 \land k > 1 \\ \min(\mathrm{PSEmin}(n, A_{1\sim k-1}),\ \mathrm{PSEmin}(n - a_k, A_{1\sim k}) + 1) & \text{if } n - a_k \ge 0 \land k > 1 \end{cases} \tag{6.4}$$

k  A1∼k          ak\n    0  1  2  3  4  5  6  7  8  9  10  11
1  {1}           a1 = 1  0  1  2  3  4  5  6  7  8  9  10  11
2  {1, 3}        a2 = 3  0  1  2  1  2  3  2  3  4  3  4   5
3  {1, 3, 5}     a3 = 5  0  1  2  1  2  1  2  3  2  3  2   3
4  {1, 3, 5, 7}  a4 = 7  0  1  2  1  2  1  2  1  2  3  2   3
(a) Sub-solution table with A = {1, 3, 5, 7}.

k  A1∼k          ak\n    0  1  2  3  4  5  6  7  8  9  10  11
1  {5}           a1 = 5  0  X  X  X  X  1  X  X  X  X  2   X
2  {5, 7}        a2 = 7  0  X  X  X  X  1  X  1  X  X  2   X
3  {5, 7, 1}     a3 = 1  0  1  2  3  4  1  2  1  2  3  2   3
4  {5, 7, 1, 3}  a4 = 3  0  1  2  1  2  1  2  1  2  3  2   3
(b) Sub-solution table with A = {5, 7, 1, 3}.

(c) Backtracking illustration for n = 11 with {1, 3, 5, 7} stamps: the full backtracking tree rooted at PSEmin(11, {1, 3, 5, 7}) = 3 branches on ties and contains all three optimal answer paths.

Figure 6.4: Postage stamp equality minimization with 2-dimensional Table.

Or simply,

$$\mathrm{PSEmin}(n, A_{1\sim k}) = \begin{cases} 0 & \text{if } n = 0 \\ \infty & \text{if } n < 0 \\ \mathrm{PSEmin}(n - a_1, A_{1\sim k}) + 1 & \text{if } n > 0 \land k = 1 \\ \min(\mathrm{PSEmin}(n, A_{1\sim k-1}),\ \mathrm{PSEmin}(n - a_k, A_{1\sim k}) + 1) & \text{if } n > 0 \land k > 1 \end{cases} \tag{6.5}$$

Note that the impossible cell with 'X' is represented by ∞ in eqn. (6.4), and a value ≥ n + 1 may be used in code such as in Algorithm 6.4. Since the number of stamps to make nc amount must be less than or equal to n, an output greater than n represents the impossible.

Algorithm 6.4. 2D dynamic minimum stamps

find-min-stamps2(n, A1∼k)
1: Declare a (k × (n + 1)) table T
2: T[1][0] = 0
3: for j = 1 ∼ n
4:   if j − a1 < 0, T[1][j] = n + 1 (or X)
5:   else T[1][j] = T[1][j − a1] + 1
6: for i = 2 ∼ k
7:   for j = 0 ∼ n
8:     if j − ai < 0, T[i][j] = T[i − 1][j]
9:     else T[i][j] = min(T[i − 1][j], T[i][j − ai] + 1)
10: return T[k][n]

The computational time complexity of Algorithm 6.4 is Θ(kn). The computational space
complexity of Algorithm 6.4 is Θ(kn) if solutions of all sub-problems are stored in a table.
Albeit Algorithm 6.4 finds the minimum number of stamps necessary to make nc amount,
it does not give the actual list of stamps, X. Note that only two rows of the table, instead
of entire two dimensional table, can be used to compute the output. The computational
space complexity is Θ(n), which shall be covered in subsequent Chapter 7.
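A Python sketch of Algorithm 6.4, using math.inf for the impossible marker 'X' and returning the whole table so that backtracking (below) is possible:

import math

def find_min_stamps2(n, A):
    # Minimum-stamp table of Algorithm 6.4; math.inf plays the role of 'X'
    k = len(A)
    T = [[math.inf] * (n + 1) for _ in range(k)]
    for j in range(n + 1):                 # basis row: single stamp kind A[0]
        if j % A[0] == 0:
            T[0][j] = j // A[0]
    for i in range(1, k):
        for j in range(n + 1):
            T[i][j] = T[i - 1][j]          # without stamp A[i]
            if j - A[i] >= 0:
                T[i][j] = min(T[i][j], T[i][j - A[i]] + 1)   # one more A[i]
    return T

T = find_min_stamps2(11, [1, 3, 5, 7])
print(T[-1][11])   # 3, as in Figure 6.4 (a)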
Possible outputs when n = 11 and A = {1, 3, 5, 7} are X = {1, 1, 0, 1}, X = {0, 2, 1, 0}, and X = {1, 0, 2, 0}. They all use exactly three stamps to make 11c. If the desired output is the actual X rather than simply the total Σxi, it can be generated from the table T by the following backtracking algorithm, which retrospects the constructed table:
Algorithm 6.5. Backtracking the postage stamp equality minimization table

PSEmin2BT(i, j, A, X)
⟨T is global and already computed by Algorithm 6.4.⟩
1: if i = 1 ∧ j = 0, return X
2: else if Ti,j = Ti−1,j ∧ i > 1,
3:   return PSEmin2BT(i − 1, j, A, X)
4: else if Ti,j = Ti,j−ai + 1,
5:   xi = xi + 1
6:   return PSEmin2BT(i, j − ai, A, X)
The recursive Algorithm 6.5 is called initially with PSEmin2BT(k, n, A, X), where all k elements xi ∈ X are 0. Note that lines 2 ∼ 3 and lines 4 ∼ 6 can be swapped. When ties occur, this order affects which direction the backtracking follows. The highlighted cells in the toy example table are the ones that the backtracking Algorithm 6.5 visits. The full backtracking tree for the toy example is illustrated in Figure 6.4 (c), showing all three possible valid answer paths.
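An iterative Python rendering of the backtracking Algorithm 6.5, assuming the table T returned by the find_min_stamps2 sketch above; trying the 'row above' case first mirrors lines 2 ∼ 3 preceding lines 4 ∼ 6:

def psemin2_backtrack(T, A, n):
    # Recover one optimal X from the table T built by find_min_stamps2
    k = len(A)
    X = [0] * k
    i, j = k - 1, n
    while not (i == 0 and j == 0):
        if i > 0 and T[i][j] == T[i - 1][j]:   # stamp A[i] unused at this amount
            i -= 1
        else:                                   # use one more A[i] stamp
            X[i] += 1
            j -= A[i]
    return X

print(psemin2_backtrack(T, [1, 3, 5, 7], 11))   # [0, 2, 1, 0], one optimal answer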
Some problems can be tackled with a one dimensional tabulation method, while other problems require two or higher dimensional tabulation methods. Apparently, the strong inductive programming Algorithm 5.3 on page 223, which only requires Θ(n) space, is more space efficient.

6.1.4 0-1 Knapsack


Consider the 0-1 Knapsack Problem 4.4 defined on page 163. The idea of utilizing a two dimensional table to design an algorithm for the 0-1 knapsack problem was first devised in [86]. As before,

k  A1∼k              (pk, wk)\n  0  1  2  3  4  5  6  7  8   9   10  11
1  {(1, 1)}          (1, 1)      0  1  1  1  1  1  1  1  1   1   1   1
2  {(1, 1), (4, 3)}  (4, 3)      0  1  1  4  5  5  5  5  5   5   5   5
3  A1∼2 ∪ {(6, 5)}   (6, 5)      0  1  1  4  5  6  7  7  10  11  11  11
4  A1∼3 ∪ {(8, 7)}   (8, 7)      0  1  1  4  5  6  7  8  10  11  12  13
(a) Sub-solution table with A = {(1, 1), (4, 3), (6, 5), (8, 7)}.

k  A1∼k              (pk, wk)\n  0  1  2  3  4  5  6  7  8   9   10  11
1  {(6, 5)}          (6, 5)      0  0  0  0  0  6  6  6  6   6   6   6
2  {(6, 5), (8, 7)}  (8, 7)      0  0  0  0  0  6  6  8  8   8   8   8
3  A1∼2 ∪ {(1, 1)}   (1, 1)      0  1  1  1  1  6  7  8  9   9   9   9
4  A1∼3 ∪ {(4, 3)}   (4, 3)      0  1  1  4  5  6  7  8  10  11  12  13
(b) Sub-solution table with A = {(6, 5), (8, 7), (1, 1), (4, 3)}.

k  A1∼k              (pk, wk)\n  0  1  2  3  4  5  6  7  8  9  10  11
1  {(1, 1)}          (1, 1)      -  1  -  1  1  -  1  -  1  -  -   1
2  {(1, 1), (4, 3)}  (4, 3)      -  -  -  -  5  -  5  -  -  -  -   5
3  A1∼2 ∪ {(6, 5)}   (6, 5)      -  -  -  -  5  -  -  -  -  -  -   11
4  A1∼3 ∪ {(8, 7)}   (8, 7)      -  -  -  -  -  -  -  -  -  -  -   13
(c) The memoization Algorithm 6.8 illustration.

(d) Backtracking illustration for the 01-Knapsack: starting from Z(11, {(1, 1), (4, 3), (6, 5), (8, 7)}) = 13, item (8, 7) is taken, leaving Z(4, {(1, 1), (4, 3), (6, 5)}) = 5; item (6, 5) does not fit, so Z(4, {(1, 1), (4, 3)}) = 5; item (4, 3) is taken, leaving Z(1, {(1, 1)}) = 1; and item (1, 1) is taken, i.e., X = {1, 1, 0, 1} with 1 + 4 + 8 = 13.

Figure 6.5: 01-Knapsack with 2-dimensional Table.

a two dimensional table of all sub-solutions can be constructed, as shown in Figure 6.5 (a), where A = {(1, 1), (4, 3), (6, 5), (8, 7)} and n is the maximum capacity of the knapsack. The ith row and jth column cell contains the maximum total profit selected from the item set {a1 ∼ ai} with at most j weight. It is a good idea to construct another table with a different order of A; it helps to determine the basis cases better. Figure 6.5 (b) shows the table of A in a different order.

Thinking backward, the output may contain the item ai or not. If the output does not contain ai, Z(n, A1∼i) should be the same as Z(n, A1∼i−1). If the output does include ai, Z(n, A1∼i) should be the same as Z(n − wi, A1∼i−1) + pi. The following recurrence relation

in eqn (6.6) can be derived:

$$Z(n, A_{1\sim k}) = \begin{cases} 0 & \text{if } n = 0 \\ 0 & \text{if } n - w_1 < 0 \land k = 1 \\ p_1 & \text{if } n - w_1 \ge 0 \land k = 1 \\ Z(n, A_{1\sim k-1}) & \text{if } n - w_k < 0 \land k > 1 \\ \max(Z(n, A_{1\sim k-1}),\ Z(n - w_k, A_{1\sim k-1}) + p_k) & \text{if } n - w_k \ge 0 \land k > 1 \end{cases} \tag{6.6}$$

Now an algorithm using two dimensional strong inductive programming, which is conventionally known as dynamic programming, can be derived as follows:

Algorithm 6.6. 2D dynamic 01-knapsack

dynamic_01-knapsack(n, A)
1: Declare a (k × (n + 1)) table T
2: T[1][0] = 0
3: for j = 1 ∼ n
4:   if j − w1 < 0, T[1][j] = 0
5:   else T[1][j] = p1
6: for i = 2 ∼ k
7:   for j = 0 ∼ n
8:     if j − wi < 0,
9:       T[i][j] = T[i − 1][j]
10:    else
11:      T[i][j] = max(T[i − 1][j], T[i − 1][j − wi] + pi)
12: return T[k][n]

Both computational time and space complexities of Algorithm 6.6 are Θ(kn) if solutions
of all sub-problems are stored in a table. Again, only two rows, instead of the entire two
dimensional table can be used to compute the output and, thus, the computational space
complexity can be reduced to Θ(n), which will be covered in Chapter 7.
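A Python sketch of eqn (6.6) that already applies the two-row idea, keeping only the previous row; since the full table is discarded, this variant trades the ability to backtrack for Θ(n) space:

def knapsack01(n, A):
    # 0-1 knapsack by eqn (6.6); A is a list of (profit, weight) pairs.
    # Only the previous row is kept, so space is Theta(n) instead of Theta(kn).
    prev = [0] * (n + 1)
    for p, w in A:
        cur = prev[:]                  # row i starts as a copy of row i-1
        for j in range(w, n + 1):
            cur[j] = max(prev[j], prev[j - w] + p)
        prev = cur
    return prev[n]

print(knapsack01(11, [(1, 1), (4, 3), (6, 5), (8, 7)]))   # 13, as in Figure 6.5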
If the desired output is the actual X rather than the maximum profit value, it can be
generated from the table T by the following backtracking Algorithm 6.7:

Algorithm 6.7. 2D 01-knapsack backtracking

⟨T is global and already computed by Algorithm 6.6.⟩
ZOK-BT(i, j, A, X)
1: if j = 0, return X
2: else if i = 1 ∧ T1,j = 0, return X
3: else if i = 1 ∧ T1,j ≠ 0,
4:   x1 = 1
5:   return X
6: else if Ti,j = Ti−1,j,
7:   return ZOK-BT(i − 1, j, A, X)
8: else if Ti,j = Ti−1,j−wi + pi,
9:   xi = 1
10:  return ZOK-BT(i − 1, j − wi, A, X)

The recursive function ZOK-BT is called initially with ZOK-BT(k, n, A, X), where all k elements xi ∈ X are 0. Note that lines 6 ∼ 7 and lines 8 ∼ 10 can be swapped. When ties occur, this order affects which direction the backtracking goes. The highlighted cells are the ones that the backtracking Algorithm 6.7 visits. The backtracking process is illustrated in Figure 6.5 (d) with a toy example where n = 11 and A = {(1, 1), (4, 3), (6, 5), (8, 7)}.
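A paired Python sketch of a full-table version of Algorithm 6.6 together with the backtracking of Algorithm 6.7; the iterative loop replaces the recursion but follows the same tie-breaking order:

def knapsack01_table(n, A):
    # Full (k x (n+1)) table of eqn (6.6), needed for backtracking
    k = len(A)
    T = [[0] * (n + 1) for _ in range(k)]
    p0, w0 = A[0]
    for j in range(w0, n + 1):
        T[0][j] = p0
    for i in range(1, k):
        p, w = A[i]
        for j in range(n + 1):
            T[i][j] = T[i - 1][j]
            if j - w >= 0:
                T[i][j] = max(T[i][j], T[i - 1][j - w] + p)
    return T

def zok_backtrack(T, A, n):
    # Recover one optimal 0-1 selection X from the table
    X = [0] * len(A)
    i, j = len(A) - 1, n
    while j > 0 and i >= 0 and T[i][j] > 0:
        if i > 0 and T[i][j] == T[i - 1][j]:   # item i not taken
            i -= 1
        else:                                   # item i taken
            X[i] = 1
            j -= A[i][1]
            i -= 1
    return X

A = [(1, 1), (4, 3), (6, 5), (8, 7)]
T = knapsack01_table(11, A)
print(T[-1][11], zok_backtrack(T, A, 11))      # 13 [1, 1, 0, 1]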
The memoization algorithm is stated as follows based on the recurrence relation in (6.6):

Algorithm 6.8. ZOK-memoization

Declare a global two dimensional table, T1∼k,0∼n.

ZOK(n, A1∼k)
1: if n < 0, return −∞
2: if T[k][n] = nil,
3:   if k = 1 and n − w1 < 0, T[k][n] = 0
4:   if k = 1 and n − w1 ≥ 0, T[k][n] = p1
5:   if n ≥ 0 ∧ k > 1,
6:     T[k][n] = max(ZOK(n, A1∼k−1), ZOK(n − wk, A1∼k−1) + pk)
7: return T[k][n]

When the memoization Algorithm 6.8 is used, not all cells in the table may need to be
computed. Only twelve cells of the table are necessary, as shown in Figure 6.5 (c) when
A = {(1, 1), (4, 3), (6, 5), (8, 7)} to compute ZOK(11, {(1, 1), (4, 3), (6, 5), (8, 7)}). The com-
putational time and space complexities of Algorithm 6.8 are O(kn) and Θ(kn), respectively.

6.1.5 Unbounded Integer Knapsack

Consider the unbounded integer knapsack problem, defined on page 167. Recall that
each item is either not taken or taken multiple times. Consider the following table of sub-
solutions in Figure 6.6 (a), where T [i][j] contains the solution where the maximum capacity
is j and the set of items are A1∼i . As before, the order of A does not matter. Figure 6.6
(b) shows the table of A in a different order.
The following higher order recurrence relation can be derived:



 0 if n=0




 0 if n − w1 < 0 ∧ k = 1
U (n − w , A ) + p

if n − w1 ≥ 0 ∧ k = 1
1 1∼1 1
U (n, A1∼k ) = (6.7)

 U (n, A1∼k−1 ) ! if n − wk < 0 ∧ k > 1

U (n, A1∼k−1 )



max if n − wk ≥ 0 ∧ k > 1


U (n − wk , A1∼k ) + pk

Starting from the first row as the basis row, the entire table can be filled up sequentially
using the following strong inductive programming with a two dimensional table:

k  A1∼k              (pk, wk)\n  0  1  2  3  4  5  6   7   8   9   10  11
1  {(1, 1)}          (1, 1)      0  1  2  3  4  5  6   7   8   9   10  11
2  {(1, 1), (5, 3)}  (5, 3)      0  1  2  5  6  7  10  11  12  15  16  17
3  A1∼2 ∪ {(7, 4)}   (7, 4)      0  1  2  5  7  8  10  12  14  15  17  19
4  A1∼3 ∪ {(8, 5)}   (8, 5)      0  1  2  5  7  8  10  12  14  15  17  19
(a) Sub-solution table with A = {(1, 1), (5, 3), (7, 4), (8, 5)}.

k  A1∼k              (pk, wk)\n  0  1  2  3  4  5  6   7   8   9   10  11
1  {(7, 4)}          (7, 4)      0  0  0  0  7  7  7   7   14  14  14  14
2  {(7, 4), (5, 3)}  (5, 3)      0  0  0  5  7  7  10  12  14  15  17  19
3  A1∼2 ∪ {(1, 1)}   (1, 1)      0  1  2  5  7  8  10  12  14  15  17  19
4  A1∼3 ∪ {(8, 5)}   (8, 5)      0  1  2  5  7  8  10  12  14  15  17  19
(b) Sub-solution table with A = {(7, 4), (5, 3), (1, 1), (8, 5)}.

(c) Backtracking illustration for the unbounded knapsack problem: from U(11, {(1, 1), (5, 3), (7, 4), (8, 5)}) = 19, item (8, 5) is skipped, item (7, 4) is taken twice, item (5, 3) is taken once, and item (1, 1) is not taken, i.e., X = ⟨0, 1, 2, 0⟩ and

U(11, {(1, 1), (5, 3), (7, 4), (8, 5)}) = 1 × 0 + 5 × 1 + 7 × 2 + 8 × 0 = 19

Figure 6.6: Unbounded Knapsack with 2-dimensional Table.

Algorithm 6.9. 2D dynamic unbounded knapsack

dynamic_UB-knapsack(n, A)
1: Declare a (k × (n + 1)) table T
2: T[1][0] = 0
3: for j = 1 ∼ n
4:   if j − w1 < 0, T[1][j] = 0
5:   else T[1][j] = T[1][j − w1] + p1
6: for i = 2 ∼ k
7:   for j = 0 ∼ n
8:     if j − wi < 0,
9:       T[i][j] = T[i − 1][j]
10:    else
11:      T[i][j] = max(T[i − 1][j], T[i][j − wi] + pi)
12: return T[k][n]

The computational time complexity of Algorithm 6.9 is Θ(kn). The computational space
complexity of Algorithm 6.9 is Θ(kn) if solutions of all sub-problems are stored in a table.
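A Python sketch of Algorithm 6.9; the only change from the 0-1 version is that the 'take' case stays in the same row, so an item can be reused:

def unbounded_knapsack(n, A):
    # Unbounded knapsack by eqn (6.7); A is a list of (profit, weight) pairs
    k = len(A)
    T = [[0] * (n + 1) for _ in range(k)]
    p0, w0 = A[0]
    for j in range(1, n + 1):          # basis row: only item 0, repeatable
        if j - w0 >= 0:
            T[0][j] = T[0][j - w0] + p0
    for i in range(1, k):
        p, w = A[i]
        for j in range(n + 1):
            T[i][j] = T[i - 1][j]
            if j - w >= 0:
                T[i][j] = max(T[i][j], T[i][j - w] + p)   # same row: reuse item i
    return T[k - 1][n]

print(unbounded_knapsack(11, [(1, 1), (5, 3), (7, 4), (8, 5)]))   # 19, as in Figure 6.6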
If the desired output is the actual X rather than the maximum profit value, it can be
generated from the table T by the following backtracking algorithm:

Algorithm 6.10. 2D dynamic unbounded knapsack backtracking

⟨T is global and already computed by Algorithm 6.9.⟩
UBK-BT(i, j, A, X)
1: if j = 0, return X
2: else if i = 1 ∧ T1,j = 0, return X
3: else if Ti,j = Ti,j−wi + pi,
4:   xi = xi + 1
5:   return UBK-BT(i, j − wi, A, X)
6: else if Ti,j = Ti−1,j,
7:   return UBK-BT(i − 1, j, A, X)

The recursive function UBK-BT is called initially with UBK-BT(k, n, A, X), where all k elements xi ∈ X are 0. The highlighted cells are the ones that the backtracking Algorithm 6.10 visits. The backtracking process is illustrated in Figure 6.6 (c) with a toy example where n = 11 and A = {(1, 1), (5, 3), (7, 4), (8, 5)}.

6.1.6 Subset sum equality problem


Given a set S of k rods of different lengths, S = {s1 , s2 , · · · , sk }, find a subset of the
rods whose total length is exactly n. This Subset Sum Equality problem, SSE in
short, is defined formally as follows:

Problem 6.3. Subset sum equality

Input: A set S of k integers and an integer n ∈ Z
Output:

$$\begin{cases} S' & \text{if } \exists S' \subseteq S \text{ such that } \sum_{i=1}^{|S'|} s'_i = n \\ \text{False} & \text{otherwise} \end{cases}$$

For example, in Figure 6.7, there are a couple of solutions for n = 12 but no solution for n = 11. A naı̈ve algorithm would try all 2^k − 1 non-empty subsets to see whether the sum of their elements equals n. Clearly, it takes exponential time.
The idea of utilizing a two dimensional table to design an algorithm for the subset sum equality problem was first devised in [86]. As before, a two dimensional table of all sub-solutions can
be constructed, as shown in Figure 6.7 (d). Figure 6.7 (e) shows the table of S in a different
order.
The kth piece, sk may or may not be used in the final solution. If not used, SSE(n, S1∼k )
would be the same as SSE(n, S1∼k−1 ), which is the problem without the kth piece. If used,
SSE(n, S1∼k ) would be the same as SSE(n−sk , S1∼k−1 ). The recurrence relation in eqn (6.8)

(a) sample input: S = {2, 3, 5, 7}, four rods of lengths 2, 3, 5, and 7
(b) possible case: (n = 12) = 5 + 7
(c) impossible case: n = 11

k  S1∼k          sk\n    0  1  2  3  4  5  6  7  8  9  10  11  12
1  {2}           s1 = 2  T  F  T  F  F  F  F  F  F  F  F   F   F
2  {2, 3}        s2 = 3  T  F  T  T  F  T  F  F  F  F  F   F   F
3  {2, 3, 5}     s3 = 5  T  F  T  T  F  T  F  T  T  F  T   F   F
4  {2, 3, 5, 7}  s4 = 7  T  F  T  T  F  T  F  T  T  T  T   F   T
(d) Sub-solution table with S = {2, 3, 5, 7}: SSE(S, 12) = 7 + 5 = 12

k  S1∼k          sk\n    0  1  2  3  4  5  6  7  8  9  10  11  12
1  {5}           s1 = 5  T  F  F  F  F  T  F  F  F  F  F   F   F
2  {5, 3}        s2 = 3  T  F  F  T  F  T  F  F  T  F  F   F   F
3  {5, 3, 2}     s3 = 2  T  F  T  T  F  T  F  T  T  F  T   F   F
4  {5, 3, 2, 7}  s4 = 7  T  F  T  T  F  T  F  T  T  T  T   F   T
(e) Sub-solution table with S = {5, 3, 2, 7}: SSE(S, 12) = 7 + 2 + 3 = 12

Figure 6.7: Subset Sum Equality with 2-dimensional Table.

can be derived, where E denotes the SSE function.




E(n, S1∼k) =
    T                                              if n = 0 ∨ n = s1
    F                                              if n > 0 ∧ n ≠ s1 ∧ k = 1
    E(n, S1∼k−1)                                   if n − sk < 0 ∧ k > 1
    E(n, S1∼k−1) ∨ E(n − sk, S1∼k−1)               if n − sk ≥ 0 ∧ k > 1       (6.8)

Now, an algorithm using two dimensional strong inductive programming, which is con-
ventionally known as dynamic programming, can be derived as follows:
Algorithm 6.11. 2D dynamic subset sum equality

dynamic subset sum equ(n, S)


Declare a (k × (n + 1)) table T with ‘F’ initially . . . . . . . . . 1
T [1][0] = T and T [1][s1 ] = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 0 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if j − si < 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

T [i][j] = T [i − 1][j] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
T [i][j] = T [i − 1][j] ∨ T [i − 1][j − si ] . . . . . . . . . . . . . . 8
return T [k][n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Both the computational time and space complexities of Algorithm 6.11 are Θ(kn) if solutions
of all sub-problems are stored in a table. The computational space complexity of Algo-
rithm 6.11 will be reduced to Θ(n) in Chapter 7. A backtracking method can generate
the actual subset, rather than just a true or false output; it is left as an exercise.
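For readers who want to experiment, the following is a minimal Python sketch of Algorithm 6.11; the function name subset_sum_equal and the 0-based list indexing are illustrative choices, not from the text.

def subset_sum_equal(S, n):
    """2D tabulation for subset sum equality; T[i][j] is True
    iff some subset of S[0..i] sums to exactly j (Algorithm 6.11 sketch)."""
    k = len(S)
    T = [[False] * (n + 1) for _ in range(k)]
    T[0][0] = True
    if S[0] <= n:
        T[0][S[0]] = True
    for i in range(1, k):
        for j in range(n + 1):
            if j - S[i] < 0:
                T[i][j] = T[i - 1][j]
            else:
                T[i][j] = T[i - 1][j] or T[i - 1][j - S[i]]
    return T[k - 1][n]

print(subset_sum_equal([2, 3, 5, 7], 12))  # True: 5 + 7
print(subset_sum_equal([2, 3, 5, 7], 11))  # False, as in Figure 6.7

The two calls reproduce the possible and impossible cases of Figure 6.7.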

6.1.7 Unbounded Subset Product Equality Problem


Given a set P of k positive numbers, P = {p1 , p2 , · · · , pk }, find a subset of numbers whose
product equals exactly n where a number may be used more than once. This unbounded
subset product of positive number problem, or simply USPE, is defined formally as follows:
Problem 6.4. Unbounded subset product of positive number equality (USPE)
Input: A set P of k positive numbers and n ∈ R+
Output: True if ∃X = (x1, · · · , xk) such that ∏_{i=1}^{k} pi^xi = n, where each xi ≥ 0 is an integer
        False otherwise

For a toy example of P = {2, 3, 5} and n = 20, the solution is X = (2, 0, 1) because
2^2 × 3^0 × 5^1 = 20. A similar two dimensional table as before can be constructed for the
problem, as shown in Figure 6.8.

k P1∼k pk \n 1 2 3 4 5 6 7 8 9 10 11 12 13 14
1 {2} 2 T T F T F F F T F F F F F F
2 {2, 3} 3 T T T T F T F T T F F T F F
3 {2, 3, 5} 5 T T T T T T F T T T F T F F
4 {2, 3, 5, 7} 7 T T T T T T T T T T F T F T

Figure 6.8: Unbounded Subset Product Equality with 2-dimensional Table.

The kth number, pk, may or may not divide n. If it does not divide n, USPE(n, P1∼k)
is the same as USPE(n, P1∼k−1), the problem without the kth number. If it divides n,
pk may still be used zero or more times, so USPE(n, P1∼k) = USPE(n/pk, P1∼k) ∨
USPE(n, P1∼k−1). The following recurrence relation in eqn (6.9) can be derived:



USPE(n, P1∼k) =
    T                                               if n = 1
    F                                               if n > 1 ∧ k = 1 ∧ p1 ∤ n
    USPE(n/p1, P1∼1)                                if n > 1 ∧ k = 1 ∧ p1 | n
    USPE(n, P1∼k−1)                                 if n > 1 ∧ k > 1 ∧ pk ∤ n
    USPE(n/pk, P1∼k) ∨ USPE(n, P1∼k−1)              if n > 1 ∧ k > 1 ∧ pk | n       (6.9)

The backtracking for USPE(14, {2, 3, 5, 7}) is highlighted in the table in Figure 6.8.
Based on the recurrence relation in eqn (6.9), a 2D strong inductive programming algorithm
can be devised as follows:

Algorithm 6.12. 2D dynamic unbounded subset product equality


USPE(n, P1∼k )
Declare a (k × n) table T with ‘F’ initially . . . . . . . . 1
T [1][1] = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if p1 | i, T [1][i] = T [1][i/p1 ] . . . . . . . . . . . . . . . . . . . 4
for i = 2 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
T [i][1] = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
for j = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
if pi | j, T [i][j] = T [i][j/pi ] ∨ T [i − 1][j] . . . . . 8
else, T [i][j] = T [i − 1][j] . . . . . . . . . . . . . . . . . . . . 9
return T [k][n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Both computational time and space complexities of Algorithm 6.12 are Θ(kn).
The unbounded subset product equality problem was considered as exercises Q. 4.17
and Q. 5.37 on pages 205 and 289, respectively, where strong inductive programming with
a one dimensional table was possible. Albeit one of our mottoes is to come up with an
efficient algorithm, building the two dimensional table is often desirable in order to gain
insights beyond the algorithm design aspect.
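A minimal Python sketch of Algorithm 6.12, under the corrected recurrence above, is given below; usp_equal is a hypothetical name, and the rows mirror Figure 6.8.

def usp_equal(P, n):
    """True iff n is a product of elements of P, each usable any
    number of times (unbounded subset product, Algorithm 6.12 sketch)."""
    k = len(P)
    T = [[False] * (n + 1) for _ in range(k)]  # T[i][j] for j = 1..n
    for i in range(k):
        T[i][1] = True
    for j in range(2, n + 1):
        if j % P[0] == 0:
            T[0][j] = T[0][j // P[0]]
    for i in range(1, k):
        for j in range(2, n + 1):
            T[i][j] = T[i - 1][j]                     # do not use P[i]
            if j % P[i] == 0:
                T[i][j] = T[i][j] or T[i][j // P[i]]  # use P[i], possibly again
    return T[k - 1][n]

print(usp_equal([2, 3, 5, 7], 14))  # True: 2 * 7
print(usp_equal([2, 3, 5, 7], 11))  # False, as in Figure 6.8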
Let’s consider whether it is possible to compose P such that the entire row is true for
every n ≥ 2. The answer is ‘no,’ and this leads to the infinitude of primes theorem, which
was proven by Euclid as Proposition IX-20 [62, p. 271].
Theorem 6.1 (Infinitude of Primes). There are infinitely many prime numbers.
Proof. Suppose that there are only finitely many prime numbers, i.e., P = {p1 , p2 , · · · , pk }
contains all of them. Let n = ∏_{i=1}^{k} pi + 1, which is not divisible by any prime pi ∈ P.
If n is a prime number, it is not in the original set P, which is a contradiction. If n is a
composite number, n must have prime factors that are not in P, which is also a contradiction.
Therefore, there are infinitely many prime numbers. □
To understand the proof better, consider the recurrence relation in eqn (6.10) and
the following inductive programming Algorithm 6.13, which generates prime numbers based
on Euclid’s proof.

{2}
 ! basis
P = |P
Q | (6.10)
P ∪ factor
 pi + 1 inductive step
i=1

Algorithm 6.13. Prime number generation


gen prime()
P = {2} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
n = 2 + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
while USPE(n, P ) = false . . . . . . . . . . . . . . . . . . . . . . . . . . 3
P = P ∪ factor(n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
n = ∏_{i=1}^{|P|} pi + 1 . . . . . . . . . . . . . . . . . . . . . . . . . 5
return ‘halts’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

i   Prime set P                 n = ∏_{i=1}^{|P|} pi + 1                      factor(n)
1   {2}                         2 + 1 = 3                                     {3}
2   {2, 3}                      2 × 3 + 1 = 7                                 {7}
3   {2, 3, 7}                   2 × 3 × 7 + 1 = 43                            {43}
4   {2, 3, 7, 43}               2 × 3 × 7 × 43 + 1 = 1807                     {13, 139}
5   {2, 3, 7, 43, 13, 139}      2 × 3 × 7 × 43 × 13 × 139 + 1 = 3263443       {3263443}
..  ..                          ..                                            ..

Figure 6.9: Infinitude of Primes Theorem 6.1 illustration.

Since sets P and factor(n) are mutually exclusive, line 4 in Algorithm 6.13 is simply
appending two disjoint sets. The first five iterations of Algorithm 6.13 are illustrated in
Figure 6.9. As USPE(n, P) is always false when n = ∏_{i=1}^{|P|} pi + 1, Algorithm 6.13
never halts and runs in an infinite loop, producing ever larger sets of prime numbers.
Algorithm 6.13 is the essence of the proof of Theorem 6.1. A program that never halts often
serves as a proof for theorems involving ‘infinitely many’ questions, and a tabulation may
provide better insight for devising such a proof. Although this subsection deviates from the
main theme of this chapter, it is included to point out how computer scientists conceive
many mathematical theorems via algorithms.
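The following Python sketch makes the never-halting loop concrete. The trial-division factor helper is an assumption (the text does not specify one), and the loop is deliberately bounded to five iterations so that it reproduces Figure 6.9 and then stops.

from math import prod

def factor(n):
    """Naive trial-division factorization; returns the set of
    distinct prime factors of n (helper assumed, not from the text)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

# Euclid-style prime generation (Algorithm 6.13 sketch, bounded to
# five iterations so that it halts, unlike the original).
P = [2]
for _ in range(5):
    n = prod(P) + 1          # divisible by no prime currently in P
    P.extend(sorted(factor(n)))
    print(n, P)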

6.2 Problems on Strings

(a) Twinkle, Twinkle, Little Star music notes (staff in 4/4 time)

(b) Ah vous dirai-je, Maman music notes (staff in 4/4 time)

Figure 6.10: Music applications of the longest common sub-sequence

In this section, problems involving strings and their sub-sequences are considered. These
problems have various real-world applications, such as bioinformatics, information retrieval,
music, pattern recognition, plagiarism detection, spell checkers, etc. A string, a sequence of
symbols, is one of the most popular pattern representations. When patterns are represented
by strings, how similar or different two patterns are can be measured by solving string
matching problems. For example, in Figure 6.10, music notes can be represented by strings
and their similarity can be determined by string matching.

6.2.1 Longest Common Sub-sequence


A sub-sequence S of a sequence or string A is obtained by deleting zero or more elements
from A. For example, some sub-sequences of ‘Algorithm’ include ‘Agori’, ‘gorim’, ‘Arith’,

A l g o r i t h m

A l k h w a r i z m i
(a) Assignment of LCS: ‘Alrim’
Case 1: (an = bm)                 Case 2: (an ≠ bm)                Case 3: (an ≠ bm)
LCS(A1∼n, B1∼m) =                 LCS(A1∼n, B1∼m) =                LCS(A1∼n, B1∼m) =
LCS(A1∼n−1, B1∼m−1) + 1           LCS(A1∼n, B1∼m−1)                LCS(A1∼n−1, B1∼m)
(b) backward thinking
A l k h w a r i z m i
0 0 0 0 0 0 0 0 0 0 0 0
A 0 1 1 1 1 1 1 1 1 1 1 1
l 0 1 2 2 2 2 2 2 2 2 2 2
g 0 1 2 2 2 2 2 2 2 2 2 2
o 0 1 2 2 2 2 2 2 2 2 2 2
r 0 1 2 2 2 2 2 3 3 3 3 3
i 0 1 2 2 2 2 2 3 4 4 4 4
t 0 1 2 2 2 2 2 3 4 4 4 4
h 0 1 2 2 2 2 2 3 4 4 4 4
m 0 1 2 2 2 2 2 3 4 4 5 5

(c) table of longest common sub-sequence

Figure 6.11: Longest common sub-sequence illustration.

etc. Some sub-sequences, such as ‘Algo’ and ‘rithm’, are consecutive, but a sub-sequence
need not be consecutive. A sequence S is a sub-sequence of A if all elements in S appear
in order in A. Let’s denote this S ⊆′ A to distinguish it from the subset notation S ⊆ A,
where the order does not matter, i.e., {g, i, l, o, t} ⊆ {a, g, h, i, l, m, o, r, t}. For each element
si ∈ S, there exists ai′ ∈ A where ai′ = si.

Definition 6.1. A sequence S is a sub-sequence of a sequence A, S ⊆′ A, if the following
condition is met:

∀i, j ∈ {1, · · · , |S|}, ∃i′, j′ ∈ {1, · · · , |A|} such that if i < j, then i′ < j′ ∧ si = ai′ ∧ sj = aj′

The longest common sub-sequence problem or simply LCS in short is to find a maximum
length common sub-sequence between two sequences. The longest common sub-sequence
between ‘Al-khwarizmi’ and ‘Algorithm’ strings is 5 since both strings contain ‘Alrim,’ whose

length is 5, as in Figure 6.11 (a). This problem is formally defined as follows:

Problem 6.5. Longest Common Sub-sequence LCS(A1∼n , B1∼m )


Input: a string A of size n and a string B of size m
Output: |S| or S such that |S| is maximized where S ⊆′ A ∧ S ⊆′ B.

In the backward thinking illustrated in Figure 6.11 (b), imagine that three sub-problems,
LCS(A1∼n−1, B1∼m−1), LCS(A1∼n−1, B1∼m), and LCS(A1∼n, B1∼m−1), are already solved.
There are two cases: an ≠ bm or an = bm. In the first case, an ≠ bm, the solution for
LCS(A1∼n, B1∼m) is identical to either LCS(A1∼n, B1∼m−1) or LCS(A1∼n−1, B1∼m),
whichever gives a higher value. When an = bm, LCS(A1∼n, B1∼m) is clearly
LCS(A1∼n−1, B1∼m−1) + 1. One might wonder about solutions that do not match an with
bm but instead match bm with some ax where x < n; even then, LCS(A1∼n−1, B1∼m−1) is
unchanged and LCS(A1∼n, B1∼m) = LCS(A1∼n−1, B1∼m−1) + 1 still holds.
Finally, if one of the input strings is empty, the longest common sub-sequence is clearly zero,
which serves as the basis case. The following recurrence relation for LCS in eqn (6.11) can
be derived:


LCS(A1∼n, B1∼m) =
    0                                                     if n = 0 or m = 0
    LCS(A1∼n−1, B1∼m−1) + 1                               if n, m > 0 and an = bm
    max(LCS(A1∼n, B1∼m−1), LCS(A1∼n−1, B1∼m))             if n, m > 0 and an ≠ bm       (6.11)

As depicted in Figure 6.11 (c), a two dimensional (n + 1) × (m + 1) table T comes in very
handy to compute the longest common sub-sequence. First, place the input strings A1∼n
and B1∼m along the left and top of the table, respectively. The ith row and jth column
cell, T[i, j], contains the value of LCS(A1∼i, B1∼j). Then, each cell of the table can be
computed using the recurrence relation in eqn (6.11). Its pseudo code is stated as follows:

Algorithm 6.14. Dynamic Longest common sub-sequence

LCS(A1∼n , B1∼m )
Declare a (n + 1) × (m + 1) table T with 0 initially . . . . . . 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = 1 to m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if ai = bj , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Ti,j = Ti−1,j−1 + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Ti,j = max(Ti,j−1 , Ti−1,j ) . . . . . . . . . . . . . . . . . . . . . . . . . 7
return Tn,m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Basis cases are already computed by declaring the table, as the default initial values are
zeros in most contemporary programming languages. Computing |S| clearly takes Θ(nm)
time, and the computational space complexity is Θ(nm) as well. To find the actual longest
common sub-sequence, S, the backtracking Algorithm 6.15 below can be invoked initially
with LCS2BT(n, m). T is global and already computed by Algorithm 6.14. Both strings
A1∼n and B1∼m are global as well. The symbol ε denotes the empty string.

Algorithm 6.15. Backtracking the longest common sub-sequence table


LCS2BT(i, j)
⟨T is global and already computed by Algorithm 6.14.⟩
if Ti,j = 0, return ε . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if ai = bj , return append(ai , LCS2BT(i − 1, j − 1)) . . . 2
if ai ≠ bj , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if Ti,j−1 > Ti−1,j , return LCS2BT(i, j − 1) . . . . . . . . . 4
else, return LCS2BT(i − 1, j) . . . . . . . . . . . . . . . . . . . . . . .5

The computational time complexity of the backtracking Algorithm 6.15 is O(n + m).
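A compact Python sketch of Algorithms 6.14 and 6.15 follows; lcs_table and lcs_backtrack are illustrative names.

def lcs_table(A, B):
    """Fill the (n+1) x (m+1) LCS table of eqn (6.11) (Algorithm 6.14)."""
    n, m = len(A), len(B)
    T = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                T[i][j] = T[i - 1][j - 1] + 1
            else:
                T[i][j] = max(T[i][j - 1], T[i - 1][j])
    return T

def lcs_backtrack(T, A, B, i, j):
    """Recover one longest common sub-sequence (Algorithm 6.15)."""
    if T[i][j] == 0:
        return ''
    if A[i - 1] == B[j - 1]:
        return lcs_backtrack(T, A, B, i - 1, j - 1) + A[i - 1]
    if T[i][j - 1] > T[i - 1][j]:
        return lcs_backtrack(T, A, B, i, j - 1)
    return lcs_backtrack(T, A, B, i - 1, j)

A, B = 'Algorithm', 'Alkhwarizmi'
T = lcs_table(A, B)
print(T[len(A)][len(B)], lcs_backtrack(T, A, B, len(A), len(B)))  # 5 Alrim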

6.2.2 String Edit Distance: InDel

A l g o r i t h m
m m d d i i i i m m d d i m i

A l k h w a r i z m i
(a) Assignment of InDel edit distance
A l k h w a r i z m i
0 1 2 3 4 5 6 7 8 9 10 11
A 1 0 1 2 3 4 5 6 7 8 9 10
l 2 1 0 1 2 3 4 5 6 7 8 9
g 3 2 1 2 3 4 5 6 7 8 9 10
o 4 3 2 3 4 5 6 7 8 9 10 11
r 5 4 3 4 5 6 7 6 7 8 9 10
i 6 5 4 5 6 7 8 7 6 7 8 9
t 7 6 5 6 7 8 9 8 7 8 9 10
h 8 7 6 7 6 7 8 9 8 9 10 11
m 9 8 7 8 7 8 9 10 9 10 9 10

(b) table of InDel edit distance

Figure 6.12: InDel edit distance illustration.

While the longest common sub-sequence Problem 6.5 is concerned with how similar two
strings are, distance measure problems are concerned with how different two strings are.
One of the popular distance measures between strings is the edit distance with insertion
and deletion operations only. To distinguish it from other edit distance measures defined
slightly differently, the abbreviation, ‘InDel,’ shall be used. InDel is to convert a string,
A1∼n , to another string, B1∼m , by inserting some elements from B1∼m and deleting some
elements from A1∼n in order, as shown in Figure 6.12 (a). Four deletion and six insertion
operations are necessary to convert the string ‘Algorithm’ to the other string ‘Alkhwarizmi’.
The order of operations matters, i.e., if the ith element in B is inserted at the i′th place,
any jth element with j > i can only be inserted at a j′th place with j′ > i′. As formulating the problem is

quite tricky, the recurrence relation in eqn (6.12) shall serve as the output definition.
Problem 6.6. Indel edit distance Indel(A1∼n , B1∼m )
Input: a string A of size n and a string B of size m
Output: Indel(A1∼n , B1∼m )


Indel(A1∼n, B1∼m) =
    max(n, m)                                                   if min(n, m) = 0
    Indel(A1∼n−1, B1∼m−1)                                       if n, m > 0 ∧ an = bm
    min(Indel(A1∼n, B1∼m−1), Indel(A1∼n−1, B1∼m)) + 1           if n, m > 0 ∧ an ≠ bm       (6.12)

Basis cases occur when one of the strings is empty. If the first string, A, is empty, all m
elements of B1∼m must be inserted. If the second string, B, is empty, all n elements of
A1∼n must be deleted. These basis cases form the first row and first column of the table.
For the remaining cells, if ai = bj, Indel(A1∼i, B1∼j) = Indel(A1∼i−1, B1∼j−1). If ai ≠ bj,
either ai must be deleted in addition to Indel(A1∼i−1, B1∼j) or bj must be inserted in
addition to Indel(A1∼i, B1∼j−1); whichever value is smaller becomes the distance
Indel(A1∼i, B1∼j). Using the recurrence relation in eqn (6.12), a 2D strong inductive
programming algorithm can be stated as follows:
Algorithm 6.16. Dynamic Edit distance
ED indel(A1∼n , B1∼m )
Declare a (n + 1) × (m + 1) table T . . . . . . . . . . . . . . . . . . . . . 1
T0,0 = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = 1 to m, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T0,j = j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Ti,0 = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
for j = 1 to m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
if (ai = bj ), Ti,j = Ti−1,j−1 . . . . . . . . . . . . . . . . . . . . . . . 8
else, Ti,j = min(Ti,j−1 , Ti−1,j ) + 1 . . . . . . . . . . . . . . . . 9
return Tn,m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Both computational time and space complexities of Algorithm 6.16 are Θ(nm). Fig-
ure 6.12 (b) shows the full table and highlighted cells indicate the backtracking path. The
diagonal arrows are matches, and up-arrow and down-arrow indicate the deletion and inser-
tion operations, respectively.
An edit distance measure (InDel) has a relationship with a similarity measure (LCS).
Theorem 6.2. Relationship between the longest common sub-sequence (LCS) and the edit
distance with indels (InDel).
Indel(A, B) = |A| + |B| − 2 × LCS(A, B)
Proof. Let S be LCS(A, B). Then |A| = |A − S| + |S| and |B| = |B − S| + |S|. Indel(A, B)
is the number of deletions from A plus the number of insertions needed to edit A into B.
Exactly |A − S| deletions from A and |B − S| insertions are required, as depicted in
Figure 6.12 (b). Hence, InDel(A, B) = |A − S| + |B − S| = |A| − |S| + |B| − |S| =
|A| + |B| − 2|S| = |A| + |B| − 2 × LCS(A, B). □

Theorem 6.2 states that if LCS is solved, Indel can be trivially computed. Conversely,
LCS can be trivially computed once Indel is solved by the following eqn (6.13):

LCS(A1∼n, B1∼m) = (n + m − Indel(A1∼n, B1∼m)) / 2       (6.13)
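Theorem 6.2 and eqn (6.13) are easy to sanity-check numerically; the sketch below implements Algorithm 6.16 directly (indel is an illustrative name).

def indel(A, B):
    """InDel edit distance of eqn (6.12) (Algorithm 6.16 sketch)."""
    n, m = len(A), len(B)
    T = [[0] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        T[0][j] = j
    for i in range(1, n + 1):
        T[i][0] = i
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                T[i][j] = T[i - 1][j - 1]
            else:
                T[i][j] = min(T[i][j - 1], T[i - 1][j]) + 1
    return T[n][m]

# Theorem 6.2: Indel(A, B) = |A| + |B| - 2 * LCS(A, B) = 9 + 11 - 2*5 = 10
print(indel('Algorithm', 'Alkhwarizmi'))  # 10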

6.2.3 Levenshtein Distance


In 1965, Levenshtein suggested an edit distance which allows substitutions as well as
insertion and deletion operations [111]. The penalty cost for a substitution may vary, de-
pending on applications, but one version is given in eqn (6.14). The Levenshtein edit distance
problem is defined as follows:
Problem 6.7. Levenshtein edit distance
Input: a string A of size n and a string B of size m
Output: Lev(A1∼n , B1∼m )


Lev(A1∼n, B1∼m) =
    max(n, m)                                               if min(n, m) = 0
    min( Lev(A1∼n−1, B1∼m−1) + c(an, bm),
         Lev(A1∼n, B1∼m−1) + 1,
         Lev(A1∼n−1, B1∼m) + 1 )                            if n, m > 0

where c(x, y) = 0 if x = y and c(x, y) = 1 if x ≠ y.        (6.14)

Based on the recurrence relation in eqn (6.14), a two dimensional strong inductive pro-
gramming algorithm can be devised as follows:
Algorithm 6.17. Levenshtein edit distance
ED Lev(A1∼n , B1∼m )
Declare a (n + 1) × (m + 1) table T . . . . . . . . . . . . . . . . . . . . . 1
T0,0 = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = 1 to m, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T0,j = j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Ti,0 = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
for j = 1 to m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
if (ai = bj ), c = 0 ................................ 8
else, c = 1 .....................................9
Ti,j = min(Ti,j−1 + 1, Ti−1,j + 1, Ti−1,j−1 + c) . . . . . . 10
return Tn,m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Vladimir I. Levenshtein (1935 – 2017) was a Russian scientist. A pioneer in the
theory of error correcting codes, Dr. Vladimir I. Levenshtein is known as the father of
coding theory in Russia. © Photo Credit: Renate Schmid, MFO, licensed under CC
BY-SA 2.0 DE.

A l k h w a r i z m i
0 1 2 3 4 5 6 7 8 9 10 11
A 1 0 1 2 3 4 5 6 7 8 9 10
l 2 1 0 1 2 3 4 5 6 7 8 9
g 3 2 1 1 2 3 4 5 6 7 8 9
o 4 3 2 2 2 3 4 5 6 7 8 9
r 5 4 3 3 3 4 4 4 5 6 7 8
i 6 5 4 4 4 4 5 5 4 5 6 7
t 7 6 5 5 5 5 5 6 5 5 6 7
h 8 7 6 6 5 6 6 6 6 6 6 7
m 9 8 7 7 6 6 7 7 7 7 6 7

(a) table of Levenshtein edit distance


A l g o r i t h m
m m s s i i m m d s m i

A l k h w a r i z m i
(b) Assignment of Levenshtein edit distance
A l k h w a r i z m i
0 1 2 3 4 5 6 7 8 9 10 11
A 1 0 1 2 3 4 5 6 7 8 9 10
l 2 1 0 1 2 3 4 5 6 7 8 9
g 3 2 1 1 2 3 4 5 6 7 8 9
o 4 3 2 2 2 3 4 5 6 7 8 9
r 5 4 3 3 3 4 4 4 5 6 7 8
i 6 5 4 4 4 4 5 5 4 5 6 7
t 7 6 5 5 5 5 5 6 5 5 6 7
h 8 7 6 6 5 6 6 6 6 6 6 7
m 9 8 7 7 6 6 7 7 7 7 6 7

(c) table of Levenshtein edit distance


A l g o r i t h m
m m i i s s m m s s s

A l k h w a r i z m i
(d) Assignment of Levenshtein edit distance

Figure 6.13: Levenshtein edit distance illustration.



Both the computational time and space complexities of Algorithm 6.17 are Θ(nm). Fig-
ure 6.13 shows a couple of possible edit assignments and their backtracking paths. The blue
diagonal arrows are matches, and the red bold diagonal arrows indicate substitutions. Up-arrows
and down-arrows indicate the deletion and insertion operations, respectively. The Levenshtein
edit distance is the minimum total number of substitution, deletion, and insertion operations.
A memoization version of eqn (6.14) is given below, but there is no computational
advantage over the 2D strong inductive programming Algorithm 6.17. It is invoked as
Lev(n, m) initially.

Algorithm 6.18. Levenshtein edit distance by memoization

Declare a global two dimensional table, T1∼n,1∼m .


Let A1∼n and B1∼m be declared globally.
Lev(r, c)
if r = 0, return c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if c = 0, return r . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if T [r][c] = nil, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if ar = bc , t = Lev(r − 1, c − 1) . . . . . . . . . . . . . . . . . . . . . 4
if ar ≠ bc , t = Lev(r − 1, c − 1) + 1 . . . . . . . . . . . . . . . . . 5
T [r][c] = min(t, Lev(r, c − 1) + 1, Lev(r − 1, c) + 1) . . . 6
return T [r][c] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
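In Python, the memoized recurrence of Algorithm 6.18 (with the insertion and deletion costs written explicitly) can be sketched with functools.lru_cache standing in for the global table T; lev is an illustrative name.

from functools import lru_cache

def lev(A, B):
    """Levenshtein distance via memoized eqn (6.14) (Algorithm 6.18 sketch)."""
    @lru_cache(maxsize=None)
    def rec(r, c):
        if r == 0:
            return c            # insert the remaining c symbols of B
        if c == 0:
            return r            # delete the remaining r symbols of A
        sub = 0 if A[r - 1] == B[c - 1] else 1
        return min(rec(r - 1, c - 1) + sub,   # match or substitute
                   rec(r, c - 1) + 1,         # insert
                   rec(r - 1, c) + 1)         # delete
    return rec(len(A), len(B))

print(lev('Algorithm', 'Alkhwarizmi'))  # 7, as in Figure 6.13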

6.2.4 Longest Palindromic Sub-sequence

(a) Mary Had A Little Lamb music note

(b) Output: longest palindrome sub-sequence note

Figure 6.14: Music application of longest palindrome sub-sequence

Consider the longest palindromic sub-sequence problem, or LPS in short. The input is a
string of symbols, and the output is a longest sub-sequence of the input that is a palindrome.
Figure 6.14 depicts the longest palindromic sub-sequence notes in a music application. The
problem can be formally defined as follows:

Problem 6.8. Longest palindromic sub-sequence


Input: A string A1∼n of symbols
Output: A sub-sequence S or |S| such that |S| is maximized,
        where S ⊆′ A and isPalindrome(S) = T

A two dimensional (n × n) table, as shown in Figure 6.15, can be useful in tackling this
problem, where the ith row and jth column cell represents the sub-string of the input that
starts at the ith position and ends at the jth position; T[i][j] = LPS(Ai∼j). First, place
the input string along the top and the left of the table. Only the upper-right triangular
part of the table is computed. A

   C G F A F C                        C A A G C A
C  1 1 1 1 3 5                     C  1 1 2 2 4 4
G  0 1 1 1 3 3                     A  0 1 2 2 2 3
F  0 0 1 1 3 3                     A  0 0 1 1 1 3
A  0 0 0 1 1 1                     G  0 0 0 1 1 1
F  0 0 0 0 1 1                     C  0 0 0 0 1 1
C  0 0 0 0 0 1                     A  0 0 0 0 0 1

LPS(‘CGFAFC’) = ‘CFAFC’            LPS(‘CAAGCA’) = ‘CAAC’

(a) odd length output case         (b) even length output case

Figure 6.15: Longest palindromic sub-sequence tables for two toy inputs.

recurrence relation of the longest palindromic sub-sequence problem is given below:




LPS(Ai∼j) =
    1                                         if i = j
    0                                         if i > j
    LPS(Ai+1∼j−1) + 2                         if i < j and ai = aj
    max(LPS(Ai+1∼j), LPS(Ai∼j−1))             if i < j and ai ≠ aj       (6.15)

The basis part of a recurrence relation is the main diagonal of the table. If a sub-string
starts from the ith position and ends at the ith position, it is a palindrome by itself of length
1. Cells in the lower-left triangle below the main diagonal are 0’s. For the main recursive
relation parts, there are two cases to consider. First, when the starting and ending elements
are matched, the longest palindromic sub-sequence is that of the inner sub-string plus two:
LPS(As,e ) = LPS(As+1,e−1 ) + 2. Next, when the starting and ending elements are different,
LPS(As,e ) is the same as either LPS(As+1,e ) or LPS(As,e−1 ), whichever is greater.
Using the recurrence relation in eqn (6.15), a 2D strong inductive programming algorithm
can be devised by starting from the main diagonal and solving toward the upper-right corner.
A pseudo code is stated as follows:
Algorithm 6.19. Dynamic longest palindrome sub-sequence
LPS(A1∼n )
Declare an n × n table T with 0 initially . . . . . . . . . . . . . . . . . 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Ti,i = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 1 to n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 1 to n − j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
if ai = ai+j , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Ti,i+j = Ti+1,i+j−1 + 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Ti,i+j = max(Ti+1,i+j , Ti,i+j−1 ) . . . . . . . . . . . . . . . . . . . 9
return T1,n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Declaring a table with initial value 0s in line 1 of Algorithm 6.19 takes care of the second
line basis case in eqn (6.15). Lines 2 and 3 solve the main diagonal basis case. Lines 4 ∼ 9
318 CHAPTER 6. HIGHER DIMENSIONAL TABULATION

are the main recurrence relation parts. Both computational time and space complexities of
Algorithm 6.19 are Θ(n²).
A memoization version of eqn (6.15) is given below, but there is no computational
advantage over the 2D strong inductive programming Algorithm 6.19. It is invoked initially
with LPS(1, n).

Algorithm 6.20. Longest palindromic sub-sequence by memoization

Declare a global two dimensional table, T1∼n,1∼n .


Let A1∼n be declared globally.
LPS(r, c)
if r = c, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if r > c, return 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if T [r][c] = nil, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if ar = ac , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
T [r][c] = LPS(r + 1, c − 1) + 2 . . . . . . . . . . . . . . . . . . . . . . 5
if ar ≠ ac , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
T [r][c] = max(LPS(r + 1, c), LPS(r, c − 1)) . . . . . . . . . . 7
return T [r][c] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

To find the actual longest palindromic sub-sequence, As∼e, the following backtracking
algorithm can be invoked initially with LPS2BT(1, n). T is global and already computed
by Algorithm 6.19. The input string A1∼n is declared globally as well. The symbol ε denotes
an empty string.

Algorithm 6.21. Backtracking the longest palindromic sub-sequence table

LPS2BT(i, j)
⟨T is global and already computed by Algorithm 6.19.⟩
if i > j, return ε . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if i = j, return ⟨ai⟩ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if i < j ∧ ai ≠ aj , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if Ti+1,j ≥ Ti,j−1 , return LPS2BT(i + 1, j) . . . . . . . 4
else, return LPS2BT(i, j − 1) . . . . . . . . . . . . . . . . . . . . 5
if i < j ∧ ai = aj , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return append(ai , LPS2BT(i + 1, j − 1), aj ) . . . . . . 7

The computational time complexity of the backtracking Algorithm 6.21 is O(n).
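A Python sketch combining the tabulation of Algorithm 6.19 with the backtracking of Algorithm 6.21 is shown below; lps is an illustrative name.

def lps(A):
    """Longest palindromic sub-sequence length and one witness,
    via eqn (6.15) (Algorithms 6.19 and 6.21 sketch)."""
    n = len(A)
    T = [[0] * n for _ in range(n)]
    for i in range(n):
        T[i][i] = 1
    for d in range(1, n):                # diagonal offset d = j - i
        for i in range(n - d):
            j = i + d
            if A[i] == A[j]:
                T[i][j] = (T[i + 1][j - 1] if d > 1 else 0) + 2
            else:
                T[i][j] = max(T[i + 1][j], T[i][j - 1])

    def backtrack(i, j):
        if i > j:
            return ''
        if i == j:
            return A[i]
        if A[i] == A[j]:
            return A[i] + backtrack(i + 1, j - 1) + A[j]
        if T[i + 1][j] >= T[i][j - 1]:
            return backtrack(i + 1, j)
        return backtrack(i, j - 1)

    return T[0][n - 1], backtrack(0, n - 1)

print(lps('CGFAFC'))   # (5, 'CFAFC')
print(lps('CAAGCA'))   # (4, 'CAAC')

The two calls reproduce the odd and even length cases of Figure 6.15.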

6.3 Problems on Combinatorics


Many problems in combinatorics involve the tabulation method, and a triangular table
is often the natural form for them. Manipulating triangular tables to solve such problems
is presented in this section.

6.3.1 Binomial Coefficient


As an example of memoization in action, consider the problem of computing the binomial
coefficient, C(n, k), often denoted as (n k) and read “n choose k.” It counts the number of
ways to choose k items from n distinct elements without repetition, and it is one of the most
pervasive problems in combinatorics. Here is one formal definition of the problem:

Problem 6.9. Binomial coefficient (BNC)


Input: n and k ∈ Z
Output: C(n, k) = |X| where
        X = {(x1, · · · , xn) | Σ_{i=1}^{n} xi = k where xi = 0 or 1}

Another well known definition of the binomial coefficient is as follows:

C(n, k) = n! / (k!(n − k)!)       (6.16)

It can be defined recursively:

C(n, k) =
    0                                   if k < 0 or k > n
    1                                   if k = 0 or k = n
    C(n − 1, k − 1) + C(n − 1, k)       otherwise       (6.17)

Although the recurrence relation of C(n, k) in eqn (6.17) is well known and the problem
itself may be defined in recursion, deriving a recurrence relation is key to successful algorithm
design using strong inductive programming and a memoization technique. Many other
problems in combinatorics may not be defined recursively. Hence, it is very important to
practice deriving a recurrence relation.

[All C(5, 3) = 10 possible exams over the topics {I, D, G, S, M}, partitioned into those
that include topic I, leaving a choice of 2 more topics from {D, G, S, M}, i.e., C(4, 2),
and those that exclude topic I, leaving a choice of 3 topics from {D, G, S, M}, i.e., C(4, 3).]

Figure 6.16: Deriving a recurrence relation for binomial coefficient

A toy example often provides great insight. Suppose that three different questions will
appear on the midterm out of five topics: inductive programming (I), divide and conquer
(D), greedy algorithm (G), strong inductive programming (S), and memoization (M). The
question is how many different kinds of exams are possible where the order of questions
does not matter. Here n = 5 and k = 3 and the problem is to find C(5, 3). Figure 6.16
lists all 10 possible exams. Some exams contain inductive programming (I) and others
do not. Thus, the list can be partitioned into two lists. Since inductive

programming (I) is selected in the first list, a task for the exam designer is to select two
more questions out of four remaining topics, which is C(4, 2). Since inductive programming
(I) is not selected in the second partition list, the task is to select all three questions out
of four remaining topics, which is C(4, 3). Based on this toy example observation, one may
surmise the recurrence relation of C(n, k) in eqn (6.17). Now, it must be validated with a
formal proof, as in Theorem 6.3.

Theorem 6.3. The recurrence relation of C(n, k) in eqn (6.17) correctly produces
C(n, k) = n! / (k!(n − k)!).

Proof.
C(n − 1, k − 1) + C(n − 1, k)
    = (n − 1)! / ((k − 1)!(n − k)!) + (n − 1)! / (k!(n − k − 1)!)
    = (k(n − 1)! + (n − k)(n − 1)!) / (k!(n − k)!)
    = n(n − 1)! / (k!(n − k)!)
    = n! / (k!(n − k)!) = C(n, k)       □

The above proof is a succinct one. One possible full lengthy proof is using two dimen-
sional strong induction. Here, the key idea in two dimensional strong inductive proof shall
be utilized to design algorithms though.
If the recurrence relation of C(n, k) in eqn (6.17) is used to solve the problem naı̈vely,
its computational time complexity is exponential, as depicted as a massive recursion tree
with tremendous redundant sub-trees in Figure 6.17 (a) for C(5, 2). A succinct recursion
tree after eliminating the redundant sub-trees, as shown in Figure 6.17 (b), is possible with a
memoization technique, as stated in Algorithm 6.22.

C(5,2) C(5,2)

C(4,1) C(4,2) C(4,1) C(4,2)

C(3,0) C(3,1) C(3,1) C(3,2) C(3,0) C(3,1) C(3,1) C(3,2)

C(2,0) C(2,1) C(2,0) C(2,1) C(2,1) C(2,2) C(2,0) C(2,1) C(2,1) C(2,2)

C(1,0)C(1,1) C(1,0) C(1,1) C(1,0) C(1,1) C(1,0) C(1,1)

(a) naı̈ve recursion tree (b) Memoization tree

Figure 6.17: Binomial coefficient recursion trees



Algorithm 6.22. C(n, k) by memoization

Declare a global (n × k) table LT whose values are nil initially.

C(n, k) =
    0                                               if k < 0 or k > n
    1                                               if k = 0 or k = n
    LT[n][k] = C(n − 1, k − 1) + C(n − 1, k)        if LT[n][k] = nil
    LT[n][k]                                        if LT[n][k] ≠ nil

Both computational time and space complexities of Algorithm 6.22 are Θ(kn).

1                                     k→ 1
1 1                                n     1  1
1 2 1                              ↓     1  2  1
1 3 3 1                                  1  3  3  1
1 4 6 4 1                                1  4  6  4  1
1 5 10 10 5 1                            1  5 10 10  5  1
1 6 15 20 15 6 1                         1  6 15 20 15  6  1
1 7 21 35 35 21 7 1                      1  7 21 35 35 21  7  1
(a) Pascal’s Triangle                 (b) Left-aligned look-up table

j→  0  1   2   3    4                 j→ 0       1       2       3       4
i   1  1   1   1    1                 0  (0, 0)  (1, 1)  (2, 2)  (3, 3)  (4, 4)
↓   1  2   3   4    5                 1  (1, 0)  (2, 1)  (3, 2)  (4, 3)  (5, 4)
    1  3   6  10   15                 2  (2, 0)  (3, 1)  (4, 2)  (5, 3)  (6, 4)
    1  4  10  20   35                 3  (3, 0)  (4, 1)  (5, 2)  (6, 3)  (7, 4)
    1  5  15  35   70                 4  (4, 0)  (5, 1)  (6, 2)  (7, 3)  (8, 4)
    1  6  21  56  126                 5  (5, 0)  (6, 1)  (7, 2)  (8, 3)  (9, 4)
    1  7  28  84  210                 6  (6, 0)  (7, 1)  (8, 2)  (9, 3)  (10, 4)
(c) Left-rotated look-up table        (d) Left-rotated look-up table indices

Figure 6.18: Pascal’s Triangle and its rectangular table representations

One easy way to represent Pascal’s triangle in Figure 6.18 (a) is the left-aligned table
given in Figure 6.18 (b). The memoization Algorithm 6.22 utilizes the left-aligned (n × k)
table and computes only highlighted cells. The empty cells in upper right corner and cells in
the lower left corner are not necessary. To utilize the table more effectively, the left-rotated
table in Figure 6.18 (c) can be used. The following relationships between table indices and
(n, k) can be derived based on the observation in Figure 6.18 (d):

C(n, k) = LT (n − k, k) where n ≥ k ≥ 0 (6.18)


LT (i, j) = C(i + j, j) (6.19)

Hence, LT(n, k) in the memoization Algorithm 6.22 can be replaced with LT(n − k, k) to
utilize the space slightly more effectively. Here, an ((n − k) × k) table suffices instead of
an (n × k) table.

Algorithm 6.23. C(n, k) by memoization

Declare a global ((n − k) × k) table LT whose values are nil initially.

C(n, k) =
    0                                                      if k < 0 or k > n
    1                                                      if k = 0 or k = n
    LT[n − k][k] = C(n − 1, k − 1) + C(n − 1, k)           if LT[n − k][k] = nil
    LT[n − k][k]                                           if LT[n − k][k] ≠ nil

Both computational time and space complexities of Algorithm 6.23 are O(kn), or specifically
Θ(k(n − k)).
Next, a two dimensional strong inductive programming algorithm can be devised by
filling the left-rotated look-up table in Figure 6.18 (c) sequentially from top row to bottom
row and left to right. A pseudo code is stated as follows:

Algorithm 6.24. Dynamic binomial coefficient

nchoosek(n, k)
Declare a table T0∼n−k,0∼k . . . . . . . . . . . . . . . . . . . . . . . 1
for j = 0 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [0][j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = 1 to n − k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
T [i][0] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
T [i][j] = T [i][j − 1] + T [i − 1][j] . . . . . . . . . . . . . . 7
return T [n − k][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Table 6.1: Number of steps comparison of three algorithms for C(20, k)

C(20, k)      value       naïve recursion   memoization   2D str. ind. prog.
C(20, 1) 20 39 39 19
C(20, 2) 190 379 73 36
C(20, 3) 1140 2279 103 51
C(20, 4) 4845 9689 129 64
C(20, 5) 15504 31007 151 75
C(20, 6) 38760 77519 169 84
C(20, 7) 77520 155039 183 91
C(20, 8) 125970 251939 193 96
C(20, 9) 167960 335919 199 99
C(20, 10) 184756 369511 201 100
C(20, 11) 167960 335919 199 99
C(20, 12) 125970 251939 193 96
C(20, 13) 77520 155039 183 91
C(20, 14) 38760 77519 169 84
C(20, 15) 15504 31007 151 75
C(20, 16) 4845 9689 129 64
C(20, 17) 1140 2279 103 51
C(20, 18) 190 379 73 36
C(20, 19) 20 39 39 19

Both computational time and space complexities of Algorithm 6.24 are O(kn), or specifi-
cally Θ(k(n−k)). Table 6.1 shows the number of necessary steps to compute C(20, k) for the
naı̈ve recursion in eqn (6.17), the memoization Algorithm 6.23, and the 2D strong inductive
programming Algorithm 6.24. The 2D strong inductive programming Algorithm 6.24 is the
best among the three versions presented thus far for the binomial coefficient Problem 6.9.
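A direct Python transcription of Algorithm 6.24 follows, using the left-rotated table of Figure 6.18 (c); it assumes 0 ≤ k ≤ n, and nchoosek follows the text's function name.

def nchoosek(n, k):
    """Binomial coefficient via the left-rotated look-up table of
    Figure 6.18 (c) (Algorithm 6.24 sketch); assumes 0 <= k <= n."""
    T = [[1] * (k + 1) for _ in range(n - k + 1)]   # row 0 and column 0 are all 1
    for i in range(1, n - k + 1):
        for j in range(1, k + 1):
            T[i][j] = T[i][j - 1] + T[i - 1][j]     # LT(i, j) = C(i + j, j)
    return T[n - k][k]

print(nchoosek(20, 10))  # 184756, matching Table 6.1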

6.3.2 Lucas Sequence Coefficient


Recall the nth Lucas Sequence Problem 5.10 on page 250. The first nine Lucas sequences are
listed as formulas in Figure 5.21 (a) on page 250. The nth Lucas Sequence can be expressed
as an extended formula in eqn (6.20), so that the formula has exactly n terms with respective
coefficients L(n, k):

LUS(n, p, q) = Σ_{k=1}^{n} L(n, k) p^(n−k) q^⌊k/2⌋       (6.20)

Figure 6.19 (a) shows Lucas sequences in the extended formula for n = 1 ∼ 8.

n   LUS(n, p, q)
1   1p^0 q^0
2   1p^1 q^0 + 0p^0 q^1
3   1p^2 q^0 + 0p^1 q^1 − 1p^0 q^1
4   1p^3 q^0 + 0p^2 q^1 − 2p^1 q^1 + 0p^0 q^2
5   1p^4 q^0 + 0p^3 q^1 − 3p^2 q^1 + 0p^1 q^2 + 1p^0 q^2
6   1p^5 q^0 + 0p^4 q^1 − 4p^3 q^1 + 0p^2 q^2 + 3p^1 q^2 + 0p^0 q^3
7   1p^6 q^0 + 0p^5 q^1 − 5p^4 q^1 + 0p^3 q^2 + 6p^2 q^2 + 0p^1 q^3 − 1p^0 q^3
8   1p^7 q^0 + 0p^6 q^1 − 6p^5 q^1 + 0p^4 q^2 + 10p^3 q^2 + 0p^2 q^3 − 4p^1 q^3 + 0p^0 q^4

(a) Lucas sequence for n = 1 ∼ 8
1
L(1,1)
1 0
1 0 −1
1 0 −2 0
1 0 −3 0 1
1 0 −4 0 3 0
1 0 −5 0 6 0 −1
1 0 −6 0 10 0 −4 0
1 0 −7 0 15 0 −10 0 1
1 0 −8 0 21 0 −20 0 5 0

(b) Lucas’ triangle

Figure 6.19: Lucas sequence and Lucas coefficient.

Let L(n, k) be the coefficient for the kth term in the nth Lucas Sequence. Figure 6.19 (b)
shows the Lucas’ triangle for L(n, k). The problem of finding the Lucas Sequence Coefficient,
or simply LSC, is defined recursively as follows:
Problem 6.10. Lucas Sequence Coefficient
Input: n and k ∈ Z
Output: L(n, k) in eqn (6.21)

L(7,5) = 6 L(7,5) = 6
=3 = −3 =3 = −3

L(6,5) L(5,3) L(6,5) L(5,3)

=1 =−2 =−2 =1 =1 =−2 =−2 =1


L(5,5) L(4,3) L(4,3) L(3,1) L(5,5) L(4,3) L(4,3) L(3,1)

=0 =−1 =−1 =1 =−1 =1 =0 =−1 =−1 =1


L(4,5) L(3,3) L(3,3) L(2,1) L(3,3) L(2,1) L(4,5) L(3,3) L(3,3) L(2,1)

=0 =1 =0 =1 =0 =1 =0 =1
L(2,3) L(1,1) L(2,3) L(1,1) L(2,3) L(1,1) L(2,3) L(1,1)

(a) naı̈ve recursion tree (b) memoization recursion tree

Figure 6.20: Recursion trees for L(7, 5)


1
 if k = 1 ∧ n > 0
L(n, k) = 0 if n ≤ 0 ∨ k ≤ 0 ∨ k > n (6.21)

L(n − 1, k) − L(n − 2, k − 2) otherwise

Figure 6.20 (a) shows the full recursion tree when the naïve recursive algorithm stated
in eqn (6.21) is used to compute L(7, 5) = 6. Nineteen recursive calls are made, with many
redundant sub-trees. To eliminate the redundant sub-trees, an algorithm based on the
memoization method is stated as follows:

Algorithm 6.25. Lucas Sequence Coefficient, L(n, k) by memoization

T1∼n−k+1,1∼k = nil declared globally

LSC(n, k)
if k < 1 ∨ k > n ∨ n ≤ 0, return 0 . . . . . . . . . . . . . . . . . . . . . 1
if k = 1 ∧ n > 0, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if T [n − k + 1][k] = nil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [n − k + 1][k] = LSC(n − 1, k) − LSC(n − 2, k − 2) . . 4
return T [n − k + 1][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Only 13 recursive calls are made by the memoization Algorithm 6.25 to compute L(7, 5),
as illustrated in Figure 6.20 (b). Both computational time and space complexities are Θ(kn)
or specifically Θ(k(n − k)).
It should be noted that the top most cell’s index is (1, 1) in Lucas’ triangle, whereas it
was (0, 0) in Pascal’s triangle. Based on the recurrence relation in eqn (6.21), algorithms
based on the two dimensional strong inductive programming paradigm can be devised as
follows:

j→                                      j→
i   1  0  −1  0   1                     i    1   1    1    1    1
↓   1  0  −2  0   3                     ↓    0   0    0    0    0
    1  0  −3  0   6                         −1  −2   −3   −4   −5
    1  0  −4  0  10                          0   0    0    0    0
    1  0  −5  0  15                          1   3    6   10   15
    1  0  −6  0  21                          0   0    0    0    0
    1  0  −7  0  28                         −1  −4  −10  −20  −35

(a) The SW (7 × 5) table for L(11, 5)   (b) The SE (7 × 5) table for L(11, 7)
    Algorithm 6.26 illustration             Algorithm 6.27 illustration

Figure 6.21: Tables for LSC.

Algorithm 6.26. Dynamic Lucas Sequence Coefficient with a SW table.


LSC(n, k)
Declare a table T1∼n−k+1,1∼k . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T [1][1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [1][2] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 3 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
T [1][j] = −1 × T [1][j − 2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
for i = 2 ∼ n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
T [i][1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
T [i][2] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
for j = 3 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
T [i][j] = T [i − 1][j] − T [i][j − 2] . . . . . . . . . . . . . . . . . . . . 10
return T [n − k + 1][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Algorithm 6.26 uses the left-rotated table, which grows in the south-west direction, as
shown in Figure 6.21 (a). Algorithm 6.26 assumes that n ≥ k and k > 2 for simplicity’s sake.
Both computational time and space complexities of Algorithm 6.26 are Θ(kn). The pseudo
code for utilizing the south-east direction table, shown in Figure 6.21 (b), is stated as
follows:
Algorithm 6.27. Dynamic Lucas Sequence Coefficient with a SE table.
LSC(n, k)
Declare a table T1∼k,1∼n−k+1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for j = 1 ∼ n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [1][j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [2][j] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 3 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
T [i][1] = −1 × T [i − 2][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
for j = 2 ∼ n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
T [i][j] = T [i][j − 1] − T [i − 2][j] . . . . . . . . . . . . . . . . . . . . . 8
return T [k][n − k + 1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Algorithm 6.27 also assumes that n ≥ k and k > 2 for simplicity’s sake. While both
Algorithms 6.26 and 6.27 compute every cell in the table, the memoization Algorithm 6.25
computes only the highlighted cells in Figure 6.21. The memoization Algorithm 6.25
is practically better than the strong inductive programming algorithms for this
particular problem, though the asymptotic computational complexities are the same.
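For comparison, a memoized Python sketch of eqn (6.21) is given below, with functools.lru_cache playing the role of the global table in Algorithm 6.25; lsc is an illustrative name.

from functools import lru_cache

@lru_cache(maxsize=None)
def lsc(n, k):
    """Lucas sequence coefficient L(n, k) of eqn (6.21) (memoized sketch)."""
    if n <= 0 or k <= 0 or k > n:
        return 0
    if k == 1:
        return 1
    return lsc(n - 1, k) - lsc(n - 2, k - 2)

print(lsc(7, 5))                           # 6, as in Figure 6.20
print([lsc(8, k) for k in range(1, 9)])    # [1, 0, -6, 0, 10, 0, -4, 0], row 8 of Lucas' triangle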

6.3.3 Integer Partition

[All 15 partitions of n = 7, grouped by the number of parts k: P(7, 1) through P(7, 7)]

(a) P (7, k) for k = 1 ∼ 7

1 1
2 1 1
3 1 1 1
4 1 2 1 1
5 1 2 2 1 1
6 1 3 3 2 1 1
7 1 3 4 3 2 1 1
8 1 4 5 5 3 2 1 1

(b) Integer Partition triangle

Figure 6.22: Integer partition p(n, k)

Partitioning a positive integer n into k parts is an important combinatorial problem.


A positive integer n can be represented by ordered sums of positive integer terms. For
example, if n = 7, there are 15 distinct ways, as enumerated in Figure 6.22. Let P (n, k)
be the set of exactly k ordered partitions of an integer n. Let p(n, k) be the cardinality
of partitions of n into k parts, i.e., p(n, k) = |P (n, k)|. As depicted in Figure 6.22, p(n, k)
can be analogized to the number of ways of distributing n unlabeled balls into k unlabeled
urns, where no urn is empty [162]. The problem of counting ways of partitioning a positive
integer n into exactly k parts, or IPE in short, is formulated as follows:
Problem 6.11. Integer partition, p(n, k), into exactly k parts

Input: n and k ∈ Z+
Output: p(n, k) = |X| where
        X = {(x1, · · · , xk) | Σ_{i=1}^{k} xi = n where each xi ≥ 1 is an integer and ∀(i, j) if i < j, xi ≥ xj}

Partitions of 10 into exactly 4 parts that contain a ‘1’:
(7 1 1 1), (6 2 1 1), (5 3 1 1), (5 2 2 1), (4 4 1 1), (4 3 2 1), (3 3 3 1)
    (removing one ‘1’ yields all partitions of 9 into exactly 3 parts: p(9, 3) = 7)

Partitions of 10 into exactly 4 parts that contain no ‘1’:
(4 2 2 2), (3 3 2 2)
    (subtracting 1 from every part yields all partitions of 6 into exactly 4 parts: p(6, 4) = 2)

(a) p(10, 4) = p(9, 3) + p(6, 4) = 9

(b) p(7, 2) = p(6, 1) + p(5, 2) = 3          (c) p(7, 3) = p(6, 2) + p(4, 3) = 4

Figure 6.23: Deriving a recurrence relation for integer partition

A recurrence relation of p(n, k), given in [162, p. 65], is as follows:

p(n, k) =
    0                                   if k < 1 or k > n
    1                                   if k = 1 or k = n
    p(n − 1, k − 1) + p(n − k, k)       otherwise       (6.22)

Figure 6.23 provides insight into the recurrence relation in eqn (6.22). Enumerating all
partitions of a sample example helps in deriving a recurrence relation. Figure 6.23 (a)
shows all partitions of P(10, 4). They can be partitioned into two groups: the first group
consists of all integer partitions that contain at least one ‘1’, and the other group consists
of all integer partitions that contain no ‘1’. The first group can be generated if all integer
partitions of n − 1 into exactly k − 1 parts are given. For the second group, if 1 is subtracted
from each part, it becomes all integer partitions of n − k into exactly k parts.
As shown in Figure 6.22 (b), the integer partition number forms a triangle, akin to
Pascal’s triangle, that will provide a two dimensional tabulation method to find the integer
partition number. It should be noted that the top most cell’s index is (1, 1) in the integer
partition triangle, whereas it was (0, 0) in Pascal’s triangle. Based on the recurrence relation
in eqn (6.22), an algorithm based on the two dimensional strong inductive programming
paradigm can be devised as follows:

Algorithm 6.28. Dynamic Integer partition number


IPE(n, k)
Declare a table T1∼n−k+1,1∼k . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [1][j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = 2 to n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
T [i][1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
for j = 2 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
if i ≤ j, T [i][j] = T [i][j − 1] . . . . . . . . . . . . . . . . . . . . . . . 7
if i > j, T [i][j] = T [i][j − 1] + T [i − j][j] . . . . . . . . . . 8
return T [n − k + 1][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Algorithm 6.28 uses the left-rotated table, as shown in Figure 6.24 (b). Both computa-
tional time and space complexities of Algorithm 6.28 are Θ(kn).

p(10,4) = 9
    p(9,3) = 7
        p(8,2) = 4
            p(7,1) = 1
            p(6,2) = 3
                p(5,1) = 1
                p(4,2) = 2
                    p(3,1) = 1
                    p(2,2) = 1
        p(6,3) = 3
            p(5,2) = 2
                p(4,1) = 1
                p(3,2) = 1
                    p(2,1) = 1
                    p(1,2) = 0
            p(3,3) = 1
    p(6,4) = 2
        p(5,3) = 2
            p(4,2) = 2        (the repeated sub-tree)
                p(3,1) = 1
                p(2,2) = 1
            p(2,3) = 0
        p(2,4) = 0

(a) Recursion tree for integer partition


i\j    1            2            3            4
1      p(1, 1) = 1  p(2, 2) = 1  p(3, 3) = 1  p(4, 4) = 1
2      p(2, 1) = 1  p(3, 2) = 1  p(4, 3) = 1  p(5, 4) = 1
3      p(3, 1) = 1  p(4, 2) = 2  p(5, 3) = 2  p(6, 4) = 2
4      p(4, 1) = 1  p(5, 2) = 2  p(6, 3) = 3  p(7, 4) = 3
5      p(5, 1) = 1  p(6, 2) = 3  p(7, 3) = 4  p(8, 4) = 5
6      p(6, 1) = 1  p(7, 2) = 3  p(8, 3) = 5  p(9, 4) = 6
7      p(7, 1) = 1  p(8, 2) = 4  p(9, 3) = 7  p(10, 4) = 9
(b) Look-up table for integer partition

i\j    1  2  3  4
1      -  1  1  -
2      1  1  -  -
3      1  2  2  2
4      1  2  3  -
5      1  3  -  -
6      -  -  -  -
7      1  4  7  9
(c) Memoization (only the visited cells are computed)

Figure 6.24: Integer Partition number algorithms’ illustration

The integer partition number can be computed by the memoization technique. A pseudo
code based on the globally declared left-aligned table version is as follows:

Algorithm 6.29. p(n, k) by memoization I

T1∼n,1∼k = nil declared globally

p(n, k) =
    0                                               if k < 1 or k > n
    1                                               if k = 1 or k = n
    T[n][k] = p(n − 1, k − 1) + p(n − k, k)         if T[n][k] = nil
    T[n][k]                                         if T[n][k] ≠ nil

A slightly more space efficient version utilizes the left rotated table and it is stated below
in Algorithm 6.30. As provided in Figure 6.24 (b), it only requires a (n − k + 1) × k look-up
table and IPE(n, k) = T [n − k + 1][k].

Algorithm 6.30. p(n, k) by memoization II


T1∼n−k+1,1∼k = nil declared globally
IPE(n, k)
if k < 1 ∨ k > n, return 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if k = 1 ∨ k = n, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if T [n − k + 1][k] = nil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [n − k + 1][k] = IPE(n − 1, k − 1)+ IPE(n − k, k) . . . . 4
return T [n − k + 1][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Unlike the binomial coefficient Problem 6.9, the memoization Algorithm 6.30 is better
than the 2D strong inductive programming Algorithm 6.28, as not all cells in the look-
up table need to be computed. Only the highlighted cells are necessary, as depicted in
Figure 6.24 (c). Figure 6.24 (a) shows the full recursion tree for p(10, 4) by eqn (6.22), and
the repeated subtree is p(4, 2). The number of necessary recursive calls is 23 by the naı̈ve
recursion in eqn (6.22), whereas it is only 21 with the memoization Algorithm 6.30.
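A memoized Python sketch of eqn (6.22) follows; it mirrors Algorithm 6.30 except that lru_cache replaces the explicit rotated table, an implementation convenience rather than the text's method.

from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    """Number of partitions of n into exactly k parts, eqn (6.22)
    (memoized sketch of Algorithm 6.30 without the rotated table)."""
    if k < 1 or k > n:
        return 0
    if k == 1 or k == n:
        return 1
    return p(n - 1, k - 1) + p(n - k, k)

print(p(10, 4))                         # 9, as in Figure 6.24
print([p(8, k) for k in range(1, 9)])   # [1, 4, 5, 5, 3, 2, 1, 1], row 8 of Figure 6.22 (b)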

6.3.4 Twelve Fold Ways of Combinatorics


How many ways are there to distribute k tasks among n processors? This question can
be answered in twelve different ways. The answer depends on whether tasks and processors
are labeled (distinct) or unlabeled, whether idle processors are allowed or not, and whether
a processor is restricted to handle only one task (injective). A systematic classification of
these 12 related enumerative problems is known as the twelve fold way [162] and is listed in
Table 6.2.
Finding the table indices (i, j), i.e., the ith row and jth column, with regard to (n, k) can
be perplexing. Templates with indices in Table 6.32 on page 354 may be useful in designing
algorithms.

6.3.5 Integer Partition with at Most k Parts


Consider the problem of counting ways of partitioning a positive number n into at most
k parts, I(n, k), which is abbreviated to IPam. For example, (n = 5) can be represented in
at most (k = 3) parts in five different ways: {(5), (4 + 1), (3 + 2), (3 + 1 + 1), (2 + 2 + 1)}.
This problem can be interpreted as distributing n unlabeled processes into k unlabeled
processors, while idle processors are allowed. While no idle processor was allowed in p(n, k),
idle processors are allowed in I(n, k). The problem formulation is almost identical to that of

Table 6.2: Twelve fold ways of Combinatorics.

Balls   Urns   Any (0 ≤ |u|)                     Injective (|u| ≤ 1)             Surjective (1 ≤ |u|)
k L     n L    Sequencing:                       Permutation:                    Surjective sequence:
               n^k                               P(n, k) = n!/(n − k)!           n! S(k, n)
               POW (p. 49)                       KPN (p. 51)                     SSN (p. 587)
k U     n L    Multiset coefficient:             Binomial coefficient:           Surjective multiset coef.:
               M(n, k)                           C(n, k)                         C(k − 1, n − 1)
               MSC (p. 350)                      BNC (p. 319)                    SMSC (p. 351)
k L     n U    Set partition # with idle:        Pigeon hole trivia:             Set partition:
               Σ_{i=1}^{n} S(k, i)               1 if k ≤ n, 0 if k > n          S(k, n)
               SPam (p. 412)                                                     SNS (p. 347)
k U     n U    Integer partition # with idle:    Pigeon hole trivia:             Integer partition #:
               I(k, n) = Σ_{i=1}^{n} p(k, i)     1 if k ≤ n, 0 if k > n          p(k, n)
               IPam (p. 331)                                                     IPE (p. 326)

(L = labeled, U = unlabeled; |u| denotes the number of balls in an urn.)

Table 6.3: Popular combinatoric problems with their recurrence relations

Sym.        Problem                                          Recurrence relation
(n k)       Binomial coefficient (p. 319)                    C(n, k) = C(n − 1, k − 1) + C(n − 1, k)
{n k}       Stirling number of the 2nd kind (p. 347)         S(n, k) = S(n − 1, k − 1) + k S(n − 1, k)
[n k]       Stirling number of the 1st kind (p. 348)         S(n, k) = S(n − 1, k − 1) + (n − 1) S(n − 1, k)
⟨n k⟩       Eulerian number (p. 348)                         E(n, k) = (n − k)E(n − 1, k − 1) + (k + 1)E(n − 1, k)
⟨⟨n k⟩⟩     Eulerian number of the 2nd kind (p. 349)         E(n, k) = (2n − k − 1)E(n − 1, k − 1) + (k + 1)E(n − 1, k)
((n k))     Multiset coefficient (p. 350)                    M(n, k) = M(n, k − 1) + M(n − 1, k)
            Surjective multiset coefficient (p. 351)         M(n, k) = M(n, k − 1) + M(n − 1, k − 1)
            Integer partition, at most k parts (p. 331)      I(n, k) = I(n, k − 1) + I(n − k, k)
            Integer partition, exactly k parts (p. 326)      p(n, k) = p(n − 1, k − 1) + p(n − k, k)
            Lucas Sequence Coefficient (p. 323)              L(n, k) = L(n − 1, k) − L(n − 2, k − 2)
            Lucas Sequence II Coefficient (p. 352)           L(n, k) = L(n − 1, k) − L(n − 2, k − 2)

Partitions of 7 into at most 3 parts that contain a ‘0’ (an empty urn):
(7 0 0), (6 1 0), (5 2 0), (4 3 0)
    (dropping one ‘0’ yields all partitions counted by I(7, 2) = 4)

Partitions of 7 into at most 3 parts with no ‘0’ (no empty urn):
(5 1 1), (4 2 1), (3 3 1), (3 2 2)
    (subtracting 1 from every part yields (4 0 0), (3 1 0), (2 2 0), (2 1 1),
     i.e., all partitions counted by I(4, 3) = 4)

(a) I(n, k) = I(n, k − 1) + I(n − k, k): I(7, 3) = I(7, 2) + I(4, 3) = 8

(b) Balls and urns: I(7, 3) = I(7, 2) + I(4, 3)

(4 0 0 0 0 0 0)   (4 0 0 0 0 0)   (4 0 0 0 0)   (4 0 0 0)
(3 1 0 0 0 0 0)   (3 1 0 0 0 0)   (3 1 0 0 0)   (3 1 0 0)
(2 2 0 0 0 0 0)   (2 2 0 0 0 0)   (2 2 0 0 0)   (2 2 0 0)
(2 1 1 0 0 0 0)   (2 1 1 0 0 0)   (2 1 1 0 0)   (2 1 1 0)
(1 1 1 1 0 0 0)   (1 1 1 1 0 0)   (1 1 1 1 0)   (1 1 1 1)
I(4, 7) = 5     =  I(4, 6) = 5  =  I(4, 5) = 5 =  I(4, 4) = 5

(c) I(n, k) = I(n, n) if n < k

Figure 6.25: Deriving a recurrence relation for integer partition with at most k parts

the integer partitioning into exactly k parts Problem 6.11, previously defined on page 326,
except that each part is allowed to be zero, i.e., the bound constraint becomes 0 ≤ xi.

Problem 6.12. Integer partition number with at most k parts, I(n, k)

Input: n ∈ Z and k ∈ Z+
Output: I(n, k) = |X| where
        X = {⟨x1, · · · , xk⟩ | Σ_{i=1}^{k} xi = n where each xi ≥ 0 is an integer and ∀(i, j) if i < j, xi ≥ xj}

I(n, k) has the following recurrence relation:

I(n, k) =
    1                               if n = 0 ∨ k = 1
    I(n, n)                         if n < k
    I(n, k − 1) + I(n − k, k)       otherwise       (6.23)

Figure 6.25 provides insights into the recurrence relation in eqn (6.23). Enumerating
all partitions of a sample example helps in deriving a recurrence relation. Figure 6.25 (a)
shows all partitions of I(7, 3) = 8. They can be partitioned into two groups. The first group
consists of all integer partitions that contain at least one ‘0’ or empty urn. The other group
consists of all integer partitions that contain no ‘0’ or no empty urn. The first group can
be generated if all integer partitions of n into at most k − 1 parts are given. For the second
group, if a ball is removed from each urn, it becomes all integer partitions of n − k into at
most k parts. Hence, I(7, 3) = I(7, 2) + I(4, 3) = 4 + 4 = 8, as depicted in Figure 6.25 (b).
It should be noted that while p(n, k) = 0 when n < k, since partitioning n into exactly k
positive parts is impossible, I(n, k) remains well defined when n < k. Indeed, I(n, k) = I(n, n),
as illustrated in Figure 6.25 (c): I(4, 7) = I(4, 6) = I(4, 5) = I(4, 4) = 5.
An at-most-k integer partition triangle for I(n, k) is also possible, as shown in Figure 6.26 (a),
where the topmost cell's index is (1, 1). Based on the recurrence relation in eqn (6.23), an
algorithm based on the two dimensional strong inductive programming paradigm can be
devised as follows:

Algorithm 6.31. Dynamic integer partition number with at most k parts

IPam(n, k)
Declare a table T0∼n,1∼k . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [0][j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [1][j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
T [i][1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
for j = 2 to i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
T [i][j] = T [i][j − 1] + T [i − j][j] . . . . . . . . . . . . . . 8
for j = i + 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
T [i][j] = T [i][i] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
return T [n][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11

Algorithm 6.31 utilizes the left aligned table. Both computational time and space com-
plexities of Algorithm 6.31 are Θ(kn).
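To make the tabulation concrete, the following Java method is a minimal sketch of
Algorithm 6.31; the method name ipAtMost and the use of a long table are illustrative
choices, not part of the pseudo code above.

    // A sketch of Algorithm 6.31: bottom-up tabulation of I(n, k).
    static long ipAtMost(int n, int k) {
        long[][] T = new long[n + 1][k + 1];
        for (int j = 1; j <= k; j++) {                 // basis: I(0, j) = I(1, j) = 1
            T[0][j] = 1;
            if (n >= 1) T[1][j] = 1;
        }
        for (int i = 2; i <= n; i++) {
            T[i][1] = 1;                               // I(i, 1) = 1
            for (int j = 2; j <= Math.min(i, k); j++)
                T[i][j] = T[i][j - 1] + T[i - j][j];   // general case of eqn (6.23)
            for (int j = i + 1; j <= k; j++)
                T[i][j] = T[i][i];                     // I(i, j) = I(i, i) when i < j
        }
        return T[n][k];
    }

For instance, ipAtMost(7, 3) returns 8, matching I(7, 3) = 8 in Figure 6.25.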
Figure 6.26 (b) shows the naı̈ve recursion tree to compute the value of I(5, 3) using
eqn (6.23) and the number of recursive calls nrc(I(5, 3)) = 11. The memoization technique
can be very effective for this problem. A pseudo code based on the globally declared left-
aligned table version is as follows:

1
1 2
1 2 3
1 3 4 5
1 3 5 6 7
1 4 7 9 10 11
1 4 8 11 13 14 15
1 5 10 15 18 20 21 22
1 5 12 18 23 26 28 29 30
1 6 14 23 30 35 38 40 41 42

(a) At most k integer partition triangle, I(n, k)


(b) Naïve recursion tree for I(5, 3) = 5 with nrc(I(5, 3)) = 11 calls: I(5, 3) calls I(5, 2) = 3
    and I(2, 3) = 2; I(5, 2) calls I(5, 1) = 1 and I(3, 2) = 2; I(2, 3) calls I(2, 2) = 2;
    I(3, 2) calls I(3, 1) = 1 and I(1, 2) = 1; I(2, 2) calls I(2, 1) = 1 and I(0, 2) = 1; and
    I(1, 2) calls I(1, 1) = 1.

(c) Left aligned table for I(n, k):

     n\k    1   2   3   4   5   6   7   8
      0     1   1   1   1   1   1   1   1
      1     1   1   1   1   1   1   1   1
      2     1   2   2   2   2   2   2   2
      3     1   2   3   3   3   3   3   3
      4     1   3   4   5   5   5   5   5
      5     1   3   5   6   7   7   7   7
      6     1   4   7   9  10  11  11  11
      7     1   4   8  11  13  14  15  15
      8     1   5  10  15  18  20  21  22
      9     1   5  12  18  23  26  28  29
     10     1   6  14  23  30  35  38  40

Figure 6.26: Computing I(n, k) illustration

Algorithm 6.32. I(n, k) by memoization

T1∼n,1∼k = nil declared globally
IPam(n, k)
    if k = 1 ∨ n = 0, return 1 . . . . . . . . . . . . . . . . . . . . . . . . 1
    if n < k, return IPam(n, n) . . . . . . . . . . . . . . . . . . . . . . 2
    if T [n][k] = nil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
        T [n][k] = IPam(n, k − 1) + IPam(n − k, k) . . . . 4
    return T [n][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Note that the zero’s row in the left-aligned table, as shown in Figure 6.26 (c), may not
be explicitly stored. Only 45 cells are invoked and computed to compute I(10, 8) = 40 and
they are highlighted in Figure 6.26 (c).
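A corresponding Java sketch of the memoization Algorithm 6.32 is given below; since every
I(n, k) is positive, the value 0 is used here to mark an empty cell instead of nil, an
illustrative substitution.

    static long[][] memo;                              // memo[n][k]; 0 = not yet computed

    // A sketch of Algorithm 6.32: I(n, k) by memoization.
    static long ip(int n, int k) {
        if (k == 1 || n == 0) return 1;                // basis of eqn (6.23)
        if (n < k) return ip(n, n);                    // I(n, k) = I(n, n) when n < k
        if (memo[n][k] == 0)
            memo[n][k] = ip(n, k - 1) + ip(n - k, k);
        return memo[n][k];
    }
    // usage: memo = new long[11][9]; ip(10, 8) returns 40.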

6.4 Three Dimensional Tabulation



In this section, problems that require a three dimensional look-up table in order to
utilize strong inductive programming or the memoization method are introduced. They are
01-Knapsack with two constraints and bounded integer partition problems.

6.4.1 01-Knapsack with Two Constraints

[Figure: a locomotive pulling freight wagons, each labeled with its profit in M$, weight in
Mg, and length in m, e.g., (10M$, 12Mg, 20m), (1M$, 1Mg, 5m), and (8M$, 7Mg, 15m);
the train may carry at most 21Mg and 45m in total.]

Figure 6.27: Railway wagon example of the 01-knapsack with 2 constraints problem

Using a railway wagon example, suppose that a train has both a maximum weight limit and
a maximum total length. Unlike the previous 01-knapsack Problem 4.4 defined on page 163,
where there was only one weight constraint, there are two constraints here. One wishes to
maximize the total profit by selecting a subset of freight wagons while exceeding neither the
weight limit nor the length limit. For simplicity's sake, ignore the gaps between wagons when
computing the total length. This 01-knapsack with two constraints problem, or simply ZOK2,
can be defined as follows:

Problem 6.13. 01-Knapsack with two constraints

Input:  1. A1∼k , a list of k different items where each item is represented by its
           profit, weight, and length ai = (pi , wi , li ),
        2. wm , maximum weight, and
        3. lm , maximum length that a knapsack can hold.
Output: X = ⟨x1 , x2 , · · · , xk ⟩ such that

        maximize   Σ_{i=1}^{k} pi xi
        subject to Σ_{i=1}^{k} wi xi ≤ wm and Σ_{i=1}^{k} li xi ≤ lm        (6.24)
        where xi = 0 or 1

The following recurrence relation in eqn (6.25) can be derived:

Z2 (A1∼k , wm , lm ) =
        0                                              if wm ≤ 0 ∨ lm ≤ 0
        0                                              if k = 1 ∧ (wm < w1 ∨ lm < l1 )
        p1                                             if k = 1 ∧ (wm ≥ w1 ∧ lm ≥ l1 )        (6.25)
        Z2 (A1∼k−1 , wm , lm )                         if k > 1 ∧ (wm < wk ∨ lm < lk )
        max( Z2 (A1∼k−1 , wm , lm ),
             Z2 (A1∼k−1 , wm − wk , lm − lk ) + pk )   if k > 1 ∧ wm ≥ wk ∧ lm ≥ lk

This problem requires a series of tables, as shown in Figure 6.28, where k = 4 and A =
{(1, 1, 2), (4, 3, 1), (6, 5, 2), (8, 7, 3)}. One of the constraints can be fixed to compose a table;
the length constraint is the one fixed in each table of Figure 6.28. Values for cells in the tables
can be evaluated iteratively, starting from the basis table where lm = 1. Now, an algorithm
using three dimensional strong inductive programming can be derived as follows:

Algorithm 6.33. 3D dynamic 01-knapsack with 2 constraints

dynamic 01-knapsack(A1∼k , wm , lm )
    Declare a (lm × k × (wm + 1)) table T . . . . . . . . . . . . . . . . . . . 1
    for j = 0 ∼ wm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
        if j < w1 ∨ l1 > 1, T [1][1][j] = 0 . . . . . . . . . . . . . . . . . . . . 3
        else, T [1][1][j] = p1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
    for i = 2 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
        T [1][i][0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
        for j = 1 ∼ wm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
            if j < wi ∨ li > 1, T [1][i][j] = T [1][i − 1][j] . . . . . . . . 8
            else, T [1][i][j] = max(T [1][i − 1][j], pi ) . . . . . . . . . . . . 9
    for t = 2 ∼ lm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
        for j = 0 ∼ wm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
            if j < w1 ∨ l1 > t, T [t][1][j] = 0 . . . . . . . . . . . . . . . . . 12
            else, T [t][1][j] = p1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
        for i = 2 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
            T [t][i][0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
            for j = 1 ∼ wm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
                if j < wi ∨ t < li , T [t][i][j] = T [t][i − 1][j] . . . . . 17
                else, T [t][i][j] = max(T [t][i − 1][j],
                                T [t − li ][i − 1][j − wi ] + pi ) . . . . 18
    return T [lm ][k][wm ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

lm =1
k A1∼k (pk , wk , lk )\wm 0 1 2 3 4 5 6 7 8 9 10 11
1 {(1, 1, 2)} (1, 1, 2) 0 0 0 0 0 0 0 0 0 0 0 0
2 A1∼1 ∪ (4, 3, 1) 0 0 0 4 4 4 4 4 4 4 4 4
3 A1∼2 ∪ (6, 5, 2) 0 0 0 4 4 4 4 4 4 4 4 4
4 A1∼3 ∪ (8, 7, 3) 0 0 0 4 4 4 4 4 4 4 4 4
lm =2
k A1∼k (pk , wk , lk )\wm 0 1 2 3 4 5 6 7 8 9 10 11
1 {(1, 1, 2)} (1, 1, 2) 0 1 1 1 1 1 1 1 1 1 1 1
2 A1∼1 ∪ (4, 3, 1) 0 1 1 4 4 4 4 4 4 4 4 4
3 A1∼2 ∪ (6, 5, 2) 0 1 1 4 4 6 6 6 6 6 6 6
4 A1∼3 ∪ (8, 7, 3) 0 1 1 4 4 6 6 6 6 6 6 6
lm =3
k A1∼k (pk , wk , lk )\wm 0 1 2 3 4 5 6 7 8 9 10 11
1 {(1, 1, 2)} (1, 1, 2) 0 1 1 1 1 1 1 1 1 1 1 1
2 A1∼1 ∪ (4, 3, 1) 0 1 1 4 5 5 5 5 5 5 5 5
3 A1∼2 ∪ (6, 5, 2) 0 1 1 4 5 6 6 6 10 10 10 10
4 A1∼3 ∪ (8, 7, 3) 0 1 1 4 5 6 6 6 10 10 10 10
lm =4
k A1∼k (pk , wk , lk )\wm 0 1 2 3 4 5 6 7 8 9 10 11
1 {(1, 1, 2)} (1, 1, 2) 0 1 1 1 1 1 1 1 1 1 1 1
2 A1∼1 ∪ (4, 3, 1) 0 1 1 4 5 5 5 5 5 5 5 5
3 A1∼2 ∪ (6, 5, 2) 0 1 1 4 5 6 7 7 10 10 10 10
4 A1∼3 ∪ (8, 7, 3) 0 1 1 4 5 6 7 8 10 10 12 12
lm =5
k A1∼k (pk , wk , lk )\wm 0 1 2 3 4 5 6 7 8 9 10 11
1 {(1, 1, 2)} (1, 1, 2) 0 1 1 1 1 1 1 1 1 1 1 1
2 A1∼1 ∪ (4, 3, 1) 0 1 1 4 5 5 5 5 5 5 5 5
3 A1∼2 ∪ (6, 5, 2) 0 1 1 4 5 6 7 7 10 11 11 11
4 A1∼3 ∪ (8, 7, 3) 0 1 1 4 5 6 7 8 10 11 12 12
lm =6∼∞
k A1∼k (pk , wk , lk )\wm 0 1 2 3 4 5 6 7 8 9 10 11
1 {(1, 1, 2)} (1, 1, 2) 0 1 1 1 1 1 1 1 1 1 1 1
2 A1∼1 ∪ (4, 3, 1) 0 1 1 4 5 5 5 5 5 5 5 5
3 A1∼2 ∪ (6, 5, 2) 0 1 1 4 5 6 7 7 10 11 11 11
4 A1∼3 ∪ (8, 7, 3) 0 1 1 4 5 6 7 8 10 11 12 13

Figure 6.28: A series of tables by the length

Lines 2 ∼ 9 compute the first initial table, where lm = 1, and lines 10 ∼ 18 compute the
rest of the tables iteratively. Both the computational time and space complexities of the
strong inductive, or dynamic, programming Algorithm 6.33 are Θ(kwm lm ) if solutions of all
sub-problems are stored in a table. A memoization technique provides a much faster
algorithm. As indicated in Figure 6.28, only twelve highlighted cells are computed and stored
in a memoization table, instead of entire tables. Based on the recurrence relation in
eqn (6.25), the pseudo code for a memoization technique is as follows:


Algorithm 6.34. ZOK2-memoization

Declare a global three dimensional table, T1∼lm ,1∼k,0∼wm .
Call ZOK2(A1∼k , wm , lm ) initially.
ZOK2(A1∼k′ , wx , lx )
    if wx ≤ 0 ∨ lx ≤ 0, return 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
    if T [lx ][k′ ][wx ] = nil, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
        if k′ = 1 ∧ (wx < w1 ∨ lx < l1 ), T [lx ][k′ ][wx ] = 0 . . . . . . . . 3
        if k′ = 1 ∧ wx ≥ w1 ∧ lx ≥ l1 , T [lx ][k′ ][wx ] = p1 . . . . . . . . . 4
        if k′ > 1 ∧ (wx < wk′ ∨ lx < lk′ ), . . . . . . . . . . . . . . . . . . . . . . . 5
            T [lx ][k′ ][wx ] = ZOK2(A1∼k′ −1 , wx , lx ) . . . . . . . . . . . . . . 6
        if k′ > 1 ∧ wx ≥ wk′ ∧ lx ≥ lk′ , . . . . . . . . . . . . . . . . . . . . . . . . . 7
            T [lx ][k′ ][wx ] = max(ZOK2(A1∼k′ −1 , wx , lx ),
                    ZOK2(A1∼k′ −1 , wx − wk′ , lx − lk′ ) + pk′ ) . . . . . . 8
    return T [lx ][k′ ][wx ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

[Figure: the recursion tree rooted at Z2 (A1∼4 , 11, 5) = 12 over the item list
{(1, 1, 2), (4, 3, 1), (6, 5, 2), (8, 7, 3)}. The root branches into Z2 (A1∼3 , 4, 2) = 4, where item
(8, 7, 3) is taken (+8), and Z2 (A1∼3 , 11, 5) = 11, where it is skipped; the tree bottoms out at
the single-item calls Z2 (A1∼1 , ·, ·) with values 0 or 1.]

Figure 6.29: A recursion tree for the 01-knapsack with 2 constraints problem.

Figure 6.29 shows a recursion tree for both the naı̈ve recursive programming in eqn (6.25)
and the memoization Algorithm 6.34 to compute Z2 ({(1, 1, 2), (4, 3, 1), (6, 5, 2), (8, 7, 3)}, 11, 5).
There are exactly twelve nodes corresponding to the highlighted cells in Figure 6.28. The
bold nodes and paths in Figure 6.29 are the result of backtracking; items (8,7,3) and (4,3,1)
are selected.
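A minimal Java sketch of the memoization Algorithm 6.34 follows; the items are assumed
to be given as the parallel arrays p, w, and l (0-based), and the boxed Integer table uses
null in the role of nil. These naming choices are illustrative.

    static int[] p, w, l;                              // profits, weights, lengths of items 0..k-1
    static Integer[][][] T;                            // T[lx][kk][wx]; null = not yet computed

    // A sketch of Algorithm 6.34: best profit using the first kk items.
    static int zok2(int kk, int wx, int lx) {
        if (wx <= 0 || lx <= 0) return 0;
        if (T[lx][kk][wx] == null) {
            if (kk == 1)                               // basis cases of eqn (6.25)
                T[lx][kk][wx] = (wx >= w[0] && lx >= l[0]) ? p[0] : 0;
            else if (wx < w[kk - 1] || lx < l[kk - 1])
                T[lx][kk][wx] = zok2(kk - 1, wx, lx);  // item kk does not fit
            else                                       // skip item kk or take it
                T[lx][kk][wx] = Math.max(zok2(kk - 1, wx, lx),
                        zok2(kk - 1, wx - w[kk - 1], lx - l[kk - 1]) + p[kk - 1]);
        }
        return T[lx][kk][wx];
    }
    // usage: p = new int[]{1, 4, 6, 8}; w = new int[]{1, 3, 5, 7}; l = new int[]{2, 1, 2, 3};
    //        T = new Integer[6][5][12]; zok2(4, 11, 5) returns 12.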

6.4.2 Bounded Integer Partition


The integer partition p(n, k) Problem 6.11 was analogized as the number of ways to
distribute n unlabeled balls into k unlabeled urns, where no urn is empty, as depicted in
Figure 6.22 (a). Suppose each urn's capacity is b, i.e., no urn can hold more than b balls.
This problem of counting the ways of partitioning a positive integer n into exactly k parts
with an upper bound, or BIP in short, where no part can be greater than b nor equal to 0
(empty), is formulated as follows:

[Figure: ball-and-urn diagrams relating each bounded partition count to its two sub-counts:]

(a) p5 (7, 3) = p5 (6, 2) + p4 (4, 3) = 4        (b) p4 (10, 4) = p4 (9, 3) + p3 (6, 4) = 5
    p4 (7, 3) = p4 (6, 2) + p3 (4, 3) = 3            p3 (10, 4) = p3 (9, 3) + p2 (6, 4) = 2
    p3 (7, 3) = p3 (6, 2) + p2 (4, 3) = 2

Figure 6.30: Deriving a recurrence relation for bounded integer partition

Problem 6.14. Bounded Integer partition into exactly k parts, pb (n, k)

Input:  n, k, and b ∈ Z+
Output: pb (n, k) = |X| where

        X = {(x1 , · · · , xk ) | Σ_{i=1}^{k} xi = n where 1 ≤ xi ≤ b integer and ∀(i, j) if i < j, xi ≥ xj }

The ball and urn analogy is also helpful in deriving a recursive relation for the upper
bounded integer partition coefficient, as exemplified in Figure 6.30. The recurrence relation
is very similar to that of the regular integer partition coefficient in eqn (6.22). The set of all
possible ways to distribute n unlabeled balls into k urns whose capacity is b, Pb (n, k), can be
divided into two partitions. One part is obtained by adding an urn with one ball in it to each
member of Pb (n − 1, k − 1), and the other by adding one ball to each urn of each member of
Pb−1 (n − k, k). The only difference from eqn (6.22) is that the urns' bound must be b − 1 so
that it does not exceed the original bound when one ball is added. Thus, a simple recursive
formula for this upper bounded integer partition coefficient is given in eqn (6.26). Assume
that n > 0 and b > 0 for simplicity's sake.

        pb (n, k) = 0                                     if (k = 1 ∧ n > b) ∨ k < 1 ∨ k > n
                                                             ∨ (b = 1 ∧ k < n)
                  = 1                                     if (k = 1 ∧ n ≤ b) ∨ n = k           (6.26)
                  = pb (n − 1, k − 1) + pb−1 (n − k, k)   otherwise

This upper bound b can be considered as one more constraint, and a series of bounded
integer partition triangles can be generated according to the bound value b = 1 ∼ 8, as
shown in Figure 6.31. The basis triangle can first be computed and stored. Then, the next
triangle can be generated inductively. Hence, strong inductive programming with three
dimensional tables can be stated as follows:

[Figure: eight left-aligned triangles listing pb (n, k) for n = 1 ∼ 10 and k = 1 ∼ n, one per
bound value: (a) pb=1 (n, k), (b) pb=2 (n, k), (c) pb=3 (n, k), (d) pb=4 (n, k), (e) pb=5 (n, k),
(f) pb=6 (n, k), (g) pb=7 (n, k), and (h) pb=8 (n, k); the 20 cells invoked while computing
p8 (10, 4) = 9 by memoization are highlighted.]

Figure 6.31: Bounded integer partition tables

Algorithm 6.35. Dynamic Bounded Integer partition number

BIP(n, k, b)
    Declare a table T1∼b,1∼n−k+1,1∼k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
    for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
        T [1][1][j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
    for i = 2 to n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
        for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
            T [1][i][j] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
    for l = 2 to b . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
        for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
            T [l][1][j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
        for i = 2 to n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
            if i ≤ l, T [l][i][1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
            else, T [l][i][1] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
            for j = 2 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
                if i ≤ j, T [l][i][j] = T [l][i][j − 1] . . . . . . . . . . . . . . . . . . . 14
                else, T [l][i][j] = T [l][i][j − 1] + T [l − 1][i − j][j] . . . . . 15
    return T [b][n − k + 1][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Note that BIP(n, k, b) = pb (n, k) = T [b][n − k + 1][k], since Algorithm 6.35 uses the
left-rotated tables. Lines 2 ∼ 6 compute the basis table where b = 1. Lines 4 ∼ 6 may
be omitted if the table values are initialized to zero by default. Lines 7 ∼ 15 inductively
compute the next table b − 1 times. Both computational time and space complexities of the
naïve Algorithm 6.35 are Θ(bkn). Once again, the computational space complexity can be
reduced with data structures, to be further discussed in Chapter 7.
Next, the bounded integer partition coefficient can be computed by the memoization
technique. A pseudo code based on the globally declared left-rotated table version is as
follows:
Algorithm 6.36. BIP(n, k, b) by memoization

T1∼b,1∼n−k+1,1∼k = nil declared globally
BIP(n, k, b)
    if (k = 1 ∧ n > b) ∨ k < 1 ∨ k > n ∨ (b = 1 ∧ k < n), return 0 . . . . . . 1
    if (k = 1 ∧ n ≤ b) ∨ k = n, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
    if T [b][n − k + 1][k] = nil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
        T [b][n − k + 1][k] = BIP(n − 1, k − 1, b) + BIP(n − k, k, b − 1) . . . 4
    return T [b][n − k + 1][k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

For example, to find p8 (10, 4) = 9, only 20 cells are computed in the memoization
Algorithm 6.36, as highlighted in Figure 6.31, while 8 × 7 × 4 = 224 cells are computed in
the strong inductive programming Algorithm 6.35.
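The following Java method is a minimal sketch of the memoization Algorithm 6.36; for
clarity it indexes the memo table directly by (b, n, k) rather than by the book's left-rotated
(b, n − k + 1, k), at the cost of a slightly larger table.

    static Long[][][] memo;                            // memo[b][n][k]; null = not yet computed

    // A sketch of Algorithm 6.36: bounded integer partition pb(n, k).
    static long bip(int n, int k, int b) {
        if (k < 1 || k > n || (k == 1 && n > b) || (b == 1 && k < n))
            return 0;                                  // zero cases of eqn (6.26)
        if ((k == 1 && n <= b) || k == n)
            return 1;                                  // one cases of eqn (6.26)
        if (memo[b][n][k] == null)
            memo[b][n][k] = bip(n - 1, k - 1, b) + bip(n - k, k, b - 1);
        return memo[b][n][k];
    }
    // usage: memo = new Long[9][11][5]; bip(10, 4, 8) returns 9.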

6.5 Exercises
Q 6.1. Consider the following two dimensional array:
4 2 1 2
1 3 0 3
2 1 3 5
3 0 2 3

a). Find the prefix sums of the above two dimensional array.
b). Find the prefix products of the above two dimensional array.

c). Formulate the problem of finding the prefix products.

d). Derive a recurrence relation for finding the prefix products.

e). Devise a strong inductive programming algorithm for finding the prefix products.

Q 6.2. Consider the ways of stamping n amount Problem 6.2 defined on page 295.
Hint: The two dimensional strong inductive programming Algorithm 6.2 and the memoiza-
tion Algorithm 6.3 are stated on pages 297 and 298, respectively.

a). Illustrate the two dimensional strong inductive programming Algorithm 6.2, without
sorting A, where n = 11 and the set of amounts of stamps, A = {1, 3, 5, 7}.

b). Illustrate the memoization Algorithm 6.3, without sorting A, where n = 11 and the
set of amounts of stamps A = {1, 3, 5, 7}.

c). Illustrate the two dimensional strong inductive programming Algorithm 6.2, without
sorting A, where n = 10 and the set of amounts of stamps, A = {3, 5, 1}.

d). Illustrate the memoization Algorithm 6.3, without sorting A, where n = 10 and the
set of amounts of stamps A = {3, 5, 1}.

Q 6.3. Consider the postage stamp equality minimization Problem 4.2 defined on page 159.

a). Illustrate the two dimensional strong inductive programming Algorithm 6.4 stated on
page 300 where n = 11 and A = {4, 5}.

b). Illustrate the two dimensional strong inductive programming Algorithm 6.4 stated on
page 300 where n = 11 and A = {4, 5, 3}.

c). Highlight the cells by a backtracking method for the table provided in b).

d). Draw the full backtracking tree based on the table provided in b).

e). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in eqn (6.4).

f). Illustrate the two dimensional memoization algorithm devised in e), where n = 11 and
A = {1, 3, 5, 7}.

g). Provide the computational time and space complexities of the algorithm devised in e).

Q 6.4. Recall the 0-1 knapsack equality problem, considered as an exercise in Q 4.20 on
page 206.

a). Construct a two dimensional table containing all sub-solutions where n = 11 and
A = {(1, 1), (4, 3), (6, 5), (8, 7)}.

b). Derive a two dimensional higher order recurrence relation based on the table con-
structed in a).

c). Devise a two dimensional strong inductive programming algorithm.

d). Highlight the cells by a backtracking method in the table provided in a).

e). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in b).

f). Illustrate the two dimensional memoization algorithm devised in e) where n = 11 and
A = {(1, 1), (4, 3), (6, 5), (8, 7)}.

Q 6.5. Recall the 0-1 knapsack minimization problem, considered as an exercise Q 4.12 on
page 203. Each input element ai ∈ A1∼k consists of its cost and length: ai = (ci , li ). One
needs at least n length at the minimum cost. Each rod can be either selected or not, and
cannot be selected more than once.
[Figure: four rods labeled with (cost, length) pairs A = (C, L): (1, 1), (4, 3), (6, 5), and (8, 7).]

a). Construct a two dimensional table containing all sub-solutions where n = 11 and
A = {(1, 1), (4, 3), (6, 5), (8, 7)}.

b). Derive a two dimensional higher order recurrence relation based on the table con-
structed in a).

c). Devise a two dimensional strong inductive programming algorithm.

d). Highlight the cells by a backtracking method in the table provided in a).

e). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in b).

f). Illustrate the two dimensional memoization algorithm devised in e) where n = 11 and
A = {(1, 1), (4, 3), (6, 5), (8, 7)}.

Q 6.6. Recall the 0-1 knapsack equality minimization problem, considered as an exercise
in Q 4.21 on page 207.

a). Construct a two dimensional table containing all sub-solutions where n = 11 and
A = {(1, 1), (4, 3), (6, 5), (8, 7)}.
[Figure: four rods labeled with (cost, length) pairs A = (C, L): (1, 1), (4, 3), (6, 5), and (8, 7).]

b). Derive a two dimensional higher order recurrence relation based on the table con-
structed in a).

c). Devise a two dimensional strong inductive programming algorithm.



d). Highlight the cells by a backtracking method in the table provided in a).

e). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in b).

f). Illustrate the two dimensional memoization algorithm devised in e) where n = 11 and
A = {(1, 1), (4, 3), (6, 5), (8, 7)}.

Q 6.7. Consider the dynamic UB-knapsack Algorithm 6.9, stated on page 304, for the
unbounded Knapsack Problem 4.6 defined on page 167. Each element, ai ∈ A, consists of
its profit and weight: ai = (pi , wi ).

a). Illustrate Algorithm 6.9 where n = 10 and A = {(1, 1), (5, 4), (6, 5)}.

b). Illustrate Algorithm 6.9 where n = 10 and A = {(1, 1), (5, 4), (6, 5), (10, 8)}.

c). Highlight the cells by a backtracking method in the table provided in a) and b).

d). Draw the full recursion tree based on the table provided in a).

e). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in eqn (6.7).

f). Illustrate the two dimensional memoization algorithm devised in e) where n = 10 and
A = {(1, 1), (5, 4), (6, 5), (10, 8)}.

g). Draw the memoization tree based on the table provided in f).

h). Provide the computational time and space complexities of the algorithm devised in e).

Q 6.8. Consider the strong inductive programming Algorithm 6.11, stated on page 306 for
the subset sum positive Problem 6.3.

a). Illustrate Algorithm 6.11 where n = 12 and S = {5, 3, 7, 2} without sorting S.

b). Illustrate Algorithm 6.11 where n = 12 and S = {2, 3, 3, 5} without sorting S.

c). Highlight the cells by a backtracking method in the table provided in a).

d). Draw the full backtracking tree based on the table provided in a).

e). Devise a backtracking algorithm to find an actual X, rather than just a true or false
output.

f). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in eqn (6.8) given on page 306.

g). Illustrate the two dimensional memoization algorithm devised in f) where n = 12 and
S = {2, 3, 3, 5} without sorting S.

h). Provide the computational time and space complexities of the algorithm devised in f).

Q 6.9. Recall the subset sum maximization problem, considered as an exercise in Q 4.9 on
page 202.

a). Construct a two dimensional table containing all sub-solutions where n = 12 and
A = {2, 3, 5, 7}.
b). Derive a two dimensional higher order recurrence relation based on the table con-
structed in a).
c). Devise a two dimensional strong inductive programming algorithm.
d). Provide computational time and space complexities of the proposed algorithm in c).
e). Highlight the cells by a backtracking method in the table provided in a).
f). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in b).
g). Illustrate the two dimensional memoization algorithm devised in f) where n = 12 and
A = {2, 3, 5, 7}.

Q 6.10. Recall the subset sum minimization problem, considered as an exercise in Q 4.8
on page 202.

a). Construct a two dimensional table containing all sub-solutions where n = 12 and
A = {2, 3, 5, 7}.
b). Derive a two dimensional higher order recurrence relation based on the table con-
structed in a).
c). Devise a two dimensional strong inductive programming algorithm.
d). Provide computational time and space complexities of the proposed algorithm in c).
e). Highlight the cells by a backtracking method in the table provided in a).
f). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in b).
g). Illustrate the two dimensional memoization algorithm devised in f) where n = 12 and
A = {2, 3, 5, 7}.

Q 6.11. Consider the unbounded subset sum equality Problem 5.3 defined on page 225.

a). Construct a two dimensional table containing all sub-solutions where n = 12 and
A = {3, 5, 7, 11}.
b). Derive a two dimensional higher order recurrence relation based on the table con-
structed in a).
c). Devise a two dimensional strong inductive programming algorithm.
d). Provide computational time and space complexities of the proposed algorithm in c).
e). Highlight the cells by a backtracking method in the table provided in a).

Q 6.12. Given a set S of k positive numbers, S = {s1 , s2 , · · · , sk }, find a subset of positive
integers whose product is exactly n. This problem is the subset product equality of positive
numbers problem, or SPEp in short.

a). Formulate the problem.

b). Construct a two dimensional table containing all sub-solutions where n = 12 and
S = {2, 3, 4, 5}.

c). Derive a two dimensional higher order recurrence relation based on the table con-
structed in b).

d). Devise a two dimensional strong inductive programming algorithm.

e). Highlight the cells by a backtracking method in the table provided in b).

f). Provide computational time and space complexities of the proposed algorithm in d).

g). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in c).

h). Illustrate the two dimensional memoization algorithm devised in g) where n = 12 and
S = {2, 3, 4, 5}.

Q 6.13. Given a set S of k positive numbers, S = {s1 , s2 , · · · , sk }, find a subset of positive
integers such that the product of selected numbers is maximized but cannot be greater
than n. This problem is the subset product maximization of positive numbers problem, or
SPMp in short.

a). Formulate the problem.

b). Construct a two dimensional table containing all sub-solutions where n = 13 and
S = {2, 3, 4, 5}.

c). Derive a two dimensional higher order recurrence relation based on the table con-
structed in b).

d). Devise a two dimensional strong inductive programming algorithm.

e). Highlight the cells by a backtracking method in the table provided in b).

f). Provide computational time and space complexities of the proposed algorithm in d).

g). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in c).

h). Illustrate the two dimensional memoization algorithm devised in g) where n = 13 and
S = {2, 3, 4, 5}.

Q 6.14. Given a set S of k positive numbers, S = {s1 , s2 , · · · , sk }, find a subset of positive
integers such that the product of selected numbers is minimized but cannot be less than n.
This problem is the subset product minimization of positive numbers problem, or SPminp
in short.

a). Formulate the problem.

b). Construct a two dimensional table containing all sub-solutions where n = 13 and
S = {2, 3, 4, 5}.

c). Derive a two dimensional higher order recurrence relation based on the table con-
structed in b).

d). Devise a two dimensional strong inductive programming algorithm.

e). Highlight the cells by a backtracking method in the table provided in b).

f). Provide computational time and space complexities of the proposed algorithm in d).

g). Devise a memoization algorithm with a two dimensional table based on the two di-
mensional recurrence relation in c).

h). Illustrate the two dimensional memoization algorithm devised in g) where n = 13 and
S = {2, 3, 4, 5}.

Q 6.15. Consider two strings: A = ‘Aljabr’ and B = ‘Algebra’.

a). Find the longest common sub-sequence between them by using a two dimensional
strong inductive programming algorithm (dynamic programming).

b). Backtrack the table derived in a) and show the longest common sub-sequence assign-
ment.

c). Find the in-del edit distance between them by using a two dimensional strong inductive
programming algorithm (dynamic programming).

d). Backtrack the table derived in c) and show the in-del edit assignment.

e). Find the Levenshtein edit distance between them by using a two dimensional strong
inductive programming algorithm (dynamic programming).

f). Backtrack the table derived in e) and show the Levenshtein edit assignment.

Q 6.16. Consider two strings: A = ‘ACCTGAAGC’ and B = ‘CACTAGACTA’.

a). Find the longest common sub-sequence between them by using a two dimensional
strong inductive programming algorithm (dynamic programming).

b). Backtrack the table derived in a) and show the longest common sub-sequence assignment.

c). Find the in-del edit distance between them by using a two dimensional strong inductive
programming algorithm (dynamic programming).

d). Backtrack the table derived in c) and show the in-del edit assignment.

e). Find the Levenshtein edit distance between them by using a two dimensional strong
inductive programming algorithm (dynamic programming).

f). Backtrack the table derived in e) and show the Levenshtein edit assignment.

g). Find the longest palindrome sub-sequence of A by using a two dimensional strong
inductive programming algorithm (dynamic programming).

h). Find the longest palindrome sub-sequence of B by using a two dimensional strong
inductive programming algorithm (dynamic programming).

Q 6.17. Consider the problem of finding the set partition number, a.k.a., Stirling number
of the second kind, or simply SNS. (See [73, p257-279] for details about the number.) For
example, consider a set A = {a, b, c, d} where n = 4. All set partitions of A into (k = 3)
parts include

X ={{{a}, {b}, {c, d}}, {{a}, {c}, {b, d}}, {{a}, {d}, {b, c}},
{{b}, {c}, {a, d}}, {{b}, {d}, {a, c}}, {{c}, {d}, {a, b}}}

Therefore, S(4, 3) = |X| = 6. Its recurrence relation is defined in eqn (6.27).

        S(n, k) = 0                                     if k < 1 or k > n
                = 1                                     if k = 1 or k = n          (6.27)
                = S(n − 1, k − 1) + k·S(n − 1, k)       otherwise

a). Consider a set A = {a, b, c, d} where n = 4. When k = 2, it is called bi-partites, e.g.,
{{{a, b}, {c, d}}, {{b}, {a, c, d}}, · · · }. Generate all ways that partition the set A into
(k = 2) parts.
b). Formulate the set partition problem.
c). Find the value of S(5, 2) and the number of recursive calls when the naı̈ve recursive
algorithm in eqn (6.27) is used.
d). Devise a memoization algorithm to compute eqn (6.27).
e). Provide computational time and space complexities of the proposed algorithm in d).
f). Find the number of recursive calls necessary to compute S(5, 2) when a memoization
algorithm proposed in d) is used.
g). Devise a strong inductive programming (a.k.a. dynamic programming) algorithm to
compute eqn (6.27).
h). Provide computational time and space complexities of the proposed algorithm in g).
i). Demonstrate your algorithm provided in g) using S(5, 2).

Q 6.18. The Stirling number of the first kind, or simply SNF, is related to the number of k
necklaces one can make with n distinct stones. If there are (n = 4) kinds of stones, {Crystal,
Gold, Ruby, Sapphire}, there are eleven different ways to make (k = 2) necklaces.

[Figure: the eleven ways to arrange the four stones S(apphire), C(rystal), G(old), and
R(uby) into (k = 2) necklaces.]

To be more specific, it is the number of permutations with exactly k cycles. The bracket
notation S(n, k) = [n over k] is widely used, as opposed to the brace notation {n over k} of
the Stirling number of the second kind. (See [73, p259-267] for details about the number.)
The problem is formulated with its recursive definition.

Problem 6.15. Stirling number of first kind

Input:  n and k ∈ Z
Output:
        S(n, k) = 1                                           if k = n = 0
                = 0                                           if (k = 0 ∧ n > 0) ∨ k > n     (6.28)
                = S(n − 1, k − 1) + (n − 1)·S(n − 1, k)       if 0 < k < n

a). Find the value of S(5, 2) and the number of recursive calls when the naı̈ve recursive
algorithm is used for eqn (6.28).

b). Devise a memoization algorithm to compute eqn (6.28).

c). Provide computational time and space complexities of the proposed algorithm in b).

d). Find the number of recursive calls necessary to compute S(5, 2) when a memoization
algorithm proposed in b) is used.

e). Devise a strong inductive programming (a.k.a. dynamic programming) algorithm to
compute eqn (6.28).

f). Provide computational time and space complexities of the proposed algorithm in e).

g). Demonstrate your algorithm provided in e) using S(5, 2).

Q 6.19. Recall the number of ascents Problem 2.11, or simply NAS, defined on page 55.
Given a sequence A = ⟨1, · · · , n⟩, the number of permutation sequences of A that have
exactly k ascents is known as the Eulerian number, denoted as ⟨n over k⟩. Finding the
Eulerian number, E(n, k), is defined in a recurrence relation as follows:

Problem 6.16. Eulerian number

Input:  n and k ∈ Z
Output:
        E(n, k) = 0                                                if n < 0 ∨ (k ≥ n ∧ k ≠ 0)
                = 1                                                if k = 0 ∨ n = k = 0       (6.29)
                = (n − k)·E(n − 1, k − 1) + (k + 1)·E(n − 1, k)    otherwise

The following Figure may provide better insight for the recurrence relation in eqn (6.29).

[Figure: permutations grouped by their number of ascents: E(2, 0) = 1 {⟨2, 1⟩} and
E(2, 1) = 1 {⟨1, 2⟩}; E(3, 0) = 1, E(3, 1) = 4, and E(3, 2) = 1; E(4, 0) = 1, E(4, 1) = 11,
E(4, 2) = 11, and E(4, 3) = 1, with the corresponding permutations listed under each count.]

(See [73, p267-269] for details about the Eulerian number.)

a). Find the value of E(5, 2) and the number of recursive calls when the naı̈ve recursive
algorithm is used for eqn (6.29).

b). Devise a memoization algorithm to compute eqn (6.29).

c). Provide computational time and space complexities of the proposed algorithm in b).

d). Find the number of recursive calls necessary to compute E(5, 2) when a memoization
algorithm proposed in b) is used.

e). Devise a strong inductive programming (a.k.a. dynamic programming) algorithm to
compute eqn (6.29).

f). Provide computational time and space complexities of the proposed algorithm in e).

g). Demonstrate your algorithm provided in e) using E(5, 2).

Q 6.20. Recall the greater between elements sequence Problem 3.6, or simply GBW, defined
on page 121. Given a multiset, A = {1, 1, 2, 2, · · · , n, n}, where each k ∈ {1 ∼ n} appears
exactly twice, the number of valid GBW sequences drawn from A that have exactly k ascents
is known as the Eulerian number of the second kind, denoted as ⟨⟨n over k⟩⟩. Finding the
Eulerian number of the second kind, E2 (n, k), is defined in a recurrence relation as follows:

Problem 6.17. Eulerian numbers of the second kind

Input:  n and k ∈ Z
Output:
        E2 (n, k) = 0                                                  if n < 0 ∨ (k ≥ n ∧ k ≠ 0)
                  = 1                                                  if k = 0 ∨ n = k = 0     (6.30)
                  = (2n − k − 1)·E2 (n − 1, k − 1) + (k + 1)·E2 (n − 1, k)   otherwise

The following Figure may provide better insight for the recurrence relation in eqn (6.30).

[Figure: GBW sequences grouped by their number of ascents: E2 (1, 0) = 1 {⟨1, 1⟩} and
E2 (1, 1) = 0; E2 (2, 0) = 1 {⟨2, 2, 1, 1⟩}, E2 (2, 1) = 2 {⟨1, 1, 2, 2⟩, ⟨1, 2, 2, 1⟩}, and
E2 (2, 2) = 0; E2 (3, 0) = 1, E2 (3, 1) = 8, E2 (3, 2) = 6, and E2 (3, 3) = 0, with the
corresponding sequences listed under each count.]

(See [73, p270] for details about the Eulerian number of the second kind.)
a). Find the value of E2 (5, 2) and the number of recursive calls when the naı̈ve recursive
algorithm is used for eqn (6.30).
b). Devise a memoization algorithm to compute eqn (6.30).
c). Provide computational time and space complexities of the proposed algorithm in b).
d). Find the number of recursive calls necessary to compute E2 (5, 2) when a memoization
algorithm proposed in b) is used.
e). Devise a strong inductive programming (a.k.a. dynamic programming) algorithm to
compute eqn (6.30).
f). Provide computational time and space complexities of the proposed algorithm in e).
g). Demonstrate your algorithm provided in e) using E2 (5, 2).
Q 6.21. Suppose there are 5 kinds of distinct topics and 3 questions in an exam. In the
binomial coefficient Problem 6.9, (n over k), each topic can be either selected or not. If topics
can be selected more than once, this counting problem without regard to order and with
repetitions allowed is known as the multiset coefficient problem, or MCP in short, and defined
formally as follows:

Problem 6.18. Multiset coefficient

Input:  n and k ∈ Z+
Output: M (n, k) = ((n over k)) = |X| where X = {⟨x1 , · · · , xn ⟩ | Σ_{i=1}^{n} xi = k where 0 ≤ xi integer}

The multiset coefficient notation ((n over k)) is widely used, as opposed to the binomial
coefficient notation (n over k). Its recurrence relation is defined in eqn (6.31).

        M (n, k) = 1                                  if k = 0
                 = 0                                  if n = 0 and k ≠ n          (6.31)
                 = M (n, k − 1) + M (n − 1, k)        if n, k > 0

Note that k can be greater than n, e.g., ((3 over 5)) = 21. (See [100, 162] for details about
the multiset coefficient.)

a). Consider a set A = {a, b, c} where n = 3. When k = 3, the multisets include
{{a, a, a}, {a, a, b}, {a, a, c}, {a, b, b}, · · · }. Generate all multisets of size (k = 3). Note
that ((3 over 3)) = 10.
b). Find the value of M (4, 2) and the number of recursive calls when the naı̈ve recursive
algorithm is used for eqn (6.31).
c). Devise a memoization algorithm to compute eqn (6.31).
d). Provide computational time and space complexities of the proposed algorithm in c).
e). Find the number of recursive calls necessary to compute M (4, 2) when a memoization
algorithm proposed in c) is used.
f). Devise a strong inductive programming (a.k.a. dynamic programming) algorithm to
compute eqn (6.31).
g). Provide computational time and space complexities of the proposed algorithm in f).
h). Demonstrate your algorithm provided in f) using M (5, 7).

Q 6.22. Suppose there are n kinds of distinct topics and k questions in an exam. In the
multiset coefficient Problem 6.18, a topic can be selected more than once or may not be
selected. If each topic must be selected at least once, this problem becomes the surjective
multiset coefficient problem, or SMSC in short, and is defined as follows:

Problem 6.19. Surjective multiset coefficient

Input:  n and k ∈ Z+
Output: M(n, k) = |X| where X = {⟨x1 , · · · , xn ⟩ | Σ_{i=1}^{n} xi = k where 1 ≤ xi integer}

Its recurrence relation is defined in eqn (6.32).

        M(n, k) = 0                                   if n < 1 or n > k
                = 1                                   if n = 1 or n = k           (6.32)
                = M(n − 1, k − 1) + M(n, k − 1)       if 1 < n < k

Note that k must be equal to or greater than n. Otherwise, the output is simply 0.

a). Find the value of M(4, 6) and the number of recursive calls when the naı̈ve recursive
algorithm is used for eqn (6.32).
b). Devise a memoization algorithm to compute eqn (6.32).
c). Provide computational time and space complexities of the proposed algorithm in b).
d). Find the number of recursive calls necessary to compute M(4, 6) when a memoization
algorithm proposed in b) is used.
e). Devise a strong inductive programming (a.k.a. dynamic programming) algorithm to
compute eqn (6.32).
f). Provide computational time and space complexities of the proposed algorithm in e).

g). Demonstrate your algorithm provided in e) using M(5, 7).

Q 6.23. Recall the nth Lucas Sequence II Problem 5.11, or simply LUS2, defined on
page 250. The first nine Lucas sequence II numbers are listed as formulas in Figure 5.21 (b)
on page 250. The nth Lucas Sequence II can be expressed as an extended formula in
eqn (6.33), so that the formula has exactly n + 1 terms with respective coefficients, L(n, k).

        Σ_{k=0}^{n} L(n, k) p^(n−k) q^⌊k/2⌋                    (6.33)

The following figure provides Lucas sequence II in the extended formula for n = 0 ∼ 7.

 n   LUS2(n, p, q)
 0   2p⁰q⁰
 1   1p¹q⁰ + 0p⁰q⁰
 2   1p²q⁰ + 0p¹q⁰ − 2p⁰q¹
 3   1p³q⁰ + 0p²q⁰ − 3p¹q¹ + 0p⁰q¹
 4   1p⁴q⁰ + 0p³q⁰ − 4p²q¹ + 0p¹q¹ + 2p⁰q²
 5   1p⁵q⁰ + 0p⁴q⁰ − 5p³q¹ + 0p²q¹ + 5p¹q² + 0p⁰q²
 6   1p⁶q⁰ + 0p⁵q⁰ − 6p⁴q¹ + 0p³q¹ + 9p²q² + 0p¹q² − 2p⁰q³
 7   1p⁷q⁰ + 0p⁶q⁰ − 7p⁵q¹ + 0p⁴q¹ + 14p³q² + 0p²q² − 7p¹q³ + 0p⁰q³

L(n, k) forms Lucas' second triangle as follows, where the top entry is L(0, 0) = 2:

 2
 1  0
 1  0  −2
 1  0  −3  0
 1  0  −4  0   2
 1  0  −5  0   5  0
 1  0  −6  0   9  0   −2
 1  0  −7  0  14  0   −7   0
 1  0  −8  0  20  0  −16   0  2
 1  0  −9  0  27  0  −30   0  9  0

Let L(n, k) be the coefficient for the kth term in the nth Lucas Sequence II. The problem
of finding the nth Lucas Sequence II Coefficient is abbreviated as LSC2. L(n, k) has the
following recurrence relation:

        L(n, k) = 2                                   if n = 0 ∧ k = 0
                = 1                                   if n > 0 ∧ k = 0
                = 0                                   if n < 0 ∨ k < 0 ∨ k > n     (6.34)
                = L(n − 1, k) − L(n − 2, k − 2)       otherwise

a). Find the value of L(6, 4) and the number of recursive calls when the naı̈ve recursive
algorithm is used for eqn (6.34).

b). Devise a memoization algorithm to compute eqn (6.34).



c). Provide computational time and space complexities of the proposed algorithm in b).
d). Find the number of recursive calls necessary to compute L(6, 4) when a memoization
algorithm proposed in b) is used.
e). Devise a strong inductive programming (a.k.a. dynamic programming) algorithm to
compute eqn (6.34).
f). Provide computational time and space complexities of the proposed algorithm in e).
g). Demonstrate your algorithm provided in e) to compute L(6, 4).

Q 6.24. Consider the example in Figure 6.27 on page 334. Suppose that each wagon
contains recycled material represented by its cost, weight, and length. One wishes to select
a subset of wagons such that the total cost is minimized. There are constraints. First, the
total length of the train must be at least lm . Otherwise, the department of transportation
prohibits the operation. Second, the total recycled material weight must be at least wm .
a). Formulate the problem.
b). Derive a recurrence relation.
c). Devise a strong inductive programming algorithm. (Hint: 3D tabulation.)

d). Illustrate the algorithm devised in c) on a toy example where k = 4, wm = 11, lm = 4,
and A = {(1, 1, 2), (4, 3, 1), (6, 5, 2), (8, 7, 3)}, where each item is represented by its cost,
weight, and length: ai = (pi , wi , li ).
e). Devise a memoization algorithm.

f). Illustrate the algorithm devised in e) on the above toy example in d).
g). Draw the recursion tree to compute the above example and highlight the backtracking
result to find the actual items selected.
Q 6.25. Consider the integer partition into at most k parts Problem 6.12 defined on
page 331. If each part cannot be greater than an upper bound, b, this problem becomes the
bounded integer partition with at most k parts, Ib (n, k).

a). Formulate the problem.

b). Derive a recurrence relation.


c). Devise a strong inductive programming algorithm. (Hint: 3D tabulation.)

d). Illustrate the algorithm devised in c) to compute Ib=4 (10, 9) or Ib=8 (10, 12) with a
computer.

e). Devise a memoization algorithm.

f). Illustrate the algorithm devised in e) to compute Ib=4 (7, 6).

(0,0)

(1,0) (1,1)

(2,0) (2,1) (2,2)

(3,0) (3,1) (3,2) (3,3)

(4,0) (4,1) (4,2) (4,3) (4,4)

(5,0) (5,1) (5,2) (5,3) (5,4) (5,5)

(6,0) (6,1) (6,2) (6,3) (6,4) (6,5) (6,6)

(7,0) (7,1) (7,2) (7,3) (7,4) (7,5) (7,6) (7,7)

(a) (n, k) triangle whose root is (0, 0)


(b) South west direction table indices:
       0       1       2       3       4
 0   (0, 0)  (1, 1)  (2, 2)  (3, 3)  (4, 4)
 1   (1, 0)  (2, 1)  (3, 2)  (4, 3)  (5, 4)
 2   (2, 0)  (3, 1)  (4, 2)  (5, 3)  (6, 4)
 3   (3, 0)  (4, 1)  (5, 2)  (6, 3)  (7, 4)
 4   (4, 0)  (5, 1)  (6, 2)  (7, 3)  (8, 4)
 5   (5, 0)  (6, 1)  (7, 2)  (8, 3)  (9, 4)
 6   (6, 0)  (7, 1)  (8, 2)  (9, 3)  (10, 4)

(c) South east direction table indices:
       0       1       2       3       4
 0   (0, 0)  (1, 0)  (2, 0)  (3, 0)  (4, 0)
 1   (1, 1)  (2, 1)  (3, 1)  (4, 1)  (5, 1)
 2   (2, 2)  (3, 2)  (4, 2)  (5, 2)  (6, 2)
 3   (3, 3)  (4, 3)  (5, 3)  (6, 3)  (7, 3)
 4   (4, 4)  (5, 4)  (6, 4)  (7, 4)  (8, 4)
 5   (5, 5)  (6, 5)  (7, 5)  (8, 5)  (9, 5)
 6   (6, 6)  (7, 6)  (8, 6)  (9, 6)  (10, 6)

(1,1)

(2,1) (2,2)

(3,1) (3,2) (3,3)

(4,1) (4,2) (4,3) (4,4)

(5,1) (5,2) (5,3) (5,4) (5,5)

(6,1) (6,2) (6,3) (6,4) (6,5) (6,6)

(7,1) (7,2) (7,3) (7,4) (7,5) (7,6) (7,7)

(8,1) (8,2) (8,3) (8,4) (8,5) (8,6) (8,7) (8,8)

(d) (n, k) triangle whose root is (1, 1)


(e) South west direction table indices:
       1       2       3       4       5
 1   (1, 1)  (2, 2)  (3, 3)  (4, 4)  (5, 5)
 2   (2, 1)  (3, 2)  (4, 3)  (5, 4)  (6, 5)
 3   (3, 1)  (4, 2)  (5, 3)  (6, 4)  (7, 5)
 4   (4, 1)  (5, 2)  (6, 3)  (7, 4)  (8, 5)
 5   (5, 1)  (6, 2)  (7, 3)  (8, 4)  (9, 5)
 6   (6, 1)  (7, 2)  (8, 3)  (9, 4)  (10, 5)
 7   (7, 1)  (8, 2)  (9, 3)  (10, 4) (11, 5)

(f) South east direction table indices:
       1       2       3       4       5
 1   (1, 1)  (2, 1)  (3, 1)  (4, 1)  (5, 1)
 2   (2, 2)  (3, 2)  (4, 2)  (5, 2)  (6, 2)
 3   (3, 3)  (4, 3)  (5, 3)  (6, 3)  (7, 3)
 4   (4, 4)  (5, 4)  (6, 4)  (7, 4)  (8, 4)
 5   (5, 5)  (6, 5)  (7, 5)  (8, 5)  (9, 5)
 6   (6, 6)  (7, 6)  (8, 6)  (9, 6)  (10, 6)
 7   (7, 7)  (8, 7)  (9, 7)  (10, 7) (11, 7)

Figure 6.32: Templates for combinatoric problems with triangles.


Chapter 7

Stack and Queue

(a) stack as a magazine gun (LIFO) (b) queue as a machine gun (FIFO)

Figure 7.1: Metaphor for stack and queue

The most basic and pervasively used data structures are stacks and queues. Both support
inserting and removing an element from a set, but a stack removes the most recently inserted
element while a queue removes the first inserted element. As depicted in Figure 7.1 (a),
a stack is like a magazine gun, where the last inserted bullet is fired first. This is called
last-in, first-out, or simply LIFO, principle. A queue is like a machine gun, where the first
inserted bullet is fired first, as depicted in Figure 7.1 (b). This is called first-in, first-out, or
simply FIFO, principle.
Consider the famous 007 film where James Bond and Q appear. Q is a fictional character
who can understand, build, and invent a weapon. James Bond, however, does not pay much
attention when Q explains the details of his weapon and only knows its abstract features.
With only the abstract knowledge of the weapon, James Bond is an excellent problem solver
albeit he cannot build a weapon. Q is an excellent engineer who can build a weapon but
can hardly solve problems. It is clear that building a weapon and solving a problem using
the weapon are two different skills. When dealing with a data structure, it is important to
master it in two different aspects. First, a data structure must be treated as an abstract data
type, or simply ADT, so as to improve one's problem solving skills, just as in James Bond's
point of view. An ADT is a model in which a weapon is described abstractly by a list of
operations and certain necessary values, without knowing how it is constructed. The other
aspect is to learn how to construct a data structure efficiently.
Objectives of this chapter include understanding how stacks and queues are implemented,
their applications, and their usages in algorithm design. Readers must be able to implement
elementary data structures such as stack and queue using arrays and linked lists guaran-
teeing their operations are efficient, i.e., constant. This chapter presents the depth-first
pre-order, in-order, and post-order traversal of a binary tree using a stack. Readers must


be able to use a stack to evaluate postfix and prefix expressions, and convert from infix to
postfix. Various graph problems that can be solved by traversing a graph in breadth first
search (BFS) and depth first search (DFS) orders using a queue and stack, respectively, are
introduced. Finally, a circular array is introduced to save space for the many strong in-
ductive programming algorithms whose problems were expressed in higher order recurrence
relations, covered in Chapter 5. Similarly, a cylindrical array is introduced to save space
in the many two dimensional inductive programming algorithms covered in Chapter 6. A
jumping array is also introduced for the many divide and conquer memoization algorithms
in Chapter 5.

7.1 Stack
A stack is a container of objects that are inserted and removed according to the last-in-
first-out (LIFO) principle. Inserting and deleting an item are known as “pushing” onto the
stack and “popping” off the stack. An element can be inserted on the top of the stack and
deleted from the top of the stack. These rudimentary operations can be formally defined as
follows:
Problem 7.1. Push operation in Stack

Input:  A stack St1∼n , a current stack size n, a maximum stack size m,
        and an element to be inserted x
Output: St1∼n+1 such that x ∈ St1∼n+1      if n < m
        stack overflow error                otherwise (n = m)

The definition of the push operation in stack is nothing but that of a set inclusion
operation. The definition of the pop operation, however, requires an in-time function,
which provides the time when the item was pushed onto the stack. The in-time function is
introduced only to define the pop operation formally. The explicit in-time function is not
necessary during the implementation, as the argmax_{x∈St} in-time(x) can be implicitly
identified. The function argmax_{x∈St} in-time(x) returns the element most recently inserted
onto the stack.

Problem 7.2. Pop operation in Stack

Input:  A stack St1∼n
Output: y and St − {y} where y = argmax_{x∈St} in-time(x)     if n > 0
        stack empty error                                      otherwise (n = 0)

A stack can be implemented in many different ways, but we would like both push and pop
operations to be performed efficiently. A list in an array or a linked list with a restriction on
operations can be used to implement a stack where both operations are performed efficiently.
As long as the maximum size of a stack m is known for certain problems, an array can be
an excellent choice for a stack implementation. Since both insertion and deletion operations
at the end of the array take constant time, the rear of an array can be utilized as the top
of the stack as depicted in Figure 7.2 (b). Insertion and deletion operations at the end of
the array as push and pop operations guarantee constant computational time complexities
as long as the number of contents is within the maximum array size m. If the beginning of
an array were used as the top of a stack, it would result in inefficient linear time push and
7.1. STACK 357

              insert front   del front   insert rear   del rear
 Array            Θ(n)          Θ(n)        O(1)         O(1)
 Linked list      O(1)          O(1)        Θ(n)         Θ(n)

 (a) List operations on array and linked list

[Figure: (b) Array as a stack and (c) Linked list as a stack, each shown after the operations
push(I), push(F), pop(), and push(L) applied to a stack initially holding T and S; the rear
of the array and the front of the linked list serve as the top of the stack.]

Figure 7.2: Stack after push(I), push(F), pop(), and push(L) operations

pop operations, as indicated in Figure 7.2 (a). Pseudo codes for these operations are stated
below where A1∼m , m, and n are declared globally.

Algorithm 7.1. Stack push (Array)
Stack push(x)
    if n = m, return error . . . . . . . . 1
    else . . . . . . . . . . . . . . . . . . . . . . . . . 2
        n = n + 1 . . . . . . . . . . . . . . . . 3
        A[n] = x . . . . . . . . . . . . . . . . . 4
        return success . . . . . . . . . . . . 5

Algorithm 7.2. Stack pop (Array)
Stack pop()
    if n = 0, return error . . . . . . . . . 1
    else, . . . . . . . . . . . . . . . . . . . . . . . . 2
        n = n − 1 . . . . . . . . . . . . . . . . 3
        return A[n + 1] . . . . . . . . . . . 4

In a linked list version of a stack, the front of a linked list should be the top of the stack
in order for both rudimentary operations to have constant time complexities. The maximum
stack size m can be assumed to be ∞, or no limit in the linked list version of a stack. Pseudo
codes for these operations are stated below, where the top is declared globally.

Algorithm 7.3. Stack push (LL)
Stack push(x)
    Declare a node Z . . . . . . . . . . . . . 1
    Z.data = x . . . . . . . . . . . . . . . . . . 2
    Z.next = top . . . . . . . . . . . . . . . . 3
    top = Z . . . . . . . . . . . . . . . . . . . . . 4
    return success . . . . . . . . . . . . . . . 5

Algorithm 7.4. Stack pop (LL)
Stack pop()
    if top = null, return error . . . . 1
    else, . . . . . . . . . . . . . . . . . . . . . . . . 2
        y = top.data . . . . . . . . . . . . . 3
        top = top.next . . . . . . . . . . . 4
        return y . . . . . . . . . . . . . . . . . 5

Figure 7.2 (b) and (c) illustrate the stack after push(I), push(F), pop(), and push(L)
operations in a sequence on both the array and linked list versions, respectively. In this
section, various applications and problems that can easily be solved by a stack data structure
are presented.
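For instance, the array version of Algorithms 7.1 and 7.2 can be sketched in Java as follows;
the class name ArrayStack and the use of unchecked exceptions for the overflow and empty
errors are illustrative choices.

    // A minimal array-backed stack of integers following Algorithms 7.1 and 7.2.
    class ArrayStack {
        private final int[] a;                         // A[1..m] in the text, 0-based here
        private int n = 0;                             // current size; a[n - 1] is the top

        ArrayStack(int m) { a = new int[m]; }

        void push(int x) {                             // O(1): insert at the rear
            if (n == a.length) throw new RuntimeException("stack overflow");
            a[n++] = x;
        }

        int pop() {                                    // O(1): delete from the rear
            if (n == 0) throw new RuntimeException("stack empty");
            return a[--n];
        }
    }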

7.1.1 Balancing Parenthesis

import java.util.Stack;
class ReverseStr {
    public static void main(String args[]) {
        char[] str = {'k', 'c', 'a', 't', 's'};
        Stack<Character> stk = new Stack<Character>();
        for(int i = 0; i < str.length; i++) {
            stk.push(str[i]);       // push each character
        }
        for(int i = 0; i < str.length; i++) {
            str[i] = stk.pop();     // pop in reverse order
        }
    }
}

(a) A sample Java program


S = h{, (, [, ], ), {, [, ], {, }, (, ), (, ), {, (, [, ], ), }, (, ), {, [, ], (, ), }, }, }i
(b) A sequence of parenthesis symbols

Figure 7.3: Checking parenthesis balance of a program

For a practical application of a stack, consider a sample Java program code, shown in
Figure 7.3 (a). It contains several types of parentheses: (), [], and {}. These parenthe-
ses must be well balanced so that the program can be compiled. To make this checking
parenthesis balance problem simpler, consider the compact string of parentheses, as given
in Figure 7.3 (b), that omits all symbols except for parenthesis symbols. Valid sequences of
parentheses include {h(, [, ], )i, h{, [, ], (, )}i} as well as the one given in Figure 7.3 (b). Invalid
sequences of parentheses include {h(, ], [, }i, h{, ], [, {, )}i}. The following problem definition
utilizes the Context-Free Grammar (see [82, p 79] for more about Context-Free Grammar):
Problem 7.3. Checking parenthesis balance
Input: A string S1∼n where si ∈ Po = {‘(’, ‘{’, ‘[’} or Pc = {‘)’, ‘}’, ‘]’}
Output: T if S follows the context free grammar: S → SS | (S) | [S] | {S} | ε
        F otherwise
The problem output can be formulated by recursion as shown below and its parsing or
recursion tree is shown in Figure 7.4.

P-bal(Sb∼e) =
  T if e < b, i.e., S = ∅
  T if ∃j ∈ {b + 1 ∼ e} (P-match(sb, sj) ∧ P-bal(Sb+1∼j−1) ∧ P-bal(Sj+1∼e)) = T    (7.1)
  F if ∀j ∈ {b + 1 ∼ e} (P-match(sb, sj) ∧ P-bal(Sb+1∼j−1) ∧ P-bal(Sj+1∼e)) = F


[Parse tree for S1∼8 = ⟨{, (, [, ], ), {, }, }⟩: s1 and s8 enclose S2∼7 = ( [ ] ) { }, which splits into S2∼5 = ( [ ] ) and S6∼7 = { }.]
Figure 7.4: Balancing parenthesis parse tree

where P-match(x, y) is defined in eqn (7.2), and one may include other types of parenthesis
symbols.

P-match(x, y) = T if (x = ‘(’ ∧ y = ‘)’) ∨ (x = ‘{’ ∧ y = ‘}’) ∨ (x = ‘[’ ∧ y = ‘]’)    (7.2)
                F otherwise

The recursive algorithm in eqn (7.1) would take exponential time. The 2D strong inductive
programming technique introduced in Chapter 6 can solve the problem, but it would be an
awful waste of space. A stack data structure comes in very handy for tackling this problem.
Scanning from the beginning of the sequence, if the element is one of the open parentheses,
push it to the stack and if the element is one of the close parentheses, pop the top element in
the stack and check whether they match. Valid and invalid cases are illustrated in Figure 7.5
(a) and (b), respectively. When the end of the sequence is reached and the stack is empty,
the sequence is well balanced. A sequence is invalid if a popped element does not match with
the current element or if the stack is not empty when the end of the sequence is reached. A
pseudo code is stated as follows:
Algorithm 7.5. Checking parenthesis balance

isBalanced(S1∼n )
declare a stack T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if si ∈ {‘(’, ‘{’, ‘[’} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
push si to T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
else if si ∈ {‘)’, ‘}’, ‘]’} . . . . . . . . . . . . . . . . . . . . . . . . . .5
if T = ∅ ∨ P-match( pop(T ), si ) = F, . . . . . . . . . . . 6
return F . . . . . . . . . . . . . . . . . . . . . . . . . . 7
if T = ∅, return T; else return F . . . . . . . . . . . . . . 8

Both computational time and space complexities of Algorithm 7.5 are O(n), as each
element is either pushed or popped once. The stack size can reach n/2 in the worst case.
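
A direct Java transcription of Algorithm 7.5 might look as follows; it borrows java.util.ArrayDeque as the stack, and the helper pMatch mirrors eqn (7.2). The extra emptiness check before popping covers inputs that begin with a close symbol.

import java.util.ArrayDeque;
import java.util.Deque;

class ParenBalance {
    // P-match of eqn (7.2): does the open symbol x match the close symbol y?
    static boolean pMatch(char x, char y) {
        return (x == '(' && y == ')') || (x == '{' && y == '}')
                || (x == '[' && y == ']');
    }

    // Algorithm 7.5: scan S once, pushing opens and matching closes.
    static boolean isBalanced(String s) {
        Deque<Character> t = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '{' || c == '[') {
                t.push(c);
            } else if (c == ')' || c == '}' || c == ']') {
                if (t.isEmpty() || !pMatch(t.pop(), c)) return false;
            }
        }
        return t.isEmpty(); // leftover opens mean the string is unbalanced
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("{([]){}}"));   // true
        System.out.println(isBalanced("{[]()[{}}]")); // false
    }
}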
A slight modification to Algorithm 7.5 can implement the auto-indent. In most pseudo
codes given in this book, indent plays an important role in indicating the scopes of loops or
conditional statements. In most conventional programming languages, indent has no impact

[Stack snapshots for each push, pop, and match step.]

(a) a well balanced case: S = ⟨{, (, [, ], ), {, }, }⟩

(b) an unbalanced case: S = ⟨{, [, ], (, ), [, {, }, }, ]⟩

Figure 7.5: Checking parenthesis balance using a stack

in compiling a program; its purpose is mainly readability. A program is nothing but a string
of characters and can be written in a single line. When a program is written without any
indent or in poor indentation, auto-indent may improve readability. Note that the symbol
‘{’ indicates the beginning of the body of a class, loop, or conditional statement. The size
of the current stack determines how many indents should be applied.

7.1.2 Undo and Redo


As the simplest application of a stack, consider the ‘undo’ and ‘redo’ functionalities
provided in various software applications such as word processors, spreadsheets, presentations,
etc. These functionalities can be implemented using a slightly modified stack data
structure. To illustrate the role of a stack in undo and redo implementation, we shall consider
a web-browser application. Backward and forward functions on web-browsers are similar
concepts to undo and redo functions and are illustrated in Figure 7.6.
In order to implement these forward and backward functionalities, two variables (indices)
called current, c, and forward limit, f , are used. Initially, the stack of visited web-pages is
empty and, thus, both variables point to 0. When a user enters a URL or clicks
a hyper-link, both variables are incremented by one. The first two stacks in Figure 7.6 show the
initial web-page visits. Six web-pages in total have been visited in sequence. If
a user presses the backward button once, the web-browser shows page w5; the c variable
points to w5 while the f variable still points to w6. When the user presses the backward
button repeatedly, c gets decremented until it reaches index 0. When c reaches
0, the backward button should be disabled. It gets enabled again when c > 0. The forward button gets

[Stack snapshots for the steps: enter w5, enter w6, backward, backward, backward, forward, enter w7.]

Figure 7.6: Navigating web-sites using a web-browser with backward and forward buttons.

enabled and disabled when f > c and f = c, respectively. When f > c and a user enters a
URL or clicks a hyperlink, the variable c gets incremented by one, the variable f is set to the
same index as c, and then the forward button gets disabled. Note that the page w6 in the last
stack in Figure 7.6 may be called a zombie, as it can no longer be accessed.
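
A small sketch of this bookkeeping is given below, assuming a fixed-capacity page array together with the two indices c and f; the class and method names are illustrative, and stopping backward at the first page (rather than at index 0) is one possible design choice.

// A sketch of browser history with backward/forward, following the text:
// c points to the current page, f to the forward limit; pages beyond f
// become unreachable ("zombies") once a new page is entered.
class BrowserHistory {
    private final String[] pages = new String[100];
    private int c = 0, f = 0; // both point to 0 when no page is visited

    void enter(String url) {
        pages[++c] = url; // increment c and store the new page
        f = c;            // entering a page cuts off any forward pages
    }

    String backward() {
        if (c > 1) c--;   // design choice: stay on the earliest page
        return pages[c];
    }

    String forward() {
        if (c < f) c++;   // enabled only while f > c
        return pages[c];
    }

    public static void main(String[] args) {
        BrowserHistory h = new BrowserHistory();
        for (int i = 1; i <= 6; i++) h.enter("w" + i);
        h.backward(); h.backward(); h.backward(); // now at w3
        h.forward();                              // back to w4
        h.enter("w7");                            // w5 and w6 become zombies
        System.out.println(h.forward());          // stays at w7
    }
}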

7.1.3 Procedure Stack

public class Samplejava {
    public static int add(int a, int b) {
        return a + b;
    }
    public static int add2(int a, int b) {
        return a * add(a, b);
    }
    public static void main(String args[]) {
        System.out.println(add(2, 3));
        System.out.println(add2(2, 1));
    }
}

(a) A sample Java code with three procedures

(b) Procedure call tree: main calls add(2, 3), which returns 5, and add2(2, 1), which in turn calls add(2, 1); add(2, 1) returns 3 and add2(2, 1) returns 6.

(c) A stack of procedure calls: push main, push add, pop (returning 5), push add2, push add, pop (returning 3), pop (returning 6).

Figure 7.7: Procedure stack illustration

Another application of a stack is the procedure stack [44]. A procedure is often
referred to as a function or method in different programming languages. Calling a procedure
and returning from a procedure are closely related to the open and close symbols in the
parenthesis balance example. On return, control goes back to the most recent caller. A
procedure stack can be used to keep track of this.
Consider the sample Java code in Figure 7.7 (a), which consists of three methods (procedures).
First, the main method is called initially, and lines in the main method are executed

in order. Second, the ‘add’ method is invoked with two arguments and returns 5 as an
output. Next, the ‘add2’ method is invoked, but the ‘add’ method is invoked inside of the
‘add2’ method. The procedure call tree is depicted in Figure 7.7 (b). Traversing to a child
node and back to its parent corresponds to pushing onto and popping from a procedure stack,
respectively. The procedure stack is illustrated in Figure 7.7 (c) according to the execution
of the Java code in Figure 7.7 (a).

(a) a binary recursion tree: f3 = f2 + f1 and f2 = f1 + f0, with leaves f1 = 1 and f0 = 0

(b) recursive procedure stack: push f3, push f2, push f1, pop 1, push f0, pop 0, pop 1, push f1, pop 1, pop 2

Figure 7.8: A recursive procedure stack to find the third Fibonacci number.

An internal stack is used to evaluate recursive functions, as seen in Figure 2.1 on page 37.
Figure 7.8 (a) demonstrates the depth first traversal of the binary recursion tree for the
Fibonacci number Problem 5.8. The recursive procedure stack is illustrated in Figure 7.8
(b). The naı̈ve recursive algorithm in eqn (5.29) stated on page 246 takes exponential time,
as illustrated in Figure 7.8 (b).
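
To make the implicit procedure stack visible, the naïve recursion can be traced in a few lines of Java; the depth parameter used for indentation is purely illustrative.

// Naive recursive Fibonacci; every call occupies a frame on the runtime
// stack, mirroring the pushes and pops illustrated in Figure 7.8 (b).
class FibTrace {
    static int fib(int n, int depth) {
        System.out.println("  ".repeat(depth) + "push f" + n);
        int r = (n <= 1) ? n : fib(n - 1, depth + 1) + fib(n - 2, depth + 1);
        System.out.println("  ".repeat(depth) + "pop " + r);
        return r;
    }

    public static void main(String[] args) {
        System.out.println("f3 = " + fib(3, 0)); // f3 = 2
    }
}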

7.1.4 NFA Acceptance

Q\Σ   0        1
A     {C}      {B, D}
B     {B, D}   {A}
C     {A}      {D}
D     {}       {B, C}
starting state b = A
terminal states T = {D}

(a) State transition table    (b) State transition diagram

Figure 7.9: Non-deterministic finite state automaton (NFA)

In automata theory [82], a finite state machine (FSM) or finite state automaton (FSA) is
a model of computation that has a finite set of symbols, Σ, and a finite set of states,
Q, where one of the states is designated as the starting state and T , a subset of states, is
designated as the set of terminal or accepting states. For example, in Figure 7.9, Q = {A, B,
C, D}, Σ = {0, 1}, T = {D}, and A is the starting state.
A string of symbols drawn from Σ is accepted by an FSA if there exists a state transition
sequence corresponding to the string that ends with a terminal state. For example, the string
⟨1100⟩ is accepted by the FSA given in Figure 7.9 because there exists a sequence of states
that starts at the starting state and ends at one of the terminal states as follows:

A −1→ D −1→ B −0→ B −0→ D

Other strings that are accepted include {⟨1⟩, ⟨11⟩, ⟨111⟩, ⟨1111⟩, ⟨100⟩, ⟨001⟩, · · · } and
strings that are not accepted include {⟨0⟩, ⟨00⟩, ⟨000⟩, ⟨010⟩, ⟨011⟩, ⟨101⟩, · · · }.
An FSA is called a deterministic finite automaton (DFA) if each state has zero or one
transitional state for each symbol. Determining whether a string is acceptable by a given
DFA is straight-forward, as there is only one way to trace the sequence. An FSA is called
a nondeterministic finite automaton (NFA), introduced in [141], if multiple transitions are
allowed for some states corresponding to certain symbols. The FSA in Figure 7.9 is an NFA
because state A has two possible transitional states {B, D} when the symbol is 1. A table
of linked lists, NFA, can represent the state transition table given in Figure 7.9 (a). Let
NFAx,y be the linked list of states to which there are arcs whose label is y from state x. For
example, NFAB,0 = ⟨B, D⟩ and NFAD,1 = ⟨B, C⟩. NFAA,0 = ⟨C⟩ means that there is an
arc with label 0 from state A to state C.
The problem of determining whether a string is acceptable by a given NFA is defined as
follows:
Problem 7.4. NFA acceptance problem
Input: an NFA (Q, Σ, b, T ) and a string S1∼n where si ∈ Σ
Output: T if ∃R1∼n+1 such that r1 = b, rn+1 ∈ T , and ∀i ∈ {1, · · · , n}, ri+1 ∈ NFAri,si
        F otherwise

Determining whether a string of symbols is acceptable by an NFA requires finding a
valid sequence of states in transition. A stack data structure can be utilized to perform
a depth first traversal on the state transition diagram. When the stack size reaches the
input string size, n, and the state on the top of the stack is one of the terminal states, the
input string is accepted, as given in Figure 7.10 (a). If the entire depth first traversal is
performed without acceptance, the stack ends up empty and there exists no sequence of
states in transition to satisfy the input string. The size of the stack corresponds to the
position in the input string, and the maximum stack size is the same as the length of the
input string, n. This algorithm is known as a backtracking method. An equivalent recursive
programming algorithm, which utilizes the internal stack, is stated as follows:
Algorithm 7.6. NFA acceptance recursive algorithm (backtracking)

Let NFA and S1∼n be global and call NFAaccept(1, b) initially.

NFAaccept(p, c)
if p = n + 1 ∧ c ∈ T, return true . . . . . . . . . . . . . . 1
else if p = n + 1 ∧ c ∉ T, return false . . . . . . . . . . . 2
for i = 1 ∼ |NFAc,sp | . . . . . . . . . . . . . . . . . . . . 3
   if NFAaccept(p + 1, NFAc,sp [i]) = true, . . . . . . . . . . 4
      return true . . . . . . . . . . . . . . . . . . . . . . . 5
return false . . . . . . . . . . . . . . . . . . . . . . . . . . 6

The backtracking Algorithm 7.6 takes exponential time.
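
A compact Java sketch of this backtracking search is given below; the NFA of Figure 7.9 is hard-coded as a nested map, and the state and symbol encodings are illustrative choices.

import java.util.List;
import java.util.Map;
import java.util.Set;

// Backtracking NFA acceptance (Algorithm 7.6) on the NFA of Figure 7.9.
class NfaAccept {
    // delta.get(state).get(symbol) = list of possible next states
    static final Map<Character, Map<Character, List<Character>>> delta = Map.of(
            'A', Map.of('0', List.of('C'), '1', List.of('B', 'D')),
            'B', Map.of('0', List.of('B', 'D'), '1', List.of('A')),
            'C', Map.of('0', List.of('A'), '1', List.of('D')),
            'D', Map.of('0', List.of(), '1', List.of('B', 'C')));
    static final Set<Character> terminal = Set.of('D');

    // p is the 0-indexed position of the next symbol; c is the current state.
    static boolean accept(String s, int p, char c) {
        if (p == s.length()) return terminal.contains(c); // all symbols consumed
        for (char next : delta.get(c).get(s.charAt(p)))
            if (accept(s, p + 1, next)) return true;      // try each branch
        return false;                                     // backtrack
    }

    public static void main(String[] args) {
        System.out.println(accept("1100", 0, 'A')); // true
        System.out.println(accept("101", 0, 'A'));  // false
    }
}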

(a) an acceptable case: ⟨1 1 0 0⟩

(b) an unacceptable case: ⟨1 0 1⟩

Figure 7.10: Depth first traversal with a stack for the NFA acceptance problem.

7.2 Propositional Logic with Stacks


This section considers a stack as a propositional logic statement cruncher. Before em-
barking on evaluating propositional logic statements, it is necessary to review the propo-
sitional logic as a preamble. A compound statement is a logical statement where multiple
propositions (Boolean variables) are combined using some connectives, such as negation (¬),
conjunction (∧), disjunction (∨), etc., as provided in Table 7.1. The first and last columns in
Table 7.1 are (T) tautology, and (F) fallacy. Other obvious straight-forward columns include

Table 7.1: Truth table for binary operators


1 2 3 4 5 6 7 8
p q T p∨q q →p p p→q q p↔q p∧q
T T T T T T T T T T
T F T T T T F F F F
F T T T F F T T F F
F F T F T F T F T F
9 10 11 12 13 14 15 16
p q p↑q p⊕q ¬q p ∧ ¬q ¬p ¬p ∧ q p↓q F
T T F F F F F F F F
T F T T T T F F F F
F T T T F F T T F F
F F T F T F T F T F

4, 6, 11, and 14. There can be 10 different binary operators for the remaining columns, but
not all columns have respective connective symbols. Three connective symbols, ¬, ∧, and
∨, can express all columns.
Sometimes extraneous connective symbols, such as implication (→), biconditional (↔),
exclusive or, or simply XOR (⊕), NAND (↑), and NOR (↓), may be used in compound
statements. The symbols |, ↑, and ∧̃ are often used to denote NAND and called a Sheffer stroke,
named after Henry M. Sheffer for his work as in [156]. The symbols ↓ and ∨̄ are interchangeably
used to denote NOR and called a Peirce arrow, named after Charles Sanders Peirce for
his work as in [134], or a Quine dagger, named after Willard Van Orman Quine. In digital
logic, ⊙ denotes the XNOR; p ⊙ q ≡ ¬(p ⊕ q) ≡ p ↔ q. The symbol ‘≡’ means logical
equivalence. These five connectives are extraneous because they are logically equivalent to
ones expressed using only the three symbols ¬, ∧, and ∨, by eqns (7.3 ∼ 7.7).

p → q ≡ ¬p ∨ q                                (7.3)
p ↔ q ≡ p ⊙ q ≡ (¬p ∨ q) ∧ (p ∨ ¬q)           (7.4)
p ⊕ q ≡ (p ∨ q) ∧ ¬(p ∧ q)                    (7.5)
p ↑ q ≡ ¬(p ∧ q)                              (7.6)
p ↓ q ≡ ¬(p ∨ q)                              (7.7)

7.2.1 Infix, Prefix, and Postfix Notations


There are three different ways to express arithmetic and logical statements depending on
the location of the operator: infix, prefix, and postfix notations. Perhaps the most familiar
one to readers is the infix notation, such as ‘1 + 2’ or ‘P ∧ Q’ where the operator comes
between two operands (arguments), as it is widely taught in grade school. However,
evaluating a statement in the infix notation according to the rules learned in early grade
school is computationally more expensive than the methods presented in the subsequent sections.
The infix notation requires understanding basic rules, such as the precedence, association,
and inner most parenthesis first rules. According to the precedence rule, p ∨ q ∧ r = p ∨ (q ∧ r)
but ≠ (p ∨ q) ∧ r. All binary logical operators have the left to right association rule; p → q ≠
q → p. The inner most parenthesis is processed first: p → (q ∨ r) ≠ (p → q) ∨ r. Alternative
forms such as postfix and prefix notations require no knowledge of these complicated rules.
Albeit the prefix notation of an expression such as ‘+ 1 2’ may seem awkward at first
glance, it should be very familiar to programmers, as most programming languages use the
prefix notation for functions and methods, e.g., ‘add(1,2)’. In most programming languages,
the function, procedure, or method name appears before its arguments. It is presumably
because in many natural languages, such as English, the verb comes before arguments when
commanding: “Add one and two!” In Korean and Japanese languages, verbs come at the
end, which corresponds to the postfix notation of expression, such as ‘1 2 +’. The infix
notation seems to be the most unnatural notation in terms of linguistics.
The prefix notation is often called Polish notation (PN) [76], normal Polish notation
(NPN), forward Polish notation (FPN), Łukasiewicz notation, or Warsaw notation. The
postfix notation is called reverse Polish notation (RPN), as in [17]. One of the advantages
of prefix and postfix notations is that they are parenthesis free. While parentheses are
inevitable in the infix notation, they can be omitted in prefix and postfix notations.
Consider the compound statement represented in an expression tree in Figure 7.11 (a).
All internal nodes contain a certain operator (connective symbols) and all leaf nodes contain

Infix: (¬p → q) ∧ (r ∨ ¬q)
Prefix: ∧ → ¬ p q ∨ r ¬ q
Postfix: p ¬ q → r q ¬ ∨ ∧

(a) An expression tree and three different notations

(b) Depth first traversal with a stack.

Figure 7.11: Propositional expression: “(¬p → q) ∧ (r ∨ ¬q)”

a Boolean variable. It should be noted that all connective symbols are binary operators
except for the negation ‘¬’ symbol. The negation is a unary operator having only a right
sub-tree and a null left child.
Expressions in infix, prefix, and postfix notations can be obtained by traversing the
expression tree in in-order, pre-order, and post-order depth first traversal manner, respectively.
The recursive algorithms for the depth first traversal pre-order and post-order are stated in
Algorithms 7.7 and 7.8, respectively. They are called with the root node initially. The value
of the currently visited node is printed before and after visiting children nodes in the DFT
pre-order and post-order, respectively.

Algorithm 7.7. DFT prefix expression       Algorithm 7.8. DFT postfix expression
DFT prefix(x)                              DFT postfix(x)
print(x.val) . . . . . . . . . . . . 1     DFT postfix(x.left) . . . . . . . . 1
DFT prefix(x.left) . . . . . . . . . 2     DFT postfix(x.right) . . . . . . . 2
DFT prefix(x.right) . . . . . . . . 3      print(x.val) . . . . . . . . . . . 3
return . . . . . . . . . . . . . . . 4     return . . . . . . . . . . . . . . 4

Figure 7.11 (b) depicts the internal stack used to perform the depth first traversal. If
the currently visited node is either a leaf node or has both of its children already visited,
then pop it from the stack. Otherwise, push the current node onto the stack. This
simple algorithm allows us to generate the expressions in prefix and postfix notations. To
get the expression in the prefix notation, whenever the ‘push’ operation is performed, the
item to be pushed is written. To get the expression in the postfix notation, whenever the
‘pop’ operation is performed, the popped item is written.
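
The two recursive traversals of Algorithms 7.7 and 7.8 translate directly into Java, as in the following sketch; the Node class is an illustrative encoding of the expression tree of Figure 7.11 (a), with a null left child under each negation.

// Recursive prefix (pre-order) and postfix (post-order) printing of an
// expression tree (Algorithms 7.7 and 7.8).
class ExprTree {
    static class Node {
        String val;
        Node left, right;
        Node(String val, Node left, Node right) {
            this.val = val; this.left = left; this.right = right;
        }
    }

    static void prefix(Node x) {
        if (x == null) return;
        System.out.print(x.val + " "); // print before visiting children
        prefix(x.left);
        prefix(x.right);
    }

    static void postfix(Node x) {
        if (x == null) return;
        postfix(x.left);
        postfix(x.right);
        System.out.print(x.val + " "); // print after visiting children
    }

    public static void main(String[] args) {
        // (¬p → q) ∧ (r ∨ ¬q); '¬' has only a right child
        Node tree = new Node("∧",
                new Node("→", new Node("¬", null, new Node("p", null, null)),
                        new Node("q", null, null)),
                new Node("∨", new Node("r", null, null),
                        new Node("¬", null, new Node("q", null, null))));
        prefix(tree);  System.out.println(); // ∧ → ¬ p q ∨ r ¬ q
        postfix(tree); System.out.println(); // p ¬ q → r q ¬ ∨ ∧
    }
}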

The DFT in-order can be simply stated by placing the ‘print(x.val)’ line after visiting
the left child node and before visiting the right child node. However, a parenthesis should
be considered in generating the infix notation. A parenthesis may be necessary if the child
node is a compound statement that does not start with the negation. A parenthesis is not
necessary if the child node is a propositional variable or a negation. A slight modification
is necessary to get the expression in the infix notation from an expression tree. A pseudo
code is stated as follows:

Algorithm 7.9. DFT infix expression

DFT infix(x)
if (x.left.val ∈ {binary op}), print(‘(’) . . . . . . . . . . . . . . . . 1
DFT infix(x.left) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if (x.left.val ∈ {binary op}), print(‘)’) . . . . . . . . . . . . . . . . 3
print(x.val) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
if (x.right.val ∈ {binary op}), print(‘(’) . . . . . . . . . . . . . . . 5
DFT infix(x.right) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
if (x.right.val ∈ {binary op}), print(‘)’) . . . . . . . . . . . . . . . 7
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

If the currently visited node is either a leaf node or has only a left child that has already been
visited, then pop it from the stack. Otherwise, push the current node onto the stack, as
depicted in Figure 7.11 (b).

7.2.2 Evaluating Postfix Expression


A postfix expression is a series of operators and operands where the operator comes
after its operands. Evaluating a postfix expression is surprisingly simple: scan left
to right using a stack. When an operand is seen, it is pushed onto a stack. When a unary
operator, such as the negation, ‘¬’, is seen, the operand is popped from the stack, negated,
and pushed onto the stack. If the operator is a binary operator, two operands are popped,
evaluated, and then pushed onto the stack.

Algorithm 7.10. Postfix expression evaluation

eval postfix(A1∼n )
declare a stack, Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai ∈ Operand = {T, F } . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
push(ai ) to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if op = ai ∈ binary operator = {∨, ∧, →, ←, ↔, ⊕, ↑, ↓} . . . . . 5
fpi = pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
spi = pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
push(spi op fpi) to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
if ai ∈ unary operator = ‘{¬}’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
fpi = pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
push(¬ fpi) to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
return pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Figure 7.12: Evaluating a postfix expression: “p ¬ q → r q ¬ ∨ ∧” for (p = T, q = F, r = F)

Figure 7.12 demonstrates Algorithm 7.10 evaluating the postfix expression “p ¬ q → r q ¬ ∨ ∧”
for (p = T, q = F, and r = F).
The computational time complexity of Algorithm 7.10 is clearly O(n) as the input string
is scanned only once and each element is processed as either push or pop operations, which
take constant time. Careful attention is necessary in line 8 of Algorithm 7.10, especially
when a binary operator is non-commutative, i.e., the order of the operands matters. Two
operands must be popped if the operator is binary. The second popped item (SPI) must
be followed by the first popped item (FPI). One easy way to remember the order is “A spy
(SPI) is followed by an FBI (FPI) agent.”
It should also be noted that the stack must contain exactly one item when the end of
the input is reached. If the size of the stack is not one, the input string is clearly not a valid
postfix expression. The pseudo code in Algorithm 7.10 assumes that the input string is a
valid postfix expression.
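
A Java sketch of Algorithm 7.10 for Boolean postfix expressions is given below; only ¬, ∧, ∨, and → are wired in, the single-character token encoding is illustrative, and the SPI/FPI popping order of line 8 is marked in a comment.

import java.util.ArrayDeque;
import java.util.Deque;

// Postfix evaluation (Algorithm 7.10) over Boolean tokens.
// Tokens: 'T', 'F', '!', '&', '|', '>' standing for ¬, ∧, ∨, →.
class PostfixEval {
    static boolean apply(char op, boolean spi, boolean fpi) {
        switch (op) {
            case '&': return spi && fpi;
            case '|': return spi || fpi;
            case '>': return !spi || fpi; // spi → fpi ≡ ¬spi ∨ fpi
            default: throw new IllegalArgumentException("op " + op);
        }
    }

    static boolean evalPostfix(String a) {
        Deque<Boolean> t = new ArrayDeque<>();
        for (char c : a.toCharArray()) {
            if (c == 'T' || c == 'F') t.push(c == 'T');   // operand
            else if (c == '!') t.push(!t.pop());          // unary ¬
            else {                                        // binary operator
                boolean fpi = t.pop(), spi = t.pop();     // SPI is followed by FPI
                t.push(apply(c, spi, fpi));
            }
        }
        return t.pop(); // exactly one item remains for a valid expression
    }

    public static void main(String[] args) {
        // p ¬ q → r q ¬ ∨ ∧ with p = T, q = F, r = F evaluates to T
        System.out.println(evalPostfix("T!F>FF!|&"));
    }
}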

7.2.3 Prefix Evaluation


A prefix expression is a series of operators and operands where the operator comes before
its operands. Evaluating a prefix expression is as easy as evaluating a postfix expression.
However, with a prefix evaluation, one scans right to left using a stack instead of scanning
left to right. The input string must be read in the reverse order. Starting from the end
toward the beginning of the input string, when an operand is seen, it is pushed onto a
stack. When a unary operator such as the negation, ‘¬’, is seen, the operand is popped
from the stack, negated, and pushed onto the stack. If the operator is a binary operator,
two operands are popped, evaluated, and then pushed onto the stack.

Algorithm 7.11. Prefix expression evaluation

eval prefix(A1∼n )
declare a stack, Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = n down to 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai ∈ Operand = {T, F } . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
push (ai ) to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if op = ai ∈ binary operator = {∨, ∧, →, ←, ↔, ⊕, ↑, ↓} . . . . . 5
fpi = pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
spi = pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
push(fpi op spi) to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

if ai ∈ unary operator = ‘{¬}’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


fpi = pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
push(¬ fpi) to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
return pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Figure 7.13: Evaluating a prefix expression: “∧ → ¬ p q ∨ r ¬ q” for (p = T, q = F, r = F)

Figure 7.13 demonstrates Algorithm 7.11 evaluating the prefix expression “∧ → ¬ p q ∨ r ¬ q”
for (p = T, q = F, and r = F).
The computational time complexity of Algorithm 7.11 is clearly O(n), as the input string
is scanned only once in reverse order and each element is processed as either push or pop
operations, which take constant time.
Algorithms 7.10 and 7.11 are identical except for lines 2 and 8. Line 2 determines
whether the input string is scanned from the beginning or the reverse order. In line 8 of
Algorithm 7.11, the first popped item must be followed by the second popped item, while
the second popped item must be followed by the first popped item in Algorithm 7.10.

7.2.4 Infix Evaluation


Finally, an infix expression is a series of operators and operands where a binary operator
comes between its operands. The method most widely used by freshman students to evaluate
an infix expression is to evaluate the innermost part with the highest precedence first.
For example, to evaluate the expression “p → ¬(p ∧ q ∨ r)” where p = T, q = F, and r = F,
four steps (one per operator) are conducted, where each step requires searching for
the highest-precedence operator:

‘T → ¬(T ∧ F ∨ F)’ =⇒ ‘T → ¬(F ∨ F)’ =⇒ ‘T → ¬(F)’ =⇒ ‘T → T’ =⇒ T

The computational time complexity of this grade school method is quadratic.


There is a more efficient algorithm to evaluate infix expressions. Consider the following
simple algorithm, which first converts the infix expression to the corresponding postfix
expression and then uses the postfix evaluation Algorithm 7.10.

Algorithm 7.12. Evaluate infix

eval infix(A)
B =convert inf2post(A) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
return eval postfix(B) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

The computational time complexity of Algorithm 7.12 depends on the conversion time
in addition to the time to evaluate the postfix expression, which is linear.
There exists a linear time algorithm to convert an infix expression to the corresponding
postfix expression and, thus, the computational time complexity of Algorithm 7.12 becomes
linear. The linear time algorithm to convert an infix to a postfix expression utilizes a stack.
It is called ‘operator precedence parsing’ and is stated as follows:

Algorithm 7.13. Operator precedence parsing

conv infix2post(A1∼n )
declare a stack T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
j = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if ai ∈ Operand (boolean variable) . . . . . . . . . . . . . . . . . . . . . . . . . 4
b++j = ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if top(T stack) = ‘¬’, b++j = pop from T stack . . . . . . . . 6
else if ai ∈ binary boolean operators, . . . . . . . . . . . . . . . . . . . . . . . 7
while top(T stack) has higher or equal precedence . . . . . . . . 8
b++j = pop from T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
push ai to T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
else if ai = ‘(’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
push ai to T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
else if ai = ‘)’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
x = pop from T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
while x 6= ‘(’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
b++j = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
x = pop from T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
if top(T stack) = ‘¬’, b++j = pop from T stack . . . . . . . 18
b++j = pop from T stack until T stack is empty . . . . . . . . . . . . . . 19
return B1∼j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Algorithm 7.13 contains a new notation, ‘++j’, called the pre-increment operator, which
is familiar to most computer programmers but may not be to readers new to this field. It
is used to reduce the number of lines. ‘b++j = x’ is equivalent to the two lines of code
‘j = j + 1’ followed by ‘bj = x’.
The operator precedence parsing Algorithm 7.13 is demonstrated with a couple of examples
in Figure 7.14. First, when the symbol in the infix expression is one of the operands, it is
immediately printed, as stated in lines 4 ∼ 5. The negation symbol must be checked; ¬p
in an infix expression is p ¬ in a postfix expression. If an operand is printed and the current
stack’s top contains the negation symbol, it is popped and printed, as stated in line 6.
Second, lines 7 ∼ 10 are the code when the symbol in the infix expression is one of binary
operators. All binary operators with higher or equal precedence must be popped out when
they are seen on the top of the stack. If the top of the stack is not a binary operator with
higher or equal precedence, the symbol is pushed onto the stack. The known precedence
order for the logical operators is as follows:

Fact 7.1. Precedence rules for logical operators

‘¬’ > ‘∧’ > ‘∨’ > ‘→’ > ‘↔’



(a) “p → ¬(p ∧ q ∨ r)” to “p p q ∧ r ∨ ¬ →”

(b) “(p ∨ q ∧ r) → ¬r” to “p q r ∧ ∨ r ¬ →”

Figure 7.14: Converting infix to postfix notation

Unlike the arithmetic operators, the logic operators’ precedence list is incomplete to the
best of the author’s knowledge; precedence rules for other logical symbols listed in Table 7.1
are undefined.
Next, the parenthesis symbols are dealt with in lines 11 ∼ 18. An open parenthesis
symbol is immediately pushed onto the stack. When a closed parenthesis symbol is seen,
keep popping and printing until the corresponding open parenthesis is seen. Open and
closed parenthesis symbols are never printed to the postfix expression.
Finally, when the end of the input infix expression is reached, all elements in the stack
must be popped and printed in order, as stated in line 19. The final postfix expression is
constructed in B1∼j . Note that the size of B may be smaller than n, the size of A1∼n , as
parentheses symbols do not occur in postfix and prefix notations.
At first glance, the computational time complexity of Algorithm 7.13 seems to be more than
linear, as the outer most loop in lines 3 ∼ 19 contains inner loops. However, since all operand
symbols are immediately printed and all other symbols except for the closed parenthesis
symbol are pushed and popped exactly once, Algorithm 7.13 takes linear time, Θ(n).
Consequently, Algorithm 7.12 for evaluating an infix expression takes Θ(n) as well.
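
A Java sketch of the operator precedence parsing in Algorithm 7.13 is given below, restricted to ¬, ∧, ∨, → and parentheses; the precedence numbers encode Fact 7.1, and the single-character operator encoding ('!', '&', '|', '>') is an illustrative choice. The input is assumed to be a valid infix expression.

import java.util.ArrayDeque;
import java.util.Deque;

// Infix-to-postfix conversion (Algorithm 7.13) for ¬ (!), ∧ (&), ∨ (|), → (>).
class InfixToPostfix {
    static int prec(char op) { // Fact 7.1: ¬ > ∧ > ∨ > →
        switch (op) {
            case '!': return 4;
            case '&': return 3;
            case '|': return 2;
            case '>': return 1;
            default: return 0;  // '(' never pops anything
        }
    }

    static String convert(String a) {
        StringBuilder b = new StringBuilder();
        Deque<Character> t = new ArrayDeque<>();
        for (char c : a.toCharArray()) {
            if (Character.isLetter(c)) {            // operand: print it
                b.append(c);
                if (!t.isEmpty() && t.peek() == '!') b.append(t.pop());
            } else if (c == '(') {
                t.push(c);
            } else if (c == ')') {                  // pop until matching '('
                char x;
                while ((x = t.pop()) != '(') b.append(x);
                if (!t.isEmpty() && t.peek() == '!') b.append(t.pop());
            } else if (c == '!') {
                t.push(c);
            } else {                                // binary operator
                while (!t.isEmpty() && prec(t.peek()) >= prec(c)) b.append(t.pop());
                t.push(c);
            }
        }
        while (!t.isEmpty()) b.append(t.pop());     // flush remaining operators
        return b.toString();
    }

    public static void main(String[] args) {
        System.out.println(convert("p>!(p&q|r)")); // ppq&r|!> as in Figure 7.14 (a)
        System.out.println(convert("(p|q&r)>!r")); // pqr&|r!> as in Figure 7.14 (b)
    }
}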

7.3 Graph Problems with Stacks


As seen in the previous sections, a stack is a natural tool to perform a depth first traversal
in a tree and in backtracking algorithms. This section first discusses a depth first search in
a graph, which results in the depth first order of vertices. Using a depth first search with
a stack data structure, several graph problems can be easily tackled. They include graph
connectivity, finding connected components, cycle detection, etc. These problems can be
solved by visiting every node in a systematic way. One such means is the depth first order
of vertices using a stack or recursion that utilizes an internal stack.

7.3.1 Depth First Search


Depth first search, or simply DFS, is a method of searching the graph starting from a
source in a depth first order. There are two different ways to perform the DFS with an
implicit or explicit stack.
First, a depth first order of vertices can be generated by recursively calling a depth first
search on the adjacent vertices of a source. Recall that recursion uses an internal stack.
A pseudo code is given in Algorithm 7.14. Let Uv1∼vn be the flag table indicating the
visited vertices; initially all ux = F except for the source vertex r, whose ur = T. U and
G are declared globally. The recursive Algorithm 7.14 is invoked initially with DFS-Gr(r).

Algorithm 7.14. Recursive depth first search in Graph

DFS-Gr(x)
for each adjacent vertex, y to x . . . . . . . . . . . . . . . . . . . 1
if uy = F, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
uy = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
DFS-Gr(y) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

The recursive DFS using an implicit stack is illustrated in Figure 7.15 (a) and its spanning
tree is given in Figure 7.15 (c). The remaining elements in the last stack in Figure 7.15 are
popped one at a time. When the stack is empty, the algorithm returns to the main calling
procedure and terminates. By following the pre-order depth first traversal on the spanning
tree, the same order of vertices as that produced by Algorithm 7.14 is generated.
The computational time complexity of Algorithm 7.14 is O(|E| + |V |), as each edge is
checked exactly twice via the adjacencies of its two end vertices. The computational space
complexity of Algorithm 7.14 is Θ(|V |), as the flag table requires Θ(|V |).
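
A Java sketch of Algorithm 7.14 over adjacency lists is shown below; the graph encoded in main is a small illustrative example, not necessarily the graph of Figure 7.15.

import java.util.List;

// Recursive depth first search (Algorithm 7.14) over adjacency lists.
class RecursiveDfs {
    static boolean[] u;              // flag table U: visited vertices
    static List<List<Integer>> adj;

    static void dfs(int x) {
        System.out.println("visit v" + (x + 1));
        for (int y : adj.get(x))     // each adjacent vertex y to x
            if (!u[y]) {
                u[y] = true;
                dfs(y);              // the runtime stack serves as the stack
            }
    }

    public static void main(String[] args) {
        // an illustrative 7-vertex graph (0-indexed: v1 = 0, ..., v7 = 6)
        adj = List.of(
                List.of(1, 2, 4),       // v1: v2, v3, v5
                List.of(0, 2),          // v2: v1, v3
                List.of(0, 1, 3, 5),    // v3: v1, v2, v4, v6
                List.of(2, 5, 6),       // v4: v3, v6, v7
                List.of(0, 6),          // v5: v1, v7
                List.of(2, 3, 6),       // v6: v3, v4, v7
                List.of(3, 4, 5));      // v7: v4, v5, v6
        u = new boolean[7];
        u[0] = true; // mark the source before the initial call
        dfs(0);      // visits v1, v2, v3, v4, v6, v7, v5 for this encoding
    }
}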
The second method to generate a depth first order of vertices utilizes an explicit stack [99,
p 93]. Starting from the root node, all adjacent vertices to the currently visited node are
pushed onto the stack, as long as they have not been visited or are not in stack. Continue
popping a vertex from the stack and repeat the process. When the stack is empty, a depth
first order is generated. To check whether a vertex has been visited or is in stack, an extra
flag table Uv1 ∼vn is utilized. For directed graph cases, a vertex y is adjacent to another
vertex x if there exists an arc from x to y, (x, y). A pseudo code is stated in Algorithm 7.15
on page 374. Line 10 can be replaced with ‘for each y where (x, y) ∈ E’ to make a directed
arc more clear.
Figure 7.15 (b) illustrates Algorithm 7.15 on a toy example. If each vertex is processed
when popping, the depth first search order ⟨v1, v5, v7, v6, v4, v3, v2⟩ can be generated. The
last element, v2, in the stack in Figure 7.15 (b) must be popped. When the stack is empty,
Algorithm 7.15 terminates. Figure 7.15 (d) shows the spanning tree produced by the depth first
search Algorithm 7.15. The pre-order DFT of this tree gives the same depth first order of
vertices as the one by Algorithm 7.15.

(a) Recursive depth first search: ⟨v1, v2, v3, v6, v4, v7, v5⟩

(b) DFS with a stack: pop order ⟨v1, v5, v7, v6, v4, v3, v2⟩, push order ⟨v1, v2, v3, v5, v7, v6, v4⟩

(c) A spanning tree by the recursive DFS in (a)

(d) A spanning tree by the DFS with a stack in (b)

Figure 7.15: Depth first search (DFS)



Algorithm 7.15. Depth first search in a Graph

DFS-G(G, r)
declare a stack T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
declare a flag table Uv1 ∼vn initially F . . . . . . . . . . . . . 2
i = 1 ............................................. 3
push r to T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
ur = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
while T stack is not empty . . . . . . . . . . . . . . . . . . . . . . . . 6
x = pop(T stack) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
si = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
for all y adjacent to x . . . . . . . . . . . . . . . . . . . . . . . . . 10
if uy = F, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
uy = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
push(y) to T stack . . . . . . . . . . . . . . . . . . . . . . . . 13
return S1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

The computational time and space complexities of Algorithm 7.15 are the same as those
of Algorithm 7.14: O(|E| + |V |) and Θ(|V |), respectively, for the same reasons. The stack
size may reach O(|V |).
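
The explicit-stack version in Algorithm 7.15 can be sketched in Java as follows, reusing the same illustrative adjacency-list encoding; vertices are recorded in the pop order.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Depth first search with an explicit stack (Algorithm 7.15).
class StackDfs {
    static List<Integer> dfs(List<List<Integer>> adj, int r) {
        List<Integer> order = new ArrayList<>();   // S: pop order of vertices
        boolean[] u = new boolean[adj.size()];     // flag table U
        Deque<Integer> t = new ArrayDeque<>();
        t.push(r);
        u[r] = true;
        while (!t.isEmpty()) {
            int x = t.pop();
            order.add(x);                          // process when popping
            for (int y : adj.get(x))
                if (!u[y]) {                       // not visited nor in stack
                    u[y] = true;
                    t.push(y);
                }
        }
        return order;
    }

    public static void main(String[] args) {
        List<List<Integer>> adj = List.of(
                List.of(1, 2, 4), List.of(0, 2), List.of(0, 1, 3, 5),
                List.of(2, 5, 6), List.of(0, 6), List.of(2, 3, 6),
                List.of(3, 4, 5));
        System.out.println(dfs(adj, 0)); // prints [0, 4, 6, 5, 3, 2, 1] here
    }
}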
A different kind of ordering with a stack can be generated if each vertex is processed right
before pushing instead of popping. Line 8 in Algorithm 7.15, which determines the order
of vertices, is moved to right before push operations instead of right after pop operations.
The pseudo code is stated as follows:
Algorithm 7.16. Stack push order of DFS in a Graph

SPO-G(G, r)
declare a stack T stack . . . . . . . . . . . . . . . . . . . 1
declare a flag table Uv1∼vn initially F . . . . . . . . . . . 2
i = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
push r to T stack . . . . . . . . . . . . . . . . . . . . . . 4
ur = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
s1 = r . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
while T stack is not empty . . . . . . . . . . . . . . . . . . 7
   x = pop(T stack) . . . . . . . . . . . . . . . . . . . . . 8
   for all y adjacent to x . . . . . . . . . . . . . . . . . . 9
      if uy = F, . . . . . . . . . . . . . . . . . . . . . . 10
         uy = T . . . . . . . . . . . . . . . . . . . . . . . 11
         i = i + 1 . . . . . . . . . . . . . . . . . . . . . 12
         si = y . . . . . . . . . . . . . . . . . . . . . . . 13
         push(y) to T stack . . . . . . . . . . . . . . . . . 14
return S1∼n . . . . . . . . . . . . . . . . . . . . . . . . . 15

The stack push order for the toy example in Figure 7.15 (b) is ⟨v1, v2, v3, v5, v7, v6, v4⟩.
This order is not a depth first search order, and a spanning tree cannot be generated from
it. However, this order is closely related to the DFS with a stack, and all problems that
can be solved by the DFS ordering can also be solved by the stack push order.

7.3.2 Graph Connectivity

Figure 7.16: Graph connectivity: 2 connected components: ⟨v1, v5, v7⟩ and ⟨v2, v3, v4, v6⟩

In the previous depth first search Algorithms 7.14, 7.15, and 7.16, all nodes are visited
in a depth first order as long as they are connected to the root node. When the stack
becomes empty before all vertices are visited, the graph is not connected. As illustrated
in Figure 7.16, the undirected graph is not connected and the stack becomes empty in the
third step. All three depth first search Algorithms 7.14 ∼ 7.16 naturally solve the graph
connectivity problem, or simply GCN, which is defined as follows:
Problem 7.5. Graph connectivity
Input: An undirected graph G = (V, E)
Output: isConn(G) = T if ∀vx, ∀vy ∈ V (path(vx, vy) ≠ ∅)
                    F otherwise, i.e., ∃vx, ∃vy ∈ V (path(vx, vy) = ∅)
A graph is connected if for each vertex, vx, there exists a path to every other vertex in V.
A graph is not connected if there exists a pair of vertices with no path between them. By
changing line 14 in Algorithm 7.15 to ‘return ∧x∈V ux,’ it returns true only if the
graph is connected, i.e., all vertices are visited. If the flag table, Uv1∼vn, is represented by
0 or 1, the following condition in eqn (7.8) can be used instead:

isConn(G) = T if ∧x∈V ux = T (or Σx∈V ux = n, or Πx∈V ux = 1)
            F otherwise                                           (7.8)

The flag tables, Uv1 ∼vn , can be generated by recursive DFS, DFS with a stack, or BFS
which shall be presented in the next section. Only vertices in the first connected component
have T or 1 values. All the other vertices disconnected to the first connected component
have F or 0 values.

Now consider the problem of finding all connected components of a graph, or simply
GCC, stated as follows:
Problem 7.6. All connected components
Input: An undirected graph G = (V, E)
Output: A partition of V , P = (V1, V2, · · · , Vk), such that
∀Vx ∈ P (isConn(Vx) = T) ∧ ∀vx ∈ Vx, ∀vy ∈ Vy (Vx ≠ Vy → path(vx, vy) = ∅)

A set of subsets of vertices is said to be a partition of V if the subsets are mutually
exclusive and their union is equivalent to V .

Definition 7.1. P = (V1, V2, · · · , Vk) is a partition of V if
∀Vx, ∀Vy ∈ P (if Vx ≠ Vy, Vx ∩ Vy = ∅) ∧ ∪Vx∈P Vx = V.

By slightly modifying the DFS Algorithms 7.14 and 7.15, all connected components
can be found. If a graph is not connected and the DFS is repeated for any remaining
unvisited vertex, all connected components can be identified. A pseudo code based on the
DFS Algorithm 7.15 with an explicit stack is given below and illustrated in Figure 7.16.
An algorithm and its illustration based on the recursive DFS Algorithm 7.14 are left for
exercises.
Algorithm 7.17. Find all connected components

Find Conn Comp(G)


declare a stack T stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
declare a flag table Uv1 ∼vn initially 0 . . . . . . . . . . . . . . 2
i = 0 ............................................. 3
nc = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
while i < n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if T stack is empty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
pick any x ∈ V where ux = 0 . . . . . . . . . . . . . . . . . 7
nc = nc + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
i = i + 1 ..................................... 9
ux = nc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
x = pop(T stack) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
for all y adjacent to x . . . . . . . . . . . . . . . . . . . . . . . . . 14
if uy = 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
uy = nc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
push(y) to T stack . . . . . . . . . . . . . . . . . . . . . . . . 17
return Uv1 ∼vn and/or nc . . . . . . . . . . . . . . . . . . . . . . . . . 18

All the vertices visited during the initial depth first search belong to the first connected
component. If the DFS is continued with any unvisited vertex, the next connected component
can be found. In the toy example in Figure 7.16, the lowest alphabetical or numerical order
is used to select an unvisited vertex. Vertices in the same partition share the same number
in the table Uv1∼vn, and vertices from two different partitions have different numbers. The
computational time and space complexities of Algorithm 7.17 are O(|E| + |V |) and Θ(|V |),
respectively.
It should be noted that in Algorithm 7.17, the component membership of each vertex
is assigned in the stack push order: ⟨v1, v5, v7, v2, v3, v4, v6⟩. However, the membership
assignment in the depth first search order with a stack, ⟨v1, v7, v5, v2, v4, v6, v3⟩, is illustrated
in Figure 7.16. Either ordering correctly finds all connected components or determines the
connectivity.
Graph connectivity Problem 7.5 is closely related to finding all connected components
Problem 7.6. Let Num of Conn Comp(G) be the number of connected components, nc,
returned by Algorithm 7.17.
isConn(G) = T if Num of Conn Comp(G) = 1
            F otherwise, i.e., Num of Conn Comp(G) > 1            (7.9)
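
A Java sketch of Algorithm 7.17 is given below; the adjacency lists are an illustrative reconstruction consistent with the two components named in the caption of Figure 7.16.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Finding all connected components (Algorithm 7.17): u[x] holds the
// component number of vertex x; nc counts the components.
class ConnComp {
    static int[] components(List<List<Integer>> adj) {
        int n = adj.size();
        int[] u = new int[n];             // 0 = not yet visited
        Deque<Integer> t = new ArrayDeque<>();
        int nc = 0;
        for (int x = 0; x < n; x++) {
            if (u[x] != 0) continue;      // already in some component
            u[x] = ++nc;                  // start a new component at x
            t.push(x);
            while (!t.isEmpty()) {
                int v = t.pop();
                for (int y : adj.get(v))
                    if (u[y] == 0) {
                        u[y] = nc;        // same component as v
                        t.push(y);
                    }
            }
        }
        return u; // isConn(G) holds iff all labels equal 1, i.e., nc = 1
    }

    public static void main(String[] args) {
        // two components, as in Figure 7.16: {v1, v5, v7} and {v2, v3, v4, v6}
        List<List<Integer>> adj = List.of(
                List.of(4, 6),        // v1: v5, v7
                List.of(2, 3),        // v2: v3, v4
                List.of(1, 3, 5),     // v3: v2, v4, v6
                List.of(1, 2, 5),     // v4: v2, v3, v6
                List.of(0, 6),        // v5: v1, v7
                List.of(2, 3),        // v6: v3, v4
                List.of(0, 4));       // v7: v1, v5
        int[] u = components(adj);
        for (int x = 0; x < u.length; x++)
            System.out.println("v" + (x + 1) + " -> component " + u[x]);
    }
}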

7.3.3 Cycle Detection in Ungraphs

Figure 7.17: Cycle detection by a recursive DFS

A cycle in a graph appears frequently in computer science and detecting a cycle in a
graph is of great importance. Recall that an unconnected graph G without a cycle is referred
to as a forest, and a single connected component graph G without a cycle is called a tree.
A graph G contains a cycle if there exists a vertex which is reachable from itself without
repeating edges and vertices, except for the starting and ending vertices. The cycle length
must be greater than two and the cycle must visit at least two other vertices. In Figure 7.17,
the path ⟨(v2, v4), (v4, v6), (v6, v2)⟩ as an edge sequence, or ⟨v2, v4, v6, v2⟩ as a vertex sequence,
is a cycle of length three; it contains at least two other vertices, {v4, v6}, that are not v2, and
no vertex is repeated except for v2, which appears exactly twice, at the beginning and the
end. The following problem definition utilizes the path as a vertex sequence, not an edge
sequence.

Problem 7.7. Cycle detection problem
Input: An undirected graph G = (V, E)
Output: T if ∃vx ∈ V, ∃P ∈ path(vx, vx) (|P| > 3 ∧ ∀i, j ∈ {2 ∼ |P|} (i ≠ j → pi ≠ pj))
        F otherwise

The first constraint, ‘|P| > 3,’ guarantees the exclusion of the single edge (vx, vy) path,
⟨vx, vy, vx⟩. The second constraint guarantees no repetition of vertices except for the starting
and ending vertices.
Again, the depth first search plays a crucial role in solving the cycle detection problem,
as stated below.
Algorithm 7.18. A recursive DFS for a cycle detection
declare a global flag table Uv1 ∼vn initially 0
DetectCycle(G)
nc = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
cyc = F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
While ∃x ∈ V (ux = 0) and cyc = F . . . . . . . . . . . . . . . 3
select x whose ux = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . .4
nc = nc + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
ux = nc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
cyc = DFS-Cyc(x, nc, x) . . . . . . . . . . . . . . . . . . . . . . . . 7
return cyc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

DFS-Cyc(r, nc, par)


for each adjacent vertex, y to r . . . . . . . . . . . . . . . . . . . 1
if uy = nc and y ≠ par, return T . . . . . . . . . 2
if uy = 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
uy = nc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
cyc = DFS-Cyc(y, nc, r) . . . . . . . . . . . . . . . . . . . . . . 5
if cyc = T, return cyc . . . . . . . . . . . . . . . . . . . . . 6
return F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
The main procedure and a recursive DFS sub-procedure are necessary for a disconnected
graph. To detect a cycle, we can perambulate each individual connected component by
checking back edges in a depth first search order. For each connected component, if the
currently visited vertex, vx, has an adjacent vertex, vy, that has already been visited and is
not its direct parent vertex, then there is a cycle. For an undirected graph, let vr be the root
vertex for the current connected component. Then, there must be a path from vx to vr and
from vr to vy. Hence, path(vx, vr) + path(vr, vy) + (vy, vx) is a cycle. The edge (vx, vy) is widely
called a back edge, as in [42, p 607].
An extra parameter, par, is necessary to avoid the single edge cycle in undirected graphs.
Line 7 of the main ‘DetectCycle’ procedure invokes the ‘DFS-Cyc’ procedure whose argu-
ments include par. Since the root node does not have a parent node to pass to the recursive
‘DFS-Cyc’ procedure, either null value or itself can be passed. Itself is passed in the pseudo
code. For the remaining non-root nodes in line 5 of the ‘DFS-Cyc’ procedure, their parent
node value is passed so that a single edge is not considered a cycle. The condition ‘y ≠ par’
in line 2 of the ‘DFS-Cyc’ procedure guarantees the exclusion of the single edge cycle.

The computational time and space complexities of Algorithm 7.18 are the same as those
of the recursive DFS Algorithm 7.14: O(|E| + |V |) and Θ(|V |), respectively. This problem
can also be solved by an explicit stack or queue, and they are left for exercises.
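
A Java sketch of Algorithm 7.18 is given below; the par argument excludes the single-edge "cycle" back to the direct parent, and the graph in main is an illustrative example containing the triangle v2, v4, v6.

import java.util.List;

// Cycle detection in an undirected graph by recursive DFS (Algorithm 7.18).
class UndirectedCycle {
    static List<List<Integer>> adj;
    static int[] u; // 0 = unvisited, otherwise the component number

    static boolean detectCycle() {
        int nc = 0;
        for (int x = 0; x < adj.size(); x++)
            if (u[x] == 0) {
                u[x] = ++nc;
                if (dfsCyc(x, nc, x)) return true; // root passes itself as par
            }
        return false;
    }

    static boolean dfsCyc(int r, int nc, int par) {
        for (int y : adj.get(r)) {
            if (u[y] == nc && y != par) return true; // back edge found
            if (u[y] == 0) {
                u[y] = nc;
                if (dfsCyc(y, nc, r)) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // v2-v4-v6 form a triangle; v1-v5 is an acyclic component; v3 isolated
        adj = List.of(List.of(4), List.of(3, 5), List.of(),
                List.of(1, 5), List.of(0), List.of(1, 3));
        u = new int[adj.size()];
        System.out.println(detectCycle()); // true
    }
}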

(a) (v1, v2) is a back edge: CCM(v1) = CCM(v2) on G′

(b) (v1, v2) is not a back edge: CCM(v1) ≠ CCM(v2) on G′

Figure 7.18: Checking whether (v1, v2) is a back edge where G′ = (V, E − {(v1, v2)}).

A graph G contains a cycle if and only if there is a back edge present in G.

ContainCycle(G) = T if ∃(vx, vy) ∈ E (isBackEdge((vx, vy), G) = T)
                  F otherwise                                       (7.10)
where G′ = (V, E − {(vx, vy)})

To check whether an edge (vx, vy) is a back edge, consider a graph G′ omitting (vx, vy) from
G, as depicted in Figure 7.18. The checking back edge problem, or simply CBE, can be
determined if the finding all connected components Problem 7.6 is solved. Let CCM(vx, G) be
the connected component membership of the vertex vx on a graph G.

isBackEdge((vx, vy), G) = T if CCM(vx, G′) = CCM(vy, G′)
                          F otherwise                               (7.11)
where G′ = (V, E − {(vx, vy)})

If the memberships of vx and vy are the same on G′, the edge (vx, vy) creates a cycle
on G and, thus, is a back edge. If the memberships of vx and vy differ on G′,
the edge (vx, vy) cannot create a cycle on G and, thus, is not a back edge. Testing every
edge in G by eqn (7.11) for the cycle detection problem would take O(|E|²), though.

7.3.4 Cycle Detection in Digraphs


Recall the DAG checking Problem 5.12 that determines whether a directed graph contains
no cycle. It is the exact opposite of the digraph version of the cycle detection
Problem 7.7. Problem 5.12 was tackled via the existence of a topological sorted order of
vertices. It can also be tackled simply by a recursive DFS of the directed graph.
There are subtle differences in the cycle definition between ungraphs and digraphs. Most
importantly, a single edge is not considered a cycle in an ungraph, as (x, y) = (y, x). However,
since (x, y) ≠ (y, x) in a digraph, if both arcs (x, y) and (y, x) ∈ E, ⟨x, y, x⟩ is a cycle.
Hence, the parent information is not necessary for digraphs, although it was necessary
in Algorithm 7.18 for ungraphs. The second difference is how the cycle is detected in the
recursive DFS. In ungraphs, a cycle is detected if the currently visited vertex has an adjacent
vertex which has already been visited. In digraphs, a cycle is detected if the currently visited

Figure 7.19: Cycle detection by a recursive DFS

vertex x has an adjacent vertex y such that (x, y) ∈ E and y is currently in the stack. An
array Uv1∼vn of size |V | is used to keep track of vertices in the recursion stack, those already
visited, and those never visited. Let the flag values 1 and 2 indicate ‘in stack’ and ‘visited
but no longer in stack’, respectively.
Figure 7.19 illustrates the recursive DFS Algorithm 7.19 detecting a cycle in a directed
graph. When the vertex v7 is pushed onto the stack, the algorithm checks all vertices {v4, v6}
adjacent from v7. v6 has been visited before, i.e., uv6 = 2, but v4 is currently in the stack,
i.e., uv4 = 1. This means that there is a cycle ⟨v4, v5, v7, v4⟩. When a cycle is detected, the
algorithm stops the recursive DFS and returns with the output. A pseudo code is stated
as follows:

Algorithm 7.19. a recursive DFS for a cycle detection in digraphs

declare a global flag table Uv1 ∼vn initially 0


DetectCycle(G)
cyc = F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
While ∃x ∈ V (ux = 0) and cyc = F . . . . . . . . . . . . . . . 2
select x whose ux = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . .3
ux = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
cyc = DFS-Cyc(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
ux = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
return cyc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

DFS-Cyc(x)
for each adjacent vertex, y from x . . . . . . . . . . . . . . . . . 1
if uy = 1, return T . . . . . . . . . . . . . . . . . . . . . . . . . . 2

if uy = 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
uy = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
cyc = DFS-Cyc(y) . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
if cyc = T, return cyc . . . . . . . . . . . . . . . . . . . . . 6
uy = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
return F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

The computational time and space complexities of Algorithm 7.19 are the same as those
of the ungraph case Algorithm 7.18: O(|E| + |V |) and Θ(|V |), respectively. Unlike the
ungraph cycle detection problem, it is quite tricky to devise an algorithm using an explicit
stack or queue for the directed graph cycle detection problem.
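A Python transcription of Algorithm 7.19 may look as follows; the adjacency map adj, listing the vertices adjacent from each vertex, is an assumed input representation, and the flag values 0, 1, and 2 mean exactly what they mean in the table Uv1∼vn.

def detect_cycle(adj):
    u = {x: 0 for x in adj}  # 0: never visited, 1: in stack, 2: done

    def dfs_cyc(x):
        for y in adj[x]:
            if u[y] == 1:        # arc into the recursion stack: a cycle
                return True
            if u[y] == 0:
                u[y] = 1
                if dfs_cyc(y):
                    return True
                u[y] = 2
        return False

    for x in adj:
        if u[x] == 0:
            u[x] = 1
            if dfs_cyc(x):
                return True
            u[x] = 2
    return False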

7.3.5 Topological Sorting with a Stack

[Figure: six snapshots of the graph, the recursion stack, and the output list T: select v1 & push v1, v2, v4, T = ⟨-, -, -, -, -, -, -⟩; push v7 & pop v7, T = ⟨-, -, -, -, -, -, v7⟩; pop v4 & pop v2, T = ⟨-, -, -, -, v2, v4, v7⟩; push v3 & push v5; push v6 & pop v6, T = ⟨-, -, -, v6, v2, v4, v7⟩; pop v5, v3 & pop v1, T = ⟨v1, v3, v5, v6, v2, v4, v7⟩.]

Figure 7.20: Topological sorting with a recursive DFS

If the depth-first search in Algorithm 7.19 cannot detect a cycle in a directed graph, the graph is a DAG whose vertices can be listed in a topological order. Consider the topological sorting Problem 5.13 defined on page 256. A slight variation of the cycle detection in digraph Algorithm 7.19 using a recursive DFS can also solve the topological sorting problem. In 1976, Tarjan suggested utilizing a recursive DFS to solve the topological sorting Problem 5.13 [170]. As illustrated in Figure 7.20, whenever a vertex gets popped, i.e., returned from, it is inserted at the beginning of the output sequence, so the list is built from its end. Tarjan's algorithm produces a sequence of vertices in topological order, and a pseudo code is stated as follows:

Algorithm 7.20. A recursive DFS for a topological sorting (Tarjan’s algorithm)

declare a global flag table Uv1∼vn initially F
declare a global list T initially empty

TopoSort(G)
while ∃x ∈ V (ux = F) . . . . . 1
select x whose ux = F . . . . . 2
ux = T . . . . . 3
DFS-topo(x) . . . . . 4
insert x to the beginning of T . . . . . 5

DFS-topo(x)
for each adjacent vertex y from x . . . . . 1
if uy = F, . . . . . 2
uy = T . . . . . 3
DFS-topo(y) . . . . . 4
insert y to the beginning of T . . . . . 5

The root node v1 is placed at the front of the output sequence, guaranteeing that all nodes that must come after v1 are indeed placed after v1. The same applies to the remaining vertices vx with respect to the sub-graphs rooted at vx. A vertex vx gets popped only after its children nodes have been processed recursively, and placing vx in front of its children guarantees the topological order. If the DAG is represented as an arc_from(vx) adjacency list, the computational time complexity of Algorithm 7.20 is Θ(|V| + |E|). The computational space complexity is Θ(|V|).
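A hedged Python sketch of Tarjan's approach is shown below. It assumes a DAG given as an arc_from adjacency map and, instead of repeatedly inserting at the beginning of T, appends each finished vertex and reverses once at the end, a standard constant-time realization of the same prepending idea.

def tarjan_topological_sort(adj):
    visited = set()
    order = []                     # finished vertices, in reverse order

    def dfs_topo(x):
        visited.add(x)
        for y in adj[x]:
            if y not in visited:
                dfs_topo(y)
        order.append(x)            # x finishes after all its descendants

    for x in adj:
        if x not in visited:
            dfs_topo(x)
    order.reverse()                # reversing yields the topological order
    return order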

[Figure: (a) a sample DAG on the vertices v1 ∼ v7.]
(b) out-going adjacency list with in-degrees:
vx : indeg(vx) : arc_to(vx) : arc_from(vx)
v1 : 0 : ∅ : {v2, v3, v5}
v2 : 2 : {v1, v3} : {v4}
v3 : 1 : {v1} : {v2, v5, v6}
v4 : 1 : {v2} : {v7}
v5 : 2 : {v1, v3} : {v6}
v6 : 2 : {v3, v5} : {v7}
v7 : 2 : {v4, v6} : ∅

Figure 7.21: Topological sorting with a stack

A stack can be applied to Kahn's Algorithm 5.26, described on page 257, to solve the topological sorting Problem 5.13. The main idea is that, instead of repeatedly searching for a vertex whose in-degree is zero in line 3 of Algorithm 5.26, which is computationally expensive, only vertices whose current in-degree has dropped to zero are pushed. First, the in-degree of every vertex is stored in a table to facilitate determining whether all preceding vertices have been processed, as shown in Figure 7.21. At the beginning of the illustration in Figure 7.22, only v3 is pushed onto the stack, because v2 and v5 have in-coming arcs whose preceding vertices have not been processed yet. Note that all three vertices in arc_from(v1) would be pushed onto the stack in the depth first search Algorithm 7.15, but here only vertices all of whose preceding vertices have been processed are pushed; the in-degrees of all three vertices are reduced by one. Instead of a flag table, the in-degree table is utilized, indicating the number of directly preceding vertices that have not been processed. The value ‘−1’ means that the vertex has been processed. The value ‘0’ means that all preceding vertices have already been processed and the vertex is

currently in the stack. Any higher value means that the vertex cannot yet be included in the topological list, as there exist preceding vertices that have not been processed. A pseudo code is stated as follows:

Algorithm 7.21. Topological sorting using a stack

Topological sorting(G)
declare a stack T stack and output sequence T1∼n . . . . . . . . . 1
declare a flag table Uv1 ∼vn where uvx = indeg(vx ) . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if T stack is empty, pick any x ∈ V where ux = 0 . . . . . 4
else, x = pop(T stack) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
ux = −1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
ti = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
for all y ∈ arc f rom(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
uy = uy − 1 . . . . . 9
if uy = 0, push(y) to T stack . . . . . 10
return T1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

[Figure: six snapshots of the graph, the in-degree table U, and the stack: select v1 & push v3, T = ⟨v1, -, -, -, -, -, -⟩; pop v3 & push v2, v5, T = ⟨v1, v3, -, -, -, -, -⟩; pop v5 & push v6, T = ⟨v1, v3, v5, -, -, -, -⟩; pop v6, T = ⟨v1, v3, v5, v6, -, -, -⟩; pop v2 & push v4, T = ⟨v1, v3, v5, v6, v2, -, -⟩; pop v4 & push v7, T = ⟨v1, v3, v5, v6, v2, v4, -⟩.]

Figure 7.22: Topological sorting with a stack Algorithm 7.21 illustration

Counting the in-degrees of every vertex in line 2 of Algorithm 7.21 takes Θ(|V| + |E|). Algorithm 7.21 works very similarly to the depth-first search, but the stack generates a topological DFS order, as illustrated in Figure 7.22. The computational time and space complexities are Θ(|V| + |E|) and Θ(|V|), respectively.
Vertices are added to the topological list from the beginning whenever they get popped, whereas the topological list is built from the end in Tarjan's Algorithm 7.20. Algorithm 7.21 can thus be said to be on-line linear. Recall from Chapter 5 that many problems require a

topologically sorted list. A partially constructed topological list from the beginning in
Algorithm 7.21 allows us to solve the sub-problems. In Tarjan’s Algorithm 7.20, however, a
partial list does not allow us to solve any sub-problem. From this perspective, Algorithm 7.21
is more advantageous.
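The following Python sketch captures the essence of Algorithm 7.21 under an assumed arc_from adjacency map. For brevity, it pushes all source vertices up front instead of picking a new source whenever the stack empties; both bookkeeping styles emit the vertices in a valid topological order.

def topological_sort_stack(adj):
    indeg = {x: 0 for x in adj}
    for x in adj:
        for y in adj[x]:
            indeg[y] += 1
    stack = [x for x in adj if indeg[x] == 0]   # all sources
    order = []
    while stack:
        x = stack.pop()
        order.append(x)
        for y in adj[x]:
            indeg[y] -= 1
            if indeg[y] == 0:   # all preceding vertices are processed
                stack.append(y)
    return order                # shorter than |V| iff a cycle exists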

7.4 Queue

[Figure: a queue of sound files: is.wav, restricted.wav, to.wav, the.wav]

Figure 7.23: Speech synthesis

A queue can be thought of as waiting in line for service on a first-come, first-served basis. Indeed, ‘queue’ is the British word for a waiting line. Queues are useful for storing pending work, such as jobs sent to a shared printer. A queue for speech synthesis is illustrated in Figure 7.23. Playing sound files on a speaker may take longer than typing the input words, so input words must be queued in order so that they can be played in order.
While all access is restricted to the most recently inserted element in a stack, a different restriction applies to a queue. As opposed to a stack, a queue is a container of objects that are inserted and removed according to the first-in-first-out (FIFO) principle. Inserting and deleting an item are known as ‘enqueue’ and ‘dequeue’, respectively. These rudimentary operations of a queue data structure can be formally defined as follows:
Problem 7.8. Enqueue operation
Input: A queue Q1∼n, a current queue size n, a maximum queue size m, and an element to be inserted x
Output: Q1∼n+1 such that x ∈ Q1∼n+1 if n < m; a queue overflow error otherwise (n = m)
The enqueue operation in a queue data structure is defined in exactly the same way as the stack push operation. The dequeue operation is defined in a similar way to the stack pop operation, but argmin is used instead of argmax. The function argmin_{x∈Q} in-time(x) returns the element inserted earliest among all elements currently in the queue.
Problem 7.9. Dequeue operation
Input: A queue Q1∼n
Output: y and Q − {y}, where y = argmin_{x∈Q} in-time(x), if n > 0; a queue empty error otherwise (n = 0)
A list can be used to implement a queue with a restriction on operations. Thus, a queue can be implemented using either an array or a linked list, with some modifications to guarantee that both operations take constant time.

(a) List operations on a circular array and linked list:
                        insert front : del front : insert rear : del rear
Head-tail linked list :     O(1)     :    O(1)   :     O(1)    :   Θ(n)
Circular array        :     O(1)     :    O(1)   :     O(1)    :   O(1)

[Figure: (b) a head-tail linked list as a queue and (c) an array as a queue, holding R, E, A, L and evolving to E, A, L, G, O; (d) the same queue in a circular array of size 8 with its front and rear indices.]

Figure 7.24: Queue after dequeue(), enqueue(G), dequeue(), and enqueue(O) operations.

At first glance, it seems impossible to implement a queue using an ordinary linked list while guaranteeing both enqueue and dequeue to be constant. In order to maintain a line, elements must be inserted at one end and deleted from the other end. Based on the computational time complexities in Figure 7.2 (a) on page 357, if the front is the beginning of a queue, a dequeue operation could be constant, but an enqueue would be linear. If the rear is the beginning of a queue, an enqueue operation would be constant, but a dequeue would be linear. However, if both the head and the tail of a linked list are maintained, as shown in Figure 7.24 (b), insertion at the rear position becomes constant, as given in Figure 7.24 (a).

Pseudo codes for the enqueue and dequeue operations of a head-tail linked list version of a queue are stated below, where the front and rear pointers are declared globally.

Algorithm 7.22. Enqueue (LL)

enqueue(x)
Declare a node Z . . . . . 1
Z.data = x . . . . . 2
rear.next = Z . . . . . 3
rear = Z . . . . . 4
return success . . . . . 5

Algorithm 7.23. Dequeue (LL)

dequeue()
if front = null, return error . . . . . 1
else, . . . . . 2
x = front.data . . . . . 3
front = front.next . . . . . 4
return x . . . . . 5

Figure 7.24 illustrates the queue after dequeue(), enqueue(G), dequeue(), and enqueue(O)
operations. The head-tail linked list version of a queue is illustrated in Figure 7.24 (b). Head
and tail are referred to as front and rear, respectively.
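A runnable Python version of the head-tail linked-list queue is sketched below. It adds the empty-queue bookkeeping that the pseudo codes leave implicit: the very first enqueue must set both front and rear, and a dequeue that empties the queue must clear rear as well.

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedListQueue:
    def __init__(self):
        self.front = None   # head: where elements leave
        self.rear = None    # tail: where elements enter

    def enqueue(self, x):                 # Algorithm 7.22, O(1)
        z = Node(x)
        if self.rear is None:
            self.front = self.rear = z    # first element
        else:
            self.rear.next = z
            self.rear = z

    def dequeue(self):                    # Algorithm 7.23, O(1)
        if self.front is None:
            raise IndexError("queue empty error")
        x = self.front.data
        self.front = self.front.next
        if self.front is None:
            self.rear = None              # queue became empty
        return x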
Next, it seems difficult or impossible to implement a queue using an array, as the list grows at one end and shrinks at the other. Eventually, the end of the maximum array size will be reached and an enqueue operation will become impossible. Hence, a more creative technique is necessary to implement a queue using an array. A circular array allows a queue to grow indefinitely, as long as the current number of elements in the queue is less than the maximum capacity, as illustrated in Figures 7.24 (c) and (d). Figures 7.24 (c) and (d) are the same except that the circular arrays are shown as a one dimensional array and as a ring, respectively. A circular array requires maintaining the front and rear indices.
Thus far, the first element's index in a list has been 1 for simplicity's sake. Here, the first element's index in an array is 0 and the last element's index in an array of size n is n − 1. This indexing system is common in most programming languages, and the (0 ∼ n − 1) indexing is necessary to make an array circular. A modulo function, (% m), enables a circular, or modular, array. Pseudo codes for these operations are stated below, where A1∼m, m, and n are declared globally. The front and rear indices are declared globally as well and are initially 0.

Algorithm 7.24. Enqueue (Cir. Array)

enqueue(x)
if n = m, return error . . . . . 1
else, . . . . . 2
n = n + 1 . . . . . 3
A[rear] = x . . . . . 4
rear = (rear + 1) % m . . . . . 5
return success . . . . . 6

Algorithm 7.25. Dequeue (Cir. Array)

dequeue()
if n = 0, return error . . . . . 1
else, . . . . . 2
n = n − 1 . . . . . 3
x = A[front] . . . . . 4
front = (front + 1) % m . . . . . 5
return x . . . . . 6
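A Python sketch mirroring Algorithms 7.24 and 7.25 follows, with A1∼m, m, n, front, and rear held as instance variables rather than globals.

class CircularArrayQueue:
    def __init__(self, m):
        self.A = [None] * m   # fixed capacity m
        self.m = m
        self.n = 0            # current number of elements
        self.front = 0        # index of the earliest inserted element
        self.rear = 0         # index of the next free slot

    def enqueue(self, x):                     # Algorithm 7.24
        if self.n == self.m:
            raise IndexError("queue overflow error")
        self.n += 1
        self.A[self.rear] = x
        self.rear = (self.rear + 1) % self.m  # wrap around

    def dequeue(self):                        # Algorithm 7.25
        if self.n == 0:
            raise IndexError("queue empty error")
        self.n -= 1
        x = self.A[self.front]
        self.front = (self.front + 1) % self.m
        return x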

7.4.1 Breadth First Search


While a stack data structure allows ordering vertices in a depth-first search manner in
a graph, a queue data structure allows ordering vertices in a breadth first search manner.
Breadth first search, or simply BFS, first visits all vertices directly adjacent to a vertex r,
known as the source. Then, it visits all vertices reachable from r whose SPL(r, x) = 2. And
then, it visits all vertices with SPL(r, x) = 3 and so on. Recall that SPL stands for the
shortest path length between two vertices. The problem of checking whether a vertex order
is a BFS can be formulated using the shortest path length.

[Figure: a graph rearranged by level from the root v1: level 0, V0 = {v1}; level 1, V1 = {v2, v3, v5}; level 2, V2 = {v4, v6, v7}; and the corresponding spanning tree.]

Figure 7.25: A level structure of a graph and its spanning tree by a BFS

Problem 7.10. BFS order checking
Input: A sequence Vo = ⟨v1, v2, · · · , vn⟩ of V
Output: T if ∀vx ∀vy ∈ Vo (x ≤ y → SPL(v1, vx) ≤ SPL(v1, vy)); F otherwise

A level structure of an undirected graph partitions the vertices such that vertices in the same partition have the same shortest path length from a given root vertex. A sample level structure of a graph and its spanning tree is given in Figure 7.25. There are three partitions: V = V0 ∪ V1 ∪ V2 and |V| = |V0| + |V1| + |V2|. Breadth first search explores vertices in the lowest level of the structure first. The order of vertices within the same level does not matter. Sequences such as ⟨v1, v2, v3, v5, v4, v6, v7⟩ and ⟨v1, v5, v3, v2, v7, v6, v4⟩ are in valid BFS order, but the sequence ⟨v1, v2, v3, v4, v5, v6, v7⟩ is not, since SPL(v1, v4) = 2 > SPL(v1, v5) = 1 but v4 comes before v5.
A valid BFS order can be generated using a queue data structure. As depicted in Fig-
ure 7.26, the root vertex is first enqueued and dequeued. Whenever a vertex x is dequeued,
all adjacent vertices to x are enqueued as long as they have not been previously visited or
enqueued. After all vertices in the ith level are dequeued, all vertices in the (i + 1)th level
are dequeued in turn, until the queue becomes empty. A pseudo code for the breadth first
search using a queue data structure is stated as follows:

Algorithm 7.26. Breadth first search in Graph

BFS-G(G, r)
declare a queue Q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
declare a flag table Uv1 ∼vn initially F . . . . . . . . . . . . . 2
i = 1 ............................................. 3
enqueue r to Q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
ur = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
while Q is not empty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
x = dequeue(Q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
si = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
for all y adjacent to x . . . . . . . . . . . . . . . . . . . . . . . . . 10
if uy = F, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
uy = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
enqueue(y) to Q . . . . . . . . . . . . . . . . . . . . . . . . . . 13
return S1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

[Figure: six snapshots of the graph, the flag table U, and the queue: v1 en-/dequeued & enqueue(v2, v3, v5); v2 dequeued & enqueue(v4); v3 dequeued & enqueue(v6, v7); v5 dequeued; v4 dequeued; v6 dequeued.]

Figure 7.26: Breadth first search with a queue: ⟨v1, v2, v3, v5, v4, v6, v7⟩

The computational time and space complexities of Algorithm 7.26 are O(|E| + |V|) and Θ(|V|), respectively. Each edge is checked exactly twice for its adjacency, and the queue size may reach up to |V| = n.
Recall that processing a vertex when pushing versus when popping resulted in two different orderings, as exemplified in Figure 7.15 (b) on page 373. However, processing a vertex when enqueuing and when dequeuing result in the same ordering: by the FIFO principle, the order of entering a line is the same as the order of exiting it. Algorithm 7.26 processes vertices when they are dequeued, as illustrated in Figure 7.26.
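A Python sketch of Algorithm 7.26 is given below, with an assumed adjacency map adj and Python's collections.deque playing the role of the queue.

from collections import deque

def bfs(adj, r):
    visited = {r}              # the flag table U
    order = []                 # the output sequence S
    Q = deque([r])
    while Q:
        x = Q.popleft()        # dequeue
        order.append(x)        # process the vertex when it is dequeued
        for y in adj[x]:
            if y not in visited:
                visited.add(y) # mark on enqueue so no vertex enters twice
                Q.append(y)    # enqueue
    return order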
Numerous problems, such as graph connectivity Problem 7.5, finding all connected com-
ponents Problem 7.6, and cycle detection Problem 7.7, can be solved by slightly modifying
the BFS using a queue. They are left for exercises. Only the shortest path length problem
and topological sorting problem are considered in subsequent subsections.

7.4.2 Shortest Path Length


Recall that in the previous shortest path length Problem 5.15, the input graph was limited to a DAG in order to utilize strong inductive programming by ordering vertices in a topological order. Consider the following shortest path length Problem 7.11, which takes any graph whose edges are either undirected or directed.

Problem 7.11. Shortest path length problem


Input: G and a source node s ∈ V
Output: a table of (x, min(length(path(s, x)))) for ∀x ∈ V

As the BFS order checking Problem 7.10 utilized the SPL to define the problem, the BFS can also naturally solve the shortest path length problem. Consider the following pseudo code solving the shortest path length problem by a BFS.

[Figure: three snapshots of the graph, the level table L, and the queue: v1 en-/dequeued & enqueue(v2, v3, v5); v2 dequeued & enqueue(v4); v3 dequeued & enqueue(v6, v7).]

Figure 7.27: BFS illustration to solve the SPL where the root is v1

Algorithm 7.27. Shortest length path using a queue

SPL-Q(G, r)
declare a queue Q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
declare a level table Lv1 ∼vn initially ∞ . . . . . . . . . . . . 2
enqueue r to Q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
lr = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
while Q is not empty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
x = dequeue(Q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
for all y adjacent to x . . . . . . . . . . . . . . . . . . . . . . . . . . 7
if ly > lx + 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
ly = lx + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
enqueue(y) to Q . . . . . . . . . . . . . . . . . . . . . . . . . . 10
return Lv1 ∼vn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Algorithm 7.27 solves the SPL problem in a strong inductive programming manner. First, the basis case SPL(r, r) = 0 is solved in line 4. Next, if all vertices belonging to the partition Vi, whose SPL(r, x) = i, have already been solved and stored in the table L, then all vertices belonging to Vi+1 can be solved, since each is adjacent to some solved vertex x. In lines 8 and 9, each vertex's shortest path length from the source vertex is determined before it gets enqueued. Figure 7.27 demonstrates a BFS solving the shortest path length problem where the root is v1.
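A hedged Python sketch of Algorithm 7.27 follows; the level table L becomes a dictionary initialized to infinity and, as in the pseudo code, each vertex's level is fixed before it is enqueued.

import math
from collections import deque

def spl_q(adj, r):
    L = {x: math.inf for x in adj}   # level table, initially infinite
    L[r] = 0                         # basis case SPL(r, r) = 0
    Q = deque([r])
    while Q:
        x = Q.popleft()
        for y in adj[x]:
            if L[y] > L[x] + 1:      # y has not been solved yet
                L[y] = L[x] + 1
                Q.append(y)
    return L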

7.4.3 Topological Sorting With a Queue


Recall Algorithm 7.21 using a stack, described on page 383, to solve the topological sorting Problem 5.13. If a queue data structure is used instead of a stack, the algorithm still finds a topologically sorted list, as depicted in Figure 7.28. Push and pop operations are replaced by enqueue and dequeue, accordingly. A pseudo code is stated as follows:
Algorithm 7.28. Topological sorting using a queue

Topological sorting(G)
declare a queue Q and output sequence T1∼n . . . . . . . . . . . . . . 1
declare a flag table Uv1 ∼vn where uvx = indeg(vx ) . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

if Q is empty, pick any x ∈ V where ux = 0 . . . . . . . . . . 4


else, x = dequeue(Q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
ux = −1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
ti = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
for all y ∈ arc f rom(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
uy = uy − 1 . . . . . 9
if uy = 0, enqueue(y) to Q . . . . . 10
return T1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

[Figure: six snapshots of the graph, the in-degree table U, and the queue: select v1 & enqueue v3, T = ⟨v1, -, -, -, -, -, -⟩; dequeue v3 & enqueue v2, v5, T = ⟨v1, v3, -, -, -, -, -⟩; dequeue v2 & enqueue v4, T = ⟨v1, v3, v2, -, -, -, -⟩; dequeue v5 & enqueue v6, T = ⟨v1, v3, v2, v5, -, -, -⟩; dequeue v4, T = ⟨v1, v3, v2, v5, v4, -, -⟩; dequeue v6 & enqueue v7, T = ⟨v1, v3, v2, v5, v4, v6, -⟩.]

Figure 7.28: Topological sorting Algorithm 7.28 with a queue illustration

Algorithm 7.28 using a queue produces the sequence ⟨v1, v3, v2, v5, v4, v6, v7⟩, whereas Algorithm 7.21 using a stack produces the sequence ⟨v1, v3, v5, v6, v2, v4, v7⟩. Both sequences are topologically sorted lists of vertices. Both Algorithms 7.28 and 7.21 are correct, as they are essentially the same as Kahn's Algorithm 5.26; the only difference is that they utilize the queue and stack data structures, respectively.
The computational time and space complexities of Algorithm 7.28 using a queue are Θ(|V| + |E|) and Θ(|V|), respectively, which are identical to those of Algorithm 7.21 using a stack.
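A Python sketch of the queue version is given below; it is the stack sketch of Algorithm 7.21 with the stack replaced by collections.deque, which is the only change Algorithm 7.28 makes.

from collections import deque

def topological_sort_queue(adj):
    indeg = {x: 0 for x in adj}
    for x in adj:
        for y in adj[x]:
            indeg[y] += 1
    Q = deque(x for x in adj if indeg[x] == 0)  # all sources
    order = []
    while Q:
        x = Q.popleft()
        order.append(x)
        for y in adj[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                Q.append(y)
    return order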
Algorithm 7.28 can be viewed as a topological order breadth first search, or simply BFSt. Recall how the BFS order was defined in Problem 7.10: it utilizes the shortest path length to partition the vertices. The following BFSt order checking problem definition utilizes the longest path length, considered on page 290, instead of the shortest path length of the BFS checking Problem 7.10.

Problem 7.12. BFSt order checking
Input: A sequence Vo = ⟨v1, v2, · · · , vn⟩ of V and a DAG G = (V, E)
Output: T if ∀vx ∀vy ∈ Vo (x ≤ y → LPL(v1, vx) ≤ LPL(v1, vy)); F otherwise

An l-level structure of a DAG partitions the vertices such that vertices in the same partition have the same longest path length from a given root vertex. A sample l-level structure of a graph is given in Figure 7.29. There are five partitions: V = V0 ∪ V1 ∪ V2 ∪ V3 ∪ V4 and |V| = |V0| + |V1| + |V2| + |V3| + |V4|. The BFSt explores vertices in the lowest level of the structure first. The order of vertices within a level does not matter. Sequences such as ⟨v1, v3, v2, v5, v4, v6, v7⟩ and ⟨v1, v3, v5, v2, v4, v6, v7⟩ are in valid BFSt order, but sequences such as ⟨v1, v2, v3, v4, v5, v6, v7⟩ and ⟨v1, v3, v5, v6, v2, v4, v7⟩ are not. Any BFSt order sequence is in topologically sorted order, since all lower level vertices come before the higher level vertices.

[Figure: (a) an l-level structure of a DAG: level 0 (LPL(v1, vx) = 0), V0 = {v1}; level 1, V1 = {v3}; level 2, V2 = {v2, v5}; level 3, V3 = {v4, v6}; level 4, V4 = {v7}; (b) the Hasse diagram drawn by level; (c) the spanning tree.]

Figure 7.29: Topological sorting with a queue

The Hasse diagram H of a DAG G is a sub-graph of G where all reflexive and transitive arcs are removed (see [146, p. 622]). A sample Hasse diagram is drawn as a level structure in Figure 7.29 (b). Taking the breadth first search on the Hasse diagram is the same as taking the BFSt on the original DAG.

is-BFSt(V′, G) = is-BFS(V′, H)   (7.12)

The corresponding spanning tree produced by Algorithm 7.28 is given in Figure 7.29 (c).
Astute readers might notice that the BFSt can solve the longest path length problem naturally, just as the BFS solves the shortest path length problem in Algorithm 7.27 stated on page 389. It is left for an exercise.

7.4.4 Huffman Code Using a Queue


A queue can be utilized to improve many greedy algorithms. One such problem is the minimum length binary code Problem 4.17, known as the Huffman code, presented earlier on page 195. Recall that the greedy Huffman code Algorithm 4.23 selects the two least frequent items from the candidate set and inserts the sum of these two frequencies back into the candidate set. If the candidate set is represented by a sorted list, finding the two smallest items is easy, but inserting the merged item back into the sorted list of candidates is quite an

expensive operation, as discussed in Problem 2.17 defined on page 61. However, the Huffman code can be computed in linear time if the input frequencies are sorted [175].
Instead of inserting the merged new candidate back into the sorted list, make a queue and insert every merged item into the queue. The next minimum item is then either the smallest remaining item of the sorted list or the item at the front of the queue. Since the merged sums are non-decreasing, the queue is guaranteed to remain sorted. This algorithm is illustrated in Figure 7.30 and a pseudo code is stated as follows:
Algorithm 7.29. Greedy Huffman code with a queue data structure

greedyHuffman(A1∼n )
declare T2n−1×2 and Q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
A1∼n = sort(A1∼n ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n, T [i][1] = ai .f . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [1][2] = T [2][2] = n + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
sf = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
T [n + 1][1] = T [1][1] + T [2][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Q.enqueue(n + 1) . . . . . 7
for i = n + 2 ∼ 2n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
if Q is not empty and (sf > n or T [Q.peek()][1] < T [sf ][1]), . . . . . 9
minidx = Q.dequeue() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
minidx = sf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
sf = sf + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
T [minidx][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
T [i][1] = T [minidx][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
if Q is not empty and (sf > n or T [Q.peek()][1] < T [sf ][1]), . . . . . 16
minidx = Q.dequeue() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
minidx = sf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
sf = sf + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
T [minidx][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
T [i][1] = T [i][1] + T [minidx][1] . . . . . . . . . . . . . . . . . . . . . . . 22
Q.enqueue(i) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
return T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

Line 2 of Algorithm 7.29 can be omitted if we assume that the input sequence is already sorted. Each iteration of the ‘for’ loop takes constant time, since the delete-min, dequeue, and enqueue operations take constant time. Hence, if the input is already sorted, the computational time complexity of Algorithm 7.29 is Θ(n). Otherwise, it is O(n log n), since the input must be sorted first. Algorithm 7.29 uses the queue data structure and, thus, the computational space complexity is O(n). In the worst case, the size of the queue can grow up to n − 1, which is the number of internal nodes.
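The two-container idea can be sketched in a few lines of Python. For brevity, the sketch returns only the total merge cost rather than the full tree table T of Algorithm 7.29; taking a leaf on ties matches the pseudo code's strict comparison.

from collections import deque

def huffman_cost(freqs):
    leaves = deque(sorted(freqs))   # line 2; may be skipped if presorted
    merged = deque()                # merged sums, enqueued in sorted order
    cost = 0

    def take_min():
        # the minimum is the smallest remaining leaf or the queue front
        if merged and (not leaves or merged[0] < leaves[0]):
            return merged.popleft()
        return leaves.popleft()

    while len(leaves) + len(merged) > 1:
        s = take_min() + take_min()
        merged.append(s)            # sums are non-decreasing: stays sorted
        cost += s
    return cost

For the frequencies ⟨2, 3, 5, 8, 13, 15, 18⟩ of Figure 7.30 (a), the successive merged sums are 5, 10, 18, 28, 36, and 64, exactly as in the illustration.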
Careful observation of the output representation, as shown in Figure 7.30 (b), suggests that we do not need extra space for the queue data structure. The top 1 ∼ n rows of the output tree table represent leaf nodes sorted by their frequency. The bottom n + 1 ∼ 2n − 1 rows of the output tree table are internal nodes, and an implicit queue can be embedded in this space. Instead of declaring a queue data structure, a variable qf, which indicates the front of the queue, is declared. At the ith iteration, the index i serves as the

(a) Algorithm 7.29 illustration, showing the state at the beginning of each step:
step 0: sort                              sorted list: 8 18 2 5 15 3 13   queue: (empty)
step 1: enqueue(delmin(SL) + delmin(SL))  sorted list: 2 3 5 8 13 15 18   queue: (empty)
step 2: enqueue(delmin(SL) + dequeue(Q))  sorted list: 5 8 13 15 18       queue: 5
step 3: enqueue(delmin(SL) + dequeue(Q))  sorted list: 8 13 15 18         queue: 10
step 4: enqueue(delmin(SL) + delmin(SL))  sorted list: 13 15 18           queue: 18
step 5: enqueue(delmin(SL) + dequeue(Q))  sorted list: 18                 queue: 18, 28
step 6: enqueue(dequeue(Q) + dequeue(Q))  sorted list: (empty)            queue: 28, 36
step 7: dequeue(Q) & finish               sorted list: (empty)            queue: 64

[Figure: (b) Algorithm 7.30 illustration: six snapshots of the output tree table T (node, parent, frequency) with the sf, qf, and i pointers: initialization & sort; merge 1 & 2 & update 8; merge 3 & 8 & update 9; merge 4 & 9 & update 10; merge 5 & 6 & update 11; merge 7 & 10 & update 12.]

Figure 7.30: Huffman coding with a sorted list



rear of the queue where the merged item is inserted. The dequeue operation is nothing but incrementing qf by one, and the enqueue operation inserts the merged item into the ith row. The following pseudo code utilizes the implicit queue in the output tree representation, as illustrated in Figure 7.30 (b):

Algorithm 7.30. Greedy Huffman code with a queue

greedyHuffman(A1∼n )
declare T2n−1×2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
A1∼n = sort(A1∼n ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n, T [i][1] = ai .f . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [1][2] = T [2][2] = n + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
sf = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
qf = n + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
T [n + 1][1] = T [1][1] + T [2][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
for i = n + 2 ∼ 2n − 1 . . . . . 8
if qf = i or (sf ≤ n and T [sf ][1] < T [qf ][1]) . . . . . . . . . . . 9
T [sf ][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
T [i][1] = T [sf ][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
sf = sf + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
T [qf ][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
T [i][1] = T [qf ][1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
qf = qf + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
if qf = i or (sf ≤ n and T [sf ][1] < T [qf ][1]) . . . . . . . . . . 17
T [sf ][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
T [i][1] = T [i][1] + T [sf ][1] . . . . . . . . . . . . . . . . . . . . . . . . . . 19
sf = sf + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
T [qf ][2] = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
T [i][1] = T [i][1] + T [qf ][1] . . . . . . . . . . . . . . . . . . . . . . . . . . 23
qf = qf + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
return T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

If the input sequence is already sorted and line 2 is removed, the computational time complexity of Algorithm 7.30 is Θ(n). Otherwise, it is O(n log n), since the input must be sorted first. As opposed to Algorithm 7.29, Algorithm 7.30 does not require any extra space, since it does not use an explicit queue data structure.
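The implicit-queue trick can be sketched in Python as follows. The sketch assumes the frequencies arrive sorted, keeps leaves in positions 0 ∼ n − 1 and internal nodes in positions n ∼ 2n − 2 of two flat arrays (0-based, unlike the 1-based table T), and returns the parent links that define the Huffman tree.

def huffman_implicit(freqs):
    n = len(freqs)                     # freqs must be sorted ascending
    freq = list(freqs) + [0] * (n - 1) # rows n..2n-2 hold internal nodes
    parent = [None] * (2 * n - 1)
    sf, qf = 0, n                      # next leaf, front of implicit queue
    for i in range(n, 2 * n - 1):      # i is the rear of the queue
        total = 0
        for _ in range(2):             # pick the two minima
            if sf < n and (qf == i or freq[sf] <= freq[qf]):
                idx, sf = sf, sf + 1   # take a leaf (also on ties)
            else:
                idx, qf = qf, qf + 1   # dequeue = advance qf
            parent[idx] = i
            total += freq[idx]
        freq[i] = total                # enqueue = write into row i
    return parent, freq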

7.5 Circular Array in Strong Inductive Programming


While only a variable was necessary to store the solution for the (n − 1)th solution in
the inductive programming in Chapter 2, the entire table was used to store all solutions
for (1 ∼ n − 1)th sub-problems in the strong inductive programming in Chapter 5. Except
for complete recursion, however, many higher order linear recursions have a periodic form
and, thus, do not require a full table. Most of the problems that appeared in Chapter 5 are
repeated in this section to save the computational space as much as possible. The template


Figure 7.31: Circular process of strong inductive programming.

for strong inductive programming algorithms with a circular array begins by filling the periodic array with initial basis values and then advances toward the nth value in the circular array. As depicted in Figure 7.31, the guiding idea of this section is that hiring and rotating k people to carry an object of length k over n steps is better than hiring n people.
There are four important steps in designing strong inductive programming with a circular array. First, determine s, the size of the necessary table of solutions, and declare a table T0∼s−1. Second, compute the initial values of the table T0∼s−1 by the identical method used in plain strong inductive programming. Next, compute the remaining solutions for s ≤ i ≤ n inductively, storing each in the cell T[i % s]. Finally, return the final solution, which is stored in the table cell T[n % s]. The following generic template applies to most problems presented in this section and its exercises.
Strong Inductive Programming With a Circular Array Template
StrIndProgwCirArray(n, · · · )
s = ???  . . . . . determine the size of the table
Declare a table T0∼s−1  . . . . . declare an empty table
for i = 0 ∼ s − 1
T[i] = basis  . . . . . basis cases, P(0 ∼ s − 1)
for i = s ∼ n
T[i % s] = f(T[0 ∼ s − 1])  . . . . . strong inductive rolling step
return T[n % s]
This space efficient, strong inductive programming algorithm design technique using a circular array is demonstrated on the Kibonacci and postage stamp equality minimization problems. Other problems that appeared in Chapter 5 are left for exercises. Utilizing a circular array in the memoization technique is also presented, and a bouncing array is introduced for the divide and conquer memoization technique.

7.5.1 Kibonacci
Consider the nth Kibonacci number Problem 5.9, or simply KB2, defined on page 248. Finding the nth Kibonacci number does not require all sub-solutions from the first to the (n − 1)th sub-problems, but only the solutions for the KB2(n − k, k) and KB2(n − 1, k) sub-problems, as indicated in the recurrence relation in eqn (5.32). The strong inductive programming Algorithm 5.19, which utilizes a full table of size n, can be improved in terms of space by using a queue.
Here is an algorithm which utilizes a built-in queue data structure. Since solutions up to the (n − k)th one are necessary, the maximum size of the queue is k. Assuming that the

built-in queue data structure supports only enqueue and dequeue operations, an extra variable is necessary to store the (n − 1)th solution. This algorithm is illustrated in Figure 7.32 (a) and a pseudo code is stated as follows:

Algorithm 7.31. Kibonacci with a built-in queue data structure

KB2(n, k)
Create a queue Q of maximum size, k . . . . . . . . . . . . . 1
if n = 0, return 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Q.enqueue(0) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = 1 ∼ k − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Q.enqueue(1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
p = 1 ............................................. 6
for i = k to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
p = p + Q.dequeue() . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Q.enqueue(p) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
return p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

If a circular array is used, any element in the circular array is accessible. Hence, the extra variable to store the (n − 1)th solution is not necessary. Unlike the circular array implementation of a queue, where both front and rear pointers are required, only the current pointer is necessary here, as indicated in Figure 7.32. Notice that the array is always full except during initialization. Values on the left and right sides of a backslash in Figures 7.32 (b) and (e) are the old values and the updated new values, respectively. Values in highlighted cells are the ones necessary to compute the next Kibonacci number. An algorithm based on the strong inductive programming paradigm using a circular array can be stated as follows:

Algorithm 7.32. Kibonacci with a circular array

KB2(n, k)
Declare a table Q0∼k−1 of size k . . . . . . . . . . . . . . . . . . 1
Q[0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ k − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Q[i] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = k ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Q[i % k] = Q[(i − 1) % k] + Q[(i − k) % k] . . . . . 6
return Q[n % k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Lines 1 ∼ 4 are the initialization steps, which are identical to those of the plain strong inductive programming Algorithm 5.19. The mod function is used to locate the ith Kibonacci number in lines 5 and 6. The computational time complexity of Algorithm 7.32 is the same as that of Algorithm 5.19, Θ(n). However, the extra space complexity is only Θ(k), whereas it is Θ(n) in Algorithm 5.19.
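A Python rendering of Algorithm 7.32 is given below as a sketch; kb2(12, 8) = 5 and kb2(16, 4) = 69 reproduce the values traced in Figure 7.32.

def kb2(n, k):
    Q = [1] * k          # basis: KB2(1 .. k-1, k) = 1
    Q[0] = 0             # basis: KB2(0, k) = 0
    for i in range(k, n + 1):
        Q[i % k] = Q[(i - 1) % k] + Q[(i - k) % k]
    return Q[n % k]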
Algorithm 7.32 can be stated in recursion as follows:

[Figure: (a) Algorithm 7.31 illustration to find KB2(12, 8) = 5; (b) circular arrays for KB2(7, 8) ∼ KB2(10, 8); (c) rolling circular array illustration for KB2(15, 4) = 50; (d) Algorithm 7.32 illustration to find KB2(16, 4) = 69; (e) circular arrays for KB2(9, 4) ∼ KB2(12, 4).]

Figure 7.32: Kibonacci (KB2) with a circular array.



Algorithm 7.33. Recursive Kibonacci with a circular array

Declare a global table Q0∼k−1 of size k


KB2(n, k)
if n < k, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Q[0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ k − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Q[i] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
KB2(n − 1, k) . . . . . 6
Q[n % k] = Q[(n − 1) % k] + Q[(n − k) % k] . . . . 7
return Q[n % k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

The computational time and space complexities of recursive Algorithm 7.33 are the same
as those of Algorithm 7.32.
If k = 2, both Algorithms 7.32 and 7.31 solve the Fibonacci Problem 5.8, i.e., FIB(n)
= KB2(n, 2). Since the size of the necessary table is only two, one can use two variables
instead of an array, as stated in Algorithm 7.34.

Algorithm 7.34. Iterative Fibonacci (Str. Ind. + Cir. Arr.)

Fib(n)
if n = 0, return 0 . . . . . 1
q0 = 0 . . . . . 2
q1 = 1 . . . . . 3
for i = 2 to n . . . . . 4
if i is even, q0 = q0 + q1 . . . . . 5
else, q1 = q0 + q1 . . . . . 6
if n is even, return q0 . . . . . 7
else, return q1 . . . . . 8

Algorithm 7.35. Recursive Fibonacci (Memoization + Cir. Arr.)

Declare a global table, Q0∼1

Fib(n)
if n = 0, return 0 . . . . . 1
if n = 1, . . . . . 2
Q[0] = 0 . . . . . 3
Q[1] = 1 . . . . . 4
else, . . . . . 5
Fib(n − 1) . . . . . 6
Q[n % 2] = Q[0] + Q[1] . . . . . 7
return Q[n % 2] . . . . . 8

(a) Algorithm 7.34 illustration for FIB(1) ∼ FIB(14):
i  : 1 2 3 4 5 6  7  8  9  10 11 12  13  14
q0 : 0 1 1 3 3 8  8  21 21 55 55 144 144 377
q1 : 1 1 2 2 5 5 13 13 34 34 89 89  233 233

[Figure: (b) rolling circular array illustration for FIB(0) ∼ FIB(12).]

Figure 7.33: Fibonacci with a circular array



This simple Algorithm 7.34 can be viewed as strong inductive programming with a queue or circular array; lines 5 and 6 correspond to the dequeue and enqueue operations. The recursive version is given in Algorithm 7.35, which can be viewed as the memoization method with a circular array. Both Algorithms 7.34 and 7.35 are illustrated in Figure 7.33.
Although the computational time complexity of Algorithm 7.34 is the same as that of the plain strong inductive programming Algorithm 5.19 presented in Chapter 5, Algorithm 7.34 uses only two variables and does not require a full table. Hence, the computational time complexities of both Algorithms 7.34 and 7.35 are the same, Θ(n), and their computational space complexities are also the same, O(1).

7.5.2 Postage Stamp Equality Minimization


Consider the postage stamp equality minimization Problem 4.2 defined on page 159. Recall Algorithm 5.3, based on the strong inductive programming paradigm, stated on page 223. It used a table of size n. However, only a smaller table is necessary to compute PSEmin(n, A1∼k). If there are three kinds of stamps, A1∼k = ⟨1, 3, 8⟩, only the solutions for PSEmin(n − 1, ⟨1, 3, 8⟩), PSEmin(n − 3, ⟨1, 3, 8⟩), and PSEmin(n − 8, ⟨1, 3, 8⟩) are required to compute PSEmin(n, ⟨1, 3, 8⟩). Hence, a circular array of size max(A1∼k) suffices, as illustrated in Figure 7.34. A pseudo code for a strong inductive programming algorithm using a circular array is stated as follows:

Algorithm 7.36. Postage stamp equality minimization with a circular array.

PSEminq(n, A1∼k )
m = max(A1,··· ,k ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Declare a table Q0∼m−1 of size m initially ∞ . . . . . . . . . . . . 2
Q[0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = 1 ∼ m − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for j = 1 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if i ≥ aj and Q[i − aj ] + 1 < Q[i] . . . . . . . . . . . . . . . . . . . . 6
Q[i] = Q[i − aj ] + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
for i = m ∼ n . . . . . 8
Q[i % m] = Q[i % m] + 1 . . . . . 9
for j = 1 ∼ k . . . . . 10
if Q[(i − aj) % m] + 1 < Q[i % m] . . . . . 11
Q[i % m] = Q[(i − aj) % m] + 1 . . . . . 12
return Q[n % m] . . . . . 13

Algorithm 7.36 starts by determining the size of the necessary table to be declared. Lines 3 ∼ 7 correspond to the initial table setting, which is identical to that part of the plain strong inductive programming Algorithm 5.3. The circular array of PSEmin(7, ⟨8, 3, 1⟩) in Figure 7.34 (a) and the 7th row in Figure 7.34 (b) show the finished initialization of the circular array. The rest of the values are computed by lines 8 ∼ 12 of Algorithm 7.36. When line 9 is reached, the cell Q[i % m] still holds the solution for i − m; incrementing it accounts for using one stamp of the largest denomination m, and lines 10 ∼ 12 then try to improve it with the other denominations. PSEmin(7, ⟨8, 3, 1⟩) ∼ PSEmin(12, ⟨8, 3, 1⟩) are illustrated in Figure 7.34 (a) and PSEmin(7, ⟨8, 3, 1⟩) ∼ PSEmin(14, ⟨8, 3, 1⟩) are illustrated starting from the 8th row in Figure 7.34 (b). While the computational time complexity of Algorithm 7.36 is Θ(kn), the space complexity is Θ(max(A1∼k)) because of the circular array.
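A Python sketch of the same space saving idea follows. Rather than incrementing the stale cell as in line 9 of Algorithm 7.36, it recomputes each Q[i % m] from scratch into a temporary variable; this is an equivalent way of handling the fact that the cell being written still holds the solution for i − m.

import math

def pse_min_q(n, stamps):
    m = max(stamps)
    Q = [math.inf] * m
    Q[0] = 0
    for i in range(1, n + 1):
        best = math.inf
        for a in stamps:
            if i >= a and Q[(i - a) % m] + 1 < best:
                best = Q[(i - a) % m] + 1
        Q[i % m] = best      # overwrites the solution for i - m
    return Q[n % m]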
A pseudo code for the recursive version of Algorithm 7.36 using the circular array is
stated as follows:

[Figure: (a) circular array illustration for PSEmin(0, ⟨8, 3, 1⟩) ∼ PSEmin(12, ⟨8, 3, 1⟩).]
i\T 0 1 2 3 4 5 6 7 PSEmin(i)
0 0 ∞ ∞ ∞ ∞ ∞ ∞ ∞ 0
1 0 1 ∞ ∞ ∞ ∞ ∞ ∞ 1
2 0 1 2 ∞ ∞ ∞ ∞ ∞ 2
3 0 1 2 1 ∞ ∞ ∞ ∞ 1
4 0 1 2 1 2 ∞ ∞ ∞ 2
5 0 1 2 1 2 3 ∞ ∞ 3
6 0 1 2 1 2 3 2 ∞ 2
7 0 1 2 1 2 3 2 3 3
8 1 1 2 1 2 3 2 3 1
9 1 2 2 1 2 3 2 3 2
10 1 2 3 1 2 3 2 3 3
11 1 2 3 2 2 3 2 3 2
12 1 2 3 2 3 3 2 3 3
13 1 2 3 2 3 4 2 3 4
14 1 2 3 2 3 4 3 3 3
(b) Algorithm 7.36 illustration for PSEmin(0, ⟨8, 3, 1⟩) ∼ PSEmin(14, ⟨8, 3, 1⟩)

Figure 7.34: Postage stamp equality minimization problem using a circular array.

Algorithm 7.37. Postage stamp equality minimization by recursion and circular array

Let m = max(A1,··· ,k ) be global.


Declare a global table T0∼m−1 initially ∞
PSEmin(n, A1∼k )
if n = 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T [0] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
if n > 0, . . . . . 3
PSEmin(n − 1, A1∼k) . . . . . 4
if n ≥ m, T[n % m] = T[n % m] + 1 . . . . . 5
for i = 1 ∼ k . . . . . 6
if n ≥ ai and T[(n − ai) % m] + 1 < T[n % m] . . . . . 7
T[n % m] = T[(n − ai) % m] + 1 . . . . . 8
return T[n % m] . . . . . 9

The computational time and space complexities of Algorithm 7.37 are the same as those
of Algorithm 7.36.

7.5.3 Bouncing Array

[Figure: (a) a bouncing array of two cells; (b) the chains of (Fm, Fm+1) pairs visited when computing F31 ∼ F38, e.g., (F0, F1) → (F1, F2) → (F3, F4) → (F7, F8) → (F15, F16) → (F31, F32).]

Figure 7.35: Jumping queue.

Readers may skip this sub-section for now if it appears too complicated; the concept becomes natural and easy once readers reach Chapter 10. The point here is that the space saving idea can be applied to the memoization technique as well.
The array is called a circular array in strong inductive programming and memoization since the array advances toward n by rolling. Recall the divide and conquer

memoization Algorithm 5.25 described on page 254. It utilized a table of size n, but only a table of two cells is necessary to compute the nth Fibonacci number. This array shall be called either a bouncing array or a jumping queue, as depicted in Figure 7.35 (a).

Algorithm 7.38. FIB with memo + D&C + bouncing array

Declare a global table T0∼1


Fib(n)
if n = 0, T [0] = 0, T [1] = 1, and return T [0] . . . . 1
Fib(d n2 e − 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if n is even, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
t2 = T [0] + T [1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
T [0] = T [1] × T [1] + 2 × T [0] × T [1] . . . . . . . . . . . . . 5
T [1] = T [1] × T [1] + t2 × t2 . . . . . . . . . . . . . . . . . . . . . 6
if n is odd, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
t2 = T [0] × T [0] + T [1] × T [1] . . . . . . . . . . . . . . . . . . . 8
T [1] = T [1] × T [1] + 2 × T [1] × T [0] . . . . . . . . . . . . . 9
T [0] = t2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
return T [0] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

When computing Fib(n) = Fn, the bouncing array T0∼1 contains T[0] = F⌈n/2⌉−1 and T[1] = F⌈n/2⌉ upon return from the recursive call. The bouncing array is then updated to T[0] = Fn and T[1] = Fn+1 and returned to the invoking procedure. Computing Fn+1 in line 6 of Algorithm 7.38 requires the value of Fn/2+1, which is T[0] + T[1]; it is saved in a temporary variable, t2, in line 4 and used in line 6. Algorithm 7.38 is arguably the best algorithm to compute the nth Fibonacci number, as it takes Θ(log n) time (counting arithmetic operations) and requires only constant extra space.
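A Python transcription of Algorithm 7.38 may clarify how the two cells are overwritten on the way back up the recursion; the inner helper rec and the function name fib are artifacts of this sketch, and Python's arbitrary-precision integers hide the cost of the large multiplications.

def fib(n):
    # nth Fibonacci number by halving with a two-cell bouncing array.
    T = [0, 1]

    def rec(m):
        if m == 0:
            T[0], T[1] = 0, 1                      # base case: (F0, F1)
            return
        rec((m + 1) // 2 - 1)                      # ceil(m/2) - 1
        if m % 2 == 0:                             # m = 2h and T = (F_{h-1}, F_h)
            t2 = T[0] + T[1]                       # F_{h+1}
            T[0] = T[1] * T[1] + 2 * T[0] * T[1]   # F_{2h}
            T[1] = T[1] * T[1] + t2 * t2           # F_{2h+1}
        else:                                      # m = 2h+1 and T = (F_h, F_{h+1})
            t2 = T[0] * T[0] + T[1] * T[1]         # F_{2h+1}
            T[1] = T[1] * T[1] + 2 * T[1] * T[0]   # F_{2h+2}
            T[0] = t2

    rec(n)
    return T[0]

print([fib(i) for i in range(10)])     # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]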

7.6 Cylindrical Two Dimensional Array

(a) Top-down rolling cylinder (b) Left-right rolling cylinder

Figure 7.36: Cylindrical two dimensional array.



As a circular array (queue) may save space for many of the strong inductive programming algorithms that appeared in Chapter 5, a cylindrical two-dimensional queue data structure, or simply a cylindrical array, as depicted in Figure 7.36, may save space for many of the two-dimensional strong inductive programming algorithms that appeared in Chapter 6. In this section, improving strong inductive programming that involves a two-dimensional rectangular table by utilizing a cylindrical array is presented. A cylindrical array can roll top-down or left-right on the rectangular table, as shown in Figures 7.36 (a) and (b), respectively.
Computational problems for which strong inductive programming with a two-dimensional rectangular table can be devised, as in Chapter 6, can be categorized into two groups by the type of the top-down rolling cylinder. One group of problems utilizes a one-dimensional array of size n, and the other utilizes a (2 × n) array. Both techniques reduce the computational space complexity from Θ(kn) to Θ(n). The 0-1 knapsack Problem 4.4 and the ways of stamping n amount Problem 6.2 are introduced as representatives of these groups. The pseudo codes for these problems shall serve as templates for the rest of the problems, which are left as exercises.

7.6.1 0-1 Knapsack


Consider the two-dimensional strong inductive programming Algorithm 6.6 stated on page 302 for the 0-1 knapsack Problem 4.4. Algorithm 6.6 requires a full table of size Θ(kn), as shown in Figure 7.37 (a). The recurrence relation in eqn (6.6) suggests that only the previous (i − 1)th row is required to compute the ith row. Hence, a table of only two rows (even and odd) instead of k rows can be used by rolling it down, as depicted in Figure 7.37 (b). A pseudo code is stated as follows:

Algorithm 7.39. Dynamic 01-knapsack II

dynamic 01-knapsack2(A1∼k , n)
  Declare a (2 × (n + 1)) table T ................................ 1
  for j = 0 ∼ n .................................................. 2
     if j − w1 < 0, T[1][j] = 0 .................................. 3
     else T[1][j] = p1 ........................................... 4
  for i = 2 ∼ k .................................................. 5
     for j = 0 ∼ n ............................................... 6
        if j − wi < 0, T[i % 2][j] = T[(i − 1) % 2][j] ........... 7
        else, T[i % 2][j] = max(T[(i − 1) % 2][j], T[(i − 1) % 2][j − wi ] + pi ) ... 8
  return T[k % 2][n] ............................................. 9

Algorithm 7.39 is essentially the same as Algorithm 6.6, except that the space complexity is Θ(n) instead of Θ(kn). The computational time complexity of Algorithm 7.39 is Θ(kn).
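A Python sketch of Algorithm 7.39 follows; the function name knapsack_two_rows and the representation of the input as a list of (profit, weight) pairs are conventions of this sketch.

def knapsack_two_rows(items, n):
    # 0-1 knapsack with capacity n; only two rows are kept and rolled top-down.
    k = len(items)
    T = [[0] * (n + 1) for _ in range(2)]
    p1, w1 = items[0]
    for j in range(n + 1):                          # row for the first item
        T[1][j] = p1 if j >= w1 else 0
    for i in range(2, k + 1):                       # remaining items
        pi, wi = items[i - 1]
        for j in range(n + 1):
            if j < wi:
                T[i % 2][j] = T[(i - 1) % 2][j]
            else:
                T[i % 2][j] = max(T[(i - 1) % 2][j],
                                  T[(i - 1) % 2][j - wi] + pi)
    return T[k % 2][n]

items = [(1, 1), (4, 3), (6, 5), (8, 7)]            # the example of Figure 7.37 (a)
print(knapsack_two_rows(items, 12))                 # -> 14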
A cylinder rolling from left to right can also be utilized. To compute the jth column, only the portion of the table in columns (j − m) ∼ (j − 1) is necessary, where m is the maximum item weight: m = max(W1∼k ). Figures 7.37 (c) and (d) show the cases m = 5 and 7, respectively. The first columns, up to m, are computed initially in the same way as the regular strong inductive programming Algorithm 6.6, and then the rest of the columns are computed in sequence by rolling the cylinder from left to right. A pseudo code is stated as follows:

 k  A1∼k               (pk , wk )  n: 0  1  2  3  4  5  6  7  8  9 10 11 12
 1  {(1, 1)}           (1, 1)         0  1  1  1  1  1  1  1  1  1  1  1  1
 2  {(1, 1), (4, 3)}   (4, 3)         0  1  1  4  5  5  5  5  5  5  5  5  5
 3  A1∼2 ∪ {(6, 5)}    (6, 5)         0  1  1  4  5  6  7  7 10 11 11 11 11
 4  A1∼3 ∪ {(8, 7)}    (8, 7)         0  1  1  4  5  6  7  8 10 11 12 13 14

(a) A full (k × n) table

 k\n    0    1    2    3    4    5    6    7     8     9    10    11    12
 k=1    0    1    1    1    1    1    1    1     1     1     1     1     1
 k=2    0    1    1    4    5    5    5    5     5     5     5     5     5
 k=3    0  1/1  1/1  1/4  1/5  1/6  1/7  1/7  1/10  1/11  1/11  1/11  1/11
 k=4    0  1/1  1/1  4/4  5/5  5/6  5/7  5/8  5/10  5/11  5/12  5/13  5/14

(b) Rolling an even and odd two-row cylinder from top to down: Algorithm 7.39 illustration. A cell written x/y shows the old value x being overwritten by the new value y; the k = 3 row overwrites the k = 1 row, and the k = 4 row overwrites the k = 2 row.

 n%5     0  1  2  3  4        0    1  2  3  4            0   1     2  3  4
 ak\n    0  1  2  3  4      0/5    1  2  3  4           10  11  7/12  8  9
 a1      0  1  1  1  1  →   0/1    1  1  1  1  → ·· →    1   1   1/1  1  1
 a2      0  1  1  4  5      0/5    1  1  4  5            5   5   5/5  5  5
 a3      0  1  1  4  5      0/6    1  1  4  5           11  11  7/11 10 11

(c) Rolling a cylinder from left to right: Algorithm 7.40 illustration with A1∼3 = ⟨(1, 1), (4, 3), (6, 5)⟩, n = 12, and m = max(W1∼3 ) = 5

 n%7     0  1  2  3  4  5  6            7  8  9 10 11  5/12  6
 a1      0  1  1  1  1  1  1            1  1  1  1  1   1/1  1
 a2      0  1  1  4  5  5  5  → ··· →   5  5  5  5  5   5/5  5
 a3      0  1  1  4  5  6  7            7 10 11 11 11  6/11  7
 a4      0  1  1  4  5  6  7            8 10 11 12 13  6/14  7

(d) Rolling a cylinder from left to right: Algorithm 7.40 illustration with A1∼4 = ⟨(1, 1), (4, 3), (6, 5), (8, 7)⟩, n = 12, and m = max(W1∼4 ) = 7

Figure 7.37: Strong inductive programming for the 0-1 knapsack problem using cylindrical arrays.

Algorithm 7.40. Dynamic 01-knapsack III

dynamic 01-knapsack3(A1∼k , n)
  m = max(W1∼k ) ................................................. 1
  Declare a (k × (m + 1)) table T ................................ 2
  for j = 0 ∼ m .................................................. 3
     if j < w1 , T[1][j] = 0 ..................................... 4
     else T[1][j] = p1 ........................................... 5
  for i = 2 ∼ k .................................................. 6
     for j = 0 ∼ m ............................................... 7
        if j − wi < 0, T[i][j] = T[i − 1][j] ..................... 8
        else T[i][j] = max(T[i − 1][j], T[i − 1][j − wi ] + pi ) . 9
  for j = m + 1 ∼ n ............................................. 10
     T[1][j % (m + 1)] = p1 (j > m ≥ w1 always holds here) ...... 11
     for i = 2 ∼ k .............................................. 12
        T[i][j % (m + 1)] = max(T[i − 1][j % (m + 1)], T[i − 1][(j − wi ) % (m + 1)] + pi ) ... 13
  return T[k][n % (m + 1)] ...................................... 14

Algorithm 7.40 first determines the width of the cylinder. Note that the cylinder has m + 1 columns, one more than the maximum weight; with only m columns, the cell holding T[i − 1][j − wi ] would already have been overwritten by column j whenever wi = m. Lines 3 ∼ 9 fill the initial table values, which are identical to those in the ordinary two-dimensional strong inductive programming Algorithm 6.6. The remaining lines compute the values of each remaining column from left to right by rolling the table.
The computational time complexity of Algorithm 7.40 is the same as that of Algorithm 6.6: Θ(kn). The computational space complexity of Algorithm 7.40 is, however, Θ(k max(W1∼k )). Algorithm 7.40 should be used if max(W1∼k ) < 2n/k; Algorithm 7.39 should be used otherwise.
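The following Python sketch mirrors Algorithm 7.40, including the widened cylinder of m + 1 columns discussed above; the name knapsack_lr_cylinder is illustrative only.

def knapsack_lr_cylinder(items, n):
    # 0-1 knapsack rolling a k x (m+1) cylinder left to right, m = max weight.
    k = len(items)
    W = max(w for _, w in items) + 1                # cylinder width m + 1
    T = [[0] * W for _ in range(k)]
    p1, w1 = items[0]
    for j in range(W):                              # columns 0 ~ m as in Algorithm 6.6
        T[0][j] = p1 if j >= w1 else 0
    for i in range(1, k):
        pi, wi = items[i]
        for j in range(W):
            if j < wi:
                T[i][j] = T[i - 1][j]
            else:
                T[i][j] = max(T[i - 1][j], T[i - 1][j - wi] + pi)
    for j in range(W, n + 1):                       # roll for columns m+1 ~ n
        T[0][j % W] = p1                            # j > m >= w1 always holds
        for i in range(1, k):
            pi, wi = items[i]
            T[i][j % W] = max(T[i - 1][j % W], T[i - 1][(j - wi) % W] + pi)
    return T[k - 1][n % W]

items = [(1, 1), (4, 3), (6, 5), (8, 7)]
print(knapsack_lr_cylinder(items, 12))              # -> 14, matching Figure 7.37 (a)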

7.6.2 Ways of Stamping


Consider the two-dimensional strong inductive programming Algorithm 6.2 stated on page 297 for the ways of stamping n amount Problem 6.2. Algorithm 6.2 required a full table of size Θ(kn), as illustrated in Figure 7.38 (a). In order to compute the (i, j)th cell value, only the (i − 1, j)th cell in the previous (i − 1)th row and the (i, j − ai )th cell in the same row are required. Hence, a one-dimensional table of length n + 1 suffices. Just before row i is processed, the jth cell still contains the (i − 1)th row value; and if the values are computed from left to right, the (i, j − ai )th value is already computed before computing the (i, j)th value. This strong inductive programming algorithm with a top-down rolling one-dimensional table is illustrated in Figure 7.38 (b), and a pseudo code is stated as follows:
Algorithm 7.41. Ways of stamping II

ways of stamping2(n, A1∼k )
  Declare a table T0∼n ............................ 1
  T[0] = 1 ........................................ 2
  for j = 1 ∼ n ................................... 3
     if j − a1 < 0, T[j] = 0 ...................... 4
     else T[j] = T[j − a1 ] ....................... 5
  for i = 2 ∼ k ................................... 6
     for j = ai ∼ n ............................... 7
        T[j] = T[j] + T[j − ai ] .................. 8
  return T[n] ..................................... 9

Algorithm 7.41 is essentially the same as Algorithm 6.2, except that the space complexity
is Θ(n) instead of Θ(kn).
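In Python, Algorithm 7.41 becomes the familiar one-row coin-change style loop; the function name follows the pseudocode.

def ways_of_stamping2(n, A):
    # Number of ways to stamp exactly n using the stamp kinds in A.
    T = [0] * (n + 1)
    T[0] = 1
    for j in range(1, n + 1):          # row for the first stamp kind
        T[j] = T[j - A[0]] if j >= A[0] else 0
    for a in A[1:]:                    # roll down: add one stamp kind at a time
        for j in range(a, n + 1):
            T[j] += T[j - a]
    return T[n]

print(ways_of_stamping2(12, [1, 3, 5, 7]))   # -> 12, matching Figure 7.38 (a)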

 k  A1∼k          ak\n    0  1  2  3  4  5  6  7  8  9 10 11 12
 1  {1}           a1 = 1  1  1  1  1  1  1  1  1  1  1  1  1  1
 2  {1, 3}        a2 = 3  1  1  1  2  2  2  3  3  3  4  4  4  5
 3  {1, 3, 5}     a3 = 5  1  1  1  2  2  3  4  4  5  6  7  8  9
 4  {1, 3, 5, 7}  a4 = 7  1  1  1  2  2  3  4  5  6  7  9 10 12

(a) A full (k × n) table

 ak\n     0  1  2    3    4    5    6    7    8    9   10   11   12
 a1 = 1   1  1  1    1    1    1    1    1    1    1    1    1    1
 a2 = 3   1  1  1  1/2  1/2  1/2  1/3  1/3  1/3  1/4  1/4  1/4  1/5
 a3 = 5   1  1  1    2    2  2/3  3/4  3/4  3/5  4/6  4/7  4/8  5/9
 a4 = 7   1  1  1    2    2    3    4  4/5  5/6  6/7  7/9 8/10 9/12

(b) Rolling a cylinder from top to down: Algorithm 7.41 illustration (x/y denotes the old value x being overwritten by the new value y)

 n%5      0  1  2  3  4        0   1  2  3  4           0  1    2  3  4
 ak\n     0  1  2  3  4      0/5   1  2  3  4          10 11 7/12  8  9
 a1 = 1   1  1  1  1  1  →   1/1   1  1  1  1  → ·· →   1  1  1/1  1  1
 a2 = 3   1  1  1  2  2      1/2   1  1  2  2           4  4  3/5  3  4
 a3 = 5   1  1  1  2  2      1/3   1  1  2  2           7  8  4/9  5  6

(c) Rolling a cylinder from left to right: Algorithm 7.42 illustration with A1∼3 = ⟨1, 3, 5⟩, n = 12, and m = max(A1∼3 ) = 5

Figure 7.38: Ways of stamping problem using cylindrical arrays.

An algorithm utilizing a cylinder rolling from left to right, similar to Algorithm 7.40, can also be devised. Figure 7.38 (c) demonstrates this algorithm, where A1∼3 = ⟨1, 3, 5⟩, n = 12, and m = max(A1∼3 ) = 5. A pseudo code is stated as follows:

Algorithm 7.42. Ways of stamping III

ways of stamping3(n, A1∼k )
  m = max(A1∼k ) ................................................. 1
  Declare a (k × m) table T ...................................... 2
  for j = 0 ∼ m − 1 .............................................. 3
     if j % a1 = 0, T[1][j] = 1 .................................. 4
     else, T[1][j] = 0 ........................................... 5
  for i = 2 ∼ k .................................................. 6
     T[i][0] = 1 ................................................. 7
     for j = 1 ∼ m − 1 ........................................... 8
        if j − ai ≥ 0, T[i][j] = T[i − 1][j] + T[i][j − ai ] ..... 9
        else, T[i][j] = T[i − 1][j] ............................. 10
  for j = m ∼ n ................................................. 11
     if j % a1 = 0, T[1][j % m] = 1 ............................. 12
     else, T[1][j % m] = 0 ...................................... 13
     for i = 2 ∼ k .............................................. 14
        T[i][j % m] = T[i − 1][j % m] + T[i][(j − ai ) % m] ..... 15
  return T[k][n % m] ............................................ 16

The computational time complexity of Algorithm 7.42 is the same as that of Algorithm 7.41 and Algorithm 6.2: Θ(kn). The computational space complexity of Algorithm 7.42 is, however, Θ(k max(A1∼k )). Algorithm 7.42 should be used if max(A1∼k ) < n/k; Algorithm 7.41 should be used otherwise.
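A Python sketch of Algorithm 7.42 follows; note that the rolling update reads the stale cell for column j − ai before that cell is overwritten, which is exactly what makes the m-column cylinder sufficient for this unbounded recurrence.

def ways_of_stamping3(n, A):
    # Ways of stamping n with a (k x m) cylinder rolled left to right, m = max(A).
    k, m = len(A), max(A)
    T = [[0] * m for _ in range(k)]
    for j in range(m):                       # initial columns 0 ~ m-1, first row
        T[0][j] = 1 if j % A[0] == 0 else 0
    for i in range(1, k):                    # initial columns, remaining rows
        T[i][0] = 1
        for j in range(1, m):
            T[i][j] = T[i - 1][j] + (T[i][j - A[i]] if j >= A[i] else 0)
    for j in range(m, n + 1):                # roll the cylinder for columns m ~ n
        T[0][j % m] = 1 if j % A[0] == 0 else 0
        for i in range(1, k):
            T[i][j % m] = T[i - 1][j % m] + T[i][(j - A[i]) % m]
    return T[k - 1][n % m]

print(ways_of_stamping3(12, [1, 3, 5]))      # -> 9, matching Figure 7.38 (c)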

7.7 Rolling Cylinder on Triangle Tables

(a) Southwest rolling (b) Southeast rolling (c) Right aligned rolling

Figure 7.39: Rolling cylinders on triangular tables.

Many problems on combinatorics in Chapter 6 involve triangle tables. Instead of filling up all cells in the triangle, a cylinder can be utilized to save space. It can be rolled in many different ways, some of which are shown in Figure 7.39. Finding the indices (i, j), i.e., the ith row and jth column in a table, with regard to (n, k) can be perplexing when rolling a cylinder on an implicit triangle. The templates with indices in Table 6.32 on page 354 may be useful.

7.7.1 Binomial Coefficient

Consider Pascal’s triangle in Figure 6.18 (a), which contains the binomial coefficient
values. The left-rotated table in Figure 6.18 (c) forms a rectangular table and a cylinder
can be rolled from top to bottom as discussed in the previous section. This rolling can be
viewed as a southwest rolling on the triangle, as depicted in Figure 7.40. A pseudo code for
the southwest rolling to find C(n, k) is stated as follows:

The left-rotated Pascal's triangle forms an (n − k + 1) × (k + 1) rectangle whose (i, j) cell holds C(i + j, j); a one-dimensional cylinder of k + 1 cells rolls down its n − k + 1 rows. For n = 7 and k = 3:

  1   1   1   1
  1   2   3   4
  1   3   6  10
  1   4  10  20
  1   5  15  35

Figure 7.40: Pascal's triangle with a rolling cylinder.

Algorithm 7.43. Dynamic binomial coefficient with a one-dimensional cylinder

nchoosek(n, k)
if k > n − k, k = n - k .........................1
Declare a table T0∼k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = 0 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 1 to n − k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
T [j] = T [j − 1] + T [j] . . . . . . . . . . . . . . . . . . . . . . . . 7
return T [k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Since Pascal's triangle is symmetric, C(n, k) = C(n, n − k), line 1 is added to reduce the cylinder size. While the computational time complexity of Algorithm 7.43 is (k + 1)(n − k + 1) = Θ(min(kn, (n − k)n)), the space complexity is Θ(min(k, n − k)).
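A direct Python transcription of Algorithm 7.43 is shown below; the loop invariant noted in the comment is the reason the final cell holds C(n, k).

def nchoosek(n, k):
    # Binomial coefficient C(n, k) with a cylinder of min(k, n-k) + 1 cells.
    if k > n - k:
        k = n - k                      # exploit the symmetry C(n, k) = C(n, n-k)
    T = [1] * (k + 1)                  # initially T[j] = C(j, j) = 1
    for _ in range(n - k):             # after round i, T[j] = C(i + j, j)
        for j in range(1, k + 1):
            T[j] += T[j - 1]
    return T[k]

print(nchoosek(7, 3))                  # -> 35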

7.7.2 Stirling Numbers of the Second Kind

 n\k    1    2    3    4    5    6   7  8
 1      1
 2      1    1
 3      1    3    1
 4      1    7    6    1
 5      1   15   25   10    1
 6      1   31   90   65   15    1
 7      1   63  301  350  140   21   1
 8      1  127  966 1701 1050  266  28  1

Figure 7.41: Stirling numbers of the second kind triangle with rolling cylinders: a southwest cylinder of k cells and a southeast cylinder of n − k + 1 cells.

Algorithm 7.43 rolls a one-dimensional cylinder in a southwest direction. When k ≥ ⌈n/2⌉, the size of the cylinder can be reduced if it is rolled in a southeast direction instead. Since Pascal's triangle is symmetric, this direction issue can be avoided by calling nchoosek(n, n − k) when k ≥ ⌈n/2⌉. Most other, non-symmetric triangles, such as the Stirling numbers and Eulerian numbers, require determining the direction of the rolling cylinder to minimize the space. Here, the Stirling numbers of the second kind are presented, while others are left as exercises. As shown in Figure 7.41, the southwest rolling and the southeast rolling are used to find SNS(8, 2) = 127 and SNS(8, 5) = 1050, respectively.

Algorithm 7.44. Stirling numbers of the second kind with a one-dimensional cylinder

SNS(n, k)
if k ≤ n/2, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Declare a table T1∼k . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for j = 1 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for i = 2 to n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
for j = 2 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
T [j] = T [j − 1] + j × T [j] . . . . . . . . . . . . . . . . . . 7
return T [k] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Declare a table T1∼n−k+1 . . . . . . . . . . . . . . . . . . . . . . 10
for j = 1 to n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . 11
T [j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
for i = 2 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
for j = 2 to n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . 14
T [j] = i × T [j − 1] + T [j] . . . . . . . . . . . . . . . . . 15
return T [n − k + 1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Lines 1 ∼ 8 and lines 9 ∼ 16 utilize the southwest and southeast rolling cylinders, respectively. The computational time complexity of Algorithm 7.44 is Θ(min(nk, n(n − k + 1))), or simply O(n²). The computational space complexity is Θ(min(k, n − k + 1)).
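The following Python sketch transcribes Algorithm 7.44; running it reproduces the two values highlighted in Figure 7.41.

def sns(n, k):
    # Stirling number of the second kind S(n, k) with a 1-D rolling cylinder.
    if 2 * k <= n:                           # southwest cylinder of k cells
        T = [1] * (k + 1)                    # 1-indexed; T[j] = S(j, j) = 1
        for i in range(2, n - k + 2):        # after round i, T[j] = S(i + j - 1, j)
            for j in range(2, k + 1):
                T[j] = T[j - 1] + j * T[j]
        return T[k]
    else:                                    # southeast cylinder of n - k + 1 cells
        T = [1] * (n - k + 2)                # 1-indexed; T[j] = S(j, 1) = 1
        for i in range(2, k + 1):            # after round i, T[j] = S(i + j - 1, i)
            for j in range(2, n - k + 2):
                T[j] = i * T[j - 1] + T[j]
        return T[n - k + 1]

print(sns(8, 2), sns(8, 5))                  # -> 127 1050, as in Figure 7.41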

7.7.3 Bell Number


An intriguing problem for which various cylindrical rollings are possible is the Bell number problem. While the Stirling number of the second kind counts the number of ways to partition a set of n distinct elements into exactly k parts, the Bell number is the number of ways to partition the set into any number of parts. For example, if n = 4, there are 15 different ways to partition the four distinct elements, as enumerated in Table 7.2. Hence, the Bell number problem can be formulated utilizing the Stirling numbers of the second kind as follows:

Table 7.2: Partitioning of a set {a, b, c, d}.

k   list                                                           S(4, k)
1   {{a, b, c, d}}                                                 S(4, 1) = 1
2   {{a}, {b, c, d}}, {{b}, {a, c, d}}, {{c}, {a, b, d}},          S(4, 2) = 7
    {{d}, {a, b, c}}, {{a, b}, {c, d}}, {{a, c}, {b, d}},
    {{a, d}, {b, c}}
3   {{a}, {b}, {c, d}}, {{a}, {c}, {b, d}}, {{a}, {d}, {b, c}},    S(4, 3) = 6
    {{b}, {c}, {a, d}}, {{b}, {d}, {a, c}}, {{c}, {d}, {a, b}}
4   {{a}, {b}, {c}, {d}}                                           S(4, 4) = 1

Problem 7.13. Bell number, bell(n)

Input: n ∈ Z+
Output: bell(n) = Σ_{k=1}^{n} S(n, k),
        where S(n, k) is the Stirling number of the second kind.

(a) Top-down 2 × n cylinder  (b) South-west one-dimensional cylinder  (c) South-east one-dimensional cylinder

[Each panel rolls a cylinder down the SNS triangle for n = 7; the final cylinder contents 1, 63, 301, 350, 140, 21, 1 (the 7th SNS row) sum to bell(7) = 877.]

Figure 7.42: Rolling cylinders for Bell numbers using the Stirling numbers of the second kind triangle.

It is simply the sum of the entire bottom row of the Stirling numbers of the second kind triangle. Since a value in the nth row requires values of the (n − 1)th row, a cylinder of size 2 × n rolling on the left-aligned triangle can be utilized, as depicted in Figure 7.42 (a). Once the nth row is computed, its values are added to find bell(n). A pseudo-code is stated as follows:
Algorithm 7.45. Bell number using the SNS left-aligned triangle

bell(n)
  Declare a (2 × n) table T0∼1,1∼n ............................... 1
  T[1][1] = 1 .................................................... 2
  for i = 2 to n ................................................. 3
     T[i % 2][1] = 1 ............................................. 4
     T[i % 2][i] = 1 ............................................. 5
     for j = 2 to i − 1 .......................................... 6
        T[i % 2][j] = T[(i − 1) % 2][j − 1] + j × T[(i − 1) % 2][j] ... 7
  return Σ_{i=1}^{n} T[n % 2][i] ................................. 8
While the computational time complexity of Algorithm 7.45 is Θ(n²), the space complexity is Θ(n), as it requires a 2 × n cylinder.
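A Python sketch of Algorithm 7.45 follows, with the two rows addressed by i % 2 as in the pseudocode; the function name bell_two_row is illustrative only.

def bell_two_row(n):
    # Bell number via a 2 x n cylinder rolled down the left-aligned SNS triangle.
    T = [[0] * (n + 1) for _ in range(2)]
    T[1][1] = 1
    for i in range(2, n + 1):
        T[i % 2][1] = 1
        T[i % 2][i] = 1
        for j in range(2, i):
            T[i % 2][j] = T[(i - 1) % 2][j - 1] + j * T[(i - 1) % 2][j]
    return sum(T[n % 2][1:n + 1])

print([bell_two_row(i) for i in range(1, 8)])    # [1, 2, 5, 15, 52, 203, 877]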

A more space-saving algorithm is possible by modifying the southwest rolling Algorithm 7.44 for the Stirling numbers of the second kind. Instead of a 1 × k array, a 1 × n array is rolled down in the southwest direction, as depicted in Figure 7.42 (b). The difference is that at the ith iteration, only cells 1 ∼ n − i + 1 are recomputed; each remaining cell j (j = n − i + 2 ∼ n) is frozen and already holds its final nth-row value. When the (n − 1)th iteration completes, the entire cylinder contains exactly the nth row of the Stirling numbers of the second kind and, thus, the algorithm simply returns the sum of all cells. A pseudo-code is stated in Algorithm 7.46.

Algorithm 7.46. Bell number rolling a SW cylinder on the SNS triangle

bell(n)
  Declare a table T1∼n ............................ 1
  for j = 1 to n .................................. 2
     T[j] = 1 ..................................... 3
  for i = 2 to n − 1 .............................. 4
     for j = 2 to n − i + 1 ....................... 5
        T[j] = T[j − 1] + j × T[j] ................ 6
  return Σ_{i=1}^{n} T[i] ......................... 7

A cylinder can also roll on the SNS triangle in a southeast direction, as depicted in Figure 7.42 (c). Its pseudo-code is stated in Algorithm 7.47.

Algorithm 7.47. Bell number rolling a SE cylinder on the SNS triangle

bell(n)
  Declare a table T1∼n ............................ 1
  for j = 1 to n .................................. 2
     T[j] = 1 ..................................... 3
  for i = 2 to n − 1 .............................. 4
     for j = 2 to n − i + 1 ....................... 5
        T[j] = i × T[j − 1] + T[j] ................ 6
  return Σ_{i=1}^{n} T[i] ......................... 7

The computational time and space complexities of both Algorithms 7.46 and 7.47 are Θ(n²) and Θ(n), respectively.
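Both one-dimensional variants are compact in Python; the shrinking inner loop bound n − i + 1 is what freezes the suffix of the cylinder at nth-row values.

def bell_sw(n):
    # Algorithm 7.46: cell j freezes at S(n, j).
    T = [1] * (n + 1)                       # 1-indexed
    for i in range(2, n):                   # i = 2 ~ n-1
        for j in range(2, n - i + 2):       # active cells shrink each round
            T[j] = T[j - 1] + j * T[j]
    return sum(T[1:n + 1])

def bell_se(n):
    # Algorithm 7.47: cell j freezes at S(n, n - j + 1).
    T = [1] * (n + 1)
    for i in range(2, n):
        for j in range(2, n - i + 2):
            T[j] = i * T[j - 1] + T[j]
    return sum(T[1:n + 1])

print(bell_sw(7), bell_se(7))               # -> 877 877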

(a) Bell triangle:

  1
  1   2
  2   3   5
  5   7  10  15
 15  20  27  37  52
 52  67  87 114 151 203
203 255 322 409 523 674 877

(b) Right-aligned Bell triangle: row i occupies cells (n − i + 1) ∼ n.

Figure 7.43: Using the Bell triangle to solve Bell numbers.

The Bell triangle [153] in Figure 7.43 (a), which is also called Aitken's array or the Peirce triangle [104], can also be utilized to compute the Bell number. To minimize the space, consider the right-aligned Bell triangle shown in Figure 7.43 (b). The algorithm starts from the top row, whose single value 1 is stored at the end of the cylinder. For each remaining row i, the last element's value is copied to the beginning of the row, which is the (n − i + 1)th cell, and then a prefix sum, which takes linear time, is performed on the row. The last (nth) cell then contains bell(i) at the ith iteration. By repeating this operation until the iteration reaches the nth row, the algorithm terminates and returns bell(n), the last element in the cylinder. This top-down rolling on the right-aligned Bell triangle can be stated as follows:

Algorithm 7.48. Bell number using Bell triangle with a one dimensional cylinder

bell(n)
Declare a table T1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T [n] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
T [n − i + 1] = T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for j = n − i + 2 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
T [j] = T [j] + T [j − 1] . . . . . . . . . . . . . . . . . . . . . . . . 6
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

While the computational time complexity of Algorithm 7.48 is Θ(n²), the space complexity is Θ(n).
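A Python sketch of Algorithm 7.48; the comment marks the Bell triangle rule a(i, j) = a(i, j − 1) + a(i − 1, j − 1) that the in-place prefix sum realizes.

def bell_triangle(n):
    # Bell number via the right-aligned Bell triangle and one n-cell cylinder.
    T = [0] * (n + 1)                  # 1-indexed
    T[n] = 1                           # row 1 of the triangle
    for i in range(2, n + 1):
        T[n - i + 1] = T[n]            # row i starts with the last value of row i-1
        for j in range(n - i + 2, n + 1):
            T[j] += T[j - 1]           # the old T[j] held a(i-1, j-1)
    return T[n]

print([bell_triangle(i) for i in range(1, 8)])   # [1, 2, 5, 15, 52, 203, 877]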

7.7.4 Set Partition Number of at Most k Partitions

(a) Top-down 2 × k cylinder  (b) South-west one-dimensional cylinder

[Both panels roll a cylinder down the SNS triangle for SPam(7, 4): the final cylinder holds 1, 63, 301, 350, whose sum is 715.]

Figure 7.44: Rolling cylinder for SPam using the Stirling numbers of the second kind triangle.

The Stirling number of the second kind S(n, k) Problem S-6.5 counts the number of set partitions of exactly k parts. Consider the number of set partitions of up to k parts, where 1 ≤ k ≤ n. The difference from the Stirling number of the second kind is that some partitions may be empty. For example, SPam(4, 2) = 8 as {{{a, b, c, d}, {}}, {{a}, {b, c, d}}, {{b}, {a, c, d}}, {{c}, {a, b, d}}, {{d}, {a, b, c}}, {{a, b}, {c, d}}, {{a, c}, {b, d}}, {{a, d}, {b, c}}}. In other words, it is the number of at most k partitions. This problem, abbreviated as SPam, can be formulated in the same way as the Stirling number of the second kind Problem S-6.5, except for one relaxed constraint, as follows:

Problem 7.14. Set partition number with at most k partitions.

Input: n and k ∈ Z+
Output: |X| where A is a set of n distinct elements and
        X = {(x1 , · · · , xk ) | ∪_{i=1}^{k} xi = A ∧ (∀i, j ∈ {1, · · · , k}, if i ≠ j, xi ∩ xj = ∅)}

Or the output can be stated as a partial sum of the Stirling numbers of the second kind:

    SPam(n, k) = Σ_{i=1}^{k} SNS(n, i)    (7.13)

As it can be solved by adding parts of the nth row of the Stirling numbers of the second kind triangle, a couple of algorithms with a rolling cylinder are possible. The first algorithm utilizes the left-aligned SNS triangle and a (2 × k) cylinder, as illustrated in Figure 7.44 (a).

Algorithm 7.49. SPam with the left-aligned SNS triangle

SPam(n, k)
  Declare a (2 × k) table T0∼1,1∼k ............................... 1
  T[1][1] = 1 .................................................... 2
  for i = 2 to n ................................................. 3
     T[i % 2][1] = 1 ............................................. 4
     if i ≤ k, T[i % 2][i] = 1 ................................... 5
     for j = 2 to min(k, i − 1) .................................. 6
        T[i % 2][j] = T[(i − 1) % 2][j − 1] + j × T[(i − 1) % 2][j] ... 7
  return Σ_{i=1}^{k} T[n % 2][i] ................................. 8

The second algorithm utilizes a one-dimensional (1 × k) cylinder rolling in a southwest direction, as depicted in Figure 7.44 (b).

Algorithm 7.50. SPam using the SNS triangle with a one-dimensional cylinder

SPam(n, k)
  Declare a table T1∼k ............................ 1
  for j = 1 to k .................................. 2
     T[j] = 1 ..................................... 3
  for i = 2 to n − 1 .............................. 4
     for j = 2 to min(k, n − i + 1) ............... 5
        T[j] = T[j − 1] + j × T[j] ................ 6
  return Σ_{i=1}^{k} T[i] ......................... 7

Both Algorithms 7.49 and 7.50 have Θ(kn) time and Θ(k) space complexities.
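A Python sketch of Algorithm 7.50, assuming 1 ≤ k ≤ n:

def spam(n, k):
    # Set partitions of n elements into at most k parts.
    T = [1] * (k + 1)                           # 1-indexed southwest cylinder
    for i in range(2, n):                       # i = 2 ~ n-1
        for j in range(2, min(k, n - i + 1) + 1):
            T[j] = T[j - 1] + j * T[j]          # cell j freezes at S(n, j)
    return sum(T[1:k + 1])

print(spam(4, 2))                               # -> 8 = S(4, 1) + S(4, 2)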

7.7.5 Set Partition Number of at Least k Partitions


Consider the problem of counting set partitions with at least k parts, SPal in short.

Problem 7.15. Set partition number with at least k partitions.

Input: n and k ∈ Z+
Output:

    SPal(n, k) = Σ_{i=k}^{n} SNS(n, i)    (7.14)

[The SNS triangle rolled southeast for SPal(7, 4): the final cylinder holds 1, 21, 140, 350, i.e., S(7, 7), S(7, 6), S(7, 5), and S(7, 4).]

Figure 7.45: Rolling cylinder for SPal using the Stirling numbers of the second kind triangle.

Similar to the SPam Problem 7.14, SPal can be solved by adding parts of the nth row of the Stirling numbers of the second kind triangle. As depicted in Figure 7.45, an algorithm which utilizes a one-dimensional (1 × (n − k + 1)) cylinder rolling in a southeast direction is stated as follows:
Algorithm 7.51. SPal using the SNS triangle with a one-dimensional cylinder

SPal(n, k)
  Declare a table T1∼n−k+1 ........................ 1
  for j = 1 to n − k + 1 .......................... 2
     T[j] = 1 ..................................... 3
  for i = 2 to n − 1 .............................. 4
     for j = 2 to min(n − k + 1, n − i + 1) ....... 5
        T[j] = i × T[j − 1] + T[j] ................ 6
  return Σ_{i=1}^{n−k+1} T[i] ..................... 7

Algorithm 7.51 has Θ(n(n − k + 1)) time and Θ(n − k + 1) space complexities.
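A Python sketch of Algorithm 7.51, again assuming 1 ≤ k ≤ n:

def spal(n, k):
    # Set partitions of n elements into at least k parts.
    T = [1] * (n - k + 2)                       # 1-indexed southeast cylinder
    for i in range(2, n):                       # i = 2 ~ n-1
        for j in range(2, min(n - k + 1, n - i + 1) + 1):
            T[j] = i * T[j - 1] + T[j]          # cell j freezes at S(n, n - j + 1)
    return sum(T[1:n - k + 2])

print(spal(4, 3))                               # -> 7 = S(4, 3) + S(4, 4)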

7.7.6 Integer Partition

Recall counting the ways of partitioning a positive integer n into exactly k parts, Problem 6.11, or IPE in short, defined on page 326. The computational space complexity of the 2D strong inductive programming Algorithm 6.28 stated on page 328 can be dramatically reduced using rolling cylinders. There are two different ways to roll a cylinder: southeast (SE) and southwest (SW) rolling.
A one-dimensional array of size n − k + 1 can be slid in the southeast direction, as depicted in Figure 7.46 (a). A pseudo code is stated as follows:
Algorithm 7.52. Integer partition number, p(n, k) with a SE cylinder

IPE-SE(n, k)
Declare a table T1∼n−k+1 . . . . . . . . . . . . . . . . . . . . . . . . . 1
for j = 1 to n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [j] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = 2 to k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for j = 2 to n − k + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if j > i, T [j] = T [j] + T [j − i] . . . . . . . . . . . . . . 6
return T [n − k + 1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

 n\k   1  2  3  4  5  6  7  8  9  10
  1    1
  2    1  1
  3    1  1  1
  4    1  2  1  1
  5    1  2  2  1  1
  6    1  3  3  2  1  1
  7    1  3  4  3  2  1  1
  8    1  4  5  5  3  2  1  1
  9    1  4  7  6  5  3  2  1  1
 10    1  5  8  9  7  5  3  2  1  1

(a) Integer partition triangle with a SE rolling cylinder of n − k + 1 cells

(b) Integer partition triangle with a SW rolling (k × k) cylinder

Figure 7.46: Integer partition triangle with rolling cylinders.

The computational time complexity of Algorithm 7.52 is Θ(kn) but the computational
space complexity is Θ(n − k).
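A Python sketch of Algorithm 7.52; the comment records the invariant that justifies returning the last cell.

def ipe_se(n, k):
    # p(n, k): partitions of n into exactly k parts, with a SE rolling cylinder.
    T = [1] * (n - k + 2)                  # 1-indexed; initially T[j] = p(j, 1) = 1
    for i in range(2, k + 1):              # after round i, T[j] = p(i + j - 1, i)
        for j in range(2, n - k + 2):
            if j > i:
                T[j] += T[j - i]           # p(m, i) = p(m-1, i-1) + p(m-i, i)
    return T[n - k + 1]

print(ipe_se(7, 3))                        # -> 4: 5+1+1, 4+2+1, 3+3+1, 3+2+2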
A (k × k) cylinder can be rolled down in the southwest direction, as depicted in Figure 7.46 (b). A pseudo code is stated as follows:

Algorithm 7.53. Integer partition number, p(n, k) with a SW cylinder

IPE-SW(n, k)
  Declare a (k × k) table T0∼k−1,1∼k ............................. 1
  for j = 1 to k ................................................. 2
     T[1][j] = 1 ................................................. 3
  for i = 2 to n − k + 1 ......................................... 4
     T[i % k][1] = 1 ............................................. 5
     for j = 2 to k .............................................. 6
        if i > j, T[i % k][j] = T[i % k][j − 1] + T[(i − j) % k][j] ... 7
        else, T[i % k][j] = T[i % k][j − 1] ...................... 8
  return T[(n − k + 1) % k][k] ................................... 9

The computational time complexity of Algorithm 7.53 is Θ(kn) but the computational space complexity is Θ(k²).

If k > √n, Algorithm 7.52 should be used. Otherwise, Algorithm 7.53 is more space efficient.
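A Python sketch of Algorithm 7.53 as stated above, with rows addressed modulo k:

def ipe_sw(n, k):
    # p(n, k) with a (k x k) cylinder rolled southwest.
    T = [[0] * (k + 1) for _ in range(k)]   # rows indexed i % k, columns 1 ~ k
    for j in range(1, k + 1):
        T[1 % k][j] = 1                     # row 1: p(j, j) = 1
    for i in range(2, n - k + 2):           # row i, column j holds p(i + j - 1, j)
        T[i % k][1] = 1                     # p(i, 1) = 1
        for j in range(2, k + 1):
            if i > j:
                T[i % k][j] = T[i % k][j - 1] + T[(i - j) % k][j]
            else:
                T[i % k][j] = T[i % k][j - 1]   # p(i - 1, j) = 0 when i <= j
    return T[(n - k + 1) % k][k]

print(ipe_sw(7, 3))                         # -> 4, agreeing with ipe_se(7, 3)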

7.7.7 Hopping Cylindrical Array

Consider the Lucas sequence coefficient Problem 6.10, or simply LSC, defined on page 323. The LSC triangle containing L(n, k) is shown in Figure 7.47 (b). One important LSC property is that L(n, k) = 0 if k is even. This property can be utilized to speed up the strong inductive programming with a cylindrical array. Pseudo codes are as follows:

Algorithm 7.54. Lucas sequence coefficient, L(n, k) with a SE cylinder

LSC-SEcyl(n, k)
  if k is even, return 0 .......................... 1
  else ............................................ 2
     Declare a table T1∼n−k+1 ..................... 3
     for j = 1 ∼ n − k + 1 ........................ 4
        T[j] = 1 .................................. 5
     for i = 2 ∼ (k + 1)/2 ........................ 6
        if i is odd, T[1] = 1 ..................... 7
        else, T[1] = −1 ........................... 8
        for j = 2 ∼ n − k + 1 ..................... 9
           T[j] = T[j − 1] − T[j] ................ 10
     return T[n − k + 1] ......................... 11

Algorithm 7.55. Lucas sequence coefficient, L(n, k) with a SW cylinder

LSC-SWcyl(n, k)
  if k is even, return 0 .......................... 1
  else ............................................ 2
     Declare a table T1∼(k+1)/2 ................... 3
     for j = 1 ∼ (k + 1)/2 ........................ 4
        if j is odd, T[j] = 1 ..................... 5
        else, T[j] = −1 ........................... 6
     for i = 2 ∼ n − k + 1 ........................ 7
        for j = 2 ∼ (k + 1)/2 ..................... 8
           T[j] = T[j] − T[j − 1] ................. 9
     return T[(k + 1)/2] ......................... 10

(a) LSC(10, 5) = 21 using a SW cylinder      (b) LSC triangle
(c) LSC(10, 7) = −20 using a SE cylinder     (d) LSC(9, 3) = −7 using a SW cylinder
(e) condensed LSC triangle                   (f) LSC(13, 11) = −21 using a SE cylinder

Figure 7.47: Lucas sequence coefficient triangle with hopping cylindrical arrays.

The condensed LSC triangle is the LSC triangle with the even-k entries, whose values are all zero, removed, as shown in Figure 7.47 (e). Hopping a SE cylinder in the LSC triangle is equivalent to sliding a SE cylinder in the condensed LSC triangle, as illustrated in Figures 7.47 (c) and (f). There are some interesting symmetric relations in the condensed LSC triangle, but they are beyond the scope of this text.
The computational time complexities of both Algorithms 7.54 and 7.55 are O(kn). The
computational space complexities of Algorithms 7.54 and 7.55 are Θ(n − k) and Θ(k),
respectively.
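The following Python sketch transcribes both pseudo codes; running it reproduces the values highlighted in Figure 7.47.

def lsc_sw(n, k):
    # Algorithm 7.55: L(n, k) with a southwest cylinder of (k+1)/2 cells.
    if k % 2 == 0:
        return 0                             # L(n, k) = 0 whenever k is even
    h = (k + 1) // 2
    T = [1 if j % 2 == 1 else -1 for j in range(h + 1)]   # T[j] = (-1)^(j+1)
    for _ in range(2, n - k + 2):            # i = 2 ~ n-k+1
        for j in range(2, h + 1):
            T[j] = T[j] - T[j - 1]
    return T[h]

def lsc_se(n, k):
    # Algorithm 7.54: L(n, k) with a southeast cylinder of n-k+1 cells.
    if k % 2 == 0:
        return 0
    w = n - k + 1
    T = [1] * (w + 1)                        # 1-indexed
    for i in range(2, (k + 1) // 2 + 1):     # i = 2 ~ (k+1)/2
        T[1] = 1 if i % 2 == 1 else -1
        for j in range(2, w + 1):
            T[j] = T[j - 1] - T[j]
    return T[w]

print(lsc_sw(10, 5), lsc_se(10, 7))          # -> 21 -20, as in Figure 7.47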

7.8 Exercises
Q 7.1. Consider the following logical statement and its expression tree:

S = (¬p ∨ q ∨ r) ∧ (p ∨ ¬q)
            ∧
         /       \
        ∨         ∨
      / | \      / \
     ¬  q  r    p   ¬
     |              |
     p              q

a). Perform the preorder depth-first traversal to get the prefix expression.

b). Evaluate the prefix expression found in a) for (p = F, q = T, and r = T) using a stack.

c). Perform the postorder depth-first traversal to get the postfix expression.

d). Convert the statement S into a postfix expression using a stack in linear time.

e). Evaluate the postfix expression found in d) for (p = T, q = F, and r = T) using a


stack.

Q 7.2. Consider the following logical statement and its expression tree:

S = ((p ↔ q) ↓ ¬r) → ¬(q ↑ r)


              →
            /    \
           ↓      ¬
          / \      \
         ↔   ¬      ↑
        / \   \    / \
       p   q   r  q   r

a). Perform the preorder depth-first traversal to get the prefix expression.

b). Evaluate the prefix expression found in a) for (p = T, q = F, and r = T) using a stack.

c). Perform the postorder depth-first traversal to get the postfix expression.
d). Convert the statement S into a postfix expression using a stack in linear time.
e). Evaluate the postfix expression found in d) for (p = T, q = F, and r = F) using a
stack.

Q 7.3. Consider the following logical statement expressed in infix notation:

¬(p → q) ∨ (r → p)

a). Draw the expression tree.


b). Convert the statement into a postfix expression using a stack in linear time.
c). Evaluate the converted postfix expression for (p = T, q = F, and r = F) using a stack.

Q 7.4. Consider the following logical statement expressed in infix notation:

p → ¬(p ∨ q ∧ r)

a). Draw the expression tree.


b). Convert the statement into a postfix expression using a stack in linear time.
c). Evaluate the converted postfix expression for (p = T, q = F, and r = F) using a stack.

Q 7.5. Consider the following logical statement expressed in infix notation:

(p ∧ q ∨ r) → ¬r

a). Draw the expression tree.


b). Convert the statement into a postfix expression using a stack in linear time.
c). Evaluate the converted postfix expression for (p = T, q = F, and r = T) using a stack.
d). Evaluate the converted postfix expression for (p = T, q = F, and r = F) using a stack.

Q 7.6. Consider the expression tree:

[expression tree over operands x, y, and z with operators + and −]

a). Perform the inorder depth-first traversal.



b). Perform the preorder depth-first traversal.

c). Perform the postorder depth-first traversal.

d). Evaluate the postfix expression when (x = 2, y = 1, and z = 2) using a stack.

e). Evaluate the prefix expression when (x = 2, y = 1, and z = 2) using a stack.

f). Evaluate the postfix expression when (x = −2, y = 2, and z = 3) using a stack.

g). Evaluate the prefix expression when (x = −2, y = 2, and z = 3) using a stack.

h). Evaluate the postfix expression when (x = 3, y = −2, and z = 2) using a stack.

i). Evaluate the prefix expression when (x = 3, y = −2, and z = 2) using a stack.

Q 7.7. Consider the expression tree:

[expression tree over operands x, y, z, w, and v with operators +, /, and −]

a). Perform the inorder depth-first traversal.

b). Perform the preorder depth-first traversal.

c). Perform the postorder depth-first traversal.

d). Evaluate the postfix expression when (x = 8, y = 3, z = 4, w = 7, and v = 2) using a


stack.

e). Evaluate the prefix expression when (x = 8, y = 3, z = 4, w = 7, and v = 2) using a


stack.

Q 7.8. Consider the Greater between elements sequence validation Problem 3.6, or simply
GBW, defined on page 121.

a). Devise an algorithm for GBW using a stack.

b). Demonstrate the algorithm in a) on S = h1, 2, 4, 4, 2, 3, 3, 1i.

c). Demonstrate the algorithm in a) on S = h1, 2, 2, 4, 4, 3, 5, 5, 1, 3i.

d). Provide the computational time complexity of the proposed algorithm in a).

Q 7.9. Consider the Less between elements sequence validation problem, or simply LBW,
considered as exercises in Q 2.30 and Q 3.26 on pages 88 and 150.

a). Devise an algorithm for LBW using a stack.

b). Demonstrate the algorithm in a) on S = h4, 3, 1, 1, 3, 2, 2, 4i.

c). Demonstrate the algorithm in a) on S = h5, 4, 4, 2, 2, 3, 1, 1, 5, 3i.

d). Provide the computational time complexity of the proposed algorithm in a).

Q 7.10. Consider the finding all connected components Problem 7.6 defined on page 376.
[an undirected graph on vertices v1 ∼ v7]

a). Devise an algorithm utilizing a recursive depth-first search.

b). Demonstrate the algorithm in a) on the above graph.

c). Provide the computational time and space complexities of the algorithm provided in
a).

d). Devise an algorithm using a queue data structure.

e). Demonstrate the algorithm in d) on the above graph.

f). Provide the computational time and space complexities of the algorithm provided in
d).

Q 7.11. Consider the cycle detection Problem 7.7 defined on page 378.
[an undirected graph on vertices v1 ∼ v7]

a). Devise an algorithm utilizing a depth-first search by an explicit stack.

b). Demonstrate the algorithm devised in a) on the above graph.

c). Identify the back edge by the algorithm devised in a) on the above graph.

d). Provide the computational time and space complexities of the algorithm provided in
a).

e). Devise an algorithm using a queue data structure.



f). Demonstrate the algorithm devised in e) on the above graph.


g). Identify the back edge by the algorithm devised in e) on the above graph.
h). Provide the computational time and space complexities of the algorithm provided in
e).

Q 7.12. Consider the following directed graph and various graph search algorithms.
[a directed graph on vertices v1 ∼ v7]

a). Demonstrate the recursive depth-first search Algorithm 7.14 on the above directed
graph.
b). Draw the spanning tree by the recursive DFS on the above directed graph.
c). Demonstrate the depth-first search using stack Algorithm 7.15 on the above directed
graph.
d). Provide both the DFS order using a stack and the stack push order on the above
directed graph. Draw the spanning tree by the DFS using a stack as well.
e). Demonstrate the breadth-first search using queue Algorithm 7.26 on the above directed
graph.
f). Draw the level graph and spanning tree based on BFS on the above directed graph.

Q 7.13. Consider the following directed graph and various graph search algorithms.
[a directed graph on vertices v1 ∼ v7]

a). Demonstrate the recursive depth-first search Algorithm 7.14 on the above directed
graph.
b). Draw the spanning tree by the recursive DFS on the above directed graph.
c). Demonstrate the depth-first search using stack Algorithm 7.15 on the above directed
graph.
d). Provide both the DFS order using a stack and the stack push order on the above
directed graph. Draw the spanning tree by the DFS using a stack as well.
e). Demonstrate the breadth-first search using queue Algorithm 7.26 on the above directed
graph.

f). Draw the level graph and spanning tree based on BFS on the above directed graph.
g). Devise an algorithm using a queue to solve the longest path length problem as an
exercise in Q 5.41 on page 290.
h). Demonstrate the algorithm devised in g) on the above directed acyclic graph.

Q 7.14. Consider the following Fibonacci related problems: FTN, FRC, LUS, and LUS2.
• FTN stands for the Fibonacci tree size problem in eqn (3.33) on page 140.
• FRC stands for the number of recursive calls of Fibonacci in eqn (5.31) on page 247.
• LUS stands for the Lucas sequence Problem 5.10 on page 250.
• LUS2 stands for the Lucas sequence II Problem 5.11 on page 250.

a). Devise a strong inductive programming algorithm for FTN using a circular array.
b). Provide the computational time and space complexities of the proposed algorithm in
a).
c). Devise a memoization algorithm for FTN using a circular array.
d). Provide the computational time and space complexities of the proposed algorithm in
c).
e). Devise a strong inductive programming algorithm for FRC using a circular array.
f). Devise a memoization algorithm for FRC using a circular array.
g). Devise a strong inductive programming algorithm for LUS using a circular array.
h). Devise a memoization algorithm for LUS using a circular array.
i). Devise a strong inductive programming algorithm for LUS2 using a circular array.
j). Devise a memoization algorithm for LUS2 using a circular array.

Q 7.15. Consider the Lucas number, or simply LUC, defined recursively in eqn (5.63) on
page 278.

a). Devise a strong inductive programming algorithm using a circular array.


b). Provide the computational time and space complexities of the proposed algorithm in
a).
c). Devise a memoization algorithm using a circular array.
d). Devise a divide and conquer algorithm using a jumping array. (Hint: Lucas halving
identities Theorem 5.17 on page 279.)
e). Illustrate the proposed algorithm in d) to compute L31 and L32 .
f). Provide the computational time and space complexities of the proposed algorithm in
d).

Q 7.16. Consider the Pell number, or simply PLN, defined recursively in eqn (5.67) on
page 279.

a). Devise a strong inductive programming algorithm using a circular array.

b). Provide the computational time and space complexities of the proposed algorithm in
a).

c). Devise a memoization algorithm using a circular array.

d). Devise a divide and conquer algorithm using a jumping array. (Hint: Pell halving
identities Theorem 5.19 on page 280.)

e). Illustrate the proposed algorithm in d) to compute P31 .

f). Provide the computational time and space complexities of the proposed algorithm in
d).

Q 7.17. Consider the Pell-Lucas number, or simply PLL, defined recursively in eqn (5.70)
on page 280.

a). Devise a strong inductive programming algorithm using a circular array.

b). Provide the computational time and space complexities of the proposed algorithm in
a).

c). Devise a memoization algorithm using a circular array.

d). Devise a divide and conquer algorithm using a jumping array. (Hint: Pell-Lucas
halving identities Theorem 5.20 on page 280.)

e). Illustrate the proposed algorithm in d) to compute Q31 .

f). Provide the computational time and space complexities of the proposed algorithm in
d).

Q 7.18. Consider the Jacobsthal number, or simply JCN, defined recursively in eqn (5.73)
on page 281.

a). Devise a strong inductive programming algorithm using a circular array.

b). Provide the computational time and space complexities of the proposed algorithm in
a).

c). Devise a memoization algorithm using a circular array.

d). Devise a divide and conquer algorithm using a jumping array. (Hint: Jacobsthal
halving identities Theorem 5.21 on page 281.)

e). Illustrate the proposed algorithm in d) to compute J31 .

f). Provide the computational time and space complexities of the proposed algorithm in
d).

Q 7.19. Consider the Jacobsthal-Lucas number, or simply JCL, defined recursively in


eqn (5.78) on page 282.

a). Devise a strong inductive programming algorithm using a circular array.

b). Provide the computational time and space complexities of the proposed algorithm in
a).

c). Devise a memoization algorithm using a circular array.

d). Devise a divide and conquer algorithm using a jumping array. (Hint: Jacobsthal-Lucas
halving identities Theorem 5.22 on page 282.)

e). Illustrate the proposed algorithm in d) to compute JL31 .

f). Provide the computational time and space complexities of the proposed algorithm in
d).

Q 7.20. Consider the Mersenne number, or simply MSN, defined recursively in eqn (5.82)
on page 282.

a). Devise a strong inductive programming algorithm using a circular array.

b). Provide the computational time and space complexities of the proposed algorithm in
a).

c). Devise a memoization algorithm using a circular array.

d). Devise a divide and conquer algorithm using a jumping array. (Hint: Mersenne halving
identities Theorem 5.23 on page 283.)

e). Illustrate the proposed algorithm in d) to compute M31 .

f). Provide the computational time and space complexities of the proposed algorithm in
d).

Q 7.21. Consider the Mersenne-Lucas number, or simply MSL, defined recursively in


eqn (5.86) on page 283.

a). Devise a strong inductive programming algorithm using a circular array.

b). Provide the computational time and space complexities of the proposed algorithm in
a).

c). Devise a memoization algorithm using a circular array.

d). Devise a divide and conquer algorithm using a jumping array. (Hint: Mersenne-Lucas
halving identities Theorem 5.24 on page 284.)

e). Illustrate the proposed algorithm in d) to compute ML31 .

f). Provide the computational time and space complexities of the proposed algorithm in
d).

Q 7.22. Consider the full Kibonacci number problem, or simply KBF, defined recursively
in eqn (5.97) on page 285.

a). Devise a strong inductive programming algorithm to compute KBF(n, k) using a cir-
cular array based on eqn (5.97).

b). Illustrate the algorithm in a) where k = 8 and n = 14.

c). Provide the computational time and space complexities of the proposed algorithm in
a).

d). Devise a strong inductive programming algorithm to compute KBF2 (n, k) using a
built-in queue data structure based on eqn (5.98).

e). Illustrate the algorithm in d) where k = 8 and n = 14 using a built-in queue data
structure.

f). Provide the computational time and space complexities of the proposed algorithm in
d).

g). Devise a strong inductive programming algorithm to compute KBF2 (n, k) using a
circular array based on eqn (5.98).

h). Illustrate the algorithm in g) where k = 8 and n = 14 using a circular array.

Q 7.23. Consider the rod cutting related problems, the maximization Problem 4.7 (RCM)
defined on page 168 and the minimization problem (RCmin) considered as an exercise in
Q 4.19 on page 206. For up to k = 8, the profit or cost for each unit-length rod is given as follows:
Length 1 2 3 4 5 6 7 8
Profit or cost P/C 2 4 5 7 8 10 13 14

a). Devise a strong inductive programming algorithm to solve RCM using a circular array.

b). Illustrate the proposed algorithm in a) where n = 11.

c). Provide the computational time and space complexities of the proposed algorithm in a).

d). Devise a strong inductive programming algorithm to solve RCmin using a circular
array.

e). Illustrate the proposed algorithm in d) where n = 11.

Q 7.24. Consider the winning way Problem 5.7, or simply WWp, defined on page 239. For
example, it is to find the number of ways to gain n points in the k missile game depicted in
Figure 4.12 on page 166.

missile    blue  red  yellow
point P       3    5       8

a). Devise a strong inductive programming algorithm using a circular array.



b). Illustrate the algorithm proposed in a) using a circular array on the above toy example
where n = 11.
c). Provide the computational time and space complexities of the proposed algorithm.

Q 7.25. Consider the k missile game depicted in Figure 4.12 on page 166. Two kinds of
problems that maximize points include UKP and UKE.
- UKP stands for the unbounded integer knapsack Problem 4.6 defined on page 167.
- UKE stands for the unbounded integer knapsack equality problem considered as an
exercise in Q 5.7 on page 272.
While the UKP is to maximize points with at most n energy, the UKE is to maximize
points with exactly n energy.

missile    blue  red  yellow
energy E      1    3       4
point P       4   14      20

a). Devise a strong inductive programming algorithm for UKP using a circular array.
b). Illustrate the algorithm proposed in a) using a circular array on the above example
where n = 7.
c). Provide the computational time and space complexities of the algorithm devised in a).
d). Devise a strong inductive programming algorithm for UKE using a circular array.
e). Illustrate the algorithm proposed in d) using a circular array on the above example
where n = 7.
f). Provide the computational time and space complexities of the algorithm devised in
d).

Q 7.26. Consider the k missile game depicted in Figure 4.12 on page 166. Two kinds of
problems that minimize energy include UKP-min and UKEmin.
- UKP-min stands for the unbounded integer knapsack minimization problem considered
as an exercise in Q 4.15 on page 204.
- UKEmin stands for the unbounded integer knapsack equality minimization problem
considered as exercises in Q 4.18 on page 205 and Q 5.5 on page 271.
While UKP-min is to find the minimum energy necessary to gain at least n points, UKEmin is to find the minimum energy necessary to gain exactly n points.

missile    blue  red  yellow
energy E      2    3       5
point P       3    5       8

a). Devise a strong inductive programming algorithm for UKP-min using a circular array.
b). Illustrate the algorithm proposed in a) using a circular array on the above example
where n = 11.
c). Provide the computational time and space complexities of the algorithm devised in a).
d). Devise a strong inductive programming algorithm for UKEmin using a circular array.

e). Illustrate the algorithm proposed in d) using a circular array on the above example
where n = 11.
f). Provide the computational time and space complexities of the algorithm devised in
d).

Q 7.27. Suppose that a fast-food restaurant sells chicken nuggets in packs of 4, 6 and 9
only; A = h4, 6, 9i.

Three kinds of McNugget boxes: 4, 6, and 9.


Consider the unbounded subset sum related problems enumerated in Table 4.4 on page 213.
a). Recall the unbounded subset sum equality Problem 5.3 defined on page 225. A cus-
tomer needs exactly (n = 11). Is this possible? Devise a strong inductive programming
algorithm using a circular array.
b). Illustrate the algorithm provided in a) where n = 11 using a circular array.
c). Provide the computational time and space complexities of the algorithm proposed in
a).
d). Recall the unbounded subset sum minimization Problem 5.5 defined on page 227. A
customer needs at least (n = 11). Devise a strong inductive programming algorithm
using a circular array.
e). Illustrate the algorithm provided in d) where n = 11 using a circular array.
f). Recall the unbounded subset sum maximization problem considered on page 205. A
customer needs at most (n = 11). Devise a strong inductive programming algorithm
using a circular array.
g). Illustrate the algorithm provided in f) where n = 11 using a circular array.
Q 7.28. Consider a set of three kinds of stamps, A = h3, 5, 7i, for the Postage stamp
equality maximization problem considered as an exercise in Q 4.6 on page 201.

Three kinds of stamps

a). Devise a strong inductive programming algorithm using a circular array.


b). Illustrate the proposed algorithm where T = 11c.
c). Analyze the computational time and space complexities of your proposed algorithm.

Q 7.29. Consider a set of four kinds of stamps, A = h1, 3, 5, 8i, for the postage stamp
equality minimization Problem 4.2 defined on page 159.

Four kinds of stamps

a). Illustrate Algorithm 7.36 using the strong inductive programming with a circular array
stated on page 399 where T = 12c.

b). Modify the two-dimensional strong inductive programming Algorithm 6.4 stated on
page 300 using an implicit cylindrical array.

c). Illustrate the algorithm proposed in question b) where T = 10c.

d). Analyze the computational time and space complexities of the algorithm proposed in
question b).

Q 7.30. Recall the 0-1 knapsack minimization problem, considered as an exercise Q 4.12
on page 203. A two-dimensional strong inductive programming algorithm was devised as
an exercise in Q 6.5 on page 342.

a). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward.

b). Demonstrate the algorithm provided in a) on n = 12 and A = {(1, 1), (4, 3), (6, 5), (8, 7)}.

c). Provide the computational time and space complexities of the algorithm provided in
a).

d). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling from left to right.

e). Demonstrate the algorithm provided in d) on n = 12 and A = {(1, 1), (4, 3), (6, 5), (8, 7)}.

f). Provide the computational time and space complexities of the algorithm provided in
d).

Q 7.31. Consider the subset sum equality Problem 6.3, or simply SSE, defined on page 305.
See Figure 6.7 on page 306 for a toy example to illustrate the algorithms.

a). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward for SSE. (Hint: A two-dimensional strong inductive programming
was given in Algorithm 6.11 on page 306.)

b). Demonstrate the algorithm provided in a) on n = 12 and S = {2, 3, 5, 7}.

c). Provide the computational time and space complexities of the algorithm provided in
a).

d). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling from left to right for SSE.

e). Demonstrate the algorithm provided in d) on n = 12 and S = {2, 3, 5, 7}.



f). Provide the computational time and space complexities of the algorithm provided in
d).

Q 7.32. Consider the subset sum maximization problem, or simply SSM, considered as
exercises in Q 4.9 on page 202 and Q 6.9 on page 343. See Figure 6.7 on page 306 for a toy
example to illustrate the algorithms.

a). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward for SSM.
b). Demonstrate the algorithm provided in a) on n = 12 and S = {2, 3, 5, 7}.
c). Provide the computational time and space complexities of the algorithm provided in
a).
d). Devise a two-dimensional strong inductive programming algorithm using a cylinder
rolling from left to right for SSM.
e). Demonstrate the algorithm provided in d) on n = 12 and S = {2, 3, 5, 7}.
f). Provide the computational time and space complexities of the algorithm provided in
d).

Q 7.33. Recall the subset sum minimization problem, or simply SSmin, considered as
exercises in Q 4.8 on page 202 and Q 6.10 on page 344. See Figure 6.7 on page 306 for a
toy example to illustrate the algorithms.

a). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward for SSmin.
b). Demonstrate the algorithm provided in a) on n = 12 and S = {2, 3, 5, 7}.
c). Provide the computational time and space complexities of the algorithm provided in
a).
d). Devise a two-dimensional strong inductive programming algorithm using a cylinder
rolling from left to right for SSmin.
e). Demonstrate the algorithm provided in d) on n = 12 and S = {2, 3, 5, 7}.
f). Provide the computational time and space complexities of the algorithm provided in
d).

Q 7.34. Recall the subset product equality of positive numbers problem, or SPEp for
short, which was introduced in Q 6.12 on page 344. Given a set S of k positive numbers,
S = {s1, s2, . . . , sk}, find a subset of S whose product is exactly n.

a). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward.
b). Demonstrate the algorithm provided in a) on n = 12 and S = {2, 3, 4, 5}.
c). Provide the computational time and space complexities of the algorithm provided in
a).

d). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling from left to right.

e). Demonstrate the algorithm provided in d) on n = 12 and S = {2, 3, 4, 5}.

f). Provide the computational time and space complexities of the algorithm provided in
d).

Q 7.35. Let SPMp be the subset product maximization of positive numbers problem, which
was introduced in Q 6.13 on page 345. Let SPminp be the subset product minimization of
positive numbers problem, which was introduced in Q 6.14 on page 345.

a). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward for SPMp.

b). Demonstrate the algorithm provided in a) on n = 12 and S = {2, 3, 4, 5}.

c). Provide the computational time and space complexities of the algorithm provided in
a).

d). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward for SPminp.

e). Demonstrate the algorithm provided in d) on n = 12 and S = {2, 3, 4, 5}.

Q 7.36. Recall Multiset coefficient Problem 6.18 (MSC, M (n, k)) and Surjective multiset
coefficient Problem 6.19 (SMSC, M(n, k)) defined on pages 350 and 351.

a). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward for MSC.

b). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling left to right for MSC.

c). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling downward for SMSC.

d). Devise a two-dimensional strong inductive programming algorithm using a cylinder


rolling left to right for SMSC.

Q 7.37. Consider the subset selection way related problems. The binomial coefficient $\binom{n}{k}$ Problem 6.9 defined on page 319 is the number of ways of selecting a subset of size exactly k from n elements without repetition. Utilize Pascal's triangle given in Figure 7.40 on page 408.

a). Formulate the problem, abbreviated as SWam, which counts the ways of selecting a subset of size at most k from n elements without repetition.

b). Devise an algorithm using a cylindrical array on Pascal’s triangle to find SWam(n, k).

c). Demonstrate how the algorithm devised in b) rolls a cylinder to find SWam(6, 3).

d). Provide the computational time and space complexities of the algorithm devised in
b).

e). Formulate the problem, abbreviated as SWal, which counts the ways of selecting a subset of size at least k from n elements without repetition.

f). Devise an algorithm using a cylindrical array on Pascal’s triangle to find SWal(n, k).

g). Demonstrate how the algorithm devised in f) rolls a cylinder to find SWal(6, 3).

h). Provide the computational time and space complexities of the algorithm devised in f).

Q 7.38. Consider the cycle number related problems. The Stirling number of the first kind $\left[{n \atop k}\right]$ Problem 6.15 defined on page 348 is the number of ways of making exactly k cycles with n elements. Devise algorithms utilizing the Stirling number of the first kind triangle given in Figure 7.48.

1
0 1
0 1 1
0 2 3 1
0 6 11 6 1
0 24 50 35 10 1
0 120 274 225 85 15 1
0 720 1764 1624 735 175 21 1

Figure 7.48: The Stirling number of the first kind triangle

a). Devise a strong inductive programming algorithm while minimizing the space required, using cylindrical arrays, for the Stirling number of the first kind $\left[{n \atop k}\right]$ Problem 6.15.

b). Demonstrate how the algorithm devised in a) rolls a cylinder.

c). Provide the computational time and space complexities of the algorithm devised in a).

d). Formulate the problem, abbreviated as CNam, which counts the ways of making at most k cycles with n elements.

e). Devise an algorithm using a cylindrical array on the Stirling number of the first kind triangle to find CNam(n, k).

f). Demonstrate how the algorithm devised in e) rolls a cylinder.

g). Provide the computational time and space complexities of the algorithm devised in e).

h). Formulate the problem, abbreviated as CNal, which counts the ways of making at least k cycles with n elements.

i). Devise an algorithm using a cylindrical array on the Stirling number of the first kind triangle to find CNal(n, k).

j). Demonstrate how the algorithm devised in i) rolls a cylinder.

k). Provide the computational time and space complexities of the algorithm devised in i).

Q 7.39. Consider the ascent number related problems. The Eulerian number $\left\langle{n \atop k}\right\rangle$ Problem 6.16 considered on page 348 is the number of permutations of n elements with exactly k ascents. Devise algorithms utilizing the Eulerian number triangle given in Figure 7.49. Note that the triangle is almost symmetric.

1
1 0
1 1 0
1 4 1 0
1 11 11 1 0
1 26 66 26 1 0
1 57 302 302 57 1 0
1 120 1191 2416 1191 120 1 0

Figure 7.49: Eulerian number triangle

a). Devise a strong inductive programming algorithm while minimizing the space required
using cylindrical arrays for the Eulerian number Problem 6.16.

b). Demonstrate how the algorithm devised in a) rolls a cylinder.

c). Provide the computational time and space complexities of the algorithm devised in a).

d). Formulate the problem, abbreviated as NAam, which counts the number of permuta-
tions with at most k ascents.

e). Devise an algorithm using a cylindrical array on the Eulerian number triangle to find
NAam(n, k).

f). Demonstrate how the algorithm devised in e) rolls a cylinder.

g). Provide the computational time and space complexities of the algorithm devised in e).

h). Formulate the problem, abbreviated as NAal, which counts the number of permuta-
tions with at least k ascents.

i). Devise an algorithm to find NAal(n, k).

j). Demonstrate how the algorithm devised in i) rolls a cylinder to find NAal(6, 3).

k). Provide the computational time and space complexities of the algorithm devised in i).

Q 7.40. Consider the ascent number in GBW sequence problems. The Eulerian number of the second kind $\left\langle\!\!\left\langle{n \atop k}\right\rangle\!\!\right\rangle$ Problem 6.17 defined on page 349 is the number of GBW sequences with exactly k ascents. Devise algorithms utilizing the Eulerian number of the second kind triangle given in Figure 7.50.

a). Devise a strong inductive programming algorithm while minimizing the space required
using cylindrical arrays for the Eulerian number of the second kind Problem 6.17.

b). Demonstrate how the algorithm devised in a) rolls a cylinder.



1
1 0
1 2 0
1 8 6 0
1 22 58 24 0
1 52 328 444 120 0
1 114 1452 4400 3708 720 0
1 240 5610 32120 58140 33984 5040 0

Figure 7.50: Eulerian number of the second kind triangle

c). Provide the computational time and space complexities of the algorithm devised in a).

d). Formulate the problem, abbreviated as NA2am, which counts the number of GBW
sequences with at most k ascents.

e). Devise an algorithm using a cylindrical array on the Eulerian number of the second
kind triangle to find NA2am(n, k).

f). Demonstrate how the algorithm devised in e) rolls a cylinder.

g). Provide the computational time and space complexities of the algorithm devised in e).

h). Formulate the problem, abbreviated as NA2al, which counts the number of GBW
sequences with at least k ascents.

i). Devise an algorithm using a cylindrical array on the Eulerian number of the second
kind triangle to find NA2al(n, k).

j). Demonstrate how the algorithm devised in i) rolls a cylinder to find NA2al(6, 3).

k). Provide the computational time and space complexities of the algorithm devised in i).

Q 7.41. Consider the integer partition triangle in Figure 6.22 on page 326 or Figure 7.46
on page 415 to solve various integer partition related problems.

a). Formulate the problem of partitioning a positive integer into at least k parts, or simply IPal. For example, a positive integer (n = 4) can be represented in two different ways in at least (k = 3) parts: {(2 + 1 + 1), (1 + 1 + 1 + 1)}.

b). Devise an algorithm using a cylindrical array to find IPal(n, k).

c). Demonstrate how the algorithm devised in b) rolls a cylinder to find IPal(7, 4).

d). Provide the computational time and space complexities of the algorithm devised in
b).

e). Formulate the problem of partitioning a positive integer into any number of parts, or simply IPN. For example, a positive integer (n = 4) can be represented in five different ways: {(4), (3 + 1), (2 + 2), (2 + 1 + 1), (1 + 1 + 1 + 1)}.

f). Devise an algorithm using a cylindrical array to find IPN(n).



g). Demonstrate how the algorithm devised in f) rolls a cylinder to find IPN(7).
h). Provide the computational time and space complexities of the algorithm devised in f).
i). Devise an algorithm using a cylindrical array to find IPam(n, k) where IPam stands
for the integer partition number of at most k parts Problem 6.12, I(n, k), defined on
page 331. For example, a positive integer (n = 4) can be represented in three different
ways in at most (k = 2) parts: {(4), (3 + 1), (2 + 2)}.
j). Demonstrate how the algorithm devised in i) rolls a cylinder.
k). Provide the computational time and space complexities of the algorithm devised in i).

Q 7.42. Consider the Lucas sequence II coefficient problem, or simply LSC2, considered as an exercise in Q 6.23 on page 352. It was defined recursively in eqn (6.34). Note that the LSC2 triangle is given on page 352 as well.

a). Devise a strong inductive programming algorithm using a SW cylindrical array to find LSC2(n, k).

b). Demonstrate how the algorithm devised in a) rolls a cylinder to find LSC2(9, 4).

c). Provide the computational time and space complexities of the algorithm devised in a).

d). Devise a strong inductive programming algorithm using a SE cylindrical array to find LSC2(n, k).

e). Demonstrate how the algorithm devised in d) rolls a cylinder to find LSC2(9, 6).

f). Provide the computational time and space complexities of the algorithm devised in d).
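Most of the exercises in this section lean on the same space-saving idea: a one-dimensional table indexed modulo its length, so that the window of recently computed sub-solutions rolls like a cylinder. The following minimal Python sketch illustrates that idea on the unbounded subset sum equality setting of Q 7.27 a); the function name and layout are illustrative assumptions, not a solution key for any particular part.

# A minimal sketch of strong inductive programming with a circular array.
# reachable(n, A) is True iff some multiset of pack sizes from A sums to
# exactly n; only the last max(A) answers are live at any moment.

def reachable(n, A):
    m = max(A)                       # circular array size
    circ = [False] * m               # circ[i % m] holds the answer for i
    for i in range(n + 1):
        if i == 0:
            circ[0] = True           # base case: the empty sum
            continue
        # the old value at i % m is the answer for i - m; it is read on the
        # right-hand side (when a = m) before being overwritten
        circ[i % m] = any(i >= a and circ[(i - a) % m] for a in A)
    return circ[n % m]

print(reachable(11, [4, 6, 9]))      # False: 11 nuggets cannot be bought
print(reachable(17, [4, 6, 9]))      # True: 4 + 4 + 9

The running time is O(nk) for k pack sizes, while the space drops from Θ(n) to Θ(max A), which is precisely the trade-off the circular and cylindrical arrays in these exercises are meant to expose.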
Chapter 8

Tree Data Structures for Dictionary

The dictionary problem is a classic computer science problem that takes a set of unique
elements identified by key values and performs three fundamental operations on these ele-
ments: search, insert, and delete. A dictionary is an abstract data type (ADT) such that
any data structure that supports these three operations is a valid realization of a dictionary.
Table 8.1 presents computational complexities of dictionary operations implemented in dif-
ferent data structures that have been described in earlier chapters or will be presented later
in this book.
First, suppose that all elements are stored in a simple array as a list. An element can
simply be inserted at the end of the list. This insertion operation normally takes constant
time, but when the array is full, doubling the array size takes linear time. The sequential
search Algorithm 2.17 on page 57 can be used to search for a specific key-valued element in the list,
which takes O(n) time. If an item to be deleted is searched sequentially from the beginning
of the array, the deletion operation would take Θ(n) time since all elements located after
the deleted position must be shifted one by one after deleting an item in an array. If the
item to be deleted is searched from the end of the array, the deletion operation takes O(n)
time.
Next, if the dictionary data structure is a sorted array, the search operation can be
completed in O(log n) time using the binary search Algorithm 3.10 described on page 106.
The insertion into a sorted list operation defined on page 61 takes O(n) time by Algorithm 2.20 or 2.22.

Table 8.1: Dictionary operations with different data structures.

(a) Data structures in other chapters
 Dictionary |                        Data structure
 operations | simple array | sorted array | linked list | sorted L.L. | hash table
 search     | O(n)         | O(log n)     | O(n)        | O(n)        | O(n)/O(1)‡
 insert     | O(1)/O(n)*   | O(n)         | O(n)        | O(n)        | O(n)/O(1)‡
 delete     | O(n)         | O(n)         | O(n)        | O(n)        | O(n)/O(1)‡

(b) Data structures in this chapter
 Dictionary |                        Data structure
 operations | binary search tree | AVL tree | 2-3 tree, B tree, B+ tree, skip list
 search     | O(n)/O(log n)†     | O(log n) | Θ(log n)
 insert     | O(n)/O(log n)†     | Θ(log n) | Θ(log n)
 delete     | O(n)/O(log n)†     | Θ(log n) | Θ(log n)

 * exceptional case when the array is full / † average case analysis / ‡ amortized cost analysis

The deletion operation takes O(n) time since all elements must be shifted in the worst case. An array that does not shift elements when inserting or deleting is known as a hash table, which will be explained in more detail later on. If the dictionary structure is a plain linked list or a sorted linked list, all three operations take O(n) time, as discussed on page 69.
A dictionary should ideally be compatible with an efficient search method, as well as
enable users to insert new words and delete unused or incorrect words efficiently. As enu-
merated in Table 8.1 (b), this chapter describes data structures such as BST, AVL tree, 2-3
tree, B tree, B+ tree, and Skip list, in which the three fundamental dictionary operations
are performed efficiently. The primary objective of this chapter is to understand these data
structures’ operations and their respective computational complexities.

8.1 Binary Tree


Before embarking on the binary search tree, the binary tree is defined and its properties are examined in this section.

8.1.1 Definition
A tree is a connected digraph without a cycle. A rooted tree is a tree where each node has exactly one incoming edge except for one special node called the root. Hence, there are exactly n − 1 directed edges (arcs) in a tree with n vertices. The directed edge can be considered as a ‘parent-of’ relation. Nodes with no outgoing edge, i.e., no child, are called leaf nodes. A k-ary tree is a rooted tree where every node has up to k outgoing edges, i.e., each node can have up to k child sub-trees. A linked list, defined in Chapter 2, can be viewed as a unary tree where k = 1.
A binary tree is a special k-ary tree where k = 2. A binary tree is either empty or has left
and right sub-trees which are also binary trees recursively as depicted in Figure 8.1. Recall
that the node x in a linked list is recursively defined by x.val and x.next where x.next is
the rest of the linked list without x. A node in a binary tree can also be defined recursively
with two pointers instead of one.
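As a concrete illustration, the following minimal Python sketch mirrors this recursive two-pointer definition; the class and field names are illustrative, not prescribed by the text.

# A binary tree node with a key and two pointers; None plays the role of
# the empty (null) tree.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key        # the key value stored at this node
        self.left = left      # the left sub-tree
        self.right = right    # the right sub-tree

# The example tree of Figure 8.1 (b): 7 with sub-trees rooted at 4 and 9.
T = Node(7, Node(4, Node(3, Node(1)), Node(5)), Node(9, Node(8)))
print(T.key, T.left.key, T.right.key)   # 7 4 9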

[Figure: (a) the recursive structure of a binary tree: a node with a key and pointers to left and right sub-trees, which are themselves binary trees; (b) an example binary tree with root 7, internal nodes 4 and 9, and leaves 1, 5, and 8.]

Figure 8.1: The binary tree recursion and an example binary tree

A binary tree, T, stores a reference to the root node, which is the starting point. For example, in Figure 8.1 (b), T is the node with key 7. Let T.key denote the key value of the root node, here 7. Let T.Left and T.Right denote the left and right sub-trees, e.g., T.Left is the sub-tree rooted at 4 and T.Right is the sub-tree rooted at 9. Leaf nodes such as 5 have no children, and thus their Left and Right sub-trees are null (ε). Let par(x) be the parent node of the key x, e.g., par(3) = 4. Simplified trees with circular nodes instead of rectangular nodes with pointers will be used throughout this book, as shown in Figure 8.1 (b).

8.1.2 Depth and Height


The depth of a node x in a binary tree can be defined as the shortest path length from
the root node to x.

Problem 8.1. Depth of a node in a binary tree


Input: a binary tree T , a root node r, and a node x ∈ T .
Output: SPL(r, x)

As depicted in Figure 8.2 (a), the depth of a node in a binary tree is the same as the
level of the node in a level graph.

[Figure: (a) a binary tree annotated with the depth of each node, from 0 at the root down to 3; (b) and (c) trees annotated with the height of each node, from 0 at the deepest leaves up to 3 at the root.]

Figure 8.2: Depth and height of a node in a binary tree

The height of a node x in a binary tree is equivalent to the length of the path to the deepest leaf node, and thus can be solved using the longest path length problem.

Problem 8.2. Height of a node in a binary tree


Input: a binary tree T and a node x ∈ T .
Output: max(LPL(x, y)) where y ∈ the leaf node set and ∃ path(x, y).

The condition ∃ path(x, y) is added to avoid the max(LPL(x, y)) = ∞ case; LPL(x, y) = ∞ when there is no path from x to y. Instead of repeatedly invoking LPL for each leaf node, the height of a node in a binary tree Problem 8.2 can be trivially solved by the recurrence relation in eqn (8.1).

heightBT(x) = { −1                                                 if x = ε
              { max(heightBT(x.Left), heightBT(x.Right)) + 1       otherwise       (8.1)

The height of a tree is one more than the maximum of the heights of the left and right
sub-trees. The height of a binary tree T is the height of the root node.
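A minimal Python sketch of this recurrence follows; it assumes the illustrative Node class from the earlier sketch, repeated here so the fragment stands alone.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

# eqn (8.1): an empty tree has height -1, so a single node has height 0.
def heightBT(x):
    if x is None:
        return -1
    return max(heightBT(x.left), heightBT(x.right)) + 1

T = Node(7, Node(4, Node(3), Node(5)), Node(9))
print(heightBT(T))   # 2: the longest root-to-leaf paths have two edges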

8.1.3 Number of Rooted Binary Trees

[Figure: (a) all distinct rooted binary trees with n nodes, enumerated by the left/right sub-tree combinations BTi - root - BTn−i−1; |BT0| = 1, |BT1| = 1, |BT2| = 2, |BT3| = 5, and |BT4| = 14. (b) all rooted full binary trees with n internal nodes, enumerated analogously; |FBT0| = 1, |FBT1| = 1, |FBT2| = 2, and |FBT3| = 5.]

Figure 8.3: Number of rooted binary trees with n nodes

Let BTn be the set of all possible rooted binary trees with n nodes. Figure 8.3 (a) displays all distinct rooted binary trees for n = 0 ∼ 4. Let nrbt(n) = |BTn|, the cardinality of BTn, which is also known as the Catalan number [163]. The problem of finding the number of rooted binary trees is formulated as follows using the complete recurrence relation for the nth Catalan number given earlier in eqn (5.104).

Problem 8.3. Number of rooted binary trees (the nth Catalan number)
Input: n ∈ N.
Output: nrbt(n) = { 1                                          if n = 0 or 1
                  { Σ_{i=0}^{n−1} nrbt(i) · nrbt(n − i − 1)    if n > 1

To derive the recurrence relation in eqn (5.104), or equivalently in Problem 8.3, suppose that there are n nodes. A unique structure of a binary tree is determined by the
structures of the left and right sub-trees of the root node. If the size of the left sub-tree is i,
then the size of the right sub-tree should be n − i − 1 because the total number of nodes, n,
is the size of the left sub-tree, i, plus the size of the right subtree, (n − i − 1), plus one more
root node; n = i + (n − i − 1) + 1. If nrbt(i) and nrbt(n − i − 1) are known, those quantities
must be multiplied by the product rule of counting. Since i can range from 0 to n − 1, each case's values must be added by the sum rule of counting. Hence, the recurrence relation
in eqn (5.104) or in Problem 8.3 is derived. Figure 8.3 (a) illustrates the recurrence relation.
Many students may be perplexed by the fact that |BT0 | = 1 in the base case. How many
rooted trees are possible with zero nodes? The answer is one empty tree. This is the same
line of reasoning as 0! = 1 when counting.
The complete recurrence relation given in eqn (5.104) or in Problem 8.3 can be solved by
the strong inductive programming or memoization methods introduced in Chapter 5. First,
consider the following pseudo code for a strong inductive programming approach.

Algorithm 8.1. Catalan number (dynamic programming)

nrbt(n)
Declare a table L of size n + 1 . . . . . . . . . . . . . . . . . . . . . . 1
L[0] = L[1] = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
L[i] = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for j = 0 ∼ i − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
L[i] = L[i] + L[j] × L[i − j − 1] . . . . . . . . . . . . . . . 6
return L[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

A table of Catalan numbers for values of n up to 12 is given as follows:


n 0 1 2 3 4 5 6 7 8 9 10 11 12 ···
Cn 1 1 2 5 14 42 132 429 1430 4862 16796 58786 208012 ···

Next, a memoization method based on the complete recurrence relation in eqn (5.104)
or in Problem 8.3 is as follows: Assume that the table T0∼n is declared globally and CAT(n)
is called initially.

Algorithm 8.2. Catalan number (memoization)

Declare a global table T0∼n = 0’s


CAT(n) is called initially.
CAT(n)
if n = 0 or 1, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if n < 0, return 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if T [n] = 0, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = 0 ∼ n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
T [n] = T [n]+ CAT(i)× CAT(n − i − 1) . . . . . . . . . . . . . 5
return T [n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

The computational time complexities of strong inductive programming Algorithm 8.1


and memoization method Algorithm 8.2 are both Θ(n2 ) assuming that multiplication and
addition are constant operations. Both algorithms require a table to store solutions for all
sub-problems and thus the computational space complexity is Θ(n).
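For readers who prefer running code, here is a minimal Python sketch of both approaches; the function names are illustrative.

from functools import lru_cache

def nrbt_dp(n):
    # Algorithm 8.1: fill the table L[0..n] bottom-up.
    L = [0] * (n + 1)
    L[0] = 1
    if n >= 1:
        L[1] = 1
    for i in range(2, n + 1):
        L[i] = sum(L[j] * L[i - j - 1] for j in range(i))
    return L[n]

@lru_cache(maxsize=None)
def nrbt_memo(n):
    # Algorithm 8.2: recurse top-down, caching every sub-problem.
    if n <= 1:
        return 1
    return sum(nrbt_memo(i) * nrbt_memo(n - i - 1) for i in range(n))

print([nrbt_dp(i) for i in range(8)])   # [1, 1, 2, 5, 14, 42, 132, 429]
print(nrbt_memo(12))                    # 208012, matching the table above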

Solutions to various problems yield the same Catalan number sequence, including the
number of balanced parentheses problem considered on page 288 and the number of rooted
binary trees Problem 8.3. The number of rooted full binary trees with n number of internal
nodes is shown in Figure 8.3 (b). A binary tree is called a full binary tree if each of its
nodes is either a leaf node or has exactly two child sub-trees, i.e. no node has only one child
sub-tree. The problem of finding the number of rooted full binary trees is essentially the
same as finding the number of rooted binary trees if null nodes are viewed as leaf nodes.
Furthermore, 66 different interpretations of the Catalan numbers can be found in [163].

8.2 Binary Search Trees

[Figure: (a) a balanced binary search tree storing the keys 2, 4, 5, 7, 8, 9, 10, 11, 12, 14, 17, 18, 21, 23, with root 11, drawn above the keys in sorted order; (b) various binary search trees for A = ⟨1, 2, 3, 4, 5, 6, 7⟩, ranging from completely skewed trees of height 6 to a balanced tree of height 2.]

Figure 8.4: Binary search trees

In this section, we first define what a binary search tree is and then observe how the
three fundamental dictionary operations are implemented in a binary search tree.

8.2.1 Definition
A binary search tree, or simply BST, is a binary tree with a special ordering property.
In a binary search tree, the key values of all nodes in the left sub-tree are less than the root
node’s key and the key values of all nodes in the right sub-tree are greater than the root
node’s key. The left and right sub-trees must also be binary search trees. Several binary
search trees are shown in Figure 8.4. The binary search Algorithm 3.10 provides an implicit
BST, as depicted in Figure 3.16 (b) on page 105, resulting in a size balanced binary search
tree. Here, an explicit binary search tree data structure is considered. The problem of checking
whether a binary tree is a binary search tree can be formulated as follows:

Problem 8.4. isBST(T)

Input: A binary tree T
Output:

isBST(T) = { True    if ∀x ∈ T (∀y ∈ x.Left (y.key < x.key) ∧ ∀z ∈ x.Right (z.key > x.key))
           { False   otherwise                                                         (8.2)

Eqn (8.2) can be rewritten equivalently as follows:

isBST(T) = { True    if ∀x ∈ T (max(x.Left) < x.key ∧ min(x.Right) > x.key)
           { False   otherwise                                                         (8.3)

The in-order depth first traversal (DFT) of a BST visits nodes in increasing key order.
Hence, checking whether a binary tree is a binary search tree can be done trivially by
in-order DFT as stated as follows:
Algorithm 8.3. Checking whether a binary tree is a binary search tree
isBST(T )
inord = DFTinorder(T ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
return issorted(inord) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Algorithm 8.3 takes linear time as the depth first traversal takes Θ(n) time and checking
whether a sequence is in increasing order also takes Θ(n) time. In order to have an O(n)
BST checking algorithm, the DFTinorder algorithm can be modified as follows:
Algorithm 8.4. Checking whether a binary tree is a binary search tree
Let cur be a global variable and cur = −∞ initially.
Call isBST(r) initially where r is the root node of T .
isBST(x)
if x = null, return True . . . . . . . . . . . . . . . . . . . . . . . . 1
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
tmp = isBST(x.Left) . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if tmp = False ∨ cur > x.key, return False . . . 4
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
cur = x.key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return isBST(x.Right) . . . . . . . . . . . . . . . . . . . . . . . . 7
The computational time complexity of Algorithm 8.4 is Θ(log n) at best and Θ(n) at
worst.
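A minimal Python sketch of Algorithm 8.4 follows; cur is realized as a closure variable instead of a global, which is an implementation choice rather than part of the algorithm.

import math

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def is_bst(root):
    cur = -math.inf                  # the last key seen in in-order

    def visit(x):
        nonlocal cur
        if x is None:
            return True
        if not visit(x.left) or cur > x.key:
            return False             # in-order sequence failed to increase
        cur = x.key
        return visit(x.right)

    return visit(root)

T = Node(7, Node(4, Node(3), Node(5)), Node(9, Node(8)))
print(is_bst(T))                     # True
T.left.right.key = 42                # 42 in 7's left sub-tree breaks the order
print(is_bst(T))                     # False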

8.2.2 Search in BST


The most rudimentary dictionary operation is the search operation, i.e. looking up a
query element.
Problem 8.5. Search operation in a BST

Input: a BST T and a quantifiable element q
Output: { True    if q ∈ T
        { False   otherwise (q ∉ T)

Searching an element in a binary search tree can be done in a nearly identical approach
to that of the binary search Algorithm 3.10. Starting from the root node, if the root node
matches with the query, the query element is found. Otherwise, either the left or the right
sub-binary search tree is explored depending on whether the query element is less than or
greater than the key value of the root node. The sub-problem with a sub-binary search tree
is processed recursively until it finds a match or reaches a null tree, i.e., the query element
is not found.

Algorithm 8.5. BST- search

BSTsearch(T, q)
if T = null, return False . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else if T.key = q, return True . . . . . . . . . . . . . . . . . . . . . . . 2
else if T.key > q, return BSTsearch(T.Left, q) . . . . . . . . . . . . 3
else (if T.key < q), return BSTsearch(T.Right, q) . . . . . . . . . . 4

The computational time complexity of Algorithm 8.5 is constant at best, when the search query is the root node, and can reach O(n) at worst, depending on the height of the binary search tree. The height of a binary search tree is O(log n) at best and O(n) at worst, as shown in the leftmost and rightmost trees in Figure 8.4 (b). A unary tree, where all nodes have zero or exactly one child sub-tree, is the worst case scenario of a binary tree, whose height is n − 1.
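A minimal Python sketch of Algorithm 8.5:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_search(t, q):
    if t is None:
        return False                 # fell out of the tree: q is absent
    if t.key == q:
        return True
    if t.key > q:
        return bst_search(t.left, q)
    return bst_search(t.right, q)

T = Node(7, Node(4, Node(3), Node(5)), Node(9, Node(8)))
print(bst_search(T, 5), bst_search(T, 6))   # True False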

8.2.3 Insertion in BST

[Figure: (a) inserting 5 into the BST rooted at 7: the search falls out of the tree below 4, where 5 is linked in as a leaf; (b) inserting 12 into the same tree: the new leaf becomes a child of 14; (c) building a BST by inserting ⟨11, 4, 15, 2, 5, 12, 1, 7, 9⟩ in sequence, one snapshot per insertion.]

Figure 8.5: Building a BST by insert operations

Another principal operation of a dictionary abstract data type is inserting a new item
into an existing dictionary. This insert operation in a BST as a dictionary can be defined
as a computational problem as follows:

Problem 8.6. Insertion operation in a BST


Input: a BST, T and a quantifiable element, q
Output: a new BST, T 0 such that q ∈ T 0 and ∀x ∈ T, x ∈ T 0

To insert a new element, the search operation is first performed to locate the position
where the element is to be inserted. If all key values are unique, the search will fail and
fall out of the tree. Once the leaf level node to which the new element is to be inserted is
identified, the element to be inserted is nodified first as a leaf node and then linked to the
tree. Nodifying an element q means that the element becomes a single node tree where the
node’s key value is q and the left and right sub-trees are null.

Algorithm 8.6. BST- insert

BSTinsert(T, q)
if T = null, T = nodify(q) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else if T.key > q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if T.Left = null, T.Left = nodify(q) . . . . . . . . . . . . . . . . . . 3
else, BSTinsert(T.Left, q) . . . . . . . . . . . . . . . . . . . . . . . . 4
else (T.key ≤ q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if T.Right = null, T.Right = nodify(q) . . . . . . . . . . . . . . . . . 6
else, BSTinsert(T.Right, q) . . . . . . . . . . . . . . . . . . . . . . . . 7

The computational time complexity of Algorithm 8.6 is O(n) by the same line of rea-
soning as for the BST search Algorithm 8.5. It depends on the height of the binary tree.
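The following minimal Python sketch of Algorithm 8.6 returns the (possibly new) sub-tree root, a common functional variant of the pointer manipulation described above; duplicates go to the right, matching the pseudo code.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_insert(t, q):
    if t is None:
        return Node(q)               # nodify(q): a single-node tree
    if t.key > q:
        t.left = bst_insert(t.left, q)
    else:
        t.right = bst_insert(t.right, q)
    return t

T = None
for key in [11, 4, 15, 2, 5, 12, 1, 7, 9]:   # the sequence of Figure 8.5 (c)
    T = bst_insert(T, key)
print(T.key, T.left.key, T.right.key)        # 11 4 15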

8.2.4 Min and Max in BST

[Figure: (a) a BST rooted at 21 with min(21) = 1 reached along the left spine (highlighted in red) and max(21) = 38 reached along the right spine (highlighted in blue); (b) a BST rooted at 17 with min(17) = 2 and max(17) = 29.]

Figure 8.6: Minimum and maximum elements in a binary search tree

Consider the problems of finding the minimum and maximum elements of a binary search
tree. Although these operations are not rudimentary operations of a dictionary, they will
be crucial to designing a deletion operation in the following subsection. Examples of finding
minimum and maximum elements of binary search trees are highlighted in Figure 8.6. First, to find the minimum, one can traverse the BST to the leftmost node that does not have a
left sub-tree. It does not matter if the node has a right sub-tree, since all elements in the
right sub-tree are greater than the node by definition of BST. The path from the root to
the node with the minimum value forms the left spine of the binary tree as highlighted in
red in Figure 8.6. A simple recursive left traversal algorithm is stated as follows:

Algorithm 8.7. BST- findmin

BSTmin(T)
if T = null, return null . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else if T.Left = null, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
return T.key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return BSTmin(T.Left) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Algorithm 8.8. BST- findmax

BSTmax(T)
if T ≠ null, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
tmp = T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
while tmp.Right ≠ null . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
tmp = tmp.Right . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return tmp.key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Similarly, one can traverse the BST to the rightmost node that does not have a right
sub-tree to find the maximum value. The path from the root to the node with the maximum
value forms the right spine of the binary tree as highlighted in blue in Figure 8.6. A simple
iterative right traversal algorithm is stated in Algorithm 8.8.
The computational time complexities of both Algorithms 8.7 and 8.8 are O(n) as they
both depend on the height of the tree.
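A minimal Python sketch of Algorithms 8.7 and 8.8, one recursive and one iterative as in the pseudo code:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_min(t):
    if t is None:
        return None
    return t.key if t.left is None else bst_min(t.left)   # walk the left spine

def bst_max(t):
    if t is None:
        return None
    while t.right is not None:                            # walk the right spine
        t = t.right
    return t.key

T = Node(21, Node(10, Node(9, Node(3, Node(1)))), Node(31, Node(25), Node(38)))
print(bst_min(T), bst_max(T))        # 1 38, as in Figure 8.6 (a)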

8.2.5 Deletion in BST


Out of the three fundamental dictionary operations in a BST, the deletion operation
tends to be the most complicated to design. The problem of a deletion operation in a BST
is defined similarly to that of the insertion operation. This deletion operation in a BST as
a dictionary can be defined as a computational problem as follows:
Problem 8.7. Deletion operation in a BST

Input: a BST T and an element q
Output: { T′ such that isBST(T′) ∧ {x | x ∈ T′} = {x | x ∈ T} − {q}   if q ∈ T
        { T and/or a not-found error                                   otherwise
There are three cases to consider depending on the number of child nodes of the node
to be deleted. The first and simplest case is that the item to be deleted is located at one of

[Figure: (a) deleting the leaf node 8: its parent's pointer is set to null; (b) deleting 11, a node with one child: its parent is linked directly to the child sub-tree; (c) deleting 7, a node with two children, by replacing it with the maximum of its left sub-tree; (d) deleting 7 by replacing it with the minimum of its right sub-tree.]

Figure 8.7: BST delete operations



the leaf nodes, i.e. no child nodes. To delete a leaf node x, let par(x) point to a null value
instead of to x as depicted in Figure 8.7 (a). The second case is that the item to be deleted
only has one sub-tree. To delete a node x with one sub-tree, let par(x) point to x.left or
x.right as relevant instead of to x as depicted in Figure 8.7 (b). Finally, the last case is
when the item to be deleted has both left and right sub-trees. This is the most difficult
to implement of the three cases. There are two options for deleting a node that has both
left and right sub-trees. To delete a node x with two sub-trees, replace x with either the
maximum element in the left sub-tree of x or the minimum element in the right sub-tree of x
as illustrated in Figures 8.7 (c) and (d), respectively. First, delete the minimum element or
maximum element, which is simple since the node either has no children or only one child.
Next, replace x.key with the deleted key value. A pseudo code for the deletion operation is
written as follows:
Algorithm 8.9. BST- Deletion
BSTdelete(t, q)
if t = null, not found error . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
else if t.key > q, BSTdelete(t.left, q) . . . . . . . . . . . . . . . . . . 2
else if t.key < q, BSTdelete(t.right, q) . . . . . . . . . . . . . . . . 3
else if t.key = q ∧ t.left = null, t = t.right . . . . . . . . . . . . . 4
else if t.key = q ∧ t.right = null, t = t.left . . . . . . . . . . . . . 5
else (if t.key = q ∧ t.right 6= null ∧ t.left 6= null), . . . . . . . . . 6
t.key = BSTmax(t.left) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
BSTdelete(t.left, t.key) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Lines 2 and 3 of Algorithm 8.9 traverse the BST to locate the item to be deleted. If a
null node is reached, an error message is reported in line 1. Lines 4 and 5 take care of the
second case where the node to be deleted has only one child. The first case where the node
to be deleted has no children is reflected in line 4. Finally, lines 6 ∼ 8 handle the last case
where the node to be deleted has two children. The maximum element of the left sub-tree
replaces the item to be deleted in line 7. BSTmin(t.right), the minimum value of the right
sub-tree, can also be used to replace the item to be deleted. Once either the designated
minimum or maximum value replaces the deleted item, it must be deleted as indicated in
line 8.
Before deleting an item, the item must be searched for in the binary search tree. Once the
item is found, the rest of the deletion operation takes constant time. Hence, the computa-
tional time complexity of deletion Algorithm 8.9 is the same as that of search Algorithm 8.5.
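A minimal Python sketch of Algorithm 8.9, in the same sub-tree-returning style as the insertion sketch above:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_max(t):
    while t.right is not None:
        t = t.right
    return t.key

def bst_delete(t, q):
    if t is None:
        raise KeyError(q)                  # not found error
    if t.key > q:
        t.left = bst_delete(t.left, q)
    elif t.key < q:
        t.right = bst_delete(t.right, q)
    elif t.left is None:                   # leaf, or right child only
        return t.right
    elif t.right is None:                  # left child only
        return t.left
    else:                                  # two children: pull up the
        t.key = bst_max(t.left)            # maximum of the left sub-tree
        t.left = bst_delete(t.left, t.key)
    return t

T = Node(7, Node(3, Node(1), Node(5, Node(4))), Node(11, Node(8), Node(14, Node(12))))
T = bst_delete(T, 7)
print(T.key)                               # 5, as in Figure 8.7 (c)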

8.2.6 Average Case Analysis


The computational time complexities of all three dictionary operations in a BST rely
on the height of the BST. The best and worst cases for height can simply be shown by
examples. The best case is a well balanced binary tree whose height is Θ(log n) and the
worst case is a skewed binary tree or unary tree whose height is Θ(n). This section shall
prove that the average height of a binary search tree is Θ(log n) and that consequently the
expected running time of all three dictionary operations in a BST is Θ(log n) as well. This
can be verified by a strong induction.
Theorem 8.1. The expected running time of the BST insert, delete, and search operations is Θ(log n).

Proof. Recall the recurrence relation of the height of a binary tree in eqn (8.1). It is one more than the maximum of the heights of the left and right sub-trees. Since the sizes of the left and right sub-trees vary, the expected running time adds up all possible cases and divides by the number of cases. Hence, we have the following complete recurrence relation:

T(n) = (Σ_{i=0}^{n−1} max(T(i), T(n − i − 1))) / n + 1                           (8.4)

The claim is that the complete recurrence relation in eqn (8.4) is Θ(log n).
(Proof by strong induction) Supposing T(i) = Θ(log i) for all 1 ≤ i ≤ k, show T(k + 1) = Θ(log (k + 1)).

T(k + 1) = (Σ_{i=0}^{k} max(T(i), T(k − i))) / (k + 1) + 1      by eqn (8.4)
         ≤ ((k + 1) Θ(log k)) / (k + 1) + 1                     by assumption
         = O(log k)

T(k + 1) ≥ (Σ_{i=0}^{k} T(i)) / (k + 1) + 1
         = Ω((k Θ(log k)) / (k + 1) + 1)                        by assumption
         = Ω(log k)

Hence, the expected running time is Θ(log n). □

8.3 AVL Trees


A major problem with binary search trees is that a BST can be unbalanced, i.e., the
height can be O(n) in the worst case. Devising a balanced search tree had been a challenge
for computer scientists until 1962. The first known balanced binary search tree data struc-
ture is the AVL tree, named after its inventors Adelson-Velsky and Landis. It was proposed

Georgy M. Adelson-Velsky (1922-2014) was a Soviet and Israeli mathematician


and computer scientist. His major inventions include Kaissa, a chess program developed
in the 1960s, and the AVL tree in collaboration with Landis.
© Photo credit: courtesy of Cornell University

Evgenii M. Landis (1921 - 1997) was a Soviet mathematician. His major con-
tributions include uniqueness theorems for elliptic and parabolic differential equations,
Harnack inequalities, and Phragmén-Lindelöf type theorems. He invented AVL data
structure along with Adelson-Velsky.
© Photo credit: Konrad Jacobs, MFO, licensed under CC BY-SA 2.0 DE.

in [1]. An AVL tree is nothing but a BST with one added height balance constraint: the
height difference between the left and right sub-trees is at most 1 for every node. The AVL
tree guarantees O(log n) time complexity for all three dictionary (search, insert, and delete)
operations.

8.3.1 Height Balanced Binary Tree


One of the many possible measures for defining a balanced binary tree is balanced height.
A binary tree is said to be height balanced if the height difference between the left and right
sub-trees is at most 1 for every node. The problem of checking whether a binary tree is
height balanced is formally defined below. The heightBT function was previously defined
in eqn (8.1).
Problem 8.8. Checking height balance of a binary tree

Input: a binary tree T
Output: { True    if ∀x ∈ T (|heightBT(x.Left) − heightBT(x.Right)| ≤ 1)
        { False   otherwise (∃x ∈ T (|heightBT(x.Left) − heightBT(x.Right)| > 1))

The height balancedness of a binary tree can be validated by checking the balancedness of each node recursively, starting from the root node. This recursive depth first traversal check is stated in the following eqn (8.5).

isheightBal(T) = { True                                              if T = ε
                 { False          if |heightBT(T.Left) − heightBT(T.Right)| > 1      (8.5)
                 { isheightBal(T.Left) ∧ isheightBal(T.Right)        otherwise

The computational time complexity of the recursive algorithm in eqn (8.5) is O(n), provided that node heights are computed once along the same recursive depth first traversal rather than recomputed from scratch at every node.
An AVL tree is a height balanced BST. It has two important properties: the same
ordering property as BSTs given in eqn (8.3) and the height balance property in eqn (8.5).
To check whether a binary tree is an AVL, one must check whether it is a BST and height
balanced.
isAVL(T ) = isBST(T ) ∧ isheightBal(T ) (8.6)
Validating an AVL tree clearly takes linear time.
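A minimal Python sketch that checks eqn (8.5) in linear time by returning each node's height together with its balancedness in one post-order traversal:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height_and_balance(t):
    # returns (height, is height balanced) for the sub-tree t
    if t is None:
        return -1, True
    hl, bl = height_and_balance(t.left)
    hr, br = height_and_balance(t.right)
    return max(hl, hr) + 1, bl and br and abs(hl - hr) <= 1

def is_height_bal(t):
    return height_and_balance(t)[1]

T = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7, None, Node(8))))
print(is_height_bal(T))              # True
T.right.right.right.right = Node(9)  # lengthen the rightmost path
print(is_height_bal(T))              # False: node 6 is now unbalanced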
A couple of valid and several invalid AVL trees are provided in Figure 8.8. The nodes are annotated with their respective heights. Both trees in Figure 8.8 (a) and (b) are height balanced and are also binary search trees. The BST in Figure 8.8 (c) is not an AVL tree because the height difference between the two sub-trees of node 7 is 2, which violates the allowed bound, 1. The BST in Figure 8.8 (d) is not an AVL tree because of nodes 4 and 8. Recall that the height of a null node is −1. The last tree in Figure 8.8 (e) is not an AVL because it is not a BST, even though it is height balanced.
Given a height balanced binary tree of height h, consider the problem of finding upper and lower bounds for the number of nodes. The upper bound O(2^h) is straightforward, as one can construct a perfect binary tree of height h with 2^{h+1} − 1 nodes. The number of nodes in a binary tree is one plus the number of nodes in the left sub-tree plus

[Figure: five binary trees annotated with node heights: (a) and (b) are BSTs and height balanced; (c) is a BST but not height balanced; (d) is a BST but not height balanced; (e) is height balanced but not a BST.]

Figure 8.8: Valid AVL trees in (a) and (b); invalid AVL trees in (c) ∼ (e)

the number of nodes in the right sub-tree. The lower bound occurs when the heights of the
left and right sub-trees differ by one. One such binary tree is the Fibonacci tree shown in
Figure 3.43 on page 139. Its recurrence relation of the number of nodes in a Fibonacci tree
of height h, or simply FTN(h), was given in eqn (3.33) on page 140.
Theorem 8.2. The height of a height balanced tree or an AVL tree is Θ(log n).

Proof. Since the height of a perfect binary tree is clearly Θ(log n), we shall prove the height of the Fibonacci tree only, where FTN(h) = n.

FTN(h) = FTN(h − 1) + FTN(h − 2) + 1       by eqn (3.33)
       < 2 FTN(h − 1)                      since FTN(h − 1) > FTN(h − 2) + 1
       < 2^h
∴ h = Ω(log n)

FTN(h) > 2 FTN(h − 2)                      since FTN(h − 2) < FTN(h − 1) + 1
       > 2^{h/2}
∴ h = O(log n)

h = Θ(log n) since h = Ω(log n) ∧ h = O(log n). □

The insertion and deletion methods for a BST may breach the height balance of an AVL
tree. Hence, efficient insertion and deletion methods that guarantee the height balance of
an AVL tree are necessary.

8.3.2 Insertion in AVL


Inserting a new element in an AVL tree by BST insertion Algorithm 8.6 fails to produce
a height balanced tree in some cases. Hence, it is necessary to rebalance the tree after each
insertion operation.
There are four cases to consider. Let x be a node that violates the height balance property and x.H be the height of the sub-tree rooted at x. As depicted in Case 1 in Figure 8.10, the first case occurs when x.Left.H > x.Right.H and x.Left.Left.H > x.Left.Right.H. This violation can be fixed by rotating right as depicted in Figure 8.9 (a). The BST ordering property is preserved and the height balance is maintained after the rotation. A pseudo code is given in Algorithm 8.10. Height information for the node must be updated after a rotation as stated in line 4.

[Figure: rotation rules, where X, Y, Z and A, B, C, D denote sub-trees with X < k1 < Y < k2 < Z and A < k1 < B < k2 < C < k3 < D:
(a) case 1: single right rotation at k2; (b) case 2: single left rotation at k1;
(c) case 3: a single right rotation at k3 fails to rebalance; (d) case 3: further decomposition;
(e) case 3: double rotation: left rotation at k1 followed by right rotation at k3;
(f) case 4: a single left rotation at k1 fails to rebalance; (g) case 4: further decomposition;
(h) case 4: double rotation: right rotation at k3 followed by left rotation at k1.]

Figure 8.9: AVL tree rules for balancing height



[Figure: step-by-step snapshots of inserting ⟨7, 4, 1, 5, 6, 2, 3, 9, 8⟩ into an initially empty AVL tree, with the case 1, case 2, case 3, and case 4 rebalancing rotations marked where they occur.]

Figure 8.10: AVL insertion illustration: insert ⟨7, 4, 1, 5, 6, 2, 3, 9, 8⟩ in sequence

Algorithm 8.10. Rotate right (AVL)

rotate right(x)
y = x.Left . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
x.Left = y.Right . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
y.Right = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
x.H = max(x.Left.H, x.Right.H) + 1; y.H = max(y.Left.H, y.Right.H) + 1 . . 4
return y . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Algorithm 8.11. Rotate left (AVL)

rotate left(x)
y = x.Right . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
x.Right = y.Left . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
y.Left = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
x.H = max(x.Left.H, x.Right.H) + 1; y.H = max(y.Left.H, y.Right.H) + 1 . . 4
return y . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
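A minimal Python sketch of the two rotations; each node caches its height h, with h(None) = −1, and heights are refreshed bottom-up after the pointers move.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.h = 0

def h(t):
    return -1 if t is None else t.h

def update(t):
    t.h = max(h(t.left), h(t.right)) + 1

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    update(x)                        # x is now y's child: refresh x first
    update(y)
    return y                         # y replaces x as the sub-tree root

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x)
    update(y)
    return y

# Case 1 of Figure 8.10: inserting 7, 4, 1 left-skews the tree, and one
# right rotation at 7 restores balance with 4 as the new root.
t = Node(7, Node(4, Node(1)))
t.left.h, t.h = 1, 2
t = rotate_right(t)
print(t.key, t.left.key, t.right.key, t.h)   # 4 1 7 1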

Figure 8.10 illustrates inserting the elements ⟨7, 4, 1, 5, 6, 2, 3, 9, 8⟩ into an AVL tree one by one.
The second case is the mirror case of case 1, so a left rotation is needed to rebalance the
tree as depicted in Figure 8.9 (b). Pseudo code for a left rotation is in Algorithm 8.11. An
example of case 2 is given in Figure 8.10 where the node 1 violates the height balance.
The third case occurs when x.Left.H > x.Right.H but x.Left.Left.H < x.Left.Right.H. A single right rotation on a node with a case 3 violation preserves the BST ordering but fails to rebalance the tree, as depicted in Figure 8.9 (c). Hence, the height balance violating sub-tree rooted at node x is further decomposed as shown in Figure 8.9 (d). The double rotation, that is, the left rotation at k1 followed by the right rotation at k3, guarantees restored height balance of the tree. Finally, the fourth case is the mirror case of case 3. The opposite double rotation, that is, the right rotation at k3 followed by the left rotation at k1, guarantees rebalancing in the fourth case. A pseudo code for rebalancing the tree at a node which violates height balance by exactly 2 is given below. It identifies and combines all four cases.
Algorithm 8.12. AVL-rebalance

AVLrebalance(x)
if x.Left.H > x.Right.H + 1, . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if x.Left.Left.H ≥ x.Left.Right.H, . . . . . . . . . . . . . . . . . . . . . 2
rotate right(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
rotate left(x.Left) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
rotate right(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
else if x.Right.H > x.Left.H + 1, . . . . . . . . . . . . . . . . . . . . . . . 7
if x.Right.Right.H ≥ x.Right.Left.H, rotate left(x) . . . . . . . . . . . 8
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
rotate right(x.Right) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
rotate left(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Line 3 and line 8 are cases 1 and 2, respectively. Lines 5 to 6 are case 3 and lines 10
to 11 are case 4. The following pseudo code for inserting an element in an AVL tree is
almost identical to the recursive BST insertion Algorithm 8.6 but invokes the AVLrebalance
Algorithm 8.12 right before returning.
Algorithm 8.13. AVL- insert

AVLinsert(T, q)
if T = null, T = nodify(q) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else if T.key > q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if T.Left = null, T.Left = nodify(q) . . . . . . . . . . . . . . . . . . . . 3
else, AVLinsert(T.Left, q) . . . . . . . . . . . . . . . . . . . . . . . . . 4
else (T.key ≤ q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if T.Right = null, T.Right = nodify(q) . . . . . . . . . . . . . . . . . . 6
else, AVLinsert(T.Right, q) . . . . . . . . . . . . . . . . . . . . . . . . 7
AVLrebalance(T) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Only the nodes on the path from the root to the newly inserted node are visited and checked for height balance. Hence, the computational time complexity of Algorithm 8.13 is Θ(log n). Although all nodes on the path from the root to the newly inserted node are checked for height balance, rebalancing occurs exactly once if there exists a node that violates the height balance.
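Putting the pieces together, here is a minimal Python sketch of Algorithms 8.12 and 8.13; it repeats the rotation helpers above so the fragment stands alone, and the four branches of rebalance correspond to the four cases.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.h = 0

def h(t):
    return -1 if t is None else t.h

def update(t):
    t.h = max(h(t.left), h(t.right)) + 1

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    update(x); update(y)
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def rebalance(x):
    update(x)
    if h(x.left) > h(x.right) + 1:
        if h(x.left.left) >= h(x.left.right):
            return rotate_right(x)               # case 1
        x.left = rotate_left(x.left)             # case 3
        return rotate_right(x)
    if h(x.right) > h(x.left) + 1:
        if h(x.right.right) >= h(x.right.left):
            return rotate_left(x)                # case 2
        x.right = rotate_right(x.right)          # case 4
        return rotate_left(x)
    return x

def avl_insert(t, q):
    if t is None:
        return Node(q)
    if t.key > q:
        t.left = avl_insert(t.left, q)
    else:
        t.right = avl_insert(t.right, q)
    return rebalance(t)

t = None
for key in [7, 4, 1, 5, 6, 2, 3, 9, 8]:          # the sequence of Figure 8.10
    t = avl_insert(t, key)
print(t.key, h(t))                               # 4 3: balanced, logarithmic height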

8.3.3 Deletion in AVL


Deleting an element from an AVL tree by the pure BST deletion method in Algorithm 8.9
may cause height balance violations in nodes along the path between the deleted node and
the root node. Figure 8.11 illustrates deletion of 10, 6, and 9 in sequence. Note that when an
internal node x is deleted, the minimum of its right sub-tree replaces x and is then deleted.
The pseudo code for deleting an element from an AVL tree is almost identical to the BST
deletion Algorithm 8.9 except for rebalancing.

Algorithm 8.14. AVL- Deletion

AVLdelete(t, q)
if t = null, not found error . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else if t.key > q, AVLdelete(t.left, q) . . . . . . . . . . . . . . . . . . . 2
else if t.key < q, AVLdelete(t.right, q) . . . . . . . . . . . . . . . . . . 3
else if t.key = q ∧ t.left = null, t = t.right . . . . . . . . . . . . . . . 4
else if t.key = q ∧ t.right = null, t = t.left . . . . . . . . . . . . . . . 5
else (if t.key = q ∧ t.right ≠ null ∧ t.left ≠ null), . . . . . . . . . . . 6
t.key = BSTmax(t.left) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
AVLdelete(t.left, t.key) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
AVLrebalance(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

[Figure: (a) deleting 10 from an AVL tree, triggering case 3 and case 2 rebalancing rotations on the way back to the root; (b) deleting 6, triggering a case 2 rotation; (c) deleting 9.]

Figure 8.11: AVL deletion illustration: delete 10, 6, and 9 in sequence

The computational time complexity of Algorithm 8.14 is Θ(log n). When deleting an
element, multiple nodes along the path toward the root node may require rebalancing.

8.4 2-3 Trees


A 2-3 tree [2, p169-189] is an alternative balanced search tree that guarantees logarithmic
computational time complexities for all three dictionary operations. According to [42, p 454],
J. E. Hopcroft invented 2-3 trees in 1970.

8.4.1 Definition
A 2-3 tree is a ternary search tree where each internal node has either 2 or 3 child
sub-trees. Before introducing the 2-3 tree, the ternary search tree, or simply TST, is first
defined. As depicted in Figure 8.12 (a) and (b), each node contains either one or two keys,

John Edward Hopcroft (1939-), is an American computer scientist. He is the co-


recipient (with Robert Tarjan) of the 1986 Turing Award “for fundamental achievements
in the design and analysis of algorithms and data structures.” He is also the co-recipient
(with Jeffrey Ullman) of the 2010 IEEE John von Neumann Medal “for laying the foun-
dations for the fields of automata and language theory and many seminal contributions
to theoretical computer science.” © Photograph in the public domain.

[Figure: (a) a node with one key k1 and two sub-trees L and M, where the keys in L are at most k1 and the keys in M are at least k1; (b) a node with two keys k1 and k2 and three sub-trees L, M, and R, where L ≤ k1 ≤ M ≤ k2 ≤ R; (c) a sample 2-3 tree; (d) a BST equivalent to the tree in (c); (e) a sample 2-3 tree; (f) a BST equivalent to the tree in (e).]

Figure 8.12: 2-3 tree

k1 and/or k2 . A ternary tree is said to be a ternary search tree if all key values in the
leftmost sub-tree are less than k1 , all key values in the middle sub-tree are greater than k1
but less than k2 , and all key values in the rightmost sub-tree are greater than k2 . This is the
essential ternary search tree property. Each internal node can have up to three sub-trees.
Let x.L, x.M and x.R be the left, middle and right sub-trees of a node x. Let x.k1 and x.k2
be the first and second key values of a node x.
A couple of ternary search trees are given in Figure 8.12 (c) and (e) and their equivalent
binary search trees are given in Figure 8.12 (d) and (f), respectively. Each pair of circled
nodes in the BST in Figure 8.12 (d) represents a single node in the TST in Figure 8.12 (c).
The relationship between Figures 8.12 (e) and (f) suggests that all BSTs are TSTs.
Finding the minimum element in a TST is similar to finding the minimum in a BST as
stated in Algorithm 8.7 on page 444. Only the left sub-tree is explored until no child left
sub-tree remains as recursively defined in eqn (8.7). Finding the maximum element of a
ternary search tree is a little more complicated than for that of a BST. Since a node may
have one or two key values, four cases must be considered as recursively defined in eqn (8.8).

min3T(T) = { T.k1             if T.L = ε
           { min3T(T.L)       otherwise                                          (8.7)

max3T(T) = { T.k2             if T.k2 ≠ ε ∧ T.R = ε
           { T.k1             if T.k2 = ε ∧ T.M = ε
           { max3T(T.R)       if T.k2 ≠ ε ∧ T.R ≠ ε                              (8.8)
           { max3T(T.M)       if T.k2 = ε ∧ T.M ≠ ε


Using the min3T and max3T subroutines in eqns (8.7) and (8.8), the problem of checking whether a ternary tree is a ternary search tree is formally defined as follows:

Problem 8.9. is a ternary search tree(T)

Input: A ternary tree T
Output: isTST(T)

isTST(T) = { True    if ∀x ∈ T, TST prop(x)
           { False   otherwise (if ∃x ∈ T, ¬TST prop(x))                         (8.9)

TST prop(x) = { False   if (x.k1 ≠ ε ∧ x.k2 ≠ ε ∧ x.k1 > x.k2)
              {            ∨ (x.k1 ≠ ε ∧ x.L ≠ ε ∧ x.k1 < max3T(x.L))
              {            ∨ (x.k1 ≠ ε ∧ x.M ≠ ε ∧ x.k1 > min3T(x.M))
              {            ∨ (x.k2 ≠ ε ∧ x.M ≠ ε ∧ x.k2 < max3T(x.M))            (8.10)
              {            ∨ (x.k2 ≠ ε ∧ x.R ≠ ε ∧ x.k2 > min3T(x.R))
              { True    otherwise

Since there are too many conditions to consider for the ternary search tree property,
eqn (8.9) checks the cases where the property is violated. Similar to the problem of checking
for a BST Problem 8.4, checking whether a ternary tree is a ternary search tree can be done
trivially by an in-order DFT. The following pseudo code resembles Algorithm 8.4.
Algorithm 8.15. Checking whether a ternary search tree

Let cur be a global variable and cur = −∞ initially.


Call isTST(r) initially where r is the root node of T .
isTST(x)
if x = null, return True . . . . . . . . . . . . . . . . . . . . . . . . 1
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
tmp = isTST(x.L) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
if tmp = False ∨ cur > x.k1 , return False . . . . 4
else, cur = x.k1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
tmp = isTST(x.M ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
if tmp = False, return False . . . . . . . . . . . . . . . . . 7
else if x.k2 = ε, return True . . . . . . . . . . . . . . . . . .8
else if x.k2 6= ε and cur > x.k2 , return False . .9
else, cur = x.k2 . . . . . . . . . . . . . . . . . . . . . . . . . 10
return isTST(x.R) . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11

The computational time complexity of Algorithm 8.15 is O(n).
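For readers who prefer running code, a Python sketch of Algorithm 8.15 is given below; it reuses the illustrative TNode class from the earlier sketch and threads the last visited key through a one-element list instead of a global variable.

def is_TST(t, cur=None):
    """In-order DFT check of the TST property (a sketch of Algorithm 8.15)."""
    if cur is None:
        cur = [float('-inf')]        # last key seen by the in-order traversal
    if t is None:
        return True
    if not is_TST(t.L, cur) or cur[0] > t.k1:
        return False
    cur[0] = t.k1
    if not is_TST(t.M, cur):
        return False
    if t.k2 is None:
        return True                  # a single-key node has no right sub-tree
    if cur[0] > t.k2:
        return False
    cur[0] = t.k2
    return is_TST(t.R, cur)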


Searching a key in a ternary search tree is very similar to searching a key in a binary
search tree.



searchTST(T, q) = Not found            if T = ε
                  Found                if T.k1 = q ∨ T.k2 = q
                  searchTST(T.L, q)    if T.k1 > q
                  searchTST(T.R, q)    if T.k2 ≠ ε ∧ T.k2 < q
                  searchTST(T.M, q)    otherwise                            (8.11)

The computational time complexity of the search operation in a TST by eqn (8.11) depends
on the height of the tree. In the worst case of a plain TST, it is linear. In a balanced TST,
it is logarithmic.
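A direct transcription of eqn (8.11) into Python, again on the illustrative TNode class, might read as follows.

def search_TST(t, q):
    """Search for q following eqn (8.11); returns True iff q is present."""
    if t is None:
        return False                              # Not found
    if q == t.k1 or (t.k2 is not None and q == t.k2):
        return True                               # Found
    if q < t.k1:
        return search_TST(t.L, q)
    if t.k2 is not None and q > t.k2:
        return search_TST(t.R, q)
    return search_TST(t.M, q)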
A 2-3 tree is a ternary search tree with two additional restrictions to guarantee logarith-
mic height. The first important property or restriction is that all leaf nodes are located at
the same level, i.e. depths of all leaf nodes are the same. The second property is that each
internal node has either 2 or 3 sub-trees. To be more precise, each node either has one key
x.k1 with two x.L and x.M sub-trees or two keys with three sub-trees.
First, checking whether all leaf nodes in a ternary tree are located at the same level can
be defined as follows:

Problem 8.10. Same leaf level property of a ternary tree


Input: A ternary tree T
Output: Sameleaflevel(T )

Sameleaflevel(T) =
  True     if ∀x, ∀y ∈ T, (x.L = x.M = x.R = y.L = y.M = y.R = ε) → (depth(x) = depth(y))
  False    otherwise                                                        (8.12)

If the depths are the same for all pairs of leaf nodes which have no sub-trees, eqn (8.12)
returns true. If there exists a pair of leaf nodes whose depths are different, eqn (8.12) returns
false. A depth first traversal can solve Problem 8.10 and it is left for an exercise.
The second restriction in a 2-3 tree is that each node is either a leaf node with no
sub-trees, or an internal node with two or three sub-trees.

Problem 8.11. 2-3 property of a ternary tree


Input: A ternary tree T
Output: 2-3 prop(T )

2-3 prop(T) =
  True     if ∀x ∈ T, (x.L = x.M = x.R = ε) ∨ (x.L ≠ ε ∧ x.M ≠ ε ∧ x.R = ε)
               ∨ (x.L ≠ ε ∧ x.M ≠ ε ∧ x.R ≠ ε)
  False    otherwise                                                        (8.13)

A 2-3 tree is a ternary tree with TST, same leaf level, and 2-3 properties. Hence, the
problem of checking whether a ternary tree is a 2-3 tree can be formulated with three
properties as follows:

Problem 8.12. is a 2-3 tree(T )


Input: A ternary tree T
Output: is2-3tree(T ) = isTST(T ) ∧ sameleaflevel(T ) ∧ 2-3 prop(T )

One of the interesting properties of a ternary tree with same leaf level and 2-3 properties is
that every internal node has equal height child sub-trees. If the rightmost sub-tree is empty,
the first two child sub-trees’ heights are the same. Hence, checking whether a ternary tree

is a 2-3 tree can be performed by checking heights.

is2-3tree(T) =
  True     if ∀x ∈ T, TST prop(x) ∧ (((x.k2 = ε) → (height(x.L) = height(x.M)))
               ∧ ((x.k2 ≠ ε) → (height(x.L) = height(x.M) = height(x.R))))
  False    otherwise                                                        (8.14)

The height of a node in a ternary tree can be computed recursively as follows:


height(T) = −1                                                        if T = ε
            max(height(T.L), height(T.M), height(T.R)) + 1            otherwise   (8.15)
Unlike in an AVL tree, no explicit height information needs to be stored for each node.
The insertion and deletion operations to be introduced in the following subsections do not
change the height information of any existing node.
Theorem 8.3. The height of a 2-3 tree is Θ(log n).
The same leaf level and 2-3 properties guarantee that the height of a 2-3 tree is logarithmic. The worst case height is ⌊log₂ n⌋, attained when every node holds only one key and has only two child sub-trees. The height decreases when nodes contain two keys; the best case height is ⌊log₃(n/2)⌋, attained when almost all internal nodes have three sub-trees and even leaf level nodes contain two keys.
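As a quick sanity check (a count sketched here, not spelled out elsewhere in this section): a 2-3 tree of height h holds the fewest keys when every node has one key, n = 2^(h+1) − 1, and the most when every node has two keys, n = 2 · (3^(h+1) − 1)/2 = 3^(h+1) − 1. Hence 2^(h+1) ≤ n + 1 ≤ 3^(h+1), and taking logarithms gives log₃(n + 1) − 1 ≤ h ≤ log₂(n + 1) − 1, i.e., h = Θ(log n).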

8.4.2 Insertion
The search operation in a 2-3 tree is the same as that for a TST. The next important
rudimentary operation is the insertion operation. All key values are assumed to be unique, and the element to be inserted is assumed not to be already present in the 2-3 tree in the pseudo code presented in this section.
To insert an element, the element is first searched. This process will lead to the bottom
level of the tree since this element is not in the tree. If the respective leaf node has room,
the element can be inserted in the proper position in cases 1 and 2 in Figure 8.13 (a). If
the node is overcrowded by inserting a new item, i.e., it already has two key values, split
the node and move the middle key value up to the parent node to be inserted recursively.
There are three cases when the node is overcrowded as shown in Figure 8.13 (b). When
the middle key value moves up to the parent node, the whole insertion process is repeated
recursively.
Figure 8.14 demonstrates insertion operations. To insert an element of a key value 15
into the tree in Figure 8.12 (c), the search operation takes place. Since the leaf node (17,-)
has room, the case 1 rule is applied. To insert 8 into the tree in Figure 8.14 (a), there is
no room in the leaf node (5,7). Hence, the node gets split and the case 4 rule is applied.
The value 7 goes to the parent node to be inserted. To do so, the case 2 rule is applied
since the parent node has room. Figure 8.14 (c) shows a worst case scenario where all nodes
along the path from the leaf node to the root node need to be updated. Indeed, the height
of the tree may grow by one in the worst case if the root node does not have room. The
computational time complexity of the insertion operation is Θ(log n) as it depends on the
height of the tree and the height of a 2-3 tree is Θ(log n) according to Theorem 8.3.
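The heart of the full-node rules is a single split step: three sorted keys are redistributed so that the middle one moves up. A minimal Python sketch of just that step on a leaf is shown below; the surrounding recursion and the parent update are what Algorithm 8.16 below spells out, and TNode is the illustrative class from the earlier sketches.

def insert_leaf_23(leaf, q):
    """Insert q into a 2-3 tree leaf (split rule only, not Algorithm 8.16).
    Returns (leaf, None) when the leaf had room (cases 1 and 2), or
    (None, (mid, left, right)) when it split (cases 3, 4, and 5): the
    middle key and the two new single-key nodes go to the parent."""
    if leaf.k2 is None:                          # room left in the node
        leaf.k1, leaf.k2 = sorted((leaf.k1, q))
        return leaf, None
    lo, mid, hi = sorted((leaf.k1, leaf.k2, q))  # overcrowded: split
    return None, (mid, TNode(lo), TNode(hi))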
A pseudo code with subroutines for five cases is provided below in Algorithm 8.16. Note
that multiple statements occasionally appear on a single line to make the pseudo code compact.

[Figure 8.13: 2-3 tree insertion rules. (a) single-key node insertion, cases 1 and 2; (b) full node insertion, cases 3, 4, and 5.]

Algorithm 8.16. 2-3 Tree - insert

Let UP be a global flag variable.


23Tinsert(T, q)
 if T = ε . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
  make a single node tree with T.k1 = q . . . . . . . . . . . . 2
  UP = True . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
 else if q < T.k1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
  T′ = 23Tinsert(T.L, q) . . . . . . . . . . . . . . . . . . . . . . . . . . 5
  if UP = True and T.k2 = ε, T = 23Tinsert case1(T, T′) . . . 6
  else if UP = True and T.k2 ≠ ε, T = 23Tinsert case3(T, T′) . . . 7
 else if T.k2 ≠ ε and T.k2 < q, . . . . . . . . . . . . . . . . . . . . . 8
  T′ = 23Tinsert(T.R, q) . . . . . . . . . . . . . . . . . . . . . . . . . . 9
  if UP = True, T = 23Tinsert case4(T, T′) . . . . . . . . 10
 else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
  T′ = 23Tinsert(T.M, q) . . . . . . . . . . . . . . . . . . . . . . . . . 12
  if UP = True and T.k2 = ε, T = 23Tinsert case2(T, T′) . . . 13
  else if UP = True and T.k2 ≠ ε, T = 23Tinsert case5(T, T′) . . . 14
 return T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

Subroutine 8.1. 23Tinsert case1(T1, T2)

 T1.k2 = T1.k1 and T1.k1 = T2.k1 . . . 1,2
 T1.L = T2.L and T1.R = T1.M . . . . . 3,4
 T1.M = T2.M . . . . . . . . . . . . . . . . . . . . . 5
 UP = False . . . . . . . . . . . . . . . . . . . . . . . 6
 return T1 . . . . . . . . . . . . . . . . . . . . . . . . 7

Subroutine 8.2. 23Tinsert case2(T1, T2)

 T1.k2 = T2.k1 . . . . . . . . . . . . . . . . . . . . 1
 T1.M = T2.L . . . . . . . . . . . . . . . . . . . . . 2
 T1.R = T2.M . . . . . . . . . . . . . . . . . . . . . 3
 UP = False . . . . . . . . . . . . . . . . . . . . . . . 4
 return T1 . . . . . . . . . . . . . . . . . . . . . . . . 5

Subroutine 8.3. 23Tinsert case3(T1, T2)

 declare T3 . . . . . . . . . . . . . . . . . . . . . . . 1
 T3.k1 = T1.k1 . . . . . . . . . . . . . . . . . . . . 2
 T3.L = T2 and T3.M = T1 . . . . . . . . 3,4
 T1.k1 = T1.k2 and T1.k2 = ε . . . . . 5,6
 T1.L = T1.M . . . . . . . . . . . . . . . . . . . . . 7
 T1.M = T1.R and T1.R = ε . . . . . . . 8,9
 UP = True . . . . . . . . . . . . . . . . . . . . . . 10
 return T3 . . . . . . . . . . . . . . . . . . . . . . . 11

Subroutine 8.4. 23Tinsert case4(T1, T2)

 declare T3 . . . . . . . . . . . . . . . . . . . . . . . 1
 T3.k1 = T1.k2 . . . . . . . . . . . . . . . . . . . . 2
 T3.L = T1 . . . . . . . . . . . . . . . . . . . . . . . 3
 T3.M = T2 . . . . . . . . . . . . . . . . . . . . . . . 4
 T1.k2 = ε . . . . . . . . . . . . . . . . . . . . . . . . 5
 T1.R = ε . . . . . . . . . . . . . . . . . . . . . . . . . 6
 UP = True . . . . . . . . . . . . . . . . . . . . . . . 7
 return T3 . . . . . . . . . . . . . . . . . . . . . . . . 8

Subroutine 8.5. 23Tinsert case5(T1 , T2 )


declare T3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T3 .k1 = T1 .k2 , T3 .L = T2 .M , and T3 .M = T1 .R . . . . . . . 2,3,4
T1 .M = T2 .L, T1 .k2 = ε, and T1 .R = ε . . . . . . . . . . . . . . . 5,6,7
T2 .L = T1 and T2 .M = T3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8,9
UP = True . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
return T2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11

The global flag variable, UP, indicates whether insertion in the parent level is necessary.

8.4.3 Deletion
Deleting an element from a 2-3 tree is similar to deletion in AVL trees. First, the element
to be deleted must be searched. Removing the respective key from the node in a 2-3 tree
may cause 2-3 tree property violation. Hence, either the predecessor or successor replaces
the key. The predecessor and successor of k in a tree T are defined as follows:

predecessor(k, T ) = max({x ∈ T | x < k}) (8.16)


successor(k, T ) = min({x ∈ T | x > k}) (8.17)

For example, predecessor(18, T ) = 17 and successor(18, T ) = 20 where T is the 2-3 tree in


Figure 8.14 (c). In this book, the predecessor defined in eqn (8.16) is used to replace the
element to be deleted. Note that the successor was used in AVL deletion.
Both the predecessor and the successor of any key value in an internal node are always
located at the leaf level. When the predecessor, x, is moved to replace the deleted element,
x is removed from the node at the leaf level. If the node containing x, before moving, has
two key values, the deletion is basically done by either case 1 or 2 in Figure 8.15. This best
case scenario is given in Figure 8.16 (a).
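The sets in eqns (8.16) and (8.17) need not be materialized. A hedged Python sketch for the predecessor is shown below; it reuses the illustrative TNode class and simply traverses the whole tree in O(n) time, whereas inside a 2-3 tree one would instead take max3T of the sub-tree immediately to the left of k.

def predecessor(k, t):
    """Eqn (8.16): the largest key smaller than k, or None if none exists."""
    if t is None:
        return None
    best = None
    for key in (t.k1, t.k2):
        if key is not None and key < k and (best is None or key > best):
            best = key
    for child in (t.L, t.M, t.R):
        cand = predecessor(k, child)
        if cand is not None and (best is None or cand > best):
            best = cand
    return best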
[Figure 8.14: 2-3 tree insertion. (a) inserting 15 into the tree in Figure 8.12 (c): Case 1; (b) inserting 8 into the tree in Figure 8.14 (a): Case 4 and Case 2; (c) inserting 20 into the tree in Figure 8.14 (b): Case 3, Case 4, and Case 2.]


[Figure 8.15: 2-3 tree deletion, cases 1 through 6.]

If the node containing x, before moving, has only one key value, x, the node becomes
empty, which is a violation of the 2-3 tree property. Hence, this empty node must be merged
with its right sibling node and their common parent key value. If there is no right sibling
node, it is merged with its left sibling node and their common parent key value. If its sibling
node is a full node, i.e. has two keys, case 3 and 4 rules in Figure 8.15 are applied and the
deletion process is complete. If its sibling node has only one key, case 5 and 6 rules in
Figure 8.15 are applied and the deletion process continues with its parent node. The term,
percolate up, will be used to refer to the latter process. Deletion always starts from the leaf
level and continues toward the root node. These scenarios are given in Figure 8.16 (b) and
(c).
A short rough pseudo code for deleting an element from a 2-3 tree is outlined as follows:

23Tdelete(q, T )
a node, x = search(q, T ) . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if x is in the leaf node level, . . . . . . . . . . . . . . . . . . . . . . . 2
remove q from x and percolate up . . . . . . . . . . . . . . 3
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
p = predecessor(q, T ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
replace the key in x with p . . . . . . . . . . . . . . . . . . . . . 6
percolate up from original position of p . . . . . . . . . 7

Percolate up takes O(log n) time. The search part in line 1 takes O(log n) time, but
search and predecessor in lines 1 and 5 together take Θ(log n) time. Hence, the compu-
tational time complexity of Algorithm 8.17 is Θ(log n). A lengthy version is stated in the
following Algorithm 8.17:
Algorithm 8.17. 2-3 Tree - delete

23Tdelete(q, T )
Let u = T, s = T (search), and p = q (predecessor) be global.

 if q ≠ T.k1 ∧ q ≠ T.k2 ∧ T.L = ε ∧ s = T, return q ∉ T error . . . 1
 if q = T.k2 ∧ T.L = ε, T.k2 = ε and u = F . . . 2
 else if q = T.k1 ∧ T.k2 = ε ∧ T.L = ε, T = ε . . . 3
 else if q = T.k1 ∧ T.k2 ≠ ε ∧ T.L = ε, T.k1 = T.k2, T.k2 = ε, and u = F . . . 4
 else if s = F ∧ T.k2 = ε ∧ T.L = ε, p = T.k1 and T = ε . . . 5
 else if s = F ∧ T.k2 ≠ ε ∧ T.L = ε, p = T.k2, T.k2 = ε, and u = F . . . 6
 else if q = T.k2 ∧ T.L ≠ ε, . . . 7
  s = F . . . 8
  T′ = 23Tdelete(q, T.M) . . . 9
  T.k2 = p . . . 10
  if u = F, T.M = T′ . . . 11
  else . . . 12
   if T.R.k2 = ε, T = 23Tdel case5(T, T′, 3) . . . 13
   else, T = 23Tdel case3(T, T′, 3) . . . 14
 else if q = T.k1 ∧ T.L ≠ ε, . . . 15
  s = F . . . 16
  T′ = 23Tdelete(q, T.L) . . . 17
  T.k1 = p . . . 18
  if u = F, T.L = T′ . . . 19
  else . . . 20
   if T.M.k2 = ε, T = 23Tdel case5(T, T′, 2) . . . 21
   else, T = 23Tdel case3(T, T′, 2) . . . 22
 else if q < T.k1, . . . 23
  T′ = 23Tdelete(q, T.L) . . . 24
  if u = F, T.L = T′ . . . 25
  else . . . 26
   if T.M.k2 = ε, T = 23Tdel case5(T, T′, 2) . . . 27
   else, T = 23Tdel case3(T, T′, 2) . . . 28
 else if q > T.k1 ∧ T.k2 ≠ ε ∧ q < T.k2, . . . 29
  T′ = 23Tdelete(q, T.M) . . . 30
  if u = F, T.M = T′ . . . 31
  else . . . 32
   if T.R.k2 = ε, T = 23Tdel case5(T, T′, 3) . . . 33
   else, T = 23Tdel case3(T, T′, 3) . . . 34
 else if q > T.k1 ∧ T.k2 = ε, . . . 35
  T′ = 23Tdelete(q, T.M) . . . 36
  if u = F, T.M = T′ . . . 37
  else . . . 38
   if T.L.k2 = ε, T = 23Tdel case6(T, T′, 1) . . . 39
   else, T = 23Tdel case4(T, T′, 1) . . . 40
 else if q > T.k2, . . . 41
  T′ = 23Tdelete(q, T.R) . . . 42
  if u = F, T.R = T′ . . . 43
  else . . . 44
   if T.M.k2 = ε, T = 23Tdel case6(T, T′, 2) . . . 45
   else, T = 23Tdel case4(T, T′, 2) . . . 46
 return T . . . 47

Subroutine 8.6. 23Tdel case3(T1 , T2 , k)

if k = 2, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Tx .L = T2 , Tx .M = T1 .M.L, and Tx .k1 = T1 .k1 . . . . . . . . . . . . . . . . 2,3,4
Ty .L = T1 .M.M , Ty .M = T1 .M.R, and Ty .k1 = T1 .M.k2 . . . . . . . 5,6,7
T1 .k1 = T1 .M.k1 , T1 .L = Tx , and T1 .M = Ty . . . . . . . . . . . . . . . . . 8,9,10
else (if k = 3), . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Tx .L = T2 , Tx .M = T1 .R.L, and Tx .k1 = T1 .k2 . . . . . . . . . . . . . 12,13,14
Ty .L = T1 .R.M , Ty .M = T1 .R.R, and Ty .k1 = T1 .R.k2 . . . . . 15,16,17
T1 .k2 = T1 .R.k1 , T1 .M = Tx , and T1 .R = Ty . . . . . . . . . . . . . . . 18,19,20
u = F and return T1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21,22

Subroutine 8.7. 23Tdel case4(T1 , T2 , k)

if k = 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Tx .L = T1 .L.L, Tx .M = T1 .L.M , and Tx .k1 = T1 .L.k1 . . . . . . . . . 2,3,4
Ty .L = T1 .L.R, Ty .M = T2 , and Ty .k1 = T1 .k1 . . . . . . . . . . . . . . . . . 5,6,7
T1 .k1 = T1 .L.k2 , T1 .L = Tx , and T1 .M = Ty . . . . . . . . . . . . . . . . . . 8,9,10
else (if k = 2), . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Tx .L = T1 .M.L, Tx .M = T1 .M.M , and Tx .k1 = T1 .M.k1 . . . 12,13,14
Ty .L = T1 .M.R, Ty .M = T2 , and Ty .k1 = T1 .k2 . . . . . . . . . . . . 15,16,17
T1 .k2 = T1 .R.k2 , T1 .M = Tx , and T1 .R = Ty . . . . . . . . . . . . . . . 18,19,20
u = F and return T1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21,22

Subroutine 8.8. 23Tdel case5(T1 , T2 , k)

if k = 2, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Tx .L = T2 , Tx .M = T1 .M.L, and Tx .R = T1 .M.M . . . . . . . . . . . . . 2,3,4
Tx .k1 = T1 .k1 and Tx .k2 = T1 .M.k1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5,6
else (if k = 3), . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Tx .L = T2 , Tx .M = T1 .R.L, and Tx .R = T1 .R.M . . . . . . . . . . . . . 8,9,10
Tx .k1 = T1 .k2 and Tx .k2 = T1 .R.k1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11,12
return Tx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Subroutine 8.9. 23Tdel case6(T1 , T2 , k)

if k = 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Tx .L = T1 .L.L, Tx .M = T1 .L.M , and Tx .R = T2 . . . . . . . . . . . . . . . 2,3,4
Tx .k1 = T1 .L.k1 and Tx .k2 = T1 .k1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5,6
else (if k = 2), . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Tx .L = T1 .M.L, Tx .M = T1 .M.M , and Tx .R = T2 . . . . . . . . . . . . 8,9,10
Tx .k1 = T1 .M.k1 and Tx .k2 = T1 .k2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 11,12
return Tx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Cases 1 and 2 are implicitly implemented within the code. The argument k in the subroutines indicates the sibling node to be merged. k is either 1 or 2 for cases 4 and 6; k is either 2 or 3 for cases 3 and 5.
[Figure 8.16: 2-3 tree deletion. (a) deleting 18 from the tree in Figure 8.14 (c): Case 2; (b) deleting 17 from the tree in Figure 8.16 (a): Case 6, Case 5, and Case 2; (c) deleting 9 from the tree in Figure 8.16 (b): Case 6 and Case 2.]



8.5 B Trees
2-3 trees are balanced ternary search trees. The notion of balancing the height of ternary
search trees to create 2-3 trees can be similarly applied to k-ary search trees. One such
generalized search tree is called the B tree, invented in [18].

8.5.1 Definition

[Figure 8.17: B tree. (a) a B tree node with keys k1 … k_{b−1} and sub-trees C1 … C_b; (b) a 2-3-4 tree (b = 4 B tree) node; (c) a sample 2-3-4 tree; (d) a BST equivalent to the 2-3-4 tree in (c); (e) a BST realization of a 2-3-4 tree node.]

A B tree is a k-ary search tree with restrictions for balancing the height. Before intro-
ducing the B tree, the k-ary search tree, or simply KST, is first defined. As depicted in
Figure 8.17 (a), each node can contain up to k − 1 key values and up to k sub-trees. The
variable b is used instead of k, to better relate it to the name, B tree.
A k-ary tree is said to be a k-ary search tree if all key values in the first sub-tree are less
than k1 , all key values in the ith sub-tree where 1 < i < k, are greater than ki−1 but less
than ki , and all key values in the last kth sub-tree are greater than kk−1 .
This is the essential KST property. An array, K_{1∼k−1}, is used to store up to k − 1 keys, and an array, C_{1∼k}, is used to store up to k sub-trees. These arrays must be filled from the left; no gaps are allowed between keys or sub-trees. Let T.s be the number of keys in the root node of T, where s < k. If there are only s keys, K_{s+1∼k−1} = ε and K_{1∼s} cannot be empty. If there are only s + 1 sub-trees, C_{s+2∼k} = ε and C_{1∼s+1} cannot be empty unless the node is a leaf node.
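In code, the key array K, the child array C, and the key count s can be carried in a small record. The Python sketch below fixes an illustrative layout: the class name BNode and the wasted index 0 are choices made here so that later sketches can mirror the 1-based arrays of the pseudo code; the text does not prescribe a concrete representation.

class BNode:
    """A k-ary search tree / B tree node: keys K[1..k-1], children C[1..k]."""
    def __init__(self, k):
        self.K = [None] * k          # K[0] unused; keys live in K[1..k-1]
        self.C = [None] * (k + 1)    # C[0] unused; children live in C[1..k]
        self.s = 0                   # number of keys currently stored

    def is_leaf(self):
        return self.C[1] is None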
A sample quaternary search tree, i.e., a (k = 4)-ary search tree, is given in Figure 8.17
(c) and its equivalent binary search tree is given in Figure 8.17 (d). An internal node in

a quaternary search tree depicted in Figure 8.17 (b) can be realized as a BST as shown in
Figure 8.17 (e). Checking whether a k-ary tree is a KST can be conducted by an in-order
DFT.
Problem 8.13. is a k-ary search tree(T )
Input: A k-ary tree T , k
Output: isKST(T, k) = is sorted(inorder DFT(T ))
Another balanced tree that supports efficient dictionary operations is the B tree. While
2-3 trees have either two or three child nodes, B trees can have up to b number of child
nodes. Figure 8.17 (c) shows a B tree where b = 4. This type of tree is often called a 2-3-4
tree, as an internal node can have 2, 3, or 4 children.
According to [103], a B tree of order b has four properties. The first property is that each node has at most b children; in other words, T is a b-ary search tree. Perhaps this is how the tree was named. The second is that all leaf nodes appear in the same level. The third is the lower bound for the number of keys: every node except the root node has at least ⌈b/2⌉ − 1 keys and, if it is an internal node, at least ⌈b/2⌉ children. The last property is the lower bound for the number of keys in the root node: the root node is an exception, which can have one or more keys instead of at least ⌈b/2⌉ − 1 keys. The third and fourth properties can be combined when checking. Hence, the problem of checking whether a b-ary tree is a B tree can be formulated with three properties as follows:
Problem 8.14. is a B tree(T, b)
Input: A b-ary tree T and b where b > 2
Output: isBtree(T ) = isKST(T, b) ∧ sameleaflevel(T ) ∧ keylowerbound(T, b)
A pseudo code that utilizes the depth first traversal for checking whether a k-ary tree is
a B tree is provided below. Algorithm 8.18 is called initially with isBtree(T, b, 0).
Algorithm 8.18. B Tree - checking

leafl = −1 and p = −∞ initially.


isBtree(T, b, l)
if l > 0 ∧ T.s < ⌈b/2⌉ − 1, return F . . . . . . . . . . . . . 1
if T.C[1] = ε (a leaf node) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if leafl 6= −1 ∧ leafl 6= l, return F . . . . . . . . . . . . . . . . . . 3
if leafl = −1, leafl = l . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if p > T.K[1], return F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
for i = 2 ∼ T.s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
if T.K[i − 1] ≥ T.K[i], return F . . . . . . . . . . . . . . . . . . 7
p = T.K[i] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
else (an internal node) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
for i = 1 ∼ T.s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
if isBtree(T.C[i], b, l + 1) = F, return F . . . . . . . . . 11
if p ≥ T.K[i], return F . . . . . . . . . . . . . . . . . . . . . . . . . . 12
else, p = T.K[i] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
if isBtree(T.C[T.s + 1], b, l + 1) = F, return F . . . . . 14
return T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

The first line checks the third property for the lower bound of the number of keys. When
l = 0, the tree is the root node and thus excluded for checking. When the first leaf level
node is visited, the global variable, leafl, is set to the level of the current leaf node as stated
in line 4. When other leaf nodes are visited and their level is different from leafl, the code
returns F as stated in line 3. Lines 5 ∼ 8 check whether keys in the leaf node are sorted.
Lines 9 ∼ 14 perform the depth first traversal to check the first property of whether T is a
b-ary search tree. The computational time complexity of Algorithm 8.18 is clearly Θ(n), as
it is basically the depth first traversal.
A pseudo code for searching q in a B tree is stated as follows:
Algorithm 8.19. B tree- search

Btree-search(T, q)
p = search p(q, T.K) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if p ≤ T.s ∧ T.K[p] = q, return Found . . . . . . . . . . 2
else if T.C[p] = ε, return not Found . . . . . . . . . . . 3
else, return Btree-search(T.C[p], q) . . . . . . . . . . . . . 4

The subroutine ‘search p’ in line 1 finds the position where q is located or the sub-tree
which may contain q. If the binary search (Algorithm 3.10) is used in line 1, the computational time complexity of Algorithm 8.19 is O(log b · log_b n), or simply O(log n) if we consider b a constant. If the input is a k-ary search tree rather than a B tree, the tree may not be balanced and thus the computational time complexity is O(n).
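On the illustrative BNode layout above, Algorithm 8.19 can be sketched in Python as follows. Here search_p is realized as a linear scan for brevity; a binary search over K[1..s] would give the O(log b) factor quoted above.

def search_p(q, node):
    """Return the position p such that K[p-1] < q and q <= K[p] (1-based)."""
    p = 1
    while p <= node.s and node.K[p] < q:
        p += 1
    return p

def btree_search(node, q):
    """A sketch of Algorithm 8.19: descend a single root-to-leaf path."""
    p = search_p(q, node)
    if p <= node.s and node.K[p] == q:
        return True                   # Found
    if node.C[p] is None:
        return False                  # not Found: reached a leaf
    return btree_search(node.C[p], q)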

8.5.2 Insertion
Figure 8.18 shows cases of insertion in a B tree where b = 4, i.e., a 2-3-4 tree. The
insertion operation in a B tree seems to be more complicated than that for a 2-3 tree.
However, there are essentially only two cases depending on whether there is room or not.
First, the element to be inserted is searched. Since it is not found, the search will lead to
the position for insertion in the respective leaf node. If there is room, insertion is nothing
but inserting into a sorted list (Problem 2.17), as illustrated in Figure 8.18 (a) and (b). If
inserting an element exceeds b − 1 key values in the node, the original sorted list is copied
to a new array of size b where the insertion is possible. Then the array is divided into two
nodes and the middle value is selected as the root node for these two nodes, as illustrated
in Figure 8.18 (c). This process is percolated up to its parent node. A pseudo code for
inserting q in a B tree is stated as follows:
Algorithm 8.20. B tree - insert

Let UP be a global flag variable.

Btree-insert(T, q)
 p = search p(q, T.K) . . . 1
 if T.C[p] = ε ∧ T.s < b − 1, (leaf node with space case) . . . 2
  for i = T.s + 1 down to p + 1, T.K[i] = T.K[i − 1] . . . 3
  T.K[p] = q, T.s = T.s + 1, and UP = F . . . 4,5,6
 else if T.C[p] = ε ∧ T.s = b − 1, (a full leaf node case) . . . 7
  if p ≤ ⌈b/2⌉, . . . 8
   T.C[1].K_{1∼p−1} = T.K_{1∼p−1} and T.C[1].K_{p+1∼⌈b/2⌉} = T.K_{p∼⌈b/2⌉−1} . . . 9,10
   T.C[1].K[p] = q and T.C[2].K_{1∼⌊b/2⌋} = T.K_{⌈b/2⌉+1∼b−1} . . . 11,12
  else, . . . 13
   T.C[1].K_{1∼⌈b/2⌉} = T.K_{1∼⌈b/2⌉} and T.C[2].K[p − ⌈b/2⌉] = q . . . 14,15
   T.C[2].K_{1∼p−⌈b/2⌉−1} = T.K_{⌈b/2⌉+1∼p−1} . . . 16
   T.C[2].K_{p−⌈b/2⌉+1∼⌊b/2⌋} = T.K_{p∼b−1} . . . 17
  T.s = 2, T.C[1].s = ⌈b/2⌉, T.C[2].s = ⌊b/2⌋, and UP = T . . . 18,19,20,21
 else if T.C[p] ≠ ε, (internal node) . . . 22
  T2 = Btree-insert(T.C[p], q) . . . 23
  if UP = F, T.C[p] = T2 . . . 24
  else if T.s < b − 1, (internal node with space) . . . 25
   for i = T.s + 1 down to p + 1, . . . 26
    T.K[i] = T.K[i − 1] and T.C[i + 1] = T.C[i] . . . 27,28
   T.C[p] = T2.C[1] and T.C[p + 1] = T2.C[2] . . . 29,30
   T.K[p] = T2.K[1], T.s = T.s + 1, and UP = F . . . 31,32,33
  else, (a full internal node case) . . . 34
   Declare temporary arrays, A_{1∼b} and D_{1∼b+1} . . . 35
   for i = 1 ∼ p − 1, . . . 36
    A[i] = T.K[i] and D[i] = T.C[i] . . . 37,38
   for i = p + 1 ∼ b, . . . 39
    A[i] = T.K[i − 1] and D[i + 1] = T.C[i] . . . 40,41
   A[p] = T2.K[1], D[p] = T2.C[1], and D[p + 1] = T2.C[2] . . . 42,43,44
   T.C[1].K_{1∼⌊b/2⌋} = A_{1∼⌊b/2⌋} and T.C[2].K_{1∼⌈b/2⌉−1} = A_{⌊b/2⌋+2∼b} . . . 45,46
   T.C[1].C_{1∼⌊b/2⌋+1} = D_{1∼⌊b/2⌋+1} and T.C[2].C_{1∼⌈b/2⌉} = D_{⌊b/2⌋+2∼b+1} . . . 47,48
   T.K[1] = A[⌊b/2⌋ + 1], T.K_{2∼b−1} = ε, and T.C_{3∼b} = ε . . . 49,50,51
   T.s = 2, T.C[1].s = ⌈b/2⌉, T.C[2].s = ⌊b/2⌋, and UP = T . . . 52,53,54,55
 return T . . . 56

The subroutine ‘search p’ in line 1 finds the position where q should be inserted or the
sub-tree to which q belongs. Lines 2 ∼ 6 handle the case of inserting q into a leaf node with
space. All elements after p in the key array T.K are moved by one to the right to make
room for q, as stated in line 3. Then q is placed into T.K[p] and the insertion is complete by setting the flag UP = F. Note that inserting into a sorted list (Algorithm 2.22 on page 62) may be utilized as a subroutine to simplify the pseudo code.
Lines 7 ∼ 21 are for the case where the leaf node is full. The node needs to be split into two roughly equal sized nodes. The inserted element may end up in the left or the right partitioned node, as stated in lines 8 ∼ 12 and lines 13 ∼ 17, respectively. Percolate up is marked as necessary by setting the flag UP = T.
If the current node is an internal node, Btree-insert is invoked recursively with the
relevant child sub-tree, as stated in line 23. If no percolate up is necessary, i.e., the flag UP
is false, Algorithm 8.20 returns the tree with the updated sub-tree, as stated in line 24.
Lines 25 ∼ 33 deal with the case where the internal node has empty spaces. Not only is
the key inserted, but the sub-B trees, T2 .C1 and T2 .C2 , are also inserted in their respective
positions. The flag UP is set to F.
Lines 34 ∼ 55 are for the case where the internal node is full. Not only are the keys
split, but the child sub-trees are also split into half. Temporary arrays, A1∼b and D1∼b+1 ,
are used to simplify the pseudo code but this procedure can be implemented without them.

[Figure 8.18: (b = 4) B tree (2-3-4 tree) insertion cases. (a) (T.s = 1) < (b − 1 = 3) case; (b) (T.s = 2) < (b − 1 = 3) case; (c) T.s = (b − 1 = 3) case, where the full node is split.]

A_{1∼b}, whose size is one bigger than the maximum size of T.K, is created so that q can be inserted. Next, A is split into two roughly equal sized nodes. The middle value is used to create a new node with a single key, whose first and second children are the split nodes. This half splitting process guarantees the lower bound property on the number of keys in a B tree. D_{1∼b+1} is used to store the sub-trees in order. Finally, the flag UP is set to true, so that the insertion process is percolated up to the parent node.
Only one path from the root to a leaf node is involved in insertion, and thus the computational time complexity of Algorithm 8.20 is O(b log_b n), or simply Θ(log n) if we consider b a constant.
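The copy-then-split step for a full leaf can be isolated into a few lines of Python. The helper below is an illustration of the idea only; its name and list representation are assumptions made here, whereas Algorithm 8.20 performs the same step in place with 1-based arrays.

import bisect

def split_full_leaf(keys, q, b):
    """keys: the sorted b-1 keys of a full leaf. Insert q into a temporary
    array of size b, then split around the middle key, which moves up
    (the situation of Figure 8.18 (c))."""
    a = list(keys)
    bisect.insort(a, q)
    mid = b // 2                          # index of the key pushed up
    return a[:mid], a[mid], a[mid + 1:]   # (left keys, middle key, right keys)

For example, with b = 4 and a full leaf holding (5, 7, 8), split_full_leaf([5, 7, 8], 6, 4) yields ([5, 6], 7, [8]): the key 7 is the one percolated up to the parent.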
Figure 8.19 (a) demonstrates inserting 19 in the sample B tree in Figure 8.17 (c).
[Figure 8.19: (b = 4) B tree (2-3-4 tree) insertion illustration. (a) insert 19 in the B tree in Figure 8.17 (c); (b) insert 25 in the B tree in Figure 8.19 (a).]

First, 19 is searched to locate the leaf level position for insertion. The leaf node has room to store 19; hence, simply sliding each element over by one to make room for 19 and inserting it completes the insertion process. Figure 8.19 (b) demonstrates inserting 25 in the sample B tree in Figure 8.19 (a). The leaf node is full and thus it is split into two parts, and the insertion process is percolated up to its parent node. The parent node is also full and thus it is split into two parts; the insertion process continues to the root node. The root node has room to store the key, and thus the insertion process is complete.

8.5.3 Deletion
The deletion mechanism in a B tree is similar to that for a 2-3 tree. First, the element to
be deleted, q, is located. If not in the tree, an error message should appear. If the element
to be deleted, q, is located at the leaf level, it is deleted from the node. If q is in an internal
node, it is replaced with its predecessor, p, which is located at the leaf level. Then, p is
deleted from the leaf node and percolate up to its parent node.
Three cases of 2-3 tree deletion in Figure 8.15 can be generalized to B trees. If the
number of keys is greater than or equal to d 2b e − 1 even after removing the element, the
deletion process is done, as illustrated in Figure 8.20 (a). If the number of keys is less than
d 2b e − 1 after removing the element, the tree is no longer a B tree unless the node from
which the element was removed is a root node. Hence, the node is merged with its right
sibling node. If there is no right sibling, it is merged with its left sibling node. If the sibling
node has more than d 2b e − 1 keys, the rules in Figure 8.20 (b) are applied and the deletion
process is done. If the sibling node has exactly d 2b e − 1 keys, two nodes are merged into one
and percolate up is necessary, as illustrated in Figure 8.20 (c). A pseudo code for deletion
is written as follows:
Algorithm 8.21. B Tree - delete

A global flag variable, r = ε, is used.

Btree-delete(T, q)
 p = search p(q, T.K) . . . 1
 if T.C[p] = ε ∧ T.K[p] ≠ q ∧ r ≠ q, does not exist error! . . . 2
 else if T.C[p] = ε ∧ T.K[p] ≠ q ∧ r = q, . . . 3
  r = T.K[T.s], T.K[T.s] = ε, and T.s = T.s − 1 . . . 4,5,6
 else if T.C[p] = ε ∧ T.K[p] = q, . . . 7
  for i = p ∼ T.s − 1, T.K[i] = T.K[i + 1] . . . 8
  T.K[T.s] = ε and T.s = T.s − 1 . . . 9,10
 else, (internal node) . . . 11
  Btree-delete(T.C[p], q) . . . 12
  if T.K[p] = q, T.K[p] = r . . . 13
  if T.C[p].s < ⌈b/2⌉ − 1 ∧ p ≤ T.s . . . 14
   ms = T.C[p].s + T.C[p + 1].s + 1 . . . 15
   if ms < b . . . 16
    T.C[p].K[T.C[p].s + 1] = T.K[p] . . . 17
    for i = 1 ∼ T.C[p + 1].s, . . . 18
     T.C[p].K[T.C[p].s + 1 + i] = T.C[p + 1].K[i] . . . 19
     T.C[p].C[T.C[p].s + 1 + i] = T.C[p + 1].C[i] . . . 20
    T.C[p].C[ms] = T.C[p + 1].C[T.C[p + 1].s + 1] . . . 21
    for i = p ∼ T.s − 1, . . . 22
     T.K[i] = T.K[i + 1] and T.C[i + 1] = T.C[i + 2] . . . 23,24
    T.C[T.s] = T.C[T.s + 1] and T.C[T.s + 1] = ε . . . 25,26
    T.K[T.s] = ε and T.s = T.s − 1 . . . 27,28
   else . . . 29
    A_{1∼ms} = merge(T.C[p].K, T.K[p], T.C[p + 1].K) . . . 30
    D_{1∼ms+1} = merge(T.C[p].C, T.C[p + 1].C) . . . 31
    T.C[p].K_{1∼⌊ms/2⌋} = A_{1∼⌊ms/2⌋} . . . 32
    T.C[p].C_{1∼⌊ms/2⌋+1} = D_{1∼⌊ms/2⌋+1} . . . 33
    T.K[p] = A[⌊ms/2⌋ + 1] . . . 34
    T.C[p + 1].K_{1∼ms−⌊ms/2⌋−1} = A_{⌊ms/2⌋+2∼ms} . . . 35
    T.C[p + 1].C_{1∼ms−⌊ms/2⌋} = D_{⌊ms/2⌋+2∼ms+1} . . . 36
    T.C[p].s = ⌊ms/2⌋ and T.C[p + 1].s = ms − ⌊ms/2⌋ − 1 . . . 37,38
  else if T.C[p].s < ⌈b/2⌉ − 1 ∧ p = T.s + 1, (i.e., no right sibling) . . . 39
   ms = T.C[p].s + T.C[p − 1].s + 1 . . . 40
   if ms < b . . . 41
    T.C[p − 1].K[T.C[p − 1].s + 1] = T.K[p − 1] . . . 42
    for i = 1 ∼ T.C[p].s, . . . 43
     T.C[p − 1].K[T.C[p − 1].s + 1 + i] = T.C[p].K[i] . . . 44
     T.C[p − 1].C[T.C[p − 1].s + 1 + i] = T.C[p].C[i] . . . 45
    T.C[p − 1].C[ms] = T.C[p].C[T.C[p].s + 1] . . . 46
    T.K[T.s] = ε and T.s = T.s − 1 . . . 47,48
   else . . . 49
    A_{1∼ms} = merge(T.C[p − 1].K, T.K[p − 1], T.C[p].K) . . . 50
    D_{1∼ms+1} = merge(T.C[p − 1].C, T.C[p].C) . . . 51
    T.C[p − 1].K_{1∼⌊ms/2⌋} = A_{1∼⌊ms/2⌋} . . . 52
    T.C[p − 1].C_{1∼⌊ms/2⌋+1} = D_{1∼⌊ms/2⌋+1} . . . 53
    T.K[p − 1] = A[⌊ms/2⌋ + 1] . . . 54
    T.C[p].K_{1∼ms−⌊ms/2⌋−1} = A_{⌊ms/2⌋+2∼ms} . . . 55
    T.C[p].C_{1∼ms−⌊ms/2⌋} = D_{⌊ms/2⌋+2∼ms+1} . . . 56
    T.C[p − 1].s = ⌊ms/2⌋ and T.C[p].s = ms − ⌊ms/2⌋ − 1 . . . 57,58
 return . . . 59

The subroutine, ‘search p,’ visits child tree Cp if kp−1 < q < kp . Line 2 checks whether
the B tree contains q. Lines 3 ∼ 6 delete the predecessor of q and set the flag r to the
predecessor. Lines 7 ∼ 10 are applied to the case where the element to be deleted is located
in the leaf node. Line 12 calls the method recursively with the sub-tree that may contain q
or the predecessor.
If the element to be deleted is in an internal node, it is replaced with its predecessor as
stated in line 13. Lines 14 ∼ 28 handle Case 2 where the node has a right sibling node.
Lines 29 ∼ 38 handle Case 3 where the node has a right sibling node. Lines 39 ∼ 48 handle
Case 2 where the node has no right sibling node. Lines 49 ∼ 58 handle Case 3 where the
node has no right sibling node. Case 1 is implicitly embedded.
Only one path from the root to a leaf node is involved in deletion, and thus the computational time complexity of Algorithm 8.21 is O(b log_b n), or simply Θ(log n) if we consider b a constant.
Figure 8.21 (a) illustrates deleting 1 from the B tree in Figure 8.17 (c). It is the simplest case, i.e., Case 1 in Figure 8.20 (a). Figure 8.21 (b) illustrates deleting 3 from the B tree in Figure 8.21 (a).
[Figure 8.20: 2-3-4 B tree deletion. (a) Case 1: C[p].s > ⌈b/2⌉; (b) Case 2: C[p].s = ⌈b/2⌉ ∧ C[p].s + C[p+1].s < b; (c) Case 2l: C[p].s = ⌈b/2⌉ ∧ C[p].s + C[p−1].s < b; (d) Case 3: C[p].s = ⌈b/2⌉ ∧ C[p].s + C[p+1].s ≥ b; (e) Case 3l: C[p].s = ⌈b/2⌉ ∧ C[p].s + C[p−1].s ≥ b.]

This is an example of Case 2 in Figure 8.20 (b). Figure 8.21 (c) illustrates deleting 11 from the B tree in Figure 8.21 (b). This is a representation of Case 3l in Figure 8.20 (e), followed by Case 3.
[Figure 8.21: 2-3-4 tree deletion. (a) Case 1: delete 1 from the B tree in Figure 8.17 (c); (b) Case 2: delete 3 from the B tree in Figure 8.21 (a); (c) Case 3 and Case 2: delete 11 from the B tree in Figure 8.21 (b).]



8.6 B+ Trees
Another data structure that supports efficient O(log_b n) dictionary operations is the B+
tree. It is widely used in file systems [68] and database management systems [142]. There
are many different variations of the B+ tree for different applications. The one introduced
here is only one version.

8.6.1 Definition

[Figure 8.22: B+ tree. (a) an internal node in conventional B+ trees, where k_i = min(C_{i+1}); (b) a sample (b = 5) B+ tree using (a); (c) an internal node in the B+ tree of this section, where k_i = min(C_i); (d) a sample (b = 4) B+ tree using (c).]

A conventional B+ tree is a variation of the B tree and thus, most properties of B trees
apply to B+ trees. The fundamental difference is that a B+ tree is a (k = b)-ary tree but
not a k-ary search tree. All keys are stored only at the leaf level. The ith value in an internal
node contains the minimum element in its (i + 1)th sub-tree, as indicated in Figure 8.22 (a).
A sample (b = 5) B+ tree is given in Figure 8.22 (b).
Instead of allowing (b − 1) keys per node as in a conventional B+ tree illustration, both
the size of the key list, T.K, and the child subtree list, T.C, are b in B+ trees presented in
this section, as shown in Figure 8.22 (c). A sample (b = 4) B+ tree is given in Figure 8.22
(d).
8.6. B+ TREES 475

A B+ tree of order b has five properties. The first property, regarding internal nodes, is that the ith key contains the minimum of the ith sub-tree and the keys in the node, T.K, are sorted, as depicted in Figure 8.22 (c). The second is that all leaf nodes, which contain all key values, appear in the same level. The third is that every node has at most b keys and children: |T.K| = |T.C| = b. The fourth is the lower bound for the number of keys: every node except the root node has at least ⌈b/2⌉ keys. Consequently, if the node is an internal node, it has at least ⌈b/2⌉ children. The last property is the lower bound for the number of keys in the root node: the root node is an exception, as it can have two or more keys so that it has two or more children. The fourth and fifth properties can be combined when checking.
A pseudo code that utilizes the depth first traversal for checking whether a k-ary tree is a B+ tree is stated below. Algorithm 8.22 is called initially with isB+tree(T, b, 0).
Algorithm 8.22. B+ Tree - checking

leafl = −1 and p = −∞ initially.


isB+tree(T, b, l)
if l = 0, p = T.K[1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if l > 0 ∧ T.s < ⌈b/2⌉, return F . . . . . . . . . . . . . . . . . 2
if T.C[1] = ε (a leaf node) . . . . . . . . . . . . . . . . . . . . . . 3
if leafl 6= −1 ∧ leafl 6= l, return F . . . . . . . . . . . 4
if leafl = −1, leafl = l . . . . . . . . . . . . . . . . . . . . . . . 5
if p 6= T.K[1], return F . . . . . . . . . . . . . . . . . . . . . . 6
for i = 2 ∼ T.s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
if T.K[i − 1] > T.K[i], return F . . . . . . . . . . . 8
p = T.K[i] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
else (an internal node) . . . . . . . . . . . . . . . . . . . . . . . . 10
for i = 1 ∼ T.s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
if p ≥ T.K[i], return F . . . . . . . . . . . . . . . . . . . 12
else, p = T.K[i] . . . . . . . . . . . . . . . . . . . . . . . . . . 13
if isB+tree(T.C[i], b, l + 1) = F, return F . 14
return T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

The computational time complexity of Algorithm 8.22 is clearly Θ(n), as it is basically the depth first traversal.
Search in a B+ tree is similar to that for a B tree, but the keys are only at the leaf level
in B+ trees. A pseudo code for searching q in a B+ tree is stated as follows:
Algorithm 8.23. B+ tree search

B+tree-search(T, q)
p = search p(q, T.K) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if T.C[p] = ε ∧ T.K[p] = q, return Found . . . . . . . 2
else if T.C[p] = ε ∧ T.K[p] 6= q, return not Found . 3
else, return B+tree-search(T.C[p], q) . . . . . . . . . . . 4

The subroutine ‘search p’ in line 1 finds the position where q is located or the sub-tree which may contain q. If the binary search (Algorithm 3.10) is used in line 1, the computational time complexity of Algorithm 8.23 is O(log b · log_b n), or simply Θ(log n) if we consider b a constant.
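Because keys live only at the leaf level, the descent never stops early at an internal node. A Python sketch on the same illustrative BNode layout as before, following the variant of Figure 8.22 (c) in which K[i] holds the minimum of sub-tree C[i], is:

def bplus_search(node, q):
    """A sketch of Algorithm 8.23: descend to the leaf level, then compare."""
    p = 1
    while p < node.s and node.K[p + 1] <= q:
        p += 1                              # last position with K[p] <= q
    if node.C[p] is None:                   # leaf node: the keys live here
        return node.K[p] == q
    return bplus_search(node.C[p], q)       # internal node: K[p] = min(C[p])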

8.6.2 Insertion

[Figure 8.23: B+ tree insertion cases. (a) Case 1: enough room; (b) Case 2: enough room with min update; (c) Case 3: no room and split; (d) Case 3: no room and split with min update.]

The first step to insert an element, q, in a B+ tree is to search the element. The leaf
node that should contain q is identified. If the leaf node has room to insert q, q is simply
inserted in the sorted key list of the node. This scenario where the node has enough room is
depicted in Figure 8.23 (a). If q happens to be the minimum in the key list, then the parent
node min value needs to be updated as well, as depicted in Figure 8.23 (b). If the leaf node
is full, as depicted in Figure 8.23 (c), an array of size b + 1 is created to contain both the
original keys and q and then divided in half into two nodes. The minimum value of the
second half node moves up to the parent node, and this process is repeated to the parent
node. A global flag variable, UP, is used to indicate whether percolate up is necessary or
if the insertion process is complete. Figure 8.23 (d) shows the special case where the node
is full and q is the minimum. The parent node needs to be updated in this case. Although
the parent node in Figure 8.23 (d) has space for insertion, percolate up may be necessary if
the parent node is full as well. A pseudo code is stated as follows:
Algorithm 8.24. Insertion operation in B+ tree

Let UP be a global flag variable UP = F.


B+tree-insert(T, q)
p = search p(q, T.K) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if T.C[p] = ε ∧ T.K[p] = q, already exist error! . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
else if T.C[p] = ε ∧ T.K[p] 6= q, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if T.s < b, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
insert SL(q, T.K) and T.s = T.s + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5,6
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
A1∼b+1 = insert SL(q, T.K) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
T.C[1].K1∼d b+1 e = A1∼d b+1 e and T.C[2].K1∼b b+1 c = Ad b+1 e+1∼b+1 . . . 9,10
2 2 2 2
T.K[1] = T.C[1].K[1] and T.K[2] = T.C[2].K[1] . . . . . . . . . . . . . . . . . . . . . 11,12
T.K3∼b = ε, T.C3∼b = ε, and T.s = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13,14,15
8.6. B+ TREES 477

T.C[1].s = d b+1 b+1


2 e, T.C[2].s = b 2 c, and UP = T . . . . . . . . . . . . . . . . 16,17,18
else, (internal node) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
T2 = B+tree-insert(T.C[p], q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
if UP = F, T.C[p] = T2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
else if T.s < b, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
for i = T.s down to p + 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
T.K[i + 1] = T.K[i] and T.C[i + 1] = T.C[i] . . . . . . . . . . . . . . . . . . . . . . 24,25
T.K[p] = T2 .K[1] and T.K[p + 1] = T2 .K[2] . . . . . . . . . . . . . . . . . . . . . . . . . 26,27
T.C[p] = T2 .C[1] and T.C[p + 1] = T2 .C[2] . . . . . . . . . . . . . . . . . . . . . . . . . . 28,29
T.s = T.s + 1 and UP = F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30,31
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Declare A1∼b+1 and B1∼b+1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33,34
A1∼p−1 = T.K1∼p−1 and Ap+2∼b+1 = T.Kp+1∼b . . . . . . . . . . . . . . . . . . . . 35,36
A[p] = T2 .K[1] and A[p + 1] = T2 .K[2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37,38
B1∼p−1 = T.C1∼p−1 and Bp+2∼b+1 = T.Cp+1∼b . . . . . . . . . . . . . . . . . . . . . 39,40
B[p] = T2 .C[1] and B[p + 1] = T2 .C[2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41,42
T.C[1].K1∼⌈(b+1)/2⌉ = A1∼⌈(b+1)/2⌉ and T.C[2].K1∼⌊(b+1)/2⌋ = A⌈(b+1)/2⌉+1∼b+1 . . 43,44
T.C[1].C1∼⌈(b+1)/2⌉ = B1∼⌈(b+1)/2⌉ and T.C[2].C1∼⌊(b+1)/2⌋ = B⌈(b+1)/2⌉+1∼b+1 . . 45,46
T.K[1] = T.C[1].K[1] and T.K[2] = T.C[2].K[1] . . . . . . . . . . . . . . . . . . . . . 47,48
T.C[1].s = ⌈(b+1)/2⌉ and T.C[2].s = ⌊(b+1)/2⌋ . . . . . . . . . . . . . . . . . . . . . 49,50
T.s = 2, T.K3∼b = ε, and T.C3∼b = ε . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51,52,53
return T , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

Line 2 checks whether q is a duplicate key and if so, it returns an error message. Lines
3 ∼ 15 are for the leaf node base cases. If the recursion reaches the leaf level and the leaf
node has room, q is simply inserted as stated in lines 4 ∼ 6. Lines 7 ∼ 15 take care of the
full leaf node case. The node is divided into two parts and the parent node needs to be
updated.
Line 20 recursively calls the insertion on the respective sub-tree. When the call returns
from the child node, if the flag UP = F, the returned sub-tree simply replaces the child (line
21). If UP = T and the current node has room, both the key and the respective child
sub-tree are inserted, as indicated in lines 22 ∼ 31. Lines 32 ∼ 53 take care of the case where
the internal node is full and UP = T. The node needs to be divided into two half-sized parts.
The subroutine ‘search p’ in line 1 finds the position where q is located or the sub-tree
which may contain q. If the binary search Algorithm 3.10 is used in line 1, the computational
time complexity of Algorithm 8.24 is O(log b · log_b n), or simply Θ(log n) if we consider b a
constant.
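
To make the split arithmetic of lines 8 ∼ 18 concrete, the sketch below (reusing the illustrative Node class from the search sketch; the helper name split_full_leaf is hypothetical) divides the b + 1 keys so that the left half receives ⌈(b+1)/2⌉ of them.

from bisect import insort

def split_full_leaf(leaf, q, b):
    A = list(leaf.K[:b])                      # line 8: the b existing keys ...
    insort(A, q)                              # ... plus q, kept sorted (b+1 keys)
    h = (b + 2) // 2                          # ceil((b+1)/2)
    left, right = Node(A[:h]), Node(A[h:])    # lines 9-10, 16-17
    # lines 11-12: each half is represented upward by its minimum key;
    # the caller then sets UP = T and hands both halves to the parent
    return left.K[0], left, right.K[0], right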
Figure 8.24 (a) demonstrates inserting 17 to the B+ tree in Figure 8.22 (d). This is
the simplest case in Figure 8.23 (a), where the leaf node has enough room to store the new
element. Figure 8.24 (b) demonstrates inserting 1 to the B+ tree in Figure 8.24 (a). This
is another instance of the simplest case, but the parent node needs to be updated with the
new minimum value. Figure 8.24 (c) demonstrates inserting 8 to the B+ tree in Figure 8.24
(b). This is the split case in Figure 8.23 (c), where the leaf node has no room to store the
new element.

8.6.3 Deletion
The deletion mechanism in a B+ tree is similar to that for a B tree. First, the leaf node
that contains q is identified. Once an element is removed, the lower bound property of the

[Figure drawings omitted; the three panels show the trees before and after each insertion:]
(a) insert 17 to the B+ tree in Figure 8.22 (d)
(b) insert 1 to the B+ tree in Figure 8.24 (a)
(c) insert 8 to the B+ tree in Figure 8.24 (b)

Figure 8.24: B+ tree insertion illustration



[Figure drawings omitted; the deletion cases are:]
(a) Case 1: Cp.s > ⌈b/2⌉
(b) Case 1m: Case 1 with delete(Cp.k1)
(c) Case 2: Cp.s = ⌈b/2⌉ ∧ Cp.s + Cp+1.s − 1 ≤ b
(d) Case 2m: Case 2 with delete(Cp.k1)
(e) Case 2l: Cp.s = ⌈b/2⌉ ∧ Cp−1.s + Cp.s − 1 ≤ b (no right sibling)
(f) Case 3: Cp.s = ⌈b/2⌉ ∧ Cp.s + Cp+1.s − 1 > b
(g) Case 3m: Case 3 with delete(Cp.k1)
(h) Case 3l: Cp.s = ⌈b/2⌉ ∧ Cp−1.s + Cp.s − 1 > b (no right sibling)

Figure 8.25: B+ tree deletion cases

B+ tree might be violated. Case 1, as depicted in Figure 8.25 (a) and (b), is the simplest
case in which the lower bound property still holds after removing an element. If the deleted
element happens to be the first minimum element in the key list, then the parent node must
be updated as well, as depicted in Figure 8.25 (b).
If the lower bound is violated, the node that contained q is merged with its right sibling
node. There are two cases (Case 2 and Case 3) depending on whether the size of the merged
list exceeds b or not. If the key list fits into one node, the deletion process is percolated up
to its parent node, as depicted in Figure 8.25 (c) and (d). If the node happens to be the
last node, i.e., it has no right sibling node, it is merged with its left sibling node as depicted
in Figure 8.25 (e). The last case, Case 3, occurs when the node violates the lower bound
and the size of the node merged with its sibling node exceeds b. The merged list is halved
into two nodes, as depicted in Figure 8.25 (f) ∼ (h). A pseudo code for deletion is stated
as follows:
Algorithm 8.25. B+ Tree - delete

B+tree-delete(T, q)
p = search p(q, T.K) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if T.C[p] = ε ∧ T.K[p] ≠ q, does not exist error! . . . . . . . . . . . . . . . . . . . . . . . . . . 2

else if T.C[p] = ε ∧ T.K[p] = q, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3


for i = p ∼ T.s − 1, T.K[i] = T.K[i + 1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
T.K[T.s] = ε and T.s = T.s − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5,6
else, (internal node) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
B+tree-delete(T.C[p], q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
if T.C[p].s < ⌈b/2⌉ ∧ p < T.s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
ms = T.C[p].s + T.C[p + 1].s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
if ms ≤ b . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
for i = 1 ∼ T.C[p + 1].s, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
T.C[p].K[i + T.C[p].s] = T.C[p + 1].K[i] . . . . . . . . . . . . . . . . . . . . . . . . . . 13
T.C[p].C[i + T.C[p].s] = T.C[p + 1].C[i] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
for i = p ∼ T.s − 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
T.K[i] = T.K[i + 1] and T.C[i] = T.C[i + 1] . . . . . . . . . . . . . . . . . . . . 16,17
T.K[T.s] = ε and T.C[T.s] = ε . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18,19
T.s = T.s − 1 and T.C[p].s = ms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20,21
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
for i = 1 ∼ ⌈ms/2⌉ − T.C[p].s, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
T.C[p].K[i + T.C[p].s] = T.C[p + 1].K[i] . . . . . . . . . . . . . . . . . . . . . . . . . . 24
T.C[p].C[i + T.C[p].s] = T.C[p + 1].C[i] . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
for i = 1 ∼ ⌊ms/2⌋, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
T.C[p + 1].K[i] = T.C[p + 1].K[i + ⌈ms/2⌉ − T.C[p].s] . . . . . . . . . . . . . . 27
T.C[p + 1].C[i] = T.C[p + 1].C[i + ⌈ms/2⌉ − T.C[p].s] . . . . . . . . . . . . . . 28
T.K[p + 1] = T.C[p + 1].K[1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
T.C[p].s = ⌈ms/2⌉ and T.C[p + 1].s = ⌊ms/2⌋ . . . . . . . . . . . . . . . . . . . . 30,31
else if T.C[p].s < ⌈b/2⌉ ∧ p = T.s, (i.e., no right sibling) . . . . . . . . . . . . . . . . . . . . 32
ms = T.C[p − 1].s + T.C[p].s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
if ms ≤ b . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
for i = 1 ∼ T.C[p].s, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
T.C[p − 1].K[T.C[p − 1].s + i] = T.C[p].K[i] . . . . . . . . . . . . . . . . . . . . . . .36
T.C[p − 1].C[T.C[p − 1].s + i] = T.C[p].C[i] . . . . . . . . . . . . . . . . . . . . . . . 37
T.K[T.s] = ε and T.C[T.s] = ε . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38,39
T.s = T.s − 1 and T.C[p − 1].s = ms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40,41
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
for i = T.C[p].s down to 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
T.C[p].K[⌊ms/2⌋ − T.C[p].s + i] = T.C[p].K[i], . . . . . . . . . . . . . . . . . . . . . . 44
T.C[p].C[⌊ms/2⌋ − T.C[p].s + i] = T.C[p].C[i], . . . . . . . . . . . . . . . . . . . . . . .45
for i = 1 ∼ T.C[p − 1].s − ⌈ms/2⌉ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .46
T.C[p].K[i] = T.C[p − 1].K[⌈ms/2⌉ + i], . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
T.C[p].C[i] = T.C[p − 1].C[⌈ms/2⌉ + i], . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
T.C[p − 1].K[⌈ms/2⌉ + i] = ε, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
T.C[p − 1].s = ⌈ms/2⌉ and T.C[p].s = ⌊ms/2⌋ . . . . . . . . . . . . . . . . . . . . . 50,51
T.K[p] = T.C[p].K[1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

Line 2 checks whether the B+ tree contains q. If it does, lines 3 ∼ 6 delete the element
from the leaf node. Rebalancing the size by merging is done in its parent node if the
minimum number of keys is violated. Line 8 calls the method recursively with the sub-tree
that may contain q. Lines 9 ∼ 21 handle Case 2 where the node has a right sibling node.

[Figure drawings omitted; the three panels show the trees before and after each deletion:]
(a) delete 2: Case 1 example
(b) delete 1: Case 2m followed by Case 1m
(c) delete 11: Case 3m followed by parent update

Figure 8.26: B+ tree deletion



Lines 22 ∼ 31 handle Case 3 where the node has a right sibling node. Lines 32 ∼ 41 handle
Case 2l where the node has no right sibling node. Lines 42 ∼ 51 handle Case 3l where
the node has no right sibling node. Case 1 is implicitly embedded. Line 52 is necessary to
update the parent node for Case 1m, Case 2m and Case 3m in Figure 8.25.
Figure 8.26 (a) illustrates deleting 2 from a sample B+ tree. This is Case 1 in Figure 8.25
(a). Figure 8.26 (b) illustrates deleting 1 from the B+ tree in Figure 8.26 (a). This is Case
2m, as the lower bound property is violated when deleting 1 and the total number of keys
after merging with the right sibling node fits in a single node. It should be noted that
the root node is also updated. Figure 8.26 (c) illustrates deleting 11 from the B+ tree in
Figure 8.26 (b). This is Case 3m, as the lower bound property is violated when deleting
11 and the total number of keys after merging with the right sibling node does not fit in
a single node. Redistribution is necessary. It should be noted that the root node is also
updated.

8.7 Skip List

[Figure drawings omitted:]
(a) a sample (b = 4) B+ tree
(b) skip list equivalent to the B+ tree in (a)

Figure 8.27: Skip list

B+ trees are often said to be dynamic because the height of the tree grows and contracts
as records are added and deleted. The B+ tree version introduced in the previous section
is indeed semi-dynamic. That is, keys and sub-trees of a node are stored in arrays of fixed
size b, as shown in Figure 8.27 (a). There are many empty cells. In this section, a fully
dynamic dictionary data structure which utilizes linked lists is introduced.
Skip lists were introduced in [139, 140] as an alternative to balanced trees. A determin-
istic version was given in [126]. Figure 8.27 (b) shows a sample skip list that is equivalent
to the B+ tree in Figure 8.27 (a). So as to distinguish it from a randomized skip list or
other versions of skip list, the skip list in Figure 8.27 (b) shall be referred to as a skip-b list.
Imagine that all keys in the ith level of the B+ tree in Figure 8.27 (a) are linked together
as a linked list. Each node has two pointers. One is for the next node in the linked list and
the other is for the node in the next lower level linked list. The branch between a parent
node and its child node in a B+ tree can be viewed as the link to the node in the next lower
8.7. SKIP LIST 483

level.
An internal node, x, in a skip-b list has three parts: the key value x.v, the pointer x.next
to the next node in the list on the same level, and the pointer x.below to the node in the
next lower level. One property of a skip-b list is x.below.v = x.v. For example, the node x
= 11₁ at level 1 in Figure 8.27 (b) has x.v = 11, x.next = 14₁, and x.below = 11₀. For
convenience, a subscript denoting the level is used to distinguish nodes with the same key
value on different levels.
The size of an internal node in a B+ tree is stored as T.s and is bounded: ⌈b/2⌉ ≤ T.s ≤ b.
The lower and upper bounds for B+ trees apply to the skip-b list except for the highest
level linked list, which is bounded by 2 ≤ T.s ≤ b. Let T.s be the skip size for a node in a
skip-b list. A pseudo code to find the skip size of an internal node, x, is stated as follows:

Subroutine 8.10. Skip size of a node x


Skipsize(x)
p = x.below and s = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . 1,2
while p ≠ x.next . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
p = p.next and s = s + 1 . . . . . . . . . . . . . . . . . . . . . . 4,5
return s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
The process of finding the skip size of an internal node stated in Subroutine 8.10 takes
Θ(b) time.
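
A minimal Python sketch of the node structure and of Subroutine 8.10 follows. The class and names are illustrative, and the pseudo code's termination test against x.next is interpreted here as reaching the lower-level copy of x.next (guaranteed to exist by the property x.below.v = x.v).

class SkipNode:
    def __init__(self, v, nxt=None, below=None):
        self.v = v           # key value
        self.next = nxt      # next node on the same level
        self.below = below   # copy of this key one level down (None at leaf level)

def skip_size(x):
    # count the lower-level nodes spanned by x: from x.below up to,
    # but not including, the node below x.next (or the end of the list)
    stop = x.next.below if x.next is not None else None
    p, s = x.below, 1                            # lines 1-2
    while p.next is not None and p.next is not stop:
        p = p.next                               # line 4
        s = s + 1                                # line 5
    return s                                     # line 6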
The bottom level contains the sorted list of all elements. The list in the level above the
bottom level contains a sorted list but skips some elements. It can be thought of as an
express route, while the list in the bottom level is the local route in a subway system. The
lists at higher levels are even faster routes. To reach the key value q = 16, one starts from
the fastest lane. Since q > 2, one can follow the fastest list to get to the next station 11.
Since the route ends, one transfers to the next lower level train at station 11. Since q > 11,
one travels on the level 1 train. Since q > 14 but q < 21, one transfers to the local route
at station 14. A pseudo code for searching a key value, q, in a skip-b list, L, is stated as
follows:
Algorithm 8.26. Skip-b list search

skip-list-search(L, q)
if L.below = ε, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
while L.next ≠ ε and L.next.v ≤ q . . . . . . . . . . . 2
L = L.next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if L.v = q, ⟨found⟩ . . . . . . . . . . . . . . . . . . . . . 4
else, ⟨q does not exist⟩ . . . . . . . . . . . . . . . . . 5
else if L.v > q, ⟨q does not exist⟩ . . . . . . . . . . . 6
else if L.next ≠ ε and L.next.v ≤ q, . . . . . . . . . . . . . 7
skip-list-search(L.next, q) . . . . . . . . . . . . . . . . . . . . . . . 8
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
skip-list-search(L.below, q) . . . . . . . . . . . . . . . . . . . . 10

Lines 1 ∼ 5 are for searching the nodes at the bottommost (leaf) level. Line 6 is for the
case where q is smaller than the minimum key. Lines 7 ∼ 8 move to the next node on the
same level and lines 9 ∼ 10 move to the node in the next lower level.
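
Algorithm 8.26 transcribes almost line for line into Python, assuming the SkipNode class sketched earlier; L is the first node of the root (highest) level list.

def skip_list_search(L, q):
    if L.below is None:                               # bottom (leaf) level
        while L.next is not None and L.next.v <= q:   # lines 1-3
            L = L.next
        return L.v == q                               # lines 4-5
    if L.v > q:                                       # line 6: below the minimum
        return False
    if L.next is not None and L.next.v <= q:          # lines 7-8: move right
        return skip_list_search(L.next, q)
    return skip_list_search(L.below, q)               # lines 9-10: move down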

An advantage of the skip list over the B+ tree is that it is fully dynamic. A disadvan-
tage is that binary search within its implicit nodes is impossible. The computational time
complexity of the search operation in skip lists is O(b log n) whereas it is O(log b · log n) in
B+ trees.

8.7.1 Insertion

[Figure drawings omitted:]
(a) splitting a node in a skip-(b = 4) list
(b) splitting a node in a skip-(b = 5) list

Figure 8.28: Skip-b list split process

Before embarking on the insertion operation of a skip-b list, readers must understand
the concept of splitting a node x whose skip size is between b + 1 and 2b. Such a node
violates the upper bound property, but when divided into two half-sized nodes, their skip
sizes meet both upper and lower bound properties. This node splitting process is illustrated
in Figure 8.28 (a) and (b) where the skip size is b = 4 and 5, respectively. The pseudo code
is stated as follows:

Subroutine 8.11. Split a node x

Skip-split(x, s)
p = x.below . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ ⌈s/2⌉ . . . . . . . . . . . . . . . . . . . . . . . . . 2
p = p.next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
make a node q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
q.next = x.next and q.below = p . . . . . . . . . . . . . . . . 5,6
x.next = q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

This node splitting subroutine, which takes Θ(b) time, will be invoked in both insert and
delete operations.
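
In Python the split subroutine may look as follows (reusing SkipNode). The new node's key value is copied from its lower-level counterpart, which the pseudo code leaves implicit via the property x.below.v = x.v.

def skip_split(x, s):
    p = x.below                       # line 1
    for _ in range((s + 1) // 2):     # lines 2-3: advance ceil(s/2) steps
        p = p.next
    q = SkipNode(p.v)                 # line 4, with the value made explicit
    q.next, q.below = x.next, p       # lines 5-6
    x.next = q                        # line 7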
To insert an element q in a skip-b list, the position where q is to be inserted at the leaf
level is identified by a method similar to Algorithm 8.26. When the element is inserted, the
node at the parent level may exceed the maximum skip size and need a split, as depicted in
Figure 8.29 (a). This node splitting may continue all the way to the root level. When the
size of the list at the root level exceeds b, one more level is created as a new root level list
with a node that requires splitting. This root level case insertion is depicted in Figure 8.29
(b). The pseudo code for insertion is stated as follows:

[Figure drawings omitted:]
(a) inserting an element makes a node at the parent level exceed the maximum skip size, so it is split
(b) inserting at the root level: when the root level list grows beyond b, a new root level is created and the old root node is split

Figure 8.29: Skip-b list insertion operation

Algorithm 8.27. Skip-b list insert

skip-list-insert(L, q)
while L.next ≠ ε and L.next.v ≤ q . . . . . . . . . . . . . . . . . . . . . . . . . . 1
L = L.next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if L.below = ε, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if L.v = q, return ⟨already exists⟩ . . . . . . . . . . . . . . . . . . . . . . . . . . 4
else if L.v < q, create a node whose X.v = q . . . . . . . . . . . . . . . . 5
else (if L.v > q), create a node, X.v = L.v and L.v = q . . . . . 6,7
X.next = L.next and L.next = X . . . . . . . . . . . . . . . . . . . . . . . . . . . 8,9
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
if L.v > q, L.v = q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
skip-list-insert(L.below, q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
if Skipsize(L) > b, Skip-split(L, b + 1) . . . . . . . . . . . . . . . . . . . . . 13
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

Lines 1 ∼ 2 find the position for insertion within the linked list on the current level.
Only O(b) nodes are involved in locating the proper position. Lines 3 ∼ 9 insert the element
q into the linked list on the bottom (leaf) level. A new node, X, is created to store q on
line 5. Lines 10 ∼ 13 traverse the linked list in the next lower level. When returned, the
skip size is checked and if it exceeds b, the skip-split subroutine is invoked as indicated on
line 13.
One important exceptional case where q is lower than the minimum key value in the
skip-b list needs careful attention. If the level is not the bottom (leaf) level, the first node's
value is changed to q, as stated in line 11. If the level is the bottom (leaf) level, a new node, X, is
created to store the value of the current node, L, which is the first node in the list. The
current node’s value is changed to q and the node X is inserted right after L, as indicated
in lines 6 ∼ 9.
The computational time complexity of Algorithm 8.27 is O(b log n), since there are
Θ(log n) levels and only O(b) nodes are involved in the insertion on each
level.
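
Putting the pieces together, a hedged Python sketch of Algorithm 8.27 follows, reusing skip_size and skip_split from above; the maximum skip size b is passed as a parameter here instead of being kept global.

def skip_list_insert(L, q, b):
    while L.next is not None and L.next.v <= q:   # lines 1-2
        L = L.next
    if L.below is None:                           # bottom (leaf) level
        if L.v == q:                              # line 4: duplicate key
            return
        if L.v < q:                               # line 5: the usual case
            X = SkipNode(q)
        else:                                     # lines 6-7: q below the minimum
            X = SkipNode(L.v)
            L.v = q
        X.next, L.next = L.next, X                # lines 8-9
    else:
        if L.v > q:                               # line 11: new minimum key
            L.v = q
        skip_list_insert(L.below, q, b)           # line 12
        if skip_size(L) > b:                      # line 13: rebalance
            skip_split(L, b + 1)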
Figure 8.30 (a) illustrates inserting an element 17 to the skip-(b = 4) list in Figure 8.27

[Figure drawings omitted; the panels show the lists before and after each insertion:]
(a) inserting 17 to the skip-(b = 4) list in Figure 8.27 (b)
(b) inserting 20 to the skip-(b = 4) list in Figure 8.30 (a)
(c) inserting 1 to the skip-(b = 4) list in Figure 8.30 (a)

Figure 8.30: Skip list



(b). It reflects the simplest case where the leaf node has enough room so that no split is
necessary. Figure 8.30 (b) illustrates inserting an element 20 to the skip-(b = 4) list in
Figure 8.30 (a). The skip size for node 14₁ exceeds b and thus the skip-split subroutine is
invoked. This causes the skip size for node 11₂ to exceed b and thus the skip-split subroutine
is invoked again. Finally, the exceptional case where q is lower than the minimum key value
in the skip-b list is illustrated in Figure 8.30 (c). Suppose an element 1 is inserted into the
skip-(b = 4) list in Figure 8.30 (a). Since it is smaller than the first element in the root level
list, the first element value is changed to 1. All first elements in non-leaf levels are updated
accordingly. A new node is created and inserted according to lines 6 ∼ 9 in Algorithm 8.27.

8.7.2 Deletion

[Figure drawings omitted:]
(a) Case 1: merging a node with its preceding node
(b) Case 2: merging a node with a pointer from the node on the parent level

Figure 8.31: Skip-b list merge cases

Before covering the deletion operation of a skip-b list, readers must understand the
concept of merging a node x with its neighboring node when its skip size is below the
lower bound ⌈b/2⌉. The node is merged with the preceding node by removing the node, as
illustrated in Figure 8.31 (a), where the node v violates the lower bound. The tricky part
is to find the previous node as there is no pointer to the previous node. The previous node
is passed to the subroutine as an input argument. If rebalancing is necessary after merging,
i.e., the skip size exceeds b, the skip split Subroutine 8.11 is called.
If the node which violates the minimum skip size has a pointer from its parent level
node, it is merged with the next node instead of the previous node on the same level list, as
illustrated in Figure 8.31 (b), where the node p violates the lower bound. If rebalancing is
necessary after merging, i.e., the skip size exceeds b, the skip split Subroutine 8.11 is called.

Subroutine 8.12. Node merging


Skip-merge(x, p)
if p = x, x.next = x.next.next . . . . . . . . . . . . . . . . . . 1
else, p.next = p.next.next and x = p . . . . . . . . . .2,3
s = Skipsize(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if s > b, Skip-split(x, s) . . . . . . . . . . . . . . . . . . . . 5
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
488 CHAPTER 8. TREE DATA STRUCTURES FOR DICTIONARY

The node merging Subroutine 8.12, which takes O(b) time, will be invoked in the deletion
operation. The first input argument, x, is the node that violates the minimum skip size.
The second input argument, p, is the node preceding x. In the second case, shown in
Figure 8.31 (b), there is no preceding node, so the current node, x, itself is passed as p. Line 1 is
for the (p = x) case, as depicted in Figure 8.31 (b). Lines 2 and 3 merge the node with the
preceding node, as depicted in Figure 8.31 (a). Whether the merged node needs to split is
determined in lines 4 and 5.
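
A Python sketch of the merge subroutine follows, with the split on line 5 applied to the surviving node x as in the corrected pseudo code.

def skip_merge(x, p, b):
    if p is x:                    # line 1: Case 2, absorb the next node
        x.next = x.next.next
    else:                         # lines 2-3: Case 1, absorb x into its predecessor
        p.next = p.next.next
        x = p
    s = skip_size(x)              # line 4
    if s > b:                     # line 5: re-split if the merged node is too big
        skip_split(x, s)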

[Figure drawings omitted:]
(a) removing an element makes a node at the parent level fall below the minimum skip size, so it is merged
(b) removing an element that also appears in non-leaf level lists
(c) removing at the root level: when only one node remains in the root level list, the next lower level becomes the new root

Figure 8.32: Skip-b list deletion operation

There are essentially three cases for deleting an element q from a skip-b list. The first
general case is depicted in Figure 8.32 (a). First, the position of q in the leaf level is
identified by a method similar to Algorithm 8.26. When the element is removed, the node
on the parent level may go below the minimum skip size and need a merge. This merge
process may continue all the way up to the root level. The second case is when the item to
be removed, q, appears in non-leaf level lists, as illustrated in Figure 8.32 (b). The nodes in
the other levels need to be updated as well. First, the node in the leaf level is identified and
replaced by its succeeding node and then the parent node’s value is updated if their values
are q. Merge case 2 in Subroutine 8.12 is called if necessary. The last case is for the root
level list. If the root level list is also a leaf level list, the item can simply be removed in the
same way as removing an element from a linked list. If the root level list is not leaf level
and there is only one node in the list, the next lower level becomes the new root level list,
as depicted in Figure 8.32 (c). Pseudo code for deletion is stated as follows:
Algorithm 8.28. Skip-b list delete

skip-list-delete(L, q)
p = L ........................................................... 1
while L.next ≠ ε and L.next.v ≤ q . . . . . . . . . . . . . . . . . . . . . . . . . . 2
p = L and L = L.next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,4

if L.below = ε, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if p.v = q, L.v = L.next.v and L.next = L.next.next . . . . . . 6,7
else if L.v = q, L = p and L.next = L.next.next . . . . . . . . . . . 8,9
else, (if L.v ≠ q) return ⟨does not exist⟩ . . . . . . . . . . . . . . . . 10
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
skip-list-delete(L.below, q) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
if L.v = q, L.v = L.below.v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
if Skipsize(L) < ⌈b/2⌉, Skip-merge(L, p) . . . . . . . . . . . . . . . . . 14
if L is root and L.next = ε, make L.below be the root . . . . . 15
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

The first node visited in a new level is stored in p in line 1 of Algorithm 8.28, which shall
be used in the Skip-merge Subroutine 8.12. Lines 2 ∼ 4 move L to the node in the right
position for q within the same level list, which takes O(b) time. p is the preceding node to L
unless L is the first node. Lines 5 ∼ 10 handle deletion in the leaf level list. Lines 8 and 9 are
for the first general case where q appears only in the leaf level list. Lines 6 and 7 are for the
second case where q appears in multiple level lists. Line 10 is optional, and it returns
an error message when q does not exist. Line 12 invokes the deletion method recursively.
Line 13 is for the second case where the node value needs an update if the value is the same
as q. Line 14 calls the Skip-merge Subroutine 8.12 if necessary. Line 15 is for the last case
where the number of levels is reduced by one.
Figure 8.33 (a) illustrates the simplest deletion case that removes 20 from a sample
skip-(b = 4) list. Figure 8.33 (b) illustrates deleting 23. When the node 23₀ is removed
from the leaf level list, the node 21₁'s skip size violates the lower bound. The node 21₁ is
merged with its preceding node, 14₁. Figure 8.33 (c) illustrates deleting 11. It is
the second deletion case described in Figure 8.32 (b).

8.8 Exercises
Q 8.1. Consider the following binary trees.

[Drawings omitted: four binary trees, T1 ∼ T4, on the keys 1 ∼ 8.]

a). Which of the trees above are BST(s)?


b). Which of the trees above are height balanced?
c). Which of the trees above are AVL(s)?

Q 8.2. Construct binary search trees by applying the BST insertion method sequentially
in the following given input orders.
a). ⟨7, 4, 2, 6, 1, 3, 5, 8⟩

[Figure drawings omitted; the panels show the lists before and after each deletion:]
(a) Delete 20 from the skip-(b = 4) list.
(b) Delete 23 from the skip-(b = 4) list.
(c) Delete 11 from the skip-(b = 4) list.
Figure 8.33: Skip-(b = 4) list deletion Algorithm 8.28 illustration



b). ⟨7, 4, 2, 1, 8, 3, 6, 5⟩

c). ⟨4, 7, 3, 5, 6, 1, 2, 8⟩

d). ⟨7, 1, 2, 3, 4, 5, 6, 8⟩

Q 8.3. Construct AVL trees by applying the AVL insertion method sequentially in the
following given input orders.

a). ⟨7, 4, 2, 6, 1, 3, 5, 8⟩

b). ⟨7, 4, 2, 1, 8, 3, 6, 5⟩

c). ⟨4, 7, 3, 5, 6, 1, 2, 8⟩

d). ⟨7, 1, 2, 3, 4, 5, 6, 8⟩

Q 8.4. Consider the problem of constructing an AVL tree given a list of n quantifiable
elements.

a). Formulate the problem.

b). Derive a first order linear recurrence relation.

c). Devise an inductive programming algorithm.

d). Provide the computational time complexity of the algorithm provided in c).

Q 8.5. Consider the following binary search tree.

[Drawing omitted: a binary tree containing the words ‘Chillaxin’ (the root), ‘Brill’, ‘Java’,
‘Avatar’, ‘Festivus’, ‘Snobbish’, ‘Mo’, and ‘Spam’.]

a). What is the pre-order depth first traversal?

b). What is the in-order depth first traversal?

c). What is the post-order depth first traversal?

d). Is it a BST? Justify your answer.

e). Is it an AVL? Justify your answer.

f). Draw the BSTs after these 2 operations: insert ‘Memoization’ followed by delete ‘Java’.

g). Draw the AVLs after these 2 operations: insert ‘Memoization’ followed by delete
‘Chillaxin’.

Q 8.6. Consider the following binary trees.



[Drawings omitted: three binary trees, T1 ∼ T3, whose nodes are labeled with the data
structure names listed in the hint below.]

Hint: the following words can be ordered according to the ASCII character order.
⟨‘2-3 tree’, ‘2-3-4 tree’, ‘AVL’, ‘BST’, ‘B tree’, ‘B+ tree’, ‘Hashing’, ‘Red-black’, ‘Skip list’⟩

a). Which of the binary trees above are BST(s)?

b). Which of the binary trees above are height balanced?

c). Which of the binary trees above are AVL(s)?

d). Construct a BST where the input is inserted in the following sequence:

⟨BST, AVL, 2-3 tree, B tree, 2-3-4 tree, B+ tree, Skip list⟩.

e). Draw the BSTs resulting from implementing the following 3 operations in sequence to
the BST constructed in d): insert ‘Hashing’, delete ‘2-3-4 tree’, and insert ‘Red-black’.

f). Construct an AVL tree where the input is inserted in the following sequence:

⟨BST, AVL, 2-3 tree, B tree, 2-3-4 tree, B+ tree, Skip list⟩.

g). Draw the AVLs resulting from implementing the following 3 operations in sequence to
the AVL constructed in f): insert ‘Hashing’, delete ‘2-3-4 tree’, and insert ‘Red-black’.

Q 8.7. Consider the following six ternary trees.



[Drawings omitted: six ternary trees, T1 ∼ T6.]

a). Which of the trees above are TST(s)?


b). Which of the trees above satisfy the same leaf level property defined in Problem 8.10
on page 455?
c). Which of the trees above satisfy the 2-3 property defined in Problem 8.11 on page 455?
d). Which of the trees above are 2-3 tree(s)?

Q 8.8. Consider the various properties of a ternary tree and devise an algorithm to check
each respective property.
a). Devise an algorithm for Problem 8.10 to determine whether a ternary tree holds the
same leaf level property defined on page 455.
b). Provide the computational time complexity of the algorithm proposed in a).
c). Devise an algorithm for Problem 8.11 to determine whether a ternary tree holds the
2-3 property defined on page 455.
d). Provide the computational time complexity of the algorithm proposed in c).
Q 8.9. Construct 2-3 trees by applying the 2-3 tree insertion method sequentially in the
following given input orders.

a). ⟨7, 4, 2, 6, 1, 3, 5, 8⟩
b). ⟨7, 4, 2, 1, 8, 3, 6, 5⟩
c). ⟨4, 7, 3, 5, 6, 1, 2, 8⟩

d). ⟨7, 1, 2, 3, 4, 5, 6, 8⟩

Q 8.10. Consider the following 2-3 tree:

[Drawing omitted: a 2-3 tree with root key 12; internal nodes (4, 7) and (15, 20); and leaf
keys 1, 3, 5, 8, 14, 17, 21, 23.]

Draw the 2-3 trees resulting from implementing the following operations in order.

a). insert(10)

b). insert(2)

c). delete(12)

d). delete(4)

Q 8.11. Consider the following partial drawing of a 3-4-5 tree or (b = 5) B tree:

[Drawing omitted: a partial (b = 5) B tree with root key 15; internal keys 4, 11, 19, 26; and
leaf keys 1, 3, 5, 6, 7, 9, 12, 14, 16, 18, 20, 23, 27, 29, 33, 38.]

Draw the 3-4-5 trees resulting from implementing the following operations in order.

a). insert(21)

b). insert(8)

c). delete(6)

d). delete(4)

e). delete(9)

Q 8.12. Consider the following (b = 3) B+ tree, which may be referred to as a 2-3+ tree.
Draw the B+ trees resulting from implementing the following operations in order.

[Drawing omitted: a (b = 3) B+ tree with root keys 1, 14; second-level keys 1, 7, 11, 14, 17,
25; and leaf keys 1, 3, 5, 7, 9, 11, 13, 14, 16, 17, 19, 21, 25, 29.]

a). insert(10)

b). insert(6)

c). delete(7)

d). delete(14)

e). delete(1)

Q 8.13. In B+ trees, the minimum element in the sub-tree is stored in the node. The B₂⁺
tree is a variation of the B+ tree in which the node stores the maximum element in the
sub-tree instead of the minimum.

[Node layout drawing omitted: a node stores keys K1∼b with ki = max(Ci) and children
C1∼b, where every key in Ci is less than ki+1.]

Consider the following sample (b = 3) B₂⁺ tree, which may be referred to as a 2-3₂⁺ tree:

[Drawing omitted: root keys 13, 29; second-level keys 5, 9, 13, 16, 21, 29; and leaf keys 1, 3,
5, 7, 9, 11, 13, 14, 16, 17, 19, 21, 25, 29.]

Draw the B₂⁺ trees resulting from implementing the following operations in order.

a). Devise an algorithm to check whether a k-ary tree is a B₂⁺ tree.

b). Devise an algorithm to search an element, q, in a B₂⁺ tree.

c). Devise an algorithm to insert an element, q, into a B₂⁺ tree.

d). Devise an algorithm to delete a key, q, from a B₂⁺ tree.

e). insert(10)

f). insert(6)

g). delete(7)

h). delete(14)

i). delete(1)

Q 8.14. Consider this sample (b = 4) B₂⁺ tree, which may be referred to as a 2-3-4₂⁺ tree:

[Drawing omitted: root keys 10, 38; second-level keys 3, 10, 12, 19, 23, 38; and leaf keys 2, 3,
4, 5, 9, 10, 11, 12, 14, 16, 19, 21, 23, 27, 29, 38.]

Draw the B₂⁺ trees resulting from implementing the following operations in order. Note
that the B₂⁺ tree is defined in the previous Exercise Q 8.13.

a). insert(1)

b). insert(20)

c). insert(8)

d). delete(2)

e). delete(1)

f). delete(10)

Q 8.15. Consider the following skip-(b = 3) list:

[Drawing omitted: a skip-(b = 3) list with level-2 keys 2, 11, 21; level-1 keys 2, 9, 11, 14,
21, 27; and level-0 keys 2, 4, 5, 9, 10, 11, 12, 14, 19, 20, 21, 23, 27, 29, 38.]

Draw the skip-(b = 3) lists resulting from implementing the following operations in order.

a). insert(18)

b). insert(1)

c). delete(23)

d). delete(21)

e). delete(14)

Q 8.16. Consider the following skip-(b = 4) list:

[Drawing omitted: a skip-(b = 4) list with level-2 keys 2, 11, 21; level-1 keys 2, 9, 11, 14,
21, 27; and level-0 keys 2, 4, 5, 9, 10, 11, 12, 14, 19, 20, 21, 23, 27, 29, 38.]

Draw the skip-(b = 4) lists resulting from implementing the following operations in order.

a). insert(18)

b). insert(1)
c). delete(23)
d). delete(21)

e). delete(14)
Chapter 9

Priority Queue

In a hospital emergency room, it may be desirable to serve the patient with the highest
severity first, instead of serving the patient who came first. A priority queue is an abstract
data structure like a queue, but with a highest-priority-first principle instead of a first-in
first-out principle. The two main operations of a priority queue are insert and delete the
element with the highest priority.
Priority queues play important roles in many applications, such as emergency services,
flight landing, and operating systems. Consider the following printing jobs with different
numbers of pages queued in a shared printer: ⟨9, 6, 3, 1⟩. Suppose printing a single page
takes one unit of time. If an ordinary queue is used, the first person waits 9 time units and
the last person waits 19 time units; the total wait time for everyone is 61. Suppose the
minimum-pages-first queue is used instead of an ordinary (FIFO) queue. Then the printer
prints the jobs in the order ⟨1, 3, 6, 9⟩ and the total wait time is only 34. The priority queue,
where the minimum number of pages is the highest priority, often produces the lowest total
wait time.
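
The arithmetic is easy to verify; the short Python check below uses an illustrative helper, total_wait, that is not part of the text.

def total_wait(jobs):
    waits, elapsed = [], 0
    for pages in jobs:
        elapsed += pages         # printing one page takes one unit of time
        waits.append(elapsed)    # this job finishes at time 'elapsed'
    return sum(waits)

print(total_wait([9, 6, 3, 1]))          # FIFO order: 9 + 15 + 18 + 19 = 61
print(total_wait(sorted([9, 6, 3, 1])))  # minimum pages first: 1 + 4 + 10 + 19 = 34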
The priority queue abstract data type can be implemented in many different data struc-
tures, as listed in Table 9.1. Binary heap [180], leftist heap [43], binomial heap [177], and
Fibonacci heap [65] were developed in 1964, 1972, 1978, and 1987, respectively. Their com-
putational complexities for priority queue operations such as insert, delete min, delete max,
construct, etc. are different though. In this chapter, data structures such as binary heap,
min-max heap, leftist heap, and AVL tree are presented as priority queues. The stability of
priority queues is formally discussed at the end.
Suppose that all elements are stored in an array as a list and this simple array is used
as the data structure for the priority queue. An element can be inserted at the end of the
list, which takes constant time. To delete the element with a minimum value, the element
must be found first. Problem 2.14, finding a minimum element, was dealt with on page 58.
The sequential findmin Algorithm 2.18 on page 58 can be used, which takes Θ(n). If an
array sorted in ascending order is used as a data structure, the insertion operation takes


Table 9.1: Priority Queue Operations and Property with Different Data Structures.

Data structure       insert      delete/find min      delete/find max      decrease key  merge       construct
binary min-heap      O(log n)†   Θ(log n) / O(1)      Θ(n) / Θ(n)          O(log n)      Θ(n)        Θ(n)
binary max-heap      O(log n)†   Θ(n) / Θ(n)          Θ(log n) / O(1)      O(log n)      Θ(n)        Θ(n)
min-max heap         O(log n)†   Θ(log n) / O(1)      Θ(log n) / O(1)      O(log n)      Θ(n)        Θ(n)
leftist min-heap     O(log n)    O(log n) / O(1)      Θ(n) / Θ(n)          O(n)          O(log n)    Θ(n)
binomial min-heap    O(log n)    Θ(log n) / Θ(log n)  Θ(n) / Θ(n)          Θ(log n)      O(log n)    Θ(n)
              ↑ Non stable heaps / ↓ stable priority queues
Array                O(1)†       Θ(n) / Θ(n)          Θ(n) / Θ(n)          O(1)          O(1)        Θ(n)
Linked list (L.L.)   O(1)        Θ(n) / Θ(n)          Θ(n) / Θ(n)          O(n)          O(1)        O(1)
Sorted array (asc.)  Θ(n)        Θ(n) / O(1)          O(1) / O(1)          O(n)          Θ(n)        O(n log n)
Sorted L.L. (dsc.)   O(n)        Θ(n) / O(1)          O(1) / O(1)          O(n)          Θ(n)        O(n log n)
BST                  O(n)        O(n) / O(n)          O(n) / O(n)          O(n)          Θ(n)        O(n log n)
AVL                  O(log n)    Θ(log n) / O(log n)  Θ(log n) / O(log n)  O(log n)      Θ(n)        O(n log n)
skip list            Θ(log n)    Θ(log n) / O(1)      Θ(log n) / Θ(log n)  O(log n)      Θ(n log n)  Θ(n log n)
2-3 tree             Θ(log n)    Θ(log n) / Θ(log n)  Θ(log n) / O(1)      O(log n)      Θ(n log n)  Θ(n log n)

† it takes linear time if the array is full.
* it is true if the sorting algorithm is stable.

linear time, but finding the minimum or maximum takes constant time, as the first and
last elements are minimum and maximum values, respectively. To delete the element with
the minimum value which is the first element in the array, entire array elements must be
shifted one by one. Thus, it takes linear time. However, deleting the maximum element
takes constant time.
On page 355 in Chapter 7, ADT vs. data structure was explained by analogizing them
as James Bond and Q in the 007 film series. Designing data structures for the priority queue ADT
would be the role of Q. Designing algorithms for various problems utilizing the priority
queue ADT would be the role of 007. In this chapter, algorithms are categorized into 007
and MacGyver versions. MacGyver is from an old TV series, and he can be characterized
as a combination of 007 and Q. If an algorithm is devised utilizing a data structure purely
as an ADT, it is categorized as a 007 version algorithm. If an algorithm is devised utilizing
a data structure and its full detailed mechanisms, it is categorized as a MacGyver version
algorithm.
This chapter has several objectives. An efficient way to implement a priority queue is a
heap, which is typically a complete binary tree stored in an array. Readers must be able to

identify whether a given binary tree is a min-heap or max-heap. Equivalently, one should be
able to identify whether a given array is a min-heap or max-heap. Next, one must be able
to insert a new item to a heap data structure efficiently. One must also be able to delete an
element with a minimum or maximum value from the min-heap or max-heap, respectively.
One must understand how to construct a heap using different algorithms. One must be
able to find the efficiencies of priority queues implemented by means other than heaps as
complete binary trees. Finally, one must be able to utilize the priority queue data structure
to design efficient algorithms for various problems, such as sorting, kth order statistics, and
alternating permutation. Many greedy algorithms covered in Chapter 4 may benefit from
the priority queue data structure.

9.1 Heap

A heap is an efficient implementation of a priority queue. Although a heap can be designed
in static or dynamic versions, a static version is conventionally used to define a heap; it is
a partially ordered complete tree with a heap property, where each node has the maximum
priority of all its sub-trees.

9.1.1 Complete Tree


To understand a static version of a heap, the concept of complete trees must be under-
stood. In complete trees, nodes are inserted from left to right, and from the top to the
bottom. Every level of the tree is completely filled, except for the bottom level, which is
filled from left to right. For example, trees in Figure 9.1 (a) are not complete trees, as the
bottom levels are not filled from the left. Trees in Figure 9.1 (b) are complete binary trees.
A complete k-ary tree is a complete tree in which each node has up to k children, where
k ≥ 2. Figure 9.1 (b) and (c) show binary (k = 2) and ternary (k = 3) complete trees,
respectively. Since the breadth-first ordering of a complete tree's nodes has no gaps, an
array can be used to represent a complete tree.

John W. J. Williams (1930-2012) was a pioneering British-born computer scientist. He is
best known for inventing heapsort and the binary heap data structure [180].
(© No photo is in the public domain, and the ownership could not be identified.)

[Figure drawings omitted:]
(a) Not complete trees
(b) Binary complete trees
(c) Ternary complete tree

Figure 9.1: Complete trees

As observed in Figure 9.2, the ith element of the array corresponds to the ith node value
in the complete tree. In other words, the order of the array is the breadth first traversal
order of the complete tree. The index of the parent node is the floor of half the index of the
child nodes. Various properties of a complete binary tree, including the indices of the left
and right child nodes, are listed in Property 9.1.

[Figure drawing omitted: the complete binary tree whose breadth-first (level-order) node
values are 21, 19, 7, 17, 6, 3, 1, 10, 9, 8, stored in an array at indices 1 ∼ 10.]
(a) Binary complete tree
(b) Array representation

Figure 9.2: Complete tree as an array

Property 9.1. Binary complete tree

parent(ai) = a⌊i/2⌋ for ∀i ∈ {2, · · · , n}                        (9.1)
left child(ai) = a2i for ∀i ∈ {1, · · · , ⌊n/2⌋}                  (9.2)
right child(ai) = a2i+1 for ∀i ∈ {1, · · · , ⌊n/2⌋}               (9.3)
level(ai) = ⌊log i⌋                                               (9.4)
number of internal nodes(n) = ⌊n/2⌋                               (9.5)
number of leaf nodes(n) = ⌈n/2⌉                                   (9.6)

Various properties, including the indices of the parent node and its children nodes of a
node in a k-ary complete tree in general, are given in Property 9.2.

Property 9.2. Complete k-ary tree where k ≥ 2.

parent(ai) = a⌈(i−1)/k⌉ for ∀i ∈ {2, · · · , n}                    (9.7)
jth child(ai, j) = ak(i−1)+j+1 for ∀i ∈ {1, · · · , ⌊n/k⌋}         (9.8)
level(ai) = ⌊logk((k − 1)i)⌋                                      (9.9)
number of internal nodes(n) = ⌈(n − 1)/k⌉                         (9.10)
number of leaf nodes(n) = n − ⌈(n − 1)/k⌉ = ⌊((k − 1)n + 1)/k⌋    (9.11)

The height of a k-ary complete tree is Θ(log n) by eqn (9.9).
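
These index formulas translate directly into code. The helpers below, written for the 1-indexed positions used in the text, are a sketch; the assertion reproduces the parent and child relations of the tree in Figure 9.2.

from math import floor, log

def parent(i, k=2):
    return (i + k - 2) // k             # ceil((i-1)/k), eqn (9.7)

def child(i, j, k=2):
    return k * (i - 1) + j + 1          # the j-th child, eqn (9.8)

def level(i, k=2):
    return floor(log((k - 1) * i, k))   # eqn (9.9)

# In the binary tree of Figure 9.2, node 5 has parent 2 and children 10, 11.
assert parent(5) == 2 and child(5, 1) == 10 and child(5, 2) == 11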

9.1.2 Heap Definition

[Figure drawings omitted:]
(a) a sample max-heap: 80, 20, 35, 17, 12, 1, 30, 15, 2, 10
(b) a sample min-heap: 1, 2, 30, 15, 10, 80, 35, 17, 20, 12

Figure 9.3: Heap examples

A max-heap is a complete tree where all nodes are greater than or equal to each of
its children nodes, and a min-heap is a complete tree where all nodes are less than or
equal to each of its children nodes, as exemplified in Figure 9.3 (a) and (b), respectively.
When k = 2, 3, and 4, the heaps are referred to as binary, ternary, and quaternary heaps,
accordingly. The default is conventionally a binary heap.
The root node in a max-heap is max(A1∼n ) and each node in a complete tree is the
maximum value of its sub-complete tree. The problem of checking whether a given array is
a max-heap can be defined as follows:

Problem 9.1. isMax-Heap?


Input: an array A1∼n of n quantifiable elements
Output: isMaxHeap(A1∼n) = yes if ai ≥ a2i ∧ ai ≥ a2i+1 for ∀i ∈ {1, · · · , ⌊n/2⌋}; no otherwise

As given in the definition, checking the inequalities at all internal nodes suffices to answer
Problem 9.1. A pseudo code is as follows:

Algorithm 9.1. Checking Max-Heap

isMaxHeap(A1∼n )
o = true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ ⌈n/2⌉ − 1 . . . . . . . . . . . . . . . . . . . . . . . 2
if ai < a2i ∨ ai < a2i+1 . . . . . . . . . . . . . . . . . . . . . . . . . .3
o = false and break the loop. . . . . . . . . . . . . . . 4
if n is even and an/2 < an . . . . . . . . . . . . . . . . . . . . . . 5
o = false . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
return o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

The computational time complexity of Algorithm 9.1 is O(n), since there are ⌊n/2⌋ internal
nodes in a complete binary tree.
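
A direct Python transcription of Algorithm 9.1 follows; since Python lists are 0-indexed, the children of index i become 2i + 1 and 2i + 2, and the single-child check of line 5 folds into the bounds test.

def is_max_heap(A):
    n = len(A)
    for i in range(n // 2):                  # every internal node
        left, right = 2 * i + 1, 2 * i + 2
        if A[i] < A[left]:
            return False
        if right < n and A[i] < A[right]:
            return False
    return True

assert is_max_heap([80, 20, 35, 17, 12, 1, 30, 15, 2, 10])   # Figure 9.3 (a)
assert not is_max_heap([21, 19, 7, 17, 6, 3, 1, 10, 9, 8])   # Figure 9.4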
A divide and conquer paradigm can be applied to check whether a given array is a
max-heap. There are left and right sub-heaps for any given node. These sub-heaps can be
assumed to be roughly halves. If these sub-heaps are checked, then to find the answer for
the larger heap, the parent node of two sub-heaps can be checked against the roots of two
sub-heaps. A pseudo code based on the recursive divide and conquer paradigm is stated as
follows:
Algorithm 9.2. Checking MaxHeap - recursive divide and conquer
initially call isMaxHeap(1) and A1∼n is declared globally.
isMaxHeap(i)
if i > n, return true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if 2i = n and ai < an , return false . . . . . . . . . . . . . . . . . . . . 2
if 2i = n and ai ≥ an , return true . . . . . . . . . . . . . . . . . . . . .3
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if ai < a2i ∨ ai < a2i+1 return false . . . . . . . . . . . . . . . . . 5
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return isMaxHeap(2i) ∧ isMaxHeap(2i + 1) . . . . . . . . . 7
The computational time complexity of Algorithm 9.2 is O(n) as T (n) = 2T (n/2) + O(1)
according to Master Theorem 3.9. The divide and conquer Algorithm 9.2 can be stated
using the iterative method by solving bottom-up. It is stated as follows:
Algorithm 9.3. Checking Max-Heap - bottom up divide and conquer
isMaxHeap(A1∼n )
if n is even and an/2 < an . . . . . . . . . . . . . . . . . . . . . . 1
return false . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = ⌈n/2⌉ − 1 down to 1 . . . . . . . . . . . . . . . . . . . . . 4
if ai < a2i ∨ ai < a2i+1 . . . . . . . . . . . . . . . . . . . . . . . 5
return false . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Figure 9.4 contains three invalid Max-heaps and shows the orders of checking for three
different algorithms. While Algorithm 9.1 checks in the same order as the array, Algo-
rithm 9.3 checks in the reverse order of the array. Algorithm 9.2 checks in the pre-order of
the depth-first traversal. Algorithms 9.1, 9.2, and 9.3 all take linear time in the worst case
and constant time in the best case.

[Figure drawings omitted: three invalid max-heaps, annotated with the order in which each
algorithm visits the nodes.]
(a) Order of Algorithm 9.1: same as the array order
(b) Order of Algorithm 9.2: pre-order DFT
(c) Order of Algorithm 9.3: reverse array order

Figure 9.4: Orders of checking max-heap algorithms

9.1.3 Insertion
One of two principal operations of a heap is to insert a new item into an appropriate
place in an existing heap. This insert operation can be defined as a computational problem:

Problem 9.2. Insert operation in a max-heap


Input: a max-heap, H1∼n where n < m, and a quantifiable element, x
Output: a new max-heap, H′1∼n+1 where x ∈ H′1∼n+1 and ∀y ∈ H1∼n, y ∈ H′1∼n+1

[Figure drawings omitted; the four panels trace the insertion of 25:]
(a) Max-heap: 80, 20, 35, 17, 12, 1, 30, 15, 2, 10
(b) append and compare (25, 12)
(c) swap (25, 12) and compare (25, 20)
(d) swap (25, 20) and compare (25, 80): 80, 25, 35, 17, 20, 1, 30, 15, 2, 10, 12

Figure 9.5: Insert operation in a Max-heap

As depicted in Figure 9.5, first insert x, the element to be inserted, at the end of the
array: hn+1 = x. Next, compare it with the value of its parent, and if (x = hn+1) > h⌊(n+1)/2⌋,
swap them. This compare-and-swap continues repeatedly toward the root, or until the
condition no longer holds. The resulting H1∼n+1 is a max-heap. This process of comparing and swapping
with its parent node is often called ‘sift up’ or ‘percolate up.’ An algorithm to insert an
element into a max-heap is as follows:

Algorithm 9.4. Insertion in a max-heap

Maxheapinsert(H1∼n , x)
if n ≥ m, report an error. (optional) . . . . . . . . . . . 0
p = n + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
hp = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
while p > 1 and hp > h⌊p/2⌋ . . . . . . . . . . . . . . . . . . . . 3
swap(hp , h⌊p/2⌋ ) . . . . . . . . . . . . . . . . . . . . . . . . . 4
p = ⌊p/2⌋ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
return H1∼n+1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

If the heap is a min-heap, changing line 3 in Algorithm 9.4 to ‘while p > 1 and hp < h⌊p/2⌋’
solves the problem of inserting an item into a min-heap. The computational time complexity
of Algorithm 9.4 is O(log n), as the number of swaps necessary may be up to the height of
the complete binary tree. The best case scenario is O(1), when the element to be inserted is
smaller than its parent node, h⌊(n+1)/2⌋.
Lines 3 ∼ 5 in Algorithm 9.4 can be a sub-procedure called ‘percolate up,’ and a recursive
version of the percolating-up procedure for a max-heap, which is assumed to be declared
globally, is stated as follows:

Subroutine 9.1. Percolate up in a max-heap


Percolate up max(p)
if p > 1 and hp > h⌊p/2⌋ . . . . . . . . . . . . . . . . . . . . . 1
swap(hp , h⌊p/2⌋ ) . . . . . . . . . . . . . . . . . . . . . . . . . 2
Percolate up max(⌊p/2⌋) . . . . . . . . . . . . . . . . . . . . . 3
if p > m, report error (optional) . . . . . . . . . . . . . . . 4
else, return . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
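
Algorithm 9.4 and its percolate-up loop can be sketched in Python as follows, using a growable 0-indexed list so the capacity check of line 0 is unnecessary and the parent of index p is (p − 1)//2; the trace reproduces Figure 9.5.

def max_heap_insert(H, x):
    H.append(x)                              # lines 1-2: place x at the end
    p = len(H) - 1
    while p > 0 and H[p] > H[(p - 1) // 2]:  # line 3
        H[p], H[(p - 1) // 2] = H[(p - 1) // 2], H[p]   # line 4: sift up
        p = (p - 1) // 2                     # line 5

H = [80, 20, 35, 17, 12, 1, 30, 15, 2, 10]
max_heap_insert(H, 25)                       # the insertion of Figure 9.5
assert H == [80, 25, 35, 17, 20, 1, 30, 15, 2, 10, 12]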

9.1.4 Delete Min/Max Operation


Another important operation of a max-heap is to delete the maximum item in a max-
heap or the minimum item in a min-heap. Note that the delete max operation is different
from the find max operation, where find max(H1∼n ) = h1 , which simply takes constant
time. The delete max operation problem can be defined as follows:

Problem 9.3. Delete max operation in a max-heap


Input: a max-heap, H1∼n
Output: x = max(H1∼n) = h1 and a max-heap, H′1∼n−1, where ∀y ∈ H2∼n, y ∈ H′1∼n−1
As depicted in Figure 9.6, the maximum element can be found in constant time, as the
root, h1 , contains the maximum value. By deleting the root node, the resulting tree is no
longer a heap data structure. Hence, to delete the maximum element from a max-heap,
output x = h1 and move the last element in the heap to the root, i.e., h1 = hn, reducing the
size of the heap to n − 1. This process may violate the max-heap property, i.e., the
root may not be the maximum of all children node values. To make it a max-heap, the sift-
down or percolate-down operation must be applied. Compare with two children and swap
the maximum value of its children with the value if necessary. In the worst case, swapping is

[Figure 9.6: Delete max operation in a Max-heap: (a) Max-heap; (b) move (12) and compare (12, 35, 20); (c) swap (12, 35) and compare (12, 30, 15); (d) swap (12, 30) and compare (12, 10, 2).]

necessary all the way down to the leaf level. An algorithm to delete the maximum element
from a max-heap can be stated as follows:

Algorithm 9.5. Delete-max in a max-heap

deletemax(H1∼n )
x = h1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
h1 = hn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
n = n − 1 .........................................3
percolate down max(1) . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Note that the global variable n is the current heap size. Line 3 in Algorithm 9.5 reduces
the size of the heap by one, as the maximum element gets deleted. Line 4 in Algorithm 9.5
invokes a sub-procedure called 'percolate down.' A recursive version of the percolating-down
procedure for a max-heap, which is assumed to be declared globally, is stated as follows:

Subroutine 9.2. Percolate down in a max-heap

percolate down max(p)
  if n is even and p = n/2, ............................ 1
    if hp < h2p, swap(hp, h2p) ......................... 2
  else if 0 < p ≤ ⌊n/2⌋ and hp < max(h2p, h2p+1), ...... 3
    p2 = argmax(h2p, h2p+1) ............................ 4
    swap(hp, hp2) ...................................... 5
    percolate down max(p2) ............................. 6
  return ............................................... 7
There are three cases to consider in the percolate down subroutine. First, if the node
has only one child, i.e., the position p is n/2 where n is even, then swap the node with
its only child as long as the child has a larger value. Lines 1 ∼ 2 take care of this single
child case. Next, all other internal nodes, whose positions are between one and ⌊n/2⌋, have
exactly two children. Hence, if the larger child's value exceeds the node's value, swap them
and recursively call percolate down with the respective child's index, as stated in lines 3 ∼ 6.
Last, if the node to be percolated down is a leaf node, there is nothing to do.
If the heap is a min-heap, reversing the comparisons in Subroutine 9.2, i.e., using min and
argmin in place of max and argmax, solves the problem of deleting the minimum item from
a min-heap. The computational time complexity of Algorithm 9.5 is O(log n), as the sifting
down, or the number of swaps necessary, may be up to the height of the complete binary
tree.
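The following is a minimal Python sketch of Algorithm 9.5 with an iterative percolate down (the names are illustrative; an iterative loop replaces the recursion of Subroutine 9.2, and the single-child case is absorbed into the bounds checks):

def percolate_down_max(h, p, n):
    """Sift h[p] down within h[0:n] until no child is larger."""
    while True:
        left, right, largest = 2 * p + 1, 2 * p + 2, p
        if left < n and h[left] > h[largest]:
            largest = left
        if right < n and h[right] > h[largest]:
            largest = right
        if largest == p:
            return
        h[p], h[largest] = h[largest], h[p]
        p = largest

def delete_max(h):
    """Algorithm 9.5: report the root, move the last element up, sift down."""
    x = h[0]
    h[0] = h[-1]                   # move the last element to the root
    h.pop()                        # the heap size shrinks by one
    percolate_down_max(h, 0, len(h))
    return x

heap = [80, 35, 20, 30, 15, 1, 17, 10, 2, 12]   # the max-heap of Figure 9.6
print(delete_max(heap), heap)      # 80 [35, 30, 20, 12, 15, 1, 17, 10, 2]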

9.1.5 Constructing a Heap

Consider the problem of constructing a heap given a list of n randomly arranged quantifiable
elements. This problem, often referred to as 'heapify,' can be formulated as follows:
Problem 9.4. Heap construction (Heapify)

Input: an array A1∼n of n quantifiable elements
Output: a permutation A′1∼n of A1∼n such that A′ is a min (or max)-heap.
A naı̈ve algorithm would invoke the insertion operation n number of times. Using the
inductive programming paradigm in Chapter 2, one can derive a first order linear recurrence
relation by assuming A1∼n−1 is already a min-heap.
Lemma 9.1. First order linear recurrence of heapify

heapify(A1∼n) =
  a1                                           if n = 1
  percolate up(heapify(A1∼n−1), an)            if n > 1

A pseudo code of the algorithm based on the inductive programming paradigm is stated
as follows:
Algorithm 9.6. Naı̈ve heapify
naı̈ve heapify(A1∼n )
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
percolate up(A1∼i−1 , ai ) . . . . . . . . . . . . . . . . . . . . . . . . 2
return A1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Algorithm 9.6 is illustrated in Figure 9.7 (a) on a toy example. Line 2 in Algorithm 9.6
can be replaced by the insert-into-max-heap Algorithm 9.4. Inserting the ith element takes
O(⌊log (i + 1)⌋); in the worst case, each newly inserted item may percolate up to the root,
i.e., travel the depth of the node in the tree. Hence, the worst case computational time
complexity of Algorithm 9.6 is Σ_{i=1}^{n} ⌊log (i + 1)⌋ = Θ(n log n) by Theorem 1.17 on
page 25. This complexity can also be realized by Master Theorem 3.9: T(n) = T(⌊n/2⌋) +
Θ(n log n) = Θ(n log n), as depicted in Figure 9.7 (b). Recall that the numbers of internal
nodes and leaf nodes are ⌊n/2⌋ and ⌈n/2⌉, as given in eqns (9.6) and (9.5), respectively.
Each leaf node has depth ⌊log n⌋. Hence, T(n) is composed of the sum of T(⌊n/2⌋) and
⌈n/2⌉⌊log n⌋. It belongs to Master Theorem 3.9 case 3 with ε = 1 and c = 1/2.

[Figure 9.7: O(n log n) time heap construction Algorithm 9.6 illustration and analysis: (a) constructing a min-heap on ⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩; (b) worst case time analysis of Algorithm 9.6, T(n) = T(⌊n/2⌋) + ⌈n/2⌉⌊log n⌋.]

A heap can be constructed more efficiently by the divide and conquer paradigm in
Chapter 3. Recall the delete max/min operation: when the last element is moved to the root
node, both the left and right sub-trees are already heaps, and only the root node needs to be
percolated down. This observation suggests a divide and conquer algorithm. Let T(ai) be
the binary tree whose root is ai; its left and right sub-trees are T(a2i) and T(a2i+1),
respectively. If both sub-trees are heaps, then all that remains is to percolate down the root
element to make T(ai) a heap. The following divide recurrence relation can be stated:

Lemma 9.2. Divide recurrence of heapify

heapify(T(ai)) =
  ai                                                          if ai is a leaf
  nil                                                         if ai = nil
  percolate down(heapify(T(a2i)), heapify(T(a2i+1)), ai)      otherwise

The computational time complexity of the divide and conquer algorithm in Lemma 9.2
is linear because T (n) = 2T (n/2) + O(log n), according to the Master Theorem 3.9. The
divide and conquer recurrence relation in Lemma 9.2 can be stated as an iterative algorithm,
instead of a recursive one, as follows:

[Figure 9.8: Linear time heap construction Algorithm 9.7 illustration and analysis: (a) constructing a min-heap on ⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩; (b) worst case time analysis of Algorithm 9.7.]

Algorithm 9.7. Iterative divide and conquer heapify


heapify(A1∼n )
for i = b n2 c down to 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
percolate-down(i) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
return A1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Instead of percolating up, Algorithm 9.7 utilizes the percolate down subroutine, starting
from the last non-leaf node, i.e., the ⌊n/2⌋th element. As illustrated in Figure 9.8 (a), the
percolate-down process at the ith element in Algorithm 9.7 guarantees that the sub-tree
rooted at the ith node is a min-heap. When this process is repeated up to the root node,
the entire resulting tree is a min-heap. The percolate-down process at the ith element may
run down the height of its sub-tree. Hence, the computational time complexity of
Algorithm 9.7 is Θ(n). Figure 9.8 (b) captures the general insight behind the complexity
analysis of Algorithm 9.7: T(n) = T(⌊n/2⌋) + Θ(n) = Θ(n) by Master Theorem 3.9; it
belongs to Master Theorem 3.9 case 3 with ε = 1 and c = 1/2.
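A minimal Python sketch of Algorithm 9.7 follows (the function names are illustrative, not from the text):

def percolate_down_min(a, p, n):
    """Sift a[p] down within a[0:n] until no child is smaller."""
    while True:
        left, right, smallest = 2 * p + 1, 2 * p + 2, p
        if left < n and a[left] < a[smallest]:
            smallest = left
        if right < n and a[right] < a[smallest]:
            smallest = right
        if smallest == p:
            return
        a[p], a[smallest] = a[smallest], a[p]
        p = smallest

def heapify_min(a):
    """Algorithm 9.7: percolate down from the last internal node to the root."""
    for i in range(len(a) // 2 - 1, -1, -1):
        percolate_down_min(a, i, len(a))
    return a

print(heapify_min([80, 20, 30, 15, 12, 1, 35, 17, 2, 10]))
# [1, 2, 30, 15, 10, 80, 35, 17, 20, 12]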

9.2 Problems on Lists


In this section, a few problems on lists, such as kth order statistics Problem 2.15, sorting
Problem 2.16, and alternating permutation (up-down) Problem 2.19, are considered where
the heap data structure provides great improvement over algorithms without it.

9.2.1 Heapselect

Here, various algorithms using heap data structures are presented to solve the order
statistics Problem 2.15 defined on page 59. To find the kth largest element in an array of
size n, the greedy Algorithm 4.1 on page 154 took Θ(kn), because the delete-max operation
took linear time and had to be performed k times. If the input array is converted into a
max-heap, where delete-max takes only O(log n), this new greedy algorithm with a heap
data structure is much faster. It is stated as follows:

Algorithm 9.8. Heapselect I (greedy algo + heap)

heapselectI(A1∼n , k)
H = heapify max(A1∼n ) . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 to k − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
O = delete max(H) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return h1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Figure 9.9 (a) illustrates Algorithm 9.8 using a toy example where n = 10 and k = 4.
First, constructing a max-heap takes linear time if the divide and conquer Algorithm 9.7
is used. Then, k − 1 delete max operations, each taking O(log n), are performed. Hence,
the computational time complexity of Algorithm 9.8 is O(n + k log n), which is O(n log n)
since k ≤ n.
To find the kth smallest element, a min-heap can be utilized in Algorithm 9.8 instead
of a max-heap. Another heap-based algorithm, however, utilizes a min-heap to find the
kth largest element and a max-heap to find the kth smallest element. Recall inductive
programming Algorithm 2.19 stated on page 60, which takes Θ(kn). It starts with the first
k elements as a solution set and finds min(A1∼k ) for the kth largest element in A1∼k . It
then inductively moves toward the nth element. If a min-heap is used instead of a simple
array to store k elements, it becomes much faster, as depicted in Figure 9.9 (b) and stated
as follows:

Algorithm 9.9. heapselect II (inductive programming + heap)

heapselectII(A1∼n , k)
H = heapify min(A1∼k ) . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = k + 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai > h1 , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
delete min(H) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
insert(ai , H) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
return h1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Figure 9.9 (b) illustrates Algorithm 9.9 using a toy example, where n = 10, k = 4, and
A = ⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩. First, the left-most k elements in A are heapified.
Starting from the (k + 1)th element of the input array, if the element ai is smaller than
the minimum in the current heap, no action is necessary, as the kth largest element in A1∼i
is the minimum of H1∼k. If ai is larger than the root of the heap, perform delete min on
the heap and insert ai into the heap. This step takes O(log k), as the size of the heap is k.
The best case time complexity is Θ(k) + Θ(n − k) = Θ(n). The worst case time complexity
is Θ(k) + Θ((n − k) log k) = O(k + n log k). Hence, the computational time complexity of
Algorithm 9.9 is O(k + n log k).

[Figure 9.9: Illustration of finding the kth largest element using heap data structures, with n = 10 and k = 4: (a) heapselect I Algorithm 9.8 (greedy + max-heap), which max-heapifies in O(n) and then performs k − 1 delete max operations of O(log n) each; (b) heapselect II Algorithm 9.9 (inductive programming + min-heap), which keeps the k largest elements seen so far in a separate min-heap; (c) heapselect III Algorithm 9.10 (inductive programming + min-heap), which keeps that min-heap in the first k cells of the array itself.]

Algorithm 9.9 utilizes a min-heap as an ADT and requires Θ(k) extra space to store the
heap. A slightly better version of Algorithm 9.9 does not require any extra space for the
heap: the first k cells of the array itself can be utilized as a heap. This space efficient
algorithm is illustrated in Figure 9.9 (c) and a pseudo code is stated as follows:

Algorithm 9.10. heapselect III (inductive programming + heap)

heapselectIII(A1∼n , k)
A1∼k = heapify min(A1∼k ) . . . . . . . . . . . . . . . . . . . . . . . 1
for i = k + 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
if ai > a1 , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
swap(a1 , ai ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
percolate down(a1 , A1∼k ) . . . . . . . . . . . . . . . . . . . . . 5
return a1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

The computational time complexity of Algorithm 9.10 is the same as that of
Algorithm 9.9, but it does not require any extra space.
Heapselect Algorithms 9.8 and 9.9 can be devised by algorithm designers who understand
the heap solely as a priority queue ADT and, thus, may be categorized as 007 version
algorithms. The heapselect Algorithm 9.10, however, can be devised only by algorithm
designers who fully understand the heap data structure and, thus, may be categorized as a
MacGyver version algorithm.
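As a concrete rendering of the MacGyver version, the following Python sketch of Algorithm 9.10 keeps the min-heap of the k largest elements seen so far inside the first k cells of the input array (the helper and function names are illustrative):

def sift_down_min(a, p, n):
    """Sift a[p] down inside the min-heap a[0:n]."""
    while 2 * p + 1 < n:
        c = 2 * p + 1
        if c + 1 < n and a[c + 1] < a[c]:
            c += 1                        # pick the smaller child
        if a[p] <= a[c]:
            return
        a[p], a[c] = a[c], a[p]
        p = c

def heapselect_kth_largest(a, k):
    """Algorithm 9.10: the root of the min-heap in a[0:k] is the kth largest."""
    for i in range(k // 2 - 1, -1, -1):   # min-heapify the first k cells in place
        sift_down_min(a, i, k)
    for i in range(k, len(a)):
        if a[i] > a[0]:                   # a[i] belongs among the k largest so far
            a[0], a[i] = a[i], a[0]
            sift_down_min(a, 0, k)
    return a[0]                           # min of the k largest = kth largest

print(heapselect_kth_largest([80, 20, 30, 15, 12, 1, 35, 17, 2, 10], 4))   # 20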

9.2.2 Heapsort

To sort an array of size n in non-descending order, the greedy selection sort Algorithm 4.2
on page 155 took Θ(n²), where the delete-min operation took linear time and had to be
performed n times. If the input array is converted into a min-heap, where delete-min takes
only O(log n), this new greedy algorithm with a heap data structure takes only O(n log n).
It is identical to Algorithm 9.8 with k = n, as illustrated in Figure 9.10 (a). This algorithm
is stated as follows:

Algorithm 9.11. heapsortI(A)

heapsortI(A1∼n)
  H1∼n = heapify min(A1∼n) ............................. 1
  for i = 1 ∼ n ........................................ 2
    oi = deletemin(H1∼n−i+1) ........................... 3
  return O1∼n .......................................... 4
The worst case time complexity of Algorithm 9.11 is Θ(n) + Σ_{i=1}^{n} Θ(log i) = Θ(n log n).
A min-heap is used to sort in ascending order and a max-heap in descending order; this
heapsort version was introduced in [180]. A drawback of Algorithm 9.11 is that it requires
Θ(n) extra space for the heap.

Robert W. Floyd (1936–2001) was an eminent computer scientist born in New
York City. He is well known for numerous algorithms: the Floyd–Warshall algorithm, Floyd's
cycle-finding algorithm, Floyd–Steinberg dithering, etc. A significant achievement of his
was pioneering the field of program verification, which later became Hoare logic.
(Photo credit: Chuck Painter / Stanford News Service, courtesy of Stanford University.)

[Figure 9.10: Heapsort I & II algorithm illustration: (a) heapsort I Algorithm 9.11, where each delete min moves an element from the heap to a separate output array O1∼n; (b) heapsort II Algorithm 9.12 with the (heap + sorted array) representation, where each deleted maximum is placed in the cell freed at the end of the shrinking heap.]



Floyd utilized a max-heap instead of a min-heap to sort in ascending order, and vice
versa, in [64], where he introduced a space efficient version of heapsort. As indicated in
Figure 9.10 (a), every time delete min is performed, it leaves an empty cell at the end of
the heap while the deleted item is inserted into a separate output list. A max-heap can be
utilized instead, so that each time delete max is performed, the deleted item is placed into
the cell just freed at the end of the heap. Hence, the left part of the array is the heap and
the remaining right part of the array is the partially sorted list. A pseudo code, illustrated
in Figure 9.10 (b), is stated as follows:
Algorithm 9.12. Heapsort II
heapsortII(A1∼n )
A1∼n = heapify max(A1∼n ) . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
an−i+1 = deletemax(A1∼n−i+1 ) . . . . . . . . . . . . . . . . . 3
return A1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
The computational time complexity of Algorithm 9.12 is the same as that of
Algorithm 9.11, Θ(n) + Σ_{i=1}^{n} Θ(log i) = Θ(n log n), but it does not require any extra
space. Hence, the heapsort Algorithm 9.11 can be categorized as a 007 version algorithm,
whereas the heapsort Algorithm 9.12 can be categorized as a MacGyver version algorithm.
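A minimal Python sketch of the MacGyver version, Algorithm 9.12, follows (the names are illustrative; the max-heap occupies a[0:end] and the sorted suffix grows from the right):

def sift_down_max(a, p, n):
    """Sift a[p] down inside the max-heap a[0:n]."""
    while 2 * p + 1 < n:
        c = 2 * p + 1
        if c + 1 < n and a[c + 1] > a[c]:
            c += 1                        # pick the larger child
        if a[p] >= a[c]:
            return
        a[p], a[c] = a[c], a[p]
        p = c

def heapsort(a):
    """Algorithm 9.12: in-place heapsort with a shrinking max-heap prefix."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # Theta(n) max-heapify
        sift_down_max(a, i, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]       # delete max: the root joins the sorted suffix
        sift_down_max(a, 0, end)
    return a

print(heapsort([80, 20, 30, 15, 12, 1, 35, 17, 2, 10]))
# [1, 2, 10, 12, 15, 17, 20, 30, 35, 80]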

9.2.3 Alternating Permutation Problem

Consider the up-down Problem 2.19 defined on page 65. Numerous algorithms have been
presented in previous chapters with different paradigms. Here, several different algorithms
that utilize heap data structures are presented.
The first algorithm extends the greedy Algorithm 4.3 described on page 156, which
alternately selects minimum and maximum elements. Selecting a minimum is hard for a
max-heap but easy for a min-heap, and vice versa for selecting a maximum. When a min-heap
is utilized, this algorithm deletes the minimum on every odd index turn. Selecting a
maximum in a min-heap, however, would take linear time. Fortunately, in the alternating
permutation problem, it does not have to be the maximum: any value larger than the
minimum selected in the previous turn is acceptable. Consider the last element in a min-heap;
this value is certainly greater than the minimum element selected in the previous turn. As
illustrated in Figure 9.11, the following pseudo code alternately selects and removes the
minimum and the last element of a min-heap:
Algorithm 9.13. Greedy up-down with a min-heap I

UDPminheapgreedy()
  A1∼n = heapify min(A1∼n) ............................. 1
  m = n ................................................ 2
  for i = 1 ∼ n ........................................ 3
    if i is odd, oi = delete min(A1∼m) ................. 4
    else, oi = am ...................................... 5
    m = m − 1 .......................................... 6
  return O1∼n .......................................... 7

[Figure 9.11: Greedy min-heap up-down Algorithm 9.13 illustration on ⟨3, 1, 2, 4, 0, 7, 6, 5⟩: delete min (O(log n)) and delete last (O(1)) alternate, producing the up-down sequence ⟨0, 6, 1, 5, 2, 7, 3, 4⟩.]

While deleting the minimum in a min-heap takes O(log n), deleting the last element
in a heap takes constant time. The computational time complexity of Algorithm 9.13 is
O(n log n). It requires extra space to store the output, and, thus, the computational space
complexity of Algorithm 9.13 is Θ(n).
As in the heapselect and heapsort cases, no extra space is necessary to store the output
or the heap. Consider the following pseudo code, which alternately selects the last element
and the maximum element of a max-heap, as illustrated in Figure 9.12. The left part of the
array is the heap and the right part of the array is the output. No operation is necessary
to select and delete the last element, as it is automatically in place. Only the parity of n
requires careful attention: when n is even, as in Figure 9.12 (a), the last element must take
the 'up' turn; when n is odd, as in Figure 9.12 (b), the last element must take the 'down'
turn.
Algorithm 9.14. up-down with Max-heap II

UDPmaxheapI()
  A1∼n = heapify max(A1∼n) ............................. 1
  c = 2⌊n/2⌋ ........................................... 2
  while c > 0 .......................................... 3
    t = delete max(A1∼c) ............................... 4
    ac = t ............................................. 5
    c = c − 2 .......................................... 6
  return A1∼n .......................................... 7
Line 2 of Algorithm 9.14 takes care of the parity issue. Algorithm 9.14 is slightly faster
than Algorithm 9.13 and requires no extra space. Its computational time complexity is still
O(n log n). A min-heap can be utilized in a similar way instead of a max-heap; this is left
as an exercise.
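A minimal Python sketch of Algorithm 9.14 follows (the names are illustrative; sift_down_max is the same helper as in the heapsort sketch above, repeated so the fragment stands alone):

def sift_down_max(a, p, n):
    """Sift a[p] down inside the max-heap a[0:n]."""
    while 2 * p + 1 < n:
        c = 2 * p + 1
        if c + 1 < n and a[c + 1] > a[c]:
            c += 1
        if a[p] >= a[c]:
            return
        a[p], a[c] = a[c], a[p]
        p = c

def up_down_max_heap(a):
    """Algorithm 9.14: alternating permutation via repeated delete max."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        sift_down_max(a, i, n)
    c = 2 * (n // 2)                      # the largest even prefix length
    while c > 0:
        t = a[0]                          # delete max from a[0:c] ...
        a[0] = a[c - 1]
        sift_down_max(a, 0, c - 1)
        a[c - 1] = t                      # ... and park the max on an 'up' position
        c -= 2                            # the element left at c-2 stays put: the automatic 'down' pick
    return a

print(up_down_max_heap([4, 3, 1, 8, 5, 6, 7, 2]))   # [3, 5, 2, 6, 4, 7, 1, 8]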

[Figure 9.12: Heap up-down II Algorithm 9.14 illustration: (a) the even case, n = 8, on the max-heap ⟨8, 5, 7, 3, 4, 6, 1, 2⟩, producing ⟨3, 5, 2, 6, 4, 7, 1, 8⟩; (b) the odd case, n = 7, on the max-heap ⟨8, 5, 7, 3, 4, 6, 1⟩, producing ⟨5, 6, 3, 7, 4, 8, 1⟩.]

A principal observation is that taking the in-order traversal of a max-heap with an odd
number of nodes naturally yields an up-down sequence, as illustrated in Figure 9.13 (a).
This works perfectly when the heap is a full binary tree, i.e., when the number of nodes
is odd. A caveat arises when the number of nodes is even. One can exclude the last node,
which is the last element of the array, and generate an up-down sequence by taking the
in-order traversal of the partial heap H1∼n−1. Finally, the last element is handled separately:
if an > on−1, there is nothing else to do, and if an < on−1, swap these two values. For
example, in Figure 9.13 (b), the final output is ⟨5, 6, 2, 7, 4, 10, 1, 9, 3, 8⟩, obtained by
swapping 3 and 8. A pseudo code is given as follows:

Algorithm 9.15. up-down with Max-heap III (inorder traversal)

Let A1∼n and O1∼n be global.

UDPmaxheapII()
  A1∼n = heapify max(A1∼n) ............................. 1
  if n is odd .......................................... 2
    O1∼n = heapinordertrav(1, 1, n) .................... 3
  else ................................................. 4
    O1∼n−1 = heapinordertrav(1, 1, n − 1) .............. 5
    if on−1 < an ....................................... 6
      on = an .......................................... 7
    else ............................................... 8
      on = on−1 ........................................ 9
      on−1 = an ........................................ 10
  return O1∼n .......................................... 11

heapinordertrav(c, i, e)
  if 2i ≤ e, heapinordertrav(c, 2i, e) ................. 1
  oc = ai .............................................. 2
  c = c + 1 ............................................ 3
  if 2i + 1 ≤ e, heapinordertrav(c, 2i + 1, e) ......... 4
  return ............................................... 5

[Figure 9.13: Heap inorder traversal Algorithm 9.15 illustration: (a) the odd n case, producing ⟨6, 7, 2, 8, 4, 5, 3, 11, 1, 10, 9⟩; (b) the even n case, producing ⟨5, 6, 2, 7, 4, 10, 1, 9, 8⟩ with the excluded last element 3 still to be placed.]

Algorithm 9.15 takes Θ(n), as its primary operation is the in-order traversal, which
clearly takes linear time. Algorithm 9.15 exploits the heap property rather than heap
operations; it does not invoke any method of a heap data structure.
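A minimal Python sketch of the traversal at the heart of Algorithm 9.15, assuming h is already arranged as a max-heap (the function names are illustrative):

def heap_inorder_updown(h):
    """Algorithm 9.15: up-down sequence by in-order traversal of a max-heap h."""
    n, out = len(h), []
    def inorder(i, e):                    # in-order walk of the implicit tree h[0:e]
        if 2 * i + 1 < e:
            inorder(2 * i + 1, e)
        out.append(h[i])
        if 2 * i + 2 < e:
            inorder(2 * i + 2, e)
    if n % 2 == 1:                        # full binary tree: the traversal suffices
        inorder(0, n)
    else:
        inorder(0, n - 1)                 # exclude the last node, then place it
        if h[n - 1] > out[-1]:
            out.append(h[n - 1])
        else:
            out.append(out[-1])           # swap the last two, as in lines 8 ~ 10
            out[-2] = h[n - 1]
    return out

print(heap_inorder_updown([11, 8, 10, 7, 5, 1, 9, 6, 2, 4, 3]))
# [6, 7, 2, 8, 4, 5, 3, 11, 1, 10, 9], as in Figure 9.13 (a)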
An up-down sequence can also be generated by traversing a min-heap instead of a max-heap;
this is left as an exercise. In all, Figure 9.14 shows six possible up-down sequences produced
by algorithms that utilize a heap data structure.

9.3 Greedy Algorithms with Heap


Most greedy algorithms in Chapter 4 can be restated using a heap data structure. Instead
of sorting the candidates by a certain greedy choice in O(n log n) time, one can simply build
a heap in linear time. The computational time complexity does not always improve, but the
heap data structure often allows for practically faster greedy algorithms than sorting-based
greedy algorithms.

9.3.1 Fractional Knapsack Problem


Consider the greedy Algorithm 4.8 on page 165 for the Fractional knapsack Problem 4.5
defined on page 165. A max-heap data structure may be utilized to design a greedy algorithm
instead of sorting items by their unit values.

[Figure 9.14: Various up-down sequences with heap data structures: (a) input ⟨3, 1, 2, 4, 0, 7, 6, 5⟩; (b) min-heap ⟨0, 1, 2, 4, 3, 7, 6, 5⟩; (c) max-heap ⟨7, 5, 6, 4, 0, 2, 3, 1⟩; (d) min-heap I algo 9.13 ⟨3, 6, 2, 4, 1, 7, 0, 5⟩; (e) min-heap II Q. 9.13 g) ⟨3, 6, 2, 4, 1, 7, 0, 5⟩; (f) min-heap III Q. 9.13 j) ⟨4, 5, 1, 3, 0, 7, 2, 6⟩; (g) max-heap I Q. 9.13 d) ⟨2, 4, 3, 5, 0, 6, 1, 7⟩; (h) max-heap II algo 9.14 ⟨2, 4, 3, 5, 0, 6, 1, 7⟩; (i) max-heap III algo 9.15 ⟨4, 5, 0, 7, 2, 6, 1, 3⟩.]

Figure 9.15 illustrates this greedy algorithm with a heap using the same toy example as
Figure 4.11 on page 165. A pseudo code is stated as follows:

Algorithm 9.16. Greedy Fractional knapsack with a max-heap

frac-knapsack-maxheap(A, m)
  for i = 1 to n ....................................... 1
    ui = pi/wi ......................................... 2
  A = max-heapify(A) by U .............................. 3
  i = n ................................................ 4
  while m > 0 and i > 0 ................................ 5
    ai = delete max(A1∼i) .............................. 6
    m = m − wi ......................................... 7
    i = i − 1 .......................................... 8
  T = 0 ................................................ 9
  for j = n down to i + 2 .............................. 10
    T = T + pj ......................................... 11
  T = T + pi+1 × (m + wi+1)/wi+1 ....................... 12
  return T ............................................. 13

As illustrated in Figure 9.15, items are first max-heapified by unit value, and then the
delete max operations are invoked until the capacity is exceeded. Each deleted item is
placed in the remaining array portion, which serves as the solution set. When the loop of
delete max operations halts, the last deleted item may not fit fully, and, thus, line 12
computes the fraction of it to be added to the solution. Lines 9 ∼ 12 in Algorithm 9.16
compute the maximum profit while not exceeding the capacity m.

[Figure 9.15: Algorithm 9.16 illustration on a toy example with unit values ⟨0.9, 1.2, 3.6, 1.6, 0.8, 1.5, 3.4⟩ and capacity 500ml: after max-heapifying by unit value, successive delete max operations reduce the remaining capacity from 500ml to 450ml, 330ml, 250ml, 150ml, and finally −30ml, at which point the loop halts.]

Algorithm 9.16, which utilizes a max-heap, takes O(n log n), as all items might be
examined in the worst case. The best case time complexity is Θ(n), as the heap data
structure must still be built. Since not all elements need to be examined in every case,
Algorithm 9.16 is practically better than the plain greedy Algorithm 4.8, which requires
sorting the entire list.
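The same lazy strategy is easy to express with Python's built-in heapq module, which provides a binary min-heap; negating the unit values simulates a max-heap. This is a sketch on a hypothetical instance, with the fractional accounting folded into the loop rather than done afterward as in lines 9 ∼ 12:

import heapq

def frac_knapsack_maxheap(items, m):
    """Greedy fractional knapsack: items are (profit, weight) pairs, m the capacity."""
    heap = [(-p / w, p, w) for p, w in items]
    heapq.heapify(heap)                   # Theta(n): no full sort is needed
    total = 0.0
    while m > 0 and heap:
        _, p, w = heapq.heappop(heap)     # delete max by unit value
        if w <= m:
            total += p                    # the whole item fits
            m -= w
        else:
            total += p * m / w            # only a fraction fits
            m = 0
    return total

print(frac_knapsack_maxheap([(60, 10), (100, 20), (120, 30)], 50))   # 240.0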

9.3.2 Activity Selection Problem


Consider the greedy Algorithm 4.10 stated on page 170 for the activity selection Prob-
lem 4.8 defined on page 170. Algorithm 4.10 first sorted the input by their finish time
and then greedily selected non-overlapping activities. Here is another version of greedy
Algorithm 4.10, which does not require sorting but utilizes the min-heap data structure.

activity A   1  2  3  4  5  6  7
start S      3  5  2  4  1  6  7
finish F     4  9  5  6  2  7  15

(a) Sample activities with their starting and finishing times

[(b) Algorithm 9.17 illustration on the toy example given in (a): the activities are min-heapified by finish time, and successive delete min operations grow the solution set while the current finishing time cf advances from 0 to 2, 4, 6, 7, and 15, yielding the solution set {5(1,2), 1(3,4), 4(4,6), 6(6,7), 7(7,15)}.]

Figure 9.16: Algorithm 9.17 illustration for Activity Selection Problem 4.8

Figure 9.16 (b) illustrates a greedy algorithm using a min-heap on a toy example given in

Figure 9.16 (a). In Figure 9.16 (b), the left side of the array is utilized to store the heap and
the right side of the array is utilized to store the solution set. The activity with the minimum
finishing time is compared to the current finishing time, cf. If its starting time does not
conflict with cf, it is deleted from the heap and included in the solution set. Otherwise, it is
simply deleted from the heap of candidates and not included in the solution set. A pseudo
code is stated as follows:
Algorithm 9.17. Activity-selection with a min-heap

Heap-activity-selection(A)
  H = heapify(A, min) by F ............................. 1
  c = n ................................................ 2
  hc = delmin(H1∼n) .................................... 3
  cf = hc.f ............................................ 4
  for i = 2 to n ....................................... 5
    a = delmin(H1∼n−i+1) ............................... 6
    if a.s ≥ cf ........................................ 7
      c = c − 1 ........................................ 8
      hc = a ........................................... 9
      cf = a.f ......................................... 10
  return Hc∼n of size n − c + 1 ........................ 11

The computational time complexity of Algorithm 9.17 is Θ(n) + O(n log n) = O(n log n):
the heapify takes linear time, and the delete min operation is called n times. It should be
noted that a max-heap instead of a min-heap can also be used to solve this problem; this is
left as an exercise.
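The following Python sketch of Algorithm 9.17 uses the built-in heapq module instead of an explicit array split (the names and the (start, finish) tuple convention are illustrative):

import heapq

def heap_activity_selection(activities):
    """Greedy activity selection keyed on the earliest finishing time."""
    heap = [(f, s) for s, f in activities]   # heapq orders by the first field
    heapq.heapify(heap)                      # Theta(n), instead of a full sort
    selected = []
    cf = float('-inf')                       # finishing time of the last selection
    while heap:
        f, s = heapq.heappop(heap)           # candidate with the minimum finish time
        if s >= cf:                          # no conflict: include it
            selected.append((s, f))
            cf = f
    return selected

acts = [(3, 4), (5, 9), (2, 5), (4, 6), (1, 2), (6, 7), (7, 15)]   # Figure 9.16 (a)
print(heap_activity_selection(acts))
# [(1, 2), (3, 4), (4, 6), (6, 7), (7, 15)]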

9.3.3 Huffman Code with Heaps


Recall the minimum length binary code Problem 4.17, known as the Huffman code,
presented earlier on page 195. The greedy Huffman code Algorithm 4.23 selects the two
least frequent items from the candidate set and inserts the sum of their frequencies back
into the candidate set. Here, a couple of greedy algorithms using a min-heap data structure
are presented. The first one is the 007 version, which utilizes a built-in heap data structure
provided by certain programming languages. The pseudo code, illustrated in Figure 9.17, is
stated as follows:
Algorithm 9.18. Greedy Huffman code with a heap data structure

huffman(A1∼n)
  declare T2n−1×2 and H1∼n ............................. 1
  H1∼n = min heapify(A1∼n) by ai.f ..................... 2
  for i = 1 ∼ n, T[i][1] = ai.f ........................ 3
  for i = n + 1 ∼ 2n − 1 ............................... 4
    x.f = delete min(H) ................................ 5
    T[x.s][2] = i ...................................... 6
    y.f = delete min(H) ................................ 7
    T[y.s][2] = i ...................................... 8
    T[i][1] = x.f + y.f ................................ 9
    x.f = x.f + y.f .................................... 10
    x.s = i ............................................ 11
    insert(H, x) ....................................... 12
  return T ............................................. 13

[Figure 9.17: Huffman coding with a min-heap on input frequencies ⟨5, 8, 18, 2, 13, 3, 15⟩: in each round, two delete min operations remove the two smallest frequencies and the insert operation returns their sum to the heap, producing the internal sums 5, 10, 18, 28, 36, and finally 64.]

The second one is a MacGyver version. It utilizes both a min-heap and a queue embedded
in the output tree representation, requiring no extra space. The pseudo code, illustrated in
Figure 9.18, is stated as follows:
Algorithm 9.19. Greedy Huffman code with heap and queue data structures

huffman(A1∼n)
  declare T2n−1×2 ...................................... 1
  for i = 1 ∼ n, T[i][1] = ai.f ........................ 2
  T1∼n = min heapify(T1∼n) by ai.f ..................... 3
  sh = n ............................................... 4
  qf = n + 1 ........................................... 5
  for qe = n + 1 ∼ 2n − 1 .............................. 6
    if qf = qe or T[1][1] < T[qf][2], .................. 7
      T[sh][2] = delete min(T1∼sh) ..................... 8
      x.f = T[sh][2] ................................... 9
      T[sh][1] = qe .................................... 10
      sh = sh − 1 ...................................... 11
    else, .............................................. 12
      x.f = T[qf][2] ................................... 13
      T[qf][1] = qe .................................... 14
      qf = qf + 1 ...................................... 15
    if qf = qe or T[1][1] < T[qf][2], .................. 16
      T[sh][2] = delete min(T1∼sh) ..................... 17
      x.f = x.f + T[sh][2] ............................. 18
      T[sh][1] = qe .................................... 19
      sh = sh − 1 ...................................... 20
    else, .............................................. 21
      x.f = x.f + T[qf][2] ............................. 22
      T[qf][1] = qe .................................... 23
      qf = qf + 1 ...................................... 24
    T[qe][2] = x.f ..................................... 25
  return T ............................................. 26

Rows 1 ∼ sh form the min-heap, and qf and qe are the front and end of the queue. The n
elements are heapified first. The minimum element comes either from the min-heap or from
the queue; since the queue is always sorted, the delete min from the queue is simply a
dequeue. The computational time complexity of both Algorithms 9.18 and 9.19 is O(n log n).
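As a compact 007-style rendering, the following Python sketch uses heapq to perform the same sequence of delete min and insert operations; it tracks only the total weighted code length (the sum of all internal node frequencies) rather than the full tree table T:

import heapq

def huffman_total_length(freqs):
    """Total weighted Huffman code length via a binary min-heap."""
    heap = list(freqs)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        x = heapq.heappop(heap)           # two least frequent subtrees
        y = heapq.heappop(heap)
        total += x + y                    # every merge deepens both subtrees by one
        heapq.heappush(heap, x + y)       # the merged node rejoins the candidates
    return total

print(huffman_total_length([5, 8, 18, 2, 13, 3, 15]))   # 161 = 5+10+18+28+36+64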

9.4 Min-max Heap


One of the problems with a binary min-heap is that the delete max operation takes linear
time, as all leaf nodes must be checked; the delete min operation in a binary max-heap is
equally expensive. A min-max heap is a double ended priority queue that supports both
delete min and delete max in O(log n) time [13]. As exemplified in Figure 9.19 (a), the
principal properties of a min-max heap are that every node on an even level is the minimum
of all of its descendants, and every node on an odd level is the maximum of all of its
descendants; the minimum and maximum properties alternate by level. A min-max heap,
H1∼n, can be stored in an array as a complete tree. Then, both find min and find max
operations take constant time by eqns (9.12) and (9.13), respectively.

min(H1∼n ) = h1 (9.12)
max(H1∼n ) = max(h2 , h3 ) (9.13)

A different visualization of a min-max heap as a Hasse diagram, shown in Figure 9.19
(b), may provide better insight, where an arc means the 'greater than or equal to' (≥)
relationship between two nodes. Levels go down by two on even levels and then back up by
two on odd levels. The top half of the Hasse diagram looks like a quaternary (k = 4)
min-heap and the bottom part contains two quaternary max-heaps. Note that the leaf level
does not necessarily follow the quaternary branching rule.

9.4.1 Checking Min-max Heap


Perhaps the best way to understand the min-max heap data structure is to define the
checking problem and design a checking algorithm.

Problem 9.5. isMinMaxHeap?

Input: an array A1∼n of n quantifiable elements
Output: yes if, for every i, ai = min(∀aj ∈ T(ai)) when ⌊log i⌋ is even and
        ai = max(∀aj ∈ T(ai)) when ⌊log i⌋ is odd; no otherwise


[Figure 9.18: Huffman coding with a sorted list: the table T doubles as a min-heap (rows 1 ∼ sh) and a queue of merged frequencies (rows qf ∼ qe); each round takes its two minima from whichever side offers the smaller frequency.]



[Figure 9.19: A min-max heap sample: (a) a min-max heap as a complete binary tree, stored as ⟨3, 52, 55, 7, 27, 4, 35, 14, 24, 47, 38, 9, 10, 49, 51, 13, 11, 21, 9, 29, 44, 32, 35, 6, 8, 5, 7, 41, 44, 40, 47⟩; (b) a Hasse diagram of the same min-max heap.]

Recall that T(ai) is the sub-tree whose root node is ai. To check whether an array
is a min-max heap, every node is compared with its up to four grandchildren as well as its
two children. A pseudo code is stated as follows:

Algorithm 9.20. Checking MinMaxHeap

isMinMaxHeap(A1∼n )
for i = 1 ∼ b n2 c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if blog ic is even, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if ai > a2i , return false. . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if 2i + 1 ≤ n and ai > a2i+1 , return false. . . . . . . . . . 4
if 4i ≤ n and ai > a4i , return false. . . . . . . . . . . . . . 5
if 4i + 1 ≤ n and ai > a4i+1 , return false. . . . . . . . . . 6
if 4i + 2 ≤ n and ai > a4i+2 , return false. . . . . . . . . . 7
if 4i + 3 ≤ n and ai > a4i+3 , return false. . . . . . . . . . 8
else (if blog ic is odd), . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
if ai < a2i , return false. . . . . . . . . . . . . . . . . . . . . . . . . . .10
if 2i + 1 ≤ n and ai < a2i+1 , return false. . . . . . . . . 11
if 4i ≤ n and ai < a4i , return false. . . . . . . . . . . . . 12
if 4i + 1 ≤ n and ai < a4i+1 , return false. . . . . . . . . 13
if 4i + 2 ≤ n and ai < a4i+2 , return false. . . . . . . . . 14
if 4i + 3 ≤ n and ai < a4i+3 , return false. . . . . . . . . 15
return true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

The computational time complexity of Algorithm 9.20 is O(n). Algorithms based on the
divide and conquer paradigm are left for exercises.
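A minimal Python sketch of Algorithm 9.20 follows (0-indexed, so the node at index i sits on level (i+1).bit_length() − 1, its children at 2i+1 and 2i+2, and its grandchildren at 4i+3 ∼ 4i+6; the names are illustrative):

def is_min_max_heap(a):
    """Check every internal node against its children and grandchildren."""
    n = len(a)
    for i in range(n // 2):               # internal nodes only
        on_min_level = ((i + 1).bit_length() - 1) % 2 == 0
        for c in (2*i + 1, 2*i + 2, 4*i + 3, 4*i + 4, 4*i + 5, 4*i + 6):
            if c >= n:
                continue
            if on_min_level and a[i] > a[c]:
                return False              # a min-level node may not exceed a descendant
            if not on_min_level and a[i] < a[c]:
                return False              # a max-level node may not be below one
    return True

print(is_min_max_heap([1, 9, 8, 2, 3, 4, 5]))    # True
print(is_min_max_heap([1, 9, 8, 2, 3, 4, 10]))   # False: 10 exceeds max-level node 8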

9.4.2 Insert

[Figure 9.20: Illustration of insert operations in a min-max heap: (a) a sample min-max heap; (b) insert 58 into the min-max heap in (a); (c) insert 3 into the min-max heap in (b).]

The insert operation in a min-max heap can be defined in a similar way as the insert
operation in a max-heap, Problem 9.2.
Problem 9.6. Insert operation in a min-max heap

Input: a min-max heap, H1∼n where n < m, and a quantifiable element, x
Output: a min-max heap, H′1∼n+1 where x ∈ H′1∼n+1 and ∀y ∈ H1∼n, y ∈ H′1∼n+1
Designing algorithms for the major operations of a min-max heap, such as insert,
delete min, and delete max, is similar to, but quite trickier than, doing so for a plain
min-heap or max-heap.

An algorithm for the insertion operation in a min-max heap is similar to that for a min-
or max-heap. The element to be inserted is first appended at the end of the array. Then,
instead of simply percolating up by comparing with its parent node, each node is compared
with its grandparent node. The direction, a max-side or min-side percolate up as depicted
in Figure 9.20 (b) and (c), respectively, must be decided first; doing so requires comparing
the inserted item with its parent node. A pseudo code is given as follows, where the min-max
heap H1∼n and the maximum heap size m are declared globally.
Algorithm 9.21. Insertion in a min-max heap

Minmaxheapinsert(x)
  if n ≥ m, report an error (optional) ................. 0
  hn+1 = x ............................................. 1
  Gpercolate up(n + 1) ................................. 2
Subroutine 9.3. Grand Percolate up

Gpercolate up(p)
  if p > 1 and ⌊log p⌋ is even, ........................ 1
    if hp > h⌊p/2⌋, .................................... 2
      swap(hp, h⌊p/2⌋) ................................. 3
      p = ⌊p/2⌋ ........................................ 4
      while p > 1 and hp > h⌊p/4⌋ ...................... 5
        swap(hp, h⌊p/4⌋) ............................... 6
        p = ⌊p/4⌋ ...................................... 7
    else, .............................................. 8
      while p > 1 and hp < h⌊p/4⌋ ...................... 9
        swap(hp, h⌊p/4⌋) ............................... 10
        p = ⌊p/4⌋ ...................................... 11
  else if ⌊log p⌋ is odd, .............................. 12
    if hp < h⌊p/2⌋, .................................... 13
      swap(hp, h⌊p/2⌋) ................................. 14
      p = ⌊p/2⌋ ........................................ 15
      while p > 1 and hp < h⌊p/4⌋ ...................... 16
        swap(hp, h⌊p/4⌋) ............................... 17
        p = ⌊p/4⌋ ...................................... 18
    else, .............................................. 19
      while p > 1 and hp > h⌊p/4⌋ ...................... 20
        swap(hp, h⌊p/4⌋) ............................... 21
        p = ⌊p/4⌋ ...................................... 22
Lines 1 ∼ 11 handle inserting an element on an even level and lines 12 ∼ 22 on an odd
level; even and odd levels contain minimum and maximum values, respectively. Lines 2 and
13 decide which side, min or max, the percolate up must take, as depicted in Figure 9.20;
it can percolate up in only one direction. The computational time complexity of
Algorithm 9.21 is clearly O(log n).
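A minimal Python sketch of Algorithm 9.21 and Subroutine 9.3 follows, 0-indexed so the parent of i is (i − 1)//2 and the grandparent ((i − 1)//2 − 1)//2 (the names are illustrative):

def _grandparent(i):
    return ((i - 1) // 2 - 1) // 2

def _bubble_up_min(h, p):                 # grand percolate up on the min side
    while p >= 3 and h[p] < h[_grandparent(p)]:
        g = _grandparent(p)
        h[p], h[g] = h[g], h[p]
        p = g

def _bubble_up_max(h, p):                 # grand percolate up on the max side
    while p >= 3 and h[p] > h[_grandparent(p)]:
        g = _grandparent(p)
        h[p], h[g] = h[g], h[p]
        p = g

def minmax_insert(h, x):
    """Append x, decide the side by one parent comparison, then bubble up."""
    h.append(x)
    p = len(h) - 1
    if p == 0:
        return
    parent = (p - 1) // 2
    if ((p + 1).bit_length() - 1) % 2 == 0:   # the new node landed on a min level
        if h[p] > h[parent]:                  # it belongs on the max side
            h[p], h[parent] = h[parent], h[p]
            _bubble_up_max(h, parent)
        else:
            _bubble_up_min(h, p)
    else:                                     # it landed on a max level
        if h[p] < h[parent]:                  # it belongs on the min side
            h[p], h[parent] = h[parent], h[p]
            _bubble_up_min(h, parent)
        else:
            _bubble_up_max(h, p)

h = [1, 9, 8, 2, 3]
minmax_insert(h, 10)
print(h)   # [1, 9, 10, 2, 3, 8]: 10 displaces 8 on the max level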

9.4.3 Delete min and Delete max


A min-max heap supports both efficient delete min and delete max operations. They
are defined in a similar way as before.

[Figure 9.21: A min-max heap sample: (a) delete min on the min-max heap in Figure 9.19 (b); (b) delete max on the min-max heap in Figure 9.21 (a).]

Problem 9.7. Delete min operation in a min-max heap

Input: a min-max heap, H1∼n
Output: x = min(H1∼n) = h1 and a min-max heap, H′1∼n−1
        where {y | y ∈ H′1∼n−1} = {z | z ∈ H1∼n} − {x}

Problem 9.8. Delete max operation in a min-max heap

Input: a min-max heap, H1∼n
Output: x = max(H1∼n) = max(h2, h3) and a min-max heap, H′1∼n−1
        where {y | y ∈ H′1∼n−1} = {z | z ∈ H1∼n} − {x}

An algorithm for the delete min operation on a min-max heap is very similar to
Algorithm 9.5 for the delete max operation on a max-heap. First, the last element, x, in the
heap is moved to the root node as the root is removed. Instead of percolating down by
comparing with its children, the element x is percolated down by comparing with its
grandchildren, as illustrated in Figure 9.21 (a). When the leaf level is reached, the element x
is bounced back up by comparing with its grandparent node on odd levels as long as it
violates the min-max property. Let's call this process a grand percolate down, as stated in
Subroutine 9.4, Gpercolate down. Even if it goes down and back up, all nodes examined lie
on the path from the node to the respective leaf node.
An algorithm for the delete max operation is quite similar. Instead of the root node,
the second or third node contains the maximum value; the last element is moved to either
the second or third cell of the array and then percolated down. Pseudo codes for the
delete min and delete max operations in a min-max heap are stated as follows:

Algorithm 9.22. delete min

deletemin(H1∼n)
  x = h1 ............................................... 1
  h1 = hn .............................................. 2
  n = n − 1 ............................................ 3
  Gpercolate down(1) ................................... 4
  return x ............................................. 5

Algorithm 9.23. delete max

deletemax(H1∼n)
  if h2 > h3, x = h2 and p = 2 ......................... 1
  else, x = h3 and p = 3 ............................... 2
  hp = hn .............................................. 3
  n = n − 1 ............................................ 4
  Gpercolate down(p) ................................... 5
  return x ............................................. 6

Subroutine 9.4. Grand Percolate down

Gpercolate down(p)
  if ⌊log p⌋ is even ∧ ⌊log p⌋ < ⌊log n⌋ − 1 ∧ (q = argmin5(p)) ≠ p, ......... 1
    swap(hp, hq) ............................................................. 2
    Gpercolate down(q) ....................................................... 3
    if (r = argmax3g(q)) ≠ q, swap(hq, hr) ................................... 4
  else if ⌊log p⌋ is even ∧ ⌊log p⌋ = ⌊log n⌋ − 1 ∧ (q = argmin3(p)) ≠ p, .... 5
    swap(hp, hq) ............................................................. 6
  else if ⌊log p⌋ is even ∧ 2p > n ∧ hp > h⌊p/2⌋, ............................ 7
    swap(hp, h⌊p/2⌋) ......................................................... 8
  else if ⌊log p⌋ is odd ∧ ⌊log p⌋ < ⌊log n⌋ − 1 ∧ (q = argmax5(p)) ≠ p, ..... 9
    swap(hp, hq) ............................................................. 10
    Gpercolate down(q) ....................................................... 11
    if (r = argmin3g(q)) ≠ q, swap(hq, hr) ................................... 12
  else if ⌊log p⌋ is odd ∧ ⌊log p⌋ = ⌊log n⌋ − 1 ∧ (q = argmax3(p)) ≠ p, ..... 13
    swap(hp, hq) ............................................................. 14
  else if ⌊log p⌋ is odd ∧ 2p > n ∧ hp < h⌊p/2⌋, ............................. 15
    swap(hp, h⌊p/2⌋) ......................................................... 16

So as to simplify the pseudo code in Subroutine 9.4, the six functions in eqns (9.14) ∼ (9.19)
are defined and invoked. To clarify the meaning of these functions, see Subroutines 9.5
and 9.6.

Subroutine 9.5. argmin of up to 5 elements

argmin5(p)
  q = p ................................................ 1
  for i = 4p to min(4p + 3, n) ......................... 2
    if hi < hq, q = i .................................. 3
  return q ............................................. 4

Subroutine 9.6. argmax of up to 5 elements

argmax5(p)
  q = p ................................................ 1
  for i = 4p to min(4p + 3, n) ......................... 2
    if hi > hq, q = i .................................. 3
  return q ............................................. 4

argmin5(p) = argmin(hp, h4p, h4p+1, h4p+2, h4p+3) where hx = ∞ if x > n    (9.14)
argmax5(p) = argmax(hp, h4p, h4p+1, h4p+2, h4p+3) where hx = −∞ if x > n   (9.15)
argmin3(p) = argmin(hp, h2p, h2p+1) where hx = ∞ if x > n                  (9.16)
argmax3(p) = argmax(hp, h2p, h2p+1) where hx = −∞ if x > n                 (9.17)
argmin3g(p) = argmin(h⌊p/2⌋, h2p, h2p+1) where hx = ∞ if x > n             (9.18)
argmax3g(p) = argmax(h⌊p/2⌋, h2p, h2p+1) where hx = −∞ if x > n            (9.19)

The computational time complexity of both Algorithms 9.22 and 9.23 is clearly O(log n),
as they rely on the grand percolate down Subroutine 9.4, which takes O(log n).

9.4.4 Construct
Consider the problem of constructing a min-max heap given a list of n randomly arranged
quantifiable elements. This problem is formulated as follows:

Problem 9.9. Min-max heap construction (Heapify)

Input: an array A1∼n of n quantifiable elements
Output: a permutation A′1∼n of A1∼n such that A′ is a min-max heap.

A naïve algorithm would invoke the insertion operation n times. Using the inductive
programming paradigm in Chapter 2, one can derive a first order linear recurrence relation
by assuming A1∼n−1 is already a min-max heap.

Lemma 9.3. First order linear recurrence of min-max heapify

min-max-heapify(A1∼n) =
  a1                                                     if n = 1
  Gpercolate up(min-max-heapify(A1∼n−1), an)             if n > 1

A couple of pseudo codes based on the inductive programming paradigm are stated as
follows:

Algorithm 9.24. Min-max heapify (naïve)

naïve Minmax heapify(A1∼n)
  for i = 2 ∼ n ........................................ 1
    A1∼i = Minmaxheapinsert(A1∼i−1, ai) ................ 2
  return A1∼n .......................................... 3

Algorithm 9.25. Min-max heapify (naïve)

naïve Minmax heapify(A1∼n)
  for i = 2 ∼ n ........................................ 1
    A1∼i = Gpercolate up(A1∼i, i) ...................... 2
  return A1∼n .......................................... 3

Algorithms 9.24 and 9.25 are illustrated in Figure 9.22 (a). Inserting the ith element
takes O(⌊log (i + 1)⌋). Hence, the computational time complexity of both Algorithms 9.24
and 9.25 is ∑_{i=1}^{n} ⌊log (i + 1)⌋ = Θ(n log n).
A min-max heap can be constructed more efficiently by the divide and conquer paradigm.
The following divide recurrence relation can be stated:

Figure 9.22: Constructing a min-max heap on ⟨40, 10, 5, 8, 9, 13, 47, 38, 51, 4⟩: (a) inductive programming, yielding ⟨4, 51, 47, 8, 5, 10, 13, 38, 40, 9⟩; (b) iterative divide and conquer, yielding ⟨4, 51, 47, 8, 9, 13, 5, 38, 10, 40⟩. [Tree diagrams omitted.]

Lemma 9.4. Divide recurrence of min-max heapify

min-max-heapify(T (ai )) =
  ai if ai is a leaf
  nil if ai = nil
  Gpercolate down(min-max-heapify(T (a2i )), min-max-heapify(T (a2i+1 )), ai ) otherwise

The divide and conquer recurrence relation in Lemma 9.4 can be stated as an iterative
algorithm, instead of a recursive one, as follows:
Algorithm 9.26. Min-max heapify

Minmax heapify(A1∼n )
for i = ⌊n/2⌋ down to 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Gpercolate down(i) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Algorithm 9.26 is illustrated in Figure 9.22 (b). The computational time complexities
of both the divide and conquer algorithm in Lemma 9.4 and Algorithm 9.26 are linear because
T (n) = 2T (n/2) + O(log n), according to the Master Theorem 3.9.
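To see why this recurrence is linear, unroll it level by level (a sketch, assuming n is a power of two and writing the combining cost at level i as c log(n/2^i)):

T (n) = ∑_{i=0}^{lg n} 2^i · c(lg n − i) = c ∑_{j=0}^{lg n} (n/2^j) · j ≤ cn ∑_{j=0}^{∞} j/2^j = 2cn ∈ Θ(n),

where the substitution j = lg n − i is used in the middle step, together with the fact that ∑_{j≥0} j/2^j = 2.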

9.5 Leftist Heap


Thus far, static implementations of a priority queue, such as binary heaps or min-max
heaps, have been considered. One of the disadvantages of a static implementation of a priority
queue is that one must know the maximum number of elements before declaring the priority
queue. This section presents a dynamic implementation of a priority queue called the leftist
heap, invented in [43]. Another major motivation for this data structure is the high cost of
merging two binary heaps into a single binary heap: the merge operation takes linear
time in a binary heap data structure. The merge operation takes only O(log n) in the leftist
heap data structure, while still guaranteeing O(log n) insertion and delete-min or delete-max,
depending on whether it is a leftist min or max-heap.
It should be noted that there are two kinds of leftist heaps: height biased [43] and weight
biased [32] heaps. Here, the term ‘leftist heap’ means the height biased leftist heap.
In this section, the leftist heap properties are first examined. The null path length
is introduced. Then, rudimentary operations are defined in terms of a merge operation.
Finally, three different algorithms for constructing a leftist heap are presented.

9.5.1 Definition of a Leftist Heap


In order to understand a leftist heap, the null path length, or simply NPL, of a node in
a rooted tree must be understood. The NPL of a node is the length of the shortest path from
the node to any null node in the tree, and can be formulated using the shortest path length as follows:

Problem 9.10. The null path length of a node in a binary tree

Input: a rooted tree T and a node x ∈ T .
Output: min(SPL(x, y)) − 1 where y ∈ the leaf node set and ∃ path(x, y).

While the height of a node in a rooted tree, defined on page 437, can be visualized as the
longest path length to any null node, as depicted in Figure 9.23 (a), the null path length
of a node is the shortest path length to any null node, as shown in Figure 9.23 (b). Nodes in
the binary tree in Figure 9.23 (c) are annotated with their respective NPL values. The NPL of
a tree is the NPL of its root node.

Figure 9.23: Height versus null path length: (a) height of a node; (b) NPL of a node; (c) a binary tree annotated with NPL values. [Tree diagrams omitted.]

The NPL of a node or a tree can be recursively determined as follows:

NPL(T ) =
  −1 if T = ε
  min(NPL(T.Left), NPL(T.Right)) + 1 otherwise    (9.20)

The null path length of any node is 1 more than the minimum of its children’s null path lengths,
with the basis case NPL(ε) = −1.
A binary tree, T , is said to be a leftist binary tree if the right sub-tree’s NPL is no greater than
the left sub-tree’s NPL for every node in the tree. The problem of determining whether a
binary tree is a leftist binary tree can be formulated as follows:
Problem 9.11. Leftist binary tree validation

Input: a rooted binary tree T .
Output: True if ∀x ∈ T (NPL(x.Left) ≥ NPL(x.Right)); False otherwise
Whether a binary tree is a leftist binary tree can be validated by checking the leftistness
of each node recursively, starting from the root node. A pseudo code for this recursive
depth-first traversal checking is stated in the following eqn (9.21).

is leftist(T ) =
  True if T = ε
  False if NPL(T.Left) < NPL(T.Right)
  is leftist(T.Left) ∧ is leftist(T.Right) otherwise    (9.21)

The recursive algorithm in eqn (9.21) follows the recursive depth-first traversal order;
provided the NPL values are computed bottom-up (or stored with the nodes), its computational time complexity is O(n).
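Eqns (9.20) and (9.21) can be transcribed almost verbatim; a minimal Python sketch (the Node class and names are illustrative, with None playing the role of ε):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        val: int
        left: "Optional[Node]" = None
        right: "Optional[Node]" = None

    def npl(t: "Optional[Node]") -> int:
        # Eqn (9.20): NPL(epsilon) = -1, otherwise 1 + the minimum over the children.
        if t is None:
            return -1
        return min(npl(t.left), npl(t.right)) + 1

    def is_leftist(t: "Optional[Node]") -> bool:
        # Eqn (9.21). Recomputing npl() at every node costs more than O(n);
        # a practical version computes all NPL values bottom-up in one pass.
        if t is None:
            return True
        if npl(t.left) < npl(t.right):
            return False
        return is_leftist(t.left) and is_leftist(t.right)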
A leftist heap is a leftist binary tree with a heap property. The max and min-heap properties
in a static complete binary tree were given in Problem 9.1 and exercise Q 9.5 on pages 503
and 546, respectively. To check whether a dynamic binary tree is a leftist heap, it has to
be a leftist tree and satisfy either the min-heap or the max-heap property in eqns (9.22)
and (9.23).

is minHeap(T ) =
  True if T = ε
  False if T.val > min(T.Left.val, T.Right.val)
  is minHeap(T.Left) ∧ is minHeap(T.Right) otherwise    (9.22)

is maxHeap(T ) =
  True if T = ε
  False if T.val < max(T.Left.val, T.Right.val)
  is maxHeap(T.Left) ∧ is maxHeap(T.Right) otherwise    (9.23)

Checking whether a binary tree T is a leftist min-heap can be stated as follows:

is LeftistminHeap(T ) = is leftist(T ) ∧ is minHeap(T ) (9.24)

Let LHmin and LHmax denote leftist min and max-heaps, respectively. In an LHmin or
LHmax, the NPL of each node x is stored with the node as x.npl = NPL(x). By combining
eqns (9.21) and (9.22), checking whether a binary tree is a leftist min-heap can be stated as
follows:

is LHmin(T ) =
  True if T = ε
  False if T.val > min(T.Left.val, T.Right.val) ∨ T.Left.npl < T.Right.npl
  is LHmin(T.Left) ∧ is LHmin(T.Right) otherwise    (9.25)

Validating a leftist min-heap clearly takes linear time.
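With the npl field stored in each node, eqn (9.25) becomes a single linear pass; a sketch under the assumption that every node carries val, left/right children, and a maintained npl:

    def is_lh_min(t) -> bool:
        # Eqn (9.25) in a single O(n) pass; assumes nodes with fields
        # val, left, right, and a stored npl (None again plays epsilon).
        if t is None:
            return True
        for child in (t.left, t.right):
            if child is not None and t.val > child.val:
                return False                                   # min-heap violation
        if (t.left.npl if t.left else -1) < (t.right.npl if t.right else -1):
            return False                                       # leftist violation
        return is_lh_min(t.left) and is_lh_min(t.right)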

Figure 9.24: Valid and invalid leftist min-heap examples, with each node annotated by its NPL: (a) leftist & min-heap; (b) leftist & min-heap; (c) leftist & not min-heap; (d) not leftist & min-heap; (e) not leftist & not min-heap. [Tree diagrams omitted.]

The first two binary trees in Figure 9.24 are valid leftist min-heaps, whereas the remain-
ing trees are invalid. Node 21 violates the min-heap property in Figure 9.24 (c). Node 12
violates the leftist property in Figure 9.24 (d). Nodes 12 and 17 violate the leftist property
and node 21 violates the min-heap property in Figure 9.24 (e).
Let the right spine of a binary tree be the path from the root node to the rightmost node
whose right child is null. For example, the right spines are ⟨1, 2, 80⟩ and ⟨12, 15⟩ in
Figure 9.24 (a) and (b), respectively. Let RSL(T ) be the length of the right spine of T ,
recursively defined as follows:

RSL(T ) =
  −1 if T = ε
  RSL(T.Right) + 1 otherwise    (9.26)

Lemma 9.5 (Right spine lemma). If T is a leftist tree, NPL(T ) = RSL(T ).

Proof. The leftist tree property states that if T is a leftist tree, NPL(T.Left) ≥ NPL(T.Right).

NPL(T ) = min(NPL(T.Left), NPL(T.Right)) + 1   by eqn (9.20)
        = NPL(T.Right) + 1                      by the leftist tree property
        = RSL(T.Right) + 1 = RSL(T )            by induction and eqn (9.26) □

NPL(T ), or RSL(T ), of a leftist heap is often called an s-value, as in [122]. Figure 9.25
(a) ∼ (c) show leftist tree examples whose RSL(T ) = 0 ∼ 2, respectively. Nodes filled with
a red color form the respective right spines.
The leftist tree property biases the tree to grow deep towards the left and, thus, may result
in an unbalanced tree. However, the length of the right spine, i.e., NPL(T ), is bounded above by
O(log n).

Figure 9.25: Leftist trees with RSL(T ) = NPL(T ) = 0 ∼ 2 and the minimum number of nodes: (a) NPL(T ) = 0 & MNN(0) = 1; (b) NPL(T ) = 1 & MNN(1) = 3; (c) NPL(T ) = 2 & MNN(2) = 7; (d) MNN(h + 1) = 2MNN(h) + 1. [Tree diagrams omitted.]

Theorem 9.1 (Upper bound on NPL(T )). NPL(T ) = O(log n)

Proof. Let h = NPL(T ) and let n be the number of nodes in a leftist tree with NPL(T ) = h;
then n ≥ 2^{h+1} − 1. Let MNN(h) be the minimum number of nodes in a leftist tree with NPL(T ) = h.

MNN(h) = 2^{h+1} − 1    (9.27)

We shall prove eqn (9.27) first by induction.

(basis case) When h = 0, MNN(0) = 2^1 − 1 = 1. The single-node tree in Figure 9.25
(a) has NPL(T ) = 0. MNN(1) = 2^2 − 1 = 3 and MNN(2) = 2^3 − 1 = 7 are shown in the
first trees of Figures 9.25 (b) and (c), respectively.
(inductive step) Assuming MNN(h) = 2^{h+1} − 1 is true, show MNN(h + 1) = 2^{h+2} − 1.
Consider a tree with NPL(T ) = h + 1. The right child node has NPL(T.Right) = h, and
the left child node’s NPL(T.Left) must be at least h. In order to get the minimum number
of nodes, NPL(T.Left) = h. Because there are two sub-leftist trees with the same null path
length, h, and a root node connecting them, we have the following recurrence relation, as
depicted in Figure 9.25 (d).

MNN(h + 1) = 2MNN(h) + 1
           = 2 × (2^{h+1} − 1) + 1    by assumption
           = 2^{h+2} − 1

Hence, eqn (9.27) is true.

n ≥ 2^{h+1} − 1    by eqn (9.27)
log (n + 1) ≥ h + 1
h ≤ log (n + 1) − 1
h ∈ O(log n) □

9.5.2 Merge Operation


Although the merge operation is not a fundamental operation of a priority queue, it is the
most important operation in the leftist heap, as it naturally facilitates other rudimentary
operations, such as insert and delete min or delete max. Let {T } and |T | be the set of all
nodes and the number of nodes in the tree T , respectively. The merge operation takes two leftist trees,
Tx and Ty , as input arguments. We assume that {Tx } and {Ty } are disjoint sets, i.e.,
{Tx } ∩ {Ty } = ∅ and |{Tx } ∪ {Ty }| = |Tx | + |Ty |. Even if two nodes have the same value,
they are two different objects. The merge operation is formally defined as follows:
Problem 9.12. Merge operation in leftist min-heaps

Input: leftist min-heaps, Tx and Ty , whose node sets are disjoint.
Output: a new leftist min-heap Tz such that {Tz } = {Tx } ∪ {Ty } and |Tz | = |Tx | + |Ty |

To begin merging two leftist min-heaps, compare the root nodes to determine which
becomes the root node of the merged leftist min-heap. The node with the smaller value
becomes the root node. Then the right sub-tree of the new root node is merged with the
other tree recursively. A pseudo code is stated in Algorithm 9.27.
Algorithm 9.27. Leftist min-heap merge
LHmin merge(Tx , Ty )
if Tx = null, return Ty . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if Ty = null, return Tx . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if Tx .val < Ty .val, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Tx .Right = LHmin merge(Tx .Right, Ty ) . . . . . . . . 4
if Tx .Left.npl < Tx .Right.npl, . . . . . . . . . . . . . . . . . . 5
swap(Tx .Left, Tx .Right) . . . . . . . . . . . . . . . . . . . . . . 6
Tx .npl = Tx .Right.npl + 1 . . . . . . . . . . . . . . . . . . . . . . 7
return (Tx ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
else (Tx .val ≥ Ty .val), . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Ty .Right = LHmin merge(Ty .Right, Tx ) . . . . . . . 10
if Ty .Left.npl < Ty .Right.npl, . . . . . . . . . . . . . . . . . 11
swap(Ty .Left, Ty .Right) . . . . . . . . . . . . . . . . . . . . . 12
Ty .npl = Ty .Right.npl + 1 . . . . . . . . . . . . . . . . . . . . . 13
return (Ty ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
The basis cases occur when either tree becomes empty, in lines 1 and 2. In line 3, the
new root node is selected. In line 4, the right sub-tree is updated by merging the old right
sub-tree with the other tree. The leftist tree property may now be violated, so it is restored
by swapping in lines 5 and 6, and the npl value of the root is updated in line 7.
Figure 9.26 illustrates the recursive merge algorithm with the internal stack. First, the
root nodes of the two trees are compared in Figure 9.26 (a). Since node 1 is smaller than node
5, node 1 along with its left sub-tree is pushed onto the internal stack. The right sub-tree of
node 1 and the tree rooted at node 5 are recursively merged in Figure 9.26 (b). Since node 5 is
smaller than node 7 in Figure 9.26 (c), node 5 along with its left sub-tree is pushed onto the
internal stack, and the right sub-tree of node 5 and the tree rooted at node 7 are recursively merged.
Since node 7 is smaller than node 8 in Figure 9.26 (d), node 7 along with its left sub-tree
is pushed onto the internal stack.
In Figure 9.26 (d), one of the trees is empty; this is the basis case that returns the non-empty
tree as the output, in lines 1 and 2 of Algorithm 9.27. Since the null tree is reached,
merging the tree rooted at node 8 with a null tree simply yields the tree rooted at node 8. This resulting tree

Figure 9.26: Leftist min-heap merge Algorithm 9.27 illustration: panels (a) ∼ (g) trace the recursive merge using the internal stack. [Tree diagrams omitted.]

goes under the right sub-tree of node 7 by popping it from the internal stack, as in Figure 9.26
(e). The resulting tree after popping in Figure 9.26 (f) violates the leftist tree property. It
can be simply corrected by swapping the two child sub-trees. The final merged leftist min-heap
is given in Figure 9.26 (g). Algorithm 9.27 is called a meld algorithm in [122, p 5-4].
Let the sizes of the two leftist heaps to be merged be nx and ny . Comparisons and
swaps occur only along the right spines of the two trees. The right spine path lengths
are O(log nx ) and O(log ny ) according to Theorem 9.1. Hence, the computational time
complexity of Algorithm 9.27 is O(log nx + log ny ) = O(log n) where n = nx + ny .
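The recursion in Algorithm 9.27 is short enough to transcribe directly. A minimal Python sketch (illustrative names, not the book's code); the two symmetric branches of the pseudo code are folded into a single swap so that x always holds the smaller root:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        val: int
        left: "Optional[Node]" = None
        right: "Optional[Node]" = None
        npl: int = 0

    def _npl(t: "Optional[Node]") -> int:
        return -1 if t is None else t.npl    # NPL(epsilon) = -1, eqn (9.20)

    def merge(x: "Optional[Node]", y: "Optional[Node]") -> "Optional[Node]":
        if x is None:                        # basis cases (lines 1-2)
            return y
        if y is None:
            return x
        if x.val > y.val:                    # the smaller root wins (line 3)
            x, y = y, x
        x.right = merge(x.right, y)          # merge down the right spine (line 4)
        if _npl(x.left) < _npl(x.right):     # restore the leftist property (lines 5-6)
            x.left, x.right = x.right, x.left
        x.npl = _npl(x.right) + 1            # update the stored s-value (line 7)
        return x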
One of the primary operations of a leftist heap as a priority queue is the insertion
operation. The insertion is nothing but a merge if we consider the new element to be inserted
as a single-node leftist heap. Figure 9.27 illustrates inserting ⟨15, 30, 25⟩ as a sequence into
a leftist min-heap. A pseudo code is stated as follows:

Algorithm 9.28. Leftist min-heap insert

LHmin insert(T, x)
T ′.val = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T ′.npl = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
return LHmin merge(T, T ′ ) . . . . . . . . . . . . . . . . . . . . . . . 3

The insertion operation in Algorithm 9.28 takes O(log n). The best case takes constant
time, as shown in Figure 9.27 (a), when the element to be inserted is smaller than the
minimum element in the leftist min-heap. The worst case occurs when the element goes all the
way down along the right spine, which takes Θ(log n).
Lines 1 and 2 in Algorithm 9.28 make the element to be inserted a leftist heap with a
single node. If we make these lines a subroutine, the insertion algorithm can be stated in a
single line, as in eqn (9.28).

Figure 9.27: Inserting ⟨15, 30, 25⟩ into a leftist min-heap: (a) inserting 15 (best case); (b) inserting 30; (c) inserting 25. [Tree diagrams omitted.]

Subroutine 9.7. Single node nodifier for a leftist heap


LH nodify(x)
T .val = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T .npl = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
return T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

T .Left and T .Right are nulls by default.

LHmin insert(T, x) = LHmin merge(T, LH nodify(x)) (9.28)


The other rudimentary operation of a leftist min-heap is the delete min operation. It can
also be viewed as a merge operation. If the root node, which holds the minimum, is removed,
the two children are leftist min-heaps to be merged. Figure 9.28 illustrates the delete
min operation in a leftist min-heap. A pseudo code is stated as follows:
Algorithm 9.29. Leftist heap delete min
LHmin delmin(T )
o = T .val . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
T = LHmin merge(T .Left,T .Right) . . . . . . . . . . . . . . . 2
return o and T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3

Figure 9.28: Leftist min-heap delete min illustration: (a) a sample deletion; (b) best case when T.Right = null. [Tree diagrams omitted.]

The delete min operation in Algorithm 9.29 takes O(log n) because the right spine
lengths of the two sub-trees are O(log n) by Theorem 9.1. The best case is O(1), which
occurs when the right spine length RSL(T ) is constant, or even zero, as shown in Figure 9.28
(b).
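Given the merge sketch in Section 9.5.2, both rudimentary operations are one-liners; this fragment assumes the Node class and merge function from that sketch:

    def insert(t, x):
        # Eqn (9.28): LHmin insert(T, x) = LHmin merge(T, LH nodify(x)).
        return merge(t, Node(x))

    def delete_min(t):
        # Algorithm 9.29, assuming t is non-empty: pop the root and
        # merge its two sub-heaps.
        return t.val, merge(t.left, t.right)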

9.5.3 Construction

Several varieties of leftist heap construction algorithms are presented here. The first and
most straightforward algorithm is the one based on inductive programming. A first
order recurrence relation can be derived as follows:

LHmin heapify(A1∼n ) =
  LH nodify(a1 ) if n = 1
  LHmin insert(LHmin heapify(A1∼n−1 ), an ) if n > 1    (9.29)

LHmin insert and LH nodify were provided previously in Algorithm 9.28 and Subroutine 9.7,
respectively. A pseudo code of the algorithm based on the inductive programming paradigm,
which inserts each element sequentially, is stated in Algorithm 9.30.

Algorithm 9.30. LHmin heapify

LHmin heapify(A1∼n )
T = LH nodify(a1 ) . . . . . . . . . . 1
for i = 2 ∼ n . . . . . . . . . . . . . . . . 2
T = LHmin insert(T, ai ) . . . . 3
return T . . . . . . . . . . . . . . . . . . . . 4

Algorithm 9.31. naïve LHmin heapify

LHmin heapify(A1∼n )
T = LH nodify(a1 ) . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T = LHmin merge(T , LH nodify(ai )) . . . . . 3
return T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Instead of calling the insertion method sequentially, the merge operation can be called
directly, as stated in Algorithm 9.31. Algorithms 9.30 and 9.31 are equivalent and are illustrated in Figure 9.29 (a) on a sample input, A1∼n = ⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩. The computational
time complexities of both Algorithms 9.30 and 9.31 are O(n log n).
The following two efficient linear time leftist heap construction algorithms are based
on the divide and conquer paradigm. A leftist heap construction algorithm based on the
following divide recurrence relation in eqn (9.30) is depicted in Figure 9.29 (b).

LHmin heapify(b, e) =
  LH nodify(ab ) if b = e
  LHmin merge(LHmin heapify(b, ⌊(b + e)/2⌋), LHmin heapify(⌊(b + e)/2⌋ + 1, e)) if b < e    (9.30)

The input sequence A1∼n is declared globally and eqn (9.30) is called initially with
LHmin heapify(1, n). Since the computational time complexity of the divide and conquer
algorithm in eqn (9.30) follows T (n) = 2T (n/2) + O(log n), it is Θ(n), according to Master
Theorem 3.9.
The last leftist heap construction algorithm uses the iterative divide and conquer method
instead of the recursive divide and conquer. Akin to the iterative merge sort Algorithm 3.23
described on page 127, an iterative, or bottom up, divide and conquer leftist heap construc-
tion pseudo code is given as follows:

Figure 9.29: Constructing a leftist min-heap on A1∼n = ⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩: (a) inductive programming Algorithm 9.30; (b) recursive divide and conquer in eqn (9.30); (c) iterative divide and conquer Algorithm 9.32. [Tree diagrams omitted.]

Algorithm 9.32. Iterative D&C leftist heapify

LHmin heapify(A1∼n )
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Ti = LH nodify(ai ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
s = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
while s < n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
i = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
while i + s ≤ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Ti = LHmin merge(Ti , Ti+s ) . . . . . . . . . . . . . . . . . 7
i = i + 2 × s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
s = 2 × s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
return T1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Algorithm 9.32 is illustrated with a toy example in Figure 9.29 (c). The computational
time complexity of Algorithm 9.32 is Θ(n), by the same analysis as the recursive version (Master Theorem 3.9).
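A sketch of the bottom-up construction in Python, again assuming the Node class and merge function from the merge sketch in Section 9.5.2; trees at stride s are merged pairwise until one heap remains:

    def lh_heapify(A):
        # Iterative divide and conquer in the spirit of Algorithm 9.32:
        # keep a forest of leftist heaps and merge neighbours pairwise
        # at doubling strides.
        trees = [Node(a) for a in A]              # n single-node leftist heaps
        s = 1
        while s < len(trees):
            for i in range(0, len(trees) - s, 2 * s):
                trees[i] = merge(trees[i], trees[i + s])
            s *= 2
        return trees[0]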

9.6 AVL Tree as a Priority Queue

Consider the AVL tree data structure defined on page 447. In Chapter 8, it was treated
as a dictionary ADT supporting insert, delete, and search operations. Here, an AVL
tree is treated as a priority queue ADT supporting insert, delete min, and/or delete max
operations. An AVL tree supports all three operations (insert, delete min, and delete max)
in O(log n). Since an AVL tree is a BST, the delete min operation can be implemented by
invoking the BST find min operation stated in Algorithm 8.7 and then performing the
AVL delete operation stated in Algorithm 8.14. A pseudo code is stated as follows:

Algorithm 9.33. AVL delete min

AVL delmin(T )
if T = null, return null . . . . . . . 1
else if T.Left = null, . . . . . . . . . . . . 2
a = T.key . . . . . . . . . . . . . . . . . . . . 3
T = T.Right . . . . . . . . . . . . . . . . . . 4
if T ≠ null, T.H = 0 . . . . . . . 5
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
a = AVL delmin(T.Left) . . . . . . 7
AVLrebalance(T ) . . . . . . . . . . . . . 8
return a . . . . . . . . . . . . . . . . . . . . . . . . 9

Algorithm 9.34. AVL delete max

AVL delmax(T )
if T = null, return null . . . . . . . 1
else if T.Right = null, . . . . . . . . . . . 2
a = T.key . . . . . . . . . . . . . . . . . . . . 3
T = T.Left . . . . . . . . . . . . . . . . . . . 4
if T ≠ null, T.H = 0 . . . . . . . 5
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
a = AVL delmax(T.Right) . . . . . 7
AVLrebalance(T ) . . . . . . . . . . . . . 8
return a . . . . . . . . . . . . . . . . . . . . . . . . 9

The AVL delete max operation is almost identical to the AVL delete min operation,
except that the right sub-tree is visited. Both AVL delete min and delete max operations
take Θ(log n), since only the leftmost or the rightmost path is traversed and checked for
rebalancing, respectively.

9.6.1 AVL Select


Various algorithms using the AVL tree data structure can be devised to solve the order
statistics Problem 2.15 defined on page 59. To find the kth smallest element in an array of
size n, greedy Algorithm 4.1 on page 154 took Θ(kn) because the delete min operation took
linear time and had to be performed k times. If the input data are in an AVL
tree, where the delete-min takes only Θ(log n), this new greedy algorithm with an AVL tree
data structure is much faster than the naïve greedy Algorithm 4.1. It is stated as follows:
Algorithm 9.35. AVL select I for kth small (greedy algo + AVL)
AVLselectI(A1∼n , k)
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
AVL insert(T, ai ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 to k − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
O = AVL-delete min(T ) . . . . . . . . . . . . . . . . . . . . . . . . 4
return find min(T ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Lines 1 and 2 build an AVL tree in the inductive programming way, which takes O(n log n).
Next, AVL delete min is invoked k − 1 times, which takes Θ(k log n). Finally,
it returns the minimum in the AVL tree. The computational time complexity of Algorithm 9.35 is O(n log n). The extra space complexity is Θ(n) to store the AVL tree.
Algorithm 9.35 is illustrated in Figure 9.30 (a), finding the (k = 4)th smallest element in
A1∼n = ⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩.
Recall inductive programming Algorithm 2.19 stated on page 60, which takes Θ(kn). It
starts with the first k elements as the solution set, tracking the maximum of those k elements
(the kth smallest so far), and then inductively moves toward the nth element. If an AVL tree is
used instead of a simple array to store the k elements, it becomes much faster, as depicted in
Figure 9.30 (b). It is stated as follows:
Algorithm 9.36. AVL select II for kth small (inductive programming + AVL)
AVLselectII(A1∼n , k)
for i = 1 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
AVL insert(T, ai ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
m = find max(T ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for i = k + 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if ai < m, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
AVL-delete max(T ) . . . . . . . . . . . . . . . . . . . . . . . . . . 6
AVL insert(T, ai ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
m = find max(T ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
return m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Lines 1 and 2 build an AVL tree only for the first k elements of the input sequence. For
the remaining elements in the input sequence, if the ith element is less than the maximum
value in the AVL tree, the maximum element is removed from the tree and then the ith
element is inserted into the tree. At the ith iteration, Algorithm 9.36 computes the AVL
tree containing the k smallest elements out of the sequence A1∼i . When the last element is
reached, the maximum element in the final AVL tree is the kth smallest element.
The AVL tree construction in lines 1 and 2 takes Θ(k log k) and the inductive steps take
O((n − k) log k). Hence, the computational time complexity of Algorithm 9.36 is O(n log k).
The extra space complexity is Θ(k) to store the AVL tree of size k.
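The same inductive strategy can be sketched with any priority queue supporting find-max, delete-max, and insert in O(log k). The following Python sketch substitutes a size-k max-heap (Python's heapq with negated keys) for the AVL tree; this is a named swap of data structure, keeping the O(n log k) bound:

    import heapq

    def kth_smallest(A, k):
        # Algorithm 9.36's strategy with a max-heap standing in for the AVL tree.
        h = [-a for a in A[:k]]            # max-heap of the first k elements
        heapq.heapify(h)
        for a in A[k:]:
            if a < -h[0]:                  # a beats the current k-th smallest
                heapq.heapreplace(h, -a)   # delete max, then insert a
        return -h[0]

    A = [80, 20, 30, 15, 12, 1, 35, 17, 2, 10]
    print(kth_smallest(A, 4))   # 12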

Figure 9.30: Various AVL select algorithm illustrations to find the (k = 4)th smallest element in A1∼n = ⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩: (a) AVL select I (greedy algorithm + AVL), Algorithm 9.35; (b) AVL select II (inductive programming + AVL), Algorithm 9.36; (c) AVL select III (tree traversal), Algorithm 9.37; (d) AVL select III for the kth largest, left as an exercise. [Tree diagrams omitted.]

The third AVL select algorithm utilizes the in-order depth-first traversal. Once the input
data are stored in an AVL tree, the in-order depth-first traversal can be performed until the
kth element is reached. Once the kth element is found, it can stop and return the value.
A pseudo code, obtained by slightly changing the in-order DFT, is stated as follows:

Algorithm 9.37. AVL select III for kth small (tree traversal)

AVLselectIII(A1∼n , k)
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
AVL insert(T, ai ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
return DFTinorder(T, 0, k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

DFTinorder(T, c, k)
if c < k ∧ T.Left ≠ null, m = DFTinorder(T.Left, c, k) . . . . . . . 1
if c = k − 1, c = c + 1 and m = T.key . . . . . . . . . . . . . . . . . . . . . . 2
else if c < k, c = c + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if c < k ∧ T.Right ≠ null, m = DFTinorder(T.Right, c, k) . . . . 4
return m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

The in-order DFT traverses the AVL tree only for the first k visits. Figure 9.30 (c)
illustrates Algorithm 9.37. To analyze the computational time complexity, the partial
route of the in-order DFT is separated into three parts. First, the red arrows indicate finding
the first element, which is the minimum. This step takes Θ(log n), as the minimum element
is located at either a leaf or one level above a leaf node. The blue arrows indicate the first k
visited nodes of the in-order DFT. This step clearly takes Θ(k). The green dotted arrows
indicate the return calls after the answer is found. Hence, the partial in-order DFT takes
Θ(log n + k) as long as the input data are already in an AVL tree. Since an AVL tree must
first be constructed, the computational time complexity of Algorithm 9.37 is O(n log n).
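The early-stopping in-order traversal itself is independent of the AVL balancing; a minimal Python sketch on a plain binary search tree node (illustrative class), using an explicit stack instead of the recursion of Algorithm 9.37:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class BST:
        key: int
        left: "Optional[BST]" = None
        right: "Optional[BST]" = None

    def kth_by_inorder(t: "Optional[BST]", k: int):
        # Partial in-order DFT: stop as soon as the k-th node is visited.
        stack, visited = [], 0
        while stack or t:
            while t:                      # descend to the leftmost unvisited node
                stack.append(t)
                t = t.left
            t = stack.pop()
            visited += 1
            if visited == k:
                return t.key
            t = t.right
        return None                       # fewer than k nodes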
If we do not stop at the kth element but continue traversing the tree in Algorithm 9.37,
the resulting sequence is the sorted list. This yields another algorithm for the sorting Problem 2.16.

Algorithm 9.38. AVL sort

AVL-sort(A1∼n )
T = null . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
AVLinsert(T, ai ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
O1∼n = DFTinorder(T ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return O1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Constructing an AVL tree takes Θ(n log n) and the depth-first traversal takes linear time.
Hence, the computational time complexity of Algorithm 9.38 is Θ(n log n).

9.7 Stability
The term, ‘stable’ is defined previously in Definition 3.2 on page 135 to categorize
sorting algorithms. If two elements have the same priority, they are served according to
their order in the input sequence. Suppose that there are three classes in airlines: A
(the first class), B (business class), and C (coach class). If passengers come in order of
hC1 , A1 , B1 , A2 , B2 , C2 , C3 , A3 , B3 i, the stable sorting algorithm must output hA1 , A2 , A3 ,
B1 , B2 , B3 , C1 , C2 , C3 i. Subscript is used solely to distinguish the order of the passenger
within the same class.
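For a quick sanity check of the required behavior, Python's built-in sort, which is stable, reproduces exactly this output when keyed on the class letter alone:

    passengers = ["C1", "A1", "B1", "A2", "B2", "C2", "C3", "A3", "B3"]
    # sorted() is stable: elements with equal keys keep their input order,
    # so the subscripts stay in arrival order within each class.
    print(sorted(passengers, key=lambda p: p[0]))
    # ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']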
As given in Figure 9.31, the heap sort Algorithms 9.11 and 9.12 are clearly not stable.
Algorithm 9.11 first constructs a min-heap, as shown in Figure 9.31 (a), and performs
delete min operations until the heap is empty. The output, as shown in Figure 9.31 (b),
is ⟨A1 , A2 , A3 , B3 , B2 , B1 , C1 , C3 , C2 ⟩, which is not stable. Algorithm 9.12 first constructs a
max-heap, as shown in Figure 9.31 (c), and performs delete max operations until the heap
is empty. The output, as shown in Figure 9.31 (d), is ⟨A3 , A2 , A1 , B1 , B2 , B3 , C3 , C2 , C1 ⟩,
which is not stable.
One advantage of AVL sorting Algorithm 9.38 over conventionally popular sorting algorithms,
such as quicksort and heapsort, is its stability. Consider the previous sample
input A = ⟨C1 , A1 , B1 , A2 , B2 , C2 , C3 , A3 , B3 ⟩ to demonstrate its stability. As depicted in
Figure 9.31 (e), an AVL tree is constructed by inserting one element at a time, starting from
the beginning. If we take the in-order depth-first traversal on the final AVL tree, we have
the perfectly stable sorted list, A′ = ⟨A1 , A2 , A3 , B1 , B2 , B3 , C1 , C2 , C3 ⟩.
The stability of AVL sorting Algorithm 9.38 is guaranteed because an element whose value
equals that of the current node is inserted into the right sub-tree, as it came
later than the node. If we instead inserted an element with the same value into the left sub-tree,
we would have the unstable sorted list A′′ = ⟨A3 , A2 , A1 , B3 , B2 , B1 , C3 , C2 , C1 ⟩. In other

Figure 9.31: Unstable heap sort algorithms and stable AVL and BST-sort algorithms: (a) min-heapifying the sample input ⟨C1 , A1 , B1 , A2 , B2 , C2 , C3 , A3 , B3 ⟩; (b) sorted output using a min-heap (Algorithm 9.11): ⟨A1 , A2 , A3 , B3 , B2 , B1 , C1 , C3 , C2 ⟩; (c) max-heapifying the sample input; (d) sorted output using a max-heap (Algorithm 9.12): ⟨A3 , A2 , A1 , B1 , B2 , B3 , C3 , C2 , C1 ⟩; (e) sequential AVL construction; (f) sequential BST construction. [Tree diagrams omitted.]

words, changing the condition from < to ≤ in line 2 of the AVL insert Algorithm 8.13 stated
on page 451 would put elements of the same value into the left sub-tree.
A BST can be used instead of an AVL tree, as shown in Figure 9.31 (f). It produces a correct,
stable sorted list, but its computational time complexity would be very inefficient: O(n²) in the worst case.
Various other data structures can be used as a priority queue, and Table 9.1 on page 500
shows which ones result in stable and unstable sorted lists when they are used to solve the
sorting Problem 2.16.

9.8 Exercises
Q 9.1. Which of the following binary tree(s) is(are) min-heaps implemented using an array?
a) ∼ d) [four binary tree diagrams omitted]

Q 9.2. Which of the following binary tree(s) is(are) max-heaps implemented using an array?
a) ∼ d) [four binary tree diagrams omitted]

Q 9.3. Which of the array(s) is(are) min-heap(s)?


a). ⟨2, 15, 7, 27, 37, 17, 29, 35, 25, 70⟩
b). ⟨2, 7, 35, 25, 15, 70, 37, 29, 27, 17⟩
c). ⟨2, 7, 15, 17, 25, 27, 29, 35, 37, 70⟩
d). ⟨2, 15, 17, 7, 35, 27, 37, 25, 29, 70⟩
Q 9.4. Which of the array(s) is(are) max-heap(s)?
a). ⟨70, 29, 37, 17, 27, 35, 2, 15, 7, 25⟩
b). ⟨70, 37, 29, 27, 15, 35, 2, 25, 17, 7⟩
c). ⟨70, 37, 17, 27, 35, 2, 15, 25, 15, 7⟩
d). ⟨70, 29, 35, 37, 15, 27, 2, 25, 17, 7⟩
Q 9.5. Consider a binary min-heap.

a). Define the problem of checking whether an array of size n is a min-heap.


b). Devise an algorithm straight from the definition given in a).

c). Devise a divide and conquer algorithm for the problem in a).

d). Devise an iterative divide and conquer algorithm for the problem in a).

e). Define the problem of inserting an element in a min-heap.

f). Devise an algorithm for the insertion problem defined in e).

g). Define the problem of deleting the minimum element in a min-heap.

h). Devise an algorithm for the delete min problem defined in g).

Q 9.6. Consider a sample sequence A = ⟨A, L, G, O, R, I, T, H, M⟩.

a). Demonstrate the O(n log n) naı̈ve Algorithm 9.6 to construct a min-heap on the above
input sequence A.

b). Demonstrate linear time Algorithm 9.7 to construct a min-heap on the above input
sequence A.

c). Demonstrate the O(n log n) naı̈ve Algorithm 9.6 to construct a max-heap on the above
input sequence A.

d). Demonstrate linear time Algorithm 9.7 to construct a max-heap on the above input
sequence A.

Q 9.7. Given the following list in an array, build a max-heap.


⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩

a). Use the O(n log n) naı̈ve Algorithm 9.6 to construct a max-heap.

b). Use linear time Algorithm 9.7 to construct a max-heap.

c). After constructing a max-heap using linear time Algorithm 9.7, show the array repre-
sentation of the max-heap after performing the delete-max three times.

Q 9.8. Consider the problem of selecting kth smallest element in a list of n quantifiable
elements. Use the following sample array data for illustration.
⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩

a). Devise a greedy algorithm using a heap data structure.

b). Illustrate the algorithm devised in a) using the above toy example where k = 4.

c). Provide the computational complexity of the algorithm devised in a).

d). Devise an inductive programming algorithm using a heap data structure.

e). Illustrate the algorithm devised in c) using the above toy example where k = 4.

f). Provide the computational complexity of the algorithm devised in c).

Q 9.9. Consider the select k sum maximization Problem 4.1 defined on page 157.

a). Devise a greedy algorithm using a heap data structure.


b). Provide the computational complexity of the algorithm devised in a).
c). Illustrate the algorithm devised in a) using a toy example, A = ⟨5, 2, 9, 4, 0, 8, 7, 1⟩ and k = 4.
d). Devise an inductive programming algorithm using a heap data structure.
e). Provide the computational complexity of the algorithm devised in d).
f). Illustrate the algorithm devised in d) using a toy example, A = ⟨5, 2, 9, 4, 0, 8, 7, 1⟩ and k = 4.
Q 9.10. Recall the select k sum minimization problem, considered as an exercise Q 4.2 on
page 200.
a). Devise a greedy algorithm using a heap data structure.
b). Provide the computational complexity of the algorithm devised in a).
c). Illustrate the algorithm devised in a) using a toy example, A = ⟨5, 2, 9, 4, 0, 8, 7, 1⟩ and k = 4.
d). Devise an inductive programming algorithm using a heap data structure.
e). Provide the computational complexity of the algorithm devised in d).
f). Illustrate the algorithm devised in d) using a toy example, A = ⟨5, 2, 9, 4, 0, 8, 7, 1⟩ and k = 4.
Q 9.11. Recall the Select k subset product maximization problem (SKSP in short), consid-
ered as an exercise Q 4.3 on page 200, which is to find k items out of n positive numbers
such that the product of these k numbers is maximized.

a). Devise a greedy algorithm using a heap data structure.


b). Provide the computational complexity of the algorithm devised in a).
c). Illustrate the algorithm devised in a) using a toy example,
A = ⟨0.5, 2.0, 0.2, 0.5, 0.4, 5.0, 0.4, 2.0⟩ where k = 4.
d). Devise an inductive programming algorithm using a heap data structure.
e). Provide the computational complexity of the algorithm devised in d).
f). Illustrate the algorithm devised in d) using a toy example,
A = ⟨0.5, 2.0, 0.2, 0.5, 0.4, 5.0, 0.4, 2.0⟩ where k = 4.

Q 9.12. Recall the Select k subset product minimization problem (SKSPmin in short),
considered as an exercise Q 4.4 on page 200, which is to find k items out of n positive
numbers such that the product of these k numbers is minimized.

a). Devise a greedy algorithm using a heap data structure.


b). Provide the computational complexity of the algorithm devised in a).

c). Illustrate the algorithm devised in a) using a toy example,
A = ⟨2.0, 0.5, 5.0, 2.0, 2.5, 0.2, 2.5, 0.5⟩ where k = 4.
d). Devise an inductive programming algorithm using a heap data structure.
e). Provide the computational complexity of the algorithm devised in d).
f). Illustrate the algorithm devised in d) using a toy example,
A = ⟨2.0, 0.5, 5.0, 2.0, 2.5, 0.2, 2.5, 0.5⟩ where k = 4.

Q 9.13. Recall the down-up alternating permutation problem, or simply DUP, considered
as an exercise Q 2.24 on page 86. Use the following array to illustrate algorithms:
⟨3, 1, 2, 4, 0, 7, 6, 5⟩

a). Use the linear time divide and conquer Algorithm 9.7 to construct a max-heap.
b). Find the up-down sequence using Algorithm 9.14. Show the illustration.
c). Find the up-down sequence using Algorithm 9.15.
d). Devise an algorithm similar to Algorithm 9.13 to find an up-down sequence using a
max-heap.
e). Find the up-down sequence using the algorithm devised in d). Show the illustration.
f). Use linear time divide and conquer Algorithm 9.7 to construct a min-heap.
g). Devise an algorithm similar to Algorithm 9.14 to find an up-down sequence using a
min-heap.
h). Find the up-down sequence using the algorithm devised in g). Show the illustration.
i). Provide the computational time complexity of the algorithm devised in g).
j). Devise an algorithm similar to Algorithm 9.15 to find an up-down sequence using a
min-heap.
k). Find the up-down sequence using the algorithm devised in j).
l). Provide the computational time complexity of the algorithm devised in j).

Q 9.14. Consider the following array to solve the down-up alternating permutation Prob-
lem S-2.9 considered on page 86.
⟨3, 1, 2, 4, 0, 7, 6, 5⟩

a). Devise a greedy algorithm to find a down-up sequence using a min-heap. Hint: similar
to Algorithm 9.14.
b). Find the down-up sequence using the algorithm devised in a). Show the illustration.
c). Provide the computational time complexity of the algorithm devised in a).
d). Devise an algorithm to find a down-up sequence using a min-heap and tree traversal.
Hint: similar to Algorithm 9.15.

e). Find the down-up sequence using the algorithm devised in d).
f). Provide the computational time complexity of the algorithm devised in d).
g). Devise a greedy algorithm to find a down-up sequence using a max-heap. Hint: similar
to Algorithm 9.14.
h). Find the down-up sequence using the algorithm devised in g). Show the illustration.
i). Provide the computational time complexity of the algorithm devised in g).
j). Devise an algorithm to find a down-up sequence using a max-heap and tree traversal.
Hint: similar to Algorithm 9.15.
k). Find the down-up sequence using the algorithm devised in j).
l). Provide the computational time complexity of the algorithm devised in j).

Q 9.15. Recall the up-up-down alternating permutation problem, or simply UDD, consid-
ered as an exercise Q 2.26 on page 87. Use the following array to illustrate algorithms:
⟨3, 1, 2, 4, 0, 7, 6, 5⟩

a). Devise an algorithm to find an up-up-down sequence using a max-heap.


b). Find the up-up-down sequence using the algorithm devised in a). Show the illustration.
c). Provide the computational time complexity of the algorithm devised in a).
d). Devise an algorithm to find an up-up-down sequence using a min-heap.
e). Find the up-up-down sequence using the algorithm devised in d). Show the illustration.
f). Provide the computational time complexity of the algorithm devised in d).

Q 9.16. Consider the Fractional knapsack minimization problem, which appeared as exer-
cise Q 4.13 on page 204.

a). Provide a greedy algorithm with a heap data structure.


b). Illustrate the algorithm provided in a) using a toy example of n distinct foods with
their fats and amount appeared in exercise Q 4.13 on page 204.
c). Provide the computational time complexity of the algorithm devised in a).

Q 9.17. Consider the activity selection Problem 4.8 defined on page 170.

a). Provide a greedy algorithm with a max-heap data structure. (Hint: greedy Algo-
rithm 4.11)
b). Illustrate the algorithm provided in a) on the following toy example:

activity A 1 2 3 4 5 6 7
start S 3 5 2 4 1 6 7
finish F 4 9 5 6 2 7 15

c). Provide the computational time complexity of the algorithm devised in a).
d). Prove the correctness of the algorithm provided in a).

Q 9.18. Consider the minimum number of processors Problem 4.9 defined on page 172.

a). Provide a greedy algorithm with a min-heap data structure.


b). Illustrate the algorithm provided in a) using a toy example of (n = 8) tasks, with the
respective starting and finishing times in Figure 4.15 on page 172.
c). Show the processor assignment schedule produced by the algorithm provided in a) on
a toy example of (n = 8) tasks, with the respective starting and finishing times in
Figure 4.15 on page 172.
d). Provide the computational time complexity of the algorithm devised in a).
e). Provide a greedy algorithm with a min-heap data structure.
f). Illustrate the algorithm provided in e) using a toy example of (n = 8) tasks, with the
respective starting and finishing times in Figure 4.15 on page 172.
g). Show the processor assignment schedule produced by the algorithm provided in e) on
a toy example of (n = 8) tasks, with the respective starting and finishing times in
Figure 4.15 on page 172.
h). Provide the computational time complexity of the algorithm devised in e).

Q 9.19. Consider the job scheduling with deadline Problem 4.12, or simply JSD, defined
on page 177.
a). Devise a greedy algorithm with a heap data structure.
b). Illustrate the algorithm provided in a) on the following toy example:

task A 1 2 3 4 5 6 7 8
profit P 3 8 5 6 7 7 6 4
deadline D 3 1 2 1 3 1 3 3

c). Provide the computational time complexity of the algorithm devised in a).
Q 9.20. Which of the following binary tree(s) is(are) min-max heaps?
a) ∼ d) [four binary tree diagrams omitted]

Q 9.21. Consider Problem 9.5 of checking whether a given array (a complete binary tree) is a min-max heap, defined on page 523.

a). Devise a recursive divide and conquer algorithm.

b). Provide the computational time complexity of the algorithm proposed in a).

c). Devise an iterative, or bottom-up, divide and conquer algorithm.

d). Provide the computational time complexity of the algorithm proposed in c).

Q 9.22. Which of the following binary tree(s) is(are) Leftist min-heaps?

a) ∼ d) [four binary tree diagrams omitted]

Q 9.23. Which of the following binary tree(s) is(are) Leftist max-heaps?

a) ∼ d) [four binary tree diagrams omitted]

Q 9.24. Consider an input sequence A = ⟨A, L, G, O, R, I, T, H, M⟩.

a). Construct a leftist min-heap using the inductive programming Algorithm 9.30 stated
on page 539.

b). Construct a leftist min-heap using the recursive divide and conquer algorithm in
eqn (9.30) stated on page 539.

c). Construct a leftist min-heap using the iterative, or bottom-up, divide and conquer
Algorithm 9.32 stated on page 541.

Q 9.25. Instead of the leftist min-heap defined on page 533, consider the leftist max-heap.
Use the following sample array data for the questions that involve illustration.

⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩

a). Define the problem of checking whether a binary tree is a leftist max-heap.

b). Devise an algorithm for checking whether a binary tree is a leftist max-heap.

c). Define the problem of merging two leftist max-heaps.

d). Devise an algorithm for merging two leftist max-heaps.

e). Devise an algorithm for inserting an element into a leftist max-heap.



f). Devise an algorithm for deleting the element with the maximum value from a leftist
max-heap.
g). Derive a first order linear recurrence for constructing a leftist max-heap.
h). Devise an algorithm to construct a leftist max-heap using inductive programming.
i). Construct a leftist max-heap on the above data using the algorithm devised in h).
j). Devise an algorithm to construct a leftist max-heap using a recursive divide and con-
quer paradigm.
k). Construct a leftist max-heap on the above data using the algorithm devised in j).
l). Devise an algorithm to construct a leftist max-heap using an iterative, or bottom-up,
divide and conquer paradigm.
m). Construct a leftist max-heap on the above data using the algorithm devised in l).
Q 9.26. Consider the sorting Problem 2.16 in ascending order and the leftist min-heap data
structure.
a). Devise a sorting algorithm using a leftist min-heap.
b). Provide the computational time complexity of the algorithm proposed in a)
c). Execute the delete min operation n number of times on the final leftist min-heap in
Figure 9.29 (a). Show each step.
d). Execute the delete min operation n number of times on the final leftist min-heap in
Figure 9.29 (b).
e). Execute the delete min operation n number of times on the final leftist min-heap in
Figure 9.29 (c).
f). Prove or disprove whether the leftist heap sorting algorithm devised in a) is stable.
Q 9.27. Consider the finding the kth largest element Problem 2.15 defined on page 59. For
a toy example, the (k = 4)th largest element in A1∼n = ⟨80, 20, 30, 15, 12, 1, 35, 17, 2, 10⟩ is
20.
a). Devise a greedy algorithm using a leftist max-heap.
b). Illustrate the algorithm proposed in a) on the above data.
c). Devise an inductive programming algorithm using a leftist min-heap.
d). Illustrate the algorithm proposed in c) on the above data.
e). Devise a greedy algorithm using an AVL tree.
f). Illustrate the algorithm proposed in e) on the above data.
g). Devise an inductive programming algorithm using an AVL tree.
h). Illustrate the algorithm proposed in g) on the above data.
i). Devise an algorithm using an AVL tree traversal.
j). Illustrate the algorithm proposed in i) on the above data.
Chapter 10

Reduction
Figure 10.1: The map coloring problem reduces to the graph coloring problem: (a) the map coloring problem (eight regions of Lower Manhattan); (b) the corresponding graph coloring problem. [Map and graph diagrams omitted.]

Suppose that we wish to color a map with the minimum number of colors, such that
no adjacent regions have the same color. Lower Manhattan in Figure 10.1 (a) has eight
regions, and four colors are used. While struggling to devise an algorithm, suppose we find
an algorithm for the graph coloring problem, which is to label vertices with the minimum
number of colors such that no adjacent vertices have the same color. We can utilize the
algorithm for the graph coloring problem to solve the map coloring problem, as there is a
strong relationship between the two problems. If we assign each region a vertex and turn
each border between two regions into an edge, the map coloring problem is nothing but
the graph coloring problem. We say that the map coloring problem reduces to the graph
coloring problem. The reduction based algorithm design paradigm can be stated using the
idiomatic metaphor, “Don’t reinvent the wheel!”
The reduction concept was first used in recursion theory by Post in [137]. For many years,
the established notation for polynomial-time many-one reducibility has been ≤ᵖₘ, introduced by
Ladner et al. in [110]. The reduction and its simplified notation, ≤p , have been used widely

Emil Leon Post (1897-1954) was a Polish-born American mathematician. His major
contributions to computer science include the unsolvability of his Post correspondence
problem.
© The photograph is in the public domain.

in computer science literature, such as in [10, 42], to measure the relative computational
difficulty of two problems. Proving the hardness of problems by using the reduction concept
shall be discussed further in Chapter 11.
The primary objectives of this chapter are as follows: First, one must understand the
concept of reducibility from one problem to another. Second, one should be able to design
an algorithm for a certain problem by reducing it to another relevant problem. Next, readers
must be able to analyze the computational complexity of algorithms based on the reduction
paradigm. It is important to utilize the reduction concept to prove the lower bound of
a certain problem. One must be familiar with solving many problems by sorting; these
problems are said to be sorting-complete. The concept of px -complete, which is the set of
all problems that reduce to px in polynomial time, must be understood. Also, one must
be able to reduce problems to graph related problems. The reduction concept is illustrated
through numerous problems in graph theory, number theory, matrices, and combinatorics.
There are problems which invoke an algorithm for another relevant problem multiple times;
this multi-reduction (≤ᵐₚ) paradigm is introduced.
It should be noted that both Chapters 10 and 11 use abbreviations for problems extensively.
Conventional abbreviations are adopted wherever possible; refer to the list of
abbreviations provided on page 768 for clarification.

10.1 Definitions
Consider the problem of solving a linear equation, c1 x + c0 = 0, LNE in short. The
output is clearly LNE(c0 , c1 ) = −c0 /c1 , as long as c1 ≠ 0. Instead of writing
a program or algorithm for this problem, suppose that we searched the web and found a
program for the problem of finding roots of a quadratic equation, or simply QRE, which
takes (c0 , c1 , c2 ) as an input and returns the roots of c2 x² + c1 x + c0 = 0 as outputs. A lazy
programmer might just download and use the program for the QRE problem to get the
correct answer for the LNE problem by eqn (10.1).

LNE(c0 , c1 ) = QRE(c0 , c1 , 0) (10.1)
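A Python sketch of this lazy reduction (the function names are illustrative; note that the quadratic solver must tolerate the degenerate c2 = 0 case for eqn (10.1) to work):

    import math

    def qre(c0, c1, c2):
        # Real roots of c2*x^2 + c1*x + c0 = 0; this stand-in tolerates the
        # degenerate case c2 = 0, which eqn (10.1) relies on.
        if c2 == 0:
            return (-c0 / c1,)
        d = c1 * c1 - 4 * c2 * c0
        if d < 0:
            return ()
        return ((-c1 + math.sqrt(d)) / (2 * c2), (-c1 - math.sqrt(d)) / (2 * c2))

    def lne(c0, c1):
        # LNE <=p QRE by eqn (10.1): reuse the quadratic solver with c2 = 0.
        return qre(c0, c1, 0)[0]

    print(lne(6, 3))   # -2.0, the root of 3x + 6 = 0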

10.2 Reduction: px ≤p py
The intuitive notion of the reduction paradigm is to devise an algorithm for a certain
problem px by using another problem py whose algorithm is known. To do so requires
one to think about how to transform the input Ix of the problem px into the input Iy ,
so that the algorithm algoy for the problem py can be utilized. Then, one must think about how
to transform the output Oy returned by algoy into the desired output Ox . This whole
process is depicted in Figure 10.2. If one can devise an algorithm for px by using a known

Figure 10.2: px ≤p py reduction framework: the input Ix is transformed into Iy , algoy produces Oy , and Oy is transformed back into the output Ox .



algorithm algoy for py by transforming Ix to Iy and Oy to Ox in polynomial time, it can
be said that “px reduces to py in polynomial time,” denoted as px ≤p py .

Definition 10.1. px reduces to py : px ≤p py if and only if there exist algorithms to


transform Ix to Iy and Oy to Ox in polynomial time.

The outermost box in Figure 10.2 is the reduction (px ≤p py ) based algorithm for the px
problem.
Let Ti and To be the algorithms that transform Ix to Iy and Oy to Ox , respectively.
“In polynomial time” means that there exists a positive constant
c such that both Ti and To run in O(nc ) time. The computational complexity of this kind of
reduction based algorithm for the problem px is O(Ti ) + O(algoy ) + O(To ). In this chapter,
the reduction paradigm, which invokes an existing algorithm for another problem, is utilized
to design algorithms.
There are many trivial cases in which little effort is necessary to transform one problem
into another. For example, consider the checking primality of a number n problem, or simply
CPN, considered as an exercise in Q 1.16 on page 30. Instead of devising an algorithm, one
may search for a similar problem whose algorithm is known. One such problem is the
number of prime factors Problem 5.1, or simply NPF, defined on page 218. The number of
prime factors of a number n is one if n is a prime number and greater than one if n is a
composite number. Hence, the following reduction based equation for CPN ≤p NPF can be
derived:

isprime(n) =
  True if NPF(n) = 1
  False otherwise
⇔ CPN ≤p NPF    (10.2)
Another obvious reduction relation can be derived for the modulo Problem 2.25, which
reduces to the division Problem 2.24 from the fact: n = q · d + r.

mod(n, d) = n − d × div(n, d) ⇔ MOD ≤p DIV (10.3)
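In code, the MOD ≤p DIV reduction is a one-liner around any integer-division routine; a Python sketch with a trivial stand-in for div:

    def div(n, d):
        return n // d            # trivial stand-in for a known division algorithm

    def mod(n, d):
        # MOD <=p DIV by eqn (10.3): r = n - d * (n div d).
        return n - d * div(n, d)

    assert mod(17, 5) == 2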

10.3 Dual problem: px ≡p py


Suppose that a program, kthlarge(A1∼n , k), for the k-th order statistics Problem 2.15,
KLG in short, is given. Problems that simply reduce to the k-th order statistics Problem 2.15
include:

findmax(A1∼n ) = kthlarge(A1∼n , 1) ⇔ MAX ≤p KLG (10.4)


findmin(A1∼n ) = kthlarge(A1∼n , n) ⇔ MIN ≤p KLG (10.5)
kthsmall(A1∼n , k) = kthlarge(A1∼n , n − k + 1) ⇔ KSM ≤p KLG (10.6)

The first (k = 1) largest element is the max, and the last (k = n) largest element is the min.
Finding the kth smallest element is identical to finding the (n − k + 1)th largest element.
When px ≤p py and py ≤p px , their relation is denoted as px ≡p py , and the two
problems are said to be dual problems. For example, KLG ≡p KSM because KSM ≤p KLG
by eqn (10.6) and KLG ≤p KSM by eqn (10.7).

    kthlarge(A1∼n , k) = kthsmall(A1∼n , n − k + 1) ⇔ KLG ≤p KSM   (10.7)

Clearly, the KSM problem is essentially the KLG problem in disguise, and vice versa.

For another simple dual problem example, consider the GCD and LCM problems defined
on pages 7 and 4, respectively. Clearly, GCD ≡p LCM by eqns (10.8) and (10.9).

    LCM(n, m) = (n × m) / GCD(n, m)   ⇔ LCM ≤p GCD   (10.8)
    GCD(n, m) = (n × m) / LCM(n, m)   ⇔ GCD ≤p LCM   (10.9)

Theorem 10.1. LCM and GCD are correctly computed by eqns (10.8) and (10.9), respec-
tively.

Proof. By the Fundamental Theorem of Arithmetic [146, p 155], let’s represent n and m
as products of powers of prime numbers:

    n = p1^a1 × p2^a2 × ··· × pk^ak
    m = p1^b1 × p2^b2 × ··· × pk^bk
    n × m = p1^(a1+b1) × p2^(a2+b2) × ··· × pk^(ak+bk)
    LCM(n, m) = p1^max(a1,b1) × p2^max(a2,b2) × ··· × pk^max(ak,bk)
    GCD(n, m) = p1^min(a1,b1) × p2^min(a2,b2) × ··· × pk^min(ak,bk)
Since max(a, b) + min(a, b) = a + b, LCM(n, m) × GCD(n, m) = n × m. □
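In practice, eqn (10.8) is exactly how LCM is usually computed. A Python sketch using the standard library's gcd as the known algorithm:

    from math import gcd

    def lcm(n, m):
        # LCM <=p GCD via eqn (10.8); dividing before multiplying keeps
        # intermediate values small in fixed-width languages.
        return n // gcd(n, m) * m

    assert lcm(12, 18) == 36 and lcm(4, 6) * gcd(4, 6) == 4 * 6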

10.3.1 GBW ≡p LBW

    S (GBW input)                 GBW(S)    S′ (flipped)                  LBW(S′)
    (a) ⟨1, 4, 4, 1, 2, 3, 3, 2⟩    T      (e) ⟨4, 1, 1, 4, 3, 2, 2, 3⟩     T
    (b) ⟨1, 4, 4, 2, 2, 1, 3, 3⟩    T      (f) ⟨4, 1, 1, 3, 3, 4, 2, 2⟩     T
    (c) ⟨4, 4, 1, 2, 3, 3, 1, 2⟩    F      (g) ⟨1, 1, 4, 3, 2, 2, 4, 3⟩     F
    (d) ⟨4, 4, 2, 1, 1, 3, 3, 2⟩    F      (h) ⟨1, 1, 3, 4, 4, 2, 2, 3⟩     F

Figure 10.3: GBW ≡p LBW illustration.

Recall the Greater between elements sequence validation Problem 3.6, or simply GBW,
defined on page 121, and the Less between elements sequence validation problem, or simply
LBW, considered as an exercise in Q 2.30 on page 88. They are a great dual problem example,
which can be shown visually. Consider the valid GBW sequences in Figure 10.3 (a) and (b)
and the invalid GBW sequences in Figure 10.3 (c) and (d). If the figures are flipped vertically,
the upside-down figures in Figure 10.3 (e) ∼ (h) are generated. The sequences in Figure 10.3
(e) and (f) are valid LBW sequences, and the sequences in Figure 10.3 (g) and (h) are invalid
LBW sequences. Let S′ be the flipped sequence of S. Each element, sx ∈ S, is converted
to s′x ∈ S′ such that s′x = n + 1 − sx . For example, if n = 4, (1 → 4), (2 → 3), (3 → 2), and
(4 → 1). S is a valid GBW sequence if and only if S′ is a valid LBW sequence, and vice
versa.

    isGBW(S1∼2n ) = isLBW(S′1∼2n )   ⇔ GBW ≤p LBW   (10.10)
    isLBW(S1∼2n ) = isGBW(S′1∼2n )   ⇔ LBW ≤p GBW   (10.11)

where ∀i ∈ {1, · · · , 2n}, (s′i ∈ S′) = n + 1 − (si ∈ S).

Since GBW ≤p LBW and LBW ≤p GBW, GBW ≡p LBW.

10.4 Reduction to Sorting


The sorting Problem 2.16 is often an essential ingredient for solving many other problems. In
this section, problems are considered that reduce to the sorting problem, for which an
O(n log n) time algorithm is known.

10.4.1 Order Statistics


Recall the definition of the k-th order statistics Problem 2.15 defined on page 59. The
problem definition itself embodies the reduction paradigm, as it invokes the sorting problem;
thus, k-th order statistics, or simply KOS, reduces to sorting, more specifically, KLG ≤p
Sorting and KSM ≤p Sorting. The following pseudo code is a reduction based algorithm
taken straight from the Problem 2.15 definition.
Algorithm 10.1. k-th order statistics by sorting

kthlarge(A1∼n , k)
  A′1∼n = sort(A1∼n , ‘desc’) .... 1
  return a′k ..................... 2

The computational time complexity of Algorithm 10.1 depends on the complexity of the
sorting algorithm, which is O(n log n). Since there is no need for an input transformation
and selecting a′k from a sorted list takes constant time, Algorithm 10.1 takes O(n log n).
Both the KLG and KSM versions of the KOS problem reduce to the sorting problem, where
A′1∼n and A″1∼n are the lists A1∼n sorted in ascending and descending order, respectively.

    KSM(A1∼n , k) = a′k = a″n−k+1   ⇔ KSM ≤p Sort   (10.12)
    KLG(A1∼n , k) = a′n−k+1 = a″k   ⇔ KLG ≤p Sort   (10.13)
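A Python sketch of Algorithm 10.1 together with the reductions in eqns (10.4) ∼ (10.6); the function names are illustrative:

    def kth_large(A, k):
        # KLG <=p Sort: sort in descending order, take the k-th element.
        return sorted(A, reverse=True)[k - 1]

    def kth_small(A, k):
        # KSM <=p KLG via eqn (10.6).
        return kth_large(A, len(A) - k + 1)

    A = [3, 8, 2, 5, 4]
    assert kth_large(A, 1) == 8         # MAX <=p KLG, eqn (10.4)
    assert kth_large(A, len(A)) == 2    # MIN <=p KLG, eqn (10.5)
    assert kth_small(A, 2) == 3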

10.4.2 Alternating Permutation


Consider the alternating permutation Problem 2.19 defined on page 65. Let UDP be the
up-down permutation problem. Although numerous algorithms to solve this problem are
enumerated in the table on page 736, the majority of students successfully and immediately
solved the problem using the reduction to sorting paradigm when UDP was given as an exam
question. Perhaps the reduction paradigm is one of the most natural and straightforward
paradigms.

[Figure: (a) outside-in order; (b) inside-out order; (c) divide progressive order; (d) divide
degressive order; (e) leap progressive order; (f) leap degressive order; (g) leap progressive
order by swapping (Algorithm 10.3 illustration): ⟨1, 2, 3, 4, 5, 6, 7, 8, 9, 10⟩ becomes
⟨1, 3, 2, 5, 4, 7, 6, 9, 8, 10⟩; (h) leap degressive order by swapping (Algorithm 10.4
illustration): ⟨10, 9, 8, 7, 6, 5, 4, 3, 2, 1⟩ becomes ⟨9, 10, 7, 8, 5, 6, 3, 4, 1, 2⟩.]

Figure 10.4: Up-down sequence orders from a sorted list.



Clearly, UDP ≤p Sorting. Since the input for UDP can be directly used for the sorting
problem, the only issue to resolve is how to transform a sorted list into an up-down
list. Just like greedy Algorithm 4.3, the minimum and maximum value elements can be
alternately selected. Selecting the minimum and maximum takes constant time since the
list is sorted. The outermost elements can be chosen first, and the next inner elements can
be selected iteratively, as illustrated in Figure 10.4 (a). The following algorithm is based on
the reduction to sorting paradigm:

Algorithm 10.2. Updown sequence by sorting

updown(A1∼n )
  A′1∼n = sort(A1∼n , asc) ........ 1
  for i = 1 to ⌊n/2⌋ .............. 2
    o2i−1 = a′i ................... 3
    o2i = a′n−i+1 ................. 4
  if n is odd, on = a′⌈n/2⌉ ....... 5
  return O1∼n ..................... 6

The computational time complexity of Algorithm 10.2 is clearly O(n log n) as the list
must be sorted first. Reading a sorted list in up-down order takes linear time. Algorithm 10.2
produces the up-down pattern given in Figure 10.4 (a), which is identical to the pattern of
greedy Algorithm 4.3.
There are different ways to read a sorted list in up-down order, such as inside-out order,
leap progressive order, divide progressive order, etc. Their orders are illustrated in Fig-
ure 10.4 (b) ∼ (f), where the starting and ending cells are marked as filled and dashed cells,
respectively. Pseudo codes for transforming these other patterns in Figure 10.4 (b) ∼ (f)
are left for exercises.
If the explicit up-down order is desired as an output, no extra space for the output
array is necessary for the leap progressive and leap degressive orders, as given in Figure 10.4
(e) and (f), respectively. Suppose an array is sorted in ascending order. If every even cell
element is swapped with its next odd cell element, the array becomes an up-down sequence,
as illustrated in Figure 10.4 (g). The pseudo code is given in Algorithm 10.3. The sorting in
line 1 takes O(n log n). The remaining code takes linear time and, thus, the computational
time complexity of Algorithm 10.3 is O(n log n).

Algorithm 10.3. UDP by asc. sorting

updown(A1∼n )
  A′1∼n = sort(A1∼n , asc) ....... 1
  for i = 1 ∼ ⌈n/2⌉ − 1 .......... 2
    swap(a′2i , a′2i+1 ) ......... 3
  return A′1∼n ................... 4

Algorithm 10.4. UDP by dsc. sorting

updown(A1∼n )
  A′1∼n = sort(A1∼n , dsc) ....... 1
  for i = 1 ∼ ⌊n/2⌋ .............. 2
    swap(a′2i−1 , a′2i ) ......... 3
  return A′1∼n ................... 4

Suppose the array is sorted in descending order. If every odd cell element is swapped
with its next even cell element, the array becomes an up-down sequence, as illustrated in
Figure 10.4 (h). The pseudo code is given in Algorithm 10.4.
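A Python rendering of Algorithm 10.3 (leap progressive order); the in-place swaps make any extra output array unnecessary:

    def updown(A):
        # UDP <=p Sort: sort ascending, then swap each pair at 1-indexed
        # positions (2,3), (4,5), ... as in Figure 10.4 (g).
        B = sorted(A)
        for i in range(1, len(B) - 1, 2):   # 0-indexed 1, 3, 5, ...
            B[i], B[i + 1] = B[i + 1], B[i]
        return B

    print(updown(list(range(1, 11))))
    # [1, 3, 2, 5, 4, 7, 6, 9, 8, 10]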

10.4.3 Element Uniqueness


A sorted list has been stated to be either in ascending or descending order in this book. A
sorted list in ascending order can be of two kinds: strictly increasing or non-decreasing. If
a list contains no duplicates, the sorted list in ascending order is said to be strictly increasing.
If a list does contain duplicates, it is said to be non-decreasing. A sorted list in descending
order can likewise be either strictly decreasing or non-increasing.
Consider the element uniqueness (CEU) Problem 2.12 defined on page 56, which checks
whether all elements in a list are unique. Suppose the input list is sorted. Then, the element
uniqueness can be checked trivially by scanning for any adjacent duplicates in linear time.
For example, while it is computationally harder to check the uniqueness in an unsorted list,
A = h3, 8, 2, 5, 4, 7, 5, 8i, it is computationally easier to check the uniqueness in a sorted
list, A0 = h2, 3, 4, 5, 5, 7, 8, 8i. Clearly, CEU ≤p Sort. This reduction based algorithm can
be stated as follows:
Algorithm 10.5. Checking element uniqueness by sorting

is_element_uniq(A1∼n )
  A′1∼n = sort(A1∼n ) .................. 1
  for i = 1 ∼ n − 1 .................... 2
    if a′i = a′i+1 , return false ...... 3
  return true .......................... 4

Sorting in line 1 takes O(n log n) and the output transformation, which scans
for adjacent duplicates, takes Θ(n). Hence, the computational time complexity of
Algorithm 10.5 is O(n log n). Note that this simple reduction based Algorithm 10.5 is
the best, i.e., much more efficient than the inductive programming or divide and conquer
Algorithms 2.15 and S-3.3, which took Θ(n²).
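Algorithm 10.5 in Python:

    def is_element_uniq(A):
        # CEU <=p Sort: O(n log n) sort plus a linear scan for
        # adjacent duplicates.
        B = sorted(A)
        return all(B[i] != B[i + 1] for i in range(len(B) - 1))

    assert is_element_uniq([3, 8, 2, 5, 4, 7]) is True
    assert is_element_uniq([3, 8, 2, 5, 4, 7, 5, 8]) is False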

10.4.4 Random Permutation

[Figure: (a) the input elements paired with random numbers ⟨0.38, 0.91, 0.12, 0.58, 0.47,
0.38, 0.75⟩; (b) after sorting by the random numbers, ⟨0.12, 0.38, 0.38, 0.47, 0.58, 0.75,
0.91⟩, the corresponding elements form a random output.]

Figure 10.5: Random permutation by sorting

Consider the random permutation Problem 2.21, or RPP in short, defined on page 67. To
shuffle a deck of n cards, one may assign a random number to each card. Next, sort the cards
10.4. REDUCTION TO SORTING 563

by assigned random numbers. Then, cards are randomly shuffled while the corresponding
random numbers are sorted. This random shuffling algorithm, which utilizes a sorting
algorithm, is stated below and is illustrated in Figure 10.5.
Algorithm 10.6. Random permutation by sorting

RPP(A1∼n )
  Let B1∼n = (A1∼n , R1∼n ) ....... 1
  for i = 1 ∼ n ................... 2
    ri = random() ................. 3
  B′1∼n = sort(B1∼n ) by R1∼n ..... 4
  return A′1∼n .................... 5

Clearly, the random permutation Problem 2.21 reduces to the sorting Problem 2.16:
RPP ≤p Sorting. The input transformation just assigns random numbers, which takes
linear time. The output transformation just extracts the corresponding part, A′1∼n , of the
list B′1∼n , which also takes linear time. The computational time complexity of the
shuffling-by-sort Algorithm 10.6 is O(n log n) because the sorting takes O(n log n), and
it needs extra Θ(n) space to store the random numbers.
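A Python sketch of Algorithm 10.6, with random.random supplying the keys R1∼n:

    import random

    def rpp(A):
        # RPP <=p Sort: pair each element with a random key, sort the
        # pairs by key, then discard the keys.
        B = [(random.random(), a) for a in A]   # input transformation
        B.sort()                                # O(n log n)
        return [a for (_, a) in B]              # output transformation

    print(rpp(list(range(10))))   # e.g., [4, 0, 7, 2, 9, 1, 5, 8, 3, 6]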

10.4.5 Sorting-complete
There are many more problems that can be reduced to the sorting problem. Indeed,
the sorting problem is an essential ingredient in designing algorithms for many problems.
Greedy algorithms which utilized the sorting algorithm in Chapter 4 can also be categorized
into the reduction to sorting paradigm. Such problems include the select k sum
maximization Problem S-2.20, the coin change problem, the fractional knapsack Problem 4.5,
the activity selection Problem 4.8, the task scheduling with minimum processors Problem 4.9,
the job scheduling with deadline Problem 4.12, etc. Let the set of all problems that can be
reduced to the sorting problem be called Sorting-complete.

[Figure: (a) the Sorting-complete set, containing UDP, KOS, RPP, CEU, MDN, MAX,
MIN, etc.; (b) the Sorting-complete reduction tree, rooted at Sorting with children UDP,
KOS, RPP, and CEU, and with MIN, MDN, and MAX as children of KOS.]

Figure 10.6: Sorting-complete and its reduction tree

Definition 10.2. px -complete is the set of all problems that can be reduced to px .

    px -complete = {py | py ≤p px }

‘py ≤p px ’ can also be equivalently stated as ‘py ∈ px -complete.’ For example, ‘UDP ∈
Sorting-complete’ since ‘UDP ≤p Sorting.’ Figure 10.6 (a) shows some problems that are in
the Sorting-complete set. Figure 10.6 (b) shows a reduction tree whose root is the sorting

problem where the arrow indicates the reduction relation; the child problem reduces to its
parent problem. All problems in child sub-trees reduce to the problem in the root node.
Any sub-problem under KOS as a root in the reduction tree in Figure 10.6 (b) also
reduces to the sorting problem by the transitivity of reduction.
Theorem 10.2. Transitivity of Reduction
If px ≤p py and py ≤p pz , then px ≤p pz .

[Figure: (a) px ≤p py , with input transformation I.T.x→y , algoy , and output
transformation O.T.y→x ; (b) py ≤p pz , with I.T.y→z , algoz , and O.T.z→y ; (c) the
transitivity process, where the reduction based (py ≤p pz ) algorithm of (b) substitutes for
algoy in (a), composing I.T.x→z from I.T.x→y followed by I.T.y→z , and O.T.z→x from
O.T.z→y followed by O.T.y→x ; (d) the resulting px ≤p pz .]

Figure 10.7: Transitivity of reduction

Figure 10.7 depicts Theorem 10.2. Figure 10.7 (a) and (b) show px ≤p py and py ≤p pz
relations. If the reduction (py ≤p pz ) based algorithm in Figure 10.7 (b) is placed as the
algoy part in Figure 10.7 (a), the diagram in Figure 10.7 (c) is derived. Finally, the two
steps of the input transformations, from px to py and then to pz , can be considered as
the input transformation from px to pz in Figure 10.7 (d). Similarly, the two steps of the
output transformations, from pz to py and then to px , can be considered as the output
transformation from pz to px . Clearly, px ≤p pz .
Corollary 10.1. If px ≤p py , px -complete ⊆ py -complete.
Proof. ∀pz ∈ px -complete, pz ≤p px . By the transitivity of reduction, pz ≤p py and thus
pz ∈ py -complete. □
For example, KOS-complete ⊆ Sorting-complete since KOS ≤p Sorting according to
Corollary 10.1.

10.5 Reduction to Graph Problems

[Figure: (a) the map of Königsberg with land regions N, M, B, S and seven bridges; (b) the
graph representation with a vertex per region and an edge per bridge.]

Figure 10.8: Graph representation of the seven bridge problem

In 1736, Euler presented a resolution to the historically famous problem known as the
‘Seven Bridges of Königsberg.’ As depicted in Figure 10.8 (a), the problem is whether it is
possible to cross all seven bridges exactly once. Instead of working on the map in Figure 10.8
(a), Euler visualized regions and bridges abstractly as vertices and edges, as depicted in
Figure 10.8 (b). If one can find a path visiting every edge exactly once in a connected graph,
such a path is called an Euler or Eulerian path. Then, using the parity of the degree of each
vertex, Euler derived a theorem that if a connected graph has exactly two vertices of odd
degree, then there is at least one Euler path; otherwise, no Euler path exists.
Euler’s original paper in [59] gave birth to graph theory [149, p 195], and the graph in
Figure 10.8 (b) became one of the icons of graph theory. His vision is clearly the heart of the
reduction to graph problems. Numerous problems can be reduced to graph-related problems.
Once graph algorithms are known, many problems which may seem irrelevant to graphs at
first glance can be solved by graph algorithms. This chapter introduces such problems
that reduce to graph-related problems.

10.5.1 NPP-complete
Many problems reduce to the number of paths Problem 5.14, or simply NPP, defined on
page 258. One such problem is the binomial coefficient Problem 6.9, or BNC in short,
defined on page 319. It takes two integers, (n, k), as inputs and must return C(n, k). So as to
utilize NPP, the input (n, k) must be transformed into a directed acyclic graph, which is
the input to NPP. As depicted in Figure 10.9 (b), the adjacent list for (n − k + 1) × (k + 1)
vertices can be generated from the input (n, k) according to the recurrence relation in
eqn (6.17).

[Leonhard Euler (1707-1783) was a Swiss mathematician. Among numerous important
and influential contributions, he is best known for resolving the problem known as the
Seven Bridges of Königsberg. His name also appears in this textbook for Euler zigzag
numbers, Eulerian numbers, and Eulerian numbers of the second kind.]

[Figure: (a) Pascal’s triangle; (b) the adjacent list of the grid graph, e.g., v(0,0) → {},
v(1,0) → {v(0,0) }, · · · , v(4,1) → {v(3,0) , v(3,1) }, v(4,2) → {v(3,1) , v(3,2) },
v(5,2) → {v(4,1) , v(4,2) }; (c) NPP on the grid graph, where the path counts reproduce
Pascal’s triangle.]

Figure 10.9: Binomial coefficient problem reduces to number of path problem

This input transformation takes Θ(kn). Let’s denote this special directed acyclic
graph, also called a grid graph, G. NPP(G, v(0,0) ) returns a table, T , of the number of paths
from v(0,0) to all other vertices; T (v(n,k) ) contains C(n, k). In all, BNC ≤p NPP, as illustrated
in Figure 10.9, and a pseudo code is stated as follows:

Algorithm 10.7. Binomial coefficient by number of paths

BNC2NPP(n, k)
  Declare an adjacent list Lv(0,0)∼v(n,k) of size (n − k + 1) × (k + 1) ... 1
  for i = 1 ∼ k, L[v(i,i) ] = {v(i−1,i−1) } ......... 2
  for i = 1 ∼ n − k, L[v(i,0) ] = {v(i−1,0) } ....... 3
  for i = 1 ∼ n − k ................................. 4
    for j = 1 ∼ k ................................... 5
      L[v(i+j,j) ] = {v(i+j−1,j−1) , v(i+j−1,j) } ... 6
  S = NPP(v(0,0) , L) ............................... 7
  return S[v(n,k) ] ................................. 8

As the number of arcs in the grid graph is Θ(kn), the computational space complexity
of Algorithm 10.7 is Θ(kn). Note that |V | = Θ(kn) and |E| = Θ(kn) since the maximum
in-degree of any vertex is constant: two. Since NPP can be solved in O(|V | + |E|), the
computational time complexity of Algorithm 10.7 is Θ(kn) as well.
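A compact Python sketch of Algorithm 10.7; the helper npp is a stand-in dynamic program for the book's NPP algorithm, operating on a predecessor list whose keys are inserted in topological order:

    def npp(source, L):
        # Number of paths from `source` to every vertex of a DAG; L maps
        # each vertex to its predecessor list, keys in topological order.
        S = {}
        for v in L:
            S[v] = 1 if v == source else sum(S[u] for u in L[v])
        return S

    def bnc(n, k):
        # BNC <=p NPP: C(n, k) is the number of lattice paths from
        # v(0,0) to v(n,k) on the grid graph of Figure 10.9.
        L = {}
        for i in range(n + 1):
            for j in range(max(0, i - (n - k)), min(i, k) + 1):
                L[(i, j)] = [p for p in ((i - 1, j - 1), (i - 1, j))
                             if p in L]
        return npp((0, 0), L)[(n, k)]

    assert bnc(5, 2) == 10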
Another problem that can be reduced to NPP is the winning ways Problem 5.7, or WWP
in short, defined on page 239. It takes n and P1∼k as inputs and must return the number
of ways to get n points with a combination of available point values, px ∈ P1∼k , with
repetitions allowed. So as to utilize NPP, the inputs must be transformed into a directed
acyclic graph. First, create n + 1 vertices whose values are 0 ∼ n, and then link two
vertices whose difference is some px ∈ P1∼k . For a toy example of n = 14 and P = ⟨3, 7⟩, the
constructed directed acyclic graph is given in Figure 10.10 (a). This input transformation
takes Θ(kn) because each vertex’s out-degree is at most k. If an algorithm for NPP is used, it
returns a table, S, of the number of paths from v0 to all other vertices. S[vn ] contains the
number of ways to obtain n points, which is the desired output for WWP. Hence, WWP
≤p NPP and a pseudo code is given as follows:

[Figure: (a) P = ⟨3, 7⟩ winning way: the number of paths from v0 to vi counts the ways to
reach i points; (b) P = ⟨1, 2⟩ winning way: the path counts 1, 1, 2, 3, 5, 8, 13, 21, 34, 55,
89, 144 reproduce the Fibonacci numbers, so the count at vn is the (n + 1)th Fibonacci
number.]

Figure 10.10: Winning way problem reduces to number of path problem

Algorithm 10.8. Winning ways by number of paths

WWP2NPP(n, P1∼k )
  Declare an adjacent list L0∼n ....... 1
  for i = 1 ∼ n ....................... 2
    for j = 1 ∼ k ..................... 3
      if i − pj ≥ 0, .................. 4
        L[i] ⊃ {i − pj } .............. 5
  S = NPP(0, L) ....................... 6
  return S[n] ......................... 7

As the number of arcs in the graph is Θ(kn), the computational space complexity of
Algorithm 10.8 is Θ(kn). Lines 1 ∼ 5 take Θ(kn). If the strong inductive programming
Algorithm 5.27, which takes Θ(kn), is used in line 6, the computational time complexity of
Algorithm 10.8 is Θ(kn).
Imagine that there are only two kinds of missiles, with one and two point values, in the
winning ways Problem 5.7, as depicted in Figure 10.10 (b). It is reminiscent of the nth
Fibonacci number Problem 5.8, or FIB in short, defined on page 246. The reduction
relationship from the nth Fibonacci Problem 5.8 to the winning ways Problem 5.7 is given
in eqn (10.14).

    FIB(n) = 0 if n ≤ 0, and WWP(n − 1, ⟨1, 2⟩) otherwise   ⇔ FIB ≤p WWP   (10.14)

Since FIB ≤p WWP and WWP ≤p NPP, FIB ≤p NPP by the transitivity of reduction
(Theorem 10.2). The reduction relations, FIB ≤p WWP and FIB ≤p NPP, are depicted in
Figure 10.10 (b) for (WWP(11, ⟨1, 2⟩) = 144) = (F12 = 144). A different DAG for a direct
reduction, FIB ≤p NPP, can be designed as shown in Figure 10.11 (a). Let the source node
be 1; the node 0 is not connected to the source. The rest of the nodes, v = 2 ∼ n, have two
in-coming arcs, from v − 1 and v − 2. Then the nth Fibonacci number is the number of paths
from 1 to n on the DAG. A pseudo code is stated in Algorithm 10.9, whose computational
time complexity is Θ(n).

[Figure: (a) FIB ≤p NPP; (b) LUC ≤p NPP, using a source node s and a dummy node d;
(c) PLN ≤p NPP; (d) PLL ≤p NPP; (e) JCN ≤p NPP; (f) JCL ≤p NPP. In (c) ∼ (f),
multi-graph arcs (two parallel arcs between a pair of nodes) realize the coefficient 2 in the
respective recurrences; the path counts at the nodes reproduce the Pell, Pell-Lucas,
Jacobsthal, and Jacobsthal-Lucas numbers.]

Figure 10.11: Fibonacci related problems reduce to number of path problem

Algorithm 10.9. Fibonacci by NPP

FIB2NPP(n)
  Declare an adjacent list L0∼n ....... 1
  L[0] = {}; L[1] = {0} ............... 2
  for i = 2 ∼ n ....................... 3
    L[i] = {i − 1, i − 2} ............. 4
  S = NPP(1, L) ....................... 5
  return S[n] ......................... 6

Algorithm 10.10. Lucas # by NPP

LUC2NPP(n)
  Declare an adjacent list Ls,d,0∼n ... 1
  L[d] = {s}; L[0] = {s, d}; L[1] = {s} ... 2
  for i = 2 ∼ n ....................... 3
    L[i] = {i − 1, i − 2} ............. 4
  S = NPP(s, L) ....................... 5
  return S[n] ......................... 6

Another problem that reduces to NPP is the Lucas number problem, or simply LUC, defined
in eqn (5.63) on page 278. By adding a source node and one dummy node, the Lucas
numbers from 0 to n can be computed as the numbers of paths from the source node, s, as
depicted in Figure 10.11 (b). Clearly, LUC reduces to NPP and a pseudo code is given in
Algorithm 10.10, whose computational time complexity is Θ(n).

Other problems that are closely related to Fibonacci and reduce to NPP include the Pell
number (PLN) in eqn (5.67), the Pell-Lucas number (PLL) in eqn (5.70), the Jacobsthal
number (JCN) in eqn (5.73), and the Jacobsthal-Lucas number (JCL) in eqn (5.78).
Multi-graphs, which can have multiple edges from one node to another, can be utilized to
reduce these problems to NPP, as illustrated in Figure 10.11 (c) ∼ (f), respectively. Pseudo
codes are stated in Algorithms 10.11 ∼ 10.14, correspondingly.

Algorithm 10.11. Pell # by NPP

PLN2NPP(n)
  Declare an adjacent list L0∼n ....... 1
  L[0] = {}; L[1] = {} ................ 2
  for i = 2 ∼ n ....................... 3
    L[i] = {i − 1, i − 1, i − 2} ...... 4
  S = NPP(1, L) ....................... 5
  return S[n] ......................... 6

Algorithm 10.12. Pell-Lucas by NPP

PLL2NPP(n)
  Declare an adjacent list Ls,0∼n ..... 1
  L[s] = {}; L[0] = {s, s}; L[1] = {s, s} ... 2
  for i = 2 ∼ n ....................... 3
    L[i] = {i − 1, i − 1, i − 2} ...... 4
  S = NPP(s, L) ....................... 5
  return S[n] ......................... 6

Algorithm 10.13. Jacobsthal by NPP

JCN2NPP(n)
  Declare an adjacent list L0∼n ....... 1
  L[0] = {}; L[1] = {} ................ 2
  for i = 2 ∼ n ....................... 3
    L[i] = {i − 1, i − 2, i − 2} ...... 4
  S = NPP(1, L) ....................... 5
  return S[n] ......................... 6

Algorithm 10.14. Jac.-Luc. by NPP

JCL2NPP(n)
  Declare an adjacent list Ls,0∼n ..... 1
  L[s] = {}; L[0] = {s, s}; L[1] = {s} ... 2
  for i = 2 ∼ n ....................... 3
    L[i] = {i − 1, i − 2, i − 2} ...... 4
  S = NPP(s, L) ....................... 5
  return S[n] ......................... 6
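The same path-counting helper runs all of these reductions; listing a predecessor twice realizes a doubled (multi-graph) arc. A Python sketch of Algorithms 10.9 and 10.11, with illustrative function names and the npp helper repeated for self-containment:

    def npp(source, L):
        # Path-count DP over a (multi-)DAG given as a predecessor list;
        # a duplicated predecessor encodes two parallel arcs.
        S = {}
        for v in L:
            S[v] = 1 if v == source else sum(S.get(u, 0) for u in L[v])
        return S

    def fib(n):
        # FIB <=p NPP (Algorithm 10.9): F(n) = number of paths from 1 to n.
        L = {0: [], 1: [0]}
        for i in range(2, n + 1):
            L[i] = [i - 1, i - 2]
        return npp(1, L)[n]

    def pell(n):
        # PLN <=p NPP (Algorithm 10.11): the doubled arc from i-1 gives
        # P(n) = 2*P(n-1) + P(n-2).
        L = {0: [], 1: []}
        for i in range(2, n + 1):
            L[i] = [i - 1, i - 1, i - 2]
        return npp(1, L)[n]

    assert [fib(i) for i in range(1, 8)] == [1, 1, 2, 3, 5, 8, 13]
    assert [pell(i) for i in range(1, 8)] == [1, 2, 5, 12, 29, 70, 169]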

10.5.2 More Sorting Algorithms


Numerous algorithms based on various design paradigms for the sorting Problem 2.16
defined on page 60 have been covered in this book, as enumerated in the table on page 736.
A more extensive and exhaustive list of sorting algorithms can be found in [103]. Here,
albeit impractical, several other sorting algorithms based on reduction to graph problems
are introduced.
One such graph problem is the rooted minimum spanning tree problem, or rMST in short. To
transform a sample unsorted input list, A = ⟨3, 2, 0, 4, 1⟩, first find the minimum element
as a root. For each pair of elements, ai and aj , in the list, create an arc with weight
w(ai , aj ) = aj − ai as long as aj > ai . This process gives the weighted adjacent matrix
given in Figure 10.12 (b). A pseudo code that transforms an unsorted list into a weighted
adjacent matrix and then invokes an algorithm for the rooted minimum spanning tree
problem is stated as follows:

[Figure: (a) input for sort: ⟨3, 2, 0, 4, 1⟩; (b) input for rMST, the weighted adjacent matrix
(rows and columns in order 3, 2, 0, 4, 1):

        3  2  0  4  1
    3 [ 0  0  0  1  0 ]
    2 [ 1  0  0  2  0 ]
    0 [ 3  2  0  4  1 ]
    4 [ 0  0  0  0  0 ]
    1 [ 2  1  0  3  0 ]

(c) output for rMST: the path-shaped spanning tree 0 → 1 → 2 → 3 → 4; (d) output for
sort: ⟨0, 1, 2, 3, 4⟩.]

Figure 10.12: Sorting problem reduces to minimum spanning tree problem



Algorithm 10.15. Sorting by rooted MST

RD2rMST-sort(A1∼n )
  a′1 = argmin(A1∼n ) ................ 1
  Mn×n = 0’s initially ............... 2
  for i = 1 to n − 1 ................. 3
    for j = i + 1 to n ............... 4
      if ai > aj , Mj,i = ai − aj .... 5
      otherwise, Mi,j = aj − ai ...... 6
  Ta1∼an = rMST(a′1 , Mn×n ) ......... 7
  c = argmax(A1∼n ) .................. 8
  for i = n down to 2 ................ 9
    a′i = c .......................... 10
    c = par(a′i ) .................... 11
  return A′1∼n ....................... 12

Lines 1 to 6 are the input transformation, which takes Θ(n²). Line 7 invokes a
rooted minimum spanning tree algorithm, such as Algorithm 5.30 stated on page 264, which
takes Θ(n²). The output of rMST is a table of the parent node for each node. Finally, lines
8 to 11 are the output transformation, which takes linear time by following parents from the
maximum element. Hence, the computational time complexity of Algorithm 10.15 is
Θ(n² + n² + n) = Θ(n²).
Line 7 in Algorithm 10.15 can be replaced by an algorithm for the traveling salesman
Problem 4.16, which is a classical problem in graph theory [24]. The traveling salesman
problem, or TSP in short, is to find the shortest possible path that visits each vertex
exactly once in a given weighted directed graph. Clearly, an algorithm for TSP returns a
sorted list, as shown in Figure 10.12 (b). A straightforward exhaustive algorithm takes O(n!)
for TSP. Hence, when TSP is used to solve the sorting problem, the RD2TSP-sort algorithm
would take Θ(n²) + O(n!) + Θ(n) = O(n!). Certainly, this algorithm should not be used.
Another graph-related problem to which the sorting Problem 2.16 reduces is the shortest
path cost Problem 4.15, or simply SPC, defined on page 188. If line 7 in Algorithm 10.15 is
replaced by an algorithm for SPC, however, the sorting problem cannot be solved. As
shown in Figure 10.13 (a), there are 8 different paths from the minimum, 0, to the maximum,
4, with the minimum cost, 4: {⟨0, 4⟩, ⟨0, 1, 4⟩, ⟨0, 2, 4⟩, ⟨0, 3, 4⟩, ⟨0, 1, 2, 4⟩, ⟨0, 1, 3, 4⟩,
⟨0, 2, 3, 4⟩, ⟨0, 1, 2, 3, 4⟩}. Hence, another input transformation technique is necessary. For
each pair of elements, ai and aj , in the list, create an arc with weight w(ai , aj ) = (aj − ai )²
as long as aj > ai .

[Figure: (a) output of SPC on the absolute difference graph: many shortest paths of cost 4
exist from 0 to 4; (b) input transformation by squared differences, the weighted adjacent
matrix (rows and columns in order 3, 2, 0, 4, 1):

        3  2  0  4  1
    3 [ 0  0  0  1  0 ]
    2 [ 1  0  0  4  0 ]
    0 [ 9  4  0 16  1 ]
    4 [ 0  0  0  0  0 ]
    1 [ 4  1  0  9  0 ]

(c) output of SPC on the squared difference graph: the unique shortest path
0 → 1 → 2 → 3 → 4 is the sorted list.]

Figure 10.13: Sorting problem reduces to shortest path cost problem

This process results in the weighted adjacent matrix given in Figure 10.13 (b).
In this way, only the shortest path, which visits the elements in sorted order, can be found, as
illustrated in Figure 10.13 (c). Its pseudo code is stated below. Let’s assume that a greedy
algorithm for SPC outputs a table, T , in order of its inclusion into the solution set.

Algorithm 10.16. Sorting by SPC

RD2SPC-sort(A1∼n )
  r = argmin(A1∼n ) ..................... 1
  Mn×n = 0’s initially .................. 2
  for i = 1 to n − 1 .................... 3
    for j = i + 1 to n .................. 4
      if ai > aj , Mj,i = (ai − aj )² ... 5
      otherwise, Mi,j = (aj − ai )² ..... 6
  T = SPC(r, Mn×n ) ..................... 7
  return T1∼n without costs ............. 8

Both computational time and space complexities of Algorithm 10.16 are Θ(n²), as the
n × n adjacent matrix must be constructed.
The following Lemma 10.1 is the essence of the correctness of the reduction based
Algorithm 10.16.

Lemma 10.1. If x + y = z, then x² + y² < z² for positive numbers x, y, z > 0.

Proof. (x + y)² = z² ⇒ x² + y² + 2xy = z². Since 2xy > 0, x² + y² < z². □

Indeed, there are infinitely many sorting algorithms stemming from reduction based
algorithms; this is shown trivially later in Chapter 11.

10.5.3 Critical Path Problem to Longest Path Cost Problem

[Figure: (a) input for CPP, the adjacent list with task times W:
    v1 (1) → {v2 , v3 , v5 }    v2 (2) → {v3 , v4 }    v3 (4) → {v4 , v5 , v6 , v7 }
    v4 (2) → {v6 }    v5 (3) → {v7 }    v6 (3) → {v7 }    v7 (2) → {}
(b) input for LPC, the adjacent weighted arcs:
    v1 → {(v2 , 1), (v3 , 1), (v5 , 1)}    v2 → {(v3 , 2), (v4 , 2)}
    v3 → {(v4 , 4), (v5 , 4), (v6 , 4), (v7 , 4)}    v4 → {(v6 , 2)}
    v5 → {(v7 , 3)}    v6 → {(v7 , 3)}    v7 → {}
(c) output for CPP; (d) output for LPC, which differs from (c) only by the last task’s
time.]

Figure 10.14: Critical path problem reduces to longest path cost problem

Consider the critical path Problem 5.19, or simply CPP, defined on page 267. Imagine
that an algorithm for the longest path cost problem, or LPC in short, considered on page 290
is given. Instead of devising an algorithm using other paradigms, let’s utilize the existing
algorithm for LPC to solve CPP. A toy example in Figure 10.14 may provide insights. The
input for CPP in Figure 10.14 (a) can be trivially transformed to the input for LPC, as
given in Figure 10.14 (b). LPC requires a weighted DAG. The time that a task, vx , takes
becomes the weight of every outgoing arc from vx ; the terminal nodes have no outgoing
arcs. Next, a given algorithm for LPC is invoked with the constructed wDAG as input and
returns its output. The output for LPC is almost identical to that for
CPP, as shown in Figure 10.14 (c) and (d). The last node, v7 , is not counted in LPC and,
thus, it must be added to the final output. Clearly, CPP ≤p LPC. This reduction based
algorithm for CPP is stated as follows:
Algorithm 10.17. Critical path using longest path cost

RDrLPP-cpm(V1∼n , w(V1∼n ), A1∼n , v1 , v7 )
  for i = 1 to n ........................ 1
    for each vx ∈ Ai .................... 2
      A′i = A′i ∪ {(vx , w(vi ))} ....... 3
  out = rLPC(A′1∼n , v1 ) ............... 4
  return out(v7 ) + w(v7 ) .............. 5

Lines 1 to 3 are the input transformation, which takes Θ(|E|). Line 4 invokes the longest
path cost algorithm with its root v1 . Finally, line 5 is the output transformation,
which takes constant time. Since |E| = O(n²), the computational time complexity of
Algorithm 10.17 is O(n²).

10.5.4 Longest Increasing Sub-sequence to LPL

[Figure: (a) input: the Auld Lang Syne music note sequence in 4/4 time; (b) output: a
longest increasing subsequence of the notes, of length 6.]

Figure 10.15: Music application of longest increasing subsequence

The longest increasing sub-sequence problem, or LIS in short, which is to find the longest
sub-sequence of A such that ai < aj for every i < j within the sub-sequence, was considered
in [150]. The output sub-sequence need not be consecutive, but it must be strictly increasing
and its length must be maximum. For example, a music application is given in Figure 10.15,
where the length of a longest sub-sequence of the music note sequence is 6. The problem is
formally stated as follows:
Problem 10.1. Longest Increasing Subsequence
Input: A sequence A of n quantifiable elements
Output: X = ⟨x1 , x2 , · · · , xn ⟩ such that

    maximize Σ_{i=1}^{n} xi
    subject to ai < aj if i < j and xi = xj = 1
    where xi = 0 or 1

[Figure: (a) input for LIS: ⟨3, 4, 2, 5, 1⟩; (b) input for LPL, the adjacency matrix over
{s, 3, 4, 2, 5, 1, t}:

        s  3  4  2  5  1  t
    s [ 0  1  1  1  1  1  0 ]
    3 [ 0  0  1  0  1  0  1 ]
    4 [ 0  0  0  0  1  0  1 ]
    2 [ 0  0  0  0  1  0  1 ]
    5 [ 0  0  0  0  0  0  1 ]
    1 [ 0  0  0  0  0  0  1 ]
    t [ 0  0  0  0  0  0  0 ]

(c) output for LPL: the longest path in the DAG is s → 3 → 4 → 5 → t; (d) output for
LIS: ⟨3, 4, 5⟩.]

Figure 10.16: LIS reduces to LPL

The longest increasing sub-sequence problem reduces to the rooted longest path length
problem considered on page 290, i.e., LIS ≤p LPL. To transform the input list, A1∼n , into
a DAG, add a source node, s, and a target node, t, at the beginning and end of the list,
respectively. Next, prepare an adjacency matrix, as given in Figure 10.16 (b). Each element
in the list is a node. There are arcs from s to all elements in the input list and to t from all
elements in the input list. There is an arc from ai to aj if ai appears before aj and ai < aj .
The DAG of this adjacency matrix is given in Figure 10.16 (c).
Next, an algorithm for the rooted longest path length problem is invoked. One
algorithm based on the strong inductive programming paradigm was considered on page 290.
The output for LPL can be trivially transformed into the desired output for the longest
increasing sub-sequence problem by removing the dummy s and t nodes. A pseudo code is
given as follows:
Algorithm 10.18. Longest increasing sub-sequence by longest path
RDLPP-LIS(A1∼n )
  M(n+2)×(n+2) = 0’s initially ........ 1
  for i = 2 to n + 1, M1,i = 1 ........ 2
  for i = 2 to n + 1, Mi,n+2 = 1 ...... 3
  for i = 1 to n − 1 .................. 4
    for j = i + 1 to n ................ 5
      if ai < aj , Mi+1,j+1 = 1 ....... 6
  S = LPL(s, M(n+2)×(n+2) ) ........... 7
  return S2∼|S|−1 and/or |S| − 2 ...... 8
Lines 1 to 6 are the input transformation, which takes Θ(n²). Line 7 invokes the rooted
longest path length algorithm, whose source node is s, which takes O(n²). Finally, line 8 is
the output transformation, which takes constant time. Hence, the computational time
complexity of Algorithm 10.18 is Θ(n²).
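With the dummy s and t nodes made implicit, the reduction collapses to the following Python sketch; length[j] is the longest path length ending at vertex j, and pred backtracks the path:

    def lis(A):
        # LIS <=p LPL: arc i -> j whenever i < j and A[i] < A[j];
        # a longest path in this DAG is a longest increasing sub-sequence.
        n = len(A)
        length = [1] * n        # longest path ending at each vertex
        pred = [-1] * n
        for j in range(n):
            for i in range(j):
                if A[i] < A[j] and length[i] + 1 > length[j]:
                    length[j], pred[j] = length[i] + 1, i
        j = max(range(n), key=lambda v: length[v])
        out = []
        while j != -1:
            out.append(A[j])
            j = pred[j]
        return out[::-1]

    assert lis([3, 4, 2, 5, 1]) == [3, 4, 5]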

10.5.5 Activity Selection Problem to LPC


Consider the activity selection Problem 4.8, or simply ASP, defined on page 170. To
devise a reduction based algorithm, consider the longest path length problem, or LPL in
short, considered on page 290.

[Figure: (a) input: activities i = 1 ∼ 5 with start times si = 1, 2, 3, 4, 5 and finish times
fi = 3, 4, 5, 6, 7; (b) input for LPL, the adjacency matrix over {s, 1, 2, 3, 4, 5, t}:

        s  1  2  3  4  5  t
    s [ 0  1  1  1  1  1  0 ]
    1 [ 0  0  0  1  1  1  1 ]
    2 [ 0  0  0  0  1  1  1 ]
    3 [ 0  0  0  0  0  1  1 ]
    4 [ 0  0  0  0  0  0  1 ]
    5 [ 0  0  0  0  0  0  1 ]
    t [ 0  0  0  0  0  0  0 ]

(c) output: the longest path yields a maximum set of compatible activities.]

Figure 10.17: Activity selection problem reduces to longest path problem

To transform the input activity list into a DAG, add a source node, s, and a target node, t.
Next, prepare an adjacency matrix as given in Figure 10.17 (b). Each activity is a node.
There are arcs from s to all activities and from all activities to t. For every pair of activities,
(ai , aj ), in the list, create an arc (ai , aj ) as long as fi ≤ sj , or (aj , ai ) as long as fj ≤ si .
Note that an arc means ‘compatible.’ This process results in the directed adjacency matrix
given in Figure 10.17 (b). A maximum number of compatible activities can be trivially
derived from the longest path found by the LPL algorithm. Clearly, ASP ≤p LPL, as
depicted in Figure 10.17. A pseudo code for this reduction based algorithm is stated as
follows:
is stated as follows:
Algorithm 10.19. Activity selection by longest path length
ASP-RD2LPP(A1∼n )
  M(n+2)×(n+2) = 0’s initially ........ 1
  for i = 2 to n + 1, M1,i = 1 ........ 2
  for i = 2 to n + 1, Mi,n+2 = 1 ...... 3
  for i = 1 to n ...................... 4
    for j = 1 to n .................... 5
      if fi ≤ sj , Mi+1,j+1 = 1 ....... 6
  S = LPL(s, M(n+2)×(n+2) ) ........... 7
  return S2∼|S|−1 ..................... 8

Lines 1 to 6 are the input transformation, which takes Θ(n²). Line 7 invokes a longest
path length algorithm, which takes O(n²). Finally, line 8 is the output transformation,
which takes constant time. Hence, the computational time complexity of Algorithm 10.19
is Θ(n²).
While the activity selection problem reduces to the longest path length problem, the
weighted activity selection Problem 5.6, or simply wASP, defined on page 231 naturally

[Figure: (a) input/output for wASP: activities i = 1 ∼ 5 with si = 1, 2, 3, 4, 5,
fi = 3, 4, 5, 6, 7, profits pi = 2, 5, 1, 5, 2, and solution xi = 0, 1, 0, 1, 0; (b) input for LPC,
the weighted adjacency matrix over {s, 1, 2, 3, 4, 5, t}, where ‘-’ means no arc:

        s  1  2  3  4  5  t
    s [ -  0  0  0  0  0  - ]
    1 [ -  -  -  2  2  2  2 ]
    2 [ -  -  -  -  5  5  5 ]
    3 [ -  -  -  -  -  1  1 ]
    4 [ -  -  -  -  -  -  5 ]
    5 [ -  -  -  -  -  -  2 ]
    t [ -  -  -  -  -  -  - ]

(c) output for LPC: the longest path s → 2 → 4 → t with cost 10 selects activities 2
and 4.]

Figure 10.18: wASP ≤p LPC

reduces to the longest path cost problem: wASP ≤p LPC. Activities are nodes and the
weight of the outgoing arc for each node corresponds to the profit of the activity, as depicted
in Figure 10.18. A pseudo code for this reduction based algorithm is stated as follows:

Algorithm 10.20. Weighted Activity selection by longest path cost

wASP-RD2LPC(A1∼n )
  M(n+2)×(n+2) = −∞’s initially ....... 1
  for i = 2 to n + 1, M1,i = 0 ........ 2
  for i = 2 to n + 1, Mi,n+2 = pi−1 ... 3
  for i = 1 to n ...................... 4
    for j = 1 to n .................... 5
      if fi ≤ sj , Mi+1,j+1 = pi ...... 6
  S = LPC(s, M(n+2)×(n+2) ) ........... 7
  return S2∼|S|−1 ..................... 8

‘-’ in the adjacent matrix in Figure 10.18 (b) and ‘−∞’ in Algorithm 10.20 mean that
there is no arc. The computational time complexity of Algorithm 10.20 is the same as that
of Algorithm 10.19.

10.5.6 String Matching to LPC


Another problem that reduces to the longest path cost problem is the longest common
sub-sequence Problem 6.5, or simply LCS, defined on page 311. Figure 10.19 illustrates LCS
≤p LPC. First, the two string inputs of LCS, as shown in Figure 10.19 (a), are transformed
into an input to the LPC. The ith element in A1∼n and the jth element in B1∼m form the (i×
m+j)th vertex. All vertices form a grid weighted graph with zero weight. All vertices except
for the vertices on the top-most row and left-most column have two in-coming arcs with
their weight value equal to zero: w(v(i−1)m+j , vi×m+j ) = 0 and w(v(i×m+j−1 , vi×m+j ) = 0.
Additional arcs are added with the weight value equal to one only for (i × m + j)th vertices
where ai = bj : w(v(i−1)m+j−1 , vi×m+j ) = 1. The resulting graph is a weighted DAG, as
shown in Figure 10.19 (c), where an algorithm for LPC can be applied. The output for
the longest path cost from the vertex v0 to v(n+1)×(m+1)−1 is the solution for the longest
common sub-sequence between two strings. A pseudo code is stated as follows:

[Figure: (a) input for LCS: A1∼n = “TA” (n = 2) and B1∼m = “CAG” (m = 3); (b) input
transformation to LPC, the adjacent list with weighted arcs, e.g., v0 → {}, v1 → {(v0 , 0)},
· · · , v10 → {(v5 , 1), (v6 , 0), (v9 , 0)}, v11 → {(v7 , 0), (v10 , 0)}; (c) output of LPC on the
(n + 1) × (m + 1) grid DAG, where the diagonal arcs of weight 1 correspond to matching
characters; (d) output transformation to LCS, the familiar LCS dynamic programming
table.]

Figure 10.19: LCS ≤p LPC

Algorithm 10.21. Longest common sub-sequence by longest path

LCS-RD2LPC(A1∼n , B1∼m )
  Declare an adjacent list L0∼(n+1)×(m+1)−1 ........................... 1
  for j = 1 ∼ m, Lj ⊃ {(j − 1, 0)} .................................... 2
  for i = 1 ∼ n, Li×(m+1) ⊃ {((i − 1) × (m + 1), 0)} .................. 3
  for i = 1 ∼ n ....................................................... 4
    for j = 1 ∼ m ..................................................... 5
      Li×(m+1)+j ⊃ {(i × (m + 1) + j − 1, 0), ((i − 1) × (m + 1) + j, 0)} ... 6
      if ai = bj , Li×(m+1)+j ⊃ {((i − 1) × (m + 1) + j − 1, 1)} ...... 7
  S = LPC(0, L) ....................................................... 8
  return S(n+1)×(m+1)−1 ............................................... 9

The computational time complexity of Algorithm 10.21 is Θ(n × m). The input
transformation in lines 1 ∼ 7 takes Θ(n × m), and the output transformation in line 9 takes
constant time. Since the numbers of vertices and edges are Θ(n × m), LPC in line 8 takes
Θ(n × m) as well.
While the longest common sub-sequence problem reduces to LPC, the related problems,
such as the edit distance with indels Problem 6.6 defined on page 313 and the Levenshtein
distance Problem 6.7 defined on page 314, reduce to the shortest path length and shortest
path cost problems with quite similar input transformations. They are left for exercises.

10.5.7 SPC-complete
Consider the shortest path cost Problem 4.15, or simply SPC, defined on page 188.
Numerous problems reduce to SPC. One such problem is the shortest path cost on a DAG
Problem 5.16, or SPC-dag in short. A special case problem always reduces to the general
case problem. A DAG is a special directed graph without a directed cycle and, thus, SPC-
dag ≤p SPC. Another example of a special case reduction is SPL ≤p SPC: a regular graph
without weights can be considered a weighted graph whose edge weights are all ones.

[Figure: (a) a sample weighted graph G; (b) G− negated from G in (a); (c) LPC output
on G; (d) SPC output on G−; (e) SPC on G; (f) LPC on G−. The longest paths in (c)
coincide with the shortest paths in (d), and likewise for (e) and (f).]

Figure 10.20: Longest and shortest path cost problems: LPC ≡p SPC
Another problem that reduces to SPC is LPC. To find an LPC on a weighted graph,
suppose we negate all weight values. Finding an SPC on the negated weighted graph is the
same as finding an LPC on the original weighted graph. Let G− = (V, E−) be the negated
weighted graph of a weighted graph G = (V, E), such that (vx , vy , −wx,y ) ∈ E− if and
only if (vx , vy , wx,y ) ∈ E.

    LPC(G, vs ) = −1 × SPC(G−, vs )   ⇔ LPC ≤p SPC   (10.15)
    SPC(G, vs ) = −1 × LPC(G−, vs )   ⇔ SPC ≤p LPC   (10.16)

Figure 10.20 demonstrates that LPC and SPC are dual problems: LPC ≡p SPC. The input
transformation negates the weights, which takes Θ(|E|). The output transformation just
negates the output value, which takes constant time. Hence, LPC reduces to SPC in
polynomial time and vice-versa.
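A Python sketch of eqn (10.15), restricted to DAGs so that the negative weights created by negation remain harmless (on graphs with cycles, negation can introduce negative cycles); spc_dag is a stand-in single-source SPC for DAGs whose successor-list keys are in topological order:

    def spc_dag(L, s):
        # Shortest path cost from s on a weighted DAG; L maps each vertex
        # to a list of (successor, weight), keys in topological order.
        dist = {u: float('inf') for u in L}
        dist[s] = 0
        for u in L:
            if dist[u] < float('inf'):
                for v, w in L[u]:
                    dist[v] = min(dist[v], dist[u] + w)
        return dist

    def lpc_dag(L, s):
        # LPC <=p SPC via eqn (10.15): negate every weight, solve SPC
        # on G-, and negate the answers back.
        neg = {u: [(v, -w) for v, w in adj] for u, adj in L.items()}
        return {v: -d for v, d in spc_dag(neg, s).items()}

    G = {'s': [('a', 2), ('b', 1)], 'a': [('t', 3)], 'b': [('t', 5)],
         't': []}
    assert lpc_dag(G, 's')['t'] == 6   # longest path: s -> b -> t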

[Figure: (a) input: a grid maze with one entrance s and one exit t; (b) input
transformation: junction and dead-end cells a, b, c, d, e, f, g, h become vertices; (c) the
resulting weighted graph, on which a shortest path algorithm from s to t solves the maze.]

Figure 10.21: Maze problem reduces to shortest path problem

Consider the maze problem where there is one entrance and one exit. The goal is to find
a path from the entrance to the exit. If we have a map of the maze, the problem can be
trivially solved by reducing it to the shortest path cost problem. Consider the maze as a
grid world. If a cell has more than two ways to move or exactly one way to move (which
is a dead end case), make the cell a new vertex as illustrated in Figure 10.21 (b). If a cell
has exactly two possible ways to move, there is no need to make it a vertex, except for
the start and target vertices. Connect two vertices with an edge if there is a path between
them without crossing any other vertex. The input maze problem in Figure 10.21 (a) can
be transformed into the graph problem in Figure 10.21 (c).

[Figure: (a) the SPC-complete set, containing sort, SPL, EDP, LPC, LPL, CPP, wASP,
LIS, ASP, UDP, KOS, RPP, CEU, MDN, MAX, MIN, maze, etc.; (b) the SPC reduction
tree, whose root is SPC; each child problem reduces to its parent.]

Figure 10.22: SPC-complete

Figure 10.22 (a) shows the SPC-complete, the set of all problems that reduce to the
shortest path cost problem. Figure 10.22 (b) shows the SPC reduction tree, where the root
is the SPC problem and the arrow is the reduction.

10.6 Proving Lower Bound


A statement that the lower bound of a certain problem, px , is known to be Ω(f (n))
means that there cannot exist an algorithm for px whose computational running time is
o(f (n)). Proving a lower bound from scratch is considerably hard in most cases. Only a
handful of problems’ lower bounds are known and proven from scratch. However, if a
problem whose lower bound is known is given, the reduction concept provides an easy way
to prove lower bounds for other problems. In this section, the lower bound for the sorting
Problem 2.16 is shown from scratch, and then reduction based proofs of lower bounds for
other problems are presented.
The general method for proving a lower bound of py by reduction utilizes proof by
contradiction as follows. Suppose we wish to prove that the lower bound of py is Ω(f (n)).
The proof by contradiction first states: “suppose that it is possible to devise an algorithm
for py in o(f (n)).” Next, choose a px whose lower bound is known to be Ω(f (n)), and show
that px ≤p py . The computational time complexity of the algorithm for py determines the
computational time complexity of the reduction based (px ≤p py ) algorithm for px :
O(Ti ) + O(algoy ) + O(To ). If we show O(Ti ) + O(algoy ) + O(To ) = o(f (n)), the
computational time complexity of the reduction based (px ≤p py ) algorithm for px is
o(f (n)). This contradicts the fact that the lower bound of px is Ω(f (n)). Therefore, the
lower bound of py is Ω(f (n)).

10.6.1 Lower Bound for Sorting


Among numerous algorithms presented previously for the sorting Problem 2.16, the best
performing algorithm’s computational time complexity has been O(n log n) thus far. Can
we devise a faster algorithm? Or does the best algorithm for the sorting problem have
O(n log n) time complexity and all other algorithms have the worst case running time of
Ω(n log n)? Proving the lower bound for a problem answers these questions.

[Figure: a decision tree for sorting three elements a, b, c. The root compares a : b; each
subsequent comparison (a : c or b : c) narrows the set of consistent orders, and each of the
3! = 6 leaves corresponds to one permutation, e.g., a < b < c.]

Figure 10.23: Decision tree for comparison based sorting

Any sorting algorithm that makes comparisons only must make Ω(n log n) comparisons
in the worst case. Any comparison based sorting algorithm can be viewed abstractly
as a decision tree, as illustrated in Figure 10.23. The worst-case running time is the length
of the longest path in the decision tree.

Theorem 10.3. The lower bound of any comparison based sorting algorithm is Ω(n log n).

Proof. Since a sorting algorithm must act differently on each of the n! possible permutations,
the decision tree has at least n! leaves. It may actually have more, because most algorithms
perform redundant comparisons. The height of the decision tree is at least log₂ (n!) =
Θ(n log n). Thus, the lower bound of any comparison based sorting algorithm is Ω(n log n). □
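The step log₂ (n!) = Θ(n log n) deserves a line of justification. Without appealing to Stirling's formula, a direct bound suffices:

    log₂ n! = Σ_{i=1}^{n} log₂ i ≥ Σ_{i=⌈n/2⌉}^{n} log₂ (n/2) ≥ (n/2) log₂ (n/2) = Ω(n log n),

while, conversely, log₂ n! ≤ n log₂ n = O(n log n).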

10.6.2 Lower Bound for AVL Tree Construction


As a simple example of the reduction based lower bound proof, consider the problem
of constructing an AVL tree. If we insert an element one by one into an AVL tree, this
inductive programming algorithm takes Θ(n log n). Can we construct an AVL tree given an
array of n elements in o(n log n)?

Theorem 10.4. The lower bound of any AVL construction algorithm is Ω(n log n).

Proof. Suppose that there exists an o(n log n) AVL-construct algorithm. Consider the AVL-
sort Algorithm 9.38 stated on page 544, which first constructs an AVL and then traverses the
tree by the in-order DFT. Clearly, Sort ≤p AVL-const. The computational time complexity
of this reduction based AVL-sort Algorithm 9.38 depends on the complexity of the AVL-
const, which is o(n log n), in addition to the in-order DFT time complexity, which is Θ(n).
Then, the computational time complexity of the AVL-sort Algorithm 9.38 is o(n log n). This

contradicts Theorem 10.3 that the sorting problem’s lower bound is Ω(n log n). Therefore,
the lower bound of any AVL construction algorithm is Ω(n log n). □

10.6.3 Convex Hull

[Figure: (a) input: a set of points p0 ∼ p5 in the plane; (b) output: the subset of extreme
points forming the convex hull, in counter-clockwise order.]

Figure 10.24: A sample input and output for the convex hull Problem 10.2

The convex hull problem is one of the fundamental problems in computational geometry.
Given a set of n points, P1∼n , in a plane, the problem is to find a subset of points, C0∼|C|−1 ,
that form a convex hull such that all points are within the convex hull. Each point pi or cj
has x and y coordinate values, denoted as (pi .x, pi .y) or (cj .x, cj .y). The points selected to
form the convex hull, cj ∈ C, are ordered along its boundary such that, for every segment
of two consecutive points in C, all other points in P lie on one side of the line y = mx + b
through the two consecutive points, i.e., in the region y ≥ mx + b or y ≤ mx + b. Which
side all points lie on depends on whether the ring sequence of points is given in a clockwise
or counter-clockwise manner. The following version of the formal problem definition
outputs a ring sequence of extreme points in counter-clockwise rotation, as shown in
Figure 10.24.
Problem 10.2. Convex hull problem
Input: A set of n points, P1∼n = {p1 , · · · , pn } where pi .x and pi .y ∈ R
Output: A ring sequence of extreme points, C0∼|C|−1 such that C ⊆ P and
∀i ∈ (0 ∼ |C| − 1), ∀j ∈ (1 ∼ n),
(pj .y − ci .y)(c(i+1)%|C| .x − ci .x) ≥ (c(i+1)%|C| .y − ci .y)(pj .x − ci .x)
Numerous algorithms had been proposed until Ronald Graham provided an efficient
O(n log n) algorithm, called Graham’s scan, in [72] in 1972. It first finds the point ps with
the lowest x-coordinate. Then, it computes the angle between ps and each remaining point
by eqn (10.17) and sorts the points by angle in descending order, as shown in Figure 10.25 (a).

    atan(ps , pt ) =
      π/2                                     if pt .x = ps .x ∧ pt .y > ps .y
      −π/2                                    if pt .x = ps .x ∧ pt .y < ps .y
      tan⁻¹((pt .y − ps .y)/(pt .x − ps .x))  if pt .x > ps .x              (10.17)

The angle values lie in the interval [−π/2, π/2].


Once the input points are sorted by the angles, it utilizes the strong induction. As a
basis case, the first two points with the highest angles to ps form the basis convex hull of
10.6. PROVING LOWER BOUND 581

[Figure: (a) find the minimum x-coordinate point and sort the remaining points by angle;
(b) initial convex hull, C = ⟨p3 , p1 , p5 ⟩; (c) (i = 4) inductive step, C = ⟨p3 , p4 , p5 ⟩;
(d) (i = 5) inductive step, C = ⟨p3 , p2 , p4 , p5 ⟩; (e) (i = 6) inductive step,
C = ⟨p3 , p6 , p4 , p5 ⟩; (f) final result, C = ⟨p3 , p6 , p4 , p5 ⟩.]

Figure 10.25: Graham’s scan illustration

three points, as exemplified in Figure 10.25 (b). Then, a stack is utilized to solve the problem
inductively. The next unvisited point with the highest angle to ps is simply pushed onto the
stack if it does not introduce a concavity. If it does introduce a concavity, the concavity is
removed by popping points from the stack until none remains; then the current point is
pushed. A pseudo code, assuming that there are more than three points, is stated below.
Let top(S) and top2(S) be the top-most and second top-most elements of S, read without
removing them.

Algorithm 10.22. Graham’s scan

Grahamscan(P1∼n )
  declare a stack S ....................................... 1
  s = argmin i=1∼n (pi .x) ................................ 2
  swap(p1 , ps ) .......................................... 3
  P2∼n = sort(P2∼n ) by atan(p1 , pi ) in desc. order ..... 4
  push p1 , p2 , and p3 onto S ............................ 5
  for i = 4 ∼ n ........................................... 6
    while (top(S).y − top2(S).y)(pi .x − top(S).x)
          > (pi .y − top(S).y)(top(S).x − top2(S).x), .... 7
      pop(S) .............................................. 8
    push pi onto S ........................................ 9
  i = 0 and ci = p1 ....................................... 10
  while S is not empty, i = i + 1 and ci = pop(S) ......... 11
  return C0∼|C|−1 ......................................... 12

Line 2, which finds a point with the minimum x coordinate value, takes linear time.
Line 4, which sorts the points by angle, takes O(n log n). Lines 5 ∼ 9 take linear time, as
each element is pushed exactly once and popped at most once. All remaining elements in
the stack get popped to generate the counter-clockwise sequence of points in the convex
hull in line 11. Hence, the total computational time complexity of Graham's scan,
Algorithm 10.22, is O(n log n). Can there be a better or more efficient algorithm which
takes o(n log n)?
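To make the scan concrete, here is a minimal Python sketch of the same idea (the helper
names and tie-breaking rules are this sketch's own; it sorts ascending by angle, which yields
the counter-clockwise ring directly, whereas the pseudo code above builds the ring in
descending-angle order and reverses it by popping the stack):

import math

def cross(o, a, b):
    # > 0 iff o -> a -> b makes a counter-clockwise (left) turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    # pivot: the point with the minimum x coordinate, as in Algorithm 10.22
    # (minimum y breaks ties); distance breaks angle ties
    ps = min(points, key=lambda p: (p[0], p[1]))
    rest = sorted((p for p in points if p != ps),
                  key=lambda p: (math.atan2(p[1] - ps[1], p[0] - ps[0]),
                                 (p[0] - ps[0]) ** 2 + (p[1] - ps[1]) ** 2))
    hull = [ps]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()          # popping removes the concavity
        hull.append(p)
    return hull

print(graham_scan([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)] -- the inner point (1, 1) is discarded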
Theorem 10.5. The lower bound of any convex hull algorithm is Ω(n log n).
Proof. This proof shows Sort ≤p CVH to prove the theorem. Suppose that there exists
an o(n log n) convex hull algorithm. As shown in Figure 10.26, each element, ai, in the
input sequence becomes a point, pi, where pi.x = ai and pi.y = ai².

(Figure 10.26: Sort ≤p CVH illustration: (a) the Sort ≤p CVH processes:
A = ⟨1.2, 0.3, −1.2, 1.8, −0.5⟩ →(input transformation)→
P = {(1.2, 1.44), (0.3, 0.09), (−1.2, 1.44), (1.8, 3.24), (−0.5, 0.25)}
→(convex hull algorithm)→
C = ⟨(−1.2, 1.44), (−0.5, 0.25), (0.3, 0.09), (1.2, 1.44), (1.8, 3.24)⟩
→(output transformation)→ A′ = ⟨−1.2, −0.5, 0.3, 1.2, 1.8⟩;
(b) the points plotted on the parabola y = x².)

These points form
a parabola, as shown in Figure 10.26 (b). A convex hull algorithm finds a sequence of
points starting from the point with the minimum x-coordinate value and then points in the
counter clockwise sequence. Every point becomes part of the convex hull. If we select the x
coordinate value only from the convex hull sequence, we get a sorted list of A. A pseudo code
for this sorting algorithm reduced to the convex hull problem is stated in Algorithm 10.23.
The input transformation in line 1 takes linear time. The output transformation in line 3
takes linear time as well. If the convex hull algorithm in line 2 takes o(n log n), the
computational time complexity of this sorting by convex hull algorithm is o(n log n). This
contradicts Theorem 10.3 that the sorting problem’s lower bound is Ω(n log n). Therefore,
the lower bound of any convex hull algorithm is Ω(n log n). 
Algorithm 10.23. Sorting by convex hull
Sort-RD2CVH(A1∼n )
for i = 1 ∼ n, pi = (ai , ai ²) . . . . . . . . . . . . . . . . 1
C0∼n−1 = CVH algo(P1∼n ) . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n, a′i = ci−1 .x . . . . . . . . . . . . . . . . . . 3
return A′1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
The direction of the reduction is important to prove the lower bound of a problem. The
proof for Theorem 10.5 utilizes Sort ≤p CVH. Note that Graham's scan Algorithm 10.22
can be categorized as a reduction based (CVH ≤p Sort) algorithm, as it invokes the sorting
algorithm in line 4. This reduction does not prove the lower bound of CVH though.
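The whole Sort ≤p CVH pipeline of Algorithm 10.23 can then be run end to end (a minimal
sketch reusing the hypothetical graham_scan above; it assumes distinct input values):

def sort_by_convex_hull(a):
    # Sort-RD2CVH: lift each value onto the parabola y = x^2, take the
    # convex hull, and read the x coordinates back off in ring order
    points = [(x, x * x) for x in a]     # input transformation, O(n)
    hull = graham_scan(points)           # every lifted point is extreme
    return [p[0] for p in hull]          # output transformation, O(n)

print(sort_by_convex_hull([1.2, 0.3, -1.2, 1.8, -0.5]))
# [-1.2, -0.5, 0.3, 1.2, 1.8]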
10.7 Multi-reduction

(Figure 10.27: Multi-reduction framework: the input Ix is transformed into inputs
Iy1 ∼ Iym, an algorithm for problem py is invoked on each, and the outputs Oy1 ∼ Oym
are transformed back into the output Ox.)

Often a certain problem, px, can be solved by invoking an algorithm for another relevant
problem, py, multiple times, as depicted in Figure 10.27. Such algorithms are called
multi-reduction based algorithms and the notation px ≤m_p py shall be used. m should be
at most polynomial so that the whole algorithm, including the input and output
transformations, takes polynomial time. Many algorithms can be viewed as a
multi-reduction. For example, the insertion sort Algorithm 2.23 itself, stated on page 64,
can be viewed as a multi-reduction because it invokes, multiple times, an algorithm for
Problem 2.17, defined on page 61, which inserts a single item into an already sorted list.
The reduction px ≤p py is often called the Karp reduction, named after Richard Karp,
where the algorithm for py is invoked only once as a sub-routine to solve px, i.e., m = 1.
As opposed to the Karp reduction, the algorithm for py is invoked a polynomial number of
times in the Turing reduction, or Cook reduction, and the notation ≤P_T is used in [70, p 59].
The multi-reduction px ≤m_p py can be considered a Turing reduction; the difference is that
m > 1 in a multi-reduction while m ≥ 1 in a Turing reduction. Hence, the Turing reduction
includes both the Karp reduction and the multi-reduction. The term 'multi-reduction' and
the notation (≤m_p) are used in this book to distinguish them from the ordinary Karp
reduction.
For another simple example, consider the volume of the Frustum Problem 1.4, or simply
VFR, defined on page 5. Finding the volume of the pyramid with height, h, and base, b,
values, or simply VPR, is given as follows:

VPR(h, b) = hb²/3    (10.18)

Then, the volume of the Frustum can be easily solved by invoking the VPR in eqn (10.18)
twice; the volume of the frustum is the volume of a big pyramid minus that of a small top
pyramid, as depicted in Figure 10.28. Since the algorithm for the VPR problem is called
more than once, we denote VFR ≤m_p VPR. This multi-reduction based algorithm is stated
in eqn (10.19).

VFR(h, a, b) = VPR(bh/(b − a), b) − VPR(ah/(b − a), a)  ⇔  VFR ≤m_p VPR    (10.19)

As a matter of fact, the correctness proof for Algorithm 1.2 in Theorem 1.1 stated on page 6
utilizes the multi-reduction in eqn (10.19).
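As a quick sanity check, the reduction in eqn (10.19) can be run directly (a minimal sketch;
the function names are this sketch's own):

def vpr(h, b):
    # volume of a pyramid with height h and square base of side b, eqn (10.18)
    return h * b * b / 3

def vfr(h, a, b):
    # VFR <=m_p VPR, eqn (10.19): the big pyramid has height b*h/(b-a),
    # the clipped-off top pyramid has height a*h/(b-a)
    return vpr(b * h / (b - a), b) - vpr(a * h / (b - a), a)

print(vfr(3, 1, 2))   # 7.0 = (3/3) * (2^2 + 2*1 + 1^2)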
(Figure 10.28: VFR ≤m_p VPR: a frustum of height h with base sides b and a is the
difference of a pyramid of height bh/(b − a) and base b and a pyramid of height ah/(b − a)
and base a.)

10.7.1 Combinatorics

(Figure 10.29: The binomial coefficient problem reduces to the factorial problem: the input
(n, k) is transformed into three factorial invocations, n!, k!, and (n − k)!, whose outputs
combine into n!/(k!(n − k)!).)

Consider the binomial coefficient Problem 6.9, or BNC in short, defined on page 319.

nchoosek(n, k) = C(n, k) = n!/((n − k)! k!)  ⇔  BNC ≤m_p FAC    (10.20)

Eqn (10.20) is based on multi-reduction and, thus, BNC ≤m_p FAC, as depicted in
Figure 10.29.
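The reduction is immediate to state in code (a minimal sketch using Python's
math.factorial as the FAC algorithm):

from math import factorial

def bnc(n, k):
    # BNC <=m_p FAC: three invocations of a factorial algorithm, eqn (10.20)
    return factorial(n) // (factorial(n - k) * factorial(k))

print(bnc(5, 2))   # 10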
Likewise, consider the k permutation of n Problem 2.8, or simply KPN, denoted as
P(n, k), as defined on page 51. Several reduction and multi-reduction relations can be
devised.

KPN(n, k) = C(n, k) × k!  ⇔  KPN ≤p BNC    (10.21)
          = n!/(n − k)!  ⇔  KPN ≤m_p FAC    (10.22)
          = RFP(n − k + 1, k)  ⇔  KPN ≤p RFP    (10.23)

Since P(n, k) = C(n, k) × k!, KPN reduces to BNC, as given in eqn (10.21). KPN ≤m_p FAC
since the factorial function is invoked more than once in eqn (10.22).
RFP stands for the rising factorial power problem and was considered on page 89. Recall
that KPN is the falling factorial power. KPN and RFP are dual problems because of
the reduction relations in eqns (10.23) and (10.24). Other reduction and multi-reduction
relations for RFP are given in eqns (10.25) and (10.26).

RFP(n, k) = P(n + k − 1, k) = KPN(n + k − 1, k)  ⇔  RFP ≤p KPN    (10.24)
          = C(n + k − 1, k) × k!  ⇔  RFP ≤p BNC    (10.25)
          = (n + k − 1)!/(n − 1)!  ⇔  RFP ≤m_p FAC    (10.26)

The nth Catalan number, Cn, or CAT in short, is the number of valid parenthesizations
of n pairs, discussed on page 288, and it is equivalent to the number of rooted binary trees
with n nodes Problem 8.3 defined on Page 438. It can be solved by the binomial coefficient,
as given in eqn (10.27) (see [163] for details about eqn (10.27)). Consequently, it can be
solved by the multi-reduction to FAC, as given in eqn (10.28).

Cn = C(2n, n)/(n + 1)  ⇔  CAT ≤p BNC    (10.27)
   = (2n)!/((n + 1)! n!)  ⇔  CAT ≤m_p FAC    (10.28)
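For instance, a minimal sketch of eqn (10.27), reusing the bnc function above:

def catalan(n):
    # CAT <=p BNC, eqn (10.27)
    return bnc(2 * n, n) // (n + 1)

print([catalan(n) for n in range(1, 7)])   # [1, 2, 5, 14, 42, 132]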

Consider the multiset coefficient Problem 6.18, or simply MSC, defined on page 350. It
is the number of ways to distribute k unlabeled processes to n distinct (labelled) processors.
In other words, it is the number of ways to choose k items from n distinct items where the
order does not matter and repetition is allowed. In [60, p 38], Feller proved MSC ≤p BNC
in eqn (10.29) in an elegant way with stars and bars.

MSC(n, k) = C(n + k − 1, k) = C(n + k − 1, n − 1)  ⇔  MSC ≤p BNC    (10.29)
          = (n + k − 1)!/((n − 1)! k!)  ⇔  MSC ≤m_p FAC    (10.30)

Consequently, MSC ≤m_p FAC, as given in eqn (10.30).

(Figure 10.30: Stars and bars for the multiset coefficient problem with n = 4 processors
(A, B, C, D) and k = 3 processes; e.g., occupancy ⟨1, 1, 1, 0⟩ places one process on each of
A, B, and C, while ⟨3, 0, 0, 0⟩ places all three on A.)

Theorem 10.6. Eqn (10.29) correctly finds MSC(n, k).



Proof. As depicted in Figure 10.30, the k processes are represented as stars. The n
processors are the n distinct spaces delimited by n − 1 bars. For example, '★ | ★ ★ | |'
represents distributing (k = 3) processes to (n = 4) spaces with occupancy numbers
⟨1, 2, 0, 0⟩, i.e., one process on processor 'A', two on 'B', none on 'C', and none on 'D'.
The n − 1 bars and k stars can appear in an arbitrary order in n + k − 1 distinct cells.
Clearly, it is to choose k out of n + k − 1 cells to be stars, or n − 1 out of n + k − 1 cells
to be bars. □
Recall the surjective multiset coefficient Problem 6.19, or simply SMSC, defined on
page 351. While some of the processors may be idle in the multiset coefficient Problem 6.18,
no idle processor is allowed in SMSC, i.e., every processor must have at least one process.
As depicted in Figure 10.31 (a), if each processor is first assigned exactly one process,
the problem becomes distributing the remaining (k − n) processes to n processors where
order does not matter and repetition is allowed. Clearly, SMSC ≤p MSC by eqn (10.31).
Consequently, SMSC ≤p BNC and SMSC ≤m_p FAC by eqns (10.32) and (10.33), respectively.

SMSC(n, k) = MSC(n, k − n)  ⇔  SMSC ≤p MSC    (10.31)
           = C(k − 1, k − n) = C(k − 1, n − 1)  ⇔  SMSC ≤p BNC    (10.32)
           = (k − 1)!/((k − n)!(n − 1)!)  ⇔  SMSC ≤m_p FAC    (10.33)

In [60, p 39], Feller proved SMSC ≤p BNC in eqn (10.32) with stars and bars, as depicted
in Figure 10.31 (b). The condition that no processor is idle imposes the restriction that no
two bars be adjacent. The k stars leave (k − 1) spaces of which (n − 1) are to be occupied
by bars and, thus, eqn (10.32) is proved.

(Figure 10.31: Stars and bars for the surjective multiset coefficient with n = 4 and k = 7.
(a) SMSC ≤p MSC: each processor gets one mandatory process first, e.g.,
⟨1, 1, 1, 1⟩ + ⟨1, 1, 1, 0⟩ = ⟨2, 2, 2, 1⟩ and ⟨1, 1, 1, 1⟩ + ⟨1, 2, 0, 0⟩ = ⟨2, 3, 1, 1⟩.
(b) SMSC ≤p BNC: the n − 1 bars occupy spaces between stars so that no two bars are
adjacent.)

(Figure 10.32: Surjective sequence to set partition for n = 3 and k = 2. (a) Set partitions,
S(3, 2) = 3: {a}{b, c}, {b}{a, c}, {c}{a, b}. (b) Surjective sequences, S̃(3, 2) = 6: each of
the three partitions yields two sequences by labeling the parts p1 and p2, e.g., ⟨p1, p2, p2⟩
and ⟨p2, p1, p1⟩.)

Recall the number of ways of partitioning a set into exactly k parts, also known as the
Stirling number of the second kind, or simply SNS. It can be viewed as the number of ways
of distributing n labeled balls into k unlabeled urns, where no urn is empty, as depicted in
Figure 10.32 (a). Consider instead the number of ways of distributing n labeled balls into
k labeled urns, where no urn is empty, as depicted in Figure 10.32 (b). This problem is
called the surjective sequence number problem, or SSN in short, and is formally defined as
follows:
Problem 10.3. The surjective sequence number problem, (SSN)
Input: n and k ∈ Z+
Output: S̃(n, k) = |P| where A is a set of n distinct elements and
P = {⟨p1 , · · · , pk ⟩ | ∪_{i=1}^{k} pi = A ∧ (∀i, j ∈ {1, · · · , k}, if i ≠ j then pi ∩ pj = ∅)
∧ (∀i ∈ {1, · · · , k}, |pi | > 0)}
The SSN Problem 10.3 definition resembles the SNS problem considered as exercise
Q 6.17 on page 347. While SNS counts the set of non-empty partitions, SSN counts the
sequence of non-empty partitions. SSN can be naturally solved by the reduction to SNS,
since labeling the urns amounts to permuting the parts of each partition.

S̃(n, k) = k! S(n, k)  ⇔  SSN ≤p SNS    (10.34)

Eqn (10.34) provides the SSN ≤p SNS reduction relation, where S(n, k) is the Stirling
number of the second kind.
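A minimal sketch, computing S(n, k) by its standard recurrence
S(n, k) = k S(n − 1, k) + S(n − 1, k − 1) (the tabulation details are this sketch's own):

from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def sns(n, k):
    # Stirling number of the second kind by the standard recurrence
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * sns(n - 1, k) + sns(n - 1, k - 1)

def ssn(n, k):
    # SSN <=p SNS, eqn (10.34): label the k parts in k! ways
    return factorial(k) * sns(n, k)

print(sns(3, 2), ssn(3, 2))   # 3 6 -- matches Figure 10.32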

10.7.2 Integer Partition Problem


Consider problems to find the number of ways to distribute n unlabeled processes into
k unlabeled processors. This question can be answered in two different ways depending
on whether we allow idle processors or not. In the integer partition with exactly k parts
Problem 6.11, p(n, k), or IPE in short, defined on page 326, all k processors must have at
least one process. In the integer partition with at most k parts Problem 6.12, I(n, k), or
IPam in short, defined on page 331, some processors may have zero processes.

(Figure 10.33: Reduction from IPam to IPE: (a) p(7, 1) = 1, p(7, 2) = 3, p(7, 3) = 4,
p(7, 4) = 3, p(7, 5) = 2; (b) I(7, 5) = p(7, 1) + p(7, 2) + p(7, 3) + p(7, 4) + p(7, 5) = 13.)

Suppose an algorithm for IPE is already given and we wish to devise an algorithm for
IPam. As illustrated in Figure 10.33, the solutions of p(7, 1) ∼ p(7, 5) can simply be added
to find I(7, 5). If p(n, k′) is given where k′ < k, then k − k′ empty urns can be appended to
obtain an element counted by I(n, k). This relationship is depicted from (a) to (b) in
Figure 10.33. Hence, the following multi-reduction (IPam ≤m_p IPE) relation can be stated,
as in eqn (10.35).

I(n, k) = Σ_{i=1}^{k} p(n, i)  ⇔  IPam ≤m_p IPE    (10.35)

In addition to the multi-reduction relation in eqn (10.35), IPam has a simple reduction
to IPE, as given in Theorem 10.7.

Theorem 10.7. IPam ≤p IPE, i.e., I(n, k) = p(n + k, k).

Proof. By the recurrence relation definition in eqn (6.22),

p(n + k, k) = p(n + k − 1, k − 1) + p(n, k)                              by eqn (6.22)
            = p(n + k − 2, k − 2) + p(n, k − 1) + p(n, k)                by eqn (6.22)
            = p(n + k − 3, k − 3) + p(n, k − 2) + p(n, k − 1) + p(n, k)  by eqn (6.22)
            ...
            = p(n + k − k, k − k) + Σ_{i=0}^{k−1} p(n, k − i)
            = Σ_{i=0}^{k−1} p(n, k − i)                                  ∵ p(n, 0) = 0
            = Σ_{i=1}^{k} p(n, i) = I(n, k)                              by eqn (10.35)  □

Another proof by two dimensional strong induction is also possible.

Proof. Basis (n = 0, k = 1) case: (I(0, 1) = 1) = (p(1, 1) = 1).
First row basis (n = 0, k > 1) case: (I(0, k) = 1) = (p(k, k) = 1).
First column basis (n > 0, k = 1) case: (I(n, 1) = 1) = (p(n + 1, 1) = 1).
2D inductive step: assume all sub-problems are true.

I(n, k) = I(n, k − 1) + I(n − k, k)          by eqn (6.23)
        = p(n + k − 1, k − 1) + p(n, k)      by strong assumption
        = p(n + k, k)                        by eqn (6.22)  □

I(n, k) = p(n+k, k) in Theorem 10.7 can be best understood by pictures in Figure 10.34.
If one ball is added to each urn in I(n, k), the problem becomes p(n + k, k).

(a) p(6, 2) = I(4, 2) (b) p(7, 3) = I(4, 3)

Figure 10.34: IPE ≤p IPam: I(n, k) = p(n + k, k)

Conversely, IPE ≤p IPam by the same reasoning and thus IPE ≡p IPam.

p(n, k) = I(n − k, k) ⇔ IPE ≤p IPam (10.36)
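Both directions are easy to exercise in code (a minimal sketch; ipe implements the
recurrence p(n, k) = p(n − 1, k − 1) + p(n − k, k), the form of eqn (6.22) used above):

from functools import lru_cache

@lru_cache(maxsize=None)
def ipe(n, k):
    # p(n, k): partitions of n into exactly k parts
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return ipe(n - 1, k - 1) + ipe(n - k, k)

def ipam(n, k):
    # I(n, k) by the multi-reduction IPam <=m_p IPE, eqn (10.35)
    return sum(ipe(n, i) for i in range(1, k + 1))

print(ipam(7, 5), ipe(7 + 5, 5))   # 13 13 -- Theorem 10.7: I(n, k) = p(n+k, k)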

10.7.3 At Most or At Least Combinatorics


Consider the Stirling numbers of the first kind triangle, as shown in Figure 10.35. Each
cell contains SNF(n, k), the number of ways of generating exactly k cycles. Recall the ways
of generating at most k cycles problem, or simply CNam, and the ways of generating at
least k cycles problem, or simply CNal, considered on page 431. CNam(n, k) and CNal(n, k)
can be computed by adding cells from the left and from the right of the nth row of the
Stirling numbers of the first kind triangle, respectively. Hence, the following multi-reduction
 n = 0:  1                                             1 = 0!
 n = 1:  0    1                                        1 = 1!
 n = 2:  0    1    1                                   2 = 2!
 n = 3:  0    2    3    1                              6 = 3!
 n = 4:  0    6   11    6    1                        24 = 4!
 n = 5:  0   24   50   35   10    1                  120 = 5!
 n = 6:  0  120  274  225   85   15    1             720 = 6!
 n = 7:  0  720 1764 1624  735  175   21   1        5040 = 7!

Figure 10.35: At most and at least k cycle problems on the SNF triangle: CNam(n, k) sums
cells 0 ∼ k of row n, CNal(n, k + 1) sums cells k + 1 ∼ n, and together they give the row
sum n!.

to SNF relations can be derived:

CNam(n, k) = Σ_{i=0}^{k} SNF(n, i)  ⇔  CNam ≤m_p SNF    (10.37)
CNal(n, k) = Σ_{i=k}^{n} SNF(n, i)  ⇔  CNal ≤m_p SNF    (10.38)

CNam and CNal are complementary to each other on the Stirling numbers of the first
kind triangle, as depicted in Figure 10.35. If the sum of all elements in the entire nth row,
Σ_{i=0}^{n} SNF(n, i), is known, different reduction relations can be derived. Indeed,
CNam(n, n) = Σ_{i=0}^{n} SNF(n, i) and CNal(n, 0) = Σ_{i=0}^{n} SNF(n, i) by eqns (10.37) and (10.38),
respectively. Hence, the following multi-reduction relations can be derived:

CNam(n, k) = CNal(n, 0) − CNal(n, k + 1)  ⇔  CNam ≤m_p CNal    (10.39)
CNal(n, k) = CNam(n, n) − CNam(n, k − 1)  ⇔  CNal ≤m_p CNam    (10.40)

Since Σ_{i=0}^{n} SNF(n, i) = n!, eqns (10.39) and (10.40) can be rewritten using the Karp (or
ordinary) reduction as follows:

CNam(n, k) = n! − CNal(n, k + 1)  ⇔  CNam ≤p CNal    (10.41)
CNal(n, k) = n! − CNam(n, k − 1)  ⇔  CNal ≤p CNam    (10.42)

Clearly, CNam and CNal are dual problems. The correctness of the reduction relations in
eqns (10.41) and (10.42) follows directly from the row sum property of the Stirling numbers
of the first kind triangle Lemma 10.2.

Lemma 10.2. The row sum property of the Stirling numbers of the first kind triangle:

Σ_{i=0}^{n} SNF(n, i) = n!    (10.43)

Proof. (by induction) Basis case: if n = 0, (Σ_{i=0}^{0} SNF(0, i) = SNF(0, 0) = 1) = (0! = 1).
Inductive step: assuming that Σ_{i=0}^{n} SNF(n, i) = n!, show Σ_{i=0}^{n+1} SNF(n + 1, i) = (n + 1)!.

Σ_{i=0}^{n+1} SNF(n + 1, i)
  = Σ_{i=1}^{n+1} SNF(n + 1, i)                          ∵ SNF(n + 1, 0) = 0
  = Σ_{i=1}^{n+1} (SNF(n, i − 1) + n SNF(n, i))          by eqn (6.28)
  = Σ_{i=0}^{n} SNF(n, i) + n Σ_{i=1}^{n} SNF(n, i)      ∵ SNF(n, n + 1) = 0
  = n! + n × n!                                          by assumption
  = (n + 1) n! = (n + 1)!  □
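Lemma 10.2 is easy to spot-check (a minimal sketch; the recurrence below restates the
book's eqn (6.28), SNF(n + 1, k) = SNF(n, k − 1) + n SNF(n, k), in terms of n):

from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def snf(n, k):
    # unsigned Stirling numbers of the first kind
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0 or k > n:
        return 0
    return snf(n - 1, k - 1) + (n - 1) * snf(n - 1, k)

for n in range(8):
    row = [snf(n, k) for k in range(n + 1)]
    assert sum(row) == factorial(n)   # Lemma 10.2: the row sums to n!
    print(n, row)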

Recall the Stirling numbers of the second kind triangle, or simply the SNS triangle, in
Figure 7.41 given on page 408. The nth row and kth column cell in the SNS triangle
represents SNS(n, k), the number of ways to partition a set of n elements into exactly k
parts, as described earlier on page 347. Let SPam and SPal represent the number of ways
to partition a set into at most and at least k parts, Problems 7.14 and 7.15 on pages 412
and 413, respectively. The following two immediate multi-reduction relations can be
generated by the summing from the left or right method:

SPam(n, k) = Σ_{i=1}^{k} SNS(n, i)  ⇔  SPam ≤m_p SNS    (7.13)
SPal(n, k) = Σ_{i=k}^{n} SNS(n, i)  ⇔  SPal ≤m_p SNS    (7.14)

Moreover, the following immediate multi-reduction relations can be generated:

SPal(n, k) = SPam(n, n) − SPam(n, k − 1)  ⇔  SPal ≤m_p SPam    (10.44)
SPam(n, k) = SPal(n, 0) − SPal(n, k + 1)  ⇔  SPam ≤m_p SPal    (10.45)

The sum of the nth row in the Stirling numbers of the second kind triangle is the Bell
number Problem 7.13, or BLN in short, defined on page 410. The problem definition itself
in Problem 7.13 utilizes the multi-reduction to SNS, as given in eqn (10.46).

bell(n) = Σ_{i=1}^{n} S(n, i)  ⇔  BLN ≤m_p SNS    (10.46)
        = SPam(n, n)  ⇔  BLN ≤p SPam    (10.47)
        = SPal(n, 0)  ⇔  BLN ≤p SPal    (10.48)
bell(n + 1) = Σ_{i=0}^{n} C(n, i) bell(n − i)  ⇔  BLN ≤m_p BNC    (10.49)

See [179, p 23] for the correctness of eqn (10.49), which involves the binomial coefficients.
Using the Bell number, the Turing or multi-reduction relations in eqns (10.44) and (10.45)
become Karp (or ordinary) reduction relations in the following eqns (10.50) and (10.51).

SPal(n, k) = bell(n) − SPam(n, k − 1) ⇔ SPal ≤p SPam (10.50)


SPam(n, k) = bell(n) − SPal(n, k + 1) ⇔ SPam ≤p SPal (10.51)

Consider the Eulerian number of the second kind triangle in Figure 10.36, where each
cell represents EUS(n, k), the number of GBW sequences with exactly k ascents. It was
formally defined in Problem 6.17 on page 349. Recall NA2am and NA2al, considered
on page 432, which are the numbers of GBW sequences with at most k ascents and at
least k ascents, respectively. The following two immediate multi-reduction relations can be
generated by the summing from the left or right method:

NA2am(n, k) = Σ_{i=0}^{k} EUS(n, i)  ⇔  NA2am ≤m_p EUS    (10.52)
NA2al(n, k) = Σ_{i=k}^{n} EUS(n, i)  ⇔  NA2al ≤m_p EUS    (10.53)

 n = 0:  1                                                      1
 n = 1:  1    0                                                 1 = 1!!
 n = 2:  1    2     0                                           3 = 3!!
 n = 3:  1    8     6     0                                    15 = 5!!
 n = 4:  1   22    58    24     0                             105 = 7!!
 n = 5:  1   52   328   444   120     0                       945 = 9!!
 n = 6:  1  114  1452  4400  3708   720    0                10395 = 11!!
 n = 7:  1  240  5610 32120 58140 33984 5040   0           135135 = 13!!

Figure 10.36: At most and at least k ascents in GBW problems on the EUS triangle:
NA2am(n, k) sums cells 0 ∼ k of row n, NA2al(n, k + 1) sums the rest, and together they
give the row sum (2n − 1)!!.
As before, the following immediate multi-reduction relations can be generated:

NA2am(n, k) = NA2al(n, 0) − NA2al(n, k + 1)  ⇔  NA2am ≤m_p NA2al    (10.54)
NA2al(n, k) = NA2am(n, n) − NA2am(n, k − 1)  ⇔  NA2al ≤m_p NA2am    (10.55)

The sum of the nth row in the Eulerian number of the second kind triangle follows the
double factorial of the nth odd number Problem 2.30, defined on page 85.

Σ_{i=0}^{n} EUS(n, i) = 1 if n = 0;  (2n − 1)!! = (2n)!/(2^n n!) if n > 0    (10.56)

See eqn (10.104) in Exercise Q. 10.20 for the proof of the double factorial of the nth odd
number equation: (2n − 1)!! = (2n)!/(2^n n!). Using eqn (10.56), eqns (10.54) and (10.55)
become the following simple reduction relations in eqns (10.57) and (10.58), respectively.

NA2am(n, k) = (2n − 1)!! − NA2al(n, k + 1)  ⇔  NA2am ≤p NA2al    (10.57)
NA2al(n, k) = (2n)!/(2^n n!) − NA2am(n, k − 1)  ⇔  NA2al ≤p NA2am    (10.58)
Recall the problems of counting the ways of selecting a subset of size at most and at
least k from n elements without repetition, abbreviated as SWam and SWal, respectively,
considered on page 430. Pascal's triangle, given in Figure 10.37, where each cell represents
BNC(n, k) = C(n, k), provides solutions for these problems. Indeed, the SWam and SWal
problems can be defined using the following multi-reduction relations to BNC:

SWam(n, k) = Σ_{i=0}^{k} C(n, i)  ⇔  SWam ≤m_p BNC    (10.59)
SWal(n, k) = Σ_{i=k}^{n} C(n, i)  ⇔  SWal ≤m_p BNC    (10.60)

As before, the following immediate multi-reduction relations can be generated:

SWal(n, k) = SWam(n, n) − SWam(n, k − 1)  ⇔  SWal ≤m_p SWam    (10.61)
SWam(n, k) = SWal(n, 0) − SWal(n, k + 1)  ⇔  SWam ≤m_p SWal    (10.62)

The sum of the nth row in Pascal's triangle is given in eqn (10.63).

Σ_{i=0}^{n} C(n, i) = 2^n    (10.63)

Using eqn (10.63), SWam and SWal become dual problems.

SWam(n, k) = 2^n − SWal(n, k + 1)  ⇔  SWam ≤p SWal    (10.64)
SWal(n, k) = 2^n − SWam(n, k − 1)  ⇔  SWal ≤p SWam    (10.65)

Pascal's triangle is symmetric, as depicted in Figure 10.37. Hence, the duality of SWam
and SWal can be stated using the symmetry relation as follows:

SWam(n, k) = SWal(n, n − k)  ⇔  SWam ≤p SWal    (10.66)
SWal(n, k) = SWam(n, n − k)  ⇔  SWal ≤p SWam    (10.67)
 n = 0:  1                                   1 = 2^0
 n = 1:  1  1                                2 = 2^1
 n = 2:  1  2  1                             4 = 2^2
 n = 3:  1  3  3  1                          8 = 2^3
 n = 4:  1  4  6  4  1                      16 = 2^4
 n = 5:  1  5 10 10  5  1                   32 = 2^5
 n = 6:  1  6 15 20 15  6  1                64 = 2^6
 n = 7:  1  7 21 35 35 21  7  1            128 = 2^7

Figure 10.37: At most and at least k subset selection problems on Pascal's triangle:
SWam(n, k) sums cells 0 ∼ k of row n, SWal(n, k + 1) sums the rest to give 2^n, and
symmetry gives SWam(n, k) = SWal(n, n − k).

Consider the Eulerian number triangle in Figure 10.38, where each cell represents EUN(n, k),
the number of permutations with exactly k ascents in Problem 6.16. Recall NAam and NAal,
considered on page 432, which are the numbers of permutations with at most k ascents and
at least k ascents, respectively. The following two immediate multi-reduction relations can
be generated by the summing from the left or right method:

NAam(n, k) = Σ_{i=0}^{k} EUN(n, i)  ⇔  NAam ≤m_p EUN    (10.68)
NAal(n, k) = Σ_{i=k}^{n} EUN(n, i)  ⇔  NAal ≤m_p EUN    (10.69)

As before, the following multi-reduction relations can be immediately derived:

NAam(n, k) = NAal(n, 0) − NAal(n, k + 1)  ⇔  NAam ≤m_p NAal    (10.70)
NAal(n, k) = NAam(n, n) − NAam(n, k − 1)  ⇔  NAal ≤m_p NAam    (10.71)

The Eulerian number triangle has the same row sum property as the Stirling numbers of
the first kind triangle, as given in eqn (10.72).

Σ_{i=0}^{n} EUN(n, i) = NAam(n, n) = NAal(n, 0) = n!    (10.72)

A permutation with any number of ascents is simply a permutation of n distinct elements,
so the row sum is n!. Hence, eqns (10.70) and (10.71) become simple reduction relations
as follows:

NAam(n, k) = n! − NAal(n, k + 1)  ⇔  NAam ≤p NAal    (10.73)
NAal(n, k) = n! − NAam(n, k − 1)  ⇔  NAal ≤p NAam    (10.74)
 n = 0:  1                                            1 = 0!
 n = 1:  1    0                                       1 = 1!
 n = 2:  1    1    0                                  2 = 2!
 n = 3:  1    4    1    0                             6 = 3!
 n = 4:  1   11   11    1    0                       24 = 4!
 n = 5:  1   26   66   26    1    0                 120 = 5!
 n = 6:  1   57  302  302   57    1    0            720 = 6!
 n = 7:  1  120 1191 2416 1191  120    1   0       5040 = 7!

Figure 10.38: At most and at least k ascents problems on the EUN triangle: NAam(n, k)
sums cells 0 ∼ k of row n, NAal(n, k + 1) sums the rest to give n!, and the embedded
symmetry gives NAam(n, k) = NAal(n, n − k − 1).

The EUN triangle is not symmetric, but a symmetric sub-triangle is embedded, as de-
picted in Figure 10.38. Hence, the duality of NAam and NAal can be stated using the
symmetry relation as follows:

NAam(n, k) = NAal(n, n − k − 1) ⇔ NAam ≤p NAal (10.75)


NAal(n, k) = NAam(n, n − k − 1) ⇔ NAal ≤p NAam (10.76)

10.7.4 Reduction Relations in Fibonacci Related Problems


Consider the number of nodes in a Fibonacci tree of height n problem, or simply FTN,
defined recursively in eqn (3.33) on page 140. We do not need to solve the problem recursively
or by using a dynamic programming technique as long as the Fibonacci numbers are given
to us by the following reduction relation:
Theorem 10.8. The recurrence relation of FTN in eqn (3.33) is equivalent to the following
eqn (10.77), where F (h) is the hth Fibonacci number.

FTN(h) = F (h + 3) − 1 ⇔ FTN ≤p FIB (10.77)

Proof. (by strong induction) Base cases: If h = 0, (FTN(0) = 1) = (F(3) − 1 = 2 − 1 = 1).
If h = 1, (FTN(1) = 2) = (F(4) − 1 = 3 − 1 = 2).
Inductive step: Assume that FTN(j) = F(j + 3) − 1 is true for all positive integers j where
1 < j ≤ h. Show FTN(h + 1) = F(h + 4) − 1 is also true.

FTN(h + 1) = FTN(h) + FTN(h − 1) + 1           by eqn (3.33)
           = F(h + 3) − 1 + F(h + 2) − 1 + 1   by assumption
           = F(h + 4) − 1                      by eqn (5.29)  □

Table 10.1: Fibonacci and its related sequences.

n 0 1 2 3 4 5 6 7 8 9 10 11 12 ···
FIB(n) 0 1 1 2 3 5 8 13 21 34 55 89 144 ···
FTN(n) 1 2 4 7 12 20 33 54 88 143 232 376 609 ···
FRC(n) 1 1 3 5 9 15 25 41 67 109 177 287 465 ···
LUC(n) 2 1 3 4 7 11 18 29 47 76 123 199 322 ···

Another simple problem that reduces to the Fibonacci number is the number of recursive
calls problem to compute the nth Fibonacci number, or simply FRC, defined recursively in
eqn (5.31) on page 247. Even though both FTN and FRC problems have the same recursive
call part, g(n) = g(n − 1) + g(n − 2) + 1, their basis cases are different resulting in completely
different sequences. FRC has the following reduction relation to the nth Fibonacci number
problem:

FRC(n) = 2 FIB(n + 1) − 1  ⇔  FRC ≤p FIB    (10.78)
FRC(n) = 2 FTN(n − 2) + 1  ⇔  FRC ≤p FTN    (10.79)

Eqn (10.79) suggests that FRC can be solved if an algorithm for FTN is known. Proofs
using strong induction for eqns (10.78) and (10.79) are left for exercises.
In number theory, various relationships between Fibonacci and Lucas numbers have
been studied (see [80][145, p 35] for a list of such relationships). Among them, the following
relationships are of great interest in designing reduction based algorithms:

∀n ∈ Z, n ≥ 1 → L(n) = F(n + 1) + F(n − 1)  ⇔  LUC ≤m_p FIB    (10.80)
∀n ∈ Z, n ≥ 1 → L(n) = F(2n)/F(n)  ⇔  LUC ≤m_p FIB    (10.81)
∀n ∈ Z, n ≥ 1 → F(n) = (L(n − 1) + L(n + 1))/5  ⇔  FIB ≤m_p LUC    (10.82)

The nth Lucas number can be expressed in terms of Fibonacci numbers, as in eqns (10.80)
and (10.81), and thus the nth Lucas number problem defined in eqn (5.63) (LUC) reduces
to the nth Fibonacci number Problem 5.8 (FIB): LUC ≤m_p FIB. Eqn (10.82) suggests that
FIB ≤m_p LUC.

Theorem 10.9. Eqn (10.80) is true for any positive integer n > 0.

Proof. (using strong induction)
Basis (n = 1) case: (L(1) = 1) = (F(2) + F(0) = 1 + 0 = 1).
Inductive step: Assume L(j) = F(j + 1) + F(j − 1) is true for all j where 0 < j ≤ k; show
that L(k + 1) = F(k + 2) + F(k) is also true.

L(k + 1) = L(k) + L(k − 1)                             by definition
         = F(k + 1) + F(k − 1) + F(k) + F(k − 2)       by strong assumption
         = (F(k) + F(k + 1)) + (F(k − 2) + F(k − 1))   by commutativity
         = F(k + 2) + F(k)                             by definition  □
Since the best known algorithm for FIB is Θ(log n), the multi-reduction based algorithm
to solve LUC using eqns (10.80) or (10.81) also takes Θ(log n).
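A minimal sketch of the LUC ≤m_p FIB reduction of eqn (10.80), with a simple iterative
FIB standing in for the Θ(log n) algorithm:

def fib(n):
    # linear-time stand-in for a Theta(log n) Fibonacci algorithm
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    # LUC <=m_p FIB, eqn (10.80): two invocations of the FIB algorithm
    return fib(n + 1) + fib(n - 1)

print([lucas(n) for n in range(1, 8)])   # [1, 3, 4, 7, 11, 18, 29]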
Theorem 10.10. Eqn (10.82) is true for any positive integer n > 0.
Proof. (using strong induction)
Basis (n = 1) case: (F(1) = 1) = ((L(0) + L(2))/5 = (2 + 3)/5 = 1).
Inductive step: Assume F(j) = (L(j − 1) + L(j + 1))/5 is true for all j where 0 < j ≤ k;
show that F(k + 1) = (L(k) + L(k + 2))/5 is also true.

F(k + 1) = F(k) + F(k − 1)                                 by definition
         = (L(k − 1) + L(k + 1))/5 + (L(k − 2) + L(k))/5   by strong assumption
         = ((L(k − 2) + L(k − 1)) + (L(k) + L(k + 1)))/5   by association
         = (L(k) + L(k + 2))/5                             by definition  □
Many other multi-reduction relations between FIB and LUC can be found in [80][145,
p 35] and some proofs are left for exercise. The proof for the multi-reduction relation in
eqn (10.81) is quite tricky by induction, but it becomes trivial if Binet's formula is used.
Let ϕ be the golden ratio, as given in eqn (10.83).

ϕ = (1 + √5)/2 = 1.618033988749894848204586834365638117...    (10.83)
Binet's formulae for Fibonacci and Lucas numbers [80] are as follows:

F(n) = (ϕ^n − (1 − ϕ)^n)/√5  ⇔  FIB ≤m_p POW    (10.84)
L(n) = ϕ^n + (1 − ϕ)^n  ⇔  LUC ≤m_p POW    (10.85)

Correctness proofs for eqns (10.84) and (10.85) are left for exercise Q. 10.24. Computational
time complexities of the Binet's formula based algorithms in eqns (10.84) and (10.85)
are Θ(log n). Since ϕ and √5 are irrational numbers, these algorithms can be considered
approximate algorithms. The identities in eqns (10.84) and (10.85), however, are useful in
proving the correctness of eqn (10.81).
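A minimal sketch of the Binet based approach (floating point, hence approximate; the
rounding hides the error only while n is small enough):

import math

PHI = (1 + math.sqrt(5)) / 2          # the golden ratio, eqn (10.83)

def fib_binet(n):
    # FIB <=m_p POW, eqn (10.84)
    return round((PHI ** n - (1 - PHI) ** n) / math.sqrt(5))

def lucas_binet(n):
    # LUC <=m_p POW, eqn (10.85)
    return round(PHI ** n + (1 - PHI) ** n)

print(fib_binet(10), lucas_binet(10))   # 55 123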
Theorem 10.11. Eqn (10.81) is true for any positive integer n > 0.
Proof.

F(n) L(n) = ((ϕ^n − (1 − ϕ)^n)/√5) × (ϕ^n + (1 − ϕ)^n)   by Binet's formulae
          = (ϕ^2n − (1 − ϕ)^2n)/√5                        by algebra
          = F(2n)                                         by eqn (10.84)

Dividing both sides by F(n) gives L(n) = F(2n)/F(n).  □
(Figure 10.39: Multi-reduction with strong inductive programming framework: the input
Ix is transformed into Iy, a strong inductive algorithm for py produces the outputs
Oy1 ∼ Oym of all sub-problems in one pass, and an output transformation combines them
into Ox.)

10.8 Combining with Strong Inductive Programming


In Chapter 5, when we face a problem, it is recommended that we derive a recurrence
relation by looking at a small toy example. Suppose we fail to derive any recurrence relation
for a problem. This section provides a possible strategy for such problems. The strategy
is to change the harder problem to an easier problem, and to think about reduction. A
solution for an easier problem is often a stepping stone that leads to the solution for the
harder problem. One typical easier version for many problems involving sub-sequences is
an ‘ending-at’ problem. That is, since the sub-sequence of interest must end at a certain
position, the desired output for the original harder problem is often directly related to
the easier ‘ending-at’ version problem. The following example problems emphasize multi-
reduction to the ‘ending-at’ version problems.

10.8.1 Maximum Prefix Sum


Consider the problem of finding the maximum prefix sum in a list.

Problem 10.4. Maximum prefix sum, max_prefixsum(A1∼n )

Input: A sequence A of n numbers
Output: max_{k=0∼n} Σ_{i=1}^{k} ai

For example, if A = ⟨3, −1, 5, −3, −3, 7, 4, −1⟩ and n = 8, then the output is 12, as
illustrated in Figure 10.40. No previous chapter's algorithm design paradigm seems to work
for this problem. One can utilize an easier problem, the prefix sum Problem 2.10, or
simply PFS, defined on page 53, whose Algorithm 2.13 is known. In order to derive a multi-
reduction MPFS ≤m_p PFS based algorithm, the table with a toy example in Figure 10.40
may be helpful.
i            0   1   2   3   4   5   6   7   8
A1∼i             3  −1   5  −3  −3   7   4  −1
PFS(A1∼i)    0   3   2   7   4   1   8  12  11
MPFS(A1∼i)   0   3   3   7   7   7   8  12  12

Figure 10.40: MPFS ≤m_p PFS.

The multi-reducibility relationship can also be stated as an equation in eqn (10.86),

which is straight from the formal definition of Problem 10.4.


MPFS(A1∼n) = 0                        if n = 0
           = max_{i=0∼n} PFS(A1∼i)    if n > 0      ⇔  MPFS ≤m_p PFS    (10.86)

Now an algorithm using the multi-reduction (MPFS ≤m_p PFS) paradigm can be written
as follows:

Algorithm 10.24. Maximum prefix sum

max prefixsum(A1∼n )
PS = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
MPS = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 to n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
PS = PS +ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if PS > MPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
MPS = PS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return MPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7

The computational time complexity of Algorithm 10.24 is clearly Θ(n). No large extra
space is necessary; only two scalar variables are used in Algorithm 10.24.

10.8.2 Longest Increasing Sub-sequence


Consider the longest increasing sub-sequence Problem 10.1, or LIS in short, defined on
page 572. So as to devise algorithms based on the previous chapters’ design paradigms, a
table of solutions for a toy example is often useful, as shown in Figure 10.41 (a). Even
after staring at this example for a long while, one will not find any direct recurrence
relation from which an inductive, divide and conquer, or strong inductive programming
algorithm can be devised. For such problems, one technique is to change the problem to
an easier one.

A1∼n =        2 9 5 7 4 8 3
LIS(A1∼n) =   1 2 2 3 3 4 4
(a) A toy example of LIS(A1∼n)

n  A1∼n              LISe(A1∼n)         LIS(A1∼n) = max(LISe(A1∼n))
1  2                 1                  1
2  2 9               1 2                2
3  2 9 5             1 2 2              2
4  2 9 5 7           1 2 2 3            3
5  2 9 5 7 4         1 2 2 3 2          3
6  2 9 5 7 4 8       1 2 2 3 2 4        4
7  2 9 5 7 4 8 3     1 2 2 3 2 4 2      4
(b) LIS ≤m_p LISe ⇔ LIS(A1∼n) = max(LISe(A1∼n))

Figure 10.41: Longest increasing sub-sequence and LISe



Consider the longest increasing sub-sequence ending at the nth position problem (LISe),
which is an easier problem: the solution sequence must include the last element. Since a
solution for LIS must end at some position, the straightforward multi-reducibility
relationship can be stated, as in eqn (10.87).

LIS(A1∼n) = 0                          if n = 0
          = max_{i=1∼n} LISe(A1∼i)     if n > 0      ⇔  LIS ≤m_p LISe    (10.87)

A higher order recurrence relation for LISe can be derived as follows:

LISe(A1∼n) = max_{k ∈ {j | j<n ∧ aj<an}} LISe(A1∼k) + 1   if n > 1 ∧ ∃j ∈ {1 ∼ n−1}, aj < an
           = 1                                             otherwise    (10.88)

A strong inductive programming algorithm can be devised immediately based on the recur-
rence relation in eqn (10.88). Its pseudo code is as follows:

Algorithm 10.25. Dynamic LISe longest increasing sub-sequence

dynamic LISe(A1∼n )
declare a table T1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
t1 = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
ti = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for j = i − 1 down to 1 . . . . . . . . . . . . . . . . . . . . . . . . . 5
if aj < ai ∧ tj + 1 > ti , ti = tj + 1 . . . . . . . . . 6
return T1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7

If we change the return statement in line 7 to 'return max(T1∼n)', Algorithm 10.25
becomes the multi-reduction algorithm for LIS. The strong inductive programming Algo-
rithm 10.25 for LISe and the multi-reduction (LIS ≤m_p LISe) based algorithm for LIS are
illustrated in Figure 10.41 (b) on the example, A = ⟨2, 9, 5, 7, 4, 8, 3⟩, whose solution is
LIS(A) = ⟨2, 5, 7, 8⟩. Computational time and space complexities of Algorithm 10.25 are
O(n²) and Θ(n), respectively. Thus, the computational time and space complexities of
the multi-reduction based algorithm in eqn (10.87) are also O(n²) and Θ(n), respectively.
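A minimal Python sketch of Algorithm 10.25 together with the LIS ≤m_p LISe reduction
(returning the length only):

def lis_length(a):
    # t[i] = LISe(A[0..i]): length of the longest increasing sub-sequence
    # ending at position i, by the recurrence of eqn (10.88)
    n = len(a)
    t = [1] * n
    for i in range(1, n):
        for j in range(i):
            if a[j] < a[i] and t[j] + 1 > t[i]:
                t[i] = t[j] + 1
    # LIS <=m_p LISe, eqn (10.87): the answer ends at some position
    return max(t, default=0)

print(lis_length([2, 9, 5, 7, 4, 8, 3]))   # 4, e.g., <2, 5, 7, 8>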

10.8.3 Longest Alternating Sub-sequence


Longest alternating sub-sequence problems include the longest up-down and down-up
sub-sequence problems, abbreviated as LUDS and LDUS, respectively. As a music applica-
tion, the LUDS and LDUS of the Auld Lang Syne melody are shown in Figure 10.42 (a)
and (b). The longest up-down sub-sequence problem is defined as follows:

Problem 10.5. Longest Up-down Subsequence


Input: A sequence A of n quantifiable elements
Output: S, a sub-sequence of A such that |S| is maximized
where S is an up-down sequence.
(Figure 10.42: Longest alternating sub-sequences of the Auld Lang Syne melody:
(a) longest up-down sub-sequence; (b) longest down-up sub-sequence.)

As before in the longest increasing sub-sequence Problem 10.1, it would be futile to
try to derive a direct recurrence relation for LUDS. Consider the longest up-down sub-
sequence ending at the nth position problem (LUDSe), which is an easier problem: the
solution sequence must include the last element. Since a solution for LUDS must end at
some position, the straightforward multi-reducibility relationship can be stated, as in
eqn (10.89).

LUDS(A1∼n) = 0                          if n = 0
           = max_{i=1∼n} LUDSe(A1∼i)    if n > 0      ⇔  LUDS ≤m_p LUDSe    (10.89)

Unlike the preceding LIS problem, in which there is only one "ending at" problem,
there are two kinds of "ending at" cases for LUDS: one is the up-turn ending ⟨↗↘ · · · ↗⟩
case, or simply LUDSu, and the other is the down-turn ending ⟨↗↘ · · · ↘⟩ case, or LUDSd
in short. Whichever is bigger is the LUDSe, as given in eqn (10.90).

LUDSe(A1∼n) = max(LUDSu(A1∼n), LUDSd(A1∼n))    (10.90)

Higher order cross recurrence relations between LUDSu and LUDSd are given in
eqns (10.91) and (10.92); LUDSu is defined by LUDSd in eqn (10.91), while LUDSd is
defined by LUDSu in eqn (10.92).

LUDSu(A1∼n) = max_{k ∈ {j | j<n ∧ aj<an}} LUDSd(A1∼k) + 1
                   if n > 1 ∧ ∃j ∈ {1 ∼ n−1}, aj < an
            = 1    otherwise    (10.91)

LUDSd(A1∼n) = max_{k ∈ cond} LUDSu(A1∼k) + 1
                   if n > 1 ∧ ∃i, j ∈ {1 ∼ n−1}, (i < j ∧ ai < aj ∧ aj > an)
            = 1    otherwise    (10.92)

where cond = {j | j < n ∧ aj > an ∧ LUDSu(A1∼j) > 1}

A strong inductive programming algorithm for LUDSe can be devised based on the recur-
rence relations in eqns (10.90 ∼ 10.92). Its pseudo code is as follows:
n  A1∼n             LUDSu(A1∼n)      LUDSd(A1∼n)      LUDS(A1∼n)
1  1                1                1                1
2  1 3              1 2              1 1              2
3  1 3 6            1 2 2            1 1 1            2
4  1 3 6 4          1 2 2 2          1 1 1 3          3
5  1 3 6 4 1        1 2 2 2 1        1 1 1 3 3        3
6  1 3 6 4 1 5      1 2 2 2 1 4      1 1 1 3 3 3      4
7  1 3 6 4 1 5 2    1 2 2 2 1 4 2    1 1 1 3 3 3 5    5

Figure 10.43: Longest up-down sub-sequence and LUDSe

Algorithm 10.26. Dynamic LUDSe longest up-down sub-sequence ending at n

dynamic LUDSe(A1∼n )
declare tables U1∼n and D1∼n . . . . . . . . . . . . . . . . . 1
u1 = 1 and d1 = 1 . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 2 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
   ui = 1 and di = 1 . . . . . . . . . . . . . . . . . . . . . . . 4
   for j = i − 1 down to 1 . . . . . . . . . . . . . . . . . . . . 5
      if aj < ai ∧ dj + 1 > ui , ui = dj + 1 . . . . . . . . . . 6
      if aj > ai ∧ uj > 1 ∧ uj + 1 > di , di = uj + 1 . . . . . 7
return max(dn , un ) . . . . . . . . . . . . . . . . . . . . . . . 8

If we change the return statement in line 8 to 'return max(max(D1∼n), max(U1∼n))',
Algorithm 10.26 becomes the multi-reduction algorithm for LUDS. The strong inductive
programming Algorithm 10.26 for LUDSe and the multi-reduction (LUDS ≤m_p LUDSe)
based algorithm for LUDS are illustrated in Figure 10.43 on the example
A = ⟨1, 3, 6, 4, 1, 5, 2⟩, whose solution is LUDS(A) = ⟨1, 3, 1, 5, 2⟩. Computational time
and space complexities of Algorithm 10.26 are O(n²) and Θ(n), respectively. Thus, the
computational time and space complexities of the multi-reduction based algorithm in
eqn (10.89) are also O(n²) and Θ(n), respectively.

10.8.4 Longest Palindromic Consecutive Sub-sequence


Consider the longest palindromic consecutive sub-sequence problem, or simply LPCS,
where the sub-sequence must be consecutive. It is formally defined as follows:

Problem 10.6. Longest palindromic consecutive sub-sequence


Input: A string A1∼n of symbols
Output: A sub-string Ab∼e or |Ab∼e | such that e − b + 1 is maximized
where 1 ≤ b ≤ e ≤ n and isPalindrome(Ab∼e ) = T

It differs from the previous LPS Problem 6.8 defined on page 316, where the sub-sequence
need not be consecutive. LPCS(⟨b, a, b, b, a, b, a, a, b⟩) = 6, ⟨b, a, b, b, a, b⟩, whereas
LPS(⟨b, a, b, b, a, b, a, a, b⟩) = 7, ⟨b, a, b, a, b, a, b⟩ or ⟨b, a, b, b, b, a, b⟩. While the two

dimensional strong inductive programming Algorithm 6.19 was possible for LPS, it is not
easy to derive a higher order recurrence relation for LPCS. Consider the related problem
which finds all of the palindromic consecutive sub-sequences; then, we can simply return
the longest one for the LPCS problem.
Recall checking whether a string is a palindrome Problem 2.22, or simply PLD. The
recurrence relation for PLD in Lemma 2.10 can be stated in terms of the sub-string in
eqn (10.93).

PLD(Ab∼e) = True              if b = e ∨ (e = b + 1 ∧ ab = ae)
          = False             if e < b ∨ (e > b ∧ ab ≠ ae)
          = PLD(Ab+1∼e−1)     if e > b + 1 ∧ ab = ae    (10.93)

(Figure 10.44: Longest palindromic consecutive sub-sequence illustration on
A = ⟨b, a, b, b, a, b, a, a, b⟩: the upper-right triangular table holds PLD(Ab∼e) for every
pair b ≤ e; the main diagonal is all T, and the farthest diagonal still containing a T, here
the 6th, gives LPCS(A) = 6.)

A tabulation method based on eqn (10.93) can compute all cells in the two-dimensional
table, as seen in Figure 10.44, where the row and column of a cell are b and e, respectively.
First, the main diagonal cells are set to 'T' by the (b = e) basis case. The second diagonal
cells, where e = b + 1, are set to 'T' if ab = ae and to 'F' otherwise. The rest of the cells in
the upper-right triangle portion are computed by inductive steps based on the recursive
part of eqn (10.93). Let the main diagonal be the first diagonal, and let the kth diagonal
be the (k − 1)th diagonal away from the main diagonal. Cells in the kth diagonal only
require values in the (k − 2)th diagonal. Hence, only a two-row cylindrical array is required
to roll over the two-dimensional table. Note that when two consecutive diagonals contain
only false values, the cells in the remaining upper-right triangle are all 'F' automatically
and, thus, the algorithm can halt. A pseudo code utilizing a cylindrical array to find all
palindromic consecutive sub-sequences is stated below. It rolls over the odd diagonals first
and then over the even diagonals.
Algorithm 10.27. Find all palindromic consecutive sub-sequences

PLDall(A1∼n )
declare a table T1∼n . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n, ti = T . . . . . . . . . . . . . . . . . . . . . . . 2
l = 3 and Flag = T . . . . . . . . . . . . . . . . . . . . . . . . . 3
while l ≤ n and Flag . . . . . . . . . . . . . . . . . . . . . . . . 4
   Flag = F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
   for i = 1 ∼ n − l + 1 . . . . . . . . . . . . . . . . . . . . . . 6
      if ti = T and ai = ai+l−1 . . . . . . . . . . . . . . . . . . 7
         Flag = T . . . . . . . . . . . . . . . . . . . . . . . . . . 8
         Sol = Sol ∪ {(i, i + l − 1)} . . . . . . . . . . . . . . . 9
      if ti = T and ai ≠ ai+l−1 , ti = F . . . . . . . . . . . . 10
   l = l + 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
l = 4 and Flag = F . . . . . . . . . . . . . . . . . . . . . . . . 12
for i = 1 ∼ n − 1, . . . . . . . . . . . . . . . . . . . . . . . . 13
   if ai = ai+1 , ti = T and Flag = T . . . . . . . . . . . . . . 14
   else, ti = F . . . . . . . . . . . . . . . . . . . . . . . . . . 15
while l ≤ n and Flag . . . . . . . . . . . . . . . . . . . . . . . 16
   Flag = F . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
   for i = 1 ∼ n − l + 1 . . . . . . . . . . . . . . . . . . . . . 18
      if ti = T and ai = ai+l−1 . . . . . . . . . . . . . . . . . 19
         Flag = T . . . . . . . . . . . . . . . . . . . . . . . . . 20
         Sol = Sol ∪ {(i, i + l − 1)} . . . . . . . . . . . . . . 21
      if ti = T and ai ≠ ai+l−1 , ti = F . . . . . . . . . . . . 22
   l = l + 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
return Sol . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

The computational time and space complexities of Algorithm 10.27 are O(n²) and Θ(n²),
respectively. Once PLD for all sub-strings is computed and stored, various problems can
be trivially solved. One such problem is LPCS.

LPCS(A1∼n) = max (e − b + 1) such that PLD(Ab∼e) = T  ⇔  LPCS ≤m_p PLD    (10.94)

The maximum k such that the kth diagonal contains a T is LPCS(A1∼n). Hence, an algo-
rithm based on (LPCS ≤m_p PLD) in eqn (10.94) can be stated by slightly modifying Al-
gorithm 10.27. It is left for an exercise, as it is essentially the same as Algorithm 10.27.
The computational time complexity of this multi-reduction based algorithm is O(n²). It
should be noted that various linear time algorithms have been developed; the earliest
one, published in [118], is known as Manacher's algorithm.
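For reference, a compact O(n²) sketch of LPCS itself, expanding around each center
rather than rolling diagonals (an alternative to modifying Algorithm 10.27; the function
name is this sketch's own):

def lpcs(s):
    # longest palindromic consecutive sub-sequence (substring):
    # grow every odd- and even-length palindrome from its center
    best = ""
    for center in range(len(s)):
        for b, e in ((center, center), (center, center + 1)):
            while b >= 0 and e < len(s) and s[b] == s[e]:
                b, e = b - 1, e + 1
            if e - b - 1 > len(best):
                best = s[b + 1:e]
    return best

print(lpcs("babbabaab"))   # 'babbab', so LPCS = 6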

10.9 Consecutive Sub-sequence Arithmetic Problems


Consecutive sub-sequence arithmetic problems or subarray arithmetic problems include
maximum sum, minimum sum, maximum product, and minimum product problems.

10.9.1 minCSS ≤p MCSS


Consider the maximum and minimum consecutive sub-sequence sum problems (MCSS
and minCSS in short). MCSS Problem 1.15 was defined on page 22 and minCSS was dealt
with as an exercise in Q 1.17 on page 30. Suppose an algorithm is known for MCSS, and one
is trying to come up with an algorithm for minCSS using a reduction to MCSS. First, all
elements in the input sequence, A, are negated. Let the sequence of negated elements be A′.
Then, the algorithm for MCSS can be used to find the maximum sum for A′. Finally, the
output of MCSS(A′) can be negated to generate the output of minCSS(A). This reduction
algorithm (minCSS ≤p MCSS) is stated as follows:

Algorithm 10.28. minCSSrd(A1∼n )

minCSSrd(A1∼n )
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
ai = −ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
MaxS = MCSSalgorithmX(A1∼n ) . . . . . . . . . . . . . . . . . 3
return −MaxS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Similar to Algorithm 10.28, MCSS ≤p minCSS. Hence, these two problems are dual
problems; MCSS ≡p minCSS, as illustrated in Figure 10.45 on toy examples.

MCSS = 12                                      minCSS = −12
3 −1 5 −3 −3 7 4 −1     ←→     −3 1 −5 3 3 −7 −4 1
minCSS = −6                                    MCSS = 6
3 −1 5 −3 −3 7 4 −1     ←→     −3 1 −5 3 3 −7 −4 1

Figure 10.45: MCSS ≡p minCSS

Theorem 10.12. Algorithm 10.28 correctly finds a minimum consecutive sub-sequence sum:
minCSS(A) = −1 × MCSS(A′) where ∀i ∈ {1, · · · , n}, a′i = −1 × ai.

Proof. Let b and e be the beginning and ending indices of a consecutive sub-sequence with
a minimum sum. Suppose minCSS(A) ≠ −1 × MCSS(A′). That means that while
minCSS(A) = Σ_{i=b}^{e} ai, MCSS(A′) ≠ Σ_{i=b}^{e} a′i, and there exists (b′, e′) such that
Σ_{i=b′}^{e′} a′i > Σ_{i=b}^{e} a′i. Clearly, Σ_{i=b′}^{e′} a′i = Σ_{i=b′}^{e′} (−ai), so
Σ_{i=b′}^{e′} ai < Σ_{i=b}^{e} ai. This contradicts that Σ_{i=b}^{e} ai is a minimum. Therefore,
minCSS(A) = −1 × MCSS(A′).  □

10.9.2 minCSPp ≤p MCSPp


Consider the maximum and minimum consecutive positive real number sub-sequence
product problems (MCSP and minCSP) considered as exercises in Q 3.18 on page 147 and
Q 3.19 on page 148, respectively. Suppose an algorithm is known for MCSPp and we are
trying to come up with an algorithm for minCSPp using a reduction to MCSPp. First,
all elements in the input sequence, A, are converted to their reciprocal. Let the sequence
of reciprocal elements be A0 . Then the algorithm for MCSPp can be used to find the
maximum product for A0 . Finally, the reciprocal of the output of MCSPp(A0 ) is the output
of minCSPp(A). This reduction algorithm (minCSPp ≤p MCSPp) is stated as follows:

Algorithm 10.29. Minimum consecutive sub-sequence product (pos) by reduction

minCSPp2MCSPp(A1∼n )
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
ai = 1/ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
MaxP = MCSPalgorithmX(A1∼n ) . . . . . . . . . . . . . . . . 3
return 1/MaxP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Similar to Algorithm 10.29, MCSPp ≤p minCSPp. Hence, these two problems are dual
problems; MCSPp ≡p minCSPp, as illustrated in Figure 10.46 on toy examples.

MCSPp(A) = 1/0.04 = 25.0                               minCSPp(A′) = 1/25 = 0.04
2.0 0.5 5.0 2.0 2.5 0.2 2.5 0.5     ←→     0.5 2.0 0.2 0.5 0.4 5.0 0.4 2.0
minCSPp(A) = 1/5 = 0.2                                 MCSPp(A′) = 1/0.2 = 5
2.0 0.5 5.0 2.0 2.5 0.2 2.5 0.5     ←→     0.5 2.0 0.2 0.5 0.4 5.0 0.4 2.0

Figure 10.46: MCSPp ≡p minCSPp

Theorem 10.13. Algorithm 10.29 correctly finds a minimum consecutive positive number
sub-sequence product:
minCSPp(A) = 1/MCSPp(A′) where ∀i = 1 ∼ n, a′i = 1/ai.

Proof. Let b and e be the beginning and ending indices of a consecutive positive number
sub-sequence with a minimum product. Suppose minCSPp(A) ≠ 1/MCSPp(A′). That
means that while minCSPp(A) = Π_{i=b}^{e} ai, MCSPp(A′) ≠ Π_{i=b}^{e} a′i, and there exists
(b′, e′) such that Π_{i=b′}^{e′} a′i > Π_{i=b}^{e} a′i. If Π_{i=b′}^{e′} (1/ai) > Π_{i=b}^{e} (1/ai), then
Π_{i=b′}^{e′} ai < Π_{i=b}^{e} ai, which contradicts that Π_{i=b}^{e} ai is a minimum. Therefore,
minCSPp(A) = 1/MCSPp(A′).  □

10.9.3 MCSPp ≤p MCSS


Suppose an algorithm is known for MCSS (the maximum consecutive sub-sequence sum
Problem 1.15) and we are trying to come up with an algorithm for MCSPp (the maximum
consecutive positive real number sub-sequence product problem) using a reduction to MCSS.
The logarithm function becomes useful for MCSPp ≤p MCSS. All elements in the input
sequence must be positive since the logarithm of a negative value is impossible. Note that
the maximum consecutive sub-sequence product problem where the input elements are real
numbers, or simply MCSP, was considered as an exercise in Q 1.18 on page 31. MCSP and
MCSPp are different problems.
An reduction based algorithm (MCSPp ≤p MCSS) utilizing the logarithm function is
stated below in Algorithm 10.30. First, all elements in the input sequence, A, are converted
to their logarithm. Let the sequence of logarithm of elements be A0 . Then the algorithm
for MCSS can be used to find the maximum sum for A0 . Finally, the output of MCSS(A0 )
can be powered to generate the output of MCSP(A).

Algorithm 10.30. MCSPrd(A1∼n )

MCSPrd(A1∼n )
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
ai = log2 ai . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
MaxS = MCSSalgorithmX(A1∼n ) . . . . . . . . . . . . . . . . . . . 3
return 2^MaxS . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Similar to Algorithm 10.30, MCSS ≤p MCSPp, minCSPp ≤p minCSS, and minCSS ≤p
minCSPp; they are left for exercises. Figure 10.47 illustrates the duality relations:
MCSS ≡p MCSPp and minCSS ≡p minCSPp.

MCSPp = 2^4 = 16                                MCSS = log 16 = 4
0.5 2.0 1.0 4.0 1.0 2.0 0.25 2.0     ←→     −1 1 0 2 0 1 −2 1
(a) MCSS ≡p MCSPp
minCSPp = 2^−2 = 0.25                           minCSS = log 0.25 = −2
0.5 2.0 1.0 4.0 1.0 2.0 0.25 2.0     ←→     −1 1 0 2 0 1 −2 1
(b) minCSS ≡p minCSPp

Figure 10.47: MCSS ≡p MCSPp and minCSS ≡p minCSPp

Theorem 10.14. Algorithm 10.30 correctly finds a maximum consecutive positive number
sub-sequence product:
MCSPp(A) = 2^MCSS(A′) where ∀i = 1 ∼ n, a′i = log2 ai.

Proof. Let b and e be the beginning and ending indices of a consecutive sub-sequence with
a maximum product. Suppose MCSPp(A) ≠ 2^MCSS(A′). That means that while
MCSPp(A) = Π_{i=b}^{e} ai, MCSS(A′) ≠ Σ_{i=b}^{e} a′i, and there exists (b′, e′) such that
Σ_{i=b′}^{e′} a′i > Σ_{i=b}^{e} a′i. Clearly, Σ_{i=b′}^{e′} a′i = log2(Π_{i=b′}^{e′} ai) and
Σ_{i=b}^{e} a′i = log2(Π_{i=b}^{e} ai), so Π_{i=b′}^{e′} ai > Π_{i=b}^{e} ai. This contradicts that
Π_{i=b}^{e} ai is a maximum. Therefore, MCSPp(A) = 2^MCSS(A′).  □

(Figure 10.48: Basic reduction techniques for arithmetic optimization problems: negation
converts between maximizing and minimizing a sum, the reciprocal converts between
maximizing and minimizing a product, and the logarithm and power functions convert
between sums and products.)



Figure 10.48 illustrates some basic reduction techniques applicable to numerous max-
imization and minimization problems in general. Negating the elements enables reduction
between maximization and minimization problems involving summations. Taking the recip-
rocal of each element enables reduction between maximization and minimization problems
involving products. Most problems involving products of positive numbers can be reduced
to problems involving summations by the logarithm function. Conversely, problems involving
summations can be reduced to problems involving products by the power function.
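A minimal sketch of these conversions, with a placeholder mcss (Kadane's algorithm,
introduced in the next subsection, serves as the linear time MCSS algorithm):

import math

def mcss(a):
    # stand-in MCSS algorithm (Kadane's algorithm, Algorithm 10.31)
    best = cur = 0
    for x in a:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best

def min_css(a):
    # minCSS <=p MCSS: negate, maximize, negate back (Algorithm 10.28)
    return -mcss([-x for x in a])

def mcss_p(a):
    # MCSPp <=p MCSS: log2, maximize the sum, exponentiate (Algorithm 10.30)
    return 2 ** mcss([math.log2(x) for x in a])

print(min_css([3, -1, 5, -3, -3, 7, 4, -1]))              # -6
print(mcss_p([0.5, 2.0, 1.0, 4.0, 1.0, 2.0, 0.25, 2.0]))  # 16.0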

10.9.4 Kadane’s Algorithm


The divide and conquer paradigm provides Θ(n log n) algorithms for most consecutive
sub-sequence arithmetic problems in Chapter 3. Reduction based algorithms for consecutive
sub-sequence arithmetic problems in the previous subsections depend on the complexity of
an algorithm for the maximum consecutive sub-sequence sum Problem 1.15, or MCSS in
short. If the divide and conquer Algorithm 3.6 is used, Algorithms 10.28, 10.29, and 10.30
would take Θ(n log n) as well. They would take linear time if a linear time algorithm for the
MCSS problem is used, as the input transformation takes linear time and the output
transformation can be assumed to be constant. Indeed, Joseph Kadane devised a linear time
algorithm for the MCSS problem [21], and Kadane's algorithm has been considered a classic
example of dynamic programming [95, p 504].
Before embarking on Kadane’s Algorithm, let’s try to utilize the strong inductive pro-
gramming paradigm for the MCSS problem. One may list all sub-problems’ solutions with
a toy example, as shown in Figure 10.49 (a), and try to come up with a higher order re-
currence relation for a while. If no immediate higher order recurrence relation is found, no
algorithm based on the strong inductive programming paradigm may exist.

A1∼i −3 1 4 3 −4 7 −4 −1
MCSS(A1∼i ) 0 1 5 8 8 11 11 11
(a) a table for MCSS sub-problems’ solutions
A1∼i −3 1 4 3 −4 7 −4 −1
MCSSe(A1∼i ) 0 1 5 8 4 11 7 6
(b) a table for MCSSe sub-problems’ solutions
A1∼i −3 1 4 3 −4 7 −4 −1
MCSSe(A1∼i ) 0 1 5 8 4 11 7 6
MCSS(A1∼i ) 0 1 5 8 8 11 11 11
(c) Kadane’s algorithm illustration.

Figure 10.49: Multi-reduction algorithm for MCSS

Before giving up, the multi-reduction paradigm may be applied. First, convert the
problem into an easier problem, typically an "ending-at" problem: here, the maximum
consecutive sub-sequence sum that includes the ith position as the ending point (MCSSe).
The table of all sub-problems' solutions for MCSSe is given in Figure 10.49 (b). When the
running value becomes negative, it is reset to zero. A first order linear recurrence for
MCSSe can be derived as follows:

MCSSe(A1∼n) = MCSSe(A1∼n−1) + an   if n > 0 ∧ MCSSe(A1∼n−1) + an ≥ 0
            = 0                    otherwise    (10.95)

Now, put the two outputs together to observe the multi-reduction relationship, as shown in
Figure 10.49 (c). As the solution of MCSS(A1∼n) must end at some index e, the following
multi-reduction relationship can be derived:

MCSS(A1∼n) = max_{e=1∼n} (MCSSe(A1∼e))  ⇔  MCSS ≤m_p MCSSe    (10.96)

A pseudo code based on eqn (10.96) can be stated as follows:


Algorithm 10.31. Kadane’s Algorithm
MCSS(A1∼n )
mcss, mcsse = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
mcsse = mcsse + ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if mcsse < 0, mcsse = 0 . . . . . . . . . . . . . . . . . . . . . 4
if mcss < mcsse, mcss = mcsse . . . . . . . . . . . . . . 5
return mcss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
The computational time complexity of Algorithm 10.31 is clearly Θ(n) and only O(1) space
is required.
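A line-for-line Python rendering of Algorithm 10.31:

def kadane(a):
    # mcsse tracks MCSSe(A[1..i]) by eqn (10.95);
    # mcss tracks the running maximum of eqn (10.96)
    mcss = mcsse = 0
    for x in a:
        mcsse = max(0, mcsse + x)
        mcss = max(mcss, mcsse)
    return mcss

print(kadane([-3, 1, 4, 3, -4, 7, -4, -1]))   # 11, matching Figure 10.49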

10.10 Reduction to Matrix Problems


In addition to the graph theory and number theory we have seen in the previous sections,
one more field is very useful for solving many problems: matrix theory. Numerous
algebraic problems, as well as many problems in low level image processing, reduce to
matrix related problems. One immediate reduction is MXP ≤m_p MXM, where MXM and
MXP stand for the matrix multiplication Problem 2.5 defined on page 47 and the matrix
power problem considered on page 151, respectively.
In this section, several problems from number theory and graph theory that reduce to
a matrix power problem are introduced. They are the nth Fibonacci and Kibonacci
number problems and the number of paths of length k problem.

10.10.1 Fibonacci
Consider the finding the nth Fibonacci number Problem 5.8 defined on page 246. A
beautiful matrix representation called a Fibonacci Q-matrix has been utilized to compute
the nth Fibonacci number [81, p 106] and is given in eqn (10.97).
Theorem 10.15. Fibonacci Q-matrix

$$\mathrm{FIB} \le_p \mathrm{MXP} \Leftrightarrow \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n} \text{ for } n \ge 1 \qquad (10.97)$$

Proof. Basis: (n = 1)
$$\begin{pmatrix} F_2 & F_1 \\ F_1 & F_0 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{1}$$
Inductive step: Assuming eqn (10.97) is true, show
$$\begin{pmatrix} F_{n+2} & F_{n+1} \\ F_{n+1} & F_n \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n+1}$$

$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n+1} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n} \times \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix} \times \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} F_{n+1} + F_n & F_{n+1} \\ F_n + F_{n-1} & F_n \end{pmatrix} = \begin{pmatrix} F_{n+2} & F_{n+1} \\ F_{n+1} & F_n \end{pmatrix} \qquad \square$$

Recall the divide and conquer algorithm for the matrix power problem considered in
Exercise Q. 3.31 on page 151. The divide and conquer approach can be applied to the nth
power of the Fibonacci Q-matrix, as given in eqn (10.98).
$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n} = \begin{cases} \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n/2} \times \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n/2} & \text{if } n \text{ is even} \\[2ex] \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{\lfloor n/2 \rfloor} \times \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{\lfloor n/2 \rfloor} \times \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} & \text{if } n \text{ is odd} \\[2ex] \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} & \text{if } n = 1 \end{cases} \qquad (10.98)$$

Figure 10.50 illustrates the divide and conquer algorithm in eqn (10.98). Using a matrix
operation to compute a scalar value may seem perplexing, but it is simply taking one step
back in order to leap forward. The divide and conquer algorithm by eqn (10.98) takes only
Θ(log n) matrix multiplications, each of constant cost. The following pseudo code is
equivalent to eqn (10.98):

1 1  8  F9 F8 
  =  
1 0   F8 F7 

1 1  4 1 1  4
   
1 0  1 0 

1 1  2 1 1  2 1 1  2 1 1  2
       
1 0  1 0  1 0  1 0 

1 1  1 1  1 1  1 1  1 1  1 1  1 1  1 1 
1 0  1 0  1 0  1 0  1 0  1 0  1 0  1 0 
               

 2 1  2 1  2 1  2 1
       
 1 1  1 1  1 1  1 1

5 3 5 3
   
3 2 3 2

 34 21
 
 21 13 

Figure 10.50: Computing the 9th Fibonacci number.



Algorithm 10.32. nth Fibonacci number

FIBQmat(n)
if n = 1, return (1, 1, 1, 0) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else if n is even, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
(f11 , f12 , f21 , f22 ) = FIBQmat(n/2) . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
return (f11^2 + f12 × f21 , f11 × f12 + f12 × f22 ,
        f21 × f11 + f22 × f21 , f21 × f12 + f22^2) . . . . . . . . . . . . . . . . 4
else (n is odd,) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
(f11 , f12 , f21 , f22 ) = FIBQmat(⌊n/2⌋) . . . . . . . . . . . . . . . . . . . . . . . . . 6
return (f11 (f11 + f12 ) + f12 (f21 + f22 ), f11^2 + f12 × f21 ,
        f21 (f11 + f12 ) + f22 (f21 + f22 ), f21 × f11 + f22 × f21 ) . . . . 7

To find the 11th Fibonacci number, F11 , FIBQmat(11) returns (f11 , f12 , f21 , f22 ), where
f11 = F12 , f12 = F11 , f21 = F11 , and f22 = F10 . It should be noted that the computational
complexities and behavior of Algorithm 10.32 are the same as those of Algorithm 7.38, the
divide and conquer memoization with a jumping array, stated on page 402.
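A Python sketch of Algorithm 10.32, with illustrative names, is given below; it returns the
four entries of the nth power of the Q-matrix, so the second entry is F(n):

def fib_qmat(n):
    # Returns (f11, f12, f21, f22) = (F(n+1), F(n), F(n), F(n-1)),
    # following Algorithm 10.32; Theta(log n) 2x2 multiplications.
    if n == 1:
        return (1, 1, 1, 0)
    f11, f12, f21, f22 = fib_qmat(n // 2)
    # square the half power: M^n = (M^(n/2))^2 when n is even
    a = f11 * f11 + f12 * f21
    b = f11 * f12 + f12 * f22
    c = f21 * f11 + f22 * f21
    d = f21 * f12 + f22 * f22
    if n % 2 == 0:
        return (a, b, c, d)
    # odd n: multiply the squared half power by Q = [[1,1],[1,0]] once more
    return (a + b, a, c + d, c)

print(fib_qmat(11)[1])  # prints F(11) = 89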

10.10.2 Kibonacci
The Q-matrix approach generalizes to the Kibonacci numbers. For the full tribonacci
numbers, i.e., KBF with k = 3,

$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}^{n} = \begin{pmatrix} T_{n+2} & T_{n+1} + T_n & T_{n+1} \\ T_{n+1} & T_n + T_{n-1} & T_n \\ T_n & T_{n-1} + T_{n-2} & T_{n-1} \end{pmatrix}$$

In general, KBF ≤p MXP by the k × k companion matrix power:

$$\begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \ddots & \vdots & \vdots \\ \vdots & & \ddots & 0 & \vdots \\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}^{n-k+1} = \begin{pmatrix} K_n & K_{n-1} + K_{n-2} & \cdots & K_{n-1} + K_{n-2} & K_{n-1} \\ K_{n-1} & K_{n-2} + K_{n-3} & \cdots & K_{n-2} + K_{n-3} & K_{n-2} \\ K_{n-2} & K_{n-3} + K_{n-4} & \cdots & K_{n-3} + K_{n-4} & K_{n-3} \\ \vdots & \vdots & & \vdots & \vdots \\ K_{n-k+1} & K_{n-k} + K_{n-k-1} & \cdots & K_{n-k} + K_{n-k-1} & K_{n-k} \end{pmatrix} \qquad (10.99)$$
For the second kind of tribonacci numbers, i.e., KB2 with k = 3,

$$\begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}^{n} = \begin{pmatrix} T_{n+2} & T_{n+1} + T_n & T_{n+1} \\ T_{n+1} & T_n + T_{n-1} & T_n \\ T_n & T_{n-1} + T_{n-2} & T_{n-1} \end{pmatrix}$$

and, in general, KB2 ≤p MXP by the k × k companion matrix power:

$$\begin{pmatrix} 1 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \ddots & \vdots & \vdots \\ \vdots & & \ddots & 0 & \vdots \\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}^{n-k+1} = \begin{pmatrix} K_n & K_{n-1} + K_{n-2} & \cdots & K_{n-1} + K_{n-2} & K_{n-1} \\ K_{n-1} & K_{n-2} + K_{n-3} & \cdots & K_{n-2} + K_{n-3} & K_{n-2} \\ K_{n-2} & K_{n-3} + K_{n-4} & \cdots & K_{n-3} + K_{n-4} & K_{n-3} \\ \vdots & \vdots & & \vdots & \vdots \\ K_{n-k+1} & K_{n-k} + K_{n-k-1} & \cdots & K_{n-k} + K_{n-k-1} & K_{n-k} \end{pmatrix} \qquad (10.100)$$

10.10.3 Number of Path Problem

(a) a sample graph G: a directed graph on {v1, v2, v3, v4} with arcs v1 → v2, v2 → v4,
v3 → v2, v4 → v1, v4 → v2, and v4 → v3; (b) the adjacency matrix A of G; (c) the kth
powers of the adjacency matrix:

$$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \end{pmatrix} \quad A^2 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 2 & 0 & 1 \end{pmatrix} \quad A^3 = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 0 & 2 & 0 & 1 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 2 \end{pmatrix} \quad A^4 = \begin{pmatrix} 0 & 2 & 0 & 1 \\ 1 & 1 & 1 & 2 \\ 0 & 2 & 0 & 1 \\ 2 & 4 & 2 & 1 \end{pmatrix}$$

Figure 10.51: A sample graph for the number of paths of length k problem.

Consider the directed graph, G, with cycles in Figure 10.51 (a). Let PATHk (vx , vy ) be the
set of all paths from vx to vy whose number of arcs is exactly k. If a path is represented
by a sequence of vertices where any two consecutive vertices form an arc, the path sequence
contains exactly k + 1 vertices. For example, there are two possible paths of length 2 from
v4 to v2 : PATHk=2 (v4 , v2 ) = {⟨v4 , v1 , v2 ⟩, ⟨v4 , v3 , v2 ⟩}. There is only one path of length
3 from v4 to v2 : PATHk=3 (v4 , v2 ) = {⟨v4 , v2 , v4 , v2 ⟩}. There are four possible paths of length
4 from v4 to v2 : PATHk=4 (v4 , v2 ) = {⟨v4 , v1 , v2 , v4 , v2 ⟩, ⟨v4 , v2 , v4 , v1 , v2 ⟩, ⟨v4 , v2 , v4 , v3 , v2 ⟩,
⟨v4 , v3 , v2 , v4 , v2 ⟩}.
The problem of finding the number of paths of length k from vx to vy , NPK in short,
can be formally defined as follows:

Problem 10.7. Number of paths of length k problem (NPK)


Input: a graph G, a source node vx ∈ V , a target node vy ∈ V , and k ∈ Z+
Output: |PATHk (vx , vy )|, i.e., the number of paths from vx to vy of length exactly k

While it is challenging to devise an algorithm for this problem from scratch, the following
fact provides a clue. Let A be the adjacency matrix of G. The number of paths of
length k from vx to vy is the (vx , vy )th entry of the A^k matrix, as given in Figure 10.51 (c).
Clearly, the NPK problem reduces to the kth power of the adjacency matrix problem, or simply
MXP, which was considered on page 151; NPK ≤p MXP. This reduction based algorithm,
which utilizes a certain algorithm for the kth power of a matrix to solve the number of paths
of length k Problem 10.7, can be stated in the following equation:

$$\mathrm{NPK} \le_p \mathrm{MXP} \Leftrightarrow \mathrm{npk}(v_x, v_y, k) = s_{v_x, v_y} \text{ where } S = A^k \qquad (10.101)$$

Theorem 10.16. The reduction based algorithm in eqn (10.101) correctly finds the number
of paths of length k from vx to vy .

Proof. (by induction)

Basis step (k = 1): When a graph G is represented by an adjacency matrix, A, each entry
of A represents the number of paths of length one from the row vertex to the column
vertex. By the definition of the adjacency matrix, if the (vx , vy )th entry a_{vx,vy} = 1,
there is an arc from vx to vy , i.e., a path of length 1. If a_{vx,vy} = 0, there is no arc from vx
to vy , i.e., it is impossible to reach vy from vx within a path of length 1.
Inductive step: Assuming that the (vx , vy )th entry of A^k is npk(vx , vy , k), show that the
(vx , vy )th entry of A^{k+1} is npk(vx , vy , k + 1). Let P = A^k . A^{k+1} = A^k × A = P × A. The
(vx , vy )th entry of A^{k+1} is $\sum_{z=1}^{n} p_{v_x,v_z} a_{v_z,v_y}$, and $p_{v_x,v_z} = \mathrm{npk}(v_x, v_z, k)$ by assumption.
By the product rule of basic counting principles [146, p 386], the number of paths of
length k + 1 whose last intermediate vertex is vz is npk(vx , vz , k) × npk(vz , vy , 1) = $p_{v_x,v_z} a_{v_z,v_y}$.
By the sum rule of basic counting principles [146, p 389], $\mathrm{npk}(v_x, v_y, k+1) = \sum_{z=1}^{n} p_{v_x,v_z} a_{v_z,v_y}$. □
z=1

Consider the number of paths Problem 5.14 on a DAG, or simply NPP, considered on
page 258. While NPK asks for the number of paths of length exactly k, NPP asks for the
number of paths of any length.
Let S_k be a matrix whose (vx , vy )th entry is the number of paths of length up to k.
Clearly, $S_k = \sum_{i=1}^{k} A^i$. Since a DAG contains no cycle, the length of any path is finite,
and thus A^p becomes the zero matrix for a sufficiently large p. If A is multiplied to A^k and the
resulting A^{k+1} is the zero matrix, the NPP problem can be solved directly from the matrix
S_k , as illustrated in Figure 10.52. There is one caveat to consider though. When vx = vy ,
npp(vx , vy ) = 1 while s_{vx,vy} = 0.

$$\mathrm{npp}(v_x, v_y) = \begin{cases} 1 & \text{if } v_x = v_y \\ s_{v_x, v_y} & \text{if } v_x \ne v_y \end{cases} \qquad (10.102)$$

$$\text{where } A^{k+1} \text{ is the zero matrix}, \ S = S_k, \text{ and } S_k = \begin{cases} A & \text{if } k = 1 \\ S_{k-1} + A^k & \text{if } k > 1 \end{cases}$$

(a) a sample DAG on {v1, v2, v3, v4} with arcs v1 → v2, v1 → v3, v1 → v4, v2 → v3,
v2 → v4, and v3 → v4, where each node is annotated with its number of paths from v1:
npp(v1 , G) = ⟨1, 1, 2, 4⟩; (b) the adjacency matrix of G; (c) the A^k and S_k matrices for
k = 1 ∼ 4:

$$A = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad A^2 = \begin{pmatrix} 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad A^3 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad A^4 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

$$S_1 = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad S_2 = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad S_3 = \begin{pmatrix} 0 & 1 & 2 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad S_4 = \begin{pmatrix} 0 & 1 & 2 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

Figure 10.52: Solving NPP by matrix multiplications



Eqn (10.102) suggests NPP ≤m p MXM, where MXM stands for the matrix multiplication
problem; NPP can be solved by invoking MXM k times.
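The reduction can be sketched in Python as follows; mat_mul and npk are illustrative
helper names, and repeated squaring as in eqn (10.98) would reduce the k − 1 multiplications
below to O(log k):

def mat_mul(X, Y):
    # multiply two square matrices given as lists of lists
    n = len(X)
    return [[sum(X[i][z] * Y[z][j] for z in range(n)) for j in range(n)]
            for i in range(n)]

def npk(A, x, y, k):
    # number of paths of length k from vertex x to y (0-indexed):
    # the (x, y) entry of A^k, per eqn (10.101)
    S = A
    for _ in range(k - 1):
        S = mat_mul(S, A)
    return S[x][y]

A = [[0, 1, 0, 0],   # adjacency matrix of the graph in Figure 10.51
     [0, 0, 0, 1],
     [0, 1, 0, 0],
     [1, 1, 1, 0]]
print(npk(A, 3, 1, 4))  # prints 4: the paths of length 4 from v4 to v2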

10.11 Exercises
Q 10.1. Consider various order statistics problems and show their reduction relations.

a). MIN ≤p KSM, where MIN stands for the finding the minimum element Problem 2.14
and KSM stands for the finding the kth smallest element problem considered as an
exercise in Q 2.19 on page 85.

b). MAX ≤p KSM, where MAX stands for the finding the maximum element problem
considered as an exercise in Q 2.18 on page 85.

c). MDN ≤p Sort, where MDN stands for the finding the median element problem. For
example, median(⟨6, 4, 9, 2, 5⟩) = 5 and median(⟨8, 4, 9, 1⟩) = 6. It is defined as follows:

Problem 10.8. Median


Input: A sequence A1∼n of n quantifiable elements.
Output: $\frac{1}{2}\big(a'_{\lfloor (n+1)/2 \rfloor} + a'_{\lceil (n+1)/2 \rceil}\big)$ where A′ is the sorted list of A in ascending order.

d). MDN ≤m p KSM.

e). MDN ≤m p KLG, where KLG stands for the finding the kth largest element Prob-
lem 2.15.

Q 10.2. Consider the various subset arithmetic problems considered earlier. Devise a
reduction based algorithm for each of the following problems Px , assuming that another
problem Py 's algorithm is known; i.e., show Px ≤p Py .

a). Recall the select k subset sum minimization problem, or SKSSmin in short, considered
on page 200. It is to select k items out of n numbers such that the sum of these k
numbers is minimized. Devise an algorithm for SKSSmin by reducing to SKSS. SKSS
stands for the select k subset sum maximization Problem 4.1, defined on page 157.

b). Illustrate SKSS ≡p SKSSmin using the following toy example:


A = ⟨5, 2, 9, 4, 0, 8, 7, 1⟩ and k = 3.

c). Recall the select k subset positive product maximization problem, or SKSP in short,
considered on page 200. It is to select k items out of n positive numbers such that the
product of these k numbers is maximized. Devise an algorithm for SKSP by reducing
to SKSS.

d). Illustrate SKSS ≡p SKSP using the following toy example:


A = ⟨0.5, 2.0, 1.0, 4.0, 1.0, 2.0, 0.25, 1.0⟩ and k = 3.

e). Recall the select k subset positive product minimization problem, or SKSPmin in
short, considered on page 200. It is to select k items out of n positive numbers such
that the product of these k numbers is minimized. Devise an algorithm for SKSPmin
by reducing to SKSP.

f). Illustrate SKSP ≡p SKSPmin using the following toy example:


A = ⟨2.0, 0.5, 4.0, 2.0, 2.5, 0.2, 2.5, 0.5⟩ and k = 3.

g). Devise an algorithm for SKSPmin by reducing to SKSSmin.

h). Illustrate SKSSmin ≡p SKSPmin using the following toy example:


A = ⟨0.5, 2.0, 1.0, 4.0, 1.0, 2.0, 0.25, 1.0⟩ and k = 3.

Q 10.3. Recall the number of ascents Problem 2.11, or simply NAS, defined on page 55
and the number of descents problem, or simply NDS, considered as an exercise Q. 2.27 on
page 87. For a toy example of A = ⟨3, −1, 7, 9, −4, 1, −5, −8⟩, NAS(A) = 3 and NDS(A) = 4.

a). Negating the input sequence is a trick to show a reduction. Show NAS ≡p NDS by
negating the input sequence.

b). Illustrate the (NAS ≡p NDS) relation given in a) using the above toy example.

c). Show NAS ≡p NDS without negating the input sequence.

d). Illustrate the (NAS ≡p NDS) relation given in c) using the above toy example.

Q 10.4. Consider the up-down Problem 2.19, or simply UDP, defined on page 65. It is one
of the alternating permutation problems.

a). Devise an algorithm for UDP based on reduction to a sorting paradigm that produces
an output whose pattern is the one shown in Figure 10.4 (b).

b). Devise an algorithm for UDP based on reduction to a sorting paradigm that produces
an output whose pattern is the one shown in Figure 10.4 (c).

c). Devise an algorithm for UDP based on reduction to a sorting paradigm that produces
an output whose pattern is the one shown in Figure 10.4 (d).

d). Devise an algorithm for UDP based on reduction to a sorting paradigm that produces
an output whose pattern is the one shown in Figure 10.4 (e).

e). Devise an algorithm for UDP based on reduction to a sorting paradigm that produces
an output whose pattern is the one shown in Figure 10.4 (f).

Q 10.5. Consider various alternating permutation problems, such as the down-up problem
(DUP) and the up-up-down problem (UUD) considered as exercises in Q 2.24 on page 86.

a). Show DUP ≤p sorting, i.e., devise an algorithm for DUP based on reduction to a
sorting paradigm.

b). Illustrate the (DUP ≤p sorting) reduction based algorithm proposed in a) using the
following toy example: ⟨3, 7, 9, 1, 10, 5, 4, 2, 6, 8⟩.

c). Provide the computational time complexity of the algorithm provided in a).

d). Show DUP ≡p UDP, where UDP stands for the up-down problem.

e). Illustrate the reduction relations provided in d) on random examples.



f). Show UUD ≤p sorting, where UUD stands for the (up-up-down) alternating permu-
tation problem.
g). Illustrate the algorithm provided in f) on random examples.
Q 10.6. Consider the Mr. Gardon Gekco example, which was used in Question 4.12 on
page 203. Note that each offer is breakable here, i.e., it is the fractional knapsack minimiza-
tion problem, or FKP-min in short, considered as an exercise Q. 4.13 on page 204.

a). Illustrate greedy Algorithm 4.8 on the following toy example:

A a1 a2 a3 a4 a5 a6 a7
P $560K $300K $620K $145K $800K $450K $189K
Q 20000 10000 20000 5000 25000 15000 7000

Greedy Gekco would like to buy at least 50,000 stocks with the minimum amount of
money.
b). Design a reduction based algorithm for the FKP-min problem. FKP-min reduces to
the ordinary fractional knapsack Problem 4.5, or FKP in short, i.e., FKP-min ≤p
FKP.
c). Illustrate the proposed reduction based algorithm on the above toy example.
d). Prove the correctness of your proposed algorithm, i.e., show FKP-min ≤p FKP.
e). Design the (FKP ≤p FKP-min) reduction based algorithm for the FKP Problem 4.5.

Q 10.7. The sorting Problem 2.16 can be solved by reducing to many graph related problems.
Devise these reduction based sorting algorithms:
a). Sorting ≤p LPC, where LPC stands for the longest path cost problem on a weighed
DAG, considered as an exercise in Q 5.42 on page 290.
b). Sorting ≤p LPL, where LPL stands for the longest path length problem on a DAG,
considered as an exercise in Q 5.41 on page 290.
c). Sorting ≤p CPP, where CPP stands for the critical path Problem 5.19 on a DAG,
defined on page 267.
d). Sorting ≤p TPS, where TPS stands for the topological sorting Problem 5.13, defined
on page 256.
Q 10.8. Generalized Fibonacci numbers, also known as Kibonacci numbers, have been
defined in two different ways: KB2 defined on page 248 and KBF defined on page 285.
When (k = 3), they may be called Tribonacci numbers, and their relationships among
numbers are represented as DAG’s as follows:

KB2 (k = 3) KBF (k = 3)

a). Show KB2 ≤p WWP, where WWP stands for the winning way Problem 5.7 defined
on page 239.

b). Show KB2 ≤p NPP, where NPP stands for the number of path Problem 5.14 defined
on page 258.
c). Illustrate (KB2 ≤p NPP) where k = 3.
d). Show KBF ≤p WWP.
e). Show KBF ≤p NPP.
f). Illustrate (KBF ≤p NPP) where k = 3.

Q 10.9. Consider the activity selection Problem 4.8, or ASP in short, defined on page 170,
and the weighted activity selection Problem 5.6, or wASP in short, defined on page 231.

a). Design a reduction based algorithm utilizing the fact that ASP reduces to the longest
path length problem, or LPL in short, i.e., ASP ≤p LPL.
b). Illustrate the algorithm provided in a) using the following toy example:

i 1 2 3 4 5 6
si 3 5 2 4 1 6
fi 4 9 5 6 2 7

c). Design a reduction based algorithm utilizing the fact that wASP reduces to the longest
path cost problem, or LPC in short, i.e., wASP ≤p LPC.
d). Illustrate the algorithm provided in c) using the following toy example:

i 1 2 3 4 5 6
si 3 5 2 4 1 6
fi 4 9 5 6 2 7
pi 2 1 5 1 3 2

Q 10.10. Consider the minimum spanning tree Problem 4.14, or MST in short, defined
on pages 184, and maximum spanning tree problem, or MxST in short, considered as an
exercise in Q 4.27 on page 209.
a). Show MST ≤p MxST.
b). Show MxST ≤p MST.
c). Show MST ≡p MxST.
d). Show MSrT ≡p MxSrT, where MSrT is the minimum spanning rooted tree Prob-
lem 5.17 defined on page 264, and MxSrT is the maximum spanning rooted tree
problem considered as an exercise in Q 5.43 on page 291.
Q 10.11. Consider the various string matching problems. Indel, LCS, and LEV stand
for the edit distance with insertion and deletion only Problem 6.6, longest common sub-
sequence Problem 6.5, and Levenshtein distance Problem 6.7 defined on pages 313, 311,
and 314, accordingly. Use the following two strings for any demonstration:

A1∼n = T A and B1∼m = C A G



a). Show Indel ≤p LCS.


b). Show LCS ≤p Indel.
c). Show Indel ≤p SPL, where SPL is the shortest path length Problem 7.11 defined on
page 388.
d). Demonstrate Indel ≤p SPL on the above examples.
e). Show Lev ≤p SPC, where SPC is the shortest path cost Problem 4.15 defined on
page 188.
f). Demonstrate Lev ≤p SPC on the above examples.

Q 10.12. Based on the maximum prefix sum Problem 10.4, or MPFS in short, defined on
page 598, consider the minimum prefix sum problem, or minPFS in short. For example,
minPFS(A) = 1 where A is:
A= 3 -1 5 -3 -3 7 4 -1

a). Formulate the minimum prefix sum problem.


b). Show minPFS ≤m p PFS, where PFS stands for the prefix sum Problem 2.10 defined
on page 53.
c). Illustrate your algorithm in b) on the above toy sample example.
d). Show minPFS ≤p MPFS, i.e., devise a reduction (minPFS ≤p MPFS) based algorithm.
e). Show minPFS ≡p MPFS.
f). Illustrate (minPFS ≡p MPFS) on the above toy sample example.

Q 10.13. Based on the maximum prefix sum Problem 10.4, or MPFS in short, defined on
page 598, consider the maximum prefix product problem (MPFP) and the minimum prefix
product problem (minPFP). For example, MPFP(A) = 2 and minPFP(A) = 0.25 where A
is
A = 0.50 2.00 0.25 4.00 2.00 0.25 0.50 2.00

Note that the input is a sequence of positive numbers.

a). Formulate the maximum prefix product problem (MPFP).


b). Formulate the minimum prefix product problem (minPFP).
c). Show MPFP ≤m p PFP, where PFP stands for the prefix product problem considered
on page 84. Illustrate it on the above toy example.
d). Show minPFP ≤m p PFP and illustrate it on the above toy example.

e). Show MPFP ≡p minPFP and demonstrate it on the above toy example.
f). Show MPFP ≡p MPFS and demonstrate it on the above toy example.
g). Show minPFP ≡p minPFS and demonstrate it on the above toy example.

Q 10.14. Given a sequence of numbers, A, the longest decreasing sub-sequence problem,
or LDS in short, is to find the longest sub-sequence of A such that ai > aj for every i < j.
You may use the following toy example to demonstrate your algorithms:
You may use the following toy example to demonstrate your algorithms:

A= 2 9 5 7 4 8 3

a). Formulate the problem.


b). Show LDS ≤p LPL, where LPL stands for the longest path length problem.
c). Demonstrate (LDS ≤p LPL) using the above toy example.
d). Suppose LPL takes O(n log n) time. What is the computational time complexity of
the reduction based algorithm provided in b)?
e). Show LDS ≡p LIS, where LIS stands for the longest increasing sub-sequence Prob-
lem 10.1 defined on page 572.
f). Demonstrate (LDS ≡p LIS) using the above toy example.
g). Show LDS ≤m p LDSe. LDSe stands for the longest decreasing sub-sequence ending at
the nth position problem.
h). Derive a recurrence relation for LDSe.
i). Devise an algorithm using strong inductive programming for LDSe.
j). Demonstrate (LDS ≤m p LDSe) by the algorithm provided in i) using the above toy
example.
k). Provide the computational time and space complexities of the (LDS ≤m p LDSe)
algorithm.

Q 10.15. Consider the longest alternating sub-sequence problems: LUDS and LDUS. These
ask one to find the longest sub-sequence of A, such that it is an up-down and down-up
sequence, respectively. You may use the following toy example to demonstrate your algo-
rithms:

A= 1 3 6 4 1 5 2

a). Formulate the longest down-up sub-sequence problem, LDUS in short.


b). Show LDUS ≡p LUDS.
c). Demonstrate (LDUS ≡p LUDS) using the above toy example.
d). Show LDUS ≤m p LDUSe, where LDUSe stands for the longest down-up sub-sequence
ending at the nth position problem.
e). Derive a cross-recurrence relation between the two kinds of “end-at” versions of LDUSe.
f). Devise an algorithm using strong inductive programming for LDUSe.
g). Demonstrate (LDUS ≤m p LDUSe) by the algorithm provided in f) using the above toy
example.

h). Provide the computational time and space complexities of the (LDUS ≤m p LDUSe)
algorithm.

Q 10.16. Consider the longest increasing and decreasing consecutive sub-sequence problems
(LICS and LDCS) discussed on page 149. LICS is to find the longest consecutive sub-
sequence, As∼e of A1∼n , such that ai ≤ ai+1 for every i ∈ {s ∼ e − 1}. LDCS is to
find the longest consecutive sub-sequence, As∼e of A1∼n , such that ai ≥ ai+1 for every
i ∈ {s ∼ e − 1}. You may use the following toy example to demonstrate your algorithms:

A= 7 2 4 6 7 7 8 5 1

a). Derive a recurrence relation for LICSe, where LICSe stands for the longest increasing
consecutive sub-sequence ending at the nth position problem.
b). Devise an algorithm using strong inductive programming for LICSe.
c). Devise an (LICS ≤m p LICSe) algorithm. (This is conventionally known as dynamic
programming.)
d). Illustrate the (LICS ≤m p LICSe) algorithm using the above toy example.

e). Provide the computational time and space complexities of the (LICS ≤m p LICSe)
algorithm.
f). Derive a recurrence relation for LDCSe, where LDCSe stands for the longest decreasing
consecutive sub-sequence ending at the nth position problem.
g). Devise an algorithm using strong inductive programming for LDCSe.
h). Devise a (LDCS ≤m p LDCSe) algorithm. (This is conventionally known as dynamic
programming.)
i). Illustrate the (LDCS ≤m p LDCSe) algorithm using the above toy example.

j). Show LDCS ≡p LICS.


k). Illustrate (LDCS ≡p LICS) on the above toy example.

Q 10.17. Consider the longest alternating consecutive sub-sequence problems: LUDC and
LDUC. These aim to find the longest consecutive sub-sequence of A, such that it is an
up-down and down-up sequence, respectively. You may use the following toy example to
demonstrate your algorithms:

A= 1 3 8 2 7 5 4

a). Formulate the longest up-down consecutive sub-sequence problem, or LUDC in short.
b). Derive a recurrence relation for LUDCe, where LUDCe stands for the longest alter-
nating up-down consecutive sub-sequence ending at the nth position problem.
c). Devise an algorithm using strong inductive programming for LUDCe.
d). Devise a (LUDC ≤m p LUDCe) algorithm. (This is conventionally known as dynamic
programming.)

e). Illustrate the (LUDC ≤m p LUDCe) algorithm using the above toy example.

f). Provide the computational time and space complexities of the (LUDC ≤m p LUDCe)
algorithm.
g). Formulate the longest down-up consecutive sub-sequence problem, LDUC in short.
h). Derive a recurrence relation for LDUCe, where LDUCe stands for the longest alter-
nating down-up consecutive sub-sequence ending at the nth position problem.
i). Devise an algorithm using strong inductive programming for LDUCe.
j). Devise a (LDUC ≤m p LDUCe) algorithm. (This is conventionally known as dynamic
programming.)
k). Illustrate the (LDUC ≤m p LDUCe) algorithm using the above toy example.

l). Provide the computational time and space complexities of the (LDUC ≤m p LDUCe)
algorithm.
m). Show LDUC ≡p LUDC.
n). Demonstrate (LDUC ≡p LUDC) using the above toy example.

Q 10.18. Consider the longest palindrome consecutive sub-sequence Problem 10.6 defined
on page 602. Hint: Figure 10.44 on page 603.
a). Devise a multi-reduction based algorithm based on eqn (10.94) on page 604.
b). Provide the time and space complexities of the algorithm provided in a)
c). Demonstrate the algorithm on A = ‘CGFAFC.’
d). Demonstrate the algorithm on A = ‘CAAGCA.’
e). Demonstrate the algorithm on A = ‘ACCTGAAGC.’
f). Demonstrate the algorithm on A = ‘CACTAGACTA.’
Q 10.19. Consider various combinatoric problems that reduce to the number of path Prob-
lem 5.14, or simply NPP, considered on page 258.

a). Show IPE ≤p NPP, where IPE stands for the Integer partition exactly k parts Prob-
lem 6.11 defined on page 326.
b). Illustrate the (IPE ≤p NPP) algorithm to compute IPE(6, 3) = p(6, 3) = 3.
c). Show IPam ≤p NPP, where IPam stands for the Integer partition at most k parts
Problem 6.12 defined on page 331.
d). Illustrate the (IPam ≤p NPP) algorithm to compute IPam(3, 3) = I(3, 3) = 3.
e). Show MSC ≤p NPP, where MSC stands for the Multiset coefficient Problem 6.18
defined on page 350.
f). Illustrate the (MSC ≤p NPP) algorithm to compute MSC(3, 2) = $\left(\!\!\binom{3}{2}\!\!\right)$ = 6.


g). Show SMSC ≤p NPP, where SMSC stands for the Surjective multiset coefficient Prob-
lem 6.19 defined on page 351.
h). Illustrate the (SMSC ≤p NPP) algorithm to compute SMSC(4, 3) = 0.

Q 10.20. Consider double factorial of the nth even (DFE) and odd (DFO) number Prob-
lems 2.29 and 2.30 defined on pages 85 and 85, respectively.

a). Show DFE ≤p FAC in eqn (10.103).

$$\mathrm{DFE}(n) = 2^n\, n! \qquad (10.103)$$

b). Show DFO ≤m p FAC in eqn (10.104).

$$\mathrm{DFO}(n) = \frac{(2n)!}{2^n\, n!} \qquad (10.104)$$

c). Show DFO ≤p DFE.


d). Show DFE ≤p DFO.
e). Show DFO ≤p KPN, where KPN stands for the k-Permutation of n, P (n, k), Prob-
lem 2.8 defined on page 51.
f). Show DFO ≤p RFP, where RFP is the rising factorial power problem considered as
an exercise in Q 2.35 on page 89.

Q 10.21. Consider various summation problems for their reduction relationships. BNC
stands for the Binomial coefficient, C(n, k) Problem 6.9 defined on page 319.

a). Show TRN ≤p BNC in eqn (10.105), where TRN stands for the nth triangular number
Problem 1.6 defined on page 9.
 
$$\mathrm{TRN}(n) = \binom{n+1}{2} \qquad (10.105)$$
b). Show THN ≤p BNC in eqn (10.106), where THN stands for the nth tetrahedral number
Problem 1.9 defined on page 11.
 
$$\mathrm{THN}(n) = \binom{n+2}{3} \qquad (10.106)$$
c). Show PRN ≤p BNC in eqn (10.107), where PRN stands for the nth Pyramid number
Problem 1.8 defined on page 11.
 
$$\mathrm{PRN}(n) = \frac{1}{4}\binom{2n+2}{3} \qquad (10.107)$$
d). Show PRN ≤m p BNC in eqn (10.108).

$$\mathrm{PRN}(n) = \binom{n+2}{3} + \binom{n+1}{3} \qquad (10.108)$$

e). Show STH ≤p BNC in eqn (10.109), where STH stands for the sum of first n tetrahedral
number problem considered on page 29.
 
$$\mathrm{STH}(n) = \binom{n+3}{4} \qquad (10.109)$$

f). Show SCB ≤p BNC in eqn (10.110), a.k.a., Nicomachus’s theorem, where SCB stands
for the sum of first n cubic number problem considered on page 82.

$$\mathrm{SCB}(n) = \binom{n+1}{2}^{2} \qquad (10.110)$$

g). Show SCB ≡p TRN.

h). Show SQN ≡p TRN, where SQN stands for the nth square number Problem 1.7 defined
on page 10.

i). Show SQN ≤p BNC

j). Show SEN ≡p TRN, where SEN stands for the sum of first n even number problem
considered as an exercise in Q 1.8 on page 28.

k). Show SEN ≤p BNC

l). Show SEN ≡p SQN.

Q 10.22. Consider the problem of the ways of partitioning a positive integer, n, into any
part, I(n), or simply IPN, considered as an exercise in Q 7.41 on page 433. For example, a
positive integer (n = 4) can be represented in five different ways: {(4), (3 + 1), (2 + 2), (2 +
1 + 1), (1 + 1 + 1 + 1)}.

a). Show IPN ≤m p IPE, where IPE stands for the ways of partitioning an integer n into
exactly k parts Problem 6.11 defined on page 326.

b). Show IPN ≤p IPam, where IPam stands for the ways of partitioning an integer n into
at most k parts Problem 6.12 defined on page 331.

c). Show IPN ≤p IPal, where IPal stands for the ways of partitioning an integer n into at
least k parts problem considered as an exercise in Q 7.41 on page 433.

d). Show IPN ≤p IPE.

Q 10.23. Consider the bounded integer partition number problems: BIP and BIPam. The
formal definition of the bounded integer partition into exactly k parts Problem 6.14, or BIP
in short and denoted as pb (n, k), was given on page 338. The bounded integer partition into
at most k parts, or simply BIPam and denoted as Ib (n, k), was considered as an exercise in
Q 6.25 on page 353.

a). Show BIPam ≤m p BIP.

b). Show BIPam ≤p BIP.



c). Show BIP ≤p BIPam.

d). Show IPE ≤p BIP, where IPE stands for the ways of partitioning an integer n into
exactly k parts Problem 6.11 defined on page 326.

e). Show IPam ≤p BIPam, where IPam stands for the ways of partitioning an integer n
into at most k parts Problem 6.12 defined on page 331.

Q 10.24. Let F (n) be the nth Fibonacci number defined in Problem 5.8 (FIB). Let L(n) be
the nth Lucas number problem defined in eqn (5.63) (LUC). Consider the following various
reduction relations:

$$L(n) = F(n+1) + F(n-1) \Leftrightarrow \mathrm{LUC} \le^m_p \mathrm{FIB} \qquad (10.80)$$
$$L(n) = F(n) + 2F(n-1) \Leftrightarrow \mathrm{LUC} \le^m_p \mathrm{FIB} \qquad (10.111)$$
$$L(n) = F(n+2) - F(n-2) \Leftrightarrow \mathrm{LUC} \le^m_p \mathrm{FIB} \qquad (10.112)$$
$$F(n) = \frac{L(n+1) + L(n-1)}{5} \Leftrightarrow \mathrm{FIB} \le^m_p \mathrm{LUC} \qquad (10.82)$$
$$F(n) = \frac{L(n+2) - L(n-2)}{5} \Leftrightarrow \mathrm{FIB} \le^m_p \mathrm{LUC} \qquad (10.113)$$
$$F(n) = \frac{L(n+3) + L(n-3)}{10} \Leftrightarrow \mathrm{FIB} \le^m_p \mathrm{LUC} \qquad (10.114)$$
$$L(n) = \varphi^n + (1-\varphi)^n \Leftrightarrow \mathrm{LUC} \le^m_p \mathrm{POW} \qquad (10.85)$$
$$F(n) = \frac{\varphi^n - (1-\varphi)^n}{\sqrt{5}} \Leftrightarrow \mathrm{FIB} \le^m_p \mathrm{POW} \qquad (10.84)$$
$$L(n) = \frac{F(2n)}{F(n)} \Leftrightarrow \mathrm{LUC} \le^m_p \mathrm{FIB} \qquad (10.81)$$
$$\begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}^{n} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \Leftrightarrow \mathrm{FIB} \le_p \mathrm{MXP} \qquad (10.115)$$
$$\begin{pmatrix} L_n \\ L_{n+1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}^{n} \begin{pmatrix} 2 \\ 1 \end{pmatrix} \Leftrightarrow \mathrm{LUC} \le_p \mathrm{MXP} \qquad (10.116)$$
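Before proving the identities above, they can be spot-checked numerically. The following
Python sketch, with ad hoc helper names, verifies a few of them for small n:

def fib(n):
    a, b = 0, 1          # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a

def luc(n):
    a, b = 2, 1          # L(0), L(1)
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(2, 10):
    assert luc(n) == fib(n + 1) + fib(n - 1)         # eqn (10.80)
    assert luc(n) == fib(n) + 2 * fib(n - 1)         # eqn (10.111)
    assert luc(n) == fib(n + 2) - fib(n - 2)         # eqn (10.112)
    assert fib(n) == (luc(n + 1) + luc(n - 1)) // 5  # eqn (10.82)
    assert luc(n) == fib(2 * n) // fib(n)            # eqn (10.81)
print("eqns (10.80)-(10.82), (10.111), and (10.112) hold for n = 2..9")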

a). Show FIB ≤p LUS, where LUS is the Lucas sequence Problem 5.10 defined on page 250.

b). Show LUC ≤p LUS2, where LUS2 is the Lucas sequence II Problem 5.11 defined on
page 250.

c). Prove the reduction relation in eqn (10.80).

d). Prove the reduction relation in eqn (10.111).

e). Prove the reduction relation in eqn (10.112).

f). Prove the reduction relation in eqn (10.82).

g). Prove the reduction relation in eqn (10.113).

h). Prove the reduction relation in eqn (10.114).



i). Prove Binet's formula in eqn (10.85) on page 597.

Hint: the golden ratio ϕ has the following properties:
$$\varphi = 1 + \frac{1}{\varphi} \qquad (10.117)$$
$$\varphi = \frac{1}{\varphi - 1} \qquad (10.118)$$

j). Prove Binet’s formula in eqn (10.84) on page 597.


k). Prove the reduction relation in eqn (10.81).
l). Prove the reduction relation in eqn (10.115).
m). Prove the reduction relation in eqn (10.116).

Q 10.25. Consider the nth Pell number problem, or simply PLN, defined recursively in
eqn (5.67) on page 279 and the nth Pell-Lucas number problem, or simply PLL, defined
recursively in eqn (5.70) on page 280.

$$\mathrm{PLL}(n) = \mathrm{PLN}(n+1) + \mathrm{PLN}(n-1) \Leftrightarrow \mathrm{PLL} \le^m_p \mathrm{PLN} \qquad (10.119)$$
$$\mathrm{PLL}(n) = 2(\mathrm{PLN}(n) + \mathrm{PLN}(n-1)) \Leftrightarrow \mathrm{PLL} \le^m_p \mathrm{PLN} \qquad (10.120)$$
$$\mathrm{PLL}(n) = 2(\mathrm{PLN}(n+1) - \mathrm{PLN}(n)) \Leftrightarrow \mathrm{PLL} \le^m_p \mathrm{PLN} \qquad (10.121)$$
$$\mathrm{PLL}(n) = \frac{\mathrm{PLN}(n+2) - \mathrm{PLN}(n-2)}{2} \Leftrightarrow \mathrm{PLL} \le^m_p \mathrm{PLN} \qquad (10.122)$$
$$\mathrm{PLN}(n) = \frac{\mathrm{PLL}(n+1) + \mathrm{PLL}(n-1)}{8} \Leftrightarrow \mathrm{PLN} \le^m_p \mathrm{PLL} \qquad (10.123)$$
$$\mathrm{PLN}(n) = \frac{\mathrm{PLL}(n+2) - \mathrm{PLL}(n-2)}{16} \Leftrightarrow \mathrm{PLN} \le^m_p \mathrm{PLL} \qquad (10.124)$$
$$\mathrm{PLL}(n) = (1+\sqrt{2})^n + (1-\sqrt{2})^n \Leftrightarrow \mathrm{PLL} \le^m_p \mathrm{POW} \qquad (10.125)$$
$$\mathrm{PLN}(n) = \frac{(1+\sqrt{2})^n - (1-\sqrt{2})^n}{2\sqrt{2}} \Leftrightarrow \mathrm{PLN} \le^m_p \mathrm{POW} \qquad (10.126)$$
$$\mathrm{PLL}(n) = \frac{\mathrm{PLN}(2n)}{\mathrm{PLN}(n)} \Leftrightarrow \mathrm{PLL} \le^m_p \mathrm{PLN} \qquad (10.127)$$
$$\begin{pmatrix} \mathrm{PLN}(n+1) \\ \mathrm{PLN}(n) \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}^{n} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \Leftrightarrow \mathrm{PLN} \le_p \mathrm{MXP} \qquad (10.128)$$
$$\begin{pmatrix} \mathrm{PLL}(n+1) \\ \mathrm{PLL}(n) \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}^{n} \begin{pmatrix} 2 \\ 2 \end{pmatrix} \Leftrightarrow \mathrm{PLL} \le_p \mathrm{MXP} \qquad (10.129)$$

a). Show PLN ≤p LUS where LUS is the Lucas sequence Problem 5.10 defined on page 250.
b). Show PLL ≤p LUS2 where LUS2 is the Lucas sequence II Problem 5.11 defined on
page 250.
c). Prove the reduction relation in eqn (10.119).
d). Prove the reduction relation in eqn (10.120).

e). Prove the reduction relation in eqn (10.121).

f). Prove the reduction relation in eqn (10.122).

g). Prove the reduction relation in eqn (10.123).

h). Prove the reduction relation in eqn (10.124).

i). Prove the reduction relation in eqn (10.125).

j). Prove the reduction relation in eqn (10.126).

k). Prove the reduction relation in eqn (10.127). Hint: eqns (10.125) and (10.126).

l). Prove the reduction relation in eqn (10.128).

m). Prove the reduction relation in eqn (10.129).

Q 10.26. Consider the nth Jacobsthal number problem, or simply JCN, defined recursively
in eqn (5.73) on page 281 and the nth Jacobsthal-Lucas number problem, or simply JCL,
defined recursively in eqn (5.78) on page 282.

$$\mathrm{JCL}(n) = \mathrm{JCN}(n+1) + 2\mathrm{JCN}(n-1) \Leftrightarrow \mathrm{JCL} \le^m_p \mathrm{JCN} \qquad (10.130)$$
$$\mathrm{JCL}(n) = \mathrm{JCN}(n) + 4\mathrm{JCN}(n-1) \Leftrightarrow \mathrm{JCL} \le^m_p \mathrm{JCN} \qquad (10.131)$$
$$\mathrm{JCL}(n) = \mathrm{JCN}(n+2) - 4\mathrm{JCN}(n-2) \Leftrightarrow \mathrm{JCL} \le^m_p \mathrm{JCN} \qquad (10.132)$$
$$\mathrm{JCN}(n) = \frac{\mathrm{JCL}(n+1) + 2\mathrm{JCL}(n-1)}{9} \Leftrightarrow \mathrm{JCN} \le^m_p \mathrm{JCL} \qquad (10.133)$$
$$\mathrm{JCN}(n) = \frac{\mathrm{JCL}(n+2) - 4\mathrm{JCL}(n-2)}{9} \Leftrightarrow \mathrm{JCN} \le^m_p \mathrm{JCL} \qquad (10.134)$$
$$\mathrm{JCL}(n) = 2^n + (-1)^n \Leftrightarrow \mathrm{JCL} \le^m_p \mathrm{POW} \qquad (10.135)$$
$$\mathrm{JCN}(n) = \frac{2^n - (-1)^n}{3} \Leftrightarrow \mathrm{JCN} \le^m_p \mathrm{POW} \qquad (10.136)$$
$$\mathrm{JCL}(n) = \frac{\mathrm{JCN}(2n)}{\mathrm{JCN}(n)} \Leftrightarrow \mathrm{JCL} \le^m_p \mathrm{JCN} \qquad (10.137)$$
$$(\mathrm{JCN}(n+1),\, \mathrm{JCN}(n)) = (1, 0)\begin{pmatrix} 1 & 1 \\ 2 & 0 \end{pmatrix}^{n} \Leftrightarrow \mathrm{JCN} \le_p \mathrm{MXP} \qquad (10.138)$$
$$(\mathrm{JCL}(n+1),\, \mathrm{JCL}(n)) = (1, 2)\begin{pmatrix} 1 & 1 \\ 2 & 0 \end{pmatrix}^{n} \Leftrightarrow \mathrm{JCL} \le_p \mathrm{MXP} \qquad (10.139)$$

a). Show JCN ≤p LUS where LUS is the Lucas sequence Problem 5.10 defined on page 250.

b). Show JCL ≤p LUS2 where LUS2 is the Lucas sequence II Problem 5.11 defined on
page 250.

c). Prove the reduction relation in eqn (10.130).

d). Prove the reduction relation in eqn (10.131).

e). Prove the reduction relation in eqn (10.132).



f). Prove the reduction relation in eqn (10.133).


g). Prove the reduction relation in eqn (10.134).
h). Prove the reduction relation in eqn (10.135).
i). Prove the reduction relation in eqn (10.136).
j). Prove the reduction relation in eqn (10.137). Hint: eqns (10.135) and (10.136).
k). Prove the reduction relation in eqn (10.138).
l). Prove the reduction relation in eqn (10.139).

Q 10.27. Consider the nth Mersenne number problem, or simply MSN, defined recursively
in eqn (5.82) on page 282 and the nth Mersenne-Lucas number problem, or simply MSL,
defined recursively in eqn (5.86) on page 283.

$$\mathrm{MSL}(n) = \mathrm{MSN}(n+1) - 2\mathrm{MSN}(n-1) \Leftrightarrow \mathrm{MSL} \le^m_p \mathrm{MSN} \qquad (10.140)$$
$$\mathrm{MSL}(n) = 3\mathrm{MSN}(n) - 4\mathrm{MSN}(n-1) \Leftrightarrow \mathrm{MSL} \le^m_p \mathrm{MSN} \qquad (10.141)$$
$$\mathrm{MSN}(n) = \mathrm{MSL}(n+1) - 2\mathrm{MSL}(n-1) \Leftrightarrow \mathrm{MSN} \le^m_p \mathrm{MSL} \qquad (10.142)$$
$$\mathrm{MSN}(n) = 2^n - 1 \Leftrightarrow \mathrm{MSN} \le_p \mathrm{POW} \qquad (10.143)$$
$$\mathrm{MSL}(n) = 2^n + 1 \Leftrightarrow \mathrm{MSL} \le_p \mathrm{POW} \qquad (10.144)$$
$$\mathrm{MSL}(n) = \mathrm{MSN}(n) + 2 \Leftrightarrow \mathrm{MSL} \le_p \mathrm{MSN} \qquad (10.145)$$
$$\mathrm{MSN}(n) = \mathrm{MSL}(n) - 2 \Leftrightarrow \mathrm{MSN} \le_p \mathrm{MSL} \qquad (10.146)$$
$$\mathrm{MSL}(n) = \frac{\mathrm{MSN}(2n)}{\mathrm{MSN}(n)} \Leftrightarrow \mathrm{MSL} \le^m_p \mathrm{MSN} \qquad (10.147)$$
$$\begin{pmatrix} \mathrm{MSN}(n+1) \\ \mathrm{MSN}(n) \end{pmatrix} = \begin{pmatrix} 3 & -2 \\ 1 & 0 \end{pmatrix}^{n} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \Leftrightarrow \mathrm{MSN} \le_p \mathrm{MXP} \qquad (10.148)$$
$$\begin{pmatrix} \mathrm{MSL}(n+1) \\ \mathrm{MSL}(n) \end{pmatrix} = \begin{pmatrix} 3 & -2 \\ 1 & 0 \end{pmatrix}^{n} \begin{pmatrix} 3 \\ 2 \end{pmatrix} \Leftrightarrow \mathrm{MSL} \le_p \mathrm{MXP} \qquad (10.149)$$

a). Show MSN ≤p LUS where LUS is the Lucas sequence Problem 5.10 defined on page 250.
b). Show MSL ≤p LUS2 where LUS2 is the Lucas sequence II Problem 5.11 defined on
page 250.
c). Prove the reduction relation in eqn (10.140).
d). Prove the reduction relation in eqn (10.141).
e). Prove the reduction relation in eqn (10.142).
f). Prove the reduction relation in eqn (10.143).
g). Prove the reduction relation in eqn (10.144).
h). Prove the reduction relation in eqn (10.145).
i). Prove the reduction relation in eqn (10.146).

j). Prove the reduction relation in eqn (10.147). Hint: eqns (10.143) and (10.144).
k). Prove the reduction relation in eqn (10.148).
l). Prove the reduction relation in eqn (10.149).

Q 10.28. Consider the nth Lucas sequence Problem 5.10, or simply LUS, defined recursively
in eqn (5.34) on page 249 and the nth Lucas sequence II Problem 5.11, or simply LUS2,
defined recursively in eqn (5.35) on page 249. Assume that p2 − 4q 6= 0.

$$\mathrm{LUS2}(n, p, q) = \mathrm{LUS}(n+1, p, q) - q\,\mathrm{LUS}(n-1, p, q) \Leftrightarrow \mathrm{LUS2} \le^m_p \mathrm{LUS} \qquad (10.150)$$
$$\mathrm{LUS2}(n, p, q) = p\,\mathrm{LUS}(n, p, q) - 2q\,\mathrm{LUS}(n-1, p, q) \Leftrightarrow \mathrm{LUS2} \le^m_p \mathrm{LUS} \qquad (10.151)$$
$$\mathrm{LUS2}(n, p, q) = 2\,\mathrm{LUS}(n+1, p, q) - p\,\mathrm{LUS}(n, p, q) \Leftrightarrow \mathrm{LUS2} \le^m_p \mathrm{LUS} \qquad (10.152)$$
$$\mathrm{LUS}(n) = \frac{\mathrm{LUS2}(n+1) - q\,\mathrm{LUS2}(n-1)}{p^2 - 4q} \Leftrightarrow \mathrm{LUS} \le^m_p \mathrm{LUS2} \qquad (10.153)$$
$$\mathrm{LUS}(n) = \frac{2\,\mathrm{LUS2}(n+1) - p\,\mathrm{LUS2}(n)}{p^2 - 4q} \Leftrightarrow \mathrm{LUS} \le^m_p \mathrm{LUS2} \qquad (10.154)$$
$$\mathrm{LUS}(n) = \frac{r^n - (p-r)^n}{2r - p}, \text{ where } r = \frac{p + \sqrt{p^2 - 4q}}{2} \Leftrightarrow \mathrm{LUS} \le^m_p \mathrm{POW} \qquad (10.155)$$
$$\mathrm{LUS2}(n) = r^n + (p-r)^n \Leftrightarrow \mathrm{LUS2} \le^m_p \mathrm{POW} \qquad (10.156)$$
$$\mathrm{LUS2}(n, p, q) = \frac{\mathrm{LUS}(2n)}{\mathrm{LUS}(n)} \Leftrightarrow \mathrm{LUS2} \le^m_p \mathrm{LUS} \qquad (10.157)$$
$$\begin{pmatrix} \mathrm{LUS}(n+1, p, q) \\ \mathrm{LUS}(n, p, q) \end{pmatrix} = \begin{pmatrix} p & -q \\ 1 & 0 \end{pmatrix}^{n} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \Leftrightarrow \mathrm{LUS} \le_p \mathrm{MXP} \qquad (10.158)$$
$$\begin{pmatrix} \mathrm{LUS2}(n+1, p, q) \\ \mathrm{LUS2}(n, p, q) \end{pmatrix} = \begin{pmatrix} p & -q \\ 1 & 0 \end{pmatrix}^{n} \begin{pmatrix} p \\ 2 \end{pmatrix} \Leftrightarrow \mathrm{LUS2} \le_p \mathrm{MXP} \qquad (10.159)$$
$$(\mathrm{LUS}(n+1, p, q),\, \mathrm{LUS}(n, p, q)) = (1, 0)\begin{pmatrix} p & 1 \\ -q & 0 \end{pmatrix}^{n} \Leftrightarrow \mathrm{LUS} \le_p \mathrm{MXP} \qquad (10.160)$$
$$(\mathrm{LUS2}(n+1, p, q),\, \mathrm{LUS2}(n, p, q)) = (p, 2)\begin{pmatrix} p & 1 \\ -q & 0 \end{pmatrix}^{n} \Leftrightarrow \mathrm{LUS2} \le_p \mathrm{MXP} \qquad (10.161)$$

a). Prove the reduction relation in eqn (10.150).


b). Prove the reduction relation in eqn (10.151).
c). Prove the reduction relation in eqn (10.152).
d). Prove the reduction relation in eqn (10.153).
e). Prove the reduction relation in eqn (10.154).
f). Prove the reduction relation in eqn (10.157). Hint: eqns (10.155) and (10.156).
g). Prove the reduction relation in eqn (10.158).
h). Prove the reduction relation in eqn (10.159).
i). Prove the reduction relation in eqn (10.160).

j). Prove the reduction relation in eqn (10.161).

Q 10.29. Consider the number of nodes in a Fibonacci tree of height n problem, or simply
FTN, defined recursively in eqn (3.33) on page 140 and the nth Fibonacci number recursive
calls problem, or simply FRC, defined recursively in eqn (5.31) on page 247. FIB stands for
the nth Fibonacci number Problem 5.8: F (n).

a). Show FTN ≡p FIB.


b). Show FRC ≡p FIB.
c). Show FTN ≡p FRC.
d). Show FTN ≤p MXP.
e). Show FRC ≤p MXP.

Q 10.30. Recall the two kinds of Lucas Sequence Coefficients (LSC and LSC2) defined
recursively in eqns (6.21) and (6.34) on pages 324 and 352, respectively. LSC and LSC2
triangles are given below.

The top entries are L(1, 1) = 1 for the LSC triangle and L(0, 0) = 2 for the LSC2 triangle.

LSC L(n, k):
1
1 0
1 0 -1
1 0 -2 0
1 0 -3 0 1
1 0 -4 0 3 0
1 0 -5 0 6 0 -1
1 0 -6 0 10 0 -4 0
1 0 -7 0 15 0 -10 0 1
1 0 -8 0 21 0 -20 0 5 0

LSC2 L(n, k):
2
1 0
1 0 -2
1 0 -3 0
1 0 -4 0 2
1 0 -5 0 5 0
1 0 -6 0 9 0 -2
1 0 -7 0 14 0 -7 0
1 0 -8 0 20 0 -16 0 2
1 0 -9 0 27 0 -30 0 9 0

L(n, k) is the kth coefficient of the nth row in the LSC triangle, and L(n, k) is the kth
coefficient of the nth row in the LSC2 triangle.

a). Show FIB ≤m p LSC, where FIB stands for the nth Fibonacci number Problem 5.8.

b). Show LUC ≤m p LSC2 where LUC stands for the nth Lucas number problem defined
in eqn 5.63.
c). Express the reduction relation for the following equation:

L(n, k) = L(n + 1, k + 1) − L(n − 1, k − 1) (10.162)

d). Prove eqn (10.162) by strong induction.


e). Prove eqn (10.163).

L(n, k) = 2L(n + 1, k + 1) − L(n, k + 1) (10.163)

f). Can you show LSC ≤m p LSC2? (Open problem)

Q 10.31. Consider the following consecutive subsequence arithmetic problems:



• MCSS - the maximum consecutive sub-sequence sum Problem 1.15 defined on page 22.
• minCSS - the minimum consecutive sub-sequence sum problem considered previously
as an exercise in Q 1.17 on page 30.
• MCSPp - the maximum consecutive positive real number sub-sequence product prob-
lem considered previously as an exercise in Q 3.18 on page 147.
• minCSPp - the minimum consecutive positive real number sub-sequence product prob-
lem considered previously as an exercise in Q 3.19 on page 148.

a). Show MCSS ≤p minCSS.


b). Demonstrate (MCSS ≤p minCSS) on the following toy example:

3 -1 5 -3 -3 7 4 -1

c). Provide a pseudo code for the reduction (MCSS ≤p minCSS) based algorithm.
d). Show MCSPp ≤p minCSPp.
e). Demonstrate (MCSPp ≤p minCSPp) on the following toy example:

2.0 0.5 5.0 2.0 2.5 0.2 2.5 0.5

f). Provide a pseudo code for the reduction (MCSPp ≤p minCSPp) based algorithm.
g). Show minCSPp ≤p minCSS.
h). Demonstrate (minCSPp ≤p minCSS) on the following toy example:

2.0 0.25 2.0 1.0 0.25 4.0 0.5 4.0

i). Provide a pseudo code for the reduction (minCSPp ≤p minCSS) based algorithm.
j). Show minCSS ≤p minCSPp.
k). Demonstrate (minCSS ≤p minCSPp) on the following toy example:

1 -2 1 0 -2 2 -1 2

l). Provide a pseudo code for the reduction (minCSS ≤p minCSPp) based algorithm.
m). Show MCSS ≤p MCSPp.
n). Demonstrate (MCSS ≤p MCSPp) on the following toy example:

1 -2 1 0 -2 2 -1 2

o). Provide a pseudo code for the reduction (MCSS ≤p MCSPp) based algorithm.

Q 10.32. Illustrate Kadane’s Algorithm 10.31 for the maximum consecutive sub-sequence
sum problem defined on page 22 on the following toy examples:

a). 3 -1 -4 -3 4 -7 4 1

b). 3 -1 5 -3 -3 7 4 -1

c). -3 1 -5 3 3 -7 -4 1

Q 10.33. Consider the minimum consecutive sub-sequence sum problem, or simply minCSS,
considered previously as an exercise in Q 1.17 on page 30.

a). Derive a recurrence relation for the “ending-at” version of the problem.
b). Devise an algorithm using a multi-reduction paradigm (or dynamic programming in
other textbooks) based on the recurrence relation found in a)
c). Illustrate your algorithm on the following toy example:

3 -1 5 -3 -3 7 4 -1

d). Illustrate your algorithm on the following toy example:

-3 1 -5 3 3 -7 -4 1

e). Provide computational time and space complexities of the proposed algorithm in b).

Q 10.34. Consider the maximum consecutive positive real number sub-sequence product
problem, or simply MCSPp, considered previously as an exercise in Q 3.18 on page 147.
a). Derive a recurrence relation for the “ending-at” version of the problem.
b). Devise an algorithm using a multi-reduction paradigm (or dynamic programming in
other textbooks) based on the recurrence relation found in a)
c). Illustrate your algorithm on the following toy example:

0.5 2.0 0.2 0.5 0.4 5.0 0.4 2.0

d). Illustrate your algorithm on the following toy example:

2.0 0.5 5.0 2.0 2.5 0.2 2.5 0.5

e). Provide computational time and space complexities of the proposed algorithm in b).
Q 10.35. Consider the minimum consecutive positive real number sub-sequence product
problem, or simply minCSPp, considered previously as an exercise in Q 3.19 on page 148.
a). Derive a recurrence relation for the “ending-at” version of the problem.
b). Devise an algorithm using a multi-reduction paradigm (or dynamic programming in
other textbooks) based on the recurrence relation found in a)
c). Illustrate your algorithm on the following toy example:

0.5 2.0 0.2 0.5 0.4 5.0 0.4 2.0



d). Illustrate your algorithm on the following toy example:

2.0 0.5 5.0 2.0 2.5 0.2 2.5 0.5

e). Provide computational time and space complexities of the proposed algorithm in b).
Q 10.36. Recall the maximum consecutive sub-sequence product problem considered as an
exercise in Q 1.18 on page 31.

a). Derive a recurrence relation for the “ending-at” version of the problem.
b). Devise an algorithm using a multi-reduction paradigm (or dynamic programming in
other textbooks).
c). Illustrate your algorithm on the following toy example:

-2 0 -1 2 1 -1 -2 2

d). Illustrate your algorithm on the following toy example:

2 -2 1 0 2 -2 2 -0.5

e). Provide computational time and space complexities of the proposed algorithm.
f). Show MCSPp ≤p MCSP, where MCSPp stands for the maximum consecutive positive
real number sub-sequence product problem, considered as an exercise in Q 3.18 on
page 147.

Q 10.37. Recall the minimum consecutive sub-sequence product problem considered as an


exercise in Q 1.19 on page 31.

a). Derive a recurrence relation for the “ending-at” version of the problem.
b). Devise an algorithm using a multi-reduction paradigm (or dynamic programming in
other textbooks).
c). Illustrate your algorithm on the following toy example:

-2 0 -1 2 1 -1 -2 2

d). Illustrate your algorithm on the following toy example:

2 -2 1 0 2 -2 2 -0.5

e). Provide computational time and space complexities of the proposed algorithm.
f). Show minCSPp ≤p minCSP, where minCSPp stands for the minimum consecutive
positive real number sub-sequence product problem, considered as an exercise in Q 3.19
on page 148.

Q 10.38. Recall the maximum and minimum consecutive sub-sequence product problems
(MCSP and minCSP), considered as exercises Q 1.18 on page 31 and Q 1.19 on page 31,
respectively. Can you prove (minCSP ≤p MCSP) or (MCSP ≡p minCSP)?

a). Give a counter-example of why negating elements would not work.


b). Give a counter-example of why taking reciprocals of elements would not work.
c). Give a counter-example of why taking negated reciprocals of elements would not work.
d). Can you prove (minCSP ≤p MCSP) or (MCSP ≡p minCSP)? (Open problem)
Chapter 11

NP-complete

This chapter introduces the theory of computation. The concepts of computability,
tractability, the P class, and the NP class are briefly presented. NP-hardness and NP-completeness
of various problems are introduced with proofs. Numerous NP-complete problems from logic
and combinational circuit theories are introduced in Section 11.2. In Section 11.3, NP-
complete problems from set theory are given. NP-hard optimization problems with their
decision versions are presented in Section 11.4. Two NP-complete scheduling optimization
problems are covered in Section 11.5. Various NP-complete graph related problems are
given in Section 11.6.

11.1 Definitions
11.1.1 Computability
“A problem is computable” means that there exists an algorithm to solve the problem.
All problems listed in ‘Index of Computational Problems’ on page 736 are computable. Not
all problems are computable though. There are uncomputable problems where no algorithm
exists to solve the problem. Such problems are also called non-computable or undecidable.
Famous uncomputable problems include the halting problem, Post correspondence problem,
Wang tiles, Diophantine equation, etc.
Given a description of an arbitrary computer program, the halting problem is to decide

Alonzo Church (1903-1995) was an American mathematician and logician. Major
contributions include the lambda calculus, the Church-Turing thesis, the Frege-Church ontology,
and the Church-Rosser theorem.
© Photograph courtesy of Princeton University.

Alan M. Turing (1912-1954) was a pioneering English computer scientist. He
is considered to be the father of computer science and artificial intelligence. Major
contributions include the Turing machine, the Turing test, and cracking the Enigma code.
© Photograph is in the public domain.


Figure 11.1: Halting problem is uncomputable. (a) the halting problem: a program
halt(x, y) takes a program x and input data y, and outputs true if x halts on y and false
if x loops forever; (b) the lie program: lie() invokes halt(lie(), _); if halt returns true,
lie() loops forever, and otherwise lie() halts.

whether the program x finishes running or continues to run forever on a certain input argument
y, as depicted in Figure 11.1 (a). For example, a program with an accidental infinite
loop would run forever. This simple decision problem is uncomputable. It was proven
independently by Church [34] and Turing [173] using a proof by contradiction. Suppose
that there exists such an algorithm, halt(x, y). Consider the lie program stated in Figure 11.1
(b), which invokes the halt procedure with itself as the input. If the lie program halts,
halt(lie(), _) must return true, but then the lie() program enters an infinite loop, which is a
contradiction. If the lie program never halts, halt(lie(), _) must return false, and thus the
lie() program halts, which is also a contradiction.
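The self-referential construction can be phrased as code. The sketch below is purely
hypothetical: as just proven, no correct halt function can exist, so a stub stands in for it.

def halt(program, argument):
    # Hypothetical decider: it would return True iff program(argument)
    # eventually halts. No such total, correct function can exist.
    raise NotImplementedError("uncomputable")

def lie():
    # If the decider claims lie() halts, loop forever; otherwise halt.
    if halt(lie, None):
        while True:
            pass

# Either answer halt(lie, None) could give is contradicted by lie's own
# behavior, which is exactly the contradiction in the proof above.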

11.1.2 Tractability
A function p(n) is polynomial if there exists a non-negative number k such that p(n) =
O(n^k). A problem px is said to be in P, px ∈ P, if there exists a polynomial time O(p(n))
algorithm that can transform the input into the output in the worst case. To be more precise,
px ∈ P if there exists a deterministic algorithm in polynomial time. Problems belonging to
the P complexity class include alternating permutation, order statistics, searching, sorting,
etc. The concept of the complexity class P was considered formally in [36]. If px ∈ P, px is
said to be tractable. Hence, the complexity class P is the set of all tractable problems.
A problem px is said to be in EXP, px ∈ EXP, if there exists a deterministic algorithm
that runs in exponential time, O(2^{p(n)}), where p(n) is polynomial. Clearly, all
problems in P are also in EXP; P ⊂ EXP.
The opposite of tractable is intractable. It is relatively easy to show tractability
(px ∈ P), but it is much harder to show intractability (px ∉ P), i.e., that a problem px is
insurmountable in polynomial time. If the currently best known algorithm for a certain problem
px is exponential, it can be stated that (px ∈ EXP) but not that (px ∉ P), because we never
know whether there exists a polynomial time algorithm.

11.1.3 Non-deterministic in Polynomial Time


Suppose that a professor gives a question to find whole number solutions such that
a^4 + b^4 + c^4 = d^4, without knowing the answer. If students A and B provide (3, 4, 5, 7) and
(95800, 217519, 414560, 422481), respectively, the professor can grade each answer even
without knowing a solution. Since (3^4 + 4^4 + 5^4 = 962) ≠ (7^4 = 2401), student A is
apparently wrong. Student B is correct, and the solution
95800^4 + 217519^4 + 414560^4 = 422481^4 can be verified in polynomial time.
This problem of finding positive integers such that a^4 + b^4 + c^4 = d^4 is Euler's
sum of powers problem, and Euler conjectured that there are no whole number solutions. In
1987, however, Noam Elkies found a quadruplet (2682440, 15365639, 18796760, 20615673)
such that 2682440^4 + 15365639^4 + 18796760^4 = 20615673^4 [55]. Although finding a quadruplet
may require an exhaustive search algorithm, verifying a guessed solution only takes polynomial
time. Often, verifying a guessed answer to a problem is computationally easier than
actually solving the problem.
A computational problem takes a set of inputs and produces a set of outputs. If the output
is limited to true or false, i.e., the problem is a yes or no question, such problems are called
decision problems. For example, CEU, isupdown, isGBW, primality testing, etc. are decision
problems. Verifying the output of a certain problem is also a decision problem, as depicted
in Figure 11.2 (c).

Figure 11.2: Computational problem types. (a) a computational problem: an algorithm A
transforms an input into an output; (b) a decision problem: A answers yes or no; (c) the
verification problem of a certain algorithm: a verifier V takes the input together with a
guessed output G and answers correct or wrong.

NP stands for non-deterministic in polynomial time. A problem px is said to be in


NP, px ∈ NP if there exists a deterministic verification algorithm in polynomial time. In
other words, if one can devise a polynomial time complexity grading program which takes
a program for px as an input and outputs the correctness of the program, this problem px
is said to be in NP complexity class. An algorithm is said to be deterministic if it always
produces a correct output and non-deterministic if it sometimes produces a correct output.
If a guessing algorithm and the verification step take polynomial time, the whole process is
a non-deterministic algorithm in polynomial time.
There exist many problems for which no deterministic algorithm in polynomial time
is known. One such problem is the boolean satisfiability problem, or simply SAT. It is to
determine whether a given statement, S, which is either a proposition or a compound
statement, is satisfiable, i.e., whether there exists an assignment of truth values, true or
false, for the boolean variables such that S becomes true. It is defined formally as follows:
Problem 11.1. is_satisfiable(S) (SAT)

Input: A statement S of length m with n boolean variables, ⟨p1 , · · · , pn ⟩
Output: $\begin{cases} \mathrm{True}/V_{1\sim n} & \text{if } \exists V_{1\sim n}, \mathrm{eval}(S, V_{1\sim n}) = \mathrm{T} \text{ where } v_i = \mathrm{T} \text{ or } \mathrm{F} \\ \mathrm{False} & \text{otherwise} \end{cases}$

For example, S1 = (p → q) ∧ (¬p → r) is satisfiable because there is an assignment
(p = T, q = T, and r = F) which makes S1 = T. S2 = (p ∧ ¬q) ∧ ¬(p ∨ r), however, is not
satisfiable since there is no assignment that makes S2 = T, i.e., S2 is a fallacy. Building a truth
table can determine whether a statement is satisfiable, as shown in Figure 11.3. Since the
height of the truth table is 2^n, the truth table based algorithm for the SAT problem
would take O(m2^n) in the worst case.

p q r | p → q | ¬p → r | S1
T T T |   T   |   T    | T
T T F |   T   |   T    | T
T F T |   F   |   T    | F
T F F |   F   |   T    | F
F T T |   T   |   T    | T
F T F |   T   |   F    | F
F F T |   T   |   T    | T
F F F |   T   |   F    | F
(a) S1 = (p → q) ∧ (¬p → r) is satisfiable.

p q r | p ∧ ¬q | ¬(p ∨ r) | S2
T T T |   F    |    F     | F
T T F |   F    |    F     | F
T F T |   T    |    F     | F
T F F |   T    |    F     | F
F T T |   F    |    F     | F
F T F |   F    |    T     | F
F F T |   F    |    F     | F
F F F |   F    |    T     | F
(b) S2 = (p ∧ ¬q) ∧ ¬(p ∨ r) is not satisfiable.

Figure 11.3: Truth tables
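A brute-force truth-table SAT checker matching the O(m2^n) bound can be sketched in
Python; representing S as a Python callable is an illustrative choice standing in for the
book's eval(S, V1∼n):

from itertools import product

def is_satisfiable(statement, variables):
    # Try all 2^n truth assignments; each evaluation costs O(m),
    # so the total cost is O(m * 2^n).
    for values in product([True, False], repeat=len(variables)):
        if statement(*values):
            return True, dict(zip(variables, values))
    return False, None

# S1 = (p -> q) and (~p -> r), with x -> y written as (not x) or y.
s1 = lambda p, q, r: ((not p) or q) and (p or r)
print(is_satisfiable(s1, "pqr"))  # True with a satisfying assignment

# S2 = (p and not q) and not (p or r) is a fallacy.
s2 = lambda p, q, r: (p and not q) and not (p or r)
print(is_satisfiable(s2, "pqr"))  # (False, None)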

Theorem 11.1. SAT ∈ NP.

Proof. Any guessed solution can be verified in linear time. First convert the infix notation
input into a postfix notation and then evaluate the postfix notation with a guessed answer.
Algorithm 7.12 on page 369 takes linear time. Hence, SAT ∈ NP. 

If px ∈ P, then px ∈ NP. The polynomial time algorithm for px can be used to produce
the correct output and compare it with the guessed output. There are exceptions though.
Many problems in P, such as the alternating permutation Problem 2.19, may have more than
one correct answer. Checking the up-down sequence by Algorithm 1.18 on page 30 clearly
takes O(n) but is not sufficient to prove UDP ∈ NP. ∀x (if x ∈ A, then x ∈ O) must be
shown as well, which takes O(n log n) by sorting both input and output sequences and
checking whether they are identical.
$$\mathrm{verify}(\mathrm{UDP\_algox}, A) = \begin{cases} \mathrm{T} & \text{if } \mathrm{isupdown}(O) \wedge (\mathrm{sort}(A) = \mathrm{sort}(O)) \\ \mathrm{F} & \text{otherwise} \end{cases} \qquad (11.1)$$

where O = UDP_algox(A).

Clearly, checking an arbitrary algorithm, UDP_algox, on an arbitrary input, A, by
eqn (11.1) takes polynomial time, and thus UDP ∈ NP.
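Eqn (11.1) translates directly into a polynomial time verifier. The Python sketch below
assumes the up-down pattern uses strict inequalities (o1 < o2 > o3 < · · ·), and all names
are illustrative:

def isupdown(O):
    # check the alternating pattern o1 < o2 > o3 < o4 ...
    return all(O[i] < O[i + 1] if i % 2 == 0 else O[i] > O[i + 1]
               for i in range(len(O) - 1))

def verify_udp(udp_algox, A):
    # polynomial time verifier per eqn (11.1): the output must be
    # up-down AND a permutation of the input (checked by sorting)
    O = udp_algox(A)
    return isupdown(O) and sorted(A) == sorted(O)

def naive_udp(A):
    # a simple candidate algorithm: sort, then swap adjacent pairs
    B = sorted(A)
    for i in range(1, len(B) - 1, 2):
        B[i], B[i + 1] = B[i + 1], B[i]
    return B

print(verify_udp(naive_udp, [3, 1, 4, 7, 5, 9, 2, 6]))  # prints True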

11.1.4 P vs. NP
The P vs. NP problem, introduced in [41], is one of the seven Millennium Prize Problems
selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct
solution (see [40] for the full problem description). The problem is whether there exists any

Figure 11.4: P versus NP problem. (a) P ≠ NP: some problem px lies in NP but outside of
P; (b) P = NP: the two classes coincide.

problem which can be verified in polynomial time but cannot be solved in polynomial time;
P ≠ NP, or P ⊂ NP, as depicted in Figure 11.4 (a), or whether every problem which can be
verified in polynomial time can also be solved in polynomial time; P = NP, as depicted in
Figure 11.4 (b).
To prove the former case, P ≠ NP, one needs to find one such problem; ∃px ∈ NP, px ∉ P.
To prove the latter case, P = NP, one has to show it for all problems; ∀px ∈ NP, px ∈ P.
Most computer scientists strongly believe the former case, P ≠ NP (see the poll in [67]).

11.1.5 NP-complete
A problem px is said to be in NP-complete if px is in NP and all problems in NP reduce
to px : ∀py ∈ NP, py ≤p px .

Definition 11.1. px ∈ NP-complete if and only if px ∈ NP and ∀py ∈ NP, py ≤p px .

Problems in NP-complete, which are simply called NP-complete problems, are of great
importance as they are extremely powerful problems. Once a polynomial time
algorithm is devised for any NP-complete problem, all problems in NP can be solved
in polynomial time as well, and consequently P = NP. Most computer scientists strongly
believe that it is impossible to solve any NP-complete problem in polynomial time, i.e., P
≠ NP. Hence, proving that any one of the NP-complete problems is impossible to compute
in polynomial time suffices to prove that P ≠ NP as well.
Conversely, if px ∈ NP-complete, there cannot be a deterministic polynomial time
algorithm for px unless P = NP. In other words, the computational time complexity of any
algorithm for px is in EXP unless P = NP.

Stephen Arthur Cook (1939-) is a renowned American-Canadian computer sci-


entist and mathematician who has made major contributions to the fields of complexity
theory and proof complexity. He is considered one of the forefathers of computational
complexity theory. Photo
c Credit: courtesy of Professor Stephen A. Cook.

Leonid Anatolievich Levin (1948-) is a Soviet-American computer scientist. Levin was awarded the Knuth Prize in 2012 for his discovery of NP-completeness and the development of average-case complexity. Photo credit: Sergio01, licensed under CC BY 3.0; changes were made.

The concept of NP-completeness was developed by Cook [41] and Levin [112] independently, and their names are attached to the important theorem that the boolean satisfiability Problem 11.1, or simply SAT, is in NP-complete.
Theorem 11.2. Cook-Levin Theorem

SAT ∈ NP-complete
Cook-Levin Theorem 11.2 can also be stated as “SAT is the hardest problem in NP.” The proof of Cook-Levin Theorem 11.2 is omitted here (see [66] for a proof).
Proving that all problems in NP reduce to SAT is quite daunting, but proving that other problems are in NP-complete becomes much easier with the concept of NP-hardness in the subsequent subsection. Such proofs utilize reduction and the transitivity of reduction Theorem 10.2 on page 564, with SAT at the root.

11.1.6 NP-hard

(a) Pick px ∈ NP-complete   (b) Show px ≤p py   (c) py ∈ NP-hard proven

Figure 11.5: Proving py ∈ NP-hard by px ≤p py .

A problem py ∈ NP-hard means that the problem py is as hard as the hardest problem
in NP. It can be formally defined as follows:
Definition 11.2. NP-hard by reduction

py ∈ NP-hard if and only if ∃px ∈ NP-complete such that px ≤p py


The process of proving that a problem py ∈ NP-hard is depicted in Figure 11.5. First, a known NP-complete problem px must be selected. Note that problems in NP-complete are the hardest problems in NP. Next, if one shows that px ≤p py , then py ∈ NP-hard is proven by the transitivity of reduction Theorem 10.2 on page 564.
Recall that if px ∈ NP-complete, there cannot be a deterministic polynomial time algorithm for px unless P = NP. Suppose that there exists a deterministic polynomial time algorithm for py . Then, since px ≤p py , there would exist a deterministic polynomial time algorithm for px as well. This contradicts the fact that there cannot be a deterministic polynomial time algorithm for px unless P = NP. Hence, the computational complexity of py is as hard as that of the hardest problem px in NP.
If py ∈ NP-hard is proven, the computational complexity of py is presumably in EXP
unless P = NP. If a deterministic polynomial time algorithm is actually devised for the
NP-hard problem py , P = NP is proven.
Another definition for NP-completeness, different from Definition 11.1, can be stated: py ∈ NP-complete if and only if py ∈ NP and py ∈ NP-hard. NP-complete is the intersection of NP and NP-hard, as depicted in Figure 11.6.

Figure 11.6: NP-complete = NP-hard ∩ NP.

In [94], Karp's 21 NP-complete problems were introduced and proven by reduction. Various NP-complete problems from logic, graph theory, set theory, scheduling, and combinatorial optimization are presented in subsequent sections. Figure 11.37 on page 694 builds the edifice of NP-complete problems as a partial reduction graph whose nodes are NP-complete problems and whose arcs indicate the reduction from one problem to another. Technically, the reduction graph of NP-complete problems is a complete graph.

11.2 NP-complete Logic Problems


This section introduces several NP-complete problems related to logic and combinational circuits. These problems include combinational circuit satisfiability, satisfiability of CNF-3, satisfiability of CNF, and NAND gate only circuit satisfiability. They are located in the top center balloon in Figure 11.37 on page 694.

11.2.1 Combinational Circuit Satisfiability

[Panel (a) shows the three basic gates: inverter (NOT), AND, and OR. Panel (b) shows a sample combinational circuit computing (¬p ∨ q) ∧ ¬(q ∨ r). Panel (c) gives its tabular representation:]

Gate  L   R   Type  procedure
g1    p   -   ¬     neg(p)
g2    g1  q   ∨     disj(g1 , q)
g3    q   r   ∨     disj(q, r)
g4    g3  -   ¬     neg(g3 )
g5    g2  g4  ∧     conj(g2 , g4 )

(a) Basic gates   (b) a sample combinational circuit   (c) a combinational circuit representation

Figure 11.7: Combinational circuit representation

Most electronic devices, including computers, are composed of a number of circuits. The basic elements of circuits are called gates. As shown in Figure 11.7 (a), the three most basic gates are the inverter (NOT), disjunction (OR), and conjunction (AND) gates, whose output values are boolean (0 or 1). One and zero correspond to T (true) and F (false), respectively. When a circuit is purely composed of basic gates without a cycle, it is called a combinational circuit or gating network. A sample combinational circuit with a combination of five gates is given in Figure 11.7 (b). Combinational circuits are acyclic, while other types of circuits, such as sequential circuits, do allow cycles. Hence, a combinational circuit C is a directed acyclic
graph with |V | = nc = m + n, where m and n are the numbers of gates and variables, respectively. Let's assume that a combinational circuit has a single network output terminal gate; for example, in Figure 11.7 (b), g5 is the network output terminal gate, as its output is not fed into any other gate.
Although an adjacency matrix or list can be used to represent a combinational circuit, a simpler representation such as in Figure 11.7 (c) can also be used if the gates are limited to binary gates. AND and OR gates are allowed to take more than two inputs in circuit logic, but they can be trivially converted into binary gates. A node is either a variable node or a gate node. If the type of a gate is ∧ or ∨, the gate node consists of two children nodes and the type information. Only the inverter (NOT) gate has one input.
Consider the combinational circuit satisfiability Problem 11.2 defined below. It asks whether there exists any assignment to the boolean variables such that the network output terminal gate fires one, which means T (true). It is conventionally abbreviated as CIRCUIT-SAT, such as in [42], but here CCS shall be used for brevity's sake.

Problem 11.2. Combinational Circuit Satisfiability (CCS)


Input: A combinational circuit C with m gates, ⟨g1 , · · · , gm ⟩, a terminal gate, gt , and n variables, ⟨p1 , · · · , pn ⟩
Output: (True, V1∼n ) if there exists an assignment V1∼n , with each vi = 1 or 0, such that evalCC(gt , V1∼n ) = 1; False otherwise

Instead of spending a considerably long time trying to come up with a polynomial time algorithm for CCS, one might simply show that CCS is in NP-complete. To do so, CCS ∈ NP as well as CCS ∈ NP-hard must be shown.
First, to show that CCS ∈ NP, the problem of evaluating a combinational circuit with a guessed input assignment must be considered: given a digital circuit and input signals, the problem is to determine whether it returns 1 or 0.

[The figure traces the stack during evaluation: g5 , g2 , and g1 are pushed; p = 1 resolves g1 = 0; q = 0 resolves g2 = 0; then g4 and g3 are pushed; q = 0 and r = 0 resolve g3 = 0, then g4 = 1, and finally g5 = 0 is popped as the answer.]

Figure 11.8: Evaluating a combinational circuit C in Figure 11.7 (b) by a stack

Theorem 11.3. The combinational circuit satisfiability, CCS ∈ NP.



Proof. Let V1∼n be a 0 or 1 assignment to the variables ⟨p1 , · · · , pn ⟩. A combinational circuit C is referred to by its root node, the network output terminal gate gt , which returns either 0 or 1. Every internal node is a sub combinational circuit. Let gx .L and gx .R be the first and second input argument nodes of the node gx . Consider the following recurrence relation for evaluating a combinational circuit:

eval(gx , V1∼n ) = vy                                              if gx = (py ∈ V )
                   1 − eval(gx .L, V1∼n )                          if gx is a ‘¬’ gate        (11.2)
                   gx .op(eval(gx .L, V1∼n ), eval(gx .R, V1∼n ))  otherwise

While naı̈ve recursive programming by eqn (11.2) takes exponential time in the worst case, the strong inductive programming or memoization version of eqn (11.2) takes polynomial time. A strong inductive programming would utilize a topologically sorted list of the nodes of C to evaluate each gate; valid topologically sorted lists of gates include ⟨g1 , g2 , g3 , g4 , g5 ⟩, ⟨g3 , g1 , g4 , g2 , g5 ⟩, etc. Evaluating the gates' outputs in a topologically valid order clearly takes polynomial time. A naı̈ve recursive algorithm, memoization, or a depth first traversal like algorithm using a stack is illustrated in Figure 11.8. Stating the respective pseudo codes is left for exercises, but they clearly take polynomial time and thus CCS ∈ NP. 
Next, to show CCS ∈ NP-hard, a problem px ∈ NP-complete must be selected. Only one problem is known to be in NP-complete thus far, and that is SAT, by the Cook-Levin Theorem 11.2. If SAT ≤p CCS is shown, CCS ∈ NP-hard is proven.
[Every internal node of the parse tree becomes a gate, and duplicate variables are merged into one node in the circuit.]

(a) a propositional logic parse tree for “(¬p ∨ q) ∧ ¬(q ∨ r)”   (b) the corresponding combinational circuit

Figure 11.9: A sample illustration for SAT ≤p CCS

Theorem 11.4. SAT ≤p CCS


Proof. If the input of SAT, a propositional statement, can be converted into a combinational circuit, then the solution to CCS is identical to that of SAT. As illustrated in Figure 11.9, every operator, which is an internal node in the propositional logic parse tree, becomes a gate node, but duplicate variables in a propositional logic parse tree are merged into one in the combinational circuit. To transform a propositional statement in infix notation into a combinational circuit, consider the following Algorithm 11.1, which first converts the input statement into postfix notation. This input transformation algorithm resembles the postfix evaluation Algorithm 7.10 described on page 367. Instead of pushing the assignment truth value when the symbol is a Boolean variable, as in Algorithm 7.10, the variable itself is pushed onto the stack. When a symbol is a negation operator, a new gate node gx is created; then the top element is popped from the stack and an arc is added from the popped element

Gate    notation   Boolean algebra    evaluate
NOT     ¬x         ¬x                 neg(x) = 1 − x
AND     x ∧ y      xy                 conj(x, y) = x × y
OR      x ∨ y      x + y              disj(x, y) = ⌈(x + y)/2⌉ = 1 − (1 − x) × (1 − y)
NAND    x ↑ y      ¬(xy)              nand(x, y) = 1 − x × y
NOR     x ↓ y      ¬(x + y)           nor(x, y) = (1 − x) × (1 − y)
XOR     x ⊕ y      x ⊕ y              xor(x, y) = (x + y) % 2
XNOR    x ⊙ y      ¬(x ⊕ y)           xnor(x, y) = 1 − (x + y) % 2
BiCon   x ↔ y      (¬x + y)(x + ¬y)   bicon(x, y) = xnor(x, y) = 1 − xor(x, y)
IMPL    x → y      ¬x + y             impl(x, y) = 1 − x(1 − y)
        x ← y      x + ¬y             r-impl(x, y) = 1 − (1 − x)y
(a) Seven conventional gates for combinational circuits

[Panels (b) through (g) depict the NAND, NOR, imply, reverse imply, XOR, and XNOR/biconditional gates and their realizations from the basic gates.]
Figure 11.10: Digital circuit to propositional logic

to the newly created gate node. Next, the new gate node gx is pushed onto the stack. If the symbol is a binary operator, the process is similar to that of the negation operator, but two items are popped and two arcs from them are created. There are, however, exceptional binary operators: the implication ‘→’ and the reverse implication ‘←’. There are more symbols in propositional logic than in combinational circuits, as summarized in Figure 11.10 (a). Albeit only three gates are widely used, the seven-gate version is conventionally utilized to map gates to their respective logical connectives. However, no gate is defined for the implication connective ‘→’. If an implication connective symbol occurs in a propositional statement, it can be converted into a circuit using two gates, as illustrated in Figure 11.10 (d), using the fact in eqn (7.3). As ‘←’ hardly appears, Algorithm 11.1 deals only with ‘→’, but ‘←’ can also be trivially handled, as indicated in Figure 11.10 (e). Algorithm 11.1 clearly takes linear time, and thus CCS is in NP-hard. 
The problem of converting or constructing a combinational circuit from a propositional
statement is defined as follows:
Problem 11.3. Construct combinational circuit(S)
Input: A propositional statement S with n boolean variables, hp1 , · · · , pn i
Output: A combinational circuit C where gt is the terminal gate such that
eval(gt , V1∼n ) = eval(S, V1∼n ) ∀V1∼n where vi = T or F

It is assumed that all seven gates may be used to construct a combinational circuit C. It is also assumed that the connective ‘←’ is omitted for simplicity's sake. An algorithm to convert a propositional statement into a combinational circuit using a stack is stated as follows:
Algorithm 11.1. Constructing a combinational circuit using a stack.
ConstructCC(S)
declare a stack, Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
A =convert inf2post(S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
V = {p1 , p2 , · · · , pn } . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
j = 0 ....................................................4
for i = 1 ∼ |A| . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
if ai ∈ P = {p1 , p2 , · · · , pn }, push ai to Tstack . . . . 6,7
if ai = ‘¬’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
j = j + 1 and cj .L = pop from Tstack . . . . . . . . . . . 9,10
cj .P = ai and push cj to Tstack . . . . . . . . . . . . . . . . 11,12
if ai ∈ binary operator = {∨, ∧, ↔, ⊕, ↑, ↓} . . . . . . . . . . . 13
j = j + 1 and cj .L = pop from Tstack . . . . . . . . . . 14,15
cj .P = ai and cj .R = pop from Tstack . . . . . . . . . . 16,17
push cj to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
if ai = ‘→’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
j = j + 1 and cj .L = pop from Tstack . . . . . . . . . . 20,21
cj .P = ‘¬’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
j = j + 1 and cj .L = cj−1 . . . . . . . . . . . . . . . . . . . . . . . 23,24
cj .P = ‘∨’ and cj .R = pop from Tstack . . . . . . . . . 25,26
push cj to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
return pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 11.11 (b) illustrates Algorithm 11.1 for the statement ‘(¬p ∨ q) ∧ ¬(q ∨ r)’. Every time a connective symbol is encountered, a new gate node is created and pushed onto the stack after popping the respective number of arguments. Figure 11.11 (a), which evaluates a postfix notation, is given to show how similar Algorithm 11.1 is to Algorithm 7.10. A compact Python sketch of Algorithm 11.1 is given below.
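The following sketch mirrors Algorithm 11.1 for the connectives {¬, ∨, ∧, →}; the gate-table representation is an assumption for illustration, and the infix-to-postfix conversion is assumed to have been done already.

# Build a combinational circuit (as a gate table) from a postfix token list;
# '->' is rewritten with a NOT gate and an OR gate as in Figure 11.10 (d).
def construct_cc(postfix, variables):
    gates, stack = [], []
    def new_gate(op, left, right=None):
        gates.append({'op': op, 'L': left, 'R': right})
        return len(gates) - 1                    # integer gate id
    for tok in postfix:
        if tok in variables:
            stack.append(tok)                    # shared node: duplicates merge
        elif tok == 'not':
            stack.append(new_gate('not', stack.pop()))
        elif tok == '->':                        # p -> q  ==  (not p) or q
            q, p = stack.pop(), stack.pop()
            stack.append(new_gate('or', new_gate('not', p), q))
        else:                                    # binary: 'and', 'or', ...
            q, p = stack.pop(), stack.pop()
            stack.append(new_gate(tok, p, q))
    return gates, stack.pop()                    # gate table and terminal id

# '(not p or q) and not (q or r)' in postfix yields gates g1..g5 (ids 0..4):
gates, gt = construct_cc(['p', 'not', 'q', 'or', 'q', 'r', 'or', 'not', 'and'],
                         {'p', 'q', 'r'})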
In [42], CCS instead of SAT is proven to be the first problem in NP-complete. Let's assume that only CCS is known to be in NP-complete and try to show that SAT is also in NP-complete. To do so, CCS ≤p SAT must be shown, as SAT ∈ NP is already proven in Theorem 11.1.
There exists an equivalent propositional statement S for any combinational circuit C, and thus if one devises a polynomial time conversion algorithm, CCS ≤p SAT is shown and SAT ∈ NP-hard is proven. The problem of converting a combinational circuit to a propositional statement is defined as follows:
Problem 11.4. Convert a combinational circuit to a propositional statement
Input: A combinational circuit C with m gates, hg1 , · · · , gm i, a terminal gate, gt ,
and n variables, hp1 , · · · , pn i
Output: A propositional statement S such that
eval(S, V1∼n ) = eval(gt , V1∼n ) ∀V1∼n where vi = T or F
The salient difference between propositional logic and a combinational circuit is their representations: a tree vs. a DAG. A propositional logical statement is a string with some

[(a) evaluating “p ¬ q ∨ q r ∨ ¬ ∧” for (p = T, q = F, r = F) by a stack; (b) transforming the same postfix sequence into a combinational circuit, producing g1 ← ¬p, g2 ← (g1 ∨ q), g3 ← (q ∨ r), g4 ← ¬g3 , and g5 ← (g2 ∧ g4 ).]

Gate  L   R   Type
g1    p   -   ¬
g2    g1  q   ∨
g3    q   r   ∨
g4    g3  -   ¬
g5    g2  g4  ∧

Figure 11.11: Transforming to a combinational circuit for ‘(¬p ∨ q) ∧ ¬(q ∨ r)’

binary connectives, and thus there is a corresponding binary tree representation, as depicted in Figure 11.9. A combinational circuit is a DAG, which is more flexible than a tree.
Consider the following algorithm to convert a combinational circuit to a propositional statement based on Figure 11.9. One may obtain the propositional logic parse tree in Figure 11.9 (a) directly from the combinational circuit in Figure 11.9 (b); an in-order depth first traversal then provides a propositional statement. The implicit propositional logic parse tree can be derived by a slightly modified graph-DFT algorithm on a combinational circuit. A pseudo code of a recursive DFT to convert a combinational circuit to a propositional statement is stated as follows:
Algorithm 11.2. Converting a combinational circuit to a propositional statement

CC2PS(gx )
if gx .type is a variable, return gx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if gx .type = ‘¬’ and gx .L.type is a variable, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
return concatenate(‘¬’, CC2PS(gx .L)) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if gx .type = ‘¬’ and gx .L.type is not a variable, . . . . . . . . . . . . . . . . . . . . . . . . . 4

return concatenate(‘¬(’, CC2PS(gx .L), ‘)’ ) . . . . . . . . . . . . . . . . . . . . . . . . . . 5


if gx .type ∈ binary gates, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return concatenate(‘(’, CC2PS(gx .L), gx .type, CC2PS(gx .R), ‘)’) . . . . . 7

In order to prove CCS ≤p SAT, the computational time complexity of Algorithm 11.2 must be provided. Unfortunately, it takes exponential time in the worst case, as pointed out in Figure 11.12. While the linear circuit in Figure 11.12 (a) grows linearly, the corresponding propositional statement produced by Algorithm 11.2 in Figure 11.12 (b) grows exponentially. In general, if the output of a gate gx is connected to multiple other gates, gx is accessed multiple times, and thus the statement can grow exponentially.

[Panels (a) and (b) contrast a chain of AND gates, each of whose output feeds both inputs of the next gate, with the propositional logic parse trees that Algorithm 11.2 produces for it.]

(a) combinational circuits   (b) propositional logic parse trees


1 p∧q
2 (p ∧ q) ∧ (p ∧ q)
3 ((p ∧ q) ∧ (p ∧ q)) ∧ ((p ∧ q) ∧ (p ∧ q))
4 (((p ∧ q) ∧ (p ∧ q)) ∧ ((p ∧ q) ∧ (p ∧ q))) ∧ (((p ∧ q) ∧ (p ∧ q)) ∧ ((p ∧ q) ∧ (p ∧ q)))
(c) number of gates vs. size of the propositional statements of a linear circuit.

Figure 11.12: Digital circuit to propositional logic

Hence, a different approach to show CCS ≤p SAT is necessary, and the equi-satisfiability concept provides a solution. Two statements Sx and Sy are said to be equi-satisfiable if Sx is satisfiable if and only if Sy is satisfiable (see [108, p. 21] for equi-satisfiability). The problem of converting a combinational circuit to an equi-satisfiable propositional statement is defined as follows:
Problem 11.5. Converting a combinational circuit to an equi-satisfiable propositional
statement
Input: A combinational circuit C with m gates, hg1 , · · · , gm i, a terminal gate, gt ,
and n variables, hp1 , · · · , pn i
Output: A propositional statement S such that is satisfiable(S) = is satisfiable(gt )
The equi-satisfiability Problem 11.5 can be solved in polynomial time and, thus, the
following reduction relation Theorem can be derived:
Theorem 11.5. CCS ≤p SAT
Proof. Consider the following conversion algorithm. Make each gate a Boolean variable gx . If the gate type is a negation, make a sub-propositional statement using ↔ as an assignment, gx ↔ ¬gx .L. If the gate type is binary, make a sub-propositional statement gx ↔ (gx .L gx .type gx .R). Every sub-propositional statement is connected using conjunction, and finally the terminal gate variable itself is appended using conjunction.

This converted statement is equi-satisfiable to the original circuit. Each logical symbol's equi-satisfiability can be shown by a truth table, and the entire statement's equi-satisfiability can be shown by induction. The important point here is that this transformation takes polynomial time. Hence, CCS ≤p SAT. 

[Panel (a) repeats the sample circuit C from Figure 11.9; panel (b) lists the sub-propositions:]

Gate  L   R   Type  Sub-proposition
g1    p   -   ¬     g1 ↔ ¬p
g2    g1  q   ∨     g2 ↔ (g1 ∨ q)
g3    q   r   ∨     g3 ↔ (q ∨ r)
g4    g3  -   ¬     g4 ↔ ¬g3
g5    g2  g4  ∧     g5 ↔ (g2 ∧ g4 )

(a) A sample C from Figure 11.9   (b) Sub-propositions

(g1 ↔ ¬p) ∧ (g2 ↔ (g1 ∨ q)) ∧ (g3 ↔ (q ∨ r)) ∧ (g4 ↔ ¬g3 ) ∧ (g5 ↔ (g2 ∧ g4 )) ∧ g5

(c) A propositional logic statement equi-satisfiable to C

Figure 11.13: Combinational circuit to equi-satisfiable propositional logic

For example, consider the combinational circuit C in Figure 11.13 (a). Each gate is converted to the respective sub-proposition using the bi-conditional connective, as given in Figure 11.13 (b). Every sub-propositional statement is connected using conjunction, and the terminal gate variable is appended using conjunction. The final statement equi-satisfiable to C is given in Figure 11.13 (c). Since the length of the statement is proportional to the number of gates in C, the conversion takes Θ(m) and consequently CCS ≤p SAT. Theorem 11.5 is equivalent to saying SAT ∈ NP-hard.
A pseudo code of a recursive DFT to convert a combinational circuit to an equi-satisfiable
propositional statement is stated as follows:

Algorithm 11.3. Converting a combinational circuit to an equi-satisfiable propositional


statement (Memoization version)

Declare a global table T1∼m and call CC2PS(gt ) initially


CC2PS(gx )
if T [gx ] = nil, return  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else (if T [gx ] 6= nil), . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
T [gx ] = ‘visited’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
if gx .type = ‘¬’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
return concatenate(‘(gx ↔ ¬gx .L)∧’, CC2PS(gx .L)) . . . . . . 5
if gx .type ∈ binary gates, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
return concatenate(‘(gx ↔ (gx .L gx .type gx .R))∧’,
CC2PS(gx .L), CC2PS(gx .R)) . . . . . . . . 7
if gx = gt , return ‘gt ’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Many exponential time naı̈ve recursive algorithms become polynomial when the memoization technique is used. Indeed, Algorithm 11.3 can be considered a memoization method based on the recursive programming Algorithm 11.2. The exponential time conversion Algorithm 11.2 becomes polynomial when each gate is assigned a variable, as in Algorithm 11.3.

11.2.2 Satisfiability of CNF-3


Conjunctive normal form, or CNF in short, is a standardization or normalization of
a logical formula, which is a conjunction of d-clauses where a d-clause is a literal or a
disjunction of literals. A literal is an atomic formula or its negation, e.g., p, q, ¬p, or ¬q.
Let L be a set of literals. Let ∨(L) be the d-clause where all literals in L are connected by
disjunction.
∨(L) = ⋁_{x∈L} x = ⋁_{i=1}^{|L|} li = l1 ∨ · · · ∨ l|L| , where L = {l1 , · · · , l|L| }        (11.3)

The grammar for the d-clause is defined recursively and pseudo-formally in eqn (11.4). Recall
that the symbol, _, means the string concatenation defined in Definition 2.1 on page 70.
∨(L) = l1                            if |L| = 1
       ∨(L − {l|L| }) _ ‘∨’ _ l|L|   if |L| > 1        (11.4)

CNF is a logical formula such that a single d-clause or m d-clauses are connected by conjunction. Let dx be a d-clause and D1∼|D| be a set of |D| d-clauses: D = {d1 , · · · , d|D| }. The superscript c is used to denote a propositional logic statement in CNF, S c , to distinguish it from a general logic statement, S.

S c = ⋀_{x∈D} x = ⋀_{i=1}^{|D|} di = d1 ∧ · · · ∧ d|D|        (11.5)

Let dx be the xth d-clause in S c and lx,y be the yth literal in the xth d-clause. Then eqn (11.5) can be stated as in eqn (11.6).

S c = ⋀_{dx ∈D} ⋁_{lx,y ∈dx } lx,y = ⋀_{i=1}^{|D|} ⋁_{j=1}^{|di |} li,j        (11.6)

     = (l1,1 ∨ · · · ∨ l1,|d1 | ) ∧ (l2,1 ∨ · · · ∨ l2,|d2 | ) ∧ · · · ∧ (l|D|,1 ∨ · · · ∨ l|D|,|d|D| | )


The grammar for the CNF is defined recursively and pseudo-formally as follows:
CNF(D1∼m ) = d1                            if m = 1
             CNF(D1∼m−1 ) _ ‘∧’ _ dm       if m > 1        (11.7)
Parentheses are necessary around a d-clause if it has multiple literals connected by disjunctions and there are other d-clauses connected by conjunctions. For example, consider a valid CNF statement with two d-clauses, ‘p ∧ (¬q ∨ r).’ If the parentheses are removed, ‘p ∧ ¬q ∨ r’ is not in CNF, as it is equivalent to ‘(p ∧ ¬q) ∨ r.’ Some more valid and invalid examples of CNF statements are given in Figure 11.14.
Let CNF-k be the conjunctive normal form where each d-clause’s size is at most k. For
example, statements in the first row in Figure 11.14 (a) are CNF-1. All statements in the

‘p’ (1 d-clause)                           ‘p ∧ ¬r’ (2 d-clauses)
‘p ∨ ¬r’ (1 d-clause)                      ‘p ∧ (¬q ∨ r) ∧ ¬r’ (3 d-clauses)
‘(p ∨ ¬q ∨ ¬r) ∧ (¬p ∨ q)’ (2 d-clauses)   ‘(¬p ∨ r) ∧ (¬p ∨ q ∨ ¬s) ∧ p’ (3 d-clauses)
‘(p ∨ q ∨ r ∨ ¬s) ∧ t’ (2 d-clauses)       ‘¬p ∨ q ∨ r ∨ ¬s ∨ t’ (1 d-clause)
(a) Valid CNF statements
‘¬p ∨ q ∨ r ∧ ¬s’   ‘p → r’   ‘(p ∨ ¬q ∨ ¬r) ∧ ¬(¬p ∨ q)’   ‘(¬p ∧ q ∧ r) ∨ ¬r’
(b) Invalid CNF statements

Figure 11.14: Valid and invalid examples of CNF statements

first and second rows are CNF-2. All statements in the first to third rows are CNF-3. The statements in the fourth row are not CNF-3, but they are CNF-4 and CNF-5, respectively.
The satisfiability Problem 11.1 whose logical statement input is restricted to CNF-3 is the satisfiability of CNF-3 problem, or SC-3 in short. SC-k problems are special cases of the SAT problem. 3-CNF-SAT [42, p. 999] or 3SAT [71, p. 605] are conventionally used to denote the satisfiability of CNF-3 problem, but SC-3 is used as the abbreviation in this text.
Showing the NP-completeness of SC-3 by reduction from SAT requires converting the input statement S for SAT into the input S c3 for SC-3 in polynomial time such that S c3 is equi-satisfiable to S. A naı̈ve algorithm to transform a propositional statement into CNF would utilize De Morgan's laws in eqns (11.8, 11.9), the distributive properties in eqns (11.10, 11.11), and eqns (7.3 ∼ 7.7).

¬(p ∧ q) ≡ ¬p ∨ ¬q (11.8)
¬(p ∨ q) ≡ ¬p ∧ ¬q (11.9)
p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r) (11.10)
p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r) (11.11)

In the worst case, such as ‘((p ⊕ q) ⊕ (p ↔ r)) ⊕ ((p ⊕ r) ⊕ (q ↔ r))’, however, it takes exponential time. It seemed impossible to convert a propositional statement into CNF in polynomial time until Tseitin discovered a way in [172].

Theorem 11.6. SC-3 is NP-complete by SAT.

Proof. SC-3 ∈ NP because SC-3 is a special case of SAT, and SAT can be verified in linear time by Theorem 11.1: convert the formula from infix notation into postfix notation, which takes linear time using a stack, and then evaluate the postfix expression, which takes linear time using another stack.
In order to show SC-3 ∈ NP-hard, we shall use SAT ≤p SC-3; note that SAT is already known to be in NP-complete by the Cook-Levin Theorem 11.2. Any logical statement S can be converted into a statement S t in CNF-3 in polynomial time by the Tseitin transformation [172]. The Tseitin transformation rules are given in Figure 11.15 (a). The first three relations in the rule table are proven by

Connective Form T-CNF


Negation v ↔ ¬p (¬v ∨ ¬p) ∧ (v ∨ p)
Conjunction v ↔ (p ∧ q) (¬v ∨ p) ∧ (¬v ∨ q) ∧ (v ∨ ¬p ∨ ¬q)
Disjunction v ↔ (p ∨ q) (v ∨ ¬p) ∧ (v ∨ ¬q) ∧ (¬v ∨ p ∨ q)
Implication v ↔ (p → q) (v ∨ p) ∧ (v ∨ ¬q) ∧ (¬v ∨ ¬p ∨ q)
NAND v ↔ (p ↑ q) (v ∨ p) ∧ (v ∨ q) ∧ (¬v ∨ ¬p ∨ ¬q)
NOR v ↔ (p ↓ q) (¬v ∨ ¬p) ∧ (¬v ∨ ¬q) ∧ (v ∨ p ∨ q)
XOR v ↔ (p ⊕ q) (¬v ∨ ¬p ∨ ¬q) ∧ (¬v ∨ p ∨ q) ∧ (v ∨ p ∨ ¬q) ∧ (v ∨ ¬p ∨ q)
biconditional v ↔ (p ↔ q) (¬v ∨ ¬p ∨ q) ∧ (¬v ∨ p ∨ ¬q) ∧ (v ∨ ¬p ∨ ¬q) ∧ (v ∨ p ∨ q)
(a) Table of Tseitin transformation rules.
[An eight-row truth table over p, q, and v confirming that v ↔ (p ∧ q) and its T-CNF agree on every assignment.]
(b) Truth table for v ↔ (p ∧ q) ≡ (¬v ∨ p) ∧ (¬v ∨ q) ∧ (v ∨ ¬p ∨ ¬q).

Figure 11.15: Tseitin transformation

logical equivalencies. The rest of them can be trivially shown and are left for exercises.

v ↔ ¬p ≡ (v → ¬p) ∧ (¬p → v) by eqn (7.4)


≡ (¬v ∨ ¬p) ∧ (v ∨ p) by eqn (7.3) (11.12)
v ↔ (p ∨ q) ≡ (v → (p ∨ q)) ∧ ((p ∨ q) → v) by eqn (7.4)
≡ (¬v ∨ p ∨ q) ∧ (¬(p ∨ q) ∨ v) by eqn (7.3)
≡ (¬v ∨ p ∨ q) ∧ ((¬p ∧ ¬q) ∨ v) by eqn (11.9)
≡ (v ∨ ¬p) ∧ (v ∨ ¬q) ∧ (¬v ∨ p ∨ q) by eqn (11.11) (11.13)
v ↔ (p ∧ q) ≡ (v → (p ∧ q)) ∧ ((p ∧ q) → v) by eqn (7.4)
≡ (¬v ∨ (p ∧ q)) ∧ (¬(p ∧ q) ∨ v) by eqn (7.3)
≡ (¬v ∨ (p ∧ q)) ∧ (¬p ∨ ¬q ∨ v) by eqn (11.8)
≡ (¬v ∨ p) ∧ (¬v ∨ q) ∧ (v ∨ ¬p ∨ ¬q) by eqn (11.11) (11.14)

Rules can be proven by truth tables; the truth table for the rule in eqn (11.14) is provided in Figure 11.15 (b).
S t is in CNF-3 and equi-satisfiable to S. The number of d-clauses in S t is proportional to the number of connective symbols in S, which means that the transformation takes linear time. SC-3 must be as hard as SAT and, thus, SC-3 ∈ NP-hard.
Since SC-3 ∈ NP and SC-3 ∈ NP-hard, SC-3 ∈ NP-complete. 

An algorithm to convert a propositional statement into an equi-satisfiable statement in CNF-3, known as the Tseitin transformation, is stated below as Algorithm 11.4. It is a

combination of Algorithm 11.1 (constructing a combinational circuit), Algorithm 11.3 (converting a combinational circuit to an equi-satisfiable propositional statement), and the table of Tseitin transformation rules in Figure 11.15 (a).
Algorithm 11.4. Tseitin transformation
Convert2CNF3(S)
declare a stack, Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
A =convert inf2post(S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
O = ε and j = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,4
for i = 1 ∼ |A| . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if ai ∈ P = {p1 , p2 , · · · , pn }, push ai to Tstack . . . . . . . . . . .6
if ai = ‘¬’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
j = j + 1 and make a variable gj . . . . . . . . . . . . . . . . . . . . . . . 8,9
p = pop from Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
O = concatenate(O, ‘(¬gj ∨ ¬’, p, ‘) ∧ (gj ∨’, p, ‘)’) . . . 11
push gj to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
if ai ∈ binary operator = {∨, ∧, →, ↑, ↓, ⊕, ↔} . . . . . . . . . . . . . 13
j = j + 1 and make a variable gj . . . . . . . . . . . . . . . . . . . . . 14,15
q = pop from Tstack and p = pop from Tstack . . . . . . . . . 16,17
O = concatenate(O, the T-CNF for gj ↔ (p ai q) from Figure 11.15 (a)) . . 18
push gj to Tstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
return O = concatenate(O, ‘∧’, pop from Tstack) . . . . . . . . . . . 20

Figure 11.11 (b) provides a good illustration of Tseitin transformation Algorithm 11.4.
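A compact Python sketch of the transformation restricted to {¬, ∧, ∨} follows, driven directly by the first three rows of the rule table in Figure 11.15 (a). Literals are represented as strings with a ‘-’ prefix for negation, a representation chosen only for illustration; fresh variables g1, g2, . . . play the role of v.

# Tseitin transformation for a postfix token list over {'not','and','or'}.
# Returns a list of d-clauses (lists of literal strings), ending with the
# unit clause asserting the top-level variable.
def tseitin(postfix, variables):
    clauses, stack, j = [], [], 0
    for tok in postfix:
        if tok in variables:
            stack.append(tok)
            continue
        j += 1
        v = f"g{j}"                                          # fresh variable
        if tok == 'not':
            p = stack.pop()
            clauses += [[f"-{v}", f"-{p}"], [v, p]]          # v <-> not p
        elif tok == 'and':
            q, p = stack.pop(), stack.pop()                  # v <-> (p and q)
            clauses += [[f"-{v}", p], [f"-{v}", q], [v, f"-{p}", f"-{q}"]]
        else:                                                # v <-> (p or q)
            q, p = stack.pop(), stack.pop()
            clauses += [[v, f"-{p}"], [v, f"-{q}"], [f"-{v}", p, q]]
        stack.append(v)
    clauses.append([stack.pop()])                            # assert the root
    return clauses

Each connective contributes a constant number of d-clauses, so the output is linear in the input, matching the argument in Theorem 11.6.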

11.2.3 Satisfiability of CNF


Consider the satisfiability problem where the input statement is in CNF. CNF-SAT is the conventional name, such as in [172], but SCN is used here for simplicity's sake. Showing that SCN is in NP-complete is extremely simple, since SC-3 has already been proven to be in NP-complete.
Theorem 11.7. SCN ∈ NP-complete by SC-3
Proof. SCN ∈ NP by the same reasoning as for SAT and SC-3: Algorithm 7.12 on page 369, which takes linear time, can verify any guessed output.
To show SCN ∈ NP-hard, use SC-3, which is already known to be in NP-complete; i.e., show SC-3 ≤p SCN. Note that SC-3 is a special case of SCN. Consider the following reduction based algorithm to solve SC-3:
SC-3 algo(S c3 ) = SCN algo(S c3 ) (11.15)
SC-3 reduces to SCN in constant time and thus SC-3 ≤p SCN according to eqn (11.15).
SCN is as hard as the hardest problem in NP which is SC-3.
Since SCN ∈ NP and SCN ∈ NP-hard, SCN ∈ NP-complete. 
As a matter of fact, a special case problem ps of a general case problem pg directly reduces to the general case problem pg .
Fact 11.1. If a special case problem ps ∈ NP-hard, then the general case problem pg ∈ NP-hard (and pg ∈ NP-complete if pg ∈ NP as well).
ps ≤p pg ⇔ ps algo(Is ) = pg algo(Is )

If the special case problem is a hardest problem in NP, the general case problem must be as hard as the special case problem. Using Fact 11.1, the following reductions can be derived:

SCN ≤p SAT ⇔ SCN algo(S c ) = SAT algo(S c )        (11.16)

SC-3 ≤p SAT ⇔ SC-3 algo(S c3 ) = SAT algo(S c3 )        (11.17)

Suppose that SCN ∈ NP-complete is known, but SC-3 is not. Showing the reduction relation SCN ≤p SC-3 proves that SC-3 is NP-hard. The difference between the two problems is the upper bound on the d-clause size: there is no upper bound in SCN, while only up to 3 literals are allowed in SC-3.
The resolution rule in propositional logic provides insight for the reduction, SCN ≤p
SC-3.
Lemma 11.1. Resolution rule
Let X and Y be sets of literals. Let ∨(X) be the d-clause where all literals in X are
connected by disjunction.
If l ∈ X and ¬l ∈ Y , ∨(X) ∧ ∨(Y ) → ∨(X ∪ Y − {l, ¬l}).

Proof. Let X′ = X − {l} and Y′ = Y − {¬l}. Then ∨(X) = ∨(X′) ∨ l and ∨(Y ) = ∨(Y′) ∨ ¬l.

(x1 ∨ · · · ∨ xn ∨ l) ∧ (y1 ∨ · · · ∨ ym ∨ ¬l) → (x1 ∨ · · · ∨ xn ∨ y1 ∨ · · · ∨ ym )

i.e., ∨(X) ∧ ∨(Y ) → ∨(X ∪ Y − {l, ¬l}), where the right-hand side is ∨(X′) ∨ ∨(Y′).

A truth table can be constructed as follows:


∨(X′) ∨(Y′) l (∨(X′) ∨ l) ∧ (∨(Y′) ∨ ¬l) → (∨(X′) ∨ ∨(Y′))
T T T T T T T T
T T F T T T T T
T F T T F F T T
T F F T T T T T
F T T T T T T T
F T F F F T T T
F F T T F F T F
F F F F F T T F
∨(X′) ∨ ∨(Y′) = ∨(X ∪ Y − {l, ¬l}). Therefore, ∨(X) ∧ ∨(Y ) → ∨(X ∪ Y − {l, ¬l}). 
According to the resolution Lemma 11.1, l and ¬l in two different d-clauses are removed in the combined d-clause. The reverse resolution rule provides a way of dividing a d-clause whose number of literals is greater than 3.
Theorem 11.8. SCN ≤p SC-3
Proof. There is no upper bound on the number of literals within a d-clause in CNF, while there can be at most three literals in CNF-3. The reverse resolution rule says that a d-clause can be divided into two d-clauses joined by a linking variable: the statement that connects the two d-clauses with a conjunction is equi-satisfiable to the original d-clause, as given in the truth table in Lemma 11.1.

If the number of literals in a d-clause in an SCN instance is less than or equal to three, the d-clause is left untouched. Consider a d-clause ∨(X) = ⋁_{i=1}^{n} xi in the SCN instance whose n = |X| > 3. It should be converted into multiple d-clauses whose numbers of literals are limited to three. The reverse resolution rule can be applied with n − 3 linking variables as follows:

S c3 (X1∼n ) = (x1 ∨ x2 ∨ l1 ) ∧ ⋀_{i=3}^{n−2} (¬li−2 ∨ xi ∨ li−1 ) ∧ (¬ln−3 ∨ xn−1 ∨ xn )        (11.18)

S c3 (X1∼n ) ≡s ∨(X1∼n ), where the symbol ≡s means equi-satisfiable. The number of literals throughout the statement transformed by eqn (11.18) is less than three times the total number of literals in the SCN statement. Clearly, this transformation takes linear time. Hence, SCN ≤p SC-3. 
Since SC-3 is in NP and in NP-hard, it is in NP-complete.
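A Python sketch of the d-clause splitting in eqn (11.18) follows; the fresh linking-variable names and the signed-string literal format are assumptions for illustration.

# Split a d-clause with more than three literals into an equi-satisfiable
# chain of 3-literal d-clauses with fresh linking variables, per eqn (11.18).
def split_clause(lits, fresh):
    n = len(lits)
    if n <= 3:
        return [list(lits)]
    links = [f"{fresh}{i}" for i in range(1, n - 2)]     # n-3 linking variables
    out = [[lits[0], lits[1], links[0]]]
    for i in range(2, n - 2):
        out.append([f"-{links[i - 2]}", lits[i], links[i - 1]])
    out.append([f"-{links[-1]}", lits[-2], lits[-1]])
    return out

# split_clause(['x1','x2','x3','x4','x5'], 'l') yields
# [['x1','x2','l1'], ['-l1','x3','l2'], ['-l2','x4','x5']]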

11.2.4 NAND Gate Only Circuit Satisfiability


Albeit numerous logical connective symbols exist, as given in Table 7.1 on page 364, not all symbols are necessary. Any propositional logical sentence can be expressed using only the three symbols in {¬, ∨, ∧} by eqns (7.3 ∼ 7.7) on page 365. Such a set is called a functionally complete set of logical connectives [56] or an expressively adequate set [160]. {¬, ∨} and {¬, ∧} are also functionally complete sets due to De Morgan's laws in eqns (11.8) and (11.9), respectively.
Functionally complete sets of the smallest size include {↑} and {↓}. The Sheffer stroke, ↑, means the NAND operation.
Theorem 11.9. {↑} is a functionally complete set.
Proof. {¬, ∨, ∧} is functionally complete by eqns (7.3 ∼ 7.7) on page 365. Hence, showing eqns (11.19 ∼ 11.21) suffices to prove the functional completeness of {↑}.

¬p ≡ p↑p (11.19)
p∧q ≡ (p ↑ q) ↑ (p ↑ q) (11.20)
p∨q ≡ (p ↑ p) ↑ (q ↑ q) (11.21)

Truth tables for eqns (11.19 ∼11.21) are given in Figure 11.16. 
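The three equivalences are small enough to check exhaustively; the following Python snippet verifies all four assignments.

# Exhaustively check eqns (11.19)-(11.21): NOT, AND, OR expressed via NAND.
nand = lambda p, q: not (p and q)
for p in (False, True):
    for q in (False, True):
        assert (not p) == nand(p, p)                         # eqn (11.19)
        assert (p and q) == nand(nand(p, q), nand(p, q))     # eqn (11.20)
        assert (p or q) == nand(nand(p, p), nand(q, q))      # eqn (11.21)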
Consider a special satisfiability problem, NAND-SAT in short, where the input statement uses only the NAND (↑) symbol. Suppose one attempts to convert a propositional statement into a NAND-only statement by using eqns (11.19 ∼ 11.21). This naı̈ve transformation may result in an exponential increase in length. For example, to determine whether the statement S = (¬p ∨ q) ∧ (p ∨ r) is satisfiable, S can first be converted into a NAND form:

S ≡ (¬p ∨ q) ∧ (p ∨ r)
≡ ((p ↑ p) ∨ q) ∧ (p ∨ r) by eqn (11.19)
≡ (((p ↑ p) ↑ (p ↑ p)) ↑ (q ↑ q)) ∧ ((p ↑ p) ↑ (r ↑ r)) by eqn (11.21)
≡ ((((p ↑ p) ↑ (p ↑ p)) ↑ (q ↑ q)) ↑ ((p ↑ p) ↑ (r ↑ r)))
↑ ((((p ↑ p) ↑ (p ↑ p)) ↑ (q ↑ q)) ↑ ((p ↑ p) ↑ (r ↑ r))) by eqn (11.20)

p   ¬p   p ↑ p
T   F    F
F   T    T
(a) substituting a NOT gate with a NAND gate

p  q   p ∧ q   p ↑ q   (p ↑ q) ↑ (p ↑ q)
T  T   T       F       T
T  F   F       T       F
F  T   F       T       F
F  F   F       T       F
(b) substituting an AND gate with a combination of NAND gates

p  q   p ∨ q   p ↑ p   q ↑ q   (p ↑ p) ↑ (q ↑ q)
T  T   T       F       F       T
T  F   T       F       T       T
F  T   T       T       F       T
F  F   F       T       T       F
(c) substituting an OR gate with a combination of NAND gates

Figure 11.16: NAGS ≤p CCS

Showing that NAND-SAT is NP-complete is quite tricky. Consider instead the special NAND gate only circuit satisfiability problem, NAGS in short, where all gates in the circuit are NAND gates. Theorem 11.9 provides the NAND gate universality, as depicted in Figure 11.16. Showing NAGS ∈ NP-complete is easy.
Theorem 11.10. NAGS is NP-complete.
Proof. NAGS ∈ NP for the same reason as in Theorem 11.1, SAT ∈ NP.
To show NAGS ∈ NP-hard, consider the combinational circuit satisfiability Problem 11.2, CCS in short, which is known to be NP-complete by Theorems 11.3 and 11.4. Each gate in the CCS instance is expanded by the rules in Figure 11.16; the number of gates in the NAGS instance is at most three times the number of gates in the CCS instance. Hence, CCS ≤p NAGS, and NAGS is as hard as CCS. Since NAGS is in NP and NP-hard, it is NP-complete. 
Figure 11.17 shows an example of transforming a combinational circuit into a NAND gate only circuit.

11.3 NP-complete Set Theory Problems


In this section, several NP-complete problems drawn from set theory are considered. Such problems include the subset sum and set partition problems. Some problems may also belong to optimization problems. They are located in the center blue balloon in Figure 11.37. The set cover problem, SCV in short, shall be dealt with in the NP-complete graph problems section.

11.3.1 Subset Sum Equality Problem


Consider the Subset Sum Equality Problem 6.3, or simply SSE, defined on page 305.
This problem is of great importance, as the NP-completeness of many problems in set theory, scheduling, and optimization can be trivially shown by reduction from SSE, as depicted

[Panel (a) shows a combinational circuit with g1 = p ∧ q, g2 = ¬r, and g3 = g1 ∨ g2 ; panel (b) shows its NAND gate only expansion:]

Gate  L   R   Type         Gate  L   R
g1    p   q   ∧     =⇒     g1    p   q
                           g2    g1  g1
g2    r   -   ¬     =⇒     g3    r   r
g3    g1  g2  ∨     =⇒     g4    g2  g2
                           g5    g3  g3
                           g6    g4  g5

(a) a combinational circuit   (b) NAND gate only circuit

Figure 11.17: Transforming a Combinational circuit to an NAND gate only circuit

in Figure 11.37. Showing that SSE is NP-complete is quite challenging, though. Here, its input transformation part is emphasized; interested readers may see [42, p. 1097] for the complete proof.
Theorem 11.11. SSE is NP-complete.
Proof. SSE is in NP because a guessed solution can be summed up and compared with the target t, which clearly takes polynomial time.
To show the NP-hardness of SSE, show SC-3 ≤p SSE. Note that SC-3 is already known to be NP-complete by Theorem 11.6.
The input statement S c3 for SC-3 is in CNF-3 with m d-clauses over n unique Boolean variables. This input is transformed into a set of integers A and a target t such that the following eqn (11.22) holds:

SC-3(S c3 ) = SSE(A, t) ⇔ SC-3 ≤p SSE (11.22)

The binary number representation helps understand the reduction relationship better,
as given in Figure 11.18. CNF-3 sample statements, which are satisfiable and unsatisfiable,
are given in Figure 11.18 (a) and (b), respectively.
First, the target t can be encoded using the following equation:

t = (2^n − 1) × 2^{3m} + Σ_{i=0}^{m−1} 3 × 2^{3i}

The first n bits of t are set to 1 so that either each variable or its negation is selected exactly once. The remaining 3m bits on the right are set to ‘011’ per d-clause so that at least one selected literal row makes the corresponding d-clause true.
A pseudo code for generating the set of integers based on S c3 is given in Subroutine 11.1. There will be 2n + 2m integers in A. The first 2n elements correspond to each Boolean variable and its negation: for each Boolean variable pi , there are two rows whose first n bits from the left are all 0 except for the ith bit, which is set to 1, as indicated in lines 1 ∼ 3. The remaining bits record whether the respective literal appears in the jth d-clause. If the Boolean variable pi appears as a literal in Cj , the

p1 p2 p3 C1 C2 C3 C4 A1∼2n+2m
p1 = T 1 0 0 001 000 000 001 16897
p1 = F 1 0 0 000 000 000 000 16384
p2 = T 0 1 0 000 000 000 001 8193
p2 = F 0 1 0 001 001 001 000 8776
p3 = T 0 0 1 000 000 000 001 4097
p3 = F 0 0 1 001 001 000 000 4672
C11 0 0 0 001 000 000 000 512
C10 0 0 0 001 000 000 000 512
C21 0 0 0 000 001 000 000 64
C20 0 0 0 000 001 000 000 64
C31 0 0 0 000 000 001 000 8
C30 0 0 0 000 000 001 000 8
C41 0 0 0 000 000 000 001 1
C40 0 0 0 000 000 000 001 1
t 1 1 1 011 011 011 011 30427
S1 = (p1 ∨ ¬p2 ∨ ¬p3 ) ∧ (¬p2 ∨ ¬p3 ) ∧ ¬p2 ∧ (p1 ∨ p2 ∨ p3 ), with d-clauses C1 , C2 , C3 , C4 in order
(a) Satisfiable case S1
p1 p2 C1 C2 C3 C4 A1∼2n+2m
p1 = T 1 0 001 000 001 000 8712
p1 = F 1 0 000 001 000 001 8257
p2 = T 0 1 001 001 000 000 4672
p2 = F 0 1 000 000 001 001 4105
C11 0 0 001 000 000 000 512
C10 0 0 001 000 000 000 512
C21 0 0 000 001 000 000 64
C20 0 0 000 001 000 000 64
C31 0 0 000 000 001 000 8
C30 0 0 000 000 001 000 8
C41 0 0 000 000 000 001 1
C40 0 0 000 000 000 001 1
t 1 1 011 011 011 011 14043
S2 = (p1 ∨ p2 ) ∧ (¬p1 ∨ p2 ) ∧ (p1 ∨ ¬p2 ) ∧ (¬p1 ∨ ¬p2 ), with d-clauses C1 , C2 , C3 , C4 in order
(b) Fallacy case S2

Figure 11.18: SC-3 ≤p SSE illustration.

bits for Cj in the (2i − 1)th row become ‘001.’ If ¬pi appears as a literal in Cj , the bits for Cj in the (2i)th row become ‘001.’ These assignments are stated in lines 4 ∼ 7. Lines 8 ∼ 10 create slack elements. Note that each d-clause contains up to three literals, so a d-clause can be selected at least once but no more than three times. Two slack elements are added for each d-clause.
If a d-clause is selected only once, both slack elements are also selected to meet the target value. If a d-clause is selected twice, one of the slack elements is selected to meet the target value. No slack elements are used if a d-clause is selected three times. Note that the number of literals in a d-clause is limited to three, so it cannot be selected more

than three times.

Subroutine 11.1. Encoding S c3 to a set of integer, A

Encoding(S c3 )
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
a2i−1 = 2^{3m+n−i} . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
a2i = 2^{3m+n−i} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
for j = 1 ∼ m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
for k = 1 ∼ 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
if lj,k = px , a2x−1 = a2x−1 + 2^{3m−3j} . . . . . . 6
if lj,k = ¬px , a2x = a2x + 2^{3m−3j} . . . . . . . . . 7
for i = 1 ∼ m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
a2n+2i−1 = 2^{3m−3i} . . . . . . . . . . . . . . . . . . . . . . . . . . 9
a2n+2i = 2^{3m−3i} . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
return A1∼2n+2m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

The conversion Subroutine 11.1 takes Θ(nm), which is polynomial.


One concern is whether the sum of a different subset may equal t, possibly due to carries. No carry can occur in this setting. For the d-clause columns, each 3-bit block can receive at most 3 + 2 = 5 = ‘101’, which never carries into the next block. For the variable columns, a carry could only occur if both a variable and its negation were selected, and even then the carry would violate the target, whose bit must be 1 while the sum of the two ones leaves a 0 behind: 1 + 1 = 10 in the binary number system.
If there exists a subset of A whose sum is equal to t, then S c3 is satisfiable. 
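A Python rendering of Subroutine 11.1 follows; d-clauses are represented as lists of signed variable indices, an assumed input format. On the statement S1 of Figure 11.18 (a) it reproduces the integers and the target shown there.

# Encode a CNF-3 instance as a Subset Sum Equality instance (Subroutine 11.1).
# clauses: list of d-clauses, each a list of signed variable indices,
# e.g. (p1 or not p2 or not p3) -> [1, -2, -3]; n = number of variables.
def encode_sc3(clauses, n):
    m = len(clauses)
    A = [0] * (2 * n + 2 * m)
    for i in range(1, n + 1):                      # variable / negation rows
        A[2*i - 2] = A[2*i - 1] = 2 ** (3*m + n - i)
    for j, clause in enumerate(clauses, start=1):  # literal occurrence bits
        for lit in clause:
            x = abs(lit)
            row = 2*x - 2 if lit > 0 else 2*x - 1
            A[row] += 2 ** (3*m - 3*j)
    for i in range(1, m + 1):                      # two slack rows per d-clause
        A[2*n + 2*i - 2] = A[2*n + 2*i - 1] = 2 ** (3*m - 3*i)
    t = (2**n - 1) * 2**(3*m) + sum(3 * 2**(3*i) for i in range(m))
    return A, t

# encode_sc3([[1, -2, -3], [-2, -3], [-2], [1, 2, 3]], 3) yields t = 30427,
# matching Figure 11.18 (a).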

One may wonder why the computational time complexity of the 2-dimensional strong inductive programming, or simply dynamic programming, Algorithm 6.11 stated on page 306 for SSE was Θ(kn), which seems to be polynomial. Such algorithms are called pseudo polynomial time algorithms: the numbers of rows and columns were k and n in the analysis of Algorithm 6.11, and in the worst case the target k can be as large as 2^n , which is exponential in the length of the input.

11.3.2 Unbounded Subset Sum Problem


Consider the unbounded subset sum equality Problem 5.3, or simply USSE, defined on page 225. ‘USSE ∈ NP-complete’ was proven in [117, 3]. If the input sequence is A = {3, 4, 5, 7} and m = 10, SSE(A, m) would select 3 and 7, but USSE(A, m) could select not only 3 and 7 but also 5 and 5: an element can be selected multiple times. Since a purported answer for USSE can be verified in O(n), USSE is clearly in NP.
To show that USSE ∈ NP-hard, consider the subset sum equality Problem 6.3, or simply SSE, which has already been shown to be in NP-complete. If the input list for SSE is transformed in such a way that each element can be selected only once, the result of USSE is the same as that of SSE. Each element is encoded in the binary number system, as depicted in Figure 11.19. The most significant digits guarantee that each element is selected exactly once, and these digits must not be disturbed by carries during addition; hence, the scale s of these digits is the smallest power of two exceeding n × max(A). Each element ai is encoded as a′2i−1 = s × (2^n + 2^{i−1} ) + ai , and one more element, a′2i = s × (2^n + 2^{i−1} ), is added so that ai is not

A A0 in binary A0 USSE(A0 , m0 ) %s SSE(A, m)


3 1 0001 00011 547 ⇒ 547 3 ⇒3
1 0001 00000 544
4 1 0010 00100 580
1 0010 00000 576 ⇒ 576 0
5 1 0100 00101 645
1 0100 00000 640 ⇒ 640 0
7 1 1000 00111 775 ⇒ 263 7 ⇒7
1 1000 00000 768
m= 10 m0 = 100 1111 01010 2538 2538 10 10

Figure 11.19: SSE ≤p USSE reduction relation illustration.

selected if a′2i is selected. The new target value is ((n + 1)2^n − 1)s + m.

SSE(A1∼n , m) = USSE(A′1∼2n , ((n + 1)2^n − 1)s + m) ⇔ SSE ≤p USSE        (11.23)

where a′2i−1 = s × (2^n + 2^{i−1} ) + ai , a′2i = s × (2^n + 2^{i−1} ), and s = 2^{⌈log2 (n(max(A)+1))⌉} .
If the desired output is the subset itself rather than the (T or F) output, the output of USSE(A′ , ((n + 1)2^n − 1)s + m) must be transformed. An algorithm for USSE either outputs exactly a multi-subset of size n or ‘F’ if it is impossible to reach the target value m′ . First, take the ‘mod s’ operation on each output element; the resulting value a′i % s is either a⌈i/2⌉ or 0. The final output for SSE consists of the non-zero elements, as illustrated in Figure 11.19.
USSE ∈ NP-complete because USSE ∈ NP and USSE ∈ NP-hard.
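A Python sketch of the input transformation in eqn (11.23) follows; on A = [3, 4, 5, 7] and m = 10 it reproduces the numbers in Figure 11.19.

from math import ceil, log2

# Transform an SSE instance (A, m) into a USSE instance per eqn (11.23).
def sse_to_usse(A, m):
    n = len(A)
    s = 2 ** ceil(log2(n * (max(A) + 1)))       # digit scale; no carries possible
    Ap = []
    for i, a in enumerate(A, start=1):
        Ap.append(s * (2**n + 2**(i - 1)) + a)  # 'take a_i' element
        Ap.append(s * (2**n + 2**(i - 1)))      # 'skip a_i' element
    mp = ((n + 1) * 2**n - 1) * s + m           # new target
    return Ap, mp

# sse_to_usse([3, 4, 5, 7], 10) yields
# Ap = [547, 544, 580, 576, 645, 640, 775, 768] and mp = 2538,
# exactly the values in Figure 11.19.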

11.3.3 Set Partition Problem

[Each panel depicts a multiset of numbers and an attempted split into two parts of equal sum; in the T case the split succeeds, while in the F case no split balances.]

(a) STP(S) = T case (b) STP(S) = F case



(c) SSE(A, (m = 34)) = T case (d) SSE(A, (m = 38)) = F case

Figure 11.20: SSE ≤p STP illustration.

Consider the Set Partition problem or STP in short, which takes a multi-set S of numbers
and determines whether the numbers can be partitioned into two multi-subsets S1 and S2

such that their sums are equal. Partition means that S1 ∪ S2 = S and |S1 | + |S2 | = |S|. A sample set of numbers that can be partitioned into two parts of equal sum is given in Figure 11.20 (a), and a sample set of numbers that cannot be so partitioned is given in Figure 11.20 (b). STP is an NP-complete problem introduced in [106] and formally defined as follows:

Problem 11.6. Set Partition problem


Input: A multi-set S of n numbers
Output: yes if ∃S1 , S2 such that Σ_{x∈S1} x = Σ_{x∈S2} x, S1 ∪ S2 = S, and |S1 | + |S2 | = n; no otherwise

It takes linear time to check Σ_{x∈S1} x = Σ_{x∈S2} x. Checking that S1 and S2 form a partition of S, i.e., S1 ∪ S2 = S and |S1 | + |S2 | = |S|, also takes linear time. Hence, STP ∈ NP.

Theorem 11.12. STP ∈ NP-hard.

Proof. Consider the Subset Sum Equality Problem 6.3, or simply SSE, which has already been proven to be in NP-complete in the previous subsection. If we show SSE ≤p STP, then STP ∈ NP-hard.
Recall that SSE takes a set A of numbers and a target m, and outputs ‘yes’ if there exists a subset A′ ⊆ A such that Σ_{x∈A′} x = m. Let s = Σ_{x∈A} x, and let S be the set A with one more element: S = A ∪ {s − 2m}.

SSE(A, m) = STP(A ∪ {s − 2m}) ⇔ SSE ≤p STP        (11.24)

The total sum of S is Σ_{x∈S} x = Σ_{x∈A} x + (s − 2m) = 2s − 2m = 2(s − m). If STP(S) = T, then S can be partitioned into two parts whose sums are each s − m, and one of the parts must contain the element s − 2m. Let S′ be the part that contains s − 2m, and let A′ = S′ − {s − 2m}. Then Σ_{x∈A′} x = Σ_{x∈S′} x − (s − 2m) = (s − m) − (s − 2m) = m. Therefore, SSE ≤p STP. 
x∈A0 x∈S 0

As depicted in Figure 11.20 (c) to (a), SSE(A, m) = T if STP(A ∪ {s − 2m}) = T. As depicted in Figure 11.20 (d) to (b), SSE(A, m) = F if STP(A ∪ {s − 2m}) = F. STP ∈ NP-complete because STP ∈ NP and STP ∈ NP-hard.
STP is a special case of SSE and, thus, STP ≤p SSE by eqn (11.25):
STP(S) = SSE(S, (Σ_{x∈S} x)/2) ⇔ STP ≤p SSE        (11.25)

11.4 NP-hard Optimization Problems


In this section, the hardness of popular optimization problems, such as the subset sum maximization, subset sum minimization, and 01-knapsack problems, is determined. Determining the hardness of other classical optimization and subset arithmetic problems listed in Tables 4.3 and 4.4 is left for exercises.

11.4.1 Subset Sum Maximization Problem


Consider the subset sum maximization problem, or simply SSM, which appeared as an exercise in Q 4.9 on page 202. Whether SSM is in NP is questionable: exactly the same anomaly as in the px ∉ P discussion applies to SSM ∉ NP.
Consider the decision version of SSM, or SSMdv in short, which is defined as follows:
Problem 11.7. Subset sum maximization decision version problem
Input: A set S of n numbers, a target number m, and a lower bound l
Output: S′ if ∃S′ ⊆ S such that l ≤ Σ_{i=1}^{|S′|} s′i ≤ m; False otherwise

A purported answer for SSMdv can be verified in polynomial time, Θ(|S′ |). Hence, SSMdv ∈ NP.
To show SSMdv ∈ NP-hard, consider the Subset Sum Equality Problem 6.3, or simply SSE, which has already been proven to be in NP-complete in the previous section. If we set the lower bound to be the same as the target number m, SSMdv returns true if and only if there exists a subset whose sum is exactly m. Since SSE reduces to SSMdv as in eqn (11.26), SSMdv is in NP-hard and consequently in NP-complete.

SSE(A, m) = SSMdv (A, m, m) ⇔ SSE ≤p SSMdv (11.26)


SSE(A, m) = T if SSM(A, m) = m; F otherwise ⇔ SSE ≤p SSM        (11.27)

SSMdv (A, m, l) = T if l ≤ SSM(A, m) ≤ m; F otherwise ⇔ SSMdv ≤p SSM        (11.28)

The computational hardness of SSM can be shown in many ways: it can be reduced from SSE or from SSMdv , as given in eqns (11.27) and (11.28), respectively. Clearly, SSM ∈ NP-hard; SSM is as hard as SSE and SSMdv , which are among the hardest problems in NP.

11.4.2 Subset Sum Minimization Problem


Consider the subset sum minimization problem, or simply SSmin, which appeared as an exercise in Q 4.8 on page 202. The decision version of SSmin is defined as follows:
Problem 11.8. Subset sum minimization decision version problem
Input: A set S of n numbers, a target number m, and an upper bound u
Output: S′ if ∃S′ ⊆ S such that m ≤ Σ_{i=1}^{|S′|} s′i ≤ u; False otherwise

A purported answer for this decision version of SSmin, or SSmindv in short, can be verified in polynomial time, Θ(|S′ |). Hence, SSmindv ∈ NP.
To show SSmindv ∈ NP-hard, consider the Subset Sum Equality Problem 6.3, or simply SSE, which has already been proven to be in NP-complete in the previous section. If we set the upper bound to be the same as the target number m, SSmindv returns true if and only if

there exists a subset whose sum is exactly m. Since SSE reduces to SSmindv as in eqn (11.29),
SSmindv is in NP-hard and consequently in NP-complete.

SSE(A, m) = SSmindv (A, m, m) ⇔ SSE ≤p SSmindv (11.29)


SSE(A, m) = T if SSmin(A, m) = m; F otherwise ⇔ SSE ≤p SSmin        (11.30)

SSmindv (A, m, u) = T if m ≤ SSmin(A, m) ≤ u; F otherwise ⇔ SSmindv ≤p SSmin        (11.31)

SSmin can be reduced from SSE or from SSmindv , as given in eqns (11.30) and (11.31), respectively. Clearly, SSmin ∈ NP-hard; SSmin is as hard as SSE and SSmindv , which are among the hardest problems in NP.
The computational hardness of SSmin and SSmindv can also be shown by reducing from SSM and SSMdv , respectively. SSM and SSmin are dual problems, SSM ≡p SSmin, as given in eqns (11.32) and (11.33).
SSM(A, m) = Σ_{x∈A} x − SSmin(A, Σ_{x∈A} x − m) ⇔ SSM ≤p SSmin        (11.32)

SSmin(A, m) = Σ_{x∈A} x − SSM(A, Σ_{x∈A} x − m) ⇔ SSmin ≤p SSM        (11.33)

(a) Check-in bag   (b) n = 7 items of total 94 lbs: 17, 16, 5, 4, 18, 22, 12   (c) Carry-on bag

Figure 11.21: Check-in and carry-on baggage Theorem 11.13

An intuitively appealing way to explain the dual problem relation SSM ≡p SSmin is the check-in and carry-on baggage Theorem 11.13, which is depicted in Figure 11.21. Suppose one must pack n = 7 items represented by their weights, A1∼7 = ⟨17, 16, 5, 4, 18, 22, 12⟩, partitioned into a check-in bag and a carry-on bag. There is a maximum check-in bag weight limit of m = 50 lbs. One would like to maximize the weight of the check-in bag while minimizing that of the carry-on bag. The total weight is Σ_{x∈A} x = 94, so at least Σ_{x∈A} x − m = 94 − 50 = 44 lbs must go into the carry-on bag.
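A brute-force Python check of the duality on this baggage example follows; the exponential enumeration is for illustration only.

from itertools import combinations

# SSM: largest subset sum not exceeding m; SSmin: smallest subset sum >= m.
def ssm(A, m):
    return max(sum(c) for r in range(len(A) + 1)
               for c in combinations(A, r) if sum(c) <= m)

def ssmin(A, m):
    return min(sum(c) for r in range(len(A) + 1)
               for c in combinations(A, r) if sum(c) >= m)

A = [17, 16, 5, 4, 18, 22, 12]                  # total 94, check-in limit 50
assert ssmin(A, 94 - 50) == 44                  # carry-on carries at least 44 lbs
assert ssm(A, 50) == 94 - ssmin(A, 94 - 50)     # eqn (11.32): both sides are 50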

Theorem 11.13. Check-in and carry-on baggage theorem: SSM ≡p SSmin.

Proof. Hint: a full proof is similar to that of Theorem S-10.1 on page S-413. The core is an exchange argument: let X be a minimum solution on the SSmin side and X̄ be a maximum solution on the SSM side of the same instance. If a smaller solution X′ existed, it would contain an element x ∉ X replacing some y ∈ X with y < x; the same replacement would apply to X̄, contradicting the maximality of X̄. 

The decision versions of SSM and SSmin are also dual problems, SSMdv ≡p SSmindv , by eqns (11.34) and (11.35).

SSMdv (A, m, l) = SSmindv (A, Σ_{x∈A} x − m, Σ_{x∈A} x − l) ⇔ SSMdv ≤p SSmindv        (11.34)

SSmindv (A, m, u) = SSMdv (A, Σ_{x∈A} x − m, Σ_{x∈A} x − u) ⇔ SSmindv ≤p SSMdv        (11.35)

Instead of the complementing technique in the check-in and carry-on baggage Theorem 11.13, the negating technique depicted in Figure 10.48 on page 607 can be used to show their reduction relations. Let A′ be the negated list of A, where a′i = −ai .

SSM(A, m) = −SSmin(A′ , −m) ⇔ SSM ≤p SSmin        (11.36)

SSmin(A, m) = −SSM(A′ , −m) ⇔ SSmin ≤p SSM        (11.37)

SSMdv (A, m, l) = SSmindv (A′ , −m, −l) ⇔ SSMdv ≤p SSmindv        (11.38)

SSmindv (A, m, u) = SSMdv (A′ , −m, −u) ⇔ SSmindv ≤p SSMdv        (11.39)

Alternatively, SSMdv ≡p SSmindv can be shown simply by their definitions.

\[ \mathrm{SSM}^{dv}(A, m, l) = \mathrm{SSmin}^{dv}(A, l, m) \;\Leftrightarrow\; \mathrm{SSM}^{dv} \le_p \mathrm{SSmin}^{dv} \tag{11.40} \]
\[ \mathrm{SSmin}^{dv}(A, m, u) = \mathrm{SSM}^{dv}(A, u, m) \;\Leftrightarrow\; \mathrm{SSmin}^{dv} \le_p \mathrm{SSM}^{dv} \tag{11.41} \]

11.4.3 01-Knapsack Problem


Another famous NP-hard problem is the 01-knapsack Problem 4.4, or simply ZOK, defined on page 163. To show ZOK ∈ NP-hard, consider the subset sum maximization problem, or simply SSM, which appeared as an exercise in Q 4.9 on page 202. SSMdv is its decision version, Problem 11.7 defined on page 661. Theorem 11.14 utilizes SSMdv ∈ NP-complete and then shows SSMdv ≤p ZOK. Since the subset sum maximization problem can be viewed as a special case of the 0-1 knapsack problem, the reduction relation SSM ≤p ZOK can be trivially shown.

Theorem 11.14. The 0-1 knapsack problem is in NP-hard. ZOK ∈ NP-hard

Proof. To show SSM ≤p ZOK, let pi = wi = ai; then ZOK becomes SSM.

\[
\left(\begin{array}{ll}
\text{maximize} & \sum_{i=1}^{n} p_i x_i \\
\text{subject to} & \sum_{i=1}^{n} w_i x_i \le m \\
\text{where} & x_i = 0 \text{ or } 1
\end{array}\right)
=
\left(\begin{array}{ll}
\text{maximize} & \sum_{i=1}^{n} a_i x_i \\
\text{subject to} & \sum_{i=1}^{n} a_i x_i \le m \\
\text{where} & x_i = 0 \text{ or } 1
\end{array}\right)
\]

Hence, the following reduction relationship can be derived.

\[ \mathrm{SSM}(A, m) = \mathrm{ZOK}((A, A), m) \;\Leftrightarrow\; \mathrm{SSM} \le_p \mathrm{ZOK} \tag{11.42} \]

This input transformation clearly takes polynomial time and thus SSM ≤p ZOK. SSMdv ≤p SSM by eqn (11.28), where SSMdv ∈ NP-complete. SSMdv ≤p ZOK by the Transitivity of Reduction Theorem 10.2. Hence ZOK must be at least as hard as SSMdv, which is one of the hardest problems in NP, and thus ZOK ∈ NP-hard. □
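The input transformation in eqn (11.42) can be made concrete with a short sketch (the brute-force knapsack below is exponential and the names are ours; it merely illustrates that setting pi = wi = ai turns a ZOK solver into an SSM solver).

from itertools import combinations

def brute_zok(P, W, m):
    # Brute-force 0-1 knapsack: maximum total profit with total weight <= m.
    n, best = len(P), 0
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            if sum(W[i] for i in idx) <= m:
                best = max(best, sum(P[i] for i in idx))
    return best

def ssm_via_zok(A, m):
    # SSM(A, m) = ZOK((A, A), m): duplicate A as both profits and weights.
    return brute_zok(A, A, m)

print(ssm_via_zok([17, 16, 5, 4, 18, 22, 12], 50))  # 50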

The decision version of the 0-1 knapsack problem, or simply ZOKdv, is in NP and, as Theorem 11.15 shows, in NP-complete.

Theorem 11.15. The decision version of the 0-1 knapsack problem, ZOKdv ∈ NP-complete.

Proof. Consider the following decision version of ZOK, which is to select items whose sum
of profits is at least l.

Problem 11.9. Decision version of 0-1 knapsack problem
Input: A set A = (P, W) of n number pairs (pi, wi), the maximum capacity m, and the profit lower bound, l
Output: A′ if ∃A′ ⊆ A such that $l \le \sum_{i=1}^{|A'|} p'_i$ and $\sum_{i=1}^{|A'|} w'_i \le m$; False otherwise

A purported answer for ZOKdv can be verified in polynomial time, i.e., checking $\sum_{i=1}^{|A'|} w'_i \le m$ and $\sum_{i=1}^{|A'|} p'_i \ge l$ takes Θ(|A′|). Hence, ZOKdv ∈ NP.
i=1
Since SSE ∈ NP-complete and SSE ≤p ZOKdv by eqn (11.43), ZOKdv ∈ NP-hard.

\[ \mathrm{SSE}(A, m) = \mathrm{ZOK}^{dv}((A, A), m, m) \;\Leftrightarrow\; \mathrm{SSE} \le_p \mathrm{ZOK}^{dv} \tag{11.43} \]

Since ZOKdv ∈ NP and ZOKdv ∈ NP-hard, ZOKdv ∈ NP-complete. □

11.5 NP-complete Scheduling Problems


In this section, several NP-complete problems drawn from the scheduling field are considered. Such problems include the bin packing and multiprocessor scheduling problems. They are located in the upper left orange colored balloon in Figure 11.37.

11.5.1 Bin Packing Problem


Consider the Bin Packing Problem 4.11, or BPP in short, defined on page 175. To show BPP ∈ NP-hard, consider the Set Partition Problem 11.6, or simply STP, which is already proven to be one of the hardest problems in NP, i.e., NP-complete, in the previous subsection. If the set of numbers, A, is set-partitionable, i.e., STP(A) = T, then A is 2 bin-packable, i.e., BPP(A, m) = 2, where the bin capacity is half the total sum: $m = \frac{1}{2}\sum_{x \in A} x$.

Figure 11.22: Reduction from Set Partition to Bin Packing. (a) BPP(A, $\frac{1}{2}\sum_{x \in A} x$) = 2 → STP(A) = T: the items ⟨35, 15⟩ and ⟨21, 16, 5, 5, 3⟩ fill two bins of size m = 50. (b) BPP(A, $\frac{1}{2}\sum_{x \in A} x$) > 2 → STP(A) = F: the items ⟨40, 12, 12, 12, 12, 12⟩ need three bins of size m = 50.

The 'STP ≤p BPP' reduction relation is given in eqn (11.44).

\[ \mathrm{STP}(A) = \begin{cases} \mathrm{T} & \text{if } \mathrm{BPP}\big(A, \tfrac{1}{2}\sum_{x \in A} x\big) = 2 \\ \mathrm{F} & \text{otherwise} \end{cases} \;\Leftrightarrow\; \mathrm{STP} \le_p \mathrm{BPP} \tag{11.44} \]

Set-partitionable and non-set-partitionable cases are given in Figure 11.22 (a) and (b), respectively; they are 2 and 3 bin-packable, correspondingly. Clearly, BPP ∈ NP-hard. Whether BPP is in NP is questionable, but the following decision version, the at-most-k-bins packing problem, is clearly in NP.
Problem 11.10. Bin packing decision version
Input: A list A of n various-length items, m, the uniform bin capacity, and a maximum number of allowed bins, k
Output: T if ∃B such that $\sum_{j=1}^{n} u(B_j) \le k$; F otherwise
subject to ∀i ∈ {1 ∼ n} ∃!j ∈ {1 ∼ n}, ai ∈ Bj ∧ ∀j ∈ {1 ∼ n}, $\sum_{i=1}^{n} a_i b_{j,i} \le m$
where bj,i = 0 or 1 and u(Bj) = 0 if Bj = ∅, and 1 otherwise
BPPdv is in NP-complete because BPPdv ∈ NP and STP ≤p BPPdv by eqn (11.45).

\[ \mathrm{STP}(A) = \mathrm{BPP}^{dv}\Big(A, \tfrac{1}{2}\sum_{x \in A} x, \; k = 2\Big) \;\Leftrightarrow\; \mathrm{STP} \le_p \mathrm{BPP}^{dv} \tag{11.45} \]
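Eqn (11.45) can be demonstrated with a backtracking sketch (exponential time; the function names are ours): a set is partitionable exactly when its items fit into two bins of capacity half the total sum.

def bpp_dv(A, m, k):
    # Decision: can the items in A be packed into k bins of capacity m?
    bins = [0] * k
    items = sorted(A, reverse=True)      # large items first prunes faster

    def place(i):
        if i == len(items):
            return True
        for j in range(k):
            if bins[j] + items[i] <= m:
                bins[j] += items[i]
                if place(i + 1):
                    return True
                bins[j] -= items[i]
            if bins[j] == 0:             # empty bins are symmetric
                break
        return False

    return place(0)

def stp(A):
    # STP(A) = BPPdv(A, sum(A)/2, k = 2), eqn (11.45).
    total = sum(A)
    return total % 2 == 0 and bpp_dv(A, total // 2, 2)

print(stp([21, 16, 5, 5, 3, 35, 15]))   # True, as in Figure 11.22 (a)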

11.5.2 Multiprocessor Scheduling Problem

Figure 11.23: BPPdv ≡p MPSdv. (a) BPPdv(T, (m = 10), (k = 3)) = T and MPSdv(T, (k = 3), (m = 10)) = T case; (b) BPPdv(T, (m = 10), (k = 3)) = F and MPSdv(T, (k = 3), (m = 10)) = F case. Packing items into k bins of capacity m corresponds to scheduling tasks on k processors within makespan m.

Consider the Multiprocessor Scheduling Problem 4.10, or simply MPS, defined on page 174.
It is an NP-complete problem [66, p.238]. The following decision version of MPS is clearly
in NP:

Problem 11.11. Multiprocessor scheduling decision version
Input: T = {t1, · · · , tn} (a list of n tasks represented by their running times), k (the number of processors), and m (the upper bound for makespan)
Output: T if ∃P such that $\max_{i \in \{1 \sim k\}} \sum_{j=1}^{n} p_{i,j} t_j \le m$; F otherwise
subject to ∀j ∈ {1 ∼ n}, $\sum_{i=1}^{k} p_{i,j} = 1$, where pi,j = 0 or 1
i=1

To show MPSdv ∈ NP-hard, consider the Bin Packing decision version Problem 11.10, or BPPdv in short, which is already proven to be one of the hardest problems in NP, i.e., NP-complete, in the previous subsection. They are dual problems, as depicted in Figure 11.23.

\[ \mathrm{BPP}^{dv}(A, m, k) = \mathrm{MPS}^{dv}(A, k, m) \;\Leftrightarrow\; \mathrm{BPP}^{dv} \le_p \mathrm{MPS}^{dv} \tag{11.46} \]
\[ \mathrm{MPS}^{dv}(T, k, m) = \mathrm{BPP}^{dv}(T, m, k) \;\Leftrightarrow\; \mathrm{MPS}^{dv} \le_p \mathrm{BPP}^{dv} \tag{11.47} \]

MPSdv ∈ NP-complete because it is in NP and in NP-hard.


MPS, the original optimization version, is in NP-hard as well by eqn (11.48).

\[ \mathrm{MPS}^{dv}(T, k, m) = \begin{cases} \mathrm{T} & \text{if } \mathrm{MPS}(T, k) \le m \\ \mathrm{F} & \text{otherwise} \end{cases} \;\Leftrightarrow\; \mathrm{MPS}^{dv} \le_p \mathrm{MPS} \tag{11.48} \]

Clearly, MPS ∈ NP-hard; MPS is as hard as MPSdv , which is one of the hardest problems
in NP.
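Because the two decision problems differ only in which argument plays the capacity and which plays the count, the reductions in eqns (11.46) and (11.47) amount to an argument swap. A sketch, reusing the hypothetical bpp_dv backtracking function from the bin packing example above:

def mps_dv(T, k, m):
    # k processors with makespan bound m = k bins of capacity m, eqn (11.47).
    return bpp_dv(T, m, k)

# Figure 11.23 (a): seven tasks fit on 3 processors within makespan 10.
print(mps_dv([7, 3, 5, 3, 2, 6, 3], 3, 10))   # True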

11.6 NP-hard Graph Problems


In this section, several NP-hard problems that take a graph G = (V, E) as an input are considered. The number of vertices is denoted as n = |V|. Such problems include Clique, Independent set, Vertex cover, Hamiltonian path and cycle, and Traveling salesman related problems. They are located in the upper right green colored balloon in Figure 11.37. The set cover problem is also introduced to bridge graph and set theories.

11.6.1 Clique Problem

Figure 11.24: Complete graphs. (a) complete graphs K1 ∼ K8; (b) ∼ (d) the adjacency matrices of K3, K4, and K5, in which all entries are 1's except for the cells on the main diagonal.

A graph G is called a complete graph if there exists an edge for every pair of vertices in G; all vertices are adjacent to each other. A complete graph on n vertices is denoted by Kn, and there are n(n − 1)/2 edges in Kn. K1 ∼ K8 are shown in Figure 11.24 (a), and Figures 11.24 (b) ∼ (d) show the adjacency matrices for K3 ∼ K5, respectively, where all entries are 1's except for cells on the main diagonals.
A clique is a subset of vertices such that they form a complete graph. The term "clique" and the problem were first introduced and considered in the social sciences [116]. Let k denote the cardinality of a subset of vertices that forms a clique of size k, Kk. (k = 3) cliques include {{v3, v4, v8}, {v2, v4, v8}, {v2, v6, v8}, {v4, v5, v8}, · · · } and one of them is highlighted in Figure 11.25 (a). (k = 4) cliques include {{v2, v3, v4, v8}, {v2, v4, v6, v8}, {v2, v6, v8, v9},

Figure 11.25: Clique. (a) (k = 3) clique {v3, v4, v8}; (b) (k = 4) clique {v2, v3, v4, v8}; (c) (k = 5) clique {v2, v4, v6, v8, v9}.



{v4, v6, v8, v9}, · · · } and one of them is shown in Figure 11.25 (b). Figure 11.25 (c) shows a (k = 5) clique, {v2, v4, v6, v8, v9}, which is maximum. The problem of finding a maximum clique is formally defined as follows:
Problem 11.12. Maximum Clique(G)
Input: a graph G = (V, E)
Output: maximize k = |C| where C ⊆ V such that ∀u, v ∈ C, (u, v) ∈ E.
Problem 11.12 is conventionally called the clique problem, as in [42], but here the abbreviation CLQ shall be used for the sake of simplicity. No polynomial time deterministic algorithm for CLQ is known yet. Albeit there are many ways to show its NP-hardness, one is given using the SCN (Satisfiability of CNF) problem, which is already known to be in NP-complete, as provided in [42, p 1087-1089]. Showing (SCN ≤p CLQ) proves that CLQ is as hard as SCN, one of the known hardest problems in NP, i.e., CLQ ∈ NP-hard.
Theorem 11.16. CLQ is NP-hard. (SCN ≤p CLQ)
Proof. SCN, the satisfiability of CNF problem, which is already known to be in NP-complete, takes a propositional statement $S^c_k$ in CNF as an input. Suppose that $S^c_k$ consists of k d-clauses connected by conjunction. Then one can construct a graph whose vertices are the literals in each clause; the jth literal in the ith d-clause is a vertex, vi,j, as exemplified in Figure 11.26. Connect all pairs of vertices that are in different d-clauses, except those whose literals are negations of each other. For example in Figure 11.26 (a), v1,1 is connected to all other vertices in different d-clauses except for v2,1, as v1,1 = ¬v2,1. Note that no two vertices in the same d-clause are connected. Such a graph G can be constructed clearly in polynomial time.

\[ \mathrm{SCN}(S^c_k) = \begin{cases} \mathrm{T} & \text{if } \mathrm{CLQ}(G = (V, E)) = k \\ \mathrm{F} & \text{otherwise} \end{cases} \;\Leftrightarrow\; \mathrm{SCN} \le_p \mathrm{CLQ} \tag{11.49} \]

where V = {vx,y | x ∈ {1 ∼ k} ∧ y ∈ {1 ∼ |Dx|}} and E = {(vx,y, vw,z) | x ≠ w ∧ lx,y ≠ ¬lw,z}.
Suppose that there exists a certain algorithm for CLQ and it returns a clique of size k, Ck, on the constructed graph G. Then one assigns a truth value to each vertex in Ck: if the vertex's literal is simply a boolean variable, px, assign px = T, and if the vertex's literal is a negation of a boolean variable, ¬py, assign py = F. This assignment makes each respective d-clause true and thus the entire statement is satisfiable. A satisfying assignment for SCN can be found if CLQ can be solved.
If an algorithm for CLQ returns a clique of size k′ < k, Ck′, on the constructed graph G, the statement S is unsatisfiable. A fallacy statement, S2, is shown in Figure 11.26 (b). Therefore, CLQ is NP-hard as SCN ≤p CLQ. □
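The graph construction in the proof is short enough to sketch (illustrative code with our own conventions: a clause is a list of integers where −x denotes ¬px, and the clique search is brute force, hence exponential).

from itertools import combinations

def scn_to_clq_graph(clauses):
    # Vertices are (clause index, literal) pairs; edges join vertices in
    # different clauses whose literals are not negations of each other.
    V = [(i, lit) for i, c in enumerate(clauses) for lit in c]
    E = {(a, b) for a, b in combinations(V, 2)
         if a[0] != b[0] and a[1] != -b[1]}
    return V, E

def has_clique(V, E, k):
    # Brute-force test for a clique of size k.
    return any(all((a, b) in E or (b, a) in E for a, b in combinations(c, 2))
               for c in combinations(V, k))

# S1 = (p1 v p2)(~p1 v p2 v p3)(~p3)(p1 v ~p2 v ~p3) of Figure 11.26 (a)
S1 = [[1, 2], [-1, 2, 3], [-3], [1, -2, -3]]
V, E = scn_to_clq_graph(S1)
print(has_clique(V, E, len(S1)))   # True, so S1 is satisfiable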
Consider the decision version of the maximum clique problem, or simply CLQdv, which asks whether the input graph G contains a clique of size at least k.
Problem 11.13. k-Clique(G, k)
Input: a graph G = (V, E) and a positive integer k ≤ n
Output: yes if |C| ≥ k such that C ⊆ V ∧ ∀u, v ∈ C, (u, v) ∈ E; no otherwise

Figure 11.26: SCN ≤p CLQ. (a) S1 = (p1 ∨ p2) ∧ (¬p1 ∨ p2 ∨ p3) ∧ (¬p3) ∧ (p1 ∨ ¬p2 ∨ ¬p3) with d-clauses c1 ∼ c4: a (k = 4 = kc) clique exists, so S1 is satisfiable, e.g., (p1 = T, p2 = T, p3 = F). (b) S2 = (¬p2 ∨ p3) ∧ (¬p1 ∨ ¬p2) ∧ (¬p3) ∧ (p2 ∨ p3): only a (k = 3 ≠ kc) clique exists, so S2 is unsatisfiable, a fallacy.

CLQdv is clearly in NP as one can verify any guessed answer in polynomial time. Eqn (11.49) can be replaced with eqn (11.50) to prove that CLQdv ∈ NP-hard.

\[ \mathrm{SCN}(S^c_k) = \mathrm{CLQ}^{dv}(G = (V, E), k) \;\Leftrightarrow\; \mathrm{SCN} \le_p \mathrm{CLQ}^{dv} \tag{11.50} \]

Since CLQdv ∈ NP and NP-hard, CLQdv ∈ NP-complete.


Theorem 11.16 (SCN ≤p CLQ) is a remarkable one, as it links NP-hard logic and graph problems. Once one NP-complete graph problem is proven, the rest of the graph-related NP-hard problems can be proven relatively easily.

11.6.2 Independent Set

Figure 11.27: Independent set. (a) (k = 2) independent set; (b) (k = 3) independent set; (c) (k = 4) independent set.

Suppose there are n people represented as vertices, and edges represent the acquaintance relation between two people. One would like to form a committee selected from the n people and guarantee that no two members are acquainted. The people in the committee form an independent set Is if no two members are related. An independent set, Is, is a subset of V such that no two vertices in Is are adjacent.
Let k denote the cardinality of an independent set of size k, Isk. Is(k=2) independent sets include {{v1, v8}, {v1, v5}, {v2, v7}, {v3, v7}, · · · } and one of them is shown in Figure 11.27 (a). Is(k=3) independent sets include {{v1, v3, v5}, {v1, v5, v7}, {v3, v5, v7}, · · · } and one of them is shown in Figure 11.27 (b). Is(k=4) independent sets are {{v1, v3, v5, v6}, {v1, v3, v5, v9}, {v1, v3, v6, v7}, {v3, v5, v7, v9}, · · · } and one of them is shown in Figure 11.27 (c). Since there is no independent set of size (k = 5), a (k = 4) independent set is maximum. The problem of finding a maximum independent set, or IDS in short, is formally defined as follows:
Problem 11.14. Maximum independent set
Input: A graph G = (V, E)
Output: Is ⊆ V such that |Is| is maximized, where ∀a, b ∈ Is, (a, b) ∉ E.
No polynomial time deterministic algorithm for finding the maximum independent set, IDS, is known yet. Indeed, IDS is NP-hard and a proof similar to Theorem 11.16 is possible, but (SCN ≤p IDS) is left for an exercise. Here is a simpler proof to show IDS ∈ NP-hard using the CLQ (maximum clique) Problem 11.12, which is already known to be in NP-hard by Theorem 11.16.
Theorem 11.17. CLQ ≡p IDS, i.e., CLQ and IDS are dual problems.

\[ \mathrm{IDS} \le_p \mathrm{CLQ} \;\Leftrightarrow\; \mathrm{IDS}(G) = \mathrm{CLQ}(\bar{G}) \tag{11.51} \]
\[ \mathrm{CLQ} \le_p \mathrm{IDS} \;\Leftrightarrow\; \mathrm{CLQ}(G) = \mathrm{IDS}(\bar{G}) \tag{11.52} \]

where Ḡ = (V, Ē) such that (E ∪ Ē = Kn) ∧ (E ∩ Ē = ∅).


Proof. Let Ḡ be the complement graph of G: if there is an edge between vx and vy in G, then there is no edge between them in Ḡ, and if there is no edge between vx and vy in G, then there is an edge between them in Ḡ. Constructing the complement graph Ḡ takes polynomial time.

A maximum independent set in G, as exemplified in Figure 11.28 (c), is a maximum clique in the complement graph Ḡ, as given in Figure 11.28 (d). Suppose not, i.e., there is a bigger clique C′ in Ḡ: |C′| > |C|. Consider Is′ composed of all vertices in C′. There are edges between all pairs of vertices of C′ in Ḡ and no edges between them in G. Let Is′ = C′. Then |Is′| > |Is|, which contradicts that Is is a maximum independent set. Therefore, IDS ≤p CLQ.
Conversely, CLQ ≤p IDS, as depicted in Figures 11.28 (e) and (f). The two problems are complementary and thus CLQ ≡p IDS. □
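The complement construction is a few lines of code; the following brute-force sketch (exponential, with hypothetical helper names) realizes eqn (11.51) on the graph of Figure 11.28.

from itertools import combinations

def complement(n, E):
    # All vertex pairs that are not edges of G.
    return {(u, v) for u, v in combinations(range(n), 2)
            if (u, v) not in E and (v, u) not in E}

def max_clique(n, E):
    # Largest vertex set whose pairs are all edges (brute force).
    adj = E | {(v, u) for u, v in E}
    for k in range(n, 0, -1):
        for c in combinations(range(n), k):
            if all((a, b) in adj for a, b in combinations(c, 2)):
                return set(c)
    return set()

def max_independent_set(n, E):
    # IDS(G) = CLQ(G-bar), eqn (11.51).
    return max_clique(n, complement(n, E))

E = {(0, 1), (0, 4), (1, 2), (1, 4), (3, 4)}   # Figure 11.28 (a), v1..v5 as 0..4
print(max_independent_set(5, E))               # {0, 2, 3}, i.e., {v1, v3, v4}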

(a) and (b): the adjacency matrices of G and Ḡ over v1 ∼ v5:

\[ G = \begin{pmatrix} 0&1&0&0&1 \\ 1&0&1&0&1 \\ 0&1&0&0&0 \\ 0&0&0&0&1 \\ 1&1&0&1&0 \end{pmatrix} \qquad \bar{G} = \begin{pmatrix} 0&0&1&1&0 \\ 0&0&0&1&0 \\ 1&0&0&1&1 \\ 1&1&1&0&0 \\ 0&0&1&0&0 \end{pmatrix} \]

(c) Is on G = {v1, v3, v4}; (d) clique C on Ḡ = {v1, v3, v4}; (e) clique C on G = {v1, v2, v5}; (f) Is on Ḡ = {v1, v2, v5}.

Figure 11.28: IDS ≡p CLQ

Consider the following decision version of IDS, or simply IDSdv, which is to find an independent set of size at least k.

Problem 11.15. k-IndependentSet(G, k)
Input: a graph G = (V, E) and a positive integer k ≤ n
Output: yes if |Is| ≥ k such that Is ⊆ V ∧ ∀u, v ∈ Is, (u, v) ∉ E; no otherwise

IDSdv is clearly in NP as one can verify any guessed answer. To show IDSdv ∈ NP-hard,
‘CLQdv ≤p IDSdv ’ shall be shown. Indeed, CLQdv ≡p IDSdv .

\[ \mathrm{IDS}^{dv}(G, k) = \mathrm{CLQ}^{dv}(\bar{G}, k) \;\Leftrightarrow\; \mathrm{IDS}^{dv} \le_p \mathrm{CLQ}^{dv} \tag{11.53} \]
\[ \mathrm{CLQ}^{dv}(G, k) = \mathrm{IDS}^{dv}(\bar{G}, k) \;\Leftrightarrow\; \mathrm{CLQ}^{dv} \le_p \mathrm{IDS}^{dv} \tag{11.54} \]

where Ḡ = (V, Ē) such that (E ∪ Ē = Kn) ∧ (E ∩ Ē = ∅).

Since CLQdv is in NP-complete and CLQdv ≤p IDSdv , IDSdv is in NP-hard. Since IDSdv
is in NP and NP-hard, IDSdv ∈ NP-complete.

11.6.3 Vertex Cover

Figure 11.29: Dual problems VCP ≡p IDS. (a) V = Vc ∪ Is and Vc = V − Is; (b) Is (k = 4) with Vc = V − Is (n − k = 5); (c) Is (k = 5) with Vc = V − Is (n − k = 4).

Consider the vertex cover Problem 4.13, or simply VCP, defined on page 180. No polynomial time deterministic algorithm for finding the minimum vertex cover, VCP, is known yet. Indeed, VCP is NP-hard and a proof similar to Theorem 11.16 is possible, but (SCN ≤p VCP) is left for an exercise. Here is a much simpler proof to show VCP ∈ NP-hard using the IDS (maximum independent set) Problem 11.14, which is already known to be in NP-hard by Theorem 11.17. Showing (IDS ≤p VCP) means VCP is in NP-hard. Indeed, IDS ≡p VCP.

Theorem 11.18. VCP ≡p IDS

\[ \mathrm{VCP} \le_p \mathrm{IDS} \;\Leftrightarrow\; \mathrm{VCP}(G) = V - \mathrm{IDS}(G) \tag{11.55} \]
\[ \mathrm{IDS} \le_p \mathrm{VCP} \;\Leftrightarrow\; \mathrm{IDS}(G) = V - \mathrm{VCP}(G) \tag{11.56} \]

Proof. First, to show VCP ≤p IDS, suppose there is an algorithm for IDS which finds a maximum independent set, Is, on G = (V, E). The claim is Vc = V − Is, i.e., VCP(G) = V − IDS(G) in eqn (11.55). If k vertices are selected in Is, all remaining vertices form the minimum vertex cover; |Vc| = |V| − |Is| = n − k, as exemplified in Figure 11.29.
Suppose that IDS finds Is but Vc ≠ V − Is, i.e., there exists a vertex cover Vc′ such that |Vc′| < |Vc|. This means that there exists vx ∈ Vc with vx ∉ Vc′. There are two cases depending on whether there exists an edge between vx and some vertex in Is. First, if there is no edge between vx and any vertex in Is, vx should be included in the maximum independent set; this is a contradiction.

Second, suppose there is an edge between vx and some vertex in Is. Then this edge cannot be covered by Vc′. Therefore, VCP ≤p IDS.
Next, IDS ≤p VCP because IDS(G) = V − VCP(G) in eqn (11.56); it is just an algebraic rearrangement of eqn (11.55). □

(a) VCP on G: Vc = V − Is = V − C̄k = {v2, v5}; (b) IDS on G: Is = V − Vc = C̄k = {v1, v3, v4}; (c) CLQ on Ḡ: C̄k = V − Vc = Is = {v1, v3, v4}; (d) VCP on Ḡ: V̄c = V − Īs = V − Ck = {v3, v4}; (e) IDS on Ḡ: Īs = V − V̄c = Ck = {v1, v2, v5}; (f) CLQ on G: C = V − V̄c = Īs = {v1, v2, v5}; (g) VCP(G′): Vc′ = V′ − Is′ = V′ − C̄k′ = {v5, v6, v7, v8}; (h) IDS(G′): Is′ = V′ − Vc′ = C̄k′ = {v1, v2, v3, v4}; (i) CLQ(Ḡ′): C̄k′ = V′ − Vc′ = Is′ = {v1, v2, v3, v4}; (j) VCP(Ḡ′): V̄c′ = V′ − Īs′ = V′ − Ck′ = {v1, v2, v3, v5}; (k) IDS(Ḡ′): Īs′ = V′ − V̄c′ = Ck′ = {v4, v6, v7, v8}; (l) CLQ(G′): C′ = V′ − V̄c′ = Īs′ = {v4, v6, v7, v8}.

Figure 11.30: Trio problems VCP ≡p IDS ≡p CLQ

Another proof to show VCP ∈ NP-hard utilizes the CLQ (maximum clique) Problem 11.12, which is already known to be in NP-hard by Theorem 11.16. Showing (CLQ ≤p VCP) means VCP is in NP-hard. Indeed, CLQ ≡p VCP.

Theorem 11.19. VCP ≡p CLQ

\[ \mathrm{VCP} \le_p \mathrm{CLQ} \;\Leftrightarrow\; \mathrm{VCP}(G) = V - \mathrm{CLQ}(\bar{G}) \tag{11.57} \]
\[ \mathrm{CLQ} \le_p \mathrm{VCP} \;\Leftrightarrow\; \mathrm{CLQ}(G) = V - \mathrm{VCP}(\bar{G}) \tag{11.58} \]

where Ḡ = (V, Ē) such that (E ∪ Ē = Kn) ∧ (E ∩ Ē = ∅).

Proof. Since IDS(G) = CLQ(Ḡ) by eqn (11.51) and VCP(G) = V − IDS(G) by eqn (11.55), VCP(G) = V − CLQ(Ḡ). Substituting Ḡ for G in eqn (11.57) gives VCP(Ḡ) = V − CLQ(G). Hence, CLQ(G) = V − VCP(Ḡ). □
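With the hypothetical brute-force helpers from the independent set example, the trio relations can be checked on the same small graph; a sketch of eqn (11.55):

def min_vertex_cover(n, E):
    # VCP(G) = V - IDS(G), eqn (11.55).
    return set(range(n)) - max_independent_set(n, E)

E = {(0, 1), (0, 4), (1, 2), (1, 4), (3, 4)}   # Figure 11.28 (a) again
vc = min_vertex_cover(5, E)
print(vc)                                      # {1, 4}, i.e., {v2, v5}
assert all(u in vc or v in vc for u, v in E)   # every edge is covered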

CLQ, IDS, and VCP are trio problems by Theorems 11.17, 11.18, and 11.19. Reduction relations among these three problems are illustrated on four different graphs in Figure 11.30.
Consider the following decision version of VCP, or simply VCPdv, which is to find a vertex cover of size at most k.

Problem 11.16. k-VertexCover(G, k)
Input: a graph G = (V, E) and a positive integer k ≤ n
Output: yes if |Vc| ≤ k such that ∀(vx, vy) ∈ E, ∃vz ∈ Vc (vx = vz ∨ vy = vz); no otherwise

VCPdv is clearly in NP as one can verify any guessed answer. NP-hardness of VCPdv can be shown by the facts that IDSdv ∈ NP-complete and IDSdv ≤p VCPdv. Indeed, VCPdv ≡p IDSdv.

\[ \mathrm{VCP}^{dv}(G, k) = \mathrm{IDS}^{dv}(G, n - k) \;\Leftrightarrow\; \mathrm{VCP}^{dv} \le_p \mathrm{IDS}^{dv} \tag{11.59} \]
\[ \mathrm{IDS}^{dv}(G, k) = \mathrm{VCP}^{dv}(G, n - k) \;\Leftrightarrow\; \mathrm{IDS}^{dv} \le_p \mathrm{VCP}^{dv} \tag{11.60} \]

Since IDSdv is in NP-complete and IDSdv ≤p VCPdv, VCPdv is in NP-hard.
NP-hardness of VCPdv can also be shown by the facts that CLQdv ∈ NP-complete and CLQdv ≤p VCPdv. Indeed, VCPdv ≡p CLQdv.

\[ \mathrm{VCP}^{dv}(G, k) = \mathrm{CLQ}^{dv}(\bar{G}, n - k) \;\Leftrightarrow\; \mathrm{VCP}^{dv} \le_p \mathrm{CLQ}^{dv} \tag{11.61} \]
\[ \mathrm{CLQ}^{dv}(G, k) = \mathrm{VCP}^{dv}(\bar{G}, n - k) \;\Leftrightarrow\; \mathrm{CLQ}^{dv} \le_p \mathrm{VCP}^{dv} \tag{11.62} \]

where Ḡ = (V, Ē) such that (E ∪ Ē = Kn) ∧ (E ∩ Ē = ∅).

Since CLQdv is in NP-complete and CLQdv ≤p VCPdv , VCPdv is in NP-hard. Since VCPdv
is in NP and NP-hard, VCPdv ∈ NP-complete.

11.6.4 Set Cover Problem


Consider the minimum set cover problem, or simply SCV, which appeared earlier as an exercise in Q 4.24 on page 208. The following decision version of SCV is in NP:

Problem 11.17. Minimum set cover decision version problem
Input: S1∼m, where $\bigcup_{i=1}^{m} S_i = U$, and the upper bound u.
Output: S′1∼k such that S′1∼k ⊆ S1∼m ∧ $\bigcup_{i=1}^{k} S'_i = U$ ∧ k ≤ u.

(a) VCP(G) = {v2, v5}; (b) adjacency list of G: v1 → {v2, v5}, v2 → {v1, v3, v5}, v3 → {v2}, v4 → {v5}, v5 → {v1, v2, v4}; (c) adjacent edge sets: S1 → {(v1, v2), (v1, v5)}, S2 → {(v1, v2), (v2, v3), (v2, v5)}, S3 → {(v2, v3)}, S4 → {(v4, v5)}, S5 → {(v1, v5), (v2, v5), (v4, v5)}.

Figure 11.31: Reduction VCP ≤p SCV

To show that SCVdv ∈ NP-hard, consider the Vertex Cover decision version Problem 11.16, or simply VCPdv, which was already shown to be in NP-complete. Recall that a graph can be represented as an adjacency list, as discussed on page 179; to be more specific, it was an adjacent vertex list. Now, consider an adjacent edge list, as depicted in Figure 11.31 (c), which contains redundant information compared to the adjacent vertex list in Figure 11.31 (b). However, if we feed the adjacent edge list to an algorithm for SCVdv as an input, it solves VCPdv. This reduction-based algorithm for VCPdv is stated as follows:

Algorithm 11.5. Vertex cover by a set cover.

VCP2SCV(G)
for i = 1 ∼ n, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
  for j = 1 ∼ |L(vi)|, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
    append(Ae(vi), (vi, lj(vi))) . . . . . . . . . . . . . . . . . . . . 3
return SCV algo(Ae) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Lines 1 ∼ 3 of Algorithm 11.5 prepare the adjacent edge list, Ae, of G from the adjacent vertex list, L, of G. This takes Θ(|E|). Clearly, SCVdv ∈ NP-hard because VCPdv ≤p SCVdv.
SCV, the original optimization version, is in NP-hard as well by eqn (11.63) or (11.64).

\[ \mathrm{VCP}^{dv}(G, u) = \begin{cases} \mathrm{T} & \text{if } \mathrm{SCV}(A_e) \le u \\ \mathrm{F} & \text{otherwise} \end{cases} \;\Leftrightarrow\; \mathrm{VCP}^{dv} \le_p \mathrm{SCV} \tag{11.63} \]

\[ \mathrm{SCV}^{dv}(A_e, u) = \begin{cases} \mathrm{T} & \text{if } \mathrm{SCV}(A_e) \le u \\ \mathrm{F} & \text{otherwise} \end{cases} \;\Leftrightarrow\; \mathrm{SCV}^{dv} \le_p \mathrm{SCV} \tag{11.64} \]

Clearly, SCV ∈ NP-hard; SCV is as hard as VCPdv and SCVdv, which are among the hardest problems in NP.
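The input transformation of Algorithm 11.5 can be sketched directly (illustrative only; the greedy cover below is the standard approximation heuristic standing in for an arbitrary SCV algorithm, so it is not guaranteed optimal in general).

def adjacent_edge_sets(n, E):
    # S_v = the set of edges incident to vertex v: the SCV input.
    canon = {tuple(sorted(e)) for e in E}
    return {v: {e for e in canon if v in e} for v in range(n)}

def greedy_scv(sets, universe):
    # Greedy set cover: repeatedly pick the set covering most uncovered items.
    uncovered, chosen = set(universe), []
    while uncovered:
        v = max(sets, key=lambda v: len(sets[v] & uncovered))
        chosen.append(v)
        uncovered -= sets[v]
    return chosen

E = {(0, 1), (0, 4), (1, 2), (1, 4), (3, 4)}    # the graph of Figure 11.31
sets = adjacent_edge_sets(5, E)
print(greedy_scv(sets, {tuple(sorted(e)) for e in E}))   # [1, 4] = {v2, v5}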

Figure 11.32: Hamiltonian path and cycle. (a) traceable and Hamiltonian graph G1: HMP(G1) = HMC(G1) = ⟨v1, v2, v3, v4, v5⟩; (b) traceable but non-Hamiltonian graph G2: HMP(G2) = ⟨v3, v2, v1, v5, v4⟩, HMC(G2) = ∅; (c) non-traceable and non-Hamiltonian graph G3: HMP(G3) = ∅, HMC(G3) = ∅; (d) traceable and Hamiltonian graph G4: HMP(G4) = HMC(G4) = ⟨v1, v2, v6, v9, v4, v5, v8, v3, v7⟩; (e) traceable but non-Hamiltonian graph G5: HMP(G5) = ⟨v3, v2, v1, v7, v8, v5, v4, v6, v9⟩, HMC(G5) = ∅; (f) non-traceable and non-Hamiltonian graph G6: HMP(G6) = ∅, HMC(G6) = ∅.

11.6.5 Hamiltonian Path and Cycle


A Hamiltonian path (or traceable path) is a path in an undirected or directed graph that
visits each vertex exactly once. If such a path exists in a graph, G, the graph is said to be a
traceable graph or Hamiltonian-connected. All graphs in Figure 11.32 except for (c) and (f)
are traceable graphs as their respective Hamiltonian paths are provided, respectively. The
graphs in Figure 11.32 (c) and (f) do not contain a Hamitonian path and thus non-traceable.
The Hamiltonian path problem, or HMP in short, is to determine whether a Hamiltonian
path exists in a graph and defined as follows:

Problem 11.18. Hamiltonian path problem
Input: G = (V, E) where n = |V|
Output: yes if ∃V′, a permutation of V, such that ∀i ∈ {1 ∼ n − 1}, (v′i, v′i+1) ∈ E; no otherwise

The Hamiltonian path Problem 11.18 is clearly in NP, as one can trace the path and make sure that each vertex is visited exactly once. Obviously, this can be done in linear time assuming that the input graph is represented as an adjacency matrix.
Another problem that is closely related to HMP is the Hamiltonian cycle problem. A
Hamiltonian cycle (or Hamiltonian circuit) is a cycle in an undirected or directed graph that
visits each vertex exactly once. If such a cycle exists in a graph, G, the graph is said to be
a Hamiltonian graph.

Graphs in Figure 11.32 (a) and (d) are Hamiltonian graphs and their respective cycles are provided. Note that the last vertex in the path must be connected to the first vertex. The Hamiltonian cycle problem, or HMC in short, is to determine whether a Hamiltonian cycle exists in a graph and can be defined in the same way as HMP but with one more constraint.

Problem 11.19. Hamiltonian cycle problem
Input: G = (V, E) where n = |V|
Output: yes if ∃V′, a permutation of V, such that ∀i ∈ {1 ∼ n − 1}, (v′i, v′i+1) ∈ E ∧ (v′n, v′1) ∈ E; no otherwise

HMC is among Karp’s 21 NP-complete problems listed in [94]. NP-completeness of HMP


can be shown by the reduction from HMC, which is already known to be in NP-complete:
HMC ≤p HMP.

Theorem 11.20. HMP ∈ NP-complete.

Proof. Suppose that HMC is already known to be in NP-complete. Given an input graph G = (V, E) for HMC, a new Gd is constructed such that G contains a Hamiltonian cycle if and only if Gd contains a Hamiltonian path. To construct such a graph Gd, split any vertex whose degree is greater than 2, as depicted in Figure 11.33. Note that if the graph has a vertex vx with deg(vx) = 1, then the graph is automatically not a Hamiltonian graph, i.e., there is no Hamiltonian cycle. Without loss of generality, it is assumed that every vertex vx ∈ G has deg(vx) > 1.

Figure 11.33: Steps for (HMC ≤p HMP). (a) G for HMC; (b) Gd for HMP, where v1 is split into v1 and its copy vc, and the endpoints s and e are attached to v1 and vc, respectively.

Splitting a vertex vx is done by adding a copy vertex, vc, of vx together with all its edges. Then add starting and ending vertices, s and e, by connecting s with vx and e with vc, respectively. Albeit vx can be any node, let vx = v1. Preparing the new Gd with a duplicated vertex takes polynomial time.
To show that if HMC(G) is true, HMP(Gd) is also true, consider the output cycle sequence V′ of HMC(G): V′ = ⟨v′1, v′2, · · · , v′n⟩. The vertex v1 must be in V′, so let v′x = v1. Now consider the path V′′ = ⟨s, v1, v′x+1, · · · , v′n, v′1, · · · , v′x−1, vc, e⟩, which starts from s

and then follows the cycle sequence of HMC(G). Clearly, V′′ is a Hamiltonian path in Gd, so HMP(Gd) is true.
Conversely, if HMP(Gd) is true, HMC(G) is also true. In order for HMP(Gd) to be true, the output path sequence must start from one of the degree-one endpoints, s or e. Consider a Hamiltonian path in Gd, V′′ = ⟨s, v1, v′′2, · · · , v′′n, vc, e⟩. Then the sequence ⟨v1, v′′2, · · · , v′′n⟩ is a Hamiltonian cycle in G.
∴ G contains a Hamiltonian cycle if and only if Gd contains a Hamiltonian path.
Since HMC ≤p HMP, HMP is NP-hard. Since HMP ∈ NP and NP-hard, HMP ∈ NP-complete. □
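The vertex-splitting transformation takes only a few lines; a sketch under our own representation (graphs as adjacency-set dictionaries, with v1 as the split vertex, as in the proof):

def hmc_to_hmp(adj, vx):
    # Split vx into vx and a copy vc, then attach endpoints s and e.
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    g['vc'] = set(g[vx])                 # vc duplicates all of vx's edges
    for u in g[vx]:
        g[u].add('vc')
    g['s'] = {vx}; g[vx].add('s')        # s is adjacent only to vx
    g['e'] = {'vc'}; g['vc'].add('e')    # e is adjacent only to vc
    return g

# A triangle has a Hamiltonian cycle, so Gd has a Hamiltonian path:
G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
Gd = hmc_to_hmp(G, 1)                    # one such path: s, 1, 2, 3, vc, e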

11.6.6 Traveling Salesman Problem


Consider the Traveling salesman Problem 4.16, or TSP in short, defined on page 190.

The adjacency matrix of Gx and the corresponding cost matrix of the complete graph Gcx over v1 ∼ v5 are:

\[ G_x = \begin{pmatrix} 0&1&1&0&0 \\ 1&0&1&0&1 \\ 1&1&0&1&0 \\ 0&0&1&0&1 \\ 0&1&0&1&0 \end{pmatrix} \Longrightarrow G_{cx} = \begin{pmatrix} 0&1&1&2&2 \\ 1&0&1&2&1 \\ 1&1&0&1&2 \\ 2&2&1&0&1 \\ 2&1&2&1&0 \end{pmatrix} \]

(a) a traceable graph Gx case: HMP(Gx) = ⟨v3, v1, v2, v5, v4⟩ and TSP(Gcx) = 4 = n − 1 with tour ⟨v3, v1, v2, v5, v4⟩; (b) a non-traceable graph Gy case: HMP(Gy) = No and TSP(Gcy) = 5 > n − 1 with tour ⟨v4, v3, v1, v2, v5⟩.

Figure 11.34: HMP ≤p TSP

Theorem 11.21. TSP is NP-hard.

Proof. Consider the Hamiltonian Path Problem 11.18, which is already known to be in NP-complete. As depicted in Figure 11.34, assign cost 1 to the existing edges and 2 to the non-existing edges to make a complete graph. This input transformation takes Θ(n²) where n = |V|. Use a TSP algorithm: if the solution for TSP equals n − 1, it is also the solution for HMP. If the solution for TSP is greater than n − 1, a Hamiltonian path does not exist, as exemplified in Figure 11.34 (b).

\[ \mathrm{HMP}(G) = \begin{cases} \mathrm{T} & \text{if } \mathrm{TSP}(V, C) = n - 1 \\ \mathrm{F} & \text{otherwise} \end{cases} \quad \text{where } c_{v_x,v_y} = \begin{cases} 1 & \text{if } (v_x, v_y) \in E \\ 2 & \text{otherwise} \end{cases} \tag{11.65} \]

Clearly, HMP ≤p TSP, and TSP must be as hard as HMP, one of the hardest problems in NP. □
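The transformation of eqn (11.65) is a two-line cost-matrix construction; a brute-force sketch (the permutation search is exponential and only for illustration):

from itertools import permutations

def hmp_via_tsp(n, E):
    # HMP(G) = T iff the cheapest path over the 1/2-cost matrix costs n - 1.
    adj = E | {(v, u) for u, v in E}
    cost = [[1 if (i, j) in adj else 2 for j in range(n)] for i in range(n)]
    best = min(sum(cost[p[i]][p[i + 1]] for i in range(n - 1))
               for p in permutations(range(n)))
    return best == n - 1                 # eqn (11.65)

# Figure 11.34 (a): Gx is traceable, e.g., HMP(Gx) = <v3, v1, v2, v5, v4>.
E = {(0, 1), (0, 2), (1, 2), (1, 4), (2, 3), (3, 4)}
print(hmp_via_tsp(5, E))                 # True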

The decision version of TSP, or simply TSPdv , is to find a path whose total cost is within
the budget, k. It is formally defined as follows:

Problem 11.20. Traveling salesman decision version problem
Input: A sequence V1∼n of size n, an n × n cost matrix Cv1∼vn,v1∼vn where cvi,vj is the cost between vi and vj, and a threshold value, k
Output: T if ∃V′, a permutation of V, such that $\sum_{i=1}^{n-1} c_{v'_i, v'_{i+1}} \le k$; F otherwise

A purported answer for TSPdv can be verified in polynomial time, O(|V|). For example, in Figure 4.31 on page 191, if k = 16, the purported answer in Figure 4.31 (b) is incorrect while the purported answer in Figure 4.31 (c) is correct. Hence, TSPdv is in NP.

\[ \mathrm{HMP}(G) = \mathrm{TSP}^{dv}(V, C, n - 1) \quad \text{where } c_{v_x,v_y} = \begin{cases} 1 & \text{if } (v_x, v_y) \in E \\ 2 & \text{otherwise} \end{cases} \tag{11.66} \]

Since TSPdv is in NP and, by eqn (11.66), in NP-hard, TSPdv is in NP-complete.

11.7 co-NP-complete

Figure 11.35: P = NP? problem. The relationships among the P, NP, co-NP, NP-complete, co-NP-complete, NP-hard, and co-NP-hard classes.

This section introduces the concepts of co-NP and co-NP-complete. "A general law can never be verified by a finite number of observations. It can, however, be falsified by only one observation." This principle of the asymmetry between verification and falsification by Karl Popper [136] helps in understanding the concept of co-NP.

A problem, px, is said to be in co-NP if a purported answer cannot be verified in polynomial time, but a purported answer for its negated version, ¬px, can be verified in polynomial time. In other words, if ¬px, the complement or negated problem, is in NP, then px is in co-NP. Whether NP = co-NP is one of the open problems in computer science. The widespread belief is that NP ≠ co-NP [99, p 496]. Figure 11.35 shows a diagram for the co-NP and co-NP-complete classes.
A problem, px, is said to be in co-NP-complete if ¬px, the complement problem, is in NP-complete. The co-NP-completeness of px can also be shown if px ∈ co-NP and there exists a known co-NP-complete problem, py, such that py ≤p px. Problems in co-NP-complete include Fallacy, Tautology, CNF-Fallacy, DNF-Tautology, Logical equivalency, and Frobenius postage stamp problems. Figure 11.36 shows a partial reduction and complement relation graph for the co-NP-complete problems presented in this section.

Figure 11.36: Partial reduction and complement relation graph for co-NP-complete problems. SAT, SCN, and USSE are NP-complete; FAL, TAU, LEQ, FCN, TDN, and FSP are co-NP-complete. The complement (C) and reduction (R) relations are:
C1 SAT = ¬FAL · · · Eq (11.67)    C2 SCN = ¬FCN · · · Eq (11.80)    C3 USSE = ¬FSP · · · Tm 11.22
R1 FAL ≤p TAU · · · Eq (11.70)    R2 TAU ≤p FAL · · · Eq (11.69)    R3 FAL ≤p LEQ · · · Eq (11.72)
R4 LEQ ≤p FAL · · · Eq (11.74)    R5 TAU ≤p LEQ · · · Eq (11.71)    R6 LEQ ≤p TAU · · · Eq (11.73)
R7 FCN ≤p TDN · · · Eq (11.85)    R8 TDN ≤p FCN · · · Eq (11.84)    R9 FCN ≤p FAL · · · Eq (11.88)
R10 TDN ≤p TAU · · · Eq (11.89)

11.7.1 Fallacy
A propositional statement that is always false is called a contradiction or fallacy. The problem of determining whether a propositional statement S is a fallacy, or simply FAL, is formally defined as follows:

Problem 11.21. Fallacy
Input: A statement S of length m with n boolean variables, ⟨p1, · · · , pn⟩
Output: is_fallacy(S) = True if ∀V1∼n, eval(S, V1∼n) = F; False otherwise
where vi = T or F for pi.

It is exactly the complement of the satisfiability Problem 11.1.

\[ \text{is\_fallacy}(S) = \neg\,\text{is\_satisfiable}(S) \;\Leftrightarrow\; \mathrm{FAL} = \neg\mathrm{SAT} \tag{11.67} \]
\[ \text{is\_satisfiable}(S) = \neg\,\text{is\_fallacy}(S) \;\Leftrightarrow\; \mathrm{SAT} = \neg\mathrm{FAL} \tag{11.68} \]

Hence, FAL is in co-NP and co-NP-complete because SAT is in NP and NP-complete, respectively.
11.7. CO-NP-COMPLETE 681

11.7.2 Tautology
A propositional statement that is always true is called a tautology. The problem of determining whether a propositional statement S is a tautology, or simply TAU, is formally defined as follows:

Problem 11.22. Tautology
Input: A statement S of length m with n boolean variables, ⟨p1, · · · , pn⟩
Output: is_tautology(S) = True if ∀V1∼n, eval(S, V1∼n) = T; False otherwise
where vi = T or F for pi.
If a sentence S is a tautology, then ¬S is a fallacy.

\[ \text{is\_tautology}(S) = \text{is\_fallacy}(\neg S) \;\Leftrightarrow\; \mathrm{TAU} \le_p \mathrm{FAL} \tag{11.69} \]
\[ \text{is\_fallacy}(S) = \text{is\_tautology}(\neg S) \;\Leftrightarrow\; \mathrm{FAL} \le_p \mathrm{TAU} \tag{11.70} \]

The reduction relation in eqn (11.70) proves that TAU is co-NP-hard. Since TAU is in co-NP and co-NP-hard, it is co-NP-complete.

11.7.3 LEQ
Consider the problem of determining whether two propositional statements are equivalent. The logical equivalency problem, or simply LEQ, is formally defined as follows:

Problem 11.23. Logical equivalency
Input: Two statements Sx and Sy
Output: LEQ(Sx, Sy) = True if Sx ≡ Sy; False otherwise

LEQ is in co-NP because its complement problem is in NP: while a single guessed assignment showing that the two statements are not equivalent can be verified in polynomial time, all possible assignments in the truth table may need to be checked for LEQ.
LEQ is co-NP-hard because TAU ≤p LEQ. To determine whether a statement S is a tautology, one can create a tautological statement St and test whether S is logically equivalent to St. A simple tautology is p ∨ ¬p. For each Boolean variable pi that appears in S, create pi ∨ ¬pi and connect them with conjunctions, as stated in eqn (11.71). Hence, LEQ is co-NP-complete.
\[ \text{is\_tautology}(S) = \mathrm{LEQ}\Big(S, \bigwedge_{i=1}^{n}(p_i \vee \neg p_i)\Big) \;\Leftrightarrow\; \mathrm{TAU} \le_p \mathrm{LEQ} \tag{11.71} \]
\[ \text{is\_fallacy}(S) = \mathrm{LEQ}\Big(S, \bigvee_{i=1}^{n}(p_i \wedge \neg p_i)\Big) \;\Leftrightarrow\; \mathrm{FAL} \le_p \mathrm{LEQ} \tag{11.72} \]

where p1, · · · , pn are the n boolean variables in S.

The co-NP-completeness of LEQ can also be determined by FAL, as given in eqn (11.72). To determine whether a statement S is a fallacy, one can create a contradicting statement Sf and test whether S is logically equivalent to Sf. A simple fallacy is p ∧ ¬p. For each Boolean variable pi that appears in S, create pi ∧ ¬pi and connect them with disjunctions, as stated in eqn (11.72).
LEQ reduces to TAU and FAL as given in eqns (11.73) and (11.74).

\[ \mathrm{LEQ}(S_x, S_y) = \text{is\_tautology}(S_x \leftrightarrow S_y) \;\Leftrightarrow\; \mathrm{LEQ} \le_p \mathrm{TAU} \tag{11.73} \]
\[ \mathrm{LEQ}(S_x, S_y) = \text{is\_fallacy}(S_x \leftrightarrow \neg S_y) \;\Leftrightarrow\; \mathrm{LEQ} \le_p \mathrm{FAL} \tag{11.74} \]
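Both reductions can be demonstrated by a truth-table sweep, which is exponential in n, as expected for a co-NP-hard problem. In the sketch below (our own representation: a statement is a Python predicate over an assignment tuple), eqn (11.73) is applied to a De Morgan pair.

from itertools import product

def is_tautology(stmt, n):
    # True iff stmt evaluates to True under all 2^n assignments.
    return all(stmt(v) for v in product([False, True], repeat=n))

def leq(sx, sy, n):
    # LEQ(Sx, Sy) = is_tautology(Sx <-> Sy), eqn (11.73).
    return is_tautology(lambda v: sx(v) == sy(v), n)

sx = lambda v: not (v[0] and v[1])         # ~(p ^ q)
sy = lambda v: (not v[0]) or (not v[1])    # ~p v ~q
print(leq(sx, sy, 2))                      # True, by De Morgan's law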

11.7.4 CNF vs. DNF


There is an asymmetric relation of computational hardness for many logic related problems depending on whether the propositional statement is represented in CNF or DNF. Disjunctive normal form, or DNF in short, is a standardization or normalization of a logical formula which is a disjunction of c-clauses, where a c-clause is a literal or a conjunction of literals. Let L be a set of literals and let ∧(L) be the c-clause where all literals in L are connected by conjunction.

\[ \wedge(L) = \bigwedge_{x \in L} x = \bigwedge_{i=1}^{|L|} l_i = l_1 \wedge \cdots \wedge l_{|L|} \quad \text{where } L = \{l_1, \cdots, l_{|L|}\} \tag{11.75} \]

The grammar for the c-clause is defined recursively and pseudo-formally in eqn (11.76).

\[ \wedge(L) = \begin{cases} l_1 & \text{if } |L| = 1 \\ \wedge(L - \{l_{|L|}\}) \frown \text{`}\wedge\text{'} \frown l_{|L|} & \text{if } |L| > 1 \end{cases} \tag{11.76} \]

DNF is a logical formula such that a single c-clause or m c-clauses are connected by disjunction. Let cx be a c-clause and C1∼|C| be a set of |C| c-clauses: C = {c1, · · · , c|C|}. The superscript d is used to denote a propositional logic statement in DNF, Sd, to distinguish it from a general logic statement, S.

\[ S^d = \bigvee_{x \in C} x = \bigvee_{i=1}^{|C|} c_i = c_1 \vee \cdots \vee c_{|C|} \tag{11.77} \]

Let cx be the xth c-clause in Sd and lx,y be the yth literal in the xth c-clause. Then eqn (11.77) can be stated as in eqn (11.78).

\[ S^d = \bigvee_{c_x \in C} \bigwedge_{l_{x,y} \in c_x} l_{x,y} = \bigvee_{i=1}^{|C|} \bigwedge_{j=1}^{|c_i|} l_{i,j} = (l_{1,1} \wedge \cdots \wedge l_{1,|c_1|}) \vee (l_{2,1} \wedge \cdots \wedge l_{2,|c_2|}) \vee \cdots \vee (l_{|C|,1} \wedge \cdots \wedge l_{|C|,|c_{|C|}|}) \tag{11.78} \]


The grammar for the DNF is defined recursively and pseudo-formally as follows:

\[ \mathrm{DNF}(C_{1\sim m}) = \begin{cases} c_1 & \text{if } m = 1 \\ \mathrm{DNF}(C_{1\sim m-1}) \frown \text{`}\vee\text{'} \frown c_m & \text{if } m > 1 \end{cases} \tag{11.79} \]
Consider the satisfiability problem where the input statement is in DNF, or SDN in short. Consider the following sample DNF statement with three c-clauses: S = (p ∧ ¬q ∧ ¬r) ∨ (¬p ∧ q ∧ r) ∨ (p ∧ ¬q ∧ r). To find whether S is satisfiable, one may pick any c-clause and assign 'T' to the variables in atomic form and 'F' to the variables in negation form. Note that if any one c-clause is true, the entire statement is true. If the first c-clause (p ∧ ¬q ∧ ¬r) is picked, the assignment (p = T, q = F, and r = F) makes S true. If the second c-clause (¬p ∧ q ∧ r) is picked, the assignment (p = F, q = T, and r = T) makes S true. If the last c-clause (p ∧ ¬q ∧ r) is picked, the assignment (p = T, q = F, and r = T) makes S true. Unless a c-clause is a contradiction, i.e., it contains both an atomic form and its negation within the c-clause, assigning truth values this way makes the entire statement S satisfiable. A pseudo code is stated as follows:

Algorithm 11.6. DNF-SAT

isDNF-SAT(S1∼m )
if m = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if sx and ¬sx ∈ S1 , return False . . . . . . . . . . . . . . . . . . . . 2
else return True . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
if sx and ¬sx ∈ Sm , return isDNF-SAT(S1∼m−1 ) . . . 5
else return True . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
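In Python, Algorithm 11.6 reduces to checking each c-clause for an internal contradiction; a sketch using the same clause representation as the earlier CLQ example (a list of integers, with −x for ¬px):

def is_dnf_sat(clauses):
    # A DNF statement is satisfiable iff some c-clause is contradiction-free.
    return any(all(-lit not in clause for lit in clause) for clause in clauses)

# S = (p ^ ~q ^ ~r) v (~p ^ q ^ r) v (p ^ ~q ^ r) from the text:
print(is_dnf_sat([[1, -2, -3], [-1, 2, 3], [1, -2, 3]]))   # True
# Every c-clause of Q 11.4's expression is contradictory:
print(is_dnf_sat([[1, -1, 3], [1, 2, -2], [-2, 3, -3]]))   # False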

The computational time complexity of Algorithm 11.6 is O(n), where n is the length of S. Unlike SCN, which is NP-complete by Theorem 11.7, SDN belongs to the P class.
The general complement relation between SAT and FAL in eqns (11.67) and (11.68) applies to the specific cases between SCN and FCN and between SDN and FDN.

\[ \text{is\_CNF-fallacy}(S^c) = \neg\,\text{isCNF-SAT}(S^c) \;\Leftrightarrow\; \mathrm{FCN} = \neg\mathrm{SCN} \tag{11.80} \]
\[ \text{is\_DNF-fallacy}(S^d) = \neg\,\text{isDNF-SAT}(S^d) \;\Leftrightarrow\; \mathrm{FDN} = \neg\mathrm{SDN} \tag{11.81} \]

Since SCN is NP-complete, FCN is co-NP-complete by eqn (11.80). Since SDN ∈ P, FDN is also in P by eqn (11.81).
Similarly, the general duality relation between TAU and FAL in eqns (11.69) and (11.70) applies to the specific cases between FCN and TDN and between TCN and FDN.

\[ \text{is\_CNF-tautology}(S) = \text{is\_DNF-fallacy}(\neg S) \;\Leftrightarrow\; \mathrm{TCN} \le_p \mathrm{FDN} \tag{11.82} \]
\[ \text{is\_DNF-fallacy}(S) = \text{is\_CNF-tautology}(\neg S) \;\Leftrightarrow\; \mathrm{FDN} \le_p \mathrm{TCN} \tag{11.83} \]
\[ \text{is\_DNF-tautology}(S) = \text{is\_CNF-fallacy}(\neg S) \;\Leftrightarrow\; \mathrm{TDN} \le_p \mathrm{FCN} \tag{11.84} \]
\[ \text{is\_CNF-fallacy}(S) = \text{is\_DNF-tautology}(\neg S) \;\Leftrightarrow\; \mathrm{FCN} \le_p \mathrm{TDN} \tag{11.85} \]

The proof of the duality relations comes from the fact that the negation of CNF is DNF and vice versa, due to De Morgan's laws in eqns (11.8) and (11.9).

\[ \neg S^d = \neg\Big(\bigvee_{c_x \in C} \bigwedge_{l_{x,y} \in c_x} l_{x,y}\Big) = \bigwedge_{c_x \in C} \neg\Big(\bigwedge_{l_{x,y} \in c_x} l_{x,y}\Big) = \bigwedge_{c_x \in C} \bigvee_{l_{x,y} \in c_x} \neg l_{x,y} \tag{11.86} \]
\[ \neg S^c = \neg\Big(\bigwedge_{d_x \in D} \bigvee_{l_{x,y} \in d_x} l_{x,y}\Big) = \bigvee_{d_x \in D} \neg\Big(\bigvee_{l_{x,y} \in d_x} l_{x,y}\Big) = \bigvee_{d_x \in D} \bigwedge_{l_{x,y} \in d_x} \neg l_{x,y} \tag{11.87} \]

Table 11.1: Complexity Classes of Problems on Logic with CNF and DNF

              CNF                    DNF                    General
Satisfiable   SCN ∈ NP-complete      SDN ∈ P                SAT ∈ NP-complete
Fallacy       FCN ∈ co-NP-complete   FDN ∈ P                FAL ∈ co-NP-complete
Tautology     TCN ∈ P                TDN ∈ co-NP-complete   TAU ∈ co-NP-complete

Since FDN is in P by eqn (11.81), TCN is also in P by eqn (11.82). Since FCN is co-NP-complete by eqn (11.80), TDN is also co-NP-complete by eqn (11.85).
Table 11.1 summarizes the complexity classes of various problems on logic in CNF, DNF, or general form.
Other trivial reduction relations by Fact 11.1 include the following:

\[ \text{is\_CNF-fallacy}(S^c) = \text{is\_fallacy}(S^c) \;\Leftrightarrow\; \mathrm{FCN} \le_p \mathrm{FAL} \tag{11.88} \]
\[ \text{is\_DNF-tautology}(S^d) = \text{is\_tautology}(S^d) \;\Leftrightarrow\; \mathrm{TDN} \le_p \mathrm{TAU} \tag{11.89} \]

Showing the converse statements, FAL ≤p FCN and TAU ≤p TDN, is quite challenging, though.

11.7.5 Frobenius Postage Stamp


Recall the Frobenius postage stamp Problem 5.4, or simply FSP, defined on page 226. While FSP is proven to be NP-hard in the literature, such as in [4, 154], a different interpretation is given here. Since FSP is an optimization problem, consider the following decision version.

Problem 11.24. Frobenius postage stamp decision problem (FSPdv)
Input: a set A of k positive integers where gcd(A1∼k) = 1 and n ∈ Z+
Output: T if ∀X, $\sum_{i=1}^{k} a_i x_i \ne n$; F if ∃X, $\sum_{i=1}^{k} a_i x_i = n$, where each xi is an integer with 0 ≤ xi.

Theorem 11.22. Frobenius postage stamp decision Problem 11.24, FSPdv ∈ co-NP-complete.

Proof. Consider the following complement problem obtained by negating the output of FSPdv:

\[ \begin{cases} \mathrm{T} & \text{if } \exists X, \; \sum_{i=1}^{k} a_i x_i = n \\ \mathrm{F} & \text{otherwise} \end{cases} \quad \text{where each } x_i \text{ is an integer with } 0 \le x_i. \]

It is the unbounded subset sum equality (USSE) Problem 5.3, which was proven to be in NP-complete in eqn (11.23) on page 659. Hence, the Frobenius postage stamp decision Problem 11.24, FSPdv, is in co-NP-complete. □
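The complement relation can be exercised with the standard pseudo-polynomial dynamic program for unbounded subset sum (a sketch; the function names are ours):

def usse(A, n):
    # Can n be written as a non-negative integer combination of A?
    reachable = [False] * (n + 1)
    reachable[0] = True
    for s in range(1, n + 1):
        reachable[s] = any(a <= s and reachable[s - a] for a in A)
    return reachable[n]

def fsp_dv(A, n):
    # FSPdv is the complement of USSE: T iff n is NOT representable.
    return not usse(A, n)

print(fsp_dv([4, 7, 12], 5))    # True: 5 cannot be formed
print(fsp_dv([4, 7, 12], 11))   # False: 11 = 4 + 7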

Since FSPdv is co-NP-complete, FSP ∈ co-NP-hard.



11.8 Exercises
Q 11.1. Show whether the following problems are in NP:

a). DUP ∈ NP, where DUP stands for the Down-up problem, considered as an exercise
in Q 2.24 on page 86.

b). UUD ∈ NP, where UUD stands for the Up-up-down problem, considered as an exercise
in Q 2.26 on page 87.

c). CEU ∈ NP, where CEU stands for the Element Uniqueness Problem 2.12 defined on
page 56.

d). AVLc ∈ NP, where AVL stands for the problem of constructing an AVL tree given a
list of n quantifiable elements, considered as an exercise in Q 8.4 on page 491.

e). CVH ∈ NP, where CVH stands for the Convex Hull Problem 10.2 defined on page 580.

Q 11.2. Justify whether the following statements are true or false under two different assumptions: (P = NP) and (P ≠ NP).

a). SAT 6∈ P.

b). Sort ∈ NP-complete.

c). CEU ∈ NP-hard, where CEU stands for the Element Uniqueness Problem 2.12 defined
on page 56.

d). Sort-complete ⊂ SAT-complete.

Q 11.3. Consider the following boolean expression.

(x ∨ ¬y ∨ z) ∧ (x ∨ y ∨ ¬z) ∧ (¬x ∨ ¬y ∨ ¬z)

a). Is the boolean expression satisfiable?

b). Is the boolean expression a Fallacy?

c). Is the boolean expression a Tautology?

d). Is the boolean expression in CNF?

e). Is the boolean expression in DNF?

Q 11.4. Consider the following boolean expression.

(x ∧ ¬x ∧ z) ∨ (x ∧ y ∧ ¬y) ∨ (¬y ∧ z ∧ ¬z)

a). Is the boolean expression satisfiable?

b). Is the boolean expression a Fallacy?

c). Is the boolean expression a Tautology?

d). Is the boolean expression in CNF?



e). Is the boolean expression in DNF?


Q 11.5. Consider the following boolean expression.

p ∨ ¬(q ∨ r)

a). Is the boolean expression satisfiable?


b). Is the boolean expression a Fallacy?
c). Is the boolean expression a Tautology?
d). Is the boolean expression in CNF?
e). Is the boolean expression in DNF?
f). Find an equi-satisfiable statement by the Tseitin transformation.
g). Find an equi-satisfiable statement in CNF-3 by the Tseitin transformation.
Q 11.6. Consider the following combinational circuit.

(A combinational circuit with inputs p, q, r and gates g1 ∼ g5, where g5 produces the output.)

a). Devise a memoization algorithm to evaluate a combinational circuit.


b). Devise an algorithm based on strong inductive programming to evaluate a combina-
tional circuit.
c). Provide the computational time and space complexities of the algorithm devised in
b).
d). Evaluate the circuit where p = 1, q = 0, and r = 1.
e). Is the circuit satisfiable?
f). Derive an equivalent propositional statement by Algorithm 11.2.
g). Is the circuit or an equivalent propositional statement in CNF?
h). Find an equi-satisfiable statement by Tseitin transformation.
i). Find an equi-satisfiable statement in CNF-3 by Tseitin transformation.

Q 11.7. Prove the Tseitin transformation rules in Figure 11.15 (a).

a). Prove v ↔ (p → q) ≡ (v ∨ p) ∧ (v ∨ ¬q) ∧ (¬v ∨ ¬p ∨ q).


b). Prove v ↔ (p ↑ q) ≡ (v ∨ p) ∧ (v ∨ q) ∧ (¬v ∨ ¬p ∨ ¬q).
c). Prove v ↔ (p ↓ q) ≡ (¬v ∨ ¬p) ∧ (¬v ∨ ¬q) ∧ (v ∨ p ∨ q).
d). Prove v ↔ (p ⊕ q) ≡ (¬v ∨ ¬p ∨ ¬q) ∧ (¬v ∨ p ∨ q) ∧ (v ∨ p ∨ ¬q) ∧ (v ∨ ¬p ∨ q).

e). Prove v ↔ (p ↔ q) ≡ (¬v ∨ ¬p ∨ q) ∧ (¬v ∨ p ∨ ¬q) ∧ (v ∨ ¬p ∨ ¬q) ∧ (v ∨ p ∨ q).

Q 11.8. Combinational Circuit Satisfiability Problem 11.2 or simply CCS allows seven types
of gates. Let C3S be the combinational circuit satisfiability problem which allows only three
basic types of gates {∧, ∨, ¬}. Consider the following combinational circuit which allows all
seven types of gates.

(A combinational circuit with inputs p, q, r and gates g1 ∼ g3, drawn from the seven gate types.)

a). Evaluate the circuit where p = 1, q = 0, and r = 1.

b). Is the circuit satisfiable?

c). Derive an equivalent propositional statement by Algorithm 11.2.

d). Convert the circuit using only the three basic types of gates.

e). Prove C3S is NP-complete assuming that CCS is NP-complete.

Q 11.9. Consider the special NOR gate only circuit satisfiability problem, NOGS in short,
where all gates in the circuit are NOR.

a). Show NOGS ∈ NP-hard, supposing that CCS is known to be in NP-complete.

b). Show NOGS ∈ NP-hard, supposing that NAGS is known to be in NP-complete.

c). Show NOGS ≤p NAGS

d). Show NOGS ≤p C3S, where C3S is the three basic gate only circuit satisfiability
problem considered in Exercise Q 11.8.

e). Show NAGS ≤p C3S

Q 11.10. Consider various subset arithmetic problems: SSE, SSM, SSmin, SPEp, SPMp,
and SPminp.

a). Show SPEp ∈ NP, where SPEp stands for the subset product equality of positive
numbers problem, considered as an exercise in Q 6.12 on page 344.

b). Show SSE ≡p SPEp, where SSE stands for the subset sum equality Problem 6.3 defined
on page 305.

c). Show SPEp ∈ NP-complete. (Hint: SSE ∈ NP-complete.)

d). Show SPMp ∈ NP-hard, where SPMp stands for the subset product maximization of
positive numbers problem, considered as an exercise in Q 6.13 on page 345. (Hint:
SPEp ∈ NP-complete.)

e). Define a decision version of SPMp, denoted simply as SPMpdv .



f). Show SPMpdv ∈ NP-complete. (Hint: SPEp ∈ NP-complete.)

g). Show SPminp ∈ NP-hard, where SPminp stands for the subset product minimization
of positive numbers problem, considered as an exercise in Q 6.14 on page 345. (Hint:
SPEp ∈ NP-complete.)

h). Define a decision version of SPminp, denoted simply as SPminpdv .

i). Show SPminpdv ∈ NP-complete.

j). Show SPMp ≡p SPminp.

k). Show SPMpdv ≡p SPminpdv .

l). Show SSM ≡p SPMp, where SSM stands for the subset sum maximization problem,
considered as an exercise in Q 4.9 on page 202.

m). Show SSmin ≡p SPminp, where SSmin stands for the subset sum minimization prob-
lem, considered as an exercise in Q 4.8 on page 202.

Q 11.11. Recall the variants of the 01-knapsack problem; ZOK and ZOKmin are the 01-knapsack Problem 4.4 defined on page 163 and the 01-knapsack minimization problem, considered as an exercise in Q 4.12 on page 203, respectively.

a). Show ZOKmin ∈ NP-hard.


(Hint: SSmindv ∈ NP-complete and SSmindv ≤p SSmin. SSmin stands for the subset
sum minimization problem, considered as an exercise in Q 4.8 on page 202.
SSmindv stands for the subset sum minimization decision version Problem 11.8 defined
on page 661.)

b). Define a decision version of ZOKmin, denoted simply as ZOKmindv .

c). Show ZOKmindv ∈ NP-complete

d). Show ZOK ≡p ZOKmin

e). Show ZOKdv ≡p ZOKmindv

Q 11.12. There are two kinds of 01-knapsack equality problems; ZOKE and ZOKEmin.
They are 01-knapsack equality maximization and minimization problems considered as ex-
ercises in Q 4.20 on page 206 and Q 4.21 on page 207, respectively.

a). Show ZOKE ∈ NP-hard. (Hint: SSE ∈ NP-complete, where SSE stands for the subset
sum equality Problem 6.3 defined on page 305.)

b). Define a decision version of ZOKE, denoted simply as ZOKEdv .

c). Show ZOKEdv ∈ NP-complete.

d). Show ZOKEmin ∈ NP-hard. (Hint: SSE ∈ NP-complete.)

e). Define a decision version of ZOKEmin, denoted simply as ZOKEmindv .

f). Show ZOKEmindv ∈ NP-complete.



g). Show ZOKE ≡p ZOKEmin.

h). Show ZOKEdv ≡p ZOKEmindv .

Q 11.13. There are two kinds of 01-knapsack with two constraints problems; ZOK2 and ZOKmin2. ZOK2 is the 01-knapsack with two constraints Problem 6.13 defined on page 334. ZOKmin2 is the 01-knapsack minimization with two constraints problem, considered as an exercise in Q 6.24 on page 353.

a). Show ZOK2 ∈ NP-hard.


(Hint: Some previous known NP-hard problems include SSM and ZOK. SSM stands for
the subset sum maximization problem, considered as an exercise in Q 4.9 on page 202.
ZOK stands for the 01-knapsack Problem 4.4 defined on page 163.)

b). Define a decision version of ZOK2, denoted simply as ZOK2dv .

c). Show ZOK2dv ∈ NP-complete.

d). Show ZOKmin2 ∈ NP-hard.


(Hint: Some previous known NP-hard problems include SSmin and ZOKmin. SSmin
stands for the subset sum minimization problem, considered as an exercise in Q 4.8
on page 202. ZOKmin stands for the 01-knapsack minimization problem, considered
as an exercise in Q 4.12 on page 203.)

e). Define a decision version of ZOKmin2, denoted simply as ZOKmin2dv .

f). Show ZOKmin2dv ∈ NP-complete.

g). Show ZOK2 ≡p ZOKmin2.

h). Show ZOK2dv ≡p ZOKmin2dv .

Q 11.14. Consider various unbounded subset sum problems: USSE, USSM, and USSmin.
USSE stands for the unbounded subset sum equality Problem 5.3 defined on page 225.
USSM stands for the unbounded subset sum maximization problem, considered as an exercise in Q 4.16 on page 205.
USSmin stands for the unbounded subset sum minimization Problem 5.5 defined on page 227.

a). Show USSM ∈ NP-hard. (Hint: USSE is in NP-complete.)

b). Define a decision version of USSM, denoted simply as USSMdv .

c). Show USSMdv ∈ NP-complete.

d). Show USSmin ∈ NP-hard. (Hint: USSE is in NP-complete.)

e). Define a decision version of USSmin, denoted simply as USSmindv .

f). Show USSmindv ∈ NP-complete.

g). Show USSM ≡p USSmin.

h). Show USSMdv ≡p USSmindv .



Q 11.15. Consider the unbounded subset product maximization problem, or simply USPM, which is to find a subset such that its product is maximized while being at most m. It is a variation of the unbounded subset product equality problem, or simply USPE, considered as an exercise in Q 4.17 on page 205.

a). Show USPE ∈ NP-complete. (Hint: USSE ∈ NP-complete, where USSE stands for
the subset sum equality Problem 5.3 defined on page 225.)
b). Show USSE ≡p USPE.
c). Formulate the unbounded subset product of positive number maximization problem.
d). Show USPM ∈ NP-hard. (Hint: USPE ∈ NP-complete.)
e). Show USSM ≡p USPM, where USSM stands for unbounded subset sum maximization
problem, considered as an exercise in Q 4.16 on page 205.
f). Define a decision version of USPM, denoted simply as USPMdv .
g). Show USPMdv ∈ NP-complete.
h). Show USSMdv ≡p USPMdv , where USSMdv is the decision version of unbounded
subset sum maximization problem, appeared as an exercise in Q 11.14 on page 689.

Q 11.16. Consider the unbounded subset product minimization problem, or simply USPmin, which is to find a subset such that its product is minimized while being at least m.

a). Formulate the unbounded subset product of positive number minimization problem.
b). Show USPmin ∈ NP-hard. (Hint: USPE ∈ NP-complete, where USPE stands for the
unbounded subset product equality problem, considered as an exercise in Q 4.17 on
page 205.)
c). Show USSmin ≡p USPmin, where USSmin stands for unbounded subset sum mini-
mization Problem 5.5, defined on page 227.
d). Define a decision version of USPmin, denoted simply as USPmindv .
e). Show USPmindv ∈ NP-complete.
f). Show USSmindv ≡p USPmindv , where USSmindv is the decision version of unbounded
subset sum minimization problem, appeared as an exercise in Q 11.14 on page 689.
g). Show USPM ≡p USPmin. where USPM is the unbounded subset product maximiza-
tion problem, appeared as an exercise in Q 11.15 on page 690.
h). Show USPMdv ≡p USPmindv , where USPMdv is the decision version of unbounded
subset product maximization problem, appeared as an exercise in Q 11.15 on page 690.

Q 11.17. There are two kinds of unbounded knapsack problems: UKP and UKPmin. UKP
is the unbounded integer knapsack Problem 4.6 (UKP) defined on page 167 and UKPmin is
the unbounded integer knapsack minimization problem, considered as an exercise in Q 4.15
on page 204.

a). Show UKP ∈ NP-hard.


(Hint: USSM ∈ NP-hard, where USSM stands for unbounded subset sum maximiza-
tion problem, considered as an exercise in Q 4.16 on page 205.)
b). Define a decision version of UKP, denoted simply as UKPdv .
c). Show UKPdv ∈ NP-complete. (Hint: USSMdv ∈ NP-complete, where USSMdv stands
for the decision version of unbounded subset sum maximization problem, considered
as an exercise in Q 11.14 on page 689.)
d). Show UKPmin ∈ NP-hard. (Hint: USSmin ∈ NP-hard, where USSmin stands for the
subset sum minimization Problem 5.5 defined on page 227.)
e). Define a decision version of UKPmin, denoted simply as UKPmindv .
f). Show UKPmindv ∈ NP-complete. (Hint: USSmindv ∈ NP-complete, where USSmindv
stands for the decision version of unbounded subset sum minimization problem, con-
sidered as an exercise in Q 11.14 on page 689.)
g). Show UKP ≡p UKPmin.
h). Show UKPdv ≡p UKPmindv .

Q 11.18. There are two kinds of unbounded knapsack equality problems: UKE and UKEmin.
UKE is the unbounded knapsack equality maximization problem, considered as an exercise
in Q 5.7 on page 272 and UKPmin is the unbounded integer knapsack minimization problem,
considered as an exercise in Q 4.15 on page 204.

a). Show UKE ∈ NP-hard. (Hint: USSE ∈ NP-complete, where USSE stands for the
subset sum equality Problem 5.3 defined on page 225.)
b). Define a decision version of UKE, denoted simply as UKEdv .
c). Show UKEdv ∈ NP-complete. (Hint: USSE ∈ NP-complete.)
d). Show UKEmin ∈ NP-hard. (Hint: USSE ∈ NP-complete.)
e). Define a decision version of UKEmin, denoted simply as UKEmindv .
f). Show UKEmindv ∈ NP-complete. (Hint: USSE ∈ NP-complete.)
g). Show UKE ≡p UKEmin.
h). Show UKEdv ≡p UKEmindv .

Q 11.19. Consider the postage stamp equality minimization (PSEmin) Problem 4.2 defined
on page 159 and the postage stamp equality maximization (PSEmax) problem, considered
as an exercise in Q 4.6 on page 201.
a). Show PSEmin ∈ NP-hard. (Hint: USSE ∈ NP-complete, where USSE stands for the
subset sum equality Problem 5.3 defined on page 225.)
b). Define a decision version of PSEmin, denoted simply as PSEmindv .
c). Show PSEmindv ∈ NP-complete.

d). Show PSEmin ≤p UKEmin, where UKPmin is the unbounded integer knapsack min-
imization problem, considered as an exercise in Q 4.15 on page 204.
e). Show PSEmax ∈ NP-hard.
f). Define a decision version of PSEmax, denoted simply as PSEmaxdv .
g). Show PSEmaxdv ∈ NP-complete.
h). Show PSEmax ≤p UKE, where UKE is the unbounded knapsack equality maximiza-
tion problem, considered as an exercise in Q 5.7 on page 272.
Q 11.20. Consider the weighted set cover problem, or simply wSCV, appeared as an exercise
in Q 4.24 on page 208.

a). Define a decision version of wSCV, denoted simply as wSCVdv .


b). Show wSCVdv ∈ NP-complete. (Hint: SCVdv ∈ NP-complete by Algorithm 11.5 stated
on page 675, where SCVdv stands for the set cover decision version Problem 11.17
defined on page 675.)
c). Show wSCV ∈ NP-hard by SCV ≤p wSCV.
d). Show wSCV ∈ NP-hard by wSCVdv ≤p wSCV.

Q 11.21. Consider the maximum independent set Problem 11.14, or IDS in short, defined
on page 670.
a). Show SCN ≤p IDS, where SCN stands for the satisfiability of CNF problem.
b). Demonstrate (SCN ≤p IDS) to solve

S1 = (p1 ∨ p2 ) ∧ (¬p1 ∨ p2 ∨ p3 ) ∧ (¬p3 ) ∧ (p1 ∨ ¬p2 ∨ ¬p3 )


c). Demonstrate (SCN ≤p IDS) to solve

S2 = (¬p2 ∨ p3 ) ∧ (¬p1 ∨ ¬p2 ) ∧ (¬p3 ) ∧ (p2 ∨ p3 )


d). Show SCN ≤p IDSdv where IDSdv is the decision version of IDS, i.e., the at least k
independent set Problem 11.15 defined on page 671.
Q 11.22. Consider the minimum vertex cover Problem 4.13, or VCP in short, defined on
page 180.
a). Show SCN ≤p VCP, where SCN stands for the satisfiability of CNF problem.
b). Demonstrate (SCN ≤p VCP) to solve

S1 = (p1 ∨ p2 ) ∧ (¬p1 ∨ p2 ∨ p3 ) ∧ (¬p3 ) ∧ (p1 ∨ ¬p2 ∨ ¬p3 )


c). Demonstrate (SCN ≤p VCP) to solve

S2 = (¬p2 ∨ p3 ) ∧ (¬p1 ∨ ¬p2 ) ∧ (¬p3 ) ∧ (p2 ∨ p3 )



d). Show SCN ≤p VCPdv where VCPdv is the decision version of VCP, i.e., at most k
vertex cover Problem 11.16 defined on page 674.
Q 11.23. Consider the traveling salesman maximization problem, or simply TSPx, which
appeared as an exercise in Q 4.29 on page 210.

a). Show TSPx ∈ NP-hard. (Hint: HMP ∈ NP-complete by Theorem 11.20 on page 677,
where HMP stands for the Hamiltonian Path Problem 11.18 defined on page 676.)

b). Define a decision version of TSPx, denoted simply as TSPxdv .


c). Show TSPxdv ∈ NP-complete.
d). Show TSP ≡p TSPx, where TSP is the traveling salesman (minimization) Prob-
lem 4.16, defined on page 190.

e). Show TSPdv ≡p TSPxdv , where TSPdv is the traveling salesman decision version
Problem 11.20, defined on page 679.

[Figure: a reduction graph whose nodes are the problems named below (SAT, CCS, CLQ, IDS, VCP, SCV, wSCV, HMC, HMP, TSP, TSPx, BPP, MPS, STP, and the subset sum, subset partition, knapsack, and postage stamp variants) and whose directed edges are the reductions L1 ∼ L14, G1 ∼ G11, S1 ∼ S8, D1 ∼ D3, and O1 ∼ O37 listed in the legend below.]

Logic problems
L1 SAT ≤p CCS · · · · · Tm 11.4 L2 CCS ≤p SAT · · · · · Tm 11.5 L3 SAT ≤p SC-3 · · · · · Tm 11.6
L4 SC-3 ≤p SAT · · · Eq (11.17) L5 SC-3 ≤p SCN · · · · · Tm 11.7 L6 SCN ≤p SC-3 · · · · · Tm 11.8
L7 SCN ≤p SAT · · · Eq (11.16) L8 CCS ≤p C3S · · · · · · · Ex 11.8 L9 C3S ≤p NAGS · · · Tm 11.10
L10 NAGS ≤p C3S · · · · Ex 11.9 L11 C3S ≤p NOGS · · · · Ex 11.9 L12 NOGS ≤p C3S · · · · Ex 11.9
L13 NAGS ≤p NOGS · Ex 11.9 L14 NOGS ≤p NAGS · Ex 11.9
Graph problems
G1 SCN ≤p CLQ · · · · Tm 11.16 G2 SCN ≤p IDS · · · · · Ex 11.21 G3 SCN ≤p VCP · · · · Ex 11.22
G4 CLQ ≡p VCP · · · Tm 11.19 G5 CLQ ≡p IDS · · · · Tm 11.17 G6 VCP ≡p IDS · · · · Tm 11.18
G7 VCP ≤p HMC · · · · · · · · · [94] G8 HMC ≤p HMP · · Tm 11.20 G9 HMP ≤p TSP · · · Tm 11.21
G10 HMP ≤p TSPx · · Ex 11.23 G11 TSP ≡p TSPx · · · Ex 11.23
Set theory problems
S1 SC-3 ≤p SSE · · · · · Tm 11.11 S2 SSE ≤p USSE · · · Eq (11.23) S3 SSE ≡p SPEp · · · · ·Ex 11.10
S4 USSE ≡p USPE · · Ex 11.15 S5 SSE ≤p STP · · · · · Tm 11.12 S6 STP ≤p SSE · · · · Eq (11.25)
S7 VCP ≤p SCV · · · Eq (11.63) S8 SCV ≤p wSCV · · · Ex 11.20
Scheduling problems
D1 STP ≤p BPP · · · Eq (11.44) D2 BPP ≤p MPS · · Eq (11.46) D3 MPS ≤p BPP · · Eq (11.47)
Optimization problems
O1 SSE ≤p SSM · · · · · · · · · · · · · · · · · · · · Eq (11.27) O2 SSE ≤p SSmin · · · · · · · · · · · · · · · · · · · · Eq (11.30)
O3 SSM ≡p SSmin · · · · · · · · · · · · · · · · · · · Tm 11.13 O4 SPEp ≤p SPMp · · · · · · · · · · · · · · · · · · · · Ex 11.10
O5 SPEp ≤p SPminp · · · · · · · · · · · · · · · · · Ex 11.10 O6 SPMp ≡p SPminp · · · · · · · · · · · · · · · · · · Ex 11.10
O7 SSM ≡p SPMp · · · · · · · · · · · · · · · · · · · · Ex 11.10 O8 SSmin ≡p SPminp · · · · · · · · · · · · · · · · · · Ex 11.10
O9 USSE ≤p USSM · · · · · · · · · · · · · · · · · · · Ex 11.14 O10 USSE ≤p USSmin · · · · · · · · · · · · · · · · · Ex 11.14
O11 USSM ≡p USSmin · · · · · · · · · · · · · · · · Ex 11.14 O12 USPE ≤p USPM · · · · · · · · · · · · · · · · · · Ex 11.15
O13 USPE ≤p USPmin · · · · · · · · · · · · · · · · Ex 11.16 O14 USPM ≡p USPmin · · · · · · · · · · · · · · · · Ex 11.16
O15 USSM ≡p USPM · · · · · · · · · · · · · · · · · Ex 11.15 O16 USSmin ≡p USPmin · · · · · · · · · · · · · · · Ex 11.16
O17 SSE ≤p ZOKE · · · · · · · · · · · · · · · · · · · Ex 11.12 O18 SSE ≤p ZOKEmin · · · · · · · · · · · · · · · · · Ex 11.12
O19 ZOKE ≡p ZOKEmin · · · · · · · · · · · · · Ex 11.12 O20 SSM ≤p ZOK · · · · · · · · · · · · · · · · · · · · · Tm 11.14
O21 SSmin ≤p ZOKmin · · · · · · · · · · · · · · · Ex 11.11 O22 ZOK ≡p ZOKmin · · · · · · · · · · · · · · · · · Ex 11.11
O23 ZOK ≤p ZOK2 · · · · · · · · · · · · · · · · · · · Ex 11.13 O24 SSM ≤p ZOK2 · · · · · · · · · · · · · · · · · · · · Ex 11.13
O25 ZOKmin ≤p ZOKmin2 · · · · · · · · · · · Ex 11.13 O26 SSmin ≤p ZOKmin2 · · · · · · · · · · · · · · · Ex 11.13
O27 ZOK2 ≡p ZOKmin2 · · · · · · · · · · · · · · Ex 11.13 O28 USSM ≤p UKP · · · · · · · · · · · · · · · · · · · · Ex 11.17
O29 USSmin ≤p UKPmin · · · · · · · · · · · · · Ex 11.17 O30 UKP ≡p UKPmin · · · · · · · · · · · · · · · · · Ex 11.17
O31 USSE ≤p UKE · · · · · · · · · · · · · · · · · · · Ex 11.18 O32 USSE ≤p UKEmin · · · · · · · · · · · · · · · · Ex 11.18
O33 UKE ≡p UKEmin · · · · · · · · · · · · · · · · Ex 11.18 O34 USSE ≤p PSEmin · · · · · · · · · · · · · · · · · Ex 11.19
O35 USSE ≤p PSEmax · · · · · · · · · · · · · · · · Ex 11.19 O36 PSEmin ≤p UKEmin · · · · · · · · · · · · · · Ex 11.19
O37 PSEmax ≤p UKE · · · · · · · · · · · · · · · · Ex 11.19

Figure 11.37: Partial reduction graph for NP-complete problems


Chapter 12

Randomized and Approximate


Algorithms

Probability and randomness played a major role in the 1920s in establishing quantum
physics, which presented an alternative to classical physics. In the famous debate between
Albert Einstein and Niels Bohr [114, p 41], Einstein asserted “God does not play dice
with the universe,” to which Bohr responded, “Stop telling God what to do.” Since then,
probability and randomness have impacted many other disciplines, algorithm design being no
exception. A randomized algorithm is an algorithm that utilizes probability and randomness
as part of its design. The first randomized algorithm was developed by Michael O. Rabin
for the closest pair problem in computational geometry in 1976 [159, p 10].
Thus far, we have striven to come up with correct and efficient algorithms in previous
chapters. In this chapter, we shall consider algorithms that are probably correct and al-
gorithms that are probably efficient. Algorithms that are usually correct but sometimes
incorrect are known as Monte Carlo methods while algorithms that are usually fast but
sometimes slow are called Las Vegas methods. Algorithms that are probably correct and
probably efficient are categorized as Atlantic City methods, a term first introduced by J.
Finn in 1982 [125, p 80]. The focus of this chapter is on the former two main categories of
randomized algorithms.

Michael O. Rabin (1931-) is an Israeli computer scientist who has made signifi-
cant contributions to computational complexity theory. He received the Turing Award
for introducing the idea of nondeterministic machines. He is well known for the Ra-
bin cryptosystem, the Miller-Rabin primality test, and the Rabin-Karp string search
algorithm. c Photo Credit: Konrad Jacobs, MFO, licensed under CC BY-SA 2.0 DE.


The term ‘algorithm’ is often used to denote a deterministic method that is proven to
yield a correct output while the term ‘heuristic’ is used to refer to a method that may fail to
produce a correct output or has no proof of correctness. Technically, Monte Carlo methods
fall under heuristics and not algorithms. However, both terms are used interchangeably for
Monte Carlo methods.
Another type of heuristic is the approximate algorithm, which is primarily used to tackle
NP-hard optimization problems. Instead of finding the optimal solution, an approximate
algorithm finds an approximately correct suboptimal solution.
The objectives of this chapter include understanding the following three methods: Las
Vegas methods, Monte Carlo methods, and approximate algorithms. First, one must be
able to design a randomized partition based divide and conquer algorithm and analyze its
average and worst case time complexities. Next, one must be able to devise a Monte Carlo
method and analyze its error rate. Finally, one must be able to devise an approximate
algorithm for NP-hard optimization problems and prove its bound.

12.1 Las Vegas Method


Algorithms that are probably efficient, i.e., usually fast but sometimes slow, are examples
of the Las Vegas method, which is named after the iconic city of gambling in the United
States. Perhaps it was named so because most gamblers lose money quickly but sometimes
some gamblers lose money slowly. The term ‘Las Vegas method’ was first proposed in [16]
by László Babai for the closest pair problem in computational geometry. The underlying
idea has since been used more broadly to design algorithms.
In [16], it was stated that every Las Vegas computation is Monte Carlo but the converse
does not hold true. If a threshold for execution time for such an algorithm is set, most fast
cases will be correctly computed while a few slow cases will be terminated before finding
the solution, thus resulting in incorrect outputs. After all, every Las Vegas method with an
execution time threshold is a Monte Carlo method.
In this section, randomized partition based divide and conquer algorithms, which are
a standard type of Las Vegas method, are presented. Several problems dealing with lists,
such as sorting Problem 2.16, order statistics Problem 2.15, and alternating permutation
Problem 2.19, are taken into consideration to illustrate randomized partition based divide
and conquer algorithms. Random partitioning and riffling problems are also introduced as
they are important subroutines in Las Vegas methods for several problems.

12.1.1 Partitioning
Before delving into Las Vegas algorithms, consider the partitioning problem based on
a certain pivot value which was first formally described in [78]. This problem is of great
interest as it serves as a sub-problem for several forthcoming algorithms based on the Las
Vegas method. Given a list A1∼n and a pivot element ax ∈ A1∼n , the problem finds a

László Babai (1950-) is a Hungarian computer scientist affiliated with the University
of Chicago. Some of his notable accomplishments include interactive proof systems and
group theoretic methods in graph isomorphism testing.
c Photo Credit: Renate Schmid, MFO, licensed under CC BY-SA 2.0 DE.

permutation of the sequence, A′1∼n, such that all elements to the left of a′p, A′1∼p−1, are less
than a′p and all elements to the right of a′p, A′p+1∼n, are greater than or equal to a′p, where
a′p = ax. The output sequence A′1∼n is partially ordered as ∀x ∈ A′1∼p−1 < a′p ≤ ∀y ∈ A′p+1∼n,
but the orders within A′1∼p−1 and A′p+1∼n do not matter.

Problem 12.1. Random partition r partition(A1∼n)

Input: A sequence A1∼n of n quantifiable elements and a pivot element, ax ∈ A
Output: A′, a permutation of A1∼n such that ∀i ∈ {1 ∼ p − 1}, a′i < a′p
and ∀i ∈ {p + 1 ∼ n}, a′i ≥ a′p, where a′p = ax and p ∈ {1, · · · , n}

First, a pivot in the array must be randomly chosen to partition by. Once a pivot
is selected, it is usually temporarily placed at the end of the array. Next, all remaining
elements are compared to the pivot value. If greater, they are placed to the right side of the
array, and if less, they are placed to the left side. Finally, the pivot is placed in the middle
of the two partitions. There are two partitioning algorithms: outside-in and progressive
partitioning algorithms.
The first pivot based partitioning algorithm resembles the outside-in bitwise partition
Algorithm 3.24 which utilized a virtual pivot described on page 129. A pseudo code is stated
as follows:

Algorithm 12.1. Outside-in partitioning

Outside-in partitioning(A1∼n )
if n = 1, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Choose x ∈ {1, · · · , n} randomly . . . . . . . . . . . . . . . . 3
swap(ax , an ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
i = 1 and j = n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
while i ≠ j . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
increment i while ai < an . . . . . . . . . . . . . . . . . . 7
decrement j while aj ≥ an & i < j . . . . . . . . . 8
swap(ai , aj ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
if ai ≥ an , p = i . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
else, p = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
swap(ap , an ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
return p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Figure 12.1 illustrates Algorithm 12.1. First, the pivot element is moved to the end of
the list by swapping it with the last element. Next, i and j are initially placed in the first
and the rightmost position, n − 1, respectively. The index i is incremented while ai < an
(the pivot) and the index j is decremented while aj ≥ an (the pivot). If i < j, swap ai
and aj . Repeat the process while i < j. If i and j meet, move the pivot to the appropriate
position and return the pivot position. Since each element is scanned only once and the
swap operation takes constant time, the computational time complexity of Algorithm 12.1
is clearly Θ(n).
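
For concreteness, a minimal Python sketch of the outside-in scheme follows. It is an illustration under assumed conventions (0-indexed arrays, in-place swaps), not the book's pseudo code verbatim, and the function name is ours.

import random

def outside_in_partition(A, s=0, e=None):
    # Partition A[s..e] in place around a randomly chosen pivot and
    # return the pivot's final index p: A[s..p-1] < A[p] <= A[p+1..e].
    if e is None:
        e = len(A) - 1
    if s >= e:
        return s
    x = random.randint(s, e)            # line 3: random pivot position
    A[x], A[e] = A[e], A[x]             # line 4: park the pivot at the end
    pivot = A[e]
    i, j = s, e - 1                     # line 5
    while i < j:                        # lines 6 - 9
        while i < j and A[i] < pivot:   # advance i past small elements
            i += 1
        while i < j and A[j] >= pivot:  # retreat j past large elements
            j -= 1
        A[i], A[j] = A[j], A[i]         # exchange the out-of-place pair
    p = i if A[i] >= pivot else i + 1   # lines 10 - 11: pivot's final slot
    A[p], A[e] = A[e], A[p]             # line 12
    return p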
The other pivot based partitioning algorithm resembles the progressive bitwise partition
Algorithm 3.25 which utilized a virtual pivot described on page 130. A pseudo code which
is closely related to the one in [42, p 171] is written as follows and illustrated in Figure 12.2.

Input: A, x = 5 17 12 2 80 20 35 1 15 30 10

swap(ax , an ) & 17 12 2 80 10 35 1 15 30 20
initialize i = 1 & j = n − 1 i j ↑
increment i while ai < an & 17 12 2 80 10 35 1 15 30 20
decrement j while aj ≥ an → i j← ↑
swap(ai , aj ) 17 12 2 15 10 35 1 80 30 20
i j ↑
increment i while ai < an & 17 12 2 15 10 35 1 80 30 20
decrement j while aj ≥ an → i j ← ↑
swap(ai , aj ) 17 12 2 15 10 1 35 80 30 20
i j ↑
increment i while ai < an & 17 12 2 15 10 1 35 80 30 20
Stop since i = j →i=j ↑
swap(ai , an ) since ai ≥ an 17 12 2 15 10 1 20 80 30 35
Output A0 & p = 7 ↑

Figure 12.1: Outside-in partitioning process where the random x = 5.

Input: A, x 17 12 2 80 20 35 1 15 30 10

swap(ax , an ) & i = the first ak 17 12 2 80 20 35 1 15 30 10
from left such that ak ≥ an & j = i + 1 i j ↑
increment j while aj ≥ an 17 12 2 80 20 35 1 15 30 10
i →j ↑
swap(ai , aj ) & 2 12 17 80 20 35 1 15 30 10
increment i by 1 →i j ↑
increment j while aj ≥ an 2 12 17 80 20 35 1 15 30 10
i →j ↑
swap(ai , aj ) & 2 1 17 80 20 35 12 15 30 10
increment i by 1 →i j ↑
increment j while aj ≥ an & 2 1 17 80 20 35 12 15 30 10
Stop since j = n i → j =↑
swap(ai , an ) & 2 1 10 80 20 35 12 15 30 17
Output A0 & x = 3 ↑

Figure 12.2: Progressive partitioning process where the random x = 10.



Algorithm 12.2. Progressive partitioning

Progressive partitioning(A1∼n )
if n = 1, return 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Choose x ∈ {1, · · · , n} randomly . . . . . . . . . . . . . . . . 3
swap(ax , an ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
i = 1 ........................................... 5
while ai < an , i = i + 1 . . . . . . . . . . . . . . . . . . . . . . 6
if i = n, return n . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
for j = i + 1 ∼ n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . 9
if aj < an . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
swap(ai , aj ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
swap(ai , an ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
return i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

Once a pivot is randomly selected, it is moved to the end of the array by lines 3 and
4 of Algorithm 12.2. Next, ai , the first element from the left such that ai ≥ an , is found
by incrementing i starting from i = 1. If i reaches the end of the array, all elements in
A1∼n−1 are less than the pivot, an , and thus the program returns n. Otherwise, let j, the
right partition index, be i + 1. Find the position of j by incrementing it such that aj < an .
Once such an element is found, swap it with ai , increment i by one, and repeat the process.
Since each element is scanned only once and the swap operation takes constant time, the
computational time complexity of Algorithm 12.2 is clearly Θ(n).
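
A short Python sketch of the progressive scheme follows. It folds the initial scan of lines 5 ∼ 7 into the main loop (the classic Lomuto formulation), which keeps the behavior and the Θ(n) bound while shortening the code; names and 0-indexing are our assumptions.

import random

def progressive_partition(A, s=0, e=None):
    # Progressive (Lomuto-style) partition of A[s..e] in place around a
    # random pivot; returns the pivot's final index.
    if e is None:
        e = len(A) - 1
    x = random.randint(s, e)
    A[x], A[e] = A[e], A[x]        # move the pivot to the end
    pivot = A[e]
    i = s                          # invariant: A[s..i-1] < pivot
    for j in range(s, e):
        if A[j] < pivot:           # grow the "less than pivot" prefix
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[e] = A[e], A[i]        # drop the pivot between the partitions
    return i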

12.1.2 Quicksort
A canonical example of the Las Vegas method is an algorithm for the sorting Prob-
lem 2.16 defined on page 60. Quicksort, also known as partition-exchange sort, is a practical
and widely used sorting algorithm. It is an uneven divide and conquer algorithm or random
partition based algorithm.
Once an input list is partitioned by either Algorithm 12.1 or 12.2 in linear time, the
resulting sequence is partially ordered so that ∀x ∈ A01∼p−1 < a0p ≤ ∀y ∈ A0p+1∼n but
the left and right partitions, A01∼p−1 and A0p+1∼n , are not sorted and need to be sorted
recursively.
A simple pseudo code is given below. Let A1∼n be global and let Algorithm 12.3 be
called initially with quick sort(1, n).

Charles Antony Richard Hoare (1934-), also known as Tony Hoare or C. A.
R. Hoare, is a British computer scientist. Some of his notable contributions include
the quicksort algorithm, Hoare logic for verifying program correctness, and the formal
language Communicating Sequential Processes (CSP). He received the Turing Award in
1980. © Photo Credit: Rama, Wikimedia Commons, licensed under CC BY 2.0.

Algorithm 12.3. Quicksort

quick sort(s, e)
if s < e, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
x = partition(As∼e ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
quick sort(s, x − 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
quick sort(x + 1, e) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

[Figure: partition trees of quicksort on the sample list ⟨17, 12, 2, 80, 20, 35, 1, 15, 30, 10⟩, showing (a) the best case and (b) another random case, and step-by-step traces of three quadratic worst cases (c) worst case I, (d) worst case II, and (e) worst case III, in which every partition leaves one side empty.]

Figure 12.3: Quicksort Algorithm 12.3 illustration

The illustration of the quicksort Algorithm 12.3 in Figure 12.3 shows partition trees
similar to divide trees in Chapter 3. Partitioned trees as illustrated in Figure 12.3 are not
systematically balanced trees but the height of a randomly partitioned tree is O(log n) on
average. The computational running time of Algorithm 12.3 depends on two sub-problems
and partitioning; T(n) = T(k) + T(n − k − 1) + Θ(n). The best case is when the list is
always divided into roughly half and half, i.e., k = ⌊(n − 1)/2⌋, which yields Θ(n log n)
according to Master Theorem 3.9. It takes quadratic time in worst cases, as exemplified in
Figures 12.3 (c) ∼ (e), which occur when one of the partitions is always empty (k = 0) and
all elements belong to one partition throughout all recursive processes; T(n) = T(n − 1) +
Θ(n) = Θ(n²). The average case computational time complexity, Θ(n log n), can be shown
by a strong induction.
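
A runnable Python rendering of Algorithm 12.3 is sketched below, assuming one of the partition sketches above (here progressive_partition) is in scope.

def quick_sort(A, s=0, e=None):
    # Randomized quicksort of A[s..e] in place (a sketch of Algorithm 12.3).
    if e is None:
        e = len(A) - 1
    if s < e:                                # at least two elements remain
        p = progressive_partition(A, s, e)   # either partition sketch works
        quick_sort(A, s, p - 1)              # sort the left partition
        quick_sort(A, p + 1, e)              # sort the right partition

# A = [17, 12, 2, 80, 20, 35, 1, 15, 30, 10]
# quick_sort(A)   # A becomes [1, 2, 10, 12, 15, 17, 20, 30, 35, 80]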

Theorem 12.1. The expected running time of Algorithm 12.3 is Θ(n log n).

Proof. The expected running time of Algorithm 12.3 satisfies the following complete recurrence
relation:

T(n) = ( Σ_{i=0}^{n−1} (T(i) + T(n − i − 1)) ) / n + Θ(n) = ( 2 Σ_{i=0}^{n−1} T(i) ) / n + Θ(n)    (12.1)

(Proof by strong induction) Suppose T(i) = Θ(i log i) for all 1 ≤ i ≤ k. Show T(k + 1) =
Θ((k + 1) log (k + 1)).

T(k + 1) = ( 2 Σ_{i=0}^{k} T(i) ) / (k + 1) + Θ(k + 1)        by eqn (12.1)
         = ( 2 Σ_{i=0}^{k} Θ(i log i) ) / (k + 1) + Θ(k + 1)  by strong assumption
         = 2Θ(k² log k) / (k + 1) + Θ(k)                      by summation rule
         = Θ((k + 1)² log k) / (k + 1) + Θ(k)                 by asymptotic rule
         = Θ((k + 1) log k)                                   by asymptotic rule
         = Θ((k + 1) log (k + 1))                             ∵ Θ(log k) = Θ(log (k + 1)) □

Although the original quicksort algorithm developed by Tony Hoare in 1959 in [79] is not
random, a randomized quicksort is widely used and is usually fast if the list is partitioned
randomly. When implemented properly and randomly, quicksort can run about 2-3 times
faster than other linearithmic algorithms such as merge sort Algorithm 3.5 and heapsort
Algorithm 9.12 [158, p.129].

12.1.3 Quickselect
Another problem which can be solved by partitioning an array into two sub-arrays us-
ing a random pivot is the kth order statistics Problem 2.15 defined on page 59. Hoare,
the inventor of the quicksort algorithm, also discovered a comparable selection algorithm
known as quickselect [79]. This selection algorithm recursively partitions an array into two
sub-arrays using a random pivot. Unlike quicksort which invokes two sub-problems, the
quickselect algorithm invokes only one sub-problem.
To find the kth smallest element in an array, the array can be partitioned into two
partitions A1∼p−1 and Ap+1∼n . If p = k, ap is the solution. If p > k, the kth smallest
element must be in A1∼p−1 , not in Ap+1∼n . If p < k, vice versa. Hence, only one respective
partition is explored instead of both. A simple pseudo code is given below. Let A1∼n be
global and let Algorithm 12.4 be called initially with quick select(1, n, k).

Algorithm 12.4. Quickselect



[Figure: random partition trees of quickselect on the sample list ⟨17, 12, 2, 80, 20, 35, 1, 15, 30, 10⟩: (a) a random case and (b) another random case, plus traces of three quadratic worst cases: (c) worst case I (k = 6, k partition calls), (d) worst case II (k = 3, (n − k + 1) partition calls), and (e) worst case III (k = 3, 2k partition calls).]

Figure 12.4: Quickselect Algorithm 12.4 illustration

Let A1∼n be global.


quick select(s, e, k)
x = partition(As∼e ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
if x = k, return ax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if x > k, quick select(s, x − 1, k) . . . . . . . . . . . . . . . . 3
else (if x < k), quick select(x + 1, e, k) . . . . . . . . . . 4

A couple of random partition trees illustrating the quickselect Algorithm 12.4 are given
in Figures 12.4 (a) and (b). The computational running time of Algorithm 12.4 depends
on only one sub-problem and partitioning; T(n) = T(n′) + Θ(n). The best case is when
the pivot element is the kth smallest number, i.e., ak = ap after partitioning, and thus the
best case computational time complexity is Θ(n). It takes quadratic time in worst cases
as exemplified in Figures 12.4 (c) ∼ (e). If the pivot element is always the minimum,
the partition procedure must be invoked k times and thus takes Θ(kn) time, as
illustrated in Figure 12.4 (c). If the pivot element is always the maximum, the partition
procedure must be invoked (n − k + 1) times, as illustrated in Figure 12.4 (d).
Another worst case scenario is given in Figure 12.4 (e). Hence, the worst case computational
time complexity of the quickselect Algorithm 12.4 is Θ(n²).
The average case computational time complexity, Θ(n), can be shown by a strong induction.
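
The following Python sketch of Algorithm 12.4 flattens the single recursive call into a loop; it assumes the partition sketches above and a 1-indexed k, as in the problem statement.

def quick_select(A, k):
    # Return the kth smallest element of A (1-indexed k), per Algorithm 12.4,
    # with the single tail-recursive call flattened into a loop.
    s, e = 0, len(A) - 1
    while True:
        p = progressive_partition(A, s, e)   # any linear-time partition
        if p == k - 1:                       # pivot landed on the kth slot
            return A[p]
        if p > k - 1:
            e = p - 1                        # the answer is in the left part
        else:
            s = p + 1                        # the answer is in the right part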

Theorem 12.2. The expected running time of Algorithm 12.4 is Θ(n).


Proof. The expected running time of Algorithm 12.4 satisfies the following complete recurrence
relation:

T(n) = ( Σ_{i=0}^{n−1} T(i) ) / n + Θ(n)    (12.2)

(Proof by strong induction) Suppose T(i) = Θ(i) for all 1 ≤ i ≤ k. Show T(k + 1) = Θ(k + 1).

T(k + 1) = ( Σ_{i=0}^{k} T(i) ) / (k + 1) + Θ(k + 1)   by eqn (12.2)
         = ( Σ_{i=0}^{k} Θ(i) ) / (k + 1) + Θ(k + 1)   by strong assumption
         = Θ(k²) / (k + 1) + Θ(k + 1)                  by summation rule
         = Θ((k + 1)²) / (k + 1) + Θ(k + 1)            by asymptotic rule
         = Θ(k + 1) □

12.1.4 Random Permutation by Riffling

(a) Riffling (b) Perfect riffle (n = 8) (c) P- riffle (n = 4)

Figure 12.5: Riffle method for shuffling cards and perfect riffles.

Consider the random permutation (shuffling) Problem 2.21 defined on page 67. One
of the most common shuffling methods in card games is the riffle method, also known as
dovetail shuffle or leafing the cards, as shown in Figure 12.5 (a).
An algorithm that splits a list into two halves and then interweaves them perfectly is
a perfect riffle, also known as Faro shuffle, which is the key to many magic tricks [48]. As
illustrated in Figure 12.5 (b), the sequence where n = 8 becomes identical to the initial
sequence when the perfect riffle is applied three times. When n is an exact power of two,

log n number of perfect riffles always returns the original sequence. If the original sequence
is A = h1, 2, · · · , 8i, as shown in Figure 12.5 (b), interweaving it yields an up-down sequence.
Interweaving twice provides a sequence whose first half is odd numbers and second half is
even numbers. The perfect riffle problem is formally defined as follows:
Problem 12.2. Perfect riffle (interweave)
Input: A sequence, A1∼n
Output: a permutation A′ of A such that a′2i−1 = ai if 1 ≤ i ≤ ⌈n/2⌉
and a′2i = a⌈n/2⌉+i if 1 ≤ i ≤ ⌊n/2⌋
A pseudo code for the perfect riffle or interweaving is stated as follows:
Algorithm 12.5. Perfect riffle (interweave)

interweave(A1∼n)
for i = 1 ∼ ⌊n/2⌋ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
a′2i−1 = ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
a′2i = a⌈n/2⌉+i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if n is odd, a′n = a⌈n/2⌉ . . . . . . . . . . . . . . . . . . . . . . . . 4
return A′1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
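
In Python, the perfect riffle can be written with slice assignments. The sketch below assumes the ⌈n/2⌉ left-half convention used in Problem 12.2 and Algorithm 12.7; the function name mirrors the pseudo code.

def interweave(A):
    # Perfect riffle (Faro shuffle): split at h = ceil(n/2) and interleave,
    # starting from the left half (a sketch of Algorithm 12.5).
    n = len(A)
    h = (n + 1) // 2
    out = [None] * n
    out[0::2] = A[:h]     # left half goes to the odd positions (1-indexed)
    out[1::2] = A[h:]     # right half goes to the even positions
    return out

# interweave([1, 2, 3, 4, 5, 6, 7, 8]) == [1, 5, 2, 6, 3, 7, 4, 8]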

To acquire the randomness of shuffling, random factors can be applied to the perfect
riffle algorithm. Consider a randomized riffle Algorithm 12.6 that interweaves elements by
random gaps. For each iteration, a card is selected from either the left or right half deck
randomly and thus random gaps are created.
Algorithm 12.6. Randomized riffle

Randomized riffle(A1∼n , rh)

h = ⌊n/2⌋ (or h = random(1 ∼ n)) . . . . . . . . . . . . . 1
i = 1, l = 1, r = h + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
while i ≤ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
if l = h + 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
A′i∼n = Ar∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
i = n + 1 (i.e., break) . . . . . . . . . . . . . . . . . . . . . . . 6
else if r = n + 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
A′i∼n = Al∼h . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
i = n + 1 (i.e., break) . . . . . . . . . . . . . . . . . . . . . . . 9
else if random(0, 1) = 0, . . . . . . . . . . . . . . . . . . . . . . . 10
a′i = al . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
l = l + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
else, (i.e., random(0, 1) = 1), . . . . . . . . . . . . . . . . . . 14
a′i = ar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
i = i + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
r = r + 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
return A′1∼n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

If the random number generator selects 0, a card from the left half is selected in lines
10 ∼ 13 of Algorithm 12.6. Otherwise, a card from the right half is selected in lines 14 ∼ 17.

[Figure: (a) a random riffling tree for the sequence ⟨a, b, c, d⟩ cut into halves ⟨a, b⟩ and ⟨c, d⟩; each branch picks the next card from the left (0) or right (1) half. (b) Of the 4! = 24 permutations, only six are reachable: ⟨a, b, c, d⟩ by R = ⟨0, 0⟩, ⟨a, c, b, d⟩ by ⟨0, 1, 0⟩, ⟨a, c, d, b⟩ by ⟨0, 1, 1⟩, ⟨c, a, b, d⟩ by ⟨1, 0, 0⟩, ⟨c, a, d, b⟩ by ⟨1, 0, 1⟩, and ⟨c, d, a, b⟩ by ⟨1, 1⟩; the remaining eighteen are impossible under a riffle with perfect halving.]

Figure 12.6: Riffle method for shuffling cards.

If one of the half decks runs out, the other half deck fills up the rest of the list and the
program terminates as indicated in lines 4 ∼ 9. The computational time complexity of
Algorithm 12.6 is clearly Θ(n).
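
A compact Python sketch of the randomized riffle follows; it mirrors lines 3 ∼ 17 of Algorithm 12.6 with a fair coin and copies the leftover half at the end. The function name is ours, and the cut is fixed at ⌊n/2⌋.

import random

def randomized_riffle(A):
    # One random riffle: cut the deck in half, then repeatedly drop a card
    # from either half with probability 1/2 (lines 3 - 17 of Algorithm 12.6).
    n = len(A)
    h = n // 2
    left, right = A[:h], A[h:]
    out, l, r = [], 0, 0
    while l < len(left) and r < len(right):
        if random.random() < 0.5:            # take from the left half
            out.append(left[l]); l += 1
        else:                                # take from the right half
            out.append(right[r]); r += 1
    out.extend(left[l:])                     # one half ran out:
    out.extend(right[r:])                    # copy the leftover half
    return out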
If the random number generator returns 0 and 1 alternately, Algorithm 12.6 would
imitate the perfect riffle. If the random number generator always returns 0's, Algorithm 12.6
would return the original list. While the Knuth shuffle Algorithm 2.26 described on page 67
generates all possible permutations, the randomized riffle Algorithm 12.6 fails to do so, as
illustrated in Figure 12.6. Only a subset of permutations can be generated and the others are
impossible to generate if the input sequence is divided perfectly into halves, as illustrated in
Figure 12.6 (b). To be precise, only 2^(n−1) − 2 permutations out of all n! existing permutations
can be generated by the randomized riffle Algorithm 12.6.
One may invoke the randomized riffle Algorithm 12.6 several times to ensure that it out-
puts a random permutation. If Algorithm 12.6 is called O(log n) times, the computational
time complexity is comparable with Algorithm 10.6, which utilizes reduction to the sorting
problem as presented on page 563. As a matter of fact, it is recommended that a deck of
52 cards be riffled seven times in order to randomize it thoroughly [105]. The Gilbert-
Shannon-Reeds model provides a mathematical model of the random outcomes of riffling [47].

12.1.5 Random Alternating Permutation


Consider the alternating permutation (up-down) Problem 2.19 defined on page 65. Par-
titioning algorithms 12.1 and 12.2 split an input list into two partitions, L and H, such
that all elements in the left partition are less than the pivot value and all elements in the

[Figure: Algorithm 12.7 on sample lists. (a) Odd length case: the median of ⟨17, 12, 2, 80, 20, 35, 1, 15, 30⟩ is 17; partitioning by it and interweaving the two partitions yields ⟨15, 35, 12, 80, 2, 30, 1, 20, 17⟩. (b) Even length case: the median 16 of ⟨17, 12, 2, 80, 20, 35, 1, 15⟩ is appended, the list is partitioned by it, and interweaving the two partitions (excluding the pivot) yields ⟨15, 35, 12, 80, 2, 17, 1, 20⟩.]

Figure 12.7: (UDP ≤p MDN) Algorithm 12.7 illustration

right partition are greater than the pivot value. If the input sequence is partitioned into
two (almost) equal sized partitions, i.e., |L| = |H| or |L| = |H| + 1, one can simply interweave
the two partitions starting from the left partition. This results in an up-down sequence.
If one finds the median, which is the middle value of the input sequence defined in
Problem 10.8 on page 614, the up-down Problem 2.19 is solved. Clearly, UDP ≤p MDN. A
pseudo code for the (UDP ≤p MDN) reduction algorithm is stated as follows:
Algorithm 12.7. updown by median partition

updown by median partition(A1∼n)

m = findmedian(A1∼n) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
if n is odd, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
p = searchp(A1∼n, m) . . . . . . . . . . . . . . . . . . . . . . . . . . 3
partition(A1∼n) by ap . . . . . . . . . . . . . . . . . . . . . . . . . . 4
O1∼n = interweave(A1∼⌈n/2⌉ , A⌈n/2⌉+1∼n) . . . . . . . 5
else, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
A1∼n+1 = append(A1∼n, m) . . . . . . . . . . . . . . . . . . . . 7
partition(A1∼n+1) by an+1 . . . . . . . . . . . . . . . . . . . . . . 8
O1∼n = interweave(A1∼n/2 , An/2+2∼n+1) . . . . . . . . . 9
return (O1∼n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

If the input sequence size |A1∼n | = n is odd, the median, m, is part of A1∼n . Hence,
Algorithm 12.7 searches the position of m and partitions the input sequence in lines 3 and
4, respectively. The odd length input sequence case is illustrated in Figure 12.7 (a). If the
input sequence size |A1∼n | = n is even, the median, m, is not in A1∼n . m is appended
to A and A1∼n+1 is partitioned by (an+1 = m). Line 9 interweaves the two partitions
excluding the pivot, which is the median. The even length input sequence case is illustrated
in Figure 12.7 (b).
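
A small Python sketch of the (UDP ≤p MDN) reduction is shown below, assuming distinct elements. Here statistics.median stands in for findmedian (for even n it returns the mean of the two middle values, which plays the role of the appended pivot), and the interweave sketch above is reused.

import statistics

def updown_by_median(A):
    # Sketch of Algorithm 12.7 (UDP <=p MDN) for distinct elements.
    # Every element of the left block is smaller than every element of the
    # right block, so interweaving them yields an up-down sequence.
    m = statistics.median(A)              # stands in for findmedian
    left = [a for a in A if a <= m]       # low partition, pivot included
    right = [a for a in A if a > m]       # high partition
    return interweave(left + right)       # reuse the perfect riffle sketch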
Similar to Quicksort Algorithm 12.3 and Quickselect Algorithm 12.4, a Las Vegas method
can be applied to the up-down Problem 2.19 by using a random partitioning. When a
sequence is partitioned into two parts, L and H, by a random pivot value, sizes may vary.
Four lucky cases are given in Figure 12.8 (a) ∼ (d). When n is even and the size of L
is shorter than the size of H by one, the input sequence can be divided into two parts of
equal size, as depicted in Figure 12.8 (a). When n is odd and |L| = |H|, the input sequence

[Figure: quickupdown on sample lists. Base cases, interweaved directly: (a) n is even and |L| = |H| − 1, (b) n is odd and |L| = |H|, (c) n is even and |L| = |H| + 1, (d) n is odd and |L| = |H| + 2. Recursive cases: (e) uneven (|L| > |H| + 2) partition, where the pivot joins H, the partitions are interweaved |H| + 1 times, and the leftover of L is solved recursively; (f) uneven (|L| < |H| − 1) partition, where L and part of H are interweaved |L| times and the rest of H, headed by the pivot, is solved recursively.]

Figure 12.8: Quickupdown Algorithm 12.8 illustration

can be divided into two parts, such that the left partition includes the pivot element, as
depicted in Figure 12.8 (b). When interweaved, the resulting up-down sequence ends with
a downward turn. When n is even and |L| is greater than |H| by one, the input sequence
can be divided into two parts of equal size, as depicted in Figure 12.8 (c). When n is odd
and |L| is greater than |H| by two, the input sequence can be divided into two parts, such

that the right partition includes the pivot element, as depicted in Figure 12.8 (d). These
four best cases form the base cases for the Quickupdown algorithm.
When the sizes of the two partitions differ significantly, the two partitions are interweaved
until the smaller sized partition runs out. Only the remaining part of the bigger sized parti-
tion is solved recursively. This randomized algorithm is called the Quickupdown algorithm.
There are two cases where recursive calls must be made. First, when |L| > |H|+2, the pivot
is treated as a part of the H partition and interweaved |H| + 1 number of times, as depicted
in Figure 12.8 (e). The remaining elements in the L partition are recursively solved. Second,
when |L| < |H| − 1, L and part of H are interweaved |L| number of times, as depicted in
Figure 12.8 (f). The pivot is passed to the recursive sub-problem so that the sub-problem’s
solution starts from the pivot. A flag is necessary to indicate whether the remaining part
of H is passed or not. A pseudo code is stated as follows:
Algorithm 12.8. quick updown
Let A1∼n and O1∼n be global.
quick updown(b, e, c, f )
if f = F, p = partition(b, e) . . . . . . . . . . . . . . . . . . . . . . . . . . .1
else, p = partition(b + 1, e) . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
if b + e − 2p = 0 or 1, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Oc∼n = interweave(Ab∼p , Ap+1∼e ) . . . . . . . . . . . . . . . . . . . . . 4
else if b + e − 2p = −1 or −2, . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Oc∼n = interweave(Ab∼p−1 , Ap∼e ) . . . . . . . . . . . . . . . . . . . . . 6
else if b + e − 2p < −2, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Oc∼c+2e−2p+1 = interweave(Ab∼b+e−p , Ap∼e ) . . . . . . . . . . 8
quick updown(b + e − p + 1, p − 1, c + 2e − 2p + 2, F) . .9
else (if b + e − 2p > 1), . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Oc∼c+2p−2b−1 = interweave(Ab∼p−1 , Ap+1∼2p−b ) . . . . . . 11
swap(ap , a2p−b+1 ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
quick updown(2p − b + 1, e, c + 2p − 2b, T) . . . . . . . . . . . 13
Lines 3 and 4 of Algorithm 12.8 are for the cases where |L| = |H| − 1 and |L| = |H|,
as depicted in Figures 12.8 (a) and (b), respectively. Lines 5 and 6 are for the cases where
|L| = |H| + 1 and |L| = |H| + 2, as depicted in Figures 12.8 (c) and (d), respectively. Lines 7
∼ 9 are for the cases where |L| > |H| + 2, as depicted in Figure 12.8 (e). Lines 10 ∼ 13 are
for the case where |L| < |H| − 1, as depicted in Figure 12.8 (f).
The best case computational time complexity of Algorithm 12.8 is Θ(n), as depicted in
Figure 12.8 (a) ∼ (d). The worst case computational time complexity is O(n²) by the same
reasoning as for Quicksort Algorithm 12.3, as depicted in Figure 12.3 (c) ∼ (e). The
expected running time of Algorithm 12.8 has the following complete recurrence relation:

T(n) = ( Σ_{i=0}^{n−1} T(|i − (n − i − 1)|) ) / n + Θ(n) = ( Σ_{i=0}^{n−1} T(i) ) / n + Θ(n)    (12.3)

Hence, it is Θ(n) by Theorem 12.2.
Given the sample input sequence in Figure 12.9 (a), seventeen different up-down outputs
produced by various algorithms described in this book are given in Figure 12.9 (b) ∼ (r).
While a unique output sequence is predetermined by the algorithms presented in previous
chapters, the randomized Algorithm 12.8 produces a random valid up-down sequence every
time it is executed.

(a) sample input (b) inductive prog. & D&C (c) greedy algo & reduction

(d) minheap I (e) minheap II (f) minheap III

(g) maxheap I (h) maxheap II (i) maxheap III

(j) reduction ≤p sort (k) reduction ≤p sort (l) reduction ≤p sort

(m) reduction ≤p sort (n) reduction ≤p sort (o) reduction ≤p DUP

(p) reduction ≤p MDN (q) randomized algorithm (r) randomized algorithm

Figure 12.9: Up-down sequences by various algorithms

12.2 Monte Carlo Method


Probably correct algorithms, i.e., algorithms that are usually correct but sometimes in-
correct, are known as Monte Carlo methods. This term was coined in reference to the famous
Monte Carlo casino in Monaco by John von Neumann and Stanislaw Ulam in the 1940s [53]. In
this section, Monte Carlo methods are illustrated with the top m percent selection and primality
testing problems.

12.2.1 Top m Percent


Consider the problem of selecting an element in the top m percent, or TMP for short.
This problem can also be stated as finding a relatively small or large number.

Stanislaw Marcin Ulam (1909-1984) was a Polish-American mathematician. His
major contributions include cellular automata, the Monte Carlo method of computa-
tion, and nuclear pulse propulsion. He also participated in America's Manhattan Project.
© Photograph is in the public domain.

Problem 12.3. top m percent(A1∼n, m) (within m%)

Input: A sequence A of n quantifiable elements and 0 ≤ m ≤ 100
Output: ax ∈ A′1∼⌊np⌋, where A′ is the sorted list of A and p = m/100

This problem can be trivially solved by reduction based algorithms.

TMP(A1∼n, m) = ax ∈ A′1∼⌊np⌋ where A′ = sort(A) ⇔ TMP ≤p Sort    (12.4)

TMP(A1∼n, m) = find min(A1∼⌊n(1−m/100)⌋+1) ⇔ TMP ≤p MIN    (12.5)

First, if the input is sorted, then picking any element between a′1 and a′⌊np⌋ in the sorted list
A′ guarantees finding a correct output. This algorithm stated in eqn (12.4), based on the
reduction TMP ≤p Sort, clearly takes O(n log n) time. Another possible reduction (TMP
≤p MIN) based algorithm, which simply returns the minimum, takes Θ(n) time. To make
the algorithm faster, only the first ⌊n(1 − p)⌋ + 1 elements in the list need to be examined,
as shown in eqn (12.5). In the worst case, all higher elements may be placed between index
1 and ⌊n(1 − p)⌋ of the list. Examining one more element guarantees finding
a correct output. Hence, the algorithm in eqn (12.5) takes Θ(n) time.
If m = 50% and fewer than ⌊n/2⌋ + 1 elements are examined to find the minimum
in the subset, the output may be incorrect. Suppose only a constant number k of randomly
selected elements are considered, as stated in eqn (12.6). Let A″1∼k be k randomly selected elements
from the list A1∼n, where A″1∼n is a random permutation of A1∼n.

TMP(A1∼n, m) ≈ find min(A″1∼k) where 1 ≤ k ≤ n    (12.6)

When only one element is randomly chosen, the probability of error is 1/2. If (k = 2)
elements are selected, the error reduces to 1/4. Figure 12.10 (a) illustrates how the error
rate is computed for k trials. The table in Figure 12.10 (b) shows how the probability of
error reduces as the number of trials, k, increases. Suppose Θ(n) algorithms take too long
because n is very large. The randomized algorithm in eqn (12.6) only takes Θ(k) time but
does not guarantee that it is always correct. But when k = 24, the error rate is lower than
the probability of hitting the jackpot in a lottery, and thus the output is usually correct.
The probability of error measures how often a randomized algorithm is incorrect.
The error probability of the randomized algorithm in eqn (12.6) with m% and k randomly
selected elements is stated as follows:

error(TMP(A1∼n, m), k) = (1 − m/100)^k    (12.7)
Figure 12.10 (c) provides a table of error probabilities with various values of m and
k. The smaller m is, the larger the sample size needs to be in order to reduce the error
probability.
Let err be a threshold value for acceptable error probability. The minimum number of
samples needed to attain this error probability can be derived from eqn (12.7) as
follows:

k ≥ log err / log (1 − m/100)    (12.8)
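
The two formulas combine into a few lines of Python. The sketch below samples with replacement, which is exactly the independence assumption behind eqn (12.7), and derives k from eqn (12.8); the function name and the err default are ours.

import math
import random

def top_m_percent(A, m, err=1e-6):
    # Monte Carlo guess of an element within the smallest m percent of A:
    # return the minimum of k random samples, with k chosen via eqn (12.8)
    # so that the error probability (1 - m/100)^k is at most err.
    assert 0 < m < 100
    k = math.ceil(math.log(err) / math.log(1 - m / 100))
    k = min(k, len(A))                   # cap the sample size at n
    return min(random.choice(A) for _ in range(k))

# top_m_percent(list(range(1000000)), 5)  # within the smallest 5% w.h.p.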

[Figure panel (a): a probability tree for repeated trials with p = 1/2; after k trials the probability of error is 1/2^k.]

(b) Probability of error with respect to k trials where p = 1/2:
k = 1: 1/2; k = 2: 1/4; k = 3: 1/8;
k = 10: 1/1024 < 1/thousand; k = 20: 1/1048576 < 1/million;
k = 30: 1/1073741824 < 1/billion; k = 40: 1/1099511627776 < 1/trillion;
k = 50: 1/1125899906842624 < 1/quadrillion;
k = 100: < 1/10^30 < 1/nonillion; k = 333: < 1/10^100 < 1/googol.
k m = 25% m = 10% m = 5% m = 3% m = 1%
1 0.75 0.9 0.95 0.97 0.99
100 3.207 × 10−13 2.656 × 10−05 0.0059 0.0476 0.3660
200 1.029 × 10−25 7.055 × 10−10 3.505 × 10−05 0.0023 0.1340
300 3.299 × 10−38 1.874 × 10−14 2.075 × 10−07 1.075 × 10−04 0.0490
400 1.058 × 10−50 4.977 × 10−19 1.229 × 10−09 5.113 × 10−06 0.0180
500 3.393 × 10−63 1.322 × 10−23 7.275 × 10−12 2.432 × 10−07 0.0066
1000 1.152 × 10−125 1.631 × 10−229 5.292 × 10−23 5.912 × 10−14 4.317 × 10−05
(c) Error probabilities of k sample size for various top m% problems.

Figure 12.10: Monte Carlo method illustration for the top m percent problem

12.2.2 Primality Test


Consider the problem of checking primality of a number n, or simply CPN, which was
presented as an exercise in Q 1.16 on page 30. Algorithm 1.19 stated on page 30 would take
exponential time, Θ(10^(d/2)), if the input number n is a d-digit long integer. Although a
deterministic polynomial time algorithm for primality testing is known, it is not practical.
Hence, a randomized algorithm based on the Monte Carlo method is suitable for this problem.
Fermat’s little theorem, Theorem 12.3, is the essence of a randomized (Monte Carlo)
algorithm for CPN. The notation a ≡ b(mod p) reads “a is congruent to b mod p,” meaning
that the remainders of a/p and b/p are the same, i.e. a % p = b % p. Hence, Theorem 12.3
can be written in terms of modulo function.

Theorem 12.3 (Fermat’s little theorem). If p is prime and a is an integer not divisible by
p, i.e. gcd(p, a) = 1, then

ap ≡ a(mod p) (12.9)
p−1
a ≡ 1(mod p)
p−1
a %p = 1 (12.10)

Proof. (by induction) Base case (a = 1): 1^p % p = 1 % p = 1.

Suppose a^p ≡ a (mod p) is true. Show (a + 1)^p ≡ (a + 1) (mod p) is also true. By the binomial
coefficient expansion,

(a + 1)^p = C(p,0)a^p + C(p,1)a^(p−1) + C(p,2)a^(p−2) + · · · + C(p,p−2)a² + C(p,p−1)a + C(p,p)

Since p | C(p,k) for all k ∈ {1 ∼ p − 1}, i.e., for every term except k = 0 and k = p,
(a + 1)^p % p = (a^p + 1) % p.

(a^p + 1) % p = (a^p % p + 1 % p) % p   by Lemma 3.3, addition property of modulo
             = (a % p + 1 % p) % p      by the induction assumption
             = (a + 1) % p              by Lemma 3.3, which is the goal.

∴ a^p ≡ a (mod p). □
For example, when a = 2 and p = 3, 2^(3−1) % 3 = 1. Fermat's little Theorem 12.3 is
a remarkable theorem that can be used to solve CPN, but it does not always produce a
correct answer because the converse is not always true. If p is not prime, a^(p−1) % p ≠ 1 in
most cases. For example, when a = 3 and p = 4, 3^(4−1) % 4 = 3 ≠ 1. Although rare, there
exist composite numbers p such that a^(p−1) % p = 1. For example, when a = 4 and p = 15
(a composite number), 4^(15−1) % 15 = 1. Composite numbers that return 1 for eqn (12.10)
in Fermat's little Theorem 12.3 are called pseudo primes. Let FMa be the set of positive
integers that includes both prime numbers and pseudo primes:

FMa = {x ∈ Z⁺ | x > 1 ∧ gcd(x, a) = 1 ∧ a^(x−1) % x = 1}    (12.11)

[Figure panel (a): a Venn diagram in which FMa consists of the prime numbers together with the pseudo primes, a subset of the composite numbers.]

(b) The first few pseudo primes in FMa − P:
a = 2: 341, 561, 645, 1105, 1387, 1729, 1905, 2047, 2465, · · ·
a = 3: 91, 121, 286, 671, 703, 949, 1105, 1541, 1729, · · ·
a = 4: 15, 85, 91, 341, 435, 451, 561, 645, 703, 1105, · · ·
a = 5: 4, 124, 217, 561, 781, 1541, 1729, 1891, 2821, · · ·

Figure 12.11: FMa and pseudo primes.

Figure 12.11 (a) shows a Venn diagram for FMa and pseudo primes. Figure 12.11 (b)
lists the first few pseudo primes in FM2 , FM3 , FM4 and FM5 .
Consider a randomized algorithm that randomly selects a to determine whether the input
n is a non-pseudo prime composite number or in FMa . By repeating this process, the error
rate becomes extremely small. A pseudo code is written below:
Algorithm 12.9. Fermat primality test

isprimeFermat(n)
for i = 1 ∼ k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
pick a random a such that gcd(a, n) = 1 . . . . . . . . . . . . . . . . . . . . . . . . . 2
if modpow(a, n − 1, n) ≠ 1, return "composite." . . . . . . . . . . . . . . . . . 3
return "probably prime." . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

[Figure panel (a): the test flow. Pick a random a < n with gcd(a, n) = 1; if a^(n−1) % n = 1, report "prime or pseudo prime," otherwise report "composite."]

(b) (k = 4) iterations of Algorithm 12.9 testing n = 1729, tracing the repeated squaring of modpow (all values mod 1729):
a = 2: 2^1728 = 2^864 = 2^432 = 2^216 = 2^108 = 1; 2^54 = 1065, 2^27 = 645, 2^13 = 1276, 2^6 = 64, 2^3 = 8, 2^1 = 2
a = 3: 3^1728 = 3^864 = 3^432 = 3^216 = 3^108 = 3^54 = 1; 3^27 = 664, 3^13 = 185, 3^6 = 729, 3^3 = 27, 3^1 = 3
a = 5: 5^1728 = 5^864 = 5^432 = 5^216 = 5^108 = 1; 5^54 = 1065, 5^27 = 1217, 5^13 = 1461, 5^6 = 64, 5^3 = 125, 5^1 = 5
a = 7: 7^1728 = 7^864 = 7^432 = 7^216 = 7^108 = 742; 7^54 = 77, 7^27 = 343, 7^13 = 7, 7^6 = 77, 7^3 = 343, 7^1 = 7

Figure 12.12: Fermat primality test randomized Algorithm 12.9 illustration.

Figure 12.12 (a) depicts Fermat primality test randomized Algorithm 12.9. Consider a
composite number n = 1729 = 7 · 13 · 19 for an illustration of Algorithm 12.9, as shown
in Figure 12.12 (b). If a = 2 is randomly selected, Algorithm 12.9 fails, as it belongs to
FM2 . If Algorithm 12.9 continues with a = 3, it fails again. If Algorithm 12.9 continues
with a = 5, it still fails. But when a = 7, Algorithm 12.9 returns that 1729 is a composite
number.
Using Euclid’s Algorithm 2.32 to test whether a and n are co-prime in line 2 of Algo-
rithm 12.9 takes O(log n) time. Eqn (12.10) in line 3 can easily be verified by the divide and
conquer Algorithm 3.14, as previously demonstrated in an exercise in Q 3.23 on page 149.
Its computational time complexity is Θ(log n) or Θ(d) if n is a d-digit long integer, assum-
ing the multiplication operation is constant. Hence, the computational time complexity of
Algorithm 12.9 is O(k log n).
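
In Python, line 3's modpow is the built-in three-argument pow, so Algorithm 12.9 reduces to a few lines. The sketch below folds the co-primality check into the loop: a failed gcd test already exhibits a factor, so "composite" is certain in that branch as well.

import math
import random

def is_prime_fermat(n, k=20):
    # Fermat primality test (a sketch of Algorithm 12.9): "composite"
    # answers are always correct; "probably prime" can err on pseudo
    # primes with respect to every base tried.
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randrange(2, n - 1)
        if math.gcd(a, n) != 1:         # a shares a factor: composite
            return False
        if pow(a, n - 1, n) != 1:       # built-in modpow; Fermat witness
            return False
    return True                          # probably prime

# is_prime_fermat(1729) almost always returns False: 1729 = 7 * 13 * 19 is
# a Carmichael number, but random bases frequently share a factor with it.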
The larger the number of trials, k, the more the error rate is reduced, so the Fermat primality
test randomized Algorithm 12.9 is usually correct. Algorithm 12.9 can still fail because there exist
composite numbers n such that a^(n−1) % n = 1 for every a > 1 with gcd(a, n) = 1. Such
composite numbers are called Carmichael numbers because Carmichael found the first and
smallest such number, 561, in [30]. Carmichael numbers are considerably rarer than prime
numbers. Erdős' upper bound for the number of Carmichael numbers, which grows more slowly
than the prime-counting estimate n/ln n, is given in [58].

12.3 Approximate Algorithms


Approximate algorithms that return suboptimal solutions are often used to solve NP-
hard optimization problems approximately. One simple way of designing an approximate

algorithm is by using the worst case factor, ρ. Let algo(I) be the solution provided by an
approximate algorithm and opt(I) be the actual optimal solution for the input I.

Definition 12.1. An approximate algorithm is said to be ρ-approximate if

algo(I) ≤ ρ opt(I)    for minimization problems
opt(I) ≤ ρ algo(I)    for maximization problems

ρ = 1 means that the algorithm finds the optimal solution. The goal of designing approx-
imate algorithms is to minimize the difference between ρ and one as much as possible. In this
section, (ρ = 2)-approximate algorithms for the subset sum maximization problem, vertex
cover Problem 4.13 and metric traveling salesman problem are given. Other approximation
definitions and schemes can be found in [92, 14].

12.3.1 Subset Sum Maximization


Most greedy approximate algorithms designed in Chapter 4 are ρ-approximate and the
ρ value can be derived. For example, recall the subset sum maximization problem which
appeared in Exercise Q 4.9 on page 202. A greedy approach selects the item that fits
best, i.e., the item minimizing (m − ai) subject to (m − ai) ≥ 0. A pseudo code for this
greedy best fit approach is stated as follows:

Algorithm 12.10. Greedy subset sum maximization (best fit version)

SSM-greedy(A1∼n , m)
Declare X1∼n initially 0’s . . . . . . . . . . . . . . . . . . . . . . . . . 1
c = 0 ............................................. 2
while m > 0 and c < n . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
ai = argmin_{ai ∈ A} |m − ai| . . . . . . . . . . . . . . . . . . . . 4
m = m − ai . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
ai = ∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
xi = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
c = c + 1 ....................................... 8
if m < 0, xi = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
return X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Theorem 12.4. Greedy Algorithm 12.10 for SSM is 2-approximate.

Proof. Let A(I) be the greedy solution value and O(I) the optimal value, and suppose, for
contradiction, that 2A(I) < O(I). Since O(I) ≤ m, this implies A(I) < m/2. Let x be an item
that is not selected by Algorithm 12.10. Then x > m − A(I), because x would have been selected
otherwise. Also x ≤ A(I), since all selected items in A(I) are greater than or equal to x. Now
we have m − A(I) < x ≤ A(I). Consequently, m/2 < A(I), which contradicts our assumption.
Hence, 2A(I) ≥ O(I) and Algorithm 12.10 is 2-approximate. □
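
A Python sketch of the best-fit greedy follows. It realizes the same greedy choice by scanning the items in decreasing order and taking whatever still fits, which avoids the overshoot bookkeeping of line 9; the indicator output plays the role of X1∼n, and the function name is ours.

def ssm_greedy(A, m):
    # Best-fit greedy for subset sum maximization: scan the items in
    # decreasing order and take every item that still fits in the
    # remaining capacity. Returns the indicator vector X.
    remaining = m
    chosen = [0] * len(A)
    for i in sorted(range(len(A)), key=lambda i: -A[i]):
        if 0 < A[i] <= remaining:      # the item fits: take it
            chosen[i] = 1
            remaining -= A[i]
    return chosen

# ssm_greedy([5, 4, 3], 9) -> [1, 1, 0]  (achieved value 9)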

Better approximate algorithms whose ρ value is closer to 1 are possible. A quadratic
greedy (ρ = 4/3)-approximate algorithm is given in [119] and a (ρ = 5/4)-approximate algorithm
is given in [96]. A pseudo code for the greedy (ρ = 4/3)-approximate algorithm in [119], which
calls the greedy Algorithm 12.10 repeatedly, is stated as follows:

Algorithm 12.11. Greedy approximate subset sum maximization II

SSM-appxII(A1∼n , m)
A1∼n = sort(A1∼n , desc.) . . . . . . . . . . . . . . . . . . . . . . . . .1
sol = −∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
for i = 1 ∼ n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
s = SSM-greedy(Ai∼n , m) . . . . . . . . . . . . . . . . . . . . . . . 4
if s > sol, sol = s . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
return sol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

The proof that Algorithm 12.11 is 4/3-approximate can be found in [119] and [120, p.
119]. Algorithm 12.11 is a prime example of the trade-off between computational time and
tighter approximation.
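
Using the ssm_greedy sketch above (whose indicator output is summed into a value here), Algorithm 12.11 becomes a short Python wrapper:

def ssm_appx2(A, m):
    # Sketch of Algorithm 12.11: rerun the best-fit greedy on every suffix
    # of the descending-sorted list and keep the best achieved sum.
    A = sorted(A, reverse=True)
    best = 0
    for i in range(len(A)):
        x = ssm_greedy(A[i:], m)       # greedy on the suffix A[i:]
        total = sum(a for a, used in zip(A[i:], x) if used)
        best = max(best, total)
    return best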

12.3.2 Vertex Cover

[Figure: a run of the approximate vertex cover algorithm on the sample graph with vertices v1 ∼ v9.
(a) Pick (v1, v2) randomly: E = E − {(v1, v2), (v1, v3), (v1, v7), (v2, v4), (v2, v6), (v2, v8)}, Vc = {v1, v2}.
(b) Pick (v3, v4) randomly: E = E − {(v3, v4), (v3, v8), (v4, v5), (v4, v6), (v4, v8), (v4, v9)}, Vc = Vc ∪ {v3, v4}.
(c) Pick (v5, v8) randomly: E = E − {(v5, v8), (v6, v8), (v7, v8), (v8, v9)}, Vc = Vc ∪ {v5, v8}.
(d) Pick (v6, v9): E = E − {(v6, v9)}, Vc = {v1, v2, v3, v4, v5, v8, v6, v9}.
(e) Optimal solution: Vc = {v1, v4, v6, v8}.]

Figure 12.13: Approximate vertex cover algorithm illustration

Recall the greedy approximate Algorithm 4.17 presented on page 181 for the vertex
cover Problem 4.13 defined on page 180. Deriving ρ for Algorithm 4.17 is not trivial.
However, Algorithm 12.12, which is somewhat similar to Algorithm 4.17, is 2-approximate.
It randomly selects an edge and includes both end vertices in the vertex cover solution set.
Then it removes all edges covered by these two vertices. It repeats the process until all
edges are covered. A pseudo code is stated as follows:

Algorithm 12.12. 2-approximate vertex cover

approxVC(G)
while E ≠ ∅ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
pick (x, y) ∈ E randomly . . . . . . . . . . . . . . . . . . . . . . . 2
VC = VC ∪ {x, y} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
E = E − {(a, b) | a ∈ {x, y} ∨ b ∈ {x, y}} . . . . . . . . 4
return VC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Figure 12.13 illustrates Algorithm 12.12 on the sample graph given in Figure 4.22 on
page 180. This approximate algorithm returns a vertex cover of size 8 while the optimal
solution vertex cover size is 4, as shown in Figure 12.13 (e).

Theorem 12.5. Algorithm 12.12 is 2-approximate.

Proof. In each iteration, Algorithm 12.12 selects an edge and adds both of its end vertices to the vertex cover. Because every edge covered by these two vertices is then removed, no two selected edges share an end vertex; the selected edges form a matching. Any vertex cover, including the optimal one, must contain at least one end vertex of each selected edge, since the edge cannot be covered otherwise. In the worst case, the optimal solution contains exactly one of the two end vertices for every selected edge. Hence, the returned cover is at most twice the size of the optimal one, and Algorithm 12.12 is 2-approximate. □

12.3.3 Metric Traveling Salesman Problem


One of the special cases of the traveling salesman Problem 4.16 defined on page 190 is
the metric traveling salesman problem, or simply ∆-TSP. The input cost matrix must be
metric in ∆-TSP.

Definition 12.2. A matrix C is said to be metric if the following conditions are satisfied:

- $c_{x,y} \ge 0$ (non-negativity)
- $c_{x,y} = 0$ if $x = y$ (identity)
- $c_{x,y} = c_{y,x}$ (symmetry)
- $c_{x,y} \le c_{x,z} + c_{z,y}$ (triangle inequality)
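Definition 12.2 can be checked directly in Θ(n³) time; a small Python checker (the function name is ours) is a straightforward transcription of the four conditions.

def is_metric(C):
    n = len(C)
    for x in range(n):
        for y in range(n):
            if C[x][y] < 0:                  # non-negativity
                return False
            if x == y and C[x][y] != 0:      # identity
                return False
            if C[x][y] != C[y][x]:           # symmetry
                return False
            for z in range(n):               # triangle inequality
                if C[x][y] > C[x][z] + C[z][y]:
                    return False
    return True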

Recall that the output for the TSP Problem 4.16 defined on page 190 is a path visiting all cities. If the first and last cities in the path are connected, i.e., the salesman must return to the starting city, a cycle is formed, as shown in Figure 12.14 (b). The traveling salesman problem can thus be formulated in two ways: minimize the cost of the acyclic path visiting all cities, or minimize the cost of the cyclic route that visits all cities and returns to the starting city. The metric traveling salesman problem uses the latter version and is formulated as follows:

Problem 12.4. metric traveling salesman problem


Input: A sequence $V_{1\sim n}$ of size $n$ and an $n \times n$ metric cost matrix $C_{v_1\sim v_n, v_1\sim v_n}$

Output: A permutation $V'$ of $V$ such that $\sum_{i=1}^{n-1} c_{v'_i, v'_{i+1}} + c_{v'_n, v'_1}$ is minimized.

Consider an approximation algorithm that first computes the minimum spanning tree
and takes the pre-order DFT of the MST, as shown in Figure 12.14 (d) ∼ (f). A pseudo
code is stated as follows:

⟨Graph drawings omitted. The nine panels of Figure 12.14 are: (a) a sample metric matrix; (b) optimal solution; (c) DFT of a spanning tree ⟨v1, v2, v5, v4, v5, v3⟩; (d) MST; (e) DFT of the MST; (f) pre-order DFT of the MST ⟨v1, v4, v2, v3, v5, v1⟩; (g) MST; (h) DFT of the MST; (i) pre-order DFT of the MST ⟨v4, v2, v3, v1, v5, v4⟩.⟩

Figure 12.14: Approximation Algorithm for ∆-TSP.

Algorithm 12.13. metric TSP approximation by MST

metricTSPapprox(C)
T = MSTalgo(C) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
V′ = preorder-DFT(T) . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
return $\sum_{i=1}^{n-1} c_{v'_i, v'_{i+1}} + c_{v'_n, v'_1}$ . . . . . . . . . . . . . . . . . . . . . . 3

Either Prim-Jarnik's Algorithm 4.18 or Kruskal's Algorithm 4.19 can be used in line 1, which takes Θ(n²) time. Lines 2 and 3 take linear time. Clearly, Algorithm 12.13 takes polynomial time. The approximate solution given by Algorithm 12.13 is 24 while the optimum solution is 21 for the sample metric matrix in Figure 12.14.
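A compact Python sketch of Algorithm 12.13 is given below, assuming C is a complete metric cost matrix given as a list of lists; Prim-Jarnik's algorithm plays the role of MSTalgo, an explicit stack plays preorder-DFT, and the function name is ours.

def metric_tsp_approx(C):
    n = len(C)
    # line 1: Prim-Jarnik MST rooted at vertex 0, Theta(n^2) time
    in_tree, parent = {0}, [0] * n
    best = [C[0][v] for v in range(n)]
    children = [[] for _ in range(n)]
    for _ in range(n - 1):
        u = min((v for v in range(n) if v not in in_tree),
                key=lambda v: best[v])
        in_tree.add(u)
        children[parent[u]].append(u)
        for v in range(n):
            if v not in in_tree and C[u][v] < best[v]:
                best[v], parent[v] = C[u][v], u
    # line 2: pre-order depth-first traversal of the MST
    order, stack = [], [0]
    while stack:
        u = stack.pop()
        order.append(u)
        stack.extend(reversed(children[u]))
    # line 3: cost of the tour visiting the vertices in pre-order
    return sum(C[order[i]][order[(i + 1) % n]] for i in range(n))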

Theorem 12.6. Algorithm 12.13 is 2-approximate.

Proof. Let T be an MST of the complete graph of the metric cost matrix C. If one traverses T by DFT, each edge is visited exactly twice and the total cost is 2ws(T), where ws(T) is defined in eqn (4.8) on page 183. Let $V^o$ be the optimal solution for the ∆-TSP and $sr(V^o) = \sum_{i=1}^{n-1} c_{v^o_i, v^o_{i+1}} + c_{v^o_n, v^o_1}$. $V^o$ is a sequence of vertices, i.e., a Hamiltonian path plus the closing edge, and the path alone is a special case of a spanning tree. If one traverses $V^o$ by DFT, the total cost is $2ws(V^o)$, and $2ws(V^o) \le 2sr(V^o)$ because $sr(V^o)$ additionally counts the edge from the last vertex back to the starting vertex. $2ws(T) \le 2ws(V^o)$ because T is an MST and $V^o$ is a spanning tree. Let preordDFT(T) be the sum of the costs of the edges in the pre-order DFT of T; this vertex sequence is exactly the solution produced by Algorithm 12.13. preordDFT(T) ≤ 2ws(T) because, by the triangle inequality of the metric property, each shortcut taken by the pre-order sequence costs no more than the sub-walk of the full DFT that it replaces. Since preordDFT(T) ≤ 2ws(T) ≤ 2ws(V°) ≤ 2sr(V°), Algorithm 12.13 is 2-approximate. □
Better approximate algorithms whose ρ value is closer to 1 are possible. A (ρ = 1.5)-
approximate algorithm is given in [33].

12.3.4 Probably Approximately Correct


⟨Draft placeholder in the source: a discussion of statistics and the law of large numbers, with citations to PAC learning, is to be added here.⟩

Figure 12.15: Law of large numbers. ⟨Plots omitted: the running average of repeated trials, shown drifting toward 0.5 as the number of trials grows.⟩
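Although the surrounding discussion is still to be written, the following Python sketch (ours) illustrates the kind of experiment Figure 12.15 depicts, assuming the plots show the running mean of repeated fair 0/1 trials over a growing number of trials.

import random

def running_means(n, seed=42):
    random.seed(seed)
    total, means = 0, []
    for i in range(1, n + 1):
        total += random.randint(0, 1)    # one fair Bernoulli trial
        means.append(total / i)          # running average after i trials
    return means

m = running_means(1000)
print(m[9], m[99], m[999])   # estimates drifting toward 0.5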

12.4 Exercises
Q 12.1. Partition the following array by the pivot a_x, where x is a randomly selected index. Illustrate the outside-in partition Algorithm 12.1 on the array for each choice of x below.
A= 17 12 2 80 20 35 1 15 30 10
1 2 3 4 5 6 7 8 9 10

a). x = 8, a8 = 15
b). x = 3, a3 = 2

c). x = 4, a4 = 80
d). x = 7, a7 = 1

Q 12.2. Modify Algorithm 12.1 so that it can be used as a subroutine in Quicksort, Quickselect, etc.
Q 12.3. Partition the following array by the pivot a_x, where x is a randomly selected index. Illustrate the progressive partition Algorithm 12.2 on the array for each choice of x below.
A= 17 12 2 80 20 35 1 15 30 10
1 2 3 4 5 6 7 8 9 10

a). x = 8, a8 = 15
b). x = 3, a3 = 2
c). x = 4, a4 = 80
d). x = 7, a7 = 1

Q 12.4. Modify Algorithm 12.2 so that it can be used as a subroutine in Quicksort, Quickselect, etc.
Q 12.5. Illustrate the quick-sort algorithm on the following array using the respective pivot
selection mechanism.
A= 17 12 2 80 20 35 1 15 30 10
1 2 3 4 5 6 7 8 9 10

a). Pick the last element as a pivot with the partition Algorithm 12.1.
b). Pick the first element as a pivot with the partition Algorithm 12.1.
c). Pick the middle of last three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.1.
d). Pick the middle of first three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.1.
e). Pick the last element as a pivot with the partition Algorithm 12.2.
f). Pick the first element as a pivot with the partition Algorithm 12.2.
g). Pick the middle of last three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.2.
h). Pick the middle of first three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.2.

Q 12.6. Illustrate the quick-select algorithm on the following array where k = 3 using the
respective pivot selection mechanism.
A= 17 12 2 80 20 35 1 15 30 10
1 2 3 4 5 6 7 8 9 10

a). Pick the last element as a pivot with the partition Algorithm 12.1.

b). Pick the first element as a pivot with the partition Algorithm 12.1.

c). Pick the middle of last three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.1.

d). Pick the middle of first three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.1.

e). Pick the last element as a pivot with the partition Algorithm 12.2.

f). Pick the first element as a pivot with the partition Algorithm 12.2.

g). Pick the middle of last three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.2.

h). Pick the middle of first three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.2.

Q 12.7. Illustrate the quick-select algorithm on the following array where k = b n2 c using
the respective pivot selection mechanism.
A= 17 12 2 80 20 35 1 15 30 10
1 2 3 4 5 6 7 8 9 10

a). Pick the last element as a pivot with the partition Algorithm 12.1.

b). Pick the first element as a pivot with the partition Algorithm 12.1.

c). Pick the middle of last three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.1.

d). Pick the middle of first three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.1.

e). Pick the last element as a pivot with the partition Algorithm 12.2.

f). Pick the first element as a pivot with the partition Algorithm 12.2.

g). Pick the middle of last three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.2.

h). Pick the middle of first three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.2.

Q 12.8. Consider the median Problem 10.8 defined on page 614.

a). Devise a randomized algorithm using the Las Vegas method.

b). Provide the best case computational time complexity.

c). Provide the worst case computational time complexity.

d). Provide the average case computational time complexity.



Q 12.9. Illustrate the quick-median algorithm, devised in the above exercise Q 12.8, on the
following arrays using the respective pivot selection mechanism.

E= 17 12 2 80 20 35 1 15 30 10
1 2 3 4 5 6 7 8 9 10

O= 17 12 2 80 20 35 1 15 30
1 2 3 4 5 6 7 8 9

a). Pick the last element as a pivot with the partition Algorithm 12.1.
b). Pick the first element as a pivot with the partition Algorithm 12.1.
c). Pick the middle of last three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.1.
d). Pick the middle of first three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.1.
e). Pick the last element as a pivot with the partition Algorithm 12.2.
f). Pick the first element as a pivot with the partition Algorithm 12.2.
g). Pick the middle of last three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.2.
h). Pick the middle of first three elements as a pivot if n ≥ 3. Base cases for n < 3 with
the partition Algorithm 12.2.

Q 12.10. Consider the select k sum maximization Problem 4.1 defined on page 157.

a). Devise a randomized algorithm, more specifically the Las Vegas method, for the SKSS
problem.
b). Provide the worst, best, and average time complexities of the algorithm proposed in a).
c). Sketch the illustration of the algorithm proposed in a) using the following toy example,
A = h5, 2, 9, 4, 0, 8, 7, 1i and k = 3 where the pivot is the first element.

Q 12.11. Consider the select k subset product minimization problem or simply SKSPmin
considered in exercise 4.4 on page 200.

a). Devise a randomized algorithm, more specifically the Las Vegas method.
b). Provide the worst, best, and average time complexities of the algorithm proposed in a).
c). Sketch the illustration of the algorithm proposed in a) using the following toy example,
A = h2.0, 0.5, 5.0, 0.5, 2.5, 0.2, 2.0, 0.5i and k = 3 where the pivot is the first element.

Q 12.12. Consider the up-down Problem 2.19, defined on page 65. Consider the following
two input lists.
A= 17 12 2 80 20 35 1 15 30

B= 17 12 2 80 20 35 1 15

a). Recall the problem of finding kth smallest element, or simply KSM, considered as an
exercise Q 2.19 on page 85. Devise an algorithm to generate an up-down sequence if an
algorithm for KSM is known.

b). Demonstrate your algorithm devised in a) on the above A and B sequences.

c). Provide the worst, best, and average time complexities of the algorithm proposed in
a).

Q 12.13. Recall the down-up alternating permutation problem, considered as an exercise in Q 2.24 on page 86. Consider the following two input lists.

A= 17 12 2 80 20 35 1 15 30

B= 17 12 2 80 20 35 1 15

a). Recall the problem of finding kth smallest element, or simply KSM, considered as an
exercise Q 2.19 on page 85. Devise an algorithm to generate a down-up sequence if an
algorithm for KSM is known.

b). Demonstrate your algorithm devised in a) on the above A and B sequences.

c). Provide the worst, best, and average time complexities of the algorithm proposed in
a).

d). Devise an algorithm to generate an up-down sequence when a median element is known.
Note that the procedure for median is given to you. (Hint: partitioning.)

e). Demonstrate your algorithm devised in d) on the above A and B sequences, where
median(A) = 17 and median(B) = 16.

Q 12.14. Recall the down-up alternating permutation problem, considered as an exercise in Q 2.24 on page 86. DUP stands for the down-up alternating permutation problem; this exercise develops a quick down-up algorithm. Consider the following two input lists.

A = 17 12 2 80 20 35 1 15 30 10

B = 17 12 2 80 20 35 1 15 30

a). Devise an algorithm to generate a down-up sequence by the Las Vegas method. (Hint: Algorithm 12.8 stated on page 708 for the up-down Problem 2.19.)

b). Demonstrate your algorithm devised in a) on the A sequence example, where the pivot is 17.

c). Demonstrate your algorithm devised in a) on the B sequence example, where the pivot is 15.

d). Demonstrate your algorithm devised in a) on the A sequence example, where the pivot is 15.

e). Demonstrate your algorithm devised in a) on the B sequence example, where the pivot is 12.

f). Demonstrate your algorithm devised in a) on the A sequence example, where the pivot is 10.

g). Demonstrate your algorithm devised in a) on the A sequence example, where the pivot is 30.

Q 12.15. Repeat exercise Q 12.14 for the up-up-down alternating permutation problem, i.e., devise and demonstrate a quick up-up-down sequence algorithm by the Las Vegas method.


Q 12.16. Consider the metric traveling salesman maximization problem, or simply ∆-TSPx, where the cost matrix is metric and the salesman must return to the starting city.

a). Formulate the problem.


b). Design a 2-approximate algorithm.
c). Provide the computational time complexity of the algorithm provided in b).

d). Prove that the proposed algorithm in b) is 2-approximate.


Bibliography

[1] Georgy M. Adelson-Velsky and Evgenii M. Landis. An algorithm for the organization
of information. Doklady Akademii Nauk USSR, 146(2):263–266, 1962.

[2] Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. Data Structures and Al-
gorithms. Addison-Wesley series in computer science and information processing.
Addison-Wesley, 1983.

[3] Jorge L. Ramı́rez Alfonsı́n. On variations of the subset sum problem. Discrete Applied
Mathematics, 81(1):1 – 7, 1998.

[4] Jorge L. Ramı́rez Alfonsı́n. The Diophantine Frobenius Problem. Oxford Lecture
Series in Mathematics and Its Applications. OUP Oxford, 2005.

[5] Robert B. Anderson. Proving Programs Correct. John Wiley & Sons, New York, 1979.

[6] Désiré André. Développements de sec x et tan x. Comptes Rendus Acad. Sci., Paris,
88:965–967, 1879.

[7] Désiré André. Sur les permutations alternées. Journal de mathématiques pures et
appliquées, 7:167–184, 1881.

[8] Howard Anton. Calculus with analytic geometry. John Wiley & Sons Australia, Lim-
ited, 3 edition, 1988.

[9] Jörg Arndt. Matters Computational: Ideas, Algorithms, Source Code. Springer Berlin
Heidelberg, 2010.

[10] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach.
Cambridge University Press, 2009.

[11] American Standards Association. American standard code for information inter-
change. ASCII, ASA X3.4-1963, June 1963.

[12] Owen Astrachan. Bubble sort: An archaeological algorithmic analysis. SIGCSE Bul-
letin, 35(1):1–5, January 2003.

[13] Mike D. Atkinson, Jörg-Rüdiger Sack, Nicola Santoro, and Thomas Strothotte. Min-
max heaps and generalized priority queues. Communications of the ACM, 29(10):996–
1000, October 1986.


[14] Giorgio Ausiello, Pierluigi Crescenzi, Giorgio Gambosi, Viggo Kann, Alberto
Marchetti-Spaccamela, and Marco Protasi. Complexity and Approximation: Combina-
torial Optimization Problems and Their Approximability Properties. Springer-Verlag,
Berlin, Heidelberg, 1st edition, 1999.

[15] Giorgio Ausiello and Rossella Petreschi. The Power of Algorithms: Inspiration and
Examples in Everyday Life. Springer Berlin Heidelberg, 2013.

[16] László Babai. Monte-Carlo algorithms in graph isomorphism testing. D.M.S. 79-10, Université de Montréal, 1979.

[17] John A. Ball. Algorithms for RPN Calculators. A Wiley-Interscience publication.


John Wiley & Sons Australia, Limited, 1978.

[18] Rudolf Bayer and Edward M. McCreight. Organization and maintenance of large
ordered indexes. Acta Informatica, 1(3):173–189, Sep 1972.

[19] Richard E. Bellman and Stuart E. Dreyfus. Applied Dynamic Programming. Princeton
University Press, 1962.

[20] Arthur T. Benjamin and Michael E. Orrison. Two quick combinatorial proofs of $\sum_{k=1}^{n} k^3 = \binom{n+1}{2}^2$. The College Mathematics Journal, 33(5):406–408, Nov 2002.

[21] Jon Bentley. Programming pearls: Algorithm design techniques. Commun. ACM,
27(9):865–873, September 1984.

[22] Jon Louis Bentley, Dorothea Haken, and James B. Saxe. A general method for solving
divide-and-conquer recurrences. SIGACT News, 12(3):36–44, 1980.

[23] Gerald E. Bergum, Larry Bennett, Alwyn F. Horadam, and S. D. Moore. Jacobsthal
polynomials and a conjecture concerning fibonacci-like matrices. Fibonacci Quarterly,
23:240–248, January 1985.

[24] Norman L. Biggs, Keith E. Lloyd, and Robin J. Wilson. Graph Theory, 1736-1936.
Clarendon Press, 1976.

[25] Paul E. Black. Bubble sort. In Vreda Pieterse and Paul E. Black, editors, Dictio-
nary of Algorithms and Data Structures [online]. National Institute of Standards and
Technology, August 2009.

[26] Gilles Brassard and Paul Bratley. Fundamentals of Algorithmics. Prentice Hall, 1996.

[27] Richard L. Burden and J. Douglas Faires. Numerical Analysis. Cengage Learning, 9 edition, 2010.

[28] Martin Campbell-Kelly. The History of Mathematical Tables From Sumer to Spread-
sheets. OUP Oxford, 2003.

[29] Martin Campbell-Kelly and Michael R. Williams. The Moore School Lectures: Theory
and Techniques for Design of Electronic Digital Computers. Babbage Inst Repr Ser for
History of Computers, Vol 9. University of Pennsylvania; Moore School of Electrical
Engineering, 1985.

[30] Robert D. Carmichael. Note on a new number theory function. Bull. Amer. Math.
Soc., 16(5):232–238, Feb 1910.
[31] Sung-Hyuk Cha. Computing parity of combinatorial functions. CSIS Technical Re-
ports 281, Pace university, 2011.
[32] Seonghun Cho and Sartaj Sahni. Weight-biased leftist trees and modified skip lists.
Journal of Experimental Algorithmics, 3, September 1998.
[33] Nicos Christofides. Worst-case analysis of a new heuristic for the travelling sales-
man problem. Technical Report 388, Graduate School of Industrial Administration,
Carnegie Mellon University, 1976.
[34] Alonzo Church. An unsolvable problem of elementary number theory. American
Journal of Mathematics, 58:345–363, 1936.
[35] Marshall Clagett. Ancient Egyptian Science: A Source Book. Ancient Egyptian Math-
ematics. American Philosophical Society, 1999.
[36] Alan Cobham. The intrinsic computational difficulty of functions. In Bar-Hillel
Yehoshua, editor, proceedings of the second International Congress, held in Jerusalem,
1964, Logic, Methodology and Philosophy of Science, pages 24–30, Amsterdam, 1965.
North-Holland.
[37] Edward G. Coffman, Jr., Michael R. Garey, and David S. Johnson. Approximation
algorithms for bin packing: A survey. In Dorit S. Hochbaum, editor, Approximation
Algorithms for NP-hard Problems, pages 46–93. PWS Publishing Co., Boston, MA,
USA, 1997.
[38] Richard Cole and Uzi Vishkin. Deterministic coin tossing with applications to optimal
parallel list ranking. Information and Control, 70(1):32–53, July 1986.
[39] Curtis R. Cook and Do Jin Kim. Best sorting algorithm for nearly sorted lists. Com-
mun. ACM, 23(11):620–624, November 1980.
[40] Stephen Cook. The P versus NP problem. Millennium prize problems, Clay Mathematics Institute, 2000. Available at http://www.claymath.org/millennium-problems/p-vs-np-problem.
[41] Stephen A. Cook. The complexity of theorem-proving procedures. In Proceedings
of the Third Annual ACM Symposium on Theory of Computing, STOC ’71, pages
151–158. ACM, 1971.
[42] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. In-
troduction to Algorithms. MIT Press, 3rd edition, 2009.
[43] Clark Allan Crane. Linear Lists and Priority Queues As Balanced Binary Trees. PhD
thesis, Stanford University, Stanford, CA, USA, 1972. AAI7220697.
[44] Sivarama P. Dandamudi. Fundamentals of Computer Organization and Design. Texts
in Computer Science. Springer New York, 2003.
[45] George B. Dantzig. Discrete-variable extremum problems. Operations Research,
5(2):266–277, 1957.

[46] Olivier Devillers. Randomization yields simple O(n log∗ n) algorithms for difficult Ω(n) problems. International Journal of Computational Geometry & Applications, 2(1):621–635, 1992.

[47] Persi Diaconis. Group representations in probability and statistics, volume 11 of Lecture
notes-monograph series. Institute of Mathematical Statistics, Hayward, CA, 1988.

[48] Persi Diaconis, Ronald L. Graham, and William M. Kantor. The mathematics of
perfect shuffles. Advances in Applied Mathematics, 4(2):175 – 196, 1983.

[49] Leonard E. Dickson. History of the Theory of Numbers: Divisibility and Primality,
volume 1. Dover Publications, 2012.

[50] Edsger W. Dijkstra. A note on two problems in connexion with graphs. Numerische
Mathematik, 1(1):269–271, December 1959.

[51] Vassil Dimitrov, Graham Jullien, and Roberto Muscedere. Multiple-Base Number Sys-
tem: Theory and Applications. Circuits and Electrical Engineering. Taylor & Francis,
2012.

[52] Richard Durstenfeld. Algorithm 235: Random permutation. Communications of the


ACM, 7(7):420, 1964.

[53] Roger Eckhardt. Stan Ulam, John von Neumann, and the Monte Carlo method. Los Alamos Science, 15:131–137, 1987.

[54] Jack Edmonds. Optimum branchings. Journal of Research of the National Bureau of
Standards, 71B(4):233–240, 1967.

[55] Noam Elkies. On $A^4 + B^4 + C^4 = D^4$. Mathematics of Computation, 51(184):825–835, 1988.

[56] Herbert B. Enderton. A Mathematical Introduction to Logic. Elsevier Science, 2


edition, 2001.

[57] Susanna S. Epp. Discrete Mathematics with Applications. Cengage Learning, 2010.

[58] Paul Erdös. On Pseudoprimes and Carmichael Numbers. Publicationes Mathematicae,


Debrecen, 4:201–206, 1956.

[59] Leonhard Euler. Solutio problematis ad geometriam situs pertinentis. Commentarii academiae scientiarum imperialis Petropolitanae, 8:128–140, 1741.

[60] William Feller. Introduction to Probability Theory and Its Applications, volume 1.
Wiley, New York, 3 edition, 1968.

[61] Ronald A. Fisher and Frank Yates. Statistical Tables for Biological, Agricultural and
Medical Research. Oliver and Boyd ltd., 1943, 1938. original from the University of
Wisconsin - Madison.

[62] Richard Fitzpatrick and Johan L. Heiberg. Euclid’s Elements. Richard Fitzpatrick,
2007.

[63] Ivan Flores. Direct Calculation of k-generalized Fibonacci Numbers. The Fibonacci
Quarterly, 5(3):259–266, 1967.

[64] Robert W. Floyd. Algorithm 245 - treesort 3. Communications of the ACM, 7(12):701,
1964.

[65] Michael L. Fredman and Robert E. Tarjan. Fibonacci heaps and their uses in improved
network optimization algorithms. J. ACM, 34(3):596–615, July 1987.

[66] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to
the Theory of NP-completeness. Books in mathematical series. W. H. Freeman, 1979.

[67] William I. Gasarch. Guest column: The second P =? NP poll. SIGACT News, 43(2):53–77, June 2012.

[68] Dominic Giampaolo. Practical File System Design with the BE File System. Morgan
Kaufmann Publishers, 1999.

[69] Linda Gilbert. Elements of Modern Algebra. Cengage Learning, 2014.

[70] Oded Goldreich. Computational Complexity: A Conceptual Perspective. Cambridge


University Press, 2008.

[71] Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundations, Analy-
sis, and Internet Examples. Wiley, 2002.

[72] Ronald L. Graham. An efficient algorithm for determining the convex hull of a finite planar set. Information Processing Letters, 1(4):132–133, 1972.

[73] Ronald L. Graham, Donald E. Knuth, and Oren Patashnik. Concrete Mathematics:
A Foundation for Computer Science. Concrete Mathematics. Addison-Wesley, 2nd
edition, 1994.

[74] Ulf Grenander. Pattern Analysis. Number v. 2; v. 24 in Applied Mathematical Sci-


ences. Springer-Verlag, 1978.

[75] Battiscombe Gunn and Eric T. Peet. Four geometrical problems from the Moscow Mathematical Papyrus. Journal of Egyptian Archaeology, 15:167–185, 1929.

[76] Charles L. Hamblin. Translation to and from polish notation. The Computer Journal,
5(3):210, 1962.

[77] Godfrey Harold Hardy and Edward Maitland Wright. An introduction to the theory
of numbers. Oxford Science Publications. Clarendon Press, Oxford, 1979.

[78] Charles Antony Richard Hoare. Algorithm 63: partition. Communications of the
ACM, 4(7):321, 1961.

[79] Charles Antony Richard Hoare. Algorithm 64: Quicksort. Communications of the
ACM, 4(7):321, 1961.

[80] Verner E. Hoggatt. Fibonacci and Lucas numbers. Houghton Mifflin, 1969.

[81] Ross Honsberger. Mathematical Gems III. Dolciani Mathematical Expositions. Math-
ematical Association of America, Washington, DC, 1985.

[82] John E. Hopcroft and Jeffrey D. Ullman. Introduction to Automata Theory, Lan-
guages, and Computation. Addison-Wesley Series in Computer Science and Informa-
tion Processing. Addison-Wesley, 1979.

[83] Alwyn F. Horadam. Pell identities. Fibonacci Quarterly, 9(3):245–252, 1971.

[84] Bing-Chao Huang and Michael A. Langston. Practical in-place merging. Communi-
cations of the ACM, 31(3):348–352, March 1988.

[85] David A. Huffman. A method for the construction of minimum-redundancy codes.


Proceedings of the Institute of Radio Engineers, 40(9):1098–1101, September 1952.

[86] Oscar H. Ibarra and Chul E. Kim. Fast approximation algorithms for the knapsack
and sum of subset problems. Journal of the ACM, 22(4):463–468, October 1975.

[87] Kenneth E. Iverson. A programming language. Wiley, 1962.

[88] M.P. Jarnigan. Automatic machine methods of testing PERT networks for consistency. Technical report, Naval Weapons Laboratory (U.S.), 1960.

[89] Vojtĕch Jarnı́k. O jistém problému minimálnı́m. Práce moravské přı́rodovědecké


spolec̆nosti, 6(fasc 4):57–63, 1930.

[90] David S. Johnson. Near-optimal bin packing algorithms. PhD thesis, Massachusetts
Institute of Technology, Cambridge, MA, USA, 1973.

[91] Arthur B. Kahn. Topological sorting of large networks. Communications of the ACM,
5(11):558–562, November 1962.

[92] Viggo Kann. On the Approximability of NP-complete Optimization Problems. PhD


thesis, Royal Institute of Technology, Stockholm, Sweden, 1992.

[93] Anatolii A. Karatsuba and Yuri P. Ofman. Multiplication of many-digital numbers


by automatic computers. Proceedings of the USSR Academy of Sciences, 145:293–294,
1962. Translation in Physics-Doklady 7, 595–596, 1963.

[94] Richard M. Karp. Reducibility among combinatorial problems. In Proceedings of a


symposium on the Complexity of Computer Computations, pages 85–103, the IBM
Thomas J. Watson Research Center, Yorktown Heights, New York., March 1972.

[95] Narasimha Karumanchi. Coding Interview Questions. CareerMonk Publications, 2016.

[96] Hans Kellerer, Renata Mansini, and Maria Grazia Speranza. Two linear approximation
algorithms for the subset-sum problem. European Journal of Operational Research,
120(2):289–296, 2000.

[97] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack Problems. Springer
Berlin Heidelberg, 2004.

[98] James E. Kelley, Jr and Morgan R. Walker. Critical-path planning and scheduling.
In Papers Presented at the December 1-3, 1959, Eastern Joint IRE-AIEE-ACM Com-
puter Conference, IRE-AIEE-ACM ’59 (Eastern), pages 160–173. ACM, 1959.

[99] Jon Kleinberg and Éva Tardos. Algorithm Design. Pearson/Addison-Wesley, 2006.

[100] Donald E. Knuth. The Art of Computer Programming: Seminumerical algorithms,


volume 2 of Addison-Wesley series in computer science and information processing.
Addison-Wesley, Reading, Mass, 1969.

[101] Donald E. Knuth. Big omicron and big omega and big theta. SIGACT News, 8(2):18–
24, April 1976.

[102] Donald E. Knuth. The Art of Computer Programming: Fundamental Algorithms,


volume 1. Addison-Wesley, 1997.

[103] Donald E. Knuth. The Art of Computer Programming: Sorting and searching, vol-
ume 3 of Addison-Wesley series in computer science and information processing.
Addison-Wesley, 1998.

[104] Donald E. Knuth. The Art of Computer Programming: Combinatorial Algorithms,


volume 4 of Addison-Wesley series in computer science and information processing.
Addison-Wesley, 2005.

[105] Gina Kolata. In shuffling cards, 7 is winning number. New York Times, 1990.

[106] Richard E. Korf. A complete anytime algorithm for number partitioning. ARTIFI-
CIAL INTELLIGENCE, 106:181–203, 1998.

[107] Thomas Koshy. Elementary Number Theory with Applications. Elsevier Academic
Press, Amsterdam, Boston, 2 edition, 2007.

[108] Markus Krötzsch. Description Logic Rules. Ciencias (E-libro–2014/09). IOS Press,
2010.

[109] Joseph B. Kruskal. On the shortest spanning subtree of a graph and the traveling
salesman problem. Proceedings of the American Mathematical Society, 7:48–50, 1956.

[110] Richard Ladner, Nancy Lynch, and Alan Selman. Comparison of polynomial-time
reducibilities. In Proceedings of the Sixth Annual ACM Symposium on Theory of
Computing, STOC ’74, pages 110–121, New York, NY, USA, 1974. ACM.

[111] Vladimir I. Levenshtein. Binary codes capable of correcting deletions, insertions and
reversals. Soviet Physics Doklady, 10(8):707–710, 1966. Original Russian version
published in 1965.

[112] Leonid A. Levin. Universal sequential search problems. Problems of Information


Transmission, 9(3):265–266, 1973.

[113] Harry R. Lewis and Larry Denenberg. Data Structures & Their Algorithms. Harper-
Collins Publishers, 1991.

[114] Deyi Li and Yi Du. Artificial Intelligence with Uncertainty. CRC Press, 2007.

[115] Édouard Lucas. Le calcul des nombres entiers. Le calcul des nombres rationnels. La
divisibilité arithmétique. Théorie des nombres. Gauthier-Villars, Paris, 1891.

[116] R. Duncan Luce and Albert D. Perry. A method of matrix analysis of group structure.
Psychometrika, 14(2):95–116, 1949.

[117] George S. Lueker. Two NP-complete problems in nonnegative integer programming. Technical and Scientific Reports 178, Princeton University, Computer Sciences Laboratory, March 1975.
[118] Glenn Manacher. A new linear-time “on-line” algorithm for finding the smallest initial
palindrome of a string. Journal of the ACM, 22(3):346–351, July 1975.
[119] Silvano Martello and Paolo Toth. Worst-case analysis of greedy algorithms for the
subset-sum problem. Mathematical Programming, 28:198–205, 1984.
[120] Silvano Martello and Paolo Toth. Knapsack problems: algorithms and computer im-
plementations. Wiley-Interscience series in discrete mathematics and optimization. J.
Wiley & Sons, 1990.
[121] Daniel D. McCracken, Harold Weiss, and Tsai-Hwa Lee. Programming Business Com-
puters. Wiley, New York, 1 edition, 1959.
[122] Dinesh P. Mehta and Sartaj Sahni. Handbook of Data Structures and Applications.
Chapman & Hall/CRC Computer and Information Science Series. CRC Press, 2004.
[123] Donald Michie. Memo functions and machine learning. Nature, (218):19–22, 1968.
[124] Cayman Mitchell, Nelson Schoenbrot, Joshua Shor, Keith Thomas, and Sung-Hyuk
Cha. Radix selection algorithm for the kth order statistic. In Proceedings of Student-
Faculty Research Day, New York, NY, USA, May 2012.
[125] Richard A. Mollin. RSA and Public-Key Cryptography. Discrete Mathematics and Its
Applications. CRC Press, 2002.
[126] J. Ian Munro, Thomas Papadakis, and Robert Sedgewick. Deterministic skip lists.
In Proceedings of the Third Annual ACM-SIAM Symposium on Discrete Algorithms,
SODA ’92, pages 367–375, Philadelphia, PA, USA, 1992. Society for Industrial and
Applied Mathematics.
[127] John Napier and Henry Briggs. Mirifici Logarithmorum Canonis Constructio: Et
Eorum Ad Naturales Ipsorum Numeros Habitudines; Una Cum Appendice ... Una
Cum Annotationibus ... A. Hermann, reprint edition, 1620. digitized in 2009.
[128] Otto E. Neugebauer, Abraham Sachs, and Albrecht Gotze. Mathematical cuneiform
texts, volume 29 of American oriental series. New Haven, Conn., Pub. jointly by the
American Oriental Society and the American Schools of Oriental Research, 1945.
[129] Michael Newell and Marina Grashina. The Project Management Question and Answer
Book. American Management Association, 2003.
[130] Tony D. Noe and Jonathan Vos Post. Primes in fibonacci n-step and lucas n-step
sequences. Journal of Integer Sequences, 8(05.4.4), 2005.
[131] Hari Mohan Pandey. Design Analysis and Algorithm. Laxmi Publications Pvt Limited,
2008.
[132] Michael S. Paterson and Carl E. Hewitt. Comparative schematology. In Jack B.
Dennis, editor, Record of the Project MAC Conference on Concurrent Systems and
Parallel Computation, pages 119–127. ACM, New York, NY, USA, 1970.

[133] David Pearson. A polynomial-time algorithm for the change-making problem. Oper-
ations Reseach Letters, 33(3):231–234, May 2005.

[134] Charles S. Peirce. A boolean algebra with one constant. In Charles Hartshorne and
Paul Weiss, editors, Collected Papers of Charles Sanders Peirce, volume 4, pages 12–
20. Harvard University Press, 1933.

[135] Michael L. Pinedo. Scheduling: Theory, Algorithms, and Systems. Springer Interna-
tional Publishing, 2016.

[136] Karl Popper. The Logic of Scientific Discovery. Routledge, reprint 2002, 1934. original
German title: Logik der Forschung.

[137] Emil L. Post. Recursively enumerable sets of positive integers and their decision
problems. Bulletin of the American Mathematical Society, 50(5):284–316, 1944.

[138] Robert C. Prim. Shortest connection networks and some generalizations. Bell System
Technology Journal, 36(6):1389–1401, 1957.

[139] William Pugh. Concurrent maintenance of skip lists. Technical Report CS-TR-2222,
Dept. of Computer Sciences, University of Maryland, April 1989.

[140] William Pugh. Skip lists: A probabilistic alternative to balanced trees. Communications of the ACM, 33(6):668–676, June 1990.

[141] Michael O. Rabin and Dana Scott. Finite automata and their decision problems. IBM
Journal of Research and Development, 3(2):114–125, April 1959.

[142] Raghu Ramakrishnan and Johannes Gehrke. Database Management Systems. Irwin
Computer Science. McGraw-Hill Education, 2003.

[143] Raphael M. Robinson. Mersenne and Fermat numbers. Proceedings of the American Mathematical Society, 5:842–846, 1954.

[144] Eleanor Robson. Words and pictures: New light on Plimpton 322. The American Mathematical Monthly, 109(2):105–120, 2002.

[145] Kenneth H. Rosen. Elementary Number Theory and Its Applications. Addison-Wesley,
6 edition, 2011.

[146] Kenneth H. Rosen. Discrete Mathematics and Its Applications. McGraw-Hill Educa-
tion, 7 edition, 2012.

[147] Daniel J. Rosenkrantz, Richard E. Stearns, and Philip M. Lewis, II. An analysis of
several heuristics for the traveling salesman problem. SIAM Journal on Computing,
6:563–581, 1977.

[148] Richard M. Sainsbury. Paradoxes. Cambridge University Press, 2009.

[149] Edward C. Sandifer. The Early Mathematics of Leonhard Euler. MAA spectrum.
Mathematical Association of America, 2007.

[150] C. Schensted. Longest increasing and decreasing subsequences. Canadian Journal of


Mathematics, 13:179–191, 1961.

[151] Robert Sedgewick and Kevin Wayne. Algorithms. Pearson Education, 2011.

[152] Harold H. Seward. Information sorting in the application of electronic digital com-
puters to business operations. Master’s thesis R-232, Massachusetts Institute of Tech-
nology, Digital Computer Laboratory, May 1953.

[153] Jeffrey Shallit. A triangle for the bell numbers. In Jr. Verner E. Hoggatt and Marjorie
Bicknell-Johnson, editors, A Collection of Manuscripts Related to the Fibonacci Se-
quence. 18th Anniversary Volume, pages 69–71. Fibonacci Association, Santa Clara,
California, 1980.

[154] Jeffrey Shallit. The computational complexity of the local postage stamp problem.
SIGACT News, 33(1):90–94, March 2002.

[155] C. J. Shaw and T. N. Trimble. Algorithm 175: Shuttle sort. Communications of the
ACM, 6(6):312–313, June 1963.

[156] Henry M. Sheffer. A set of five independent postulates for boolean algebras, with
application to logical constants. Transactions of the American Mathematical Society,
14:481–488, 1913.

[157] Abraham Silberschatz, Peter B. Galvin, and Greg Gagne. Operating system concepts.
Windows XP update. John Wiley & Sons, 2003.

[158] Steven S. Skiena. The Algorithm Design Manual. Springer-Verlag London, 2008.

[159] Michiel Smid. Closest Point Problems in Computational Geometry, volume 95 of issues
1-26 of Research report. Max-Planck-Institut für Informatik, 1995.

[160] Peter Smith. An Introduction to Formal Logic. Cambridge University Press, 2003.

[161] Richard P. Stanley. A survey of alternating permutations. In Combinatorics and


Graphs: The Twentieth Anniversary Conference of IPM Combinatorics, Contempo-
rary mathematics - American Mathematical Society, pages 165–196, Tehran, Iran,
May 2010. American Mathematical Society.

[162] Richard P. Stanley. Enumerative Combinatorics, volume I. Cambridge University


Press, second edition, 2011.

[163] Richard P. Stanley and S. Fomin. Enumerative Combinatorics:, volume 2 of Cambridge


Studies in Advanced Mathematics. Cambridge University Press, 1999.

[164] Guy L. Steele. Debunking the “expensive procedure call” myth or, procedure call
implementations considered harmful or, lambda: The ultimate goto. In Proceedings
of the Annual Conference, pages 153–162. ACM, 1977.

[165] J.F. Steffensen. Interpolation. Dover Publications, 2 edition, 2012.

[166] Graham A. Stephen. String Searching Algorithms, volume 6 of Lecture notes series
on computing. World Scientific, 1994.

[167] Angus Stevenson, editor. Oxford Dictionary of English. Oxford University Press, 3
edition, 2015.

[168] Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik,


13(4):354–356, August 1969.
[169] Robert Endre Tarjan. Depth first search and linear graph algorithms. SIAM Journal
on Computing, 1(2):146–160, 1972.

[170] Robert Endre Tarjan. Edge-disjoint spanning trees and depth-first search. Acta In-
formatica, 6(2):171–185, Jun 1976.
[171] Edward Charles Titchmarsh. The Theory of the Riemann Zeta-Function. Clarendon
Press, Oxford, 1951.
[172] Grigori S. Tseitin. On the complexity of derivations in the propositional calculus.
Studies in Mathematics and Mathematical Logic, Part II:115–125, 1968.
[173] Alan M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42:230–265, 1937.
[174] Peter van Emde Boas, Robert Kaas, and Erik Zijlstra. Design and implementation of
an efficient priority queue. Mathematical systems theory, 10(1):99–127, Dec 1976.
[175] Jan van Leeuwen. On the construction of huffman trees. In Third International
Colloquium on Automata, Languages and Programming, pages 382–410, July 1976.
[176] Ilan Vardi. Computational Recreations in Mathematica. The advanced book program.
Addison-Wesley, 1991.

[177] Jean Vuillemin. A data structure for manipulating priority queues. Commun. ACM,
21(4):309–315, April 1978.
[178] Tommy Wan. RUBIKS CUBE GUIDE. Lulu.com, 2016.
[179] Herbert S. Wilf. Generatingfunctionology. Academic Press, Boston, MA, USA, 2nd
edition, 1994.
[180] John W. J. Williams. Algorithm 232 - heapsort. Communications of the ACM,
7(6):347–348, 1964.
[181] Michael R. Williams. A history of computing technology. Perspectives Series. IEEE
Computer Society Press, 1997.
Index of Computational Problems

Problem · · · · · · · · · · · · · Page Algo name · · · · · · · · · · · # Design paradigm · · · · · · · Page


2-3 tree
Checking · · · · · · · · · · · · · ·455 · · · · · · · · · · · · · · · · ·Eq (8.14) Recursion · · · · · · · · · · · · · · · · · · · 456
Insertion · · · · · · · · · · · · · · 457 · · · · · · · · · · · · · · · · · · AG 8.16 Recursion · · · · · · · · · · · · · · · · · · · 457
Deletion · · · · · · · · · · · · · · 460 · · · · · · · · · · · · · · · · · · AG 8.17 Recursion · · · · · · · · · · · · · · · · · · · 460
Activity selection problems
ASP (Maximize number) · · · · · · · · · · · · · · · · · · AG 4.10 Greedy Algo. · · · · · · · · · · · · · · · 170
· · · · · · · · · · · · · · · · · · · · · · · 170
· · · · · · · · · · · · · · · · · · AG 4.11 Greedy Algo. · · · · · · · · · · · · · · · 171
· · · · · · · · · · · · · · · · · · AG 9.17 Greedy + minheap · · · · · · · · · · 521
· · · · · · · · · · · · · · · · AG S-9.30 Greedy + maxheap · · · 550,S-385
≤p LPL · · · · · · · · AG 10.19 Reduction · · · · · · · · · · · · · · · · · · 574
Weighted activity selection · · · · · · · · · · · · · · · · AG S-4.24 Greedy Aprx. · · · · · · · · · 208,S-114
(Maximize profits) · · · · 231 A· · · · · · · · · · · · · · Eq (5.15) Recursion · · · · · · · · · · · · · · · · · · · 232

☺ · · · · · · · · · · · · · · · · AG 5.9 Strong ind. · · · · · · · · · · · · · · · · · 232
· · · · · · · · · · · · · · · · AG S-5.54 Memoization · · · · · · · · · · 289,S-201
≤p LPC · · · · · · · ·AG 10.20 Reduction · · · · · · · · · · · · · · · · · · 575
Alternating permutation problems
Checking down-up · · · · · · · · · · · · · · · · · · · · · · ·Eq (S-2.30) Recursion · · · · · · · · · · · · · · · 86,S-40
· · · · · · · · · · · · · · · · · · · · 86,S-40 · · · · · · · · · · · · · · · · AG S-2.25 Inductive prog. · · · · · · · · · 86,S-40
· · · · · · · · · · · · · · · · AG S-3.12 Divide & Conq. · · · · · · · · 146,S-65
Checking up-down · · · · · 66 · · · · · · · · · · · · · · · · · · AG 1.18 Inductive prog. · · · · · · · · · · · · · · 30
· · · · · · · · · · · · · · · · ·Eq (2.30) Recursion · · · · · · · · · · · · · · · · · · · · 67
· · · · · · · · · · · · · · · · · · · AG 3.9 Divide & Conq. · · · · · · · · · · · · · 104
Checking up-up-down · · · · · · · · · · · · · · · · · · ·Eq (S-2.31) Recursion · · · · · · · · · · · · · · · 87,S-42
· · · · · · · · · · · · · · · · · · · · 87,S-41 · · · · · · · · · · · · · · · · AG S-2.28 Inductive prog. · · · · · · · · · 87,S-42
· · · · · · · · · · · · · · · · AG S-3.15 Divide & Conq. · · · · · · · · 146,S-67
Down-up · · · · · · · · · · 86,S-41 ☺ · · · · · · · · · · · · · AG S-2.26 Inductive prog. · · · · · · · · · 86,S-41
· · · · · · · · · · · · · · · · AG S-2.27 Recursion · · · · · · · · · · · · · · · 86,S-41
· · · · · · · · · · · · · · · · AG S-3.10 Divide & Conq. · · · · · · · · 146,S-63
· · · · · · · · · · · · · · · · AG S-3.11 Divide & Conq. · · · · · · · · 146,S-64
· · · · · · · · · · · · · · · · · AG S-4.1 Greedy Algo. · · · · · · · · · · 200,S-88
· · · · · · · · · · · · · · · · AG S-9.23 Greedy + minheap · · · · 549,S-381

☺ · · · · · · · · · · · · · AG S-9.24 minheap · · · · · · · · · · · · · · 549,S-381
· · · · · · · · · · · · · · · · AG S-9.25 Greedy + maxheap · · · 549,S-382
· · · · · · · · · · · · · · · · AG S-9.26 maxheap · · · · · · · · · · · · · · 549,S-382
≤p sort · · · · · · AG S-10.15 Reduction · · · · · · · · · · · · 615,S-409
≤p sort · · · · · · AG S-10.16 Reduction · · · · · · · · · · · · 615,S-409


≤p sort · · · · · · AG S-10.17 Reduction · · · · · · · · · · · · 615,S-409


≤p sort · · · · · · AG S-10.18 Reduction · · · · · · · · · · · · 615,S-409
≤p UDP · · · · · AG S-10.19 Reduction · · · · · · · · · · · · 615,S-410
≤p UDP · · · · Eq (S-10.23) Reduction · · · · · · · · · · · · 615,S-410
≤p KSM · · · · · · AG S-12.5 Reduction · · · · · · · · · · · · 722,S-525
≤p MDN · · · · · · AG S-12.6 Reduction · · · · · · · · · · · · 722,S-526

☺ quick-DU · · AG S-12.7 Las Vegas · · · · · · · · · · · · ·722,S-527
Up-down · · · · · · · · · · · 65 ☺ · · · · · · · · · · · · · · · AG 2.24 Inductive prog. · · · · · · · · · · · · · · 66
· · · · · · · · · · · · · · · · · · AG 2.25 Recursion · · · · · · · · · · · · · · · · · · · · 66
· · · · · · · · · · · · · · · · · · · AG 3.3 Divide & Conq. · · · · · · · · · · · · · · 96
· · · · · · · · · · · · · · · · · AG S-3.9 Divide & Conq. · · · · · · · · 146,S-63
· · · · · · · · · · · · · · · · · · · AG 4.3 Greedy Algo. · · · · · · · · · · · · · · · 156
· · · · · · · · · · · · · · · · · · AG 9.13 Greedy + minheap · · · · · · · · · · 515
· · · · · · · · · · · · · · · · · · AG 9.14 Greedy + maxheap · · · · · · · · · 516

☺ · · · · · · · · · · · · · · · AG 9.15 max-heap · · · · · · · · · · · · · · · · · · · 517
· · · · · · · · · · · · · · · · AG S-9.20 Greedy + maxheap · · · 549,S-378
· · · · · · · · · · · · · · · · AG S-9.21 Greedy + minheap · · · · 549,S-379
· · · · · · · · · · · · · · · · AG S-9.22 minheap · · · · · · · · · · · · · · 549,S-380
≤p sort · · · · · · · · · AG 10.2 Reduction · · · · · · · · · · · · · · · · · · 561
≤p sort · · · · · · · · · AG 10.3 Reduction · · · · · · · · · · · · · · · · · · 561
≤p sort · · · · · · · · · AG 10.4 Reduction · · · · · · · · · · · · · · · · · · 561
≤p sort · · · · · · AG S-10.10 Reduction · · · · · · · · · · · · 615,S-407
≤p sort · · · · · · AG S-10.11 Reduction · · · · · · · · · · · · 615,S-408
≤p sort · · · · · · AG S-10.12 Reduction · · · · · · · · · · · · 615,S-408
≤p sort · · · · · · AG S-10.13 Reduction · · · · · · · · · · · · 615,S-408
≤p sort · · · · · · AG S-10.14 Reduction · · · · · · · · · · · · 615,S-408
≤p DUP · · · · · AG S-10.20 Reduction · · · · · · · · · · · · 615,S-410
≤p DUP · · · · Eq (S-10.24) Reduction · · · · · · · · · · · · 615,S-410
≤p MDN · · · · · · · · AG 12.7 Reduction · · · · · · · · · · · · · · · · · · 706

☺ quick-updown AG 12.8 Las Vegas · · · · · · · · · · · · · · · · · · 708
≤p KSM · · · · · · AG S-12.4 Reduction · · · · · · · · · · · · 721,S-525
Up-up-down · · · · · · · 87,S-42 ☺ · · · · · · · · · · · · · AG S-2.29 Inductive prog. · · · · · · · · · 87,S-42
· · · · · · · · · · · · · · · · AG S-2.30 Recursion · · · · · · · · · · · · · · · 87,S-43
· · · · · · · · · · · · · · · · AG S-3.13 Divide & Conq. · · · · · · · · 146,S-66
· · · · · · · · · · · · · · · · AG S-3.14 Divide & Conq. · · · · · · · · 146,S-66
· · · · · · · · · · · · · · · · · AG S-4.2 Greedy Algo. · · · · · · · · · · 200,S-88
· · · · · · · · · · · · · · · · AG S-9.27 maxheap · · · · · · · · · · · · · · 550,S-383
· · · · · · · · · · · · · · · · AG S-9.28 minheap · · · · · · · · · · · · · · 550,S-384
≤p sort · · · · · · AG S-10.21 Reduction · · · · · · · · · · · · · · · · S-411
≤m_p KSM · · · · · AG S-12.8 Reduction · · · · · · · · · · · · 723,S-529

☺ quick-UUD · · · · AG ?? Las Vegas · · · · · · · · · · · · · · · ·723,??
André’s problem (see Euler zigazg number)
AVL tree
Checking · · · · · · · · · · · · · ·447 · · · · · · · · · · · · · · · · · · Eq (8.6) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 447
Construct · · · · · · · 491,S-342 · · · · · · · · · · · · · · · · Eq (S-8.1) Recursion · · · · · · · · · · · · · 491,S-342
· · · · · · · · · · · · · · · · · AG S-8.1 Inductive prog. · · · · · · · 491,S-342
Insertion · · · · · · · · · · · · · · 451 · · · · · · · · · · · · · · · · · · AG 8.13 Recursion · · · · · · · · · · · · · · · · · · · 451
Deletion · · · · · · · · · · · · · · 451 · · · · · · · · · · · · · · · · · · AG 8.14 Recursion · · · · · · · · · · · · · · · · · · · 451
Delete Maximum · · · · · ·541 · · · · · · · · · · · · · · · · · · AG 9.34 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 541
Delete Minimum · · · · · · 541 · · · · · · · · · · · · · · · · · · AG 9.33 Recursion · · · · · · · · · · · · · · · · · · · 541
see also AVL-select & AVL-sort under Order statistics & Sorting problems
B tree

Checking · · · · · · · · · · · · · ·465 · · · · · · · · · · · · · · · · · · AG 8.18 Recursion · · · · · · · · · · · · · · · · · · · 465


Search · · · · · · · · · · · · · · · · 466 · · · · · · · · · · · · · · · · · · AG 8.19 Recursion · · · · · · · · · · · · · · · · · · · 466
Insertion · · · · · · · · · · · · · · 466 · · · · · · · · · · · · · · · · · · AG 8.20 Recursion · · · · · · · · · · · · · · · · · · · 466
Deletion · · · · · · · · · · · · · · 470 · · · · · · · · · · · · · · · · · · AG 8.21 Recursion · · · · · · · · · · · · · · · · · · · 470
B+ tree
Checking · · · · · · · · · · · · · ·475 · · · · · · · · · · · · · · · · · · AG 8.22 Recursion · · · · · · · · · · · · · · · · · · · 475
Search · · · · · · · · · · · · · · · · 475 · · · · · · · · · · · · · · · · · · AG 8.23 Recursion · · · · · · · · · · · · · · · · · · · 475
Insertion · · · · · · · · · · · · · · 476 · · · · · · · · · · · · · · · · · · AG 8.24 Recursion · · · · · · · · · · · · · · · · · · · 476
Deletion · · · · · · · · · · · · · · 479 · · · · · · · · · · · · · · · · · · AG 8.25 Recursion · · · · · · · · · · · · · · · · · · · 479
B+2 tree
Checking · · · · · · · 495,S-355 · · · · · · · · · · · · · · · · · AG S-8.4 Recursion · · · · · · · · · · · · · 495,S-355
Search · · · · · · · · · · 495,S-355 · · · · · · · · · · · · · · · · · AG S-8.5 Recursion · · · · · · · · · · · · · 495,S-355
Insertion · · · · · · · · 495,S-355 · · · · · · · · · · · · · · · · · AG S-8.6 Recursion · · · · · · · · · · · · · 495,S-355
Deletion · · · · · · · · 495,S-356 · · · · · · · · · · · · · · · · · AG S-8.7 Recursion · · · · · · · · · · · · · 495,S-356
Bell number (see under Set partition numbers)
Back edge checking · · · · · 379 ≤p GCC · · · · · · · Eq (7.11) Reduction · · · · · · · · · · · · · · · · · · 379
Bin packing (bounded partition · · · · · · · · · · AG 4.15 Greedy Aprx. · · · · · · · · · · · · · · · 176
problem) · · · · · · · · · · · · 175 STP ≤p · · · · · · · Eq (11.44) NP-hard · · · · · · · · · · · · · · · · · · · · 665
⟨decision ver.⟩ · · · · · · · · · · · 665 STP ≤p · · · · · · · Eq (11.45) NP-complete · · · · · · · · · · · · · · · · 665
MPSdv ≤p · · · · Eq (11.47) NP-complete · · · · · · · · · · · · · · · · 666
Binary tree related problems
Depth of a node · · · · · · ·437 ≤p SPL · · · · · · · · · · · Pr 8.1 Reduction · · · · · · · · · · · · · · · · · · 437
Height balanceness · · · · 447 · · · · · · · · · · · · · · · · · · Eq (8.5) Recursion · · · · · · · · · · · · · · · · · · · 447
Height of a node · · · · · · 437 ≤m_p LPL · · · · · · · · · · Pr 8.2 m-Reduction · · · · · · · · · · · · · · · · 437

☺ · · · · · · · · · · · · · · · Eq (8.1) Recursion · · · · · · · · · · · · · · · · · · · 438
Leftist · · · · · · · · · · · · · · · · 533 · · · · · · · · · · · · · · · · ·Eq (9.21) Recursion · · · · · · · · · · · · · · · · · · · 533
Null path length · · · · · · 532 ≤m_p SPL · · · · · · · · · Pr 9.10 m-Reduction · · · · · · · · · · · · · · · · 532

☺ · · · · · · · · · · · · · ·Eq (9.20) Recursion · · · · · · · · · · · · · · · · · · · 532
Number of nodes in a · · · · · · · · · · · · · · · · ·Eq (1.24) Closed form · · · · · · · · · · · · · · · · · · 28
perfect binary tree, · · · · · · · · · · · · · · ·Eq (S-2.11) Recursion · · · · · · · · · · · · · · · 83,S-30
$\sum_{i=0}^{h} 2^i$ · · · · 83,S-30 · · · · · · Eq (S-2.12) Inductive prog. · · · · · · · · · 83,S-30
· · · · · · · · · · · · · · · · ·Eq (3.29) Divide & Conq. · · · · · · · · · · · · · 138
Number of rBTs · · · · · · 438 (see Catalan number)
Right spine length · · · · 534 · · · · · · · · · · · · · · · · ·Eq (9.26) Recursion · · · · · · · · · · · · · · · · · · · 534
Sum of depths in a · · · · · · · · · · · · · · · · ·Eq (1.25) Closed form · · · · · · · · · · · · · · · · · · 28
perfect binary tree, · · · · · · · · · · · · · · ·Eq (S-2.13) Recursion · · · · · · · · · · · · · · · 83,S-30
$\sum_{i=0}^{h} i\,2^i$ · · · · 83,S-30 · · · · · · Eq (S-2.14) Inductive prog. · · · · · · · · · 83,S-30
· · · · · · · · · · · · · · · · ·Eq (3.31) Divide & Conq. · · · · · · · · · · · · · 139
Binary Search tree
Checking · · · · · · · · · · · · · ·441 ≤p DFT · · · · · · · · · ·AG 8.3 Reduction · · · · · · · · · · · · · · · · · · 441
· · · · · · · · · · · · · · · · · · · AG 8.4 Recursion · · · · · · · · · · · · · · · · · · · 441
Deletion · · · · · · · · · · · · · · 444 · · · · · · · · · · · · · · · · · · · AG 8.9 Recursion · · · · · · · · · · · · · · · · · · · 445
Insertion · · · · · · · · · · · · · · 443 · · · · · · · · · · · · · · · · · · · AG 8.6 Recursion · · · · · · · · · · · · · · · · · · · 443
Maximum · · · · · · · · · · · · · 444 · · · · · · · · · · · · · · · · · · · AG 8.8 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 444
Minimum · · · · · · · · · · · · · 444 · · · · · · · · · · · · · · · · · · · AG 8.7 Recursion · · · · · · · · · · · · · · · · · · · 444
Search · · · · · · · · · · · · · · · · 441 · · · · · · · · · · · · · · · · · · · AG 8.5 Recursion · · · · · · · · · · · · · · · · · · · 442
see also AVL tree
Binomial coefficient (see Subset selection without repetition)
Catalan number (number of A · · · · · · · · · · · · Eq (5.104) Recursion · · · · · · · · · · · · · · · · · · · 288
rooted binary trees) · · · · · 438 A · · · · · · · · · · · · Eq (5.105) Recursion · · · · · · · · · · · · · · · · · · · 288
· · · · · · · · · · · · · · · · · · · AG 8.1 Strong ind. · · · · · · · · · · · · · 288,439

· · · · · · · · · · · · · · ·Eq (S-5.71) Memoization · · · · · · · · · · 288,S-197


· · · · · · · · · · · · · · · · · · · AG 8.2 Memoization · · · · · · · · · · · · · · · · 439
≤p BNC · · · · · · Eq (10.27) Reduction · · · · · · · · · · · · · · · · · · 585
≤m_p FAC · · · · · · Eq (10.28) m-Reduction · · · · · · · · · · · · · · · · 585
Clique (Maximum clique) 668 SCN ≤p · · · · · · · Tm 11.16 NP-hard · · · · · · · · · · · · · · · · · · · · 668
IDS ≤p · · · · · · · Eq (11.51) NP-hard · · · · · · · · · · · · · · · · · · · · 670
VCP ≤p · · · · · · Eq (11.57) NP-hard · · · · · · · · · · · · · · · · · · · · 674
⟨decision ver.⟩ · · · · · · · · · · · 668 SCN ≤p · · · · · · Eq (11.50) NP-complete · · · · · · · · · · · · · · · · 669
IDSdv ≤p · · · · · Eq (11.53) NP-complete · · · · · · · · · · · · · · · · 672
VCPdv ≤p · · · · Eq (11.61) NP-complete · · · · · · · · · · · · · · · · 674
Coin Change problem (see Postage stamp equality minimization under Stamp problems)
Colexicographical order · · · · · · · · · · · · · · ·Eq (S-2.36) Recursion · · · · · · · · · · · · · · · · 89,S-47
· · · · · · · · · · · · · · · · · · · · · · 89,S-47 · · · · · · · · · · · · · · · · AG S-2.36 Tail recursion · · · · · · · · · · · 89,S-47
Complete k-ary tree related problems
Level of the ith node · · 503 · · · · · · · · · · · · · · · · · · Eq (9.9) Closed form · · · · · · · · · · · · · · · · · 503
Num. of internal nodes 503 · · · · · · · · · · · · · · · · ·Eq (9.10) Closed form · · · · · · · · · · · · · · · · · 503
Number of leaf nodes · 503 · · · · · · · · · · · · · · · · ·Eq (9.11) Closed form · · · · · · · · · · · · · · · · · 503
Combinational Circuit problems
construct · · · · · · · · · · · · · 644 · · · · · · · · · · · · · · · · · · AG 11.1 Stack · · · · · · · · · · · · · · · · · · · · · · · 645
convert to equivalent · · · · · · · · · · · · · · · · · · AG 11.2 Recursion · · · · · · · · · · · · · · · · · · · 646
proposition · · · · · · · · · · · 645
convert to equi-satisfiable · · · · · · · · · · · · · · · · · · AG 11.3 Memoization · · · · · · · · · · · · · · · · 648
proposition · · · · · · · · · · · 647
evaluate · · · · · · · · · · · · · · ·642 A · · · · · · · · · · · · · · Eq (11.2) Recursion · · · · · · · · · · · · · · · · · · · 643
· · · · · · · · · · · · · · · · AG S-11.2 Strong ind. · · · · · · · · · · · 686,S-483
· · · · · · · · · · · · · · · · AG S-11.1 Memoization · · · · · · · · · · 686,S-483
see also under satisfiability
Complete recurrence problems
Pn−1
i=1 T (i) + 1 · · · · · · · · 286 A · · · · · · · · · · · · · · Eq (5.99) Recursion · · · · · · · · · · · · · · · · · · · 286
· · · · · · · · · · · · · · · · · ·Tm 5.27 Closed form · · · · · · · · · · · · · · · · · 286
· · · · · · · · · · · · · · ·Eq (S-5.69) Memoization · · · · · · · · · · 286,S-191
· · · · · · · · · · · · · · · · AG S-5.43 Strong ind. · · · · · · · · · · · 286,S-191
Pn−1
i=0 T (i) + n · · · · · · · · A
286 · · · · · · · · · · · · Eq (5.100) Recursion · · · · · · · · · · · · · · · · · · · 286
· · · · · · · · · · · · · · · · AG S-5.44 Strong ind. · · · · · · · · · · · 286,S-192
· · · · · · · · · · · · · · · · · ·Tm 5.28 Asymtotic Aprx. · · · · · · · · · · · · · 286
· · · · · · · · · · · · · · ·Eq (S-5.70) Memoization · · · · · · · · · · 286,S-193
· · · · · · · · · · · · · · · Eq (5.101) Recursion · · · · · · · · · · · · · · · · · · · 287
· · · · · · · · · · · · · · · · AG S-5.45 Inductive prog. · · · · · · · 286,S-193
Pn−1 T (i)
i=1 n−1 + n · · · · · · · · 235 A · · · · · · · · · · · · · · Eq (5.21) Recursion · · · · · · · · · · · · · · · · · · · 236
· · · · · · · · · · · · · · · · · · · Tm 5.7 Closed form · · · · · · · · · · · · · · · · · 235
· · · · · · · · · · · · · · · · AG S-5.46 Memoization · · · · · · · · · · 287,S-194
· · · · · · · · · · · · · · · · · · AG 5.12 Strong ind. · · · · · · · · · · · · · · · · · 237
· · · · · · · · · · · · · · · · · ·Tm 12.2 Asymptotic Aprx. · · · · · · · · · · · 703
2 n−1
P
i=0
T (i)
n
+ 1 · · · · · · · 287 A · · · · · · · · · · · · Eq (5.103) Recursion · · · · · · · · · · · · · · · · · · · 287
· · · · · · · · · · · · · · · · AG S-5.47 Strong ind. · · · · · · · · · · · 287,S-195
· · · · · · · · · · · · · · · · · ·Tm 5.29 Closed form · · · · · · · · · · · · · · · · · 288
· · · · · · · · · · · · · · · · AG S-5.48 Memoization · · · · · · · · · · 287,S-196
(2/n) ∑_{i=0}^{n−1} T(i) + n − 1 · · 445 · · · · · · · · · · · · · · · Tm 8.1 Asymptotic Aprx. · · · · · · · · · · · 445
(2/n) ∑_{i=0}^{n−1} T(i) + Θ(n) · · 701 · · · · · · · · · · · · · ·Tm 12.1 Asymptotic Aprx. · · · · · · · · · · · 701
for other types of complete recursion, see Catalan number and Euler zigzag number.
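
A quick illustration of the tabulation idea behind this block: the recurrence in the first entry can be evaluated bottom-up in O(n) by carrying a running prefix sum. The sketch below is illustrative Python only; it assumes the base case T(1) = 1 and is not one of the book's numbered algorithms.

    # Evaluate T(n) = sum_{i=1}^{n-1} T(i) + 1 by strong induction,
    # keeping a running sum of all previous values so each step is O(1).
    def t_complete(n):
        t, total = 1, 1              # T(1), and T(1) + ... + T(i-1)
        for i in range(2, n + 1):
            t = total + 1            # T(i) = (sum of earlier terms) + 1
            total += t
        return t

    # Under this base case the value doubles at every step: T(n) = 2^(n-1).
    assert [t_complete(n) for n in range(1, 6)] == [1, 2, 4, 8, 16]
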
Conjunctive normal form
Checking · · · · · · · · · · · · · ·649 · · · · · · · · · · · · · · · · ·Eq (11.5) Inductive prog. · · · · · · · · · · · · · 649
· · · · · · · · · · · · · · · · ·Eq (11.7) Recursion · · · · · · · · · · · · · · · · · · · 649
Convert to CNF-3 · · · · 652 Tseitin trans. · · · AG 11.4 Recursion · · · · · · · · · · · · · · · · · · · 652
see also under satisfiability, tautology, and fallacy
Connectivity (Graph)
Connectivity · · · · · · · · · · 375 · · · · · · · · · · · · · · · · · · Eq (7.8) Reduction · · · · · · · · · · · · · · · · · · 375
≤p GCC · · · · · · · · Eq (7.9) Reduction · · · · · · · · · · · · · · · · · · 377
Find connected components · · · · · · · · · · · · · · · · · · AG 7.17 Stack · · · · · · · · · · · · · · · · · · · · · · · 376
· · · · · · · · · · · · · · · · · · · · · · · 376 · · · · · · · · · · · · · · · · · AG S-7.3 Recursion · · · · · · · · · · · · · 420,S-271
· · · · · · · · · · · · · · · · · AG S-7.4 Stack · · · · · · · · · · · · · · · · · 420,S-272
Consecutive sub-sequence arithmetic problems
Maximum sum · · · · · · · · · 22 · · · · · · · · · · · · · · · · · · AG 1.13 by def. · · · · · · · · · · · · · · · · · · · · · · · 22
· · · · · · · · · · · · · · · · · · AG 1.14 by def. · · · · · · · · · · · · · · · · · · · · · · · 23
· · · · · · · · · · · · · · · · · · · AG 3.6 Divide & Conq. · · · · · · · · · · · · · 101
☺ Kadane · · · · · AG 10.31 m-Reduction · · · · · · · · · · · · · · · · 609
≤p minCSS · · AG S-10.50 Reduction · · · · · · · · · · · · 629,S-471
≤p minCSPp · AG S-10.54 Reduction · · · · · · · · · · · · 629,S-473
⟨ending-at ver.⟩ (MCSSe) · · · · · · · · · · · · · · · Eq (10.95) Recursion · · · · · · · · · · · · · · · · · · · 608
Maximum product 31,S-16 · · · · · · · · · · · · · · · · · AG S-1.2 by def. · · · · · · · · · · · · · · · · · · 31,S-16
· · · · · · · · · · · · · · · · AG S-3.18 Divide & Conq. · · · · · · · · 147,S-71
☺ · · · · · · · · · · · AG S-10.58 m-Reduction · · · · · · · · · · 632,S-477
⟨ending-at ver.⟩ (MCSPe) · · · · · · · · · · · · Eq (S-10.101) Recursion · · · · · · · · · · · · · 632,S-477
Maximum product · · · · · · · · · · · · · · · · AG S-3.17 Divide & Conq. · · · · · · · · 147,S-70
(positive number) 147,S-70 ≤p MCSS · · · · · · AG 10.30 Reduction · · · · · · · · · · · · · · · · · · 607
≤p minCSPp · AG S-10.51 Reduction · · · · · · · · · · · · 629,S-471
☺ · · · · · · · · · · · AG S-10.56 m-Reduction · · · · · · · · · · 631,S-475
≤p MCSP · Eq (S-10.103) Reduction · · · · · · · · · · · · 632,S-478
⟨ending-at ver.⟩ (MCSPpe) · · · · · · · · · · · · · Eq (S-10.99) Recursion · · · · · · · · · · · · · 631,S-475
Minimum sum · · · · 30,S-15 · · · · · · · · · · · · · · · · · AG S-1.1 by def. · · · · · · · · · · · · · · · · · · 30,S-15
· · · · · · · · · · · · · · · · AG S-3.16 Divide & Conq. · · · · · · · · 147,S-69
≤p MCSS · · · · · · AG 10.28 Reduction · · · · · · · · · · · · · · · · · · 605
≤p minCSPp · AG S-10.53 Reduction · · · · · · · · · · · · · · 629,605
☺ · · · · · · · · · · · AG S-10.55 m-Reduction · · · · · · · · · · 631,S-474
⟨ending-at ver.⟩ (minCSSe) · · · · · · · · · · · · · Eq (S-10.98) Recursion · · · · · · · · · · · · · 631,S-474
Minimum product · 31,S-16 · · · · · · · · · · · · · · · · · AG S-1.3 by def. · · · · · · · · · · · · · · · · · · 31,S-17
· · · · · · · · · · · · · · · · AG S-3.20 Divide & Conq. · · · · · · · · 148,S-73
☺ · · · · · · · · · · · AG S-10.59 m-Reduction · · · · · · · · · · 632,S-478
⟨ending-at ver.⟩ (minCSPe) · · · · · · · · · · · · Eq (S-10.102) Recursion · · · · · · · · · · · · · 632,S-477
Minimum product · · · · · · · · · · · · · · · · AG S-3.19 Divide & Conq. · · · · · · · · 148,S-72
(positive number) 148,S-72 ≤p MCSPp · · · · AG 10.29 Reduction · · · · · · · · · · · · · · · · · · 606
≤p minCSS · · AG S-10.52 Reduction · · · · · · · · · · · · 629,S-472
☺ · · · · · · · · · · · AG S-10.57 m-Reduction · · · · · · · · · · 631,S-476
≤p minCSP Eq (S-10.104) Reduction · · · · · · · · · · · · 632,S-479
⟨ending-at ver.⟩ (minCSPpe) · · · · · · · · · · · Eq (S-10.100) Recursion · · · · · · · · · · · · · 631,S-476
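
All of the ⟨ending-at ver.⟩ rows above follow one scheme: solve "best consecutive subsequence ending at i" for every i, then take the best over all i. For the maximum-sum case this is Kadane's algorithm cited at AG 10.31; a minimal illustrative Python sketch (not the book's listing):

    def max_consecutive_sum(a):
        best = ending_here = a[0]
        for x in a[1:]:
            ending_here = max(x, ending_here + x)   # extend the run or restart at x
            best = max(best, ending_here)           # m-reduction: best over all i
        return best

    assert max_consecutive_sum([3, -4, 5, -1, 2, -6, 4]) == 6   # from 5 - 1 + 2
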
Convex hull · · · · · · · · · · · · · 580 Graham’s scan · AG 10.22 Reduction + Stack · · · · · · · · · · 581
Critical Path problem · · · 267 A · · · · · · · · · · · · · · Eq (5.51) Recursion · · · · · · · · · · · · · · · · · · · 267
CPM/PERT · · · · AG 5.31 Strong ind. · · · · · · · · · · · · · · · · · 268
≤p LPC · · · · · · · ·AG 10.17 Reduction · · · · · · · · · · · · · · · · · · 572
Cycle detection (Graph)
Cycle in ungraph · · · · · ·378 · · · · · · · · · · · · · · · · · · AG 7.18 Recursion · · · · · · · · · · · · · · · · · · · 378
· · · · · · · · · · · · · · · · · AG S-7.5 Stack · · · · · · · · · · · · · · · · · 420,S-273
· · · · · · · · · · · · · · · · · AG S-7.6 Queue · · · · · · · · · · · · · · · · 420,S-274
Cycle in digraph · · · · · · 255 ≤p TPS · · · · · · · · Eq (5.44) Reduction · · · · · · · · · · · · · · · · · · 256
· · · · · · · · · · · · · · · · · · AG 7.19 Recursion · · · · · · · · · · · · · · · · · · · 380
Cycle numbers
Number of at least k ≤m p SNF · · · · · · Eq (10.38) m-Reduction · · · · · · · · · · · · · · · · 590
cycles · · · · · · · · · · · 431,S-324 ≤m p SNF · · · · · · AG S-7.74 m-Reduction + Cyl · · · 431,S-324
≤m p CNam · · · · Eq (10.40) m-Reduction · · · · · · · · · · · · · · · · 590
≤p CNam · · · · · Eq (10.42) Reduction · · · · · · · · · · · · · · · · · · 590
Number of at most k ≤m p SNF · · · · · · Eq (10.37) m-Reduction · · · · · · · · · · · · · · · · 590
cycles · · · · · · · · · · · 431,S-323 ≤m p SNF · · · · · · AG S-7.72 m-Reduction + Cyl · · · 431,S-323
≤m p SNF · · · · · · AG S-7.73 m-Reduction + Cyl · · · 431,S-323
≤m p CNal · · · · · Eq (10.39) m-Reduction · · · · · · · · · · · · · · · · 590
≤p CNal · · · · · · Eq (10.41) Reduction · · · · · · · · · · · · · · · · · · 590
Stirling number of the A · · · · · · · · · · · · · · Eq (6.28) Recursion · · · · · · · · · · · · · · · · · · · 348
first kind · · · · · · · · · · · · · · 348 · · · · · · · · · · · · · · · · AG S-6.26 2D Memoization · · · · · · 347,S-238
· · · · · · · · · · · · · · · · AG S-6.27 2D Memoization II · · · ·347,S-238
· · · · · · · · · · · · · · · · AG S-6.28 2D Str. ind. · · · · · · · · · · 347,S-239
☺ · · · · · · · · · · · · · AG S-7.71 Strong ind. + Cyl · · · · 431,S-322
Disjunctive normal form
Checking · · · · · · · · · · · · · ·682 · · · · · · · · · · · · · · · Eq (11.77) Inductive prog. · · · · · · · · · · · · · 682
· · · · · · · · · · · · · · · Eq (11.79) Recursion · · · · · · · · · · · · · · · · · · · 682
see also under satisfiability, tautology, and fallacy
Divide recurrence problems
T (n/2) + 1 · · · · · · · · · · · · 142 Master Theorem · Tm 3.9 Asymptotic Aprx. · · · · · · · · · · · 142
T(⌊n/2⌋) + 1 · · · · · · · · · · · 234 · · · · · · · · · · · · · · · · · · · Tm 5.6 Closed form · · · · · · · · · · · · · · · · · 234
· · · · · · · · · · · · · · · · ·Eq (5.19) Divide & Conq. · · · · · · · · · · · · · 234
· · · · · · · · · · · · · · · · · · AG 5.11 Strong ind. · · · · · · · · · · · · · · · · · 235
· · · · · · · · · · · · · · · · ·Eq (5.28) Memoization · · · · · · · · · · · · · · · · 245
· · · · · · · · · · · · · · · · · · AG 5.17 Tail recursion · · · · · · · · · · · · · · · 245
T(⌊(n−1)/2⌋) + 1 · · · · · · · · 276 · · · · · · · · · · · · · · · · · ·Tm 5.14 Closed form · · · · · · · · · · · · · · · · · 276
· · · · · · · · · · · · · · · · ·Eq (5.56) Divide & Conq. · · · · · · · · · · · · · 275
· · · · · · · · · · · · · · · · AG S-5.10 Strong ind. · · · · · · · · · · · 275,S-148
· · · · · · · · · · · · · · ·Eq (S-5.23) Memoization · · · · · · · · · · 275,S-149
· · · · · · · · · · · · · · · · AG S-5.11 Tail recursion · · · · · · · · · 275,S-150
T (n/2) + n · · · · · · · · · · · 142 Master Theorem · Tm 3.9 Asymptotic Aprx. · · · · · · · · · · · 142
T(⌊n/2⌋) + n · · · · · · · · · · · 276 · · · · · · · · · · · · · · · · · ·Tm 5.15 Closed form · · · · · · · · · · · · · · · · · 276
· · · · · · · · · · · · · · · · ·Eq (5.57) Divide & Conq. · · · · · · · · · · · · · 276
· · · · · · · · · · · · · · · · AG S-5.12 Strong ind. · · · · · · · · · · · 276,S-150
· · · · · · · · · · · · · · ·Eq (S-5.25) Memoization · · · · · · · · · · 276,S-151
· · · · · · · · · · · · · · · · AG S-5.13 Tail recursion · · · · · · · · · 276,S-152
2T (n/2) + 1 · · · · · · · · · · ·142 Master Theorem · Tm 3.9 Asymptotic Aprx. · · · · · · · · · · · 142
T(⌊n/2⌋) + T(⌈n/2⌉) + 1 · · 233 · · · · · · · · · · · · · · · · · · · Tm 5.5 Closed form · · · · · · · · · · · · · · · · · 233
· · · · · · · · · · · · · · · · ·Eq (5.17) Divide & Conq. · · · · · · · · · · · · · 233
· · · · · · · · · · · · · · · · · · AG 5.10 Strong ind. · · · · · · · · · · · · · · · · · 234
· · · · · · · · · · · · · · · · ·Eq (5.27) Memoization · · · · · · · · · · · · · · · · 242
T(⌊(n−1)/2⌋) + T(⌈(n−1)/2⌉) + 1 · · 277 · · · · · · · · · ·Tm 5.16 Closed form · · · · · · · · · · · · · · · · · 277
· · · · · · · · · · · · · · · · ·Eq (5.58) Divide & Conq. · · · · · · · · · · · · · 277
· · · · · · · · · · · · · · · · AG S-5.14 Strong ind. · · · · · · · · · · · 277,S-153
· · · · · · · · · · · · · · ·Eq (S-5.27) Memoization · · · · · · · · · · 277,S-154
2T (n/2) + n · · · · · · · · · · 142 Master Theorem · Tm 3.9 Asymptotic Aprx. · · · · · · · · · · · 142
T(⌊n/2⌋) + T(⌈n/2⌉) + n · 277 · · · · · · · · · · · · · · · · ·Eq (5.60) Closed form · · · · · · · · · · · · · · · · · 277
· · · · · · · · · · · · · · · · ·Eq (5.59) Divide & Conq. · · · · · · · · · · · · · 277
· · · · · · · · · · · · · · · · AG S-5.15 Strong ind. · · · · · · · · · · · 277,S-154
· · · · · · · · · · · · · · ·Eq (S-5.30) Memoization · · · · · · · · · · 277,S-156
2T(n/2) + log n · · · · · · · 142 Master Theorem · Tm 3.9 Asymptotic Aprx. · · · · · · · · · · · 142
T(⌊(n−1)/2⌋) + T(⌈(n−1)/2⌉) + ⌊log n⌋ + 1 · · 278 · · ·Eq (5.61) Divide & Conq. · · · · · · · · · · · · · 278
· · · · · · · · · · · · · · · · AG S-5.16 Strong ind. · · · · · · · · · · · 278,S-157
· · · · · · · · · · · · · · ·Eq (S-5.31) Memoization · · · · · · · · · · 278,S-157
· · · · · · · · · · · · · · · · · ·Tm 5.62 Asymptotic Aprx. · · · · · · · · · · · 278
Division · · · · · · · · · · · · · · · · · · 77 · · · · · · · · · · · · · · · · ·Eq (2.38) Recursion · · · · · · · · · · · · · · · · · · · · · 77
· · · · · · · · · · · · · · · · · · AG 2.33 Tail recursion · · · · · · · · · · · · · · · · 77
· · · · · · · · · · · · · · · · · · AG 2.34 Inductive prog. · · · · · · · · · · · · · · 77
· · · · · · · · · · · · · · · · · · AG 3.15 Divide & Conq. · · · · · · · · · · · · · 112
Dot product · · · · · · · · · · · · · 47 · · · · · · · · · · · · · · · · ·Eq (2.19) Recursion · · · · · · · · · · · · · · · · · · · · 47
· · · · · · · · · · · · · · · · · · · AG 2.6 Inductive prog. · · · · · · · · · · · · · · 47
· · · · · · · · · · · · · · · · · AG S-3.6 Divide & Conq. · · · · · · · · 145,S-58
Double factorial (see under Product)
Element uniqueness · · · · · · 56 · · · · · · · · · · · · · · · · · · · Lm 2.3 Recursion · · · · · · · · · · · · · · · · · · · · 56
· · · · · · · · · · · · · · · · · · AG 2.15 Inductive prog. · · · · · · · · · · · · · · 56
≤m p search · · · · · · AG 2.16 m-Reduction · · · · · · · · · · · · · · · · · 57
· · · · · · · · · · · · · · · · · AG S-3.3 Divide & Conq. · · · · · · · · 144,S-55
☺ ≤p sort · · · · · · AG 10.5 Reduction · · · · · · · · · · · · · · · · · · 562
Euler zigzag number · · · · 237 A · · · · · · · · · · · · · · Eq (5.24) Recursion · · · · · · · · · · · · · · · · · · · 238
André's problem · · · · · · · · · · · · · · · · · · AG 5.13 Strong ind. · · · · · · · · · · · · · · · · · 238
· · · · · · · · · · · · · · ·Eq (S-5.73) Memoization · · · · · · · · · · 289,S-201
Eulerian numbers (see under Number of ascents)
Eulerian numbers of the second kind (see under Number of ascents)
Factorial n! (see under Product)
Fallacy logic related problems
CNF (FCN) · · · · · · · · · · ·683 ¬ SCN · · · · · · · · Eq (11.80) co-NP-complete · · · · · · · · · · · · · 683
TDN ≤p · · · · · · Eq (11.84) co-NP-complete · · · · · · · · · · · · · 683
DNF (FDN) · · · · · · · · · · 683 ≤p TCN · · · · · · Eq (11.83) Reduction · · · · · · · · · · · · · · · · · · 683
Fallacy (FAL) · · · · · · · · · 680 ¬ SAT · · · · · · · · Eq (11.67) co-NP-complete · · · · · · · · · · · · · 680
TAU ≤p · · · · · · Eq (11.69) co-NP-complete · · · · · · · · · · · · · 681
LEQ ≤p · · · · · · Eq (11.74) co-NP-complete · · · · · · · · · · · · · 682
FCN ≤p · · · · · · Eq (11.88) co-NP-complete · · · · · · · · · · · · · 684
Fermat Number · · · · · · · · · · 52 ≤m p POW · · · · · · Eq (2.25) m-Reduction · · · · · · · · · · · · · · · · · 52
· · · · · · · · · · · · · · · · ·Eq (2.26) Recursion · · · · · · · · · · · · · · · · · · · · 52
· · · · · · · · · · · · · · · · · · AG 2.12 Inductive prog. · · · · · · · · · · · · · · 53
A · · · · · · · · · · · · Eq (5.106) Recursion · · · · · · · · · · · · · · · · · · · 289
· · · · · · · · · · · · · · · · AG S-5.49 Strong ind. · · · · · · · · · · · 289,S-198
· · · · · · · · · · · · · · · · AG S-5.50 Memoization · · · · · · · · · · 289,S-198
Fibonacci problems
Fibonacci number · · · · · 246 A · · · · · · · · · · · · · · Eq (5.29) Recursion · · · · · · · · · · · · · · · · · · · 246
· · · · · · · · · · · · · · · · · · AG 5.18 Memoization · · · · · · · · · · · · · · · · 246
· · · · · · · · · · · · · · · · · · AG 5.19 Strong ind. · · · · · · · · · · · · · · · · · 248
· · · · · · · · · · · · · · · · ·Eq (5.40) Divide & Conq. · · · · · · · · · · · · · 251
· · · · · · · · · · · · · · · · · · AG 5.24 Memoiz. + D&C · · · · · · · · · · · 252
· · · · · · · · · · · · · · · · · · AG 5.25 Memoiz. + D&C · · · · · · · · · · · 254
· · · · · · · · · · · · · · · · · · AG 7.34 Str. ind. + Cir. · · · · · · · · · · · · 398
· · · · · · · · · · · · · · · · · · AG 7.35 Memoiz. + Cir. · · · · · · · · · · · · ·398
· · · · · · · · · · · · · · · · · · AG 7.38 D&C + Jmp. · · · · · · · · · · · · · · · 402
≤p WWP · · · · · Eq (10.14) Reduction · · · · · · · · · · · · · · · · · · 567
≤p NPP · · · · · · · · AG 10.9 Reduction · · · · · · · · · · · · · · · · · · 568
≤m p LUC · · · · · · Eq (10.82) m-Reduction · · · · · · · · · · · · · · · · 596
743

Binet formula · Eq (10.84) m-Reduction · · · · · · · · · · · · · · · · 597


≤p MXP · · · · · · Eq (10.97) Reduction · · · · · · · · · · · · · · 629,609
≤m p LUC · · · · ·Eq (10.113) m-Reduction · · · · · · · · · · · · · · · · 624
≤m p LUC · · · · ·Eq (10.114) m-Reduction · · · · · · · · · · · · · · · · 624
≤p MXP · · · · · Eq (10.115) Reduction · · · · · · · · · · · · · · · · · · 624
≤p LUS · · · · · Eq (S-10.83) Reduction · · · · · · · · · · · · 624,S-444
≤p FTN · · · · Eq (S-10.91) Reduction · · · · · · · · · · · · 629,S-466
≤p FRC · · · · Eq (S-10.92) Reduction · · · · · · · · · · · · 629,S-466
≤m p LSC · · · · Eq (S-10.96) m-Reduction · · · · · · · · · · · · · · S-469
Fibonacci tree size · · · · 140 A · · · · · · · · · · · · · · Eq (3.33) Recursion · · · · · · · · · · · · · · · · · · · 140
· · · · · · · · · · · · · · · · · · · Tm 8.2 Asymptotic Aprx. · · · · · · · · · · · 448
· · · · · · · · · · · · · · · · AG S-7.10 Str. ind. + Cir. · · · · · · 422,S-281
· · · · · · · · · · · · · · · · · AG S-7.9 Memoiz. + Cir. · · · · · · 422,S-281
≤p FIB · · · · · · · Eq (10.77) Reduction · · · · · · · · · · · · · · · · · · 595
≤p FRN · · · · Eq (S-10.93) Reduction · · · · · · · · · · · · 629,S-467
≤p MXP · · · · Eq (S-10.94) Reduction · · · · · · · · · · · · 629,S-467
Recursive calls · · · · · · · · 247 A · · · · · · · · · · · · · · Eq (5.31) Recursion · · · · · · · · · · · · · · · · · · · 247
· · · · · · · · · · · · · · · · · AG S-7.8 Str. ind. + Cir. · · · · · · 422,S-281
· · · · · · · · · · · · · · · · AG S-7.11 Memoiz. + Cir. · · · · · · 422,S-282
≤p FIB · · · · · · · Eq (10.78) Reduction · · · · · · · · · · · · · · · · · · 596
≤p FTN · · · · · · Eq (10.79) Reduction · · · · · · · · · · · · · · · · · · 596
≤p MXP · · · · Eq (S-10.95) Reduction · · · · · · · · · · · · 629,S-468
see also Lucas number and Lucas sequence related problems
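
Two of the paradigms contrasted above, in miniature and assuming the convention F(0) = 0, F(1) = 1 (illustrative Python, not the book's AG listings): top-down memoization, and bottom-up strong induction keeping only a two-value window, which is the idea the circular-array variants generalize.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib_memo(n):                     # memoization: each F(i) is computed once
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

    def fib_bottom_up(n):                # strong induction in O(1) extra space
        a, b = 0, 1                      # F(0), F(1)
        for _ in range(n):
            a, b = b, a + b              # slide the two-value window forward
        return a

    assert fib_memo(30) == fib_bottom_up(30) == 832040
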
Frobenius postage stamp problem (see Frobenius number under Stamp problems)
Greater between elements · · · · · · · · · · · · · · ·Eq (S-2.34) Recursion · · · · · · · · · · · · · · · 88,S-45
sequence (GBW) · · · · · · · · 121 · · · · · · · · · · · · · · · · AG S-2.32 Inductive prog. · · · · · · · · · 88,S-45
· · · · · · · · · · · · · · · · · · AG 3.18 Divide & Conq. · · · · · · · · · · · · · 121
☺ · · · · · · · · · · · · · · AG S-7.1 Stack · · · · · · · · · · · · · · · · · 419,S-269
≤p LBW · · · · · · Eq (10.10) Reduction · · · · · · · · · · · · · · · · · · 559
Greatest common divisor · · · · · · · · · · · · · · · · · · · AG 1.3 by def. · · · · · · · · · · · · · · · · · · · · · · · · 8
(GCD) · · · · · · · · · · · · · · · · · · · · 7 Euclid’s algo · · · · · AG 1.4 by recursive def. · · · · · · · · · · · · · · 8
☺ Euclid's algo · AG 2.32 Tail recursion · · · · · · · · · · · · · · · · 76
≤p LCM · · · · · · · Eq (10.9) Reduction · · · · · · · · · · · · · · · · · · 558
multi GCD · · · · · · · · · · 89,S-48 · · · · · · · · · · · · · · ·Eq (S-2.37) 1st order rec. · · · · · · · · · · · 89,S-48
· · · · · · · · · · · · · · · · AG S-2.37 Inductive prog. · · · · · · · · · 89,S-48
· · · · · · · · · · · · · · · · AG S-3.29 Divide & Conq. · · · · · · · · 150,S-81
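
A minimal sketch of the two directions recorded above: Euclid's tail-recursive GCD (cf. AG 2.32), and the reduction of LCM to GCD via lcm(a, b) = ab / gcd(a, b), which is the direction of Eq (10.8). Illustrative Python:

    def gcd(a, b):
        return a if b == 0 else gcd(b, a % b)   # tail recursion: no work after the call

    def lcm(a, b):
        return a * b // gcd(a, b)               # LCM reduced to GCD

    assert gcd(48, 36) == 12 and lcm(48, 36) == 144
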
Hamiltonian Graph Problems
Hamiltonian Cycle · · · · 677 VCP ≤p · · · · · · · · · · · · · [94] NP-complete · · · · · · · · · · · · · · · · 677
Hamiltonian Path · · · · · 676 HMC ≤p · · · · · · · Tm 11.20 NP-complete · · · · · · · · · · · · · · · · 677
Heap data structures
Binary Maxheap or simply max-heap
Check · · · · · · · · · · · · · · · · · 503 ☺ · · · · · · · · · · · · · · · · AG 9.1 by def. · · · · · · · · · · · · · · · · · · · ·504
· · · · · · · · · · · · · · · · · · · AG 9.2 Divide & Conq. · · · · · · · · · · · · · 504
☺ · · · · · · · · · · · · · · · · AG 9.3 Divide & Conq. · · · · · · · · · · · · · 504
Construct · · · · · · · · · · · · · 508 · · · · · · · · · · · · · · · · · · · Lm 9.1 Recursion · · · · · · · · · · · · · · · · · · · 508
· · · · · · · · · · · · · · · · · · · AG 9.6 Inductive prog. · · · · · · · · · · · · · 508
Heapify · · · · · · · · · · Lm 9.2 Divide & Conq. · · · · · · · · · · · · · 509
☺ Heapify · · · · · · · AG 9.7 Divide & Conq. · · · · · · · · · · · · · 510
Delete-max · · · · · · · · · · · 506 · · · · · · · · · · · · · · · · · · · AG 9.5 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 507
Insert · · · · · · · · · · · · · · · · · 505 · · · · · · · · · · · · · · · · · · · AG 9.4 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 506
Binary Minheap or simply min-heap
Check · · · · · · · · · ·546,S-366 ☺ · · · · · · · · · · · · · · AG S-9.1 by def. · · · · · · · · · · · · · · 546,S-366
· · · · · · · · · · · · · · · · · AG S-9.2 Divide & Conq. · · · · · · · 546,S-367
☺ · · · · · · · · · · · · · · AG S-9.3 Divide & Conq. · · · · · · · 546,S-367
Delete-min · · · · · · 546,S-368 · · · · · · · · · · · · · · · · · AG S-9.5 · · · · · · · · · · · · · · · · · · · · · · · 546,S-368
Insert · · · · · · · · · · · 546,S-367 · · · · · · · · · · · · · · · · · · · AG 9.4 · · · · · · · · · · · · · · · · · · · · · · · 546,S-367
Binary Minmaxheap or simply minmax-heap
Check · · · · · · · · · · · · · · · · · 523 ☺ · · · · · · · · · · · · · · · AG 9.20 by def. · · · · · · · · · · · · · · · · · · · ·525
· · · · · · · · · · · · · · · · AG S-9.35 Divide & Conq. · · · · · S-390,S-391
☺ · · · · · · · · · · · · · AG S-9.34 Divide & Conq. · · · · · S-390,S-390
Construct · · · · · · · · · · · · · 530 · · · · · · · · · · · · · · · · · · · Lm 9.3 Recursion · · · · · · · · · · · · · · · · · · · 530
· · · · · · · · · · · · · · · · · · AG 9.24 Inductive prog. · · · · · · · · · · · · · 530
· · · · · · · · · · · · · · · · · · AG 9.25 Inductive prog. · · · · · · · · · · · · · 530
· · · · · · · · · · · · · · · · · · · Lm 9.4 Divide & Conq. · · · · · · · · · · · · · 531
☺ · · · · · · · · · · · · · · · AG 9.26 Divide & Conq. · · · · · · · · · · · · · 531
Delete-max · · · · · · · · · · · 528 · · · · · · · · · · · · · · · · · · AG 9.23 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 529
Delete-min · · · · · · · · · · · · 528 · · · · · · · · · · · · · · · · · · AG 9.22 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 529
Find-max · · · · · · · · · · · · · 523 · · · · · · · · · · · · · · · · ·Eq (9.13) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 523
Find-min · · · · · · · · · · · · · 523 · · · · · · · · · · · · · · · · ·Eq (9.12) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 523
Insert · · · · · · · · · · · · · · · · · 526 · · · · · · · · · · · · · · · · · · AG 9.21 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 527
Leftist max heap
Check · · · · · · · · · · ·552,S-394 · · · · · · · · · · · · · · · · Eq (S-9.1) Reduction · · · · · · · · · · · · 552,S-394
· · · · · · · · · · · · · · · · Eq (S-9.2) Recursion · · · · · · · · · · · · · 552,S-394
Construct · · · · · · · 552,S-395 · · · · · · · · · · · · · · · · Eq (S-9.4) Recursion · · · · · · · · · · · · · 552,S-395
· · · · · · · · · · · · · · · · AG S-9.38 Inductive prog. · · · · · · · 552,S-395
· · · · · · · · · · · · · · · · Eq (S-9.5) Divide & Conq. · · · · · · · 552,S-396
· · · · · · · · · · · · · · · · AG S-9.39 Divide & Conq. · · · · · · · 552,S-396
Delete-max · · · · · 552,S-395 · · · · · · · · · · · · · · · · AG S-9.37 Reduction · · · · · · · · · · · · 552,S-395
Insert · · · · · · · · · · · 552,S-395 · · · · · · · · · · · · · · · · Eq (S-9.3) Reduction · · · · · · · · · · · · 552,S-395
Max heap property · · · ·533 · · · · · · · · · · · · · · · · ·Eq (9.23) Recursion · · · · · · · · · · · · · · · · · · · 533
Merge · · · · · · · · · · 552,S-394 Meld · · · · · · · · · · AG S-9.36 Recursion · · · · · · · · · · · · · 552,S-394
Leftist min heap
Check · · · · · · · · · · · · · · · · · 533 · · · · · · · · · · · · · · · · ·Eq (9.24) Reduction · · · · · · · · · · · · · · · · · · 533
· · · · · · · · · · · · · · · · ·Eq (9.25) Recursion · · · · · · · · · · · · · · · · · · · 534
Construct · · · · · · · · · · · · · 539 · · · · · · · · · · · · · · · · ·Eq (9.29) Recursion · · · · · · · · · · · · · · · · · · · 539
· · · · · · · · · · · · · · · · · · AG 9.30 Inductive prog. · · · · · · · · · · · · · 539
· · · · · · · · · · · · · · · · · · AG 9.31 Inductive prog. · · · · · · · · · · · · · 539
· · · · · · · · · · · · · · · · ·Eq (9.30) Divide & Conq. · · · · · · · · · · · · · 539
· · · · · · · · · · · · · · · · · · AG 9.32 Divide & Conq. · · · · · · · · · · · · · 541
Delete-min · · · · · · · · · · · · 538 · · · · · · · · · · · · · · · · · · AG 9.29 Reduction · · · · · · · · · · · · · · · · · · 538
Insert · · · · · · · · · · · · · · · · · 537 · · · · · · · · · · · · · · · · · · AG 9.28 Reduction · · · · · · · · · · · · · · · · · · 537
Merge · · · · · · · · · · · · · · · · ·536 Meld · · · · · · · · · · · · AG 9.27 Recursion · · · · · · · · · · · · · · · · · · · 536
Min heap property · · · · 533 · · · · · · · · · · · · · · · · ·Eq (9.22) Recursion · · · · · · · · · · · · · · · · · · · 533
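
The Construct rows above distinguish the inductive build (n repeated inserts, O(n log n)) from Heapify, which repairs violations bottom-up in O(n) total. A sketch of the sift-down step for a max-heap, written iteratively here (illustrative Python; the book's Lm 9.2 / AG 9.7 cast it as divide and conquer):

    def sift_down(a, i, n):
        while True:
            largest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < n and a[l] > a[largest]:
                largest = l
            if r < n and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]   # push the violation one level down
            i = largest

    def build_max_heap(a):
        for i in range(len(a) // 2 - 1, -1, -1):  # last internal node back to root
            sift_down(a, i, len(a))

    h = [3, 9, 2, 1, 4, 5]
    build_max_heap(h)
    assert all(h[(i - 1) // 2] >= h[i] for i in range(1, len(h)))
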
Huffman code (see Minimum length code)
Independent set · · · · · · · · · 670 · · · · · · · · · · · · · · · · AG S-4.28 Greedy Aprx. · · · · · · · · · 209,S-118
CLQ ≤p · · · · · · Eq (11.51) NP-hard · · · · · · · · · · · · · · · · · · · · 670
VCP ≤p · · · · · · Eq (11.55) NP-hard · · · · · · · · · · · · · · · · · · · · 672
SCN ≤p · · · · Eq (S-11.96) NP-hard · · · · · · · · · · · · · · 692,S-507
⟨decision ver.⟩ · · · · · · · · · · · 671 CLQdv ≤p · · · · Eq (11.54) NP-complete · · · · · · · · · · · · · · · · 672
VCPdv ≤p · · · · Eq (11.59) NP-complete · · · · · · · · · · · · · · · · 674
SCN ≤p · · · · Eq (S-11.97) NP-complete · · · · · · · · · · 692,S-508
Integer multiplication · · · · 21 · · · · · · · · · · · · · · · · · · AG 1.11 Inductive prog. · · · · · · · · · · · · · · 21
Doubling method AG 1.12 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 21
· · · · · · · · · · · · · · · · ·Eq (2.15) Recursion · · · · · · · · · · · · · · · · · · · · 43
· · · · · · · · · · · · · · · · AG S-3.23 Divide & Conq. · · · · · · · · 149,S-77
Integer partition
any parts · · · · · · · 433,S-333 ≤m p IPE · · · · · · · AG S-7.84 m-Reduction + Cyl · · · 433,S-333
≤m p IPE · · · · Eq (S-10.74) m-Reduction · · · · · · · · · · 623,S-442
≤p IPam · · · · Eq (S-10.75) Reduction · · · · · · · · · · · · 623,S-443
≤p IPal · · · · · Eq (S-10.76) Reduction · · · · · · · · · · · · 623,S-443
≤p IPE · · · · · Eq (S-10.77) Reduction · · · · · · · · · · · · 623,S-443
at least k parts · 433,S-332 ≤m p IPE · · · · · · · AG S-7.83 m-Reduction + Cyl · · · 433,S-332
at most k parts · · · · · · · 331 A · · · · · · · · · · · · · · Eq (6.23) Recursion · · · · · · · · · · · · · · · · · · · 332
I(n, k) · · · · · · · · · · · · · · · · · · AG 6.31 2D Str. ind. · · · · · · · · · · · · · · · · 332
· · · · · · · · · · · · · · · · · · AG 6.32 Memoization · · · · · · · · · · · · · · · · 333
☺ · · · · · · · · · · · · · AG S-7.85 m-Reduction + Cyl · · · 433,S-333
· · · · · · · · · · · · · · · · AG S-7.86 m-Reduction + Cyl · · · 433,S-334
· · · · · · · · · · · · · · · · AG S-7.87 m-Reduction + Cyl · · · 433,S-335
☺ · · · · · · · · · · · · · AG S-7.88 Strong ind. + Cyl · · · · 433,S-335
· · · · · · · · · · · · · · · · AG S-7.89 Strong ind. + Cyl · · · · 433,S-336
≤m p IPE · · · · · · Eq (10.35) m-Reduction · · · · · · · · · · · · · · · · 588
≤p IPE · · · · · · · · · Tm 10.7 Reduction · · · · · · · · · · · · · · · · · · 588
≤p NPP · · · · · AG S-10.47 Reduction · · · · · · · · · · · · 621,S-437
≤p BIPam · · Eq (S-10.82) Reduction · · · · · · · · · · · · 623,S-443
at most k parts with upper bound I_b(n, k) · · 353,S-256 A · · · ·Eq (S-6.14) Recursion · · · · · · · 353,S-256
· · · · · · · · · · · · · · · · AG S-6.44 3D Str. ind. · · · · · · · · · · 353,S-256
☺ · · · · · · · · · · · · · AG S-6.45 Memoization · · · · · · · · · · 353,S-258
≤m p BIP · · · · Eq (S-10.78) m-Reduction · · · · · · · · · · 623,S-443
≤p BIP · · · · · Eq (S-10.79) Reduction · · · · · · · · · · · · 623,S-443
Exactly k parts · · · · · · · 326 A · · · · · · · · · · · · · · Eq (6.22) Recursion · · · · · · · · · · · · · · · · · · · 327
p(n, k) · · · · · · · · · · · · · · · · · · AG 6.28 2D Str. ind. · · · · · · · · · · · · · · · · 328
· · · · · · · · · · · · · · · · · · AG 6.30 Memoization · · · · · · · · · · · · · · · · 329
☺ · · · · · · · · · · · · · · · AG 7.52 Strong ind. + Cyl · · · · · · · · · · 414
☺ · · · · · · · · · · · · · · · AG 7.53 Strong ind. + Cyl · · · · · · · · · · 415
≤p IPam · · · · · · Eq (10.36) Reduction · · · · · · · · · · · · · · · · · · 589
≤p NPP · · · · · AG S-10.46 Reduction · · · · · · · · · · · · 621,S-436
≤p BIP · · · · · Eq (S-10.81) Reduction · · · · · · · · · · · · 623,S-443
Exactly k parts with upper bound p_b(n, k) · · 338 A · · · · · · · · Eq (6.26) Recursion · · · · · · · · · · · · 338
· · · · · · · · · · · · · · · · · · AG 6.35 3D Str. ind. · · · · · · · · · · · · · · · · 339
☺ · · · · · · · · · · · · · · · AG 6.36 Memoization · · · · · · · · · · · · · · · · 340
≤p BIPam · · Eq (S-10.80) Reduction · · · · · · · · · · · · 623,S-443
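
The partition entries above all grow out of one two-dimensional recursion. For p(n, k), the number of partitions of n into exactly k parts, the standard identity branches on whether the smallest part equals 1. A memoized sketch assuming p(0, 0) = 1 (illustrative Python, not the book's AG 6.28/6.30):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def p(n, k):
        if n == 0 and k == 0:
            return 1
        if k <= 0 or k > n:
            return 0
        # either the smallest part is 1 (drop it), or subtract 1 from each of the k parts
        return p(n - 1, k - 1) + p(n - k, k)

    assert p(7, 3) == 4   # 5+1+1, 4+2+1, 3+3+1, 3+2+2
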
Jacobsthal number · · · · · · 281 A · · · · · · · · · · · · · · Eq (5.73) Recursion · · · · · · · · · · · · · · · · · · · 281
· · · · · · · · · · · · · · · · AG S-5.24 Strong ind. · · · · · · · · · · · 281,S-169
· · · · · · · · · · · · · · ·Eq (S-5.45) Memoization · · · · · · · · · · 281,S-170
· · · · · · · · · · · · · · · · ·Eq (5.74) Recursion · · · · · · · · · · · · · · · · · · · 281
· · · · · · · · · · · · · · · · AG S-5.25 Inductive prog. · · · · · · · 281,S-170
· · · · · · · · · · · · · · · · ·Eq (5.75) Recursion · · · · · · · · · · · · · · · · · · · 281
· · · · · · · · · · · · · · · · AG S-5.25 Inductive prog. · · · · · · · 281,S-171
· · · · · · · · · · · · · · ·Eq (S-5.48) Divide & Conq. · · · · · · · 281,S-172
· · · · · · · · · · · · · · · · AG S-5.27 Memoiz. + D&C · · · · · 281,S-172
· · · · · · · · · · · · · · · · AG S-7.25 Str. ind. + Cir. · · · · · · 423,S-288
· · · · · · · · · · · · · · · · AG S-7.26 Memoiz. + Cir. · · · · · · 423,S-288
· · · · · · · · · · · · · · · · AG S-7.27 D&C + Jmp. · · · · · · · · · 423,S-289
≤p NPP · · · · · · · AG 10.13 Reduction · · · · · · · · · · · · · · · · · · 569
≤m p JCL · · · · · Eq (10.133) m-Reduction · · · · · · · · · · · · · · · · 626
≤m p JCL · · · · · Eq (10.134) m-Reduction · · · · · · · · · · · · · · · · 626
≤m p POW · · · · Eq (10.136) m-Reduction · · · · · · · · · · · · · · · · 626
≤p MXP · · · · · Eq (10.138) Reduction · · · · · · · · · · · · · · · · · · 626
≤p LUS · · · · · Eq (S-10.87) Reduction · · · · · · · · · · · · 626,S-454
Jacobsthal-Lucas number A · · · · · · · · · · · · · · Eq (5.78) Recursion · · · · · · · · · · · · · · · · · · · 282
· · · · · · · · · · · · · · · · · · · · · · · · · 282 · · · · · · · · · · · · · · · · AG S-5.28 Strong ind. · · · · · · · · · · · 282,S-173
· · · · · · · · · · · · · · ·Eq (S-5.49) Memoization · · · · · · · · · · 282,S-174
· · · · · · · · · · · · · · · · ·Eq (5.79) Recursion · · · · · · · · · · · · · · · · · · · 282
· · · · · · · · · · · · · · · · AG S-5.29 Inductive prog. · · · · · · · 282,S-174
· · · · · · · · · · · · · · ·Eq (S-5.52) Divide & Conq. · · · · · · · 282,S-176
· · · · · · · · · · · · · · · · AG S-5.30 Memoiz. + D&C · · · · · 282,S-176
· · · · · · · · · · · · · · · · AG S-7.28 Str. ind. + Cir. · · · · · · 424,S-289
· · · · · · · · · · · · · · · · AG S-7.29 Memoiz. + Cir. · · · · · · 424,S-290
· · · · · · · · · · · · · · · · AG S-7.30 D&C + Jmp. · · · · · · · · · 424,S-290
≤p NPP · · · · · · · AG 10.14 Reduction · · · · · · · · · · · · · · · · · · 569
≤m p JCN · · · · · Eq (10.130) m-Reduction · · · · · · · · · · · · · · · · 626
≤m p JCN · · · · · Eq (10.131) m-Reduction · · · · · · · · · · · · · · · · 626
≤m p JCN · · · · · Eq (10.132) m-Reduction · · · · · · · · · · · · · · · · 626
≤m p POW · · · · Eq (10.135) m-Reduction · · · · · · · · · · · · · · · · 626
≤m p JCN · · · · · Eq (10.137) m-Reduction · · · · · · · · · · · · · · · · 626
≤p MXP · · · · · Eq (10.139) Reduction · · · · · · · · · · · · · · · · · · 626
≤p LUS · · · · · Eq (S-10.88) Reduction · · · · · · · · · · · · 626,S-454
Job scheduling with · · · · · · · · · · · · · · · · · · AG 4.16 Greedy Algo. · · · · · · · · · · · · · · · 178
deadline · · · · · · · · · · · · · · · · 177 · · · · · · · · · · · · · · · · AG S-9.33 Greedy + maxheap · · · 551,S-389
Kibonacci number (generalized Fibonacci)
K_n = K_{n−1} + K_{n−k} · ·248 A · · · · · · · · · · · · · · Eq (5.32) Recursion · · · · · · · · · · · · · · · · · · · 248
· · · · · · · · · · · · · · · · · · AG 5.19 Strong ind. · · · · · · · · · · · · · · · · · 248
· · · · · · · · · · · · · · · · ·Eq (5.33) Memoization · · · · · · · · · · · · · · · · 249
☺ · · · · · · · · · · · · · · · AG 7.31 Str. ind. + Que. · · · · · · · · · · · · 396
☺ · · · · · · · · · · · · · · · AG 7.32 Str. ind. + Cir. · · · · · · · · · · · · 396
☺ · · · · · · · · · · · · · · · AG 7.33 Recursion + Cir. · · · · · · · · · · · 398
≤p MXP · · · · · Eq (10.100) Reduction · · · · · · · · · · · · · · · · · · 611
≤p WWP · · · Eq (S-10.25) Reduction · · · · · · · · · · · · 616,S-415
≤p NPP · · · · · AG S-10.28 Reduction · · · · · · · · · · · · 616,S-416
Full Kibonacci [63] · · · · 285 A · · · · · · · · · · · · · · Eq (5.97) Recursion · · · · · · · · · · · · · · · · · · · 285
K_n = ∑_{i=1}^{k} K_{n−i}
· · · · · · · · · · · · · · · · AG S-5.41 Strong ind. · · · · · · · · · · · 285,S-189
· · · · · · · · · · · · · · ·Eq (S-5.67) Memoization · · · · · · · · · · 285,S-189
A · · · · · · · · · · · · · · Eq (5.98) Recursion · · · · · · · · · · · · · · · · · · · 286
· · · · · · · · · · · · · · · · AG S-5.42 Strong ind. · · · · · · · · · · · 285,S-189
· · · · · · · · · · · · · · ·Eq (S-5.68) Memoization · · · · · · · · · · 285,S-190
· · · · · · · · · · · · · · · · AG S-7.37 Str. ind. + Cir. · · · · · · 425,S-294
☺ · · · · · · · · · · · · · AG S-7.38 Str. ind. + Que · · · · · · 425,S-295
☺ · · · · · · · · · · · · · AG S-7.39 Str. ind. + Cir. · · · · · · 425,S-295
≤p MXP · · · · · · Eq (10.99) Reduction · · · · · · · · · · · · · · · · · · 611
≤p WWP · · · Eq (S-10.26) Reduction · · · · · · · · · · · · 616,S-416
≤p NPP · · · · · AG S-10.29 Reduction · · · · · · · · · · · · 616,S-416
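
Since K(n) = K(n−1) + K(n−k) only ever looks back k steps, the "Str. ind. + Cir." rows keep just the last k values in a circular array. A sketch assuming base cases K(0) = … = K(k−1) = 1, chosen purely for illustration (Python, not the book's AG 7.32):

    def kibonacci(n, k):
        if n < k:
            return 1
        buf = [1] * k                    # buf[i % k] holds the most recent K(i)
        for i in range(k, n + 1):
            # before the overwrite, buf[i % k] still holds K(i - k)
            buf[i % k] = buf[(i - 1) % k] + buf[i % k]
        return buf[n % k]

    assert kibonacci(10, 2) == 89        # with k = 2 this is a Fibonacci variant
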
Knapsack problems
0-1 knapsack · · · · · · · · · · 163 · · · · · · · · · · · · · · · · · · · AG 4.6 Greedy Aprx. I · · · · · · · · · · · · · 163
· · · · · · · · · · · · · · · · · · · AG 4.7 Greedy Aprx. II · · · · · · · · · · · · 164
· · · · · · · · · · · · · · · · · · AG 4.25 Greedy Aprx. III · · · · · · · · · · · 203
A · · · · · · · · · · · · · · · Eq (6.6) Recursion · · · · · · · · · · · · · · · · · · · 302
dynamic · · · · · · · · · · AG 6.6 2D Str. ind. · · · · · · · · · · · · · · · · 302
· · · · · · · · · · · · · · · · · · · AG 6.8 2D Memoization · · · · · · · · · · · · 303
· · · · · · · · · · · · · · · · · · AG 7.39 2D str. ind. + Cyl. · · · · · · · · · 403
· · · · · · · · · · · · · · · · · · AG 7.40 2D str. ind. + Cyl. · · · · · · · · · 405
SSM ≤p · · · · · · Eq (11.42) NP-hard · · · · · · · · · · · · · · · · · · · · 664
ZOKmin ≤p Eq (S-11.24) NP-hard · · · · · · · · · · · · · · 688,S-492
⟨decision ver.⟩ · · · · · · · · · 664 SSE ≤p · · · · · · · Eq (11.43) NP-complete · · · · · · · · · · · · · · · · 664
ZOKmindv · · Eq (S-11.26) NP-complete · · · · · · · · · · 688,S-493
0-1 knapsack minimization · · · · · · · · · · · · · · · · AG S-4.11 Greedy Aprx. · · · · · · · · · 203,S-100
· · · · · · · · · · · · · · · · · 203,S-100 A · · · · · · · · · · · · · Eq (S-6.3) Recursion · · · · · · · · · · · · · 342,S-214
dynamic · · · · · · · · AG S-6.5 2D Str. ind. · · · · · · · · · · 342,S-214
· · · · · · · · · · · · · · · · · AG S-6.6 2D Memoization · · · · · · 342,S-214
· · · · · · · · · · · · · · · · AG S-7.52 2D str. ind. + Cyl. · · · 428,S-307
· · · · · · · · · · · · · · · · AG S-7.53 2D str. ind. + Cyl. · · · 428,S-308
SSmin ≤p · · · Eq (S-11.21) NP-hard · · · · · · · · · · · · · · 688,S-492
ZOK ≤p · · · · Eq (S-11.23) NP-hard · · · · · · · · · · · · · · 688,S-492
⟨decision ver.⟩ · · ·688,S-492 SSE ≤p · · · · · Eq (S-11.22) NP-complete · · · · · · · · · · 688,S-492
ZOKdv ≤p · · Eq (S-11.25) NP-complete · · · · · · · · · · 688,S-493
0-1 knapsack equality · · · · · · · · · · · · · · · · AG S-4.20 Greedy Aprx. I · · · · · · · 206,S-111
· · · · · · · · · · · · · · · · · 206,S-110 · · · · · · · · · · · · · · · · AG S-4.21 Greedy Aprx. II · · · · · · 206,S-111
A · · · · · · · · · · · · · Eq (S-6.2) Recursion · · · · · · · · · · · · · 341,S-212
dynamic · · · · · · · · AG S-6.3 2D Str. ind. · · · · · · · · · · 341,S-212
· · · · · · · · · · · · · · · · · · · AG 6.8 2D Memoization · · · · · · · · 341,303
SSE ≤p · · · · · Eq (S-11.27) NP-hard · · · · · · · · · · · · · · 688,S-493
ZOKEmin · · Eq (S-11.32) NP-hard · · · · · · · · · · · · · · 688,S-494
⟨decision ver.⟩ · · ·688,S-493 SSE ≤p · · · · · Eq (S-11.28) NP-complete · · · · · · · · · · 688,S-493
ZOKEmindv Eq (S-11.34) NP-complete · · · · · · · · · · 688,S-495
0-1 knapsack equality · · · · · · · · · · · · · · · · AG S-4.22 Greedy Aprx. I · · · · · · · 207,S-113
minimization · · · · 207,S-112 · · · · · · · · · · · · · · · · AG S-4.23 Greedy Aprx. II · · · · · · 207,S-113
A · · · · · · · · · · · · · Eq (S-6.4) Recursion · · · · · · · · · · · · · 342,S-215
dynamic · · · · · · · · AG S-6.7 2D Str. ind. · · · · · · · · · · 342,S-216
· · · · · · · · · · · · · · · · · AG S-6.6 2D Memoization · · · · · · 342,S-214
SSE ≤p · · · · · Eq (S-11.29) NP-hard · · · · · · · · · · · · · · 688,S-494
ZOKE ≤p · · · Eq (S-11.31) NP-hard · · · · · · · · · · · · · · 688,S-494
⟨decision ver.⟩ · · ·688,S-494 SSE ≤p · · · · · Eq (S-11.30) NP-complete · · · · · · · · · · 688,S-494
ZOKEdv ≤p ·Eq (S-11.33) NP-complete · · · · · · · · · · 688,S-495
0-1 knapsack with two constraints · · 334 A · · · · · · · · · · Eq (6.25) Recursion · · · · · · · · · · · · · · · 335
dynamic · · · · · · · · AG 6.33 3D Str. ind. · · · · · · · · · · · · · · · 335
· · · · · · · · · · · · · · · · · · AG 6.34 3D Memoization · · · · · · · · · · · · 337
ZOK ≤p · · · · Eq (S-11.35) NP-hard · · · · · · · · · · · · · · 689,S-495
SSM ≤p · · · · Eq (S-11.36) NP-hard · · · · · · · · · · · · · · 689,S-495
ZOKmin2 · · · Eq (S-11.42) NP-hard · · · · · · · · · · · · · · 689,S-496
⟨decision ver.⟩ · · ·689,S-495 SSE ≤p · · · · · Eq (S-11.37) NP-complete · · · · · · · · · · 689,S-495
ZOKmin2dv · Eq (S-11.44) NP-complete · · · · · · · · · · 689,S-497
0-1 knapsack minimization with two constraints · · 353,S-252 A · ·Eq (S-6.13) Recursion · · · · 353,S-253
dynamic · · · · · · AG S-6.42 3D Str. ind. · · · · · · · · · · 353,S-253
· · · · · · · · · · · · · · AG S-6.43 3D Memoization · · · · · · 353,S-254
ZOKmin ≤p Eq (S-11.38) NP-hard · · · · · · · · · · · · · · 689,S-496
SSmin ≤p · · · Eq (S-11.39) NP-hard · · · · · · · · · · · · · · 689,S-496
ZOK2 ≤p · · · Eq (S-11.41) NP-hard · · · · · · · · · · · · · · 689,S-496
⟨decision ver.⟩ · · ·689,S-496 SSE ≤p · · · · · Eq (S-11.40) NP-complete · · · · · · · · · · 689,S-496
ZOK2dv ≤p · Eq (S-11.43) NP-complete · · · · · · · · · · 689,S-497
Fractional knapsack · · · 165 · · · · · · · · · · · · · · · · · · AG 4.8 Greedy Algo · · · · · · · · · · · · · · · · 165
· · · · · · · · · · · · · · · · · · AG 9.16 Greedy + maxheap · · · · · · · · · 519
≤p FKPmin · ·AG S-10.23 Reduction · · · · · · · · · · · · 616,S-413
Fractional knapsack · · · · · · · · · · · · · · · · AG S-4.12 Greedy Algo · · · · · · · · · · 204,S-103
minimization · · · · 204,S-102 · · · · · · · · · · · · · · · · AG S-9.29 Greedy + minheap · · · · 550,S-385
≤p FKP · · · · · AG S-10.22 Reduction · · · · · · · · · · · · 616,S-412
Unbounded knapsack equality · · 272,S-139 A · · · · · · · Eq (S-5.8) Recursion · · · · · · · · · · · · 272,S-139
· · · · · · · · · · · · · · · · · AG S-5.5 Strong ind. · · · · · · · · · · · 272,S-139
· · · · · · · · · · · · · · · · Eq (S-5.9) Memoization · · · · · · · · · · 272,S-140
· · · · · · · · · · · · · · · · AG S-7.44 Str. ind. + Cir. · · · · · · · · · · S-300
USSE ≤p · · · · Eq (S-11.79) NP-hard · · · · · · · · · · · · · · 691,S-503
UKEmin ≤p Eq (S-11.84) NP-hard · · · · · · · · · · · · · · 691,S-504
PSEmax ≤p Eq (S-11.92) NP-hard · · · · · · · · · · · · · · 691,S-506
⟨decision ver.⟩ · · ·691,S-503 USSE ≤p · · · · Eq (S-11.80) NP-complete · · · · · · · · · · 691,S-504
UKEmindv · · Eq (S-11.86) NP-complete · · · · · · · · · · 691,S-505
Unbounded knapsack equality minimization · · 205,S-108 · · AG S-4.17 Greedy Aprx. · · · · · 205,S-108
A · · · · · · · · · · · · · Eq (S-5.4) Recursion · · · · · · · · · · · · · 271,S-137
· · · · · · · · · · · · · · · · · AG S-5.3 Strong ind. · · · · · · · · · · · 271,S-137
· · · · · · · · · · · · · · · · Eq (S-5.5) Memoization · · · · · · · · · · 271,S-137
· · · · · · · · · · · · · · · · AG S-7.46 Str. ind. + Cir. · · · · · · 426,S-302
USSE ≤p · · · Eq (S-11.81) NP-hard · · · · · · · · · · · · · · 691,S-504
UKE ≤p · · · · Eq (S-11.83) NP-hard · · · · · · · · · · · · · · 691,S-504
PSEmin ≤p · Eq (S-11.89) NP-hard · · · · · · · · · · · · · · 691,S-506
⟨decision ver.⟩ · · ·691,S-504 USSE ≤p · · · · Eq (S-11.82) NP-complete · · · · · · · · · · 691,S-504
UKEdv ≤p · · Eq (S-11.85) NP-complete · · · · · · · · · · 691,S-505
Unbounded integer · · · · · · · · · · · · · · · · · · · AG 4.9 Greedy Aprx. · · · · · · · · · · · · · · · 167
knapsack · · · · · · · · · · · · · · 167 A · · · · · · · · · · · · · · Eq (5.14) Recursion · · · · · · · · · · · · · · · · · · · 230
· · · · · · · · · · · · · · · · · · · AG 5.8 Strong ind. · · · · · · · · · · · · · · · · · 230
· · · · · · · · · · · · · · · · AG S-5.53 Memoization · · · · · · · · · · 289,S-200
A · · · · · · · · · · · · · · · Eq (6.7) Recursion 2D · · · · · · · · · · · · · · · 303
· · · · · · · · · · · · · · · · · · · AG 6.9 2D Str. ind. · · · · · · · · · · · · · · · · 304
· · · · · · · · · · · · · · · · · AG S-6.9 2D Memoization · · · · · · 343,S-217
· · · · · · · · · · · · · · · · AG S-7.43 Str. ind. + Cir. · · · · · · 426,S-299
USSM ≤p · · · Eq (S-11.71) NP-hard · · · · · · · · · · · · · · 690,S-501
UKPmin ≤p Eq (S-11.76) NP-hard · · · · · · · · · · · · · · 690,S-503
⟨decision ver.⟩ · · ·690,S-502 USSMdv ≤p · Eq (S-11.72) NP-complete · · · · · · · · · · 690,S-502
UKPmindv · · Eq (S-11.78) NP-complete · · · · · · · · · · 690,S-503
Unbounded integer · · · · · · · · · · · · · · · · AG S-4.14 Greedy Aprx. · · · · · · · · · 204,S-105
minimization · · · · 204,S-105 A · · · · · · · · · · · · · Eq (S-5.6) Recursion · · · · · · · · · · · · · 271,S-138
· · · · · · · · · · · · · · · · · AG S-5.4 Strong ind. · · · · · · · · · · · 271,S-138
· · · · · · · · · · · · · · · · Eq (S-5.7) Memoization · · · · · · · · · · 271,S-138
· · · · · · · · · · · · · · · · AG S-7.45 Str. ind. + Cir. · · · · · · 426,S-301
USSmin ≤p · · Eq (S-11.73) NP-hard · · · · · · · · · · · · · · 690,S-502
UKP ≤p · · · · Eq (S-11.75) NP-hard · · · · · · · · · · · · · · 690,S-503
⟨decision ver.⟩ · · ·690,S-502 USSmindv · · ·Eq (S-11.74) NP-complete · · · · · · · · · · 690,S-503
UKPdv ≤p · · Eq (S-11.77) NP-complete · · · · · · · · · · 690,S-503
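
All the knapsack variants indexed above share one 2D tabulation skeleton: a table indexed by (items considered, capacity left). A minimal sketch for the plain 0-1 maximization case, in the spirit of the 2D strong-induction rows (illustrative Python, not the book's AG 6.6):

    def knapsack_01(values, weights, capacity):
        n = len(values)
        K = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for w in range(capacity + 1):
                K[i][w] = K[i - 1][w]                       # skip item i
                if weights[i - 1] <= w:                     # or take item i
                    K[i][w] = max(K[i][w],
                                  K[i - 1][w - weights[i - 1]] + values[i - 1])
        return K[n][capacity]

    assert knapsack_01([60, 100, 120], [10, 20, 30], 50) == 220
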
Least common multiple · · · · · · · · · · · · · · · · · · · AG 1.1 by def. · · · · · · · · · · · · · · · · · · · · · · · · 4
(LCM) · · · · · · · · · · · · · · · · · · · · 4
☺ ≤p GCD · · · · Eq (10.8) Reduction · · · · · · · · · · · · · · · · · · 558
multi LCM · · · · · · · · · · 89,S-49 · · · · · · · · · · · · · · ·Eq (S-2.38) Recursion · · · · · · · · · · · · · · · 89,S-49
· · · · · · · · · · · · · · · · AG S-2.38 Inductive prog. · · · · · · · · · 89,S-49
· · · · · · · · · · · · · · · · AG S-3.30 Divide & Conq. · · · · · · · · 150,S-82
Leftist heap (see under Heap data structures)
Less between elements · · · · · · · · · · · · · · ·Eq (S-2.35) Recursion · · · · · · · · · · · · · · · 88,S-46
sequence (LBW) · · · · 88,S-45 · · · · · · · · · · · · · · · · AG S-2.33 Inductive prog. · · · · · · · · · 88,S-46
· · · · · · · · · · · · · · · · AG S-3.28 Divide & Conq. · · · · · · · · 150,S-80
☺ · · · · · · · · · · · · · · AG S-7.2 Stack · · · · · · · · · · · · · · · · · 420,S-270
≤p GBW · · · · · Eq (10.11) Reduction · · · · · · · · · · · · · · · · · · 559
Lexicographical order · · · · 79 · · · · · · · · · · · · · · · · ·Eq (2.41) Recursion · · · · · · · · · · · · · · · · · · · · · 80
· · · · · · · · · · · · · · · · · · AG 2.38 Tail recursion · · · · · · · · · · · · · · · · 80
List operations
Checking · · · · · · · · · · · · · · · 70 · · · · · · · · · · · · · · · · ·Eq (2.31) Recursion · · · · · · · · · · · · · · · · · · · · 70
· · · · · · · · · · · · · · · · ·Eq (2.32) Recursion · · · · · · · · · · · · · · · · · · · · 70
· · · · · · · · · · · · · · · · AG S-2.21 Inductive prog. · · · · · · · · · 86,S-38
· · · · · · · · · · · · · · · · AG S-2.22 Inductive prog. · · · · · · · · · 86,S-39
Linked list - Access · · · · 71 · · · · · · · · · · · · · · · · ·Eq (2.34) Recursion · · · · · · · · · · · · · · · · · · · · 71
Linked list - Delete · · · · · 72 · · · · · · · · · · · · · · · · · · AG 2.29 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 72
Linked list - Insert · · · · · 71 · · · · · · · · · · · · · · · · · · AG 2.28 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 71
Linked list - Sort · · · · · · · 73 Insertion sort · · · ·AG 2.30 Inductive prog. · · · · · · · · · · · · · · 73
see also sorted list
Logarithm base b ⌊log_b n⌋ 113 · · · · · · · · · · · · · · · ·Eq (3.17) Divide & Conq. · · · · · · · · · · · · · 113
· · · · · · · · · · · · · · · · · · AG 3.21 Tail recursion D.& C. · · · · · · · 126
· · · · · · · · · · · · · · · · · · AG 3.22 Bottom up D.& C. · · · · · · · · · · 126
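
The divide-and-conquer reading of ⌊log_b n⌋ is simply: divide by b until the argument drops below b, counting the divisions. A tail-recursive sketch (illustrative Python):

    def ilog(n, b):
        # floor(log_b n): each integer division by b strips one base-b digit
        return 0 if n < b else 1 + ilog(n // b, b)

    assert ilog(100, 10) == 2 and ilog(1023, 2) == 9 and ilog(1024, 2) == 10
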
Logic related problems
Convert infix to postfix · · 370 operator precedence parsing · AG 7.13 Stack · · · · · · · · · · · · · 370
DFT infix · · · · · · · · · · · · · 367 · · · · · · · · · · · · · · · · · · · AG 7.9 Recursion · · · · · · · · · · · · · · · · · · · 367
DFT postfix · · · · · · · · · · 366 · · · · · · · · · · · · · · · · · · · AG 7.8 Recursion · · · · · · · · · · · · · · · · · · · 366
DFT prefix · · · · · · · · · · · 366 · · · · · · · · · · · · · · · · · · · AG 7.7 Recursion · · · · · · · · · · · · · · · · · · · 366
Equivalency · · · · · · · · · · · 681 TAU ≤p · · · · · · Eq (11.71) co-NP-complete · · · · · · · · · · · · · 681
FAL ≤p · · · · · · · Eq (11.72) co-NP-complete · · · · · · · · · · · · · 681
Evaluate infix · · · · · · · · · 369 · · · · · · · · · · · · · · · · · · AG 7.12 Reduction · · · · · · · · · · · · · · · · · · 369
Evaluate prefix · · · · · · · · 368 · · · · · · · · · · · · · · · · · · AG 7.11 Stack · · · · · · · · · · · · · · · · · · · · · · · 368
Evaluate postfix · · · · · · ·367 · · · · · · · · · · · · · · · · · · AG 7.10 Stack · · · · · · · · · · · · · · · · · · · · · · · 367
see also under satisfiability
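
Of the expression entries above, postfix evaluation is the cleanest showcase of the stack paradigm: operands are pushed, and each operator pops its two arguments and pushes the result. A minimal sketch (illustrative Python, not the book's AG 7.10):

    def eval_postfix(tokens):
        ops = {'+': lambda x, y: x + y, '-': lambda x, y: x - y,
               '*': lambda x, y: x * y, '/': lambda x, y: x / y}
        stack = []
        for tok in tokens:
            if tok in ops:
                y = stack.pop()              # right operand sits on top
                x = stack.pop()
                stack.append(ops[tok](x, y))
            else:
                stack.append(float(tok))
        return stack.pop()

    assert eval_postfix("3 4 2 * +".split()) == 11.0   # 3 + (4 * 2)
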
Longest path (Graph)
Longest path cost (wDAG) A · · · · · · · · · · · ·Eq (S-5.74) Recursion · · · · · · · · · · · · · 290,S-203
· · · · · · · · · · · · · · · · · 290,S-203 · · · · · · · · · · · · · · · · AG S-5.56 Strong ind. · · · · · · · · · · · 290,S-204
≤p SPC · · · · · · · Eq (10.15) Reduction · · · · · · · · · · · · · · · · · · 577
Longest path length (DAG) A · · · · · · · · · · · ·Eq (S-5.73) Recursion · · · · · · · · · · · · · 290,S-202
· · · · · · · · · · · · · · · · · 290,S-202 · · · · · · · · · · · · · · · · AG S-5.55 Strong ind. · · · · · · · · · · · 290,S-202
· · · · · · · · · · · · · · · · · AG S-7.7 Queue · · · · · · · · · · · · · · · · 421,S-279
Longest sub-sequence problems
alternating down-up ≤m p LDUSe · Eq (S-10.48) m-Reduction · · · · · · · · · · 619,S-427
(LDUS) · · · · · · · · · 619,S-427 ≤p LUDS · · · Eq (S-10.47) Reduction · · · · · · · · · · · · 619,S-427
⟨ending-at ver.⟩ (LDUSe) A · · · · · · · · · · Eq (S-10.49) Recursion · · · · · · · · · · · · · 619,S-427
· · · · · · · · · · · · · · · · · 619,S-427 · · · · · · · · · · · · · · AG S-10.36 Strong ind. · · · · · · · · · · · 619,S-428
alternating up-down ≤m p LUDSe · · · Eq (10.89) m-Reduction · · · · · · · · · · · · · · · · 601
(LUDS) · · · · · · · · · · · · · · · 600 ≤p LDUS · · · Eq (S-10.46) Reduction · · · · · · · · · · · · 619,S-427
⟨ending-at ver.⟩ (LUDSe) A · · · · · · · · · · · · Eq (10.90) Recursion · · · · · · · · · · · · · · · · · · · 601
· · · · · · · · · · · · · · · · · · · · · · · 601 · · · · · · · · · · · · · · · · ·AG 10.26 Strong ind. · · · · · · · · · · · · · · · · · 602
consecutive alternating ≤m p LDUCe · · AG S-10.44 m-Reduction · · · · · · · · · · 620,S-433
down-up (LDUC) 620,S-432 ≤p LUDC · · ·Eq (S-10.58) Reduction · · · · · · · · · · · · 620,S-434
⟨ending-at ver.⟩ (LDUCe) A · · · · · · · · · · Eq (S-10.57) Recursion · · · · · · · · · · · · · 620,S-432
· · · · · · · · · · · · · · · · · 620,S-432 · · · · · · · · · · · · · · AG S-10.43 Strong ind. · · · · · · · · · · · 620,S-433
consecutive alternating ≤m p LUDCe · · AG S-10.42 m-Reduction · · · · · · · · · · 620,S-431
up-down (LUDC) 620,S-431 ≤p LDUC · · ·Eq (S-10.59) Reduction · · · · · · · · · · · · 620,S-434
⟨ending-at ver.⟩ (LUDCe) A · · · · · · · · · · Eq (S-10.56) Recursion · · · · · · · · · · · · 620,S-431
· · · · · · · · · · · · · · · · · 620,S-431 · · · · · · · · · · · · · · AG S-10.41 Strong ind. · · · · · · · · · · · 620,S-431
consecutive decreasing · · · · · · · · · · · · · · · · AG S-3.22 Divide & Conq. · · · · · · · · 149,S-76
(LDCS) · · · · · · · · · · 149,S-75 ≤m p LDCSe · · AG S-10.40 m-Reduction · · · · · · · · · · 620,S-430
≤p LICS · · · · Eq (S-10.55) Reduction · · · · · · · · · · · · 620,S-430
⟨ending-at ver.⟩ (LDCSe) · · · · · · · · · · · · · Eq (S-10.53) Recursion · · · · · · · · · · · · · 620,S-429
· · · · · · · · · · · · · · · · · 620,S-429 · · · · · · · · · · · · · · AG S-10.39 Strong ind. · · · · · · · · · · · 620,S-430
consecutive increasing · · · · · · · · · · · · · · · · AG S-3.21 Divide & Conq. · · · · · · · · 149,S-74
(LICS) · · · · · · · · · · · 149,S-74 ≤m p LICSe · · · AG S-10.38 m-Reduction · · · · · · · · · · 620,S-429
≤p LDCS · · · Eq (S-10.54) Reduction · · · · · · · · · · · · 620,S-430
⟨ending-at ver.⟩ (LICSe) · · · · · · · · · · · · · Eq (S-10.52) Recursion · · · · · · · · · · · · · 620,S-428
· · · · · · · · · · · · · · · · · 620,S-428 · · · · · · · · · · · · · · AG S-10.37 Strong ind. · · · · · · · · · · · 620,S-429
consecutive palindromic ≤m p PLD · · · · · · Eq (10.94) m-Reduction · · · · · · · · · · · · · · · · 604
(LPCS) · · · · · · · · · · · · · · · 602 ≤m p PLD · · · · · AG S-10.45 m-Reduction + Cyl · · · 621,S-434
Manacher’s algo · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · see [118]
decreasing (LDS) 619,S-424 ≤p LPP · · · · · · AG S-10.34 Reduction · · · · · · · · · · · · 619,S-425
≤p LIS · · · · · ·Eq (S-10.42) Reduction · · · · · · · · · · · · 619,S-425
≤m p LDSe · · · Eq (S-10.44) m-Reduction · · · · · · · · · · 619,S-425
⟨ending-at ver.⟩ (LDSe) A · · · · · · · · · · Eq (S-10.45) Recursion · · · · · · · · · · · · · 619,S-426
· · · · · · · · · · · · · · · · · 619,S-426 · · · · · · · · · · · · · · AG S-10.35 Strong ind. · · · · · · · · · · · 619,S-426
increasing (LIS) · · · · · · · 572 ≤p LPP · · · · · · · · AG 10.18 Reduction · · · · · · · · · · · · · · · · · · 573
≤m p LISe · · · · · · Eq (10.87) m-Reduction · · · · · · · · · · · · · · · · 600
≤p LDS · · · · · Eq (S-10.43) Reduction · · · · · · · · · · · · 619,S-425
⟨ending-at ver.⟩ (LISe) A · · · · · · · · · · · · Eq (10.88) Recursion · · · · · · · · · · · · · · · · · · · 600
· · · · · · · · · · · · · · · · · · · · · · · 600 · · · · · · · · · · · · · · · · ·AG 10.25 Strong ind. · · · · · · · · · · · · · · · · · 600
palindromic (LPS) · · · · 316 A · · · · · · · · · · · · · · Eq (6.15) Recursion · · · · · · · · · · · · · · · · · · · 317
· · · · · · · · · · · · · · · · · · AG 6.19 2D Str. ind. · · · · · · · · · · · · · · · · 317
· · · · · · · · · · · · · · · · · · AG 6.20 Memoization · · · · · · · · · · · · · · · · 318
see also consecutive subsequence arithmetic problems
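
The ⟨ending-at ver.⟩ scheme of this block: compute L(i), the longest qualifying subsequence ending at index i, then m-reduce the original problem to max over i. A sketch for the increasing case, O(n²) (illustrative Python, not the book's listings):

    def lis(a):
        L = [1] * len(a)                 # L[i] = longest increasing subsequence ending at i
        for i in range(1, len(a)):
            for j in range(i):
                if a[j] < a[i]:          # a[i] can extend a subsequence ending at j
                    L[i] = max(L[i], L[j] + 1)
        return max(L, default=0)

    assert lis([3, 1, 4, 1, 5, 9, 2, 6]) == 4   # e.g. 3, 4, 5, 6
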
Lucas related problems
Lucas number · · · · · · · · · 278 A · · · · · · · · · · · · · · Eq (5.63) Recursion · · · · · · · · · · · · · · · · · · · 278
· · · · · · · · · · · · · · · · AG S-5.17 Strong ind. · · · · · · · · · · · 278,S-159
· · · · · · · · · · · · · · ·Eq (S-5.32) Memoization · · · · · · · · · · 278,S-160
· · · · · · · · · · · · · · ·Eq (S-5.35) Divide & Conq. · · · · · · · 278,S-161
· · · · · · · · · · · · · · · · AG S-5.18 Memoiz. + D&C · · · · · 278,S-161
· · · · · · · · · · · · · · · · AG S-7.16 Str. ind. + Cir. · · · · · · 422,S-283
· · · · · · · · · · · · · · · · AG S-7.17 Memoiz. + Cir. · · · · · · 422,S-284
· · · · · · · · · · · · · · · · AG S-7.18 D&C + Jmp. · · · · · · · · · 422,S-284
≤p NPP · · · · · · · AG 10.10 Reduction · · · · · · · · · · · · · · · · · · 568
≤m p FIB · · · · · · Eq (10.80) m-Reduction · · · · · · · · · · · · · · · · 596
≤m p FIB · · · · · · Eq (10.81) m-Reduction · · · · · · · · · · · · · · · · 596
Binet formula · Eq (10.85) m-Reduction · · · · · · · · · · · · · · · · 597
≤p LUS2 · · · ·Eq (S-10.84) Reduction · · · · · · · · · · · · 624,S-444
≤m p FIB · · · · · Eq (10.111) m-Reduction · · · · · · · · · · · · · · · · 624
≤m p FIB · · · · · Eq (10.112) m-Reduction · · · · · · · · · · · · · · · · 624
≤p MXP · · · · · Eq (10.116) Reduction · · · · · · · · · · · · · · · · · · 624
≤m p LSC2 · · · Eq (S-10.97) m-Reduction · · · · · · · · · · 629,S-469
Lucas sequence · · · · · · · 250 A · · · · · · · · · · · · · · Eq (5.34) Recursion · · · · · · · · · · · · · · · · · · · 249
· · · · · · · · · · · · · · · · · · AG 5.20 Strong ind. · · · · · · · · · · · · · · · · · 250
· · · · · · · · · · · · · · · · · · AG 5.22 Memoization · · · · · · · · · · · · · · · · 250
· · · · · · · · · · · · · · · · AG S-7.12 Str. ind. + Cir. · · · · · · 422,S-282
· · · · · · · · · · · · · · · · AG S-7.13 Memoiz. + Cir. · · · · · · 422,S-282
≤m p LUS2 · · · · Eq (10.153) m-Reduction · · · · · · · · · · · · · · · · 628
≤m p LUS2 · · · · Eq (10.154) m-Reduction · · · · · · · · · · · · · · · · 628
Binet form. · · Eq (10.155) m-Reduction · · · · · · · · · · · · · · · · 628
≤p MXP · · · · · Eq (10.158) Reduction · · · · · · · · · · · · · · · · · · 628
≤p MXP · · · · · Eq (10.160) Reduction · · · · · · · · · · · · · · · · · · 628
Lucas sequence II · · · · · 250 A · · · · · · · · · · · · · · Eq (5.35) Recursion · · · · · · · · · · · · · · · · · · · 249
· · · · · · · · · · · · · · · · · · AG 5.21 Strong ind. · · · · · · · · · · · · · · · · · 250
· · · · · · · · · · · · · · · · · · AG 5.23 Memoization · · · · · · · · · · · · · · · · 250
· · · · · · · · · · · · · · · · AG S-7.14 Str. ind. + Cir. · · · · · · 422,S-283
· · · · · · · · · · · · · · · · AG S-7.15 Memoiz. + Cir. · · · · · · 422,S-283
≤m p LUS · · · · · Eq (10.150) m-Reduction · · · · · · · · · · · · · · · · 628
≤m p LUS · · · · · Eq (10.151) m-Reduction · · · · · · · · · · · · · · · · 628
≤m p LUS · · · · · Eq (10.152) m-Reduction · · · · · · · · · · · · · · · · 628
≤m p LUS · · · · · Eq (10.157) m-Reduction · · · · · · · · · · · · · · · · 628
Binet form. · · Eq (10.156) m-Reduction · · · · · · · · · · · · · · · · 628
≤p MXP · · · · · Eq (10.159) Reduction · · · · · · · · · · · · · · · · · · 628
≤p MXP · · · · · Eq (10.161) Reduction · · · · · · · · · · · · · · · · · · 628
Lucas Sequence coefficient A · · · · · · · · · · · · · · Eq (6.21) Recursion · · · · · · · · · · · · · · · · · · · 324
· · · · · · · · · · · · · · · · · · · · · · · 323 · · · · · · · · · · · · · · · · · · AG 6.25 Memoization · · · · · · · · · · · · · · · · 324
· · · · · · · · · · · · · · · · · · AG 6.26 2D Str. ind. · · · · · · · · · · · · · · · · 325
· · · · · · · · · · · · · · · · · · AG 6.27 2D Str. ind. · · · · · · · · · · · · · · · · 325
☺ · · · · · · · · · · · · · · · AG 7.54 Strong ind. + Cyl · · · · · · · · · · 416
☺ · · · · · · · · · · · · · · · AG 7.55 Strong ind. + Cyl · · · · · · · · · · 416
Lucas Sequence II coefficient · · 352 A · · · · · · · · · ·Eq (6.34) Recursion · · · · · · · · · · · · · · · · · 352
· · · · · · · · · · · · · · · · AG S-6.39 Memoization · · · · · · · · · · 352,S-250
· · · · · · · · · · · · · · · · AG S-6.40 2D Str. ind. · · · · · · · · · · 352,S-251
· · · · · · · · · · · · · · · · AG S-6.41 2D Str. ind. · · · · · · · · · · 352,S-251
☺ · · · · · · · · · · · · · AG S-7.90 Strong ind. + Cyl · · · · 434,S-337
☺ · · · · · · · · · · · · · AG S-7.91 Strong ind. + Cyl · · · · 434,S-337
≤m p LSC · · · · · Eq (10.162) m-Reduction · · · · · · · · · · · · · · · · 629
≤m p LSC · · · · · Eq (10.163) m-Reduction · · · · · · · · · · · · · · · · 629
Matrix operations
Matrix multiplication · · 47 grade-school · · · · · · AG 2.7 Inductive prog. · · · · · · · · · · · · · · 48
· · · · · · · · · · · · · · · · ·Eq (3.19) Divide & Conq. · · · · · · · · · · · · · 118
☺ Strassen algo · Eq (3.27) Divide & Conq. · · · · · · · · · · · · · 119
Matrix power · · · · 151,S-84 grade-school · · · AG S-3.34 Inductive prog. · · · · · · · · 151,S-85
· · · · · · · · · · · · · · · · AG S-3.33 Divide & Conq. · · · · · · · · 151,S-84
McNugget number (see Unbounded subset sum equality)
Mersenne number · · · · · · · 282 A · · · · · · · · · · · · · · Eq (5.82) Recursion · · · · · · · · · · · · · · · · · · · 282
· · · · · · · · · · · · · · · · AG S-5.31 Strong ind. · · · · · · · · · · · 282,S-177
· · · · · · · · · · · · · · ·Eq (S-5.53) Memoization · · · · · · · · · · 282,S-178
· · · · · · · · · · · · · · · · ·Eq (5.83) Recursion · · · · · · · · · · · · · · · · · · · 283
· · · · · · · · · · · · · · · · AG S-5.32 Inductive prog. · · · · · · · 282,S-178
· · · · · · · · · · · · · · ·Eq (S-5.56) Divide & Conq. · · · · · · · 282,S-179
· · · · · · · · · · · · · · · · AG S-5.33 Memoiz. + D&C · · · · · 282,S-180
· · · · · · · · · · · · · · · · AG S-7.31 Str. ind. + Cir. · · · · · · 424,S-291
· · · · · · · · · · · · · · · · AG S-7.32 Memoiz. + Cir. · · · · · · 424,S-291
· · · · · · · · · · · · · · · · AG S-7.33 D&C + Jmp. · · · · · · · · · 424,S-292
≤m p MSL · · · · ·Eq (10.142) m-Reduction · · · · · · · · · · · · · · · · 627
≤p POW · · · · Eq (10.143) Reduction · · · · · · · · · · · · · · · · · · 627
≤p MSL · · · · · Eq (10.146) Reduction · · · · · · · · · · · · · · · · · · 627
≤p MXP · · · · · Eq (10.148) Reduction · · · · · · · · · · · · · · · · · · 627
≤p LUS · · · · · Eq (S-10.89) Reduction · · · · · · · · · · · · 627,S-459
Mersenne-Lucas number · · · · · · · · · · · · · · Eq (5.86) Recursion · · · · · · · · · · · · · · · · · · · 283
· · · · · · · · · · · · · · · · · · · · · · · · · 283 · · · · · · · · · · · · · · · · AG S-5.34 Strong ind. · · · · · · · · · · · 283,S-180
· · · · · · · · · · · · · · ·Eq (S-5.57) Memoization · · · · · · · · · · 283,S-181
· · · · · · · · · · · · · · · · ·Eq (5.87) Recursion · · · · · · · · · · · · · · · · · · · 283
· · · · · · · · · · · · · · · · AG S-5.35 Inductive prog. · · · · · · · 283,S-181
· · · · · · · · · · · · · · ·Eq (S-5.60) Divide & Conq. · · · · · · · 283,S-182
· · · · · · · · · · · · · · · · AG S-5.36 Memoiz. + D&C · · · · · 283,S-183
· · · · · · · · · · · · · · · · AG S-7.34 Str. ind. + Cir. · · · · · · 424,S-292
· · · · · · · · · · · · · · · · AG S-7.35 Memoiz. + Cir. · · · · · · 424,S-293
· · · · · · · · · · · · · · · · AG S-7.36 D&C + Jmp. · · · · · · · · · 424,S-293
≤m p MSN · · · · Eq (10.140) m-Reduction · · · · · · · · · · · · · · · · 627
≤m p MSN · · · · Eq (10.141) m-Reduction · · · · · · · · · · · · · · · · 627
≤p POW · · · · Eq (10.144) Reduction · · · · · · · · · · · · · · · · · · 627
≤p MSN · · · · · Eq (10.145) Reduction · · · · · · · · · · · · · · · · · · 627
≤m p MSN · · · · Eq (10.147) m-Reduction · · · · · · · · · · · · · · · · 627
≤p MXP · · · · · Eq (10.149) Reduction · · · · · · · · · · · · · · · · · · 627
≤p LUS · · · · · Eq (S-10.90) Reduction · · · · · · · · · · · · 627,S-459
Minimum length code
binary (r = 2) · · · · · · · · 195 Huffman code · · · ·AG 4.23 Greedy Algo. · · · · · · · · · · · · · · · 196
≤p sort · · · · · · · · · AG 7.29 Reduction + Queue · · · · · · · · · 392
≤p sort · · · · · · · · · AG 7.30 Reduction + Queue · · · · · · · · · 394
Huffman code · · · ·AG 9.18 Greedy + minheap · · · · · · · · · · 521
Huffman code · · · ·AG 9.18 minheap + Queue · · · · · · · · · · 521
r-ary · · · · · · · · · · · · · · · · · · 195 Huffman code · · · ·AG 4.24 Greedy Aprx. · · · · · · · · · · · · · · · 198
Minimum number of internal · · · · · · · · · · · · · · · · ·Eq (4.11) Recursion · · · · · · · · · · · · · · · · · · · 197
nodes in a k-ary tree · · · · 197 · · · · · · · · · · · · · · · · · ·Tm 4.16 Closed form · · · · · · · · · · · · · · · · · 197
Minimum number of · · · · · · · · · · · · · · · · · · AG 4.12 Greedy Algo. · · · · · · · · · · · · · · · 173
processors · · · · · · · · · · · · · · ·172 · · · · · · · · · · · · · · · · AG S-9.31 Greedy + minheap · · · · 551,S-386
· · · · · · · · · · · · · · · · AG S-9.32 Greedy + maxheap · · · 551,S-388
Modulo n % d · · · · · · · · · · · 77 · · · · · · · · · · · · · · · · ·Eq (2.39) Recursion · · · · · · · · · · · · · · · · · · · · 77
· · · · · · · · · · · · · · · · · · AG 2.35 Tail recursion · · · · · · · · · · · · · · · · 78
· · · · · · · · · · · · · · · · · · AG 3.13 Divide & Conq. · · · · · · · · · · · · · 109
Modulo a^n % d · · · · · · · · 110 ≤p POW · · · · · · · Eq (3.15) Reduction · · · · · · · · · · · · · · · · · · 110
· · · · · · · · · · · · · · · · · · AG 3.14 Divide & Conq. · · · · · · · · · · · · · 111
≤p DIV · · · · · · · · Eq (10.3) Recursion · · · · · · · · · · · · · · · · · · · 557
Multiprocessor scheduling · · · · · · · · · · · · · · · · · · AG 4.13 Greedy Aprx. · · · · · · · · · · · · · · · 175
problem · · · · · · · · · · · · · · · · ·174 · · · · · · · · · · · · · · · · · · AG 4.14 Greedy Aprx. · · · · · · · · · · · · · · · 175
· · · · · · · · · · · · · · · · · AG S-4.9 Greedy Aprx. · · · · · · · · · · 203,S-99
MPSdv ≤p · · · · Eq (11.44) NP-hard · · · · · · · · · · · · · · · · · · · · 666
⟨decision ver.⟩ · · · · · · · · · · · 666 BPPdv ≤p · · · · Eq (11.46) NP-complete · · · · · · · · · · · · · · · · 666
Multiset coefficient · · · · · · 350 · · · · · · · · · · · · · · Eq (6.31) Recursion · · · · · · · · · · · · · · · · · · · 350
· · · · · · · · · · · · · · · · AG S-6.35 2D Memoization · · · · · · 350,S-246
· · · · · · · · · · · · · · · · AG S-6.36 2D Str. ind. · · · · · · · · · · 350,S-247
· · · · · · · · · · · · · · · · AG S-7.64 Strong ind. + Cyl · · · · 430,S-318
· · · · · · · · · · · · · · · · AG S-7.65 Strong ind. + Cyl · · · · 430,S-318
≤p BNC · · · · · · Eq (10.29) Reduction · · · · · · · · · · · · · · · · · · 585
≤m p FAC · · · · · · Eq (10.30) m-Reduction · · · · · · · · · · · · · · · · 585
≤p NPP · · · · · AG S-10.48 Reduction · · · · · · · · · · · · 621,S-437
n digit long integer arithmetic problems
addition · · · · · · · · · · · · · · · · 43 grade-school · · · · · · AG 2.3 Inductive prog. · · · · · · · · · · · · · · 44
· · · · · · · · · · · · · · · · · · AG 3.19 Divide & Conq. · · · · · · · · · · · · · 122
n × 1 digit multiplication grade-school · · · · · · AG 2.4 Inductive prog. · · · · · · · · · · · · · · 45
· · · · · · · · · · · · · · · · · · · · · · · · ·45 · · · · · · · · · · · · · · · · · · AG 3.20 Divide & Conq. · · · · · · · · · · · · · 123
· · · · · · · · · · · · · · · · · Eq (3.28) Reduction · · · · · · · · · · · · · · · · · · 125
m × n digit multiplication grade-school · · · · · · AG 2.5 Inductive prog. · · · · · · · · · · · · · · 46
· · · · · · · · · · · · · · · · · · · · · · · · ·45 · · · · · · · · · · · · · · · · · · AG 3.16 Divide & Conq. · · · · · · · · · · · · · 117
Karatsuba · · · · AG 3.17 Divide & Conq. · · · · · · · · · · · · · 118
NFA Acceptance · · · · · · · · 363 backtrack · · · · · AG 7.6 Recursion · · · · · · · · · · · · · · · · · · · 363
Number of ascents
Number of ascents (List) · · · · · · · · · · · · · · · · ·Eq (2.28) Recursion · · · · · · · · · · · · · · · · · · · · 55
· · · · · · · · · · · · · · · · · · · · · · · · ·55 · · · · · · · · · · · · · · · · · · AG 2.14 Inductive prog. · · · · · · · · · · · · · · 55
· · · · · · · · · · · · · · · · · · · AG 3.2 Divide & Conq. · · · · · · · · · · · · · · 95
≤p NDS · · · · · · ·AG S-10.8 Reduction · · · · · · · · · · · · 615,S-406
≤p NDS · · · · Eq (S-10.20) Reduction · · · · · · · · · · · · 615,S-407
≤p NDS · · · · Eq (S-10.21) Reduction · · · · · · · · · · · · 615,S-407
Number of permutations ≤m p EUN · · · · · Eq (10.69) m-Reduction · · · · · · · · · · · · 432,594
with at least k ascents ≤m p EUN · · · · · · AG S-7.78 m-Reduction + Cyl · · · 432,S-327
· · · · · · · · · · · · · · · · · 432,S-327 ≤m p NAam · · · · Eq (10.71) m-Reduction · · · · · · · · · · · · · · · · 594
≤p NAam · · · · · Eq (10.74) Reduction · · · · · · · · · · · · · · · · · · 594
≤p NAam · · · · · Eq (10.76) Reduction · · · · · · · · · · · · · · · · · · 595
Number of permutations ≤m p EUN · · · · · Eq (10.68) m-Reduction · · · · · · · · · · · · 432,594
with at most k ascents ≤m p EUN · · · · · · AG S-7.76 m-Reduction + Cyl · · · 432,S-326
· · · · · · · · · · · · · · · · · 432,S-326 ≤m p EUN · · · · · · AG S-7.77 m-Reduction + Cyl · · · 432,S-326
≤m p NAal · · · · · Eq (10.70) m-Reduction · · · · · · · · · · · · · · · · 594
≤p NAal · · · · · · Eq (10.73) Reduction · · · · · · · · · · · · · · · · · · 594
≤p NAal · · · · · · Eq (10.75) Reduction · · · · · · · · · · · · · · · · · · 595
Number of permutations · · · · · · · · · · · · · · Eq (6.29) Recursion · · · · · · · · · · · · · · · · · · · 348
with exactly k ascents · · · · · · · · · · · · · · · · AG S-6.29 2D Memoization · · · · · · 348,S-240
(Eulerian number) · · · · 348 · · · · · · · · · · · · · · · · AG S-6.30 2D Memoization · · · · · · 348,S-241
· · · · · · · · · · · · · · · · AG S-6.31 2D Str. ind. · · · · · · · · · · 348,S-241
· · · · · · · · · · · · · AG S-7.75 Strong ind. + Cyl · · · · 432,S-325
Number of GBW sequences ≤m p EUS · · · · · · Eq (10.53) m-Reduction · · · · · · · · · · · · 432,592
with at least k ascents ≤m p EUS · · · · · · AG S-7.74 m-Reduction + Cyl · · · 432,S-331
· · · · · · · · · · · · · · · · · 432,S-330 ≤m p NA2am · · · Eq (10.55) m-Reduction · · · · · · · · · · · · · · · · 593
≤p NA2am · · · Eq (10.58) Reduction · · · · · · · · · · · · · · · · · · 593
Number of GBW sequences ≤m p EUS · · · · · · Eq (10.52) m-Reduction · · · · · · · · · · · · 432,594
with at most k ascents ≤m p EUS · · · · · · AG S-7.80 m-Reduction + Cyl · · · 432,S-330
· · · · · · · · · · · · · · · · · 432,S-329 ≤m p EUS · · · · · · AG S-7.81 m-Reduction + Cyl · · · 432,S-326
≤m p NA2al · · · · Eq (10.54) m-Reduction · · · · · · · · · · · · · · · · 593
≤p NA2al · · · · · Eq (10.57) Reduction · · · · · · · · · · · · · · · · · · 593
Number of GBW sequences · · · · · · · · · · · · · · Eq (6.30) Recursion · · · · · · · · · · · · · · · · · · · 349
with exactly k ascents · · · · · · · · · · · · · · · · AG S-6.32 2D Memoization · · · · · · 349,S-244
(Eulerian numbers of the · · · · · · · · · · · · · · · · AG S-6.33 2D Memoization · · · · · · 349,S-244
second kind) · · · · · · · · · · 349 · · · · · · · · · · · · · · · · AG S-6.34 2D Str. ind. · · · · · · · · · · 349,S-244
· · · · · · · · · · · · · AG S-7.79 Strong ind. + Cyl · · · · 432,S-328
Number of descents (List) · · · · · · · · · · · · · · ·Eq (S-2.32) Recursion · · · · · · · · · · · · · · · 87,S-43
· · · · · · · · · · · · · · · · · · · · · · 87,S-43 · · · · · · · · · · · · · · · · AG S-2.31 Inductive prog. · · · · · · · · · 87,S-43
· · · · · · · · · · · · · · · · · AG S-3.7 Divide & Conq. · · · · · · · · 145,S-59
≤p NAS · · · · · · · AG S-10.9 Reduction · · · · · · · · · · · · 615,S-406
≤p NAS · · · · Eq (S-10.19) Reduction · · · · · · · · · · · · 615,S-407
≤p NAS · · · · Eq (S-10.22) Reduction · · · · · · · · · · · · 615,S-407
Number of paths (Graph)
exactly k length · · · · · · · 612 ≤p MXP · · · · · Eq (10.101) Reduction · · · · · · · · · · · · · · · · · · 612
on DAGs · · · · · · · · · · · · · 258 · · · · · · · · · · · · · · Eq (5.46) Recursion · · · · · · · · · · · · · · · · · · · 259
· · · · · · · · · · · · · · · · · · AG 5.27 Strong ind. · · · · · · · · · · · · · · · · · 259
≤m p MXM · · · Eq (10.102) m-Reduction · · · · · · · · · · · · · · · · 613
Order statistics
Find max · · · · · · · · · 85,S-36 · · · · · · · · · · · · · · ·Eq (S-2.27) Recursion · · · · · · · · · · · · · · · 85,S-36
· · · · · · · · · · · · · · · · AG S-2.18 Inductive prog. · · · · · · · · · 85,S-36
· · · · · · · · · · · · · · · · · AG S-3.1 Divide & Conq. · · · · · · · · 143,S-54
≤p KLG · · · · · · · Eq (10.4) Reduction · · · · · · · · · · · · · · · · · · 557
≤p KSM · · · · · Eq (S-10.2) Reduction · · · · · · · · · · · · 614,S-402
Find median · · · · · · · · · · 614 ≤p sort · · · · · · · AG S-10.1 Divide & Conq. · · · · · · · 614,S-402
≤m p KLG · · · · ·Eq (S-10.4) Reduction · · · · · · · · · · · · 614,S-402
≤m p KSM · · · · Eq (S-10.3) Reduction · · · · · · · · · · · · 614,S-402
quick-MDN · · · AG S-12.1 Las Vegas · · · · · · · · · · · · ·720,S-520
Find min · · · · · · · · · · · · · · · 58 · · · · · · · · · · · · · · · · · · · Lm 2.5 Recursion · · · · · · · · · · · · · · · · · · · · 58
· · · · · · · · · · · · · · · · · · AG 2.18 Inductive prog. · · · · · · · · · · · · · · 58
· · · · · · · · · · · · · · · · · · · AG 3.1 Divide & Conq. · · · · · · · · · · · · · · 93
≤p KLG · · · · · · · Eq (10.5) Reduction · · · · · · · · · · · · · · · · · · 557
≤p KSM · · · · · Eq (S-10.1) Reduction · · · · · · · · · · · · 614,S-402
kth largest · · · · · · · · · · · · · 59 · · · · · · · · · · · · · · · · · · · Lm 2.6 Recursion · · · · · · · · · · · · · · · · · · · · 59
· · · · · · · · · · · · · · · · · · AG 2.19 Inductive prog. · · · · · · · · · · · · · · 60
Bubble select · · AG S-2.34 · · · · · · · · · · · · · · · · · · · · · · · · · ·88,S-46
· · · · · · · · · · · · · · · · · · · AG 4.1 Greedy Algo. · · · · · · · · · · · · · · · 154
heapselect · · · · · · · · AG 9.8 Greedy + maxHeap · · · · · · · · · 511
heapselect · · · · · · · · AG 9.9 Ind. prog. + minHeap · · · · · · 511
heapselect · · · · · · · AG 9.10 Ind. prog. + minHeap · · · · · · 513
LH-select · · · · · ·AG S-9.41 Greedy + LHmax · · · · · 553,S-398
LH-select II · · · AG S-9.42 Ind. prog. + LHmin · · 553,S-399
AVLselect · · · · · AG S-9.43 Greedy + AVL tree · · · 553,S-399
AVLselectII · · · AG S-9.44 Ind. prog. + AVL tree 553,S-400
AVLselectIII · · AG S-9.45 DFT + AVL tree · · · · · 553,S-400
≤p KSM · · · · · · · Eq (10.7) Reduction · · · · · · · · · · · · · · · · · · 557
≤p sort · · · · · · · · · AG 10.1 Reduction · · · · · · · · · · · · · · · · · · 559
kth smallest · · · · · · · 85,S-37 · · · · · · · · · · · · · · ·Eq (S-2.28) Recursion · · · · · · · · · · · · · · · 85,S-37
· · · · · · · · · · · · · · · · AG S-2.19 Inductive prog. · · · · · · · · · 85,S-37
Bubble select · · AG S-2.35 · · · · · · · · · · · · · · · · · · · · · · · · · ·88,S-47
Radix select · · · · · AG 3.26 Divide & Conq. · · · · · · · · · · · · · 131
· · · · · · · · · similar AG S-4.3 Greedy Algo. · · · · · · · · · · · 200,154
heapselect · · · · · · AG S-9.6 Greedy + minHeap · · · 547,S-370
heapselect · · · · · · AG S-9.7 Ind. prog. + maxHeap 547,S-371
AVLselect · · · · · · · AG 9.35 Greedy + AVL tree · · · · · · · · · 542
AVLselectII · · · · · AG 9.36 Ind. prog. + AVL tree · · · · · · 542
AVLselectIII · · · · AG 9.37 DFT + AVL tree · · · · · · · · · · · 543
≤p KLG · · · · · · · Eq (10.6) Reduction · · · · · · · · · · · · · · · · · · 557
≤p sort · · · · · · · Eq (10.12) Reduction · · · · · · · · · · · · · · · · · · 559
Quickselect · · · · · · AG 12.4 Randomized Algo. (LV) · · · · · · 701
Top m percent · · · · · · · · 710 ·························· Reduction to sort · · · · · · · · · · · 710
(Relatively small) ·························· Reduction · · · · · · · · · · · · · · · · · · 710
·························· Monte Carlo · · · · · · · · · · · · · · · · 710
Palindrome
Checking · · · · · · · · · · · · · · · 68 · · · · · · · · · · · · · · · · · · Lm 2.10 Recursion · · · · · · · · · · · · · · · · · · · · 69
· · · · · · · · · · · · · · · · · · AG 2.27 Inductive prog. · · · · · · · · · · · · · · 69
· · · · · · · · · · · · · · · · · · · AG 3.8 Divide & Conq. · · · · · · · · · · · · · 103
Longest consecutive · · · · · · · · · · · · · · · Eq (10.93) Recursion · · · · · · · · · · · · · · · · · · · 603
sub-sequence · · · · · · · · · · 603 · · · · · · · · · · · · · · · · ·AG 10.27 2D str. ind. + Cyl. · · · · · · · · · 603
see under Longest subsequence problems for other palindrome-related problems.
Parenthesis balance
Checking · · · · · · · · · · · · · ·358 · · · · · · · · · · · · · · Eq (2.21) Recursion · · · · · · · · · · · · · · · · · · · 358
· · · · · · · · · · · · · · · · AG 7.5 Stack · · · · · · · · · · · · · · · · · · · · · · · 359
Number of BPs · · · · · · · 288 (see Catalan number)
Partitioning (List)
Bit partitioning · · · · · · · 128 Outside-in · · · · · · ·AG 3.24 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 129
Progressive · · · · · · AG 3.25 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 130
Random · · · · · · · · · · · · · · 697 Outside-in · · · · · · ·AG 12.1 Las Vegas · · · · · · · · · · · · · · · · · · · 697
Progressive · · · · · · AG 12.2 Las Vegas · · · · · · · · · · · · · · · · · · · 699
Pell number · · · · · · · · · · · · · 279 · · · · · · · · · · · · · · Eq (5.67) Recursion · · · · · · · · · · · · · · · · · · · 279
· · · · · · · · · · · · · · · · AG S-5.20 Strong ind. · · · · · · · · · · · 279,S-164
· · · · · · · · · · · · · · ·Eq (S-5.37) Memoization · · · · · · · · · · 279,S-164
· · · · · · · · · · · · · · ·Eq (S-5.40) Divide & Conq. · · · · · · · 279,S-165
· · · · · · · · · · · · · · · · AG S-5.21 Memoiz. + D&C · · · · · 279,S-166
· · · · · · · · · · · · · · · · AG S-7.19 Str. ind. + Cir. · · · · · · 423,S-285
· · · · · · · · · · · · · · · · AG S-7.20 Memoiz. + Cir. · · · · · · 423,S-285
· · · · · · · · · · · · · · · · AG S-7.21 D&C + Jmp. · · · · · · · · · 423,S-286
≤p NPP · · · · · · · AG 10.11 Reduction · · · · · · · · · · · · · · · · · · 569
≤m p PLL · · · · · Eq (10.123) m-Reduction · · · · · · · · · · · · · · · · 625
≤m p PLL · · · · · Eq (10.124) m-Reduction · · · · · · · · · · · · · · · · 625
≤m p POW · · · · Eq (10.126) m-Reduction · · · · · · · · · · · · · · · · 625
≤p MXP · · · · · Eq (10.128) Reduction · · · · · · · · · · · · · · · · · · 625
≤p LUS · · · · · Eq (S-10.85) Reduction · · · · · · · · · · · · 625,S-449
Pell-Lucas number · · · · · · 280 · · · · · · · · · · · · · · Eq (5.70) Recursion · · · · · · · · · · · · · · · · · · · 280
· · · · · · · · · · · · · · · · AG S-5.22 Strong ind. · · · · · · · · · · · 280,S-167
· · · · · · · · · · · · · · ·Eq (S-5.41) Memoization · · · · · · · · · · 280,S-167
· · · · · · · · · · · · · · ·Eq (S-5.44) Divide & Conq. · · · · · · · 280,S-168
· · · · · · · · · · · · · · · · AG S-5.23 Memoiz. + D&C · · · · · 280,S-168
· · · · · · · · · · · · · · · · AG S-7.22 Str. ind. + Cir. · · · · · · 423,S-286
· · · · · · · · · · · · · · · · AG S-7.23 Memoiz. + Cir. · · · · · · 423,S-287
· · · · · · · · · · · · · · · · AG S-7.24 D&C + Jmp. · · · · · · · · · 423,S-287
≤p NPP · · · · · · · AG 10.12 Reduction · · · · · · · · · · · · · · · · · · 569
≤m p PLN · · · · · Eq (10.119) m-Reduction · · · · · · · · · · · · · · · · 625
≤m p PLN · · · · · Eq (10.120) m-Reduction · · · · · · · · · · · · · · · · 625
≤m p PLN · · · · · Eq (10.121) m-Reduction · · · · · · · · · · · · · · · · 625
≤m p PLN · · · · · Eq (10.122) m-Reduction · · · · · · · · · · · · · · · · 625
≤m p POW · · · · Eq (10.125) m-Reduction · · · · · · · · · · · · · · · · 625
≤m p PLN · · · · · Eq (10.127) m-Reduction · · · · · · · · · · · · · · · · 625
≤p MXP · · · · · Eq (10.129) Reduction · · · · · · · · · · · · · · · · · · 625
≤p LUS2 · · · ·Eq (S-10.86) Reduction · · · · · · · · · · · · 625,S-449
Perfect k-ary tree related problems
Height · · · · · · · · · · · · · · · · · 13 · · · · · · · · · · · · · · · · · · · Co 1.1 Closed form · · · · · · · · · · · · · · · · · · 13
Number of nodes · · · · · · · 12 geometric series · · Tm 1.7 Closed form · · · · · · · · · · · · · · · · · · 12
· · · · · · · · · · · · · · · · Eq (S-2.9) Recursion · · · · · · · · · · · · · · · 83,S-28
∑_{i=0}^h a^i · · · · · · · · AG S-2.7 Inductive prog. · · · · · · · · · 83,S-29
· · · · · · · · · · · · · · · · ·Eq (3.30) Divide & Conq. · · · · · · · · · · · · · 138
Sum of depths · · · · · · · · · 13 · · · · · · · · · · · · · · · · · · · Tm 1.8 Closed form · · · · · · · · · · · · · · · · · · 13
· · · · · · · · · · · · · · ·Eq (S-2.10) Recursion · · · · · · · · · · · · · · · 83,S-29
∑_{i=0}^h i·a^i · · · · · · AG S-2.8 Inductive prog. · · · · · · · · · 83,S-29
· · · · · · · · · · · · · · · · ·Eq (3.32) Divide & Conq. · · · · · · · · · · · · · 139
Permutation (see also Factorial)
k-Permutation of n, · · · · · · · · · · · · · · · · ·Eq (2.23) Recursion · · · · · · · · · · · · · · · · · · · · 51
P(n, k), P^n_k · · · · · · · · · · · 51 · · · · · · · · · · · · · · · · · · AG 2.10 Inductive prog. · · · · · · · · · · · · · · 51
· · · · · · · · · · · · · · · · ·Eq (2.24) Recursion · · · · · · · · · · · · · · · · · · · · 51
· · · · · · · · · · · · · · · · · · AG 2.11 Inductive prog. · · · · · · · · · · · · · · 51
· · · · · · · · · · · · · · · · AG S-3.31 Divide & Conq. · · · · · · · · 151,S-83
≤p BNC · · · · · · Eq (10.21) Reduction · · · · · · · · · · · · · · · · · · 584
≤p RFP · · · · · · Eq (10.23) Reduction · · · · · · · · · · · · · · · · · · 584
≤m p FAC · · · · · · Eq (10.22) m-Reduction · · · · · · · · · · · · · · · · 584
Power a^n (see under Product)
Prefix arithmetic problems
2-dimensional prefix · · · · · · · · · · · · · · · · · · Eq (S-6.1) Recursion · · · · · · · · · 340,S-209
product · · · · · · · · · 340,S-208 · · · · · · · · · · · · · · · · · AG S-6.1 2D Str. ind. · · · · · · · · · · 340,S-209
2-dimensional prefix sum · · · · · · · · · · · · · · · · Eq (6.1) Recursion · · · · · · · · · · · · · · · · · · · 294
· · · · · · · · · · · · · · · · · · · · · · · 294 · · · · · · · · · · · · · · · · · · · AG 6.1 2D Str. ind. · · · · · · · · · · · · · · · · 295
Maximum prefix product · ≤m p PFP · · · · Eq (S-10.33) m-Reduction · · · · · · · · · · 618,S-422
· · · · · · · · · · · · · · · · · 618,S-422 ≤p minPFP · Eq (S-10.35) Reduction · · · · · · · · · · · · 618,S-422
≤p MPFS · · · Eq (S-10.37) Reduction · · · · · · · · · · · · 618,S-423
Maximum prefix sum · · · · · ≤m p PFS · · · · · · · AG 10.24 m-Reduction · · · · · · · · · · · · · · · · 599
· · · · · · · · · · · · · · · · · · · · · · · 598 ≤p minPFS · Eq (S-10.32) Reduction · · · · · · · · · · · · 618,S-421
≤p MPFP · · Eq (S-10.38) Reduction · · · · · · · · · · · · 618,S-423
Minimum prefix product · · ≤m p PFP · · · · Eq (S-10.34) m-Reduction · · · · · · · · · · 618,S-422
· · · · · · · · · · · · · · · · · 618,S-422 ≤p MPFP · · Eq (S-10.36) Reduction · · · · · · · · · · · · 618,S-422
≤p minPFS · Eq (S-10.39) Reduction · · · · · · · · · · · · 618,S-424
Minimum prefix sum · · · · · ≤p MPFS · · · · AG S-10.33 Reduction · · · · · · · · · · · · 618,S-421
· · · · · · · · · · · · · · · · · 618,S-420 ≤m p PFS · · · · · AG S-10.32 m-Reduction · · · · · · · · · · 618,S-420
≤p minPFP · Eq (S-10.40) Reduction · · · · · · · · · · · · 618,S-424
Prefix product · · · · 84,S-33 · · · · · · · · · · · · · · ·Eq (S-2.23) Recursion · · · · · · · · · · · · · · · 84,S-33
· · · · · · · · · · · · · · · · AG S-2.12 Inductive prog. · · · · · · · · · 84,S-34
· · · · · · · · · · · · · · · · · AG S-3.5 Divide & Conq. · · · · · · · · 144,S-58
Prefix sum · · · · · · · · · · · · · 53 · · · · · · · · · · · · · · · · · · · Lm 2.2 Recursion · · · · · · · · · · · · · · · · · · · · 54
· · · · · · · · · · · · · · · · · · AG 2.13 Inductive prog. · · · · · · · · · · · · · · 54
· · · · · · · · · · · · · · · · · AG S-3.4 Divide & Conq. · · · · · · · · 144,S-57
Prime number problems
Number of prime factors · · · · · · · · · · · · · · · · · Eq (5.5) Recursion · · · · · · · · · · · · · · · · · · · 219
· · · · · · · · · · · · · · · · · · · · · · · 218 · · · · · · · · · · · · · · · · · · · AG 5.1 Str. Ind. Prog. · · · · · · · · · · · · · 219
Primality testing · · 30,S-15 · · · · · · · · · · · · · · · · · · AG 1.19 by def. · · · · · · · · · · · · · · · · · · · · · · · 30
≤p NPF · · · · · · · Eq (10.2) Reduction · · · · · · · · · · · · · · · · · · 557
Fermat test · · · · · AG 12.9 Monte Carlo · · · · · · · · · · · · · · · · 712
Product
∏_1^n f(i) · · · · · · · · · · · · · · · 14 · · · · · · · · · · · · · · ·Eq (S-2.24) Recursion · · · · · · · · · · · · · · · 84,S-34
∏_1^n f(i) · · · · · · · AG S-2.13 Inductive prog. · · · · · · · · · 84,S-34
Double factorial even # · · · · · · · · · · · · · · ·Eq (S-2.25) Recursion · · · · · · · · · · · · · · · 85,S-35
(2n)!! · · · · · · · · · · · · · · · · · · 85 ∏_{i=1}^n 2i · · · · · · · AG S-2.14 Inductive prog. · · · · · · · · · 85,S-35
· · · · · · · · · · · · · · · · AG S-2.15 Tail recursion · · · · · · · · · · · 85,S-35
≤p FAC · · · · · Eq (10.103) Reduction · · · · · · · · · · · · · · · · · · 622
≤p DFO · · · · Eq (S-10.61) Reduction · · · · · · · · · · · · 622,S-439
Double factorial odd # · · · · · · · · · · · · · · ·Eq (S-2.26) Recursion · · · · · · · · · · · · · · · 85,S-36
(2n − 1)!! · · · · · · · · · · · · · · 85 ∏_{i=1}^n (2i − 1) · · AG S-2.16 Inductive prog. · · · · · · · · · 85,S-36
· · · · · · · · · · · · · · · · AG S-2.17 Tail recursion · · · · · · · · · · · 85,S-36
≤m p FAC · · · · · Eq (10.104) m-Reduction · · · · · · · · · · · · · · · · 622
≤p DFE · · · · Eq (S-10.60) Reduction · · · · · · · · · · · · 622,S-439
≤p KPN · · · · Eq (S-10.62) Reduction · · · · · · · · · · · · 622,S-439
≤p RFP · · · · Eq (S-10.63) Reduction · · · · · · · · · · · · 622,S-439
Factorial n! · · · · · · · · · · · · 50 · · · · · · · · · · · · · · · · ·Eq (2.22) Recursion · · · · · · · · · · · · · · · · · · · · 50
∏_{i=1}^n i · · · · · · · · · · · · AG 2.9 Inductive prog. · · · · · · · · · · · · · · 50
· · · · · · · · · · · · · · · · ·Eq (2.36) Tail recursion · · · · · · · · · · · · · · · · 75
Power a^n · · · · · · · · · · · · · · ·49 · · · · · · · · · · · · · · · · ·Eq (2.21) Recursion · · · · · · · · · · · · · · · · · · · · 49
∏_{i=1}^n a · · · · · · · · · · · AG 2.8 Inductive prog. · · · · · · · · · · · · · · 49
· · · · · · · · · · · · · · · · · · AG 3.12 Divide & Conq. · · · · · · · · · · · · · 108
Queue data structure
Dequeue · · · · · · · · · · · · · · 384 · · · · · · · · · · · · · · · · · · AG 7.23 Linked list · · · · · · · · · · · · · · · · · · 386
· · · · · · · · · · · · · · · · · · AG 7.24 Cir. Array · · · · · · · · · · · · · · · · · · 386
Enqueue · · · · · · · · · · · · · · 384 · · · · · · · · · · · · · · · · · · AG 7.22 Linked list · · · · · · · · · · · · · · · · · · 386
· · · · · · · · · · · · · · · · · · AG 7.24 Cir. Array · · · · · · · · · · · · · · · · · · 386
Random permutation Knuth shuffle · · · AG 2.26 Inductive prog. · · · · · · · · · · · · · · 67
(Shuffling) · · · · · · · · · · · · · · · 67 ≤p sort · · · · · · · · · AG 10.6 Reduction · · · · · · · · · · · · · · · · · · 563
Random riffle · · · AG 12.6 Randomized · · · · · · · · · · · · · · · · 704
Riffle
Perfect riffle · · · · · · · · · · 704 faro shuffle · · · · · · AG 12.5 Definition · · · · · · · · · · · · · · · · · · · 704
Rising factorial power · · · · · · · · · · · · · · ·Eq (S-2.39) Recursion · · · · · · · · · · · · · · · 89,S-49
n^k̄ · · · · · · · · · · · · · · · · 89,S-49 · · · · · · · · · · · · · · · · AG S-2.39 Inductive prog. · · · · · · · · · 89,S-50
· · · · · · · · · · · · · · · · AG S-3.32 Divide & Conq. · · · · · · · · 151,S-83
≤p BNC · · · · · · Eq (10.25) Reduction · · · · · · · · · · · · · · · · · · 585
≤p KPN · · · · · · Eq (10.24) Reduction · · · · · · · · · · · · · · · · · · 585
≤m p FAC · · · · · · Eq (10.26) m-Reduction · · · · · · · · · · · · · · · · 585
Rod cutting
Maximization · · · · · · · · · 168 · · · · · · · · · · · · · · · · AG S-4.18 Greedy Aprx. · · · · · · · · · 206,S-109
· · · · · · · · · · · · · · · · · Eq (5.13) Recursion · · · · · · · · · · · · · · · · · · · 228
· · · · · · · · · · · · · · · · · · · AG 5.7 Strong ind. · · · · · · · · · · · · · · · · · 229
· · · · · · · · · · · · · · ·Eq (S-5.19) Memoization · · · · · · · · · · 274,S-147
· · · · · · · · · · · · · · · · AG S-7.40 Str. ind. + Cir. · · · · · · 425,S-296
Minimization · · · 206,S-109 · · · · · · · · · · · · · · · · AG S-4.19 Greedy Aprx. · · · · · · · · · 206,S-110
· · · · · · · · · · · · · · ·Eq (S-5.17) Recursion · · · · · · · · · 274,S-146
· · · · · · · · · · · · · · · · · AG S-5.9 Strong ind. · · · · · · · · · · · 274,S-146
· · · · · · · · · · · · · · ·Eq (S-5.18) Memoization · · · · · · · · · · 274,S-146
· · · · · · · · · · · · · · · · AG S-7.41 Str. ind. + Cir. · · · · · · 425,S-297
Root Finding · · · · · · · · · · · 107 · · · · · · · · · · · · · · · · Eq (S-3.1) Recursion · · · · · · · · · · · · · · 150,S-78
· · · · · · · · · · · · · · · · AG S-3.24 Inductive prog. · · · · · · · · 150,S-79
bisection · · · · · · · · AG 3.11 Divide & Conq. · · · · · · · · · · · · · 107
bisection · · · · · · AG S-3.25 Tail recursion D.& C. · · 150,S-79
Round robin tournament ·························· Divide & Conq. · · · · · · · · · · · · · 141
Satisfiability problems
Circuit (CCS) · · · · · · · · · 642 SAT ≤p · · · · · · · · · Tm 11.4 NP-complete · · · · · · · · · · · · · · · · 643
Circuit - 3 basic gate only CCS ≤p · · · · · · · Tm S-11.1 NP-complete · · · · · · · · · · 687,S-487
(C3S) · · · · · · · · · · · · · · · · · 687 NOGS ≤p · · · · · Tm S-11.5 NP-complete · · · · · · · · · · 687,S-488
NAGS ≤p · · · · · Tm S-11.6 NP-complete · · · · · · · · · · 687,S-488
Circuit - NAND gate only C3S ≤p · · · · · · · · Tm 11.10 NP-complete · · · · · · · · · · · · · · · · 655
(NAGS) · · · · · · · · · · · · · · 655 NOGS ≤p · · · · · Tm S-11.4 NP-complete · · · · · · · · · · 687,S-488
Circuit - NOR gate only C3S ≤p · · · · · · · Tm S-11.2 NP-complete · · · · · · · · · · 687,S-487
(NOGS) · · · · · · · · · · · · · · 687 NAGS ≤p · · · · · Tm S-11.3 NP-complete · · · · · · · · · · 687,S-487
CNF (SCN) · · · · · · · · · · · 652 SC3 ≤p · · · · · · · · · Tm 11.7 NP-complete · · · · · · · · · · · · · · · · 652
CNF-3 (SC3) · · · · · · · · · 650 SAT ≤p · · · · · · · · · Tm 11.6 NP-complete · · · · · · · · · · · · · · · · 650
SCN ≤p · · · · · · · · · Tm 11.8 NP-complete · · · · · · · · · · · · · · · · 653
DNF (SDN) · · · · · · · · · · 683 · · · · · · · · · · · · · · · · · · AG 11.6 Recursion · · · · · · · · · · · · · · · · · · · 683
Proposition (SAT) · · · · 637 Cook-Levin · · · · · Tm 11.2 NP-complete · · · · · · · · · · · · · · · · 640
CCS ≤p · · · · · · · · · AG 11.5 NP-complete · · · · · · · · · · · · · · · · 647
SCN ≤p · · · · · · Eq (11.16) NP-complete · · · · · · · · · · · · · · · · 653
SC3 ≤p · · · · · · · Eq (11.17) NP-complete · · · · · · · · · · · · · · · · 653
Search
Unsorted list · · · · · · · · · · · 57 Sequential search AG 2.17 Inductive prog. · · · · · · · · · · · · · · 57
(all occurrences) · · · · · · · · · · · · · · · · · · · Lm 2.4 Recursion · · · · · · · · · · · · · · · · · · · · 57
· · · · · · · · · · · · · · · · · AG S-3.2 Divide & Conq. · · · · · · · · 144,S-55
Unsorted list (distinct) · 20 Sequential search AG 1.10 Inductive prog. · · · · · · · · · · · · · · 20
· · · · · · · · · · · · · · ·Eq (S-2.33) Recursion · · · · · · · · · · · · · · · 87,S-44
· · · · · · · · · · · · · · · · · · · AG 3.7 Divide & Conq. · · · · · · · · · · · · · 102
also see under respective data structures such as sorted list, BST, etc.
Set Cover
Set Cover · · · · · · · 208,S-116 · · · · · · · · · · · · · · · · AG S-4.25 Greedy Aprx. · · · · · · · · · 208,S-116
VCPdv ≤p · · · · Eq (11.63) NP-hard · · · · · · · · · · · · · · · · · · · · 675
SCVdv ≤p · · · · Eq (11.64) NP-hard · · · · · · · · · · · · · · · · · · · · 675
⟨decision ver.⟩ · · · · · · · · · · · 675 VCPdv ≤p · · · · · · AG 11.5 NP-complete · · · · · · · · · · · · · · · · 675
weighted Set Cover · · · · · · · · · · · · · · · · · · · · · · · AG S-4.26 Greedy Aprx. · · · · · · · · · 209,S-117
· · · · · · · · · · · · · · · · · 209,S-117 · · · · · · · · · · · · · · · · AG S-4.27 Greedy Aprx. · · · · · · · · · 209,S-117
SCV ≤p · · · · Eq (S-11.94) NP-hard · · · · · · · · · · · · · · 692,S-507
wSCVdv ≤p · Eq (S-11.95) NP-hard · · · · · · · · · · · · · · 692,S-507
⟨decision ver.⟩ · · ·692,S-506 SCVdv ≤p · · Eq (S-11.93) NP-complete · · · · · · · · · · 692,S-507
Set Partition · · · · · · · · · · · · 660 SSE ≤p · · · · · · · Eq (11.24) NP-complete · · · · · · · · · · · · · · · · 660
Set partition numbers
Bell number · · · · · · · · · · 410 ≤m p SNS · · · · · · · · AG 7.45 m-Reduction + Cyl · · · · · · · · · 410
≤m p SNS · · · · · · · · AG 7.46 m-Reduction + Cyl · · · · · · · · · 411
≤m p SNS · · · · · · · · AG 7.47 m-Reduction + Cyl · · · · · · · · · 411
Bell triangle · · · · · AG 7.48 Cyl · · · · · · · · · · · · · · · · · · · · · · · · · 412
≤m p SNS · · · · · · Eq (10.46) m-Reduction · · · · · · · · · · · · · · · · 592
≤p SPam · · · · · Eq (10.47) Reduction · · · · · · · · · · · · · · · · · · 592
≤p SPal · · · · · · ·Eq (10.48) Reduction · · · · · · · · · · · · · · · · · · 592
≤m p BNC · · · · · Eq (10.49) m-Reduction · · · · · · · · · · · · · · · · 592
Number of at least k ≤m p SNS · · · · · · · Eq (7.14) m-Reduction · · · · · · · · · · · · · · · · 413
partition · · · · · · · · · · · · · · 413 ≤m p SNS · · · · · · · · AG 7.51 m-Reduction + Cyl · · · · · · · · · 414
≤m p SPam · · · · · Eq (10.44) m-Reduction · · · · · · · · · · · · · · · · 591
≤p SPam · · · · · Eq (10.50) Reduction · · · · · · · · · · · · · · · · · · 592
Number of at most k ≤m p SNS · · · · · · · Eq (7.13) m-Reduction · · · · · · · · · · · · · · · · 413
partition · · · · · · · · · · · · · · 412 ≤m p SNS · · · · · · · · AG 7.49 m-Reduction + Cyl · · · · · · · · · 413
≤m p SNS · · · · · · · · AG 7.50 m-Reduction + Cyl · · · · · · · · · 413
≤m p SPal · · · · · · Eq (10.45) m-Reduction · · · · · · · · · · · · · · · · 591
≤p SPal · · · · · · ·Eq (10.51) Reduction · · · · · · · · · · · · · · · · · · 592
Stirling number of the · · · · · · · · · · · · · · Eq (6.27) Recursion · · · · · · · · · · · · · · · · · · · 347
second kind · · · · · 347,S-235 · · · · · · · · · · · · · · · · AG S-6.23 2D Memoization · · · · · · 347,S-236
· · · · · · · · · · · · · · · · AG S-6.24 2D Memoization II · · · ·347,S-236
· · · · · · · · · · · · · · · · AG S-6.25 2D Str. ind. · · · · · · · · · · 347,S-236
· · · · · · · · · · · · · · · AG 7.44 Strong ind. + Cyl · · · · · · · · · · 409
Shortest path (Graph)
Shortest path cost · · · · 188 Dijkstra’s algo · · · AG 4.20 Greedy Algo. · · · · · · · · · · · · · · · 189
Shortest path cost (DAG) · · · · · · · · · · · · · · Eq (5.48) Recursion · · · · · · · · · · · · · · · · · · · 262
· · · · · · · · · · · · · · · · · · · · · · · 262 · · · · · · · · · · · · · · · · · · AG 5.29 Strong ind. · · · · · · · · · · · · · · · · · 262
≤p LPC · · · · · · Eq (10.16) Reduction · · · · · · · · · · · · · · · · · · 577
Shortest path length · · 388 · · · · · · · · · · · · · · · · · · AG 7.27 Queue · · · · · · · · · · · · · · · · · · · · · · 389
Shortest path length · · · · · · · · · · · · · · Eq (5.47) Recursion · · · · · · · · · · · · · · · · · · · 260
(DAG) · · · · · · · · · · · · · · · · 259 · · · · · · · · · · · · · · · · · · AG 5.28 Strong ind. · · · · · · · · · · · · · · · · · 261
Skip-b list
Search · · · · · · · · · · · · · · · · 483 · · · · · · · · · · · · · · · · · · AG 8.26 Recursion · · · · · · · · · · · · · · · · · · · 483
Insertion · · · · · · · · · · · · · · 485 · · · · · · · · · · · · · · · · · · AG 8.27 Recursion · · · · · · · · · · · · · · · · · · · 485
Deletion · · · · · · · · · · · · · · 488 · · · · · · · · · · · · · · · · · · AG 8.28 Recursion · · · · · · · · · · · · · · · · · · · 488
Sorted list operations
Checking · · · · · · · · · · 86,S-39 · · · · · · · · · · · · · · ·Eq (S-2.29) Recursion · · · · · · · · · · · · · · · 86,S-39
· · · · · · · · · · · · · · · · AG S-2.23 Inductive prog. · · · · · · · · · 86,S-39
· · · · · · · · · · · · · · · · AG S-2.24 Tail recursion · · · · · · · · · · · 86,S-40
· · · · · · · · · · · · · · · · · AG S-3.8 Divide & Conq. · · · · · · · · 145,S-61
Insert · · · · · · · · · · · · · · · · · · 61 (by swapping) · · · AG 2.20 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 62
(by sliding) · · · · · ·AG 2.22 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 62
Merge · · · · · · · · · · · · · · · · · · 97 · · · · · · · · · · · · · · · · · · Eq (3.7) Recursion · · · · · · · · · · · · · · · · · · · · 97
· · · · · · · · · · · · · · · · · · · AG 3.4 Inductive prog. · · · · · · · · · · · · · · 97
Search · · · · · · · · · · · · · 62,105 · · · · · · · · · · · · · · · · · · · Lm 2.7 Recursion · · · · · · · · · · · · · · · · · · · · 62
Sequential search AG 2.21 Tail recursion · · · · · · · · · · · · · · · · 62
binary search · · · · AG 3.10 Divide & Conq. · · · · · · · · · · · · · 106
Sorting · · · · · · · · · · · · · · · · · · ·60 Insertion sort · · · · · Lm 2.8 Recursion · · · · · · · · · · · · · · · · · · · · 63
Insertion sort · · · ·AG 2.23 Inductive prog. · · · · · · · · · · · · · · 64
Bubble sort · · · · · AG 2.31 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 74
Merge sort · · · · · · · AG 3.5 Divide & Conq. · · · · · · · · · · · · · · 98
Merge sort · · · · · · AG 3.23 Bottom up D.& C. · · · · · · · · · · 127
Radix sort · · · · · · AG 3.28 Partition & Conq. · · · · · · · · · · 132
Radix sort · · · · · · AG 3.29 Partition & Conq. · · · · · · · · · · 134
Counting sort · · · AG 3.30 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 136
Counting sort · · · AG 3.31 · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 137
Selection sort · · · · · AG 4.2 Greedy algo. · · · · · · · · · · · · · · · · 155
heapsort · · · · · · · · AG 9.11 Greedy + minheap · · · · · · · · · · 513
heapsort · · · · · · · · AG 9.12 Greedy + maxheap · · · · · · · · · 515
AVL-sort · · · · · · · · AG 9.38 DFT + AVL · · · · · · · · · · · · · · · · 544
Leftist heapsort AG S-9.40 Greedy + leftist · · · · · · 553,S-397
≤p rMST · · · · · · AG 10.15 Reduction · · · · · · · · · · · · · · · · · · 570
≤p SPC · · · · · · · · AG 10.16 Reduction · · · · · · · · · · · · · · · · · · 571
≤p CVH · · · · · · · AG 10.23 Reduction · · · · · · · · · · · · · · · · · · 582
≤p LPC · · · · · ·AG S-10.24 Reduction · · · · · · · · · · · · 616,S-413
≤p LPL · · · · · · AG S-10.25 Reduction · · · · · · · · · · · · 616,S-414
≤p CPP · · · · · AG S-10.26 Reduction · · · · · · · · · · · · 616,S-415
≤p TPS · · · · · · AG S-10.27 Reduction · · · · · · · · · · · · 616,S-415
Quicksort · · · · · · · AG 12.3 Randomized algo. (LV) · · · · · · 700
Spanning tree (Graph)
Maximum spanning tree · · · · · · · · · · · · · · · · AG S-4.29 Greedy Algo. · · · · · · · · · 209,S-122
· · · · · · · · · · · · · · · · · 209,S-122 · · · · · · · · · · · · · · · · AG S-4.30 Greedy Algo. · · · · · · · · · 209,S-124
≤p MST · · · · Eq (S-10.28) Reduction · · · · · · · · · · · · 617,S-418
Maximum spanning rooted · · · · · · · · · · · ·Eq (S-5.75) Recursion · · · · · · · · · 291,S-204
tree (edge weight) · · · · · · · · · · · · · · · · AG S-5.57 Strong Ind. · · · · · · · · · · · 291,S-205
· · · · · · · · · · · · · · · · · 291,S-204 ≤p MSrT · · · Eq (S-10.30) Reduction · · · · · · · · · · · · 617,S-419
Maximum spanning rooted · · · · · · · · · · · ·Eq (S-5.74) Recursion · · · · · · · · · 290,S-203
tree (path cost) =LPC-dag · · · · · · · · · · · · · · · · AG S-5.56 Strong ind. · · · · · · · · · · · 290,S-204
· · · · · · · · · · · · · · · · · 291,S-205 ≤p SPC-dag · · Eq (10.15) Reduction · · · · · · · · · · · · · · · · · · 577
Minimum spanning tree Prim-Jarnik · · · · · AG 4.18 Greedy Algo. · · · · · · · · · · · · · · · 184
· · · · · · · · · · · · · · · · · · · · · · · 184 Kruskal · · · · · · · · · AG 4.19 Greedy Algo. · · · · · · · · · · · · · · · 187
≤p MxST · · · Eq (S-10.27) Reduction · · · · · · · · · · · · 617,S-418
Minimum spanning rooted · · · · · · · · · · · · · · Eq (5.49) Recursion · · · · · · · · · · · · · · · · · · · 264
tree (edge weight) · · · · · 264 · · · · · · · · · · · · · · · · · · AG 5.30 Strong Ind. · · · · · · · · · · · · · · · · · 264
≤p MxSrT · · Eq (S-10.29) Reduction · · · · · · · · · · · · 617,S-419
Minimum spanning rooted · · · · · · · · · · · · · · Eq (5.48) Recursion · · · · · · · · · · · · · · · · · · · 262
tree (path cost) =SPC-dag · · · · · · · · · · · · · · · · · · AG 5.29 Strong ind. · · · · · · · · · · · · · · · · · 262
· · · · · · · · · · · · · · · · · · · · · · · 265 ≤p LPC-dag · · Eq (10.16) Reduction · · · · · · · · · · · · · · · · · · 577
Square root ⌊√n⌋ · · · · · · · · 78 · · · · · · · · · · · · · · · · ·Eq (2.40) Recursion · · · · · · · · · · · · · · · · · · · · 78
· · · · · · · · · · · · · · · · · · AG 2.36 Tail recursion · · · · · · · · · · · · · · · · 78
· · · · · · · · · · · · · · · · · · AG 2.37 Inductive prog. · · · · · · · · · · · · · · 78
· · · · · · · · · · · · · · · · AG S-3.26 Divide & Conq. · · · · · · · · 150,S-79
· · · · · · · · · · · · · · · · AG S-3.27 Tail recursion D.& C. · · 150,S-79
Square triangular related numbers, STN
Square root of STN, √STN ≤p STN · · · · · · · · Eq (5.93) Reduction · · · · · · · · · · · · · · · · · · 285
STNr · · · · · · · · · · · · · · · · · 285 · · · · · · · · · · · · · · Eq (5.94) Recursion · · · · · · · · · · · · · · · · · · · 285
· · · · · · · · · · · · · · · · AG S-5.39 Strong ind. · · · · · · · · · · · 285,S-186
· · · · · · · · · · · · · · ·Eq (S-5.63) Memoization · · · · · · · · · · 285,S-186
· · · · · · · · · · · · · · ·Eq (S-5.66) Divide & Conq. · · · · · · · 285,S-187
· · · · · · · · · · · · · · · · AG S-5.40 Memoiz. + D&C · · · · · 285,S-187
Square triangular numbers · · · · · · · · · · · · · · Eq (5.90) Recursion · · · · · · · · · · · · · · · · · · · 284
STN · · · · · · · · · · · · · · · · · · 284 · · · · · · · · · · · · · · · · AG S-5.37 Strong ind. · · · · · · · · · · · 284,S-183
· · · · · · · · · · · · · · ·Eq (S-5.61) Memoization · · · · · · · · · · 284,S-184
· · · · · · · · · · · · · · ·Eq (S-5.62) Divide & Conq. · · · · · · · 284,S-185
· · · · · · · · · · · · · · · · AG S-5.38 Memoiz. + D&C · · · · · 284,S-185
Stack data structure
Pop · · · · · · · · · · · · · · · · · · · 356 · · · · · · · · · · · · · · · · · · · AG 7.2 Array · · · · · · · · · · · · · · · · · · · · · · · 357
· · · · · · · · · · · · · · · · · · · AG 7.4 Linked list · · · · · · · · · · · · · · · · · · 357
Push · · · · · · · · · · · · · · · · · · 356 · · · · · · · · · · · · · · · · · · · AG 7.1 Array · · · · · · · · · · · · · · · · · · · · · · · 357
· · · · · · · · · · · · · · · · · · · AG 7.3 Linked list · · · · · · · · · · · · · · · · · · 357
Stamp problems
3-5 cent stamp · · · · · · · · 220 · · · · · · · · · · · · · · · · · · · AG 5.2 Strong Ind. · · · · · · · · · · · · · · · · · 221
4-5 cent stamp · · · · · · · · 270 · · · · · · · · · · · · · · · · · ·Tm 5.10 Strong Ind. · · · · · · · · · · · · · · · · · 270
Frobenius number · · · · · 226 ≤m p USSE · · · · · · · · Tm 5.4 m-Reduction · · · · · · · · · · · · · · · · 226
⟨decision ver.⟩ · · · · · · · · · 684 ¬USSE · · · · · · · · Tm 11.22 co-NP-complete · · · · · · · · · · · · · 684
Postage stamp equality · · · · · · · · · · · · · · · · · AG S-4.6 Greedy Aprx. · · · · · · · · · · 201,S-93
maximization · · · · 201,S-93 · · · · · · · · · · · ·Eq (S-5.11) Recursion · · · · · · · · · 273,S-142
· · · · · · · · · · · · · · · · · AG S-5.6 Strong ind. · · · · · · · · · · · 273,S-142
· · · · · · · · · · · · · · ·Eq (S-5.12) Memoization · · · · · · · · · · 273,S-142
· · · · · · · · · · · · · · · · AG S-7.50 Str. ind. + Cir. · · · · · · 427,S-305
USSE ≤p · · · Eq (S-11.90) NP-hard · · · · · · · · · · · · · · 691,S-506
⟨decision ver.⟩ · · ·691,S-506 USSE ≤p · · · Eq (S-11.91) NP-complete · · · · · · · · · · 691,S-506
Postage stamp equality Cashier’s algo · · · · AG 4.5 Greedy Aprx. · · · · · · · · · · · · · · · 159
minimization · · · · · · · · · · 159 · · · · · · · · · · · · · · · Eq (5.7) Recursion · · · · · · · · · · · · · · · · · · · 222
· · · · · · · · · · · · · · · · · · · AG 5.3 Strong ind. · · · · · · · · · · · · · · · · · 223
· · · · · · · · · · · · · · ·Eq (S-5.10) Memoization · · · · · · · · · · 272,S-141
· · · · · · · · · · · · · · · · · · Eq (6.4) 2D recursion · · · · · · · · · · · · · · · · 298
· · · · · · · · · · · · · · · · · · · AG 6.4 2D str. ind. · · · · · · · · · · · · · · · · ·300
· · · · · · · · · · · · · · · · · AG S-6.2 2D Memoization · · · · · · 341,S-211
· · · · · · · · · · · · · · · · · · AG 7.36 Str. ind. + Cir. · · · · · · · · · · · · 399
· · · · · · · · · · · · · · · · · · AG 7.37 Recursion + Cir. · · · · · · · · · · · 401
· · · · · · · · · · · · · · · · AG S-7.51 Str. ind. + Cyl. · · · · · · 427,S-307
USSE ≤p · · · Eq (S-11.87) NP-hard · · · · · · · · · · · · · · 691,S-505
⟨decision ver.⟩ · · ·691,S-505 USSE ≤p · · · Eq (S-11.88) NP-complete · · · · · · · · · · 691,S-505
Postage stamp maximiza- · · · · · · · · · · · · · · · · Eq (S-4.1) Greedy Algo. · · · · · · · · · · 201,S-95
tion · · · · · · · · · · · · · · 201,S-94 · · · · · · · · · · · ·Eq (S-5.13) Recursion · · · · · · · · · 273,S-143
· · · · · · · · · · · · · · · · · AG S-5.7 Strong ind. · · · · · · · · · · · 273,S-143
Postage stamp minimiza- · · · · · · · · · · · · · · · · · · Eq (4.3) Greedy Algo. · · · · · · · · · · · · · · · 161
tion · · · · · · · · · · · · · · · · · · · 161 · · · · · · · · · · · · · · · Eq (5.8) Recursion · · · · · · · · · · · · · · · · · · · 223
· · · · · · · · · · · · · · · · · · · AG 5.4 Strong ind. · · · · · · · · · · · · · · · · · 223
Ways of stamping · · · · · 295 · · · · · · · · · · · · · · · · · · Eq (6.2) 2D recursion · · · · · · · · · · · · · · · · 297
· · · · · · · · · · · · · · · · · · · AG 6.2 2D str. ind. · · · · · · · · · · · · · · · · ·297
· · · · · · · · · · · · · · · · · · · AG 6.3 2D Memoization · · · · · · · · · · · · 298
· · · · · · · · · · · · · · · · · · AG 7.41 Str. ind. + Cyl. · · · · · · · · · · · · 405
Stirling number of the first kind (see under Cycle numbers)
Stirling number of the second kind (see under Set partition numbers)
String matching
Edit distance (InDel) · 313 · · · · · · · · · · · · · · Eq (6.12) Recursion · · · · · · · · · · · · · · · · · · · 313
· · · · · · · · · · · · · · · · · · AG 6.16 2D str. ind. · · · · · · · · · · · · · · · · ·313
≤p LCS · · · · · · · · · · Tm 6.2 Reduction · · · · · · · · · · · · · · · · · · 313
≤p SPL · · · · · · AG S-10.30 Reduction · · · · · · · · · · · · 617,S-419
Exact string matching · · · 2 ·························· · · · · · · · · · · · · · · · · · · · · · · · · see [166]
Levenshtein distance · · 314 · · · · · · · · · · · · · · Eq (6.14) Recursion · · · · · · · · · · · · · · · · · · · 314
· · · · · · · · · · · · · · · · · · AG 6.17 2D str. ind. · · · · · · · · · · · · · · · · ·314
· · · · · · · · · · · · · · · · · · AG 6.18 Memoization · · · · · · · · · · · · · · · · 316
≤p SPC · · · · · · AG S-10.31 Reduction · · · · · · · · · · · · 617,S-419
Longest Common Sub- · · · · · · · · · · · · · · Eq (6.11) Recursion · · · · · · · · · · · · · · · · · · · 311
sequence (LCS) · · · · · · · 311 · · · · · · · · · · · · · · · · · · AG 6.14 2D str. ind. · · · · · · · · · · · · · · · · ·311
≤p InDel · · · · · · · Eq (6.13) Reduction · · · · · · · · · · · · · · · · · · 314
≤p LPC · · · · · · · ·AG 10.21 Reduction · · · · · · · · · · · · · · · · · · 576
Subarray arithmetic problems (see Consecutive subsequence arithmetic)
Subset arithmetic problems
Subset product equality · · · · · · · · · · · ·Eq (S-6.10) Recursion · · · · · · · · · 344,S-225
positive · · · · · · · · · 344,S-225 dynamic · · · · · · AG S-6.17 2D Str. ind. · · · · · · · · · · 344,S-225
· · · · · · · · · · · · · · · · AG S-6.18 Memoization · · · · · · · · · · 343,S-226
· · · · · · · · · · · · · · · · AG S-7.60 2D str. ind. + Cyl. · · · 429,S-314
· · · · · · · · · · · · · · · · AG S-7.61 2D str. ind. + Cyl. · · · 429,S-315
SSE ≤p · · · · · · Eq (S-11.5) NP-complete · · · · · · · · · · 687,S-489
Subset product maximiza- · · · · · · · · · · · ·Eq (S-6.11) Recursion · · · · · · · · · 345,S-227
tion positive · · · · 345,S-226 dynamic · · · · · · AG S-6.19 2D Str. ind. · · · · · · · · · · 345,S-227
· · · · · · · · · · · · · · · · AG S-6.20 Memoization · · · · · · · · · · 343,S-227
· · · · · · · · · · · · · · · · AG S-7.62 2D str. ind. + Cyl. · · · 430,S-316
SPEp ≤p · · · · ·Eq (S-11.7) NP-hard · · · · · · · · · · · · · · 687,S-489
SPminp ≤p · Eq (S-11.12) NP-hard · · · · · · · · · · · · · · 687,S-490
SSM ≤p · · · · Eq (S-11.17) NP-hard · · · · · · · · · · · · · · 687,S-491
⟨decision ver.⟩ · · ·687,S-489 SPEp ≤p · · · · ·Eq (S-11.8) NP-complete · · · · · · · · · · 687,S-489
SPminpdv ≤p Eq (S-11.14) NP-complete · · · · · · · · · · 687,S-490
SPminpdv ≤p Eq (S-11.16) NP-complete · · · · · · · · · · 687,S-491
Subset product minimiza- · · · · · · · · · · · ·Eq (S-6.12) Recursion · · · · · · · · · 345,S-229
tion positive · · · · 345,S-228 dynamic · · · · · · AG S-6.21 2D Str. ind. · · · · · · · · · · 345,S-229
· · · · · · · · · · · · · · · · AG S-6.22 Memoization · · · · · · · · · · 343,S-229
· · · · · · · · · · · · · · · · AG S-7.63 2D str. ind. + Cyl. · · · 430,S-317
SPEp ≤p · · · · ·Eq (S-11.9) NP-complete · · · · · · · · · · 687,S-490
SPMp ≤p · · · Eq (S-11.11) NP-complete · · · · · · · · · · 687,S-490
SSmin ≤p · · · Eq (S-11.19) NP-complete · · · · · · · · · · 687,S-491
⟨decision ver.⟩ · · ·687,S-490 SPEp ≤p · · · Eq (S-11.10) NP-complete · · · · · · · · · · 687,S-490
SPMpdv ≤p · Eq (S-11.13) NP-complete · · · · · · · · · · 687,S-490
SPMpdv ≤p · Eq (S-11.15) NP-complete · · · · · · · · · · 687,S-491
Subset sum equality · · · 305 · · · · · · · · · · · · · · · · · AG S-4.8 Greedy Aprx. · · · · · · · · · · 203,S-98
· · · · · · · · · · · · · · · · · · Eq (6.8) Recursion · · · · · · · · · · · · · · · · · · · 306
dynamic · · · · · · · · AG 6.11 2D Str. ind. · · · · · · · · · · · · · · · · 306
· · · · · · · · · · · · · · · · AG S-6.11 Memoization · · · · · · · · · · 343,S-220
· · · · · · · · · · · · · · · · AG S-7.54 2D str. ind. + Cyl. · · · 428,S-309
· · · · · · · · · · · · · · · · AG S-7.55 2D str. ind. + Cyl. · · · 428,S-310
SPEp ≤p · · · · ·Eq (S-11.6) NP-complete · · · · · · · · · · 687,S-489
STP ≤p · · · · · · · Eq (11.25) NP-complete · · · · · · · · · · · · · · · · 660
Subset sum maximization · · · · · · · · · · · · · Eq (S-6.5) Recursion · · · · · · · · · 343,S-221
· · · · · · · · · · · · · · · · · · 202,S-97 dynamic · · · · · · AG S-6.12 2D Str. ind. · · · · · · · · · · 343,S-221
· · · · · · · · · · · · · · · · AG S-6.13 Memoization · · · · · · · · · · 343,S-221
· · · · · · · · · · · · · · · · AG S-7.56 2D str. ind. + Cyl. · · · 429,S-311
· · · · · · · · · · · · · · · · AG S-7.57 2D str. ind. + Cyl. · · · 429,S-312
SSE ≤p · · · · · · · Eq (11.27) NP-hard · · · · · · · · · · · · · · · · · · · · 661
SSMdv ≤p · · · · Eq (11.28) NP-hard · · · · · · · · · · · · · · · · · · · · 661
SSmin ≤p · · · · · Eq (11.33) NP-hard · · · · · · · · · · · · · · · · · · · · 662
SSmin ≤p · · · · · Eq (11.37) NP-hard · · · · · · · · · · · · · · · · · · · · 663
SPMp ≤p · · · Eq (S-11.18) NP-hard · · · · · · · · · · · · · · 687,S-491
· · · · · · · · · · · · · · · · ·AG 12.10 Greedy 2-approx. · · · · · · · · · · · 714
· · · · · · · · · · · · · · · · ·AG 12.11 4/3-approximate · · · · · · · · · · · · · 715
⟨decision ver.⟩ · · · · · · · · · 661 SSE ≤p · · · · · · · Eq (11.26) NP-complete · · · · · · · · · · · · · · · · 661
SSmindv ≤p · · · Eq (11.35) NP-complete · · · · · · · · · · · · · · · · 663
SSmindv ≤p · · · Eq (11.39) NP-complete · · · · · · · · · · · · · · · · 663
SSmindv ≤p · · · Eq (11.41) NP-complete · · · · · · · · · · · · · · · · 663
Subset sum minimization · · · · · · · · · · · · · · · · · · AG S-4.7 Greedy Aprx. · · · · · · · · · · 202,S-96
· · · · · · · · · · · · · · · · · · 202,S-96 · · · · · · · · · · · · · Eq (S-6.6) Recursion · · · · · · · · · 344,S-222
dynamic · · · · · · AG S-6.14 2D Str. ind. · · · · · · · · · · 344,S-223
· · · · · · · · · · · · · · · · AG S-6.15 Memoization · · · · · · · · · · 344,S-223
· · · · · · · · · · · · · · · · AG S-7.58 2D str. ind. + Cyl. · · · 429,S-312
· · · · · · · · · · · · · · · · AG S-7.59 2D str. ind. + Cyl. · · · 429,S-313
SSE ≤p · · · · · · · Eq (11.30) NP-hard · · · · · · · · · · · · · · · · · · · · 662
SSmindv ≤p · · · Eq (11.31) NP-hard · · · · · · · · · · · · · · · · · · · · 662
SSM ≤p · · · · · · Eq (11.32) NP-hard · · · · · · · · · · · · · · · · · · · · 662
SSM ≤p · · · · · · Eq (11.36) NP-hard · · · · · · · · · · · · · · · · · · · · 663
SPminp ≤p · Eq (S-11.20) NP-hard · · · · · · · · · · · · · · 687,S-491
⟨decision ver.⟩ · · · · · · · · · 661 SSE ≤p · · · · · · · Eq (11.29) NP-complete · · · · · · · · · · · · · · · · 662
SSMdv ≤p · · · · Eq (11.34) NP-complete · · · · · · · · · · · · · · · · 663
SSMdv ≤p · · · · Eq (11.38) NP-complete · · · · · · · · · · · · · · · · 663
SSMdv ≤p · · · · Eq (11.40) NP-complete · · · · · · · · · · · · · · · · 663
Unbounded subset product · · · · · · · · · · · · · · · · AG S-4.16 Greedy Aprx. · · · · · · · · · 205,S-107
equality · · · · · · · · · · · · · · · 307 · · · · · · · · · · · ·Eq (S-5.72) Recursion · · · · · · · · · 289,S-199
· · · · · · · · · · · · · · · · AG S-5.51 Strong ind. · · · · · · · · · · · 289,S-199
· · · · · · · · · · · · · · · · AG S-5.52 Memoization · · · · · · · · · · 289,S-200
· · · · · · · · · · · · · · · · · · Eq (6.9) Recursion · · · · · · · · · · · · · · · · · · · 307
· · · · · · · · · · · · · · · · · · AG 6.12 2D Str. ind. · · · · · · · · · · · · · · · · 308
USSE ≤p · · · Eq (S-11.53) NP-complete · · · · · · · · · · 690,S-498
Unbounded subset product USPE ≤p · · · Eq (S-11.55) NP-hard · · · · · · · · · · · · · · 690,S-499
maximization · · · 690,S-499 USSM ≤p · · · Eq (S-11.56) NP-hard · · · · · · · · · · · · · · 690,S-499
USPmin ≤p · Eq (S-11.68) NP-hard · · · · · · · · · · · · · · 690,S-501
⟨decision ver.⟩ · · ·690,S-499 USPE ≤p · · · Eq (S-11.58) NP-complete · · · · · · · · · · 690,S-499
USPmindv · · Eq (S-11.70) NP-complete · · · · · · · · · · 690,S-501
USSMdv ≤p · Eq (S-11.59) NP-complete · · · · · · · · · · 690,S-500
Unbounded subset product USPE ≤p · · · Eq (S-11.61) NP-hard · · · · · · · · · · · · · · 690,S-500
minimization · · · · 689,S-500 USSmin ≤p · Eq (S-11.62) NP-hard · · · · · · · · · · · · · · 690,S-500
USPM ≤p · · Eq (S-11.67) NP-hard · · · · · · · · · · · · · · 690,S-501
⟨decision ver.⟩ · · ·689,S-500 USPE ≤p · · · Eq (S-11.64) NP-complete · · · · · · · · · · 690,S-501
USPMdv ≤p Eq (S-11.69) NP-complete · · · · · · · · · · 690,S-501
USSmindv · · ·Eq (S-11.65) NP-complete · · · · · · · · · · 690,S-501
Unbounded subset sum · · · · · · · · · · · · · · Eq (5.10) Recursion · · · · · · · · · · · · · · · · · · · 225
equality · · · · · · · · · · · · · · · 225 · · · · · · · · · · · · · · · · · · · AG 5.5 Strong ind. · · · · · · · · · · · · · · · · · 225
· · · · · · · · · · · · · · · · AG S-6.16 2D Str. ind. · · · · · · · · · · 344,S-224
· · · · · · · · · · · · · · · · AG S-7.47 Str. ind. + Cir. · · · · · · 427,S-302
SSE ≤p · · · · · · · Eq (11.23) NP-complete · · · · · · · · · · · · · · · · 659
USPE ≤p · · · Eq (S-11.54) NP-complete · · · · · · · · · · 690,S-498
Unbounded subset sum · · · · · · · · · · · · · · · · AG S-4.15 Greedy aprx. · · · · · · · · · 205,S-106
maximization · · · 205,S-106 · · · · · · · · · · · ·Eq (S-5.15) Recursion · · · · · · · · · 274,S-145
· · · · · · · · · · · · · · · · · AG S-5.8 Strong ind. · · · · · · · · · · · 274,S-145
· · · · · · · · · · · · · · ·Eq (S-5.16) Memoization · · · · · · · · · · 274,S-145
· · · · · · · · · · · · · · · · AG S-7.49 Str. ind. + Cir. · · · · · · 427,S-304
USSE ≤p · · · Eq (S-11.45) NP-hard · · · · · · · · · · · · · · 690,S-497
USPM ≤p · · Eq (S-11.57) NP-hard · · · · · · · · · · · · · · 690,S-499
USSmin ≤p · Eq (S-11.50) NP-hard · · · · · · · · · · · · · · 689,S-498
⟨decision ver.⟩ · · ·689,S-497 USSE ≤p · · · Eq (S-11.46) NP-complete · · · · · · · · · · 689,S-497
USPMdv ≤p Eq (S-11.60) NP-complete · · · · · · · · · · 690,S-500
USSmindv · · ·Eq (S-11.52) NP-complete · · · · · · · · · · 689,S-498
Unbounded subset sum · · · · · · · · · · · · · · · · AG S-4.13 Greedy aprx. · · · · · · · · · 204,S-104
minimization · · · · · · · · · · 227 · · · · · · · · · · · · · · Eq (5.12) Recursion · · · · · · · · · · · · · · · · · · · 227
· · · · · · · · · · · · · · · · · · · AG 5.6 Strong ind. · · · · · · · · · · · · · · · · · 228
· · · · · · · · · · · · · · ·Eq (S-5.14) Memoization · · · · · · · · · · 274,S-144
· · · · · · · · · · · · · · · · AG S-7.48 Str. ind. + Cir. · · · · · · 427,S-304
USSE ≤p · · · Eq (S-11.47) NP-hard · · · · · · · · · · · · · · 689,S-498
USSM ≤p · · · Eq (S-11.49) NP-hard · · · · · · · · · · · · · · 689,S-498
USPmin ≤p · Eq (S-11.63) NP-hard · · · · · · · · · · · · · · 690,S-500
⟨decision ver.⟩ · · ·689,S-498 USSE ≤p · · · Eq (S-11.48) NP-complete · · · · · · · · · · 689,S-498
USPmindv · · Eq (S-11.66) NP-complete · · · · · · · · · · 690,S-501
USSMdv ≤p · Eq (S-11.51) NP-complete · · · · · · · · · · 689,S-498
Subset k arithmetic problems (see Order statistics)
Select k positive product · · · · · · · · · · · · · · · · · AG S-4.4 Greedy Algo. · · · · · · · · · · 200,S-90
maximization · · · · 200,S-90 · · · · · · · · · · · · · · · · AG S-9.14 Greedy + maxHeap · · · 548,S-375
· · · · · · · · · · · · · · · · AG S-9.15 Ind. prog. + minHeap 548,S-375
· · · · · · · · · · · · · · · · AG S-9.16 Ind. prog. + minHeap 548,S-376
≤p SKSS · · · · · · AG S-10.3 Reduction · · · · · · · · · · 614,S-404
≤p SKSPmin Eq (S-10.13) Reduction · · · · · · · · · · 614,S-405
≤p SKSPmin Eq (S-10.14) Reduction · · · · · · · · · · 614,S-405
Select k positive product · · · · · · · · · · · · · · · · · AG S-4.5 Greedy Algo. · · · · · · · · · · 200,S-91
minimization · · · · · 200,S-91 · · · · · · · · · · · · · · · · AG S-9.17 Greedy + minHeap · · · 548,S-376
· · · · · · · · · · · · · · · · AG S-9.18 Ind. prog. + maxHeap 548,S-377
· · · · · · · · · · · · · · · · AG S-9.19 Ind. prog. + maxHeap 548,S-377
≤p SKSP · · · · · AG S-10.5 Reduction · · · · · · · · · · 614,S-405
≤p SKSP · · · Eq (S-10.12) Reduction · · · · · · · · · · 614,S-405
≤p SKSSmin · · AG S-10.6 Reduction · · · · · · · · · · 614,S-405
· · · · · · · · · · · · · · · · AG S-12.3 Las Vegas · · · · · · · · · · · · ·721,S-524
Select k sum maximization · · · · · · · · · · · · · · · · AG S-2.20 Inductive prog. · · · · · · · · · 86,S-38
· · · · · · · · · · · · · · · · · · · · · · · 157 · · · · · · · · · · · · · · · · · · · AG 4.4 Greedy Algo. · · · · · · · · · · · · · · · 158
· · · · · · · · · · · · · · · · · AG S-9.8 Greedy + maxHeap · · · 547,S-372
· · · · · · · · · · · · · · · · · AG S-9.9 Ind. prog. + minHeap 547,S-373
· · · · · · · · · · · · · · · · AG S-9.10 Ind. prog. + minHeap 547,S-373
≤p SKSSmin · Eq (S-10.7) Reduction · · · · · · · · · · · · 614,S-403
≤p SKSSmin · Eq (S-10.8) Reduction · · · · · · · · · · · · 614,S-403
· · · · · · · · · · · · · · · · AG S-12.2 Las Vegas · · · · · · · · · · · · ·721,S-523
Select k sum minimization · · · · · · · · · · · · · · · · · AG S-4.3 Greedy Algo. · · · · · · · · · · 200,S-89
· · · · · · · · · · · · · · · · · · 200,S-89 · · · · · · · · · · · · · · · · AG S-9.11 Greedy + minHeap · · · 548,S-373
· · · · · · · · · · · · · · · · AG S-9.12 Ind. prog. + maxHeap 548,S-374
· · · · · · · · · · · · · · · · AG S-9.13 Ind. prog. + maxHeap 548,S-374
≤p SKSS · · · · · · AG S-10.2 Reduction · · · · · · · · · · · · 614,S-403
≤p SKSS · · · · · Eq (S-10.6) Reduction · · · · · · · · · · · · 614,S-403
≤p SKSPmin · · AG S-10.7 reduction · · · · · · · · · · · · · 614,S-406
Subset selection without repetition
at least k · · · 430,S-321   $\sum_{i=k}^{n}\binom{n}{i}$ · · · Eq (10.60) m-Reduction · · · 593
· · · · · · · · · · · · · · · · AG S-7.70 Strong ind. + Cyl · · · · 430,S-321
$\leq_p^m$ SWam · · · Eq (10.61) Reduction · · · 593
≤p SWam · · · · · Eq (10.65) Reduction · · · · · · · · · · · · · · · · · · 593
≤p SWam · · · · · Eq (10.67) Reduction · · · · · · · · · · · · · · · · · · 593
at most k · · · 430,S-319   $\sum_{i=0}^{k}\binom{n}{i}$ · · · Eq (10.59) m-Reduction · · · 593
· · · · · · · · · · · · · · · · AG S-7.68 Strong ind. + Cyl · · · · 430,S-320
· · · · · · · · · · · · · · · · AG S-7.69 Strong ind. + Cyl · · · · 430,S-320
$\leq_p^m$ SWal · · · Eq (10.62) Reduction · · · 593
≤p SWal · · · · · · Eq (10.64) Reduction · · · · · · · · · · · · · · · · · · 593
≤p SWal · · · · · · Eq (10.66) Reduction · · · · · · · · · · · · · · · · · · 593
Binomial coefficient · · · AG 6.17 Recursion · · · 319
exactly k - $\binom{n}{k}$ · · · 319   · · · AG 6.22 Memoization · · · 321
· · · · · · · · · · · · · · · · · · AG 6.23 Memoization II · · · · · · · · · · · · · 322
· · · · · · · · · · · · · · · · · · AG 6.24 2D Str. ind. · · · · · · · · · · · · · · · · 322
☺ · · · AG 7.43 Strong ind. + Cyl · · · 408
≤p NPP · · · · · · · · AG 10.7 Reduction · · · · · · · · · · · · · · · · · · 566
$\leq_p^m$ FAC · · · Eq (10.20) m-Reduction · · · 584
Summation
$\sum_{i=1}^{n} f(i)$ · · · 9   $\sum_{i=1}^{n} f(i)$ · · · AG 2.2 Inductive prog. · · · 42
· · · · · · · · · · · · · · · · ·Eq (2.14) Recursion · · · · · · · · · · · · · · · · · · · · 42
Pyramid number · · · · · · · 11 · · · · · · · · · · · · · · · · · · · Tm 1.5 Closed form · · · · · · · · · · · · · · · · · · 11
(Sum of square numbers) · · · Eq (S-2.2) Recursion · · · 82,S-25
$\sum_{i=1}^{n} i^2$ · · · AG S-2.5 Inductive prog. · · · 82,S-25
≤p BNC · · · · · Eq (10.107) Reduction · · · · · · · · · · · · · · · · · · 622
$\leq_p^m$ BNC · · · Eq (10.108) Reduction · · · 622
Square number · · · · · · · · · 10 · · · · · · · · · · · · · · · · · · · Tm 1.4 Closed form · · · · · · · · · · · · · · · · · · 11
(Sum of odd numbers) · · · · · · · · · · · · · · · · ·Eq (2.44) Recursion · · · · · · · · · · · · · · · · · · · · 81
$\sum_{i=1}^{n}(2i-1)$ · · · AG S-2.3 Inductive prog. · · · 81,S-21
≤p BNC · · · · Eq (S-10.68) Reduction · · · · · · · · · · · · 622,S-442
≤p TRN · · · · Eq (S-10.66) Reduction · · · · · · · · · · · · 622,S-442
≤p SEN · · · · ·Eq (S-10.73) Reduction · · · · · · · · · · · · 622,S-442
Sum of cubic numbers · · · · · · · · · · · · · · · · · · · · ·Eq (2.45) Closed form · · · · · · · · · · · · · · · · · · 83
· · · · · · · · · · · · · · · · · · · · 82,S-26 · · · · · · · · · · · · · · · · Eq (S-2.3) Recursion · · · · · · · · · · · · · · · 82,S-26
$\sum_{i=1}^{n} i^3$ · · · AG S-2.6 Inductive prog. · · · 82,S-26
≤p BNC · · · · · Eq (10.110) Reduction · · · · · · · · · · · · · · · · · · 623
≤p TRN · · · · Eq (S-10.64) Reduction · · · · · · · · · · · · 622,S-441
Sum of even numbers · · · · · · · · · · · · · · · · · · · · · Eq (S-1.1) Closed form · · · · · · · · · · · · · · 28,S-7
· · · · · · · · · · · · · · · · · · · · · 28,S-7 · · · · · · · · · · · · · · · · Eq (S-2.1) Recursion · · · · · · · · · · · · · · · 82,S-24
$\sum_{i=1}^{n}(2i)$ · · · AG S-2.4 Inductive prog. · · · 82,S-25
≤p BNC · · · · Eq (S-10.71) Reduction · · · · · · · · · · · · 622,S-442
≤p TRN · · · · Eq (S-10.69) Reduction · · · · · · · · · · · · 622,S-442
≤p SQN · · · · Eq (S-10.72) Reduction · · · · · · · · · · · · 622,S-442
Sum of floor of log · · · · · 83 · · · · · · · · · · · · · · · · ·Eq (2.46) Recursion · · · · · · · · · · · · · · · · · · · · 84
· · · · · · · · · · · · · · · · ·Eq (2.47) Closed form · · · · · · · · · · · · · · · · · · 84
$\sum_{i=1}^{n}\lfloor\log i\rfloor$ · · · AG S-2.9 Inductive prog. · · · 83,S-32
Sum of product of consecu- · · · · · · · · · · · · · · · · · ·Tm 1.10 Closed form · · · · · · · · · · · · · · · · · · 15
tive k #s (SPCk ) · · · · · · 14 · · · · · · · · · · · · · · ·Eq (S-2.21) Recursion · · · · · · · · · · · · · · · 84,S-32
· · · · · · · · · · · · · · · · AG S-2.10 Inductive prog. · · · · · · · · · 84,S-32
Sum of sum of consecutive · · · · · · · · · · · · · · · · Eq (S-1.2) Closed form · · · · · · · · · · · · · 29,S-13
k #s (SSCk ) · · · · · · 29,S-12 · · · · · · · · · · · · · · ·Eq (S-2.22) Recursion · · · · · · · · · · · · · · · 84,S-33
· · · · · · · · · · · · · · · · AG S-2.11 Inductive prog. · · · · · · · · · 84,S-33
Sum of tetrahedral numbers   $\sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{k=1}^{j} k$ · · · AG 1.15 Inductive prog. · · · 29
(STH) · · · 29,S-13   $\sum_{i=1}^{n}\sum_{j=1}^{i}\mathrm{TRN}(j)$ · · · AG 1.16 Inductive prog. · · · 29
$\sum_{i=1}^{n}\mathrm{THN}(i)$ · · · AG 1.17 Inductive prog. · · · 29
· · · · · · · · · · · · · · · · Eq (S-1.3) Closed form · · · · · · · · · · · · · 29,S-13
· · · · · · · · · · · · · · · · Eq (S-2.6) Recursion · · · · · · · · · · · · · · · 83,S-28
· · · · · · · · · · · · · · · · Eq (S-2.7) Recursion · · · · · · · · · · · · · · · 83,S-28
· · · · · · · · · · · · · · · · Eq (S-2.8) Recursion · · · · · · · · · · · · · · · 83,S-28
≤p BNC · · · · · Eq (10.109) Reduction · · · · · · · · · · · · · · · · · · 623
Tetrahedral number · · · · 11 · · · · · · · · · · · · · · · · · · · Tm 1.6 Closed form · · · · · · · · · · · · · · · · · · 12
(Sum of triangular #s)   $\sum_{i=1}^{n}\sum_{j=1}^{i} j$ · · · AG 1.7 Inductive prog. · · · 19
$\sum_{i=1}^{n}\mathrm{TRN}(i)$ · · · AG 1.8 Inductive prog. · · · 19
· · · · · · · · · · · · · · · · Eq (S-2.4) Recursion · · · · · · · · · · · · · · · 83,S-27
· · · · · · · · · · · · · · · · Eq (S-2.5) Recursion · · · · · · · · · · · · · · · 83,S-27
≤p BNC · · · Eq (10.106) Reduction · · · 622
Triangular number · · · 9   $\sum_{i=1}^{n} i$ · · · AG 1.5 Inductive prog. · · · 10
· · · · · · · · · · · · · · · · · · · AG 1.6 Closed form · · · · · · · · · · · · · · · · · · 10
· · · · · · · · · · · · · · · · · · · AG 2.1 Recursion · · · · · · · · · · · · · · · · · · · · 36
≤p BNC · · · · · Eq (10.105) Reduction · · · · · · · · · · · · · · · · · · 622
≤p SCB · · · · ·Eq (S-10.65) Reduction · · · · · · · · · · · · 622,S-441
≤p SEN · · · · ·Eq (S-10.70) Reduction · · · · · · · · · · · · 622,S-442
≤p SQN · · · · Eq (S-10.67) Reduction · · · · · · · · · · · · 622,S-442
See under Perfect a-ary tree problems for other summation-related problems.
Surjective multiset coefficient · · · Eq (6.32) Recursion · · · 351
· · · · · · · · · · · · · · · · · · · · · · · · · 351 · · · · · · · · · · · · · · · · AG S-6.37 Memoization · · · · · · · · · · 351,S-248
· · · · · · · · · · · · · · · · AG S-6.38 2D Str. ind. · · · · · · · · · · 351,S-248
· · · · · · · · · · · · · · · · AG S-7.66 Strong ind. + Cyl · · · · 430,S-319
· · · · · · · · · · · · · · · · AG S-7.67 Strong ind. + Cyl · · · · 430,S-319
≤p MSC · · · · · · Eq (10.31) Reduction · · · · · · · · · · · · · · · · · · 586
≤p BNC · · · · · · Eq (10.32) Reduction · · · · · · · · · · · · · · · · · · 586
$\leq_p^m$ FAC · · · Eq (10.33) m-Reduction · · · 586
≤p NPP · · · · · AG S-10.49 Reduction · · · · · · · · · · · · 621,S-438
Surjective sequence number ≤p SNS · · · · · · · Eq (10.34) Reduction · · · · · · · · · · · · · · · · · · 587
S̃(n, k) · · · · · · · · · · · · · · · · · · 587
Tautology logic related problems
CNF (TCN) · · · · · · · · · · 683 ≤p FDN · · · · · · Eq (11.82) Reduction · · · · · · · · · · · · · · · · · · 683
DNF (TDN) · · · · · · · · · · 683 FCN ≤p · · · · · · Eq (11.85) co-NP-complete · · · · · · · · · · · · · 683
Tautology (TAU) · · · 681   FAL ≤p · · · Eq (11.70) co-NP-complete · · · 681
LEQ ≤p · · · · · · Eq (11.73) co-NP-complete · · · · · · · · · · · · · 682
TDN ≤p · · · · · · Eq (11.89) co-NP-complete · · · · · · · · · · · · · 684
Ternary tree related problems
2-3 property · · · · · · · · · · 455 · · · · · · · · · · · · · · · · · Eq S-8.3 DFT · · · · · · · · · · · · · · · · · · 493,S-348
Checking ternary search · · · · · · · · · · · · · · · · · · Eq (8.9) Definition · · · · · · · · · · · · · · · · · · · 454
tree (TST) · · · · · · · · · · · · 454 · · · · · · · · · · · · · · · · · · AG 8.15 Recursion · · · · · · · · · · · · · · · · · · · 454
Height of a node · · · · · · 456 · · · · · · · · · · · · · · · · ·Eq (8.15) Recursion · · · · · · · · · · · · · · · · · · · 456
Maximum in TST · · · · · 453 · · · · · · · · · · · · · · · · · · Eq (8.8) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 453
Minimum in TST · · · · · 453 · · · · · · · · · · · · · · · · · · Eq (8.7) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 453
Same leaf level · · · · · · · · 455 · · · · · · · · · · · · · · · · · Eq S-8.2 DFT · · · · · · · · · · · · · · · · · · 493,S-347
Topological sorting · · · · · · 256 Kahn’s algo · · · · · AG 5.26 Greedy Algo. · · · · · · · · · · · · · · · 257
Tarjan’s algo · · · · AG 7.20 Recursion · · · · · · · · · · · · · · · · · · · 382
· · · · · · · · · · · · · · · · · · AG 7.21 Stack · · · · · · · · · · · · · · · · · · · · · · · 383
· · · · · · · · · · · · · · · · · · AG 7.28 Queue · · · · · · · · · · · · · · · · · · · · · · 389
Traveling salesman problem
maximization (TSPx) · · · · nearest-nei. · · · AG S-4.32 Greedy Aprx. · · · · · · · · · 210,S-128
· · · · · · · · · · · · · · · · · 210,S-128 merge · · · · · · · · · AG S-4.33 Greedy Aprx. · · · · · · · · · 210,S-128
nearest-nei.2 · · AG S-4.34 Greedy Aprx. · · · · · · · · · 210,S-129
HMP ≤p · · · Eq (S-11.100) NP-hard · · · · · · · · · · · · · · 693,S-510
TSP ≤p · · · Eq (S-11.102) NP-hard · · · · · · · · · · · · · · 693,S-511
⟨decision ver.⟩ · · · 693,S-510   HMP ≤p · · · Eq (S-11.101) NP-complete · · · 693,S-511
TSP ≤p · · · Eq (S-11.104) NP-complete · · · · · · · · · · 693,S-511
⟨metric TSPx⟩ · · · 723,S-530   ≤p MxST · · · AG S-12.9 2-approximate · · · 723,S-530
minimization (TSP) · · · 190 nearest-neighbor AG 4.21 Greedy Aprx. · · · · · · · · · · · · · · · 191
merge · · · · · · · · · · · AG 4.22 Greedy Aprx. · · · · · · · · · · · · · · · 191
combine · · · · · · · AG S-4.10 Greedy Aprx. · · · · · · · · · · 203,S-99
nearest-nei.2 · · AG S-4.31 Greedy Aprx. · · · · · · · · · 210,S-127
HMP ≤p · · · · · · Eq (11.65) NP-hard · · · · · · · · · · · · · · · · · · · · 679
TSPx ≤p · · Eq (S-11.103) NP-hard · · · · · · · · · · · · · · 693,S-511
⟨decision ver.⟩ · · · 679   HMP ≤p · · · Eq (11.66) NP-complete · · · 679
TSPx ≤p · · Eq (S-11.105) NP-complete · · · · · · · · · · 693,S-511
⟨metric TSP⟩ · · · 716   ≤p MST · · · AG 12.13 2-approximate · · · 717
Vertex cover · · · · · · · · · · · · 180 · · · · · · · · · · · · · · · · · · AG 4.17 Greedy Aprx. · · · · · · · · · · · · · · · 181
IDS ≤p · · · · · · · Eq (11.56) NP-hard · · · · · · · · · · · · · · · · · · · · 672
CLQ ≤p · · · · · · Eq (11.58) NP-hard · · · · · · · · · · · · · · · · · · · · 674
SCN ≤p · · · · Eq (S-11.98) NP-hard · · · · · · · · · · · · · · 692,S-509
· · · · · · · · · · · · · · · · ·AG 12.12 2-approximate · · · · · · · · · · · · · · 716
⟨decision ver.⟩ · · · 674   IDSdv ≤p · · · Eq (11.60) NP-complete · · · 674
CLQdv ≤p · · · · Eq (11.62) NP-complete · · · · · · · · · · · · · · · · 674
SCNdv ≤p · · Eq (S-11.99) NP-complete · · · · · · · · · · 692,S-510
Vertex ordering (Graph)
BFS order · · · · · · · · · · · · 387 BFS · · · · · · · · · · · · ·AG 7.26 Queue · · · · · · · · · · · · · · · · · · · · · · 387
check BFS order · · · · · · 387 ≤p SPL · · · · · · · · · · Pr 7.10 Reduction · · · · · · · · · · · · · · · · · · 387
check tBFS order · · · · · 391 ≤p LPL · · · · · · · · · · Pr 7.12 Reduction · · · · · · · · · · · · · · · · · · 391
≤p BFS · · · · · · · · Eq (7.12) Reduction · · · · · · · · · · · · · · · · · · 391
DFS order · · · · · · · · · · · · 372 DFS · · · · · · · · · · · · AG 7.14 Recursion · · · · · · · · · · · · · · · · · · · 372
DFS · · · · · · · · · · · · AG 7.15 Stack · · · · · · · · · · · · · · · · · · · · · · · 374
Stack push order · · · · · · 374 DFS · · · · · · · · · · · · AG 7.16 Stack · · · · · · · · · · · · · · · · · · · · · · · 374
Topological order - see under Topological sorting
Volume of Frustum · · · 5   Moscow P.14 · · · AG 1.2 Algebraic formula · · · 6
$\leq_p^m$ VPR · · · Eq (10.19) Reduction · · · 583
Volume of Pyramid · · · 583   · · · Eq (10.18) Algebraic formula · · · 583
Winning ways · · · 239   · · · Eq (5.26) Recursion · · · 239
· · · · · · · · · · · · · · · · · · AG 5.14 Str. Ind. Prog. · · · · · · · · · · · · · 240
· · · · · · · · · · · · · · · · · · AG 5.15 memoization · · · · · · · · · · · · · · · · 241
· · · · · · · · · · · · · · · · AG S-7.42 Str. ind. + Cir. · · · · · · 425,S-298
≤p NPP · · · · · · · · AG 10.8 Reduction · · · · · · · · · · · · · · · · · · 567
Word search puzzle · · · 3   · · · · · · 32
List of Abbreviations
Abbrev. Meaning
ADT Abstract data type · · · 355
APP Alternating permutation problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 65
Application software · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 162
ASP Activity selection problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 170
AVL Adelson-Velsky and Landis tree data structure · · · · · · · · · · · · · · · · · · · · · 447
BFS Breadth first search · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 387
BFSt Topological order breadth first search · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 391
BIP Bounded integer partition number, pb (n, k) exactly k parts · · · · · · · · · 338
BIPam Bounded integer partition number, Ib (n, k) at most k parts · · ·353,S-256
BLN nth Bell number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 410
BNC Binomial coefficient, C(n, k) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 319
BPP Bin packing or bounded partition problem · · · · · · · · · · · · · · · · · · · · · · · · · 175
BST Binary search tree · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 441
C3S 3 basic gate only circuit satisfiability problem · · · · · · · · · · · · · · · · · · · · · · 687
CAT nth Catalan number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 288
CBE Checking back edge problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 379
CCM Connected component membership · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 379
CCS Combinational Circuit Satisfiability · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 642
CEU Element uniqueness problem (checking element uniqueness) · · · · · · · · · 56
CLQ Maximum clique problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 668
CLX Colexicographical order · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 89,S-47
CNal Number of at least k cycles. · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 431,S-324
CNam Number of at most k cycles. · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 431,S-323
CNF Conjunctive normal form of a logical statement · · · · · · · · · · · · · · · · · · · · 649
CPM Critical path method · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·268
CPN Checking primality of n · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 30,S-15
CPP Critical path problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 267
CVH Convex hull problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 580
D&C Divide and conquer · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 126
DAG Directed acyclic graph · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 254
DFE Double factorial of the nth even number (2n)!! · · · · · · · · · · · · · · · · · · · · · · 85
DFO Double factorial of the nth odd number (2n − 1)!! · · · · · · · · · · · · · · · · · · · 85
DFS Depth first search · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 372
DFT Depth first traversal · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 366
DIV division n/d arithmetic operation · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 77
DNF Disjunctive normal form of a logical statement · · · · · · · · · · · · · · · · · · · · · 682
DPD Dot product · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 47
DUP down-up alternating permutation problem · · · · · · · · · · · · · · · · · · · · · · · · · · · 86
EUN Eulerian number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 348
EUS Eulerian number of the second kind · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 349
EZN Euler zigzag number (André’s problem) · · · 237
FAC Factorial n! problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 50
FAL Fallacy problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 680
FCN Fallacy of CNF problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·683
FDN Fallacy of DNF problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 683
FIB nth Fibonacci number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 246
FIFO first-in, first-out principle · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 355
FKP Fractional knapsack problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 165
FKP-min Fractional knapsack minimization problem · · · · · · · · · · · · · · · · · · · 204,S-102
FMa Set of numbers that pass the Fermat (base a) primality test · · · · · · · · 712
FPN Forward Polish notation · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·365
FRC nth Fibonacci number recursive calls · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 247
FSA Finite state automata · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 362
FSM Finite state machine · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 362
FSP Frobenius postage stamp problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 226
FTN The number of nodes in the Fibonacci tree of height h · · · · · · · · · · · · · 140
GBW Greater between elements sequence · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·121
GCC Find all connected components of a graph · · · · · · · · · · · · · · · · · · · · · · · · · · 376
GCD Greatest common divisor · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 7
GCN Graph connectivity problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 375
HMC Hamiltonian cycle problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 677
HMP Hamiltonian path problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 676
IDS Independent set problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 670
InDel String edit distance with insertion and deletion only · · · · · · · · · · · · · · · · 313
IPal Integer partition number, I(n, k) at least k parts · · · · · · · · · · · · · 433,S-332
IPam Integer partition number, I(n, k) at most k parts · · · · · · · · · · · · · · · · · · · 331
IPE integer partition number, p(n, k) exactly k parts · · · · · · · · · · · · · · · · · · · 326
IPN Integer partition number in any part · · · · · · · · · · · · · · · · · · · · · · · · · 433,S-333
JCL Jacobsthal-Lucas number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 282
JCN Jacobsthal number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 281
JSD Job scheduling with deadline · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 177
KB2 nth Kibonacci number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 248
KBF nth full Kibonacci number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 285
KLG k-th largest order statistics problem (see KOS) · · · · · · · · · · · · · · · · · · · · · · 59
KOS k-th order statistics problem (includes KLG and KSM) · · · · · · · · · · · · · · 59
KPN k Permutation of n, $P(n,k) = n^{\underline{k}}$, the falling factorial power · · · 51
KSM k-th smallest order statistics problem · · · · · · · · · · · · · · · · · · · · · · · · · · · 85,S-37
LBW Less between elements sequence · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 88,S-45
LCM Least common multiple · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·4
LCS Longest Common Subsequence · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 311
LDCS Longest decreasing consecutive subsequence problem · · · · · · · · · · 620,S-75
LDS Longest decreasing subsequence problem · · · · · · · · · · · · · · · · · · · · · 619,S-424
LDUC Longest alternating down-up consecutive subsequence problem · · · 620,S-432
LDUS Longest alternating down-up subsequence problem · · · · · · · · · · · 619,S-427
LEQ Logical equivalency · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 681
LEV Levenshtein string edit distance · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 314
LEX Lexicographical order · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 79
LHmax (height biased) leftist max heap · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 534
LHmin (height biased) leftist min heap · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 534
LICS Longest increasing consecutive subsequence problem · · · · · · · · · · · 620,S-74
LIFO last-in, first-out principle · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 355
LIS Longest increasing subsequence problem · · · · · · · · · · · · · · · · · · · · · · · · · · · ·572
LNE solving a linear equation problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 556
LPC Longest path cost problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 290,S-203
LPCS Longest palindromic consecutive subsequence problem · · · · · · · · · · · · · · 602
LPL Longest path length problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 290,S-202
LPS Longest palindromic subsequence problem · · · · · · · · · · · · · · · · · · · · · · · · · · 316
LSC Lucas Sequence Coefficient L(n, k) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 323
LSC2 Lucas Sequence II Coefficient L2 (n, k) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 352
LUC nth Lucas number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 278
LUDC Longest alternating up-down consecutive subsequence problem · · · 620,S-431
LUDS Longest alternating up-down subsequence problem · · · · · · · · · · · · · · · · · 600
LUS Lucas sequence problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 250
LUS2 Lucas sequence II problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 250
MAX Find max problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 85,S-36
MCSP Maximum consecutive subsequence product problem · · · · · · · · · · · ·31,S-16
MCSPp Maximum consecutive subsequence product of positive numbers 147,S-70
MCSS Maximum consecutive subsequence sum problem · · · · · · · · · · · · · · · · · · · · 22
MDN Find median problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 614
MIN Find min problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 58
minCSP Minimum consecutive subsequence product problem · · · 148,S-72
minCSPp Minimum consecutive subsequence product of positive numbers · · · 148,S-72
minCSS Minimum consecutive subsequence sum problem · · · · · · · · · · · · · · · · 30,S-15
minPFP Minimum prefix product problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · 618,S-422
minPFS Minimum prefix sum problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 618,S-420
MNK Minimum number of internal nodes in a k-ary tree · · · · · · · · · · · · · · · · · 197
MNP Minimum number of processors · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 172
MOD Modulo n % d arithmetic operation · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 77
MPFP Maximum prefix product problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · 618,S-422
MPFS Maximum prefix sum problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·598
MSC Multiset coefficient · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 350
MSL Mersenne-Lucas number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 283
MSN Mersenne number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 282
MPS Multiprocessor scheduling problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 174
MSrT Minimum spanning rooted tree problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · 264
MST Minimum spanning tree problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 184
MXM Matrix multiplication · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 47
MXP Matrix power · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 151,S-84
MxSrT Maximum spanning rooted tree problem · · · · · · · · · · · · · · · · · · · · · 291,S-204
MxST Maximum spanning tree problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · 209,S-122
NA2al Number of GBW sequences with at least k ascents. · · · · · · · · · · 432,S-330
NA2am Number of GBW sequences with at most k ascents. · · · · · · · · · · 432,S-329
NAal Number of permutations with at least k ascents. · · · · · · · · · · · · · 432,S-327
NAam Number of permutations with at most k ascents. · · · · · · · · · · · · · 432,S-326
NAGS NAND gate only circuit satisfiability problem · · · · · · · · · · · · · · · · · · · · · · 655
NAS Number of ascents problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 55
NDS Number of descents problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·87,S-43
NFA Non-deterministic finite state automaton · · · · · · · · · · · · · · · · · · · · · · · · · · · 362
NP Non-deterministic in polynomial time · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 695
Number of paths · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 259
NPC a set of NP-complete problems · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 695
NPF Number of prime factors · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 218
NOGS NOR gate only circuit satisfiability problem · · · · · · · · · · · · · · · · · · · · · · · · 687
NPK Number of paths of length k problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 612
NPL Null path length · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 532
NPN Normal Polish notation · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 365
NPP Number of paths problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 258
P Deterministic in polynomial time · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 695
PFP Prefix product problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·84,S-33
PFP2 2 dimensional Prefix product problem · · · · · · · · · · · · · · · · · · · · · · · · 340,S-208
PFS Prefix sum problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 53
PFS2 2 dimensional Prefix sum problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 294
PLL Pell-Lucas number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 280
PLN Pell number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·279
PN Polish notation · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 365
POW power $a^n$ or sequencing number · · · 49
PRN nth Pyramid number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 11
PSEmax Postage stamp equality maximization problem · · · · · · · · · · · · · · · · 201,S-93
PSEmin Postage stamp equality minimization problem · · · · · · · · · · · · · · · · · · · · · · 159
PSmax Postage stamp maximization problem · · · · · · · · · · · · · · · · · · · · · · · · · 201,S-94
PSmin Postage stamp minimization problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 161
QRE solving a quadratic equation problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·556
RCM Rod cutting maximization problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 168
RCmin Rod cutting minimization problem · · · · · · · · · · · · · · · · · · · · · · · · · · · 206,S-109
RFP Rising factorial power · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 89,S-49
RPN Reverse Polish notation · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 365
RPP Random permutation problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·67
RSL Right spine length of a binary tree · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 534
RTF Root Finding problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 107
SAT Satisfiability problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 637
SC-3 Satisfiability of CNF-3 problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 650
SCB sum of first n cubic numbers problem · · · 82,S-26
SCN Satisfiability of CNF problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 652
SCV Set Cover problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 208,S-116
SDN Satisfiability of DNF problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 683
SEN sum of first n even numbers problem · · · 28,S-7
SFL Sum of first n floor of log problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·83
SKSP select k subset positive product maximization problem · · · · · · · · 200,S-90
SKSPmin select k subset positive product minimization problem · · · · · · · · · 200,S-90
SKSS select k subset sum maximization problem · · · · · · · · · · · · · · · · · · · · · · · · · 157
SKSSmin select k subset sum minimization problem · · · · · · · · · · · · · · · · · · · · · · · · · · 200
SMSC Surjective Multiset coefficient · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 351
SNF Stirling number of the first kind · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 348
SNS Stirling number of the second kind, set partition number · · · · · 347,S-235
SPal Set partition number with at least k partitions · · · 413
SPam Set partition number with at most k partitions · · · 412
SPC Shortest path cost problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·262
SPCk Sum of the first n product of consecutive k numbers · · · · · · · · · · · · · · · · · 14
SPEp Subset product equality problem of positive numbers· · · · · · · · · ·344,S-225
SPL Shortest path length problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 259
SPMp Subset product maximization problem of positive numbers · · · · 345,S-226
SPminp Subset product minimization problem of positive numbers · · · · 345,S-228
SQN nth square number problem (sum of first n odd numbers) · · · 10
SRNk Floor of square root of n · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 78
SSCk Sum of the first n sum of consecutive k numbers · · · · · · · · · · · · · · · 29,S-12
SSE Subset sum equality problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 305
SSM Subset sum maximization problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · 202,S-97
SSmin Subset sum minimization problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 202,S-96
SSN Surjective sequence number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 587
STH Sum of first n tetrahedral numbers · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 29,S-13
STN Square triangular number · · · 284
STNr square root of STN, $\sqrt{\mathrm{STN}}$ · · · 285
STP Set Partition problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 660
SWal Ways of selecting at least k elements without repetition · · · · · · 430,S-321
SWam Ways of selecting at most k elements without repetition · · · · · · 430,S-319
TAU Tautology problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 681
TCN Tautology of CNF problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 683
TDN Tautology of DNF problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·683
THN nth tetrahedral number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 11
TMP Selecting an element in top m percent problem · · · · · · · · · · · · · · · · · · · · · 710
TPS Topological sorting · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 256
TRN nth triangular number problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 9
TSP Traveling salesman problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 190
TST Ternary search tree · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 454
UDP up-down alternating permutation problem · · · · · · · · · · · · · · · · · · · · · · · · · · · 65
UKE Unbounded knapsack equality problem · · · · · · · · · · · · · · · · · · · · · · · 272,S-139
UKEmin Unbounded knapsack equality minimization problem · · · · · · · · · 205,S-108
UKP Unbounded knapsack problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·167
UKPmin Unbounded knapsack minimization problem · · · · · · · · · · · · · · · · · · 204,S-105
USPE Unbounded subset product of positive number equality problem · · · · 307
USPM Unbounded subset product of positive num. maximization · · · 690,S-499
USPmin Unbounded subset product of positive num. minimization · · · · 690,S-500
USSE Unbounded subset sum equality problem · · · · · · · · · · · · · · · · · · · · · · · · · · · 225
USSM Unbounded subset sum maximization problem · · · · · · · · · · · · · · · 205,S-106
USSmin Unbounded subset sum minimization problem · · · · · · · · · · · · · · · · · · · · · · 227
UUD up-up-down alternating permutation problem · · · · · · · · · · · · · · · · · · · · · · · 87
VCP Vertex cover problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 180
VFR Volume of a Frustum problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 5
VPR Volume of a Pyramid problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·583
wASP Weighted activity selection problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 231
WSP Word search puzzle problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 3
WWP Winning way problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 239
ZOK 01-knapsack problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 163
ZOK-min 01-knapsack minimization problem · · · · · · · · · · · · · · · · · · · · · · · · · · · 203,S-100
ZOKE 01-knapsack equality problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 206,S-110
ZOKE-min 01-knapsack equality minimization problem · · · · · · · · · · · · · · · · · · 207,S-112
∆-TSP metric Traveling salesman problem · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 716
List of Symbols and Notations
Symbol Meaning
AP the set of all permutations · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 67
e the base of the natural logarithm, 2.7182818284590452353602874713527 . . . · · · 25
JL the Jacobsthal-Lucas number · · · 282
ML the Mersenne-Lucas number · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·284
N The set of natural numbers, {0, 1, 2, · · · } · · · 12
O Big-oh · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 17
o Little-oh · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 17
R The set of real numbers · · · 47
R+ The set of positive real numbers · · · 5
Z The set of integers, {· · · , −2, −1, 0, 1, 2, · · · } · · · 21
Z+ The set of positive integers, {1, 2, 3, · · · } · · · 4
∀ The universal quantifier, “for all” · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·2
∃ The existential quantifier, “there exists” · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 79
∃! The unique existential quantifier, “there exists only one” · · · · · · · · · · · 176
ε Empty string · · · 70
Θ Theta · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 17
Σ Summation · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·9
an alphabet or symbol set · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 70
Π Product · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 14
ϕ golden ratio · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 597
Ω Big-omega · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 17
ω Little-omega · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 17
∞ infinity · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 18
∧ Logical conjunction operator ‘and’ · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 364
∨ Logical disjunction operator ‘or’ · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 364
¬ Logical negation operator ‘negate’ · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 364
→ Logical implication operator · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 365
↔ Logical biconditional operator ‘if and only if’ · · · · · · · · · · · · · · · · · · · · · · · 365
↑ NAND operator · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 365
↓ NOR operator · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 365
⊙ XNOR exclusive nor · · · 365
⊕ XOR exclusive or · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 365
∴ therefore · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 10
∼ range (pseudo code) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 19
similar (geometry) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 6
≡ Logical equivalence · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·365
congruent (number theory) · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 711
= assignment · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 4
⊆′ Sub-sequence · · · 310
⌢ String concatenation operator · · · 70
! n! Factorial · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 50
% Modulo operator ‘mod’ · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 77
percent · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 710
| ‘{x ∈ S | P (x)}’ denotes the subset of S such that P (x) is satisfied. · · 2
‘|S|’ denotes the cardinality of a set S or the length of a string S · · · · · 2
‘|x|’ denotes the absolute value of x · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · 16
‘a | b’ denotes ‘a divides b.’ · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · ·7
∤ ‘a ∤ b’ denotes ‘a does not divide b.’ · · · 7
⌊ ⌋ Floor operator, e.g., ⌊log i⌋
⌈ ⌉ Ceiling operator
⟨ ⟩ Angle brackets, e.g., ⟨decision ver.⟩
{ } Set braces