ADA (BCS401) Notes


Module 1: Introduction

What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate input in a finite amount of time.

Figure 1.1: The notion of the algorithm.
The reference to "instructions" in the definition implies that there is something or someone capable of understanding and following the instructions given. We call this a computer, keeping in mind that before the electronic computer was invented, the word "computer" meant a human being involved in performing numerical calculations. Computers are those ubiquitous electronic devices that have become indispensable in almost everything we do.
✓ The non-ambiguity requirement for each step of an algorithm cannot be compromised.
✓ The range of inputs for which an algorithm works has to be specified carefully.
✓ The same algorithm can be represented in several different ways.
✓ There may exist several algorithms for solving the same problem; they can be based on very different ideas and can solve the problem with dramatically different speeds.
Why do we need to study algorithms?
✓ If you are going to be a computer professional, there are both practical and theoretical
reasons to study algorithms.
✓ From a practical standpoint, you have to know a standard set of important algorithms from
different areas of computing, and you should be able to design new algorithms and
analyze their efficiency.
✓ From the theoretical standpoint, the study of algorithms, sometimes called algorithmics, has
come to be recognized as the cornerstone of computer science.
How to Write an Algorithm?
There is no hard and fast rule for writing an algorithm; there are, however, a few points you have to consider when you write one (a short example follows the list below).
✓ Describe the problem clearly. Be simple and clear with the description of the problem.
✓ Determine the input and output of the algorithm. Do not let the algorithm run infinitely.
✓ Briefly describe the start and end points of the algorithm.
✓ Describe the steps needed to achieve the target.
✓ Review the algorithm.
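
As a minimal illustration of these points, consider the task of finding the largest element in a list (an illustrative problem chosen here; it is not part of these notes):

# Problem: find the largest element in a non-empty list of numbers.
# Input: a non-empty list a of numbers. Output: the largest element of a.
# Start: examine the first element. End: all elements examined (finite time).
def find_max(a):
    if not a:                       # reject illegitimate input explicitly
        raise ValueError("input list must be non-empty")
    largest = a[0]                  # start point: the first element
    for x in a[1:]:                 # exactly n - 1 comparisons, so it terminates
        if x > largest:
            largest = x
    return largest

print(find_max([3, 9, 2, 7]))       # sample input/output: prints 9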
Characteristics of Algorithms
✓ Algorithms must have an end point. They should not run for infinity; there must be a finite
amount of time for which the algorithm runs.
✓ You must give an algorithm an individual name.
✓ The input and output should be well defined. You should give a sample input (other than 0)
and the corresponding output for better understanding.
✓ There must be defined instructions for solving the problem. The algorithm should not
have an unnecessary number of instructions.
✓ You must give clear and concise instructions without making them difficult to
understand.
✓ The instructions must be independent of the programming language. The algorithm should
not be tied to any single language.
An algorithm helps in faster processing, so it must be adequate and correct, which helps in achieving greater efficiency.
Fundamentals of Algorithmic Problem Solving
Algorithms can be considered procedural solutions to problems. These solutions are not answers but specific instructions for getting answers. It is this emphasis on precisely defined constructive procedures that makes computer science distinct from other disciplines. In particular, this distinguishes it from theoretical mathematics, whose practitioners are typically satisfied with merely proving the existence of a solution to a problem and, possibly, investigating the solution's properties.
We now list and briefly discuss a sequence of steps one typically goes through in designing and
analyzing an algorithm (Figure 1.2).

Figure 1.2: Algorithm design and analysis process. Understand the problem → decide on computational means, exact vs. approximate solving, and the algorithm design technique → design an algorithm → prove correctness → analyze the algorithm → code the algorithm.
Understanding the Problem
From a practical perspective, the first thing you need to do before designing an algorithm is to
understand completely the problem given. Read the problem’s description carefully and ask
questions if you have any doubts about the problem, do a few small examples by hand, think
about special cases, and ask questions again if needed.

There are a few types of problems that arise in computing applications quite often. We review
them in the next section. If the problem in question is one of them, you might be able to use a
known algorithm for solving it. Of course, it helps to understand how such an algorithm works
and to know its strengths and weaknesses, especially if you have to choose among several
available algorithms. But often you will not find a readily available algorithm and will have to
design your own. The sequence of steps outlined in this section should help you in this exciting
but not always easy task.

An input to an algorithm specifies an instance of the problem the algorithm solves. It is very
important to specify exactly the set of instances the algorithm needs to handle. (As an example,
recall the variations in the set of instances for the three greatest common divisor algorithms
discussed in the previous section.) If you fail to do this, your algorithm may work correctly for a
majority of inputs but crash on some “boundary” value. Remember that a correct algorithm is not
one that works most of the time, but one that works correctly for all legitimate inputs.
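
As an illustration (a hedged sketch, not from these notes): the consecutive-integer-checking algorithm for gcd, mentioned later in this module, mishandles the boundary instance where one input is 0 unless the set of instances is specified carefully.

# Consecutive integer checking for gcd(m, n), sketched for illustration:
# start at t = min(m, n) and decrease t until it divides both numbers.
def gcd_consecutive(m, n):
    t = min(m, n)
    while t > 0:
        if m % t == 0 and n % t == 0:
            return t
        t -= 1
    return None                     # reached only when min(m, n) == 0

print(gcd_consecutive(60, 24))      # 12, as expected
print(gcd_consecutive(60, 0))       # None, although gcd(60, 0) = 60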
Ascertaining the Capabilities of the Computational Device

Once you completely understand a problem, you need to ascertain the capabilities of the computational device the algorithm is intended for. The vast majority of algorithms in use today are still destined to be programmed for a computer closely resembling the von Neumann machine, a computer architecture outlined by the prominent Hungarian-American mathematician John von Neumann (1903–1957), in collaboration with A. Burks and H. Goldstine, in 1946. The essence of this architecture is captured by the so-called random-access machine (RAM). Its central assumption is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called sequential algorithms.
Choosing between Exact and Approximate Problem Solving
The next principal decision is to choose between solving the problem exactly or solving it
approximately. In the former case, an algorithm is called an exact algorithm; in the latter case,
an algorithm is called an approximation algorithm. Why would one opt for an approximation
algorithm? First, there are important problems that simply cannot be solved exactly for most of
their instances; examples include extracting square roots, solving nonlinear equations, and
evaluating definite integrals. Second, available algorithms for solving a problem exactly can be
unacceptably slow because of the problem’s intrinsic complexity.
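
For instance, extracting a square root can only be done approximately for most inputs; here is a minimal sketch of one classical approximation algorithm, Newton's method (the function name and tolerance are illustrative choices, not from these notes):

# Newton's method for sqrt(a): repeat x = (x + a/x) / 2 until the
# error bound eps is met; the result approximates the exact root.
def approx_sqrt(a, eps=1e-10):
    x = a if a > 1 else 1.0         # any positive starting guess works
    while abs(x * x - a) > eps:
        x = (x + a / x) / 2
    return x

print(approx_sqrt(2.0))             # ~1.4142135623..., the exact value is irrational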
Algorithm Design Techniques
Now, with all the components of algorithmic problem solving in place, how do you design an
algorithm to solve a given problem? This is the main question this book seeks to answer by
teaching you several general design techniques.

What is an algorithm design technique?

An algorithm design technique (or "strategy" or "paradigm") is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

Check this book’s table of contents and you will see that a majority of its chapters are devoted to
individual design techniques. They distill a few key ideas that have proven to be useful in
designing algorithms. Learning these techniques is of utmost importance for the following
reasons.

First, they provide guidance for designing algorithms for new problems, i.e., problems for which
there is no known satisfactory algorithm. Therefore—to use the language of a famous proverb—
learning such techniques is akin to learning to fish as opposed to being given a fish caught by
somebody else. It is not true, of course, that each of these general techniques will be necessarily
applicable to every problem you may encounter. But taken together, they do constitute a
powerful collection of tools that you will find quite handy in your studies and work.

Second, algorithms are the cornerstone of computer science. Every science is interested in
classifying its principal subject, and computer science is no exception. Algorithm design
techniques make it possible to classify algorithms according to an underlying design idea;
therefore, they can serve as a natural way to both categorize and study algorithms.

Designing an Algorithm and Data Structures

While the algorithm design techniques do provide a powerful set of general approaches to
algorithmic problem solving, designing an algorithm for a particular problem may still be a
challenging task. Some design techniques can be simply inapplicable to the problem in question.
Sometimes, several techniques need to be combined, and there are algorithms that are hard to
pinpoint as applications of the known design techniques. Even when a particular design
technique is applicable, getting an algorithm often requires a nontrivial ingenuity on the part of
the algorithm designer.
Methods of Specifying an Algorithm

Once you have designed an algorithm, you need to specify it in some fashion. To give you an
example, Euclid’s algorithm is described in words (in a free and also a step-by-step form) and in
pseudocode. These are the two options that are most widely used nowadays for specifying
algorithms.

Using a natural language has an obvious appeal; however, the inherent ambiguity of any natural
language makes a succinct and clear description of algorithms surprisingly difficult.
Nevertheless, being able to do this is an important skill that you should strive to develop in the
process of learning algorithms.

Pseudocode is a mixture of a natural language and programming language-like constructs. Pseudocode is usually more precise than natural language, and its usage often yields more succinct algorithm descriptions.
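
For example, Euclid's algorithm (whose description appears later in this module) can be specified in a pseudocode-like form; the following Python rendering is a sketch based on that description:

# Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n),
# stopping when the second integer becomes 0.
def euclid_gcd(m, n):
    while n != 0:
        m, n = n, m % n             # the second integer gets smaller each iteration
    return m

print(euclid_gcd(60, 24))           # 12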

In the earlier days of computing, the dominant vehicle for specifying algorithms was a flowchart,
a method of expressing an algorithm by a collection of connected geometric shapes containing
descriptions of the algorithm’s steps. This representation technique has proved to be
inconvenient for all but very simple algorithms; nowadays, it can be found only in old algorithm
books.

The state of the art of computing has not yet reached a point where an algorithm's description, be it in a natural language or pseudocode, can be fed into an electronic computer directly. Instead, it needs to be converted into a computer program written in a particular computer language. We can look at such a program as yet another way of specifying the algorithm, although it is preferable to consider it the algorithm's implementation.

Proving an Algorithm’s Correctness

Once an algorithm has been specified, you have to prove its correctness. That is, you have to
prove that the algorithm yields a required result for every legitimate input in a finite amount of
time. For example, the correctness of Euclid’s algorithm for computing the greatest common
divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n), the simple
observation that the second integer gets smaller on every iteration of the algorithm, and the fact
that the algorithm stops when the second integer becomes 0.
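
For instance, tracing the algorithm on the illustrative instance m = 60, n = 24 (not an input from these notes):
gcd(60, 24) = gcd(24, 60 mod 24) = gcd(24, 12) = gcd(12, 24 mod 12) = gcd(12, 0) = 12.
The second integer decreases on every iteration (24 → 12 → 0), so the algorithm terminates with the correct answer.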

For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex. A
common technique for proving correctness is to use mathematical induction because an
algorithm’s iterations provide a natural sequence of steps needed for such proofs. It might be
worth mentioning that although tracing the algorithm’s performance for a few specific inputs can
be a very worthwhile activity, it cannot prove the algorithm’s correctness conclusively. But in
order to show that an algorithm is incorrect, you need just one instance of its input for which the
algorithm fails.

The notion of correctness for approximation algorithms is less straightforward than it is for exact
algorithms. For an approximation algorithm, we usually would like to be able to show that the
error produced by the algorithm does not exceed a predefined limit.

Analyzing an Algorithm

We usually want our algorithms to possess several qualities. After correctness, by far the most
important is efficiency. In fact, there are two kinds of algorithm efficiency: time efficiency,
indicating how fast the algorithm runs, and space efficiency, indicating how much extra memory
it uses.

Another desirable characteristic of an algorithm is simplicity. Unlike efficiency, which can be precisely defined and investigated with mathematical rigor, simplicity, like beauty, is to a considerable degree in the eye of the beholder. For example, most people would agree that Euclid's algorithm is simpler than the middle-school procedure for computing gcd(m, n), but it is not clear whether Euclid's algorithm is simpler than the consecutive integer checking algorithm.
Still, simplicity is an important algorithm characteristic to strive for. Why? Because simpler
algorithms are easier to understand and easier to program; consequently, the resulting programs
usually contain fewer bugs. There is also the undeniable aesthetic appeal of simplicity.
Sometimes simpler algorithms are also more efficient than more complicated alternatives. Unfortunately, this is not always true, and when it is not, a judicious compromise needs to be made.

Yet another desirable characteristic of an algorithm is generality. There are, in fact, two issues
here: generality of the problem the algorithm solves and the set of inputs it accepts. On the first
issue, note that it is sometimes easier to design an algorithm for a problem posed in more general
terms. Consider, for example, the problem of determining whether two integers are relatively
prime, i.e., whether their only common divisor is equal to 1. It is easier to design an algorithm
for a more general problem of computing the greatest common divisor of two integers and, to
solve the former problem, check whether the gcd is 1 or not. There are situations, however,
where designing a more general algorithm is unnecessary or difficult or even impossible. For
example, it is unnecessary to sort a list of n numbers to find its median, which is its ⌈n/2⌉th smallest element. To give another example, the standard formula for the roots of a quadratic equation cannot be generalized to handle polynomials of arbitrary degrees.
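
A sketch of the relatively-prime reduction described above, using Python's standard math.gcd (the helper name is an illustrative choice):

from math import gcd

# Two integers are relatively prime iff their greatest common divisor is 1,
# so the more general gcd algorithm answers the more specific question.
def relatively_prime(m, n):
    return gcd(m, n) == 1

print(relatively_prime(25, 12))     # True: gcd(25, 12) = 1
print(relatively_prime(25, 15))     # False: gcd(25, 15) = 5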

As to the set of inputs, your main concern should be designing an algorithm that can handle a set
of inputs that is natural for the problem at hand. For example, excluding integers equal to 1 as
possible inputs for a greatest common divisor algorithm would be quite unnatural. On the other
hand, although the standard formula for the roots of a quadratic equation holds for complex
coefficients, we would normally not implement it on this level of generality unless this capability
is explicitly required.
Asymptotic Analysis:
In asymptotic analysis, we evaluate the performance of an algorithm in terms of input size (we do not measure the actual running time); we calculate how the time (or space) taken by an algorithm increases with the input size.

Asymptotic notation is a way to describe the running time or space complexity of an algorithm based on the input size. It is commonly used in complexity analysis to describe how an algorithm performs as the size of the input grows. The three most commonly used notations are Big-O, Omega, and Theta.
1. Big-Oh (O): This notation provides an upper bound on the growth rate of an algorithm’s
running time or space usage. It represents the worst-case scenario, i.e., the maximum amount
of time or space an algorithm may need to solve a problem.
We define an algorithm's worst-case time complexity by using the Big-O notation, which characterizes the set of functions that grow more slowly than or at the same rate as the given expression. Furthermore, it gives the maximum amount of time an algorithm requires over all input values.

Fig : Big-Oh(O)
Big-O is a measure of the longest amount of time taken by the algorithm to complete its execution. Let f(n) be the time efficiency of an algorithm. The function f(n) is said to be big-oh of g(n), written
f(n) = O(g(n)),
if there exist a positive constant c and a non-negative integer n₀ satisfying the constraint
f(n) ≤ c · g(n) for all n ≥ n₀.
Here c · g(n) is the upper bound. The upper bound on f(n) indicates that the function f(n) will not consume more than the specified time c · g(n); the running time of f(n) may equal c · g(n), but it will never be worse than the upper bound, i.e., f(n) grows no faster than g(n).
For example:
1) Let f(n) = 100n + 5; prove that f(n) = O(n).
Solution:
f(n) = 100n + 5
     ≤ 100n + n   (replacing 5 with n, valid whenever n ≥ 5)
     = 101n
The constraint to be satisfied is
f(n) ≤ c · g(n) for n ≥ n₀
100n + 5 ≤ 101n for n ≥ 5
where c = 101, g(n) = n and n₀ = 5. Hence f(n) = O(g(n)), i.e., f(n) = O(n).
2) Let f(n) = 10n³ + 8; prove that f(n) = O(n³).
Solution:
f(n) = 10n³ + 8
     ≤ 10n³ + n³   (replacing 8 with n³, valid whenever n³ ≥ 8, i.e., n ≥ 2)
     = 11n³
The constraint to be satisfied is
f(n) ≤ c · g(n) for n ≥ n₀
10n³ + 8 ≤ 11n³ for n ≥ 2
where c = 11, g(n) = n³ and n₀ = 2. Hence f(n) = O(g(n)), i.e., f(n) = O(n³).
Exercises:
3) Let f(n) = 6 · 2ⁿ + n²; prove that f(n) = O(2ⁿ).
4) Let f(n) = 3n + 2; prove that f(n) = O(n).
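
A quick numerical spot-check of such proofs can complement the algebra (an illustrative sketch only; a finite sample of n values cannot replace the proof, and the same helper checks the Ω constraint with the inequality reversed):

# Check f(n) <= c * g(n) on a sample of n >= n0 (a finite spot-check only).
def check_big_oh(f, g, c, n0, n_max=10**6):
    step = max(1, n_max // 100)
    return all(f(n) <= c * g(n) for n in range(n0, n_max, step))

# Example 1 above: f(n) = 100n + 5 with c = 101, g(n) = n, n0 = 5.
print(check_big_oh(lambda n: 100 * n + 5, lambda n: n, c=101, n0=5))      # True
# Example 2 above: f(n) = 10n^3 + 8 with c = 11, g(n) = n^3, n0 = 2.
print(check_big_oh(lambda n: 10 * n**3 + 8, lambda n: n**3, c=11, n0=2))  # True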
2. Big-Omega (Ω): This notation provides a lower bound on the growth rate of an algorithm’s
running time or space usage. It represents the best-case scenario, i.e., the minimum amount of
time or space an algorithm may need to solve a problem.
It defines the best case of an algorithm's time complexity: the Omega notation characterizes the set of functions that grow faster than or at the same rate as the given expression. Furthermore, it gives the minimum amount of time an algorithm requires over all input values.
Fig : Big-Omega (Ω)
Big-Ω is a measure of the least amount of time taken by the algorithm to complete its execution. Let f(n) be the time complexity of an algorithm. The function f(n) is said to be big-Ω of g(n), written
f(n) = Ω(g(n)),
if there exist a positive constant c and a non-negative integer n₀ satisfying
f(n) ≥ c · g(n) for all n ≥ n₀.
Here c · g(n) is the lower bound. The lower bound on f(n) indicates that the function f(n) will consume at least the specified time c · g(n), i.e., the running time is always at least c · g(n). In general, the lower bound implies that the algorithm cannot perform better than this time.
For example:
1) Let f(n) = 100n + 5; prove that f(n) = Ω(n).
Solution:
f(n) = 100n + 5
     ≥ 100n + 0   (replacing 5 with 0)
     = 100n
The constraint to be satisfied is
f(n) ≥ c · g(n) for n ≥ n₀
100n + 5 ≥ 100n for n ≥ 0
where c = 100, g(n) = n and n₀ = 0. Hence f(n) = Ω(g(n)), i.e., f(n) = Ω(n).
2) Let f(n) = 10n³ + 5; prove that f(n) = Ω(n³).
Solution:
f(n) = 10n³ + 5
     ≥ 10n³ + 0   (replacing 5 with 0)
     = 10n³
The constraint to be satisfied is
f(n) ≥ c · g(n) for n ≥ n₀
10n³ + 5 ≥ 10n³ for n ≥ 0
where c = 10, g(n) = n³ and n₀ = 0. Hence f(n) = Ω(g(n)), i.e., f(n) = Ω(n³).
Exercises:
3) Let f(n) = 6 · 2ⁿ + n²; prove that f(n) = Ω(2ⁿ).
4) Let f(n) = 3n + 2; prove that f(n) = Ω(n).
3. Big-Theta (Θ): This notation provides both an upper and lower bound on the growth rate of
an algorithm’s running time or space usage. It represents the average-case scenario, i.e., the
amount of time or space an algorithm typically needs to solve a problem.
It defines the average case of an algorithm's time complexity: when a function lies in both O(expression) and Ω(expression), the Theta notation is used. This is how we define the average-case time complexity of an algorithm.

Fig : Big-Theta (Θ)


Big-Θ is a measure of both the least and the longest amount of time taken by the algorithm to complete its execution. Let f(n) be the time complexity of an algorithm. The function f(n) is said to be big-Θ of g(n), written
f(n) = Θ(g(n)),
if there exist positive constants c₁ and c₂ and a non-negative integer n₀ satisfying
c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀.
That is, f(n) lies above c₁ · g(n) and below c₂ · g(n) for sufficiently large values of n.
For example:
1) Let f(n) = 100n + 5; prove that f(n) = Θ(n).
Solution: c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for n ≥ n₀
100 · n ≤ 100n + 5 ≤ 105 · n for n ≥ 1
where c₁ = 100, c₂ = 105, g(n) = n and n₀ = 1. Hence f(n) = Θ(g(n)), i.e., f(n) = Θ(n).
In general, the choice of asymptotic notation depends on the problem and the specific algorithm
used to solve it. It is important to note that asymptotic notation does not provide an exact running
time or space usage for an algorithm, but rather a description of how the algorithm scales with
respect to input size. It is a useful tool for comparing the efficiency of different algorithms and
for predicting how they will perform on large input sizes.
Theorem: If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then
t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
Proof: The proof extends to orders of growth the following simple fact about four arbitrary real numbers a₁, b₁, a₂ and b₂:
if a₁ ≤ b₁ and a₂ ≤ b₂, then a₁ + a₂ ≤ 2 max{b₁, b₂}.
Since t₁(n) ∈ O(g₁(n)), there exist some positive constant c₁ and some non-negative integer n₁ such that
t₁(n) ≤ c₁ g₁(n) for all n ≥ n₁.
Similarly, since t₂(n) ∈ O(g₂(n)),
t₂(n) ≤ c₂ g₂(n) for all n ≥ n₂.
Let us denote c₃ = max{c₁, c₂} and consider n ≥ max{n₁, n₂} so that we can use both inequalities. Adding them yields the following:
t₁(n) + t₂(n) ≤ c₁ g₁(n) + c₂ g₂(n)
             ≤ c₃ g₁(n) + c₃ g₂(n)
             = c₃ (g₁(n) + g₂(n))
             ≤ 2 c₃ max{g₁(n), g₂(n)}.
Hence t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}), with the constants c and n₀ required by the O definition being 2c₃ = 2 max{c₁, c₂} and max{n₁, n₂}, respectively.
So what does this property imply for an algorithm that comprises two consecutively executed parts? It implies that the algorithm's overall efficiency is determined by the part with the higher order of growth, i.e., its least efficient part:
if t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
For example, we can check whether an array has equal elements by the following two-part algorithm (see the sketch after this example):
✓ Sort the array by applying some known sorting algorithm.
✓ Scan the sorted array to check its consecutive elements for equality.
If, for example, a sorting algorithm used in the first part makes no more than ½ n(n − 1) comparisons (and hence is in O(n²)) while the second part makes no more than n − 1 comparisons (and hence is in O(n)), the efficiency of the entire algorithm will be in
O(max{n², n}) = O(n²).
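
A minimal sketch of this two-part algorithm (using Python's built-in sort for the first part, so part one is actually O(n log n) here; the max rule applies either way):

# Part 1: sort the array.  Part 2: scan consecutive elements for equality.
def has_equal_elements(a):
    s = sorted(a)
    return any(s[i] == s[i + 1] for i in range(len(s) - 1))

print(has_equal_elements([3, 1, 4, 1, 5]))   # True: 1 appears twice
print(has_equal_elements([3, 1, 4, 5]))      # False: all elements are distinct

The overall efficiency is determined by the slower part, exactly as the property predicts.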

Extending the Property

The property can be applied to the Ω notation with a slight change: replace max with min.
If t₁(n) ∈ Ω(g₁(n)) and t₂(n) ∈ Ω(g₂(n)), then
t₁(n) + t₂(n) ∈ Ω(min{g₁(n), g₂(n)}).
Using Limits to Compare Orders of Growth
A convenient way to compare the orders of growth of two functions t(n) and g(n) is to compute the limit of their ratio:
lim (n→∞) t(n)/g(n) = 0      ⇒ t(n) has a smaller order of growth than g(n)
lim (n→∞) t(n)/g(n) = c > 0  ⇒ t(n) has the same order of growth as g(n)
lim (n→∞) t(n)/g(n) = ∞      ⇒ t(n) has a larger order of growth than g(n)
The first case means t(n) = O(g(n)); if the second case is true, then t(n) = Θ(g(n)); the last case means t(n) = Ω(g(n)).
L'Hôpital's Rule: if the limit of the ratio is an indeterminate form, it can be computed as
lim (n→∞) t(n)/g(n) = lim (n→∞) t′(n)/g′(n)
Note: t′(n) and g′(n) are the first-order derivatives of t(n) and g(n).

Stirling's Formula: n! ≈ √(2πn) (n/e)ⁿ for large values of n.
Example 1: Compare the orders of growth of t(n) = ½ n(n − 1) and g(n) = n².
Solution:
lim (n→∞) t(n)/g(n) = lim (n→∞) [½ n(n − 1)] / n²
                    = ½ lim (n→∞) (n² − n)/n²
                    = ½ lim (n→∞) (1 − 1/n)
                    = ½
Since the limit is equal to a positive constant, the functions have the same order of growth, or symbolically,
½ n(n − 1) ∈ Θ(n²).
Example 2: Compare the orders of growth of t(n) = log₂ n and g(n) = √n.
Solution:
lim (n→∞) log₂ n / √n = lim (n→∞) (log₂ n)′ / (√n)′     (by L'Hôpital's rule)
                      = lim (n→∞) [(log₂ e) · (1/n)] / [1/(2√n)]
                      = 2 log₂ e · lim (n→∞) √n / n
                      = 2 log₂ e · lim (n→∞) 1/√n
                      = 0
Since the limit is equal to zero, log₂ n has a smaller order of growth than √n:
log₂ n ∈ O(√n).
Example 3: Compare the orders of growth of t(n) = log² n and g(n) = log n².
lim (n→∞) t(n)/g(n) = lim (n→∞) log² n / log n²
                    = lim (n→∞) (log n · log n) / log(n · n)
                    = lim (n→∞) (log n · log n) / (log n + log n)
                    = lim (n→∞) (log n · log n) / (2 log n)
                    = lim (n→∞) (log n)/2
                    = ∞
Hence log² n = Ω(log n²); that is, log n² = O(log² n).
Example 4: Find the classes O(g(n)), Θ(g(n)) and Ω(g(n)) for the following functions.
a) t(n) = (n² + 1)¹⁰ and g(n) = n²¹
Solution:
lim (n→∞) t(n)/g(n) = lim (n→∞) (n² + 1)¹⁰ / n²¹
                    = lim (n→∞) (n² + 1)¹⁰ / (n²⁰ · n)
                    = lim (n→∞) (1/n) · [(n² + 1)/n²]¹⁰
                    = lim (n→∞) (1/n) · (1 + 1/n²)¹⁰
                    = 0 · (1 + 0)¹⁰
                    = 0
Hence (n² + 1)¹⁰ = O(n²¹).
b) t(n) = (n² + 1)¹⁰ and g(n) = n²⁰
Solution:
lim (n→∞) t(n)/g(n) = lim (n→∞) (n² + 1)¹⁰ / n²⁰
                    = lim (n→∞) (n² + 1)¹⁰ / (n²)¹⁰
                    = lim (n→∞) [(n² + 1)/n²]¹⁰
                    = lim (n→∞) (1 + 1/n²)¹⁰
                    = (1 + 0)¹⁰
                    = 1
Hence (n² + 1)¹⁰ = Θ(n²⁰).
c) t(n) = (n² + 1)¹⁰ and g(n) = n¹⁰
Solution:
lim (n→∞) t(n)/g(n) = lim (n→∞) (n² + 1)¹⁰ / n¹⁰
                    = lim (n→∞) [(n² + 1)/n]¹⁰
                    = lim (n→∞) (n + 1/n)¹⁰
                    = ∞
Hence (n² + 1)¹⁰ = Ω(n¹⁰).
