
Fundamentals of Algorithms

Introduction to algorithms, why we analyse algorithms, running-time analysis, how to compare algorithms, rate of growth, commonly used rates of growth, types of analysis,

Asymptotic notation, Big-O notation, Omega (Ω) notation, Theta (Θ) notation, asymptotic analysis, properties of notations, commonly used logarithms and summations, performance and characteristics of algorithms,

Master Theorem for divide and conquer, divide-and-conquer Master Theorem: problems and solutions, Master Theorem for subtract-and-conquer recurrences, method of guessing and confirming

Introduction to Algorithm

• The term was coined after the Persian author Abu Ja'far Mohammad ibn Musa al-Khwarizmi.

• An algorithm is a self-contained, finite sequence of instructions or actions that produces a result for a specific problem in a finite amount of time.

• A procedure that reduces the solution of some class of problems to a series of rote steps which, if followed to the letter, and as far as may be necessary, is bound to:
- always give some answer rather than ever give no answer;
- always give the right answer and never give a wrong answer;
- always be completed in a finite number of steps, rather than in an infinite number;
- work for all instances of problems of the class.

• It is a logical and mathematical approach to solving a problem by any possible method.

• An algorithm is an effective and efficient method that expresses the solution of a problem within a finite amount of space and time, in a well-defined formal language.

• Starting from an initial state, the instructions describe a computational process that, when executed, proceeds through a finite number of well-defined successive states, eventually producing output and terminating at a final state.

• In other words, we can say that:
o A step-by-step procedure for solving a problem is known as an algorithm.
o An algorithm is a finite set of instructions that, if followed, accomplishes a particular task.
o An algorithm is a sequence of computational steps that transform the input into a valuable or required output.
o Any special method of solving a certain kind of problem is known as an algorithm.

Criteria of Algorithm
All algorithms must satisfy the following criteria:

1) Input: Zero or more quantities are externally supplied.

2) Output: At least one quantity is produced.

3) Definiteness: Each instruction of the algorithm should be clear and unambiguous.

4) Finiteness: The process should terminate after a finite number of steps.

5) Effectiveness: Every instruction must be basic enough to be carried out, in principle, by a person using only paper and pencil.

Properties of Algorithm

Simply writing a sequence of instructions as an algorithm is not sufficient to accomplish a task. The following properties must be associated with an algorithm.

Non Ambiguity

Each step in an algorithm should be non-ambiguous: every instruction should be clear and precise, and no instruction should admit a conflicting meaning. This property also reflects the effectiveness of the algorithm.

Range of Input

The range of input should be specified. An algorithm is normally input-driven, and if the range of input is not specified the algorithm may never terminate.

Multiplicity

The same algorithm can be represented in several different ways: we can write the sequence of instructions in plain English, or we can write it in the form of pseudocode. Similarly, several different algorithms can be written to solve the same problem.

Speed

Algorithms are written using some specific ideas, but such an algorithm should be efficient and should produce its output quickly.

Finiteness

The algorithm should be finite: after performing the required operations it should terminate.

Types of Algorithm
There are many types of algorithms, but the most fundamental, each described below, are:

1. Recursive algorithms

2. Dynamic programming algorithms

3. Backtracking algorithms

4. Divide and conquer algorithms

5. Greedy algorithms

6. Brute force algorithms

7. Randomized algorithms

1) Simple recursive algorithm

A simple recursive algorithm solves the base case directly and then recurs with a simpler or smaller input each time (a base value, for which the algorithm terminates, is set at the start). It is used to solve problems that can be broken into smaller problems of the same type.

Example: Factorial using recursion
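A minimal sketch in Python (the function name factorial is illustrative):

def factorial(n):
    # Base case: 0! = 1; the recursion terminates here.
    if n == 0:
        return 1
    # Recursive case: reduce the problem to a smaller instance of itself.
    return n * factorial(n - 1)

print(factorial(5))  # 120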

2) Dynamic programming algorithm


A dynamic programming algorithm (also known as a dynamic optimization algorithm) remembers past results and uses them to find new results: it solves a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems only once, and storing their solutions for future use instead of recomputing them.

Example: Fibonacci sequence
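A sketch of the memoized Fibonacci computation in Python; functools.lru_cache stores each subproblem's solution so it is computed only once:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and then served from the cache.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025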

3) Backtracking algorithm

Let us learn backtracking with an example. Say we have a problem “Monk” and we divide it into four smaller problems “M, R, A, A”. It may be that the solutions of these subproblems are not accepted as a solution of “Monk”, and we do not know in advance which subproblem the solution depends on. So we check each of them, one by one, until we find the solution for “Monk”. Basically, we attempt to solve a subproblem, and if we do not reach the desired solution we undo whatever we have done and start again until we find the solution.

Example: Queens Problem
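A compact sketch of the N-Queens backtracking idea in Python (names are illustrative): queens are placed row by row, and when no safe column exists the last placement is undone and the next option tried:

def solve_queens(n, cols=()):
    # cols[i] is the column of the queen already placed in row i.
    row = len(cols)
    if row == n:                       # all queens placed: a solution
        return cols
    for col in range(n):
        # Keep the placement only if it conflicts with no earlier queen
        # (same column or same diagonal).
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            result = solve_queens(n, cols + (col,))
            if result is not None:
                return result
        # Otherwise backtrack: drop this choice and try the next column.
    return None

print(solve_queens(4))  # (1, 3, 0, 2)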

4) Divide and conquer algorithm

Divide and conquer consists of two parts: first it divides the problem into smaller subproblems of the same type and solves them recursively, and then it combines their solutions to form the solution of the original problem.

Example: Quick sort, Merge sort
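A sketch of merge sort in Python showing the divide and combine steps:

def merge_sort(a):
    # Base case: 0 or 1 elements are already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # divide: sort each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0       # combine: merge the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]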

5) Greedy algorithm

A greedy algorithm solves a problem by making the locally optimal choice at each step (without regard for consequences), in the hope of finding a globally optimal solution. A greedy algorithm is used to find an optimal solution, but following one does not guarantee that an optimal solution will be found. For some problems no efficient exact algorithm is currently known (the NP-complete problems), and greedy methods are often used to obtain good approximate solutions for them.
Example: Huffman tree, Counting money
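A sketch of the counting-money example in Python; the greedy choice is to always hand out the largest denomination that still fits (optimal for canonical currency systems, though not for every set of denominations):

def make_change(amount, denominations=(100, 50, 20, 10, 5, 1)):
    # Greedy: repeatedly take the largest coin/note not exceeding the rest.
    result = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(make_change(287))  # [100, 100, 50, 20, 10, 5, 1, 1]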

6) Brute force algorithm

A brute force algorithm simply tries all possibilities until a satisfactory solution is found. Such algorithms can be used to find the optimal (best) solution, since they check all possible solutions, or to find a merely satisfactory solution (not necessarily the best), stopping as soon as a solution to the problem is found.

Example: Exact string matching algorithm
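A sketch of exact (brute-force) string matching in Python, trying every starting position until a match is found:

def brute_force_match(text, pattern):
    n, m = len(text), len(pattern)
    # Try each of the n - m + 1 possible starting positions.
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            return i          # satisfactory solution found: stop
    return -1                 # no occurrence of the pattern

print(brute_force_match("abracadabra", "cad"))  # 4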

7) Randomized algorithm

A randomized algorithm uses a random number at least once during the computation to make a decision.

Example: Quicksort, where a random number is used to choose the pivot.
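A sketch of randomized quicksort in Python, where the random decision is the choice of pivot:

import random

def quick_sort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)   # the randomized decision
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)

print(quick_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]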

Advantages of the Algorithm

• It identifies the solution process, decision points and variables required to solve the problem.

• It helps in dividing a huge problem into smaller, manageable steps.

• The analysis and specification of the process lead to efficiency.

• Separation of the steps divides labour and development expertise.


Disadvantages of the Algorithm

• At a specific point, the algorithm terminates.

• Inability to solve problems that generate non-computational results.

• Writing an algorithm consumes a lot of time.

Expressing Algorithms

Pseudocode

Pseudocode is an expressive form of an algorithm, or a way to describe one. It is a combination of natural language and high-level programming constructs that represents the fundamental idea behind a general implementation of a data structure or algorithm. Pseudocode incorporates natural language where the details are insignificant, together with standard programming-language constructs where more clarity is needed. Pseudocode cannot be executed on a computer, but it models the actual programming code at a similar level of detail.

Pseudocode is written using some specific words and characters, as shown below:

• Comments begin with a double forward slash “//”.

• Matching braces “{” and “}” are used to delimit blocks: a compound statement (a set of simple statements) can be written as a block, and each statement is terminated by a semicolon “;”. The body of a procedure also forms a block.

• All identifiers start with a letter, and the data types of variables are not declared explicitly.

• An assignment statement is used for assigning values to variables.

• The logical operators and, or and not, together with the relational operators, are provided to produce boolean values (i.e., true and false).

• Input and output are represented by read and write instructions.

• “if-then” expressions are used to express a conditional statement.
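A short fragment illustrating these conventions (the algorithm shown is an arbitrary example):

// Find the largest of n numbers stored in array A
Algorithm Max(A, n)
{
    max := A[1];              // assignment statement
    for i := 2 to n do
    {
        if (A[i] > max) then  // conditional statement
            max := A[i];
    }
    write (max);              // output instruction
}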

Algorithm vs. Pseudocode


Flowchart

• It is nothing but a manner of representing an algorithm.

• It is also known as the flow diagram, which illustrates a process or a detailed series of steps
needed to produce a specific output.

• A flowchart is composed of different symbols and of control lines connecting those symbols. Each symbol specifies a distinct function.

• It is extremely useful in programming because it simplifies a complicated algorithm and converts it into an understandable pictorial representation.

• With the help of a flowchart, the application designer can easily separate the different components of the process. It facilitates analysis by providing a step-by-step view of the problem.

Algorithm vs. Flowchart


Need of Algorithm

1. To understand the basic idea of the problem.

2. To find an approach to solve the problem.

3. To improve the efficiency of existing techniques.

4. To understand the basic principles of designing the algorithms.

5. To compare the performance of the algorithm with respect to other techniques.

6. It is the best method of describing a solution without giving the implementation details.

7. The algorithm gives the designer a clear description of the requirements and goal of the problem.

8. A good design can produce a good solution.

9. To understand the flow of the problem.

10. To measure the behaviour (or performance) of a method in all cases (best case, worst case, average case).

11. With the help of an algorithm, we can also identify the resources (memory, input/output cycles) it requires.

12. With the help of algorithm, we convert art into a science.

13. To understand the principle of designing.

14. We can measure and analyse the complexity (time and space) of a problem with respect to the input size without implementing and running the algorithm; this reduces the cost of design.

Analysis of Algorithms

Why Analyse an Algorithm? The most straightforward reason for analyzing an algorithm is to
discover its characteristics in order to evaluate its suitability for various applications or compare it with
other algorithms for the same application. Moreover, the analysis of an algorithm can help us
understand it better, and can suggest informed improvements. Algorithms tend to become shorter,
simpler, and more elegant during the analysis process.

Computational Complexity. The branch of theoretical computer science whose goal is to classify algorithms according to their efficiency, and computational problems according to their inherent difficulty, is known as computational complexity. Paradoxically, such classifications are typically not useful for predicting performance or for comparing algorithms in practical applications, because they focus on order-of-growth worst-case performance. Here, we focus on analyses that can be used to predict performance and compare algorithms.
Analysis of Algorithms.

A complete analysis of the running time of an algorithm involves the following steps:

• Implement the algorithm completely.

• Determine the time required for each basic operation.

• Identify unknown quantities that can be used to describe the frequency of execution of the
basic operations.

• Develop a realistic model for the input to the program.

• Analyze the unknown quantities, assuming the modelled input.

• Calculate the total running time by multiplying the time by the frequency for each operation, then adding all the products.

Classical algorithm analysis on early computers could result in exact predictions of running
times. Modern systems and algorithms are much more complex, but modern analyses are
informed by the idea that exact analysis of this sort could be performed in principle. The term
analysis of algorithms is used to describe approaches to the study of the performance of algorithms; we will perform the following types of analysis:

• The worst-case runtime complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n.

• The best-case runtime complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n.

• The average-case runtime complexity of the algorithm is the function defined by the average number of steps taken on any instance of size n.

• The amortized runtime complexity of the algorithm is the function defined by a sequence of operations applied to an input of size n and averaged over time.

Complexity of Algorithm

It is very convenient to classify algorithms based on the relative amount of time or space they require, and to specify the growth of the time/space requirement as a function of the input size. An algorithm is said to be efficient and fast if it takes less time to execute and consumes less memory space. The performance of an algorithm is measured on the basis of the following properties:

1. Time Complexity
2. Space Complexity


Space Complexity
It is the amount of memory space required by the algorithm during the course of its execution. Space complexity must be taken seriously for multi-user systems and in situations where limited memory is available. Whenever a solution to a problem is implemented, some memory is required. An algorithm may use memory for the following:
1. Variables (including constant values and temporary values)
2. Program instructions
3. Execution
Space complexity is the amount of memory used by the algorithm (including the input values to the algorithm) to execute and produce the result. Auxiliary space is sometimes confused with space complexity, but auxiliary space is the extra or temporary space used by the algorithm during its execution.
Space Complexity = Auxiliary Space + Input Space
Memory Usage during Execution
While executing, an algorithm uses memory space for three reasons:
1. Instruction Space
It is the amount of memory used to store the compiled version of the instructions.

2. Environmental Stack
Sometimes an algorithm (function) may be called inside another algorithm (function). In such a situation, the current variables are pushed onto the system stack, where they wait for further execution, and then the call to the inner algorithm (function) is made. For example, if a function A() calls function B() inside it, all the variables of function A() are stored on the system stack temporarily while function B() is called and executed inside function A().

3. Data Space
The amount of space used by the variables and constants.

While calculating the space complexity of an algorithm, however, we usually consider only the data space and neglect the instruction space and the environmental stack.

Calculating the Space Complexity
To calculate the space complexity, we need to know the amount of memory used by each type of data-type variable. This generally varies between operating systems, but the method for calculating the space complexity remains the same.
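As a small sketch (assuming, for illustration, that an integer occupies 4 bytes):

def sum_array(a):
    total = 0          # one extra integer: constant auxiliary space
    for x in a:        # loop variable: also constant space
        total += x
    return total

# Input space: n integers, about 4n bytes under the assumption above.
# Auxiliary space: a fixed number of variables, i.e. O(1).
# Space complexity = auxiliary space + input space = O(n).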

Time Complexity

• Time Complexity is a way to represent the amount of time required by the program to run till
its completion.

• It is generally good practice to keep the required time to a minimum, so that our algorithm completes its execution in the minimum time possible.
• Time complexity of an algorithm signifies the total time required by the program to run till its
completion.

• The time complexity of algorithms is most commonly expressed using big-O notation, an asymptotic notation for representing time complexity. We will study it in detail in the following sections.

• Time complexity is most commonly estimated by counting the number of elementary steps an algorithm performs to finish its execution. For example, code that processes n items with a loop executes the loop body n times, so its time complexity grows with n, and as n increases the time taken also increases; code that produces its result in a single step has constant time complexity, which never depends on n. Since an algorithm's performance may vary with different types of input data, we usually use the worst-case time complexity of an algorithm, because that is the maximum time taken for any input size. (A sketch of both cases follows.)
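A sketch of the two cases just described, summing the first n integers:

def sum_loop(n):
    # Linear time: the loop body runs n times, so steps grow with n.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Constant time: one step regardless of n (Gauss's formula).
    return n * (n + 1) // 2

print(sum_loop(100), sum_formula(100))  # 5050 5050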

Types of Notations for Time Complexity
Now we will discuss and understand the various notations used for time complexity.

• Big Oh denotes "fewer than or the same as" iterations.

• Big Omega denotes "more than or the same as" iterations.

• Big Theta denotes "the same as" iterations.

• Little Oh denotes "fewer than" iterations.

• Little Omega denotes "more than" iterations.

Understanding Notations of Time Complexity with Example

• O(expression) is the set of functions that grow more slowly than, or at the same rate as, the expression. It indicates the maximum time required by an algorithm for all input values. It represents the worst case of an algorithm's time complexity.

• Omega (expression) is the set of functions that grow faster than or at the same rate as
expression. It indicates the minimum time required by an algorithm for all input values. It
represents the best case of an algorithm's time complexity.

• Theta(expression) consists of all the functions that lie in both O(expression) and Omega(expression). It indicates a tight bound on an algorithm's running time and represents the average case of an algorithm's time complexity.

Consider searching a list for a given number (linear search). These are the possible running times:

• Worst-case running time: the algorithm finds the number at the end of the list, or determines that the number isn't in the list, and so goes through the entire list; this takes linear time. The worst-case running time is usually what is examined.

• Best-case running time: the algorithm gets lucky and finds the number on the first check, taking constant time regardless of the input size. Since we are not usually that lucky, we don't usually care about the best-case running time.

• Expected-case running time: the algorithm finds the number halfway through the list (assuming the number is in the input). We sometimes care about the expected case, though it can be harder to calculate than the worst case.
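A linear-search sketch in Python making the three cases concrete:

def linear_search(items, target):
    # The number of comparisons is the number of steps taken.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

data = [8, 3, 5, 9, 1]
print(linear_search(data, 8))   # best case: found on the first check
print(linear_search(data, 7))   # worst case: scans the entire list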

Typical Complexities of an Algorithm

o Constant Complexity: It imposes a complexity of O(1). The algorithm executes a constant number of steps (such as 1, 5 or 10) to solve a given problem; the count of operations is independent of the input data size.

o Logarithmic Complexity: It imposes a complexity of O(log(N)). The algorithm executes on the order of log(N) steps; for operations on N elements, the logarithm is usually taken to base 2. For N = 1,000,000, an algorithm with complexity O(log(N)) would undergo about 20 steps (to within a constant factor). The base of the logarithm does not affect the order of the operation count, so it is usually omitted.

o Linear Complexity: It imposes a complexity of O(N). The algorithm takes about as many steps as there are elements, so implementing an operation on N elements takes on the order of N steps. For example, if there are 500 elements, it takes about 500 steps. In linear complexity the number of steps depends linearly on the number of elements; for example, the number of steps for N elements can be N/2 or 3*N.

o Linearithmic Complexity: It imposes a running time of O(N*log(N)). The algorithm executes on the order of N*log(N) steps on N elements to solve the given problem. For 1,000 elements, it will execute about 10,000 steps.

o Quadratic Complexity: It imposes a complexity of O(N²). For input data of size N, it performs on the order of N² operations on the N elements to solve a given problem. If N = 100, it will take about 10,000 steps. In other words, whenever the number of operations has a quadratic relation to the input data size, the complexity is quadratic; for example, for N elements the number of steps might be 3*N²/2.

o Cubic Complexity: It imposes a complexity of O(N³). For input data of size N, it executes on the order of N³ steps on the N elements to solve a given problem. For example, if there are 100 elements, it executes about 1,000,000 steps.

o Exponential Complexity: It imposes a complexity such as O(2^N), O(N!), and so on. For N elements, it executes a number of operations that grows exponentially with the input data size. For example, if N = 10, then 2^N = 1,024; if N = 20, then 2^N = 1,048,576; and if N = 100, then 2^N is a number with 31 digits. The function N! grows even faster: 5! = 120, 10! = 3,628,800, and so on.
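A small sketch that prints rough operation counts for these classes (illustrative only; constant factors are ignored):

import math

for n in (10, 100, 1000):
    print(f"n={n:>5}: log n = {math.log2(n):6.1f}, "
          f"n log n = {n * math.log2(n):10.0f}, "
          f"n^2 = {n ** 2:9d}, "
          f"2^n has {len(str(2 ** n))} digits")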

Asymptotic Analysis of Algorithms (Growth of Functions)
The resources used by an algorithm are usually expressed as a function of the input. Often this function is messy and complicated to work with, so to study its growth efficiently we reduce it to its important part. Let f(n) = an² + bn + c. In this function, the n² term dominates when n gets sufficiently large.

Dominant terms are what we are interested in when reducing a function: we ignore all constants and coefficients and look at the highest-order term in n.

Asymptotic notation: The word Asymptotic means approaching a value or curve arbitrarily
closely (i.e., as some sort of limit is taken).

Asymptotic analysis

It is a technique for representing limiting behaviour. The methodology has applications across the sciences, and it can be used to analyze the performance of an algorithm on large data sets.

1. In computer science, asymptotic analysis is used in the analysis of algorithms to consider their performance on very large input datasets.

2. The simplest example is the function f(n) = n² + 3n: the term 3n becomes insignificant compared with n² when n is very large. The function f(n) is said to be "asymptotically equivalent to n² as n → ∞", written symbolically as f(n) ~ n².

Asymptotic notations are used to express the fastest and slowest possible running times of an algorithm; these are also referred to as the 'best case' and 'worst case' scenarios respectively. "In asymptotic notation, we derive the complexity with respect to the size of the input (for example, in terms of n)." "These notations are important because, without incurring the cost of running the algorithm, we can estimate its complexity."

Why is Asymptotic Notation Important?


1. They give a simple characterization of an algorithm's efficiency.
2. They allow the performance of various algorithms to be compared.

Asymptotic Notations
Asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to describe the running-time complexity of an algorithm:

1. Big-oh notation: Big-oh is the formal method of expressing the upper bound of an algorithm's running time; it is a measure of the longest amount of time the algorithm can take. The function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist positive constants c and n₀ such that

f(n) ≤ c·g(n) for all n ≥ n₀

Hence, the function g(n) is an upper bound for f(n), as g(n) grows at least as fast as f(n).

Asymptotic upper bound
For example:
1. 3n + 2 = O(n), since 3n + 2 ≤ 4n for all n ≥ 2.
2. 3n + 3 = O(n), since 3n + 3 ≤ 4n for all n ≥ 3.
Hence, the complexity of f(n) can be represented as O(g(n)).

2. Omega (Ω) notation: The function f(n) = Ω(g(n)) [read as "f of n is omega of g of n"] if and only if there exist positive constants k and n₀ such that

f(n) ≥ k·g(n) for all n ≥ n₀

Asymptotic lower bound
For example: f(n) = 8n² + 2n − 3 ≥ 8n² − 3 = 7n² + (n² − 3) ≥ 7n² for all n ≥ 2, so f(n) = Ω(n²) with k = 7 and n₀ = 2.
Hence, the complexity of f(n) can be represented as Ω(g(n)).

3. Theta (Θ) notation: The function f(n) = Θ(g(n)) [read as "f of n is theta of g of n"] if and only if there exist positive constants k₁, k₂ and n₀ such that

k₁·g(n) ≤ f(n) ≤ k₂·g(n) for all n ≥ n₀

Asymptotic tight bound
For example: 3n + 2 = Θ(n), since 3n + 2 ≥ 3n and 3n + 2 ≤ 4n for all n ≥ 2; here k₁ = 3, k₂ = 4, and n₀ = 2. Hence, the complexity of f(n) can be represented as Θ(g(n)). The Theta notation is more precise than both the big-oh and Omega notations: f(n) = Θ(g(n)) if g(n) is both an upper and a lower bound for f(n).
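A quick numeric check of the Theta example above:

# Verify 3n <= 3n + 2 <= 4n for a range of n >= n0 = 2.
for n in range(2, 1000):
    assert 3 * n <= 3 * n + 2 <= 4 * n
print("k1=3, k2=4, n0=2 bound 3n+2 as Theta(n)")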

Analyzing Algorithm Control Structure

To analyze a program or algorithm, we must note that each instruction affects the overall performance, and therefore each instruction must be analyzed separately. However, there are some control structures that appear in every program and have a characteristic asymptotic analysis. Some algorithm control structures are:
1. Sequencing
2. If-then-else
3. for loop
4. While loop

1. Sequencing: Suppose our algorithm consists of two parts, A and B, where A takes time tA and B takes time tB. By the sequence rule the total computation time is tA + tB; by the maximum rule, the order of this total is max(tA, tB).
Example: Suppose tA = O(n) and tB = Θ(n²). Then the total computation time is O(n) + Θ(n²) = Θ(n²).
2. For loop: The general format of a for loop is:
for (initialization; condition; update) statement(s);
Complexity of nested for loops: the outer loop executes N times, and every time it executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N * M times; the total complexity of the two loops is O(N * M), which is O(N²) when M = N.
Consider the following loop:
for i ← 1 to n {
P (i)
}
If the computation time tᵢ of P(i) varies as a function of i, then the total computation time for the loop is given not by a multiplication but by a sum; that is, the loop above takes total time

t(n) = Σ tᵢ, summed for i = 1 to n.
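A counting sketch contrasting the product rule for nested loops with the sum rule for a body whose cost depends on i:

N, M = 4, 3

count = 0
for i in range(N):          # outer loop: N iterations
    for j in range(M):      # inner loop: M iterations each time
        count += 1
print(count)                # 12 = N * M

total = sum(i for i in range(1, N + 1))  # suppose P(i) costs i steps
print(total)                # 10 = 1 + 2 + 3 + 4, the sum of t(i)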
