
Approaches to Algorithm Design

The following are the approaches used in algorithm design, considering both
their theoretical and practical importance:

o Brute force algorithm: A general, straightforward logic structure is applied
to design the algorithm. It is also known as an exhaustive search algorithm,
because it searches all the possibilities to provide the required solution.
Such algorithms are of two types (a sketch of the optimizing variant follows
below):

1. Optimizing: Find all the solutions of the problem and then take the best
one, or, if the value of the best solution is known in advance, terminate as
soon as that value is reached.

2. Sacrificing: The search stops as soon as the best solution is found.
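
As a small illustration (a hypothetical Python sketch; the pair-sum problem
and the name best_pair are invented for this example), an optimizing
brute-force search examines every candidate and keeps the best one:

    # Brute force (optimizing): try every pair of numbers and keep the
    # pair with the largest sum. All possibilities are examined.
    def best_pair(numbers):
        best = None
        for i in range(len(numbers)):
            for j in range(i + 1, len(numbers)):
                candidate = numbers[i] + numbers[j]
                if best is None or candidate > best:
                    best = candidate
        return best

    print(best_pair([3, 1, 4, 1, 5]))  # 9, i.e. 4 + 5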

o Divide and conquer: This is a very common way of implementing an algorithm.
It lets you design an algorithm in a step-by-step fashion: the problem is
broken down into smaller subproblems, each subproblem is solved so that a
valid output is produced for a valid input, and these outputs are then passed
on and combined to solve the original problem. A sketch is given below.
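
As an illustration, here is a minimal merge sort in Python, a classic
divide-and-conquer algorithm (a sketch added for this discussion, not part of
the original text):

    # Merge sort: divide the list, conquer each half recursively,
    # and combine the two sorted halves.
    def merge_sort(items):
        if len(items) <= 1:
            return items                     # base case: already sorted
        mid = len(items) // 2
        left = merge_sort(items[:mid])       # solve each half independently
        right = merge_sort(items[mid:])
        merged = []                          # combine the valid outputs
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]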

o Greedy algorithm: This is an algorithm paradigm that makes the locally
optimal choice on each iteration in the hope of reaching the best overall
solution. It is easy to implement and has a fast execution time, but it
provides the globally optimal solution only in comparatively rare cases.
A sketch follows.
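
For instance, making change with the largest coin that still fits is a
classic greedy strategy (a sketch assuming a US-style coin system, for which
greedy change-making happens to be optimal; it is not optimal for arbitrary
coin systems):

    # Greedy change-making: at each step take the largest coin that
    # still fits, a locally optimal choice at every iteration.
    def greedy_change(amount, coins=(25, 10, 5, 1)):
        used = []
        for coin in coins:                   # coins sorted largest first
            while amount >= coin:
                amount -= coin
                used.append(coin)
        return used

    print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]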

o Dynamic programming: It makes the algorithm more efficient by storing
intermediate results. It follows five steps to find the optimal solution for
the problem (a sketch follows the list):

1. It breaks down the problem into subproblems to find the optimal solution.

2. After breaking down the problem, it finds the optimal solution of these
subproblems.

3. It stores the results of the subproblems; this is known as memoization.

4. It reuses the stored results so that the same subproblem is not
recomputed.

5. Finally, it computes the result of the complete problem.
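
The classic Fibonacci example below illustrates these steps (a minimal Python
sketch added for this discussion): the problem is split into subproblems,
each result is memoized, and stored results are reused instead of being
recomputed.

    # Dynamic programming via memoization for Fibonacci numbers.
    def fib(n, memo=None):
        if memo is None:
            memo = {}
        if n in memo:
            return memo[n]                   # reuse a stored result
        if n <= 1:
            result = n                       # smallest subproblems
        else:
            # combine the results of the subproblems
            result = fib(n - 1, memo) + fib(n - 2, memo)
        memo[n] = result                     # memoization: store the result
        return result

    print(fib(40))  # 102334155, computed with only O(n) recursive calls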

o Branch and Bound Algorithm: The branch and bound algorithm is typically
applied to integer programming and similar discrete optimization problems.
This approach divides the set of feasible solutions into smaller subsets
(branching); these subsets are then evaluated, and those that provably cannot
contain the best solution are discarded (bounding). A sketch follows.
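
As a rough illustration (a simplified sketch for the 0/1 knapsack problem;
the bound used here, the sum of all remaining values, is deliberately crude),
branching tries including or excluding each item, and bounding prunes subsets
that cannot beat the best solution found so far:

    # Branch and bound for 0/1 knapsack: branch on take/skip for each
    # item; prune a subtree when its optimistic bound cannot improve
    # on the best value found so far.
    def knapsack(values, weights, capacity):
        best = 0

        def explore(i, value, room):
            nonlocal best
            if room < 0:
                return                       # infeasible branch
            best = max(best, value)
            if i == len(values):
                return
            if value + sum(values[i:]) <= best:
                return                       # bound: cannot beat best
            explore(i + 1, value + values[i], room - weights[i])  # take
            explore(i + 1, value, room)                           # skip

        explore(0, 0, capacity)
        return best

    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220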



o Randomized Algorithm: In a regular algorithm we have predefined input and
required output. Algorithms that have a defined set of inputs, a required
output, and follow precisely described steps are known as deterministic
algorithms. What happens when a random element is introduced? In a randomized
algorithm, the algorithm introduces some random bits and uses them together
with the input to produce the output, so its behavior is random in nature.
Randomized algorithms are often simpler and more efficient than their
deterministic counterparts. A sketch follows.
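
A well-known example is quickselect with a random pivot, sketched below
(added for illustration): the random bits are the pivot choices, and they
keep the expected running time linear regardless of the input order.

    import random

    # Randomized quickselect: find the k-th smallest element.
    # Assumes 0 <= k < len(items).
    def quickselect(items, k):
        pivot = random.choice(items)         # the random bits
        smaller = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        larger = [x for x in items if x > pivot]
        if k < len(smaller):
            return quickselect(smaller, k)
        if k < len(smaller) + len(equal):
            return pivot
        return quickselect(larger, k - len(smaller) - len(equal))

    print(quickselect([7, 2, 9, 4, 1], 2))  # 4, the third smallest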

o Backtracking: Backtracking is an algorithmic technique that builds a
solution recursively and removes (abandons) a partial solution as soon as it
fails to satisfy the constraints of the problem. A sketch follows.
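
A small subset-sum solver illustrates the idea (a Python sketch added for
this discussion; it assumes positive numbers): each choice extends a partial
solution, and a choice that violates the constraint is undone.

    # Backtracking: extend the partial solution one number at a time,
    # and backtrack whenever the constraint can no longer be met.
    def subset_sum(numbers, target, chosen=None, start=0):
        if chosen is None:
            chosen = []
        if target == 0:
            return list(chosen)              # constraint satisfied
        for i in range(start, len(numbers)):
            if numbers[i] > target:          # assumes positive numbers
                continue                     # this choice is invalid
            chosen.append(numbers[i])        # tentative choice
            found = subset_sum(numbers, target - numbers[i], chosen, i + 1)
            if found:
                return found
            chosen.pop()                     # backtrack: undo the choice
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # [3, 4, 2]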

The major categories of algorithms are given below:

o Sort: Algorithms developed for sorting items into a certain order.

o Search: Algorithms developed for searching for an item inside a data
structure.

o Delete: Algorithms developed for deleting an existing element from a data
structure.

o Insert: Algorithms developed for inserting an item into a data structure.

o Update: Algorithms developed for updating an existing element inside a data
structure.

Efficiency of an algorithm

Computer resources are limited and should be utilized efficiently. The
efficiency of an algorithm is defined in terms of the computational resources
the algorithm uses, so an algorithm must be analyzed to determine its
resource usage. Efficiency can be measured with respect to different
resources.

For maximum efficiency we wish to minimize resource usage. However, resources
such as time and space cannot be compared directly, so time complexity and
space complexity are considered separately when judging algorithmic
efficiency.

Method for determining Efficiency

The efficiency of an algorithm depends on how efficiently it uses time and memory
space.

The time efficiency of an algorithm can be measured empirically: write a
program for the given algorithm, execute it using some programming language,
and measure the total time it takes to run. The execution time measured in
this way depends on a number of factors, such as:



· Speed of the machine
· Compiler and other system Software tools
· Operating System

· Programming language used

· Volume of data required


However, to determine how efficiently an algorithm solves a given problem,
you would like to know how the execution time is affected by the nature of
the algorithm itself. Therefore, we need fundamental principles that describe
the efficiency of a program in terms of the nature of the underlying
algorithm.

Space-Time tradeoff

A space-time or time-memory tradeoff is a way of solving a problem in less
time by using more storage space, or of solving it in very little space by
spending more time.

To solve a given programming problem, many different algorithms may be used.
Some of these algorithms may be extremely time-efficient and others extremely
space-efficient.

The time/space tradeoff refers to a situation where you can reduce the use of
memory at the cost of slower program execution, or reduce the running time at
the cost of increased memory usage. A small sketch follows.
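
A tiny sketch of the tradeoff (invented for illustration): precomputing a
table of squares spends O(n) memory to make each query O(1), while computing
on demand uses no extra memory but repeats the work on every query.

    # Spend space once, then answer each query in constant time.
    def make_square_table(n):
        return [i * i for i in range(n + 1)]

    table = make_square_table(1000)
    print(table[37])   # 1369: fast table lookup, more memory
    print(37 * 37)     # 1369: recomputed each time, no extra memory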

Algorithm Analysis

The algorithm can be analyzed at two levels: before creating the algorithm
and after creating it. These are the two analyses of an algorithm:

o A priori analysis: This is the theoretical analysis of an algorithm, done
before implementing it. Efficiency is estimated independently of factors such
as processor speed, which affect only a particular implementation.

o A posteriori analysis: This is the practical analysis of an algorithm,
achieved by implementing the algorithm in some programming language. This
analysis evaluates how much running time and space the algorithm actually
takes.

Algorithm Complexity

The performance of the algorithm can be measured in two factors:

o Time complexity: The time complexity of an algorithm is the amount of time
required to complete its execution. It is commonly denoted using big O
notation, the asymptotic notation used to represent time complexity. Time
complexity is mainly estimated by counting the number of steps required to
finish the execution. Let's understand time complexity through an example.

    # Calculate the sum of the first n numbers.
    def sum_of_n(n):
        total = 0
        for i in range(1, n + 1):
            total = total + i
        # When the loop ends, total holds the sum of the n numbers.
        return total

In the above code, the loop executes n times, so the time taken by the loop
grows linearly: as the value of n increases, the time complexity increases as
well. The complexity of the return statement, by contrast, is constant: it
does not depend on the value of n and completes in a single step. We
generally consider the worst-case time complexity, as it is the maximum time
taken for any given input size.

o Space complexity: An algorithm's space complexity is the amount of space
required to solve a problem and produce an output. Like time complexity,
space complexity is also expressed in big O notation.

For an algorithm, the space is required for the following purposes:

1. To store program instructions

2. To store constant values

3. To store variable values

4. To track the function calls, jumping statements, etc.

Auxiliary space: The extra space required by the algorithm, excluding the
input, is known as auxiliary space. The space complexity accounts for both:
the auxiliary space and the space used by the input.

So,

Space complexity = Auxiliary space + Input size.

Asymptotic Analysis

In asymptotic analysis, we evaluate the performance of an algorithm in terms
of the input size (we don't measure the actual running time): we calculate
how the time (or space) taken by an algorithm grows with the input size.

Usually, the time required by an algorithm falls under three cases:

Worst case: the input for which the algorithm takes the longest time.

Average case: the time taken for a typical (average) program execution.

Best case: the input for which the algorithm takes the least time.

Asymptotic notation

Asymptotic notation is a way to describe the running time or space complexity
of an algorithm based on the input size. It is commonly used in complexity
analysis to describe how an algorithm performs as the size of the input
grows. The three most commonly used notations are Big O, Omega, and Theta.

Big oh Notation (O)

o Big O notation is an asymptotic notation that measures the performance of
an algorithm by simply providing the order of growth of the function.

o This notation provides an upper bound on a function, ensuring that the
function never grows faster than that bound.

It is the formal way to express the upper bound of an algorithm's running
time. It measures the worst-case time complexity, i.e., the longest amount of
time the algorithm can take to complete its operation.

If f(n) and g(n) are two functions defined on the positive integers, then
f(n) = O(g(n)) (read: f(n) is big oh of g(n), or f(n) is of the order of
g(n)) if there exist constants c > 0 and n0 such that:

f(n) ≤ c.g(n) for all n ≥ n0

This implies that f(n) does not grow faster than g(n); in other words, g(n)
is an upper bound on the function f(n). In this case we are bounding the
growth rate of the function, which eventually gives the worst-case time
complexity, i.e., how badly the algorithm can perform.

Let's understand through examples

Example 1: f(n) = 2n+3, g(n) = n

We have to determine: is f(n) = O(g(n))?

To check f(n)=O(g(n)), it must satisfy the given condition:

f(n)<=c.g(n)

First, we will replace f(n) by 2n+3 and g(n) by n.

2n+3 <= c.n

Let's assume c=5, n=1 then

2*1+3<=5*1

5<=5

For n=1, the above condition is true.

If n=2

2*2+3<=5*2

7<=10

For n=2, the above condition is true.

In fact, for c = 5 the condition 2n+3 <= c.n is satisfied for every value of
n starting from 1. Therefore, for the constants c = 5 and n0 = 1, the
condition 2n+3 <= c.n always holds. Since the condition is satisfied, f(n) is
big oh of g(n), i.e., f(n) grows linearly. We conclude that c.g(n) is an
upper bound of f(n).
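
As a quick sanity check (not part of the original derivation), a one-line
Python test of the inequality:

    # Check f(n) = 2n + 3 <= 5n for n = 1 .. 999 (c = 5, n0 = 1).
    print(all(2 * n + 3 <= 5 * n for n in range(1, 1000)))  # True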



The idea of big O notation is to give an upper bound on a particular
function, which in turn yields a worst-case time complexity. It assures us
that the function will not suddenly behave in a quadratic or cubic fashion:
in this example, it behaves linearly even in the worst case.

Omega Notation (Ω)

o It basically describes the best-case scenario, which is the opposite of big
O notation.

o It is the formal way to represent the lower bound of an algorithm's running
time. It measures the best-case time complexity, i.e., the least amount of
time an algorithm can possibly take to complete.

o It determines the fastest time in which an algorithm can run.

If we want to state that an algorithm takes at least a certain amount of
time, without giving an upper bound, we use big-Ω notation, i.e., the Greek
letter "omega". It is used to bound the growth of the running time from below
for large input sizes.

If f(n) and g(n) are the two functions defined for positive integers,

then f(n) = Ω(g(n)) (read: f(n) is omega of g(n)) if there exist constants
c and n0 such that:

f(n) ≥ c.g(n) for all n ≥ n0 and c > 0

Let's consider a simple example.



If f(n) = 2n+3, g(n) = n,

Is f(n)= Ω (g(n))?

It must satisfy the condition:

f(n)>=c.g(n)

To check the above condition, we first replace f(n) by 2n+3 and g(n) by n.

2n+3>=c*n

Suppose c=1

2n+3>=n (this inequality holds for any value of n starting from 1).

Therefore, it is proved that f(n) is big omega of g(n), i.e., 2n+3 = Ω(n).
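
Again, a quick numeric sanity check (added for illustration):

    # Check f(n) = 2n + 3 >= 1 * n for n = 1 .. 999 (c = 1, n0 = 1).
    print(all(2 * n + 3 >= 1 * n for n in range(1, 1000)))  # True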

Here, c.g(n) is a lower bound of the function f(n) when the value of c is 1;
this notation therefore captures the fastest possible running time. However,
we are usually not as interested in the fastest running time: we care more
about the worst-case scenario, because for large inputs we want to know the
worst time the algorithm can take, so that we can make decisions in the
further process.

Theta Notation (θ)

o The theta notation mainly describes the average case scenarios.



o It represents the realistic time complexity of an algorithm. An algorithm
does not always perform at its worst or best; in real-world problems it
mostly fluctuates between the worst case and the best case, and this gives us
the average behavior of the algorithm.

o Big theta is mainly used when the worst-case and best-case values are the
same.

o It is the formal way to express both the upper bound and the lower bound of
an algorithm's running time.

Let's understand the big theta notation mathematically:

Let f(n) and g(n) be functions of n, where n is the number of steps required
to execute the program. Then:

f(n) = θ(g(n))

The above condition is satisfied only when there exist positive constants c1,
c2, and n0 such that:

c1.g(n) <= f(n) <= c2.g(n) for all n >= n0

Here the function is bounded by two limits, an upper and a lower one, and
f(n) lies in between: the condition f(n) = θ(g(n)) holds if and only if
c1.g(n) is less than or equal to f(n) and c2.g(n) is greater than or equal to
f(n).



Let's consider the same example, where
f(n) = 2n+3
g(n) = n

Since c1.g(n) should be less than or equal to f(n), c1 can be taken as 1,
whereas c2.g(n) should be greater than or equal to f(n), so c2 can be taken
as 5. Here c1.g(n) is the lower limit of f(n), while c2.g(n) is the upper
limit of f(n).

c1.g(n) <= f(n) <= c2.g(n)

Replace g(n) by n and f(n) by 2n+3:

c1.n <= 2n+3 <= c2.n

If c1=1, c2=5, n=1:

1*1 <= 2*1+3 <= 5*1

1 <= 5 <= 5 // for n=1, it satisfies the condition c1.g(n) <= f(n) <= c2.g(n)

If n=2:

1*2 <= 2*2+3 <= 5*2

2 <= 7 <= 10 // for n=2, it satisfies the condition c1.g(n) <= f(n) <= c2.g(n)

Therefore, for any value of n >= 1 the condition c1.g(n) <= f(n) <= c2.g(n)
is satisfied. Hence, it is proved that f(n) is big theta of g(n). This gives
the realistic (average-case) time complexity.
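
A quick numeric sanity check of the two-sided bound (added for illustration):

    # Check 1 * n <= 2n + 3 <= 5 * n for n = 1 .. 999 (c1 = 1, c2 = 5, n0 = 1).
    print(all(n <= 2 * n + 3 <= 5 * n for n in range(1, 1000)))  # True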

Why do we have three different asymptotic analyses?

As we know, big omega is for the best case, big O is for the worst case, and
big theta is for the average case. Now we will work out the average, worst,
and best cases of the linear search algorithm.

Suppose we have an array of n numbers and we want to find a particular
element in the array using linear search (a sketch is given below). In linear
search, every element is compared with the searched element on each
iteration. If the match is found on the very first iteration, the best case
is Ω(1); if the element matches the last, i.e., the nth element of the array,
the worst case is O(n). The average case lies midway between the best and the
worst case, so it is θ(n/2); since constant factors can be ignored in time
complexity, the average case becomes θ(n).
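
Here is the linear search under discussion, as a minimal Python sketch
(added for illustration):

    # Linear search: compare each element with the target in turn.
    # Best case: a match at index 0. Worst case: the last index, or no match.
    def linear_search(items, target):
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1                            # not found after n comparisons

    print(linear_search([4, 8, 15, 16, 23, 42], 15))  # 2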

So, the three different analyses provide proper bounds on the actual
function. Here, bounding means that we have an upper as well as a lower
limit, which assures that the algorithm will behave between these limits
only, i.e., it will not go beyond them.
