Approaches to Algorithm Design
The following are the approaches used after considering both the theoretical
and practical importance of designing an algorithm:
1. Optimizing: Finding all the solutions of a problem and then selecting the best one. If the value of the best solution is already known, the search terminates as soon as a solution with that value is found.
1. It breaks the problem down into subproblems.
2. After breaking down the problem, it finds the optimal solution out of these subproblems.
o Branch and Bound Algorithm: The branch and bound algorithm is applied only to integer programming problems. This approach divides the set of feasible solutions into smaller subsets, which are then evaluated further to find the best solution.
o Search: An algorithm developed for searching for an item inside a data structure.
o Delete: An algorithm developed for deleting an existing element from a data structure.
o Update: An algorithm developed for updating an existing element inside a data structure.
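As a minimal sketch of what these operations can look like in practice, the following C snippet assumes the data structure is a plain fixed-size array of integers; the helper names (search_index, update_at, delete_at) are illustrative only and not part of any standard library.

    #include <stdio.h>

    /* Search: return the index of key in arr[0..n-1], or -1 if it is absent. */
    int search_index(const int arr[], int n, int key) {
        for (int i = 0; i < n; i++)
            if (arr[i] == key)
                return i;
        return -1;
    }

    /* Update: overwrite the element at position pos with a new value. */
    void update_at(int arr[], int pos, int value) {
        arr[pos] = value;
    }

    /* Delete: remove the element at position pos by shifting the tail left;
       returns the new element count. */
    int delete_at(int arr[], int n, int pos) {
        for (int i = pos; i < n - 1; i++)
            arr[i] = arr[i + 1];
        return n - 1;
    }

    int main(void) {
        int a[] = {4, 8, 15, 16, 23, 42};
        int n = 6;
        printf("index of 15: %d\n", search_index(a, n, 15)); /* prints 2 */
        update_at(a, 2, 99);    /* a is now 4 8 99 16 23 42 */
        n = delete_at(a, n, 0); /* a is now 8 99 16 23 42 */
        printf("first element after delete: %d\n", a[0]);    /* prints 8 */
        return 0;
    }

The operations are shown on an array for simplicity; other data structures such as linked lists or trees support the same operations, but with different implementations and costs.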
Efficiency of an algorithm
Computer resources are limited and should be utilized efficiently. The efficiency of an algorithm is defined in terms of the computational resources it uses, so an algorithm must be analyzed to determine its resource usage. The efficiency of an algorithm can be measured with respect to different resources.
For maximum efficiency we wish to minimize resource usage. However, resources such as time and space cannot be compared directly, so which of two algorithms is considered more efficient depends on which measure of efficiency is treated as most important. In practice, the efficiency of an algorithm depends on how efficiently it uses both time and memory space.
The time efficiency of an algorithm can be measured in different ways. For example, write a program for a defined algorithm in any programming language, execute it, and measure the total time it takes to run. The execution time measured in this way depends on a number of factors, such as the speed of the machine, the programming language and compiler used, and the other processes running on the system at the same time.
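As a rough illustration of such a measurement, the sketch below (a hedged example, not taken from the text above) uses the standard C clock() function to time a simple loop; the number printed will differ across machines, compilers and system load, which is exactly why measured execution time alone is not a reliable measure of an algorithm.

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        long long sum = 0;
        clock_t start = clock();              /* timestamp before the work */
        for (long i = 0; i < 100000000L; i++)
            sum += i;                         /* the work being measured */
        clock_t end = clock();                /* timestamp after the work */
        double seconds = (double)(end - start) / CLOCKS_PER_SEC;
        printf("sum = %lld, elapsed = %.3f seconds\n", sum, seconds);
        return 0;
    }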
Space-Time tradeoff
To solve a given programming problem, many different algorithms may be used. Some of
these algorithms may be extremely time-efficient and others extremely space-efficient.
Time/space trade off refers to a situation where you can reduce the use of memory at the
cost of slower program execution, or reduce the running time at the cost of increased
memory usage.
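A classic illustration of this trade-off is computing Fibonacci numbers. The sketch below is only an example chosen for this section: the plain recursive version uses very little memory but takes exponential time, while the table-based version spends O(n) extra memory to run in linear time.

    #include <stdio.h>

    /* Time-heavy, space-light: plain recursion, exponential running time. */
    long long fib_slow(int n) {
        if (n < 2) return n;
        return fib_slow(n - 1) + fib_slow(n - 2);
    }

    /* Space-heavy, time-light: a table of earlier results trades O(n)
       extra memory for linear running time. */
    long long fib_fast(int n) {
        long long table[91] = {0};   /* enough for n up to 90 in 64 bits */
        table[1] = 1;
        for (int i = 2; i <= n; i++)
            table[i] = table[i - 1] + table[i - 2];
        return (n < 2) ? n : table[n];
    }

    int main(void) {
        printf("fib_slow(35) = %lld\n", fib_slow(35)); /* noticeably slow */
        printf("fib_fast(35) = %lld\n", fib_fast(35)); /* effectively instant */
        return 0;
    }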
Algorithm Analysis
An algorithm can be analyzed at two levels, i.e., before it is implemented and after it is implemented. The following are the two analyses of an algorithm:
1. Priori Analysis: A theoretical analysis performed before implementation, in which factors such as processor speed are assumed constant and to have no effect on the result.
2. Posterior Analysis: A practical analysis performed after implementation, in which the algorithm is coded in a programming language, executed, and its actual running time and space requirements are measured.
Algorithm Complexity
The complexity of an algorithm is measured in terms of the time and memory it requires. For example, suppose we have to calculate the sum of n numbers:

    int sum_of_n(int n) {
        int sum = 0;
        // The loop body executes n times.
        for (int i = 1; i <= n; i++)
            sum = sum + i;
        // When the loop ends, sum holds the sum of the first n numbers.
        return sum;
    }
In the above code, the time taken by the loop is proportional to n: the loop body runs n times, and as the value of n increases, the time taken increases as well. The complexity of the final statement, return sum, is constant, since it does not depend on n and completes in a single step. We generally consider the worst-case time complexity, as it is the maximum time taken for any given input size.
Auxiliary space: The extra space required by the algorithm, beyond the space used by the input, is known as auxiliary space. The space complexity considers both of these, i.e., the auxiliary space and the space used by the input.
So,
Space complexity = Auxiliary space + Input space
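To make the distinction concrete, the hedged sketch below contrasts two ways of reversing an array: an in-place version that needs only O(1) auxiliary space and a copying version that allocates O(n) auxiliary space, even though both take the same input.

    #include <stdio.h>
    #include <stdlib.h>

    /* O(1) auxiliary space: only a few extra variables are used. */
    void reverse_in_place(int arr[], int n) {
        for (int i = 0, j = n - 1; i < j; i++, j--) {
            int tmp = arr[i];
            arr[i] = arr[j];
            arr[j] = tmp;
        }
    }

    /* O(n) auxiliary space: a whole extra array is allocated for the result. */
    int *reversed_copy(const int arr[], int n) {
        int *out = malloc(n * sizeof(int));
        for (int i = 0; i < n; i++)
            out[i] = arr[n - 1 - i];
        return out; /* the caller must free() this buffer */
    }

    int main(void) {
        int a[] = {1, 2, 3, 4, 5};
        reverse_in_place(a, 5);
        int *b = reversed_copy(a, 5);
        printf("%d %d\n", a[0], b[0]); /* prints 5 1 */
        free(b);
        return 0;
    }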
Asymptotic Analysis
Asymptotic analysis describes the running time of an algorithm in terms of the input size rather than the actual time measured on a particular machine. The behaviour of an algorithm is usually described for three cases:
Best case: It defines the input for which the algorithm takes the least time.
Worst case: It defines the input for which the algorithm takes the longest time.
Average case: It takes all possible inputs and calculates the average computation time over them.
Asymptotic notation
Asymptotic notations are the mathematical notations used to describe this growth of running time. The three most commonly used notations are described below.

Big oh Notation (O)
o This notation provides an upper bound on a function, which ensures that the function never grows faster than this upper bound. In other words, it gives the least upper bound on the function.
o It is the formal way to express the upper boundary of an algorithm's running time. It measures the worst-case time complexity, i.e., the longest amount of time the algorithm can take to complete its operation. It is defined as shown below.
If f(n) and g(n) are two functions defined for positive integers, then f(n) = O(g(n)) (read as "f(n) is big oh of g(n)" or "f(n) is of the order of g(n)") if there exist constants c and n0 such that:
f(n) <= c.g(n) for all n >= n0
This implies that f(n) does not grow faster than g(n); in other words, g(n) is an upper bound on the function f(n). What we are comparing here is the growth rate of the two functions.
For example, let f(n) = 2n+3 and g(n) = n. Is f(n) = O(g(n))? To check the condition, we replace f(n) by 2n+3 and g(n) by n, and try c = 5:
2n+3 <= 5n
If n=1
2*1+3 <= 5*1
5 <= 5
If n=2
2*2+3 <= 5*2
7 <= 10
For every value of n starting from 1, the condition 2n+3 <= c.n is satisfied with c = 5. Therefore, for the constants c = 5 and n0 = 1, the condition always holds, so f(n) is big oh of g(n); in other words, f(n) grows at most linearly. It follows that c.g(n) is an upper bound of f(n): plotted graphically, c.g(n) lies above f(n) for every n >= n0.
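The inequality can also be checked mechanically. The short C sketch below simply evaluates f(n) = 2n+3 and c.g(n) = 5n for the first few values of n to confirm that the upper bound holds; it is a verification aid, not a proof.

    #include <stdio.h>

    int main(void) {
        /* f(n) = 2n + 3, g(n) = n, with c = 5 and n0 = 1 */
        for (int n = 1; n <= 10; n++) {
            int f  = 2 * n + 3;
            int cg = 5 * n;
            printf("n=%2d  f(n)=%2d  c.g(n)=%2d  %s\n",
                   n, f, cg, f <= cg ? "bound holds" : "bound violated");
        }
        return 0;
    }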
Omega Notation (Ω)
o It is the formal way to represent the lower bound of an algorithm's running time. It measures the best-case time complexity, i.e., the least amount of time an algorithm can possibly take to complete.
If we want to state that an algorithm takes at least a certain amount of time, without giving an upper bound, we use big-Ω notation, i.e., the Greek letter "omega". It is used to bound the growth of the running time from below for large input sizes.
If f(n) and g(n) are two functions defined for positive integers, then f(n) = Ω(g(n)) (read as "f(n) is omega of g(n)") if there exist constants c and n0 such that:
f(n) >= c.g(n) for all n >= n0
For example, let f(n) = 2n+3 and g(n) = n again. Is f(n) = Ω(g(n))? To check the condition, we replace f(n) by 2n+3 and g(n) by n:
2n+3 >= c*n
Suppose c = 1:
2n+3 >= n (this inequality holds for every value of n starting from 1).
So g(n) is a lower bound of f(n) when c = 1; omega notation therefore describes the fastest possible running time. In practice, however, we are less interested in the fastest running time and more interested in the worst case, because we want to know the longest time the algorithm can take on large inputs before deciding how to proceed.
Theta Notation (θ)
o Big theta is mainly used when the worst-case and best-case values are the same.
o It is the formal way to express both the upper bound and the lower bound of an algorithm's running time.
Let f(n) and g(n) be functions of n, where n is the number of steps required to execute the program. Then f(n) = θ(g(n)) if there exist constants c1, c2 and n0 such that:
c1.g(n) <= f(n) <= c2.g(n) for all n >= n0
Here the function is bounded by two limits, an upper limit and a lower limit, and f(n) lies between them. The condition f(n) = θ(g(n)) holds if and only if c1.g(n) is less than or equal to f(n) and c2.g(n) is greater than or equal to f(n). Plotted graphically, f(n) lies between c1.g(n) and c2.g(n) for every n >= n0.
For example, let f(n) = 2n+3 and g(n) = n. Since c1.g(n) must stay less than or equal to f(n), c1 can be taken as 1, whereas c2.g(n) must stay greater than or equal to f(n), so c2 can be taken as 5. Here c1.g(n) is the lower limit of f(n) while c2.g(n) is the upper limit of f(n).
c1.g(n) <= f(n) <= c2.g(n)
c1.n <= 2n+3 <= c2.n
If n=1
1*1 <= 2*1+3 <= 5*1
1 <= 5 <= 5
If n=2
1*2 <= 2*2+3 <= 5*2
2 <= 7 <= 10
Therefore, for any value of n starting from 1, the condition c1.g(n) <= f(n) <= c2.g(n) is satisfied, which proves that f(n) is big theta of g(n). Theta is often associated with the average case, as it gives the most realistic picture of the time complexity.
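The same kind of mechanical check works for the two-sided theta bound. The sketch below evaluates c1.g(n), f(n) and c2.g(n) with c1 = 1 and c2 = 5 and confirms that f(n) stays between the two limits for the first few values of n.

    #include <stdio.h>

    int main(void) {
        /* f(n) = 2n + 3, g(n) = n, with c1 = 1, c2 = 5 and n0 = 1 */
        for (int n = 1; n <= 10; n++) {
            int f  = 2 * n + 3;
            int lo = 1 * n;   /* c1.g(n) */
            int hi = 5 * n;   /* c2.g(n) */
            printf("n=%2d  %2d <= %2d <= %2d  %s\n",
                   n, lo, f, hi, (lo <= f && f <= hi) ? "holds" : "violated");
        }
        return 0;
    }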
As described above, big omega is used for the best case, big oh for the worst case, and big theta for the average case. Now we will find the best, worst and average cases of the linear search algorithm.
Suppose we have an array of n numbers, and we want to find a particular element in the array using linear search. In linear search, every element is compared with the searched element, one per iteration. If the match is found in the first iteration, the best case is Ω(1); if the element matches only the last, i.e., the nth element of the array, the worst case is O(n). The average case lies midway between the best and the worst case, so it becomes θ(n/2); since constant factors can be ignored in time complexity, the average case is θ(n). A minimal implementation is sketched below.
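The following C sketch of linear search adds a comparison counter (purely for illustration) so the best and worst cases described above become visible in the output.

    #include <stdio.h>

    /* Returns the index of key in arr[0..n-1], or -1 if it is absent.
       *comparisons records how many elements were examined. */
    int linear_search(const int arr[], int n, int key, int *comparisons) {
        *comparisons = 0;
        for (int i = 0; i < n; i++) {
            (*comparisons)++;
            if (arr[i] == key)
                return i;
        }
        return -1;
    }

    int main(void) {
        int a[] = {7, 3, 9, 1, 5};
        int n = 5, cmp;
        linear_search(a, n, 7, &cmp); /* best case: match at index 0 */
        printf("searching 7 took %d comparison(s)\n", cmp); /* 1 -> Ω(1) */
        linear_search(a, n, 5, &cmp); /* worst case: match at the last index */
        printf("searching 5 took %d comparison(s)\n", cmp); /* n -> O(n) */
        return 0;
    }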
So, these three analyses provide proper bounds on the actual running time. Here, bounding means that we have an upper as well as a lower limit, which assures that the algorithm's behaviour stays within these limits, i.e., it will not go beyond them.