Computational Complexity


P, NP, NP-Hard and NP-Complete:-

Deterministic Time Complexity: Deterministic algorithms are those which, given the same input, always produce the same output and follow the same sequence of steps. This means that for any given input, a deterministic algorithm always takes the same amount of time to produce its result. For example, an algorithm with a time complexity of O(n^2) has a running time that grows quadratically as the input size n increases.
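
For instance, here is a minimal C sketch of an O(n^2) computation (the pair-counting task is an illustrative choice, not from the notes): two nested loops over an n-element array perform on the order of n^2 comparisons, and the same input always triggers exactly the same steps.

#include <stddef.h>

/* Deterministic O(n^2) example: count the pairs (i, j), i < j, with a[i] == a[j].
   For a given input the number of comparisons is always the same. */
size_t count_equal_pairs(const int a[], size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] == a[j])
                count++;
    return count;
}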

Exponential Time Complexity: Algorithms with exponential time complexity take increasingly longer to run as the input size grows, because the time taken increases exponentially with the size of the input. For instance, algorithms with time complexities like O(2^n) take a very long time even for moderate input sizes. Exponential-time algorithms often involve exhaustive search or enumeration of all possible solutions.
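
As a concrete instance (a minimal C sketch; the subset-sum decision problem used here is an illustrative choice, not taken from the notes), trying every one of the 2^n subsets of an n-element set gives exponential, roughly O(n * 2^n), behaviour:

#include <stdbool.h>
#include <stddef.h>

/* Exhaustive search: enumerate all 2^n subsets of a[] (assumes n < 64) and
   check whether any of them sums to target. */
bool subset_sum_exists(const int a[], size_t n, long target)
{
    for (unsigned long mask = 0; mask < (1UL << n); mask++) {  /* one mask per subset */
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            if (mask & (1UL << i))
                sum += a[i];
        if (sum == target)
            return true;
    }
    return false;
}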

Non-Deterministic Time Complexity: The idea behind non-deterministic time complexity is that an algorithm may explore many computation paths simultaneously; if a correct path exists, it will be found. For NP problems, which are decision problems for which a given solution can be verified in polynomial time, a non-deterministic algorithm can in effect guess a solution and then verify it efficiently. It is important to note that non-determinism is a theoretical model, not a practical machine: it simply provides a way to describe problems whose candidate solutions can be explored and checked efficiently.
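
To make "verified in polynomial time" concrete, here is a small C sketch (the certificate format, an array of 0/1 flags, is an assumption for illustration): checking a claimed subset-sum solution takes a single O(n) pass, even though finding one by the exhaustive search above may take exponential time.

#include <stdbool.h>
#include <stddef.h>

/* Polynomial-time verifier: chosen[i] != 0 marks the elements claimed to be in
   the subset.  Verifying the claim is one O(n) pass over the array. */
bool verify_subset_sum(const int a[], const int chosen[], size_t n, long target)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        if (chosen[i])
            sum += a[i];
    return sum == target;
}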

P:- P is the set of problems that can be solved by deterministic algorithms in polynomial time.
For example:- Bubble Sort, Linear Search, Merge Sort, etc.
NP:- NP is the set of problems that can be solved by non-deterministic algorithms in polynomial time (equivalently, whose solutions can be verified deterministically in polynomial time).

Fig.:- Diagram of the intersection among the classes P, NP, NP-Complete & NP-Hard.

P is a subset of NP (as can be seen in the above fig.), since every problem that can be solved deterministically in polynomial time can also be solved, and its solution verified, non-deterministically in polynomial time.

NP-Hard:- An NP-hard problem is one that is at least as hard as every problem in NP: every problem in NP can be reduced to it in polynomial time. There is no known polynomial-time algorithm to solve any NP-hard problem and, unlike NP problems, NP-hard problems don't necessarily have solutions that can be verified in polynomial time.
NP-Complete:- NP-complete problems are the problems that are both in NP and NP-hard. Every problem in NP can be reduced to any NP-complete problem in polynomial time. This means that if you had a polynomial-time algorithm for one NP-complete problem, you could use it to solve every problem in NP in polynomial time.

Difference b/w NP-Complete & NP-Hard:-

 An NP-hard problem that can also be solved by a non-deterministic polynomial-time algorithm (i.e., that is also in NP) is an NP-complete problem.
 Every NP-complete problem is an NP problem, but an NP-hard problem need not be in NP.

Difference b/w P and NP:-


 P consists of problems that are efficiently solvable by deterministic algorithms.
 NP consists of problems for which a solution, if it exists, can be verified efficiently.
 The critical difference between P and NP is that in P, solutions can be found efficiently,
whereas in NP, solutions can be verified efficiently.

Approximation algorithm:-
Approximation algorithms are used to solve NP-complete optimization problems.
Such a problem has a number of feasible solutions, out of which we have to select the best one as the optimal solution; an approximation algorithm finds, in polynomial time, a solution that is guaranteed to be close to that optimum.

 Approximation algorithm example (Vertex Cover Approximation):

Recall that a vertex cover of an undirected graph G = (V, E) is a subset V′ ⊆ V such that if (u, v) ∈ E then u ∈ V′ or v ∈ V′ or both (there is a vertex in V′ "covering" every edge in E).

The optimisation version of the Vertex Cover Problem is to find a vertex cover of minimum size
in G.

We previously showed by reduction of CLIQUE to VERTEX-COVER that the corresponding decision problem is NP-Complete, so the optimisation problem is NP-Hard.

Suppose we have this input graph:

Suppose then that edge {b, c} is chosen. The two incident vertices are added to the cover and all other edges incident on them are removed from consideration:
Iterating now for edges {e, f} and then {d, g}:

The resulting vertex cover is shown on the left and the optimal vertex cover on the right (the lighter-coloured vertices are in the cover).
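
The steps just illustrated are the standard greedy 2-approximation for vertex cover: repeatedly pick an uncovered edge, put both of its endpoints into the cover, and discard every edge they cover. Below is a C sketch of that procedure (the edge-list representation, 0-based vertex numbering, and the fixed scan order over the edges are assumptions made for this illustration):

#include <stdbool.h>
#include <stddef.h>

struct edge { int u, v; };

/* Approximate vertex cover: for each edge not yet covered, add both endpoints
   to the cover.  in_cover[] must have one entry per vertex, zero-initialised;
   the returned cover has at most twice the size of an optimal cover. */
size_t approx_vertex_cover(const struct edge edges[], size_t m, bool in_cover[])
{
    size_t cover_size = 0;
    for (size_t i = 0; i < m; i++) {
        int u = edges[i].u, v = edges[i].v;
        if (!in_cover[u] && !in_cover[v]) {   /* edge (u, v) is still uncovered */
            in_cover[u] = true;
            in_cover[v] = true;
            cover_size += 2;
        }
    }
    return cover_size;
}

Picking edge {b, c} first, then {e, f} and {d, g}, as in the walkthrough above, corresponds to one particular ordering of edges[]; any order gives a cover at most twice the optimal size.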

Randomized algorithm:- A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behaviour, in the hope of achieving good performance in the "average case" over all possible choices determined by the random bits; thus, either the running time or the output (or both) are random variables.

 Monte Carlo and Las Vegas:

Randomized algorithms are broadly classified as Monte Carlo or Las Vegas algorithms. The probability guarantees they offer are taken only over the random choices made by the algorithm, independent of the input. Thus, independent repetitions of a Monte Carlo algorithm drive down its failure probability exponentially: if a single run fails with probability p, then k independent runs all fail with probability p^k.

 Las Vegas:
1. A Las Vegas algorithm is a randomized algorithm that always produces a correct result.
2. Its running time is a random variable whose expectation is bounded, say, by a polynomial. Las Vegas algorithms are said to be "probably fast but deterministically accurate".
Example:
// Las Vegas algorithm: keep sampling random indices until one with A[k] == 1 is found
repeat:
    k = RandInt(n)
    if A[k] == 1:
        return k
Randomized QuickSort: Randomized QuickSort always sorts its input array correctly, and its expected running time on any input is O(n log n).
Properties:
1. It always gives the correct output.
2. Its running time is a random variable; only its expectation is bounded in terms of the input size.
3. If it cannot produce an answer, it reports failure rather than returning a wrong result.
Quick Sort Example for Las Vegas

 On input S = (x1, x2, x3, …, xn):
 If n <= 1, return S.
 Pick a "pivot" xm uniformly at random.
 Compare xm to all other x's.
 Let S1 = { xi : xi < xm }, S2 = { xi : xi > xm }.
 Recursively sort S1 and S2.
 Return [S1, xm, S2].
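
A runnable C sketch of the scheme above (the Lomuto-style in-place partition and the use of rand() for the pivot choice are implementation choices made here, not part of the notes):

#include <stdlib.h>

/* Randomized QuickSort on a[lo..hi]: pick a pivot uniformly at random,
   partition around it, and recurse on the two sides. */
static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

void randomized_quicksort(int a[], int lo, int hi)
{
    if (lo >= hi)                          /* 0 or 1 element: already sorted */
        return;
    int m = lo + rand() % (hi - lo + 1);   /* random pivot index in [lo, hi] */
    swap(&a[m], &a[hi]);                   /* move pivot to the end */
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)          /* compare pivot to all other elements */
        if (a[j] < pivot)
            swap(&a[i++], &a[j]);
    swap(&a[i], &a[hi]);                   /* pivot goes to its final position */
    randomized_quicksort(a, lo, i - 1);    /* recursively sort S1 */
    randomized_quicksort(a, i + 1, hi);    /* recursively sort S2 */
}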
Algorithm Search_Repeat(A)
// Deterministic check: scan every pair of positions for a repeated element.
{
    for (i = 0; i < n-1; i++)
    {
        for (j = i+1; j < n; j++)
        {
            if (A[i] == A[j])
                return True;
        }
    }
    return False;
}
Algorithm LV_Search_Repeat(A, n)
// Las Vegas algorithm: find the repeated element of A[1:n].
{
    while (true) do
    {
        i = Random() mod n + 1;
        j = Random() mod n + 1;
        // i and j are random indices in the range [1, n].
        if ((i != j) and (A[i] == A[j]))
            return i;
    }
}
For example, in A = [1, 2, 7, 3, 7, 4, 7, 7], n/2 of the elements are copies of the repeated value and the other n/2 are distinct. The deterministic, nested-loop Search_Repeat above takes O(n^2) time in the worst case on such inputs.
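
For comparison, here is a runnable C version of LV_Search_Repeat tried on that example array (0-based indexing and rand() are choices made for this sketch):

#include <stdio.h>
#include <stdlib.h>

/* Las Vegas search for a repeated element: keep drawing two random indices
   until they are distinct positions holding the same value.  The answer is
   always correct; only the number of iterations is random. */
int lv_search_repeat(const int a[], int n)
{
    for (;;) {
        int i = rand() % n;
        int j = rand() % n;
        if (i != j && a[i] == a[j])
            return i;                  /* index of one copy of the repeated value */
    }
}

int main(void)
{
    int a[] = {1, 2, 7, 3, 7, 4, 7, 7};    /* n/2 copies of 7, n/2 distinct values */
    int i = lv_search_repeat(a, 8);
    printf("repeated value %d found at index %d\n", a[i], i);
    return 0;
}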

 Monte Carlo Algorithm:

A Monte Carlo algorithm produces a correct or optimum result only with some probability. Such algorithms have a deterministic running time, so it is generally easy to find their worst-case time complexity.

They are said to be "probably correct but deterministically fast".

 It has a bounded running time.
 The probability of failure may be minimized by repeating the algorithm independently.
 The more iterations are performed, the more accurate the result is.

Example of Monte Carlo Algorithm:

Algorithm Search_Repeat(A, a)
// Deterministic search: scan A for the element a.
{
    for (i = 0; i < n; i++)
        if (A[i] == a)
            return True;
    return False;
}
At least n/2 + 2 steps are needed in the worst case, since the first n/2 + 1 elements examined may all be distinct, so this deterministic search takes O(n) time.
Algorithm MC_Repeat(A, a, x)
// Monte Carlo search: probe at most x random positions of A[1:n] for the element a.
{
    i = 0; flag = False;
    while (i < x)
    {
        k = Random() mod n + 1;
        i = i + 1;
        if (A[k] == a)     // element a found at a random position
            flag = True;
    }
    return flag;
}
A fixed number of iterations, x, is performed, so the running time is bounded; however, the algorithm may fail to return the correct output.
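
To see the effect of increasing x, here is a small runnable C experiment (0-based indexing, rand(), and the sample array are choices made for this sketch, not part of the notes). On an array where the sought value fills half the positions, each probe misses with probability 1/2, so all x probes miss with probability (1/2)^x.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Monte Carlo search: probe at most x random positions of a[0..n-1] for key.
   Running time is fixed by x, but the answer may be a false negative. */
bool mc_repeat(const int a[], int n, int key, int x)
{
    for (int i = 0; i < x; i++)
        if (a[rand() % n] == key)
            return true;
    return false;
}

int main(void)
{
    int a[] = {1, 2, 7, 3, 7, 4, 7, 7};    /* key 7 fills half the positions */
    for (int x = 1; x <= 8; x *= 2) {
        int failures = 0;
        for (int trial = 0; trial < 100000; trial++)
            if (!mc_repeat(a, 8, 7, x))
                failures++;
        /* The empirical failure rate should track (1/2)^x. */
        printf("x = %d: empirical failure rate %.4f\n", x, failures / 100000.0);
    }
    return 0;
}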

 Which one is better, Monte Carlo or Las Vegas?


The answer depends on the problem itself. In some cases, a wrong answer is simply not acceptable. Because a Las Vegas algorithm is a Monte Carlo algorithm with error probability 0, we can say that, in most cases, a Las Vegas algorithm is better than a Monte Carlo algorithm. However, there is a cost/benefit trade-off between these two alternatives, and in some cases we do not want to pay for the extra time the Las Vegas algorithm needs to generate an answer.
