Exam


Question 1:

A good algorithm has several characteristics, including:

1. Correctness: The algorithm must be able to produce the correct output for any given
input.

2. Efficiency: The algorithm should be able to solve the problem in a reasonable amount of
time and using a reasonable amount of resources. This can be measured in terms of time
complexity and space complexity.

3. Clarity: The algorithm should be easy to understand and implement. It should use simple
and clear instructions that can be followed by anyone with basic programming knowledge.

4. Robustness: The algorithm should be able to handle all possible input values, including
edge cases and invalid inputs, without crashing or producing incorrect results.

5. Modularity: The algorithm should be designed in a modular way, with separate, reusable
components that can be combined to solve larger problems.

Best-case complexity gives a lower bound on the running time of the algorithm over the instances of the input. This means the algorithm can never have a lower running time than the best case for a particular class of problems.

Worst-case complexity gives an upper bound on the running time of the algorithm over all instances of the input. This ensures that no input can exceed the running-time limit posed by the worst-case complexity.

Average-case complexity gives the average number of steps required over the instances of the input, usually under an assumed distribution of inputs.

Example 1: Fibonacci Numbers (worst-case complexity)

Input: n

Output: nth Fibonacci number.

Algorithm: assume a as the first (previous) and b as the second (current) number.

fib(n)
{
    a = 0; b = 1; f = 1;
    for (i = 2; i <= n; i++)
    {
        f = a + b;
        a = b;
        b = f;
    }
    return f;
}

Question no 2:

Recurrence relations can model the running-time complexity of computer algorithms, particularly divide-and-conquer algorithms. A recurrence relation is an equation or inequality that is defined in terms of itself. Recursion is useful in computer science because it makes algorithms easy to express, and the resulting relations can often be solved easily. Many natural problems are most easily described through recursion.

Examples

a. a_n = a_{n-1} + 1, a_1 = 1 (what function?)

b. a_n = 2a_{n-1}, a_1 = 1 (what function?)

c. a_n = n * a_{n-1}, a_1 = 1 (what function?)

a. a_n = n (polynomial)

b. a_n = 2^{n-1} (exponential)

c. a_n = n! (factorial)
Question no 3:

Divide and conquer

In ancient times, Roman politicians used the idea of somehow dividing the enemy's unity in order to conquer them. The method they used was not conceived for algorithms, but the name carries over: while designing an algorithm, if we try to break a problem of larger size into smaller ones, we adhere to the design technique called "Divide and Conquer". Many computational problems can be solved using this technique. Analysis of an algorithm developed with the divide-and-conquer paradigm needs recurrences, since divide-and-conquer algorithms use recursion.

The main elements of a Divide and Conquer solution are:

Divide: divide the larger problem into problems of smaller size.

Conquer: solve each piece by applying divide and conquer recursively.

Combine: combine the solutions of the pieces into a solution of the larger problem.

Merge Sort:

Merge sort is a divide-and-conquer algorithm that sorts a given list or array of elements by
dividing it into two halves, sorting each half separately, and then merging the two sorted halves
back into a single sorted list. The steps involved in merge sort are:

Divide the list into two halves: The given list is divided into two equal halves recursively until
each half contains only one element. This is the base case of the recursion.

Sort the halves: The two halves are sorted recursively using merge sort.

Merge the sorted halves: The two sorted halves are merged back into a single sorted list. This is
done by comparing the first elements of each half and selecting the smaller one. The smaller
element is then removed from its half and added to a new list. This process is repeated until all
the elements have been added to the new list.
Question No 6:

BFS vs. DFS

BFS stands for Breadth First Search; DFS stands for Depth First Search.

BFS uses a queue data structure and finds shortest paths in unweighted graphs; DFS uses a stack data structure.

BFS is a traversal approach in which we first walk through all nodes on the same level before moving on to the next level; DFS begins at the root node and proceeds through the nodes as far as possible, until we reach a node with no unvisited neighbouring nodes, and then backtracks.

BFS builds the tree level by level; DFS builds the tree sub-tree by sub-tree.

BFS works on the concept of FIFO (First In, First Out); DFS works on the concept of LIFO (Last In, First Out).

BFS is more suitable for searching vertices closer to the given source; DFS is more suitable when there are solutions away from the source.

BFS considers all neighbours first, and is therefore not well suited to the decision trees used in games or puzzles; DFS is more suitable for such problems: we make a decision, then explore all paths through this decision, and if the decision leads to a winning situation, we stop.

In BFS, space complexity is the more critical concern; DFS has lower space complexity, because at any time it needs to store only a single path from the root to the current node.

When the target is deep in the graph, BFS tends to be slower than DFS; when the target is shallow, the reverse holds.

Floyd's algorithm

The algorithm being discussed uses the dynamic programming approach, and it works even if some of the edges have negative weights (provided there is no negative cycle). Consider a weighted graph G = (V, E) and denote the weight of the edge connecting vertices i and j by w_ij. Let W be the adjacency (weight) matrix of G. Let D^k denote an n x n matrix such that D^k(i,j) is the weight of the shortest path from vertex i to vertex j using only vertices from {1, 2, ..., k} as intermediate vertices. Computing such a shortest path has two cases: either the path does not use k as an intermediate vertex, or it does. This gives the relations

D^k(i,j) = D^(k-1)(i,j), when k is not an intermediate vertex, and

D^k(i,j) = D^(k-1)(i,k) + D^(k-1)(k,j), when k is an intermediate vertex.

Combining the two cases, we obtain:

D^k(i,j) = min{ D^(k-1)(i,j), D^(k-1)(i,k) + D^(k-1)(k,j) }.

Floyd's algorithm uses this relation to compute all-pairs shortest paths bottom-up, finding D^1, D^2, ..., D^n.

Algorithm:

FloydWarshallAPSP(W, D, n)
{
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            D[i][j] = W[i][j];
    for (k = 1; k <= n; k++)
        for (i = 1; i <= n; i++)
            for (j = 1; j <= n; j++)
                D[i][j] = min{D[i][j], D[i][k] + D[k][j]};  // the D[][]'s are the D^k's
}

Question no 7:

Convex hull problem:

The convex hull of a polygon P is the smallest convex polygon that contains P. Similarly, the convex hull of a set of points Q is the smallest convex polygon containing all the points of Q.

Graham's scan algorithm (Convex Hull)

Graham's scan is an algorithm to obtain the convex hull of a given set of points, say P. It depends on angular sorting. The key idea is to select a starting point p0 with the minimum y-coordinate value (the leftmost such point in case of a tie). After selecting p0, sort the remaining points of P by polar angle with respect to p0 in the anticlockwise direction; if two points have the same polar angle, keep only the one farthest from p0. Each point of the sorted set is pushed onto a stack once, and the points that are not vertices of the convex hull of P are popped from the stack.

Algorithm:
Analysis:

Finding p0 requires O(n) time. Sorting the points requires O(n log n) time. Each push operation takes constant time, i.e., O(1). The pop operations inside the while loop are done at most O(m) times in total; in fact, at most m - 2 pops are performed. Therefore the for loop takes O(n) time in the worst case (n > m), because the while loop is executed at most O(m) times overall, so amortized over the iterations of the for loop the cost of the while loop is constant [O(m)/m = O(1)]. So the worst-case running time of the algorithm is T(n) = O(n) + O(n lg n) + O(1) + O(n) = O(n lg n), where n = |P|.

Question no 9:

Benefits of randomized algorithms

Advantages:

Randomized algorithms are known for their simplicity.

Many deterministic algorithms can easily be converted to randomized algorithms.

Randomized algorithms are often very efficient.

They can utilize little execution time and space compared to deterministic algorithms.

Randomized Quick Sort:

An algorithm is called randomized if its behavior depends on the input as well as on values produced by a random-number generator. The beauty of a randomized algorithm is that no particular input can force the worst-case behavior of the algorithm. IDEA: partition around a random element. The expected running time is then independent of the input order; no assumptions need to be made about the input distribution, and no specific input elicits the worst-case behavior. The worst case is determined only by the output of the random-number generator. Randomization cannot eliminate the worst case, but it can make it unlikely!

Algorithm:

RandQuickSort(A, l, r)
{
    if (l < r)
    {
        m = RandPartition(A, l, r);
        RandQuickSort(A, l, m-1);
        RandQuickSort(A, m+1, r);
    }
}

RandPartition(A, l, r)
{
    k = random(l, r);   // generates a random number between l and r, including both
    swap(A[l], A[k]);
    return Partition(A, l, r);
}

Question no 10:
Directed Acyclic Graph (DAG): here directed means that each edge has an arrow denoting that the edge can be traversed in only that particular direction, and acyclic means that the graph has no cycles, i.e., starting at any node and following directed edges, you can never return to the same node. A DAG can be used to find shortest paths from a given source node to all other nodes. To do so, first sort the vertices of the graph topologically and then relax the edges out of each vertex in topological order.

Algorithm

DagSP(G, w, s)
{
    topologically sort the vertices of G
    for each vertex v ∈ V
        d[v] = ∞
    d[s] = 0
    for each vertex u, taken in topologically sorted order
        for each vertex v adjacent to u
            if d[v] > d[u] + w(u,v)
                d[v] = d[u] + w(u,v)
}

Analysis:

In the above algorithm, the topological sort can be done in O(V+E) time, since it is essentially a DFS (see the book). The initialization loop takes O(V) time. The nested relaxation loops look like O(V^2) at first glance, but aggregate analysis shows that each edge is relaxed exactly once, so relaxation takes O(E) in total and the overall running time is O(V+E).
Question no 8:

Cook's Theorem proves that satisfiability is NP-complete by showing that the computation of any polynomial-time non-deterministic Turing machine can be encoded as a SAT instance. Each such Turing machine has access to a two-way infinite read/write tape and a finite state control, which serves as the program.

P
P is a complexity class that represents the set of all decision problems that can be solved in
polynomial time.
That is, given an instance of the problem, the answer yes or no can be decided in
polynomial time.

Example
Given a connected graph G, can its vertices be coloured using two colours so that no edge is
monochromatic?
Algorithm: start with an arbitrary vertex, colour it red and all of its neighbours blue, and continue spreading colours. Stop when you run out of vertices or you are forced to make an edge have both of its endpoints the same colour.

NP
NP is a complexity class that represents the set of all decision problems for which the instances
where the answer is "yes" have proofs that can be verified in polynomial time.
This means that if someone gives us an instance of the problem and a certificate
(sometimes called a witness) to the answer being yes, we can check that it is correct in
polynomial time.

Example
Integer factorisation is in NP. This is the problem that given integers n and m, is there an
integer f with 1 < f < m, such that f divides n (f is a small factor of n)?

This is a decision problem because the answers are yes or no. If someone hands us an
instance of the problem (so they hand us integers n and m) and an integer f with 1 < f < m,
and claim that f is a factor of n (the certificate), we can check the answer in polynomial
time by performing the division n / f.

NP-Complete
NP-Complete is a complexity class which represents the set of all problems X in NP for which it is
possible to reduce any other NP problem Y to X in polynomial time.

Intuitively this means that we can solve Y quickly if we know how to solve X quickly.
Precisely, Y is reducible to X, if there is a polynomial time algorithm f to transform
instances y of Y to instances x = f(y) of X in polynomial time, with the property that the
answer to y is yes, if and only if the answer to f(y) is yes.

Example
3-SAT. This is the problem wherein we are given a conjunction (ANDs) of 3-clause
disjunctions (ORs), statements of the form

(x_v11 OR x_v21 OR x_v31) AND


(x_v12 OR x_v22 OR x_v32) AND
... AND
(x_v1n OR x_v2n OR x_v3n)

where each x_vij is a boolean variable or the negation of a variable from a finite predefined
list (x_1, x_2, ... x_n).
Question no 5:

To find the longest common subsequence between two strings using dynamic
programming, we can use a table to keep track of the lengths of the longest common
subsequences of all prefixes of the two input strings. Here is an example of how to apply
this algorithm to the given strings:

1. Create a table with dimensions (m+1) x (n+1), where m and n are the lengths of strings x and y, respectively. Initialize the table to all zeros.

2. For each position (i, j) in the table, where 1 <= i <= m and 1 <= j <= n:

If the i-th character of x matches the j-th character of y, set the (i, j) entry of the table to the value of the (i-1, j-1) entry plus 1.

Otherwise, set the (i, j) entry of the table to the maximum of the (i-1, j) and (i, j-1) entries.

3. The length of the longest common subsequence is the value of the (m, n) entry of the table.

4. To construct the longest common subsequence, start at the (m, n) entry of the table and work backwards as follows:

If the i-th character of x matches the j-th character of y, add that character to the
front of the subsequence and move to the (i-1, j-1) entry of the table.

Otherwise, move to the (i-1, j) entry of the table if that value is greater than the (i, j-1)
entry, or move to the (i, j-1) entry otherwise.

Using this algorithm on the example strings, the (6, 4) entry of the completed table is 3, so the length of the longest common subsequence is 3. To construct the subsequence, we start at the (6, 4) entry and work backwards through the table as described above.
