The study of data structures involves two complementary goals. The first is to identify and develop useful mathematical entities and operations, and to determine what classes of problems can be solved with them. The second is to determine concrete representations for those abstract entities and to implement the abstract operations on these representations.
A well-designed data structure allows a variety of critical operations to be performed using as few resources, in both execution time and memory space, as possible. A data structure should be seen as a logical concept that must address two fundamental concerns: how the data will be stored, and what operations will be performed on it. For example, we can store a list of items having the same data type using the array data structure, as shown in the figure.
Traversing: Accessing each data item exactly once so that it can be processed.
Searching: Finding the location of a data item, if it exists, in the given collection of data items.
Inserting: Adding a new data item to the given collection of data items.
Deleting: Removing an existing data item from the given collection of data items.
Sorting: Arranging the data items in some order, i.e., ascending or descending order for numerical data and dictionary order for alphanumeric data.
Merging: Combining the data items of two sorted files into a single file in sorted form.
Abstraction
Data structures are often implemented as abstract data types. Users access only the outer interface without worrying about the underlying implementation. Thus, a data structure provides a layer of abstraction.
Efficiency
Proper organization of data results in efficient access, thereby making programs more efficient. Moreover, we can select the proper data structure depending on our requirements.
Reusability
We can reuse the data structures that we have designed. They can also be compiled into a library and distributed to clients.
Primitive data types are classified as basic data and consist of Booleans, characters, integers, and floating-point numbers. These data types are the building blocks of data structures. The other data structures are non-primitive, and the user must define them before using them in a program. Non-primitive data structures can be further divided into two categories: linear and non-linear. Linear data structures have all their elements arranged in a linear or sequential fashion; each element has a predecessor (previous element) and a successor (next element). Linear data structures are further divided into static and dynamic data structures. Static data structures have a fixed size: once their size is declared at compile time it cannot be changed, whereas dynamic data structures can change their size at run time to accommodate the data. The most popular example of a linear static data structure is the array. In non-linear data structures, data is not arranged sequentially; elements are connected to each other in a non-linear arrangement.
Array
An array is a sequential collection of elements of the same type. Each element of the array can be accessed using its position in the array, called an index or subscript. The name of the array points to the first element in the array. The figure shows an array 'a' of n elements, numbered from 0 to n-1. The size of the array (n in this case) is also called the dimension of the array. The array is the simplest data structure and is efficient because elements can be accessed directly using subscripts. For example, to access the third element of the array we use a[2].
An ADT consists of two parts: a declaration of data and a declaration of operations. An ADT is a logical description of how we view the data and the operations that are allowed, without regard to how they will be implemented. This means that we are concerned only with what the data represents and not with how it will eventually be constructed. By providing this level of abstraction, we are creating an encapsulation around the data. The idea is that by encapsulating the details of the implementation, we are hiding them from the user's view. This is called information hiding.
Handling complexity
As computer algorithms grow in complexity, the volume of data in use rises; this can affect the performance of an application and create notable areas of concern such as processing speed, data search, and handling multiple requests. Data structures are used to counter these problems.
Ability to reuse
Once we have implemented a particular data structure, we can reuse it in any other place. Implementations of data structures can be assembled into libraries that can be used by various clients.
Abstraction
Data structures act as the foundation of abstract data types: the data structure describes the physical form of the abstract data type. In an ADT, the set of operations is taken as understood, and the data structure gives those operations a concrete form.
Definiteness: Every step of the algorithm should be clear and well defined.
Time complexity represents the amount of time needed by a program to run to completion. The time complexity of an algorithm is most commonly expressed using big O notation, and it is usually estimated by counting the number of elementary operations performed by the algorithm. Since an algorithm's performance may vary with different types of input data, we usually use the worst-case time complexity, because that is the maximum time taken for any input of a given size.
Space complexity is the amount of memory space required by an algorithm during its execution. Space complexity must be taken seriously for multi-user systems and in situations where limited memory is available.
An algorithm generally requires space for the following components:
Instruction space: the space required to store the executable version of the program. This space is fixed for a given program but varies with the number of lines of code.
Data space: the space required to store all constant and variable values.
Environment space: the space required to store the environment information needed to resume a suspended function.
Analysis of an algorithm is the process of determining how processing time increases as the size of the problem (input size) increases. Input size is the number of elements in the input and, depending on the problem type, may be measured in different ways. The following are common measures of input size:
Size of an array
Degree of a polynomial
Number of elements in a matrix
Number of bits in the binary representation of the input
Number of vertices and edges in a graph
Execution time? Not a good measure, as execution times are specific to a particular computer.
Number of statements executed? Not a good measure, since the number of statements varies with the programming language as well as with the style of the individual programmer.
Ideal solution? Let us express the running time of a given algorithm as a function of the input size n (i.e., f(n)) and compare these functions corresponding to running times. This kind of comparison is independent of machine time, programming style, etc.
The rate at which the running time increases as a function of input is called rate of growth. Let us
assume that you go to a shop to buy a car and a bicycle. If your friend sees you there and asks what
you are buying, then in general you say buying a car. This is because the cost of the car is high
compared to the cost of the bicycle (approximating the cost of the bicycle to the cost of the car).
For the above-mentioned example, we can represent the cost of the car and the cost of the bicycle as functions, and for a given function ignore the low-order terms that are relatively insignificant (for large values of the input size n). As an example, if n⁴, 2n², 100n and 500 are the individual cost terms of some function, we approximate the function to n⁴, since n⁴ has the highest rate of growth. The figure shows the relationship between different rates of growth.
For some algorithms, the running time is given by multiple expressions: one for the case where the algorithm takes less time and another for the case where it takes more time.
In general, the first case is called the best case and the second the worst case for the algorithm. To analyse an algorithm we need a notation, and that forms the basis for asymptotic analysis/notation. There are three types of analysis:
Worst case
Defines the input for which the algorithm takes a long time (slowest time to complete). Input is the
one for which the algorithm runs the slowest.
Best case
Defines the input for which the algorithm takes the least time (fastest time to complete). Input is the
one for which the algorithm runs the fastest.
Average case
Provides a prediction about the running time of the algorithm. Run the algorithm many times,
using many different inputs that come from some distribution that generates these inputs,
compute the total running time (by adding the individual times), and divide by the number of trials.
Assumes that the input is random.
Asymptotic analysis refers to computing the running time of an operation in mathematical units of computation. For example, the running time of one operation may be computed as f(n) = n and that of another as g(n) = n². This means the running time of the first operation will increase linearly with n, while the running time of the second will increase quadratically as n increases. The running times of both operations will be nearly the same if n is sufficiently small.
The following are the commonly used asymptotic notations for expressing the running-time complexity of an algorithm:
Big O Notation
Omega Ω Notation
Theta Θ Notation
Big O notation is the formal way to express the upper bound of an algorithm's running time. It measures the worst-case time complexity, i.e., the longest time an algorithm can possibly take to complete. Generally, it is written f(n) = O(g(n)). That means, at larger values of n, the upper bound of f(n) is g(n). For example, if f(n) = n⁴ + 100n² + 10n + 50 describes the given algorithm, then g(n) = n⁴. That is, g(n) gives the maximum rate of growth of f(n) at larger values of n.
Let us look at O-notation in a little more detail. O(g(n)) is defined as the set of functions f(n) for which there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀. Then g(n) is an asymptotic upper bound for f(n). Our objective is to give the smallest rate of growth g(n) that is greater than or equal to the given algorithm's rate of growth f(n).
Generally, we discard lower values of n: the rate of growth at lower values of n is not important. In the figure, n₀ is the point from which we need to consider the rate of growth for a given algorithm; below n₀, the rate of growth could be different. n₀ is called the threshold for the given function.
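To see the definition at work on the example above, take f(n) = n⁴ + 100n² + 10n + 50 and g(n) = n⁴; the constants chosen below are one valid pair, not the only one.

```latex
f(n) = n^4 + 100n^2 + 10n + 50 \;\le\; 2n^4 = c \cdot g(n)
\qquad \text{for } c = 2,\; n_0 = 11,
```

since n⁴ ≥ 100n² + 10n + 50 for all n ≥ 11. Hence f(n) = O(n⁴).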
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best-case time complexity, i.e., the least time an algorithm can possibly take to complete. We write f(n) = Ω(g(n)). That means, at larger values of n, the lower bound of f(n) is g(n). For example, if f(n) = 100n² + 10n + 50, then f(n) = Ω(n²).
Ω(g(n)) can be defined as the set of functions f(n) for which there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀. Then g(n) is an asymptotic lower bound for f(n). Our objective is to give the largest rate of growth g(n) that is less than or equal to the given algorithm's rate of growth f(n).
The notation Θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. If the upper bound (O) and the lower bound (Ω) give the same result, then the Θ notation has that same rate of growth.
Θ(g(n)) is defined as the set of functions f(n) for which there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀. Then g(n) is an asymptotic tight bound for f(n): Θ(g(n)) is the set of functions with the same order of growth as g(n).
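As a worked instance of the definition (with an illustrative function, not one from the text), take f(n) = 3n² + 2n and g(n) = n²:

```latex
1 \cdot n^2 \;\le\; 3n^2 + 2n \;\le\; 5n^2
\qquad \text{for all } n \ge 1,
```

so the constants c₁ = 1, c₂ = 5 and n₀ = 1 witness f(n) = Θ(n²).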
In a stack, the insertion operation is performed using a function called push and the deletion operation using a function called pop. In the figure below, PUSH and POP operations are performed at the top position of the stack. That means both insertion and deletion are performed at one end, i.e., at the top. While performing push and pop operations, we must check whether the stack is empty or full.
A stack S of elements of type T is a finite sequence of elements of T together with the operations.
Push operation
The push operation is used to add new elements to the stack. Before adding, first check whether the stack is full. If the stack is full, the operation generates a "stack overflow" error message. The push() operation goes through the following steps:
1. Before inserting a new element into the stack, check whether the stack is full or not.
2. If the stack is already full and we try to insert more elements into it, it will return an
overflow error.
3. If the stack is not full, increment the ‘top’ to point to the next empty space and insert the
new element there.
4. When the stack is empty, top points to -1. Once we add a new element, the top points to
that element and has a value of 0.
5. The stack can accommodate elements if it does not reach its maximum capacity.
Initially top = -1. To insert an element into the stack, first check whether the stack is full, i.e., whether top >= size - 1. If it is not full, increment the top value, i.e., top = top + 1, and add the element to the stack.
Pop operation
The pop operation is used to delete elements from the stack. Before deleting, first check whether the stack is empty. If the stack is empty, the operation generates a "stack underflow" error message. The steps involved in the pop operation are:
1. The first step is to check whether the stack is empty or not.
2. If the stack is already empty, we cannot remove any element from it. So, we will return an
underflow error.
3. If the stack is not empty, remove the topmost element of the stack.
4. Decrement the position of ‘top’ by 1 and return.
Algorithm pop():
Step 1: START
Step 2: if isEmpty() then print "Stack Underflow"
Step 3: else
    3.1: print the deleted element
    3.2: top = top - 1
Step 4: END

void pop()
{
    if(isEmpty())
    {
        printf("Stack Underflow\n");
    }
    else
    {
        int n = stack[top];
        top = top - 1;
        printf("%d is removed from stack\n", n);
    }
}
Traverse Operation
This operation displays the elements in the stack. To display the elements, first check whether the stack is empty, i.e., whether top == -1. If it is not empty, display the list of elements in the stack.
#include <stdio.h>
#define MAXSIZE 10
int stack[MAXSIZE];
int top = -1;
int isEmpty()
{
return top<0;
}
int isFull()
{
return top>=MAXSIZE-1;
}
void push(int value)
{
if(isFull())
{
printf("Stack Overflow\n");
}
else
{
top = top + 1;
stack[top] = value;
printf("%d is pushed.\n", value);
}
}
void pop()
{
if(isEmpty())
{
printf("Stack Underflow\n");
}
else
{
int n = stack[top];
top= top-1;
printf("%d is removed from stack\n",n);
}
}
void topStack()
{
if (isEmpty())
{
printf("Stack is empty.\n");
}
else
{
printf("%d is a top element", stack[top]);
}
}
void traverse()
{
if(isEmpty())
{
printf("Stack is empty. \n");
}
else
{
printf("Elements in a stack are: ");
for(int i=top; i>=0;i--)
{
printf("%d\t",stack[i]);
}
printf("\n");
}
}
int main()
{
topStack();
push(4);
push(8);
push(3);
traverse();
pop();
traverse();
topStack();
return 0;
}
Backtracking
Backtracking is a recursive technique used for solving optimization problems. To find an optimized solution with backtracking, we may have to examine every possible candidate solution to the problem, whether or not it turns out to be correct. While exploring the possible solutions, we store the solution of a previously calculated sub-problem on a stack and use it to solve the subsequent sub-problems.
Parenthesis Checking
In programming, we use different types of parentheses, such as ( ) and { }, which open and close blocks of code. A stack can be used to check that these parentheses are properly matched: each opening parenthesis is pushed onto the stack, and each closing parenthesis must match the one on top.
Function Call
In programming, whenever one function calls another, the return address of the calling function is stored on the stack. When the called function terminates, program control moves back to the calling function using the address stored on the stack. So the stack plays the main role when one function calls another.
String Reversal
String reversal is another application of the stack. Each character of the string is inserted into the stack one by one, so the first character of the string ends up at the bottom of the stack and the last character at the top. By then popping the characters off the stack, we get the string in reverse order.
Memory Management
The assignment of memory takes place in contiguous memory blocks. We call this stack memory
allocation because the assignment takes place in the function call stack. The size of the memory to
be allocated is known to the compiler. When a function is called, its variables get memory allocated
on the stack. When the function call is completed, the memory for the variables is released. All this
happens with the help of some predefined routines in the compiler. The user does not have to worry
about memory allocation and release of stack variables.
An algebraic expression can be represented using three different notations: infix, postfix, and prefix.
Operator precedence (highest to lowest):
Parentheses: { }, ( ), [ ]
Exponentiation: ^
Multiplication and division: *, /
Addition and subtraction: +, -
Algorithm
Step 1: Consider the next element in the input.
Step 2: If it is an operand, display it.
Step 3: If it is an opening parenthesis, push it onto the stack.
Step 4: If it is an operator, then
If the stack is empty, push the operator onto the stack.
If the top of the stack is an opening parenthesis, push the operator onto the stack.
If it has higher priority than the top of the stack, push the operator onto the stack.
Else, pop the operator from the stack and display it; repeat Step 4.
Step 5: If it is a closing parenthesis, pop operators from the stack and display them until an opening parenthesis is encountered. Pop and discard the opening parenthesis.
Step 6: If there is more input, go to Step 1.
Step 7: If there is no more input, pop the remaining operators and display them.
#include <stdio.h>
#include <ctype.h>
#define SIZE 50
char s[SIZE];
int top=-1;
void push(char elem)
{
s[++top]=elem;
}
char pop()
{
return(s[top--]);
}
int pr(char elem)
{
switch(elem)
{
case '#': return 0;
case '(': return 1;
case '+':
case '-': return 2;
case '*':
case '/': return 3;
}
return 0;
}
int main()
{
char infix[50], pofx[50], ch, elem;
int i=0, k=0;
printf("\n\nEnter Infix Expression: ");
scanf("%s",infix);
push('#');
while( (ch=infix[i++]) != '\0')
{
if( ch == '(') push(ch);
else
if(isalnum(ch)) pofx[k++]=ch;
else
if( ch == ')')
{
while( s[top] != '(')
pofx[k++]=pop();
elem=pop();
}
else
{
while( pr(s[top]) >= pr(ch) )
pofx[k++]=pop();
push(ch);
}
}
while( s[top] != '#')
pofx[k++]=pop();
pofx[k]='\0';
printf("\n\n Given Infix Expression: %s\n Postfix Expression: %s\n",infix,pofx);
}
Algorithm
1. Add ')' to the end of the postfix expression.
2. Read the postfix expression left to right until ')' is encountered.
3. If an operand is encountered, push it onto the stack.
4. If an operator is encountered, pop two elements:
A -> top element
B -> next-to-top element
Evaluate B operator A and push the result back onto the stack.
5. When ')' is reached, the value on top of the stack is the result.
#include <stdio.h>
#include <ctype.h>
#define POSTFIXSIZE 50
int stack[POSTFIXSIZE];
int top = -1;
void push(int item)
{
stack[++top] = item;
}
int pop()
{
int item;
if (top < 0) {
printf("stack underflow");
return 0;
}
else {
item = stack[top];
top = top - 1;
return item;
}
}
void EvalPostfix(char postfix[])
{
int i;
char ch;
int val;
int A, B;
/* scan the expression until the terminating ')' */
for (i = 0; postfix[i] != ')'; i++)
{
ch = postfix[i];
if (isdigit(ch))
{
/* operand: push its numeric value */
push(ch - '0');
}
else if (ch == '+' || ch == '-' || ch == '*' || ch == '/')
{
A = pop();
B = pop();
switch (ch)
{
case '*':
val = B * A;
break;
case '/':
val = B / A;
break;
case '+':
val = B + A;
break;
case '-':
val = B - A;
break;
}
push(val);
}
}
printf(" \n Result of expression evaluation : %d \n", pop());
}
int main()
{
int i;
char postfix[POSTFIXSIZE];
printf(" \nEnter postfix expression,\npress right parenthesis ')' to end expression : ");
for (i = 0; i < POSTFIXSIZE; i++)
{
scanf("%c", &postfix[i]);
if (postfix[i] == ')')
{
break;
}
}
EvalPostfix(postfix);
return 0;
}
Algorithm
1. First, reverse the infix expression given in the problem.
2. Scan the expression from left to right.
3. Whenever the operands arrive, print them.
4. If the operator arrives and the stack is found to be empty, then simply push the operator
into the stack.
5. If the incoming operator has higher precedence than the TOP of the stack, push the
incoming operator into the stack.
6. If the incoming operator has the same precedence as the top of the stack, push the
incoming operator into the stack.
7. If the incoming operator has lower precedence than the top of the stack, pop and print the
top of the stack. Test the incoming operator against the new top again, popping operators
from the stack until an operator of lower or equal precedence is found.
8. If the incoming operator has the same precedence as the top of the stack and the
incoming operator is ^, then pop the top of the stack while this condition holds; once the
condition no longer holds, push the ^ operator.
9. When we reach the end of the expression, pop, and print all the operators from the top of
the stack.
10. If the symbol is ')', then push it into the stack.
11. If the symbol is '(', then pop operators from the stack and print them until ')' is found in
the stack; then pop and discard the ')'.
12. If the top of the stack is ')', push the incoming operator on the stack.
13. At the end, reverse the output.
Unit 3 Queue
3.1 The Queue
A queue is a linear data structure in which the insertion and deletion operations are performed at two different ends: insertion is performed at one end and deletion at the other. In a queue, the insertion operation is performed at the position known as the 'rear' and the deletion operation at the position known as the 'front'. In a queue, insertions and deletions follow the FIFO (First In, First Out) principle.
A queue is an abstract data structure. It is like the ticket queue outside a cinema hall, where the first person entering the queue is the first person to get a ticket. Another real-world example of a queue is a single-lane one-way road, where the vehicle that enters first exits first.
Thus, by supporting the above operations, a queue acts as an ADT. The operations for a queue are analogous to those for a stack; the difference is that insertions go at the end of the list rather than at the beginning. A queue is an object, or more specifically an abstract data type (ADT), that allows the following operations:
Two pointers called FRONT and REAR are used to keep track of the first and last elements in the
queue. When initializing the queue, we set the value of FRONT and REAR to 0. On enqueuing an
element, we increase the value of REAR index and place the new element in the position pointed to
by REAR. Before enqueuing, we check if queue is already full. When enqueuing the first element, we
set the value of FRONT to 1. On dequeuing an element, we return the value pointed to by FRONT
and increase the FRONT index. Before dequeuing, we check if queue is already empty. When
dequeuing the last element, we reset the values of FRONT and REAR to 0.
Again, insert another element 33 to the queue. The status of the queue is:
Now, delete an element. The element deleted is the element at the front of the queue. So, the
status of the queue is:
Again, delete an element. The element to be deleted is always pointed to by the FRONT pointer. So,
22 is deleted. The queue status is as follows:
Now, insert new elements 44 and 55 into the queue. The queue status is:
Next, insert another element, say 66, into the queue. We cannot insert 66 because the rear has crossed the maximum size of the queue (i.e., 5), so a queue-full signal is raised. The queue status is as follows:
Now it is not possible to insert an element 66 even though there are two vacant positions in the
linear queue. To overcome this problem the elements of the queue are to be shifted towards the
beginning of the queue so that it creates vacant position at the rear end. Then the FRONT and REAR
are to be adjusted properly. The element 66 can be inserted at the rear end. After this operation, the
queue status is as follows:
This difficulty can be overcome if we treat the queue position with index 0 as the position that comes after the position with index 4, i.e., if we treat the queue as a circular queue.
#include <stdio.h>
#define MAXSIZE 10
int queue[MAXSIZE];
int front = 0;
int rear = 0;
int isEmpty() {
return front == rear;
}
int isFull() {
return rear == MAXSIZE;
}
void enqueue(int value) {
if (isFull()) {
printf("Q is full\n");
} else {
queue[rear] = value;
rear = rear + 1;
printf("%d is inserted.\n", value);
}
}
void dequeue() {
if (isEmpty()) {
printf("Q is empty\n");
} else {
int n = queue[front];
front = front + 1;
printf("%d is removed from Q\n", n);
}
}
void traverse() {
if (isEmpty()) {
printf("Q is empty. \n");
} else {
printf("Elements in a Q are: ");
for (int i = front; i < rear; i++) {
printf("%d\t", queue[i]);
}
printf("\n");
}
}
int main() {
traverse();
enqueue(4);
dequeue();
enqueue(40);
traverse();
dequeue();
traverse();
return 0;
}
There are two problems associated with a linear queue:
Time consuming: linear time must be spent shifting elements to the beginning of the queue.
Signalling queue full: a queue-full signal may be raised even when the queue has vacant positions. For example, consider a linear queue with the following status:
Next, insert another element, say 66, into the queue. We cannot insert 66 because the rear has crossed the maximum size of the queue (i.e., 5), so a queue-full signal is raised. The queue status is as follows:
This difficulty can be overcome if we treat the queue position with index zero as the position that comes after the position with index four, i.e., if we treat the queue as circular. In a circular queue, if we reach the end while inserting elements, it is still possible to insert new elements as long as slots at the beginning of the queue are empty.
Let us consider a circular queue, which can hold maximum (MAX) of six elements. Initially the queue
is empty.
Now, insert 11 to the circular queue. Then circular queue status will be:
Insert new elements 22, 33, 44 and 55 into the circular queue. The circular queue status is:
Now, delete an element. The element deleted is the element at the front of the circular queue. So,
11 is deleted. The circular queue status is as follows:
Again, delete an element. The element to be deleted is always pointed to by the FRONT pointer. So,
22 is deleted. The circular queue status is as follows:
Again, insert another element 66 to the circular queue. The status of the circular queue is:
Now, insert new elements 77 and 88 into the circular queue. The circular queue status is:
Now, if we insert an element to the circular queue, as COUNT = MAX we cannot add the element to
circular queue. So, the circular queue is full.
#include <stdio.h>
#define MAXSIZE 10
int CQ[MAXSIZE];
int front = 0;
int rear = 0;
int count = 0;
int isEmpty() {
return count == 0;
}
int isFull() {
return count == MAXSIZE;
}
void c_enqueue(int value) {
if (isFull()) {
printf("CQ is full\n");
} else {
CQ[rear] = value;
rear = (rear + 1) % MAXSIZE;
count++;
printf("%d is inserted.\n", value);
}
}
void c_dequeue() {
if (isEmpty()) {
printf("CQ is empty\n");
} else {
int n = CQ[front];
printf("%d is removed from CQ\n", n);
front = (front + 1) % MAXSIZE;
count--;
}
}
void traverse() {
if (isEmpty()) {
printf("CQ is empty. \n");
} else {
printf("Elements in CQ are:");
int j = count;
for (int i = front; j != 0; j--) {
printf("%d\t", CQ[i]);
i = (i + 1) % MAXSIZE;
}
}
}
int main() {
traverse();
c_enqueue(4);
c_dequeue();
c_enqueue(40);
traverse();
return 0;
}
3.7 Deque
In the preceding section we saw a queue in which we insert items at one end and remove items from the other end. In this section we examine an extension of the queue that provides a means to insert and remove items at both ends. This data structure is a deque. The word deque is an acronym derived from double-ended queue. The figure below shows the representation of a deque.
An Input restricted deque is a deque, which allows insertions at one end but allows deletions at both
ends of the list. An output restricted deque is a deque, which allows deletions at one end but allows
insertions at both ends of the list.
Check empty
This operation is performed to check whether the deque is empty or not. If front = -1, it means that
the deque is empty.
Check full
This operation is performed to check whether the deque is full or not. If front = rear + 1, or front = 0
and rear = n - 1 it means that the deque is full.
Insertion at the front
If the deque is empty, both rear and front are initialized with 0; both then point to the first element.
Otherwise, check the position of the front: if the front is less than 1 (front < 1), reinitialize it by front = n - 1, i.e., the last index of the array; else decrement the front by 1.
Insertion at the rear
If the deque is empty, both rear and front are initialized with 0; both then point to the first element.
Otherwise, increment the rear by 1. If the rear is at the last index (size - 1), then instead of incrementing it, set it to 0.
Deletion from the front
If the deque is empty, i.e., front = -1, it is the underflow condition, and we cannot perform the deletion. Otherwise, the element is removed from the front end using the following conditions:
If the deque has only one element, set rear = -1 and front = -1.
Else, if the front is at the end (front = size - 1), set front = 0.
Else, increment the front by 1 (i.e., front = front + 1).
Deletion from the rear
If the deque is empty, i.e., front = -1, it is the underflow condition, and we cannot perform the deletion. Otherwise:
If the deque has only one element, set rear = -1 and front = -1.
If rear = 0 (rear is at the front), set rear = n - 1.
Else, decrement the rear by 1 (i.e., rear = rear - 1).
#include <stdio.h>
#define size 5
int deque[size];
while (i != r) {
printf("%d ", deque[i]);
i = (i + 1) % size;
}
printf("%d", deque[r]);
}
} else if (r == 0) {
printf("\nThe deleted element is %d", deque[r]);
r = size - 1;
} else {
printf("\nThe deleted element is %d", deque[r]);
r = r - 1;
}
}
int main() {
insert_front(20);
insert_front(10);
insert_rear(30);
insert_rear(50);
insert_rear(80);
display();      // display the values in the deque
getfront();     // retrieve the value at the front end
getrear();      // retrieve the value at the rear end
delete_front();
delete_rear();
display();      // display the values after deletion
return 0;
}
We always remove an element with the highest priority, which is given by the minimal integer
priority assigned.
For example, suppose the values 1, 3, 4, 8, 14, 22 are inserted in a priority queue with an ordering imposed from least to greatest. Then 1 has the highest priority while 22 has the lowest priority.
A prototype of a priority queue is a time-sharing system: programs of high priority are processed first, and programs with the same priority form a standard queue. An efficient implementation of a priority queue is a heap, which in turn can be used for sorting, a technique called heap sort.
Ascending order priority queue: elements are removed from the lowest priority number to the highest, e.g., in the order 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
Descending order priority queue: elements are removed from the highest priority number to the lowest, e.g., in the order 10, 9, 8, 7, 6, 5, 4, 3, 2, 1.
Note: Priority queue can be implemented using an array, a linked list, a heap data structure, or a
binary search tree. Among these data structures, heap data structure provides an efficient
implementation of priority queues.
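To illustrate the ascending-order behaviour, here is a minimal array-based sketch (the names pq_insert and pq_remove are hypothetical; as the note says, a heap would be the more efficient choice for large queues):

```c
#include <stdio.h>
#define MAX 20

int pq[MAX];
int count = 0;   /* number of elements currently stored */

/* Insertion keeps the array sorted in descending order, so the
   smallest value (the highest priority here) is always at the end. */
void pq_insert(int x) {
    if (count == MAX) { printf("Overflow\n"); return; }
    int i = count - 1;
    while (i >= 0 && pq[i] < x) {   /* shift smaller values right */
        pq[i + 1] = pq[i];
        i--;
    }
    pq[i + 1] = x;
    count++;
}

/* Remove and return the element with the highest priority (the
   minimal value); with this layout removal is a single step. */
int pq_remove(void) {
    if (count == 0) { printf("Underflow\n"); return -1; }
    return pq[--count];
}
```

With this design insertion is O(n) and removal is O(1); a heap makes both operations O(log n).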
Unit 6 Recursion
6.1 Introduction
Recursion is the process of a function calling itself as a subroutine to solve a complex
problem. Recursion divides the problem into smaller sub-tasks of the same kind and calls the same
function repeatedly, in contrast to the iterative method, which can take more effort to express the same solution.
The function that calls itself is called the recursive function, and the process of a
function calling itself is called recursion. The most important requirement of recursion is a
base case to terminate it. Because a recursive function calls itself continuously, it is
always possible for the program to recur forever. If we create the terminating base case
using an if...else statement, the program will check the base-case condition on every call and prevent
infinite recursion.
Base case: The base case terminates the recurring. If a base case is not
defined, then the function will recur an infinite number of times. The base case gives the
condition under which the recursive calls halt: when you hit the base case, the recursion
stops and starts to unwind.
Recursive case: In the recursive case, the problem space is made smaller and smaller as we dive
deeper into the recursion. The recursive case breaks a bigger problem
into one or more simpler sub-problems and then calls the function on them.
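The two cases can be seen in a minimal sketch (the function sum_to is hypothetical, chosen only to illustrate the structure):

```c
#include <stdio.h>

/* Sum of the first n natural numbers, written recursively. */
int sum_to(int n) {
    if (n == 0)                  /* base case: halts the recursion */
        return 0;
    return n + sum_to(n - 1);    /* recursive case: a smaller sub-problem */
}
```

Removing the base case would make sum_to call itself forever; with it, each call brings n one step closer to 0.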
The purpose of the recursive approach is to solve the problem more effectively than other
problem-solving approaches. The recursion process divides the problem into sub-tasks within a function
and continuously calls the same function again to get one step closer to the final solution. This
process continues until we find the final solution to the problem. Each pending call
is stored on the call stack in memory and, at the end, popped off to build
the final solution. As we approach the final solution, a base condition is checked
to get out of the recursion process. This base condition is checked using a conditional statement
and so prevents the recursion from running forever. If for any reason the base case
fails to work, the program will fall into infinite recursion, and we will never reach the solution of the
program.
Direct Recursion
Tail Recursion
Head Recursion
Nested Recursion
Tree Recursion
Indirect Recursion
Linear Recursion
Binary Recursion
Multiple Recursion
Linear Recursion
It is the most common type of Recursion in which function calls itself repeatedly until base condition
[termination case] is reached. Once the base case is reached the results are return to the caller
function. If a recursive function is called only once, then it is called a linear recursion. Example,
factorial of a number.
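The factorial example just mentioned can be sketched with exactly one recursive call per invocation:

```c
#include <stdio.h>

/* Linear recursion: each call makes at most one further recursive call. */
long factorial(int n) {
    if (n <= 1)                      /* base (termination) case */
        return 1;
    return n * factorial(n - 1);     /* the single recursive call */
}
```

Each call reduces n by one, so factorial(n) makes n - 1 recursive calls before the base case stops the chain.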
Binary Recursion
Some recursive functions don't just have one call to themselves; they have two (or more). Functions
with two recursive calls are referred to as binary recursive functions. For example, the Fibonacci
function fib provides a classic example of binary recursion. The Fibonacci numbers can be defined by
the rule:
fib(n) = 0 if n is 0,
= 1 if n is 1,
= fib(n - 1) + fib(n - 2) otherwise.
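A direct (and deliberately naive) translation of this rule into a binary recursive function might look like:

```c
#include <stdio.h>

/* Binary recursion: two recursive calls per invocation. */
int fib(int n) {
    if (n == 0) return 0;   /* first base case */
    if (n == 1) return 1;   /* second base case */
    return fib(n - 1) + fib(n - 2);
}
```

Because each call spawns two more, this version does exponential work; it is shown here only to illustrate the shape of binary recursion, and the iterative version later in this unit is far cheaper.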
Tail Recursion
Tail recursion is a form of linear recursion. In tail recursion, the recursive call is the last thing the
function does. Often, the value of the recursive call is returned. As such, tail recursive functions can
often be easily implemented in an iterative manner; by taking out the recursive call and replacing it
with a loop, the same effect can generally be achieved. In fact, a good compiler can recognize tail
recursion and convert it to iteration to optimize the performance of the code. A good example of a
tail recursive function is a function to compute the GCD, or Greatest Common Divisor, of two
numbers.
Head Recursion
If a recursive function calls itself and that recursive call is the first statement in the function, then
it's known as head recursion. There is no statement and no operation before the call. The function
does not process or perform any operation at the time of calling; all operations are done
at returning time.
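A small sketch of head recursion follows; the hypothetical helper head_rec records values into a buffer instead of printing them, purely so the order of operations is visible:

```c
#include <stdio.h>

int out[10], pos = 0;   /* buffer recording the order of the work */

/* Head recursion: the recursive call is the first statement, and all
   processing happens at returning time, so values land in order 1..n. */
void head_rec(int n) {
    if (n == 0)         /* base case */
        return;
    head_rec(n - 1);    /* call first, before any other operation */
    out[pos++] = n;     /* work done only after the call returns */
}
```

Although head_rec(3) is called with the largest value first, the buffer ends up as 1, 2, 3, because each value is recorded only while the calls unwind.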
Nested Recursion
In this kind of recursion, a recursive function passes a recursive call as one of its parameters. That means
"recursion inside recursion".
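The classic example of nested recursion is the Ackermann function, where one recursive call appears as the parameter of another:

```c
#include <stdio.h>

/* Nested recursion: in the last line, the inner call ackermann(m, n - 1)
   is itself the argument of the outer recursive call. */
int ackermann(int m, int n) {
    if (m == 0)
        return n + 1;                 /* base case */
    if (n == 0)
        return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));  /* recursion inside recursion */
}
```

Even tiny inputs generate deep call chains, which is why the Ackermann function is also a standard stress test for recursion depth.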
Tower of Hanoi is a mathematical puzzle where we have three rods (A, B, and C) and N disks.
Initially, all the disks are stacked in decreasing order of diameter, i.e., the smallest disk is placed on
the top, and they are all on rod A. The objective of the puzzle is to move the entire stack to another rod
(here considered C), obeying the following simple rules:
Only one disk can be moved at a time.
Only the topmost disk of a rod can be moved.
No disk may be placed on top of a smaller disk.
Implementation of TOH
# include <stdio.h>
void TOH (int n, char from_rod, char to_rod,char aux_rod)
{
if (n == 0)
{
return;
}
TOH (n - 1, from_rod, aux_rod, to_rod);
printf("\nMove disk %d from rod %c to rod %c ", n,from_rod,to_rod);
TOH (n - 1, aux_rod, to_rod, from_rod);
}
int main ()
{
int N = 3;
TOH (N, 'A', 'C', 'B'); // move N disks from rod A to rod C using B
return 0;
}
Fibonacci series is a series of numbers formed by the addition of the preceding two numbers in the
series. The first two terms are zero and one respectively. The terms after this are generated by
simply adding the previous two terms.
The next number is found by adding up the two numbers before it:
#include<stdio.h>
int main() {
int first = 0, second = 1, i, n, sum = 0;
printf("Enter the number of terms: ");
scanf("%d", & n);
printf("Fibonacci Series:");
for (i = 0; i < n; i++) {
if (i <= 1) {
sum = i; // the first two terms are 0 and 1
} else {
sum = first + second;
first = second;
second = sum;
}
printf(" %d", sum);
}
return 0;
}
Implementation of GCD
# include<stdio.h>
int gcd(int n1, int n2);
int main() {
int n1, n2;
printf("Enter two positive integers: ");
scanf("%d %d", &n1, &n2);
printf("G.C.D of %d and %d is %d.", n1, n2, gcd(n1, n2));
return 0;
}
int gcd(int n1, int n2) {
// Tail recursion: the recursive call is the last thing the function does
if (n2 == 0)
return n1;
return gcd(n2, n1 % n2);
}
# include<stdio.h>
void mul_table(int N, int i)
{
// Base Case
if (i > 10)
return;
// Print the current entry of the table, then recur for the next i
printf("%d * %d = %d\n", N, i, N * i);
mul_table(N, i + 1);
}
int main()
{
// Input number whose table is to print
int N = 8;
mul_table(N, 1);
return 0;
}
Chess Games
If you have ever played chess on your mobile or laptop, then you must be aware of the auto-
suggestion for each move on the chessboard. And if you have not played it yet then you can take the
reference from the image below.
The green dots that you can see in the above image are generated with the help of recursion. Here
recursion is used to generate all possible safe and unsafe moves of a particular piece in the chess
game. Generally, the safe moves are represented with a green dot and the unsafe ones with a red
dot.
Some other popular Games that use the concept of recursion are Candy Crush, Sudoku, etc.
are of comparable depths. Various search-tree data structures exist, several of which also allow
efficient insertion and deletion of elements, which operations then must maintain tree balance.
Search trees are often used to implement an associative array. The search tree algorithm uses the
key from the key–value pair to find a location, and then the application stores the entire key–value
pair at that particular location.