DS Unit-I

R23 Data Structures Learning Concepts

Unit – I:

1. Definition of Data Structure, Definitions of Linear & Non-Linear Data Structures
2. Abstract Data Types & their implementation, Overview of time and space complexity analysis for linear data structures
3. Recursion
4. Linear Search, Binary Search in an Array
5. Fibonacci Search, Bubble Sort
6. Selection Sort, Insertion Sort
7. Shell Sort
8. Quick Sort
9. Merge Sort

1. Definition of Data Structure, Definitions of Linear & Non-Linear Data Structures

Data Structure

A data structure is a way of organizing and storing data in a computer so that it can be accessed and used efficiently.

Data structures can be classified into two broad categories:

Linear Data Structure: A data structure in which data elements are arranged
sequentially or linearly, where each element is attached to its previous and
next adjacent elements, is called a linear data structure. Examples are array,
stack, queue, Linked List etc.
Non-linear Data Structure: Data structures where data elements are not
placed sequentially or linearly are called non-linear data structures. Examples
are trees and graphs.

Classification of Data Structure

Linear Data Structure:

A data structure in which data elements are arranged sequentially, with each element attached to its previous and next adjacent elements, is called a linear data structure. A linear data structure involves only a single level, so we can traverse all the elements in a single run. Linear data structures are easy to implement because computer memory is itself arranged in a linear way. Examples are the array, stack, queue, linked list, etc.

1. Array

An array is a data structure that stores elements of the same type. Arrays are the most basic and fundamental data structures. Each position in an array is given a number called the index, which identifies the location of the element within the array.
If, say, we have to store the marks of ten students, we can create an array and store all ten integers together. This avoids creating ten separate integer variables, so the lines of code are reduced and memory is saved.

2. Stack

The stack follows the rule of LIFO (Last In-First Out): the element added last is removed first.
The push operation adds an element to the stack, and the pop operation deletes an element from the stack.
This can be explained by the example of books stacked together: to access a book lower in the stack, all the books placed on top of it have to be removed first.

3. Queue

This structure is similar to the stack in that the data is stored sequentially. The difference is that the queue follows FIFO (First In-First Out): the element added first is the first to exit the queue. Front and rear are the two ends of a queue.

Enqueue is the insertion operation and dequeue is the deletion operation.

The structure can be explained with the example of people queuing up to ride a bus: the first person in the line gets the first chance to exit the queue, while the last person is the last to exit.

4. Linked List

A linked list stores data in the form of nodes, each of which consists of a data element and a pointer.
The pointer points to the next node in the sequence. The data stored in a linked list can be of any form: strings, numbers, or characters.
5. Hash Tables

Hash tables can be implemented as linear or non-linear data structures. They consist of key-value pairs.

Non-linear Data Structure:

Data structures in which data elements are not arranged sequentially or linearly are called non-linear data structures.

A non-linear data structure involves multiple levels, so we cannot traverse all the elements in a single run. Non-linear data structures are harder to implement than linear data structures, but they utilize computer memory more efficiently. Examples are trees and graphs.

1. Trees

A tree data structure consists of various nodes linked together. The structure of a tree is hierarchical, forming parent-child relationships, with one connection for every parent-child pair of nodes. Various types of trees exist based on their structure, such as the AVL tree, binary tree, binary search tree, etc.

2. Graph

Graphs are non-linear data structures consisting of vertices and edges. The vertices (nodes) store the data, and the edges show the relationships between vertices. Real-life problems like social networks, telephone networks, etc. can be represented through graphs.

Difference between Linear and Non-linear Data Structures:

1. In a linear data structure, data elements are arranged in a linear order where each element is attached to its previous and next adjacent elements. In a non-linear data structure, data elements are attached in a hierarchical manner.

2. In a linear data structure, a single level is involved. In a non-linear data structure, multiple levels are involved.

3. A linear data structure is easy to implement. A non-linear data structure is complex to implement in comparison.

4. In a linear data structure, data elements can be traversed in a single run. In a non-linear data structure, data elements cannot be traversed in a single run.

5. In a linear data structure, memory is not utilized in an efficient way. In a non-linear data structure, memory is utilized in an efficient way.

6. Examples of linear data structures: array, stack, queue, linked list, etc. Examples of non-linear data structures: trees and graphs.

7. Applications of linear data structures are mainly in application software development. Applications of non-linear data structures are in Artificial Intelligence and image processing.

8. Linear data structures are useful for simple data storage and manipulation. Non-linear data structures are useful for representing complex relationships and hierarchies, such as in social networks, file systems, or computer networks.

Static Data Structure vs Dynamic Data Structure

A data structure is a way of storing and organizing data such that the required operations on it can be performed efficiently with respect to both time and memory.

What is Static Data Structure?

In a static data structure the size of the structure is fixed. The content of the data structure can be modified, but without changing the memory space allocated to it.
Example of Static Data Structures: Array

What is Dynamic Data Structure?


In Dynamic data structure the size of the structure is not fixed and can
be modified during the operations performed on it. Dynamic data
structures are designed to facilitate change of data structures in the run
time.

Example of Dynamic Data Structures: Linked List

Differences between Arrays and Linked Lists

Advantages Of Linked List:


 Dynamic data structure: A linked list is a dynamic arrangement so it can
grow and shrink at runtime by allocating and deallocating memory. So
there is no need to give the initial size of the linked list.
 No memory wastage: In a linked list, efficient memory utilization can
be achieved since the size of the linked list increases or decreases at run
time, so there is no memory wastage and no need to pre-allocate
memory.
 Implementation: Linear data structures like stacks and queues are often
easily implemented using a linked list.
 Insertion and Deletion Operations: Insertion and deletion operations are
easier in a linked list. There is no need to shift elements after the
insertion or deletion of an element; only the address in the next
pointer needs to be updated.
 Flexible: This is because the elements in Linked List are not stored in
contiguous memory locations unlike the array.
 Efficient for large data: When working with large datasets linked lists play
a crucial role as it can grow and shrink dynamically.
 Scalability: Contains the ability to add or remove elements at any position.

Disadvantages Of Linked List:


 Memory usage: More memory is required in the linked list as compared
to an array. Because in a linked list, a pointer is also required to store the
address of the next element and it requires extra memory for itself.
 Traversal: In a Linked list traversal is more time-consuming as compared
to an array. Direct access to an element is not possible in a linked list as in
an array by index. For example, for accessing a node at position n, one has
to traverse all the nodes before it.
 Reverse Traversing: In a singly linked list reverse traversing is not possible,
but in the case of a doubly-linked list, it can be possible as it contains a
pointer to the previously connected nodes with each node. For performing
this extra memory is required for the back pointer hence, there is a
wastage of memory.
 Random Access: Random access is not possible in a linked list due to
its dynamic memory allocation.
 Lower efficiency at times: Certain operations, such as searching for an
element or iterating through the list, can be slower in a linked list.
 Complex implementation: The linked list implementation is more
complex compared to an array and requires a deeper understanding of
pointers.
 Difficult to share data: This is because it is not possible to directly access
the memory address of an element in a linked list.
 Not suited for small datasets: A linked list cannot provide any significant
benefit over an array on a small dataset.
Applications of Array in C:
 Arrays are used as the base of all sorting algorithms.
 Arrays are used to implement other DS like a stack, queue, etc.
 Used for implementing matrices.
 Data structures like trees also sometimes use the array implementation
since arrays are easier to handle than pointers.
 Data structures such as heaps, maps, and sets are built on binary search
trees and balanced binary trees; a heap in particular can be built using an array.

Advantages of array data structure:


 Efficient access to elements: Arrays provide direct and efficient access to
any element in the collection. Accessing an element in an array is an O(1)
operation, meaning that the time required to access an element is
constant and does not depend on the size of the array.
 Fast data retrieval: Arrays allow for fast data retrieval because the data is
stored in contiguous memory locations. This means that the data can be
accessed quickly and efficiently without the need for complex data
structures or algorithms.
 Memory efficiency: Arrays are a memory-efficient way of storing data.
Because the elements of an array are stored in contiguous memory
locations, the size of the array is known at compile time. This means that
memory can be allocated for the entire array in one block, reducing
memory fragmentation.
 Versatility: Arrays can be used to store a wide range of data types,
including integers, floating-point numbers, characters, and even complex
data structures such as objects and pointers.
 Easy to implement: Arrays are easy to implement and understand,
making them an ideal choice for beginners learning computer
programming.

Disadvantages of array data structure:

 Fixed size: Arrays have a fixed size that is determined at the time of
creation. This means that if the size of the array needs to be increased, a
new array must be created and the data must be copied from the old
array to the new array, which can be time-consuming and memory-
intensive.
 Memory allocation issues: Allocating a large array can be problematic,
particularly in systems with limited memory. If the size of the array is too
large, the system may run out of memory, which can cause the program to
crash.
 Insertion and deletion issues: Inserting or deleting an element from an
array can be inefficient and time-consuming because all the elements
after the insertion or deletion point must be shifted to accommodate the
change.
 Wasted space: If an array is not fully populated, there can be wasted
space in the memory allocated for the array. This can be a concern if
memory is limited.
LC2:
Abstract Data Types & their implementation, Overview of time
and space complexity analysis for linear data structures

Abstract Data type (ADT) is a type (or class) for objects whose behavior is
defined by a set of values and a set of operations. The definition of ADT
only mentions what operations are to be performed but not how these
operations will be implemented.

ADT (Abstract Data Type) – means

1. Representation of Data.
2. Set of Operations on the Data.

Consider the array as a basic data structure. The representation of the data
is defined by the language compiler itself, but the operations on the data
are not defined by the compiler. We have to implement or provide the
operations on the array data structure ourselves.

A list of some of the operations that we can perform on an array:

1. Display() – Display the entire array on the screen.
2. Add(n) / Append(n) – Add an element at the end of the array.
3. Insert(index, n) – Add an element at a particular index.
4. Delete(index) – Delete the element at the given index.
5. Search(n) – Check whether the given element is present in the array or not.
6. Get(index) – Return the element present at the given index.
7. Set(index, x) – Replace the element at a particular index with a new element.
8. Max() / Min() – Return the maximum and minimum element in the given array.
9. Reverse() – Reverse the order of elements in the given array.
10. Shift() – Shift all the elements left or right by the given number of positions.

Representation of Data on Array:


Now let’s look at the representation of the data in an array.
First we need array space of some size. Here we initialize an array of size = 10 and length = 0, because there are no elements in the array yet.

[diagram: array "arr" with size = 10 and length = 0]

Abstract means that we can use a data type without knowing its internal details. A user only needs to know what a data type can do, not how it is implemented. Think of an ADT as a black box that hides the inner structure and design of the data type.
Now we’ll define three ADTs, namely the List ADT, Stack ADT, and Queue ADT.

1. List ADT

 The data is generally stored in key sequence in a list which has a head
structure consisting of count, pointers and address of compare
function needed to compare the data in the list.
 The data node contains the pointer to a data structure and a self-
referential pointer which points to the next node in the list.
 The List ADT functions are given below:
 get() – Return an element from the list at any given position.
 insert() – Insert an element at any position of the list.
 remove() – Remove the first occurrence of any element from a non-empty
list.
 removeAt() – Remove the element at a specified location from a non-
empty list.
 replace() – Replace an element at any position by another element.
 size() – Return the number of elements in the list.
 isEmpty() – Return true if the list is empty, otherwise return false.
 isFull() – Return true if the list is full, otherwise return false.

2. Stack ADT

View of stack

 In Stack ADT Implementation instead of data being stored in each node,


the pointer to data is stored.
 The program allocates memory for the data, and its address is passed to the
stack ADT.
 The head node and the data nodes are encapsulated in the ADT. The
calling function can only see the pointer to the stack.
 The stack head structure also contains a pointer to the top and a count
of the number of entries currently in the stack.
 push() – Insert an element at one end of the stack called top.
 pop() – Remove and return the element at the top of the stack, if it is not
empty.
 peek() – Return the element at the top of the stack without removing it, if
the stack is not empty.
 size() – Return the number of elements in the stack.
 isEmpty() – Return true if the stack is empty, otherwise return false.
 isFull() – Return true if the stack is full, otherwise return false.
3. Queue ADT

View of Queue

 The queue abstract data type (ADT) follows the basic design of the stack
abstract data type.
 Each node contains a void pointer to the data and the link pointer to the
next element in the queue. The program’s responsibility is to allocate
memory for storing the data.
 enqueue() – Insert an element at the end of the queue.
 dequeue() – Remove and return the first element of the queue, if the
queue is not empty.
 peek() – Return the first element of the queue without removing it, if the
queue is not empty.
 size() – Return the number of elements in the queue.
 isEmpty() – Return true if the queue is empty, otherwise return false.
 isFull() – Return true if the queue is full, otherwise return false.

Time and Space Complexity of Linked List


The time complexity of a linked list is O(n) for accessing an element by
index, for insertion or deletion at a given position, and for searching, where
n is the number of elements. However, insertion and deletion at the
beginning take O(1) time, and insertion and deletion at the end also take
O(1) in a doubly linked list, due to direct access to the head and tail nodes.
Space complexity for both types of linked lists is O(n), as each element
requires space for its data and pointers to the next (and possibly previous)
node, resulting in linear space usage proportional to the number of elements.

Operation                     Time Complexity        Time Complexity        Space
                              (Singly Linked List)   (Doubly Linked List)   Complexity

Accessing by Index            O(n)                   O(n)                   O(1)
Insertion at Beginning        O(1)                   O(1)                   O(1)
Insertion at End              O(n)                   O(1)                   O(1)
Insertion at Given Position   O(n)                   O(n)                   O(1)
Deletion at Beginning         O(1)                   O(1)                   O(1)
Deletion at End               O(n)                   O(1)                   O(1)
Deletion at Given Position    O(n)                   O(n)                   O(1)
Searching                     O(n)                   O(n)                   O(1)

Time Complexity of Searching (Finding an Element) in Linked List:


 Best Case: O(1) – If the element is found at the head of the list.
 In the best case, the target element is located at the beginning of
the linked list, so only one comparison is needed to find it.
 Worst Case: O(n) – If the element is at the end of the list or not present.
 In the worst case, the target element is either at the end of the
list, requiring traversal through all elements, or not present at all,
resulting in full traversal.
 Average Case: O(n) – Similar to the worst case, as each element may
need to be checked on average.
 On average, the target element could be located anywhere
within the list, requiring traversal through approximately half of
the elements.
Time Complexity of Insertion (Adding an Element) in Linked List:
 Best Case: O(1) – If inserting at the beginning of the list.
 Insertion at the beginning requires updating the head pointer,
which can be done in constant time.
 Worst Case: O(n) – If inserting at the end or in the middle of the list,
requiring traversal.
 Inserting at the end or in the middle of the list requires
traversing to the appropriate position, which may involve visiting
all elements.
 Average Case: O(n) – Similar to the worst case, as traversal may be
needed on average.
 On average, insertion could occur anywhere within the list,
requiring traversal through approximately half of the elements.
Time Complexity of Deletion (Removing an Element) in Linked List:
 Best Case: O(1) – If deleting the first element.
 Deletion of the first element only requires updating the head
pointer, which can be done in constant time.
 Worst Case: O(n) – If deleting the last element or one in the middle,
requiring traversal.
 Deleting the last element or one in the middle requires traversal
to locate the target element, which may involve visiting all
elements.
 Average Case: O(n) – Similar to the worst case, as traversal may be
needed on average.
 On average, the target element could be located anywhere
within the list, requiring traversal through approximately half of
the elements.
Time Complexity of Traversal in Linked List: O(n)
 Each element needs to be visited once to perform any operation.
Time Complexity of Accessing (Getting/Setting) an element in Linked List:
O(n)
 Typically O(n), as access requires traversal from the head to the desired
position.
 Accessing an arbitrary element in the linked list requires traversal from the
head to the desired position, which may involve visiting all elements.

Time and Space complexity of 1-D Arrays


The time and space complexity of array operations can vary depending on
the specific operation. Here, we discuss common operations on
one-dimensional arrays and provide insights into their time and space
complexities.

One-Dimensional Array Operations:


1. Accessing an Element by Index:
 Time Complexity: O(1)
 Space Complexity: O(1)
 Accessing an element in a one-dimensional array by its index is typically
a constant-time operation because it directly computes the memory
location of the element.
2. Inserting an Element at the End:
 Time Complexity: O(1) (Amortized)
 Space Complexity: O(1)
 Inserting an element at the end of a one-dimensional array usually
involves updating the array’s size and placing the new element in the
next available position, which is a constant-time operation on average.
3. Inserting an Element at the Beginning:
 Time Complexity: O(n)
 Space Complexity: O(n)
 Inserting an element at the beginning of a one-dimensional array
requires shifting all existing elements to make room, resulting in a
linear time and space complexity.
4. Searching for an Element (Linear Search):
 Time Complexity: O(n)
 Space Complexity: O(1)
 In the worst case, searching for an element in a one-dimensional array
may require looking at every element, resulting in a linear time
complexity. The space complexity remains constant.
5. Deleting an Element:
 Time Complexity: O(n)
 Space Complexity: O(1)
 In the worst case, deleting an element from the front takes O(n)
time, since all elements after the deleted element must be shifted back by one position.

LC3: Recursion
What is Recursion?
The process in which a function calls itself directly or indirectly is called
recursion and the corresponding function is called a recursive function.

Recursion is mainly of two types, depending on whether a function
calls itself from within itself, or two or more functions call one
another mutually. The first is called direct recursion and the second
is called indirect recursion.

1. Direct Recursion: These can be further categorized into four types:


 Tail Recursion: If a recursive function calls itself and that
recursive call is the last statement in the function, then it is known
as Tail Recursion. After that call the recursive function performs
nothing.

// Code Showing Tail Recursion

#include <stdio.h>

// Recursion function
void fun(int n)
{
    if (n > 0) {
        printf("%d ", n);

        // Last statement in the function
        fun(n - 1);
    }
}

int main()
{
    int x = 3;
    fun(x);
    return 0;
}

output: 3 2 1

Time Complexity For Tail Recursion : O(n)


Space Complexity For Tail Recursion : O(n)

Head Recursion: If a recursive function calls itself and that recursive call
is the first statement in the function, then it is known as Head
Recursion. There is no statement and no operation before the call. The function
does not process or perform any operation at the time of calling; all
operations are done at returning time.
example:
void fun(int n)
{
    if (n > 0) {
        // First statement in the function
        fun(n - 1);

        printf("%d ", n);
    }
}
Linear Recursion:
If a recursive function calls itself only once, it is known as Linear
Recursion.
Linear recursion code snippet:
fun(n)
{
    // some code
    if (n > 0)
    {
        fun(n - 1); // Calling itself only once
    }
    // some code
}

Tree Recursion
If a recursive function calls itself more than once, it is known
as Tree Recursion.
void fun(int n)
{
    if (n > 0) {
        printf("%d ", n);
        // Calling once
        fun(n - 1);

        // Calling twice
        fun(n - 1);
    }
}

Binary recursion:
In binary recursion, a function makes two recursive calls to itself when invoked.
The Fibonacci series is an example of binary recursion:

fib(1) = fib(2) = 1
fib(n) = fib(n-1) + fib(n-2), if n > 2

int recFib(int n)
{
    // base case
    if (n <= 1)
        return n;

    // binary recursive call
    return recFib(n - 1) + recFib(n - 2);
}
LC 4. Linear Search, Binary Search in an Array

Linear Search

Linear Search is defined as a sequential search algorithm that starts at one


end and goes through each element of a list until the desired element is
found, otherwise the search continues till the end of the list.

How Does Linear Search Algorithm Work?


In Linear Search Algorithm,
 Every element is considered as a potential match for the key and checked for the
same.
 If any element is found equal to the key, the search is successful and the index of
that element is returned.
 If no element is found equal to the key, the search yields “No match found”.

For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90,
40} and key = 30
Step 1: Start from the first element (index 0) and compare the key with each
element (arr[i]).
 Compare the key with the first element arr[0]. Since they are not equal, the
iterator moves to the next element as a potential match.
 Compare the key with the next element arr[1]. Since they are not equal, the
iterator moves to the next element as a potential match.
Step 2: Now when comparing arr[2] with the key, the values match. So the
Linear Search Algorithm yields a successful message and returns the index
of the element where the key was found (here, 2).

Program
C Programs to implement the Searching Techniques – Linear & Binary Search

Linear Search

#include <stdio.h>

int search(int arr[], int n, int key);

int main()
{
    int n, arr[10], i, key, found = 0;

    printf("Enter the size of the array (max 10): ");
    scanf("%d", &n);

    printf("Enter the elements: ");
    for (i = 0; i < n; i++)
        scanf("%d", &arr[i]);

    printf("Enter key to search: ");
    scanf("%d", &key);

    found = search(arr, n, key);

    if (found == -1)
        printf("element not found");
    else
        printf("element found at %d", found);

    return 0;
}

int search(int array[], int n, int x)
{
    // Going through the array sequentially
    for (int i = 0; i < n; i++)
        if (array[i] == x)
            return i + 1;   // position counted from 1

    return -1;
}
Output

Enter the size of the array (max 10): 5

Enter the elements: 2 3 6 89 4

Enter key to search: 2

element found at 1

Binary Search
Binary search is defined as a searching algorithm used on a sorted array that repeatedly divides
the search interval in half. The idea of binary search is to use the information that the array is
sorted to reduce the time complexity to O(log N).

Conditions for when to apply Binary Search on a Data Structure:

To apply the Binary Search algorithm:
 The data structure must be sorted.
 Access to any element of the data structure must take constant time.

Binary Search Algorithm:

In this algorithm:

 Divide the search space into two halves by finding the middle index “mid”.
 Compare the middle element of the search space with the key.
 If the key is found at the middle element, the process is terminated.
 If the key is not found at the middle element, choose which half will be used as the
next search space.
 If the key is smaller than the middle element, the left side is used for the
next search.
 If the key is larger than the middle element, the right side is used for the
next search.
 This process continues until the key is found or the total search space is exhausted.

Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target = 23.
First Step: Calculate the mid and compare the mid element with the key. If the key is less
than mid element, move to left and if it is greater than the mid then move search space to
the right.
 Key (i.e., 23) is greater than current mid element (i.e., 16). The search space moves to
the right.

 Key is less than the current mid 56. The search space moves to the left.

Second Step: If the key matches the value of the mid element, the element
is found and the search stops.
Time Complexity of Linear Search Algorithm:
Best Case Time Complexity: O(1)
 Best case is when the list or array’s first element matches the target
element. The time complexity is O(1) because there is just one comparison
made. So the best case complexity is O(1).
Average Case Time Complexity: O(n)
 The average case time complexity of the linear search algorithm
is O(n), where n is the number of elements in the array being searched.
Worst Case Time Complexity: O(n)
 The worst-case will be when the target element is absent from the list or
array or is located at the end of the list. O(n) time complexity results from
the necessity to traverse the complete list and the n comparisons that are
required.

The space complexity of the Linear Search Algorithm is O(1), as it only


requires a constant amount of extra space regardless of the size of the input
array.

Time Complexity of Binary Search Algorithm:


Best Case Time Complexity of Binary Search Algorithm: O(1)
Best case is when the element is at the middle index of the array. It takes only
one comparison to find the target element. So the best case complexity
is O(1).
Average Case Time Complexity of Binary Search Algorithm: O(log N)

Worst Case Time Complexity of Binary Search Algorithm: O(log N)


The worst case occurs when the element is at the first or last position, or is
absent from the array. As seen in the average case, reaching such an element
takes about log N comparisons, so the worst-case time complexity is O(log N).

The space complexity of the binary search algorithm is O(1), which indicates
that the amount of memory required by the algorithm remains constant
regardless of the size of the input array. This efficiency in both time and space
usage makes binary search a preferred choice for searching in large sorted
arrays, as it significantly reduces the time taken to find an element compared
to linear search algorithms.

LC 5. Fibonacci Search, Bubble Sort


Fibonacci Search
The Fibonacci series is a series of numbers that starts from the two
initial numbers 0 and 1. Each successive number is the sum of the
preceding two numbers in the series.

Fibonacci Search Algorithm


The Fibonacci Search Algorithm makes use of the Fibonacci Series
to diminish the range of an array on which the searching is set to
be performed. With every iteration, the search range decreases
making it easier to locate the element in the array. The detailed
procedure of the searching is seen below −

Step 1 − As the first step, find the smallest Fibonacci number
that is greater than or equal to the size of the input array. Then
also hold the two preceding numbers of the selected Fibonacci
number; that is, we hold the numbers Fm, Fm-1, Fm-2 from the
Fibonacci series.

Step 2 − Initialize the offset value as -1, as we are considering the


entire array as the searching range in the beginning.
Step 3 − Until Fm-2 is greater than 0, we perform the following
steps

 Compare the key element with the element at index [min(offset + Fm-2, n-1)]. If a match is found, return the index.
 If the key element is smaller than this element, we reduce the search range to the part of the array before this element. The Fibonacci numbers are moved two steps back: Fm = Fm-2.
 If the key element is greater than the element at this index, we remove the elements up to and including this element from the search range. The Fibonacci numbers are moved one step back: Fm = Fm-1, and the offset value is set to the index of this element.

Step 4 − As there are two 1s in the Fibonacci series, a case arises where Fm-1 becomes 1, meaning only one element is left in the array to be searched. We compare the key element with that element and return its index on a match. Otherwise, the algorithm reports an unsuccessful search.

Suppose we have a sorted array of elements {12, 14, 16, 17, 20,
24, 31, 43, 50, 62} and need to identify the location of element
24 in it using Fibonacci Search.
Step 1

The size of the input array is 10. The smallest Fibonacci number
greater than 10 is 13.

Therefore, Fm = 13, Fm-1 = 8, Fm-2 = 5.

We initialize offset = -1

Step 2

In the first iteration, compare the key with the element at index = minimum (offset + Fm-2, n – 1) = minimum (-1 + 5, 9) = minimum (4, 9) = 4.

The element at index 4 of the array is 20, which is not a match and is less than the key element.

Step 3

In the second iteration, update the offset value and the Fibonacci numbers.

Since the key is greater, the offset value will become the index of the element, i.e. 4. The Fibonacci numbers are updated as Fm = Fm-1 = 8.

Fm-1 = 5, Fm-2 = 3.

Now, compare the key with the element at index = minimum (offset + Fm-2, n – 1) = minimum (4 + 3, 9) = minimum (7, 9) = 7.

The element at index 7 of the array is 43, which is not a match and is greater than the key.

Step 4

Since the key is smaller, we discard the elements from index 7 onwards, so n = 7 and the offset value remains 4.

The Fibonacci numbers are pushed two steps backward, i.e. Fm = Fm-2 = 3.

Fm-1 = 2, Fm-2 = 1.

Now, compare the key with the element at index = minimum (offset + Fm-2, n – 1) = minimum (4 + 1, 6) = minimum (5, 6) = 5.

The element at index 5 in the array is 24, which is our key element. Index 5 is returned as the output for this example array.

Bubble Sort
In the Bubble Sort algorithm,
 traverse the array from the left, compare adjacent elements, and place the higher one on the right side.
 In this way, the largest element is moved to the rightmost end first.
 This process is then continued to find the second largest element and place it, and so on, until the data is sorted.

How does Bubble Sort Work?


Let us understand the working of bubble sort with the help of the following
illustration:

Input: arr[] = {6, 3, 0, 5}


First Pass:
The largest element is placed in its correct position, i.e., the end of the array.

Second Pass:
Place the second largest element at correct position
Third Pass:
Place the remaining two elements at their correct positions.

Bubble Sort Algorithm

Start
Initialise an array of the required size and the variables i, j and n.

 Run two loops nested in one another.

 The outer loop will run from i = 0 to i < n – 1, where n is the number of
elements in the list.

 The inner loop will run from j = 0 to j < n – i – 1. It is because, after each
iteration of the outer loop, one element at the end (or at the start if the
order is decreasing order) will be in its right place so we can leave it as it
is.

 In the inner loop, we will check if the arr[ j ] > arr[ j + 1 ].


 If it’s true, then we will swap places of these elements.
 If false, we will continue to the next iteration.

 This process will be repeated till the conditions of the loop are satisfied.
LC 6. Selection Sort, Insertion Sort
Selection Sort
The algorithm repeatedly selects the smallest (or largest) element from the unsorted portion
of the list and swaps it with the first element of the unsorted part. This process is repeated
for the remaining unsorted portion until the entire list is sorted.

This approach follows these steps:

1. Start
2. Initialise a variable named min_ind to the index of the first unsorted element (0 at the beginning).
3. Now, run a loop over the unsorted part of the array; whenever an element smaller than arr[min_ind] is found, update min_ind to its index.
4. After the scan, swap the minimum element with the first element of the unsorted part.
5. Now, move the boundary of the sorted part one position to the right.
6. Keep repeating until you reach the end of the array or list.
7. End

How does Selection Sort Algorithm work?


Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}

First pass
 For the first position in the sorted array, the whole array is traversed from index 0 to 4 sequentially. The first position currently holds 64; after traversing the whole array it is clear that 11 is the lowest value.
 Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value in the array, appears in the first position of the sorted list.
Second Pass:
 For the second position, where 25 is present, again traverse the rest of the array in a
sequential manner.
 After traversing, we found that 12 is the second lowest value in the array and it should
appear at the second place in the array, thus swap these values.

Third Pass:
 Now, for third place, where 25 is present again traverse the rest of the array and find the
third least value present in the array.
 While traversing, 22 came out to be the third least value and it should appear at the third
place in the array, thus swap 22 with element present at third position.
Fourth pass:
 Similarly, for fourth position traverse the rest of the array and find the
fourth least element in the array
 As 25 is the 4th lowest value hence, it will place at the fourth position.

Fifth Pass:
 At last the largest value present in the array automatically get placed at
the last position in the array
 The resulted array is the sorted array.
Algorithm

selectionSort (array, size)

loop i from 0 to size – 2

set min_ind as i

loop j from i + 1 to size – 1

if array[j] < array[min_ind]

set min_ind as j

swap array[i] with array[min_ind]

end loop

end selectionSort

Insertion Sort
To sort an array of size N in ascending order, iterate over the array and compare the current element (key) to its predecessor. If the key element is smaller than its predecessor, compare it to the elements before it as well, moving the greater elements one position up to make space for the key.
Working of Insertion Sort algorithm

Consider an example: arr[]: {12, 11, 13, 5, 6}


12 11 13 5 6

 Initially, the first two elements of the array are compared in insertion sort.

12 11 13 5 6

 Here, 12 is greater than 11 hence they are not in the ascending order and 12 is not at its
correct position. Thus, swap 11 and 12.
 So, for now 11 is stored in a sorted sub-array.

11 12 13 5 6

Second Pass:
 Now, move to the next two elements and compare them

11 12 13 5 6

 Here, 13 is greater than 12, thus both elements seems to be in ascending order, hence,
no swapping will occur. 12 also stored in a sorted sub-array along with 11

Third Pass:

 Now, two elements are present in the sorted sub-array which are 11 and 12
 Moving forward to the next two elements which are 13 and 5

11 12 13 5 6

 Both 5 and 13 are not present at their correct place so swap them

11 12 5 13 6

 After swapping, elements 12 and 5 are not sorted, thus swap again
11 5 12 13 6

 Here, again 11 and 5 are not sorted, hence swap again

5 11 12 13 6

 Here, 5 is at its correct position

Fourth Pass:

 Now, the elements which are present in the sorted sub-array are 5, 11 and 12
 Moving to the next two elements 13 and 6

5 11 12 13 6

 Clearly, they are not sorted, thus perform swap between both

5 11 12 6 13

 Now, 6 is smaller than 12, hence, swap again

5 11 6 12 13

 Here, also swapping makes 11 and 6 unsorted hence, swap again

5 6 11 12 13

 Finally, the array is completely sorted.

Algorithm of Insertion Sort

The Insertion Sort program to sort the array in ascending order can be implemented in a few easy steps as mentioned below:-

1. Declare an array of size n.
2. Provide the n inputs such that the array (named arr) is unsorted.
3. Run a loop, with i from 1 to n-1.
4. Assign arr[i] to key and i-1 to j.
5. While j >= 0 and arr[j] > key is True

 arr[j+1] = arr[j]
 Decrement j = j - 1

6. Assign key at arr[j+1]

LC7: Shell Sort

LC8: Quick Sort

QuickSort
QuickSort is a sorting algorithm based on the Divide and
Conquer algorithm that picks an element as a pivot and partitions
the given array around the picked pivot by placing the pivot in its
correct position in the sorted array.

How does Quicksort work?

The key process in quickSort is the partition() function. The target of partitioning is to place the pivot (any element can be chosen to be the pivot) at its correct position in the sorted array, putting all smaller elements to the left of the pivot and all greater elements to the right of the pivot.
Partition is done recursively on each side of the pivot after the pivot
is placed in its correct position and this finally sorts the array.
Quick sort
It is a divide and conquer algorithm.

 Step 1 − Pick an element from the array; call it the pivot element.
 Step 2 − Divide the unsorted array elements into two sub-arrays.
 Step 3 − Elements with values less than the pivot go into the first sub-array; the remaining elements, with values greater than the pivot, go into the second sub-array.

Consider an example given below, wherein

 P is the pivot element.


 L is the left pointer.
 R is the right pointer.

The elements are 6, 3, 7, 2, 4, 5.


Now,

 The pivot is in fixed position.


 All the left elements are less.
 The right elements are greater than pivot.
 Now, divide the array into 2 sub arrays left part and right part.
 Take left partition apply quick sort.
Now,

 The pivot is in fixed position.


 All the left elements are less and sorted
 The right elements are greater and are in sorted order.
 The final sorted list is combining two sub arrays is 2, 3, 4, 5, 6, 7
Quick sort is a highly efficient sorting algorithm and is based on
partitioning of array of data into smaller arrays. A large array is
partitioned into two arrays one of which holds values smaller than
the specified value, say pivot, based on which the partition is made
and another array holds values greater than the pivot value.

Quicksort partitions an array and then calls itself recursively twice to


sort the two resulting subarrays.

Quick Sort Pivot Algorithm


Based on our understanding of partitioning in quick sort,

1. Choose the highest index value as the pivot
2. Take two variables to point left and right of the list, excluding the pivot
3. Left points to the low index
4. Right points to the high index
5. While the value at left is less than the pivot, move right
6. While the value at right is greater than the pivot, move left
7. If neither step 5 nor step 6 applies, swap the values at left and right
8. If left ≥ right, the point where they meet is the pivot's final position

Quick Sort Algorithm


Using pivot algorithm recursively, we end up with smaller possible
partitions. Each partition is then processed for quick sort. We
define recursive algorithm for quicksort as follows −

1. Make the right-most index value pivot


2. Partition the array using pivot value
3. Quicksort left partition recursively
4. Quicksort right partition recursively

Quick Sort Algorithm


Step 1: Initialize two pointers, i and j, at the beginning and end of the array
(excluding the pivot).
Step 2: Increment i until you find an element greater than or equal to the
pivot.
Step 3: Decrement j until you find an element less than or equal to the pivot.
Step 4: If i is less than j, swap the elements at i and j.
Step 5: Repeat steps 2-4 until i and j cross each other.
Step 6: Swap the pivot element with the element at j. This places the pivot in
its final position, dividing the array into two partitioned subarrays.
Step 7: Apply Quick Sort to the sub-arrays (Continue recursion until each sub-
array becomes sorted)

QuickSort pseudocode

procedure quickSort(left, right)


if right-left <= 0
return
else
pivot = A[right]
partition = partitionFunc(left, right, pivot)
quickSort(left,partition-1)
quickSort(partition+1,right)
end if
end procedure

Quick Sort Time Complexity


Variation Time Complexity Space Complexity

Best Case O(n log n) O(log n)

Average Case O(n log n) O(log n)

Worst Case O(n^2) O(n)

Analysis
The worst case complexity of the Quick-Sort algorithm is O(n^2). However, in average cases we generally get the output in O(n log n) time.

LC9: Merge Sort


Like QuickSort, Merge Sort is a Divide and Conquer algorithm. It
divides the input array into two halves, calls itself for the two
halves, and then it merges the two sorted halves. The merge()
function is used for merging two halves. The merge(arr, l, m, r) is a
key process that assumes that arr[l..m] and arr[m+1..r] are sorted
and merges the two sorted sub-arrays into one.

Algorithm:

Step 1: Start

Step 2: Declare an array and left, right, mid variable

Step 3: Perform merge function.

mergesort (array, left, right)

if left >= right

return

mid = (left + right) / 2

mergesort(array, left, mid)

mergesort(array, mid+1, right)

merge(array, left, mid, right)

Step 4: Stop

How Merge sort Works?


To know the functioning of merge sort, let's consider an array arr[] =
{38, 27, 43, 3, 9, 82, 10}
 At first, check if the left index of array is less than the right index,
if yes then calculate its mid point

 Now, as we already know, merge sort first divides the whole array recursively into equal halves, until single-element (atomic) sub-arrays are reached.
 Here, we see that an array of 7 items is divided into two arrays of
size 4 and 3 respectively.
 Now, again find that is left index is less than the right index for
both arrays, if found yes, then again calculate mid points for both
the arrays.

 Now, further divide these two arrays into further halves, until the
atomic units of the array is reached and further division is not
possible.

 After dividing the array into smallest units, start merging the
elements again based on comparison of size of elements
 Firstly, compare the element for each list and then combine
them into another list in a sorted manner.
 After the final merging, the list looks like this:

Merge sort complexity
Now, let's see the time complexity of merge sort in best case, average case, and in
worst case. We will also see the space complexity of the merge sort.

1. Time Complexity

Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n*logn)
