Dsa JS

Data structures

UNIT-2

NOTE : (Explain one program for each question) (Common for all UNITS)

1) STACK :

DEFINITION :

Stack is a linear data structure that follows a particular order in which the operations
are performed. The order may be LIFO(Last In First Out) or FILO(First In Last Out). LIFO
implies that the element that is inserted last, comes out first and FILO implies that the
element that is inserted first, comes out last.

What is Stack Data Structure?

A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. It
behaves like a stack of plates, where the last plate added is the first one to be removed.

Basic Operations of Stack Data Structures:

Push: Adds an element to the top of the stack.


Pop: Removes the top element from the stack.
Peek: Returns the top element without removing it.
IsEmpty: Checks if the stack is empty.
IsFull: Checks if the stack is full (in case of fixed-size arrays).
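The operations above can be sketched in JavaScript with a small array-backed class. This is an illustrative sketch, not a standard library type; the `maxSize` parameter is an assumption added here so that IsFull is meaningful, as with fixed-size arrays.

```javascript
// Minimal array-backed stack (LIFO). maxSize is an assumed
// capacity so that isFull() has something to check against.
class Stack {
  constructor(maxSize = 10) {
    this.items = [];
    this.maxSize = maxSize;
  }
  push(item) {               // Push: add an element to the top
    if (this.isFull()) throw new Error("Stack overflow");
    this.items.push(item);
  }
  pop() {                    // Pop: remove and return the top element
    if (this.isEmpty()) throw new Error("Stack underflow");
    return this.items.pop();
  }
  peek() {                   // Peek: read the top without removing it
    return this.items[this.items.length - 1];
  }
  isEmpty() { return this.items.length === 0; }
  isFull() { return this.items.length >= this.maxSize; }
}

const s = new Stack(3);
s.push(10);
s.push(20);
console.log(s.peek()); // 20
console.log(s.pop());  // 20
```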
Applications of Stack Data Structures:
i. Recursion
ii. Expression Evaluation and Parsing
iii. Depth-First Search (DFS)
iv. Undo/Redo Operations
v. Browser History
2) QUEUE :

A Queue Data Structure is a fundamental concept in computer science used for storing
and managing data in a specific order. It follows the principle of “First in, First out” (FIFO),
where the first element added to the queue is the first one to be removed. Queues are
commonly used in various algorithms and applications for their simplicity and efficiency in
managing data flow.

What is Queue in Data Structures?


A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle.
It operates like a line where elements are added at one end (rear) and removed from the other
end (front).

Basic Operations of Queue Data Structure:


Enqueue (Insert): Adds an element to the rear of the queue.
Dequeue (Delete): Removes and returns the element from the front of the queue.
Peek: Returns the element at the front of the queue without removing it.
isEmpty: Checks if the queue is empty.
isFull: Checks if the queue is full.
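A minimal JavaScript sketch of the queue operations above; as with the stack, `maxSize` is an assumed capacity so that isFull is meaningful. Note that `shift()` on an array is O(n), so this is for illustration rather than a performance-tuned queue.

```javascript
// Minimal array-backed queue (FIFO): insert at the rear,
// remove from the front.
class Queue {
  constructor(maxSize = 10) {
    this.items = [];
    this.maxSize = maxSize;  // assumed fixed capacity
  }
  enqueue(item) {            // Enqueue: add to the rear
    if (this.isFull()) throw new Error("Queue overflow");
    this.items.push(item);
  }
  dequeue() {                // Dequeue: remove and return from the front
    if (this.isEmpty()) throw new Error("Queue underflow");
    return this.items.shift();
  }
  peek() {                   // front element without removing it
    return this.items[0];
  }
  isEmpty() { return this.items.length === 0; }
  isFull() { return this.items.length >= this.maxSize; }
}

const q = new Queue();
q.enqueue("a");
q.enqueue("b");
console.log(q.dequeue()); // "a" (first in, first out)
```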
Applications of Queue :
i. Task scheduling in operating systems
ii. Data transfer in network communication
iii. Simulation of real-world systems (e.g., waiting lines)
iv. Priority queues for event processing
3) INFIX & POSTFIX EXPRESSION :

Infix expression:
The expression of the form “a operator b” (a + b) i.e., when an operator is in-between
every pair of operands.
Postfix expression:
The expression of the form “a b operator” (ab+) i.e., When every pair of operands is
followed by an operator.

Examples:
Input: A + B * C + D
Output: ABC*+D+

Input: ((A + B) – C * (D / E)) + F
Output: AB+CDE/*-F+

Why postfix representation of the expression?


The compiler scans the expression either from left to right or from right to left.
Consider the expression: a + b * c + d. Because of operator precedence, the compiler
must first find and evaluate b * c, then return to add a, and then add d, so an infix
expression may require multiple scans. A postfix expression, by contrast, can be
evaluated in a single left-to-right scan using a stack, which is why compilers prefer it.

How to convert an Infix expression to a Postfix expression?


To convert infix expression to postfix expression, use the stack data structure. Scan the
infix expression from left to right. Whenever we get an operand, add it to the postfix
expression and if we get an operator or parenthesis add it to the stack by maintaining their
precedence.

Illustration (Example):
Follow the below illustration for a better understanding
Consider the infix expression exp = “a+b*c+d”
and the infix expression is scanned using the iterator i, which is initialized as i = 0.

1st Step: Here i = 0 and exp[i] = ‘a’ i.e., an operand. So add this in the postfix expression.
Therefore, postfix = “a”.
2nd Step: Here i = 1 and exp[i] = ‘+’ i.e., an operator. Push this into the stack. postfix = “a”
and stack = {+}.

3rd Step: Now i = 2 and exp[i] = ‘b’ i.e., an operand. So add this in the postfix expression.
postfix = “ab” and stack = {+}.

4th Step: Now i = 3 and exp[i] = ‘*’ i.e., an operator. Push this into the stack. postfix = “ab”
and stack = {+, *}.
5th Step: Now i = 4 and exp[i] = ‘c’ i.e., an operand. Add this in the postfix expression.
postfix = “abc” and stack = {+, *}.

6th Step: Now i = 5 and exp[i] = ‘+’ i.e., an operator. The topmost element of the stack has
higher precedence. So pop until the stack becomes empty or the top element has less
precedence. ‘*’ is popped and added in postfix. So postfix = “abc*” and stack = {+}.

Now the top element is ‘+’, which has equal (not lower) precedence, so pop it too. postfix = “abc*+”.
Now the stack is empty, so push ‘+’ into the stack. stack = {+}.

7th Step: Now i = 6 and exp[i] = ‘d’ i.e., an operand. Add this in the postfix expression.
postfix = “abc*+d”.

Final Step: Now no element is left. So empty the stack and add it in the postfix expression.
postfix = “abc*+d+”.
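The steps above can be sketched as a JavaScript function. This is a minimal version handling single-character operands, the operators + – * /, and parentheses, matching the precedence rules described in the illustration.

```javascript
// Stack-based infix-to-postfix conversion, as illustrated above:
// operands go straight to the output; operators wait on the stack
// until a lower-precedence operator (or the end) flushes them.
function infixToPostfix(exp) {
  const prec = { "+": 1, "-": 1, "*": 2, "/": 2 };
  const stack = [];
  let postfix = "";
  for (const ch of exp) {
    if (/[A-Za-z0-9]/.test(ch)) {
      postfix += ch;                       // operand: add to postfix
    } else if (ch === "(") {
      stack.push(ch);
    } else if (ch === ")") {
      while (stack.length && stack[stack.length - 1] !== "(") {
        postfix += stack.pop();            // flush until matching "("
      }
      stack.pop();                         // discard the "("
    } else if (ch in prec) {
      while (stack.length &&
             stack[stack.length - 1] !== "(" &&
             prec[stack[stack.length - 1]] >= prec[ch]) {
        postfix += stack.pop();            // pop higher/equal precedence
      }
      stack.push(ch);
    }
  }
  while (stack.length) postfix += stack.pop(); // final step: empty stack
  return postfix;
}

console.log(infixToPostfix("a+b*c+d")); // "abc*+d+"
```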
UNIT-3

1) TREE :
Introduction to Tree:
A tree data structure is a hierarchical structure that is used to represent and organize
data in a way that is easy to navigate and search. It is a collection of nodes that are connected
by edges and has a hierarchical relationship between the nodes.

This data structure is a specialized method to organize and store data in the computer
to be used more effectively. It consists of a central node, structural nodes, and sub-nodes,
which are connected via edges. We can also say that tree data structure has roots, branches,
and leaves connected.

Basic Terminologies In Tree Data Structure:


 Parent Node
 Child Node
 Root Node
 Leaf Node or External Node
 Ancestor of a Node
 Descendant
 Level of a node
 Internal node
 Neighbour of a Node & Subtree
Representation of Tree Data Structure:

Types of Tree data structures:

Basic Operation Of Tree Data Structure:


Create – create a tree in the data structure.
Insert − Inserts data in a tree.
Search − Searches specific data in a tree to check whether it is present or not.

Traversal:
i. Preorder Traversal
ii. In order Traversal
iii. Post-order Traversal
2) B-TREE & B+ TREE - DIFFERENCE :
B-Tree:
B-Tree is a self-balancing search tree: its keys are kept sorted, so an inorder traversal visits them in order.
Unlike a binary tree, a B-tree node can have more than two children. A B-tree has a height of logM N (where ‘M’
is the order of the tree and N is the number of keys).

B+ Tree:
B+ tree eliminates the drawback B-tree used for indexing by storing data pointers only
at the leaf nodes of the tree. Thus, the structure of leaf nodes of a B+ tree is quite different
from the structure of internal nodes of the B tree.
DIFFERENCE :

Pointers:
 B tree — All internal and leaf nodes have data pointers.
 B+ tree — Only leaf nodes have data pointers.

Search:
 B tree — Since all keys are not available at the leaves, search often takes more time.
 B+ tree — All keys are at leaf nodes, hence search is faster and more accurate.

Redundant Keys:
 B tree — No duplicates of keys are maintained in the tree.
 B+ tree — Duplicates of keys are maintained, and all keys are present at the leaves.

Insertion:
 B tree — Insertion takes more time and is sometimes not predictable.
 B+ tree — Insertion is easier and the results are always the same.

Deletion:
 B tree — Deletion of an internal node is very complex and the tree has to undergo a lot of transformations.
 B+ tree — Deletion of any node is easy because all keys are found at the leaves.

Leaf Nodes:
 B tree — Leaf nodes are not stored as a structural linked list.
 B+ tree — Leaf nodes are stored as a structural linked list.

Access:
 B tree — Sequential access to nodes is not possible.
 B+ tree — Sequential access is possible, just like a linked list.

Height:
 B tree — For a particular number of nodes, the height is larger.
 B+ tree — Height is less than a B tree for the same number of nodes.

Application:
 B tree — Used in databases and search engines.
 B+ tree — Used in multilevel indexing and database indexing.

Number of Nodes:
 B tree — The number of nodes at any intermediary level ‘l’ is 2^l.
 B+ tree — Each intermediary node can have n/2 to n children.
3) TREE TRAVERSAL :
Tree Traversal Technique :
A Tree Data Structure can be traversed in the following ways:

Depth First Search or DFS :


i. Inorder Traversal
ii. Preorder Traversal
iii. Postorder Traversal

Level Order Traversal or Breadth First Search or BFS

Other Traversal Techniques :

 Boundary Traversal
 Diagonal Traversal

Inorder Traversal:
#Algorithm Inorder(tree)
i. Traverse the left subtree, i.e., call Inorder(left->subtree)
ii. Visit the root.
iii. Traverse the right subtree, i.e., call Inorder(right->subtree)

Uses of Inorder Traversal:


In the case of binary search trees (BST), Inorder traversal gives nodes in non-
decreasing order. To get nodes of BST in non-increasing order, a variation of Inorder
traversal where Inorder traversal is reversed can be used.
Preorder Traversal:
#Algorithm Preorder(tree)
i. Visit the root.
ii. Traverse the left subtree, i.e., call Preorder(left->subtree)
iii. Traverse the right subtree, i.e., call Preorder(right->subtree)

Uses of Preorder:
Preorder traversal is used to create a copy of the tree. Preorder traversal is also used to
get prefix expressions on an expression tree.

Postorder Traversal:
#Algorithm Postorder(tree)
i. Traverse the left subtree, i.e., call Postorder(left->subtree)
ii. Traverse the right subtree, i.e., call Postorder(right->subtree)
iii. Visit the root

Uses of Postorder:
Postorder traversal is used to delete the tree. Please see the question for the deletion of
a tree for details. Postorder traversal is also useful to get the postfix expression of an
expression tree.
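The three traversal algorithms above can be sketched as recursive JavaScript functions over a plain node object of the assumed shape { value, left, right }:

```javascript
// Inorder: left subtree, root, right subtree.
function inorder(node, out = []) {
  if (!node) return out;
  inorder(node.left, out);
  out.push(node.value);
  inorder(node.right, out);
  return out;
}

// Preorder: root, left subtree, right subtree.
function preorder(node, out = []) {
  if (!node) return out;
  out.push(node.value);
  preorder(node.left, out);
  preorder(node.right, out);
  return out;
}

// Postorder: left subtree, right subtree, root.
function postorder(node, out = []) {
  if (!node) return out;
  postorder(node.left, out);
  postorder(node.right, out);
  out.push(node.value);
  return out;
}

// Example tree:   1
//                / \
//               2   3
const tree = {
  value: 1,
  left: { value: 2, left: null, right: null },
  right: { value: 3, left: null, right: null },
};
console.log(inorder(tree));   // [2, 1, 3]
console.log(preorder(tree));  // [1, 2, 3]
console.log(postorder(tree)); // [2, 3, 1]
```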
(Explain your own programs)
UNIT-4
1) GRAPHS :
#Graph Data Structure And Algorithms
Graph Data Structure is a collection of nodes connected by edges. It’s used to
represent relationships between different entities. Graph algorithms are methods used to
manipulate and analyze graphs, solving various problems like finding the shortest path or
detecting cycles.

What is Graph Data Structure?


Graph is a non-linear data structure consisting of vertices and edges. The vertices are
sometimes also referred to as nodes and the edges are lines or arcs that connect any two
nodes in the graph. More formally a Graph is composed of a set of vertices( V ) and a set of
edges( E ). The graph is denoted by G(V, E).

Components of a Graph:
I. Vertices: Vertices are the fundamental units of the graph. Sometimes, vertices are
also known as vertex or nodes. Every node/vertex can be labeled or unlabeled.
II. Edges: Edges are drawn or used to connect two nodes of the graph. It can be ordered
pair of nodes in a directed graph. Edges can connect any two nodes in any possible
way.

Basic Operations on Graphs:


Below are the basic operations on the graph:
i. Insertion of Nodes/Edges in the graph – Insert a node into the graph.
ii. Deletion of Nodes/Edges in the graph – Delete a node from the graph.
iii. Searching on Graphs – Search an entity in the graph.
iv. Traversal of Graphs – Traversing all the nodes in the graph.
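The basic operations above can be sketched with an adjacency-list representation in JavaScript. This is an illustrative sketch (an undirected graph built on a `Map`), not a standard library type:

```javascript
// Undirected graph as an adjacency list: each vertex maps to the
// array of its neighbours.
class Graph {
  constructor() {
    this.adj = new Map();
  }
  addNode(v) {                     // insertion of a node
    if (!this.adj.has(v)) this.adj.set(v, []);
  }
  addEdge(u, v) {                  // insertion of an (undirected) edge
    this.addNode(u);
    this.addNode(v);
    this.adj.get(u).push(v);
    this.adj.get(v).push(u);
  }
  removeNode(v) {                  // deletion: drop the node and all
    this.adj.delete(v);            // edges pointing at it
    for (const nbrs of this.adj.values()) {
      const i = nbrs.indexOf(v);
      if (i !== -1) nbrs.splice(i, 1);
    }
  }
  hasNode(v) {                     // searching for an entity
    return this.adj.has(v);
  }
  nodes() {                        // traversal over all vertices
    return [...this.adj.keys()];
  }
}

const g = new Graph();
g.addEdge("A", "B");
g.addEdge("A", "C");
console.log(g.nodes()); // ["A", "B", "C"]
```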
2) BREADTH FIRST SEARCH :
Breadth First Search (BFS) is a fundamental graph traversal algorithm. It involves
visiting all the connected nodes of a graph in a level-by-level manner. In this article, we will
look into the concept of BFS and how it can be applied to graphs effectively.

Breadth First Search (BFS) for a Graph Algorithm:


i. Initialization
ii. Exploration
iii. Termination

Example:
Step 1: Initially the queue and visited arrays are empty.

Step 2: Push node 0 into the queue and mark it visited.

Step 3: Remove node 0 from the front of the queue, visit its unvisited neighbours, and push
them into the queue.

Step 4: Remove node 1 from the front of the queue, visit its unvisited neighbours, and push
them into the queue.

Step 5: Remove node 2 from the front of the queue, visit its unvisited neighbours, and push
them into the queue.

Step 6: Remove node 3. Every neighbour of node 3 is already visited, so move on to the next
node at the front of the queue.

Step 7: Remove node 4 from the front of the queue, visit its unvisited neighbours, and push
them into the queue.
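The steps above can be sketched as a queue-based JavaScript function. The example adjacency list is an assumption chosen to match the node numbers in the walkthrough:

```javascript
// Breadth First Search: visit nodes level by level using a queue
// and a visited set. Returns the order in which nodes are visited.
function bfs(adj, start) {
  const visited = new Set([start]);  // mark start visited (Step 2)
  const queue = [start];
  const order = [];
  while (queue.length) {
    const node = queue.shift();      // remove from the front
    order.push(node);
    for (const nbr of adj[node] || []) {
      if (!visited.has(nbr)) {       // push unvisited neighbours
        visited.add(nbr);
        queue.push(nbr);
      }
    }
  }
  return order;                      // queue empty: traversal ends
}

// Assumed example graph: 0-1, 0-2, 1-3, 2-4.
const adjExample = { 0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2] };
console.log(bfs(adjExample, 0)); // [0, 1, 2, 3, 4]
```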

Applications of BFS in Graphs:


BFS has various applications in graph theory and computer science, including:
i. Shortest Path Finding
ii. Cycle Detection
iii. Connected Components
iv. Topological Sorting
v. Level Order Traversal of Binary Trees
vi. Network Routing
3) DEPTH FIRST SEARCH :
Depth First Traversal (or DFS) for a graph is similar to Depth First Traversal of a tree.
The only catch is that, unlike trees, graphs may contain cycles, so a node might otherwise be
visited twice; a visited array is used to avoid this.
Example:
Input: n = 4, e = 6
0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0, 2 -> 3, 3 -> 3
Output: DFS from vertex 1 : 1 2 0 3

EXAMPLE :
Step 1: Initially the stack and visited arrays are empty.

Step 2: Visit 0 and put its adjacent nodes which are not yet visited into the stack.

Step 3: Now node 1 is at the top of the stack, so visit node 1, pop it from the stack, and put
all of its adjacent nodes which are not visited into the stack.

Step 4: Now node 2 is at the top of the stack, so visit node 2, pop it from the stack, and put
all of its adjacent nodes which are not visited (i.e., 3, 4) into the stack.

Step 5: Now node 4 is at the top of the stack, so visit node 4, pop it from the stack, and put
all of its adjacent nodes which are not visited into the stack.

Step 6: Now node 3 is at the top of the stack, so visit node 3, pop it from the stack, and put
all of its adjacent nodes which are not visited into the stack.

Now, Stack becomes empty, which means we have visited all the nodes and our DFS
traversal ends.
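The stack-based procedure above can be sketched in JavaScript. Note that the exact visit order of an iterative DFS depends on the order neighbours are pushed, so it may differ from the recursive output shown earlier; both are valid DFS orders.

```javascript
// Iterative Depth First Search using an explicit stack.
// The visited set handles cycles (a node is never visited twice).
function dfs(adj, start) {
  const visited = new Set();
  const stack = [start];
  const order = [];
  while (stack.length) {
    const node = stack.pop();        // take the node at the top
    if (visited.has(node)) continue; // already visited: skip (cycles)
    visited.add(node);
    order.push(node);
    for (const nbr of adj[node] || []) {
      if (!visited.has(nbr)) stack.push(nbr);
    }
  }
  return order;                      // stack empty: traversal ends
}

// The directed example from above: 0->1, 0->2, 1->2, 2->0, 2->3, 3->3.
const adjDirected = { 0: [1, 2], 1: [2], 2: [0, 3], 3: [3] };
console.log(dfs(adjDirected, 1)); // visits all of 1, 2, 3, 0 exactly once
```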

Complexity Analysis of Depth First Search:

a) Time complexity: O(V + E), where V is the number of vertices and E is the number
of edges in the graph.
b) Auxiliary Space: O(V), for the visited array of size V and the stack used by the
DFS calls, which can hold up to V vertices.
UNIT-5
1) LINEAR SEARCH :
Linear Search is defined as a sequential search algorithm that starts at one end and
goes through each element of a list until the desired element is found, otherwise the search
continues till the end of the data set.

Example:
Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30

Step 1: Start from the first element (index 0) and compare key with each element
(arr[i]). Comparing key with the first element arr[0]: since they are not equal, the iterator
moves to the next element as a potential match.

Comparing key with the next element arr[1]: since they are not equal, the iterator moves to
the next element as a potential match.

Step 2: Now when comparing arr[2] with key, the value matches. So the Linear Search
Algorithm will yield a successful message and return the index of the element when key is
found (here 2).
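The scan described above can be sketched as a short JavaScript function:

```javascript
// Linear search: scan the array sequentially and return the index
// of the first match, or -1 if the key is not present.
function linearSearch(arr, key) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === key) return i; // match found: report its index
  }
  return -1;                      // reached the end without a match
}

console.log(linearSearch([10, 50, 30, 70, 80, 20, 90, 40], 30)); // 2
```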
Time and Space Complexity of Linear Search:
Time Complexity:
i. Best Case: Best case complexity is O(1)
ii. Worst Case: The worst-case complexity is O(N) where N is the size of the list.
iii. Average Case: O(N)

Advantages of Linear Search:


 Does not require any additional memory.
 It is a well-suited algorithm for small datasets.

Disadvantages of Linear Search:


 Linear search has a time complexity of O(N), which in turn makes it slow for
large datasets.
 Not suitable for large arrays.
2) BINARY SEARCH :
Binary Search is defined as a searching algorithm used in a sorted array by repeatedly
dividing the search interval in half. The idea of binary search is to use the information that
the array is sorted and reduce the time complexity to O(log N).

Conditions:
To apply Binary Search algorithm:
 The data structure must be sorted.
 Access to any element of the data structure takes constant time.

Binary Search Algorithm:


In this algorithm, Divide the search space into two halves by finding the middle index “mid”.

EXAMPLE :
Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target = 23.
First Step: Calculate the mid and compare the mid element with the key. If the key is less
than mid element, move to left and if it is greater than the mid then move search space to the
right.
Key (i.e., 23) is greater than current mid element (i.e., 16). The search space moves to the
right.

Key is less than the current mid 56. The search space moves to the left.

Second Step: If the key matches the value of the mid element, the element is found and stop
search.
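The halving procedure above can be sketched as an iterative JavaScript function:

```javascript
// Binary search on a sorted array: repeatedly halve the search
// space around the middle index. Returns the index of target, or -1.
function binarySearch(arr, target) {
  let low = 0;
  let high = arr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (arr[mid] === target) return mid;  // key matches mid: found
    if (arr[mid] < target) low = mid + 1; // move search space right
    else high = mid - 1;                  // move search space left
  }
  return -1;                              // search space empty
}

const arr = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91];
console.log(binarySearch(arr, 23)); // 5
```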

Complexity Analysis of Binary Search:


Time Complexity:
i. Best Case: O(1)
ii. Average Case: O(log N)
iii. Worst Case: O(log N)
iv. Auxiliary Space: O(1), If the recursive call stack is considered then the auxiliary
space will be O(logN).
SORTING TECHNIQUES
1) BUBBLE SORT :
Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the adjacent
elements if they are in the wrong order. This algorithm is not suitable for large data sets as its
average and worst-case time complexity is quite high.

EXAMPLE:
Input: arr[] = {6, 0, 3, 5}
First Pass: The largest element is placed in its correct position, i.e., the end of the array.

Second Pass: Place the second largest element at correct position

Third Pass: Place the remaining two elements at their correct positions.
Total no. of passes: n-1
Total no. of comparisons: n*(n-1)/2
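The passes above can be sketched in JavaScript; after pass k, the k largest elements are in their final positions, so each pass compares one fewer pair:

```javascript
// Bubble sort: repeatedly swap adjacent out-of-order elements.
// n-1 passes; each pass bubbles the largest remaining element
// to the end of the unsorted part.
function bubbleSort(arr) {
  const a = [...arr];                        // sort a copy
  for (let pass = 0; pass < a.length - 1; pass++) {
    for (let i = 0; i < a.length - 1 - pass; i++) {
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]]; // swap the adjacent pair
      }
    }
  }
  return a;
}

console.log(bubbleSort([6, 0, 3, 5])); // [0, 3, 5, 6]
```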

Complexity Analysis of Bubble Sort:


Time Complexity: O(N²)
Auxiliary Space: O(1)

Advantages of Bubble Sort:


i. Bubble sort is easy to understand and implement.
ii. It does not require any additional memory space.

Disadvantage of Bubble Sort:


Bubble sort has a time complexity of O(N²) which makes it very slow for large data sets.
2) SELECTION SORT :
Selection sort is a simple, in-place sorting algorithm that works by repeatedly
selecting the smallest (or largest) element from the unsorted portion of the list and moving it
to the sorted portion of the list.

EXAMPLE:
Consider the array arr[] = {64, 25, 12, 22, 11} as an example.
First Pass: For the first position in the sorted part, the whole array is traversed from index 0
to 4 sequentially. 64 is stored at the first position; after traversing the whole array,
it is clear that 11 is the lowest value, so swap 64 with 11.

Second Pass:
 For the second position, where 25 is present, again traverse the rest of the array in a
sequential manner.
 After traversing, we found that 12 is the second lowest value in the array and it
should appear at the second place in the array, thus swap these values.

Third Pass:
 Now, for third place, where 25 is present again traverse the rest of the array and find
the third least value present in the array.
 While traversing, 22 came out to be the third least value and it should appear at the
third place in the array, thus swap 22 with element present at third position.
Fourth pass:
 Similarly, for the fourth position, traverse the rest of the array and find the fourth least
element in the array.
 As 25 is the 4th lowest value, it is placed at the fourth position.

Fifth Pass:
 At last, the largest value present in the array is automatically placed at the last
position in the array.
 The resulting array is the sorted array.

overall complexity = O(N) * O(N) = O(N*N) = O(N²)
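The passes above can be sketched in JavaScript; the outer loop grows the sorted part by one element per pass, and the inner loop finds the minimum of the unsorted part:

```javascript
// Selection sort: on each pass, find the smallest element in the
// unsorted part and swap it into the next position of the sorted part.
function selectionSort(arr) {
  const a = [...arr];                    // sort a copy
  for (let i = 0; i < a.length - 1; i++) {
    let min = i;                         // index of smallest so far
    for (let j = i + 1; j < a.length; j++) {
      if (a[j] < a[min]) min = j;
    }
    if (min !== i) [a[i], a[min]] = [a[min], a[i]]; // move into place
  }
  return a;
}

console.log(selectionSort([64, 25, 12, 22, 11])); // [11, 12, 22, 25, 64]
```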

Advantages of Selection Sort Algorithm :


 Simple and easy to understand.
 Works well with small datasets.

Disadvantages of the Selection Sort Algorithm :


 Selection sort has a time complexity of O(n^2) in the worst and average case.
 Does not work well on large datasets.
3) INSERTION SORT :
Insertion sort is a simple sorting algorithm that works by building a sorted array one
element at a time. It is considered an “in-place” sorting algorithm, meaning it doesn’t require
any additional memory space beyond the original array.

Working of Insertion Sort Algorithm:

Consider the array arr[] = {23, 1, 10, 5, 2} as an example.
First Pass:
Current element is 23.
The first element in the array is assumed to be sorted.
The sorted part until the 0th index is : [23]
Second Pass:
Compare 1 with 23 (current element with the sorted part).
Since 1 is smaller, insert 1 before 23.
The sorted part until 1st index is: [1, 23]
Third Pass:
Compare 10 with 1 and 23 (current element with the sorted part).
Since 10 is greater than 1 and smaller than 23, insert 10 between 1 and 23.
The sorted part until 2nd index is: [1, 10, 23]
Fourth Pass:
Compare 5 with 1, 10, and 23 (current element with the sorted part).
Since 5 is greater than 1 and smaller than 10, insert 5 between 1 and 10.
The sorted part until 3rd index is: [1, 5, 10, 23]
Fifth Pass:
Compare 2 with 1, 5, 10, and 23 (current element with the sorted part).
Since 2 is greater than 1 and smaller than 5, insert 2 between 1 and 5.
The sorted part until the 4th index is: [1, 2, 5, 10, 23]
Final Array:
The sorted array is: [1, 2, 5, 10, 23]
Time Complexity: O(N^2)
Auxiliary Space: O(1)
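The passes above can be sketched in JavaScript; larger elements in the sorted part are shifted right until the current element's position is found:

```javascript
// Insertion sort: grow a sorted prefix one element at a time by
// shifting larger sorted elements right and inserting the current one.
function insertionSort(arr) {
  const a = [...arr];                 // sort a copy
  for (let i = 1; i < a.length; i++) {
    const key = a[i];                 // current element to insert
    let j = i - 1;
    while (j >= 0 && a[j] > key) {    // shift larger elements right
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = key;                   // insert into the sorted part
  }
  return a;
}

console.log(insertionSort([23, 1, 10, 5, 2])); // [1, 2, 5, 10, 23]
```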

Complexity Analysis of Insertion Sort:


Time Complexity of Insertion Sort
i. Best case: O(n), where n is the number of elements in the list.
ii. Average case: O(n2), If the list is randomly ordered
iii. Worst case: O(n2), If the list is in reverse order

Space Complexity of Insertion Sort :


Auxiliary Space: O(1), Insertion sort requires O(1) additional space, making it a
space-efficient sorting algorithm.

Advantages of Insertion Sort:


 Simple and easy to implement.
 Stable sorting algorithm.
 Space-efficient.

Disadvantages of Insertion Sort:


 Inefficient for large lists.
 Not as efficient as other sorting algorithms (e.g., merge sort, quick sort) for most
cases.
HASHING TECHNIQUE :
Hashing is a fundamental technique that efficiently stores and retrieves data in a
way that allows for quick access. It involves mapping data to a specific index in a hash table
using a hash function, enabling fast retrieval of information based on its key. This
method is commonly used in databases, caching systems, and various programming
applications to optimize search and retrieval operations.

What is Hashing in Data Structure?


Hashing is a technique used in data structures to store and retrieve data efficiently. It
involves using a hash function to map data items to a fixed-size array which is called a hash
table.

Hash Table in Data Structure :


A hash table is also known as a hash map. It is a data structure that stores key-value
pairs, using a hash function to map keys to positions in a fixed-size array. This
allows for faster search, insertion, and deletion operations.
Hash Function :
The hash function is a function that takes a key and returns an index into the hash table.
The goal of a hash function is to distribute keys evenly across the hash table, minimizing
collisions (when two keys map to the same index).

Common hash functions include:


Division Method: Key % Hash Table Size
Multiplication Method: (Key * Constant) % Hash Table Size
Universal Hashing: A family of hash functions designed to minimize collisions
What is a Hash Collision?
A hash collision occurs when two different keys map to the same index in a hash table. This
can happen even with a good hash function, especially if the hash table is full or the keys are
similar.

Collision Resolution Techniques :


There are two main families of collision resolution techniques:
 Open Addressing
   i. Linear Probing
   ii. Quadratic Probing
 Closed Addressing
   i. Chaining
 Cuckoo Hashing
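Chaining (closed addressing) can be sketched in JavaScript as follows. The hash function here is an assumed illustrative one (a simple polynomial string hash reduced with the division method), not a standard:

```javascript
// Hash table with separate chaining: each bucket is an array of
// [key, value] pairs, so colliding keys simply share a bucket.
class HashTable {
  constructor(size = 16) {
    this.size = size;
    this.buckets = Array.from({ length: size }, () => []);
  }
  _hash(key) {
    // Illustrative hash: polynomial string hash, then the
    // division method (value % table size) picks the bucket.
    let h = 0;
    for (const ch of String(key)) {
      h = (h * 31 + ch.charCodeAt(0)) % this.size;
    }
    return h;
  }
  set(key, value) {
    const bucket = this.buckets[this._hash(key)];
    const entry = bucket.find(([k]) => k === key);
    if (entry) entry[1] = value;     // existing key: update in place
    else bucket.push([key, value]);  // collision-safe: chain in bucket
  }
  get(key) {
    const bucket = this.buckets[this._hash(key)];
    const entry = bucket.find(([k]) => k === key);
    return entry ? entry[1] : undefined;
  }
}

const ht = new HashTable(4);         // tiny table to force collisions
ht.set("apple", 1);
ht.set("banana", 2);
console.log(ht.get("apple"));  // 1
console.log(ht.get("banana")); // 2
```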

Applications of Hashing :
Hash tables are used in a wide variety of applications, including:
Databases: Storing and retrieving data based on unique keys
Caching: Storing frequently accessed data for faster retrieval
Symbol Tables: Mapping identifiers to their values in programming languages
Network Routing: Determining the best path for data packets
