
Data Structure

Q.1. Explain the AVL tree, insertion, and deletion with an example.
—> AVL Tree is a type of self-balancing binary search tree where the difference between the
heights of the left and right subtrees (called the balance factor) for any node is at most one. This
property ensures that the tree remains balanced, allowing for efficient operations like insertion,
deletion, and lookup.

### Properties of AVL Trees:


1. For any node in the tree, the height difference between the left and right subtree (balance
factor) must be -1, 0, or +1.
2. The left subtree contains only nodes with keys less than the node’s key.
3. The right subtree contains only nodes with keys greater than the node’s key.

### Insertion in AVL Trees:


When inserting a node, you follow the standard binary search tree insertion process. After
inserting, you check for balance and perform rotations if necessary.

#### Example of Insertion:


Let's insert the following sequence of numbers into an AVL tree: 30, 20, 10, 40, 25.

1. Insert 30:
```
30
```

2. Insert 20:
```
  30
 /
20
```
The balance factor of node 30 is +1, which is within the allowed range.

3. Insert 10:
```
    30
   /
  20
 /
10
```
The balance factor of node 30 is now +2 (left-heavy), so we perform a right rotation on 30:
```
   20
  /  \
 10   30
```

4. Insert 40:
```
   20
  /  \
 10   30
        \
         40
```

5. Insert 25:
```
   20
  /  \
 10   30
     /  \
    25   40
```
The balance factors are all within the allowed range, so no rotations are needed.
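The rotation mechanics can be sketched in Python. This is a minimal sketch, not the notes' own code: the names `AVLNode`, `avl_insert`, and the helper functions are my own, and only the single-rotation cases exercised here are shown alongside the double-rotation cases for completeness.

```python
class AVLNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

def height(node):
    return node.height if node else 0

def balance(node):
    # Balance factor: height(left subtree) - height(right subtree)
    return height(node.left) - height(node.right) if node else 0

def update(node):
    node.height = 1 + max(height(node.left), height(node.right))

def rotate_right(y):
    x = y.left
    y.left = x.right
    x.right = y
    update(y)
    update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    update(x)
    update(y)
    return y

def avl_insert(root, key):
    # Standard BST insertion first
    if root is None:
        return AVLNode(key)
    if key < root.key:
        root.left = avl_insert(root.left, key)
    else:
        root.right = avl_insert(root.right, key)

    # Then rebalance on the way back up
    update(root)
    b = balance(root)
    if b > 1 and key < root.left.key:        # Left-Left: right rotation
        return rotate_right(root)
    if b < -1 and key > root.right.key:      # Right-Right: left rotation
        return rotate_left(root)
    if b > 1 and key > root.left.key:        # Left-Right: double rotation
        root.left = rotate_left(root.left)
        return rotate_right(root)
    if b < -1 and key < root.right.key:      # Right-Left: double rotation
        root.right = rotate_right(root.right)
        return rotate_left(root)
    return root

# Inserting 30, 20, 10, 40, 25 triggers one right rotation (at 30,
# after 10 arrives), leaving 20 as the root.
root = None
for k in (30, 20, 10, 40, 25):
    root = avl_insert(root, k)
```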

### Deletion in AVL Trees:


When deleting a node, you also follow the standard binary search tree deletion process. After
deletion, you check for balance and perform rotations if necessary.

#### Example of Deletion:


Let’s delete the node with value 20 from the AVL tree we created:

1. Current AVL Tree:


```
   20
  /  \
 10   30
     /  \
    25   40
```

2. Delete 20:
- Node 20 has two children, so we find its in-order successor, which is 25. Replace 20 with 25.
```
   25
  /  \
 10   30
        \
         40
```

3. Check balance factors:


- The balance factors are still within the allowed range, so no rotations are needed.

### Conclusion:
AVL trees maintain balance through rotations during insertion and deletion, ensuring efficient
operations. This self-balancing property makes AVL trees suitable for applications where
frequent insertions and deletions occur.
Q.2. Explain the tree traversal techniques with an example.
—> Tree traversal is the process of visiting each node in a tree data structure in a specific order.
There are several traversal techniques, but the most common ones are:

1. In-order Traversal
2. Pre-order Traversal
3. Post-order Traversal
4. Level-order Traversal

Let’s explain each technique with an example.

### Example Tree:


Consider the following binary tree:

```
        A
       / \
      B   C
     / \   \
    D   E   F
```

### 1. In-order Traversal (Left, Root, Right)


In in-order traversal, you visit the left subtree first, then the root node, and finally the right
subtree.

- For the above tree, the in-order traversal would be:


- Visit D (left of B)
- Visit B (root)
- Visit E (right of B)
- Visit A (root)
- Visit C (right of A)
- Visit F (right of C)

Result: D, B, E, A, C, F

### 2. Pre-order Traversal (Root, Left, Right)


In pre-order traversal, you visit the root node first, then the left subtree, and finally the right
subtree.

- For the above tree, the pre-order traversal would be:


- Visit A (root)
- Visit B (left of A)
- Visit D (left of B)
- Visit E (right of B)
- Visit C (right of A)
- Visit F (right of C)

Result: A, B, D, E, C, F

### 3. Post-order Traversal (Left, Right, Root)


In post-order traversal, you visit the left subtree first, then the right subtree, and finally the root
node.

- For the above tree, the post-order traversal would be:


- Visit D (left of B)
- Visit E (right of B)
- Visit B (root)
- Visit F (right of C)
- Visit C (right of A)
- Visit A (root)

Result: D, E, B, F, C, A

### 4. Level-order Traversal (Breadth-First)


In level-order traversal, you visit the nodes level by level from top to bottom and left to right.

- For the above tree, the level-order traversal would be:


- Visit A (level 0)
- Visit B, C (level 1)
- Visit D, E, F (level 2)

Result: A, B, C, D, E, F
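The four orders above can be reproduced with a short Python sketch. The `Node` class and function names here are my own, not from the notes:

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(n):
    # Left, Root, Right
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

def preorder(n):
    # Root, Left, Right
    return [n.val] + preorder(n.left) + preorder(n.right) if n else []

def postorder(n):
    # Left, Right, Root
    return postorder(n.left) + postorder(n.right) + [n.val] if n else []

def levelorder(root):
    # Visit nodes level by level using a queue (breadth-first)
    out, q = [], deque([root])
    while q:
        n = q.popleft()
        out.append(n.val)
        if n.left:
            q.append(n.left)
        if n.right:
            q.append(n.right)
    return out

# The example tree: A with children B and C; B has D and E; C has F
root = Node('A', Node('B', Node('D'), Node('E')),
            Node('C', None, Node('F')))
```

Here `inorder(root)` gives `['D', 'B', 'E', 'A', 'C', 'F']`, matching the walk above, and the other three functions reproduce the remaining results.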

### Conclusion:
Each traversal technique serves different purposes and can be useful depending on the
application. In-order is often used for binary search trees to retrieve values in sorted order, while
pre-order and post-order are useful for copying trees or evaluating expressions. Level-order is
commonly used for breadth-first search algorithms.
Q.3. How do you insert and delete an element in a BST? Write the code for the insertion routine.
—> To insert and delete an element in a Binary Search Tree (BST), you need to follow specific
procedures based on the properties of a BST. Let's go through both operations step by step, and
I'll provide you with the code for the insertion routine.

### Insertion in BST


When inserting an element in a BST, you need to follow these rules:
1. If the tree is empty, the new node becomes the root.
2. If the new value is less than the current node's value, move to the left subtree.
3. If the new value is greater than the current node's value, move to the right subtree.
4. Repeat this process until you find an appropriate null position where the new node can be
inserted.

Here's a simple code example in Python for the insertion routine:

```python
class Node:
    def __init__(self, key):
        self.left = None
        self.right = None
        self.val = key

def insert(root, key):
    # If the tree is empty, return a new node
    if root is None:
        return Node(key)

    # Otherwise, recur down the tree
    if key < root.val:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)

    # Return the unchanged node pointer
    return root

# Example usage:
root = None
root = insert(root, 50)
root = insert(root, 30)
root = insert(root, 20)
root = insert(root, 40)
root = insert(root, 70)
root = insert(root, 60)
root = insert(root, 80)
```

### Deletion in BST


To delete a node from a BST, you need to consider three cases:
1. Node to be deleted is a leaf node: Simply remove it.
2. Node to be deleted has one child: Remove the node and replace it with its child.
3. Node to be deleted has two children: Find the in-order predecessor (maximum value in the left
subtree) or in-order successor (minimum value in the right subtree), copy its value to the node to
be deleted, and then delete the predecessor/successor.

Here's a basic code example for the deletion routine:


```python
def deleteNode(root, key):
    # Base case
    if root is None:
        return root

    # If the key to be deleted is smaller than the root's key,
    # then it lies in the left subtree
    if key < root.val:
        root.left = deleteNode(root.left, key)

    # If the key to be deleted is greater than the root's key,
    # then it lies in the right subtree
    elif key > root.val:
        root.right = deleteNode(root.right, key)

    # If key is the same as the root's key, this is the node to be deleted
    else:
        # Node with only one child or no child
        if root.left is None:
            return root.right
        elif root.right is None:
            return root.left

        # Node with two children: get the inorder successor
        # (smallest in the right subtree)
        temp = minValueNode(root.right)

        # Copy the inorder successor's content to this node
        root.val = temp.val

        # Delete the inorder successor
        root.right = deleteNode(root.right, temp.val)

    return root

def minValueNode(node):
    current = node
    while current.left is not None:
        current = current.left
    return current
```
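As a usage sketch, the two routines can be exercised against all three deletion cases. The definitions are repeated here, slightly condensed, so the snippet runs on its own; the `inorder` helper is my own addition for checking the result:

```python
class Node:
    def __init__(self, key):
        self.left, self.right, self.val = None, None, key

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.val:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def minValueNode(node):
    while node.left is not None:
        node = node.left
    return node

def deleteNode(root, key):
    if root is None:
        return root
    if key < root.val:
        root.left = deleteNode(root.left, key)
    elif key > root.val:
        root.right = deleteNode(root.right, key)
    else:
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        temp = minValueNode(root.right)
        root.val = temp.val
        root.right = deleteNode(root.right, temp.val)
    return root

def inorder(n):
    # In-order walk of a BST yields its keys in sorted order
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

root = None
for k in (50, 30, 20, 40, 70, 60, 80):
    root = insert(root, k)

root = deleteNode(root, 20)   # case 1: 20 is a leaf
root = deleteNode(root, 30)   # case 2: 30 now has one child (40)
root = deleteNode(root, 50)   # case 3: two children; successor 60 replaces 50
```

After the three deletions the in-order walk is `[40, 60, 70, 80]`, confirming the tree is still a valid BST.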

### Conclusion
With these routines, you can insert and delete elements in a Binary Search Tree effectively. The
insertion routine adds elements while maintaining the BST properties, and the deletion routine
handles different cases to ensure the tree remains a valid BST after a deletion.

Q.4. Short notes on:

—> Binomial Heap:

A binomial heap is a specific type of data structure that is used to implement a priority queue. It
consists of a collection of binomial trees that are linked together. Here’s a detailed explanation of
its structure, properties, and operations:

1. Structure:
- A binomial heap is made up of a collection of binomial trees. Each binomial tree is defined
recursively:
- A binomial tree of order 0 is a single node.
- A binomial tree of order k consists of two binomial trees of order k-1 that are linked together.
This means that a binomial tree of order k has 2^k nodes.
- The trees in a binomial heap are organized in such a way that there is at most one binomial
tree of any order. The number of trees in the heap corresponds to the binary representation of
the number of elements in the heap.

2. Properties:
- The minimum element of a binomial heap can be found in O(log n) time.
- The binomial heap supports merging of two heaps in O(log n) time.
- The trees in a binomial heap are ordered, meaning that for any given tree, the value of the
parent node is less than or equal to the values of its children.

3. Operations:
- Insertion: To insert a new element, create a new binomial heap with just that element and
then merge it with the existing heap. This takes O(log n) time.
- Find Minimum: To find the minimum element, look at the roots of all the trees in the heap and
return the smallest. This takes O(log n) time.
- Delete Minimum: To delete the minimum element, remove the root of the tree containing the
minimum element, then merge its children back into the heap. This takes O(log n) time.
- Union: To merge two binomial heaps, combine the lists of trees and then link trees of the
same order together. This operation is efficient and takes O(log n) time.

4. Applications:
- Binomial heaps are particularly useful in applications where frequent merging of heaps is
required, such as in algorithms for network optimization and in implementing priority queues.
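One consequence of the structure described above can be checked directly: the orders of the trees in a heap of n elements are exactly the set bits in the binary representation of n. This small illustration is my own (the function name `binomial_tree_orders` is not from the notes):

```python
def binomial_tree_orders(n):
    # A binomial heap of n elements contains one binomial tree of order i
    # for each set bit i in n's binary representation; a tree of order i
    # has 2**i nodes.
    return [i for i in range(n.bit_length()) if (n >> i) & 1]

# A heap of 13 elements (binary 1101) holds trees of orders 0, 2 and 3,
# with 1 + 4 + 8 = 13 nodes in total.
orders = binomial_tree_orders(13)
```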

In summary, a binomial heap is a powerful data structure that allows for efficient insertion,
deletion, and merging operations, making it suitable for various applications in computer
science. The combination of binomial trees and their properties allows for efficient manipulation
of the heap structure.

—> Fibonacci Heap:

A Fibonacci heap is an advanced data structure that is used to implement a priority queue. It is
particularly known for its efficiency in performing a set of operations, especially in algorithms
that require frequent decrease-key and delete operations. Here’s a detailed explanation of its
structure, properties, and operations:

1. Structure:
- A Fibonacci heap is a collection of trees, which are not necessarily binomial trees. The trees
are organized in a circular doubly linked list, and each tree follows the min-heap property,
meaning the value of a parent node is less than or equal to the values of its children.
- Each node in a Fibonacci heap contains a pointer to its parent, a pointer to one of its children,
and a pointer to the next node in the circular list. Additionally, each node keeps track of the
number of children it has, and a mark indicating whether it has lost a child since it was made a
child of another node.

2. Properties:
- The Fibonacci heap can have an arbitrary number of trees, which allows for more flexibility
compared to binomial heaps.
- The minimum element is found in O(1) time, as it is simply a pointer to the minimum node
among the roots of the trees.
- The amortized time complexity for various operations is significantly better than that of other
heap types, especially for decrease-key and delete operations.

3. Operations:
- Insertion: To insert a new element, create a new node and add it to the root list of the heap.
This operation takes O(1) time.
- Find Minimum: The minimum element can be accessed in O(1) time by maintaining a pointer
to the minimum node.
- Union: To merge two Fibonacci heaps, simply concatenate the root lists of both heaps. This
takes O(1) time.
- Decrease-Key: To decrease the key of a node, lower its value; if this violates the min-heap
property, cut the node from its parent and add it to the root list. If the parent was unmarked,
mark it; if it was already marked, cut it as well and continue cutting up the tree (a cascading
cut). This operation has an amortized time complexity of O(1).
- Delete: To delete a node, first decrease its key to negative infinity (or a value smaller than any
other node), making it the minimum node, and then perform a delete-min operation. The
amortized time complexity for deletion is O(log n).

4. Applications:
- Fibonacci heaps are particularly useful in network optimization algorithms like Dijkstra's and
Prim's algorithms, where the decrease-key operation is frequently used. They are also beneficial
in scenarios where merging heaps is required.

In summary, Fibonacci heaps are a powerful and efficient data structure for implementing priority
queues, with significant advantages in the amortized time complexity of their operations. Their
flexible structure and efficient handling of decrease-key and delete operations make them
suitable for a wide range of applications in computer science.
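The lazy root-list idea behind the O(1) operations can be sketched in a few lines. This is only an illustration under strong simplifying assumptions, not a real Fibonacci heap: the class name is my own, a plain list stands in for the circular doubly linked root list, and consolidation, marking, and cascading cuts are all omitted.

```python
class FibHeapSketch:
    def __init__(self):
        self.roots = []        # stand-in for the circular root list
        self.min_key = None    # pointer to the minimum root

    def insert(self, key):
        # O(1): just add to the root list and update the min pointer
        self.roots.append(key)
        if self.min_key is None or key < self.min_key:
            self.min_key = key

    def find_min(self):
        # O(1): follow the min pointer
        return self.min_key

    def union(self, other):
        # O(1): concatenate the root lists; no restructuring happens here
        self.roots.extend(other.roots)
        if other.min_key is not None and (
            self.min_key is None or other.min_key < self.min_key
        ):
            self.min_key = other.min_key

h1 = FibHeapSketch()
h1.insert(5)
h1.insert(3)
h2 = FibHeapSketch()
h2.insert(4)
h1.union(h2)   # h1.find_min() is now 3
```

In a real Fibonacci heap the expensive restructuring is deferred to delete-min, which is why all of the operations shown here stay constant time.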
