Video Interview Questions - Data Structures & Algorithms
1. LINKED LISTS
Basic Level
Q1: Interviewer: "Can you explain what a linked list is and how it differs from an array?
When would you choose one over the other?"
Answer: "A linked list is a linear data structure where elements called nodes are connected
through pointers. Each node contains data and a reference to the next node, unlike arrays
where elements are stored in contiguous memory locations.
The main differences are that arrays have a fixed size and use contiguous memory, while
linked lists use dynamic memory allocation, allowing them to grow or shrink at runtime.
Arrays provide direct access by index, while linked lists require sequential traversal. Linked
lists also need extra memory to store pointers.
I would prefer an array when I need fast random access and the data size is mostly fixed —
like storing month names or fixed dropdown options in a form. I would choose a linked list
when I expect frequent insertions and deletions, especially at the beginning or middle of
the collection. For example, implementing a queue or playlist that frequently changes."
Q2: Explain the different types of linked lists.
There are three main types of linked lists. First is the singly linked list, where each node
points to the next node and the last node points to null. Second is the doubly linked list,
where each node has pointers to both the next and previous nodes, allowing bidirectional
traversal.
Third is the circular linked list, where the last node points back to the first node instead of
null, creating a circular structure.
Singly linked lists are memory-efficient but only allow forward traversal. Doubly linked lists
use more memory but enable efficient backward traversal and easier deletion. Circular
lists are useful for round-robin scheduling or media players with repeat functionality.
Q3: How do you find the middle element of a linked list in one pass?
I would use the two-pointer technique. I'll use a slow pointer that moves one step at a time
and a fast pointer that moves two steps at a time. When the fast pointer reaches the end of
the list, the slow pointer will be at the middle.
For even-length lists, this gives us the first middle element. If we want the second middle
element, we can start the slow pointer from the second node instead of the first.
This approach has O(n) time complexity and O(1) space complexity. It's much more
efficient than first counting the total nodes and then traversing halfway, because we only
make one pass through the list instead of two.
Q4: Explain how you would reverse a linked list. What are the different approaches you
could take?"
Answer: "I can think of two main approaches: iterative and recursive.
To reverse a linked list iteratively in Java, I would use three local variables, all of type
Node. These are previous, current, and next.
I start by initializing previous as null, because initially there's no node before the head. Then
I set current to the head of the list. The idea is to reverse the direction of each node’s next
pointer so that it points to the previous node instead of the next one.
Inside a while loop, I first store current.next in the next variable — this is important so that I
don't lose the rest of the list. Then, I reverse the link by setting current.next to previous.
After that, I move previous to current, and current to next, so that all three pointers move
one step forward.
I continue this process until current becomes null, which means I've reached the end of the
original list. At this point, previous will be pointing to the new head of the reversed list, and I
return it.
This approach runs in linear time, O(n), since we go through each node once. It also uses
constant space, O(1), because we’re just using three pointers regardless of the size of the
list.
To reverse a linked list recursively in Java, I approach the problem by thinking in terms
of reversing the rest of the list first, and then fixing the current node's connection.
The base case is simple — if the current node is null or it's the last node in the list, then that
node becomes the new head of the reversed list, and I return it.
For the recursive step, I call the same reverse function on head.next, which effectively
keeps moving toward the end of the list. As the recursion starts to unwind, I take the node
that comes after the current one — which is head.next — and point its next back to the
current node. Then I set head.next to null to break the original forward link.
This way, each pair of nodes gets reversed as we return from the recursion. Finally, I return
the new head node that we got from the base case, and that becomes the head of the fully
reversed list.
In terms of complexity, the time is still O(n), but the space complexity is O(n) as well,
because each recursive call adds a new frame to the call stack. That’s why for very large
lists, I’d generally prefer the iterative approach to avoid potential stack overflow.
Q5: How do you remove duplicates from a sorted linked list?
For a sorted linked list, duplicates will be adjacent, so I can traverse the list with a single
pointer. At each node, I compare its value with the next node's value. If they're equal, I skip
the next node by updating the current node's pointer to point to the node after next.
I continue this process until I reach the end of the list. This approach modifies the original
list in-place.
The time complexity is O(n) since I visit each node once, and space complexity is O(1)
since I only use a constant amount of extra space. For an unsorted list, I would either sort it
first or use a hash set to track seen values.
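The approach above can be sketched as follows; this is a minimal illustration with a bare-bones Node class, and the class and method names are just placeholders.

```java
class Node {
    int data;
    Node next;

    Node(int data) { this.data = data; }
}

class Dedup {
    // Removes duplicates from a sorted list in place; returns the head.
    static Node removeDuplicates(Node head) {
        Node current = head;
        while (current != null && current.next != null) {
            if (current.data == current.next.data) {
                current.next = current.next.next; // skip the duplicate node
            } else {
                current = current.next;           // advance only when distinct
            }
        }
        return head;
    }
}
```

Since duplicates in a sorted list are always adjacent, a single forward pass is enough.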
Same questions with code
Q6: Write a Java class to define a node in a singly linked list.
class Node {
    int data;
    Node next;

    Node(int data) {
        this.data = data;
        this.next = null;
    }
}
Q7: Provide the code to insert a node at the beginning of a singly linked list?
To insert a node at the beginning, I create a new node and point its next to the current head.
Then I update the head to the new node.
Node newNode = new Node(10);
newNode.next = head;
head = newNode;
Q8: Provide the code to print all elements in a linked list?
I use a loop to traverse the list and print each node's data.
Node temp = head;
while (temp != null) {
    System.out.println(temp.data);
    temp = temp.next;
}
Q9: Provide the code to reverse a linked list iteratively and explain the same?
I use three pointers: previous, current, and next. I traverse the list and reverse the links one
by one.
Node previous = null;
Node current = head;
Node next = null;
while (current != null) {
    next = current.next;
    current.next = previous;
    previous = current;
    current = next;
}
head = previous;
In the iterative method, I use three pointers: previous, current, and next. Initially, previous is
set to null because the new tail of the list will point to null. Then I traverse the list using
current. In each step, I store the next node temporarily, reverse the link by pointing
current.next to previous, and then move the previous and current pointers one step
forward. After the loop, previous will point to the new head of the reversed list.
Q10: Provide the code to reverse a linked list recursively and explain the same?
I use recursion to reverse the rest of the list and fix the current node’s next pointer.
Node reverseRecursive(Node head) {
    if (head == null || head.next == null) return head;
    Node rest = reverseRecursive(head.next);
    head.next.next = head;
    head.next = null;
    return rest;
}
In the recursive version, I go all the way to the end of the list using recursion, and then as
the call stack unwinds, I reverse the direction of each link. The base case is when I hit the
last node or the list is empty. That node becomes the new head. Then for each recursive
call, I make the next node point back to the current node, and set the current node’s next to
null to break the old link.
Q11: Provide the code to find the middle of a linked list and explain the same?
I use two pointers: slow and fast. Slow moves one step, fast moves two steps.
Node slow = head;
Node fast = head;
while (fast != null && fast.next != null) {
    slow = slow.next;
    fast = fast.next.next;
}
// slow now points to the middle node
To find the middle, I use two pointers: slow and fast. Slow moves one node at a time, while
fast moves two nodes. When fast reaches the end of the list, slow will be at the middle. This
method ensures a single pass through the list and has O(n) time complexity.
Q12: Provide the code to find the nth node from the end of a linked list and explain the
same?
I use two pointers. First moves n steps ahead, then both move until first reaches the end.
Node first = head;
Node second = head;
for (int i = 0; i < n; i++) {
    if (first == null) return null;
    first = first.next;
}
while (first != null) {
    first = first.next;
    second = second.next;
}
// second is the nth node from the end
I maintain two pointers: first and second. I move the first pointer n steps ahead, and then
move both pointers together until first reaches the end. At that point, the second pointer
will be at the nth node from the end. This approach also works in one traversal.
Q13: Provide the code to check if a linked list is a palindrome and explain the same?
First, find the middle of the list. Reverse the second half, then compare both halves.
// Steps:
// 1. Use fast and slow to find the middle
// 2. Reverse second half
// 3. Compare both halves
public boolean isPalindrome(Node head) {
    if (head == null || head.next == null) return true;

    // Step 1: Find middle using slow and fast pointers
    Node slow = head;
    Node fast = head;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
    }

    // Step 2: Reverse second half
    Node secondHalf = reverse(slow);
    Node firstHalf = head;

    // Step 3: Compare both halves
    while (secondHalf != null) {
        if (firstHalf.data != secondHalf.data) {
            return false;
        }
        firstHalf = firstHalf.next;
        secondHalf = secondHalf.next;
    }
    return true;
}

private Node reverse(Node head) {
    Node prev = null;
    Node current = head;
    while (current != null) {
        Node next = current.next;
        current.next = prev;
        prev = current;
        current = next;
    }
    return prev;
}
I use fast and slow pointers to find the middle of the list, then reverse the second half
using an iterative reversal method, and finally compare the two halves node by node. If all
values match, I return true; otherwise, false. The time complexity is O(n) and the space
complexity is O(1), since only a few pointers are used rather than extra memory. After the
comparison, I can reverse the second half again to restore the original list if needed.
Q14: Provide the code to merge two sorted linked lists and explain the same?
I compare nodes from both lists and build a new sorted list.
Node dummy = new Node(0);
Node tail = dummy;
while (l1 != null && l2 != null) {
    if (l1.data < l2.data) {
        tail.next = l1;
        l1 = l1.next;
    } else {
        tail.next = l2;
        l2 = l2.next;
    }
    tail = tail.next;
}
tail.next = (l1 != null) ? l1 : l2;
return dummy.next;
I create a dummy node to act as the starting point of the merged list. Then I use two
pointers, one for each list, and keep comparing the current elements. Whichever is smaller
gets added to the new list, and I advance that pointer. After one list is exhausted, I link the
rest of the other list. Finally, I return dummy.next as the head of the merged list.
STACK:
Q15 : Can you explain what a stack is in simple terms and what are the primary
operations that can be performed on it?
Answer:
A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. It can be
imagined like a stack of plates—elements are added and removed only from the top. This
means the last element inserted is the first one to be removed.
The core operations supported by a stack include pushing, which adds a new element to
the top of the stack, and popping, which removes the topmost element. Another important
operation is peek, also known as top, which allows viewing the top element without
removing it. Additionally, there is the isEmpty operation, which checks whether the stack
currently holds any elements.
Q16: What happens when you try to pop from an empty stack or push to a full one?
Answer:
Both scenarios represent error conditions. Popping from an empty stack results in a stack
underflow, which usually throws an exception or returns an error, depending on the
implementation. Similarly, pushing to a full stack leads to a stack overflow, especially in
fixed-size stacks. In both cases, it’s crucial to include proper error checks like verifying if
the stack is empty before popping, or if it’s full before pushing.
Q17: How would implementing a stack with a linked list differ from implementing it
with an array?
Answer:
When using an array, the stack has a fixed size, so you need to define its maximum capacity
upfront. This can lead to stack overflow if the limit is exceeded. Also, increasing the size of
an array when it's full requires allocating new memory and copying existing elements,
which can be time-consuming. On the other hand, a linked list implementation allows the
stack to grow dynamically as long as memory is available, avoiding overflow due to size
limitations. However, it uses extra memory for storing pointers and can have slightly slower
access times due to non-contiguous memory allocation.
Q18: How would you use a stack to evaluate a postfix expression?
Answer:
In postfix evaluation, the expression is scanned from left to right. Operands are pushed
onto the stack as they appear. When an operator is encountered, the top two operands are
popped from the stack, the operation is performed, and the result is pushed back onto the
stack. This process continues until the end of the expression. At that point, the stack
should contain exactly one element, which represents the final result. This method works
because, in postfix notation, operators appear after their corresponding operands,
ensuring that all necessary values are already available on the stack when an operation is
to be performed.
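The procedure above can be sketched in Java; this is an illustrative helper that assumes the expression is a string of space-separated integer tokens, and the class name is a placeholder.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class PostfixEval {
    // Evaluates a space-separated integer postfix expression, e.g. "2 3 4 * +".
    static int evaluate(String expr) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : expr.trim().split("\\s+")) {
            if (token.equals("+") || token.equals("-")
                    || token.equals("*") || token.equals("/")) {
                int b = stack.pop(); // right operand is on top of the stack
                int a = stack.pop();
                switch (token) {
                    case "+": stack.push(a + b); break;
                    case "-": stack.push(a - b); break;
                    case "*": stack.push(a * b); break;
                    default:  stack.push(a / b); break;
                }
            } else {
                stack.push(Integer.parseInt(token)); // operand: push as-is
            }
        }
        return stack.pop(); // exactly one value remains: the final result
    }
}
```

For example, evaluating "2 3 4 * +" pushes 2, then 3 and 4, multiplies them to 12, and adds 2 for a result of 14.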
Q19: Can you explain how the call stack works and what happens during recursion?
Answer:
The call stack is a special memory structure managed by the runtime system to keep track
of active function calls. Each time a function is invoked, a new stack frame is pushed onto
the call stack. This frame holds the function’s parameters, local variables, return address,
and other relevant data. When the function completes, its frame is popped, and control is
transferred back to the caller using the stored return address.
In the case of recursion, each recursive call results in a new stack frame being added to the
call stack. The process continues until a base case is reached. After that, the stack frames
are removed in reverse order, one by one, as each function returns. This behaviour naturally
follows the Last-In-First-Out (LIFO) principle, where the most recent function call is the first
one to complete and return.
QUEUE:
Q20: How would you implement a queue using arrays, and what challenges might you
face?
Answer:
When implementing a queue with arrays, the main challenge is handling the front and rear
pointers efficiently. If we simply use the beginning of the array as front and the end as rear,
dequeue operations become O(n) because we'd need to shift all elements.
A better approach is using a circular array with front and rear pointers. We increment these
pointers modulo the array size. This keeps both operations O(1), but we need to handle the
wraparound logic carefully and distinguish between full and empty states. One common
technique is to sacrifice one array slot or use a separate counter to track the number of
elements.
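A minimal sketch of the circular-array approach with a separate element counter, as described above; the class name and fixed int element type are illustrative choices.

```java
class CircularQueue {
    private final int[] arr;
    private int front = 0;
    private int count = 0; // counter distinguishes full from empty

    CircularQueue(int capacity) { arr = new int[capacity]; }

    boolean isEmpty() { return count == 0; }
    boolean isFull()  { return count == arr.length; }

    void enqueue(int value) {
        if (isFull()) throw new IllegalStateException("Queue is full");
        arr[(front + count) % arr.length] = value; // rear index wraps around
        count++;
    }

    int dequeue() {
        if (isEmpty()) throw new IllegalStateException("Queue is empty");
        int value = arr[front];
        front = (front + 1) % arr.length; // front index wraps around
        count--;
        return value;
    }
}
```

Both operations are O(1), and the count field sidesteps the ambiguity of front equaling rear.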
Q21: What about implementing a queue using linked lists? What are the trade-offs?
Answer:
With linked lists, we maintain pointers to both the head and tail of the list. Enqueue adds a
new node at the tail, and dequeue removes the node at the head. Both operations are O(1)
with no shifting required.
The trade-offs are interesting: linked lists give us dynamic sizing without declaring a fixed
capacity upfront, but they use extra memory for storing pointers. Array-based queues have
better cache locality since elements are stored contiguously, which can lead to better
performance despite the same time complexity. Linked lists also have the overhead of
memory allocation for each new element.
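The head/tail approach described above can be sketched as follows; the class is a minimal illustration for int values.

```java
class LinkedQueue {
    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node head; // dequeue from here
    private Node tail; // enqueue here

    boolean isEmpty() { return head == null; }

    void enqueue(int value) { // O(1): append at the tail
        Node node = new Node(value);
        if (tail == null) {
            head = tail = node;   // first element
        } else {
            tail.next = node;
            tail = node;
        }
    }

    int dequeue() { // O(1): remove from the head
        if (head == null) throw new IllegalStateException("Queue is empty");
        int value = head.data;
        head = head.next;
        if (head == null) tail = null; // queue became empty
        return value;
    }
}
```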
Q22: How would you determine if a circular queue is full or empty?
Answer:
If we just use front and rear pointers, we can't distinguish between full and empty states
because both occur when front equals rear.
There are several solutions. One approach is to sacrifice one slot: consider the queue full
when the next position of rear equals front. Another approach is to maintain a separate
counter for the number of elements. A third approach is to use a boolean flag to track the
last operation. The counter approach is cleaner and also gives us the size directly.
Q23: How would you implement a queue that maintains elements in sorted order?
Answer:
A sorted queue is essentially a priority queue where the priority is the element's value itself.
The challenge is maintaining sorted order while keeping reasonable performance for both
insertion and removal.
For insertion, I'd use a binary search to find the correct position, then insert the element. If
using an array, this requires shifting elements. With a linked list, finding the position is O(n)
but insertion is O(1) once we find the spot.
A better approach is using a heap, which keeps elements partially sorted. We can extract
the minimum element in O(log n) time and insert new elements in O(log n) time. While not
perfectly sorted internally, it gives us the smallest element efficiently.
For better performance, I might use a skip list, which provides O(log n) insertion and
deletion with good practical performance, or a balanced binary search tree like a red-black
tree.
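The heap-based approach is what Java's standard library already provides: java.util.PriorityQueue is backed by a binary min-heap, so poll() always yields the smallest remaining element. A short illustration:

```java
import java.util.PriorityQueue;

class SortedQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>(); // min-heap by default
        pq.add(5); // add() is O(log n)
        pq.add(1);
        pq.add(3);
        // poll() removes and returns the smallest element each time
        System.out.println(pq.poll()); // 1
        System.out.println(pq.poll()); // 3
        System.out.println(pq.poll()); // 5
    }
}
```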
TREE:
Q24: What is a tree in data structures and why do we use them?
Answer:
A tree is a non-linear hierarchical data structure that consists of nodes connected by
edges. It starts with a special node called the root, from which all other nodes branch out.
Each node can have child nodes, and there is exactly one path between the root and any
other node in the tree.
Trees are used because they efficiently represent hierarchical relationships, such as file
systems, organization charts, and XML/HTML document structures. They are also essential
in many algorithms and applications like searching and sorting (as in binary search trees),
data compression (like Huffman coding), and databases (such as indexing in B-trees). Their
structure allows fast insertion, deletion, and lookup operations in a structured and logical
way.
Q25: What’s the difference between a tree and a graph?
Answer:
A tree is a special type of graph, but there are key differences between the two. A tree is
always acyclic, meaning it has no cycles, whereas a graph can have one or more cycles. In
a tree, there is exactly one unique path between any two nodes, ensuring a clear and
predictable structure. Trees also have a well-defined hierarchy with a designated root node
and a strict parent-child relationship. In contrast, graphs do not necessarily follow a
hierarchical structure and may not have a root or any fixed direction. Graphs are more
general—they can be directed or undirected, cyclic or acyclic, and may contain multiple
paths between nodes. These distinctions make trees suitable for representing structured
data, while graphs are better suited for modelling complex relationships like networks and
connections.
Q26: Explain what a binary tree is and how it differs from other trees.
Answer:
A binary tree is a type of tree data structure in which each node has at most two children,
commonly referred to as the left and right child. This strict limit distinguishes binary trees
from general trees, where a node can have any number of children.
Binary trees are often used in scenarios where ordered or efficient data access is
important, such as in binary search trees, heaps, and expression trees. The structure of a
binary tree allows for easier implementation of algorithms that rely on dividing data into
two parts, like searching, sorting, and traversing. In contrast, general trees are more flexible
in terms of branching but may require more complex logic for traversal and manipulation.
The binary tree’s structure simplifies recursive operations and is especially well-suited for
problems that benefit from binary decision-making.
Q27: What are the different techniques for traversing a binary tree, and how do they
work?
Answer:
Binary trees can be traversed in several ways, mainly categorized into depth-first and
breadth-first traversal methods. The most common depth-first traversals are inorder,
preorder, and postorder.
In an inorder traversal, the left subtree is visited first, followed by the current node, and
then the right subtree. This is especially useful for binary search trees, as it retrieves data in
sorted order.
In a preorder traversal, the current node is visited first, then the left and right subtrees. This
method is helpful for tasks like copying or serializing the tree structure.
In a postorder traversal, the left and right subtrees are visited first, and the current node is
processed last. This is commonly used for deleting or freeing nodes in a tree.
Apart from these, there's also level-order traversal, which is a type of breadth-first
traversal. It visits nodes level by level from top to bottom, typically using a queue.
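The four traversals described above can be sketched as follows; the TreeNode class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;

class TreeNode {
    int data;
    TreeNode left, right;
    TreeNode(int data) { this.data = data; }
}

class Traversals {
    static void inorder(TreeNode n, List<Integer> out) {   // left, node, right
        if (n == null) return;
        inorder(n.left, out);
        out.add(n.data);
        inorder(n.right, out);
    }

    static void preorder(TreeNode n, List<Integer> out) {  // node, left, right
        if (n == null) return;
        out.add(n.data);
        preorder(n.left, out);
        preorder(n.right, out);
    }

    static void postorder(TreeNode n, List<Integer> out) { // left, right, node
        if (n == null) return;
        postorder(n.left, out);
        postorder(n.right, out);
        out.add(n.data);
    }

    static List<Integer> levelOrder(TreeNode root) {       // breadth-first, uses a queue
        List<Integer> out = new ArrayList<>();
        Queue<TreeNode> q = new LinkedList<>();
        if (root != null) q.add(root);
        while (!q.isEmpty()) {
            TreeNode n = q.poll();
            out.add(n.data);
            if (n.left != null) q.add(n.left);
            if (n.right != null) q.add(n.right);
        }
        return out;
    }
}
```

On a binary search tree, the inorder traversal yields the values in sorted order.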
Q28: Consider you're given two arrays - one representing the preorder traversal and
the other representing the inorder traversal of a binary tree. How would you go about
reconstructing the original tree?
Answer:
To reconstruct the binary tree from its preorder and inorder traversals, the key is to
understand the properties of each traversal. In a preorder traversal, the first element is
always the root of the tree. Using this root value, we can locate its position in the inorder
array, which divides the tree into left and right subtrees.
All elements to the left of the root in the inorder array belong to the left subtree, and all
elements to the right belong to the right subtree. With this partitioning, we can recursively
repeat the process for both left and right subtrees using the corresponding segments from
both arrays.
This method ensures the tree is rebuilt accurately, maintaining both structure and
hierarchy. Typically, the algorithm uses recursion and indexing to efficiently manage the
boundaries within the preorder and inorder arrays during the reconstruction process.
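The reconstruction described above can be sketched as follows, assuming the node values are unique; a hash map from value to inorder index avoids a linear search for each root.

```java
import java.util.HashMap;
import java.util.Map;

class TreeNode {
    int data;
    TreeNode left, right;
    TreeNode(int data) { this.data = data; }
}

class TreeBuilder {
    static TreeNode build(int[] preorder, int[] inorder) {
        Map<Integer, Integer> index = new HashMap<>();
        for (int i = 0; i < inorder.length; i++) index.put(inorder[i], i);
        // pos is a one-element array so the recursion can share a cursor
        return helper(preorder, new int[]{0}, 0, inorder.length - 1, index);
    }

    private static TreeNode helper(int[] pre, int[] pos, int inLo, int inHi,
                                   Map<Integer, Integer> index) {
        if (inLo > inHi) return null;
        TreeNode root = new TreeNode(pre[pos[0]++]); // next preorder value is the root
        int mid = index.get(root.data);              // split point in the inorder array
        root.left = helper(pre, pos, inLo, mid - 1, index);
        root.right = helper(pre, pos, mid + 1, inHi, index);
        return root;
    }
}
```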
Q29: Explain how an AVL tree maintains balance?
Answer:
An AVL tree is a type of self-balancing binary search tree that maintains its balance by
ensuring the height difference between the left and right subtrees of any node, known as
the balance factor, is no more than one. After every insertion or deletion operation, the tree
checks the balance factor of each node along the path from the affected node to the root.
If the balance factor of any node becomes less than -1 or greater than 1, the tree is
considered unbalanced, and rotations are used to restore balance. There are four types of
rotations used for rebalancing: left rotation, right rotation, left-right rotation, and right-left
rotation. The type of rotation applied depends on the structure of the imbalance.
By performing these rotations immediately after modifications, the AVL tree maintains a
balanced height, ensuring that operations like insertion, deletion, and search all remain
e icient, with a time complexity of O(log n).
Q30: How would you find the lowest common ancestor of two nodes in a binary tree?
Answer:
The lowest common ancestor (LCA) of two nodes in a binary tree is the deepest node that
has both nodes as its descendants, meaning it’s the last shared node on the path from the
root to each of the two nodes.
To find the LCA, a recursive approach can be used. Starting from the root, the tree is
traversed to check whether each node matches either of the two target nodes. If the
current node matches one of them, it is returned up the call stack. Then, the left and right
subtrees are searched recursively.
If both the left and right recursive calls return non-null values, it means one target node is
found in each subtree, so the current node is the lowest common ancestor. If only one side
returns a non-null result, that means both nodes lie in that subtree, and the result from that
side is propagated upward. This method works efficiently in O(n) time, where n is the
number of nodes in the tree.
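The recursive approach above fits in a few lines; this sketch assumes both target nodes are present in the tree.

```java
class TreeNode {
    int data;
    TreeNode left, right;
    TreeNode(int data) { this.data = data; }
}

class Lca {
    static TreeNode lowestCommonAncestor(TreeNode root, TreeNode p, TreeNode q) {
        // Base case: reached null, or found one of the targets
        if (root == null || root == p || root == q) return root;
        TreeNode left = lowestCommonAncestor(root.left, p, q);
        TreeNode right = lowestCommonAncestor(root.right, p, q);
        if (left != null && right != null) return root; // one target on each side
        return (left != null) ? left : right;           // both targets in one subtree
    }
}
```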
Q31: How would you find the diameter of a binary tree?
Answer:
The diameter of a binary tree is the length of the longest path between any two nodes in the
tree. This path may or may not pass through the root. The length is typically measured by
the number of edges between the two farthest nodes.
To find the diameter, a recursive approach can be used that calculates the height of each
subtree while simultaneously computing the diameter. For each node, the longest path that
passes through it is the sum of the heights of its left and right subtrees. By keeping track of
the maximum such path found during the traversal, the diameter can be determined.
The process involves a post-order traversal where the height of each node is computed
from its subtrees, and at each step, the potential diameter is updated if a longer path is
found. This approach ensures the diameter is found efficiently in O(n) time, where n is the
number of nodes in the tree.
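The post-order height-plus-diameter computation described above can be sketched as follows; the diameter here is measured in edges.

```java
class TreeNode {
    int data;
    TreeNode left, right;
    TreeNode(int data) { this.data = data; }
}

class Diameter {
    private int best = 0;

    int diameter(TreeNode root) {
        height(root);
        return best; // longest path between any two nodes, in edges
    }

    private int height(TreeNode n) {
        if (n == null) return 0;
        int left = height(n.left);
        int right = height(n.right);
        best = Math.max(best, left + right); // longest path passing through n
        return 1 + Math.max(left, right);    // height of the subtree rooted at n
    }
}
```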
Q32: How would you convert a binary tree to its mirror image?
Answer:
To convert a binary tree to its mirror image, the goal is to swap the left and right children of
every node in the tree. This transformation can be done using a simple recursive approach.
Starting from the root, the left and right subtrees of each node are recursively mirrored, and
then the left and right child pointers are swapped.
The base case of the recursion occurs when a null node is reached, at which point the
function simply returns. As the recursion unwinds, each node's children are exchanged,
effectively flipping the structure of the tree at every level.
This operation ensures that the tree is transformed into its mirror image, where all left
children become right children and vice versa. The time complexity of this approach is
O(n), where n is the number of nodes in the tree, since each node is visited exactly once.
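The recursive swap described above is only a few lines; the class name is a placeholder.

```java
class TreeNode {
    int data;
    TreeNode left, right;
    TreeNode(int data) { this.data = data; }
}

class Mirror {
    static TreeNode mirror(TreeNode node) {
        if (node == null) return null;      // base case: nothing to flip
        TreeNode left = mirror(node.left);  // mirror both subtrees first
        TreeNode right = mirror(node.right);
        node.left = right;                  // then swap the children
        node.right = left;
        return node;
    }
}
```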
Q33: How would you find the maximum path sum in a binary tree, where the path can
start and end at any nodes? And what if all the nodes have negative values?
Answer:
To find the maximum path sum in a binary tree, we can use a recursive approach that visits
each node in the tree. For every node, we look at the maximum path sum that can be
formed by including that node and extending into its left or right child. We only include a
child’s path if it increases the total. So, if the value from a child is negative, we ignore it and
just use zero instead.
At each node, we also calculate the total path sum that goes through that node and both its
left and right children. We compare this value with the highest path sum found so far and
update the result if it’s bigger.
Even if all nodes have negative values, this method still works. In that case, the maximum
path sum will simply be the node with the highest value (or the least negative value),
because adding more negative numbers would only make the total smaller.
This method checks every node once, so the time taken is proportional to the number of
nodes in the tree.
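A sketch of the recursive approach described above: each call returns the best downward path starting at that node, while a running maximum tracks paths that turn at a node.

```java
class TreeNode {
    int data;
    TreeNode left, right;
    TreeNode(int data) { this.data = data; }
}

class MaxPathSum {
    private int best = Integer.MIN_VALUE;

    int maxPathSum(TreeNode root) {
        gain(root);
        return best;
    }

    // Returns the maximum sum of a path that starts at n and goes downward.
    private int gain(TreeNode n) {
        if (n == null) return 0;
        int left = Math.max(gain(n.left), 0);   // ignore negative contributions
        int right = Math.max(gain(n.right), 0);
        best = Math.max(best, n.data + left + right); // path turning at n
        return n.data + Math.max(left, right);  // best single-branch extension
    }
}
```

Clamping child contributions at zero is what makes the all-negative case fall out naturally: every path collapses to a single node, so the answer is the largest node value.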
Sorting Algorithms:
Q34: Can you explain how bubble sort works, along with its advantages, and
disadvantages?
Answer:
Bubble sort is a simple sorting algorithm that works by repeatedly comparing each pair of
adjacent elements in an array and swapping them if they are in the wrong order. This
process continues through multiple passes until the entire array is sorted. After each pass,
the largest unsorted element "bubbles up" to its correct position at the end of the array,
which is where the name comes from.
One advantage of bubble sort is that it’s very easy to understand and implement. It also
doesn’t require any extra memory, since it sorts the data in place.
However, bubble sort is not efficient for large datasets because it has a time complexity of
O(n²) in the average and worst cases. It performs many unnecessary comparisons and
swaps, even if the array is only slightly unsorted.
Bubble sort is suitable when the dataset is very small or nearly sorted.
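The passes described above look like this in Java; the early-exit flag is a common optimization that makes the best case (an already sorted array) O(n).

```java
class BubbleSort {
    static void sort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            boolean swapped = false;
            // After pass i, the last i elements are already in place.
            for (int j = 0; j < arr.length - 1 - i; j++) {
                if (arr[j] > arr[j + 1]) { // swap adjacent out-of-order pair
                    int tmp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break; // no swaps means the array is sorted
        }
    }
}
```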
Q35: How do insertion sort and selection sort work? In what situations would you
choose insertion sort over selection sort, or vice versa?
Answer:
Insertion sort works by building the sorted part of the array one element at a time. It takes
each element and inserts it into its correct position among the elements that have already
been sorted. This makes it efficient when the input is nearly sorted, as fewer shifts are
needed.
Selection sort, on the other hand, works by repeatedly selecting the smallest (or largest)
element from the unsorted portion of the array and placing it at the beginning. It always
makes the same number of comparisons regardless of the input order, but it performs
fewer swaps than insertion sort.
Insertion sort is generally preferred when the input is mostly sorted or when working with
small datasets, as it is adaptive and more efficient in those cases. Selection sort might be
chosen in scenarios where memory writes are costly and minimizing swaps is important,
even though it's generally slower on average compared to insertion sort.
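Both algorithms described above can be sketched side by side; the class name is a placeholder.

```java
class SimpleSorts {
    // Insertion sort: shift larger sorted elements right, then drop the key in.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // shift right to make room
                j--;
            }
            a[j + 1] = key;
        }
    }

    // Selection sort: find the minimum of the unsorted part, swap it into place.
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;
            }
            int tmp = a[i];  // at most one swap per pass
            a[i] = a[min];
            a[min] = tmp;
        }
    }
}
```

Note how selection sort performs at most one swap per pass, which illustrates the point about minimizing memory writes.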
Stack
Q36: Provide the code for stack operations using array and explain the same
class Stack {
    int[] arr;
    int top;
    int size;

    Stack(int size) {
        this.size = size;
        arr = new int[size];
        top = -1;
    }

    void push(int data) {
        if (top == size - 1) {
            System.out.println("Stack Overflow");
            return;
        }
        arr[++top] = data;
    }

    int pop() {
        if (top == -1) {
            System.out.println("Stack Underflow");
            return -1;
        }
        return arr[top--];
    }
}
A stack is a Last In First Out (LIFO) structure. To implement it using an array, we use a
fixed-size integer array and a top index that tracks the top of the stack.
Initially, top is set to -1, meaning the stack is empty.
For a push operation, we first check if the stack is full (top == size - 1). If not, we
increment top and store the new element there.
For a pop operation, we first check if the stack is empty (top == -1), which would be an
underflow. If not, we return the element at top and then decrement top.