Data Structure Module 2014
Faculty of Technology
Prepared by:
Adane Belay
&
Endesahw Adamsie
DTU
January, 2014
Debre Tabor
Course description
Introduction to Data Structures (linear and non-linear data structures) and algorithm
analysis concepts: measuring complexity, complexity of algorithms and Big-O notation.
Simple sorting and searching algorithms (Bubble Sort, Insertion Sort, Selection Sort,
Sequential Searching, Binary Searching). Abstract Data Types, Structures, Pointers,
Arrays, Linked Lists, Stacks, Queues, Trees, Graphs. Advanced sorting and searching
algorithms (Shell Sort, Quick Sort, Heap Sort, Merge Sort, and Hashing). These contents
will be delivered to the students through brainstorming, group discussion,
observation/demonstration, tests and mini lectures. Students will be assessed with lab
exercises, quizzes, group work and individual assignments.
Learning outcomes
At the end of the course students will be able to:
1. Explain the basic techniques for the design and analysis of efficient algorithms;
2. Determine when and how to use the various data structures including Linked lists,
Queues, Stacks, Binary trees, Search trees and Graphs;
3. Apply data structures and algorithms that are frequently used in information
processing.
Unit description
This unit deals with the meaning of data structures, abstract data types and abstraction,
algorithms, properties of algorithms, and the different types of complexity analysis and
analysis rules. These contents will be delivered to the students through brainstorming, group
discussion, observation/demonstration, tests and mini lectures. Students will be assessed with
lab exercises, quizzes, group work and individual assignments.
Method of Teaching: brainstorming, gap lectures, group discussion and question answering.
1.1.1. Data Structures and Abstract Data Types
The way data are organized in a computer's memory is called a data structure, and the sequence
of computational steps used to solve a problem is called an algorithm. Therefore, a program is
nothing but data structures plus algorithms.
Given a problem, the first step in solving it is obtaining one's own abstract view, or model, of the
problem. This process of modeling is called abstraction.
The model defines an abstract view of the problem. This implies that the model focuses only on
problem-related properties, and the programmer tries to define the properties of the problem.
With abstraction you create a well-defined entity that can be properly handled. These entities
define the data structures of the program.
An entity with the properties just described is called an abstract data type (ADT).
A data structure is a language construct that the programmer defines in order to implement an
abstract data type.
There are lots of formalized and standard Abstract data types such as Stacks, Queues, Trees, etc.
1.1.2. Abstraction
Abstraction is a process of classifying characteristics as relevant and irrelevant for the particular
purpose at hand and ignoring the irrelevant ones.
How do data structures model the world or some part of the world?
• The value held by a data structure represents some specific characteristic of the world.
• The characteristic being modeled restricts the possible values held by the data structure.
• The characteristic being modeled restricts the possible operations to be performed on the
data structure.
Note: Notice the relation between characteristics, values, and data structures.
1.2. Algorithms
An algorithm is a well-defined computational procedure that takes some value or a set of values
as input and produces some value or a set of values as output. Data structures model the static part
of the world. They are unchanging while the world is changing. In order to model the dynamic
part of the world we need to work with algorithms. Algorithms are the dynamic part of a
program’s world model.
The quality of a data structure is related to its ability to successfully model the characteristics of
the world. Similarly, the quality of an algorithm is related to its ability to successfully simulate
the changes in the world.
However, independent of any particular world model, the quality of data structure and algorithms
is determined by their ability to work together well. Generally speaking, correct data structures
lead to simple and efficient algorithms and correct algorithms lead to accurate and efficient data
structures.
Group discussion:
Discuss on each basic feature of Properties of an algorithm.
In order to solve a problem, there are many possible algorithms. One has to be able to choose the
best algorithm for the problem at hand using some scientific method. To classify some data
structures and algorithms as good, we need precise ways of analyzing them in terms of resource
requirement. The main resources are:
Running Time
Memory Usage
Communication Bandwidth
Running time is usually treated as the most important since computational time is the most
precious resource in most problem domains.
Accordingly, we can analyze an algorithm according to the number of operations required, rather
than according to an absolute amount of time involved. This can show how an algorithm’s
efficiency changes according to the size of the input.
The goal is to have a meaningful measure that permits comparison of algorithms independent of
operating platform.
Two points should be noted about such analysis: there is no generally accepted set of rules for
algorithm analysis; however, an exact count of operations is commonly used.
1.2.3.1. Analysis Rules:
1. We assume an arbitrary time unit.
2. Execution of one of the following operations takes time 1:
Assignment Operation
Single Input/Output Operation
Single Boolean Operations
Single Arithmetic Operations
Function Return
3. Running time of a selection statement (if, switch) is the time for the condition evaluation +
the maximum of the running times for the individual clauses in the selection.
4. Loops: Running time for a loop is equal to the running time for the statements inside the
loop * number of iterations.
The total running time of a statement inside a group of nested loops is the running time of
the statements multiplied by the product of the sizes of all the loops.
For nested loops, analyze inside out.
Always assume that the loop executes the maximum number of iterations possible.
5. Running time of a function call is 1 for setup + the time for any parameter calculations +
the time required for the execution of the function body.
Examples:
1. int count(){
int k=0;
int n;
cout<< "Enter an integer";
cin>>n;
for (int i=0;i<n;i++)
k=k+1;
return 0;}
Time Units to Compute
-------------------------------------------------
1 for the assignment statement: int k=0
1 for the output statement.
1 for the input statement.
In the for loop:
1 assignment, n+1 tests, and n increments.
n loops of 2 units for an assignment and an addition.
1 for the return statement.
T(n) = 1+1+1+(1+n+1+n)+2n+1 = 4n+6 = O(n)
Group work
• In general, a for loop translates to a summation. The index and bounds of the summation are
the same as the index and bounds of the for loop.
for (int i = 1; i <= N; i++) {
sum = sum+i;
}
This loop corresponds to the summation ∑_{i=1}^{N} 1 = N.
• Suppose we count the number of additions that are done. There is 1 addition per iteration of
the loop, hence N additions in total.
• Nested for loops translate into multiple summations, one for each for loop.
for (int i = 1; i <= N; i++) {
for (int j = 1; j <= M; j++) {
sum = sum+i+j;
}
}
This corresponds to ∑_{i=1}^{N} ∑_{j=1}^{M} 2 = ∑_{i=1}^{N} 2M = 2MN.
• Again, count the number of additions. The outer summation is for the outer for loop.
Conditionals: Formally
• If (test) s1 else s2: Compute the maximum of the running time for s1 and s2.
if (test == 1) {
for (int i = 1; i <= N; i++) {
sum = sum+i;
}}
else
for (int i = 1; i <= N; i++) {
for (int j = 1; j <= N; j++) {
sum = sum+i+j;
}}
The running time is max(∑_{i=1}^{N} 1, ∑_{i=1}^{N} ∑_{j=1}^{N} 2) = max(N, 2N²) = 2N².
Exercises
Determine the run time equation and complexity of each of the following code segments.
1. for (i=0;i<n;i++)
for (j=0;j<n; j++)
sum=sum+i+j;
What is the value of sum if n=100?
3. int k=0;
for (int i=0; i<n; i++)
for (int j=i; j<n; j++)
k++;
What is the value of k when n is equal to 20?
4. int k=0;
for (int i=1; i<n; i*=2)
for(int j=1; j<n; j++)
k++;
What is the value of k when n is equal to 20?
6. int x=0;
for(int k=n;k>=n/3;k=k-5)
x++;
What is the value of x when n=25?
7. int x=0;
for (int i=1; i<n;i=i+5)
for (int k=n;k>=n/3;k=k-5)
x++;
What is the value of x when n=25?
8. int x=0;
for(int i=1;i<n;i=i+5)
for(int j=0;j<i;j++)
for(int k=n;k>=n/2;k=k-3)
x++;
What is the correct big-Oh Notation for the above code segment?
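Where the exact count is not obvious, the loop structure can also be checked empirically. The short program below is a minimal sketch (not part of the original exercises) that counts how many times the innermost statement of exercise 4 executes for a given n; the same pattern can be used for the other segments.
#include <iostream>
using namespace std;

int main()
{
    int n = 20;          // input size used in the exercise
    int k = 0;           // counts executions of the innermost statement
    for (int i = 1; i < n; i *= 2)      // outer loop: i = 1, 2, 4, 8, 16
        for (int j = 1; j < n; j++)     // inner loop runs n-1 times
            k++;
    cout << "k = " << k << endl;        // prints k = 95 when n = 20 (5 outer * 19 inner)
    return 0;
}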
In order to determine the running time of an algorithm it is possible to define three functions
Tbest(n), Tavg(n) and Tworst(n) as the best, the average and the worst case running time of the
algorithm respectively.
Average Case (Tavg): The amount of time the algorithm takes on an "average" set of inputs.
Worst Case (Tworst): The amount of time the algorithm takes on the worst possible set of inputs.
Best Case (Tbest): The amount of time the algorithm takes on the smallest possible set of inputs.
We are interested in the worst-case time, since it provides a bound for all input – this is called the
“Big-Oh” estimate.
Formal Definition: f(n) = O(g(n)) if there exist c, k ∈ ℝ⁺ such that for all n ≥ k, f(n) ≤ c·g(n).
Examples: The following points are facts that you can use for Big-Oh problems. To show that
f(n) is O(g(n)) we must find constants c and k such that the inequality holds. For instance,
f(n) = 10n + 5 is O(n), because 10n + 5 ≤ 15n for all n ≥ 1 (c=15, k=1).
Typical Orders
Here is a table of some typical cases. This uses logarithms to base 2, but these are simply
proportional to logarithms in other bases.
O(1) constant
O(log n) logarithmic
O(n) linear
O(n log n) n log n
O(n²) quadratic
O(n³) cubic
O(2ⁿ) exponential
Demonstrating that a function f(n) is big-O of a function g(n) requires that we find specific
constants c and k for which the inequality holds (and show that the inequality does in fact hold).
Big-O expresses an upper bound on the growth rate of a function, for sufficiently large values of
n.
An upper bound is the best algorithmic solution that has been found for a problem.
“ What is the best that we know we can do?”
Exercise:
f(n) = (3/2)n2+(5/2)n-3
Show that f(n)= O(n2)
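A worked solution, with one possible choice of constants: for n ≥ 1 we have
(3/2)n² + (5/2)n - 3 ≤ (3/2)n² + (5/2)n² = 4n², so the definition is satisfied with c = 4 and k = 1,
and therefore f(n) = O(n²).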
In simple words, f (n) =O(g(n)) means that the growth rate of f(n) is less than or equal to g(n).
For all the following theorems, assume that f(n) is a function of n and that k is an arbitrary
constant.
Theorem 1: k is O(1).
Theorem 2: A polynomial is O(the term containing the highest power of n).
Exponential functions grow faster than powers, i.e. nᵏ is O(bⁿ) for b > 1 and k ≥ 0.
E.g. n²⁰ is O(1.05ⁿ).
Group Discussion
Discuss on each basic feature of Big-O Notation
Big-Omega Notation (Ω)
f(n) = Ω(g(n)) means that f(n) is greater than or equal to some constant multiple of g(n) for all
values of n greater than or equal to some k.
In simple terms, f(n) = Ω(g(n)) means that the growth rate of f(n) is greater than or equal to that of g(n).
Group Discussion
Discuss on each basic feature of Big-Omega Notation
Theta Notation (Θ)
A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that
it can be sandwiched between c1·g(n) and c2·g(n) for sufficiently large values of n.
Formal Definition: A function f(n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)). In other words,
there exist constants c1, c2, and k > 0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ k.
In simple terms, f(n) = Θ(g(n)) means that f(n) and g(n) have the same rate of growth.
Example: Suppose f(n) = 2n². Then
f(n) = O(n⁴)
f(n) = O(n³)
f(n) = O(n²)
All these are technically correct, but the last expression is the best and tightest one. Since 2n² and n²
have the same growth rate, it can also be written as f(n) = Θ(n²).
Group Discussion
Discuss on each basic feature of Theta Notation
Little-o Notation
Big-Oh is not always asymptotically tight: 2n² = O(n²) is a tight bound, but 2n² = O(n³) is not.
We use little-o notation to denote an upper bound that is not asymptotically tight.
f(n) = o(g(n)) means that for all c > 0 there exists some k > 0 such that f(n) < c·g(n) for all n ≥ k.
Informally, f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches
infinity.
Group Discussion
Discuss on each basic feature of Little-o Notation
Little-Omega Notation (ω)
Little-omega (ω) notation is to big-omega (Ω) notation as little-o notation is to Big-Oh notation.
We use ω notation to denote a lower bound that is not asymptotically tight.
Formal Definition: f(n) = ω(g(n)) if for every constant c > 0 there exists a constant k > 0 such
that 0 ≤ c·g(n) < f(n) for all n ≥ k.
Group Discussion
Discuss on each basic feature of Little-Omega Notation
Relational properties of the asymptotic notations:
Transitivity: if f(n) = Θ(g(n)) and g(n) = Θ(h(n)) then f(n) = Θ(h(n)); the same holds for O, Ω, o and ω.
Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
Transpose symmetry: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)); likewise f(n) = o(g(n)) if and only if g(n) = ω(f(n)).
Reflexivity: f(n) = Θ(f(n)), f(n) = O(f(n)) and f(n) = Ω(f(n)).
Assessment
Question 1: Compute the running time of the following function using the analysis rules.
void func()
{
int x=0;
int i=0;
int j=1;
int n;
cout<< "Enter an Integer value";
cin>>n;
while (i<n){
x++;
i++; }
while (j<n)
{
j++; } }
Question 2: Suppose we have hardware capable of executing 10⁶ instructions per second.
How long would it take to execute an algorithm whose complexity function is
T(n) = 2n² on an input size of n = 10⁸?
Answer
T(n) = 2n² = 2 × (10⁸)² = 2 × 10¹⁶ instructions. At 10⁶ instructions per second this takes
2 × 10¹⁶ / 10⁶ = 2 × 10¹⁰ seconds, which is roughly 634 years.
2. Simple Sorting and Searching Algorithms
Unit description
This unit deals with applications of algorithms on some searching (Linear and binary) and sorting
(Insertion, selection and bubble sorts) and coding for all algorithms using the algorithm with in
C++. These contents will be delivered to the students through brain storming, group discussion,
observation/demonstration, test and mini lecture. And students assessed with lab exercise, quiz,
group work and individual assignment.
2.1. Searching
Searching is a process of looking for a specific element in a list of items or determining that the
item is not in the list. There are two simple searching algorithms: sequential (linear) searching
and binary searching.
Linear (Sequential) Searching: loop through the list starting at the first element until the value of
the target matches one of the elements or the list is exhausted. Time is proportional to the size of
the input (n) and we call this time complexity O(n).
Example Implementation:
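A minimal sketch of sequential search in C++, assuming the data are stored in an integer array; the names list, n and key are illustrative.
#include <iostream>
using namespace std;

// Returns the index of key in list[0..n-1], or -1 if key is not found.
int linear_search(int list[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (list[i] == key)
            return i;      // found: report the position
    return -1;             // searched the whole list without a match
}

int main()
{
    int list[] = {6, 3, 1, 4, 8};
    cout << linear_search(list, 5, 4) << endl;   // prints 3
    cout << linear_search(list, 5, 9) << endl;   // prints -1
    return 0;
}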
Binary Searching: this algorithm works only on an ordered (sorted) list. It repeatedly compares
the target with the middle element and discards the half of the list that cannot contain the target.
Because the search space is halved at each step, the computational time for this algorithm is
proportional to log₂ n and therefore the time complexity is O(log n).
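A minimal sketch of binary search in C++, assuming the array is already sorted in ascending order (names are illustrative).
#include <iostream>
using namespace std;

// Returns the index of key in the sorted array list[0..n-1], or -1 if absent.
int binary_search(int list[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (list[mid] == key)
            return mid;          // found
        else if (list[mid] < key)
            low = mid + 1;       // key can only be in the upper half
        else
            high = mid - 1;      // key can only be in the lower half
    }
    return -1;                   // not found
}

int main()
{
    int list[] = {1, 3, 4, 6, 8};
    cout << binary_search(list, 5, 6) << endl;   // prints 3
    cout << binary_search(list, 5, 5) << endl;   // prints -1
    return 0;
}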
Group discussion:
Discuss on each searching algorithm with an example for each algorithms
2.2. Simple Sorting Algorithms
The simple sorting algorithms covered here are:
• Insertion Sort
• Selection Sort
• Bubble Sort
2.2.1. Insertion Sort
It's the most instinctive type of sorting algorithm. The approach is the same approach that you use
for sorting a set of cards in your hand. While playing cards, you pick up a card, start at the
beginning of your hand and find the place to insert the new card, insert it and move all the others
up one place.
Basic Idea:
Find the location for an element and move all others up, and insert the element.
1. The left most value can be said to be sorted relative to itself. Thus, we don’t need to do
anything.
2. Check to see if the second value is smaller than the first one. If it is, swap these two
values. The first two values are now relatively sorted.
3. Next, we need to insert the third value in to the relatively sorted portion so that after
insertion, the portion will still be relatively sorted.
4. Remove the third value first. Slide the second value to make room for insertion. Insert the
value in the appropriate position.
5. Now the first three are relatively sorted.
6. Do the same for the remaining items in the list.
Implementation
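A minimal insertion sort sketch in C++ following the basic idea above; the array name list and the length n are assumed to be supplied by the caller.
void insertion_sort(int list[], int n)
{
    for (int i = 1; i < n; i++)          // list[0..i-1] is already relatively sorted
    {
        int temp = list[i];              // value to be inserted
        int j = i;
        while (j > 0 && list[j - 1] > temp)
        {
            list[j] = list[j - 1];       // slide larger values up one place
            j--;
        }
        list[j] = temp;                  // insert the value in its proper position
    }
}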
Analysis of insertion sort:
How many comparisons? 1+2+3+…+(n-1) = O(n²)
How many swaps? 1+2+3+…+(n-1) = O(n²)
How much space? It is an in-place algorithm.
2.2.2. Selection Sort
Basic Idea:
Loop through the array from i = 0 to n-2; on each pass, select the smallest element of the
unsorted part (positions i to n-1) and swap it with the element at position i.
Implementation:
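A minimal selection sort sketch in C++ following this idea (list and n are assumed to be supplied by the caller).
void selection_sort(int list[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int smallest = i;                 // index of the smallest remaining element
        for (int j = i + 1; j < n; j++)
            if (list[j] < list[smallest])
                smallest = j;
        int temp = list[smallest];        // one swap per pass
        list[smallest] = list[i];
        list[i] = temp;
    }
}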
Analysis of selection sort:
How many comparisons? (n-1)+(n-2)+…+1 = O(n²)
How many swaps? n = O(n)
How much space? It is an in-place algorithm.
2.2.3. Bubble Sort
Basic Idea:
Loop through the array from i = 0 to n-1 and, on each pass, swap adjacent elements if they are out
of order.
Implementation:
void bubble_sort(int list[], int n)
{
int i,j,temp;
for(i=0;i<n; i++){
for(j=n-1;j>i; j--){
if(list[j]<list[j-1]){
temp=list[j];
list[j]=list[j-1];
list[j-1]=temp;
}//swap adjacent elements
}//end of inner loop
}//end of outer loop
}//end of bubble_sort
Analysis of bubble sort:
How many comparisons? (n-1)+(n-2)+…+1 = O(n²)
How many swaps? (n-1)+(n-2)+…+1 = O(n²) in the worst case
How much space? It is an in-place algorithm.
Group discussion:
Discuss on each sorting algorithm with an example for each algorithms
General Comments
Each of these algorithms requires n-1 passes, and each pass places one item in its correct place.
The i-th pass makes either i or n-i comparisons and moves. So the total amount of work is
1 + 2 + … + (n-1) = n(n-1)/2,
or O(n²). Thus these algorithms are only suitable for small problems where their simple code
makes them faster than the more complex code of the O(n log n) algorithms. As a rule of thumb,
expect to find an O(n log n) algorithm faster for n > 10 - but the exact value depends very much on
individual machines!
Empirically it’s known that Insertion sort is over twice as fast as the bubble sort and is just as easy
to implement as the selection sort. In short, there really isn't any reason to use the selection sort -
use the insertion sort instead.
If you really want to use the selection sort for some reason, try to avoid sorting lists of more than
1000 items with it, or repetitively sorting lists of more than a couple of hundred items.
Assessment
Question 1: What is the algorithm and implementation of Bubble Sort?
Answer
Check out the above notes for the answer.
3. Data Structures
3.1. Structures
Structures are aggregate data types built using elements of primitive data types.
The struct keyword creates a new user defined data type that is used to declare variables of an
aggregate data type.
The dot operator (.) is used to access the data members of a structure variable, and the arrow
operator (->) is used to access the data members through a pointer to the structure:
cout<< timeObject.hour; or
cout<< (*timeptr).hour; or
cout<< timeptr->hour;
The parentheses are required in the second form since the dereference operator (*) has lower
precedence than the dot operator (.).
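A minimal sketch that makes the example above self-contained, assuming a Time structure with an hour member as suggested by the fragment; the minute and second members are illustrative.
#include <iostream>
using namespace std;

struct Time
{
    int hour;
    int minute;
    int second;
};

int main()
{
    Time timeObject = {10, 30, 0};      // an aggregate of three ints
    Time *timeptr = &timeObject;        // pointer to the structure
    cout << timeObject.hour << endl;    // dot operator on the object
    cout << (*timeptr).hour << endl;    // dereference first, then dot
    cout << timeptr->hour << endl;      // arrow operator on the pointer
    return 0;
}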
struct list{
char name[10];
int count;
struct list *next;
};
Linked lists are the most basic self-referential structures. Linked lists allow you to have a chain of
structs with related data.
Arrays are simple and fast but we must specify their size at construction time. This has its own
drawbacks. If you construct an array with space for n elements, tomorrow you may need n+1.
Hence there is a need for a more flexible system.
Flexible space use by dynamically allocating space for each element as needed. This implies that
one need not know the size of the list in advance. Memory is efficiently utilized.
A linked list is a data structure that is built from structures and pointers. It forms a chain of
"nodes" with pointers representing the links of the chain and holding the entire thing together. A
linked list can be represented by a diagram like this one:
This linked list has four nodes in it, each with a link to the next node in the series. The last node
has a link to the special value NULL, which any pointer (whatever its type) can point to, to show
that it is the last link in the chain. There is also another special pointer, called Start (also called
head), which points to the first link in the chain so that we can keep track of it.
The key part of a linked list is a structure, which holds the data for each node (the name, address,
age or whatever for the items in the list), and, most importantly, a pointer to the next node. Here
we have given the structure of a typical node:
struct node
{ char name[20]; // Name of up to 20 letters
int age;
float height; // In metres
node *nxt;// Pointer to next node
};
struct node *start_ptr = NULL;
The important part of the structure is the line before the closing curly brackets. This gives a
pointer to the next node in the list. This is the only case in C++ where you are allowed to refer to
a data type (in this case node) before you have even finished defining it!
We have also declared a pointer called start_ptr that will permanently point to the start of the list.
To start with, there are no nodes in the list, which is why start_ptr is set to NULL.
The first problem that we face is how to add a node to the list. For simplicity's sake, we will
assume that it has to be added to the end of the list, although it could be added anywhere in the
list (a problem we will deal with later on).
Firstly, we declare the space for a pointer item and assign a temporary pointer to it. This is done
using the new statement as follows:
temp = new node;
We can refer to the new node as *temp, i.e. "the node that temp points to". When the fields of
this structure are referred to, brackets can be put round the *temp part, as otherwise the compiler
will think we are trying to refer to the fields of the pointer. Alternatively, we can use the arrow
pointer notation.
Having declared the node, we ask the user to fill in the details of the person, i.e. the name, age,
address or whatever:
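A minimal sketch of this step, using the node fields defined above (the prompts are illustrative):
cout << "Please enter the name of the person: ";
cin >> temp->name;
cout << "Please enter the age of the person   : ";
cin >> temp->age;
cout << "Please enter the height of the person: ";
cin >> temp->height;
temp->nxt = NULL;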
The last line sets the pointer from this node to the next to NULL, indicating that this node, when it
is inserted in the list, will be the last node. Having set up the information, we have to decide what
to do with the pointers. Of course, if the list is empty to start with, there's no problem - just set the
Start pointer to point to this node (i.e. set it to the same value as temp):
if (start_ptr == NULL)
start_ptr = temp;
It is harder if there are already nodes in the list. In this case, the secret is to declare a second
pointer, temp2, to step through the list until it finds the last node.
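A minimal sketch of that search, assuming the list is not empty:
temp2 = start_ptr;
// We know this is not NULL - list not empty!
while (temp2->nxt != NULL)
{ temp2 = temp2->nxt; // Move to next link in chain
}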
The loop will terminate when temp2 points to the last node in the chain, and it knows when this
happened because the nxt pointer in that node will point to NULL. When it has found it, it sets
the pointer from that last node to point to the node we have just declared:
temp2->nxt = temp;
The link temp2->nxt in this diagram is the link joining the last two nodes. The full code for
adding a node at the end of the list is shown below, in its own little function:
void add_node_at_end ()
{ node *temp, *temp2; // Temporary pointers
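// The rest of the body below is a sketch assembled from the steps described
// above (it assumes the node fields name, age, height and nxt).
temp = new node;                      // create the new node
cout << "Please enter the name of the person: ";
cin >> temp->name;
cout << "Please enter the age of the person   : ";
cin >> temp->age;
cout << "Please enter the height of the person: ";
cin >> temp->height;
temp->nxt = NULL;                     // the new node will be the last one
if (start_ptr == NULL)                // the list is empty
start_ptr = temp;
else
{ temp2 = start_ptr;                  // walk to the current last node
while (temp2->nxt != NULL)
temp2 = temp2->nxt;
temp2->nxt = temp;                    // link it to the new node
}
}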
Group discussion:
Discuss on each step of the above data structure code
Having added one or more nodes, we need to display the list of nodes on the screen. This is
comparatively easy to do. Here is the method:
1. Set a temporary pointer to point to the same thing as the start pointer.
2. If the pointer points to NULL, display the message "End of list" and stop.
3. Otherwise, display the details of the node pointed to by the start pointer.
4. Make the temporary pointer point to the same thing as the nxt pointer of the node it is
currently indicating.
5. Jump back to step 2.
The temporary pointer moves along the list, displaying the details of the nodes it comes across. At
each stage, it can get hold of the next node in the list by using the nxt pointer of the node it is
currently pointing to. Here is the C++ code that does the job:
temp = start_ptr;
do
{ if (temp == NULL)
cout << "End of list" << endl;
else
{ // Display details for what temp points to
cout << "Name : " << temp->name << endl;
cout << "Age : " << temp->age << endl;
cout << "Height : " << temp->height << endl;
cout << endl; // Blank line
// Move to next node (if present)
temp = temp->nxt;
}
}
while (temp != NULL);
One thing you may need to do is to navigate through the list, with a pointer that moves backwards
and forwards through the list, like an index pointer in an array. This is certainly necessary when
you want to insert or delete a node from somewhere inside the list, as you will need to specify the
position.
We will call the mobile pointer current. First of all, it is declared, and set to the same value as the
start_ptr pointer:
node *current;
current = start_ptr;
Notice that you don't need to set current equal to the address of the start pointer, as they are both
pointers. The statement above makes them both point to the same thing:
It's easy to get the current pointer to point to the next node in the list (i.e. move from left to right
along the list). If you want to move current along one node, use the nxt field of the node that it is
pointing to at the moment:
current = current->nxt;
In fact, we had better check that it isn't pointing to the last item in the list. If it is, then there is no
next node to move to:
if (current->nxt == NULL)
cout << "You are at the end of the list." << endl;
else
current = current->nxt;
Moving the current pointer back one step is a little harder. This is because we have no way of
moving back a step automatically from the current node. The only way to find the node before the
current one is to start at the beginning, work our way through and stop when we find the node
before the one we are considering at the moment. We can tell when this happens, as the nxt
pointer from that node will point to exactly the same place in memory as the current pointer (i.e.
the current node).
(Diagram: a temporary pointer starts at Start and steps along the list, stopping at the node whose
nxt pointer points to the Current node.)
First of all, we had better check to see if the current node is also the first one. If it is, then there is
no "previous" node to point to. If not, check through all the nodes in turn until we detect that we
are just behind the current one (like a pantomime - "behind you!"):
if (current == start_ptr)
cout << "You are at the start of the list" << endl;
else
{ node *previous; // Declare the pointer
previous = start_ptr;
while (previous->nxt != current)
previous = previous->nxt;
current = previous;
}
The else clause translates as follows: Declare a temporary pointer (for use in this else clause
only). Set it equal to the start pointer. All the time that it is not pointing to the node before the
current node, move it along the line. Once the previous node has been found, the current pointer is
set to that node - i.e. it moves back along the list.
Now that you have the facility to move back and forth, you need to do something with it. Firstly,
let's see if we can alter the details for that particular node in the list:
cout << "Please enter the new name of the person: ";
cin >> current->name;
cout << "Please enter the new age of the person : ";
cin >> current->age;
cout << "Please enter the new height of the person : ";
cin >> current->height;
The next easiest thing to do is to delete a node from the list directly after the current position. We
have to use a temporary pointer to point to the node to be deleted. Once this node has been
"anchored", the pointers to the remaining nodes can be readjusted before the node on death row is
deleted. Here is the sequence of actions:
1. Firstly, the temporary pointer is assigned to the node after the current one. This is the node
to be deleted:
2. Now the pointer from the current node is made to leap-frog the next node and point to the
one after that:
Here is the code for deleting the node. It includes a test at the start to test whether the current node
is the last one in the list:
if (current->nxt == NULL)
cout << "There is no node after current" << endl;
else
{ node *temp;
temp = current->nxt;
current->nxt = temp->nxt; // Could be NULL
delete temp;
}
Here is the code to add a node after the current one. This is done similarly, but we haven't
illustrated it with diagrams:
if (current->nxt == NULL)
add_node_at_end();
else
{ node *temp;
temp = new node;
get_details(temp);
// Make the new node point to the same thing as
// the current node
temp->nxt = current->nxt;
// Make the current node point to the new link
// in the chain
current->nxt = temp;
}
Here, get_details(temp) is a routine that reads in the details for the new node, similar to the code
shown just above, and so on.
Group discussion:
Discuss on each step of the above data structure code
3.2.6. Deleting a node from the list
When it comes to deleting nodes, we have three choices: Delete a node from the start of the list,
delete one from the end of the list, or delete one from somewhere in the middle. For simplicity,
we shall just deal with deleting one from the start or from the end.
When a node is deleted, the space that it took up should be reclaimed. Otherwise the computer
will eventually run out of memory space. This is done with the delete instruction (for example,
delete temp; releases the node pointed to by temp).
However, we can't just delete the nodes willy-nilly as it would break the chain. We need to
reassign the pointers and then delete the node at the last moment. Here is how we go about
deleting the first node in the linked list:
temp = start_ptr; // Make the temporary pointer identical to the start pointer
Now that the first node has been safely tagged (so that we can refer to it even when the start
pointer has been reassigned), we can move the start pointer to the next node in the chain:
start_ptr = start_ptr->nxt; // Second node in chain.
void delete_start_node()
{ node *temp;
temp = start_ptr;
start_ptr = start_ptr->nxt;
delete temp;
}
Deleting a node from the end of the list is harder, as the temporary pointer must find where the
end of the list is by hopping along from the start. This is done using code that is almost identical
to that used to insert a node at the end of the list. It is necessary to maintain two temporary
pointers, temp1 and temp2. The pointer temp1 will point to the last node in the list and temp2 will
point to the previous node. We have to keep track of both as it is necessary to delete the last node
and immediately afterwards, to set the nxt pointer of the previous node to NULL (it is now the
new last node).
1. Look at the start pointer. If it is NULL, then the list is empty, so print out a "No nodes to
delete" message.
2. Make temp1 point to whatever the start pointer is pointing to.
3. If the nxt pointer of what temp1 indicates is NULL, then we've found the last node of the
list, so jump to step 7.
4. Make another pointer, temp2, point to the current node in the list.
5. Make temp1 point to the next item in the list.
6. Go to step 3.
7. If you get this far, then the temporary pointer, temp1, should point to the last item in the
list and the other temporary pointer, temp2, should point to the last-but-one item.
8. Delete the node pointed to by temp1.
9. Mark the nxt pointer of the node pointed to by temp2 as NULL - it is the new last node.
Let's try it with a rough drawing. This is always a good idea when you are trying to understand an
abstract data type. Suppose we want to delete the last node from this list:
Firstly, the start pointer doesn't point to NULL, so we don't have to display an "Empty list, wise
guy!" message. Let's get straight on with step 2 - set the pointer temp1 to the same as the start
pointer:
pointer:
The nxt pointer from this node isn't NULL, so we haven't found the end node. Instead, we set the
pointer temp2 to the same node as temp1
Eventually, this goes on until temp1 really is pointing to the last node in the list, with temp2
pointing to the penultimate node:
We suppose you want some code for all that! All right then ....
void delete_end_node()
{ node *temp1, *temp2;
if (start_ptr == NULL)
cout << "The list is empty!" << endl;
else
{ temp1 = start_ptr;
while (temp1->nxt != NULL)
{ temp2 = temp1;
temp1 = temp1->nxt;
}
delete temp1;
temp2->nxt = NULL;
}
}
The code seems a lot shorter than the explanation!
Now, the sharp-witted amongst you will have spotted a problem. If the list only contains one
node, the code above will malfunction. This is because the function goes as far as the temp1 =
start_ptr statement, but never gets as far as setting up temp2. The code above has to be adapted so
that if the first node is also the last (has a NULL nxt pointer), then it is deleted and the start_ptr
pointer is assigned to NULL. In this case, there is no need for the pointer temp2:
void delete_end_node()
{ node *temp1, *temp2;
if (start_ptr == NULL)
cout << "The list is empty!" << endl;
else
{ temp1 = start_ptr;
if (temp1->nxt == NULL) // This part is new!
{ delete temp1;
start_ptr = NULL;
}
else
{ while (temp1->nxt != NULL)
{ temp2 = temp1;
temp1 = temp1->nxt;
}
delete temp1;
temp2->nxt = NULL;
}
}
}
Group discussion:
Discuss on each step of the above data structure code
That sounds even harder than a linked list! Well, if you've mastered how to do singly linked lists,
then it shouldn't be much of a leap to doubly linked lists.
A doubly linked list is one where there are links from each node in both directions:
You will notice that each node in the list has two pointers, one to the next node and one to the
previous one - again, the ends of the list are defined by NULL pointers. Also, there is no pointer
to the start of the list; instead, there is simply a pointer to the current position in the list, which can
be moved backwards and forwards.
The reason we needed a start pointer in the ordinary linked list is because, having moved on from
one node to another, we can't easily move back, so without the start pointer, we would lose track
of all the nodes in the list that we have already passed. With the doubly linked list, we can move
the current pointer backwards and forwards at will.
We have also included some code to declare the first node and set its pointers to NULL. It gives
the following situation:
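A minimal sketch of that declaration, assuming the same fields as before plus a prv pointer (the field names follow those used in the code below):
struct node
{ char name[20];    // Name of up to 20 letters
int age;
float height;       // In metres
node *nxt;          // Pointer to next node
node *prv;          // Pointer to previous node
};
node *current;
..
current = new node;     // declare the first node
current->nxt = NULL;    // no node after it yet
current->prv = NULL;    // no node before it either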
We still need to consider the directions 'forward' and 'backward', so in this case, we will need to
define functions to add a node to the start of the list (left-most position) and the end of the list
(right-most position).
void add_node_at_end (char new_name[])
{ // Declare a temporary pointer and move it to the end
node *temp = current;
while (temp->nxt != NULL)
temp = temp->nxt;
// Declare a new node and link it in
node *temp2;
temp2 = new node;
strcpy(temp2->name, new_name); // Store the new name in the node
temp2->nxt = NULL; // This is the new end of the list
temp2->prv = temp; // Links to current list
temp->nxt = temp2;
}
Group discussion:
Discuss on each step of the above data structure code
Here, the new name is passed to the appropriate function as a parameter. We'll go through the
function for adding a node to the right-most end of the list. The method is similar for adding a
node at the other end. Firstly, a temporary pointer is set up and is made to march along the list
until it points to last node in the list.
After that, a new node is declared, and the name is copied into it. The nxt pointer of this new node
is set to NULL to indicate that this node will be the new end of the list.
The prv pointer of the new node is linked into the last node of the existing list.
The nxt pointer of the current end of the list is set to the new node.
A simple data structure, in which insertion and deletion occur at the same end, is termed (called) a
stack. It is a LIFO (Last In First Out) structure.
The operations of insertion and deletion are called PUSH and POP respectively.
(Diagram: a stack holding 4, 1, 3, 6 with TOS pointing to 4; after Push(8) it holds 8, 4, 1, 3, 6 with
TOS at 8; after Pop it is back to 4, 1, 3, 6.)
Our Purpose:
To develop a stack implementation that does not tie us to a particular data type or to a particular
implementation.
Implementation:
Stacks can be implemented both as an array (contiguous list) and as a linked list. We want a set of
operations that will work with either type of implementation: i.e. the method of implementation is
hidden and can be changed without affecting the programs that use them.
Push()
{
if there is room
{
put an item on the top of the stack
}
else
{
give an error message
}
}
CreateStack()
{
remove existing items from the stack
initialise the stack to empty
}
Algorithm:
Step-1: Increment the Stack TOP by 1. Check whether it is less than the Upper Limit of
the stack. If it is less than the Upper Limit go to step-2, else report "Stack Overflow"
Step-2: Put the new element at the position pointed by the TOP
Implementation:
void push(int item) {
top = top + 1;
if(top < UPPERLIMIT)
stack[top] = item; /*step-1 & 2*/
else
cout<<"Stack Overflow"; }
Note:- In array implementation,we have taken TOP = -1 to signify the empty stack, as this
simplifies the implementation.
Algorithm
Step-1: If the Stack is empty then give the alert "Stack underflow" and quit; or else go to step-2
Step-2: a) Hold the value for the element pointed by the TOP
b) Put a NULL value instead
c) Decrement the TOP by 1
Implementation:
int pop()
{
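// A sketch of the body, following the algorithm above (array implementation;
// it assumes the same stack[], top and UPPERLIMIT used by push above).
int del_val = 0;
if(top == -1)
cout<<"Stack Underflow";            /*step-1*/
else
{
del_val = stack[top];               /*step-2(a): hold the value at TOP*/
stack[top] = 0;                     /*step-2(b): put a NULL (0) value instead*/
top = top - 1;                      /*step-2(c): decrement TOP*/
}
return del_val;
}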
Note: - Step-2:(b) signifies that the respective element has been deleted.
It’s very similar to the insertion operation in a dynamic singly linked list. The only difference is
that here you'll add the new element only at the end of the list, which means addition can happen
only from the TOP. Since a dynamic list is used for the stack, the Stack is also dynamic, means it
has no prior upper limit set. So, we don't have to check for the Overflow condition at all!
Algorithm
Step-1: If the Stack is empty go to step-2, or else go to step-3
Step-2: Create the new element, make the "stack" and "top" pointers point to it, and quit
Step-3: Create the new element and make the last (top most) element of the stack point to it
Step-4: Make that new element your TOP most element by making the "top" pointer point to it
Implementation:
struct node{
int item;
struct node *next;
};
struct node *stack = NULL; /*stack is initially empty*/
struct node *top = NULL;
struct node *newnode;
void push(int item)
{
if(stack == NULL) /*step-1*/
{
newnode = new node /*step-2*/
newnode -> item = item;
newnode -> next = NULL;
stack = newnode;
top = stack;
}
else
{
newnode = new node; /*step-3*/
newnode -> item = item;
newnode -> next = NULL;
top ->next = newnode;
top = newnode; /*step-4*/
}
}
Group discussion:
Discuss on each step of the above data structure code
Supposing you have only one element left in the Stack, then we
won't make use of "target" rather we'll take help of our
"bottom" pointer. See how...
Algorithm:
Step-1: If the Stack is empty then give an alert message "Stack Underflow" and quit; or else
proceed
Step-2: If there is only one element left go to step-3 or else step-4
Step-3: Free that element and make the "stack", "top" and "bottom" pointers point to NULL and
quit
Step-4: Make "target" point to just one element before the TOP; free the TOP most element;
make "target" as your TOP most element
Implementation:
struct node
{
int nodeval;
struct node *next;
};
struct node *stack = NULL; /*stack is initially empty*/
struct node *top = stack;
main()
{
int newvalue, delval;
..
push(newvalue);
..
delval = pop(); /*POP returns the deleted value from the stack*/
}
int pop( )
{
int pop_val = 0;
struct node *target = stack;
if(stack == NULL) /*step-1*/
cout<<"Stack Underflow";
else
{
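/* A sketch of the remaining body, following steps 2-4 of the algorithm above.
   The "bottom" pointer mentioned in the text is not declared in this fragment,
   so the one-element case is detected with top == stack instead. */
if(top == stack)                    /*step-2: only one element left in the stack*/
{                                   /*step-3: free it and reset the pointers*/
pop_val = top->nodeval;
delete top;
stack = NULL;
top = NULL;
}
else
{                                   /*step-4: make target point to the node just before TOP*/
while(target->next != top)
target = target->next;
pop_val = top->nodeval;             /*hold the deleted value*/
delete top;
top = target;                       /*target becomes the new TOP most element*/
top->next = NULL;
}
}
return pop_val;
}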
Applications of Stacks
Group discussion:
Mention at list one application of stacks?
Evaluation of Algebraic Expressions
E.g. 4 + 5 * 5
A simple left-to-right calculator gives 45; a scientific calculator, which respects operator
precedence, gives the correct answer 29.
Question:
Can we develop a method of evaluating arithmetic expressions without having to ‘look
ahead' or 'look back'? i.e. consider the quadratic formula:
x = (-b+(b^2-4*a*c)^0.5)/(2*a)
In its current form we cannot solve the formula without considering the ordering of the
parentheses, i.e. we solve the innermost parentheses first and then work outwards, also
considering operator precedence.
Computers solve arithmetic expressions by restructuring them so the order of each calculation is
embedded in the expression. Once converted an expression can then be solved in one pass.
Types of Expression
The normal (or human) way of expressing mathematical expressions is called infix form, e.g.
4+5*5. However, there are other ways of representing the same expression, either by writing all
operators before their operands or after them,
e.g. 4 5 5 * +
or + 4 * 5 5
This method is called Polish Notation (because this method was discovered by the Polish
mathematician Jan Lukasiewicz).
When the operators are written before their operands, it is called the prefix form
e.g. + 4 * 5 5
When the operators come after their operands, it is called postfix form (suffix form or reverse
polish notation)
e.g. 4 5 5 * +
Postfix notation arises from the concept of post-order traversal of an expression tree (see Weiss p.
93 - this concept will be covered when we look at trees).
For now, consider postfix notation as a way of redistributing operators in an expression so that
their operation is delayed until the correct time.
Purpose
The reason for using postfix notation is that a fairly simple algorithm exists to evaluate such
expressions based on using a stack.
Postfix Evaluation
To evaluate a postfix expression with a stack, each operand is pushed onto the stack as it is read;
when a binary operator is read, two operands are popped, the operation is performed, and the
result is pushed back onto the stack. Unary operators (unary minus, square root, sin, cos, exp,
etc.) pop only one operand.
So for 6 5 2 3 + 8 * + 3 + *, the first four symbols are operands and are pushed onto the stack,
giving 3, 2, 5, 6 with TOS at 3.
So next a '+' is read (a binary operator), so 3 and 2 are popped from the stack and their sum '5'
is pushed onto the stack:
TOS=> 5, 5, 6 (3 and 2 popped, 5 pushed)
Next, 8 is pushed: TOS=> 8, 5, 5, 6. Then '*' is read: TOS=> 40, 5, 6 (8 and 5 popped, 40 pushed).
Then '+' is read, giving TOS=> 45, 6, and 3 is pushed: TOS=> 3, 45, 6 (40 and 5 popped, 45
pushed, 3 pushed).
Finally '+' and '*' are read: 45 and 3 are popped and 48 is pushed, then 48 and 6 are popped and
288 is pushed, leaving TOS=> 288.
Now there are no more items and there is a single value on the stack, representing the final answer
288.
Note the answer was found with a single traversal of the postfix expression, with the stack being
used as a kind of memory storing values that are waiting for their operands.
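A minimal sketch of this evaluation in C++, assuming single-digit operands and the binary operators +, -, * and / (the expression string and names are illustrative).
#include <iostream>
#include <stack>
#include <string>
#include <cctype>
using namespace std;

// Evaluates a postfix expression of single-digit operands, e.g. "6523+8*+3+*".
int evaluate_postfix(const string& expr)
{
    stack<int> s;
    for (char c : expr)
    {
        if (isdigit(c))
            s.push(c - '0');              // operand: push its value
        else
        {
            int right = s.top(); s.pop(); // binary operator: pop two operands
            int left  = s.top(); s.pop();
            switch (c)
            {
                case '+': s.push(left + right); break;
                case '-': s.push(left - right); break;
                case '*': s.push(left * right); break;
                case '/': s.push(left / right); break;
            }
        }
    }
    return s.top();                       // the single remaining value is the answer
}

int main()
{
    cout << evaluate_postfix("6523+8*+3+*") << endl;   // prints 288
    return 0;
}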
Infix to Postfix Conversion
Algorithm: operands are written to the postfix output as soon as they are read; operators are
pushed onto a stack and are popped to the output whenever an operator of lower or equal
precedence arrives (here * and / have precedence 2, and + and - have precedence 1); an opening
parenthesis is pushed, and a closing parenthesis pops operators to the output until the matching
opening parenthesis is removed; at the end of the input the remaining operators are popped to
the output.
Example: converting the infix expression a + b * c + (d * e + f) * g gives the postfix expression
a b c * + d e * f + g * +.
(Diagram: the contents of the operator stack and of the growing postfix string at each step of this
conversion.)
The algorithm immediately passes values (operands) to the postfix expression, but
remembers (saves) operators on the stack until their right-hand operands are fully
translated.
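A minimal sketch of this conversion in C++, assuming single-letter operands and the four operators above (names are illustrative).
#include <iostream>
#include <stack>
#include <string>
#include <cctype>
using namespace std;

// Precedence used above: 2 for * and /, 1 for + and -, 0 for '('.
int prec(char op)
{
    if (op == '*' || op == '/') return 2;
    if (op == '+' || op == '-') return 1;
    return 0;
}

// Converts an infix expression with single-letter operands to postfix.
string infix_to_postfix(const string& infix)
{
    stack<char> ops;
    string postfix;
    for (char c : infix)
    {
        if (isalpha(c))
            postfix += c;                         // operands go straight to the output
        else if (c == '(')
            ops.push(c);
        else if (c == ')')
        {
            while (ops.top() != '(')              // pop until the matching parenthesis
            { postfix += ops.top(); ops.pop(); }
            ops.pop();                            // discard the '(' itself
        }
        else if (c == '+' || c == '-' || c == '*' || c == '/')
        {
            while (!ops.empty() && prec(ops.top()) >= prec(c))
            { postfix += ops.top(); ops.pop(); }  // pop operators of higher or equal precedence
            ops.push(c);                          // remember this operator for later
        }
    }
    while (!ops.empty())                          // flush the remaining operators
    { postfix += ops.top(); ops.pop(); }
    return postfix;
}

int main()
{
    cout << infix_to_postfix("a+b*c+(d*e+f)*g") << endl;   // prints abc*+de*f+g*+
    return 0;
}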
Assessment
Question 1: Write down the algorithm and C++ code for Linked List Implementation of Stacks:
the POP Operation?
Question 2: Write down the algorithm and C++ code for Linked List Implementation of Stacks:
the Push Operation?
Question 4: Write down the c++ code for adding node at the end using double linked list?
Answer
Check out the above chapter for the brief answer.
Queues
A queue is a data structure that has access to its data at the front and the rear.
It operates on a FIFO (First In First Out) basis.
It uses two pointers/indices to keep track of information/data.
It has two basic operations:
o enqueue - inserting data at the rear of the queue
o dequeue - removing data at the front of the queue
(Diagram: elements enter the queue at the Rear (enqueue) and leave it at the Front (dequeue).)
Example: a simple array implementation of the enqueue and dequeue operations.
Analysis:
Consider the following structure: int Num[MAX_SIZE];
We need to have two integer variables that tell:
- the index of the front element
- the index of the rear element
We also need an integer variable that tells:
- the total number of data in the queue
Implementation:
const int MAX_SIZE=100;
int FRONT =-1, REAR =-1;
int QUEUESIZE = 0;
void enqueue(int x)
{
if(REAR<MAX_SIZE-1)
{
REAR++;
Num[REAR]=x;
QUEUESIZE++;
if(FRONT == -1)
FRONT++;
}
else
cout<<"Queue Overflow";
}
int dequeue()
{
int x;
if(QUEUESIZE>0)
{
x=Num[FRONT];
FRONT++;
QUEUESIZE--;
}
else
cout<<"Queue Underflow";
return(x);
}
A problem with simple arrays is we run out of space even if the queue never reaches the size of
the array. Thus, simulated circular arrays (in which freed spaces are re-used to store data) can be
used to solve this problem.
The circular array implementation of a queue with MAX_SIZE can be simulated as follows:
(Diagram: a circular array with slots indexed 0, 1, 2, … up to MAX_SIZE-1 arranged in a circle,
so that index 0 follows index MAX_SIZE-1.)
Analysis:
Consider the following structure: int Num[MAX_SIZE];
We need to have two integer variables that tell:
- the index of the front element
- the index of the rear element
We also need an integer variable that tells:
- the total number of data in the queue
Implementation:
const int MAX_SIZE=100;
int FRONT =-1, REAR =-1;
int QUEUESIZE = 0;
void enqueue(int x)
{
if(QUEUESIZE<MAX_SIZE)
{
REAR++;
if(REAR == MAX_SIZE)
REAR=0;
Num[REAR]=x;
QUEUESIZE++;
if(FRONT == -1)
FRONT++;
}
else
cout<<"Queue Overflow";
}
int dequeue()
{
int x;
if(QUEUESIZE>0)
{
x=Num[FRONT];
FRONT++;
if(FRONT == MAX_SIZE)
FRONT=0;
QUEUESIZE--;
}
else
cout<<"Queue Underflow";
return(x);
}
Priority Queue: a queue in which each element has an associated priority (key), and the dequeue
operation removes the element with the highest priority.
Example: Consider the following queue of persons where females have higher priority than
males (gender is the key to give priority).
Thus, in the above example the implementation of the dequeue operation need to be
modified.
Example: The following two queues can be created from the above priority queue.
Name:   Aster  Meron  Abebe  Alemu  Belay  Kedir  Yonas
Gender: Female Female Male   Male   Male   Male   Male
Algorithm:
create empty females and males queue
while (PriorityQueue is not empty)
{
Data=DequeuePriorityQueue(); // delete data at the front
if(the data is Female)
EnqueueFemaleQueue(Data);
else
EnqueueMaleQueue(Data);
}
Example: The following two queues (females queue has higher priority than the males
queue) can be merged to create a priority queue.
Name:   Aster  Meron  Abebe  Alemu  Belay  Kedir  Yonas
Gender: Female Female Male   Male   Male   Male   Male
Algorithm:
while(FemalesQueue is not empty)
dequeue data from FemalesQueue and enqueue it onto the PriorityQueue
while(MalesQueue is not empty)
dequeue data from MalesQueue and enqueue it onto the PriorityQueue
Group discussion:
Discuss the demerging and merging of priority queues shown above.
Trees
A tree is a non-linear data structure in which each node may have zero or more children but at
most one parent; the node with no parent is the root, and nodes with no children are called leaves.
(Diagram: example trees with nodes labelled B through M, illustrating this terminology.)
Binary tree: a tree in which each node has at most two children called left child and right child.
Full binary tree: a binary tree where each node has either 0 or 2 children.
Balanced binary tree: a binary tree where each node except the leaf nodes has left and right
children and all the leaves are at the same level.
Group Discussion
Discuss on each type of trees
Binary search tree (ordered binary tree): A binary tree that may be empty, but if it is
not empty it satisfies the following.
Every node has a key and no two elements have the same key.
The keys in the right subtree are larger than the keys in the root.
The keys in the left subtree are smaller than the keys in the root.
The left and the right subtrees are also binary search trees.
(Diagram: a binary search tree with root 10; 10's children are 6 and 15, 6's children are 4 and 8,
15's children are 14 and 18, 8 has a left child 7, 14 has a left child 12, 12's children are 11 and 13,
and 18's children are 16 and 19.)
4.2.3.1. Insertion
When a node is inserted the definition of binary search tree should be preserved. Suppose
there is a binary search tree whose root node is pointed by RootNodePtr and we want to insert
a node (that stores 17) pointed by InsNodePtr.
InsertBST(RootNodePtr, InsNodePtr)
(Diagrams: the node containing 17 is inserted into the tree above; since 17 > 10, 17 > 15, 17 < 18
and 17 > 16, it becomes the right child of 16.)
Function call:
if(RootNodePtr == NULL)
RootNodePtr=InsNodePtr;
else
InsertBST(RootNodePtr, InsNodePtr);
Implementation:
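A sketch of InsertBST, assuming a Node structure with Num, Left and Right members (consistent with the members used in the DeleteBST call near the end of this chapter).
struct Node
{
    int Num;
    Node *Left, *Right;
};

// Inserts the node pointed to by InsNodePtr into the tree rooted at RNP (RNP != NULL).
void InsertBST(Node *RNP, Node *InsNodePtr)
{
    if (InsNodePtr->Num < RNP->Num)          // smaller keys go to the left subtree
    {
        if (RNP->Left == NULL)
            RNP->Left = InsNodePtr;
        else
            InsertBST(RNP->Left, InsNodePtr);
    }
    else                                     // larger keys go to the right subtree
    {
        if (RNP->Right == NULL)
            RNP->Right = InsNodePtr;
        else
            InsertBST(RNP->Right, InsNodePtr);
    }
}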
Group Discussion
Discuss on each step of the above C++ code
Traversing: a binary tree can be traversed in three principal orders: preorder (node, left subtree,
right subtree), inorder (left subtree, node, right subtree) and postorder (left subtree, right subtree,
node).
Example:
(Diagram: the binary search tree above with 17 inserted as the right child of 16; RootNodePtr
points to the root node 10.)
Preorder traversal - 10, 6, 4, 8, 7, 15, 14, 12, 11, 13, 18, 16, 17, 19
Inorder traversal - 4, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19
==> Used to display nodes in ascending order.
Postorder traversal- 4, 7, 8, 6, 11, 13, 12, 14, 17, 16, 19, 18, 15, 10
Example:
(Diagram: an expression tree with the operator + at an internal node and the operands B, C, E and
F at the leaves.)
Function calls:
Preorder(RootNodePtr);
Inorder(RootNodePtr);
Postorder(RootNodePtr);
Implementation:
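A minimal sketch of the three traversal functions, using the same Node structure assumed earlier.
#include <iostream>
using namespace std;

void Preorder(Node *RNP)
{
    if (RNP != NULL)
    {
        cout << RNP->Num << " ";   // visit the node first
        Preorder(RNP->Left);
        Preorder(RNP->Right);
    }
}

void Inorder(Node *RNP)
{
    if (RNP != NULL)
    {
        Inorder(RNP->Left);
        cout << RNP->Num << " ";   // visit the node between the two subtrees
        Inorder(RNP->Right);
    }
}

void Postorder(Node *RNP)
{
    if (RNP != NULL)
    {
        Postorder(RNP->Left);
        Postorder(RNP->Right);
        cout << RNP->Num << " ";   // visit the node last
    }
}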
Function call:
ElementExists = SearchBST (RootNodePtr, Number);
// ElementExists is a Boolean variable defined as: bool ElementExists = false;
When we search an element in a binary search tree, sometimes it may be necessary for the
SearchBST function to return a pointer that points to the node containing the element
searched. Accordingly, the function has to be modified as follows.
Function call:
SearchedNodePtr = SearchBST (RootNodePtr, Number);
// SearchedNodePtr is a pointer variable defined as: Node *SearchedNodePtr=NULL;
Implementation:
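A sketch of both versions described above, again assuming the Node structure with Num, Left and Right members; the second version is given the distinct name SearchBSTNode here for illustration.
// Version 1: returns true if Number exists in the tree.
bool SearchBST(Node *RNP, int Number)
{
    if (RNP == NULL)
        return false;                       // reached an empty subtree: not found
    else if (RNP->Num == Number)
        return true;
    else if (Number < RNP->Num)
        return SearchBST(RNP->Left, Number);
    else
        return SearchBST(RNP->Right, Number);
}

// Version 2: returns a pointer to the node containing Number, or NULL.
Node* SearchBSTNode(Node *RNP, int Number)
{
    if (RNP == NULL || RNP->Num == Number)
        return RNP;
    else if (Number < RNP->Num)
        return SearchBSTNode(RNP->Left, Number);
    else
        return SearchBSTNode(RNP->Right, Number);
}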
Deleting a leaf node (a node having no child): e.g. delete 7.
(Diagrams: before and after deleting 7 from a binary search tree with root 10; the leaf node is
simply removed, so its parent 8 no longer has a left child.)
Deleting a node having only one child: e.g. delete 2.
If the deleted node is the left child of its parent and the deleted node has only the left child, the
left child of the deleted node is made the left child of the parent of the deleted node.
If the deleted node is the left child of its parent and the deleted node has only the right child,
the right child of the deleted node is made the left child of the parent of the deleted node.
If the deleted node is the right child of its parent and the node to be deleted has only the left
child, the left child of the deleted node is made the right child of the parent of the deleted
node.
If the deleted node is the right child of its parent and the deleted node has only the right child,
the right child of the deleted node is made the right child of the parent of the deleted node.
(Diagrams: before and after deleting 2, which has only the left child 1; 1 becomes the left child of
2's parent, 3.)
Deleting a node having two children: e.g. delete 6. A symmetric rule applies depending on
whether the deleted node is the left or the right child of its parent. If the deleted node is the right
child of its parent, one of the following is done:
o The left child of the deleted node is made the right child of the parent of the deleted node,
and
o The right child of the deleted node is made the right child of the node containing largest
element in the left of the deleted node
OR
o The right child of the deleted node is made the right child of the parent of the deleted
node, and
o The left child of the deleted node is made the left child of the node containing smallest
element in the right of the deleted node
(Diagrams: four ways of deleting 6, a node having two children; its position is taken over either
by promoting its left child 3 or its right child 8, or by replacing 6 with 5, the largest element in its
left subtree, or with 7, the smallest element in its right subtree.)
Deleting the root node: e.g. delete 10.
If the tree has only one node the root node pointer is made to point to nothing (NULL)
If the root node has left child
o the root node pointer is made to point to the left child
o the right child of the root node is made the right child of the node containing the largest
element in the left of the root node
If root node has right child
o the root node pointer is made to point to the right child
o the left child of the root node is made the left child of the node containing the smallest
element in the right of the root node
(Diagrams: four ways of deleting the root node 10; either the left child 6 or the right child 14 is
promoted to the root with the remaining subtree re-attached as described above, or the root is
replaced by 9, the largest element in its left subtree, or by 11, the smallest element in its right
subtree.)
Function call:
if ((RootNodePtr->Left==NULL)&&( RootNodePtr->Right==NULL) && (RootNodePtr->Num==N))
{ // the node to be deleted is the root node having no child
delete RootNodePtr;
RootNodePtr=NULL;
}
else
DeleteBST(RootNodePtr, RootNodePtr, N);
Assessment
Question 1: Write down the array implementation of Enqueue and dequeue operations?
Question 2:
(Diagram: the binary search tree with root 10 used earlier in this chapter: 10's children are 6 and
15, 6's children are 4 and 8, 15's children are 14 and 18, 8 has a left child 7, 14 has a left child 12,
12's children are 11 and 13, and 18's children are 16 and 19.)
Using the above tree write down the Root, Internal node, External (leaf) node, Ancestors of a
node, Descendants of a node and Depth of a node?
Question 4: Write down the c++ code for deleting a node having only one child?
Answer
Check out the above chapter for brief answer.
Graphs
A graph is a data structure that consists of a set of nodes (vertices) and a set of edges that relate
the nodes to each other. The set of edges describes relationships among the vertices.
When the edges in a graph have a direction, the graph is called directed (or digraph)
Trees vs graphs
Trees are special cases of graphs!!
Example: if there is a directed edge from vertex 5 to vertex 7, then
5 is adjacent to 7, and
7 is adjacent from 5.
Complete graph: a graph in which every vertex is directly connected to every other vertex.
A complete directed graph with N vertices has N * (N-1) edges, which is O(N²); a complete
undirected graph has N * (N-1) / 2 edges, which is also O(N²).
Graph representation
• Adjacency matrix (array-based implementation): a two-dimensional array in which entry (u, v)
records whether there is an edge from vertex u to vertex v.
• Adjacency list (linked-list-based implementation): a list is kept for each vertex v which contains
the vertices that are adjacent from v.
• Adjacency matrix
– Good for dense graphs: |E| ≈ O(|V|²)
– Memory requirements: O(|V| + |E|) = O(|V|²)
– Connectivity between two vertices can be tested quickly
• Adjacency list
– Good for sparse graphs: |E| ≈ O(|V|)
– Memory requirements: O(|V| + |E|) = O(|V|)
– Vertices adjacent to another vertex can be found quickly
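A minimal sketch of both representations in C++ for a small directed graph; the vertex count and the edges are illustrative.
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    const int V = 4;                        // number of vertices
    bool matrix[V][V] = {false};            // adjacency matrix
    vector<int> adj[V];                     // adjacency lists

    int edges[][2] = {{0,1}, {0,2}, {1,2}, {2,3}};   // directed edges u -> v
    for (auto& e : edges)
    {
        matrix[e[0]][e[1]] = true;          // allows an O(1) edge test later
        adj[e[0]].push_back(e[1]);          // compact storage for sparse graphs
    }

    cout << "edge 0->2? " << matrix[0][2] << endl;   // prints 1
    cout << "vertices adjacent from 2: ";
    for (int v : adj[2]) cout << v << " ";           // prints 3
    cout << endl;
    return 0;
}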
Graph searching
Problem: find a path between two nodes of the graph (e.g., Austin and Washington)
Depth First Search (DFS)
The DFS algorithm is a recursive algorithm that uses the idea of backtracking. It involves
exhaustive searches of all the nodes by going ahead, if possible, else by backtracking.
Here, the word backtrack means that when you are moving forward and there are no more nodes
along the current path, you move backwards on the same path to find nodes to traverse. All the
nodes will be visited on the current path till all the unvisited nodes have been traversed after
which the next path will be selected.
This recursive nature of DFS can be implemented using stacks. The basic idea is as follows:
Pick a starting node and push all its adjacent nodes into a stack.
Pop a node from stack to select the next node to visit and push all its adjacent nodes into a stack.
Repeat this process until the stack is empty. However, ensure that the nodes that are visited are
marked. This will prevent you from visiting the same node more than once. If you do not mark
the nodes that are visited and you visit the same node more than once, you may end up in an
infinite loop.
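A minimal stack-based DFS sketch in C++, using the adjacency-list representation assumed above; it follows a common variant that pushes the starting node itself first (graph, V and start are illustrative).
#include <iostream>
#include <vector>
#include <stack>
using namespace std;

// Visits every node reachable from 'start', marking nodes so each is visited once.
void dfs(const vector<int> graph[], int V, int start)
{
    vector<bool> visited(V, false);
    stack<int> s;
    s.push(start);
    while (!s.empty())
    {
        int node = s.top(); s.pop();        // next node to visit
        if (visited[node]) continue;        // already seen: skip to avoid loops
        visited[node] = true;
        cout << node << " ";
        for (int adj : graph[node])         // push all adjacent, unvisited nodes
            if (!visited[adj])
                s.push(adj);
    }
    cout << endl;
}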
Breadth First Search (BFS)
There are many ways to traverse graphs; BFS is the most commonly used approach.
BFS is a traversing algorithm where you should start traversing from a selected node (source or
starting node) and traverse the graph layerwise thus exploring the neighbour nodes (nodes which are
directly connected to source node). You must then move towards the next-level neighbour nodes.
As the name BFS suggests, you are required to traverse the graph breadthwise as follows:
1. First move horizontally and visit all the nodes of the current layer
2. Move to the next layer
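A minimal queue-based BFS sketch in C++, again using the adjacency-list representation (names are illustrative).
#include <iostream>
#include <vector>
#include <queue>
using namespace std;

// Visits nodes layer by layer starting from 'start'.
void bfs(const vector<int> graph[], int V, int start)
{
    vector<bool> visited(V, false);
    queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty())
    {
        int node = q.front(); q.pop();      // next node of the current layer
        cout << node << " ";
        for (int adj : graph[node])         // enqueue the next layer
            if (!visited[adj])
            {
                visited[adj] = true;        // mark when enqueued so it is added only once
                q.push(adj);
            }
    }
    cout << endl;
}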
5. Advanced Sorting and Searching Algorithms
Brainstorming: What do you think is the major difference between the ordinary searching and
sorting algorithms and the advanced ones?
5.1. Shell Sort
Shell sort is an improvement on insertion sort. It was developed by Donald Shell in 1959. Insertion
sort works best when the array elements are already in a reasonable order; thus, shell sort first
creates this reasonable order.
Algorithm:
1. Choose gaps gk > gk-1 > … > g2 > g1 = 1 between the elements to be partly ordered (this is
called the increment sequence).
2. For each gap gi, taken from the largest to the smallest, sort the elements that are gi apart so
that A[j] <= A[j+gi] for 0 <= j <= n-1-gi and k >= i >= 1.
Group discussion:
Discuss on the algorithm of Shell Sort.
It is advisable to choose gk = n/2 and gi-1 = gi/2 for k >= i >= 1. After the pass with gap gi is done,
the list is said to be gi-sorted. Shell sorting is finished when the list is 1-sorted (the final pass,
which is an ordinary insertion sort), at which point A[j] <= A[j+1] for 0 <= j <= n-2. The time
complexity is O(n^1.5).
Example: sort the list 5 8 2 4 1 3 9 7 6 0 (n = 10).
Choose g3 = 5
Sort (5, 3) 3 8 2 4 1 5 9 7 6 0
Sort (8, 9) 3 8 2 4 1 5 9 7 6 0
Sort (2, 7) 3 8 2 4 1 5 9 7 6 0
Sort (4, 6) 3 8 2 4 1 5 9 7 6 0
Sort (1, 0) 3 8 2 4 0 5 9 7 6 1
5- sorted list 3 8 2 4 0 5 9 7 6 1
Choose g2 =3
Sort (3, 4, 9, 1) 1 8 2 3 0 5 4 7 6 9
Sort (8, 0, 7) 1 0 2 3 7 5 4 8 6 9
Sort (2, 5, 6) 1 0 2 3 7 5 4 8 6 9
3- sorted list 1 0 2 3 7 5 4 8 6 9
Choose g1 =1 (the same as insertion sort algorithm)
Sort (1, 0, 2, 3, 7, 5, 4, 8, 6, 9) 0 1 2 3 4 5 6 7 8 9
1- sorted (shell sorted) list 0 1 2 3 4 5 6 7 8 9
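A minimal shell sort sketch in C++ using the gap sequence suggested above (gap = n/2, then halved each time); list and n are assumed to be supplied by the caller.
void shell_sort(int list[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2)        // increment sequence n/2, n/4, ..., 1
    {
        // gap-insertion sort: after this pass the list is gap-sorted
        for (int i = gap; i < n; i++)
        {
            int temp = list[i];
            int j = i;
            while (j >= gap && list[j - gap] > temp)
            {
                list[j] = list[j - gap];            // slide the larger element up by one gap
                j -= gap;
            }
            list[j] = temp;
        }
    }
}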
5.2. Quick Sort
Group discussion:
Discuss on the algorithm of Quick Sort.
Quick sort is, in practice, the fastest of the commonly used sorting algorithms. It uses a divide and
conquer strategy; in the worst case its complexity is O(n²), but its expected (average case)
complexity is O(n log n).
Algorithm:
1. Choose a pivot value (mostly the first element is taken as the pivot value)
2. Position the pivot element and partition the list so that:
the left part has items less than or equal to the pivot value
the right part has items greater than or equal to the pivot value
3. Recursively sort the left part
4. Recursively sort the right part
The following algorithm can be used to position a pivot value and create partition.
Left=0;
Right=n-1; // n is the total number of elements in the list
PivotPos=Left;
while(Left<Right)
{
if(PivotPos==Left)
{
if(Data[Left]>Data[Right])
{
swap(Data[Left], Data[Right]);
PivotPos=Right;
Left++;
}
else
Right--;
}
else
{
if(Data[Left]>Data[Right])
{
swap(Data[Left], Data[Right]);
PivotPos=Left;
Right--;
}
else
Left++;
}
}
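A sketch of the complete quick sort built around this idea; note that it uses its own partitioning loop rather than the exact code above, so the details differ slightly while the divide and conquer structure is the same.
// Sorts Data[Left..Right] in place.
void quick_sort(int Data[], int Left, int Right)
{
    if (Left >= Right)
        return;                              // zero or one element: already sorted
    int pivot = Data[Left];                  // take the first element as the pivot
    int i = Left, j = Right;
    while (i < j)
    {
        while (i < j && Data[j] >= pivot) j--;   // find an element smaller than the pivot
        Data[i] = Data[j];
        while (i < j && Data[i] <= pivot) i++;   // find an element larger than the pivot
        Data[j] = Data[i];
    }
    Data[i] = pivot;                         // the pivot is now in its final position
    quick_sort(Data, Left, i - 1);           // recursively sort the left part
    quick_sort(Data, i + 1, Right);          // recursively sort the right part
}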
(Diagram: an example showing the Left, Right and Pivot positions while a list is being
partitioned.)
5.3. Heap Sort
Heap sort operates by first converting the list into a heap tree. A heap tree is a binary tree in which
each node has a value greater than both of its children (if any). It uses a process called "adjust" to
accomplish its task (building a heap tree) whenever a value is larger than its parent. The time
complexity of heap sort is O(n log n).
Algorithm:
1. Construct a binary tree
The root node corresponds to Data[0].
If we consider the index associated with a particular node to be i, then the left child of
this node corresponds to the element with index 2*i+1 and the right child corresponds
to the element with index 2*i+2. If any or both of these elements do not exist in the
array, then the corresponding child node does not exist either.
2. Construct the heap tree from initial binary tree using "adjust" process.
3. Sort by swapping the root value with the lowest, right-most value, deleting the lowest,
right-most value, and inserting the deleted value into the array in its proper position.
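A minimal heap sort sketch in C++ following the same array-as-tree layout from step 1 (children of index i at 2*i+1 and 2*i+2); the in-place version below swaps the root with the last element of the shrinking heap, which plays the role of the lowest, right-most value in step 3.
// Restores the heap property for the subtree rooted at index i ("adjust"),
// assuming its children are already roots of valid heaps; size is the heap size.
void adjust(int Data[], int size, int i)
{
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;
    if (left < size && Data[left] > Data[largest])
        largest = left;
    if (right < size && Data[right] > Data[largest])
        largest = right;
    if (largest != i)
    {
        int temp = Data[i]; Data[i] = Data[largest]; Data[largest] = temp;
        adjust(Data, size, largest);        // continue adjusting down the tree
    }
}

void heap_sort(int Data[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)    // step 2: build the heap tree
        adjust(Data, n, i);
    for (int i = n - 1; i > 0; i--)         // step 3: repeatedly remove the root
    {
        int temp = Data[0]; Data[0] = Data[i]; Data[i] = temp;  // root goes to its final place
        adjust(Data, i, 0);                 // re-adjust the remaining heap
    }
}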
Group discussion:
Discuss on the algorithm of Heap Sort.
Example: sort the list 5 8 2 4 1 3 9 7 6 0.
(Diagrams: the list is first arranged as a binary tree with 5 at the root and then adjusted into a heap
tree with 9 at the root. The root is then repeatedly swapped with the lowest, right-most node, that
node is deleted and its value placed in its proper position in the array, and the tree is re-adjusted;
this continues until the tree is empty and the array holds 0 1 2 3 4 5 6 7 8 9.)
5.4. Merge Sort
Algorithm:
1. Divide the array in to two halves.
2. Recursively sort the first n/2 items.
3. Recursively sort the last n/2 items.
4. Merge sorted items (using an auxiliary array).
Group discussion:
Discuss on the algorithm of Merge Sort.
Example: sort the list 5 8 2 4 1 3 9 7 6 0.
Division phase: the list is repeatedly divided into halves - 5 8 2 4 1 and 3 9 7 6 0, then 5 8 / 2 4 1
and 3 9 / 7 6 0, and so on - until every piece holds a single element.
Sorting and merging phase: the pieces are merged back together in sorted order, giving
1 2 4 5 8 and 0 3 6 7 9, and finally 0 1 2 3 4 5 6 7 8 9.
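A minimal merge sort sketch in C++ following the algorithm above, using an auxiliary array for the merge step (names are illustrative).
#include <vector>
using namespace std;

// Merges the two sorted halves Data[low..mid] and Data[mid+1..high].
void merge(int Data[], int low, int mid, int high)
{
    vector<int> aux;                          // auxiliary array for the merged result
    int i = low, j = mid + 1;
    while (i <= mid && j <= high)
        aux.push_back(Data[i] <= Data[j] ? Data[i++] : Data[j++]);
    while (i <= mid)  aux.push_back(Data[i++]);   // copy whatever is left
    while (j <= high) aux.push_back(Data[j++]);
    for (int k = 0; k < (int)aux.size(); k++)
        Data[low + k] = aux[k];               // copy back into the original array
}

// Sorts Data[low..high].
void merge_sort(int Data[], int low, int high)
{
    if (low >= high)
        return;                               // a single element is already sorted
    int mid = (low + high) / 2;
    merge_sort(Data, low, mid);               // recursively sort the first half
    merge_sort(Data, mid + 1, high);          // recursively sort the second half
    merge(Data, low, mid, high);              // merge the sorted halves
}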
Assessment
Question 1: What is the algorithm and c++ code for heap sort?
Question 3: What is the algorithm and c++ code for Shell sort?
Assessment methods
Formative
Quiz ……….……. 5 %
Group assignment(Project) . 10 %
Individual assignment …………….5 %
Mid exam ……………. 30%
Summative
Teaching materials
Reference books:
Robert Lafore, “Data Structures and Algorithms in JAVA, 2nd Ed.”, Sams Publishing
Jean Paul Tremblay, Paul G. Soreson, “An Introduction to Data Structures with
Applications”, Mc. Graw Hill Computer Science Series
E. Horowitz, S. Sahni and Dinesh Mehta, "Fundamentals of Data Structures in C++", W.H.
Freeman and Company (1995)
Sanjay Pahuja, "A Practical Approach to Data Structures and Algorithms", New Age
International Publishers, 2008