Introduction to Data Structures-1
Introduction
●
A data structure can be defined as a group of data elements that
provides an efficient way of storing and organizing data in the
computer so that it can be accessed and updated efficiently.
●
Some examples of data structures are arrays, linked lists, stacks,
queues, etc.
●
Data structures are widely used in almost every area of computer
science, e.g. operating systems, compiler design, artificial
intelligence, graphics and many more.
●
Explain how data structures enhance the performance of a software program:
Organized storage and management of data makes access, insertion and
deletion of data faster, and this efficiency improves the performance
of the program.
Basic Terminology in DS
●
Data: Data can be defined as an elementary value or a collection of
values. For example, a student's name and ID are data about the
student.
●
Group Items: Data items that have subordinate data items are called
group items. For example, the name of a student can have a first
name and a last name.
●
Record: A record can be defined as a collection of various data items.
For example, for the student entity, the name, address, course and
marks can be grouped together to form the record for the student.
Basic Terminology in DS
●
File: A file is a collection of various records of one type of entity.
For example, if there are 60 employees, then there will be
60 records in the related file, where each record contains the data
about one employee.
●
Attribute and Entity: An entity represents a class of certain
objects and contains various attributes. Each attribute represents
a particular property of that entity.
●
Field: Field is a single elementary unit of information representing
the attribute of an entity.
Need of Data Structures
As applications get more complex and the amount of data increases
day by day, the following problems may arise:
●
Processor speed: Handling very large amounts of data requires
high-speed processing, but as data grows day by day, to billions of
records per entity, the processor may fail to deal with that much
data.
●
Data Search: Consider an inventory of 10⁶ items in a store. If our
application needs to search for a particular item, it may need to
traverse all 10⁶ items every time, which slows down the search process.
Need of Data Structures
Multiple requests: If thousands of users are searching the
data simultaneously on a web server, then even a very large
server can fail during the process. To solve these problems,
data structures are used. Data is organized into a data structure
in such a way that not all items need to be searched and the
required data can be found almost instantly.
Advantages of Data Structures
●
Efficiency: The efficiency of a program depends upon the choice of
data structures. For example, an ordered array, a binary search
tree or a hash table is a better data structure than an unsorted
array when searching for a particular record in a dataset.
●
Reusability: Data structures are reusable, i.e. once we have
implemented a particular data structure, we can use it at any
other place. Implementation of data structures can be compiled
into libraries which can be used by different clients.
Advantages of Data Structures
●
Abstraction: A data structure is specified by its ADT (abstract
data type), which provides a level of abstraction. The client
program uses the data structure through its interface only, without
getting into the implementation details.
Classification of Data Structures
Data structures are broadly classified into linear and non-linear
data structures.
Linear Data Structures
●
A data structure is called linear if all of its elements are
arranged in a linear order. In linear data structures, the
elements are stored in a non-hierarchical way where each
element has a successor and a predecessor, except the first
and last elements. In other words, the elements are stored in a
line, one after another.
Types of Linear Data Structures
Array: A D.S that stores data items of a fixed size and the same
data type in a contiguous way.
●
Arrays: An array is a collection of similar type of data items
and each data item is called an element of the array. The data
type of the element may be char, int, float or double. The
elements of array share the same variable name but each one
carries a subscript.
●
Linked list: Linked List is a linear data structure which is used
to maintain a list in the memory. It can be seen as the collection
of nodes stored at non-contiguous memory locations. Each
node of the list contains a pointer to its adjacent node.
Contiguous way: The data items share the same boundaries, e.g. arrays.
Non-contiguous way: The data items do not share boundaries; each data
item consists of a data part and a pointer, where the pointer points
to the next data item in the D.S.
Types of Linear Data Structures
●
Stack: A stack is a linear list in which insertions and deletions are
allowed only at one end, called the top. A stack is an abstract data
type (ADT) and can be implemented in most programming languages.
Examples: a pile of plates or a deck of cards.
●
Queue: A queue is a linear list in which elements can be inserted
only at one end, called the rear, and deleted only at the other end,
called the front. It is an abstract data type, similar to a stack.
A queue is open at both ends and therefore follows the First-In
First-Out (FIFO) methodology for storing data items.
Non-Linear Data Structures
●
A non-linear data structure does not form a sequence, i.e. each
item or element may be connected with two or more other items in
a non-linear arrangement. The data elements are not arranged in a
sequential structure.
Types of Non-Linear Data Structures
●
Trees: Trees are multilevel data structures with a hierarchical
relationship among their elements, known as nodes. The
bottommost nodes in the hierarchy are called leaf nodes, while
the topmost node is called the root node. Each node contains
pointers to its adjacent nodes. The tree data structure is based
on the parent-child relationship among the nodes. Each node in
the tree can have one or more children, except the leaf nodes,
which have none, and each node has exactly one parent, except the
root node, which has none. Trees can be classified into many
categories.
Types of Non-Linear Data Structures
●
Graphs: A graph can be defined as a pictorial representation
of a set of elements (represented by vertices) connected by
links known as edges. A graph differs from a tree in that a
graph can have a cycle while a tree cannot.
Operations on data structure
●
Traversing: Every data structure contains the set of data
elements. Traversing the data structure means visiting each
element of the data structure in order to perform some specific
operation like searching or sorting.
●
Example: If we need to calculate the average of the marks
obtained by a student in 6 different subjects, we need to
traverse the complete array of marks and calculate the total
sum, then divide that sum by the number of subjects, i.e.
6, in order to find the average.
Operations on data structure
●
Insertion: Insertion can be defined as the process of adding an
element to the data structure at some location. If a data structure
of size n is already full, we cannot insert any more elements into
it; attempting to do so causes overflow.
●
Deletion: The process of removing an element from the data
structure is called Deletion. We can delete an element from the
data structure at any random location. If we try to delete an
element from an empty data structure then underflow occurs.
Operations on data structure
●
Searching: The process of finding the location of an element
within the data structure is called Searching. There are two
algorithms to perform searching, Linear Search and Binary
Search.
●
Sorting: The process of arranging the data structure in a
specific order is known as Sorting. There are many algorithms
that can be used to perform sorting, for example, insertion sort,
selection sort, bubble sort etc.
Operations on data structure
●
Merging: When two lists, List A and List B, of sizes M and N
respectively and containing elements of a similar type, are
combined to produce a third list, List C, of size (M+N), the
process is called merging.
Characteristics of a Data Structure
●
Correctness − Data structure implementation should
implement its interface correctly.
●
Time Complexity − Running time or the execution time of
operations of data structure must be as small as possible.
●
Space Complexity − Memory usage of a data structure
operation should be as little as possible.
Algorithm
●
An algorithm is a procedure with well-defined steps for
solving a particular problem. An algorithm is a finite set of
logic or instructions, written in order to accomplish a certain
predefined task.
●
It is not the complete program or code; it is just the solution
(logic) of a problem, which can be represented either as an
informal description, as a flowchart, or as pseudocode.
Categories of algorithms
●
Sort: Algorithm developed for sorting the items in certain order.
●
Search: Algorithm developed for searching the items inside a
data structure.
●
Delete: Algorithm developed for deleting the existing element
from the data structure.
●
Insert: Algorithm developed for inserting an item inside a data
structure.
●
Update: Algorithm developed for updating the existing element
inside a data structure.
Performance of an algorithm
What makes an algorithm
●
Specification: Description of the computational procedure.
●
Pre-conditions: The condition(s) on input.
●
Body of the Algorithm: A sequence of clear and unambiguous
instructions.
●
Post-conditions: The condition(s) on output.
Characteristics of an algorithm
●
Input: An algorithm must have zero or more well-defined inputs.
●
Output: An algorithm must have one or more well-defined outputs,
which should match the desired output.
●
Feasibility: An algorithm must terminate after a finite
number of steps.
●
Independent: An algorithm must have step-by-step directions
that are independent of any programming code.
●
Unambiguous: An algorithm must be unambiguous and clear.
Each of its steps and inputs/outputs must be clear and lead to
only one meaning.
Criteria of an Algorithm
●
Input: Zero or more quantities are externally supplied.
Input is not necessary for all algorithms.
●
Output: At least one quantity is produced. This is a must for all
algorithms.
●
Definiteness: Each instruction is clear and unambiguous.
●
Finiteness: The instructions (steps) of an algorithm must be
finite. This means the algorithm terminates after a finite number
of steps (instructions).
●
Effectiveness: Every instruction must be very basic so that it
can be carried out, in principle, by a person using only pencil
and paper. It is not enough that each operation be definite; it
must also be feasible.
Assignment
Pseudocode
●
An algorithm can be described in many ways, for example in
English, specifying the algorithm step by step, or as a flowchart,
for a graphical representation.
●
But these two ways work well only if the algorithm is small or
simple. For large algorithms we use pseudocode.
●
Pseudocode is a kind of structured English for describing
algorithms. It allows the designer to focus on the logic of the
algorithm without being distracted (diverted) by details of
language syntax.
31
Pseudocode
●
At the same time, the pseudocode needs to be complete. It
describes the entire logic of the algorithm, so that
implementation becomes a rote mechanical task of translating it
line by line into source code.
32
Example of Pseudocode
ALGORITHM Sum(num1, num2)
Input: read num1, num2
Output: write the sum of num1 and num2
{
    Read num1, num2;
    sum := num1 + num2;
    Write sum;
}
Assignment
Algorithm Analysis
●
When an algorithm is executed, it uses the
- Computer's CPU to perform the operations.
- Memory (both RAM and ROM) to hold the program and data.
●
Algorithm analysis is the task of determining how much computing
time and storage memory are required for an algorithm.
Criteria of evaluating algorithm performance
●
Does it do what we want it to do?
●
Does it work correctly according to the original specifications of
the task?
●
Is there documentation that describes how to use it and how it
works?
●
Are procedures created in such a way that they perform logical
sub functions?
●
Is the code readable?
Phases of algorithm performance
●
A priori estimates → Performance analysis
●
A posteriori testing → Performance measurement
Space Complexity
●
Space complexity: The space complexity of an algorithm is the
amount of memory it needs to run to completion. The space
needed by an algorithm is the sum of the following components.
●
This includes memory for variables, data structures, function call
stacks, and auxiliary space (temporary storage used by the
algorithm)
●
Fixed part (Constant space complexity)
●
Variable part (Linear space complexity)
Constant Space Complexity
●
Constant space complexity: An algorithm has constant space
complexity if the amount of memory it uses does not grow with
the size of the input. This means it uses only a fixed, constant
amount of memory, no matter how big the input is.
●
The fixed part is independent of the characteristics (e.g. the
number and size) of the inputs and outputs. It includes the
instruction space (space for the code), space for simple variables,
fixed-size component variables, space for constants, and so on.
Constant Space Complexity
●
Example: Imagine you're flipping a coin once, and you need to
store the result (either "Heads" or "Tails"). You only need one
variable to store the result. No matter how many times you flip
the coin, the memory used to store the result stays the same.
Linear Space Complexity
●
Linear space complexity: An algorithm has linear space
complexity if the amount of memory it uses grows proportionally
to the size of the input. This means that if the input size
increases, the memory usage increases at the same rate.
●
Example: Imagine you're making a list of your friends' names. If
you have 10 friends, you need space to store 10 names. If you
have 100 friends, you need space to store 100 names. The more
friends you have, the more space you need.
Space Complexity
●
The variable part consists of the space needed by component
variables whose size depends on the particular problem instance
being solved, plus the space needed by referenced variables, plus
the recursion stack space. This is also known as linear space
complexity.
Space Complexity, S(P)
●
The space requirement S(P) of any algorithm P can be written as
S(P) = c + S_P, where c is a constant (the fixed part) and S_P
depends on the instance characteristics (the variable part).
Time complexity T(P)
●
Time complexity of an algorithm is the amount of computer
time it needs to run to completion.
●
Here RUN means Compile + Execution → T(P) = t_c + t_p
●
But we neglect t_c because the compile time does not depend on
the instance characteristics. The compiled program will be run
several times without recompilation.
●
So T(P) = t_p, where t_p depends on the instance characteristics.
Why T(P)= tp?
When calculating the time complexity of an algorithm, compile
time is typically neglected because time complexity focuses on
measuring the algorithm's performance during execution, not
during compilation. Here is why (execution vs. compilation):
●
Time complexity aims to describe how the running time of an
algorithm grows relative to the size of the input.
●
Compile time refers to the time taken by a compiler to
translate code into executable instructions. This is a one-time
event and does not depend on the size of the input data, so it’s
not relevant when evaluating the scalability of the algorithm.
Performance Measurement
●
Program step: A program step is a syntactically or
semantically meaningful segment of a program, and it has an
execution time that is independent of the instance
characteristics. Examples:
●
For a comment // : zero steps
●
For an assignment statement (which does not involve any calls
to other algorithms): one step
Performance Measurement
●
For iterative statements such as for, while and until-repeat
statements, we consider the step counts only for the control part
of the statement.
●
For a while loop, while (<expr>) do: the step count for the
control part of a while statement is the number of steps needed
to evaluate <expr>
●
For a for loop, i.e. for i := <expr> to <expr1> do: the step count
of the control part of the for statement is the sum of the counts
of <expr> and <expr1> for the first execution, and one for each
remaining execution of the for statement.
Program Step
●
By counting, find the program steps of an iterative program, e.g.
the sum of n numbers stored in an array:
●
Algorithm sum(a, n)
{
    s := 0;
    for i := 1 to n do {
        s := s + a[i];
    }
    return s;
}
Program Step
●
By table, find the program steps of the same iterative program.
The columns give the steps per execution (s/e), the frequency,
and the total steps:

Statement                   s/e   frequency   total steps
Algorithm sum(a, n)          0        -            0
{                            0        -            0
    s := 0;                  1        1            1
    for i := 1 to n do {     1       n+1          n+1
        s := s + a[i];       1        n            n
    }                        0        -            0
    return s;                1        1            1
}                            0        -            0
Total                                             2n+3
Asymptotic notation
●
Asymptotic notation of an algorithm is a mathematical
representation of its complexity
●
Asymptotic notation is used to judge the best algorithm among
numerous algorithms for a particular problem.
●
Asymptotic complexity is a way of expressing the main
components of algorithms, such as
✔
Cost
✔
Time complexity
✔
Space complexity
Asymptotic notation
●
Some Asymptotic notations are
●
1. Big oh O
●
2. Omega Ω
●
3. Theta θ
●
4. Little oh o
●
5. Little Omega ω
Big Oh notation
●
Big-Oh notation is used to define an upper bound of an
algorithm in terms of time complexity.
●
That means Big-Oh notation indicates the maximum time required
by an algorithm over all input values; in other words, Big-Oh
notation describes the worst case of an algorithm's time
complexity.
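The informal statement above corresponds to the standard formal definition (a standard fact, not stated on the slide):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 1 \ \text{such that}\ 
f(n) \le c \cdot g(n) \quad \text{for all } n \ge n_0 .
```

For example, 3n + 2 = O(n), since 3n + 2 ≤ 4n for all n ≥ 2 (take c = 4 and n₀ = 2).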
Big Oh notation
●
Constant Time Complexity O(1)
●
Linear Time Complexity O(n)
●
Quadratic Time Complexity O(n²)
Constant Time Complexity
int getFirstElement(int arr[], int size) {
    return arr[0]; // Accessing the first element
}
Analysis:
●
Operation: Accessing the first element of the array takes a
constant time, regardless of the size of the array.
●
Complexity: This function runs in constant time, so its
complexity is O(1)
Linear Time Complexity
int sumArray(int arr[], int size) {
    int total = 0;
    for (int i = 0; i < size; i++) {
        total += arr[i]; // Summing all elements
    }
    return total;
}
Analysis:
●
Operation: The function iterates through each element of the
array exactly once. Steps: If the size of the array is n, the loop
will execute n times.
●
Complexity: The time complexity is O(n).
Quadratic Time Complexity O(n²)
void printPairs(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        for (int j = 0; j < size; j++) {
            cout << arr[i] << ", " << arr[j] << endl; // Printing all pairs
        }
    }
}
●
Analysis:
Operation: The function uses two nested loops, each iterating
over the array. Steps: For each element i, the inner loop runs n
times. Thus, the total number of operations is n × n = n².
●
Complexity: The time complexity is O(n²).
Assignment
●
Using different algorithms of your choice, write and analyse the
complexity of each algorithm using Big Oh notation. Your
algorithms should cover constant, linear, quadratic, and
logarithmic complexity.