Unit1 Intro BDS
Data Structures
Introduction
• A data structure is a way of storing and organizing data in a
computer so that it can be used efficiently.
• Some common examples of data structures are arrays, linked lists,
queues, stacks, binary trees, graphs, and hash tables.
• Data structures are widely applied in areas like:
✓ Compiler design
✓ Operating system
✓ Statistical analysis package
✓ DBMS
✓ Numerical analysis
✓ Simulation
✓ Artificial Intelligence
✓ Graphics
✓ English Dictionary
✓ City maps
✓ Banking
✓ Library
✓ Stores
Classification of Data Structures
• Primitive Data Structures are the fundamental data types which
are supported by a programming language.
• Some basic data types are integer, real, and boolean. The terms
‘data type’, ‘basic data type’, and ‘primitive data type’ are often
used interchangeably.
• Data structures are classified into two categories: primitive data structures and non-primitive data structures.
Arrays
• An array is a collection of elements of the same data type, declared as:
datatype name[size];
• For example, an array marks[10] has ten elements: marks[0] holds the 1st element, marks[1] the 2nd element, and so on up to marks[9], which holds the 10th element.
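As a quick illustration (a sketch, assuming the marks are stored as integers), such an array can be declared and accessed in C as follows:
#include <stdio.h>

int main()
{
    int marks[10];                 /* array of 10 elements: marks[0] .. marks[9] */
    int i;

    for (i = 0; i < 10; i++)       /* store a sample value in each element       */
        marks[i] = 50 + i;

    printf("1st element : %d\n", marks[0]);
    printf("10th element: %d\n", marks[9]);
    return 0;
}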
Linked Lists
• A linked list is a very flexible dynamic data structure in which
elements can be added to or deleted from anywhere.
• Every node in the list points to the next node in the list. Therefore, in
a linked list every node contains two types of information: the data itself
and the address (pointer) of the next node in the list.
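A minimal sketch of such a node in C (assuming the data part is a single integer) is given below:
struct node
{
    int data;               /* the data stored in the node          */
    struct node *next;      /* address of the next node in the list */
};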
Stacks
• A stack is a linear data structure in which elements are added and removed only from one end, called the TOP of the stack.
• Every stack has a variable TOP associated with it, which is used to
store the address of the topmost element of the stack.
[Figure: a stack stored in an array of 10 slots (indices 0 to 9), with TOP = 4; elements are added with PUSH and removed with POP at the TOP end.]
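A minimal array-based sketch of PUSH and POP in C is shown below (assumptions: a capacity of 10, integer elements, and top holding the index of the topmost element rather than its address):
#include <stdio.h>
#define MAX 10

int stack[MAX];
int top = -1;                       /* top = -1 means the stack is empty       */

void push(int value)                /* PUSH: add an element at the TOP         */
{
    if (top == MAX - 1)
        printf("Stack overflow\n");
    else
        stack[++top] = value;
}

int pop(void)                       /* POP: remove and return the TOP element  */
{
    if (top == -1)
    {
        printf("Stack underflow\n");
        return -1;
    }
    return stack[top--];
}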
Queues
• The elements in a queue are added at one end, called the REAR,
and removed from the other end, called the FRONT.
[Figure: a queue stored in an array of 10 slots (indices 0 to 9), with FRONT marking the deletion end and REAR marking the insertion end.]
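A minimal array-based sketch of the two queue operations (hypothetical names enqueue and dequeue; assumptions: a capacity of 10, integer elements, no circular wrap-around) might look like this:
#include <stdio.h>
#define MAX 10

int queue[MAX];
int front = -1, rear = -1;          /* front = -1 means the queue is empty  */

void enqueue(int value)             /* add an element at the REAR           */
{
    if (rear == MAX - 1)
        printf("Queue overflow\n");
    else
    {
        if (front == -1)
            front = 0;
        queue[++rear] = value;
    }
}

int dequeue(void)                   /* remove and return the FRONT element  */
{
    if (front == -1 || front > rear)
    {
        printf("Queue underflow\n");
        return -1;
    }
    return queue[front++];
}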
Tree
• A tree is a non-linear data structure which consists of a collection
of nodes arranged in a hierarchical order.
• One of the nodes is designated as the root node, and the remaining
nodes can be partitioned into disjoint sets such that each set is the
sub-tree of the root.
[Figure: a tree with a designated ROOT NODE whose remaining nodes form sub-trees such as T1 and T2.]
• A binary tree is the simplest form of tree, in which every node has at most two children.
[Figure: a binary tree with root node 1 and child nodes 2 and 3.]
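A node of such a binary tree is commonly represented in C along the following lines (a sketch, assuming integer data):
struct tree_node
{
    int data;                       /* the data stored in the node    */
    struct tree_node *left;         /* pointer to the left sub-tree   */
    struct tree_node *right;        /* pointer to the right sub-tree  */
};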
Graphs
• A graph is a non-linear data structure in which every node can be connected with any other node.
• When two nodes are connected via an edge, the two nodes are known as neighbors.
[Figure: example graphs with nodes labelled A to I connected by edges.]
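One common way to record which nodes are neighbors is an adjacency matrix; the sketch below (an illustration, assuming just four nodes A, B, C, and D) sets adj[i][j] = 1 whenever node i and node j are connected by an edge:
#include <stdio.h>
#define N 4                                  /* four nodes: A, B, C, D (indices 0-3) */

int main()
{
    int adj[N][N] = {
        {0, 1, 1, 0},                        /* A is a neighbor of B and C */
        {1, 0, 0, 1},                        /* B is a neighbor of A and D */
        {1, 0, 0, 1},                        /* C is a neighbor of A and D */
        {0, 1, 1, 0}                         /* D is a neighbor of B and C */
    };

    if (adj[0][1] == 1)
        printf("A and B are neighbors\n");
    return 0;
}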
Operations on Data Structures
Traversing: It means to access each data item exactly once so that it can be processed.
For example, to print the names of all the students in a class.
Searching: It is used to find the location of one or more data items that satisfy the given
constraint. Such a data item may or may not be present in the given collection of data
items.
For example, to find the names of all the students who secured 100 marks in
mathematics.
Merging: Two lists of sorted data items can be combined to form a single list of sorted data
items.
Inserting: It is used to add new data items to the given list of data items.
For example, to add the details of a new student who has recently joined the course.
Deleting: It means to remove (delete) a particular data item from the given collection of
data items. For example, to delete the name of a student who has left the course.
Sorting: Data items can be arranged in some order like ascending order or descending order
depending on the type of application.
For example, arranging the names of students in a class in an alphabetical order, or calculating
the top three winners by arranging the participants’ scores in descending order and then
extracting the top three.
Abstract Data Type
• The data type of a variable is the set of values that the variable can take.
The basic data types in C include int, char, float, and double.
In the top-down approach, we start from an abstract design and then at each
step, this design is refined into more concrete levels until a level is reached that
requires no further refinement.
In the bottom-up design, we start with designing the most basic or concrete
modules and then proceed towards designing higher level modules.
Algorithm
CONTROL STRUCTURES USED IN ALGORITHMS:
An algorithm may employ one of the following control structures: (a) sequence, (b) decision,
and (c) repetition.
Sequence: By sequence, we mean that each step of an algorithm is executed in a specified
order. Let us write an algorithm to add two numbers.
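One possible version of this algorithm, written in the same step format used in the examples that follow, is:
Step 1: Input the first number as A
Step 2: Input the second number as B
Step 3: SET SUM = A + B
Step 4: PRINT SUM
Step 5: END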
Decision: Decision statements are used when the execution of a process depends on the outcome of some
condition.
Repetition: It involves executing one or more steps a number of times, and can be implemented using
constructs such as the while, do-while, and for loops. These loops execute one or more steps as long as
some condition is true.
Algorithm
Write an algorithm to find whether a number is even or odd
Step 1: Input the number as A
Step 2: IF A % 2 = 0
Then Print "EVEN"
ELSE
Print "ODD"
Step 3: END
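A direct C translation of this algorithm (a sketch; the variable name A simply follows the algorithm above) could be:
#include <stdio.h>

int main()
{
    int A;
    printf("Enter a number: ");
    scanf("%d", &A);

    if (A % 2 == 0)                 /* Step 2: check the remainder on division by 2 */
        printf("EVEN\n");
    else
        printf("ODD\n");
    return 0;
}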
Algorithm
Write an algorithm to find the larger of two numbers.
Write an algorithm to find the sum of the first N natural numbers.
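Possible solutions to these two exercises, sketched in the same step format (the tie case A = B is ignored for simplicity), are:
Algorithm to find the larger of two numbers
Step 1: Input the two numbers as A and B
Step 2: IF A > B
            Print "A is larger"
        ELSE
            Print "B is larger"
Step 3: END

Algorithm to find the sum of the first N natural numbers
Step 1: Input N
Step 2: SET SUM = 0, I = 1
Step 3: Repeat Steps 4 and 5 while I <= N
Step 4:     SET SUM = SUM + I
Step 5:     SET I = I + 1
Step 6: PRINT SUM
Step 7: END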
Time and Space Complexity of Algorithm
✓ The space needed by an algorithm has a fixed part (space for instructions, simple variables, and
constants) and a variable part.
✓ The variable part includes the space needed for the recursion stack and for structured variables
that are allocated space dynamically during the run time of the program.
Time–Space Trade-off:
The best algorithm to solve a particular problem at hand is no doubt the one that
requires less memory space and takes less time to complete its execution. But practically,
designing such an ideal algorithm is not a trivial task. There can be more than
one algorithm to solve a particular problem. One may require less memory
space, while the other may require less CPU time to execute. Thus, it is not
uncommon to sacrifice one thing for the other. Hence, there exists a time–
space trade-off among algorithms.
So, if space is a big constraint, then one might choose a program that takes less
space at the cost of more CPU time. On the contrary, if time is a major
constraint, then one might choose a program that takes minimum time to
execute at the cost of more space.
Calculating Algorithm Efficiency
• The efficiency of an algorithm is expressed in terms of the number of
elements that have to be processed. So, if n is the number of elements, then
the efficiency can be stated as Efficiency = f(n).
Linear Loops: The loop-controlling variable is incremented (or decremented) by a constant
amount during each iteration. For example, look at the loop given below:
for(i=0;i<100;i+=2)
    statement block;
Here, the number of iterations is half the loop factor. So, here the
efficiency can be given as
f(n) = n/2
Calculating Algorithm Efficiency
Logarithmic Loops: The loop-controlling variable is either multiplied or divided
during each iteration of the loop. For example, look at the loops given below:
for(i=1;i<1000;i*=2)
    statement block;

for(i=1000;i>=1;i/=2)
    statement block;
Consider the first for loop in which the loop-controlling variable i is multiplied
by 2. The loop will be executed only 10 times and not 1000 times because in each
iteration the value of i doubles.
Now, consider the second loop in which the loop-controlling variable i is divided
by 2. In this case also, the loop will be executed 10 times. Thus, the number of
iterations is a function of the number by which the loop-controlling variable is
divided or multiplied. In the examples discussed, it is 2. That is, when n = 1000, the
number of iterations can be given by log2 1000, which is approximately equal to 10.
Therefore, putting this analysis in general terms, we can conclude that the efficiency
of loops in which iterations divide or multiply the loop-controlling variables can be
given as f(n) = log n
Nested Loops :Loops that contain loops are known as nested loops. In order to analyse nested loops, we
need to determine the number of iterations each loop completes.
The total is then obtained as the product of the number of iterations in the inner loop and the number of
iterations in the outer loop. In this case, we analyse the efficiency of the algorithm based on whether it is
a linear logarithmic, quadratic, or dependent quadratic nested loop.
Quadratic loop: In a quadratic loop, the number of iterations in the inner loop is equal to the number
of iterations in the outer loop. Consider the following code in which the outer loop executes 10 times
and for each iteration of the outer loop, the inner loop also executes 10 times. Therefore, the efficiency
here is 100.
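The loop being considered has the following form (a sketch, written in the same style as the earlier loop examples, with a loop factor of 10):
for(i = 0; i < 10; i++)             /* outer loop: 10 iterations                */
{
    for(j = 0; j < 10; j++)         /* inner loop: 10 iterations per outer pass */
    {
        statement block;
    }
}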
The generalized formula for quadratic loop can be given as f(n) = n^2.
Dependent quadratic loop: In a dependent quadratic loop, the number of iterations in the inner loop
depends on the outer loop. The inner loop will execute just once in the first iteration, twice in the
second iteration, thrice in the third iteration, and so on. In this way, the number of iterations can
be calculated as
1 + 2 + 3 + ... + 9 + 10 = 55
The efficiency of such a code can be given as f(n) = n (n + 1)/2
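A loop with this behaviour, again assuming a loop factor of 10, would look like:
for(i = 0; i < 10; i++)             /* outer loop: 10 iterations          */
{
    for(j = 0; j <= i; j++)         /* inner loop: executes (i + 1) times */
    {
        statement block;
    }
}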
Big O Notation
• Big-O notation, where the "O" stands for "order of", is concerned
with what happens for very large values of n.
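• As an illustration, if an algorithm performs roughly n^2 + 3n + 5 basic operations, then for very large values of n the n^2 term dominates the other terms, so its running time is written as O(n^2).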
[Figure: recursion tree for FIB(6), showing how smaller calls such as FIB(5), FIB(2), and FIB(1) are evaluated repeatedly.]
• Write a program to print the Fibonacci series using recursion.
#include <stdio.h>
int Fibonacci(int);
int main()
{ int n, i = 0, res;
printf("Enter the number of terms\n");
scanf("%d",&n);
printf("Fibonacci series\n");
for(i = 0; i < n; i++ )
{ res = Fibonacci(i);
printf("%d\t",res);
}
return 0;
}
int Fibonacci(int n)
{
    if ( n == 0 )
        return 0;
    else if ( n == 1 )
        return 1;
    else
        return ( Fibonacci(n-1) + Fibonacci(n-2) );
}

Output
Enter the number of terms
7
Fibonacci series
0 1 1 2 3 5 8 13
Iteration Vs Recursion