Dsa-1 Unit-1
Unit: 1
Arrays: Definition, Single and Multi-Dimensional Arrays, Representation of Arrays: Row and Column
Major, Derivation of Index Formulae for 1-D, 2-D, 3-D and n-D Arrays, Applications of Arrays:
Sparse Matrices and their Representation.
Searching and Sorting: Searching Algorithms with analysis: Linear and Binary Search.
Sorting Algorithms with analysis: Bubble Sort, Insertion Sort, Selection Sort, Shell Sort, Sorting
in Linear Time- Counting Sort.
Hashing: Hashing the Symbol Table, Hashing Functions, Collision Resolution Techniques, Hashing
for Direct Files.
Stack: Primitive stack operations: Push & Pop, Array and Linked List Implementation of Stack, Applications
of Stack: Infix, Prefix and Postfix Expressions and their Mutual Conversion, Evaluation of Postfix Expressions.
Recursion: Principles of recursion, Tail recursion, Removal of recursion, with example such as Fibonacci
series, binary search, Merge sort and Quick sort algorithms with analysis. Tower of Hanoi, Trade-offs
between iteration and recursion.
Queue: Array and linked List implementation of queues, Operations on Queue: Create, Insert, Delete, Full
and Empty, Circular Queues, Dequeue and Priority Queue algorithms with analysis.
Divide and Conquer concepts with Examples Such as Quick sort, Merge sort, Convex Hull.
Greedy Methods with Examples Such as Activity Selection, Task Scheduling, Fractional Knapsack
Problem.
COMPUTER SCIENCE AND ENGINEERING
COMPUTER SCIENCE AND ENGINEERING (AI)
COMPUTER SCIENCE AND ENGINEERING (DS)
COMPUTER SCIENCE AND ENGINEERING (IoT)
3. To understand basic concepts about stacks, queues, lists, trees and graphs.
4. To understand writing algorithms and a step-by-step approach to solving problems.
CO2 : Implementation of arrays for searching, sorting and hashing to foster critical thinking.
CO3 : Compare and contrast linked lists with arrays and implementation of linked lists with its
application.
CO4 : Understand static and dynamic implementation of stacks and queues while mastering
principle of recursion for effective problem-solving.
CO5 : Implementation and analysis of divide and conquer algorithms and the greedy
approach for efficient problem-solving across diverse contexts.
PROGRAM OUTCOMES (PO’s)
PO8 : Ethics
PO10 : Communication
  CO           PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
1 BCSE0301.1    3   3   2   2   1   -   -   -   1   1    -    2
2 BCSE0301.2    3   3   3   2   2   2   1   -   -   1    -    2
3 BCSE0301.3    3   3   3   2   2   1   -   -   -   1    -    2
4 BCSE0301.4    3   3   3   2   2   1   1   -   -   2    -    2
5 BCSE0301.5    3   3   3   3   2   2   -   -   1   2    1    2
6 Average
2 PSO2 Design and Develop the hardware sensor devices and related interfacing software
systems for solving complex engineering problems.
3 PSO3 Understanding inter-disciplinary computing technique and to apply them in the design
of advanced computing.
4 PSO4 Conduct investigation of complex problems with the help of technical, managerial,
leadership qualities, and modern engineering tools provided by industry-sponsored
laboratories.
CO’s-PSO’s MAPPING
CO           PSO1 PSO2 PSO3 PSO4
BCSE0301.2    2    3    2    2
BCSE0301.3    2    1    3    2
BCSE0301.4    2    2    2    3
BCSE0301.5    3    2    2    3
Average
PROGRAM EDUCATIONAL OBJECTIVES (PEO’s)
PEO1: To have an excellent scientific and engineering breadth so as to comprehend, analyze, design
and provide sustainable solutions for real-life problems using state-of-the-art technologies.
PEO3: To have effective communication skills, professional attitude, ethical values and a desire
to learn specific knowledge in emerging trends and technologies for research, innovation,
product development and contribution to society.
PEO4: To have life-long learning for up-skilling and re-skilling for a successful professional career as
an engineer, scientist, entrepreneur or bureaucrat for the betterment of society.
PATTERN OF END SEMESTER EXAMINATION (100 MARKS)
PRE-REQUISITE FOR THE SUBJECT
Video Links:
1. https://infyspringboard.onwingspan.com/web/en/app/toc/lex_auth_0125409699132620801065_shared/overview
2. https://infyspringboard.onwingspan.com/web/en/app/toc/lex_auth_0135015662906327049522/overview
3. https://infyspringboard.onwingspan.com/web/en/app/toc/lex_auth_01317717336104140852_shared/overview
7 GROWTH OF FUNCTIONS.
8 TIME AND SPACE COMPLEXITY OF AN ALGORITHM
9 METHOD OF SOLVING RECURRENCES.
10 WHAT ARE DATA TYPES?
1. PRIMITIVE AND NON-PRIMITIVE
After the completion of the 1st unit, the student will be able to understand the
following:
• Basic understanding of an algorithm.
• Able to calculate the complexity of an algorithm.
• Will be able to identify the asymptotic notation and the level of
complexity.
• Different types of data structures.
• Will be able to solve recurrence relations.
INTRODUCTION: WHAT ARE DATA STRUCTURES?
Data: A value or a set of values. Each value belongs to a data type,
such as string, integer, char etc.
• PROBLEM SOLVING
• ALGORITHM DESIGN
• COMPETITIVE PROGRAMMING
• They help to store and retrieve data in an organized manner, making it easier to
manage and analyze the information.
• Need of algorithm
• Problem statement
• Why study algorithm
• Characteristics of Algorithm
INPUT → ALGORITHM (a set of rules to obtain the expected output from the given input) → OUTPUT
▪ An algorithm is thus a sequence of computational steps that transform the input
into the output.
1. WELL-DEFINED INPUT
2. WELL-DEFINED OUTPUT
3. UNAMBIGUITY
4. FINITENESS
5. FEASIBILITY
Dr. Megha Gupta DATA STRUCTURES AND ALGORITHMS-I
11/17/2024 41
BCSE0301 UNIT1
Performance Analysis
• It depends upon two major factors.
• Time Complexity
• Space Complexity
Aspect                        | Theoretical Analysis                           | Empirical Analysis
Focus                         | Based on theories and predictions.             | Based on real results and data.
Data                          | Estimated or predicted data.                   | Actual data collected from the experiment.
Hardware/Software Dependency  | Independent of specific hardware or software.  | Dependent on the actual hardware and software used.
TOPIC OBJECTIVE: ANALYSIS OF ALGORITHM
• Space Complexity
• Time Complexity
• Best Case
• Worst Case
• Average Case
Algo1(n)
{
    j = 1
    for (i = 1; i <= n; i = i + 2)
    {
        j = j + 1
    }
}
The loop variable grows in arithmetic progression (A.P.): T_k = a + (k - 1)·d
Algo2(n)
{
    j = 1
    for (i = 1; i <= n; i = i * 2)
    {
        j = j + 1
    }
}
Algo3(n)
{
    j = 1
    for (i = n; i > 0; i = i / 2)
    {
        j = j + 1
    }
}
Here the loop variable grows in geometric progression (G.P.): T_k = a·r^(k-1)
int IsPrime(n)
{
    for (i = 2; i <= sqrt(n); i++)
    {
        if (n % i == 0)
        {
            print("not prime")
            return 0;
        }
    }
    return 1;
}
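The same trial-division test can be written as a runnable Python sketch, checking divisors only up to √n:

```python
import math

def is_prime(n):
    # a number n > 1 is prime if no i in [2, sqrt(n)] divides it
    if n < 2:
        return False
    for i in range(2, math.isqrt(n) + 1):
        if n % i == 0:
            return False  # found a divisor -> not prime
    return True  # no divisor found up to sqrt(n)

print(is_prime(29), is_prime(30))  # True False
```

Stopping at √n is what keeps the loop at O(√n) instead of O(n): any composite n must have a divisor no larger than its square root.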
To perfectly grasp the concept of "as a function of input size," imagine you have an algorithm that
computes the sum of numbers based on your input. If your input is 4, it will add 1+2+3+4 to output
10; if your input is 5, it will output 15 (meaning 1+2+3+4+5).
• Looking at the image above, we only have three statements. Still, because there is a loop, the second statement will
be executed based on the input size, so if the input is four, the second statement (statement 2) will be executed four
times, meaning the entire algorithm will run six (4 + 2) times.
• In plain terms, the algorithm will run input + 2 times, where the input can be any number. This shows that it's
expressed in terms of the input. In other words, it is a function of the input size.
Count the total number of fundamental operations in the program; this total
gives a rough estimate of the running time in terms of the input size.
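The counting idea above can be made concrete with the summation algorithm described earlier (a sketch; the statement labels follow the slide's description):

```python
def sum_with_count(n):
    # statement 1: initialization (runs once)
    total, steps = 0, 1
    for i in range(1, n + 1):
        # statement 2: loop body (runs n times)
        total += i
        steps += 1
    # statement 3: return (runs once)
    steps += 1
    return total, steps

print(sum_with_count(4))  # (10, 6): the sum is 10, and the algorithm runs 4 + 2 = 6 times
```

The step count is n + 2, a function of the input size, exactly as the text describes.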
• Best Case: The minimum number of steps taken on any instance of size n.
• Worst Case: The maximum number of steps taken on any instance of size n.
• An algorithm is said to have a constant time when it is not dependent on the input data (n). No
matter the size of the input data, the running time will always be the same.
EXAMPLE:
if a > b:
    return True
else:
    return False
An algorithm with constant time complexity is excellent, since we don't
need to worry about the input size.
EXAMPLE:
def linear_search(data, value):
    for index in range(len(data)):
        if value == data[index]:
            return index
    raise ValueError("Value not found in the list")

if __name__ == '__main__':
    data = [1, 2, 9, 8, 3, 4, 7, 6, 5]
    print(linear_search(data, 7))
TIME COMPLEXITY OF AN ALGORITHM
• This is similar to linear time complexity, except that the runtime grows with the logarithm of the
input size rather than with the size itself. When the input size is reduced (for example, halved) on
each iteration or step, an algorithm is said to have logarithmic time complexity.
• This is highly efficient, because the program works on only a shrinking portion of the input rather
than the full size: the input size decreases with each iteration.
• P.S.: It is important to understand that an algorithm that must access all elements of its input data cannot
take logarithmic time, as the time taken for reading input of size n is of the order of n.
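Binary search is the classic logarithmic example: each comparison halves the portion of the (sorted) list still in play. A minimal sketch:

```python
def binary_search(data, value):
    # data must be sorted; each iteration halves the search range -> O(log n)
    low, high = 0, len(data) - 1
    while low <= high:
        mid = (low + high) // 2
        if data[mid] == value:
            return mid
        elif data[mid] < value:
            low = mid + 1   # discard the left half
        else:
            high = mid - 1  # discard the right half
    return -1  # not found

print(binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9], 7))  # 6
```

Note that binary search never reads all n elements, which is consistent with the P.S. above: an algorithm that must touch every element cannot run in logarithmic time.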
• An algorithm is said to have quasilinear time complexity when each operation on the input
data has logarithmic time complexity. It is commonly seen in sorting algorithms
(e.g. merge sort, heap sort).
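A merge sort sketch illustrates the O(n log n) pattern: about log₂ n levels of splitting, with O(n) merging work per level:

```python
def merge_sort(data):
    # splitting gives ~log2(n) levels; merging does O(n) work per level -> O(n log n)
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])
    right = merge_sort(data[mid:])
    # merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```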
• EXAMPLE: A perfect way to explain this would be if you have an array with n items. The outer loop will run n
times, and the inner loop will run n times for each iteration of the outer loop, which gives n^2 prints in total.
If the array has ten items, it will print 100 pairs (10^2).
for x in data:
for y in data:
print(x,y)
• An algorithm has exponential time complexity when the number of operations doubles with each
addition to the input (n), often because it iterates through all subsets of the input elements. Every
time the input grows by one unit, the number of operations executed doubles.
• Example: the recursive Fibonacci sequence. In the code below, the amount of work roughly
doubles every time the input grows by one, so the time complexity is exponential,
with order O(2^n).
def Fibonacci(n):
    if n <= 1:
        return n
    return Fibonacci(n-1) + Fibonacci(n-2)
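Counting the recursive calls makes the exponential blow-up visible (a sketch; the call-counting wrapper is my addition):

```python
def fib_calls(n):
    # returns (fib(n), number of calls made), so the blow-up is measurable
    calls = 1  # count this call itself
    if n <= 1:
        return n, calls
    a, ca = fib_calls(n - 1)
    b, cb = fib_calls(n - 2)
    return a + b, calls + ca + cb

for n in (5, 10, 15):
    value, calls = fib_calls(n)
    print(n, value, calls)  # the call count itself grows exponentially with n
```

For n = 5, 10 and 15 the call counts are 15, 177 and 1973: each step of 5 in n multiplies the work by roughly an order of magnitude.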
• Space complexity is the amount of memory space that an algorithm or a problem
takes during its execution.
• Space complexity is not calculated only from the space used by the variables in the
problem/algorithm; it also includes the space for the input values.
➢AUXILIARY SPACE: Auxiliary space is the extra space required by an algorithm/problem
during its execution. It is not equal to the space complexity, because space complexity
also includes the space for the input values.
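The example that the next bullet refers to did not survive extraction; it is presumably a simple summation loop along these lines (a reconstruction, using the variable names 'i' and 'sum' the slide mentions):

```python
def sum_to_n(n):
    # only a fixed number of scalars exist regardless of n: n, i and sum -> O(1) space
    total = 0          # the slide's 'sum' variable
    for i in range(1, n + 1):
        total += i
    return total

print(sum_to_n(10))  # 55
```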
• So in the above example, the input value 'n' is a constant, which takes O(1) space.
Now, what about the auxiliary space? It is also O(1), because 'i' and 'sum' are constants.
• Hence, the total space complexity is O(1).
The Objective of this topic is to make students understand about the five
asymptotic notations:
1. Big-Oh Notations
2. Theta Notations
3. Big-Omega Notations
4. Small Oh Notations
5. Small Omega Notations
Asymptotic is an adjective used in specialized mathematics to describe something that gets closer
to a limit as a variable approaches infinity.
• Asymptotic notation provides a simplified way to analyze and compare the efficiency of
algorithms, focusing on their growth rates without being concerned with constant factors and
lower-order terms.
• Big-oh is the formal method of expressing the upper bound of an algorithm's running time. It is
the measure of the longest amount of time.
• The function f(n) = O(g(n)) (read as “ f of n is big oh of g of n”) if there exist positive constants C
and n0 such that
f(n)≤Cg(n) for all n≥ n0
Sol.
To prove f(n) = O(g(n)), i.e. f(n) <= c·g(n), we need 3n + 4 <= c·n.
Let c = 5:

n    f(n) = 3n+4    c·g(n) = 5n
1    7              5     (7 > 5)
2    10             10    (10 = 10)
3    13             15    (13 < 15)
4    16             20    (16 < 20)

So, for n0 = 2, f(n) <= 5·g(n) holds for all n >= 2, hence 3n + 4 = O(n).
• Similarly, the condition 3n + 2 <= 4n is TRUE for C = 4 and all n >= 2, so by
using Big-Oh notation we can represent the time complexity as 3n + 2 = O(n).
In general, the complexity of f(n) is represented as O(g(n)) using its most
significant term, e.g. O(n³) for a cubic f(n).
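The chosen constants can also be checked mechanically over a range of n (a sketch; this provides numerical evidence, not a proof):

```python
def check_big_oh(f, g, c, n0, upto=10_000):
    # verifies f(n) <= c*g(n) for every n in [n0, upto)
    return all(f(n) <= c * g(n) for n in range(n0, upto))

print(check_big_oh(lambda n: 3*n + 4, lambda n: n, c=5, n0=2))  # True
print(check_big_oh(lambda n: 3*n + 2, lambda n: n, c=4, n0=2))  # True
print(check_big_oh(lambda n: 3*n + 4, lambda n: n, c=3, n0=2))  # False: c = 3 is too small
```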
• Consider function f(n) as the time complexity of an algorithm and g(n) as its
most significant term.
• If f(n) >= C·g(n) for all n >= n0, C > 0 and n0 >= 1, then we can represent f(n) as
Ω(g(n)): f(n) = Ω(g(n)).
• Consider function f(n) as the time complexity of an algorithm and g(n) as its most
significant term.
• If C1·g(n) <= f(n) <= C2·g(n) for all n >= n0, C1 > 0, C2 > 0 and n0 >= 1, then we can
represent f(n) as Θ(g(n)):
f(n) = Θ(g(n))
• Theta notation thus indicates the average (tight) time an algorithm
requires for all input values.
Little-oh:    f(n) = o(g(n))  iff  lim_{n→∞} f(n)/g(n) = 0
Little-omega: f(n) = ω(g(n))  iff  lim_{n→∞} g(n)/f(n) = 0
• Like all recursive functions, a recurrence relation also consists of two steps:
i. One or more initial conditions
ii. Recursive definition of a problem
• Example 1:
• A Fibonacci sequence f_0, f_1, f_2, ….. can be defined by the recurrence relation:
        f_n = 0                  if n = 0
              1                  if n = 1
              f_{n-1} + f_{n-2}  if n ≥ 2
1. (Basic Step): The given recurrence says that if n = 0 then f_0 = 0, and if n = 1 then f_1 = 1. These
two conditions (or values), where the recursion does not call itself, are called the initial conditions (or
base conditions).
2. (Recursive Step): This step is used to find new terms f_2, f_3, …., from the existing (preceding)
terms, by using the formula f_n = f_{n-1} + f_{n-2} for n ≥ 2.
This formula says that “by adding the two previous terms we can get the next term”.
For example 𝒇𝟐 = 𝒇𝟏 + 𝒇𝟎 = 𝟏 + 𝟎 = 𝟏; 𝑓3 = 𝑓2 + 𝑓1 = 1 + 1 = 2 ; 𝑓4 = 𝑓3 + 𝑓2 = 2 + 1 = 3 and so on
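The recursive step can be applied mechanically to generate the sequence from its initial conditions (a sketch):

```python
def fib_sequence(k):
    # builds f_0 .. f_k from the initial conditions f_0 = 0, f_1 = 1
    seq = [0, 1]
    for n in range(2, k + 1):
        seq.append(seq[n - 1] + seq[n - 2])  # recursive step: f_n = f_{n-1} + f_{n-2}
    return seq[:k + 1]

print(fib_sequence(6))  # [0, 1, 1, 2, 3, 5, 8]
```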
TOPIC: RECURRENCE RELATION
• Example 2:
Find the value of n! = n(n-1)(n-2) ……. (3)(2)(1) for n ≥ 1.
The factorial function is defined as:
        n! = 1             if n = 0
             n · (n-1)!    if n ≥ 1
int fact(int n)
    if n == 0 then
        return 1
    else
        return n * fact(n - 1)
    endif
• Let us try to understand the efficiency of the algorithm in terms of the number of
multiplication operations required for each value of n.
Let T(n) denote the number of multiplications required to compute n!, that is, T(n)
denotes the number of times the multiplication line is executed in the factorial algorithm.
• We have the initial condition T(0) = 1, since when n = 0, fact simply returns 1.
• When n ≥ 1, the line performs one multiplication, plus fact is recursively called on
input (n-1). It means, by the definition of T(n), an additional T(n-1)
multiplications are required, giving the recurrence T(n) = T(n-1) + 1.
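The multiplication count can be verified directly (a sketch counting only the multiplications themselves, so the base case contributes 0; the slide's T(0) = 1 also counts the base-case return as one step):

```python
def fact_with_count(n):
    # returns (n!, number of multiplications performed)
    if n == 0:
        return 1, 0  # base case: no multiplication
    sub, subcount = fact_with_count(n - 1)
    return n * sub, subcount + 1  # one multiplication here: T(n) = T(n-1) + 1

print(fact_with_count(5))  # (120, 5)
```

Unrolling T(n) = T(n-1) + 1 gives exactly n multiplications for fact(n), i.e. linear cost.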
• Here our guess does not hold for n = 1, because T(1) ≤ c·1·log 1 = 0, i.e. T(1) ≤ 0, which
is a contradiction with T(1) = 1.
• A recursion tree is a tree where each node represents the cost of a certain
recursive sub-problem. Then you can sum up the numbers in each node to get
the cost of the entire algorithm.
Why is it used?
• A recursion tree is mainly used to generate a close guess to the actual
complexity, which can be further verified using the substitution method. So, we
should keep in mind that we can use recursion trees to develop an intuition of
the underlying pattern, but they are not the actual proofs.
• Step 2:
• Find the total number of levels in the recursion tree.
• Compute the cost of each level in the tree.
• Calculate the total number of nodes in the last level or the leaf nodes.
• Compute the cost of the last level.
• Step 3: Sum up the cost of all the levels and decompose the expression obtained
in the standard asymptotic notation.
• Now we have to find the value of T(n/b) by putting (n/b) in place of n
in equation (i). That is:
        T(n/b) = a·T(n/b²) + f(n/b)
Substituting this into T(n) = a·T(n/b) + f(n) gives:
        T(n) = a²·T(n/b²) + a·f(n/b) + f(n) ……………(ii)
• From equation (ii), each node of value f(n/b) now has a branches (child
nodes), each of size T(n/b²). Each T(n/b) in the figure on the previous slide will be
replaced accordingly.
• Now you have to find the per-level cost of the tree. The per-level cost is the sum
of the costs of all nodes at that level. For example, the per-level cost at level 1
is:
        f(n/b) + f(n/b) + … + f(n/b)   (a times) = a·f(n/b)
This is called the “Row-Sum”.
• Now the total (final) cost of the tree can be obtained by taking the sum of
the costs of all these levels. i.e.,
1. To make a recursion tree, you have to write the value of f(n) at the root node.
2. The number of children of the root node is equal to the value of a (here the value
of a = 2). So for the recurrence
        T(n) = 2·T(n/2) + n,  substituting n/2 for n gives  T(n/2) = 2·T(n/4) + n/2
From the above equation, each node of value (n/2) will have 2 branches (child
nodes), each of size T(n/4); each T(n/2) in figure (a) will be replaced accordingly.
• Then the total cost is the sum of the costs of all levels, which gives the solution of the
recurrence:
        Total Cost = n + n + n + n ………. + n   (one n per level)
“To find the sum of the series you have to find the total number of terms in
the series. To find the total number of terms, you have to find the height of the tree.”
The problem size shrinks as:
        n → n/2 → n/2² → ………… → n/2^k
• At the last level the problem size will be equal to 1 if:
        n/2^k = 1  →  n = 2^k  →  k = log₂ n
• This k represents the height of the tree, hence height = k = log₂ n.
• Total Cost = n + n + n + … + n   (log₂ n terms)
             = n·log₂ n
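The recurrence can be unrolled numerically to confirm the n·log₂n growth (a sketch; taking T(1) = 1, the exact value for n a power of two is n·(log₂n + 1)):

```python
import math

def T(n):
    # exact cost of T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for k in (4, 16, 1024):
    print(k, T(k), k * (math.log2(k) + 1))  # the two columns agree exactly
```

The extra n·1 term comes from the leaves (T(1) at each of the n leaves); asymptotically the cost is Θ(n log n), as derived above.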
• The master method provides us a straightforward method for solving recurrences
of the form:
        T(n) = a·T(n/b) + f(n),   where a ≥ 1 and b > 1 are constants
and f(n) is an asymptotically positive function.
• This recurrence gives us the running time of an algorithm that divides a problem
of size n into a sub-problems, each of size (n/b).
CASE 1: If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
CASE 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
CASE 3: If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all
sufficiently large n, then T(n) = Θ(f(n)).
• REMARKS: To apply the Master method you always compare n^(log_b a) and f(n). The larger of the
two functions determines the solution of the recurrence. If the growth rates of these two
functions are the same, the recurrence belongs to case 2; in this case, we multiply by a logarithmic
factor to get the running-time solution T(n) of the recurrence relation.
• If f(n) is polynomially smaller than n^(log_b a) (by a factor of n^ε), then case 1 will be
applicable to find T(n).
…Since the growth rate of f(n) is slower, we apply case 1 of the Master Theorem and get
T(n) = Θ(n^(log_b a)) = Θ(n²).
• Example 2: Consider the recurrence T(n) = T(2n/3) + 1, in which a = 1, b = 3/2 and
f(n) = 1. Here n^(log_b a) = n^(log_{3/2} 1) = n⁰ = 1, so f(n) = Θ(n^(log_b a)). By the
Master Theorem (case 2), we get T(n) = Θ(n^(log_b a) · log n) = Θ(log n).
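For the common case f(n) = n^d, the three cases reduce to comparing d with log_b a. A small helper sketch (my own, covering only polynomial f; the float comparison of exponents is adequate for textbook values):

```python
import math

def master(a, b, d):
    # solves T(n) = a*T(n/b) + n^d by comparing d with the critical exponent log_b(a)
    e = math.log(a, b)
    if d < e:
        return f"Theta(n^{e:g})"            # case 1: n^(log_b a) dominates
    if d == e:
        return f"Theta(n^{d:g} log n)" if d else "Theta(log n)"  # case 2
    return f"Theta(n^{d:g})"                # case 3: f(n) dominates (regularity holds for n^d)

print(master(4, 2, 1))    # Theta(n^2)       e.g. T(n) = 4T(n/2) + n
print(master(1, 1.5, 0))  # Theta(log n)     e.g. T(n) = T(2n/3) + 1
print(master(4, 2, 2))    # Theta(n^2 log n) e.g. T(n) = 4T(n/2) + n^2
```

The first and second calls reproduce the two worked examples above; the third covers the remaining "equal growth" case.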
1. Solve the recurrence T(n) = T(n/3) + T(2n/3) + n using the recursion tree method.
2. A recurrence relation for the Tower of Hanoi (TOH) problem is T(n) = 2T(n-1) + 1
with T(1) = 1. Solve this recurrence to find the solution of the TOH problem.
3. Solve the following recurrences using the recursion tree method:
   a) T(n) = 4T(n/2) + n
   b) T(n) = T(n/2) + T(n/4) + T(n/8) + n
4. Use the Master theorem to give tight asymptotic bounds for the following
recurrences:
   a) T(n) = 4T(n/2) + n
   b) T(n) = 4T(n/2) + n²
1. DATA TYPES
2. PRIMITIVE AND NON-PRIMITIVE DATA TYPES
3. LINEAR DATA STRUCTURE
4. NON-LINEAR DATA STRUCTURE
1. PRIMITIVE DATA-STRUCTURES
2. NON-PRIMITIVE DATA-STRUCTURES
• Non-primitive data structures are the data structures created by the programmer
with the help of primitive data structures.
#   Linear Data Structure                              | Non-Linear Data Structure
2   All the items are present on a single layer.       | The data items are present at different layers.
3   It can be traversed in a single run: if we start   | It requires multiple runs: if we start from the
    from the first element, we can traverse all the    | first element, it might not be possible to
    elements sequentially in a single pass.            | traverse all the elements in a single pass.
4   The memory utilization is not efficient.           | Different structures utilize memory in different,
                                                       | efficient ways depending on the need.
5   The time complexity increases with the data size.  | The time complexity remains the same.