Design and Analysis of Algorithm Course File 2024

The document outlines the introductory class details for the Design and Analysis of Algorithm course at Noida International University, scheduled for January 16, 2024. It includes classroom management guidelines, course prerequisites, expected outcomes, and a detailed lecture plan for the semester. The course aims to equip students with algorithm analysis skills, covering topics such as dynamic programming, graph algorithms, and computational complexity.


w.e.f.: Jan. 2024
Noida International University
Form No. Acad. -001

Department of Computer Science & Engineering

Faculty Introductory Class at the beginning of the Semester:

Date: 16th Jan 2024 Sem.: IV Period: Sub code: PCC-CS404P Sub Name: Design and Analysis of Algorithm
Name of the Faculty Member: Ms. Kumari Pragya
1. Did you teach this/ similar subject earlier in any class? Yes
2. Class Room Management - When you enter the class observe the following:
(a) Students should stand up and greet you; if they do not, teach them to do so. Return the greeting and ask them to sit down.
(b) See that the room is well ventilated and that the lights and fans are working properly; if not, register a complaint.
(c) See that the seating arrangement is proper. If required, make changes.
(d) Ask about the general welfare of the students, especially hostellers, regarding their mess and food.
(e) In case any particular student is found not cheerful, ask the reason and do the needful.
(f) Make the students aware of general discipline, dress code, attendance and class etiquette.
(g) Emphasize the importance of taking down notes in separate copies for different subjects and keeping in step with the class, and establish the importance of asking questions.
(h) Emphasize the importance of communication in English for professionals.
3. When you find that the students are comfortable and ready to listen, then talk on the following points:
(a) Introduce yourself, i.e., name, qualification, research experience, etc., and any other information which
may influence the students to regard you as their teacher, guide or mentor.
(b) Introduce the subject to be taught highlighting the following:
- Course Objectives
- Course Outcomes
- Expectations from the students after attending the Course
- Evaluation Scheme, Syllabus and Books
- Course Delivery to include: total number of Units to be taught in the semester, number of Units to be
covered before the 1st, 2nd & 3rd sessional tests, sessional test schedules, duration and course coverage in the
tests, number of assignments/ quizzes, sessional marks policy, etc.
- Importance and relevance of the subject in engineering field.
- Its importance for a career in industry and likely career avenues, and the need for this knowledge in human
life at the national and international level.
- Brief summary of the subject taught in the previous semester (to connect the current subject with the subjects
taught earlier, i.e. the pre-requisites).
- Clarify doubts, if any, about the curriculum and about any other matter.
(c) Create interest amongst the students so that they will eagerly wait to attend your classes.
(d) Provide information about various co-curricular and extra-curricular activities and clubs in the college and
emphasize their importance for their overall personality development and help in placement. Also inform the
incentive schemes for their participation in such activities within the college and outside.
(e) Provide information about technical Society/ professional magazines being promoted by the department and
various Centers of Excellence in the college.
4. Give them an assignment based on the pre-requisites of the course: a set of 40 questions, with a random sequence of 10
questions for each student. Collect the assignment in the next class to understand the level of the class as a
stepping stone for starting the new subject.
5. Just before the end of the class, enquire if they have any comments or suggestion.
6. Submit the report to the HOD after the introductory class.
Observations/ Report

Signature & Name of the Faculty Member:                         Signature of the HOD:

Date:                                                           Date:
w.e.f.:Jan. 2024

Noida International University Form No. Acad. -002

Department of Computer Science & Engineering


Session: 2023-2024 Semester: IV
Subject Code: PCC-CS404 Subject Name: Design and Analysis of Algorithm

Prerequisite of the course (subject):

● The students must have basic knowledge of data structures and C programming in order to carry out the analysis of algorithms.
● The course covers the complexity of algorithms, dynamic programming, spanning tree algorithms and shortest path
algorithms. Therefore, students should have knowledge of data structures and C from their previous studies to gain
advanced knowledge during this course.

Course Outcomes (CLO’s):

This course provides the following outcomes to the students:


CLO-1: Analyse best case, average case and worst-case running times of algorithms based on asymptotic analysis and justify the
correctness of algorithms.
CLO-2: Create and implement the greedy paradigm and explain when an algorithmic design situation calls for it. For a given problem,
develop greedy algorithms.
CLO-3: Implement the divide-and-conquer paradigm and explain when an algorithmic design situation calls for it. Synthesize divide-and-
conquer algorithms. Derive and solve recurrence relations.
CLO-4: Analyse the dynamic-programming paradigm and explain when an algorithmic design situation calls for it.
CLO-5: Explore approximation algorithms. Compute the approximation factor of an approximation algorithm (PTAS and FPTAS).
CLO-6: Analyse the performance of different shortest path and spanning tree algorithms.

Program Outcomes (PO’s) relevant to the course:

PO1 - Engineering Knowledge: Apply knowledge of mathematics, science, engineering fundamentals, and an engineering
specialization to the solution of complex engineering problems.
PO2 - Problem analysis: Identify, formulate, review research literature and analyze complex engineering problems reaching
substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO3 - Design/development of solutions: Design solutions for complex engineering problems and design system components or
processes that meet the specified needs with appropriate consideration for the public health and safety, and the cultural, societal, and
environmental considerations.
PO5 - Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including
prediction and modelling, to complex engineering activities with an understanding of the limitations.
PO12 - Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and life-long learning
in the broadest context of technological change.

COURSE OUTCOMES – PROGRAMME OUTCOMES MAPPING TABLE

CLO PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12

CO1 2 1 1 - 2 - - - - - - 1

CO2 2 1 1 - 2 - - - - - - 1

CO3 2 1 1 - 2 - - - - - - 1

CO4 2 1 1 - 3 - - - - - - 1

CO5 2 1 2 - 2 - - - - - - 1

CO6 2 1 - - 1 - - - - - - 1
w.e.f.:Jan. 2024

Noida International University Form No. Acad. -003

Department of Computer Science & Engineering

Lecture Plan

Session: 2023-2024 Semester: IV


Subject Code: PCC-CS404 Subject Name: Design and Analysis of
Algorithm
LTP
Suggested Readings:
1. E. Horowitz, S. Sahni, and S. Rajasekaran, "Fundamentals of Computer Algorithms", Galgotia Publication.
2. T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein, "Introduction to Algorithms".
UNIT    Contents    No. of Periods    Book/ Web Ref    Test

Unit 1: Introduction: Algorithm design paradigms: motivation, concept of algorithmic efficiency, run time analysis of algorithms, asymptotic notations. Recurrences: substitution method, recursion tree method, master method.
No. of Periods: 10.    Ref: T. H. Cormen, Leiserson, Rivest and Stein, "Introduction to Algorithms".

Unit 2: Divide and Conquer: Structure of divide-and-conquer algorithms; examples: binary search, quick sort, merge sort, Strassen multiplication; analysis of divide-and-conquer run time recurrence relations. Greedy Method: overview of the greedy paradigm, examples of exact optimization solutions (minimum cost spanning tree), approximate solutions (knapsack problem), single source shortest paths, Traveling Salesman Problem.
No. of Periods: 10.    Ref: Cormen et al., "Introduction to Algorithms".

Unit 3: Dynamic Programming: Overview, difference between dynamic programming and divide and conquer. Applications: shortest path in a graph, chain matrix multiplication, Traveling Salesman Problem, longest common subsequence, knapsack problem.
No. of Periods: 8.    Ref: Cormen et al., "Introduction to Algorithms".

Unit 4: Graph Searching and Traversal: Overview, representation of graphs, strongly connected components, traversal methods (depth first and breadth first search). Backtracking: overview, 8-queens problem, knapsack problem. Branch and Bound: LC searching, bounding, FIFO branch and bound, LC branch and bound; applications: 0/1 knapsack problem, Traveling Salesman Problem.
No. of Periods: 8.    Ref: Cormen et al., "Introduction to Algorithms".

Unit 5: Computational Complexity: Complexity measures, polynomial vs non-polynomial time complexity; NP-hard and NP-complete classes, examples.
No. of Periods: 7.    Ref: Cormen et al., "Introduction to Algorithms".

Signature (Faculty)            Signature (HOD)            Signature (Director)
-------------------------      ---------------------------      -------------

w.e.f.:Jan.2024

Noida International University Form No. Acad. -003

Department of Computer Science & Engineering

Lesson Plan

Session: 2023-2024 Semester: IV


Subject Code: PCC-CS404 Subject Name: Design and Analysis of Algorithm
LTP
S. No    Contents/Topic    No. of Periods    Planned Date    Delivery Date    Remark

1 Introduction to Algorithm (Basic Technology) 1 18/01/2024 19/01/2024 Complete

2 Algorithm and its Characteristics, Analysis and Fundamentals of Algorithm Solving 1 19/01/2024 22/01/2024 Complete

3 Pseudo code and Algorithm Analysis framework 1 22/01/2024 06/01/2024 Complete

4 Complexity (Time and Space), Asymptotic Analysis of Complexity Bounds 1 06/01/2024 29/01/2024 Complete

5 Complexity analysis of insertion sort 1 29/01/2024 01/01/2024 Complete

6 Analysis of Big Oh, Omega and Theta 1 01/01/2024 02/01/2024 Complete

7 Analysis of Small Oh & Small Omega 1 02/01/2024 05/01/2024 Complete

8 Problems based on Asymptotic relations & properties, Recurrence relations 1 05/01/2024 08/01/2024 Complete

9 Substitution method & problems based on it 1 08/01/2024 09/01/2024 Complete

10 Recursion tree method & problem based on it 1 09/01/2024 12/01/2024 Complete

11 Master theorem and problem based on it 1 12/01/2024 15/01/2024 Complete

12 Unit 2: Brute force algorithms 1 15/01/2024 16/01/2024 Complete

13 Greedy Algorithm 1 16/01/2024 22/01/2024 Complete


14 Dynamic Programming 1 22/01/2024 26/01/2024 Complete

15 Branching & Backtracking methodology 2 26/01/2024 29/01/2024 Complete

16 Bin Packing Algorithm 2 29/01/2024 01/01/2024 Complete

17 Knapsack problem 2 01/01/2024 04/02/2024 Complete

18 TSP 2 04/02/2024 07/02/2024 Complete

19 Heuristics: characteristics & their application domains 2 07/02/2024 11/02/2024 Complete

20 Class Test & Assignment 2 11/02/2024 14/02/2024 Complete

21 Unit 3: Graph Theory & Its Applications 2 14/02/2024 15/02/2024 Complete

22 Tree Algorithm & Introduction 2 15/02/2024 18/02/2024 Complete

23 Depth First Search (DFS) Algorithm 2 18/02/2024 21/02/2024 Complete

24 Breadth First Search (BFS) Algorithm 3 21/02/2024 22/02/2024 Complete

25 Shortest Path Algorithm- Dijkstra's Algorithm 3 22/02/2024 02/02/2024 Complete

26 Floyd-Warshall algorithm 3 02/02/2024 03/02/2024 Complete

27 Bellman-Ford algorithm 3 03/02/2024 06/03/2024 Complete

28 Minimum Spanning Trees (Prim's & Kruskal's Algorithms) 3 06/03/2024 09/03/2024 Complete

29 Class Test & Assignment 3 09/03/2024 10/03/2024 Complete

30 Unit 4: Tractable & Intractable Problems 3 10/03/2024 13/03/2024 Complete

31 Computability Algorithm 3 13/03/2024 27/03/2024 Complete

32 Computability Class P 4 27/03/2024 28/03/2024 Complete

33 Computability Classes - NP 4 28/03/2024 11/04/2024 Complete

34 Computability Classes -NP-Complete 4 11/04/2024 17/04/2024 Complete

35 NP-hard Problem 4 17/04/2024 19/04/2024 Complete

36 Standard NP-Complete Problems and Reduction Techniques 4 19/04/2024 22/04/2024 Complete

37 Cook's Theorem 4 22/04/2024 24/04/2024 Complete

38 Class Test & Assignment 4 24/04/2024 01/05/2024 Complete

39 Unit 5: Approximation Algorithms 4 01/05/2024 03/05/2024 Complete

40 Randomized Algorithms 5 03/05/2024 06/05/2024 Complete

41 Classes of problems beyond NP: PSPACE 5 06/05/2024 07/05/2024 Complete

42 Revision and problem solving 5 07/05/2024 08/05/2024 Complete

302

Signature (Faculty)            Signature (HOD)            Signature (Director)
-------------------------      ---------------------------      -------------

w.e.f.:Jan. 2024
Form No. Acad. -004
Noida International University
Department of Computer Science & Engg.

Plan Summary of Assignment/ Test/ Quiz

Session: 2023-2024 Semester: IV


Subject Code: PCC-CS404 Subject Name: Design and Analysis of Algorithm
S. No.    Assignment/ Test/ Quiz    Name    Date of Submission

1.    Assignment    Assignment-1    27/01/2024

2.    Assignment    Assignment-2    13/02/2024

3.    Assignment    Assignment-3    10/03/2024

4.    Assignment    Assignment-4    24/04/2024

5.    Assignment    Assignment-5    06/05/2024

6.    Quiz    Quiz    07/05/2024

Signature and Name of Faculty Member: Signature of HOD:

Date: Date:
w.e.f.:Jan 2024
Noida International University Form No. Acad. -005

Department of Computer Science & Engineering


List of Weak Students and their Make-up Classes

Session: 2023-2024 Semester: IV


Subject Code: PCC-CS404 Subject Name: Design and Analysis of Algorithm
Name of Faculty: Ms. Kumari Pragya
After Mid Term test:

A. List of weak students:

S.No.    Roll No.        Name                Marks obtained (less than 40% in Mid Term test)    Remarks

1        22044041540     ADITYA MODI
2        22044041552     ANANT JAIN
3        22044041546     ALI MEHDI
4        22044041564     ASHUTOSH .
5        22044041574     CHIRAG SHARMA
6        22044041575     DEEPAK MALIK

B. Arrangement of Make-up Classes with dates: NA

Make-up class    Planned Date    Actual Date    Topics Covered

I      19/04/2024    21/04/2024    Dynamic Programming
II     25/04/2024    28/04/2024    Knapsack
III    03/05/2024    06/05/2024    Greedy Algorithm
IV     08/05/2024    13/05/2024    MST
V
VI

Signature of the Faculty Member: Signature of the HOD:


Date: Date
w.e.f.:Jan 2024
Noida International University
Form No. Acad. -006

Department of Computer Science & Engineering


Extra Classes Report
Session: 2023-2024 Semester: IV
Subject Code: PCC-CS404 Subject Name: Design and Analysis of Algorithm

S.no. Planned Date Actual Date Topics Covered

1 19/04/2024 21/04/2024 Dynamic Programming


2 25/04/2024 28/04/2024 Knapsack
3 03/05/2024 06/05/2024 Greedy Algorithm
4 08/05/2024 13/05/2024 MST

Signature & Name of the Faculty Member: Signature of the HOD:


Date: Date:
w.e.f.:Jan.2024
Noida International University Form No. Acad. -007

Department of Computer Science & Engineering

End of Semester Summary (Lecture, Attendance & Marks)

Session: 2023-2024 Semester: IV


Subject Code: PCC-CS404 Subject Name: Design and Analysis of Algorithm
Lecture Summary

Unit    No. of Lectures Planned    No. of Lectures Delivered    Remark

1       8       10      Complete
2       8       10      Complete
3       8       8       Complete
4       8       8       Complete
5       8       7       Complete

Attendance Summary
S.No. Attendance No. of Students

1 Total Number of Students

2 More than 95%

3 Between 95% & 75%

4 Between 75% & 60%

5 Less than 60%

6 Average attendance of the students (%)

Summary of Assignment/ Test/ Quiz/ Sessional Exam Marks

S.No. Activity Average Marks Max Marks Min Marks

1 Assignment-1 3 4 2

2 Assignment- 2 3 4 2

3 Quiz 3 4

Signature & Name of the Faculty Member(s): Signature of the HOD:

Date: Date:
w.e.f.:Jan 2024
Noida International University
Form No. Acad. -006

Department of Computer Science & Engineering


Student List
Session: 2024-2025 Semester: IV
Subject Code: PCC-CS404 Subject Name: Design and Analysis of Algorithm

S.No. Student Name Enrollment No. Program and Semester


SAURAV CHAUDHARY 20044041774
1 B.Tech CSE
MANEESH PUNDIR 22044041527
2 B.Tech CSE
AAKASH DAGAR 22044041528
3 B.Tech CSE
AASHISH KUMAR JHA 22044041529
4 B.Tech CSE
MAHAMOUD ABDILLAHI 22044041530
5 B.Tech CSE
ABHIMANYU SINGH 22044041532
6 B.Tech CSE
ABHIMANYU SINGH 22044041533
7 B.Tech CSE
ABHISHEK GUPTA 22044041534
8 B.Tech CSE
ABHISHEK KUMAR 22044041536
9 B.Tech CSE
ABUZAR RAZA 22044041537
10 B.Tech CSE
ADIL HUSSAIN 22044041538
11 B.Tech CSE
ADITYA KUMAR 22044041539
12 B.Tech CSE
ADITYA MODI 22044041540
13 B.Tech CSE
ADITYA RAJ 22044041541
14 B.Tech CSE
ADITYA RAJ TIWARI 22044041542
15 B.Tech CSE
ADITYANATH YADAV 22044041543
16 B.Tech CSE
AKASH CHATTERJEE 22044041544
17 B.Tech CSE
AKASH TRIPATHI 22044041545
18 B.Tech CSE
ALI MEHDI 22044041546
19 B.Tech CSE
AMAN ALI 22044041547
20 B.Tech CSE
AMAN PRATAP 22044041548
21 B.Tech CSE
AMAN KUMAR SINGH 22044041549
22 B.Tech CSE
AMARNATH SHARMA 22044041550
23 B.Tech CSE
24 AMIT CHOUDHARY 22044041551 B.Tech CSE
ANANT JAIN 22044041552
25 B.Tech CSE
ANJALI SHARMA 22044041553
26 B.Tech CSE
ANKIT CHAUHAN 22044041554
27 B.Tech CSE
ANKIT NAGAR 22044041555
28 B.Tech CSE
ANUJ KUMAR 22044041557
29 B.Tech CSE
ANURAG MOURYA 22044041558
30 B.Tech CSE
ARUN KUMAR 22044041560
31 B.Tech CSE
ASHISH RANJAN 22044041561
32 B.Tech CSE
ASHISH KUMAR 22044041562
33 B.Tech CSE
ASTHA BALI 22044041563
34 B.Tech CSE
ASHUTOSH . 22044041564
35 B.Tech CSE
ATHIRA K 22044041565
36 B.Tech CSE
ATUL KUMAR SINGH 22044041566
37 B.Tech CSE
AYUSH SINHA 22044041567
38 B.Tech CSE
AYUSH PRATAP GAUR 22044041568
39 B.Tech CSE
BABU ALI 22044041569
40 B.Tech CSE
BADAL RAJ 22044041570
41 B.Tech CSE
BIBHAY SHARMA 22044041571
42 B.Tech CSE
BICKY YADAV 22044041572
43 B.Tech CSE
CHIRAG SHARMA 22044041574
44 B.Tech CSE
DEEPAK MALIK 22044041575
45 B.Tech CSE
DIXIT SHARMA 22044041577
46 B.Tech CSE
DJAFFAR BINTI HADJARA 22044041578
47 B.Tech CSE
DUSHYANT THAKUR 22044041579
48 B.Tech CSE
GOVIND SINGH 22044041581
49 B.Tech CSE
HARSH . 22044041582
50 B.Tech CSE
HARSH RANJAN 22044041583
51 B.Tech CSE
HARSH TOMAR 22044041585
52 B.Tech CSE
53 HARSHITA TEWATIA 22044041586 B.Tech CSE
HIMANSHI . 22044041587
54 B.Tech CSE
HIMANSHU SHARMA 22044041588
55 B.Tech CSE
HIMANSHU SINGH 22044041590
56 B.Tech CSE
HITESH CHAUHAN 22044041592
57 B.Tech CSE
HITESH KUMAR SETHI 22044041593
58 B.Tech CSE
ISHAN SHARMA 22044041594
59 B.Tech CSE
JOLLY DISWAR 22044041595
60 B.Tech CSE
KAIF KHAN 22044041597
61 B.Tech CSE
KARN KUMAR JHA 22044041598
62 B.Tech CSE
KAUSHLENDRA SINGH 22044041599
63 B.Tech CSE
KRITIKA RAI 22044041600
64 B.Tech CSE
LAINAT MOUKTARE 22044041602
65 B.Tech CSE
LOVE . 22044041603
66 B.Tech CSE
MANISH KUMAR SINHA 22044041604
67 B.Tech CSE
MANJEET KUMAR 22044041605
68 B.Tech CSE
MD ZEESHAN 22044041607
69 B.Tech CSE
MD ZAID ALAM 22044041609
70 B.Tech CSE
MINHAJ AHMAD 22044041610
71 B.Tech CSE
MOHAMMAD ASHIF 22044041611
72 B.Tech CSE
MOHAMMAD FAYAZ AHMAD 22044041612
73 B.Tech CSE
MOHD FAIZ HUSAIN 22044041613
74 B.Tech CSE
MOHIT KAUSHIK 22044041614
75 B.Tech CSE
MUHAMMAD LUQMAN 22044041615
76 B.Tech CSE
NIKHIL KUSHWAH 22044041619
77 B.Tech CSE
NISHANT CHAUHAN 22044041620
78 B.Tech CSE
NITIN BAGHEL 22044041621
79 B.Tech CSE
OMHARI KAUSHIK 22044041622
80 B.Tech CSE
OMPRATAP YADAV 22044041623
81 B.Tech CSE
82 PANKAJ SINGH 22044041624 B.Tech CSE
PARVEEN . 22044041625
83 B.Tech CSE
PIYUSH NISHAD 22044041626
84 B.Tech CSE
PRABAL SINGH 22044041627
85 B.Tech CSE
PRAKASH KUMAR JHA 22044041628
86 B.Tech CSE
PRASHANT KUMAR 22044041629
87 B.Tech CSE
PREM KUMAR SAH 22044041631
88 B.Tech CSE
PRIYANSHU SINGH 22044041633
89 B.Tech CSE
RACHNA . 22044041634
90 B.Tech CSE
RAHUL KUMAR 22044041635
91 B.Tech CSE
RAHUL KUMAR 22044041636
92 B.Tech CSE
RAJ ARYAN 22044041637
93 B.Tech CSE
RAJNEESH ATTRI 22044041638
94 B.Tech CSE
RINKU PRAJAPATI 22044041639
95 B.Tech CSE
RITIK NAGAR 22044041640
96 B.Tech CSE
RITIK PD CHAUDHARY 22044041641
97 B.Tech CSE
RIYA KUMARI 22044041642
98 B.Tech CSE
SABBU KUMAR 22044041644
99 B.Tech CSE
SACHIN TOMAR 22044041645
100 B.Tech CSE
SAHIL . 22044041646
101 B.Tech CSE
SALEEM ABBAS 22044041647
102 B.Tech CSE
SALMAN ANSARI 22044041648
103 B.Tech CSE
SARTHAK VERMA 22044041650
104 B.Tech CSE
SATYAM KUMAR THAKUR 22044041651
105 B.Tech CSE
SATYANAM YADAV 22044041652
106 B.Tech CSE
SAURAV KUMAR BICHHA 22044041654
107 B.Tech CSE
SAURAV KUMAR DUBEY 22044041655
108 B.Tech CSE
SAYYED MOHD.KAIF 22044041656
109 B.Tech CSE
SHIBU KUMAR 22044041657
110 B.Tech CSE
111 SHIVAM PANDEY 22044041658 B.Tech CSE
SHIVAM SHARMA 22044041659
112 B.Tech CSE
SHIVAM SINGH 22044041660
113 B.Tech CSE
SHIVAM YADAV 22044041661
114 B.Tech CSE
SHWETA KUMARI 22044041663
115 B.Tech CSE
SIMRAN . 22044041664
116 B.Tech CSE
SUBHANSHU PRATAP SINGH 22044041665
117 B.Tech CSE
SUJAL SINGH 22044041666
118 B.Tech CSE
SUKHDEV SINGH 22044041668
119 B.Tech CSE
SUNNY . 22044041669
120 B.Tech CSE
SUNNY RAJ 22044041670
121 B.Tech CSE
TAMANNA SHARMA 22044041672
122 B.Tech CSE
TAUSIF AKHTAR 22044041674
123 B.Tech CSE
UTKARSH ANAND 22044041675
124 B.Tech CSE
VAIBHAV GAUTAM 22044041676
125 B.Tech CSE
VIJAY 22044041679
126 B.Tech CSE
VIRENDRA KUMAR 22044041681
127 B.Tech CSE
VISHAL 22044041682
128 B.Tech CSE
VIVEK KUMAR SINGH 22044041684
129 B.Tech CSE
AHMED ABDOU BACAR AMIL 22044041687
130 B.Tech CSE
ANKIT VISHWAKARMA 22044041688
131 B.Tech CSE
TANVI RANA 22044041784
132 B.Tech CSE
ADITYA KUMAR SINGH 22044042682
133 B.Tech CSE
ADNAN ASLAM 22044042683
134 B.Tech CSE
VIVEK SHARMA 22044042691
135 B.Tech CSE
BITTU PRASAD MANDAL 22044042864
136 B.Tech CSE
KARAN SAHANI 22044042866
137 B.Tech CSE
UTTAM SINGH 22044042867
138 B.Tech CSE
ANURAG KUSHWAHA 22044101714
139 B.Tech CSE
140 SANJANA BHARTI 22044591709 B.Tech CSE
ANSHI KUMARI 23044043659
141 B.Tech CSE
SUDHANSHU MISHRA 23044043853
142 B.Tech CSE
RAHUL KUMAR 23044043955
143 B.Tech CSE
BIBEK KUMAR MAHATO 23044044005
144 B.Tech CSE
Subject Notes

UNIT -1

What is an algorithm?

An algorithm is a set of steps to complete a task.

For example,

Task: to make a cup of tea.

Algorithm:

• add water and milk to the kettle,

• boil it, add tea leaves,

• add sugar, and then serve it in a cup.

What is Computer algorithm?

‘’a set of steps to accomplish or complete a task that is described precisely enough that a computer
can run it’’.

Described precisely: it is very difficult for a machine to know how much water or milk is to be added, etc., in
the above tea-making algorithm.

These algorithms run on computers or computational devices, for example, GPS in our
smartphones and Google Hangouts.

GPS uses a shortest path algorithm. Online shopping uses cryptography, which uses the RSA algorithm.

Characteristics of an algorithm: -

• Input: must take an input.

• Output: must give some output (yes/no, a value, etc.).

• Definiteness: each instruction is clear and unambiguous.

• Finiteness: the algorithm terminates after a finite number of steps.

• Effectiveness: every instruction must be basic, i.e. a simple instruction.

Expectation from an algorithm

Correctness:

Correct: algorithms must produce the correct result.

Produce an incorrect answer: even if an algorithm fails to give the correct result all the time, there can still be
control on how often it gives a wrong result. E.g. the Rabin-Miller primality test (used in the RSA algorithm)
does not give the correct answer all the time: about 1 out of 2^50 runs it gives an incorrect result.

Approximation algorithm: the exact solution is not found, but a near-optimal solution can be found.
(Applied to optimization problems.)

Less resource usage:

Algorithms should use fewer resources (time and space).

Resource usage:

Here, time is considered to be the primary measure of efficiency. We are also concerned with how
much computer memory the algorithm uses, but mostly time is the resource that is dealt with. The actual
running time depends on a variety of factors: the speed of the computer, the language in which the algorithm
is implemented, the compiler/interpreter, the skill of the programmers, etc.

So, mainly the resource usage can be divided into:

1. Memory (space)

2. Time

Time taken by an algorithm?

Performance measurement or a posteriori analysis: implement the algorithm on a machine and then
calculate the time taken by the system to execute the program successfully.

Performance evaluation or a priori analysis: done before implementing the algorithm on a system, as follows:

1. How long the algorithm takes: represented as a function of the size of the input, f(n), i.e. how long it
takes if n is the size of the input.

2. How fast the function that characterizes the running time grows with the input size: the "rate of growth
of the running time".

The algorithm with the lower rate of growth of running time is considered better.

How is an algorithm a technology?

Algorithms are just like a technology. We all use the latest and greatest processors, but we need to run
implementations of good algorithms on those computers in order to properly take advantage of the money
we spent on them. Let's make this example more concrete by pitting a faster computer (computer A) running
a sorting algorithm whose running time on n values grows like n^2 against a slower computer (computer B)
running a sorting algorithm whose running time grows like n lg n. They each must sort an array of 10 million
numbers. Suppose that computer A executes 10 billion instructions per second (faster than any single
sequential computer at the time of this writing) and computer B executes only 10 million instructions per
second, so that computer A is 1000 times faster than computer B in raw computing power. To make the
difference even more dramatic, suppose that the world's craftiest programmer codes in machine language for
computer A, and the resulting code requires 2n^2 instructions to sort n numbers. Suppose further that just an
average programmer writes for computer B, using a high-level language with an inefficient compiler, with
the resulting code taking 50 n lg n instructions.

Computer A (faster): running time grows like n^2; 10 billion instructions per second; code requires 2n^2 instructions.
Computer B (slower): running time grows like n lg n; 10 million instructions per second; code requires 50 n lg n instructions.

Time taken by A = 2·(10^7)^2 instructions / 10^10 instructions per second = 20,000 seconds (more than 5.5 hours).
Time taken by B = 50·10^7·lg(10^7) instructions / 10^7 instructions per second ≈ 1163 seconds (under 20 minutes).

So choosing a good algorithm (an algorithm with a slower rate of growth), as used by computer B,
matters a lot.

Growth of Functions (Asymptotic notations)

Before discussing growth of functions and asymptotic notation, let us see how to analyse an
algorithm.

How to analyse an Algorithm

Let us form an algorithm for insertion sort (which sorts a sequence of numbers). The pseudocode for
the algorithm is given below.

Pseudo code:

for j = 2 to A.length ------------------------------------------ C1
    key = A[j] ----------------------------------------------- C2
    // Insert A[j] into the sorted array A[1..j-1] ------------ C3
    i = j - 1 ------------------------------------------------ C4
    while i > 0 and A[i] > key ------------------------------- C5
        A[i+1] = A[i] ---------------------------------------- C6
        i = i - 1 -------------------------------------------- C7
    A[i+1] = key --------------------------------------------- C8
Let Ci be the cost of the i-th line. Since comment lines do not incur any cost, C3 = 0.

Cost      No. of times executed

C1        n
C2        n-1
C3=0      n-1
C4        n-1
C5        Σ_{j=2}^{n} t_j
C6        Σ_{j=2}^{n} (t_j - 1)
C7        Σ_{j=2}^{n} (t_j - 1)
C8        n-1

(Here t_j is the number of times the while-loop test in line 5 is executed for that value of j.)

Running time of the algorithm is:

T(n) = C1·n + C2·(n-1) + 0·(n-1) + C4·(n-1) + C5·Σ_{j=2}^{n} t_j + C6·Σ_{j=2}^{n}(t_j - 1) + C7·Σ_{j=2}^{n}(t_j - 1) + C8·(n-1)

Best case:

It occurs when the array is already sorted.

All t_j values are 1, so Σ_{j=2}^{n} t_j = n-1 and Σ_{j=2}^{n}(t_j - 1) = 0.

T(n) = C1·n + C2·(n-1) + 0·(n-1) + C4·(n-1) + C5·(n-1) + C8·(n-1)
     = (C1 + C2 + C4 + C5 + C8)·n - (C2 + C4 + C5 + C8)

which is of the form a·n + b, a linear function of n. So the best case shows linear growth.

Worst case:

It occurs when the array is reverse sorted, and t_j = j, so Σ_{j=2}^{n} t_j = n(n+1)/2 - 1 and Σ_{j=2}^{n}(t_j - 1) = n(n-1)/2.

T(n) = C1·n + C2·(n-1) + 0·(n-1) + C4·(n-1) + C5·(n(n+1)/2 - 1) + C6·(n(n-1)/2) + C7·(n(n-1)/2) + C8·(n-1)

which is of the form a·n^2 + b·n + c, a quadratic function. So in the worst case insertion sort grows like n^2.
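To make the pseudocode above concrete, here is a minimal runnable C sketch of insertion sort (the array contents and sizes in main are illustrative only, not part of the original notes):

#include <stdio.h>

/* Sorts arr[0..n-1] in increasing order using insertion sort. */
void insertion_sort(int arr[], int n)
{
    for (int j = 1; j < n; j++) {        /* corresponds to "for j = 2 to A.length" (0-based here) */
        int key = arr[j];
        int i = j - 1;
        /* Shift elements of the sorted prefix that are greater than key one position to the right. */
        while (i >= 0 && arr[i] > key) {
            arr[i + 1] = arr[i];
            i = i - 1;
        }
        arr[i + 1] = key;
    }
}

int main(void)
{
    int a[] = {5, 2, 4, 6, 1, 3};        /* illustrative input */
    int n = sizeof(a) / sizeof(a[0]);
    insertion_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}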
Why we concentrate on worst-case running time?

The worst-case running time gives a guaranteed upper bound on the runningtime for any input.

For some algorithms, the worst case occurs often. For example, when searching, the worst case
often occurs when the item being searched for is not present, and searches for absent items may be
frequent.

Why not analyze the average case? Because it is often about as bad as the worst case.

Order of growth: it is described by the highest degree term of the formula for running time. (Drop lower-order terms. Ignore the constant coefficient in the leading term.)

Example: we found that for insertion sort the worst-case running time is of the form a·n^2 + b·n + c.

Drop the lower-order terms: what remains is a·n^2. Ignore the constant coefficient: it results in n^2. But we
cannot say that the worst-case running time T(n) equals n^2; rather, it grows like n^2, but it does not equal
n^2. We say that the running time is Θ(n^2) to capture the notion that the order of growth is n^2.

We usually consider one algorithm to be more efficient than another if its worst-case running
time has a smaller order of growth.

Asymptotic notation

It is a way to describe the characteristics of a function in the limit. It describes the rate of growth of functions.

Focus on what's important by abstracting away low-order terms and constant factors. It is a way to compare "sizes" of functions:

O ≈ <=,  Ω ≈ >=,  Θ ≈ =,  o ≈ <,  ω ≈ >

Example: n^2/2 - 2n = Θ(n^2), with c1 = 1/4, c2 = 1/2, and n0 = 8.
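As a quick check of the constants quoted above (this verification is my own addition, following the standard definition of Θ):

\[
c_1 n^2 \le \frac{n^2}{2} - 2n \le c_2 n^2
\quad\Longleftrightarrow\quad
c_1 \le \frac{1}{2} - \frac{2}{n} \le c_2 .
\]
For $n \ge n_0 = 8$ we have $\tfrac{1}{2} - \tfrac{2}{n} \ge \tfrac{1}{2} - \tfrac{2}{8} = \tfrac{1}{4}$,
and for all $n \ge 1$ we have $\tfrac{1}{2} - \tfrac{2}{n} \le \tfrac{1}{2}$,
so the definition of $\Theta(n^2)$ is satisfied with $c_1 = \tfrac14$, $c_2 = \tfrac12$, $n_0 = 8$.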
Recurrences, Solution of Recurrences by substitution, Recursion Tree and Master Method
Recursion is a particularly powerful kind of reduction, which can be described loosely as follows:

• If the given instance of the problem is small or simple enough, just solve it.

• Otherwise, reduce the problem to one or more simpler instances of the same problem.

Recursion is generally expressed in terms of recurrences. In other words, when an algorithm calls
itself, we can often describe its running time by a recurrence equation which describes the overall
running time of a problem of size n in terms of the running time on smaller inputs. E.g. the worst-case
running time T(n) of the merge sort procedure can be expressed by the recurrence

T(n) = Θ(1)               if n = 1
T(n) = 2T(n/2) + Θ(n)     if n > 1

whose solution can be found as T(n) = Θ(n log n).

There are various techniques to solve recurrences.

1. SUBSTITUTION METHOD:

The substitution method comprises 3 steps:

i. Guess the form of the solution.
ii. Verify by induction.
iii. Solve for constants.

We substitute the guessed solution for the function when applying the inductive hypothesis to
smaller values; hence the name "substitution method". This method is powerful, but we must be
able to guess the form of the answer in order to apply it.
e.g. recurrence equation: T(n) = 4T(n/2) + n

Step 1: guess the form of the solution.

Ignoring the n term, T(n) = 4T(n/2) behaves like a function F with F(n) = 4F(n/2), i.e. F(2n) = 4F(n),
which is satisfied by F(n) = n^2. So T(n) is at least of order n^2; guess T(n) = O(n^3).

Step 2: verify by induction.

Assume T(k) <= c·k^3 for all k < n. Then

T(n) = 4T(n/2) + n
     <= 4c(n/2)^3 + n
     = c·n^3/2 + n
     = c·n^3 - (c·n^3/2 - n)
     <= c·n^3,    since (c·n^3/2 - n) is non-negative for suitable c and n.

So what we assumed was true: T(n) = O(n^3).

Step 3: solve for constants.

c·n^3/2 - n >= 0 holds for all n >= 1 provided c >= 2.

Now suppose we guess that T(n) = O(n^2), which is the tight upper bound. Assume T(k) <= c·k^2;
we should prove that T(n) <= c·n^2:

T(n) = 4T(n/2) + n
     <= 4c(n/2)^2 + n
     = c·n^2 + n

So T(n) will never be shown to be less than c·n^2 this way. But if we take the stronger assumption
T(k) <= c1·k^2 - c2·k, then we can show that T(n) = O(n^2).
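The last step can be completed as follows; this is a sketch I have added (the constants c1, c2 are chosen to make the induction go through, they are not taken from the original text):

\begin{align*}
T(n) &= 4\,T(n/2) + n \\
     &\le 4\left(c_1 (n/2)^2 - c_2 (n/2)\right) + n \\
     &= c_1 n^2 - 2 c_2 n + n \\
     &= \left(c_1 n^2 - c_2 n\right) - \left(c_2 n - n\right) \\
     &\le c_1 n^2 - c_2 n \qquad \text{whenever } c_2 \ge 1,
\end{align*}

so the inductive hypothesis $T(k) \le c_1 k^2 - c_2 k$ is maintained and $T(n) = O(n^2)$.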

2. BY ITERATIVE METHOD:

e.g. T(n) = 2T(n/2) + n
        = 2[2T(n/4) + n/2] + n
        = 2^2·T(n/4) + 2n
        = 2^2[2T(n/8) + n/4] + 2n
        = 2^3·T(n/2^3) + 3n

After k iterations, T(n) = 2^k·T(n/2^k) + k·n ----------(1)

The sub-problem size is 1 when n/2^k = 1, i.e. k = log n. So after log n iterations the sub-problem size will be 1.

Putting k = log n in equation (1):

T(n) = n·T(1) + n·log n
     = n·c + n·log n      (say c = T(1))
     = O(n log n)

3. BY RECURSION TREE METHOD:

In a recursion tree, each node represents the cost of a single sub-problem somewhere in the set of
recursive invocations. We sum the costs within each level of the tree to obtain a set of per-level
costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion.

Constructing a recursion tree for the recurrence T(n) = 3T(n/4) + c·n^2: the tree expands level by level;
the fully expanded tree has height log_4 n (it has log_4 n + 1 levels).

Sub-problem size at depth i = n/4^i.
Sub-problem size is 1 when n/4^i = 1, i.e. i = log_4 n.
So the number of levels = 1 + log_4 n.

Cost of each level = (number of nodes) x (cost of each node).
Number of nodes at depth i = 3^i.
Cost of each node at depth i = c·(n/4^i)^2.
Cost of the level at depth i = 3^i · c·(n/4^i)^2 = (3/16)^i · c·n^2.

T(n) = Σ_{i=0}^{log_4 n} (3/16)^i · c·n^2
     = Σ_{i=0}^{log_4 n - 1} (3/16)^i · c·n^2 + cost of the last level.

Cost of the nodes in the last level = 3^i·T(1) = c·3^{log_4 n}   (at the last level i = log_4 n)
                                    = c·n^{log_4 3}.

T(n) = Σ_{i=0}^{log_4 n - 1} (3/16)^i · c·n^2 + c·n^{log_4 3}
     <= Σ_{i=0}^{∞} (3/16)^i · c·n^2 + c·n^{log_4 3}
     = (16/13)·c·n^2 + c·n^{log_4 3}
     => T(n) = O(n^2).

4. BY MASTER METHOD:

The master method solves recurrences of the form

T(n) = a·T(n/b) + f(n)

where a >= 1 and b > 1 are constants and f(n) is an asymptotically positive function. To
use the master method, we have to remember 3 cases:

1. If f(n) = O(n^{log_b a - ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).

2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} · log n). (More generally, if f(n) = Θ(n^{log_b a} · log^k n) for some k >= 0, then T(n) = Θ(n^{log_b a} · log^{k+1} n).)

3. If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if a·f(n/b) <= c·f(n) for some constant c < 1 and all
sufficiently large n, then T(n) = Θ(f(n)).

e.g. T(n) = 2T(n/2) + n·log n

Answer: a = 2, b = 2, f(n) = n·log n, and n^{log_b a} = n^{log_2 2} = n.

Using the general form of the 2nd case: f(n) = Θ(n^{log_2 2} · log^k n) = n·log n gives k = 1, so

T(n) = Θ(n^{log_2 2} · log^2 n) = Θ(n·log^2 n).
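As a cross-check (my own added example, not from the original notes), the recurrence T(n) = 4T(n/2) + n used earlier for the substitution method falls under case 1:

\[
a = 4,\; b = 2,\; f(n) = n, \qquad n^{\log_b a} = n^{\log_2 4} = n^2 ,
\]
\[
f(n) = n = O\!\left(n^{2-\varepsilon}\right)\ \text{for}\ \varepsilon = 1
\quad\Longrightarrow\quad
T(n) = \Theta\!\left(n^{2}\right),
\]

which agrees with the tight bound obtained by the substitution method.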

Design and analysis of Divide and Conquer Algorithms

DIVIDE AND CONQUER ALGORITHM

In this approach, we solve a problem recursively by applying 3 steps:

1. DIVIDE: break the problem into several sub-problems of smaller size.

2. CONQUER: solve the sub-problems recursively.

3. COMBINE: combine these solutions to create a solution to the original problem.

CONTROL ABSTRACTION FOR DIVIDE AND CONQUER ALGORITHM

Algorithm DandC(P)
{
    if small(P)
        then return S(P)
    else
    {
        divide P into smaller instances P1, P2, ..., Pk
        apply DandC to each sub-problem
        return combine(DandC(P1), DandC(P2), ..., DandC(Pk))
    }
}

The running time of a divide-and-conquer algorithm can be expressed by a recurrence relation of the form

T(n) = Θ(1)                          if n <= c
T(n) = a·T(n/b) + D(n) + C(n)        otherwise

where n = input size, a = number of sub-problems, n/b = input size of each sub-problem, D(n) = cost of
dividing the problem, and C(n) = cost of combining the solutions.


Worst case analysis of merge sort, quick sort

Merge sort

It is one of the well-known divide-and-conquer algorithms. It is a simple and very efficient
algorithm for sorting a list of numbers.

We are given a sequence of n numbers which we will assume is stored in an array A[1..n].
The objective is to output a permutation of this sequence, sorted in increasing order. This is
normally done by permuting the elements within the array A.

How can we apply divide-and-conquer to sorting? Here are the major elements of the Merge
Sort algorithm.

Divide: Split A down the middle into two sub-sequences, each of size roughly n/2.
Conquer: Sort each subsequence (by calling MergeSort recursively on each).
Combine: Merge the two sorted sub-sequences into a single sorted list.

The dividing process ends when we have split the sub-sequences down to a single item. A
sequence of length one is trivially sorted. The key operation, where all the work is done, is the
combine stage, which merges together two sorted lists into a single sorted list. It turns out that
the merging process is quite easy to implement.

The following figure gives a high-level view of the algorithm. The "divide" phase is shown on
the left. It works top-down, splitting up the list into smaller sublists. The "conquer and combine"
phases are shown on the right. They work bottom-up, merging sorted lists together into larger
sorted lists.

(Figure: Merge Sort)

Designing the Merge Sort algorithm top-down: we will assume that the procedure that merges two
sorted lists is available to us; we will implement it later. Because the algorithm is called recursively
on sublists, in addition to passing in the array itself we will pass in two indices, which indicate
the first and last indices of the subarray that we are to sort. The call MergeSort(A, p, r) will sort
the sub-array A[p..r] and return the sorted result in the same subarray.

Here is the overview. If r = p, then this means that there is only one element to sort, and we may
return immediately. Otherwise (if p < r) there are at least two elements, and we will invoke the
divide-and-conquer. We find the index q, midway between p and r, namely q = (p + r)/2
(rounded down to the nearest integer). Then we split the array into subarrays A[p..q] and
A[q + 1..r]. Call MergeSort recursively to sort each subarray. Finally, we invoke a procedure
(which we have yet to write) which merges these two subarrays into a single sorted array.

MergeSort(array A, int p, int r)
{
    if (p < r) {                  // we have at least 2 items
        q = (p + r)/2
        MergeSort(A, p, q)        // sort A[p..q]
        MergeSort(A, q+1, r)      // sort A[q+1..r]
        Merge(A, p, q, r)         // merge everything together
    }
}

Merging: all that is left is to describe the procedure that merges two sorted lists. Merge(A, p, q, r)
assumes that the left subarray, A[p..q], and the right subarray, A[q + 1..r], have already been
sorted. We merge these two subarrays by copying the elements to a temporary working array
called B. For convenience, we will assume that the array B has the same index range as A, that
is, B[p..r]. We have two indices i and j that point to the current elements of each subarray. We
move the smaller element into the next position of B (indicated by index k) and then increment
the corresponding index (either i or j). When we run out of elements in one array, then we just
copy the rest of the other array into B. Finally, we copy the entire contents of B back into A.

Merge(array A, int p, int q, int r) {      // merges A[p..q] with A[q+1..r]
    array B[p..r]
    i = k = p                              // initialize pointers
    j = q+1
    while (i <= q and j <= r) {            // while both subarrays are nonempty
        if (A[i] <= A[j]) B[k++] = A[i++]  // copy from left subarray
        else B[k++] = A[j++]               // copy from right subarray
    }
    while (i <= q) B[k++] = A[i++]         // copy any leftover to B
    while (j <= r) B[k++] = A[j++]
    for i = p to r do A[i] = B[i]          // copy B back to A
}
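For reference, here is a minimal runnable C translation of the two procedures above (0-based indices; the temporary buffer and the sample array in main are illustrative choices, not part of the original notes):

#include <stdio.h>
#include <stdlib.h>

/* Merges the sorted ranges a[p..q] and a[q+1..r] (inclusive, 0-based). */
static void merge(int a[], int p, int q, int r)
{
    int *b = malloc((r - p + 1) * sizeof(int));   /* temporary working array */
    int i = p, j = q + 1, k = 0;
    while (i <= q && j <= r)                      /* both subarrays nonempty */
        b[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= q) b[k++] = a[i++];               /* leftovers from the left half */
    while (j <= r) b[k++] = a[j++];               /* leftovers from the right half */
    for (k = 0, i = p; i <= r; i++, k++)
        a[i] = b[k];                              /* copy B back into A */
    free(b);
}

/* Sorts a[p..r] (inclusive) using merge sort. */
static void merge_sort(int a[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;
        merge_sort(a, p, q);
        merge_sort(a, q + 1, r);
        merge(a, p, q, r);
    }
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};         /* illustrative input */
    int n = sizeof(a) / sizeof(a[0]);
    merge_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}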

Analysis: What remains is to analyze the running time of MergeSort. First let us consider the running time of the procedure Merge: since each element of the two subarrays is copied into B and back exactly once, merging takes time proportional to the total number of elements, n.

Now, how do we describe the running time of the entire MergeSort algorithm? We will do this
through the use of a recurrence, that is, a function that is defined recursively in terms of itself.
To avoid circularity, the recurrence for a given value of n is defined in terms of values that are
strictly smaller than n. Finally, a recurrence has some basis values (e.g. for n = 1), which are
defined explicitly.

Let's see how to apply this to MergeSort. Let T(n) denote the worst-case running time of
MergeSort on an array of length n. For concreteness we could count whatever we like: number
of lines of pseudocode, number of comparisons, number of array accesses, since these will only
differ by a constant factor. Since all of the real work is done in the Merge procedure, we will
count the total time spent in the Merge procedure.

First observe that if we call MergeSort with a list containing a single element, then the running
time is a constant; since we are ignoring constant factors, we can say that this takes 1 unit of time.
Otherwise we make two recursive calls, each on a subarray of about n/2 elements. Finally, to merge
both sorted lists takes n time, by the comments made above. In conclusion we have

T(n) = 1                 if n = 1,
T(n) = 2T(n/2) + n       otherwise.

Solving the above recurrence we can see that merge sort has a time complexity of Θ(n log n).

QUICKSORT

Worst-case running time: O(n^2).

Expected running time: O(n lg n).

Sorts in place.

Description of quicksort

Quicksort is based on the three-step process of divide-and-conquer.

• To sort the subarray A[p..r]:

Divide: Partition A[p..r] into two (possibly empty) subarrays A[p..q-1] and A[q+1..r], such that
each element in the first subarray A[p..q-1] is <= A[q] and A[q] is <= each element in the second
subarray A[q+1..r].

Conquer: Sort the two subarrays by recursive calls to QUICKSORT.

Combine: No work is needed to combine the subarrays, because they are sorted in place.

• Perform the divide step by a procedure PARTITION, which returns the index q that marks
the position separating the subarrays.
QUICKSORT(A, p, r)
    if p < r
        then q = PARTITION(A, p, r)
             QUICKSORT(A, p, q - 1)
             QUICKSORT(A, q + 1, r)

Initial call is QUICKSORT(A, 1, n).

Partitioning

Partition the subarray A[p..r] by the following procedure:

PARTITION(A, p, r)
    x = A[r]
    i = p - 1
    for j = p to r - 1
        do if A[j] <= x
               then i = i + 1
                    exchange A[i] and A[j]
    exchange A[i + 1] and A[r]
    return i + 1

PARTITION always selects the last element A[r] in the subarray A[p..r] as the pivot, the
element around which to partition.

As the procedure executes, the array is partitioned into four regions, some of which may be
empty.
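A minimal runnable C version of PARTITION and QUICKSORT is sketched below (0-based indices, last element as pivot; the sample array in main is illustrative, not from the original notes):

#include <stdio.h>

/* Swaps two array elements. */
static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partitions a[p..r] around the pivot a[r]; returns the pivot's final index. */
static int partition(int a[], int p, int r)
{
    int x = a[r];            /* pivot */
    int i = p - 1;
    for (int j = p; j < r; j++) {
        if (a[j] <= x) {     /* element belongs to the "low" side */
            i++;
            swap(&a[i], &a[j]);
        }
    }
    swap(&a[i + 1], &a[r]);  /* place the pivot between the two sides */
    return i + 1;
}

/* Sorts a[p..r] (inclusive) in place. */
static void quicksort(int a[], int p, int r)
{
    if (p < r) {
        int q = partition(a, p, r);
        quicksort(a, p, q - 1);
        quicksort(a, q + 1, r);
    }
}

int main(void)
{
    int a[] = {2, 8, 7, 1, 3, 5, 6, 4};   /* illustrative input */
    int n = sizeof(a) / sizeof(a[0]);
    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}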
Performance of Quicksort

The running time of Quicksort depends on the partitioning of the subarrays:

• If the subarrays are balanced, then Quicksort can run as fast as mergesort.

• If they are unbalanced, then Quicksort can run as slowly as insertion sort.

Worst case

• Occurs when the subarrays are completely unbalanced.

• Have 0 elements in one subarray and n - 1 elements in the other subarray.

• Get the recurrence

T(n) = T(n - 1) + T(0) + Θ(n)
     = T(n - 1) + Θ(n)
     = O(n^2).

• Same running time as insertion sort.


• In fact, the worst-case running time occurs when Quicksort takes a sorted array as input, but
insertion sort runs in O(n) time in this case.

Best case

• Occurs when the subarrays are completely balanced every time.

• Each subarray has n/2 elements.

• Get the recurrence T(n) = 2T(n/2) + Θ(n) = O(n lg n).

Balanced partitioning

• Quicksort's average running time is much closer to the best case than to the worst case.

• Imagine that PARTITION always produces a 9-to-1 split.

• Get the recurrence

T(n) <= T(9n/10) + T(n/10) + Θ(n) = O(n lg n).

• Intuition: look at the recursion tree.

• It's like the one for T(n) = T(n/3) + T(2n/3) + O(n).

• Except that here the constants are different; we get log_10 n full levels and log_{10/9} n levels
that are nonempty.

• As long as it's a constant, the base of the log doesn't matter in asymptotic notation.

• Any split of constant proportionality will yield a recursion tree of depth O(lg n).

Heaps and Heap sort

HEAPSORT

Inplace algorithm
Running Time: O(n log n)

Complete Binary Tree

The (binary) heap data structure is an array object that we can view as a nearly complete
binary tree. Each node of the tree corresponds to an element of the array. The tree is completely
filled on all levels except possibly the lowest, which is filled from the left up to a point.

The root of the tree is A[1], and given the index i of a node, we can easily compute the indices
of its parent, left child, and right child:

PARENT(i) => return floor(i/2)

LEFT(i) => return 2i

RIGHT(i) => return 2i + 1

On most computers, the LEFT procedure can compute 2i in one instruction by simply shifting
the binary representation of i left by one bit position.

Similarly, the RIGHT procedure can quickly compute 2i + 1 by shifting the binary
representation of i left by one bit position and then adding in a 1 as the low-order bit.
The PARENT procedure can compute [i/2] by shifting i right one bit position. Good
implementations of heapsort often implement these procedures as "macros" or "inline"
procedures.

There are two kinds of binary heaps: max-heaps and min-heaps.

In a max-heap, the max-heap property is that for every node i other than the root,
A[PARENT(i)] >= A[i], that is, the value of a node is at most the value of its parent.
Thus, the largest element in a max-heap is stored at the root, and the subtree rooted at a
node contains values no larger than that contained at the node itself.

A min-heap is organized in the opposite way; the min-heap property is that for every
node i other than the root, A[PARENT(i)] <= A[i].

The smallest element in a min-heap is at the root.

The height of a node in a heap is the number of edges on the longest simple downward
path from the node to a leaf, and the height of the heap is the height of its root.

The height of a heap of n elements, which is based on a nearly complete binary tree, is O(log n).

Maintaining the heap property

MAX-HEAPIFY lets the value at A[i] "float down" in the max-heap so that the
subtree rooted at index i obeys the max-heap property.

MAX-HEAPIFY(A, i)

1. l = LEFT(i)
2. r = RIGHT(i)
3. if l <= heap-size and A[l] > A[i]
4.     then largest = l
5.     else largest = i
6. if r <= heap-size and A[r] > A[largest]
7.     then largest = r
8. if largest != i
9.     then exchange A[i] and A[largest]
10.         MAX-HEAPIFY(A, largest)

At each step, the largest of the elements A[i], A[LEFT(i)], and A[RIGHT(i)] is
determined, and its index is stored in largest. If A[i] is largest, then the subtree rooted at node
i is already a max-heap and the procedure terminates. Otherwise, one of the two children has
the largest element, and A[i ] is swapped with A[largest], which causes node i and its children
to satisfy the max-heap property. The node indexed by largest, however, now has the original
value A[i], and thus the subtree rooted at largest might violate the max-heap property.
Consequently,

we call MAX-HEAPIFY recursively on that subtree.

Figure: The action of MAX-HEAPIFY (A, 2), where heap-size = 10. (a) The initial con-
figuration, with A [2] at node i = 2 violating the max-heap property since it is not larger than
both children. The max-heap property is restored for node 2 in (b) by exchanging A [2] with
A[4], which destroys the max-heap property for node 4. The recursive call MAX-HEAPIFY
(A,4)
now has i = 4. After swapping A[4] with A[9], as shown in (c), node 4 is fixed up, and the
recursive call MAX-HEAPIFY(A, 9) yields no further change to the data structure.

The running time of MAX-HEAPIFY can be described by the recurrence

T(n) <= T(2n/3) + O(1)

The solution to this recurrence is T(n) = O(log n).

Building a heap

Build-Max-Heap(A)

1. for i = floor(n/2) downto 1
2.     do MAX-HEAPIFY(A, i)

Example input array: A = <4, 1, 3, 2, 16, 9, 10, 14, 8, 7>.
We can derive a tighter bound by observing that the time for MAX-HEAPIFY to run at a node
varies with the height of the node in the tree, and the heights of most nodes are small. Our
tighter analysis relies on the properties that an n-element heap has height floor(log n) and at most
ceil(n/2^(h+1)) nodes of any height h.

The total cost of BUILD-MAX-HEAP is thus bounded by

T(n) = O(n)

The HEAPSORTAlgorithm

HEAPSORT(A)

1. BUILD-MAX-HEAP(A)
2. for i = n downto 2
3.     exchange A[1] with A[i]
4.     heap-size = heap-size - 1
5.     MAX-HEAPIFY(A, 1)

Sorted output for the example array: A = <1, 2, 3, 4, 7, 8, 9, 10, 14, 16>.

The HEAPSORT procedure takes time O(n log n), since the call to BUILD-MAX-HEAP takes time
O(n) and each of the n - 1 calls to MAX-HEAPIFY takes time O(log n).
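Putting the three procedures together, a minimal runnable C heapsort might look like this (0-based indexing, so the children of node i are 2i+1 and 2i+2; the array in main is the example above):

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Restores the max-heap property for the subtree rooted at i, given heap size n. */
static void max_heapify(int a[], int n, int i)
{
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) {
        swap(&a[i], &a[largest]);
        max_heapify(a, n, largest);   /* the swapped-down value may still violate the property */
    }
}

/* Builds a max-heap from an unordered array. */
static void build_max_heap(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        max_heapify(a, n, i);
}

/* Sorts a[0..n-1] in place in O(n log n) time. */
static void heapsort_array(int a[], int n)
{
    build_max_heap(a, n);
    for (int i = n - 1; i >= 1; i--) {
        swap(&a[0], &a[i]);           /* move the current maximum to the end */
        max_heapify(a, i, 0);         /* heap size shrinks by one */
    }
}

int main(void)
{
    int a[] = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7};
    int n = sizeof(a) / sizeof(a[0]);
    heapsort_array(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}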

Lower Bounds for Sorting

Review of sorting: so far we have seen a number of algorithms for sorting a list of numbers in
ascending order. Recall that an in-place sorting algorithm is one that uses no additional array
storage (however, we allow Quicksort to be called in-place even though it needs a stack of size
O(log n) for keeping track of the recursion). A sorting algorithm is stable if duplicate elements
remain in the same relative position after sorting.

Slow algorithms: these include BubbleSort, InsertionSort, and SelectionSort. These are all simple
Θ(n^2) in-place sorting algorithms. BubbleSort and InsertionSort can be implemented as stable
algorithms, but SelectionSort cannot (without significant modifications).
Mergesort: Mergesort is a stable Θ(n log n) sorting algorithm. The downside is that MergeSort is
the only algorithm of the three that requires additional array storage, implying that it is not an
in-place algorithm.

Quicksort: widely regarded as the fastest of the fast algorithms. This algorithm is O(n log n) in the
expected case, and O(n^2) in the worst case. The probability that the algorithm takes asymptotically
longer (assuming that the pivot is chosen randomly) is extremely small for large n. It is an (almost)
in-place sorting algorithm but is not stable.

Heapsort: Heapsort is based on a nice data structure, called a heap, which is a fast priority queue.
Elements can be inserted into a heap in O(log n) time, and the largest item can be extracted in
O(log n) time. (It is also easy to set up a heap for extracting the smallest item.) If you only want
to extract the k largest values, a heap allows you to do this in O(n + k log n) time. It is an in-place
algorithm, but it is not stable.

Lower bounds for comparison-based sorting: can we sort faster than O(n log n) time?

We will give an argument that if the sorting algorithm is based solely on making comparisons
between the keys in the array, then it is impossible to sort more efficiently than Ω(n log n) time.
Such an algorithm is called a comparison-based sorting algorithm, and this includes all of the
algorithms given above. Virtually all known general-purpose sorting algorithms are based on
making comparisons, so this is not a very restrictive assumption. This does not preclude the
possibility of a sorting algorithm whose actions are determined by other types of operations, for
example, consulting the individual bits of numbers, performing arithmetic operations, or indexing
into an array based on arithmetic operations on keys. We will show that any comparison-based
sorting algorithm for an input sequence <a1, a2, ..., an> must make at least Ω(n log n) comparisons
in the worst case. This is still a difficult task if you think about it. It is easy to show that a problem
can be solved fast (just give an algorithm). But to show that a problem cannot be solved fast you
need to reason in some way about all the possible algorithms that might ever be written. In fact,
it seems surprising that you could even hope to prove such a thing. The catch here is that we are
limited to using comparison-based algorithms, and there is a clean mathematical way of
characterizing all such algorithms.

Decision tree argument: in order to prove lower bounds, we need an abstract way of modeling
"any possible" comparison-based sorting algorithm; we model such algorithms in terms of an
abstract model called a decision tree. In a comparison-based sorting algorithm only comparisons
between the keys are used to determine the action of the algorithm. Let <a1, a2, ..., an> be the
input sequence. Given two elements ai and aj, their relative order can only be determined by the
results of comparisons like ai < aj, ai <= aj, ai = aj, ai >= aj, and ai > aj. A decision tree is a
mathematical representation of a sorting algorithm (for a fixed value of n). Each node of the
decision tree represents a comparison made in the algorithm (e.g., a4 : a7), and the two branches
represent the possible results; for example, the left subtree consists of the remaining comparisons
made under the assumption that a4 <= a7 and the right subtree for a4 > a7. Observe that once we
know the value of n, the "action" of the sorting algorithm is completely determined by the results
of its comparisons. This action may involve moving elements around in the array, copying them
to other locations in memory, or performing various arithmetic operations on non-key data. But
the bottom line is that at the end of the algorithm, the keys will be permuted in the final array in
some way. Each leaf in the decision tree is labeled with the final permutation that the algorithm
generates after making all of its comparisons. To make this more concrete, let us assume that
n = 3, and let's build a decision tree for SelectionSort. Recall that the algorithm consists of two
phases. The first finds the smallest element of the entire list and swaps it with the first element.
The second finds the smaller of the remaining two items and swaps it with the second element.
Here is the decision tree (in outline form). The first comparison is between a1 and a2. The
possible results are:

a1 <= a2: Then a1 is the current minimum. Next we compare a1 with a3, whose results might be either:

    a1 <= a3: Then we know that a1 is the minimum overall, and the elements remain in their original
    positions. Then we pass to phase 2 and compare a2 with a3. The possible results are:

        a2 <= a3: Final output is <a1, a2, a3>.

        a2 > a3: These two are swapped and the final output is <a1, a3, a2>.

    a1 > a3: Then we know that a3 is the overall minimum, and it is swapped with a1. Then we pass to
    phase 2 and compare a2 with a1 (which is now in the third position of the array), yielding either:

        a2 <= a1: Final output is <a3, a2, a1>.

        a2 > a1: These two are swapped and the final output is <a3, a1, a2>.

a1 > a2: Then a2 is the current minimum. Next we compare a2 with a3, whose results might be either:

    a2 <= a3: Then we know that a2 is the minimum overall. We swap a2 with a1, then pass to phase 2,
    and compare the remaining items a1 and a3. The possible results are:

        a1 <= a3: Final output is <a2, a1, a3>.

        a1 > a3: These two are swapped and the final output is <a2, a3, a1>.

    a2 > a3: Then we know that a3 is the overall minimum, and it is swapped with a1. We pass to phase 2
    and compare a2 with a1 (which is now in the third position of the array), yielding either:

        a2 <= a1: Final output is <a3, a2, a1>.

        a2 > a1: These two are swapped and the final output is <a3, a1, a2>.

The final decision tree is shown below. Note that there are some nodes that are unreachable. For
example, in order to reach the fourth leaf from the left it must be that a1 <= a2 and a1 > a2, which
cannot both be true. Can you explain this? (The answer is that virtually all sorting algorithms,
especially inefficient ones like selection sort, may make comparisons that are redundant, in the
sense that their outcome has already been determined by earlier comparisons.) As you can see,
converting a complex sorting algorithm like HeapSort into a decision tree for a large value of n
will be very tedious and complex, but I hope you are convinced by this exercise that it can be done
in a simple mechanical way.

(Figure: Decision Tree for Selection Sort on 3 keys.)
Using decision trees for analyzing sorting: consider any sorting algorithm. Let T(n) be the
maximum number of comparisons that this algorithm makes on any input of size n. Notice that
the running time of the algorithm must be at least as large as T(n), since we are not counting data
movement or other computations at all. The algorithm defines a decision tree. Observe that the
height of the decision tree is exactly equal to T(n), because any path from the root to a leaf
corresponds to a sequence of comparisons made by the algorithm.

As we have seen earlier, any binary tree of height T(n) has at most 2^{T(n)} leaves. This means
that this sorting algorithm can distinguish between at most 2^{T(n)} different final actions. Let's
call this quantity A(n), the number of different final actions the algorithm can take. Each action
can be thought of as a specific way of permuting the original input to get the sorted output. How
many possible actions must any sorting algorithm distinguish between? If the input consists of n
distinct numbers, then those numbers could be presented in any of n! different permutations. For
each different permutation, the algorithm must "unscramble" the numbers in an essentially
different way, that is, it must take a different action, implying that A(n) >= n!. (Again, A(n) is
usually not exactly equal to n! because most algorithms contain some redundant unreachable leaves.)

Since A(n) <= 2^{T(n)} and A(n) >= n!, we have 2^{T(n)} >= n!, implying that

T(n) >= lg(n!) = Ω(n lg n).
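A short justification of the last step, added here for completeness (a simple direct bound; Stirling's approximation gives the same result):

\[
\lg(n!) = \sum_{i=1}^{n} \lg i \;\ge\; \sum_{i=\lceil n/2\rceil}^{n} \lg i
\;\ge\; \frac{n}{2}\,\lg\frac{n}{2}
\;=\; \frac{n}{2}\lg n - \frac{n}{2}
\;=\; \Omega(n \lg n).
\]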
