Competitive Programming Notes- Cheatsheet
Template
Basic Data Types
Built-in Functions
Data Structures
Algorithms
Array Sorting
Binary Search
Kadane's Algorithm
Divide and Conquer algorithms
Graph Theory
Fenwick Tree
Segment Tree
Adjacency Matrix
DFS
BFS
Dijkstra
Dynamic Programming
Prefix Sum Array
Iterators, Recursion
Range Query
Queries on static arrays
Sum queries
Minimum queries
Time Complexity
Built-in Functions
C++ Built-in Functions Cheat Sheet
1. Basic Utility Functions
● std::max(a, b), std::min(a, b)
○ Returns the maximum/minimum of two values.
● std::swap(a, b)
○ Swaps values of a and b.
2. Sorting & Searching
● std::sort(begin, end)
○ Sorts elements in ascending order (default).
● std::binary_search(begin, end, value)
○ Checks if value exists in a sorted range.
● std::lower_bound(begin, end, value)
○ Finds the first position where value can be inserted without changing order.
● std::upper_bound(begin, end, value)
○ Finds the first position strictly greater than value.
3. Numeric & Math Functions
● std::gcd(a, b), std::lcm(a, b)
○ Computes GCD and LCM.
● std::pow(base, exp), std::sqrt(x), std::ceil(x), std::floor(x),
std::round(x)
○ Exponentiation, square root, rounding functions.
4. Data Structures & Algorithms
● std::priority_queue<T>
○ Implements a max-heap (default) or min-heap.
● std::unordered_map<Key, Value>, std::unordered_set<T>
○ Fast O(1) average-time complexity lookup.
5. Graph Algorithms
● std::queue<T> (for BFS), std::stack<T> (for DFS)
○ BFS and DFS traversal utilities.
6. Time Measurement
● std::chrono::high_resolution_clock::now()
○ Measures execution time.
7. Threading & Concurrency
● std::thread t(function_name)
○ Creates a new thread for parallel execution.
8. String Manipulation
● std::stoi(str), std::stoll(str), std::stod(str)
○ Convert string to integer, long long, or double.
● std::to_string(x)
○ Converts number x to a string.
9. Bit Manipulation
● __builtin_popcount(x), __builtin_ctz(x), __builtin_clz(x)
○ Counts set bits, trailing zeroes, leading zeroes (for GCC/Clang).
Extra Knowledge: Iterators in C++
Iterators are like pointers that traverse STL containers like vector, set, map.
● Forward Iterator → Read/write, multiple passes (forward_list)
● Bidirectional Iterator → Move forward & backward (list, map, set)
● Random Access Iterator → Supports indexing (vector, deque)
Data Structures
In C++, there are generally 3 kinds of STL (Standard Library) containers:
● Sequence Containers: data is sequentially ordered
● Associative Containers: elements are stored in a sorted order (order does not depend on
when the element is inserted)
● Unordered associative containers: position of elements does not matter
There are also container adaptors which are not full container classes, but classes that provide a
specific interface relying on an object of one of the container classes to handle the elements.
More reading: https://cplusplus.com/reference/stl/
// Include the set library
#include <set>
// Include the map library
#include <map>
// Include the stack library
#include <stack>
// Include the queue library
#include <queue>
● Similar to vector, but more efficient in insertion and deletion of elements
x.insert(x.begin()+3, 2); //insert 2 at the 4th position
x.erase(x.begin()+1); //remove the 2nd item
x.front(); //returns frontmost item
x.back(); //returns last item
x.size(); //return size of x
//insert
x["word"] = 4;
x["notword"] = 5;
x["awesome"] = 1;
//namespace and includes for GNU policy trees
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
typedef tree<int, null_type, less<int>, rb_tree_tag, tree_order_statistics_node_update> indexed_set;
// less_equal<int> makes it behave like a multiset:
typedef tree<int, null_type, less_equal<int>, rb_tree_tag, tree_order_statistics_node_update> multi_indexed_set;
indexed_set s;
s.insert(2);
s.insert(5);
s.insert(7);
s.insert(9);
Algorithms
Array Sorting
For small integers (0 ≤ arr[i] ≤ 10⁶), Counting Sort runs in O(N + K).
Which One to Use?
● std::sort() → Best for general use (default choice)
● std::stable_sort() → Use when stability matters
● Counting Sort → Use for sorting small integers fast
For NOI, std::sort() is almost always the best choice! 🚀
Binary Search
Binary Search is a searching algorithm that finds the index of an element in a sorted array in O(log n) time.
How it works:
● sort the array arr in ascending order
● set a left index l to be the first element of the array, set a right index r to be the last element of
the array
● set the middle index mid to be the average of these 2 indexes
○ if arr[mid]==target element, then the value has been found
○ if arr[mid]<target element, then set l to be mid+1
○ if arr[mid]>target element, then set r to be mid−1
● repeat previous step until value has been found or r<l
Usage:
● used to find
○ lower bound: index of the first element that is >= a given value
○ upper bound: index of the first element that is > a given value
● Binary Search on The Answer (BSTA)
○ binary search can be used to find the answer x of a function f(x) such that f(x)=true
○ only works on monotonic functions (functions that are always non-decreasing/always
non-increasing)
Kadane's Algorithm
Kadane's Algorithm solves the largest contiguous subarray problem: given an array of size N, find the
sum of the contiguous subarray within the array with the largest sum.
How it works:
● go through the array and accumulate:
○ the current partial sum in some variable s
○ the maximum sum of contiguous subarray found so far in some variable m
● if s<0 at any point, we assign s=0
● if s>m at any point, we assign m=s
Divide and Conquer algorithms
A divide and conquer algorithm breaks the problem into subproblems, recursively solves each, then combines the results. Example: recursively finding the maximum element of arr[l..r]:
int max(int arr[], int l, int r) {
if (l == r) return arr[l]; // base case: a single element
else {
int m = (l+r)/2;
int lm = max(arr, l, m); // maximum of the left half
int rm = max(arr, m+1, r); // maximum of the right half
return (lm > rm) ? lm : rm; // combine
}
}
References
● Divide and Conquer Algorithm: Introduction (geeksforgeeks)
● Divide and Conquer Algorithm (programiz)
Graph Theory
Fenwick Tree
● calculates the sum of elements over a given range in an array in O(log n) time
● updates the value of an element in an array in O(log n) time
● requires O(n) memory
A Fenwick Tree efficiently computes prefix sums and supports point updates in O(log N) time.
✅ Use Cases: Dynamic prefix sums, inversion count, range queries.
Fenwick Tree
// C++ code to demonstrate operations of a Binary Indexed Tree (Fenwick Tree)
#include <iostream>
using namespace std;
int BITree[1001]; // 1-based tree; supports arrays of size up to 1000

void update(int n, int index, int val) { // arr[index] += val
// index in BITree[] is 1 more than the index in arr[]
index = index + 1;
for (; index <= n; index += index & (-index))
BITree[index] += val;
}

int getSum(int index) { // sum of arr[0..index]
int sum = 0;
// index in BITree[] is 1 more than the index in arr[]
for (index = index + 1; index > 0; index -= index & (-index))
sum += BITree[index];
return sum;
}
Segment Tree
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 100000; // limit on array size
int n; // n is global
int tree[2 * MAXN]; // iterative segment tree for range sums

void build(int a[]) { // build tree on top of array a[] of size n
for (int i = 0; i < n; i++) tree[n + i] = a[i]; // leaves
for (int i = n - 1; i > 0; --i) // internal nodes
tree[i] = tree[2*i] + tree[2*i + 1];
}

void updateTreeNode(int p, int value) { // set a[p] = value
tree[p + n] = value;
// move upward and update parents
for (int i = p + n; i > 1; i >>= 1)
tree[i>>1] = tree[i] + tree[i^1];
}

int query(int l, int r) { // sum on the interval [l, r)
int res = 0;
for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
if (l&1)
res += tree[l++];
if (r&1)
res += tree[--r];
}
return res;
}

int main() {
int a[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
// n is global
n = sizeof(a)/sizeof(a[0]);
// build tree
build(a);
printf("%d\n", query(1, 3)); // a[1] + a[2] = 5
return 0;
}
Adjacency Matrix
int adjMat[n][n];
memset(adjMat, -1, sizeof adjMat);
cin >> u >> v >> w; // 0, 1, 6
adjMat[u][v] = w;
Adjacency List
0: (1, 6), (2, 3)
1: (3, 1)
2: (1, 2)
3: -
4: (3, 4)
typedef pair<int, int> ii;
vector<ii> adjList[n];
cin >> u >> v >> w; // 0, 1, 6
adjList[u].push_back(ii(v, w));
Edge list
[(0, 1, 6),
(0, 2, 3),
(1, 3, 1),
(2, 1, 2),
(4, 3, 4)]
typedef tuple<int, int, int> iii;
vector<iii> edgeList;
cin >> u >> v >> w; // 0, 1, 6
edgeList.push_back(iii(u, v, w));
SSSP Reconstruction
Keep track of the parent u of each vertex v (p[v] = u), then recursively print the path from s to t by following the parent pointers back from t.
DFS
#include <bits/stdc++.h>
using namespace std;
const int UNVISITED = -1, VISITED = 1;
vector<int> dfs_num; // state of each vertex
vector<vector<pair<int,int>>> AL; // adjacency list: (neighbour, weight)
void dfs(int u) {
printf(" %d", u); //this vertex is visited
dfs_num[u] = VISITED; //mark u as visited
for (auto &[v, w] : AL[u]) //C++17 style, w ignored
if (dfs_num[v] == UNVISITED) //to avoid cycle
dfs(v); //recursively visits v
}
int main() {
/*
//Undirected Graph in Figure 4.1
9
1 1 0
3 0 0 2 0 3 0
2 1 0 3 0
3 1 0 2 0 4 0
1 3 0
0
2 7 0 8 0
1 6 0
1 6 0
*/
return 0;
}
BFS
#include <bits/stdc++.h>
using namespace std;
typedef pair<int, int> ii; // In this chapter, we will frequently use these
typedef vector<ii> vii; // three data type shortcuts. They may look cryptic
typedef vector<int> vi; // but shortcuts are useful in competitive programming
const int INF = 1e9; // INF = 1B, not 2^31-1 to avoid overflow
vi p; // addition: parent vector
void printPath(int u) { // extract info from vi p
if (p[u] == -1) { printf("%d", u); return; }
printPath(p[u]); // output format: s -> ... -> t
printf(" %d", u);
}
int main() {
/*
// Graph in Figure 4.3, format: list of unweighted edges
// This example shows another form of reading graph input
13 16
0 1 1 2 2 3 0 4 1 5 2 6 3 7 5 6
4 8 8 9 5 10 6 11 7 12 9 10 10 11 11 12
*/
p[v] = u; // parent of v is u
q.push(v); // for next iteration
}
else if ((dist[v]%2) == (dist[u]%2)) // same parity
isBipartite = false;
}
}
Dijkstra
#include <bits/stdc++.h>
using namespace std;
int main() {
/*
// Graph in Figure 4.17
5 7 0
0 1 2
0 2 6
0 3 7
1 3 3
1 4 6
2 4 1
3 4 5
*/
freopen("dijkstra_in.txt", "r", stdin);
int V, E, s; scanf("%d %d %d", &V, &E, &s);
vector<vii> AL(V, vii());
while (E--) {
int u, v, w; scanf("%d %d %d", &u, &v, &w);
AL[u].emplace_back(v, w); // directed graph
}
vi dist(V, INF); dist[s] = 0; // INF = 1e9 here
/*
// Original Dijkstra's algorithm, using a set as the priority queue
set<ii> pq; pq.insert({0, s});
// sort the pairs by non-decreasing distance from s
while (!pq.empty()) { // main loop
auto [d, u] = *pq.begin(); // shortest unvisited u
pq.erase(pq.begin());
for (auto &[v, w] : AL[u]) { // all edges from u
if (dist[u]+w >= dist[v]) continue; // not improving, skip
if (dist[v] != INF) pq.erase(pq.find({dist[v], v})); // erase old pair
dist[v] = dist[u]+w; // relax operation
pq.insert({dist[v], v}); // enqueue better pair
}
}
*/
// (Modified) Dijkstra's algorithm
priority_queue<ii, vector<ii>, greater<ii>> pq; pq.push({0, s});
// sort the pairs by non-decreasing distance from s
while (!pq.empty()) { // main loop
auto [d, u] = pq.top(); pq.pop(); // shortest unvisited u
if (d > dist[u]) continue; // a very important check
for (auto &[v, w] : AL[u]) { // all edges from u
if (dist[u]+w >= dist[v]) continue; // not improving, skip
dist[v] = dist[u]+w; // relax operation
pq.push({dist[v], v}); // enqueue better pair
}
}
return 0;
}
Dynamic Programming
Dynamic Programming is an algorithmic paradigm that splits a complex problem into subproblems
and stores answers of these subproblems to avoid computing these results again. It is an optimization
over plain recursion, which reduces time complexity from exponential to polynomial. However, it
increases space complexity (not a huge concern in competitive programming).
Dynamic programming is used to efficiently solve problems with 2 main properties: overlapping subproblems and optimal substructure.
● Overlapping subproblems: the same subproblems are needed repeatedly, so dynamic programming stores and reuses their solutions.
● Optimal substructure: an optimal solution can be composed from optimal solutions to its subproblems.
Prefix Sum Array
2D Prefix Sum array
The general formula:
psa[i][j] = psa[i-1][j] + psa[i][j-1] - psa[i-1][j-1] + a[i][j]
// C++ Program to find prefix sum of 2d array
#include <bits/stdc++.h>
using namespace std;
#define R 4
#define C 5

// builds and prints the 2D prefix sum of a[][]
void prefixSum2D(int a[R][C])
{
int psa[R][C];
psa[0][0] = a[0][0];
for (int j = 1; j < C; j++) // first row
psa[0][j] = psa[0][j-1] + a[0][j];
for (int i = 1; i < R; i++) // first column
psa[i][0] = psa[i-1][0] + a[i][0];
for (int i = 1; i < R; i++) // the general formula
for (int j = 1; j < C; j++)
psa[i][j] = psa[i-1][j] + psa[i][j-1] - psa[i-1][j-1] + a[i][j];
for (int i = 0; i < R; i++) {
for (int j = 0; j < C; j++)
cout << psa[i][j] << " ";
cout << "\n";
}
}
// driver code
int main()
{
int a[R][C] = { { 1, 1, 1, 1, 1 },
{ 1, 1, 1, 1, 1 },
{ 1, 1, 1, 1, 1 },
{ 1, 1, 1, 1, 1 } };
prefixSum2D(a);
return 0;
}
Iterators, Recursion
Iterators act like pointers and are used to traverse elements in STL containers like vector, set,
map, etc.
Types of Iterators
1. Input Iterator → Read-only, single-pass (e.g., cin)
2. Output Iterator → Write-only, single-pass (e.g., cout)
3. Forward Iterator → Read/write, multiple passes (e.g., forward_list)
4. Bidirectional Iterator → Read/write, move forward & backward (e.g., list, map)
5. Random Access Iterator → Supports indexing (e.g., vector, deque)
#include <iostream>
#include <vector>
#include <algorithm>
int main() {
std::vector<int> vec = {10, 20, 30, 40, 50};
// traverse the vector with a random access iterator
for (std::vector<int>::iterator it = vec.begin(); it != vec.end(); ++it)
std::cout << *it << " ";
return 0;
}
Recursion is a method of solving problems by solving smaller instances of itself. Essentially, recursion
is a process where a function calls itself.
A recursive function solves problems by calling a copy of itself and solving smaller subproblems of the
original problems. Many more recursive calls can be generated if needed. Recursion performs the
same operations multiple times with different inputs. It makes the input smaller each time to make the
problem smaller and easier.
The base case of recursion is the stopping condition that halts the growth of recursive calls. This prevents infinite recursion.
Call Stack
When a recursive function is called, it goes on top of the "call stack". This "call stack" uses a Last In
First Out (LIFO) method like a regular stack. This means that functions called last will be executed
first.
In recursion, function calls are nested until the base case is reached. Without the base case, there
would be infinite nested function calls.
The depth of recursion is the maximum degree of nesting the function calls over the course of
computation. The programming environment has to maintain a pushdown stack of size proportional to
the depth of the recursion. For huge problems, the space needed for this stack might prevent the use
of plain recursion.
Recursion VS Iteration
● Recursion terminates when the base case becomes true; iteration terminates when the loop condition becomes false.
● Each recursive call needs additional memory in the stack memory; each iteration does not use extra memory.
Applications of recursion
Recursion is obviously very important and has widespread use. In competitive programming,
recursion is used for:
1. Divide and conquer
2. Dynamic programming
3. Graphs
4. Trees
Examples
1. Factorial function:
int factorial(int n) {
// base case
if(n==0) {
return 1;
}
return n*factorial(n-1);
}
Range Query
A query over a certain range in a set of elements. This includes range sum queries (calculating the
sum of values) and range minimum queries (finding the minimum value).
Range queries can be static, where the set of elements does not change between queries, or dynamic, where the set of elements can be updated.
Queries on static arrays
When the set of elements is static and not updated, preprocessing suffices to answer range queries efficiently.
Sum queries
A range sum query denotes the sum of array values in a specific range (e.g. 0-5).
Sum queries can be easily processed using a prefix sum array, where every value is equal to the sum
of values in the original array up to the corresponding position.
A prefix sum array can be constructed in O(n) time, using the formula: psa[i]=psa[i−1]+a[i] This allows
for calculation of any range sum query in O(1) time. For example, when the queried range is (a, b):
sum=psa[b]−psa[a−1]
We can even calculate a two-dimensional range sum using a 2D prefix sum array (a.k.a. prefix sum
matrix): psa[i][j]=psa[i−1][j]+psa[i][j−1]−psa[i−1][j−1]+a[i][j]
Minimum queries
A range minimum query in the form of l, r denotes the minimum element in an array between l and r
inclusive.
There are many ways to efficiently solve range minimum queries, some of which even work when the
array is dynamic (modifications to the array between queries).
Sparse Table
A Sparse Table is a data structure that answers most range queries in O(log n) time, and idempotent queries such as range minimum in O(1) time. However, this data structure can only be used on static/immutable arrays.
How it works:
Any non-negative number can be uniquely represented as a sum of decreasing powers of two e.g.
13=8+4+1. Similarly, any interval can be uniquely represented as a union of intervals with lengths that
are decreasing powers of 2.
The main idea behind Sparse Tables is to precompute all answers for range queries with power of two
length. Afterwards a different range query can be answered by splitting the range into ranges with
power of two lengths, then combining precomputed answers to receive a complete answer.
Implementation:
We create a table lookup such that lookup[i][j] contains the minimum of the range i to i+2^j−1.
Here is a C++ implementation:
#define MAX 5000 //arbitrary number
#define LOG 13 // 2^13 > MAX
int lookup[MAX][LOG + 1];

void buildSparseTable(int arr[], int n) {
for(int i=0;i<n;i++)
lookup[i][0] = arr[i]; // ranges of length 1
for(int j=1;(1<<j)<=n;j++) {
for(int i=0;i+(1 << j)-1<n;i++) {
lookup[i][j] = min(lookup[i][j-1], lookup[i+(1<<(j-1))][j-1]);
}
}
}
Time Complexity
Grading System: WA: wrong answer; RTE: run-time error; MLE: memory limit exceeded; TLE: time limit exceeded; AC: accepted
Big O rule of thumb
Big O notation is used as an abstraction of time complexity.
In general, the total number of basic operations (substitute the maximum N into the O() bound) should stay within roughly 10^8 per second of time limit. Rough guide by maximum input size N:
● 10^6: O(N)/O(log N)/O(1)
● 10^5: O(N log N)
● 10^3: O(N^2)
● 10^2: O(N^3)
● 10: O(N!)/O(2^N)
Fast IO for C++
std::ios_base::sync_with_stdio(false);
cin.tie(NULL);
Fibonacci
int fib = 1;
int prev = 0;
for(int i=0;i<n;i++) {
fib += prev;
prev = fib - prev;
}
// recursive!
int fibonacci(int n) {
// base case
if(n==1 || n==2) {
return 1;
}
else return fibonacci(n-1)+fibonacci(n-2);
}
The recursive version recomputes the same subproblems repeatedly, so store their results: Memoisation (Top-down)
vector<int> memo(n + 1, -1); // n+1 entries so memo[n] is a valid index
memo[1] = 1; memo[2] = 1;
int fibonacci(int n) {
if(memo[n] != -1) {
return memo[n];
}
memo[n] = fibonacci(n-1)+fibonacci(n-2);
return memo[n];
}
Bottom-up:
vector<int> memo(n + 1, -1); // n+1 entries so memo[n] is a valid index
memo[1] = 1; memo[2] = 1;
for(int i=3;i<=n;i++) {
memo[i] = (memo[i-2] + memo[i-1]);
}
More DP