
📘 Data Structures & Algorithms Cheatsheet

Data Structures (Summary)

What are Data Structures?
A data structure is a way to store and organize data so it can be accessed and updated efficiently. Each type has its own strengths.

Basic Operations:
• Insertion – add a new item
• Deletion – remove an item
• Traversal – access each item once
• Searching – find an item’s location
• Sorting – arrange data in order
• Access – retrieve stored data

Linked Lists
Nodes holding a value plus a pointer to the next node, stored non-contiguously in memory.

Types:
• Singly – forward only
• Doubly – forward & backward

Good 😀: Fast insert/delete, ordered, flexible
Bad 😒: Slow lookup, more memory

Time Complexity:
• Prepend: O(1)
• Append: O(1) (with a tail pointer)
• Lookup: O(n)
• Insert/Delete: O(n)
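The linked-list operations above can be sketched in a few lines of Python. This is an illustrative toy, not production code; the tail pointer is what makes append O(1):

```python
# Minimal singly linked list sketch (illustrative only).
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def prepend(self, value):            # O(1): new node becomes the head
        node = Node(value)
        node.next = self.head
        self.head = node
        if self.tail is None:
            self.tail = node

    def append(self, value):             # O(1) thanks to the tail pointer
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def lookup(self, value):             # O(n): walk node by node
        current = self.head
        while current is not None:
            if current.value == value:
                return True
            current = current.next
        return False

lst = LinkedList()
lst.append(2)
lst.append(3)
lst.prepend(1)          # list is now 1 -> 2 -> 3
print(lst.lookup(3))    # True
print(lst.lookup(9))    # False
```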
Arrays
Collection of items stored in contiguous memory.

Good 😀: Fast lookups, fast push/pop, ordered
Bad 😒: Slow inserts, slow deletes, fixed size

Types:
• Static Array – fixed size
• Dynamic Array – flexible size

Time Complexity:
• Access: O(1)
• Search: O(n)
• Insert: O(n)
• Delete: O(n)
• Space: O(n)

Stacks & Queues
Linear data structures.

Stack (LIFO – Last In, First Out):
• Push: O(1)
• Pop: O(1)
• Peek: O(1)

Queue (FIFO – First In, First Out):
• Enqueue: O(1)
• Dequeue: O(1)
• Peek: O(1)

Good 😀: Fast operations, ordered
Bad 😒: Slow lookup
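In Python, the stack and queue operations above map directly onto built-ins: a list works as a stack, and `collections.deque` gives O(1) operations at both ends for a queue. A minimal sketch:

```python
# Stack and queue sketches using Python built-ins (illustrative).
from collections import deque

# Stack (LIFO): append/pop at the end of a list are amortized O(1)
stack = []
stack.append(1)          # push
stack.append(2)          # push
top = stack[-1]          # peek: O(1) -> 2
item = stack.pop()       # pop:  O(1) -> 2

# Queue (FIFO): deque supports O(1) enqueue and dequeue
queue = deque()
queue.append("a")        # enqueue
queue.append("b")        # enqueue
front = queue[0]         # peek:    O(1) -> "a"
first = queue.popleft()  # dequeue: O(1) -> "a"
```

Using a plain list as a queue would make dequeue O(n), since popping from the front shifts every remaining element.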
Trees
Non-linear, hierarchical structure.

Binary Tree: each node has up to 2 children.
• Full – every node has 0 or 2 children
• Perfect – all levels completely filled
• Complete – filled left to right

Binary Search Tree (BST): Left < Parent < Right
Avg: O(log n), Worst: O(n)

Balanced Trees: AVL, Red-Black – self-balancing BSTs
Binary Heap – used for priority queues

Good 😀: Ordered, flexible
Bad 😒: No O(1) operations

Hash Tables
Store data as key-value pairs using a hash function.

Collision Handling:
• Linear probing – use the next free slot
• Chaining – a linked list per slot
• Resizing – enlarge the table when too full

Good 😀: Fast lookups/inserts
Bad 😒: Unordered, slow key iteration

Time Complexity: Search/Insert/Delete: O(1) avg, O(n) worst
Space: O(n)

Trie (Prefix Tree)
Stores strings by shared prefixes.
Good 😀: Prefix queries
Bad 😒: Space-heavy
Time: O(length of word)
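The chaining strategy from the hash-table notes above can be sketched as a toy class: each bucket holds a list of (key, value) pairs, so colliding keys simply share a bucket. The fruit keys are made-up example data:

```python
# Toy hash table using chaining (one list of (key, value) pairs per slot).
class ChainedHashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _slot(self, key):
        # Map the key's hash onto a bucket index
        return hash(key) % len(self.buckets)

    def insert(self, key, value):        # O(1) average
        bucket = self.buckets[self._slot(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # update an existing key
                return
        bucket.append((key, value))        # collision? just extend the chain

    def search(self, key):               # O(1) avg, O(n) worst (one long chain)
        for k, v in self.buckets[self._slot(key)]:
            if k == key:
                return v
        return None

table = ChainedHashTable()
table.insert("apple", 3)
table.insert("pear", 5)
print(table.search("apple"))  # 3
print(table.search("kiwi"))   # None
```

Real implementations (like Python's `dict`) also resize when the table gets too full, which keeps chains short and preserves the O(1) average.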

Graphs
Collection of vertices connected by edges.

Types: Directed, undirected, weighted, cyclic/acyclic

Representations: Adjacency matrix, adjacency list, edge list

Good 😀: Shows relationships
Bad 😒: Hard to scale

Searching Algorithms
• Linear Search: O(n), works on unsorted data.
• Binary Search: O(log n), needs sorted data.
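Binary search is short enough to sketch in full: it keeps halving the candidate range, which is where the O(log n) comes from. Note the input must already be sorted:

```python
# Binary search sketch: requires sorted input, halves the range each step.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid             # found: return its index
        elif items[mid] < target:
            lo = mid + 1           # target is in the right half
        else:
            hi = mid - 1           # target is in the left half
    return -1                      # not found

data = [1, 3, 5, 7, 9, 11]
print(binary_search(data, 7))   # 3
print(binary_search(data, 4))   # -1
```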

Algorithms (General)
A step-by-step method to solve problems.
Measured by Time Complexity (speed) & Space Complexity (memory).

Recursion
A function that calls itself; needs a base case to stop.
Good 😀: Clean, readable
Bad 😒: Slower, more memory

Recursion vs Iteration:
• Recursion – shorter code, more memory
• Iteration – longer code, faster

Graph Traversal
BFS (Breadth-First Search): uses a queue
Time: O(V+E)
Good: finds shortest path (in unweighted graphs)
Bad: memory-heavy

DFS (Depth-First Search): uses a stack or recursion
Time: O(V+E)
Good: less memory
Bad: can be slow to find short paths

BFS vs DFS:
• BFS: shortest path, more memory
• DFS: explores deep paths first, less memory
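The BFS/DFS contrast above can be sketched over a small adjacency-list graph (the graph itself is a made-up example). The only structural difference is queue vs stack:

```python
# BFS and DFS sketches over an adjacency-list graph (hypothetical example).
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def bfs(start):                  # queue-based, O(V+E)
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()   # FIFO: visit level by level
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(start):                  # explicit-stack version, O(V+E)
    order, seen = [], set()
    stack = [start]
    while stack:
        node = stack.pop()       # LIFO: dive down one branch first
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        # push in reverse so neighbors are explored in listed order
        for neighbor in reversed(graph[node]):
            if neighbor not in seen:
                stack.append(neighbor)
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D']
print(dfs("A"))  # ['A', 'B', 'D', 'C']
```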
Sorting Algorithms
Arrange elements in order.

Why Important:
• Faster access & processing
• Better efficiency in systems

Why not just use sort():
• Different performance on different datasets
• May not keep equal elements in order (stability)
• Limited customization

Why many algorithms: Each has tradeoffs.

Common Sorting:
• Bubble Sort – simple, O(n²), slow
• Selection Sort – simple, O(n²)
• Insertion Sort – O(n²), best O(n), good for small/nearly sorted data
• Merge Sort – O(n log n), stable, needs extra memory
• Quick Sort – O(n log n) avg, O(n²) worst, very fast in practice

Choosing an Algorithm:
• Small/nearly sorted → Insertion
• Large → Quick/Merge
• Simple but slow → Bubble/Selection

Comparison vs Non-Comparison:
• Comparison sorts (Quick, Merge, etc.) – lower bound O(n log n)
• Non-Comparison sorts (Counting, Radix, Bucket) – can reach O(n)

💡 Fun Fact: Chrome’s V8 engine uses a hybrid of Quick Sort and Insertion Sort.

Advanced Pathfinding
• Dijkstra’s Algorithm: non-negative weights, O((V+E) log V)
• Bellman-Ford: handles negative weights, O(VE), slower
🔄 Relaxation: update the shortest-path estimate whenever a shorter one is found

Graph Types
• Sparse Graph: few edges relative to vertices (common in networks/maps, faster to process)

Dynamic Programming (DP)
Break a problem into subproblems, then store and reuse their solutions to avoid recomputation.
• Caching: store frequently used data.
• Memoization: store function results.
Difference: Caching = general technique, Memoization = specific to function calls.
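Memoization as described above can be shown with the classic Fibonacci example; Python's standard-library `functools.lru_cache` does the result-storing automatically:

```python
# Memoization sketch: cache results of a pure recursive function.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache this recursion is O(2^n);
    # with it, each n is computed only once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Only pure functions (same input, same output) are safe to memoize this way, which is exactly the caching-vs-memoization distinction above.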
