DSA Interview Cheatsheet
Key points from 16 DSA concepts in one scannable reference.
Foundations
Big O Notation → Measuring algorithm efficiency
- Describes how work grows with input size (usually quoted for the worst case), not exact time
- We drop constants: O(2n) → O(n)
- We keep only the dominant term: O(n² + n) → O(n²)
- Common complexities: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ)
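The gap between these classes is easy to see by counting comparisons. A minimal sketch (function names are illustrative) contrasting O(n) linear search with O(log n) binary search:

```javascript
// Linear search: in the worst case, checks every element — O(n).
function linearSearchSteps(sortedArr, target) {
  let steps = 0;
  for (const x of sortedArr) {
    steps++;
    if (x === target) break;
  }
  return steps;
}

// Binary search: halves the range each iteration — O(log n).
function binarySearchSteps(sortedArr, target) {
  let steps = 0, lo = 0, hi = sortedArr.length - 1;
  while (lo <= hi) {
    steps++;
    const mid = (lo + hi) >> 1;
    if (sortedArr[mid] === target) break;
    if (sortedArr[mid] < target) lo = mid + 1; else hi = mid - 1;
  }
  return steps;
}
```

On 1,024 sorted elements, the worst case is 1,024 steps for linear search but only about 11 for binary search.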
Binary System & Bit Manipulation → Base-2 representation and bitwise operations
- Binary is base-2: each bit is a power of two (1, 2, 4, 8, ...)
- A byte is 8 bits; unsigned 8-bit range is 0-255
- Signed integers typically use two's complement (highest bit is the sign)
- Bitwise ops (& | ^ ~) operate per-bit, not per-number
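A quick sketch of the per-bit behavior on two small values (binary literals make the bit patterns visible):

```javascript
const a = 0b1100; // 12
const b = 0b1010; // 10

const and = a & b; // 0b1000 = 8:  bits set in both
const or  = a | b; // 0b1110 = 14: bits set in either
const xor = a ^ b; // 0b0110 = 6:  bits set in exactly one

// Two common idioms: test bit i, set bit i.
const isSet  = (x, i) => ((x >> i) & 1) === 1;
const setBit = (x, i) => x | (1 << i);
```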
Data Structures
Arrays → Contiguous memory, indexed access
- Elements stored in contiguous (sequential) memory
- O(1) access by index - direct memory address calculation
- O(n) search in unsorted array - must check each element
- O(n) insert/delete in middle - must shift elements
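Both costs show up directly in JS array operations — `splice` must shift every later element, while index access is a direct offset:

```javascript
const arr = [10, 20, 40];
arr.splice(2, 0, 30);   // insert 30 at index 2: elements after it shift right — O(n)
const third = arr[2];   // O(1): index maps straight to a memory offset
// arr is now [10, 20, 30, 40]
```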
Hash Tables → O(1) key-value lookup
- Hash function converts key → array index
- O(1) average for get, set, delete
- Collisions: when two keys hash to same index
- Collision handling: chaining (linked lists) or open addressing
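The classic interview use is frequency counting with a `Map` — average O(1) per get/set (function name is illustrative):

```javascript
// Count how often each character appears in a string.
function charCounts(s) {
  const counts = new Map();
  for (const ch of s) {
    counts.set(ch, (counts.get(ch) ?? 0) + 1); // get + set: O(1) average each
  }
  return counts;
}
```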
Stacks → Last-In-First-Out (LIFO)
- LIFO: Last In, First Out
- Two main operations: push (add to top), pop (remove from top)
- peek: look at top without removing
- JS arrays work as stacks with push/pop
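All three operations in a few lines — `push`/`pop` are amortized O(1) at the end of a JS array:

```javascript
const stack = [];
stack.push(1);
stack.push(2);
stack.push(3);                        // stack: [1, 2, 3], top is 3
const top = stack[stack.length - 1];  // peek: 3 (nothing removed)
const popped = stack.pop();           // pop: returns 3 — last in, first out
```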
Queues → First-In-First-Out (FIFO)
- FIFO: First In, First Out
- enqueue: add to back, dequeue: remove from front
- JS arrays can work as queues but shift() is O(n)
- For performance, use a proper Queue class or linked list
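One common workaround sketch: keep a head index instead of calling `shift()`, so dequeue never moves elements (class name is illustrative):

```javascript
class Queue {
  constructor() { this.items = []; this.head = 0; }
  enqueue(x) { this.items.push(x); }            // O(1): add to back
  dequeue() { return this.items[this.head++]; } // O(1): advance head, no shifting
  get size() { return this.items.length - this.head; }
}
```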
Heaps & Priority Queues → Fast min/max access with efficient updates
- Parent index = floor((i - 1) / 2), children = 2i + 1 and 2i + 2
- Min-heap gives fast smallest-value retrieval (peek)
- Max-heap gives fast largest-value retrieval (peek)
- Insert and extract both rebalance along one root-to-leaf path
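An array-backed min-heap sketch using exactly those index formulas — insert bubbles up one path, extract sifts down one path, both O(log n):

```javascript
class MinHeap {
  constructor() { this.a = []; }
  push(x) {
    this.a.push(x);
    let i = this.a.length - 1;
    while (i > 0) {                       // bubble up along one root-to-leaf path
      const p = (i - 1) >> 1;             // parent = floor((i - 1) / 2)
      if (this.a[p] <= this.a[i]) break;
      [this.a[p], this.a[i]] = [this.a[i], this.a[p]];
      i = p;
    }
  }
  pop() {
    const top = this.a[0];
    const last = this.a.pop();
    if (this.a.length > 0) {
      this.a[0] = last;
      let i = 0;
      while (true) {                      // sift down along one path
        const l = 2 * i + 1, r = 2 * i + 2;
        let s = i;
        if (l < this.a.length && this.a[l] < this.a[s]) s = l;
        if (r < this.a.length && this.a[r] < this.a[s]) s = r;
        if (s === i) break;
        [this.a[s], this.a[i]] = [this.a[i], this.a[s]];
        i = s;
      }
    }
    return top;
  }
  peek() { return this.a[0]; }            // smallest value, O(1)
}
```

Flipping the two comparisons turns this into a max-heap.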
Trees → Hierarchical data and recursion
- A node can have multiple children (general tree) or up to two children (binary tree)
- Tree questions often map to recursion with clear base and recursive steps
- Traversal order matters: inorder, preorder, postorder, and level-order
- Most tree problems fall into one of: find/compare a path, measure a property, or transform the tree
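The "measure a property" pattern in its simplest form — a depth function with a clear base case and combine step (node shape `{ val, left, right }` assumed for illustration):

```javascript
function maxDepth(node) {
  if (node === null) return 0;  // base case: empty subtree has depth 0
  // recursive step + combine: 1 for this node plus the deeper child
  return 1 + Math.max(maxDepth(node.left), maxDepth(node.right));
}
```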
Trie (Prefix Tree) → Fast prefix lookup and word-based search
- Trie nodes store children by character plus a terminal flag
- Insert/search/startsWith operate in O(word length)
- Prefix sharing cuts a prefix query over n words of length m from O(n·m) string scans to a single O(m) walk
- Use tries when constraints ask for many prefix queries on words
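A minimal sketch with plain objects as nodes — children keyed by character, plus an `end` terminal flag:

```javascript
class Trie {
  constructor() { this.root = {}; }
  insert(word) {                          // O(word length)
    let node = this.root;
    for (const ch of word) node = node[ch] ??= {}; // create child if missing
    node.end = true;                      // terminal flag: a word ends here
  }
  search(word) {                          // exact word match
    const node = this.#walk(word);
    return node !== null && node.end === true;
  }
  startsWith(prefix) { return this.#walk(prefix) !== null; }
  #walk(s) {                              // follow characters; null if path breaks
    let node = this.root;
    for (const ch of s) {
      if (!(ch in node)) return null;
      node = node[ch];
    }
    return node;
  }
}
```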
Linked Lists → Nodes connected by pointers
- Each node has: data + pointer to next node
- No contiguous memory needed - nodes can be anywhere
- O(1) insert/delete if you have the node reference
- O(n) access - must traverse from head
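Both costs in one tiny sketch: prepending is O(1) (just repoint the head), while reading the list requires a full traversal (function names are illustrative):

```javascript
// O(1): new node points at the old head, no shifting needed.
function prepend(head, data) {
  return { data, next: head };
}

// O(n): must follow next pointers from the head.
function toArray(head) {
  const out = [];
  for (let node = head; node !== null; node = node.next) out.push(node.data);
  return out;
}

let head = null;
head = prepend(head, 3);
head = prepend(head, 2);
head = prepend(head, 1);   // list is now 1 -> 2 -> 3
```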
Algorithms
Graphs → Nodes, edges, traversal, and reachability
- Build the graph first: adjacency list is the most common interview-ready format
- BFS is your first tool for shortest path in unweighted graphs
- DFS is useful for connectivity, components, and state exploration
- Visited tracking is non-negotiable: it prevents infinite loops
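All four points in one BFS sketch over an adjacency list — the distance map doubles as the visited set, and a head index avoids O(n) `shift()`:

```javascript
function bfsDistances(adj, start) {
  const dist = new Map([[start, 0]]);  // visited + shortest edge count
  const queue = [start];
  let head = 0;                        // index head instead of shift()
  while (head < queue.length) {
    const node = queue[head++];
    for (const next of adj[node] ?? []) {
      if (!dist.has(next)) {           // visited check prevents infinite loops
        dist.set(next, dist.get(node) + 1);
        queue.push(next);
      }
    }
  }
  return dist;
}
```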
Recursion → Think smaller, solve, then combine
- Every recursive function needs a base case that stops further calls
- The recursive step must move toward the base case
- The combine step is where you build the final answer from recursive results
- Divide-and-conquer is recursion where subproblems are independent: split, solve each, then combine (e.g., merge sort)
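The three parts labeled on a deliberately tiny example:

```javascript
function sum(arr) {
  if (arr.length === 0) return 0;  // base case: stops further calls
  const [first, ...rest] = arr;    // recursive step: smaller input each call
  return first + sum(rest);        // combine: build the answer from the subresult
}
```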
Backtracking → Try options, prune, and backtrack cleanly
- Define the state needed for one recursive path
- Choose one decision, recurse, then revert to explore the next decision
- Prune early when constraints are already violated
- Return booleans for feasibility or collect candidate sets for enumeration
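The choose → recurse → revert loop, sketched on subset enumeration — the shared `path` is the state for the current recursive branch:

```javascript
function subsets(nums) {
  const result = [];
  const path = [];                  // state for the current recursive path
  function backtrack(start) {
    result.push([...path]);         // record a copy of the current candidate set
    for (let i = start; i < nums.length; i++) {
      path.push(nums[i]);           // choose one decision
      backtrack(i + 1);             // recurse
      path.pop();                   // revert, then try the next decision
    }
  }
  backtrack(0);
  return result;
}
```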
Sorting Algorithms → Arranging elements efficiently
- Comparison sorts cannot beat O(n log n) in the average or worst case
- Stable sort preserves relative order of equal elements
- In-place sorts use O(1) extra space (HeapSort; QuickSort is in-place but uses an O(log n) recursion stack)
- Merge Sort: O(n log n) guaranteed, stable, but O(n) space
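A Merge Sort sketch showing all three properties — guaranteed O(n log n) splitting, stability via the `<=` comparison, and the O(n) merge buffer:

```javascript
function mergeSort(arr) {
  if (arr.length <= 1) return arr;            // base case
  const mid = arr.length >> 1;
  const left = mergeSort(arr.slice(0, mid));  // divide
  const right = mergeSort(arr.slice(mid));
  const merged = [];                          // O(n) extra space
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // <= takes from the left half on ties, preserving original order (stable)
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```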
Dynamic Programming → Breaking problems into overlapping subproblems
- Two approaches: memoization (top-down, recursive + cache) and tabulation (bottom-up, iterative table)
- Overlapping subproblems: the same computation recurs many times in the recursive tree
- Optimal substructure: an optimal solution is built from optimal solutions to its subproblems
- Top-down starts from the original problem and caches results; bottom-up fills a table from base cases
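Both approaches side by side on Fibonacci, the standard warm-up (function names are illustrative):

```javascript
// Top-down: recurse from the original problem, cache each subresult.
function fibMemo(n, cache = new Map()) {
  if (n <= 1) return n;                   // base cases
  if (cache.has(n)) return cache.get(n);  // overlapping subproblem: reuse
  const result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, result);
  return result;
}

// Bottom-up: fill a table from the base cases forward.
function fibTab(n) {
  const table = [0, 1];
  for (let i = 2; i <= n; i++) table[i] = table[i - 1] + table[i - 2];
  return table[n];
}
```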
Patterns
Sliding Window → Efficiently processing contiguous subarrays and substrings
- Fixed-size window: window size is given (e.g., max sum of k consecutive elements); slide by adding the new element and removing the oldest
- Variable-size window: window grows until a condition breaks, then shrinks from the left until the condition is restored
- Shrinking condition: clearly define when the window becomes invalid (e.g., sum exceeds target, duplicate found, too many distinct chars)
- Closely related to two pointers: the left and right boundaries of the window act as two pointers moving in the same direction
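A variable-size window sketch (assumes non-negative numbers, so shrinking always restores validity): longest subarray whose sum stays at or below a target.

```javascript
function longestSubarrayWithSumAtMost(nums, target) {
  let left = 0, sum = 0, best = 0;
  for (let right = 0; right < nums.length; right++) {
    sum += nums[right];                        // grow: add the new element
    while (sum > target) sum -= nums[left++];  // invalid: shrink from the left
    best = Math.max(best, right - left + 1);   // window is valid here
  }
  return best;
}
```

`left` and `right` are the two pointers; each moves forward at most n times, so the whole scan is O(n).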