BIG O Breakdown
What is Big O Notation?
Big O notation is a mathematical way to describe the time complexity or space complexity of an algorithm. It represents the worst-case growth rate of an algorithm as the input size n increases, helping us understand how scalable an algorithm is.
Key Characteristics of Big O:
- Focus on Growth: Big O focuses on how the number of operations grows with the size of the input (n).
  - Example: An algorithm that performs 2n + 5 operations is O(n), because as n grows, the constant 5 and the coefficient 2 become negligible.
- Worst-Case Analysis: It assumes the largest number of operations the algorithm might perform for an input of size n.
- Ignore Constants and Lower-Order Terms:
  - Constants don’t matter in Big O because they don’t scale with n.
  - Example: O(2n) = O(n), O(n + 10) = O(n).
- Only the Dominant Term Counts:
  - If an algorithm has multiple terms, such as O(n^2 + n), the term with the fastest growth rate dominates, so O(n^2 + n) = O(n^2).
Common Big O Complexities

| Complexity | Name | Description |
| --- | --- | --- |
| O(1) | Constant | Takes the same amount of time regardless of input size. |
| O(log n) | Logarithmic | Reduces the problem size by half at every step. |
| O(n) | Linear | Time grows directly proportional to the input size. |
| O(n log n) | Linearithmic | Common in efficient sorting algorithms like Merge Sort. |
| O(n^2) | Quadratic | Common in nested loops; grows rapidly with input size. |
| O(2^n) | Exponential | Doubles the number of operations for each increment in input size. |
| O(n!) | Factorial | Infeasible for even moderately large input sizes. |
How to Calculate Big O
To calculate Big O, analyze the algorithm step by step, focusing on loops, function calls, and recursive depth.
1. Loops
- A loop that runs n times is O(n).
- Nested loops multiply their complexities.
- Example: two nested loops that each run n times give
  Total = O(n) · O(n) = O(n^2).
2. Sequential Steps
- If an algorithm has multiple parts, add their complexities.
  - Example: Total = O(n) + O(m).
- If n = m, the total is O(n) + O(n) = O(2n) = O(n).
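A minimal Python sketch of sequential steps (the function and argument names are illustrative, not from the original):

```python
def sum_both(a, b):
    """Two sequential passes: O(n) over a, then O(m) over b -> O(n + m)."""
    total = 0
    for x in a:   # first pass: n iterations
        total += x
    for y in b:   # second pass: m iterations
        total += y
    return total
```

The two loops run one after the other, so their costs add rather than multiply.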
3. Function Calls
- If a function is called recursively, analyze how many times it is called and the work done in each call.
- Example (Binary Search):
- Binary search halves the array at each step, so its complexity is O(log n).
4. Recurrence Relations
- Recursive algorithms often have recurrence relations.
- Example (Merge Sort):
- Merge Sort divides the array into halves and merges them back.
- Relation: T(n) = 2T(n/2) + O(n) (two recursive calls plus the merge operation).
- Complexity: O(n log n).
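The recurrence above corresponds to an implementation along these lines (a sketch, not a tuned library sort):

```python
def merge_sort(arr):
    # Base case: lists of size 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # T(n/2)
    right = merge_sort(arr[mid:])   # T(n/2)
    # Merge step: O(n) work at each level of the recursion.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Each level of recursion does O(n) merge work, and there are O(log n) levels, giving O(n log n) overall.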
Practical Examples
Example 1: Single Loop
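The code for this example did not survive; a representative single loop in Python (the function name is illustrative):

```python
def sum_list(items):
    """One pass over n items -> O(n) time."""
    total = 0
    for x in items:   # runs n times
        total += x
    return total
```

- Total = O(n), since the loop body does constant work per iteration.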
Example 2: Nested Loop
- Total = O(n) · O(n) = O(n^2).
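A representative nested loop (the original code is missing; this sketch uses an illustrative function name):

```python
def all_pairs(items):
    """Nested loops over n items each -> O(n) * O(n) = O(n^2)."""
    pairs = []
    for x in items:       # outer loop: n iterations
        for y in items:   # inner loop: n iterations per outer iteration
            pairs.append((x, y))
    return pairs
```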
Example 3: Sorting and Traversal
- Total = O(n log n) + O(n) = O(n log n).
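One way this pattern might look in Python (illustrative function name; Python's built-in `sorted` is O(n log n)):

```python
def has_duplicate(items):
    """Sort (O(n log n)) then scan adjacent pairs (O(n)) -> O(n log n)."""
    ordered = sorted(items)                       # O(n log n)
    for prev, cur in zip(ordered, ordered[1:]):   # O(n) traversal
        if prev == cur:
            return True
    return False
```

The sort dominates, so the traversal's O(n) term is absorbed into O(n log n).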
Example 4: Binary Search
- Complexity: O(log n), because the array size is halved in each recursive call.
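An iterative binary search sketch in Python (the original example's code is missing; this assumes a sorted list):

```python
def binary_search(arr, target):
    """Halve the search range each step -> O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid          # found: return the index
        elif arr[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return -1                   # not found
```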
Tips for Calculating Big O
- Focus on Loops: Count how many times each loop runs.
- Break Down Steps: Analyze each segment of the algorithm separately.
- Remove Constants: Ignore constants and lower-order terms.
- Understand Recursive Depth: For recursive algorithms, determine how the input size shrinks with each call.
With practice, determining Big O becomes intuitive!