Big O Notation and Calculation
What is Big O Notation?
Big O notation is a mathematical way to describe the time complexity or space complexity of an algorithm. It represents how an algorithm's worst-case growth rate increases as the input size n increases. Two things it considers:
- Runtime
- Space
1. Basic Idea
- What It Measures: Big O notation focuses on the worst-case scenario of an algorithm's performance. It tells you how the running time (or space) scales as the size of the input grows.
- Ignoring Constants: It abstracts away constants and less significant terms. For instance, if an algorithm takes 3n + 5 steps, it is considered O(n) because, for large n, the constant 5 and multiplier 3 become negligible compared to n.
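To make the 3n + 5 idea concrete, here is a minimal sketch (the function name `three_passes` is ours, purely for illustration): it walks the list three times and does a handful of constant-cost steps, yet it is still O(n), because doubling the input roughly doubles the work.

```python
def three_passes(nums):
    """Roughly 3n + 5 'steps': three passes over the list plus constant work."""
    total = 0
    for x in nums:        # pass 1: n steps
        total += x
    count = 0
    for x in nums:        # pass 2: n steps
        count += 1
    biggest = None
    for x in nums:        # pass 3: n steps
        if biggest is None or x > biggest:
            biggest = x
    return total, count, biggest  # constant extra work
```

The constants (three passes instead of one) disappear in the Big O classification, which is exactly the simplification described above.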
2. Key Characteristics of Big O
- Focus on Growth: Big O focuses on how the number of operations grows with the size of the input (n). Example: an algorithm that performs 2n + 5 operations is O(n) because, as n grows, the constant 5 and coefficient 2 become negligible.
- Worst-Case Analysis: It assumes the largest number of operations the algorithm might perform for the input size n.
- Ignore Constants and Lower-Order Terms: Constants don't matter in Big O because they don't scale with n. Example: O(2n) = O(n), O(n + 10) = O(n).
- Only the Dominant Term Counts: If an algorithm has multiple terms, like O(n² + n), the term with the fastest growth rate dominates. So, O(n² + n) = O(n²).

Common Time Complexities
- O(1) – Constant Time: The runtime does not change with the size of the input. Example: accessing an element in a list by its index.
- O(n) – Linear Time: The runtime increases linearly with the input size. Example: iterating through all elements of an array to find a specific value.
- O(n²) – Quadratic Time: The runtime increases quadratically as the input size increases. Example: using two nested loops to compare all pairs in an array.
- O(log n) – Logarithmic Time: The runtime grows slowly because the input is cut down (typically halved) at each step. Example: binary search in a sorted list.

3. Understanding Through Examples
- Constant Time – O(1): Regardless of list size, only one operation is performed.
- Linear Time – O(n): The number of operations grows in direct proportion to the number of elements.
- Quadratic Time – O(n²): Here, for each element, the algorithm does another full pass over the input.

4. Why Big O Matters
Big O lets you compare algorithms independently of hardware and language, and predict how they will behave as the input grows large.

5. Space Complexity
Big O also describes memory usage: it expresses how the extra space an algorithm needs grows with the input size.
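The time and space categories above can be sketched in Python (all function names here are hypothetical, chosen for illustration):

```python
def get_first(nums):
    # O(1) time: one index access, regardless of list size
    return nums[0]

def linear_search(nums, target):
    # O(n) time: may inspect every element once
    for i, x in enumerate(nums):
        if x == target:
            return i
    return -1

def count_pairs(nums):
    # O(n^2) time: for each element, another pass over the rest of the input
    pairs = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            pairs += 1
    return pairs

def total_in_place(nums):
    # O(1) extra space: a single accumulator, no matter how big the input
    total = 0
    for x in nums:
        total += x
    return total

def squares_copy(nums):
    # O(n) extra space: builds a new list as large as the input
    return [x * x for x in nums]
```

Note that time and space complexity are independent: `total_in_place` takes O(n) time but only O(1) extra space, while `squares_copy` takes O(n) in both.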
6. How to Calculate Big O
To calculate Big O, analyze the algorithm step by step, focusing on loops, function calls, and recursive depth.
1. Loops
A single loop that runs n times is O(n); nested loops multiply, and the cost of the work inside each iteration multiplies the loop count.

Visualizing Growth Rates
Imagine a graph where the x-axis is the input size and the y-axis is the time taken:
- O(1) is a flat line.
- O(n) is a straight, upward-sloping line.
- O(n²) starts off similar for small inputs but grows much faster as n increases.
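One way to see these growth rates without drawing a graph is to count operations directly. This small sketch (function names are ours) tallies the work done by a linear and a quadratic loop for increasing n:

```python
def count_linear_ops(n):
    # O(n): one unit of work per input element
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_quadratic_ops(n):
    # O(n^2): n units of work for each of the n elements
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

for n in (10, 100, 1000):
    print(n, count_linear_ops(n), count_quadratic_ops(n))
```

Going from n = 10 to n = 1000 multiplies the linear count by 100 but the quadratic count by 10,000, which is the divergence the graph shows.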
7. Combining Big O
When you have different parts of an algorithm with their own complexities, combining them depends on whether they run sequentially or are nested within each other.
1. Sequential Operations
If you perform one operation after the other, you add their complexities. For example, if one part of your code is O(n) and another is O(n²), then the total time is:
O(n) + O(n²) = O(n²)
This is because, as n grows, the O(n²) term dominates and the lower-order O(n) becomes negligible.
def sequential_example(nums):
    # First part: O(n)
    for num in nums:
        print(num)
    # Second part: O(n²)
    for i in range(len(nums)):
        for j in range(len(nums)):
            print(nums[i], nums[j])
Here, the overall time complexity is O(n) + O(n²), which simplifies to O(n²).
2. Nested Operations
If one operation is inside another (nested loops), you multiply their complexities. For example, if an outer loop runs O(n) times and an inner loop runs O(n) times for each outer iteration, the total time is O(n) · O(n) = O(n²).
def nested_example(nums):
    for i in range(len(nums)):        # O(n)
        for j in range(len(nums)):    # O(n) for each i
            print(nums[i], nums[j])   # O(1)
Here, the overall time complexity is O(n) · O(n) = O(n²).
3. Combining Different Parts
If an algorithm has multiple parts, add the complexities of sequential parts, multiply the complexities of nested parts, and then drop the lower-order terms and constant factors.
Example:
for i in range(n):    # O(n)
    print(i)          # O(1)
for j in range(m):    # O(m)
    print(j)          # O(1)
Total = O(n) + O(m). If n = m, the total is O(n) + O(n) = O(2n) = O(n).
Quick rule: when combining, the dominating term (the fastest-growing one) is what determines the overall Big O.
4. Function Calls
If a function is called recursively, analyze how many times it is called and the work done in each call.
Example (Binary Search):
def binary_search(arr, target, left, right):
    if left > right:
        return -1
    mid = (left + right) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return binary_search(arr, target, mid + 1, right)
    else:
        return binary_search(arr, target, left, mid - 1)
Binary search splits the array into halves, so its complexity is O(log n).
5. Recurrence Relations
Recursive algorithms often have recurrence relations.
Example (Merge Sort): Merge Sort divides the array into halves and merges them back.
Relation: T(n) = 2T(n/2) + O(n) (two recursive calls + a linear merge).
Complexity: O(n log n).
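To see how the T(n) = 2T(n/2) + O(n) relation shows up in code, here is a minimal Merge Sort sketch (a standard textbook formulation): the two recursive calls are the 2T(n/2) part, and the merge loop is the O(n) part.

```python
def merge_sort(arr):
    # Base case: lists of length 0 or 1 are already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # T(n/2)
    right = merge_sort(arr[mid:])   # T(n/2)
    # Merge step: O(n) to interleave the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The recursion depth is log n (the array is halved each level) and each level does O(n) merge work, giving O(n log n) in total.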
Practical Examples
Example 1: Single Loop
for i in range(n):
    print(i)  # O(1)
Total = O(n).
Example 2: Nested Loop
for i in range(n):
    for j in range(n):
        print(i, j)  # O(1)
Total = O(n) · O(n) = O(n²).
Example 3: Sorting and Traversal
arr.sort()    # O(n log n)
for i in arr:
    print(i)  # O(n) across the whole loop
Total = O(n log n) + O(n) = O(n log n).
Example 4: Binary Search
def binary_search(arr, target, left, right):
    if left > right:
        return -1
    mid = (left + right) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return binary_search(arr, target, mid + 1, right)
    else:
        return binary_search(arr, target, left, mid - 1)
Complexity: O(log n), because the array size is halved in each recursive call.
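For comparison, the same O(log n) search can be written iteratively. This variant (a common alternative formulation, not part of the example above) keeps the time complexity but uses O(1) extra space, since there is no recursion stack:

```python
def binary_search_iter(arr, target):
    # O(log n) time, O(1) extra space: the search range halves each pass
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
```

The recursive version, by contrast, consumes O(log n) stack space, one frame per halving.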
Tips for Calculating Big O
- Focus on Loops: Count how many times each loop runs.
- Break Down Steps: Analyze each segment of the algorithm separately.
- Remove Constants: Ignore constants and lower-order terms.
- Understand Recursive Depth: For recursive algorithms, determine how the input size shrinks with each call.
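Putting the tips together, here is a small worked example (the function `analyze_me` is invented for illustration), with each step annotated with its cost:

```python
def analyze_me(nums):
    ordered = sorted(nums)          # step 1: O(n log n)
    total = sum(ordered)            # step 2: O(n)
    pairs = 0
    for i in range(len(ordered)):   # step 3: nested loops -> O(n^2)
        for j in range(len(ordered)):
            pairs += 1
    return total, pairs
# O(n log n) + O(n) + O(n^2) = O(n^2)
```

The dominant term is the nested loop, so the overall complexity is O(n²); the sort and the sum are lower-order and get dropped.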
With practice, determining Big O becomes intuitive!