BIG O Notation and Calculation

What is Big O Notation?

Big O notation is a mathematical way to describe the time complexity or space complexity of an algorithm. It represents the worst-case growth rate of an algorithm as the input size n increases, helping us understand how it scales. It considers two things:

  • Runtime
  • Space 

1. Basic Idea

  • What It Measures:
    Big O notation focuses on the worst-case scenario of an algorithm’s performance. It tells you how the running time (or space) increases as the size of the input grows.

  • Ignoring Constants:
    It abstracts away constants and less significant terms. For instance, if an algorithm takes 3n + 5 steps, it’s considered O(n) because, for large n, the constant 5 and multiplier 3 become negligible compared to n.
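As a concrete sketch (the function and its explicit step counting are made up for illustration), here is what "3n + 5 steps" can look like in code:

```python
def count_steps(lst):
    """Illustrative function that performs roughly 3n + 5 'steps' for n items."""
    steps = 5                   # constant setup work
    for _ in range(3):          # three passes over the data
        for _ in lst:           # n steps per pass
            steps += 1
    return steps                # 3n + 5 total, but still O(n)

# Doubling the input roughly doubles the work, the hallmark of O(n):
print(count_steps(list(range(10))))   # 35
print(count_steps(list(range(20))))   # 65
```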


2. Key Characteristics of Big O

  • Focus on Growth:
    Big O focuses on how the number of operations grows with the size of the input (n).

    • Example: An algorithm that performs 2n + 5 operations is O(n) because as n grows, the constant 5 and coefficient 2 become negligible.

  • Worst-Case Analysis:
    It assumes the largest number of operations the algorithm might perform for the input size n.

  • Ignore Constants and Lower-Order Terms:
    Constants don’t matter in Big O because they don’t scale with n.

    • Example: O(2n) = O(n), O(n + 10) = O(n).

  • Only the Dominant Term Counts:
    If an algorithm has multiple terms like O(n² + n), the term with the fastest growth rate dominates. So, O(n² + n) = O(n²).
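A quick sketch of the dominant-term rule (the function name and operation counter are illustrative): an algorithm with one linear pass followed by a pairwise pass performs n² + n operations, and for large n the n² term accounts for nearly all of them:

```python
def quadratic_plus_linear(lst):
    """Illustrative: one O(n) pass followed by an O(n^2) pairwise pass."""
    ops = 0
    for _ in lst:               # O(n) part
        ops += 1
    for _ in lst:               # O(n^2) part
        for _ in lst:
            ops += 1
    return ops                  # n + n^2 operations overall

# For n = 1000, the n^2 term contributes 1,000,000 of the 1,001,000
# operations, which is why O(n^2 + n) simplifies to O(n^2).
print(quadratic_plus_linear(list(range(1000))))  # 1001000
```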

  • Common Big O Complexities

    Complexity   Name           Description
    O(1)         Constant       Takes the same amount of time regardless of input size (e.g., accessing a list element by its index).
    O(log n)     Logarithmic    Reduces the problem size by half at every step (e.g., binary search in a sorted list).
    O(n)         Linear         Time grows directly proportional to the input size (e.g., scanning an array for a specific value).
    O(n log n)   Linearithmic   Common in efficient sorting algorithms like merge sort or heapsort.
    O(n²)        Quadratic      Common in nested loops that compare all pairs of elements.
    O(2^n)       Exponential    Doubles the number of operations for each increment in n.
    O(n!)        Factorial      Infeasible for even moderately large input sizes.
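To see the "halving" behaviour behind O(log n) concretely, here is a small sketch (the helper name is made up) that counts how many times n can be halved before it reaches 1:

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1 (about log2(n))."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

# Doubling the input adds only one extra step:
print(halving_steps(1024))  # 10
print(halving_steps(2048))  # 11
```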

    3. Understanding Through Examples

    Constant Time – O(1)

    def get_first_element(lst):
        return lst[0]  

    Regardless of list size, only one operation is performed.

    Linear Time – O(n)

    def find_max(lst):
        max_val = lst[0]
        for num in lst:
            if num > max_val:
                max_val = num
        return max_val

    In the find_max function, every element is inspected once, so the time grows linearly with the list size.

    Quadratic Time – O(n²)

    def print_all_pairs(lst):
        for i in range(len(lst)):
            for j in range(len(lst)):
                print(lst[i], lst[j])

    Here, for each element, you’re iterating over the entire list, leading to a quadratic number of comparisons.

    4. Why Big O Matters

    • Performance Insight:
      Understanding Big O helps you predict how your solution will scale.

      • An O(n) solution will generally perform better than an O(n²) solution as the input size increases.

    • Algorithm Choice:
      It allows you to compare different algorithms and select the most efficient one for your needs. 

      • Discussing Big O can show your understanding of algorithm efficiency.
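As an illustration of why this matters in practice (the function names are made up), here are two ways to check a list for duplicates. Both return the same answer, but one scales quadratically and the other linearly:

```python
def has_duplicates_quadratic(lst):
    # O(n^2): compare every pair of elements
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] == lst[j]:
                return True
    return False

def has_duplicates_linear(lst):
    # O(n): one pass using a set (at the cost of O(n) extra space)
    seen = set()
    for x in lst:
        if x in seen:
            return True
        seen.add(x)
    return False

data = [3, 1, 4, 1, 5]
print(has_duplicates_quadratic(data), has_duplicates_linear(data))  # True True
```

For a list of a million items, the quadratic version does on the order of 10¹² comparisons while the linear one does about 10⁶ set lookups.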

    5. Space Complexity

    • Just as you can analyze time complexity, Big O also helps you understand how much memory an algorithm requires relative to the input size.
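A small sketch of the distinction (the function names are illustrative): both functions below run in O(n) time, but one uses O(1) extra memory and the other allocates O(n):

```python
def sum_in_place(lst):
    # O(1) extra space: a single accumulator, regardless of input size
    total = 0
    for x in lst:
        total += x
    return total

def running_sums(lst):
    # O(n) extra space: builds a new list as large as the input
    sums = []
    total = 0
    for x in lst:
        total += x
        sums.append(total)
    return sums

print(sum_in_place([1, 2, 3, 4]))   # 10
print(running_sums([1, 2, 3, 4]))   # [1, 3, 6, 10]
```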

    6. Visualizing Growth Rates

    Imagine a graph where the x-axis is the input size and the y-axis is the time taken:

    • O(1) is a flat line.

    • O(n) is a straight, upward-sloping line.

    • O(n²) starts off similar for small inputs but grows much faster as n increases.

    • O(2^n) and O(n!) climb almost vertically, becoming infeasible for even moderately large input sizes.
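You can get a feel for these curves without plotting anything by tabulating the values (a rough sketch; real runtimes also involve constant factors):

```python
import math

# Show how each complexity class grows as n increases;
# the gap between the columns widens dramatically.
for n in [16, 256, 4096]:
    print(f"n={n:>5}  log n={int(math.log2(n)):>3}  n={n:>5}  n^2={n * n:>10}")
```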

    7. Combining Big O Complexities

    When you have different parts of an algorithm with their own complexities, combining them depends on whether they run sequentially or are nested within each other.

    1. Sequential Operations

    If you perform one operation after the other, you add their complexities. For example, if one part of your code is O(n) and another is O(n²), then the total time is:

    • O(n) + O(n²) = O(n²)

    This is because, as n grows, the O(n²) term dominates and the lower-order O(n) becomes negligible.

    def sequential_example(nums):
        # First part: O(n)
        for num in nums:
            print(num)
    
        # Second part: O(n²)
        for i in range(len(nums)):
            for j in range(len(nums)):
                print(nums[i], nums[j])

    Here, the overall time complexity is O(n) + O(n²), which simplifies to O(n²).

    2. Nested Operations

    If one operation is inside another (nested loops), you multiply their complexities. For example, if you have an outer loop that runs O(n) times and an inner loop that runs O(n) times for each iteration, then the total time is:

    • O(n) · O(n) = O(n²)

      def nested_example(nums):
          for i in range(len(nums)):          # O(n)
              for j in range(len(nums)):      # O(n) for each i
                  print(nums[i], nums[j])     # O(1)

    This nested structure gives you O(n²) overall.

    3. Combining Different Parts

    In real problems, your code might have a mix of sequential and nested parts. The key idea is:

    • Add the complexities for sequential parts.

      • Example:
        for i in range(n):    # O(n)
            print(i)          # O(1)
        for j in range(m):    # O(m)
            print(j)          # O(1)

        Total = O(n) + O(m). If n = m, the total is O(n) + O(n) = O(2n) = O(n).

    • Multiply the complexities for nested parts.

    • Drop the lower-order terms and constant factors.


    Quick Rule: When adding complexities, the dominating term (the one that grows the fastest) determines the overall Big O.

    4. Function Calls

    • If a function is called recursively, analyze how many times it is called and the work done in each call.
      • Example (Binary Search):
        def binary_search(arr, target, left, right):
            if left > right:
                return -1
            mid = (left + right) // 2
            if arr[mid] == target:
                return mid
            elif arr[mid] < target:
                return binary_search(arr, target, mid + 1, right)
            else:
                return binary_search(arr, target, left, mid - 1)
        
        • Binary search splits the array into halves, so its complexity is O(log n).

    5. Recurrence Relations

    • Recursive algorithms often have recurrence relations.
      • Example (Merge Sort):
        • Merge Sort divides the array into halves and merges them back.
        • Relation: T(n) = 2T(n/2) + O(n) (two recursive calls + merge operation).
        • Complexity: O(n log n).
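A minimal merge sort sketch makes the recurrence visible: two recursive calls on the halves, plus an O(n) merge:

```python
def merge_sort(arr):
    if len(arr) <= 1:               # base case: T(1) = O(1)
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # T(n/2)
    right = merge_sort(arr[mid:])   # T(n/2)
    # O(n) merge of the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Each level of recursion does O(n) merge work, and there are O(log n) levels, giving O(n log n) overall.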

    Practical Examples

    Example 1: Single Loop

    for i in range(n):
        print(i)  # O(1)
    
    • Total = O(n).

    Example 2: Nested Loop

    for i in range(n):
        for j in range(n):
            print(i, j)  # O(1)
    
    • Total = O(n) · O(n) = O(n²).

    Example 3: Sorting and Traversal

    arr.sort()  # O(n log n)
    for i in arr:
        print(i)  # O(n)
    
    • Total = O(n log n) + O(n) = O(n log n).

    Example 4: Binary Search

    def binary_search(arr, target, left, right):
        if left > right:
            return -1
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            return binary_search(arr, target, mid + 1, right)
        else:
            return binary_search(arr, target, left, mid - 1)
    
    • Complexity: O(log n), because the array size is halved in each recursive call.

    Tips for Calculating Big O

    1. Focus on Loops: Count how many times each loop runs.
    2. Break Down Steps: Analyze each segment of the algorithm separately.
    3. Remove Constants: Ignore constants and lower-order terms.
    4. Understand Recursive Depth: For recursive algorithms, determine how the input size shrinks with each call.
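For tip 4, one way to check recursive depth empirically is to count the calls. Here is a hypothetical instrumented variant of the binary search shown earlier:

```python
def binary_search_counted(arr, target, left, right, calls=1):
    """Binary search that also reports how many recursive calls it made."""
    if left > right:
        return -1, calls
    mid = (left + right) // 2
    if arr[mid] == target:
        return mid, calls
    elif arr[mid] < target:
        return binary_search_counted(arr, target, mid + 1, right, calls + 1)
    else:
        return binary_search_counted(arr, target, left, mid - 1, calls + 1)

arr = list(range(1024))
index, calls = binary_search_counted(arr, 1023, 0, len(arr) - 1)
print(index, calls)  # the call count stays close to log2(1024) = 10
```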

    With practice, determining Big O becomes intuitive!