Big O notation is a way of measuring the complexity of an algorithm, that is, a set of instructions used to perform a task. It expresses how an algorithm's running time grows as the size of the input grows.

The “O” in Big O notation stands for “order of.” It is used to describe the upper bound of an algorithm’s running time. For example, an algorithm with a running time of O(n) means that the running time grows at most linearly with the size of the input (n). An algorithm with a running time of O(n^2) means that the running time grows at most quadratically with the size of the input.

O(n) Example

This example calculates the sum of the elements in a list:

def sum_list(lst):
    total = 0  # named "total" to avoid shadowing the built-in sum()
    for x in lst:
        total += x
    return total

The running time of this function is O(n), where n is the length of the list. The for loop performs one addition for each element in the list, so the number of operations is directly proportional to the size of the input, and the Big O is O(n).

O(n^2) Example

This example adds x + y for every pair of elements in a list, using a nested loop:

def sum_pairs(lst):
    total = 0
    for x in lst:
        for y in lst:
            total += x + y
    return total

It's a contrived example and it is unlikely that anyone would write code like this, but it demonstrates how the running time of an algorithm can grow quadratically with the size of the input. The running time of this function is O(n^2), where n is the length of the list. This is because the outer loop runs n times and, for each of those iterations, the inner loop runs n times, so the addition is performed n × n = n^2 times. The running time is directly proportional to the square of the size of the input, so the Big O is O(n^2).

O(2^n) Example

A classic example of exponential running time is the naive recursive computation of Fibonacci numbers:

def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)

The running time of this function is O(2^n), where n is the input number. This is because every call with n > 1 makes two further recursive calls, so the number of calls roughly doubles each time n increases by one. Note that recursion alone does not imply exponential time: a function that sums a list by making a single recursive call per element performs only a linear number of calls. It is the branching into two recursive calls per step that makes the running time proportional to 2 raised to the power of the input size, so the Big O is O(2^n).
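The exponential blow-up can be made concrete by counting calls. The following sketch (the helper name `fib_calls` is illustrative, not a standard function) instruments a naive doubly-recursive Fibonacci to report how many calls it makes:

```python
def fib_calls(n):
    """Return (fib(n), number of calls made), counting this call."""
    calls = 1
    if n <= 1:
        return n, calls
    a, ca = fib_calls(n - 1)  # first recursive branch
    b, cb = fib_calls(n - 2)  # second recursive branch
    return a + b, calls + ca + cb

for n in (5, 10, 15, 20):
    value, calls = fib_calls(n)
    print(f"fib({n}) = {value}, calls = {calls}")
```

For n = 5 the function makes 15 calls; for n = 10 it makes 177; each step up in n multiplies the call count by roughly the golden ratio (about 1.6), which is why the growth is bounded by O(2^n).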

Why use Big O notation?

Big O notation is useful because it allows us to compare the efficiency of different algorithms. It gives us a way to quantify how long an algorithm takes to run, which is important when we need to choose the most efficient algorithm for a particular task. For example, if we have two algorithms that both solve a problem, but one has a running time of O(n^2) and the other has a running time of O(n), we would choose the latter because it is more efficient.

Big O notation is also useful because it allows us to analyze the scalability of an algorithm. If we have an algorithm that has a running time of O(n^2), we know that it will take longer to run as the size of the input increases. This is important to consider when we need to run an algorithm on a large dataset.

Overall, Big O notation is a useful tool in computer science because it allows us to measure and compare the efficiency of different algorithms, and it helps us to understand how well an algorithm will scale as the size of the input increases.
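The comparison between an O(n) and an O(n^2) algorithm can be made concrete by counting operations instead of measuring wall-clock time. This sketch (the function names are illustrative) counts the additions performed by a single loop versus a nested loop over the same input:

```python
def count_linear_ops(lst):
    """Additions performed by a single-loop pass: one per element."""
    ops = 0
    for _ in lst:
        ops += 1
    return ops

def count_quadratic_ops(lst):
    """Additions performed by a nested-loop pass: one per pair."""
    ops = 0
    for _ in lst:
        for _ in lst:
            ops += 1
    return ops

data = list(range(1000))
print(count_linear_ops(data))     # 1000
print(count_quadratic_ops(data))  # 1000000
```

For 1,000 elements the quadratic version does 1,000 times more work; for 1,000,000 elements the gap becomes a factor of a million, which is why the choice of algorithm matters more and more as inputs grow.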

Steps to derive the Big O of an algorithm

  1. Identify the operation(s) that dominate the running time of the algorithm. These are the operations that are performed the most number of times, and have the greatest impact on the running time.

  2. Determine the number of times each operation is performed. This will depend on the size of the input and the specific implementation of the algorithm.

  3. Use algebraic manipulations to express the running time as a function of the size of the input. For example, if the algorithm performs an operation n times for an input of size n, the running time would be O(n). If it performs an operation n^2 times for an input of size n, the running time would be O(n^2).

  4. Drop any lower-order terms and constants. For example, if the running time of an algorithm is 5n^2 + 3n + 2, the Big O would be O(n^2). The lower-order terms and constants are dropped because they become insignificant as the size of the input grows.

  5. Simplify the final result. If the running time can be expressed as a polynomial of degree k (e.g. O(n^k)), the Big O is O(n^k). If it can be expressed as a logarithmic function (e.g. O(log n)), the Big O is O(log n). If it can be expressed as a combination of polynomial and logarithmic terms (e.g. O(n^k log n)), the Big O is O(n^k log n).

Note that this process is an approximation, and the actual running time of an algorithm may not strictly follow the derived Big O. However, it is a useful way to get a rough estimate of the running time, and it is a widely-used method for analyzing the efficiency of algorithms.
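As a worked example of these steps, consider the following hypothetical function, which runs a nested loop followed by a single loop. The dominant operation (the addition in the inner loop) runs n^2 times, the second loop adds n more operations, and dropping the lower-order term gives O(n^2):

```python
def example(lst):
    """Hypothetical function used to illustrate the derivation steps."""
    pair_sums = 0
    for x in lst:          # outer loop: n iterations
        for y in lst:      # inner loop: n iterations each -> n^2 additions
            pair_sums += x + y
    total = 0
    for x in lst:          # plus n more additions
        total += x
    # Total operations ~ n^2 + n; drop the lower-order term: O(n^2).
    return pair_sums, total
```

Applying the steps: the inner addition dominates (step 1), it runs n^2 + n times (step 2), the running time is a function of n (step 3), and dropping the n term and constants leaves O(n^2) (step 4).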

Common Big O complexities

The following table lists some common Big O complexities, along with their running times for different input sizes.

Complexity    Description    Running time
O(1)          Constant       1
O(log n)      Logarithmic    log n
O(n)          Linear         n
O(n log n)    Log-linear     n log n
O(n^2)        Quadratic      n^2
O(n^3)        Cubic          n^3
O(2^n)        Exponential    2^n
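The running-time column can be computed directly for any input size. This sketch evaluates each complexity class for a given n (using base-2 logarithms, a common convention in algorithm analysis):

```python
import math

def growth(n):
    """Return the running-time value of each complexity class for input size n."""
    return {
        "O(1)": 1,
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
        "O(n^3)": n ** 3,
        "O(2^n)": 2 ** n,
    }

for name, value in growth(16).items():
    print(f"{name:12} {value:,.0f}")
```

Even at n = 16, the spread is dramatic: the logarithmic row is 4, the quadratic row is 256, and the exponential row is already 65,536.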
