Big O Notation

Big O notation is used in computer science to describe the performance or complexity of an algorithm. It provides an upper bound on the growth rate of the algorithm’s runtime or resource usage as the input size increases.

In Big O notation, the letter “O” represents the order of growth, and it is followed by a mathematical expression that describes the algorithm’s behavior. The expression typically includes the input size, represented by “n.”

Here are some commonly used notations in Big O:

O(1) – Constant time:

The algorithm’s runtime or resource usage remains constant, regardless of the input size. It is considered the most efficient complexity.
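
For instance, indexing into a Python list touches a single location no matter how long the list is. A tiny sketch (the function name is just illustrative):

    def get_first_item(items):
        # Index access touches a single memory location,
        # so the cost does not depend on len(items).
        return items[0]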

O(log n) – Logarithmic time:

The algorithm’s runtime or resource usage increases logarithmically with the input size. Algorithms with this complexity typically divide the problem into smaller subproblems.
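
A classic example is binary search on a sorted list, sketched below; each step halves the remaining search range:

    def binary_search(sorted_items, target):
        # Each iteration halves the remaining range, so at most
        # about log2(n) iterations are needed.
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1  # not found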

O(n) – Linear time:

The algorithm’s runtime or resource usage grows linearly with the input size. In other words, the amount of work is directly proportional to the size of the input.
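
A simple example is a linear scan that checks whether a value appears in a list; in the worst case every element is inspected exactly once:

    def contains(items, target):
        # Worst case: every element is inspected once, so the
        # work grows in direct proportion to len(items).
        for item in items:
            if item == target:
                return True
        return False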

O(n log n) – Linearithmic time:

The algorithm’s runtime or resource usage grows in proportion to n multiplied by the logarithm of n. Many efficient sorting algorithms, such as merge sort and heapsort, have this complexity; quicksort achieves it on average, although its worst case is quadratic.
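
As a rough sketch, here is a textbook merge sort: the list is halved about log n times, and each level of merging does linear work, giving n log n overall.

    def merge_sort(items):
        # Base case: a list of 0 or 1 elements is already sorted.
        if len(items) <= 1:
            return items
        # Split the list in half and sort each half recursively.
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Merge the two sorted halves in linear time.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged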

O(n^2) – Quadratic time:

The algorithm’s runtime or resource usage grows quadratically with the input size. It is often associated with algorithms that involve nested loops over the input.
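
For example, a brute-force duplicate check that compares every pair of elements performs roughly n * n / 2 comparisons in the worst case:

    def has_duplicate(items):
        # The nested loops compare every pair of elements,
        # which is quadratic in the length of the list.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False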

O(2^n) – Exponential time:

The algorithm’s runtime or resource usage grows exponentially with the input size. It is typically found in brute-force algorithms that generate all possible combinations.
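
As a small illustrative sketch, generating every subset of a list produces 2^n results, so the work doubles with each additional element:

    def all_subsets(items):
        # A list of n items has 2^n subsets, so both the runtime and
        # the size of the result double with every added item.
        subsets = [[]]
        for item in items:
            subsets += [subset + [item] for subset in subsets]
        return subsets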

These are just a few examples of Big O notation. There are other notations like O(n^3) (cubic time), O(n!) (factorial time), and more, each indicating different rates of growth. The goal is to choose algorithms with lower Big O complexities to achieve better efficiency and scalability.
