# Big O Notation

Big O notation is used in computer science to describe the performance or complexity of an algorithm. It provides an upper bound on the growth rate of the algorithm’s runtime or resource usage as the input size increases.

In Big O notation, the letter “O” represents the order of growth, and it is followed by a mathematical expression that describes the algorithm’s behavior. The expression typically includes the input size, represented by “n.”

Here are some commonly used notations in Big O:

## O(1) – Constant time:

The algorithm’s runtime or resource usage remains constant, regardless of the input size. It is considered the most efficient complexity.
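For example, a minimal Python sketch of a constant-time operation (the function name is just illustrative):

```python
def get_first(items):
    # Indexing a list takes the same amount of time
    # whether the list holds 10 elements or 10 million.
    return items[0]
```

No matter how large `items` grows, the work done is a single index lookup.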

## O(log n) – Logarithmic time:

The algorithm’s runtime or resource usage increases logarithmically with the input size. Algorithms with this complexity typically discard a constant fraction of the input at each step, as binary search does by halving the search range.
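Binary search is the classic logarithmic-time example; a minimal Python sketch:

```python
def binary_search(sorted_items, target):
    # Each iteration halves the remaining search range, so the
    # loop runs at most about log2(n) times.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present
```

Doubling the input size adds only one extra iteration, which is why logarithmic algorithms scale so well.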

## O(n) – Linear time:

The algorithm’s runtime or resource usage grows linearly with the input size. In other words, it has a direct correlation with the size of the input.
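A simple linear-time sketch in Python, scanning every element once in the worst case:

```python
def contains(items, target):
    # Worst case (target absent or last) inspects all n elements,
    # so runtime grows in direct proportion to len(items).
    for item in items:
        if item == target:
            return True
    return False
```

Doubling the input roughly doubles the work, the defining trait of O(n).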

## O(n log n) – Linearithmic time:

The algorithm’s runtime or resource usage grows in proportion to n multiplied by the logarithm of n. Many efficient comparison-based sorting algorithms have this complexity: merge sort guarantees O(n log n) in all cases, while quicksort achieves it on average but degrades to O(n²) in the worst case.
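Merge sort illustrates where the n log n comes from; a minimal Python sketch:

```python
def merge_sort(items):
    # Recursively splitting the list gives about log2(n) levels,
    # and merging at each level touches all n elements,
    # for O(n log n) total work.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The divide step contributes the log n factor and the merge step the n factor, which multiply rather than add because merging happens at every level of the recursion.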