Given two algorithms that perform the same task, what is the best way to compare their performance? At first glance, one might consider measuring the runtime of the two algorithms. However, this is not a reliable metric, as runtime depends heavily on the hardware and can be highly inconsistent due to real-world conditions, even on the same machine. To address this problem, scientists have developed a more theoretical approach called the time complexity of an algorithm. This measure evaluates how an algorithm's performance scales as the size of the input increases.
If $n$ is the size of the input, then for a given algorithm there exists a function $f(n)$ that measures the worst-case runtime of the algorithm as $n$ varies. The runtime of an algorithm generally follows a specific trend, such as constant, logarithmic, or linear. However, as noted, the runtime is significantly influenced by hardware and physical conditions. Therefore, these functions are often expressed with additional constant factors, such as:

$$f(n) = C_1 n + C_0,$$

where $C_1$ and $C_0$ are factors that depend on outside conditions. This expression is quite inconvenient, as we don't usually know what these constant factors are, so the general rule of thumb is to express the worst-case runtime of an algorithm through big-O notation. For example, if $f(n) = C_1 n + C_0$, then it can be said that

$$f(n) \in O(n).$$
There are many categories of such functions $f(n)$. The most important ones used in algorithmic analysis are the constant function $f(n) = c$, the logarithm function $f(n) = \log_b(n)$, the linear function $f(n) = n$, the linearithmic function $f(n) = n\log n$, the quadratic function $f(n) = n^2$, other polynomial functions $f(n) = n^d$, the exponential functions $f(n) = a^n$, and the factorial function $f(n) = n!$.
Formally, given two functions $f(n)$ and $g(n)$ defined for $n \geq 0$, we say that $f(n) \in O(g(n))$ if there exist a constant $M > 0$ and a threshold $n_0 \geq 0$ such that $|f(n)| \leq M|g(n)|$ for all $n \geq n_0$.

For example, we say that $f(n) = 3n \in O(n)$ because $|3n| \leq M|n|$ for $n \geq 0$, where $M$ can be any number greater than or equal to $3$. Similarly, $8n + 5 \in O(n)$ because if we choose $M = 9$ and $n_0 = 5$, then $8n + 5 \leq 9n$ for $n \geq 5$.
How can we determine the time complexity of an algorithm? Consider the following example: finding the largest element in an array.
// Algorithm: find the maximum element of an array of integers
// Input: an array A of size n
// Output: maximum element of A
int array_max(int A[], int n) {
    int current_max = A[0];
    for (int i = 1; i < n; i++)
        if (A[i] > current_max)
            current_max = A[i];
    return current_max;
}
The assignment operator current_max = A[0] takes constant time, $f(n) = c$. In the for loop, the initialization i = 1 also takes constant time, $f(n) = c$. The comparison i < n, along with the work inside the loop body, also takes constant time per iteration, but it is repeated $n$ times, so the runtime function is $f(n) = c_1 n + c_0$, and array_max therefore runs in $O(n)$ time.
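To make this counting concrete, here is a minimal instrumented sketch (our own illustration; the counter op_count and the name array_max_counted are hypothetical additions, not part of the original algorithm) that tallies the primitive operations and shows they grow linearly with $n$:

#include <iostream>
#include <vector>

// Instrumented copy of array_max that counts primitive operations.
// The tally is illustrative: one unit for the initial assignment,
// one for the loop initialization, and one per loop comparison.
int array_max_counted(const std::vector<int>& A, long long& op_count) {
    int current_max = A[0];                 // 1 operation
    op_count = 2;                           // assignment + i = 1
    for (std::size_t i = 1; i < A.size(); i++) {
        op_count++;                         // comparison A[i] > current_max
        if (A[i] > current_max)
            current_max = A[i];
    }
    return current_max;
}

int main() {
    for (int n : {10, 100, 1000}) {
        std::vector<int> A(n);
        for (int i = 0; i < n; i++) A[i] = i;   // increasing values
        long long ops = 0;
        array_max_counted(A, ops);
        std::cout << "n = " << n << ", operations = " << ops << '\n';
    }
}

Running it prints an operation count of roughly $n + 1$ for each input size, matching the $f(n) = c_1 n + c_0$ analysis above.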
The constant time complexity function, denoted $O(1)$, does not depend on the size of the input (unless the size of the input is exceptionally large). Some examples of algorithms with this complexity are:
Basic arithmetic and bitwise operations: computers usually use 8-bit, 16-bit, 32-bit, and 64-bit representations for integers and floating-point numbers. Addition, subtraction, multiplication, or division of two numbers is considered constant time because the operands are limited to a fixed number of bits. However, for large-number arithmetic with arbitrary precision, the time complexity is no longer constant.
Bitwise operations like AND, OR, and XOR also take constant time because, again, primitive data types are limited to a fixed size on the computer, as the short sketch below illustrates.
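As a small sketch (the function name demo_constant_ops is our own), each of these operations takes a bounded number of machine instructions no matter what values the 64-bit operands hold:

#include <cstdint>

// Every operation below works on fixed-width (64-bit) words, so the
// hardware completes each one in a constant number of steps: O(1).
uint64_t demo_constant_ops(uint64_t a, uint64_t b) {
    uint64_t sum  = a + b;   // arithmetic on fixed-width integers
    uint64_t prod = a * b;   // still constant time: one machine instruction
    uint64_t conj = a & b;   // bitwise AND
    uint64_t mix  = a ^ b;   // bitwise XOR
    return sum + prod + conj + mix;
}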
Some examples of algorithms with logarithmic time complexity, $O(\log n)$, are:
Input: a real number x; an integer n
Output: x^n

exp_by_squaring(x, n)
    if n < 0 then return exp_by_squaring(1 / x, -n);
    else if n = 0 then return 1;
    else if n is even then return exp_by_squaring(x * x, n / 2);
    else if n is odd then return x * exp_by_squaring(x * x, (n - 1) / 2);
end function
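For reference, a minimal runnable C++ translation of the pseudocode above (assuming a double base and a 64-bit integer exponent) could look like this; the exponent is halved at every step, so only $O(\log n)$ multiplications are performed:

#include <iostream>

// Exponentiation by squaring: each recursive call halves the exponent,
// so computing x^n takes O(log n) multiplications instead of O(n).
double exp_by_squaring(double x, long long n) {
    if (n < 0)  return exp_by_squaring(1.0 / x, -n);
    if (n == 0) return 1.0;
    if (n % 2 == 0) return exp_by_squaring(x * x, n / 2);
    return x * exp_by_squaring(x * x, (n - 1) / 2);
}

int main() {
    std::cout << exp_by_squaring(2.0, 10) << '\n';   // prints 1024
}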
Algorithms with $O(n\log n)$ time complexity are usually optimized versions of $O(n^2)$ algorithms. Some examples of this type of algorithm are:
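One classic instance (our own illustration) is merge sort, which replaces the $O(n^2)$ cost of simple sorts such as selection sort: the array is halved $\log n$ times, and each level of recursion does $O(n)$ merging work. A minimal sketch:

#include <vector>

// Merge sort: split in half, sort each half recursively, then merge.
// The recursion depth is log n and every level does O(n) work.
std::vector<int> merge_sort(std::vector<int> v) {
    if (v.size() <= 1) return v;
    std::size_t mid = v.size() / 2;
    std::vector<int> left  = merge_sort({v.begin(), v.begin() + mid});
    std::vector<int> right = merge_sort({v.begin() + mid, v.end()});
    std::vector<int> out;
    out.reserve(v.size());
    std::size_t i = 0, j = 0;
    while (i < left.size() && j < right.size())          // merge step
        out.push_back(left[i] <= right[j] ? left[i++] : right[j++]);
    while (i < left.size())  out.push_back(left[i++]);   // leftovers
    while (j < right.size()) out.push_back(right[j++]);
    return out;
}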
Algorithms of this type are generally impractical for all but the smallest inputs. Some examples are:
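A standard illustration of such prohibitive growth (our own example) is the naive recursive computation of Fibonacci numbers: every call spawns two more, so the number of calls grows exponentially with $n$:

// Naive recursive Fibonacci: T(n) = T(n-1) + T(n-2) + O(1),
// which solves to roughly 1.618^n calls: exponential time.
long long fib(int n) {
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

Even fib(50) already triggers billions of calls, while a simple loop computes the same value in $O(n)$ additions.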
With the speed of modern computers, it is not easy to notice the slight differences in runtime between different time complexities, especially when running only a small code segment, and especially if you are a beginner. To see the impact of how you design your algorithm, we can plot the runtime as $n$ increases. Below is a C++ program and a Python script to verify the runtime.
Consider the following problem: Given an array $A = \{a_1, a_2, \ldots, a_n\}$.
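A minimal sketch of such a timing program (our own illustration, using std::chrono to time the array_max function from earlier; the range of $n$ values is arbitrary) prints one "n microseconds" pair per line, which a plotting script can consume:

#include <chrono>
#include <iostream>
#include <vector>

// Times array_max for doubling input sizes and prints one
// "n microseconds" pair per line, ready to be plotted.
int array_max(const std::vector<int>& A) {
    int current_max = A[0];
    for (std::size_t i = 1; i < A.size(); i++)
        if (A[i] > current_max) current_max = A[i];
    return current_max;
}

int main() {
    for (int n = 100000; n <= 3200000; n *= 2) {
        std::vector<int> A(n);
        for (int i = 0; i < n; i++) A[i] = i % 1000;     // arbitrary data
        auto start = std::chrono::steady_clock::now();
        volatile int m = array_max(A);                   // keep the call alive
        auto stop = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
        std::cout << n << " " << us << "\n";
        (void)m;
    }
}

Plotting the output pairs should show the runtime growing roughly in a straight line, the signature of an $O(n)$ algorithm.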