Last Updated: March 31, 2026
Logarithms show up more often in coding interviews than you might expect. They help you reason about how fast algorithms grow, especially when dealing with divide-and-conquer approaches, binary search, and data structures like heaps and trees.
Instead of thinking in terms of “how big is n,” logarithms answer a different question: “how many times can we keep dividing n until we reach 1?” That perspective is key to understanding why certain algorithms run in O(log n) time and why others scale much more slowly.
In this chapter, we will build an intuitive understanding of logarithms, connect them to common algorithm patterns, and use practical examples to make them feel natural rather than mathematical.
A logarithm is the inverse of exponentiation. If you know that:
2^10 = 1024
Then the logarithm answers the reverse question: "What power do I raise 2 to in order to get 1024?"
log2(1024) = 10
More formally, log_b(x) = y means b^y = x. The base b is the number being raised to a power, x is the result, and y is the exponent we are solving for.
| Exponential Form | Logarithmic Form | In Plain English |
|---|---|---|
| 2^3 = 8 | log2(8) = 3 | "2 multiplied by itself 3 times gives 8" |
| 2^10 = 1024 | log2(1024) = 10 | "You need 10 doublings to reach 1024" |
| 10^3 = 1000 | log10(1000) = 3 | "1000 has 3 zeros, so log base 10 is 3" |
| 2^20 = 1,048,576 | log2(1,048,576) = 20 | "About a million needs 20 halvings" |
In computer science, the base is almost always 2. That is because computers operate in binary, and most divide-and-conquer algorithms split problems in half at each step.
Here is a visual way to think about logarithms. If you start with 16 elements and keep halving, how many levels does it take to reach 1?
The answer is 4 levels, because log2(16) = 4. Each level represents one halving operation. This is exactly what happens in binary search: at each step, you eliminate half the remaining elements.
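The halving process can be simulated directly. Here is a short sketch (the function name is illustrative) that counts how many levels it takes:

```python
def halving_levels(n):
    """Count how many times n can be halved before reaching 1."""
    levels = 0
    while n > 1:
        n //= 2       # one halving per level
        levels += 1
    return levels

print(halving_levels(16))    # 4, matching log2(16)
print(halving_levels(1024))  # 10, matching log2(1024)
```

Each loop iteration corresponds to one level of the halving picture, which is why the count equals floor(log2(n)).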
These rules come up in complexity analysis and interview problems. You do not need to memorize proofs, but you should know how to apply each one.
1. Product Rule: log_b(a * c) = log_b(a) + log_b(c)
Multiplication inside the log becomes addition outside. This is why the logarithm of a large product can be computed by summing the logarithms of its parts.
For example:
log2(32) = log2(8 * 4) = log2(8) + log2(4) = 3 + 2 = 5
2. Quotient Rule: log_b(a / c) = log_b(a) - log_b(c)
Division inside the log becomes subtraction outside.
log2(8 / 2) = log2(8) - log2(2) = 3 - 1 = 2, which checks out because 8 / 2 = 4 and log2(4) = 2.
3. Power Rule: log_b(a^n) = n * log_b(a)
An exponent inside the log comes out as a multiplier. This one is particularly useful. For example:
log2(8^3) = 3 log2(8) = 3 * 3 = 9, and indeed 8^3 = 512 = 2^9.
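All three rules can be verified numerically. The sketch below uses Python's math module with arbitrary example values:

```python
import math

a, c, n = 8, 4, 3

# Product rule: log(a * c) = log(a) + log(c)
assert math.isclose(math.log2(a * c), math.log2(a) + math.log2(c))

# Quotient rule: log(a / c) = log(a) - log(c)
assert math.isclose(math.log2(a / c), math.log2(a) - math.log2(c))

# Power rule: log(a^n) = n * log(a)
assert math.isclose(math.log2(a ** n), n * math.log2(a))

print("all three rules check out")
```

Using math.isclose rather than == avoids spurious failures from floating-point rounding.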
4. Change of Base: log_b(a) = log_c(a) / log_c(b)
This lets you convert between bases. In most programming languages, the built-in log function uses base e (natural log) or base 10. To compute log base 2, you use:
log2(n) = ln(n) / ln(2), or equivalently, log10(n) / log10(2)
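In Python, for example, the change of base looks like this (math.log2 also exists as a direct option, shown for comparison):

```python
import math

n = 1024

# Change of base: log2(n) = ln(n) / ln(2) = log10(n) / log10(2)
via_natural = math.log(n) / math.log(2)
via_base10 = math.log10(n) / math.log10(2)
direct = math.log2(n)

assert math.isclose(via_natural, direct)
assert math.isclose(via_base10, direct)
print(direct)  # 10.0
```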
In Big-O analysis, the base of the logarithm does not matter. O(log2 n) and O(log10 n) differ only by a constant factor (log2 n = log10 n / log10 2 ≈ 3.32 * log10 n). So we simply write O(log n). However, when computing exact values (like tree height or number of bits), the base absolutely matters.
Most logarithms you encounter in CS are base 2. The reason is straightforward: the most common algorithmic pattern is dividing a problem into two equal halves.
Other bases do appear occasionally. Ternary search divides into three parts, giving O(log3 n). B-trees in databases have a branching factor of hundreds, giving O(log_b n) where b might be 100 or more. But since all logarithm bases differ by a constant factor, they are all O(log n) in Big-O notation.
In practice, log values are rarely whole numbers. When we need integer results (like tree height or array index), we use floor and ceiling.
| n | log2(n) | floor(log2(n)) | ceil(log2(n)) |
|---|---|---|---|
| 1 | 0.0 | 0 | 0 |
| 2 | 1.0 | 1 | 1 |
| 3 | 1.58 | 1 | 2 |
| 7 | 2.81 | 2 | 3 |
| 8 | 3.0 | 3 | 3 |
| 10 | 3.32 | 3 | 4 |
| 16 | 4.0 | 4 | 4 |
| 100 | 6.64 | 6 | 7 |
| 1000 | 9.97 | 9 | 10 |
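The rows of this table can be reproduced with a few lines of Python (a minimal sketch):

```python
import math

for n in [1, 2, 3, 7, 8, 10, 16, 100, 1000]:
    exact = math.log2(n)
    # n, log2(n) to two decimals, floor, ceil
    print(n, round(exact, 2), math.floor(exact), math.ceil(exact))
```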
Useful formulas that follow from this:
- The number of bits needed to represent a positive integer n is floor(log2(n)) + 1.
- The number of halvings needed to reduce n to 1 is floor(log2(n)).
- The height of a complete binary tree with n nodes is floor(log2(n)).
There is a deep connection between logarithms and binary numbers. The number 13 in binary is 1101, which is 4 bits long. And floor(log2(13)) + 1 = floor(3.7) + 1 = 3 + 1 = 4. This is not a coincidence.
The highest power of 2 that fits in n determines the most significant bit position. That position is exactly floor(log2(n)), and since we count from position 0, the total number of bits is floor(log2(n)) + 1.
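Python exposes this relationship directly through int.bit_length. The sketch below checks it against the floor formula:

```python
import math

n = 13
bits = n.bit_length()                   # 4, since 13 is 0b1101
formula = math.floor(math.log2(n)) + 1  # floor(3.70) + 1 = 3 + 1 = 4

print(bin(n), bits, formula)  # 0b1101 4 4
assert bits == formula
```

For very large n, bit_length is preferable in practice: it works on exact integers, while math.log2 goes through floating point and can round the wrong way near powers of two.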
Let us trace binary search on a sorted array of 16 elements and count comparisons. You might expect exactly log2(16) = 4, but the trace reveals a subtlety.
Now let us try a worst case, searching for 47 (the last element):
This is an important detail. The maximum number of comparisons in binary search on n elements is floor(log2(n)) + 1, not log2(n). The "+1" accounts for the final comparison when only one element remains.
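To see the floor(log2(n)) + 1 bound concretely, here is a sketch of binary search instrumented with a comparison counter. The array contents are illustrative, not the original example:

```python
def binary_search(arr, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    lo, hi = 0, len(arr) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1          # one comparison per loop iteration
        if arr[mid] == target:
            return mid, comparisons
        elif arr[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1, comparisons

arr = list(range(0, 32, 2))          # 16 sorted elements: 0, 2, ..., 30
idx, comps = binary_search(arr, 30)  # worst case: the last element
print(idx, comps)  # 15 5, exactly floor(log2(16)) + 1 comparisons
```

Searching for the last element forces the search range down to a single element before the match is found, which is where the extra "+1" comparison comes from.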
To build intuition, practice computing log2 values mentally: log2(64) = 6, log2(1024) = 10, and log2 of a million is roughly 20 (since 2^20 = 1,048,576).
Below are utility functions. Each implementation provides three functions: computing floor of log base 2, counting the number of decimal digits, and counting the number of bits needed to represent a positive integer.
Note that all implementations use bit shifting (n >>= 1) instead of division for the log2 computation. This is both faster and more idiomatic when working with powers of 2. The loop counts how many times we can right-shift n before it becomes 1, which is exactly the definition of floor(log2(n)).
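A Python version of the three utilities might look like the following. The names are illustrative, and the log2 loop uses the bit-shifting approach described above:

```python
def floor_log2(n):
    """floor(log2(n)) for a positive integer n, via repeated right-shifts."""
    if n <= 0:
        raise ValueError("n must be positive")
    result = 0
    while n > 1:
        n >>= 1       # halve n by dropping its lowest bit
        result += 1   # one more halving counted
    return result

def count_digits(n):
    """Number of decimal digits in a positive integer n."""
    digits = 0
    while n > 0:
        n //= 10
        digits += 1
    return digits

def bit_count(n):
    """Number of bits needed to represent a positive integer n."""
    return floor_log2(n) + 1

print(floor_log2(1024))   # 10
print(count_digits(1000)) # 4
print(bit_count(13))      # 4, since 13 is 0b1101
```

In production Python you would reach for n.bit_length() and math.log2 instead, but spelling the loops out makes the halving definition of the logarithm explicit.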