
Logarithms

Last Updated: March 31, 2026


Logarithms show up more often in coding interviews than you might expect. They help you reason about how fast algorithms grow, especially when dealing with divide-and-conquer approaches, binary search, and data structures like heaps and trees.

Instead of thinking in terms of “how big is n,” logarithms answer a different question: “how many times can we keep dividing n until we reach 1?” That perspective is key to understanding why certain algorithms run in O(log n) time and why others scale much more slowly.

In this chapter, we will build an intuitive understanding of logarithms, connect them to common algorithm patterns, and use practical examples to make them feel natural rather than mathematical.

What is a Logarithm?

A logarithm is the inverse of exponentiation. If you know that:

2^10 = 1024

Then the logarithm answers the reverse question: "What power do I raise 2 to in order to get 1024?"

log2(1024) = 10

More formally, log_b(x) = y means b^y = x. The base b is the number being raised to a power, x is the result, and y is the exponent we are solving for.

Exponential Form      Logarithmic Form        In Plain English
2^3 = 8               log2(8) = 3             "2 multiplied by itself 3 times gives 8"
2^10 = 1024           log2(1024) = 10         "You need 10 doublings to reach 1024"
10^3 = 1000           log10(1000) = 3         "1000 has 3 zeros, so log base 10 is 3"
2^20 = 1,048,576      log2(1,048,576) = 20    "About a million needs 20 halvings"

In computer science, the base is almost always 2. That is because computers operate in binary, and most divide-and-conquer algorithms split problems in half at each step.

The Halving Tree

Here is a visual way to think about logarithms: picture a tree in which each level holds half as many elements as the one above it. If you start with 16 elements and keep halving, how many levels does it take to reach 1?

The answer is 4 levels, because log2(16) = 4. Each level represents one halving operation. This is exactly what happens in binary search: at each step, you eliminate half the remaining elements.
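The halving count is easy to verify with a short loop (a sketch; the function name `halvings` is introduced here for illustration):

```python
def halvings(n: int) -> int:
    """Count how many times n can be halved (integer division) before reaching 1."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(halvings(16))  # 4, matching log2(16) = 4
```

For any positive integer n, this loop computes floor(log2(n)): try `halvings(1024)` and you get 10.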

Logarithm Rules

These rules come up in complexity analysis and interview problems. You do not need to memorize proofs, but you should know how to apply each one.

1. Product Rule: log_b(a * c) = log_b(a) + log_b(c)

Multiplication inside the log becomes addition outside. This is why the logarithm of a large product can be computed by summing the logarithms of its parts.

For example:

log2(32) = log2(8 * 4) = log2(8) + log2(4) = 3 + 2 = 5

2. Quotient Rule: log_b(a / c) = log_b(a) - log_b(c)

Division inside the log becomes subtraction outside.

log2(8 / 2) = log2(8) - log2(2) = 3 - 1 = 2, which checks out because 8 / 2 = 4 and log2(4) = 2.

3. Power Rule: log_b(a^n) = n * log_b(a)

An exponent inside the log comes out as a multiplier. This one is particularly useful. For example:

log2(8^3) = 3 log2(8) = 3 * 3 = 9, and indeed 8^3 = 512 = 2^9.
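All three rules so far can be checked numerically with Python's `math.log2` (a quick sketch; `math.isclose` guards against floating-point rounding):

```python
import math

# Product rule: log2(8 * 4) == log2(8) + log2(4)
assert math.isclose(math.log2(8 * 4), math.log2(8) + math.log2(4))

# Quotient rule: log2(8 / 2) == log2(8) - log2(2)
assert math.isclose(math.log2(8 / 2), math.log2(8) - math.log2(2))

# Power rule: log2(8 ** 3) == 3 * log2(8)
assert math.isclose(math.log2(8 ** 3), 3 * math.log2(8))

print("all three rules check out")
```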

4. Change of Base: log_b(a) = log_c(a) / log_c(b)

This lets you convert between bases. In most programming languages, the built-in log function uses base e (natural log) or base 10. To compute log base 2, you use:

log2(n) = ln(n) / ln(2), or equivalently, log10(n) / log10(2)
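In Python, for example, `math.log` defaults to the natural log, and change of base recovers log base 2; `math.log` also accepts an explicit base, and `math.log2` is a dedicated helper (a sketch):

```python
import math

n = 1024

via_ln = math.log(n) / math.log(2)        # change of base via natural log
via_log10 = math.log10(n) / math.log10(2) # change of base via log base 10
via_base_arg = math.log(n, 2)             # explicit base argument

print(via_ln, via_log10, via_base_arg, math.log2(n))  # all approximately 10.0
```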

Why log2 Dominates in Computer Science

Most logarithms you encounter in CS are base 2. The reason is straightforward: the most common algorithmic pattern is dividing a problem into two equal halves.

  • Binary search eliminates half the array at each step.
  • Balanced binary trees split nodes into two subtrees at each level.
  • Merge sort divides the array in half, sorts both halves, then merges.
  • Binary representation uses base 2, so the number of bits in n is directly related to log2(n).

Other bases do appear occasionally. Ternary search divides into three parts, giving O(log3 n). B-trees in databases have a branching factor of hundreds, giving O(log_b n) where b might be 100 or more. But since all logarithm bases differ by a constant factor, they are all O(log n) in Big-O notation.

Floor and Ceiling of Log Values

In practice, log values are rarely whole numbers. When we need integer results (like tree height or array index), we use floor and ceiling.

  • floor(log2(n)): the largest integer k such that 2^k <= n
  • ceil(log2(n)): the smallest integer k such that 2^k >= n

n       log2(n)   floor(log2(n))   ceil(log2(n))
1       0.0       0                0
2       1.0       1                1
3       1.58      1                2
7       2.81      2                3
8       3.0       3                3
10      3.32      3                4
16      4.0       4                4
100     6.64      6                7
1000    9.97      9                10

Useful formulas that follow from this:

  • Number of bits to represent n (for n >= 1): floor(log2(n)) + 1
  • Number of digits in n (base 10, for n >= 1): floor(log10(n)) + 1
  • Height of a complete binary tree with n nodes: floor(log2(n))
  • Number of levels in binary search on n elements: floor(log2(n)) + 1
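These definitions and formulas can be spot-checked in Python (a sketch; note that `math.floor(math.log2(n))` can suffer floating-point error for very large n, and `int.bit_length` is the exact integer alternative):

```python
import math

# floor(log2(n)): largest k with 2**k <= n; ceil(log2(n)): smallest k with 2**k >= n
for n in [1, 3, 8, 100, 1000]:
    f = math.floor(math.log2(n))
    c = math.ceil(math.log2(n))
    print(n, f, c)

# Bits to represent n: floor(log2(n)) + 1  (int.bit_length computes this exactly)
assert (100).bit_length() == math.floor(math.log2(100)) + 1   # 7 bits
# Decimal digits in n: floor(log10(n)) + 1
assert len(str(5000)) == math.floor(math.log10(5000)) + 1     # 4 digits
```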

Connection to Binary Representation

There is a deep connection between logarithms and binary numbers. The number 13 in binary is 1101, which is 4 bits long. And floor(log2(13)) + 1 = floor(3.7) + 1 = 3 + 1 = 4. This is not a coincidence.

The highest power of 2 that fits in n determines the most significant bit position. That position is exactly floor(log2(n)), and since we count from position 0, the total number of bits is floor(log2(n)) + 1.
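Python exposes this relationship directly through `int.bit_length` (a quick check):

```python
n = 13
print(bin(n))              # 0b1101, which is 4 bits
print(n.bit_length())      # 4, i.e. floor(log2(13)) + 1
print(n.bit_length() - 1)  # 3, the position of the most significant bit
```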

Examples

Binary Search and log2(n)

Let us reason through binary search on a sorted array of 16 elements. Each comparison halves the remaining range: 16 -> 8 -> 4 -> 2 -> 1, which is 4 halvings, matching log2(16) = 4.

Now consider the worst case, where the target sits at the very end of the search path: the range shrinks all the way down to a single element, and one final comparison confirms the match.

This is an important detail. The maximum number of comparisons in binary search on n elements is floor(log2(n)) + 1, not log2(n). The "+1" accounts for the final comparison when only one element remains; for n = 16, that is 4 + 1 = 5 comparisons.
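A sketch that counts comparisons in a worst-case binary search confirms the floor(log2(n)) + 1 bound (the function name is illustrative, not from the chapter):

```python
def comparisons_worst_case(n: int) -> int:
    """Count comparisons binary search makes when the target is the last element."""
    arr = list(range(n))
    target = n - 1
    lo, hi = 0, n - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if arr[mid] == target:
            break
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

print(comparisons_worst_case(16))  # 5, i.e. floor(log2(16)) + 1
```

Running it for n = 1000 gives 10 comparisons, matching floor(log2(1000)) + 1 = 9 + 1.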

Computing Log Values by Hand

To build intuition, practice computing log2 values mentally:

  • log2(64): 2^6 = 64, so log2(64) = 6
  • log2(100): 2^6 = 64, 2^7 = 128. So log2(100) is between 6 and 7. More precisely, about 6.64.
  • Number of bits for 100: floor(6.64) + 1 = 7 bits
  • Number of decimal digits in 5000: floor(log10(5000)) + 1 = floor(3.699) + 1 = 4 digits

Implementation

Below are utility functions for three common tasks: computing the floor of log base 2, counting the number of decimal digits in an integer, and counting the number of bits needed to represent a positive integer.

Note that the log2 computation uses bit shifting (n >>= 1) instead of division. This is both faster and more idiomatic when working with powers of 2. The loop counts how many times we can right-shift n before it becomes 1, which is exactly the definition of floor(log2(n)).
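A Python sketch of the three helpers described above, using bit shifting for the log2 computation (the function names are illustrative):

```python
def floor_log2(n: int) -> int:
    """floor(log2(n)) for n >= 1: count right-shifts until n becomes 1."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    result = 0
    while n > 1:
        n >>= 1
        result += 1
    return result

def count_digits(n: int) -> int:
    """Number of decimal digits in n (n >= 1)."""
    digits = 0
    while n > 0:
        n //= 10
        digits += 1
    return digits

def bit_count(n: int) -> int:
    """Number of bits needed to represent n (n >= 1): floor(log2(n)) + 1."""
    return floor_log2(n) + 1

print(floor_log2(1024), count_digits(5000), bit_count(13))  # 10 4 4
```

In production Python you would reach for `n.bit_length()` instead, which gives `floor_log2(n) + 1` directly, but the explicit loops mirror what you would write in an interview in any language.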