Last Updated: March 29, 2026
We need to compute x raised to the power n, where x is a floating-point number and n is a 32-bit integer. This sounds straightforward at first, but there are a few wrinkles that make it interesting.
First, n can be negative. If n is negative, x^n = 1 / x^(-n). So we need to handle that conversion. Second, n can be extremely large in magnitude, up to 2^31 - 1 (about 2.1 billion). Multiplying x by itself n times in a loop would be far too slow. We need a smarter approach.
The key observation is that exponentiation has a natural divide-and-conquer structure. Instead of multiplying by x one factor at a time, we can square intermediate results to skip ahead. For example, x^10 = (x^5)^2, and x^5 = x * (x^2)^2. This lets us compute the answer in O(log n) multiplications instead of O(n).
-2^31 <= n <= 2^31 - 1 --> n can be up to ~2.1 billion in magnitude, so an O(n) loop will time out. We need O(log n).

-100.0 < x < 100.0 --> x is bounded, so intermediate results won't overflow to infinity for valid inputs where x^n is within [-10^4, 10^4].

n can be negative --> We need to handle the conversion x^(-n) = 1 / x^n.

n = -2^31 --> Negating it causes integer overflow in 32-bit signed integers. This is a critical edge case.

The most straightforward way to compute x^n is to multiply x by itself n times. If n is negative, compute x^|n| first, then take the reciprocal.
This is simple and correct, but with n up to 2 billion, it is far too slow to pass on LeetCode.
Input: x = 2.0, n = 4
Starting from 1.0, we multiply by 2.0 four times: 1.0 -> 2.0 -> 4.0 -> 8.0 -> 16.0.
Output: 16.0
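The brute-force approach above can be sketched as follows. This is a minimal illustration, not code from the original; the class and method names are placeholders. Note that even this naive version widens n to a long before taking the absolute value, to sidestep the Integer.MIN_VALUE overflow discussed later.

```java
public class BruteForcePow {
    // O(|n|) multiplications: correct, but far too slow when |n| is ~2 billion.
    public static double myPow(double x, int n) {
        long e = Math.abs((long) n); // widen to long so abs(-2^31) doesn't overflow
        double result = 1.0;
        for (long i = 0; i < e; i++) {
            result *= x;
        }
        // For negative exponents, x^n = 1 / x^|n|.
        return n < 0 ? 1.0 / result : result;
    }

    public static void main(String[] args) {
        System.out.println(BruteForcePow.myPow(2.0, 4)); // 16.0
    }
}
```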
We are performing |n| multiplications one at a time, ignoring the structure of exponentiation. What if we could halve the exponent at each step by squaring intermediate results?
Here is the key insight: exponentiation has a recursive structure that lets us cut the problem in half at every step.
If n is even: x^n = (x^(n/2))^2. Compute x^(n/2) once, then square it.
If n is odd: x^n = x * x^(n-1) = x * (x^((n-1)/2))^2. Compute x^((n-1)/2), square it, and multiply by x one more time.
Either way, we reduce the exponent by roughly half in each recursive call. Starting from n, we reach 0 after about log2(n) steps. That takes us from O(n) multiplications down to O(log n).
The base case is simple: x^0 = 1 for any x.
For negative exponents, we convert up front: x^(-n) = (1/x)^n. One subtlety here: if n is Integer.MIN_VALUE (-2^31), negating it overflows a 32-bit int because the positive range only goes up to 2^31 - 1. So we use a long (or equivalent) to hold the absolute value.
Any positive integer n can be expressed in binary. For example, 10 in binary is 1010, meaning 10 = 8 + 2 = 2^3 + 2^1. So x^10 = x^8 * x^2. The recursive approach implicitly decomposes the exponent this way. When n is even, we just square. When n is odd, we square and multiply by x once. Each call halves n, so we make at most log2(n) calls.
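The recursive halving described above might look like this in Java. This is a sketch with illustrative names; the key details are widening n to a long before negating it (so Integer.MIN_VALUE is safe) and the x^0 = 1 base case.

```java
public class RecursivePow {
    public static double myPow(double x, int n) {
        long e = n; // widen BEFORE negating: -(-2^31) overflows a 32-bit int
        if (e < 0) {
            x = 1.0 / x; // x^(-n) = (1/x)^n
            e = -e;
        }
        return fastPow(x, e);
    }

    private static double fastPow(double x, long n) {
        if (n == 0) return 1.0;          // base case: x^0 = 1 for any x
        double half = fastPow(x, n / 2); // cut the exponent roughly in half
        if (n % 2 == 0) {
            return half * half;          // even: x^n = (x^(n/2))^2
        }
        return half * half * x;          // odd: x^n = x * (x^((n-1)/2))^2
    }
}
```

Each call halves the exponent, so both the time and the recursion depth are O(log n).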
The recursion uses O(log n) stack space. Can we convert the recursive halving into an iterative loop to achieve O(1) space?
The recursive approach works beautifully, but we can do the exact same thing iteratively by thinking about the binary representation of n directly.
Here is the idea: write n in binary. For example, n = 13 is 1101 in binary, meaning 13 = 8 + 4 + 1. So x^13 = x^8 * x^4 * x^1. We can compute x^1, x^2, x^4, x^8 by repeatedly squaring x. Then we just multiply in the powers that correspond to 1-bits in n.
We iterate through the bits of n from least significant to most significant. We maintain a variable currentProduct that starts at x and gets squared each iteration (representing x^1, x^2, x^4, x^8, ...). Whenever the current bit of n is 1, we multiply currentProduct into our result.
This is the same O(log n) time complexity as the recursive approach, but with O(1) space since there is no recursion stack.
Any non-negative integer n can be represented in binary as a sum of powers of 2. For example, 13 = 2^3 + 2^2 + 2^0 = 8 + 4 + 1. So x^13 = x^(8+4+1) = x^8 * x^4 * x^1. By squaring x repeatedly, we generate x^1, x^2, x^4, x^8, and so on. The bits of n tell us which of these powers to include in the product. This is why the algorithm checks each bit: if the bit is 1, that power of x contributes to the final result.
a. If the current bit (exponent & 1) is 1, multiply result by currentProduct
b. Square currentProduct (it now represents the next power of x)
c. Right-shift the exponent by 1 (equivalent to dividing by 2)
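Putting steps a through c together, the iterative version might look like this. Again a sketch with illustrative names, using the same long-widening trick for the negative-exponent edge case.

```java
public class IterativePow {
    public static double myPow(double x, int n) {
        long e = n; // widen before negating so Integer.MIN_VALUE is handled
        if (e < 0) {
            x = 1.0 / x;
            e = -e;
        }
        double result = 1.0;
        double currentProduct = x;            // represents x^1, x^2, x^4, x^8, ...
        while (e > 0) {
            if ((e & 1) == 1) {               // a. current bit is 1:
                result *= currentProduct;     //    this power of x contributes
            }
            currentProduct *= currentProduct; // b. square for the next bit
            e >>= 1;                          // c. shift to the next bit of n
        }
        return result;
    }
}
```

For n = 13 (binary 1101), the loop multiplies in x^1, x^4, and x^8, and skips x^2, exactly matching the decomposition 13 = 8 + 4 + 1. The loop runs once per bit of n, so it is O(log n) time with O(1) extra space.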