Last Updated: March 31, 2026
Matrix exponentiation is a powerful technique used to optimize certain recursive and dynamic programming problems from linear time to logarithmic time. It is especially useful when a problem follows a recurrence relation, such as Fibonacci or similar sequences.
Instead of computing each state one by one, matrix exponentiation lets you jump directly to the nth state by raising a transformation matrix to a power. This reduces time complexity from O(n) to O(log n), which is critical when n is very large.
In this chapter, we will build the intuition behind matrix exponentiation, understand how recurrence relations can be expressed as matrices, and apply this technique to common interview problems.
Before we dive into matrix exponentiation, let us make sure matrix multiplication itself is clear.
To multiply two matrices A (of size p x q) and B (of size q x r), we produce a result matrix C of size p x r. Each entry C[i][j] is the dot product of row i of A and column j of B: C[i][j] = A[i][0]·B[0][j] + A[i][1]·B[1][j] + ... + A[i][q-1]·B[q-1][j].
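As a sketch in Python (the function name is illustrative), the triple loop follows this definition directly:

```python
def mat_mult(A, B):
    """Multiply a p x q matrix A by a q x r matrix B."""
    p, q, r = len(A), len(B), len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            # C[i][j] is the dot product of row i of A and column j of B.
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```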
A few important properties to remember:

- Matrix multiplication is associative: (A * B) * C = A * (B * C). This is what makes exponentiation by squaring valid.
- It is generally not commutative: A * B is usually not equal to B * A.
- The identity matrix I satisfies I * A = A * I = A, and plays the role of "1" when raising a matrix to the 0th power.
The Fibonacci recurrence is F(n) = F(n-1) + F(n-2), with base cases F(0) = 0 and F(1) = 1.
Here is the trick. We can express the transition from one pair of consecutive Fibonacci numbers to the next as a matrix multiplication: [F(n+1), F(n)]^T = [[1, 1], [1, 0]] * [F(n), F(n-1)]^T.
Let us call that 2x2 matrix T (the transition matrix). Starting from the base case [F(1), F(0)]^T = [1, 0]^T:
Applying T once gives us F(2) and F(1). Applying T twice gives us F(3) and F(2). Applying T n times gives us F(n+1) and F(n). In other words: [F(n+1), F(n)]^T = T^n * [F(1), F(0)]^T.
This means computing F(n) reduces to computing T^n, which we can do in O(log n) using binary exponentiation.
Multiplying two k x k matrices takes O(k^3) time. For the Fibonacci problem k = 2, so each multiplication is just 8 multiplications and 4 additions, which is essentially O(1).
Here is how we multiply two 2x2 matrices:
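A Python sketch with all eight multiplications written out explicitly (the name `mat_mult_2x2` is illustrative):

```python
def mat_mult_2x2(X, Y):
    """Multiply two 2x2 matrices: 8 multiplications, 4 additions."""
    return [
        [X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
         X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
        [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
         X[1][0] * Y[0][1] + X[1][1] * Y[1][1]],
    ]
```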
Binary exponentiation (also called exponentiation by squaring) computes X^n in O(log n) multiplications. The idea works identically for numbers and matrices:
For example, to compute T^13 where 13 = 1101 in binary: T^13 = T^8 * T^4 * T^1.
We compute T^1, T^2, T^4, T^8 by repeated squaring, and multiply together only the powers corresponding to set bits.
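A Python sketch of this loop, specialized to 2x2 matrices (helper names are illustrative):

```python
def mat_mult(A, B):
    # 2x2 matrix product, written out entry by entry.
    return [
        [A[0][0] * B[0][0] + A[0][1] * B[1][0],
         A[0][0] * B[0][1] + A[0][1] * B[1][1]],
        [A[1][0] * B[0][0] + A[1][1] * B[1][0],
         A[1][0] * B[0][1] + A[1][1] * B[1][1]],
    ]

def mat_pow(T, n):
    """Compute T^n by exponentiation by squaring."""
    result = [[1, 0], [0, 1]]  # 2x2 identity matrix
    base = T
    while n > 0:
        if n & 1:                        # current bit is set:
            result = mat_mult(result, base)  # fold this power into the answer
        base = mat_mult(base, base)      # square for the next bit
        n >>= 1
    return result
```

Each loop iteration handles one bit of n, so the loop runs O(log n) times.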
The Fibonacci matrix is just one example. Any linear recurrence of the form f(n) = c1·f(n-1) + c2·f(n-2) + ... + ck·f(n-k)
can be expressed as a k x k matrix multiplication. The transition matrix T is:
The first row contains the recurrence coefficients. The remaining rows form a "shift" that moves the state one step forward. The state vector holds the k most recent values of the recurrence.
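One way to build this matrix in Python (a sketch; `companion_matrix` is an illustrative name):

```python
def companion_matrix(coeffs):
    """Build the k x k transition matrix for
    f(n) = coeffs[0]*f(n-1) + ... + coeffs[k-1]*f(n-k)."""
    k = len(coeffs)
    T = [[0] * k for _ in range(k)]
    T[0] = list(coeffs)       # first row: the recurrence coefficients
    for i in range(1, k):
        T[i][i - 1] = 1       # sub-diagonal of 1s shifts the state forward
    return T
```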
Let us look at a slightly more complex example. Consider the recurrence:
This has a constant term (+1), which does not fit the standard pattern directly. The trick is to add an extra dimension to the state vector to handle the constant:
The extra row and column keep the constant "1" alive through each multiplication. This pattern generalizes: any time your recurrence has additive constants or polynomial terms, you can absorb them into the state vector.
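As a concrete sketch, assume for illustration the recurrence f(n) = f(n-1) + f(n-2) + 1 with f(0) = 0 and f(1) = 1. The augmented state vector [f(n), f(n-1), 1]^T then evolves under a 3x3 matrix:

```python
def mat_mult(A, B):
    # generic square-matrix product
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(T, e):
    # exponentiation by squaring, starting from the identity
    n = len(T)
    result = [[int(i == j) for j in range(n)] for i in range(n)]
    while e > 0:
        if e & 1:
            result = mat_mult(result, T)
        T = mat_mult(T, T)
        e >>= 1
    return result

# Transition for the augmented state [f(n), f(n-1), 1]:
#   f(n+1) = 1*f(n) + 1*f(n-1) + 1*1
#   f(n)   = 1*f(n)
#   1      =                     1*1
T = [[1, 1, 1],
     [1, 0, 0],
     [0, 0, 1]]

def f(n):
    if n == 0:
        return 0
    state = [1, 0, 1]          # [f(1), f(0), 1]
    P = mat_pow(T, n - 1)
    return sum(P[0][j] * state[j] for j in range(3))
```

The bottom-right 1 in T is the extra row and column keeping the constant alive.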
Counting paths in graphs: If A is the adjacency matrix of a graph, then A^k[i][j] gives the number of paths of exactly k edges from node i to node j. With matrix exponentiation, you can compute this in O(V^3 * log k) instead of O(V^3 * k).
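A tiny sketch of this claim, using a made-up 3-node directed cycle 0 -> 1 -> 2 -> 0:

```python
# Adjacency matrix of the directed cycle 0 -> 1 -> 2 -> 0.
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

def mat_mult(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = mat_mult(A, A)   # A2[i][j] = number of 2-edge paths from i to j
A3 = mat_mult(A2, A)  # A3[i][j] = number of 3-edge paths from i to j
```

Here A3 is the identity: the only 3-edge walk from each node is once around the cycle, back to itself.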
Tiling problems: Many tiling problems (e.g., "how many ways to tile a 2xN board with 1x2 dominoes") reduce to linear recurrences that matrix exponentiation handles efficiently.
Let us trace through computing Fibonacci(6) step by step using matrix exponentiation.
Our transition matrix is T = [[1, 1], [1, 0]].
Our base state vector is [F(1), F(0)]^T = [1, 0]^T.
We need T^6. Since 6 = 110 in binary, we need T^4 * T^2.
Step 1: Start with base = T and result = I = [[1, 0], [0, 1]] (the identity).
Step 2: Bit 0 of 6 is 0, so skip. Square the base: base = T^2 = [[2, 1], [1, 1]].
Let us verify: T^2[0][0] = 1·1 + 1·1 = 2, T^2[0][1] = 1·1 + 1·0 = 1, T^2[1][0] = 1·1 + 0·1 = 1, T^2[1][1] = 1·1 + 0·0 = 1.
Step 3: Bit 1 of 6 is 1, so result = result * base = I * T^2 = [[2, 1], [1, 1]]. Then square the base: base = T^4 = [[5, 3], [3, 2]].
Step 4: Bit 2 of 6 is 1, so result = result * base = T^2 * T^4.
So T^6 is [[13, 8], [8, 5]].
Step 5: Multiply T^6 by the base vector: [[13, 8], [8, 5]] * [1, 0]^T = [13, 8]^T.
This gives us F(7) = 13 and F(6) = 8. Let us verify: the Fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, 13... So F(6) = 8 is correct.
We computed F(6) using only 3 matrix multiplications (two squarings and one extra multiply) instead of 6 sequential additions. For large n, the savings are enormous.
Below is the complete implementation. The implementation provides three functions: matrix multiplication, matrix power using binary exponentiation, and Fibonacci computation using matrix exponentiation. All results are computed modulo 10^9 + 7 to prevent overflow.
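A Python sketch of those three functions (function names and exact structure are illustrative):

```python
MOD = 10**9 + 7  # large prime modulus keeps every entry small

def mat_mult(A, B, mod=MOD):
    """Multiply two square matrices, reducing every entry modulo `mod`."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k]:
                for j in range(n):
                    C[i][j] = (C[i][j] + A[i][k] * B[k][j]) % mod
    return C

def mat_pow(T, e, mod=MOD):
    """Raise T to the e-th power with binary exponentiation."""
    n = len(T)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e > 0:
        if e & 1:
            result = mat_mult(result, T, mod)
        T = mat_mult(T, T, mod)
        e >>= 1
    return result

def fibonacci(n, mod=MOD):
    """Return F(n) modulo `mod` via the transition matrix [[1, 1], [1, 0]]."""
    if n == 0:
        return 0
    P = mat_pow([[1, 1], [1, 0]], n - 1, mod)
    return P[0][0]  # T^(n-1)[0][0] = F(n)
```

The last line works because T^m = [[F(m+1), F(m)], [F(m), F(m-1)]], so the top-left entry of T^(n-1) is F(n).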
Let k be the size of the transition matrix and n be the exponent.
Time Complexity: O(k^3 * log n)
Space Complexity: O(k^2)
| Approach | Time | Space | Works for n = 10^18? |
|---|---|---|---|
| Naive recursion | O(2^n) | O(n) | No |
| DP (iterative) | O(n) | O(1) | No |
| Matrix exponentiation | O(log n) | O(1) | Yes |
The jump from O(n) to O(log n) is what makes this technique indispensable. For n = 10^18, O(n) would take roughly 30 years on a modern CPU. O(log n) finishes in about 60 squaring steps (at most around 120 matrix multiplications), which takes microseconds.