Last Updated: March 28, 2026
This problem builds on the classic Longest Increasing Subsequence (LIS) problem, but with a twist. Instead of just finding the length of the LIS, we need to count how many subsequences achieve that maximum length.
A subsequence is formed by deleting zero or more elements from the array without changing the order of the remaining elements. "Strictly increasing" means each element must be larger than the previous one (not equal).
The key challenge here is that we are not just tracking one thing (the longest length), but two things simultaneously: the length of the longest increasing subsequence ending at each position, and the number of ways to achieve that length. This dual-tracking is what makes the problem interesting. If you can solve standard LIS, you are halfway there. The other half is maintaining counts correctly as you build longer subsequences.
- 1 <= nums.length <= 2000 --> With n up to 2000, O(n^2) means at most 4 million operations, which is perfectly fine. O(n^3) would be 8 billion, which is too slow.
- -10^6 <= nums[i] <= 10^6 --> Values can be negative, so we cannot use value-indexed arrays. The large value range means we need comparison-based approaches.

The most direct way to solve this is to generate every possible increasing subsequence, find the maximum length among them, and count how many hit that maximum. We can use recursion (backtracking) to enumerate all subsequences: at each index, we either include the current element (if it is greater than the last included element) or skip it.
This works correctly but generates an exponential number of subsequences, which makes it impractical for anything beyond tiny inputs.
For nums = [1, 3, 5, 4, 7], the recursion explores all valid increasing subsequences. It finds two subsequences of maximum length 4: [1, 3, 5, 7] and [1, 3, 4, 7]. So the answer is 2.
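In code, the brute force can be sketched like this (the function name and signature are illustrative, not prescribed by the problem):

```python
def find_number_of_lis_brute(nums):
    n = len(nums)
    best_len = 0
    best_cnt = 0

    def backtrack(i, last, length):
        nonlocal best_len, best_cnt
        if i == n:
            # A complete include/skip pattern corresponds to exactly one subsequence.
            if length > best_len:
                best_len, best_cnt = length, 1
            elif length == best_len and length > 0:
                best_cnt += 1
            return
        backtrack(i + 1, last, length)           # skip nums[i]
        if last is None or nums[i] > last:       # include nums[i] only if strictly increasing
            backtrack(i + 1, nums[i], length + 1)

    backtrack(0, None, 0)
    return best_cnt
```

Every element faces a binary include/skip choice, so the recursion tree has up to 2^n leaves, which is why this only works for tiny inputs.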
This approach works correctly but is exponentially slow. What if we computed the LIS length and count ending at each index just once, reusing results from earlier indices?
The standard LIS DP approach computes, for each index i, the length of the longest increasing subsequence ending at i. We do this by looking at every earlier index j where nums[j] < nums[i] and taking the maximum length[j] + 1.
To count the number of LIS, we add a second array cnt alongside the length array. cnt[i] tracks how many increasing subsequences of length length[i] end at index i. When we examine a pair (j, i) where nums[j] < nums[i], there are three cases:
- length[j] + 1 > length[i]: we found a longer subsequence ending at i. Update length[i] and reset cnt[i] = cnt[j].
- length[j] + 1 == length[i]: we found another set of subsequences with the same max length ending at i. Add cnt[j] to cnt[i].
- length[j] + 1 < length[i]: this path is shorter than what we already have. Ignore it.

After filling both arrays, find the maximum value in length, then sum up cnt[i] for every index where length[i] equals that maximum.
The count propagates correctly because of how we build subsequences. When we set cnt[i] = cnt[j], we are saying "every way to build a LIS of length length[j] ending at j can be extended by appending nums[i]." When we add cnt[j] to cnt[i], we are saying "here are additional ways to reach the same length through a different predecessor." By the time we finish processing index i, cnt[i] holds the total number of distinct increasing subsequences of length length[i] that end at position i.
This works because DP processes indices left to right, so when we compute values for index i, all predecessors j < i already have their final length and cnt values.
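A sketch of this dual-tracking DP (assuming a plain function rather than the LeetCode class wrapper):

```python
def find_number_of_lis(nums):
    n = len(nums)
    length = [1] * n   # length[i]: LIS length ending at index i
    cnt = [1] * n      # cnt[i]: number of LIS of that length ending at i
    for i in range(1, n):
        for j in range(i):
            if nums[j] < nums[i]:
                if length[j] + 1 > length[i]:
                    length[i] = length[j] + 1
                    cnt[i] = cnt[j]            # longer path found: reset count
                elif length[j] + 1 == length[i]:
                    cnt[i] += cnt[j]           # same length via another predecessor
    max_len = max(length)
    return sum(c for l, c in zip(length, cnt) if l == max_len)
```

Note how cnt[i] is reset when a strictly longer predecessor chain appears, but accumulated when a chain of equal length is found; this mirrors the three cases above.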
1. Initialize length[i] = 1 and cnt[i] = 1 for all i (each element by itself is a subsequence of length 1, and there is exactly one such subsequence).
2. For each i from 1 to n-1, scan all previous indices j from 0 to i-1.
3. If nums[j] < nums[i]:
   - If length[j] + 1 > length[i]: set length[i] = length[j] + 1 and cnt[i] = cnt[j].
   - If length[j] + 1 == length[i]: add cnt[j] to cnt[i].
4. Track maxLen as you go.
5. Sum cnt[i] for all indices where length[i] == maxLen. Return that sum.

Time complexity is O(n^2) for the double loop. Space is O(n) for the two arrays (length and cnt), plus a few variables.

This DP approach is efficient enough for n <= 2000. But what if we needed to handle larger inputs? We can replace the linear scan of predecessors with a segment tree query.
The O(n^2) approach spends O(n) time per element querying all predecessors. We can speed this up by framing the query differently: for each nums[i], we want the maximum LIS length (and the count of subsequences achieving that length) among all elements with value strictly less than nums[i].
This is a range-max query, which a segment tree handles in O(log n). We coordinate-compress the values, build a segment tree over the compressed values, and for each element, query then update.
Each node in the segment tree stores a pair (maxLength, count): the maximum LIS length among subsequences ending with a value in that node's range, and the total number of subsequences achieving that length.
The segment tree stores the best (length, count) pair for each possible value in the compressed range. When processing nums[i], querying [1, rank(nums[i])-1] gives us the best LIS information among all values strictly less than nums[i] that we have processed so far. The merge operation correctly combines results: if two ranges have the same max length, we sum their counts. If one is longer, we take the longer one.
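The merge rule can be written as a tiny helper (the name is illustrative):

```python
def merge(a, b):
    # a and b are (maxLength, count) pairs from two value ranges.
    if a[0] == b[0]:
        return (a[0], a[1] + b[1])   # same length: counts add
    return a if a[0] > b[0] else b   # otherwise keep the longer one
```

This operation is associative and has identity (0, 0), which is exactly what a segment tree needs to combine child nodes and to answer range queries.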
This transforms the O(n) inner scan of the DP approach into an O(log n) query, while maintaining the same correctness guarantees.
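Putting the pieces together, here is one possible sketch of the segment-tree version (the helper names and the iterative tree layout are implementation choices, not prescribed by the approach):

```python
def find_number_of_lis_segtree(nums):
    # Coordinate compression: map each value to a rank in [1, m].
    sorted_vals = sorted(set(nums))
    rank = {v: i + 1 for i, v in enumerate(sorted_vals)}
    m = len(sorted_vals)

    # Iterative segment tree over ranks; each node holds (maxLength, count).
    size = 1
    while size < m:
        size *= 2
    tree = [(0, 0)] * (2 * size)

    def merge(a, b):
        if a[0] == b[0]:
            return (a[0], a[1] + b[1])  # same length: counts add
        return a if a[0] > b[0] else b  # otherwise keep the longer one

    def update(pos, val):
        # Merge val into leaf pos (0-indexed rank - 1), then recompute ancestors.
        i = pos + size
        tree[i] = merge(tree[i], val)
        i //= 2
        while i >= 1:
            tree[i] = merge(tree[2 * i], tree[2 * i + 1])
            i //= 2

    def query(lo, hi):
        # Best (length, count) over leaves lo..hi inclusive; (0, 0) if empty.
        res = (0, 0)
        lo += size
        hi += size + 1
        while lo < hi:
            if lo & 1:
                res = merge(res, tree[lo])
                lo += 1
            if hi & 1:
                hi -= 1
                res = merge(res, tree[hi])
            lo //= 2
            hi //= 2
        return res

    for x in nums:
        r = rank[x]
        prev_len, prev_cnt = query(0, r - 2)  # ranks strictly below r
        new_pair = (prev_len + 1, prev_cnt if prev_len > 0 else 1)
        update(r - 1, new_pair)

    return tree[1][1]  # root covers all values: its count is the answer
```

Each element costs one O(log n) query and one O(log n) update, for O(n log n) total, compared to the O(n^2) double loop of the DP version.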
1. Coordinate-compress nums to map the values to the range [1, m], where m is the number of distinct values.
2. Build a segment tree over [1, m] where each node stores (maxLength, count), initialized to (0, 0).
3. For each nums[i] (left to right):
   - Query the range [1, rank(nums[i]) - 1] to get the best (prevLen, prevCnt) among all smaller values.
   - Compute newLen = prevLen + 1. If prevLen == 0, set newCnt = 1. Otherwise, newCnt = prevCnt.
   - Update position compressed(nums[i]) by merging (newLen, newCnt) with whatever is already stored there.
4. After processing all elements, the root covers the entire value range, so its count is the answer.

For nums = [1, 3, 5, 4, 7] with compressed ranks {1:1, 3:2, 4:3, 5:4, 7:5}: