Last Updated: April 3, 2026
We're building a stock price tracker that handles an unruly stream of data. Records arrive out of order, and sometimes a correction comes in that overwrites a previous price at the same timestamp. At any point, we need to answer three questions: what's the most recent price, what's the highest price across all timestamps, and what's the lowest?
The "current" price is straightforward if we track which timestamp is the latest. The tricky part is maintaining the maximum and minimum efficiently, especially when corrections happen. If timestamp 1's price changes from 10 to 3, and 10 was the maximum, we can't just decrement a counter. We need to know what the new maximum is after removing 10 from the picture.
The key observation is that corrections make this fundamentally different from a simple "running max/min" problem. We need a data structure that supports both adding new prices and removing old ones (when they get corrected), while still efficiently answering max and min queries.
1 <= timestamp, price <= 10^9 -> Values can be large, so we can't use arrays indexed by timestamp or price. We need hash maps and dynamic structures.

At most 10^5 total calls -> We need each operation to be better than O(n) to be comfortable. O(log n) per call is the sweet spot, giving us roughly 10^5 * 17 = 1.7 million operations, well within time limits.

The most natural approach: use a hash map to store the latest price for each timestamp, and track which timestamp is the most recent. For current, just look up the latest timestamp's price. For maximum and minimum, scan through all stored prices every time.
This is simple and correct, but the scan is the problem. Every max/min query walks through every timestamp we've ever seen.
- update(timestamp, price): set map[timestamp] = price and update the latest timestamp if this one is newer.
- current(): return the price at the latest timestamp.
- maximum(): iterate through all values in the hash map and return the largest.
- minimum(): iterate through all values in the hash map and return the smallest.

Complexity: O(1) for update and current, O(n) for maximum and minimum. Each update is a hash map put and a comparison, but maximum and minimum scan every entry in the map, where n is the number of distinct timestamps.

Every time we call maximum or minimum, we scan through all n stored prices. What if we could maintain the max and min as prices are added and corrected, so that querying is O(log n) instead of O(n)?
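The brute-force approach can be sketched as follows. This is a minimal illustration (the class and field names are my own, not a required API): a hash map of live prices plus a linear scan for the extremes.

```java
import java.util.HashMap;
import java.util.Map;

// Brute-force sketch: hash map of timestamp -> price, full scan for max/min.
class StockPriceBruteForce {
    private final Map<Integer, Integer> prices = new HashMap<>();
    private int latestTimestamp = 0;

    void update(int timestamp, int price) {
        prices.put(timestamp, price);  // O(1); overwriting handles corrections
        latestTimestamp = Math.max(latestTimestamp, timestamp);
    }

    int current() {
        return prices.get(latestTimestamp);  // O(1)
    }

    int maximum() {
        int best = Integer.MIN_VALUE;
        for (int p : prices.values()) best = Math.max(best, p);  // O(n) scan
        return best;
    }

    int minimum() {
        int best = Integer.MAX_VALUE;
        for (int p : prices.values()) best = Math.min(best, p);  // O(n) scan
        return best;
    }
}
```

Note how a correction needs no special handling here: `put` simply overwrites, and the next scan sees only the new price.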
We want max and min in O(1), and a heap gives us exactly that. A max-heap keeps the largest element on top, and a min-heap keeps the smallest. But there's a catch: when a timestamp's price gets corrected, the old price is still sitting somewhere deep inside the heap, and standard heaps don't support removing arbitrary elements efficiently.
The trick is lazy deletion. Instead of removing the stale entry, we leave it in the heap. Whenever we peek at the top, we check: "Does this entry's price still match what's in our hash map for that timestamp?" If not, the entry is stale and we pop it. We keep popping until we find a valid entry.
Each heap entry is a (price, timestamp) pair.

- update(timestamp, price): update the hash map, push (price, timestamp) onto both heaps, and update the latest timestamp.
- current(): return the price at the latest timestamp from the hash map.
- maximum(): peek at the max-heap. If the top entry's price doesn't match the hash map for that timestamp, pop it (it's stale). Repeat until the top is valid, then return the top price.
- minimum(): same as maximum, but with the min-heap.

Complexity: update is O(log n) for the heap pushes; current is O(1); maximum and minimum are O(1) amortized, since each entry is pushed and popped at most once across all operations.

The heap approach works well but accumulates stale entries. Can we do better with a data structure that handles corrections cleanly, without lazy deletion?
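A sketch of the lazy-deletion design, under the same assumptions as above (illustrative names, int[] pairs of {price, timestamp} so both heaps can share one element type):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Lazy deletion: stale entries stay in the heaps until they surface at the top.
class StockPriceHeaps {
    private final Map<Integer, Integer> prices = new HashMap<>();
    // Entries are int[]{price, timestamp}; ordered by price.
    private final PriorityQueue<int[]> maxHeap =
        new PriorityQueue<>((a, b) -> Integer.compare(b[0], a[0]));
    private final PriorityQueue<int[]> minHeap =
        new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
    private int latestTimestamp = 0;

    void update(int timestamp, int price) {
        prices.put(timestamp, price);               // hash map holds the truth
        maxHeap.offer(new int[]{price, timestamp}); // old copies are left behind
        minHeap.offer(new int[]{price, timestamp});
        latestTimestamp = Math.max(latestTimestamp, timestamp);
    }

    int current() {
        return prices.get(latestTimestamp);
    }

    int maximum() {
        // Pop entries whose price no longer matches the live price for
        // that timestamp: they were overwritten by a correction.
        while (prices.get(maxHeap.peek()[1]) != maxHeap.peek()[0]) {
            maxHeap.poll();
        }
        return maxHeap.peek()[0];
    }

    int minimum() {
        while (prices.get(minHeap.peek()[1]) != minHeap.peek()[0]) {
            minHeap.poll();
        }
        return minHeap.peek()[0];
    }
}
```

The validity check compares the heap entry against the hash map, which always holds the latest price per timestamp, so a valid entry is guaranteed to exist whenever at least one update has occurred.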
Instead of heaps with lazy deletion, we can use a sorted data structure that maps each price to how many timestamps currently have that price. When a new price is added, we increment its count. When a correction happens, we decrement the old price's count (and remove it if the count hits zero). The max and min are always at the ends of the sorted structure.
In Java, TreeMap does this perfectly. It's a balanced BST that gives O(log n) access to the first key (minimum) and last key (maximum), plus O(log n) for insertions and deletions.
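A quick look at the `java.util.TreeMap` calls the approach relies on: `firstKey()`/`lastKey()` return the smallest and largest key, and `merge()` maintains a count per price.

```java
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<Integer, Integer> counts = new TreeMap<>();  // price -> #timestamps
        counts.merge(10, 1, Integer::sum);  // one timestamp at price 10
        counts.merge(3, 1, Integer::sum);   // one at price 3
        counts.merge(10, 1, Integer::sum);  // a second timestamp at price 10
        System.out.println(counts.firstKey());  // minimum price: 3
        System.out.println(counts.lastKey());   // maximum price: 10
        // A correction removes one occurrence of 10; drop the key at zero.
        if (counts.merge(10, -1, Integer::sum) == 0) counts.remove(10);
        System.out.println(counts.lastKey());   // still 10: one copy remains
    }
}
```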
The sorted map maintains an exact view of all current prices at all times. There are no stale entries. When a correction comes in, the old price is explicitly removed and the new price is explicitly added. The max and min are always at the endpoints of the sorted structure. This is conceptually cleaner than lazy deletion because the data structure is always in a consistent state.
- update(timestamp, price): if the timestamp already exists, get the old price and decrement its count in the sorted map (removing the key if the count reaches zero). Set the new price in the hash map, increment the new price's count in the sorted map, and update the latest timestamp.
- current(): return the price at the latest timestamp.
- maximum(): return the last (highest) key in the sorted map.
- minimum(): return the first (lowest) key in the sorted map.

Complexity: update is O(log n) for the sorted-map operations (decrement, remove, increment); current is O(1) for the hash map lookup; maximum and minimum are O(log n) for accessing the last/first key of the sorted map.
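Putting it together, a sketch of the sorted-map design (again with illustrative names): a hash map from timestamp to price, and a TreeMap counting how many timestamps currently sit at each price.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sorted-map design: no stale entries, always consistent.
class StockPriceSorted {
    private final Map<Integer, Integer> prices = new HashMap<>();     // timestamp -> price
    private final TreeMap<Integer, Integer> counts = new TreeMap<>(); // price -> #timestamps
    private int latestTimestamp = 0;

    void update(int timestamp, int price) {
        Integer old = prices.put(timestamp, price);
        if (old != null) {
            // Correction: explicitly remove one occurrence of the old price.
            if (counts.merge(old, -1, Integer::sum) == 0) counts.remove(old);
        }
        counts.merge(price, 1, Integer::sum);
        latestTimestamp = Math.max(latestTimestamp, timestamp);
    }

    int current() { return prices.get(latestTimestamp); }  // O(1)
    int maximum() { return counts.lastKey();  }            // O(log n)
    int minimum() { return counts.firstKey(); }            // O(log n)
}
```

Compared to the heap version, the correction path does more work up front (one decrement, one increment), but queries never have to skip over dead entries, and memory stays proportional to the number of live prices.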