Last Updated: March 29, 2026
We are simulating a process: repeatedly pick the two heaviest stones, smash them, and put the result back (if any). This continues until one or zero stones remain.
The core challenge is efficiency. Each round, we need to find the two largest elements. After removing them and possibly inserting a new value, we need to find the two largest again. A naive approach would sort every round, but there is a data structure built exactly for this kind of repeated "give me the maximum" operation: a max-heap (priority queue).
1 <= stones.length <= 30 → Extremely small input. Even O(n^3) would finish instantly. Any correct approach will pass, so the focus is on learning the right technique.

1 <= stones[i] <= 1000 → All weights are positive integers, so we never deal with zero-weight stones in the input. The difference y - x is always non-negative.

The simplest way to find the two heaviest stones is to sort the array. After sorting, the two largest elements are at the end. We smash them, compute the difference, remove the two stones, and if the difference is non-zero, insert it back into the array. Then we sort again and repeat.
It is not the most efficient approach, but with n up to 30, it is perfectly fine and easy to reason about.
a. Sort the list.
b. Take the last two elements (the two heaviest).
c. Remove them both from the list.
d. If they are not equal, add the difference back to the list.
e. Repeat until at most one stone remains; return the last stone's weight, or 0 if none remain.
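The steps above can be sketched in Python as follows (the function name is my own choice):

```python
def last_stone_weight_sorting(stones):
    """Simulate by re-sorting every round. Fine for n <= 30."""
    stones = list(stones)          # work on a copy
    while len(stones) > 1:
        stones.sort()              # a. sort ascending
        y = stones.pop()           # b/c. heaviest stone
        x = stones.pop()           # b/c. second heaviest
        if y != x:                 # d. keep the non-zero remainder
            stones.append(y - x)
    return stones[0] if stones else 0
```

Each round costs O(n log n) for the sort, and there are at most n - 1 rounds, so the whole simulation is O(n^2 log n) in the worst case.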
This approach works but re-sorts the entire list every round just to find two elements. What if we used a data structure that keeps the maximum accessible at all times?
A max-heap (priority queue) is designed exactly for this scenario. It lets you extract the maximum element in O(log n) and insert a new element in O(log n). Instead of sorting every round, we build the heap once and then repeatedly extract the two largest, smash them, and push the result back.
In Java, PriorityQueue is a min-heap by default, so we pass a reverse comparator. In Python, heapq is also a min-heap, so we negate all the values: push -stone and when we pop, negate again to get the original value.
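A quick sketch of the negation trick with Python's heapq (the sample values are illustrative):

```python
import heapq

stones = [2, 7, 4, 1, 8, 1]
# heapq is a min-heap, so store negated values to simulate a max-heap.
heap = [-s for s in stones]
heapq.heapify(heap)

largest = -heapq.heappop(heap)   # negate on pop to recover the original value
second = -heapq.heappop(heap)
```

Here `largest` is 8 and `second` is 7: the most negative stored value corresponds to the largest original weight.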
A max-heap maintains the heap property: the parent is always larger than or equal to its children. This means the root is always the maximum element. When we extract the root, the heap restructures itself in O(log n) time to bring the next largest element to the top. When we insert a new element, it bubbles up to its correct position in O(log n) time. So each round of our simulation does O(log n) work instead of O(n log n) for sorting.
a. Extract the two largest values (y and x, with y >= x).
b. If y != x, push (y - x) back into the heap.
c. Repeat until the heap holds at most one value; return it, or 0 if the heap is empty.
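Putting it together, here is a Python sketch of the heap-based simulation (the function name is my own):

```python
import heapq

def last_stone_weight(stones):
    """Heap-based simulation: O(n) to build, O(log n) per round."""
    heap = [-s for s in stones]   # negate to treat heapq's min-heap as a max-heap
    heapq.heapify(heap)           # O(n) heap construction
    while len(heap) > 1:
        y = -heapq.heappop(heap)  # a. largest
        x = -heapq.heappop(heap)  # a. second largest (y >= x)
        if y != x:                # b. push back the difference
            heapq.heappush(heap, -(y - x))
    return -heap[0] if heap else 0
```

With at most n - 1 rounds of O(log n) pops and pushes, the total cost is O(n log n), compared with O(n^2 log n) for re-sorting every round.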