Last Updated: March 31, 2026
Counting Sort is a non-comparison-based sorting algorithm that works by counting the frequency of each element. Instead of comparing elements, it uses the values themselves as indices to determine their correct positions in the sorted output.
This approach allows Counting Sort to achieve linear time complexity O(n + k), where k is the range of input values. It is especially efficient when the range of numbers is not significantly larger than the number of elements.
However, Counting Sort is only applicable when the input consists of integers within a limited range, and it requires additional space to store counts.
In this chapter, you will learn how Counting Sort works step by step, how to implement it, and when it is the right choice over comparison-based sorting algorithms.
Counting Sort is a non-comparison-based sorting algorithm. It works by counting the number of occurrences of each distinct value in the input, then using those counts to place each element directly into its correct position in the output.
Here is the core idea in three sentences. First, create a count array where index i stores how many times value i appears in the input. Second, transform the count array into a cumulative (prefix sum) array, so each index now tells you where the last element with that value should go. Third, iterate through the input from right to left, placing each element at the position indicated by the cumulative count and decrementing that count.
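The three phases just described can be sketched in a few lines of Python. This is a minimal illustrative version that assumes non-negative integers (the function name is mine, not a standard one):

```python
def counting_sort_simple(arr):
    """Minimal counting sort for non-negative integers (illustrative sketch)."""
    if not arr:
        return []
    k = max(arr) + 1
    # Phase 1: count how many times each value appears.
    count = [0] * k
    for v in arr:
        count[v] += 1
    # Phase 2: prefix sums -- count[i] becomes the number of elements <= i.
    for i in range(1, k):
        count[i] += count[i - 1]
    # Phase 3: place elements right to left, decrementing counts.
    output = [0] * len(arr)
    for v in reversed(arr):
        count[v] -= 1
        output[count[v]] = v
    return output
```

For example, `counting_sort_simple([4, 2, 2, 8, 3, 3, 1])` returns `[1, 2, 2, 3, 3, 4, 8]`.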
The algorithm has three key properties: it runs in linear O(n + k) time, it is stable (equal elements keep their relative order), and it is not in-place (it needs O(n + k) extra space).
The key insight is that counting replaces comparing. Instead of determining relative order by comparing pairs of elements, we determine absolute positions by knowing exactly how many elements are smaller than each value.
Every comparison-based sorting algorithm (Merge Sort, Quick Sort, Heap Sort) must make Ω(n log n) comparisons in the worst case. This is provable using a decision tree argument: with n! possible permutations of n elements, you need at least log2(n!) comparisons to distinguish between them, and log2(n!) is Θ(n log n).
Counting Sort sidesteps this lower bound because it never compares elements. It uses the values themselves as indices into an array, extracting ordering information from the values rather than from pairwise comparisons. This is why the time complexity depends on both n (number of elements) and k (range of values) rather than just n.
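The lower-bound arithmetic can be checked numerically: Python's `math.lgamma(n + 1)` gives ln(n!) exactly enough for this purpose (the helper name below is my own):

```python
import math

def log2_factorial(n):
    """log2(n!) computed via lgamma(n + 1) = ln(n!)."""
    return math.lgamma(n + 1) / math.log(2)

# For n = 1000, any comparison sort needs at least ~8529 comparisons,
# the same order of magnitude as n * log2(n) ~= 9966.
n = 1000
print(round(log2_factorial(n)))   # lower bound on comparisons
print(round(n * math.log2(n)))    # n log2 n, for comparison
```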
Let us walk through the algorithm step by step, then tie it together with a flowchart.
Scan the input array to find the minimum and maximum values. The range k = max - min + 1 determines the size of the count array. If elements are all non-negative with a known maximum, you can skip the scan and use the known range.
Create a count array of size k, initialized to all zeros. Iterate through the input and increment count[value - min] for each element. After this step, count[i] tells you exactly how many times the value (i + min) appears in the input.
Transform the count array by computing a running sum from left to right. Replace each count[i] with the sum of all counts from index 0 through i. After this step, count[i] tells you how many elements have a value less than or equal to (i + min). This means count[i] is the position (one-indexed) where the last element with value (i + min) should be placed in the output.
Create an output array of the same size as the input. Iterate through the input array from right to left. For each element, look up its cumulative count, place the element at position count[value - min] - 1 in the output, and decrement the count. The right-to-left traversal is what makes the sort stable, preserving the relative order of equal elements.
Copy the output array back into the original array (or simply return the output array).
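The five steps above can be assembled into a complete implementation. This is a sketch that follows the steps exactly; the function name and the choice to return a new list rather than copy back in place are my own:

```python
def counting_sort(arr):
    """Stable counting sort for integers in a (reasonably small) range."""
    if not arr:
        return []
    # Step 1: find the range of values.
    lo, hi = min(arr), max(arr)
    k = hi - lo + 1
    # Step 2: count occurrences; value v maps to index v - lo.
    count = [0] * k
    for v in arr:
        count[v - lo] += 1
    # Step 3: prefix sums -- count[i] = number of elements <= lo + i.
    for i in range(1, k):
        count[i] += count[i - 1]
    # Step 4: place elements right to left to keep the sort stable.
    output = [0] * len(arr)
    for v in reversed(arr):
        count[v - lo] -= 1
        output[count[v - lo]] = v
    # Step 5: return the sorted output (the caller may copy it back).
    return output
```

Because Step 1 computes the minimum, this version also handles negative integers: `counting_sort([-2, 5, -2, 0])` returns `[-2, -2, 0, 5]`.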
The reason for the right-to-left traversal in Step 4 deserves emphasis. If two elements have the same value, the one that appears later in the input should end up at a higher index in the output. By going right to left and decrementing the count each time, we place the rightmost duplicate first (at the highest valid position), then the next one at the position just before it, and so on. This preserves the original relative order of equal elements.
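Stability is easiest to see when elements carry a payload. The sketch below sorts hypothetical (key, label) tuples by key only, so duplicated keys reveal whether input order survives (the record format and names are illustrative assumptions):

```python
def counting_sort_pairs(pairs, key_range):
    """Stably sort (key, label) pairs by key; keys lie in [0, key_range)."""
    count = [0] * key_range
    for k, _ in pairs:
        count[k] += 1
    for i in range(1, key_range):
        count[i] += count[i - 1]
    output = [None] * len(pairs)
    # Right to left: of two equal keys, the later one in the input is
    # placed at the higher index, preserving their original order.
    for pair in reversed(pairs):
        count[pair[0]] -= 1
        output[count[pair[0]]] = pair
    return output

records = [(2, "a"), (1, "b"), (2, "c"), (1, "d")]
print(counting_sort_pairs(records, 3))
# [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')] -- 'b' before 'd', 'a' before 'c'
```

If the placement loop ran left to right instead, the duplicates would come out reversed and the sort would no longer be stable.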
Let us trace through the algorithm with the array [4, 2, 2, 8, 3, 3, 1] to see exactly how each step works.
We need a count array of size 8, with indices 0 through 7 representing values 1 through 8.
We scan the input and increment the corresponding count for each value. Remember, value v maps to index v - min = v - 1. For [4, 2, 2, 8, 3, 3, 1] the resulting count array is:

`count = [1, 2, 2, 1, 0, 0, 0, 1]`

Reading this: value 1 appears once, value 2 appears twice, value 3 appears twice, value 4 appears once, values 5-7 do not appear, and value 8 appears once.
We transform the count array into cumulative counts by adding each element to the sum of all elements before it:

`count = [1, 3, 5, 6, 6, 6, 6, 7]`

Now count[i] tells us: "the last element with value (i + 1) should go at position count[i] - 1 in the output." For example, count[1] = 3 means the last element with value 2 belongs at index 3 - 1 = 2.
We iterate the input from right to left, placing each element at index count - 1 and decrementing its count:

- 1 (count[0] = 1): place at index 0, count[0] becomes 0
- 3 (count[2] = 5): place at index 4, count[2] becomes 4
- 3 (count[2] = 4): place at index 3
- 8 (count[7] = 7): place at index 6
- 2 (count[1] = 3): place at index 2, count[1] becomes 2
- 2 (count[1] = 2): place at index 1
- 4 (count[3] = 6): place at index 5
The final sorted array is [1, 2, 2, 3, 3, 4, 8].
Notice how the two 2s and the two 3s maintained their original relative order. The 2 at index 1 in the input ended up before the 2 at index 2, and the 3 at index 4 ended up before the 3 at index 5. That is stability in action, a direct result of the right-to-left traversal.
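The whole trace can be checked mechanically. The snippet below reproduces each intermediate array for [4, 2, 2, 8, 3, 3, 1] (variable names are mine):

```python
arr = [4, 2, 2, 8, 3, 3, 1]
lo = min(arr)                       # 1, so value v maps to index v - 1

# Step 2: raw counts.
count = [0] * (max(arr) - lo + 1)
for v in arr:
    count[v - lo] += 1
print(count)        # [1, 2, 2, 1, 0, 0, 0, 1]

# Step 3: cumulative counts.
for i in range(1, len(count)):
    count[i] += count[i - 1]
print(count)        # [1, 3, 5, 6, 6, 6, 6, 7]

# Step 4: right-to-left placement.
output = [0] * len(arr)
for v in reversed(arr):
    count[v - lo] -= 1
    output[count[v - lo]] = v
print(output)       # [1, 2, 2, 3, 3, 4, 8]
```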
| Metric | Value | Explanation |
|---|---|---|
| Time Complexity | O(n + k) | Finding min/max is O(n), counting is O(n), prefix sum is O(k), placement is O(n) |
| Space Complexity | O(n + k) | Count array uses O(k) space, output array uses O(n) space |
| Stable | Yes | Right-to-left placement preserves relative order of equal elements |
| In-place | No | Requires O(n + k) extra space |
| Comparison-based | No | Never compares two input elements |
| Adaptive | No | Does the same work regardless of input order |
Here, n is the number of elements in the input and k is the range of values (max - min + 1).
The time complexity breaks down into four linear passes: finding the min and max is O(n), counting is O(n), the prefix sum is O(k), and placement is O(n), giving O(n + k) overall.
When k is O(n) or smaller (for example, sorting exam scores from 0 to 100 for 10,000 students), the algorithm runs in O(n) time. When k is much larger than n (say, sorting 10 integers that range from 0 to 10 million), the O(k) term dominates and Counting Sort becomes impractical.
| Algorithm | Time (Average) | Time (Worst) | Space | Stable | Comparison-based |
|---|---|---|---|---|---|
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | Yes | No |
| Merge Sort | O(n log n) | O(n log n) | O(n) | Yes | Yes |
| Quick Sort | O(n log n) | O(n^2) | O(log n) | No | Yes |
| Heap Sort | O(n log n) | O(n log n) | O(1) | No | Yes |
| Radix Sort | O(d(n + k)) | O(d(n + k)) | O(n + k) | Yes | No |
Counting Sort wins on raw speed when k is small, but it pays for that speed with extra memory and a strict constraint on input type.
A practical rule of thumb: use Counting Sort when k (the range of values) is at most O(n). If k is significantly larger than n, comparison-based sorts like Merge Sort or Quick Sort will outperform it in both time and space.