Counting Sort

Last Updated: March 31, 2026

Ashish Pratap Singh

Counting Sort is a non-comparison-based sorting algorithm that works by counting the frequency of each element. Instead of comparing elements, it uses the values themselves as indices to determine their correct positions in the sorted output.

This approach allows Counting Sort to achieve O(n + k) time complexity, where k is the range of input values; that is linear in n whenever k is O(n). It is especially efficient when the range of numbers is not significantly larger than the number of elements.

However, Counting Sort is only applicable when the input consists of integers within a limited range, and it requires additional space to store counts.

In this chapter, you will learn how Counting Sort works step by step, how to implement it, and when it is the right choice over comparison-based sorting algorithms.

What Is Counting Sort?

Counting Sort is a non-comparison-based sorting algorithm. It works by counting the number of occurrences of each distinct value in the input, then using those counts to place each element directly into its correct position in the output.

Here is the core idea in three sentences. First, create a count array where index i stores how many times value i appears in the input. Second, transform the count array into a cumulative (prefix sum) array, so each index now tells you where the last element with that value should go. Third, iterate through the input from right to left, placing each element at the position indicated by the cumulative count and decrementing that count.

The algorithm has three key properties:

  1. Non-comparison-based: It never compares two input elements against each other
  2. Stable: Equal elements appear in the output in the same relative order as the input (when implemented correctly)
  3. Linear time: Runs in O(n + k) where k is the range of input values

The key insight is that counting replaces comparing. Instead of determining relative order by comparing pairs of elements, we determine absolute positions by knowing exactly how many elements are smaller than each value.
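To make the idea concrete, here is a minimal, non-stable sketch in Python: it rebuilds the array directly from raw counts, skipping the prefix-sum and placement steps that the full, stable version described in this chapter uses. It assumes non-negative integers.

```python
def counting_sort_simple(arr):
    # Minimal, non-stable variant: rebuild the array from raw counts.
    # Assumes non-negative integers.
    if not arr:
        return []
    count = [0] * (max(arr) + 1)    # one slot per possible value
    for v in arr:
        count[v] += 1               # count occurrences
    result = []
    for value, c in enumerate(count):
        result.extend([value] * c)  # emit each value c times, in order
    return result

print(counting_sort_simple([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```

This variant is enough for plain numbers, but it cannot sort records by key, because identical keys collapse into a single count; that is what the stable, prefix-sum version fixes.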

Breaking the O(n log n) Barrier

Every comparison-based sorting algorithm (Merge Sort, Quick Sort, Heap Sort) must make Ω(n log n) comparisons in the worst case. This is provable using a decision tree argument: with n! possible permutations of n elements, you need at least log2(n!) comparisons to distinguish between them, and log2(n!) is Θ(n log n).

Counting Sort sidesteps this lower bound because it never compares elements. It uses the values themselves as indices into an array, extracting ordering information from the values rather than from pairwise comparisons. This is why the time complexity depends on both n (number of elements) and k (range of values) rather than just n.
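As a quick sanity check on the lower bound, the minimum worst-case comparison count, ceil(log2(n!)), can be computed directly (a small illustrative snippet):

```python
import math

def min_comparisons(n):
    # Decision-tree lower bound: distinguishing all n! permutations
    # requires at least ceil(log2(n!)) comparisons in the worst case.
    return math.ceil(math.log2(math.factorial(n)))

print(min_comparisons(10))  # 22: 10! = 3,628,800 permutations to tell apart
```

For 10 elements, any comparison sort needs at least 22 comparisons in the worst case; Counting Sort needs zero.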

How It Works

Let us walk through the algorithm step by step, then tie it together with a flowchart.

Step 1: Find the Range

Scan the input array to find the minimum and maximum values. The range k = max - min + 1 determines the size of the count array. If elements are all non-negative with a known maximum, you can skip the scan and use the known range.

Step 2: Count Occurrences

Create a count array of size k, initialized to all zeros. Iterate through the input and increment count[value - min] for each element. After this step, count[i] tells you exactly how many times the value (i + min) appears in the input.

Step 3: Compute Cumulative Counts (Prefix Sum)

Transform the count array by computing a running sum from left to right. Replace each count[i] with the sum of all counts from index 0 through i. After this step, count[i] tells you how many elements have a value less than or equal to (i + min). This means count[i] is the position (one-indexed) where the last element with value (i + min) should be placed in the output.
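In Python, this transformation is a one-liner with itertools.accumulate, shown here on an example count array:

```python
from itertools import accumulate

count = [1, 2, 2, 1, 0, 0, 0, 1]       # counts for values min..max
cumulative = list(accumulate(count))   # running sum, left to right
print(cumulative)  # [1, 3, 5, 6, 6, 6, 6, 7]
```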

Step 4: Build the Output Array (Right to Left)

Create an output array of the same size as the input. Iterate through the input array from right to left. For each element, look up its cumulative count, place the element at position count[value - min] - 1 in the output, and decrement the count. The right-to-left traversal is what makes the sort stable, preserving the relative order of equal elements.

Step 5: Copy Back

Copy the output array back into the original array (or simply return the output array).

The reason for the right-to-left traversal in Step 4 deserves emphasis. If two elements have the same value, the one that appears later in the input should end up at a higher index in the output. By going right to left and decrementing the count each time, we place the rightmost duplicate first (at the highest valid position), then the next one at the position just before it, and so on. This preserves the original relative order of equal elements.

Code Implementation
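The five steps translate directly into Python (a straightforward sketch; variable names are my own):

```python
def counting_sort(arr):
    """Stable counting sort for integer arrays."""
    if not arr:
        return []

    # Step 1: find the range
    mn, mx = min(arr), max(arr)
    k = mx - mn + 1

    # Step 2: count occurrences
    count = [0] * k
    for v in arr:
        count[v - mn] += 1

    # Step 3: prefix sums -- count[i] = number of elements <= (i + mn)
    for i in range(1, k):
        count[i] += count[i - 1]

    # Step 4: build output right to left (this is what makes it stable)
    output = [None] * len(arr)
    for v in reversed(arr):
        output[count[v - mn] - 1] = v
        count[v - mn] -= 1

    # Step 5: return the sorted output (or copy it back into arr)
    return output

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```

The min offset lets the same code handle negative values; if you know the input is non-negative, you can set mn = 0 and skip the scan.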

Example Walkthrough

Let us trace through the algorithm with the array [4, 2, 2, 8, 3, 3, 1] to see exactly how each step works.

Step 1: Find the Range

Scanning the input gives min = 1 and max = 8, so k = 8 - 1 + 1 = 8. We need a count array of size 8, with indices 0 through 7 representing values 1 through 8.

Step 2: Count Occurrences

We scan the input and increment the corresponding count for each value. Remember, value v maps to index v - min = v - 1.

The resulting count array is [1, 2, 2, 1, 0, 0, 0, 1]. Reading this: value 1 appears once, value 2 appears twice, value 3 appears twice, value 4 appears once, values 5-7 do not appear, and value 8 appears once.

Step 3: Compute Prefix Sums

We transform the count array into cumulative counts by adding each element to the sum of all elements before it, producing [1, 3, 5, 6, 6, 6, 6, 7].

Now count[i] tells us: "the last element with value (i + 1) should go at position count[i] - 1 in the output." For example, count[1] = 3 means the last element with value 2 should go at index 2 (the third position, zero-indexed).

Step 4: Build Output (Right to Left)

We iterate the input from right to left, placing each element and decrementing its count. The last element, 1, has cumulative count 1, so it lands at index 0. Next, the 3 at input index 5 has cumulative count 5, so it lands at index 4, and the count drops to 4, which sends the earlier 3 to index 3. Continuing leftward places 8 at index 6, the two 2s at indices 2 and 1, and finally 4 at index 5.

The final sorted array is [1, 2, 2, 3, 3, 4, 8].

Notice how the two 2s and the two 3s maintained their original relative order. The 2 at index 1 in the input ended up before the 2 at index 2, and the 3 at index 4 ended up before the 3 at index 5. That is stability in action, a direct result of the right-to-left traversal.
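Stability matters most when sorting records by an integer key rather than bare numbers. A hypothetical helper, counting_sort_by_key, makes it visible: equal keys keep their input order, so the payloads do not get shuffled.

```python
def counting_sort_by_key(items, key):
    # Stable counting sort of arbitrary records by an integer key.
    # `key` maps each item to an int.
    if not items:
        return []
    keys = [key(it) for it in items]
    mn, mx = min(keys), max(keys)
    count = [0] * (mx - mn + 1)
    for kv in keys:
        count[kv - mn] += 1
    for i in range(1, len(count)):
        count[i] += count[i - 1]          # prefix sums
    output = [None] * len(items)
    for it, kv in zip(reversed(items), reversed(keys)):
        output[count[kv - mn] - 1] = it   # right-to-left placement
        count[kv - mn] -= 1
    return output

records = [(2, "a"), (3, "b"), (2, "c"), (1, "d"), (3, "e")]
print(counting_sort_by_key(records, key=lambda r: r[0]))
# [(1, 'd'), (2, 'a'), (2, 'c'), (3, 'b'), (3, 'e')]
```

Note that (2, "a") stays ahead of (2, "c") and (3, "b") ahead of (3, "e"), exactly as they appeared in the input.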

Complexity Analysis

| Metric | Value | Explanation |
| --- | --- | --- |
| Time Complexity | O(n + k) | Finding min/max is O(n), counting is O(n), prefix sum is O(k), placement is O(n) |
| Space Complexity | O(n + k) | Count array uses O(k) space, output array uses O(n) space |
| Stable | Yes | Right-to-left placement preserves relative order of equal elements |
| In-place | No | Requires O(n + k) extra space |
| Comparison-based | No | Never compares two input elements |
| Adaptive | No | Does the same work regardless of input order |

Here, n is the number of elements in the input and k is the range of values (max - min + 1).

The time complexity breaks down as follows:

  • Finding min and max: One pass through the array, O(n)
  • Counting occurrences: One pass through the array, O(n)
  • Computing prefix sums: One pass through the count array, O(k)
  • Building output: One pass through the array, O(n)
  • Total: O(n + n + k + n) = O(n + k)

When k is O(n) or smaller (for example, sorting exam scores from 0 to 100 for 10,000 students), the algorithm runs in O(n) time. When k is much larger than n (say, sorting 10 integers that range from 0 to 10 million), the O(k) term dominates and Counting Sort becomes impractical.

Comparison with Other Sorting Algorithms

| Algorithm | Time (Average) | Time (Worst) | Space | Stable | Comparison-based |
| --- | --- | --- | --- | --- | --- |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | Yes | No |
| Merge Sort | O(n log n) | O(n log n) | O(n) | Yes | Yes |
| Quick Sort | O(n log n) | O(n^2) | O(log n) | No | Yes |
| Heap Sort | O(n log n) | O(n log n) | O(1) | No | Yes |
| Radix Sort | O(d(n + k)) | O(d(n + k)) | O(n + k) | Yes | No |

Counting Sort wins on raw speed when k is small, but it pays for that speed with extra memory and a strict constraint on input type.

When to Use Counting Sort

Good Use Cases

  • Sorting grades or scores: Exam scores range from 0 to 100. With k = 101 and potentially thousands of students, Counting Sort runs in O(n) and is much faster than comparison sorts.
  • Sorting ages: Human ages range from 0 to roughly 150. Sorting a database of millions of records by age is a perfect fit.
  • Sorting characters: ASCII characters have values 0-127, giving k = 128. Sorting characters in a string is a classic application.
  • As a subroutine for Radix Sort: Radix Sort processes one digit at a time, and each digit has a small range (0-9 for decimal). Counting Sort handles each digit pass efficiently and, critically, provides the stability that Radix Sort requires.
  • Sorting with known, bounded keys: Any scenario where input values come from a small, known universe.
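As a concrete instance of the character-sorting use case, a hypothetical sort_string helper over the 128 ASCII codes:

```python
def sort_string(s):
    # Counting sort over ASCII characters: k = 128 no matter how long
    # the string is, so the sort runs in O(n + 128) = O(n).
    count = [0] * 128
    for ch in s:
        count[ord(ch)] += 1
    return "".join(chr(code) * count[code] for code in range(128))

print(sort_string("banana"))  # aaabnn
```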

When Not to Use

  • Large range of values: If you are sorting 1,000 integers that range from 0 to 10,000,000, the count array alone uses 10 million entries. The O(k) space and time make this far worse than O(n log n) comparison sorts.
  • Floating-point numbers: Counting Sort needs integer indices. You cannot directly use a float like 3.14 as an array index. (You could multiply and round, but this introduces precision issues.)
  • Strings or complex objects without integer keys: Counting Sort operates on integer keys. If your data does not naturally map to a small integer range, it is the wrong tool.
  • When stability is not needed and space is limited: If you do not care about stability and want O(1) extra space, Heap Sort or in-place Quick Sort are better choices.

The Decision Rule

A practical rule of thumb: use Counting Sort when k (the range of values) is at most O(n). If k is significantly larger than n, comparison-based sorts like Merge Sort or Quick Sort will outperform it in both time and space.