
Selection Sort

Last Updated: March 31, 2026

Ashish Pratap Singh

Selection Sort is a simple and intuitive sorting algorithm based on the idea of repeatedly selecting the smallest element from the unsorted portion of the array and placing it in its correct position.

Unlike some other algorithms, Selection Sort minimizes the number of swaps. On each pass, it scans the remaining elements, finds the minimum, and swaps it with the current position. This makes its behavior predictable and easy to trace.

Although it is not efficient for large datasets, Selection Sort is valuable for understanding the fundamentals of sorting, especially the trade-off between comparisons and swaps.

In this chapter, you will learn how Selection Sort works step by step, how to implement it, and where it fits in the broader landscape of sorting algorithms.

What Is Selection Sort?

Selection sort is a comparison-based sorting algorithm that works by repeatedly finding the minimum element from the unsorted portion of the array and placing it at the beginning of the unsorted portion.

At any point during the algorithm, the array is divided into two regions:

  1. Sorted region (left side): contains elements that are already in their final sorted positions.
  2. Unsorted region (right side): contains elements that still need to be processed.

In each pass, the algorithm selects the smallest element from the unsorted region and swaps it with the first element of the unsorted region. This grows the sorted region by one element and shrinks the unsorted region by one.
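A single pass of this process can be sketched in Java as follows (a minimal illustration; the class and method names here are ours, not from any library):

```java
public class SelectionPass {
    // Performs one selection-sort pass: finds the minimum in arr[start..]
    // and swaps it into position `start`, growing the sorted region by one.
    static void selectOnePass(int[] arr, int start) {
        int minIndex = start;
        for (int j = start + 1; j < arr.length; j++) {
            if (arr[j] < arr[minIndex]) {
                minIndex = j;              // remember the smallest seen so far
            }
        }
        if (minIndex != start) {           // at most one swap per pass
            int temp = arr[start];
            arr[start] = arr[minIndex];
            arr[minIndex] = temp;
        }
    }

    public static void main(String[] args) {
        int[] arr = {29, 10, 14, 37, 13};
        selectOnePass(arr, 0);             // sorted region is now [10]
        System.out.println(java.util.Arrays.toString(arr)); // [10, 29, 14, 37, 13]
    }
}
```

After this one pass, index 0 holds the global minimum and belongs to the sorted region; the remaining indices form the unsorted region.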

How It Differs from Bubble Sort

If you have already studied bubble sort, you might wonder how selection sort is different. The key distinction lies in how they move elements into position.

Bubble sort repeatedly compares adjacent elements and swaps them if they are out of order. It performs many swaps per pass, bubbling the largest element to the end.

Selection sort scans the entire unsorted portion to find the minimum, then performs at most one swap per pass. As a result, selection sort performs at most n-1 swaps for an array of n elements, while bubble sort can perform as many as O(n^2) swaps in the worst case.

| Property | Bubble Sort | Selection Sort |
| --- | --- | --- |
| Swaps per pass | Up to n-1 | At most 1 |
| Total swaps (worst case) | O(n^2) | O(n) |
| Comparison count | O(n^2) | O(n^2) |
| Early termination | Yes (if no swaps) | No |
| Stable | Yes | No |

This difference matters when swaps are expensive. If you are sorting large objects where moving data is costly, selection sort's minimal number of swaps gives it a practical advantage over bubble sort.
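The difference in swap counts can be measured directly. The sketch below counts swaps made by both algorithms on a reversed array (class and method names are ours):

```java
import java.util.Arrays;

public class SwapCount {
    // Counts swaps performed by bubble sort on a copy of the input.
    static int bubbleSortSwaps(int[] input) {
        int[] arr = Arrays.copyOf(input, input.length);
        int swaps = 0;
        for (int i = 0; i < arr.length - 1; i++) {
            for (int j = 0; j < arr.length - 1 - i; j++) {
                if (arr[j] > arr[j + 1]) {
                    int t = arr[j]; arr[j] = arr[j + 1]; arr[j + 1] = t;
                    swaps++;                 // one swap per out-of-order pair
                }
            }
        }
        return swaps;
    }

    // Counts swaps performed by selection sort on a copy of the input.
    static int selectionSortSwaps(int[] input) {
        int[] arr = Arrays.copyOf(input, input.length);
        int swaps = 0;
        for (int i = 0; i < arr.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[minIndex]) minIndex = j;
            }
            if (minIndex != i) {
                int t = arr[i]; arr[i] = arr[minIndex]; arr[minIndex] = t;
                swaps++;                     // at most one swap per pass
            }
        }
        return swaps;
    }

    public static void main(String[] args) {
        int[] reversed = {5, 4, 3, 2, 1};    // worst case for bubble sort's swap count
        System.out.println("Bubble sort swaps:    " + bubbleSortSwaps(reversed));    // 10
        System.out.println("Selection sort swaps: " + selectionSortSwaps(reversed)); // 2
    }
}
```

On the fully reversed 5-element array, bubble sort swaps once per inversion (10 times), while selection sort needs only 2 swaps.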

How It Works

Here is the algorithm in plain English:

  1. Start with the first position (index 0) as the current position.
  2. Scan the entire unsorted region (from the current position to the end) to find the index of the minimum element.
  3. Swap the minimum element with the element at the current position.
  4. Move the current position one step to the right (the sorted region grows by one).
  5. Repeat steps 2-4 until the second-to-last position (index n-2) has been processed.

After n-1 passes, every element is in its correct sorted position. We only need n-1 passes because once n-1 elements are placed correctly, the last element must already be in the right spot.

The key insight here is that the inner loop's only job is to find the index of the minimum element. It does not perform any swaps. The single swap happens after the inner loop finishes. This is what makes selection sort efficient in terms of write operations.

Code Implementation

Notice how clean and uniform the implementations are. The algorithm translates almost line-for-line across languages because it uses only basic array operations: comparisons, index tracking, and a single swap.
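Here is a representative Java version (one possible implementation; note the `minIndex != i` guard, which skips the swap when the element is already in place):

```java
import java.util.Arrays;

public class SelectionSort {
    // Sorts the array in place using selection sort.
    static void selectionSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {        // i marks the boundary of the sorted region
            int minIndex = i;
            for (int j = i + 1; j < n; j++) {    // scan the unsorted region for the minimum
                if (arr[j] < arr[minIndex]) {
                    minIndex = j;
                }
            }
            if (minIndex != i) {                 // at most one swap per pass
                int temp = arr[i];
                arr[i] = arr[minIndex];
                arr[minIndex] = temp;
            }
        }
    }

    public static void main(String[] args) {
        int[] arr = {29, 10, 14, 37, 13};
        selectionSort(arr);
        System.out.println(Arrays.toString(arr)); // [10, 13, 14, 29, 37]
    }
}
```

The outer loop runs n-1 times, the inner loop only tracks `minIndex`, and the single swap happens after the inner loop finishes.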

Example Walkthrough

Let us trace through selection sort with the array [29, 10, 14, 37, 13]. We will follow each pass carefully and see how the sorted region grows from left to right.

Pass 1 (i = 0)

We need to find the minimum element in the entire array (indices 0 through 4).

Array after Pass 1: [10, 29, 14, 37, 13]

The element 10 is now in its final sorted position.

Pass 2 (i = 1)

We search the unsorted region (indices 1 through 4) for the minimum.

Array after Pass 2: [10, 13, 14, 37, 29]

Now both 10 and 13 are in their final positions.

Pass 3 (i = 2)

We search the unsorted region (indices 2 through 4) for the minimum.

Array after Pass 3: [10, 13, 14, 37, 29]

The element 14 was already in the correct position. The if (minIndex != i) check saves us an unnecessary swap.

Pass 4 (i = 3)

We search the unsorted region (indices 3 through 4) for the minimum.

Array after Pass 4: [10, 13, 14, 29, 37]

The algorithm is done. After 4 passes (n - 1 = 5 - 1 = 4), all 5 elements are sorted.

Visual Summary

Here is the complete sorting process at a glance:

[29, 10, 14, 37, 13] → [10, 29, 14, 37, 13] → [10, 13, 14, 37, 29] → [10, 13, 14, 37, 29] → [10, 13, 14, 29, 37]
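The walkthrough can also be reproduced programmatically with a small trace harness (a sketch; the class and method names are ours):

```java
import java.util.Arrays;

public class SelectionSortTrace {
    // Runs selection sort and prints the array after every pass,
    // reproducing the walkthrough above.
    static void sortWithTrace(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[minIndex]) minIndex = j;
            }
            if (minIndex != i) {
                int temp = arr[i];
                arr[i] = arr[minIndex];
                arr[minIndex] = temp;
            }
            System.out.println("After pass " + (i + 1) + ": " + Arrays.toString(arr));
        }
    }

    public static void main(String[] args) {
        sortWithTrace(new int[]{29, 10, 14, 37, 13});
        // After pass 1: [10, 29, 14, 37, 13]
        // After pass 2: [10, 13, 14, 37, 29]
        // After pass 3: [10, 13, 14, 37, 29]
        // After pass 4: [10, 13, 14, 29, 37]
    }
}
```

Note that pass 3 leaves the array unchanged, exactly as in the trace above, because 14 was already in position.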

Complexity Analysis

Selection sort has a straightforward complexity profile. Unlike some other algorithms, its performance does not change based on the input.

| Case | Time Complexity | Explanation |
| --- | --- | --- |
| Best | O(n^2) | Even if the array is already sorted, the algorithm still scans the unsorted region to confirm the minimum. There is no early termination. |
| Average | O(n^2) | The number of comparisons is always the same regardless of element ordering. |
| Worst | O(n^2) | Same number of comparisons as the best case. The only difference is whether swaps actually move elements. |

Exact comparison count: In the first pass, we make n-1 comparisons. In the second pass, n-2. And so on. The total is:

(n-1) + (n-2) + ... + 1 = n(n-1)/2

This is always the same, no matter what the input looks like.
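You can check this empirically by counting comparisons on differently ordered inputs (a quick sketch; `countComparisons` is a hypothetical helper of ours):

```java
public class ComparisonCount {
    // Counts comparisons made by selection sort; the count depends only on n,
    // not on how the elements are ordered.
    static int countComparisons(int[] arr) {
        int comparisons = 0;
        for (int i = 0; i < arr.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {
                comparisons++;                           // one comparison per inner step
                if (arr[j] < arr[minIndex]) minIndex = j;
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 5;
        System.out.println(countComparisons(new int[]{1, 2, 3, 4, 5})); // sorted:   10
        System.out.println(countComparisons(new int[]{5, 4, 3, 2, 1})); // reversed: 10
        System.out.println(n * (n - 1) / 2);                            // formula:  10
    }
}
```

Both the sorted and the reversed array produce exactly n(n-1)/2 = 10 comparisons for n = 5.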

| Metric | Value | Notes |
| --- | --- | --- |
| Space complexity | O(1) | Only uses a few extra variables (minIndex, temp). Sorts in place. |
| Number of swaps | O(n) | At most n-1 swaps, one per pass. This is the best swap count among the common O(n^2) sorting algorithms. |
| Stable | No | Swapping can change the relative order of equal elements (see Interview Questions below). |
| Adaptive | No | Does not benefit from partially sorted input. Always does the same number of comparisons. |

The fact that selection sort is not adaptive is one of its biggest drawbacks. Bubble sort, for example, can terminate early if no swaps occur in a pass (meaning the array is already sorted). Insertion sort runs in O(n) on nearly sorted arrays. Selection sort always does the full O(n^2) comparisons no matter what.
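The instability mentioned above can be demonstrated concretely. In this sketch (names are ours), the labels "a" and "b" mark the original order of two equal keys:

```java
import java.util.Arrays;

public class StabilityDemo {
    // Selection sort over labeled keys. The first character is the sort key;
    // the letter suffix records the original order of equal keys.
    static void selectionSort(String[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {
                // compare by the numeric key (first character) only
                if (arr[j].charAt(0) < arr[minIndex].charAt(0)) minIndex = j;
            }
            if (minIndex != i) {
                String temp = arr[i];
                arr[i] = arr[minIndex];
                arr[minIndex] = temp;
            }
        }
    }

    public static void main(String[] args) {
        String[] arr = {"2a", "2b", "1"};
        selectionSort(arr);
        // The long-range swap in pass 1 moved "2a" past "2b":
        // equal keys end up in reversed relative order.
        System.out.println(Arrays.toString(arr)); // [1, 2b, 2a]
    }
}
```

In pass 1, the minimum ("1") is swapped into position 0, which throws "2a" to the back of the array, behind "2b". This long-range swap is exactly what breaks stability.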

When to Use Selection Sort

Good Situations

  • When swaps are expensive. If you are sorting records where each swap involves moving large chunks of memory (for example, sorting structs or objects without using pointers), selection sort minimizes the number of write operations. With at most n-1 swaps, it does far fewer writes than bubble sort or insertion sort.
  • When memory is extremely constrained. Selection sort uses O(1) auxiliary space. It does not allocate any extra arrays or buffers. For embedded systems or environments where every byte counts, this can matter.
  • Small arrays (n < 20). For tiny arrays, the overhead of more sophisticated algorithms like quicksort or merge sort is not worth it. Selection sort's simplicity means low constant factors and minimal overhead.
  • When you need a simple, predictable algorithm. Selection sort always does the same amount of work regardless of input. If you need guaranteed behavior with no surprises, its consistency can be a feature.

Bad Situations

  • Large datasets. O(n^2) time becomes impractical very quickly. Sorting 10,000 elements requires roughly 50 million comparisons. For anything beyond a few hundred elements, use O(n log n) algorithms like merge sort or quicksort.
  • When stability is required. Selection sort is not stable. If you need equal elements to preserve their original relative order, use insertion sort or merge sort instead.
  • Nearly sorted arrays. Selection sort does not benefit from existing order in the array. Insertion sort, by contrast, runs in nearly O(n) time on almost-sorted data. If your data is likely to be partially sorted, selection sort is a poor choice.
  • When you need average-case efficiency. Even among O(n^2) algorithms, insertion sort tends to outperform selection sort on average because it does fewer comparisons on random data and adapts to partial order.