Last Updated: January 18, 2026
A greedy algorithm builds a solution piece by piece, always choosing the next piece that offers the most immediate benefit. Once a choice is made, it is never reconsidered.
Think of it like climbing a mountain in dense fog. You cannot see the peak, only your immediate surroundings. A greedy climber always takes the step that goes most steeply upward. Sometimes this leads to the summit. Other times, it leads to a local peak that is not the highest point.
The key characteristics of greedy algorithms: they make the locally optimal choice at each step without lookahead, they never revisit a decision once it is made, and they typically need little extra state, which keeps them simple and fast.
The greedy path commits to each choice without ever exploring the alternatives. This is both its strength (efficiency) and its weakness (it may miss better solutions).
Greedy algorithms work when the problem has two key properties:
The first is the greedy choice property: a globally optimal solution can be constructed by making locally optimal choices. In other words, choosing what looks best right now will not prevent us from finding the best overall solution.
For the coin change example with US denominations (25, 10, 5, 1 cents), always picking the largest coin that fits gives the minimum number of coins. But this is not universally true. If coins were (1, 3, 4) cents and we needed 6 cents, greedy would pick 4 + 1 + 1 = 3 coins, but 3 + 3 = 2 coins is optimal.
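A small sketch makes both cases concrete (the function name `greedy_coin_count` is mine, for illustration only):

```python
def greedy_coin_count(coins: list[int], target: int) -> int:
    """Count coins chosen greedily: always take the largest coin that fits."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += target // coin   # take as many of this coin as possible
        target %= coin
    return count

print(greedy_coin_count([25, 10, 5, 1], 63))  # 6 coins: 25+25+10+1+1+1 (optimal)
print(greedy_coin_count([1, 3, 4], 6))        # 3 coins: 4+1+1 (optimal is 3+3 = 2)
```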
The second is optimal substructure: an optimal solution to the problem contains optimal solutions to its subproblems. After making a greedy choice, the remaining problem should also have an optimal solution that, combined with the greedy choice, gives the overall optimum.
Consider activity selection: if we greedily pick an activity that ends earliest, the remaining problem is to select activities from those that start after this one ends. The optimal solution to this subproblem, combined with our greedy choice, gives the global optimum.
Understanding when to use each technique is crucial. Here is how they compare:
| Aspect | Greedy | Dynamic Programming | Backtracking |
|---|---|---|---|
| Approach | Make best local choice | Try all options, cache results | Explore all paths |
| Choice consideration | One choice, final | All choices, pick best | All choices, explore each |
| Subproblems | No overlap assumed | Overlapping subproblems | Tree of choices |
| Time complexity | Usually O(n) or O(n log n) | Usually O(n²) or O(n * m) | Usually exponential |
| Space complexity | Usually O(1) or O(n) | O(n) to O(n²) | O(depth) |
| When to use | Greedy choice property holds | Need optimal + overlapping subproblems | Need all solutions |
| Example | Activity selection | 0/1 Knapsack | Generate permutations |
A common interview strategy: propose greedy, identify why it might fail with a counterexample, then pivot to DP if needed.
Certain problem types have well-known greedy solutions. Recognizing these patterns helps you quickly identify when greedy applies.
Problem type: Select maximum non-overlapping intervals, or minimum intervals to cover a range.
Greedy strategy: Sort by end time and pick intervals that end earliest.
Why it works: By picking the interval that ends earliest, we leave maximum room for subsequent intervals. (This strategy is implemented in full in the activity selection walkthrough later in this section.)
Problems using this pattern:
Problem type: Minimize total wait time or maximize throughput.
Greedy strategy: Process shortest jobs first (SJF) or prioritize by some efficiency metric.
Why it works: Completing short tasks first reduces the total waiting time for all subsequent tasks.
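A minimal sketch of shortest-job-first, assuming jobs run back to back and total waiting time is the quantity to minimize:

```python
def total_waiting_time(durations: list[int]) -> int:
    """Total time jobs spend waiting when run back to back, shortest first."""
    waiting = elapsed = 0
    for d in sorted(durations):  # greedy: shortest job first
        waiting += elapsed       # this job waited for every job before it
        elapsed += d
    return waiting

print(total_waiting_time([8, 4, 1]))  # runs 1, 4, 8 -> waits 0 + 1 + 5 = 6
```

Any other order places some long job in front of more jobs, so the sorted order is optimal.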
Problems using this pattern:
Problem type: Problems involving sorted arrays where you need to find pairs or optimize based on positions.
Greedy strategy: Use two pointers (often start and end) and greedily move the pointer that improves the solution.
Why it works: Moving the "less promising" pointer cannot miss a better solution.
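As an assumed illustration of the pattern, here is the classic pair-with-target-sum scan over a sorted array. When the running sum is too small, only advancing the low pointer can increase it; when it is too large, only retreating the high pointer can decrease it:

```python
def pair_with_sum(nums: list[int], target: int):
    """Find two values in a sorted list that sum to target, or None."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return nums[lo], nums[hi]
        if s < target:
            lo += 1   # sum too small: moving hi down could only shrink it
        else:
            hi -= 1   # sum too large: moving lo up could only grow it
    return None

print(pair_with_sum([1, 3, 4, 6, 9], 10))  # (1, 9)
```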
Problems using this pattern:
Problem type: Problems where you track a running value (sum, maximum, minimum) and make greedy decisions based on it.
Greedy strategy: Maintain state that captures the "best so far" and update greedily.
Why it works: The greedy invariant ensures you never miss a better solution.
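A common instance is the maximum-subarray scan (Kadane's algorithm), which can be read as exactly this kind of greedy state update. The sketch below assumes a non-empty input:

```python
def max_subarray_sum(nums: list[int]) -> int:
    """Kadane's scan: greedily extend the current run or restart it."""
    best = current = nums[0]
    for x in nums[1:]:
        # invariant: `current` is the best sum of a subarray ending here
        current = max(x, current + x)  # restart at x if the old run only hurts
        best = max(best, current)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```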
Problems using this pattern:
Problem type: Problems where you repeatedly select the smallest/largest elements and combine them.
Greedy strategy: Use a heap to always process the optimal element first.
Why it works: Processing in optimal order ensures each combination step is locally optimal.
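As an assumed illustration: repeatedly merging the two smallest pieces, where each merge costs the combined size, with a min-heap supplying the optimal pair each round:

```python
import heapq

def min_combine_cost(lengths: list[int]) -> int:
    """Merge pieces two at a time, always the two smallest, at cost = their sum."""
    heap = list(lengths)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)  # two smallest pieces
        total += a + b
        heapq.heappush(heap, a + b)  # the merged piece rejoins the pool
    return total

print(min_combine_cost([4, 3, 2, 6]))  # 29: merge 2+3=5, then 4+5=9, then 9+6=15
```

Small pieces get merged early and therefore re-counted most often, which is exactly where small values do the least damage.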
Problems using this pattern:
Most greedy solutions follow a similar structure:
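Here is one minimal Python sketch of that structure; the parameter names (`key`, `feasible`, `update`, `state`) are illustrative placeholders, not a fixed API:

```python
def greedy_template(items, key, feasible, update, state):
    """Generic shape of a greedy solution."""
    solution = []
    for item in sorted(items, key=key):  # 1. order candidates by the greedy criterion
        if feasible(item, state):        # 2. check the candidate against current state
            solution.append(item)        # 3. commit -- the choice is never revisited
            state = update(item, state)  # 4. record what the choice consumed
    return solution

# Example: interval scheduling expressed through the template.
print(greedy_template(
    items=[(1, 4), (3, 5), (5, 7)],
    key=lambda iv: iv[1],                             # sort by end time
    feasible=lambda iv, last_end: iv[0] >= last_end,  # starts after the last pick ends
    update=lambda iv, last_end: iv[1],                # new frontier is this end time
    state=float("-inf"),
))  # [(1, 4), (5, 7)]
```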
The key decisions when applying this template: what to sort by (the greedy criterion), when a candidate is feasible, and what state the running solution must track.
Let us work through a complete example that demonstrates the greedy approach.
Problem: You have a knapsack that can hold weight W. Given n items, each with a weight and value, maximize the total value you can carry. Unlike 0/1 knapsack, you can take fractions of items.
Example: the three items in the table below.
What makes an item "attractive"? Neither weight alone nor value alone tells the full story. The key insight is value per unit weight (value density).
| Item | Weight | Value | Value/Weight |
|---|---|---|---|
| 1 | 10 | 60 | 6.0 |
| 2 | 20 | 100 | 5.0 |
| 3 | 30 | 120 | 4.0 |
Sort items by value/weight in descending order: Item 1, Item 2, Item 3.
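A minimal implementation of this strategy follows; since the knapsack capacity is not restated in the example, the usage line assumes W = 50:

```python
def fractional_knapsack(items: list[tuple[int, int]], capacity: float) -> float:
    """items holds (weight, value) pairs; returns the maximum attainable value."""
    total = 0.0
    # Greedy criterion: highest value per unit weight first.
    for weight, value in sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item if it fits, else a fraction
        total += value * (take / weight)  # proportional value for a partial item
        capacity -= take
    return total

# Items from the table above; a capacity of 50 is an assumed value for illustration.
print(fractional_knapsack([(10, 60), (20, 100), (30, 120)], 50))  # 240.0
```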
Fractional knapsack has the greedy choice property because any unit of capacity spent on a lower-density item could be exchanged for the same weight of a higher-density item without decreasing the total value. Taking as much as possible of the densest remaining item is therefore always safe.
Contrast with 0/1 Knapsack: If we cannot take fractions, greedy fails. Consider W=10, items: [(6, 60), (5, 50), (5, 50)]. Greedy takes (6, 60) first, then cannot fit anything else. Optimal is (5, 50) + (5, 50) = 100 > 60.
Let us reinforce the greedy approach with another classic example that uses a different criterion.
Problem: Given n activities with start and end times, select the maximum number of non-overlapping activities.
Example: a sample instance is worked through in the code sketch below.
Which activity should we pick first? Several options seem reasonable: the one that starts earliest, the shortest one, or the one that conflicts with the fewest others.
The correct criterion is earliest end time.
The key insight: by picking the activity that ends earliest, we maximize the remaining time available for other activities. Any other first choice would end later, leaving less room.
Proof by exchange argument: Suppose an optimal solution O does not start with the earliest-ending activity E. Let O start with activity A instead. Since E ends no later than A, we can replace A with E in O without causing any conflicts. The modified solution is still valid and has the same count. Repeating this argument, we can transform any optimal solution into the greedy solution.
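A minimal sketch of the algorithm; the input below is hypothetical sample data, since the original example instance is not reproduced here:

```python
def max_activities(activities: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """activities holds (start, end) pairs; returns a largest non-overlapping subset."""
    selected, last_end = [], float("-inf")
    for start, end in sorted(activities, key=lambda a: a[1]):  # earliest end first
        if start >= last_end:  # compatible with everything chosen so far
            selected.append((start, end))
            last_end = end
    return selected

print(max_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10)]))
# [(1, 4), (5, 7)]
```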
Just because a problem looks like it should be greedy does not mean it is. Always verify with counterexamples.
Example: Coin change with denominations [1, 3, 4] and target 6.
Before committing to greedy in an interview, think: "Can I construct a case where the greedy choice leads to a suboptimal solution?"
Choosing the wrong sorting or selection criterion leads to incorrect results.
Example: For interval scheduling, sorting by start time instead of end time gives wrong answers.
| Interval | Start | End |
|---|---|---|
| A | 1 | 10 |
| B | 2 | 3 |
| C | 4 | 5 |
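Running the same earliest-end selection scan with each sort key makes the failure visible: sorting by start time commits to A, which spans almost the whole range and blocks B and C, while sorting by end time picks B and then C:

```python
intervals = [(1, 10), (2, 3), (4, 5)]  # A, B, C from the table above

def select(intervals, key):
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=key):
        if start >= last_end:  # take every interval compatible with the picks so far
            chosen.append((start, end))
            last_end = end
    return chosen

print(select(intervals, key=lambda iv: iv[0]))  # by start: [(1, 10)] -- A blocks the rest
print(select(intervals, key=lambda iv: iv[1]))  # by end:   [(2, 3), (4, 5)] -- optimal
```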
Greedy algorithms often have subtle edge cases: empty input, a single element, ties in the sorting criterion, and boundary values such as zero or negative numbers.
Always consider these before submitting.
If the problem asks "how many ways" or "count all solutions," greedy usually does not apply. These typically need DP or backtracking.
Look for these indicators:
| Signal | Explanation |
|---|---|
| "Maximum/Minimum" with O(n) or O(n log n) expected | Greedy often achieves these complexities |
| "Select/Schedule/Assign" problems | Classic greedy territory |
| "At each step" language | Hints at making sequential choices |
| Sorting seems natural | Many greedy solutions start with sorting |
| Local choice determines global optimum | The core greedy property |
| "Earliest/Latest/Shortest/Longest first" intuition | Common greedy strategies |
Red flags that greedy might not work:
| Signal | Explanation |
|---|---|
| "Count all ways" or "enumerate all solutions" | Usually needs DP or backtracking |
| Dependencies between choices | One choice affects what choices remain |
| "0/1" or "all or nothing" constraints | Often need DP |
| No obvious sorting criterion | Hard to define "greedy choice" |