
Introduction to Greedy Algorithms

Last Updated: January 18, 2026

Ashish Pratap Singh

A greedy algorithm builds a solution piece by piece, always choosing the next piece that offers the most immediate benefit. Once a choice is made, it is never reconsidered.

Think of it like climbing a mountain in dense fog. You cannot see the peak, only your immediate surroundings. A greedy climber always takes the step that goes most steeply upward. Sometimes this leads to the summit. Other times, it leads to a local peak that is not the highest point.

The key characteristics of greedy algorithms:

  1. Makes one choice at a time: At each step, select the option that looks best right now
  2. Never reconsiders: Once a choice is made, it is final
  3. Builds solution incrementally: Each choice extends the partial solution
  4. Local optimization: Each step is locally optimal, not necessarily globally optimal

A greedy algorithm commits to each choice without exploring the alternatives it passed over. This is both its strength (efficiency) and its weakness (it may miss better solutions).

When Does Greedy Work?

Greedy algorithms work when the problem has two key properties:

1. Greedy Choice Property

A globally optimal solution can be constructed by making locally optimal choices. In other words, choosing what looks best right now will not prevent us from finding the best overall solution.

For coin change with US denominations (25, 10, 5, 1 cents), always picking the largest coin that fits gives the minimum number of coins. But this is not universally true: if the coins were (1, 3, 4) cents and we needed 6 cents, greedy would pick 4 + 1 + 1 = 3 coins, while 3 + 3 = 2 coins is optimal.
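
As a quick check, the largest-coin-first rule can be coded in a few lines and run against both cases. This is an illustrative sketch (the function name is ours), not a general coin-change solver:

```python
def greedy_coin_count(coins, amount):
    """Repeatedly take the largest coin that still fits."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else None  # None: greedy could not reach the amount exactly

print(greedy_coin_count([25, 10, 5, 1], 63))  # 6 -> 25+25+10+1+1+1, which is optimal
print(greedy_coin_count([1, 3, 4], 6))        # 3 -> 4+1+1, but 3+3 needs only 2 coins
```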

2. Optimal Substructure

An optimal solution to the problem contains optimal solutions to its subproblems. After making a greedy choice, the remaining problem should also have an optimal solution that, combined with the greedy choice, gives the overall optimum.

Consider activity selection: if we greedily pick an activity that ends earliest, the remaining problem is to select activities from those that start after this one ends. The optimal solution to this subproblem, combined with our greedy choice, gives the global optimum.

Greedy vs Dynamic Programming vs Backtracking

Understanding when to use each technique is crucial. Here is how they compare:

| Aspect | Greedy | Dynamic Programming | Backtracking |
|---|---|---|---|
| Approach | Make best local choice | Try all options, cache results | Explore all paths |
| Choice consideration | One choice, final | All choices, pick best | All choices, explore each |
| Subproblems | No overlap assumed | Overlapping subproblems | Tree of choices |
| Time complexity | Usually O(n) or O(n log n) | Usually O(n²) or O(n * m) | Usually exponential |
| Space complexity | Usually O(1) or O(n) | O(n) to O(n²) | O(depth) |
| When to use | Greedy choice property holds | Need optimal + overlapping subproblems | Need all solutions |
| Example | Activity selection | 0/1 Knapsack | Generate permutations |

How to identify which to use:

  1. Try greedy first if the problem is an optimization (min/max) and you can prove or intuit that local optimal leads to global optimal
  2. Use DP if greedy fails (local optimal does not give global optimal) and the problem has overlapping subproblems
  3. Use backtracking if you need all solutions or cannot decompose into subproblems

A common interview strategy: propose greedy, identify why it might fail with a counterexample, then pivot to DP if needed.

Common Greedy Patterns

Certain problem types have well-known greedy solutions. Recognizing these patterns helps you quickly identify when greedy applies.

Pattern 1: Interval Scheduling

Problem type: Select maximum non-overlapping intervals, or minimum intervals to cover a range.

Greedy strategy: Sort by end time and pick intervals that end earliest.

Why it works: By picking the interval that ends earliest, we leave maximum room for subsequent intervals.

Problems using this pattern:

  • Non-overlapping Intervals (LeetCode 435)
  • Merge Intervals (LeetCode 56)
  • Meeting Rooms (LeetCode 252, 253)
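
A minimal Python sketch of this strategy, framed as Non-overlapping Intervals (LeetCode 435), where the answer is how many intervals must be removed (the function name is illustrative):

```python
def erase_overlap_intervals(intervals):
    """Minimum number of intervals to remove so the rest do not overlap."""
    intervals.sort(key=lambda it: it[1])   # greedy criterion: earliest end time first
    kept, last_end = 0, float("-inf")
    for start, end in intervals:
        if start >= last_end:              # compatible with everything kept so far
            kept += 1
            last_end = end
    return len(intervals) - kept           # whatever was not kept must be removed
```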

Pattern 2: Greedy Scheduling

Problem type: Minimize total wait time or maximize throughput.

Greedy strategy: Process shortest jobs first (SJF) or prioritize by some efficiency metric.

Why it works: Completing short tasks first reduces the total waiting time for all subsequent tasks.

Problems using this pattern:

  • Task Scheduler (LeetCode 621)
  • Minimum Number of Arrows to Burst Balloons (LeetCode 452)
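
A small Python sketch of shortest-job-first when all jobs are available at time zero (the durations here are made up for illustration):

```python
def total_wait_time(durations):
    """Schedule shortest jobs first and sum how long each job waits to start."""
    elapsed, total_wait = 0, 0
    for d in sorted(durations):      # greedy criterion: shortest duration first
        total_wait += elapsed        # this job waited for everything scheduled before it
        elapsed += d
    return total_wait

print(total_wait_time([6, 3, 1]))    # 5 (waits of 0 + 1 + 4) vs. 15 in the given order
```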

Pattern 3: Two-Pointer Greedy

Problem type: Problems involving sorted arrays where you need to find pairs or optimize based on positions.

Greedy strategy: Use two pointers (often start and end) and greedily move the pointer that improves the solution.

Why it works: Moving the "less promising" pointer cannot miss a better solution.

Problems using this pattern:

  • Container With Most Water (LeetCode 11)
  • Two Sum II (LeetCode 167)
  • Trapping Rain Water (LeetCode 42, partial greedy logic)
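
A Python sketch of the pattern on Container With Most Water (LeetCode 11): the shorter line limits the area, so moving its pointer inward is the only move that can possibly help:

```python
def max_area(height):
    left, right = 0, len(height) - 1
    best = 0
    while left < right:
        width = right - left
        best = max(best, width * min(height[left], height[right]))
        # Greedy step: move the pointer at the shorter line; keeping it cannot improve the area.
        if height[left] < height[right]:
            left += 1
        else:
            right -= 1
    return best
```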

Pattern 4: Greedy Choice with Running State

Problem type: Problems where you track a running value (sum, maximum, minimum) and make greedy decisions based on it.

Greedy strategy: Maintain state that captures the "best so far" and update greedily.

Why it works: The greedy invariant ensures you never miss a better solution.

Problems using this pattern:

  • Jump Game (LeetCode 55)
  • Jump Game II (LeetCode 45)
  • Gas Station (LeetCode 134)
  • Best Time to Buy and Sell Stock (LeetCode 121)
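
A Python sketch of the running-state idea on Jump Game (LeetCode 55): the state is the farthest index reachable so far, updated greedily at every position:

```python
def can_jump(nums):
    farthest = 0                            # greedy state: farthest index reachable so far
    for i, jump in enumerate(nums):
        if i > farthest:                    # we cannot even reach this position
            return False
        farthest = max(farthest, i + jump)  # greedy update of the running state
    return True

print(can_jump([2, 3, 1, 1, 4]))  # True
print(can_jump([3, 2, 1, 0, 4]))  # False
```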

Pattern 5: Huffman-Style / Priority Queue Greedy

Problem type: Problems where you repeatedly select the smallest/largest elements and combine them.

Greedy strategy: Use a heap to always process the optimal element first.

Why it works: Processing in optimal order ensures each combination step is locally optimal.

Problems using this pattern:

  • Merge k Sorted Lists (LeetCode 23)
  • Minimum Cost to Connect Sticks (LeetCode 1167)
  • Reorganize String (LeetCode 767)
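
A Python sketch of the pattern on Minimum Cost to Connect Sticks (LeetCode 1167), which mirrors Huffman coding: always combine the two smallest elements first:

```python
import heapq

def connect_sticks(sticks):
    heapq.heapify(sticks)                 # min-heap of stick lengths
    total_cost = 0
    while len(sticks) > 1:
        a = heapq.heappop(sticks)         # two smallest sticks
        b = heapq.heappop(sticks)
        total_cost += a + b               # cost of combining them
        heapq.heappush(sticks, a + b)     # the combined stick competes in later rounds
    return total_cost

print(connect_sticks([2, 4, 3]))          # 14: combine 2+3 (cost 5), then 5+4 (cost 9)
```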

The Greedy Template

Most greedy solutions follow a similar structure:
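
In Python, that structure might look like the sketch below; greedy_solve, key, and compatible are placeholder names, and you substitute the problem-specific criterion and feasibility check:

```python
def greedy_solve(items, key, compatible):
    """Generic greedy skeleton: sort by the criterion, then keep whatever fits."""
    result = []                                      # the partial solution built so far
    last = None                                      # state carried forward from the last choice
    for item in sorted(items, key=key):              # 1. sort by the greedy criterion
        if last is None or compatible(last, item):   # 2. the greedy choice
            result.append(item)                      # 3. extend the partial solution
            last = item                              # 4. update the carried state
    return result

# Example use: keep non-overlapping meetings by sorting on end time.
meetings = [(0, 30), (5, 10), (15, 20)]
print(greedy_solve(meetings,
                   key=lambda m: m[1],
                   compatible=lambda last, cur: cur[0] >= last[1]))  # [(5, 10), (15, 20)]
```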

The key decisions when applying this template:

  1. What is the greedy criterion? How do you sort or prioritize elements?
  2. What state do you track? What information carries forward?
  3. What is the greedy choice? When do you include/exclude an element?
  4. How do you update? How do the result and state change with each choice?

Example Walkthrough: Fractional Knapsack

Let us work through a complete example that demonstrates the greedy approach.

Problem: You have a knapsack that can hold weight W. Given n items, each with a weight and value, maximize the total value you can carry. Unlike 0/1 knapsack, you can take fractions of items.

Example:

  • Knapsack capacity: W = 50
  • Items: [(weight=10, value=60), (weight=20, value=100), (weight=30, value=120)]

Step 1: Identify the Greedy Criterion

What makes an item "attractive"? Neither weight alone nor value alone tells the full story. The key insight is value per unit weight (value density).

| Item | Weight | Value | Value/Weight |
|---|---|---|---|
| 1 | 10 | 60 | 6.0 |
| 2 | 20 | 100 | 5.0 |
| 3 | 30 | 120 | 4.0 |

Step 2: Sort by Greedy Criterion

Sort items by value/weight in descending order: Item 1, Item 2, Item 3.

Step 3: Make Greedy Choices
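
Working through the sorted list with capacity 50: take all of Item 1 (weight 10, value 60), then all of Item 2 (weight 20, value 100). Only 50 - 30 = 20 units of capacity remain, so take 20/30 of Item 3, worth (20/30) × 120 = 80. Total value: 60 + 100 + 80 = 240.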

Implementation
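
A minimal Python sketch of the approach: sort by value density, take whole items while they fit, then take a fraction of the next one.

```python
def fractional_knapsack(capacity, items):
    """items: list of (weight, value) pairs. Returns the maximum total value."""
    # Greedy criterion: value per unit weight, highest first.
    items = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)

    total_value = 0.0
    for weight, value in items:
        if capacity == 0:
            break
        take = min(weight, capacity)            # whole item if it fits, otherwise a fraction
        total_value += value * (take / weight)
        capacity -= take
    return total_value

print(fractional_knapsack(50, [(10, 60), (20, 100), (30, 120)]))  # 240.0
```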

Why Greedy Works Here

Fractional knapsack has the greedy choice property because:

  1. Taking the highest ratio item first maximizes value per weight used
  2. Since we can take fractions, there is no "blocking" effect. We never regret taking a high-ratio item because we can always fill remaining space optimally
  3. Each greedy choice leads to an optimal subproblem

Contrast with 0/1 Knapsack: If we cannot take fractions, greedy fails. Consider W=10 and items [(weight=6, value=66), (weight=5, value=50), (weight=5, value=50)]. The value/weight ratios are 11, 10, and 10, so greedy takes (6, 66) first, and then nothing else fits. Optimal is (5, 50) + (5, 50) = 100 > 66.

Second Example: Activity Selection

Let us reinforce the greedy approach with another classic example that uses a different criterion.

Problem: Given n activities with start and end times, select the maximum number of non-overlapping activities.

Example:

  • Activities: [(1,4), (3,5), (0,6), (5,7), (3,9), (5,9), (6,10), (8,11)]

The Greedy Criterion

Which activity should we pick first? Several options seem reasonable:

  • Earliest start time? Might pick a long activity that blocks many others
  • Shortest duration? A short activity in the middle might still block others
  • Earliest end time? Leaves maximum room for remaining activities

The correct criterion is earliest end time.

The Algorithm

  1. Sort activities by end time
  2. Pick the first activity (earliest end)
  3. For each subsequent activity: if it starts after the last picked activity ends, pick it

Walkthrough
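
Sorted by end time, the activities are (1,4), (3,5), (0,6), (5,7), (3,9), (5,9), (6,10), (8,11). Pick (1,4). Skip (3,5) and (0,6) because they start before 4. Pick (5,7). Skip (3,9), (5,9), and (6,10) because they start before 7. Pick (8,11). Result: 3 activities: (1,4), (5,7), (8,11).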

Implementation
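
A minimal Python sketch of the algorithm above:

```python
def max_activities(activities):
    """activities: list of (start, end) pairs. Returns the selected non-overlapping activities."""
    selected = []
    last_end = float("-inf")
    for start, end in sorted(activities, key=lambda a: a[1]):   # sort by end time
        if start >= last_end:        # starts no earlier than the last picked activity ends
            selected.append((start, end))
            last_end = end
    return selected

print(max_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```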

Why Earliest End Time Works

The key insight: by picking the activity that ends earliest, we maximize the remaining time available for other activities. Any other first choice would end later, leaving less room.

Proof by exchange argument: Suppose an optimal solution O does not start with the earliest-ending activity E. Let O start with activity A instead. Since E ends no later than A, we can replace A with E in O without causing any conflicts. The modified solution is still valid and has the same count. Repeating this argument, we can transform any optimal solution into the greedy solution.

Common Mistakes and Pitfalls

Mistake 1: Assuming Greedy Works Without Proof

Just because a problem looks like it should be greedy does not mean it is. Always verify with counterexamples.

Example: Coin change with denominations [1, 3, 4] and target 6.

  • Greedy: 4 + 1 + 1 = 3 coins
  • Optimal: 3 + 3 = 2 coins

Before committing to greedy in an interview, think: "Can I construct a case where the greedy choice leads to a suboptimal solution?"

Mistake 2: Wrong Greedy Criterion

Choosing the wrong sorting or selection criterion leads to incorrect results.

Example: For interval scheduling, sorting by start time instead of end time gives wrong answers.

| Interval | Start | End |
|---|---|---|
| A | 1 | 10 |
| B | 2 | 3 |
| C | 4 | 5 |

  • Sort by start: Pick A (1-10), no others fit. Count = 1
  • Sort by end: Pick B (2-3), then C (4-5). Count = 2

Mistake 3: Not Handling Edge Cases

Greedy algorithms often have subtle edge cases:

  • Empty input
  • Single element
  • All elements identical
  • No valid solution exists

Always consider these before submitting.

Mistake 4: Greedy for Counting Problems

If the problem asks "how many ways" or "count all solutions," greedy usually does not apply. These typically need DP or backtracking.

How to Identify Greedy Problems in Interviews

Look for these indicators:

| Signal | Explanation |
|---|---|
| "Maximum/Minimum" with O(n) or O(n log n) expected | Greedy often achieves these complexities |
| "Select/Schedule/Assign" problems | Classic greedy territory |
| "At each step" language | Hints at making sequential choices |
| Sorting seems natural | Many greedy solutions start with sorting |
| Local choice determines global optimum | The core greedy property |
| "Earliest/Latest/Shortest/Longest first" intuition | Common greedy strategies |

Red flags that greedy might not work:

| Signal | Explanation |
|---|---|
| "Count all ways" or "enumerate all solutions" | Usually needs DP or backtracking |
| Dependencies between choices | One choice affects what choices remain |
| "0/1" or "all or nothing" constraints | Often need DP |
| No obvious sorting criterion | Hard to define a "greedy choice" |