
Maximum Profit in Job Scheduling

Last Updated: March 28, 2026

hard
5 min read

Understanding the Problem

We have a set of jobs, each with a start time, end time, and profit. We want to select a subset of non-overlapping jobs that maximizes total profit. Two jobs overlap if one starts before the other ends (but a job ending at time X does not overlap with a job starting at time X).

This is a classic weighted job scheduling problem. What makes it trickier than the unweighted version (where you just maximize the count of non-overlapping intervals) is that each job has a different profit. You cannot simply be greedy and pick the shortest or earliest-ending jobs. A single high-profit job might be worth more than several low-profit ones that fit in the same time window.

The key observation is this: for each job, you have exactly two choices. Either you take it (and earn its profit, but then you can only take jobs that do not overlap with it), or you skip it. This "take it or leave it" structure, combined with overlapping subproblems, points directly to dynamic programming.

Key Constraints:

  • n <= 5 * 10^4 --> With n up to 50,000, an O(n^2) solution (2.5 billion operations) is too slow. We need O(n log n) or better.
  • startTime[i], endTime[i] <= 10^9 --> Times can be huge, so any approach that iterates over time values (like a time-indexed DP array) is out of the question. We must work with job indices instead.
  • profit[i] <= 10^4 --> Profits are positive, so taking a job is never harmful on its own. The only reason to skip a job is because it conflicts with a more profitable combination.

Approach 1: Brute Force (Recursion)

Intuition

The most natural starting point is to consider every possible subset of non-overlapping jobs and pick the one with the highest total profit. For each job, we decide: take it or skip it. If we take it, we skip ahead to the next job that does not overlap. If we skip it, we move to the next job.

To make this structured, sort the jobs by start time. Then define a recursive function starting from job 0: either include the current job and jump to the next non-conflicting one, or skip it and move to the next job.

Algorithm

  1. Combine start times, end times, and profits into a list of jobs. Sort by start time.
  2. Define solve(i) that returns the maximum profit from jobs i through n-1.
  3. Base case: if i >= n, return 0.
  4. Skip job i: solve(i + 1).
  5. Take job i: profit[i] + solve(next) where next is the first job whose start time >= end time of job i. Find next by linearly scanning from i + 1.
  6. Return the maximum of skip and take.

Example Walkthrough

Consider the jobs sorted by start time: [1,3] with profit 50, then [2,4], [3,5], and [3,6]. At solve(0) we face the first take-or-skip decision for job [1,3]: take it, earn 50, and recurse from the first job whose start time is at least 3; or skip it and recurse from job [2,4].

Code
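A minimal recursive sketch in Python following the steps above (the standalone function name `job_scheduling_brute` is our own; on a judge this logic would typically live inside a `Solution` class method):

```python
def job_scheduling_brute(startTime, endTime, profit):
    # Sort jobs by start time so "the next non-conflicting job" is well defined.
    jobs = sorted(zip(startTime, endTime, profit))
    n = len(jobs)

    def solve(i):
        # Maximum profit obtainable from jobs i..n-1.
        if i >= n:
            return 0
        # Option 1: skip job i.
        skip = solve(i + 1)
        # Option 2: take job i, then linearly scan for the first job
        # whose start time is >= job i's end time.
        nxt = i + 1
        while nxt < n and jobs[nxt][0] < jobs[i][1]:
            nxt += 1
        take = jobs[i][2] + solve(nxt)
        return max(skip, take)

    return solve(0)
```

Because both branches are explored for every job, this runs in exponential time and is only practical for small inputs.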

The recursion recomputes the same subproblems many times. What if we cached the result of each solve(i) call, and also replaced the linear scan with binary search?

Approach 2: DP with Memoization + Binary Search

Intuition

The brute force recomputes the same subproblems over and over. Adding memoization fixes that: we cache the result of solve(i) the first time and return it directly on subsequent calls. But there is a second bottleneck. Finding the next non-overlapping job with a linear scan takes O(n) per job, which makes the overall solution O(n^2) even with memoization.

Here is the key insight: if we sort jobs by start time, the end time of job i defines a threshold. We need the first job with a start time >= that end time. Since the jobs are sorted by start time, this is a classic binary search problem. Instead of scanning linearly, we binary search for the leftmost job whose start time is at or after the current job's end time. This drops the per-job lookup from O(n) to O(log n).

Algorithm

  1. Combine and sort jobs by start time.
  2. Define solve(i) with memoization: the maximum profit from jobs i through n-1.
  3. Base case: if i >= n, return 0.
  4. Skip: solve(i + 1).
  5. Take: Use binary search on start times to find the first job whose start time >= end time of job i. Then profit[i] + solve(next).
  6. Return max(skip, take).

Example Walkthrough

Same jobs, sorted by start time: [1,3] with profit 50, then [2,4], [3,5], and [3,6]. We initialize a memo array of size 4 to -1 (uncomputed). At solve(0) the skip-or-take decision for job [1,3] is the same as before, but now each solve(i) result is cached in memo[i] the first time it is computed, and the jump to the first non-overlapping job is found by binary search on the sorted start times.

Code
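A sketch of the memoized version in Python, using `functools.lru_cache` for the memo and `bisect_left` for the binary search (the function name `job_scheduling_memo` is our own):

```python
from bisect import bisect_left
from functools import lru_cache

def job_scheduling_memo(startTime, endTime, profit):
    # Sort by start time; keep a parallel array of start times for bisecting.
    jobs = sorted(zip(startTime, endTime, profit))
    starts = [s for s, _, _ in jobs]
    n = len(jobs)

    @lru_cache(maxsize=None)
    def solve(i):
        # Maximum profit obtainable from jobs i..n-1 (cached per i).
        if i >= n:
            return 0
        skip = solve(i + 1)
        # Binary search: first job whose start time is >= job i's end time.
        nxt = bisect_left(starts, jobs[i][1])
        take = jobs[i][2] + solve(nxt)
        return max(skip, take)

    return solve(0)
```

One caveat: with n up to 5 * 10^4 the recursion can get deep in Python; the bottom-up version in the next approach avoids recursion entirely.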

The memoized approach avoids recursion overhead. Can we convert it to an iterative bottom-up DP that fills a table from left to right?

Approach 3: Bottom-Up DP + Binary Search (Optimal)

Intuition

The memoized recursion from Approach 2 already gives us O(n log n) time. But we can make it cleaner and avoid recursion by switching to bottom-up DP. The idea is the same, just flipped.

Sort jobs by end time this time (sorting by start time also works with a mirrored recurrence, but sorting by end time is the more classic formulation for this problem). Define dp[i] as the maximum profit considering only the first i jobs (after sorting by end time). For each job i, we have two choices:

  • Skip job i: The best profit is whatever we had with the first i-1 jobs, so dp[i-1].
  • Take job i: We earn profit[i], plus the best profit from all jobs that finish before job i starts. We need the largest j such that endTime[j] <= startTime[i]. Since jobs are sorted by end time, this is a binary search.

So dp[i] = max(dp[i-1], profit[i] + dp[j]) where j is found via binary search.

Algorithm

  1. Combine jobs into tuples and sort by end time.
  2. Create a dp array of size n + 1, where dp[0] = 0 (no jobs, no profit).
  3. For each job i (1-indexed after sorting), binary search for the latest job j whose end time <= start time of job i.
  4. dp[i] = max(dp[i-1], profit[i] + dp[j]).
  5. Return dp[n].

Example Walkthrough

Jobs are now sorted by end time: [1,3], [2,4], [3,5], [3,6]. We initialize a dp array of size 5 with dp[0] = 0 (no jobs, no profit) and fill dp[1] through dp[4] from left to right, each entry taking the better of skipping job i (dp[i-1]) or taking it (its profit plus dp[j], where j is found by binary search over the sorted end times).

Code
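A sketch of the bottom-up version in Python, using `bisect_right` to count how many jobs finish by the current job's start time (that count is exactly the dp index of the best compatible prefix):

```python
from bisect import bisect_right

def job_scheduling(startTime, endTime, profit):
    # Sort jobs by end time; keep the sorted end times for bisecting.
    jobs = sorted(zip(endTime, startTime, profit))
    ends = [e for e, _, _ in jobs]
    n = len(jobs)

    # dp[i] = max profit using only the first i jobs (dp[0] = 0: no jobs).
    dp = [0] * (n + 1)
    for i in range(1, n + 1):
        _, s, p = jobs[i - 1]
        # Number of jobs whose end time is <= this job's start time.
        j = bisect_right(ends, s)
        # Skip job i (dp[i-1]) or take it (p plus best compatible prefix).
        dp[i] = max(dp[i - 1], p + dp[j])
    return dp[n]
```

Sorting dominates, so the whole routine runs in O(n log n) time and O(n) space.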