Last Updated: March 28, 2026
We have a set of jobs, each with a start time, end time, and profit. We want to select a subset of non-overlapping jobs that maximizes total profit. Two jobs overlap if one starts before the other ends (but a job ending at time X does not overlap with a job starting at time X).
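As a quick illustration (the numbers here are my own, chosen for clarity, not taken from the original statement):

```python
# Hypothetical example: jobs given as parallel arrays.
startTime = [1, 2, 3, 3]
endTime   = [3, 4, 5, 6]
profit    = [50, 10, 40, 70]
# Job 0 runs [1, 3] and job 3 runs [3, 6]; they do not overlap because
# one ends exactly when the other starts. Taking both yields 50 + 70 = 120,
# which beats any other non-overlapping selection here.
```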
This is a classic weighted job scheduling problem. What makes it trickier than the unweighted version (where you just maximize the count of non-overlapping intervals) is that each job has a different profit. You cannot simply be greedy and pick the shortest or earliest-ending jobs. A single high-profit job might be worth more than several low-profit ones that fit in the same time window.
The key observation is this: for each job, you have exactly two choices. Either you take it (and earn its profit, but then you can only take jobs that do not overlap with it), or you skip it. This "take it or leave it" structure, combined with overlapping subproblems, points directly to dynamic programming.
- n <= 5 * 10^4 --> With n up to 50,000, an O(n^2) solution (2.5 billion operations) is too slow. We need O(n log n) or better.
- startTime[i], endTime[i] <= 10^9 --> Times can be huge, so any approach that iterates over time values (like a time-indexed DP array) is out of the question. We must work with job indices instead.
- profit[i] <= 10^4 --> Profits are positive, so taking a job is never harmful on its own. The only reason to skip a job is that it conflicts with a more profitable combination.

The most natural starting point is to consider every possible subset of non-overlapping jobs and pick the one with the highest total profit. For each job, we decide: take it or skip it. If we take it, we skip ahead to the next job that does not overlap. If we skip it, we move to the next job.
To make this structured, sort the jobs by start time. Then define a recursive function starting from job 0: either include the current job and jump to the next non-conflicting one, or skip it and move to the next job.
Define solve(i) that returns the maximum profit from jobs i through n-1:

- Base case: if i >= n, return 0.
- Skip job i: solve(i + 1).
- Take job i: profit[i] + solve(next), where next is the first job whose start time >= end time of job i. Find next by linearly scanning from i + 1.

The recursion recomputes the same subproblems many times. What if we cached the result of each solve(i) call, and also replaced the linear scan with binary search?
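Before adding that cache, here is a minimal sketch of the brute force just described, in Python (the function and variable names are mine, not from the original):

```python
def max_profit_brute_force(startTime, endTime, profit):
    # Sort jobs by start time so "the next job" is well defined.
    jobs = sorted(zip(startTime, endTime, profit))
    n = len(jobs)

    def solve(i):
        # Maximum profit obtainable from jobs i through n-1.
        if i >= n:
            return 0
        skip = solve(i + 1)
        # Linear scan: first job starting at or after job i's end time.
        nxt = i + 1
        while nxt < n and jobs[nxt][0] < jobs[i][1]:
            nxt += 1
        take = jobs[i][2] + solve(nxt)
        return max(skip, take)

    return solve(0)
```

Without caching this is exponential in the worst case, which is exactly what the next approach fixes.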
The brute force recomputes the same subproblems over and over. Adding memoization fixes that: we cache the result of solve(i) the first time and return it directly on subsequent calls. But there is a second bottleneck. Finding the next non-overlapping job with a linear scan takes O(n) per job, which makes the overall solution O(n^2) even with memoization.
Here is the key insight: if we sort jobs by start time, the end time of job i defines a threshold. We need the first job with a start time >= that end time. Since the jobs are sorted by start time, this is a classic binary search problem. Instead of scanning linearly, we binary search for the leftmost job whose start time is at or after the current job's end time. This drops the per-job lookup from O(n) to O(log n).
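In Python this lookup is a one-liner with the bisect module, assuming the jobs are already sorted by start time and their start times are copied into a separate list (starts and end_i are illustrative names I am introducing here):

```python
from bisect import bisect_left

# starts[k] = start time of the k-th job after sorting by start time.
starts = [1, 2, 3, 3]   # illustrative values
end_i = 3               # end time of the job we just took

# Leftmost index whose start time is >= end_i: the first job that can
# legally follow a job ending at end_i.
next_index = bisect_left(starts, end_i)   # -> 2
```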
The memoization ensures each subproblem solve(i) is computed exactly once. There are n subproblems, and each one does O(log n) work for the binary search. The recursion builds the solution bottom-up implicitly: to know the best profit from job i onward, we need the best profit from some later job onward, which has already been cached (or will be computed and cached exactly once).
The binary search works because sorting by start time means start times are non-decreasing. Finding the first start time >= a given end time is equivalent to finding the leftmost position in a sorted array, which is exactly what binary search on a sorted array does.
Define solve(i) with memoization: the maximum profit from jobs i through n-1.

- Base case: if i >= n, return 0.
- Skip job i: solve(i + 1).
- Take job i: profit[i] + solve(next), with next found by binary search.
- Return max(skip, take).

The memoized approach still carries recursion overhead. Can we convert it to an iterative bottom-up DP that fills a table from left to right?
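Putting those pieces together, here is a sketch of the memoized O(n log n) solution (again, the naming is mine):

```python
from bisect import bisect_left
from functools import lru_cache

def max_profit_memo(startTime, endTime, profit):
    # Sort jobs by start time; keep the start times for binary search.
    jobs = sorted(zip(startTime, endTime, profit))
    starts = [s for s, _, _ in jobs]
    n = len(jobs)

    @lru_cache(maxsize=None)
    def solve(i):
        # Maximum profit obtainable from jobs i through n-1.
        if i >= n:
            return 0
        skip = solve(i + 1)
        # Binary search: first job whose start time is >= job i's end time.
        nxt = bisect_left(starts, jobs[i][1])
        take = jobs[i][2] + solve(nxt)
        return max(skip, take)

    return solve(0)
```

One practical caveat: the skip chain can recurse n levels deep, and with n up to 5 * 10^4 that exceeds Python's default recursion limit, which is a concrete reason to prefer the bottom-up version below.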
The memoized recursion from Approach 2 already gives us O(n log n) time. But we can make it cleaner and avoid recursion by switching to bottom-up DP. The idea is the same, just flipped.
Sort jobs by end time this time (sorting by start time also works, but sorting by end time is the more classic formulation for this problem). Define dp[i] as the maximum profit considering only the first i jobs (after sorting by end time). For each job i, we have two choices:
- Skip job i: dp[i-1].
- Take job i: profit[i], plus the best profit from all jobs that finish before job i starts. We need the largest j such that endTime[j] <= startTime[i]. Since jobs are sorted by end time, this is a binary search (see the snippet below).

So dp[i] = max(dp[i-1], profit[i] + dp[j]), where j is found via binary search.
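One way to express that lookup, under the 1-indexed dp convention described below: bisect_right on the sorted end times returns how many jobs finish no later than the current start time, and that count is exactly the dp index j to read (ends and start_i are illustrative names):

```python
from bisect import bisect_right

# ends[k] = end time of the (k+1)-th job after sorting by end time.
ends = [3, 4, 5, 6]   # illustrative values
start_i = 5           # start time of the job being considered

# Number of jobs that finish at or before start_i; with dp[0] = 0 and
# dp 1-indexed, dp[j] is the best profit among compatible earlier jobs.
j = bisect_right(ends, start_i)   # -> 3
```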
The bottom-up DP processes jobs in order of increasing end time. When we consider job i, all jobs that could potentially come before it (those ending earlier) have already been processed and their optimal profits stored in the dp array. The binary search finds exactly where to look up the best compatible profit.
The max(dp[i-1], profit[i] + dp[j]) captures the core decision: either job i is not in our optimal set (so we keep the best from the first i-1 jobs), or it is (so we add its profit to the best compatible set before it). Since dp[i] is non-decreasing (adding more jobs to consider can never decrease the optimum), the binary search on end times correctly finds the best compatible prefix.
- Create a dp array of size n + 1, where dp[0] = 0 (no jobs, no profit).
- For each job i from 1 to n, set dp[i] = max(dp[i-1], profit[i] + dp[j]).
- The answer is dp[n].
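And a sketch of the full bottom-up version (names are mine; this is one way to realize the recurrence, not necessarily the original's exact code):

```python
from bisect import bisect_right

def max_profit_bottom_up(startTime, endTime, profit):
    # Sort jobs by end time; keep the end times for binary search.
    jobs = sorted(zip(endTime, startTime, profit))
    ends = [e for e, _, _ in jobs]
    n = len(jobs)

    # dp[k] = best profit considering only the first k jobs (by end time).
    dp = [0] * (n + 1)
    for i in range(1, n + 1):
        end_i, start_i, profit_i = jobs[i - 1]
        # Jobs that finish no later than this job's start time. Jobs at or
        # after index i - 1 end after start_i (a job starts before it ends),
        # so the search only lands on already-computed dp entries.
        j = bisect_right(ends, start_i)
        dp[i] = max(dp[i - 1], profit_i + dp[j])
    return dp[n]
```

On the illustrative arrays from the top of the section, both this and the memoized version return 120, matching the hand check.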