Last Updated: February 1, 2026
The blocking queue provides two guarantees that regular queues cannot: producers block when the queue is full (backpressure), and consumers block when the queue is empty (no busy waiting).
Combined with thread safety, this makes blocking queues the backbone of producer-consumer systems, from message brokers to thread pools to request buffering. Java's ThreadPoolExecutor, the engine behind most ExecutorService instances, takes a BlockingQueue in its constructor. Understanding how to build one exposes the core techniques of concurrent data structure design.
A bounded, thread-safe queue that blocks producers when full and blocks consumers when empty. The capacity limit provides backpressure, preventing memory exhaustion. The blocking semantics eliminate busy waiting and coordinate producers with consumers naturally.
| Operation | Description | Expected Complexity |
|---|---|---|
| put(item) | Insert item, block if queue is full | O(1) |
| take() | Remove and return head, block if empty | O(1) |
| offer(item) | Insert if space available, return false if full | O(1) |
| poll() | Remove head if available, return null if empty | O(1) |
| peek() | View head without removing, null if empty | O(1) |
| size() | Current number of elements | O(1) |
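In Java terms, this contract might be sketched as the interface below; the name BoundedBlockingQueue is illustrative, mirroring a subset of java.util.concurrent.BlockingQueue:

```java
/** Illustrative contract; mirrors a subset of java.util.concurrent.BlockingQueue. */
interface BoundedBlockingQueue<E> {
    void put(E item) throws InterruptedException;  // blocks while full
    E take() throws InterruptedException;          // blocks while empty
    boolean offer(E item);                         // returns false instead of blocking
    E poll();                                      // returns null instead of blocking
    E peek();                                      // views head, null if empty
    int size();                                    // current element count
}
```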
A correct concurrent queue must let multiple producers call put() simultaneously without losing items, and multiple consumers call take() simultaneously without duplicate consumption.

Before adding concurrency, we need a queue that efficiently uses bounded memory. A circular buffer is the standard choice: a fixed-size array where head and tail wrap around when they reach the end.
The buffer holds elements A, B, C at indices 0, 1, 2. The head pointer at 0 means the next poll() returns A. The tail pointer at 3 means the next offer() places an item at index 3. When tail reaches 5 and another item is added, tail wraps to 0 (if space exists after dequeues). This wrap-around reuses memory without shifting elements.
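As a sketch, such a buffer might look like this (the class name CircularBuffer is illustrative, not a library type):

```java
/** A fixed-capacity circular buffer; not yet thread-safe. */
class CircularBuffer<E> {
    private final Object[] items;
    private int head = 0;   // index of the next element to remove
    private int tail = 0;   // index of the next free slot
    private int count = 0;  // number of stored elements

    CircularBuffer(int capacity) {
        items = new Object[capacity];
    }

    /** Inserts at the tail; returns false if full. */
    boolean offer(E item) {
        if (count == items.length) return false;
        items[tail] = item;
        tail = (tail + 1) % items.length;  // wrap around
        count++;
        return true;
    }

    /** Removes from the head; returns null if empty. */
    @SuppressWarnings("unchecked")
    E poll() {
        if (count == 0) return null;
        E item = (E) items[head];
        items[head] = null;                // allow GC
        head = (head + 1) % items.length;  // wrap around
        count--;
        return item;
    }
}
```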
Adding thread safety to our circular buffer reveals several subtle problems. Understanding these challenges is essential for choosing the right synchronization strategy.
The lost wakeup is the most insidious bug in blocking queue implementations. A producer signals "not empty" after adding an item, but the consumer hasn't started waiting yet. The signal is lost, and the consumer then waits forever, even though the queue has items.
The consumer checks the condition, finds it empty, but before it can wait, the producer adds an item and signals. The signal goes nowhere because no thread is waiting yet. The consumer then waits on a condition that's already satisfied.
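The race is easiest to see in code. Here is a deliberately broken sketch (hypothetical names, assuming Java's intrinsic monitors) that checks the condition before acquiring the lock:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** BROKEN on purpose: the emptiness check races with the producer's signal. */
class LostWakeupDemo {
    private final Object lock = new Object();
    private final Queue<Integer> queue = new ArrayDeque<>();

    void consume() throws InterruptedException {
        if (queue.isEmpty()) {        // check made without holding the lock
            // <-- the producer can add an item and notify() right here;
            //     that signal is lost because no one is waiting yet
            synchronized (lock) {
                lock.wait();          // now waits forever on a non-empty queue
            }
        }
        synchronized (lock) {
            System.out.println(queue.poll());
        }
    }

    void produce(int item) {
        synchronized (lock) {
            queue.add(item);
            lock.notify();            // if no thread is waiting, this is a no-op
        }
    }
}
```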
Spurious wakeups are another hazard: a thread waiting on a condition variable can wake up even when no other thread signaled it. This is documented behavior in most threading implementations (POSIX pthreads, Java, and others). If the code assumes every wakeup means the condition is satisfied, it will dequeue from an empty queue.
Multiple producers competing for the last slot, or multiple consumers competing for the last item, can cause races if signaling isn't handled correctly.
Both producers wake up when a slot opens, but only one can use it. The other must go back to waiting. This is why the wait must be in a loop, not an if statement.
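As a sketch, the consumer from the broken example above can be fixed by moving the check under the lock and into a loop:

```java
/** Correct pattern: check and wait under the same lock, re-checking in a loop. */
Integer consume() throws InterruptedException {
    synchronized (lock) {
        // Loop, not if: handles both spurious wakeups and
        // losing the race for the last item to another consumer.
        while (queue.isEmpty()) {
            lock.wait();
        }
        return queue.poll();   // the condition is guaranteed to hold here
    }
}
```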
Should you signal while holding the lock or after releasing it? Signaling while holding the lock is safer (no lost signals) but can cause "hurry up and wait": the signaled thread wakes up only to block immediately on the lock the signaler still holds. (Java's intrinsic monitors and Condition objects require holding the lock to signal; the choice genuinely exists in pthreads and C++, where signaling after release is legal.)
| Property | Requirement | Why It Matters |
|---|---|---|
| Atomicity | put/take must be atomic | Prevents partial updates, duplicate items |
| FIFO Order | Items dequeue in enqueue order | Expected queue semantics |
| Blocking | Block rather than busy-wait | CPU efficiency, battery life |
| Liveness | Every waiting thread eventually proceeds | No indefinite blocking when items/space exist |
The straightforward solution: one lock protects all queue state, with two condition variables for blocking semantics. Producers wait on "not full" and signal "not empty". Consumers wait on "not empty" and signal "not full".
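A minimal sketch of this design (the class name SingleLockQueue is illustrative; the structure follows what ArrayBlockingQueue does, using lockInterruptibly for interruption handling):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

/** Single-lock bounded blocking queue: one lock guards all state, two conditions block. */
class SingleLockQueue<E> {
    private final Object[] items;
    private int head, tail, count;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();  // producers wait here
    private final Condition notEmpty = lock.newCondition();  // consumers wait here

    SingleLockQueue(int capacity) {
        items = new Object[capacity];
    }

    /** Inserts at the tail, blocking while the queue is full. */
    void put(E item) throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == items.length)      // loop guards against spurious wakeups
                notFull.await();
            items[tail] = item;
            tail = (tail + 1) % items.length;
            count++;
            notEmpty.signal();                 // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    /** Removes from the head, blocking while the queue is empty. */
    @SuppressWarnings("unchecked")
    E take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == 0)
                notEmpty.await();
            E item = (E) items[head];
            items[head] = null;                // allow GC
            head = (head + 1) % items.length;
            count--;
            notFull.signal();                  // wake one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```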
| Property | Status | Explanation |
|---|---|---|
| Thread-safe | Yes | Single lock serializes all access |
| Deadlock-free | Yes | Only one lock, no circular wait |
| Starvation-free | Yes (with fair lock) | FIFO lock acquisition |
| Scalability | Limited | Producers block consumers and vice versa |
The diagram shows the fundamental limitation: all four threads contend for one lock. A producer inserting at the tail blocks a consumer removing from the head, even though they're accessing different parts of the queue.
The key insight: in a queue, producers only touch the tail and consumers only touch the head. If we use separate locks for head and tail, a producer can insert while a consumer removes simultaneously. This is the approach used by Java's LinkedBlockingQueue.
The consumer acquires the take lock and removes from head. The producer acquires the put lock and appends at tail. They operate simultaneously because they hold different locks.
With two locks, we can't atomically read and update count under a single lock. We need an AtomicInteger. But signaling gets complex: after a put, how do we signal waiting consumers if we only hold the put lock and the notEmpty condition belongs to the take lock?
Solution: Cascading Signals
When count goes from 0 to 1, a put operation must signal consumers. But it holds the put lock, not the take lock. We use a two-step approach:

1. After releasing the put lock, the producer briefly acquires the take lock and signals notEmpty, but only when its put moved the count from 0 to 1.
2. Each awakened consumer, while still holding the take lock, signals notEmpty again if items remain after its take, cascading the wakeup to the next waiting consumer. Takes that drain a full queue symmetrically signal notFull across the put lock.
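A sketch of this scheme, closely modeled on LinkedBlockingQueue's signaling discipline (all names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

/** Two-lock bounded queue sketch: producers touch tail, consumers touch head. */
class TwoLockQueue<E> {
    private static class Node<E> {
        E item;
        Node<E> next;
        Node(E item) { this.item = item; }
    }

    private final int capacity;
    private final AtomicInteger count = new AtomicInteger();  // shared by both sides
    private Node<E> head = new Node<>(null);  // sentinel; head.next is the first item
    private Node<E> tail = head;
    private final ReentrantLock putLock  = new ReentrantLock();
    private final Condition notFull      = putLock.newCondition();
    private final ReentrantLock takeLock = new ReentrantLock();
    private final Condition notEmpty     = takeLock.newCondition();

    TwoLockQueue(int capacity) { this.capacity = capacity; }

    void put(E item) throws InterruptedException {
        int c = -1;
        putLock.lockInterruptibly();
        try {
            while (count.get() == capacity)
                notFull.await();
            tail = tail.next = new Node<>(item);
            c = count.getAndIncrement();      // c = count before this put
            if (c + 1 < capacity)
                notFull.signal();             // cascade to the next producer
        } finally {
            putLock.unlock();
        }
        if (c == 0)                           // queue was empty: cross-lock signal
            signalNotEmpty();
    }

    E take() throws InterruptedException {
        E item;
        int c;
        takeLock.lockInterruptibly();
        try {
            while (count.get() == 0)
                notEmpty.await();
            Node<E> first = head.next;
            item = first.item;
            first.item = null;
            head = first;                     // first becomes the new sentinel
            c = count.getAndDecrement();      // c = count before this take
            if (c > 1)
                notEmpty.signal();            // cascade to the next consumer
        } finally {
            takeLock.unlock();
        }
        if (c == capacity)                    // queue was full: cross-lock signal
            signalNotFull();
        return item;
    }

    private void signalNotEmpty() {
        takeLock.lock();
        try { notEmpty.signal(); } finally { takeLock.unlock(); }
    }

    private void signalNotFull() {
        putLock.lock();
        try { notFull.signal(); } finally { putLock.unlock(); }
    }
}
```

Note that put never holds both locks at once: it releases the put lock before taking the take lock to signal, which is why no lock-ordering discipline is needed.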
| Property | Status | Explanation |
|---|---|---|
| Thread-safe | Yes | Separate locks for head and tail |
| Deadlock-free | Yes | Lock ordering not needed (disjoint resources) |
| Concurrent put+take | Yes | Different locks, different data |
| Scalability | Good | Producers and consumers don't block each other |
| Aspect | Single Lock (ArrayBlockingQueue) | Two-Lock (LinkedBlockingQueue) | Lock-Free |
|---|---|---|---|
| Concurrent operations | None | put + take | Full |
| Lock overhead | 1 lock acquisition | 2 lock acquisitions (cascade) | No locks, CAS overhead |
| Memory layout | Contiguous array (cache-friendly) | Linked nodes (allocation) | Complex structures |
| Bounded capacity | Yes, fixed at construction | Optional (defaults to Integer.MAX_VALUE, effectively unbounded) | Typically unbounded |
| size() accuracy | Exact | Exact (atomic count) | Eventually consistent |
| Implementation complexity | Low | Medium | High |
| Best for | Small queues, low contention | High throughput | Extreme performance |
- Java: java.util.concurrent.ArrayBlockingQueue (single-lock, bounded) or LinkedBlockingQueue (two-lock, optionally bounded). Both are battle-tested.
- C++: boost::lockfree::queue for lock-free, or implement the two-lock pattern with std::mutex and std::condition_variable.
- Python: queue.Queue (built-in, single-lock) or multiprocessing.Queue for inter-process use.

This section provides the production-ready single-lock implementation in the style of ArrayBlockingQueue, which is most relevant for interviews. It includes all operations, proper interruption handling, and clear documentation.
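The blocking core (put/take) was sketched in the single-lock section above. What follows is a sketch of the remaining operations, written as methods to add inside that SingleLockQueue class; the main method is a hypothetical single-threaded demo whose output appears below:

```java
// Methods to add inside the SingleLockQueue<E> class sketched earlier.

/** Non-blocking insert: returns false instead of waiting when full. */
boolean offer(E item) {
    lock.lock();
    try {
        if (count == items.length) return false;
        items[tail] = item;
        tail = (tail + 1) % items.length;
        count++;
        notEmpty.signal();
        return true;
    } finally {
        lock.unlock();
    }
}

/** Non-blocking remove: returns null instead of waiting when empty. */
@SuppressWarnings("unchecked")
E poll() {
    lock.lock();
    try {
        if (count == 0) return null;
        E item = (E) items[head];
        items[head] = null;
        head = (head + 1) % items.length;
        count--;
        notFull.signal();
        return item;
    } finally {
        lock.unlock();
    }
}

/** Views the head without removing it; null if empty. */
@SuppressWarnings("unchecked")
E peek() {
    lock.lock();
    try {
        return count == 0 ? null : (E) items[head];
    } finally {
        lock.unlock();
    }
}

/** Exact element count (the single lock makes this a consistent snapshot). */
int size() {
    lock.lock();
    try {
        return count;
    } finally {
        lock.unlock();
    }
}

/** Hypothetical single-threaded demo; prints the lines shown below. */
public static void main(String[] args) throws InterruptedException {
    SingleLockQueue<Integer> q = new SingleLockQueue<>(2);
    q.put(1);
    q.put(2);
    System.out.println(q.offer(3));  // queue is full
    System.out.println(q.peek());
    System.out.println(q.take());
    System.out.println(q.poll());
    System.out.println(q.poll());    // queue is empty
}
```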
Expected Output:
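```
false
1
1
2
null
```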