Last Updated: February 1, 2026
When multiple threads access shared data, you need a way to prevent race conditions. In Java, the two most common tools are synchronized and ReentrantLock. Both provide mutual exclusion, meaning only one thread can enter a critical section at a time.
synchronized is built into the language. It is simple to use, automatically releases the lock even if an exception occurs, and benefits from JVM optimizations.
ReentrantLock is part of java.util.concurrent.locks and offers more control: timed and interruptible lock acquisition, optional fairness, and multiple condition variables. That flexibility is useful in advanced scenarios, but it requires more discipline, especially always unlocking in a finally block.
In this chapter, we will compare both across usability, correctness, performance under contention, and when to choose each in real systems.
Every object in Java has an associated monitor (also called intrinsic lock). When a thread enters a synchronized block, it acquires the monitor. When it exits, it releases the monitor. Only one thread can hold a monitor at a time.
To understand synchronization, you need to understand the object header. Every Java object starts with a header containing metadata: a mark word (lock state, identity hash code, and GC information) and a klass pointer identifying the object's class (arrays add a length field).
The mark word is the key to understanding lock implementation. Its contents change based on the lock state:
| Lock State | Mark Word Contents |
|---|---|
| Unlocked | HashCode, GC age, 01 |
| Biased | Thread ID, epoch, GC age, 101 |
| Thin (Lightweight) | Pointer to lock record in stack, 00 |
| Fat (Heavyweight) | Pointer to monitor object, 10 |
| GC marked | Forwarding address, 11 |
The JVM uses a clever optimization called lock escalation (or lock inflation). Locks start cheap and become heavier only when contention requires it.
Biased Locking (Deprecated and disabled by default in JDK 15, removed in JDK 18)
When a thread first acquires a lock, the JVM assumes the same thread will acquire it again. It "biases" the lock to that thread by storing the thread ID in the mark word. Subsequent acquisitions by the same thread require no atomic operations, just a comparison.
If a different thread tries to acquire a biased lock, bias revocation occurs at a safepoint, which is expensive.
When bias is revoked or disabled, the JVM uses thin locks. The acquiring thread:

- Creates a lock record in its current stack frame and copies the object's mark word into it (the "displaced" mark word)
- Uses a single CAS to point the object's mark word at that lock record
- Owns the lock if the CAS succeeds; on failure, it spins briefly and retries before the lock inflates
Thin locks are still very fast because CAS is a single CPU instruction. They work well under low contention.
When spinning fails repeatedly, the lock inflates to a fat lock. The JVM allocates a full monitor object (ObjectMonitor in HotSpot) that includes:

- The owner thread and a recursion count
- An entry list of threads blocked waiting to acquire the monitor
- A wait set of threads that called wait() and are waiting to be notified
Fat locks are expensive because blocked threads must be parked (put to sleep by the OS) and later unparked, involving system calls and context switches.
Now that we understand how locks escalate under the hood, let's look at how you actually use synchronized in practice. There are two forms: method synchronization and block synchronization.
When you add synchronized to an instance method, the lock is the this object. When you add it to a static method, the lock is the Class object. The following examples show the equivalence between method and block forms.
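A sketch of that equivalence (class and method names are illustrative):

```java
public class Inventory {
    private int stock;

    // Method form: implicitly locks this
    public synchronized void addStock(int n) {
        stock += n;
    }

    // Equivalent block form: explicitly locks this
    public void addStockBlock(int n) {
        synchronized (this) {
            stock += n;
        }
    }

    // Static method form: implicitly locks Inventory.class
    public static synchronized void resetAll() { /* ... */ }

    // Equivalent block form for the static case
    public static void resetAllBlock() {
        synchronized (Inventory.class) { /* ... */ }
    }
}
```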
Understanding this equivalence matters because it affects what gets locked. Two synchronized instance methods on the same object cannot run concurrently. But a synchronized instance method and a synchronized static method can, because they lock different objects.
At the bytecode level:
- Synchronized methods are marked with the ACC_SYNCHRONIZED flag
- Synchronized blocks compile to paired monitorenter and monitorexit instructions

The compiler generates two monitorexit instructions: one for the normal exit path and one in an exception handler. This ensures the lock is always released.
Why does reentrancy matter? Consider what happens without it. A thread acquires a lock, then calls another method that also needs the lock. Without reentrancy, the thread would deadlock waiting for itself to release a lock it's holding.
Both synchronized and ReentrantLock allow the same thread to acquire the lock multiple times. This is called reentrancy, and it solves a fundamental problem in object-oriented programming where methods often call other methods on the same object.
Here is a simple example showing reentrancy in action.
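A minimal sketch, with two synchronized methods on the same object:

```java
public class ReentrancyExample {
    public synchronized void outer() {
        System.out.println("outer: acquired the lock on this");
        inner();  // re-acquires the same monitor; succeeds only because of reentrancy
    }

    public synchronized void inner() {
        System.out.println("inner: same thread, same lock, no deadlock");
    }
}
```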
When outer() calls inner(), the thread already holds the lock on this. Without reentrancy, the thread would block forever waiting to acquire a lock it already holds. With reentrancy, the thread recognizes it already owns the lock and proceeds.
How Entry Count Works
The lock maintains an entry count (also called hold count or recursion count) that tracks how many times the owning thread has acquired the lock. The mechanics are simple.
The key insight is that each acquisition increments the count, and each release decrements it. The lock is only truly released when the count reaches zero. This allows methods to safely call other synchronized methods without worrying about lock ownership.
Here is a more detailed example that demonstrates the entry count mechanics.
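A sketch using ReentrantLock, since its getHoldCount() method exposes the entry count that synchronized keeps hidden (class name hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class EntryCountDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();                                   // count: 0 -> 1
        try {
            System.out.println("outer: hold count = " + lock.getHoldCount());
            inner();
            System.out.println("outer again: hold count = " + lock.getHoldCount());
        } finally {
            lock.unlock();                             // count: 1 -> 0, lock released
        }
    }

    private void inner() {
        lock.lock();                                   // reentrant: count 1 -> 2
        try {
            System.out.println("inner: hold count = " + lock.getHoldCount());
        } finally {
            lock.unlock();                             // count: 2 -> 1, still held
        }
    }

    public static void main(String[] args) {
        new EntryCountDemo().outer();
    }
}
```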
Running this produces the following output showing exactly how the entry count changes.
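For the sketch above, that output would be:

```
outer: hold count = 1
inner: hold count = 2
outer again: hold count = 1
```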
A Common Reentrancy Mistake
One subtle bug occurs when the number of unlocks does not match the number of locks.
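A hypothetical sketch of the bug: inner() lacks try-finally, so an exception leaves the count unbalanced.

```java
import java.util.concurrent.locks.ReentrantLock;

public class UnbalancedLocking {
    private final ReentrantLock lock = new ReentrantLock();

    public void inner() {
        lock.lock();               // hold count: +1
        mayThrow();                // BUG: if this throws...
        lock.unlock();             // ...this release never runs
    }

    public void outer() {
        lock.lock();               // hold count: 1
        try {
            inner();               // 2 -> 1 only on the happy path
        } catch (RuntimeException e) {
            // exception swallowed; hold count is now 2, not 1
        } finally {
            lock.unlock();         // 2 -> 1: the lock is STILL held
        }
        // Every other thread now blocks on this lock forever.
    }

    private void mayThrow() { throw new RuntimeException("boom"); }
}
```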
The safest approach is to use try-finally for every lock() call. This ensures each acquisition has exactly one corresponding release, regardless of how deeply nested your reentrant calls become.
With monitors and reentrancy covered, let's look at ReentrantLock specifically and see what additional capabilities it provides beyond the synchronized keyword.
ReentrantLock is part of java.util.concurrent.locks and provides explicit lock operations with additional features. It was introduced in Java 5 as part of Doug Lea's java.util.concurrent package, giving developers more control over locking behavior.
The fundamental pattern for using ReentrantLock is lock-try-finally-unlock. This example shows a simple thread-safe counter.
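A minimal sketch of that counter:

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();            // acquire before the try block
        try {
            count++;            // critical section
        } finally {
            lock.unlock();      // always released, even if the body throws
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```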
Notice the structure: lock() is called before the try block, and unlock() is in the finally block. This ensures the lock is released regardless of whether an exception occurs. The try-finally pattern is essential because, unlike synchronized which guarantees release even on exception, ReentrantLock requires explicit unlock.
So how does ReentrantLock actually work under the hood? It is built on AbstractQueuedSynchronizer (AQS), a powerful framework for building locks and synchronizers. Understanding AQS helps you understand not just ReentrantLock, but also Semaphore, CountDownLatch, and other synchronizers.
The diagram below shows the key components of AQS.
Key components:

- A volatile int state field; for ReentrantLock, 0 means unlocked and any positive value is the hold count
- An exclusiveOwnerThread reference recording which thread currently owns the lock
- A FIFO wait queue (a variant of the CLH queue) holding threads blocked on acquisition
When you call lock(), what actually happens? The following simplified code shows the acquisition path for a non-fair lock.
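A simplified sketch, modeled on OpenJDK's ReentrantLock.Sync (details vary across JDK versions):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

class NonfairLockSketch extends AbstractQueuedSynchronizer {
    void lock() {
        if (compareAndSetState(0, 1)) {              // fast path: CAS state 0 -> 1
            setExclusiveOwnerThread(Thread.currentThread());
        } else {
            acquire(1);                              // slow path: tryAcquire, then queue
        }
    }

    @Override
    protected boolean tryAcquire(int acquires) {
        Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {                                // lock is free: race with a CAS
            if (compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) {
            setState(c + acquires);                  // reentrant: bump the hold count
            return true;
        }
        return false;                                // fail: AQS enqueues the thread
    }

    @Override
    protected boolean tryRelease(int releases) {
        if (Thread.currentThread() != getExclusiveOwnerThread())
            throw new IllegalMonitorStateException();
        int c = getState() - releases;
        boolean free = (c == 0);
        if (free) setExclusiveOwnerThread(null);     // fully released at count zero
        setState(c);
        return free;
    }

    void unlock() { release(1); }
}
```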
The first thing a thread does is try a CAS (compare-and-swap) to change state from 0 to 1. If successful, it owns the lock. If not, it checks whether it's the current owner (for reentrancy) and either increments the state or joins the CLH queue. The flowchart below visualizes this process.
The key insight from this flow is that lock acquisition is optimistic: try the fast path first (CAS), only fall back to queuing if that fails. This design minimizes overhead in the common uncontended case while still providing correct behavior under contention.
Now that we understand how both synchronized and ReentrantLock work internally, let's compare their features. This is where ReentrantLock really shines, offering capabilities that synchronized simply cannot provide.
One of the most useful features of ReentrantLock is tryLock(), which attempts to acquire the lock without blocking. Here is a basic example.
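A sketch; updateCache() and skipUpdate() are hypothetical stand-ins for your own work:

```java
import java.util.concurrent.locks.ReentrantLock;

public void refresh(ReentrantLock lock) {
    if (lock.tryLock()) {          // acquire only if the lock is free right now
        try {
            updateCache();
        } finally {
            lock.unlock();
        }
    } else {
        skipUpdate();              // lock busy: do something else, never block
    }
}
```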
No equivalent exists for synchronized. You either block waiting for the lock, or you do not try at all. This makes tryLock() invaluable for implementing patterns like "try the primary resource, fall back to secondary if busy."
Building on tryLock(), you can specify a maximum wait time. This is essential for systems with SLAs or timeout requirements.
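A sketch assuming a 500 ms acquisition budget; doUpdate() is hypothetical:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public boolean updateWithinBudget(ReentrantLock lock) throws InterruptedException {
    if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
        try {
            doUpdate();
            return true;
        } finally {
            lock.unlock();
        }
    }
    return false;                  // budget exhausted: fail fast instead of hanging
}
```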
Timed locking is invaluable for preventing deadlocks (if a lock cycle exists, one thread will eventually time out and can back off) and for building responsive systems that fail fast rather than hang indefinitely.
What if you want to cancel a thread that is waiting for a lock? With synchronized, you cannot. The thread will wait until it acquires the lock or the JVM terminates. ReentrantLock solves this with lockInterruptibly().
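A sketch of an interruptible worker; process() is a hypothetical task body:

```java
import java.util.concurrent.locks.ReentrantLock;

public void runInterruptibly(ReentrantLock lock) {
    try {
        lock.lockInterruptibly();           // waiting here can be cancelled
        try {
            process();
        } finally {
            lock.unlock();
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the flag and exit cleanly
    }
}
```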
This is essential for implementing graceful shutdown. When shutting down a server, you interrupt all worker threads. With synchronized, a thread blocked waiting to enter a monitor keeps blocking; the interrupt is recorded but has no effect until the lock is acquired. With lockInterruptibly(), those threads receive an InterruptedException and can exit cleanly.
By default, ReentrantLock is non-fair: a thread arriving when the lock happens to be released can "barge" ahead of threads already waiting in the queue. This maximizes throughput but can cause starvation. Pass true to the constructor for a fair lock.
A fair lock grants access in strict FIFO order. Threads that have been waiting longest get the lock first. The difference in behavior is significant, as shown in this diagram.
Non-fair (default): An arriving thread tries to acquire immediately, potentially jumping ahead of queued threads. Better throughput, possible starvation.
Fair: Arriving threads always go to the back of the queue. Lower throughput (10-100x slower under contention), guaranteed no starvation.
The synchronized keyword uses wait(), notify(), and notifyAll() on the lock object for thread coordination. The limitation is that each object has exactly one wait set, so you cannot distinguish between different types of waiting threads.
The problem becomes clear in producer-consumer scenarios: notifyAll() wakes all waiting threads, even if only consumers need to wake up. This causes unnecessary context switches and reduces throughput.
ReentrantLock solves this with Condition objects. You can create multiple conditions from a single lock, each with its own independent wait set.
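A sketch of a bounded buffer with two conditions, modeled on the classic example in the Condition javadoc:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // producers wait here
    private final Condition notEmpty = lock.newCondition();  // consumers wait here
    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();            // only producers sleep on notFull
            items[putIndex] = item;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();              // wake exactly one consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();           // only consumers sleep on notEmpty
            T item = (T) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();               // wake exactly one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```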
This code shows the producer-consumer pattern implemented cleanly: producers wait on notFull and signal notEmpty; consumers wait on notEmpty and signal notFull. No thread is ever woken unnecessarily.
Multiple conditions shine whenever you have different types of waiting threads sharing a single lock. Beyond producer-consumer, think of read-write scenarios where readers wait for writers and vice versa, or resource pools where different consumers wait for different resource types.
The table below summarizes all the feature differences between synchronized and ReentrantLock. Use this as a quick reference when making your choice.
| Feature | synchronized | ReentrantLock |
|---|---|---|
| Basic mutual exclusion | Yes | Yes |
| Reentrancy | Yes | Yes |
| Automatic release | Yes (even on exception) | No (must use try-finally) |
| tryLock (non-blocking) | No | Yes |
| Timed lock | No | Yes |
| Interruptible waiting | No | Yes |
| Fairness policy | No (JVM decides) | Yes (configurable) |
| Multiple conditions | No (one per object) | Yes |
| Lock query (isLocked, etc.) | No | Yes |
| Memory footprint | Object header (8 bytes) | Object + AQS (~48 bytes) |
The flowchart below provides a quick decision path, but the real decision requires understanding the trade-offs in depth. Let's walk through both the decision tree and the reasoning behind each choice.
Simple mutual exclusion with straightforward scope. When your locking needs are simple (protect a method or a block of code), synchronized wins on simplicity. Consider a basic counter.
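A minimal sketch of that counter:

```java
public class SyncCounter {
    private int count;

    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}
```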
Compare this to the ReentrantLock version.
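The same increment with ReentrantLock (a sketch, showing only the method that changes):

```java
// three extra lines of ceremony per method, and one more field for the lock
public void increment() {
    lock.lock();
    try {
        count++;
    } finally {
        lock.unlock();
    }
}
```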
For simple cases, the extra boilerplate offers no benefit and introduces the risk of forgetting the try-finally.
Automatic release is critical for correctness. If your team is large, has varying experience levels, or the code will be maintained for years, synchronized reduces bug risk. The JVM guarantees release even when exceptions or errors (including OutOfMemoryError) propagate out of the block. With ReentrantLock, one forgotten unlock() can cause permanent deadlocks that are hard to debug.
Lock scope is within a single method. synchronized naturally maps to method or block scope. When you need to hold a lock across multiple methods or release it conditionally, synchronized becomes awkward, but for single-scope locking it's cleaner.
Memory footprint matters. If you have millions of objects that each need a lock, the difference matters. synchronized uses the object header (already exists), while ReentrantLock requires a separate object (~48 bytes). For a cache with 10 million entries, that's potentially 480MB extra just for locks.
Legacy code compatibility. If your codebase already uses synchronized extensively, mixing in ReentrantLock creates cognitive overhead. Consistency has value.
You need tryLock or timed locking. This is the most common reason to choose ReentrantLock. When you cannot afford to block indefinitely or need to implement timeout behavior, synchronized simply cannot help.
Real-world use cases include connection pools, distributed systems with SLAs, and any scenario where waiting forever is unacceptable.
You need to interrupt threads waiting for locks. With synchronized, a thread blocked waiting to enter a monitor is not responsive to interrupts. This makes it hard to implement graceful shutdown. ReentrantLock's lockInterruptibly() solves this problem.
This pattern is essential for thread pools, task executors, and any long-running service that needs clean shutdown.
You need fairness guarantees. When thread starvation is unacceptable, fair ReentrantLock ensures FIFO ordering. This is critical in systems with SLAs or regulatory requirements around resource access.
Keep in mind the performance cost (10-100x slower under contention). Fair locks are a correctness feature, not a performance feature.
You need multiple conditions for a single lock. The producer-consumer pattern illustrates this perfectly. With synchronized, you have one wait set per object, so notifyAll() wakes both producers and consumers.
Separate conditions mean only the relevant threads wake up, which reduces unnecessary context switches and improves throughput.
You need to query lock state for monitoring. For debugging, metrics, or adaptive algorithms, ReentrantLock provides visibility that synchronized cannot.
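A sketch of exposing lock metrics; where you report them is up to you:

```java
import java.util.concurrent.locks.ReentrantLock;

public void logLockStats(String name, ReentrantLock lock) {
    System.out.printf("%s locked=%b queued=%d heldByMe=%b%n",
            name,
            lock.isLocked(),                // is anyone holding it?
            lock.getQueueLength(),          // estimate of threads waiting
            lock.isHeldByCurrentThread());  // do I hold it right now?
}
```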
This is valuable for monitoring dashboards, deadlock detection tools, and adaptive lock-striping strategies.
Sometimes either choice works fine, and you should pick based on team familiarity or codebase conventions. These scenarios include:
Low contention, simple critical sections. If your lock is rarely contested and protects a quick operation (a few memory accesses), both perform nearly identically. The 5 ns difference between synchronized (~20 ns uncontended) and ReentrantLock (~25 ns uncontended) is irrelevant compared to the work inside the critical section.
The advanced features are "nice to have" but not required. If you might use tryLock someday but do not need it now, do not over-engineer. Start with synchronized. You can always refactor later when the need becomes concrete.
Mixed existing codebase. If half your code uses synchronized and half uses ReentrantLock, pick whichever is used in the surrounding code for consistency.
The table below summarizes when each choice is clearly better versus when it is a toss-up.
| Scenario | Best Choice | Reason |
|---|---|---|
| Simple mutual exclusion | synchronized | Simpler, safer |
| Timeout needed | ReentrantLock | Not possible otherwise |
| Interruptible waiting | ReentrantLock | Not possible otherwise |
| Fairness required | ReentrantLock | Not possible otherwise |
| Multiple conditions | ReentrantLock | More efficient signaling |
| Millions of lock objects | synchronized | Memory efficiency |
| Team unfamiliar with explicit locks | synchronized | Reduced bug risk |
| Monitoring/debugging needs | ReentrantLock | Query methods available |
| Low contention, simple code | Either | Pick based on convention |
Understanding when to use each mechanism is important, but you also need to understand how they perform under different conditions. Let's look at the performance characteristics in detail.
Performance is a common reason developers consider switching between synchronized and ReentrantLock, but the reality is more nuanced than "X is faster than Y." Let's break down performance across different scenarios with concrete numbers.
When there is no contention (only one thread accesses the lock), both mechanisms are extremely fast because modern JVMs heavily optimize this common case.
| Scenario | synchronized | ReentrantLock | Notes |
|---|---|---|---|
| Uncontended, single thread | ~20 ns | ~25 ns | Biased locking (if enabled) makes synchronized slightly faster |
| Lock elision (JIT optimized) | ~0 ns | ~0 ns | JIT removes lock entirely if proven safe |
| Lock coarsening | ~20 ns total | ~25 ns total | Multiple adjacent locks merged into one |
| Reentrant acquisition | ~5 ns | ~10 ns | Thread ID comparison vs. CAS |
What do these numbers mean in practice? At 20 ns per lock/unlock cycle, you can perform 50 million lock operations per second on a single thread. For most applications, the lock itself is not the bottleneck; the work inside the critical section is.
JIT Optimizations
The JIT compiler performs two powerful optimizations that can eliminate lock overhead entirely.
Lock Elision removes locks when the JIT proves the lock object does not escape the current thread.
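A sketch of elision-friendly code: the StringBuffer never escapes the method, so escape analysis lets the JIT drop its internal locks.

```java
public String join(String a, String b) {
    StringBuffer sb = new StringBuffer();  // every append() is synchronized
    sb.append(a);                          // lock elided: sb is provably thread-local
    sb.append(b);
    return sb.toString();
}
```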
Lock Coarsening merges multiple adjacent locks into one.
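A sketch of coarsening-friendly code: three back-to-back synchronized calls on the same object can be merged into one lock/unlock pair.

```java
import java.util.Vector;

// Conceptually becomes: synchronized (v) { v.add(a); v.add(b); v.add(c); }
public void addThree(Vector<String> v, String a, String b, String c) {
    v.add(a);   // each add() synchronizes on v's monitor
    v.add(b);
    v.add(c);
}
```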
These optimizations work for both synchronized and ReentrantLock, so uncontended performance is rarely a differentiator.
With occasional contention (multiple threads, but they rarely collide), both mechanisms remain efficient.
| Metric | synchronized | ReentrantLock |
|---|---|---|
| Acquisition when available | ~20-25 ns (thin lock CAS) | ~25-30 ns (AQS CAS) |
| Brief spin before parking | ~500-2000 ns | ~500-2000 ns |
| Wakeup latency | ~5-10 μs | ~5-10 μs |
The diagram below shows what happens under low contention.
Both use CAS (compare-and-swap) operations and brief spinning before resorting to OS-level parking. Performance is nearly identical.
Under heavy contention (many threads competing for the same lock), meaningful differences appear.
| Metric | synchronized | ReentrantLock (non-fair) | ReentrantLock (fair) |
|---|---|---|---|
| Throughput (relative) | 1.0x | 1.0-1.2x | 0.01-0.1x |
| Latency variance | Higher | Lower | Lowest |
| Lock inflation overhead | Yes (one-time ~10 μs) | None | None |
| Bias revocation overhead | Yes (safepoint, ~ms) | None | None |
Why ReentrantLock Often Wins Under High Contention

As the table above suggests, ReentrantLock never pays the one-time inflation or bias-revocation costs, and its AQS queue hands the lock to waiting threads with less latency variance. Here is what high contention looks like.
Benchmark Numbers (Approximate)
The following numbers come from various JMH benchmarks. Your results will vary based on hardware, JVM version, and workload characteristics.
| Threads | synchronized (ops/μs) | ReentrantLock (ops/μs) | ReentrantLock fair (ops/μs) |
|---|---|---|---|
| 1 | 45-50 | 40-45 | 35-40 |
| 2 | 20-25 | 22-28 | 2-4 |
| 4 | 10-15 | 12-18 | 0.5-1 |
| 8 | 5-8 | 6-10 | 0.2-0.5 |
| 16 | 3-5 | 4-7 | 0.1-0.3 |
Key observations from these benchmarks:

- With a single thread, synchronized is slightly faster thanks to its highly optimized uncontended path
- Once threads contend, the non-fair ReentrantLock pulls slightly ahead
- Fair ReentrantLock throughput collapses by one to two orders of magnitude under contention
- Throughput drops for every mechanism as thread count grows; contention, not lock choice, dominates
Performance rarely matters when:

- Contention is low and the lock is rarely contested
- The critical section does real work (I/O, computation) that dwarfs the ~20-25 ns lock overhead
- Lock operations are a small fraction of what the application does overall
Performance might matter when:

- A hot lock is acquired millions of times per second
- Many threads contend for the same lock on a latency-sensitive path
- You are considering a fair lock, whose contended throughput is 10-100x lower
The right response to high contention is usually not to switch lock types, but to reduce contention:

- Shrink the critical section so the lock is held for less time
- Split one lock into several (lock striping) so threads collide less often
- Use concurrent data structures (ConcurrentHashMap, ConcurrentLinkedQueue) instead of locking around plain ones
- Replace locked counters with atomics or LongAdder, as in the sketch below
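A sketch of the last point: a LongAdder counter avoids a contended lock entirely by spreading increments across internal cells (class name hypothetical).

```java
import java.util.concurrent.atomic.LongAdder;

public class HitCounter {
    private final LongAdder hits = new LongAdder();

    public void record() { hits.increment(); }  // contention spread across cells
    public long total()  { return hits.sum(); } // sum() is weakly consistent
}
```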
The table below summarizes the performance guidance.
| Your Situation | Recommendation |
|---|---|
| Uncontended or low contention | Either works, pick based on features |
| High contention, need max throughput | ReentrantLock (non-fair) has slight edge |
| High contention, need bounded latency | ReentrantLock (fair), but consider redesign |
| Performance-critical code | Profile first, then decide |
| "synchronized is slow" myth | It's not. Profile before switching. |
The Bottom Line: Do not choose between synchronized and ReentrantLock based on performance unless you have profiled and proven that locking is your bottleneck. The feature differences (tryLock, fairness, conditions) are usually more important than the ~20% performance differences under contention.
With performance characteristics understood, let's look at the common mistakes developers make with both mechanisms and how to avoid them.