synchronized vs ReentrantLock

Last Updated: February 1, 2026

When multiple threads access shared data, you need a way to prevent race conditions. In Java, the two most common tools are synchronized and ReentrantLock. Both provide mutual exclusion, meaning only one thread can enter a critical section at a time.

synchronized is built into the language. It is simple to use, automatically releases the lock even if an exception occurs, and benefits from JVM optimizations.

ReentrantLock is part of java.util.concurrent.locks and offers more control: timed and interruptible lock acquisition, optional fairness, and multiple condition variables. That flexibility is useful in advanced scenarios, but it requires more discipline, especially always unlocking in a finally block.

In this chapter, we will compare both across usability, correctness, performance under contention, and when to choose each in real systems.

How synchronized Works Internally

Every object in Java has an associated monitor (also called intrinsic lock). When a thread enters a synchronized block, it acquires the monitor. When it exits, it releases the monitor. Only one thread can hold a monitor at a time.

Object Header and Mark Word

To understand synchronization, you need to understand the object header. Every Java object starts with a header containing metadata: the mark word (which holds lock and identity state) and a pointer to the object's class.

The mark word is the key to understanding lock implementation. Its contents change based on the lock state:

Lock State         | Mark Word Contents
Unlocked           | HashCode, GC age, 01
Biased             | Thread ID, epoch, GC age, 101
Thin (Lightweight) | Pointer to lock record in stack, 00
Fat (Heavyweight)  | Pointer to monitor object, 10
GC marked          | Forwarding address, 11

Lock Escalation

The JVM uses a clever optimization called lock escalation (or lock inflation). Locks start cheap and become heavier only when contention requires it.

Biased Locking (deprecated and disabled by default in JDK 15, removed in JDK 18)

When a thread first acquires a lock, the JVM assumes the same thread will acquire it again. It "biases" the lock to that thread by storing the thread ID in the mark word. Subsequent acquisitions by the same thread require no atomic operations, just a comparison.

If a different thread tries to acquire a biased lock, bias revocation occurs at a safepoint, which is expensive.

Thin (Lightweight) Locking

When bias is revoked or disabled, the JVM uses thin locks. The acquiring thread:

  1. Creates a lock record in its stack frame
  2. Copies the mark word to the lock record
  3. Uses CAS to replace the mark word with a pointer to the lock record

Thin locks are still very fast because CAS is a single CPU instruction. They work well under low contention.

Fat (Heavyweight) Locking

When spinning fails repeatedly, the lock inflates to a fat lock. The JVM allocates a full monitor object (ObjectMonitor in HotSpot) that includes:

  • Owner thread
  • Entry count (for reentrancy)
  • Wait set (threads that called wait())
  • Entry list (threads waiting to acquire)

Fat locks are expensive because blocked threads must be parked (put to sleep by the OS) and later unparked, involving system calls and context switches.

synchronized Mechanics

Now that we understand how locks escalate under the hood, let's look at how you actually use synchronized in practice. There are two forms: method synchronization and block synchronization.

Method vs Block Synchronization

When you add synchronized to an instance method, the lock is the this object. When you add it to a static method, the lock is the Class object. The following examples show the equivalence between method and block forms.
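A minimal sketch of the equivalence (the class name is illustrative):

```java
public class SyncForms {
    private int value;

    // Method form: implicitly locks on "this"
    public synchronized void incrementMethod() {
        value++;
    }

    // Equivalent block form: explicitly locks on "this"
    public void incrementBlock() {
        synchronized (this) {
            value++;
        }
    }

    // Static method form: locks on the SyncForms.class object, not any instance
    public static synchronized void staticMethod() { }

    // Equivalent static block form
    public static void staticBlock() {
        synchronized (SyncForms.class) { }
    }

    public synchronized int get() {
        return value;
    }
}
```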

Understanding this equivalence matters because it affects what gets locked. Two synchronized instance methods on the same object cannot run concurrently. But a synchronized instance method and a synchronized static method can, because they lock different objects.

At the bytecode level:

  • Method synchronization uses the ACC_SYNCHRONIZED flag
  • Block synchronization uses monitorenter and monitorexit instructions

The compiler generates two monitorexit instructions: one for normal exit and one for exception handling. This ensures the lock is always released.

Reentrancy

Why does reentrancy matter? Consider what happens without it. A thread acquires a lock, then calls another method that also needs the lock. Without reentrancy, the thread would deadlock waiting for itself to release a lock it's holding.

Both synchronized and ReentrantLock allow the same thread to acquire the lock multiple times. This is called reentrancy, and it solves a fundamental problem in object-oriented programming where methods often call other methods on the same object.

Here is a simple example showing reentrancy in action.
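A minimal sketch (the class name is illustrative):

```java
public class ReentrancyDemo {
    public synchronized void outer() {
        // The thread already holds the lock on "this" here...
        inner(); // ...and re-acquires it reentrantly instead of blocking
    }

    public synchronized void inner() {
        // Same lock, same thread: the entry count is now 2
    }
}
```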

When outer() calls inner(), the thread already holds the lock on this. Without reentrancy, the thread would block forever waiting to acquire a lock it already holds. With reentrancy, the thread recognizes it already owns the lock and proceeds.

How Entry Count Works

The lock maintains an entry count (also called hold count or recursion count) that tracks how many times the owning thread has acquired the lock.

The key insight is that each acquisition increments the count, and each release decrements it. The lock is only truly released when the count reaches zero. This allows methods to safely call other synchronized methods without worrying about lock ownership.

ReentrantLock makes the entry count directly observable through getHoldCount(): the count climbs with each nested acquisition and returns to zero as the calls unwind.
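A sketch that prints the hold count at each level (class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();
        try {
            System.out.println("outer: holdCount = " + lock.getHoldCount());            // 1
            inner();
            System.out.println("outer after inner: holdCount = " + lock.getHoldCount()); // back to 1
        } finally {
            lock.unlock(); // count drops to 0: lock truly released
        }
    }

    private void inner() {
        lock.lock(); // same thread re-acquires: count goes 1 -> 2
        try {
            System.out.println("inner: holdCount = " + lock.getHoldCount());            // 2
        } finally {
            lock.unlock(); // count drops back to 1
        }
    }

    public int holdCount() {
        return lock.getHoldCount();
    }
}
```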

A Common Reentrancy Mistake

One subtle bug occurs when the number of unlocks does not match the number of locks.
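A sketch of the bug (class and method names are hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class UnbalancedUnlock {
    private final ReentrantLock lock = new ReentrantLock();

    // BROKEN: if doWork() throws, unlock() is never reached.
    // The lock stays held forever and every future caller deadlocks.
    public void broken() {
        lock.lock();
        doWork();
        lock.unlock();
    }

    // CORRECT: finally guarantees exactly one unlock per lock
    public void correct() {
        lock.lock();
        try {
            doWork();
        } finally {
            lock.unlock();
        }
    }

    private void doWork() { /* may throw */ }
}
```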

The safest approach is to use try-finally for every lock() call. This ensures each acquisition has exactly one corresponding release, regardless of how deeply nested your reentrant calls become.

Now that we understand how both types of locks work internally, let's look at ReentrantLock specifically and see what additional capabilities it provides beyond the synchronized keyword.

ReentrantLock Deep Dive

ReentrantLock is part of java.util.concurrent.locks and provides explicit lock operations with additional features. It was introduced in Java 5 as part of Doug Lea's java.util.concurrent package, giving developers more control over locking behavior.

Basic Usage

The fundamental pattern for using ReentrantLock is lock-try-finally-unlock. This example shows a simple thread-safe counter.
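A sketch of the pattern (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();       // acquire before the try block
        try {
            count++;
        } finally {
            lock.unlock(); // always released, even if the body throws
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```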

Notice the structure: lock() is called before the try block, and unlock() is in the finally block. This ensures the lock is released regardless of whether an exception occurs. The try-finally pattern is essential because, unlike synchronized which guarantees release even on exception, ReentrantLock requires explicit unlock.

ReentrantLock Internals

So how does ReentrantLock actually work under the hood? It is built on AbstractQueuedSynchronizer (AQS), a powerful framework for building locks and synchronizers. Understanding AQS helps you understand not just ReentrantLock, but also Semaphore, CountDownLatch, and other synchronizers.

At its core, AQS pairs a single atomic state field, manipulated via CAS, with a queue of waiting threads.

Key components:

  • state: 0 = unlocked, >0 = locked (value = hold count for reentrancy)
  • CLH Queue: A FIFO queue of waiting threads
  • CAS operations: Lock-free manipulation of state and queue

Lock Acquisition Flow

When you call lock(), what actually happens? The following simplified code shows the acquisition path for a non-fair lock.
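A heavily simplified sketch of the idea, modeled with an AtomicInteger rather than the real AQS internals (which also manage the CLH queue and thread parking):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of non-fair acquisition, not the real HotSpot/AQS code
public class NonfairSketch {
    private final AtomicInteger state = new AtomicInteger(0);
    private Thread owner;

    public boolean tryAcquire() {
        int c = state.get();
        if (c == 0) {
            // Fast path: CAS state 0 -> 1; succeeds even if other threads are queued
            if (state.compareAndSet(0, 1)) {
                owner = Thread.currentThread();
                return true;
            }
        } else if (owner == Thread.currentThread()) {
            // Reentrant path: already the owner, just bump the count
            state.set(c + 1);
            return true;
        }
        return false; // real AQS would now enqueue the thread and park it
    }

    public void release() {
        if (owner != Thread.currentThread()) throw new IllegalMonitorStateException();
        int c = state.get() - 1;
        if (c == 0) owner = null; // clear owner before publishing state 0
        state.set(c);
    }
}
```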

The first thing a thread does is try a CAS (compare-and-swap) to change state from 0 to 1. If the CAS succeeds, the thread owns the lock. If not, it checks whether it is already the owner (for reentrancy) and either increments the state or joins the CLH queue.

The key insight from this flow is that lock acquisition is optimistic: try the fast path first (CAS), only fall back to queuing if that fails. This design minimizes overhead in the common uncontended case while still providing correct behavior under contention.

Feature Comparison

Now that we understand how both synchronized and ReentrantLock work internally, let's compare their features. This is where ReentrantLock really shines, offering capabilities that synchronized simply cannot provide.

tryLock: Non-blocking Acquisition

One of the most useful features of ReentrantLock is tryLock(), which attempts to acquire the lock without blocking. Here is a basic example.
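A sketch (class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean updateIfAvailable() {
        if (lock.tryLock()) {  // returns immediately: true if acquired, false if busy
            try {
                // ... critical section ...
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;          // lock busy: do something else instead of blocking
    }
}
```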

No equivalent exists for synchronized. You either block waiting for the lock, or you do not try at all. This makes tryLock() invaluable for implementing patterns like "try the primary resource, fall back to secondary if busy."

Timed Lock Acquisition

Building on tryLock(), you can specify a maximum wait time. This is essential for systems with SLAs or timeout requirements.
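A sketch (the 500 ms budget and names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean updateWithTimeout() throws InterruptedException {
        // Wait at most 500 ms; give up rather than hang if the lock stays busy
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                // ... critical section ...
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // timed out: fail fast, report upstream, or retry later
    }
}
```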

Timed locking is invaluable for preventing deadlocks (if a lock cycle exists, one thread will eventually timeout) and implementing responsive systems that fail fast rather than hang indefinitely.

Interruptible Lock Acquisition

What if you want to cancel a thread that is waiting for a lock? With synchronized, you cannot. The thread will wait until it acquires the lock or the JVM terminates. ReentrantLock solves this with lockInterruptibly().
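A sketch of the shutdown-friendly pattern (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleWorker implements Runnable {
    private final ReentrantLock lock = new ReentrantLock();

    @Override
    public void run() {
        try {
            lock.lockInterruptibly(); // responds to interrupt() while waiting
            try {
                // ... do work ...
            } finally {
                lock.unlock();
            }
        } catch (InterruptedException e) {
            // Interrupted while waiting for the lock: clean up and exit
            Thread.currentThread().interrupt(); // restore the interrupt flag
        }
    }
}
```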

This is essential for implementing graceful shutdown. When shutting down a server, you interrupt all worker threads. With synchronized, a thread blocked trying to enter a synchronized block ignores the interrupt and keeps waiting. With lockInterruptibly(), it receives an InterruptedException and can exit cleanly.

Fairness

By default, ReentrantLock is non-fair: a thread arriving when the lock happens to be released can "barge" ahead of threads already waiting in the queue. This maximizes throughput but can cause starvation. Pass true to the constructor for a fair lock.
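The choice is a single constructor argument (the field names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessChoice {
    ReentrantLock unfair = new ReentrantLock();     // default: non-fair, barging allowed
    ReentrantLock fair   = new ReentrantLock(true); // fair: strict FIFO handoff
}
```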

A fair lock grants access in strict FIFO order: threads that have been waiting longest get the lock first.

Non-fair (default): An arriving thread tries to acquire immediately, potentially jumping ahead of queued threads. Better throughput, possible starvation.

Fair: Arriving threads always go to the back of the queue. Lower throughput (10-100x slower under contention), guaranteed no starvation.

Condition Variables

The synchronized keyword uses wait(), notify(), and notifyAll() on the lock object for thread coordination. The limitation is that each object has exactly one wait set, so you cannot distinguish between different types of waiting threads.

The problem becomes clear in producer-consumer scenarios: notifyAll() wakes all waiting threads, even if only consumers need to wake up. This causes unnecessary context switches and reduces throughput.

ReentrantLock solves this with Condition objects. You can create multiple conditions from a single lock, each with its own independent wait set.
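A sketch of a bounded buffer with two conditions (the class name is illustrative):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition(); // producers wait here
    private final Condition notEmpty = lock.newCondition(); // consumers wait here
    private final Queue<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();   // only producers sleep on this condition
            }
            items.add(item);
            notEmpty.signal();     // wake exactly one consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();  // only consumers sleep on this condition
            }
            T item = items.remove();
            notFull.signal();      // wake exactly one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```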

This code shows the producer-consumer pattern implemented cleanly: producers wait on notFull and signal notEmpty; consumers wait on notEmpty and signal notFull. No thread is ever woken unnecessarily.

Multiple conditions shine whenever you have different types of waiting threads sharing a single lock. Beyond producer-consumer, think of read-write scenarios where readers wait for writers and vice versa, or resource pools where different consumers wait for different resource types.

Complete Feature Comparison

The table below summarizes all the feature differences between synchronized and ReentrantLock. Use this as a quick reference when making your choice.

Feature                     | synchronized            | ReentrantLock
Basic mutual exclusion      | Yes                     | Yes
Reentrancy                  | Yes                     | Yes
Automatic release           | Yes (even on exception) | No (must use try-finally)
tryLock (non-blocking)      | No                      | Yes
Timed lock                  | No                      | Yes
Interruptible waiting       | No                      | Yes
Fairness policy             | No (JVM decides)        | Yes (configurable)
Multiple conditions         | No (one per object)     | Yes
Lock query (isLocked, etc.) | No                      | Yes
Memory footprint            | Object header (8 bytes) | Object + AQS (~48 bytes)

Decision Flowchart: When to Use Which

There is no single rule; the right choice depends on trade-offs. Let's walk through the decision points and the reasoning behind each.

Prefer synchronized When

Simple mutual exclusion with straightforward scope. When your locking needs are simple (protect a method or a block of code), synchronized wins on simplicity. Consider a basic counter.

Compare this to the ReentrantLock version.
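Sketches of both versions side by side (class names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// synchronized version: one keyword per method, release is automatic
class SyncCounter {
    private int count;

    public synchronized void increment() { count++; }

    public synchronized int get() { return count; }
}

// ReentrantLock version: same behavior, several extra lines per method
class LockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```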

For simple cases, the extra boilerplate offers no benefit and introduces the risk of forgetting the try-finally.

Automatic release is critical for correctness. If your team is large, has varying experience levels, or the code will be maintained for years, synchronized reduces bug risk. The JVM guarantees release on every exit path, including exceptions and errors such as OutOfMemoryError. With ReentrantLock, one forgotten unlock() can cause permanent deadlocks that are hard to debug.

Lock scope is within a single method. synchronized naturally maps to method or block scope. When you need to hold a lock across multiple methods or release it conditionally, synchronized becomes awkward, but for single-scope locking it's cleaner.

Memory footprint matters. If you have millions of objects that each need a lock, the difference matters. synchronized uses the object header (already exists), while ReentrantLock requires a separate object (~48 bytes). For a cache with 10 million entries, that's potentially 480MB extra just for locks.

Legacy code compatibility. If your codebase already uses synchronized extensively, mixing in ReentrantLock creates cognitive overhead. Consistency has value.

Prefer ReentrantLock When

You need tryLock or timed locking. This is the most common reason to choose ReentrantLock. When you cannot afford to block indefinitely or need to implement timeout behavior, synchronized simply cannot help.

Real-world use cases include connection pools, distributed systems with SLAs, and any scenario where waiting forever is unacceptable.

You need to interrupt threads waiting for locks. With synchronized, a thread blocked trying to enter a synchronized block is not responsive to interrupts. This makes it hard to implement graceful shutdown. ReentrantLock's lockInterruptibly() solves this problem.

This pattern is essential for thread pools, task executors, and any long-running service that needs clean shutdown.

You need fairness guarantees. When thread starvation is unacceptable, fair ReentrantLock ensures FIFO ordering. This is critical in systems with SLAs or regulatory requirements around resource access.

Keep in mind the performance cost (10-100x slower under contention). Fair locks are a correctness feature, not a performance feature.

You need multiple conditions for a single lock. The producer-consumer pattern illustrates this perfectly. With synchronized, you have one wait set per object, so notifyAll() wakes both producers and consumers.

This reduces unnecessary context switches and improves throughput.

You need to query lock state for monitoring. For debugging, metrics, or adaptive algorithms, ReentrantLock provides visibility that synchronized cannot.
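A sketch using ReentrantLock's query methods (the class name is illustrative; the query methods are real API):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockMetrics {
    private final ReentrantLock lock = new ReentrantLock();

    public String snapshot() {
        // All of these are standard ReentrantLock query methods
        return "locked=" + lock.isLocked()
             + " heldByMe=" + lock.isHeldByCurrentThread()
             + " holdCount=" + lock.getHoldCount()
             + " queued=" + lock.getQueueLength()
             + " contended=" + lock.hasQueuedThreads();
    }

    public void doWork() {
        lock.lock();
        try {
            // e.g. export snapshot() to a metrics dashboard here
        } finally {
            lock.unlock();
        }
    }
}
```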

This is valuable for monitoring dashboards, deadlock detection tools, and adaptive lock-striping strategies.

When They're Essentially Equal

Sometimes either choice works fine, and you should pick based on team familiarity or codebase conventions. These scenarios include:

Low contention, simple critical sections. If your lock is rarely contested and protects a quick operation (a few memory accesses), both perform nearly identically. The 5 ns difference between synchronized (~20 ns uncontended) and ReentrantLock (~25 ns uncontended) is irrelevant compared to the work inside the critical section.

The advanced features are "nice to have" but not required. If you might use tryLock someday but do not need it now, do not over-engineer. Start with synchronized. You can always refactor later when the need becomes concrete.

Mixed existing codebase. If half your code uses synchronized and half uses ReentrantLock, pick whichever is used in the surrounding code for consistency.

The table below summarizes when each choice is clearly better versus when it is a toss-up.

Scenario                            | Best Choice   | Reason
Simple mutual exclusion             | synchronized  | Simpler, safer
Timeout needed                      | ReentrantLock | Not possible otherwise
Interruptible waiting               | ReentrantLock | Not possible otherwise
Fairness required                   | ReentrantLock | Not possible otherwise
Multiple conditions                 | ReentrantLock | More efficient signaling
Millions of lock objects            | synchronized  | Memory efficiency
Team unfamiliar with explicit locks | synchronized  | Reduced bug risk
Monitoring/debugging needs          | ReentrantLock | Query methods available
Low contention, simple code         | Either        | Pick based on convention

Understanding when to use each mechanism is important, but you also need to understand how they perform under different conditions. Let's look at the performance characteristics in detail.

Performance Characteristics

Performance is a common reason developers consider switching between synchronized and ReentrantLock, but the reality is more nuanced than "X is faster than Y." Let's break down performance across different scenarios with concrete numbers.

Uncontended Performance: Both Are Fast

When there is no contention (only one thread accesses the lock), both mechanisms are extremely fast because modern JVMs heavily optimize this common case.

Scenario                     | synchronized | ReentrantLock | Notes
Uncontended, single thread   | ~20 ns       | ~25 ns        | Biased locking (if enabled) makes synchronized slightly faster
Lock elision (JIT optimized) | ~0 ns        | ~0 ns         | JIT removes lock entirely if proven safe
Lock coarsening              | ~20 ns total | ~25 ns total  | Multiple adjacent locks merged into one
Reentrant acquisition        | ~5 ns        | ~10 ns        | Thread ID comparison vs. CAS

What do these numbers mean in practice? At 20 ns per lock/unlock cycle, you can perform 50 million lock operations per second on a single thread. For most applications, the lock itself is not the bottleneck, the work inside the critical section is.

JIT Optimizations

The JIT compiler performs two powerful optimizations that can eliminate lock overhead entirely.

Lock Elision removes locks when the JIT proves the lock object does not escape the current thread.

Lock Coarsening merges multiple adjacent locks into one.
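Sketches of the code shapes that qualify (whether the JIT actually applies either optimization depends on the JVM and how hot the code is):

```java
public class JitLockOpts {
    // Lock elision candidate: the StringBuffer never escapes this method,
    // so after escape analysis the JIT can drop its internal locking entirely.
    public static String elisionCandidate() {
        StringBuffer sb = new StringBuffer(); // StringBuffer methods are synchronized
        sb.append("hello, ");
        sb.append("world");
        return sb.toString();
    }

    private static final Object LOCK = new Object();
    private static int a, b, c;

    // Lock coarsening candidate: three back-to-back blocks on the same lock
    // can be merged into a single acquire/release pair.
    public static void coarseningCandidate() {
        synchronized (LOCK) { a++; }
        synchronized (LOCK) { b++; }
        synchronized (LOCK) { c++; }
    }
}
```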

These optimizations work for both synchronized and ReentrantLock, so uncontended performance is rarely a differentiator.

Low Contention Performance: Still Similar

With occasional contention (multiple threads, but they rarely collide), both mechanisms remain efficient.

Metric                     | synchronized              | ReentrantLock
Acquisition when available | ~20-25 ns (thin lock CAS) | ~25-30 ns (AQS CAS)
Brief spin before parking  | ~500-2000 ns              | ~500-2000 ns
Wakeup latency             | ~5-10 μs                  | ~5-10 μs

Under low contention, both use CAS (compare-and-swap) operations and brief spinning before resorting to OS-level parking, so performance is nearly identical.

High Contention Performance: Differences Emerge

Under heavy contention (many threads competing for the same lock), meaningful differences appear.

Metric                   | synchronized          | ReentrantLock (non-fair) | ReentrantLock (fair)
Throughput (relative)    | 1.0x                  | 1.0-1.2x                 | 0.01-0.1x
Latency variance         | Higher                | Lower                    | Lowest
Lock inflation overhead  | Yes (one-time ~10 μs) | None                     | None
Bias revocation overhead | Yes (safepoint, ~ms)  | None                     | None

Why ReentrantLock Often Wins Under High Contention

  1. No lock inflation overhead. synchronized must inflate from thin to fat lock under contention, which costs ~10 μs the first time. ReentrantLock's AQS is always the same mechanism.
  2. No bias revocation. When a second thread contests a biased lock, bias revocation requires a safepoint (global JVM pause). This can take milliseconds. ReentrantLock has no such mechanism.
  3. More predictable queuing. ReentrantLock's CLH queue provides consistent FIFO-ish ordering. synchronized's entry list management is more complex and can have higher variance.
  4. Adaptive spinning. Both spin before parking, but ReentrantLock's spinning is more predictable and tunable.

Under sustained high contention, these effects compound: synchronized pays its one-time inflation cost (and, on older JVMs, bias revocation) on top of queuing, while ReentrantLock behaves the same from the first contended acquisition.

Benchmark Numbers (Approximate)

The following numbers come from various JMH benchmarks. Your results will vary based on hardware, JVM version, and workload characteristics.

Threads | synchronized (ops/μs) | ReentrantLock (ops/μs) | ReentrantLock fair (ops/μs)
1       | 45-50                 | 40-45                  | 35-40
2       | 20-25                 | 22-28                  | 2-4
4       | 10-15                 | 12-18                  | 0.5-1
8       | 5-8                   | 6-10                   | 0.2-0.5
16      | 3-5                   | 4-7                    | 0.1-0.3

Key observations from these benchmarks:

  1. Single-threaded: synchronized is slightly faster (biased locking advantage)
  2. 2-8 threads: ReentrantLock is slightly faster (no inflation overhead)
  3. 16+ threads: Both degrade significantly, ReentrantLock maintains a small edge
  4. Fair lock: 10-100x slower, but provides bounded latency

When Performance Matters and When It Does Not

Performance rarely matters when:

  • Critical sections are short (a few memory operations)
  • Contention is low (< 2-4 threads typically)
  • Lock operations are infrequent (< 10,000/second)
  • The work inside the lock dominates (I/O, computation)

Performance might matter when:

  • Millions of lock operations per second
  • Many threads (> 8) contending on the same lock
  • Sub-millisecond latency requirements
  • Real-time or near-real-time systems

The right response to high contention is usually not to switch lock types, but to reduce contention:

  • Use finer-grained locks (lock striping)
  • Use lock-free data structures
  • Use optimistic locking (CAS-based algorithms)
  • Reduce critical section size
  • Use read-write locks for read-heavy workloads

The table below summarizes the performance guidance.

Your Situation                        | Recommendation
Uncontended or low contention         | Either works, pick based on features
High contention, need max throughput  | ReentrantLock (non-fair) has slight edge
High contention, need bounded latency | ReentrantLock (fair), but consider redesign
Performance-critical code             | Profile first, then decide
"synchronized is slow" myth           | It's not. Profile before switching.

The Bottom Line: Do not choose between synchronized and ReentrantLock based on performance unless you have profiled and proven that locking is your bottleneck. The feature differences (tryLock, fairness, conditions) are usually more important than the ~20% performance differences under contention.

With performance characteristics understood, let's look at the common mistakes developers make with both mechanisms and how to avoid them.