Last Updated: February 2, 2026
The Print FooBar Alternately problem is a simple concurrency challenge. Two threads must coordinate to print "foo" and "bar" in strict alternation: foobarfoobarfoobar.
Despite its simplicity, it tests your understanding of thread synchronization at a fundamental level. There's no complex resource sharing, no deadlock risk from circular dependencies, just two threads that need to take turns.
This makes it an excellent starting point for understanding synchronization primitives before tackling more complex problems like Dining Philosophers or the Bounded Buffer.
Two methods, foo() and bar(), are called by two separate threads. The foo() method prints "foo" and the bar() method prints "bar".
Design a mechanism so that the output is always "foobar" repeated n times.
You have two threads running simultaneously, each with a simple job:

- Thread 1 calls foo() in a loop n times. Each call should print "foo".
- Thread 2 calls bar() in a loop n times. Each call should print "bar".

The catch? These threads start concurrently and can be scheduled in any order by the operating system. Without synchronization, you might get "barfoo", "foofoofoobarbar", or any chaotic interleaving.
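To make that concrete, here is a deliberately unsynchronized sketch in Java (the class name, the hard-coded n, and the use of System.out.print are illustrative choices, not part of any official problem template). Run it a few times and the interleaving changes from run to run.

```java
// Two threads call foo() and bar() n times with no coordination at all.
// The output might happen to be "foobarfoobar...", or it might be
// "barfoo", "foofoobar...", or any other interleaving of the tokens.
public class FooBarUnsynchronized {
    private final int n;

    public FooBarUnsynchronized(int n) { this.n = n; }

    public void foo() {
        for (int i = 0; i < n; i++) {
            System.out.print("foo");   // nothing forces bar() to wait
        }
    }

    public void bar() {
        for (int i = 0; i < n; i++) {
            System.out.print("bar");   // nothing forces foo() to wait
        }
    }

    public static void main(String[] args) throws InterruptedException {
        FooBarUnsynchronized fb = new FooBarUnsynchronized(3);
        Thread t1 = new Thread(fb::foo);
        Thread t2 = new Thread(fb::bar);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```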
With the setup clear, here are the ordering constraints that make this problem interesting: the i-th "bar" may only be printed after the i-th "foo", and the (i+1)-th "foo" may only be printed after the i-th "bar".
These rules define a strict alternation: foo, bar, foo, bar, foo, bar. Any deviation, like two consecutive foos or bars printing, means your solution is broken.
So what does a correct solution look like?
We need to design a synchronization mechanism that enforces this strict ordering, lets both threads run to completion, and doesn't waste CPU while a thread waits for its turn.
The following diagram shows this ping-pong pattern in action.
Notice how each thread, after printing, explicitly hands off control to the other. Thread 1 prints, then signals Thread 2. Thread 2 prints, then signals Thread 1. This ping-pong continues for n iterations until both threads complete.
The FooBar problem distills synchronization to its essence: turn-taking. Many real-world systems require this exact pattern; a request/response handshake, for example, is two parties strictly taking turns.
Before jumping into solutions, let's understand what makes this problem tricky.
The most obvious approach is using a shared flag: `if (flag == FOO_TURN) print("foo"); flag = BAR_TURN;`. But this naive implementation has race conditions hiding in plain sight. Thread 2 might check the flag just as Thread 1 is modifying it, or both threads might read the old value simultaneously before either updates it.
The diagram below shows how this race can manifest.
The problem is that checking the flag and acting on it aren't atomic. Between the check and the action, the other thread might change things.
Even if we correctly use atomic operations to avoid the race, there's another issue. A spin loop wastes CPU cycles.
With two threads on a single-core CPU, this is particularly wasteful. While one thread spins checking the flag over and over, the other thread, which could actually make progress, can't run because the CPU is busy spinning. Even on multi-core systems, spinning wastes power and generates heat.
The question becomes: how do we make a thread sleep until it's its turn, rather than constantly checking?
This brings us to the trickiest challenge. With condition variables, we might signal before the other thread starts waiting. If Thread 1 signals "bar's turn" before Thread 2 calls wait(), the signal is lost. Thread 2 then waits forever for a signal that already happened.
This "lost wakeup" problem is subtle because it depends on timing. Your code might work 99% of the time in testing, then fail in production when thread scheduling is slightly different.
When evaluating solutions, we'll check each approach against these properties.
| Property | Definition | Why It Matters |
|---|---|---|
| Correct ordering | Output is exactly "foobar" n times | Functional correctness |
| Deadlock-free | Both threads complete | System liveness |
| No busy waiting | Threads sleep when blocked | CPU efficiency |
| Simple | Easy to understand and maintain | Engineering practicality |
Now let's see how different approaches stack up against these criteria.
Let's start with the simplest possible approach: use a shared flag and spin until it's your turn.
The idea is straightforward. We maintain a boolean flag indicating whose turn it is. Each thread spins in a loop, checking the flag. When it becomes your turn, you print and flip the flag.
The state machine below shows this simple alternation.
Now let's see what this looks like in code.
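Below is a minimal sketch in Java (the class and field names are my own; the flag is an AtomicBoolean so that each turn flip is immediately visible to the other thread):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Busy-wait solution: a shared turn flag plus a spin loop.
public class FooBarSpin {
    private final int n;
    private final AtomicBoolean fooTurn = new AtomicBoolean(true); // foo goes first

    public FooBarSpin(int n) { this.n = n; }

    public void foo() {
        for (int i = 0; i < n; i++) {
            while (!fooTurn.get()) {
                // spin: keep re-checking until it's foo's turn
            }
            System.out.print("foo");
            fooTurn.set(false);   // hand the turn to bar
        }
    }

    public void bar() {
        for (int i = 0; i < n; i++) {
            while (fooTurn.get()) {
                // spin: keep re-checking until it's bar's turn
            }
            System.out.print("bar");
            fooTurn.set(true);    // hand the turn back to foo
        }
    }
}
```

A plain volatile boolean would work just as well here; the important property is that one thread's write to the flag is guaranteed to become visible to the other thread's spin loop.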
| Property | Status | Explanation |
|---|---|---|
| Correct ordering | Yes | Flag ensures strict alternation |
| Deadlock-free | Yes | No circular dependencies |
| No busy waiting | No | Spin loop wastes CPU |
| Simple | Yes | Easy to understand |
This solution is correct but inefficient. The issue is that while one thread is waiting for its turn, it's constantly checking the flag in a tight loop. This is called "busy waiting" or "spinning."
On a single-core machine, the spinning thread prevents the other thread from running. Thread 1 spins, consuming the CPU. Thread 2, which could actually make progress and flip the flag, can't get scheduled. The OS eventually preempts Thread 1, but we've wasted a lot of CPU time.
Even on multi-core machines, spinning wastes power and generates heat. For n = 1000, we might waste millions of CPU cycles just checking a flag. In a production system, this would show up as high CPU usage for no useful work.
The solution works, and it's worth understanding, but interviewers will expect you to recognize its limitations. That's where our next solution comes in.
The busy-wait solution wastes CPU because threads spin instead of sleeping. Semaphores solve this by letting threads block efficiently. The OS puts a blocked thread to sleep, freeing the CPU for other work, and wakes it when the semaphore is released.
Think of semaphores as tickets or permits. A thread that calls acquire() needs a permit to proceed. If none is available, the thread sleeps. A thread that calls release() adds a permit, potentially waking a sleeping thread.
For FooBar, imagine a single baton being passed back and forth. Foo starts with the baton, prints, hands it to bar. Bar prints, hands it back to foo. We model this with two semaphores: foo starts with one permit, bar starts with zero.
Here's how the semaphore handoff works.
Why these specific initial values? Think about it: we want foo to go first. If fooSemaphore starts at 1, foo can acquire it immediately. If barSemaphore starts at 0, bar blocks immediately and waits for foo to release it. This exactly models "foo goes first."
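Here is a sketch of that handoff in Java, using java.util.concurrent.Semaphore and the semaphore names from the discussion above (the surrounding class is my own scaffolding):

```java
import java.util.concurrent.Semaphore;

// Semaphore handoff: a single "baton" passed back and forth.
public class FooBarSemaphore {
    private final int n;
    private final Semaphore fooSemaphore = new Semaphore(1); // foo may go first
    private final Semaphore barSemaphore = new Semaphore(0); // bar must wait

    public FooBarSemaphore(int n) { this.n = n; }

    public void foo() throws InterruptedException {
        for (int i = 0; i < n; i++) {
            fooSemaphore.acquire();   // wait for the baton
            System.out.print("foo");
            barSemaphore.release();   // pass the baton to bar
        }
    }

    public void bar() throws InterruptedException {
        for (int i = 0; i < n; i++) {
            barSemaphore.acquire();   // wait for the baton
            System.out.print("bar");
            fooSemaphore.release();   // pass the baton back to foo
        }
    }
}
```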
| Property | Status | Explanation |
|---|---|---|
| Correct ordering | Yes | Semaphore handoff ensures alternation |
| Deadlock-free | Yes | Linear dependency, no cycle |
| No busy waiting | Yes | Semaphores block efficiently |
| Simple | Yes | Elegant ping-pong pattern |
The semaphore solution is elegant because it directly models the turn-taking. There is only ever one "baton" in the system: at any moment, either exactly one of fooSemaphore and barSemaphore holds a single permit, or both counts are zero because the thread whose turn it is has acquired the permit and is printing. Exactly one thread can proceed at any time. There's no possibility of both threads printing simultaneously or either thread printing twice in a row.
There's also no risk of lost signals. Unlike condition variables, semaphore permits are "sticky." If foo releases barSemaphore before bar calls acquire, the permit is stored. When bar eventually calls acquire, it gets the permit immediately. This solves the lost wakeup problem elegantly.
Condition variables offer more flexibility when the synchronization condition is complex. For FooBar, they're overkill, but understanding this solution helps with harder problems where conditions can't be modeled as simple permit counts.
A condition variable lets a thread wait until some condition is true, and another thread signals when the condition changes. The pattern is always: acquire lock, check condition in a loop, if false then wait (which releases the lock and sleeps), when woken re-check the condition.
For FooBar, we combine a condition variable with a boolean flag indicating whose turn it is. Each thread waits until the flag says it's their turn.
Here's how the pieces fit together.
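A sketch in Java using ReentrantLock and a single Condition (the names lock, turnChanged, and fooTurn are my own; a synchronized block with wait/notify would follow the same shape):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Condition-variable solution: a lock-protected turn flag plus await/signal.
public class FooBarCondition {
    private final int n;
    private final Lock lock = new ReentrantLock();
    private final Condition turnChanged = lock.newCondition();
    private boolean fooTurn = true;   // foo goes first

    public FooBarCondition(int n) { this.n = n; }

    public void foo() throws InterruptedException {
        for (int i = 0; i < n; i++) {
            lock.lock();
            try {
                while (!fooTurn) {
                    turnChanged.await();   // releases the lock and sleeps
                }
                System.out.print("foo");
                fooTurn = false;           // now it's bar's turn
                turnChanged.signal();      // wake the waiting thread
            } finally {
                lock.unlock();
            }
        }
    }

    public void bar() throws InterruptedException {
        for (int i = 0; i < n; i++) {
            lock.lock();
            try {
                while (fooTurn) {
                    turnChanged.await();
                }
                System.out.print("bar");
                fooTurn = true;            // back to foo
                turnChanged.signal();
            } finally {
                lock.unlock();
            }
        }
    }
}
```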
The critical insight is the while loop around the wait. We check the condition, and if it's not our turn, we wait. When we wake up, we check again. This handles spurious wakeups, where a thread might wake without being signaled.
Notice the loop: if fooTurn is false, we wait. When signaled, we loop back and check again. Only when fooTurn is true do we proceed to print.
Let's look at the key synchronization points in this solution, because they're easy to get wrong.
The most important one: use `while`, not `if`, around the wait, because of spurious wakeups. A thread might wake up even when it's not its turn, and the `while` loop ensures we re-check the condition after every wakeup.

| Property | Status | Explanation |
|---|---|---|
| Correct ordering | Yes | Flag and condition ensure alternation |
| Deadlock-free | Yes | Linear dependency, no cycle |
| No busy waiting | Yes | Condition variables block efficiently |
| Simple | Medium | More verbose than semaphores |
Why does this solution work? Let's trace through the logic. Foo acquires the lock, sees fooTurn is true, prints, flips the flag to false, and signals before unlocking. Bar, which was waiting, wakes up, re-acquires the lock, re-checks the flag, finds it's now bar's turn, prints, flips the flag back, and signals foo. Because the flag is only ever read or modified while holding the lock, neither thread can print out of turn, and because the flag is checked before waiting, an early signal can't be lost.
The condition variable solution is more verbose than semaphores for this problem, but it demonstrates understanding of the wait-notify pattern. For more complex problems where the condition isn't a simple permit count, condition variables become essential.
We've seen three approaches to the FooBar problem. Here's how they compare across our evaluation criteria.
| Approach | Correct | No Busy Wait | Efficiency | Complexity | Best For |
|---|---|---|---|---|---|
| Naive (flag + spin) | Yes | No | Low | Simple | Quick prototype |
| Semaphore-based | Yes | Yes | High | Simple | Interview answer |
| Condition variable | Yes | Yes | High | Medium | Complex conditions |
Use the semaphore solution for interviews. It's elegant, efficient, and shows mastery of semaphore semantics. The condition variable solution is equally valid and demonstrates understanding of the wait-notify pattern, which is useful for more complex problems where the condition can't be expressed as a permit count.
If time permits in an interview, mention that you know both approaches and explain when you'd choose each. This shows depth of understanding.
Beyond the main three approaches, there are variations worth knowing for interviews. Let's explore two useful alternatives.
For situations where you want to avoid the overhead of semaphores but can't accept pure spinning, you can use yield. This gives up the CPU instead of spinning in a tight loop.
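A sketch of this variant in Java, reusing the atomic turn flag from the busy-wait solution but calling Thread.yield() inside the loop (names are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Yield-based variant: still a spin loop, but the waiting thread
// offers its time slice back to the scheduler on every iteration.
public class FooBarYield {
    private final int n;
    private final AtomicBoolean fooTurn = new AtomicBoolean(true);

    public FooBarYield(int n) { this.n = n; }

    public void foo() {
        for (int i = 0; i < n; i++) {
            while (!fooTurn.get()) {
                Thread.yield();   // hint: let the other thread run
            }
            System.out.print("foo");
            fooTurn.set(false);
        }
    }

    public void bar() {
        for (int i = 0; i < n; i++) {
            while (fooTurn.get()) {
                Thread.yield();
            }
            System.out.print("bar");
            fooTurn.set(true);
        }
    }
}
```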
When to prefer: When you need lock-free behavior and can tolerate some CPU overhead. Better than pure spinning but not as efficient as semaphores. This is a good middle ground when blocking primitives aren't available or are too expensive.
The semaphore solution is elegant for two threads, but what if we need to extend the problem to more of them? That's where barriers shine. If the problem extends to N threads printing in sequence, a barrier-based approach generalizes better.
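As a sketch, here's a two-thread version built on java.util.concurrent.CyclicBarrier (the two-rendezvous structure per round is one way to do it, not the only one):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Barrier-based variant: each round has two rendezvous points.
// The first tells bar that foo has printed; the second tells foo
// that bar has printed, so the next round can begin.
public class FooBarBarrier {
    private final int n;
    private final CyclicBarrier barrier = new CyclicBarrier(2);

    public FooBarBarrier(int n) { this.n = n; }

    public void foo() throws InterruptedException, BrokenBarrierException {
        for (int i = 0; i < n; i++) {
            System.out.print("foo");
            barrier.await();   // rendezvous 1: foo has printed
            barrier.await();   // rendezvous 2: wait for bar to finish the round
        }
    }

    public void bar() throws InterruptedException, BrokenBarrierException {
        for (int i = 0; i < n; i++) {
            barrier.await();   // rendezvous 1: wait for foo to print
            System.out.print("bar");
            barrier.await();   // rendezvous 2: bar has printed
        }
    }
}
```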
When to prefer: When extending to more threads (FooBarBaz with 3 threads) or when all threads need to synchronize at each step. Barriers ensure all participants reach the synchronization point before any can proceed.