Last Updated: February 5, 2026
When you write a concurrent program, you face a fundamental choice: should you use multiple processes or multiple threads? This decision affects everything from memory usage to fault isolation to communication patterns.
Both processes and threads allow concurrent execution, but they work very differently under the hood. Understanding these differences is essential for designing systems correctly and for answering interview questions confidently.
A process is an instance of a running program. When you double-click an application or run a command in the terminal, the operating system creates a process.
Each process has its own:

- Virtual address space (its own private view of memory)
- Process ID (PID)
- File descriptors, environment variables, and working directory
- At least one thread of execution
The operating system isolates processes from each other. Process A cannot directly read or write Process B's memory. If Process A crashes, Process B keeps running unaffected.
Think of a process like a house. Each house has its own walls, rooms, plumbing, and electrical system. What happens in one house doesn't affect the neighbors. You can't just walk into someone else's house; you need explicit permission.
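A minimal sketch of this isolation in Python, assuming a POSIX system where `os.fork` is available: after the fork, parent and child each have their own copy of memory, so the child's write never reaches the parent.

```python
import os

def fork_demo():
    value = "parent"
    pid = os.fork()              # POSIX only: duplicate this process
    if pid == 0:
        # Child: this mutation only changes the child's private copy.
        value = "child"
        os._exit(0)
    os.waitpid(pid, 0)           # wait for the child to finish
    return value                 # still "parent": the child's write never leaked

print(fork_demo())
```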
A process's virtual address space is divided into segments:
| Segment | Contents | Growth |
|---|---|---|
| Text (Code) | Executable instructions | Fixed |
| Data | Initialized global/static variables | Fixed |
| BSS | Uninitialized global/static variables | Fixed |
| Heap | Dynamically allocated memory (malloc, new) | Grows upward |
| Stack | Local variables, function call frames | Grows downward |
The gap between heap and stack allows both to grow. If they meet, you get a stack overflow or out-of-memory error.
A thread is a unit of execution within a process. Every process has at least one thread (the main thread). A process can create additional threads that run concurrently within the same address space.
Threads within the same process share:

- The address space: code, heap, and global variables
- Open file descriptors
- Signal handlers and other process-wide resources

Each thread has its own:

- Stack (local variables and call frames)
- Register state, including the program counter and stack pointer
- Thread ID and scheduling state
If a process is a house, threads are the people living in it. They share the kitchen, living room, and bathroom. They can easily communicate by talking or leaving notes. But they can also get in each other's way, fight over resources, and create messes that affect everyone.
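A small illustrative sketch of that sharing in Python: every thread appends to the same list object, with no copying or IPC involved.

```python
import threading

def shared_memory_demo():
    notes = []                        # one list, visible to every thread

    def leave_note(text):
        notes.append(text)            # appending mutates the shared list

    workers = [threading.Thread(target=leave_note, args=(f"note {i}",))
               for i in range(4)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return len(notes)                 # all four notes landed in one list

print(shared_memory_demo())
```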
Processes and threads both let you run multiple tasks, but they make very different trade-offs. Here’s how they compare across the dimensions that matter most.
| Aspect | Processes | Threads |
|---|---|---|
| Address space | Separate | Shared |
| Memory access | Cannot directly access other process's memory | Can access all memory in the process |
| Isolation | Strong (OS-enforced) | Weak (programmer responsibility) |
| Data sharing | Requires IPC | Direct memory access |
Processes provide true isolation. Process A cannot read or overwrite Process B’s memory. Even if Process A has a nasty bug (like a wild pointer), it cannot corrupt another process.
Threads share everything inside the same address space. Any thread can read or write any memory in the process. That makes sharing easy, but it also means a bug in one thread can corrupt data that every other thread relies on.
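Because of that shared access, an unsynchronized read-modify-write like `counter += 1` can lose updates when threads interleave. One standard fix, sketched here in Python (the `SafeCounter` name is illustrative), is to guard the update with a lock so only one thread modifies the value at a time.

```python
import threading

class SafeCounter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:              # only one thread inside at a time
            self.value += 1

def count_to(n, threads=4):
    counter = SafeCounter()
    per_thread = n // threads

    def work():
        for _ in range(per_thread):
            counter.increment()

    ts = [threading.Thread(target=work) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter.value              # with the lock, no increments are lost

print(count_to(4000))
```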
Creating a process is relatively expensive because the OS has to do a lot of setup:

- Allocate and initialize a new process descriptor (PCB)
- Set up a new virtual address space and page tables (copy-on-write reduces this cost for `fork`)
- Copy or share file descriptors and other kernel resources

Creating a thread is much cheaper. The OS mainly needs to:

- Allocate a stack for the new thread
- Create a small thread descriptor (TCB)
- Add the thread to the scheduler's run queue
Typical ballpark timings:
| Operation | Typical Time |
|---|---|
| Process creation (fork) | 1-10 ms |
| Thread creation | 10-100 μs |
That is often around a 100× difference, which matters a lot when you need thousands of concurrent tasks.
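You can get a rough feel for this gap yourself. The sketch below (Python, assuming a POSIX system) times one create-run-join cycle for a process and for a thread; absolute numbers vary widely by machine and OS, so treat them as ballpark only.

```python
import time
import threading
import multiprocessing

def noop():
    pass

def time_process():
    # One full create/start/join cycle for a child process.
    start = time.perf_counter()
    p = multiprocessing.Process(target=noop)
    p.start()
    p.join()
    return time.perf_counter() - start

def time_thread():
    # The same cycle for a thread in this process.
    start = time.perf_counter()
    t = threading.Thread(target=noop)
    t.start()
    t.join()
    return time.perf_counter() - start

print(f"process: {time_process():.6f}s  thread: {time_thread():.6f}s")
```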
All IPC mechanisms involve the OS kernel, which adds overhead. The choice of IPC method depends on your data size, latency requirements, and whether processes are on the same machine.
Thread communication is faster because it is often just memory access, but you must synchronize correctly to avoid races, visibility bugs, and deadlocks.
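One illustrative IPC sketch in Python: a `multiprocessing.Queue` carries a message from child to parent. Under the hood the payload is pickled, written through an OS pipe, and unpickled on the other side, which is exactly the kernel-mediated overhead threads avoid.

```python
import multiprocessing

def child(queue):
    # Runs in a separate process; the dict is serialized across the boundary.
    queue.put({"status": "done", "result": 42})

def ipc_demo():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=child, args=(q,))
    p.start()
    message = q.get()        # blocks until the child's message arrives
    p.join()
    return message["result"]

print(ipc_demo())
```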
Processes fail independently. If Process A crashes, Process B keeps running. This is why many web browsers isolate tabs into separate processes. One misbehaving tab should not take down the whole browser.
Threads share fate. If one thread crashes due to a segmentation fault or an unhandled fatal error, the entire process typically terminates, and all threads die with it. One buggy thread can bring everything down.
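A minimal demonstration of that fault boundary, assuming a POSIX system: the child process aborts fatally, yet the parent keeps running and can even inspect how the child died.

```python
import multiprocessing
import os

def crash():
    os.abort()               # simulate a fatal crash (SIGABRT) in the worker

def isolation_demo():
    p = multiprocessing.Process(target=crash)
    p.start()
    p.join()
    # The parent is still alive; a negative exitcode means
    # the child was killed by a signal.
    return p.exitcode

print(isolation_demo())
```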
| Resource | Per Process | Per Thread |
|---|---|---|
| Virtual address space | Separate per process | Shared |
| Page tables | Separate | Shared |
| Stack | One | One per thread |
| Kernel structures | Full process descriptor | Lightweight thread descriptor |
| File descriptors | Separate table | Shared table |
A process might consume 10-100 MB of memory overhead. A thread might consume 1-8 MB (mostly stack space). You can have thousands of threads more easily than thousands of processes.
When the OS stops one task and runs another, it performs a context switch. At a high level, that means:

- Save the current task's CPU registers and program counter
- Update the scheduler's bookkeeping
- Restore the next task's saved registers and resume it
The cost depends heavily on whether you’re switching between processes or between threads in the same process.
A process switch is more expensive because it typically requires changing the virtual address space:

- Switching page tables (loading a new address-space root, e.g. CR3 on x86)
- Flushing or invalidating TLB entries
- Losing cache locality as the new process touches different memory
Typical cost: ~1–10 μs, plus additional slowdown from cache and TLB “warm-up.”
A thread switch within the same process is usually cheaper:

- The address space and page tables stay the same
- Only registers, program counter, and stack pointer change
- The TLB and caches stay mostly warm
Typical cost: ~0.1–1 μs.
Choose processes when isolation matters more than sharing.
If one component crashing should not bring down others, processes are the safer choice.
Processes come with OS-enforced boundaries. They can run as different users with different permissions.
A process is the natural unit of deployment. You can run processes on different machines and coordinate them over the network. Threads cannot span machines.
Some runtimes restrict thread-based parallelism for CPU-bound code. In CPython, for example, the Global Interpreter Lock (GIL) allows only one thread to execute Python bytecode at a time.
If you need real CPU parallelism in these environments, processes are often the practical solution.
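A minimal sketch of that workaround in Python: `ProcessPoolExecutor` spreads CPU-bound work across worker processes, each with its own interpreter, so the GIL no longer serializes the computation.

```python
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n):
    # Stand-in for CPU-bound work that threads could not parallelize in CPython.
    return sum(range(n))

def parallel_sums(sizes):
    # One worker process per CPU core by default; each gets its own interpreter.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(busy_sum, sizes))

print(parallel_sums([10, 100]))
```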
When tasks do not need to share much state, processes give you clean separation without adding synchronization complexity.
Choose threads when sharing and fast coordination matter.
If concurrent work operates on the same in-memory structures, threads are usually the right tool because they avoid IPC overhead.
Thread-to-thread coordination can be as cheap as reading and writing memory (with the right synchronization). Process communication usually involves system calls and often data copying.
For high-frequency coordination, threads tend to win.
Threads are typically lighter than processes. If you need a large number of concurrent tasks, a thread pool is often practical, while spawning the same number of processes can get expensive.
For CPU-bound work that benefits from parallel execution, threads scale well across cores. Spinning up a process for each small unit of computation is usually wasteful.
Languages like Java, C++, Go, and Rust provide strong threading primitives, clear memory models, and solid synchronization tools, which makes thread-based designs safer and more effective.
Real-world systems often use both processes and threads. It’s rarely an either-or choice. Processes give you isolation and clean failure boundaries, while threads give you cheap concurrency and fast sharing within a worker.
A very common architecture looks like this:

- A supervisor process manages a pool of worker processes
- Each worker process runs multiple threads (or an event loop) to handle many concurrent tasks
- Workers that crash are restarted by the supervisor without affecting the others
This gives you isolation at the process level, and efficient concurrency within each process.
Instead of creating a new process for every task, you pre-fork a fixed pool of worker processes and reuse them. That way, you pay the expensive process-creation cost only once.
Typical flow:

1. At startup, the parent forks a fixed number of worker processes.
2. Incoming requests are dispatched to idle workers.
3. A worker handles its request and returns to the pool.
4. If a worker crashes, the parent replaces it.
You get fault isolation (a crashed worker does not take down the whole service) with good efficiency (no per-request process creation).
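The pattern can be sketched with Python's `multiprocessing.Pool` (the `handle_request` name is illustrative): four workers are forked once at startup and then reused for every task, so the per-request cost is just dispatch, not process creation.

```python
import multiprocessing
import os

def handle_request(request_id):
    # Each request is handled by one of the long-lived worker processes.
    return (request_id, os.getpid())

def serve(requests):
    # Fork a fixed pool of 4 workers once, then reuse them for every request.
    with multiprocessing.Pool(processes=4) as pool:
        return pool.map(handle_request, requests)

print(serve(range(8)))
```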
Inside a process, you often also avoid creating a new thread per task. Instead, you keep a thread pool and feed it work through a queue.
Typical flow:

1. At startup, the process creates a fixed number of worker threads.
2. Tasks are pushed onto a shared, thread-safe queue.
3. Idle workers pull tasks from the queue and execute them.
4. Finished workers go back to waiting on the queue.
This avoids per-task thread creation overhead while still supporting high concurrency and good CPU utilization.
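In Python this pattern is packaged as `ThreadPoolExecutor`, which manages the queue and the worker threads for you; a minimal sketch (the `fetch` function is a stand-in for real I/O-bound work):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for I/O-bound work (network call, disk read, ...).
    return f"fetched {url}"

def fetch_all(urls):
    # A fixed pool of 8 threads drains the work queue; no per-task thread creation.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch, urls))

print(fetch_all(["a", "b"]))
```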
Understanding how the OS models processes and threads explains what the scheduler actually manages and what state must be saved and restored.
For every process, the OS keeps a Process Control Block (PCB), which is essentially the process’s “record” in the kernel. It typically includes:

- Process ID and parent process ID
- Process state (running, ready, blocked, ...)
- Saved CPU registers and program counter
- Memory-management information (e.g., the page table pointer)
- Open file descriptors, scheduling priority, and accounting data
The PCB exists because the OS needs enough information to pause a process and later resume it correctly.
Each thread has its own Thread Control Block (TCB). It is smaller than a PCB because a thread shares most resources with its parent process. A TCB usually contains:

- Thread ID
- Saved registers, program counter, and stack pointer
- Thread state and scheduling information
- A pointer to the owning process's PCB
The key idea: threads share the process’s address space and resources, but each thread still needs its own execution context.
The relationship between “user threads” and what the OS schedules depends on the threading model.
One-to-one (1:1): each user thread maps to a kernel thread. The OS scheduler can see and schedule every thread independently.
Many-to-one (N:1): many user threads map to a single kernel thread. A user-space runtime schedules threads, while the OS only sees one schedulable entity.
Many-to-many (M:N): many user threads map to a smaller number of kernel threads. The runtime schedules user threads onto a pool of kernel threads.
What is the primary difference between a process and a thread?