Threads in OS — Process vs Thread, Multithreading
Process vs Thread
A process is an independent program in execution with its own memory space. A thread is the smallest unit of execution within a process. Multiple threads within the same process share the process's resources.
| Feature | Process | Thread |
|---|---|---|
| Memory | Separate address space | Shared address space (within process) |
| Resources | Own file handles, I/O, etc. | Shares process resources |
| Creation overhead | High (fork/exec) | Low (lightweight) |
| Communication | IPC (pipes, sockets, shared memory) | Direct (shared memory) |
| Isolation | Crash doesn't affect other processes | Crash can kill entire process |
| Context switch | Expensive (full address space switch) | Cheaper (same address space) |
What each thread has (private): Thread ID, program counter, register set, stack
What threads share (with process): Code section, data section, heap, open files, signals
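The split above can be seen directly in code. The sketch below (class name `SharedHeapDemo` is just an illustrative choice) has two threads that each keep a private local variable on their own stack, then publish their result into a heap object that both can see — no pipes or sockets required:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SharedHeapDemo {
    // Lives on the heap: visible to every thread in the process
    static final List<Integer> shared =
            Collections.synchronizedList(new ArrayList<>());

    public static List<Integer> run() throws InterruptedException {
        Runnable work = () -> {
            int local = 0;                 // lives on this thread's private stack
            for (int i = 0; i < 3; i++) local += i;
            shared.add(local);             // the heap object is shared
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();                         // wait for both threads to finish
        t2.join();
        return shared;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());         // both threads contributed: [3, 3]
    }
}
```

Each thread computed `local` independently, but both wrote into the same shared list — which is exactly why unsynchronized access to shared data needs care (see the race-condition section below).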
Benefits of Multithreading
- Responsiveness: A thread can continue running while another is blocked (e.g., waiting for I/O). UI remains responsive.
- Resource sharing: Threads share memory, so no expensive IPC mechanisms are needed.
- Economy: Creating and context-switching threads is cheaper than processes.
- Scalability: Threads can run in parallel on multiple CPU cores (true parallelism).
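The responsiveness benefit can be demonstrated with a small sketch (timing values are illustrative assumptions): one thread blocks on a long wait, standing in for a slow I/O call, while the main thread keeps doing useful work in the meantime.

```java
public class ResponsivenessDemo {
    public static boolean run() throws InterruptedException {
        // Simulates a thread blocked on slow I/O
        Thread blocked = new Thread(() -> {
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        blocked.start();

        // Meanwhile, this thread keeps making progress
        long sum = 0;
        for (int i = 0; i < 1_000; i++) sum += i;

        // We finished our work while the other thread was still waiting
        boolean finishedWhileOtherBlocked = blocked.isAlive();
        blocked.join();
        return finishedWhileOtherBlocked;
    }
}
```

In a single-threaded design, the 500 ms wait would stall everything; with a second thread, the rest of the program stays responsive.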
User Threads vs Kernel Threads
| Feature | User-Level Threads | Kernel-Level Threads |
|---|---|---|
| Management | Managed by user-space thread library | Managed by OS kernel |
| Kernel awareness | Kernel sees only one process | Kernel knows about each thread |
| Context switch | Fast (no kernel mode switch) | Slower (kernel involvement) |
| Blocking | If one thread blocks, all threads block | Other threads can continue |
| Parallelism | Cannot run on multiple CPUs simultaneously | Can run on multiple CPUs |
| Examples | Green threads (early Java), GNU Portable Threads | Windows threads, Linux pthreads (NPTL) |
Threading Models
Threading models define how user threads map to kernel threads:
- Many-to-One (M:1): Many user threads map to one kernel thread. Fast but no parallelism; one blocking call blocks all threads. (Green threads)
- One-to-One (1:1): Each user thread maps to one kernel thread. True parallelism; more overhead. (Linux pthreads, Windows threads)
- Many-to-Many (M:N): Many user threads map to many (but fewer) kernel threads. Best of both worlds; complex to implement. (Solaris, older Java)
- Two-Level Model: Hybrid of M:N and 1:1 - allows binding specific user threads to kernel threads.
Thread Pool
A thread pool is a collection of pre-created threads waiting for tasks. Instead of creating a new thread for each task (expensive), tasks are submitted to the pool and executed by available threads.
Benefits:
- Eliminates thread creation/destruction overhead
- Limits the number of concurrent threads (prevents resource exhaustion)
- Improves response time (threads are ready immediately)
Thread pool parameters:
- Core pool size: Minimum number of threads always kept alive
- Maximum pool size: Maximum number of threads allowed
- Queue: Holds tasks when all threads are busy
- Keep-alive time: How long idle threads above core size are kept before termination
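All four parameters map directly onto the constructor of Java's `java.util.concurrent.ThreadPoolExecutor`. A minimal sketch (the queue capacity and timeout values are arbitrary choices for illustration):

```java
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static int run() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                               // core pool size: always kept alive
                4,                               // maximum pool size
                30, TimeUnit.SECONDS,            // keep-alive for threads above core size
                new LinkedBlockingQueue<>(100)); // queue holding tasks when threads are busy

        // Tasks are submitted to the pool, not to a freshly created thread
        Future<Integer> result = pool.submit(() -> 21 * 2);
        int value = result.get();                // blocks until a pooled thread finishes the task
        pool.shutdown();
        return value;
    }
}
```

When more than 100 tasks are queued and all 4 threads are busy, the executor rejects further submissions — this is the "prevents resource exhaustion" benefit in action.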
Race Conditions in Multithreading
A race condition occurs when two or more threads access shared data concurrently and the result depends on the order of execution. This leads to unpredictable, incorrect behavior.
Example: Two threads both read a counter (value=5), both increment it, and both write back. Expected result: 7. Actual result: 6 (one increment is lost).
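Because real races are nondeterministic, the lost-update interleaving above is easiest to see when forced by hand. The sketch below simulates the bad schedule step by step in a single thread (class name `LostUpdateDemo` is illustrative); it is not itself a concurrent program, just the interleaving written out:

```java
public class LostUpdateDemo {
    static int counter;

    // Forces the problematic schedule: both "threads" read before either writes back
    public static int simulate() {
        counter = 5;
        int readA = counter;   // thread A reads 5
        int readB = counter;   // thread B reads 5 (before A writes back)
        counter = readA + 1;   // A writes 6
        counter = readB + 1;   // B overwrites with 6 — A's increment is lost
        return counter;        // 6, not the expected 7
    }
}
```

A non-atomic `counter++` compiles to exactly this read-modify-write sequence, so two real threads can hit this schedule at any time.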
Solutions:
- Mutex/Lock: Only one thread can access the critical section at a time
- Atomic operations: Hardware-level operations that complete without interruption (e.g., `AtomicInteger` in Java)
- Synchronized methods: Java's `synchronized` keyword
- Immutable objects: Objects that cannot be modified after creation are inherently thread-safe
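As a sketch of the atomic-operations fix: `AtomicInteger.incrementAndGet()` performs the read-modify-write as one indivisible hardware operation, so no increments are lost even under heavy contention (the iteration count here is an arbitrary choice):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    static final AtomicInteger counter = new AtomicInteger(0);

    public static int run() throws InterruptedException {
        Runnable work = () -> {
            // Each increment is a single atomic read-modify-write
            for (int i = 0; i < 10_000; i++) counter.incrementAndGet();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter.get();   // always 20000 — no lost updates
    }
}
```

Replacing `AtomicInteger` with a plain `int` and `counter++` would make the final value unpredictable, typically less than 20000.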