Overview
Thread creation and synchronization constitute the foundation of concurrent programming, where multiple execution contexts operate simultaneously within a single process. A thread represents an independent sequence of instructions that shares the process's memory space with other threads but maintains its own execution state, including program counter, registers, and stack.
The concept emerged from the need to maximize CPU utilization and improve program responsiveness. Single-threaded programs execute instructions sequentially, leaving the processor idle during I/O operations or other blocking tasks. Threads address this inefficiency by allowing other work to proceed while one thread waits.
Thread synchronization mechanisms prevent race conditions and data corruption when multiple threads access shared resources. Without proper synchronization, non-atomic operations can interleave unpredictably, producing incorrect results. Consider two threads incrementing a shared counter:
# Without synchronization - race condition
counter = 0
threads = 2.times.map do
  Thread.new do
    1000.times { counter += 1 }
  end
end
threads.each(&:join)
puts counter # Expected: 2000; lost updates can yield less (MRI's GIL often masks this race; JRuby/TruffleRuby expose it readily)
The operation counter += 1 involves three steps: read the current value, increment it, and write it back. When threads interleave these steps, updates can be lost. Thread 1 reads 0, Thread 2 reads 0, both increment to 1, both write 1—resulting in one increment instead of two.
Synchronization primitives like mutexes, semaphores, and condition variables provide the tools to coordinate thread execution and protect shared state. These mechanisms trade some performance for correctness, introducing controlled serialization points where threads must wait their turn.
Key Principles
Thread creation involves allocating resources for the new execution context and initializing its state. The operating system assigns each thread its own stack for local variables and function call frames, while the heap and global data remain shared across all threads in the process. The thread receives a starting function or code block that serves as its entry point.
Thread lifecycle proceeds through several states: created, ready, running, blocked, and terminated. A newly created thread enters the ready state, waiting for the scheduler to assign it processor time. Once running, the thread executes instructions until it blocks on I/O, waits for a lock, yields control, or terminates. The scheduler decides which ready thread runs next based on priority, time slices, and scheduling policy.
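These states can be observed through Thread#status, which returns "run" or "sleep" for live threads, "aborting" during termination, false after normal termination, and nil after termination by an exception. A minimal sketch:
# Observing lifecycle states via Thread#status
t = Thread.new { sleep }   # thread blocks immediately
sleep 0.1                  # give the scheduler time to start it
puts t.status              # => "sleep" (blocked)
t.kill
t.join
puts t.status.inspect      # => false (terminated)
puts Thread.current.status # => "run" (the current thread is always running)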
Synchronization primitives enforce mutual exclusion and coordinate thread execution through atomic operations and waiting mechanisms. A mutex (mutual exclusion lock) provides the most basic synchronization: only one thread can hold the lock at any time, forcing others to wait. This creates a critical section where shared data can be safely modified.
# Mutex protecting critical section
mutex = Mutex.new
counter = 0
threads = 2.times.map do
  Thread.new do
    1000.times do
      mutex.synchronize { counter += 1 }
    end
  end
end
threads.each(&:join)
puts counter # => 2000 (consistent)
Condition variables extend mutexes by providing a mechanism for threads to wait until specific conditions become true. A thread acquires a mutex, checks a condition, and if false, releases the mutex while waiting on the condition variable. When another thread signals the condition, waiting threads wake up, reacquire the mutex, and recheck the condition.
Semaphores generalize mutexes by allowing multiple threads simultaneous access up to a specified count. A binary semaphore (count of 1) functions identically to a mutex. Counting semaphores (count > 1) limit concurrent access to a fixed number, useful for resource pools or rate limiting.
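Ruby's standard library does not include a Semaphore class; the following minimal sketch (the class name and API are illustrative) builds a counting semaphore from a mutex and condition variable:
# Counting semaphore sketch - illustrative, not a stdlib class
class Semaphore
  def initialize(count)
    @count = count
    @mutex = Mutex.new
    @cv = ConditionVariable.new
  end
  def acquire
    @mutex.synchronize do
      @cv.wait(@mutex) while @count.zero?
      @count -= 1
    end
  end
  def release
    @mutex.synchronize do
      @count += 1
      @cv.signal
    end
  end
end
# At most two threads hold a permit at once
sem = Semaphore.new(2)
workers = 4.times.map do |i|
  Thread.new do
    sem.acquire
    begin
      puts "Thread #{i} holds a permit"
      sleep 0.1
    ensure
      sem.release
    end
  end
end
workers.each(&:join)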
Memory visibility and ordering present subtle challenges in multi-threaded programs. Modern processors and compilers reorder operations for performance, potentially making one thread's writes invisible to another thread or visible in a different order. Synchronization operations include memory barriers that force visibility and ordering, ensuring threads see a consistent view of memory.
The happens-before relationship defines ordering guarantees in concurrent programs. If operation A happens-before operation B, then A's effects are visible to B. Synchronization operations establish happens-before relationships: releasing a lock happens-before acquiring that lock, starting a thread happens-before any operation in that thread, and any operation in a thread happens-before joining that thread.
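In Ruby these guarantees mean, for example, that a value written inside a thread is reliably visible after joining it:
result = nil
t = Thread.new { result = 42 } # write happens inside the thread
t.join                         # every operation in t happens-before join returns
puts result                    # => 42, guaranteed visible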
Atomic operations provide lock-free synchronization for simple operations like incrementing a counter or swapping a value. These operations complete in a single, uninterruptible step, preventing the interleaving that causes race conditions. While faster than locks for simple cases, atomic operations offer limited functionality and still require careful reasoning about memory ordering.
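Ruby's standard library does not expose atomic primitives directly, but the widely used concurrent-ruby gem does; a brief sketch, assuming the gem is installed:
require 'concurrent' # gem install concurrent-ruby

counter = Concurrent::AtomicFixnum.new(0)
threads = 4.times.map do
  Thread.new { 1_000.times { counter.increment } } # lock-free increment
end
threads.each(&:join)
puts counter.value # => 4000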
Ruby Implementation
Ruby provides the Thread class for creating and managing threads. The Global Interpreter Lock (GIL) restricts thread execution in MRI (Matz's Ruby Interpreter), allowing only one thread to execute Ruby code at a time. Despite this limitation, threads remain useful for I/O-bound operations where threads yield control while waiting, and other Ruby implementations like JRuby and TruffleRuby support true parallel execution.
A thread is created by passing a block to Thread.new; the block becomes the thread's execution context. The thread runs until the block completes or the thread is terminated:
# Basic thread creation
thread = Thread.new do
  puts "Thread #{Thread.current.object_id} starting"
  sleep 2
  puts "Thread #{Thread.current.object_id} finishing"
end
puts "Main thread continues"
thread.join # Wait for thread completion
puts "Main thread done"
Thread-local variables isolate data per thread, preventing sharing even when threads use the same key. Ruby provides this through bracket notation on Thread objects (with a caveat about fibers, illustrated after the example below):
# Thread-local storage
def process_data(id)
  Thread.current[:request_id] = id
  # Later in the call stack
  puts "Processing request #{Thread.current[:request_id]}"
end
threads = 3.times.map do |i|
  Thread.new { process_data(i) }
end
threads.each(&:join)
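The caveat: in modern Ruby, bracket notation is actually fiber-local storage. For values scoped to the thread itself, use Thread#thread_variable_set and Thread#thread_variable_get:
Thread.current[:key] = "fiber-local"
Thread.current.thread_variable_set(:key, "thread-local")

Fiber.new do
  # A new fiber on the same thread sees its own (empty) fiber-local storage
  puts Thread.current[:key].inspect                     # => nil
  puts Thread.current.thread_variable_get(:key).inspect # => "thread-local"
end.resume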
The Mutex class provides mutual exclusion for protecting shared state. The synchronize method acquires the lock, executes the block, and releases the lock even if an exception occurs:
# Mutex for protecting shared state
class ThreadSafeCounter
  def initialize
    @count = 0
    @mutex = Mutex.new
  end
  def increment
    @mutex.synchronize do
      @count += 1
    end
  end
  def value
    @mutex.synchronize { @count }
  end
end
counter = ThreadSafeCounter.new
threads = 10.times.map do
  Thread.new do
    100.times { counter.increment }
  end
end
threads.each(&:join)
puts counter.value # => 1000
ConditionVariable enables threads to wait for specific conditions efficiently. A thread acquires a mutex, checks a condition, and waits on the condition variable if false. Other threads signal the condition variable when they modify shared state:
# Producer-consumer with condition variable
# (Mutex and ConditionVariable are core classes; require 'thread' is no longer needed)
queue = []
mutex = Mutex.new
cv = ConditionVariable.new
# Producer thread
producer = Thread.new do
  5.times do |i|
    mutex.synchronize do
      queue << i
      puts "Produced: #{i}"
      cv.signal
    end
    sleep 0.1
  end
end
# Consumer thread
consumer = Thread.new do
  5.times do
    mutex.synchronize do
      while queue.empty?
        cv.wait(mutex)
      end
      item = queue.shift
      puts "Consumed: #{item}"
    end
  end
end
producer.join
consumer.join
The Queue class provides a thread-safe queue implementation that internally handles synchronization. It blocks on pop when empty and supports non-blocking operations:
# Thread-safe queue
queue = Queue.new
# Multiple producers
producers = 3.times.map do |i|
  Thread.new do
    5.times do |j|
      item = "P#{i}-#{j}"
      queue << item
      puts "Enqueued: #{item}"
      sleep rand(0.1)
    end
  end
end
# Multiple consumers
consumers = 2.times.map do |i|
  Thread.new do
    loop do
      item = queue.pop
      break if item == :done # Sentinel signals shutdown
      puts "Consumer #{i} got: #{item}"
      sleep rand(0.1)
    end
  end
end
producers.each(&:join)
consumers.size.times { queue << :done } # One sentinel per consumer
consumers.each(&:join)
Thread pools reuse a fixed number of threads to execute tasks, avoiding the overhead of creating and destroying threads repeatedly. Ruby's standard library doesn't include a built-in thread pool, but implementing one demonstrates key synchronization concepts:
# Basic thread pool implementation
class ThreadPool
  def initialize(size)
    @queue = Queue.new
    @threads = size.times.map do
      Thread.new do
        loop do
          task = @queue.pop
          break if task == :shutdown
          task.call
        end
      end
    end
  end
  def schedule(&block)
    @queue << block
  end
  def shutdown
    @threads.size.times { @queue << :shutdown }
    @threads.each(&:join)
  end
end
pool = ThreadPool.new(3)
10.times do |i|
  pool.schedule do
    puts "Task #{i} on thread #{Thread.current.object_id}"
    sleep 0.1
  end
end
pool.shutdown
Practical Examples
Web servers handle multiple client requests concurrently using threads. Each connection spawns a thread that processes the request independently while other threads handle different clients:
# Multi-threaded web server concept
require 'socket'
server = TCPServer.new(8080)
puts "Server listening on port 8080"
loop do
  client = server.accept
  Thread.new(client) do |conn|
    request = conn.gets
    puts "Request: #{request}"
    response = "HTTP/1.1 200 OK\r\n\r\nHello, World!\r\n"
    conn.write(response)
    conn.close
  end
end
Database connection pools manage a fixed number of connections shared across threads. Threads check out connections, use them, and return them to the pool:
# Connection pool pattern
class ConnectionPool
  def initialize(size, &factory)
    @factory = factory
    @mutex = Mutex.new
    @available = []
    @in_use = {}
    @cv = ConditionVariable.new
    size.times { @available << factory.call }
  end
  def with_connection
    conn = checkout
    begin
      yield conn
    ensure
      checkin(conn)
    end
  end
  private
  def checkout
    @mutex.synchronize do
      while @available.empty?
        @cv.wait(@mutex)
      end
      conn = @available.pop
      @in_use[conn] = Thread.current
      conn
    end
  end
  def checkin(conn)
    @mutex.synchronize do
      @in_use.delete(conn)
      @available << conn
      @cv.signal
    end
  end
end
# Usage
pool = ConnectionPool.new(3) { "Connection-#{rand(1000)}" }
threads = 10.times.map do |i|
  Thread.new do
    pool.with_connection do |conn|
      puts "Thread #{i} using #{conn}"
      sleep 0.2
    end
  end
end
threads.each(&:join)
Background job processing systems use worker threads to process tasks from a queue while the main application continues handling requests:
# Background job processor
class JobProcessor
  def initialize(worker_count)
    @jobs = Queue.new
    @workers = worker_count.times.map do
      Thread.new { process_jobs }
    end
  end
  def enqueue(job)
    @jobs << job
  end
  def shutdown
    @workers.size.times { @jobs << :shutdown }
    @workers.each(&:join)
  end
  private
  def process_jobs
    loop do
      job = @jobs.pop
      break if job == :shutdown
      begin
        job.perform
      rescue => e
        puts "Job failed: #{e.message}"
      end
    end
  end
end
class EmailJob
  def initialize(to, subject)
    @to, @subject = to, subject
  end
  def perform
    puts "Sending email to #{@to}: #{@subject}"
    sleep 0.5 # Simulate sending
  end
end
processor = JobProcessor.new(3)
10.times do |i|
  processor.enqueue(EmailJob.new("user#{i}@example.com", "Test #{i}"))
end
processor.shutdown # Sentinels queue behind the jobs, so all jobs finish before workers exit
Parallel data processing splits work across multiple threads to reduce total processing time. Each thread handles a portion of the data:
# Parallel data processing
def parallel_map(array, thread_count)
  chunk_size = (array.size / thread_count.to_f).ceil
  chunks = array.each_slice(chunk_size).to_a
  results = Array.new(chunks.size)
  mutex = Mutex.new
  threads = chunks.each_with_index.map do |chunk, index|
    Thread.new do
      chunk_result = chunk.map { |item| yield item }
      mutex.synchronize { results[index] = chunk_result }
    end
  end
  threads.each(&:join)
  results.flatten
end
# Process large dataset in parallel
data = (1..1000).to_a
result = parallel_map(data, 4) do |n|
  n * n # Expensive computation
end
puts "Processed #{result.size} items"
Common Patterns
The producer-consumer pattern decouples data production from consumption using a synchronized queue. Producers add items without knowing when they'll be consumed, and consumers process items as they become available:
# Producer-consumer pattern
class BoundedQueue
  def initialize(capacity)
    @capacity = capacity
    @queue = []
    @mutex = Mutex.new
    @not_full = ConditionVariable.new
    @not_empty = ConditionVariable.new
  end
  def put(item)
    @mutex.synchronize do
      while @queue.size >= @capacity
        @not_full.wait(@mutex)
      end
      @queue << item
      @not_empty.signal
    end
  end
  def take
    @mutex.synchronize do
      while @queue.empty?
        @not_empty.wait(@mutex)
      end
      item = @queue.shift
      @not_full.signal
      item
    end
  end
end
queue = BoundedQueue.new(5)
producer = Thread.new do
  10.times do |i|
    queue.put(i)
    puts "Produced: #{i}"
  end
end
consumer = Thread.new do
  10.times do
    item = queue.take
    puts "Consumed: #{item}"
    sleep 0.1
  end
end
producer.join
consumer.join
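Ruby's built-in SizedQueue already provides this bounded, blocking behavior, so the hand-rolled BoundedQueue above is mainly illustrative:
queue = SizedQueue.new(5) # push blocks once 5 items are queued
producer = Thread.new { 10.times { |i| queue.push(i) } }
consumer = Thread.new { 10.times { puts "Consumed: #{queue.pop}" } }
[producer, consumer].each(&:join)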
The reader-writer lock allows multiple concurrent readers or one exclusive writer, optimizing for read-heavy workloads:
# Reader-writer lock pattern
class RWLock
  def initialize
    @mutex = Mutex.new
    @readers = 0
    @writer = false
    @reader_cv = ConditionVariable.new
    @writer_cv = ConditionVariable.new
  end
  def read_lock
    @mutex.synchronize do
      while @writer
        @reader_cv.wait(@mutex)
      end
      @readers += 1
    end
  end
  def read_unlock
    @mutex.synchronize do
      @readers -= 1
      @writer_cv.signal if @readers == 0
    end
  end
  def write_lock
    @mutex.synchronize do
      while @writer || @readers > 0
        @writer_cv.wait(@mutex)
      end
      @writer = true
    end
  end
  def write_unlock
    @mutex.synchronize do
      @writer = false
      @reader_cv.broadcast
      @writer_cv.signal
    end
  end
end
# Usage
rwlock = RWLock.new
cache = {}
# Multiple readers
readers = 5.times.map do |i|
  Thread.new do
    rwlock.read_lock
    value = cache[:data]
    puts "Reader #{i} read: #{value}"
    sleep 0.1
    rwlock.read_unlock
  end
end
# Single writer
writer = Thread.new do
  sleep 0.05
  rwlock.write_lock
  cache[:data] = "updated"
  puts "Writer updated cache"
  rwlock.write_unlock
end
(readers + [writer]).each(&:join)
The future pattern represents a value that will be computed asynchronously, allowing the caller to continue work and retrieve the result later:
# Future pattern
class Future
  def initialize(&block)
    @mutex = Mutex.new
    @cv = ConditionVariable.new
    @computed = false
    @exception = nil
    @thread = Thread.new do
      begin
        @value = block.call
      rescue => e
        @exception = e
      ensure
        @mutex.synchronize do
          @computed = true
          @cv.broadcast
        end
      end
    end
  end
  def value
    @mutex.synchronize do
      until @computed
        @cv.wait(@mutex)
      end
      raise @exception if @exception
      @value
    end
  end
end
# Usage
future = Future.new do
  sleep 1
  42
end
puts "Computing in background..."
sleep 0.5
puts "Result: #{future.value}"
The barrier pattern synchronizes multiple threads at a specific point, ensuring all threads reach the barrier before any proceed:
# Barrier synchronization
class Barrier
  def initialize(count)
    @count = count
    @waiting = 0
    @generation = 0 # Distinguishes barrier rounds; guards the wait loop below
    @mutex = Mutex.new
    @cv = ConditionVariable.new
  end
  def wait
    @mutex.synchronize do
      generation = @generation
      @waiting += 1
      if @waiting == @count
        @waiting = 0
        @generation += 1 # Release this round's waiters
        @cv.broadcast
      else
        # Re-check the generation so a spurious wakeup cannot release a thread early
        @cv.wait(@mutex) while generation == @generation
      end
    end
  end
end
# Usage
barrier = Barrier.new(3)
threads = 3.times.map do |i|
  Thread.new do
    puts "Thread #{i} working"
    sleep rand(1.0)
    puts "Thread #{i} at barrier"
    barrier.wait
    puts "Thread #{i} proceeding"
  end
end
threads.each(&:join)
Error Handling & Edge Cases
Race conditions occur when program correctness depends on the timing of thread execution. The most common race condition involves check-then-act sequences where the condition changes between the check and the action:
# Race condition in lazy initialization
class Cache
  def initialize
    @data = nil
    @mutex = Mutex.new
  end
  # INCORRECT - race condition
  def get_wrong
    if @data.nil?
      @data = expensive_computation
    end
    @data
  end
  # CORRECT - synchronized check
  def get_correct
    @mutex.synchronize do
      @data ||= expensive_computation
    end
  end
  private
  def expensive_computation
    sleep 0.1
    "computed_value"
  end
end
Deadlock occurs when threads wait for each other in a cycle, preventing all from proceeding. The classic example involves two threads acquiring locks in opposite orders:
# Deadlock scenario
mutex_a = Mutex.new
mutex_b = Mutex.new
thread1 = Thread.new do
  mutex_a.synchronize do
    sleep 0.01
    mutex_b.synchronize do
      puts "Thread 1 got both locks"
    end
  end
end
thread2 = Thread.new do
  mutex_b.synchronize do
    sleep 0.01
    mutex_a.synchronize do
      puts "Thread 2 got both locks"
    end
  end
end
# Threads will deadlock
# Solution: acquire locks in consistent order
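A minimal fix reorders the second thread so both acquire mutex_a before mutex_b; with a consistent order, a cycle of threads waiting on each other cannot form:
# Deadlock-free: both threads acquire locks in the same order
thread1 = Thread.new do
  mutex_a.synchronize do
    mutex_b.synchronize { puts "Thread 1 got both locks" }
  end
end
thread2 = Thread.new do
  mutex_a.synchronize do
    mutex_b.synchronize { puts "Thread 2 got both locks" }
  end
end
[thread1, thread2].each(&:join)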
Thread leaks occur when threads are created but not properly joined or terminated, consuming resources indefinitely:
# Thread leak prevention
class ManagedThreadPool
  def initialize(size)
    @threads = []
    @queue = Queue.new
    size.times do
      @threads << create_worker
    end
  end
  def execute(&block)
    @queue << block
  end
  def shutdown(timeout: 5)
    @threads.size.times { @queue << nil } # nil sentinel stops each worker
    @threads.each do |thread|
      thread.join(timeout)
      thread.kill if thread.alive? # Force-stop stragglers after the timeout
    end
  end
  private
  def create_worker
    Thread.new do
      while (task = @queue.pop)
        task.call rescue nil # Swallow task errors to keep the worker alive
      end
    end
  end
end
Exception handling in threads requires special attention because exceptions don't propagate to the parent thread automatically. An unhandled exception terminates only the thread where it occurs and surfaces only when that thread is joined:
# Thread exception handling
class SafeWorker
  def initialize
    @threads = []
    @errors = []
    @mutex = Mutex.new
  end
  def execute(&block)
    thread = Thread.new do
      begin
        block.call
      rescue => e
        # Record the error instead of re-raising; a re-raised exception
        # would surface from join before the aggregate check in wait_all
        @mutex.synchronize do
          @errors << { thread: Thread.current, error: e }
        end
      end
    end
    @threads << thread
    thread
  end
  def wait_all
    @threads.each(&:join)
    unless @errors.empty?
      raise "#{@errors.size} thread(s) failed: #{@errors.first[:error]}"
    end
  end
end
worker = SafeWorker.new
worker.execute { puts "Success" }
worker.execute { raise "Failed" }
begin
  worker.wait_all
rescue => e
  puts "Error: #{e.message}"
end
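Note that Thread#join and Thread#value re-raise a thread's unhandled exception in the calling thread, which is often the simplest way to surface failures:
t = Thread.new { raise ArgumentError, "boom" }
begin
  t.join # the thread's exception re-raises here (Thread#value behaves the same)
rescue ArgumentError => e
  puts "Caught from join: #{e.message}"
end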
Performance Considerations
The Global Interpreter Lock in MRI Ruby prevents true parallel execution of Ruby code. Only one thread executes Ruby code at a time, but threads yield during I/O operations, making threads beneficial for I/O-bound work:
# GIL impact demonstration
require 'benchmark'
def cpu_bound_work
  sum = 0
  1_000_000.times { sum += 1 }
  sum
end
def io_bound_work
  sleep 0.1
  "done"
end
# CPU-bound work sees no speedup
Benchmark.bm do |x|
  x.report("sequential") do
    4.times { cpu_bound_work }
  end
  x.report("threaded") do
    threads = 4.times.map { Thread.new { cpu_bound_work } }
    threads.each(&:join)
  end
end
# I/O-bound work benefits from threads
Benchmark.bm do |x|
  x.report("sequential") do
    4.times { io_bound_work }
  end
  x.report("threaded") do
    threads = 4.times.map { Thread.new { io_bound_work } }
    threads.each(&:join)
  end
end
Thread creation overhead makes thread pools essential for workloads with many short-lived tasks. Creating a thread involves system calls, memory allocation, and scheduler updates:
# Thread pool vs repeated creation (reuses the ThreadPool class defined above)
require 'benchmark'
tasks = 1000
Benchmark.bm do |x|
  x.report("new threads") do
    tasks.times do
      Thread.new { sleep 0.001 }.join
    end
  end
  x.report("thread pool") do
    pool = ThreadPool.new(10)
    tasks.times { pool.schedule { sleep 0.001 } }
    pool.shutdown
  end
end
Lock contention reduces parallel performance when multiple threads frequently compete for the same lock. Fine-grained locking and lock-free data structures minimize contention:
# Lock contention mitigation
class SegmentedCounter
  def initialize(segments: 4)
    @segments = segments.times.map do
      { count: 0, mutex: Mutex.new }
    end
  end
  def increment
    segment = @segments[Thread.current.object_id % @segments.size]
    segment[:mutex].synchronize do
      segment[:count] += 1
    end
  end
  def total
    @segments.sum do |segment|
      segment[:mutex].synchronize { segment[:count] }
    end
  end
end
# Reduced contention compared to single mutex
counter = SegmentedCounter.new(segments: 4)
threads = 100.times.map do
  Thread.new { 1000.times { counter.increment } }
end
threads.each(&:join)
puts counter.total
Context switching overhead increases with thread count as the scheduler cycles through threads. The optimal thread count depends on workload characteristics and processor cores:
# Finding optimal thread count
def measure_throughput(thread_count)
  queue = Queue.new
  1000.times { |i| queue << i }
  start = Time.now
  threads = thread_count.times.map do
    Thread.new do
      # Non-blocking pop raises ThreadError when empty; rescue to nil ends the loop
      while (item = (queue.pop(true) rescue nil))
        item * item # Simulate work
      end
    end
  end
  threads.each(&:join)
  Time.now - start
end
[1, 2, 4, 8, 16, 32].each do |count|
  time = measure_throughput(count)
  puts "#{count} threads: #{time.round(3)}s"
end
Common Pitfalls
Forgetting to synchronize shared mutable state causes intermittent failures that are difficult to reproduce and debug. Every shared variable accessed by multiple threads requires protection:
# Missing synchronization
class UnsafeCache
  def initialize
    @cache = {}
  end
  # BUG: Hash is not thread-safe
  def get(key)
    @cache[key]
  end
  def set(key, value)
    @cache[key] = value
  end
end
# Can cause errors in concurrent access
cache = UnsafeCache.new
threads = 100.times.map do |i|
  Thread.new do
    100.times { |j| cache.set("key#{i}", j) }
  end
end
threads.each(&:join)
Holding locks while performing blocking operations causes other threads to wait unnecessarily, reducing parallelism:
# Lock held during I/O - BAD
class SlowCache
  def initialize
    @cache = {}
    @mutex = Mutex.new
  end
  def get(key)
    @mutex.synchronize do
      @cache[key] ||= begin
        sleep 1 # Simulate slow I/O
        "value for #{key}"
      end
    end
  end
end
# Better: compute outside lock
class FastCache
  def initialize
    @cache = {}
    @mutex = Mutex.new
  end
  def get(key)
    value = @mutex.synchronize { @cache[key] }
    return value if value
    new_value = begin
      sleep 1
      "value for #{key}"
    end
    @mutex.synchronize do
      @cache[key] ||= new_value
    end
  end
end
Assuming operations are atomic when they are not leads to race conditions. Even simple operations like += consist of multiple steps:
# Non-atomic operations
class Counter
  attr_reader :count
  def initialize
    @count = 0
  end
  # BUG: Not atomic
  def increment
    @count += 1
  end
end
# Must use synchronization
class SafeCounter
  def initialize
    @count = 0
    @mutex = Mutex.new
  end
  def increment
    @mutex.synchronize { @count += 1 }
  end
  def value
    @mutex.synchronize { @count }
  end
end
Thread-local variables can create confusion when threads are pooled and reused. Thread-local state persists across tasks in a thread pool:
# Thread-local state pollution
def process_with_context(id)
  Thread.current[:request_id] = id
  # Process...
  # BUG: Not clearing thread-local variable
end
# Thread pool reuses threads
pool = ThreadPool.new(2)
pool.schedule { process_with_context(1) }
pool.schedule do
  # May see request_id from previous task
  puts Thread.current[:request_id]
end
# Must clean up thread-locals
def safe_process(id)
  Thread.current[:request_id] = id
  # Process...
ensure
  Thread.current[:request_id] = nil
end
Attempting to kill threads forcefully can leave shared state inconsistent. Threads should be requested to stop gracefully:
# Unsafe thread termination
mutex = Mutex.new # Shared with other threads
worker = Thread.new do
  loop do
    mutex.synchronize do
      # If killed mid-update, shared state can be left half-modified
      perform_work
    end
  end
end
sleep 1
worker.kill # Dangerous
# Safe termination pattern
class GracefulWorker
  def initialize
    @running = true
    @mutex = Mutex.new
    @thread = Thread.new { work_loop }
  end
  def stop
    @mutex.synchronize { @running = false }
    @thread.join
  end
  private
  def work_loop
    while running?
      perform_work
    end
  end
  def running?
    @mutex.synchronize { @running }
  end
  def perform_work
    # Actual work
  end
end
Reference
Thread Operations
| Operation | Method | Description |
|---|---|---|
| Create | Thread.new { block } | Starts new thread executing block |
| Wait | thread.join | Blocks until thread terminates |
| Timeout join | thread.join(timeout) | Waits maximum timeout seconds |
| Check status | thread.alive? | Returns true if thread is running |
| Current thread | Thread.current | Returns current thread object |
| Kill | thread.kill | Terminates thread immediately |
| Priority | thread.priority = n | Sets scheduling priority (-3 to 3) |
| Sleep | sleep(seconds) | Suspends current thread |
| Yield | Thread.pass | Yields to scheduler |
Synchronization Primitives
| Primitive | Use Case | Operations |
|---|---|---|
| Mutex | Mutual exclusion | lock, unlock, synchronize |
| ConditionVariable | Wait for condition | wait, signal, broadcast |
| Queue | Thread-safe queue | push/<<, pop, empty? |
| SizedQueue | Bounded queue | push, pop with blocking |
| Monitor | Object-level sync | synchronize, wait, signal |
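Monitor, from the monitor standard library, behaves like a reentrant Mutex: the thread holding it may lock it again without deadlocking. A brief sketch:
require 'monitor'

lock = Monitor.new
lock.synchronize do
  lock.synchronize do # reentrant; a plain Mutex would raise ThreadError here
    puts "nested lock acquired"
  end
end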
Mutex Methods
| Method | Behavior | Notes |
|---|---|---|
| lock | Acquire lock, block if held | Manual lock management |
| unlock | Release lock | Must be held by current thread |
| synchronize { block } | Lock, execute, unlock | Automatic unlock on exception |
| try_lock | Attempt lock without blocking | Returns true if acquired |
| locked? | Check if locked | Status check only |
| owned? | Check if current thread holds lock | Ownership verification |
ConditionVariable Usage Pattern
mutex = Mutex.new
cv = ConditionVariable.new
# Waiting thread
mutex.synchronize do
  until condition_met?
    cv.wait(mutex)
  end
  # Condition is true, mutex held
end
# Signaling thread
mutex.synchronize do
  modify_state
  cv.signal # Wake one waiter
  # or cv.broadcast to wake all
end
Queue Operations
| Operation | Blocking | Thread-safe |
|---|---|---|
| push/enq/<< | No | Yes |
| pop/deq | Yes (when empty) | Yes |
| shift | Yes (when empty) | Yes |
| length/size | No | Yes |
| empty? | No | Yes |
| clear | No | Yes |
| num_waiting (count of threads blocked on pop) | No | Yes |
Common Thread Patterns Quick Reference
| Pattern | Purpose | Key Components |
|---|---|---|
| Producer-Consumer | Decouple production/consumption | Queue, multiple threads |
| Thread Pool | Reuse threads for tasks | Worker threads, task queue |
| Future/Promise | Async computation | Background thread, result retrieval |
| Barrier | Synchronize at checkpoint | Counter, condition variable |
| Reader-Writer Lock | Optimize read-heavy workloads | Read/write locks, counters |
Thread Safety Checklist
- Identify all shared mutable state
- Protect shared state with synchronization primitives
- Acquire locks in consistent order to prevent deadlock
- Minimize critical section duration
- Avoid blocking operations while holding locks
- Use thread-safe data structures when available
- Clean up thread-local variables in pooled threads
- Join or terminate threads before program exit
- Handle exceptions in threads explicitly
- Test concurrent code with high thread counts