Overview
Shared memory provides a region of memory that multiple processes can read from and write to simultaneously. This mechanism represents one of the fastest forms of inter-process communication (IPC) because data does not need to be copied between processes. When one process writes to shared memory, other processes with access to that memory segment can immediately read the updated data.
Operating systems implement shared memory by mapping the same physical memory pages into the address spaces of multiple processes. Each process accesses the shared memory through its own virtual address, but these virtual addresses all point to the same physical memory location. This approach eliminates the overhead associated with other IPC mechanisms like pipes or message queues, which require kernel intervention to transfer data between processes.
Shared memory finds common application in scenarios requiring high-performance data exchange between processes, such as database management systems, scientific computing applications, and real-time data processing pipelines. For example, a web server might use shared memory to maintain a session cache accessible to multiple worker processes, or a data processing pipeline might use shared memory to pass large datasets between analysis stages without serialization overhead.
# Conceptual example: Two processes accessing shared memory
# Process A writes data
shared_mem = SharedMemory.create(key: 1234, size: 1024)
shared_mem.write("Critical sensor data: temperature=95.3")
# Process B reads data (no data copying required)
shared_mem = SharedMemory.attach(key: 1234)
data = shared_mem.read
# => "Critical sensor data: temperature=95.3"
The primary trade-off with shared memory involves synchronization complexity. Because multiple processes access the same memory concurrently, developers must implement explicit synchronization mechanisms to prevent race conditions and ensure data consistency. Without proper synchronization, two processes might simultaneously modify the same memory location, leading to corrupted or inconsistent data.
Key Principles
Shared memory operates on several fundamental principles that govern its behavior and usage patterns. Understanding these principles is essential for implementing correct and efficient shared memory solutions.
Memory Mapping and Virtual Addressing: Operating systems manage shared memory through virtual memory mechanisms. Each process maintains its own virtual address space, but the operating system maps specific virtual addresses from multiple processes to the same physical memory pages. When a process accesses a virtual address mapped to shared memory, the memory management unit (MMU) translates that address to the underlying physical address. Different processes may use different virtual addresses to access the same shared memory segment, but all references resolve to identical physical memory locations.
Attachment and Detachment: Processes interact with shared memory through an attach-detach lifecycle. A process must explicitly attach to a shared memory segment before accessing it, which establishes the mapping between the process's virtual address space and the shared physical memory. After finishing with the shared memory, the process detaches, removing the mapping from its address space. The shared memory segment persists independently of any individual process's attachment state until explicitly destroyed.
Persistence and Lifecycle: Shared memory segments exist independently of the processes that create them. When a process creates a shared memory segment, that segment persists until explicitly destroyed, even if the creating process terminates. This persistence model differs from process-private memory, which the operating system automatically reclaims when a process exits. The persistence property requires careful lifecycle management to prevent memory leaks at the system level.
Synchronization Requirements: Shared memory provides no inherent synchronization mechanisms. Multiple processes can simultaneously read from and write to shared memory without any automatic coordination. This design maximizes performance but places the burden of preventing race conditions on the application developer. Synchronization primitives like semaphores, mutexes, or file locks must be used in conjunction with shared memory to ensure data consistency.
Key-Based Identification: Most shared memory implementations use a numeric key to identify memory segments. This key serves as a system-wide identifier that processes use to access the same segment. The key selection impacts security and collision avoidance. Related processes typically use a predetermined key value, while unrelated processes must avoid key collisions.
Memory Segment Properties: Each shared memory segment has specific properties including size, permissions, and ownership. The segment size defines the amount of memory available and must be specified at creation time. Permissions control which processes can read from or write to the segment. Operating systems enforce these permissions to maintain process isolation when appropriate.
# Demonstrating key principles
# Process creates shared memory with specific properties
segment = SharedMemory.create(
key: 5678,
size: 4096,
permissions: 0666 # Read/write for owner, group, others
)
# Segment exists independently of creating process
segment.detach
# Another process attaches using the same key
segment = SharedMemory.attach(key: 5678)
segment.size # => 4096
segment.permissions # => 0666
Implementation Approaches
Different shared memory implementations provide varying levels of functionality and portability. Selecting an appropriate approach depends on platform constraints, performance needs, and required features.
System V Shared Memory: The System V IPC mechanism represents the traditional Unix shared memory implementation. This approach uses three separate functions for creation, attachment, and control operations. System V shared memory identifies segments using integer keys, typically generated using the ftok function to derive keys from filesystem paths. The implementation provides extensive control over segment properties and permissions but requires careful resource management to prevent leaks.
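The ftok derivation can be sketched in plain Ruby. This is a simplified illustration of the scheme glibc commonly uses (8 bits of project id, 8 bits of device number, 16 bits of inode), not a drop-in replacement for the C function:

```ruby
# Sketch of ftok-style key derivation: combine the low bits of the
# file's device and inode numbers with a one-byte project id.
def ftok_like(path, project_id)
  st = File.stat(path)
  ((project_id & 0xff) << 24) |
    ((st.dev & 0xff) << 16) |
    (st.ino & 0xffff)
end

key = ftok_like('/tmp', 42)
# The same path and project id always yield the same key,
# so cooperating processes can derive it independently.
```

Because only 16 bits of the inode survive, two different paths can collide; the real ftok has the same limitation.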
System V shared memory segments persist until explicitly destroyed using control operations. This persistence means that if a program creates a segment and terminates without cleanup, the segment continues occupying system resources. Administrators can inspect and remove orphaned segments using system utilities. The implementation supports page-aligned memory sizes and enforces system-wide limits on the number and total size of shared memory segments.
POSIX Shared Memory: POSIX shared memory provides a more modern, standardized approach to shared memory management. This implementation treats shared memory segments as named objects in the filesystem namespace, typically mounted under /dev/shm. The file-based approach simplifies discovery and provides better integration with standard Unix tools.
POSIX shared memory uses string names rather than numeric keys, improving readability and reducing collision potential. The implementation integrates with file descriptors and standard file operations, allowing developers to use familiar system calls like ftruncate to set segment size and mmap to map segments into process address space. Memory-mapped files created through POSIX shared memory benefit from filesystem permission models and can be more easily monitored and managed.
# System V style (conceptual)
key = 0x12345678
shmid = shmget(key, 4096, IPC_CREAT | 0666)
addr = shmat(shmid, nil, 0)
# Use memory through addr
shmdt(addr)
shmctl(shmid, IPC_RMID, nil) # Cleanup required
# POSIX style (conceptual)
fd = shm_open("/myshm", O_CREAT | O_RDWR, 0666)
ftruncate(fd, 4096)
addr = mmap(nil, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)
# Use memory through addr
munmap(addr, 4096)
shm_unlink("/myshm") # Cleanup
Memory-Mapped Files: Memory-mapped files provide shared memory functionality through the filesystem. This approach uses regular files as backing storage for shared memory regions. Multiple processes map the same file into their address spaces, creating a shared view of the file's contents. Changes made by one process become visible to others without explicit I/O operations.
Memory-mapped files offer portability advantages since they rely on standard filesystem operations. The approach provides automatic persistence through file storage and simplifies debugging since developers can inspect shared memory contents using standard file tools. Performance characteristics differ from pure shared memory because the operating system may write modified pages back to disk, though most systems optimize this by caching heavily-accessed memory-mapped regions in RAM.
Anonymous Shared Memory: Some systems support anonymous shared memory regions not backed by any persistent storage. These regions exist only in RAM and automatically disappear when the last process unmaps them. Anonymous shared memory provides the performance benefits of shared memory without creating filesystem artifacts. This approach suits temporary data exchange between related processes, particularly parent-child process relationships where the shared mapping can be established before forking.
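Ruby's standard library has no direct anonymous-mapping API, but the lifecycle can be approximated with the classic open-then-unlink trick: create a file (under /dev/shm where available, so it is RAM-backed), unlink it immediately, and let related processes inherit the open descriptor across fork. This sketch uses ordinary file I/O through a shared open file description rather than a true memory mapping, but it demonstrates the "disappears with the last reference" behavior:

```ruby
require 'tmpdir'

# RAM-backed on Linux when /dev/shm exists; falls back to the temp dir.
dir = File.directory?('/dev/shm') ? '/dev/shm' : Dir.tmpdir
path = File.join(dir, "anon-demo-#{Process.pid}")

f = File.open(path, 'w+')
File.unlink(path) # No name remains; storage lives until the last close

pid = fork do
  f.write('written by child') # Child shares the open file description
  f.flush
  exit!(0)
end
Process.wait(pid)

f.rewind
data = f.read # Parent sees the child's bytes
f.close # Last reference gone; the kernel reclaims the storage
```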
Ruby Implementation
Ruby provides access to System V shared memory through the sysvipc gem and other native extensions. The standard Ruby distribution does not include built-in shared memory support, requiring external libraries for this functionality.
Basic Shared Memory Operations: Ruby programs can create and manipulate shared memory segments using wrapper libraries that interface with underlying system calls. The typical workflow involves creating or attaching to a segment, reading or writing data, and properly cleaning up resources.
require 'sysvipc'
# Create a shared memory segment
key = 12345
size = 1024
shm = SysVIPC::SharedMemory.new(key, size,
SysVIPC::IPC_CREAT | 0666)
# Write data to shared memory
message = "Processing queue: 150 items pending"
shm.write(message)
# Another process can attach and read
shm2 = SysVIPC::SharedMemory.new(key, size, 0)
data = shm2.read
puts data # => "Processing queue: 150 items pending"
# Cleanup
shm.remove
Structured Data in Shared Memory: Storing complex Ruby objects in shared memory requires serialization since shared memory operates on raw bytes. Ruby's Marshal module provides binary serialization suitable for this purpose. However, serialization adds overhead and complexity compared to simple string or numeric data.
require 'sysvipc'
# Serialize Ruby hash to shared memory
data = {
timestamp: Time.now,
metrics: { cpu: 45.2, memory: 78.9 },
status: :active
}
shm = SysVIPC::SharedMemory.new(20000, 2048,
SysVIPC::IPC_CREAT | 0666)
serialized = Marshal.dump(data)
shm.write(serialized)
# Another process deserializes
shm2 = SysVIPC::SharedMemory.new(20000, 2048, 0)
serialized_data = shm2.read
recovered = Marshal.load(serialized_data) # Only unmarshal data from trusted processes
puts recovered[:metrics][:cpu] # => 45.2
Synchronization with Semaphores: Ruby shared memory implementations typically require separate synchronization mechanisms. The sysvipc gem also provides semaphore support for protecting critical sections.
require 'sysvipc'
# Create shared counter with semaphore protection
shm_key = 30000
sem_key = 30001
shm = SysVIPC::SharedMemory.new(shm_key, 256,
SysVIPC::IPC_CREAT | 0666)
sem = SysVIPC::Semaphore.new(sem_key, 1,
SysVIPC::IPC_CREAT | 0666)
sem.setval(0, 1) # Initialize to 1
# Process increments counter with locking
def increment_counter(shm, sem)
sem.wait(0) # Acquire lock
current = shm.read.to_i
new_value = current + 1
shm.write(new_value.to_s)
sem.signal(0) # Release lock
new_value
end
# Multiple processes can safely increment
shm.write("0")
10.times do
fork do
result = increment_counter(shm, sem)
puts "Process #{Process.pid}: counter = #{result}"
end
end
Process.waitall
final = shm.read.to_i
puts "Final counter: #{final}" # => 10
File-Backed Shared Memory with the mmap Gem: For simpler use cases, Ruby programs can use memory-mapped files through libraries such as the mmap2 gem, which provides a more Ruby-idiomatic interface.
require 'mmap'
# Create memory-mapped file
file = File.open('shared_data.bin', 'w+')
file.write("\0" * 4096) # Initialize file
file.flush
mmap = Mmap.new(file.path, 'rw', Mmap::MAP_SHARED)
# Write data
mmap[0, 100] = "Shared configuration: mode=production, workers=8"
# Another process maps same file
mmap2 = Mmap.new(file.path, 'r', Mmap::MAP_SHARED)
config = mmap2[0, 100]
puts config # => "Shared configuration: mode=production, workers=8"
mmap.munmap
mmap2.munmap
Cross-Process Communication Pattern: A common Ruby pattern uses shared memory for high-frequency data updates with periodic synchronization.
require 'sysvipc'
class SharedMetrics
def initialize(key)
@shm = SysVIPC::SharedMemory.new(key, 4096,
SysVIPC::IPC_CREAT | 0666)
@sem = SysVIPC::Semaphore.new(key + 1, 1,
SysVIPC::IPC_CREAT | 0666)
@sem.setval(0, 1)
end
def update_metric(name, value)
@sem.wait(0)
metrics = read_metrics
metrics[name] = value
write_metrics(metrics)
@sem.signal(0)
end
def read_metrics
data = @shm.read
data.empty? ? {} : Marshal.load(data)
rescue
{}
end
private
def write_metrics(metrics)
@shm.write(Marshal.dump(metrics))
end
end
# Worker processes update metrics
metrics = SharedMetrics.new(40000)
metrics.update_metric('requests_processed', 1523)
metrics.update_metric('average_latency', 45.7)
# Monitor process reads metrics
current = metrics.read_metrics
puts "Requests: #{current['requests_processed']}"
puts "Latency: #{current['average_latency']}ms"
Performance Considerations
Shared memory provides significant performance advantages over other IPC mechanisms, but achieving optimal performance requires understanding its characteristics and limitations.
Zero-Copy Data Transfer: The primary performance benefit of shared memory stems from eliminating data copying between processes. Traditional IPC mechanisms like pipes or sockets require the kernel to copy data from the sending process's memory to kernel space, then from kernel space to the receiving process's memory. Shared memory eliminates these copies since both processes directly access the same physical memory pages.
For large data transfers, this difference becomes substantial. Transferring a 1MB buffer through a pipe might require 2MB of memory operations (1MB from process A to kernel, 1MB from kernel to process B), while shared memory requires no copying at all. Benchmarks typically show shared memory achieving 10-100x higher throughput than socket-based IPC for large messages.
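The copy cost of a pipe transfer can be observed directly with Ruby's standard library. This is a rough sketch, not a rigorous benchmark; absolute numbers vary widely by machine, but every byte of the payload crosses the kernel twice:

```ruby
require 'benchmark'

payload = 'x' * (1 << 20) # 1 MB

reader, writer = IO.pipe
elapsed = Benchmark.realtime do
  child = fork do
    writer.close
    received = reader.read(payload.bytesize) # Copy out of kernel space
    exit!(received == payload ? 0 : 1)
  end
  reader.close
  writer.write(payload) # Copy into kernel space
  writer.close
  Process.wait(child)
end

ok = $?.success?
puts format('1 MB through a pipe: %.3f ms', elapsed * 1000)
```

A shared memory transfer of the same buffer would involve no per-byte copying at all, only synchronization.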
Cache Coherency Overhead: Modern multi-core processors maintain separate CPU caches for each core. When multiple processes on different cores access shared memory, the processor must maintain cache coherency to ensure each core sees consistent data. This coherency protocol introduces overhead through cache line invalidations and memory barrier operations.
Processes modifying the same cache line (typically 64 bytes) trigger coherency traffic between CPU cores. This effect, called "false sharing," occurs when independent data items occupy the same cache line. Structuring shared memory data to minimize false sharing improves performance.
# Suboptimal: adjacent one-byte counters share a cache line
shm = SysVIPC::SharedMemory.new(50000, 1024,
SysVIPC::IPC_CREAT | 0666)
# Each worker process updates the slot matching its id
worker_id = Process.pid % 100
shm.seek(worker_id)
shm.write("1") # Neighboring counters land in the same cache line
# Better: Pad counters to separate cache lines
CACHE_LINE_SIZE = 64
offset = (Process.pid % 100) * CACHE_LINE_SIZE
shm.seek(offset)
shm.write("1") # Each counter occupies its own cache line
Synchronization Bottlenecks: Synchronization primitives protecting shared memory become performance bottlenecks in high-contention scenarios. When many processes compete for the same semaphore or mutex, serialization occurs as processes wait for lock acquisition. This contention can negate shared memory's performance benefits.
Read-write locks reduce contention when read operations dominate. Multiple readers can access shared memory simultaneously, while writers require exclusive access. This approach improves throughput for read-heavy workloads.
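System V semaphores have no built-in shared mode, but POSIX advisory file locks provide process-level reader-writer semantics out of the box. The sketch below pairs a small lock file with the shared data (one portable approach, assuming a POSIX system; the lock file carries no data, only the lock state): File::LOCK_SH admits many readers at once, while File::LOCK_EX waits for all of them to drain.

```ruby
require 'tmpdir'

lock_path = File.join(Dir.tmpdir, "shm-rwlock-#{Process.pid}")
File.write(lock_path, '') # The lock file holds no data, only the lock

# Many readers may hold a shared lock simultaneously.
def with_read_lock(path)
  File.open(path, 'r') do |f|
    f.flock(File::LOCK_SH)
    yield
  end # Closing the file releases the lock
end

# A writer's exclusive lock excludes all readers and other writers.
def with_write_lock(path)
  File.open(path, 'r+') do |f|
    f.flock(File::LOCK_EX)
    yield
  end
end

results = []
with_read_lock(lock_path)  { results << :read }
with_write_lock(lock_path) { results << :write }
```

Because the locks are advisory, every process touching the shared data must agree to take them; nothing stops an uncooperative process from reading or writing without locking.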
Memory Access Patterns: Sequential memory access patterns achieve better performance than random access due to hardware prefetching and cache efficiency. Structuring shared memory data to support sequential scans improves throughput.
Allocation Overhead: Creating and destroying shared memory segments involves system calls with associated overhead. Applications requiring frequent segment creation/destruction should consider reusing segments or maintaining a pool of pre-allocated segments.
# Inefficient: Creating segment per request
def process_request(data)
shm = SysVIPC::SharedMemory.new(rand(100000), 1024,
SysVIPC::IPC_CREAT | 0666)
shm.write(data)
# Process data...
shm.remove
end
# Efficient: Reuse segment with size management
class SharedMemoryPool
def initialize
@shm = SysVIPC::SharedMemory.new(60000, 1_048_576,
SysVIPC::IPC_CREAT | 0666)
@sem = SysVIPC::Semaphore.new(60001, 1,
SysVIPC::IPC_CREAT | 0666)
@sem.setval(0, 1)
end
def write_data(offset, data)
@sem.wait(0)
@shm.seek(offset)
@shm.write(data)
@sem.signal(0)
end
end
Page Fault Overhead: First access to a shared memory page triggers a page fault as the operating system establishes the memory mapping. Pre-touching memory pages during initialization reduces runtime page faults.
Memory Bandwidth Limits: Shared memory operations consume memory bandwidth. Systems with many processes intensively accessing shared memory may saturate memory bandwidth, limiting throughput regardless of optimization efforts. Monitoring memory bandwidth utilization helps identify this bottleneck.
Error Handling & Edge Cases
Shared memory operations encounter various failure modes requiring careful error handling to maintain system stability and data integrity.
Segment Existence Checking: Attempting to attach to a non-existent shared memory segment fails with an error. Applications must handle this condition gracefully, either creating the segment if appropriate or failing cleanly.
require 'sysvipc'
def attach_or_create(key, size)
begin
# Try to attach to existing segment
shm = SysVIPC::SharedMemory.new(key, size, 0)
puts "Attached to existing segment #{key}"
shm
rescue Errno::ENOENT
# Segment doesn't exist, create it
puts "Creating new segment #{key}"
SysVIPC::SharedMemory.new(key, size,
SysVIPC::IPC_CREAT | 0666)
rescue Errno::EINVAL
# Size mismatch with existing segment
raise "Segment #{key} exists with different size"
end
end
shm = attach_or_create(70000, 2048)
Permission Denied Scenarios: Shared memory segments have associated permissions. Processes lacking appropriate permissions cannot attach to or modify segments. Permission errors must be distinguished from other failure modes.
Insufficient Resources: Systems impose limits on shared memory usage, including maximum segment count, maximum segment size, and total system-wide shared memory. Operations exceeding these limits fail with resource exhaustion errors.
def create_with_fallback(key, preferred_size)
sizes = [preferred_size, preferred_size / 2, preferred_size / 4]
sizes.each do |size|
begin
return SysVIPC::SharedMemory.new(key, size,
SysVIPC::IPC_CREAT | SysVIPC::IPC_EXCL | 0666)
rescue Errno::ENOSPC
puts "Insufficient memory for #{size} bytes, trying smaller"
next
rescue Errno::EEXIST
raise "Segment #{key} already exists"
end
end
raise "Cannot allocate shared memory"
end
Data Corruption from Race Conditions: Without proper synchronization, concurrent writes cause data corruption. This corruption may be subtle, manifesting as occasional incorrect values rather than obvious crashes.
# Demonstrates race condition danger
def unsafe_counter_increment(shm)
current = shm.read.to_i
# Context switch may occur here!
sleep(0.001) # Simulating processing
shm.write((current + 1).to_s)
end
# Multiple processes racing
shm = SysVIPC::SharedMemory.new(80000, 64,
SysVIPC::IPC_CREAT | 0666)
shm.write("0")
5.times do
fork do
10.times { unsafe_counter_increment(shm) }
end
end
Process.waitall
final = shm.read.to_i
puts "Expected: 50, Got: #{final}"
# Output varies: 31, 45, 50 depending on races
Orphaned Segments: Processes creating shared memory segments must ensure proper cleanup. If a process crashes or exits unexpectedly without removing its segments, those segments persist as orphaned resources consuming system memory.
Size Mismatches: Attaching to an existing segment with a different size parameter than the original creation fails. Applications must handle this scenario by either accepting the existing size or failing gracefully.
Deadlock with Multiple Locks: Using multiple synchronization primitives increases deadlock risk. Processes must acquire locks in consistent order to prevent circular wait conditions.
# Deadlock scenario
def transfer_data(sem1, sem2, shm1, shm2)
# Process A
sem1.wait(0)
# Process B acquires sem2 here - deadlock!
sem2.wait(0)
# Transfer data
sem2.signal(0)
sem1.signal(0)
end
# Deadlock prevention through ordered acquisition
def safe_transfer_data(sem1, sem2, shm1, shm2)
# Always acquire the lower-keyed semaphore first
# (assumes the semaphore object exposes its key)
first, second = [sem1, sem2].sort_by { |s| s.key }
first.wait(0)
second.wait(0)
# Transfer data safely
second.signal(0)
first.signal(0)
end
Memory Leak Detection: Applications should track shared memory segment lifecycle and verify cleanup during shutdown. Leaked segments accumulate over time, exhausting system resources.
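One lightweight discipline is to record every segment a process creates and sweep them from an at_exit hook. The registry below is plain Ruby and deliberately generic; in a real program each cleanup block would call something like shm.remove for its segment:

```ruby
# Registry of IPC resources this process created, swept on exit.
class IpcCleanupRegistry
  def initialize
    @cleanups = []
    at_exit { sweep } # Normal exits trigger the sweep automatically
  end

  # Register a resource name with a block that releases it.
  def register(name, &cleanup)
    @cleanups << [name, cleanup]
  end

  # Run every pending cleanup; a failure in one must not skip the rest.
  def sweep
    until @cleanups.empty?
      name, cleanup = @cleanups.pop
      begin
        cleanup.call
      rescue StandardError => e
        warn "cleanup of #{name} failed: #{e.message}"
      end
    end
  end
end

registry = IpcCleanupRegistry.new
removed = []
registry.register('segment-60000') { removed << 'segment-60000' }
registry.sweep # at_exit would also trigger this
```

Note that at_exit does not run on a crash or SIGKILL, so a periodic external sweep of orphaned segments is still advisable.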
Signal Interruption: System calls operating on shared memory or semaphores may be interrupted by signals. Applications must handle EINTR errors by retrying operations or cleaning up appropriately.
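A small retry wrapper captures the standard idiom. The simulated interruption below is illustrative; in practice the block would wrap the actual semaphore or shared memory call:

```ruby
# Retry a system-call-backed operation when a signal interrupts it.
def retrying_on_interrupt
  yield
rescue Errno::EINTR
  retry
end

# Example: simulate a call interrupted twice before succeeding.
attempts = 0
value = retrying_on_interrupt do
  attempts += 1
  raise Errno::EINTR if attempts < 3
  :acquired
end
```

Blind retry is appropriate only for idempotent operations; a partially completed write needs explicit cleanup before retrying.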
Security Implications
Shared memory introduces security considerations affecting data confidentiality, integrity, and access control.
Unauthorized Access: Shared memory segments with overly permissive permissions allow unintended processes to read or modify data. Segment permissions should restrict access to only necessary processes, following the principle of least privilege.
# Insecure: World-readable/writable
shm_insecure = SysVIPC::SharedMemory.new(90000, 1024,
SysVIPC::IPC_CREAT | 0666)
# More secure: Owner and group only
shm_secure = SysVIPC::SharedMemory.new(90001, 1024,
SysVIPC::IPC_CREAT | 0660)
# Most secure: Owner only
shm_private = SysVIPC::SharedMemory.new(90002, 1024,
SysVIPC::IPC_CREAT | 0600)
Data Exposure: Shared memory persists beyond process lifetime. Sensitive data remaining in shared memory after processes exit remains accessible to other processes with appropriate permissions. Applications handling sensitive information must zero memory before detaching.
def secure_shared_memory_usage(key, size)
shm = SysVIPC::SharedMemory.new(key, size,
SysVIPC::IPC_CREAT | 0600)
begin
# Process sensitive data
shm.write("Credit card: 1234-5678-9012-3456")
# Use data...
ensure
# Zero memory before detaching
shm.write("\0" * size)
shm.detach
end
end
Key Prediction: Predictable shared memory keys enable unauthorized processes to discover and attach to segments. Using random keys or keys derived from secure sources reduces this risk. However, related processes must share key values, creating a key distribution challenge.
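Where a fixed, well-known key is not required, drawing the key from SecureRandom makes discovery by scanning much harder. The key must still reach cooperating processes over a trusted channel, for example by being set before fork so children inherit it:

```ruby
require 'securerandom'

# System V keys are positive integers; stay inside the signed 32-bit
# range and avoid 0, which is reserved for IPC_PRIVATE.
def random_ipc_key
  SecureRandom.random_number((1 << 31) - 2) + 1
end

key = random_ipc_key
# A parent would typically generate the key once, create the segment,
# then fork workers that reuse the inherited value.
```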
Process Isolation Bypass: Shared memory deliberately breaks process isolation, a fundamental operating system security mechanism. This bypass increases attack surface since vulnerabilities in one process may affect others sharing memory. Careful validation of all data read from shared memory prevents exploitation.
Time-of-Check to Time-of-Use (TOCTOU) Attacks: Race conditions between checking shared memory contents and acting on those contents create TOCTOU vulnerabilities. Attackers modifying shared memory between check and use operations can bypass security checks.
# Vulnerable pattern
def vulnerable_operation(shm)
filename = shm.read
# Check if file is safe
if File.stat(filename).uid == Process.uid
# Attacker changes shm content here!
File.delete(filename) # May delete wrong file
end
end
# Safer pattern
def safer_operation(shm, sem)
sem.wait(0)
filename = shm.read
if File.stat(filename).uid == Process.uid
File.delete(filename)
end
sem.signal(0)
end
Information Leakage Through Timing: Operations on shared memory may have timing characteristics that leak information about the data or operations being performed. Cryptographic operations in shared memory should use constant-time algorithms to prevent timing side-channels.
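For example, comparing a secret token stored in shared memory with == leaks how many leading bytes match, because the comparison stops at the first difference. A constant-time comparison touches every byte regardless. The hand-rolled sketch below illustrates the idea; recent Ruby/OpenSSL releases also ship a dedicated secure-compare helper:

```ruby
# Compare two byte strings in time independent of where they differ.
def constant_time_equal?(a, b)
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.bytes.zip(b.bytes).each { |x, y| diff |= x ^ y }
  diff.zero?
end
```

The early return on mismatched lengths is acceptable because the length of a fixed-format token is typically public; only the contents need constant-time handling.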
Denial of Service: Malicious processes with access to shared memory can corrupt data or exhaust system resources by creating numerous segments. Resource limits and proper cleanup mechanisms mitigate these risks.
Privilege Escalation: Shared memory between processes with different privilege levels creates privilege escalation opportunities. Higher-privileged processes must validate all data from shared memory as untrusted input.
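A concrete discipline is to treat bytes read from shared memory exactly like network input. The sketch below assumes a simple length-prefixed record layout (an illustrative convention, not part of any gem) and bounds-checks the attacker-controllable length field before trusting it:

```ruby
# Parse a 4-byte big-endian length prefix followed by a payload,
# rejecting lengths a hostile peer could use to over-read.
def parse_record(buf, max_len: 4096)
  return nil if buf.bytesize < 4
  len = buf.unpack1('N') # Untrusted value: validate before use
  return nil if len > max_len || buf.bytesize < 4 + len
  buf.byteslice(4, len)
end

good = [5].pack('N') + 'hello'
evil = [0xFFFFFFFF].pack('N') # Claims a 4 GB payload
```

The same rule applies to any structured data: lengths, offsets, and type tags from shared memory must all be range-checked by the higher-privileged reader.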
Reference
Shared Memory Operations
| Operation | System V | POSIX | Description |
|---|---|---|---|
| Create | shmget with IPC_CREAT | shm_open with O_CREAT | Allocates new segment |
| Attach | shmat | mmap after shm_open | Maps segment into address space |
| Detach | shmdt | munmap | Removes mapping from address space |
| Delete | shmctl with IPC_RMID | shm_unlink | Destroys segment |
| Query | shmctl with IPC_STAT | fstat on descriptor | Retrieves segment information |
| Modify | shmctl with IPC_SET | fchmod/fchown | Changes segment properties |
Permission Bits
| Octal | Binary | Meaning |
|---|---|---|
| 0400 | r-------- | Owner read |
| 0200 | -w------- | Owner write |
| 0040 | ---r----- | Group read |
| 0020 | ----w---- | Group write |
| 0004 | ------r-- | Others read |
| 0002 | -------w- | Others write |
| 0666 | rw-rw-rw- | All can read/write |
| 0600 | rw------- | Owner only |
Common Error Codes
| Error | Condition | Resolution |
|---|---|---|
| EACCES | Permission denied | Check segment permissions |
| EEXIST | Segment exists | Use IPC_EXCL flag or different key |
| EINVAL | Invalid size or parameter | Verify size matches existing segment |
| ENOENT | Segment not found | Create segment first |
| ENOSPC | Resource limit exceeded | Check system limits or reduce size |
| ENOMEM | Insufficient memory | Free memory or reduce allocation |
System Limits
| Limit | Description | Typical Value |
|---|---|---|
| SHMMAX | Maximum segment size | 32MB - 8GB |
| SHMMIN | Minimum segment size | 1 byte |
| SHMMNI | Maximum segments system-wide | 4096 |
| SHMALL | Maximum total shared memory | 8GB - 16GB |
| SHMSEG | Maximum segments per process | 4096 |
Ruby Shared Memory Workflow
| Step | Code Pattern | Purpose |
|---|---|---|
| 1. Create | SharedMemory.new(key, size, IPC_CREAT \| 0666) | Allocate segment |
| 2. Write | shm.write(data) | Store data |
| 3. Attach | SharedMemory.new(key, size, 0) | Access from other process |
| 4. Read | shm.read | Retrieve data |
| 5. Synchronize | semaphore.wait / semaphore.signal | Coordinate access |
| 6. Detach | shm.detach | Remove mapping |
| 7. Cleanup | shm.remove | Delete segment |
Synchronization Patterns
| Pattern | Use Case | Implementation |
|---|---|---|
| Mutex | Exclusive access | Binary semaphore initialized to 1 |
| Producer-Consumer | Queue operations | Two semaphores for empty/full |
| Reader-Writer | Read-heavy workload | Multiple reader, single writer lock |
| Barrier | Synchronize phases | Counter with threshold |
Performance Characteristics
| Metric | Shared Memory | Socket IPC | Pipe IPC |
|---|---|---|---|
| Latency | 0.1-1 μs | 5-50 μs | 5-50 μs |
| Throughput (1MB) | 10-50 GB/s | 1-5 GB/s | 1-5 GB/s |
| CPU overhead | Low (no copy) | High (kernel copy) | High (kernel copy) |
| Setup cost | High (system call) | Medium | Low |
| Memory overhead | High (persistent) | Low (transient) | Low (transient) |