Overview
Ractor provides Ruby with true parallel execution through isolated, actor-like execution units that communicate via message passing. Each Ractor runs in its own execution context with no access to other Ractors' mutable state, eliminating the shared-state concerns that plague traditional threading models. Because each Ractor has its own interpreter lock, Ractors can execute Ruby code simultaneously on multiple CPU cores within a single process.
The `Ractor` class serves as the primary interface for creating and managing parallel execution units. Ractors communicate through `send` and `receive` operations, creating a message-passing system similar to actor models in other languages. This isolation prevents data races and makes parallel programming more predictable.
# Create a simple Ractor
r = Ractor.new { 42 }
result = r.take # => 42
# Create a Ractor with input
r = Ractor.new(10) { |n| n * 2 }
result = r.take # => 20
Ruby imposes strict rules on what objects can cross Ractor boundaries. Shareable objects — immutable values, certain special objects, and objects explicitly marked as shareable — pass between Ractors by reference; everything else is deep-copied or moved rather than shared. This constraint maintains isolation while preventing common concurrency bugs.
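`Ractor.shareable?` reports how a given object will be treated; a few quick checks (assuming no frozen-string-literal magic comment):

# Checking shareability before crossing a boundary
Ractor.shareable?(42)              # => true - integers are immutable
Ractor.shareable?(:sym)            # => true
Ractor.shareable?("text")          # => false - mutable string
Ractor.shareable?("text".freeze)   # => true
Ractor.shareable?([1, [2]].freeze) # => false - nested array still mutable
Ractor.shareable?(Ractor.make_shareable([1, [2]])) # => true - deep-frozen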
The Ractor system includes several key components: the main Ractor (created automatically at startup), worker Ractors created with `Ractor.new`, and communication channels established through `send`/`receive` operations. Each Ractor maintains its own execution stack, local variables, and object references.
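These components are observable from running code; a minimal sketch:

# Inspecting the Ractor system
Ractor.current == Ractor.main # => true when run from the main Ractor
Ractor.count                  # => 1 before any workers exist

worker = Ractor.new { Ractor.current.name }
worker.take # => nil - a Ractor has no name unless one is given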
Basic Usage
Creating Ractors requires passing a block containing the code to execute. The block becomes the Ractor's main execution path, running independently from other Ractors. Ruby evaluates the block in the new Ractor's isolated context.
# Basic Ractor creation
worker = Ractor.new do
puts "Running in Ractor #{Ractor.current.name}"
"Work completed"
end
result = worker.take
puts result # => "Work completed"
Ractors accept arguments through the `Ractor.new` constructor, making them accessible within the block. Shareable arguments — numbers, symbols, frozen objects — pass by reference, while unshareable arguments are deep-copied into the new Ractor.
# Ractor with arguments
calculator = Ractor.new(100, 50) do |a, b|
{
sum: a + b,
difference: a - b,
product: a * b
}
end
result = calculator.take
# => {sum: 150, difference: 50, product: 5000}
Message passing enables communication between Ractors through `send` and `receive` operations. The sending Ractor uses `send` to transmit data, while the receiving Ractor calls `Ractor.receive` to retrieve messages. Ruby queues incoming messages internally, allowing asynchronous communication.
# Two-way communication
processor = Ractor.new do
loop do
data = Ractor.receive
break if data == :stop
result = data.upcase
Ractor.yield(result)
end
end
processor.send("hello")
response = processor.take # => "HELLO"
processor.send("world")
response = processor.take # => "WORLD"
processor.send(:stop)
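Because incoming messages queue up, several `send` calls can complete before the receiver processes any of them; a small sketch:

# Messages queue in arrival order until received
collector = Ractor.new do
  3.times.map { Ractor.receive } # drains three queued messages
end

collector.send("first")
collector.send("second")
collector.send("third")           # none of these sends block
collector.take # => ["first", "second", "third"]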
Named Ractors improve debugging and monitoring by providing identifiable labels. A Ractor has no name unless one is supplied to `Ractor.new` (`name` returns `nil`), so explicit naming creates clearer code and better error messages.
# Named Ractors
logger = Ractor.new(name: 'logger') do
loop do
message = Ractor.receive
puts "[#{Time.now}] #{message}"
end
end
worker = Ractor.new(logger, name: 'worker') do |log_ractor|
  log_ractor.send("Worker started")
  # Perform work here
  log_ractor.send("Worker finished")
end
worker.take # wait for the worker so the program doesn't exit early
Thread Safety & Concurrency
Ractors achieve thread safety through complete isolation rather than synchronization mechanisms. Each Ractor operates on its own objects, preventing the shared-state issues that require mutexes or other synchronization primitives. This design eliminates data races and lock-based deadlocks, although communication deadlocks remain possible (see Common Pitfalls).
Ruby enforces strict object-sharing rules between Ractors. Shareable objects include frozen strings, numbers, symbols, true/false/nil, and objects deep-frozen with Ractor.make_shareable; they pass between Ractors by reference. Unshareable objects never cross a boundary directly — they are deep-copied when sent, or transferred outright with move: true.
# Object sharing constraints
data = { count: 0, items: [] }
Ractor.shareable?(data) # => false

# Blocks cannot capture outer variables at all
# Ractor.new { data[:count] += 1 }
# => ArgumentError: can not isolate a Proc because it accesses outer variables (data)

# Unshareable arguments are deep-copied, so the original never changes
worker = Ractor.new(data) { |d| d[:count] += 1 }
worker.take    # => 1 - the copy was incremented
data[:count]   # => 0 - the original is untouched

# Deep-frozen objects are shareable and pass by reference
frozen_data = Ractor.make_shareable({ count: 0, items: [] })
worker = Ractor.new(frozen_data) { |d| d[:count] } # read-only access
worker.take # => 0
Ractors communicate through copying or moving objects rather than sharing references. Ruby creates deep copies of complex data structures when transferring them between Ractors, ensuring complete isolation. This copying mechanism prevents accidental state sharing.
# Message copying demonstration
original_array = [1, 2, 3, 4, 5]
processor = Ractor.new do
received_array = Ractor.receive
received_array.map! { |x| x * 2 } # Modifying copy
Ractor.yield(received_array)
end
processor.send(original_array)
modified = processor.take # => [2, 4, 6, 8, 10]
# Original remains unchanged
p original_array # => [1, 2, 3, 4, 5]
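When copying large payloads becomes expensive, `send` and `Ractor.yield` accept `move: true`, which transfers ownership instead of copying; the sender loses access to the moved object:

# Moving instead of copying
receiver = Ractor.new do
  msg = Ractor.receive
  Ractor.yield(msg.sum)
end

big_array = Array.new(1_000_000) { |i| i }
receiver.send(big_array, move: true) # ownership transfer, no deep copy
receiver.take # => 499999500000

# The moved-out reference is no longer usable:
# big_array.size # => raises Ractor::MovedError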
Multiple Ractors can run simultaneously without coordination overhead. Each Ractor runs on its own native thread, which the operating system schedules across available CPU cores, so no manual thread management is needed to achieve parallelism.
# Parallel processing with multiple Ractors
def parallel_map(array, &block)
  # A Ractor block cannot capture the surrounding scope, so the work
  # block is isolated with make_shareable and passed in as an argument
  work = Ractor.make_shareable(block)
  chunk_size = (array.size / 4.0).ceil
  chunks = array.each_slice(chunk_size).to_a
  ractors = chunks.map.with_index do |chunk, index|
    Ractor.new(chunk, work, name: "worker_#{index}") do |data, blk|
      data.map(&blk)
    end
  end
  results = ractors.map(&:take)
  results.flatten
end

# Usage - the block itself must not reference outer variables
numbers = (1..1000).to_a
squares = parallel_map(numbers) { |n| n * n }
Ractors handle exceptions internally without affecting other Ractors. An uncaught exception terminates only the affected Ractor and resurfaces, wrapped in Ractor::RemoteError, in whichever Ractor calls take on it. This isolation prevents cascade failures in parallel processing scenarios.
# Exception isolation
workers = 3.times.map do |i|
Ractor.new(i, name: "worker_#{i}") do |index|
if index == 1
raise "Error in worker #{index}"
end
"Worker #{index} completed"
end
end
# Collect results, handling failures
results = workers.map do |worker|
  begin
    worker.take
  rescue Ractor::RemoteError => e
    "Failed: #{e.cause.message}" # the original exception is e.cause
  end
end
p results
# => ["Worker 0 completed", "Failed: Error in worker 1", "Worker 2 completed"]
Performance & Memory
Ractors excel at CPU-intensive tasks that benefit from parallel execution across multiple cores. Ruby distributes Ractors efficiently, but optimal performance requires matching Ractor count to available CPU cores and task characteristics.
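The available core count can be read at runtime through the standard etc library; a minimal sizing sketch:

# Sizing parallel work to the machine
require 'etc'

worker_count = Etc.nprocessors # number of logical CPU cores
slice = (10_000.0 / worker_count).ceil
workers = (1..10_000).each_slice(slice).map do |chunk|
  Ractor.new(chunk) { |data| data.sum }
end
total = workers.map(&:take).sum # => 50005000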
Memory usage in Ractors follows different patterns than traditional threading. Each Ractor allocates objects through its own allocation cache, which raises overall memory consumption but reduces allocator contention. Note that CRuby's garbage collector remains process-wide: a collection pauses every Ractor, so garbage collection is not fully independent per Ractor.
# CPU-intensive parallel processing
def benchmark_parallel_vs_sequential(array, iterations = 1000)
require 'benchmark'
# Sequential processing
sequential_time = Benchmark.measure do
iterations.times do
array.map { |x| Math.sqrt(x) ** 2 }
end
end
# Parallel processing
parallel_time = Benchmark.measure do
iterations.times do
chunk_size = array.size / 4
chunks = array.each_slice(chunk_size).to_a
ractors = chunks.map do |chunk|
Ractor.new(chunk) { |data| data.map { |x| Math.sqrt(x) ** 2 } }
end
ractors.map(&:take).flatten
end
end
{
sequential: sequential_time.real,
parallel: parallel_time.real,
speedup: sequential_time.real / parallel_time.real
}
end
# Test with large dataset
data = (1..10000).to_a
results = benchmark_parallel_vs_sequential(data)
Communication overhead affects Ractor performance significantly. Frequent message passing and large data transfers reduce parallel efficiency. Ruby optimizes small message transfers but copying large objects creates bottlenecks.
# Communication overhead analysis
def measure_communication_overhead(message_size, message_count)
  # Create test data of the specified size
  test_data = "x" * message_size
  start_time = Time.now
  # The count is passed as an argument because the block
  # cannot read the method's local variables
  worker = Ractor.new(message_count) do |count|
    count.times do
      Ractor.receive
      Ractor.yield("processed")
    end
  end
  message_count.times do
    worker.send(test_data)
    worker.take
  end
  Time.now - start_time
end
# Compare different message sizes
[100, 1_000, 10_000, 100_000].each do |size|
time = measure_communication_overhead(size, 100)
puts "#{size} byte messages: #{time.round(3)}s"
end
Memory allocation patterns differ between Ractors and threads. Ractors allocate through separate per-Ractor caches, giving up shared-memory savings in exchange for reduced allocation contention. This trade-off favors workloads with high allocation rates and minimal data sharing.
# Memory allocation patterns
def memory_intensive_task(size)
Ractor.new(size) do |n|
arrays = []
n.times { |i| arrays << (1..1000).to_a }
# Simulate processing
total = arrays.map(&:sum).sum
arrays = nil # Release memory
GC.start
total
end
end
# Multiple Ractors with heavy allocation
workers = 8.times.map { memory_intensive_task(1000) }
results = workers.map(&:take)
Load balancing becomes critical with uneven task distribution. Ruby provides no built-in work stealing between Ractors, requiring manual load balancing for optimal performance.
# Dynamic work distribution through a dispatcher Ractor
def balanced_pool(tasks, worker_count = 4, &block)
  # Thread::Queue cannot cross Ractor boundaries, so a pipe Ractor
  # acts as the shared task queue; idle workers pull from it
  pipe = Ractor.new do
    loop { Ractor.yield(Ractor.receive) }
  end
  work = Ractor.make_shareable(block)
  workers = worker_count.times.map do |i|
    Ractor.new(pipe, work, name: "worker_#{i}") do |queue, blk|
      results = []
      loop do
        task = queue.take
        break if task == :stop
        results << blk.call(task)
      end
      results
    end
  end
  tasks.each { |task| pipe.send(task) }
  worker_count.times { pipe.send(:stop) } # one :stop per worker
  workers.map(&:take).flatten
end

# Usage with uneven task distribution
tasks = (1..100).map { |i| i % 10 == 0 ? 1000 : 10 } # Some heavy tasks
results = balanced_pool(tasks) { |n| n.times.sum }
Production Patterns
Production Ractor usage requires careful resource management and error handling strategies. Long-running applications must monitor Ractor lifecycle, handle failures gracefully, and manage system resources effectively.
Worker pool patterns provide controlled parallel execution for web applications and background processing. Ractor pools maintain a fixed number of workers, distributing tasks efficiently while limiting resource consumption.
# Production-ready Ractor worker pool
class RactorPool
  def initialize(size:, name_prefix: 'worker')
    @size = size
    # Thread::Queue cannot cross Ractor boundaries, so a pipe Ractor
    # serves as the shared task queue
    @pipe = Ractor.new do
      loop { Ractor.yield(Ractor.receive) }
    end
    @workers = @size.times.map do |i|
      Ractor.new(@pipe, name: "#{name_prefix}_#{i}") do |pipe|
        loop do
          task = pipe.take
          break if task == :shutdown
          id, data, work = task
          begin
            Ractor.yield({ task_id: id, result: work.call(data) })
          rescue => e
            Ractor.yield({ task_id: id, error: e.message })
          end
        end
      end
    end
    @pending = {} # results that arrived for other task ids
  end

  def submit(task_data, &block)
    task_id = generate_task_id
    # The block must be self-contained; make_shareable rejects
    # procs that capture outer variables
    @pipe.send([task_id, task_data, Ractor.make_shareable(block)])
    task_id
  end

  def get_result(task_id, timeout: 10)
    if (message = @pending.delete(task_id))
      raise message[:error] if message[:error]
      return message[:result]
    end
    deadline = Time.now + timeout
    while Time.now < deadline
      # Ractor.select blocks until any worker yields, so the
      # timeout is only checked between results
      _worker, message = Ractor.select(*@workers)
      if message[:task_id] == task_id
        raise message[:error] if message[:error]
        return message[:result]
      end
      @pending[message[:task_id]] = message # stash results for other callers
    end
    raise "Task #{task_id} timed out"
  end

  def shutdown
    # Assumes all submitted results have been collected
    @size.times { @pipe.send(:shutdown) }
    @workers.each { |worker| worker.take rescue nil }
    @workers.clear
  end

  private

  def generate_task_id
    "task_#{Time.now.to_f}_#{rand(10_000)}"
  end
end
# Usage in web application context
pool = RactorPool.new(size: 8)

# Process user requests; the submitted block must be self-contained
def process_user_data(pool, user_data)
  task_id = pool.submit(user_data) do |data|
    # Expensive data processing
    data.transform_values { |v| v.to_s.reverse }
  end
  begin
    pool.get_result(task_id, timeout: 30)
  rescue => e
    { error: e.message }
  end
end
Monitoring and health checking become essential in production Ractor deployments. Applications must track Ractor status, resource usage, and performance metrics to maintain system stability.
# Ractor health monitoring system
class RactorMonitor
def initialize
@metrics = {
created: 0,
completed: 0,
failed: 0,
active: 0
}
@health_checks = []
@mutex = Mutex.new
end
def register_ractor(ractor, metadata = {})
@mutex.synchronize do
@metrics[:created] += 1
@metrics[:active] += 1
end
# Monitor Ractor completion
monitor_thread = Thread.new do
begin
result = ractor.take
@mutex.synchronize do
@metrics[:completed] += 1
@metrics[:active] -= 1
end
yield(result, metadata) if block_given?
      rescue => e
        @mutex.synchronize do
          @metrics[:failed] += 1
          @metrics[:active] -= 1
        end
        # take wraps the worker's exception in Ractor::RemoteError;
        # the original is available via e.cause
        handle_failure(e.cause || e, metadata)
end
end
@health_checks << monitor_thread
end
def stats
@mutex.synchronize { @metrics.dup }
end
  def health_report
    stats = self.stats
    processed = stats[:completed] + stats[:failed]
    {
      status: stats[:failed] > stats[:completed] * 0.1 ? :unhealthy : :healthy,
      success_rate: processed.zero? ? 1.0 : stats[:completed].to_f / processed,
      active_workers: stats[:active],
      total_processed: processed
    }
  end
private
def handle_failure(error, metadata)
puts "Ractor failed: #{error.message}"
puts "Metadata: #{metadata}"
# Implement alerting, logging, etc.
end
end
# Integration with monitoring system
monitor = RactorMonitor.new
# Create monitored Ractors
10.times do |i|
ractor = Ractor.new(i) do |index|
# Simulate work with potential failure
raise "Simulated failure" if index == 7
"Result #{index}"
end
monitor.register_ractor(ractor, { worker_id: i, created_at: Time.now })
end
# Check system health
sleep 2
puts monitor.health_report
Error recovery and restart strategies maintain service availability during Ractor failures. Production systems implement supervisor patterns that automatically restart failed Ractors and redistribute work.
# Ractor supervisor with automatic restart
class RactorSupervisor
  def initialize(max_restarts: 5, restart_window: 60)
    @max_restarts = max_restarts
    @restart_window = restart_window
    @supervised_ractors = {}
    # Restart timestamps per name, so limits apply within the window
    @restart_times = Hash.new { |h, k| h[k] = [] }
  end
def supervise(name, restart_strategy: :permanent, &block)
start_ractor(name, restart_strategy, &block)
end
  def stop_supervision(name)
    if (info = @supervised_ractors[name])
      info[:should_stop] = true
      @supervised_ractors.delete(name)
      @restart_times.delete(name)
    end
  end
  def status
    @supervised_ractors.transform_values do |info|
      {
        strategy: info[:strategy],
        restarts: @restart_times[info[:name]].size,
        # Ractor offers no liveness query; the monitor thread records completion
        finished: info.fetch(:finished, false)
      }
    end
  end
private
  def start_ractor(name, strategy, &block)
    ractor_info = {
      name: name,
      strategy: strategy,
      # Procs must be isolated before they can enter a Ractor
      block: Ractor.make_shareable(block),
      should_stop: false
    }
    spawn_ractor(ractor_info)
    @supervised_ractors[name] = ractor_info
    # Monitor and restart if needed
    Thread.new { monitor_ractor(ractor_info) }
  end

  def spawn_ractor(ractor_info)
    # A Hash containing a Proc is unshareable, so the pieces
    # are passed as separate arguments
    work = ractor_info[:block]
    name = ractor_info[:name]
    ractor_info[:ractor] = Ractor.new(work, name, name: name) do |job, ractor_name|
      begin
        job.call
      rescue => e
        { error: e.message, ractor_name: ractor_name }
      end
    end
  end
def monitor_ractor(ractor_info)
loop do
begin
result = ractor_info[:ractor].take
if result.is_a?(Hash) && result[:error]
handle_ractor_failure(ractor_info, result[:error])
end
break if ractor_info[:should_stop]
# Restart if strategy permits
if should_restart?(ractor_info)
sleep 1 # Brief delay before restart
restart_ractor(ractor_info)
else
break
end
    rescue => e
      handle_ractor_failure(ractor_info, (e.cause || e).message)
break if ractor_info[:should_stop] || !should_restart?(ractor_info)
sleep 1
restart_ractor(ractor_info)
end
    end
    ractor_info[:finished] = true # recorded for status reporting
  end
  def should_restart?(ractor_info)
    return false unless ractor_info[:strategy] == :permanent
    # Only restarts inside the sliding window count toward the limit
    recent = @restart_times[ractor_info[:name]]
    recent.reject! { |t| Time.now - t > @restart_window }
    recent.size < @max_restarts
  end
  def restart_ractor(ractor_info)
    name = ractor_info[:name]
    @restart_times[name] << Time.now
    spawn_ractor(ractor_info)
    puts "Restarted Ractor #{name} (restart ##{@restart_times[name].size})"
  end
def handle_ractor_failure(ractor_info, error_message)
puts "Ractor #{ractor_info[:name]} failed: #{error_message}"
# Implement logging, alerting, etc.
end
end
Common Pitfalls
Object shareability rules cause frequent confusion when moving data between Ractors. Ruby deep-copies unshareable objects at Ractor boundaries rather than rejecting them, so the usual surprise is that changes fail to propagate — and determining whether a complex object is truly shareable requires examining its entire object graph, since every nested reference must be immutable.
# Shareability confusion with nested objects
class UserData
  attr_accessor :name, :preferences
  def initialize(name, preferences = {})
    @name = name
    @preferences = preferences
  end
end

user = UserData.new("Alice", { theme: "dark", notifications: true })

# Not shareable - passing it to a Ractor sends a deep copy
Ractor.shareable?(user) # => false

# A shallow freeze is not enough - @preferences is still mutable
frozen_user = user.freeze
Ractor.shareable?(frozen_user) # => false

# Correct approach - Ractor.make_shareable deep-freezes the object graph
shareable_user = Ractor.make_shareable(UserData.new("Bob", { theme: "light" }))
Ractor.shareable?(shareable_user)             # => true
Ractor.shareable?(shareable_user.preferences) # => true - frozen too

worker = Ractor.new(shareable_user) { |u| u.name.upcase }
result = worker.take # => "BOB"
Communication deadlocks occur when Ractors create circular wait conditions through `send`/`receive` and `yield`/`take` operations. These deadlocks manifest as hanging programs without clear error messages.
# Deadlock scenario - mutual waiting
def deadlock_example
  r1 = Ractor.new(name: 'ractor1') do
    Ractor.receive # waits for a message that never arrives
    Ractor.yield("response from R1")
  end
  r2 = Ractor.new(r1, name: 'ractor2') do |other|
    other.take # waits for R1 to yield before sending anything
    other.send("message from R2")
  end
  # This hangs: R1 waits to receive from R2, while R2 waits
  # to take from R1 before it ever sends
  r2.take
end
# Correct approach - avoid circular dependencies
def proper_communication
processor = Ractor.new(name: 'processor') do
loop do
data = Ractor.receive
break if data == :stop
result = data * 2
Ractor.yield(result)
end
end
# Send data first, then collect results
processor.send(10)
result1 = processor.take # => 20
processor.send(20)
result2 = processor.take # => 40
processor.send(:stop)
[result1, result2]
end
Exception handling misconceptions lead to silent failures and resource leaks. Developers often assume exceptions propagate between Ractors or that failing Ractors clean up automatically.
# Incorrect exception handling assumptions
def wrong_exception_handling
workers = 3.times.map do |i|
Ractor.new(i) do |index|
if index == 1
raise StandardError, "Worker #{index} failed"
end
"Worker #{index} success"
end
end
# This only gets successful results, silently ignores failures
results = workers.map do |worker|
begin
worker.take
rescue
nil # Silently ignoring failures
end
end.compact
puts "Got #{results.length} results" # Missing failed worker
end
# Correct exception handling with logging and recovery
def proper_exception_handling
workers = 3.times.map do |i|
Ractor.new(i) do |index|
if index == 1
raise StandardError, "Worker #{index} failed"
end
"Worker #{index} success"
end
end
results = workers.map.with_index do |worker, index|
begin
worker.take
    rescue Ractor::RemoteError => e
      puts "Worker #{index} failed: #{e.cause.message}"
      # Log the original exception, available via e.cause
      { error: e.cause.message, worker_id: index }
end
end
successful = results.reject { |r| r.is_a?(Hash) && r[:error] }
failed = results.select { |r| r.is_a?(Hash) && r[:error] }
puts "Successful: #{successful.length}, Failed: #{failed.length}"
{ successful: successful, failed: failed }
end
Memory management issues arise from misunderstanding Ractor isolation and garbage collection behavior. Each Ractor maintains separate memory spaces, but shared references and message passing can create unexpected memory retention.
# Memory leak through message accumulation
def memory_leak_example
# Creates a Ractor that accumulates messages without processing
accumulator = Ractor.new do
messages = []
loop do
msg = Ractor.receive
break if msg == :stop
messages << msg # Growing array never cleaned
end
messages.size
end
# Sending many messages without processing
1000.times { |i| accumulator.send("Message #{i}") }
accumulator.send(:stop)
count = accumulator.take
puts "Accumulated #{count} messages"
end
# Proper memory management with periodic cleanup
def managed_memory_example
  processor = Ractor.new do
    processed_count = 0
    batch = []
    loop do
      msg = Ractor.receive
      break if msg == :stop
      batch << msg
      # Process in batches and clean up
      if batch.size >= 100
        batch.each { |item| process_item(item) }
        processed_count += batch.size
        batch.clear # Release references
        GC.start # Suggest garbage collection
        Ractor.yield({ processed: processed_count, batch_complete: true })
      end
    end
    # Process remaining items
    batch.each { |item| process_item(item) }
    processed_count + batch.size
  end

  1000.times { |i| processor.send("Message #{i}") }
  processor.send(:stop)

  # 1000 messages / 100 per batch = exactly 10 progress updates,
  # followed by the block's final count; taking blindly in a loop
  # would deadlock once the updates run out
  progress_updates = 10.times.map { processor.take }
  final_count = processor.take
  puts "Final processed count: #{final_count}"
  puts "Received #{progress_updates.length} progress updates"
end
def process_item(item)
# Simulate processing
item.length
end
Performance expectations often miss the overhead of Ractor creation and message passing. Developers expect linear speedup without accounting for coordination costs and task distribution overhead.
# Unrealistic performance expectations
def performance_pitfall_example
data = (1..1000).to_a
# Creating too many Ractors for small tasks
start_time = Time.now
# This is inefficient - too much overhead per small task
results = data.map do |item|
Ractor.new(item) { |n| n * 2 }.take
end
overhead_time = Time.now - start_time
# Efficient batch processing approach
start_time = Time.now
chunk_size = data.size / 4
chunks = data.each_slice(chunk_size).to_a
ractors = chunks.map do |chunk|
Ractor.new(chunk) { |batch| batch.map { |n| n * 2 } }
end
efficient_results = ractors.map(&:take).flatten
efficient_time = Time.now - start_time
puts "Overhead approach: #{overhead_time.round(3)}s"
puts "Efficient approach: #{efficient_time.round(3)}s"
puts "Speedup: #{(overhead_time / efficient_time).round(2)}x"
end
Reference
Core Classes and Methods
Method | Parameters | Returns | Description |
---|---|---|---|
`Ractor.new(*args, name: nil, &block)` | `args` (shareable or copyable objects), `name` (String), `block` (Proc) | `Ractor` | Creates a new Ractor that runs the block with the given arguments |
`Ractor.current` | None | `Ractor` | Returns the currently executing Ractor |
`#take` | None | `Any` | Retrieves a yielded or returned value, blocking until one is available |
`#send(obj, move: false)` | `obj` (any object), `move` (Boolean) | `Ractor` | Sends an object to the Ractor's incoming port |
`Ractor.receive` | None | `Any` | Receives the next message from the current Ractor's incoming port |
`Ractor.yield(obj, move: false)` | `obj` (any object), `move` (Boolean) | `nil` | Yields an object to the outgoing port, blocking until another Ractor takes it |
`#name` | None | `String` or `nil` | Returns the Ractor's name |
`#inspect` | None | `String` | Returns detailed Ractor information |
`#close_incoming` / `#close_outgoing` | None | `Boolean` | Close the Ractor's communication ports |
Shareable Object Types
Type | Shareable | Notes |
---|---|---|
`Integer` | ✓ | All integer values pass freely |
`Float` | ✓ | All float values pass freely |
`String` (frozen) | ✓ | Must be frozen before sharing |
`String` (mutable) | ✗ | Deep-copied when sent rather than shared |
`Symbol` | ✓ | Always immutable and shareable |
`true`/`false`/`nil` | ✓ | Singleton values are always shareable |
`Array` (frozen, all elements shareable) | ✓ | Requires deep shareability |
`Hash` (frozen, all keys/values shareable) | ✓ | Requires deep shareability |
`Class`/`Module` | ✓ | Class and module objects are shareable |
`Method`/`UnboundMethod` | ✓ | Method objects are shareable |
`Proc`/`Lambda` | ✗ | Must be isolated with `Ractor.make_shareable` first |
Communication Patterns
Pattern | Usage | Example |
---|---|---|
Send-Receive | Asynchronous messaging | `r.send(data)` in one Ractor, `Ractor.receive` in the other |
Yield-Take | Synchronous result hand-off | `Ractor.yield(result)` paired with `r.take` |
Bidirectional | Request-response pattern | Send a request, then take the response |
Broadcast | One-to-many messaging | Send the same shareable data to several Ractors |
Pipeline | Chain processing | Each stage takes the previous stage's output |
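The pipeline pattern from the table, sketched as a two-stage chain where each stage takes from the previous one:

# Pipeline: each stage takes from the stage before it
stage1 = Ractor.new do
  loop { Ractor.yield(Ractor.receive.upcase) }
end

stage2 = Ractor.new(stage1) do |upstream|
  loop { Ractor.yield(upstream.take + "!") }
end

stage1.send("hello")
stage2.take # => "HELLO!"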
Exception Hierarchy
StandardError
└── RuntimeError
    └── Ractor::Error (base class for Ractor errors)
        ├── Ractor::RemoteError (wraps an exception raised inside a Ractor; original in #cause)
        ├── Ractor::MovedError (access to an object already moved)
        ├── Ractor::ClosedError (communication port closed)
        ├── Ractor::UnsafeError (unsafe operation attempted)
        └── Ractor::IsolationError (block or proc violates isolation)
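In practice the two most commonly rescued classes are Ractor::RemoteError, which carries the original exception in its cause, and Ractor::ClosedError:

# Rescuing Ractor exceptions
r = Ractor.new { raise ArgumentError, "bad input" }
begin
  r.take
rescue Ractor::RemoteError => e
  e.cause # => #<ArgumentError: bad input>
end

begin
  r.take # the Ractor has terminated; its ports are closed
rescue Ractor::ClosedError
  puts "already terminated"
end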
Configuration Options
Option | Type | Default | Description |
---|---|---|---|
`name` (`Ractor.new`) | String | nil | Ractor identifier for debugging |
`move` (`send`/`yield`) | Boolean | false | Transfer object ownership instead of copying |
Performance Characteristics
Operation | Cost | Scale Factor |
---|---|---|
Ractor creation | High | O(1) per Ractor |
Message copying | Medium | O(object size) |
Message moving | Low | O(1) reference transfer |
Context switching | Low | O(1) scheduler operation |
Memory allocation | High | Grows with per-Ractor heap usage |
Debugging Methods
Method | Returns | Purpose |
---|---|---|
`Ractor.count` | Integer | Number of live Ractors |
`Ractor.current.name` | String or nil | Current Ractor's identifier |
`#inspect` | String | Name, source location, and status of a Ractor |
`ObjectSpace.each_object(Ractor)` | Enumerator | All Ractor instances |
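A short debugging session using these helpers:

# Counting and inspecting live Ractors
Ractor.count # => 1 - just the main Ractor

workers = 3.times.map { |i| Ractor.new(name: "w#{i}") { Ractor.receive } }
Ractor.count # => 4 while the workers block in receive

workers.each { |w| puts w.inspect } # shows name, source location, status
workers.each { |w| w.send(:done) }
workers.each(&:take)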