Overview
Ractor.main? is a class method that returns true when called from Ruby's main Ractor and false when called from any sub-Ractor. The main Ractor represents the initial execution context that begins when a Ruby program starts, while sub-Ractors are concurrent execution contexts created explicitly through Ractor.new.
Ruby implements the main Ractor as a special case within the Ractor system. Every Ruby program automatically starts with one main Ractor, and this Ractor has unique privileges that distinguish it from sub-Ractors. The main Ractor can access global variables, class variables, and constants that reference unshareable objects, none of which sub-Ractors can reach directly under Ruby's isolation model.
The method serves as a conditional check for code that needs different behavior depending on the execution context. This becomes critical when writing libraries or applications that work across both main and sub-Ractor environments.
# Basic identification of main Ractor
puts Ractor.main?
# => true (when run from main program)
# Create a sub-Ractor to demonstrate difference
sub_ractor = Ractor.new do
  puts "In sub-Ractor: #{Ractor.main?}"
  puts "Current Ractor: #{Ractor.current}"
end
sub_ractor.take
# Output:
#   In sub-Ractor: false
#   Current Ractor: #<Ractor:#2 ...>
The main Ractor maintains exclusive access to certain Ruby features, including global variables, direct manipulation of class variables, and operations such as installing signal handlers with Signal.trap. Sub-Ractors operate under strict isolation rules that prevent data races but also limit their capabilities.
# Demonstrate main Ractor privileges
$global_var = "accessible"
main_result = Ractor.new do
  begin
    # This will raise an exception in sub-Ractor
    puts $global_var
  rescue => e
    "Error: #{e.class}"
  end
end
puts main_result.take
# => "Error: Ractor::IsolationError"
# But main Ractor can access it
puts $global_var if Ractor.main?
# => "accessible"
Basic Usage
Ractor.main? returns a boolean value without accepting any parameters. The method provides a simple way to branch execution logic based on the current Ractor context. Most commonly, developers use this method to initialize resources differently or to restrict certain operations to the main Ractor.
# Conditional resource initialization
def setup_resources
  if Ractor.main?
    @database_pool = Database.create_pool(size: 10)
    @cache = Redis.new(host: 'localhost')
    puts "Main Ractor: Full resource setup complete"
  else
    @limited_cache = {}
    puts "Sub-Ractor: Limited resource setup"
  end
end
setup_resources
The method becomes particularly useful when writing code that needs to handle global state or external resources. Since sub-Ractors cannot access global variables or shared mutable state, checking the Ractor context prevents runtime errors.
require 'yaml'
class ConfigManager
  def initialize
    if Ractor.main?
      load_global_config
      setup_signal_handlers
    else
      @config = receive_config_from_main
    end
  end
  private
  def load_global_config
    $app_config = YAML.load_file('config.yml')
    @config = $app_config
  end
  def setup_signal_handlers
    Signal.trap('INT') { graceful_shutdown }
    Signal.trap('TERM') { graceful_shutdown }
  end
  def receive_config_from_main
    # Sub-Ractor would receive config through message passing
    {}
  end
end
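The receive_config_from_main stub above only hints at the message-passing handoff. The following sketch, using a made-up config hash, shows one way the main Ractor could push configuration into a worker:
# Main Ractor copies a config hash into a worker via send/receive
config = { host: 'localhost', port: 5432 }
worker = Ractor.new do
  cfg = Ractor.receive # the worker gets its own copy of the hash
  "Worker configured for #{cfg[:host]}:#{cfg[:port]}"
end
worker.send(config)
puts worker.take
# => "Worker configured for localhost:5432"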
When designing concurrent applications, Ractor.main? helps establish communication patterns between the main Ractor and sub-Ractors. The main Ractor typically coordinates work distribution and resource management while sub-Ractors perform isolated computations.
def process_data_concurrently(data_chunks)
  if Ractor.main?
    # Main Ractor coordinates the work
    workers = data_chunks.map do |chunk|
      Ractor.new(chunk) do |data|
        # Each worker processes its chunk
        process_chunk(data)
      end
    end
    # Collect results from all workers
    workers.map(&:take)
  else
    # Sub-Ractor processes individual chunk
    raise "This method should only be called from main Ractor"
  end
end
def process_chunk(data)
  puts "Processing in Ractor: #{Ractor.current}"
  puts "Is main? #{Ractor.main?}"
  # Simulate processing work
  data.map { |item| item * 2 }
end
# Usage
data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
results = process_data_concurrently(data)
puts results.inspect
# => [[2, 4, 6], [8, 10, 12], [14, 16, 18]]
The method also serves validation purposes in library code that must ensure certain operations occur only in the main Ractor context. This prevents subtle bugs that could arise from attempting restricted operations in sub-Ractors.
module FileProcessor
  def self.process_files(file_paths)
    unless Ractor.main?
      raise ArgumentError, "File processing must occur in main Ractor"
    end
    file_paths.map do |path|
      File.read(path).upcase
    end
  end
end
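Calling the guarded method from a sub-Ractor demonstrates the validation. This usage sketch assumes the FileProcessor module above and an arbitrary file name:
# The guard raises inside the sub-Ractor before any file is touched
checker = Ractor.new do
  begin
    FileProcessor.process_files(['example.txt'])
  rescue ArgumentError => e
    e.message
  end
end
puts checker.take
# => "File processing must occur in main Ractor"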
Thread Safety & Concurrency
Ractor.main? is inherently thread-safe since it queries the current Ractor's identity rather than accessing shared state. Each Ractor maintains its own execution context, and the method simply returns the boolean status of whether the current context is the main Ractor.
The method's thread safety extends to its usage within concurrent operations. Multiple threads within the same Ractor will all receive the same result when calling Ractor.main?, making it safe for conditional logic in multi-threaded code.
require 'concurrent-ruby'
# Demonstrate thread safety within main Ractor
thread_pool = Concurrent::ThreadPoolExecutor.new(min_threads: 2, max_threads: 4)
10.times do |i|
  thread_pool.post do
    puts "Thread #{i}: Main Ractor? #{Ractor.main?}"
    puts "Thread #{i}: Current Ractor: #{Ractor.current}"
  end
end
thread_pool.shutdown
thread_pool.wait_for_termination
# All threads report: "Main Ractor? true"
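The same guarantee holds inside a sub-Ractor: every thread spawned there sees false. A minimal sketch using plain threads (no extra gems):
# Threads created inside a sub-Ractor all report Ractor.main? == false
sub = Ractor.new do
  threads = 3.times.map do |i|
    Thread.new { "Thread #{i} in sub-Ractor: main? #{Ractor.main?}" }
  end
  threads.map(&:value)
end
pp sub.take
# => ["Thread 0 in sub-Ractor: main? false", ...]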
When designing concurrent systems with Ractors, Ractor.main? helps establish communication protocols. The main Ractor often serves as a coordinator that manages shared resources and orchestrates work distribution to sub-Ractors.
class WorkCoordinator
  def initialize
    @workers = []
  end
  def start_workers(count)
    raise "Workers can only be started from main Ractor" unless Ractor.main?
    count.times do |i|
      # Unshareable objects such as a Queue cannot be handed to a sub-Ractor
      # for shared use; results flow back through Ractor.yield instead
      worker = Ractor.new(i) do |worker_id|
        puts "Worker #{worker_id} started. Main? #{Ractor.main?}"
        # Worker loop: receive items until told to shut down
        loop do
          work_item = Ractor.receive
          break if work_item == :shutdown
          # Simulate work performed inside the sub-Ractor
          sleep(0.1)
          Ractor.yield(work_item.to_s.upcase)
        end
      end
      @workers << worker
    end
  end
  def distribute_work(work_items)
    return unless Ractor.main?
    work_items.each_with_index do |item, index|
      @workers[index % @workers.length].send(item)
    end
    # Collect one result per item from the worker that received it
    results = work_items.each_with_index.map do |_item, index|
      @workers[index % @workers.length].take
    end
    @workers.each { |worker| worker.send(:shutdown) }
    results
  end
end
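A hypothetical round trip through the coordinator sketched above:
coordinator = WorkCoordinator.new
coordinator.start_workers(3)
results = coordinator.distribute_work(%w[alpha beta gamma delta])
puts results.inspect
# => ["ALPHA", "BETA", "GAMMA", "DELTA"]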
Race conditions cannot occur with Ractor.main? itself since each Ractor's identity is immutable. However, the method's result should be cached if used frequently within performance-critical code paths, though the method call overhead is minimal.
class RactorAwareService
  def initialize
    @is_main_ractor = Ractor.main?
    @resource_manager = @is_main_ractor ? ResourceManager.new : nil
  end
  def process_request(data)
    if @is_main_ractor
      # Main Ractor can access shared resources
      @resource_manager.process(data)
    else
      # Sub-Ractor performs isolated computation
      compute_result(data)
    end
  end
  private
  def compute_result(data)
    # CPU-intensive work suitable for sub-Ractor
    data.map { |x| Math.sqrt(x) }.sum
  end
end
Synchronization primitives work differently across Ractor boundaries. The main Ractor can coordinate multiple sub-Ractors, but sub-Ractors cannot share synchronization objects due to Ruby's isolation model.
# Main Ractor coordination pattern
class ParallelProcessor
  def initialize
    @coordinator_mutex = Mutex.new if Ractor.main?
    @completion_status = {}
  end
  def process_parallel(data_sets)
    unless Ractor.main?
      raise "Parallel processing must be initiated from main Ractor"
    end
    ractors = data_sets.map.with_index do |data, index|
      Ractor.new(data, index) do |work_data, worker_id|
        # CPU-bound work is inlined because the block cannot call the
        # coordinator's private instance methods from inside the sub-Ractor
        result = work_data.length.times.sum { |i| (1..(i % 10)).inject(1, :*) }
        [worker_id, result]
      end
    end
    # Collect results with coordination; the mutex guards the shared
    # status hash against other threads running in the main Ractor
    results = {}
    ractors.each do |ractor|
      worker_id, result = ractor.take
      @coordinator_mutex.synchronize do
        @completion_status[worker_id] = :completed
        results[worker_id] = result
      end
    end
    results
  end
end
Common Pitfalls
A frequent mistake involves assuming Ractor.main? can be used to detect the "primary" thread within any Ractor. The method identifies the main Ractor, not the main thread. Each Ractor can spawn multiple threads, and all threads within the main Ractor return true for Ractor.main?.
# INCORRECT: Assuming main thread detection
def setup_logging
  if Ractor.main? # This checks Ractor, not thread
    puts "Setting up logging..." # This runs in ALL threads of main Ractor
  end
end
# Multiple threads in main Ractor all trigger the condition
5.times do |i|
  Thread.new do
    setup_logging # All threads print "Setting up logging..."
  end
end
# CORRECT: Check for main thread within main Ractor
def setup_logging_correctly
  if Ractor.main? && Thread.current == Thread.main
    puts "Setting up logging in main thread of main Ractor"
  end
end
Another common error involves attempting to pass the result of Ractor.main? between Ractors as a means of identification. Since each Ractor evaluates the method independently, passing the boolean value loses its contextual meaning.
# INCORRECT: Passing main status between Ractors
main_status = Ractor.main? # true in main Ractor
sub_ractor = Ractor.new(main_status) do |is_main|
  if is_main # This is misleading - we're in a sub-Ractor now
    puts "I think I'm in main Ractor, but I'm not!"
  end
end
# CORRECT: Each Ractor checks its own status
sub_ractor = Ractor.new do
  if Ractor.main? # false - correctly identifies sub-Ractor
    puts "Actually in main Ractor"
  else
    puts "Correctly identified as sub-Ractor"
  end
end
Developers often misunderstand the method's behavior when Ractors are nested. A sub-Ractor created within another sub-Ractor still returns false for Ractor.main?, as only the initial Ractor of the Ruby process is considered "main".
# Nested Ractor creation
main_ractor_check = Ractor.main? # true
first_sub = Ractor.new do
  first_level = Ractor.main? # false
  second_sub = Ractor.new do
    second_level = Ractor.main? # still false, not true
    puts "Second level main?: #{second_level}"
    puts "Current Ractor: #{Ractor.current}"
  end
  second_sub.take
  puts "First level main?: #{first_level}"
end
first_sub.take
puts "Original main?: #{main_ractor_check}"
Performance pitfalls occur when developers call Ractor.main? repeatedly in tight loops instead of caching the result. While the method call is fast, unnecessary repeated calls create overhead in performance-critical code.
# INEFFICIENT: Repeated calls in loop
def process_items(items)
  items.each do |item|
    if Ractor.main? # Called for every item
      perform_main_ractor_operation(item)
    else
      perform_sub_ractor_operation(item)
    end
  end
end
# EFFICIENT: Cache the result
def process_items_efficiently(items)
  is_main = Ractor.main? # Called once
  items.each do |item|
    if is_main
      perform_main_ractor_operation(item)
    else
      perform_sub_ractor_operation(item)
    end
  end
end
A subtle pitfall involves relying on Ractor.main? to choose an exception handling strategy. Sub-Ractors have different error propagation behavior: an exception that is not rescued inside a sub-Ractor terminates that Ractor and only reaches the main Ractor, wrapped in Ractor::RemoteError, when the result is taken.
# PROBLEMATIC: Assuming error handling works the same
def risky_operation
  if Ractor.main?
    begin
      dangerous_main_operation
    rescue StandardError => e
      log_error(e) # Main Ractor can log to files/external systems
    end
  else
    begin
      dangerous_sub_operation
    rescue StandardError => e
      log_error(e) # Sub-Ractor might not have access to logging
      raise # Re-raise so the failure surfaces when the main Ractor takes the result
    end
  end
end
# BETTER: Different error strategies per Ractor type
def safer_operation
  if Ractor.main?
    begin
      dangerous_main_operation
    rescue StandardError => e
      ErrorLogger.log_to_file(e)
      NotificationService.alert_admin(e)
    end
  else
    begin
      dangerous_sub_operation
    rescue StandardError => e
      # Sub-Ractor sends error info back to main for handling
      Ractor.yield({ error: e.class.name, message: e.message })
    end
  end
end
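For completeness, an unrescued failure in a sub-Ractor does not vanish silently; it re-surfaces as a Ractor::RemoteError when the main Ractor takes the result. A small sketch:
# Unrescued sub-Ractor exceptions surface at take, wrapped in RemoteError
failing = Ractor.new { raise 'boom' }
begin
  failing.take
rescue Ractor::RemoteError => e
  puts "Worker failed: #{e.cause.message}" # the original exception is the cause
end
# Output: Worker failed: boom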
Performance & Memory
Ractor.main? executes with minimal overhead since it queries a simple flag maintained by Ruby's Ractor implementation. The method performs a direct lookup without traversing data structures or performing complex calculations, making it suitable for performance-sensitive code paths.
Memory allocation is virtually zero for Ractor.main? calls. The method returns a boolean primitive that doesn't create new objects or trigger garbage collection pressure. This makes it safe to use in high-frequency operations without memory concerns.
require 'benchmark'
# Benchmark the method call overhead
iterations = 1_000_000
time = Benchmark.measure do
  iterations.times { Ractor.main? }
end
puts "#{iterations} calls in #{time.real} seconds"
puts "#{(iterations / time.real).to_i} calls per second"
# Typical output: ~50-100 million calls per second
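To illustrate the zero-allocation claim, a rough check with GC statistics (counts are approximate and vary slightly by Ruby version):
# Compare allocated-object counts before and after many calls
GC.disable
before = GC.stat(:total_allocated_objects)
100_000.times { Ractor.main? }
after = GC.stat(:total_allocated_objects)
GC.enable
puts "Objects allocated across 100,000 calls: #{after - before}"
# Expect a value at or near zero; true/false are not new objects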
When designing concurrent systems, the performance characteristics of Ractor.main? enable efficient branching logic without introducing bottlenecks. The method's speed makes it practical for use in inner loops and frequently-called methods.
class DataProcessor
  def initialize
    @is_main_ractor = Ractor.main?
    @batch_size = @is_main_ractor ? 10_000 : 1_000
  end
  def process_stream(data_stream)
    batch = []
    data_stream.each do |item|
      # Fast check allows different batch sizes per Ractor type
      if batch.size >= @batch_size
        process_batch(batch)
        batch.clear
      end
      batch << transform_item(item)
    end
    process_batch(batch) unless batch.empty?
  end
  private
  def transform_item(item)
    # Transformation logic that might differ by Ractor
    @is_main_ractor ? item.upcase : item.downcase
  end
  def process_batch(batch)
    puts "Processing #{batch.size} items in #{Ractor.current}"
  end
end
Memory usage patterns differ between the main Ractor and sub-Ractors. All Ractors share a single process heap, but Ruby's isolation model restricts which objects each sub-Ractor may touch, while the main Ractor can reach program-wide state such as global variables and unshareable constants. Ractor.main? helps tailor memory allocation strategies to each context.
class MemoryOptimizedCache
  def initialize
    if Ractor.main?
      # Main Ractor can use larger caches and shared structures
      @cache = {}
      @max_size = 100_000
      @shared_data = load_reference_data
    else
      # Sub-Ractor uses smaller, isolated cache
      @cache = {}
      @max_size = 1_000
      @shared_data = nil # Cannot access shared data
    end
  end
  def get(key)
    return @cache[key] if @cache.key?(key)
    value = if Ractor.main?
              expensive_lookup_with_shared_data(key)
            else
              simple_computation(key)
            end
    # Implement cache eviction when size limit reached
    @cache.shift if @cache.size >= @max_size # Remove oldest entry
    @cache[key] = value
  end
  private
  def load_reference_data
    # Only available in main Ractor - large dataset
    Array.new(50_000) { |i| "reference_#{i}" }
  end
  def expensive_lookup_with_shared_data(key)
    @shared_data.find { |item| item.include?(key.to_s) } || "default"
  end
  def simple_computation(key)
    # Computation that doesn't require shared data
    key.to_s.reverse.upcase
  end
end
Profiling shows that Ractor.main? performs consistently across platforms. In CRuby the method is implemented in C and reads the current Ractor's metadata directly, avoiding Ruby-level method dispatch overhead.
# Performance comparison: cached vs repeated calls
def benchmark_caching_strategy(iterations)
  # Strategy 1: Cache the result
  cached_time = Benchmark.measure do
    is_main = Ractor.main?
    iterations.times do |i|
      result = is_main ? "main_#{i}" : "sub_#{i}"
    end
  end
  # Strategy 2: Call method each time
  repeated_time = Benchmark.measure do
    iterations.times do |i|
      result = Ractor.main? ? "main_#{i}" : "sub_#{i}"
    end
  end
  puts "Cached approach: #{cached_time.real} seconds"
  puts "Repeated calls: #{repeated_time.real} seconds"
  puts "Overhead: #{((repeated_time.real - cached_time.real) * 1000).round(2)}ms"
end
benchmark_caching_strategy(1_000_000)
In production systems processing large datasets, the performance characteristics of Ractor.main? enable efficient work distribution strategies. Main Ractors can coordinate resource-intensive operations while sub-Ractors handle CPU-bound computations.
class ParallelImageProcessor
  def initialize
    @worker_count = Ractor.main? ? 8 : 1 # Main Ractor coordinates multiple workers
  end
  def process_images(image_paths)
    if Ractor.main?
      # Main Ractor distributes work across sub-Ractors
      chunk_size = (image_paths.length / @worker_count.to_f).ceil
      chunks = image_paths.each_slice(chunk_size).to_a
      workers = chunks.map do |chunk|
        Ractor.new(chunk) do |paths|
          # Processing is inlined: the block cannot call the coordinator's
          # private instance methods from inside the sub-Ractor
          paths.map do |path|
            sleep(0.01) # Represents actual processing time
            { path: path, processed_at: Time.now, ractor: Ractor.current }
          end
        end
      end
      workers.flat_map(&:take)
    else
      # Sub-Ractor processes its paths sequentially
      image_paths.map do |path|
        sleep(0.01)
        { path: path, processed_at: Time.now, ractor: Ractor.current }
      end
    end
  end
end
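A hypothetical run of the processor above with synthetic file names (no real images are read, since the work is simulated):
processor = ParallelImageProcessor.new
results = processor.process_images(Array.new(16) { |i| "image_#{i}.png" })
workers_used = results.map { |r| r[:ractor] }.uniq.length
puts "Processed #{results.length} images across #{workers_used} Ractors"
# => "Processed 16 images across 8 Ractors"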
Reference
Method Signature
Method | Parameters | Returns | Description
---|---|---|---
Ractor.main? | None | Boolean | Returns true if called from the main Ractor, false otherwise
Return Values
Value | Context | Description
---|---|---
true | Main Ractor | The initial Ractor created when the Ruby program starts
false | Sub-Ractor | Any Ractor created through Ractor.new
Related Methods
Method | Relationship | Description
---|---|---
Ractor.current | Identity | Returns the current Ractor object
Ractor.new | Creation | Creates a new sub-Ractor (Ractor.main? returns false inside it)
Thread.main | Threading | Main thread within any Ractor
Thread.current | Threading | Current thread within the current Ractor
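The related methods can express the same check; a brief sketch comparing the predicate with Ractor.current and Ractor.main:
# Equivalent ways to identify the main Ractor
puts Ractor.main?                      # direct predicate
puts Ractor.current == Ractor.main     # comparison against the main Ractor object
Ractor.new do
  puts Ractor.main?                    # => false in a sub-Ractor
  puts Ractor.current == Ractor.main   # => false as well
end.take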
Usage Patterns
Pattern | Code | Use Case
---|---|---
Conditional Setup | if Ractor.main? | Different initialization per Ractor type
Resource Access | @db = DB.new if Ractor.main? | Restrict resource access to main Ractor
Caching | @is_main = Ractor.main? | Store result for repeated checks
Validation | raise unless Ractor.main? | Ensure operations run in main Ractor
Common Combinations
# Check both Ractor and Thread context
if Ractor.main? && Thread.current == Thread.main
  # Main thread of main Ractor
end
# Ractor type with error handling
begin
  operation
rescue => e
  if Ractor.main?
    log_to_file(e)
  else
    Ractor.yield(error: e.message)
  end
end
# Performance-optimized pattern
class Service
  def initialize
    @is_main_ractor = Ractor.main?
  end
  def process
    @is_main_ractor ? main_logic : sub_logic
  end
end
Error Conditions
Scenario | Behavior | Notes
---|---|---
Method call from any context | Always succeeds | No exceptions possible
Threading context | Thread-safe | All threads in a Ractor return the same value
Ractor shutdown | Method unavailable | Ractor no longer accessible
Implementation Notes
- Method implemented in C for performance
- No garbage collection impact
- Thread-safe across all Ruby threading models
- Consistent behavior across Ruby versions 3.0+
- Zero memory allocation per call
Decision Matrix
Requirement | Use Ractor.main? | Alternative
---|---|---
Identify main Ractor | ✓ | Ractor.current == Ractor.main
Identify main thread | ✗ | Thread.current == Thread.main
Branch on Ractor type | ✓ | Store a Ractor reference at startup
Performance-critical code | ✓ (cache result) | Pre-compute in initialization
Error handling strategy | ✓ | Different rescue blocks
Compatibility Matrix
Ruby Version | Support | Notes
---|---|---
3.0.0+ | Full | Ractor feature introduction
2.7.x | None | Ractor not available
JRuby | Partial | Check implementation status
TruffleRuby | Partial | Check implementation status
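On implementations or versions where the matrix above is uncertain, a defensive feature check avoids NameError or NoMethodError; this guard is a suggestion, not an official API pattern:
# Detect Ractor support before relying on Ractor.main?
if defined?(Ractor) && Ractor.respond_to?(:main?)
  puts "Running in main Ractor: #{Ractor.main?}"
else
  puts "Ractor.main? is not available on this Ruby"
end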