Overview
Ruby provides built-in retry functionality through the retry keyword and comprehensive exception handling via begin/rescue/ensure blocks. The retry mechanism allows code to re-execute when exceptions occur, while exception handling enables graceful error management and recovery.
The exception system centers on the Exception class hierarchy. StandardError serves as the base class for most recoverable exceptions, while Exception encompasses both recoverable and non-recoverable errors. Ruby's retry functionality works within rescue blocks to re-attempt failed operations.
begin
risky_operation
rescue StandardError
retry
end
Variables defined outside the begin block persist across retry attempts, which allows conditional retry logic based on attempt counts, exception types, or external conditions. Exception handling supports multiple rescue clauses, ensure blocks for cleanup, and else clauses for success scenarios.
attempts = 0
begin
attempts += 1
network_call
rescue Timeout::Error => e
retry if attempts < 3
raise
rescue Net::HTTPError => e
log_error(e)
raise
ensure
cleanup_resources
end
Ruby's exception handling integrates with blocks, methods, and classes. Methods can use implicit begin blocks, while classes can rescue exceptions during definition. The system supports custom exception classes, exception re-raising, and backtrace manipulation.
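As a brief sketch of those integration points (the method and constant names here are illustrative, not taken from the text above): a method can attach rescue and ensure clauses directly to def via its implicit begin, and a class body can rescue errors raised while the class is being defined.

  require 'json'

  # Methods have an implicit begin: rescue/ensure attach directly to def
  def fetch_config(path)
    JSON.parse(File.read(path))
  rescue Errno::ENOENT, JSON::ParserError => e
    warn "Falling back to defaults: #{e.message}"
    {}
  ensure
    warn "fetch_config finished for #{path}"
  end

  class Report
    # Class and module bodies accept rescue/ensure clauses directly
    WIDTH = Integer(ENV.fetch('REPORT_WIDTH', '80'))
  rescue ArgumentError
    WIDTH = 80
  end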
Basic Usage
The retry keyword re-executes the begin block from the beginning. Without conditions, retry creates infinite loops when exceptions persist. Adding attempt counters prevents infinite retry cycles.
attempts = 0
begin
attempts += 1
unstable_network_request
rescue StandardError => e
retry if attempts < 3
raise
end
Exception handling uses rescue clauses to catch specific exception types. Multiple rescue blocks handle different exception categories. The => operator captures exception objects for inspection.
begin
file_content = File.read('data.txt')
JSON.parse(file_content)
rescue Errno::ENOENT => e
puts "File not found: #{e.message}"
{}
rescue JSON::ParserError => e
puts "Invalid JSON: #{e.message}"
{}
end
The ensure block executes regardless of success or failure. Use ensure for resource cleanup, logging, or state restoration. The ensure block runs once, when control finally leaves the begin block; it does not run between individual retry attempts.
file = nil
begin
file = File.open('important.log', 'a')
file.write("Processing started\n")
risky_processing
rescue StandardError => e
file.write("Error: #{e.message}\n") if file
raise
ensure
file&.close
end
The else clause executes only when no exceptions occur. Place success-only code in the else block so it is not re-executed by retry and so any exceptions it raises are not caught by the preceding rescue clauses.
begin
result = calculate_complex_formula
rescue StandardError
retry if attempts_remaining?
else
save_result(result)
notify_success
end
Advanced Usage
Conditional retry patterns examine exception types, attempt counts, and external state to determine retry behavior. Create retry strategies that adapt to different failure scenarios.
class RetryHandler
def initialize(max_attempts: 3, backoff: :linear)
@max_attempts = max_attempts
@backoff = backoff
@attempts = 0
end
def call
@attempts += 1
yield
rescue StandardError => e
if should_retry?(e)
sleep(calculate_delay)
retry
else
raise
end
end
private
def should_retry?(exception)
@attempts < @max_attempts &&
retryable_exception?(exception)
end
def retryable_exception?(exception)
[Timeout::Error, Errno::ECONNREFUSED, Net::HTTPFatalError].any? do |type|
exception.is_a?(type)
end
end
def calculate_delay
case @backoff
when :linear then @attempts
when :exponential then 2 ** (@attempts - 1)
else 1
end
end
end
# Usage
RetryHandler.new(max_attempts: 5, backoff: :exponential).call do
fetch_external_data
end
Custom exception classes enable domain-specific error handling and retry logic. Define exception hierarchies that reflect application error categories.
class ServiceError < StandardError; end
class TransientError < ServiceError; end
class PermanentError < ServiceError; end
class RateLimitError < TransientError
attr_reader :retry_after
def initialize(message, retry_after)
super(message)
@retry_after = retry_after
end
end
attempts = 0
begin
attempts += 1
api_call
rescue RateLimitError => e
sleep(e.retry_after)
retry
rescue TransientError
retry if attempts < 3
raise
rescue PermanentError => e
log_permanent_failure(e)
raise
end
Circuit breaker patterns combine retry logic with failure tracking to prevent cascading failures. Implement state machines that open circuits after repeated failures.
class CircuitBreaker
def initialize(failure_threshold: 5, timeout: 30)
@failure_threshold = failure_threshold
@timeout = timeout
@failure_count = 0
@last_failure_time = nil
@state = :closed
end
def call(&block)
case @state
when :open
if Time.now - @last_failure_time > @timeout
@state = :half_open
attempt_call(&block)
else
raise CircuitOpenError, "Circuit breaker is open"
end
when :half_open, :closed
attempt_call(&block)
end
end
private
def attempt_call(&block)
result = block.call
record_success
result
rescue StandardError => e
record_failure
raise
end
def record_success
@failure_count = 0
@state = :closed
end
def record_failure
@failure_count += 1
@last_failure_time = Time.now
@state = :open if @failure_count >= @failure_threshold
end
end
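A brief usage sketch follows; CircuitOpenError is referenced above but never defined, so it appears here as an assumed application-defined error, and flaky_api_call is a hypothetical operation.

  # Assumed application-defined error raised by CircuitBreaker#call above
  class CircuitOpenError < StandardError; end

  breaker = CircuitBreaker.new(failure_threshold: 3, timeout: 10)

  begin
    result = breaker.call { flaky_api_call }  # flaky_api_call is hypothetical
    puts "Result: #{result}"
  rescue CircuitOpenError => e
    puts "Skipping call, circuit is open: #{e.message}"
  rescue StandardError => e
    puts "Call failed and was recorded by the breaker: #{e.message}"
  end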
Error Handling & Debugging
Exception objects contain valuable debugging information through attributes like message, backtrace, and cause. Access this information for comprehensive error logging and debugging.
begin
complex_operation
rescue StandardError => e
error_details = {
class: e.class.name,
message: e.message,
backtrace: e.backtrace.first(5),
cause: e.cause&.message,
time: Time.now.iso8601
}
logger.error("Operation failed: #{error_details}")
# Re-raise with additional context
raise e.class, "#{e.message} (attempt #{current_attempt})", e.backtrace
end
The cause attribute chains exceptions to preserve error context during re-raising. Ruby automatically sets cause when an exception is raised from inside a rescue block.
class DataProcessingError < StandardError; end
class ValidationError < StandardError; end
def process_data(raw_data)
parsed_data = JSON.parse(raw_data)
validate_data(parsed_data)
rescue JSON::ParserError => e
raise DataProcessingError, "Invalid data format"
rescue ValidationError => e
raise DataProcessingError, "Data validation failed"
end
begin
process_data(user_input)
rescue DataProcessingError => e
puts "Processing error: #{e.message}"
puts "Root cause: #{e.cause.class} - #{e.cause.message}"
end
Custom exception handling for retry scenarios requires careful state management. Track retry attempts, timing, and failure patterns for debugging and monitoring.
class RetryTracker
attr_reader :attempts, :exceptions, :total_duration
def initialize
@attempts = 0
@exceptions = []
@start_time = Time.now
@total_duration = 0
end
def execute(max_attempts: 3, &block)
loop do
@attempts += 1
attempt_start = Time.now
result = block.call
@total_duration = Time.now - @start_time
return result
rescue StandardError => e
@exceptions << {
attempt: @attempts,
exception: e,
duration: Time.now - attempt_start
}
if @attempts >= max_attempts
@total_duration = Time.now - @start_time
raise RetryExhaustedError.new(self)
end
sleep(calculate_backoff)
end
end
private
def calculate_backoff
[@attempts * 0.5, 10].min
end
end
class RetryExhaustedError < StandardError
attr_reader :tracker
def initialize(tracker)
@tracker = tracker
super(build_message)
end
private
def build_message
exceptions = @tracker.exceptions.map { |e| "#{e[:attempt]}: #{e[:exception].class}" }
"Failed after #{@tracker.attempts} attempts (#{@tracker.total_duration.round(2)}s): #{exceptions.join(', ')}"
end
end
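A short usage sketch for RetryTracker; flaky_service_call is a hypothetical operation standing in for whatever is being retried.

  tracker = RetryTracker.new

  begin
    # flaky_service_call is a hypothetical operation being retried
    tracker.execute(max_attempts: 4) { flaky_service_call }
    puts "Succeeded on attempt #{tracker.attempts}"
  rescue RetryExhaustedError => e
    puts e.message  # summarizes each recorded attempt
    e.tracker.exceptions.each do |entry|
      puts "  attempt #{entry[:attempt]}: #{entry[:exception].class}"
    end
  end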
Production Patterns
Web applications require robust retry patterns for external service calls, database operations, and queue processing. Implement retry logic that degrades gracefully under load.
class HttpRetryClient
def initialize(base_url, max_attempts: 3, timeout: 10)
@base_url = base_url
@max_attempts = max_attempts
@timeout = timeout
end
def get(path, **options)
attempt = 0
begin
attempt += 1
uri = URI("#{@base_url}#{path}")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = uri.scheme == 'https'
http.open_timeout = @timeout
http.read_timeout = @timeout
response = http.get(uri.request_uri)
handle_response(response)
rescue Net::OpenTimeout, Net::ReadTimeout, Errno::ECONNREFUSED => e
if attempt < @max_attempts
delay = exponential_backoff(attempt)
Rails.logger.warn("HTTP request failed, retrying in #{delay}s: #{e.message}")
sleep(delay)
retry
else
Rails.logger.error("HTTP request failed permanently: #{e.message}")
raise ServiceUnavailableError, "External service unavailable after #{@max_attempts} attempts"
end
rescue ServerError => e
if attempt < @max_attempts
delay = exponential_backoff(attempt)
Rails.logger.warn("Server error, retrying in #{delay}s: #{e.message}")
sleep(delay)
retry
else
raise
end
end
end
private
def handle_response(response)
case response.code.to_i
when 200..299
JSON.parse(response.body)
when 429
retry_after = response['Retry-After']&.to_i || 60
raise RateLimitError.new("Rate limited", retry_after)
when 400..499
# ClientError, ServerError, and ServiceUnavailableError are assumed to be
# application-defined StandardError subclasses; RateLimitError is defined above
raise ClientError, "Client error: #{response.code}"
else
raise ServerError, "Server error: #{response.code}"
end
end
def exponential_backoff(attempt)
[2 ** (attempt - 1), 30].min + rand(0.1..1.0)
end
end
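A brief usage sketch under the assumptions above (the API host and endpoint are placeholders):

  client = HttpRetryClient.new('https://api.example.com', max_attempts: 4, timeout: 5)

  begin
    users = client.get('/users')
  rescue RateLimitError => e
    Rails.logger.warn("Rate limited; retry after #{e.retry_after}s")
  rescue ServiceUnavailableError => e
    Rails.logger.error(e.message)
  end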
Background job processing requires retry strategies that handle various failure scenarios while preventing job queues from backing up with failing jobs.
class JobProcessor
MAX_ATTEMPTS = 5
RETRY_DELAYS = [1, 5, 25, 125, 625].freeze
def self.perform_with_retry(job_class, *args)
attempt = 0
begin
attempt += 1
job_class.new.perform(*args)
rescue StandardError => e
if attempt < MAX_ATTEMPTS && retryable_error?(e)
delay = RETRY_DELAYS[attempt - 1]
# Assumes a structured logger (e.g., SemanticLogger) that accepts a payload hash
logger.warn(
"Job failed (attempt #{attempt}/#{MAX_ATTEMPTS}), " \
"retrying in #{delay}s: #{e.message}",
job: job_class.name,
args: args,
error: e.class.name
)
sleep(delay)
retry
else
logger.error(
"Job failed permanently after #{attempt} attempts: #{e.message}",
job: job_class.name,
args: args,
error: e.class.name,
backtrace: e.backtrace.first(10)
)
raise JobFailedError.new("Job failed permanently", job_class, args, e)
end
end
end
private_class_method def self.retryable_error?(error)
case error
when Net::OpenTimeout, Net::ReadTimeout, Errno::ECONNREFUSED, Redis::TimeoutError
true
when ActiveRecord::Deadlocked, PG::TRDeadlockDetected
true
when RateLimitError
true
else
false
end
end
end
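For example, a hedged sketch with a hypothetical job class (UserMailer stands in for real delivery code):

  # Hypothetical job class whose perform method may fail transiently
  class SendWelcomeEmailJob
    def perform(user_id)
      UserMailer.welcome(user_id).deliver_now
    end
  end

  begin
    JobProcessor.perform_with_retry(SendWelcomeEmailJob, 42)
  rescue JobFailedError => e
    # Raised above once the retry budget is exhausted
    Rails.logger.error("Giving up: #{e.message}")
  end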
Database retry patterns handle connection failures, deadlocks, and temporary unavailability while maintaining data consistency.
module DatabaseRetry
def self.with_retry(max_attempts: 3, &block)
attempt = 0
begin
attempt += 1
ActiveRecord::Base.transaction(&block)
rescue ActiveRecord::Deadlocked => e
if attempt < max_attempts
Rails.logger.warn("Database deadlock, attempt #{attempt}: #{e.message}")
sleep(rand(0.1..0.5)) # Random jitter to avoid thundering herd
retry
else
raise
end
rescue ActiveRecord::ConnectionNotEstablished,
PG::ConnectionBad,
PG::UnableToSend => e
if attempt < max_attempts
Rails.logger.warn("Database connection failed, attempt #{attempt}: #{e.message}")
ActiveRecord::Base.clear_active_connections!
sleep(attempt * 2)
retry
else
raise DatabaseUnavailableError, "Database unavailable after #{max_attempts} attempts"
end
end
end
end
# Usage in models
class User < ApplicationRecord
def self.create_with_retry(attributes)
DatabaseRetry.with_retry do
create!(attributes)
end
end
end
Common Pitfalls
Infinite retry loops occur when exceptions persist and no exit conditions exist. Always include attempt limits, timeout mechanisms, or circuit breaker patterns.
# Dangerous - infinite loop potential
begin
unreliable_service_call
rescue StandardError
retry # Will retry forever if service is permanently down
end
# Safe - bounded retry with escalation
max_attempts = 3
attempt = 0
begin
attempt += 1
unreliable_service_call
rescue StandardError => e
if attempt < max_attempts
Rails.logger.warn("Service call failed, attempt #{attempt}: #{e.message}")
sleep(attempt * 2)
retry
else
# Escalate to error monitoring system
ErrorTracker.notify(e, context: { attempts: attempt })
raise ServicePermanentlyUnavailableError, "Service failed after #{attempt} attempts"
end
end
Variable scope issues arise when retry re-executes variable assignments. Variables initialized inside begin blocks reset on each retry attempt.
# Problematic - counter resets on each retry
begin
counter = 0 # Resets to 0 on each retry
counter += 1
risky_operation
rescue StandardError
retry if counter < 3 # counter is always 1
end
# Correct - counter persists across retries
counter = 0
begin
counter += 1
risky_operation
rescue StandardError
retry if counter < 3
end
Exception masking occurs when rescue blocks catch more exceptions than intended. Always rescue specific exception types rather than using bare rescue or rescuing Exception.
# Dangerous - masks all exceptions including system exits
begin
important_operation
rescue Exception # Catches SystemExit, NoMemoryError, etc.
retry
end
# Risky - a bare rescue catches every StandardError, hiding which failures are expected
begin
important_operation
rescue # Equivalent to rescue StandardError, but unclear
retry
end
# Safe - explicit exception types
begin
important_operation
rescue Net::OpenTimeout, Net::ReadTimeout, Errno::ECONNREFUSED => e
Rails.logger.warn("Network error: #{e.message}")
retry if attempt_count < 3
rescue JSON::ParserError => e
Rails.logger.error("Invalid response format: #{e.message}")
raise # Don't retry parsing errors
end
State corruption during retries occurs when partial operations complete before exceptions occur. Use transactions, compensating actions, or idempotent operations.
# Problematic - partial state changes on retry
def transfer_funds(from_account, to_account, amount)
begin
from_account.withdraw(amount) # This might succeed
to_account.deposit(amount) # This might fail and cause retry
rescue StandardError
retry # from_account already withdrawn, will withdraw again
end
end
# Correct - transactional retry
def transfer_funds(from_account, to_account, amount)
attempt = 0
begin
attempt += 1
ActiveRecord::Base.transaction do
from_account.withdraw(amount)
to_account.deposit(amount)
end
rescue ActiveRecord::StatementInvalid => e
if attempt < 3 && retryable_database_error?(e)
sleep(attempt * 0.5)
retry
else
raise
end
end
end
Memory leaks can occur in long-running retry loops when objects accumulate without garbage collection. Monitor memory usage in retry-heavy code paths.
# Memory leak potential - objects accumulate in retry loop
def process_large_dataset
begin
large_data = fetch_massive_dataset # New objects on each retry
process_data(large_data)
rescue StandardError
# large_data objects accumulate if retry happens frequently
retry
end
end
# Memory-conscious retry
def process_large_dataset(max_attempts: 5)
attempts = 0
begin
attempts += 1
large_data = fetch_massive_dataset
process_data(large_data)
rescue StandardError
large_data = nil # Drop the reference so the data can be collected
GC.start if attempts > 3 # Optionally force a GC pass under memory pressure
retry if attempts < max_attempts
raise
ensure
large_data = nil
end
end
Reference
Core Keywords and Methods
Keyword/Method | Usage | Description |
---|---|---|
begin | begin; code; end | Starts exception handling block |
rescue | rescue ExceptionClass => var | Catches specific exception types |
retry | retry | Re-executes begin block from start |
ensure | ensure; cleanup; end | Always executes, even during exceptions |
else | else; success_code; end | Executes only when no exceptions occur |
raise | raise, raise(msg), raise(Class, msg) | Raises exception |
Standard Exception Hierarchy
Exception
+-- NoMemoryError
+-- ScriptError
| +-- LoadError
| +-- NotImplementedError
| +-- SyntaxError
+-- SecurityError
+-- SignalException
| +-- Interrupt
+-- StandardError (default for rescue)
| +-- ArgumentError
| +-- EncodingError
| +-- IOError
| | +-- EOFError
| +-- LocalJumpError
| +-- NameError
| | +-- NoMethodError
| +-- RangeError
| | +-- FloatDomainError
| +-- RegexpError
| +-- RuntimeError (default for raise)
| +-- SystemCallError
| | +-- Errno::*
| +-- ThreadError
| +-- TypeError
| +-- ZeroDivisionError
+-- SystemExit
+-- SystemStackError
Exception Object Methods
Method | Returns | Description |
---|---|---|
#message | String | Exception message |
#backtrace | Array of String | Stack trace array |
#backtrace_locations | Array of Thread::Backtrace::Location | Detailed backtrace objects |
#cause | Exception or nil | Exception that caused this exception |
#full_message | String | Formatted exception with backtrace |
#inspect | String | Exception class and message |
#set_backtrace(array) | Array | Sets custom backtrace |
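A small sketch exercising several of these methods; the commented output is representative:

  begin
    begin
      Integer('not a number')
    rescue ArgumentError
      raise RuntimeError, 'conversion step failed'
    end
  rescue RuntimeError => e
    puts e.message                         # => "conversion step failed"
    puts e.cause.class                     # => ArgumentError
    puts e.backtrace_locations.first.to_s  # file:line of the raise site
    puts e.full_message(highlight: false, order: :top)
  end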
Common Network Exception Types
Exception/Response Class | Retry Recommended | Description |
---|---|---|
Net::OpenTimeout / Net::ReadTimeout | Yes | Request timeout |
Errno::ECONNREFUSED | Yes | Connection refused |
Errno::ECONNRESET | Yes | Connection reset by peer |
Net::HTTPServerError | Conditional | 5xx HTTP responses |
Net::HTTPTooManyRequests | Yes (with delay) | 429 rate limiting |
Net::HTTPBadRequest | No | 4xx client errors |
SocketError | Conditional | DNS/socket issues |
Retry Strategy Patterns
Pattern | Use Case | Implementation |
---|---|---|
Linear Backoff | Simple operations | sleep(attempt) |
Exponential Backoff | Network calls | sleep(2 ** attempt) |
Random Jitter | Preventing thundering herd | sleep(base + rand(jitter)) |
Circuit Breaker | Cascading failures | Track failure rate, open circuit |
Deadline-based | Time-sensitive operations | Check total elapsed time |
Exception-specific | Different error types | Rescue specific exception classes |
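The deadline-based pattern is the only one in this table not demonstrated earlier; a minimal sketch, assuming a hypothetical flaky_operation, follows:

  # Deadline-based retry: give up once a total time budget is spent,
  # regardless of how many attempts that allows.
  def with_deadline(seconds:, base_delay: 0.5)
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + seconds
    begin
      yield
    rescue StandardError
      remaining = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
      raise if remaining <= 0
      sleep([base_delay, remaining].min)
      retry
    end
  end

  with_deadline(seconds: 10) { flaky_operation }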
Rescue Clause Syntax
# Single exception type
rescue StandardError => e
# Multiple exception types
rescue TypeError, ArgumentError => e
# Multiple rescue clauses
rescue NetworkError => e
handle_network_error(e)
rescue Timeout::Error => e
handle_timeout_error(e)
rescue StandardError => e
handle_generic_error(e)
# Bare rescue (catches StandardError)
rescue => e
# Re-raising exceptions
rescue SomeError => e
log_error(e)
raise # Re-raises original exception
Ensure Block Behavior
Scenario | Ensure Block Executes |
---|---|
Normal completion | Yes |
Exception raised | Yes |
Return statement | Yes |
Break statement | Yes |
Next statement | Yes |
System exit | Yes |
Process kill | No |
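A small sketch illustrating the first rows of this table (ensure runs on normal completion, on an early return, and when an exception is rescued):

  def read_first_line(path)
    file = File.open(path)
    return file.gets          # ensure still runs before the value is returned
  rescue Errno::ENOENT
    nil                       # ensure also runs when an exception was rescued
  ensure
    file&.close
    puts "ensure ran for #{path}"
  end

  read_first_line('exists.txt')   # => first line, prints the ensure message
  read_first_line('missing.txt')  # => nil, still prints the ensure message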
Performance Considerations
Factor | Impact | Mitigation |
---|---|---|
Exception creation | High object allocation | Use specific exception types |
Backtrace generation | CPU intensive | Avoid exceptions in hot paths; limit printed frames with --backtrace-limit |
Deep call stacks | Memory usage | Monitor stack depth |
Retry loops | Exponential resource usage | Implement circuit breakers |
String interpolation in messages | Memory allocation | Use lazy evaluation |
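For the last row, one hedged approach is to defer message construction by overriding #message in a custom exception, so the string is only built when something actually reads it:

  class BulkImportError < StandardError
    def initialize(failed_rows)
      @failed_rows = failed_rows
      super()  # skip building an eager message string
    end

    # The expensive summary is only built when #message is actually called
    def message
      "Import failed for #{@failed_rows.size} rows: #{@failed_rows.take(3).inspect}..."
    end
  end

  raise BulkImportError.new([{ id: 1 }, { id: 2 }]) rescue warn $!.message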