Object Pooling

A comprehensive guide to implementing and managing object pools in Ruby for memory optimization and performance enhancement.

Overview

Object pooling manages a collection of pre-allocated, reusable objects to reduce the overhead of frequent object creation and garbage collection. Ruby applications benefit from object pooling when dealing with expensive-to-create objects like database connections, HTTP clients, or complex data structures that get instantiated repeatedly.

Ruby's object pooling implementations typically involve maintaining a pool of objects in various states: available for use, currently in use, or being cleaned up. The pool automatically manages object lifecycle, returning clean objects to callers and accepting used objects back for reuse or disposal.

The core pattern involves three main components: a pool manager that tracks object availability, factory methods that create new objects when needed, and cleanup procedures that reset object state between uses. Ruby's flexible object model and metaprogramming capabilities make implementing sophisticated pooling strategies straightforward.

class ConnectionPool
  def initialize(size: 5)
    @size = size
    @available = []
    @in_use = {}
  end

  def checkout
    connection = @available.pop || create_connection
    @in_use[connection.object_id] = connection
    connection
  end

  def checkin(connection)
    @in_use.delete(connection.object_id)
    @available.push(connection) if @available.size < @size
  end

  private

  def create_connection
    # Create expensive resource
    Object.new
  end
end
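A minimal checkout/checkin cycle with this class, reusing the placeholder connection from create_connection:

pool = ConnectionPool.new(size: 2)

conn = pool.checkout    # pops an available object or creates a new one
# ... use the connection ...
pool.checkin(conn)      # returns it for the next caller

# The next checkout reuses the same object instead of allocating
pool.checkout.equal?(conn)  # => true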

Object pools address memory pressure by reusing objects instead of creating new instances. This approach particularly benefits applications with predictable object usage patterns, high-frequency allocations, or resource constraints where object creation carries significant overhead.

# Without pooling - creates new objects constantly
1000.times do
  expensive_object = ExpensiveClass.new
  expensive_object.process_data(data)
end

# With pooling - reuses objects
pool = ObjectPool.new(ExpensiveClass, size: 10)
1000.times do
  pool.with_object do |obj|
    obj.process_data(data)
  end
end

Basic Usage

Object pooling in Ruby typically starts with a pool class that manages object lifecycle and availability. The basic pattern involves checking out objects from the pool, using them for specific tasks, then returning them for future use.

require 'set'

class GenericObjectPool
  def initialize(object_class, size: 10, &factory_block)
    @object_class = object_class
    @size = size
    @factory = factory_block || proc { object_class.new }
    @available = []
    @in_use = Set.new
    @mutex = Mutex.new
  end

  def with_object
    object = checkout
    begin
      yield object
    ensure
      checkin(object)
    end
  end

  def checkout
    @mutex.synchronize do
      object = @available.pop || @factory.call
      @in_use.add(object)
      reset_object(object)
      object
    end
  end

  def checkin(object)
    @mutex.synchronize do
      @in_use.delete(object)
      @available.push(object) if @available.size < @size
    end
  end

  private

  def reset_object(object)
    object.reset if object.respond_to?(:reset)
  end
end

The with_object method provides automatic checkout and checkin through block execution, ensuring objects return to the pool even when exceptions occur. This pattern prevents resource leaks and maintains pool integrity.
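A short demonstration, using the GenericObjectPool above: the exception propagates to the caller, but the ensure clause has already checked the object back in.

pool = GenericObjectPool.new(Array, size: 2)

begin
  pool.with_object { |list| raise "processing failed" }
rescue RuntimeError
  # The error surfaced here, yet the Array still went back into the pool
end

A pooled object in a more realistic shape, a fake database connection, looks like this: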

class DatabaseConnection
  def initialize(host, port)
    @host = host
    @port = port
    @connected = false
  end

  def connect
    @connected = true
    puts "Connected to #{@host}:#{@port}"
  end

  def query(sql)
    raise "Not connected" unless @connected
    "Results for: #{sql}"
  end

  def reset
    @connected = false
  end
end

pool = GenericObjectPool.new(DatabaseConnection, size: 5) do
  conn = DatabaseConnection.new("localhost", 5432)
  conn.connect
  conn
end

# Usage with automatic cleanup
pool.with_object do |conn|
  results = conn.query("SELECT * FROM users")
  puts results
end

Pool sizing depends on expected concurrent usage and object creation costs. Smaller pools conserve memory but may cause blocking when all objects are in use. Larger pools reduce blocking but increase memory overhead and potential resource waste.

class HttpClientPool < GenericObjectPool
  def initialize(base_url, size: 8)
    require 'net/http'
    super(nil, size: size) do
      uri = URI(base_url)
      http = Net::HTTP.new(uri.host, uri.port)
      http.use_ssl = uri.scheme == 'https'
      http.start
      http
    end
  end

  private

  def reset_object(http_client)
    # HTTP connections typically don't need reset
    # but could implement connection health checks here
  end
end

# Create pool for API client
api_pool = HttpClientPool.new("https://api.example.com", size: 5)

# Make concurrent requests
threads = 10.times.map do |i|
  Thread.new do
    api_pool.with_object do |http|
      response = http.get("/endpoint/#{i}")
      puts "Response #{i}: #{response.code}"
    end
  end
end

threads.each(&:join)

Object state management becomes critical when objects maintain state between uses. The reset mechanism ensures objects return to a known state, preventing data leakage between pool uses.
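As a sketch of such a reset, a hypothetical pooled report builder can truncate its internal buffer so no output from one caller leaks into the next:

require 'stringio'

class ReportBuilder
  def initialize
    @buffer = StringIO.new
  end

  def add_line(text)
    @buffer.puts(text)
  end

  def result
    @buffer.string
  end

  def reset
    # Truncate in place rather than reallocating, keeping the buffer warm
    @buffer.truncate(0)
    @buffer.rewind
  end
end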

Performance & Memory

Object pooling delivers performance benefits by eliminating allocation and garbage collection overhead for frequently used objects. The impact varies significantly based on object complexity, allocation frequency, and Ruby's garbage collector behavior.

Memory allocation profiling reveals object pooling's effectiveness. Objects that require significant initialization time or memory footprint show the greatest improvement when pooled. Ruby's generational garbage collector performs better when fewer objects reach the old generation, making pooling particularly valuable for long-running applications.

require 'benchmark'
require 'json'

class ExpensiveObject
  def initialize
    @data = Hash.new { |h, k| h[k] = [] }
    @computed_values = {}
    @json_parser = JSON
    populate_initial_data
  end

  def populate_initial_data
    1000.times { |i| @data[:items] << "item_#{i}" }
    calculate_derived_values
  end

  def calculate_derived_values
    @computed_values[:sum] = @data[:items].size
    @computed_values[:hash] = @data[:items].hash
  end

  def process(input)
    @data[:recent] = input
    @computed_values[:processed] = Time.now
    "Processed: #{input}"
  end

  def reset
    # Remove only per-use state; the expensive initial data stays intact,
    # which is exactly the work pooling lets us avoid repeating
    @data.delete(:recent)
    @computed_values.delete(:processed)
  end
end

# Performance comparison
iterations = 1000

# Without pooling
no_pool_time = Benchmark.realtime do
  iterations.times do |i|
    obj = ExpensiveObject.new
    obj.process("data_#{i}")
  end
end

# With pooling
pool = GenericObjectPool.new(ExpensiveObject, size: 10)
pool_time = Benchmark.realtime do
  iterations.times do |i|
    pool.with_object do |obj|
      obj.process("data_#{i}")
    end
  end
end

puts "Without pooling: #{no_pool_time.round(3)}s"
puts "With pooling: #{pool_time.round(3)}s"
puts "Speedup: #{(no_pool_time / pool_time).round(2)}x"

Memory profiling shows object pooling reduces total allocations and garbage collection pressure. The trade-off involves holding objects in memory longer, potentially increasing resident memory usage while reducing allocation churn.
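A quick way to verify this directly is to diff GC.stat counters around each workload; total_allocated_objects is a built-in, monotonically increasing counter:

def allocations_during
  GC.start  # settle the heap before sampling
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

pool = GenericObjectPool.new(ExpensiveObject, size: 10)
without = allocations_during { 100.times { ExpensiveObject.new.process("x") } }
with_pool = allocations_during { 100.times { pool.with_object { |o| o.process("x") } } }
puts "Allocations without pooling: #{without}"
puts "Allocations with pooling:    #{with_pool}"

The MemoryTrackingPool below builds this kind of sampling into the pool itself.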

class MemoryTrackingPool < GenericObjectPool
  def initialize(object_class, size: 10, &factory_block)
    super
    @checkout_count = 0
    @creation_count = 0
    @memory_samples = []

    # Wrap the factory so every object creation gets counted
    original_factory = @factory
    @factory = proc do
      @creation_count += 1
      original_factory.call
    end
  end

  def checkout
    @checkout_count += 1
    sample_memory if @checkout_count % 100 == 0
    super
  end

  def stats
    @mutex.synchronize do
      {
        checkouts: @checkout_count,
        objects_created: @creation_count,
        pool_size: @available.size,
        in_use: @in_use.size,
        memory_trend: memory_trend
      }
    end
  end

  private

  def sample_memory
    # GC.stat ships with Ruby; no gem or external tool required
    @memory_samples << GC.stat[:total_allocated_objects]
  end

  def memory_trend
    return "insufficient data" if @memory_samples.size < 2
    recent = @memory_samples.last(10)
    slope = (recent.last - recent.first).to_f / recent.size
    slope > 0 ? "increasing" : "stable"
  end
end

Pool sizing directly impacts performance characteristics. Undersized pools cause blocking when all objects are checked out, while oversized pools waste memory on unused objects. The optimal size depends on concurrent usage patterns and object lifecycle duration.

class AdaptivePool < GenericObjectPool
  def initialize(object_class, initial_size: 5, max_size: 20, &factory_block)
    super(object_class, size: max_size, &factory_block)
    @initial_size = initial_size
    @max_size = max_size
    @checkout_times = []
    @expansion_threshold = 0.1 # seconds
  end

  def checkout
    start_time = Time.now
    object = super
    wait_time = Time.now - start_time
    
    @checkout_times << wait_time
    @checkout_times.shift if @checkout_times.size > 100
    
    expand_pool if should_expand?
    object
  end

  private

  def should_expand?
    return false if @available.size + @in_use.size >= @max_size
    return false if @checkout_times.size < 20
    
    avg_wait = @checkout_times.sum / @checkout_times.size
    avg_wait > @expansion_threshold
  end

  def expand_pool
    @mutex.synchronize do
      new_objects = [@max_size - (@available.size + @in_use.size), 2].min
      new_objects.times { @available << @factory.call }
    end
  end
end

Thread Safety & Concurrency

Object pooling in concurrent environments requires careful synchronization to prevent race conditions and ensure thread safety. Multiple threads accessing the same pool can corrupt internal state without proper locking mechanisms.

The primary concurrency challenge involves coordinating access to the pool's internal collections. Race conditions occur when threads simultaneously modify the available objects list or in-use tracking structures. Ruby's Mutex provides the basic synchronization primitive for pool operations.

require 'timeout'

class ThreadSafePool
  def initialize(object_class, size: 10, timeout: 30, **_options, &factory_block)
    @object_class = object_class
    @size = size
    @factory = factory_block || proc { object_class.new }
    @available = []
    @in_use = {}
    @mutex = Mutex.new
    @condition = ConditionVariable.new
    @timeout = timeout # seconds; unrecognized options are accepted and ignored
  end

  def checkout(timeout: @timeout)
    @mutex.synchronize do
      deadline = Time.now + timeout
      
      while @available.empty? && current_pool_size >= @size
        remaining = deadline - Time.now
        raise Timeout::Error, "Pool checkout timeout" if remaining <= 0
        @condition.wait(@mutex, remaining)
      end
      
      object = @available.pop || create_object
      @in_use[Thread.current] = object
      object
    end
  end

  def checkin(object = nil)
    @mutex.synchronize do
      object ||= @in_use[Thread.current]
      return unless object
      
      @in_use.delete(Thread.current)
      reset_object_state(object)
      
      if @available.size < @size
        @available.push(object)
        @condition.signal
      end
    end
  end

  def with_object(timeout: @timeout)
    object = checkout(timeout: timeout)
    begin
      yield object
    ensure
      checkin(object)
    end
  end

  private

  def current_pool_size
    @available.size + @in_use.size
  end

  def create_object
    @factory.call
  end

  def reset_object_state(object)
    object.reset if object.respond_to?(:reset)
  rescue => e
    # Log reset errors but don't crash the pool
    warn "Object reset failed: #{e.message}"
  end
end

Thread-local storage provides an alternative approach for certain pooling scenarios. Each thread maintains its own object cache, eliminating synchronization overhead at the cost of potentially higher memory usage.

class ThreadLocalPool
  def initialize(object_class, &factory_block)
    @object_class = object_class
    @factory = factory_block || proc { object_class.new }
    @thread_objects = {}
    @mutex = Mutex.new
  end

  def with_object
    object = get_thread_object
    reset_object_state(object)
    begin
      yield object
    ensure
      # Objects remain with thread - no checkin needed
    end
  end

  def clear_thread_cache
    @mutex.synchronize do
      @thread_objects.delete(Thread.current)
    end
  end

  def total_objects
    @mutex.synchronize { @thread_objects.size }
  end

  private

  def get_thread_object
    @mutex.synchronize do
      @thread_objects[Thread.current] ||= @factory.call
    end
  end

  def reset_object_state(object)
    object.reset if object.respond_to?(:reset)
  end
end
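Each thread transparently receives its own instance, so after the first checkout no cross-thread coordination happens at all. Reusing the ExpensiveObject class from earlier:

pool = ThreadLocalPool.new(ExpensiveObject)

threads = 4.times.map do
  Thread.new do
    pool.with_object do |obj|
      # The same thread always sees the same object; no other thread shares it
      obj.process("thread #{Thread.current.object_id}")
    end
  end
end
threads.each(&:join)

puts pool.total_objects  # => 4, one object per thread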

Concurrent access patterns reveal different synchronization strategies. High-contention scenarios benefit from fine-grained locking or lock-free data structures, while low-contention cases work well with simple mutex protection.

class LockFreePool
  def initialize(object_class, size: 10, &factory_block)
    require 'concurrent'
    
    @object_class = object_class
    @factory = factory_block || proc { object_class.new }
    @available = Concurrent::Array.new
    @size = size
    
    # Pre-populate pool
    size.times { @available << @factory.call }
  end

  def with_object
    object = checkout_nonblocking
    object ||= @factory.call # Create if pool empty
    
    begin
      reset_object_state(object)
      yield object
    ensure
      checkin_nonblocking(object)
    end
  end

  def checkout_nonblocking
    @available.pop
  end

  def checkin_nonblocking(object)
    @available.push(object) if @available.size < @size
  end

  def size
    @available.size
  end

  private

  def reset_object_state(object)
    object.reset if object.respond_to?(:reset)
  end
end

Deadlock prevention becomes important when pooled objects themselves acquire locks or when multiple pools interact. Consistent lock ordering and timeout mechanisms help avoid deadlock scenarios.
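A minimal sketch of consistent lock ordering, assuming the ThreadSafePool above, the DatabaseConnection class from earlier, and a hypothetical CacheClient: every caller acquires the pools in the same order and bounds each wait, so two threads can never each hold one pool while waiting forever on the other.

DB_POOL    = ThreadSafePool.new(nil, size: 5) { DatabaseConnection.new("localhost", 5432) }
CACHE_POOL = ThreadSafePool.new(nil, size: 5) { CacheClient.new("localhost", 6379) }

def with_db_and_cache
  # Fixed acquisition order: database first, then cache - never the reverse
  DB_POOL.with_object(timeout: 5) do |db|
    CACHE_POOL.with_object(timeout: 5) do |cache|
      yield db, cache
    end
  end
end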

Production Patterns

Production deployments of object pools require monitoring, configuration management, and integration with application frameworks. Pool health metrics inform capacity planning and performance optimization decisions.

Web applications commonly use connection pooling for database access, HTTP clients, and external service integrations. Pool configuration must balance resource utilization with response time requirements under varying load conditions.

class ProductionPool < ThreadSafePool
  def initialize(object_class, **options, &factory_block)
    @metrics = {
      checkouts: 0,
      checkins: 0,
      timeouts: 0,
      creation_count: 0,
      reset_failures: 0,
      max_wait_time: 0,
      total_wait_time: 0
    }
    @health_check_interval = options.delete(:health_check_interval) || 300
    @last_health_check = Time.now
    @unhealthy_objects = Set.new
    
    super(object_class, **options, &factory_block)
    start_health_monitor if @health_check_interval > 0
  end

  def checkout(timeout: @timeout)
    start_time = Time.now
    
    begin
      object = super(timeout: timeout)
      @metrics[:checkouts] += 1
      record_wait_time(Time.now - start_time)
      object
    rescue Timeout::Error
      @metrics[:timeouts] += 1
      raise
    end
  end

  def checkin(object = nil)
    super(object)
    @metrics[:checkins] += 1
  end

  def health_status
    @mutex.synchronize do
      {
        pool_size: current_pool_size,
        available: @available.size,
        in_use: @in_use.size,
        unhealthy: @unhealthy_objects.size,
        metrics: @metrics.dup,
        average_wait_time: calculate_average_wait_time,
        utilization: calculate_utilization
      }
    end
  end

  def reset_metrics!
    @mutex.synchronize do
      @metrics.each_key { |key| @metrics[key] = 0 }
    end
  end

  private

  def create_object
    object = super
    @metrics[:creation_count] += 1
    object
  end

  def reset_object_state(object)
    # Reset directly rather than via super: the base class implementation
    # rescues and logs failures, which would hide them from these metrics
    object.reset if object.respond_to?(:reset)
  rescue => e
    @metrics[:reset_failures] += 1
    @unhealthy_objects.add(object)
    raise
  end

  def record_wait_time(wait_time)
    @metrics[:total_wait_time] += wait_time
    @metrics[:max_wait_time] = [wait_time, @metrics[:max_wait_time]].max
  end

  def calculate_average_wait_time
    return 0 if @metrics[:checkouts] == 0
    @metrics[:total_wait_time] / @metrics[:checkouts]
  end

  def calculate_utilization
    return 0 if current_pool_size == 0
    @in_use.size.to_f / current_pool_size
  end

  def start_health_monitor
    Thread.new do
      loop do
        sleep(@health_check_interval)
        perform_health_check
      end
    rescue => e
      warn "Health monitor error: #{e.message}"
    end
  end

  def perform_health_check
    @mutex.synchronize do
      @unhealthy_objects.clear # Reset unhealthy tracking
      @available.select! do |object|
        healthy = object_healthy?(object)
        @unhealthy_objects.add(object) unless healthy
        healthy
      end
      @last_health_check = Time.now
    end
  end

  def object_healthy?(object)
    object.respond_to?(:healthy?) ? object.healthy? : true
  end
end

Configuration management allows pools to adapt to different deployment environments. Development environments might use smaller pools with aggressive health checking, while production uses larger pools with relaxed monitoring.

class ConfigurablePool
  DEFAULT_CONFIG = {
    size: 10,
    timeout: 30,
    health_check_interval: 300,
    max_idle_time: 3600,
    validate_on_checkout: false,
    validate_on_checkin: true
  }.freeze

  def self.from_config(object_class, config = {}, &factory_block)
    merged_config = DEFAULT_CONFIG.merge(config)
    
    case merged_config[:strategy]
    when :thread_local
      ThreadLocalPool.new(object_class, &factory_block)
    when :lock_free
      LockFreePool.new(object_class, size: merged_config[:size], &factory_block)
    else
      ProductionPool.new(object_class, **merged_config, &factory_block)
    end
  end

  def self.rails_database_pool
    config = Rails.application.config.database_configuration[Rails.env]
    pool_config = {
      size: config['pool'] || 5,
      timeout: config['timeout'] || 5,
      health_check_interval: Rails.env.production? ? 300 : 60
    }
    
    from_config(ActiveRecord::Base, pool_config) do
      ActiveRecord::Base.connection
    end
  end

  def self.redis_pool(redis_url, **options)
    require 'redis'
    
    from_config(Redis, **options) do
      Redis.new(url: redis_url)
    end
  end
end
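For example, environment-specific settings can live in one hash and flow through from_config; the POOL_SETTINGS constant and APP_ENV variable here are illustrative:

POOL_SETTINGS = {
  'development' => { size: 3,  timeout: 5,  health_check_interval: 60 },
  'production'  => { size: 25, timeout: 10, health_check_interval: 300 }
}.freeze

env  = ENV.fetch('APP_ENV', 'development')
pool = ConfigurablePool.from_config(DatabaseConnection, POOL_SETTINGS.fetch(env)) do
  DatabaseConnection.new("localhost", 5432).tap(&:connect)
end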

Monitoring integration provides visibility into pool performance and capacity utilization. Metrics collection helps identify bottlenecks and optimize pool sizing for changing workload patterns.

class MonitoredPool < ProductionPool
  def initialize(object_class, metrics_reporter: nil, **options, &factory_block)
    @metrics_reporter = metrics_reporter || default_metrics_reporter
    @reporting_interval = options.delete(:reporting_interval) || 60
    
    super(object_class, **options, &factory_block)
    start_metrics_reporting
  end

  def checkout(timeout: @timeout)
    start_time = Time.now
    object = super(timeout: timeout)
    
    @metrics_reporter.timing('pool.checkout_time', Time.now - start_time)
    @metrics_reporter.increment('pool.checkout')
    
    object
  end

  def checkin(object = nil)
    super(object)
    @metrics_reporter.increment('pool.checkin')
  end

  private

  def default_metrics_reporter
    # Simple metrics reporter - replace with StatsD, Prometheus, etc.
    @default_reporter ||= Class.new do
      def timing(metric, value)
        puts "[METRIC] #{metric}: #{value}s"
      end
      
      def increment(metric, value = 1)
        puts "[METRIC] #{metric}: +#{value}"
      end
      
      def gauge(metric, value)
        puts "[METRIC] #{metric}: #{value}"
      end
    end.new
  end

  def start_metrics_reporting
    Thread.new do
      loop do
        sleep(@reporting_interval)
        report_pool_metrics
      end
    rescue => e
      warn "Metrics reporting error: #{e.message}"
    end
  end

  def report_pool_metrics
    status = health_status
    @metrics_reporter.gauge('pool.size', status[:pool_size])
    @metrics_reporter.gauge('pool.available', status[:available])
    @metrics_reporter.gauge('pool.in_use', status[:in_use])
    @metrics_reporter.gauge('pool.utilization', status[:utilization])
    @metrics_reporter.gauge('pool.average_wait_time', status[:average_wait_time])
  end
end

Common Pitfalls

Object state management represents the most frequent source of pooling errors. Objects retain state between pool uses, causing data leakage, incorrect behavior, or security vulnerabilities when state cleanup fails or proves incomplete.

class ProblematicBankAccount
  def initialize
    @balance = 0
    @transactions = []
    @user_id = nil
  end

  def login(user_id)
    @user_id = user_id
    load_user_data
  end

  def withdraw(amount)
    raise "Not logged in" unless @user_id
    @balance -= amount
    @transactions << { type: :withdrawal, amount: amount, time: Time.now }
  end

  def reset
    # PROBLEMATIC: Incomplete reset
    @balance = 0
    # Missing: @transactions.clear and @user_id = nil
  end

  private

  def load_user_data
    # Simulate loading balance and transactions
    @balance = 1000
    @transactions = []
  end
end

# Demonstrates the problem
pool = GenericObjectPool.new(ProblematicBankAccount, size: 2)

# User A uses the account
pool.with_object do |account|
  account.login("user_a")
  account.withdraw(100)
  puts "User A balance: #{account.instance_variable_get(:@balance)}" # 900
end

# The next checkout hands back the same object; reset zeroed the balance
# but left the transaction log and user id behind
pool.with_object do |account|
  puts "Leftover user: #{account.instance_variable_get(:@user_id)}" # "user_a"
  puts "Leftover transactions: #{account.instance_variable_get(:@transactions).size}" # 1 - User A's withdrawal
end

The solution requires comprehensive state reset that clears all instance variables or returns objects to known initial states. Implementing proper reset methods prevents state bleeding between pool uses.

class SecureBankAccount
  def initialize
    @balance = 0
    @transactions = []
    @user_id = nil
    @session_data = {}
  end

  def login(user_id)
    @user_id = user_id
    load_user_data
  end

  def withdraw(amount)
    raise "Not logged in" unless @user_id
    @balance -= amount
    @transactions << { type: :withdrawal, amount: amount, time: Time.now }
  end

  def reset
    # Complete state cleanup
    @balance = 0
    @transactions.clear
    @user_id = nil
    @session_data.clear
    
    # Reset any other stateful components
    cleanup_connections if respond_to?(:cleanup_connections, true)
  end

  def valid_state?
    @user_id.nil? && @balance == 0 && @transactions.empty? && @session_data.empty?
  end

  private

  def load_user_data
    @balance = 1000
    @transactions = []
  end
end
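For classes whose initialize fully defines the clean state, a generic reset can be built with metaprogramming: remove every instance variable, then re-run the constructor. A sketch, assuming a zero-argument initialize; Reinitializable and PooledSession are illustrative names:

module Reinitializable
  # Wipe all instance state, then rebuild it exactly as a fresh object would
  def reset
    instance_variables.each { |ivar| remove_instance_variable(ivar) }
    send(:initialize)
  end
end

class PooledSession
  include Reinitializable

  def initialize
    @user_id = nil
    @session_data = {}
  end
end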

Object validation during pool operations catches state management failures before they cause problems. Validation on checkout ensures clean objects, while validation on checkin detects incomplete cleanup.

class ValidatingPool < ThreadSafePool
  def initialize(object_class, validate_on_checkout: true, validate_on_checkin: true, **options, &factory_block)
    @validate_on_checkout = validate_on_checkout
    @validate_on_checkin = validate_on_checkin
    super(object_class, **options, &factory_block)
  end

  def checkout(timeout: @timeout)
    object = super(timeout: timeout)
    
    if @validate_on_checkout && object.respond_to?(:valid_state?)
      unless object.valid_state?
        # Object has invalid state - remove from pool
        @mutex.synchronize { @in_use.delete(Thread.current) }
        raise "Object failed checkout validation: #{object.inspect}"
      end
    end
    
    object
  end

  def checkin(object = nil)
    if @validate_on_checkin && object.respond_to?(:valid_state?)
      # Reset before validating so the check sees the cleaned state; the
      # repeat reset in the base class is harmless for idempotent resets
      reset_object_state(object)

      unless object.valid_state?
        # Don't return invalid objects to pool
        @mutex.synchronize { @in_use.delete(Thread.current) }
        warn "Object failed checkin validation, discarding: #{object.inspect}"
        return
      end
    end

    super(object)
  end
end

Resource leaks occur when objects hold external resources like file handles, network connections, or locks that reset methods fail to release. Pool objects must implement comprehensive cleanup to prevent resource exhaustion.

class ResourceLeakExample
  def initialize
    @file_handles = []
    @network_connections = []
    @mutex = Mutex.new
  end

  def process_file(filename)
    @mutex.synchronize do
      file = File.open(filename, 'r')
      @file_handles << file
      file.read.upcase
    end
  end

  def make_request(url)
    require 'net/http'
    uri = URI(url)
    http = Net::HTTP.new(uri.host, uri.port)
    @network_connections << http
    http.start
    response = http.get(uri.path)
    response.body
  end

  def reset
    # PROBLEMATIC: Leaks file handles and connections
    @file_handles.clear  # Files remain open!
    @network_connections.clear  # Connections remain active!
  end
end

class ProperResourceManagement
  def initialize
    @file_handles = []
    @network_connections = []
    @mutex = Mutex.new
  end

  def process_file(filename)
    @mutex.synchronize do
      file = File.open(filename, 'r')
      @file_handles << file
      file.read.upcase
    end
  end

  def make_request(url)
    require 'net/http'
    uri = URI(url)
    http = Net::HTTP.new(uri.host, uri.port)
    @network_connections << http
    http.start
    response = http.get(uri.path)
    response.body
  end

  def reset
    # Properly close all resources
    @file_handles.each(&:close)
    @file_handles.clear
    
    @network_connections.each do |conn|
      conn.finish if conn.started?
    end
    @network_connections.clear
  end
end

Exception handling during reset operations requires careful consideration. Reset failures can corrupt pool state or leave objects in unusable conditions. Robust pools isolate reset failures and remove problematic objects from circulation.

class RobustPool < ValidatingPool
  def checkin(object = nil)
    return super unless object

    begin
      # Reset directly so failures surface here instead of being swallowed
      # by the base class's logging-only rescue
      object.reset if object.respond_to?(:reset)
      super(object)
    rescue => reset_error
      # Reset failed - remove object from pool permanently
      @mutex.synchronize { @in_use.delete(Thread.current) }

      # Log the failure for debugging
      log_reset_failure(object, reset_error)

      # Don't propagate reset errors to caller
      nil
    end
  end

  private

  def log_reset_failure(object, error)
    warn "Object reset failed, removing from pool: #{object.class}##{object.object_id}"
    warn "Reset error: #{error.class}: #{error.message}"
    warn error.backtrace.first(5).join("\n  ") if error.backtrace
  end
end

Pool sizing errors create performance problems and resource waste. Undersized pools cause blocking and degraded response times, while oversized pools consume memory unnecessarily and may hold stale objects too long.

class AdaptiveSizingPool < RobustPool
  def initialize(object_class, min_size: 2, max_size: 20, target_utilization: 0.8, **options, &factory_block)
    @min_size = min_size
    @max_size = max_size
    @target_utilization = target_utilization
    @size_adjustments = []
    @last_adjustment = Time.now
    
    super(object_class, size: max_size, **options, &factory_block)
    
    # Pre-populate with minimum objects
    @mutex.synchronize do
      @min_size.times { @available << create_object }
    end
  end

  def checkout(timeout: @timeout)
    object = super(timeout: timeout)
    consider_size_adjustment
    object
  end

  private

  def calculate_utilization
    # RobustPool's ancestry does not include ProductionPool, so compute here
    total = current_pool_size
    return 0 if total == 0
    @in_use.size.to_f / total
  end

  def consider_size_adjustment
    return if Time.now - @last_adjustment < 60 # Adjust at most once per minute

    utilization = calculate_utilization
    current_size = current_pool_size

    if utilization > @target_utilization && current_size < @max_size
      expand_pool
    elsif utilization < @target_utilization * 0.5 && current_size > @min_size
      contract_pool
    end
  end

  def expand_pool
    @mutex.synchronize do
      new_size = [current_pool_size + 2, @max_size].min
      while current_pool_size < new_size
        @available << create_object
      end
      @last_adjustment = Time.now
    end
  end

  def contract_pool
    @mutex.synchronize do
      target_size = [current_pool_size - 1, @min_size].max
      while @available.size > 0 && current_pool_size > target_size
        @available.pop
      end
      @last_adjustment = Time.now
    end
  end
end

Reference

Core Pool Interface

Method | Parameters | Returns | Description
#checkout(timeout: 30) | timeout (Numeric) | Object | Retrieves object from pool with optional timeout
#checkin(object) | object (Object) | nil | Returns object to pool after use
#with_object(&block) | block (Proc) | Object | Automatic checkout/checkin with block execution
#size | None | Integer | Current total pool size (available + in use)
#available_count | None | Integer | Number of objects available for checkout
#in_use_count | None | Integer | Number of objects currently checked out

Configuration Options

Option | Type | Default | Description
size | Integer | 10 | Maximum number of objects in pool
timeout | Numeric | 30 | Seconds to wait for available object
factory_block | Proc | object_class.new | Custom object creation logic
validate_on_checkout | Boolean | false | Validate object state before checkout
validate_on_checkin | Boolean | true | Validate object state on checkin
health_check_interval | Integer | 300 | Seconds between health checks (0 to disable)

Pool State Methods

Method | Parameters | Returns | Description
#health_status | None | Hash | Comprehensive pool health information
#reset_metrics! | None | nil | Resets all collected metrics
#clear_thread_cache | None | nil | Clears thread-local object cache (ThreadLocalPool only)
#total_objects | None | Integer | Total objects across all threads (ThreadLocalPool only)

Object Requirements

Objects used in pools should implement these optional methods:

Method | Parameters | Returns | Description
#reset | None | nil | Restore object to clean initial state
#valid_state? | None | Boolean | Check if object state is valid
#healthy? | None | Boolean | Health check for pool monitoring

Exception Types

Exception | Trigger | Recovery
Timeout::Error | Pool checkout timeout exceeded | Retry with larger timeout or expand pool
ArgumentError | Invalid pool configuration | Fix configuration parameters
StandardError | Object reset/validation failure | Object removed from pool automatically

Health Status Fields

{
  pool_size: 10,           # Total objects (available + in use)
  available: 7,            # Objects ready for checkout
  in_use: 3,              # Objects currently checked out
  unhealthy: 0,           # Objects failing health checks
  utilization: 0.3,       # Ratio of in_use / pool_size
  average_wait_time: 0.01, # Average checkout wait in seconds
  metrics: {
    checkouts: 1523,      # Total checkout operations
    checkins: 1520,       # Total checkin operations
    timeouts: 3,          # Checkout timeouts
    creation_count: 10,   # Objects created
    reset_failures: 2,    # Reset operation failures
    max_wait_time: 1.2,   # Longest checkout wait time
    total_wait_time: 15.3 # Sum of all wait times
  }
}

Pool Type Comparison

Pool Type | Thread Safety | Lock Overhead | Memory Usage | Use Case
ThreadSafePool | Full | Moderate | Moderate | General concurrent access
ThreadLocalPool | Thread-isolated | None | High | High-contention scenarios
LockFreePool | Atomic operations | Low | Low | Performance-critical paths
AdaptivePool | Full | Moderate | Dynamic | Variable load patterns

Configuration Examples

# Database connection pool
db_pool = ProductionPool.new(DatabaseConnection, 
  size: 20, 
  timeout: 10,
  validate_on_checkin: true
) { DatabaseConnection.new(DATABASE_URL) }

# HTTP client pool with monitoring
http_pool = MonitoredPool.new(HttpClient,
  size: 15,
  health_check_interval: 120,
  metrics_reporter: StatsDClient.new
) { HttpClient.new(base_url: API_BASE_URL) }

# Thread-local cache for compute-intensive objects
compute_pool = ThreadLocalPool.new(ExpensiveCalculator) do
  ExpensiveCalculator.new.tap(&:precompute_lookup_tables)
end