
Caching Strategies

Techniques and patterns for implementing efficient data caching in Ruby applications using memoization, Rails.cache, Redis, and custom cache stores.


Overview

Caching strategies in Ruby provide mechanisms to store frequently accessed data in memory or persistent storage to reduce computational overhead and database queries. Ruby applications implement caching through several approaches: instance variable memoization, class-level caches, Rails cache framework, and external cache stores like Redis or Memcached.

The core caching concept involves storing the result of expensive operations and returning cached values on subsequent requests. Ruby's caching ecosystem includes built-in memoization patterns, Rails ActiveSupport cache framework, and integration with external cache systems.

# Basic memoization pattern
def expensive_calculation
  @expensive_calculation ||= perform_complex_operation
end

# Rails cache usage
Rails.cache.fetch("user_stats_#{user.id}", expires_in: 1.hour) do
  calculate_user_statistics(user)
end

# Redis integration
$redis.setex("session_#{session_id}", 3600, session_data.to_json)

Ruby caching operates at multiple levels: method-level memoization for single request optimization, application-level caching for data shared across requests, and distributed caching for multi-server environments. The choice of caching strategy depends on data volatility, access patterns, memory constraints, and scalability requirements.

Cache stores in Ruby implement a consistent interface through ActiveSupport::Cache::Store, allowing applications to switch between memory stores, file stores, database stores, and distributed cache systems without changing application code.
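
The sketch below illustrates that shared interface: the same write/fetch/read/delete calls run unchanged against different backends (the store choices here are arbitrary).

require 'active_support'
require 'active_support/cache'

stores = [
  ActiveSupport::Cache::MemoryStore.new,
  ActiveSupport::Cache::FileStore.new('tmp/cache'),
  ActiveSupport::Cache::NullStore.new
]

stores.each do |store|
  store.write('greeting', 'hello', expires_in: 60)
  store.fetch('answer') { 42 }   # generates and caches on a miss
  store.read('greeting')         # 'hello' (nil on NullStore)
  store.delete('greeting')
end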

Basic Usage

Memoization represents the simplest caching pattern in Ruby, storing method results in instance variables to avoid repeated computation within the same object lifecycle.

class DataProcessor
  def initialize(data)
    @raw_data = data
  end

  def processed_data
    @processed_data ||= begin
      # Expensive processing operation
      @raw_data.map { |item| complex_transformation(item) }
    end
  end

  def summary_stats
    @summary_stats ||= {
      total: processed_data.size,
      average: processed_data.sum.to_f / processed_data.size,
      median: calculate_median(processed_data)
    }
  end

  private

  def complex_transformation(item)
    # Simulate expensive operation
    sleep(0.001)
    item * 2 + rand(100)
  end
end
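
One caveat with ||=: it re-runs the computation whenever the stored result is nil or false, because both are falsy. Methods that can legitimately return those values should use a defined? guard instead (the "Defined Check" pattern in the Reference tables). A minimal sketch, with expensive_flag_lookup standing in for any costly call:

class FeatureChecker
  def beta_enabled?
    # defined? caches nil/false correctly; ||= would re-query every call
    return @beta_enabled if defined?(@beta_enabled)
    @beta_enabled = expensive_flag_lookup
  end
end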

Rails applications use Rails.cache for application-level caching that persists across requests. The cache expands arrays and Active Record objects into keys automatically, honors expiration times, and generates missing values through fetch blocks.

class UserController < ApplicationController
  def profile
    @user = User.find(params[:id])
    
    # Cache user profile data for 30 minutes
    @profile_data = Rails.cache.fetch("user_profile_#{@user.id}", expires_in: 30.minutes) do
      {
        posts_count: @user.posts.count,
        followers_count: @user.followers.count,
        recent_activity: @user.recent_activities.limit(10).to_a
      }
    end
  end
  
  def dashboard
    # Cache expensive dashboard query
    @dashboard_stats = Rails.cache.fetch("dashboard_#{current_user.id}_#{Date.current}", expires_in: 1.hour) do
      calculate_dashboard_statistics(current_user)
    end
  end

  private

  def calculate_dashboard_statistics(user)
    {
      total_revenue: user.orders.sum(:total),
      monthly_growth: calculate_monthly_growth(user),
      top_products: user.orders.joins(:products).group('products.name').sum(:quantity)
    }
  end
end

External cache stores like Redis provide distributed caching capabilities for multi-server deployments. Ruby applications connect to Redis through the redis gem, often combined with connection_pool to share connections safely across threads.

require 'redis'
require 'json'

class CacheManager
  def initialize
    @redis = Redis.new(url: ENV['REDIS_URL'])
  end

  def get(key)
    value = @redis.get(key)
    JSON.parse(value) if value
  end

  def set(key, value, expires_in: nil)
    serialized = JSON.generate(value)
    if expires_in
      @redis.setex(key, expires_in, serialized)
    else
      @redis.set(key, serialized)
    end
  end

  def delete(key)
    @redis.del(key)
  end

  def exists?(key)
    # redis-rb >= 4.2 returns true/false from exists?
    @redis.exists?(key)
  end
end

# Usage in application
cache = CacheManager.new
cache.set("user_preferences_#{user.id}", user.preferences.to_h, expires_in: 3600)
preferences = cache.get("user_preferences_#{user.id}")

Class-level caching stores data at the class level, sharing cached values across all instances of a class. This pattern works well for configuration data or expensive class-level computations.

class ConfigurationManager
  @cache = {}

  def self.get_setting(key)
    @cache[key] ||= fetch_setting_from_database(key)
  end

  def self.clear_cache
    @cache.clear
  end

  def self.refresh_setting(key)
    @cache.delete(key)
    get_setting(key)
  end

  # A bare `private` does not hide singleton methods, so mark the
  # class method private explicitly
  def self.fetch_setting_from_database(key)
    # Database query to fetch configuration
    Setting.find_by(key: key)&.value
  end
  private_class_method :fetch_setting_from_database
end
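
Class-level caches like this are shared mutable state; under threaded servers such as Puma or Sidekiq, concurrent lookups can race on the hash. A minimal thread-safe variant guards access with a Mutex:

class ThreadSafeConfigurationManager
  @cache = {}
  @mutex = Mutex.new

  def self.get_setting(key)
    # Serialize access so concurrent threads neither duplicate the
    # database fetch nor corrupt the hash
    @mutex.synchronize do
      @cache[key] ||= fetch_setting_from_database(key)
    end
  end

  def self.fetch_setting_from_database(key)
    Setting.find_by(key: key)&.value
  end
  private_class_method :fetch_setting_from_database
end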

Advanced Usage

Multi-level caching combines different cache stores to optimize for both speed and persistence, creating cache hierarchies that maximize hit rates while maintaining data consistency.

class MultiLevelCache
  def initialize
    @memory_store = ActiveSupport::Cache::MemoryStore.new(size: 64.megabytes)
    @redis_store = ActiveSupport::Cache::RedisCacheStore.new(url: ENV['REDIS_URL'])
  end

  def fetch(key, expires_in: 1.hour, &block)
    # Try memory cache first
    value = @memory_store.read(key)
    return value if value

    # Try Redis cache second
    value = @redis_store.read(key)
    if value
      # Populate memory cache from Redis
      @memory_store.write(key, value, expires_in: expires_in)
      return value
    end

    # Generate value and cache at both levels
    value = block.call
    @redis_store.write(key, value, expires_in: expires_in)
    @memory_store.write(key, value, expires_in: expires_in)
    value
  end

  def invalidate(key)
    @memory_store.delete(key)
    @redis_store.delete(key)
  end
end

# Usage with complex data processing
class ReportGenerator
  def initialize
    @cache = MultiLevelCache.new
  end

  def generate_monthly_report(year, month)
    cache_key = "monthly_report_#{year}_#{month}"
    
    @cache.fetch(cache_key, expires_in: 24.hours) do
      {
        sales_data: calculate_monthly_sales(year, month),
        user_metrics: calculate_user_metrics(year, month),
        performance_indicators: calculate_kpis(year, month),
        generated_at: Time.current
      }
    end
  end
end

Fragment caching in Rails allows granular caching of view components, enabling selective cache invalidation and reducing rendering time for complex pages.

# In Rails controller
class ProductsController < ApplicationController
  def index
    @products = Product.includes(:category, :reviews)
    
    # Cache expensive aggregation queries
    @category_stats = Rails.cache.fetch("category_stats", expires_in: 2.hours) do
      Category.joins(:products).group(:name).count
    end
    
    @featured_products = Rails.cache.fetch("featured_products", expires_in: 1.hour) do
      Product.featured.includes(:images).limit(10).to_a
    end
  end

  def show
    @product = Product.find(params[:id])
    
    # Cache product recommendations
    @recommendations = Rails.cache.fetch("recommendations_#{@product.id}", expires_in: 4.hours) do
      RecommendationEngine.similar_products(@product, limit: 5)
    end
  end
end

# In Rails view (products/index.html.erb)
<% cache("products_grid_#{@products.cache_key_with_version}", expires_in: 30.minutes) do %>
  <div class="products-grid">
    <% @products.each do |product| %>
      <% cache(product, expires_in: 1.hour) do %>
        <div class="product-card">
          <%= render 'product_summary', product: product %>
        </div>
      <% end %>
    <% end %>
  </div>
<% end %>
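
Because the inner cache(product) call keys on the record's cache_key_with_version, these nested ("Russian doll") fragments expire automatically when a product is updated. Declaring touch: true on child associations extends this to nested records; a sketch assuming a Review model:

class Review < ApplicationRecord
  # Saving a review touches the parent product's updated_at,
  # changing its cache key and expiring fragments built on it
  belongs_to :product, touch: true
end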

Cache warming strategies proactively populate caches before user requests, reducing cache miss penalties during peak traffic periods.

class CacheWarmer
  def initialize
    @cache = Rails.cache
  end

  def warm_user_caches(user_ids)
    user_ids.each_slice(50) do |batch_ids|
      User.where(id: batch_ids).includes(:profile, :preferences).find_each do |user|
        warm_user_data(user)
      end
    end
  end

  def warm_product_caches
    Product.featured.includes(:images, :reviews).find_each do |product|
      # Warm product detail cache
      cache_key = "product_detail_#{product.id}"
      @cache.fetch(cache_key, expires_in: 2.hours, force: true) do
        {
          details: product.attributes,
          reviews_summary: calculate_reviews_summary(product),
          related_products: find_related_products(product)
        }
      end
      
      # Warm product recommendations
      @cache.fetch("recommendations_#{product.id}", expires_in: 4.hours, force: true) do
        RecommendationEngine.similar_products(product, limit: 10)
      end
    end
  end

  def warm_dashboard_caches
    AdminUser.active.find_each do |admin|
      cache_key = "admin_dashboard_#{admin.id}_#{Date.current}"
      @cache.fetch(cache_key, expires_in: 1.hour, force: true) do
        calculate_admin_dashboard_data(admin)
      end
    end
  end

  private

  def warm_user_data(user)
    user_cache_key = "user_profile_#{user.id}"
    @cache.fetch(user_cache_key, expires_in: 30.minutes, force: true) do
      {
        profile: user.profile.attributes,
        preferences: user.preferences.to_h,
        recent_orders: user.orders.recent.limit(5).to_a
      }
    end
  end
end

# Scheduled cache warming
class CacheWarmingJob
  include Sidekiq::Worker

  def perform
    warmer = CacheWarmer.new
    warmer.warm_product_caches
    warmer.warm_user_caches(active_user_ids)
    warmer.warm_dashboard_caches
  end

  private

  def active_user_ids
    User.where('last_login > ?', 7.days.ago).pluck(:id)
  end
end
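
Running the job on a recurring schedule keeps caches warm without manual triggering. A sketch using the sidekiq-cron gem (one scheduler option among several):

# config/initializers/sidekiq_cron.rb (assumes the sidekiq-cron gem)
Sidekiq::Cron::Job.create(
  name: 'Nightly cache warming',
  cron: '0 4 * * *', # 04:00 daily, ahead of peak traffic
  class: 'CacheWarmingJob'
)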

Performance & Memory

Cache performance analysis requires measuring hit rates, response times, and memory consumption to optimize cache strategies for specific application patterns.

class CacheAnalyzer
  def initialize(cache_store)
    @cache = cache_store
    @stats = {
      hits: 0,
      misses: 0,
      total_requests: 0,
      response_times: []
    }
  end

  def fetch_with_stats(key, **options, &block)
    start_time = Time.current
    @stats[:total_requests] += 1

    value = @cache.read(key)
    if value
      @stats[:hits] += 1
      record_response_time(start_time)
      return value
    end

    @stats[:misses] += 1
    value = block.call
    @cache.write(key, value, **options)
    record_response_time(start_time)
    value
  end

  def hit_rate
    return 0.0 if @stats[:total_requests] == 0
    (@stats[:hits].to_f / @stats[:total_requests] * 100).round(2)
  end

  def average_response_time
    return 0.0 if @stats[:response_times].empty?
    (@stats[:response_times].sum / @stats[:response_times].size * 1000).round(2)
  end

  def report
    {
      hit_rate: "#{hit_rate}%",
      total_requests: @stats[:total_requests],
      hits: @stats[:hits],
      misses: @stats[:misses],
      avg_response_time: "#{average_response_time}ms"
    }
  end

  private

  def record_response_time(start_time)
    @stats[:response_times] << (Time.current - start_time)
  end
end

# Benchmarking different cache strategies
require 'benchmark'

class CacheBenchmark
  def self.compare_strategies
    memory_cache = ActiveSupport::Cache::MemoryStore.new
    file_cache = ActiveSupport::Cache::FileStore.new('tmp/cache')
    
    data = (1..1000).to_a
    
    Benchmark.bm(20) do |x|
      x.report("Memory cache write:") do
        1000.times { |i| memory_cache.write("key_#{i}", data) }
      end
      
      x.report("File cache write:") do
        1000.times { |i| file_cache.write("key_#{i}", data) }
      end
      
      x.report("Memory cache read:") do
        1000.times { |i| memory_cache.read("key_#{i}") }
      end
      
      x.report("File cache read:") do
        1000.times { |i| file_cache.read("key_#{i}") }
      end
    end
  end
end

Memory management for caches requires monitoring cache size, implementing eviction policies, and preventing memory leaks from unbounded cache growth.

class MemoryAwareCache
  def initialize(max_size: 100, max_memory: 64.megabytes)
    @max_size = max_size
    @max_memory = max_memory
    @data = {}
    @access_times = {}
    @memory_usage = 0
  end

  def get(key)
    if @data.key?(key)
      @access_times[key] = Time.current
      @data[key]
    end
  end

  def set(key, value)
    value_size = calculate_object_size(value)
    
    # Evict if necessary
    while (@data.size >= @max_size || @memory_usage + value_size > @max_memory) && !@data.empty?
      evict_least_recently_used
    end
    
    # Store new value
    @data[key] = value
    @access_times[key] = Time.current
    @memory_usage += value_size
  end

  def stats
    {
      size: @data.size,
      max_size: @max_size,
      memory_usage: @memory_usage,
      max_memory: @max_memory,
      memory_utilization: (@memory_usage.to_f / @max_memory * 100).round(2)
    }
  end

  def clear
    @data.clear
    @access_times.clear
    @memory_usage = 0
  end

  private

  def evict_least_recently_used
    lru_key = @access_times.min_by { |_, time| time }[0]
    value_size = calculate_object_size(@data[lru_key])
    
    @data.delete(lru_key)
    @access_times.delete(lru_key)
    @memory_usage -= value_size
  end

  def calculate_object_size(object)
    # Approximate object size calculation
    case object
    when String
      object.bytesize
    when Array
      object.sum { |item| calculate_object_size(item) } + 40
    when Hash
      object.sum { |k, v| calculate_object_size(k) + calculate_object_size(v) } + 40
    when Integer
      8
    when Float
      8
    else
      Marshal.dump(object).bytesize
    end
  end
end
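
A short illustrative usage, showing LRU eviction once the size bound is hit:

cache = MemoryAwareCache.new(max_size: 2, max_memory: 1.megabyte)
cache.set('a', 'x' * 100)
cache.set('b', 'y' * 100)
cache.get('a')             # marks 'a' as most recently used
cache.set('c', 'z' * 100)  # at capacity, evicts 'b' (the LRU entry)
cache.stats                # => size: 2, with 'a' and 'c' retained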

Cache partitioning strategies distribute cache load across multiple cache instances to improve performance and reduce contention in high-concurrency environments.

require 'digest'

class PartitionedCache
  def initialize(partition_count: 8)
    @partition_count = partition_count
    @caches = Array.new(partition_count) { ActiveSupport::Cache::MemoryStore.new }
  end

  def get(key)
    partition = partition_for_key(key)
    @caches[partition].read(key)
  end

  def set(key, value, **options)
    partition = partition_for_key(key)
    @caches[partition].write(key, value, **options)
  end

  def delete(key)
    partition = partition_for_key(key)
    @caches[partition].delete(key)
  end

  def stats
    @caches.map.with_index do |cache, index|
      {
        partition: index,
        size: cache.instance_variable_get(:@data)&.size || 0
      }
    end
  end

  def clear
    @caches.each(&:clear)
  end

  private

  def partition_for_key(key)
    Digest::MD5.hexdigest(key.to_s).to_i(16) % @partition_count
  end
end
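
Hashing the key keeps routing deterministic: the same key always resolves to the same partition, so reads find what writes stored. For example:

cache = PartitionedCache.new(partition_count: 4)
cache.set('user:1', { name: 'Ada' }, expires_in: 5.minutes)
cache.get('user:1')  # the MD5 hash routes this to the same partition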

Production Patterns

Production cache deployment requires monitoring, health checks, and failover strategies to maintain application performance during cache failures or degradation.

class ProductionCacheManager
  def initialize
    @primary_cache = connect_to_primary_cache
    @fallback_cache = ActiveSupport::Cache::MemoryStore.new
    @metrics = MetricsCollector.new  # application-defined metrics collector
    @circuit_breaker = CircuitBreaker.new(failure_threshold: 5, timeout: 30.seconds) # application-defined breaker
  end

  def fetch(key, **options, &block)
    @circuit_breaker.call do
      start_time = Time.current
      
      begin
        result = @primary_cache.fetch(key, **options, &block)
        @metrics.record_cache_hit(key, Time.current - start_time)
        result
      rescue => e
        @metrics.record_cache_error(key, e.class.name)
        
        # Fallback to memory cache
        Rails.logger.warn "Cache error for key #{key}: #{e.message}"
        @fallback_cache.fetch(key, **options, &block)
      end
    end
  rescue CircuitBreakerOpen
    # Circuit breaker is open, use fallback cache directly
    @metrics.record_circuit_breaker_open(key)
    @fallback_cache.fetch(key, **options, &block)
  end

  def health_check
    test_key = "health_check_#{SecureRandom.hex(4)}"
    test_value = { timestamp: Time.current.to_f, check: 'ok' }
    
    begin
      @primary_cache.write(test_key, test_value, expires_in: 1.minute)
      retrieved = @primary_cache.read(test_key)
      @primary_cache.delete(test_key)
      
      {
        status: 'healthy',
        response_time: measure_response_time,
        last_check: Time.current
      }
    rescue => e
      {
        status: 'unhealthy',
        error: e.message,
        last_check: Time.current
      }
    end
  end

  def clear_namespace(namespace)
    pattern = "#{namespace}:*"
    
    if @primary_cache.respond_to?(:delete_matched)
      @primary_cache.delete_matched(pattern)
    else
      # Fallback for cache stores that don't support pattern matching;
      # find_keys_with_prefix is an application-maintained key index (not shown)
      keys_to_delete = find_keys_with_prefix(namespace)
      keys_to_delete.each { |key| @primary_cache.delete(key) }
    end
  end

  private

  def connect_to_primary_cache
    if ENV['REDIS_URL'].present?
      ActiveSupport::Cache::RedisCacheStore.new(
        url: ENV['REDIS_URL'],
        connect_timeout: 1.second,
        read_timeout: 1.second,
        write_timeout: 1.second,
        reconnect_attempts: 2
      )
    else
      ActiveSupport::Cache::MemoryStore.new(size: 256.megabytes)
    end
  end

  def measure_response_time
    start_time = Time.current
    @primary_cache.exist?('response_time_test')
    ((Time.current - start_time) * 1000).round(2)
  end
end

# Monitoring and alerting
class CacheMonitor
  def initialize(cache_manager)
    @cache_manager = cache_manager
    @alert_threshold_error_rate = 0.05 # 5%
    @alert_threshold_response_time = 100 # 100ms
  end

  def monitor
    health = @cache_manager.health_check
    metrics = collect_metrics
    
    check_error_rate(metrics[:error_rate])
    check_response_time(metrics[:avg_response_time])
    check_hit_rate(metrics[:hit_rate])
    
    {
      health: health,
      metrics: metrics,
      alerts: current_alerts
    }
  end

  private

  def collect_metrics
    # Implementation would integrate with your metrics system
    # (Prometheus, StatsD, CloudWatch, etc.)
    {
      hit_rate: calculate_hit_rate,
      error_rate: calculate_error_rate,
      avg_response_time: calculate_avg_response_time,
      memory_usage: calculate_memory_usage
    }
  end

  def check_error_rate(error_rate)
    if error_rate > @alert_threshold_error_rate
      trigger_alert("High cache error rate: #{(error_rate * 100).round(2)}%")
    end
  end

  def check_response_time(response_time)
    if response_time > @alert_threshold_response_time
      trigger_alert("High cache response time: #{response_time}ms")
    end
  end

  def trigger_alert(message)
    Rails.logger.error("CACHE ALERT: #{message}")
    # Integration with alerting system (PagerDuty, Slack, etc.)
  end
end

Rails integration patterns optimize cache usage within the Rails request/response cycle, including automatic cache key generation and intelligent invalidation.

# Application-level cache configuration
class Application < Rails::Application
  # Configure cache store based on environment
  config.cache_store = if Rails.env.production?
    # Wrap the store name and options in an array; a bare
    # `name, options` pair is not valid inside an if branch
    [:redis_cache_store, {
      url: ENV['REDIS_URL'],
      connect_timeout: 1,
      read_timeout: 1,
      write_timeout: 1,
      pool_size: ENV.fetch('RAILS_MAX_THREADS', 5).to_i,
      namespace: "myapp_#{Rails.env}"
    }]
  else
    [:memory_store, { size: 64.megabytes }]
  end
end

# Model-level caching with automatic invalidation
class Product < ApplicationRecord
  has_many :reviews
  belongs_to :category
  
  # Cache expensive associations
  def cached_reviews_summary
    Rails.cache.fetch("product_#{id}_reviews_summary", expires_in: 1.hour) do
      {
        average_rating: reviews.average(:rating).to_f.round(1),
        total_reviews: reviews.count,
        recent_reviews: reviews.recent.limit(5).includes(:user).to_a
      }
    end
  end

  # Cache complex calculations
  def popularity_score
    Rails.cache.fetch("product_#{id}_popularity", expires_in: 4.hours) do
      calculate_popularity_score
    end
  end

  # Automatic cache invalidation
  after_update :invalidate_caches
  after_destroy :invalidate_caches

  private

  def invalidate_caches
    Rails.cache.delete("product_#{id}_reviews_summary")
    Rails.cache.delete("product_#{id}_popularity")
    Rails.cache.delete("category_#{category_id}_featured_products")
  end

  def calculate_popularity_score
    # Complex calculation involving views, purchases, ratings
    (reviews.average(:rating).to_f * 0.4 + 
     normalized_view_count * 0.3 + 
     normalized_purchase_count * 0.3)
  end
end

# Service object with integrated caching
class RecommendationService
  def initialize(user)
    @user = user
    @cache_prefix = "recommendations_#{user.id}"
  end

  def similar_users
    Rails.cache.fetch("#{@cache_prefix}_similar_users", expires_in: 24.hours) do
      find_similar_users(@user)
    end
  end

  def recommended_products
    Rails.cache.fetch("#{@cache_prefix}_products", expires_in: 2.hours) do
      generate_recommendations(@user)
    end
  end

  def trending_in_category(category)
    Rails.cache.fetch("trending_#{category.id}", expires_in: 30.minutes) do
      calculate_trending_products(category)
    end
  end

  def invalidate_user_caches
    Rails.cache.delete_matched("#{@cache_prefix}_*")
  end
end

Common Pitfalls

Cache invalidation represents one of the most challenging aspects of caching, where stale data can lead to inconsistent application state and user confusion.

# PROBLEM: Race condition in cache invalidation
class ProblematicUserService
  def update_user_profile(user, attributes)
    user.update!(attributes)
    # Race condition: another request might read stale cache
    # between update and cache deletion
    Rails.cache.delete("user_profile_#{user.id}")
  end
end

# SOLUTION: Cache-aside pattern with proper ordering
class ImprovedUserService
  def update_user_profile(user, attributes)
    # Delete cache first to prevent serving stale data
    Rails.cache.delete("user_profile_#{user.id}")
    user.update!(attributes)
    
    # Optionally warm the cache immediately
    warm_user_profile_cache(user)
  end

  private

  def warm_user_profile_cache(user)
    Rails.cache.fetch("user_profile_#{user.id}", expires_in: 30.minutes) do
      generate_user_profile_data(user)
    end
  end
end
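
Within Rails models, an after_commit callback offers a complementary safeguard: the delete fires only once the transaction commits, so a concurrent reader can no longer re-populate the cache from pre-update data mid-transaction.

class User < ApplicationRecord
  # Runs after the transaction commits; deleting inside the
  # transaction would let concurrent readers re-cache the old row
  after_commit :expire_profile_cache, on: :update

  private

  def expire_profile_cache
    Rails.cache.delete("user_profile_#{id}")
  end
end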

Cache key collision occurs when different data types share the same cache namespace, leading to incorrect data retrieval and application errors.

# PROBLEM: Cache key collision between different data types
class BadCacheUsage
  def user_summary(user_id)
    Rails.cache.fetch(user_id) do  # Dangerous: using bare ID as key
      calculate_user_summary(user_id)
    end
  end

  def product_details(product_id)
    Rails.cache.fetch(product_id) do  # Collision if product_id == user_id!
      fetch_product_details(product_id)
    end
  end
end

# SOLUTION: Proper namespacing and key structure
class GoodCacheUsage
  def user_summary(user_id)
    Rails.cache.fetch("user_summary:#{user_id}:v1") do
      calculate_user_summary(user_id)
    end
  end

  def product_details(product_id)
    Rails.cache.fetch("product_details:#{product_id}:v1") do
      fetch_product_details(product_id)
    end
  end

  # Include relevant factors in cache key
  def personalized_recommendations(user_id, page = 1)
    cache_key = [
      'recommendations',
      user_id,
      'page', page,
      'v2',  # Version for cache busting
      Date.current.strftime('%Y-%m-%d')  # Date-based invalidation
    ].join(':')
    
    Rails.cache.fetch(cache_key, expires_in: 2.hours) do
      generate_recommendations(user_id, page)
    end
  end
end

Memory leaks from unbounded caches can crash applications by consuming all available memory, particularly with memoization patterns that never evict old data.

# PROBLEM: Unbounded cache growth
class MemoryLeakingCache
  def initialize
    @cache = {}  # Never clears, grows indefinitely
  end

  def expensive_calculation(input)
    @cache[input] ||= perform_calculation(input)
  end

  def self.global_cache
    @global_cache ||= {}  # Class-level memory leak
  end
end

# SOLUTION: Bounded cache with size limits and eviction
class SafeCache
  def initialize(max_size: 1000)
    @cache = {}
    @access_order = []
    @max_size = max_size
  end

  def fetch(key)
    if @cache.key?(key)
      # Move to end of access order (most recently used)
      @access_order.delete(key)
      @access_order.push(key)
      return @cache[key]
    end

    value = yield
    store(key, value)
    value
  end

  private

  def store(key, value)
    # Evict least recently used items if at capacity
    while @cache.size >= @max_size && !@cache.empty?
      lru_key = @access_order.shift
      @cache.delete(lru_key)
    end

    @cache[key] = value
    @access_order.push(key)
  end
end

# Better approach: Use existing cache stores with built-in eviction
class ProductionSafeCache
  def initialize
    @cache = ActiveSupport::Cache::MemoryStore.new(
      size: 64.megabytes,  # Automatic memory-based eviction
      expires_in: 1.hour   # Automatic time-based expiration
    )
  end

  def expensive_calculation(input)
    @cache.fetch("calc:#{input}") { perform_calculation(input) }
  end
end

Thundering herd problems occur when multiple processes simultaneously attempt to regenerate the same expired cache entry, causing database overload and performance degradation.

# PROBLEM: Multiple processes regenerating same cache simultaneously
class ThunderingHerdExample
  def popular_data
    Rails.cache.fetch('popular_data', expires_in: 1.hour) do
      # If this expires during high traffic, many processes
      # will hit the database simultaneously
      expensive_database_query
    end
  end
end

# SOLUTION: Lock-based cache regeneration
class ThunderingHerdSolution
  def popular_data
    cache_key = 'popular_data'
    lock_key = "#{cache_key}:lock"
    
    # Try to get cached value first
    cached = Rails.cache.read(cache_key)
    return cached if cached
    
    # Try to acquire lock for cache regeneration
    if Rails.cache.write(lock_key, 'locked', expires_in: 30.seconds, unless_exist: true)
      begin
        # Double-check cache wasn't populated while acquiring lock
        cached = Rails.cache.read(cache_key)
        return cached if cached
        
        # Generate new value
        fresh_data = expensive_database_query
        Rails.cache.write(cache_key, fresh_data, expires_in: 1.hour)
        fresh_data
      ensure
        Rails.cache.delete(lock_key)
      end
    else
      # Another process is regenerating, wait briefly and check again
      sleep(0.1)
      Rails.cache.read(cache_key) || expensive_database_query
    end
  end
end

# ALTERNATIVE: Probabilistic early expiration
class ProbabilisticCache
  def fetch_with_probabilistic_refresh(key, expires_in:, refresh_probability: 0.1)
    cached_data = Rails.cache.read(key)
    
    if cached_data
      # Randomly refresh cache before expiration
      if rand < refresh_probability
        Rails.cache.write(key, yield, expires_in: expires_in)
      end
      cached_data
    else
      fresh_data = yield
      Rails.cache.write(key, fresh_data, expires_in: expires_in)
      fresh_data
    end
  end
end
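
Rails also ships a built-in mitigation: the :race_condition_ttl option on fetch lets the first process past expiry regenerate the entry while other processes are briefly served the just-expired value.

def popular_data
  Rails.cache.fetch('popular_data',
                    expires_in: 1.hour,
                    race_condition_ttl: 10.seconds) do
    # Only one process recomputes; others read the stale value for
    # up to 10 extra seconds instead of stampeding the database
    expensive_database_query
  end
end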

Reference

Core Cache Methods

| Method | Parameters | Returns | Description |
|---|---|---|---|
| `Rails.cache.fetch(key, **opts)` | key (String), options (Hash), block | Object | Reads cached value or executes block to generate and cache value |
| `Rails.cache.read(key)` | key (String) | Object or nil | Reads cached value without generation fallback |
| `Rails.cache.write(key, value, **opts)` | key (String), value (Object), options (Hash) | Boolean | Writes value to cache with optional expiration and conditions |
| `Rails.cache.delete(key)` | key (String) | Boolean | Removes cache entry for specified key |
| `Rails.cache.exist?(key)` | key (String) | Boolean | Checks if cache entry exists for key |
| `Rails.cache.delete_matched(pattern)` | pattern (String/Regexp) | Integer | Deletes all cache entries matching pattern |
| `Rails.cache.clear` | none | Boolean | Removes all cache entries |
| `Rails.cache.increment(key, amount)` | key (String), amount (Integer) | Integer | Atomically increments numeric cache value |
| `Rails.cache.decrement(key, amount)` | key (String), amount (Integer) | Integer | Atomically decrements numeric cache value |

Cache Store Options

| Option | Type | Description | Example |
|---|---|---|---|
| `:expires_in` | Duration | Cache entry expiration time | `expires_in: 30.minutes` |
| `:expires_at` | Time | Absolute cache expiration time | `expires_at: Time.current + 1.hour` |
| `:namespace` | String | Prefix for cache keys | `namespace: 'app_v2'` |
| `:unless_exist` | Boolean | Write only if key doesn't exist | `unless_exist: true` |
| `:force` | Boolean | Force cache regeneration | `force: true` |
| `:compress` | Boolean | Enable cache value compression | `compress: true` |
| `:compress_threshold` | Integer | Minimum value size in bytes before compression | `compress_threshold: 1024` |

Cache Store Types

| Store Type | Class | Use Case | Configuration |
|---|---|---|---|
| Memory Store | `ActiveSupport::Cache::MemoryStore` | Single-process caching | `size: 64.megabytes` |
| File Store | `ActiveSupport::Cache::FileStore` | Persistent single-server cache | `'tmp/cache'` path argument |
| Redis Store | `ActiveSupport::Cache::RedisCacheStore` | Distributed multi-server cache | `url: ENV['REDIS_URL']` |
| Memcached Store | `ActiveSupport::Cache::MemCacheStore` | High-performance distributed cache | `'localhost:11211'` |
| Null Store | `ActiveSupport::Cache::NullStore` | Development/testing (no caching) | none |

Memoization Patterns

| Pattern | Code Example | Thread Safety | Memory Impact |
|---|---|---|---|
| Basic Memoization | `@result ||= expensive_call` | Not thread-safe | Low |
| Defined Check | `defined?(@result) ? @result : @result = expensive_call` | Not thread-safe | Low (also caches nil/false) |
| Hash Memoization | `@cache[key] ||= expensive_call(key)` | Not thread-safe | Grows with distinct keys |
| Thread-Safe Memo | `@mutex.synchronize { @result ||= expensive_call }` | Thread-safe | Low |
| Class-Level Cache | `@cache = {}` (class instance variable) | Not thread-safe | Persists for process lifetime |

Redis Integration Commands

| Operation | Redis Command | Ruby Implementation | Description |
|---|---|---|---|
| Set with expiration | `SETEX key seconds value` | `redis.setex(key, ttl, value)` | Store value with TTL |
| Get value | `GET key` | `redis.get(key)` | Retrieve cached value |
| Delete key | `DEL key` | `redis.del(key)` | Remove cache entry |
| Check existence | `EXISTS key` | `redis.exists?(key)` | Test if key exists |
| Pattern deletion | `SCAN` + `DEL` | `redis.scan_each(match: 'pattern*') { \|key\| redis.del(key) }` | Delete keys matching pattern |
| Increment counter | `INCR key` | `redis.incr(key)` | Atomic increment |
| Set if not exists | `SETNX key value` | `redis.setnx(key, value)` | Conditional set operation |

Performance Benchmarks

| Cache Type | Write (ops/sec) | Read (ops/sec) | Memory Overhead | Network Latency |
|---|---|---|---|---|
| Memory Store | 100,000+ | 1,000,000+ | 5-10% | None |
| File Store | 1,000-5,000 | 10,000-50,000 | Disk space | None |
| Redis (local) | 10,000-50,000 | 50,000-100,000 | External process | <1ms |
| Redis (network) | 1,000-10,000 | 5,000-20,000 | External server | 1-10ms |
| Memcached | 20,000-80,000 | 100,000-200,000 | External process | <1ms |

Cache Key Strategies

| Strategy | Pattern | Example | Use Case |
|---|---|---|---|
| Hierarchical | `namespace:type:id:version` | `app:user:123:v2` | Organized deletion |
| Timestamped | `key:timestamp` | `stats:2025-08-31` | Time-based invalidation |
| Versioned | `key:version:hash` | `config:v3:abc123` | Deployment-based invalidation |
| User-scoped | `user_id:feature:params` | `456:recommendations:page_1` | User-specific data |
| Parameterized | `base:param1:param2` | `search:ruby:cache:limit_10` | Query result caching |