CrackedRuby

Memory Profiling

Comprehensive guide to analyzing memory usage patterns, detecting memory leaks, and optimizing memory consumption in Ruby applications.


Overview

Memory profiling in Ruby involves analyzing how applications allocate, retain, and garbage collect objects during execution. Ruby provides several built-in tools and supports external gems for detailed memory analysis. The core approach centers on ObjectSpace module methods, GC statistics, and specialized profiling libraries like memory_profiler and heap-profiler.

Ruby's garbage collector automatically manages memory, but applications can still experience memory leaks through retained references, excessive object allocation, or inefficient data structures. Memory profiling identifies these issues by tracking object creation, retention patterns, and memory usage over time.
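As a minimal illustration (the constant and method names here are hypothetical), a long-lived container that only ever grows keeps every element reachable, so the garbage collector can never reclaim them:

```ruby
# LEAKY_CACHE stands in for any long-lived structure (a class-level cache,
# a global registry). Entries are added but never removed, so every one
# stays reachable and the GC cannot reclaim it.
LEAKY_CACHE = []

def handle_request(payload)
  record = { payload: payload, at: Time.now }
  LEAKY_CACHE << record  # retained reference: a slow, steady leak
  record
end

3.times { |i| handle_request("req-#{i}") }
puts LEAKY_CACHE.size  # => 3, and it only ever grows
```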

The ObjectSpace module serves as the foundation for memory introspection, providing methods to enumerate objects, track allocations, and analyze object relationships. Combined with garbage collection statistics from the GC module, these tools offer comprehensive insights into memory behavior.

# Allocation tracking lives in the objspace stdlib
require 'objspace'

# Enable allocation tracking
ObjectSpace.trace_object_allocations_start

# Code to profile
1000.times { "string #{rand}" }

# Get allocation statistics
puts ObjectSpace.count_objects
# => {:TOTAL=>41234, :FREE=>890, :T_STRING=>15678, ...}

External gems extend these capabilities with detailed reports, memory snapshots, and comparative analysis. The memory_profiler gem provides method-level memory tracking, while heap-profiler offers heap dumps and retention analysis.

require 'memory_profiler'

report = MemoryProfiler.report do
  array = []
  100.times { array << Object.new }
end

puts report.pretty_print

Basic Usage

Memory profiling typically starts with ObjectSpace.trace_object_allocations_start, available after require 'objspace', to enable allocation tracking. This method instruments Ruby's object allocation to record the file, line, and method for each object created.

require 'objspace'  # provides trace_object_allocations_* and allocation_source*

# Start allocation tracking
ObjectSpace.trace_object_allocations_start

# Create objects to track
users = []
100.times do |i|
  users << { name: "User #{i}", email: "user#{i}@example.com" }
end

# Query allocation information
ObjectSpace.each_object(Hash) do |hash|
  file = ObjectSpace.allocation_sourcefile(hash)
  line = ObjectSpace.allocation_sourceline(hash)
  puts "#{file}:#{line}" if file&.include?(__FILE__)
end

# Stop tracking
ObjectSpace.trace_object_allocations_stop

The GC module provides statistics about garbage collection cycles, memory usage, and heap status. These metrics help identify memory pressure and collection frequency.

GC.start  # Force garbage collection

stats = GC.stat
puts "Heap slots: #{stats[:heap_live_slots]}"
puts "Free slots: #{stats[:heap_free_slots]}"
puts "GC count: #{stats[:count]}"
puts "Minor GC: #{stats[:minor_gc_count]}"
puts "Major GC: #{stats[:major_gc_count]}"

Object counting reveals memory distribution across Ruby classes. The ObjectSpace.count_objects method returns a hash with counts for each object type.

before_counts = ObjectSpace.count_objects

# Allocate various object types
strings = 1000.times.map { |i| "string_#{i}" }
arrays = 100.times.map { [] }
hashes = 50.times.map { {} }

after_counts = ObjectSpace.count_objects

# Calculate differences
diff = after_counts.map do |type, count|
  [type, count - (before_counts[type] || 0)]
end.to_h

puts "New objects created:"
diff.each { |type, count| puts "#{type}: #{count}" if count > 0 }

The memory_profiler gem simplifies detailed memory analysis by wrapping code blocks and generating comprehensive reports. Install with gem install memory_profiler.

require 'csv'
require 'memory_profiler'

report = MemoryProfiler.report do
  data = CSV.parse(File.read('large_file.csv'))
  processed = data.map { |row| row.join('|') }
  processed.select { |line| line.include?('important') }
end

report.pretty_print(to_file: 'memory_report.txt')

Performance & Memory

Memory profiling reveals performance bottlenecks through allocation patterns and garbage collection pressure. High allocation rates increase GC frequency, reducing application throughput. Profiling identifies hotspots where optimizations provide the greatest impact.
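One way to locate such hotspots with only the standard library is to group traced allocations by source location. This is a sketch using the objspace stdlib; keeping a reference to the allocated objects ensures ObjectSpace.each_object can still see them when the counts are tallied:

```ruby
require 'objspace'

kept = nil  # hold allocations so ObjectSpace.each_object can still see them
hotspots = ObjectSpace.trace_object_allocations do
  kept = Array.new(1_000) { "payload-#{rand}" }  # the site expected to dominate
  counts = Hash.new(0)
  ObjectSpace.each_object(String) do |s|
    file = ObjectSpace.allocation_sourcefile(s) or next  # skip untracked objects
    counts[[file, ObjectSpace.allocation_sourceline(s)]] += 1
  end
  counts.max_by(3) { |_, c| c }  # top three allocation sites
end

hotspots.each { |(file, line), count| puts "#{count} allocations at #{file}:#{line}" }
```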

Object lifecycle analysis determines memory retention patterns. Short-lived objects stress the garbage collector, while long-lived objects consume heap space. The ObjectSpace.reachable_objects_from method lists the objects directly reachable from a given object; walking these references reveals what is keeping memory alive.
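As a small example (requires the objspace stdlib; the Holder struct is hypothetical), the method shows exactly what a suspect object is keeping alive:

```ruby
require 'objspace'

Holder = Struct.new(:name, :children)
holder = Holder.new("root", ["child-a", "child-b"])

# Direct references held by holder: its class, the name string, and the
# children array (which in turn retains the two child strings).
ObjectSpace.reachable_objects_from(holder).each do |obj|
  puts obj.inspect
end
```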

require 'objspace'  # allocation tracing APIs
require 'ostruct'   # OpenStruct is used for the sample items below

ObjectSpace.trace_object_allocations_start

class DataProcessor
  def initialize
    @cache = {}
  end
  
  def process_batch(items)
    items.map do |item|
      # Potential memory leak: cache grows indefinitely
      @cache[item.id] ||= expensive_calculation(item)
    end
  end
  
  private
  
  def expensive_calculation(item)
    item.data.split(',').map(&:strip).join('|')
  end
end

processor = DataProcessor.new

# Process multiple batches
5.times do |batch|
  items = 1000.times.map { OpenStruct.new(id: rand(10000), data: "a,b,c,d,e") }
  results = processor.process_batch(items)
  
  # Check memory usage after each batch
  stats = GC.stat
  puts "Batch #{batch}: #{stats[:heap_live_slots]} live objects"
end

# Analyze cache growth
cache = processor.instance_variable_get(:@cache)
puts "Cache size: #{cache.size}"

ObjectSpace.trace_object_allocations_stop
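One remedy for the unbounded @cache above is a size-capped cache that evicts its oldest entry on insert, trading occasional recomputation for bounded memory. A minimal sketch; the class name, eviction policy, and default capacity are assumptions:

```ruby
class BoundedCache
  def initialize(max_entries: 10_000)
    @max_entries = max_entries
    @store = {}  # Ruby hashes preserve insertion order: the first key is oldest
  end

  def fetch(key)
    return @store[key] if @store.key?(key)
    @store.delete(@store.first[0]) if @store.size >= @max_entries  # evict oldest
    @store[key] = yield
  end

  def size
    @store.size
  end
end

cache = BoundedCache.new(max_entries: 3)
5.times { |i| cache.fetch(i) { i * 2 } }
puts cache.size  # => 3, never exceeds max_entries
```

An LRU variant that refreshes a key's position on access, like the LRUCache in the Testing Strategies section, is a common refinement.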

Sampling-based profilers reduce overhead by recording only a subset of allocations. The heap-profiler gem takes a different approach: it captures full heap dumps (via ObjectSpace.dump_all) before and after a block and diffs them, reporting allocated and retained objects without per-allocation instrumentation in your code.

require 'heap-profiler'

HeapProfiler.report('tmp/heap_report') do
  # Memory-intensive operation
  large_data = []
  100_000.times do |i|
    large_data << { id: i, payload: "x" * 100 }
  end
end

# Analyze the captured dumps with the bundled CLI:
#   heap-profiler tmp/heap_report
puts "Heap dumps written to tmp/heap_report/"

Comparative profiling measures memory differences between code versions or configurations. This technique identifies memory regressions and validates optimizations.

require 'memory_profiler'
require 'set'

def inefficient_processing(data)
  data.map { |item| item.to_s.upcase.strip }.uniq.sort
end

def optimized_processing(data)
  seen = Set.new
  result = []
  data.each do |item|
    processed = item.to_s.upcase.strip
    next if seen.include?(processed)
    seen.add(processed)
    result << processed
  end
  result.sort
end

test_data = Array.new(10_000) { "  Item #{rand(1000)}  " }

# Profile both approaches
inefficient_report = MemoryProfiler.report { inefficient_processing(test_data) }
optimized_report = MemoryProfiler.report { optimized_processing(test_data) }

puts "Inefficient: #{inefficient_report.total_allocated_memsize} bytes"
puts "Optimized: #{optimized_report.total_allocated_memsize} bytes"
puts "Reduction: #{((inefficient_report.total_allocated_memsize - optimized_report.total_allocated_memsize) / inefficient_report.total_allocated_memsize.to_f * 100).round(2)}%"

Error Handling & Debugging

Memory profiling errors commonly arise from instrumentation overhead, large data sets, or profiling tool limitations. The ObjectSpace.trace_object_allocations_start method increases memory usage and slows execution, potentially causing timeouts in production environments.

Allocation tracking consumes significant memory when profiling long-running processes. Ruby stores allocation metadata for each object, doubling memory usage in worst-case scenarios. Monitor heap growth when enabling tracking.

# The allocation_source* helpers live in the objspace stdlib
require 'objspace'

# Heuristic check: allocation metadata exists only while tracking is on
if ObjectSpace.allocation_sourcefile(Object.new)
  puts "Warning: Allocation tracking already enabled"
  puts "Current memory usage may be inflated"
end

ObjectSpace.trace_object_allocations_start

begin
  # Memory-intensive operation
  data = Array.new(1_000_000) { "string data" }
  
  # Check heap pressure
  stats = GC.stat
  if stats[:heap_live_slots] > 10_000_000
    puts "Warning: High memory usage detected"
    puts "Consider reducing dataset size or disabling tracking"
  end
  
rescue NoMemoryError
  puts "Memory exhausted during profiling"
  puts "Disable allocation tracking and retry with a smaller dataset"
  raise  # the ensure clause below stops tracking
ensure
  ObjectSpace.trace_object_allocations_stop
end

Profiling tools may fail with large object counts or complex object graphs. The memory_profiler gem can take a long time to generate reports for large workloads and can itself consume substantial memory while doing so.

require 'memory_profiler'

def safe_memory_profile(&block)
  timeout_duration = 30 # seconds
  
  report = nil
  thread = Thread.new do
    begin
      report = MemoryProfiler.report(&block)
    rescue => e
      puts "Profiling failed: #{e.message}"
      puts "Try reducing workload or increasing timeout"
    end
  end
  
  unless thread.join(timeout_duration)
    thread.kill
    puts "Profiling timed out after #{timeout_duration} seconds"
    return nil
  end
  
  report
end

# Usage with error handling
report = safe_memory_profile do
  # Potentially problematic code
  huge_array = Array.new(10_000_000) { Object.new }
  huge_array.map(&:class)
end

if report
  puts "Profiling successful"
  puts "Total allocated: #{report.total_allocated}"
else
  puts "Profiling failed or timed out"
end

Heap dump analysis can fail with corrupted dumps or unsupported formats. Different Ruby versions generate incompatible heap dump formats.

require 'json'

def analyze_heap_dump(filename)
  unless File.exist?(filename)
    puts "Error: Heap dump file not found: #{filename}"
    return false
  end
  
  begin
    object_count = 0
    File.open(filename, 'r') do |file|
      # Validate heap dump format (each line is one JSON object)
      first_line = file.readline.strip
      unless first_line.start_with?('{')
        puts "Error: Invalid heap dump format"
        puts "Expected JSON objects, got: #{first_line[0..50]}"
        return false
      end
      file.rewind  # re-read from the top so the first object is processed too
      
      # Process the dump line by line to avoid loading it all into memory
      file.each_line do |line|
        object = JSON.parse(line)
        object_count += 1
        
        # Process object data
        yield object if block_given?
        
        # Prevent memory buildup
        GC.start if object_count % 10_000 == 0
      end
    end
    
    puts "Successfully processed #{object_count} objects"
    true
  rescue JSON::ParserError => e
    puts "Error parsing heap dump: #{e.message}"
    puts "File may be corrupted or incomplete"
    false
  rescue => e
    puts "Unexpected error processing heap dump: #{e.message}"
    false
  end
end

# Usage
analyze_heap_dump('heap_dump.json') do |object|
  puts "#{object['type']}: #{object['class']}" if object['retained']
end

Production Patterns

Production memory profiling requires sampling strategies that minimize performance impact while providing actionable insights. Continuous profiling monitors long-term memory trends, while triggered profiling captures specific scenarios like memory leaks or performance degradation.
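Triggered profiling can be as simple as a one-shot flag flipped by an operator signal. In this sketch the signal choice (USR2), the global flag, and the GC.stat diff standing in for a full MemoryProfiler report are all assumptions:

```ruby
# Operator flips the flag with: kill -USR2 <pid>
$profile_next = false
Signal.trap('USR2') { $profile_next = true }

def maybe_profiled(label)
  return yield unless $profile_next
  $profile_next = false  # one-shot trigger

  before = GC.stat[:total_allocated_objects]
  result = yield
  allocated = GC.stat[:total_allocated_objects] - before
  puts "TRIGGERED_PROFILE #{label}: #{allocated} objects allocated"
  result
end

Process.kill('USR2', Process.pid)  # simulate the operator trigger
sleep 0.1                          # let the signal handler run
maybe_profiled('demo') { Array.new(100) { +'x' } }
```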

Sampling-based profiling reduces overhead by tracking a subset of allocations. Configure sampling rates based on application throughput and memory allocation patterns. Higher rates provide more detail but increase overhead.

require 'memory_profiler'
require 'time'  # Time#iso8601
require 'json'  # Hash#to_json

class ProductionMemoryProfiler
  def self.configure
    @sample_rate = ENV.fetch('MEMORY_PROFILE_SAMPLE_RATE', 10_000).to_i
    @profile_duration = ENV.fetch('MEMORY_PROFILE_DURATION', 60).to_i
    @enabled = ENV.fetch('MEMORY_PROFILING_ENABLED', 'false') == 'true'
  end
  
  def self.profile_request(request_id)
    return yield unless @enabled
    return yield if rand(@sample_rate) != 0
    
    result = nil
    report = MemoryProfiler.report do
      result = yield
    end
    
    # Log memory usage asynchronously
    Thread.new do
      log_memory_report(request_id, report)
    end
    
    result
  rescue => e
    # Never let profiling crash the application
    puts "Memory profiling error for request #{request_id}: #{e.message}"
    yield
  end
  
  
  def self.log_memory_report(request_id, report)
    summary = {
      request_id: request_id,
      total_allocated: report.total_allocated,
      total_retained: report.total_retained,
      allocated_memory: report.total_allocated_memsize,
      retained_memory: report.total_retained_memsize,
      timestamp: Time.now.iso8601
    }
    
    # Send to monitoring system
    puts "MEMORY_PROFILE: #{summary.to_json}"
  end
  # `private` does not apply to methods defined with `def self.`
  private_class_method :log_memory_report
end

# Initialize configuration
ProductionMemoryProfiler.configure

# Usage in web application
def handle_api_request(params)
  ProductionMemoryProfiler.profile_request(params[:request_id]) do
    # Application logic
    process_user_data(params[:user_data])
    generate_response(params[:format])
  end
end

Memory leak detection compares memory usage before and after operations that should not increase baseline memory consumption. Implement periodic checks to identify gradual memory growth.

require 'time'  # Time#iso8601
require 'json'  # Hash#to_json

class MemoryLeakDetector
  def initialize(threshold_mb: 50, check_interval: 300)
    @threshold_bytes = threshold_mb * 1024 * 1024
    @check_interval = check_interval
    @baseline_memory = current_memory_usage
    @last_check = Time.now
  end
  
  def check_for_leaks
    return unless Time.now - @last_check > @check_interval
    
    current_memory = current_memory_usage
    growth = current_memory - @baseline_memory
    
    if growth > @threshold_bytes
      report_memory_leak(growth, current_memory)
      @baseline_memory = current_memory  # Reset baseline
    end
    
    @last_check = Time.now
  end
  
  private
  
  def current_memory_usage
    # GC.stat has no :heap_slot_size key; derive the slot size from
    # GC::INTERNAL_CONSTANTS (:RVALUE_SIZE pre-3.2, :BASE_SLOT_SIZE on 3.2+)
    slot_size = GC::INTERNAL_CONSTANTS[:RVALUE_SIZE] ||
                GC::INTERNAL_CONSTANTS[:BASE_SLOT_SIZE]
    GC.stat[:heap_live_slots] * slot_size
  end
  
  def report_memory_leak(growth_bytes, total_bytes)
    growth_mb = (growth_bytes / 1024.0 / 1024.0).round(2)
    total_mb = (total_bytes / 1024.0 / 1024.0).round(2)
    
    leak_report = {
      alert: "Memory leak detected",
      growth_mb: growth_mb,
      total_memory_mb: total_mb,
      gc_stats: GC.stat,
      object_counts: ObjectSpace.count_objects,
      timestamp: Time.now.iso8601
    }
    
    # Alert monitoring system
    puts "MEMORY_LEAK_ALERT: #{leak_report.to_json}"
    
    # Optional: Generate detailed heap dump
    generate_heap_dump if growth_mb > 100
  end
  
  def generate_heap_dump
    filename = "heap_dump_#{Time.now.to_i}.json"
    GC.start(full_mark: true, immediate_sweep: true)
    
    # ObjectSpace.dump_all is provided by the objspace stdlib
    require 'objspace'
    if ObjectSpace.respond_to?(:dump_all)
      File.open(filename, 'w') do |f|
        ObjectSpace.dump_all(output: f)
      end
      puts "Heap dump written to #{filename}"
    end
  end
end

# Usage in production application
detector = MemoryLeakDetector.new(threshold_mb: 100, check_interval: 600)

# Check periodically in background thread
Thread.new do
  loop do
    sleep 60
    detector.check_for_leaks
  end
end

Container-based deployments require memory profiling strategies that account for container limits and orchestration systems. Monitor both Ruby heap usage and total process memory to prevent container kills.

require 'time'  # Time#iso8601
require 'json'  # Hash#to_json

class ContainerMemoryMonitor
  def initialize
    @container_limit = detect_container_memory_limit
    @warning_threshold = @container_limit * 0.8
    @critical_threshold = @container_limit * 0.9
  end
  
  def monitor_memory_usage
    ruby_heap_size = calculate_ruby_heap_size
    process_memory = current_process_memory
    
    status = determine_memory_status(process_memory)
    
    memory_metrics = {
      ruby_heap_mb: (ruby_heap_size / 1024.0 / 1024.0).round(2),
      process_memory_mb: (process_memory / 1024.0 / 1024.0).round(2),
      container_limit_mb: (@container_limit / 1024.0 / 1024.0).round(2),
      memory_status: status,
      gc_stats: GC.stat.slice(:count, :heap_live_slots, :heap_free_slots),
      timestamp: Time.now.iso8601
    }
    
    puts "CONTAINER_MEMORY: #{memory_metrics.to_json}"
    
    # Trigger emergency GC if approaching limits
    if status == :critical
      puts "Critical memory usage - forcing full GC"
      GC.start(full_mark: true, immediate_sweep: true)
    end
    
    memory_metrics
  end
  
  private
  
  def detect_container_memory_limit
    # cgroup v2 exposes memory.max; cgroup v1 uses memory.limit_in_bytes
    v2_path = '/sys/fs/cgroup/memory.max'
    v1_path = '/sys/fs/cgroup/memory/memory.limit_in_bytes'
    
    if File.exist?(v2_path) && (raw = File.read(v2_path).strip) != 'max'
      raw.to_i
    elsif File.exist?(v1_path)
      File.read(v1_path).to_i
    else
      # Fallback to total system memory
      File.read('/proc/meminfo')[/MemTotal:\s+(\d+)/, 1].to_i * 1024
    end
  rescue
    2 * 1024 * 1024 * 1024  # Default 2GB
  end
  
  def calculate_ruby_heap_size
    slot_size = GC::INTERNAL_CONSTANTS[:RVALUE_SIZE] ||
                GC::INTERNAL_CONSTANTS[:BASE_SLOT_SIZE]
    GC.stat[:heap_live_slots] * slot_size
  end
  
  def current_process_memory
    File.read("/proc/#{Process.pid}/status")
        .lines
        .find { |line| line.start_with?('VmRSS:') }
        .scan(/\d+/)
        .first
        .to_i * 1024
  rescue
    0
  end
  
  def determine_memory_status(memory_usage)
    if memory_usage > @critical_threshold
      :critical
    elsif memory_usage > @warning_threshold
      :warning
    else
      :normal
    end
  end
end

Testing Strategies

Memory profiling in test environments validates that code changes don't introduce memory leaks or excessive allocations. Automated memory tests prevent performance regressions and ensure memory efficiency across the codebase.

Test-driven memory profiling uses baseline measurements to detect memory changes in unit tests. Establish acceptable allocation ranges for specific operations and fail tests when memory usage exceeds thresholds.

require 'minitest/autorun'
require 'memory_profiler'

class MemoryProfiledTest < Minitest::Test
  def setup
    # Warm up to stabilize memory measurements
    3.times { GC.start(full_mark: true, immediate_sweep: true) }
  end
  
  def assert_memory_usage(max_allocations: nil, max_retained: nil, &block)
    report = MemoryProfiler.report(&block)
    
    if max_allocations
      assert_operator report.total_allocated, :<=, max_allocations,
        "Expected ≤#{max_allocations} allocations, got #{report.total_allocated}"
    end
    
    if max_retained
      assert_operator report.total_retained, :<=, max_retained,
        "Expected ≤#{max_retained} retained objects, got #{report.total_retained}"
    end
    
    report
  end
  
  def test_user_processing_memory_usage
    users_data = Array.new(1000) do |i|
      { id: i, name: "User #{i}", email: "user#{i}@example.com" }
    end
    
    report = assert_memory_usage(max_allocations: 2000, max_retained: 100) do
      processor = UserProcessor.new
      processor.process_batch(users_data)
    end
    
    # Verify no string duplication (memory_profiler groups per-class stats
    # as arrays of { data: "ClassName", count: n } entries)
    string_entry = report.allocated_objects_by_class.find { |h| h[:data] == 'String' }
    string_allocations = string_entry ? string_entry[:count] : 0
    assert_operator string_allocations, :<=, users_data.size * 2,
      "Too many string allocations suggest duplicated data"
  end
  
  def test_cache_memory_efficiency
    cache = LRUCache.new(capacity: 100)
    
    # Fill cache beyond capacity
    report = assert_memory_usage(max_retained: 150) do
      200.times do |i|
        cache.set("key_#{i}", "value_#{i}" * 100)
      end
    end
    
    # Verify cache eviction worked
    assert_equal 100, cache.size, "Cache should maintain size limit"
    
    # Check for memory leaks in evicted entries
    string_entry = report.retained_memory_by_class.find { |h| h[:data] == 'String' }
    retained_strings = string_entry ? string_entry[:count] : 0
    assert_operator retained_strings, :<=, 100 * 100 * 10,  # Allow some overhead
      "Evicted cache entries may not be garbage collected"
  end
end

class UserProcessor
  def process_batch(users)
    # Simulate processing that should be memory efficient
    users.map do |user|
      "#{user[:name]} <#{user[:email]}>"
    end
  end
end

class LRUCache
  def initialize(capacity:)
    @capacity = capacity
    @cache = {}
    @order = []
  end
  
  def set(key, value)
    if @cache.key?(key)
      @order.delete(key)
    elsif @cache.size >= @capacity
      oldest_key = @order.shift
      @cache.delete(oldest_key)
    end
    
    @cache[key] = value
    @order.push(key)
  end
  
  def size
    @cache.size
  end
end

Benchmark testing measures memory allocation patterns across different implementations. Compare memory efficiency of alternative algorithms or data structures to make informed optimization decisions.

require 'minitest/autorun'
require 'memory_profiler'

class MemoryBenchmarkTest < Minitest::Test
  def test_string_concatenation_memory_efficiency
    test_data = Array.new(1000) { "string #{rand(1000)}" }
    
    # Test string concatenation with +
    plus_report = MemoryProfiler.report do
      result = ""
      test_data.each { |s| result = result + s }
    end
    
    # Test string concatenation with <<
    shovel_report = MemoryProfiler.report do
      result = ""
      test_data.each { |s| result << s }
    end
    
    # Test Array join
    join_report = MemoryProfiler.report do
      test_data.join
    end
    
    puts "String concatenation memory comparison:"
    puts "Plus operator: #{plus_report.total_allocated_memsize} bytes"
    puts "Shovel operator: #{shovel_report.total_allocated_memsize} bytes"
    puts "Array join: #{join_report.total_allocated_memsize} bytes"
    
    # Assert most efficient method
    assert_operator join_report.total_allocated_memsize, :<, 
                   plus_report.total_allocated_memsize,
                   "Array join should be more memory efficient than +"
    
    assert_operator shovel_report.total_allocated_memsize, :<,
                   plus_report.total_allocated_memsize,
                   "Shovel operator should be more memory efficient than +"
  end
  
  def test_data_structure_memory_comparison
    data = Array.new(10_000) { rand(100_000) }
    
    # Test Array storage
    array_report = MemoryProfiler.report do
      storage = []
      data.each { |item| storage << item unless storage.include?(item) }
    end
    
    # Test Set storage (require outside the profiled block so library
    # loading is not counted against Set)
    require 'set'
    set_report = MemoryProfiler.report do
      storage = Set.new
      data.each { |item| storage.add(item) }
    end
    
    # Test Hash storage
    hash_report = MemoryProfiler.report do
      storage = {}
      data.each { |item| storage[item] = true }
    end
    
    results = {
      array: array_report.total_allocated_memsize,
      set: set_report.total_allocated_memsize,
      hash: hash_report.total_allocated_memsize
    }
    
    puts "Data structure memory usage:"
    results.each { |type, memory| puts "#{type}: #{memory} bytes" }
    
    # Set should be most memory efficient for uniqueness
    assert_operator results[:set], :<=, results[:array],
                   "Set should be more memory efficient than Array for uniqueness"
  end
end

Integration testing validates memory behavior in complex scenarios that combine multiple components. Test memory usage patterns in realistic application workflows to identify cumulative allocation issues.

require 'minitest/autorun'
require 'memory_profiler'

class MemoryIntegrationTest < Minitest::Test
  def test_complete_request_processing_memory
    # Simulate web request processing workflow
    report = MemoryProfiler.report do
      # Parse request parameters
      params = simulate_request_parsing
      
      # Database query simulation
      records = simulate_database_query(params[:user_id])
      
      # Business logic processing
      results = simulate_business_logic(records)
      
      # Response serialization
      response = simulate_json_serialization(results)
      
      # Template rendering
      html = simulate_template_rendering(response)
    end
    
    # Validate reasonable memory usage for complete request
    assert_operator report.total_allocated, :<=, 10_000,
      "Complete request should allocate fewer than 10,000 objects"
    
    assert_operator report.total_allocated_memsize, :<=, 10 * 1024 * 1024,
      "Complete request should allocate less than 10MB"
    
    # Verify minimal object retention
    assert_operator report.total_retained, :<=, 100,
      "Request processing should retain minimal objects"
    
    # Check for specific problematic patterns (per-class stats are arrays
    # of { data: "ClassName", count: bytes } entries)
    memsize_for = ->(name) do
      entry = report.allocated_memory_by_class.find { |h| h[:data] == name }
      entry ? entry[:count] : 0
    end
    
    assert_operator memsize_for.call('String'), :<=, 5 * 1024 * 1024,
      "String allocations should be reasonable"
    
    assert_operator memsize_for.call('Hash'), :<=, 2 * 1024 * 1024,
      "Hash allocations should be controlled"
  end
  
  private
  
  def simulate_request_parsing
    {
      user_id: rand(10_000),
      format: 'json',
      filters: ['active', 'verified'],
      pagination: { page: 1, per_page: 20 }
    }
  end
  
  def simulate_database_query(user_id)
    # Simulate database records
    Array.new(100) do |i|
      {
        id: i,
        user_id: user_id,
        title: "Record #{i}",
        data: "x" * 100,
        created_at: Time.now - rand(86400)
      }
    end
  end
  
  def simulate_business_logic(records)
    records.select { |r| r[:created_at] > Time.now - 3600 }
           .map { |r| r.merge(processed: true) }
           .sort_by { |r| r[:created_at] }
  end
  
  def simulate_json_serialization(data)
    require 'json'
    JSON.generate(data)
  end
  
  def simulate_template_rendering(json_data)
    data = JSON.parse(json_data)
    html_parts = data.map do |record|
      "<div>#{record['title']}: #{record['data'][0..20]}</div>"
    end
    "<html><body>#{html_parts.join}</body></html>"
  end
end

Reference

ObjectSpace Methods

Most of these methods require the objspace standard library (require 'objspace'); ObjectSpace.count_objects and ObjectSpace.each_object are available in core.

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| ObjectSpace.trace_object_allocations_start | None | nil | Enable allocation tracking for new objects |
| ObjectSpace.trace_object_allocations_stop | None | nil | Disable allocation tracking |
| ObjectSpace.allocation_sourcefile(obj) | obj (Object) | String/nil | File where object was allocated |
| ObjectSpace.allocation_sourceline(obj) | obj (Object) | Integer/nil | Line number where object was allocated |
| ObjectSpace.allocation_class_path(obj) | obj (Object) | String/nil | Class path where object was allocated |
| ObjectSpace.allocation_method_id(obj) | obj (Object) | Symbol/nil | Method where object was allocated |
| ObjectSpace.count_objects(result_hash) | result_hash (Hash, optional) | Hash | Count of objects by type |
| ObjectSpace.each_object(module) | module (Module, optional) | Enumerator/Integer | Iterate over live objects |
| ObjectSpace.reachable_objects_from(obj) | obj (Object) | Array/nil | Objects directly reachable from given object |
| ObjectSpace.dump_all(output:) | output (IO) | IO | Write heap dump to output stream |

GC Statistics

| Statistic | Type | Description |
| --- | --- | --- |
| :count | Integer | Total garbage collections performed |
| :minor_gc_count | Integer | Minor garbage collections |
| :major_gc_count | Integer | Major garbage collections |
| :heap_allocated_pages | Integer | Pages allocated to heap |
| :heap_live_slots | Integer | Live object slots in heap |
| :heap_free_slots | Integer | Free object slots in heap |
| :heap_final_slots | Integer | Slots awaiting finalization |
| :heap_swept_slots | Integer | Slots swept in the last GC |
| :total_allocated_pages | Integer | Total pages ever allocated |
| :total_freed_pages | Integer | Total pages ever freed |
| :total_allocated_objects | Integer | Total objects ever allocated |
| :total_freed_objects | Integer | Total objects ever freed |
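Raw counters become more actionable as derived rates. Diffing GC.stat around a timed interval yields allocation rate and live-heap growth (a sketch; the helper name is an assumption):

```ruby
def gc_delta
  before = GC.stat
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
  after = GC.stat

  {
    allocated_objects: after[:total_allocated_objects] - before[:total_allocated_objects],
    alloc_per_sec: ((after[:total_allocated_objects] - before[:total_allocated_objects]) / elapsed).round,
    gc_runs: after[:count] - before[:count],
    live_slot_growth: after[:heap_live_slots] - before[:heap_live_slots]
  }
end

metrics = gc_delta { 50_000.times { Object.new } }
puts metrics
```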

MemoryProfiler Report Methods

| Method | Returns | Description |
| --- | --- | --- |
| #total_allocated | Integer | Total objects allocated |
| #total_retained | Integer | Total objects retained |
| #total_allocated_memsize | Integer | Total memory allocated in bytes |
| #total_retained_memsize | Integer | Total memory retained in bytes |
| #allocated_memory_by_class | Array | { data:, count: } entries of bytes allocated per class |
| #retained_memory_by_class | Array | { data:, count: } entries of bytes retained per class |
| #allocated_objects_by_file | Array | { data:, count: } entries of objects per source file |
| #allocated_memory_by_file | Array | { data:, count: } entries of bytes per source file |
| #pretty_print(to_file:) | nil | Write a formatted report to stdout or a file |

Memory Profiler Configuration

# Report generation options
MemoryProfiler.report(
  ignore_files: /gems/,          # Ignore allocations from matching file paths
  allow_files: 'app/',           # Only track allocations from app/
  top: 50,                       # Show top 50 entries per section
  trace: [String, Hash]          # Only track allocations of these classes
) do
  # Code to profile
end

Heap Dump Analysis

# ObjectSpace heap dump format
{
  "address": "0x7f8b8c0a4040",    # Object memory address
  "type": "STRING",               # Object type
  "class": "0x7f8b8c0a3fc0",     # Class pointer
  "bytesize": 23,                # Object size in bytes
  "capacity": 23,                # String capacity
  "encoding": "UTF-8",           # String encoding
  "file": "app.rb",              # Allocation source file
  "line": 42,                    # Allocation line number
  "method": "process_data",      # Allocation method
  "generation": 5,               # GC generation
  "memsize": 64,                # Total memory size
  "flags": {                     # Object flags
    "wb_protected": true,
    "old": false,
    "marked": true
  }
}

Common Object Types

| Type | Description | Memory Characteristics |
| --- | --- | --- |
| T_STRING | String objects | Variable size, encoding overhead |
| T_ARRAY | Array objects | Pointer storage, capacity growth |
| T_HASH | Hash objects | Key-value storage, rehashing cost |
| T_OBJECT | Generic objects | Instance variable storage |
| T_CLASS | Class objects | Method table, constant storage |
| T_MODULE | Module objects | Method definitions, includes |
| T_SYMBOL | Symbol objects | Interned; dynamic symbols are GC-eligible since Ruby 2.2 |
| T_RATIONAL | Rational numbers | Numerator/denominator storage |
| T_COMPLEX | Complex numbers | Real/imaginary component storage |
| T_FILE | File objects | Buffer and metadata storage |