Overview
Memory profiler tracks object allocations and memory usage during Ruby code execution. The gem instruments Ruby's object allocation mechanism to capture detailed statistics about memory consumption patterns, allocation sites, and object lifecycle information.
The core functionality centers around the MemoryProfiler module, which provides methods to profile memory usage during block execution. The profiler captures allocation data including object types, allocation locations, retained objects, and memory footprint measurements.
require 'memory_profiler'
report = MemoryProfiler.report do
  1000.times { "string #{rand}" }
end
report.pretty_print
Ruby's memory profiler operates by hooking into the garbage collector's allocation tracking mechanisms. When profiling begins, the gem records the current state of allocated objects, then monitors new allocations during the profiled code execution. After the block completes, it performs garbage collection analysis to determine which objects remain allocated.
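Under the hood this relies on the objspace standard library; a minimal sketch of the primitives the gem builds on:
require 'objspace'
# Allocation metadata is only recorded while tracing is active
ObjectSpace.trace_object_allocations do
  obj = "allocated while tracing"
  puts ObjectSpace.allocation_sourcefile(obj) # file where obj was created
  puts ObjectSpace.allocation_sourceline(obj) # line of the allocation
  puts ObjectSpace.memsize_of(obj)            # approximate size in bytes
end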
The profiler distinguishes between total allocations and retained objects. Total allocations represent every object created during profiling, while retained objects are those that survive garbage collection and remain in memory after the profiled code finishes executing.
retained = []
report = MemoryProfiler.report do
  10.times { |i| retained << "item_#{i}" }
end
# Strings held in the outer retained array survive GC and count as retained
report.total_allocated_memsize # Total memory allocated during the block
report.total_retained_memsize  # Memory still held after GC
The profiler captures allocation context including file names, line numbers, and method names where objects were created. This location information enables developers to identify specific code sections responsible for high memory usage patterns.
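As a rough illustration of reading that location data programmatically (assuming the { data:, count: } entry shape used by recent gem versions):
report = MemoryProfiler.report do
  5.times { Array.new(10) { rand } }
end
# Each entry pairs a "file:line" label with an allocation count
report.allocated_objects_by_location.each do |entry|
  puts "#{entry[:data]} -> #{entry[:count]} objects"
end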
Memory profiler integrates with Ruby's ObjectSpace allocation tracing, which requires Ruby 2.1 or later with allocation tracing enabled. The profiler automatically enables and disables allocation tracing around the profiled block to minimize performance impact on non-profiled code sections.
Basic Usage
Memory profiling begins with wrapping target code in a MemoryProfiler.report block. The method returns a detailed report object containing allocation statistics and analysis data.
require 'memory_profiler'
def create_data
  data = {}
  100.times do |i|
    data["key_#{i}"] = Array.new(50) { rand(1000) }
  end
  data
end
report = MemoryProfiler.report do
  result = create_data
  result.values.flatten.sum
end
report.pretty_print
The report object provides multiple analysis methods. The pretty_print method prints human-readable statistics, including top allocation sites, object counts by type, and memory usage breakdowns, to standard output or a file. The output groups allocations by gem, file, location, and object class for easy identification of memory hotspots.
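For example, output can be redirected to a file or trimmed; to_file, retained_strings, and allocated_strings are options available in recent gem versions:
# Write the formatted report to a file instead of stdout
report.pretty_print(to_file: 'tmp/memory_report.txt')
# Show fewer sample strings in the output
report.pretty_print(retained_strings: 5, allocated_strings: 5)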
Report filtering controls which allocations appear in analysis output. The profiler accepts filtering options to focus on specific files, classes, or allocation patterns while excluding framework or library code from analysis.
report = MemoryProfiler.report(ignore_files: /gems/) do
  require 'json'
  data = { users: (1..100).map { |i| { id: i, name: "User #{i}" } } }
  JSON.generate(data)
end
# Only shows allocations from application code, not gem dependencies
report.pretty_print(scale_bytes: true)
The scale_bytes option in pretty_print displays memory sizes in human-readable units (KB, MB) instead of raw byte counts. This formatting makes memory consumption easier to interpret for large datasets.
Memory profiler supports custom configuration through options passed to the report method. Common options include allow_files for limiting tracking to specific files, ignore_files for excluding file patterns, and trace for restricting tracking to an explicit list of classes.
# Profile only application code in the app/ directory
report = MemoryProfiler.report(allow_files: 'app/') do
  # Application logic here
  User.all.map(&:profile_data).to_json
end
# Access raw statistics programmatically
puts "Total objects: #{report.total_allocated}"
puts "Retained objects: #{report.total_retained}"
puts "Memory allocated: #{report.total_allocated_memsize} bytes"
puts "Memory retained: #{report.total_retained_memsize} bytes"
The profiler distinguishes between allocated and retained memory statistics. Allocated counts include all objects created during profiling, while retained counts show objects that survive garbage collection. High retention rates often indicate memory leaks or excessive object caching.
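A simple retention ratio derived from the report's totals makes this easy to monitor:
retention_ratio = report.total_retained.to_f / report.total_allocated
puts format("%.1f%% of allocated objects survived GC", retention_ratio * 100)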
Report generation automatically triggers garbage collection to accurately measure retained objects. This process may affect timing measurements but provides more reliable memory retention analysis.
Performance & Memory
Memory profiler introduces measurement overhead proportional to allocation rates in profiled code. Each tracked allocation requires additional memory and processing time for metadata storage and location tracking. High-allocation code sections experience more significant performance degradation during profiling.
The profiler's memory overhead grows with the number of tracked allocations. Each allocation record stores object class, size, location information, and retention status. Applications creating millions of objects during profiling may consume substantial additional memory for profiler metadata.
require 'benchmark'
# Measure profiling overhead
unprofiled_time = Benchmark.realtime do
  10_000.times { Array.new(100) { rand } }
end
profiled_time = Benchmark.realtime do
  MemoryProfiler.report do
    10_000.times { Array.new(100) { rand } }
  end
end
overhead = ((profiled_time - unprofiled_time) / unprofiled_time * 100).round(2)
puts "Profiling overhead: #{overhead}%"
Memory profiler automatically enables Ruby's allocation tracing during profiling blocks and disables it afterward. This approach minimizes global performance impact but means allocation tracing state changes during profiling execution. Code that depends on allocation tracing state may behave differently during profiling.
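The block form wraps the manual MemoryProfiler.start and MemoryProfiler.stop calls, which can be used directly when the profiled region does not fit neatly into a block:
MemoryProfiler.start(ignore_files: /gems/)
run_workload # hypothetical application code
report = MemoryProfiler.stop
report.pretty_print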
Large applications benefit from targeted profiling of specific code sections rather than full request profiling. Profiling narrow code paths reduces overhead while maintaining useful allocation insights for optimization efforts.
class UserController < ApplicationController
  def expensive_operation
    # Profile just the expensive part
    report = MemoryProfiler.report do
      users = User.includes(:posts, :comments).limit(1000)
      users.map(&:serialize_full_profile)
    end
    # Log memory usage for monitoring
    logger.info "Memory allocated: #{report.total_allocated_memsize} bytes"
    report
  end
end
Memory profiler works well with sampling strategies for production monitoring. Running profiles on a percentage of requests or specific time intervals provides memory insights without constant overhead. Conditional profiling based on request parameters or user flags enables targeted analysis.
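A minimal sampling wrapper; maybe_profile and the log path are illustrative names rather than gem API:
def maybe_profile(sample_rate: 0.01, &block)
  return yield unless rand < sample_rate
  result = nil
  report = MemoryProfiler.report { result = block.call }
  report.pretty_print(to_file: "log/memory_#{Time.now.to_i}.txt")
  result
end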
The profiler integrates with Ruby's garbage collector statistics through GC.stat for comprehensive memory analysis. Combining profiler output with GC statistics reveals memory allocation patterns and garbage collection pressure.
def analyze_memory_patterns
  gc_before = GC.stat
  report = MemoryProfiler.report do
    # Code under analysis
    process_large_dataset
  end
  gc_after = GC.stat
  {
    allocated_objects: report.total_allocated,
    retained_objects: report.total_retained,
    gc_runs: gc_after[:count] - gc_before[:count],
    gc_time: gc_after[:time] - gc_before[:time] # GC.stat[:time] is available from Ruby 3.1
  }
end
Memory profiler results vary between Ruby versions due to internal object allocation optimizations. Ruby's copy-on-write string optimizations and frozen string literals affect allocation counts and memory measurements. Comparing profiles across Ruby versions requires accounting for these implementation differences.
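For instance, frozen string literals change the counts for otherwise identical code (a quick sketch; exact numbers depend on the Ruby version):
plain  = MemoryProfiler.report { 1_000.times { "static text" } }
frozen = MemoryProfiler.report { 1_000.times { "static text".freeze } }
puts plain.total_allocated  # roughly one String per iteration
puts frozen.total_allocated # far fewer: the frozen literal is deduplicated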
Production Patterns
Memory profiler deployment in production applications requires careful consideration of performance impact and data collection strategies. Continuous profiling creates significant overhead, making selective profiling approaches more practical for production environments.
Feature flags enable memory profiling for specific requests or user sessions without affecting overall application performance. Sampling-based profiling activates for a small percentage of requests or during specific time windows for ongoing memory monitoring.
class ApplicationController < ActionController::Base
  around_action :profile_memory, if: :should_profile_memory?
  private
  def profile_memory
    report = MemoryProfiler.report(ignore_files: /gems/) do
      yield
    end
    # Persist only noisy requests; 50_000 is an example threshold
    store_memory_report(report) if report.total_allocated > 50_000
  end
  def should_profile_memory?
    # Sample 1% of requests for users in the beta group
    params[:profile_memory] || (beta_user? && rand < 0.01)
  end
  def store_memory_report(report)
    MemoryReport.create!(
      controller: params[:controller],
      action: params[:action],
      total_allocated: report.total_allocated,
      total_retained: report.total_retained,
      report_data: report.to_json
    )
  end
end
Memory profiling in web applications focuses on controller actions, background jobs, and API endpoints that process significant data volumes. Profiling these components identifies memory-intensive operations and guides optimization efforts.
Background job profiling reveals memory usage patterns in asynchronous processing systems. Jobs that process large datasets or generate reports often exhibit high memory allocation rates requiring optimization or resource allocation adjustments.
class DataProcessingJob < ApplicationJob
  def perform(dataset_id)
    dataset = Dataset.find(dataset_id)
    # Build the processor outside the block so its results stay accessible afterwards
    processor = DataProcessor.new(dataset)
    report = MemoryProfiler.report do
      processor.analyze_trends
      processor.generate_insights
    end
    if report.total_allocated_memsize > 100.megabytes
      logger.warn "High memory usage in job: #{report.total_allocated_memsize} bytes"
      AlertService.notify_high_memory_usage(self.class.name, report)
    end
    processor.results
  end
end
Memory monitoring systems integrate profiler data with application performance monitoring tools. Automated analysis compares memory usage patterns across deployments and identifies regressions in memory efficiency.
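A sketch of that integration, assuming a StatsD-style client; the metric names and StatsD calls are illustrative, not part of the gem:
def record_memory_metrics(report, prefix)
  StatsD.gauge("#{prefix}.allocated_objects", report.total_allocated)
  StatsD.gauge("#{prefix}.retained_objects", report.total_retained)
  StatsD.gauge("#{prefix}.allocated_bytes", report.total_allocated_memsize)
end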
Production profiling configurations typically exclude gem dependencies and focus on application code through file filtering. This approach reduces profiler overhead while maintaining visibility into application-specific memory patterns.
# config/initializers/memory_profiling.rb
module MemoryProfiler
  def self.production_report(**options, &block)
    default_options = {
      ignore_files: Regexp.union(
        /\/gems\//,
        /\/rbenv\//,
        /\/rvm\//,
        /ruby\/\d+\.\d+\.\d+/
      ),
      allow_files: Rails.root.to_s
    }
    report(**default_options.merge(options), &block)
  end
end
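Usage then looks like the following, with ReportBuilder standing in for application code:
report = MemoryProfiler.production_report(top: 25) do
  ReportBuilder.new(account).monthly_summary # hypothetical application code
end
report.pretty_print(scale_bytes: true, to_file: 'log/memory_report.txt')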
Memory profiler data feeds into capacity planning decisions for application scaling. Historical memory usage patterns inform infrastructure requirements and help predict resource needs for traffic growth.
Alerting systems monitor memory profile statistics and trigger notifications when allocation patterns exceed defined thresholds. These alerts enable proactive response to memory-related performance issues before they impact users.
Error Handling & Debugging
Memory profiler handles various error conditions that arise during allocation tracking and report generation. The profiler gracefully handles scenarios where allocation tracking fails to enable or where garbage collection interferes with memory measurements.
Ruby versions without allocation tracking support (before 2.1) cause MemoryProfiler.report to raise NoMethodError when attempting to access ObjectSpace allocation methods. Applications targeting multiple Ruby versions require feature detection before profiling attempts.
def safe_memory_profile(&block)
  return yield unless memory_profiling_supported?
  begin
    MemoryProfiler.report(&block)
  rescue NoMethodError => e
    logger.warn "Memory profiling unavailable: #{e.message}"
    yield
    nil
  rescue StandardError => e
    logger.error "Memory profiling failed: #{e.message}"
    yield
    nil
  end
end
def memory_profiling_supported?
  require 'objspace' # defines ObjectSpace.trace_object_allocations_start
  defined?(MemoryProfiler) &&
    ObjectSpace.respond_to?(:trace_object_allocations_start)
end
Memory profiler may encounter issues when profiling code that modifies ObjectSpace allocation tracking state. Nested profiling attempts or code that directly manipulates allocation tracing can interfere with profiler operation and produce incomplete results.
The profiler itself does not cap how much allocation data it records, so profiling allocation-heavy code adds substantial memory pressure of its own. Applications that need a safety limit can enforce one around the profiled block, for example by comparing GC.stat(:total_allocated_objects) before and after the run and flagging runs that blow past an allocation budget.
class SafeMemoryProfiler
  class MemoryLimitExceeded < StandardError; end
  # Profiles the block, then flags runs that allocated more objects than the
  # budget allows. Ruby has no per-allocation callback, so the budget is
  # checked after the block completes rather than aborting it mid-flight.
  def self.report(max_allocations: 100_000, **options, &block)
    allocated_before = GC.stat(:total_allocated_objects)
    result = MemoryProfiler.report(**options, &block)
    allocated = GC.stat(:total_allocated_objects) - allocated_before
    raise MemoryLimitExceeded if allocated > max_allocations
    result
  rescue MemoryLimitExceeded
    warn "Memory profiling exceeded allocation budget: #{allocated} objects"
    result
  end
end
Memory profiler debugging benefits from Ruby's GC stress mode (GC.stress = true), which forces garbage collection as often as possible, and from GC::Profiler for detailed collection logging. These debugging modes reveal garbage collection behavior during profiling and help identify timing-related issues.
Inconsistent report data is more often a sign of nested profiling or other code toggling allocation tracing than of a profiler bug. The profiler does not validate its own output, so a lightweight sanity check on report totals catches obviously bad data before it feeds into dashboards or comparisons.
def debug_memory_profile
  GC.stress = true if Rails.env.development?
  report = MemoryProfiler.report do
    # Code under investigation
    problematic_method
  end
  validate_report(report)
  report
ensure
  GC.stress = false if Rails.env.development?
end
def validate_report(report)
  return unless report
  if report.total_allocated < 0 || report.total_retained < 0
    raise "Invalid allocation counts in memory report"
  end
  if report.total_retained > report.total_allocated
    logger.warn "Retained objects exceed allocated objects - possible measurement error"
  end
end
Memory profiler may produce inconsistent results when profiling multi-threaded code: allocation tracing is process-global, so allocations made by other threads while the block runs are attributed to the report. Accurate attribution requires coordinating threads so that only the work under analysis executes during the profiled block.
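A small demonstration of the effect; the exact counts depend on thread scheduling:
# Allocation tracing is process-global, so the background thread's strings
# are attributed to the report captured on the main thread
background = Thread.new { 1_000.times { +"allocated elsewhere" } }
report = MemoryProfiler.report { sleep 0.05 }
background.join
puts report.total_allocated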
Advanced Usage
Memory profiler supports sophisticated filtering and analysis patterns for complex applications. Advanced filtering combines multiple criteria to isolate specific allocation patterns while excluding irrelevant framework or library code from analysis.
The trace option restricts tracking to an explicit list of classes, while allow_files narrows tracking to allocation sites in matching files. Combining the two isolates specific allocation patterns, such as objects created inside application models, without the noise of framework internals.
# Track only String, Hash, and Array allocations originating in app/models
report = MemoryProfiler.report(trace: [String, Hash, Array], allow_files: 'app/models') do
  User.includes(:posts).limit(100).each(&:serialize)
end
Memory profiler enables comparative analysis by capturing multiple reports and analyzing differences in allocation patterns. This approach identifies memory usage changes between code versions or different execution paths.
class MemoryComparator
  # Each implementation is passed as a callable (proc or lambda) to profile
  def self.compare_implementations(name_a, name_b, implementation_a, implementation_b)
    report_a = MemoryProfiler.report(&implementation_a)
    report_b = MemoryProfiler.report(&implementation_b)
    {
      name_a => {
        allocated: report_a.total_allocated,
        retained: report_a.total_retained,
        memory: report_a.total_allocated_memsize
      },
      name_b => {
        allocated: report_b.total_allocated,
        retained: report_b.total_retained,
        memory: report_b.total_allocated_memsize
      },
      improvement: calculate_improvement(report_a, report_b)
    }
  end
  def self.calculate_improvement(before, after)
    {
      allocated_pct: ((before.total_allocated - after.total_allocated).to_f / before.total_allocated * 100).round(2),
      memory_pct: ((before.total_allocated_memsize - after.total_allocated_memsize).to_f / before.total_allocated_memsize * 100).round(2)
    }
  end
end
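A usage sketch comparing two hypothetical string-building approaches:
comparison = MemoryComparator.compare_implementations(
  :concatenation, :interpolation,
  -> { 1_000.times { "a" + "b" + "c" } },
  -> { 1_000.times { |i| "value #{i}" } }
)
puts comparison[:improvement]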
Advanced reporting generates custom output formats tailored to specific analysis requirements. The profiler provides access to raw allocation data for building specialized reporting tools and integration with external monitoring systems.
class CustomReporter
  def initialize(report)
    @report = report
  end
  # The by_location / by_class accessors return { data:, count: } entries in
  # recent gem versions, already sorted by count and truncated to the top N
  def allocation_hotspots(limit: 10)
    @report.allocated_objects_by_location.first(limit).map do |entry|
      file, _, line = entry[:data].rpartition(':')
      {
        file: file,
        line: line.to_i,
        allocations: entry[:count],
        percentage: (entry[:count].to_f / @report.total_allocated * 100).round(2)
      }
    end
  end
  def memory_by_class
    @report.allocated_memory_by_class.map do |entry|
      {
        class: entry[:data],
        memory_bytes: entry[:count],
        memory_mb: (entry[:count] / 1024.0 / 1024.0).round(2),
        percentage: (entry[:count].to_f / @report.total_allocated_memsize * 100).round(2)
      }
    end
  end
  def to_json
    require 'json'
    {
      summary: {
        total_allocated: @report.total_allocated,
        total_retained: @report.total_retained,
        total_memory: @report.total_allocated_memsize
      },
      hotspots: allocation_hotspots,
      memory_by_class: memory_by_class.first(15)
    }.to_json
  end
end
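The reporter can then feed dashboards or log pipelines:
reporter = CustomReporter.new(report)
reporter.allocation_hotspots(limit: 5).each do |spot|
  puts "#{spot[:file]}:#{spot[:line]} - #{spot[:allocations]} allocations (#{spot[:percentage]}%)"
end
File.write('tmp/memory_summary.json', reporter.to_json)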
Memory profiler integrates with benchmarking tools for comprehensive performance analysis combining memory usage with execution time measurements. This integration reveals correlations between memory allocation patterns and performance characteristics.
require 'benchmark'
def comprehensive_benchmark
  results = {}
  [10, 100, 1000, 10000].each do |size|
    time = Benchmark.realtime do
      results[size] = MemoryProfiler.report do
        process_dataset(size)
      end
    end
    report = results[size]
    puts "Size #{size}: #{time.round(3)}s, #{report.total_allocated} objects, #{report.total_allocated_memsize} bytes"
  end
  results
end
Advanced memory profiling tracks object lifecycles across multiple garbage collection cycles. The gem itself measures retention after a single GC pass at report time, so spotting objects that survive repeated collections, often a sign of memory leaks or excessive caching, requires follow-up measurements.
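A rough sketch of that kind of follow-up measurement, with build_cache standing in for application code:
report = MemoryProfiler.report { build_cache } # build_cache is hypothetical
retained_at_report = report.total_retained
3.times { GC.start }
# Objects still reachable now have survived several further collections;
# ObjectSpace.count_objects gives a coarse per-type census for comparison
census = ObjectSpace.count_objects
puts "Retained at report time: #{retained_at_report}"
puts "Live strings now: #{census[:T_STRING]}"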
Allocation data is grouped by gem, file, location, and class; more specific categorization based on object characteristics, allocation context, or usage patterns is built on top of that output and enables targeted optimization of particular object types or allocation sites.
Reference
MemoryProfiler Module Methods
Method | Parameters | Returns | Description |
---|---|---|---|
MemoryProfiler.report(**opts, &block) | options (Hash), block (Proc) | Report | Profiles memory allocations during block execution |
MemoryProfiler.start(**opts) | options (Hash) | nil | Begins allocation tracking with specified options |
MemoryProfiler.stop | none | Report | Stops tracking and returns allocation report |
Report Instance Methods
In recent gem versions, each grouped accessor below returns an array of { data:, count: } entries sorted by count and truncated to the configured top value.
Method | Parameters | Returns | Description |
---|---|---|---|
#pretty_print(**opts) | options (Hash) | nil | Prints formatted allocation statistics to stdout or a file |
#total_allocated | none | Integer | Count of all objects allocated during profiling |
#total_retained | none | Integer | Count of objects surviving garbage collection |
#total_allocated_memsize | none | Integer | Total memory allocated in bytes |
#total_retained_memsize | none | Integer | Memory retained after garbage collection in bytes |
#allocated_memory_by_file | none | Array | Memory allocation grouped by source file |
#allocated_memory_by_location | none | Array | Memory allocation grouped by file:line location |
#allocated_memory_by_class | none | Array | Memory allocation grouped by object class |
#allocated_objects_by_file | none | Array | Object count grouped by source file |
#allocated_objects_by_location | none | Array | Object count grouped by file:line location |
#allocated_objects_by_class | none | Array | Object count grouped by object class |
#retained_memory_by_file | none | Array | Retained memory grouped by source file |
#retained_memory_by_location | none | Array | Retained memory grouped by file:line location |
#retained_memory_by_class | none | Array | Retained memory grouped by object class |
#retained_objects_by_file | none | Array | Retained object count grouped by source file |
#retained_objects_by_location | none | Array | Retained object count grouped by file:line location |
#retained_objects_by_class | none | Array | Retained object count grouped by object class |
Configuration Options
Option | Type | Default | Description |
---|---|---|---|
ignore_files | Regexp | nil | Allocations from files matching the pattern are excluded from tracking |
allow_files | String, Array of Strings | nil | Only allocations from files whose paths match are tracked |
trace | Class, Array of Classes | nil | Restricts tracking to instances of the listed classes |
normalize_paths | Boolean | true | Normalize file paths in allocation locations |
top | Integer | 50 | Number of top allocations shown in reports |
Pretty Print Options
Option | Type | Default | Description |
---|---|---|---|
to_file | String | nil | Write report output to specified file path |
color_output | Boolean | true | Enable colored terminal output |
retained_strings | Integer | 10 | Number of retained strings to display |
allocated_strings | Integer | 10 | Number of allocated strings to display |
scale_bytes | Boolean | false | Display memory sizes in KB/MB units |
normalize_paths | Boolean | true | Show normalized file paths in output |
Object Allocation Data Structure
{
  file: "/path/to/file.rb",
  line: 42,
  class_name: "String",
  method_id: :method_name,
  memsize: 240,
  class_path: "String"
}
Common File Patterns for Filtering
Pattern | Description |
---|---|
/\/gems\// | Exclude all gem dependencies |
/#{Rails.root}/ | Include only Rails application files |
/app\/models/ | Include only ActiveRecord models |
/lib\/.*\.rb$/ | Include only library files |
/spec\/.*_spec\.rb$/ | Include only RSpec test files |
/vendor\// | Exclude vendor directory |
Error Classes
Class | Description |
---|---|
NoMethodError | Raised when ObjectSpace allocation tracking unavailable |
ArgumentError | Raised for invalid configuration parameters |
StandardError | Base class for memory profiler runtime errors |