Overview
Memory profiling examines how an application allocates, uses, and releases memory during execution. This process reveals which parts of code create objects, how long those objects persist, and where memory consumption grows unexpectedly. Ruby developers use memory profiling to diagnose performance issues, prevent out-of-memory errors, and reduce infrastructure costs.
The need for memory profiling emerges when applications exhibit symptoms like gradually increasing memory consumption, slow response times, or crashes under load. Unlike CPU profiling that measures time spent in methods, memory profiling tracks object allocation patterns and retention. This distinction matters because memory issues often manifest differently than computational bottlenecks—a method might execute quickly but create thousands of objects that persist for the application lifetime.
Ruby's garbage collector manages memory automatically, but this automation introduces complexity. The GC reclaims memory from unreferenced objects, yet determining when objects become eligible for collection requires understanding reference graphs and object lifecycle. Memory profiling tools expose this hidden layer, showing not just what objects exist but why they remain allocated.
```ruby
# Simple allocation that reveals profiling necessity
class Report
  def generate(records)
    records.map { |r| format_record(r) }.join("\n")
  end

  def format_record(record)
    "#{record.id}: #{record.data.inspect}"
  end
end

# Processing 100,000 records creates temporary strings
# Profiling reveals allocation patterns and retention
```
Memory profiling operates at multiple levels. High-level profiling tracks total memory usage over time, identifying trends and anomalies. Object-level profiling counts instances of each class, revealing which types dominate memory consumption. Allocation profiling captures where objects originate in source code, pinpointing hotspots. Retention profiling determines why objects persist, exposing unintended references.
Key Principles
Memory profiling rests on understanding allocation versus retention. Allocation measures object creation—how many objects a code path instantiates and their sizes. Retention examines object lifetime—which objects survive garbage collection cycles and why references persist. These metrics answer different questions: allocation reveals code patterns that create temporary waste, while retention exposes memory leaks and caching issues.
The Ruby object model treats most values as references to heap-allocated objects. Creating a string, array, or custom class instance allocates memory on the heap. Small integers, symbols, floats (on 64-bit platforms), true, false, and nil use immediate representations that avoid per-instance allocation, but most application data structures require heap memory. This design simplifies programming but makes understanding allocation patterns essential for performance.
```ruby
# Allocation patterns with different data types
1000.times do |i|
  x = i             # No allocation - small integers are immediates
  y = :symbol       # No allocation - symbols are interned
  z = "string #{i}" # Allocates a new string (interpolation also allocates intermediates)
  a = [i, i + 1]    # Allocates an array object
end
# Result: at least 2000 allocations (1000 result strings + 1000 arrays),
# plus intermediate strings created by interpolation
# Integers and symbols reuse existing representations
```
Garbage collection cycles reclaim unreferenced memory, but collection timing affects profiling accuracy. Ruby uses generational garbage collection with minor and major cycles. Minor cycles scan young objects; major cycles examine the entire heap. Objects surviving multiple minor cycles are promoted to the old generation, which lowers collection overhead but means leaked objects are only revisited during the rarer major cycles.
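The generational split is visible directly in GC.stat: a burst of short-lived allocations drives the minor count up while the major count barely moves. A minimal sketch (exact counts vary by Ruby version and heap configuration):

```ruby
# Compare minor and major GC activity across an allocation burst
before = GC.stat

# Allocate many short-lived objects; most die young, so minor GCs do the work
200_000.times { Object.new }

after = GC.stat
minor = after[:minor_gc_count] - before[:minor_gc_count]
major = after[:major_gc_count] - before[:major_gc_count]
puts "minor GCs: #{minor}, major GCs: #{major}"
```

On a default heap this typically shows several minor cycles and zero or few major cycles.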
Object retention occurs through references stored in variables, constants, data structures, or closures. A single reference keeps an object alive regardless of how many other references expired. Memory leaks in Ruby typically involve unintended references—objects added to caches without expiration, event listeners never removed, or closures capturing large contexts.
```ruby
# Retention through closure capture
class EventProcessor
  def initialize
    @handlers = []
  end

  def on_event(&block)
    @handlers << block
  end

  def process(data)
    # The block captures `data`; every call retains another reference
    on_event { |e| puts "Processed: #{data.inspect}" }
  end
end

# Each process call adds a closure retaining its data argument
# The handlers array grows without limit
```
Heap snapshots capture complete memory state at a moment, recording every object with its class, size, and references. Comparing snapshots reveals allocation between points, showing which code paths created new objects. Snapshot differentials isolate memory growth, filtering stable baseline allocations from transient or leaked objects.
Reference graphs represent connections between objects, forming directed graphs where edges indicate references. Finding root paths traces connections from persistent roots (globals, constants, stack frames) to retained objects. This analysis answers why objects persist when code expects collection.
```ruby
# Reference path example
CACHE = {}

def store_user(user)
  CACHE[user.id] = user
end

# Reference path: CACHE (constant) -> Hash -> User object
# User remains allocated because the constant root retains the hash
```
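A single hop of that graph is inspectable from Ruby itself: `ObjectSpace.reachable_objects_from` returns the objects something directly references (including internal entries such as the receiver's class). A small sketch:

```ruby
require 'objspace'

user   = { id: 1, name: "Ada" }
holder = [user]

# Objects the array directly references: its class plus its elements
refs = ObjectSpace.reachable_objects_from(holder)

puts refs.include?(user)   # => true: the hash is directly reachable
puts refs.map { |r| r.class }.uniq.inspect
```

Tracing a full root path means repeating this walk from GC roots toward the target; tools like memory_profiler and heap-dump analyzers automate that traversal.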
Memory pressure describes system state when available memory decreases. High memory pressure triggers frequent garbage collection, degrading performance. Monitoring memory pressure guides profiling priorities—applications approaching memory limits require immediate optimization, while stable memory usage allows deferred analysis.
Ruby Implementation
Ruby provides built-in APIs for memory profiling through ObjectSpace and GC modules. These interfaces expose internal VM state, enabling measurement without external dependencies. The ObjectSpace module grants access to all allocated objects, while GC::Profiler tracks garbage collection statistics.
Counting objects uses ObjectSpace.count_objects_size or iteration with ObjectSpace.each_object. The count_objects_size method returns a hash mapping internal object types (such as :T_STRING or :T_ARRAY) to total bytes, providing aggregate statistics. Iteration enables per-class analysis but imposes far higher overhead, since it walks every live object on the heap.
```ruby
require 'objspace'

# Count objects and memory by class
counts = Hash.new(0)
sizes  = Hash.new(0)

ObjectSpace.each_object do |obj|
  next unless obj.class.name
  counts[obj.class.name] += 1
  sizes[obj.class.name]  += ObjectSpace.memsize_of(obj)
end

# Display top memory consumers
sizes.sort_by { |_, bytes| -bytes }.first(10).each do |klass, bytes|
  count = counts[klass]
  puts "#{klass}: #{count} objects, #{bytes} bytes (avg: #{bytes / count})"
end
```
Allocation tracking captures where objects originate using ObjectSpace.trace_object_allocations. This feature records source file, line number, and method for each allocation. Enabling tracing introduces overhead but provides precise attribution for debugging hotspots.
```ruby
require 'objspace'

def profile_allocations
  ObjectSpace.trace_object_allocations_start
  result = yield

  # Analyze allocations recorded while tracing was active
  allocations = []
  ObjectSpace.each_object do |obj|
    file = ObjectSpace.allocation_sourcefile(obj)
    line = ObjectSpace.allocation_sourceline(obj)
    next unless file && line
    allocations << {
      class: obj.class.name,
      file: file,
      line: line,
      size: ObjectSpace.memsize_of(obj)
    }
  end

  # Group by location
  by_location = allocations.group_by { |a| "#{a[:file]}:#{a[:line]}" }
  by_location.each do |location, allocs|
    total_size = allocs.sum { |a| a[:size] }
    puts "#{location}: #{allocs.size} objects, #{total_size} bytes"
  end
  result
ensure
  ObjectSpace.trace_object_allocations_stop
end

# Usage
profile_allocations do
  1000.times { |i| "string #{i}" }
end
```
GC::Profiler measures garbage collection performance, recording collection count, total time, and per-cycle statistics. The profiler adds little overhead per collection, though long-running processes should periodically call GC::Profiler.clear so accumulated records do not grow without bound.
```ruby
GC::Profiler.enable

# Run code that triggers collections
1000.times do
  Array.new(10000) { |i| "item #{i}" }
end

# Report statistics
puts GC::Profiler.result
GC::Profiler.disable
# Output shows:
# GC count, time spent, heap growth
```
The GC.stat method returns hash with detailed garbage collection metrics including heap size, allocated objects, and collection counts. Monitoring these values over time reveals memory trends.
```ruby
def print_gc_stats
  stats = GC.stat
  puts "Heap slots: #{stats[:heap_available_slots]}"
  puts "Live objects: #{stats[:heap_live_slots]}"
  puts "Free slots: #{stats[:heap_free_slots]}"
  puts "Total allocated: #{stats[:total_allocated_objects]}"
  puts "Total freed: #{stats[:total_freed_objects]}"
  puts "Minor GC count: #{stats[:minor_gc_count]}"
  puts "Major GC count: #{stats[:major_gc_count]}"
end

# Monitor before and after operations
print_gc_stats
perform_work
print_gc_stats
```
ObjectSpace.memsize_of calculates individual object size in bytes. This method includes object header overhead and directly referenced memory but excludes referenced objects. Calculating deep size requires traversing reference graphs.
```ruby
require 'objspace'

# Object sizes vary by content
small_str   = "x"
large_str   = "x" * 1000
empty_array = []
full_array  = Array.new(100, 0)

puts "Small string: #{ObjectSpace.memsize_of(small_str)} bytes"
puts "Large string: #{ObjectSpace.memsize_of(large_str)} bytes"
puts "Empty array: #{ObjectSpace.memsize_of(empty_array)} bytes"
puts "Full array: #{ObjectSpace.memsize_of(full_array)} bytes"
# Note: Full array size excludes referenced objects
# Small integers are immediates and not allocated separately
```
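A deep size can be approximated by walking the reference graph with `ObjectSpace.reachable_objects_from` and summing `memsize_of` for each object once. The helper below is an illustrative sketch: it deliberately skips classes and modules so the walk stays bounded, and ignores VM-internal objects.

```ruby
require 'objspace'
require 'set'

# Shallow size of the root plus everything reachable from it, counted once
def deep_memsize(root)
  seen  = Set.new
  queue = [root]
  total = 0
  until queue.empty?
    obj = queue.pop
    next if obj.is_a?(Module)             # don't descend into class hierarchies
    next unless seen.add?(obj.object_id)  # count each object once
    total += ObjectSpace.memsize_of(obj)
    ObjectSpace.reachable_objects_from(obj)&.each do |ref|
      queue << ref unless ref.is_a?(ObjectSpace::InternalObjectWrapper)
    end
  end
  total
end

nested = { a: "x" * 100, b: ["y" * 100] }
puts "shallow: #{ObjectSpace.memsize_of(nested)} bytes"
puts "deep:    #{deep_memsize(nested)} bytes"
```

The deep figure includes the embedded strings and array, so it exceeds the shallow hash size.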
Tools & Ecosystem
The memory_profiler gem provides high-level profiling with allocation tracking and retention analysis. This gem wraps ObjectSpace APIs with reporting focused on finding optimization opportunities. Reports group allocations by gem, file, location, and class, highlighting hotspots.
```ruby
require 'memory_profiler'

report = MemoryProfiler.report do
  # Code to profile
  1000.times do |i|
    user = { id: i, name: "User #{i}", email: "user#{i}@example.com" }
    process_user(user)
  end
end

# Print detailed report
report.pretty_print
# Report shows:
# - Total allocated/retained objects
# - Allocation breakdown by gem/file/location
# - String/array allocations
# - Retained memory locations
```
The derailed_benchmarks gem measures memory usage in Rails applications, comparing branches and identifying regression sources. This tool provides commands for profiling memory per-request and detecting allocation increases.
```ruby
# In Gemfile
gem 'derailed_benchmarks', group: :development

# Profile memory for a specific endpoint (from the command line):
#   bundle exec derailed exec perf:mem TEST_COUNT=100 PATH_TO_HIT=/users/1
# Output shows memory allocation per request
# Useful for preventing memory regressions in PRs
```
The stackprof gem profiles both CPU and object allocations, providing flamegraph visualization. While primarily CPU-focused, stackprof's allocation mode tracks where objects originate with call-stack context.
```ruby
require 'stackprof'

# Profile object allocations
StackProf.run(mode: :object, out: 'allocations.dump') do
  # Code to profile
  1000.times { User.create_guest }
end

# Generate flamegraph from the dump file:
#   stackprof allocations.dump --flamegraph > allocations.html
```
The gc_tracer gem records garbage collection events with detailed metrics. This tool captures per-collection statistics, enabling time-series analysis of GC behavior and memory pressure.
```ruby
require 'gc_tracer'

# Log GC events to file
GC::Tracer.start_logging('gc.log') do
  # Run code that triggers collections
  perform_memory_intensive_work
end

# Parse the log to analyze collection patterns:
# shows minor/major GC frequency, heap growth, collection timing
```
The get_process_mem gem measures total process memory from OS perspective, useful for tracking absolute memory consumption rather than Ruby heap statistics.
```ruby
require 'get_process_mem'

mem = GetProcessMem.new
initial_mem = mem.mb
puts "Current memory: #{initial_mem} MB"

# Run code
perform_work

puts "After work: #{mem.mb} MB"
puts "Increase: #{mem.mb - initial_mem} MB"
```
The rbtrace gem enables runtime tracing of method calls and allocations without modifying code. This tool attaches to running processes, injecting probes that report allocation hotspots.
```ruby
# In target application
require 'rbtrace'

# From a separate terminal:
#   rbtrace -p <pid> --gc --methods
# Shows live allocation data from the running process
```
The memory_metrics library tracks high-level memory statistics for monitoring systems. This tool exports metrics in formats compatible with Prometheus, Datadog, and other monitoring platforms.
```ruby
require 'memory_metrics'

# Report current metrics
metrics = MemoryMetrics.current
puts "RSS: #{metrics.rss} MB"
puts "Heap: #{metrics.heap} MB"
puts "Objects: #{metrics.objects}"

# Integrate with monitoring system
MetricsExporter.push(metrics)
```
Practical Examples
Finding memory leaks requires capturing snapshots before and after suspected leak operations, then analyzing retained objects. This example demonstrates leak detection in a caching scenario.
```ruby
require 'objspace'

class LeakDetector
  def initialize
    # Tracing must be active for allocation file/line attribution
    ObjectSpace.trace_object_allocations_start
    @baseline = nil
  end

  def capture_baseline
    GC.start
    @baseline = capture_snapshot
  end

  def find_leaks
    GC.start
    current = capture_snapshot

    # Objects present now but not in the baseline
    leaked = current.reject { |id, _| @baseline.key?(id) }

    # Group by class and report sample locations
    leaked.group_by { |_, info| info[:class] }.each do |klass, objects|
      puts "#{klass}: #{objects.size} leaked objects"
      objects.first(5).each do |_, info|
        puts "  #{info[:file]}:#{info[:line]}"
      end
    end
  end

  private

  def capture_snapshot
    snapshot = {}
    ObjectSpace.each_object do |obj|
      snapshot[obj.object_id] = {
        class: obj.class.name,
        file: ObjectSpace.allocation_sourcefile(obj),
        line: ObjectSpace.allocation_sourceline(obj)
      }
    end
    snapshot
  end
end

# Usage
CACHE = {}
detector = LeakDetector.new

# Establish baseline
detector.capture_baseline

# Run suspected code
1000.times do |i|
  CACHE[i] = { data: "x" * 1000 }
end

# Check for leaks
detector.find_leaks
```
Optimizing string allocations reduces memory pressure in text-processing code. This example profiles string creation patterns and applies optimizations.
```ruby
require 'memory_profiler'

# Inefficient version with many allocations
def format_report_slow(records)
  lines = []
  records.each do |record|
    lines << "ID: #{record[:id]}"
    lines << "Name: #{record[:name]}"
    lines << "Status: #{record[:status]}"
    lines << "---"
  end
  lines.join("\n")
end

# Optimized version with fewer allocations
def format_report_fast(records)
  records.map { |r|
    "ID: #{r[:id]}\nName: #{r[:name]}\nStatus: #{r[:status]}\n---"
  }.join("\n")
end

# Profile both versions
records = Array.new(1000) { |i|
  { id: i, name: "Record #{i}", status: "active" }
}

puts "Slow version:"
report1 = MemoryProfiler.report do
  format_report_slow(records)
end
puts "Allocated: #{report1.total_allocated_memsize} bytes"

puts "\nFast version:"
report2 = MemoryProfiler.report do
  format_report_fast(records)
end
puts "Allocated: #{report2.total_allocated_memsize} bytes"
```
Tracking allocation hotspots identifies code paths creating excessive objects. This example uses allocation tracing to find optimization targets.
```ruby
require 'objspace'

def profile_allocations_by_location
  ObjectSpace.trace_object_allocations_start
  yield

  # Collect allocation data
  locations = Hash.new { |h, k| h[k] = { count: 0, size: 0 } }
  ObjectSpace.each_object do |obj|
    file = ObjectSpace.allocation_sourcefile(obj)
    line = ObjectSpace.allocation_sourceline(obj)
    next unless file && line
    location = "#{file}:#{line}"
    locations[location][:count] += 1
    locations[location][:size]  += ObjectSpace.memsize_of(obj)
  end

  # Report top allocators
  locations.sort_by { |_, v| -v[:size] }.first(20).each do |location, stats|
    puts "#{location}: #{stats[:count]} objects, #{stats[:size]} bytes"
  end
ensure
  ObjectSpace.trace_object_allocations_stop
end

# Profile application code
profile_allocations_by_location do
  process_data_from_api
  generate_reports
  send_notifications
end
```
Analyzing retained memory reveals why objects persist after code expects collection. This example examines retention patterns in a web request handler.
```ruby
require 'memory_profiler'

class RequestHandler
  def initialize
    @cache = {}
  end

  def handle_request(params)
    # Intended: temporary data for the request
    data = fetch_data(params)
    process_data(data)
    # Unintended: cache retains references
    @cache[params[:id]] = data
  end
end

# Profile retention
handler = RequestHandler.new
report = MemoryProfiler.report do
  100.times do |i|
    handler.handle_request(id: i)
  end
end

# Examine retained objects
puts "Retained memory: #{report.total_retained_memsize} bytes"
puts "\nRetained by location:"
report.retained_memsize_by_location.each do |entry|
  puts "#{entry[:data]}: #{entry[:count]} bytes"
end
# Shows @cache retaining all data objects
```
Performance Considerations
Profiling overhead affects measurement accuracy and application performance. Allocation tracking with ObjectSpace.trace_object_allocations introduces a 30-50% slowdown, while walking the heap with ObjectSpace.each_object touches every live object and is expensive on large heaps. Production profiling requires balancing detail against performance impact.
Sampling reduces overhead by profiling subsets of operations. Instead of tracking every allocation, sampling profiles every Nth request or time interval. This approach maintains acceptable performance while gathering representative data.
```ruby
# Sample-based profiling for production
class SamplingProfiler
  def initialize(sample_rate: 0.01)
    @sample_rate = sample_rate
    @counter = 0
  end

  def profile_request
    @counter += 1
    if rand < @sample_rate
      # Profile this request
      report = MemoryProfiler.report do
        yield
      end
      # Store or transmit report
      persist_profile(report, @counter)
    else
      # Skip profiling
      yield
    end
  end
end

# Usage: profiles ~1% of requests
profiler = SamplingProfiler.new(sample_rate: 0.01)
profiler.profile_request { handle_user_request }
```
Garbage collection timing affects retained object counts. Forcing collection with GC.start before measurement ensures unreferenced objects clear, but frequent forced collection degrades performance. Balancing collection frequency requires understanding collection costs versus measurement accuracy needs.
Memory allocation patterns interact with garbage collection strategies. Allocating many short-lived objects triggers frequent minor collections, while retaining objects moves them to older generations. Understanding generational collection informs optimization strategies—reducing allocation volume decreases collection frequency more than optimizing individual allocation sizes.
```ruby
# Measuring GC impact on performance
def with_gc_stats
  GC.start # start from a clean heap so runs are comparable
  before = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  before_stats = GC.stat
  yield
  after = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  after_stats = GC.stat
  {
    time: after - before,
    minor_gc: after_stats[:minor_gc_count] - before_stats[:minor_gc_count],
    major_gc: after_stats[:major_gc_count] - before_stats[:major_gc_count]
  }
end

# Compare allocation strategies
stats1 = with_gc_stats do
  10_000.times { Array.new(100) { |i| "item #{i}" } }
end

stats2 = with_gc_stats do
  template = Array.new(100) { |i| "item #{i}" }
  10_000.times { template.dup }
end

puts "Strategy 1: #{stats1[:time]}s, #{stats1[:minor_gc]} minor GC"
puts "Strategy 2: #{stats2[:time]}s, #{stats2[:minor_gc]} minor GC"
```
Memory pressure monitoring guides optimization priorities. Applications approaching memory limits require immediate action, while stable memory usage allows gradual optimization. Monitoring process RSS, heap size, and collection frequency reveals pressure trends.
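A coarse pressure gauge needs nothing beyond GC.stat: heap occupancy (live slots over available slots) alongside collection counts. The helper below is illustrative; alerting thresholds are application-specific:

```ruby
# Snapshot occupancy and GC cadence from the running VM
def memory_pressure
  s = GC.stat
  {
    occupancy: (s[:heap_live_slots].to_f / s[:heap_available_slots]).round(2),
    minor_gc:  s[:minor_gc_count],
    major_gc:  s[:major_gc_count]
  }
end

p memory_pressure
# Occupancy persistently near 1.0 plus rising major GC counts suggests pressure
```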
Object pooling reduces allocation overhead by reusing objects instead of creating new instances. This technique works for objects with expensive initialization or high allocation frequency, but requires careful management to avoid unintended sharing.
```ruby
# Object pool for expensive resources
class ConnectionPool
  def initialize(size: 10)
    @pool = Array.new(size) { create_connection }
    @mutex = Mutex.new
  end

  def with_connection
    conn = @mutex.synchronize { @pool.pop }
    raise "pool exhausted" unless conn
    begin
      yield conn
    ensure
      conn.reset
      @mutex.synchronize { @pool.push(conn) }
    end
  end

  private

  def create_connection
    # Expensive allocation
    Connection.new
  end
end

# Reduces allocation from per-request to pool-size
pool = ConnectionPool.new(size: 10)
pool.with_connection { |conn| conn.execute(query) }
```
Common Pitfalls
Misinterpreting retained memory counts leads to false leak conclusions. Objects marked as retained include both leaked objects and legitimately cached data. Distinguishing requires analyzing reference paths and validating retention intent.
```ruby
# False positive: intentional cache looks like a leak
class UserCache
  def initialize
    @cache = {}
  end

  def find(id)
    @cache[id] ||= User.find(id)
  end
end

# Profiling shows retained User objects
# This is intentional caching, not a leak
# Need business logic context to confirm
```
Profiling non-representative workloads produces misleading results. Development environments with small datasets underestimate production memory usage. Profiling requires realistic data volumes and query patterns.
Ignoring Ruby's copy-on-write behavior affects memory measurement in forking servers. Forked processes share memory pages until modifications occur. Allocating objects after forking increases per-worker memory, defeating CoW benefits.
```ruby
# CoW-unfriendly: loads data after fork
class Application
  def initialize
    @config = load_config
  end

  def start
    # Fork workers
    4.times do
      fork do
        # Loading large data after fork breaks CoW
        @data = load_large_dataset
        serve_requests
      end
    end
  end
end

# CoW-friendly: loads before fork
class BetterApplication
  def initialize
    @config = load_config
    @data = load_large_dataset # Load once before fork
  end

  def start
    4.times { fork { serve_requests } }
  end
end
```
Measuring during garbage collection produces inconsistent results. Collection timing affects live object counts and memory sizes. Forcing collection with GC.start before measurement reduces variability but impacts performance.
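A small illustration of the effect: reading heap_live_slots right after an allocation burst still counts dead-but-unswept slots, while reading it after an explicit GC.start reflects actual retention (exact numbers vary run to run):

```ruby
# Stabilize a live-object measurement by collecting first
def live_slots_after_gc
  GC.start # reclaim unreferenced objects before counting
  GC.stat[:heap_live_slots]
end

100_000.times { Object.new }         # pure garbage
noisy  = GC.stat[:heap_live_slots]   # may include uncollected garbage
stable = live_slots_after_gc         # reflects retained objects only

puts "before GC.start: #{noisy}, after: #{stable}"
```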
Overlooking string encoding overhead underestimates memory usage. Ruby strings carry encoding metadata, and multibyte characters require more bytes per character, so a string of ASCII characters consumes less memory than a string of multibyte UTF-8 characters with the same character count.
```ruby
require 'objspace'

ascii = "x" * 1000
utf8  = "é" * 1000

puts "ASCII string: #{ObjectSpace.memsize_of(ascii)} bytes"
puts "UTF-8 string: #{ObjectSpace.memsize_of(utf8)} bytes"
# "é" occupies two bytes in UTF-8, so the second string stores ~2000 bytes of data
```
Forgetting to disable allocation tracing after profiling leaks memory. ObjectSpace.trace_object_allocations_start allocates tracking metadata for every new object, and that metadata accumulates until tracing is stopped and the records are freed with ObjectSpace.trace_object_allocations_clear.
```ruby
# Memory leak: never stops tracing
def bad_profile
  ObjectSpace.trace_object_allocations_start
  perform_work
  # Missing: ObjectSpace.trace_object_allocations_stop
end

# Correct: ensures cleanup and frees tracking metadata
def good_profile
  ObjectSpace.trace_object_allocations_start
  begin
    perform_work
  ensure
    ObjectSpace.trace_object_allocations_stop
    ObjectSpace.trace_object_allocations_clear
  end
end
```
Profiling includes profiler overhead. Memory profiling tools allocate objects to track allocations. Subtracting baseline measurements or using control runs isolates application allocations from profiler noise.
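One gem-free way to do that subtraction is GC.stat's monotonic total_allocated_objects counter: price the measurement harness with an empty block, then subtract that baseline from the real run. A sketch:

```ruby
# Count objects allocated while a block runs
def allocations_during
  before = GC.stat[:total_allocated_objects]
  yield
  GC.stat[:total_allocated_objects] - before
end

baseline = allocations_during { }                        # cost of the harness alone
measured = allocations_during { 1_000.times { "x".dup } }

puts "raw: #{measured}, corrected: #{measured - baseline}"
```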
Reference
ObjectSpace Methods
| Method | Purpose | Overhead |
|---|---|---|
| ObjectSpace.each_object | Iterate all live objects | High - walks entire heap |
| ObjectSpace.count_objects | Count objects by type | Low |
| ObjectSpace.count_objects_size | Count objects with memory size | Low |
| ObjectSpace.memsize_of | Size of individual object | Low |
| ObjectSpace.trace_object_allocations_start | Enable allocation tracking | Medium - 30-50% slowdown |
| ObjectSpace.trace_object_allocations_stop | Disable allocation tracking | Low |
| ObjectSpace.allocation_sourcefile | Source file where allocated | Low |
| ObjectSpace.allocation_sourceline | Source line where allocated | Low |
| ObjectSpace.reachable_objects_from | Objects referenced by target | Medium |
GC Statistics
| Metric | Description | Access Method |
|---|---|---|
| heap_available_slots | Total slots in heap | GC.stat[:heap_available_slots] |
| heap_live_slots | Slots with live objects | GC.stat[:heap_live_slots] |
| heap_free_slots | Available empty slots | GC.stat[:heap_free_slots] |
| total_allocated_objects | Objects allocated since start | GC.stat[:total_allocated_objects] |
| total_freed_objects | Objects freed by GC | GC.stat[:total_freed_objects] |
| minor_gc_count | Minor collection count | GC.stat[:minor_gc_count] |
| major_gc_count | Major collection count | GC.stat[:major_gc_count] |
| malloc_increase_bytes | Malloc growth since last GC | GC.stat[:malloc_increase_bytes] |
Profiling Gems Comparison
| Gem | Best For | Output Format | Overhead |
|---|---|---|---|
| memory_profiler | General profiling and leak detection | Text report with allocation/retention details | Medium |
| derailed_benchmarks | Rails memory regression testing | Per-request memory metrics | Low |
| stackprof | Allocation hotspots with call stacks | Flamegraph visualization | Low |
| gc_tracer | GC behavior analysis | Time-series GC event log | Low |
| get_process_mem | Process-level memory tracking | Simple MB values | Very low |
| rbtrace | Runtime profiling without restart | Live method and allocation traces | Low |
Common Memory Issues
| Issue | Symptom | Detection Method |
|---|---|---|
| Memory leak | Gradual memory growth | Snapshot comparison shows increasing retained objects |
| Excessive allocation | High GC frequency | Allocation profiling shows hotspots |
| Large object retention | Sudden memory jumps | Object size analysis finds large retained instances |
| Cache unbounded growth | Memory grows with usage | Check cache size over time |
| Closure capture | Unexpected retention | Reference path analysis shows closure references |
| String duplication | Many identical strings | Group strings by content and count |
Optimization Strategies
| Strategy | Application | Memory Impact | Code Impact |
|---|---|---|---|
| String freezing | Shared string constants | Reduces duplication | Minimal - add freeze calls |
| Symbol usage | Repeated string keys | Avoids allocation | Change strings to symbols |
| Lazy loading | Conditional data loading | Defers allocation | Requires restructuring |
| Object pooling | Expensive object reuse | Reduces allocation frequency | Moderate - implement pool |
| Streaming processing | Large dataset handling | Constant memory usage | Significant - change algorithms |
| Reference clearing | Explicit nil assignment | Allows earlier collection | Minimal - add nil assignments |
| Cache size limits | Bounded caching | Prevents unbounded growth | Moderate - implement eviction |
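The first row, string freezing, is easy to observe directly: a frozen string literal is interned, so every evaluation shares one object, while unfrozen copies allocate each time:

```ruby
# Two evaluations of each form; compare object identity
plain  = 2.times.map { "status".dup }     # .dup forces a fresh allocation
frozen = 2.times.map { "status".freeze }  # literal.freeze returns the interned copy

puts plain[0].equal?(plain[1])    # => false: two distinct objects
puts frozen[0].equal?(frozen[1])  # => true: one shared frozen instance
```

The `# frozen_string_literal: true` magic comment applies the same interning to every bare string literal in a file.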
GC Tuning Environment Variables
| Variable | Purpose | Example Value |
|---|---|---|
| RUBY_GC_HEAP_INIT_SLOTS | Initial heap slots | 10000 |
| RUBY_GC_HEAP_FREE_SLOTS | Minimum free slots | 4096 |
| RUBY_GC_HEAP_GROWTH_FACTOR | Heap growth multiplier | 1.8 |
| RUBY_GC_HEAP_GROWTH_MAX_SLOTS | Maximum growth per collection | 0 (unlimited) |
| RUBY_GC_MALLOC_LIMIT | Malloc trigger threshold | 16777216 (16MB) |
| RUBY_GC_OLDMALLOC_LIMIT | Old malloc trigger | 16777216 (16MB) |
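These variables are read once at interpreter startup, so they must be set in the environment that launches the process. A hypothetical invocation (values illustrative; slot-variable semantics changed somewhat with the size-pooled heap in Ruby 3.x):

```shell
# Start with a larger initial heap and gentler growth, then inspect the result
RUBY_GC_HEAP_INIT_SLOTS=600000 \
RUBY_GC_HEAP_GROWTH_FACTOR=1.25 \
ruby -e 'puts GC.stat[:heap_available_slots]'
```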