CrackedRuby

Overview

Logging creates a persistent record of application events, errors, and state changes during execution. Unlike debugging tools that require active sessions, logs provide historical context for troubleshooting production issues, monitoring system health, and analyzing user behavior patterns.

Application logs serve multiple audiences with different needs. Developers investigate bugs and trace execution paths through detailed technical output. Operations teams monitor system health and respond to alerts based on error rates and performance metrics. Security analysts examine access patterns and detect anomalous behavior. Product managers analyze feature usage and user journeys.

The challenge in logging lies in balancing information density with noise. Too little logging obscures critical issues, while excessive logging overwhelms storage systems and makes relevant information difficult to locate. Even modest growth in log volume translates to significant infrastructure costs at scale, while a single missing error log can extend incident response times from minutes to hours.

# Minimal logging - insufficient context
logger.info "User logged in"

# Balanced logging - actionable context
logger.info "User authentication successful",
  user_id: user.id,
  ip_address: request.ip,
  auth_method: "oauth2",
  duration_ms: auth_duration

Modern logging practices evolved from simple print statements to structured data formats. Early applications wrote unstructured text to standard output, making automated parsing difficult. Current approaches use structured formats like JSON that support querying, filtering, and aggregation across distributed systems.

Key Principles

Log levels establish a hierarchy of message importance. Each level signals different urgency and audience. DEBUG messages contain detailed execution traces for development troubleshooting. INFO records significant application events like successful requests or state transitions. WARN indicates potential issues that don't interrupt operation, such as deprecated API usage or approaching resource limits. ERROR captures failures that affect specific operations but allow the application to continue. FATAL represents catastrophic failures requiring immediate intervention.

Level selection affects both signal clarity and operational costs. Setting production logging to DEBUG generates gigabytes of data daily for moderately-trafficked applications, overwhelming log aggregation systems and obscuring critical errors. Conversely, logging only ERROR events provides insufficient context for diagnosing intermittent issues.

# Level progression example
logger.debug "SQL query: SELECT * FROM users WHERE id = ?"
logger.info "User profile retrieved", user_id: 123
logger.warn "Response time exceeded threshold", duration_ms: 1500, threshold_ms: 1000
logger.error "Database connection failed", error: e.message, retry_count: 3
logger.fatal "Critical configuration missing", config_file: path

Structured logging formats messages as key-value pairs rather than free-form text. Structured logs enable programmatic querying and aggregation. A log aggregation system can filter all requests exceeding 500ms response time when duration is a discrete field, but extracting response times from text messages requires fragile regular expressions.

The structure should balance flexibility with consistency. Establishing field naming conventions across services prevents confusion when correlating logs. Using snake_case for field names, including units in duration fields, and prefixing boolean fields with "is_" or "has_" creates predictable patterns.
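
These conventions can be encoded in a small normalization helper. A minimal sketch; the `normalize_log_fields` name and the CamelCase-to-snake_case rule are illustrative, not a standard API:

```ruby
require 'json'

# Illustrative helper enforcing the conventions above: snake_case keys
# so fields from different services aggregate together, with units and
# "is_"/"has_" prefixes preserved as part of the field name.
def normalize_log_fields(fields)
  fields.each_with_object({}) do |(key, value), out|
    # CamelCase -> snake_case so "userId" and "user_id" become one field
    name = key.to_s
              .gsub(/([a-z\d])([A-Z])/, '\1_\2')
              .downcase
    out[name] = value
  end
end

entry = normalize_log_fields(
  "userId"      => 42,
  "duration_ms" => 45,
  "is_admin"    => false
)
puts entry.to_json
# => {"user_id":42,"duration_ms":45,"is_admin":false}
```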

Context propagation links related log messages across distributed operations. A single user request might touch multiple services, each generating logs. Request IDs, trace IDs, and correlation IDs connect these disparate messages, enabling reconstruction of the complete request flow.

Context accumulation adds relevant information at each layer. A request entering through an API gateway might generate a request ID, pick up user identity during authentication, collect feature flags during authorization, and accumulate error details during processing. Each log message includes all accumulated context.

# Context propagation through request lifecycle
logger.info "Request received",
  request_id: request_id

logger.info "User authenticated",
  request_id: request_id,
  user_id: user.id

logger.info "Database query executed",
  request_id: request_id,
  user_id: user.id,
  query_duration_ms: 45

Sampling reduces log volume while preserving statistical validity. Recording every cache hit in a high-traffic application generates millions of redundant entries. Sampling logs 1% of cache operations while recording 100% of cache misses maintains visibility into cache effectiveness without overwhelming storage.

Different log types warrant different sampling rates. Sample routine success messages aggressively, but record every error and slow operation. Even during development, consider sampling DEBUG messages when observing general behavior; log them in full only when investigating a specific code path.
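
A differential sampling decision can be as small as one predicate. A sketch; the `sample?` helper and the event names are illustrative:

```ruby
# Differential sampling sketch: record every cache miss, but only a
# configurable fraction of cache hits.
CACHE_HIT_SAMPLE_RATE = 0.01

def sample?(event, rate: CACHE_HIT_SAMPLE_RATE, rng: Random.new)
  return true unless event == :cache_hit  # misses and errors always log
  rng.rand < rate                         # hits log probabilistically
end

# logger.debug("Cache hit", key: cache_key) if sample?(:cache_hit)
```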

Log message immutability preserves audit trails and debugging context. Messages should capture state at the moment of logging without subsequent modification. Mutable log data complicates forensic analysis when investigating security incidents or debugging race conditions.

Semantic meaning takes precedence over human readability in message structure. Logs primarily serve as machine-readable data for aggregation and analysis. Human-readable formatting can occur during display rather than at generation time.
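
Capturing state at the moment of logging can be done by duplicating and freezing the payload before it is written. A sketch; the `snapshot_for_log` helper is illustrative:

```ruby
# Snapshot sketch: duplicate and freeze the payload at log time so
# later mutations cannot rewrite the recorded state.
def snapshot_for_log(fields)
  fields.transform_values { |v| v.dup rescue v }.freeze
end

order = { status: +"pending", total: 100 }
entry = snapshot_for_log(order)

order[:status] << "_cancelled"  # mutation after logging...
# ...leaves the snapshot untouched: entry[:status] is still "pending"
```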

Ruby Implementation

Ruby's standard library includes the Logger class for basic logging functionality. Logger writes messages to IO streams like files or stdout, formats messages with timestamps and severity labels, and supports log rotation based on age or size.

require 'logger'

# Basic logger configuration
logger = Logger.new('application.log')
logger.level = Logger::INFO

logger.debug "This won't appear in logs"
logger.info "Application started"
logger.error "Connection failed: #{exception.message}"  # stdlib Logger takes a single message argument

Logger provides five standard severity levels: DEBUG (0), INFO (1), WARN (2), ERROR (3), and FATAL (4), plus UNKNOWN (5) for messages that should always be logged. Setting the logger level filters messages below that threshold. Production applications typically run at INFO or WARN level, while development environments use DEBUG.

The progname parameter identifies the component generating the log message. This becomes critical in applications with multiple subsystems logging to the same destination.

# Component-specific loggers
database_logger = Logger.new('logs/database.log', progname: 'Database')
api_logger = Logger.new('logs/api.log', progname: 'API')

database_logger.info "Query executed"  # Prefixed with "Database"
api_logger.info "Request processed"    # Prefixed with "API"

Rails extends Ruby's Logger with additional features including tagged logging, log level filtering per environment, and automatic request ID tracking. Rails logs include request paths, HTTP methods, controller actions, database query times, and view rendering durations.

# Rails logger with tags
Rails.logger.tagged("UserController", current_user.id) do
  Rails.logger.info "Profile update initiated"
  # All logs within this block include tags
end

Structured logging with semantic_logger provides JSON formatting, context propagation, and performance metrics. The gem supports multiple appenders, allowing simultaneous output to files, stdout, and log aggregation services.

require 'semantic_logger'

SemanticLogger.default_level = :info
SemanticLogger.add_appender(file_name: 'application.log', formatter: :json)

logger = SemanticLogger['UserService']

logger.info "User created",
  user_id: user.id,
  email: user.email,
  registration_source: params[:source],
  duration: (Time.now - start_time) * 1000

Semantic Logger automatically captures context such as thread ID, host name, application name, and environment. Its measure methods (measure_info, measure_debug, and so on) time a block and log the duration automatically.

logger.measure_info "Database query" do
  User.where(active: true).limit(100).to_a
end
# Outputs: Database query (duration: 45.2ms)

Custom formatters control log output structure. Logger's default formatter produces human-readable text, but production systems often require JSON for parsing by log aggregation tools.

require 'json'

class JsonFormatter < Logger::Formatter
  def call(severity, timestamp, progname, msg)
    {
      timestamp: timestamp.iso8601,
      severity: severity,
      progname: progname,
      message: msg.is_a?(String) ? msg : msg.inspect
    }.to_json + "\n"
  end
end

logger = Logger.new(STDOUT)
logger.formatter = JsonFormatter.new

Log rotation prevents disk space exhaustion. Logger supports rotation by file size or time period. Size-based rotation creates new files when the current file reaches a threshold. Time-based rotation creates new files daily, weekly, or monthly.

# Rotate when file reaches 10MB, keep 5 old files
logger = Logger.new('application.log', 5, 10 * 1024 * 1024)

# Rotate daily
logger = Logger.new('application.log', 'daily')

Thread-safe logging prevents interleaved output. Ruby's Logger synchronizes write operations, but application code must avoid splitting related log statements across multiple calls when atomic output is required.

# Atomic logging - single call
logger.info "Operation completed", steps: steps, duration: duration

# Non-atomic - messages may interleave with other threads
logger.info "Operation started"
perform_operation
logger.info "Operation completed"

Common Patterns

Request tracking assigns unique identifiers to each request for correlation across logs. Generate request IDs at system entry points and propagate them through all subsequent operations. Rails includes this pattern by default via ActionDispatch::RequestId middleware.

class RequestTrackingMiddleware
  def initialize(app)
    @app = app
    @logger = SemanticLogger['RequestTracking']  # accepts a message plus payload hash
  end

  def call(env)
    request_id = SecureRandom.uuid
    env['REQUEST_ID'] = request_id

    Thread.current[:request_id] = request_id

    @logger.info "Request started",
      request_id: request_id,
      path: env['PATH_INFO'],
      method: env['REQUEST_METHOD']

    status, headers, response = @app.call(env)

    @logger.info "Request completed",
      request_id: request_id,
      status: status

    [status, headers, response]
  ensure
    Thread.current[:request_id] = nil
  end
end

Error context enrichment captures surrounding state when exceptions occur. Recording only the exception message and stack trace often provides insufficient information for debugging. Include relevant variables, configuration values, and execution context.

def process_payment(order, payment_method)
  charge = PaymentGateway.charge(
    amount: order.total,
    payment_method: payment_method
  )
  
  logger.info "Payment processed",
    order_id: order.id,
    amount: order.total,
    transaction_id: charge.id
    
rescue PaymentGateway::Error => e
  logger.error "Payment failed",
    order_id: order.id,
    amount: order.total,
    payment_method_type: payment_method.type,
    gateway_response: e.gateway_response,
    error_code: e.code,
    error_message: e.message,
    stack_trace: e.backtrace.first(10)
  raise
end

Metrics logging records quantitative measurements for monitoring and alerting. While application performance monitoring tools provide detailed metrics, logs serve as a complementary source for business metrics and application-specific measurements.

def execute_batch_job(batch_size: 100)
  start_time = Time.now
  processed = 0
  failed = 0
  
  records.each_slice(batch_size) do |batch|
    results = process_batch(batch)
    processed += results[:success]
    failed += results[:failed]
  end
  
  duration = Time.now - start_time
  
  logger.info "Batch job completed",
    total_processed: processed,
    total_failed: failed,
    duration_seconds: duration.round(2),
    records_per_second: (processed / duration).round(2),
    error_rate: (failed.to_f / (processed + failed) * 100).round(2)
end

Contextual logging decorators add consistent fields to related log messages. Rather than repeating context in every log call, establish context once and reference it implicitly.

class OrderProcessor
  def initialize(order_id)
    @order_id = order_id
    @logger = SemanticLogger['OrderProcessor']  # accepts a message plus payload hash
  end
  
  def process
    log_with_context("Processing started")
    
    validate_order
    charge_payment
    fulfill_order
    
    log_with_context("Processing completed")
  end
  
  private
  
  def log_with_context(message, **additional_fields)
    @logger.info message, {
      order_id: @order_id,
      processor: self.class.name
    }.merge(additional_fields)
  end
  
  def charge_payment
    start = Time.now
    # Payment logic
    log_with_context("Payment charged", duration_ms: (Time.now - start) * 1000)
  end
end

Conditional detail logging adjusts verbosity based on context. Development environments benefit from verbose logging, while production logging focuses on errors and significant events. Dynamic log levels enable verbose logging for specific users or requests without flooding production logs.

class DynamicLogger
  def initialize
    @base_logger = SemanticLogger['DynamicLogger']  # accepts a payload hash
    @base_logger.level = :debug  # the wrapper, not the level, gates debug output
  end
  
  def debug(message, context = {})
    return unless should_log_debug?(context)
    @base_logger.debug message, context
  end
  
  def info(message, context = {})
    @base_logger.info message, context
  end
  
  private
  
  def should_log_debug?(context)
    # Enable debug logging for admin users
    return true if context[:user_id] && User.find_by(id: context[:user_id])&.admin?
    
    # Enable debug logging for flagged requests
    return true if context[:request_id] && DebugFlag.active?(context[:request_id])
    
    false
  end
end

Asynchronous logging decouples log writing from application logic. Writing logs to disk or network introduces latency. Background processing prevents logging from slowing request handling.

class AsyncLogger
  def initialize
    @queue = Queue.new
    @logger = Logger.new('application.log')
    
    @worker = Thread.new { process_queue }
  end
  
  def log(severity, message, context = {})
    @queue << { severity: severity, message: message, context: context }
  end
  
  private
  
  def process_queue
    loop do
      entry = @queue.pop
      # Stock Logger takes one message; fold the context into the string
      @logger.send(entry[:severity]) { "#{entry[:message]} #{entry[:context].inspect}" }
    rescue => e
      warn "Logging error: #{e.message}"
    end
  end
end

Practical Examples

Example 1: HTTP API request logging with timing breakdowns

A typical API request involves multiple operations: request parsing, authentication, authorization, business logic execution, and response formatting. Logging each phase with duration tracking identifies bottlenecks.

class ApiController < ApplicationController
  around_action :log_request
  
  def create_order
    time_phase(:authentication) { authenticate_user }
    time_phase(:authorization) { authorize_action }
    time_phase(:validation) { validate_params }
    time_phase(:order_creation) { create_order_record }
    time_phase(:payment) { process_payment }
    time_phase(:notification) { send_confirmation }
    
    render json: @order
  end
  
  private
  
  def log_request
    request_start = Time.now
    
    yield
    
    total_duration = (Time.now - request_start) * 1000
    
    logger.info "API request completed",
      request_id: request.uuid,
      method: request.method,
      path: request.path,
      status: response.status,
      user_id: current_user&.id,
      total_duration_ms: total_duration.round(2),
      phase_durations: @phase_durations
  rescue => e
    logger.error "API request failed",
      request_id: request.uuid,
      method: request.method,
      path: request.path,
      user_id: current_user&.id,
      error_class: e.class.name,
      error_message: e.message,
      backtrace: e.backtrace.first(5)
    raise
  end
  
  def time_phase(phase_name)
    start = Time.now
    result = yield
    duration = (Time.now - start) * 1000
    
    @phase_durations ||= {}
    @phase_durations[phase_name] = duration.round(2)
    
    result
  end
end

Example 2: Background job processing with error tracking

Background jobs often process large datasets over extended periods. Detailed logging tracks progress, identifies failures, and provides data for retry decisions.

class DataImportJob
  def perform(import_id)
    start_time = Time.now
    import = Import.find(import_id)
    
    logger.info "Import job started",
      import_id: import_id,
      source: import.source,
      record_count: import.records.count
    
    success_count = 0
    error_count = 0
    errors_by_type = Hash.new(0)
    
    import.records.each_with_index do |record, index|
      process_record(record)
      success_count += 1
      
      # Periodic progress logging
      if (index + 1) % 1000 == 0
        logger.info "Import progress",
          import_id: import_id,
          processed: index + 1,
          success: success_count,
          errors: error_count,
          completion_pct: ((index + 1).to_f / import.records.count * 100).round(2)
      end
      
    rescue StandardError => e
      error_count += 1
      errors_by_type[e.class.name] += 1
      
      logger.error "Record processing failed",
        import_id: import_id,
        record_id: record.id,
        record_index: index,
        error_class: e.class.name,
        error_message: e.message
    end
    
    logger.info "Import job completed",
      import_id: import_id,
      total_records: import.records.count,
      successful: success_count,
      failed: error_count,
      error_breakdown: errors_by_type,
      duration_seconds: (Time.now - start_time).round(2)
  end
end

Example 3: Database query monitoring

Slow database queries degrade application performance. Logging query execution times, result counts, and query patterns identifies optimization targets.

module DatabaseQueryLogger
  # Prepended ahead of the adapter, so this exec_query runs first and
  # reaches the original implementation through super; no aliasing needed
  def exec_query(sql, name = nil, binds = [], **kwargs)
    start = Time.now

    result = super
    
    duration = (Time.now - start) * 1000
    
    if duration > 100  # Log slow queries
      logger.warn "Slow database query",
        query: sql,
        duration_ms: duration.round(2),
        row_count: result.rows.count,
        binds: sanitize_binds(binds)
    elsif duration > 50  # Log moderately slow queries at info level
      logger.info "Database query",
        query: truncate_sql(sql),
        duration_ms: duration.round(2),
        row_count: result.rows.count
    end
    
    result
  end
  
  private
  
  def truncate_sql(sql)
    sql.length > 200 ? sql[0..200] + "..." : sql
  end
  
  def sanitize_binds(binds)
    binds.map { |b| b.value_for_database.class.name }
  end
end

ActiveRecord::ConnectionAdapters::AbstractAdapter.prepend(DatabaseQueryLogger)

Example 4: User activity audit logging

Compliance requirements often mandate logging of user actions for audit trails. Activity logs capture who performed what action on which resources at what time.

class AuditLogger
  def self.log_action(action:, user:, resource:, changes: {}, metadata: {})
    logger.info "User action",
      timestamp: Time.now.iso8601,
      action: action,
      user_id: user.id,
      user_email: user.email,
      user_role: user.role,
      resource_type: resource.class.name,
      resource_id: resource.id,
      changes: sanitize_changes(changes),
      metadata: metadata,
      ip_address: metadata[:ip_address],
      user_agent: metadata[:user_agent]
  end
  
  def self.sanitize_changes(changes)
    # Redaction must key off the attribute name, not the value
    changes.map do |key, value|
      if key.to_s.match?(/password|token|secret/i)
        [key, '[REDACTED]']
      else
        [key, value.to_s.truncate(1000)]
      end
    end.to_h
  end
  
  private_class_method :sanitize_changes
end

# Usage in controller
def update
  @document.update!(document_params)
  
  AuditLogger.log_action(
    action: 'document_updated',
    user: current_user,
    resource: @document,
    changes: @document.previous_changes,
    metadata: {
      ip_address: request.remote_ip,
      user_agent: request.user_agent
    }
  )
end

Security Implications

Sensitive data exposure through logs creates security vulnerabilities. Passwords, API keys, tokens, credit card numbers, social security numbers, and other confidential information must never appear in logs. Even hashed or partially redacted values can leak information.

Log aggregation systems often have broader access permissions than production databases, making them attractive targets for attackers. Developers and operations staff typically have log access, expanding the attack surface beyond database administrators.

# Dangerous - exposes sensitive data
logger.info "User login", email: email, password: password

# Safe - omits sensitive data
logger.info "User login attempt", email: email, success: true

# Safe - explicitly redacts
logger.info "Payment processed",
  card_last_four: card_number[-4..-1],
  amount: amount

Log injection attacks manipulate log content by including newlines or control characters in logged data. An attacker supplying a username like admin\n[ERROR] Authentication failed for real_user can create misleading log entries that complicate incident response or mask malicious activity.

# Vulnerable to injection
logger.info "User login: #{params[:username]}"

# Protected - structured logging prevents injection
logger.info "User login", username: params[:username]

# Protected - sanitization
def sanitize_for_log(input)
  input.to_s.gsub(/[\r\n\t]/, ' ').truncate(200)
end

logger.info "User login: #{sanitize_for_log(params[:username])}"

Log aggregation access control requires the same rigor as production database access. Logs containing customer data, financial information, or health records need appropriate access restrictions. Role-based access control limits log visibility based on team responsibilities.

Retention policies balance debugging needs with privacy regulations. GDPR and similar frameworks mandate data deletion timelines. Applications must implement log retention policies that purge old data automatically while preserving recent logs for troubleshooting.

class LogRetentionManager
  def purge_old_logs
    retention_days = ENV.fetch('LOG_RETENTION_DAYS', 90).to_i
    cutoff_date = Date.today - retention_days
    
    logger.info "Starting log purge",
      retention_days: retention_days,
      cutoff_date: cutoff_date
    
    deleted_count = LogEntry.where('created_at < ?', cutoff_date).delete_all
    
    logger.info "Log purge completed",
      deleted_count: deleted_count
  end
end

Authentication and authorization events require special attention. Failed login attempts, permission denials, and privilege escalations deserve logging at higher severity levels with additional context for security monitoring.

class SecurityLogger
  def self.log_auth_failure(user_identifier:, reason:, ip_address:, metadata: {})
    logger.warn "Authentication failed",
      user_identifier: user_identifier,
      failure_reason: reason,
      ip_address: ip_address,
      timestamp: Time.now.iso8601,
      user_agent: metadata[:user_agent],
      request_path: metadata[:request_path]
  end
  
  def self.log_authorization_denied(user:, action:, resource:)
    logger.warn "Authorization denied",
      user_id: user.id,
      user_role: user.role,
      attempted_action: action,
      resource_type: resource.class.name,
      resource_id: resource.id,
      timestamp: Time.now.iso8601
  end
end

Encrypted logs protect sensitive information at rest. While redaction prevents sensitive data from reaching logs, applications handling highly confidential information may require encrypting entire log files or specific log fields.
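
Field-level encryption can be sketched with the standard library's OpenSSL bindings. This is an illustration only, assuming AES-256-GCM; real deployments keep the key in a KMS with rotation, which is elided here:

```ruby
require 'openssl'
require 'base64'

# Field-level log encryption sketch using AES-256-GCM. The key is
# generated inline for illustration; production keys live in a KMS.
KEY = OpenSSL::Cipher.new('aes-256-gcm').random_key

def encrypt_log_field(plaintext, key: KEY)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
  cipher.key = key
  iv = cipher.random_iv
  ciphertext = cipher.update(plaintext) + cipher.final
  # Store IV and auth tag alongside the ciphertext for decryption
  Base64.strict_encode64(iv + cipher.auth_tag + ciphertext)
end

def decrypt_log_field(encoded, key: KEY)
  raw = Base64.strict_decode64(encoded)
  iv, tag, ciphertext = raw[0, 12], raw[12, 16], raw[28..]
  cipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
  cipher.key = key
  cipher.iv = iv
  cipher.auth_tag = tag
  cipher.update(ciphertext) + cipher.final
end
```

The authentication tag ensures tampered log fields fail decryption loudly rather than yielding silently corrupted plaintext.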

Performance Considerations

Log volume directly impacts application performance and infrastructure costs. Each log message consumes CPU for serialization, I/O for writing, network bandwidth for transmission to aggregation services, and storage for retention. A single high-traffic endpoint logging at DEBUG level can generate gigabytes of data daily.
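
The storage impact is easy to estimate. A back-of-envelope sketch, assuming an average entry of 500 bytes and a sustained 1,000 requests per second (illustrative numbers):

```ruby
# Daily log volume at one entry per request, with assumed sizes/rates
BYTES_PER_ENTRY     = 500
REQUESTS_PER_SECOND = 1_000
SECONDS_PER_DAY     = 86_400

daily_bytes = BYTES_PER_ENTRY * REQUESTS_PER_SECOND * SECONDS_PER_DAY
daily_gb    = daily_bytes / (1024.0**3)

puts format("%.1f GB/day", daily_gb)  # => 40.2 GB/day
```

At DEBUG level with several entries per request, the same traffic multiplies this figure accordingly.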

Synchronous logging blocks request processing. Writing logs to disk or network sockets introduces latency ranging from microseconds for local disk to milliseconds for remote services. Buffering and asynchronous writing prevent this latency from affecting response times.

# Synchronous logging - blocks until write completes
logger.info "Request processed"

# Asynchronous with buffering
class BufferedLogger
  def initialize(target_logger, buffer_size: 1000, flush_interval: 5)
    @target_logger = target_logger
    @buffer = []
    @mutex = Mutex.new
    @buffer_size = buffer_size
    
    start_flush_timer(flush_interval)
  end
  
  def info(message, context = {})
    entry = { level: :info, message: message, context: context, timestamp: Time.now }

    to_write = nil
    @mutex.synchronize do
      @buffer << entry
      if @buffer.size >= @buffer_size
        # Drain under the lock, write outside it: Ruby's Mutex is not
        # reentrant, so calling flush from here would deadlock
        to_write = @buffer.dup
        @buffer.clear
      end
    end
    to_write&.each { |e| @target_logger.send(e[:level], e[:message], e[:context]) }
  end

  def flush
    entries = []
    @mutex.synchronize do
      entries = @buffer.dup
      @buffer.clear
    end

    entries.each do |entry|
      @target_logger.send(entry[:level], entry[:message], entry[:context])
    end
  end
  
  private
  
  def start_flush_timer(interval)
    Thread.new do
      loop do
        sleep interval
        flush
      end
    end
  end
end

Sampling reduces load while preserving statistical significance. Recording 1% of successful requests provides sufficient data for error rate calculations and performance monitoring without storing millions of redundant messages. Errors and anomalies require 100% recording.

class SamplingLogger
  def initialize(base_logger, sample_rate: 0.01)
    @logger = base_logger
    @sample_rate = sample_rate
  end
  
  def info(message, context = {})
    return unless should_sample?(context)
    @logger.info(message, context)
  end
  
  def error(message, context = {})
    # Always log errors
    @logger.error(message, context)
  end
  
  def warn(message, context = {})
    # Always log warnings
    @logger.warn(message, context)
  end
  
  private
  
  def should_sample?(context)
    # Always log if flagged
    return true if context[:force_log]
    
    # Always log slow operations
    return true if context[:duration_ms] && context[:duration_ms] > 1000
    
    # Sample based on rate
    rand < @sample_rate
  end
end

String interpolation and object serialization waste CPU cycles when logs are filtered out. Constructing detailed log messages for DEBUG level when running at INFO level performs unnecessary work. Block-based logging defers expensive operations until the log level check passes.

# Wasteful - constructs message even if not logged
logger.debug "User data: #{expensive_serialization(user)}"

# Efficient - block only executes if logging enabled
logger.debug { "User data: #{expensive_serialization(user)}" }

Log aggregation network traffic impacts application performance. Sending logs to remote services over HTTP introduces network latency and bandwidth consumption. Local buffering and batch transmission reduce overhead.

require 'httparty'  # third-party gem providing the HTTP transport

class BatchLogShipper
  def initialize(endpoint, batch_size: 100)
    @endpoint = endpoint
    @batch_size = batch_size
    @buffer = []
    @mutex = Mutex.new
  end
  
  def ship(log_entry)
    to_send = nil
    @mutex.synchronize do
      @buffer << log_entry
      if @buffer.size >= @batch_size
        # Drain under the lock, transmit outside it: calling flush
        # here would deadlock on the non-reentrant Mutex
        to_send = @buffer.dup
        @buffer.clear
      end
    end
    send_batch(to_send) if to_send
  end

  def flush
    batch = []
    @mutex.synchronize do
      batch = @buffer.dup
      @buffer.clear
    end

    send_batch(batch)
  end

  private

  def send_batch(batch)
    return if batch.empty?

    HTTParty.post(@endpoint,
      body: { logs: batch }.to_json,
      headers: { 'Content-Type' => 'application/json' }
    )
  rescue StandardError
    # Requeue on shipping errors so logs are not lost
    @mutex.synchronize { @buffer.concat(batch) }
  end
end

Log rotation frequency affects I/O patterns. Frequent rotation creates many small files, increasing filesystem metadata overhead. Infrequent rotation creates large files that strain log viewers and complicate searches. Balance rotation based on log volume and query patterns.

Memory-efficient logging avoids accumulating log data in application memory. Streaming logs directly to output targets prevents memory growth as log volume increases. Applications buffering logs for batch processing must implement size limits and overflow handling.
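
Overflow handling can be built on a size-limited queue that drops, and counts, entries once full. A sketch; the `BoundedLogBuffer` class name and API are illustrative:

```ruby
# Bounded, drop-on-overflow log buffer: when the queue is full, new
# entries are dropped and counted rather than growing memory without limit.
class BoundedLogBuffer
  attr_reader :dropped_count

  def initialize(max_entries: 10_000)
    @queue = SizedQueue.new(max_entries)
    @dropped_count = 0
  end

  def push(entry)
    @queue.push(entry, true)  # non-blocking push raises when full
    true
  rescue ThreadError
    @dropped_count += 1       # overflow: drop and record the loss
    false
  end

  def pop
    @queue.pop(true)          # non-blocking pop raises when empty
  rescue ThreadError
    nil
  end
end
```

Exposing the drop counter lets the application log (at a sampled rate) how many entries were lost, preserving visibility into the overflow itself.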

Reference

Log Level Decision Matrix

| Scenario | Level | Rationale |
|----------|-------|-----------|
| Application startup | INFO | Significant lifecycle event |
| Configuration loaded | INFO | System state information |
| User authentication success | INFO | Security-relevant event |
| Cache hit | DEBUG | High-frequency, low importance |
| Cache miss | INFO | Performance indicator |
| External API called | INFO | Integration point tracking |
| Database query executed | DEBUG | High frequency in most apps |
| Slow database query detected | WARN | Performance concern |
| User input validation failed | WARN | Expected error condition |
| External service timeout | ERROR | Operation failure |
| Database connection failed | ERROR | Critical operation failure |
| Unhandled exception caught | ERROR | Code defect indication |
| Configuration file missing | FATAL | Application cannot start |
| Out of memory condition | FATAL | System-level failure |

Ruby Logging Libraries Comparison

| Library | Key Features | Best For |
|---------|--------------|----------|
| Logger | Standard library, simple API, rotation support | Basic applications, development |
| SemanticLogger | Structured logging, multiple appenders, metrics | Production applications |
| Ougai | JSON formatting, Rails integration, structured data | Microservices, containerized apps |
| Lograge | Rails request log formatting, Logstash compatible | Rails applications with JSON logs |
| Logster | Web UI, background processing, error aggregation | Debugging production Rails apps |
| Rails.logger | Framework integration, automatic request tracking | Rails applications |

Structured Logging Field Conventions

| Field Category | Example Fields | Format Guidelines |
|----------------|----------------|-------------------|
| Identity | user_id, request_id, session_id | Opaque identifiers, UUIDs preferred |
| Timing | duration_ms, started_at, completed_at | Milliseconds for durations, ISO 8601 for timestamps |
| Counts | record_count, retry_count, error_count | Integer values |
| Status | success, is_valid, has_errors | Boolean values with consistent prefixes |
| Classification | error_type, request_method, log_level | Enumerated string values |
| Context | ip_address, user_agent, environment | String values, sanitized against injection |

Sensitive Data Patterns to Redact

| Pattern Type | Examples | Redaction Approach |
|--------------|----------|--------------------|
| Authentication | password, api_key, token, secret | Complete redaction |
| Payment | credit_card, cvv, account_number | Last 4 digits only |
| Personal | ssn, tax_id, drivers_license | Complete redaction |
| Communication | email, phone | Hash or partial redaction |
| Location | latitude, longitude, full_address | City/state level only |
| Medical | diagnosis, prescription, patient_id | Complete redaction or anonymization |

Log Sampling Strategy

| Event Type | Sample Rate | Reasoning |
|------------|-------------|-----------|
| Successful requests | 1-10% | High volume, low information density |
| Failed requests | 100% | Critical for debugging |
| Slow operations | 100% | Performance investigation |
| Authentication events | 100% | Security monitoring |
| Database queries | 1-5% | High volume, context-dependent value |
| Cache operations | 0.1-1% | Extremely high volume |
| Background jobs | 100% start/end, 1-10% progress | Balance progress tracking with volume |

Common Log Message Antipatterns

| Antipattern | Problem | Better Approach |
|-------------|---------|-----------------|
| "User logged in" | No context, not actionable | Include user_id, IP, auth method |
| "Error occurred" | Generic, unhelpful | Include error class, message, context |
| "Processing data" | No progress indication | Include count, percentage, rate |
| "DEBUG: x = 42" | Development leftover | Remove or use appropriate context |
| Too much logging | Obscures important messages | Sample high-frequency events |
| "Starting process..." | Noise without value | Log only significant state changes |