Overview
Threat modeling analyzes software systems to identify security risks before attackers exploit them. The practice emerged from military and intelligence operations in the 1960s and entered software development in the 1990s as systems became increasingly networked and vulnerable to remote attacks.
A threat model documents what an application does, what can go wrong from a security perspective, and what actions to take to mitigate identified threats. The process examines system architecture, data flows, trust boundaries, and entry points to discover where attackers might compromise security.
The process answers four fundamental questions: What are we building? What can go wrong? What should we do about it? Did we do a good enough job? Organizations conduct threat modeling during design phases to catch security flaws before implementation, though the process applies to existing systems as well.
Threat modeling differs from penetration testing and vulnerability scanning. Penetration testing attempts to exploit known vulnerabilities in running systems. Vulnerability scanning checks deployed applications against databases of known security flaws. Threat modeling examines design and architecture to find potential problems before code exists, making it a proactive rather than reactive security practice.
The practice integrates with secure development lifecycles by informing security requirements, guiding secure design decisions, and establishing security test cases. Development teams conduct threat modeling sessions collaboratively, bringing together developers, security specialists, operations staff, and business stakeholders to examine systems from multiple perspectives.
Key Principles
Threat modeling operates on several core principles that guide the analysis process. The principle of thinking like an attacker requires analysts to adopt an adversarial mindset, asking how malicious actors might abuse system functionality, manipulate inputs, or exploit trust relationships. This perspective shift reveals security issues that optimistic or feature-focused thinking overlooks.
Defense in depth structures security controls in layers, so compromise of one control does not compromise the entire system. Threat models identify where multiple defensive layers should exist and where single points of failure create unacceptable risk. The principle recognizes that no single security mechanism provides complete protection.
Least privilege constrains each component, user, and process to minimum necessary permissions. Threat models examine what access rights each system element truly requires and identify over-privileged components that increase attack surface. Reducing privileges limits damage from compromised components.
Attack surface reduction minimizes exposed functionality, interfaces, and data access points. Smaller attack surfaces present fewer opportunities for exploitation. Threat models catalog all system entry points, including APIs, user interfaces, file uploads, network ports, and inter-process communication channels, then evaluate whether each entry point serves necessary functions.
Trust boundary identification marks transitions between security contexts with different trust levels. Data crossing trust boundaries requires validation, authentication, authorization, and often encryption. Threat models map trust boundaries explicitly, as attackers target these transitions to escalate privileges or access restricted resources.
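Validation at a trust boundary can be made mechanical. The sketch below is a hypothetical `BoundaryValidator` for a message-queue consumer; the HMAC-signing convention between services and the payload schema are assumptions for illustration, not a standard protocol.

```ruby
require 'openssl'
require 'json'

# Authenticate and validate a payload crossing a trust boundary
# (e.g. arriving from a message queue) before any use.
class BoundaryValidator
  def initialize(secret)
    @secret = secret
  end

  def accept(raw_payload, signature)
    expected = OpenSSL::HMAC.hexdigest('SHA256', @secret, raw_payload)
    # Constant-time comparison avoids timing side channels
    raise SecurityError, 'bad signature' unless OpenSSL.secure_compare(expected, signature)

    data = JSON.parse(raw_payload)
    # Schema check: reject structurally invalid payloads even when signed
    raise ArgumentError, 'missing user_id' unless data['user_id'].is_a?(Integer)
    data
  end
end
```

Authentication (the signature) and validation (the schema check) are separate controls; a signed payload from a compromised peer can still be malformed, so both apply at every boundary crossing.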
Data flow analysis tracks how information moves through systems, identifying where sensitive data exists in memory, storage, transit, and logs. Each data location and transition point represents a potential exposure. Understanding data flows reveals where encryption, access controls, and data sanitization should apply.
The principle of failing securely requires systems to default to safe states when errors occur. Threat models examine error handling to confirm failures do not expose sensitive information, bypass security controls, or leave systems in exploitable states. Secure failure modes prevent attackers from triggering errors to gain advantages.
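A minimal sketch of the fail-securely principle: an authorization check that defaults to "deny" when the policy lookup errors, rather than falling through to "allow". The `policy_store` collaborator is hypothetical; the rescue-to-deny pattern is the point.

```ruby
# Fail closed: errors during authorization must never grant access.
class AccessDecision
  def initialize(policy_store)
    @policy_store = policy_store
  end

  # Returns true only when a policy explicitly grants access.
  def allowed?(user, resource)
    policy = @policy_store.fetch(resource) # may raise on backend failure
    policy.grants?(user)
  rescue StandardError
    false # a failed lookup is a denial, not an exception page or a bypass
  end
end
```

The inverse pattern, rescuing to `true` or to an unscoped fallback, is exactly the exploitable failure mode threat models look for in error-handling code.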
Separation of concerns isolates security-critical functionality from application logic. Threat models identify security boundaries and validate that authentication, authorization, cryptography, and input validation occur in dedicated, well-audited components rather than scattered throughout application code.
Implementation Approaches
Multiple threat modeling methodologies exist, each with distinct focuses and analysis techniques. STRIDE, developed at Microsoft, categorizes threats into six types: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. Analysts examine each system component to determine which STRIDE categories apply.
STRIDE analysis begins by creating data flow diagrams showing processes, data stores, external entities, and data flows. Each diagram element receives scrutiny for applicable threat categories. A process handling authentication faces spoofing threats. A data store containing user information faces information disclosure threats. The systematic application of categories to diagram elements ensures comprehensive coverage.
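The per-element application of STRIDE can be captured as a simple lookup. The sketch below follows the commonly published STRIDE-per-element table; the element-type names are this example's own.

```ruby
# STRIDE-per-element: which threat categories conventionally apply
# to each data flow diagram element type.
STRIDE_PER_ELEMENT = {
  external_entity: %i[spoofing repudiation],
  process:         %i[spoofing tampering repudiation information_disclosure
                      denial_of_service elevation_of_privilege],
  data_store:      %i[tampering repudiation information_disclosure denial_of_service],
  data_flow:       %i[tampering information_disclosure denial_of_service]
}.freeze

def applicable_threats(element_type)
  STRIDE_PER_ELEMENT.fetch(element_type) do
    raise ArgumentError, "Unknown DFD element: #{element_type}"
  end
end
```

Walking every diagram element through its applicable categories is what makes the coverage systematic: a process is examined for all six, while a data flow is examined only for tampering, disclosure, and denial of service.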
The PASTA methodology (Process for Attack Simulation and Threat Analysis) emphasizes risk-centric and attacker-centric analysis through seven stages: definition of objectives, technical scope definition, application decomposition, threat analysis, vulnerability analysis, attack modeling, and risk analysis. PASTA integrates business impact assessment with technical analysis, ensuring threat models align with organizational risk tolerance.
DREAD provides risk assessment after identifying threats. The acronym represents five risk factors: Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability. Each factor receives a numerical score, typically 1-10, and the scores combine to produce overall risk ratings. Teams prioritize mitigation efforts based on DREAD scores, addressing highest-risk threats first.
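The scoring step is easy to make concrete. This sketch averages the five factors (rated 1-10) into a single rating and sorts threats by it; the example threats and their ratings are invented for illustration.

```ruby
# DREAD scoring: average the five factor ratings into one risk score.
DREAD_FACTORS = %i[damage reproducibility exploitability affected_users discoverability].freeze

def dread_score(ratings)
  missing = DREAD_FACTORS - ratings.keys
  raise ArgumentError, "Missing factors: #{missing.join(', ')}" unless missing.empty?
  ratings.values_at(*DREAD_FACTORS).sum / DREAD_FACTORS.size.to_f
end

threats = {
  "SQL injection in search" => dread_score(damage: 8, reproducibility: 9, exploitability: 7,
                                           affected_users: 9, discoverability: 6),
  "Verbose error pages"     => dread_score(damage: 3, reproducibility: 10, exploitability: 5,
                                           affected_users: 4, discoverability: 8)
}

# Highest-risk threats first
prioritized = threats.sort_by { |_name, score| -score }
```

Some teams weight factors or drop discoverability (arguing attackers find everything eventually); the mechanics stay the same.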
Attack trees model how attackers might achieve specific goals. The tree root represents the attacker's objective, such as "access customer payment data." Child nodes represent sub-goals or attack steps that enable the parent goal. Leaf nodes represent atomic attacks that require no further decomposition. Attack trees help teams understand attack sequences and identify where defensive controls should interrupt attack chains.
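Attack trees lend themselves to a small data structure. In this sketch, AND nodes require every child and OR nodes require any child; enumerating root-to-leaf paths lists the distinct attack chains a defensive control must interrupt. The tree contents are invented for illustration.

```ruby
# An attack tree node: a goal, an operator (:and / :or / nil for leaves),
# and child sub-goals.
AttackNode = Struct.new(:goal, :op, :children) do
  def leaf?
    children.nil? || children.empty?
  end
end

tree = AttackNode.new(
  "Access customer payment data", :or, [
    AttackNode.new("Steal database credentials", :and, [
      AttackNode.new("Phish an operator", nil, []),
      AttackNode.new("Bypass MFA", nil, [])
    ]),
    AttackNode.new("Exploit SQL injection in search", nil, [])
  ]
)

# Collect every root-to-leaf path: each is a candidate attack sequence.
def leaf_paths(node, trail = [])
  trail += [node.goal]
  return [trail] if node.leaf?
  node.children.flat_map { |child| leaf_paths(child, trail) }
end
```

Annotating leaves with cost or difficulty estimates turns the same structure into a prioritization tool: cheap, easy leaves mark where controls pay off most.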
VAST (Visual, Agile, and Simple Threat modeling) separates application threat models from operational threat models. Application threat models, created by development teams, focus on application-level threats during design. Operational threat models, created by security and operations teams, address infrastructure, deployment, and runtime threats. This separation acknowledges that different stakeholders bring different concerns and expertise.
The Trike methodology focuses on risk management from the defender's perspective rather than the attacker's. Trike models define acceptable use of assets, then identify how actors might violate these definitions. The approach generates requirements-based threat models that map directly to security requirements and test cases.
Hybrid approaches combine elements from multiple methodologies. Organizations might use STRIDE for initial threat identification, attack trees for understanding attack paths, and DREAD for risk prioritization. The hybrid approach adapts to organizational needs rather than strictly following one methodology.
Ruby Implementation
Ruby applications require threat modeling like applications in any language, and several tools support the practice for Ruby systems. Threat modeling a Ruby codebase examines Rails applications, Sinatra services, background job processors, and APIs built with Ruby frameworks.
The Brakeman static analysis security scanner examines Rails applications for security vulnerabilities. While not a pure threat modeling tool, Brakeman identifies many threats during code analysis. Teams incorporate Brakeman findings into threat models to validate that identified threats correspond to actual vulnerabilities in code.
# Install and run Brakeman
#   gem install brakeman
# Analyze a Rails application (-A enables all optional checks)
#   brakeman -A -f json -o threat_analysis.json

# Example configuration (config/brakeman.yml); option names follow
# the keys Brakeman itself emits with `brakeman -C`
:app_path: "."
:skip_files:
  - spec/
  - test/
:output_files:
  - threat_analysis.json
  - threat_analysis.html
Ruby applications often integrate with external services through REST APIs, GraphQL endpoints, or message queues. Each integration point represents a trust boundary requiring scrutiny during threat modeling. The standard net/http library and gems such as Faraday or HTTParty make external requests, and threat models must examine how these handle authentication, certificate validation, and response validation.
# Secure external API integration
require 'faraday'

class SecureAPIClient
  TIMEOUT = 5
  OPEN_TIMEOUT = 2

  def initialize(base_url, api_key)
    @base_url = base_url
    @api_key = api_key
  end

  def fetch_user_data(user_id)
    connection.get("/users/#{sanitize_id(user_id)}") do |req|
      req.headers['Authorization'] = "Bearer #{@api_key}"
      req.headers['Accept'] = 'application/json'
      req.options.timeout = TIMEOUT
      req.options.open_timeout = OPEN_TIMEOUT
    end
  rescue Faraday::Error => e
    # Log security-relevant errors without exposing internals
    Rails.logger.error("API request failed: #{e.class}")
    nil
  end

  private

  def connection
    @connection ||= Faraday.new(url: @base_url) do |f|
      f.request :json
      f.response :json
      f.adapter Faraday.default_adapter
      # SSL verification is enabled by default; never disable
      # certificate validation in production
    end
  end

  def sanitize_id(user_id)
    # Prevent path traversal in the URL by coercing to an integer
    Integer(user_id)
  rescue ArgumentError, TypeError
    raise ArgumentError, "Invalid user ID"
  end
end
Rails applications have specific threat modeling considerations. Mass assignment vulnerabilities require examining strong parameters. SQL injection threats affect ActiveRecord queries, particularly those using string interpolation. Cross-site scripting vulnerabilities exist wherever user input renders in views without proper escaping.
# Threat modeling Rails controllers
class UsersController < ApplicationController
  # Threat: CSRF attacks
  # Mitigation: Rails includes CSRF protection by default
  protect_from_forgery with: :exception

  # Threat: Mass assignment
  # Mitigation: Strong parameters
  def create
    @user = User.new(user_params)
    if @user.save
      # Threat: Session fixation
      # Mitigation: Reset session after authentication
      reset_session
      session[:user_id] = @user.id
      redirect_to @user
    else
      render :new, status: :unprocessable_entity
    end
  end

  # Threat: Unauthorized access
  # Mitigation: Authorization checks
  def show
    @user = User.find(params[:id])
    authorize_access(@user)
  rescue ActiveRecord::RecordNotFound
    # Threat: User enumeration through error responses
    # Mitigation: Generic error message
    render_not_found
  end

  private

  def user_params
    # Only permit expected attributes
    params.require(:user).permit(:name, :email, :password)
  end

  def authorize_access(user)
    # Threat: Horizontal privilege escalation
    # Mitigation: Verify the current user can access this resource
    unless current_user&.can_access?(user)
      raise Pundit::NotAuthorizedError
    end
  end
end
Background job processing in Sidekiq, Resque, or Delayed Job introduces additional threats. Job data stored in Redis or databases might contain sensitive information. Job execution environments might have excessive permissions. Threat models examine job serialization, deserialization, and execution contexts.
# Secure background job implementation
class DataExportJob
  include Sidekiq::Job

  # Threat: Resource exhaustion
  # Mitigation: Limit retry attempts
  sidekiq_options retry: 3, dead: false

  def perform(user_id, export_type)
    # Threat: Elevation of privilege
    # Mitigation: Verify user permissions before export
    user = User.find(user_id)
    return unless user.can_export?(export_type)

    # Threat: Information disclosure
    # Mitigation: Encrypt exported data
    exporter = DataExporter.new(user, export_type)
    encrypted_data = exporter.export_and_encrypt

    # Threat: Unauthorized access
    # Mitigation: Store with time-limited signed URL
    storage_key = SecureRandom.uuid
    S3Storage.put(
      key: storage_key,
      data: encrypted_data,
      encryption: 'AES256',
      expires: 1.hour.from_now
    )

    # Notify the user with a signed URL
    ExportMailer.ready(user, storage_key).deliver_later
  rescue ActiveRecord::RecordNotFound
    # Threat: Denial of service through error handling
    # Mitigation: Log and continue, don't retry
    Rails.logger.warn("User #{user_id} not found for export")
  end
end
Practical Examples
A Rails e-commerce application provides a comprehensive threat modeling scenario. The system handles user accounts, payment processing, product inventory, and order fulfillment. Threat modeling begins by identifying assets: customer personal information, payment card data, order history, inventory data, and administrative credentials.
Trust boundaries exist between the public internet and web application, between the web application and database, between the application and payment processor, and between the application and administrative interfaces. Each boundary requires validation and authentication controls. Data crossing boundaries must receive appropriate protection.
Data flow analysis reveals customer payment information enters through checkout forms, flows to the Rails application, transmits to a payment processor API, and stores as tokenized references in the database. The full card numbers never persist in application storage, reducing information disclosure risks. The threat model documents this flow and validates tokenization occurs correctly.
# E-commerce payment processing threat model scenario
class CheckoutController < ApplicationController
  # Threat: Session hijacking
  # Mitigation: Secure session configuration, HTTPS only
  before_action :require_ssl
  before_action :authenticate_user

  def create_payment
    # Threat: CSRF on payment submission
    # Mitigation: Verify authenticity token (Rails default)
    order = current_user.orders.find(params[:order_id])

    # Threat: Price manipulation
    # Mitigation: Recalculate total from server-side data
    calculated_total = order.calculate_total
    if calculated_total != params[:amount].to_d
      return render json: { error: 'Invalid amount' }, status: :bad_request
    end

    # Threat: Storing sensitive payment data
    # Mitigation: Use payment processor tokenization
    payment_result = PaymentProcessor.tokenize_and_charge(
      amount: calculated_total,
      card_details: payment_params,
      idempotency_key: order.idempotency_key
    )

    if payment_result.success?
      # Store only the token, never card details
      order.update!(
        payment_token: payment_result.token,
        status: 'paid'
      )
      # Threat: Information disclosure in logs
      # Mitigation: Filter sensitive params
      Rails.logger.info("Payment successful for order #{order.id}")
      render json: { success: true, order_id: order.id }
    else
      # Threat: Information disclosure through error messages
      # Mitigation: Generic error to user, detailed log
      Rails.logger.error("Payment failed: #{payment_result.error_code}")
      render json: { error: 'Payment failed' }, status: :unprocessable_entity
    end
  rescue PaymentProcessor::Error
    # Threat: Denial of service through exception handling
    # Mitigation: Catch and handle gracefully
    render json: { error: 'Payment processing unavailable' }, status: :service_unavailable
  end

  private

  def payment_params
    # Only permit expected payment fields
    params.require(:payment).permit(:card_number, :expiry, :cvv, :billing_zip)
  end
end
A multi-tenant SaaS application built with Rails presents different threat modeling challenges. Tenant data isolation becomes critical. The threat model examines database queries to validate proper tenant scoping. Row-level security or tenant-scoped queries prevent data leakage between tenants.
# Multi-tenant data isolation threat model
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  # Threat: Cross-tenant data access
  # Mitigation: Automatic tenant scoping
  def self.inherited(subclass)
    super
    # Skip tenant scoping for the tenant model itself
    return if subclass.name == 'Tenant'

    # Apply a default scope filtering by the current tenant
    subclass.class_eval do
      default_scope { where(tenant_id: Current.tenant_id) if Current.tenant_id }
      before_create :set_tenant_id
      before_update :verify_tenant_id

      private

      def set_tenant_id
        self.tenant_id = Current.tenant_id
      end

      def verify_tenant_id
        # Threat: Tenant ID manipulation
        # Mitigation: Prevent changing tenant_id
        if tenant_id_changed? && persisted?
          raise SecurityError, "Cannot change tenant_id"
        end
      end
    end
  end
end

# Request-level tenant context
class Current < ActiveSupport::CurrentAttributes
  attribute :tenant_id
  attribute :user
  # Threat: Tenant context leaking between requests
  # Mitigation: Reset after each request (Rails handles this)
end

class ApplicationController < ActionController::Base
  before_action :set_tenant

  private

  def set_tenant
    # Threat: Subdomain spoofing
    # Mitigation: Validate the subdomain against the database
    tenant = Tenant.find_by!(subdomain: request.subdomain)
    Current.tenant_id = tenant.id
  rescue ActiveRecord::RecordNotFound
    # Threat: Tenant enumeration
    # Mitigation: Generic error message
    render plain: 'Not found', status: :not_found
  end
end
An API service handling webhook callbacks from external services demonstrates trust boundary threats. External services send HTTP requests containing event data. The threat model examines authentication of webhook sources, validation of payload signatures, and handling of malicious or malformed payloads.
# Webhook receiver threat model
class WebhooksController < ApplicationController
  class SignatureVerificationError < StandardError; end

  # Webhooks carry no browser session, so CSRF tokens don't apply;
  # authenticity comes from signature verification instead
  skip_before_action :verify_authenticity_token

  # Threat: Replay attacks
  # Mitigation: Track processed webhook IDs
  before_action :verify_not_duplicate

  def stripe_webhook
    payload = request.body.read
    signature = request.headers['Stripe-Signature']

    # Threat: Webhook source spoofing
    # Mitigation: Verify signature with shared secret
    event = verify_stripe_signature(payload, signature)

    # Threat: Malformed event data causing crashes
    # Mitigation: Validate structure before use
    return head :bad_request unless valid_event_structure?(event)

    # Threat: Malicious event type execution
    # Mitigation: Whitelist allowed event types
    case event['type']
    when 'payment_intent.succeeded'
      handle_payment_success(event['data']['object'])
    when 'payment_intent.payment_failed'
      handle_payment_failure(event['data']['object'])
    else
      # Threat: Information disclosure through logs
      # Mitigation: Log event type only, not full payload
      Rails.logger.info("Unhandled event type: #{event['type']}")
    end

    # Mark as processed to prevent duplicates
    ProcessedWebhook.create!(
      external_id: event['id'],
      event_type: event['type'],
      processed_at: Time.current
    )
    head :ok
  rescue SignatureVerificationError
    # Threat: Brute force signature attempts
    # Mitigation: Rate limit this endpoint (via Rack::Attack)
    head :unauthorized
  end

  private

  def verify_stripe_signature(payload, signature)
    Stripe::Webhook.construct_event(
      payload,
      signature,
      Rails.application.credentials.stripe_webhook_secret
    )
  rescue JSON::ParserError, Stripe::SignatureVerificationError
    raise SignatureVerificationError
  end

  def verify_not_duplicate
    external_id = JSON.parse(request.body.read)['id']
    request.body.rewind
    # head renders, which halts the filter chain in Rails 5+
    head :ok if ProcessedWebhook.exists?(external_id: external_id)
  rescue JSON::ParserError
    head :bad_request
  end

  def valid_event_structure?(event)
    # Duck-typed: works for both Hash and Stripe::Event, which
    # supports [] access
    event['id'].present? && event['type'].present? && !event['data'].nil?
  end
end
Security Implications
Ruby applications face specific security threats that threat models must address. Dynamic typing and meta-programming capabilities, while offering development flexibility, create opportunities for injection attacks and type confusion vulnerabilities. Threat models examine where untrusted data influences method calls, class instantiation, or code evaluation.
The eval family of methods (eval, instance_eval, class_eval, module_eval) executes arbitrary Ruby code. Threat models flag any eval usage with untrusted input as critical threats requiring mitigation or elimination. Similarly, send, public_send, and constantize methods allow string-based method invocation, creating code injection risks when strings derive from user input.
# Dangerous: Code injection through eval
def process_calculation(user_expression)
  # THREAT: Arbitrary code execution
  eval(user_expression) # Never do this
end

# Secure: Parse expressions with a safe evaluator (the dentaku gem;
# evaluate! raises on invalid input rather than returning nil)
require 'dentaku'

def process_calculation(user_expression)
  parser = Dentaku::Calculator.new
  parser.evaluate!(user_expression)
rescue Dentaku::ParseError, Dentaku::ArgumentError
  raise ArgumentError, "Invalid expression"
end

# Dangerous: Method injection through send
def invoke_user_action(model, action_name)
  # THREAT: Arbitrary method invocation
  model.send(action_name)
end

# Secure: Whitelist allowed methods
def invoke_user_action(model, action_name)
  allowed_methods = %w[calculate_total refresh_cache update_status]
  unless allowed_methods.include?(action_name)
    raise ArgumentError, "Invalid action"
  end
  model.public_send(action_name)
end
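The constantize risk mentioned earlier follows the same whitelist pattern. A sketch, using `Object.const_get` in place of ActiveSupport's constantize to stay framework-independent; `CsvExporter` and `JsonExporter` are hypothetical exporter classes.

```ruby
# Hypothetical exporter classes standing in for real application code
class CsvExporter; end
class JsonExporter; end

ALLOWED_EXPORTERS = %w[CsvExporter JsonExporter].freeze

def exporter_class(name)
  # THREAT: constantize/const_get on raw input resolves any class (e.g. Kernel)
  # MITIGATION: compare against a fixed whitelist before resolution
  raise ArgumentError, "Unknown exporter" unless ALLOWED_EXPORTERS.include?(name)
  Object.const_get(name)
end
```

Note that `safe_constantize` only suppresses NameError for missing constants; it performs no authorization, so the whitelist is still required.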
Deserialization vulnerabilities affect Ruby applications using Marshal or YAML. Marshal.load can execute arbitrary code when fed crafted payloads, and YAML.load could instantiate arbitrary objects until Psych 4 (the default in Ruby 3.1) made it an alias for safe_load. JSON.parse is comparatively safe because it produces only basic types unless object creation is explicitly enabled. Threat models identify all deserialization points and validate safe usage patterns.
# Dangerous: Arbitrary code execution through deserialization
def restore_session(session_data)
  # THREAT: Remote code execution
  Marshal.load(session_data) # Unsafe with untrusted input
end

# Secure: Use safe deserialization
def restore_session(session_data)
  # Use JSON for session data (limited data types)
  JSON.parse(session_data)
rescue JSON::ParserError
  nil
end

# If YAML is required, use safe_load with an explicit class whitelist
def load_configuration(yaml_data)
  YAML.safe_load(
    yaml_data,
    permitted_classes: [Symbol, Date, Time],
    permitted_symbols: [],
    aliases: false
  )
end
SQL injection remains a significant threat despite ActiveRecord's query interface. String interpolation in where clauses, raw SQL queries, and unsanitized ORDER BY or LIMIT clauses create vulnerabilities. Threat models examine all database queries to validate parameterization or safe construction.
# Dangerous: SQL injection in where clause
def search_users(query)
  # THREAT: SQL injection
  User.where("name LIKE '%#{query}%'")
end

# Secure: Parameterized queries
def search_users(query)
  # sanitize_sql_like escapes % and _ so they match literally
  User.where("name LIKE ?", "%#{User.sanitize_sql_like(query)}%")
end

# Dangerous: SQL injection in ORDER BY
def sorted_products(sort_column)
  # THREAT: SQL injection
  Product.order("#{sort_column} ASC")
end

# Secure: Whitelist sort columns
def sorted_products(sort_column)
  allowed_columns = %w[name price created_at]
  column = allowed_columns.include?(sort_column) ? sort_column : 'name'
  Product.order("#{column} ASC")
end
Cross-site scripting (XSS) vulnerabilities occur when user input renders in views without proper escaping. Rails escapes content by default in ERB templates, but html_safe, raw, and the <%== %> ERB tag bypass escaping. Threat models identify where unescaped content renders and validate the content's safety.
<!-- Dangerous: XSS through html_safe -->
<div><%= @user_comment.html_safe %></div>

<!-- Secure: default escaping -->
<div><%= @user_comment %></div>

<!-- Secure: sanitization for rich text -->
<div><%= sanitize @user_comment, tags: %w[p br strong em], attributes: [] %></div>
Tools & Ecosystem
Multiple tools support threat modeling for Ruby applications, ranging from manual diagramming tools to automated vulnerability scanners. These tools integrate into development workflows to maintain current threat models as applications evolve.
Microsoft Threat Modeling Tool provides visual threat model creation following STRIDE methodology. The tool generates data flow diagrams, applies threat patterns, and produces threat reports. While not Ruby-specific, teams threat modeling Ruby applications use it to diagram system architecture and identify threats across trust boundaries.
OWASP Threat Dragon offers open-source threat modeling with both desktop and web versions. The tool supports STRIDE methodology and generates threat reports. Development teams create threat models collaboratively during design sessions and update models as architectures change.
Brakeman scans Rails applications for security vulnerabilities, functioning as automated threat discovery for existing code. The tool detects SQL injection, cross-site scripting, command injection, mass assignment, and many other vulnerabilities. Teams run Brakeman in continuous integration to catch new threats introduced during development.
# Integrating Brakeman into CI
# .github/workflows/security.yml
name: Security Scan
on: [push, pull_request]
jobs:
  brakeman:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.2"
      - name: Install Brakeman
        run: gem install brakeman
      - name: Run Brakeman
        run: brakeman --format json --output brakeman-results.json
      - name: Upload results
        if: always() # Brakeman exits nonzero on warnings
        uses: actions/upload-artifact@v4
        with:
          name: brakeman-results
          path: brakeman-results.json
Bundler-audit checks Ruby dependencies for known security vulnerabilities by comparing Gemfile.lock against a database of disclosed vulnerabilities. Threat models incorporate dependency analysis to identify third-party risks. Teams run bundler-audit regularly to detect vulnerable dependencies requiring updates.
# Check dependencies for vulnerabilities
#   gem install bundler-audit
# Update the vulnerability database
#   bundle audit update
# Check for vulnerabilities
#   bundle audit check

# Gemfile configuration for security updates
source 'https://rubygems.org'

gem 'rails', '~> 7.1.0' # Allow patch updates for security fixes
gem 'rack', '>= 3.0.8'  # Minimum secure version
Dawn scanner provides static analysis for Ruby applications beyond Rails. The tool detects security issues in Sinatra applications, Rack middleware, and standalone Ruby code. Teams threat modeling microservices or non-Rails applications benefit from Dawn's broader coverage.
Rails security audit gems like rails_best_practices include security checks alongside code quality analysis. While primarily focused on best practices, these tools identify security anti-patterns that threat models should address.
Threat model documentation tools include wiki systems, document repositories, and specialized platforms. Teams store threat model diagrams, threat catalogs, mitigation plans, and security requirements in version-controlled repositories alongside code. Markdown-formatted threat models stored in Git repositories allow review and update through standard development workflows.
Common Pitfalls
Threat modeling failures often stem from incomplete analysis, outdated models, or insufficient follow-through on identified threats. One frequent mistake involves creating threat models during initial design but never updating them as systems evolve. Applications change through feature additions, architectural modifications, and dependency updates. Threat models become obsolete without regular revision.
Another common pitfall treats threat modeling as a checkbox exercise rather than meaningful security analysis. Teams produce threat model documents to satisfy compliance requirements or security reviews but invest minimal effort in thorough analysis. Surface-level threat models miss critical vulnerabilities and provide false security confidence.
Focusing exclusively on technical threats while ignoring business context creates incomplete threat models. Technical vulnerabilities matter, but understanding business impact determines appropriate mitigation investments. A data disclosure threat affecting non-sensitive information requires different response than one exposing financial records.
Inadequate trust boundary analysis leads to missing threats. Developers often assume internal components trust each other or that authentication at perimeter boundaries protects internal communications. Sophisticated attacks compromise individual components and pivot to other system parts. Threat models must examine internal trust relationships as rigorously as external boundaries.
# Pitfall: Trusting internal service responses without validation
class InternalServiceClient
  def fetch_user_permissions(user_id)
    # PITFALL: Assuming the internal service always returns valid data
    response = internal_service.get("/permissions/#{user_id}")
    response['permissions'] # No validation
  end
end

# Better: Validate internal responses
class InternalServiceClient
  InvalidResponseError = Class.new(StandardError)

  def fetch_user_permissions(user_id)
    response = internal_service.get("/permissions/#{user_id}")

    # Validate response structure even from internal services
    unless response.is_a?(Hash) && response['permissions'].is_a?(Array)
      raise InvalidResponseError, "Invalid permissions response"
    end

    # Validate permission format
    response['permissions'].each do |perm|
      unless perm.is_a?(String) && valid_permission_format?(perm)
        raise InvalidResponseError, "Invalid permission format"
      end
    end

    response['permissions']
  rescue StandardError => e
    # Log security-relevant failures
    Rails.logger.error("Permission fetch failed for user #{user_id}: #{e.class}")
    [] # Fail closed with no permissions
  end

  private

  def valid_permission_format?(permission)
    permission.match?(/\A[a-z_]+:[a-z_]+\z/)
  end
end
Overlooking indirect vulnerabilities through dependencies introduces threats that scanning individual application code misses. Ruby gems pull in transitive dependencies, and vulnerabilities in any dependency affect the application. Threat models must account for dependency security and establish processes for monitoring and updating vulnerable libraries.
Insufficient consideration of deployment environments creates gaps in threat models. Applications behave differently in production than development. Threat models examining only development configurations miss production-specific threats from reverse proxies, load balancers, cloud services, and infrastructure components.
Failing to model threat impact and likelihood leads to poor prioritization. Not all identified threats deserve immediate mitigation. Some threats have low probability, minimal impact, or expensive mitigations. Threat models should include risk assessment to guide resource allocation. Teams need frameworks like DREAD to evaluate which threats require urgent attention.
Neglecting social engineering and physical security threats limits threat model completeness. Attackers compromise systems through phishing, credential theft, insider threats, and physical access. Pure technical threat models miss human factors that attackers exploit. Including social engineering scenarios reveals weaknesses in authentication factors, account recovery processes, and privileged access management.
Documentation inadequacy hampers threat model utility. Cryptic diagrams, incomplete threat descriptions, or missing mitigation plans render threat models unhelpful for developers implementing security controls. Threat models need clear descriptions of threats, attack scenarios, potential impacts, and specific countermeasures with implementation guidance.
# Pitfall: Insufficient error handling revealing system internals
class UsersController < ApplicationController
  def show
    @user = User.find(params[:id])
  rescue ActiveRecord::RecordNotFound => e
    # PITFALL: Exposing internal details
    render json: { error: "User not found: #{e.message}" }, status: :not_found
  rescue => e
    # PITFALL: Exposing stack traces
    render json: { error: e.message, backtrace: e.backtrace }, status: :internal_server_error
  end
end

# Better: Generic errors externally, detailed logs internally
class UsersController < ApplicationController
  def show
    @user = User.find(params[:id])
  rescue ActiveRecord::RecordNotFound
    render json: { error: "Not found" }, status: :not_found
  rescue => e
    # Log details internally
    Rails.logger.error("User lookup failed: #{e.class} - #{e.message}")
    Rails.logger.error(e.backtrace.join("\n"))
    # Generic message externally
    render json: { error: "An error occurred" }, status: :internal_server_error
  end
end
Reference
STRIDE Categories
| Category | Definition | Example Threat |
|---|---|---|
| Spoofing | Impersonating another user or system | Stolen authentication tokens |
| Tampering | Modifying data or code | SQL injection altering records |
| Repudiation | Denying an action occurred | Lack of audit logging |
| Information Disclosure | Exposing information to unauthorized parties | Database credentials in logs |
| Denial of Service | Disrupting service availability | Resource exhaustion attacks |
| Elevation of Privilege | Gaining unauthorized capabilities | Horizontal privilege escalation |
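The Repudiation row above notes that missing audit logging lets actors deny their actions. One common countermeasure is a tamper-evident audit trail in which each entry is hashed together with its predecessor. The `audit_entry` helper and its field names below are hypothetical, a sketch of the hash-chaining idea rather than a production logging API:

```ruby
require "json"
require "time"
require "digest"

# Sketch of a tamper-evident audit log entry: chaining each entry's hash
# to the previous entry's hash makes after-the-fact edits detectable.
def audit_entry(actor:, action:, previous_hash:)
  entry = { actor: actor, action: action, at: Time.now.utc.iso8601 }
  entry[:hash] = Digest::SHA256.hexdigest(previous_hash + entry.to_json)
  entry
end

genesis = "0" * 64
first  = audit_entry(actor: "alice", action: "delete_user:42", previous_hash: genesis)
second = audit_entry(actor: "bob",   action: "export_report",  previous_hash: first[:hash])
puts second[:hash]
```

Verifying the chain from the first entry forward detects any modified or deleted record, which directly counters repudiation claims.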
Trust Boundary Types
| Boundary | Description | Security Controls Required |
|---|---|---|
| Network Perimeter | Internet to application | Firewall, TLS, authentication |
| Application to Database | Application layer to data layer | Connection encryption, credential management, parameterized queries |
| User Roles | Between different privilege levels | Authorization checks, role validation |
| Multi-tenancy | Between tenant data spaces | Tenant scoping, data isolation |
| API Boundaries | External service integration | Authentication, input validation, rate limiting |
| Process Boundaries | Between application processes | IPC validation, permission checks |
Common Attack Vectors for Ruby Applications
| Attack Type | Vulnerable Code Pattern | Mitigation |
|---|---|---|
| SQL Injection | String interpolation in queries | Parameterized queries, ORM methods |
| Cross-Site Scripting | html_safe on user input | Default escaping, sanitize helper |
| Code Injection | eval with user input | Avoid eval, whitelist operations |
| Mass Assignment | Unpermitted params | Strong parameters |
| Deserialization | Marshal.load on untrusted data | JSON or safe_load with class whitelist |
| Command Injection | System calls with user input | Avoid shell commands, sanitize inputs |
| Path Traversal | File paths from user input | Validate and sanitize paths |
| Session Hijacking | Insecure session configuration | Secure cookies, HTTPS only |
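The Path Traversal row above recommends validating file paths built from user input. A minimal sketch of that mitigation, using only the standard library (`BASE_DIR` and `safe_path` are illustrative names, not a Rails API):

```ruby
require "pathname"

# Directory the application is allowed to serve files from (assumed path)
BASE_DIR = Pathname.new("/var/app/uploads")

def safe_path(user_supplied)
  # Resolve the requested path relative to the allowed base directory
  candidate = (BASE_DIR + user_supplied).cleanpath
  # Reject anything that escapes the base directory after ".." resolution;
  # this also rejects absolute paths supplied by the user.
  unless candidate.to_s.start_with?(BASE_DIR.to_s + "/")
    raise ArgumentError, "path traversal attempt"
  end
  candidate
end

safe_path("report.pdf")           # allowed
begin
  safe_path("../../etc/passwd")   # rejected
rescue ArgumentError
  puts "blocked traversal attempt"
end
```

Lexical normalization with `cleanpath` happens before the prefix check, so encoded `..` sequences cannot slip through once decoded into the path string.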
Threat Modeling Process Checklist
| Phase | Activities | Outputs |
|---|---|---|
| Preparation | Identify stakeholders, gather requirements, define scope | Scope document, participant list |
| Decomposition | Create architecture diagrams, identify components, map data flows | Architecture diagrams, component list, data flow diagrams |
| Threat Identification | Apply methodology (STRIDE), enumerate threats per component | Threat catalog |
| Threat Analysis | Assess impact and likelihood, prioritize threats | Risk-ranked threat list |
| Mitigation Planning | Define countermeasures, assign ownership, estimate effort | Mitigation plan with assignments |
| Validation | Review completeness, verify mitigations, test controls | Updated threat model, security tests |
| Documentation | Record decisions, document threats and mitigations | Threat model document |
| Maintenance | Schedule reviews, update on changes, track implementation | Review schedule, change log |
Ruby Security Audit Commands
| Command | Purpose | Usage Frequency |
|---|---|---|
| brakeman | Static security analysis | Every commit or daily |
| bundle audit | Check dependencies for vulnerabilities | Daily or weekly |
| rubocop with security cops | Code quality and security patterns | Every commit |
| rails credentials:edit | Manage encrypted credentials | When updating secrets |
| rails secret | Generate secure random values | When creating new secrets |
Data Classification Impact
| Classification | Example Data | Required Controls |
|---|---|---|
| Public | Marketing content | None |
| Internal | Business documents | Authentication |
| Confidential | Customer PII | Encryption, access logging, retention policies |
| Restricted | Payment data, health records | Compliance controls, encryption at rest and transit, auditing |
Security Control Categories
| Control Type | Purpose | Ruby Examples |
|---|---|---|
| Preventive | Stop attacks before they succeed | Input validation, strong parameters, CSRF tokens |
| Detective | Identify attacks in progress | Logging, monitoring, intrusion detection |
| Corrective | Respond to detected attacks | Rate limiting, account lockout, incident response |
| Deterrent | Discourage attack attempts | Security warnings, legal notices |
| Compensating | Alternative when primary control unavailable | Additional authentication factors |
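The corrective controls row lists rate limiting as an example. In Rails deployments this is usually delegated to middleware such as Rack::Attack, but the underlying mechanism can be sketched in plain Ruby; the `RateLimiter` class and its method names below are hypothetical:

```ruby
# Sliding-window rate limiter sketch: allow at most `limit` requests
# per client within the trailing `period` seconds.
class RateLimiter
  def initialize(limit:, period:)
    @limit  = limit
    @period = period
    @hits   = Hash.new { |h, k| h[k] = [] }
  end

  # Returns true if the request is allowed, false once the client
  # exceeds the limit within the window.
  def allow?(client_id, now = Time.now)
    window = @hits[client_id]
    window.reject! { |t| t < now - @period }  # drop expired timestamps
    return false if window.size >= @limit
    window << now
    true
  end
end

limiter = RateLimiter.new(limit: 3, period: 60)
4.times { |i| puts "request #{i + 1}: #{limiter.allow?('203.0.113.7')}" }
# the fourth request inside the window is rejected
```

A production implementation would back the counters with a shared store such as Redis so limits hold across application processes.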