Overview
Regression testing validates that software modifications do not break existing functionality. When developers add features, fix bugs, or refactor code, they risk introducing new defects in previously working areas. Regression testing detects these unintended side effects by re-executing test cases that previously passed.
The practice emerged from the observation that software changes frequently caused unexpected failures in unrelated components. A bug fix in one module would mysteriously break another. A new feature would cause existing features to malfunction. These regressions occurred because software systems contain complex interdependencies that developers cannot fully predict.
Regression testing operates on a simple premise: if a test passed before a change and fails after, the change likely introduced a regression. The challenge lies in determining which tests to run, when to run them, and how to maintain the test suite as the software evolves.
Consider a web application with user authentication, product catalog, and checkout functionality. A developer modifies the authentication module to add two-factor authentication. Without regression testing, the team might verify that two-factor authentication works correctly but miss that the change broke the password reset feature or caused checkout to fail for authenticated users. Regression testing would run existing tests for all authentication-related features, catching these regressions before deployment.
The scope of regression testing ranges from a single modified function to the entire application. A localized change might require testing only the affected component and its direct dependencies. A change to shared infrastructure, database schema, or core libraries necessitates testing the entire system.
Key Principles
Regression testing builds on the principle of reproducibility. Test cases must produce consistent results when executed multiple times against the same code. A test that passes or fails unpredictably has no value for detecting regressions because its failures might indicate test problems rather than code problems.
Test isolation ensures that each test runs independently without depending on execution order or shared state from other tests. When tests interfere with each other, failures become difficult to diagnose because a regression in one area might cause failures in unrelated tests. Isolated tests fail only when the code they directly exercise contains regressions.
The regression test suite consists of test cases that verify specific functionality at various levels. Unit tests validate individual functions and classes. Integration tests verify interactions between components. End-to-end tests confirm complete user workflows. Each level serves a different purpose in regression detection:
Unit tests catch regressions in implementation details quickly with minimal execution time. A unit test for a string validation function runs in milliseconds and pinpoints the exact function containing the regression.
Integration tests detect regressions in component interactions that unit tests miss. Database query logic might work correctly in isolation but fail when combined with transaction management. Integration tests expose these issues.
End-to-end tests verify that the entire system functions correctly from a user perspective. These tests catch regressions in configuration, deployment, or cross-cutting concerns that lower-level tests cannot detect.
Test selection determines which tests to execute for a given change. Running every test for every change provides maximum confidence but consumes excessive time. Running only tests directly related to the changed code executes quickly but might miss regressions in unexpected areas. Effective test selection balances thoroughness with efficiency.
Change impact analysis identifies which parts of the codebase a modification might affect. Static analysis tools examine code dependencies, method call graphs, and data flow to estimate impact. Dynamic analysis observes which tests execute the changed code. Both approaches have limitations: static analysis may overestimate impact, while dynamic analysis depends on existing test coverage.
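As a minimal sketch of static, convention-based selection, changed files can be mapped to the spec files that exercise them. The mapping rules and paths below follow common Rails conventions but are illustrative assumptions, not the API of any particular tool:

```ruby
# Convention-based test selection: map changed files to spec files.
# Rules are deliberately simple; real tools also follow dependencies.
def specs_for(changed_files)
  changed_files.flat_map do |path|
    case path
    when %r{\Aapp/(.+)\.rb\z}
      ["spec/#{Regexp.last_match(1)}_spec.rb"] # app/models/user.rb -> spec/models/user_spec.rb
    when %r{\Aspec/.+_spec\.rb\z}
      [path]                                   # a changed spec selects itself
    when %r{\Adb/schema\.rb\z}, %r{\AGemfile}
      [:all]                                   # shared infrastructure: run the full suite
    else
      []
    end
  end.uniq
end

specs_for(['app/models/user.rb', 'spec/models/order_spec.rb'])
# => ["spec/models/user_spec.rb", "spec/models/order_spec.rb"]
```

The `[:all]` fallback reflects the limitation noted above: changes to shared infrastructure defeat file-level mapping, so the safe choice is the whole suite.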
Test maintenance represents a critical challenge in regression testing. As the codebase evolves, tests require updates to reflect legitimate changes in behavior. Tests also need modification when APIs change, dependencies update, or testing approaches improve. A test suite that does not evolve with the code becomes a maintenance burden rather than a safety net.
Flaky tests produce inconsistent results, sometimes passing and sometimes failing without code changes. Flakiness undermines regression testing because developers cannot distinguish real regressions from test instability. Common causes include timing dependencies, shared resources, external service dependencies, and environmental variations.
Implementation Approaches
The corrective approach executes regression tests after defect fixes to verify the fix works and did not introduce new problems. When developers fix a bug, they first add a test that reproduces the bug and fails against the current code. After implementing the fix, the new test passes, confirming the bug is fixed. The team then runs the full regression suite to verify no new bugs appeared. This approach integrates naturally with bug tracking workflows but provides value only after defects occur.
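The workflow can be sketched with a hypothetical bug: suppose `parse_price` originally did `"1,299.99".to_f`, which stops parsing at the comma and returns `1.0`. The regression test is written first (and fails against the buggy code); the fix strips thousands separators before conversion, and the test stays in the suite permanently:

```ruby
# Fixed implementation: strip thousands separators before converting.
# (The original bug, `string.to_f`, returned 1.0 for "1,299.99".)
def parse_price(string)
  string.delete(',').to_f
end

# Regression tests reproducing the reported bug; written before the fix,
# they failed then and guard against the defect silently returning now.
raise 'regression: comma-separated price' unless parse_price('1,299.99') == 1299.99
raise 'regression: plain price'           unless parse_price('10.50') == 10.5
```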
The progressive approach adds regression tests as development proceeds, building the test suite incrementally alongside feature development. Each feature implementation includes tests that become part of the regression suite. This approach distributes testing effort across the development cycle but requires discipline to maintain comprehensive coverage as the codebase grows.
The retest-all strategy executes the complete test suite for every code change. This provides maximum confidence that no regressions occurred but becomes impractical as test suites grow. A full regression suite might require hours or days to execute, delaying feedback and blocking deployment pipelines. Retest-all works well for small projects or when execution infrastructure can parallelize tests effectively.
The selective approach chooses a subset of tests based on the nature of the change. Developers identify affected components and run tests for those components plus tests for their dependencies. This reduces execution time while maintaining reasonable coverage. The risk is missing regressions in unexpected areas that the selection criteria did not identify.
Risk-based test selection prioritizes tests based on the probability and impact of regressions. High-risk areas include frequently modified code, complex algorithms, code with historical defect density, and critical business functionality. The team runs all tests for high-risk areas and selectively tests lower-risk areas. This approach requires ongoing risk assessment and may miss regressions in areas incorrectly classified as low-risk.
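A risk assessment like this can be made concrete with a simple scoring function. The weights, threshold, and area data below are illustrative assumptions, not an established formula:

```ruby
# Hypothetical risk scoring: weight recent change frequency and
# historical defect count, with a bonus for business-critical areas.
def risk_score(changes_last_month:, defects_last_year:, critical: false)
  score = changes_last_month * 2 + defects_last_year * 3
  critical ? score + 10 : score
end

areas = {
  'checkout'  => risk_score(changes_last_month: 8, defects_last_year: 5, critical: true),
  'reporting' => risk_score(changes_last_month: 1, defects_last_year: 0)
}

# Areas at or above the threshold get their full test suite on every change.
high_risk = areas.select { |_, score| score >= 15 }.keys
# => ["checkout"]
```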
The hybrid approach combines multiple strategies based on context. Critical changes receive full regression testing. Routine changes receive selective testing. Daily development uses fast-running unit and integration tests, while comprehensive end-to-end tests run nightly or before releases. This balances speed and thoroughness but requires careful orchestration.
Continuous integration pipelines automate regression testing by executing tests automatically when developers commit code. Fast tests run on every commit, providing rapid feedback. Slower tests run on scheduled intervals or before merging code into main branches. This approach detects regressions early when they are easier to fix but requires investment in test infrastructure and maintenance.
Test parallelization distributes test execution across multiple machines or processes to reduce total runtime. A suite requiring three hours on a single machine might complete in 15 minutes when distributed across 12 machines. Effective parallelization requires tests designed for independent execution and infrastructure capable of coordinating distributed test runs.
Ruby Implementation
Ruby provides multiple testing frameworks suitable for regression testing. RSpec offers behavior-driven development syntax with extensive matchers and mocking capabilities. Minitest provides a simpler, faster alternative with assertion-based syntax. Both frameworks support unit, integration, and end-to-end testing patterns.
A basic RSpec regression test verifies that a method continues to produce correct output after code changes:
# spec/models/user_spec.rb
RSpec.describe User, type: :model do
  describe '#full_name' do
    it 'combines first and last name with a space' do
      user = User.new(first_name: 'Alice', last_name: 'Johnson')
      expect(user.full_name).to eq('Alice Johnson')
    end

    it 'handles nil first name' do
      user = User.new(first_name: nil, last_name: 'Johnson')
      expect(user.full_name).to eq('Johnson')
    end

    it 'handles nil last name' do
      user = User.new(first_name: 'Alice', last_name: nil)
      expect(user.full_name).to eq('Alice')
    end
  end
end
These tests verify the current behavior of full_name. If a future refactoring changes this behavior unintentionally, the tests fail, alerting developers to the regression.
Regression tests for Rails applications often verify controller behavior:
# spec/controllers/products_controller_spec.rb
# Note: assigns and render_template require the rails-controller-testing gem
RSpec.describe ProductsController, type: :controller do
  describe 'GET #index' do
    it 'returns successful response' do
      get :index
      expect(response).to be_successful
    end

    it 'assigns all products' do
      product1 = Product.create!(name: 'Widget', price: 10.0)
      product2 = Product.create!(name: 'Gadget', price: 15.0)
      get :index
      expect(assigns(:products)).to match_array([product1, product2])
    end

    it 'renders index template' do
      get :index
      expect(response).to render_template(:index)
    end
  end
end
Database-backed tests require careful setup and teardown to maintain isolation:
# spec/rails_helper.rb
RSpec.configure do |config|
  # Disable Rails' built-in transactional fixtures so they do not
  # conflict with DatabaseCleaner's transaction strategy
  config.use_transactional_fixtures = false

  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end
DatabaseCleaner's transaction strategy wraps each test in a database transaction that rolls back after execution, ensuring tests do not interfere with each other.
Integration tests verify interactions between multiple components:
# spec/integration/order_processing_spec.rb
RSpec.describe 'Order Processing', type: :request do
  let(:user) { User.create!(email: 'test@example.com', password: 'password') }
  let(:product) { Product.create!(name: 'Widget', price: 10.0, stock: 100) }

  it 'completes purchase workflow' do
    # Add item to cart
    post '/cart/items', params: { product_id: product.id, quantity: 2 }
    expect(response).to have_http_status(:created)

    # Proceed to checkout
    post '/orders', params: { user_id: user.id }
    expect(response).to have_http_status(:created)
    order = JSON.parse(response.body)

    # Verify inventory updated
    expect(product.reload.stock).to eq(98)

    # Verify order created
    expect(Order.find(order['id']).total).to eq(20.0)
  end
end
Test helpers reduce duplication and improve maintainability:
# spec/support/authentication_helper.rb
module AuthenticationHelper
  def sign_in(user)
    post '/auth/sign_in', params: {
      email: user.email,
      password: user.password
    }
    @auth_token = JSON.parse(response.body)['token']
  end

  def authenticated_headers
    { 'Authorization' => "Bearer #{@auth_token}" }
  end
end

RSpec.configure do |config|
  config.include AuthenticationHelper, type: :request
end
Shared examples capture common regression test patterns:
# spec/support/shared_examples/authenticated_endpoint.rb
RSpec.shared_examples 'authenticated endpoint' do |method, path|
  it 'returns 401 without authentication' do
    send(method, path)
    expect(response).to have_http_status(:unauthorized)
  end

  it 'returns success with valid authentication' do
    user = User.create!(email: 'test@example.com', password: 'password')
    sign_in(user)
    send(method, path, headers: authenticated_headers)
    expect(response).to be_successful
  end
end

# Usage in request specs
RSpec.describe ProductsController, type: :request do
  describe 'GET #index' do
    it_behaves_like 'authenticated endpoint', :get, '/products'
  end
end
Test tagging enables selective test execution:
# spec/models/user_spec.rb
RSpec.describe User, type: :model do
  describe '#calculate_score', :slow do
    it 'computes user engagement score' do
      # Complex calculation test
    end
  end

  describe '#email_valid?', :fast do
    it 'validates email format' do
      # Quick validation test
    end
  end
end
Run only fast tests during development:
rspec --tag fast
Run all tests in CI:
rspec
Tools & Ecosystem
RSpec dominates Ruby testing with comprehensive features for regression testing. The framework includes matchers for assertions, mocks for test doubles, and hooks for setup and teardown. RSpec's syntax reads like natural language, making tests understandable to developers and stakeholders.
Minitest ships with Ruby as the standard testing library. It provides two interfaces: a spec-style DSL similar to RSpec and a traditional assertion-based style. Minitest executes faster than RSpec and has fewer dependencies, making it suitable for projects prioritizing speed and simplicity.
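A Minitest version of the earlier `full_name` regression test shows the assertion-based style. The `User` class here is a minimal self-contained stand-in for illustration, not the Rails model from the RSpec examples:

```ruby
# test/user_test.rb -- assertion-style Minitest (ships with Ruby).
require 'minitest/autorun'

# Minimal stand-in for the application's User model.
class User
  def initialize(first_name:, last_name:)
    @first_name = first_name
    @last_name = last_name
  end

  # Drops nil parts so a missing name does not leave a stray space.
  def full_name
    [@first_name, @last_name].compact.join(' ')
  end
end

class UserTest < Minitest::Test
  def test_combines_first_and_last_name_with_a_space
    user = User.new(first_name: 'Alice', last_name: 'Johnson')
    assert_equal 'Alice Johnson', user.full_name
  end

  def test_handles_nil_first_name
    user = User.new(first_name: nil, last_name: 'Johnson')
    assert_equal 'Johnson', user.full_name
  end
end
```

Run with `ruby test/user_test.rb`; no extra gems or configuration are required.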
Capybara enables end-to-end regression testing by simulating user interactions in web applications. It provides a domain-specific language for clicking links, filling forms, and verifying page content. Capybara works with multiple drivers including Selenium for JavaScript-heavy applications and Rack::Test for faster, lightweight testing.
# spec/features/user_registration_spec.rb
require 'rails_helper'

RSpec.feature 'User Registration', type: :feature do
  scenario 'user signs up with valid credentials' do
    visit '/signup'
    fill_in 'Email', with: 'newuser@example.com'
    fill_in 'Password', with: 'securepassword'
    fill_in 'Confirm Password', with: 'securepassword'
    click_button 'Sign Up'

    expect(page).to have_content('Welcome!')
    expect(User.find_by(email: 'newuser@example.com')).to be_present
  end
end
FactoryBot generates test data with customizable attributes, reducing boilerplate in test setup:
# spec/factories/users.rb
FactoryBot.define do
  factory :user do
    sequence(:email) { |n| "user#{n}@example.com" }
    password { 'password123' }

    trait :admin do
      admin { true }
    end

    trait :with_orders do
      after(:create) do |user|
        create_list(:order, 3, user: user)
      end
    end
  end
end

# Usage in tests
user = create(:user)
admin = create(:user, :admin)
user_with_orders = create(:user, :with_orders)
SimpleCov measures test coverage, identifying code paths not exercised by regression tests:
# spec/rails_helper.rb
# SimpleCov must start before application code loads, so keep this at the top
require 'simplecov'
SimpleCov.start 'rails' do
  add_filter '/spec/'
  add_filter '/config/'
  minimum_coverage 90
  refuse_coverage_drop
end
VCR records HTTP interactions during test execution and replays them in subsequent runs, eliminating dependencies on external services:
# spec/support/vcr.rb
VCR.configure do |config|
  config.cassette_library_dir = 'spec/fixtures/vcr_cassettes'
  config.hook_into :webmock
  config.configure_rspec_metadata!
end

# spec/services/weather_service_spec.rb
RSpec.describe WeatherService do
  it 'fetches current weather', :vcr do
    service = WeatherService.new
    weather = service.current_weather('New York')
    expect(weather.temperature).to be_a(Numeric)
  end
end
Guard automates test execution by monitoring file changes and running relevant tests:
# Guardfile
guard :rspec, cmd: 'bundle exec rspec' do
  watch(%r{^spec/.+_spec\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
  watch(%r{^app/controllers/(.+)_(controller)\.rb$}) do |m|
    ["spec/#{m[2]}s/#{m[1]}_#{m[2]}_spec.rb",
     "spec/requests/#{m[1]}_spec.rb"]
  end
end
Continuous integration systems execute regression tests automatically. GitHub Actions runs tests on every commit:
# .github/workflows/test.yml
name: Test Suite
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: 3.2
          bundler-cache: true
      - name: Setup Database
        run: |
          bin/rails db:create
          bin/rails db:schema:load
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432
      - name: Run Tests
        run: bundle exec rspec
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432
Parallel test execution distributes tests across multiple processes:
# Gemfile
gem 'parallel_tests', group: :test
# Run with 4 processes
bundle exec parallel_rspec -n 4 spec/
Common Patterns
The smoke test pattern executes a minimal subset of critical regression tests to verify basic functionality quickly. After deploying code, smoke tests confirm the application starts, connects to databases, and responds to requests. This provides rapid feedback before running comprehensive regression suites.
# spec/smoke/application_spec.rb
RSpec.describe 'Application Smoke Tests', type: :request do
  it 'serves homepage' do
    get '/'
    expect(response).to have_http_status(:success)
  end

  it 'connects to database' do
    expect { User.count }.not_to raise_error
  end

  it 'responds to health check' do
    get '/health'
    expect(response).to have_http_status(:success)
  end
end
The golden master pattern captures current system output as the expected baseline. When refactoring complex code with unclear specifications, developers run the code against various inputs, save the outputs, and use them as regression test expectations. This technique works when the current behavior is correct but difficult to specify precisely.
# spec/services/report_generator_spec.rb
RSpec.describe ReportGenerator do
  it 'generates report matching golden master' do
    generator = ReportGenerator.new
    report = generator.generate(start_date: Date.new(2024, 1, 1),
                                end_date: Date.new(2024, 1, 31))
    golden_master = File.read('spec/fixtures/reports/january_2024.json')
    expect(report.to_json).to eq(golden_master)
  end
end
The characterization test pattern documents existing behavior even when that behavior might be incorrect. When working with legacy code, developers write tests that capture current behavior before refactoring. If behavior changes during refactoring, tests fail, alerting developers to verify whether the change is intentional.
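The pattern can be sketched in plain Ruby. Here `legacy_code_for` is a hypothetical legacy method whose quirks (stripping, uppercasing, the `'UNKNOWN'` fallback) are captured exactly as they are; the recorded outputs document current behavior, not necessarily correct behavior:

```ruby
# Hypothetical legacy method: produces a 3-letter code from a product name.
def legacy_code_for(name)
  return 'UNKNOWN' if name.nil? || name.strip.empty?
  name.strip.upcase[0, 3]
end

# Outputs recorded by running the CURRENT code against varied inputs,
# then frozen as the baseline for the refactoring that follows.
CHARACTERIZED = {
  'Widget'    => 'WID',
  '  gadget ' => 'GAD',
  ''          => 'UNKNOWN',
  nil         => 'UNKNOWN'
}.freeze

# Any deviation during refactoring fails loudly, prompting a human
# to decide whether the change was intentional.
CHARACTERIZED.each do |input, recorded|
  actual = legacy_code_for(input)
  raise "behavior changed for #{input.inspect}: got #{actual.inspect}" unless actual == recorded
end
```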
Test prioritization ranks tests by likelihood of detecting regressions. Tests frequently failing in the past receive higher priority. Tests covering recently modified code run first. Tests for critical functionality take precedence over tests for peripheral features.
# spec/support/test_priority.rb
RSpec.configure do |config|
  config.around(:each) do |example|
    priority = example.metadata[:priority] || 5
    if ENV['RUN_PRIORITY'] && priority.to_s != ENV['RUN_PRIORITY']
      skip "Skipping priority #{priority} test"
    else
      example.run
    end
  end
end

# spec/models/user_spec.rb
RSpec.describe User, priority: 1 do
  # Critical tests
end
The test data builder pattern creates complex test objects through a fluent interface:
# spec/support/builders/order_builder.rb
class OrderBuilder
  def initialize
    @attributes = {
      status: 'pending',
      items: []
    }
  end

  def with_user(user)
    @attributes[:user] = user
    self
  end

  def with_item(product, quantity: 1)
    @attributes[:items] << { product: product, quantity: quantity }
    self
  end

  def completed
    @attributes[:status] = 'completed'
    @attributes[:completed_at] = Time.current
    self
  end

  def build
    order = Order.create!(@attributes.except(:items))
    @attributes[:items].each do |item|
      order.line_items.create!(product: item[:product],
                               quantity: item[:quantity])
    end
    order
  end
end

# Usage
order = OrderBuilder.new
                    .with_user(user)
                    .with_item(widget, quantity: 2)
                    .with_item(gadget, quantity: 1)
                    .completed
                    .build
Contract testing verifies that services maintain their API contracts across changes. When service A depends on service B, contract tests ensure that changes to service B do not break service A's expectations.
# spec/contracts/payment_service_contract_spec.rb
RSpec.describe 'Payment Service Contract' do
  it 'processes payment request' do
    stub_request(:post, 'https://api.payment.example.com/charges')
      .with(
        body: hash_including(
          amount: anything,
          currency: 'USD',
          source: anything
        )
      )
      .to_return(
        status: 200,
        body: { id: 'ch_123', status: 'succeeded' }.to_json
      )

    result = PaymentService.charge(amount: 100, currency: 'USD',
                                   source: 'tok_123')
    expect(result.success?).to be true
  end
end
Common Pitfalls
Flaky tests randomly fail without code changes, undermining confidence in the regression suite. Time-dependent tests fail when execution timing varies. Tests relying on external services fail when those services experience downtime. Tests with race conditions fail unpredictably in concurrent execution.
# Flaky: depends on real elapsed time
RSpec.describe 'Cache Expiration' do
  it 'expires after 1 second' do
    Rails.cache.write('key', 'value', expires_in: 1.second)
    sleep 1
    expect(Rails.cache.read('key')).to be_nil # May fail due to timing
  end
end

# Fixed: use time manipulation
# (travel requires including ActiveSupport::Testing::TimeHelpers)
RSpec.describe 'Cache Expiration' do
  it 'expires after 1 second' do
    Rails.cache.write('key', 'value', expires_in: 1.second)
    travel 2.seconds
    expect(Rails.cache.read('key')).to be_nil
  end
end
Shared state between tests causes failures that depend on execution order. One test creates database records or modifies global configuration, affecting subsequent tests.
# Problematic: shared state
RSpec.describe User do
  before(:all) do
    @user = User.create!(email: 'test@example.com')
  end

  it 'finds user by email' do
    expect(User.find_by(email: 'test@example.com')).to eq(@user)
  end

  it 'deletes user' do
    @user.destroy
    expect(User.find_by(email: 'test@example.com')).to be_nil
  end
  # Second test affects first test if execution order changes
end

# Fixed: isolated state
RSpec.describe User do
  it 'finds user by email' do
    user = create(:user, email: 'test@example.com')
    expect(User.find_by(email: 'test@example.com')).to eq(user)
  end

  it 'deletes user' do
    user = create(:user)
    user.destroy
    expect(User.find_by(id: user.id)).to be_nil
  end
end
Brittle tests couple tightly to implementation details, failing when code changes even though behavior remains correct. Tests verifying private methods or internal data structures break during refactoring.
# Brittle: tests implementation
RSpec.describe OrderProcessor do
  it 'calls validate_items' do
    processor = OrderProcessor.new
    expect(processor).to receive(:validate_items)
    processor.process(order)
  end
end

# Better: tests behavior
RSpec.describe OrderProcessor do
  it 'rejects orders with invalid items' do
    order = build(:order, items: [invalid_item])
    processor = OrderProcessor.new
    result = processor.process(order)

    expect(result).to be_failure
    expect(result.error).to include('invalid items')
  end
end
Insufficient test data coverage misses edge cases that trigger regressions. Tests using only typical values fail to detect bugs in boundary conditions, null handling, or unusual input combinations.
# Insufficient coverage
RSpec.describe StringProcessor do
  it 'processes string' do
    expect(StringProcessor.clean('hello')).to eq('hello')
  end
end

# Better coverage
RSpec.describe StringProcessor do
  it 'handles normal strings' do
    expect(StringProcessor.clean('hello')).to eq('hello')
  end

  it 'handles empty strings' do
    expect(StringProcessor.clean('')).to eq('')
  end

  it 'handles nil' do
    expect(StringProcessor.clean(nil)).to eq('')
  end

  it 'handles whitespace' do
    expect(StringProcessor.clean(' hello ')).to eq('hello')
  end

  it 'handles special characters' do
    expect(StringProcessor.clean('hello<script>')).to eq('hello')
  end
end
Slow regression suites delay feedback and discourage frequent test execution. Tests performing unnecessary database operations, making actual HTTP requests, or processing large datasets consume excessive time.
# Slow: creates unnecessary data
RSpec.describe ReportGenerator do
  it 'generates summary report' do
    100.times { create(:order, :completed) }
    report = ReportGenerator.new.summary
    expect(report.total_orders).to eq(100)
  end
end

# Faster: uses minimal data
RSpec.describe ReportGenerator do
  it 'generates summary report' do
    create_list(:order, 2, :completed)
    report = ReportGenerator.new.summary
    expect(report.total_orders).to eq(2)
  end
end
Overuse of mocking creates tests that pass despite broken functionality. Tests mocking all dependencies verify only that the code calls the mocked methods, not that the system functions correctly.
# Over-mocked
RSpec.describe OrderController, type: :controller do
  it 'creates order' do
    allow(OrderService).to receive(:create).and_return(true)
    post :create, params: { order: order_params }
    expect(response).to be_successful
  end
  # Test passes even if OrderService.create is never implemented
end

# Better: integration test
RSpec.describe OrderController, type: :request do
  it 'creates order' do
    post '/orders', params: { order: valid_order_params }
    expect(response).to have_http_status(:created)
    expect(Order.last.status).to eq('pending')
  end
end
Test maintenance neglect accumulates technical debt. Outdated tests fail for legitimate reasons but developers disable them rather than fix them. Duplicated test code makes updates difficult. Poor test organization makes finding relevant tests challenging.
Reference
Test Level Characteristics
| Level | Scope | Execution Speed | Isolation | Regression Value |
|---|---|---|---|---|
| Unit | Single method/class | Milliseconds | Complete | High for implementation changes |
| Integration | Multiple components | Seconds | Partial | High for interface changes |
| System | Full application | Minutes | None | High for deployment verification |
| End-to-End | User workflows | Minutes to hours | None | High for feature regressions |
RSpec Core Matchers
| Matcher | Purpose | Example |
|---|---|---|
| eq | Exact equality | expect(result).to eq(expected) |
| be | Identity comparison | expect(object).to be(same_object) |
| be_nil | Nil check | expect(value).to be_nil |
| be_truthy | Truthiness | expect(condition).to be_truthy |
| include | Collection membership | expect(array).to include(item) |
| match | Regex matching | expect(string).to match(/pattern/) |
| raise_error | Exception verification | expect { code }.to raise_error(ErrorClass) |
| change | State change | expect { action }.to change { counter }.by(1) |
Test Selection Strategies
| Strategy | When to Use | Execution Time | Coverage |
|---|---|---|---|
| Retest All | Small suites, critical releases | High | Complete |
| Selective | Development, low-risk changes | Low | Partial |
| Risk-Based | Regular releases | Medium | Focused on critical areas |
| Smoke | Deployment verification | Very Low | Minimal critical paths |
| Regression Pack | Scheduled testing | Medium | Historical failure areas |
Common Test Hooks
| Hook | Timing | Scope | Use Case |
|---|---|---|---|
| before(:suite) | Once before all tests | Global | Database setup, configuration |
| before(:all) | Once per example group | Group | Shared expensive setup |
| before(:each) | Before each test | Individual | Test isolation, fresh state |
| after(:each) | After each test | Individual | Cleanup, resource release |
| after(:all) | Once per example group | Group | Group-level cleanup |
| after(:suite) | Once after all tests | Global | Final cleanup, reporting |
Minitest Assertions
| Assertion | Purpose | RSpec Equivalent |
|---|---|---|
| assert_equal | Value equality | expect(x).to eq(y) |
| assert_nil | Nil check | expect(x).to be_nil |
| assert_includes | Collection membership | expect(array).to include(x) |
| assert_raises | Exception verification | expect { }.to raise_error |
| assert_match | Pattern matching | expect(str).to match(/pattern/) |
| refute | Falsy assertion | expect(x).to be_falsy |
Test Data Management Approaches
| Approach | Setup Method | Cleanup Method | Isolation | Speed |
|---|---|---|---|---|
| Fixtures | Load YAML files | Database truncation | Low | Fast |
| Factories | Build objects programmatically | Transaction rollback | High | Medium |
| Builders | Fluent construction API | Transaction rollback | High | Medium |
| Seeds | Database seeds | Manual cleanup | Low | Slow |
Coverage Metrics
| Metric | Measures | Target Range | Limitations |
|---|---|---|---|
| Line Coverage | Lines executed | 80-90% | Does not measure assertion quality |
| Branch Coverage | Conditional paths | 70-85% | Misses logical complexity |
| Method Coverage | Methods called | 85-95% | Does not verify correctness |
| Path Coverage | Execution paths | 60-75% | Exponential complexity for large methods |
CI/CD Integration Points
| Stage | Test Scope | Purpose | Failure Action |
|---|---|---|---|
| Pre-commit | Changed files | Fast feedback | Warn developer |
| Commit | Unit tests | Continuous validation | Block commit |
| Pull Request | Full suite | Change verification | Block merge |
| Nightly | Extended suite | Comprehensive check | Alert team |
| Pre-deployment | Smoke tests | Deployment validation | Block deployment |
| Post-deployment | Health checks | Production verification | Trigger rollback |