CrackedRuby

Overview

Acceptance testing verifies that a software system meets business requirements and functions according to user expectations in real-world scenarios. This testing phase occurs after unit and integration testing, focusing on whether the application delivers value to stakeholders rather than technical correctness of individual components.

The practice originated from contractual software development, where clients needed formal validation that delivered software met agreed specifications before accepting delivery and making final payments. This requirement created a distinct testing phase separate from developer-driven quality assurance.

Acceptance tests differ from other testing types in perspective and scope. While unit tests validate individual components and integration tests verify component interactions, acceptance tests examine complete workflows from the user's viewpoint. A banking application might have unit tests for account balance calculations and integration tests for database transactions, but acceptance tests would verify the complete "transfer money between accounts" workflow as a user experiences it.

The tests typically use domain language that business stakeholders understand, avoiding technical implementation details. Instead of testing that a TransferService class correctly calls database methods, an acceptance test validates that "when a user transfers $100 from checking to savings, the checking balance decreases by $100 and the savings balance increases by $100."

# Acceptance test perspective
Feature: Account Transfer
  Scenario: Transfer money between accounts
    Given I have $500 in my checking account
    And I have $200 in my savings account
    When I transfer $100 from checking to savings
    Then my checking balance should be $400
    And my savings balance should be $300

# Unit test perspective (different focus)
describe TransferService do
  it "debits source account" do
    service.transfer(amount: 100, from: checking, to: savings)
    expect(checking.balance).to eq(400)
  end
end

Acceptance tests serve multiple purposes: validating functionality against requirements, providing living documentation of system behavior, enabling regression detection, and building stakeholder confidence in releases.

Key Principles

Acceptance testing operates on several fundamental principles that distinguish it from other testing approaches.

Business Value Focus: Tests validate that the system solves business problems rather than verifying technical implementation. Each test should map to a specific business requirement or user need. A test that verifies "users can search products by category" addresses a business need, while a test checking "the search index uses Elasticsearch" focuses on implementation details.

User Perspective: Tests simulate actual user interactions with the system. This includes using the same interfaces users access (web browsers, APIs, command-line tools) and following realistic workflows. Tests should reflect how users actually behave, including common mistakes and edge cases users encounter.

End-to-End Coverage: Acceptance tests exercise complete system slices, including all layers from user interface through business logic to data persistence. A single acceptance test might involve rendering a web page, processing form submissions, executing business rules, updating databases, sending emails, and displaying confirmation messages.

Domain Language: Tests use terminology from the business domain rather than technical jargon. Product managers and domain experts should understand test scenarios without technical knowledge. A retail system's tests would reference "customers," "orders," and "inventory" rather than "user records," "transaction objects," or "stock tables."

Executable Specification: Acceptance tests function as living documentation that stays synchronized with system behavior. When requirements change, tests change accordingly, and the test suite represents the current system specification. This contrasts with static documentation that often becomes outdated.

Stakeholder Collaboration: Writing acceptance tests involves developers, testers, product owners, and domain experts. This collaboration ensures tests capture actual requirements and use appropriate business language. The "Three Amigos" practice brings together developer, tester, and business analyst perspectives when defining acceptance criteria.

Outside-In Development: Acceptance tests define desired behavior before implementation begins. Developers write failing acceptance tests first, then build functionality to make tests pass. This ensures development focuses on delivering required behavior rather than building unnecessary features.

The relationship between acceptance criteria and acceptance tests creates a validation loop:

# Acceptance Criterion (business language)
"""
When a premium customer places an order over $100,
they receive free shipping
"""

# Acceptance Test (executable specification)
Scenario: Premium customer free shipping
  Given I am a premium customer
  And I have items totaling $150 in my cart
  When I proceed to checkout
  Then shipping cost should be $0
  And order total should be $150

Acceptance tests operate at different granularities. Story-level tests validate individual user stories, feature-level tests verify complete features across multiple stories, and release-level tests ensure major system capabilities function correctly. Each level addresses different stakeholder concerns.
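
One way to make these levels separately runnable is Cucumber tags (the tag names here are illustrative conventions, not a standard):

```gherkin
# Story-level scenario tagged for selective execution
@story @checkout
Scenario: Apply a percentage discount code
  Given I have a $100 item in my cart
  When I apply the discount code "SAVE10"
  Then my order total should be $90
```

A release-level run can then select only the broad checks, e.g. `bundle exec cucumber --tags @release`, while story-level tags remain part of the everyday suite.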

Ruby Implementation

Ruby provides multiple frameworks and approaches for acceptance testing, each suited to different application types and testing preferences.

Cucumber and Gherkin: Cucumber uses Gherkin syntax to write tests in structured natural language. Tests consist of features containing scenarios written in Given-When-Then format. Step definitions in Ruby connect Gherkin steps to actual test code.

# features/user_registration.feature
Feature: User Registration
  Scenario: Successful registration
    Given I am on the registration page
    When I fill in "Email" with "user@example.com"
    And I fill in "Password" with "secure123"
    And I click "Sign Up"
    Then I should see "Welcome! Registration successful"
    And I should receive a confirmation email

# features/step_definitions/registration_steps.rb
Given('I am on the registration page') do
  visit '/register'
end

When('I fill in {string} with {string}') do |field, value|
  fill_in field, with: value
end

When('I click {string}') do |button|
  click_button button
end

Then('I should see {string}') do |text|
  expect(page).to have_content(text)
end

Then('I should receive a confirmation email') do
  email = ActionMailer::Base.deliveries.last
  expect(email.to).to include('user@example.com')
  expect(email.subject).to match(/confirmation/i)
end

RSpec Feature Specs: RSpec provides a more developer-centric approach to acceptance testing through feature specs. These tests use Ruby code with descriptive methods rather than Gherkin syntax.

# spec/features/order_placement_spec.rb
require 'rails_helper'

RSpec.feature 'Order Placement', type: :feature do
  scenario 'Customer places order successfully' do
    product = create(:product, name: 'Widget', price: 29.99)
    
    visit products_path
    click_link 'Widget'
    click_button 'Add to Cart'
    click_link 'Checkout'
    
    fill_in 'Name', with: 'Jane Smith'
    fill_in 'Email', with: 'jane@example.com'
    fill_in 'Card Number', with: '4242424242424242'
    
    click_button 'Place Order'
    
    expect(page).to have_content('Order confirmed')
    expect(page).to have_content('Order #')
    
    order = Order.last
    expect(order.customer_email).to eq('jane@example.com')
    expect(order.total).to eq(29.99)
  end
end

Capybara Integration: Both Cucumber and RSpec feature specs typically use Capybara for browser automation. Capybara provides a consistent API for interacting with web applications regardless of the underlying driver.

# spec/support/capybara.rb
require 'capybara/rspec'
require 'capybara/cuprite'

Capybara.register_driver :cuprite do |app|
  Capybara::Cuprite::Driver.new(app, window_size: [1200, 800])
end

Capybara.default_driver = :cuprite
Capybara.javascript_driver = :cuprite

# Using Capybara in tests
scenario 'User interacts with dynamic content' do
  visit dashboard_path
  
  # Wait for JavaScript to load content
  expect(page).to have_selector('.dashboard-widget', wait: 5)
  
  # Interact with dropdown
  find('.category-selector').click
  within('.dropdown-menu') do
    click_link 'Electronics'
  end
  
  # Verify AJAX-loaded results
  expect(page).to have_css('.product-item', count: 12)
  expect(page).to have_content('Electronics Category')
end

API Testing: Acceptance tests for APIs focus on request/response validation rather than browser interaction. Ruby's rack-test or http.rb libraries facilitate API testing.

# spec/acceptance/api/products_spec.rb
require 'rails_helper'

RSpec.describe 'Products API', type: :request do
  describe 'POST /api/products' do
    it 'creates product with valid data' do
      headers = { 'Authorization' => "Bearer #{api_token}" }
      product_data = {
        product: {
          name: 'New Widget',
          price: 49.99,
          category: 'gadgets'
        }
      }
      
      post '/api/products', 
           params: product_data.to_json,
           headers: headers.merge('Content-Type' => 'application/json')
      
      expect(response).to have_http_status(:created)
      
      json = JSON.parse(response.body)
      expect(json['name']).to eq('New Widget')
      expect(json['price']).to eq('49.99')
      expect(json['id']).to be_present
      
      # Verify database state
      product = Product.find(json['id'])
      expect(product.name).to eq('New Widget')
    end
  end
end

Background Jobs: Acceptance tests must handle asynchronous processes. Ruby applications using Sidekiq, Delayed Job, or similar systems require special handling.

scenario 'User exports large report' do
  visit reports_path
  click_button 'Export to CSV'
  
  expect(page).to have_content('Export started')
  
  # Drain the queued export job immediately in the test
  perform_enqueued_jobs
  
  # Reload the page to pick up the completed status
  visit reports_path
  expect(page).to have_content('Export complete')
  
  # Verify file generation
  export = Export.last
  expect(export.status).to eq('completed')
  expect(export.file).to be_attached
end

Tools & Ecosystem

Ruby's acceptance testing ecosystem includes frameworks, browser automation tools, and supporting libraries that work together to create comprehensive test suites.

Cucumber remains the most popular tool for behavior-driven acceptance testing. It separates test scenarios (written in Gherkin) from implementation (Ruby step definitions), making tests readable by non-technical stakeholders. Cucumber supports multiple spoken languages for Gherkin keywords and integrates with various Ruby test frameworks.
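
The spoken-language support works through a `# language:` header at the top of a feature file; French keywords are shown here purely as an illustration:

```gherkin
# language: fr
Fonctionnalité: Authentification
  Scénario: Connexion réussie
    Étant donné un utilisateur enregistré
    Quand il saisit ses identifiants
    Alors il voit son tableau de bord
```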

# Gemfile
group :test do
  gem 'cucumber-rails', require: false
  gem 'database_cleaner-active_record'
end

# config/cucumber.yml
default: --publish-quiet --format progress --strict-undefined
html: --format html --out features_report.html

Capybara provides the standard interface for simulating user interactions in Ruby web applications. It abstracts browser automation, supporting multiple drivers including Selenium WebDriver, Cuprite (Chrome DevTools Protocol), and rack-test for non-JavaScript testing.

# Different Capybara drivers for different needs
Capybara.register_driver :selenium_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  options.add_argument('--no-sandbox')
  
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

# Cuprite for faster JavaScript testing
Capybara.register_driver :cuprite do |app|
  Capybara::Cuprite::Driver.new(
    app,
    window_size: [1400, 1000],
    browser_options: { 'no-sandbox' => nil },
    process_timeout: 15,
    timeout: 10
  )
end

SitePrism adds page object pattern support to Capybara, reducing code duplication and improving test maintainability. Page objects encapsulate page structure and interactions.

# spec/support/pages/login_page.rb
class LoginPage < SitePrism::Page
  set_url '/login'
  
  element :email_field, '#email'
  element :password_field, '#password'
  element :submit_button, 'input[type=submit]'
  element :error_message, '.alert-error'
  
  def login(email, password)
    email_field.set(email)
    password_field.set(password)
    submit_button.click
  end
end

# Using in tests
scenario 'User login with invalid credentials' do
  login_page = LoginPage.new
  login_page.load
  login_page.login('wrong@example.com', 'wrongpass')
  
  expect(login_page).to have_error_message
  expect(login_page.error_message.text).to eq('Invalid credentials')
end

FactoryBot creates test data for acceptance tests. While fixtures work, factories provide more flexibility and clearer test setup.

# spec/factories/users.rb
FactoryBot.define do
  factory :user do
    sequence(:email) { |n| "user#{n}@example.com" }
    password { 'password123' }
    
    trait :premium do
      subscription_tier { 'premium' }
      subscription_expires_at { 1.year.from_now }
    end
    
    trait :with_orders do
      after(:create) do |user|
        create_list(:order, 3, user: user)
      end
    end
  end
end

# Using in tests
scenario 'Premium user accesses exclusive features' do
  user = create(:user, :premium)
  login_as(user)
  
  visit premium_features_path
  expect(page).to have_content('Premium Features')
end

VCR and WebMock handle external HTTP requests during testing. VCR records real HTTP interactions and replays them in subsequent test runs, while WebMock stubs HTTP requests.

# spec/support/vcr.rb
VCR.configure do |config|
  config.cassette_library_dir = 'spec/vcr_cassettes'
  config.hook_into :webmock
  config.configure_rspec_metadata!
  config.filter_sensitive_data('<API_KEY>') { ENV['EXTERNAL_API_KEY'] }
end

# Using in tests
scenario 'User searches products via external API', :vcr do
  visit search_path
  fill_in 'Query', with: 'laptop'
  click_button 'Search'
  
  # First run records real API response
  # Subsequent runs use recorded response
  expect(page).to have_css('.search-result', minimum: 1)
end

DatabaseCleaner manages database state between tests, ensuring clean test isolation. Different strategies suit different needs.

# spec/support/database_cleaner.rb
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  config.before(:each, js: true) do
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) do
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end

Turnip provides an alternative to Cucumber, running Gherkin-style scenarios directly inside RSpec's runner: feature files live under spec/ and steps are plain RSpec-loaded Ruby, avoiding Cucumber's separate test process and configuration.

# spec/features/checkout.feature
@checkout
Feature: Checkout process
  Scenario: Complete purchase
    Given there are products available
    When I add a product to cart
    And I proceed to checkout
    Then I should see order confirmation

# spec/steps/checkout_steps.rb
# Loaded via `-r turnip/rspec` in .rspec; these steps apply
# to scenarios tagged @checkout
steps_for :checkout do
  step 'there are products available' do
    @product = create(:product, name: 'Widget', price: 29.99)
  end
  
  step 'I add a product to cart' do
    visit product_path(@product)
    click_button 'Add to Cart'
  end
  
  step 'I proceed to checkout' do
    click_link 'Checkout'
    fill_in_checkout_form
    click_button 'Complete Order'
  end
  
  step 'I should see order confirmation' do
    expect(page).to have_content('Order confirmed')
  end
end

Practical Examples

Acceptance tests cover various application scenarios, each requiring different testing approaches and considerations.

E-commerce Purchase Flow: This example demonstrates testing a complex multi-step workflow with database state verification.

Feature: Complete purchase workflow
  Scenario: Guest user completes purchase
    Given the following products exist:
      | name          | price | stock |
      | Red Widget    | 29.99 | 50    |
      | Blue Gadget   | 49.99 | 30    |
    When I visit the products page
    And I add "Red Widget" to my cart
    And I add "Blue Gadget" to my cart
    And I view my cart
    Then I should see 2 items in my cart
    And the cart total should be $79.98
    
    When I proceed to checkout as guest
    And I fill in shipping information:
      | field          | value              |
      | Name           | John Doe           |
      | Email          | john@example.com   |
      | Address        | 123 Main St        |
      | City           | Springfield        |
      | Postal Code    | 12345              |
    And I fill in payment information with a valid card
    And I confirm the order
    
    Then I should see "Order confirmed"
    And I should see an order number
    And I should receive a confirmation email
    And the inventory should be updated:
      | product      | remaining_stock |
      | Red Widget   | 49              |
      | Blue Gadget  | 29              |

# Step definitions
Given('the following products exist:') do |table|
  table.hashes.each do |row|
    create(:product, 
           name: row['name'],
           price: row['price'].to_f,
           stock: row['stock'].to_i)
  end
end

When('I add {string} to my cart') do |product_name|
  product = Product.find_by(name: product_name)
  visit product_path(product)
  click_button 'Add to Cart'
  expect(page).to have_content('Added to cart')
end

Then('the inventory should be updated:') do |table|
  table.hashes.each do |row|
    product = Product.find_by(name: row['product'])
    expect(product.stock).to eq(row['remaining_stock'].to_i)
  end
end

User Authentication and Authorization: Tests verify security boundaries and access control across different user roles.

scenario 'Admin accesses restricted areas while regular users cannot' do
  admin = create(:user, :admin)
  regular_user = create(:user)
  
  # Test admin access
  login_as(admin)
  visit admin_dashboard_path
  expect(page).to have_content('Admin Dashboard')
  expect(page).to have_link('User Management')
  
  click_link 'User Management'
  expect(page).to have_content('All Users')
  expect(page).to have_button('Delete User')
  
  logout
  
  # Test regular user denial
  login_as(regular_user)
  visit admin_dashboard_path
  expect(page).to have_content('Access Denied')
  expect(page).to have_current_path(root_path)
  
  # Verify direct URL access also denied
  visit admin_users_path
  expect(page).to have_content('Access Denied')
end

API Workflow Testing: Acceptance tests for APIs verify complete request/response cycles with proper authentication and data validation.

describe 'Task Management API' do
  let(:user) { create(:user) }
  let(:auth_headers) { { 'Authorization' => "Bearer #{generate_token(user)}" } }
  
  it 'manages task lifecycle' do
    # Create task
    post '/api/tasks',
         params: { task: { title: 'Complete report', priority: 'high' } }.to_json,
         headers: auth_headers.merge('Content-Type' => 'application/json')
    
    expect(response).to have_http_status(:created)
    task_id = JSON.parse(response.body)['id']
    
    # Retrieve task
    get "/api/tasks/#{task_id}", headers: auth_headers
    expect(response).to have_http_status(:ok)
    
    task_data = JSON.parse(response.body)
    expect(task_data['title']).to eq('Complete report')
    expect(task_data['priority']).to eq('high')
    expect(task_data['status']).to eq('pending')
    
    # Update task status
    patch "/api/tasks/#{task_id}",
          params: { task: { status: 'completed' } }.to_json,
          headers: auth_headers.merge('Content-Type' => 'application/json')
    
    expect(response).to have_http_status(:ok)
    expect(JSON.parse(response.body)['status']).to eq('completed')
    
    # Verify in list
    get '/api/tasks', headers: auth_headers
    tasks = JSON.parse(response.body)
    completed_task = tasks.find { |t| t['id'] == task_id }
    expect(completed_task['status']).to eq('completed')
  end
end

Background Job Integration: Tests verify asynchronous processing completes correctly and produces expected results.

scenario 'User generates report with background processing' do
  user = create(:user)
  create_list(:transaction, 100, user: user, created_at: 1.month.ago)
  
  login_as(user)
  visit reports_path
  
  select 'Last 30 Days', from: 'Period'
  select 'CSV', from: 'Format'
  click_button 'Generate Report'
  
  expect(page).to have_content('Report generation started')
  expect(page).to have_content('You will receive an email when complete')
  
  # Process background jobs
  perform_enqueued_jobs
  
  # Verify email sent
  email = ActionMailer::Base.deliveries.last
  expect(email.to).to include(user.email)
  expect(email.subject).to match(/Report Ready/)
  
  # Verify report file created
  report = Report.last
  expect(report.user).to eq(user)
  expect(report.status).to eq('completed')
  expect(report.file).to be_attached
  
  # Verify report content
  csv_content = report.file.download
  parsed = CSV.parse(csv_content, headers: true)
  expect(parsed.length).to eq(100)
end

Implementation Approaches

Different strategies exist for organizing and executing acceptance tests, each with distinct characteristics and appropriate use cases.

Cucumber-Style BDD: This approach uses Gherkin syntax to write scenarios in structured natural language, separating specification from implementation. Product owners and business analysts participate in writing scenarios, creating shared understanding of requirements.

The method works best when non-technical stakeholders actively participate in defining acceptance criteria. The readable format enables conversations about requirements, but requires maintaining step definitions and can become verbose for complex scenarios. Teams should establish clear guidelines for step definition reuse to avoid duplication.

# Organized by feature with shared step definitions
# features/user_management/registration.feature
# features/user_management/login.feature
# features/step_definitions/user_steps.rb (shared)
# features/step_definitions/navigation_steps.rb (shared)

# Clear scenario structure
Scenario: User registers and receives welcome email
  Given I am on the registration page
  When I register with valid information
  Then I should be logged in
  And I should receive a welcome email
  And my account should be created

RSpec Feature Specs: This developer-focused approach writes acceptance tests as Ruby code within RSpec's testing framework. Tests read more like code than prose, making them faster to write for developers but less accessible to non-technical stakeholders.

This strategy suits teams where developers own acceptance criteria or when rapid test development takes priority over stakeholder readability. The approach provides full Ruby language features without DSL constraints.

# Organized by feature area
# spec/features/authentication/
# spec/features/checkout/
# spec/features/admin/

feature 'User registration' do
  scenario 'successful registration creates account and sends email' do
    visit registration_path
    
    fill_in 'Email', with: 'newuser@example.com'
    fill_in 'Password', with: 'secure_password'
    fill_in 'Password Confirmation', with: 'secure_password'
    click_button 'Sign Up'
    
    expect(page).to have_content('Welcome')
    expect(User.last.email).to eq('newuser@example.com')
    expect(ActionMailer::Base.deliveries.last.to).to include('newuser@example.com')
  end
end

Page Object Pattern: This architectural approach encapsulates page structure and interactions within dedicated classes, separating test logic from page implementation details. Tests remain stable when page layouts change, as modifications only require updating page object classes.

# spec/support/pages/checkout_page.rb
class CheckoutPage < SitePrism::Page
  set_url '/checkout'
  
  section :shipping_form, '#shipping-info' do
    element :name_field, '#name'
    element :address_field, '#address'
    element :city_field, '#city'
  end
  
  section :payment_form, '#payment-info' do
    element :card_number, '#card-number'
    element :expiry, '#expiry'
    element :cvv, '#cvv'
  end
  
  element :submit_button, 'button[type=submit]'
  
  def fill_shipping(name:, address:, city:)
    shipping_form.name_field.set(name)
    shipping_form.address_field.set(address)
    shipping_form.city_field.set(city)
  end
  
  def fill_payment(card:, expiry:, cvv:)
    payment_form.card_number.set(card)
    payment_form.expiry.set(expiry)
    payment_form.cvv.set(cvv)
  end
  
  def complete_order
    submit_button.click
  end
end

# Test uses page objects
scenario 'complete checkout' do
  checkout_page = CheckoutPage.new
  checkout_page.load
  
  checkout_page.fill_shipping(
    name: 'Jane Smith',
    address: '123 Main St',
    city: 'Portland'
  )
  
  checkout_page.fill_payment(
    card: '4242424242424242',
    expiry: '12/25',
    cvv: '123'
  )
  
  checkout_page.complete_order
  
  expect(page).to have_content('Order confirmed')
end

API Contract Testing: For systems with separate frontend and backend, API contract tests validate that APIs meet agreed specifications independent of UI testing. This enables parallel development and catches integration issues early.

# API contract definition
# spec/contracts/orders_api_contract.rb
RSpec.describe 'Orders API Contract' do
  describe 'POST /api/orders' do
    it 'accepts valid order and returns expected schema' do
      request_body = {
        order: {
          items: [{ product_id: 1, quantity: 2 }],
          shipping_address: {} # address data omitted
        }
      }
      
      post '/api/orders', params: request_body.to_json, headers: api_headers
      
      expect(response).to have_http_status(:created)
      expect(response.media_type).to eq('application/json')
      
      body = JSON.parse(response.body)
      expect(body).to include('id', 'status', 'total', 'created_at')
      expect(body['items']).to be_an(Array)
      expect(body['status']).to eq('pending')
    end
  end
end

Service-Level Testing: For microservices architectures, acceptance tests validate individual services in isolation using test doubles for dependencies, then verify integration with contract tests or end-to-end tests across services.

This approach balances test speed with coverage. Service-level tests run quickly by avoiding dependencies, while selective end-to-end tests verify critical paths across service boundaries.
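
A minimal sketch of the isolation idea, using a hand-rolled test double in place of an HTTP client for a remote inventory service (`FakeInventoryClient`, `OrderService`, and their methods are hypothetical names for illustration):

```ruby
# Service-level isolation sketch: the service under test receives a
# fake client instead of the real inventory microservice's HTTP client.
# All names here are illustrative, not from a real codebase.

class FakeInventoryClient
  attr_reader :reservations

  def initialize(stock)
    @stock = stock          # e.g. { 'widget' => 5 }
    @reservations = []
  end

  # Mimics the remote API: reserve units, report success or failure.
  def reserve(product_id, quantity)
    available = @stock.fetch(product_id, 0)
    return false if available < quantity

    @stock[product_id] = available - quantity
    @reservations << [product_id, quantity]
    true
  end
end

class OrderService
  def initialize(inventory_client)
    @inventory = inventory_client
  end

  # Confirms the order only if every line item can be reserved.
  def place(items)
    items.all? { |id, qty| @inventory.reserve(id, qty) } ? :confirmed : :rejected
  end
end

inventory = FakeInventoryClient.new('widget' => 5)
service   = OrderService.new(inventory)

puts service.place('widget' => 2)  # => confirmed (stock drops to 3)
puts service.place('widget' => 9)  # => rejected (only 3 units left)
```

The same `OrderService` is wired to the real client in production; contract or end-to-end tests then cover the wire-level behavior the fake glosses over.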

Common Pitfalls

Acceptance testing presents several recurring challenges that reduce test effectiveness and reliability.

Flaky Tests: Tests that pass and fail inconsistently without code changes undermine confidence in test suites. Common causes include timing issues with asynchronous operations, random test data, and external service dependencies.

# Flaky: Assumes immediate rendering
scenario 'view search results' do
  visit search_path
  fill_in 'query', with: 'laptop'
  click_button 'Search'
  expect(page).to have_css('.result-item') # Fails if results load slowly
end

# Fixed: Explicit wait
scenario 'view search results' do
  visit search_path
  fill_in 'query', with: 'laptop'
  click_button 'Search'
  expect(page).to have_css('.result-item', wait: 5)
end

# Flaky: Random data causes intermittent failures
let(:user) { create(:user, email: "user#{rand(1000)}@example.com") }

# Fixed: Deterministic data with sequences
let(:user) { create(:user) } # Factory uses sequence for email

Testing Implementation Instead of Behavior: Tests that verify internal implementation details break when refactoring code, even when behavior remains unchanged. Tests should focus on observable outcomes.

# Testing implementation
scenario 'user creation' do
  visit signup_path
  fill_in_form
  click_button 'Sign Up'
  
  # Bad: Tests internal implementation
  expect(UserCreationService).to have_received(:create_user)
  expect(EmailService).to have_received(:send_welcome_email)
end

# Testing behavior
scenario 'user creation' do
  visit signup_path
  fill_in_form
  click_button 'Sign Up'
  
  # Good: Tests observable outcomes
  expect(page).to have_content('Account created')
  expect(User.last.email).to eq('test@example.com')
  expect(ActionMailer::Base.deliveries.last.subject).to match(/Welcome/)
end

Insufficient Test Isolation: Tests that depend on execution order or share state cause cascading failures and make debugging difficult. Each test should set up its own data and clean up afterward.

# Bad: Tests depend on order
describe 'user workflow' do
  it 'creates user' do
    @user = create(:user, email: 'test@example.com')
  end
  
  it 'updates user' do
    @user.update(name: 'New Name') # Fails if previous test didn't run
  end
end

# Good: Each test independent
describe 'user workflow' do
  it 'creates user' do
    user = create(:user, email: 'test@example.com')
    expect(user).to be_persisted
  end
  
  it 'updates user' do
    user = create(:user, email: 'test@example.com')
    user.update(name: 'New Name')
    expect(user.reload.name).to eq('New Name')
  end
end

Excessive Test Duration: Slow acceptance tests discourage frequent execution, delaying feedback. Tests that launch full browsers and process complete workflows accumulate time quickly.

Strategies to improve speed include using headless browsers, running non-JavaScript tests with rack-test, parallelizing test execution, and using database transactions instead of truncation for cleanup.

# Configure faster drivers for non-JS tests
RSpec.configure do |config|
  config.before(:each, type: :feature) do |example|
    Capybara.current_driver = Capybara.javascript_driver if example.metadata[:js]
  end
end

# Use transactions when possible
config.before(:each) do
  DatabaseCleaner.strategy = :transaction
end

config.before(:each, js: true) do
  DatabaseCleaner.strategy = :truncation
end
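
Parallel execution, the remaining strategy above, is commonly provided by the parallel_tests gem, which splits test files across processor cores with one database per process (commands sketched for a Rails setup):

```shell
# Gemfile (test group): gem 'parallel_tests'
bundle exec rake parallel:create parallel:prepare   # one test database per core
bundle exec parallel_rspec spec/features            # split feature specs across cores
bundle exec parallel_cucumber features/             # same for Cucumber features
```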

Overly Generic Step Definitions: Cucumber step definitions that try to handle too many scenarios become complex and difficult to maintain. Specific steps for specific scenarios often improve clarity.

# Overly generic
When(/^I fill in the form with (.+)$/) do |data|
  # Complex parsing logic to handle any form
  # Becomes unmaintainable
end

# Better: Specific and clear
When('I fill in the registration form') do
  fill_in 'Email', with: 'user@example.com'
  fill_in 'Password', with: 'secure123'
  fill_in 'Password Confirmation', with: 'secure123'
end

When('I fill in the shipping address') do
  fill_in 'Address', with: '123 Main St'
  fill_in 'City', with: 'Portland'
  fill_in 'Postal Code', with: '97201'
end

Ignoring Test Maintenance: Acceptance tests require ongoing maintenance as applications evolve. Outdated tests, brittle selectors, and accumulated technical debt reduce test suite value.

Teams should refactor tests alongside production code, update tests when requirements change, and remove obsolete tests. Regular test review sessions identify improvement opportunities.

Reference

Test Organization Patterns

| Pattern         | Structure                                 | Use Case                    |
| By Feature      | features/user_management/                 | Domain-focused organization |
| By User Journey | features/purchase_flow/                   | Workflow-based testing      |
| By Priority     | features/critical/, features/standard/    | Risk-based execution        |
| By Team         | features/team_alpha/, features/team_beta/ | Team ownership clarity      |

Common Capybara Methods

| Method        | Purpose                    | Example                                              |
| visit         | Navigate to path           | visit root_path                                      |
| click_link    | Click link by text or id   | click_link 'Sign Out'                                |
| click_button  | Click button by text or id | click_button 'Submit'                                |
| fill_in       | Enter text in field        | fill_in 'Email', with: 'test@example.com'            |
| select        | Choose from dropdown       | select 'Oregon', from: 'State'                       |
| check         | Check checkbox             | check 'Terms and Conditions'                         |
| uncheck       | Uncheck checkbox           | uncheck 'Subscribe to newsletter'                    |
| choose        | Select radio button        | choose 'Credit Card'                                 |
| have_content  | Verify text present        | expect(page).to have_content('Success')              |
| have_css      | Verify CSS selector        | expect(page).to have_css('.alert-success')           |
| have_selector | Verify element exists      | expect(page).to have_selector('h1', text: 'Welcome') |
| within        | Scope actions to element   | within('.modal') { click_button 'OK' }               |
| find          | Locate element             | find('#submit-btn').click                            |

Gherkin Keywords

| Keyword          | Purpose                       | Example                                      |
| Feature          | Describe feature being tested | Feature: User Authentication                 |
| Scenario         | Define test case              | Scenario: Successful login                   |
| Given            | Set up preconditions          | Given I am a registered user                 |
| When             | Describe action               | When I enter my credentials                  |
| Then             | Verify outcome                | Then I should see my dashboard               |
| And              | Additional step               | And I should see a welcome message           |
| But              | Negative condition            | But I should not see login form              |
| Background       | Steps before each scenario    | Background: Given database is seeded         |
| Scenario Outline | Parameterized scenario        | Scenario Outline: Login with different roles |
| Examples         | Data table for outline        | Examples: credentials                        |
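
The last two keywords combine into a parameterized scenario, for example:

```gherkin
Scenario Outline: Login with different roles
  Given I am logged in as a "<role>" user
  When I visit the admin dashboard
  Then I should see "<message>"

  Examples: credentials
    | role   | message         |
    | admin  | Admin Dashboard |
    | member | Access Denied   |
```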

Database Cleaner Strategies

| Strategy    | Mechanism           | Speed    | Use Case                    |
| transaction | Rollback after test | Fastest  | Non-JavaScript tests        |
| truncation  | Delete all rows     | Slower   | JavaScript tests            |
| deletion    | DELETE statements   | Moderate | When truncation unavailable |

Capybara Configuration Options

| Option                 | Purpose                      | Default   |
| default_max_wait_time  | Timeout for expectations     | 2 seconds |
| default_driver         | Driver for non-JS tests      | rack_test |
| javascript_driver      | Driver for JS tests          | selenium  |
| app_host               | Application URL              | nil       |
| server_port            | Test server port             | Random    |
| exact                  | Exact text matching          | false     |
| match                  | How to match ambiguous finds | smart     |
| ignore_hidden_elements | Ignore invisible elements    | true      |

Test Data Strategy Comparison

| Strategy        | Advantages              | Disadvantages               |
| Fixtures        | Fast, simple YAML files | Hard to maintain, brittle   |
| Factories       | Flexible, readable      | Slower than fixtures        |
| Database seeds  | Realistic data volume   | Slow, cleanup complexity    |
| API calls       | Tests real integrations | Slow, external dependencies |
| Inline creation | Clear test intent       | Verbose, duplication        |

HTTP Status Code Expectations

| Status                    | Meaning                  | When to Test                |
| 200 OK                    | Successful request       | Standard responses          |
| 201 Created               | Resource created         | POST requests               |
| 204 No Content            | Successful with no body  | DELETE requests             |
| 400 Bad Request           | Invalid input            | Validation scenarios        |
| 401 Unauthorized          | Authentication required  | Auth boundary tests         |
| 403 Forbidden             | Insufficient permissions | Authorization tests         |
| 404 Not Found             | Resource missing         | Invalid ID scenarios        |
| 422 Unprocessable Entity  | Validation failed        | Form error handling         |
| 500 Internal Server Error | Server error             | Error handling verification |

Test Execution Commands

| Command                                          | Purpose                     |
| bundle exec cucumber                             | Run all Cucumber features   |
| bundle exec cucumber features/login.feature      | Run specific feature        |
| bundle exec cucumber --tags @critical            | Run tagged scenarios        |
| bundle exec rspec spec/features                  | Run all RSpec feature specs |
| bundle exec rspec spec/features/checkout_spec.rb | Run specific spec           |
| bundle exec rspec --tag js                       | Run tagged specs            |
| bundle exec cucumber --profile html              | Generate HTML report        |