Overview
ACID properties define a set of guarantees that database systems provide for transactions to ensure data integrity and reliability. The acronym stands for Atomicity, Consistency, Isolation, and Durability. These properties address the fundamental challenges of maintaining data correctness in concurrent, failure-prone systems.
Database transactions represent logical units of work that may involve multiple operations. Without ACID guarantees, concurrent transactions could interfere with each other, partial failures could leave data in inconsistent states, and system crashes could result in data loss. ACID properties provide a framework for reasoning about transaction behavior and data integrity.
The concept originated in the late 1970s and early 1980s as relational databases evolved to support concurrent access and transaction processing. The term was formalized by Theo Härder and Andreas Reuter in 1983, building on earlier work in database theory and distributed systems.
Modern applications depend on ACID properties for operations involving financial transactions, inventory management, user account modifications, and any scenario where data consistency matters. The guarantees apply regardless of concurrent access patterns, system failures, or power outages.
# Basic transaction demonstrating ACID properties
ActiveRecord::Base.transaction do
  account_a = Account.find(1)
  account_b = Account.find(2)
  account_a.balance -= 100
  account_b.balance += 100
  account_a.save!
  account_b.save!
end
# Either both updates succeed or neither does
Database systems implement ACID properties through various mechanisms including write-ahead logging, locking protocols, multi-version concurrency control, and transaction isolation levels. The specific implementation approach affects performance characteristics, concurrency levels, and the types of anomalies that may occur.
Key Principles
Atomicity ensures that transactions execute as indivisible units. A transaction either completes all its operations or none of them. Partial execution never leaves the database in an intermediate state. If any operation within the transaction fails, the system rolls back all previous operations, restoring the database to its state before the transaction began.
The atomic property prevents scenarios where a money transfer debits one account but fails to credit the other, or where a multi-row update modifies some rows but not others. Database systems implement atomicity through transaction logs that record operations and enable rollback on failure.
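The rollback mechanism can be sketched in plain Ruby over an in-memory hash (an illustrative undo log, not a real database implementation): prior values are recorded before each write and restored if any step raises.

```ruby
# Illustrative undo log: record prior values, apply each change,
# and restore everything if any step raises.
def atomic_update(store, changes)
  undo = {}
  begin
    changes.each do |key, value|
      undo[key] = store[key] unless undo.key?(key)
      raise ArgumentError, "negative balance" if value.negative?
      store[key] = value
    end
  rescue StandardError
    undo.each { |key, old| store[key] = old } # roll back to prior state
    raise
  end
  store
end

accounts = { a: 100, b: 0 }
atomic_update(accounts, { a: 50, b: 50 })
# accounts is now { a: 50, b: 50 }
```

A failed update leaves the store exactly as it was: if the second change in a batch raises, the first is undone before the error propagates.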
Consistency maintains database invariants across transactions. The database starts in a valid state and ends in a valid state after each transaction. Consistency encompasses several aspects: domain constraints (data type and range validation), referential integrity (foreign key relationships), application-specific business rules, and uniqueness constraints.
The database enforces schema-level constraints automatically. Application-level consistency requires careful transaction design to maintain business rules. A transaction transferring funds must ensure account balances never become negative. A transaction creating an order must verify inventory availability.
Isolation controls how concurrent transactions interact with each other. Perfect isolation means transactions execute as if they were the only transaction running on the system. In practice, databases offer multiple isolation levels that trade off correctness guarantees for performance. Lower isolation levels allow more concurrency but permit certain anomalies.
Three primary anomalies occur when isolation weakens:
Dirty reads happen when one transaction reads uncommitted changes from another transaction. If the second transaction rolls back, the first transaction read data that never existed.
Non-repeatable reads occur when a transaction reads the same data twice and gets different values because another transaction modified and committed the data between reads.
Phantom reads happen when a transaction executes the same query twice and gets different sets of rows because another transaction inserted or deleted rows between executions.
Durability guarantees that committed transactions survive system failures. Once a transaction commits, its changes persist even if the system crashes immediately afterward. The database writes changes to non-volatile storage before acknowledging the commit. Durability protects against data loss from power failures, operating system crashes, and hardware malfunctions.
Database systems achieve durability through write-ahead logging. Changes write to a transaction log on persistent storage before modifying the actual database pages. After a crash, the system replays the log to restore committed transactions. Some systems offer configurable durability levels, allowing applications to trade off durability guarantees for performance.
The four properties work together to provide transaction semantics. Atomicity and durability relate to failure handling. Consistency and isolation address correctness in concurrent execution. Database systems may implement these properties through different mechanisms, but the guarantees remain consistent.
Ruby Implementation
Ruby database libraries provide transaction support through block-based APIs that manage ACID properties automatically. ActiveRecord, Sequel, and other ORMs handle transaction demarcation, rollback on exceptions, and connection management.
# ActiveRecord transaction with automatic rollback
class BankTransfer
  def self.execute(from_id, to_id, amount)
    ActiveRecord::Base.transaction do
      from_account = Account.lock.find(from_id)
      to_account = Account.lock.find(to_id)
      raise "Insufficient funds" if from_account.balance < amount
      from_account.balance -= amount
      to_account.balance += amount
      from_account.save!
      to_account.save!
    end
  end
end
# Atomicity: exception triggers automatic rollback
BankTransfer.execute(1, 2, 100)
The transaction block ensures atomicity. Any exception raised within the block triggers a rollback, preventing partial updates. The lock method acquires row-level locks to prevent concurrent modifications, supporting isolation. The save! methods raise exceptions on validation failures, integrating consistency checks with atomicity.
Ruby's Sequel library offers similar transaction semantics with a different API design:
DB.transaction do
  from_account = Account.where(id: from_id).for_update.first
  to_account = Account.where(id: to_id).for_update.first
  raise "Insufficient funds" if from_account.balance < amount
  from_account.update(balance: from_account.balance - amount)
  to_account.update(balance: to_account.balance + amount)
end
The for_update method acquires pessimistic locks, similar to ActiveRecord's lock method. Sequel automatically manages transaction lifecycle, committing on successful block completion and rolling back on exceptions.
Ruby applications can configure transaction isolation levels through connection settings or per-transaction:
# Set isolation level for a specific transaction
ActiveRecord::Base.transaction(isolation: :serializable) do
  # Operations execute with serializable isolation
  user = User.find(user_id)
  user.update(last_accessed: Time.now)
end
# Configure default isolation level
ActiveRecord::Base.establish_connection(
  adapter: 'postgresql',
  database: 'myapp',
  variables: {
    'default_transaction_isolation' => 'repeatable read'
  }
)
Different isolation levels affect the types of anomalies that may occur:
# Read committed (default for PostgreSQL)
# Prevents dirty reads but allows non-repeatable reads
ActiveRecord::Base.transaction(isolation: :read_committed) do
  account = Account.find(1)
  balance_1 = account.balance
  # Another transaction may modify and commit here
  sleep(1)
  account.reload
  balance_2 = account.balance
  # balance_1 may differ from balance_2
end
# Repeatable read
# Prevents dirty reads and non-repeatable reads
ActiveRecord::Base.transaction(isolation: :repeatable_read) do
  account = Account.find(1)
  balance_1 = account.balance
  sleep(1)
  account.reload
  balance_2 = account.balance
  # balance_1 equals balance_2
  # But new rows may appear (phantom reads possible)
end
# Serializable
# Prevents all anomalies
ActiveRecord::Base.transaction(isolation: :serializable) do
  # Complete isolation from concurrent transactions
  # Database may abort transaction if conflicts detected
end
Ruby applications handle transaction retries for serialization failures:
def transfer_with_retry(from_id, to_id, amount, max_attempts = 3)
  attempts = 0
  begin
    attempts += 1
    ActiveRecord::Base.transaction(isolation: :serializable) do
      from_account = Account.find(from_id)
      to_account = Account.find(to_id)
      from_account.balance -= amount
      to_account.balance += amount
      from_account.save!
      to_account.save!
    end
  rescue ActiveRecord::SerializationFailure => e
    retry if attempts < max_attempts
    raise
  end
end
Nested transactions in Ruby use savepoints rather than true nested transactions:
ActiveRecord::Base.transaction do
  account = Account.find(1)
  account.balance -= 100
  account.save!
  # Savepoint created
  ActiveRecord::Base.transaction(requires_new: true) do
    log = TransactionLog.create!(account_id: account.id, amount: -100)
    raise ActiveRecord::Rollback # Rolls back to savepoint only
  end
  # Outer transaction continues
end
The requires_new: true option creates a savepoint. Rolling back the inner transaction returns to the savepoint without affecting the outer transaction's changes.
Practical Examples
Financial Transaction Processing
A payment processing system transfers money between accounts while maintaining audit trails and enforcing business rules:
class PaymentProcessor
  def process_payment(payer_id, payee_id, amount, description)
    ActiveRecord::Base.transaction do
      payer = Account.lock.find(payer_id)
      payee = Account.lock.find(payee_id)
      # Consistency: validate business rules
      raise InsufficientFundsError if payer.balance < amount
      raise InvalidAmountError if amount <= 0
      raise AccountFrozenError if payer.frozen? || payee.frozen?
      # Atomicity: all operations succeed or all fail
      payer.balance -= amount
      payee.balance += amount
      transaction = Transaction.create!(
        from_account_id: payer_id,
        to_account_id: payee_id,
        amount: amount,
        description: description,
        processed_at: Time.now
      )
      payer.save!
      payee.save!
      # Isolation: locks prevent concurrent modifications
      # Durability: commit ensures persistence
      transaction
    end
  rescue StandardError => e
    logger.error("Payment processing failed: #{e.message}")
    raise
  end
end
# Usage preserves ACID properties
processor = PaymentProcessor.new
processor.process_payment(1, 2, 50.00, "Invoice #12345")
The transaction ensures atomicity by rolling back all changes if any operation fails. Consistency validation occurs before modifying account balances. Pessimistic locking provides isolation. The database commits changes to persistent storage for durability.
Inventory Management
An e-commerce system manages product inventory across multiple warehouses:
class InventoryManager
  def allocate_inventory(order_items)
    ActiveRecord::Base.transaction(isolation: :serializable) do
      order_items.each do |item|
        product = Product.find(item[:product_id])
        # Find warehouse with sufficient stock
        warehouse = Warehouse
          .joins(:inventory_items)
          .where(inventory_items: { product_id: product.id })
          .where('inventory_items.quantity >= ?', item[:quantity])
          .lock
          .first
        raise OutOfStockError unless warehouse
        # Atomically update inventory
        inventory_item = warehouse.inventory_items.find_by(product_id: product.id)
        inventory_item.quantity -= item[:quantity]
        inventory_item.save!
        # Record allocation
        Allocation.create!(
          order_id: item[:order_id],
          product_id: product.id,
          warehouse_id: warehouse.id,
          quantity: item[:quantity],
          allocated_at: Time.now
        )
      end
    end
  end
end
# Multiple concurrent orders compete for inventory
# Serializable isolation prevents overselling
InventoryManager.new.allocate_inventory([
  { order_id: 101, product_id: 5, quantity: 2 },
  { order_id: 101, product_id: 8, quantity: 1 }
])
Serializable isolation prevents race conditions where two concurrent transactions allocate the same inventory item. The transaction either allocates all items or none, maintaining consistency between inventory levels and allocation records.
User Registration with Referral Tracking
A user registration system maintains referral relationships and account creation atomicity:
class UserRegistration
  def register(email, password, referral_code = nil)
    ActiveRecord::Base.transaction do
      # Consistency: validate uniqueness constraint
      raise EmailTakenError if User.exists?(email: email)
      user = User.create!(
        email: email,
        password_digest: BCrypt::Password.create(password),
        created_at: Time.now
      )
      if referral_code
        referrer = User.find_by!(referral_code: referral_code)
        # Atomic referral tracking
        Referral.create!(
          referrer_id: referrer.id,
          referred_id: user.id,
          created_at: Time.now
        )
        # Award referral bonus
        referrer.referral_credits += 10
        referrer.save!
      end
      # Create default settings
      UserSetting.create!(
        user_id: user.id,
        notification_enabled: true,
        theme: 'light'
      )
      # Generate unique referral code for new user
      user.referral_code = generate_unique_code
      user.save!
      user
    end
  end

  private

  def generate_unique_code
    loop do
      code = SecureRandom.alphanumeric(8).upcase
      break code unless User.exists?(referral_code: code)
    end
  end
end
The transaction creates the user account, referral relationship, default settings, and unique referral code atomically. If any step fails, the entire registration rolls back, preventing orphaned records or incomplete account state.
Order Processing with Multiple Database Tables
An order system updates inventory, creates order records, and triggers notifications:
class OrderProcessor
  def create_order(customer_id, items, shipping_address)
    order = nil
    ActiveRecord::Base.transaction do
      # Create order record
      order = Order.create!(
        customer_id: customer_id,
        status: 'pending',
        total_amount: calculate_total(items),
        created_at: Time.now
      )
      # Create order items and decrement inventory
      items.each do |item|
        product = Product.lock.find(item[:product_id])
        raise InsufficientInventoryError if product.stock < item[:quantity]
        OrderItem.create!(
          order_id: order.id,
          product_id: product.id,
          quantity: item[:quantity],
          unit_price: product.price
        )
        product.stock -= item[:quantity]
        product.save!
      end
      # Create shipping record
      Shipment.create!(
        order_id: order.id,
        address: shipping_address,
        status: 'pending',
        created_at: Time.now
      )
      # Update customer statistics
      customer = Customer.find(customer_id)
      customer.total_orders += 1
      customer.lifetime_value += order.total_amount
      customer.save!
    end
    # Send notifications after commit
    OrderNotificationJob.perform_later(order.id)
    order
  end

  private

  def calculate_total(items)
    items.sum { |item| item[:quantity] * Product.find(item[:product_id]).price }
  end
end
Design Considerations
Isolation Level Selection
Choosing the appropriate isolation level involves trade-offs between consistency guarantees and performance. Stricter isolation levels prevent more anomalies but reduce concurrency and may increase transaction abort rates.
Read Committed isolation prevents dirty reads but allows non-repeatable reads and phantom reads. This level works well for applications where stale reads within a transaction are acceptable. Banking systems use Read Committed for balance inquiries where seeing the most recent committed value matters more than snapshot consistency.
Repeatable Read isolation prevents dirty reads and non-repeatable reads but allows phantom reads. Applications requiring consistent snapshots of individual rows but tolerating new rows appearing benefit from this level. Reporting queries that aggregate existing data use Repeatable Read to ensure calculations remain consistent even if new records arrive.
Serializable isolation prevents all anomalies by ensuring transactions execute in some serial order. Applications requiring complete isolation from concurrent transactions select this level. Financial systems use Serializable for operations where any concurrent modification could violate business rules. The cost includes reduced concurrency and increased transaction retry rates.
# Design decision: inventory allocation requires Serializable
# to prevent overselling from concurrent transactions
def allocate_inventory(product_id, quantity)
  ActiveRecord::Base.transaction(isolation: :serializable) do
    product = Product.find(product_id)
    raise OutOfStock if product.available_quantity < quantity
    product.available_quantity -= quantity
    product.save!
  end
rescue ActiveRecord::SerializationFailure
  # Retry on serialization failure
  retry
end
# Design decision: read-only reporting uses Read Committed
# for better concurrency and performance
def generate_sales_report(start_date, end_date)
  ActiveRecord::Base.transaction(isolation: :read_committed) do
    orders = Order.where(created_at: start_date..end_date)
    # Generate report from committed data
  end
end
Durability Configuration
Durability settings balance data safety against write performance. Synchronous commit modes wait for data to reach persistent storage before acknowledging the commit. Asynchronous modes return success before ensuring durability, accepting a small risk of data loss for better throughput.
PostgreSQL offers multiple synchronous commit levels. The default setting synchronizes to disk for each commit. Setting synchronous_commit to off improves write performance but risks losing recent transactions in a crash. Applications processing high-volume, low-value transactions may accept this trade-off.
# Configure durability for high-throughput, low-criticality operations
ActiveRecord::Base.connection.execute(
  "SET synchronous_commit TO OFF"
)
User.transaction do
  # Session data updates don't require strict durability
  session.last_seen = Time.now
  session.save!
end
ActiveRecord::Base.connection.execute(
  "SET synchronous_commit TO ON"
)
# Financial transactions require full durability
ActiveRecord::Base.transaction do
  account.balance -= amount
  account.save!
end
Applications can batch less critical updates into periodic commits while maintaining strict durability for critical operations.
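One way to batch is to buffer non-critical writes and flush them together once a threshold is reached. A plain-Ruby sketch of the buffering logic (the commented-out transaction call marks where a real flush would run; the class name is illustrative):

```ruby
# Buffer low-criticality writes and flush them as one batch once a
# threshold is reached; critical writes bypass the buffer entirely.
class WriteBuffer
  attr_reader :flushed_batches

  def initialize(flush_every: 100)
    @buffer = []
    @flush_every = flush_every
    @flushed_batches = []
  end

  def add(record)
    @buffer << record
    flush if @buffer.size >= @flush_every
  end

  def flush
    return if @buffer.empty?
    batch, @buffer = @buffer, []
    # In a real application this would be one transaction:
    # ActiveRecord::Base.transaction { batch.each(&:save!) }
    @flushed_batches << batch
  end
end
```

Each flush amortizes one durable commit across many records, trading a bounded window of potential loss for throughput.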
Transaction Scope Design
Transaction boundaries significantly affect system behavior. Wide transaction scopes provide stronger consistency but reduce concurrency and increase lock contention. Narrow scopes improve performance but may leave data in inconsistent states if coordinating operations fail.
# Wide scope: strong consistency, potential performance issues
def wide_scope_order(customer_id, items)
  ActiveRecord::Base.transaction do
    customer = Customer.lock.find(customer_id)
    items.each do |item|
      # Holds locks for entire operation
      product = Product.lock.find(item[:product_id])
      product.stock -= item[:quantity]
      product.save!
    end
    # Long-running external API call inside transaction
    payment_result = PaymentGateway.charge(customer, total)
    raise PaymentFailed unless payment_result.success?
  end
end
end
# Narrow scope: better performance, requires compensation logic
def narrow_scope_order(customer_id, items)
  # Reserve inventory first
  reservations = reserve_inventory(items)
  # Process payment outside transaction
  payment_result = PaymentGateway.charge(customer, total)
  if payment_result.success?
    ActiveRecord::Base.transaction do
      # Convert reservations to final allocations
      finalize_order(customer_id, reservations)
    end
  else
    # Release reservations
    cancel_reservations(reservations)
  end
end
The narrow scope approach keeps transactions short, reducing lock hold time. The trade-off requires explicit compensation logic for failure scenarios.
Optimistic vs Pessimistic Locking
Optimistic locking assumes conflicts are rare and detects them at commit time through version checking. Pessimistic locking assumes conflicts are common and prevents them by acquiring locks upfront.
# Optimistic locking for low-contention scenarios
class Product < ActiveRecord::Base
  # Optimistic locking activates when the table has an integer lock_version column
end

def update_product_optimistic(product_id, new_price)
  product = Product.find(product_id)
  product.price = new_price
  product.save! # Fails if lock_version changed
rescue ActiveRecord::StaleObjectError
  # Handle conflict: retry re-runs the method and reloads the record
  retry
end
# Pessimistic locking for high-contention scenarios
def update_product_pessimistic(product_id, new_price)
  ActiveRecord::Base.transaction do
    product = Product.lock.find(product_id)
    product.price = new_price
    product.save!
  end
end
Optimistic locking reduces lock contention but requires retry logic. Pessimistic locking simplifies code but serializes concurrent access. Applications with frequent updates to shared resources benefit from pessimistic locking. Applications with mostly independent updates use optimistic locking for better concurrency.
Implementation Approaches
Write-Ahead Logging
Write-ahead logging (WAL) forms the foundation for atomicity and durability. Changes write to a sequential log on stable storage before modifying the actual database pages. The log records contain enough information to redo or undo each operation.
The WAL protocol requires writing log records to disk before writing corresponding database pages. This ordering ensures the system can recover committed transactions after a crash by replaying the log. Uncommitted transactions roll back by undoing their logged operations.
PostgreSQL implements WAL through a series of log segments. Each transaction generates log records describing its changes. At commit time, the system forces the transaction's log records to disk. Once the log write completes, the commit succeeds. Background processes eventually write the actual database pages, but the commit does not wait for these writes.
# Application code remains simple
# Database handles WAL mechanics internally
ActiveRecord::Base.transaction do
  user = User.find(user_id)
  user.last_login = Time.now
  user.save! # Triggers WAL write, then page modification
end
Checkpoint operations periodically write dirty pages to stable storage, allowing the system to truncate old log segments. The recovery process only needs to replay logs since the last checkpoint.
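The log-then-pages ordering can be illustrated with a toy in-memory model (illustrative only, not PostgreSQL's actual log format): change and commit records reach the "log" before pages are touched, and recovery replays only transactions whose commit record made it in.

```ruby
# Toy write-ahead log: changes and the commit record are appended to
# the log before pages change; recovery replays committed work only.
class MiniWAL
  attr_reader :pages

  def initialize
    @log = []    # stands in for the sequential log file
    @pages = {}  # stands in for database pages
  end

  def transaction(tid)
    pending = []
    yield ->(key, value) { pending << [tid, key, value] }
    @log.concat(pending)
    @log << [tid, :commit, nil]  # commit record written last
    pending.each { |_, key, value| @pages[key] = value }
  end

  # Rebuild pages from the log, as crash recovery would
  def recover
    committed = @log.select { |_, key, _| key == :commit }.map(&:first)
    recovered = {}
    @log.each do |tid, key, value|
      next if key == :commit
      recovered[key] = value if committed.include?(tid)
    end
    recovered
  end
end
```

A transaction that raises before its commit record is appended contributes nothing to recovery, mirroring how uncommitted work disappears after a crash.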
Multi-Version Concurrency Control
Multi-version concurrency control (MVCC) enables high concurrency by maintaining multiple versions of each data item. Readers access consistent snapshots without blocking writers. Writers create new versions without blocking readers.
PostgreSQL's MVCC implementation assigns each transaction a unique transaction ID. When a transaction modifies a row, the system creates a new version marked with the transaction ID. Old versions remain until no transaction needs them. Readers select the appropriate version based on their snapshot.
# Transaction A
ActiveRecord::Base.transaction do
  account = Account.find(1)
  account.balance += 100
  account.save! # Creates new version
end
# Concurrent Transaction B
ActiveRecord::Base.transaction do
  account = Account.find(1)
  # Sees old version until Transaction A commits
  current_balance = account.balance
end
MVCC eliminates read locks, allowing readers and writers to proceed without blocking each other. The cost includes storage overhead for maintaining multiple versions and periodic cleanup of obsolete versions through vacuum operations.
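The versioning idea can be sketched without a database (a toy model, not PostgreSQL's tuple format): each write appends a version stamped with a transaction id, and a reader resolves the newest version visible to its snapshot.

```ruby
# Toy MVCC store: writes append versions stamped with a transaction id;
# a reader sees the latest version at or before its snapshot.
class MVCCStore
  def initialize
    @versions = Hash.new { |h, k| h[k] = [] } # key => [[txid, value], ...]
    @txid = 0
  end

  def write(key, value)
    @txid += 1
    @versions[key] << [@txid, value]
  end

  def snapshot
    @txid # a snapshot is just the latest transaction id seen so far
  end

  def read(key, snapshot_txid)
    visible = @versions[key].select { |txid, _| txid <= snapshot_txid }
    visible.last&.last
  end
end

store = MVCCStore.new
store.write(:balance, 100)
snap = store.snapshot
store.write(:balance, 200)            # concurrent writer adds a version
store.read(:balance, snap)            # => 100 (old snapshot)
store.read(:balance, store.snapshot)  # => 200 (current snapshot)
```

The old version stays readable after the new write, which is exactly why readers never block writers; the growing version list is what vacuum-style cleanup reclaims.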
Lock-Based Concurrency Control
Lock-based approaches prevent conflicts by acquiring locks before accessing data. Shared locks allow concurrent reads. Exclusive locks provide sole access for modifications. Two-phase locking ensures serializability by requiring transactions to acquire all locks before releasing any lock.
# Shared lock for reading
ActiveRecord::Base.transaction do
  account = Account.lock('FOR SHARE').find(1)
  # Other transactions can also acquire shared locks
  # but exclusive locks wait
  balance = account.balance
end
# Exclusive lock for writing
ActiveRecord::Base.transaction do
  account = Account.lock.find(1)
  # Other transactions wait for lock release
  account.balance += 100
  account.save!
end
Strict two-phase locking acquires locks as needed and holds all locks until transaction end. This approach guarantees serializability and avoids cascading aborts, but it reduces concurrency and does not by itself prevent deadlocks. Applications may use explicit lock ordering to prevent circular wait conditions.
Snapshot Isolation
Snapshot isolation provides each transaction a consistent view of the database as of transaction start. Reads access this snapshot without seeing changes from concurrent transactions. Writes check for conflicts with concurrent transactions at commit time.
# Transaction sees consistent snapshot
ActiveRecord::Base.transaction(isolation: :repeatable_read) do
  # Snapshot taken at the transaction's first query
  products = Product.where(category: 'electronics')
  # Consistent view even if other transactions
  # modify these products concurrently
  products.each do |product|
    process_product(product)
  end
end
Snapshot isolation prevents lost updates through first-committer-wins rules. If two transactions modify the same data, the second transaction to commit detects the conflict and aborts. Applications handle abort scenarios through retry logic.
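First-committer-wins can be modeled with a commit log keyed by data item (an illustrative sketch, not a database's internal structure): a commit is rejected when another transaction committed the same key after this transaction's snapshot was taken.

```ruby
# First-committer-wins sketch: commits record (key, sequence); a commit
# conflicting with anything newer than its snapshot is aborted.
class SnapshotCommitter
  SerializationFailure = Class.new(StandardError)

  def initialize
    @commit_log = [] # [key, commit_seq] pairs
    @seq = 0
  end

  def snapshot
    @seq
  end

  def commit(key, snapshot_seq)
    conflict = @commit_log.any? { |k, s| k == key && s > snapshot_seq }
    raise SerializationFailure, "concurrent update on #{key}" if conflict
    @seq += 1
    @commit_log << [key, @seq]
  end
end

committer = SnapshotCommitter.new
snap_a = committer.snapshot        # transaction A starts
snap_b = committer.snapshot        # transaction B starts concurrently
committer.commit(:balance, snap_a) # A commits first: succeeds
# B's later commit on :balance conflicts and raises
```

The second committer's abort is the signal the retry logic elsewhere in this section responds to.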
Distributed Transactions
Distributed transactions coordinate multiple databases or services, classically through the two-phase commit (2PC) protocol. A coordinator process manages two phases: prepare (voting) and commit/abort (decision).
# Distributed transaction across multiple databases
primary_db = ActiveRecord::Base.connection
secondary_db = SecondaryDatabase.connection
begin
  primary_db.begin_db_transaction
  secondary_db.begin_db_transaction
  # Operations on primary database
  user = User.find(user_id)
  user.balance -= amount
  user.save!
  # Operations on secondary database
  audit = AuditLog.new
  audit.user_id = user_id
  audit.action = 'withdrawal'
  audit.save!
  # Commit both (not true two-phase commit: a crash between these
  # two commits can still leave the databases inconsistent)
  primary_db.commit_db_transaction
  secondary_db.commit_db_transaction
rescue StandardError => e
  primary_db.rollback_db_transaction
  secondary_db.rollback_db_transaction
  raise
end
Two-phase commit ensures atomicity across distributed resources but introduces coordination overhead and blocking. Modern systems often use eventual consistency patterns with compensation for better availability.
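A minimal sketch of the protocol itself, with hypothetical participant objects responding to prepare/commit/abort: the coordinator commits only if every participant votes yes in the prepare phase.

```ruby
# Two-phase commit coordinator sketch: phase 1 collects votes,
# phase 2 applies the unanimous decision to every participant.
Participant = Struct.new(:name, :vote) do
  def prepare; vote; end      # phase 1: can you commit?
  def commit; :committed; end # phase 2: decision was yes
  def abort; :aborted; end    # phase 2: decision was no
end

def two_phase_commit(participants)
  votes = participants.map(&:prepare)  # prepare (voting) phase
  if votes.all?
    participants.each(&:commit)        # unanimous yes: commit everywhere
    :committed
  else
    participants.each(&:abort)         # any no vote: abort everywhere
    :aborted
  end
end

two_phase_commit([Participant.new(:db1, true), Participant.new(:db2, true)])
# => :committed
two_phase_commit([Participant.new(:db1, true), Participant.new(:db2, false)])
# => :aborted
```

A real implementation must also persist the coordinator's decision so that participants can learn the outcome after a crash, which is the source of 2PC's blocking behavior.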
Common Pitfalls
Long-Running Transactions
Holding transactions open for extended periods causes multiple problems. Locks accumulate, blocking concurrent transactions. Database resources remain allocated. In MVCC systems, old snapshots prevent garbage collection of obsolete row versions.
# Problematic: transaction includes slow external call
ActiveRecord::Base.transaction do
  order = Order.create!(customer_id: customer_id)
  # External API call holds transaction open
  result = PaymentGateway.charge(customer, amount)
  order.payment_status = result.status
  order.save!
end
# Better: minimize transaction scope
order = Order.create!(customer_id: customer_id)
result = PaymentGateway.charge(customer, amount)
ActiveRecord::Base.transaction do
  order.payment_status = result.status
  order.save!
end
Transactions should complete quickly, typically within milliseconds. Move slow operations like API calls, file I/O, or complex calculations outside transaction boundaries.
Ignoring Serialization Failures
Applications using serializable isolation or snapshot isolation must handle serialization failures. These failures indicate conflicts between concurrent transactions. Simply allowing the error to propagate results in failed operations that should have succeeded.
# Wrong: no retry logic
def transfer_funds(from_id, to_id, amount)
  ActiveRecord::Base.transaction(isolation: :serializable) do
    from_account = Account.find(from_id)
    to_account = Account.find(to_id)
    from_account.balance -= amount
    to_account.balance += amount
    from_account.save!
    to_account.save!
  end
end
# Correct: implement retry with backoff
def transfer_funds(from_id, to_id, amount)
  max_attempts = 5
  attempt = 0
  begin
    attempt += 1
    ActiveRecord::Base.transaction(isolation: :serializable) do
      from_account = Account.find(from_id)
      to_account = Account.find(to_id)
      from_account.balance -= amount
      to_account.balance += amount
      from_account.save!
      to_account.save!
    end
  rescue ActiveRecord::SerializationFailure => e
    if attempt < max_attempts
      sleep(0.1 * (2 ** (attempt - 1))) # Exponential backoff
      retry
    else
      raise
    end
  end
end
Implement retry logic with exponential backoff for serialization failures. Set reasonable maximum attempt limits to prevent infinite retries.
Mixing Transaction Isolation Levels
Using different isolation levels inconsistently within an application creates subtle bugs. Operations relying on serializable semantics may fail when executed with lower isolation.
# Problematic: inconsistent isolation levels
class AccountService
  def check_balance(account_id)
    # On its own, runs with the default Read Committed isolation
    Account.find(account_id).balance
  end

  def transfer(from_id, to_id, amount)
    # Expects Serializable
    ActiveRecord::Base.transaction(isolation: :serializable) do
      from_balance = check_balance(from_id) # Helper assumes weaker isolation
      raise "Insufficient funds" if from_balance < amount
      # Race condition possible
    end
  end
end
# Correct: consistent isolation within operation
class AccountService
  def transfer(from_id, to_id, amount)
    ActiveRecord::Base.transaction(isolation: :serializable) do
      from_account = Account.find(from_id)
      to_account = Account.find(to_id)
      raise "Insufficient funds" if from_account.balance < amount
      from_account.balance -= amount
      to_account.balance += amount
      from_account.save!
      to_account.save!
    end
  end
end
Keep isolation levels consistent within related operations. Document assumptions about required isolation levels for each operation.
Silent Rollback on Exception
Ruby database libraries silently roll back transactions when exceptions occur. This behavior prevents partial commits but can mask errors if exceptions are caught and ignored.
# Dangerous: exception handling masks rollback
def update_user(user_id, attributes)
  user = nil
  begin
    ActiveRecord::Base.transaction do
      user = User.find(user_id)
      user.update!(attributes)
    end
  rescue StandardError => e
    # Transaction already rolled back
    # But code continues as if update succeeded
    logger.error("Update failed: #{e}")
  end
  send_confirmation_email(user) # Wrong: user not updated (or nil)
end
# Correct: let exceptions propagate
def update_user(user_id, attributes)
  ActiveRecord::Base.transaction do
    user = User.find(user_id)
    user.update!(attributes)
    user
  end
  # Only reached if update succeeds
end
Allow exceptions to propagate from transaction blocks unless specific recovery logic applies. If catching exceptions, re-raise them after logging or cleanup.
Deadlock Vulnerability
Transactions acquiring locks in different orders create deadlock potential. The database detects deadlocks and aborts one transaction, but frequent deadlocks indicate poor lock ordering.
# Deadlock prone: inconsistent lock ordering
def transfer_a_to_b
  ActiveRecord::Base.transaction do
    account_a = Account.lock.find(1)
    account_b = Account.lock.find(2)
    # Process transfer
  end
end

def transfer_b_to_a
  ActiveRecord::Base.transaction do
    account_b = Account.lock.find(2) # Locks in different order
    account_a = Account.lock.find(1)
    # Process transfer
  end
end
# Deadlock resistant: consistent lock ordering
def transfer_funds(from_id, to_id, amount)
  # Always lock accounts in ID order
  first_id, second_id = [from_id, to_id].sort
  ActiveRecord::Base.transaction do
    first_account = Account.lock.find(first_id)
    second_account = Account.lock.find(second_id)
    from_account = first_id == from_id ? first_account : second_account
    to_account = first_id == to_id ? first_account : second_account
    from_account.balance -= amount
    to_account.balance += amount
    from_account.save!
    to_account.save!
  end
end
Establish lock ordering conventions across the application. Always acquire locks in the same order regardless of operation type.
Forgetting Foreign Key Constraints
Applications relying on ACID properties for consistency but omitting foreign key constraints allow orphaned records to persist despite transaction rollbacks.
# Problematic: no foreign key constraint
ActiveRecord::Base.transaction do
  order = Order.create!(customer_id: 999) # Invalid customer
  # No error raised
end
# Database schema should enforce referential integrity
class CreateOrders < ActiveRecord::Migration[7.0]
  def change
    create_table :orders do |t|
      t.integer :customer_id
      t.timestamps
    end
    add_foreign_key :orders, :customers # Enforces constraint
  end
end
# Now invalid references are prevented
ActiveRecord::Base.transaction do
  order = Order.create!(customer_id: 999)
  # Raises foreign key violation
end
Define foreign key constraints in database schema. Database-enforced constraints provide stronger guarantees than application-level validation alone.
Reference
ACID Property Definitions
| Property | Definition | Guarantee |
|---|---|---|
| Atomicity | Transaction executes completely or not at all | No partial execution |
| Consistency | Transaction preserves database invariants | Valid state to valid state |
| Isolation | Concurrent transactions do not interfere | Appears sequential |
| Durability | Committed changes survive failures | Persistent storage |
Transaction Isolation Levels
| Level | Dirty Read | Non-Repeatable Read | Phantom Read | Performance |
|---|---|---|---|---|
| Read Uncommitted | Possible | Possible | Possible | Highest |
| Read Committed | Prevented | Possible | Possible | High |
| Repeatable Read | Prevented | Prevented | Possible | Medium |
| Serializable | Prevented | Prevented | Prevented | Lowest |
Ruby Transaction Methods
| Method | Purpose | Usage |
|---|---|---|
| transaction | Begin transaction block | ActiveRecord::Base.transaction do ... end |
| transaction(isolation: level) | Set isolation level | ActiveRecord::Base.transaction(isolation: :serializable) |
| transaction(requires_new: true) | Create savepoint | Nested transaction with rollback point |
| lock | Acquire pessimistic lock | Account.lock.find(id) |
| lock(mode) | Custom lock mode | Account.lock('FOR SHARE').find(id) |
| rollback | Abort transaction | raise ActiveRecord::Rollback |
ActiveRecord Isolation Level Options
| Option | PostgreSQL | MySQL | SQL Server |
|---|---|---|---|
| :read_uncommitted | READ UNCOMMITTED | READ UNCOMMITTED | READ UNCOMMITTED |
| :read_committed | READ COMMITTED | READ COMMITTED | READ COMMITTED |
| :repeatable_read | REPEATABLE READ | REPEATABLE READ | REPEATABLE READ |
| :serializable | SERIALIZABLE | SERIALIZABLE | SERIALIZABLE |
Common Transaction Exceptions
| Exception | Cause | Recovery |
|---|---|---|
| ActiveRecord::SerializationFailure | Serialization conflict | Retry transaction |
| ActiveRecord::Deadlocked | Deadlock detected | Retry with backoff |
| ActiveRecord::StatementInvalid | SQL error | Check query syntax |
| ActiveRecord::StaleObjectError | Optimistic locking conflict | Reload and retry |
| ActiveRecord::RecordNotFound | Record missing | Check existence first |
PostgreSQL Synchronous Commit Levels
| Setting | Durability | Performance | Data Loss Risk |
|---|---|---|---|
| on | Full | Standard | None on crash |
| remote_write | High | Better | None on crash |
| remote_apply | Full | Lower | None on crash |
| local | Partial | Better | Possible on crash |
| off | Minimal | Highest | Recent transactions on crash |
Lock Modes in PostgreSQL
| Mode | Blocks | Use Case |
|---|---|---|
| FOR UPDATE | All row-level lock modes | Exclusive modification |
| FOR NO KEY UPDATE | FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE | Update without key change |
| FOR SHARE | FOR UPDATE, FOR NO KEY UPDATE | Shared read lock |
| FOR KEY SHARE | FOR UPDATE | Foreign key check |
Transaction Retry Strategies
| Strategy | When to Use | Implementation |
|---|---|---|
| Immediate Retry | Low contention | Retry once immediately |
| Linear Backoff | Medium contention | Wait fixed time between retries |
| Exponential Backoff | High contention | Double wait time each retry |
| Jittered Backoff | Very high contention | Add randomness to backoff |
| Circuit Breaker | Persistent failures | Stop retries after threshold |
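The delay schedules in the table above can be computed in plain Ruby (an illustrative helper with a base delay of 0.1 seconds, not a library API):

```ruby
# Compute the sleep schedule each retry strategy from the table produces
def backoff_delays(strategy, attempts, base: 0.1)
  (1..attempts).map do |n|
    case strategy
    when :immediate   then 0.0                      # retry right away
    when :linear      then base                     # fixed wait each time
    when :exponential then base * (2**(n - 1))      # double each retry
    when :jittered    then base * (2**(n - 1)) * rand # randomized exponential
    else raise ArgumentError, "unknown strategy #{strategy}"
    end
  end
end

backoff_delays(:linear, 3)       # => [0.1, 0.1, 0.1]
backoff_delays(:exponential, 3)  # => [0.1, 0.2, 0.4]
```

Jitter spreads out retries from transactions that aborted at the same moment, reducing the chance they collide again on the next attempt.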