Overview
Database indexes are auxiliary data structures that accelerate data retrieval operations by maintaining sorted references to table data. Without indexes, a database must perform a full table scan, examining every row to find matching records. Indexes reduce query time from O(n) to O(log n) or O(1) depending on the index type and query pattern.
An index functions similarly to a book's index: rather than reading every page to find a topic, readers consult the index to jump directly to relevant pages. Database indexes store a subset of table data in a structure optimized for fast lookups, containing pointers back to the full table rows.
Consider a users table with one million records. Finding a user by email without an index requires scanning all million rows. With a B-tree index on the email column, the database performs approximately 20 comparisons (log2(1,000,000) ≈ 20) to locate the record.
# Without index: scans all rows
User.where(email: 'user@example.com')
# Database: Seq Scan on users (cost=0.00..25000.00 rows=1)
# With index: direct lookup
# Database: Index Scan using index_users_on_email (cost=0.43..8.45 rows=1)
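The log2 arithmetic can be sanity-checked in plain Ruby, with no database involved, using a binary search as a rough stand-in for B-tree key comparisons:

```ruby
# Count the comparisons a binary search makes over a sorted array --
# a simplified model of B-tree key comparisons during a lookup.
def comparisons_to_find(sorted, target)
  low, high, comparisons = 0, sorted.size - 1, 0
  while low <= high
    comparisons += 1
    mid = (low + high) / 2
    case target <=> sorted[mid]
    when 0  then return comparisons
    when -1 then high = mid - 1
    else         low = mid + 1
    end
  end
  comparisons
end

keys = (1..1_000_000).to_a  # stand-in for one million sorted email keys
comparisons_to_find(keys, 999_999)  # at most 20 comparisons
```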
Indexes introduce trade-offs. Each index consumes disk space and memory. Write operations (INSERT, UPDATE, DELETE) become slower because the database must maintain index consistency alongside table data. A table with ten indexes pays that maintenance cost on every write, so write-heavy workloads slow measurably, even as specific read patterns become dramatically faster.
Key Principles
Database indexes organize data using specialized data structures that enable efficient searching, insertion, and deletion. The choice of data structure determines performance characteristics, supported query types, and storage requirements.
B-tree Structure: Most databases use B-tree (balanced tree) indexes as the default. B-trees maintain sorted data in a hierarchical structure where each node contains multiple keys and child pointers. The tree remains balanced through automatic rebalancing during modifications. B-trees support range queries, equality comparisons, and ORDER BY operations efficiently. Leaf nodes form a linked list, enabling sequential scans without tree traversal.
Hash Structure: Hash indexes compute a hash value from the indexed column and store records in buckets based on the hash. Hash indexes provide O(1) lookup time for equality comparisons but cannot support range queries, sorting, or pattern matching. Hash collisions occur when different values produce the same hash, requiring collision resolution mechanisms.
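A Ruby Hash makes a convenient model of the trade-off, though it is only a sketch, not database internals: equality lookup is an average O(1) bucket probe, while anything resembling a range predicate must scan every key.

```ruby
# Build a toy hash index mapping email -> row id.
index = {}
[[1, "alice@example.com"], [2, "bob@example.com"], [3, "carol@example.com"]]
  .each { |id, email| index[email] = id }

index["bob@example.com"]           # direct bucket probe => 2
# A "range" over a hash index degenerates to scanning all entries:
index.keys.select { |k| k < "b" }  # => ["alice@example.com"]
```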
Inverted Index Structure: Inverted indexes map content to locations, used primarily for full-text search. Rather than mapping documents to words, inverted indexes map words to documents. Each term in the vocabulary points to a posting list containing document identifiers where the term appears. This structure enables fast text search across large document collections.
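The words-to-documents mapping can be sketched in a few lines of plain Ruby (a toy posting list, not a real full-text engine):

```ruby
# Minimal inverted index: each term maps to the ids of documents containing it.
docs = {
  1 => "btree indexes keep keys sorted",
  2 => "hash indexes keep keys in buckets"
}
inverted = Hash.new { |h, k| h[k] = [] }
docs.each { |id, text| text.split.uniq.each { |term| inverted[term] << id } }

inverted["indexes"]  # => [1, 2]  (posting list for the term)
inverted["sorted"]   # => [1]
```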
Index Selectivity: Selectivity measures how well an index narrows a search, commonly expressed as the ratio of distinct values to total rows. High selectivity indexes (few matching rows per value) provide greater performance benefits than low selectivity indexes. An index on a boolean column has only two distinct values and therefore very low selectivity, while an index on unique email addresses has selectivity near 1.0. Databases ignore low selectivity indexes when the query optimizer estimates a table scan would be faster.
Cardinality: Cardinality represents the number of distinct values in a column. High cardinality columns (email, username) benefit significantly from indexes. Low cardinality columns (gender, status flags) rarely benefit from standard indexes because the optimizer chooses table scans over index scans when selectivity is poor.
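Both measures reduce to the same distinct-over-total ratio, which is easy to compute over sample columns in plain Ruby:

```ruby
# Selectivity = distinct values / total rows, over two sample columns.
statuses = ["active"] * 900 + ["banned"] * 100            # low cardinality
emails   = (1..1_000).map { |i| "user#{i}@example.com" }  # high cardinality

selectivity = ->(column) { column.uniq.size.to_f / column.size }
selectivity.call(statuses)  # => 0.002 (poor index candidate)
selectivity.call(emails)    # => 1.0   (excellent candidate)
```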
Index Coverage: A covering index contains all columns required to satisfy a query, eliminating the need to access the table. The database retrieves all necessary data from the index itself. Covering indexes transform queries into index-only scans, providing substantial performance improvements for frequently executed queries.
Write Amplification: Each index adds overhead to write operations. Inserting one row into a table with five indexes requires six write operations: one for the table and five for indexes. Updates affecting indexed columns must update both the table and relevant indexes. This write amplification makes indexes a read-optimized, write-pessimized structure.
Index Maintenance: Databases perform automatic index maintenance during normal operations. Page splits occur when B-tree nodes overflow, requiring rebalancing. Deleted rows leave tombstones in indexes until vacuuming or compaction processes reclaim space. Index bloat accumulates over time, requiring periodic rebuilding to maintain optimal performance.
Ruby Implementation
Ruby applications interact with database indexes primarily through ActiveRecord migrations and the database adapter layer. ActiveRecord provides a database-agnostic interface for creating and managing indexes, translating Ruby method calls into appropriate SQL DDL statements.
Basic Index Creation: The add_index method creates indexes during migrations. The first argument specifies the table name, the second specifies the column or columns to index.
class AddEmailIndexToUsers < ActiveRecord::Migration[7.0]
def change
add_index :users, :email
end
end
This generates a standard B-tree index with a system-generated name like index_users_on_email. The database uses this index for WHERE clauses, JOIN conditions, and ORDER BY clauses referencing the email column.
Named Indexes: Specify custom index names for clarity or to comply with database naming conventions.
class CreateProducts < ActiveRecord::Migration[7.0]
def change
create_table :products do |t|
t.string :sku
t.timestamps
end
add_index :products, :sku, name: 'idx_product_sku'
end
end
Unique Indexes: Unique indexes enforce uniqueness constraints at the database level, preventing duplicate values. Unique indexes serve dual purposes: enforcing data integrity and accelerating queries.
class AddUniqueEmailToUsers < ActiveRecord::Migration[7.0]
def change
add_index :users, :email, unique: true
end
end
Attempting to insert duplicate emails raises ActiveRecord::RecordNotUnique. Unique indexes also improve query performance because the database stops searching after finding the first match.
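The enforcement logic can be sketched in plain Ruby with a toy UniqueIndex class (an illustration of the check, not ActiveRecord or database internals):

```ruby
# Sketch of what a unique index enforces: reject an insert whose key
# is already present, before the row ever reaches the table.
class UniqueIndex
  DuplicateKey = Class.new(StandardError)

  def initialize
    @entries = {}
  end

  def add(key, row_id)
    raise DuplicateKey, "duplicate key value: #{key}" if @entries.key?(key)
    @entries[key] = row_id
  end
end

index = UniqueIndex.new
index.add("user@example.com", 1)
begin
  index.add("user@example.com", 2)
rescue UniqueIndex::DuplicateKey => e
  # the database analogue surfaces as ActiveRecord::RecordNotUnique
  puts e.message
end
```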
Composite Indexes: Composite indexes span multiple columns, optimizing queries that filter on multiple conditions. Column order matters significantly: the leftmost columns must be used in queries for the index to be effective.
class AddLocationIndexToStores < ActiveRecord::Migration[7.0]
def change
add_index :stores, [:state, :city, :zip_code]
end
end
This index accelerates queries filtering on state alone, state and city, or all three columns. It does not help queries filtering only on city or zip_code because those columns are not leftmost.
# Uses the composite index
Store.where(state: 'CA')
Store.where(state: 'CA', city: 'San Francisco')
Store.where(state: 'CA', city: 'San Francisco', zip_code: '94102')
# Does not use the composite index
Store.where(city: 'San Francisco')
Store.where(zip_code: '94102')
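The leftmost-prefix rule behind these examples reduces to a small predicate, sketched here in plain Ruby: the queried columns must cover a non-empty prefix of the index's column list.

```ruby
# Leftmost-prefix check: a composite index serves a query only if the
# queried columns include a non-empty run of its leading columns.
def serves?(index_columns, query_columns)
  index_columns.take_while { |c| query_columns.include?(c) }.any?
end

index = [:state, :city, :zip_code]
serves?(index, [:state, :city])  # => true
serves?(index, [:city])          # => false (state missing)
serves?(index, [:zip_code])      # => false (state and city missing)
```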
Partial Indexes: Partial indexes include only rows matching a WHERE condition, reducing index size and improving performance for queries against subsets of data.
class AddActiveUserEmailIndex < ActiveRecord::Migration[7.0]
def change
add_index :users, :email, where: "status = 'active'"
end
end
The index contains only active users. Queries filtering on active status benefit from a smaller, more selective index. PostgreSQL and SQLite support partial indexes; MySQL does not.
Expression Indexes: Expression indexes store the result of a function or expression rather than raw column values. Use expression indexes when queries filter on calculated values.
class AddLowerEmailIndex < ActiveRecord::Migration[7.0]
def change
add_index :users, "lower(email)", name: "index_users_on_lower_email"
end
end
This index enables case-insensitive email lookups. The query must use the same expression for the index to be used:
# Uses the expression index
User.where("lower(email) = ?", "user@example.com".downcase)
# Does not use the expression index
User.where("email ILIKE ?", "user@example.com")
Database-Specific Index Types: ActiveRecord allows specifying index types through the using parameter. Available types depend on the database system.
# PostgreSQL GIN index for JSONB columns
class AddMetadataIndexToProducts < ActiveRecord::Migration[7.0]
def change
add_index :products, :metadata, using: :gin
end
end
# PostgreSQL B-tree index (default)
class AddPriceIndex < ActiveRecord::Migration[7.0]
def change
add_index :products, :price, using: :btree
end
end
# PostgreSQL Hash index for equality-only queries
class AddStatusHashIndex < ActiveRecord::Migration[7.0]
def change
add_index :orders, :status, using: :hash
end
end
Concurrent Index Creation: Standard index creation locks the table against writes, causing downtime on large tables. PostgreSQL supports concurrent index creation, which allows writes to continue while the index is built.
class AddConcurrentEmailIndex < ActiveRecord::Migration[7.0]
disable_ddl_transaction!
def change
add_index :users, :email, algorithm: :concurrently
end
end
The disable_ddl_transaction! method is required because concurrent index creation cannot run inside a transaction. This operation takes longer than standard index creation but avoids locking.
Removing Indexes: Remove unused or redundant indexes to improve write performance and reduce storage.
class RemoveUnusedIndexes < ActiveRecord::Migration[7.0]
def change
remove_index :users, :legacy_id
remove_index :products, name: 'idx_old_sku'
end
end
Checking Existing Indexes: Examine indexes through the database connection or schema.rb file.
# Get all indexes for a table
indexes = ActiveRecord::Base.connection.indexes(:users)
indexes.each do |index|
puts "#{index.name}: #{index.columns.join(', ')}"
end
# Check if specific index exists
if ActiveRecord::Base.connection.index_exists?(:users, :email)
puts "Email index exists"
end
Design Considerations
Index design requires balancing query performance against write overhead and storage costs. Poor indexing decisions create performance bottlenecks that worsen as data volumes grow.
Identifying Candidates: Analyze slow query logs and application profiling data to identify frequently executed queries with poor performance. Columns used in WHERE clauses, JOIN conditions, and ORDER BY clauses are prime index candidates. Focus on queries executed thousands of times daily rather than rare administrative queries.
Query analysis tools reveal which queries consume the most database time:
# Using ActiveRecord query logging
ActiveRecord::Base.logger = Logger.new(STDOUT)
# Examine slow queries in PostgreSQL (requires the pg_stat_statements
# extension; on PostgreSQL 13+ the columns are total_exec_time/mean_exec_time)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
Selectivity Analysis: High selectivity columns provide the greatest index benefits. Calculate selectivity by dividing distinct values by total rows. Columns with selectivity above 0.1 (10% unique) generally benefit from indexes.
# Check column selectivity
total_rows = User.count
distinct_emails = User.distinct.count(:email)
selectivity = distinct_emails.to_f / total_rows
# High selectivity: 0.95+ (excellent candidate)
# Medium selectivity: 0.1-0.95 (conditional candidate)
# Low selectivity: < 0.1 (poor candidate)
Composite vs Single-Column: Composite indexes support queries filtering on multiple columns but consume more space and complicate index selection. Create composite indexes when queries frequently filter on specific column combinations. The leftmost column should have high selectivity, with subsequent columns ordered by decreasing selectivity.
Single-column indexes provide flexibility because the optimizer can combine them through index merges (database-dependent). Composite indexes perform better for specific query patterns but limit the optimizer's options.
# Two single-column indexes
add_index :orders, :user_id
add_index :orders, :status
# Composite index for specific query pattern
add_index :orders, [:user_id, :status]
The composite index excels when filtering on both user_id and status together. Separate indexes provide more flexibility for queries filtering on either column individually.
Covering Index Strategy: Transform frequently executed queries into index-only scans by including additional columns in the index. Covering indexes eliminate table access but increase index size and write overhead.
# Query frequently executed: SELECT id, email, name FROM users WHERE email = ?
# Standard index requires table access for name
add_index :users, :email
# Covering index eliminates table access
add_index :users, [:email, :name]
The database retrieves all required columns (id, email, name) from the index without accessing the table. This optimization works best for queries with a small number of retrieved columns.
Partial Index Trade-offs: Partial indexes reduce storage and improve performance for queries against data subsets. Use partial indexes when query patterns consistently filter on specific conditions and the filtered subset represents a small percentage of total rows.
# Most queries target unarchived orders
add_index :orders, :created_at, where: "archived_at IS NULL"
Partial indexes become counterproductive when queries frequently access both included and excluded data, requiring the optimizer to choose between the partial index and a table scan.
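The "index over a subset" idea can be modeled in plain Ruby: filter first, then build the ordinary index structure over what remains (a sketch, not storage internals).

```ruby
# A partial index is an ordinary index built over only the rows that
# satisfy its WHERE clause.
orders = [
  { id: 1, created_at: "2024-01-03", archived_at: nil },
  { id: 2, created_at: "2024-01-01", archived_at: "2024-02-01" },
  { id: 3, created_at: "2024-01-02", archived_at: nil }
]

partial_index = orders
  .reject { |o| o[:archived_at] }   # WHERE archived_at IS NULL
  .sort_by { |o| o[:created_at] }   # index ordering on created_at
  .map { |o| [o[:created_at], o[:id]] }

partial_index  # => [["2024-01-02", 3], ["2024-01-03", 1]]
```

Archived rows never enter the structure, which is exactly why the index stays small and why queries that want archived rows cannot use it.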
Index Maintenance Windows: Plan index creation and rebuilding during low-traffic periods. Index creation on large tables locks resources and increases system load. Use concurrent index creation for production systems that cannot tolerate downtime, accepting the longer creation time as a trade-off.
Over-Indexing Prevention: Each additional index increases write latency and storage consumption. Tables supporting high write volumes should minimize index count. Remove redundant indexes where a composite index covers single-column index use cases. Monitor index usage statistics to identify unused indexes.
# PostgreSQL query to find unused indexes
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
AND indexrelname NOT LIKE '%pkey';
Performance Considerations
Database indexes transform query performance from linear to logarithmic time complexity, but improper use creates performance degradation rather than improvement.
Query Execution Analysis: Database query planners choose between index scans, sequential scans, and bitmap scans based on cost estimates. Use EXPLAIN ANALYZE to understand execution plans and verify index usage.
# ActiveRecord EXPLAIN
User.where(email: 'user@example.com').explain
# Raw SQL EXPLAIN ANALYZE
result = ActiveRecord::Base.connection.execute(
"EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'user@example.com'"
)
The execution plan reveals whether indexes are used, estimated row counts, and actual query execution time. Index scans indicate successful index utilization; sequential scans suggest missing or unused indexes.
B-tree Performance Characteristics: B-tree indexes provide O(log n) lookup time. For a table with one billion rows, index lookups require approximately 30 comparisons (log2(1,000,000,000) ≈ 30) compared to scanning 500 million rows on average without indexes. B-trees maintain balanced height through splits and merges, keeping worst-case performance predictable.
Range queries benefit from B-tree leaf node linking. After locating the range start, the database reads sequential leaf nodes without tree traversal:
# Efficient range scan using B-tree index
Order.where(created_at: 1.week.ago..Time.current).count
# Index Scan using index_orders_on_created_at
# Reads consecutive leaf nodes for range
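The two-phase shape of a range scan, one descent to find the start and then a sequential read, can be sketched over a plain sorted array:

```ruby
# Range scan over sorted leaf entries: binary-search the start of the
# range, then read forward sequentially -- no repeated tree traversal.
leaf_entries = (1..1_000).to_a  # stand-in for sorted created_at values

start    = leaf_entries.bsearch_index { |t| t >= 250 }  # descent to range start
in_range = leaf_entries[start..].take_while { |t| t <= 260 }  # sequential read

in_range.size  # => 11 (values 250 through 260)
```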
Hash Index Performance: Hash indexes provide O(1) average-case lookup for equality comparisons but lack range query support. Hash indexes work well for foreign key lookups and equality-based searches on high cardinality columns.
# Perfect hash index use case
User.find_by(uuid: 'a1b2c3d4-...')
# Cannot use hash index (requires range support)
Order.where('amount > ?', 100)
Hash indexes consume less space than B-tree indexes for the same data but face degradation as hash collisions increase. Most databases default to B-tree indexes because the performance difference is negligible for equality lookups while B-trees support additional query patterns.
Composite Index Column Ordering: The leftmost prefix rule determines composite index effectiveness. Queries must include leftmost columns for the index to be considered. Order columns by filtering frequency and selectivity.
# Index: [:user_id, :status, :created_at]
# Effective queries:
User.find(id).orders.where(status: 'pending') # Uses user_id, status
User.find(id).orders.where(status: 'pending', created_at: ...) # Uses all columns
# Ineffective queries:
Order.where(status: 'pending') # Missing leftmost user_id
Order.where(created_at: ...) # Missing user_id and status
Optimal ordering places the most selective column first, with remaining columns in order of decreasing selectivity. If user_id has 100,000 distinct values and status has only 5, place user_id leftmost.
Index-Only Scans: Index-only scans retrieve all required data from the index without accessing the table. This optimization eliminates the most expensive query operation: random table access. Covering indexes enable index-only scans but increase index size.
# Query frequently executed: SELECT user_id, total FROM orders WHERE user_id = 123
# Standard index on user_id: requires table access to fetch total
add_index :orders, [:user_id, :total] # total now available in the index
# Query becomes an index-only scan (when the visibility map allows it)
Monitor index-only scan ratios to evaluate covering index effectiveness. High ratios indicate successful optimization.
Write Performance Impact: Each index adds overhead to INSERT, UPDATE, and DELETE operations. Benchmark write-heavy applications before adding indexes to verify acceptable performance trade-offs.
# Measure write performance
require 'benchmark'
Benchmark.bm do |x|
x.report("Insert without indexes:") do
10_000.times { |i| Product.create(name: "Product #{i}") }
end
# Add index, repeat benchmark
end
Tables with ten or more indexes show measurably slower writes. High-throughput applications should minimize index count and, for bulk loads, consider dropping non-critical indexes before the load and recreating them afterward.
Index Bloat and Maintenance: B-tree page splits create fragmentation. Deleted rows leave gaps consuming space. Bloated indexes consume memory and degrade cache efficiency. PostgreSQL's REINDEX command rebuilds indexes, eliminating bloat:
class ReindexMaintenance < ActiveRecord::Migration[7.0]
def up
execute "REINDEX TABLE users"
end
def down
# Index rebuilds cannot be rolled back
raise ActiveRecord::IrreversibleMigration
end
end
Schedule reindexing during maintenance windows. Concurrent reindexing (PostgreSQL 12+) allows online rebuilding but requires twice the disk space temporarily.
Buffer Cache Efficiency: Indexes compete with table data for buffer cache space. Large indexes reduce the amount of table data cached, potentially degrading performance for queries not using indexes. Monitor cache hit ratios to identify memory pressure.
Partial Index Performance: Partial indexes reduce storage and increase cache efficiency by excluding irrelevant data. The optimizer uses partial indexes only for queries matching the WHERE condition:
# Partial index: WHERE deleted_at IS NULL
add_index :orders, :created_at, where: "deleted_at IS NULL"
# Uses partial index
Order.where(deleted_at: nil).where('created_at > ?', 1.week.ago)
# Does not use partial index (condition mismatch)
Order.where('created_at > ?', 1.week.ago)
Common Pitfalls
Database indexing mistakes create performance problems that worsen over time as data volumes grow. Understanding common pitfalls prevents expensive mistakes.
Indexing Low Selectivity Columns: Creating indexes on columns with few distinct values wastes resources. Boolean columns, status flags with few values, and category fields with limited options rarely benefit from standard indexes.
# Poor index choice
add_index :users, :is_active # Only two values: true/false
# Better approach: partial index if queries filter on one value frequently
add_index :users, :email, where: "is_active = true"
The database optimizer often ignores low selectivity indexes, choosing full table scans instead. Monitor index usage statistics to identify and remove unused indexes.
Redundant Indexes: Multiple indexes covering the same access patterns waste storage and slow writes. A composite index makes single-column indexes on its leftmost columns redundant.
# Redundant indexes
add_index :orders, :user_id
add_index :orders, [:user_id, :status] # Makes first index redundant
# Solution: Keep only the composite index
remove_index :orders, :user_id
Audit indexes periodically to identify and eliminate redundancy. The composite index handles all queries served by the single-column index plus additional query patterns.
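A simple prefix comparison is enough to flag this kind of redundancy; a plain-Ruby sketch over index column lists:

```ruby
# Flag an index as redundant when another index's leftmost columns
# match its column list exactly.
indexes = [[:user_id], [:user_id, :status], [:status]]

redundant = indexes.select do |idx|
  indexes.any? { |other| !other.equal?(idx) && other.first(idx.size) == idx }
end

redundant  # => [[:user_id]] (covered by [:user_id, :status])
```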
Wrong Composite Index Column Order: Placing low selectivity columns leftmost renders composite indexes ineffective for most queries.
# Poor column order
add_index :orders, [:status, :user_id] # status has only 5 distinct values
# Better column order
add_index :orders, [:user_id, :status] # user_id has high cardinality
The high cardinality column should be leftmost to maximize index effectiveness. Queries filtering on user_id alone benefit from the correctly ordered index.
Expression Index Mismatches: Expression indexes require queries to use identical expressions. Slight variations prevent index usage.
add_index :users, "lower(email)"
# Does not use expression index (different expression)
User.where("email ILIKE ?", search_term)
# Uses expression index (matching expression)
User.where("lower(email) = ?", search_term.downcase)
Document expression indexes clearly and ensure application code uses matching expressions consistently.
Partial Index Condition Mismatches: Queries must include the partial index WHERE condition for the index to be considered. Omitting the condition forces table scans.
add_index :products, :price, where: "available = true"
# Does not use partial index (missing condition)
Product.where('price < ?', 50)
# Uses partial index (includes condition)
Product.where('price < ? AND available = true', 50)
Partial indexes work best when the condition appears in most relevant queries. Document partial index conditions to ensure correct usage.
Forgetting Null Handling: NULL values complicate index behavior. Most databases include NULL values in indexes, but NULL comparisons behave differently than regular equality comparisons.
add_index :users, :deleted_at
# Index scan
User.where(deleted_at: Time.current)
# May not use index efficiently (NULL comparisons)
User.where(deleted_at: nil)
# Better: partial index excluding nulls
add_index :users, :deleted_at, where: "deleted_at IS NOT NULL"
Understand database-specific NULL handling in indexes to avoid unexpected performance problems.
Over-Indexing Write-Heavy Tables: Tables with high INSERT, UPDATE, and DELETE rates suffer from excessive indexes. Each index adds write overhead, multiplying the cost of modifications.
# Seven indexes on write-heavy audit log table
# Every log entry write performs 8 operations
# Solution: Minimize indexes, accept slower queries on historical data
Balance read optimization against write performance requirements. Consider archiving strategies that move old data to read-optimized storage with comprehensive indexes while keeping recent data in write-optimized tables with minimal indexes.
Ignoring Index Maintenance: Bloated indexes degrade performance silently. Regular maintenance prevents degradation.
# Monitor PostgreSQL index sizes (accurate bloat estimates require the
# pgstattuple extension)
SELECT schemaname, relname, indexrelname,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
idx_scan
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC;
Schedule periodic index rebuilding during maintenance windows to maintain optimal performance.
Not Using EXPLAIN for Verification: Assuming indexes are used without verification leads to wasted effort. Always verify with EXPLAIN.
# Assumption: index on created_at is used
Order.where('created_at::date = ?', Date.today).explain
# Reality: type cast prevents index usage (Seq Scan)
# Corrected query uses index
Order.where(created_at: Date.today.beginning_of_day..Date.today.end_of_day).explain
# Index Scan using index_orders_on_created_at
Type casts, function calls, and expressions in WHERE clauses often prevent index usage. Rewrite queries to match index structure.
Database Migration Without Concurrent Index Creation: Creating indexes on production tables without concurrent options locks the table for writes, causing application downtime or timeouts.
# Causes write lock on large table
add_index :orders, :status
# Safe for production (PostgreSQL)
disable_ddl_transaction!
add_index :orders, :status, algorithm: :concurrently
Always use concurrent index creation for production databases with active traffic. The longer creation time is preferable to application downtime.
Reference
Index Types Comparison
| Index Type | Supported Queries | Time Complexity | Space Overhead | Best Use Cases |
|---|---|---|---|---|
| B-tree | Equality, range, sorting, pattern matching | O(log n) | Medium | Default choice for most columns |
| Hash | Equality only | O(1) average | Low | Exact match lookups on high cardinality |
| GIN | Array/JSONB contains, full-text search | O(log n) | High | JSONB queries, array operations, text search |
| GiST | Geometric, network, full-text | Varies | Medium-High | Spatial data, custom operators |
| BRIN | Range queries on sorted data | O(n) with block skipping | Very Low | Time-series, monotonically increasing data |
ActiveRecord Index Methods
| Method | Purpose | Example |
|---|---|---|
| add_index | Create new index | add_index :users, :email |
| remove_index | Drop existing index | remove_index :users, :email |
| index_exists? | Check index presence | index_exists?(:users, :email) |
| indexes | List table indexes | connection.indexes(:users) |
| add_index with unique | Create unique constraint | add_index :users, :email, unique: true |
| add_index with where | Create partial index | add_index :users, :email, where: "active = true" |
| add_index with using | Specify index type | add_index :data, :tags, using: :gin |
PostgreSQL Index Types
| Type | SQL Syntax | ActiveRecord Syntax | Query Support |
|---|---|---|---|
| B-tree | USING btree | using: :btree | All standard queries |
| Hash | USING hash | using: :hash | Equality only |
| GIN | USING gin | using: :gin | Array, JSONB, full-text |
| GiST | USING gist | using: :gist | Geometric, range types |
| SP-GiST | USING spgist | using: :spgist | Partitioned search trees |
| BRIN | USING brin | using: :brin | Block range queries |
Index Options
| Option | Effect | Migration Syntax |
|---|---|---|
| unique | Enforces uniqueness | unique: true |
| where | Partial index condition | where: "deleted_at IS NULL" |
| name | Custom index name | name: 'idx_custom' |
| algorithm | Creation method | algorithm: :concurrently |
| using | Index type | using: :gin |
| order | Column sort order | order: {created_at: :desc} |
| length | Index prefix length (MySQL) | length: {email: 10} |
Query Patterns and Index Usage
| Query Pattern | Required Index | Effectiveness |
|---|---|---|
| WHERE column = value | Single column B-tree | Excellent |
| WHERE col1 = ? AND col2 = ? | Composite on (col1, col2) | Excellent |
| WHERE column IN (values) | Single column B-tree | Good |
| WHERE column > value | Single column B-tree | Excellent |
| WHERE column LIKE 'prefix%' | Single column B-tree | Good |
| WHERE column LIKE '%suffix' | Full-text or trigram index | Fair |
| ORDER BY column | Single column B-tree | Excellent |
| GROUP BY column | Single column B-tree | Good |
Selectivity Thresholds
| Selectivity | Index Effectiveness | Recommendation |
|---|---|---|
| Above 0.95 | Excellent | Index highly beneficial |
| 0.5 - 0.95 | Good | Index usually beneficial |
| 0.1 - 0.5 | Moderate | Evaluate query patterns |
| Below 0.1 | Poor | Consider partial index or avoid |
Index Monitoring Queries
PostgreSQL index usage statistics:
# Find unused indexes
unused_indexes = ActiveRecord::Base.connection.execute(<<-SQL)
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
AND indexrelname NOT LIKE '%pkey'
SQL
# Check index sizes
index_sizes = ActiveRecord::Base.connection.execute(<<-SQL)
SELECT relname, indexrelname,
pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC
SQL
Common Index Anti-Patterns
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Index every column | Write performance degradation | Index only filtered/sorted columns |
| Redundant indexes | Wasted storage and write overhead | Remove redundant single-column indexes |
| Low cardinality indexes | Optimizer ignores index | Use partial indexes for specific values |
| Function in WHERE clause | Index not used | Create expression index |
| Type casting in comparison | Index not used | Store and compare same types |
| Leading wildcards | Full table scan required | Use full-text search or trigram index |