CrackedRuby

Overview

Pagination divides large result sets into smaller, discrete pages that can be requested and rendered incrementally. Instead of loading thousands of records simultaneously, pagination returns a subset of results with metadata indicating the current position and available navigation options.

Every web application dealing with lists eventually encounters performance degradation when result sets grow beyond a few hundred items. Database queries slow down, memory consumption increases, network transfer times extend, and browser rendering becomes sluggish. Pagination addresses these issues by limiting the number of records processed and transmitted in a single request.

The concept applies across multiple layers of application architecture. At the database level, pagination reduces the number of rows scanned and returned. At the application level, it minimizes memory usage and processing time. At the network level, it decreases payload size. At the presentation level, it creates manageable user interfaces that don't overwhelm users with information.

# Without pagination - problematic for large datasets
users = User.all  # Loads all records into memory
# => [#<User id: 1>, #<User id: 2>, ... #<User id: 50000>]

# With pagination - controlled memory usage
users = User.limit(25).offset(0)  # First page of 25 users
# => [#<User id: 1>, #<User id: 2>, ... #<User id: 25>]

Modern applications implement pagination in various contexts: search results, activity feeds, comment threads, product catalogs, administrative dashboards, and API responses. Each context may require different pagination strategies based on data characteristics, access patterns, and consistency requirements.

Key Principles

Pagination systems operate on three fundamental components: the page size (number of items per page), the page identifier (which page to retrieve), and the navigation mechanism (how to request different pages).

Page Size determines how many items appear in a single response. Smaller page sizes reduce per-request overhead but increase total requests needed. Larger page sizes reduce round trips but may cause timeout issues or poor user experience. Most applications use page sizes between 10 and 100 items, with 25 or 50 being common defaults.
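
These bounds can be enforced with a small helper; the constant names and default values below are illustrative, not from any particular library:

```ruby
# Clamp a client-supplied page size to sane bounds.
# DEFAULT_PER_PAGE, MIN_PER_PAGE, and MAX_PER_PAGE are illustrative values.
DEFAULT_PER_PAGE = 25
MIN_PER_PAGE = 1
MAX_PER_PAGE = 100

def sanitized_per_page(raw)
  requested = Integer(raw, exception: false) || DEFAULT_PER_PAGE
  requested.clamp(MIN_PER_PAGE, MAX_PER_PAGE)
end

sanitized_per_page(nil)     # => 25 (default)
sanitized_per_page("50")    # => 50
sanitized_per_page("9999")  # => 100 (capped)
sanitized_per_page("0")     # => 1 (floored)
```

Rejecting out-of-range values with an error is equally valid; the key point is that the server, not the client, has the final say on page size.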

Page Identification varies by strategy. Offset-based pagination uses numeric page numbers or record counts. Cursor-based pagination uses opaque tokens representing positions in the dataset. Keyset pagination uses actual data values as boundary markers. Each approach has different characteristics for consistency, performance, and implementation complexity.

State Management affects how pagination handles dataset changes. Stateless pagination relies solely on request parameters, making it simple but potentially inconsistent when underlying data changes. Stateful pagination maintains server-side context but requires session management. Cursor-based approaches provide a middle ground by encoding state in the cursor token.

Data ordering significantly affects pagination behavior. Consistent ordering across requests ensures users see results in a predictable sequence and that pages neither overlap nor skip records. The sort key must be stable - typically a unique column such as the primary key serves as a tiebreaker when sorting by non-unique fields.

# Unstable ordering - may produce inconsistent pagination
Product.order(:price).limit(10)

# Stable ordering - consistent pagination results
Product.order(:price, :id).limit(10)

Boundary Conditions require special handling. The first page needs no predecessor reference. The last page may contain fewer items than the standard page size. Empty result sets should return meaningful metadata. Requesting pages beyond the dataset bounds should return empty results or error messages depending on the application's error handling strategy.
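
A sketch over an in-memory collection shows one way to handle these edges; the method and key names are illustrative:

```ruby
# Offset-style pagination over an array, clamping out-of-range pages and
# returning meaningful metadata for empty result sets.
def paginate(items, page:, per_page:)
  total_pages = [(items.size.to_f / per_page).ceil, 1].max
  page = page.clamp(1, total_pages)  # out-of-range requests snap to bounds
  {
    items: items[(page - 1) * per_page, per_page] || [],
    page: page,
    total_pages: total_pages,
    last_page: page == total_pages
  }
end

paginate((1..60).to_a, page: 3, per_page: 25)[:items].size  # => 10 (short last page)
paginate([], page: 1, per_page: 25)[:total_pages]           # => 1
paginate((1..60).to_a, page: 99, per_page: 25)[:page]       # => 3 (clamped)
```

Returning a 404 or an explicit error for out-of-range pages is an equally defensible policy; the important part is choosing one deliberately.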

Pagination metadata provides clients with navigation information. This typically includes the current page identifier, page size, total count (when available), and links or tokens for adjacent pages. REST APIs often include this in response headers or a metadata object. GraphQL queries return it alongside data in the response structure.

# Typical pagination metadata structure
{
  data: [...],
  meta: {
    current_page: 2,
    per_page: 25,
    total_count: 1247,
    total_pages: 50
  },
  links: {
    first: "/users?page=1",
    prev: "/users?page=1",
    next: "/users?page=3",
    last: "/users?page=50"
  }
}

Implementation Approaches

Offset-Based Pagination uses LIMIT and OFFSET clauses to skip a calculated number of records and return the next batch. This approach maps naturally to page numbers, making it familiar to users and simple to implement. The offset value equals (page_number - 1) * page_size.

-- Page 1: offset 0, limit 25
SELECT * FROM users ORDER BY created_at DESC LIMIT 25 OFFSET 0;

-- Page 2: offset 25, limit 25
SELECT * FROM users ORDER BY created_at DESC LIMIT 25 OFFSET 25;

-- Page 3: offset 50, limit 25
SELECT * FROM users ORDER BY created_at DESC LIMIT 25 OFFSET 50;

Offset-based pagination suffers from performance degradation with large offsets. Databases must scan through all skipped rows to reach the offset position, even though those rows are discarded. Requesting page 1000 with 100 items per page forces the database to examine 100,000 rows before returning results. This makes deep pagination extremely slow on large tables.

Dataset mutations during pagination create consistency issues. If records are inserted or deleted between page requests, users may see duplicates or miss items entirely. Inserting 5 records before the current page position shifts all subsequent records, causing the next page request to show 5 items the user already saw.
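
The drift is easy to reproduce with plain arrays standing in for a newest-first table:

```ruby
# Simulate offset pagination while rows are inserted at the head of the
# dataset, as happens in a feed sorted newest-first.
dataset = ("a".."z").to_a          # 26 "records", newest first

page1 = dataset[0, 5]              # offset 0, limit 5
# => ["a", "b", "c", "d", "e"]

# Five new records arrive before page 2 is requested.
dataset = ["new1", "new2", "new3", "new4", "new5"] + dataset

page2 = dataset[5, 5]              # offset 5, limit 5
# => ["a", "b", "c", "d", "e"] -- the items the user already saw on page 1
```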

Cursor-Based Pagination eliminates offset performance issues by encoding the position in an opaque token. Each response includes cursors for the next and previous pages. Clients pass these cursors back to retrieve adjacent pages without calculating offsets.

GET /api/posts?first=25
Response includes: next_cursor="eyJpZCI6MTAwLCJ0cyI6IjIwMjUtMDEtMTUifQ=="

GET /api/posts?first=25&after=eyJpZCI6MTAwLCJ0cyI6IjIwMjUtMDEtMTUifQ==
Response includes: next_cursor="eyJpZCI6MTI1LCJ0cyI6IjIwMjUtMDEtMTQifQ=="

The cursor typically encodes the values of the sort key from the last item on the current page. The next query uses these values in a WHERE clause rather than OFFSET, allowing the database to seek directly to the correct position using indexes. This maintains consistent performance regardless of pagination depth.

Keyset Pagination (also called the seek method) uses actual column values as pagination markers, making the pagination state transparent rather than opaque. Clients specify the last value they saw, and the query returns records after that value.

-- First page
SELECT * FROM users ORDER BY id LIMIT 25;
-- Returns users with id 1-25

-- Second page using last seen id
SELECT * FROM users WHERE id > 25 ORDER BY id LIMIT 25;
-- Returns users with id 26-50

-- Third page using last seen id
SELECT * FROM users WHERE id > 50 ORDER BY id LIMIT 25;
-- Returns users with id 51-75

This approach requires exposing the sort column values to clients and works best with unique, sequential keys. Complex sorting on multiple columns requires more sophisticated WHERE clauses combining all sort fields.

-- Keyset pagination with composite sort key
SELECT * FROM products 
WHERE (price, id) > (19.99, 450)
ORDER BY price, id 
LIMIT 25;
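
Ruby arrays compare element by element, just like SQL row values, so the composite-key seek can be simulated in memory to build intuition (the data here is made up):

```ruby
# Composite-key seek over an in-memory collection.
products = [
  { price: 9.99,  id: 7 },
  { price: 19.99, id: 450 },
  { price: 19.99, id: 451 },
  { price: 24.50, id: 12 }
]

last_seen = [19.99, 450]  # sort-key values from the last row of the prior page

next_page = products
  .select { |p| ([p[:price], p[:id]] <=> last_seen) > 0 }  # (price, id) > (19.99, 450)
  .sort_by { |p| [p[:price], p[:id]] }
  .first(25)

next_page.map { |p| p[:id] }  # => [451, 12]
```

Note how rows with an equal price tie-break on id: id 450 itself is excluded while id 451 is included, so nothing is skipped or repeated.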

Time-Based Pagination serves feeds or event streams where temporal ordering matters. Rather than page numbers or row-based cursors, clients request records before or after specific timestamps.

-- Get recent posts before a timestamp
SELECT * FROM posts 
WHERE created_at < '2025-10-07 10:00:00'
ORDER BY created_at DESC 
LIMIT 50;

-- Get posts after a timestamp (polling for new content)
SELECT * FROM posts 
WHERE created_at > '2025-10-07 10:00:00'
ORDER BY created_at ASC 
LIMIT 50;

This strategy works well for infinite scroll interfaces and real-time feeds where users navigate primarily forward in time. However, multiple records with identical timestamps require including a secondary unique field to prevent skipping or duplicating items.

Ruby Implementation

Rails provides pagination through ActiveRecord query methods and dedicated gems. The core approach uses limit and offset methods on ActiveRecord relations.

class UsersController < ApplicationController
  def index
    page = [params[:page].to_i, 1].max  # guard against nil, zero, or negative pages
    per_page = 25
    
    @users = User
      .order(created_at: :desc, id: :desc)
      .limit(per_page)
      .offset((page - 1) * per_page)
    
    render json: {
      users: @users,
      meta: {
        current_page: page,
        per_page: per_page,
        total_count: User.count
      }
    }
  end
end

The Kaminari gem adds pagination methods directly to ActiveRecord models and view helpers for rendering pagination controls.

# Gemfile
gem 'kaminari'

# Controller
class PostsController < ApplicationController
  def index
    @posts = Post.page(params[:page]).per(25)
    
    render json: {
      posts: @posts,
      meta: {
        current_page: @posts.current_page,
        total_pages: @posts.total_pages,
        total_count: @posts.total_count,
        next_page: @posts.next_page,
        prev_page: @posts.prev_page
      }
    }
  end
end

# Model configuration
class Post < ApplicationRecord
  paginates_per 50  # Default per_page value
  max_paginates_per 100  # Maximum allowed per_page
end

The Pagy gem offers better performance than Kaminari by avoiding extra queries and object allocations. It provides a Pagy::Backend module for controllers and Pagy::Frontend helpers for views.

# Gemfile
gem 'pagy'

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  include Pagy::Backend
end

# Controller
class ArticlesController < ApplicationController
  def index
    @pagy, @articles = pagy(Article.order(published_at: :desc))
    
    render json: {
      articles: @articles,
      pagy: pagy_metadata(@pagy)
    }
  end
end

# Custom items per page and page parameter name
def index
  @pagy, @articles = pagy(
    Article.all,
    items: 30,
    page_param: :p
  )
end

Cursor-based pagination requires custom implementation in Rails. Encode the cursor values and decode them to construct WHERE clauses.

class FeedController < ApplicationController
  def index
    cursor = decode_cursor(params[:after])
    limit = (params[:limit]&.to_i || 25).clamp(1, 100)  # cap client-supplied page sizes
    
    posts = if cursor
      Post.where("(created_at, id) < (?, ?)", cursor[:created_at], cursor[:id])
    else
      Post.all
    end
    
    @posts = posts
      .order(created_at: :desc, id: :desc)
      .limit(limit)
    
    # A full page implies more records may exist; the client learns the
    # true end of the dataset when a follow-up request returns fewer items
    next_cursor = if @posts.size == limit
      encode_cursor(@posts.last)
    end
    
    render json: {
      posts: @posts,
      next_cursor: next_cursor
    }
  end
  
  private
  
  def encode_cursor(post)
    Base64.urlsafe_encode64({
      created_at: post.created_at.iso8601,
      id: post.id
    }.to_json)
  end
  
  def decode_cursor(cursor_string)
    return nil unless cursor_string
    
    JSON.parse(
      Base64.urlsafe_decode64(cursor_string),
      symbolize_names: true
    )
  rescue JSON::ParserError, ArgumentError  # malformed Base64 or JSON
    nil
  end
end

Keyset pagination with composite keys in ActiveRecord requires careful WHERE clause construction to handle the comparison correctly.

class Product < ApplicationRecord
  scope :page_after, ->(price, id) {
    where("(price, id) > (?, ?)", price, id)
      .order(:price, :id)
  }
  
  scope :page_before, ->(price, id) {
    where("(price, id) < (?, ?)", price, id)
      .order(price: :desc, id: :desc)
  }
end

# Controller usage
class ProductsController < ApplicationController
  def index
    if params[:after_price] && params[:after_id]
      @products = Product.page_after(
        params[:after_price],
        params[:after_id]
      ).limit(25)
    else
      @products = Product.order(:price, :id).limit(25)
    end
    
    last_product = @products.last
    next_params = if last_product
      { after_price: last_product.price, after_id: last_product.id }
    end
    
    render json: {
      products: @products,
      next_page_params: next_params
    }
  end
end

GraphQL APIs in Ruby typically use cursor-based pagination following the Relay connection specification. The graphql-ruby gem provides connection helpers.

# app/graphql/types/query_type.rb
module Types
  class QueryType < Types::BaseObject
    field :posts, Types::PostType.connection_type, null: false
    
    def posts
      Post.order(created_at: :desc)
    end
  end
end

# Query example
query {
  posts(first: 10, after: "MjU") {
    edges {
      cursor
      node {
        id
        title
        createdAt
      }
    }
    pageInfo {
      hasNextPage
      hasPreviousPage
      startCursor
      endCursor
    }
  }
}

Design Considerations

Offset-based pagination makes sense for traditional web interfaces with numbered pages where users jump to specific page numbers or navigate primarily near the beginning of the dataset. Search results, administrative dashboards, and catalogs with filters fit this pattern. The ability to bookmark or share specific page URLs provides value.

Applications avoid offset pagination when dealing with frequently changing datasets, very large tables, or real-time feeds. The performance penalty at deep offsets becomes prohibitive beyond a few thousand pages. Social media feeds, activity streams, and any interface using infinite scroll should use cursor-based approaches instead.

Cursor-based pagination excels for mobile applications and infinite scroll interfaces where users navigate sequentially forward through content. The opaque cursor prevents users from manipulating pagination in unexpected ways. Performance remains consistent regardless of depth. The main drawback is the inability to jump to arbitrary pages or show total page counts.

Applications choose cursor pagination when data consistency matters more than random access. If seeing every item exactly once is critical - such as in financial transactions, audit logs, or data exports - cursor-based approaches provide better guarantees. Offset pagination may skip or duplicate records when concurrent modifications occur.

Keyset pagination provides the performance benefits of cursor-based pagination while maintaining URL readability and bookmarkability. Exposing the sort key values lets users construct queries manually and understand their position in the dataset. This transparency helps debugging but may leak information about data distribution or sequencing.

The choice between opaque cursors and transparent keyset pagination depends on whether the sort column values should be public information. User IDs, timestamps, and sequential identifiers usually present no security concerns. Internal scores, rankings, or sensitive calculated values should remain hidden in opaque cursors.
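
When cursor contents must stay hidden, a common approach is to sign the encoded payload so clients can neither rely on its structure nor tamper with it. This HMAC-based sketch is illustrative; SECRET stands in for a real key loaded from application credentials:

```ruby
require "json"
require "base64"
require "openssl"

SECRET = "replace-with-a-real-key"  # illustrative; load from credentials in practice

# Encode a payload and append an HMAC so tampering is detectable.
def sign_cursor(payload)
  body = Base64.urlsafe_encode64(payload.to_json, padding: false)
  sig  = OpenSSL::HMAC.hexdigest("SHA256", SECRET, body)
  "#{body}.#{sig}"
end

# Return the payload for a valid cursor, or nil for a forged or garbled one.
def verify_cursor(token)
  body, sig = token.to_s.split(".", 2)
  return nil unless body && sig

  expected = OpenSSL::HMAC.hexdigest("SHA256", SECRET, body)
  return nil unless OpenSSL.secure_compare(expected, sig)

  JSON.parse(Base64.urlsafe_decode64(body), symbolize_names: true)
end

token = sign_cursor(score: 0.87, id: 450)
verify_cursor(token)               # => { score: 0.87, id: 450 }
verify_cursor(token + "tampered")  # => nil
```

Signing only detects tampering; Base64 is trivially reversible, so values that must stay confidential should be encrypted (for example with ActiveSupport::MessageEncryptor) rather than merely signed.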

Time-based pagination suits event logs, notifications, and chat messages where temporal ordering is the primary access pattern. Clients can request items before or after specific timestamps, enabling both historical browsing and polling for updates. Including a unique secondary sort field prevents issues with records sharing timestamps.

# Time-based with tiebreaker
class Message < ApplicationRecord
  scope :before, ->(timestamp, id) {
    where("(created_at, id) < (?, ?)", timestamp, id)
      .order(created_at: :desc, id: :desc)
  }
  
  scope :after, ->(timestamp, id) {
    where("(created_at, id) > (?, ?)", timestamp, id)
      .order(created_at: :asc, id: :asc)
  }
end

Total count calculation adds overhead to every paginated query. Many pagination strategies avoid counting total results, instead indicating only whether more results exist. Displaying "Showing 1-25 of many" with a next button performs better than "Page 1 of 487". Applications showing total counts should cache these values and refresh them periodically rather than counting on every request.

API design influences pagination strategy selection. REST APIs typically use link headers or metadata objects with URLs for adjacent pages. GraphQL APIs follow the Relay connection specification with edges, nodes, and pageInfo. The API style constrains how pagination metadata can be represented and accessed.
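
For the REST style, a small helper can assemble the standard Link header (RFC 8288) from rel/URL pairs; the helper name is illustrative:

```ruby
# Build an RFC 8288 Link header; nil URLs (e.g. no previous page) are skipped.
def link_header(links)
  links.compact.map { |rel, url| %(<#{url}>; rel="#{rel}") }.join(", ")
end

link_header(next: "/users?page=3", prev: nil, last: "/users?page=50")
# => '</users?page=3>; rel="next", </users?page=50>; rel="last"'
```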

Performance Considerations

Offset-based pagination degrades linearly with the offset value. Requesting OFFSET 10000 LIMIT 50 forces the database to scan through 10,000 rows before returning 50 results. On a table with 10 million records, deep pagination queries can take seconds or minutes to complete.

# Performance comparison on 1 million records
Benchmark.bm do |x|
  x.report("Page 1 (offset 0):") do
    User.offset(0).limit(50).to_a
  end
  
  x.report("Page 100 (offset 5000):") do
    User.offset(5000).limit(50).to_a
  end
  
  x.report("Page 10000 (offset 500000):") do
    User.offset(500000).limit(50).to_a
  end
end

# Results (approximate):
#                          user     system      total        real
# Page 1 (offset 0):       0.002    0.001      0.003    (0.045)
# Page 100 (offset 5000):  0.008    0.002      0.010    (0.125)
# Page 10000 (offset 500k): 0.850   0.045      0.895   (2.347)

Cursor and keyset pagination maintain constant performance across all pages because they use WHERE clauses that can leverage indexes. The database seeks directly to the starting position rather than scanning through skipped rows.

# Cursor-based pagination - consistent performance
Benchmark.bm do |x|
  x.report("First page:") do
    User.order(:id).limit(50).to_a
  end
  
  x.report("After id 5000:") do
    User.where("id > ?", 5000).order(:id).limit(50).to_a
  end
  
  x.report("After id 500000:") do
    User.where("id > ?", 500000).order(:id).limit(50).to_a
  end
end

# Results (approximate):
#                          user     system      total        real
# First page:              0.002    0.001      0.003    (0.045)
# After id 5000:           0.002    0.001      0.003    (0.047)
# After id 500000:         0.002    0.001      0.003    (0.046)

Index design significantly affects pagination performance. The ORDER BY columns must have appropriate indexes. Composite indexes should match the sort order. For keyset pagination with multiple columns, a composite index covering all sort columns in order provides optimal performance.

# Migration for composite index supporting keyset pagination
class AddIndexForPagination < ActiveRecord::Migration[7.0]
  def change
    # Single column sort
    add_index :posts, :created_at
    
    # Composite sort with tiebreaker
    add_index :posts, [:created_at, :id]
    
    # Multiple sort columns
    add_index :products, [:price, :rating, :id]
  end
end

Counting total records adds a separate COUNT query to every paginated request. On large tables an exact COUNT must scan the table or an index, while approximate counts drawn from planner statistics may be stale. Paginated queries should avoid total counts when possible or use caching strategies.

# Expensive: counts on every request
def index
  @pagy, @posts = pagy(Post.all)
  # Executes: SELECT COUNT(*) FROM posts
  # Executes: SELECT * FROM posts LIMIT 25 OFFSET 0
end

# Better: check for next page only
def index
  posts = Post.limit(26).offset((page - 1) * 25).to_a  # one extra record signals a next page
  has_next = posts.size > 25
  
  render json: {
    posts: posts.first(25),
    has_next: has_next
  }
end

# Best for stable data: cache count
def index
  total_count = Rails.cache.fetch("posts_count", expires_in: 10.minutes) do
    Post.count
  end
  
  @posts = Post.limit(25).offset((page - 1) * 25)
  
  render json: {
    posts: @posts,
    total_count: total_count
  }
end

Eager loading prevents N+1 queries when pagination returns records with associations. The pagination query should include necessary joins or includes before limiting results.

# N+1 problem with pagination
posts = Post.limit(25)
posts.each do |post|
  puts post.author.name  # Triggers separate query for each post
end

# Solved with eager loading
posts = Post.includes(:author).limit(25)
posts.each do |post|
  puts post.author.name  # Author loaded in single query
end

# Complex eager loading with pagination
@posts = Post
  .includes(:author, comments: :user)
  .where(published: true)
  .order(published_at: :desc, id: :desc)
  .limit(25)
  .offset((page - 1) * 25)

Database-specific optimizations improve pagination performance. PostgreSQL supports FETCH FIRST and row value comparisons efficiently. MySQL has different optimization characteristics for LIMIT/OFFSET versus derived tables.

-- PostgreSQL row value comparison for keyset pagination
SELECT * FROM posts 
WHERE (created_at, id) < ('2025-10-07'::timestamp, 1000)
ORDER BY created_at DESC, id DESC
FETCH FIRST 25 ROWS ONLY;

-- MySQL optimization using derived table
SELECT * FROM (
  SELECT id FROM posts 
  ORDER BY created_at DESC 
  LIMIT 25 OFFSET 10000
) AS page_ids
JOIN posts USING (id)
ORDER BY created_at DESC;

Practical Examples

E-commerce Product Listing with filters requires maintaining pagination state across filter changes. Resetting to page 1 when filters change prevents showing empty pages.

class ProductsController < ApplicationController
  def index
    @products = Product.all
    
    # Apply filters
    @products = @products.where(category: params[:category]) if params[:category]
    @products = @products.where("price >= ?", params[:min_price]) if params[:min_price]
    @products = @products.where("price <= ?", params[:max_price]) if params[:max_price]
    
    # Apply sorting - allowlist column and direction to reject unsafe input
    sort_column = %w[created_at price name].include?(params[:sort]) ? params[:sort] : "created_at"
    sort_direction = %w[asc desc].include?(params[:direction]) ? params[:direction] : "desc"
    
    # Count before ordering and limiting so the total matches the filter set
    total_count = @products.count
    
    @products = @products.order(sort_column => sort_direction, id: :desc)
    
    # Paginate
    page = [params[:page].to_i, 1].max
    per_page = 24
    
    @products = @products.limit(per_page).offset((page - 1) * per_page)
    
    render json: {
      products: @products,
      meta: {
        current_page: page,
        per_page: per_page,
        total_count: total_count,
        total_pages: (total_count.to_f / per_page).ceil
      },
      links: build_page_links(page, total_count, per_page)
    }
  end
  
  private
  
  def build_page_links(page, total_count, per_page)
    total_pages = (total_count.to_f / per_page).ceil
    base_url = request.base_url + request.path
    filter_params = request.query_parameters.except("page").to_query
    
    {
      self: "#{base_url}?page=#{page}&#{filter_params}",
      first: "#{base_url}?page=1&#{filter_params}",
      last: "#{base_url}?page=#{total_pages}&#{filter_params}",
      prev: page > 1 ? "#{base_url}?page=#{page - 1}&#{filter_params}" : nil,
      next: page < total_pages ? "#{base_url}?page=#{page + 1}&#{filter_params}" : nil
    }
  end
end

Social Media Feed with infinite scroll uses cursor-based pagination to handle real-time updates and provide consistent sequential access.

class FeedItemsController < ApplicationController
  def index
    limit = params[:limit]&.to_i || 25
    limit = [limit, 100].min  # Cap at 100
    
    feed_items = FeedItem
      .includes(:user, :comments)
      .where(user_id: current_user.following_ids)
    
    if params[:cursor]
      decoded = decode_cursor(params[:cursor])
      feed_items = feed_items.where(
        "(created_at, id) < (?, ?)",
        decoded[:created_at],
        decoded[:id]
      )
    end
    
    @feed_items = feed_items
      .order(created_at: :desc, id: :desc)
      .limit(limit + 1)  # Fetch one extra to check for more
    
    has_more = @feed_items.size > limit
    items = @feed_items.first(limit)
    
    next_cursor = if has_more && items.any?
      encode_cursor(items.last)
    end
    
    render json: {
      items: items.as_json(include: { user: { only: [:id, :name, :avatar_url] } }),
      next_cursor: next_cursor,
      has_more: has_more
    }
  end
  
  def updates
    # Polling for new items since last check
    since_cursor = decode_cursor(params[:since_cursor])
    return render json: { items: [] } unless since_cursor
    
    new_items = FeedItem
      .includes(:user)
      .where(user_id: current_user.following_ids)
      .where(
        "(created_at, id) > (?, ?)",
        since_cursor[:created_at],
        since_cursor[:id]
      )
      .order(created_at: :asc, id: :asc)
      .limit(50)
    
    render json: {
      items: new_items,
      count: new_items.size
    }
  end
  
  private
  
  def encode_cursor(item)
    data = {
      created_at: item.created_at.utc.iso8601(6),
      id: item.id
    }
    Base64.urlsafe_encode64(data.to_json, padding: false)
  end
  
  def decode_cursor(cursor)
    return nil if cursor.blank?
    
    JSON.parse(
      Base64.urlsafe_decode64(cursor),
      symbolize_names: true
    )
  rescue JSON::ParserError, ArgumentError
    nil
  end
end

API Data Export requiring pagination to handle large datasets while maintaining data consistency uses keyset pagination with explicit sort keys.

class ExportsController < ApplicationController
  def transactions
    # Client requests: GET /api/exports/transactions?after_id=5000&limit=1000
    
    limit = params[:limit]&.to_i || 1000
    limit = [limit, 5000].min  # Cap at 5000 for exports
    
    transactions = Transaction
      .select(:id, :user_id, :amount, :currency, :created_at, :status)
      .where(created_at: params[:start_date]..params[:end_date])
    
    if params[:after_id]
      transactions = transactions.where("id > ?", params[:after_id])
    end
    
    @transactions = transactions
      .order(:id)
      .limit(limit + 1)
    
    has_more = @transactions.size > limit
    records = @transactions.first(limit)
    
    # Format for CSV export
    csv_data = CSV.generate(headers: true) do |csv|
      csv << ["ID", "User ID", "Amount", "Currency", "Created At", "Status"]
      records.each do |t|
        csv << [t.id, t.user_id, t.amount, t.currency, t.created_at, t.status]
      end
    end
    
    if has_more
      last_id = records.last.id
      next_url = export_transactions_url(
        after_id: last_id,
        limit: limit,
        start_date: params[:start_date],
        end_date: params[:end_date]
      )
      response.headers["X-Next-Page"] = next_url
    end
    
    # send_data sets Content-Type and Content-Disposition together;
    # render plain: would reset the content type to text/plain
    send_data csv_data,
      filename: "transactions_#{params[:after_id] || 'start'}.csv",
      type: "text/csv"
  end
end

Search Results with Relevance Ranking combines full-text search scores with pagination, requiring careful handling of sort keys and tie-breaking.

class SearchController < ApplicationController
  def articles
    query = params[:q]
    page = [params[:page].to_i, 1].max
    per_page = 20
    
    # Using PostgreSQL full-text search. from() does not take bind
    # parameters, so sanitize the ranking expression explicitly.
    rank_sql = Article.sanitize_sql_array(
      ["ts_rank(search_vector, plainto_tsquery('english', ?)) AS rank", query]
    )
    
    @articles = Article
      .select("articles.*", rank_sql)
      .where("search_vector @@ plainto_tsquery('english', ?)", query)
      .order(Arel.sql("rank DESC, id DESC"))  # Tie-breaker on id
      .limit(per_page)
      .offset((page - 1) * per_page)
    
    # Count with the same search criteria
    total_count = Article
      .where("search_vector @@ plainto_tsquery('english', ?)", query)
      .count
    
    render json: {
      articles: @articles.map { |a|
        {
          id: a.id,
          title: a.title,
          snippet: a.snippet,
          relevance_score: a.rank,
          url: article_path(a)
        }
      },
      meta: {
        query: query,
        page: page,
        per_page: per_page,
        total_results: total_count,
        total_pages: (total_count.to_f / per_page).ceil
      }
    }
  end
end

Reference

Pagination Strategy Comparison

| Strategy | Performance | Consistency | Random Access | Use Case |
| --- | --- | --- | --- | --- |
| Offset-based | Degrades with depth | Poor with mutations | Yes - page numbers | Small datasets, traditional UIs |
| Cursor-based | Constant | Good | No - sequential only | Large datasets, mobile apps |
| Keyset | Constant | Good | Limited - via key values | Large datasets, transparent state |
| Time-based | Constant | Good for time series | Limited - via timestamps | Event logs, feeds, chat |

ActiveRecord Pagination Methods

| Method | Description | Example |
| --- | --- | --- |
| limit | Sets maximum records to return | User.limit(25) |
| offset | Skips specified number of records | User.offset(50) |
| page (Kaminari) | Loads specific page number | User.page(2) |
| per (Kaminari) | Sets page size | User.page(2).per(50) |
| pagy | Creates pagy object and collection | pagy(User.all) |

Metadata Fields

| Field | Purpose | Example Value |
| --- | --- | --- |
| current_page | Current page identifier | 3 |
| per_page | Items per page | 25 |
| total_count | Total matching records | 1247 |
| total_pages | Total available pages | 50 |
| has_next_page | More results available | true |
| has_prev_page | Previous results available | true |
| next_cursor | Token for next page | eyJpZCI6MTAwfQ== |
| prev_cursor | Token for previous page | eyJpZCI6NzV9 |

Performance Optimization Checklist

| Optimization | Implementation | Impact |
| --- | --- | --- |
| Add indexes | Index all ORDER BY columns | High |
| Use composite indexes | Match sort order exactly | High |
| Avoid COUNT queries | Use has_more flag instead | Medium |
| Eager load associations | Use includes or preload | High |
| Cache total counts | Store in cache for stable data | Medium |
| Limit page depth | Cap maximum offset value | Medium |
| Use cursor pagination | For large datasets | High |
| Add query timeouts | Prevent long-running queries | Low |

Common WHERE Clauses for Keyset Pagination

| Sort Order | WHERE Clause | Example |
| --- | --- | --- |
| Single column ASC | column > last_value | id > 100 |
| Single column DESC | column < last_value | created_at < '2025-10-07' |
| Two columns ASC | (col1, col2) > (val1, val2) | (price, id) > (19.99, 450) |
| Two columns DESC | (col1, col2) < (val1, val2) | (created_at, id) < ('2025-10-07', 500) |
| Mixed direction (ASC, DESC) | col1 > v1 OR (col1 = v1 AND col2 < v2) | price > 19.99 OR (price = 19.99 AND id < 450) |

GraphQL Connection Structure

| Field | Type | Description |
| --- | --- | --- |
| edges | array | Array of edge objects |
| edges[].node | object | The actual data record |
| edges[].cursor | string | Cursor for this record |
| pageInfo | object | Pagination metadata |
| pageInfo.hasNextPage | boolean | More results available |
| pageInfo.hasPreviousPage | boolean | Previous results available |
| pageInfo.startCursor | string | Cursor of first edge |
| pageInfo.endCursor | string | Cursor of last edge |

URL Parameter Conventions

| Parameter | Purpose | Example |
| --- | --- | --- |
| page | Page number for offset pagination | ?page=3 |
| per_page | Items per page | ?per_page=50 |
| limit | Maximum items to return | ?limit=100 |
| offset | Number of items to skip | ?offset=200 |
| cursor | Opaque pagination token | ?cursor=eyJpZCI6MTAwfQ== |
| after | Cursor for next page | ?after=abc123 |
| before | Cursor for previous page | ?before=xyz789 |
| since | Timestamp for newer items | ?since=2025-10-07T10:00:00Z |
| until | Timestamp for older items | ?until=2025-10-06T10:00:00Z |