Overview
GDBM provides a persistent hash-like interface for storing key-value pairs in disk-based database files. Ruby's GDBM library wraps the GNU Database Manager, offering a simple API that resembles Hash operations while maintaining data across program executions.
The GDBM class serves as the primary interface, supporting string keys and string values; complex Ruby objects must be serialized explicitly, for example with Marshal or JSON. GDBM databases operate as single files on disk, making them suitable for applications requiring simple persistence without the overhead of full database systems.
require 'gdbm'
# Open database file (creates if doesn't exist)
db = GDBM.open('data.gdbm', 0644, GDBM::WRCREAT)
db['user:123'] = 'alice@example.com'
db['config:theme'] = 'dark'
db.close
GDBM handles file locking automatically, preventing corruption during concurrent access. The library supports multiple access modes including read-only, write access, and creation modes. Unlike in-memory hashes, GDBM operations trigger disk I/O, affecting performance characteristics.
# Read-only access
db = GDBM.open('data.gdbm', 0644, GDBM::READER)
email = db['user:123'] # => "alice@example.com"
db.close
# Check existence without loading value
db = GDBM.open('data.gdbm', 0644, GDBM::READER)
exists = db.key?('user:123') # => true
db.close
GDBM excels in scenarios requiring simple key-value persistence: configuration storage, caching, session management, and small-scale data persistence. The format remains portable across different Ruby versions and platforms, though concurrent write access requires careful coordination.
Basic Usage
Opening a GDBM database requires specifying the filename, file permissions, and access mode. The mode parameter controls read/write capabilities and file creation behavior.
require 'gdbm'
require 'json'
# Create new database or open existing
db = GDBM.open('app.gdbm', 0644, GDBM::WRCREAT)
# Store values
db['session:abc123'] = Marshal.dump({user_id: 42, expires: Time.now + 3600})
db['counter'] = '15'
db['settings'] = JSON.generate({theme: 'light', language: 'en'})
# Retrieve values
session_data = Marshal.load(db['session:abc123'])
current_count = db['counter'].to_i
settings = JSON.parse(db['settings'])
db.close
GDBM stores all data as strings, requiring explicit conversion for numeric types and serialization for complex objects. Marshal provides the most comprehensive serialization for Ruby objects, while JSON offers better interoperability with external systems.
Iteration operates similarly to Hash, with each, each_key, and each_value methods. GDBM maintains no guaranteed key ordering, as the underlying hash table structure prioritizes access efficiency over sequence preservation.
db = GDBM.open('inventory.gdbm', 0644, GDBM::WRCREAT)
# Populate database
products = {
'SKU001' => {name: 'Widget', price: 9.99, stock: 100},
'SKU002' => {name: 'Gadget', price: 15.50, stock: 50},
'SKU003' => {name: 'Tool', price: 25.00, stock: 25}
}
products.each do |sku, data|
db[sku] = Marshal.dump(data)
end
# Find products with low stock
low_stock = []
db.each do |sku, serialized_data|
product = Marshal.load(serialized_data)
low_stock << sku if product[:stock] < 30
end
puts "Low stock items: #{low_stock}" # => ["SKU003"]
db.close
Key deletion uses the delete method, which returns the deleted value or nil if the key doesn't exist. The clear method removes all entries, while empty? and length provide database state information.
db = GDBM.open('cache.gdbm', 0644, GDBM::WRCREAT)
# Store temporary data
db['temp:file1'] = '/tmp/upload_123.dat'
db['temp:file2'] = '/tmp/upload_456.dat'
db['permanent:config'] = 'production_settings.yml'
puts "Database size: #{db.length}" # => 3
# Cleanup temporary entries
db.each_key do |key|
if key.start_with?('temp:')
deleted_value = db.delete(key)
puts "Removed #{key}: #{deleted_value}"
end
end
puts "Remaining entries: #{db.length}" # => 1
db.close
Error Handling & Debugging
GDBM operations raise specific exceptions for different error conditions. File access problems, locking conflicts, and invalid operations generate distinct exception types that applications must handle appropriately.
GDBM::FatalError indicates serious database corruption or system-level problems that typically require intervention. This exception suggests file system issues, permission problems, or corrupted database files that cannot be recovered through normal operations.
require 'gdbm'
def safe_database_access(filename)
begin
db = GDBM.open(filename, 0644, GDBM::WRCREAT)
yield db
rescue GDBM::FatalError => e
puts "Fatal database error: #{e.message}"
puts "Database file may be corrupted or inaccessible"
return false
rescue Errno::EACCES => e
puts "Permission denied accessing #{filename}"
puts "Check file permissions and directory access"
return false
rescue Errno::ENOENT => e
puts "Database directory doesn't exist: #{filename}"
return false
ensure
db&.close
end
true
end
# Usage with error handling
success = safe_database_access('data/app.gdbm') do |db|
db['status'] = 'running'
db['last_update'] = Time.now.to_s
end
puts success ? "Database updated successfully" : "Database operation failed"
Key-related errors occur when accessing non-existent keys without proper checks. GDBM's [] returns nil for missing keys, but fetch without a default raises IndexError (where Hash#fetch raises KeyError), making explicit existence checking crucial for reliable applications.
db = GDBM.open('users.gdbm', 0644, GDBM::WRCREAT)
# Unsafe access - fetch without a default raises IndexError if key missing
begin
user_data = db.fetch('user:999')
puts "User found: #{user_data}"
rescue IndexError
puts "User not found in database"
end
# Safe access patterns
user_id = 'user:999'
if db.key?(user_id)
user_data = db[user_id]
puts "User data: #{user_data}"
else
puts "User #{user_id} does not exist"
end
# Alternative using fetch with default
user_data = db.fetch(user_id, nil)
puts user_data ? "Found: #{user_data}" : "User not found"
db.close
Debugging GDBM issues often involves examining file permissions, disk space, and concurrent access patterns. Database corruption typically results from improper shutdown or concurrent writes without adequate locking coordination.
class GDBMDebugger
def self.diagnose_database(filename)
info = {
file_exists: File.exist?(filename),
file_size: File.exist?(filename) ? File.size(filename) : 0,
readable: File.readable?(filename),
writable: File.writable?(filename),
permissions: File.exist?(filename) ? File.stat(filename).mode.to_s(8) : 'N/A'
}
puts "Database file analysis for #{filename}:"
info.each { |key, value| puts " #{key}: #{value}" }
if info[:file_exists] && info[:readable]
begin
db = GDBM.open(filename, 0644, GDBM::READER)
puts " entries: #{db.length}"
puts " first_key: #{db.keys.first || 'none'}"
db.close
puts " status: healthy"
rescue => e
puts " status: corrupted (#{e.class}: #{e.message})"
end
else
puts " status: inaccessible"
end
end
end
# Debug problematic database
GDBMDebugger.diagnose_database('problematic.gdbm')
Performance & Memory
GDBM performance characteristics differ significantly from in-memory Hash operations. Each database operation involves disk I/O, making batch operations and connection pooling important optimization strategies.
Key lookup performance remains relatively constant regardless of database size, thanks to the underlying hash table structure. However, iteration performance degrades with larger databases since GDBM must traverse the entire file structure.
require 'benchmark'
require 'gdbm'
def benchmark_gdbm_operations
filename = 'benchmark.gdbm'
File.delete(filename) if File.exist?(filename)
db = GDBM.open(filename, 0644, GDBM::WRCREAT)
# Benchmark insertion
puts "Inserting 10,000 records:"
time = Benchmark.measure do
10_000.times do |i|
key = "key_#{i.to_s.rjust(5, '0')}"
value = "data_#{i}_#{('a'..'z').to_a.sample(10).join}"
db[key] = value
end
end
puts "Insert time: #{time.real.round(3)} seconds"
# Benchmark random access
puts "\nRandom access (1,000 lookups):"
time = Benchmark.measure do
1_000.times do
key = "key_#{rand(10_000).to_s.rjust(5, '0')}"
value = db[key] # [] returns nil for missing keys, so no rescue is needed
end
end
puts "Lookup time: #{time.real.round(3)} seconds"
# Benchmark iteration
puts "\nFull iteration:"
count = 0
time = Benchmark.measure do
db.each { |k, v| count += 1 }
end
puts "Iteration time: #{time.real.round(3)} seconds (#{count} records)"
db.close
File.delete(filename)
end
benchmark_gdbm_operations
Memory usage remains minimal since GDBM doesn't load the entire database into memory. Only accessed values consume memory, making GDBM suitable for databases larger than available RAM. However, each opened database consumes file descriptors, limiting concurrent database access.
Optimization strategies include batching operations, minimizing database open/close cycles, and choosing an appropriate serialization format. JSON output is often more compact than Marshal for simple structures, though it supports fewer Ruby types and can cost more CPU time for complex objects.
class OptimizedGDBMCache
def initialize(filename, max_connections: 5)
@filename = filename
@connection_pool = []
@max_connections = max_connections
@mutex = Mutex.new
end
def with_connection
connection = acquire_connection
begin
yield connection
ensure
release_connection(connection)
end
end
def batch_update(updates)
with_connection do |db|
updates.each { |key, value| db[key] = value }
end
end
def batch_fetch(keys)
results = {}
with_connection do |db|
keys.each do |key|
results[key] = db[key] if db.key?(key)
end
end
results
end
private
def acquire_connection
@mutex.synchronize do
if @connection_pool.empty?
# Note: GDBM permits only one writer per file, so opening a second
# write handle blocks or fails; in practice this pool serializes writers
GDBM.open(@filename, 0644, GDBM::WRCREAT)
else
@connection_pool.pop
end
end
end
def release_connection(connection)
@mutex.synchronize do
if @connection_pool.size < @max_connections
@connection_pool.push(connection)
else
connection.close
end
end
end
end
# Usage example
cache = OptimizedGDBMCache.new('app_cache.gdbm')
# Batch operations reduce I/O overhead
updates = (1..100).map { |i| ["batch_key_#{i}", "value_#{i}"] }.to_h
cache.batch_update(updates)
keys_to_fetch = ['batch_key_1', 'batch_key_50', 'batch_key_100']
results = cache.batch_fetch(keys_to_fetch)
puts "Fetched #{results.size} items in single operation"
Thread Safety & Concurrency
GDBM provides file-level locking to prevent corruption during concurrent access, but applications must coordinate database access patterns to avoid deadlocks and ensure data consistency.
Multiple threads can safely read from the same database file simultaneously when opened in read-only mode. However, write operations require exclusive access, blocking other threads attempting to access the same database file.
require 'gdbm'
class ThreadSafeGDBMWrapper
def initialize(filename)
@filename = filename
@mutex = Mutex.new
@readers = 0
@writer = false
end
def read_transaction
@mutex.synchronize do
wait_for_writers
@readers += 1
end
begin
db = GDBM.open(@filename, 0644, GDBM::READER)
yield db
ensure
db&.close
@mutex.synchronize { @readers -= 1 }
end
end
def write_transaction
@mutex.synchronize do
wait_for_readers_and_writers
@writer = true
end
begin
db = GDBM.open(@filename, 0644, GDBM::WRCREAT)
yield db
ensure
db&.close
@mutex.synchronize { @writer = false }
end
end
private
def wait_for_writers
Thread.pass while @writer
end
def wait_for_readers_and_writers
Thread.pass while @readers > 0 || @writer
end
end
# Concurrent access example
wrapper = ThreadSafeGDBMWrapper.new('shared.gdbm')
# Multiple reader threads
readers = 5.times.map do |i|
Thread.new do
10.times do |j|
wrapper.read_transaction do |db|
value = db.fetch("key_#{i}", "default")
puts "Reader #{i}: #{value}"
sleep(0.01)
end
end
end
end
# Single writer thread
writer = Thread.new do
10.times do |i|
wrapper.write_transaction do |db|
db["key_#{i}"] = "value_#{i}_#{Time.now.to_f}"
puts "Writer: updated key_#{i}"
sleep(0.02)
end
end
end
[readers, writer].flatten.each(&:join)
Database-level locking prevents file corruption but doesn't provide transaction semantics. Applications requiring atomic multi-key operations must implement higher-level coordination mechanisms.
Process-level concurrency requires additional consideration since GDBM file locks operate at the system level. Multiple processes accessing the same database file coordinate through the operating system's file locking mechanism.
class ProcessSafeCounter
def initialize(filename, key = 'counter')
@filename = filename
@key = key
end
def increment(amount = 1)
retry_count = 0
begin
db = GDBM.open(@filename, 0644, GDBM::WRCREAT)
current_value = db.fetch(@key, '0').to_i
new_value = current_value + amount
db[@key] = new_value.to_s
db.close
new_value
rescue Errno::EAGAIN, Errno::EACCES => e
retry_count += 1
if retry_count < 5
sleep(0.01 * retry_count) # Linear backoff: wait grows with each retry
retry
else
raise "Failed to acquire database lock after #{retry_count} attempts"
end
end
end
def current_value
db = GDBM.open(@filename, 0644, GDBM::READER)
value = db.fetch(@key, '0').to_i
db.close
value
end
end
# Multi-process counter usage
counter = ProcessSafeCounter.new('shared_counter.gdbm')
# Simulate concurrent processes
processes = 3.times.map do |proc_id|
fork do
10.times do |i|
new_value = counter.increment
puts "Process #{proc_id}: incremented to #{new_value}"
sleep(rand(0.05))
end
end
end
processes.each { |pid| Process.wait(pid) }
puts "Final counter value: #{counter.current_value}"
Production Patterns
Production GDBM usage requires careful consideration of backup strategies, monitoring, and graceful degradation patterns. Database files can grow significantly over time, requiring rotation and maintenance procedures.
Configuration management represents a common production use case where GDBM provides persistent storage for application settings. The database stores serialized configuration data that survives application restarts while maintaining fast access characteristics.
class ProductionConfigManager
def initialize(config_file)
@config_file = config_file
@cache = {}
@last_reload = Time.at(0) # force a load from disk on first access
@reload_interval = 300 # 5 minutes
end
def get_config(key, default = nil)
reload_if_stale
@cache.fetch(key, default)
end
def update_config(updates)
begin
db = GDBM.open(@config_file, 0644, GDBM::WRCREAT)
updates.each do |key, value|
db[key] = Marshal.dump(value)
@cache[key] = value
end
db.close
@last_reload = Time.now
true
rescue => e
Rails.logger.error "Config update failed: #{e.message}" if defined?(Rails)
false
end
end
def reload_config
begin
config = {}
db = GDBM.open(@config_file, 0644, GDBM::READER)
db.each do |key, serialized_value|
config[key] = Marshal.load(serialized_value)
end
db.close
@cache = config
@last_reload = Time.now
rescue => e
Rails.logger.warn "Config reload failed, using cached values: #{e.message}" if defined?(Rails)
end
end
private
def reload_if_stale
if Time.now - @last_reload > @reload_interval
reload_config
end
end
end
# Rails integration example
class ApplicationController < ActionController::Base
before_action :load_dynamic_config
private
def load_dynamic_config
@config_manager = ProductionConfigManager.new(Rails.root.join('config', 'dynamic.gdbm').to_s)
@feature_flags = @config_manager.get_config('feature_flags', {})
@maintenance_mode = @config_manager.get_config('maintenance_mode', false)
end
end
Session storage provides another production pattern where GDBM offers persistent sessions without requiring external dependencies. The implementation includes session cleanup and size monitoring to prevent unbounded growth.
class GDBMSessionStore
def initialize(session_file, cleanup_probability: 0.001)
@session_file = session_file
@cleanup_probability = cleanup_probability
end
def write_session(session_id, session_data, expires_at)
session_record = {
data: session_data,
expires_at: expires_at,
created_at: Time.now
}
begin
db = GDBM.open(@session_file, 0644, GDBM::WRCREAT)
db[session_id] = Marshal.dump(session_record)
db.close
# Probabilistic cleanup to avoid constant overhead
cleanup_expired_sessions if rand < @cleanup_probability
true
rescue => e
Rails.logger.error "Session write failed: #{e.message}" if defined?(Rails)
false
end
end
def read_session(session_id)
begin
db = GDBM.open(@session_file, 0644, GDBM::READER)
serialized_record = db[session_id]
db.close
return nil unless serialized_record
session_record = Marshal.load(serialized_record)
if session_record[:expires_at] > Time.now
session_record[:data]
else
delete_session(session_id)
nil
end
rescue TypeError, ArgumentError
nil
rescue => e
Rails.logger.error "Session read failed: #{e.message}" if defined?(Rails)
nil
end
end
def delete_session(session_id)
begin
db = GDBM.open(@session_file, 0644, GDBM::WRCREAT)
db.delete(session_id)
db.close
true
rescue => e
Rails.logger.error "Session delete failed: #{e.message}" if defined?(Rails)
false
end
end
def cleanup_expired_sessions
expired_count = 0
begin
db = GDBM.open(@session_file, 0644, GDBM::WRCREAT)
expired_sessions = []
db.each do |session_id, serialized_record|
session_record = Marshal.load(serialized_record)
if session_record[:expires_at] <= Time.now
expired_sessions << session_id
end
end
expired_sessions.each do |session_id|
db.delete(session_id)
expired_count += 1
end
db.close
Rails.logger.info "Cleaned up #{expired_count} expired sessions" if defined?(Rails) && expired_count > 0
rescue => e
Rails.logger.error "Session cleanup failed: #{e.message}" if defined?(Rails)
end
end
end
# Integration with Rails session store
class CustomSessionStore < ActionDispatch::Session::AbstractSecureStore
def initialize(app, options = {})
super
@store = GDBMSessionStore.new(options[:session_file])
end
private
def write_session(req, sid, session, options)
expires_at = Time.now + options[:expire_after]
@store.write_session(sid, session, expires_at)
end
def read_session(req, sid)
session_data = @store.read_session(sid)
[sid, session_data || {}]
end
def delete_session(req, sid, options)
@store.delete_session(sid)
generate_sid
end
end
Common Pitfalls
GDBM's string-only storage frequently catches developers expecting automatic type conversion. Storing a non-string value raises TypeError, and every value retrieves as a String, requiring explicit conversion for numeric types and serialization for complex objects. This limitation affects mathematical operations and object comparisons.
db = GDBM.open('numbers.gdbm', 0644, GDBM::WRCREAT)
# Pitfall: GDBM only accepts string values - storing numbers directly raises
# db['count'] = 42 # TypeError: no implicit conversion of Integer into String
db['count'] = 42.to_s
db['price'] = 19.95.to_s
# Retrieved values are strings, so arithmetic fails or misbehaves
# count_plus_one = db['count'] + 1 # TypeError: no implicit conversion of Integer into String
# total = db['price'] * 2 # String repetition ("19.9519.95"), not arithmetic
# Correct approach: explicit conversion
count = db['count'].to_i
price = db['price'].to_f
count_plus_one = count + 1
total = price * 2
puts "Count: #{count}, Price: #{price}, Total: #{total}"
# Pitfall: Boolean comparisons
db['enabled'] = true.to_s # Stores "true"
db['disabled'] = false.to_s # Stores "false"
# Wrong: truthiness check instead of boolean logic
# if db['disabled'] # Always truthy because "false" is still a non-empty string
# Correct: Explicit boolean conversion
enabled = db['enabled'] == 'true'
disabled = db['disabled'] == 'true'
db.close
Key encoding issues arise when using non-ASCII characters or binary data as keys. GDBM expects string keys but may not handle all character encodings consistently across different Ruby versions or platforms.
db = GDBM.open('encoding.gdbm', 0644, GDBM::WRCREAT)
# Pitfall: Encoding mismatches
utf8_key = "café_menu"
binary_key = "\xFF\xFE\x00\x00"
begin
db[utf8_key] = "coffee and pastries"
db[binary_key] = "binary_data"
# Retrieval might fail with encoding errors
retrieved = db[utf8_key]
puts "Retrieved: #{retrieved}"
rescue Encoding::CompatibilityError => e
puts "Encoding error: #{e.message}"
end
# Safe approach: normalize key encoding
def safe_key(key)
key.to_s.encode('UTF-8', invalid: :replace, undef: :replace)
end
safe_utf8_key = safe_key(utf8_key)
db[safe_utf8_key] = "coffee and pastries"
# For binary keys, use encoding like Base64
require 'base64'
encoded_binary_key = Base64.strict_encode64(binary_key)
db[encoded_binary_key] = "binary_data"
db.close
Resource management pitfalls occur when databases aren't properly closed, leading to file descriptor leaks in long-running applications. The garbage collector makes no guarantees about when unreferenced handles are finalized, so explicit cleanup is essential.
# Pitfall: Resource leaks in loops
def problematic_database_access
1000.times do |i|
db = GDBM.open("temp_#{i}.gdbm", 0644, GDBM::WRCREAT)
db["key"] = "value"
# Missing db.close - file descriptor leak!
end
end
# Correct approach: Always ensure cleanup
def safe_database_access
1000.times do |i|
db = nil
begin
db = GDBM.open("temp_#{i}.gdbm", 0644, GDBM::WRCREAT)
db["key"] = "value"
ensure
db&.close
end
end
end
# Best practice: Use block form for automatic cleanup
def best_practice_access
1000.times do |i|
GDBM.open("temp_#{i}.gdbm", 0644, GDBM::WRCREAT) do |db|
db["key"] = "value"
# Automatic cleanup when block exits
end
end
end
Concurrent access pitfalls involve assuming database operations are atomic beyond the individual key level. While GDBM prevents file corruption, it doesn't provide transaction semantics for multi-key operations.
# Pitfall: Non-atomic multi-key operations
def transfer_credits(from_user, to_user, amount)
db = GDBM.open('accounts.gdbm', 0644, GDBM::WRCREAT)
from_balance = db[from_user].to_i
to_balance = db[to_user].to_i
# Race condition: Another process might modify balances here
if from_balance >= amount
db[from_user] = (from_balance - amount).to_s
# If system crashes here, money disappears!
db[to_user] = (to_balance + amount).to_s
end
db.close
end
# Better approach: Application-level locking
class LockedGDBMOperations
def initialize(filename)
@filename = filename
@lock = Mutex.new
end
def atomic_transfer(from_user, to_user, amount)
@lock.synchronize do
db = GDBM.open(@filename, 0644, GDBM::WRCREAT)
from_balance = db[from_user].to_i
to_balance = db[to_user].to_i
if from_balance >= amount
# Write both updates before closing
db[from_user] = (from_balance - amount).to_s
db[to_user] = (to_balance + amount).to_s
success = true
else
success = false
end
db.close
success
end
end
end
Reference
Core Classes and Methods
Class/Method | Parameters | Returns | Description |
---|---|---|---|
GDBM.open(filename, mode, flags) | filename (String), mode (Integer), flags (Integer) | GDBM | Opens database file with specified access mode |
GDBM.new(filename, mode, flags) | filename (String), mode (Integer), flags (Integer) | GDBM | Alias for GDBM.open |
#[](key) | key (String) | String or nil | Retrieves value for key, returns nil if missing |
#[]=(key, value) | key (String), value (String) | String | Stores key-value pair, returns value |
#fetch(key, default=nil) | key (String), default (Object) | String or default | Retrieves value; raises IndexError if key missing and no default given |
#key?(key) | key (String) | Boolean | Tests key existence |
#delete(key) | key (String) | String or nil | Removes key, returns deleted value |
#clear | None | self | Removes all key-value pairs |
#close | None | nil | Closes database file |
#closed? | None | Boolean | Tests if database is closed |
#empty? | None | Boolean | Tests if database contains no keys |
#length | None | Integer | Returns number of key-value pairs |
#each {block} | Block | self | Iterates over key-value pairs |
#each_key {block} | Block | self | Iterates over keys |
#each_value {block} | Block | self | Iterates over values |
#keys | None | Array | Returns array of all keys |
#values | None | Array | Returns array of all values |
#to_hash | None | Hash | Converts database to Hash |
Access Mode Constants
Constant | Value | Description |
---|---|---|
GDBM::READER | 0 | Read-only access to existing database |
GDBM::WRITER | 1 | Read-write access to existing database |
GDBM::WRCREAT | 2 | Read-write access, creates database if missing |
GDBM::NEWDB | 3 | Create new database, truncates existing file |
File Permission Examples
Mode | Octal | Description |
---|---|---|
Owner read-write | 0600 | User can read/write, no group/other access |
Standard file | 0644 | User read/write, group/other read-only |
Shared access | 0664 | User/group read/write, other read-only |
World writable | 0666 | All users can read/write (security risk) |
Exception Hierarchy
Exception | Inheritance | Common Causes |
---|---|---|
GDBM::FatalError | StandardError | Database corruption, system errors |
IndexError | StandardError | fetch on a missing key without a default |
Errno::EACCES | SystemCallError | Permission denied accessing file |
Errno::ENOENT | SystemCallError | Database file or directory not found |
Errno::EAGAIN | SystemCallError | Resource temporarily unavailable |
Serialization Options
Method | Pros | Cons | Best For |
---|---|---|---|
Marshal | Full Ruby object support, fast | Ruby-specific, version sensitive | Ruby-only applications |
JSON | Human-readable, language-neutral | Limited type support | API integration, debugging |
YAML | Human-readable, Ruby types | Slower, security concerns | Configuration files |
String conversion | Simple, fast | Manual type handling | Primitive types only |
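The trade-offs above show up directly in round-trips: Marshal preserves Ruby symbols and types exactly, while JSON converts symbol keys to strings, which matters when records come back out of a GDBM value:

```ruby
require 'json'

record = { user_id: 42, active: true }

# Marshal round-trip restores symbol keys and exact types
marshal_copy = Marshal.load(Marshal.dump(record))

# JSON round-trip returns string keys - symbols are not part of JSON
json_copy = JSON.parse(JSON.generate(record))
```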
Common Key Patterns
Pattern | Example | Use Case |
---|---|---|
Namespace prefix | user:123, session:abc | Logical grouping |
Hierarchical | config/database/host | Tree-like organization |
Timestamp suffix | log_2024_08_30 | Time-based partitioning |
Hash-based | cache_a1b2c3d4 | Content-based keys |
Sequential | record_00001 | Ordered data |
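A pair of small helpers (the method names here are illustrative) can keep the namespace-prefix pattern consistent across a codebase:

```ruby
# Build a namespaced key like "user:123"
def make_key(namespace, id)
  "#{namespace}:#{id}"
end

# Split a namespaced key back into its parts
def split_key(key)
  namespace, id = key.split(':', 2)
  { namespace: namespace, id: id }
end

key = make_key('user', 123)
parts = split_key(key)
```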