CrackedRuby

Overview

Queues and deques represent fundamental abstract data types that control how elements enter and exit a collection. A queue follows First-In-First-Out (FIFO) ordering, where elements added first are removed first, similar to a line of people waiting for service. A deque (double-ended queue, pronounced "deck") extends this concept by permitting insertions and deletions at both ends.

These data structures appear throughout software systems. Operating systems use queues for process scheduling and I/O buffering. Web servers maintain request queues to handle incoming connections. Message brokers rely on queues to decouple producers from consumers. Deques serve algorithms requiring bidirectional access, such as sliding window problems and palindrome verification.

The abstract nature of queues and deques separates their logical behavior from physical implementation. Different underlying structures—arrays, linked lists, circular buffers—provide the same external interface while offering distinct performance characteristics. This separation allows developers to select implementations based on specific requirements without changing client code.

# Basic queue behavior
queue = []
queue.push(1)        # Enqueue at rear
queue.push(2)
queue.push(3)
queue.shift          # Dequeue from front
# => 1 (FIFO ordering)

# Basic deque behavior
deque = []
deque.push(1)      # Add to rear
deque.unshift(2)   # Add to front
deque.pop          # Remove from rear
# => 1
deque.shift        # Remove from front
# => 2

Key Principles

Queues maintain a strict access discipline where elements enter at one end (the rear or tail) and exit from the other end (the front or head). This constraint enforces FIFO ordering: the first element enqueued becomes the first element dequeued. The primary operations are enqueue (adding an element to the rear) and dequeue (removing an element from the front). Additional operations include peek (examining the front element without removal) and checking if the queue is empty.

The abstract queue interface hides implementation details from clients. Whether the queue uses an array, linked list, or circular buffer, client code interacts through the same operations. This abstraction boundary permits optimization and substitution without affecting correctness.

# Queue interface abstraction
# (named SimpleQueue because ::Queue is already an alias for Thread::Queue)
class SimpleQueue
  def initialize
    @items = []
  end

  def enqueue(item)
    @items.push(item)
  end

  def dequeue
    @items.shift
  end

  def peek
    @items.first
  end

  def empty?
    @items.empty?
  end

  def size
    @items.length
  end
end

q = SimpleQueue.new
q.enqueue("first")
q.enqueue("second")
q.peek
# => "first"
q.dequeue
# => "first"

Deques generalize queues by supporting operations at both ends. The fundamental operations are push_front, push_back, pop_front, and pop_back. This flexibility enables a deque to function as both a queue (using push_back and pop_front) and a stack (using push_back and pop_back). Bidirectional access proves valuable for algorithms requiring examination or modification at either end.
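
This dual role can be illustrated with a plain Array standing in for a deque; only the choice of operation pair changes:

```ruby
# A plain Array standing in for a deque: the same double-ended operations
# give queue behavior or stack behavior depending on which pair is used.
dq = []

# Queue discipline: add at the rear, remove from the front
dq.push(1)               # push_back
dq.push(2)               # push_back
queue_first = dq.shift   # pop_front
# => 1 (FIFO)

# Stack discipline: add and remove at the rear
dq.push(3)               # push_back
stack_last = dq.pop      # pop_back
# => 3 (LIFO)
```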

Bounded queues limit capacity to prevent unbounded growth. When a bounded queue reaches capacity, enqueue operations either block until space becomes available (blocking queue) or reject the operation (bounded queue with rejection). This capacity constraint protects systems from memory exhaustion and applies backpressure to producers.
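
Ruby's Thread::SizedQueue implements the blocking variant (shown later). A minimal sketch of the rejecting variant might look like this — the class and method names are hypothetical, loosely following Java's offer/poll convention:

```ruby
# Minimal bounded queue that rejects instead of blocking (illustrative
# sketch; not a standard library class).
class RejectingBoundedQueue
  def initialize(capacity)
    @capacity = capacity
    @items = []
  end

  # Returns true on success, false when the queue is full.
  def offer(item)
    return false if @items.size >= @capacity
    @items.push(item)
    true
  end

  # Returns the front element, or nil when empty.
  def poll
    @items.shift
  end
end

q = RejectingBoundedQueue.new(2)
q.offer(:a)   # => true
q.offer(:b)   # => true
q.offer(:c)   # => false (rejected; the caller must handle backpressure)
```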

Priority queues modify FIFO ordering by associating priorities with elements. Elements with higher priority exit before those with lower priority, regardless of insertion order. When priorities match, implementations typically fall back to FIFO ordering. Priority queues are usually built on a heap rather than simple sequential storage.
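
Ruby's standard library ships no priority queue; a minimal binary min-heap sketch illustrates the idea. The class name is hypothetical, and note that this version does not add FIFO tie-breaking for equal priorities:

```ruby
# Minimal array-backed binary min-heap priority queue (illustrative).
# Lower priority values dequeue first; push and pop are O(log n).
class MinPriorityQueue
  def initialize
    @heap = []  # children of index i live at 2i+1 and 2i+2
  end

  def push(priority, item)
    @heap << [priority, item]
    sift_up(@heap.size - 1)
  end

  def pop
    return nil if @heap.empty?
    top = @heap[0]
    last = @heap.pop
    unless @heap.empty?
      @heap[0] = last
      sift_down(0)
    end
    top[1]
  end

  private

  def sift_up(i)
    while i > 0
      parent = (i - 1) / 2
      break if @heap[parent][0] <= @heap[i][0]
      @heap[parent], @heap[i] = @heap[i], @heap[parent]
      i = parent
    end
  end

  def sift_down(i)
    loop do
      smallest = i
      [2 * i + 1, 2 * i + 2].each do |child|
        smallest = child if child < @heap.size && @heap[child][0] < @heap[smallest][0]
      end
      break if smallest == i
      @heap[i], @heap[smallest] = @heap[smallest], @heap[i]
      i = smallest
    end
  end
end

pq = MinPriorityQueue.new
pq.push(3, "low")
pq.push(1, "urgent")
pq.push(2, "normal")
pq.pop   # => "urgent"
pq.pop   # => "normal"
```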

Thread-safe queues coordinate access across multiple threads through synchronization primitives. Without synchronization, concurrent enqueue and dequeue operations corrupt internal state. Thread-safe implementations use locks, atomic operations, or lock-free algorithms to maintain consistency under concurrent access.
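
A lock-based sketch using Mutex and ConditionVariable shows the coordination involved. Ruby's Thread::Queue already provides this behavior; the class here is illustrative only:

```ruby
# Lock-based blocking queue sketch (illustrative; prefer Thread::Queue).
class LockedQueue
  def initialize
    @items = []
    @mutex = Mutex.new
    @cond = ConditionVariable.new
  end

  def push(item)
    @mutex.synchronize do
      @items.push(item)
      @cond.signal   # wake one waiting consumer
    end
  end

  def pop
    @mutex.synchronize do
      @cond.wait(@mutex) while @items.empty?  # releases the lock while waiting
      @items.shift
    end
  end
end

q = LockedQueue.new
consumer = Thread.new { q.pop }   # blocks until an item arrives
q.push(42)
consumer.value   # => 42
```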

Ruby Implementation

Ruby's Array class provides methods supporting queue and deque operations, though Array internals optimize for stack-style access at the rear. The push method appends elements to the array's end, and shift removes elements from the beginning. In a textbook array implementation, shift must move every remaining element forward in O(n) time; CRuby mitigates this by advancing an internal start pointer rather than copying, making shift amortized O(1), while unshift (insertion at the front) still pays the O(n) cost.

# Array-based queue
queue = []
queue.push(1)
queue.push(2)
queue.push(3)

queue.shift    # O(n) in a textbook array queue; amortized O(1) in CRuby
# => 1
queue.shift
# => 2

For production queue requirements, Ruby's Thread::Queue class provides thread-safe blocking queue behavior. This class handles synchronization internally, supporting multiple producer and consumer threads safely. The push method adds elements; pop removes them, blocking by default when the queue is empty (pop(true) performs a non-blocking pop that raises ThreadError instead).

queue = Thread::Queue.new

# Producer thread
producer = Thread.new do
  5.times do |i|
    queue.push(i)
    sleep 0.1
  end
end

# Consumer thread
consumer = Thread.new do
  5.times do
    item = queue.pop    # Blocks until item available
    puts "Processed: #{item}"
  end
end

producer.join
consumer.join

Thread::SizedQueue extends Thread::Queue with capacity limits, implementing bounded queue semantics. When the queue reaches maximum capacity, push blocks until space becomes available. This mechanism applies backpressure automatically.

queue = Thread::SizedQueue.new(2)    # Maximum 2 items

Thread.new do
  5.times do |i|
    puts "Enqueueing #{i}"
    queue.push(i)    # Blocks when queue full
    puts "Enqueued #{i}"
  end
end

sleep 0.5    # Let some items accumulate

5.times do
  sleep 0.2
  item = queue.pop
  puts "Dequeued #{item}"
end

Implementing a deque requires supporting operations at both ends. Ruby Arrays provide efficient rear operations (push and pop), but unshift at the front costs O(n). The doubly-linked Dequeue class below offers O(1) operations at both ends.

class Node
  attr_accessor :value, :prev, :next
  
  def initialize(value)
    @value = value
    @prev = nil
    @next = nil
  end
end

class Dequeue
  def initialize
    @head = nil
    @tail = nil
    @size = 0
  end
  
  def push_front(value)
    node = Node.new(value)
    
    if @head.nil?
      @head = @tail = node
    else
      node.next = @head
      @head.prev = node
      @head = node
    end
    
    @size += 1
  end
  
  def push_back(value)
    node = Node.new(value)
    
    if @tail.nil?
      @head = @tail = node
    else
      node.prev = @tail
      @tail.next = node
      @tail = node
    end
    
    @size += 1
  end
  
  def pop_front
    return nil if @head.nil?
    
    value = @head.value
    @head = @head.next
    
    if @head.nil?
      @tail = nil
    else
      @head.prev = nil
    end
    
    @size -= 1
    value
  end
  
  def pop_back
    return nil if @tail.nil?
    
    value = @tail.value
    @tail = @tail.prev
    
    if @tail.nil?
      @head = nil
    else
      @tail.next = nil
    end
    
    @size -= 1
    value
  end
  
  def empty?
    @size == 0
  end
  
  def size
    @size
  end
end

dq = Dequeue.new
dq.push_back(1)
dq.push_front(0)
dq.push_back(2)
dq.pop_front
# => 0
dq.pop_back
# => 2

Ruby's standard library lacks a built-in deque class, though several gems provide implementations. The algorithms gem includes a Deque class with efficient operations at both ends. For simple use cases, developers often accept Array's performance characteristics or implement custom solutions.

Circular buffers provide fixed-size queues with efficient operations. This implementation preallocates an array and uses modular arithmetic to wrap indices.

class CircularQueue
  def initialize(capacity)
    @buffer = Array.new(capacity)
    @capacity = capacity
    @head = 0
    @tail = 0
    @size = 0
  end
  
  def enqueue(item)
    return false if full?
    
    @buffer[@tail] = item
    @tail = (@tail + 1) % @capacity
    @size += 1
    true
  end
  
  def dequeue
    return nil if empty?
    
    item = @buffer[@head]
    @head = (@head + 1) % @capacity
    @size -= 1
    item
  end
  
  def empty?
    @size == 0
  end
  
  def full?
    @size == @capacity
  end
  
  def size
    @size
  end
end

cq = CircularQueue.new(3)
cq.enqueue(1)
cq.enqueue(2)
cq.enqueue(3)
cq.full?
# => true
cq.dequeue
# => 1
cq.enqueue(4)
cq.dequeue
# => 2

Implementation Approaches

Array-based implementations store queue elements in contiguous memory. Two pointers track the front and rear positions. For queues where elements only dequeue from the front, a naive approach moves the front pointer forward but never reclaims space at the beginning. This approach wastes memory as the queue drifts toward the array's end.
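
A minimal sketch of that naive approach (the class name is hypothetical) makes the wasted space visible:

```ruby
# Naive array queue that advances a front index instead of shifting
# (illustrative sketch). Dequeued slots are never reclaimed, so live
# elements drift toward the end of the backing array.
class DriftingQueue
  def initialize
    @items = []
    @front = 0
  end

  def enqueue(item)
    @items.push(item)
  end

  def dequeue
    return nil if @front >= @items.size
    item = @items[@front]
    @front += 1      # O(1), but the slot behind @front is now dead
    item
  end

  # Dead slots still held by the backing array.
  def wasted_slots
    @front
  end
end

q = DriftingQueue.new
3.times { |i| q.enqueue(i) }
q.dequeue        # => 0
q.dequeue        # => 1
q.wasted_slots   # => 2
```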

Circular buffer implementations solve the drift problem by wrapping indices using modular arithmetic. When the rear pointer reaches the array's end, it wraps to position zero if space exists. This technique maintains constant-time operations while preventing memory waste. The implementation must distinguish between empty and full states, typically by tracking size separately or leaving one array slot unused.

Linked list implementations connect queue elements through pointers rather than contiguous storage. Each node contains a value and pointer(s) to adjacent nodes. For queues, singly-linked lists suffice with head and tail pointers. For deques, doubly-linked lists enable efficient operations at both ends. Linked lists eliminate capacity constraints and provide consistent O(1) operations, but consume additional memory for pointers and exhibit poor cache locality.

Dynamic array implementations combine contiguous storage with automatic resizing. When the array fills, the implementation allocates a larger array (typically doubling capacity) and copies existing elements. Amortized analysis shows enqueue operations remain O(1) despite occasional expensive resizing. Ruby's Array class follows this approach. For queues, dynamic arrays still suffer from O(n) dequeue operations when removing from the front.
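
The amortized claim can be checked by counting element copies under geometric doubling; this small sketch assumes a starting capacity of 1 and a doubling factor of 2, both illustrative:

```ruby
# Count element copies caused by resizes while appending n elements to a
# dynamically doubling array. Resizes occur at powers of two, and total
# copies stay below 2n, so each append is O(1) amortized.
def copies_for_appends(n)
  capacity = 1
  size = 0
  copies = 0
  n.times do
    if size == capacity
      copies += size   # copy existing elements into the larger array
      capacity *= 2
    end
    size += 1
  end
  copies
end

copies_for_appends(16)     # => 15 copies for 16 appends
copies_for_appends(1000)   # => 1023 copies, under 2 * 1000
```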

Two-stack implementations simulate queues using two stacks. Elements enqueue onto the input stack. Dequeue operations pop from the output stack; when the output stack empties, the implementation transfers all elements from the input stack to the output stack, reversing their order. This approach provides amortized O(1) performance for both operations.

class QueueWithStacks
  def initialize
    @input = []
    @output = []
  end
  
  def enqueue(item)
    @input.push(item)
  end
  
  def dequeue
    if @output.empty?
      @output.push(@input.pop) until @input.empty?
    end
    @output.pop
  end
  
  def empty?
    @input.empty? && @output.empty?
  end
end

q = QueueWithStacks.new
q.enqueue(1)
q.enqueue(2)
q.enqueue(3)
q.dequeue
# => 1
q.enqueue(4)
q.dequeue
# => 2

Lock-free implementations enable concurrent access without blocking synchronization. These implementations use atomic compare-and-swap operations to update shared state safely. Lock-free queues avoid deadlock risks and reduce contention but require careful design to prevent the ABA problem and ensure progress guarantees. CRuby serializes Ruby-level execution through the Global VM Lock (commonly called the GIL), so lock-free designs offer little benefit for pure-Ruby code; threads pay off mainly in I/O-bound scenarios.

Practical Examples

Breadth-first search (BFS) graph traversal uses queues to process vertices level by level. The algorithm enqueues a starting vertex, then repeatedly dequeues a vertex, processes it, and enqueues its unvisited neighbors. FIFO ordering ensures the algorithm explores vertices in distance order from the starting point.

require 'set'   # Set is autoloaded in Ruby 3.2+; the require supports older versions

def bfs(graph, start)
  visited = Set.new
  queue = [start]
  visited.add(start)
  result = []
  
  until queue.empty?
    vertex = queue.shift
    result << vertex
    
    graph[vertex].each do |neighbor|
      unless visited.include?(neighbor)
        visited.add(neighbor)
        queue.push(neighbor)
      end
    end
  end
  
  result
end

graph = {
  'A' => ['B', 'C'],
  'B' => ['A', 'D', 'E'],
  'C' => ['A', 'F'],
  'D' => ['B'],
  'E' => ['B', 'F'],
  'F' => ['C', 'E']
}

bfs(graph, 'A')
# => ["A", "B", "C", "D", "E", "F"]

Task scheduling systems use queues to manage work items. Multiple worker threads dequeue tasks, process them, and move to the next task. Thread::Queue handles synchronization automatically, blocking workers when no tasks exist and blocking producers when the queue reaches capacity (with SizedQueue).

tasks = Thread::SizedQueue.new(10)
results = Thread::Queue.new

# Create worker threads
workers = 3.times.map do
  Thread.new do
    loop do
      task = tasks.pop
      break if task == :stop
      
      # Process task
      result = task * 2
      results.push(result)
    end
  end
end

# Enqueue tasks
10.times { |i| tasks.push(i) }

# Signal workers to stop
3.times { tasks.push(:stop) }

# Wait for workers
workers.each(&:join)

# Collect results
output = []
output << results.pop until results.empty?
output.sort
# => [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

Sliding window problems use deques to maintain elements within a moving window efficiently. For finding the maximum in each window of size k, a deque stores indices of potentially useful elements, kept in decreasing order of their values. As the window slides, the algorithm drops indices that have left the window from the front and drops smaller values from the rear before appending the new index.

def max_sliding_window(nums, k)
  return [] if nums.empty? || k == 0
  
  dq = []  # Stores indices
  result = []
  
  nums.each_with_index do |num, i|
    # Remove indices outside window
    dq.shift if !dq.empty? && dq.first <= i - k
    
    # Remove smaller elements from rear
    dq.pop while !dq.empty? && nums[dq.last] < num
    
    dq.push(i)
    
    # Add maximum to result once window is full
    result << nums[dq.first] if i >= k - 1
  end
  
  result
end

max_sliding_window([1, 3, -1, -3, 5, 3, 6, 7], 3)
# => [3, 3, 5, 5, 6, 7]

Palindrome checking with deques compares characters from both ends simultaneously. The algorithm removes characters from both ends, verifying they match, until the deque becomes empty or contains one element.

# Relies on the doubly-linked Dequeue class defined earlier
def palindrome?(str)
  dq = Dequeue.new
  str.downcase.chars.each { |c| dq.push_back(c) if c =~ /[a-z0-9]/ }
  
  while dq.size > 1
    return false if dq.pop_front != dq.pop_back
  end
  
  true
end

palindrome?("A man, a plan, a canal: Panama")
# => true
palindrome?("race a car")
# => false

Level-order tree traversal visits binary tree nodes level by level, left to right. A queue stores nodes at the current level. The algorithm dequeues a node, processes it, and enqueues its children for the next level.

class TreeNode
  attr_accessor :value, :left, :right
  
  def initialize(value)
    @value = value
    @left = nil
    @right = nil
  end
end

def level_order(root)
  return [] if root.nil?
  
  result = []
  queue = [root]
  
  until queue.empty?
    level = []
    queue.size.times do
      node = queue.shift
      level << node.value
      
      queue.push(node.left) if node.left
      queue.push(node.right) if node.right
    end
    result << level
  end
  
  result
end

# Build tree: 
#       1
#      / \
#     2   3
#    / \
#   4   5

root = TreeNode.new(1)
root.left = TreeNode.new(2)
root.right = TreeNode.new(3)
root.left.left = TreeNode.new(4)
root.left.right = TreeNode.new(5)

level_order(root)
# => [[1], [2, 3], [4, 5]]

Performance Considerations

Queue operation complexity depends on the underlying implementation structure. Array-based queues with naive front removal provide O(1) enqueue but O(n) dequeue due to element shifting, although CRuby's Array#shift avoids the copy in practice by advancing an internal pointer. Circular buffer implementations achieve O(1) for both operations by reusing array space through index wrapping. Linked list queues maintain O(1) operations at both ends but consume additional memory for node pointers and exhibit poor cache performance.

Deque operations at both ends require careful implementation choice. Arrays excel at one end but suffer at the other. Doubly-linked lists provide O(1) at both ends. Circular buffers support efficient bidirectional access when bounded capacity suffices.

Memory allocation patterns affect performance significantly. Linked lists allocate nodes individually, fragmenting memory and stressing the allocator. Dynamic arrays amortize allocation cost by growing geometrically but waste memory when capacity exceeds size. Circular buffers preallocate fixed storage, eliminating allocation overhead during operation.

Cache locality influences real-world performance beyond asymptotic complexity. Arrays store elements contiguously, enabling efficient prefetching and cache line utilization. Linked lists scatter nodes across memory, causing cache misses. For small to medium queues, array-based implementations often outperform linked lists despite worse theoretical dequeue complexity.

Thread-safe queues introduce synchronization overhead. Lock-based implementations serialize access, potentially creating bottlenecks under high contention. Lock-free designs reduce contention but require expensive atomic operations. For single-threaded scenarios, unsynchronized implementations avoid this overhead entirely.

require 'benchmark'

# Compare Array#shift against the linked-list Dequeue defined earlier
n = 10_000

Benchmark.bm(20) do |x|
  x.report("Array shift:") do
    queue = (1..n).to_a
    n.times { queue.shift }
  end

  x.report("Linked list dequeue:") do
    dq = Dequeue.new
    n.times { |i| dq.push_back(i) }
    n.times { dq.pop_front }
  end
end

# Timings vary by Ruby version and platform. Because CRuby optimizes
# Array#shift internally, the Array version is often competitive here
# despite shift's textbook O(n) reputation.

Bounded queues with blocking behavior affect throughput and latency. When producers exceed consumer processing rate, the queue fills and blocks producers. This backpressure prevents memory exhaustion but may impact system responsiveness. Queue capacity sizing requires balancing memory usage against buffering needs.

Priority queues using binary heaps provide O(log n) enqueue and dequeue operations, slower than simple queues but maintaining priority ordering. Min-max heaps enable O(log n) extraction of both minimum and maximum elements. For fixed priority levels, bucketing elements by priority offers O(1) operations within each priority.

Common Patterns

The producer-consumer pattern decouples components generating work from components processing work. Producers enqueue tasks without knowing consumer details. Consumers dequeue and process tasks independently. Thread-safe queues coordinate access, blocking consumers when empty and producers when full (with bounded queues).

queue = Thread::SizedQueue.new(5)

# Multiple producers
producers = 3.times.map do |id|
  Thread.new do
    5.times do |i|
      task = "Producer-#{id}-Task-#{i}"
      queue.push(task)
      puts "Enqueued: #{task}"
      sleep rand(0.1..0.3)
    end
  end
end

# Multiple consumers
consumers = 2.times.map do |id|
  Thread.new do
    loop do
      task = queue.pop
      break if task == :done
      puts "Consumer-#{id} processing: #{task}"
      sleep rand(0.1..0.2)
    end
  end
end

# Wait for producers
producers.each(&:join)

# Signal consumers to stop
2.times { queue.push(:done) }
consumers.each(&:join)

Request buffering in web servers uses queues to handle connection spikes. When requests arrive faster than the server can process them, the queue buffers incoming connections. This pattern smooths load and prevents connection rejection under burst traffic. Queue capacity determines how many requests buffer before the system rejects new connections.

Event loops use queues to process events sequentially. External sources enqueue events (user input, network packets, timer expirations). The event loop dequeues and dispatches events to handlers. This pattern provides single-threaded concurrency, avoiding synchronization complexity.
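
A single-threaded sketch of the dispatch loop — the event types and handlers here are made up:

```ruby
# Single-threaded event loop sketch: handlers run strictly in the order
# events were enqueued (event names and handlers are illustrative).
events = Thread::Queue.new
log = []

handlers = {
  click: ->(data) { log << "click at #{data}" },
  tick:  ->(data) { log << "tick #{data}" }
}

events.push([:click, "(10, 20)"])
events.push([:tick, 1])
events.push([:tick, 2])
events.close   # after close, pop returns nil once the queue drains

while (event = events.pop)
  type, data = event
  handlers[type].call(data)
end

log
# => ["click at (10, 20)", "tick 1", "tick 2"]
```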

Cache invalidation with queues batches invalidation operations. Instead of invalidating cache entries immediately, the system enqueues invalidation requests. A background worker dequeues and processes invalidations in batches, reducing locking overhead and improving throughput.
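
A minimal single-threaded sketch of the batching idea; the cache contents and key names are made up:

```ruby
# Batched cache invalidation sketch: keys accumulate in a queue, then a
# worker drains them as one batch instead of touching the cache per key.
cache = { "user:1" => "alice", "user:2" => "bob", "post:9" => "draft" }
invalidations = Thread::Queue.new

invalidations.push("user:1")
invalidations.push("post:9")

# Drain everything currently queued as one batch.
batch = []
batch << invalidations.pop until invalidations.empty?
batch.each { |key| cache.delete(key) }

cache.keys
# => ["user:2"]
```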

Breadth-first traversal applies to any graph-like structure. The pattern processes elements by depth level, visiting all items at distance d before processing items at distance d+1. This ordering proves useful for shortest path algorithms, dependency resolution, and hierarchical data processing.

The sliding window pattern uses deques to maintain elements within a moving window efficiently. Applications include finding extrema in subarrays, computing moving averages, and detecting patterns within fixed-length sequences. The deque stores only potentially useful elements, discarding those that cannot affect future results.

Undo/redo functionality sometimes uses a deque to navigate history bidirectionally. Actions push onto one end. Undo operations pop from that end and push onto the opposite end (the redo side). Redo operations reverse this process. This pattern supports non-linear history navigation.
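
A two-stack sketch of this pattern, equivalent to popping from opposite ends — the class and method names are hypothetical (back undoes, forward redoes):

```ruby
# Undo/redo history sketch using two stacks (illustrative class).
class History
  def initialize
    @undo_stack = []
    @redo_stack = []
  end

  # A new action invalidates any pending redo history.
  def record(action)
    @undo_stack.push(action)
    @redo_stack.clear
  end

  # Undo: move the most recent action onto the redo stack.
  def back
    action = @undo_stack.pop
    @redo_stack.push(action) if action
    action
  end

  # Redo: move the most recently undone action back.
  def forward
    action = @redo_stack.pop
    @undo_stack.push(action) if action
    action
  end
end

h = History.new
h.record(:type_a)
h.record(:type_b)
h.back      # => :type_b (undone)
h.forward   # => :type_b (redone)
```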

Round-robin scheduling uses circular queues to distribute tasks fairly. The scheduler dequeues a task, processes it for a time slice, and re-enqueues it if unfinished. This pattern ensures fair resource allocation and prevents starvation.
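
The slice-and-requeue loop can be sketched with an Array acting as the queue; the task names and slice counts are made up:

```ruby
# Round-robin scheduler sketch: each task is a [name, remaining_units]
# pair; unfinished tasks re-enqueue after a one-unit time slice.
tasks = [["a", 2], ["b", 1], ["c", 3]]
order = []

until tasks.empty?
  name, remaining = tasks.shift                    # dequeue next task
  order << name                                    # run one time slice
  remaining -= 1
  tasks.push([name, remaining]) if remaining > 0   # re-enqueue if unfinished
end

order
# => ["a", "b", "c", "a", "c", "c"]
```

Every task gets a slice before any task gets a second one, which is the fairness property the pattern exists to provide.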

Reference

Core Queue Operations

Operation   Description                 Typical Complexity
enqueue     Add element to rear         O(1) or O(1) amortized
dequeue     Remove element from front   O(1) or O(n)
peek        View front element          O(1)
empty?      Check if queue is empty     O(1)
size        Get number of elements      O(1)

Core Deque Operations

Operation    Description                 Typical Complexity
push_front   Add element to front        O(1)
push_back    Add element to rear         O(1)
pop_front    Remove element from front   O(1)
pop_back     Remove element from rear    O(1)
peek_front   View front element          O(1)
peek_back    View rear element           O(1)

Implementation Comparison

Implementation    Enqueue          Dequeue          Space Overhead   Cache Locality
Array (naive)     O(1)             O(n)             Low              Excellent
Circular Buffer   O(1)             O(1)             Low              Excellent
Linked List       O(1)             O(1)             High             Poor
Dynamic Array     O(1) amortized   O(n)             Medium           Excellent
Two Stacks        O(1)             O(1) amortized   Low              Excellent

Ruby Queue Classes

Class                Thread-Safe   Bounded   Blocking          Use Case
Array                No            No        No                Single-threaded, simple queues
Thread::Queue        Yes           No        Yes on pop        Multi-threaded producer-consumer
Thread::SizedQueue   Yes           Yes       Yes on push/pop   Backpressure control
Custom deque         Depends       No        No                Double-ended access

Common Applications

Application              Data Structure      Key Operations
BFS traversal            Queue               enqueue, dequeue
Task scheduling          Thread-safe queue   push, pop with blocking
Sliding window maximum   Deque               push_back, pop_front, pop_back
Request buffering        Bounded queue      enqueue, dequeue
Level-order traversal    Queue               enqueue children, dequeue parent
Palindrome checking      Deque               pop_front, pop_back
Cache invalidation       Queue               batch enqueue, batch dequeue
Round-robin scheduling   Circular queue      dequeue, re-enqueue

Performance Characteristics

Scenario                             Recommended Structure                 Rationale
Single-threaded, frequent dequeues   Circular buffer or linked list        Avoid O(n) array shift
Multi-threaded                       Thread::Queue or Thread::SizedQueue   Built-in synchronization
Fixed capacity                       Circular buffer                       Efficient, no allocation
Unbounded growth                     Linked list or dynamic array          No capacity limit
Bidirectional access                 Doubly-linked list                    O(1) at both ends
Cache-sensitive                      Circular buffer                       Contiguous memory
Small queues                         Array despite O(n) dequeue            Cache benefits outweigh complexity