Overview
A priority queue is an abstract data type that operates similarly to a regular queue but with a critical difference: each element has an associated priority, and elements are dequeued based on their priority rather than their insertion order. The element with the highest priority is always removed first, regardless of when it was added to the queue.
Priority queues appear frequently in algorithms and systems that need to process items based on importance or urgency rather than arrival time. Operating systems use priority queues for task scheduling, where critical system processes receive higher priority than background tasks. Network routers employ priority queues to handle packet routing, ensuring time-sensitive traffic like video streaming receives preferential treatment over bulk data transfers.
The data structure supports three fundamental operations: inserting an element with a priority, retrieving the highest-priority element without removing it, and removing the highest-priority element. These operations must execute efficiently even as the queue grows to contain thousands or millions of elements.
# Conceptual priority queue operations
pq.insert(element, priority) # Add element with priority
pq.peek # View highest-priority element
pq.extract # Remove and return highest-priority element
Priority queues differ from sorted lists in their operational focus. While a sorted list maintains total ordering of all elements, a priority queue only guarantees access to the highest-priority element. This focused requirement allows priority queues to offer better performance characteristics for specific use cases.
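To make the tradeoff concrete, here is a small sketch of what total ordering costs: a sorted array must shift elements on every insert, while a heap performs only a logarithmic number of swaps.

```ruby
# Keeping a fully sorted array: every insert is O(n), because all elements
# after the insertion point must shift right to preserve total order
sorted = [1, 3, 7, 9]
value = 5
index = sorted.bsearch_index { |x| x >= value } || sorted.size
sorted.insert(index, value) # O(log n) search, but O(n) shift
sorted # => [1, 3, 5, 7, 9]
```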
The concept emerged from computer science research in the 1960s as researchers developed efficient algorithms for graph traversal and scheduling problems. The data structure gained prominence through its use in Dijkstra's shortest path algorithm and other graph algorithms that require repeatedly selecting the minimum or maximum element from a changing set.
Key Principles
Priority queues define priority in application-specific terms. In a min-priority queue, lower values indicate higher priority, making the minimum element the "highest priority" item. In a max-priority queue, larger values indicate higher priority. This distinction affects how the data structure orders elements internally but does not change the abstract interface.
The data structure maintains a partial ordering of elements rather than a complete sort. Only the highest-priority element must be immediately accessible; other elements can remain in any order that permits efficient operations. This relaxation of ordering requirements enables superior performance compared to maintaining a fully sorted collection.
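A quick check illustrates this relaxation: the array below satisfies the min-heap property at every parent/child pair, yet it is not sorted.

```ruby
heap = [1, 4, 2, 9, 5]
# Every parent (at index (i - 1) / 2) is <= its child
heap_property = (1...heap.size).all? { |i| heap[(i - 1) / 2] <= heap[i] }
heap_property      # => true
heap == heap.sort  # => false, only partially ordered
```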
Priority queues are typically implemented with binary heaps, which provide logarithmic time complexity for insertion and extraction. The heap property requires that each parent node has priority at least as high as its children, ensuring the root always contains the highest-priority element. This property holds throughout the tree structure, creating a partial ordering sufficient for priority queue operations.
# Min-heap property example
#
#        2        <- Root has minimum value
#       / \
#      5   8      <- Children have values >= parent
#     / \
#    9   7        <- Grandchildren have values >= their parents
The heap maintains its structural property through upheap and downheap operations. When inserting an element, the operation places it at the next available position in the tree and performs upheap (bubble-up), swapping the element with its parent until the heap property is restored. When extracting the root, the operation moves the last element to the root position and performs downheap (bubble-down), swapping it with its higher-priority child until the heap property is satisfied.
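The bubble-up step can be traced directly on the backing array. This sketch inserts the value 3 into the min-heap pictured above and swaps it toward the root until its parent is smaller.

```ruby
heap = [2, 5, 8, 9, 7] # valid min-heap
heap << 3              # place the new element in the next free slot
i = heap.size - 1
while i > 0 && heap[i] < heap[(i - 1) / 2]
  parent = (i - 1) / 2
  heap[i], heap[parent] = heap[parent], heap[i] # swap with parent
  i = parent
end
heap # => [2, 5, 3, 9, 7, 8]
```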
Priority queues handle duplicate priorities consistently but without guarantees about relative ordering. If two elements have identical priorities, the data structure may return either element when extracting the highest priority. Applications requiring stable ordering must incorporate secondary sorting criteria, such as insertion time or sequence numbers, into the priority calculation.
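One common tiebreaker encoding relies on Ruby's lexicographic array comparison: [priority, sequence] tuples compare by priority first and fall back to insertion order on ties.

```ruby
entries = [[5, 0, "first"], [5, 1, "second"], [3, 2, "third"]]
entries.min                              # => [3, 2, "third"] (lowest priority value)
[[5, 0, "first"], [5, 1, "second"]].min  # => [5, 0, "first"] (earlier sequence wins the tie)
```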
The data structure allows dynamic priority updates in some implementations, though this capability adds complexity. Supporting priority changes requires maintaining references to elements within the heap, typically through a separate hash map. Without this infrastructure, changing an element's priority requires removing it and reinserting it with the new priority.
Priority queues operate independently of element type. The data structure requires only that priorities can be compared; the elements themselves need not be comparable. This separation allows complex objects to serve as queue elements while simple numeric values determine priority.
# Priority and element are separate concepts
pq.insert({task: "backup", data: {...}}, priority: 10)
pq.insert({task: "report", data: {...}}, priority: 5)
# Element with priority 5 extracts first (min-priority queue)
Ruby Implementation
Ruby's standard library does not include a built-in priority queue implementation, requiring developers to implement the data structure or use third-party gems. Several gems provide priority queue functionality with varying features and performance characteristics.
The PQueue gem provides a general-purpose priority queue. It orders elements with a caller-supplied comparator block (defaulting to <=>), and pop returns the highest-ranked element. Because PQueue compares the elements themselves rather than accepting a separate priority argument, storing [priority, value] pairs keeps priorities explicit.
require 'pqueue'
# PQueue#push takes a single value; store [priority, label] pairs so the
# comparator block can order by priority
# Min-priority queue (lower values pop first)
min_pq = PQueue.new { |a, b| b[0] <=> a[0] }
min_pq.push([10, "low priority task"])
min_pq.push([1, "high priority task"])
min_pq.push([5, "medium priority task"])
min_pq.top  # => [1, "high priority task"]
min_pq.pop  # => [1, "high priority task"]
min_pq.size # => 2
# Max-priority queue (higher values pop first; matches PQueue's default ordering)
max_pq = PQueue.new { |a, b| a[0] <=> b[0] }
max_pq.push([10, "task A"])
max_pq.push([50, "task B"])
max_pq.push([25, "task C"])
max_pq.pop # => [50, "task B"] (highest priority value)
Implementing a basic min-heap priority queue from scratch demonstrates the underlying mechanics. This implementation uses an array to store heap elements, with parent-child relationships determined by array indices.
class MinPriorityQueue
def initialize
@heap = []
end
def insert(element, priority)
@heap << [priority, element]
upheap(@heap.size - 1)
end
def extract_min
return nil if @heap.empty?
min = @heap[0]
last = @heap.pop # pop first so a single-element heap actually empties
unless @heap.empty?
@heap[0] = last
downheap(0)
end
min[1] # Return element, not priority
end
def peek
@heap.empty? ? nil : @heap[0][1]
end
def size
@heap.size
end
def empty?
@heap.empty?
end
private
def upheap(index)
return if index == 0
parent_index = (index - 1) / 2
if @heap[index][0] < @heap[parent_index][0]
@heap[index], @heap[parent_index] = @heap[parent_index], @heap[index]
upheap(parent_index)
end
end
def downheap(index)
left_child = 2 * index + 1
right_child = 2 * index + 2
smallest = index
if left_child < @heap.size && @heap[left_child][0] < @heap[smallest][0]
smallest = left_child
end
if right_child < @heap.size && @heap[right_child][0] < @heap[smallest][0]
smallest = right_child
end
if smallest != index
@heap[index], @heap[smallest] = @heap[smallest], @heap[index]
downheap(smallest)
end
end
end
# Usage
pq = MinPriorityQueue.new
pq.insert("send email", 3)
pq.insert("critical alert", 1)
pq.insert("log cleanup", 10)
pq.extract_min # => "critical alert"
pq.extract_min # => "send email"
pq.peek # => "log cleanup"
Ruby's SortedSet offers an alternative implementation approach, though with different performance characteristics. SortedSet maintains elements in sorted order, making it suitable for priority queues when priorities change infrequently. Note that SortedSet was removed from the set standard library in Ruby 3.0; on current Rubies it is provided by the separate sorted_set gem.
require 'set' # SortedSet; on Ruby 3.0+ use the sorted_set gem instead
class SortedSetPriorityQueue
def initialize
@set = SortedSet.new
@counter = 0
end
def insert(element, priority)
# Use counter to handle equal priorities with FIFO order
@set.add([priority, @counter, element])
@counter += 1
end
def extract_min
return nil if @set.empty?
min = @set.first
@set.delete(min)
min[2] # Return element
end
def peek
@set.empty? ? nil : @set.first[2]
end
def size
@set.size
end
end
pq = SortedSetPriorityQueue.new
pq.insert("task 1", 5)
pq.insert("task 2", 2)
pq.insert("task 3", 2) # Same priority as task 2
pq.extract_min # => "task 2" (FIFO for equal priorities)
pq.extract_min # => "task 3"
For applications requiring priority updates, implementing a priority queue with element tracking adds the necessary infrastructure. This approach maintains a hash map from elements to their positions in the heap.
class UpdatablePriorityQueue
def initialize
@heap = []
@position = {} # element => heap index
end
def insert(element, priority)
raise "Element already exists" if @position.key?(element)
@heap << [priority, element]
index = @heap.size - 1
@position[element] = index
upheap(index)
end
def update_priority(element, new_priority)
return unless @position.key?(element)
index = @position[element]
old_priority = @heap[index][0]
@heap[index][0] = new_priority
if new_priority < old_priority
upheap(index)
else
downheap(index)
end
end
def contains?(element)
@position.key?(element)
end
def empty?
@heap.empty?
end
def extract_min
return nil if @heap.empty?
min = @heap[0]
@position.delete(min[1])
if @heap.size > 1
@heap[0] = @heap.pop
@position[@heap[0][1]] = 0
downheap(0)
else
@heap.pop
end
min[1]
end
private
def upheap(index)
return if index == 0
parent_index = (index - 1) / 2
if @heap[index][0] < @heap[parent_index][0]
swap(index, parent_index)
upheap(parent_index)
end
end
def downheap(index)
left_child = 2 * index + 1
right_child = 2 * index + 2
smallest = index
if left_child < @heap.size && @heap[left_child][0] < @heap[smallest][0]
smallest = left_child
end
if right_child < @heap.size && @heap[right_child][0] < @heap[smallest][0]
smallest = right_child
end
if smallest != index
swap(index, smallest)
downheap(smallest)
end
end
def swap(i, j)
@heap[i], @heap[j] = @heap[j], @heap[i]
@position[@heap[i][1]] = i
@position[@heap[j][1]] = j
end
end
pq = UpdatablePriorityQueue.new
pq.insert("task_A", 10)
pq.insert("task_B", 5)
pq.update_priority("task_A", 2) # Decrease priority
pq.extract_min # => "task_A" (now has priority 2)
Practical Examples
Priority queues make Dijkstra's shortest path algorithm efficient by maintaining the frontier of vertices to explore. The algorithm repeatedly extracts the vertex with minimum tentative distance from the source, updating distances to neighboring vertices.
require 'set'
def dijkstra(graph, start)
distances = Hash.new(Float::INFINITY)
distances[start] = 0
previous = {}
pq = MinPriorityQueue.new
pq.insert(start, 0)
visited = Set.new
until pq.empty?
current = pq.extract_min
next if visited.include?(current)
visited.add(current)
graph[current].each do |neighbor, weight|
new_distance = distances[current] + weight
if new_distance < distances[neighbor]
distances[neighbor] = new_distance
previous[neighbor] = current
pq.insert(neighbor, new_distance)
end
end
end
{ distances: distances, previous: previous }
end
# Graph represented as adjacency list
graph = {
'A' => [['B', 4], ['C', 2]],
'B' => [['A', 4], ['C', 1], ['D', 5]],
'C' => [['A', 2], ['B', 1], ['D', 8]],
'D' => [['B', 5], ['C', 8]]
}
result = dijkstra(graph, 'A')
# => { distances: {'A'=>0, 'C'=>2, 'B'=>3, 'D'=>8}, previous: {...} }
Event-driven simulations use priority queues to manage events ordered by scheduled execution time. The simulation advances by processing the next scheduled event, which may generate additional future events.
class Event
attr_reader :time, :type, :data
def initialize(time, type, data)
@time = time
@type = type
@data = data
end
end
class EventSimulator
def initialize
@event_queue = MinPriorityQueue.new
@current_time = 0
end
def schedule_event(event_time, event_type, data)
event = Event.new(event_time, event_type, data)
@event_queue.insert(event, event_time)
end
def run
results = []
until @event_queue.empty?
event = @event_queue.extract_min
@current_time = event.time
result = process_event(event)
results << result
end
results
end
private
def process_event(event)
case event.type
when :customer_arrival
# Schedule service completion
service_time = @current_time + rand(5..15)
schedule_event(service_time, :service_complete, event.data)
"Customer #{event.data[:id]} arrived at #{event.time}"
when :service_complete
"Customer #{event.data[:id]} served at #{event.time}"
end
end
end
sim = EventSimulator.new
sim.schedule_event(0, :customer_arrival, {id: 1})
sim.schedule_event(3, :customer_arrival, {id: 2})
sim.schedule_event(7, :customer_arrival, {id: 3})
sim.run
# Events process in time order, generating new events dynamically
Task scheduling systems prioritize work items based on urgency, importance, or service-level agreements. A priority queue ensures high-priority tasks execute before lower-priority ones, even when submitted later.
class TaskScheduler
Task = Struct.new(:id, :name, :priority, :duration)
def initialize
@queue = MinPriorityQueue.new
@completed = []
@next_id = 0
end
def add_task(name, priority, duration)
task = Task.new(@next_id += 1, name, priority, duration) # unique id per task
@queue.insert(task, priority)
end
def process_tasks(available_time)
time_used = 0
while !@queue.empty? && time_used < available_time
task = @queue.extract_min
if time_used + task.duration <= available_time
@completed << task
time_used += task.duration
else
# Return unfinished task to queue
@queue.insert(task, task.priority)
break
end
end
@completed
end
def pending_count
@queue.size
end
end
scheduler = TaskScheduler.new
scheduler.add_task("Critical bug fix", 1, 30)
scheduler.add_task("Code review", 5, 15)
scheduler.add_task("Security patch", 1, 45)
scheduler.add_task("Documentation", 10, 60)
# Process tasks for 90 minutes
completed = scheduler.process_tasks(90)
# Processes: Critical bug fix (30m), Security patch (45m), Code review (15m)
# Total: 90 minutes, Documentation remains pending
Media streaming applications use priority queues for adaptive bitrate streaming, prioritizing video chunks based on playback urgency and bandwidth availability.
class VideoChunkScheduler
Chunk = Struct.new(:segment_id, :quality, :size, :deadline)
def initialize(buffer_target:)
@queue = MinPriorityQueue.new
@buffer_target = buffer_target
@current_time = 0
end
def request_chunk(segment_id, quality, size, playback_time)
chunk = Chunk.new(segment_id, quality, size, playback_time)
# Priority based on urgency (time until needed)
urgency = playback_time - @current_time
@queue.insert(chunk, urgency)
end
def fetch_next_chunk(bandwidth:)
return nil if @queue.empty?
# Get the most urgent chunk and check it can download before its deadline
chunk = @queue.peek
download_time = chunk.size / bandwidth.to_f
if @current_time + download_time < chunk.deadline
@queue.extract_min
@current_time += download_time
chunk
else
# Too late at this quality; requeue a lower-quality version
adjust_quality(chunk)
nil
end
end
private
def adjust_quality(chunk)
@queue.extract_min
# Request lower quality version (smaller size)
new_chunk = Chunk.new(chunk.segment_id, chunk.quality - 1,
chunk.size * 0.6, chunk.deadline)
@queue.insert(new_chunk, new_chunk.deadline - @current_time)
end
end
scheduler = VideoChunkScheduler.new(buffer_target: 30)
scheduler.request_chunk(1, 1080, 5000, 10)
scheduler.request_chunk(2, 1080, 5000, 20)
scheduler.request_chunk(3, 1080, 5000, 30)
# Fetch chunks based on bandwidth and urgency
chunk = scheduler.fetch_next_chunk(bandwidth: 1000)
Implementation Approaches
Array-based binary heaps provide the most common implementation, storing heap elements in a contiguous array with parent-child relationships determined by index arithmetic. The element at index i has children at indices 2i+1 and 2i+2, with its parent at index (i-1)/2 (integer division). This approach offers excellent cache locality and minimal memory overhead.
# Array representation of heap:
# Index:  0  1  2  3  4   5   6
# Value:  2  5  8  9  7  15  12
#
# Tree structure:
#          2
#        /   \
#       5     8
#      / \   / \
#     9   7 15  12
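The index formulas can be checked directly against this array:

```ruby
heap = [2, 5, 8, 9, 7, 15, 12]
i = 1              # the element 5
heap[(i - 1) / 2]  # => 2  (parent)
heap[2 * i + 1]    # => 9  (left child)
heap[2 * i + 2]    # => 7  (right child)
```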
The array-based approach requires dynamic resizing as elements are added. Ruby's array implementation automatically handles growth, typically doubling capacity when space runs out. This amortized constant-time resizing maintains overall logarithmic insertion complexity.
Tree-based heaps use explicit node objects with pointers to children and optionally to parents. This approach adds memory overhead for storing pointers but simplifies certain operations, particularly when implementing additional functionality like merging heaps.
class HeapNode
attr_accessor :priority, :element, :left, :right, :parent
def initialize(priority, element)
@priority = priority
@element = element
@left = nil
@right = nil
@parent = nil
end
end
class TreeBasedHeap
def initialize
@root = nil
@size = 0
end
def insert(element, priority)
new_node = HeapNode.new(priority, element)
if @root.nil?
@root = new_node
else
insert_at_next_position(new_node)
upheap(new_node)
end
@size += 1
end
private
def insert_at_next_position(node)
# Find the next free slot via breadth-first search (O(n) here;
# the path to the slot can also be derived from the node count in O(log n))
queue = [@root]
until queue.empty?
current = queue.shift
if current.left.nil?
current.left = node
node.parent = current
return
elsif current.right.nil?
current.right = node
node.parent = current
return
else
queue << current.left
queue << current.right
end
end
end
def upheap(node)
return if node.parent.nil?
if node.priority < node.parent.priority
swap_nodes(node, node.parent)
upheap(node.parent)
end
end
def swap_nodes(node1, node2)
node1.priority, node2.priority = node2.priority, node1.priority
node1.element, node2.element = node2.element, node1.element
end
end
Fibonacci heaps provide theoretical advantages for specific operations, offering amortized constant-time decrease-key operations. These heaps consist of a collection of heap-ordered trees, making them beneficial for algorithms like Dijkstra's that perform many decrease-key operations. However, their implementation complexity and constant factors often make them impractical for typical applications.
Pairing heaps offer a middle ground between binary heaps and Fibonacci heaps, providing good amortized bounds with simpler implementation. The data structure uses a multiway tree where nodes can have arbitrary numbers of children, connected as siblings.
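A minimal pairing-heap sketch illustrates the structure; this is an illustrative implementation, not a tuned one, and the PairingNode struct and two-pass merge_pairs helper are named here purely for exposition.

```ruby
PairingNode = Struct.new(:priority, :element, :children)

class PairingHeap
  def initialize
    @root = nil
  end

  def insert(element, priority)
    @root = merge(@root, PairingNode.new(priority, element, []))
  end

  def peek
    @root&.element
  end

  def extract_min
    return nil if @root.nil?
    min = @root.element
    @root = merge_pairs(@root.children)
    min
  end

  private

  # Merging two heaps is O(1): the larger root becomes a child of the smaller
  def merge(a, b)
    return b if a.nil?
    return a if b.nil?
    if a.priority <= b.priority
      a.children << b
      a
    else
      b.children << a
      b
    end
  end

  # Two-pass pairing: merge children pairwise left to right,
  # then fold the results right to left
  def merge_pairs(nodes)
    return nil if nodes.empty?
    paired = nodes.each_slice(2).map { |x, y| merge(x, y) }
    paired.reverse.reduce { |acc, node| merge(node, acc) }
  end
end

ph = PairingHeap.new
ph.insert("low", 10)
ph.insert("high", 1)
ph.insert("medium", 5)
ph.extract_min # => "high"
```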
Binomial heaps support efficient merging of two heaps in logarithmic time, making them appropriate for applications that frequently combine priority queues. The structure consists of a collection of binomial trees, each satisfying the heap property.
Skew heaps provide a self-adjusting alternative to standard heaps, using a variant of heap merge as the fundamental operation. Skew heaps require no balance information and offer amortized logarithmic time bounds for operations despite their apparently simple merging strategy.
class SkewHeapNode
attr_accessor :priority, :element, :left, :right
def initialize(priority, element)
@priority = priority
@element = element
@left = nil
@right = nil
end
end
class SkewHeap
def initialize
@root = nil
end
def insert(element, priority)
new_node = SkewHeapNode.new(priority, element)
@root = merge(@root, new_node)
end
def extract_min
return nil if @root.nil?
min = @root.element
@root = merge(@root.left, @root.right)
min
end
private
def merge(heap1, heap2)
return heap2 if heap1.nil?
return heap1 if heap2.nil?
# Ensure heap1 has smaller root
heap1, heap2 = heap2, heap1 if heap1.priority > heap2.priority
# Merge heap2 with right subtree and swap children
heap1.right = merge(heap1.right, heap2)
heap1.left, heap1.right = heap1.right, heap1.left
heap1
end
end
Performance Considerations
Binary heap implementations provide O(log n) time complexity for insertion and extraction operations, where n represents the number of elements in the queue. The logarithmic bound follows from the heap's tree height, which grows logarithmically with the number of nodes.
Peek operations run in O(1) constant time since the highest-priority element always resides at the root position. This guarantee makes priority queues efficient for algorithms that need to examine the next element without removing it.
# Performance characteristics of binary heap
# n = number of elements
#
# insert: O(log n)
# extract_min: O(log n)
# peek: O(1)
# build_heap: O(n)
Building a heap from n elements can achieve O(n) linear time using the bottom-up heap construction algorithm, more efficient than inserting elements individually. This approach starts with an unsorted array and performs downheap operations from the last internal node upward.
class MinPriorityQueue
def self.build_from_array(elements_with_priorities)
# Expects an array of [priority, element] pairs matching @heap's layout
pq = new
pq.instance_variable_set(:@heap, elements_with_priorities.dup)
# Start from last internal node and downheap
last_internal = (elements_with_priorities.size / 2) - 1
last_internal.downto(0) do |i|
pq.send(:downheap, i)
end
pq
end
end
# Build heap from 1000 elements: ~1,000 operations (O(n))
# vs inserting individually: ~1000 * log2(1000) ≈ 10,000 operations (O(n log n))
Memory overhead varies by implementation approach. Array-based heaps require storage for n priority-element pairs plus array capacity overhead, typically 1.5x to 2x the current size due to growth strategy. Tree-based implementations add pointer overhead, roughly 2-3 pointers per node on top of element storage.
Cache performance favors array-based implementations significantly. Array heaps access contiguous memory during operations, maximizing cache hits. Tree-based implementations scatter nodes across memory, generating more cache misses. For small to medium-sized queues (thousands of elements), cache effects often dominate theoretical complexity differences.
Priority queue performance degrades with frequent priority updates. Standard implementations require O(log n) time to update an element's priority after locating it. Applications with many updates benefit from implementations that maintain element-to-position mappings, adding O(1) space per element but reducing update complexity.
Worst-case insertion occurs when elements arrive in sorted order, forcing every insertion to bubble from bottom to top. However, the logarithmic bound still holds, and practical performance remains acceptable. Extraction worst-case occurs when replacing the root with the rightmost leaf requires downheaping to the deepest level.
require 'benchmark'
def benchmark_priority_queue(size)
pq = MinPriorityQueue.new
Benchmark.bm do |x|
x.report("insert #{size}:") do
size.times { |i| pq.insert("task_#{i}", rand(1..1000)) }
end
x.report("extract #{size / 2}:") do
(size / 2).times { pq.extract_min }
end
end
end
benchmark_priority_queue(10_000)
# Example output (times vary by machine):
# insert 10000:  0.015000 0.000000 0.015000
# extract 5000:  0.008000 0.000000 0.008000
Memory allocation patterns affect performance in garbage-collected environments like Ruby. Array-based heaps create less garbage since the array grows by reallocating a single object. Frequent insertions and extractions in tree-based implementations generate many short-lived objects, increasing garbage collection pressure.
Comparison operations dominate runtime for priority queues holding complex objects. When priorities derive from expensive computations, caching calculated priorities improves performance substantially. Store computed priorities with elements rather than recalculating during heap operations.
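A sketch of that caching strategy, with a hypothetical score lambda standing in for an expensive computation:

```ruby
# Compute the expensive score once per job and store it alongside the
# element, so comparisons during heap operations stay cheap
jobs = [
  { name: "a", payload: "x" * 400 },
  { name: "b", payload: "y" * 100 }
]
score = ->(job) { job[:payload].bytesize } # stand-in for an expensive calculation
entries = jobs.map { |job| [score.call(job), job] } # scored once, not per comparison
entries.min[1][:name] # => "b" (smallest cached score)
```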
Common Patterns
The decrease-key pattern supports efficient priority updates in graph algorithms. When a shorter path to a vertex is discovered, the algorithm decreases the vertex's priority in the queue without removing and reinserting it.
class DijkstraOptimized
def shortest_path(graph, start, target)
distances = Hash.new(Float::INFINITY)
distances[start] = 0
pq = UpdatablePriorityQueue.new
pq.insert(start, 0)
until pq.empty?
current = pq.extract_min
return distances[target] if current == target
graph[current].each do |neighbor, weight|
new_distance = distances[current] + weight
if new_distance < distances[neighbor]
distances[neighbor] = new_distance
# Update priority if already in queue
if pq.contains?(neighbor)
pq.update_priority(neighbor, new_distance)
else
pq.insert(neighbor, new_distance)
end
end
end
end
Float::INFINITY
end
end
The lazy deletion pattern defers removal of invalid entries until extraction time. When elements become invalid or priorities change, the algorithm inserts new entries rather than updating existing ones. During extraction, the algorithm checks validity and discards stale entries.
class LazyPriorityQueue
def initialize
@pq = MinPriorityQueue.new
@valid = Set.new
end
def insert(element, priority)
@pq.insert(element, priority)
@valid.add(element)
end
def invalidate(element)
@valid.delete(element)
end
def extract_min
loop do
return nil if @pq.empty?
element = @pq.extract_min
if @valid.include?(element)
@valid.delete(element)
return element
end
# Skip invalid elements
end
end
end
pq = LazyPriorityQueue.new
pq.insert("task_A", 5)
pq.insert("task_B", 10)
pq.invalidate("task_A") # Mark as invalid
pq.extract_min # => "task_B" (skips invalid task_A)
The multi-level priority pattern combines multiple priority criteria into a composite priority. This approach handles tasks with primary and secondary ordering requirements, such as priority level followed by insertion time.
class MultiLevelPriority
include Comparable # MinPriorityQueue compares priorities with <, which Comparable derives from <=>
attr_reader :primary, :secondary
def initialize(primary, secondary)
@primary = primary
@secondary = secondary
end
def <=>(other)
comparison = primary <=> other.primary
comparison == 0 ? secondary <=> other.secondary : comparison
end
end
pq = MinPriorityQueue.new
pq.insert("task_1", MultiLevelPriority.new(5, Time.now))
sleep(0.01)
pq.insert("task_2", MultiLevelPriority.new(5, Time.now))
pq.insert("task_3", MultiLevelPriority.new(3, Time.now))
# Extracts task_3 (primary=3), then task_1 and task_2 by timestamp
The rate limiting pattern uses priority queues to schedule work within rate limits. Tasks are assigned priorities based on their scheduled execution time, and the scheduler processes tasks when the rate limit permits.
class RateLimitedScheduler
def initialize(max_per_second)
@max_per_second = max_per_second
@interval = 1.0 / max_per_second
@queue = MinPriorityQueue.new
@last_execution = Time.now - @interval
end
def schedule(task)
next_available = @last_execution + @interval
schedule_time = [Time.now, next_available].max
@queue.insert(task, schedule_time.to_f)
end
def process_next
return nil if @queue.empty?
task = @queue.peek
# MinPriorityQueue#peek returns only the element, so reach into the
# heap array for the stored priority (the scheduled time)
scheduled_time = @queue.instance_variable_get(:@heap)[0][0]
if Time.now.to_f >= scheduled_time
@queue.extract_min
@last_execution = Time.now
execute(task)
else
nil
end
end
private
def execute(task)
# Perform task work
task
end
end
The merge pattern combines multiple priority queues into a single queue. Applications with multiple input streams use this pattern to maintain a single prioritized view of all elements.
# MinPriorityQueue#extract_min returns only the element, so merging
# needs an extraction that also reports the priority
class MinPriorityQueue
def extract_min_with_priority
return nil if @heap.empty?
min = @heap[0]
last = @heap.pop
unless @heap.empty?
@heap[0] = last
downheap(0)
end
min # [priority, element]
end
end
def merge_priority_queues(queues)
merged = MinPriorityQueue.new
queues.each do |queue|
until queue.empty?
priority, element = queue.extract_min_with_priority
merged.insert(element, priority)
end
end
merged
end
Reference
Core Operations
| Operation | Time Complexity | Description |
|---|---|---|
| insert | O(log n) | Add element with priority to queue |
| extract_min | O(log n) | Remove and return highest-priority element |
| peek | O(1) | View highest-priority element without removal |
| size | O(1) | Return number of elements in queue |
| empty? | O(1) | Check if queue contains elements |
| build_heap | O(n) | Construct heap from array of elements |
Implementation Comparison
| Approach | Insert | Extract | Decrease-Key | Merge | Memory Overhead |
|---|---|---|---|---|---|
| Binary Heap | O(log n) | O(log n) | O(log n) | O(n) | Low |
| Fibonacci Heap | O(1) | O(log n) | O(1) | O(1) | High |
| Binomial Heap | O(log n) | O(log n) | O(log n) | O(log n) | Medium |
| Pairing Heap | O(1) | O(log n) | O(log n) | O(1) | Medium |
| Skew Heap | O(log n) | O(log n) | O(log n) | O(log n) | Medium |
Heap Property Maintenance
| Operation | Mechanism | When Used |
|---|---|---|
| Upheap | Swap element with parent until heap property restored | After insertion at leaf |
| Downheap | Swap element with higher-priority child until heap property restored | After removing root |
| Heapify | Apply downheap from last internal node upward | Building heap from array |
Common Use Cases
| Application | Priority Basis | Queue Type |
|---|---|---|
| Dijkstra's Algorithm | Distance from source | Min-priority queue |
| Event Simulation | Event timestamp | Min-priority queue |
| Task Scheduling | Task priority level | Min or max depending on convention |
| Huffman Coding | Character frequency | Min-priority queue |
| A* Search | f(n) = g(n) + h(n) | Min-priority queue |
| Load Balancing | Server load | Min-priority queue |
| Median Maintenance | Element value | Two heaps (min and max) |
Ruby Gems
| Gem | Features | Best For |
|---|---|---|
| pqueue | Min/max queues, simple API | General purpose priority queues |
| algorithms | Multiple data structures including heaps | Learning and experimentation |
| pairing_heap | Efficient decrease-key | Graph algorithms with updates |
| priority_queue_cxx | C++ extension for performance | High-performance applications |
Index Arithmetic for Array-Based Heaps
| Relationship | Formula | Example (index 3) |
|---|---|---|
| Parent | (i - 1) / 2 | (3 - 1) / 2 = 1 |
| Left Child | 2 * i + 1 | 2 * 3 + 1 = 7 |
| Right Child | 2 * i + 2 | 2 * 3 + 2 = 8 |
| Last Internal | (n / 2) - 1 | For n=10: 4 |
Priority Queue Patterns
| Pattern | Use Case | Key Characteristic |
|---|---|---|
| Decrease-Key | Graph algorithms | Update priority without reinsertion |
| Lazy Deletion | Frequent invalidation | Defer cleanup until extraction |
| Multi-Level Priority | Complex ordering | Composite priority values |
| Rate Limiting | Throttled processing | Schedule based on rate constraints |
| Merge | Multiple streams | Combine multiple queues |
Common Pitfalls
| Issue | Symptom | Solution |
|---|---|---|
| Missing priority updates | Stale priorities in queue | Implement decrease-key or use lazy deletion |
| Incorrect comparison function | Wrong element extracted | Verify min vs max and comparison logic |
| Memory leaks with references | Growing memory usage | Clear element references after extraction |
| Unstable ordering | Unpredictable order for equal priorities | Include tiebreaker in priority calculation |
| Inefficient priority calculation | Slow operations | Cache computed priorities with elements |