CrackedRuby

Overview

Cycle detection identifies circular paths within data structures where traversal returns to a previously visited node. This problem appears across multiple domains: linked list validation, graph analysis, dependency resolution, deadlock detection, and infinite loop prevention.

A cycle exists when following references or edges from a starting point eventually leads back to that same point. In a singly linked list, this occurs when a node's next pointer references an earlier node in the sequence. In directed graphs, cycles form when a path exists from a vertex back to itself through one or more edges.

The fundamental challenge in cycle detection involves distinguishing between legitimate graph structures containing cycles and infinite loops that prevent termination. Detection algorithms must track visited nodes efficiently while minimizing memory overhead and processing time.

# Simple linked list node
class Node
  attr_accessor :value, :next
  
  def initialize(value)
    @value = value
    @next = nil
  end
end

# Creating a cycle: 1 -> 2 -> 3 -> 2
node1 = Node.new(1)
node2 = Node.new(2)
node3 = Node.new(3)

node1.next = node2
node2.next = node3
node3.next = node2  # Creates cycle back to node2

Applications span from compiler optimization and static analysis to network protocol validation and distributed system debugging. Memory leak detection relies on identifying reference cycles that prevent garbage collection. Build systems check for circular dependencies that would cause infinite recursion during compilation.

Key Principles

Cycle detection algorithms operate on the principle that repeated traversal must eventually revisit a node if a cycle exists within bounded structures. The challenge lies in determining whether revisitation occurred due to a cycle or because the same node appears through different paths.

State Tracking: Algorithms maintain information about visited nodes to detect when traversal encounters a previously seen element. The storage and lookup mechanism for this state directly impacts both time and space complexity. Hash-based tracking provides O(1) lookup but requires O(n) space. Alternative approaches trade increased time complexity for reduced memory usage.

Traversal Strategy: Different data structures require adapted traversal methods. Linked lists support sequential traversal with pointer following. Graphs demand either depth-first or breadth-first search to explore all reachable vertices. The traversal order affects which cycles are detected first and how quickly detection occurs.

Termination Conditions: Detection algorithms must recognize both cycle presence and cycle absence. For finite structures, exhausting all nodes without finding a cycle proves none exists. For potentially infinite structures, algorithms need explicit cycle detection to avoid non-termination.

Cycle Characteristics: Beyond binary detection, algorithms may extract cycle properties including entry point, cycle length, and participating nodes. These characteristics enable deeper analysis and informed decision-making about handling detected cycles.

# Graph represented as adjacency list
class Graph
  def initialize
    @adjacency = Hash.new { |h, k| h[k] = [] }
  end
  
  def add_edge(from, to)
    @adjacency[from] << to
  end
  
  def neighbors(vertex)
    @adjacency[vertex]
  end
  
  def vertices
    @adjacency.keys
  end
end

# Graph with cycle: A -> B -> C -> A
graph = Graph.new
graph.add_edge('A', 'B')
graph.add_edge('B', 'C')
graph.add_edge('C', 'A')  # Creates cycle

Node Marking: Algorithms distinguish between unvisited, currently visiting, and completely visited nodes. This three-state model enables detection of back edges in directed graphs during depth-first traversal. A back edge from a currently visiting node indicates cycle presence.

The relationship between detection speed and memory consumption creates an optimization space. Algorithms balance early cycle detection against minimal state storage, choosing approaches appropriate to the specific use case and resource constraints.

Implementation Approaches

Cycle detection strategies vary based on data structure characteristics, resource constraints, and required information beyond binary detection.

Hash Set Tracking: Stores visited nodes in a hash set, checking membership before processing each node. This approach provides O(n) time complexity with O(n) space complexity, making it suitable when memory availability is not constrained. Hash set tracking works uniformly across linked lists, graphs, and arbitrary pointer structures.

visited = empty hash set
current = head

while current is not null:
    if current in visited:
        return true  // Cycle detected
    
    add current to visited
    current = current.next

return false  // No cycle

This method detects cycles immediately upon revisiting any node. The entry point may differ from the first revisited node, requiring additional analysis if entry point identification is needed.
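A direct Ruby translation of the pseudocode above, assuming the Node class defined in the Overview:

```ruby
require 'set'

# Hash set tracking for a singly linked list. Assumes the Node class from
# the Overview (value/next accessors).
def list_has_cycle?(head)
  visited = Set.new
  current = head

  while current
    # Set membership hashes plain objects by identity, so structurally
    # equal but distinct nodes are not confused
    return true if visited.include?(current)
    visited << current
    current = current.next
  end

  false
end
```

With the 1 -> 2 -> 3 -> 2 list built in the Overview, list_has_cycle?(node1) returns true; removing the node3.next = node2 link makes it return false.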

Floyd's Cycle Detection: Uses two pointers moving at different speeds through the structure. The slow pointer advances one step per iteration while the fast pointer advances two steps. If a cycle exists, the fast pointer eventually laps the slow pointer, meeting within the cycle. This algorithm requires O(1) space but only applies to structures supporting sequential traversal.

The mathematical foundation is straightforward: once both pointers are inside a cycle of length c, the gap between them changes by one node per iteration, so they must coincide within at most c iterations. The meeting point occurs within the cycle but not necessarily at the entry point.
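The same two-pointer argument applies to iterating any deterministic function x -> f(x) over a finite domain, which must eventually cycle. A minimal sketch (the function f below is an arbitrary example, not from the text):

```ruby
# Floyd's detection over function iteration: treat f as the "next pointer".
# Because the domain is finite, some value must eventually repeat.
def floyd_meeting_point(x0, f)
  slow = f.call(x0)
  fast = f.call(f.call(x0))

  until slow == fast
    slow = f.call(slow)          # one step
    fast = f.call(f.call(fast))  # two steps
  end

  slow  # a value inside the cycle, not necessarily the entry point
end

f = ->(x) { (x * x + 1) % 255 }  # example function on a 255-element domain
floyd_meeting_point(2, f)        # some value on the cycle of f
```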

Depth-First Search with Colors: Marks nodes with three states during graph traversal. White nodes are unvisited, gray nodes are currently being explored, and black nodes have completed exploration. A gray node encountered during traversal indicates a back edge and confirms cycle presence. This approach integrates naturally with graph algorithms requiring depth-first traversal.

function has_cycle(graph):
    color = new hash map (default: white)
    
    for each vertex in graph:
        if color[vertex] is white:
            if dfs_visit(vertex, color, graph):
                return true
    
    return false

function dfs_visit(vertex, color, graph):
    color[vertex] = gray
    
    for each neighbor in graph.neighbors(vertex):
        if color[neighbor] is gray:
            return true  // Back edge found
        
        if color[neighbor] is white:
            if dfs_visit(neighbor, color, graph):
                return true
    
    color[vertex] = black
    return false

Topological Sort Based: Attempts to order graph vertices such that all edges point forward. Directed acyclic graphs admit valid topological orderings while cyclic graphs do not. Kahn's algorithm performs topological sort using in-degree tracking, detecting cycles when vertices remain after exhausting zero in-degree nodes.

This approach simultaneously detects cycles and produces a valid ordering for acyclic graphs, making it efficient when both capabilities are needed. The method works only for directed graphs and requires computing in-degrees as preprocessing.
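A minimal sketch of Kahn's algorithm used purely as a cycle test; the graph here is a plain Hash of vertex => neighbors rather than the Graph class above:

```ruby
# Kahn's algorithm: repeatedly remove zero in-degree vertices; anything
# left over lies on or downstream of a cycle.
def cyclic?(adjacency)
  in_degree = Hash.new(0)
  adjacency.each_key { |v| in_degree[v] = 0 unless in_degree.key?(v) }
  adjacency.each_value { |neighbors| neighbors.each { |n| in_degree[n] += 1 } }

  queue = in_degree.select { |_, degree| degree.zero? }.keys
  processed = 0

  until queue.empty?
    vertex = queue.shift
    processed += 1
    (adjacency[vertex] || []).each do |neighbor|
      in_degree[neighbor] -= 1
      queue << neighbor if in_degree[neighbor].zero?
    end
  end

  processed < in_degree.size
end

cyclic?('A' => %w[B], 'B' => %w[C], 'C' => %w[A])  # => true
cyclic?('A' => %w[B C], 'B' => %w[C], 'C' => [])   # => false
```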

Brent's Algorithm: An optimization over Floyd's method that reduces the number of pointer operations. Instead of advancing two pointers at fixed speeds, it holds one pointer stationary and advances the other, teleporting the stationary pointer forward at exponentially increasing intervals (powers of two) and checking for repetition within each window. This reduces the constant factors in time complexity while maintaining O(1) space usage.

Selection criteria depend on structure type, memory constraints, and required output. Hash set tracking provides simplicity and broad applicability. Floyd's algorithm optimizes space for sequential structures. DFS coloring integrates with graph algorithms. Topological sort combines detection with ordering. Brent's algorithm optimizes sequential detection.

Ruby Implementation

Ruby's object-oriented nature and dynamic typing enable concise cycle detection implementations. The language provides built-in data structures that simplify state tracking and node management.

Linked List Detection with Floyd's Algorithm:

class LinkedListCycleDetector
  def self.has_cycle?(head)
    return false if head.nil? || head.next.nil?
    
    slow = head
    fast = head
    
    while fast && fast.next
      slow = slow.next
      fast = fast.next.next
      
      return true if slow == fast
    end
    
    false
  end
  
  def self.find_cycle_start(head)
    return nil unless has_cycle?(head)
    
    # Find meeting point
    slow = head
    fast = head
    
    while fast && fast.next
      slow = slow.next
      fast = fast.next.next
      break if slow == fast
    end
    
    # Find entry point
    slow = head
    while slow != fast
      slow = slow.next
      fast = fast.next
    end
    
    slow
  end
  
  def self.cycle_length(head)
    return 0 unless has_cycle?(head)
    
    slow = head
    fast = head
    
    # Find meeting point
    while fast && fast.next
      slow = slow.next
      fast = fast.next.next
      break if slow == fast
    end
    
    # Count cycle length
    count = 1
    fast = fast.next
    while slow != fast
      fast = fast.next
      count += 1
    end
    
    count
  end
end

The implementation compares nodes with ==, which Node does not override, so comparison falls back to object identity: two pointers match only when they reference the same node object, not when nodes merely hold equal values. The find_cycle_start method leverages the mathematical property that moving both pointers one step at a time, from the head and the meeting point respectively, causes them to meet at the cycle entry.
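The entry-point property can be verified in isolation. The sketch below uses an array of successor indices as a stand-in for node objects (index 0 is the head); the structure 0 -> 1 -> 2 -> 3 -> 1 has a one-node tail before a three-node cycle entered at index 1:

```ruby
succ = [1, 2, 3, 1]  # succ[i] is the index that node i points to

# Phase 1: advance pointers at speeds 1 and 2 until they meet in the cycle
slow = fast = 0
loop do
  slow = succ[slow]
  fast = succ[succ[fast]]
  break if slow == fast
end

# Phase 2: reset one pointer to the head; advancing both one step at a
# time now lands them together exactly at the cycle entry
entry = 0
until entry == slow
  entry = succ[entry]
  slow = succ[slow]
end

entry  # => 1, the cycle entry
```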

Graph Cycle Detection with DFS:

class GraphCycleDetector
  COLOR_WHITE = :white
  COLOR_GRAY = :gray
  COLOR_BLACK = :black
  
  def initialize(graph)
    @graph = graph
    @colors = {}
    @parent = {}
  end
  
  def has_cycle?
    @graph.vertices.each do |vertex|
      @colors[vertex] ||= COLOR_WHITE
    end
    
    @graph.vertices.each do |vertex|
      if @colors[vertex] == COLOR_WHITE
        return true if dfs_visit(vertex)
      end
    end
    
    false
  end
  
  def find_cycle
    @graph.vertices.each do |vertex|
      @colors[vertex] ||= COLOR_WHITE
    end
    
    @graph.vertices.each do |vertex|
      if @colors[vertex] == COLOR_WHITE
        cycle = dfs_cycle_path(vertex)
        return cycle if cycle
      end
    end
    
    nil
  end
  
  private
  
  def dfs_visit(vertex)
    @colors[vertex] = COLOR_GRAY
    
    @graph.neighbors(vertex).each do |neighbor|
      @colors[neighbor] ||= COLOR_WHITE
      
      if @colors[neighbor] == COLOR_GRAY
        return true  # Back edge found
      end
      
      if @colors[neighbor] == COLOR_WHITE
        @parent[neighbor] = vertex
        return true if dfs_visit(neighbor)
      end
    end
    
    @colors[vertex] = COLOR_BLACK
    false
  end
  
  def dfs_cycle_path(vertex)
    @colors[vertex] = COLOR_GRAY
    
    @graph.neighbors(vertex).each do |neighbor|
      @colors[neighbor] ||= COLOR_WHITE
      
      if @colors[neighbor] == COLOR_GRAY
        # Reconstruct cycle path
        path = [neighbor]
        current = vertex
        while current != neighbor
          path.unshift(current)
          current = @parent[current]
        end
        path.unshift(neighbor)
        return path
      end
      
      if @colors[neighbor] == COLOR_WHITE
        @parent[neighbor] = vertex
        cycle = dfs_cycle_path(neighbor)
        return cycle if cycle
      end
    end
    
    @colors[vertex] = COLOR_BLACK
    nil
  end
end

The color-based approach enables distinguishing between ancestors in the current DFS path (gray) and nodes from completed paths (black). This prevents false positives from cross edges in directed graphs.
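A self-contained illustration of that point, using an adjacency Hash rather than the Graph class: in the diamond below, vertex D is reached twice (via B and via C), but on the second encounter it is already black, so no cycle is reported.

```ruby
# Minimal three-color DFS cycle check over a vertex => neighbors Hash
def dag_has_cycle?(adjacency)
  color = Hash.new(:white)

  dfs = lambda do |vertex|
    color[vertex] = :gray
    (adjacency[vertex] || []).each do |neighbor|
      return true if color[neighbor] == :gray                       # back edge
      return true if color[neighbor] == :white && dfs.call(neighbor)
      # :black neighbors are cross/forward edges, not cycles
    end
    color[vertex] = :black
    false
  end

  adjacency.keys.any? { |v| color[v] == :white && dfs.call(v) }
end

diamond = { 'A' => %w[B C], 'B' => %w[D], 'C' => %w[D], 'D' => [] }
dag_has_cycle?(diamond)                     # => false, despite revisiting D
dag_has_cycle?('A' => %w[B], 'B' => %w[A])  # => true
```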

Hash-Based Detection for General Structures:

require 'set'

class GeneralCycleDetector
  def self.detect_cycle(start, next_function)
    visited = Set.new
    current = start
    
    while current
      return current if visited.include?(current)
      
      visited.add(current)
      current = next_function.call(current)
    end
    
    nil
  end
  
  def self.detect_with_path(start, next_function)
    visited = {}
    current = start
    position = 0
    
    while current
      if visited.key?(current)
        cycle_start_position = visited[current]
        cycle_length = position - cycle_start_position
        return {
          cycle_detected: true,
          entry_point: current,
          cycle_length: cycle_length,
          path_to_cycle: cycle_start_position
        }
      end
      
      visited[current] = position
      current = next_function.call(current)
      position += 1
    end
    
    { cycle_detected: false }
  end
end

# Usage with lambda
node1 = Node.new(1)
node2 = Node.new(2)
node3 = Node.new(3)
node1.next = node2
node2.next = node3
node3.next = node2

result = GeneralCycleDetector.detect_with_path(
  node1,
  ->(n) { n&.next }
)
# => { cycle_detected: true, entry_point: #<Node @value=2>, 
#      cycle_length: 2, path_to_cycle: 1 }

This implementation accepts a lambda defining traversal logic, enabling reuse across different data structure types. The detect_with_path variant tracks positions to compute cycle characteristics beyond binary detection.

Union-Find for Undirected Graphs:

class UnionFind
  def initialize(size)
    @parent = Array.new(size) { |i| i }
    @rank = Array.new(size, 0)
  end
  
  def find(x)
    if @parent[x] != x
      @parent[x] = find(@parent[x])  # Path compression
    end
    @parent[x]
  end
  
  def union(x, y)
    root_x = find(x)
    root_y = find(y)
    
    return false if root_x == root_y  # Already connected
    
    if @rank[root_x] < @rank[root_y]
      @parent[root_x] = root_y
    elsif @rank[root_x] > @rank[root_y]
      @parent[root_y] = root_x
    else
      @parent[root_y] = root_x
      @rank[root_x] += 1
    end
    
    true
  end
end

class UndirectedGraphCycleDetector
  def self.has_cycle?(edges, num_vertices)
    uf = UnionFind.new(num_vertices)
    
    edges.each do |u, v|
      return true unless uf.union(u, v)
    end
    
    false
  end
end

# Usage
edges = [[0, 1], [1, 2], [2, 0]]  # Triangle cycle
UndirectedGraphCycleDetector.has_cycle?(edges, 3)  # => true

Union-Find efficiently detects cycles in undirected graphs by tracking connected components. When an edge connects vertices already in the same component, a cycle exists. This approach avoids the overhead of graph traversal for simple connectivity analysis.

Practical Examples

Dependency Resolution in Package Managers:

class PackageManager
  def initialize
    @packages = {}
  end
  
  def add_package(name, dependencies = [])
    @packages[name] = dependencies
  end
  
  def install_order(package_name)
    order = []
    visiting = Set.new
    visited = Set.new
    
    unless resolve_dependencies(package_name, visiting, visited, order)
      raise "Circular dependency detected involving #{package_name}"
    end
    
    order
  end
  
  private
  
  def resolve_dependencies(package, visiting, visited, order)
    return true if visited.include?(package)
    return false if visiting.include?(package)  # Cycle detected
    
    visiting.add(package)
    
    dependencies = @packages[package] || []
    dependencies.each do |dep|
      unless resolve_dependencies(dep, visiting, visited, order)
        return false
      end
    end
    
    visiting.delete(package)
    visited.add(package)
    order << package
    
    true
  end
end

pm = PackageManager.new
pm.add_package('app', ['database', 'auth'])
pm.add_package('database', ['connection_pool'])
pm.add_package('auth', ['database', 'crypto'])
pm.add_package('crypto', [])
pm.add_package('connection_pool', [])

p pm.install_order('app')
# => ["connection_pool", "database", "crypto", "auth", "app"]

# Circular dependency
pm.add_package('circular_a', ['circular_b'])
pm.add_package('circular_b', ['circular_a'])

begin
  pm.install_order('circular_a')
rescue => e
  puts e.message  # => "Circular dependency detected involving circular_a"
end

The package manager uses DFS with visiting/visited sets to detect circular dependencies. The visiting set tracks the current dependency path, enabling detection of back edges that indicate cycles. The algorithm produces a valid installation order for acyclic dependencies while raising errors for circular references.

Deadlock Detection in Resource Allocation:

class ResourceAllocator
  def initialize
    @allocations = Hash.new { |h, k| h[k] = [] }
    @waiting = Hash.new { |h, k| h[k] = [] }
  end
  
  def allocate(process, resource)
    @allocations[process] << resource
  end
  
  def wait(process, resource)
    @waiting[process] << resource
  end
  
  def detect_deadlock
    # Build wait-for graph
    wait_for_graph = build_wait_for_graph
    
    # Detect cycle in wait-for graph
    detector = GraphCycleDetector.new(wait_for_graph)
    cycle = detector.find_cycle
    
    if cycle
      {
        deadlock: true,
        involved_processes: cycle
      }
    else
      { deadlock: false }
    end
  end
  
  private
  
  def build_wait_for_graph
    graph = Graph.new
    
    # Process P waits for process Q if:
    # P is waiting for resource R and Q holds resource R
    @waiting.each do |waiting_process, resources|
      resources.each do |resource|
        @allocations.each do |holding_process, held_resources|
          if held_resources.include?(resource) && waiting_process != holding_process
            graph.add_edge(waiting_process, holding_process)
          end
        end
      end
    end
    
    graph
  end
end

allocator = ResourceAllocator.new

# Process P1 holds R1, waits for R2
allocator.allocate('P1', 'R1')
allocator.wait('P1', 'R2')

# Process P2 holds R2, waits for R3
allocator.allocate('P2', 'R2')
allocator.wait('P2', 'R3')

# Process P3 holds R3, waits for R1 (creates cycle)
allocator.allocate('P3', 'R3')
allocator.wait('P3', 'R1')

result = allocator.detect_deadlock
p result
# => { deadlock: true, involved_processes: ["P1", "P2", "P3", "P1"] }

Deadlock detection transforms resource allocation state into a wait-for graph where edges represent waiting relationships. A cycle in this graph indicates circular waiting, the necessary condition for deadlock. The system identifies not just deadlock presence but the specific processes involved.

Symbolic Link Cycle Detection:

class FileSystemAnalyzer
  def initialize
    @graph = Graph.new
    @inodes = {}
  end
  
  def add_link(source_path, target_inode)
    @inodes[source_path] = target_inode
    @graph.add_edge(source_path, target_inode)
    @graph.add_edge(target_inode, source_path)
  end
  
  def detect_symlink_cycle(start_path)
    visited = Set.new
    current = start_path
    path = []
    
    while current
      if visited.include?(current)
        cycle_start = path.index(current)
        return path[cycle_start..-1] + [current]
      end
      
      visited.add(current)
      path << current
      
      # Follow symlink if current is a path
      target = @inodes[current]
      
      # Find next path that links to target
      next_path = nil
      @graph.neighbors(target)&.each do |neighbor|
        if neighbor.is_a?(String) && neighbor != current
          next_path = neighbor
          break
        end
      end
      
      current = next_path
    end
    
    nil
  end
end

fs = FileSystemAnalyzer.new
fs.add_link('/home/user/link1', 'inode_100')
fs.add_link('/home/user/link2', 'inode_200')
fs.add_link('/home/user/link3', 'inode_100')  # Points back to same inode

cycle = fs.detect_symlink_cycle('/home/user/link1')
puts cycle if cycle
# Detects when symbolic links create circular references

File system analyzers detect cycles in symbolic links to prevent infinite loops during traversal. The implementation tracks both paths and inodes, identifying when following links returns to a previously encountered file system location.

State Machine Cycle Detection:

class StateMachine
  def initialize
    @transitions = Hash.new { |h, k| h[k] = [] }
  end
  
  def add_transition(from_state, to_state, condition)
    @transitions[from_state] << { to: to_state, condition: condition }
  end
  
  def find_reachable_cycles(start_state)
    cycles = []
    visited = Set.new

    # A lambda keeps the helper local to this call; a nested `def` would
    # redefine a method on the class every time this method ran
    explore = lambda do |state, path|
      if path.include?(state)
        cycle_start = path.index(state)
        cycles << path[cycle_start..-1] + [state]
        next
      end

      next if visited.include?(state)

      visited.add(state)

      @transitions[state].each do |transition|
        explore.call(transition[:to], path + [state])
      end
    end

    explore.call(start_state, [])
    cycles
  end
  
  def has_deadlock_state?
    # Deadlock: state with no outgoing transitions
    @transitions.each do |state, transitions|
      return state if transitions.empty? && !state.nil?
    end
    
    nil
  end
end

sm = StateMachine.new
sm.add_transition('idle', 'processing', 'start')
sm.add_transition('processing', 'waiting', 'wait')
sm.add_transition('waiting', 'processing', 'continue')
sm.add_transition('processing', 'complete', 'finish')

cycles = sm.find_reachable_cycles('idle')
cycles.each { |cycle| puts "Cycle: #{cycle.join(' -> ')}" }
# => Cycle: processing -> waiting -> processing

State machine analysis identifies potential infinite loops where the system cycles through states without reaching terminal conditions. This verification prevents non-terminating workflows and ensures all execution paths eventually complete or explicitly loop.

Performance Considerations

Cycle detection algorithm performance depends on data structure size, cycle presence location, and memory constraints. Different approaches optimize for various scenarios.

Time Complexity Analysis:

Floyd's cycle detection requires O(n) time where n represents the number of nodes in the structure. The fast pointer traverses at most 2n nodes before either detecting a cycle or reaching the end. The algorithm performs constant work per iteration, maintaining linear time complexity.

Hash-based detection achieves O(n) time with O(n) space, storing each visited node. Hash set operations (insertion and lookup) average O(1), resulting in linear overall complexity. This approach detects cycles on first revisit, minimizing traversal.

Graph DFS coloring requires O(V + E) time for V vertices and E edges. Each vertex transitions through three colors exactly once, and each edge is examined once. The algorithm explores the entire graph structure, making it suitable when full graph information is needed beyond cycle detection.

Union-Find with path compression achieves O(E α(V)) time where α represents the inverse Ackermann function, effectively constant for practical inputs. Each edge triggers find and union operations, both optimized through path compression and union by rank.

Space Complexity Trade-offs:

Floyd's algorithm uses O(1) auxiliary space, storing only two pointers regardless of structure size. This constant space usage makes it optimal for memory-constrained environments with sequential access patterns.

Hash-based approaches require O(n) space to store visited nodes. The actual memory consumption depends on hash set implementation overhead and load factor. Ruby's Set class uses Hash internally, adding object overhead per stored element.

DFS coloring needs O(V) space for color mappings plus O(V) recursion stack depth in worst case. Iterative implementations using explicit stacks have identical space requirements. The color map storage is unavoidable but provides reusable information for other graph algorithms.

Optimization Techniques:

class OptimizedCycleDetector
  # Brent's algorithm - reduces comparison operations
  def self.brent_cycle_detection(head)
    return false if head.nil?
    
    power = 1
    length = 1
    tortoise = head
    hare = head.next
    
    while hare
      if tortoise == hare
        return true
      end
      
      if power == length
        tortoise = hare
        power *= 2
        length = 0
      end
      
      hare = hare.next
      length += 1
    end
    
    false
  end
  
  # Memoized graph cycle detection. Keyed by object_id, so the cached
  # result goes stale if the graph is mutated after the first check.
  def self.cached_dfs_cycle_check(graph)
    @cycle_cache ||= {}
    cache_key = graph.object_id
    
    return @cycle_cache[cache_key] if @cycle_cache.key?(cache_key)
    
    detector = GraphCycleDetector.new(graph)
    result = detector.has_cycle?
    @cycle_cache[cache_key] = result
    
    result
  end
end

Brent's algorithm reduces constant factors by checking for cycles at exponentially increasing intervals. This decreases the number of comparisons while maintaining the same asymptotic complexity.

Early Termination Strategies:

class EarlyTerminationDetector
  def self.detect_cycle_with_limit(head, max_steps)
    visited = Set.new
    current = head
    steps = 0
    
    while current && steps < max_steps
      return { cycle: true, steps: steps } if visited.include?(current)
      
      visited.add(current)
      current = current.next
      steps += 1
    end
    
    if steps >= max_steps
      { timeout: true, steps: steps }
    else
      { cycle: false, steps: steps }
    end
  end
end

Early termination limits traversal when analyzing potentially infinite structures or when timeout constraints exist. This prevents indefinite execution while providing partial results indicating suspected cycle presence.

Parallel Detection for Large Graphs:

For graphs exceeding single-core processing capabilities, parallel cycle detection partitions the graph and performs independent checks on subgraphs. This approach requires coordination to detect cycles spanning partition boundaries. The coordination overhead often exceeds benefits for graphs fitting in memory, making parallelization effective only for distributed graph storage scenarios.

Memory-Efficient Approaches:

When memory constraints prevent hash-based tracking, modified Floyd's algorithm variants adapt to different structures. For graphs, random walk techniques sample paths to detect cycles probabilistically, trading certainty for reduced memory usage. These approaches suit approximate detection where false negatives are acceptable but false positives are not.

Common Patterns

Linked List Validation Pattern:

class LinkedList
  attr_reader :head
  
  def initialize
    @head = nil
  end
  
  def append(value)
    new_node = Node.new(value)
    
    if @head.nil?
      @head = new_node
    else
      current = @head
      visited = Set.new
      
      while current.next
        raise "Cycle detected during append" if visited.include?(current)
        visited.add(current)
        current = current.next
      end
      
      current.next = new_node
    end
  end
  
  def validate_structure
    return true if @head.nil?
    
    detector = LinkedListCycleDetector
    unless detector.has_cycle?(@head)
      return true
    end
    
    cycle_start = detector.find_cycle_start(@head)
    cycle_length = detector.cycle_length(@head)
    
    {
      valid: false,
      issue: "Cycle detected",
      entry_node: cycle_start.value,
      cycle_length: cycle_length
    }
  end
end

Validation patterns check structure integrity before operations that assume acyclic structure. Operations like computing length or finding the tail node enter infinite loops on cyclic lists without detection.

Dependency Graph Topological Ordering:

class DependencyResolver
  def initialize
    @graph = Graph.new
    @in_degree = Hash.new(0)
  end
  
  def add_dependency(task, depends_on)
    @graph.add_edge(depends_on, task)
    @in_degree[task] += 1
    # `||= 0` would be a no-op here: the hash default is 0, which is truthy
    # in Ruby, so the key would never be stored and zero in-degree vertices
    # would be invisible to the initial queue
    @in_degree[depends_on] = 0 unless @in_degree.key?(depends_on)
  end
  
  def resolve
    queue = @in_degree.select { |_, degree| degree == 0 }.keys
    result = []
    
    while !queue.empty?
      current = queue.shift
      result << current
      
      @graph.neighbors(current).each do |neighbor|
        @in_degree[neighbor] -= 1
        queue << neighbor if @in_degree[neighbor] == 0
      end
    end
    
    if result.length != @in_degree.length
      # Cycle detected - not all nodes processed
      remaining = @in_degree.keys - result
      raise "Circular dependency involving: #{remaining.join(', ')}"
    end
    
    result
  end
end

Topological ordering patterns combine cycle detection with useful output production. The algorithm fails gracefully on cyclic graphs while producing valid orderings for acyclic inputs.

Graph Component Cycle Analysis:

class ComponentAnalyzer
  def initialize(graph)
    @graph = graph
  end
  
  def analyze_components
    visited = Set.new
    components = []
    
    @graph.vertices.each do |vertex|
      next if visited.include?(vertex)
      
      component = []
      explore_component(vertex, visited, component)
      
      # Check if this component contains a cycle
      component_graph = extract_subgraph(component)
      detector = GraphCycleDetector.new(component_graph)
      has_cycle = detector.has_cycle?
      
      components << {
        vertices: component,
        cyclic: has_cycle
      }
    end
    
    components
  end
  
  private
  
  def explore_component(vertex, visited, component)
    visited.add(vertex)
    component << vertex
    
    @graph.neighbors(vertex).each do |neighbor|
      explore_component(neighbor, visited, component) unless visited.include?(neighbor)
    end
  end
  
  def extract_subgraph(vertices)
    subgraph = Graph.new
    vertices.each do |vertex|
      @graph.neighbors(vertex).each do |neighbor|
        subgraph.add_edge(vertex, neighbor) if vertices.include?(neighbor)
      end
    end
    subgraph
  end
end

Component analysis patterns separate graphs into connected components, analyzing each independently. This approach localizes cycle detection to relevant subgraphs, improving efficiency for disconnected graphs.

Recursive Structure Detection:

class RecursiveStructureDetector
  def self.detect_in_nested_hash(hash, key_path = [], ancestors = [])
    cycles = []
    current_ancestors = ancestors + [hash]

    hash.each do |key, value|
      next unless value.is_a?(Hash)

      current_path = key_path + [key]

      # Compare by object identity: a structurally equal but distinct
      # hash is not a cycle
      ancestor_index = current_ancestors.index { |h| h.equal?(value) }

      if ancestor_index
        cycles << {
          path: current_path,
          points_to: ancestor_index
        }
      else
        cycles.concat(detect_in_nested_hash(value, current_path, current_ancestors))
      end
    end

    cycles
  end
end

# Detecting cycles in nested data structures
data = { a: { b: {} } }
data[:a][:b][:c] = data[:a]  # Creates cycle

cycles = RecursiveStructureDetector.detect_in_nested_hash(data)
# => [{ path: [:a, :b, :c], points_to: 1 }]
# points_to indexes the ancestor chain from the root (0), so 1 is data[:a]

Recursive structure patterns detect cycles in nested data like JSON, XML, or object graphs. Object identity comparison prevents false positives from structurally similar but distinct objects.

Reference

Algorithm Comparison

Algorithm                 | Time      | Space | Structure         | Provides Entry Point
Floyd's Tortoise and Hare | O(n)      | O(1)  | Sequential only   | Yes, with additional pass
Hash Set Tracking         | O(n)      | O(n)  | Any               | Yes, immediately
DFS Color Marking         | O(V+E)    | O(V)  | Graphs            | Yes
Topological Sort          | O(V+E)    | O(V)  | Directed graphs   | No, detects presence only
Union-Find                | O(E α(V)) | O(V)  | Undirected graphs | No
Brent's Algorithm         | O(n)      | O(1)  | Sequential only   | Yes, with additional work

Detection Method Selection

Scenario                | Recommended Approach        | Rationale
Singly linked list      | Floyd's algorithm           | Constant space, optimal for sequential access
Doubly linked list      | Hash set tracking           | Bidirectional traversal reduces the benefit of two pointers
Directed graph          | DFS with colors             | Natural fit for depth-first traversal
Undirected graph        | Union-Find                  | Efficient component tracking
Memory constrained      | Floyd's or Brent's          | Minimal space usage
Need cycle path         | Hash set with path tracking | Enables path reconstruction
Dependency resolution   | Topological sort            | Combines detection with ordering
Large distributed graph | Parallel DFS variants       | Leverages distributed processing

Ruby Implementation Patterns

Pattern              | Code Template            | Use Case
Sequential detection | Slow and fast pointers   | Linked lists, function iteration
Graph traversal      | Recursive DFS with state | Dependency graphs, file systems
Hash tracking        | Set with membership test | General structures, quick detection
Component analysis   | Connected component DFS  | Disconnected graphs
Early termination    | Step counter with limit  | Potentially infinite structures

Common Edge Cases

Edge Case                    | Detection Consideration   | Handling
Empty structure              | No nodes to traverse      | Return false immediately
Single node with self-loop   | Trivial cycle             | Check if node points to itself
Two-node cycle               | Minimal cycle             | Fast pointer advances by 2
Long tail before cycle       | Detection delay           | Acceptable in O(n) algorithms
Multiple disconnected cycles | Component-level detection | Process each component separately
Cycle at graph entry         | Immediate detection       | First node marks cycle start

Complexity Characteristics

Characteristic | Floyd's        | Hash Set | DFS Coloring | Topological
Best case      | O(1), no cycle | O(n)     | O(V+E)       | O(V+E)
Average case   | O(n)           | O(n)     | O(V+E)       | O(V+E)
Worst case     | O(n)           | O(n)     | O(V+E)       | O(V+E)
Space          | O(1)           | O(n)     | O(V)         | O(V)
Preprocessing  | None           | None     | None         | Compute in-degrees, O(E)

Detection Result Information

Information Type | Floyd's             | Hash Set          | DFS                          | Description
Cycle exists     | Yes                 | Yes               | Yes                          | Binary detection result
Entry point      | Additional pass     | Immediate         | Path tracking                | First node in cycle
Cycle length     | Count after meeting | Tracking needed   | Path analysis                | Number of nodes in cycle
Full cycle path  | Complex             | Hash map variant  | Natural with parent tracking | All nodes forming cycle
Pre-cycle length | Calculate           | Position tracking | Path analysis                | Nodes before cycle entry