Overview
Asymptotic notation provides a mathematical framework for analyzing algorithm efficiency by describing how runtime or space requirements grow as input size increases. Three primary notations serve distinct purposes: Big O describes upper bounds (most often applied to worst-case behavior), Omega describes lower bounds (most often applied to best-case behavior), and Theta describes tight bounds (exact growth rates). These notations abstract away constant factors and lower-order terms to focus on the fundamental growth rate that dominates as input size approaches infinity.
The notations originated from number theory and were introduced to computer science by Donald Knuth in 1976. They address a critical problem in algorithm analysis: comparing algorithms across different hardware, programming languages, and implementation details. Rather than measuring absolute runtime in seconds or milliseconds, asymptotic notation characterizes how an algorithm scales, allowing meaningful comparisons independent of environmental factors.
Consider searching for an element in an unsorted array. The operation might complete immediately if the target appears first (best case), require examining every element if the target appears last or doesn't exist (worst case), or typically require checking half the elements on average. Asymptotic notation captures these scenarios mathematically:
def linear_search(array, target)
  array.each_with_index do |element, index|
    return index if element == target
  end
  nil
end
# Best case: Ω(1) - target at first position
linear_search([5, 2, 8, 1], 5) # => 0
# Worst case: O(n) - target at last position or absent
linear_search([5, 2, 8, 1], 1) # => 3
linear_search([5, 2, 8, 1], 9) # => nil
Algorithm analysis with asymptotic notation informs critical engineering decisions: choosing data structures, optimizing bottlenecks, predicting scalability limits, and establishing service level agreements. An O(n²) algorithm processing 1,000 records may execute acceptably fast but fails catastrophically at 1,000,000 records, where an O(n log n) alternative would remain viable.
Key Principles
Asymptotic notation relies on mathematical limits to describe function growth rates. For functions f(n) and g(n) representing algorithm complexity:
Big O Notation (O) defines an asymptotic upper bound. f(n) = O(g(n)) means there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀. This states that f grows no faster than g, possibly multiplied by some constant factor, beyond some threshold input size. Big O is conventionally used to characterize worst-case performance.
Omega Notation (Ω) defines an asymptotic lower bound. f(n) = Ω(g(n)) means there exist positive constants c and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀. This states that f grows at least as fast as g, multiplied by some constant factor, beyond some threshold. Omega is conventionally used to characterize best-case performance.
Theta Notation (Θ) defines an asymptotic tight bound. f(n) = Θ(g(n)) means f(n) = O(g(n)) and f(n) = Ω(g(n)) simultaneously. This states that f grows at the same rate as g, sandwiched between two constant multiples. Theta characterizes performance when best and worst cases have the same growth rate.
The key insight is that these notations ignore constant factors and lower-order terms. An algorithm requiring 5n² + 3n + 7 operations has complexity Θ(n²) because the quadratic term dominates as n grows large. The coefficient 5, the linear term 3n, and the constant 7 become negligible compared to n² when n reaches thousands or millions.
def quadratic_example(n)
  count = 0
  # Outer loop: n iterations
  n.times do |i|
    # Inner loop: n iterations
    n.times do |j|
      count += 1 # O(1) operation
    end
    count += i # O(1) operation
  end
  count += 7 # O(1) operation
  count
end
# Total operations: n² + n + 1
# Asymptotic complexity: Θ(n²)
# The n² term dominates
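The witness constants in the formal definitions can also be checked numerically. A quick sketch for the 5n² + 3n + 7 example above (the choice c = 6, n₀ = 5 is one valid witness for the upper bound; any larger values also work):

```ruby
# f(n) = 5n² + 3n + 7 is Θ(n²)
f = ->(n) { 5 * n**2 + 3 * n + 7 }

# O(n²) witness: c = 6, n₀ = 5
upper_holds = (5..1_000).all? { |n| f.call(n) <= 6 * n * n }
# Ω(n²) witness: c = 5, n₀ = 1 (f(n) ≥ 5n² trivially)
lower_holds = (1..1_000).all? { |n| f.call(n) >= 5 * n * n }

puts upper_holds && lower_holds # prints true
```

Checking a finite range is not a proof, but it quickly exposes a wrongly chosen constant.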
Mathematical properties govern how these notations compose:
Transitivity: If f = O(g) and g = O(h), then f = O(h). This allows chaining complexity relationships.
Reflexivity: f = O(f), f = Ω(f), and f = Θ(f) for all functions f. Every function bounds itself.
Symmetry: f = Θ(g) if and only if g = Θ(f). Tight bounds work bidirectionally.
Max Rule: If f = O(h) and g = O(h), then f + g = O(h). The maximum complexity dominates when combining algorithms.
Product Rule: If f = O(h) and g = O(k), then f·g = O(h·k). Nested operations multiply complexities.
These properties enable systematic complexity analysis of composite algorithms. When an algorithm performs multiple sequential operations, the operation with the highest complexity determines overall complexity. When operations nest, their complexities multiply.
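The max and product rules can be made concrete by counting basic operations; a small sketch (the helper name is illustrative):

```ruby
# Sequential phases add (max rule); nested loops multiply (product rule)
def sequential_then_nested(n)
  ops = 0
  n.times { ops += 1 }             # linear phase: n operations
  n.times { n.times { ops += 1 } } # nested phase: n * n operations
  ops                              # total n + n² = O(n²): the maximum dominates
end

sequential_then_nested(100) # => 10100
```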
The distinction between notations matters for precise algorithm characterization. Binary search runs in O(log n) time in the worst case and Ω(1) in the best case (target at the first probed midpoint), so its running time over all inputs has no single Theta bound, even though each case individually does (the worst case alone is Θ(log n)). Conversely, merge sort has Θ(n log n) complexity because its comparison count grows at the same rate regardless of input order.
def binary_search(sorted_array, target)
  low = 0
  high = sorted_array.length - 1
  while low <= high
    mid = (low + high) / 2
    return mid if sorted_array[mid] == target # Ω(1) best case
    if sorted_array[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end
  nil # O(log n) worst case
end
Asymptotic analysis focuses on growth rates as input size approaches infinity, which introduces important limitations. For small input sizes, an O(n²) algorithm with low constant factors may outperform an O(n log n) algorithm with high constant factors. The notation becomes meaningful when input sizes grow large enough that the growth rate dominates constant factors.
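This crossover can be illustrated with hypothetical cost models: an O(n²) algorithm with a small constant factor against an O(n log n) algorithm with a large one (the constants 2 and 100 are made up for illustration):

```ruby
def quadratic_cost(n)
  2 * n**2 # small constant factor
end

def linearithmic_cost(n)
  100 * n * Math.log2(n) # large constant factor
end

[8, 64, 512, 4096].each do |n|
  cheaper = quadratic_cost(n) < linearithmic_cost(n) ? "O(n²)" : "O(n log n)"
  puts "n=#{n}: #{cheaper} is cheaper"
end
# The quadratic model wins for small n; the linearithmic model
# overtakes it once n passes the crossover point
```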
Ruby Implementation
Analyzing Ruby code complexity requires understanding how Ruby's built-in methods scale and how to reason about algorithm structure. Ruby's dynamic nature and abstractions can obscure complexity, making explicit analysis essential.
Array operations in Ruby have well-defined complexity characteristics:
# O(1) - Constant time operations
array = [1, 2, 3, 4, 5]
array[2] # Direct index access
array.push(6) # Append to end
array.pop # Remove from end
array.length # Size query
# O(n) - Linear time operations
array.include?(3) # Search entire array
array.find { |x| x > 2 } # Iterate until found
array.sum # Iterate entire array
array.reverse # Create reversed copy
# O(n) - Linear for these operations
array.unshift(0) # Insert at beginning (shifts all elements)
array.shift # Remove from beginning (shifts all elements)
array.delete_at(2) # Remove at index (shifts subsequent elements)
Hash operations provide different complexity characteristics based on Ruby's hash table implementation:
# O(1) average case - Hash operations
hash = { a: 1, b: 2, c: 3 }
hash[:a] # Key lookup
hash[:d] = 4 # Key insertion
hash.delete(:b) # Key deletion
hash.key?(:c) # Key existence check
# O(n) - Operations requiring full iteration
hash.values # Collect all values
hash.each { |k, v| puts "#{k}: #{v}" } # Iterate all pairs
hash.select { |k, v| v > 1 } # Filter operations
Nested iteration multiplies complexity. Each additional nested loop multiplies by the iteration count:
def nested_complexity(n)
  # Single loop: O(n)
  result = []
  n.times do |i|
    result << i * 2
  end
  # Double nested loop: O(n²)
  matrix = []
  n.times do |i|
    n.times do |j|
      matrix << [i, j]
    end
  end
  # Triple nested loop: O(n³)
  cube = []
  n.times do |i|
    n.times do |j|
      n.times do |k|
        cube << [i, j, k]
      end
    end
  end
  { single: result.length, matrix: matrix.length, cube: cube.length }
end
nested_complexity(10)
# => {:single=>10, :matrix=>100, :cube=>1000}
Recursive algorithms require different analysis techniques. The complexity depends on recursion depth and work per recursive call:
def factorial(n)
  return 1 if n <= 1
  n * factorial(n - 1)
end
# Recursion depth: O(n)
# Work per call: O(1)
# Total complexity: O(n)
def fibonacci_naive(n)
  return n if n <= 1
  fibonacci_naive(n - 1) + fibonacci_naive(n - 2)
end
# Creates binary tree of calls
# Complexity: O(2ⁿ) - exponential
def fibonacci_memoized(n, memo = {})
  return n if n <= 1
  return memo[n] if memo[n]
  memo[n] = fibonacci_memoized(n - 1, memo) + fibonacci_memoized(n - 2, memo)
end
# Each value computed once
# Complexity: O(n)
Ruby's enumerable methods have complexity determined by their implementation:
array = (1..1000).to_a
# O(n) - Single pass operations
array.map { |x| x * 2 }
array.select { |x| x.even? }
array.reduce(:+)
array.any? { |x| x > 500 }
# O(n log n) - Sorting operations
array.sort
array.sort_by { |x| -x }
# O(n²) - Nested enumerable operations
array.combination(2).to_a # Generates n*(n-1)/2 pairs
array.repeated_permutation(2).to_a # Generates n² permutations
Analyzing algorithm complexity in Ruby requires identifying loops, recursive calls, and built-in method complexities:
def analyze_complexity_example(array, target)
  # Step 1: Sort array - O(n log n)
  sorted = array.sort
  # Step 2: Binary search - O(log n)
  index = binary_search(sorted, target)
  # Step 3: Linear scan for verification - O(n)
  verified = sorted.select { |x| x == target }
  { index: index, all_matches: verified }
end
# Dominated by O(n log n) sorting operation
# Overall complexity: O(n log n)
def combined_operations(data)
  # O(n) operation
  filtered = data.select { |x| x > 0 }
  # O(n log n) operation
  sorted = filtered.sort
  # O(n²) operation
  pairs = sorted.combination(2).select { |a, b| a + b > 100 }
  pairs
end
# Dominated by O(n²) combination operation
# Overall complexity: O(n²)
Space complexity measures memory usage growth. Ruby's garbage collection and object allocation affect space analysis:
def space_complexity_examples(n)
  # O(1) space - constant additional memory
  def constant_space(n)
    sum = 0
    n.times { |i| sum += i }
    sum
  end
  # O(n) space - array size grows with input
  def linear_space(n)
    array = []
    n.times { |i| array << i }
    array
  end
  # O(n²) space - nested array structure
  def quadratic_space(n)
    matrix = Array.new(n) { Array.new(n, 0) }
    matrix
  end
  matrix = quadratic_space(n)
  {
    constant: constant_space(n),
    linear: linear_space(n).length,
    quadratic: matrix.length * matrix.first.length
  }
end
Practical Examples
Analyzing sorting algorithms demonstrates how different approaches yield different complexity characteristics:
def bubble_sort(array)
  n = array.length
  (n - 1).times do |i|
    (n - i - 1).times do |j|
      if array[j] > array[j + 1]
        array[j], array[j + 1] = array[j + 1], array[j]
      end
    end
  end
  array
end
# Outer loop: n-1 passes
# Inner loop: n-i-1 comparisons (decreases each pass)
# Total comparisons: n(n-1)/2 ≈ n²/2
# Best case (sorted): Ω(n²) - this version has no early exit, so it still performs all comparisons
# Worst case (reversed): O(n²)
# Average case: Θ(n²)
numbers = [64, 34, 25, 12, 22, 11, 90]
bubble_sort(numbers.dup)
# => [11, 12, 22, 25, 34, 64, 90]
Comparing search strategies illustrates how algorithm design affects complexity:
def linear_search_with_analysis(array, target)
  comparisons = 0
  result = nil
  array.each_with_index do |element, index|
    comparisons += 1
    if element == target
      result = index
      break
    end
  end
  { index: result, comparisons: comparisons }
end
def binary_search_with_analysis(sorted_array, target)
  comparisons = 0
  low = 0
  high = sorted_array.length - 1
  while low <= high
    comparisons += 1
    mid = (low + high) / 2
    if sorted_array[mid] == target
      return { index: mid, comparisons: comparisons }
    elsif sorted_array[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end
  { index: nil, comparisons: comparisons }
end
array = (1..1000).to_a
target = 750
linear_search_with_analysis(array, target)
# => {:index=>749, :comparisons=>750}
# Required 750 comparisons for O(n) linear search
binary_search_with_analysis(array, target)
# => {:index=>749, :comparisons=>2}
# This target happens to land on the second probed midpoint;
# O(log n) guarantees at most ~10 comparisons for n = 1000
String manipulation operations demonstrate how complexity grows with different operations:
def string_concatenation_naive(n)
  result = ""
  n.times do
    result += "a" # Creates new string each iteration
  end
  result
end
# Each += creates new string and copies existing content
# Iteration 1: 1 character copied
# Iteration 2: 2 characters copied
# Iteration n: n characters copied
# Total: 1 + 2 + ... + n = n(n+1)/2
# Complexity: O(n²)
def string_concatenation_optimized(n)
  result = []
  n.times do
    result << "a" # Appends to array: O(1) amortized
  end
  result.join # Single concatenation: O(n)
end
# Array append: O(n) total
# Final join: O(n)
# Complexity: O(n)
# Demonstrate performance difference
require 'benchmark'
n = 10_000
Benchmark.bm(20) do |x|
  x.report("naive (O(n²)):") { string_concatenation_naive(n) }
  x.report("optimized (O(n)):") { string_concatenation_optimized(n) }
end
Hash table operations versus array operations show tradeoffs between different data structures:
def find_duplicates_array(array)
  duplicates = []
  array.each_with_index do |element, i|
    # Check if element appears later in array
    if array[(i + 1)..-1].include?(element)
      duplicates << element unless duplicates.include?(element)
    end
  end
  duplicates
end
# Outer loop: O(n)
# Inner include?: O(n)
# Total: O(n²)
def find_duplicates_hash(array)
  seen = {}
  duplicates = {}
  array.each do |element|
    if seen[element]
      duplicates[element] = true # O(1) hash insert keeps the pass linear
    else
      seen[element] = true
    end
  end
  duplicates.keys
end
# Hash lookup: O(1) average
# Single iteration: O(n)
# Total: O(n)
data = [1, 2, 3, 2, 4, 5, 3, 6, 7, 4]
find_duplicates_array(data) # => [2, 3, 4]
find_duplicates_hash(data) # => [2, 3, 4]
# Same result, vastly different performance at scale
Graph traversal algorithms demonstrate complexity analysis with multiple dimensions:
require 'set'
class Graph
  def initialize
    @adjacency_list = Hash.new { |h, k| h[k] = [] }
  end
  def add_edge(from, to)
    @adjacency_list[from] << to
  end
  def breadth_first_search(start, target)
    return start if start == target
    visited = Set.new([start])
    queue = [start]
    until queue.empty?
      node = queue.shift
      @adjacency_list[node].each do |neighbor|
        return neighbor if neighbor == target
        unless visited.include?(neighbor)
          visited.add(neighbor)
          queue.push(neighbor)
        end
      end
    end
    nil
  end
end
# V = number of vertices, E = number of edges
# Each vertex visited once: O(V)
# Each edge examined once: O(E)
# Total: O(V + E)
graph = Graph.new
graph.add_edge(1, 2)
graph.add_edge(1, 3)
graph.add_edge(2, 4)
graph.add_edge(3, 4)
graph.add_edge(4, 5)
graph.breadth_first_search(1, 5)
# Explores vertices: 1 -> 2, 3 -> 4 -> 5
Performance Considerations
Asymptotic complexity provides theoretical guidance but actual performance depends on constant factors, hardware characteristics, and implementation details. An O(n log n) algorithm may perform worse than an O(n²) algorithm for small input sizes if it has significantly higher constant factors or worse cache locality.
Ruby's interpreter overhead affects constant factors. Operations requiring method dispatch, block calls, or object allocation carry additional cost compared to languages with direct machine code execution:
def compare_approaches(n)
  # Approach 1: Block-based iteration
  time1 = Time.now
  result1 = (1..n).reduce(0) { |sum, i| sum + i }
  elapsed1 = Time.now - time1
  # Approach 2: Direct formula (O(1) vs O(n))
  time2 = Time.now
  result2 = n * (n + 1) / 2
  elapsed2 = Time.now - time2
  {
    iterative: { result: result1, time: elapsed1 },
    formula: { result: result2, time: elapsed2 }
  }
end
compare_approaches(1_000_000)
# Iterative: O(n) with Ruby overhead
# Formula: O(1) with single calculation
# Formula vastly faster despite both being "fast" asymptotically
Memory hierarchy effects influence real performance. Algorithms with poor cache locality perform worse than predicted by complexity analysis alone. Sequential array access benefits from CPU cache prefetching, while random access patterns cause cache misses:
def sequential_access(matrix)
  sum = 0
  # Access by rows (cache-friendly)
  matrix.each do |row|
    row.each do |element|
      sum += element
    end
  end
  sum
end
# Better cache locality despite same O(n²) complexity
def random_access(matrix)
  sum = 0
  size = matrix.length
  # Random access pattern (cache-unfriendly)
  size.times do
    size.times do
      sum += matrix[rand(size)][rand(size)]
    end
  end
  sum
end
# Worse cache performance despite same O(n²) complexity
Amortized analysis considers average cost over sequences of operations rather than worst-case individual operations. Ruby arrays use dynamic resizing with amortized O(1) append:
# Ruby does not expose an Array's internal capacity directly;
# ObjectSpace.memsize_of reveals when the backing buffer grows
require 'objspace'
def demonstrate_amortized_complexity(n)
  array = []
  resize_points = []
  n.times do |i|
    before = ObjectSpace.memsize_of(array)
    array << i
    # A jump in memsize means this append triggered a reallocation
    resize_points << i if ObjectSpace.memsize_of(array) > before
  end
  resize_points
end
# Most appends: O(1)
# Occasional resizes: O(n) for that operation
# Amortized across all operations: O(1) per append
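The amortized bound can be verified by simulating a dynamic array that doubles its capacity when full and counting element copies (a simplified model of Ruby's actual growth policy):

```ruby
# Total copies stay under 2n, so the per-append cost is O(1) amortized
def total_copy_cost(n)
  capacity = 1
  size = 0
  copies = 0
  n.times do
    if size == capacity
      copies += size # resizing copies every existing element
      capacity *= 2
    end
    size += 1
  end
  copies
end

total_copy_cost(1000) # => 1023, well under 2 * 1000
```

The resize costs form the geometric series 1 + 2 + 4 + … , which sums to less than 2n, which is why the occasional O(n) resize averages out to O(1) per append.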
Algorithm complexity differs for different input characteristics. Quicksort demonstrates how best and worst cases vary dramatically:
def quicksort(array)
  return array if array.length <= 1
  pivot = array[array.length / 2]
  left = array.select { |x| x < pivot }
  middle = array.select { |x| x == pivot }
  right = array.select { |x| x > pivot }
  quicksort(left) + middle + quicksort(right)
end
# Best/Average case: O(n log n)
# Balanced partitions at each level
random_data = (1..1000).to_a.shuffle
quicksort(random_data)
# Worst case: O(n²) when every partition is maximally unbalanced.
# With a first-element pivot, already-sorted input triggers this;
# the middle-element pivot used here handles sorted input well,
# but inputs that defeat any fixed pivot rule still exist
sorted_data = (1..1000).to_a
quicksort(sorted_data)
Space complexity interacts with time complexity. Some algorithms trade space for time improvements:
def fibonacci_space_time_tradeoff
  # Time: O(2ⁿ), Space: O(n) for call stack
  def recursive_fibonacci(n)
    return n if n <= 1
    recursive_fibonacci(n - 1) + recursive_fibonacci(n - 2)
  end
  # Time: O(n), Space: O(n) for memoization
  def memoized_fibonacci(n, cache = {})
    return n if n <= 1
    cache[n] ||= memoized_fibonacci(n - 1, cache) + memoized_fibonacci(n - 2, cache)
  end
  # Time: O(n), Space: O(1) - only tracking last two values
  def iterative_fibonacci(n)
    return n if n <= 1
    prev, curr = 0, 1
    (n - 1).times do
      prev, curr = curr, prev + curr
    end
    curr
  end
  n = 30
  {
    recursive: recursive_fibonacci(n),
    memoized: memoized_fibonacci(n),
    iterative: iterative_fibonacci(n)
  }
end
# All produce same result with different tradeoffs
Input size threshold determines when complexity matters. Small inputs may favor simpler O(n²) algorithms over complex O(n log n) algorithms with high setup costs:
def adaptive_sort(array)
  # Use insertion sort for small arrays
  if array.length < 10
    insertion_sort(array) # O(n²) but fast for small n
  else
    array.sort # O(n log n) optimized implementation
  end
end
def insertion_sort(array)
  (1...array.length).each do |i|
    key = array[i]
    j = i - 1
    while j >= 0 && array[j] > key
      array[j + 1] = array[j]
      j -= 1
    end
    array[j + 1] = key
  end
  array
end
# Hybrid approach leverages constant factors
Common Patterns
Certain algorithm structures produce predictable complexity patterns. Recognizing these patterns accelerates complexity analysis without detailed calculation.
Single loop iterating over input yields O(n) linear complexity:
# Pattern: Single pass over input
def linear_pattern_examples(array)
  # Summing elements
  sum = array.reduce(0, :+) # O(n)
  # Finding maximum
  max = array.max # O(n)
  # Filtering elements
  filtered = array.select(&:even?) # O(n)
  # Transforming elements
  transformed = array.map { |x| x * 2 } # O(n)
  { sum: sum, max: max, filtered: filtered, transformed: transformed }
end
Nested loops where each iterates over input yield O(n²) quadratic complexity:
# Pattern: Nested iteration
def quadratic_pattern_examples(array)
  # Finding all pairs
  pairs = []
  array.each do |first|
    array.each do |second|
      pairs << [first, second]
    end
  end
  # Computing pairwise distances
  distances = []
  array.each_with_index do |x, i|
    array.each_with_index do |y, j|
      distances << (x - y).abs if i < j
    end
  end
  { pairs: pairs.length, distances: distances }
end
Dividing input size in half each iteration yields O(log n) logarithmic complexity:
# Pattern: Binary division
def logarithmic_pattern_examples
  # Binary search
  def binary_search_pattern(sorted_array, target)
    low, high = 0, sorted_array.length - 1
    while low <= high
      mid = (low + high) / 2
      return mid if sorted_array[mid] == target
      if sorted_array[mid] < target
        low = mid + 1
      else
        high = mid - 1
      end
    end
    nil
  end
  # Finding power of 2
  def find_power_of_two(n)
    power = 0
    value = 1
    while value < n
      value *= 2
      power += 1
    end
    power
  end
  {
    search: binary_search_pattern([1, 3, 5, 7, 9], 7),
    power: find_power_of_two(100)
  }
end
Divide-and-conquer algorithms combining division with linear merge operations yield O(n log n) linearithmic complexity:
# Pattern: Divide, conquer, and merge
def merge_sort(array)
  return array if array.length <= 1
  mid = array.length / 2
  left = merge_sort(array[0...mid])
  right = merge_sort(array[mid..-1])
  merge(left, right)
end
def merge(left, right)
  result = []
  until left.empty? || right.empty?
    if left.first <= right.first
      result << left.shift
    else
      result << right.shift
    end
  end
  result + left + right
end
# Recursion depth: log n
# Work per level: n
# Total: O(n log n)
Generating all subsets yields exponential O(2ⁿ) complexity, and generating all permutations grows even faster, at O(n!):
# Pattern: Generating all combinations
def exponential_pattern_examples(array)
  # All subsets
  def power_set(array)
    return [[]] if array.empty?
    first = array.first
    rest = array[1..-1]
    subsets_without = power_set(rest)
    subsets_with = subsets_without.map { |subset| [first] + subset }
    subsets_without + subsets_with
  end
  # All permutations
  def permutations(array)
    return [array] if array.length <= 1
    result = []
    array.each_with_index do |element, i|
      rest = array[0...i] + array[(i + 1)..-1]
      permutations(rest).each do |perm|
        result << ([element] + perm)
      end
    end
    result
  end
  {
    subsets: power_set(array).length, # 2ⁿ
    permutations: permutations(array).length # n!
  }
end
Dynamic programming patterns convert exponential recursion to polynomial complexity through memoization:
# Pattern: Memoization for overlapping subproblems
def dynamic_programming_pattern
  # Without memoization: O(2ⁿ)
  def fibonacci_exponential(n)
    return n if n <= 1
    fibonacci_exponential(n - 1) + fibonacci_exponential(n - 2)
  end
  # With memoization: O(n)
  def fibonacci_linear(n, cache = {})
    return n if n <= 1
    cache[n] ||= fibonacci_linear(n - 1, cache) + fibonacci_linear(n - 2, cache)
  end
  # Bottom-up approach: O(n)
  def fibonacci_iterative(n)
    return n if n <= 1
    dp = Array.new(n + 1)
    dp[0], dp[1] = 0, 1
    (2..n).each do |i|
      dp[i] = dp[i - 1] + dp[i - 2]
    end
    dp[n]
  end
  n = 20
  {
    exponential: fibonacci_exponential(n),
    linear_memo: fibonacci_linear(n),
    linear_iterative: fibonacci_iterative(n)
  }
end
Solutions that begin by sorting inherit O(n log n) as a cost floor, since the sort dominates any subsequent linear pass:
# Pattern: Sort then process
def sorting_based_patterns(array)
  # Finding median requires sorting
  def find_median(array)
    sorted = array.sort # O(n log n)
    mid = sorted.length / 2
    sorted.length.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
  end
  # Finding closest pair requires sorting
  def closest_pair(array)
    sorted = array.sort # O(n log n)
    min_diff = Float::INFINITY
    pair = nil
    (0...sorted.length - 1).each do |i| # O(n)
      diff = sorted[i + 1] - sorted[i]
      if diff < min_diff
        min_diff = diff
        pair = [sorted[i], sorted[i + 1]]
      end
    end
    pair
  end
  {
    median: find_median(array),
    closest: closest_pair(array)
  }
end
# Dominated by O(n log n) sort
Reference
Complexity Classes
| Notation | Name | Growth Rate | Example Operations |
|---|---|---|---|
| O(1) | Constant | Stays constant | Array index access, hash lookup, variable assignment |
| O(log n) | Logarithmic | Grows by a constant amount each time input doubles | Binary search, balanced tree operations, finding power of 2 |
| O(n) | Linear | Proportional to input | Array iteration, linear search, finding min/max |
| O(n log n) | Linearithmic | n times log n | Merge sort, quicksort average case, heap sort |
| O(n²) | Quadratic | Proportional to input squared | Bubble sort, selection sort, nested loops |
| O(n³) | Cubic | Proportional to input cubed | Triple nested loops, naive matrix multiplication |
| O(2ⁿ) | Exponential | Doubles with each additional input element | Generating all subsets, recursive fibonacci without memoization |
| O(n!) | Factorial | Extremely rapid growth | Generating all permutations, traveling salesman brute force |
Mathematical Definitions
| Notation | Definition | Meaning |
|---|---|---|
| f(n) = O(g(n)) | ∃c, n₀ > 0: f(n) ≤ c·g(n) for all n ≥ n₀ | f grows no faster than g |
| f(n) = Ω(g(n)) | ∃c, n₀ > 0: f(n) ≥ c·g(n) for all n ≥ n₀ | f grows at least as fast as g |
| f(n) = Θ(g(n)) | f(n) = O(g(n)) and f(n) = Ω(g(n)) | f grows at same rate as g |
| f(n) = o(g(n)) | lim(n→∞) f(n)/g(n) = 0 | f grows strictly slower than g |
| f(n) = ω(g(n)) | lim(n→∞) f(n)/g(n) = ∞ | f grows strictly faster than g |
Common Ruby Operations
| Operation | Complexity | Notes |
|---|---|---|
| array[i] | O(1) | Direct memory access |
| array.push(x) | O(1) amortized | May trigger resize |
| array.pop | O(1) | Removes last element |
| array.unshift(x) | O(n) | Shifts all elements |
| array.shift | O(n) | Shifts remaining elements |
| array.include?(x) | O(n) | Linear search |
| array.sort | O(n log n) | Optimized quicksort/mergesort |
| hash[key] | O(1) average | Hash collision worst case O(n) |
| hash[key] = value | O(1) average | May trigger rehash |
| hash.delete(key) | O(1) average | Removes key-value pair |
| string + string | O(n) | Creates new string |
| string.concat(x) | O(k) amortized | Appends in place; k = length of x |
| array.select | O(n) | Iterates entire array |
| array.map | O(n) | Transforms each element |
Complexity Comparison
| Input Size (n) | O(1) | O(log n) | O(n) | O(n log n) | O(n²) | O(2ⁿ) |
|---|---|---|---|---|---|---|
| 10 | 1 | 3 | 10 | 33 | 100 | 1024 |
| 100 | 1 | 7 | 100 | 664 | 10000 | 1.27×10³⁰ |
| 1000 | 1 | 10 | 1000 | 9966 | 1000000 | Too large |
| 10000 | 1 | 13 | 10000 | 132877 | 100000000 | Too large |
Rules for Complexity Analysis
| Rule | Description | Example |
|---|---|---|
| Drop constants | Ignore constant multipliers | 5n + 3 becomes O(n) |
| Drop lower terms | Keep only highest-order term | n² + n + 1 becomes O(n²) |
| Sequential adds | Max of sequential operations | O(n) + O(n²) becomes O(n²) |
| Nested multiplies | Multiply nested operation complexities | O(n) nested in O(n) becomes O(n²) |
| Logarithm base irrelevant | All logs equivalent asymptotically | log₂(n) and log₁₀(n) both O(log n) |
| Best vs worst | Use appropriate notation | Best Ω, worst O, tight Θ |
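The first two rules can be sanity-checked numerically: the ratio of n² + n + 1 to its dominant term approaches 1 as n grows, so the lower-order terms vanish asymptotically:

```ruby
f = ->(n) { n**2 + n + 1 }
ratios = [10, 100, 10_000].map { |n| (f.call(n).to_f / n**2).round(4) }
p ratios # => [1.11, 1.0101, 1.0001]
```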