Overview
Arrays represent contiguous blocks of memory that store elements of the same type in sequential order. Each element occupies a fixed position accessible through integer indices, providing constant-time direct access to any element. The array data structure forms one of the most fundamental building blocks in computer science, serving as the foundation for more complex data structures and algorithms.
Static arrays have fixed capacity determined at creation time. Once allocated, the array cannot grow or shrink, and attempting to add elements beyond capacity results in errors or undefined behavior. Dynamic arrays overcome this limitation by automatically resizing when capacity is reached, creating the illusion of unbounded growth while maintaining the performance characteristics of array access.
Ruby's Array class implements dynamic arrays natively. When declaring arr = [] or arr = Array.new, Ruby allocates an array with automatic resizing. The runtime manages capacity and handles reallocation transparently, hiding the complexity of memory management from developers.
# Static-style array with fixed initial size
fixed = Array.new(5) # Creates array with 5 nil elements
# => [nil, nil, nil, nil, nil]
# Dynamic array that grows automatically
dynamic = []
dynamic << 1 << 2 << 3 # Appends elements without capacity concerns
# => [1, 2, 3]
The distinction between logical size (number of elements stored) and physical capacity (allocated memory slots) defines dynamic array behavior. When size reaches capacity, the array allocates a new, larger block of memory, copies existing elements to the new location, and deallocates the old memory. This operation, called reallocation or resizing, occurs periodically rather than on every insertion.
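These mechanics can be sketched in plain Ruby. The DynamicArray class below is illustrative only (Ruby's Array already does all of this in C): it uses a fixed-size Ruby array as a stand-in for raw memory so that size, capacity, and reallocation become visible.

```ruby
# Illustrative sketch: a dynamic array built on a fixed-size buffer.
class DynamicArray
  attr_reader :size, :capacity

  def initialize
    @capacity = 1
    @size = 0
    @buffer = Array.new(@capacity) # "allocated" slots standing in for raw memory
  end

  def push(value)
    resize(@capacity * 2) if @size == @capacity # full: reallocate first
    @buffer[@size] = value
    @size += 1
    self
  end

  def [](index)
    raise IndexError, "index #{index} out of bounds" if index < 0 || index >= @size
    @buffer[index]
  end

  private

  # Reallocation: new block, copy every element, drop the old block
  def resize(new_capacity)
    new_buffer = Array.new(new_capacity)
    @size.times { |i| new_buffer[i] = @buffer[i] }
    @buffer = new_buffer
    @capacity = new_capacity
  end
end

arr = DynamicArray.new
8.times { |i| arr.push(i) }
arr.size     # => 8
arr.capacity # => 8 (grew 1 -> 2 -> 4 -> 8)
```

Note how size and capacity move independently: size changes on every push, while capacity only changes on the resizes at pushes 2, 3, 5, and so on.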
Key Principles
Contiguous Memory Layout
Arrays store elements in adjacent memory addresses. If an integer array begins at memory address 1000 and each integer occupies 4 bytes, element 0 resides at address 1000, element 1 at address 1004, element 2 at address 1008, and so forth. This layout enables constant-time access through pointer arithmetic: base_address + (index * element_size) computes any element's location directly.
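The address formula can be checked with ordinary arithmetic. The base address and element size below are hypothetical values for illustration, not real memory addresses:

```ruby
# Hypothetical layout: 4-byte integers starting at address 1000
base_address = 1000
element_size = 4

address = ->(index) { base_address + index * element_size }

address.call(0) # => 1000
address.call(1) # => 1004
address.call(2) # => 1008
```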
Contiguous storage provides cache locality. When the CPU loads one array element into cache, adjacent elements often come along for the ride. Sequential array traversal exhibits excellent cache performance compared to scattered data structures like linked lists, where each node may reside in disparate memory regions.
Index-Based Access
Arrays map integer indices to elements. Index 0 refers to the first element, index 1 to the second, continuing to index n-1 for an array of size n. This zero-based indexing matches memory offset calculations and has become conventional in most programming languages.
Out-of-bounds access—referencing indices less than 0 or greater than or equal to size—represents a critical error category. Languages handle this differently: C permits undefined behavior, Java throws ArrayIndexOutOfBoundsException, and Ruby returns nil for out-of-bounds reads with [] but raises IndexError from fetch and a handful of other methods.
arr = [10, 20, 30]
arr[1] # Valid index access
# => 20
arr[5] # Beyond array bounds
# => nil
arr[-1] # Negative index, counts from end
# => 30
arr[-4] # Beyond negative bounds
# => nil
Dynamic Resizing Strategy
When a dynamic array reaches capacity, it allocates new memory with increased capacity, typically 1.5× to 2× the current capacity. This geometric growth strategy ensures that n insertions require O(n) total time despite individual reallocations taking O(n) time to copy elements. The amortized cost per insertion remains O(1).
Growth factors balance memory efficiency and performance. A factor of 2 provides fast growth but may waste memory. A factor of 1.5 uses memory more conservatively but requires more frequent reallocations. Ruby's Array class uses different growth strategies depending on array size, starting with smaller increments and switching to multiplicative growth for larger arrays.
Memory Management
Dynamic arrays maintain a capacity buffer beyond current size. When capacity exceeds size, the array has room for additional elements without reallocation. This buffer reduces reallocation frequency but consumes memory for empty slots.
Shrinking operations create complexity. When elements are removed, the array may maintain its capacity, resulting in a sparse array with significant unused space. Some implementations shrink capacity when size drops below a threshold, but shrinking risks triggering reallocation cycles if the array subsequently grows again.
arr = Array.new(1000) # Allocates capacity for 1000 elements
arr << 1 # Size is 1, but capacity remains 1000
# Ruby manages capacity internally
# No direct capacity query method, unlike C++'s std::vector with capacity()
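One common shrink policy, sketched here as an assumption rather than Ruby's actual (internal, undocumented) behavior, halves capacity only once the array falls below a quarter full:

```ruby
# Hypothetical shrink rule: halve capacity when less than a quarter full.
# This is a common textbook policy, not Ruby's documented behavior.
def capacity_after_pop(size, capacity)
  return capacity unless size < capacity / 4
  [capacity / 2, 1].max # never shrink below one slot
end

capacity_after_pop(100, 128) # => 128 (still over a quarter full)
capacity_after_pop(31, 128)  # => 64  (below a quarter: shrink)
```

Leaving a gap between the growth trigger (full) and the shrink trigger (quarter full) is what prevents thrashing: an array hovering near a capacity boundary will not reallocate on every alternating push and pop.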
Resizing Complexity
Individual resizing operations take O(n) time, where n is the current array size, because all elements must be copied to new memory. However, the frequency of resizing makes amortized analysis relevant.
For n insertions with a doubling strategy:
- Insertion 1: no resize
- Insertion 2: resize, copy 1 element
- Insertion 3: resize, copy 2 elements
- Insertion 5: resize, copy 4 elements
- Insertion 9: resize, copy 8 elements
Total copies: 1 + 2 + 4 + 8 + ... + n/2 = n - 1
Total time: O(n) for n insertions, yielding O(1) amortized time per insertion. This mathematical property makes dynamic arrays practical despite expensive individual resizing operations.
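A short simulation confirms the arithmetic without allocating anything:

```ruby
# Simulation only: count elements copied while appending n items to a
# buffer that doubles when full. No real allocation happens here.
def copies_for_doubling(n)
  capacity = 1
  size = 0
  copies = 0
  n.times do
    if size == capacity
      copies += size # a resize copies every current element
      capacity *= 2
    end
    size += 1
  end
  copies
end

copies_for_doubling(1024) # => 1023, i.e. n - 1
```

When n is not a power of two the total is slightly higher (the final capacity overshoots n), but it stays below 2n, so the O(1) amortized claim holds either way.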
Ruby Implementation
Ruby's Array class provides dynamic array functionality with automatic memory management. The implementation maintains internal capacity tracking and handles resizing transparently. Developers interact with arrays through a rich API without managing memory directly.
Array Creation
Ruby offers multiple array construction methods. Literal syntax [] creates empty arrays. The Array.new constructor accepts size and optional default values. Block initialization generates arrays with computed elements.
# Literal syntax
empty = []
literal = [1, 2, 3, 4, 5]
# Constructor with size
sized = Array.new(10) # 10 nil elements
# => [nil, nil, nil, nil, nil, nil, nil, nil, nil, nil]
# Constructor with default value
defaults = Array.new(5, 0) # 5 zeros
# => [0, 0, 0, 0, 0]
# WARNING: Object reference trap
trap = Array.new(3, []) # All elements reference SAME array
trap[0] << 1
# => [[1], [1], [1]] # All modified!
# Block initialization for unique objects
safe = Array.new(3) { [] } # Each element gets unique array
safe[0] << 1
# => [[1], [], []] # Only first element modified
Dynamic Growth Operations
The shovel operator << and push method append elements to the array end. Both trigger resizing when capacity is reached. The unshift method prepends elements to the beginning, requiring all existing elements to shift one position right—an O(n) operation even without resizing.
arr = []
# Append operations (O(1) amortized)
arr << 10
arr.push(20)
arr.push(30, 40, 50) # Push accepts multiple arguments
# => [10, 20, 30, 40, 50]
# Prepend operation (O(n))
arr.unshift(5)
# => [5, 10, 20, 30, 40, 50]
# Insert at arbitrary position (O(n))
arr.insert(3, 25) # Insert 25 at index 3
# => [5, 10, 20, 25, 30, 40, 50]
Dynamic Shrinking Operations
The pop method removes and returns the last element in O(1) time. The shift method removes the first element but requires shifting all remaining elements left, taking O(n) time. The delete_at method removes elements at specific indices with O(n) complexity for shifting.
arr = [10, 20, 30, 40, 50]
# Remove last element (O(1))
last = arr.pop
# last => 50
# arr => [10, 20, 30, 40]
# Remove first element (O(n))
first = arr.shift
# first => 10
# arr => [20, 30, 40]
# Remove at index (O(n))
arr.delete_at(1) # Removes and returns the element at index 1 (30)
# arr => [20, 40]
Accessing Elements
Ruby arrays support positive and negative indexing. Negative indices count from the array end: -1 refers to the last element, -2 to the second-to-last, and so on. The fetch method provides bounds checking with optional default values or block execution for missing indices.
arr = [10, 20, 30, 40, 50]
arr[0] # => 10
arr[2] # => 30
arr[-1] # => 50
arr[-2] # => 40
arr[10] # => nil (out of bounds)
# Fetch with exception
arr.fetch(10) # Raises IndexError
# Fetch with default
arr.fetch(10, -1) # => -1
# Fetch with block
arr.fetch(10) { |i| "Index #{i} not found" }
# => "Index 10 not found"
Slicing and Ranges
Array slicing extracts subarrays using range syntax or start-length notation. Slicing creates new arrays containing copies of element references (shallow copy), not the elements themselves. Modifications to mutable elements affect both original and sliced arrays.
arr = [10, 20, 30, 40, 50, 60, 70]
# Range slicing
arr[2..4] # => [30, 40, 50] (inclusive range)
arr[2...4] # => [30, 40] (exclusive range)
# Start and length
arr[2, 3] # => [30, 40, 50] (start at 2, take 3 elements)
# Negative indices in ranges
arr[-3..-1] # => [50, 60, 70]
# Slice assignment
arr[2..4] = [100, 200]
# => [10, 20, 100, 200, 60, 70]
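The shallow-copy behavior is easy to demonstrate with mutable elements such as strings:

```ruby
# The slice and the original share the same string objects
words = ["alpha", "beta", "gamma"]
slice = words[0..1] # => ["alpha", "beta"]

slice[0] << "!" # mutate the shared string in place
words           # => ["alpha!", "beta", "gamma"]

slice[1] = "BETA" # assignment rebinds only the slice's own slot
words             # => ["alpha!", "beta", "gamma"]
```

Mutating an element is visible through both arrays; reassigning a slot affects only the array whose slot was reassigned.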
Capacity Management
Ruby does not expose capacity directly, unlike C++'s std::vector, which offers capacity() and reserve(). The interpreter manages capacity internally, optimizing for typical usage patterns. Developers cannot explicitly pre-allocate capacity to avoid resizing during bulk insertions, though creating arrays with initial size via Array.new(n) provides a workaround.
# Creating with size allocates internal capacity
arr = Array.new(1000)
# arr.size => 1000
# Internal capacity >= 1000
# Populating existing slots avoids dynamic growth
1000.times { |i| arr[i] = i }
# Building through append may trigger multiple resizes
arr2 = []
1000.times { |i| arr2 << i } # Multiple reallocations occur
Performance Considerations
Access Complexity
Reading or writing array elements by index takes O(1) time regardless of array size or element position. The constant-time guarantee stems from direct address calculation: base_address + (index * element_size). This formula computes any element's location without traversing other elements.
Random access makes arrays optimal for algorithms requiring frequent index-based lookups. Binary search, quicksort partitioning, and direct element updates all depend on O(1) access. Data structures like linked lists lack this property, requiring O(n) traversal to reach arbitrary elements.
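As an illustration, a textbook binary search leans entirely on O(1) indexed access, jumping straight to the middle of the remaining range at each step:

```ruby
# Binary search over a sorted array: halve the search range each step
def binary_search(arr, target)
  low = 0
  high = arr.size - 1
  while low <= high
    mid = (low + high) / 2
    case arr[mid] <=> target
    when 0  then return mid
    when -1 then low = mid + 1  # middle element too small
    else         high = mid - 1 # middle element too large
    end
  end
  nil
end

sorted = [2, 3, 5, 7, 11, 13, 17]
binary_search(sorted, 11) # => 4
binary_search(sorted, 6)  # => nil

# Ruby's built-in Array#bsearch does the same in find-minimum mode
sorted.bsearch { |x| x >= 11 } # => 11
```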
Insertion and Deletion Complexity
Appending elements to array ends (push, <<) takes O(1) amortized time. While individual operations may trigger O(n) resizing, the geometric growth strategy distributes resize costs across many insertions. For n operations, total time remains O(n), averaging O(1) per operation.
Removing the last element (pop) takes O(1) time without resizing. The array simply decrements its size counter. Shrinking capacity is optional and typically avoided to prevent resize thrashing when arrays grow and shrink repeatedly.
Inserting or deleting at arbitrary positions takes O(n) time in the worst case. These operations require shifting all subsequent elements to maintain contiguous storage. Inserting at index 0 moves all n elements one position right. Deleting from index 0 moves all n-1 elements one position left.
require 'benchmark'
arr = (1..100_000).to_a
n = 10_000
Benchmark.bm(24) do |x|
  x.report("push (O(1) amortized)") { n.times { arr.push(0) } }
  x.report("pop (O(1))") { n.times { arr.pop } }
  x.report("unshift (O(n))") { n.times { arr.unshift(0) } }
  x.report("shift (O(n))") { n.times { arr.shift } }
end
# Repeating each operation 10,000 times makes the gap visible:
# unshift/shift run far slower than push/pop on a large array
Resize Cost Analysis
Consider building an array of n elements through repeated appending with a doubling growth strategy:
- Capacity sequence: 1, 2, 4, 8, 16, ..., n
- Resize operations: log₂(n)
- Elements copied per resize: 1, 2, 4, 8, ..., n/2
- Total elements copied: 1 + 2 + 4 + ... + n/2 = n - 1
Although individual resizes take O(n) time, they occur logarithmically in the array size. The total cost of n insertions remains O(n), yielding O(1) amortized cost per insertion. This analysis justifies treating append operations as constant-time in algorithm complexity discussions.
Memory Overhead
Dynamic arrays maintain excess capacity beyond current size. A growth factor of 2 means capacity can reach 2n when size equals n. Memory utilization ranges from 50% (just after resize) to 100% (just before resize). For large arrays, this overhead becomes significant.
Ruby's array implementation uses different growth strategies for different size ranges, balancing memory efficiency against reallocation frequency. Small arrays grow by smaller increments; large arrays use geometric growth. This hybrid approach reduces memory waste for small arrays while maintaining good amortized performance for large ones.
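On CRuby, this growth-in-jumps behavior can be observed indirectly through ObjectSpace.memsize_of, an implementation-specific API whose exact byte counts vary by Ruby version and platform:

```ruby
require 'objspace'

# CRuby-specific sketch: ObjectSpace.memsize_of reports the bytes
# allocated for an object, including an array's unused capacity.
arr = []
sizes = []
1000.times do |i|
  arr << i
  sizes << ObjectSpace.memsize_of(arr)
end

sizes.uniq.size # only a handful of distinct values: the allocation
                # grows in occasional jumps, not on every append
```

A thousand appends produce only a small number of distinct memory sizes, consistent with geometric growth.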
Cache Performance
Contiguous memory layout makes arrays cache-friendly. Modern CPUs load data in cache lines (typically 64 bytes). When accessing one array element, adjacent elements often come into cache together. Sequential array traversal exhibits excellent spatial locality.
# Sequential access pattern (cache-friendly)
sum = 0
arr.each { |x| sum += x }
# Random access pattern (cache-unfriendly)
indices = arr.size.times.to_a.shuffle
sum = 0
indices.each { |i| sum += arr[i] }
Sequential access outperforms random access significantly for large arrays. This performance characteristic influences algorithm design—sequential scans often outperform more sophisticated algorithms with random access patterns on modern hardware.
Comparison with Alternative Structures
Linked lists provide O(1) insertion and deletion at known positions but require O(n) access time. For workloads dominated by random access, arrays outperform linked lists. For workloads dominated by insertions and deletions at arbitrary positions, linked lists may perform better.
Hash tables provide O(1) average-case access by key but consume more memory and lack ordering. Trees provide O(log n) access with ordering but add complexity. Array choice depends on operation patterns: frequent random access favors arrays, while frequent insertions and deletions at arbitrary positions favor other structures.
Practical Examples
Sliding Window Algorithms
Sliding window techniques process array subarrays of fixed or variable size. The window "slides" across the array, maintaining aggregate information efficiently. This pattern solves problems like finding maximum sum subarrays or longest substrings with constraints.
# Maximum sum of any subarray of length k
def max_sum_subarray(arr, k)
  return nil if arr.size < k
  # Calculate first window sum
  window_sum = arr[0...k].sum
  max_sum = window_sum
  # Slide window: add next element, remove first element
  (k...arr.size).each do |i|
    window_sum = window_sum - arr[i - k] + arr[i]
    max_sum = [max_sum, window_sum].max
  end
  max_sum
end
arr = [2, 1, 5, 1, 3, 2, 4, 1]
max_sum_subarray(arr, 3)
# => 9 (subarrays [5, 1, 3] and [3, 2, 4] both reach 9)
The sliding window avoids recalculating the sum for each subarray. Removing the leftmost element and adding the next rightmost element updates the sum in O(1) time. For an array of size n and window size k, this reduces complexity from O(nk) to O(n).
Two-Pointer Technique
Two-pointer algorithms maintain two indices that move through the array according to specific rules. This technique solves problems like finding pairs with given sum, removing duplicates, or partitioning arrays.
# Find all pairs in sorted array that sum to target
def find_pairs(arr, target)
  pairs = []
  left = 0
  right = arr.size - 1
  while left < right
    current_sum = arr[left] + arr[right]
    if current_sum == target
      pairs << [arr[left], arr[right]]
      left += 1
      right -= 1
    elsif current_sum < target
      left += 1 # Need larger sum
    else
      right -= 1 # Need smaller sum
    end
  end
  pairs
end
arr = [1, 2, 3, 4, 5, 6, 7]
find_pairs(arr, 8)
# => [[1, 7], [2, 6], [3, 5]]
This algorithm exploits the sorted property to avoid checking all n² pairs. Each step moves one pointer, and each element is visited at most once, yielding O(n) time complexity.
Dynamic Programming with Arrays
Many dynamic programming solutions use arrays to store subproblem results. Array indexing provides constant-time access to previously computed values, making memoization efficient.
# Longest increasing subsequence length
def longest_increasing_subsequence(arr)
  return 0 if arr.empty?
  # dp[i] = length of longest increasing subsequence ending at i
  dp = Array.new(arr.size, 1)
  (1...arr.size).each do |i|
    (0...i).each do |j|
      if arr[j] < arr[i]
        dp[i] = [dp[i], dp[j] + 1].max
      end
    end
  end
  dp.max
end
arr = [10, 9, 2, 5, 3, 7, 101, 18]
longest_increasing_subsequence(arr)
# => 4 (subsequence [2, 3, 7, 101])
The dp array stores the solution for each prefix of the input array. Each cell depends only on previous cells, and array indexing provides O(1) access to those dependencies. This pattern appears throughout dynamic programming algorithms.
Array Rotation
Array rotation shifts elements cyclically. Rotating right by k positions moves the last k elements to the front. Rotation algorithms demonstrate array manipulation techniques and handle edge cases like rotation counts exceeding array size.
# Rotate array right by k positions
def rotate_right(arr, k)
  return arr if arr.empty?
  n = arr.size
  k = k % n # Handle k > n
  return arr if k == 0
  # Reverse entire array
  arr.reverse!
  # Reverse first k elements
  arr[0...k] = arr[0...k].reverse
  # Reverse remaining elements
  arr[k..-1] = arr[k..-1].reverse
  arr
end
arr = [1, 2, 3, 4, 5, 6, 7]
rotate_right(arr, 3)
# => [5, 6, 7, 1, 2, 3, 4]
The reversal algorithm rotates arrays in O(n) time with O(1) space. Three reversals achieve rotation: reverse the entire array, reverse the first k elements, then reverse the remaining elements. This approach avoids allocating temporary arrays.
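For everyday code, Ruby's built-in Array#rotate makes the manual algorithm unnecessary; a negative count rotates right:

```ruby
arr = [1, 2, 3, 4, 5, 6, 7]

arr.rotate(-3) # => [5, 6, 7, 1, 2, 3, 4] (right rotation by 3)
arr.rotate(2)  # => [3, 4, 5, 6, 7, 1, 2] (left rotation by 2)
arr            # => [1, 2, 3, 4, 5, 6, 7] (rotate returns a new array)

arr.rotate!(-3) # Mutating variant
arr             # => [5, 6, 7, 1, 2, 3, 4]
```

The manual reversal version remains worth knowing: it shows how to achieve the same result in O(1) extra space, and it appears as an interview staple.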
Common Patterns
Prefix Sum Arrays
Prefix sum arrays store cumulative sums at each position. The prefix sum at index i contains the sum of elements from index 0 to i. This preprocessing enables constant-time range sum queries.
# Build prefix sum array
def build_prefix_sum(arr)
  prefix = Array.new(arr.size)
  prefix[0] = arr[0]
  (1...arr.size).each do |i|
    prefix[i] = prefix[i - 1] + arr[i]
  end
  prefix
end

# Query sum of elements from index i to j
def range_sum(prefix, i, j)
  return prefix[j] if i == 0
  prefix[j] - prefix[i - 1]
end
arr = [3, 1, 4, 1, 5, 9, 2, 6]
prefix = build_prefix_sum(arr)
# prefix => [3, 4, 8, 9, 14, 23, 25, 31]
range_sum(prefix, 2, 5) # Sum from index 2 to 5
# => 19 (4 + 1 + 5 + 9)
Without prefix sums, each range query requires O(n) time to sum elements. With prefix sums, queries take O(1) time after O(n) preprocessing. This trade-off benefits applications with many queries on static arrays.
In-Place Array Modification
In-place algorithms modify arrays without allocating additional arrays. They use constant extra space, critical for memory-constrained environments. In-place operations often require careful index tracking to avoid overwriting unprocessed elements.
# Remove duplicates from sorted array in-place
def remove_duplicates(arr)
  return arr if arr.size <= 1
  write_index = 1 # Position for next unique element
  (1...arr.size).each do |read_index|
    if arr[read_index] != arr[read_index - 1]
      arr[write_index] = arr[read_index]
      write_index += 1
    end
  end
  arr[0...write_index] # Return array with duplicates removed
end
arr = [1, 1, 2, 2, 2, 3, 4, 4, 5]
remove_duplicates(arr)
# => [1, 2, 3, 4, 5]
The algorithm maintains two indices: read_index scans all elements, while write_index tracks where to place the next unique element. This pattern appears in many array modification algorithms.
Partitioning
Partitioning rearranges array elements so that elements satisfying a condition appear before those that don't. The partition operation forms the basis of quicksort and appears in many selection algorithms.
# Partition array around pivot value
def partition(arr, pivot_index)
  pivot_value = arr[pivot_index]
  # Move pivot to end
  arr[pivot_index], arr[-1] = arr[-1], arr[pivot_index]
  store_index = 0
  (0...arr.size - 1).each do |i|
    if arr[i] < pivot_value
      arr[i], arr[store_index] = arr[store_index], arr[i]
      store_index += 1
    end
  end
  # Move pivot to final position
  arr[store_index], arr[-1] = arr[-1], arr[store_index]
  store_index # Return pivot's final position
end
arr = [3, 7, 1, 4, 9, 2, 6, 5, 8]
pivot_pos = partition(arr, 3) # Partition around value at index 3 (4)
# arr => [3, 1, 2, 4, 9, 7, 6, 5, 8]
# pivot_pos => 3
After partitioning, elements less than the pivot appear before the pivot, and elements greater than or equal to the pivot appear after. The pivot reaches its final sorted position. Quicksort applies this operation recursively to partition subarrays.
Frequency Counting
Frequency counting maps array elements to their occurrence counts. Hash tables typically implement frequency maps, but arrays work when elements are integers in a known range.
# Count character frequencies using array (for lowercase letters)
def char_frequencies(str)
  freq = Array.new(26, 0)
  str.each_char do |char|
    next unless char >= 'a' && char <= 'z'
    index = char.ord - 'a'.ord
    freq[index] += 1
  end
  freq
end
freq = char_frequencies("hello")
# freq[7] => 1 (count of 'h')
# freq[4] => 1 (count of 'e')
# freq[11] => 2 (count of 'l')
# freq[14] => 1 (count of 'o')
# Convert back to readable format
result = []
freq.each_with_index do |count, i|
  next if count == 0
  char = ('a'.ord + i).chr
  result << "#{char}: #{count}"
end
# => ["e: 1", "h: 1", "l: 2", "o: 1"]
Array-based frequency counting works when the element range is small and known. For characters, a 26-element array suffices for lowercase letters. For arbitrary elements, hash tables provide more flexibility at the cost of slower access.
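When elements don't fit a small integer range, Ruby 2.7+ ships Enumerable#tally, which builds the hash-based frequency map in one call:

```ruby
# tally maps each distinct element to its occurrence count
"hello".chars.tally
# => {"h"=>1, "e"=>1, "l"=>2, "o"=>1}

[3, 1, 3, 3, 2].tally
# => {3=>3, 1=>1, 2=>1}
```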
Common Pitfalls
Off-by-One Errors
Array indexing creates numerous opportunities for off-by-one errors. Zero-based indexing means the last element has index size - 1, not size. Loop conditions like i <= arr.size cause out-of-bounds access. Range operations with inclusive versus exclusive endpoints confuse developers.
arr = [10, 20, 30, 40, 50]
# WRONG: Accesses beyond array
(0..arr.size).each do |i|
puts arr[i] # arr[5] doesn't exist
end
# CORRECT: Exclusive range
(0...arr.size).each do |i|
puts arr[i]
end
# SUBTLE: Range extends one index too far, but Ruby clamps silently
subset = arr[0..5] # => [10, 20, 30, 40, 50], no error raised
# CORRECT: Exclusive range or size - 1 makes the intent explicit
subset = arr[0...arr.size]
subset = arr[0..arr.size - 1]
Ruby returns nil for out-of-bounds reads rather than raising exceptions, masking errors that might surface in other languages. Careful attention to loop bounds and range endpoints prevents these errors.
Shared Default Objects
Creating arrays with default objects using Array.new(n, obj) results in all elements referencing the same object. Modifying one element affects all elements. This trap catches developers who expect independent objects at each index.
# WRONG: All elements reference same array
matrix = Array.new(3, [])
matrix[0] << 1
# => [[1], [1], [1]] # All rows modified!
# CORRECT: Block creates unique object for each element
matrix = Array.new(3) { [] }
matrix[0] << 1
# => [[1], [], []] # Only first row modified
# CORRECT: Alternative using map
matrix = Array.new(3).map { [] }
This issue affects any mutable default object. Immutable objects such as integers and symbols don't exhibit the problem: they cannot be modified in place, so the only way to change an element is assignment, which rebinds a single slot rather than mutating a shared object.
Modifying Arrays During Iteration
Modifying an array while iterating over it causes unpredictable behavior. Insertions and deletions shift indices, causing elements to be skipped or processed multiple times. Ruby's iterators don't protect against this.
arr = [1, 2, 3, 4, 5]
# WRONG: Modifying during iteration
arr.each do |x|
  arr.delete(x) if x.even? # Shifts elements under the iterator
end
# arr => [1, 3, 5] # Happens to look right, but 3 and 5 were never yielded
arr = [1, 2, 2, 3, 4]
# WRONG: Index shifts cause skips
arr.each_with_index do |x, i|
  arr.delete_at(i) if x.even?
end
# arr => [1, 2, 3] # The second 2 shifted into a visited slot and survived
# CORRECT: delete_if handles removal during iteration safely
arr = [1, 2, 3, 4, 5]
arr.delete_if { |x| x.even? }
# => [1, 3, 5]
# CORRECT: Reverse iteration for in-place deletion
arr = [1, 2, 3, 4, 5]
(arr.size - 1).downto(0) do |i|
  arr.delete_at(i) if arr[i].even?
end
Using delete_if or reject!, which handle removal during traversal internally, avoids this pitfall, as do reject and select, which build new arrays instead of mutating the original. Alternatively, iterate in reverse when deleting elements in place, ensuring deletions don't affect upcoming indices.
Assuming Contiguous Growth
Arrays don't always grow by exactly one element when appending. Internal capacity typically exceeds size, and growth occurs in chunks. Code that assumes array memory addresses remain stable across insertions will fail when reallocation moves the array to new memory.
This pitfall rarely affects Ruby developers because Ruby manages memory automatically. However, understanding the principle matters when reasoning about performance or working with languages exposing more memory details.
Index Range Confusion
Ruby supports multiple range syntaxes: arr[start..end] includes both endpoints, while arr[start...end] excludes the end. Additionally, arr[start, length] takes a starting index and element count. Mixing these syntaxes causes slicing errors.
arr = [0, 1, 2, 3, 4, 5]
arr[1..3] # => [1, 2, 3] (inclusive range)
arr[1...3] # => [1, 2] (exclusive range)
arr[1, 3] # => [1, 2, 3] (start at 1, take 3 elements)
# Subtle: arr[1, 3] and arr[1..3] coincide here only because the
# length (3) equals the inclusive span 1..3; arr[1...3] differs from both
# When length exceeds remaining elements
arr[4, 10] # => [4, 5] (takes available elements)
arr[4..13] # => [4, 5] (range clamped to array)
Consistently using one range style reduces confusion. Exclusive ranges ... match array indexing semantics better: arr[0...arr.size] refers to the entire array, while arr[0..arr.size] attempts to include a non-existent element.
Reference
Ruby Array Methods
| Method | Time Complexity | Description |
|---|---|---|
| [] | O(1) | Access element by index |
| []= | O(1) | Set element at index |
| << | O(1) amortized | Append element to end |
| push | O(1) amortized | Append one or more elements |
| pop | O(1) | Remove and return last element |
| unshift | O(n) | Prepend element to beginning |
| shift | O(n) | Remove and return first element |
| insert | O(n) | Insert element at index |
| delete_at | O(n) | Remove element at index |
| slice | O(m) | Extract subarray of size m |
| concat | O(m) | Append array of size m |
| clear | O(1) | Remove all elements |
| size | O(1) | Return number of elements |
| empty? | O(1) | Check if array is empty |
| first | O(1) | Return first element |
| last | O(1) | Return last element |
| fetch | O(1) | Access with bounds checking |
Complexity Summary
| Operation | Static Array | Dynamic Array | Ruby Array |
|---|---|---|---|
| Access by index | O(1) | O(1) | O(1) |
| Update by index | O(1) | O(1) | O(1) |
| Insert at end | Not supported | O(1) amortized | O(1) amortized |
| Insert at beginning | Not supported | O(n) | O(n) |
| Insert at arbitrary position | Not supported | O(n) | O(n) |
| Delete from end | Not supported | O(1) | O(1) |
| Delete from beginning | Not supported | O(n) | O(n) |
| Delete at arbitrary position | Not supported | O(n) | O(n) |
| Search unsorted | O(n) | O(n) | O(n) |
| Search sorted | O(log n) | O(log n) | O(log n) |
Growth Factor Analysis
| Growth Factor | Memory Efficiency | Resize Frequency | Total Copies for n Insertions |
|---|---|---|---|
| 1.5 | High | High | ~2n |
| 2.0 | Medium | Low | ~n |
| 3.0 | Low | Very Low | ~0.5n |
Common Array Patterns
| Pattern | Use Case | Key Technique |
|---|---|---|
| Two Pointer | Pair finding, partitioning | Maintain two indices moving according to rules |
| Sliding Window | Subarray problems | Maintain window invariant while sliding |
| Prefix Sum | Range queries | Precompute cumulative values |
| Frequency Count | Element counting | Array or hash mapping elements to counts |
| In-place Modification | Space-constrained algorithms | Separate read and write indices |
| Partitioning | Quicksort, selection | Rearrange around pivot element |
Memory Layout
| Array Size | Typical Growth Steps (Approximate) |
|---|---|
| 0-16 | 4, 8, 16 |
| 16-256 | 32, 64, 128, 256 |
| 256+ | Multiply by 1.5-2.0 |
Note: Actual growth patterns vary by Ruby implementation and version. Ruby uses different strategies for different size ranges to balance memory and performance.
Index Arithmetic
| Expression | Result for arr of size n | Notes |
|---|---|---|
| arr[0] | First element | Most common access |
| arr[n-1] | Last element | Requires size calculation |
| arr[-1] | Last element | Ruby convenience syntax |
| arr[-n] | First element | Negative index from end |
| arr[n] | nil | Out of bounds returns nil |
| arr[0...n] | Entire array | Exclusive range |
| arr[0..n-1] | Entire array | Inclusive range |
| arr[i...i+k] | k elements starting at i | Subarray slice |