Overview
Sets represent unordered collections of unique elements, modeled after mathematical sets. A set contains each element at most once, automatically rejecting duplicate insertions. The primary operations test membership, add elements, remove elements, and compute relationships between sets such as unions, intersections, and differences.
Multisets, also called bags, extend the set concept by tracking element multiplicity. Each element appears zero or more times, with operations accounting for repetition counts. A multiset containing three 'a' elements and two 'b' elements differs fundamentally from one containing one 'a' and five 'b' elements, even though both contain the same element types.
The distinction matters for different problem domains. Sets answer questions about presence: "Does this collection contain element X?" Multisets answer questions about frequency: "How many times does X appear?" Shopping carts, word frequency analysis, and resource allocation require multiset semantics, while user authentication, permission systems, and duplicate detection require set semantics.
Ruby provides a Set class in its standard library; before Ruby 3.2 it must be loaded with an explicit require 'set' statement, while newer versions make it available automatically. The Set class wraps a Hash internally, using hash keys to enforce uniqueness and provide constant-time membership testing on average. Ruby lacks a built-in multiset implementation, so custom solutions typically use a Hash mapping elements to counts.
require 'set'
# Set creation and basic operations
colors = Set.new(['red', 'blue', 'green'])
colors.add('yellow')
colors.add('red') # No effect - already present
colors.size # => 4
# Hash-based multiset
inventory = Hash.new(0)
inventory['apple'] += 3
inventory['banana'] += 2
inventory['apple'] += 1
inventory['apple'] # => 4
Key Principles
Sets derive from mathematical set theory, where collections contain distinct objects without ordering or duplication. The fundamental property states that for any set S and element x, either x belongs to S or x does not belong to S. No element appears twice. Testing membership, the core set operation, determines whether an element belongs to a set in constant time for hash-based implementations or logarithmic time for tree-based implementations.
Set operations combine or compare sets according to mathematical definitions. The union of sets A and B contains all elements present in either set. The intersection contains elements present in both sets. The difference A - B contains elements in A but not in B. The symmetric difference contains elements present in exactly one of the two sets. These operations produce new sets without modifying the operands.
Subset relationships define hierarchical structure between sets. Set A is a subset of B if every element of A also appears in B. Set A is a proper subset of B if A is a subset of B and B contains at least one element not in A. These relationships enable reasoning about set containment and establishing partial orderings over set collections.
Cardinality measures set size, counting the number of distinct elements. For finite sets, cardinality equals the element count. The empty set contains no elements and has cardinality zero. Cardinality determines properties like whether one set can map bijectively onto another and whether operations produce empty sets.
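Cardinality maps directly onto Ruby's Set API; a brief illustration:

```ruby
require 'set'

s = Set[10, 20, 30]
s.size          # cardinality: the number of distinct elements => 3
Set.new.size    # the empty set has cardinality zero => 0
Set.new.empty?  # => true

# Duplicate insertions never inflate cardinality
Set.new([1, 1, 2, 2, 2]).size # => 2
```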
Multisets extend sets by associating each element with a positive integer multiplicity. The multiplicity function m(x) returns the count of element x in the multiset. When m(x) = 0, element x does not appear in the multiset. When m(x) = 1, the multiset behaves like a set for that element. When m(x) > 1, the multiset tracks repetitions.
Multiset operations generalize set operations to account for multiplicities. The union of multisets A and B assigns each element the maximum multiplicity from either multiset: m_union(x) = max(m_A(x), m_B(x)). The intersection assigns the minimum: m_intersection(x) = min(m_A(x), m_B(x)). The sum adds multiplicities: m_sum(x) = m_A(x) + m_B(x). The difference subtracts multiplicities, clamping to zero: m_difference(x) = max(0, m_A(x) - m_B(x)).
require 'set'
# Set relationships
a = Set[1, 2, 3, 4]
b = Set[3, 4, 5, 6]
c = Set[1, 2]
a.union(b) # => #<Set: {1, 2, 3, 4, 5, 6}>
a.intersection(b) # => #<Set: {3, 4}>
a.difference(b) # => #<Set: {1, 2}>
a ^ b # => #<Set: {1, 2, 5, 6}> (symmetric difference)
c.subset?(a) # => true
c.proper_subset?(a) # => true
a.subset?(a) # => true
a.proper_subset?(a) # => false
# Multiset operations using Hash
def multiset_union(a, b)
result = Hash.new(0)
(a.keys | b.keys).each do |key|
result[key] = [a.fetch(key, 0), b.fetch(key, 0)].max # fetch avoids nil for missing keys
end
result
end
def multiset_sum(a, b)
result = Hash.new(0)
a.each { |key, count| result[key] += count }
b.each { |key, count| result[key] += count }
result
end
a = {'x' => 3, 'y' => 1}
b = {'x' => 2, 'y' => 4, 'z' => 1}
multiset_union(a, b) # => {"x"=>3, "y"=>4, "z"=>1}
multiset_sum(a, b) # => {"x"=>5, "y"=>5, "z"=>1}
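The remaining two operations defined above, intersection and difference, can be sketched the same way, using fetch with a default of 0 so missing keys count as zero:

```ruby
# Multiset intersection: minimum count per element, dropping zero counts
def multiset_intersection(a, b)
  result = {}
  (a.keys & b.keys).each do |key|
    count = [a.fetch(key, 0), b.fetch(key, 0)].min
    result[key] = count if count > 0
  end
  result
end

# Multiset difference: subtract counts, clamping at zero
def multiset_difference(a, b)
  result = {}
  a.each do |key, count|
    remaining = count - b.fetch(key, 0)
    result[key] = remaining if remaining > 0
  end
  result
end

a = {'x' => 3, 'y' => 1}
b = {'x' => 2, 'y' => 4, 'z' => 1}
multiset_intersection(a, b) # => {"x"=>2, "y"=>1}
multiset_difference(a, b)   # => {"x"=>1}
```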
Ruby Implementation
Ruby's Set class lives in the standard library, requiring require 'set' before use. The implementation wraps a Hash, storing elements as hash keys with true values. This design provides O(1) average-case membership testing, insertion, and deletion through hash table operations. The Set class includes the Enumerable module, making all enumerable methods available.
Creating sets accepts various initializers. The Set.new constructor takes an optional enumerable argument and an optional block for transforming elements during insertion. The Set[] class method builds a set directly from a list of elements. Sets can also be constructed from other sets, arrays, ranges, or any enumerable collection.
require 'set'
# Set creation methods
empty_set = Set.new
from_array = Set.new([1, 2, 2, 3]) # => #<Set: {1, 2, 3}>
from_range = Set.new(1..5) # => #<Set: {1, 2, 3, 4, 5}>
literal = Set[1, 2, 3] # => #<Set: {1, 2, 3}>
# Block transformation during creation
normalized = Set.new(['Hello', 'WORLD']) { |s| s.downcase }
# => #<Set: {"hello", "world"}>
# Copy constructor
copy = Set.new(from_array)
Element manipulation provides methods for adding, removing, and testing elements. The add method inserts elements, returning the set to enable chaining. The add? method returns nil if the element already exists, otherwise returns the set. The delete method removes elements if present. The include? method tests membership. The clear method removes all elements.
require 'set'
users = Set.new
# Adding elements
users.add('alice')
users.add('bob').add('charlie') # Chaining
users << 'david' # Alias for add
users.add?('alice') # => nil (already present)
users.add?('eve') # => #<Set: {...}> (newly added)
# Testing membership
users.include?('alice') # => true
users.member?('frank') # => false (member? is alias)
# Removing elements
users.delete('bob')
users.delete('nonexistent') # No error, just no effect
# Size and emptiness
users.size # => 4
users.empty? # => false
users.clear
users.empty? # => true
Set operations implement mathematical operations as instance methods and operators. Union combines sets using the | operator or union method. Intersection finds common elements using the & operator or intersection method. Difference removes elements using the - operator or difference method. Symmetric difference finds elements in exactly one set using the ^ operator.
require 'set'
evens = Set[2, 4, 6, 8]
primes = Set[2, 3, 5, 7]
# Union - all elements from either set
evens | primes # => #<Set: {2, 4, 6, 8, 3, 5, 7}>
evens.union(primes) # Equivalent method form
# Intersection - elements in both sets
evens & primes # => #<Set: {2}>
evens.intersection(primes) # Equivalent method form
# Difference - elements in first but not second
evens - primes # => #<Set: {4, 6, 8}>
evens.difference(primes) # Equivalent method form
# Symmetric difference - elements in exactly one set
evens ^ primes # => #<Set: {4, 6, 8, 3, 5, 7}>
# Operations create new sets
original = Set[1, 2, 3]
result = original | Set[4, 5]
original.object_id != result.object_id # => true
Subset and superset testing determines hierarchical relationships between sets. The subset? method returns true if all elements of the receiver appear in the argument. The superset? method returns true if the receiver contains all elements of the argument. The proper_subset? and proper_superset? methods require strict inequality, returning false when sets are equal.
require 'set'
all_colors = Set['red', 'blue', 'green', 'yellow']
primary = Set['red', 'blue', 'yellow']
warm = Set['red', 'yellow', 'orange']
# Subset testing
primary.subset?(all_colors) # => true
primary.proper_subset?(all_colors) # => true
primary.subset?(primary) # => true
primary.proper_subset?(primary) # => false
# Superset testing
all_colors.superset?(primary) # => true
all_colors.proper_superset?(primary) # => true
# Overlapping but neither is subset
primary.subset?(warm) # => false
warm.subset?(primary) # => false
# Disjoint testing
Set[1, 2].disjoint?(Set[3, 4]) # => true
Set[1, 2].disjoint?(Set[2, 3]) # => false
Ruby lacks a built-in multiset class, requiring custom implementation. The standard approach uses Hash with default value zero, mapping elements to integer counts. Operations manipulate count values rather than presence flags. This pattern provides O(1) average-case access to element counts while handling automatic initialization for new elements.
# Multiset implementation using Hash
class Multiset
def initialize(elements = [])
@counts = Hash.new(0)
elements.each { |e| add(e) }
end
def add(element, count = 1)
@counts[element] += count
self
end
def remove(element, count = 1)
@counts[element] = [@counts[element] - count, 0].max
@counts.delete(element) if @counts[element] == 0
self
end
def count(element)
@counts[element]
end
def include?(element)
@counts[element] > 0
end
def size
@counts.values.sum
end
def cardinality
@counts.keys.size
end
def to_a
@counts.flat_map { |element, count| [element] * count }
end
def ==(other)
@counts == other.instance_variable_get(:@counts)
end
end
# Using the multiset
inventory = Multiset.new(['apple', 'apple', 'banana', 'apple'])
inventory.count('apple') # => 3
inventory.count('banana') # => 1
inventory.size # => 4
inventory.cardinality # => 2 (distinct elements)
inventory.add('apple', 2)
inventory.count('apple') # => 5
inventory.remove('apple', 3)
inventory.count('apple') # => 2
Practical Examples
Membership testing drives many authentication and authorization systems. Sets store authorized user IDs, permitted actions, or assigned roles. Testing membership determines access rights without iterating through lists or executing database queries. Sets provide constant-time lookups regardless of permission set size.
require 'set'
class AccessControl
def initialize
@admins = Set.new
@moderators = Set.new
@banned_users = Set.new
end
def grant_admin(user_id)
@admins.add(user_id)
@moderators.delete(user_id) # Admin supersedes moderator
end
def grant_moderator(user_id)
@moderators.add(user_id) unless @admins.include?(user_id)
end
def ban_user(user_id)
@banned_users.add(user_id)
@admins.delete(user_id)
@moderators.delete(user_id)
end
def can_delete_post?(user_id)
return false if @banned_users.include?(user_id)
@admins.include?(user_id) || @moderators.include?(user_id)
end
def all_privileged_users
@admins | @moderators # Union of both sets
end
def admin_count
@admins.size
end
end
acl = AccessControl.new
acl.grant_admin('user_123')
acl.grant_moderator('user_456')
acl.ban_user('user_789')
acl.can_delete_post?('user_123') # => true
acl.can_delete_post?('user_999') # => false
acl.can_delete_post?('user_789') # => false (banned)
Duplicate detection eliminates repeated elements from data streams or batch imports. Sets automatically reject duplicates during insertion, simplifying deduplication logic. Processing large datasets benefits from set-based approaches that avoid O(n²) comparison loops.
require 'set'
class EmailProcessor
def initialize
@seen_addresses = Set.new
@processed_count = 0
@duplicate_count = 0
end
def process_email(address)
normalized = address.downcase.strip
if @seen_addresses.add?(normalized)
# New address, actually process
send_welcome_email(normalized)
@processed_count += 1
else
# Duplicate, skip processing
@duplicate_count += 1
end
end
def process_batch(addresses)
addresses.each { |addr| process_email(addr) }
end
def stats
{
unique_addresses: @seen_addresses.size,
processed: @processed_count,
duplicates_skipped: @duplicate_count
}
end
private
def send_welcome_email(address)
# Email sending logic
end
end
processor = EmailProcessor.new
processor.process_batch([
'Alice@Example.com',
'bob@test.com',
'alice@example.com', # Duplicate after normalization
'Bob@Test.com', # Duplicate after normalization
'charlie@demo.com'
])
processor.stats
# => {:unique_addresses=>3, :processed=>3, :duplicates_skipped=>2}
Multisets model inventory systems where quantity matters. Shopping carts, warehouse stock, and resource pools track item counts rather than simple presence. Multiset operations naturally express quantity adjustments, transfers between locations, and stock reconciliation.
class ShoppingCart
def initialize
@items = Hash.new(0)
end
def add_item(product_id, quantity = 1)
@items[product_id] += quantity
end
def remove_item(product_id, quantity = 1)
@items[product_id] = [@items[product_id] - quantity, 0].max
@items.delete(product_id) if @items[product_id] == 0
end
def quantity(product_id)
@items[product_id]
end
def total_items
@items.values.sum
end
def merge_from(other_cart)
other_items = other_cart.instance_variable_get(:@items)
other_items.each do |product_id, quantity|
@items[product_id] += quantity
end
end
def to_order
@items.reject { |_, quantity| quantity == 0 }
end
end
cart = ShoppingCart.new
cart.add_item('PROD_001', 3)
cart.add_item('PROD_002', 1)
cart.add_item('PROD_001', 2) # Now 5 total
cart.quantity('PROD_001') # => 5
cart.total_items # => 6
cart.remove_item('PROD_001', 2)
cart.quantity('PROD_001') # => 3
guest_cart = ShoppingCart.new
guest_cart.add_item('PROD_002', 2)
guest_cart.add_item('PROD_003', 1)
cart.merge_from(guest_cart)
cart.quantity('PROD_002') # => 3
cart.quantity('PROD_003') # => 1
Word frequency analysis requires multisets to count word occurrences in documents. Search engines, text analysis tools, and natural language processing pipelines process term frequencies for ranking and classification. Multiset operations combine frequencies across documents or compute term overlap between texts.
class TextAnalyzer
def initialize(text)
@word_counts = Hash.new(0)
tokenize(text).each { |word| @word_counts[word] += 1 }
end
def frequency(word)
@word_counts[word.downcase]
end
def total_words
@word_counts.values.sum
end
def unique_words
@word_counts.keys.size
end
def most_frequent(n = 10)
@word_counts.sort_by { |_, count| -count }.first(n).to_h
end
def common_words_with(other_analyzer)
other_counts = other_analyzer.instance_variable_get(:@word_counts)
Set.new(@word_counts.keys) & Set.new(other_counts.keys)
end
private
def tokenize(text)
text.downcase.scan(/\b[a-z]+\b/)
end
end
doc1 = TextAnalyzer.new("the quick brown fox jumps over the lazy dog")
doc2 = TextAnalyzer.new("the lazy cat sits on the warm mat")
doc1.frequency('the') # => 2
doc1.frequency('quick') # => 1
doc1.total_words # => 9
doc1.unique_words # => 8
doc1.common_words_with(doc2) # => #<Set: {"the", "lazy"}>
doc1.most_frequent(3)
# => {"the"=>2, "quick"=>1, "brown"=>1} (tie order among count-1 words may vary)
Performance Considerations
Hash-based set implementations provide O(1) average-case performance for insertion, deletion, and membership testing. The underlying hash table maintains a load factor, resizing when element density exceeds thresholds. Resizing copies all elements to a larger table, an O(n) cost that amortizes to O(1) per insertion. Individual operations run in constant time apart from the rare resize.
Worst-case performance degrades to O(n) when hash collisions force all elements into the same bucket. Poor hash functions or adversarial inputs cause this behavior. Ruby seeds its hash function (SipHash for strings) with a random value at startup to blunt collision attacks. Real-world performance matches average-case expectations for non-malicious inputs.
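The degradation is easy to demonstrate with a hypothetical BadKey class whose hash method is deliberately constant, forcing every element into one bucket:

```ruby
require 'set'

# Hypothetical key class with a worst-case hash function
class BadKey
  attr_reader :id
  def initialize(id)
    @id = id
  end
  # Constant hash value: every instance collides into the same bucket
  def hash
    0
  end
  # Hash uses eql? to distinguish keys within a bucket
  def eql?(other)
    other.is_a?(BadKey) && id == other.id
  end
end

good = Set.new(1..1_000)                            # well-distributed Integer keys
bad  = Set.new((1..1_000).map { |i| BadKey.new(i) })
# Lookups in `bad` scan the single shared bucket, degrading toward O(n),
# while `good` keeps O(1) average membership tests.
bad.include?(BadKey.new(500)) # => true, but via a linear bucket scan
```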
Tree-based set implementations using balanced binary search trees provide O(log n) guaranteed performance for all operations. Because they need no hash function, collision concerns disappear entirely. Tree structures maintain sorted order, enabling efficient range queries and ordered iteration. Ruby's SortedSet class historically provided ordered behavior but was removed from the standard library in Ruby 3.0 and now lives in the sorted_set gem.
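With SortedSet extracted to a gem, one plain-Ruby sketch of ordered iteration and range queries sorts a Set's elements once and binary-searches the resulting array (reasonable only when the set changes rarely):

```ruby
require 'set'

set = Set[42, 7, 19, 3, 88, 21]
sorted = set.sort # sort once: O(n log n)

# Ordered iteration comes for free on the sorted array
sorted # => [3, 7, 19, 21, 42, 88]

# Range query: binary-search the first value >= 10 (O(log n)),
# then collect while values stay <= 50
lo = sorted.bsearch_index { |x| x >= 10 }
in_range = sorted[lo..].take_while { |x| x <= 50 }
in_range # => [19, 21, 42]
```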
Space overhead for hash-based sets includes the hash table's bin and entry arrays (Ruby's Hash has used open addressing since 2.4, so there are no separate collision-chain nodes). Ruby's Hash maintains roughly 1.5x to 2x memory overhead relative to the stored key count. Sets storing primitive values consume less memory than sets storing large objects, but for small objects the set structure itself dominates memory usage.
Multiset implementations using Hash share the performance characteristics of sets. Tracking counts adds marginal overhead: a single integer per unique element. Operations that iterate all elements run in O(n) time where n is the unique element count, not the total count including duplicates. Summing element counts requires a full iteration.
Set operations vary in performance with set sizes. Union, intersection, and difference iterate one or both sets, giving O(n + m) complexity where n and m are the set sizes. Intersection can be bounded by the smaller set when an implementation iterates it while probing the larger one. Symmetric difference requires processing both sets completely.
require 'set'
require 'benchmark'
# Performance comparison: Array vs Set for membership testing
array = (1..10_000).to_a
set = Set.new(array)
target = 9_999
Benchmark.bmbm do |x|
x.report("Array#include?") { 10_000.times { array.include?(target) } }
x.report("Set#include?") { 10_000.times { set.include?(target) } }
end
# Array: O(n) linear search - slower
# Set: O(1) hash lookup - much faster
# Set construction cost
large_array = (1..100_000).to_a
Benchmark.bmbm do |x|
x.report("Build Set") { Set.new(large_array) }
x.report("Build Hash") { large_array.each_with_object(Hash.new(0)) { |e, h| h[e] += 1 } }
end
# Similar performance - both use hash tables internally
Set structures pay off for applications that test membership frequently but modify collections infrequently. The construction cost is repaid through faster queries. Applications that build sets once and query thousands of times benefit dramatically. Applications that modify collections on every operation may find array operations competitive for small collections.
Multiset operations requiring full enumeration cannot optimize better than O(n) in unique element count. Computing total element count including duplicates sums all count values. Finding maximum or minimum elements requires checking all keys. These operations iterate the underlying hash table completely.
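A minimal sketch of those full-enumeration operations on a Hash-based multiset:

```ruby
counts = Hash.new(0)
%w[a b a c a b].each { |word| counts[word] += 1 }

counts.values.sum          # total elements, including duplicates => 6
counts.keys.size           # unique elements => 3
counts.max_by { |_, c| c } # most frequent element => ["a", 3]
# Each of these walks the whole table: O(n) in the unique element count.
```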
Common Patterns
Filtering with sets provides efficient deduplication in data processing pipelines. Convert input sequences to sets to eliminate duplicates, then convert back to arrays when order matters. This pattern appears in data imports, API response merging, and cache invalidation lists.
require 'set'
# Deduplicate while preserving some order
def deduplicate_preserving_first(array)
seen = Set.new
array.select { |element| seen.add?(element) }
end
tags = ['ruby', 'rails', 'ruby', 'javascript', 'rails', 'python']
deduplicate_preserving_first(tags)
# => ["ruby", "rails", "javascript", "python"]
# Bulk operations with sets
def process_user_updates(updates)
users_to_notify = Set.new
users_to_reindex = Set.new
updates.each do |update|
users_to_notify.add(update[:user_id])
users_to_reindex.add(update[:user_id]) if update[:affects_search]
end
notify_users(users_to_notify.to_a)
reindex_users(users_to_reindex.to_a)
end
Set algebra simplifies permission calculations combining multiple access rules. Union combines permissions from different sources. Intersection finds common permissions across restrictive policies. Difference removes revoked permissions from granted sets.
require 'set'
class PermissionCalculator
def initialize(user)
@user = user
end
def effective_permissions
# Parentheses required: set difference (-) binds tighter than union (|),
# and revocations must apply to permissions from every source
(role_permissions | group_permissions | direct_permissions) - revoked_permissions
end
def role_permissions
Set.new(@user.roles.flat_map(&:permissions))
end
def group_permissions
Set.new(@user.groups.flat_map(&:permissions))
end
def direct_permissions
Set.new(@user.direct_permissions)
end
def revoked_permissions
Set.new(@user.revoked_permissions)
end
def can?(permission)
effective_permissions.include?(permission)
end
end
Caching with sets tracks processed items to avoid redundant work. Background jobs, web scrapers, and data synchronization systems maintain sets of processed identifiers. Checking set membership before expensive operations eliminates duplicate processing.
require 'set'
class IncrementalSync
def initialize
@synced_ids = Set.new
load_previous_sync_state
end
def sync_new_records(records)
new_records = records.reject { |r| @synced_ids.include?(r.id) }
new_records.each do |record|
process_record(record)
@synced_ids.add(record.id)
end
save_sync_state
{ processed: new_records.size, skipped: records.size - new_records.size }
end
def reset_sync_state
@synced_ids.clear
save_sync_state
end
private
def load_previous_sync_state
# Load from persistent storage
end
def save_sync_state
# Save to persistent storage
end
def process_record(record)
# Expensive processing logic
end
end
Multiset merging combines frequency data from multiple sources. Log aggregation, metrics collection, and distributed counting scenarios accumulate counts across processes or time windows. Multiset addition sums frequencies while preserving element identity.
class MetricsAggregator
def initialize
@hourly_counts = Hash.new { |h, k| h[k] = Hash.new(0) }
end
def record_event(timestamp, event_type)
hour = timestamp.beginning_of_hour # ActiveSupport (or truncate the Time manually)
@hourly_counts[hour][event_type] += 1
end
def daily_totals(date)
hours = (0..23).map { |h| date + h.hours } # ActiveSupport's Numeric#hours
hours.reduce(Hash.new(0)) do |totals, hour|
# fetch avoids materializing empty hour entries through the default proc
@hourly_counts.fetch(hour, {}).each do |event_type, count|
totals[event_type] += count
end
totals
end
end
def top_events(date, limit = 10)
daily_totals(date).sort_by { |_, count| -count }.first(limit).to_h
end
end
Difference operations identify changes between snapshots. Configuration management, database synchronization, and version control systems compute differences to determine additions, removals, and modifications. Set difference isolates new or removed elements between versions.
require 'set'
class ConfigurationDiff
def self.compute(old_config, new_config)
old_keys = Set.new(old_config.keys)
new_keys = Set.new(new_config.keys)
{
added: new_keys - old_keys,
removed: old_keys - new_keys,
modified: (old_keys & new_keys).select { |key|
old_config[key] != new_config[key]
}.to_set,
unchanged: (old_keys & new_keys).select { |key|
old_config[key] == new_config[key]
}.to_set
}
end
end
old = { 'timeout' => 30, 'retries' => 3, 'cache' => true }
new = { 'timeout' => 60, 'retries' => 3, 'debug' => false }
diff = ConfigurationDiff.compute(old, new)
# => {
# added: #<Set: {"debug"}>,
# removed: #<Set: {"cache"}>,
# modified: #<Set: {"timeout"}>,
# unchanged: #<Set: {"retries"}>
# }
Reference
Set Methods
| Method | Description | Returns |
|---|---|---|
| Set.new(enum) | Creates set from enumerable | Set |
| add(element) | Adds element to set | Set (self) |
| add?(element) | Adds element, returns nil if present | Set or nil |
| delete(element) | Removes element from set | Set (self) |
| include?(element) | Tests membership | Boolean |
| member?(element) | Alias for include? | Boolean |
| size | Returns element count | Integer |
| empty? | Tests if set contains no elements | Boolean |
| clear | Removes all elements | Set (self) |
Set Operations
| Operation | Method | Operator | Description |
|---|---|---|---|
| Union | union(other) | \| | All elements from either set |
| Intersection | intersection(other) | & | Elements in both sets |
| Difference | difference(other) | - | Elements in first but not second |
| Symmetric Difference | N/A | ^ | Elements in exactly one set |
Set Relationships
| Method | Description | Returns |
|---|---|---|
| subset?(other) | All elements in other | Boolean |
| proper_subset?(other) | Subset but not equal | Boolean |
| superset?(other) | Contains all elements of other | Boolean |
| proper_superset?(other) | Superset but not equal | Boolean |
| disjoint?(other) | No common elements | Boolean |
Multiset Operations
| Operation | Hash Implementation | Description |
|---|---|---|
| Add element | hash[key] += 1 | Increment count |
| Remove element | hash[key] -= 1 | Decrement count (floor at zero to match set semantics) |
| Get count | hash[key] | Returns count (0 if absent) |
| Total size | hash.values.sum | Sum of all counts |
| Cardinality | hash.keys.size | Number of unique elements |
| Union | max(a[k], b[k]) | Maximum count per element |
| Intersection | min(a[k], b[k]) | Minimum count per element |
| Sum | a[k] + b[k] | Add counts per element |
| Difference | max(0, a[k] - b[k]) | Subtract with floor at zero |
Performance Characteristics
| Operation | Hash-Based Set | Tree-Based Set | Hash Multiset |
|---|---|---|---|
| Insert | O(1) average | O(log n) | O(1) average |
| Delete | O(1) average | O(log n) | O(1) average |
| Membership | O(1) average | O(log n) | O(1) average |
| Union | O(n + m) | O(n + m) | O(n + m) |
| Intersection | O(min(n, m)) | O(n + m) | O(min(n, m)) |
| Iteration | O(n) unordered | O(n) ordered | O(n) unique elements |
| Space | O(n) | O(n) | O(n) unique elements |
Common Patterns
| Pattern | Use Case | Implementation |
|---|---|---|
| Deduplication | Remove duplicates from collection | Convert to Set then to Array |
| Membership cache | Fast lookup of valid values | Store in Set, test with include? |
| Permission calculation | Combine access rules | Union for OR, intersection for AND |
| Change detection | Find additions/removals | Use set difference |
| Frequency counting | Count element occurrences | Hash with default value 0 |
| Bulk operations | Apply operation to unique items | Collect IDs in Set, process once |
Initialization Options
| Pattern | Code | Description |
|---|---|---|
| Empty set | Set.new | No elements |
| From array | Set.new(array) | Convert array to set |
| From range | Set.new(1..10) | Convert range to set |
| Literal syntax | Set[1, 2, 3] | Direct element list |
| With transformation | Set.new(array) { ... } | Transform during insert |
| Hash multiset | Hash.new(0) | Default count of zero |
Set Equality
| Test | Condition | Method |
|---|---|---|
| Equality | Same elements | == |
| Subset | All elements in other | subset? |
| Proper subset | Subset and not equal | proper_subset? |
| Superset | Contains all of other | superset? |
| Proper superset | Superset and not equal | proper_superset? |
| Disjoint | No shared elements | disjoint? |