CrackedRuby

Overview

User Datagram Protocol (UDP) operates as a connectionless transport layer protocol in the Internet Protocol Suite. Unlike TCP, UDP transmits datagrams without establishing connections, performing handshakes, or guaranteeing delivery. This fundamental difference makes UDP suitable for applications where speed matters more than reliability.

UDP emerged alongside TCP in the original Internet Protocol specification (RFC 768, 1980). The protocol's simplicity stems from its minimal header overhead and lack of connection state management. Each UDP datagram travels independently through the network, carrying source and destination port numbers, length information, and an optional checksum.

The protocol fits specific use cases in the software development landscape. Real-time applications like video streaming, online gaming, and VoIP telephony choose UDP because retransmitting lost packets would create more problems than dropping them. DNS queries use UDP for quick responses without connection overhead. Network monitoring tools, time synchronization protocols, and IoT device communication also rely on UDP's lightweight characteristics.

require 'socket'

# Simple UDP echo server
socket = UDPSocket.new
socket.bind('localhost', 9000)

loop do
  data, addr = socket.recvfrom(1024)
  puts "Received: #{data} from #{addr[3]}:#{addr[1]}"
  socket.send(data, 0, addr[3], addr[1])
end

This basic example demonstrates UDP's core characteristic: the server receives datagrams and sends responses without maintaining client connections or session state.

Key Principles

UDP operates on four fundamental principles that distinguish it from connection-oriented protocols.

Connectionless Communication

UDP transmits datagrams without establishing or maintaining connections. Each datagram contains complete addressing information, allowing independent routing through the network. The sender transmits data without waiting for acknowledgment or verifying the receiver's availability. This stateless operation eliminates the overhead of connection establishment, maintenance, and teardown phases.

Unreliable Delivery

The protocol provides no guarantees about datagram delivery, ordering, or duplication prevention. Datagrams may arrive out of order, multiple times, or not at all. Network congestion, routing changes, or hardware failures can cause packet loss without notification to either sender or receiver. Applications requiring reliability must implement their own mechanisms for acknowledgment, retransmission, and ordering.

Message Boundaries

UDP preserves message boundaries, treating each send operation as a discrete unit. When an application sends a 500-byte message, the receiver gets either the complete 500 bytes or nothing. The protocol never splits messages across multiple receive operations or combines multiple sends into one receive. This datagram-oriented approach differs fundamentally from TCP's byte-stream model.
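A short loopback sketch makes the boundary behavior concrete (the port is OS-assigned and the messages are arbitrary):

```ruby
require 'socket'

receiver = UDPSocket.new
receiver.bind('127.0.0.1', 0)       # port 0: let the OS pick a free port
port = receiver.addr[1]

sender = UDPSocket.new
sender.send('first message', 0, '127.0.0.1', port)
sender.send('second', 0, '127.0.0.1', port)

# Each recvfrom returns exactly one send's payload, never a merged stream
msg1, _ = receiver.recvfrom(1024)
msg2, _ = receiver.recvfrom(1024)
```

On loopback the two datagrams arrive intact and separate; a TCP socket in the same scenario could deliver "first messagesecond" in a single read.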

Minimal Protocol Overhead

The UDP header contains only 8 bytes: source port (2 bytes), destination port (2 bytes), length (2 bytes), and checksum (2 bytes). This minimal header reduces bandwidth consumption and processing requirements compared to TCP's 20-byte minimum header plus options. The protocol performs no flow control, congestion avoidance, or connection state management.

# UDP header structure representation
class UDPHeader
  attr_accessor :source_port, :dest_port, :length, :checksum
  
  def initialize(source_port, dest_port, data)
    @source_port = source_port
    @dest_port = dest_port
    @length = 8 + data.bytesize  # Header + data
    @checksum = calculate_checksum(data)
  end
  
  def calculate_checksum(data)
    # Simplified checksum calculation
    # Real implementation includes pseudo-header
    sum = @source_port + @dest_port + @length
    data.bytes.each_slice(2) do |pair|
      sum += (pair[0] << 8) + (pair[1] || 0)
    end
    ~((sum & 0xFFFF) + (sum >> 16)) & 0xFFFF
  end
end

Multiplexing Through Port Numbers

Port numbers enable multiple applications to share a single network interface. The combination of IP address and port number creates a unique endpoint called a socket. A single host can run multiple UDP services simultaneously, each bound to different ports. The operating system demultiplexes incoming datagrams based on destination port numbers, routing them to the appropriate application.
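A minimal demultiplexing sketch, using nothing beyond the standard library: two sockets bound to different OS-assigned ports each receive only the datagrams addressed to their port.

```ruby
require 'socket'

service_a = UDPSocket.new
service_a.bind('127.0.0.1', 0)      # OS assigns a free port
service_b = UDPSocket.new
service_b.bind('127.0.0.1', 0)      # a second, distinct port

client = UDPSocket.new
client.send('for A', 0, '127.0.0.1', service_a.addr[1])
client.send('for B', 0, '127.0.0.1', service_b.addr[1])

# The kernel routes each datagram by its destination port
msg_a, _ = service_a.recvfrom(64)
msg_b, _ = service_b.recvfrom(64)
```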

Optional Checksum Verification

UDP includes a checksum covering the UDP header, the data, and a pseudo-header built from fields of the IP header (source and destination addresses, protocol, and UDP length). The sender calculates the checksum and includes it in the header. The receiver recalculates the checksum and compares it against the header value. A mismatch indicates corruption during transmission, causing the receiver to discard the datagram. Setting the checksum to zero disables this verification under IPv4 (it is mandatory under IPv6), though modern systems typically leave it enabled.

Ruby Implementation

Ruby provides UDP networking capabilities through the Socket library's UDPSocket class. This class wraps BSD socket operations, offering both low-level control and convenient high-level methods.

Creating UDP Sockets

require 'socket'

# Create unbound socket
socket = UDPSocket.new

# Create and bind to specific address and port
socket = UDPSocket.new
socket.bind('0.0.0.0', 8080)

# IPv6 socket
socket = UDPSocket.new(Socket::AF_INET6)
socket.bind('::', 8080)

The UDPSocket constructor accepts an address family parameter, defaulting to AF_INET for IPv4. Binding attaches the socket to a specific interface and port. Binding to '0.0.0.0' listens on all interfaces, while 'localhost' or '127.0.0.1' restricts traffic to the loopback interface.

Sending Datagrams

socket = UDPSocket.new

# Send to specific host and port
socket.send('Hello, UDP!', 0, 'example.com', 9000)

# Connect to set default destination
socket.connect('example.com', 9000)
socket.send('Hello again!', 0)

# Send with flags
socket.send(data, Socket::MSG_DONTROUTE, host, port)

The send method requires the data, flags, destination host, and destination port. Setting flags to 0 uses default behavior. Calling connect on a UDP socket doesn't establish a connection; it sets default destination parameters and enables the kernel to filter incoming datagrams, accepting only those from the connected address.

Receiving Datagrams

socket = UDPSocket.new
socket.bind('localhost', 9000)

# Receive up to 1024 bytes
data, addr = socket.recvfrom(1024)
puts "Received #{data.bytesize} bytes from #{addr[3]}:#{addr[1]}"

# Receive with a one-second timeout (recvfrom raises Errno::EAGAIN on expiry)
socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, [1, 0].pack('l_2'))

# Non-blocking receive
begin
  data, addr = socket.recvfrom_nonblock(1024)
rescue IO::EAGAINWaitReadable
  puts "No data available"
end

The recvfrom method blocks until data arrives, returning the data and an address array containing address family, port, hostname, and IP address. The buffer size parameter specifies maximum bytes to receive. Datagrams larger than the buffer get truncated without error notification.
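The truncation behavior is easy to demonstrate on loopback (the 100-byte payload and 10-byte buffer are arbitrary choices):

```ruby
require 'socket'

receiver = UDPSocket.new
receiver.bind('127.0.0.1', 0)

sender = UDPSocket.new
sender.send('x' * 100, 0, '127.0.0.1', receiver.addr[1])

# Only the first 10 bytes survive; the rest is silently discarded
data, _ = receiver.recvfrom(10)
data.bytesize  # => 10
```

Always size the receive buffer for the largest datagram the protocol can produce.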

Advanced Socket Options

require 'ipaddr'

socket = UDPSocket.new

# Enable broadcast
socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_BROADCAST, true)

# Set time-to-live for multicast
socket.setsockopt(Socket::IPPROTO_IP, Socket::IP_MULTICAST_TTL, 4)

# Join multicast group
membership = IPAddr.new('239.1.2.3').hton + 
             IPAddr.new('0.0.0.0').hton
socket.setsockopt(Socket::IPPROTO_IP, Socket::IP_ADD_MEMBERSHIP, membership)

# Set receive buffer size
socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVBUF, 65536)

Socket options control broadcast capability, multicast behavior, buffer sizes, and timeout values. The setsockopt method takes a level, option name, and value. Buffer size adjustments affect how many datagrams the kernel queues before dropping packets.

Complete Client-Server Example

# Server
class UDPServer
  def initialize(host, port)
    @socket = UDPSocket.new
    @socket.bind(host, port)
    @running = false
  end
  
  def start
    @running = true
    puts "UDP Server listening on #{@socket.addr[3]}:#{@socket.addr[1]}"
    
    while @running
      begin
        data, addr = @socket.recvfrom(1024)
        Thread.new(data, addr) { |d, a| handle_request(d, a) }
      rescue Interrupt
        stop
      end
    end
  end
  
  def handle_request(data, addr)
    puts "Received: #{data}"
    response = data.upcase
    @socket.send(response, 0, addr[3], addr[1])
  end
  
  def stop
    @running = false
    @socket.close
    puts "Server stopped"
  end
end

# Client
class UDPClient
  def initialize(host, port)
    @socket = UDPSocket.new
    @host = host
    @port = port
  end
  
  def send_message(message)
    @socket.send(message, 0, @host, @port)
    
    # Wait for response with timeout
    if IO.select([@socket], nil, nil, 2)
      response, addr = @socket.recvfrom(1024)
      puts "Response: #{response}"
      response
    else
      puts "Request timed out"
      nil
    end
  end
  
  def close
    @socket.close
  end
end

This implementation demonstrates asynchronous request handling, timeouts, and proper resource cleanup. The server spawns threads for concurrent request processing, while the client uses IO.select for timeout-based response handling.

Design Considerations

Choosing UDP requires evaluating multiple factors that affect application performance, reliability, and complexity.

UDP vs TCP Trade-offs

TCP provides reliability through acknowledgments, retransmissions, and flow control. This overhead costs time and bandwidth. UDP eliminates this overhead but shifts reliability responsibilities to the application layer. Applications must decide whether the protocol's simplicity justifies implementing custom reliability mechanisms.

Speed-critical applications favor UDP when data freshness matters more than completeness. In video conferencing, displaying current frames matters more than retransmitting dropped frames. The same principle applies to sensor networks transmitting periodic readings where stale data loses value.

UDP fits naturally with request-response patterns involving small messages. DNS queries exemplify this use case: a single datagram carries the query, another carries the response. The requester retries if no response arrives within a timeout period. This simple reliability mechanism works because entire transactions fit within single datagrams.
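That retry-on-timeout pattern can be sketched generically. The helper below is hypothetical and assumes a service that answers each request datagram with a single response datagram:

```ruby
require 'socket'

def request_with_retry(host, port, payload, retries: 3, timeout: 1.0)
  socket = UDPSocket.new
  retries.times do
    socket.send(payload, 0, host, port)
    # Resend only if no response arrives within the timeout
    if IO.select([socket], nil, nil, timeout)
      response, _ = socket.recvfrom(512)
      return response
    end
  end
  nil  # every attempt timed out
ensure
  socket&.close
end
```

Because each transaction is one datagram in and one out, a lost packet costs only one timeout period, not a connection reset.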

Application-Level Reliability Requirements

Applications needing guaranteed delivery over UDP must implement acknowledgment and retransmission logic. This involves assigning sequence numbers to messages, tracking which messages need acknowledgment, and retransmitting unacknowledged messages after timeouts. The complexity approaches TCP's implementation for fully reliable protocols.

Selective reliability offers a middle ground. Applications can implement acknowledgments for critical messages while accepting loss for non-critical data. A multiplayer game might require reliable delivery for player actions but tolerate occasional loss of position updates.
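One way to sketch selective reliability (the class and field names here are illustrative, not a standard API): tag each outgoing message, and only reliable ones enter a retransmission queue.

```ruby
class SelectiveSender
  attr_reader :awaiting_ack

  def initialize
    @seq = 0
    @awaiting_ack = {}   # seq => message, retransmitted until acknowledged
  end

  # Reliable messages get a sequence number and are tracked for acks;
  # unreliable ones are fire-and-forget
  def prepare(message, mode)
    if mode == :reliable
      @seq += 1
      @awaiting_ack[@seq] = message
      { seq: @seq, reliable: true, body: message }
    else
      { reliable: false, body: message }
    end
  end

  def acknowledge(seq)
    @awaiting_ack.delete(seq)
  end
end
```

The retransmission loop itself would look like the ReliableUDPSender shown later, but it only ever touches the tracked subset.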

Message Size Constraints

UDP datagrams face size limitations from multiple sources. The protocol's 16-bit length field allows up to 65,535 bytes including the 8-byte header. In practice, IPv4's 65,535-byte total-length limit minus the 20-byte IP header and 8-byte UDP header yields a 65,507-byte maximum payload. Network infrastructure typically imposes lower limits through the Maximum Transmission Unit (MTU).

Standard Ethernet networks use a 1500-byte MTU. Subtracting IP and UDP headers leaves approximately 1472 bytes for application data. Exceeding this size triggers IP fragmentation, where the IP layer splits the datagram into multiple fragments. Fragmentation increases loss probability because losing any fragment loses the entire datagram. Networks may also drop fragmented packets for security reasons.

# Safe datagram size calculation
class DatagramSizer
  ETHERNET_MTU = 1500
  IP_HEADER_MIN = 20
  UDP_HEADER = 8
  
  def self.safe_payload_size
    ETHERNET_MTU - IP_HEADER_MIN - UDP_HEADER
  end
  
  def self.fragment_if_needed(data, max_size = safe_payload_size)
    # Slice by bytes, not characters, so multibyte strings stay within max_size
    (0...data.bytesize).step(max_size).map { |offset| data.byteslice(offset, max_size) }
  end
end

# Usage
DatagramSizer.safe_payload_size  # => 1472
chunks = DatagramSizer.fragment_if_needed(large_data)

Latency Sensitivity Analysis

UDP reduces latency by eliminating connection establishment and acknowledgment delays. A TCP connection requires a three-way handshake before data transmission begins, adding at least one round-trip time. UDP starts transmitting immediately, making it suitable for applications where every millisecond counts.

Jitter, the variation in packet arrival times, affects real-time applications differently than average latency. Video and audio streaming applications buffer incoming data to smooth out jitter. UDP's lack of retransmission avoids the head-of-line blocking that occurs in TCP when a lost packet delays subsequent packets.
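Jitter can be estimated as the average deviation between consecutive inter-arrival gaps — the quantity a playout buffer must absorb. A minimal sketch (arrival times in seconds):

```ruby
def jitter(arrival_times)
  gaps = arrival_times.each_cons(2).map { |a, b| b - a }
  return 0.0 if gaps.size < 2
  # Average absolute change between successive gaps
  deviations = gaps.each_cons(2).map { |g1, g2| (g2 - g1).abs }
  deviations.sum / deviations.size
end
```

A stream arriving at perfectly even intervals scores zero; the larger the score, the deeper the playout buffer required.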

Firewall and NAT Considerations

Firewalls often treat UDP traffic more restrictively than TCP. Stateful firewalls track TCP connections through SYN, ACK, and FIN flags, but UDP's stateless nature makes connection tracking ambiguous. Firewalls may block UDP entirely, allow it only for specific ports, or implement short timeout periods for UDP "connections."

Network Address Translation (NAT) presents challenges for UDP applications. NAT devices maintain mappings between internal addresses and external ports. These mappings typically expire after periods of inactivity. UDP applications must send periodic keep-alive messages to maintain NAT bindings. Determining the external address and port assigned by NAT requires techniques like STUN (Session Traversal Utilities for NAT).
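A keep-alive can be as simple as a background thread sending a tiny datagram on a fixed interval. The 20-second default and one-byte payload below are assumptions to tune against the NAT timeouts you observe:

```ruby
require 'socket'

def start_keepalive(socket, host, port, interval: 20)
  Thread.new do
    loop do
      socket.send("\x00", 0, host, port)  # minimal payload refreshes the NAT mapping
      sleep interval
    end
  end
end
```

The receiving side should recognize and discard the keep-alive payload so it never reaches application logic.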

Broadcast and Multicast Capabilities

UDP supports broadcast and multicast transmission, sending single datagrams to multiple recipients. Broadcast sends to all hosts on a local network segment. Multicast sends to hosts that have joined specific multicast groups. These capabilities enable efficient one-to-many communication patterns impossible with TCP.

Broadcast and multicast introduce additional considerations. Network equipment must support and properly configure multicast routing. Multicast group management requires additional protocol support through IGMP. Applications must handle duplicate packet delivery when multiple network paths exist.

Practical Examples

DNS Query Implementation

require 'socket'
require 'resolv'

class SimpleDNSClient
  DNS_PORT = 53
  
  def initialize(dns_server = '8.8.8.8')
    @socket = UDPSocket.new
    @dns_server = dns_server
    @transaction_id = rand(0xFFFF)
  end
  
  def query(domain, record_type = 'A')
    query_packet = build_query(domain, record_type)
    @socket.send(query_packet, 0, @dns_server, DNS_PORT)
    
    if IO.select([@socket], nil, nil, 5)
      response, _ = @socket.recvfrom(512)
      parse_response(response)
    else
      { error: 'Query timeout' }
    end
  end
  
  private
  
  def build_query(domain, record_type)
    header = [@transaction_id, 0x0100, 1, 0, 0, 0].pack('n6')
    
    question = domain.split('.').map { |label|
      [label.length, label].pack('CA*')
    }.join + "\x00"
    
    type_code = record_type == 'A' ? 1 : 28  # A or AAAA
    question += [type_code, 1].pack('n2')  # Type and Class
    
    header + question
  end
  
  def parse_response(data)
    header = data.unpack('n6')
    return { error: 'Invalid response' } if header[0] != @transaction_id
    
    answer_count = header[3]
    return { error: 'No answers' } if answer_count == 0
    
    # Skip question section
    offset = 12
    loop do
      length = data.getbyte(offset)
      break if length == 0
      offset += length + 1
    end
    offset += 5  # Null byte + type + class
    
    # Parse first answer
    offset += 2  # Skip name pointer
    type, _, ttl, rdlength = data[offset, 10].unpack('nnNn')
    offset += 10
    
    address = data[offset, rdlength].bytes.join('.')
    { address: address, ttl: ttl }
  end
  
  def close
    @socket.close
  end
end

# Usage
client = SimpleDNSClient.new
result = client.query('example.com')
puts "Address: #{result[:address]}, TTL: #{result[:ttl]}"

This DNS client demonstrates building binary protocol packets, handling timeouts, and parsing responses. The 512-byte buffer size reflects DNS's traditional maximum message size over UDP.

Real-Time Game State Synchronization

class GameStateSync
  TICK_RATE = 20  # Updates per second
  PORT = 7777
  
  def initialize(is_server: false)
    @socket = UDPSocket.new
    @is_server = is_server
    @sequence_number = 0
    @last_ack = 0
    @client_states = {}
    
    if @is_server
      @socket.bind('0.0.0.0', PORT)
    end
  end
  
  def send_state(state, destination_addr = nil)
    @sequence_number += 1
    
    packet = {
      seq: @sequence_number,
      timestamp: Time.now.to_f,
      ack: @last_ack,
      state: state
    }
    
    data = Marshal.dump(packet)
    
    if @is_server && destination_addr
      @socket.send(data, 0, destination_addr[:host], destination_addr[:port])
    elsif !@is_server
      @socket.send(data, 0, 'server.example.com', PORT)
    end
  end
  
  def receive_state
    return nil unless IO.select([@socket], nil, nil, 0)
    
    data, addr = @socket.recvfrom(8192)
    packet = Marshal.load(data)
    
    # Update acknowledgment
    @last_ack = packet[:seq]
    
    # Calculate latency
    latency = Time.now.to_f - packet[:timestamp]
    
    if @is_server
      # Track client state
      client_key = "#{addr[3]}:#{addr[1]}"
      @client_states[client_key] = {
        addr: { host: addr[3], port: addr[1] },
        last_seq: packet[:seq],
        last_seen: Time.now,
        latency: latency
      }
    end
    
    {
      state: packet[:state],
      latency: latency,
      addr: { host: addr[3], port: addr[1] }
    }
  end
  
  def broadcast_state(state)
    @client_states.each do |_, client|
      send_state(state, client[:addr])
    end
  end
  
  def cleanup_stale_clients(timeout = 10)
    cutoff = Time.now - timeout
    @client_states.reject! { |_, client| client[:last_seen] < cutoff }
  end
end

# Server usage
server = GameStateSync.new(is_server: true)
game_loop = Thread.new do
  loop do
    # Receive client updates
    while (update = server.receive_state)
      puts "Client update with #{update[:latency] * 1000}ms latency"
    end
    
    # Broadcast game state
    game_state = { players: [], objects: [] }  # populate with live game data
    server.broadcast_state(game_state)
    
    server.cleanup_stale_clients
    sleep(1.0 / GameStateSync::TICK_RATE)
  end
end

This game networking implementation shows sequence numbering for detecting out-of-order packets, latency calculation, and client state management. The server tracks multiple clients without maintaining connections, demonstrating UDP's scalability advantages.

Network Discovery with Broadcast

class ServiceDiscovery
  DISCOVERY_PORT = 5353
  BROADCAST_ADDR = '255.255.255.255'
  
  def initialize(service_name)
    @service_name = service_name
    @socket = UDPSocket.new
    @socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_BROADCAST, true)
    @discovered_services = {}
  end
  
  def advertise(service_port, info = {})
    @socket.bind('0.0.0.0', DISCOVERY_PORT)
    
    Thread.new do
      loop do
        data, addr = @socket.recvfrom(1024)
        request = Marshal.load(data)
        
        if request[:type] == 'discover' && 
           request[:service] == @service_name
          response = {
            type: 'announce',
            service: @service_name,
            port: service_port,
            host: get_local_ip,
            info: info,
            timestamp: Time.now.to_i
          }
          
          @socket.send(Marshal.dump(response), 0, addr[3], addr[1])
        end
      end
    end
  end
  
  def discover(timeout = 3)
    @discovered_services.clear
    
    request = {
      type: 'discover',
      service: @service_name,
      timestamp: Time.now.to_i
    }
    
    @socket.send(Marshal.dump(request), 0, BROADCAST_ADDR, DISCOVERY_PORT)
    
    deadline = Time.now + timeout
    while Time.now < deadline
      remaining = deadline - Time.now
      next unless IO.select([@socket], nil, nil, [remaining, 0].max)
      
      data, addr = @socket.recvfrom(1024)
      response = Marshal.load(data)
      
      if response[:type] == 'announce' && 
         response[:service] == @service_name
        key = "#{addr[3]}:#{response[:port]}"
        @discovered_services[key] = {
          host: response[:host],
          port: response[:port],
          info: response[:info],
          discovered_at: Time.now
        }
      end
    end
    
    @discovered_services.values
  end
  
  private
  
  def get_local_ip
    Socket.ip_address_list.find do |addr|
      addr.ipv4? && !addr.ipv4_loopback?
    end&.ip_address || 'localhost'
  end
end

# Usage
# Service provider
discovery = ServiceDiscovery.new('my-api-service')
discovery.advertise(8080, { version: '1.2.3', capabilities: ['rest', 'grpc'] })

# Service consumer
discovery = ServiceDiscovery.new('my-api-service')
services = discovery.discover(5)
services.each do |service|
  puts "Found service at #{service[:host]}:#{service[:port]}"
  puts "Capabilities: #{service[:info][:capabilities].join(', ')}"
end

This service discovery mechanism demonstrates UDP broadcast for local network discovery. Services announce themselves in response to discovery requests without maintaining registration state.

Error Handling & Edge Cases

Packet Loss Detection and Mitigation

UDP provides no built-in notification of packet loss. Applications must implement detection mechanisms based on sequence numbers and timeouts.

class ReliableUDPSender
  def initialize(socket, destination)
    @socket = socket
    @destination = destination
    @pending_packets = {}
    @next_sequence = 0
    @retransmit_thread = start_retransmit_thread
  end
  
  def send_reliable(data, timeout = 2.0, max_retries = 3)
    sequence = @next_sequence
    @next_sequence += 1
    
    packet = {
      seq: sequence,
      data: data,
      timestamp: Time.now
    }
    
    @pending_packets[sequence] = {
      packet: packet,
      retries: 0,
      max_retries: max_retries,
      timeout: timeout,
      last_sent: Time.now
    }
    
    transmit_packet(packet)
    sequence
  end
  
  def acknowledge(sequence)
    @pending_packets.delete(sequence)
  end
  
  private
  
  def transmit_packet(packet)
    data = Marshal.dump(packet)
    @socket.send(data, 0, @destination[:host], @destination[:port])
  end
  
  def start_retransmit_thread
    Thread.new do
      loop do
        sleep 0.1
        now = Time.now
        
        # Iterate over a snapshot so deletions don't mutate the hash mid-iteration
        @pending_packets.to_a.each do |seq, info|
          next if now - info[:last_sent] < info[:timeout]
          
          if info[:retries] >= info[:max_retries]
            @pending_packets.delete(seq)
            puts "Packet #{seq} dropped after #{info[:retries]} retries"
            next
          end
          
          info[:retries] += 1
          info[:last_sent] = now
          info[:timeout] *= 1.5  # Exponential backoff
          
          transmit_packet(info[:packet])
        end
      end
    end
  end
end

class ReliableUDPReceiver
  def initialize(socket)
    @socket = socket
    @received_sequences = {}
    @duplicate_count = 0
  end
  
  def receive_reliable
    data, addr = @socket.recvfrom(8192)
    packet = Marshal.load(data)
    
    # Send acknowledgment
    ack = { type: 'ack', seq: packet[:seq] }
    @socket.send(Marshal.dump(ack), 0, addr[3], addr[1])
    
    # Check for duplicates
    if @received_sequences[packet[:seq]]
      @duplicate_count += 1
      return nil  # Discard duplicate
    end
    
    @received_sequences[packet[:seq]] = true
    cleanup_old_sequences
    
    packet[:data]
  end
  
  private
  
  def cleanup_old_sequences
    # Keep only recent sequences to prevent memory growth
    @received_sequences.shift if @received_sequences.size > 1000
  end
end

This implementation adds reliability through acknowledgments, retransmissions with exponential backoff, and duplicate detection. Applications should tune timeout values based on measured round-trip times.
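Tuning can be automated with an adaptive estimator in the style of TCP's retransmission timeout (RFC 6298): smooth the measured RTT, track its variance, and set the timeout to SRTT plus four deviations. A sketch, where the 200 ms floor is an assumption:

```ruby
class RtoEstimator
  ALPHA = 0.125   # SRTT smoothing factor (RFC 6298)
  BETA  = 0.25    # RTTVAR smoothing factor
  FLOOR = 0.2     # assumed minimum timeout, in seconds

  def initialize
    @srtt = nil
    @rttvar = nil
  end

  # Feed each measured round-trip time; returns the current timeout
  def sample(rtt)
    if @srtt.nil?
      @srtt = rtt
      @rttvar = rtt / 2.0
    else
      @rttvar = (1 - BETA) * @rttvar + BETA * (@srtt - rtt).abs
      @srtt   = (1 - ALPHA) * @srtt + ALPHA * rtt
    end
    rto
  end

  def rto
    [@srtt + 4 * @rttvar, FLOOR].max
  end
end
```

Feeding this estimator from each acknowledged packet lets the retransmit timeout track network conditions instead of relying on a fixed constant.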

Out-of-Order Delivery Handling

UDP datagrams may arrive in different order than sent. Applications requiring ordered delivery must implement sequence number checking and buffering.

class OrderedReceiver
  def initialize(socket)
    @socket = socket
    @next_expected = 0
    @buffer = {}
  end
  
  def receive_ordered
    loop do
      # Check buffer first
      if @buffer[@next_expected]
        data = @buffer.delete(@next_expected)
        @next_expected += 1
        return data
      end
      
      # Receive new packet
      raw_data, _ = @socket.recvfrom(8192)
      packet = Marshal.load(raw_data)
      
      seq = packet[:seq]
      
      if seq < @next_expected
        # Duplicate or very late packet
        next
      elsif seq == @next_expected
        # Expected packet arrived
        @next_expected += 1
        return packet[:data]
      else
        # Future packet, buffer it
        @buffer[seq] = packet[:data]
        
        # Prevent buffer overflow
        if @buffer.size > 100
          # Skip ahead and accept packets
          @next_expected = @buffer.keys.min
        end
      end
    end
  end
end

Buffer Overflow Prevention

Operating system receive buffers have finite capacity. When datagrams arrive faster than the application reads them, the kernel drops packets.

class BufferMonitor
  def initialize(socket)
    @socket = socket
    @dropped_count = 0
  end
  
  def monitor_drops
    begin
      # Get current receive buffer size
      rcvbuf = @socket.getsockopt(Socket::SOL_SOCKET, Socket::SO_RCVBUF)
      current_size = rcvbuf.int
      
      # Attempt to increase if needed
      if current_size < 262144  # 256KB
        @socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVBUF, 262144)
      end
      
      # Monitor queue depth (platform-specific)
      # Check /proc/net/udp on Linux for queue statistics
      
    rescue SystemCallError => e
      puts "Buffer monitoring failed: #{e.message}"
    end
  end
  
  def process_with_priority
    # Process high-priority messages first
    messages = []
    
    # Drain receive buffer
    while IO.select([@socket], nil, nil, 0)
      data, addr = @socket.recvfrom(8192)
      packet = Marshal.load(data)
      messages << { packet: packet, addr: addr }
    end
    
    # Sort by priority if included in packets
    messages.sort_by! { |msg| -(msg[:packet][:priority] || 0) }
    
    messages.each do |msg|
      handle_message(msg[:packet], msg[:addr])
    end
  end
end

Corruption Detection Beyond Checksums

UDP checksums catch most transmission errors but applications may add additional integrity checks.

require 'digest'
require 'zlib'

class IntegrityProtectedSender
  def initialize(socket)
    @socket = socket
  end
  
  def send_protected(data, destination)
    # Add CRC32 checksum
    checksum = Zlib.crc32(data)
    
    # Add cryptographic hash for security-critical data
    sha256 = Digest::SHA256.hexdigest(data)
    
    packet = {
      data: data,
      crc32: checksum,
      sha256: sha256,
      length: data.bytesize
    }
    
    @socket.send(Marshal.dump(packet), 0, destination[:host], destination[:port])
  end
end

class IntegrityProtectedReceiver
  def initialize(socket)
    @socket = socket
    @corruption_count = 0
  end
  
  def receive_protected
    raw_data, addr = @socket.recvfrom(65536)
    packet = Marshal.load(raw_data)
    
    data = packet[:data]
    
    # Verify length
    if data.bytesize != packet[:length]
      @corruption_count += 1
      return nil
    end
    
    # Verify CRC32
    if Zlib.crc32(data) != packet[:crc32]
      @corruption_count += 1
      return nil
    end
    
    # Verify SHA256 for critical data
    if packet[:sha256] && Digest::SHA256.hexdigest(data) != packet[:sha256]
      @corruption_count += 1
      return nil
    end
    
    data
  end
end

Handling Maximum Datagram Size

Applications must handle cases where messages exceed practical datagram sizes.

class FragmentedSender
  MTU = 1400  # Conservative MTU
  
  def initialize(socket)
    @socket = socket
    @message_id = 0
  end
  
  def send_large(data, destination)
    return send_small(data, destination) if data.bytesize <= MTU
    
    @message_id += 1
    # Fragment by bytes, not characters, so multibyte data respects the MTU
    fragments = (0...data.bytesize).step(MTU).map { |offset| data.byteslice(offset, MTU) }
    
    fragments.each_with_index do |fragment, index|
      packet = {
        msg_id: @message_id,
        fragment: index,
        total: fragments.size,
        data: fragment
      }
      
      @socket.send(Marshal.dump(packet), 0, destination[:host], destination[:port])
      sleep 0.001  # Small delay to prevent drops
    end
  end
  
  def send_small(data, destination)
    packet = { data: data }
    @socket.send(Marshal.dump(packet), 0, destination[:host], destination[:port])
  end
end

class FragmentedReceiver
  def initialize(socket)
    @socket = socket
    @pending_messages = {}
  end
  
  def receive_large(timeout = 5)
    deadline = Time.now + timeout
    
    loop do
      remaining = deadline - Time.now
      return nil if remaining <= 0
      
      next unless IO.select([@socket], nil, nil, [remaining, 0.1].min)
      
      raw_data, _ = @socket.recvfrom(8192)
      packet = Marshal.load(raw_data)
      
      # Single packet message
      return packet[:data] unless packet[:msg_id]
      
      # Fragmented message
      msg_id = packet[:msg_id]
      @pending_messages[msg_id] ||= {}
      @pending_messages[msg_id][packet[:fragment]] = packet[:data]
      
      # Check if complete
      if @pending_messages[msg_id].size == packet[:total]
        fragments = @pending_messages.delete(msg_id)
        return fragments.sort.map { |_, data| data }.join
      end
    end
  end
end

Performance Considerations

Throughput Optimization

UDP can achieve higher throughput than TCP by eliminating acknowledgment overhead. The theoretical maximum depends on network bandwidth and packet size.
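A back-of-envelope goodput calculation shows why payload size matters: fixed per-packet headers (14-byte Ethernet, 20-byte IPv4, 8-byte UDP, ignoring preamble and VLAN tags) amortize better over larger datagrams.

```ruby
HEADER_OVERHEAD = 14 + 20 + 8   # Ethernet + IPv4 + UDP, in bytes

# Payload bytes per second achievable on a link of the given bit rate
def goodput(link_bps, payload_bytes)
  bytes_per_sec = link_bps / 8.0
  frames_per_sec = bytes_per_sec / (payload_bytes + HEADER_OVERHEAD)
  (frames_per_sec * payload_bytes).round
end
```

On a 1 Mbps link, 1458-byte payloads deliver far more application data per second than 64-byte payloads, because small packets spend a large fraction of the link on headers.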

class ThroughputOptimizedSender
  def initialize(socket, destination)
    @socket = socket
    @destination = destination
    @batch_size = 100
    @send_buffer = []
  end
  
  def send_batched(data)
    @send_buffer << data
    
    if @send_buffer.size >= @batch_size
      flush
    end
  end
  
  def flush
    return if @send_buffer.empty?
    
    batch = {
      count: @send_buffer.size,
      data: @send_buffer
    }
    
    @socket.send(Marshal.dump(batch), 0, @destination[:host], @destination[:port])
    @send_buffer.clear
  end
  
  def send_continuous(data_stream)
    # Send without waiting for anything
    data_stream.each do |chunk|
      @socket.send(chunk, 0, @destination[:host], @destination[:port])
    end
  end
end

Batching multiple logical messages into single datagrams reduces per-packet overhead. This approach works when the application can tolerate slight delays for batch assembly.

CPU Efficiency

UDP requires less CPU than TCP for protocol processing. The kernel performs minimal work: checksum validation, demultiplexing to the correct socket, and copying data to userspace.

require 'benchmark'

def benchmark_udp_vs_tcp
  # UDP benchmark
  udp_time = Benchmark.realtime do
    socket = UDPSocket.new
    socket.bind('localhost', 9001)
    
    sender = UDPSocket.new
    10000.times do |i|
      sender.send("Message #{i}", 0, 'localhost', 9001)
      socket.recvfrom(100)
    end
    
    socket.close
    sender.close
  end
  
  # TCP benchmark
  tcp_time = Benchmark.realtime do
    server = TCPServer.new('localhost', 9002)
    
    reader = Thread.new do
      client = server.accept
      10000.times { client.gets }
      client.close
    end
    
    socket = TCPSocket.new('localhost', 9002)
    10000.times { |i| socket.puts "Message #{i}" }
    socket.close
    reader.join  # wait for the reader so the measurement covers all messages
    server.close
  end
  
  {
    udp: udp_time,
    tcp: tcp_time,
    speedup: (tcp_time / udp_time).round(2)
  }
end

Memory Footprint

UDP requires less memory per connection than TCP. TCP maintains send and receive buffers, congestion control state, and retransmission queues for each connection. UDP sockets maintain only receive buffers.

require 'socket'
require 'json'

class MemoryEfficientServer
  def initialize(port)
    @socket = UDPSocket.new
    @socket.bind('0.0.0.0', port)
    
    # Single socket handles all clients
    # No per-client state required
    @request_handler = method(:handle_request)
  end
  
  def run
    loop do
      data, addr = @socket.recvfrom(8192)
      
      # Process request without storing client state
      Thread.new do
        response = @request_handler.call(data)
        @socket.send(response, 0, addr[3], addr[1])
      end
    end
  end
  
  def handle_request(data)
    # Stateless request processing
    request = JSON.parse(data)
    result = process(request)  # application-specific processing
    JSON.generate(result)
  end
end

A single UDP socket serves any number of clients without maintaining per-client memory. This characteristic enables massive scalability for stateless services.

Latency Characteristics

UDP eliminates several latency sources present in TCP:

  • Connection establishment: TCP requires round-trip time before sending data
  • Acknowledgment waiting: TCP sender may wait for ACKs before sending more data
  • Head-of-line blocking: Lost TCP packets block delivery of subsequent packets

class LatencyMeasurement
  def initialize(socket)
    @socket = socket
    @samples = []
  end
  
  def measure_rtt(destination, count = 100)
    count.times do
      start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      
      @socket.send('ping', 0, destination[:host], destination[:port])
      
      if IO.select([@socket], nil, nil, 1)
        @socket.recvfrom(64)
        rtt = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000  # milliseconds
        @samples << rtt
      end
      
      sleep 0.01
    end
    
    analyze_latency
  end
  
  def analyze_latency
    return {} if @samples.empty?
    
    sorted = @samples.sort
    {
      min: sorted.first.round(2),
      max: sorted.last.round(2),
      avg: (@samples.sum / @samples.size).round(2),
      median: sorted[sorted.size / 2].round(2),
      p95: sorted[(sorted.size * 0.95).to_i].round(2),
      p99: sorted[(sorted.size * 0.99).to_i].round(2),
      jitter: calculate_jitter.round(2)
    }
  end
  
  def calculate_jitter
    return 0 if @samples.size < 2
    
    differences = @samples.each_cons(2).map { |a, b| (b - a).abs }
    differences.sum / differences.size
  end
end

Packet Loss Impact

Packet loss affects UDP differently than TCP. TCP retransmits lost packets automatically, maintaining throughput but increasing latency. UDP drops lost packets, reducing latency but potentially reducing effective throughput if applications retransmit.

class PacketLossSimulator
  def initialize(socket, loss_rate = 0.1)
    @socket = socket
    @loss_rate = loss_rate
    @sent_count = 0
    @dropped_count = 0
  end
  
  def send_with_loss(data, destination)
    @sent_count += 1
    
    if rand < @loss_rate
      @dropped_count += 1
      return :dropped
    end
    
    @socket.send(data, 0, destination[:host], destination[:port])
    :sent
  end
  
  def statistics
    {
      sent: @sent_count,
      dropped: @dropped_count,
      configured_loss_rate: @loss_rate,
      success_rate: ((@sent_count - @dropped_count).to_f / @sent_count * 100).round(2)
    }
  end
end

Applications must balance retransmission attempts against accepting loss. Aggressive retransmission can congest networks and worsen loss rates. Conservative retransmission accepts higher loss but maintains lower latency.
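A common middle ground is a small, bounded retry budget with exponential backoff: a few attempts, then give up rather than flooding a congested path. The sketch below assumes a simple request/response exchange; the class name and parameters are illustrative, not a standard API.

```ruby
require 'socket'

# Illustrative bounded-retry requester: resends a fixed number of times,
# doubling the wait on each attempt, then gives up instead of
# retransmitting aggressively during loss or congestion.
class BoundedRetrySender
  def initialize(socket, host, port, retries: 2, timeout: 0.5)
    @socket = socket
    @host = host
    @port = port
    @retries = retries
    @timeout = timeout
  end

  # Returns the response data, or nil if every attempt timed out
  def request(data)
    (@retries + 1).times do |attempt|
      @socket.send(data, 0, @host, @port)

      # Wait for a reply, backing off exponentially on each retry
      if IO.select([@socket], nil, nil, @timeout * (2 ** attempt))
        reply, _ = @socket.recvfrom(65536)
        return reply
      end
    end
    nil
  end
end
```

Returning nil on exhaustion forces the caller to decide explicitly whether the loss is acceptable, which keeps the retransmission policy visible at the application layer.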

Multicast Scaling

UDP multicast enables efficient one-to-many distribution. A single packet reaches multiple receivers without the sender transmitting multiple copies.

require 'socket'
require 'ipaddr'

class MulticastPublisher
  def initialize(multicast_addr, port)
    @socket = UDPSocket.new
    @addr = multicast_addr
    @port = port
    
    # Set multicast options
    @socket.setsockopt(Socket::IPPROTO_IP, Socket::IP_MULTICAST_TTL, 4)
    @socket.setsockopt(Socket::IPPROTO_IP, Socket::IP_MULTICAST_LOOP, 1)
  end
  
  def publish(data)
    @socket.send(data, 0, @addr, @port)
  end
  
  def publish_bulk(messages)
    # Single send reaches all subscribers
    batch = Marshal.dump(messages)
    @socket.send(batch, 0, @addr, @port)
  end
end

class MulticastSubscriber
  def initialize(multicast_addr, port)
    @socket = UDPSocket.new
    @socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, 1)
    @socket.bind('0.0.0.0', port)
    
    membership = IPAddr.new(multicast_addr).hton + 
                 IPAddr.new('0.0.0.0').hton
    @socket.setsockopt(Socket::IPPROTO_IP, Socket::IP_ADD_MEMBERSHIP, membership)
  end
  
  def receive
    data, _ = @socket.recvfrom(65536)
    Marshal.load(data)
  end
end

Multicast scales efficiently to hundreds or thousands of receivers. Network equipment forwards packets along multicast distribution trees without sender involvement.

Reference

UDP Header Structure

Field             Size (bytes)  Description
Source Port       2             Sending application port number
Destination Port  2             Receiving application port number
Length            2             Total datagram length including header
Checksum          2             Optional integrity check over data and pseudo-header

UDPSocket Methods

Method             Parameters               Description
new                address_family           Creates UDP socket with specified address family
bind               host, port               Binds socket to address and port
connect            host, port               Sets default destination for sends
send               data, flags, host, port  Sends datagram to specified destination
recvfrom           maxlen                   Receives datagram, returns data and address
recvfrom_nonblock  maxlen                   Non-blocking receive operation
setsockopt         level, option, value     Sets socket option
getsockopt         level, option            Gets socket option value
close              none                     Closes socket and releases resources
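The connect method deserves a note: on a UDP socket it performs no handshake and exchanges no packets; it only records a default peer so that send can be called with two arguments. A minimal sketch on loopback:

```ruby
require 'socket'

# Receiver bound to an OS-assigned ephemeral port on loopback
receiver = UDPSocket.new
receiver.bind('127.0.0.1', 0)
port = receiver.addr[1]

# 'Connecting' a UDP socket only stores the default destination;
# subsequent sends omit the host and port arguments
sender = UDPSocket.new
sender.connect('127.0.0.1', port)
sender.send('hello', 0)

data, _addr = receiver.recvfrom(1024)
puts data  # => "hello"
```

A connected UDP socket also filters inbound datagrams to the connected peer and can surface ICMP errors (such as ECONNREFUSED) on later operations.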

Common Socket Options

Level       Option              Purpose
SOL_SOCKET  SO_BROADCAST        Enable broadcast transmission
SOL_SOCKET  SO_RCVBUF           Set receive buffer size
SOL_SOCKET  SO_SNDBUF           Set send buffer size
SOL_SOCKET  SO_REUSEADDR        Allow address reuse
SOL_SOCKET  SO_RCVTIMEO         Set receive timeout
IPPROTO_IP  IP_MULTICAST_TTL    Set multicast time-to-live
IPPROTO_IP  IP_MULTICAST_LOOP   Enable multicast loopback
IPPROTO_IP  IP_ADD_MEMBERSHIP   Join multicast group
IPPROTO_IP  IP_DROP_MEMBERSHIP  Leave multicast group

Size Constraints

Limit                      Value         Notes
Maximum datagram size      65,507 bytes  65,535 minus IP and UDP headers
Typical MTU                1,500 bytes   Standard Ethernet MTU
Safe payload size          1,472 bytes   MTU minus IP and UDP headers
DNS maximum (traditional)  512 bytes     Without EDNS extensions
Jumbo frame MTU            9,000 bytes   Requires network support
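An application that wants to avoid IP fragmentation can split larger messages into chunks that each fit within the safe payload size before sending. A simple illustrative splitter (the function name is hypothetical):

```ruby
# Ethernet MTU (1,500) minus IP (20) and UDP (8) headers
SAFE_PAYLOAD = 1472

# Split a message into chunks that each fit in one unfragmented datagram.
# Operates on bytes so multibyte and binary data split correctly.
def split_for_udp(message, max = SAFE_PAYLOAD)
  message.bytes.each_slice(max).map { |slice| slice.pack('C*') }
end

chunks = split_for_udp('x' * 4000)
# 4,000 bytes -> three chunks of 1472, 1472, and 1056 bytes
```

Note that splitting alone does not add sequencing or reassembly; datagrams can still arrive reordered or not at all, so real protocols pair this with chunk headers.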

Port Number Ranges

Range        Description       Purpose
0-1023       Well-known ports  System services; binding requires privileges
1024-49151   Registered ports  Registered applications
49152-65535  Dynamic ports     Ephemeral client ports
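Binding to port 0 asks the operating system to assign an ephemeral port, which can then be read back from the socket:

```ruby
require 'socket'

socket = UDPSocket.new
socket.bind('127.0.0.1', 0)  # port 0 = let the OS pick an ephemeral port

port = socket.addr[1]  # addr returns [family, port, hostname, ip]
puts port              # an OS-assigned port, typically in the dynamic range
```

The exact range the kernel draws from varies by platform (Linux defaults to 32768-60999, for example), so code should read the assigned port back rather than assume a range.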

Common UDP Services

Service  Port      Description
DNS      53        Domain Name System queries
DHCP     67, 68    Dynamic Host Configuration
NTP      123       Network Time Protocol
SNMP     161, 162  Simple Network Management
TFTP     69        Trivial File Transfer Protocol
Syslog   514       System logging
mDNS     5353      Multicast DNS
QUIC     443       Quick UDP Internet Connections

Error Codes

Error         Meaning                           Common Causes
ECONNREFUSED  Connection refused                No process listening on port
EHOSTUNREACH  Host unreachable                  Routing failure or firewall
EMSGSIZE      Message too long                  Datagram exceeds maximum size or path MTU
ENOBUFS       No buffer space                   System resource exhaustion
EAGAIN        Resource temporarily unavailable  Non-blocking operation would block
EADDRINUSE    Address already in use            Port bound by another process
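In Ruby these surface as Errno exceptions. For example, attempting to send a datagram larger than the 65,507-byte maximum raises Errno::EMSGSIZE on typical systems, which can be rescued like any other error:

```ruby
require 'socket'

socket = UDPSocket.new

begin
  # 70,000 bytes exceeds the maximum UDP payload, so the send fails
  # before any packet leaves the machine
  socket.send('x' * 70_000, 0, '127.0.0.1', 9)
rescue Errno::EMSGSIZE => e
  puts "Datagram too large: #{e.message}"
end
```

Rescuing specific Errno classes keeps error handling precise; a blanket rescue of SystemCallError would also swallow unrelated failures such as EADDRINUSE.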

Performance Benchmarks

Metric                  Typical Value        Notes
Local loopback latency  10-50 microseconds   System dependent
LAN latency             0.1-1 milliseconds   Gigabit Ethernet
WAN latency             10-100 milliseconds  Internet routing
Maximum throughput      Network bandwidth    No protocol overhead beyond headers
Packet processing rate  1M+ packets/second   Modern server hardware

Reliability Patterns

Pattern                   Complexity  Use Case
Fire and forget           Minimal     Metrics, telemetry
Single retry              Low         DNS queries, simple RPC
Acknowledgment            Medium      Important messages
Selective repeat          High        File transfer, bulk data
Forward error correction  High        Real-time streaming
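Fire and forget is the simplest of these patterns: emit the datagram and move on, as statsd-style metrics clients do. A minimal illustrative emitter (the class name and "name:value" wire format are assumptions for this sketch):

```ruby
require 'socket'

# Fire-and-forget metrics emitter: no retries, no acknowledgments.
# A lost sample is simply a lost sample.
class MetricsEmitter
  def initialize(host, port)
    @socket = UDPSocket.new
    @host = host
    @port = port
  end

  def emit(name, value)
    @socket.send("#{name}:#{value}", 0, @host, @port)
  rescue SystemCallError
    # Swallow transient send errors; metrics are best-effort by design
  end
end

emitter = MetricsEmitter.new('127.0.0.1', 8125)
emitter.emit('requests.count', 1)
```

Because the emitter never blocks waiting for a reply, instrumented code pays only the cost of one send call per sample.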