TCPSocket and UDPSocket

Overview

Ruby provides dedicated socket classes for network communication through TCPSocket and UDPSocket. These classes wrap the underlying operating system socket APIs, offering object-oriented interfaces for TCP and UDP network operations.

TCPSocket handles connection-oriented communication, providing reliable, ordered delivery over a byte stream. The class inherits from IPSocket and provides methods for establishing client connections to TCP servers. TCP connections maintain state throughout their lifetime and handle flow control automatically.

# TCP client connection
require 'socket'

socket = TCPSocket.new('example.com', 80)
socket.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
response = socket.read
socket.close

UDPSocket manages connectionless communication through individual packet transmission. UDP operations require no connection establishment and provide no delivery guarantees. Each packet travels independently with minimal protocol overhead.

# UDP client communication
require 'socket'

socket = UDPSocket.new
socket.send('Hello UDP', 0, 'localhost', 9999)
data, addr = socket.recvfrom(1024)
socket.close

Both classes integrate with Ruby's IO system, supporting standard read and write operations. They handle address resolution automatically and provide access to underlying socket options through platform-specific system calls.

The socket classes manage network addressing through the Socket module's address family constants. IPv4 and IPv6 addresses work transparently, with automatic family detection based on address format. Socket creation accepts both string hostnames and IP addresses.
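
A short sketch of both points; the hostname, addresses, and ports below are placeholders, and the IPv6 line assumes something is actually listening on the loopback address.

# Hostnames and IP literals are interchangeable; the address family is
# detected automatically (hosts and ports are placeholders)
require 'socket'

socket = TCPSocket.new('example.com', 80)      # hostname, resolved via DNS
puts socket.is_a?(IO)                          # => true, standard IO methods apply
socket.close

socket = TCPSocket.new('93.184.216.34', 80)    # IPv4 literal
socket.close

socket = TCPSocket.new('::1', 8080)            # IPv6 literal, assuming a local IPv6 listener
socket.close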

Basic Usage

TCPSocket establishes client connections to remote TCP services. The constructor requires a hostname and port number, resolving addresses and creating connections in a single operation.

# Basic TCP client
require 'socket'

socket = TCPSocket.new('www.google.com', 80)
socket.write("GET / HTTP/1.1\r\n")
socket.write("Host: www.google.com\r\n")
socket.write("Connection: close\r\n\r\n")

response = socket.read
puts response.length
socket.close

TCP connections support bidirectional communication through standard IO methods. write hands data to the kernel's send buffer, while read with no arguments blocks until the peer closes the connection; read(length) waits for that many bytes or end of stream, and readpartial returns whatever is currently available. Partial reads occur when the network buffer holds less data than requested.

# TCP with explicit reading patterns  
socket = TCPSocket.new('localhost', 8080)
socket.write("COMMAND\n")

# Read line by line
while line = socket.gets
  puts "Received: #{line}"
  break if line.strip == 'END'
end

socket.close
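
When message sizes vary, readpartial returns whatever bytes are currently available instead of waiting for a fixed amount. A minimal sketch, assuming the same line-oriented service on localhost port 8080:

# Reading whatever is available with readpartial
require 'socket'

socket = TCPSocket.new('localhost', 8080)
socket.write("COMMAND\n")

begin
  loop do
    chunk = socket.readpartial(4096)   # returns as soon as any data arrives
    puts "Received #{chunk.bytesize} bytes"
  end
rescue EOFError
  # Peer closed the connection
ensure
  socket.close
end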

UDPSocket operates without persistent connections, sending individual packets to specified destinations. Each send operation includes the target address and port, allowing communication with multiple endpoints through a single socket.

# UDP client operations
socket = UDPSocket.new
socket.bind('localhost', 0)  # Bind to random available port

# Send to multiple destinations
socket.send('Message 1', 0, 'server1.com', 9001)  
socket.send('Message 2', 0, 'server2.com', 9002)

# Receive responses
3.times do
  data, sender = socket.recvfrom(1024)
  puts "From #{sender[2]}:#{sender[1]}: #{data}"
end

socket.close

Both socket types support server operation. TCP servers are created with TCPServer, which listens for and accepts connections; UDP servers use the same UDPSocket class as clients, binding to a port and receiving datagrams directly (a minimal UDP counterpart follows the TCP example below).

# Simple TCP echo server
server = TCPServer.new('localhost', 9999)
puts "Server listening on port 9999"

loop do
  client = server.accept
  data = client.read
  client.write("Echo: #{data}")
  client.close
end
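
For comparison, a UDP server is just a bound UDPSocket in a receive loop; there is no listen or accept step. A minimal sketch, with the port chosen arbitrarily:

# Simple UDP echo server (port 9998 is arbitrary)
require 'socket'

socket = UDPSocket.new
socket.bind('localhost', 9998)
puts "UDP server listening on port 9998"

loop do
  data, sender = socket.recvfrom(1024)                   # sender => [family, port, hostname, ip]
  socket.send("Echo: #{data}", 0, sender[3], sender[1])
end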

Address binding controls which network interfaces accept connections. Binding to '0.0.0.0' accepts connections from any interface, while 'localhost' restricts access to local connections only.
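
A sketch of the difference, using arbitrary ports:

# Interface binding (ports are arbitrary)
require 'socket'

local_only = TCPServer.new('localhost', 9996)  # reachable only via the loopback interface
any_iface  = TCPServer.new('0.0.0.0', 9997)    # reachable on every configured interface

local_only.close
any_iface.close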

Error Handling & Debugging

Network operations generate various exception types reflecting different failure modes. Connection failures, timeouts, and protocol errors each produce specific exception classes that applications must handle appropriately.

Errno::ECONNREFUSED occurs when target services refuse connections. This happens with closed ports, stopped services, or firewall blocks. Applications should distinguish between temporary and permanent connection failures.

# Connection error handling
def connect_with_retry(host, port, max_attempts = 3)
  attempts = 0
  
  begin
    attempts += 1
    socket = TCPSocket.new(host, port)
    return socket
  rescue Errno::ECONNREFUSED => e
    puts "Connection refused (attempt #{attempts}): #{e.message}"
    sleep(2)
    retry if attempts < max_attempts
    raise "Failed to connect after #{max_attempts} attempts"
  rescue Errno::ETIMEDOUT => e
    puts "Connection timeout: #{e.message}" 
    raise "Connection timeout to #{host}:#{port}"
  rescue SocketError => e
    puts "Socket error: #{e.message}"
    raise "DNS resolution failed for #{host}"
  end
end

Timeout handling prevents applications from blocking indefinitely on network operations. Ruby provides timeout mechanisms through the Timeout module and socket-specific timeout settings.

require 'timeout'

# Operation with timeout
def fetch_data(host, port, timeout_seconds = 30)
  socket = nil
  begin
    Timeout.timeout(timeout_seconds) do
      socket = TCPSocket.new(host, port)
      socket.write("GET /data HTTP/1.1\r\nHost: #{host}\r\n\r\n")
      socket.read
    end
  rescue Timeout::Error
    puts "Operation timed out after #{timeout_seconds} seconds"
    raise "Network operation timeout"
  rescue => e
    puts "Network error: #{e.class} - #{e.message}"
    raise
  ensure
    socket&.close
  end
end

UDP socket errors differ from TCP errors because of the connectionless design. send operations rarely fail immediately, but recvfrom operations can time out or fail with host- or network-unreachable errors.

# UDP with comprehensive error handling
def udp_request_response(host, port, message, timeout = 10)
  socket = UDPSocket.new
  
  begin
    socket.send(message, 0, host, port)
    
    # Set receive timeout (struct timeval: seconds, microseconds; pack format is platform-dependent)
    socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, [timeout, 0].pack("l_2"))
    
    data, addr = socket.recvfrom(1024)
    return data
    
  rescue Errno::EHOSTUNREACH
    raise "Host #{host} is unreachable"
  rescue Errno::ENETUNREACH  
    raise "Network unreachable to #{host}"
  rescue Errno::EAGAIN, Errno::EWOULDBLOCK
    raise "UDP receive timeout after #{timeout} seconds"
  rescue => e
    puts "UDP error: #{e.class} - #{e.message}"
    raise
  ensure
    socket.close if socket
  end
end

Socket debugging requires inspection of connection states, buffer contents, and network addresses. Ruby sockets provide methods for accessing this diagnostic information.

# Socket debugging utilities
def debug_tcp_socket(socket)
  puts "Socket class: #{socket.class}"
  puts "Local address: #{socket.addr.inspect}"
  puts "Remote address: #{socket.peeraddr.inspect}"
  puts "Socket closed: #{socket.closed?}"
  
  # Check socket options
  reuse = socket.getsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR)
  puts "SO_REUSEADDR: #{reuse.bool}"
  
  keepalive = socket.getsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE)  
  puts "SO_KEEPALIVE: #{keepalive.bool}"
rescue => e
  puts "Debug error: #{e.message}"
end

Thread Safety & Concurrency

Socket operations require careful coordination in multi-threaded applications. A single socket instance is not safe for uncoordinated concurrent use, but multiple threads can operate on separate sockets without synchronization.

TCP connections handle concurrent read and write operations through separate threads, but applications must ensure proper coordination to avoid data races and ensure message boundaries.

# Concurrent TCP client with separate read/write threads
def threaded_tcp_client(host, port)
  socket = TCPSocket.new(host, port)
  
  # Writer thread for sending data
  writer = Thread.new do
    loop do
      message = $stdin.gets&.chomp
      break if message.nil? || message == 'quit'
      
      socket.write("#{message}\n")
    end
    socket.close_write  # Signal no more writing
  end
  
  # Reader thread for receiving data  
  reader = Thread.new do
    while line = socket.gets
      puts "Server: #{line}"
    end
  end
  
  writer.join
  reader.join
  socket.close
end

Connection pooling enables safe socket reuse across multiple threads. Pool implementations must synchronize access to prevent simultaneous use of individual sockets.

# Thread-safe TCP connection pool
class TCPConnectionPool
  def initialize(host, port, pool_size = 10)
    @host, @port = host, port
    @pool = Queue.new  # Queue is itself thread-safe, so no extra mutex is needed
    
    pool_size.times do
      @pool << create_connection
    end
  end
  
  def with_connection
    connection = @pool.pop
    
    begin
      yield connection
    ensure
      # Return connection or create new one if broken
      if connection.closed?
        connection = create_connection
      end
      @pool << connection
    end
  end
  
  private
  
  def create_connection
    TCPSocket.new(@host, @port)
  end
end

# Usage with thread safety
pool = TCPConnectionPool.new('api.service.com', 80)

threads = 10.times.map do |i|
  Thread.new do
    pool.with_connection do |socket|
      # Connection: close lets read return once the full response arrives
      socket.write("GET /data/#{i} HTTP/1.1\r\nHost: api.service.com\r\nConnection: close\r\n\r\n")
      response = socket.read
      puts "Thread #{i}: #{response.length} bytes"
    end
  end
end

threads.each(&:join)

UDP sockets suit concurrent operation more naturally since they handle discrete datagrams rather than streams. Multiple threads can share a single UDP socket for receiving, with each datagram delivered to exactly one receiver; sends are atomic per datagram, so coordination concerns shift to shared application state rather than the socket itself.

# Multi-threaded UDP server
class UDPServer
  def initialize(port)
    @socket = UDPSocket.new
    @socket.bind('localhost', port)
    @running = true
  end
  
  def start(worker_threads = 5)
    # Create worker thread pool
    workers = worker_threads.times.map do |i|
      Thread.new do
        puts "Worker #{i} started"
        while @running
          process_message
        end
      end
    end
    
    workers.each(&:join)
  end
  
  def stop
    @running = false
    @socket.close
  end
  
  private
  
  def process_message
    return unless @running
    
    begin
      # Receive with timeout to allow shutdown
      @socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, [1, 0].pack("l_2"))
      data, sender = @socket.recvfrom(1024)
      
      # Process in current thread
      response = handle_request(data)
      @socket.send(response, 0, sender[3], sender[1])
      
    rescue Errno::EAGAIN, Errno::EWOULDBLOCK
      # Timeout occurred, continue loop
    rescue => e
      puts "Worker error: #{e.message}" if @running
    end
  end
  
  def handle_request(data)
    "Echo: #{data.upcase}"
  end
end

Protecting shared connection state with a mutex prevents race conditions when multiple threads touch the same socket. Keeping critical sections short and acquiring locks in a consistent order maintains consistency without risking deadlocks.
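
A sketch of that coordination: a small wrapper serializing writes to one shared TCP socket so messages from different threads never interleave on the wire (the class name and endpoint are illustrative):

# Serializing writes to a shared socket (illustrative)
require 'socket'

class SynchronizedWriter
  def initialize(socket)
    @socket = socket
    @mutex = Mutex.new
  end

  # Each message is written inside one critical section, preserving
  # message boundaries across threads
  def write_line(message)
    @mutex.synchronize { @socket.write("#{message}\n") }
  end
end

writer = SynchronizedWriter.new(TCPSocket.new('localhost', 9999))
threads = 4.times.map do |i|
  Thread.new { 10.times { |n| writer.write_line("thread #{i} message #{n}") } }
end
threads.each(&:join)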

Performance & Memory

Network socket performance depends on buffer sizes, system call frequency, and data copying patterns. Ruby socket operations involve transitions between Ruby and C code, affecting throughput and latency characteristics.

Buffer size configuration controls memory usage and network efficiency. Larger buffers reduce system call overhead but increase memory consumption. Applications should balance buffer sizes based on message patterns and available memory.

# Performance-optimized TCP reading
def efficient_tcp_read(socket, expected_size = nil)
  if expected_size && expected_size > 8192
    # Use larger buffer for big transfers
    buffer_size = [expected_size, 65536].min
    socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVBUF, buffer_size)
  end
  
  # Read in chunks to avoid large allocations
  chunks = []
  total_size = 0
  
  while chunk = socket.readpartial(8192)
    chunks << chunk
    total_size += chunk.length
    
    break if expected_size && total_size >= expected_size
  end
  
  chunks.join
rescue EOFError
  chunks.join
end

# Benchmark buffer size impact
require 'benchmark'

def benchmark_buffer_sizes(host, port, data_size)
  Benchmark.bm(15) do |x|
    [1024, 4096, 8192, 16384, 65536].each do |buffer_size|
      x.report("#{buffer_size} bytes") do
        socket = TCPSocket.new(host, port)
        socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_SNDBUF, buffer_size)
        
        data = 'x' * data_size
        socket.write(data)
        socket.close
      end
    end
  end
end

Memory allocation patterns affect performance in high-throughput applications. Reusing string buffers and avoiding unnecessary copying improves efficiency.

# Memory-efficient UDP packet processing
class UDPPacketProcessor
  def initialize(max_packet_size = 1500)
    @buffer = String.new(capacity: max_packet_size)
    @socket = UDPSocket.new
    @stats = { packets: 0, bytes: 0, errors: 0 }
  end
  
  def bind(host, port)
    @socket.bind(host, port)
  end
  
  def process_packets(duration_seconds = 60)
    start_time = Time.now
    
    while Time.now - start_time < duration_seconds
      begin
        @buffer.clear
        data, addr = @socket.recvfrom_nonblock(1500, 0, @buffer)
        
        @stats[:packets] += 1
        @stats[:bytes] += data.length
        
        # Process without additional allocation
        process_packet_data(@buffer, addr)
        
      rescue IO::WaitReadable
        # No data available, continue
        sleep 0.01
      rescue => e
        @stats[:errors] += 1
      end
    end
    
    print_statistics(duration_seconds)
  end
  
  private
  
  def process_packet_data(buffer, addr)
    # Process buffer in place without creating new strings
    if buffer.start_with?('PING')
      response = 'PONG'
      @socket.send(response, 0, addr[3], addr[1])
    end
  end
  
  def print_statistics(duration)
    puts "Packets: #{@stats[:packets]}"
    puts "Bytes: #{@stats[:bytes]}"
    puts "Errors: #{@stats[:errors]}"
    puts "Rate: #{@stats[:packets] / duration.to_f} packets/sec"
  end
end

Connection reuse eliminates handshake overhead in TCP applications. Persistent connections amortize connection establishment costs across multiple operations.

# High-performance TCP connection manager
class PersistentTCPConnection
  def initialize(host, port, keepalive_interval = 30)
    @host, @port = host, port
    @keepalive_interval = keepalive_interval
    @socket = nil
    @last_activity = Time.now
    @mutex = Mutex.new
  end
  
  def request(data, timeout = 10)
    @mutex.synchronize do
      ensure_connection
      
      @socket.write(data)
      
      response = read_with_timeout(timeout)
      @last_activity = Time.now
      
      response
    end
  end
  
  def close
    @mutex.synchronize do
      @socket&.close
      @socket = nil
    end
  end
  
  private
  
  def ensure_connection
    if @socket.nil? || @socket.closed? || stale_connection?
      reconnect
    end
  end
  
  def stale_connection?
    Time.now - @last_activity > @keepalive_interval
  end
  
  def reconnect
    @socket&.close
    @socket = TCPSocket.new(@host, @port)
    
    # Enable TCP keepalive (the TCP_KEEP* constants are platform-dependent)
    @socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, true)
    @socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPINTVL, 10)
    @socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPCNT, 3)
  end
  
  def read_with_timeout(timeout)
    # IO.select enforces the timeout; readpartial then returns the first
    # available chunk instead of reading to end of stream, which would
    # otherwise block until the peer closes this persistent connection
    unless IO.select([@socket], nil, nil, timeout)
      raise "Read timeout after #{timeout} seconds"
    end
    @socket.readpartial(65536)
  end
end

Production Patterns

Production socket applications require robust error handling, monitoring, and graceful shutdown capabilities. Applications must handle network partitions, service restarts, and capacity limits.

Health checks verify network service availability and response times. Regular connectivity tests detect failures before they affect user operations.

# Production-ready TCP health monitor
require 'socket'
require 'logger'

class ServiceHealthMonitor
  def initialize(services)
    @services = services  # Hash of service_name => {host:, port:, path:}
    @results = {}
    @logger = Logger.new(STDOUT)
  end
  
  def check_all_services
    @services.each do |name, config|
      @results[name] = check_service(name, config)
    end
    
    log_results
    @results
  end
  
  def healthy_services
    @results.select { |name, result| result[:healthy] }.keys
  end
  
  private
  
  def check_service(name, config)
    start_time = Time.now
    
    begin
      socket = TCPSocket.new(config[:host], config[:port])
      
      if config[:path]
        socket.write("GET #{config[:path]} HTTP/1.1\r\nHost: #{config[:host]}\r\nConnection: close\r\n\r\n")
        response = socket.gets
        healthy = response&.include?('200 OK')
      else
        healthy = true
      end
      
      response_time = Time.now - start_time
      socket.close
      
      {
        healthy: healthy,
        response_time: response_time,
        error: nil,
        checked_at: Time.now
      }
      
    rescue => e
      {
        healthy: false,
        response_time: nil,
        error: e.message,
        checked_at: Time.now
      }
    end
  end
  
  def log_results
    @results.each do |name, result|
      if result[:healthy]
        @logger.info "Service #{name}: healthy (#{result[:response_time]}s)"
      else
        @logger.error "Service #{name}: unhealthy - #{result[:error]}"
      end
    end
  end
end

# Monitor usage (hosts are placeholders; the HTTP check is plain text, so point it at a non-TLS port)
services = {
  'web_api' => { host: 'api.company.com', port: 80, path: '/health' },
  'database' => { host: 'db.company.com', port: 5432 },
  'cache' => { host: 'cache.company.com', port: 6379 }
}

monitor = ServiceHealthMonitor.new(services)
results = monitor.check_all_services
puts "Healthy services: #{monitor.healthy_services.join(', ')}"

Graceful shutdown handling ensures connections close cleanly and data completes processing. Signal handlers coordinate shutdown across multiple threads and connections.

# Production TCP server with graceful shutdown
require 'socket'
require 'logger'

class ProductionTCPServer
  def initialize(host, port, max_connections = 100)
    @host, @port = host, port
    @max_connections = max_connections
    @server_socket = nil
    @client_threads = []
    @shutdown_requested = false
    @mutex = Mutex.new
    @logger = Logger.new(STDOUT)
  end
  
  def start
    setup_signal_handlers
    
    @server_socket = TCPServer.new(@host, @port)
    @logger.info "Server listening on #{@host}:#{@port}"
    
    while !@shutdown_requested
      begin
        # Accept with timeout to check shutdown flag
        client_socket = accept_with_timeout(1)
        next unless client_socket
        
        if @client_threads.length >= @max_connections
          @logger.warn "Max connections reached, rejecting client"
          client_socket.close
          next
        end
        
        handle_client_async(client_socket)
        
      rescue => e
        @logger.error "Accept error: #{e.message}"
        break if @shutdown_requested
      end
    end
    
    shutdown
  end
  
  def stop
    @shutdown_requested = true
  end
  
  private
  
  def setup_signal_handlers
    ['INT', 'TERM'].each do |signal|
      Signal.trap(signal) do
        # Only flip the flag here; Logger locks a mutex internally,
        # which cannot be acquired from inside a trap handler
        stop
      end
    end
  end
  
  def accept_with_timeout(timeout)
    return nil if @shutdown_requested
    
    ready = IO.select([@server_socket], nil, nil, timeout)
    return nil unless ready
    
    @server_socket.accept
  rescue => e
    @logger.error "Accept timeout error: #{e.message}"
    nil
  end
  
  def handle_client_async(socket)
    thread = Thread.new do
      begin
        handle_client(socket)
      rescue => e
        @logger.error "Client error: #{e.message}"
      ensure
        socket.close
        remove_client_thread(Thread.current)
      end
    end
    
    @mutex.synchronize { @client_threads << thread }
  end
  
  def handle_client(socket)
    socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, [30, 0].pack("l_2"))
    
    while !@shutdown_requested && line = socket.gets
      response = process_request(line.strip)
      socket.write("#{response}\n")
    end
  end
  
  def process_request(request)
    case request
    when /^PING/
      'PONG'
    when /^STATUS/
      "CONNECTIONS: #{@client_threads.length}"
    else
      'UNKNOWN COMMAND'
    end
  end
  
  def remove_client_thread(thread)
    @mutex.synchronize { @client_threads.delete(thread) }
  end
  
  def shutdown
    @logger.info "Shutting down server"
    
    @server_socket&.close
    
    @logger.info "Waiting for #{@client_threads.length} clients to finish"
    @client_threads.each { |t| t.join(5) }  # Wait up to 5 seconds per thread
    
    @logger.info "Server shutdown complete"
  end
end

Load balancing distributes connections across multiple backend services. Client-side load balancing provides failover capabilities and connection distribution.

# Production load balancer for TCP connections
class TCPLoadBalancer  
  def initialize(backends)
    @backends = backends.map { |b| { host: b[:host], port: b[:port], healthy: true, connections: 0 } }
    @current_backend = 0
    @mutex = Mutex.new
    @health_check_thread = start_health_monitoring
  end
  
  def get_connection
    backend = select_backend
    return nil unless backend
    
    begin
      socket = TCPSocket.new(backend[:host], backend[:port])
      increment_connections(backend)
      
      # Wrap socket to track connection lifecycle
      LoadBalancedSocket.new(socket, backend, self)
      
    rescue => e
      mark_unhealthy(backend)
      raise "Connection failed to #{backend[:host]}:#{backend[:port]} - #{e.message}"
    end
  end
  
  def connection_closed(backend)
    @mutex.synchronize { backend[:connections] -= 1 }
  end
  
  def stop
    @health_check_thread&.kill
  end
  
  private
  
  def select_backend
    @mutex.synchronize do
      healthy_backends = @backends.select { |b| b[:healthy] }
      return nil if healthy_backends.empty?
      
      # Round-robin among healthy backends
      backend = healthy_backends[@current_backend % healthy_backends.length]
      @current_backend += 1
      backend
    end
  end
  
  def increment_connections(backend)
    @mutex.synchronize { backend[:connections] += 1 }
  end
  
  def mark_unhealthy(backend)
    @mutex.synchronize { backend[:healthy] = false }
  end
  
  def start_health_monitoring
    Thread.new do
      loop do
        sleep 10
        check_backend_health
      end
    end
  end
  
  def check_backend_health
    @backends.each do |backend|
      begin
        socket = TCPSocket.new(backend[:host], backend[:port])
        socket.close
        
        @mutex.synchronize { backend[:healthy] = true }
        
      rescue
        @mutex.synchronize { backend[:healthy] = false }
      end
    end
  end
end

# Wrapper socket that reports connection closure
class LoadBalancedSocket
  def initialize(socket, backend, balancer)
    @socket = socket
    @backend = backend  
    @balancer = balancer
  end
  
  def method_missing(method, *args, &block)
    # Use public_send for delegation; BasicSocket#send would try to
    # transmit the method name over the network instead
    @socket.public_send(method, *args, &block)
  end
  
  def respond_to_missing?(method, include_private = false)
    @socket.respond_to?(method, include_private)
  end
  
  def close
    @socket.close
    @balancer.connection_closed(@backend)
  end
end

Reference

TCPSocket Methods

Method | Parameters | Returns | Description
TCPSocket.new(host, port) | host (String), port (Integer) | TCPSocket | Creates TCP client connection
#read(length = nil) | length (Integer, optional) | String or nil | Reads data from connection
#write(data) | data (String) | Integer | Writes data to connection
#gets(separator = $/) | separator (String) | String or nil | Reads line from connection
#close | None | nil | Closes the connection
#close_read | None | nil | Closes read end of connection
#close_write | None | nil | Closes write end of connection
#closed? | None | Boolean | Returns connection status
#addr | None | Array | Returns local address info
#peeraddr | None | Array | Returns remote address info

UDPSocket Methods

Method | Parameters | Returns | Description
UDPSocket.new(family = Socket::AF_INET) | family (Integer, optional) | UDPSocket | Creates UDP socket
#bind(host, port) | host (String), port (Integer) | Integer | Binds socket to address
#send(data, flags, host, port) | data (String), flags (Integer), host (String), port (Integer) | Integer | Sends UDP packet
#recvfrom(maxlen, flags = 0) | maxlen (Integer), flags (Integer) | Array | Receives packet with sender info
#connect(host, port) | host (String), port (Integer) | Integer | Connects to specific endpoint
#recv(maxlen, flags = 0) | maxlen (Integer), flags (Integer) | String | Receives packet data only
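
connect and recv pair naturally on UDP: connecting fixes a default peer, so send can omit the destination and recv returns only the payload. A short sketch against a hypothetical local endpoint:

# Connected UDP socket (localhost:9999 is a placeholder)
require 'socket'

socket = UDPSocket.new
socket.connect('localhost', 9999)   # fixes the default destination
socket.send('ping', 0)              # no host/port needed once connected
reply = socket.recv(1024)           # payload only, no sender info
socket.close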

Common Socket Options

Option | Level | Description | Type
SO_REUSEADDR | SOL_SOCKET | Allow address reuse | Boolean
SO_KEEPALIVE | SOL_SOCKET | Enable TCP keepalive | Boolean
SO_RCVBUF | SOL_SOCKET | Receive buffer size | Integer
SO_SNDBUF | SOL_SOCKET | Send buffer size | Integer
SO_RCVTIMEO | SOL_SOCKET | Receive timeout | Packed time
SO_SNDTIMEO | SOL_SOCKET | Send timeout | Packed time
TCP_NODELAY | IPPROTO_TCP | Disable Nagle algorithm | Boolean
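
Boolean and integer options are set directly, while the timeout options expect a packed struct timeval; the "l_2" pack format used throughout this document is platform-dependent. A sketch against a placeholder host:

# Setting common socket options (example.com is a placeholder)
require 'socket'

socket = TCPSocket.new('example.com', 80)
socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, true)    # boolean option
socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_SNDBUF, 65536)      # integer option
socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO,
                  [10, 0].pack("l_2"))                               # struct timeval: 10s, 0us

puts socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY).bool
socket.close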

Exception Hierarchy

Exception | Inherits From | Common Causes
SocketError | StandardError | Address resolution failure
Errno::ECONNREFUSED | SystemCallError | Connection refused by peer
Errno::ETIMEDOUT | SystemCallError | Connection timeout
Errno::EHOSTUNREACH | SystemCallError | Host unreachable
Errno::ENETUNREACH | SystemCallError | Network unreachable
Errno::EADDRINUSE | SystemCallError | Address already in use
Errno::EACCES | SystemCallError | Permission denied

Address Format

# Address array format: [family, port, hostname, numeric_address]
socket.addr
# => ["AF_INET", 54321, "localhost", "127.0.0.1"]

socket.peeraddr  
# => ["AF_INET", 80, "example.com", "93.184.216.34"]

Socket States

State | TCP | UDP | Description
CLOSED | ✓ | ✓ | Socket closed
LISTEN | ✓ | - | Server accepting connections
SYN_SENT | ✓ | - | Connection request sent
ESTABLISHED | ✓ | - | Connection established
FIN_WAIT | ✓ | - | Connection closing
TIME_WAIT | ✓ | - | Connection closed, waiting

Performance Tuning Options

Setting | Impact | Typical Values
Receive buffer | Memory usage, throughput | 8KB - 64KB
Send buffer | Memory usage, throughput | 8KB - 64KB
TCP_NODELAY | Latency vs efficiency | true for low-latency
SO_KEEPALIVE | Connection reliability | true for long-lived
Connection timeout | Responsiveness | 5-30 seconds
Read timeout | Application responsiveness | 10-60 seconds