CrackedRuby

Client-Server Architecture

Overview

Client-Server Architecture separates computing systems into two distinct roles: clients that request services and servers that provide services. This architectural pattern forms the foundation of most networked applications, from web browsers accessing websites to mobile apps communicating with backend APIs.

The architecture establishes a clear separation of concerns where clients handle presentation logic and user interaction while servers manage data storage, business logic, and resource sharing. This separation enables multiple clients to access shared resources through a central server, creating efficient resource utilization and centralized management.

The model emerged in the 1980s as networks became more prevalent and computing moved away from monolithic mainframe systems. Organizations needed to distribute computing resources across multiple machines while maintaining centralized control over data and business rules. Client-Server Architecture addressed this need by defining explicit request-response protocols between consumer and provider systems.

A web browser exemplifies the client role: it sends HTTP requests to web servers, receives HTML/CSS/JavaScript responses, and renders content for users. The web server exemplifies the server role: it listens for incoming requests, processes them according to business logic, retrieves data from databases, and returns formatted responses. Neither component can function alone—the client needs the server's resources, and the server exists to serve clients.

# Basic client-server interaction (run the server and the client in
# separate processes -- `accept` blocks until a client connects)
require 'socket'

# Server listens for connections
server = TCPServer.new(8080)
client_connection = server.accept
request = client_connection.gets
client_connection.puts "Response to: #{request.strip}"
client_connection.close

# Client connects and sends request
client = TCPSocket.new('localhost', 8080)
client.puts 'GET /data'
response = client.gets
# => "Response to: GET /data"

Key Principles

Request-Response Protocol defines the fundamental communication pattern. Clients initiate interactions by sending requests to servers. Servers process these requests and return responses. This asymmetric relationship means clients must know server locations (addresses), but servers need not know about clients until they receive requests. The protocol determines message format, expected behavior, error handling, and response timing.

Stateless vs Stateful Communication represents a critical design decision. Stateless servers treat each request independently, maintaining no information about previous interactions. Each request contains all information needed for processing. Stateful servers remember client context across multiple requests, tracking session data, user preferences, or transaction state. HTTP exemplifies stateless design—each request includes authentication tokens and necessary parameters. Database connections exemplify stateful design—the server maintains transaction context and cursor positions.

Synchronous vs Asynchronous Operations determine how clients wait for responses. Synchronous clients block execution until the server responds, simplifying code flow but limiting concurrency. Asynchronous clients continue processing while waiting for responses, handling replies through callbacks or events. A synchronous HTTP client freezes the calling thread during network I/O. An asynchronous client processes other tasks while network operations complete in the background.
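The difference is easy to demonstrate by simulating a slow server call. In this sketch, the `remote_call` helper and its 0.2-second `sleep` are stand-ins for real network I/O:

```ruby
# Simulated remote call: pretends the network takes 0.2s to respond
def remote_call(id)
  sleep 0.2
  "response #{id}"
end

# Synchronous: each call blocks the caller, so three calls take ~0.6s
t0 = Time.now
sync_results = (1..3).map { |i| remote_call(i) }
sync_time = Time.now - t0

# Asynchronous: threads overlap the waits, so three calls take ~0.2s
t0 = Time.now
threads = (1..3).map { |i| Thread.new { remote_call(i) } }
async_results = threads.map(&:value)
async_time = Time.now - t0

puts format('sync: %.2fs  async: %.2fs', sync_time, async_time)
```

The results are identical either way; only the elapsed wall-clock time differs, which is exactly the trade the asynchronous model makes for its extra complexity.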

Network Protocol Layering organizes communication into distinct levels. Application protocols (HTTP, FTP, SMTP) define message semantics and structure. Transport protocols (TCP, UDP) handle reliable delivery and error detection. Network protocols (IP) manage routing and addressing. Physical protocols control actual data transmission. Each layer provides services to the layer above while using services from the layer below.

Service Contracts establish expectations between clients and servers. These contracts specify available operations, required parameters, expected responses, error conditions, and performance characteristics. API documentation, WSDL files, OpenAPI specifications, and protocol RFCs all define service contracts. Contracts allow independent development—clients and servers can evolve separately as long as they maintain contract compliance.

Connection Management determines how clients and servers establish and maintain communication channels. Connection-oriented protocols like TCP establish dedicated channels before data exchange, maintaining state throughout the conversation. Connectionless protocols like UDP send individual messages without establishing channels. Connection pooling reuses established connections across multiple requests, reducing overhead. Long-lived connections (WebSockets) maintain bidirectional channels for continuous communication.
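The connectionless style can be seen in a few lines of UDP, in contrast to the TCP example in the Overview: there is no handshake and no channel, just a datagram addressed to a host and port.

```ruby
require 'socket'

# Connectionless messaging with UDP
receiver = UDPSocket.new
receiver.bind('127.0.0.1', 0)   # port 0 lets the OS pick a free port
port = receiver.addr[1]

sender = UDPSocket.new
sender.send('ping', 0, '127.0.0.1', port)   # no connect step required

message, _sender_addr = receiver.recvfrom(16)
puts message  # => "ping"
```

The trade-off: the sender gets no delivery guarantee, ordering, or error report, which TCP's connection setup exists to provide.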

# Stateless server - each request is independent
class StatelessServer
  def handle_request(request)
    # No session state maintained
    user_id = request[:auth_token] # Client sends auth with every request
    data = fetch_data(user_id, request[:params])
    { status: 200, body: data }
  end
end

# Stateful server - maintains session context
class StatefulServer
  def initialize
    @sessions = {}
  end
  
  def handle_request(request)
    session = @sessions[request[:session_id]] ||= {}
    session[:last_access] = Time.now
    session[:request_count] ||= 0
    session[:request_count] += 1
    
    { status: 200, body: "Request #{session[:request_count]} in this session" }
  end
end

Design Considerations

Scalability Requirements influence architectural decisions. Vertical scaling adds resources to a single server, increasing CPU, memory, or storage capacity. This approach simplifies deployment but hits physical limits. Horizontal scaling distributes load across multiple servers, providing theoretically unlimited capacity but requiring load balancing and data consistency mechanisms. Stateless designs scale horizontally more easily than stateful designs because any server can handle any request.

Fault Tolerance Strategies determine system resilience. Single points of failure create vulnerability—if the only server fails, all clients lose service. Server redundancy maintains multiple identical servers behind load balancers. Client retry logic handles transient failures by resending requests. Circuit breakers prevent cascading failures by stopping requests to failing servers. Data replication ensures availability even when servers fail.

Network Latency Impact affects user experience and architectural choices. Each client-server interaction incurs network round-trip time—sending requests, processing, and receiving responses. High-latency networks make chatty protocols (many small requests) perform poorly. Batching operations reduces round trips. Caching frequently accessed data on clients minimizes server requests. Content delivery networks place servers geographically close to clients.

Security Boundaries require careful consideration. Client-server separation creates a trust boundary—clients operate in potentially hostile environments while servers operate in controlled data centers. Servers must validate all client input, never trusting client-provided data. Authentication verifies client identity. Authorization controls resource access. Encryption protects data in transit. The server enforces business rules because clients can be modified or impersonated.

Data Consistency Models vary based on requirements. Strong consistency guarantees clients always see current data, requiring coordination that limits availability and performance. Eventual consistency allows temporary inconsistencies, improving availability and performance but complicating application logic. Read-your-writes consistency ensures clients see their own updates immediately. Causal consistency preserves cause-effect relationships.

Protocol Selection shapes the entire system. HTTP provides ubiquitous support, stateless semantics, and rich tooling but carries overhead for small messages. WebSockets offer full-duplex communication for real-time applications but require connection management. gRPC provides efficient binary protocols with strong typing but requires protocol buffer definitions. Custom TCP/UDP protocols offer maximum control but increase implementation complexity.

# Horizontal scaling with load balancing
class LoadBalancer
  def initialize(servers)
    @servers = servers
    @current = 0
  end
  
  def route_request(request)
    # Round-robin distribution
    server = @servers[@current % @servers.length]
    @current += 1
    server.handle(request)
  end
end

servers = [
  Server.new(port: 8081),
  Server.new(port: 8082),
  Server.new(port: 8083)
]
balancer = LoadBalancer.new(servers)

# Retry logic for fault tolerance (NetworkError stands in for the
# client library's transient-failure exception)
def request_with_retry(client, request, max_retries: 3)
  retries = 0
  begin
    client.call(request)  # assumes the client exposes a `call` method
  rescue NetworkError
    retries += 1
    retry if retries < max_retries
    raise
  end
end
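Circuit breakers, mentioned under fault tolerance above, complement retries: instead of hammering a failing server, the client fails fast until the server has had time to recover. A minimal sketch (the class name, thresholds, and `'backend down'` failure are illustrative):

```ruby
# After `threshold` consecutive failures the circuit opens and calls
# fail fast until `cooldown` seconds have elapsed
class CircuitBreaker
  class OpenError < StandardError; end

  def initialize(threshold: 3, cooldown: 30)
    @threshold = threshold
    @cooldown = cooldown
    @failures = 0
    @opened_at = nil
  end

  def call
    raise OpenError, 'circuit open - failing fast' if open?
    result = yield
    @failures = 0          # success closes the circuit
    @opened_at = nil
    result
  rescue OpenError
    raise
  rescue StandardError
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
    raise
  end

  def open?
    @opened_at && (Time.now - @opened_at) < @cooldown
  end
end

breaker = CircuitBreaker.new(threshold: 2, cooldown: 60)
2.times { breaker.call { raise 'backend down' } rescue nil }
breaker.open?  # => true; further calls raise CircuitBreaker::OpenError
```

Production implementations usually add a half-open state that admits a single trial request after the cooldown; here the cooldown expiring simply closes the circuit again.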

Implementation Approaches

Two-Tier Architecture places clients in direct communication with database servers. Client applications contain both presentation and business logic. Database servers manage data persistence and retrieval. This approach works well for small-scale applications with limited users and simple business logic. However, it concentrates business rules in client code, complicating updates and creating security risks. Thick clients require significant resources and updates on every machine.

Three-Tier Architecture introduces a middle tier between clients and data. Presentation tier (client) handles user interface. Application tier (middle tier) implements business logic and processes requests. Data tier stores and retrieves persistent data. This separation allows independent scaling of each tier, centralizes business logic for easier updates, and improves security by isolating the database. Web applications typically follow this pattern: browsers (presentation), application servers (business logic), database servers (data).

N-Tier Architecture extends three-tier concepts with additional layers. Separate tiers handle authentication, caching, message queuing, search indexing, and other specialized functions. Each tier focuses on specific concerns and communicates through defined interfaces. This modularity improves maintainability and allows independent technology choices for each tier. However, additional layers increase complexity and latency.

Microservices Architecture decomposes applications into small, independent services. Each service implements specific business capabilities and communicates through lightweight protocols (typically HTTP/REST). Services deploy independently, scale independently, and use different technology stacks. This approach provides maximum flexibility and scalability but introduces distributed system complexity—service discovery, distributed transactions, network failures, and operational overhead.

Peer-to-Peer Hybrid combines client-server and peer-to-peer models. Central servers handle authentication, service discovery, and coordination. Clients communicate directly with each other for data transfer. BitTorrent exemplifies this pattern: tracker servers coordinate file sharing, but actual data flows between peers. This approach reduces server load and bandwidth costs while maintaining centralized control.

# Three-tier architecture implementation
class PresentationTier
  def initialize(application_tier)
    @app_tier = application_tier
  end
  
  def render_user_profile(user_id)
    data = @app_tier.get_user_data(user_id)
    "<html><body>#{data[:name]}</body></html>"
  end
end

class ApplicationTier
  def initialize(data_tier)
    @data_tier = data_tier
  end
  
  def get_user_data(user_id)
    # Business logic: validate, transform, aggregate
    raw_data = @data_tier.fetch_user(user_id)
    {
      name: raw_data[:first_name] + ' ' + raw_data[:last_name],
      age: calculate_age(raw_data[:birth_date]),
      status: determine_status(raw_data)
    }
  end
  
  private
  
  def calculate_age(birth_date)
    ((Time.now - birth_date) / 31_557_600).to_i
  end
  
  def determine_status(data)
    data[:verified] ? 'active' : 'pending'
  end
end

class DataTier
  def initialize(db)
    @db = db  # database connection injected at startup
  end

  def fetch_user(user_id)
    # Parameterized query -- never interpolate client input into SQL
    @db.query("SELECT * FROM users WHERE id = ?", user_id).first
  end
end

# Microservices approach
class UserService
  def get_user(id)
    # Handles user data operations
  end
end

class OrderService
  def initialize(user_service)
    @user_service = user_service
  end
  
  def create_order(user_id, items)
    user = @user_service.get_user(user_id) # Service-to-service call
    # Process order logic
  end
end

Ruby Implementation

Ruby provides comprehensive support for client-server development through standard library modules and third-party gems. The socket library offers low-level TCP/UDP socket operations. The net/http library implements HTTP client functionality. Web frameworks like Rails and Sinatra build server applications with routing, middleware, and request handling.

TCP Servers use the TCPServer class to listen for incoming connections. The server blocks on accept until a client connects, then receives a TCPSocket object for communication. Each connection typically runs in a separate thread to handle multiple concurrent clients.

require 'socket'

class MultiThreadedServer
  def initialize(port)
    @server = TCPServer.new(port)
    @clients = []
  end
  
  def start
    puts "Server listening on port #{@server.addr[1]}"
    
    loop do
      client = @server.accept
      @clients << client
      
      Thread.new(client) do |connection|
        handle_client(connection)
      end
    end
  end
  
  private
  
  def handle_client(client)
    puts "Client connected: #{client.peeraddr[3]}"
    
    loop do
      request = client.gets
      break if request.nil? || request.strip == 'quit'
      
      response = process_request(request)
      client.puts response
    end
    
    client.close
    @clients.delete(client)
    puts "Client disconnected"
  end
  
  def process_request(request)
    # Business logic
    "Processed: #{request.strip.upcase}"
  end
end

server = MultiThreadedServer.new(8080)
server.start

HTTP Servers benefit from Rack, the Ruby web server interface. Rack defines a standard API between web servers and application frameworks. Applications implement a call method that receives an environment hash and returns a status, headers, and body array.

require 'rack'
require 'json'

class SimpleHTTPServer
  def call(env)
    request_method = env['REQUEST_METHOD']
    path = env['PATH_INFO']
    
    case path
    when '/'
      [200, { 'Content-Type' => 'text/html' }, ['<h1>Welcome</h1>']]
    when '/api/data'
      handle_api_request(request_method, env)
    else
      [404, { 'Content-Type' => 'text/plain' }, ['Not Found']]
    end
  end
  
  private
  
  def handle_api_request(method, env)
    case method
    when 'GET'
      data = { users: ['Alice', 'Bob'], timestamp: Time.now.to_i }
      [200, { 'Content-Type' => 'application/json' }, [data.to_json]]
    when 'POST'
      # Parse request body
      body = env['rack.input'].read
      [201, { 'Content-Type' => 'application/json' }, ['{"status":"created"}']]
    else
      [405, { 'Content-Type' => 'text/plain' }, ['Method Not Allowed']]
    end
  end
end

# Rack 2 API; under Rack 3 the handler moved to the separate rackup gem
Rack::Handler::WEBrick.run(SimpleHTTPServer.new, Port: 9292)

HTTP Clients utilize the Net::HTTP library or more modern alternatives like httparty or faraday. These libraries handle connection management, request formatting, and response parsing.

require 'net/http'
require 'json'

class APIClient
  ClientError = Class.new(StandardError)
  ServerError = Class.new(StandardError)

  def initialize(base_url)
    @base_url = URI(base_url)
  end
  
  def get(path, params = {})
    uri = @base_url.dup
    uri.path = path
    uri.query = URI.encode_www_form(params) unless params.empty?
    
    request = Net::HTTP::Get.new(uri)
    request['Accept'] = 'application/json'
    
    response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == 'https') do |http|
      http.request(request)
    end
    
    handle_response(response)
  end
  
  def post(path, data)
    uri = @base_url.dup
    uri.path = path
    
    request = Net::HTTP::Post.new(uri)
    request['Content-Type'] = 'application/json'
    request.body = data.to_json
    
    response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == 'https') do |http|
      http.request(request)
    end
    
    handle_response(response)
  end
  
  private
  
  def handle_response(response)
    case response.code.to_i
    when 200..299
      JSON.parse(response.body)
    when 400..499
      raise ClientError, "Client error: #{response.code} - #{response.body}"
    when 500..599
      raise ServerError, "Server error: #{response.code} - #{response.body}"
    else
      raise "Unexpected response: #{response.code}"
    end
  end
end

client = APIClient.new('https://api.example.com')
users = client.get('/users', { limit: 10 })
result = client.post('/users', { name: 'Charlie', email: 'charlie@example.com' })

WebSocket Servers maintain persistent bidirectional connections. The faye-websocket gem provides WebSocket support for Rack applications.

require 'faye/websocket'
require 'eventmachine'
require 'json'

class WebSocketServer
  def initialize
    @clients = []
  end
  
  def call(env)
    if Faye::WebSocket.websocket?(env)
      ws = Faye::WebSocket.new(env)
      
      ws.on :open do |event|
        @clients << ws
        broadcast({ type: 'user_joined', count: @clients.length })
      end
      
      ws.on :message do |event|
        message = JSON.parse(event.data)
        broadcast(message)
      end
      
      ws.on :close do |event|
        @clients.delete(ws)
        broadcast({ type: 'user_left', count: @clients.length })
      end
      
      ws.rack_response
    else
      [200, { 'Content-Type' => 'text/plain' }, ['WebSocket Server']]
    end
  end
  
  private
  
  def broadcast(message)
    @clients.each do |client|
      client.send(message.to_json)
    end
  end
end

EM.run do
  Rack::Handler.get('thin').run(WebSocketServer.new, Port: 8080)
end

Security Implications

Authentication Mechanisms verify client identity before granting access. Basic authentication transmits credentials with each request, encoded in base64 (not encrypted). Token-based authentication issues signed tokens after initial login, which clients include in subsequent requests. OAuth delegates authentication to third-party providers. Certificate-based authentication uses public key infrastructure to verify identity. Servers must implement secure credential storage (hashed passwords with salt) and protect authentication endpoints from brute force attacks.

Authorization Controls determine what authenticated clients can access. Role-based access control (RBAC) assigns permissions to roles and roles to users. Attribute-based access control (ABAC) evaluates dynamic policies based on user attributes, resource attributes, and environmental conditions. The server enforces all authorization decisions—clients cannot bypass access controls by modifying requests.

Input Validation prevents injection attacks. Servers must validate, sanitize, and escape all client-provided data before processing. SQL injection occurs when untrusted input modifies database queries. Command injection executes arbitrary system commands. Cross-site scripting (XSS) injects malicious scripts into web pages. Parameter validation ensures data types, ranges, and formats match expectations. Whitelist validation accepts only known-good inputs.

Encryption Requirements protect data in transit and at rest. Transport Layer Security (TLS) encrypts network communication between clients and servers, preventing eavesdropping and tampering. TLS 1.2 and higher provide adequate security; earlier versions contain vulnerabilities. Certificate validation ensures clients connect to legitimate servers, not imposters. Servers encrypt sensitive data in databases to protect against unauthorized access to storage.

Session Management maintains user state securely. Session tokens must be cryptographically random, unpredictable, and sufficiently long. Servers should regenerate session IDs after authentication to prevent fixation attacks. Sessions expire after inactivity or absolute time limits. Servers invalidate sessions on logout. Storing sessions in signed, encrypted cookies reduces server state but requires careful implementation.

Rate Limiting protects servers from abuse. Request throttling limits the number of requests per client per time period. Different endpoints may have different limits—authentication endpoints need stricter limits than read-only data endpoints. Rate limiting prevents denial-of-service attacks, brute force attempts, and resource exhaustion.

require 'bcrypt'
require 'securerandom'

class SecureServer
  def initialize
    @users = {}
    @sessions = {}
    @rate_limits = Hash.new { |h, k| h[k] = { count: 0, reset_at: Time.now + 60 } }
  end
  
  def register(username, password)
    # Hash password with bcrypt
    password_hash = BCrypt::Password.create(password)
    @users[username] = { password_hash: password_hash }
    { status: 'registered' }
  end
  
  def login(username, password, client_ip)
    # Rate limiting
    unless check_rate_limit(client_ip, limit: 5, window: 60)
      return { status: 'error', message: 'Too many login attempts' }
    end
    
    user = @users[username]
    return { status: 'error', message: 'Invalid credentials' } unless user
    
    # Verify password
    password_hash = BCrypt::Password.new(user[:password_hash])
    return { status: 'error', message: 'Invalid credentials' } unless password_hash == password
    
    # Generate secure session token
    session_token = SecureRandom.hex(32)
    @sessions[session_token] = { username: username, expires_at: Time.now + 3600 }
    
    { status: 'success', token: session_token }
  end
  
  def authenticate_request(token)
    session = @sessions[token]
    return nil unless session
    return nil if session[:expires_at] < Time.now
    
    session[:username]
  end
  
  def handle_api_request(token, client_ip, path, params)
    # Rate limiting
    unless check_rate_limit(client_ip, limit: 100, window: 60)
      return { status: 'error', message: 'Rate limit exceeded' }
    end
    
    # Authentication
    username = authenticate_request(token)
    return { status: 'error', message: 'Unauthorized' } unless username
    
    # Input validation
    validated_params = validate_params(path, params)
    return { status: 'error', message: 'Invalid parameters' } unless validated_params
    
    # Authorization
    return { status: 'error', message: 'Forbidden' } unless authorized?(username, path)
    
    # Process request
    process_authenticated_request(username, path, validated_params)
  end
  
  private
  
  def check_rate_limit(client_ip, limit:, window:)
    limiter = @rate_limits[client_ip]
    
    if Time.now > limiter[:reset_at]
      limiter[:count] = 0
      limiter[:reset_at] = Time.now + window
    end
    
    limiter[:count] += 1
    limiter[:count] <= limit
  end
  
  def validate_params(path, params)
    case path
    when '/api/users'
      return nil unless params[:id].is_a?(Integer) && params[:id].positive?
    when '/api/search'
      return nil unless params[:query].is_a?(String) && params[:query].length <= 100
    end
    params
  end
  
  def authorized?(username, path)
    # Check user permissions for resource
    true # Simplified
  end
  
  def process_authenticated_request(username, path, params)
    { status: 'success', data: "Processed for #{username}" }
  end
end

Performance Considerations

Connection Pooling reuses established connections across multiple requests. Creating new TCP connections requires a three-way handshake, consuming time and system resources. Connection pools maintain a set of active connections, lending them to requests and returning them after use. This approach significantly reduces latency for workloads with many short requests. Pool sizing requires balancing resource consumption against request concurrency.

Caching Strategies reduce server load and improve response times. Client-side caching stores responses locally, eliminating network requests for repeated data. Server-side caching stores computed results in memory, avoiding expensive database queries or calculations. HTTP caching headers (Cache-Control, ETag) instruct clients and intermediaries when and how to cache responses. Cache invalidation—determining when cached data becomes stale—presents the main challenge. Time-based expiration works for stable data. Event-based invalidation provides freshness but requires additional infrastructure.

Compression reduces network bandwidth consumption. HTTP compression (gzip, brotli) shrinks text-based responses (HTML, JSON, CSS, JavaScript) by 60-80%. Compression trades CPU time for reduced transfer time. The trade-off favors compression for slow networks and large responses. Binary protocols reduce overhead compared to text protocols but require specialized parsing.
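The effect is easy to observe with the deflate algorithm that underlies gzip, available in Ruby's standard zlib binding. The payload below is an illustrative stand-in for an API response; actual ratios depend on how repetitive the data is:

```ruby
require 'zlib'
require 'json'

# A repetitive JSON payload, typical of API responses
payload = JSON.generate(users: Array.new(200) { |i| { id: i, role: 'member' } })

compressed = Zlib::Deflate.deflate(payload)
saving = 100.0 * (1 - compressed.bytesize.to_f / payload.bytesize)
puts format('%d bytes -> %d bytes (%.0f%% smaller)',
            payload.bytesize, compressed.bytesize, saving)

# The receiving side reverses it
restored = Zlib::Inflate.inflate(compressed)
restored == payload  # => true
```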

Asynchronous Processing improves server throughput. Synchronous servers dedicate one thread per client connection, limiting concurrent connections to available threads. Asynchronous servers handle many connections with few threads by multiplexing I/O operations. Event-driven architectures (EventMachine, async gems) process operations without blocking threads. Background job queues (Sidekiq, Resque) handle long-running tasks without blocking request handlers.

Database Query Optimization eliminates common bottlenecks. N+1 queries occur when code loads collections then queries for each item's related data. Eager loading fetches related data in fewer queries. Query analysis tools identify slow queries. Database indexes accelerate lookups but slow writes. Connection pooling reduces database connection overhead.
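The N+1 pattern is clearest with a query counter. This toy data layer (`USERS`, `ORDERS`, and both helpers are illustrative stand-ins for a real database) shows why eager loading matters:

```ruby
# Toy data layer that counts queries, to make the N+1 pattern visible
USERS  = [{ id: 1 }, { id: 2 }, { id: 3 }]
ORDERS = { 1 => ['order-a'], 2 => ['order-b'], 3 => ['order-c'] }
$queries = 0

def orders_for(user_id)       # one query per call
  $queries += 1
  ORDERS[user_id]
end

def orders_for_all(user_ids)  # one batched query
  $queries += 1
  ORDERS.values_at(*user_ids)
end

# N+1: a separate query for each user's orders
$queries = 0
USERS.each { |u| orders_for(u[:id]) }
n_plus_one_queries = $queries   # => 3

# Eager loading: one query for the whole collection
$queries = 0
orders_for_all(USERS.map { |u| u[:id] })
eager_queries = $queries        # => 1
```

With three users the difference is 3 queries versus 1; with a thousand users it is 1,000 versus 1, each carrying a full network round trip to the database.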

Load Balancing Algorithms distribute requests across servers. Round-robin sends requests sequentially to each server. Least connections routes to the server handling fewest active connections. Weighted distribution sends more traffic to more capable servers. Consistent hashing maintains request routing stability when servers join or leave the pool.
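Round-robin appears in the earlier LoadBalancer example; least connections is nearly as short. In this sketch `FakeServer` is a stand-in for a backend that tracks its in-flight request count:

```ruby
FakeServer = Struct.new(:name, :active_connections) do
  def handle(request)
    "#{name} handled #{request}"
  end
end

class LeastConnectionsBalancer
  def initialize(servers)
    @servers = servers
  end

  def route(request)
    # Pick the backend with the fewest active connections
    @servers.min_by(&:active_connections).handle(request)
  end
end

servers = [FakeServer.new('a', 5), FakeServer.new('b', 1), FakeServer.new('c', 3)]
balancer = LeastConnectionsBalancer.new(servers)
balancer.route('GET /data')  # => "b handled GET /data"
```

Least connections adapts to uneven request costs (a server stuck on slow requests stops receiving new ones), which plain round-robin cannot do.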

require 'connection_pool'
require 'net/http'
require 'json'  # for the cache key serialization below

class OptimizedAPIClient
  def initialize(base_url, pool_size: 5)
    @base_url = base_url
    # Connection pooling
    @pool = ConnectionPool.new(size: pool_size, timeout: 5) do
      create_connection
    end
    # Response caching
    @cache = {}
  end
  
  def get(path, params = {}, cache_ttl: nil)
    cache_key = "#{path}:#{params.to_json}"
    
    # Check cache
    if cache_ttl && @cache[cache_key]
      cached = @cache[cache_key]
      return cached[:data] if Time.now < cached[:expires_at]
    end
    
    # Use pooled connection; Net::HTTP#get takes a path string,
    # so query parameters are encoded into it
    request_path = params.empty? ? path : "#{path}?#{URI.encode_www_form(params)}"
    response = @pool.with do |connection|
      connection.get(request_path)
    end
    
    # Cache response
    if cache_ttl
      @cache[cache_key] = {
        data: response,
        expires_at: Time.now + cache_ttl
      }
    end
    
    response
  end
  
  private
  
  def create_connection
    # Create persistent HTTP connection
    uri = URI(@base_url)
    Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == 'https')
  end
end

# Asynchronous server with EventMachine
require 'eventmachine'

class AsyncServer
  def initialize(port)
    @port = port
  end
  
  def start
    EM.run do
      EM.start_server('0.0.0.0', @port, ConnectionHandler)
      puts "Async server listening on port #{@port}"
    end
  end
end

module ConnectionHandler
  def receive_data(data)
    # Process asynchronously
    EM.defer(
      proc { process_request(data) },  # Background thread
      proc { |result| send_data(result) }  # Callback
    )
  end
  
  def process_request(data)
    # Simulate processing
    sleep 0.1
    "Processed: #{data}"
  end
end

# Background job processing
require 'sidekiq'

class EmailWorker
  include Sidekiq::Worker
  
  def perform(user_id, template)
    # Long-running task processed asynchronously
    user = User.find(user_id)
    send_email(user, template)
  end
end

# API endpoint queues job instead of processing synchronously
def handle_signup(params)
  user = create_user(params)
  EmailWorker.perform_async(user.id, 'welcome')  # Non-blocking
  { status: 'success', user_id: user.id }
end

Tools & Ecosystem

Rack provides the standard Ruby web server interface. All major Ruby web frameworks (Rails, Sinatra, Hanami) build on Rack. Rack middleware components handle cross-cutting concerns: logging, authentication, compression, caching. Custom middleware implements shared functionality across applications.

Puma serves as the default application server for Rails. Puma uses a multi-threaded, multi-process model to maximize concurrency. Threads handle multiple requests concurrently within each worker process. Clustered mode runs multiple worker processes for additional parallelism. Configuration controls thread count, worker count, and connection timeout.

Thin provides an EventMachine-based server for asynchronous Ruby applications. Thin handles WebSocket connections and other long-lived connections efficiently. Applications built with em-http-request or faye-websocket pair well with Thin.

Unicorn uses a pre-fork worker model where a master process spawns worker processes that handle requests. Each worker handles one request at a time. This model tolerates memory leaks and misbehaving requests by restarting workers. Configuration specifies worker count and timeout values.
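The pre-fork model can be sketched with Ruby's own fork and sockets (a toy version of what Unicorn does, minus worker monitoring and graceful restarts): the master opens the listening socket once, forks workers that all call accept on it, and the kernel hands each incoming connection to exactly one idle worker.

```ruby
require 'socket'

# Master opens the listening socket before forking
master = TCPServer.new('127.0.0.1', 0)
port = master.addr[1]

worker_pids = 2.times.map do
  fork do
    loop do
      client = master.accept        # inherited socket, shared accept queue
      client.puts "pid #{Process.pid}"
      client.close
    end
  end
end

# One of the workers answers this connection
sock = TCPSocket.new('127.0.0.1', port)
reply = sock.gets
sock.close

worker_pids.each { |pid| Process.kill('TERM', pid) }
Process.waitall
puts reply
```

Because each worker handles one request at a time in its own process, a crashed or leaking worker can simply be killed and re-forked without affecting its siblings.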

HTTParty simplifies HTTP client development with a clean DSL. The gem handles JSON/XML parsing, persistent connections, and common HTTP patterns. HTTParty suits API consumption and web scraping tasks.

require 'httparty'

class GitHubClient
  include HTTParty
  base_uri 'https://api.github.com'
  
  def initialize(token)
    @options = {
      headers: {
        'Authorization' => "token #{token}",
        'User-Agent' => 'Ruby HTTParty'
      }
    }
  end
  
  def user(username)
    self.class.get("/users/#{username}", @options)
  end
  
  def repos(username)
    self.class.get("/users/#{username}/repos", @options)
  end
end

client = GitHubClient.new('your_token')
user = client.user('octocat')
repos = client.repos('octocat')

Faraday provides a flexible HTTP client with middleware support. Middleware handles concerns like authentication, logging, caching, and retry logic. Different adapters support various HTTP backends (Net::HTTP, HTTPClient, em-http).

require 'faraday'
require 'faraday/retry'  # Faraday 2 ships the retry middleware as a separate gem

conn = Faraday.new(url: 'https://api.example.com') do |f|
  f.request :json  # Encode request bodies as JSON
  f.request :retry, max: 3, interval: 0.5  # Retry failed requests
  f.response :json  # Decode response bodies as JSON
  f.response :logger  # Log requests and responses
  f.adapter Faraday.default_adapter
end

response = conn.post('/users') do |req|
  req.headers['Authorization'] = 'Bearer token'
  req.body = { name: 'Alice', email: 'alice@example.com' }
end

ActionCable integrates WebSocket support into Rails applications. ActionCable handles connection management, channel subscriptions, and message broadcasting. Channels define communication protocols between clients and servers. Adapters support different message brokers (Redis, PostgreSQL) for multi-server deployments.

gRPC implements Google's RPC framework in Ruby. Protocol buffers define service contracts and message formats. gRPC provides strongly-typed APIs, bidirectional streaming, and efficient binary serialization. The grpc gem includes client and server implementations.

Sidekiq processes background jobs outside request-response cycles. Jobs serialize to Redis for persistence and distribution across multiple worker processes. Retry logic handles transient failures. Web dashboard monitors job processing status.

Reference

Architecture Patterns

Pattern | Description | Use Cases
Two-Tier | Direct client-database communication | Small applications, limited users
Three-Tier | Presentation, application, data layers | Web applications, moderate scale
N-Tier | Multiple specialized layers | Large enterprise systems
Microservices | Independent service deployment | High scalability requirements

Communication Models

Model | Description | Advantages | Disadvantages
Synchronous | Client blocks waiting for response | Simple programming model | Limited concurrency
Asynchronous | Client continues processing | High concurrency | Complex callback management
Stateless | Each request independent | Easy scaling | Higher bandwidth
Stateful | Server maintains session | Lower bandwidth | Scaling complexity

Ruby Server Implementations

Server | Model | Best For
Puma | Multi-threaded | General-purpose Rails
Thin | Event-driven | WebSockets, long-polling
Unicorn | Pre-fork | Stability, isolation
Passenger | Hybrid | Apache/Nginx integration

HTTP Status Codes

Code Range | Meaning | Example Use
200-299 | Success | 200 OK, 201 Created
300-399 | Redirection | 301 Moved, 304 Not Modified
400-499 | Client Error | 400 Bad Request, 404 Not Found
500-599 | Server Error | 500 Internal Error, 503 Unavailable

Security Mechanisms

Mechanism | Purpose | Implementation
TLS/SSL | Encrypt network traffic | HTTPS, certificate validation
JWT | Token-based authentication | Signed JSON tokens
OAuth 2.0 | Delegated authorization | Third-party authentication
CORS | Cross-origin requests | Response headers
Rate Limiting | Prevent abuse | Request counting per IP

Performance Optimization

Technique | Impact | Trade-offs
Connection Pooling | Reduces latency | Memory consumption
Response Caching | Improves throughput | Stale data risk
Compression | Reduces bandwidth | CPU overhead
Async Processing | Increases concurrency | Complexity
Load Balancing | Distributes load | Infrastructure cost

Common Ports

Port | Protocol | Service
80 | HTTP | Web traffic
443 | HTTPS | Secure web
22 | SSH | Remote access
3000 | HTTP | Rails development
8080 | HTTP | Alternative web
5432 | PostgreSQL | Database
6379 | Redis | Cache/queue

Ruby Client Libraries

Library | Type | Features
Net::HTTP | Standard library | Built-in, no dependencies
HTTParty | High-level client | Simple DSL, auto-parsing
Faraday | Middleware-based | Flexible, adapter support
RestClient | REST-focused | Simple API, synchronous
Typhoeus | Parallel requests | libcurl-based, high performance