
Kubernetes

Overview

The kubeclient gem provides Ruby developers with programmatic access to Kubernetes clusters. The library wraps the Kubernetes REST API, supporting resource management, cluster monitoring, and deployment automation. Ruby applications use this client to create, read, update, and delete Kubernetes resources such as pods, services, deployments, and custom resources.

The primary entry point is the Kubeclient::Client class, which establishes connections to Kubernetes API servers. The client authenticates using kubeconfig files, service account tokens, or explicit credentials. Each API group requires a separate client instance configured for specific API versions.

require 'kubeclient'

# Create client for the core API group, authenticating with a bearer token
# (authentication and SSL options are passed to the constructor)
client = Kubeclient::Client.new(
  'https://kubernetes.example.com/api/',
  'v1',
  auth_options: { bearer_token: 'your-service-account-token' },
  ssl_options: { verify_ssl: OpenSSL::SSL::VERIFY_PEER }
)

# List all pods in default namespace
pods = client.get_pods(namespace: 'default')
puts pods.count  # the returned list delegates to Array
# => 5

The client maps Kubernetes resources to Ruby objects with method-based access to properties. Resource operations follow RESTful conventions: get_* for reading, create_* for creation, update_* and patch_* for modifications, and delete_* for removal. The library handles JSON serialization and HTTP status codes automatically; each client instance is pinned to the API version it was constructed with.

# Create a new pod
pod_definition = {
  metadata: { name: 'ruby-app', namespace: 'default' },
  spec: {
    containers: [{
      name: 'app',
      image: 'ruby:3.0',
      command: ['sleep', '3600']
    }]
  }
}

new_pod = client.create_pod(pod_definition)
puts new_pod.metadata.name
# => "ruby-app"

The client supports both core Kubernetes resources and custom resource definitions (CRDs). API discovery enables dynamic resource access without hardcoded schemas. Watch operations provide real-time event streams for resource changes, supporting reactive application patterns.
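Method names are generated from API discovery at runtime rather than from a hardcoded list. A quick check against the client built above illustrates this (a minimal sketch; get_widgets stands in for an arbitrary undefined resource):

# Discovery runs lazily on first use; it can also be triggered explicitly
client.discover

# Generated methods reflect the discovered resource list
client.respond_to?(:get_pods)     # => true
client.respond_to?(:get_widgets)  # => false unless the cluster defines such a resource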

Basic Usage

Establishing cluster connections requires endpoint URLs and authentication credentials. The client constructor accepts base URLs for specific API groups, with separate instances needed for core APIs (/api/v1) and extension APIs (/apis/group/version).

# Core API client
core_client = Kubeclient::Client.new(
  'https://cluster.local/api/',
  'v1'
)

# Extensions API client for apps
apps_client = Kubeclient::Client.new(
  'https://cluster.local/apis/',
  'apps/v1'
)

Authentication methods include bearer tokens, client certificates, and kubeconfig file parsing. The Kubeclient::Config class parses a kubeconfig file from an explicit path, such as the standard ~/.kube/config location.

# Load kubeconfig from the standard location (Config.read requires a path)
config = Kubeclient::Config.read(File.join(Dir.home, '.kube', 'config'))
context = config.context('production-cluster')

client = Kubeclient::Client.new(
  context.api_endpoint,
  context.api_version,
  auth_options: context.auth_options,
  ssl_options: context.ssl_options
)

Resource retrieval operations accept namespace parameters and label selectors for filtering. The client returns resource collections or individual objects wrapped in Ruby classes that expose Kubernetes properties as methods.

# Get specific pod by name
pod = client.get_pod('nginx-pod', 'web-namespace')
puts pod.status.phase
# => "Running"

# List pods with label selector  
app_pods = client.get_pods(
  namespace: 'production',
  label_selector: 'app=frontend,version=v2'
)

app_pods.each do |pod|
  puts "#{pod.metadata.name}: #{pod.spec.containers.first.image}"
end
# => frontend-1: nginx:1.20
# => frontend-2: nginx:1.20

Resource creation requires hash definitions matching Kubernetes YAML structure. The client validates required fields and returns the created object with server-generated metadata like UIDs and timestamps.

service_spec = {
  metadata: { 
    name: 'web-service',
    namespace: 'production'
  },
  spec: {
    selector: { app: 'frontend' },
    ports: [{ port: 80, targetPort: 8080 }],
    type: 'ClusterIP'
  }
}

service = client.create_service(service_spec)
puts service.spec.clusterIP
# => "10.96.123.45"

Update operations modify existing resources through update_* methods or patch_* methods for strategic merge patches. The API server enforces optimistic concurrency control through resourceVersion; concurrent modifications surface as conflict errors that callers must handle (see Error Handling & Debugging).

# Update pod labels (resource fields are OpenStruct-style objects, not hashes)
pod = client.get_pod('app-pod', 'default')
pod.metadata.labels.version = 'v2'
pod.metadata.labels.environment = 'staging'
updated_pod = client.update_pod(pod)

puts updated_pod.metadata.labels.to_h
# => {"app"=>"frontend", "version"=>"v2", "environment"=>"staging"}

Advanced Usage

Multi-cluster operations require separate client instances with distinct authentication and endpoint configurations. Applications managing multiple environments typically maintain client pools organized by cluster context or region.

class ClusterManager
  def initialize
    @clients = {}
    load_cluster_configs
  end

  def deploy_across_clusters(manifest, cluster_names)
    cluster_names.map do |cluster|
      Thread.new do
        client = @clients[cluster][:apps]
        client.create_deployment(manifest)
      end
    end.map(&:value)
  end

  private

  def load_cluster_configs
    config = Kubeclient::Config.read(File.join(Dir.home, '.kube', 'config'))

    # Config#contexts returns an array of context names
    config.contexts.each do |context_name|
      context = config.context(context_name)
      @clients[context_name] = {
        core: create_core_client(context),
        apps: create_apps_client(context)
      }
    end
  end

  def create_core_client(context)
    Kubeclient::Client.new(
      context.api_endpoint,
      'v1',
      auth_options: context.auth_options,
      ssl_options: context.ssl_options
    )
  end

  def create_apps_client(context)
    Kubeclient::Client.new(
      "#{context.api_endpoint}/apis",
      'apps/v1',
      auth_options: context.auth_options,
      ssl_options: context.ssl_options
    )
  end
end

manager = ClusterManager.new
deployment_spec = {}  # deployment definition omitted
results = manager.deploy_across_clusters(deployment_spec, ['us-east', 'eu-west'])

Custom resource definitions extend Kubernetes APIs with application-specific objects. The client discovers CRD schemas dynamically and generates method names from resource specifications.

# Create CRD client for custom resources
crd_client = Kubeclient::Client.new(
  'https://cluster.local/apis/',
  'example.com/v1'
)

# Define custom resource instance
database_instance = {
  metadata: {
    name: 'production-db',
    namespace: 'databases'
  },
  spec: {
    engine: 'postgresql',
    version: '13.4',
    storage: '100Gi',
    replicas: 3
  }
}

# CRD methods generated from resource definitions
db = crd_client.create_database(database_instance)
puts db.spec.engine
# => "postgresql"

# List custom resources; only metadata field selectors are supported
# for custom types unless the CRD declares selectable fields
databases = crd_client.get_databases(
  namespace: 'databases',
  field_selector: 'metadata.name=production-db'
)

Watch operations establish persistent connections for real-time resource monitoring. The client streams events as resources change, supporting reactive architectures and controller patterns.

# Define handlers before starting the watch loop, which blocks
def handle_pod_creation(pod)
  if pod.status.phase == 'Pending'
    # Trigger additional resource provisioning
    create_supporting_resources(pod)
  end
end

# handle_pod_update and handle_pod_deletion follow the same pattern

# Watch pod changes with event handling
watcher = client.watch_pods(namespace: 'production')

watcher.each do |notice|
  case notice.type
  when 'ADDED'
    puts "Pod created: #{notice.object.metadata.name}"
    handle_pod_creation(notice.object)
  when 'MODIFIED'
    puts "Pod updated: #{notice.object.metadata.name}"
    handle_pod_update(notice.object)
  when 'DELETED'
    puts "Pod deleted: #{notice.object.metadata.name}"
    handle_pod_deletion(notice.object)
  end
end
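A watch stream blocks its thread until the connection drops or the stream is closed. WatchStream#finish stops a watch from another thread, which suits bounded monitoring windows; a minimal sketch:

watcher = client.watch_pods(namespace: 'production')

# Close the stream after 60 seconds from a separate thread
Thread.new do
  sleep 60
  watcher.finish
end

watcher.each do |notice|
  puts "#{notice.type}: #{notice.object.metadata.name}"
end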

Running independent requests concurrently reduces total deployment time when managing multiple resources; the number of API round-trips stays the same, so bounding concurrency keeps cluster load manageable.

# Assumes core_client, apps_client, and networking_client are configured
# elsewhere; Ingress lives under networking.k8s.io/v1, which replaced the
# deprecated extensions API group
def batch_deploy_microservices(services_config)
  deployment_futures = services_config.map do |service_name, config|
    Thread.new do
      # Create deployment
      deployment = apps_client.create_deployment(config[:deployment])
      
      # Create service
      service = core_client.create_service(config[:service])
      
      # Create ingress if specified
      ingress = networking_client.create_ingress(config[:ingress]) if config[:ingress]
      
      {
        service: service_name,
        deployment: deployment,
        service_obj: service,
        ingress: ingress
      }
    end
  end

  # Wait for all operations to complete
  results = deployment_futures.map(&:value)
  
  # Verify all deployments are ready
  results.each do |result|
    wait_for_deployment_ready(result[:deployment])
  end
  
  results
end

def wait_for_deployment_ready(deployment, timeout: 300)
  start_time = Time.now
  
  loop do
    current = apps_client.get_deployment(
      deployment.metadata.name, 
      deployment.metadata.namespace
    )
    
    # readyReplicas is absent from status until at least one replica is ready
    if (current.status.readyReplicas || 0) == current.spec.replicas
      return true
    end
    
    if Time.now - start_time > timeout
      raise "Deployment #{deployment.metadata.name} not ready within timeout"
    end
    
    sleep 5
  end
end

Error Handling & Debugging

Kubernetes API errors manifest as HTTP exceptions with specific status codes and detailed error messages. The client raises Kubeclient::HttpError for API failures, including resource conflicts, authorization issues, and validation errors.

begin
  client.create_pod(invalid_pod_spec)
rescue Kubeclient::HttpError => e
  case e.error_code
  when 409
    puts "Resource conflict: #{e.message}"
    # Handle resource already exists
  when 422
    puts "Validation error: #{e.message}"
    # Parse and display field validation failures
  when 403
    puts "Authorization denied: #{e.message}"
    # Check service account permissions
  else
    puts "API error #{e.error_code}: #{e.message}"
  end
end
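kubeclient also raises Kubeclient::ResourceNotFoundError, an HttpError subclass for 404 responses, so missing resources can be rescued separately from other API failures:

begin
  pod = client.get_pod('missing-pod', 'default')
rescue Kubeclient::ResourceNotFoundError
  # Treat a missing pod as absent rather than fatal
  pod = nil
end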

Resource version conflicts occur during concurrent modifications and surface as HTTP 409 errors. Handling them means re-reading the resource and retrying the update, typically with exponential backoff.

def update_pod_with_retry(pod_name, namespace, update_proc, max_retries: 5)
  retries = 0
  
  begin
    pod = client.get_pod(pod_name, namespace)
    updated_pod = update_proc.call(pod)
    client.update_pod(updated_pod)
  rescue Kubeclient::HttpError => e
    raise unless e.error_code == 409  # only retry version conflicts
    
    retries += 1
    if retries <= max_retries
      sleep_time = 2 ** retries
      puts "Resource version conflict, retrying in #{sleep_time}s (attempt #{retries})"
      sleep sleep_time
      retry
    else
      raise "Failed to update pod after #{max_retries} retries: #{e.message}"
    end
  end
end

# Usage with retry logic
update_proc = ->(pod) do
  pod.metadata.labels['last-updated'] = Time.now.iso8601
  pod
end

update_pod_with_retry('web-app', 'production', update_proc)

Network failures require robust handling around cluster communication. Retry-with-backoff and circuit breaker patterns improve application resilience.

# CircuitBreaker stands in for an application-defined helper (or a gem such
# as circuitbox); it is not part of kubeclient
class ResilientKubernetesClient
  def initialize(config)
    @config = config
    @circuit_breaker = CircuitBreaker.new(
      failure_threshold: 5,
      recovery_timeout: 30
    )
    @retry_config = { max_retries: 3, base_delay: 1 }
  end

  def execute_with_resilience(operation)
    @circuit_breaker.call do
      retry_with_backoff do
        yield
      end
    end
  rescue CircuitBreaker::OpenError
    raise "Kubernetes cluster unavailable during '#{operation}' - circuit breaker open"
  end

  private

  def retry_with_backoff
    retries = 0
    begin
      yield
    rescue Net::OpenTimeout, Net::ReadTimeout, Errno::ECONNREFUSED => e
      retries += 1
      if retries <= @retry_config[:max_retries]
        delay = @retry_config[:base_delay] * (2 ** (retries - 1))
        puts "Connection failed, retrying in #{delay}s: #{e.message}"
        sleep delay
        retry
      else
        raise "Connection failed after #{@retry_config[:max_retries]} retries: #{e.message}"
      end
    end
  end
end

# Usage with resilience wrapper
resilient_client = ResilientKubernetesClient.new(cluster_config)

pods = resilient_client.execute_with_resilience("list pods") do
  client.get_pods(namespace: 'production')
end

Authentication debugging requires examining certificate validity, token expiration, and RBAC permissions. The client provides access to response details for troubleshooting authorization failures.

def diagnose_auth_issues(client, bearer_token: nil)
  begin
    # Test basic cluster connectivity
    client.get_namespaces
    puts "✓ Cluster connection successful"
  rescue Kubeclient::HttpError => e
    puts "✗ Cluster connection failed: #{e.error_code} - #{e.message}"
    
    if e.error_code == 401
      puts "Authentication issue - check token/certificate validity"
      check_token_expiry(bearer_token) if bearer_token
    elsif e.error_code == 403  
      puts "Authorization issue - check RBAC permissions"
      check_service_account_permissions(client)
    end
    
    return false
  end
  
  true
end

require 'jwt'

def check_token_expiry(token)
  # Decode the JWT payload without verifying the signature
  payload = JWT.decode(token, nil, false).first
  exp_time = Time.at(payload['exp'])
  puts "Token expires: #{exp_time}"
  puts "Token expired: #{exp_time < Time.now}"
rescue JWT::DecodeError
  puts "Invalid JWT token format"
end

def check_service_account_permissions(client)
  # Test access to core API resources (apps/v1 types need their own client)
  test_resources = ['pods', 'services', 'config_maps']
  test_resources.each do |resource|
    client.send("get_#{resource}", namespace: 'default', limit: 1)
    puts "✓ Can access #{resource}"
  rescue Kubeclient::HttpError => e
    puts "✗ Cannot access #{resource}: #{e.message}"
  end
end

Performance & Memory

Large-scale cluster operations require careful resource management to prevent memory exhaustion and API server overload. The client supports pagination, field selection, and result streaming for efficient data processing.

# Process large pod lists with pagination
def process_all_pods_efficiently(client, namespace)
  processed_count = 0
  continue_token = nil
  
  loop do
    response = client.get_pods(
      namespace: namespace,
      limit: 100,  # Process in batches
      continue: continue_token
    )
    
    # Process current batch (the returned list delegates to Array)
    response.each do |pod|
      process_single_pod(pod)
      processed_count += 1
      
      # Explicit garbage collection for long-running operations
      GC.start if processed_count % 1000 == 0
    end
    
    continue_token = response.continue  # nil on the final page
    break unless continue_token
    
    puts "Processed #{processed_count} pods..."
  end
  
  processed_count
end

def process_single_pod(pod)
  # Extract only needed data to minimize memory usage
  {
    name: pod.metadata.name,
    namespace: pod.metadata.namespace,
    phase: pod.status.phase,
    node: pod.spec.nodeName
  }
end
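Field and label selectors push filtering to the API server, shrinking payloads before they reach the client:

# Only running pods are serialized and returned
running_pods = client.get_pods(
  namespace: 'production',
  field_selector: 'status.phase=Running'
)
puts running_pods.count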

Reusing client instances and tuning connection timeouts reduce network overhead for applications making frequent API calls; issuing independent reads concurrently improves throughput.

# Configure client timeouts; kubeclient accepts open/read timeouts directly
class OptimizedKubernetesClient
  def initialize(endpoint, version)
    @client = Kubeclient::Client.new(
      endpoint,
      version,
      timeouts: {
        open: 10,  # seconds to establish a connection
        read: 30   # seconds to wait for a response
      }
    )
  end

  def batch_get_resources(resource_requests)
    threads = resource_requests.map do |request|
      Thread.new do
        start_time = Time.now
        result = @client.send(request[:method], **request[:params])
        duration = Time.now - start_time
        
        {
          request: request,
          result: result,
          duration: duration
        }
      end
    end
    
    results = threads.map(&:value)
    total_time = results.sum { |r| r[:duration] }
    puts "Batch completed: #{results.count} requests, #{total_time.round(2)}s total request time"
    
    results
  end
end

Memory profiling reveals resource usage patterns for optimization. Watch operations require careful memory management to prevent accumulation of event objects over time.

require 'memory_profiler'

def profile_kubernetes_operations(client)
  report = MemoryProfiler.report do
    # Simulate typical application workflow
    1000.times do |i|
      pods = client.get_pods(namespace: 'production', limit: 50)
      
      pods.each do |pod|
        # Process pod data
        analyze_pod_resources(pod)
      end
      
      # Clear references periodically
      pods = nil
      GC.start if i % 100 == 0
    end
  end
  
  puts "Memory usage report:"
  puts "Total allocated: #{report.total_allocated_memsize} bytes"
  puts "Total retained: #{report.total_retained_memsize} bytes"
  
  # Show top memory allocators
  report.pretty_print(scale_bytes: true)
end

def analyze_pod_resources(pod)
  # parse_cpu and parse_memory are application helpers that convert
  # Kubernetes quantities ("500m", "1Gi") to numbers
  resources = {
    cpu_request: pod.spec.containers.sum { |c| parse_cpu(c.resources&.requests&.cpu) },
    memory_request: pod.spec.containers.sum { |c| parse_memory(c.resources&.requests&.memory) },
    cpu_limit: pod.spec.containers.sum { |c| parse_cpu(c.resources&.limits&.cpu) },
    memory_limit: pod.spec.containers.sum { |c| parse_memory(c.resources&.limits&.memory) }
  }
  
  # Store in an external system or aggregation structure
  # (ResourceTracker is application-defined)
  ResourceTracker.record(pod.metadata.name, resources)
end

Concurrent operations scale through thread pools and async patterns. Thread safety requires careful coordination when sharing client instances across threads.

require 'concurrent'

class ConcurrentKubernetesManager
  def initialize(client, apps_client)
    @client = client
    @apps_client = apps_client
    @thread_pool = Concurrent::FixedThreadPool.new(10)
    @semaphore = Concurrent::Semaphore.new(20) # Limit concurrent API calls
  end

  def parallel_resource_creation(resource_specs)
    futures = resource_specs.map do |spec|
      Concurrent::Future.execute(executor: @thread_pool) do
        @semaphore.acquire
        
        begin
          case spec[:type]
          when :pod
            @client.create_pod(spec[:definition])
          when :service
            @client.create_service(spec[:definition])
          when :deployment
            @apps_client.create_deployment(spec[:definition])
          end
        rescue => e
          { error: e, spec: spec }
        ensure
          @semaphore.release
        end
      end
    end
    
    # Collect results with timeout; Future#value returns nil on timeout
    results = futures.map { |f| f.value(30) }  # 30 second timeout per operation
    
    successful = results.count { |r| r && !(r.is_a?(Hash) && r[:error]) }
    failed = results.count - successful
    
    puts "Resource creation complete: #{successful} successful, #{failed} failed"
    results
  end
  
  def shutdown
    @thread_pool.shutdown
    @thread_pool.wait_for_termination(10)
  end
end

Production Patterns

Production deployments require robust configuration management, health monitoring, and graceful degradation strategies. Applications typically maintain separate client instances for different cluster environments with environment-specific authentication and endpoints.

class ProductionKubernetesService
  def initialize(environment)
    @environment = environment
    @config = load_environment_config(environment)
    @clients = initialize_clients
    @metrics = MetricsCollector.new               # application-defined metrics sink
    @health_checker = HealthChecker.new(@clients) # application-defined
  end

  def deploy_application(app_name, manifest_bundle)
    deployment_id = SecureRandom.uuid
    
    begin
      @metrics.deployment_started(app_name, deployment_id)
      
      # Pre-deployment validation
      validate_cluster_capacity(manifest_bundle)
      validate_resource_quotas(manifest_bundle)
      
      # Execute deployment steps
      results = execute_deployment_sequence(manifest_bundle)
      
      # Post-deployment verification
      verify_deployment_health(app_name, results)
      
      @metrics.deployment_completed(app_name, deployment_id, :success)
      
      results
    rescue => e
      @metrics.deployment_completed(app_name, deployment_id, :failed)
      rollback_deployment(app_name, results) if results
      raise "Deployment failed: #{e.message}"
    end
  end

  private

  def load_environment_config(environment)
    config_path = "/etc/kubernetes/#{environment}/config.yaml"
    YAML.load_file(config_path)
  end

  def initialize_clients
    {
      core: create_authenticated_client(@config['core_api']),
      apps: create_authenticated_client(@config['apps_api']),
      networking: create_authenticated_client(@config['networking_api'])
    }
  end

  def create_authenticated_client(api_config)
    client = Kubeclient::Client.new(
      api_config['endpoint'],
      api_config['version'],
      ssl_options: {
        verify_ssl: api_config['verify_ssl'],
        ca_file: api_config['ca_certificate_path']
      }
    )
    
    # Production authentication using service account
    token = File.read('/var/run/secrets/kubernetes.io/serviceaccount/token')
    client.auth_options = { bearer_token: token }
    
    client
  end

  def execute_deployment_sequence(manifest_bundle)
    results = {}
    
    # Deploy in dependency order
    ['configmaps', 'secrets', 'services', 'deployments', 'ingresses'].each do |resource_type|
      next unless manifest_bundle[resource_type]
      
      results[resource_type] = deploy_resources(resource_type, manifest_bundle[resource_type])
      
      # Wait for readiness before proceeding
      wait_for_resources_ready(resource_type, results[resource_type])
    end
    
    results
  end

  # Map plural manifest keys to kubeclient method suffixes
  # (String#singularize is ActiveSupport, so use an explicit map)
  RESOURCE_METHOD_SUFFIX = {
    'configmaps' => 'config_map',
    'secrets' => 'secret',
    'services' => 'service',
    'deployments' => 'deployment',
    'ingresses' => 'ingress'
  }.freeze

  # select_client_for_resource and wait_for_resources_ready are
  # application-defined helpers
  def deploy_resources(resource_type, resources)
    client = select_client_for_resource(resource_type)
    suffix = RESOURCE_METHOD_SUFFIX.fetch(resource_type)
    
    resources.map do |resource_spec|
      client.send("create_#{suffix}", resource_spec)
    rescue Kubeclient::HttpError => e
      raise unless e.error_code == 409  # already exists
      
      # Update the existing resource instead
      client.send("update_#{suffix}", resource_spec)
    end
  end
end

Health monitoring integrates with application lifecycle management and alerting systems. The client supports custom health check implementations that verify both cluster connectivity and application-specific resource states.

class KubernetesHealthMonitor
  def initialize(clients, alert_manager)
    @clients = clients
    @alert_manager = alert_manager
    @check_interval = 30
    @failure_threshold = 3
    @consecutive_failures = Hash.new(0)
  end

  def start_monitoring
    @monitoring_thread = Thread.new do
      loop do
        perform_health_checks
        sleep @check_interval
      end
    rescue => e
      @alert_manager.critical_alert("Health monitor crashed: #{e.message}")
      raise
    end
  end

  def perform_health_checks
    # check_api_server_health and check_resource_quota_usage are defined
    # elsewhere in the application, following the same pattern as the
    # checks shown below
    checks = [
      { name: 'cluster_connectivity', check: -> { check_cluster_connectivity } },
      { name: 'api_server_health', check: -> { check_api_server_health } },
      { name: 'critical_pods', check: -> { check_critical_pods_health } },
      { name: 'resource_quotas', check: -> { check_resource_quota_usage } }
    ]

    checks.each do |check_config|
      begin
        result = check_config[:check].call
        handle_check_success(check_config[:name], result)
      rescue => e
        handle_check_failure(check_config[:name], e)
      end
    end
  end

  private

  def check_cluster_connectivity
    @clients[:core].get_namespaces(limit: 1)
    { status: 'healthy', message: 'Cluster connectivity OK' }
  end

  def check_critical_pods_health
    critical_pods = @clients[:core].get_pods(
      namespace: 'kube-system',
      label_selector: 'tier=control-plane'
    )

    unhealthy_pods = critical_pods.select do |pod|
      pod.status.phase != 'Running'
    end

    if unhealthy_pods.any?
      {
        status: 'unhealthy',
        message: "Critical pods not running: #{unhealthy_pods.map(&:metadata).map(&:name).join(', ')}"
      }
    else
      { status: 'healthy', message: "All critical pods running" }
    end
  end

  def handle_check_failure(check_name, error)
    @consecutive_failures[check_name] += 1
    
    if @consecutive_failures[check_name] >= @failure_threshold
      @alert_manager.health_check_failed(check_name, error, @consecutive_failures[check_name])
    end
    
    puts "Health check failed (#{@consecutive_failures[check_name]}): #{check_name} - #{error.message}"
  end

  def handle_check_success(check_name, result)
    if @consecutive_failures[check_name] > 0
      @alert_manager.health_check_recovered(check_name)
      @consecutive_failures[check_name] = 0
    end
  end
end

Logging and observability integrate with centralized monitoring systems. Applications emit structured logs with correlation IDs and performance metrics for operational visibility.

require 'logger'
require 'json'

class StructuredKubernetesLogger
  def initialize(log_level: Logger::INFO)
    @logger = Logger.new(STDOUT)
    @logger.level = log_level
    @logger.formatter = method(:json_formatter)
  end

  def log_api_operation(operation, resource_type, namespace, duration, success, error: nil)
    log_data = {
      timestamp: Time.now.iso8601,
      level: success ? 'INFO' : 'ERROR',
      operation: operation,
      resource_type: resource_type,
      namespace: namespace,
      duration_ms: (duration * 1000).round(2),
      success: success,
      kubernetes_api: true
    }
    
    log_data[:error] = error.message if error
    log_data[:error_code] = error.error_code if error.respond_to?(:error_code)
    
    success ? @logger.info(log_data.to_json) : @logger.error(log_data.to_json)
  end

  def log_deployment_event(event_type, deployment_id, app_name, details = {})
    log_data = {
      timestamp: Time.now.iso8601,
      level: 'INFO',
      event_type: 'kubernetes_deployment',
      deployment_id: deployment_id,
      app_name: app_name,
      phase: event_type
    }.merge(details)
    
    @logger.info(log_data.to_json)
  end

  private

  def json_formatter(severity, timestamp, progname, msg)
    if msg.is_a?(String) && msg.start_with?('{')
      "#{msg}\n"
    else
      {
        timestamp: timestamp.iso8601,
        level: severity,
        message: msg
      }.to_json + "\n"
    end
  end
end

# Instrumented client wrapper
class InstrumentedKubernetesClient
  def initialize(client, logger)
    @client = client
    @logger = logger
  end

  def method_missing(method_name, *args, **kwargs)
    start_time = Time.now
    
    begin
      result = @client.send(method_name, *args, **kwargs)
      duration = Time.now - start_time
      
      @logger.log_api_operation(
        method_name.to_s,
        extract_resource_type(method_name),
        kwargs[:namespace],
        duration,
        true
      )
      
      result
    rescue => e
      duration = Time.now - start_time
      
      @logger.log_api_operation(
        method_name.to_s,
        extract_resource_type(method_name),
        kwargs[:namespace],
        duration,
        false,
        error: e
      )
      
      raise
    end
  end

  private

  def respond_to_missing?(method_name, include_private = false)
    @client.respond_to?(method_name, include_private) || super
  end

  def extract_resource_type(method_name)
    method_name.to_s.gsub(/^(get|create|update|delete|patch|watch)_/, '').gsub(/s$/, '')
  end
end

Reference

Core Client Configuration

| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| Kubeclient::Client.new(uri, version, **options) | uri (String), version (String), options (Hash) | Client | Creates an API client for one API group/version |
| auth_options: (constructor option) | Hash | - | Authentication configuration |
| ssl_options: (constructor option) | Hash | - | SSL/TLS settings |

Authentication Options

| Option | Type | Description |
|--------|------|-------------|
| :bearer_token | String | Service account or user token |
| :bearer_token_file | String | Path to token file |
| :username | String | Basic authentication username |
| :password | String | Basic authentication password |

SSL Configuration

| Option | Type | Description |
|--------|------|-------------|
| :verify_ssl | Integer | SSL verification mode (OpenSSL::SSL constants) |
| :ca_file | String | CA certificate file path |
| :client_cert | OpenSSL::X509::Certificate | Client certificate object |
| :client_key | OpenSSL::PKey::RSA | Client private key object |
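
Certificate-based authentication passes OpenSSL objects rather than file paths, following the kubeclient README pattern (paths here are placeholders):

ssl_options = {
  client_cert: OpenSSL::X509::Certificate.new(File.read('/path/to/client.crt')),
  client_key:  OpenSSL::PKey::RSA.new(File.read('/path/to/client.key')),
  ca_file:     '/path/to/ca.crt',
  verify_ssl:  OpenSSL::SSL::VERIFY_PEER
}

client = Kubeclient::Client.new(
  'https://kubernetes.example.com/api/',
  'v1',
  ssl_options: ssl_options
)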

Resource Operations

| Pattern | Parameters | Returns | Description |
|---------|------------|---------|-------------|
| get_<resource>s(**opts) | namespace, label_selector, field_selector, limit, continue | EntityList | Lists resources with filtering |
| get_<resource>(name, namespace) | name (String), namespace (String) | Resource | Retrieves single resource |
| create_<resource>(definition) | definition (Hash or Kubeclient::Resource) | Resource | Creates new resource |
| update_<resource>(resource) | resource (Kubeclient::Resource) | Resource | Updates existing resource |
| patch_<resource>(name, patch, namespace) | name (String), patch (Hash), namespace (String) | Resource | Applies strategic merge patch |
| delete_<resource>(name, namespace, **opts) | name (String), namespace (String), options | Status | Deletes resource |
| watch_<resource>s(**opts) | Watch options | WatchStream | Streams resource changes |

Common Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| namespace | String | Kubernetes namespace |
| label_selector | String | Label-based filtering (e.g., "app=web,env=prod") |
| field_selector | String | Field-based filtering (e.g., "status.phase=Running") |
| resource_version | String | Resource version for watch operations |
| timeout_seconds | Integer | Request timeout in seconds |

Exception Types

| Exception | Cause | Handler Pattern |
|-----------|-------|-----------------|
| Kubeclient::HttpError | API HTTP errors | Check error_code property |
| Kubeclient::ResourceNotFoundError | 404 responses (HttpError subclass) | Rescue before the generic HttpError |
| Net::OpenTimeout, Net::ReadTimeout | Network timeouts | Implement retry with backoff |
| OpenSSL::SSL::SSLError | Certificate/TLS issues | Verify SSL configuration |

HTTP Status Codes

| Code | Meaning | Typical Cause |
|------|---------|---------------|
| 400 | Bad Request | Malformed resource definition |
| 401 | Unauthorized | Invalid or expired authentication |
| 403 | Forbidden | Insufficient RBAC permissions |
| 404 | Not Found | Resource or namespace doesn't exist |
| 409 | Conflict | Resource already exists or version conflict |
| 422 | Unprocessable Entity | Resource validation failure |

Watch Event Types

| Event Type | Description | Object Content |
|------------|-------------|----------------|
| ADDED | Resource created | Full resource object |
| MODIFIED | Resource updated | Updated resource object |
| DELETED | Resource removed | Final resource state |
| ERROR | Watch error occurred | Error details |

Resource Definition Structure

# Standard resource template
resource_definition = {
  apiVersion: 'v1',           # API version
  kind: 'Pod',               # Resource type
  metadata: {
    name: 'resource-name',     # Required: resource name
    namespace: 'default',     # Namespace (default if omitted)
    labels: {},               # Optional: label map
    annotations: {}           # Optional: annotation map
  },
  spec: {
    # Resource-specific specification
  }
}

Configuration Loading

| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| Kubeclient::Config.read(kubeconfig_path) | kubeconfig_path (String, required) | Config | Parses a kubeconfig file |
| #context(name) | name (String, optional; defaults to current-context) | Context | Retrieves named context |
| #contexts | None | Array | Lists available context names |

Context Properties

| Property | Type | Description |
|----------|------|-------------|
| api_endpoint | String | Kubernetes API server URL |
| api_version | String | API version string |
| auth_options | Hash | Authentication configuration |
| ssl_options | Hash | SSL/TLS configuration |
| namespace | String | Default namespace for operations |