Overview
Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. Originally developed by Google and released in 2014, Kubernetes builds on fifteen years of experience running production workloads at scale. The name derives from the Greek word for helmsman or pilot, with the abbreviation K8s representing the eight letters between K and s.
The system manages clusters of hosts running containers, abstracting the underlying infrastructure to provide a unified API for deploying and managing applications. Kubernetes handles container placement, replication, load balancing, and self-healing without requiring manual intervention. Applications describe their desired state through declarative configuration, and Kubernetes continuously works to maintain that state.
Modern cloud-native applications run as distributed systems composed of multiple microservices. Kubernetes provides the infrastructure to run these systems reliably across diverse environments—from on-premises data centers to public clouds. The platform handles service discovery, configuration management, storage orchestration, and automated rollouts and rollbacks.
# Simple pod definition
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
This basic example defines a pod running a single container. Kubernetes reads this specification and creates the necessary resources to run the container on a cluster node. The declarative approach separates what should run from how it runs, allowing Kubernetes to make intelligent scheduling decisions based on available resources.
Key Principles
Kubernetes operates on several foundational principles that distinguish it from traditional deployment systems. The declarative configuration model requires users to specify the desired end state rather than the steps to achieve it. Control loops continuously observe the actual state and reconcile differences with the desired state, creating a self-healing system.
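The reconciliation pattern is easy to sketch outside Kubernetes. This toy Ruby control loop (all names are illustrative, not part of any Kubernetes API) observes actual state, diffs it against desired state, and acts to close the gap:

```ruby
# Toy reconciliation loop: converge actual state toward desired state.
# ReplicaController is a made-up class for illustration only.
class ReplicaController
  attr_reader :actual

  def initialize(desired:)
    @desired = desired
    @actual = 0
  end

  # One pass of the control loop: observe, diff, act.
  def reconcile
    diff = @desired - @actual
    if diff.positive?
      diff.times { @actual += 1 }    # "create" missing replicas
    elsif diff.negative?
      (-diff).times { @actual -= 1 } # "delete" surplus replicas
    end
    @actual
  end
end

controller = ReplicaController.new(desired: 3)
controller.reconcile
puts controller.actual # => 3
```

Real controllers run this loop continuously, so a replica lost at any time is recreated on the next pass without operator intervention.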
The smallest deployable unit in Kubernetes is the pod, not the container. A pod encapsulates one or more containers that share network and storage resources and run on the same host. Containers within a pod communicate through localhost and share the same IP address. This design supports tightly coupled application components while maintaining the benefits of container isolation.
Controllers implement the reconciliation loop pattern. Each controller watches the API server for changes to specific resource types and takes action to move the current state toward the desired state. The Deployment controller manages ReplicaSets, which in turn manage pods. The endpoints controller tracks which pods back each Service so traffic can be load-balanced across them. This distributed control architecture makes Kubernetes highly extensible.
Labels and selectors provide flexible grouping mechanisms. Labels are key-value pairs attached to objects like pods and services. Selectors define queries over labels to identify sets of objects. A service uses selectors to determine which pods should receive traffic. Deployments use selectors to manage pod replicas. This loose coupling allows dynamic reconfiguration without changing service definitions.
# Deployment with labels and selectors
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      tier: frontend
  template:
    metadata:
      labels:
        app: web
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
The namespace concept provides scope for resource names and enables multi-tenancy within a cluster. Different teams or applications can use the same resource names in different namespaces without conflict. Resource quotas and access controls apply at the namespace level, enabling isolation and governance.
Kubernetes separates the control plane from worker nodes. The control plane consists of the API server, scheduler, controller manager, and etcd datastore. Worker nodes run the kubelet agent, a container runtime, and the kube-proxy network component. The API server handles all communication—both internal components and external users interact exclusively through the API.
The scheduler assigns pods to nodes based on resource requirements, affinity rules, and constraints. It considers CPU and memory requests, node taints and tolerations, pod anti-affinity to spread replicas across failure domains, and custom scheduling policies. The scheduler makes placement decisions but delegates actual pod startup to the kubelet on the selected node.
Services abstract pod networking through stable endpoints. Pods are ephemeral—they can be rescheduled to different nodes with different IP addresses. Services provide consistent DNS names and IP addresses that remain stable as backend pods change. The kube-proxy component implements service networking through iptables or IPVS rules that distribute traffic across healthy pod endpoints.
Ruby Implementation
Ruby applications interact with Kubernetes through the official client library or by executing kubectl commands. The kubeclient gem provides an idiomatic Ruby interface to the Kubernetes API, supporting authentication, resource management, and watch streams.
require 'kubeclient'

# Connect to the cluster using kubeconfig
config = Kubeclient::Config.read('/path/to/kubeconfig')
context = config.context
client = Kubeclient::Client.new(
  context.api_endpoint,
  context.api_version,
  ssl_options: context.ssl_options,
  auth_options: context.auth_options
)

# List all pods in the default namespace
pods = client.get_pods(namespace: 'default')
pods.each do |pod|
  puts "Pod: #{pod.metadata.name}, Status: #{pod.status.phase}"
end
Creating resources requires building resource objects that match the Kubernetes API schema. The kubeclient gem provides resource classes for standard types and supports custom resource definitions.
# Create a deployment. Deployments live in the apps/v1 API group,
# so the client must target that group, e.g.
# Kubeclient::Client.new("#{api_endpoint}/apis/apps", 'v1', ...)
deployment = Kubeclient::Resource.new({
  metadata: {
    name: 'ruby-app',
    namespace: 'production'
  },
  spec: {
    replicas: 3,
    selector: {
      matchLabels: { app: 'ruby-app' }
    },
    template: {
      metadata: {
        labels: { app: 'ruby-app' }
      },
      spec: {
        containers: [{
          name: 'web',
          image: 'myregistry/ruby-app:v1.2.3',
          ports: [{ containerPort: 3000 }],
          env: [
            { name: 'RAILS_ENV', value: 'production' },
            { name: 'DATABASE_URL', valueFrom: {
              secretKeyRef: { name: 'db-secret', key: 'url' }
            } }
          ],
          resources: {
            requests: { memory: '256Mi', cpu: '100m' },
            limits: { memory: '512Mi', cpu: '500m' }
          }
        }]
      }
    }
  }
})
client.create_deployment(deployment)
Watch streams enable real-time monitoring of resource changes. The watch API returns a stream of events (ADDED, MODIFIED, DELETED) as resources change, allowing applications to react immediately.
# Watch pod events
watcher = client.watch_pods(namespace: 'default')
watcher.each do |notice|
  case notice.type
  when 'ADDED'
    puts "New pod created: #{notice.object.metadata.name}"
  when 'MODIFIED'
    pod = notice.object
    if pod.status.phase == 'Failed'
      puts "Pod failed: #{pod.metadata.name}"
      # Trigger alert or remediation
    end
  when 'DELETED'
    puts "Pod deleted: #{notice.object.metadata.name}"
  end
end
Ruby applications running inside Kubernetes pods can access the API using the service account token automatically mounted into the pod. This eliminates the need for external credentials.
# Client configuration for in-cluster access
require 'kubeclient'
require 'openssl'

# Read the service account token mounted into every pod
token_file = '/var/run/secrets/kubernetes.io/serviceaccount/token'
token = File.read(token_file)

# CA certificate for verifying the API server
ca_file = '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt'

client = Kubeclient::Client.new(
  'https://kubernetes.default.svc',
  'v1',
  ssl_options: {
    ca_file: ca_file,
    verify_ssl: OpenSSL::SSL::VERIFY_PEER
  },
  auth_options: {
    bearer_token: token
  }
)

# Now the pod can query the API
current_namespace = File.read(
  '/var/run/secrets/kubernetes.io/serviceaccount/namespace'
)
pods = client.get_pods(namespace: current_namespace)
Executing kubectl commands from Ruby provides an alternative when direct API access is unnecessary. This approach works well for scripts and automation tasks.
require 'open3'
require 'json'

def kubectl(*args)
  cmd = ['kubectl'] + args
  stdout, stderr, status = Open3.capture3(*cmd)
  raise "kubectl failed: #{stderr}" unless status.success?
  stdout
end

# Apply configuration from file
kubectl('apply', '-f', 'deployment.yaml')

# Get pod status as JSON
output = kubectl('get', 'pods', '-n', 'production', '-o', 'json')
pods = JSON.parse(output)

# Scale deployment
kubectl('scale', 'deployment/ruby-app', '--replicas=5')

# Execute command in pod
kubectl('exec', 'ruby-app-abc123', '--', 'rake', 'db:migrate')
Practical Examples
Deploying a Ruby on Rails application to Kubernetes involves creating several resource types that work together. A typical deployment includes a Deployment for the application pods, a Service for internal networking, and an Ingress for external access.
# Deployment for Rails application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rails-app
  template:
    metadata:
      labels:
        app: rails-app
        version: v2.1.0
    spec:
      initContainers:
      - name: migrate
        image: myregistry/rails-app:v2.1.0
        command: ['bundle', 'exec', 'rake', 'db:migrate']
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-credentials
              key: url
      containers:
      - name: web
        image: myregistry/rails-app:v2.1.0
        ports:
        - containerPort: 3000
        env:
        - name: RAILS_ENV
          value: production
        - name: SECRET_KEY_BASE
          valueFrom:
            secretKeyRef:
              name: rails-secrets
              key: secret_key_base
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          requests:
            memory: 512Mi
            cpu: 250m
          limits:
            memory: 1Gi
            cpu: 1000m
The init container runs database migrations before the application starts. Kubernetes guarantees init containers complete successfully before starting the main containers. The liveness probe tells Kubernetes when to restart unhealthy containers, while the readiness probe controls when the pod receives traffic from the service.
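The /health and /ready paths referenced by the probes must be served by the application itself. A minimal Rack-style sketch (the endpoint paths match the manifest above; the dependency check is a hypothetical placeholder):

```ruby
require 'json'

# Minimal Rack-compatible app serving liveness and readiness endpoints.
class HealthCheck
  def call(env)
    case env['PATH_INFO']
    when '/health'
      # Liveness: the process is up and able to answer requests.
      [200, { 'content-type' => 'application/json' }, [{ status: 'ok' }.to_json]]
    when '/ready'
      # Readiness: also confirm dependencies are reachable.
      ready = dependencies_ready?
      [ready ? 200 : 503,
       { 'content-type' => 'application/json' },
       [{ ready: ready }.to_json]]
    else
      [404, { 'content-type' => 'text/plain' }, ['not found']]
    end
  end

  private

  # Placeholder: a real app would ping its database, Redis, etc.
  def dependencies_ready?
    true
  end
end
```

Returning 503 from /ready removes the pod from service endpoints without restarting it, which is exactly the behavior wanted during a slow dependency outage.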
A Service exposes the deployment internally within the cluster:
apiVersion: v1
kind: Service
metadata:
  name: rails-app-service
  namespace: production
spec:
  selector:
    app: rails-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: ClusterIP
Background job processing requires a separate deployment for workers. The workers use the same container image but run a different command:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-workers
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rails-workers
  template:
    metadata:
      labels:
        app: rails-workers
    spec:
      containers:
      - name: worker
        image: myregistry/rails-app:v2.1.0
        command: ['bundle', 'exec', 'sidekiq']
        env:
        - name: RAILS_ENV
          value: production
        - name: REDIS_URL
          valueFrom:
            configMapKeyRef:
              name: redis-config
              key: url
        resources:
          requests:
            memory: 256Mi
            cpu: 100m
          limits:
            memory: 512Mi
            cpu: 500m
ConfigMaps store non-sensitive configuration data, while Secrets store sensitive information like passwords and API keys. Both can be mounted as files or exposed as environment variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  smtp_host: smtp.example.com
  smtp_port: "587"
  feature_flags.json: |
    {
      "new_ui": true,
      "beta_features": false
    }
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: production
type: Opaque
stringData:
  smtp_username: notifications@example.com
  smtp_password: secretpassword123
  api_key: abc123def456
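On the application side, values like these arrive either as environment variables or as files under a volume mount path. A small reader that checks the environment first and falls back to a mounted file (the mount directory is an assumption, matching whatever `volumeMounts` path the pod spec declares):

```ruby
# Read a configuration value injected by Kubernetes: prefer an
# environment variable, fall back to a file mounted from a
# ConfigMap or Secret volume. The default mount_dir is illustrative.
def k8s_config(name, mount_dir: '/etc/app-config')
  env_key = name.upcase
  return ENV[env_key] if ENV.key?(env_key)

  path = File.join(mount_dir, name)
  return File.read(path).strip if File.exist?(path)

  nil
end

smtp_host = k8s_config('smtp_host')
```

Mounted files have one advantage over environment variables: when the ConfigMap changes, Kubernetes updates the files in place (after a short sync delay), while environment variables stay fixed until the pod restarts.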
Persistent storage for uploaded files or other stateful data requires PersistentVolumeClaims:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads-storage
  namespace: production
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app-stateful
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rails-app-stateful
  template:
    metadata:
      labels:
        app: rails-app-stateful
    spec:
      containers:
      - name: web
        image: myregistry/rails-app:v2.1.0
        volumeMounts:
        - name: uploads
          mountPath: /app/public/uploads
      volumes:
      - name: uploads
        persistentVolumeClaim:
          claimName: uploads-storage
Tools & Ecosystem
The Kubernetes ecosystem includes numerous tools for deployment, monitoring, and cluster management. kubectl is the primary command-line interface for interacting with clusters. It communicates with the API server to create, read, update, and delete resources.
Helm functions as a package manager for Kubernetes, enabling templated application definitions called charts. Charts parameterize Kubernetes manifests, allowing the same application to be deployed with different configurations across environments.
# Using helm from Ruby
require 'open3'
require 'json'

def helm(*args)
  cmd = ['helm'] + args
  stdout, stderr, status = Open3.capture3(*cmd)
  raise "Helm failed: #{stderr}" unless status.success?
  stdout
end

# Install chart with custom values
helm(
  'install', 'my-release', 'stable/postgresql',
  '--namespace', 'production',
  '--set', 'postgresqlPassword=secretpass',
  '--set', 'persistence.size=50Gi'
)

# Upgrade release
helm('upgrade', 'my-release', 'stable/postgresql', '--reuse-values')

# List installed releases
releases = helm('list', '--namespace', 'production', '-o', 'json')
JSON.parse(releases).each do |release|
  puts "#{release['name']}: #{release['status']}"
end
Kustomize provides another approach to configuration management through declarative overlays. Base configurations define common resources, while overlays customize them for specific environments without templating.
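Kustomize overlays are applied with `kubectl apply -k`, so the earlier shell-out pattern extends naturally. Separating command construction from execution keeps the logic testable; the overlay directory name is an assumption:

```ruby
require 'open3'

# Build the kubectl argument list for applying a kustomize overlay.
def kustomize_apply_cmd(overlay_dir)
  ['kubectl', 'apply', '-k', overlay_dir]
end

# Run it (requires kubectl on PATH and cluster access).
def kustomize_apply(overlay_dir)
  stdout, stderr, status = Open3.capture3(*kustomize_apply_cmd(overlay_dir))
  raise "kubectl failed: #{stderr}" unless status.success?
  stdout
end

# kustomize_apply('overlays/production')
```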
Prometheus has become the standard monitoring solution for Kubernetes. It scrapes metrics from applications and infrastructure, storing time-series data and supporting powerful queries. The kube-state-metrics component exposes cluster resource metrics.
# Exposing metrics from a Ruby application
require 'prometheus/client'
require 'prometheus/client/rack/exporter'

prometheus = Prometheus::Client.registry

# Create metrics (constants so the middleware below can reference them)
HTTP_REQUESTS = prometheus.counter(
  :http_requests_total,
  docstring: 'Total HTTP requests',
  labels: [:method, :path, :status]
)
REQUEST_DURATION = prometheus.histogram(
  :http_request_duration_seconds,
  docstring: 'HTTP request duration',
  labels: [:method, :path]
)

# Instrument the application as Rack middleware
class MetricsMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    start = Time.now
    status, headers, body = @app.call(env)
    duration = Time.now - start
    HTTP_REQUESTS.increment(
      labels: {
        method: env['REQUEST_METHOD'],
        path: env['PATH_INFO'],
        status: status
      }
    )
    REQUEST_DURATION.observe(
      duration,
      labels: {
        method: env['REQUEST_METHOD'],
        path: env['PATH_INFO']
      }
    )
    [status, headers, body]
  end
end

# Expose the /metrics endpoint (in config.ru)
use Prometheus::Client::Rack::Exporter
use MetricsMiddleware
The corresponding Kubernetes ServiceMonitor resource tells Prometheus to scrape the application:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rails-app-metrics
  namespace: production
spec:
  selector:
    matchLabels:
      app: rails-app
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
Cluster management tools include Rancher for multi-cluster management, Lens as a desktop IDE for Kubernetes, and k9s as a terminal-based UI. Cloud providers offer managed Kubernetes services—EKS on AWS, GKE on Google Cloud, and AKS on Azure—that handle control plane management.
Integration & Interoperability
Kubernetes integrates with existing infrastructure through multiple mechanisms. Service meshes like Istio and Linkerd add advanced networking capabilities including traffic management, security, and observability without modifying application code.
External DNS integration automatically creates DNS records for Kubernetes resources. When a service or ingress is created, External DNS updates DNS providers like Route53, CloudFlare, or Google Cloud DNS:
apiVersion: v1
kind: Service
metadata:
  name: api-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: api.example.com
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 3000
Certificate management through cert-manager automates TLS certificate provisioning and renewal. It integrates with Let's Encrypt and other certificate authorities:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-tls
  namespace: production
spec:
  secretName: api-tls-secret
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - api.example.com
  - www.example.com
Database integration typically occurs through external services. Applications running in Kubernetes connect to managed databases like RDS or Cloud SQL. The ExternalName service type creates DNS aliases for external services:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: production
spec:
  type: ExternalName
  externalName: mydb.abc123.us-east-1.rds.amazonaws.com
Ruby applications access external services through environment variables or mounted secrets:
# Database connection in Rails
# config/database.yml uses environment variables
production:
  url: <%= ENV['DATABASE_URL'] %>
  pool: <%= ENV.fetch('DB_POOL', 5) %>

# Service discovery for other microservices
require 'net/http'
require 'uri'
require 'json'

class UserServiceClient
  def initialize
    # Kubernetes DNS resolves service names
    @host = ENV.fetch('USER_SERVICE_HOST', 'user-service.production.svc.cluster.local')
    @port = ENV.fetch('USER_SERVICE_PORT', '80')
  end

  def get_user(id)
    uri = URI("http://#{@host}:#{@port}/users/#{id}")
    response = Net::HTTP.get_response(uri)
    JSON.parse(response.body) if response.is_a?(Net::HTTPSuccess)
  end
end
CI/CD pipelines integrate with Kubernetes for automated deployments. Build systems create container images, push them to registries, and trigger deployments:
# Deployment script for CI/CD
require 'kubeclient'
require 'json'
require 'time'

class KubernetesDeployer
  def initialize(cluster_config)
    # :api_endpoint must target the apps group,
    # e.g. https://cluster-host/apis/apps
    @client = Kubeclient::Client.new(
      cluster_config[:api_endpoint],
      'apps/v1',
      ssl_options: cluster_config[:ssl_options],
      auth_options: cluster_config[:auth_options]
    )
    @namespace = cluster_config[:namespace]
  end

  def deploy(image_tag)
    deployment = @client.get_deployment('rails-app', @namespace)

    # Update the image tag
    deployment.spec.template.spec.containers[0].image =
      "myregistry/rails-app:#{image_tag}"

    # Add deployment annotations for tracking
    deployment.metadata.annotations ||= {}
    deployment.metadata.annotations['deployment.timestamp'] = Time.now.iso8601
    deployment.metadata.annotations['deployment.git_sha'] = ENV['GIT_SHA']

    @client.update_deployment(deployment)
    wait_for_rollout('rails-app')
  end

  def wait_for_rollout(deployment_name, timeout: 300)
    deadline = Time.now + timeout
    loop do
      deployment = @client.get_deployment(deployment_name, @namespace)
      status = deployment.status
      if status.updatedReplicas == status.replicas &&
         status.availableReplicas == status.replicas
        puts 'Deployment successful'
        return true
      end
      raise 'Deployment timeout' if Time.now > deadline
      sleep 5
    end
  end
end
Real-World Applications
Production Kubernetes deployments require careful consideration of reliability, security, and operational complexity. High-availability applications distribute replicas across multiple availability zones using pod anti-affinity rules:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-service
spec:
  replicas: 6
  selector:
    matchLabels:
      app: critical-service
  template:
    metadata:
      labels:
        app: critical-service
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: critical-service
            topologyKey: topology.kubernetes.io/zone
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-type
                operator: In
                values:
                - high-memory
Resource quotas prevent individual namespaces from consuming excessive cluster resources:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-alpha
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    persistentvolumeclaims: "10"
    pods: "50"
Horizontal Pod Autoscaling adjusts replica counts based on CPU utilization or custom metrics:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rails-app-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rails-app
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "1000"
Blue-green deployments minimize downtime during updates by running both old and new versions simultaneously:
class BlueGreenDeployment
  def initialize(client, namespace)
    @client = client
    @namespace = namespace
  end

  def deploy_green(version)
    # Create the green deployment. create_deployment, wait_for_ready,
    # smoke_tests_pass? and get_blue_version are app-specific helpers
    # defined elsewhere.
    green_deployment = create_deployment(
      name: 'app-green',
      version: version,
      replicas: 3
    )
    @client.create_deployment(green_deployment)
    wait_for_ready('app-green')

    # Run smoke tests
    unless smoke_tests_pass?('app-green')
      @client.delete_deployment('app-green', @namespace)
      raise 'Smoke tests failed'
    end

    # Switch the service selector to green
    service = @client.get_service('app-service', @namespace)
    service.spec.selector[:version] = version
    @client.update_service(service)

    # Keep blue running for rollback capability
    puts 'Deployment complete. Blue version still running for rollback.'
  end

  def rollback_to_blue
    service = @client.get_service('app-service', @namespace)
    blue_version = get_blue_version
    service.spec.selector[:version] = blue_version
    @client.update_service(service)
  end

  def cleanup_blue
    @client.delete_deployment('app-blue', @namespace)
  end
end
Log aggregation collects logs from all pods into a centralized system. The Fluentd DaemonSet runs on every node, collecting container logs:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
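Pipelines like this work best when applications write one JSON object per line to stdout, which the node-level collector forwards without custom parsing rules. A minimal structured logger sketch (the field names are a common convention, not a Fluentd requirement):

```ruby
require 'json'
require 'time'

# Emit one JSON object per line to stdout; the node's log collector
# tails the container log file and forwards each line downstream.
class JsonLogger
  def initialize(io = $stdout)
    @io = io
    @io.sync = true # flush immediately so logs are never lost in a buffer
  end

  def info(message, **fields)
    log('info', message, fields)
  end

  def error(message, **fields)
    log('error', message, fields)
  end

  private

  def log(level, message, fields)
    entry = { time: Time.now.utc.iso8601, level: level, message: message }.merge(fields)
    @io.puts(entry.to_json)
  end
end

logger = JsonLogger.new
logger.info('request served', path: '/users/42', duration_ms: 18)
```

Because each line is self-describing, fields like `duration_ms` become queryable in Elasticsearch without grok patterns or regex extraction.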
Reference
Core Resource Types
| Resource | Purpose | Scope |
|---|---|---|
| Pod | Smallest deployable unit containing one or more containers | Namespaced |
| Deployment | Manages ReplicaSets and provides declarative updates | Namespaced |
| ReplicaSet | Ensures specified number of pod replicas are running | Namespaced |
| StatefulSet | Manages stateful applications with stable network identities | Namespaced |
| DaemonSet | Ensures pod runs on all or selected nodes | Namespaced |
| Job | Creates one or more pods and ensures successful completion | Namespaced |
| CronJob | Schedules jobs to run periodically | Namespaced |
| Service | Exposes pods through stable network endpoint | Namespaced |
| Ingress | HTTP/HTTPS routing to services | Namespaced |
| ConfigMap | Stores non-confidential configuration data | Namespaced |
| Secret | Stores sensitive information | Namespaced |
| PersistentVolume | Cluster-level storage resource | Cluster |
| PersistentVolumeClaim | Request for storage by user | Namespaced |
| Namespace | Virtual cluster for resource isolation | Cluster |
| Node | Worker machine in cluster | Cluster |
| ServiceAccount | Identity for processes running in pods | Namespaced |
Container States
| State | Description | Next State |
|---|---|---|
| Waiting | Container has not started yet | Running |
| Running | Container is executing | Terminated |
| Terminated | Container has stopped | None |
Pod Phases
| Phase | Meaning |
|---|---|
| Pending | Pod accepted but containers not yet created |
| Running | At least one container is running |
| Succeeded | All containers terminated successfully |
| Failed | All containers terminated, at least one failed |
| Unknown | Pod state cannot be determined |
Service Types
| Type | Behavior | Use Case |
|---|---|---|
| ClusterIP | Internal cluster IP, only reachable within cluster | Internal services |
| NodePort | Exposes service on each node's IP at static port | Development, debugging |
| LoadBalancer | Creates external load balancer with cloud provider | Production external access |
| ExternalName | Maps service to DNS name | External service integration |
kubectl Command Reference
| Command | Purpose | Example |
|---|---|---|
| get | List resources | kubectl get pods -n production |
| describe | Show detailed resource information | kubectl describe pod nginx-abc123 |
| logs | Print container logs | kubectl logs -f rails-app-abc123 |
| exec | Execute command in container | kubectl exec -it pod-name -- bash |
| apply | Create or update resources from file | kubectl apply -f deployment.yaml |
| delete | Delete resources | kubectl delete deployment app-name |
| scale | Change replica count | kubectl scale deployment/app --replicas=5 |
| rollout | Manage deployment rollouts | kubectl rollout status deployment/app |
| port-forward | Forward local port to pod | kubectl port-forward pod-name 8080:80 |
| create | Create resource from file or stdin | kubectl create -f pod.yaml |
Resource Request and Limit Units
| Resource | Units | Example |
|---|---|---|
| CPU | Millicores (m) or cores | 100m, 0.5, 2 |
| Memory | Bytes with suffix | 128Mi, 1Gi, 500M |
| Ephemeral Storage | Bytes with suffix | 1Gi, 500Mi |
Label Selector Operators
| Operator | Syntax | Meaning |
|---|---|---|
| Equality | app=nginx | Label value equals nginx |
| Inequality | tier!=frontend | Label value not equal to frontend |
| Set-based | environment in (prod, staging) | Label value in set |
| Set-based | tier notin (frontend, backend) | Label value not in set |
| Existence | partition | Label key exists |
| Non-existence | !partition | Label key does not exist |
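These operators combine into a single comma-separated selector string passed to the API. A small helper that builds one from Ruby values (the kubeclient `label_selector:` option accepts the resulting string):

```ruby
# Build a label selector string from equality and set-based terms.
# Example output: "app=web,tier in (frontend,backend),!partition"
def label_selector(equals: {}, in_sets: {}, exists: [], not_exists: [])
  terms = []
  equals.each { |k, v| terms << "#{k}=#{v}" }
  in_sets.each { |k, vals| terms << "#{k} in (#{vals.join(',')})" }
  exists.each { |k| terms << k.to_s }
  not_exists.each { |k| terms << "!#{k}" }
  terms.join(',')
end

selector = label_selector(equals: { app: 'web' }, in_sets: { tier: %w[frontend backend] })
# Usage with kubeclient (requires cluster access):
# client.get_pods(namespace: 'default', label_selector: selector)
```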
Probe Types and Fields
| Probe | Purpose |
|---|---|
| livenessProbe | Determines when to restart container |
| readinessProbe | Determines when container ready for traffic |
| startupProbe | Determines when application has started |
Probe configuration fields:
| Field | Description | Default |
|---|---|---|
| initialDelaySeconds | Delay before first probe | 0 |
| periodSeconds | Probe frequency | 10 |
| timeoutSeconds | Probe timeout | 1 |
| successThreshold | Successes needed after failure | 1 |
| failureThreshold | Failures before action taken | 3 |
Common Annotations
| Annotation | Purpose |
|---|---|
| kubernetes.io/change-cause | Records reason for change |
| prometheus.io/scrape | Enable Prometheus scraping |
| prometheus.io/port | Port for metrics endpoint |
| cert-manager.io/issuer | Certificate issuer reference |
| external-dns.alpha.kubernetes.io/hostname | DNS record to create |