Overview
Containers package applications and dependencies into isolated, portable units that run consistently across different computing environments. Unlike virtual machines, containers share the host operating system kernel while maintaining process isolation, making them lighter and faster to start. Container best practices encompass image construction, runtime configuration, resource management, security hardening, and orchestration patterns that ensure applications run reliably in production.
The container ecosystem centers on the Open Container Initiative (OCI) specification, which defines standards for container images and runtimes. Docker popularized containerization and remains the dominant toolset, though alternatives like Podman and containerd operate with the same image formats. Kubernetes emerged as the standard orchestration platform for managing containers at scale, though simpler deployment targets include Docker Compose, Docker Swarm, and cloud-native services like AWS ECS or Google Cloud Run.
Container adoption addresses several operational challenges. Development environments match production exactly, eliminating "works on my machine" discrepancies. Applications scale horizontally by running multiple container instances. Deployments become atomic operations—either the new container starts successfully or the old version continues running. Resource isolation prevents one application from affecting others on shared infrastructure.
A minimal container for a Ruby application demonstrates core concepts:
FROM ruby:3.2-alpine
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle config set --local without 'development test' && bundle install
COPY . .
CMD ["ruby", "app.rb"]
This Dockerfile uses Alpine Linux for a smaller base image, installs only production dependencies, and copies application code. The resulting image contains the Ruby runtime, gems, and application files in a portable format that runs identically on any container host.
Container best practices prevent common production issues. Images must remain small to reduce deployment times and attack surface. Containers should run as non-root users to limit security exposure. Applications must handle signals properly for graceful shutdowns. Logs go to stdout/stderr rather than files inside containers. Configuration comes from environment variables rather than hardcoded values. Health checks ensure orchestrators can detect and restart unhealthy containers.
Key Principles
Image Immutability: Container images function as immutable artifacts—they never change after creation. Applications receive configuration through environment variables, mounted volumes, or configuration management systems rather than modifying image contents at runtime. This immutability ensures identical behavior across development, staging, and production. When application code changes, build a new image with a new tag rather than updating an existing image.
Single Process Per Container: Each container runs one primary process, following the Unix philosophy of doing one thing well. A web application container runs the application server, not the application server plus a database plus a background job processor. Orchestration platforms like Kubernetes manage multiple containers as a unit (pods) when applications need coordinated processes. This separation simplifies logging, resource allocation, health checks, and scaling decisions.
Layered Filesystem: Container images consist of read-only layers stacked on top of each other, with a writable layer added when a container runs. Each Dockerfile instruction creates a new layer. The FROM instruction establishes the base layer, RUN commands add additional layers, and COPY instructions add files as layers. Container runtimes use copy-on-write filesystems to share base layers between containers, saving disk space. Optimizing layer order and combining related operations reduces image size and build times.
Ephemeral Containers: Containers can stop and start without losing critical data because applications store state externally in databases, object storage, or mounted volumes. Container filesystems (except mounted volumes) reset when containers restart. This ephemeral nature enables features like automatic failover, rolling deployments, and horizontal scaling. Applications must expect containers to start, stop, and move between hosts frequently.
Resource Constraints: Containers operate within explicit CPU and memory limits preventing resource exhaustion. Without limits, one misbehaving container can consume all host resources and affect other containers. Orchestration platforms like Kubernetes distinguish between resource requests (guaranteed minimum) and limits (hard maximum). Setting appropriate values requires understanding application behavior under load.
Build-Time vs Runtime Separation: Image builds install dependencies and prepare application artifacts, while container runtime provides configuration and connects to external services. Secrets like database passwords never appear in images—they come from environment variables or secret management systems at runtime. This separation prevents credentials from leaking through image registries and supports deploying the same image to multiple environments with different configurations.
Signal Handling: Containers must respond to SIGTERM signals by shutting down gracefully within a timeout period (typically 30 seconds). Orchestrators send SIGTERM before SIGKILL when stopping containers. Applications should finish processing current requests, close database connections, and clean up resources. Ruby applications handle signals through Signal.trap:
Signal.trap('TERM') do
  puts 'Received SIGTERM, shutting down gracefully'
  # Close database connections
  # Finish current requests
  # Save state if needed
  exit(0)
end
Health Check Endpoints: Containers expose health check endpoints that orchestrators probe to determine container health. Liveness probes detect completely broken containers that need restarting. Readiness probes identify containers not ready to receive traffic (still starting up or temporarily overloaded). Health checks should verify actual application functionality, not just process existence:
# Sinatra health check endpoint (DB here is a Sequel database handle)
get '/health' do
  content_type :json
  begin
    # Check database connection
    DB.test_connection
    status 200
    { status: 'healthy' }.to_json
  rescue => e
    status 503
    { status: 'unhealthy', error: e.message }.to_json
  end
end
Logging to Standard Streams: Container applications write logs to stdout and stderr rather than log files. Container runtimes capture these streams and forward them to logging systems. This pattern separates application logging from log management—the container platform handles collection, aggregation, and retention. Applications should log structured data (JSON) for easier parsing:
require 'json'
require 'time' # Time#iso8601 lives in the time stdlib

def log_event(level, message, metadata = {})
  event = {
    timestamp: Time.now.utc.iso8601,
    level: level,
    message: message
  }.merge(metadata)
  puts event.to_json
end

log_event('info', 'Request processed', { path: '/api/users', duration_ms: 45 })
Implementation Approaches
Multi-Stage Builds: Multi-stage builds create optimized images by using multiple FROM statements in a single Dockerfile. Early stages compile code and build artifacts using full development toolchains, while final stages copy only runtime dependencies and built artifacts. This approach produces smaller production images without build tools:
# Build stage
FROM ruby:3.2-alpine AS builder
RUN apk add --no-cache build-base
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 4 --retry 3

# Production stage (same base as the builder, so native extensions
# compiled against musl libc keep working at runtime)
FROM ruby:3.2-alpine
WORKDIR /app
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY . .
CMD ["bundle", "exec", "puma"]
The builder stage installs all gems, while the production stage copies only the installed bundle. For compiled languages, build stages contain compilers and development libraries that production images exclude.
Base Image Selection: Choosing base images balances size, security, and functionality. Official language images (ruby:3.2) provide complete environments but larger sizes. Alpine variants (ruby:3.2-alpine) use Alpine Linux for 50-80% size reduction but may have compatibility issues with native extensions. Distroless images contain only application dependencies without package managers or shells, minimizing attack surface but complicating debugging. Slim variants (ruby:3.2-slim) remove documentation and non-essential packages while maintaining compatibility.
For Ruby applications requiring native extensions, Debian-based images (slim variant) often provide the best balance. Alpine requires additional build dependencies and may have issues with gems using native code:
# Debian slim - good balance for Ruby
FROM ruby:3.2-slim

# Alpine - smallest but may have gem compatibility issues
FROM ruby:3.2-alpine
RUN apk add --no-cache build-base postgresql-dev

# Distroless - most secure but hardest to debug. Note: Google's
# distroless project publishes no official Ruby image, so this final
# stage would also need the Ruby interpreter and its shared libraries
# copied from the builder; treat the pattern below as an illustration.
FROM ruby:3.2-slim AS builder
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
FROM gcr.io/distroless/base-debian12
COPY --from=builder /usr/local /usr/local
COPY . /app
CMD ["/usr/local/bin/ruby", "/app/server.rb"]
Layer Optimization: Dockerfile instruction order affects build caching and image size. Instructions that change frequently should appear late in the Dockerfile. Copy dependency manifests before application code so dependency layers cache across code changes:
# Optimized layer ordering
FROM ruby:3.2-alpine
WORKDIR /app

# Dependencies change infrequently - cache this layer
COPY Gemfile Gemfile.lock ./
RUN bundle config set --local without 'development test' && bundle install

# Code changes frequently - separate layer
COPY . .
CMD ["ruby", "app.rb"]
This ordering ensures bundle install only reruns when dependencies change. If application code appears before bundle install, every code change invalidates the bundle layer cache.
Combining related RUN commands reduces layers but balances against build caching. Each RUN creates a layer, but combining all RUN commands into one prevents caching intermediate steps:
# Multiple layers - better caching during development
RUN apk add --no-cache postgresql-dev
RUN apk add --no-cache imagemagick
RUN bundle install

# Single layer - fewer layers in the image, worse caching
# (--no-cache already skips the apk cache, so no cleanup step needed)
RUN apk add --no-cache postgresql-dev imagemagick && \
    bundle install
Configuration Management: Applications receive configuration through environment variables, avoiding hardcoded values in images. Twelve-factor app methodology treats config as environment-specific data separate from code. Ruby applications read environment variables through ENV:
database_url = ENV.fetch('DATABASE_URL')
redis_url = ENV.fetch('REDIS_URL', 'redis://localhost:6379')
log_level = ENV.fetch('LOG_LEVEL', 'info')

# Validate required variables at startup
%w[DATABASE_URL SECRET_KEY_BASE].each do |var|
  raise "Missing required environment variable: #{var}" unless ENV[var]
end
Container orchestration platforms provide multiple configuration mechanisms. Kubernetes ConfigMaps store non-sensitive configuration, while Secrets store sensitive data. Docker Compose defines environment variables in docker-compose.yml. Cloud platforms offer parameter stores and secret managers that inject values at runtime.
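As a hedged sketch of how an application can consume those mechanisms with a single code path, the helper below prefers a mounted ConfigMap/Secret file, then an environment variable, then a default. The `/etc/config` mount path and the `resolve_setting` name are assumptions for illustration, not a platform API:

```ruby
# Hypothetical helper: mounted file first, then env var, then default.
# /etc/config is an assumed volumeMount path for a ConfigMap or Secret.
def resolve_setting(name, dir: '/etc/config', default: nil)
  file = File.join(dir, name.downcase)
  # Mounted ConfigMap/Secret entries appear as plain files
  return File.read(file).strip if File.exist?(file)

  # Fall back to the environment, then the supplied default
  ENV.fetch(name, default)
end

log_level = resolve_setting('LOG_LEVEL', default: 'info')
```

The same application code then works unchanged under Docker Compose (env vars only) and Kubernetes (mounted files plus env vars).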
Volume Mount Strategies: Containers use volumes to persist data and share files between containers. Bind mounts map host directories into containers, named volumes provide managed storage, and tmpfs mounts create temporary memory-backed filesystems. Applications store database data, uploaded files, and other stateful information in volumes:
# docker-compose.yml
services:
  app:
    image: myapp:latest
    volumes:
      # Named volume for application data
      - app_data:/app/data
      # Bind mount for development
      - ./config:/app/config:ro
      # tmpfs for temporary files
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 104857600 # 100 MiB, expressed in bytes
volumes:
  app_data:
Read-only mounts (:ro) prevent containers from modifying mounted content. Temporary filesystems store session data, caches, or processing artifacts that don't need persistence. Named volumes abstract storage location—the container runtime manages where data actually resides.
Build Argument Patterns: Build arguments (ARG) customize image builds for different targets without maintaining separate Dockerfiles. Arguments can specify base image versions, install optional components, or configure build-time settings:
ARG RUBY_VERSION=3.2
FROM ruby:${RUBY_VERSION}-alpine
ARG RAILS_ENV=production
ENV RAILS_ENV=${RAILS_ENV}
ARG BUNDLE_WITHOUT="development test"
RUN bundle install --without ${BUNDLE_WITHOUT}
Build arguments differ from environment variables—ARG values exist only during build, while ENV values persist in the final image. Pass build arguments using --build-arg:
docker build --build-arg RUBY_VERSION=3.1 --build-arg RAILS_ENV=staging .
Security Implications
Non-Root User Execution: Containers should run as non-root users to limit security exposure. Default container execution uses root (UID 0), giving processes full system capabilities within the container. If container isolation breaks, root processes potentially affect the host. Creating unprivileged users prevents privilege escalation:
FROM ruby:3.2-alpine
WORKDIR /app
# Create application user
RUN addgroup -g 1000 appuser && \
adduser -D -u 1000 -G appuser appuser
# Install dependencies as root
COPY Gemfile Gemfile.lock ./
RUN bundle install
# Copy code and set ownership
COPY --chown=appuser:appuser . .
# Switch to non-root user
USER appuser
CMD ["bundle", "exec", "puma"]
The USER instruction switches the execution context. All subsequent instructions and the final container process run as the specified user. Set file ownership during COPY operations to ensure the application user can access necessary files.
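An optional startup guard can double-check the runtime user from inside the process; the guard itself is an extra assumption layered on top of the Dockerfile above, not something the USER instruction requires:

```ruby
require 'etc'

# Detect the effective user so the app can refuse to boot as root.
def running_as_root?
  Process.uid.zero?
end

def current_user
  Etc.getpwuid(Process.uid).name
end

# At boot, e.g. in an initializer:
# abort('refusing to run as root') if running_as_root?
```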
Secrets Management: Never embed secrets in container images—they persist in image layers even after deletion and leak through registries. Environment variables provide runtime secrets, but they appear in container inspect output and process listings. More secure approaches include:
Secret files mounted from orchestration platform secret stores (Kubernetes Secrets, Docker Secrets). These mount as files into containers at runtime:
# Read secret from mounted file
secret_key = File.read('/run/secrets/secret_key_base').strip
database_password = File.read('/run/secrets/db_password').strip
Cloud provider secret managers (AWS Secrets Manager, Google Secret Manager) accessed through application code:
require 'aws-sdk-secretsmanager'
require 'json'

def fetch_secret(secret_name)
  client = Aws::SecretsManager::Client.new(region: 'us-east-1')
  response = client.get_secret_value(secret_id: secret_name)
  JSON.parse(response.secret_string)
end

db_credentials = fetch_secret('production/database')
Image Scanning: Container images should undergo security scanning before deployment. Scanners detect known vulnerabilities (CVEs) in base images and installed packages. Tools like Trivy, Snyk, and Clair identify vulnerable components and suggest updates:
# Scan image for vulnerabilities
trivy image myapp:latest
# Scan and fail on high/critical vulnerabilities
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest
Integrate scanning into CI/CD pipelines to prevent vulnerable images from reaching production. Regular rescanning catches newly discovered vulnerabilities in existing images. Automated base image updates through Dependabot or Renovate keep dependencies current.
Read-Only Filesystems: Running containers with read-only root filesystems prevents runtime modifications and limits attack vectors. Applications needing writable areas use tmpfs mounts for temporary data:
# Kubernetes pod with read-only filesystem
apiVersion: v1
kind: Pod
metadata:
  name: readonly-app
spec:
  containers:
    - name: app
      image: myapp:latest
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: cache
          mountPath: /app/tmp
  volumes:
    - name: tmp
      emptyDir: {}
    - name: cache
      emptyDir: {}
Ruby applications often need writable /tmp directories for temporary files and session storage. Declare these explicitly as volume mounts.
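In Ruby, scratch files then go through Tempfile, which writes under Dir.tmpdir (/tmp by default, overridable via TMPDIR); this sketch assumes the tmpfs mounts declared above:

```ruby
require 'tempfile'
require 'tmpdir'

# Write intermediate data only to the writable tmpfs mount; the file
# is created under Dir.tmpdir and removed when the block exits.
def with_scratch_file(prefix, &block)
  Tempfile.create(prefix, Dir.tmpdir, &block)
end

with_scratch_file('upload') do |f|
  f.write('intermediate data')
  f.flush
end
```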
Network Segmentation: Container network isolation separates application tiers and limits lateral movement during security incidents. Internal services should not expose ports to public networks. Orchestration platforms provide network policies defining allowed communication:
# Kubernetes NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 3000
This policy allows only frontend pods to access API pods on port 3000. Default-deny policies require explicit allow rules for all communication.
Resource Limits for DoS Prevention: Resource limits prevent denial-of-service scenarios where containers consume excessive CPU or memory. Without limits, memory leaks or CPU-intensive operations affect other containers on the same host:
# Kubernetes resource limits
apiVersion: v1
kind: Pod
metadata:
  name: resource-limited-app
spec:
  containers:
    - name: app
      image: myapp:latest
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"
Requests guarantee minimum resources. Limits enforce maximum consumption—containers exceeding memory limits get terminated (OOMKilled), while CPU limits throttle processing. Monitor actual usage to set appropriate values.
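An application can also honor those limits from inside the process by reading the cgroup filesystem, for example to size caches or worker pools. The v2 (memory.max) and v1 (memory/memory.limit_in_bytes) paths below are assumptions about how the runtime mounts cgroups:

```ruby
# Return the container's memory limit in bytes, or nil when no limit
# is set or no cgroup limit file is visible (e.g. outside a container).
# Note: cgroup v1 reports "unlimited" as a very large number.
def container_memory_limit(root: '/sys/fs/cgroup')
  v2 = File.join(root, 'memory.max')
  v1 = File.join(root, 'memory', 'memory.limit_in_bytes')
  raw = File.read(File.exist?(v2) ? v2 : v1).strip
  raw == 'max' ? nil : Integer(raw)
rescue Errno::ENOENT
  nil
end

# Size thread pools or caches from the limit instead of host memory
limit = container_memory_limit
```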
Tools & Ecosystem
Docker: Docker provides the complete container development lifecycle—building images, running containers locally, and pushing to registries. Docker Engine includes the container runtime, while Docker CLI offers command-line management. Docker Desktop packages Docker Engine with Kubernetes for local development on macOS and Windows.
Common Docker commands for Ruby applications:
# Build image
docker build -t myapp:1.0.0 .
# Run container with environment variables
docker run -d -p 3000:3000 \
-e DATABASE_URL=postgresql://localhost/mydb \
-e SECRET_KEY_BASE=abc123 \
myapp:1.0.0
# Execute commands in running container
docker exec -it container_id bundle exec rails console
# View logs
docker logs -f container_id
# Stop container gracefully
docker stop container_id
Docker Compose manages multi-container applications through declarative YAML configuration:
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://postgres:secret@db:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: secret
  redis:
    image: redis:7-alpine
volumes:
  postgres_data:
Kubernetes: Kubernetes orchestrates containers across clusters of machines, managing deployment, scaling, and networking. Core Kubernetes concepts include Pods (container groups), Deployments (replica management), Services (networking), and Namespaces (isolation). Ruby applications deploy to Kubernetes through manifest files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rails-app
  template:
    metadata:
      labels:
        app: rails-app
    spec:
      containers:
        - name: app
          image: myapp:1.0.0
          ports:
            - containerPort: 3000
          env:
            - name: RAILS_ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
Kubernetes Ingress manages external access to services, LoadBalancer services expose applications externally, and ConfigMaps/Secrets provide configuration.
Container Registries: Container registries store and distribute images. Docker Hub hosts public images and private repositories. Cloud providers offer managed registries (Amazon ECR, Google Artifact Registry, Azure Container Registry) with integrated security scanning and access control. Self-hosted options include Harbor, GitLab Container Registry, and JFrog Artifactory.
Push images to registries after building:
# Tag image with registry URL
docker tag myapp:1.0.0 registry.example.com/myapp:1.0.0
# Authenticate to registry
docker login registry.example.com
# Push image
docker push registry.example.com/myapp:1.0.0
# Pull image on deployment targets
docker pull registry.example.com/myapp:1.0.0
BuildKit and Buildah: BuildKit provides advanced Docker build features including parallel builds, build secrets, and SSH forwarding. Enable BuildKit through environment variable:
DOCKER_BUILDKIT=1 docker build .
BuildKit syntax supports secret mounts preventing credential leakage:
# syntax=docker/dockerfile:1.4
FROM ruby:3.2-alpine
WORKDIR /app
COPY Gemfile Gemfile.lock ./
# Mount the bundler config (e.g. private gem credentials) only for
# this step; it never lands in an image layer
RUN --mount=type=secret,id=bundle_config,target=/root/.bundle/config \
    bundle install
Buildah offers daemonless container builds without Docker daemon, producing OCI-compatible images through command-line operations. Podman provides Docker-compatible CLI without daemon requirements.
Monitoring and Observability: Container monitoring tracks resource usage, health status, and application metrics. Prometheus collects metrics from containers, Grafana visualizes data, and Jaeger traces distributed requests. Cloud platforms include integrated monitoring (CloudWatch, Google Cloud Monitoring, Azure Monitor).
Ruby applications expose metrics through prometheus-client gem:
require 'prometheus/client'

prometheus = Prometheus::Client.registry
request_counter = prometheus.counter(
  :http_requests_total,
  docstring: 'Total HTTP requests',
  labels: [:method, :path, :status]
)

# Increment in application code
request_counter.increment(labels: { method: 'GET', path: '/api/users', status: 200 })
Structured logging JSON output integrates with log aggregation systems like ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, or Loki.
Real-World Applications
Rolling Deployments: Rolling deployments update applications without downtime by gradually replacing old containers with new versions. Kubernetes Deployments handle this automatically:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  selector:
    matchLabels:
      app: rails-app
  template:
    metadata:
      labels:
        app: rails-app
    spec:
      containers:
        - name: app
          image: myapp:2.0.0
The maxSurge parameter allows creating 2 extra pods during rollout, while maxUnavailable permits 1 pod to be down. Kubernetes gradually creates new pods, waits for readiness probes to pass, then terminates old pods. If new pods fail health checks, the rollout stops automatically.
Blue-green deployments maintain two complete environments, switching traffic atomically:
# Blue deployment (current)
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: rails-app
    version: blue
  ports:
    - port: 80
      targetPort: 3000
# Switch to green by updating the selector:
#   version: green
Database Migration Patterns: Database migrations during container deployments require coordination between schema changes and application code. Common approaches include:
Init containers run migrations before application containers start:
apiVersion: v1
kind: Pod
spec:
  initContainers:
    - name: migrate
      image: myapp:2.0.0
      command: ['bundle', 'exec', 'rails', 'db:migrate']
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
  containers:
    - name: app
      image: myapp:2.0.0
Kubernetes Jobs run one-off migration tasks:
apiVersion: batch/v1
kind: Job
metadata:
  name: rails-migration
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
        - name: migrate
          image: myapp:2.0.0
          command: ['bundle', 'exec', 'rails', 'db:migrate']
      restartPolicy: Never
For zero-downtime deployments, migrations must remain compatible with the previous application version. Additive changes (new tables, columns) work across versions, while destructive changes (dropping columns) require multi-step deployments.
Horizontal Pod Autoscaling: Applications scale automatically based on metrics like CPU usage or request rate. Kubernetes Horizontal Pod Autoscaler adjusts replica counts:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rails-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rails-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "1000"
This configuration maintains CPU usage around 70% by adding or removing pods. Custom metrics like request rate provide application-specific scaling triggers. Ruby applications expose custom metrics through Prometheus.
Sidecar Pattern: Sidecars run helper containers alongside application containers in the same pod. Common uses include log forwarding, metrics collection, and SSL termination. A logging sidecar forwards application logs to central storage:
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: logs
          mountPath: /app/logs
    - name: log-forwarder
      image: fluentd:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
The sidecar pattern keeps the main application container focused on business logic while helper containers handle cross-cutting concerns.
Multi-Environment Deployment: Identical container images deploy across development, staging, and production environments with environment-specific configuration. Kubernetes namespaces isolate environments:
# production namespace
apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
  namespace: production
spec:
  template:
    spec:
      containers:
        - name: app
          image: myapp:1.0.0
          env:
            - name: RAILS_ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: production-db
                  key: url
The same image runs in staging with different environment variables and secrets. This approach ensures production testing happens on identical code.
Common Pitfalls
PID 1 Zombie Process Problem: Containers running applications as PID 1 must handle child process cleanup. Unix processes orphaned by parent death get adopted by PID 1, which should reap zombie processes. Many applications don't handle this correctly, causing zombie accumulation.
Using the shell form of CMD exacerbates this issue, since the shell becomes PID 1 yet neither reaps zombies nor forwards SIGTERM to the application:
# Problematic - sh becomes PID 1; no zombie reaping or signal forwarding
CMD bundle exec puma
Solutions include using exec form of CMD:
# Better - direct process execution
CMD ["bundle", "exec", "puma"]
Or using init systems like tini or dumb-init:
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["bundle", "exec", "puma"]
Build Context Size: Docker sends the entire build context to the Docker daemon before building. Large contexts (node_modules, log files, git history) significantly slow builds. Use .dockerignore to exclude unnecessary files:
# .dockerignore
.git
node_modules
*.log
tmp/*
coverage/*
.env.local
Excluding development dependencies and generated files typically cuts the context from hundreds of megabytes to a few megabytes.
Hardcoded Localhost References: Applications connecting to databases or caches via localhost fail in containers—services run in separate containers with different network addresses. Configuration must use service names or environment variables:
# Wrong - hardcoded localhost
redis = Redis.new(host: 'localhost', port: 6379)
# Correct - environment-based configuration
redis = Redis.new(url: ENV.fetch('REDIS_URL'))
Container orchestration platforms provide DNS resolution mapping service names to IP addresses.
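The resolution step can be sketched with Ruby's stdlib resolver. A name like 'db' would be a Compose or Kubernetes service name and only resolves inside that network, so treat it as hypothetical here:

```ruby
require 'resolv'

# Resolve a service name the way the app would inside the cluster;
# returns nil when the name is not resolvable from this network.
def service_ip(name)
  Resolv.getaddress(name)
rescue Resolv::ResolvError
  nil
end

service_ip('db')        # an IP inside the Compose/Kubernetes network, else nil
service_ip('localhost') # loopback everywhere
```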
Missing Health Checks: Applications without health checks appear healthy to orchestrators even when failing. Missing health checks prevent automatic recovery and cause cascading failures. Implement both liveness and readiness probes:
class HealthController < ApplicationController
  def liveness
    # Simple check - is the process running?
    render json: { status: 'alive' }, status: 200
  end

  def readiness
    # Comprehensive check - can we handle requests?
    checks = {
      database: check_database,
      redis: check_redis,
      storage: check_storage
    }

    if checks.values.all?
      render json: { status: 'ready', checks: checks }, status: 200
    else
      render json: { status: 'not_ready', checks: checks }, status: 503
    end
  end

  private

  def check_database
    ActiveRecord::Base.connection.active?
  rescue
    false
  end

  def check_redis
    # Redis.current was removed in redis-rb 5; use an explicit client
    Redis.new(url: ENV.fetch('REDIS_URL')).ping == 'PONG'
  rescue
    false
  end

  def check_storage
    # e.g. verify object storage access or a writable scratch directory
    File.writable?('/tmp')
  end
end
Ignoring Exit Codes: Container exit codes indicate success or failure. Exit code 0 signals successful termination, while non-zero codes indicate errors. Applications should exit with appropriate codes:
begin
  run_application
  exit 0
rescue ApplicationError => e
  logger.error "Application error: #{e.message}"
  exit 1
rescue => e
  logger.fatal "Unexpected error: #{e.message}"
  exit 2
end
Orchestrators use exit codes to determine restart behavior—exit 0 might not restart the container, while non-zero codes trigger restarts.
Environment Variable Injection Vulnerabilities: Applications executing shell commands with environment variables risk command injection if variables contain malicious input. Avoid shell interpolation:
# Vulnerable to injection
system("curl #{ENV['API_URL']}")
# Safe - no shell interpretation
system('curl', ENV['API_URL'])
# Safe with validation
url = ENV.fetch('API_URL')
raise 'Invalid URL' unless url.match?(/\Ahttps?:\/\//)
system('curl', url)
Timestamp Synchronization: Containers share the host system clock. Applications assuming specific timezones or relying on local time face issues when hosts use different timezones. Use UTC consistently:
# Set timezone in Dockerfile
ENV TZ=UTC
# Use UTC in application code
Time.now.utc
DateTime.now.utc
Configure logging frameworks to output UTC timestamps for consistent log correlation across containers.
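A Logger configured this way stamps every entry in UTC regardless of the host's TZ setting, using only the standard library:

```ruby
require 'logger'
require 'json'
require 'time' # for Time#iso8601

# Emit one JSON object per line with a UTC timestamp, suitable for
# stdout collection and log correlation across containers.
def build_utc_logger(io = $stdout)
  logger = Logger.new(io)
  logger.formatter = proc do |severity, time, _progname, msg|
    { ts: time.utc.iso8601, level: severity, message: msg }.to_json + "\n"
  end
  logger
end

log = build_utc_logger
log.info('worker started')
```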
Reference
Dockerfile Instructions
| Instruction | Purpose | Example |
|---|---|---|
| FROM | Sets base image | FROM ruby:3.2-alpine |
| WORKDIR | Sets working directory | WORKDIR /app |
| COPY | Copies files into image | COPY Gemfile Gemfile.lock ./ |
| ADD | Copies files with URL/archive support | ADD https://example.com/file.tar.gz /tmp |
| RUN | Executes commands during build | RUN bundle install |
| CMD | Default command when container starts | CMD ["bundle", "exec", "puma"] |
| ENTRYPOINT | Configures container as executable | ENTRYPOINT ["bundle", "exec"] |
| ENV | Sets environment variables | ENV RAILS_ENV=production |
| ARG | Defines build-time variables | ARG RUBY_VERSION=3.2 |
| EXPOSE | Documents listening ports | EXPOSE 3000 |
| VOLUME | Creates mount point | VOLUME /app/data |
| USER | Sets user for remaining instructions | USER appuser |
| LABEL | Adds metadata to image | LABEL version=1.0.0 |
Common Docker Commands
| Command | Purpose | Example |
|---|---|---|
| build | Creates image from Dockerfile | docker build -t myapp:latest . |
| run | Creates and starts container | docker run -d -p 3000:3000 myapp:latest |
| ps | Lists running containers | docker ps -a |
| logs | Shows container output | docker logs -f container_id |
| exec | Runs command in running container | docker exec -it container_id bash |
| stop | Stops running container | docker stop container_id |
| rm | Removes container | docker rm container_id |
| rmi | Removes image | docker rmi myapp:latest |
| push | Uploads image to registry | docker push registry.io/myapp:latest |
| pull | Downloads image from registry | docker pull registry.io/myapp:latest |
| inspect | Shows detailed container info | docker inspect container_id |
| stats | Shows resource usage | docker stats container_id |
Container Lifecycle Signals
| Signal | Meaning | Container Response |
|---|---|---|
| SIGTERM | Graceful shutdown request | Complete current work, clean up, exit |
| SIGKILL | Forced termination | Immediate termination, no cleanup |
| SIGHUP | Reload configuration | Reload config without restart |
| SIGUSR1 | User-defined signal | Application-specific behavior |
| SIGUSR2 | User-defined signal | Application-specific behavior |
Resource Limits
| Resource | Request | Limit | Behavior |
|---|---|---|---|
| Memory | Guaranteed minimum | Hard maximum | OOMKilled when exceeded |
| CPU | Minimum share | Maximum usage | Throttled when exceeded |
| Storage | Initial allocation | Maximum size | Fails when full |
| Ephemeral Storage | Guaranteed space | Maximum size | Eviction when exceeded |
Health Check Types
| Check Type | Purpose | Failure Action | Timing |
|---|---|---|---|
| Liveness | Detects broken containers | Restart container | During execution |
| Readiness | Determines traffic routing | Remove from load balancer | During startup and execution |
| Startup | Delays liveness checks | Allows slow startup | During initial startup only |
Security Best Practices Checklist
| Practice | Implementation | Verification |
|---|---|---|
| Non-root user | USER instruction in Dockerfile | docker inspect shows User field |
| Read-only filesystem | securityContext.readOnlyRootFilesystem | Attempt file write fails |
| No secrets in images | Environment variables or mounted secrets | Scan image layers for credentials |
| Minimal base image | Alpine or distroless variants | Image size under 100MB for Ruby |
| Updated dependencies | Regular dependency updates | Vulnerability scan shows no criticals |
| Resource limits | Set memory and CPU limits | Container respects limits under load |
| Network policies | Restrict ingress/egress | Network traffic blocked as configured |
| Image scanning | CI/CD integration | Build fails on critical vulnerabilities |
Common Environment Variables
| Variable | Purpose | Example Value |
|---|---|---|
| DATABASE_URL | Database connection string | postgresql://user:pass@host:5432/db |
| REDIS_URL | Redis connection string | redis://host:6379/0 |
| RAILS_ENV | Rails environment | production |
| RACK_ENV | Rack environment | production |
| SECRET_KEY_BASE | Encryption key | random-64-char-hex-string |
| LOG_LEVEL | Logging verbosity | info |
| PORT | Application port | 3000 |
| LANG | Character encoding | en_US.UTF-8 |
| TZ | Timezone | UTC |
| MALLOC_ARENA_MAX | Memory allocator tuning | 2 |