Overview
Virtualization creates abstracted versions of computing resources, separating the logical representation from the physical hardware. This technology enables multiple isolated environments to run on a single physical machine, each operating as if it has dedicated resources.
The concept emerged in the 1960s with IBM mainframes but gained widespread adoption in the 2000s with x86 virtualization. Modern software development relies heavily on virtualization for development environments, testing, deployment, and production infrastructure.
Virtualization operates through a hypervisor or virtualization layer that mediates between virtual machines and physical hardware. This layer manages resource allocation, maintains isolation between virtual environments, and presents virtual hardware interfaces to guest systems.
# Basic Docker container interaction in Ruby
require 'docker'
# Create a container from an image
container = Docker::Container.create(
'Image' => 'ruby:3.2',
'Cmd' => ['ruby', '-e', 'puts "Hello from container"']
)
# Start, wait for the process to exit, then retrieve output
container.start
container.wait
container.logs(stdout: true)
# => "Hello from container\n"
Software teams use virtualization to achieve reproducible environments, resource isolation, and efficient hardware utilization. A developer can run multiple operating systems simultaneously, test across different configurations, and deploy applications with guaranteed environmental consistency.
Key Principles
Virtualization builds on three fundamental principles: abstraction, isolation, and encapsulation. Abstraction separates the logical resource from the physical implementation. Isolation ensures operations in one virtual environment do not affect others. Encapsulation packages the entire runtime environment including dependencies and configurations.
Virtual Machine Architecture
A virtual machine presents a complete hardware abstraction to guest operating systems. The hypervisor intercepts privileged instructions from guest systems and translates them to operations on physical hardware. Virtual machines include virtual CPUs, memory, storage, and network interfaces.
Type 1 hypervisors run directly on hardware, replacing the traditional operating system. Examples include VMware ESXi, Microsoft Hyper-V, and Xen. These hypervisors manage hardware resources and schedule virtual machine execution.
Type 2 hypervisors run as applications within a host operating system. VirtualBox and VMware Workstation represent this category. The host OS manages hardware while the hypervisor creates virtual environments within that context.
Resource Allocation
Virtualization platforms allocate CPU time, memory, storage, and network bandwidth to virtual environments. The hypervisor schedules virtual CPU execution on physical cores, implements memory management with techniques like ballooning and overcommitment, and virtualizes storage through disk images and volumes.
Memory ballooning allows the hypervisor to reclaim unused memory from virtual machines. The hypervisor installs a balloon driver in the guest OS that can inflate to consume memory, forcing the guest to release pages back to the hypervisor for reallocation.
# Configure container resource limits
require 'docker'
container = Docker::Container.create(
'Image' => 'ubuntu:22.04',
'HostConfig' => {
'Memory' => 512 * 1024 * 1024, # 512 MB RAM limit
'MemorySwap' => 1024 * 1024 * 1024, # 1 GB including swap
'CpuShares' => 512, # CPU weight
'CpuQuota' => 50000, # 50% of one core
'CpuPeriod' => 100000
}
)
Isolation Mechanisms
Virtual machines achieve isolation through hardware virtualization extensions like Intel VT-x and AMD-V. These extensions enable the hypervisor to run guest OS kernels in isolated contexts without modification. Each virtual machine operates in a separate address space with no direct access to other virtual machines.
Containers implement isolation through operating system features like namespaces and control groups. Namespaces isolate process trees, network stacks, filesystem mounts, and user IDs. Control groups limit resource consumption and prevent one container from monopolizing system resources.
Storage Virtualization
Virtual storage abstracts physical disks into virtual disk images. These images can be thin-provisioned, allocating storage dynamically as the guest writes data, or thick-provisioned, allocating the full capacity upfront. Snapshots capture the state of virtual storage at a specific point, enabling rollback and cloning operations.
Copy-on-write mechanisms optimize storage for virtual machines. When creating a snapshot or clone, the virtualization layer creates a new disk image that references the original. Only modified blocks are written to the new image, saving storage space and creation time.
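The block-level bookkeeping behind copy-on-write can be sketched in a few lines of Ruby. The `CowDisk` class and per-block granularity here are illustrative, not any particular hypervisor's disk format:

```ruby
# A toy copy-on-write disk: reads fall through to the base image unless a
# block has been modified; writes land only in the overlay, so the base
# (e.g. a snapshot) is never touched.
class CowDisk
  attr_reader :overlay

  def initialize(base_blocks)
    @base = base_blocks   # shared, read-only base image
    @overlay = {}         # block index => modified data
  end

  def read(index)
    @overlay.fetch(index) { @base[index] }
  end

  def write(index, data)
    @overlay[index] = data
  end
end

snapshot = ['boot', 'etc', 'data'].freeze
clone = CowDisk.new(snapshot)
clone.write(2, 'data-v2')
clone.read(0)       # => "boot"    (unchanged, served from base)
clone.read(2)       # => "data-v2" (served from overlay)
clone.overlay.size  # => 1         (only one block consumed new storage)
```

Because the overlay records only modified blocks, creating a clone is instant and its storage cost is proportional to the changes made, which is exactly why snapshot and clone operations are cheap.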
Implementation Approaches
Full Virtualization
Full virtualization presents complete virtual hardware to guest operating systems without requiring modifications. The hypervisor traps and translates all privileged instructions, maintaining the illusion of exclusive hardware access. This approach supports unmodified operating systems but incurs performance overhead from instruction translation.
Binary translation converts privileged x86 instructions into safe sequences that execute on the host. When a guest OS attempts a privileged operation, the hypervisor intercepts the instruction, translates it, and executes equivalent operations. This technique enabled x86 virtualization before hardware assistance became available.
Hardware-assisted virtualization through Intel VT-x and AMD-V extensions reduces the overhead of full virtualization. These extensions add processor modes specifically for virtualization, allowing guest operating systems to execute privileged instructions directly under hypervisor control. The processor automatically traps to the hypervisor when necessary.
Paravirtualization
Paravirtualization modifies guest operating systems to replace privileged instructions with hypercalls to the hypervisor. This approach eliminates the overhead of binary translation but requires guest OS modifications. Xen pioneered paravirtualization, achieving performance close to native execution.
Hypercalls function as system calls from the guest OS to the hypervisor. When the guest needs privileged operations, it invokes a hypercall rather than executing privileged instructions. The hypervisor processes the request and returns results to the guest.
VirtIO drivers represent a modern paravirtualization approach. Instead of emulating hardware, the hypervisor presents standardized virtual devices. Guest operating systems include VirtIO drivers that communicate directly with these interfaces, reducing emulation overhead while maintaining portability.
Operating System-Level Virtualization
Containers share the host operating system kernel while isolating application processes. The kernel provides separate namespaces for process IDs, network stacks, filesystem mounts, and other resources. Containers start faster and consume fewer resources than virtual machines but cannot run different kernel versions.
Linux namespaces create isolated views of system resources:
- PID namespace: Isolated process ID space
- Network namespace: Separate network interfaces and routing tables
- Mount namespace: Independent filesystem mount points
- UTS namespace: Distinct hostname and domain name
- IPC namespace: Separate inter-process communication
- User namespace: Independent user and group IDs
Control groups (cgroups) limit resource consumption for container processes. Administrators configure CPU shares, memory limits, disk I/O quotas, and network bandwidth restrictions. The kernel enforces these limits, preventing containers from exceeding their allocations.
# Ruby sketch: enter new namespaces via the unshare(2) system call.
# Container runtimes use clone(2) with these same CLONE_NEW* flags when
# spawning a child process; unshare applies them to the current process
# and is far simpler to call through Fiddle.
require 'fiddle'
# Namespace flag constants from <sched.h>
CLONE_NEWPID = 0x20000000 # New PID namespace
CLONE_NEWNET = 0x40000000 # New network namespace
CLONE_NEWNS  = 0x00020000 # New mount namespace
libc = Fiddle.dlopen(nil)
unshare = Fiddle::Function.new(
  libc['unshare'],
  [Fiddle::TYPE_INT],
  Fiddle::TYPE_INT
)
# Requires root (or CAP_SYS_ADMIN); afterwards this process and its
# children see an isolated mount table and network stack
if unshare.call(CLONE_NEWNS | CLONE_NEWNET) != 0
  warn 'unshare failed (root privileges required)'
end
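Control groups can be driven at a similarly low level through the cgroup v2 filesystem. The sketch below builds the path-to-value writes for a hypothetical group named `demo`; actually applying them requires root and a cgroup2 mount at /sys/fs/cgroup:

```ruby
# cgroup_writes builds the interface-file writes for a set of limits;
# apply_cgroup! performs them and moves a process into the group, at
# which point the kernel enforces the limits.
CGROUP_ROOT = '/sys/fs/cgroup'

def cgroup_writes(group, memory_bytes:, cpu_quota_us:, cpu_period_us:, pids:)
  base = File.join(CGROUP_ROOT, group)
  {
    File.join(base, 'memory.max') => memory_bytes.to_s,
    File.join(base, 'cpu.max')    => "#{cpu_quota_us} #{cpu_period_us}",
    File.join(base, 'pids.max')   => pids.to_s
  }
end

def apply_cgroup!(group, pid, **limits)
  base = File.join(CGROUP_ROOT, group)
  Dir.mkdir(base) unless Dir.exist?(base)  # requires root
  cgroup_writes(group, **limits).each { |path, value| File.write(path, value) }
  # Writing a PID to cgroup.procs moves that process into the group
  File.write(File.join(base, 'cgroup.procs'), pid.to_s)
end

writes = cgroup_writes('demo', memory_bytes: 512 * 1024 * 1024,
                       cpu_quota_us: 50_000, cpu_period_us: 100_000, pids: 100)
# writes['/sys/fs/cgroup/demo/cpu.max'] # => "50000 100000"
```

These are the same `memory.max`, `cpu.max`, and `pids.max` knobs that container runtimes configure when you pass Memory and CpuQuota limits to Docker.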
Application Virtualization
Application virtualization packages software with dependencies into isolated units. Unlike system virtualization, this approach focuses on individual applications rather than complete operating systems. Technologies like Docker and AppImage create self-contained application packages.
Layer-based filesystems optimize storage for application containers. Each layer represents a filesystem delta, and the container runtime combines these layers to create the final filesystem view. Base layers containing common dependencies can be shared across multiple containers.
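The top-down lookup through layers can be sketched in plain Ruby. The class is illustrative, not overlayfs itself; `nil` in an upper layer stands in for a whiteout entry:

```ruby
# Toy layered filesystem: each layer maps paths to contents; nil in an
# upper layer acts as a whiteout (the path is deleted in the merged view).
class LayeredFS
  def initialize(layers)  # lowest layer first
    @layers = layers
  end

  def read(path)
    @layers.reverse_each do |layer|
      return layer[path] if layer.key?(path)  # topmost layer wins
    end
    nil
  end

  def merged_view
    @layers.each_with_object({}) { |layer, view| view.merge!(layer) }
           .reject { |_, content| content.nil? }  # drop whiteouts
  end
end

base = { '/bin/sh' => 'shell', '/app/config' => 'v1' }  # shared base layer
top  = { '/app/config' => 'v2', '/bin/sh' => nil }      # container's delta
fs = LayeredFS.new([base, top])
fs.read('/app/config')  # => "v2"
fs.merged_view          # => { "/app/config" => "v2" }
```

Because `base` is never mutated, any number of containers can stack their own `top` layer over the same base, which is how shared base layers save storage.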
Network Virtualization
Virtual networks create isolated network topologies within and across physical hosts. Software-defined networking separates the network control plane from the data plane, enabling programmable network configuration. Virtual switches, routers, and firewalls provide network services to virtual machines and containers.
Bridge networking connects containers to the host network stack. The container runtime creates a virtual bridge interface, and each container receives a virtual ethernet pair. Containers communicate through the bridge, and the host can route traffic between containers and external networks.
Overlay networks span multiple hosts, creating a unified network for distributed containers. Technologies like VXLAN encapsulate container network traffic in host network packets. Container network packets travel through the overlay, while the physical network sees only host-to-host communication.
Ruby Implementation
Docker Integration
The docker-api gem provides Ruby bindings for Docker Engine. This library enables container lifecycle management, image operations, network configuration, and volume management from Ruby applications.
require 'docker'
# Connect to Docker daemon
Docker.url = 'unix:///var/run/docker.sock'
# Create and run a container
container = Docker::Container.create(
'Image' => 'ruby:3.2-alpine',
'Cmd' => ['ruby', '--version'],
'Labels' => {
'app' => 'ruby-version-checker',
'environment' => 'development'
}
)
container.start
container.wait
output = container.logs(stdout: true)
puts output
container.delete(force: true)
Image management operations include pulling, building, and tagging. The gem supports streaming build output and handling authentication for private registries.
require 'docker'
require 'json'
# Pull an image
image = Docker::Image.create('fromImage' => 'postgres:15')
# Build image from Dockerfile
build_output = []
image = Docker::Image.build_from_dir('.') do |chunk|
  # A chunk may carry several JSON lines; parse each one
  chunk.each_line do |line|
    data = JSON.parse(line) rescue next
    build_output << data['stream'] if data['stream']
    print data['stream'] if data['stream']
  end
end
# Tag image
image.tag('repo' => 'myapp', 'tag' => 'v1.0.0')
# Push to registry (requires authentication)
Docker.authenticate!(
  'username' => 'user',
  'password' => 'pass',
  'serveraddress' => 'https://index.docker.io/v1/'
)
image.push
Container Orchestration
The kubeclient gem interfaces with Kubernetes clusters. Ruby applications can deploy, scale, and manage containerized workloads across distributed infrastructure.
require 'kubeclient'
# Connect to Kubernetes cluster
# Deployments live in the apps/v1 API group, not the core /api endpoint
config = Kubeclient::Config.read('/path/to/kubeconfig')
context = config.context
client = Kubeclient::Client.new(
  context.api_endpoint + '/apis/apps',
  'v1',
  ssl_options: context.ssl_options,
  auth_options: context.auth_options
)
# Create a deployment
deployment = Kubeclient::Resource.new({
metadata: {
name: 'ruby-app',
namespace: 'production'
},
spec: {
replicas: 3,
selector: {
matchLabels: { app: 'ruby-app' }
},
template: {
metadata: {
labels: { app: 'ruby-app' }
},
spec: {
containers: [{
name: 'ruby-app',
image: 'myregistry/ruby-app:v1.0.0',
ports: [{ containerPort: 3000 }],
resources: {
limits: { cpu: '500m', memory: '512Mi' },
requests: { cpu: '250m', memory: '256Mi' }
}
}]
}
}
}
})
client.create_deployment(deployment)
Vagrant Management
Vagrant provides a Ruby-based DSL for defining and managing development environments. The Vagrantfile configures virtual machines, provisioning scripts, and network settings.
# Vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/jammy64"
config.vm.provider "virtualbox" do |vb|
vb.memory = "2048"
vb.cpus = 2
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end
config.vm.network "private_network", ip: "192.168.33.10"
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.synced_folder ".", "/vagrant", type: "virtualbox"
config.vm.provision "shell", inline: <<-SHELL
apt-get update
apt-get install -y ruby-full postgresql redis-server
sudo -u vagrant bash -c 'cd /vagrant && bundle install'
SHELL
config.trigger.after :up do |trigger|
trigger.info = "Running database migrations"
trigger.run_remote = {inline: "cd /vagrant && bundle exec rake db:migrate"}
end
end
System Containers with LXC
The ruby-lxc gem provides bindings for Linux Containers. This low-level interface creates and manages system containers with fine-grained control.
require 'lxc'
# Create a container object
container = LXC::Container.new('ruby-dev')
# Build the rootfs from the Ubuntu template
container.create('ubuntu', nil, {}, 0, ['-r', 'jammy'])
# Configure networking after creation (the template supplies the rootfs
# path and base config, which would overwrite earlier settings)
container.set_config_item('lxc.net.0.type', 'veth')
container.set_config_item('lxc.net.0.link', 'lxcbr0')
container.set_config_item('lxc.net.0.flags', 'up')
container.save_config
# Start container
container.start
# Execute commands in container, waiting for each attached process
container.attach(wait: true) do
  system('apt-get update')
  system('apt-get install -y ruby')
end
# Stop and destroy
container.stop
container.destroy
Tools & Ecosystem
Docker
Docker packages applications with dependencies into containers. The Docker Engine manages container lifecycle, networking, and storage. Docker Compose defines multi-container applications using YAML configuration files.
A typical Docker workflow involves writing a Dockerfile to specify the application environment, building an image, and running containers from that image. The layered filesystem architecture enables efficient image storage and distribution.
FROM ruby:3.2-slim
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle config set --local deployment 'true' && \
bundle config set --local without 'development test' && \
bundle install
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
Docker Compose coordinates multiple containers for complex applications. The configuration defines services, networks, volumes, and dependencies.
version: '3.8'
services:
web:
build: .
ports:
- "3000:3000"
environment:
DATABASE_URL: postgresql://postgres:password@db:5432/myapp
REDIS_URL: redis://redis:6379/0
depends_on:
- db
- redis
volumes:
- ./:/app
- bundle:/usr/local/bundle
db:
image: postgres:15
environment:
POSTGRES_PASSWORD: password
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: redis:7-alpine
volumes:
- redis_data:/data
volumes:
postgres_data:
redis_data:
bundle:
Kubernetes
Kubernetes orchestrates containerized applications across clusters. The platform handles deployment, scaling, load balancing, and self-healing. Kubernetes abstractions include Pods, Services, Deployments, StatefulSets, and DaemonSets.
Pods represent the smallest deployable units, containing one or more containers that share network and storage. Services provide stable network endpoints for Pods. Deployments manage replica sets and rolling updates.
Vagrant
Vagrant creates reproducible development environments using virtual machines. The tool supports multiple providers including VirtualBox, VMware, and cloud platforms. Provisioners automate software installation and configuration.
Multi-machine configurations define complex topologies with multiple virtual machines. Vagrant manages the entire environment lifecycle through a single interface.
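A minimal two-machine topology might be defined like this (box name, IPs, and provisioning commands are illustrative):

```ruby
# Vagrantfile: one web VM and one database VM on a shared private network
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  config.vm.define "web" do |web|
    web.vm.network "private_network", ip: "192.168.33.10"
    web.vm.provision "shell", inline: "apt-get update && apt-get install -y nginx"
  end

  config.vm.define "db" do |db|
    db.vm.network "private_network", ip: "192.168.33.11"
    db.vm.provision "shell", inline: "apt-get update && apt-get install -y postgresql"
  end
end
```

Commands such as `vagrant up web` or `vagrant ssh db` then target individual machines while `vagrant up` manages the whole topology.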
Podman
Podman provides a Docker-compatible container engine without requiring a daemon. Containers run as unprivileged processes, improving security. Podman generates Kubernetes YAML from existing containers and supports pod concepts similar to Kubernetes.
LXC/LXD
Linux Containers provide operating system-level virtualization. LXC creates lightweight system containers that share the host kernel but maintain isolated user spaces. LXD adds a modern API and user experience on top of LXC.
System containers run complete Linux distributions with their own init systems. This approach suits workloads requiring traditional server environments without virtual machine overhead.
Integration & Interoperability
Network Integration
Virtual environments require network connectivity to host systems, other virtual environments, and external networks. Bridge networking connects virtual interfaces to host network adapters. Network address translation (NAT) enables outbound connectivity while masquerading virtual machine addresses.
Port forwarding maps host ports to virtual machine or container ports. External clients connect to the host IP and port, and the virtualization layer routes traffic to the internal service.
require 'docker'
# Create container with port mapping
container = Docker::Container.create(
'Image' => 'nginx:alpine',
'ExposedPorts' => {
'80/tcp' => {}
},
'HostConfig' => {
'PortBindings' => {
'80/tcp' => [{ 'HostPort' => '8080' }]
}
}
)
container.start
# Container nginx accessible at localhost:8080
Custom bridge networks isolate container groups and provide DNS resolution. Containers on the same network communicate using container names as hostnames.
require 'docker'
# Create custom network
network = Docker::Network.create('app-network', {
'Driver' => 'bridge',
'IPAM' => {
'Config' => [{ 'Subnet' => '172.20.0.0/16' }]
}
})
# Connect containers to network
web = Docker::Container.create(
'Image' => 'nginx',
'name' => 'web',
'NetworkingConfig' => {
'EndpointsConfig' => {
'app-network' => {}
}
}
)
app = Docker::Container.create(
'Image' => 'ruby:3.2',
'name' => 'app',
'NetworkingConfig' => {
'EndpointsConfig' => {
'app-network' => {}
}
}
)
# Containers can reach each other at 'web' and 'app' hostnames
Storage Integration
Shared storage enables data persistence and communication between host and virtual environments. Volume mounts map host directories into containers or virtual machines. Named volumes provide managed storage independent of host filesystem structure.
Bind mounts directly expose host directories inside virtual environments. The virtualization layer maps filesystem operations to the host path. Changes in the virtual environment appear immediately on the host and vice versa.
require 'docker'
# Create named volume
volume = Docker::Volume.create('app-data')
# Mount volume in container
container = Docker::Container.create(
'Image' => 'postgres:15',
'HostConfig' => {
'Binds' => [
'app-data:/var/lib/postgresql/data',
'/host/configs:/etc/postgresql:ro' # Read-only bind mount
]
}
)
Inter-Container Communication
Container orchestration platforms establish service discovery mechanisms. Kubernetes Services provide stable IP addresses and DNS names for Pod groups. Containers query cluster DNS to resolve service names to IP addresses.
Environment variable injection passes configuration between containers. The orchestrator injects connection details for dependent services as environment variables.
# Kubernetes service manifest
require 'kubeclient'
service = Kubeclient::Resource.new({
metadata: { name: 'database' },
spec: {
selector: { app: 'postgres' },
ports: [
{ name: 'postgres', port: 5432, targetPort: 5432 }
],
type: 'ClusterIP'
}
})
# Dependent containers can connect to 'database:5432'
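The shape of environment variable injection can be sketched with a small helper. The helper name and variable naming scheme are hypothetical; real orchestrators each have their own conventions:

```ruby
# Turn a map of dependent services into the KEY=value strings a container
# runtime accepts, e.g. the 'Env' array of Docker's create API.
def injected_env(dependencies)
  dependencies.flat_map do |service, endpoint|
    prefix = service.upcase.tr('-', '_')
    ["#{prefix}_HOST=#{endpoint[:host]}", "#{prefix}_PORT=#{endpoint[:port]}"]
  end
end

env = injected_env('database' => { host: 'database', port: 5432 },
                   'redis'    => { host: 'redis',    port: 6379 })
# env => ["DATABASE_HOST=database", "DATABASE_PORT=5432",
#         "REDIS_HOST=redis", "REDIS_PORT=6379"]
```

The resulting array could be passed as `'Env' => env` when creating a container, so application code reads its connection details from the environment rather than hardcoding them.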
Cloud Platform Integration
Major cloud providers offer managed virtual machine and container services. Ruby SDKs interact with these platforms for infrastructure provisioning and management.
require 'aws-sdk-ec2'
require 'base64'
# Launch EC2 instance
ec2 = Aws::EC2::Client.new(region: 'us-west-2')
instance = ec2.run_instances({
image_id: 'ami-0c55b159cbfafe1f0',
instance_type: 't3.medium',
min_count: 1,
max_count: 1,
key_name: 'my-key-pair',
security_group_ids: ['sg-12345678'],
  user_data: Base64.encode64(<<~SCRIPT)
    #!/bin/bash
    apt-get update
    apt-get install -y docker.io
    systemctl start docker
  SCRIPT
})
Real-World Applications
Development Environment Standardization
Development teams use virtualization to ensure consistent environments across team members. Containers package application dependencies, eliminating "works on my machine" problems. Developers pull the same base images, run identical database versions, and configure services uniformly.
A typical development workflow defines the application stack in Docker Compose. Developers clone the repository, run a single command, and obtain a complete working environment. Database schemas, cache servers, message queues, and application code run in coordinated containers.
# Development setup script
require 'docker'
class DevEnvironment
def self.setup
# Pull required images
['postgres:15', 'redis:7', 'ruby:3.2'].each do |image|
Docker::Image.create('fromImage' => image)
end
# Start infrastructure services
db = Docker::Container.create(
'Image' => 'postgres:15',
'name' => 'dev-db',
'Env' => ['POSTGRES_PASSWORD=devpass']
)
db.start
redis = Docker::Container.create(
'Image' => 'redis:7',
'name' => 'dev-redis'
)
redis.start
# Wait for services to be ready
sleep 5
# Run database migrations
app = Docker::Container.create(
'Image' => 'ruby:3.2',
'Cmd' => ['bundle', 'exec', 'rake', 'db:setup'],
'HostConfig' => {
'Binds' => ["#{Dir.pwd}:/app"],
'Links' => ['dev-db:postgres', 'dev-redis:redis']
},
'WorkingDir' => '/app'
)
app.start
app.wait
end
end
Continuous Integration Pipelines
CI/CD systems run tests and builds in containers to ensure reproducibility and isolation. Each pipeline execution starts with a clean environment, preventing test pollution from previous runs. Builds run inside containers with pinned toolchain versions, so artifacts are reproducible.
Container registries store build artifacts as images. The CI system builds an application image, runs tests against it, and pushes successful builds to the registry. Deployment systems pull these tested images for production release.
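That build-test-push flow can be sketched as a sequence of docker CLI stages. The registry path and test command are assumptions; each stage must succeed before the next runs:

```ruby
# Build the docker CLI invocations for a simple build-test-push pipeline.
def pipeline_commands(image:, tag:, test_cmd: 'bundle exec rspec')
  ref = "#{image}:#{tag}"
  [
    "docker build -t #{ref} .",            # build in a clean context
    "docker run --rm #{ref} #{test_cmd}",  # run tests inside the built image
    "docker push #{ref}"                   # publish only if tests passed
  ]
end

# A runner would execute each stage, aborting on the first failure:
# pipeline_commands(image: 'registry.example.com/myapp', tag: 'v1.0.0')
#   .each { |cmd| system(cmd) or abort("pipeline failed at: #{cmd}") }
```

Keeping the stages as data makes the pipeline easy to log, dry-run, and extend with extra steps such as image scanning.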
Microservices Deployment
Production microservices architectures deploy services as containers in orchestration platforms. Each service runs in dedicated containers with configured resource limits. The orchestrator distributes containers across cluster nodes, handles failures, and scales services based on load.
Health checks monitor container status. The orchestrator restarts failed containers and removes unhealthy instances from load balancer rotation. Rolling updates deploy new versions gradually, maintaining service availability during deployments.
# Kubernetes deployment with health checks
require 'kubeclient'
deployment = Kubeclient::Resource.new({
metadata: { name: 'payment-service' },
spec: {
replicas: 5,
selector: { matchLabels: { app: 'payment' } },
template: {
metadata: { labels: { app: 'payment' } },
spec: {
containers: [{
name: 'payment',
image: 'registry.example.com/payment:v2.1.0',
ports: [{ containerPort: 8080 }],
livenessProbe: {
httpGet: { path: '/health', port: 8080 },
initialDelaySeconds: 30,
periodSeconds: 10
},
readinessProbe: {
httpGet: { path: '/ready', port: 8080 },
initialDelaySeconds: 10,
periodSeconds: 5
},
resources: {
requests: { cpu: '200m', memory: '256Mi' },
limits: { cpu: '500m', memory: '512Mi' }
}
}]
}
}
}
})
Testing Infrastructure
Automated testing frameworks spin up containers for integration tests. Tests start database containers, message queues, and external service mocks. Each test suite executes against fresh instances, ensuring tests do not interfere with each other.
Test containers use specific versions of dependencies, enabling compatibility testing across multiple versions. A project might test against PostgreSQL 13, 14, and 15 by running the test suite with different database container versions.
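Such a version matrix can be sketched as disposable database containers driven through the docker CLI. Container names, the `PG_VERSION` variable, and the suite command are assumptions; in practice the suite reads the database endpoint from the environment:

```ruby
# Build up/test/down command triples, one isolated database per version.
def matrix_commands(versions, suite: 'bundle exec rspec')
  versions.map do |version|
    name = "test-pg-#{version}"
    {
      up:   "docker run -d --rm --name #{name} -e POSTGRES_PASSWORD=test postgres:#{version}",
      test: "PG_VERSION=#{version} #{suite}",
      down: "docker stop #{name}"
    }
  end
end

stages = matrix_commands(%w[13 14 15])
stages.length  # => 3, one disposable database per version
# A runner would execute stage[:up], stage[:test], stage[:down] in order
```

Because every version gets its own throwaway container, a failure against PostgreSQL 13 cannot contaminate the run against 15.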
Multi-Tenant Systems
Platform-as-a-Service providers use container isolation to separate customer workloads. Each tenant receives dedicated containers with resource quotas. The platform schedules containers across infrastructure, enforces security boundaries, and monitors resource usage for billing.
Container orchestrators assign tenants to isolated namespaces with network policies and resource quotas. Tenants cannot access other tenant containers or exceed allocated resources.
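The per-tenant namespace and quota objects might be built like this (tenant name and limits are illustrative; with kubeclient each hash would be wrapped in Kubeclient::Resource and passed to create_namespace / create_resource_quota):

```ruby
# Plain-hash manifests a platform might create for each tenant: a dedicated
# namespace plus a ResourceQuota capping what that namespace can consume.
def tenant_manifests(tenant, cpu:, memory:)
  namespace = {
    metadata: { name: tenant, labels: { tenant: tenant } }
  }
  quota = {
    metadata: { name: "#{tenant}-quota", namespace: tenant },
    spec: {
      hard: {
        'limits.cpu'    => cpu,
        'limits.memory' => memory,
        'pods'          => '50'
      }
    }
  }
  [namespace, quota]
end

ns, quota = tenant_manifests('acme', cpu: '4', memory: '8Gi')
# quota[:spec][:hard]['limits.cpu'] # => "4"
```

The kube-apiserver then rejects any workload that would push the namespace past its quota, giving the platform a hard enforcement point for billing tiers.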
Reference
Virtualization Types Comparison
| Type | Isolation Level | Performance | Startup Time | Resource Overhead | Use Case |
|---|---|---|---|---|---|
| Full Virtualization | Complete OS isolation | Good with HW assist | Minutes | High | Running different OS types |
| Paravirtualization | Complete OS isolation | Excellent | Minutes | Medium | High-performance VM workloads |
| Containers | Process isolation | Native | Seconds | Low | Microservices, CI/CD |
| Application Virtualization | Application isolation | Native | Instant | Minimal | Portable applications |
Resource Limit Parameters
| Parameter | Description | Example Value | Effect |
|---|---|---|---|
| Memory | RAM allocation limit | 512Mi, 2Gi | Container killed if exceeded |
| CPU Shares | Relative CPU weight | 512, 1024 | Proportional CPU time when contended |
| CPU Quota | Absolute CPU limit | 50000 (50% core) | Hard cap on CPU usage |
| Block IO Weight | Disk I/O priority | 500 | Relative disk bandwidth |
| PIDs Limit | Maximum processes | 100 | Limits fork bombs |
Docker Container States
| State | Description | Can Transition To |
|---|---|---|
| Created | Container exists but not started | Running, Deleted |
| Running | Container process executing | Paused, Stopped, Deleted |
| Paused | Container process frozen | Running |
| Stopped | Container exited | Running, Deleted |
| Deleted | Container removed | None |
Kubernetes Resource Types
| Resource | Scope | Purpose | Example |
|---|---|---|---|
| Pod | Namespaced | Group of containers | Single app instance |
| Deployment | Namespaced | Manages replica sets | Stateless applications |
| StatefulSet | Namespaced | Manages stateful pods | Databases, queues |
| Service | Namespaced | Network endpoint | Load balancing |
| ConfigMap | Namespaced | Configuration data | App settings |
| Secret | Namespaced | Sensitive data | Credentials, keys |
| PersistentVolume | Cluster | Storage resource | Disk volumes |
| Namespace | Cluster | Resource grouping | Tenant isolation |
Network Driver Comparison
| Driver | Scope | Use Case | Performance | Complexity |
|---|---|---|---|---|
| Bridge | Single host | Container isolation | High | Low |
| Host | Single host | Maximum performance | Highest | Minimal |
| Overlay | Multi-host | Swarm/K8s networking | Medium | High |
| Macvlan | Single host | Direct network access | High | Medium |
| None | N/A | No networking | N/A | Minimal |
Common Docker Commands
| Command | Purpose | Example |
|---|---|---|
| docker run | Create and start container | docker run -d nginx |
| docker exec | Execute in running container | docker exec -it web bash |
| docker logs | View container output | docker logs -f web |
| docker inspect | Detailed container info | docker inspect web |
| docker stats | Resource usage | docker stats web |
| docker network | Manage networks | docker network create app-net |
| docker volume | Manage volumes | docker volume create data |
Vagrant Box Operations
| Command | Purpose | Effect |
|---|---|---|
| vagrant up | Start environment | Creates and provisions VMs |
| vagrant halt | Stop VMs | Graceful shutdown |
| vagrant destroy | Remove VMs | Deletes all VM data |
| vagrant reload | Restart VMs | Applies config changes |
| vagrant provision | Run provisioners | Updates VM software |
| vagrant snapshot save | Create snapshot | Saves current state |
| vagrant snapshot restore | Restore snapshot | Returns to saved state |
Container Security Practices
| Practice | Implementation | Benefit |
|---|---|---|
| Non-root user | USER directive in Dockerfile | Limits privilege escalation |
| Read-only filesystem | ReadOnlyRootFilesystem: true | Prevents tampering |
| Resource limits | Enforced CPU/memory quotas | Prevents resource exhaustion |
| Network policies | Kubernetes NetworkPolicy | Restricts traffic |
| Secret management | External secret stores | Protects credentials |
| Image scanning | Automated vulnerability detection | Identifies known issues |
| Minimal base images | Alpine, distroless images | Reduces attack surface |