Overview
Container networking defines how isolated container processes communicate with each other, the host system, and external networks. Unlike traditional networking where applications run directly on host network stacks, containers operate in isolated network namespaces that require explicit configuration for connectivity. This abstraction creates a virtual network layer that containers use for communication while maintaining process isolation.
The Container Network Interface (CNI) specification standardizes how container runtimes configure network interfaces. Kubernetes, Podman, and other platforms use CNI plugins to provide consistent networking behavior across environments, while Docker implements the same concepts through its own libnetwork (CNM) model. In every case the runtime creates network namespaces, virtual ethernet devices (veth pairs), and routing rules to connect containers.
Container networking solves several infrastructure challenges. Applications can run with identical network configurations across development, staging, and production environments. Services scale horizontally by adding container instances behind load balancers without manual network reconfiguration. Network isolation prevents unauthorized container-to-container communication while allowing controlled service discovery.
require 'docker'
# Connect to Docker daemon
Docker.url = 'unix:///var/run/docker.sock'
# Inspect default bridge network
network = Docker::Network.get('bridge')
puts network.info['IPAM']['Config'].first['Subnet']
# => "172.17.0.0/16"
The example shows Docker's default bridge network configuration. Every container attached to this network receives an IP address from the 172.17.0.0/16 subnet. The Docker daemon's IPAM driver assigns these addresses from the pool; container name resolution is handled by Docker's embedded DNS server, and only on user-defined networks.
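Subnet membership and pool sizing can be checked with Ruby's standard IPAddr class; this small sketch uses the default bridge subnet from the example above.

```ruby
require 'ipaddr'

bridge_subnet = IPAddr.new('172.17.0.0/16')

# Does an address fall inside the bridge subnet?
puts bridge_subnet.include?('172.17.0.5')   # => true
puts bridge_subnet.include?('192.168.1.10') # => false

# A /16 leaves 16 host bits: 2**16 - 2 usable addresses
# (network and broadcast addresses excluded)
usable = 2**16 - 2
puts usable # => 65534
```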
Key Principles
Container networking operates through network namespaces that isolate network stacks between containers and the host. Each namespace contains its own network interfaces, routing tables, firewall rules, and socket connections. When a container starts, the runtime creates a new namespace and configures virtual network devices to connect it to other network resources.
The veth (virtual ethernet) pair forms the basic building block of container networks. One end of the pair exists inside the container's namespace appearing as eth0, while the other end connects to a bridge device on the host. Packets sent to the container's eth0 interface traverse the veth pair and arrive at the bridge, where the kernel routes them to their destination.
Network bridges act as virtual switches connecting multiple containers. The bridge maintains a forwarding database that maps container MAC addresses to veth interfaces. When a container sends a packet, the bridge examines the destination MAC address and forwards it to the appropriate veth pair. If the destination container exists on the same bridge, packets flow directly between containers without leaving the host.
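The forwarding database described above can be modeled in a few lines of plain Ruby. The toy learning switch below (class and method names invented for illustration) learns source MAC addresses as frames arrive and floods frames whose destination it has not yet seen, which is exactly the behavior of a Linux bridge's FDB.

```ruby
# Toy model of a bridge forwarding database: maps MAC address -> port.
class LearningBridge
  def initialize
    @fdb = {}
  end

  # Returns the output port for a frame, learning the source as it goes.
  # :flood means the destination is unknown and the frame goes to all ports.
  def forward(src_mac:, dst_mac:, in_port:)
    @fdb[src_mac] = in_port     # learn which port the sender lives behind
    @fdb.fetch(dst_mac, :flood) # forward if known, else flood
  end
end

bridge = LearningBridge.new
# First frame from A: B is unknown, so the bridge floods
puts bridge.forward(src_mac: 'aa:aa', dst_mac: 'bb:bb', in_port: 1) # => flood
# B replies: the bridge has already learned A's port
puts bridge.forward(src_mac: 'bb:bb', dst_mac: 'aa:aa', in_port: 2) # => 1
# Now A -> B forwards directly without flooding
puts bridge.forward(src_mac: 'aa:aa', dst_mac: 'bb:bb', in_port: 1) # => 2
```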
DNS resolution in container networks uses either embedded DNS servers or external resolvers. Docker runs an internal DNS server at 127.0.0.11 inside each container, resolving container names to IP addresses within user-defined networks. Kubernetes provides cluster DNS through CoreDNS, allowing pods to resolve service names using standard DNS queries.
Port mapping enables external access to containerized services. The host system binds a port and forwards incoming connections to a container's internal port. Network Address Translation (NAT) rules rewrite packet headers, changing the destination IP and port to match the container's address. Return packets undergo reverse translation before leaving the host.
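The NAT mappings a container ends up with are visible in its NetworkSettings. The helper below is a sketch that flattens the 'Ports' hash shape returned by the Docker Engine API into readable strings; the sample data mimics that shape rather than coming from a live daemon.

```ruby
# Flatten Docker's NetworkSettings 'Ports' hash into "host -> container" strings.
def describe_port_mappings(ports)
  ports.flat_map do |container_port, bindings|
    # A nil value means the port is exposed but not published to the host
    next ["#{container_port} (not published)"] if bindings.nil?

    bindings.map do |b|
      host_ip = b['HostIp'].to_s.empty? ? '0.0.0.0' : b['HostIp']
      "#{host_ip}:#{b['HostPort']} -> #{container_port}"
    end
  end
end

# Sample data in the shape the Engine API returns
ports = {
  '80/tcp'  => [{ 'HostIp' => '', 'HostPort' => '8080' }],
  '443/tcp' => nil
}
puts describe_port_mappings(ports)
# 0.0.0.0:8080 -> 80/tcp
# 443/tcp (not published)
```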
Network policies define traffic rules between containers. These policies specify which containers can communicate, on which ports, and using which protocols. Container orchestrators enforce policies through iptables rules or eBPF programs that filter packets at the kernel level.
Service discovery mechanisms allow containers to locate and connect to services without hardcoded IP addresses. Container platforms maintain service registries that map service names to container endpoints. Applications query these registries to discover available service instances.
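The registry pattern can be sketched in plain Ruby. The class below is hypothetical, for illustration only: it maps service names to endpoints and hands them out round-robin, which is roughly what platform registries do behind a DNS or API lookup.

```ruby
# Minimal in-memory service registry with round-robin endpoint selection.
class ServiceRegistry
  def initialize
    @services = Hash.new { |h, k| h[k] = [] }
    @cursors = Hash.new(0)
  end

  def register(name, endpoint)
    @services[name] << endpoint
  end

  # Each lookup rotates through the registered instances.
  def lookup(name)
    endpoints = @services[name]
    raise KeyError, "no instances for #{name}" if endpoints.empty?

    endpoint = endpoints[@cursors[name] % endpoints.size]
    @cursors[name] += 1
    endpoint
  end
end

registry = ServiceRegistry.new
registry.register('api', '10.0.1.10:8080')
registry.register('api', '10.0.1.11:8080')
puts registry.lookup('api') # => 10.0.1.10:8080
puts registry.lookup('api') # => 10.0.1.11:8080
puts registry.lookup('api') # => 10.0.1.10:8080
```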
Implementation Approaches
Container networking supports multiple modes, each balancing isolation, performance, and complexity differently. The bridge network mode creates an isolated network segment for containers with NAT-based external connectivity. Host network mode bypasses network namespaces entirely, placing containers directly on the host's network stack. Overlay networks enable communication across multiple host machines using encapsulation protocols.
Bridge networking creates a private network segment where containers receive IP addresses from a dedicated subnet. The container runtime configures iptables rules to masquerade outbound traffic, translating container IPs to the host IP address. Inbound traffic requires explicit port mappings that forward host ports to container endpoints. This mode provides strong isolation at the cost of NAT overhead and complex port management.
Host networking eliminates network namespacing overhead by running containers on the host's network interface. Containers bind directly to host ports and see all network interfaces visible to the host. This mode offers maximum performance for network-intensive applications but sacrifices isolation and creates port conflicts when multiple containers need the same port.
Overlay networks span multiple hosts, creating a flat network where containers communicate regardless of physical location. VXLAN (Virtual Extensible LAN) encapsulates layer 2 ethernet frames inside UDP packets, allowing them to traverse layer 3 networks. Each host runs a VXLAN Tunnel Endpoint (VTEP) that handles encapsulation and decapsulation. Distributed routing tables map container MAC addresses to host IP addresses.
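Encapsulation shrinks the payload each inner frame can carry. The arithmetic below works out the usual 1450-byte MTU for VXLAN over a standard 1500-byte ethernet network, using the standard IPv4, UDP, VXLAN, and ethernet header sizes.

```ruby
# VXLAN encapsulation overhead on IPv4:
# outer IPv4 header (20) + UDP header (8) + VXLAN header (8)
# + inner ethernet header (14) = 50 bytes
OUTER_IPV4 = 20
UDP_HDR    = 8
VXLAN_HDR  = 8
INNER_ETH  = 14
overhead = OUTER_IPV4 + UDP_HDR + VXLAN_HDR + INNER_ETH

physical_mtu = 1500
overlay_mtu = physical_mtu - overhead
puts overhead    # => 50
puts overlay_mtu # => 1450
```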
Container Network Interface (CNI) plugins extend networking functionality beyond basic modes. Calico implements network policies using BGP routing and iptables rules. Flannel provides overlay networking with minimal configuration. Cilium uses eBPF for high-performance packet filtering and observability. Weave creates a mesh network where each host maintains connections to other hosts.
MacVLAN assigns distinct MAC addresses to containers, making them appear as physical devices on the network. This mode typically requires promiscuous mode on the host's parent interface and consumes additional MAC and IP addresses on the physical segment, but it provides the most realistic network topology for containers that need direct layer 2 access.
None network mode creates containers without network connectivity; it suits batch processing, file manipulation, and other workloads that never need the network.
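The mode choice ultimately boils down to a single HostConfig field in the Engine API. The helper below is hypothetical, for illustration: it builds the 'NetworkMode' value for each mode discussed above.

```ruby
# Build the HostConfig fragment selecting a container's network mode.
# 'target:' names a network (bridge mode) or a container (container mode).
def network_host_config(mode, target: nil)
  value =
    case mode
    when :bridge    then target || 'bridge'    # default or user-defined bridge
    when :host      then 'host'                # share the host network stack
    when :none      then 'none'                # no connectivity at all
    when :container then "container:#{target}" # join another container's namespace
    else raise ArgumentError, "unknown mode #{mode}"
    end
  { 'NetworkMode' => value }
end

network_host_config(:bridge, target: 'app_network')
network_host_config(:container, target: 'web')
```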
require 'docker'

# Create custom bridge network
network = Docker::Network.create('app_network',
  'Driver' => 'bridge',
  'IPAM' => {
    'Config' => [{
      'Subnet' => '172.25.0.0/16',
      'Gateway' => '172.25.0.1'
    }]
  }
)

# Create container on custom network
container = Docker::Container.create(
  'Image' => 'nginx:alpine',
  'HostConfig' => {
    'NetworkMode' => 'app_network'
  }
)
container.start
Ruby Implementation
Ruby applications interact with container networking through Docker SDK libraries and Kubernetes clients. The docker-api gem provides bindings to the Docker Engine API, enabling network creation, inspection, and container attachment from Ruby code. Kubernetes Ruby clients interact with cluster networking through the Kubernetes API.
require 'docker'

# List all networks
Docker::Network.all.each do |network|
  info = network.info
  puts "#{info['Name']}: #{info['Driver']} (#{info['Scope']})"

  # Show connected containers
  info['Containers']&.each do |id, container|
    puts "  - #{container['Name']}: #{container['IPv4Address']}"
  end
end
The docker-api gem wraps Docker's REST API, translating Ruby method calls into HTTP requests. Network objects expose methods for inspection, connection, and deletion. Container objects connect to networks dynamically, receiving IP addresses from the network's address pool.
# Create network with a custom host bridge device name
network = Docker::Network.create('service_network',
  'Driver' => 'bridge',
  'Options' => {
    'com.docker.network.bridge.name' => 'svc_bridge'
  },
  'IPAM' => {
    'Driver' => 'default',
    'Config' => [{
      'Subnet' => '10.0.1.0/24',
      'Gateway' => '10.0.1.1'
    }]
  }
)

# Connect running container to network
container = Docker::Container.get('web_app')
network.connect(container.id, {
  'IPAMConfig' => {
    'IPv4Address' => '10.0.1.10'
  },
  'Aliases' => ['webapp', 'api']
})
Network aliases enable service discovery within Docker networks. Containers resolve aliases to IP addresses using Docker's embedded DNS server. Multiple containers share the same alias for load balancing scenarios.
require 'kubeclient'

# Connect to the Kubernetes networking API group
# (NetworkPolicy lives in networking.k8s.io/v1, not the core 'v1' API)
config = Kubeclient::Config.read('/path/to/kubeconfig')
context = config.context
client = Kubeclient::Client.new(
  "#{context.api_endpoint}/apis/networking.k8s.io",
  'v1',
  ssl_options: context.ssl_options,
  auth_options: context.auth_options
)

# Create network policy
policy = Kubeclient::Resource.new({
  metadata: {
    name: 'api-isolation',
    namespace: 'production'
  },
  spec: {
    podSelector: {
      matchLabels: { role: 'api' }
    },
    policyTypes: ['Ingress', 'Egress'],
    ingress: [{
      from: [{
        podSelector: {
          matchLabels: { role: 'frontend' }
        }
      }],
      ports: [{
        protocol: 'TCP',
        port: 8080
      }]
    }]
  }
})
client.create_network_policy(policy)
Kubernetes network policies define allowed traffic patterns between pods. The policy restricts ingress to pods labeled 'api' from pods labeled 'frontend' on TCP port 8080. The Ruby client serializes the policy specification and submits it to the Kubernetes API server.
Ruby applications running inside containers require network configuration awareness. Reading environment variables provides service endpoint information in Kubernetes environments:
# Service discovery in Kubernetes
class ServiceDiscovery
  def initialize
    @namespace = ENV['POD_NAMESPACE'] || 'default'
  end

  def postgres_host
    # Kubernetes DNS format: <service>.<namespace>.svc.cluster.local
    "postgres.#{@namespace}.svc.cluster.local"
  end

  def redis_endpoint
    host = ENV['REDIS_SERVICE_HOST']
    port = ENV['REDIS_SERVICE_PORT']
    "redis://#{host}:#{port}"
  end
end

discovery = ServiceDiscovery.new
db_config = {
  host: discovery.postgres_host,
  port: 5432,
  database: 'app_production'
}
Container health checks verify network connectivity:
require 'socket'
require 'timeout'

class HealthCheck
  def self.port_open?(host, port, timeout: 2)
    Timeout.timeout(timeout) do
      TCPSocket.new(host, port).close
      true
    end
  rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Errno::ETIMEDOUT,
         SocketError, Timeout::Error
    false
  end

  def self.check_dependencies
    services = {
      postgres: ['db', 5432],
      redis: ['cache', 6379],
      api: ['backend', 8080]
    }

    services.each do |name, (host, port)|
      unless port_open?(host, port)
        raise "Cannot connect to #{name} at #{host}:#{port}"
      end
    end
  end
end

HealthCheck.check_dependencies
Practical Examples
Multi-container applications require coordinated network configuration. A typical web application stack consists of a frontend proxy, application servers, cache layer, and database. Each component connects to specific networks based on communication requirements.
require 'docker'

# Create isolated networks
frontend_net = Docker::Network.create('frontend',
  'Driver' => 'bridge',
  'Internal' => false # Allows external connectivity
)
backend_net = Docker::Network.create('backend',
  'Driver' => 'bridge',
  'Internal' => true # No external access
)

# Database container - backend network only
# ('name' => 'db' makes it resolvable as 'db' via the network's embedded DNS)
db_container = Docker::Container.create(
  'name' => 'db',
  'Image' => 'postgres:15',
  'Env' => [
    'POSTGRES_PASSWORD=secret',
    'POSTGRES_DB=app_production'
  ],
  'HostConfig' => {
    'NetworkMode' => 'backend'
  }
)
db_container.start

# Application server - both networks
# (REDIS_URL assumes a Redis container named 'cache' on the backend network)
app_container = Docker::Container.create(
  'Image' => 'ruby:3.2',
  'Env' => [
    'DATABASE_URL=postgresql://postgres:secret@db:5432/app_production',
    'REDIS_URL=redis://cache:6379'
  ],
  'HostConfig' => {
    'NetworkMode' => 'backend'
  }
)
app_container.start

# Connect app to frontend network
frontend_net.connect(app_container.id, {
  'Aliases' => ['app']
})

# Nginx proxy - frontend network with published port
proxy_container = Docker::Container.create(
  'Image' => 'nginx:alpine',
  'HostConfig' => {
    'NetworkMode' => 'frontend',
    'PortBindings' => {
      '80/tcp' => [{ 'HostPort' => '8080' }]
    }
  }
)
proxy_container.start
This configuration isolates the database on the backend network, preventing direct external access. The application server bridges both networks, communicating with the database internally while serving requests from the proxy. The proxy publishes container port 80 on host port 8080.
Service mesh networking adds observability and traffic management:
require 'docker'

# Create mesh network
mesh_net = Docker::Network.create('service_mesh',
  'Driver' => 'overlay',
  'Attachable' => true,
  'Labels' => {
    'mesh.enabled' => 'true'
  }
)

# Deploy application with sidecar proxy
%w[service-a service-b service-c].each do |service|
  # Main application container
  app = Docker::Container.create(
    'Image' => "myapp/#{service}:latest",
    'HostConfig' => {
      'NetworkMode' => 'service_mesh'
    },
    'Labels' => {
      'mesh.service' => service,
      'mesh.version' => 'v1'
    }
  )
  app.start

  # Envoy sidecar for traffic management
  envoy = Docker::Container.create(
    'Image' => 'envoyproxy/envoy:v1.27',
    'HostConfig' => {
      'NetworkMode' => "container:#{app.id}"
    }
  )
  envoy.start
end
The sidecar pattern shares the network namespace between the application and proxy. Envoy intercepts all network traffic, implementing circuit breaking, retries, and observability without modifying application code.
Cross-host networking with overlay networks:
# Swarm mode and services are not wrapped by the docker-api gem;
# the calls below go through the raw Engine API endpoints instead
# (POST /swarm/init and POST /services/create)
require 'docker'
require 'json'

# Initialize Swarm mode on manager node
Docker.connection.post('/swarm/init', {}, body: {
  'AdvertiseAddr' => '192.168.1.10',
  'ListenAddr' => '0.0.0.0:2377'
}.to_json)

# Create overlay network
overlay_net = Docker::Network.create('distributed',
  'Driver' => 'overlay',
  'Scope' => 'swarm',
  'IPAM' => {
    'Config' => [{
      'Subnet' => '10.10.0.0/16'
    }]
  }
)

# Deploy service across multiple hosts
Docker.connection.post('/services/create', {}, body: {
  'Name' => 'web',
  'TaskTemplate' => {
    'ContainerSpec' => {
      'Image' => 'nginx:alpine'
    },
    'Networks' => [{ 'Target' => overlay_net.id }]
  },
  'Mode' => {
    'Replicated' => {
      'Replicas' => 3
    }
  }
}.to_json)
Docker Swarm distributes service tasks across worker nodes. The overlay network enables container-to-container communication regardless of physical host location. VXLAN encapsulation transparently routes packets between hosts.
Security Implications
Container networks introduce security boundaries that require careful configuration. Network isolation prevents unauthorized container communication, but misconfigurations expose services to attack. Default bridge networks allow unrestricted container-to-container traffic, creating lateral movement opportunities for attackers.
User-defined bridge networks provide automatic DNS resolution and network isolation. Containers on different user-defined networks cannot communicate unless explicitly connected to both networks. This isolation contains security breaches within network boundaries.
require 'docker'

# Create isolated tenant networks
tenant_a_net = Docker::Network.create('tenant_a',
  'Driver' => 'bridge',
  'Internal' => true,
  'Labels' => {
    'security.isolation' => 'tenant',
    'tenant.id' => 'a123'
  }
)
tenant_b_net = Docker::Network.create('tenant_b',
  'Driver' => 'bridge',
  'Internal' => true,
  'Labels' => {
    'security.isolation' => 'tenant',
    'tenant.id' => 'b456'
  }
)

# Containers cannot communicate across tenant networks
tenant_a_container = Docker::Container.create(
  'Image' => 'app:latest',
  'HostConfig' => {
    'NetworkMode' => 'tenant_a'
  }
)
Network policies enforce traffic rules at the packet level. Kubernetes NetworkPolicy resources define allowed ingress and egress connections based on pod labels, namespaces, and IP ranges.
# Deny all ingress traffic by default
deny_all_policy = Kubeclient::Resource.new({
  metadata: {
    name: 'default-deny-ingress',
    namespace: 'production'
  },
  spec: {
    podSelector: {}, # Applies to all pods
    policyTypes: ['Ingress']
  }
})

# Allow specific traffic patterns
allow_frontend_policy = Kubeclient::Resource.new({
  metadata: {
    name: 'allow-frontend',
    namespace: 'production'
  },
  spec: {
    podSelector: {
      matchLabels: { role: 'api' }
    },
    policyTypes: ['Ingress'],
    ingress: [{
      from: [
        {
          podSelector: {
            matchLabels: { role: 'frontend' }
          }
        },
        {
          namespaceSelector: {
            matchLabels: { name: 'staging' }
          }
        }
      ],
      ports: [{
        protocol: 'TCP',
        port: 8080
      }]
    }]
  }
})
Port exposure creates attack surface. Publishing container ports to the host binds them to all network interfaces by default, exposing services to external networks. Restrict bindings to specific interfaces or localhost:
# Insecure - binds to all interfaces
insecure_container = Docker::Container.create(
  'Image' => 'app:latest',
  'HostConfig' => {
    'PortBindings' => {
      '8080/tcp' => [{ 'HostPort' => '8080' }]
    }
  }
)

# Secure - binds to localhost only
secure_container = Docker::Container.create(
  'Image' => 'app:latest',
  'HostConfig' => {
    'PortBindings' => {
      '8080/tcp' => [{
        'HostIp' => '127.0.0.1',
        'HostPort' => '8080'
      }]
    }
  }
)
TLS encryption protects data in transit between containers. Service mesh proxies terminate TLS connections and enforce mutual authentication:
# Configure Envoy for mTLS
envoy_config = {
  'transport_socket' => {
    'name' => 'envoy.transport_sockets.tls',
    'typed_config' => {
      '@type' => 'type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext',
      'common_tls_context' => {
        'tls_certificates' => [{
          'certificate_chain' => { 'filename' => '/certs/server.pem' },
          'private_key' => { 'filename' => '/certs/server-key.pem' }
        }],
        'validation_context' => {
          'trusted_ca' => { 'filename' => '/certs/ca.pem' }
        }
      },
      'require_client_certificate' => true
    }
  }
}
Container escape vulnerabilities exploit kernel bugs to break out of network namespaces. Run containers with restricted capabilities and AppArmor/SELinux profiles:
container = Docker::Container.create(
  'Image' => 'app:latest',
  'HostConfig' => {
    'CapDrop' => ['ALL'],
    'CapAdd' => ['NET_BIND_SERVICE'],
    'SecurityOpt' => ['no-new-privileges'],
    'ReadonlyRootfs' => true
  }
)
Tools & Ecosystem
Docker provides the most common container networking implementation. The Docker daemon manages networks through the libnetwork library, implementing bridge, host, overlay, macvlan, and none drivers. The docker network command-line interface creates, inspects, and manages networks.
Kubernetes networking operates through CNI plugins. Each cluster chooses a CNI implementation that provides pod networking, network policies, and cross-node communication. Common CNI plugins include Calico, Flannel, Weave, and Cilium.
Calico uses BGP routing to distribute routes between cluster nodes. Each node runs a BGP daemon that advertises pod IP addresses to other nodes. Network policies translate to iptables rules that filter traffic at layer 3 and layer 4. Calico supports both overlay and non-overlay modes, with non-overlay providing better performance by avoiding encapsulation overhead.
Flannel creates a simple overlay network using VXLAN or host-gw backend. The VXLAN backend encapsulates pod traffic in UDP packets for cross-node communication. The host-gw backend configures static routes on each node, offering better performance but requiring layer 2 connectivity between nodes.
Cilium leverages eBPF (extended Berkeley Packet Filter) for high-performance networking and security. eBPF programs run in the kernel, processing packets without context switching to userspace. Cilium implements network policies, load balancing, and observability through eBPF, achieving lower latency and higher throughput than iptables-based solutions.
require 'json'

# Inspect CNI plugins in Kubernetes
# Read CNI configuration from host
cni_config_dir = '/etc/cni/net.d'
cni_configs = Dir.glob("#{cni_config_dir}/*.conflist").map do |file|
  JSON.parse(File.read(file))
end

cni_configs.each do |config|
  puts "CNI Version: #{config['cniVersion']}"
  puts "Plugin Name: #{config['name']}"
  config['plugins'].each do |plugin|
    puts "  - #{plugin['type']}: #{plugin.inspect}"
  end
end
Service meshes add application-level networking features. Istio, Linkerd, and Consul Connect inject sidecar proxies that intercept container traffic, implementing advanced routing, observability, and security features.
Ruby gems for container networking include:
- docker-api: Docker Engine API client for network management
- kubeclient: Kubernetes API client for network policy configuration
- net-ssh: SSH tunneling for secure container access
- faraday: HTTP client for service-to-service communication
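For straightforward service-to-service calls the standard library alone is enough. The sketch below builds a URL from a service name (which the container network's DNS resolves) and issues a GET with short timeouts; the 'api' host and '/health' path are placeholders that assume a matching service exists on the network.

```ruby
require 'net/http'
require 'uri'

# Build a URI for a service reachable by its network alias or DNS name.
def service_uri(service, port, path = '/')
  URI::HTTP.build(host: service, port: port, path: path)
end

# Call a service with timeouts so a dead endpoint fails fast
# instead of hanging for the kernel's default TCP timeout.
def fetch_service(uri, open_timeout: 2, read_timeout: 5)
  Net::HTTP.start(uri.host, uri.port,
                  open_timeout: open_timeout,
                  read_timeout: read_timeout) do |http|
    http.get(uri.path)
  end
end

uri = service_uri('api', 8080, '/health')
puts uri # => http://api:8080/health
# fetch_service(uri) only succeeds inside a network where 'api' resolves
```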
Network debugging tools:
require 'docker'

# Execute network commands in container
def debug_network(container_id)
  container = Docker::Container.get(container_id)

  # Show network interfaces
  interfaces = container.exec(['ip', 'addr', 'show'])
  puts "Interfaces:\n#{interfaces[0].join}"

  # Show routing table
  routes = container.exec(['ip', 'route', 'show'])
  puts "Routes:\n#{routes[0].join}"

  # Test DNS resolution
  dns = container.exec(['nslookup', 'google.com'])
  puts "DNS:\n#{dns[0].join}"

  # Test connectivity
  ping = container.exec(['ping', '-c', '3', '8.8.8.8'])
  puts "Connectivity:\n#{ping[0].join}"
end
tcpdump and Wireshark capture container network traffic for analysis:
# Capture packets to and from a container's IP address
def capture_traffic(container_id, duration: 10)
  container = Docker::Container.get(container_id)

  # Look up the container's IP so the capture can be filtered to it
  container_ip = container.json['NetworkSettings']['IPAddress']

  # Capture on all host interfaces, filtered by the container's address
  command = [
    'timeout', duration.to_s,
    'tcpdump', '-i', 'any',
    '-w', '/tmp/capture.pcap',
    "host #{container_ip}"
  ]
  system(*command)
  puts "Captured traffic saved to /tmp/capture.pcap"
end
Common Pitfalls
Default bridge network limitations cause DNS resolution failures. Docker's default bridge network does not provide automatic DNS resolution between containers. Containers must use IP addresses or create user-defined networks for name-based discovery.
# Fails - default bridge has no DNS
container_a = Docker::Container.create(
  'Image' => 'alpine',
  'Cmd' => ['ping', 'container_b'],
  'HostConfig' => {
    'NetworkMode' => 'bridge'
  }
)

# Works - user-defined network provides DNS
custom_network = Docker::Network.create('app_net', 'Driver' => 'bridge')

# docker-api takes the container name as the lowercase 'name' query parameter
container_a = Docker::Container.create(
  'name' => 'container_a',
  'Image' => 'alpine',
  'Cmd' => ['ping', 'container_b'],
  'HostConfig' => {
    'NetworkMode' => 'app_net'
  }
)
container_b = Docker::Container.create(
  'name' => 'container_b',
  'Image' => 'alpine',
  'Cmd' => ['sleep', '3600'],
  'HostConfig' => {
    'NetworkMode' => 'app_net'
  }
)
Port binding conflicts occur when multiple containers attempt to publish the same host port. Docker accepts the configuration but fails at runtime:
# First container succeeds
container_1 = Docker::Container.create(
  'Image' => 'nginx',
  'HostConfig' => {
    'PortBindings' => {
      '80/tcp' => [{ 'HostPort' => '8080' }]
    }
  }
)
container_1.start

# Second container fails with port already allocated
container_2 = Docker::Container.create(
  'Image' => 'nginx',
  'HostConfig' => {
    'PortBindings' => {
      '80/tcp' => [{ 'HostPort' => '8080' }]
    }
  }
)
begin
  container_2.start
rescue Docker::Error::DockerError => e
  puts "Error: #{e.message}"
  # => "port is already allocated"
end
Network mode dependencies break when containers start in the wrong order. Containers using container: network mode must start after their target container:
# Wrong order - fails
sidecar = Docker::Container.create(
  'Image' => 'envoy',
  'HostConfig' => {
    'NetworkMode' => 'container:app'
  }
)
sidecar.start # Fails - 'app' container doesn't exist
app = Docker::Container.create('name' => 'app', 'Image' => 'myapp')
app.start

# Correct order
app = Docker::Container.create('name' => 'app', 'Image' => 'myapp')
app.start
sidecar = Docker::Container.create(
  'Image' => 'envoy',
  'HostConfig' => {
    'NetworkMode' => 'container:app'
  }
)
sidecar.start # Succeeds
MTU mismatches cause packet fragmentation and connection failures. Container networks default to 1500-byte MTU, but overlay networks require lower MTU for encapsulation headers:
# Create overlay network with correct MTU
# (1500 minus 50 bytes of VXLAN encapsulation overhead)
overlay_net = Docker::Network.create('overlay_net',
  'Driver' => 'overlay',
  'Options' => {
    'com.docker.network.driver.mtu' => '1450'
  }
)
IPv6 configuration requires explicit enablement. Docker disables IPv6 by default, causing connection failures for IPv6-only services:
# Enable IPv6 on network
ipv6_net = Docker::Network.create('ipv6_network',
  'Driver' => 'bridge',
  'EnableIPv6' => true,
  'IPAM' => {
    'Config' => [
      {
        'Subnet' => '172.25.0.0/16',
        'Gateway' => '172.25.0.1'
      },
      {
        'Subnet' => '2001:db8:1::/64',
        'Gateway' => '2001:db8:1::1'
      }
    ]
  }
)
Network cleanup failures leave orphaned resources. Removing containers does not automatically remove their networks if other containers remain connected:
def cleanup_networks
  Docker::Network.all.each do |network|
    next if %w[bridge host none].include?(network.info['Name'])

    begin
      # Disconnect all containers first
      network.info['Containers']&.each do |id, _|
        network.disconnect(id, force: true)
      end
      # Remove network
      network.delete
    rescue Docker::Error::ConflictError => e
      puts "Cannot remove #{network.info['Name']}: #{e.message}"
    end
  end
end
Reference
Network Drivers
| Driver | Scope | Use Case | Isolation |
|---|---|---|---|
| bridge | local | Single host containers | Namespace isolation |
| host | local | Performance-critical apps | No isolation |
| overlay | swarm | Multi-host services | VXLAN encapsulation |
| macvlan | local | Legacy apps needing MAC addresses | L2 isolation |
| none | local | Offline processing | Complete isolation |
| container | local | Shared network namespace | Shared with target |
Docker Network Commands
| Command | Description | Example |
|---|---|---|
| network create | Create user-defined network | docker network create app_net |
| network ls | List networks | docker network ls |
| network inspect | Show network details | docker network inspect bridge |
| network connect | Attach container to network | docker network connect net1 container1 |
| network disconnect | Detach container from network | docker network disconnect net1 container1 |
| network rm | Remove network | docker network rm app_net |
| network prune | Remove unused networks | docker network prune |
Port Binding Syntax
| Format | Description | Security |
|---|---|---|
| 8080:80 | Map host 8080 to container 80 | Binds all interfaces |
| 127.0.0.1:8080:80 | Bind localhost only | Local access only |
| 8080:80/tcp | Explicit TCP protocol | TCP only |
| 8080:80/udp | Explicit UDP protocol | UDP only |
| 8080-8090:80-90 | Port range mapping | Multiple port exposure |
Network Policy Types
| Type | Direction | Effect |
|---|---|---|
| Ingress | Inbound | Controls incoming traffic |
| Egress | Outbound | Controls outgoing traffic |
| Both | Bidirectional | Controls both directions |
CNI Plugin Comparison
| Plugin | Routing | Policies | Performance | Complexity |
|---|---|---|---|---|
| Flannel | VXLAN or host-gw | No | Medium | Low |
| Calico | BGP | Yes | High | Medium |
| Weave | Mesh | Yes | Medium | Low |
| Cilium | eBPF | Yes | Very High | High |
Docker Network API (Ruby)
| Method | Parameters | Returns |
|---|---|---|
| Docker::Network.create | name, options | Network object |
| Docker::Network.get | id | Network object |
| Docker::Network.all | none | Array of networks |
| network.info | none | Hash of network details |
| network.connect | container_id, config | nil |
| network.disconnect | container_id, options | nil |
| network.delete | none | nil |
Network Namespace Commands
| Command | Purpose |
|---|---|
| ip netns list | List network namespaces |
| ip netns exec NAME cmd | Execute command in namespace |
| ip netns add NAME | Create namespace |
| ip netns delete NAME | Remove namespace |
| nsenter --net=/var/run/netns/NAME | Enter namespace |
IPAM Configuration
| Field | Type | Description |
|---|---|---|
| Driver | string | IPAM driver name (default, host-local) |
| Config | array | Array of subnet configurations |
| Subnet | string | CIDR notation subnet |
| Gateway | string | Gateway IP address |
| IPRange | string | Range for allocation |
| AuxAddress | map | Reserved IP addresses |
Common Network Issues
| Symptom | Cause | Solution |
|---|---|---|
| DNS resolution fails | Default bridge network | Use user-defined network |
| Port already allocated | Duplicate port binding | Change host port or stop other container |
| Cannot connect to service | Network mode mismatch | Verify containers on same network |
| Slow cross-node traffic | MTU mismatch | Configure proper MTU for overlay |
| IPv6 not working | IPv6 disabled | Enable IPv6 on network |
| Network cannot be removed | Containers still connected | Disconnect containers first |