Podman Infrastructure Role
This role provides Podman container infrastructure for the rick-infra project, following the same patterns established by the PostgreSQL and Valkey infrastructure roles.
Overview
Podman is a daemonless container engine for developing, managing, and running OCI containers and pods on Linux systems. This role establishes the foundational container infrastructure that containerized application services can build upon.
Podman is deployed as a system-level infrastructure service that provides container runtime capabilities for service-specific users to utilize.
Features
- Daemonless architecture: No background daemon required, improved security
- Rootless containers: Applications run containers as dedicated users
- OCI compliance: Full Docker compatibility with enhanced security
- Systemd integration: Native systemd integration via quadlet files
- Security-focused: crun runtime, secure registries, restrictive defaults
- Infrastructure pattern: Matches PostgreSQL/Valkey role architecture
Architecture
Container Runtime Stack:
- Podman: Container engine and CLI
- crun: Fast, lightweight OCI runtime (instead of runc)
- netavark: Modern container networking
- aardvark-dns: Container DNS resolution
- conmon: Container monitoring
- catatonit: Container init process
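To confirm these components are actually in use on a host, `podman info` reports the active runtime and network backend (a quick check, not something the role runs for you):

```bash
# Verify the OCI runtime and network backend (expected: crun / netavark)
sudo podman info --format '{{ .Host.OCIRuntime.Name }} / {{ .Host.NetworkBackend }}'
```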
Registry Configuration:
- docker.io: Docker Hub (primary)
- quay.io: Red Hat Quay
- ghcr.io: GitHub Container Registry
All registries are configured as HTTPS-only; insecure connections are not allowed.
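A sketch of what the resulting `/etc/containers/registries.conf` might look like (illustrative only; the role's actual template may differ in detail):

```toml
# Unqualified image names resolve against these registries, in order
unqualified-search-registries = ["docker.io", "quay.io", "ghcr.io"]

[[registry]]
location = "docker.io"
insecure = false

[[registry]]
location = "quay.io"
insecure = false

[[registry]]
location = "ghcr.io"
insecure = false
```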
Private Registry Authentication:
For private container images (for example, images that devigo and other services pull from GitHub Container Registry), this role deploys authentication credentials:
- Auth file: `/etc/containers/auth.json` (system-wide, for root containers)
- Permissions: 0600 (root:root only)
- Credentials: Stored encrypted in Ansible Vault
- Automatic: Quadlet containers automatically use authentication when pulling images
Configuration:
```yaml
# In group_vars/production/vault.yml (encrypted)
vault_github_username: "your-username"
vault_github_token: "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# In group_vars/production/main.yml
github_username: "{{ vault_github_username }}"
github_token: "{{ vault_github_token }}"
```
When these variables are defined, the role automatically deploys authentication for ghcr.io.
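A minimal sketch of such a deployment task, assuming the role writes the standard `auths` map that Podman expects (the task name and exact templating are illustrative, not the role's verbatim implementation):

```yaml
# Sketch: render /etc/containers/auth.json from the vaulted credentials
- name: Deploy registry authentication for ghcr.io
  copy:
    dest: /etc/containers/auth.json
    owner: root
    group: root
    mode: "0600"
    content: >-
      {{ {'auths': {'ghcr.io': {'auth': (github_username ~ ':' ~ github_token) | b64encode}}}
         | to_nice_json }}
  when: github_username is defined and github_token is defined
  no_log: true
```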
Application Integration
Applications should create service-specific users and manage their own container deployments:
Service-Specific User Pattern:
```yaml
# Example from future authentik role
- name: Create authentik user
  user:
    name: authentik
    system: yes
    shell: /bin/bash
    home: /var/lib/authentik
    create_home: yes

- name: Configure rootless podman for authentik
  lineinfile:
    path: "{{ item.file }}"
    line: "authentik:{{ item.start }}:{{ item.count }}"
  loop:
    - { file: /etc/subuid, start: 100000, count: 65536 }
    - { file: /etc/subgid, start: 100000, count: 65536 }
```
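For the service user's systemd instance to start containers at boot without an interactive login, lingering is typically enabled as well (a sketch of that step, not something this role performs):

```yaml
- name: Enable lingering so authentik user services run at boot
  command: loginctl enable-linger authentik
  args:
    creates: /var/lib/systemd/linger/authentik
```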
Quadlet Integration:
```yaml
# Deploy quadlet files to user systemd directory
- name: Deploy container quadlets
  template:
    src: "{{ item }}.j2"
    dest: "/var/lib/authentik/.config/containers/systemd/{{ item }}"
    owner: authentik
    group: authentik
  loop:
    - authentik.pod
    - authentik-server.container
    - authentik-worker.container
```
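Quadlet files are only turned into services once the user's systemd instance reloads; a hedged sketch of that follow-up step (`authentik_uid` is a placeholder, e.g. taken from the registered result of the user-creation task):

```yaml
# Sketch: reload the authentik user's systemd so the quadlet generator
# produces services for the newly deployed files
- name: Reload user systemd for authentik
  systemd:
    daemon_reload: true
    scope: user
  become: true
  become_user: authentik
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"
```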
Container-to-Host Connectivity:
```ini
# Pod quadlet with host access
[Pod]
Network=service-internal
AddHost=host.containers.internal:host-gateway
PublishPort=127.0.0.1:9000:9000/tcp
```

Environment variables that point containers at host services (for example `DATABASE_HOST=host.containers.internal` and `REDIS_HOST=host.containers.internal`) are set in the corresponding `.container` units rather than in the pod.
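A hypothetical `authentik-server.container` unit illustrating this (image reference, pod name, and variables are placeholders, not part of this role):

```ini
[Container]
# Placeholder image; pulled with the credentials from /etc/containers/auth.json
# or the user's own auth file
Image=ghcr.io/goauthentik/server:latest
# Join the pod defined by the authentik.pod quadlet (requires a Podman version
# with pod quadlet support)
Pod=authentik.pod
# Reach host services through the host-gateway mapping defined on the pod
Environment=DATABASE_HOST=host.containers.internal
Environment=REDIS_HOST=host.containers.internal

[Install]
WantedBy=default.target
```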
Configuration
Default Settings:
```yaml
# Service management
podman_service_enabled: true
podman_service_state: "started"

# Registry configuration
podman_registries:
  - docker.io
  - quay.io
  - ghcr.io

# Security settings
podman_rootless: true
podman_systemd_security: true
```
All settings use secure defaults and can be overridden in host_vars if needed.
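For example, a host that should only pull from GitHub Container Registry could narrow the registry list (illustrative host_vars snippet):

```yaml
# host_vars/<host>/main.yml
podman_registries:
  - ghcr.io
```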
Dependencies
This is an infrastructure role with no dependencies. Applications that need containers should declare this role as a dependency:
```yaml
# roles/authentik/meta/main.yml
dependencies:
  - role: postgresql
  - role: valkey
  - role: podman    # Container infrastructure
  - role: caddy
```
Service Management
```bash
# System service (API endpoint)
sudo systemctl status podman
sudo systemctl status podman.socket

# Container functionality
sudo podman --version
sudo podman run --rm hello-world

# User containers (example with authentik user)
sudo -u authentik systemctl --user status authentik-pod
sudo -u authentik podman ps -a
```
Container User Isolation
Each containerized service gets:
- Dedicated system user (e.g., `authentik`, `nextcloud`)
- Isolated subuid/subgid ranges (65536 IDs per service; see the illustrative ranges after this list)
- Private container storage in user home directory
- User systemd session for container service management
- Network isolation via custom bridge networks
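For example, two services would receive non-overlapping ranges in `/etc/subuid` and `/etc/subgid` (illustrative values):

```
authentik:100000:65536
nextcloud:165536:65536
```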
File Locations
- Global configuration:
  - `/etc/containers/registries.conf` - Registry settings
  - `containers.conf` - Global podman settings
- User containers:
  - `/var/lib/{service}/.config/containers/systemd/` - Quadlet files
  - `.config/systemd/user/` - Generated systemd services
  - `.local/share/containers/` - Container storage
Security
- Rootless operation: All containers run as unprivileged users
- Registry security: HTTPS-only, trusted registries
- Runtime security: crun with security features enabled
- Network isolation: Custom networks, no privileged networking
- Storage isolation: Per-user container storage
- Systemd integration: Full systemd security features available
Integration with Host Services
Containerized applications access host services (PostgreSQL, Valkey) via:
- Bridge networking with host gateway access
- Host mapping: `host.containers.internal` → host IP
- Localhost binding: Host services remain on 127.0.0.1
- Standard ports: No port conflicts, containers use bridge networks
Notes
This role follows the rick-infra infrastructure pattern where foundational services (Podman, PostgreSQL, Valkey) are provided as host-level infrastructure, and applications manage their own specific usage patterns and user isolation rather than sharing a common container user.
Architecture Benefits:
- Clear separation between infrastructure and applications
- Service-level isolation and security
- Native systemd integration for all containers
- Familiar management patterns consistent with native services