Merge authentik-quadlet-fix: Integrate working authentik implementation
Merge the completed authentik Quadlet implementation that resolves all deployment issues and enables external HTTPS access. This brings the working solution developed and tested on the authentik-quadlet-fix branch into main. All systemd services now generate correctly, and authentik is fully operational at https://auth.jnss.me with proper SSL termination via Caddy.
@@ -1,119 +1,305 @@
# Authentik Role

Self-contained Authentik authentication server deployment using Podman and Unix sockets.

## Overview

This role deploys Authentik as a containerized authentication service with:

- **Unix socket IPC** for PostgreSQL and Valkey
- **Rootless Podman** with systemd integration via Quadlet
- **Self-contained permissions** management
- **Caddy reverse proxy** configuration

## Dependencies

- `postgresql` role (provides Unix socket infrastructure)
- `valkey` role (provides Unix socket infrastructure)
- `podman` role (provides container runtime)
- `caddy` role (provides reverse proxy infrastructure)

## Configuration

### Required Variables

```yaml
# Domain configuration
authentik_domain: "auth.jnss.me"

# Database credentials
authentik_db_password: "{{ vault_authentik_db_password }}"
authentik_secret_key: "{{ vault_authentik_secret_key }}"
authentik_default_admin_password: "{{ vault_authentik_admin_password }}"

# Infrastructure socket enablement
postgresql_unix_socket_enabled: true
valkey_unix_socket_enabled: true
```

### Optional Variables

```yaml
# Service configuration
authentik_service_enabled: true
authentik_service_state: "started"

# Container version
authentik_version: "latest"

# Email configuration
authentik_email_enabled: false
authentik_email_host: "smtp.example.com"
```

## Vault Variables Required

```yaml
# Database password
vault_authentik_db_password: "secure_db_password"

# Authentik secret key (generate with: openssl rand -base64 32)
vault_authentik_secret_key: "long_random_secret_key"

# Admin user password
vault_authentik_admin_password: "secure_admin_password"

# Existing infrastructure passwords
vault_valkey_password: "valkey_password"
```
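If `openssl` is not at hand, Python's standard library can generate an equally strong value (a sketch; either approach yields a suitable `vault_authentik_secret_key`):

```python
import secrets

# 32 bytes of randomness, URL-safe base64 encoded (~43 characters)
print(secrets.token_urlsafe(32))
```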

## Usage

### Basic Deployment

```yaml
- hosts: auth_servers
  roles:
    - postgresql
    - valkey
    - podman
    - caddy
    - authentik
```

### With Tags

```bash
# Deploy only database setup
ansible-playbook site.yml -t database

# Deploy only containers
ansible-playbook site.yml -t containers

# Deploy only Caddy config
ansible-playbook site.yml -t caddy
```

## File Structure

```
authentik/
├── defaults/main.yml              # Default variables
├── handlers/main.yml              # Service handlers
├── meta/main.yml                  # Role dependencies
├── tasks/
│   ├── main.yml                   # Main orchestration
│   ├── database.yml               # Database setup
│   ├── cache.yml                  # Cache setup
│   └── containers.yml             # Container deployment (included by main.yml)
├── templates/
│   ├── authentik.env.j2           # Environment variables
│   ├── authentik.caddy.j2         # Caddy configuration
│   ├── authentik-pod.container    # Pod Quadlet file
│   ├── authentik-server.container # Server Quadlet file
│   └── authentik-worker.container # Worker Quadlet file
└── README.md
```

## Systemd Services

The role creates the following systemd user services (run by the `authentik` user):

- `authentik-pod.service` - Main pod
- `authentik-server.service` - Web server container
- `authentik-worker.service` - Background worker container

## Networking

- **External**: HTTPS via Caddy on port 443
- **Internal**: Containers bind to `127.0.0.1:9000` (HTTP) and `127.0.0.1:9443` (HTTPS)
- **Database**: Unix socket at `/var/run/postgresql/.s.PGSQL.5432`
- **Cache**: Unix socket at `/var/run/valkey/valkey.sock`

## Security Features

- **Rootless containers** via Podman
- **Unix socket IPC** eliminates network exposure
- **User isolation** with dedicated `authentik` system user
- **Group-based socket access** for PostgreSQL and Valkey
- **TLS termination** at Caddy proxy
- **Security headers** configured in Caddy

## Troubleshooting

### Check Service Status

The services are user-scope units, so query the `authentik` user's systemd instance:

```bash
systemctl --user --machine=authentik@ status authentik-pod
systemctl --user --machine=authentik@ status authentik-server
systemctl --user --machine=authentik@ status authentik-worker
```

### Check Logs

```bash
# User units are matched via the _SYSTEMD_USER_UNIT journal field
journalctl _SYSTEMD_USER_UNIT=authentik-server.service -f
journalctl _SYSTEMD_USER_UNIT=authentik-worker.service -f
```

### Check Socket Connectivity

```bash
# Test PostgreSQL socket
sudo -u authentik psql -h /var/run/postgresql -U authentik authentik

# Test Valkey socket
sudo -u authentik redis-cli -s /var/run/valkey/valkey.sock -n 1 ping
```

### Verify Container Status

```bash
# Run podman as the authentik user (there is no `podman --user` flag;
# the user's runtime directory must be set explicitly)
sudo -u authentik XDG_RUNTIME_DIR=/run/user/$(id -u authentik) podman pod ps
sudo -u authentik XDG_RUNTIME_DIR=/run/user/$(id -u authentik) podman ps
```
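When neither client tool is installed, a small Python check (a sketch using the socket paths from this role's defaults) can at least confirm the socket files exist and really are Unix sockets:

```python
import os
import stat

# Socket paths from this role's defaults
paths = [
    "/var/run/postgresql/.s.PGSQL.5432",
    "/var/run/valkey/valkey.sock",
]
for path in paths:
    try:
        mode = os.stat(path).st_mode
        status = "socket" if stat.S_ISSOCK(mode) else "not a socket"
    except FileNotFoundError:
        status = "missing"
    print(f"{path}: {status}")
```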

## Post-Deployment

1. **Access the web interface**: Navigate to `https://auth.jnss.me`
2. **Log in**: Use the admin credentials from the vault variables
3. **Configure providers**: Set up OAuth2/SAML providers for services
4. **Create applications**: Configure applications for SSO integration

## Maintenance

### Update Containers

```yaml
# Update to a specific version
authentik_version: "2024.2.0"
```

### Backup Data

Important directories to back up:

- `{{ authentik_data_dir }}` - Application data
- `{{ authentik_media_dir }}` - Uploaded media
- PostgreSQL database dump
- Vault variables

## Integration Examples

### Protect a Service with Authentik

```caddy
service.example.com {
    forward_auth https://auth.jnss.me {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers Remote-User Remote-Name Remote-Email Remote-Groups
    }

    reverse_proxy localhost:8080
}
```

## Technical Implementation Notes

### Unix Socket Access Solution

This role implements a deliberate solution for containerized Unix socket access.

**Challenge**: Containers need to access Unix sockets owned by different system services (PostgreSQL, Valkey) while maintaining security isolation.

**Solution Components**:

1. **User namespace preservation**: `--userns=host` in the pod configuration
   - Preserves host UID/GID mapping within containers
   - Allows direct access to host socket files

2. **Group membership preservation**: `Annotation=run.oci.keep_original_groups=1` in the containers
   - Ensures supplementary group memberships are maintained in containers
   - Enables access to the postgres and valkey groups within containers

3. **Correct Redis URL format**: `AUTHENTIK_CACHE__URL=unix://...?db=N&password=...`
   - Avoids Django Redis client URL parsing issues
   - Prevents incorrect port appending to Unix socket paths

4. **Host service integration**: the authentik user is added to service groups
   - `postgres` group for PostgreSQL socket access
   - `valkey` group for Valkey socket access
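The first two components map onto Quadlet directives roughly as follows (a hypothetical excerpt for illustration, not this role's actual templates; file names and the image tag are assumptions):

```ini
# authentik.pod (sketch) - component 1: keep the host user namespace
[Pod]
PodName=authentik
PodmanArgs=--userns=host

# authentik-server.container (sketch) - component 2: keep supplementary groups
[Container]
Pod=authentik.pod
Image=ghcr.io/goauthentik/server:2024.8.3
Annotation=run.oci.keep_original_groups=1
```

Component 3 then lives in the environment file, e.g. `AUTHENTIK_CACHE__URL=unix:///var/run/valkey/valkey.sock?db=1&password=...`.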

### Container Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                Authentik Pod (--userns=host)                │
│                                                             │
│   ┌─────────────────┐        ┌─────────────────┐            │
│   │ Server Container│        │ Worker Container│            │
│   │ UID: 963 (host) │        │ UID: 963 (host) │            │
│   │ Groups: 963,    │        │ Groups: 963,    │            │
│   │  968 (postgres),│        │  968 (postgres),│            │
│   │  965 (valkey)   │        │  965 (valkey)   │            │
│   └─────────────────┘        └─────────────────┘            │
│            │                          │                     │
│            └──────────────────────────┘                     │
└──────────────────────────────│──────────────────────────────┘
                               │
                ┌──────────────▼───────────────┐
                │      Host Unix Sockets       │
                │                              │
                │  /var/run/postgresql/        │
                │  ├─ .s.PGSQL.5432            │
                │  │  (postgres:postgres 0770) │
                │  │                           │
                │  /var/run/valkey/            │
                │  ├─ valkey.sock              │
                │     (valkey:valkey 0770)     │
                └──────────────────────────────┘
```

### Security Implications

**Maintained security**:

- Container network isolation preserved (no `--network=host`)
- Individual container user/group isolation
- Standard Podman security features active
- Principle of least privilege through group membership

**Trade-offs**:

- Containers share the host user namespace (reduced UID isolation)
- Group membership grants broader access to service files
- Requires careful service group management

### Compatibility

This solution is:

- ✅ **Portable**: Works regardless of UID assignments
- ✅ **Maintainable**: No custom subuid/subgid configuration
- ✅ **Performant**: Unix sockets avoid TCP overhead
- ✅ **Secure**: Maintains container isolation where it matters
- ✅ **Standard**: Uses documented Podman/OCI features

---

**Rick-Infra Authentik Service**

SSO authentication and authorization platform for modern applications.

@@ -1,107 +1,110 @@
---
# =================================================================
# Authentik Authentication Role - Default Variables
# =================================================================
# Self-contained Authentik deployment with Podman and Unix sockets

# =================================================================
# Service Configuration
# =================================================================

# Service user and directories
authentik_user: authentik
authentik_group: authentik
authentik_home: /opt/authentik
authentik_data_dir: "{{ authentik_home }}/data"
authentik_media_dir: "{{ authentik_home }}/media"
authentik_log_dir: "{{ authentik_home }}/logs"

# Container configuration
authentik_version: "2025.10"
authentik_image: "ghcr.io/goauthentik/server"

# Service management
authentik_service_enabled: true
authentik_service_state: "started"

# =================================================================
# Database Configuration (Self-managed)
# =================================================================

# Authentik manages its own database
authentik_db_engine: "postgresql"
authentik_db_host: "/run/postgresql"  # Unix socket directory
authentik_db_port: 5432
authentik_db_name: "authentik"
authentik_db_user: "authentik"
authentik_db_password: "{{ vault_authentik_db_password }}"

# =================================================================
# Cache Configuration (Self-managed)
# =================================================================

# Valkey/Redis cache configuration (TCP via host gateway)
authentik_redis_host: "host.containers.internal"  # TCP is used due to Authentik Unix socket issues
authentik_redis_port: 6379
authentik_redis_db: 1
authentik_redis_password: "{{ vault_valkey_password | default('') }}"
authentik_valkey_db: 1  # Use database 1 for Authentik
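For the `unix://...?db=N&password=...` cache URL form mentioned in this role's notes, a quick Python check (illustrative values only) shows why the query-string form is the safe one: the database number and password stay in the query rather than being glued onto the socket path:

```python
from urllib.parse import parse_qs, urlsplit

# Illustrative cache URL in the unix://...?db=N&password=... form
url = "unix:///var/run/valkey/valkey.sock?db=1&password=secret"

parts = urlsplit(url)
print(parts.scheme)                 # unix
print(parts.path)                   # /var/run/valkey/valkey.sock
print(parse_qs(parts.query)["db"])  # ['1']
```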

# =================================================================
# Network Configuration
# =================================================================

authentik_domain: "auth.jnss.me"
authentik_http_port: 9000
authentik_bind_address: "0.0.0.0"

# =================================================================
# Authentik Core Configuration
# =================================================================

authentik_secret_key: "{{ vault_authentik_secret_key }}"
authentik_error_reporting_enabled: false
authentik_disable_update_check: true
authentik_disable_startup_analytics: true
authentik_log_level: "info"

# =================================================================
# Email Configuration (Optional)
# =================================================================

authentik_email_enabled: false
authentik_email_host: ""
authentik_email_port: 587
authentik_email_username: ""
authentik_email_password: "{{ vault_authentik_email_password | default('') }}"
authentik_email_tls: true
authentik_email_from: "authentik@{{ authentik_domain }}"

# Worker configuration
authentik_worker_concurrency: 2

# =================================================================
# Security Configuration
# =================================================================

# Default admin user (created during deployment)
authentik_default_admin_email: "admin@{{ authentik_domain }}"
authentik_default_admin_password: "{{ vault_authentik_admin_password }}"

# =================================================================
# Podman Pod Configuration
# =================================================================

# Pod service name is simply "authentik" (generated from authentik.pod)
authentik_container_server_name: "authentik-server"
authentik_container_worker_name: "authentik-worker"

# Quadlet service directory (user scope)
authentik_user_quadlet_dir: "{{ authentik_home }}/.config/containers/systemd"
authentik_quadlet_dir: "{{ authentik_user_quadlet_dir }}"

# User session variables (set dynamically during deployment)
authentik_uid: ""

# =================================================================
# Caddy Integration
# =================================================================

# Caddy configuration (assumes the caddy role provides these variables)
caddy_sites_enabled_dir: "/etc/caddy/sites-enabled"
caddy_log_dir: "/var/log/caddy"
caddy_user: "caddy"

# =================================================================
# Infrastructure Dependencies (Read-only)
# =================================================================

# PostgreSQL socket configuration (managed by the postgresql role)
postgresql_unix_socket_directories: "/var/run/postgresql"

# Valkey socket configuration (managed by the valkey role)
valkey_unix_socket_path: "/var/run/valkey/valkey.sock"
valkey_password: "{{ vault_valkey_password }}"
@@ -1,28 +1,78 @@
---
# Authentik Service Handlers (User Scope)

- name: reload systemd user
  systemd:
    daemon_reload: true
    scope: user
  become: true
  become_user: "{{ authentik_user }}"
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"

- name: reload caddy
  systemd:
    name: caddy
    state: reloaded
  when: caddy_service_enabled | default(false)

- name: restart authentik pod
  systemd:
    name: "authentik-pod"
    state: restarted
    scope: user
    daemon_reload: true
  become: true
  become_user: "{{ authentik_user }}"
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"

- name: restart authentik server
  systemd:
    name: "{{ authentik_container_server_name }}"
    state: restarted
    scope: user
    daemon_reload: true
  become: true
  become_user: "{{ authentik_user }}"
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"

- name: restart authentik worker
  systemd:
    name: "{{ authentik_container_worker_name }}"
    state: restarted
    scope: user
    daemon_reload: true
  become: true
  become_user: "{{ authentik_user }}"
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"

- name: stop authentik services
  systemd:
    name: "{{ item }}"
    state: stopped
    scope: user
  become: true
  become_user: "{{ authentik_user }}"
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"
  loop:
    - "{{ authentik_container_worker_name }}"
    - "{{ authentik_container_server_name }}"
    - "authentik-pod"

- name: start authentik services
  systemd:
    name: "{{ item }}"
    state: started
    scope: user
    daemon_reload: true
  become: true
  become_user: "{{ authentik_user }}"
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"
  loop:
    - "authentik-pod"
    - "{{ authentik_container_server_name }}"
    - "{{ authentik_container_worker_name }}"
@@ -1,37 +1,10 @@
---
galaxy_info:
  author: Rick's Infrastructure Team
  description: Containerized Authentik authentication service for rick-infra
  company: Personal Infrastructure
  license: MIT
  min_ansible_version: "2.9"
  platforms:
    - name: ArchLinux
      versions:
        - all
  galaxy_tags:
    - authentication
    - authentik
    - containers
    - podman
    - archlinux

# Authentik Role Dependencies
dependencies:
  - role: postgresql
    tags: ['postgresql', 'infrastructure']
  - role: valkey
    tags: ['valkey', 'infrastructure']
  - role: podman
    tags: ['podman', 'infrastructure']
  - role: caddy
    tags: ['caddy']

# No modifications to infrastructure roles required
# Authentik role is completely self-contained
roles/authentik/tasks/cache.yml (new file, 70 lines)
@@ -0,0 +1,70 @@
---
# Cache setup for Authentik - Self-contained socket permissions

- name: Add authentik user to valkey group for socket access
  user:
    name: "{{ authentik_user }}"
    groups: valkey
    append: true
  become: true

- name: Ensure authentik can access Valkey socket directory
  file:
    path: "{{ valkey_unix_socket_path | dirname }}"
    mode: '0770'
    group: valkey
  become: true

- name: Test Valkey socket connectivity
  command: >
    redis-cli -s {{ valkey_unix_socket_path }}
    -a {{ valkey_password }}
    -n {{ authentik_valkey_db }}
    ping
  become: true
  become_user: "{{ authentik_user }}"
  register: valkey_socket_test
  failed_when: valkey_socket_test.stdout != "PONG"
  changed_when: false

- name: Configure Authentik Valkey database
  command: >
    redis-cli -s {{ valkey_unix_socket_path }}
    -a {{ valkey_password }}
    -n {{ authentik_valkey_db }}
    CONFIG SET save ""
  become: true
  become_user: "{{ authentik_user }}"
  register: valkey_config_result
  changed_when: true

- name: Verify Authentik can write to Valkey database
  command: >
    redis-cli -s {{ valkey_unix_socket_path }}
    -a {{ valkey_password }}
    -n {{ authentik_valkey_db }}
    SET authentik:healthcheck "deployed"
  become: true
  become_user: "{{ authentik_user }}"
  register: valkey_write_test
  changed_when: false

- name: Clean up Valkey test key
  command: >
    redis-cli -s {{ valkey_unix_socket_path }}
    -a {{ valkey_password }}
    -n {{ authentik_valkey_db }}
    DEL authentik:healthcheck
  become: true
  become_user: "{{ authentik_user }}"
  changed_when: false

- name: Display cache setup status
  debug:
    msg: |
      ✅ Authentik cache setup complete!

      🗄️ Cache DB: {{ authentik_valkey_db }}
      🔌 Connection: Unix socket ({{ valkey_unix_socket_path }})
      📊 Test: {{ valkey_socket_test.stdout }}

      🏗️ Ready for Authentik container deployment
@@ -1,28 +1,62 @@
---
# Database setup for Authentik - Self-contained socket permissions

- name: Add authentik user to postgres group for socket access
  user:
    name: "{{ authentik_user }}"
    groups: postgres
    append: true
  become: true

- name: Ensure authentik can access PostgreSQL socket directory
  file:
    path: "{{ postgresql_unix_socket_directories }}"
    mode: '0770'
    group: postgres
  become: true

- name: Test PostgreSQL socket connectivity
  postgresql_ping:
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: "{{ authentik_user }}"
  become: true
  become_user: "{{ authentik_user }}"

- name: Create Authentik database user via socket
  postgresql_user:
    name: "{{ authentik_db_user }}"
    password: "{{ authentik_db_password }}"
    encrypted: true
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: postgres
  become: true
  become_user: postgres

- name: Create Authentik database via socket
  postgresql_db:
    name: "{{ authentik_db_name }}"
    owner: "{{ authentik_db_user }}"
    encoding: UTF8
    template: template0
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: postgres
  become: true
  become_user: postgres

- name: Grant Authentik database privileges
  postgresql_privs:
    db: "{{ authentik_db_name }}"
    privs: ALL
    type: database
    role: "{{ authentik_db_user }}"
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: postgres
  become: true
  become_user: postgres

- name: Display database setup status
  debug:
    msg: |
      ✅ Authentik database setup complete!

      📊 Database: {{ authentik_db_name }}
      👤 User: {{ authentik_db_user }}
      🔌 Connection: Unix socket ({{ postgresql_unix_socket_directories }})

      🏗️ Ready for Authentik container deployment
@@ -1,36 +1,184 @@
---
# Authentik Authentication Role - Main Tasks
# Self-contained deployment with Podman and Unix sockets

- name: Create authentik group
  group:
    name: "{{ authentik_group }}"
    system: true

- name: Create authentik system user
  user:
    name: "{{ authentik_user }}"
    system: true
    shell: /bin/bash
    home: "{{ authentik_home }}"
    create_home: true
    group: "{{ authentik_group }}"

- name: Create authentik directories
  file:
    path: "{{ item }}"
    state: directory
    owner: "{{ authentik_user }}"
    group: "{{ authentik_group }}"
    mode: '0755'
  loop:
    - "{{ authentik_home }}"
    - "{{ authentik_data_dir }}"
    - "{{ authentik_media_dir }}"
    - "{{ authentik_user_quadlet_dir }}"
    - "{{ authentik_log_dir }}"

- name: Enable lingering for authentik user (services persist without login)
  command: loginctl enable-linger {{ authentik_user }}
  args:
    # enable-linger drops a flag file here, making the task idempotent
    creates: "/var/lib/systemd/linger/{{ authentik_user }}"

- name: Get authentik user UID and GID for container configuration
  shell: |
    echo "uid=$(id -u {{ authentik_user }})"
    echo "gid=$(id -g {{ authentik_user }})"
  register: authentik_user_info
  changed_when: false
  tags: [setup]

- name: Set authentik UID/GID facts for container templates
  set_fact:
    authentik_uid: "{{ authentik_user_info.stdout_lines[0] | regex_replace('uid=', '') }}"
    authentik_gid: "{{ authentik_user_info.stdout_lines[1] | regex_replace('gid=', '') }}"
  tags: [setup]
- name: Setup database access and permissions
  include_tasks: database.yml
  tags: [database, setup]

- name: Setup cache access and permissions
  include_tasks: cache.yml
  tags: [cache, setup]

- name: Deploy environment configuration
  template:
    src: authentik.env.j2
    dest: "{{ authentik_home }}/.env"
    owner: "{{ authentik_user }}"
    group: "{{ authentik_group }}"
    mode: '0600'
    backup: true
  notify:
    - restart authentik pod
    - restart authentik server
    - restart authentik worker
  tags: [config]

- name: Create Quadlet systemd directory (user scope)
  file:
    path: "{{ authentik_quadlet_dir }}"
    state: directory
    owner: "{{ authentik_user }}"
    group: "{{ authentik_group }}"
    mode: '0755'

- name: Deploy Quadlet pod and container files (user scope)
  template:
    src: "{{ item.src }}"
    dest: "{{ authentik_quadlet_dir }}/{{ item.dest }}"
    owner: "{{ authentik_user }}"
    group: "{{ authentik_group }}"
    mode: '0644'
  loop:
    - { src: 'authentik.pod', dest: 'authentik.pod' }
    - { src: 'authentik-server.container', dest: 'authentik-server.container' }
    - { src: 'authentik-worker.container', dest: 'authentik-worker.container' }
  become: true
  become_user: "{{ authentik_user }}"
  notify:
    - reload systemd user
    - restart authentik pod
    - restart authentik server
    - restart authentik worker
  tags: [containers, deployment]

- name: Deploy Caddy configuration
  template:
    src: authentik.caddy.j2
    dest: "{{ caddy_sites_enabled_dir }}/authentik.caddy"
    owner: root
    group: "{{ caddy_user }}"
    mode: '0644'
    backup: true
  notify: reload caddy
  when: caddy_sites_enabled_dir is defined
  tags: [caddy, reverse-proxy]
- name: Ensure system dependencies are running
  systemd:
    name: "{{ item }}"
    state: started
  loop:
    - postgresql
    - valkey
  register: system_deps

- name: Wait for PostgreSQL socket to be ready
  wait_for:
    path: "{{ postgresql_unix_socket_directories }}/.s.PGSQL.{{ postgresql_port }}"
    timeout: 30
  when: postgresql_unix_socket_enabled

- name: Wait for Valkey socket to be ready
  wait_for:
    path: "{{ valkey_unix_socket_path }}"
    timeout: 30
  when: valkey_unix_socket_enabled

- name: Ensure systemd user session is started
  systemd:
    name: "user@{{ authentik_uid }}.service"
    state: started
    scope: system
  register: user_session_start

- name: Enable and start Authentik pod (user scope)
  systemd:
    name: "authentik-pod"
    enabled: "{{ authentik_service_enabled }}"
    state: "{{ authentik_service_state }}"
    scope: user
    daemon_reload: true
  become: true
  become_user: "{{ authentik_user }}"
  tags: [containers, service]

- name: Wait for Authentik to be ready
  uri:
    url: "https://{{ authentik_domain }}/if/health/live/"
    method: GET
    status_code: [200]
    timeout: 30
    validate_certs: true
  register: authentik_health_check
  until: authentik_health_check is succeeded
  retries: 10
  delay: 30
  tags: [verification, health-check]

- name: Display Authentik deployment status
  debug:
    msg: |
      ✅ Authentik Authentication deployed successfully!

      🌐 Domain: {{ authentik_domain }}
      🗄️ Database: {{ authentik_db_name }} (Unix socket)
      🗄️ Cache: Valkey DB {{ authentik_valkey_db }} (Unix socket)
      🐳 Containers: Pod with server + worker
      🔒 Admin: {{ authentik_default_admin_email }}

      🚀 Ready for SSO configuration!

      📋 Next Steps:
      - Access {{ authentik_domain }} to complete setup
      - Configure applications and providers
      - Set up SSO for services
  tags: [verification]
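The UID/GID capture and `regex_replace` filtering above can be exercised in plain shell. This sketch uses `root` as a stand-in user so it runs on any host:

```shell
# Stand-in for {{ authentik_user }}; root exists everywhere
user=root

# Same two lines the shell task emits
uid_line="uid=$(id -u "$user")"
gid_line="gid=$(id -g "$user")"

# Equivalent of the regex_replace('uid=', '') / ('gid=', '') filters
uid="${uid_line#uid=}"
gid="${gid_line#gid=}"

# This is the form the .container templates interpolate into User=
echo "User=$uid:$gid"
```

The facts feed the `User={{ authentik_uid }}:{{ authentik_gid }}` lines in the Quadlet container templates below.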
32  roles/authentik/templates/authentik-server.container  Normal file
@@ -0,0 +1,32 @@
[Unit]
Description=Authentik Server Container
After=authentik-pod.service
Requires=authentik-pod.service

[Container]
ContainerName={{ authentik_container_server_name }}
Image={{ authentik_image }}:{{ authentik_version }}
Pod=authentik.pod
EnvironmentFile={{ authentik_home }}/.env
User={{ authentik_uid }}:{{ authentik_gid }}
Annotation=run.oci.keep_original_groups=1

# Logging configuration
LogDriver=k8s-file
LogOpt=path={{ authentik_home }}/logs/server.log
Volume={{ authentik_home }}/logs:{{ authentik_home }}/logs

# Volume mounts for data and sockets
Volume={{ authentik_media_dir }}:/media
Volume={{ authentik_data_dir }}:/data
Volume={{ postgresql_unix_socket_directories }}:{{ postgresql_unix_socket_directories }}:Z
Volume={{ valkey_unix_socket_path | dirname }}:{{ valkey_unix_socket_path | dirname }}:Z

Exec=server

[Service]
Restart=always
TimeoutStartSec=300

[Install]
WantedBy=default.target
32  roles/authentik/templates/authentik-worker.container  Normal file
@@ -0,0 +1,32 @@
[Unit]
Description=Authentik Worker Container
After=authentik-pod.service
Requires=authentik-pod.service

[Container]
ContainerName={{ authentik_container_worker_name }}
Image={{ authentik_image }}:{{ authentik_version }}
Pod=authentik.pod
EnvironmentFile={{ authentik_home }}/.env
User={{ authentik_uid }}:{{ authentik_gid }}
Annotation=run.oci.keep_original_groups=1

# Logging configuration
LogDriver=k8s-file
LogOpt=path={{ authentik_home }}/logs/worker.log
Volume={{ authentik_home }}/logs:{{ authentik_home }}/logs

# Volume mounts for data and sockets
Volume={{ authentik_media_dir }}:/media
Volume={{ authentik_data_dir }}:/data
Volume={{ postgresql_unix_socket_directories }}:{{ postgresql_unix_socket_directories }}:Z
Volume={{ valkey_unix_socket_path | dirname }}:{{ valkey_unix_socket_path | dirname }}:Z

Exec=worker

[Service]
Restart=always
TimeoutStartSec=300

[Install]
WantedBy=default.target
@@ -1,34 +1,41 @@
# Authentik Authentication Service

{{ authentik_domain }} {
    reverse_proxy https://{{ authentik_bind_address }}:{{ authentik_https_port }} {
        transport http {
            tls_insecure_skip_verify
        }

        header_up Host {upstream_hostport}
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-Proto https
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Host {host}

        # Health check
        health_uri /if/health/live/
        health_timeout 10s
        health_interval 30s
        health_status 200
    }

    # Security headers
    header {
        X-Frame-Options SAMEORIGIN
        X-Content-Type-Options nosniff
        X-XSS-Protection "1; mode=block"
        Referrer-Policy strict-origin-when-cross-origin
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    }

    # Authentik-specific paths
    handle_path /outpost.goauthentik.io/* {
        reverse_proxy https://{{ authentik_bind_address }}:{{ authentik_https_port }}
    }

    # Logging
    log {
        output file {{ caddy_log_dir }}/authentik.log
        level INFO
        format json
    }

    # Optional: Custom error pages
    handle_errors {
        respond "Authentication service temporarily unavailable" 503
    }
}
@@ -1,39 +1,39 @@
# Authentik Configuration - Unix Socket IPC
# Generated by Ansible - DO NOT EDIT

# PostgreSQL Configuration (Unix Socket)
AUTHENTIK_POSTGRESQL__HOST={{ postgresql_unix_socket_directories }}
AUTHENTIK_POSTGRESQL__NAME={{ authentik_db_name }}
AUTHENTIK_POSTGRESQL__USER={{ authentik_db_user }}
AUTHENTIK_POSTGRESQL__PASSWORD={{ authentik_db_password }}
# No port needed for Unix socket

# Valkey/Redis Configuration (Unix Socket) - Using cache URL format to avoid port parsing issues
AUTHENTIK_CACHE__URL=unix://{{ valkey_unix_socket_path }}?db={{ authentik_valkey_db }}&password={{ valkey_password }}

# Authentik Core Configuration
AUTHENTIK_SECRET_KEY={{ authentik_secret_key }}
AUTHENTIK_LOG_LEVEL={{ authentik_log_level }}
AUTHENTIK_ERROR_REPORTING__ENABLED={{ authentik_error_reporting | lower }}

# Security Configuration
AUTHENTIK_COOKIE_DOMAIN={{ authentik_domain }}
AUTHENTIK_DISABLE_UPDATE_CHECK=true
AUTHENTIK_DISABLE_STARTUP_ANALYTICS=true

# Network binding
AUTHENTIK_LISTEN__HTTP={{ authentik_bind_address }}:{{ authentik_http_port }}

{% if authentik_email_host %}
# Email Configuration
AUTHENTIK_EMAIL__HOST={{ authentik_email_host }}
AUTHENTIK_EMAIL__PORT={{ authentik_email_port }}
AUTHENTIK_EMAIL__USERNAME={{ authentik_email_username }}
AUTHENTIK_EMAIL__PASSWORD={{ authentik_email_password }}
AUTHENTIK_EMAIL__USE_TLS={{ authentik_email_tls | lower }}
AUTHENTIK_EMAIL__FROM={{ authentik_email_from }}
{% endif %}

# Default admin user
AUTHENTIK_BOOTSTRAP_PASSWORD={{ authentik_default_admin_password }}
AUTHENTIK_BOOTSTRAP_EMAIL={{ authentik_default_admin_email }}
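The `AUTHENTIK_CACHE__URL` line above packs the socket path, database index, and password into a single value. A quick sketch of the rendered result, with placeholder values standing in for the role's variables (`valkey_unix_socket_path`, `authentik_valkey_db`, `valkey_password`):

```shell
# Placeholder values; the real ones come from the role's defaults/vault
valkey_socket="/var/run/valkey/valkey.sock"
valkey_db=1
valkey_pass="example-password"

# Same shape as the template's AUTHENTIK_CACHE__URL line
cache_url="unix://${valkey_socket}?db=${valkey_db}&password=${valkey_pass}"
echo "$cache_url"
```

Keeping everything in one URL avoids Authentik trying to parse a TCP host:port pair out of separate `REDIS__HOST`/`REDIS__PORT` variables, which is the port-parsing issue the template comment refers to.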
13  roles/authentik/templates/authentik.pod  Normal file
@@ -0,0 +1,13 @@
[Unit]
Description=Authentik Authentication Pod

[Pod]
PublishPort=0.0.0.0:{{ authentik_http_port }}:{{ authentik_http_port }}
PodmanArgs=

[Service]
Restart=always
TimeoutStartSec=900

[Install]
WantedBy=default.target
@@ -13,13 +13,14 @@
postgresql_service_enabled: true
postgresql_service_state: "started"

# Network Security
postgresql_listen_addresses: "localhost"
postgresql_port: 5432

# Unix Socket Configuration
postgresql_unix_socket_enabled: true
postgresql_unix_socket_directories: "/var/run/postgresql"
postgresql_unix_socket_permissions: "0770"

# Authentication
postgresql_auth_method: "scram-sha-256"
@@ -67,37 +67,49 @@
    - reload systemd
    - restart postgresql

- name: Create PostgreSQL Unix socket directory
  file:
    path: "{{ postgresql_unix_socket_directories }}"
    state: directory
    owner: postgres
    group: postgres
    mode: '0770'
  when: postgresql_unix_socket_enabled

- name: Enable and start PostgreSQL service
  systemd:
    name: postgresql
    enabled: "{{ postgresql_service_enabled }}"
    state: "{{ postgresql_service_state }}"
    daemon_reload: true

- name: Wait for PostgreSQL to be ready (TCP)
  wait_for:
    port: "{{ postgresql_port }}"
    host: "{{ postgresql_listen_addresses }}"
    timeout: 30
  when: postgresql_service_state == "started" and postgresql_listen_addresses != ""

- name: Wait for PostgreSQL to be ready (Unix Socket)
  postgresql_ping:
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: postgres
  become: true
  become_user: postgres
  register: postgresql_socket_ready
  until: postgresql_socket_ready is succeeded
  retries: 10
  delay: 3
  when: postgresql_service_state == "started" and postgresql_unix_socket_enabled and postgresql_listen_addresses == ""

- name: Display PostgreSQL infrastructure status
  debug:
    msg: |
      ✅ PostgreSQL infrastructure ready!

      📡 Service: {% if postgresql_unix_socket_enabled and postgresql_listen_addresses == "" %}Unix Socket ({{ postgresql_unix_socket_directories }}){% else %}{{ postgresql_listen_addresses }}:{{ postgresql_port }}{% endif %}

      🔒 Auth: {{ postgresql_auth_method }}
      📊 Checksums: {{ 'Enabled' if postgresql_data_checksums else 'Disabled' }}
      {% if postgresql_unix_socket_enabled %}🔌 Socket: {{ postgresql_unix_socket_directories }} (mode {{ postgresql_unix_socket_permissions }}){% endif %}

      🏗️ Ready for applications to create databases/users
@@ -3,6 +3,11 @@
# PostgreSQL's excellent defaults are used except for essentials

# Network and Security
{% if postgresql_unix_socket_enabled %}
# Unix Socket Configuration
unix_socket_directories = '{{ postgresql_unix_socket_directories }}'
unix_socket_permissions = {{ postgresql_unix_socket_permissions }}
{% endif %}
listen_addresses = '{{ postgresql_listen_addresses }}'
port = {{ postgresql_port }}
@@ -44,6 +44,10 @@ sigvild_gallery_host: "127.0.0.1"
sigvild_gallery_home: "/opt/sigvild-gallery"
sigvild_gallery_web_root: "/var/www/sigvild-gallery"
sigvild_gallery_local_project_path: "{{ ansible_env.PWD }}/sigvild-gallery"

# Backup configuration
sigvild_gallery_backup_enabled: true
sigvild_gallery_backup_local_path: "{{ playbook_dir }}/backups/sigvild-gallery"
```

## Usage

@@ -71,6 +75,62 @@ ansible-playbook site.yml --tags="backend"
ansible-playbook site.yml --tags="caddy"
```
### Data Backup and Restoration

#### Creating a Backup

Before formatting your server or making major changes, create a backup of all production data:

```bash
# Create backup of production data
ansible-playbook playbooks/backup-sigvild.yml

# Backup will be saved to: ./backups/sigvild-gallery/sigvild-gallery-backup-YYYYMMDDTHHMMSS.tar.gz
```

The backup includes:
- PocketBase SQLite database (`data.db`, `auxiliary.db`)
- All uploaded wedding photos and media files
- PocketBase logs and system state

#### Automatic Restoration

When deploying to a fresh server, the role automatically detects and restores from the latest backup:

```bash
# Normal deployment will auto-restore if a backup exists
ansible-playbook playbooks/deploy-sigvild.yml

# Or deploy the full infrastructure (includes auto-restore)
ansible-playbook site.yml
```

#### Manual Restoration

To restore data manually or from a specific backup:

```bash
# Restore from a specific backup directory
ansible-playbook playbooks/deploy-sigvild.yml --tags="restore" \
  --extra-vars="sigvild_gallery_backup_local_path=/path/to/backup/directory"

# Force restoration (overwrites existing data)
ansible-playbook playbooks/deploy-sigvild.yml --tags="backend,restore"
```

#### Backup Management

```bash
# List available backups
ls -la ./backups/sigvild-gallery/

# Verify backup contents
tar -tzf ./backups/sigvild-gallery/sigvild-gallery-backup-YYYYMMDDTHHMMSS.tar.gz

# Extract a backup for local inspection
tar -xzf ./backups/sigvild-gallery/sigvild-gallery-backup-YYYYMMDDTHHMMSS.tar.gz
```
## Security Features

### Environment Variables

@@ -169,6 +229,21 @@ systemctl reload caddy
- `/opt/sigvild-gallery/` (application directory)
- `/var/www/sigvild-gallery/` (frontend files)

## Data Protection

### Backup Strategy
- **Automated**: Backup creation via a dedicated playbook
- **Comprehensive**: Includes database, uploaded files, and system state
- **Consistent**: Service temporarily stopped during backup for data integrity
- **Local storage**: Backups stored in the `./backups/sigvild-gallery/` directory
- **Timestamped**: Each backup includes an ISO timestamp for easy identification

### Recovery Process
- **Automatic detection**: Deployment automatically detects available backups
- **Pre-start restore**: Restoration happens before the service starts
- **Integrity verification**: Backups verified before and after restoration
- **Permission preservation**: User/group ownership maintained during restore

## Tags

- `sigvild`: Complete Sigvild Gallery deployment
@@ -177,4 +252,6 @@ systemctl reload caddy
- `build`: Local build processes
- `service`: SystemD service management
- `caddy`: Caddy configuration
- `verify`: Post-deployment verification
- `backup`: Data backup operations
- `restore`: Data restoration operations
@@ -31,6 +31,10 @@ sigvild_gallery_local_project_path: "{{ ansible_env.PWD }}/sigvild-gallery"
sigvild_gallery_service_enabled: true
sigvild_gallery_service_state: started

# Backup configuration
sigvild_gallery_backup_enabled: true
sigvild_gallery_backup_local_path: "{{ playbook_dir }}/backups/sigvild-gallery"

# Caddy integration (assumes caddy role provides these)
# caddy_sites_enabled_dir: /etc/caddy/sites-enabled
# caddy_user: caddy

100  roles/sigvild-gallery/tasks/backup.yml  Normal file
@@ -0,0 +1,100 @@
---
# Sigvild Gallery Data Backup Tasks

- name: Create local backup directory
  local_action:
    module: file
    path: "{{ sigvild_gallery_backup_local_path }}"
    state: directory
    mode: '0755'
  become: false
  run_once: true

- name: Display backup information
  debug:
    msg:
      - "Creating backup of Sigvild Gallery data..."
      - "Data directory: {{ sigvild_gallery_data_dir }}"
      - "Backup will be saved to: {{ sigvild_gallery_backup_local_path }}"
      - "Timestamp: {{ ansible_date_time.iso8601_basic_short }}"

- name: Check if data directory exists
  stat:
    path: "{{ sigvild_gallery_data_dir }}"
  register: data_dir_stat

- name: Fail if no data directory found
  fail:
    msg: "No data directory found at {{ sigvild_gallery_data_dir }}. Nothing to back up."
  when: not data_dir_stat.stat.exists

- name: Display data directory size
  command: du -sh {{ sigvild_gallery_data_dir }}
  register: data_size
  changed_when: false

- name: Show data size
  debug:
    msg: "Data directory size: {{ data_size.stdout }}"

- name: Stop sigvild-gallery service for consistent backup
  systemd:
    name: sigvild-gallery
    state: stopped
  register: service_stopped

- name: Create compressed backup of pb_data
  archive:
    path: "{{ sigvild_gallery_data_dir }}"
    dest: "/tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
    format: gz
    owner: "{{ sigvild_gallery_user }}"
    group: "{{ sigvild_gallery_user }}"
    mode: '0644'
  register: backup_created

- name: Verify backup contains critical files
  command: tar -tzf /tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz
  register: backup_contents
  changed_when: false
  failed_when: "'data.db' not in backup_contents.stdout"

- name: Display backup verification
  debug:
    msg: "Backup verified - contains required database files"

- name: Get backup file size
  stat:
    path: "/tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
  register: backup_file_stat

- name: Display backup file info
  debug:
    msg: "Backup file created: {{ (backup_file_stat.stat.size / 1024 / 1024) | round(2) }}MB"
  when: backup_file_stat.stat.exists

- name: Download backup to local machine
  fetch:
    src: "/tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
    dest: "{{ sigvild_gallery_backup_local_path }}/"
    flat: true
  register: backup_downloaded

- name: Clean up remote backup file
  file:
    path: "/tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
    state: absent

- name: Restart sigvild-gallery service
  systemd:
    name: sigvild-gallery
    state: started
  when: service_stopped.changed

- name: Display backup completion
  debug:
    msg:
      - "✅ Backup completed successfully!"
      - "Local backup location: {{ sigvild_gallery_backup_local_path }}/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
      - "Service has been restarted."

@@ -33,6 +33,10 @@
  notify: restart sigvild-gallery
  tags: [backend]

- name: Restore data from backup if available
  include_tasks: restore.yml
  tags: [backend, restore]

- name: Create data directory for PocketBase
  file:
    path: "{{ sigvild_gallery_data_dir }}"

126  roles/sigvild-gallery/tasks/restore.yml  Normal file
@@ -0,0 +1,126 @@
---
# Sigvild Gallery Data Restoration Tasks

- name: Check for existing data backup files
  local_action:
    module: find
    paths: "{{ sigvild_gallery_backup_local_path }}"
    patterns: "sigvild-gallery-backup-*.tar.gz"
  register: backup_files
  become: false

- name: Display backup search results
  debug:
    msg:
      - "Searching for backups in: {{ sigvild_gallery_backup_local_path }}"
      - "Found {{ backup_files.files | length }} backup file(s)"

- name: Display found backup files
  debug:
    msg: "Found backup: {{ item.path }} ({{ (item.size / 1024 / 1024) | round(2) }}MB, {{ item.mtime | to_datetime }})"
  loop: "{{ backup_files.files | sort(attribute='mtime') }}"
  when: backup_files.files | length > 0

- name: Check if data directory already exists with data
  stat:
    path: "{{ sigvild_gallery_data_dir }}/data.db"
  register: existing_data

- name: Warn about existing data
  debug:
    msg: "⚠️ WARNING: Existing data found at {{ sigvild_gallery_data_dir }}/data.db - restoration will overwrite it!"
  when:
    - existing_data.stat.exists
    - backup_files.files | length > 0

- name: Restore from latest backup
  block:
    - name: Get latest backup file
      set_fact:
        latest_backup: "{{ (backup_files.files | sort(attribute='mtime') | last).path }}"

    - name: Display restoration info
      debug:
        msg:
          - "Restoring from latest backup: {{ latest_backup | basename }}"
          - "Target directory: {{ sigvild_gallery_data_dir }}"

    - name: Stop sigvild-gallery service before restoration
      systemd:
        name: sigvild-gallery
        state: stopped
      register: service_stopped_for_restore
      ignore_errors: true  # Service might not exist on a fresh deployment

    - name: Upload backup to remote server
      copy:
        src: "{{ latest_backup }}"
        dest: "/tmp/restore-backup.tar.gz"
        owner: root
        group: root
        mode: '0644'

    - name: Verify uploaded backup integrity
      command: tar -tzf /tmp/restore-backup.tar.gz
      register: restore_contents
      changed_when: false
      failed_when: "'data.db' not in restore_contents.stdout"

    - name: Remove existing data directory if it exists
      file:
        path: "{{ sigvild_gallery_data_dir }}"
        state: absent
      when: existing_data.stat.exists

    - name: Ensure parent directory exists
      file:
        path: "{{ sigvild_gallery_home }}"
        state: directory
        owner: "{{ sigvild_gallery_user }}"
        group: "{{ sigvild_gallery_user }}"
        mode: '0755'

    - name: Extract backup to target location
      unarchive:
        src: "/tmp/restore-backup.tar.gz"
        dest: "{{ sigvild_gallery_home }}"
        remote_src: true
        owner: "{{ sigvild_gallery_user }}"
        group: "{{ sigvild_gallery_user }}"

    - name: Verify restoration
      stat:
        path: "{{ sigvild_gallery_data_dir }}/data.db"
      register: restored_data

    - name: Fail if restoration unsuccessful
      fail:
        msg: "Restoration failed - data.db not found after extraction"
      when: not restored_data.stat.exists

    - name: Get restored data size
      command: du -sh {{ sigvild_gallery_data_dir }}
      register: restored_size
      changed_when: false

    - name: Clean up uploaded backup file
      file:
        path: "/tmp/restore-backup.tar.gz"
        state: absent

    - name: Display restoration success
      debug:
        msg:
          - "✅ Data restoration completed successfully!"
          - "Restored from: {{ latest_backup | basename }}"
          - "Restored data size: {{ restored_size.stdout }}"
          - "Location: {{ sigvild_gallery_data_dir }}"

  when: backup_files.files | length > 0

- name: No backup available message
  debug:
    msg:
      - "ℹ️ No backup files found - starting with a fresh installation"
      - "Data directory will be created empty: {{ sigvild_gallery_data_dir }}"
  when: backup_files.files | length == 0
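The create-verify-fetch flow in backup.yml and the integrity check in restore.yml can be sketched in plain shell. This minimal sketch uses temporary stand-in paths (not the role's real directories) and mirrors the critical-file check the playbook performs:

```shell
# Stand-in for the PocketBase data directory (hypothetical paths)
workdir=$(mktemp -d)
mkdir -p "$workdir/pb_data"
echo "stub" > "$workdir/pb_data/data.db"

# Timestamped archive name, like ansible_date_time.iso8601_basic_short
ts=$(date +%Y%m%dT%H%M%S)
backup="$workdir/sigvild-gallery-backup-$ts.tar.gz"
tar -czf "$backup" -C "$workdir" pb_data

# Same verification the playbook performs: the archive must list data.db
matches=$(tar -tzf "$backup" | grep -c 'data\.db')
echo "data.db entries in archive: $matches"

rm -rf "$workdir"
```

Stopping the service before archiving (as the playbook does) matters for SQLite: copying `data.db` while writes are in flight can produce a corrupt archive that passes this listing check but fails on restore.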
@@ -22,6 +22,11 @@ valkey_protected_mode: true  # Enable protection for TCP
-valkey_unixsocket: "/run/valkey/valkey.sock"
-valkey_unixsocketperm: 777  # Allows container access
 
+# Unix Socket Configuration
+valkey_unix_socket_enabled: true
+valkey_unix_socket_path: "/var/run/valkey/valkey.sock"
+valkey_unix_socket_perm: "770"
+
+# Authentication
+valkey_password: "{{ vault_valkey_password }}"
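For orientation, with these defaults the socket-related portion of the rendered valkey.conf would come out roughly as follows (values substituted by hand, not taken from a real render):

```
unixsocket /var/run/valkey/valkey.sock
unixsocketperm 770
```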
@@ -15,15 +15,6 @@
     state: directory
     mode: '0755'
 
-- name: Create Valkey socket directory
-  file:
-    path: /run/valkey
-    state: directory
-    owner: valkey
-    group: valkey
-    mode: '0755'
-  when: valkey_unixsocket is defined
-
 - name: Check if Valkey data directory exists
   stat:
     path: "/var/lib/valkey"
@@ -37,6 +28,24 @@
     group: valkey
     mode: '0750'
 
+- name: Create Valkey Unix socket directory
+  file:
+    path: "{{ valkey_unix_socket_path | dirname }}"
+    state: directory
+    owner: valkey
+    group: valkey
+    mode: '0775'
+  when: valkey_unix_socket_enabled
+
+- name: Ensure socket directory is accessible
+  file:
+    path: "{{ valkey_unix_socket_path | dirname }}"
+    owner: valkey
+    group: valkey
+    mode: '0775'
+    recurse: yes
+  when: valkey_unix_socket_enabled
+
 - name: Deploy Valkey configuration file
   template:
     src: valkey.conf.j2
@@ -47,12 +56,19 @@
     backup: yes
   notify: restart valkey
 
-- name: Deploy custom Valkey service file
+- name: Create systemd override directory for Valkey security
+  file:
+    path: /etc/systemd/system/valkey.service.d
+    state: directory
+    mode: '0755'
+  when: valkey_systemd_security
+
+- name: Deploy Valkey systemd security override
   template:
-    src: valkey.service.j2
-    dest: /etc/systemd/system/valkey.service
+    src: systemd-override.conf.j2
+    dest: /etc/systemd/system/valkey.service.d/override.conf
     mode: '0644'
     backup: yes
+  when: valkey_systemd_security
   notify:
     - reload systemd
     - restart valkey
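The systemd-override.conf.j2 template itself is not part of this hunk; a typical hardening drop-in for a Valkey unit might look like the following (illustrative directives only — the real template may differ):

```
[Service]
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
ReadWritePaths=/var/lib/valkey /run/valkey
```

Shipping this as a drop-in under valkey.service.d, rather than replacing the whole unit file as before, keeps the distribution's unit intact and survives package upgrades.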
@@ -62,40 +78,67 @@
     name: valkey
     enabled: "{{ valkey_service_enabled }}"
     state: "{{ valkey_service_state }}"
-    daemon_reload: yes
+    daemon_reload: true
+  register: valkey_service_result
 
 - name: Wait for Valkey to be ready (TCP)
   wait_for:
     port: "{{ valkey_port }}"
-    host: "{{ valkey_bind | default('127.0.0.1') }}"
+    host: "{{ valkey_bind }}"
     timeout: 30
-  when:
-    - valkey_service_state == "started"
-    - valkey_port != 0
+  when: valkey_service_state == "started" and not valkey_unix_socket_enabled
 
-- name: Wait for Valkey Unix socket to be ready
+- name: Wait for Valkey socket file to exist
   wait_for:
-    path: "{{ valkey_unixsocket }}"
+    path: "{{ valkey_unix_socket_path }}"
     timeout: 30
-  when:
-    - valkey_service_state == "started"
-    - valkey_unixsocket is defined
+  when: valkey_service_state == "started" and valkey_unix_socket_enabled
 
-- name: Test Valkey connectivity (Unix socket)
-  command: valkey-cli -s {{ valkey_unixsocket }} -a "{{ valkey_password }}" ping
-  register: valkey_ping_result
+- name: Wait for Valkey to be ready (Unix Socket) - Try without auth first
+  command: redis-cli -s {{ valkey_unix_socket_path }} ping
+  register: valkey_socket_ping_noauth
+  until: >
+    valkey_socket_ping_noauth.stdout == "PONG" or
+    "NOAUTH" in (valkey_socket_ping_noauth.stdout + valkey_socket_ping_noauth.stderr)
+  retries: 15
+  delay: 2
   changed_when: false
-  failed_when: valkey_ping_result.stdout != "PONG"
-  when:
-    - valkey_service_state == "started"
-    - valkey_unixsocket is defined
+  failed_when: false
+  when: valkey_service_state == "started" and valkey_unix_socket_enabled
 
+- name: Wait for Valkey to be ready (Unix Socket) - Try with auth if needed
+  command: redis-cli -s {{ valkey_unix_socket_path }} -a {{ valkey_requirepass }} ping
+  register: valkey_socket_ping_auth
+  until: valkey_socket_ping_auth.stdout == "PONG"
+  retries: 5
+  delay: 2
+  changed_when: false
+  failed_when: valkey_socket_ping_auth.rc != 0
+  when: >
+    valkey_service_state == "started" and valkey_unix_socket_enabled and
+    (valkey_socket_ping_noauth.stdout != "PONG") and
+    ("NOAUTH" in (valkey_socket_ping_noauth.stdout + valkey_socket_ping_noauth.stderr) or valkey_socket_ping_noauth.rc != 0)
 
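The two-task handshake above (ping without auth, fall back to an authenticated ping only when the first attempt reports NOAUTH) reduces to this control flow. The `probe` function is a stub standing in for `redis-cli -s <socket> ping`, so the sketch runs without a live Valkey instance:

```shell
VALKEY_REQUIREPASS_SET=1   # pretend requirepass is configured
VALKEY_PASSWORD=secret     # placeholder, not a real credential

probe() {
    # Stub for: redis-cli -s "$socket" [-a password] ping
    if [ "$VALKEY_REQUIREPASS_SET" = "1" ] && [ -z "${1:-}" ]; then
        echo "NOAUTH Authentication required."
        return 1
    fi
    echo "PONG"
}

# First attempt: no auth. A PONG means auth is disabled; NOAUTH means the
# server is up but wants a password -- either way it is ready.
result=$(probe || true)
if [ "$result" = "PONG" ]; then
    echo "ready (no auth needed)"
elif printf '%s' "$result" | grep -q NOAUTH; then
    # Second attempt: retry with the password, mirroring the follow-up task
    result=$(probe "$VALKEY_PASSWORD")
    [ "$result" = "PONG" ] && echo "ready (authenticated)"
fi
```

Treating NOAUTH as "ready" is the key trick: the unauthenticated probe doubles as a liveness check even when a password is required, so the retry loop never wastes its budget on auth failures.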
+- name: Test Valkey connectivity (TCP)
+  command: redis-cli -h {{ valkey_bind }} -p {{ valkey_port }} -a {{ valkey_requirepass }} ping
+  register: valkey_ping_result_tcp
+  changed_when: false
+  failed_when: valkey_ping_result_tcp.stdout != "PONG"
+  when: valkey_service_state == "started" and not valkey_unix_socket_enabled
+
+- name: Test Valkey connectivity (Unix Socket)
+  command: redis-cli -s {{ valkey_unix_socket_path }} -a {{ valkey_requirepass }} ping
+  register: valkey_ping_result_socket
+  changed_when: false
+  failed_when: valkey_ping_result_socket.stdout != "PONG"
+  when: valkey_service_state == "started" and valkey_unix_socket_enabled
 
 - name: Display Valkey infrastructure status
   debug:
     msg: |
       ✅ Valkey infrastructure ready!
 
-      📡 Service: {{ valkey_bind }}:{{ valkey_port }}
+      📡 Service: {% if valkey_unix_socket_enabled %}Unix Socket ({{ valkey_unix_socket_path }}){% else %}{{ valkey_bind }}:{{ valkey_port }}{% endif %}
       🔒 Auth: Password protected
       💾 Persistence: {{ 'RDB enabled' if valkey_save_enabled else 'Memory only' }}
       🗄️ Databases: {{ valkey_databases }} available (0-{{ valkey_databases - 1 }})
@@ -105,4 +148,5 @@
       📋 Application Integration:
       - Use database numbers 1-{{ valkey_databases - 1 }} for applications
       - Database 0 reserved for system/testing
+      - {% if valkey_unix_socket_enabled %}Unix socket: {{ valkey_unix_socket_path }}{% else %}TCP: {{ valkey_bind }}:{{ valkey_port }}{% endif %}
       - Redis-compatible: applications can use REDIS_* or VALKEY_* env vars
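A consuming application has to make the same socket-vs-TCP choice when it builds its cache endpoint. The sketch below shows one way to derive a connection URL from this role's variables; the variable names come from the diff, but the URL shapes follow common redis-client conventions and are an assumption, not part of this role (credentials omitted for brevity):

```shell
# Derive a cache endpoint URL from the role's connection variables.
valkey_unix_socket_enabled=true
valkey_unix_socket_path=/var/run/valkey/valkey.sock
valkey_bind=127.0.0.1
valkey_port=6379
app_db=1   # databases 1+ are for applications, 0 is reserved

if [ "$valkey_unix_socket_enabled" = true ]; then
    REDIS_URL="unix://${valkey_unix_socket_path}?db=${app_db}"
else
    REDIS_URL="redis://${valkey_bind}:${valkey_port}/${app_db}"
fi
echo "$REDIS_URL"
```

This runs once at deploy time (or in an env-file template), after which the application only ever sees a single REDIS_URL regardless of transport.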
@@ -8,20 +8,19 @@
 # Network Configuration
 # =================================================================
 
-{% if valkey_bind %}
-# Bind to specified interfaces
+# Bind to localhost only for security (like PostgreSQL)
 bind {{ valkey_bind }}
-{% else %}
-# No TCP binding - Unix socket only
-{% endif %}
 
-# Valkey port (0 = disable TCP)
+# Valkey port
 port {{ valkey_port }}
 
-# Unix socket configuration
-{% if valkey_unixsocket is defined %}
-unixsocket {{ valkey_unixsocket }}
-unixsocketperm {{ valkey_unixsocketperm }}
+{% if valkey_unix_socket_enabled %}
+# Unix Socket Configuration
+unixsocket {{ valkey_unix_socket_path }}
+unixsocketperm {{ valkey_unix_socket_perm }}
+
+# Enable both TCP and Unix socket (for compatibility during transition)
+# To disable TCP completely, comment out the port line above
 {% endif %}
 
 # Protected mode - requires authentication
@@ -41,7 +40,7 @@ tcp-keepalive {{ valkey_tcp_keepalive }}
 # =================================================================
 
 # Require password for all operations
-requirepass {{ valkey_password }}
+requirepass {{ valkey_requirepass }}
 
 # =================================================================
 # Memory Management
@@ -137,4 +136,4 @@ client-output-buffer-limit pubsub 32mb 8mb 60
 # - Database 3+: Future applications
 #
 # Connection example:
-# valkey-cli -h {{ valkey_bind }} -p {{ valkey_port }} -a {{ valkey_password }} -n 1
+# redis-cli -h {{ valkey_bind }} -p {{ valkey_port }} -a {{ valkey_requirepass }} -n 1