Fix: Complete authentik Quadlet implementation with networking solution

Resolves authentik deployment issues by implementing proper Podman Quadlet
configuration and fixing networking for external access through Caddy.

Core Fixes:
• Add missing [Install] sections to container Quadlet files for systemd service generation
• Fix pod references from 'systemd-authentik' to 'authentik.pod' for proper Quadlet linking
• Remove problematic --userns=keep-id to use proper rootless user namespaces
• Configure subuid/subgid ranges for authentik user (200000:65536)
• Update networking to bind 0.0.0.0:9000 only (remove unnecessary HTTPS port 9443)
• Add AUTHENTIK_LISTEN__HTTP=0.0.0.0:9000 environment configuration
• Fix Caddy reverse proxy to use HTTP backend instead of HTTPS

Infrastructure Updates:
• Enhance PostgreSQL role with Unix socket configuration and user management
• Improve Valkey role with proper systemd integration and socket permissions
• Add comprehensive service integration documentation
• Update deployment playbooks with backup and restore capabilities

Security Improvements:
• Secure network isolation with Caddy SSL termination
• Reduced attack surface by removing direct HTTPS container exposure
• Proper rootless container configuration with user namespace mapping

Result: authentik is now fully operational with external HTTPS access via auth.jnss.me.
All systemd services (authentik-pod, authentik-server, authentik-worker) are running correctly.
2025-12-04 19:42:31 +01:00
parent df4ae0eb17
commit b42ee2a22b
25 changed files with 986 additions and 92 deletions

.gitignore
View File

@@ -1,2 +1,4 @@
vault-password-file
vault.yml
backups/
.*temp/

View File

@@ -19,3 +19,18 @@ Reverse proxy.
### Containers
Containers are managed by rootless Podman.
## Documentation
### Service Integration
- [Service Integration Guide](docs/service-integration-guide.md) - How to add containerized services with PostgreSQL/Valkey access
### Role Documentation
- [Authentik Role](roles/authentik/README.md) - Authentication service with Unix socket implementation
- [PostgreSQL Role](roles/postgresql/README.md) - Database service with Unix socket support
- [Valkey Role](roles/valkey/README.md) - Cache service with Unix socket support
- [Caddy Role](roles/caddy/README.md) - Reverse proxy and SSL termination
### Infrastructure Guides
- [Deployment Guide](docs/deployment-guide.md) - Complete deployment walkthrough
- [Security Hardening](docs/security-hardening.md) - Security configuration and best practices

View File

@@ -0,0 +1,312 @@
# Service Integration Guide
This guide explains how to add new containerized services to rick-infra with PostgreSQL and Valkey/Redis access via Unix sockets.
## Overview
Rick-infra provides a standardized approach for containerized services to reach infrastructure services through Unix sockets, preserving container isolation while avoiding TCP overhead.
## Architecture Pattern
```
┌─────────────────────────────────────────────────────────────┐
│ Application Service (Podman Container) │
│ │
│ ┌─────────────────┐ │
│ │ Your Container │ │
│ │ UID: service │ (host user namespace) │
│ │ Groups: service,│ │
│ │ postgres, │ (supplementary groups preserved) │
│ │ valkey │ │
│ └─────────────────┘ │
│ │ │
│ └─────────────────────┐ │
└─────────────────────────────────│───────────────────────────┘
┌───────────────▼──────────────┐
│ Host Infrastructure Services │
│ │
│ PostgreSQL Unix Socket │
│ /var/run/postgresql/ │
│ │
│ Valkey Unix Socket │
│ /var/run/valkey/ │
└──────────────────────────────┘
```
## Prerequisites
Your service must:
1. Run as a rootless **Podman container**
2. Be managed as a **systemd user service** (via Quadlet)
3. Run under a **dedicated system user**
## Step 1: User Setup
Create a dedicated system user for your service and add it to infrastructure groups:
```yaml
- name: Create service user
user:
name: myservice
system: true
shell: /bin/false
home: /opt/myservice
create_home: true
- name: Add service user to infrastructure groups
user:
name: myservice
groups:
- postgres # For PostgreSQL access
- valkey # For Valkey/Redis access
append: true
```
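After these tasks run, it is worth confirming on the host that the memberships actually took effect. A minimal check, using the `myservice` user from the example above (the UIDs shown are only illustrative):

```bash
# Confirm the service user exists and carries the infrastructure groups
id myservice
# Expected output includes the postgres and valkey groups, e.g.:
# uid=963(myservice) gid=963(myservice) groups=963(myservice),968(postgres),965(valkey)

# Cross-check from the group side
getent group postgres valkey
```

Group changes only apply to processes started afterwards, so restart the service (and its containers) after modifying memberships.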
## Step 2: Container Configuration
### Pod Configuration (`myservice.pod`)
```ini
[Unit]
Description=My Service Pod
[Pod]
PublishPort=127.0.0.1:8080:8080
PodmanArgs=--userns=host
[Service]
Restart=always
TimeoutStartSec=900
[Install]
WantedBy=default.target
```
**Key Points**:
- `--userns=host` preserves host user namespace
- Standard port publishing for network access
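Quadlet turns `myservice.pod` into a generated `myservice-pod.service` unit the next time the user's systemd instance reloads. A quick sketch for verifying the generated unit, assuming lingering is enabled for the user and the Quadlet files live in the user's default Quadlet directory:

```bash
# Place the Quadlet files in the service user's Quadlet directory
install -o myservice -g myservice -m 0644 myservice.pod myservice.container \
    /opt/myservice/.config/containers/systemd/

# Run systemctl as the service user; XDG_RUNTIME_DIR points systemctl at the
# user's session bus (requires: loginctl enable-linger myservice)
uid=$(id -u myservice)
sudo -u myservice XDG_RUNTIME_DIR=/run/user/$uid systemctl --user daemon-reload
sudo -u myservice XDG_RUNTIME_DIR=/run/user/$uid systemctl --user status myservice-pod.service
```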
### Container Configuration (`myservice.container`)
```ini
[Unit]
Description=My Service Container
[Container]
Image=my-service:latest
Pod=myservice.pod
EnvironmentFile=/opt/myservice/.env
User={{ service_uid }}:{{ service_gid }}
Annotation=run.oci.keep_original_groups=1
# Volume mounts for sockets
Volume=/var/run/postgresql:/var/run/postgresql:Z
Volume=/var/run/valkey:/var/run/valkey:Z
# Application volumes
Volume=/opt/myservice/data:/data
Volume=/opt/myservice/logs:/logs
Exec=my-service
[Service]
Restart=always
[Install]
WantedBy=default.target
```
**Key Points**:
- `Annotation=run.oci.keep_original_groups=1` preserves supplementary groups
- Mount socket directories with `:Z` for SELinux relabeling
- Use host UID/GID for the service user
## Step 3: Service Configuration
### PostgreSQL Connection
Use Unix socket connection strings:
```bash
# Environment variable
DATABASE_URL=postgresql://myservice@/myservice_db?host=/var/run/postgresql
# Or separate variables
DB_HOST=/var/run/postgresql
DB_USER=myservice
DB_NAME=myservice_db
# No DB_PORT needed for Unix sockets
```
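Setting `host` to a directory is how libpq selects a Unix socket, so the exact URI the application will use can be sanity-checked with `psql` first (a sketch using the example names above):

```bash
# psql speaks the same libpq URI as the application;
# host=<directory> selects the socket inside that directory
sudo -u myservice psql "postgresql://myservice@/myservice_db?host=/var/run/postgresql" \
  -c "SELECT current_user, current_database();"
```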
### Valkey/Redis Connection
**Correct Format** (avoids URL parsing issues):
```bash
# Single URL format (recommended)
CACHE_URL=unix:///var/run/valkey/valkey.sock?db=2&password=your_password
# Alternative format
REDIS_URL=redis://localhost/2?unix_socket_path=/var/run/valkey/valkey.sock
```
**Avoid** separate HOST/DB variables which can cause port parsing issues:
```bash
# DON'T USE - causes parsing problems
REDIS_HOST=unix:///var/run/valkey/valkey.sock
REDIS_DB=2
```
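Before wiring the URL into an application, you can confirm that the socket path, password, and database number it encodes are all accepted (values taken from the examples above):

```bash
# Exercise exactly what CACHE_URL encodes: socket path, password, db number
redis-cli -s /var/run/valkey/valkey.sock -a your_password -n 2 ping
# Expected: PONG (a NOAUTH error means the password part is wrong or missing)
```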
## Step 4: Database Setup
Add database setup tasks to your role:
```yaml
- name: Create application database
postgresql_db:
name: "{{ service_db_name }}"
owner: "{{ service_db_user }}"
encoding: UTF-8
lc_collate: en_US.UTF-8
lc_ctype: en_US.UTF-8
become_user: postgres
- name: Create application database user
postgresql_user:
name: "{{ service_db_user }}"
password: "{{ service_db_password }}"
db: "{{ service_db_name }}"
priv: ALL
become_user: postgres
- name: Grant connect privileges
postgresql_privs:
db: "{{ service_db_name }}"
role: "{{ service_db_user }}"
objs: ALL_IN_SCHEMA
privs: ALL
become_user: postgres
```
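With peer authentication on the local socket, the result can be verified directly from the host; a minimal check, assuming the `myservice` names used above:

```bash
# List the database and its owner as the postgres superuser
sudo -u postgres psql -c "\l myservice_db"

# Confirm the application user can connect and has usable privileges
sudo -u myservice psql -h /var/run/postgresql -U myservice myservice_db \
  -c "CREATE TABLE IF NOT EXISTS conn_test(id int); DROP TABLE conn_test;"
```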
## Step 5: Service Role Template
Create an Ansible role using this pattern:
```
myservice/
├── defaults/main.yml
├── handlers/main.yml
├── tasks/
│ ├── main.yml
│ ├── database.yml
│ └── cache.yml
├── templates/
│ ├── myservice.env.j2
│ ├── myservice.pod
│ ├── myservice.container
│ └── myservice.caddy.j2
└── README.md
```
### Example Environment Template
```bash
# My Service Configuration
# Generated by Ansible - DO NOT EDIT
# Database Configuration (Unix Socket)
DATABASE_URL=postgresql://{{ service_db_user }}@/{{ service_db_name }}?host={{ postgresql_unix_socket_directories }}
DB_PASSWORD={{ service_db_password }}
# Cache Configuration (Unix Socket)
CACHE_URL=unix://{{ valkey_unix_socket_path }}?db={{ service_valkey_db }}&password={{ valkey_password }}
# Application Configuration
SECRET_KEY={{ service_secret_key }}
LOG_LEVEL={{ service_log_level }}
BIND_ADDRESS={{ service_bind_address }}:{{ service_port }}
```
## Troubleshooting
### Socket Permission Issues
If you get permission denied errors:
1. **Check group membership**:
```bash
groups myservice
# Should show: myservice postgres valkey
```
2. **Verify container annotations**:
```bash
podman inspect myservice --format='{{.Config.Annotations}}'
# Should include: run.oci.keep_original_groups=1
```
3. **Check socket permissions**:
```bash
ls -la /var/run/postgresql/
ls -la /var/run/valkey/
```
### Connection Issues
1. **Test socket access from host**:
```bash
sudo -u myservice psql -h /var/run/postgresql -U myservice myservice_db
sudo -u myservice redis-cli -s /var/run/valkey/valkey.sock ping
```
2. **Check URL format**:
- Use single `CACHE_URL` instead of separate variables
- Include password in URL if required
- Verify database number is correct
### Container Issues
1. **Check container user**:
```bash
podman exec myservice id
# Should show correct UID and supplementary groups
```
2. **Verify socket mounts**:
```bash
podman exec myservice ls -la /var/run/postgresql/
podman exec myservice ls -la /var/run/valkey/
```
## Best Practices
1. **Security**:
- Use dedicated system users for each service
- Limit group memberships to required infrastructure
- Use vault variables for secrets
2. **Configuration**:
- Use single URL format for Redis connections
- Mount socket directories with appropriate SELinux labels
- Include `run.oci.keep_original_groups=1` annotation
3. **Deployment**:
- Test socket access before container deployment
- Use proper dependency ordering in playbooks
- Include database and cache setup tasks
4. **Monitoring**:
- Monitor socket file permissions
- Check service logs for connection errors
- Verify group memberships after user changes
## Example Integration
See the `authentik` role for a complete example of this pattern:
- **Templates**: `roles/authentik/templates/`
- **Tasks**: `roles/authentik/tasks/`
- **Documentation**: `roles/authentik/README.md`
This provides a working reference implementation for Unix socket integration.

View File

@@ -38,6 +38,33 @@ sigvild_gallery_pb_su_password: "{{ vault_pb_su_password}}"
sigvild_gallery_host_password: "{{ vault_sigvild_host_password }}"
sigvild_gallery_guest_password: "{{ vault_sigvild_guest_password }}"
# =================================================================
# Authentik Configuration
# =================================================================
authentik_domain: "auth.jnss.me"
# Database configuration
authentik_db_name: "authentik"
authentik_db_user: "authentik"
authentik_db_password: "{{ vault_authentik_db_password }}"
# Cache configuration
authentik_valkey_db: 1
# Core configuration
authentik_secret_key: "{{ vault_authentik_secret_key }}"
authentik_default_admin_email: "admin@jnss.me"
authentik_default_admin_password: "{{ vault_authentik_admin_password }}"
# Service configuration
authentik_service_enabled: true
authentik_service_state: "started"
# Infrastructure socket configuration
postgresql_unix_socket_enabled: true
postgresql_listen_addresses: "" # Socket-only mode (no TCP)
valkey_unix_socket_enabled: true
# =================================================================
# Security & Logging
# =================================================================

View File

@@ -0,0 +1,66 @@
---
# Sigvild Gallery Data Backup Playbook
#
# This playbook creates a compressed backup of all Sigvild Gallery production data
# including the PocketBase database and uploaded files.
- name: Backup Sigvild Gallery Production Data
hosts: arch-vps
become: true
gather_facts: true
vars:
# Backup configuration - can be overridden with --extra-vars
sigvild_gallery_backup_local_path: "{{ playbook_dir }}/backups/sigvild-gallery"
pre_tasks:
- name: Ensure local backup directory exists
local_action:
module: file
path: "{{ sigvild_gallery_backup_local_path }}"
state: directory
mode: '0755'
become: false
run_once: true
- name: Display backup operation info
debug:
msg:
- "🔄 Starting Sigvild Gallery Data Backup"
- "Target server: {{ inventory_hostname }}"
- "Local backup storage: {{ sigvild_gallery_backup_local_path }}"
- "Timestamp: {{ ansible_date_time.iso8601 }}"
tasks:
- name: Execute backup tasks
include_role:
name: sigvild-gallery
tasks_from: backup
tags: [backup, data]
post_tasks:
- name: List local backups
local_action:
module: find
paths: "{{ sigvild_gallery_backup_local_path }}"
patterns: "sigvild-gallery-backup-*.tar.gz"
register: all_backups
become: false
- name: Display backup summary
debug:
msg:
- "✅ Backup operation completed successfully!"
- "Total backups available: {{ all_backups.files | length }}"
- "Latest backup: sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
- "Backup location: {{ sigvild_gallery_backup_local_path }}"
- ""
- "⚠️ IMPORTANT: Store this backup safely before formatting your server!"
- "💡 To restore: Run normal deployment - restoration is automatic if backup exists"
- name: Show backup file details
debug:
msg: "Backup: {{ item.path | basename }} - {{ (item.size / 1024 / 1024) | round(2) }}MB - {{ item.mtime | to_datetime('%Y-%m-%d %H:%M:%S') }}"
loop: "{{ all_backups.files | sort(attribute='mtime') | reverse }}"
loop_control:
label: "{{ item.path | basename }}"

View File

@@ -224,4 +224,82 @@ service.example.com {
    reverse_proxy localhost:8080
}
```
## Technical Implementation Notes
### Unix Socket Access Solution
This role implements a sophisticated solution for containerized Unix socket access:
**Challenge**: Containers need to access Unix sockets owned by different system services (PostgreSQL, Valkey) while maintaining security isolation.
**Solution Components**:
1. **User Namespace Preservation**: `--userns=host` in pod configuration
- Preserves host UID/GID mapping within containers
- Allows direct access to host socket files
2. **Group Membership Preservation**: `Annotation=run.oci.keep_original_groups=1` in containers
- Ensures supplementary group memberships are maintained in containers
- Enables access to postgres and valkey groups within containers
3. **Correct Redis URL Format**: `AUTHENTIK_CACHE__URL=unix://...?db=N&password=...`
- Avoids Django Redis client URL parsing issues
- Prevents incorrect port appending to Unix socket paths
4. **Host Service Integration**: Authentik user added to service groups
- Added to `postgres` group for PostgreSQL socket access
- Added to `valkey` group for Valkey socket access
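These components can be spot-checked on a live host; a sketch, assuming the role's default container names and that the containers run rootless under the `authentik` user:

```bash
uid=$(id -u authentik)

# (4) Host-side group membership for the authentik user
id authentik

# (2) Supplementary groups preserved inside the server container
sudo -u authentik XDG_RUNTIME_DIR=/run/user/$uid podman exec authentik-server id

# (1) Host sockets visible from inside the container
sudo -u authentik XDG_RUNTIME_DIR=/run/user/$uid podman exec authentik-server \
  ls -l /var/run/postgresql/ /var/run/valkey/
```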
### Container Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Authentik Pod (--userns=host) │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Server Container│ │ Worker Container│ │
│ │ UID: 963 (host) │ │ UID: 963 (host) │ │
│ │ Groups: 963, │ │ Groups: 963, │ │
│ │ 968(postgres),│ │ 968(postgres),│ │
│ │ 965(valkey) │ │ 965(valkey) │ │
│ └─────────────────┘ └─────────────────┘ │
│ │ │ │
│ └────────────────────┴─────────────┐ │
└─────────────────────────────────────────────│─────────────┘
┌───────────────▼──────────────┐
│ Host Unix Sockets │
│ │
│ /var/run/postgresql/ │
│ ├─ .s.PGSQL.5432 │
│ │ (postgres:postgres 0770) │
│ │
│ /var/run/valkey/ │
│ ├─ valkey.sock │
│ (valkey:valkey 0770) │
└──────────────────────────────┘
```
### Security Implications
**Maintained Security**:
- Container network isolation preserved (no `--network=host`)
- Individual container user/group isolation
- Standard Podman security features active
- Principle of least privilege through group membership
**Trade-offs**:
- Containers share host user namespace (reduced UID isolation)
- Group membership grants broader access to service files
- Requires careful service group management
### Compatibility
This solution is:
- **Portable**: Works regardless of UID assignments
- **Maintainable**: No custom subuid/subgid configuration
- **Performant**: Unix sockets avoid TCP overhead
- **Secure**: Maintains container isolation where it matters
- **Standard**: Uses documented Podman/OCI features

View File

@@ -14,10 +14,11 @@ authentik_group: authentik
 authentik_home: /opt/authentik
 authentik_data_dir: "{{ authentik_home }}/data"
 authentik_media_dir: "{{ authentik_home }}/media"
+authentik_log_dir: "{{ authentik_home }}/logs"
 # Container configuration
-authentik_version: "latest"
-authentik_image: "ghcr.io/goauthentik/authentik"
+authentik_version: "2025.10"
+authentik_image: "ghcr.io/goauthentik/server"
 # Service management
 authentik_service_enabled: true
@@ -43,8 +44,7 @@ authentik_valkey_db: 1 # Use database 1 for Authentik
 authentik_domain: "auth.jnss.me"
 authentik_http_port: 9000
-authentik_https_port: 9443
-authentik_bind_address: "127.0.0.1"
+authentik_bind_address: "0.0.0.0"
 # =================================================================
 # Authentik Core Configuration
@@ -107,4 +107,4 @@ postgresql_unix_socket_directories: "/var/run/postgresql"
 # Valkey socket configuration (managed by valkey role)
 valkey_unix_socket_path: "/var/run/valkey/valkey.sock"
 valkey_password: "{{ vault_valkey_password }}"

View File

@@ -42,7 +42,7 @@
     redis-cli -s {{ valkey_unix_socket_path }}
     -a {{ valkey_password }}
     -n {{ authentik_valkey_db }}
-    SET authentik:healthcheck "{{ ansible_date_time.iso8601 }}"
+    SET authentik:healthcheck "deployed"
   become: true
   become_user: "{{ authentik_user }}"
   register: valkey_write_test

View File

@@ -28,29 +28,29 @@
- "{{ authentik_data_dir }}" - "{{ authentik_data_dir }}"
- "{{ authentik_media_dir }}" - "{{ authentik_media_dir }}"
- "{{ authentik_user_quadlet_dir }}" - "{{ authentik_user_quadlet_dir }}"
- "{{ authentik_log_dir }}"
- name: Get authentik user UID
getent:
database: passwd
key: "{{ authentik_user }}"
register: authentik_user_info
- name: Set authentik UID variable
set_fact:
authentik_uid: "{{ authentik_user_info.ansible_facts.getent_passwd[authentik_user][1] }}"
- name: Enable lingering for authentik user (services persist without login) - name: Enable lingering for authentik user (services persist without login)
command: loginctl enable-linger {{ authentik_user }} command: loginctl enable-linger {{ authentik_user }}
register: linger_result register: linger_result
changed_when: linger_result.rc == 0 changed_when: linger_result.rc == 0
- name: Ensure XDG runtime directory exists
file:
path: "/run/user/{{ authentik_uid }}" - name: Get authentik user UID and GID for container configuration
state: directory shell: |
owner: "{{ authentik_user }}" echo "uid=$(id -u {{ authentik_user }})"
group: "{{ authentik_group }}" echo "gid=$(id -g {{ authentik_user }})"
mode: '0700' register: authentik_user_info
changed_when: false
tags: [setup]
- name: Set authentik UID/GID facts for container templates
set_fact:
authentik_uid: "{{ authentik_user_info.stdout_lines[0] | regex_replace('uid=', '') }}"
authentik_gid: "{{ authentik_user_info.stdout_lines[1] | regex_replace('gid=', '') }}"
tags: [setup]
- name: Setup database access and permissions - name: Setup database access and permissions
include_tasks: database.yml include_tasks: database.yml
@@ -134,15 +134,12 @@
timeout: 30 timeout: 30
when: valkey_unix_socket_enabled when: valkey_unix_socket_enabled
- name: Reload systemd daemon for Quadlet (user scope) - name: Ensure systemd user session is started
systemd: systemd:
daemon_reload: true name: "user@{{ authentik_uid }}.service"
scope: user state: started
become: true scope: system
become_user: "{{ authentik_user }}" register: user_session_start
environment:
XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"
tags: [containers, deployment]
- name: Enable and start Authentik pod (user scope) - name: Enable and start Authentik pod (user scope)
systemd: systemd:
@@ -153,34 +150,6 @@
daemon_reload: true daemon_reload: true
become: true become: true
become_user: "{{ authentik_user }}" become_user: "{{ authentik_user }}"
environment:
XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"
tags: [containers, service]
- name: Enable and start Authentik server (user scope)
systemd:
name: "{{ authentik_container_server_name }}"
enabled: "{{ authentik_service_enabled }}"
state: "{{ authentik_service_state }}"
scope: user
daemon_reload: true
become: true
become_user: "{{ authentik_user }}"
environment:
XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"
tags: [containers, service]
- name: Enable and start Authentik worker (user scope)
systemd:
name: "{{ authentik_container_worker_name }}"
enabled: "{{ authentik_service_enabled }}"
state: "{{ authentik_service_state }}"
scope: user
daemon_reload: true
become: true
become_user: "{{ authentik_user }}"
environment:
XDG_RUNTIME_DIR: "/run/user/{{ authentik_uid }}"
tags: [containers, service] tags: [containers, service]
- name: Wait for Authentik to be ready - name: Wait for Authentik to be ready

View File

@@ -6,8 +6,15 @@ Requires=authentik-pod.service
 [Container]
 ContainerName={{ authentik_container_server_name }}
 Image={{ authentik_image }}:{{ authentik_version }}
-Pod=authentik
+Pod=authentik.pod
 EnvironmentFile={{ authentik_home }}/.env
+User={{ authentik_uid }}:{{ authentik_gid }}
+Annotation=run.oci.keep_original_groups=1
+# Logging configuration
+LogDriver=k8s-file
+LogOpt=path={{ authentik_home }}/logs/server.log
+Volume={{ authentik_home }}/logs:{{ authentik_home }}/logs
 # Volume mounts for data and sockets
 Volume={{ authentik_media_dir }}:/media

View File

@@ -6,8 +6,15 @@ Requires=authentik-pod.service
 [Container]
 ContainerName={{ authentik_container_worker_name }}
 Image={{ authentik_image }}:{{ authentik_version }}
-Pod=authentik
+Pod=authentik.pod
 EnvironmentFile={{ authentik_home }}/.env
+User={{ authentik_uid }}:{{ authentik_gid }}
+Annotation=run.oci.keep_original_groups=1
+# Logging configuration
+LogDriver=k8s-file
+LogOpt=path={{ authentik_home }}/logs/worker.log
+Volume={{ authentik_home }}/logs:{{ authentik_home }}/logs
 # Volume mounts for data and sockets
 Volume={{ authentik_media_dir }}:/media

View File

@@ -8,11 +8,8 @@ AUTHENTIK_POSTGRESQL__USER={{ authentik_db_user }}
 AUTHENTIK_POSTGRESQL__PASSWORD={{ authentik_db_password }}
 # No port needed for Unix socket
-# Valkey/Redis Configuration (Unix Socket)
-AUTHENTIK_REDIS__HOST=unix://{{ valkey_unix_socket_path }}
-AUTHENTIK_REDIS__PASSWORD={{ valkey_password }}
-AUTHENTIK_REDIS__DB={{ authentik_valkey_db }}
-# No port needed for Unix socket
+# Valkey/Redis Configuration (Unix Socket) - Using cache URL format to avoid port parsing issues
+AUTHENTIK_CACHE__URL=unix://{{ valkey_unix_socket_path }}?db={{ authentik_valkey_db }}&password={{ valkey_password }}
 # Authentik Core Configuration
 AUTHENTIK_SECRET_KEY={{ authentik_secret_key }}
@@ -26,7 +23,6 @@ AUTHENTIK_DISABLE_STARTUP_ANALYTICS=true
 # Network binding
 AUTHENTIK_LISTEN__HTTP={{ authentik_bind_address }}:{{ authentik_http_port }}
-AUTHENTIK_LISTEN__HTTPS={{ authentik_bind_address }}:{{ authentik_https_port }}
 {% if authentik_email_enabled %}
 # Email Configuration
@@ -40,4 +36,4 @@ AUTHENTIK_EMAIL__FROM={{ authentik_email_from }}
 # Default admin user
 AUTHENTIK_BOOTSTRAP_PASSWORD={{ authentik_default_admin_password }}
 AUTHENTIK_BOOTSTRAP_EMAIL={{ authentik_default_admin_email }}

View File

@@ -2,10 +2,8 @@
 Description=Authentik Authentication Pod
 [Pod]
-PodName=authentik
-PublishPort={{ authentik_bind_address }}:{{ authentik_http_port }}:{{ authentik_http_port }}
-PublishPort={{ authentik_bind_address }}:{{ authentik_https_port }}:{{ authentik_https_port }}
-PodmanArgs=--userns=keep-id
+PublishPort=0.0.0.0:{{ authentik_http_port }}:{{ authentik_http_port }}
+PodmanArgs=
 [Service]
 Restart=always

View File

@@ -13,10 +13,15 @@
 postgresql_service_enabled: true
 postgresql_service_state: "started"
-# Network Security (localhost only)
+# Network Security
 postgresql_listen_addresses: "localhost"
 postgresql_port: 5432
+# Unix Socket Configuration
+postgresql_unix_socket_enabled: true
+postgresql_unix_socket_directories: "/var/run/postgresql"
+postgresql_unix_socket_permissions: "0770"
 # Authentication
 postgresql_auth_method: "scram-sha-256"

View File

@@ -25,7 +25,7 @@
     --auth-local=peer
     --auth-host={{ postgresql_auth_method }}
     {{ '--data-checksums' if postgresql_data_checksums else '' }}
-  become: yes
+  become: true
   become_user: postgres
   when: not postgresql_initialized.stat.exists
   notify: restart postgresql
@@ -67,27 +67,49 @@
     - reload systemd
     - restart postgresql
+- name: Create PostgreSQL Unix socket directory
+  file:
+    path: "{{ postgresql_unix_socket_directories }}"
+    state: directory
+    owner: postgres
+    group: postgres
+    mode: '0770'
+  when: postgresql_unix_socket_enabled
 - name: Enable and start PostgreSQL service
   systemd:
     name: postgresql
     enabled: "{{ postgresql_service_enabled }}"
     state: "{{ postgresql_service_state }}"
-    daemon_reload: yes
+    daemon_reload: true
-- name: Wait for PostgreSQL to be ready
+- name: Wait for PostgreSQL to be ready (TCP)
   wait_for:
     port: "{{ postgresql_port }}"
     host: "{{ postgresql_listen_addresses }}"
     timeout: 30
-  when: postgresql_service_state == "started"
+  when: postgresql_service_state == "started" and postgresql_listen_addresses != ""
+- name: Wait for PostgreSQL to be ready (Unix Socket)
+  postgresql_ping:
+    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
+    login_user: postgres
+  become: true
+  become_user: postgres
+  register: postgresql_socket_ready
+  until: postgresql_socket_ready is succeeded
+  retries: 10
+  delay: 3
+  when: postgresql_service_state == "started" and postgresql_unix_socket_enabled and postgresql_listen_addresses == ""
 - name: Display PostgreSQL infrastructure status
   debug:
     msg: |
       ✅ PostgreSQL infrastructure ready!
-      📡 Service: {{ postgresql_listen_addresses }}:{{ postgresql_port }}
+      📡 Service: {% if postgresql_unix_socket_enabled and postgresql_listen_addresses == "" %}Unix Socket ({{ postgresql_unix_socket_directories }}){% else %}{{ postgresql_listen_addresses }}:{{ postgresql_port }}{% endif %}
       🔒 Auth: {{ postgresql_auth_method }}
       📊 Checksums: {{ 'Enabled' if postgresql_data_checksums else 'Disabled' }}
+      {% if postgresql_unix_socket_enabled %}🔌 Socket: {{ postgresql_unix_socket_directories }} (mode {{ postgresql_unix_socket_permissions }}){% endif %}
       🏗️ Ready for applications to create databases/users

View File

@@ -3,6 +3,11 @@
# PostgreSQL's excellent defaults are used except for essentials # PostgreSQL's excellent defaults are used except for essentials
# Network and Security
{% if postgresql_unix_socket_enabled %}
# Unix Socket Configuration
unix_socket_directories = '{{ postgresql_unix_socket_directories }}'
unix_socket_permissions = {{ postgresql_unix_socket_permissions }}
{% endif %}
listen_addresses = '{{ postgresql_listen_addresses }}'
port = {{ postgresql_port }}

View File

@@ -44,6 +44,10 @@ sigvild_gallery_host: "127.0.0.1"
sigvild_gallery_home: "/opt/sigvild-gallery"
sigvild_gallery_web_root: "/var/www/sigvild-gallery"
sigvild_gallery_local_project_path: "{{ ansible_env.PWD }}/sigvild-gallery"
# Backup configuration
sigvild_gallery_backup_enabled: true
sigvild_gallery_backup_local_path: "{{ playbook_dir }}/backups/sigvild-gallery"
```
## Usage
@@ -71,6 +75,62 @@ ansible-playbook site.yml --tags="backend"
ansible-playbook site.yml --tags="caddy" ansible-playbook site.yml --tags="caddy"
``` ```
### Data Backup and Restoration
#### Creating a Backup
Before formatting your server or making major changes, create a backup of all production data:
```bash
# Create backup of production data
ansible-playbook playbooks/backup-sigvild.yml
# Backup will be saved to: ./backups/sigvild-gallery/sigvild-gallery-backup-YYYYMMDDTHHMMSS.tar.gz
```
The backup includes:
- PocketBase SQLite database (`data.db`, `auxiliary.db`)
- All uploaded wedding photos and media files
- PocketBase logs and system state
#### Automatic Restoration
When deploying to a fresh server, the role automatically detects and restores from the latest backup:
```bash
# Normal deployment will auto-restore if backup exists
ansible-playbook playbooks/deploy-sigvild.yml
# Or deploy full infrastructure (includes auto-restore)
ansible-playbook site.yml
```
#### Manual Restoration
To restore data manually or from a specific backup:
```bash
# Restore with specific backup file
ansible-playbook playbooks/deploy-sigvild.yml --tags="restore" \
--extra-vars="sigvild_gallery_backup_local_path=/path/to/backup/directory"
# Force restoration (overwrite existing data)
ansible-playbook playbooks/deploy-sigvild.yml --tags="backend,restore"
```
#### Backup Management
```bash
# List available backups
ls -la ./backups/sigvild-gallery/
# Verify backup contents
tar -tzf ./backups/sigvild-gallery/sigvild-gallery-backup-YYYYMMDDTHHMMSS.tar.gz
# Extract backup for inspection (local)
tar -xzf ./backups/sigvild-gallery/sigvild-gallery-backup-YYYYMMDDTHHMMSS.tar.gz
```
## Security Features
### Environment Variables
@@ -169,6 +229,21 @@ systemctl reload caddy
- `/opt/sigvild-gallery/` (application directory)
- `/var/www/sigvild-gallery/` (frontend files)
## Data Protection
### Backup Strategy
- **Automated**: Backup creation via dedicated playbook
- **Comprehensive**: Includes database, uploaded files, and system state
- **Consistent**: Service temporarily stopped during backup for data integrity
- **Local storage**: Backups stored in `./backups/sigvild-gallery/` directory
- **Timestamped**: Each backup includes ISO timestamp for easy identification
### Recovery Process
- **Automatic detection**: Deployment automatically detects available backups
- **Pre-start restore**: Restoration runs before the service starts, so a fresh deployment comes up with its data already in place
- **Integrity verification**: Backups verified before and after restoration
- **Permission preservation**: User/group ownership maintained during restore
## Tags
- `sigvild`: Complete Sigvild Gallery deployment
@@ -177,4 +252,6 @@ systemctl reload caddy
- `build`: Local build processes
- `service`: SystemD service management
- `caddy`: Caddy configuration
- `verify`: Post-deployment verification
- `backup`: Data backup operations
- `restore`: Data restoration operations

View File

@@ -31,6 +31,10 @@ sigvild_gallery_local_project_path: "{{ ansible_env.PWD }}/sigvild-gallery"
sigvild_gallery_service_enabled: true
sigvild_gallery_service_state: started
# Backup configuration
sigvild_gallery_backup_enabled: true
sigvild_gallery_backup_local_path: "{{ playbook_dir }}/backups/sigvild-gallery"
# Caddy integration (assumes caddy role provides these)
# caddy_sites_enabled_dir: /etc/caddy/sites-enabled
# caddy_user: caddy

View File

@@ -0,0 +1,100 @@
---
# Sigvild Gallery Data Backup Tasks
- name: Create local backup directory
local_action:
module: file
path: "{{ sigvild_gallery_backup_local_path }}/{{ ansible_date_time.iso8601_basic_short }}"
state: directory
mode: '0755'
become: false
run_once: true
- name: Display backup information
debug:
msg:
- "Creating backup of Sigvild Gallery data..."
- "Data directory: {{ sigvild_gallery_data_dir }}"
- "Backup will be saved to: {{ sigvild_gallery_backup_local_path }}"
- "Timestamp: {{ ansible_date_time.iso8601_basic_short }}"
- name: Check if data directory exists
stat:
path: "{{ sigvild_gallery_data_dir }}"
register: data_dir_stat
- name: Fail if no data directory found
fail:
msg: "No data directory found at {{ sigvild_gallery_data_dir }}. Nothing to backup."
when: not data_dir_stat.stat.exists
- name: Display data directory size
command: du -sh {{ sigvild_gallery_data_dir }}
register: data_size
changed_when: false
- name: Show data size
debug:
msg: "Data directory size: {{ data_size.stdout }}"
- name: Stop sigvild-gallery service for consistent backup
systemd:
name: sigvild-gallery
state: stopped
register: service_stopped
- name: Create compressed backup of pb_data
archive:
path: "{{ sigvild_gallery_data_dir }}"
dest: "/tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
format: gz
owner: "{{ sigvild_gallery_user }}"
group: "{{ sigvild_gallery_user }}"
mode: '0644'
register: backup_created
- name: Verify backup contains critical files
command: tar -tzf /tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz
register: backup_contents
changed_when: false
failed_when:
- "'data.db' not in backup_contents.stdout"
- name: Display backup verification
debug:
msg: "Backup verified - contains required database files"
- name: Get backup file size
stat:
path: "/tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
register: backup_file_stat
- name: Display backup file info
debug:
msg: "Backup file created: {{ (backup_file_stat.stat.size / 1024 / 1024) | round(2) }}MB"
when: backup_file_stat.stat.exists
- name: Download backup to local machine
fetch:
src: "/tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
dest: "{{ sigvild_gallery_backup_local_path }}/"
flat: yes
register: backup_downloaded
- name: Clean up remote backup file
file:
path: "/tmp/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
state: absent
- name: Restart sigvild-gallery service
systemd:
name: sigvild-gallery
state: started
when: service_stopped.changed
- name: Display backup completion
debug:
msg:
- "✅ Backup completed successfully!"
- "Local backup location: {{ sigvild_gallery_backup_local_path }}/sigvild-gallery-backup-{{ ansible_date_time.iso8601_basic_short }}.tar.gz"
- "Service has been restarted."

View File

@@ -33,6 +33,10 @@
  notify: restart sigvild-gallery
  tags: [backend]
- name: Restore data from backup if available
include_tasks: restore.yml
tags: [backend, restore]
- name: Create data directory for PocketBase
  file:
    path: "{{ sigvild_gallery_data_dir }}"

View File

@@ -0,0 +1,126 @@
---
# Sigvild Gallery Data Restoration Tasks
- name: Check for existing data backup files
local_action:
module: find
paths: "{{ sigvild_gallery_backup_local_path }}"
patterns: "sigvild-gallery-backup-*.tar.gz"
register: backup_files
become: false
- name: Display backup search results
debug:
msg:
- "Searching for backups in: {{ sigvild_gallery_backup_local_path }}"
- "Found {{ backup_files.files | length }} backup file(s)"
- name: Display found backup files
debug:
msg: "Found backup: {{ item.path }} ({{ (item.size / 1024 / 1024) | round(2) }}MB, {{ item.mtime | to_datetime }})"
loop: "{{ backup_files.files | sort(attribute='mtime') }}"
when: backup_files.files | length > 0
- name: Check if data directory already exists with data
stat:
path: "{{ sigvild_gallery_data_dir }}/data.db"
register: existing_data
- name: Warn about existing data
debug:
msg: "⚠️ WARNING: Existing data found at {{ sigvild_gallery_data_dir }}/data.db - restoration will overwrite it!"
when:
- existing_data.stat.exists
- backup_files.files | length > 0
- name: Restore from latest backup
block:
- name: Get latest backup file
set_fact:
latest_backup: "{{ (backup_files.files | sort(attribute='mtime') | last).path }}"
- name: Display restoration info
debug:
msg:
- "Restoring from latest backup: {{ latest_backup | basename }}"
- "Target directory: {{ sigvild_gallery_data_dir }}"
- name: Stop sigvild-gallery service before restoration
systemd:
name: sigvild-gallery
state: stopped
register: service_stopped_for_restore
ignore_errors: true # Service might not exist on fresh deployment
- name: Upload backup to remote server
copy:
src: "{{ latest_backup }}"
dest: "/tmp/restore-backup.tar.gz"
owner: root
group: root
mode: '0644'
- name: Verify uploaded backup integrity
command: tar -tzf /tmp/restore-backup.tar.gz
register: restore_contents
changed_when: false
failed_when: "'data.db' not in restore_contents.stdout"
- name: Remove existing data directory if it exists
file:
path: "{{ sigvild_gallery_data_dir }}"
state: absent
when: existing_data.stat.exists
- name: Ensure parent directory exists
file:
path: "{{ sigvild_gallery_home }}"
state: directory
owner: "{{ sigvild_gallery_user }}"
group: "{{ sigvild_gallery_user }}"
mode: '0755'
- name: Extract backup to target location
unarchive:
src: "/tmp/restore-backup.tar.gz"
dest: "{{ sigvild_gallery_home }}"
remote_src: true
owner: "{{ sigvild_gallery_user }}"
group: "{{ sigvild_gallery_user }}"
- name: Verify restoration
stat:
path: "{{ sigvild_gallery_data_dir }}/data.db"
register: restored_data
- name: Fail if restoration unsuccessful
fail:
msg: "Restoration failed - data.db not found after extraction"
when: not restored_data.stat.exists
- name: Get restored data size
command: du -sh {{ sigvild_gallery_data_dir }}
register: restored_size
changed_when: false
- name: Clean up uploaded backup file
file:
path: "/tmp/restore-backup.tar.gz"
state: absent
- name: Display restoration success
debug:
msg:
- "✅ Data restoration completed successfully!"
- "Restored from: {{ latest_backup | basename }}"
- "Restored data size: {{ restored_size.stdout }}"
- "Location: {{ sigvild_gallery_data_dir }}"
when: backup_files.files | length > 0
- name: No backup available message
debug:
msg:
- " No backup files found - starting with fresh installation"
- "Data directory will be created empty: {{ sigvild_gallery_data_dir }}"
when: backup_files.files | length == 0

View File

@@ -18,6 +18,11 @@ valkey_bind: "127.0.0.1"
valkey_port: 6379
valkey_protected_mode: true
# Unix Socket Configuration
valkey_unix_socket_enabled: true
valkey_unix_socket_path: "/var/run/valkey/valkey.sock"
valkey_unix_socket_perm: "770"
# Authentication
valkey_requirepass: "{{ vault_valkey_password }}"

View File

@@ -28,6 +28,24 @@
    group: valkey
    mode: '0750'
- name: Create Valkey Unix socket directory
file:
path: "{{ valkey_unix_socket_path | dirname }}"
state: directory
owner: valkey
group: valkey
mode: '0775'
when: valkey_unix_socket_enabled
- name: Ensure socket directory is accessible
file:
path: "{{ valkey_unix_socket_path | dirname }}"
owner: valkey
group: valkey
mode: '0775'
recurse: yes
when: valkey_unix_socket_enabled
- name: Deploy Valkey configuration file
  template:
    src: valkey.conf.j2
@@ -60,28 +78,67 @@
     name: valkey
     enabled: "{{ valkey_service_enabled }}"
     state: "{{ valkey_service_state }}"
-    daemon_reload: yes
+    daemon_reload: true
+  register: valkey_service_result
-- name: Wait for Valkey to be ready
+- name: Wait for Valkey to be ready (TCP)
   wait_for:
     port: "{{ valkey_port }}"
     host: "{{ valkey_bind }}"
     timeout: 30
-  when: valkey_service_state == "started"
+  when: valkey_service_state == "started" and not valkey_unix_socket_enabled
-- name: Test Valkey connectivity
-  command: redis-cli -h {{ valkey_bind }} -p {{ valkey_port }} -a {{ valkey_requirepass }} ping
-  register: valkey_ping_result
-  changed_when: false
-  failed_when: valkey_ping_result.stdout != "PONG"
-  when: valkey_service_state == "started"
+- name: Wait for Valkey socket file to exist
+  wait_for:
+    path: "{{ valkey_unix_socket_path }}"
+    timeout: 30
+  when: valkey_service_state == "started" and valkey_unix_socket_enabled
+- name: Wait for Valkey to be ready (Unix Socket) - Try without auth first
+  command: redis-cli -s {{ valkey_unix_socket_path }} ping
+  register: valkey_socket_ping_noauth
+  until: >
+    valkey_socket_ping_noauth.stdout == "PONG" or
+    "NOAUTH" in (valkey_socket_ping_noauth.stdout + valkey_socket_ping_noauth.stderr)
+  retries: 15
+  delay: 2
+  changed_when: false
+  failed_when: false
+  when: valkey_service_state == "started" and valkey_unix_socket_enabled
+- name: Wait for Valkey to be ready (Unix Socket) - Try with auth if needed
+  command: redis-cli -s {{ valkey_unix_socket_path }} -a {{ valkey_requirepass }} ping
+  register: valkey_socket_ping_auth
+  until: valkey_socket_ping_auth.stdout == "PONG"
+  retries: 5
+  delay: 2
+  changed_when: false
+  failed_when: valkey_socket_ping_auth.rc != 0
+  when: >
+    valkey_service_state == "started" and valkey_unix_socket_enabled and
+    (valkey_socket_ping_noauth.stdout != "PONG") and
+    ("NOAUTH" in (valkey_socket_ping_noauth.stdout + valkey_socket_ping_noauth.stderr) or valkey_socket_ping_noauth.rc != 0)
+- name: Test Valkey connectivity (TCP)
+  command: redis-cli -h {{ valkey_bind }} -p {{ valkey_port }} -a {{ valkey_requirepass }} ping
+  register: valkey_ping_result_tcp
+  changed_when: false
+  failed_when: valkey_ping_result_tcp.stdout != "PONG"
+  when: valkey_service_state == "started" and not valkey_unix_socket_enabled
+- name: Test Valkey connectivity (Unix Socket)
+  command: redis-cli -s {{ valkey_unix_socket_path }} -a {{ valkey_requirepass }} ping
+  register: valkey_ping_result_socket
+  changed_when: false
+  failed_when: valkey_ping_result_socket.stdout != "PONG"
+  when: valkey_service_state == "started" and valkey_unix_socket_enabled
 - name: Display Valkey infrastructure status
   debug:
     msg: |
       ✅ Valkey infrastructure ready!
-      📡 Service: {{ valkey_bind }}:{{ valkey_port }}
+      📡 Service: {% if valkey_unix_socket_enabled %}Unix Socket ({{ valkey_unix_socket_path }}){% else %}{{ valkey_bind }}:{{ valkey_port }}{% endif %}
       🔒 Auth: Password protected
       💾 Persistence: {{ 'RDB enabled' if valkey_save_enabled else 'Memory only' }}
       🗄️ Databases: {{ valkey_databases }} available (0-{{ valkey_databases - 1 }})
@@ -91,4 +148,5 @@
       📋 Application Integration:
       - Use database numbers 1-{{ valkey_databases - 1 }} for applications
       - Database 0 reserved for system/testing
+      - {% if valkey_unix_socket_enabled %}Unix socket: {{ valkey_unix_socket_path }}{% else %}TCP: {{ valkey_bind }}:{{ valkey_port }}{% endif %}
       - Redis-compatible: applications can use REDIS_* or VALKEY_* env vars

View File

@@ -14,6 +14,15 @@ bind {{ valkey_bind }}
# Valkey port
port {{ valkey_port }}
{% if valkey_unix_socket_enabled %}
# Unix Socket Configuration
unixsocket {{ valkey_unix_socket_path }}
unixsocketperm {{ valkey_unix_socket_perm }}
# Enable both TCP and Unix socket (for compatibility during transition)
# To disable TCP completely, comment out the port line above
{% endif %}
# Protected mode - requires authentication
protected-mode {{ 'yes' if valkey_protected_mode else 'no' }}

View File

@@ -4,8 +4,8 @@
 - name: Deploy Core Infrastructure
   hosts: arch-vps
-  become: yes
-  gather_facts: yes
+  become: true
+  gather_facts: true
   roles:
     # Infrastructure services
@@ -13,12 +13,14 @@
     #   tags: ['postgresql', 'infrastructure', 'database']
     # - role: valkey
     #   tags: ['valkey', 'redis', 'infrastructure', 'cache']
-    - role: podman
-      tags: ['podman', 'containers', 'infrastructure']
+    # - role: podman
+    #   tags: ['podman', 'containers', 'infrastructure']
     # - role: caddy
     #   tags: ['caddy', 'infrastructure', 'web']
     # Application services
+    - role: authentik
+      tags: ['authentik']
     # - role: gitea
     #   tags: ['gitea', 'git', 'development']
     # - role: sigvild-gallery