# Service Integration Guide

This guide explains how to add new containerized services to rick-infra with PostgreSQL and Valkey/Redis access via Unix sockets.

## Overview

Rick-infra provides a standardized approach for containerized services to access infrastructure services through Unix sockets, maintaining security while providing optimal performance.

**Architecture**: Services are deployed as **system-level (rootful) containers** running as dedicated users with group-based access to infrastructure sockets. Infrastructure roles (PostgreSQL, Valkey) export client group GIDs as Ansible facts, which application roles consume for dynamic container configuration.

**Note**: A previous rootless approach was evaluated but discarded because user namespace UID/GID remapping breaks group-based socket permissions. See [ADR-005](architecture-decisions.md#adr-005-rootful-containers-with-infrastructure-fact-pattern) for details.

## Architecture Pattern

```
┌─────────────────────────────────────────────────────────────┐
│ systemd System Service (/system.slice/)                     │
│                                                             │
│   ┌─────────────────┐                                       │
│   │ Your Container  │                                       │
│   │ User: UID:GID   │  (dedicated system user)              │
│   │ Groups: GID,    │                                       │
│   │   962, 961      │  (postgres-clients, valkey-clients)   │
│   └─────────────────┘                                       │
│           │                                                 │
│           │ PodmanArgs=--group-add 962 --group-add 961      │
│           └─────────────────────┐                           │
└─────────────────────────────────┼───────────────────────────┘
                                  │
                   ┌──────────────▼───────────────┐
                   │ Host Infrastructure Services │
                   │                              │
                   │ PostgreSQL Unix Socket       │
                   │ /var/run/postgresql/         │
                   │ Owner: postgres:postgres-    │
                   │   clients (GID 962)          │
                   │                              │
                   │ Valkey Unix Socket           │
                   │ /var/run/valkey/             │
                   │ Owner: valkey:valkey-clients │
                   │   (GID 961)                  │
                   └──────────────────────────────┘
```

## Prerequisites

Your service must be deployed as:

1. **System-level systemd service** (via Quadlet)
2. **Dedicated system user** with infrastructure group membership
3. **Podman container** (rootful, running as dedicated user)

## Step 1: User Setup

Create a dedicated system user for your service and add it to the infrastructure groups:

```yaml
- name: Create service group
  group:
    name: myservice
    system: true

- name: Create service user
  user:
    name: myservice
    group: myservice
    groups: [postgres-clients, valkey-clients]
    append: true
    system: true
    shell: /bin/bash
    home: /opt/myservice
    create_home: true
```

## Step 2: Container Configuration

### Pod Configuration (`myservice.pod`)

```ini
[Unit]
Description=My Service Pod

[Pod]
PublishPort=0.0.0.0:8080:8080
ShmSize=256m

[Service]
Restart=always
TimeoutStartSec=900

[Install]
WantedBy=multi-user.target
```

**Key Points**:

- No user namespace arguments needed (rootful containers)
- `WantedBy=multi-user.target` for system-level services
- `ShmSize` for shared memory if needed by the application

### Container Configuration (`myservice.container`)

```ini
[Unit]
Description=My Service Container
After=myservice-pod.service
Requires=myservice-pod.service

[Container]
ContainerName=myservice
Image=my-service:latest
Pod=myservice.pod
EnvironmentFile=/opt/myservice/.env
User={{ service_uid }}:{{ service_gid }}
PodmanArgs=--group-add {{ postgresql_client_group_gid }} --group-add {{ valkey_client_group_gid }}

# Volume mounts for sockets
Volume=/var/run/postgresql:/var/run/postgresql:Z
Volume=/var/run/valkey:/var/run/valkey:Z

# Application volumes
Volume=/opt/myservice/data:/data
Volume=/opt/myservice/logs:/logs

Exec=my-service

[Service]
Restart=always
TimeoutStartSec=300

[Install]
WantedBy=multi-user.target
```

**Key Points**:

- `PodmanArgs=--group-add` uses dynamic GID facts from infrastructure roles
- Mount socket directories with `:Z` for SELinux relabeling
- Use host UID/GID for the service user
- `WantedBy=multi-user.target` for system-level services

**Note**: The `postgresql_client_group_gid` and `valkey_client_group_gid` facts are exported by infrastructure roles and consumed in container templates.
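The `--group-add` arguments above need numeric GIDs at template-render time, which is why the infrastructure roles export them as facts. As an illustration of what those facts resolve to, the sketch below (a hypothetical `resolve_client_gids` helper, not part of rick-infra) looks up group GIDs from the host group database, the same information `getent group` reports:

```python
import grp

def resolve_client_gids(group_names: list[str]) -> dict[str, int]:
    """Resolve named groups to numeric GIDs via the system group database.

    Raises KeyError for groups that don't exist, mirroring the fail-fast
    behaviour of the Ansible fact-validation step in Step 4.
    """
    return {name: grp.getgrnam(name).gr_gid for name in group_names}

# Example: resolve whatever group owns GID 0 on this host. The infrastructure
# groups (postgres-clients, valkey-clients) exist only after those roles run.
root_group = grp.getgrgid(0).gr_name
print(resolve_client_gids([root_group]))
```

On a host where the infrastructure roles have run, `resolve_client_gids(["postgres-clients", "valkey-clients"])` would return the same GIDs the Ansible facts carry.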
## Step 3: Service Configuration

### PostgreSQL Connection

Use Unix socket connection strings:

```bash
# Environment variable
DATABASE_URL=postgresql://myservice@/myservice_db?host=/var/run/postgresql

# Or separate variables
DB_HOST=/var/run/postgresql
DB_USER=myservice
DB_NAME=myservice_db
# No DB_PORT needed for Unix sockets
```

### Valkey/Redis Connection

**Correct format** (avoids URL parsing issues):

```bash
# Single URL format (recommended)
CACHE_URL=unix:///var/run/valkey/valkey.sock?db=2&password=your_password

# Alternative format
REDIS_URL=redis://localhost/2?unix_socket_path=/var/run/valkey/valkey.sock
```

**Avoid** separate HOST/DB variables, which can cause port parsing issues:

```bash
# DON'T USE - causes parsing problems
REDIS_HOST=unix:///var/run/valkey/valkey.sock
REDIS_DB=2
```

## Step 4: Infrastructure Fact Validation

Before deploying containers, validate that infrastructure facts are available:

```yaml
- name: Validate infrastructure facts are available
  assert:
    that:
      - postgresql_client_group_gid is defined
      - valkey_client_group_gid is defined
    fail_msg: |
      Required infrastructure facts are not available.
      Ensure PostgreSQL and Valkey roles have run and exported client group GIDs.
  tags: [validation]
```

**Why this matters**: Container templates use these facts for `--group-add` arguments. If the facts are missing, containers will deploy with incorrect group membership and socket access will fail.
## Step 5: Database Setup

Add database setup tasks to your role. Create the database user first so it can own the database:

```yaml
- name: Create application database user
  postgresql_user:
    name: "{{ service_db_user }}"
    password: "{{ service_db_password }}"
  become_user: postgres

- name: Create application database
  postgresql_db:
    name: "{{ service_db_name }}"
    owner: "{{ service_db_user }}"
    encoding: UTF-8
    lc_collate: en_US.UTF-8
    lc_ctype: en_US.UTF-8
  become_user: postgres

- name: Grant table privileges
  postgresql_privs:
    db: "{{ service_db_name }}"
    role: "{{ service_db_user }}"
    type: table
    objs: ALL_IN_SCHEMA
    privs: ALL
  become_user: postgres
```

## Step 6: Service Role Template

Create an Ansible role using this pattern:

```
myservice/
├── defaults/main.yml
├── handlers/main.yml
├── tasks/
│   ├── main.yml
│   ├── database.yml
│   └── cache.yml
├── templates/
│   ├── myservice.env.j2
│   ├── myservice.pod
│   ├── myservice.container
│   └── myservice.caddy.j2
└── README.md
```

### Example Environment Template

```bash
# My Service Configuration
# Generated by Ansible - DO NOT EDIT

# Database Configuration (Unix Socket)
DATABASE_URL=postgresql://{{ service_db_user }}@/{{ service_db_name }}?host={{ postgresql_unix_socket_directories }}
DB_PASSWORD={{ service_db_password }}

# Cache Configuration (Unix Socket)
CACHE_URL=unix://{{ valkey_unix_socket_path }}?db={{ service_valkey_db }}&password={{ valkey_password }}

# Application Configuration
SECRET_KEY={{ service_secret_key }}
LOG_LEVEL={{ service_log_level }}
BIND_ADDRESS={{ service_bind_address }}:{{ service_port }}
```

## Troubleshooting

### Socket Permission Issues

If you get permission denied errors:

1. **Check group membership**:
   ```bash
   groups myservice
   # Should show: myservice postgres-clients valkey-clients
   ```

2. **Verify container process groups**:
   ```bash
   ps aux | grep myservice | head -1 | awk '{print $2}' | \
     xargs -I {} cat /proc/{}/status | grep Groups
   # Should show GIDs matching infrastructure client groups
   ```

3. **Check socket permissions**:
   ```bash
   ls -la /var/run/postgresql/
   # drwxrwx--- postgres postgres-clients
   ls -la /var/run/valkey/
   # drwxrwx--- valkey valkey-clients
   ```

### Connection Issues

1. **Test socket access from the host**:
   ```bash
   sudo -u myservice psql -h /var/run/postgresql -U myservice myservice_db
   sudo -u myservice redis-cli -s /var/run/valkey/valkey.sock ping
   ```

2. **Check URL format**:
   - Use a single `CACHE_URL` instead of separate variables
   - Include the password in the URL if required
   - Verify the database number is correct

### Container Issues

1. **Check the container user**:
   ```bash
   podman exec myservice id
   # Should show correct UID and supplementary groups
   ```

2. **Verify socket mounts**:
   ```bash
   podman exec myservice ls -la /var/run/postgresql/
   podman exec myservice ls -la /var/run/valkey/
   ```

## Best Practices

1. **Security**:
   - Use dedicated system users for each service
   - Add users to infrastructure client groups (`postgres-clients`, `valkey-clients`)
   - Use vault variables for secrets
   - Deploy Quadlet files to `/etc/containers/systemd/` (system-level)

2. **Configuration**:
   - Use the single URL format for Redis connections
   - Mount socket directories with appropriate SELinux labels (`:Z`)
   - Use dynamic GID facts from infrastructure roles in `PodmanArgs=--group-add`
   - Set `WantedBy=multi-user.target` in Quadlet files

3. **Deployment**:
   - Ensure infrastructure roles run first to export GID facts
   - Validate facts are defined before container deployment
   - Test socket access before container deployment
   - Use proper dependency ordering in playbooks
   - Include database and cache setup tasks

4. **Monitoring**:
   - Monitor socket file permissions (should be 770 with the client group)
   - Check service logs for connection errors
   - Verify the container process has the correct supplementary groups
   - Verify services are in `/system.slice/`

5. **Logging**:
   - Use **journald** (Podman's default) for system-level containers
   - Never use file-based logging (`LogDriver=k8s-file`) for system services
   - Access logs via `journalctl -u service-name` or `podman logs container-name`
   - Logs are automatically persistent and integrated with systemd

6. **Infrastructure Variables**:
   - Always use infrastructure variables (e.g., `{{ postgresql_unix_socket_directories }}`)
   - Never hardcode infrastructure paths or group names
   - Define infrastructure dependencies in application role defaults with a read-only notice
   - Validate infrastructure variables exist in tasks before use

## Logging Best Practices

System-level containers should **always use journald** for logging integration with systemd.

### Why Journald for System Containers

- **Centralized logging**: All service logs accessible via `journalctl`
- **Persistent by default**: Logs survive container restarts
- **No disk management**: systemd handles log rotation and retention
- **Unified interface**: Same logging interface as native systemd services
- **Query capabilities**: Filter by time, level, service, or pattern

### Configuration

**Do NOT use file-based logging** in system containers:

```ini
# ❌ WRONG - File-based logging (rootless pattern)
[Container]
LogDriver=k8s-file
LogOpt=path=/opt/service/logs/app.log
Volume=/opt/service/logs:/opt/service/logs
```

**Use journald (Podman's default)** for system containers:

```ini
# ✅ CORRECT - journald logging (system-level)
[Container]
ContainerName=myservice
Image=myservice:latest
# No LogDriver needed - journald is the default
```

### Accessing Logs

Both `journalctl` and `podman logs` provide access to the **same log stream**:

```bash
# Via journalctl (system-wide)
journalctl -u myservice -f                    # Follow logs
journalctl -u myservice --lines 50            # Last 50 lines
journalctl -u myservice --since "10 min ago"  # Time-filtered
journalctl -u myservice | grep ERROR          # Pattern search

# Via podman (container-specific)
podman logs -f myservice                      # Same output as journalctl
podman logs --tail 50 myservice
```

### JSON Structured Logging

Many modern applications (like Authentik) output JSON-formatted logs:

```bash
# Filter JSON logs by field
journalctl -u authentik-server | grep '"level":"error"'
journalctl -u authentik-server | grep '"event":"database connection"'

# Pretty-print JSON logs with jq
journalctl -u authentik-server --output=cat | jq '.'
```

## Infrastructure Variable Pattern

Application roles should **reference** infrastructure variables, never redefine them.

### Pattern Overview

**Infrastructure roles** (PostgreSQL, Valkey) define and export variables:

```yaml
# roles/postgresql/defaults/main.yml
postgresql_unix_socket_directories: "/var/run/postgresql"
postgresql_client_group: "postgres-clients"
postgresql_port: 5432

# roles/postgresql/tasks/main.yml
- name: Export PostgreSQL client group GID as fact
  set_fact:
    postgresql_client_group_gid: "{{ getent_group[postgresql_client_group].1 }}"
```

**Application roles** reference these variables (never redefine):

```yaml
# roles/myservice/defaults/main.yml
# =================================================================
# Infrastructure Dependencies (Read-only)
# =================================================================
# These variables reference infrastructure services defined by their roles.
# Applications MUST NOT modify these values - they are provided by infrastructure.
postgresql_unix_socket_directories: "/var/run/postgresql"
postgresql_client_group: "postgres-clients"
postgresql_port: 5432
```

### Using Infrastructure Variables

**In application role defaults**:

```yaml
# Use infrastructure variables for configuration
myservice_db_host: "{{ postgresql_unix_socket_directories }}"
myservice_db_port: "{{ postgresql_port }}"
```

**In application role tasks**:

```yaml
# Validate infrastructure variables exist
- name: Validate infrastructure variables are defined
  assert:
    that:
      - postgresql_unix_socket_directories is defined
      - postgresql_client_group is defined
    fail_msg: "Missing required infrastructure variables"

# Use variables for group membership
- name: Add service user to PostgreSQL client group
  user:
    name: "{{ myservice_user }}"
    groups: "{{ postgresql_client_group }}"
    append: true

# Use variables for database operations
- name: Create database via socket
  postgresql_db:
    name: "{{ myservice_db_name }}"
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
```

### Example: Gitea Role

See `roles/gitea/` for a complete example of infrastructure variable usage:

**defaults/main.yml**:

```yaml
# Infrastructure Dependencies section
postgresql_unix_socket_directories: "/var/run/postgresql"
postgresql_client_group: "postgres-clients"
postgresql_port: 5432

# Database Configuration using infrastructure variables
gitea_db_host: "{{ postgresql_unix_socket_directories }}"
gitea_db_port: "{{ postgresql_port }}"
```

**tasks/main.yml**:

```yaml
# Validate infrastructure variables
- name: Validate infrastructure variables are defined
  assert:
    that:
      - postgresql_unix_socket_directories is defined
      - postgresql_client_group is defined

# Use infrastructure variables
- name: Add git user to PostgreSQL client group
  user:
    name: git
    groups: "{{ postgresql_client_group }}"
    append: true

- name: Create Gitea database
  postgresql_db:
    name: gitea
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
```

### Benefits

1. **Single Source of Truth**: Infrastructure paths defined once
2. **Consistency**: All services use the same socket paths and groups
3. **Maintainability**: Change an infrastructure path in one place
4. **Validation**: Catch missing dependencies early via assertions
5. **Documentation**: Clear separation between infrastructure and application

## Authentication Integration with Authentik

### Overview

All services in rick-infra should integrate with Authentik for centralized authentication and authorization. This section covers the available authentication integration patterns.
### Authentication Integration Patterns

#### Pattern 1: Forward Authentication (Recommended)

**Use Case**: HTTP services that don't need to handle authentication internally

**Benefits**:
- No application code changes required
- Consistent authentication across all services
- Centralized session management
- Service receives user identity via HTTP headers

**Implementation**:

```yaml
# Service role task to deploy Caddy configuration
- name: Deploy service Caddy configuration with Authentik forward auth
  template:
    src: myservice.caddy.j2
    dest: "{{ caddy_sites_enabled_dir }}/myservice.caddy"
    owner: root
    group: "{{ caddy_user }}"
    mode: '0644'
    backup: true
  notify: reload caddy
  tags: [caddy, auth]
```

```caddyfile
# templates/myservice.caddy.j2
{{ service_domain }} {
    # Forward authentication to Authentik
    forward_auth https://auth.jnss.me {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers Remote-User Remote-Name Remote-Email Remote-Groups
    }

    # Your service backend
    reverse_proxy {{ service_backend }}

    # Optional: Restrict access by group
    @not_authorized {
        not header Remote-Groups "*{{ required_group }}*"
    }
    respond @not_authorized "Access denied: insufficient privileges" 403
}
```

**Service Code Example** (Python Flask):

```python
# Application receives authentication information via headers
from flask import Flask, request, render_template

app = Flask(__name__)

@app.route('/dashboard')
def dashboard():
    # Extract user information from headers (provided by Authentik)
    username = request.headers.get('Remote-User')
    user_name = request.headers.get('Remote-Name')
    user_email = request.headers.get('Remote-Email')
    user_groups = request.headers.get('Remote-Groups', '').split(',')

    # Authorization based on groups
    if not username:
        return "Authentication required", 401
    if 'service_users' not in user_groups:
        return "Access denied: insufficient privileges", 403

    return render_template('dashboard.html',
                           username=username,
                           name=user_name,
                           groups=user_groups)

@app.route('/admin')
def admin():
    user_groups = request.headers.get('Remote-Groups', '').split(',')
    if 'admins' not in user_groups:
        return "Admin access required", 403
    return render_template('admin.html')
```

#### Pattern 2: OAuth2/OIDC Integration

**Use Case**: Applications that can implement OAuth2 client functionality

**Benefits**:
- Fine-grained scope control
- API access tokens
- Better integration with application user models
- Support for mobile/SPA applications

**Service Configuration**:

```yaml
# Service environment configuration
oauth2_config:
  client_id: "{{ service_oauth_client_id }}"
  client_secret: "{{ vault_service_oauth_secret }}"
  discovery_url: "https://auth.jnss.me/application/o/{{ service_slug }}/.well-known/openid-configuration"
  scopes: "openid email profile groups"
  redirect_uri: "https://{{ service_domain }}/oauth/callback"
```

```python
# OAuth2 integration example (Python, assumes the Flask `app` from above)
from flask import redirect, session, url_for
from authlib.integrations.flask_client import OAuth

oauth = OAuth(app)
oauth.register(
    'authentik',
    client_id=app.config['OAUTH_CLIENT_ID'],
    client_secret=app.config['OAUTH_CLIENT_SECRET'],
    server_metadata_url=app.config['OAUTH_DISCOVERY_URL'],
    client_kwargs={
        'scope': 'openid email profile groups'
    }
)

@app.route('/login')
def login():
    redirect_uri = url_for('oauth_callback', _external=True)
    return oauth.authentik.authorize_redirect(redirect_uri)

@app.route('/oauth/callback')
def oauth_callback():
    token = oauth.authentik.authorize_access_token()
    user_info = oauth.authentik.parse_id_token(token)

    # Store user information in the session
    session['user'] = {
        'id': user_info['sub'],
        'username': user_info['preferred_username'],
        'email': user_info['email'],
        'name': user_info['name'],
        'groups': user_info.get('groups', [])
    }
    return redirect('/dashboard')
```

#### Pattern 3: API-Only Authentication

**Use Case**: REST APIs, microservices, machine-to-machine communication

**Implementation**:

```python
# API service with token validation
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

def validate_token(token):
    """Validate a Bearer token with Authentik's introspection endpoint"""
    try:
        response = requests.post(
            'https://auth.jnss.me/application/o/introspect/',
            headers={
                'Authorization': f'Bearer {app.config["API_CLIENT_TOKEN"]}'
            },
            data={'token': token},
            timeout=5
        )
        if response.status_code == 200:
            return response.json()
        return None
    except Exception as e:
        app.logger.error(f"Token validation error: {e}")
        return None

@app.route('/api/data')
def api_data():
    auth_header = request.headers.get('Authorization')
    if not auth_header or not auth_header.startswith('Bearer '):
        return jsonify({'error': 'Missing or invalid Authorization header'}), 401

    token = auth_header[7:]  # Remove 'Bearer ' prefix
    token_info = validate_token(token)
    if not token_info or not token_info.get('active'):
        return jsonify({'error': 'Invalid or expired token'}), 401

    # Extract user information from the token
    username = token_info.get('username')
    scope = token_info.get('scope', '').split()

    # Check the required scope
    if 'api:read' not in scope:
        return jsonify({'error': 'Insufficient permissions'}), 403

    return jsonify({
        'message': f'Hello {username}',
        'data': 'Your API response data here'
    })
```

### Authentik Provider Configuration

For each service integration, configure the appropriate provider in Authentik:

#### Forward Auth Provider (Pattern 1)

```yaml
# Authentik provider configuration (via admin interface)
provider_config:
  name: "{{ service_name }} Forward Auth"
  type: "Proxy Provider"
  authorization_flow: "default-provider-authorization-implicit-consent"
  external_host: "https://{{ service_domain }}"
  internal_host: "http://localhost:{{ service_port }}"
  skip_path_regex: "^/(health|metrics|static).*"
```

#### OAuth2 Provider (Pattern 2)

```yaml
# Authentik OAuth2 provider configuration
oauth_provider_config:
  name: "{{ service_name }} OAuth2"
  type: "OAuth2/OpenID Provider"
  authorization_flow: "default-provider-authorization-explicit-consent"
  client_type: "confidential"
  client_id: "{{ service_oauth_client_id }}"
  redirect_uris:
    - "https://{{ service_domain }}/oauth/callback"
  post_logout_redirect_uris:
    - "https://{{ service_domain }}/"
```

#### API Provider (Pattern 3)

```yaml
# Authentik API provider configuration
api_provider_config:
  name: "{{ service_name }} API"
  type: "OAuth2/OpenID Provider"
  authorization_flow: "default-provider-authorization-implicit-consent"
  client_type: "confidential"
  client_id: "{{ service_api_client_id }}"
  include_claims_in_id_token: true
  issuer_mode: "per_provider"
```

### Group-Based Authorization

#### Service-Specific Groups

```yaml
# Create service-specific groups in Authentik
service_groups:
  - name: "{{ service_name }}_users"
    description: "Users who can access {{ service_name }}"
    is_superuser: false

  - name: "{{ service_name }}_admins"
    description: "Administrators for {{ service_name }}"
    is_superuser: false
    parent: "{{ service_name }}_users"

  - name: "{{ service_name }}_readonly"
    description: "Read-only access to {{ service_name }}"
    is_superuser: false
    parent: "{{ service_name }}_users"
```

#### Policy-Based Access Control

```yaml
# Authentik policies for service access
service_policies:
  - name: "{{ service_name }} Group Access"
    policy_type: "Group Membership Policy"
    groups: ["{{ service_name }}_users"]

  - name: "{{ service_name }} Business Hours"
    policy_type: "Time-based Policy"
    parameters:
      start_time: "08:00"
      end_time: "18:00"
      days: ["monday", "tuesday", "wednesday", "thursday", "friday"]

  - name: "{{ service_name }} IP Restriction"
    policy_type: "Source IP Policy"
    parameters:
      cidr: "10.0.0.0/8"
```

### Service Role Template with Authentication

Here's a complete service role template that includes authentication integration. Note that, consistent with the rootful architecture, Quadlet files are deployed to `/etc/containers/systemd/` and the service is managed by the system systemd instance:

```yaml
# roles/myservice/defaults/main.yml
---
# Service configuration
service_name: "myservice"
service_domain: "myservice.jnss.me"
service_port: 8080
service_backend: "localhost:{{ service_port }}"

# Authentication configuration
auth_enabled: true
auth_pattern: "forward_auth"  # forward_auth, oauth2, api_only
required_group: "myservice_users"

# OAuth2 configuration (if auth_pattern is oauth2)
oauth_client_id: "{{ service_name }}-oauth-client"
oauth_client_secret: "{{ vault_myservice_oauth_secret }}"

# Dependencies
postgresql_db_name: "{{ service_name }}"
valkey_db_number: 2
```

```yaml
# roles/myservice/tasks/main.yml
---
- name: Create service user and setup
  include_tasks: user.yml
  tags: [user, setup]

- name: Setup database access
  include_tasks: database.yml
  tags: [database, setup]

- name: Setup cache access
  include_tasks: cache.yml
  tags: [cache, setup]

- name: Deploy service configuration
  template:
    src: myservice.env.j2
    dest: "{{ service_home }}/.env"
    owner: "{{ service_user }}"
    group: "{{ service_group }}"
    mode: '0600'
  tags: [config]

- name: Deploy container configuration
  template:
    src: "{{ item.src }}"
    dest: "/etc/containers/systemd/{{ item.dest }}"
    owner: root
    group: root
    mode: '0644'
  loop:
    - { src: 'myservice.pod', dest: 'myservice.pod' }
    - { src: 'myservice.container', dest: 'myservice.container' }
  become: true
  notify:
    - reload systemd
    - restart myservice
  tags: [containers]

- name: Deploy Caddy configuration with authentication
  template:
    src: myservice.caddy.j2
    dest: "{{ caddy_sites_enabled_dir }}/{{ service_name }}.caddy"
    owner: root
    group: "{{ caddy_user }}"
    mode: '0644'
    backup: true
  notify: reload caddy
  tags: [caddy, auth]
  when: auth_enabled

- name: Start and enable service
  systemd:
    name: "{{ service_name }}"
    enabled: true
    state: started
    daemon_reload: true
  become: true
  tags: [service]
```

```caddyfile
# roles/myservice/templates/myservice.caddy.j2
{{ service_domain }} {
{% if auth_enabled and auth_pattern == 'forward_auth' %}
    # Forward authentication to Authentik
    forward_auth https://auth.jnss.me {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers Remote-User Remote-Name Remote-Email Remote-Groups
    }

{% if required_group %}
    # Restrict access to a specific group
    @not_authorized {
        not header Remote-Groups "*{{ required_group }}*"
    }
    respond @not_authorized "Access denied: {{ required_group }} group required" 403
{% endif %}
{% endif %}

    # Service backend
    reverse_proxy {{ service_backend }}

    # Health check endpoint (auth bypassed via the provider's skip_path_regex)
    handle /health {
        reverse_proxy {{ service_backend }}
    }
}
```

### Integration Testing

#### Authentication Integration Tests

```yaml
# Service authentication tests
authentication_tests:
  - name: "Test unauthenticated access denied"
    uri: "https://{{ service_domain }}/"
    method: "GET"
    expected_status: [302, 401]  # Redirect to login or unauthorized

  - name: "Test authenticated access allowed"
    uri: "https://{{ service_domain }}/"
    method: "GET"
    headers:
      Cookie: "authentik_session={{ valid_session_cookie }}"
    expected_status: [200]

  - name: "Test group authorization"
    uri: "https://{{ service_domain }}/admin"
    method: "GET"
    headers:
      Cookie: "authentik_session={{ admin_session_cookie }}"
    expected_status: [200]

  - name: "Test insufficient privileges"
    uri: "https://{{ service_domain }}/admin"
    method: "GET"
    headers:
      Cookie: "authentik_session={{ user_session_cookie }}"
    expected_status: [403]
```

#### Automated Testing Script

```bash
#!/bin/bash
# test-service-auth.sh

SERVICE_DOMAIN="myservice.jnss.me"

echo "Testing service authentication for $SERVICE_DOMAIN"

# Test 1: Unauthenticated access should redirect or deny
echo "Test 1: Unauthenticated access"
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "https://$SERVICE_DOMAIN/")
if [[ "$RESPONSE" == "302" || "$RESPONSE" == "401" ]]; then
    echo "✓ PASS: Unauthenticated access properly denied ($RESPONSE)"
else
    echo "✗ FAIL: Unauthenticated access allowed ($RESPONSE)"
fi

# Test 2: Health endpoint should be accessible
echo "Test 2: Health endpoint access"
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "https://$SERVICE_DOMAIN/health")
if [[ "$RESPONSE" == "200" ]]; then
    echo "✓ PASS: Health endpoint accessible"
else
    echo "✗ FAIL: Health endpoint not accessible ($RESPONSE)"
fi

# Test 3: Authentik forward auth endpoint exists
echo "Test 3: Authentik forward auth endpoint"
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "https://auth.jnss.me/outpost.goauthentik.io/auth/caddy")
if [[ "$RESPONSE" == "401" ]]; then
    echo "✓ PASS: Forward auth endpoint responding"
else
    echo "✗ FAIL: Forward auth endpoint not responding correctly ($RESPONSE)"
fi

echo "Authentication tests completed"
```

## Example Integration

See the `authentik` role for a complete example of this pattern:

- **Templates**: `roles/authentik/templates/`
- **Tasks**: `roles/authentik/tasks/`
- **Documentation**: `roles/authentik/README.md`

This provides a working reference implementation for Unix socket integration.

## Authentication Integration Examples

For practical authentication integration examples, see:

- **[Authentication Architecture](authentication-architecture.md)** - Complete authentication patterns and examples
- **[Authentik Deployment Guide](authentik-deployment-guide.md)** - Authentik-specific configuration
- **[Architecture Decisions](architecture-decisions.md)** - Authentication model rationale

## Quick Start Checklist

When integrating a new service with rick-infra:

### Infrastructure Integration

- [ ] Create a dedicated system user for the service
- [ ] Add the user to the `postgres-clients` and `valkey-clients` groups for socket access
- [ ] Configure Unix socket connections in the service environment
- [ ] Set up Quadlet container configuration with proper user/group settings
- [ ] Test database and cache connectivity

### Authentication Integration

- [ ] Choose an authentication pattern (forward auth recommended for most services)
- [ ] Create a Caddy configuration with Authentik forward auth
- [ ] Create the Authentik provider and application configuration
- [ ] Set up service-specific groups and policies in Authentik
- [ ] Test the authentication flow and authorization

### Deployment Integration

- [ ] Add service role dependencies in `meta/main.yml`
- [ ] Configure the service in `site.yml` with appropriate tags
- [ ] Set up vault variables for secrets
- [ ] Deploy and verify service functionality
- [ ] Add monitoring and backup procedures

This comprehensive integration approach ensures all services benefit from the security, performance, and operational advantages of the rick-infra architecture.