Devigo Ansible Role

Deploys the Devigo sales training website (devigo.no) to production infrastructure using Podman Quadlet and Caddy reverse proxy.

Description

Devigo is a Hugo-based static marketing website deployed via a hybrid approach:

  • Infrastructure: Managed by Ansible (this role)
  • Application: Built and deployed via GitHub Actions CI/CD
  • Runtime: Podman Quadlet containers with systemd integration
  • Web Server: Caddy reverse proxy with automatic HTTPS

This role deploys two services:

  1. devigo-site: The main website container (Hugo + Nginx)
  2. devigo-decap-oauth: OAuth2 proxy for Decap CMS content management

Architecture

GitHub Actions (CI/CD)
    ↓
Build Hugo site → Docker image → GHCR
    ↓
SSH to mini-vps → systemctl restart devigo-site
    ↓
Podman Quadlet pulls latest image
    ↓
Systemd manages container lifecycle
    ↓
Caddy reverse proxy (HTTPS)
    ↓
Internet (devigo.no)

Requirements

Control Machine

  • Ansible 2.20+
  • Vault password for encrypted credentials
  • SSH access to production host

Target Host

  • Arch Linux (or compatible)
  • Podman 5.7+
  • Caddy with Cloudflare DNS plugin
  • Systemd with Quadlet support
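
A quick way to verify these on the target host; the Quadlet generator path shown is its usual location and may differ between distributions:

podman --version                                    # expect 5.7 or newer
caddy list-modules | grep dns.providers.cloudflare  # confirms the Cloudflare DNS build
systemctl --version | head -1                       # systemd version
ls /usr/lib/systemd/system-generators/podman-system-generator   # Quadlet generator present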

Role Variables

Required Variables (set in group_vars/production/main.yml)

# Domains
devigo_domain: "devigo.no"                    # Apex domain (primary)
devigo_www_domain: "www.devigo.no"            # WWW subdomain (redirects to apex)
devigo_primary_domain: "devigo.no"            # Which domain is primary

# OAuth Service
devigo_oauth_domain: "decap.jnss.me"          # OAuth proxy domain
devigo_oauth_client_id: "{{ vault_devigo_oauth_client_id }}"
devigo_oauth_client_secret: "{{ vault_devigo_oauth_client_secret }}"

# IMPORTANT: Must be comma-separated string, NOT YAML array
# The decapcms-oauth2 service requires TRUSTED_ORIGINS (plural) as a simple string
devigo_oauth_trusted_origins: "https://devigo.no,https://www.devigo.no"

# Container Registry
devigo_ghcr_image: "ghcr.io/jnschaffer/rustan:prod"

Optional Variables (defaults/main.yml)

devigo_container_name: "devigo-site"          # Container name
devigo_container_port: 80                      # Nginx port inside container
devigo_oauth_user: "devigo-oauth"             # OAuth service user
devigo_oauth_home: "/opt/devigo-oauth"        # OAuth config directory
devigo_oauth_container_name: "devigo-decap-oauth"
devigo_oauth_container_port: 12000            # OAuth service port

Vault Variables (group_vars/production/vault.yml - encrypted)

vault_devigo_oauth_client_id: "Ov23liNvuDONNB0dhwj6"
vault_devigo_oauth_client_secret: "5dc4d17cfa7d550dd1b5ade0e33d1c9689bbbfad"

Dependencies

This role depends on:

  • podman - Container runtime
  • caddy - Reverse proxy with TLS

These are automatically included via meta/main.yml.
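
The dependency declaration is roughly the following sketch; the role's actual meta/main.yml is authoritative:

# roles/devigo/meta/main.yml (illustrative)
dependencies:
  - role: podman
  - role: caddy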

Deployed Services

1. devigo-site (Main Website)

Systemd Unit: devigo-site.service
Quadlet File: /etc/containers/systemd/devigo-site.container
Container: ghcr.io/jnschaffer/rustan:prod
Network: caddy (Podman network)
Health Check: Verifies /usr/share/nginx/html/index.html exists

Features:

  • Auto-update support via podman auto-update
  • Systemd managed (auto-restart on failure)
  • Health monitoring
  • Connected to Caddy network for reverse proxy
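
For orientation, a hypothetical sketch of the Quadlet file this role templates; the key names are standard Quadlet directives, but the values are illustrative and the role's template is authoritative:

# /etc/containers/systemd/devigo-site.container (illustrative sketch)
[Unit]
Description=Devigo website container

[Container]
Image=ghcr.io/jnschaffer/rustan:prod
ContainerName=devigo-site
Network=caddy
AutoUpdate=registry
HealthCmd=test -f /usr/share/nginx/html/index.html

[Service]
# Lets the unit's image pull authenticate against private GHCR
Environment=REGISTRY_AUTH_FILE=/etc/containers/auth.json
Restart=on-failure

[Install]
WantedBy=multi-user.target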

2. devigo-decap-oauth (CMS OAuth Proxy)

Systemd Unit: devigo-decap-oauth.service
Quadlet File: /etc/containers/systemd/devigo-decap-oauth.container
Container: docker.io/alukovenko/decapcms-oauth2:latest
Port: 127.0.0.1:12000:12000
Purpose: OAuth2 proxy for Decap CMS GitHub authentication

Features:

  • Enables content editing via Decap CMS at /admin
  • Authenticates with GitHub OAuth app
  • Proxied via Caddy at decap.jnss.me
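
The role writes the proxy's configuration to /opt/devigo-oauth/decap-oauth.env; a hedged sketch of its shape (only TRUSTED_ORIGINS is documented here, the remaining variable names are defined by the decapcms-oauth2 image):

# /opt/devigo-oauth/decap-oauth.env (illustrative sketch)
# Must be a single comma-separated string, not a YAML array
TRUSTED_ORIGINS=https://devigo.no,https://www.devigo.no
# The GitHub OAuth client ID and secret from the vault are also written here,
# using whatever variable names the decapcms-oauth2 image expects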

Caddy Configuration

Site Configuration

  • devigo.no - Primary domain, serves content
  • www.devigo.no - Redirects to apex domain
  • Reverse proxy to devigo-site container via Podman network
  • Security headers (X-Frame-Options, CSP, etc.)
  • JSON logging to /var/log/caddy/devigo.log

OAuth Configuration

  • decap.jnss.me - OAuth proxy endpoint
  • Reverse proxy to localhost:12000
  • Required for Decap CMS /admin functionality
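
A hedged sketch of the two site definitions. The deployed files come from this role's templates; the Cloudflare token variable name, whether TLS is configured per site or globally, and the upstream address for devigo-site (its container name on the caddy network) are all assumptions here:

# /etc/caddy/sites-enabled/devigo.caddy (illustrative sketch)
devigo.no {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy devigo-site:80
    header X-Frame-Options "SAMEORIGIN"
    log {
        output file /var/log/caddy/devigo.log
        format json
    }
}

www.devigo.no {
    redir https://devigo.no{uri} permanent
}

# /etc/caddy/sites-enabled/devigo-decap-oauth.caddy (illustrative sketch)
decap.jnss.me {
    reverse_proxy 127.0.0.1:12000
}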

GitHub Actions Integration

Workflow Overview

The site is deployed via GitHub Actions when code is pushed to the prod branch:

  1. Build Phase: Hugo site built in Docker container
  2. Push Phase: Image pushed to ghcr.io/jnschaffer/rustan:prod
  3. Deploy Phase: SSH to mini-vps, restart systemd service

Deploy Script (in .github/workflows/deploy.yml)

deploy:
  needs: build-and-push
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/prod'
  
  steps:
    - name: Deploy to mini-vps
      uses: appleboy/ssh-action@v1.0.0
      with:
        host: ${{ secrets.VPS_HOST }}
        username: ${{ secrets.VPS_USER }}
        key: ${{ secrets.VPS_SSH_KEY }}
        script: |
          # Check for updates
          podman auto-update --dry-run
          
          # Restart service to pull and use new image
          systemctl restart devigo-site
          
          # Verify deployment
          sleep 5
          systemctl is-active devigo-site
          podman ps | grep devigo-site
          
          # Cleanup
          podman system prune -f

Required GitHub Secrets

Configure in repository settings (https://github.com/jnschaffer/rustan/settings/secrets/actions):

  • VPS_HOST: 72.62.91.251 (mini-vps IP)
  • VPS_USER: root
  • VPS_SSH_KEY: SSH private key for mini-vps access
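
One way to set these from a workstation with the GitHub CLI installed (the key path below is a placeholder):

gh secret set VPS_HOST    --repo jnschaffer/rustan --body "72.62.91.251"
gh secret set VPS_USER    --repo jnschaffer/rustan --body "root"
gh secret set VPS_SSH_KEY --repo jnschaffer/rustan < ~/.ssh/minivps_deploy_key   # placeholder path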

Usage

Deploy Infrastructure

# Deploy all devigo infrastructure
ansible-playbook playbooks/production.yml --tags devigo

# Deploy all production services
ansible-playbook playbooks/production.yml

Manual Operations

# Check service status
ansible mini-vps -m shell -a "systemctl status devigo-site"
ansible mini-vps -m shell -a "systemctl status devigo-decap-oauth"

# View logs
ansible mini-vps -m shell -a "journalctl -u devigo-site -n 50"
ansible mini-vps -m shell -a "journalctl -u devigo-decap-oauth -n 50"

# Check containers
ansible mini-vps -m shell -a "podman ps"

# Manually restart services
ansible mini-vps -m shell -a "systemctl restart devigo-site"
ansible mini-vps -m shell -a "systemctl restart devigo-decap-oauth"

# Check for available updates
ansible mini-vps -m shell -a "podman auto-update --dry-run"

# Force pull latest image
ansible mini-vps -m shell -a "systemctl restart devigo-site"

Testing Deployment

# Test site (after DNS update)
curl -I https://devigo.no
curl -I https://www.devigo.no  # Should redirect to apex

# Test OAuth proxy
curl -I https://decap.jnss.me

# Test from control machine with IP
curl -I -k -H "Host: devigo.no" https://72.62.91.251

Content Management

Decap CMS Access

  1. Navigate to https://devigo.no/admin
  2. Click "Login with GitHub"
  3. Authenticate via OAuth proxy (decap.jnss.me)
  4. Edit content via CMS interface
  5. Changes commit to GitHub → triggers rebuild → auto-deploy

Direct Git Workflow

cd ~/rustan
# Make changes to content/norwegian/
git add .
git commit -m "Update content"
git push origin prod
# GitHub Actions builds and deploys automatically

DNS Configuration

Required DNS records:

devigo.no        A      72.62.91.251
www.devigo.no    CNAME  devigo.no.
decap.jnss.me    A      72.62.91.251

TLS certificates are automatically obtained via Caddy + Cloudflare DNS challenge.
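
Before testing, the records can be verified from any machine with dig:

dig +short devigo.no A          # expect 72.62.91.251
dig +short www.devigo.no CNAME  # expect devigo.no.
dig +short decap.jnss.me A      # expect 72.62.91.251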

Troubleshooting

Container Won't Start

# Check Quadlet file syntax
ansible mini-vps -m shell -a "cat /etc/containers/systemd/devigo-site.container"

# Check systemd status
ansible mini-vps -m shell -a "systemctl status devigo-site"

# View detailed logs
ansible mini-vps -m shell -a "journalctl -u devigo-site -n 100"

# Check if image exists
ansible mini-vps -m shell -a "podman images | grep rustan"

OAuth Not Working

# Verify OAuth container running
ansible mini-vps -m shell -a "systemctl status devigo-decap-oauth"

# Check OAuth logs
ansible mini-vps -m shell -a "journalctl -u devigo-decap-oauth -n 50"

# Test OAuth endpoint locally
ansible mini-vps -m shell -a "curl -I http://localhost:12000"

# Verify environment variables
ansible mini-vps -m shell -a "cat /opt/devigo-oauth/decap-oauth.env"

TLS Certificate Issues

# Check Caddy logs
ansible mini-vps -m shell -a "journalctl -u caddy -n 100"

# Verify certificate obtained
ansible mini-vps -m shell -a "ls -la /var/lib/caddy/.local/share/caddy/certificates"

# Reload Caddy (it re-attempts issuance for any missing or expiring certificates)
ansible mini-vps -m shell -a "systemctl reload caddy"

GitHub Actions Deployment Fails

  1. Verify secrets are set in GitHub repository
  2. Check SSH connectivity: ssh -i /path/to/key root@72.62.91.251
  3. Verify image was pushed to GHCR: https://github.com/jnschaffer/rustan/pkgs/container/rustan
  4. Check GitHub Actions logs for detailed error messages

Automatic Updates (Future Enhancement)

For fully automatic image updates that do not depend on a GitHub Actions run:

Systemd Timer Approach

Create a systemd timer to run podman auto-update daily:

# /etc/systemd/system/podman-auto-update.timer
[Unit]
Description=Podman Auto Update Timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/podman-auto-update.service
[Unit]
Description=Podman Auto Update Service

[Service]
Type=oneshot
ExecStart=/usr/bin/podman auto-update
ExecStartPost=/usr/bin/systemctl restart devigo-site

Enable timer:

systemctl enable --now podman-auto-update.timer

This provides automatic daily checks for new images independent of GitHub Actions.
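
To confirm the timer is scheduled and see how the last run went:

systemctl list-timers podman-auto-update.timer
journalctl -u podman-auto-update.service -n 20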

Files Deployed

/etc/containers/systemd/
├── devigo-site.container           # Website Quadlet
└── devigo-decap-oauth.container    # OAuth Quadlet

/etc/caddy/sites-enabled/
├── devigo.caddy                    # Site reverse proxy config
└── devigo-decap-oauth.caddy        # OAuth reverse proxy config

/opt/devigo-oauth/
└── decap-oauth.env                 # OAuth environment variables

/var/log/caddy/
├── devigo.log                      # Site access logs
└── devigo-decap-oauth.log          # OAuth access logs

Security Considerations

  1. OAuth Credentials: Stored encrypted in Ansible Vault
  2. Container Security: Runs with NoNewPrivileges=true
  3. Network Isolation: Containers only on caddy network
  4. TLS: All traffic served over HTTPS; certificates are obtained automatically via the Cloudflare DNS-01 challenge
  5. SSH Keys: GitHub Actions uses dedicated SSH key
  6. Image Verification: Images pulled from trusted GHCR

Notes

  • Hugo version on control machine (v0.152.2) is newer than package.json (v0.144.2) - builds work fine
  • Contact form uses Airform.io service (patrick@devigo.no endpoint)
  • Build time: ~3-5 seconds locally, ~30-60 seconds in GitHub Actions
  • Image size: ~50MB (Hugo + Nginx Alpine-based)
  • The role does not build the Hugo site - that's handled by GitHub Actions
  • DNS must point to mini-vps (72.62.91.251) before site will be accessible

Example Playbook

---
- name: Deploy Production Services
  hosts: production
  become: true
  roles:
    - role: devigo
      tags: ['devigo', 'website', 'sales', 'oauth']

License

MIT

Author

Created for the rick-infra project by Claude Code

Troubleshooting: Specific Errors and Fixes

OAuth "Origin not trusted" Error

Symptom: Browser console shows Origin not trusted: https://devigo.no when clicking "Login with GitHub"

Cause: The TRUSTED_ORIGINS environment variable is incorrectly formatted or using the wrong variable name.

Solution:

  1. Check the deployed env file:

    ansible mini-vps -m shell -a "cat /opt/devigo-oauth/decap-oauth.env"
    
  2. Verify it contains:

    TRUSTED_ORIGINS=https://devigo.no,https://www.devigo.no
    

    NOT:

    • TRUSTED_ORIGIN= (singular - only supports one origin)
    • TRUSTED_ORIGINS=['https://...'] (YAML array - incorrect format)
  3. If incorrect, ensure roles/devigo/defaults/main.yml contains:

    devigo_oauth_trusted_origins: "https://devigo.no,https://www.devigo.no"
    
  4. Redeploy and restart:

    ansible-playbook playbooks/production.yml --tags devigo
    ansible mini-vps -m shell -a "systemctl restart devigo-decap-oauth"
    

Decap CMS "Response was not yaml" Error

Symptom: Decap CMS shows error: Response for config.yml was not yaml. (Content-Type: application/octet-stream)

Cause: The /admin/config.yml file is not being served with Content-Type: text/yaml

Solution:

  1. Verify nginx configuration in the container has the location block:

    ansible mini-vps -m shell -a "podman exec devigo-site grep -A 3 'location.*yml' /etc/nginx/nginx.conf"
    
  2. Should contain:

    location ~ \.(yml|yaml)$ {
        default_type text/yaml;
    }
    
  3. Test the MIME type:

    curl -I https://devigo.no/admin/config.yml | grep content-type
    # Expected: content-type: text/yaml
    
  4. If missing, check the rustan repository's nginx.conf file and redeploy.

HTML/CSS Files Show as "application/octet-stream"

Symptom: Site doesn't render correctly, browser shows files as application/octet-stream

Cause: A server-level types { } block in nginx.conf replaces nginx's inherited MIME type map, so most files fall back to the default type (application/octet-stream)

Solution:

  1. Check nginx.conf in the container:

    ansible mini-vps -m shell -a "podman exec devigo-site cat /etc/nginx/nginx.conf | head -20"
    
  2. Ensure there is NO types { ... } block at the server level

  3. YAML MIME type should be handled by location block only (see above)

  4. Fix in rustan repository, commit, and push to trigger rebuild

Cannot Pull Private GHCR Images

Symptom: podman pull fails with 401 Unauthorized or authentication errors

Cause: Missing or incorrect GitHub Container Registry authentication

Solution:

  1. Verify auth.json exists:

    ansible mini-vps -m shell -a "ls -la /etc/containers/auth.json"
    # Expected: -rw------- 1 root root
    
  2. Test authentication:

    ansible mini-vps -m shell -a "podman login ghcr.io --get-login"
    # Expected: jnschaffer (your GitHub username)
    
  3. If fails, verify the Quadlet has the environment variable:

    ansible mini-vps -m shell -a "systemctl cat devigo-site.service | grep REGISTRY_AUTH_FILE"
    # Expected: Environment=REGISTRY_AUTH_FILE=/etc/containers/auth.json
    
  4. If missing, redeploy the role:

    ansible-playbook playbooks/production.yml --tags devigo,podman
    

Service Won't Start After GitHub Actions Deployment

Symptom: systemctl status devigo-site shows failed or inactive

Causes and Solutions:

  1. Image pull failed:

    ansible mini-vps -m shell -a "journalctl -u devigo-site -n 50"
    # Look for "401 Unauthorized" - see authentication troubleshooting above
    
  2. Port already in use:

    ansible mini-vps -m shell -a "ss -tlnp | grep 9080"
    # If occupied, stop conflicting service or change port
    
  3. Corrupted image:

    ansible mini-vps -m shell -a "podman pull --force ghcr.io/jnschaffer/rustan:prod"
    ansible mini-vps -m shell -a "systemctl restart devigo-site"
    

Architecture Decisions

Why Apex Domain (devigo.no) Over WWW?

Decision: Use devigo.no as primary, redirect www.devigo.no → devigo.no

Rationale:

  • Shorter, cleaner URLs
  • Better for SEO (single canonical domain)
  • Simpler TLS certificate management
  • Modern web convention (www is legacy)

Why JavaScript Initialization for Decap CMS?

Decision: Use CMS.init() with manual config loading instead of <link> tag

Rationale:

  • Eliminates dependency on YAML MIME type for <link> element
  • More portable across different server configurations
  • Explicit and debuggable (clear in source what's happening)
  • Matches working configuration from previous deployment

Trade-off: Slightly more verbose HTML (3 extra lines of JavaScript)
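
A hedged sketch of what that initialization can look like in /admin/index.html; the real markup lives in the rustan repository and the CDN URL is an assumption:

<!-- /admin/index.html (illustrative sketch) -->
<script>
  // Must be set before the CMS bundle loads so it skips automatic initialization
  window.CMS_MANUAL_INIT = true;
</script>
<script src="https://unpkg.com/decap-cms@^3.0.0/dist/decap-cms.js"></script>
<script>
  // Start the CMS explicitly; it then fetches /admin/config.yml itself
  CMS.init();
</script>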

Why Location Block for YAML MIME Type?

Decision: Use location ~ \.(yml|yaml)$ { default_type text/yaml; } instead of server-level types {} block

Rationale:

  • Surgical approach - only affects YAML files
  • Doesn't override nginx's default MIME types for HTML/CSS/JS
  • Easier to understand and maintain
  • Prevents accidental breaking of other file types

Trade-off: Requires location block in nginx.conf (but this is minimal)

Why Quadlet Over Docker Compose?

Decision: Use Podman Quadlet with systemd integration instead of docker-compose

Rationale:

  • Native systemd integration (journald logs, dependency management)
  • Better security (rootless capable, no daemon required)
  • Automatic service management via systemd generators
  • Auto-update support via podman auto-update
  • Aligns with Ansible infrastructure-as-code approach

Trade-off: Less familiar to developers used to Docker Compose (but well-documented)

License

Part of the rick-infra project. See repository root for license information.