Compare commits: 4f8da38ca6...2fe194ba82 (10 commits)

Commits: 2fe194ba82, 9b12225ec8, f40349c2e7, 4f8b46fa14, a9f814d929, 44584c68f1, 0ecbb84fa5, 1350d10a7c, ecbeb07ba2, e8b76c6a72

README.md: 74 lines changed
@@ -32,43 +32,64 @@ Rick-infra implements a security-first infrastructure stack featuring:
 [architecture diagram truncated in the diff view; only the closing lines survived]
 └─────────────────────────────────────────────────────────────┘
 ```

+## Infrastructure Environments
+
+Rick-infra manages **two separate environments**:
+
+### 🏠 Homelab (arch-vps)
+
+Personal services and experimentation at **jnss.me**:
+
+- PostgreSQL, Valkey, Podman infrastructure
+- Authentik SSO (auth.jnss.me)
+- Nextcloud (cloud.jnss.me)
+- Gitea (git.jnss.me)
+
+### 🚀 Production (mini-vps)
+
+Client projects requiring high uptime:
+
+- Sigvild Gallery (sigvild.no, api.sigvild.no)
+- Minimal infrastructure footprint
+
 ## Quick Start

 ### Prerequisites

 - **VPS**: Fresh Arch Linux VPS with root access
-- **DNS**: Domain pointed to VPS IP address
+- **DNS**: Domains pointed to VPS IP addresses
 - **SSH**: Key-based authentication configured

-### Deploy Complete Stack
+### Deploy Infrastructure

 ```bash
 # 1. Clone repository
 git clone https://github.com/your-username/rick-infra.git
 cd rick-infra

-# 2. Configure inventory
-cp inventory/hosts.yml.example inventory/hosts.yml
-# Edit inventory/hosts.yml with your VPS details
+# 2. Configure inventory (already set up)
+# inventory/hosts.yml defines homelab and production groups

 # 3. Set up vault variables
-ansible-vault create host_vars/arch-vps/vault.yml
-# Add required secrets (see deployment guide)
+ansible-vault edit group_vars/production/vault.yml  # Production secrets
+ansible-vault edit host_vars/arch-vps/vault.yml     # Homelab secrets

-# 4. Deploy complete infrastructure
-ansible-playbook -i inventory/hosts.yml site.yml --ask-vault-pass
+# 4. Deploy to specific environment
+ansible-playbook playbooks/homelab.yml     # Deploy homelab
+ansible-playbook playbooks/production.yml  # Deploy production
+ansible-playbook site.yml                  # Deploy both
 ```

-**Total deployment time**: 8-14 minutes for complete stack
+**Deployment times**:
+
+- Homelab (full stack): 8-14 minutes
+- Production (minimal): 3-5 minutes

 ### Verify Deployment

 ```bash
-# Check services
+# Check homelab services
 curl -I https://auth.jnss.me/   # Authentik SSO
-curl -I https://git.jnss.me/    # Gitea (if enabled)
+curl -I https://cloud.jnss.me/  # Nextcloud
+ansible homelab -a "systemctl status postgresql valkey caddy"

-# Check infrastructure
-ansible arch-vps -m command -a "systemctl status postgresql valkey caddy"
+# Check production services
+curl -I https://sigvild.no/     # Sigvild Gallery
+ansible production -a "systemctl status sigvild-gallery caddy"
 ```

 ## Key Features
@@ -116,21 +137,32 @@ ansible arch-vps -m command -a "systemctl status postgresql valkey caddy"

 ## Core Services

-### Infrastructure Services (Native systemd)
+### Homelab Services (arch-vps)
+
+**Infrastructure (Native systemd)**:
 - **PostgreSQL** - High-performance database with Unix socket support
 - **Valkey** - Redis-compatible cache with Unix socket support
 - **Caddy** - Automatic HTTPS reverse proxy with Cloudflare DNS
 - **Podman** - Rootless container runtime with systemd integration

-### Authentication Services
+**Authentication**:
 - **Authentik** - Modern SSO server with OAuth2/OIDC/SAML support
-- **Forward Auth** - Transparent service protection via Caddy integration
+- **Forward Auth** - Transparent service protection via Caddy
-- **Multi-Factor Authentication** - TOTP, WebAuthn, SMS support

-### Application Services (Containerized)
+**Applications (Containerized)**:
+- **Nextcloud** - Personal cloud storage and file sync
 - **Gitea** - Self-hosted Git service with SSO integration
-- **Gallery** - Media gallery with authentication
-- **Custom Services** - Template for additional service integration
+
+### Production Services (mini-vps)
+
+**Infrastructure**:
+- **Caddy** - Automatic HTTPS reverse proxy with Cloudflare DNS
+
+**Applications**:
+- **Sigvild Gallery** - Wedding photo gallery with PocketBase API
+  - Frontend: SvelteKit static site
+  - Backend: Go + SQLite (PocketBase)
+  - Domains: sigvild.no, api.sigvild.no

 ## Architecture Benefits
@@ -1,68 +0,0 @@
----
-# Deploy Unix Socket Updates for PostgreSQL, Valkey, Authentik, and Gitea
-# This playbook updates services to use Unix sockets for inter-process communication
-
-- name: Deploy Unix socket configuration updates
-  hosts: arch-vps
-  become: yes
-
-  tasks:
-    - name: Display deployment plan
-      debug:
-        msg: |
-          🔧 Unix Socket Migration Plan
-          =============================
-
-          📦 Services to Update:
-          1. PostgreSQL - Switch to socket-only (no TCP)
-          2. Valkey - Add Unix socket support
-          3. Authentik - Use sockets for DB/cache
-          4. Gitea - Use sockets for DB/cache
-
-          🔒 Security Benefits:
-          - Zero network exposure for databases
-          - Better performance (25-30% faster)
-          - Simplified security model
-
-    - name: Update PostgreSQL to socket-only
-      include_role:
-        name: postgresql
-      tags: [postgresql]
-
-    - name: Update Valkey with Unix socket
-      include_role:
-        name: valkey
-      tags: [valkey]
-
-    - name: Update Authentik for Unix sockets
-      include_role:
-        name: authentik
-      tags: [authentik]
-
-    - name: Update Gitea for Unix sockets
-      include_role:
-        name: gitea
-      tags: [gitea]
-
-    - name: Verify socket files exist
-      stat:
-        path: "{{ item }}"
-      loop:
-        - /run/postgresql/.s.PGSQL.5432
-        - /run/valkey/valkey.sock
-      register: socket_checks
-
-    - name: Display results
-      debug:
-        msg: |
-          ✅ Deployment Complete!
-
-          Socket Status:
-          {% for check in socket_checks.results %}
-          - {{ check.item }}: {{ "EXISTS" if check.stat.exists else "MISSING" }}
-          {% endfor %}
-
-          Next Steps:
-          1. Check service logs: journalctl -u authentik-pod
-          2. Test Authentik: curl http://arch-vps:9000/if/flow/initial-setup/
-          3. Test Gitea: curl http://arch-vps:3000/
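The socket verification in this removed playbook is easy to reproduce by hand, and a connect test is stronger than the `stat` the playbook used. A minimal Python sketch, assuming the same default socket paths the playbook checked:

```python
# Connect-test the Unix sockets the removed playbook only stat-ed.
# Socket paths are taken from the playbook above; adjust if yours differ.
import socket
import sys

SOCKETS = [
    "/run/postgresql/.s.PGSQL.5432",
    "/run/valkey/valkey.sock",
]

status = 0
for path in SOCKETS:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect(path)  # succeeds only if a server is accepting connections
        print(f"{path}: OK")
    except OSError as exc:
        print(f"{path}: FAILED ({exc})")
        status = 1
    finally:
        s.close()
sys.exit(status)
```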
@@ -1,139 +1,9 @@
 # Architecture Decision Records (ADR)

-This document records the significant architectural decisions made in the rick-infra project, particularly focusing on the authentication and infrastructure components.
+This document records the significant architectural decisions made in the rick-infra project.

-## Table of Contents
-
-- [ADR-001: Native Database Services over Containerized](#adr-001-native-database-services-over-containerized)
-- [ADR-002: Unix Socket IPC Architecture](#adr-002-unix-socket-ipc-architecture)
-- [ADR-003: Podman + systemd Container Orchestration](#adr-003-podman--systemd-container-orchestration)
-- [ADR-004: Forward Authentication Security Model](#adr-004-forward-authentication-security-model)
-- [ADR-005: Rootful Containers with Infrastructure Fact Pattern](#adr-005-rootful-containers-with-infrastructure-fact-pattern)
-
 ---

+## Unix Socket IPC Architecture
+
-## ADR-001: Native Database Services over Containerized
-
-**Status**: ✅ Accepted
-**Date**: December 2025
-**Deciders**: Infrastructure Team
-**Technical Story**: Need reliable database and cache services for containerized applications with optimal performance and security.
-
-### Context
-
-When deploying containerized applications that require database and cache services, there are two primary architectural approaches:
-
-1. **Containerized Everything**: Deploy databases and cache services as containers
-2. **Native Infrastructure Services**: Use systemd-managed native services for infrastructure, containers for applications
-
-### Decision
-
-We will use **native systemd services** for core infrastructure components (PostgreSQL, Valkey/Redis) while using containers only for application services (Authentik, Gitea, etc.).
-
-### Rationale
-
-#### Performance Benefits
-
-- **No Container Overhead**: Native services eliminate container runtime overhead
-  ```bash
-  # Native PostgreSQL: Direct filesystem access
-  # Containerized PostgreSQL: Container filesystem layer overhead
-  ```
-- **Direct System Resources**: Native services access system resources without abstraction layers
-- **Optimized Memory Management**: OS-level memory management without container constraints
-- **Disk I/O Performance**: Direct access to storage without container volume mounting overhead
-
-#### Security Advantages
-
-- **Unix Socket Security**: Native services can provide Unix sockets with filesystem-based security
-  ```bash
-  # Native: /var/run/postgresql/.s.PGSQL.5432 (postgres:postgres 0770)
-  # Containerized: Requires network exposure or complex socket mounting
-  ```
-- **Reduced Attack Surface**: No container runtime vulnerabilities for critical infrastructure
-- **OS-Level Security**: Standard system security mechanisms apply directly
-- **Group-Based Access Control**: Simple Unix group membership for service access
-
-#### Operational Excellence
-
-- **Standard Tooling**: Familiar systemd service management
-  ```bash
-  systemctl status postgresql
-  journalctl -u postgresql -f
-  systemctl restart postgresql
-  ```
-- **Package Management**: Standard OS package updates and security patches
-- **Backup Integration**: Native backup tools work seamlessly
-  ```bash
-  pg_dump -h /var/run/postgresql authentik > backup.sql
-  ```
-- **Monitoring**: Standard system monitoring tools apply directly
-
-#### Reliability
-
-- **systemd Integration**: Robust service lifecycle management
-  ```ini
-  [Unit]
-  Description=PostgreSQL database server
-  After=network.target
-
-  [Service]
-  Type=forking
-  Restart=always
-  RestartSec=5
-  ```
-- **Resource Isolation**: systemd provides resource isolation without container overhead
-- **Proven Architecture**: Battle-tested approach used by major infrastructure providers
-
-### Consequences
-
-#### Positive
-
-- **Performance**: 15-25% better database performance in benchmarks
-- **Security**: Eliminated network-based database attacks via Unix sockets
-- **Operations**: Simplified backup, monitoring, and maintenance procedures
-- **Resource Usage**: Lower memory and CPU overhead
-- **Reliability**: More predictable service behavior
-
-#### Negative
-
-- **Containerization Purity**: Not a "pure" containerized environment
-- **Portability**: Slightly less portable than full-container approach
-- **Learning Curve**: Team needs to understand both systemd and container management
-
-#### Neutral
-
-- **Complexity**: Different but not necessarily more complex than container orchestration
-- **Tooling**: Different toolset but equally capable
-
-### Implementation Notes
-
-```yaml
-# Infrastructure services (native systemd)
-- postgresql  # Native database service
-- valkey      # Native cache service
-- caddy       # Native reverse proxy
-- podman      # Container runtime
-
-# Application services (containerized)
-- authentik   # Authentication service
-- gitea       # Git service
-```
-
-### Alternatives Considered
-
-1. **Full Containerization**: Rejected due to performance and operational complexity
-2. **Mixed with Docker**: Rejected in favor of Podman for security benefits
-3. **VM-based Infrastructure**: Rejected due to resource overhead
-
----
-
-## ADR-002: Unix Socket IPC Architecture
-
-**Status**: ✅ Accepted
-**Date**: December 2025
-**Deciders**: Infrastructure Team
-**Technical Story**: Secure and performant communication between containerized applications and native infrastructure services.
-
 ### Context
@@ -141,11 +11,10 @@ Containerized applications need to communicate with database and cache services.

 1. **Network TCP/IP**: Standard network protocols
 2. **Unix Domain Sockets**: Filesystem-based IPC
-3. **Shared Memory**: Direct memory sharing (complex)

 ### Decision

-We will use **Unix domain sockets** for all communication between containerized applications and infrastructure services.
+We will use **Unix domain sockets** for all communication between applications and infrastructure services.

 ### Rationale
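To make the decision concrete: from the application side, Unix-socket IPC just means handing the client library a socket directory instead of a host and port. A minimal sketch with psycopg2, which treats a filesystem path in `host` as the socket directory; the database name and role here are illustrative:

```python
# Connecting to PostgreSQL over its Unix socket instead of TCP.
# No network port is involved; dbname/user are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="/run/postgresql",  # socket directory, not a hostname
    dbname="authentik",
    user="authentik",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()
```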
@@ -269,10 +138,6 @@ podman exec authentik-server id

 ## ADR-003: Podman + systemd Container Orchestration

-**Status**: ✅ Accepted
-**Date**: December 2025
-**Updated**: December 2025 (System-level deployment pattern)
-**Deciders**: Infrastructure Team
 **Technical Story**: Container orchestration solution for secure application deployment with systemd integration.

 ### Context
@@ -493,12 +358,9 @@ ps aux | grep authentik-server | head -1 | awk '{print $2}' | \

 ---

-## ADR-004: Forward Authentication Security Model
+## OAuth/OIDC and Forward Authentication Security Model

-**Status**: ✅ Accepted
-**Date**: December 2025
-**Deciders**: Infrastructure Team
-**Technical Story**: Centralized authentication and authorization for multiple services without modifying existing applications.
+**Technical Story**: Centralized authentication and authorization for multiple services using industry-standard OAuth2/OIDC protocols where supported, with forward authentication as a fallback.

 ### Context
@@ -506,53 +368,122 @@ Authentication strategies for multiple services:

 1. **Per-Service Authentication**: Each service handles its own authentication
 2. **Shared Database**: Services share authentication database
-3. **Forward Authentication**: Reverse proxy handles authentication
-4. **OAuth2/OIDC Integration**: Services implement OAuth2 clients
+3. **OAuth2/OIDC Integration**: Services implement standard OAuth2/OIDC clients
+4. **Forward Authentication**: Reverse proxy handles authentication for services without OAuth support

 ### Decision

-We will use **forward authentication** with Caddy reverse proxy and Authentik authentication server as the primary authentication model.
+We will use **OAuth2/OIDC integration** as the primary authentication method for services that support it, and **forward authentication** for services that do not support native OAuth2/OIDC integration.

 ### Rationale

-#### Security Benefits
+#### OAuth/OIDC as Primary Method

-- **Single Point of Control**: Centralized authentication policy
+**Security Benefits**:
+- **Standard Protocol**: Industry-standard authentication flow (RFC 6749, RFC 7636)
+- **Token-Based Security**: Secure JWT tokens with cryptographic signatures
+- **Proper Session Management**: Native application session handling with refresh tokens
+- **Scope-Based Authorization**: Fine-grained permission control via OAuth scopes
+- **PKCE Support**: Protection against authorization code interception attacks
+
+**Integration Benefits**:
+- **Native Support**: Applications designed for OAuth/OIDC work seamlessly
+- **Better UX**: Proper redirect flows, logout handling, and token refresh
+- **API Access**: OAuth tokens enable secure API integrations
+- **Standard Claims**: OpenID Connect user info endpoint provides standardized user data
+- **Multi-Application SSO**: Proper single sign-on with token sharing
+
+**Examples**: Nextcloud, Gitea, Grafana, many modern applications
+
+#### Forward Auth as Fallback
+
+**Use Cases**:
+- Services without OAuth/OIDC support
+- Legacy applications that cannot be modified
+- Static sites requiring authentication
+- Simple internal tools
+
+**Security Benefits**:
 - **Zero Application Changes**: Protect existing services without modification
-- **Consistent Security**: Same security model across all services
-- **Session Management**: Centralized session handling and timeouts
-- **Multi-Factor Authentication**: MFA applied consistently across services
+- **Header-Based Identity**: Simple identity propagation to backend
+- **Transparent Protection**: Services receive pre-authenticated requests

-#### Operational Advantages
+**Limitations**:
+- **Non-Standard**: Not using industry-standard authentication protocols
+- **Proxy Dependency**: All requests must flow through authenticating proxy
+- **Limited Logout**: Complex logout scenarios across services
+- **Header Trust**: Backend must trust proxy-provided headers

-- **Simplified Deployment**: No per-service authentication setup
+#### Shared Benefits (Both Methods)
+
+- **Single Point of Control**: Centralized authentication policy via Authentik
+- **Consistent Security**: Same authentication provider across all services
+- **Multi-Factor Authentication**: MFA applied consistently via Authentik
 - **Audit Trail**: Centralized authentication logging
-- **Policy Management**: Single place to manage access policies
 - **User Management**: One system for all user administration
-- **Service Independence**: Services focus on business logic
-
-#### Integration Benefits
-
-- **Transparent to Applications**: Services receive authenticated requests
-- **Header-Based Identity**: Simple identity propagation
-  ```http
-  Remote-User: john.doe
-  Remote-Name: John Doe
-  Remote-Email: john.doe@company.com
-  Remote-Groups: admins,developers
-  ```
-- **Gradual Migration**: Can protect services incrementally
-- **Fallback Support**: Can coexist with service-native authentication

 ### Implementation Architecture

+#### OAuth/OIDC Flow (Primary Method)
+
+```
+┌─────────────┐    ┌─────────────┐    ┌─────────────┐
+│    User     │    │   Service   │    │  Authentik  │
+│             │    │ (OAuth App) │    │    (IdP)    │
+└──────┬──────┘    └──────┬──────┘    └──────┬──────┘
+       │                  │                  │
+       │ Access Service   │                  │
+       │─────────────────▶│                  │
+       │                  │ No session       │
+       │ 302 → OAuth      │                  │
+       │◀─────────────────│                  │
+       │                  │                  │
+       │ GET /authorize?client_id=...&redirect_uri=...
+       │────────────────────────────────────▶│
+       │                  │                  │
+       │ Login form (if not authenticated)   │
+       │◀────────────────────────────────────│
+       │                  │                  │
+       │ Credentials      │                  │
+       │────────────────────────────────────▶│
+       │                  │                  │
+       │ 302 → callback?code=AUTH_CODE       │
+       │◀────────────────────────────────────│
+       │                  │                  │
+       │ GET /callback?code=AUTH_CODE        │
+       │─────────────────▶│                  │
+       │                  │ POST /token      │
+       │                  │ code=AUTH_CODE   │
+       │                  │─────────────────▶│
+       │                  │ access_token     │
+       │                  │ id_token (JWT)   │
+       │                  │◀─────────────────│
+       │ Set-Cookie       │ GET /userinfo    │
+       │ 302 → /dashboard │─────────────────▶│
+       │◀─────────────────│ User claims      │
+       │                  │◀─────────────────│
+       │                  │                  │
+       │ GET /dashboard   │                  │
+       │─────────────────▶│                  │
+       │ Dashboard        │                  │
+       │◀─────────────────│                  │
+```
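The `POST /token` exchange at the heart of this diagram is the step services implement. A minimal sketch, assuming Authentik's conventional `/application/o/token/` endpoint path; the authorization code and client credentials are placeholders:

```python
# The code-for-token exchange from the diagram above.
# TOKEN_URL follows Authentik's usual layout; values are placeholders.
import requests

TOKEN_URL = "https://auth.jnss.me/application/o/token/"

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_CALLBACK",
        "redirect_uri": "https://service.example/callback",
        "client_id": "your-client-id",
        "client_secret": "your-client-secret",
    },
    timeout=10,
)
resp.raise_for_status()
tokens = resp.json()  # access_token, id_token, refresh_token, expires_in
print(sorted(tokens.keys()))
```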
+#### Forward Auth Flow (Fallback Method)
+
 ```
 ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
 │    User     │    │    Caddy    │    │  Authentik  │    │   Service   │
-│             │    │   (Proxy)   │    │   (Auth)    │    │  (Backend)  │
+│             │    │   (Proxy)   │    │  (Forward)  │    │  (Backend)  │
 └──────┬──────┘    └──────┬──────┘    └──────┬──────┘    └──────┬──────┘
        │                  │                  │                  │
-       │ GET /dashboard   │                  │                  │
+       │ GET /            │                  │                  │
        │─────────────────▶│                  │                  │
        │                  │                  │                  │
        │                  │ Forward Auth     │                  │
@@ -561,19 +492,19 @@ We will use **forward authentication** with Caddy reverse proxy and Authentik au
        │                  │ 401 Unauthorized │                  │
        │                  │◀─────────────────│                  │
        │                  │                  │                  │
-       │ 302 → /auth/login│                  │                  │
+       │ 302 → /auth      │                  │                  │
        │◀─────────────────│                  │                  │
        │                  │                  │                  │
        │ Login form       │                  │                  │
-       │─────────────────▶│─────────────────▶│                  │
+       │────────────────────────────────────▶│                  │
        │                  │                  │                  │
        │ Credentials      │                  │                  │
-       │─────────────────▶│─────────────────▶│                  │
+       │────────────────────────────────────▶│                  │
        │                  │                  │                  │
        │ Set-Cookie       │                  │                  │
-       │◀─────────────────│◀─────────────────│                  │
+       │◀────────────────────────────────────│                  │
        │                  │                  │                  │
-       │ GET /dashboard   │                  │                  │
+       │ GET /            │                  │                  │
        │─────────────────▶│                  │                  │
        │                  │                  │                  │
        │                  │ Forward Auth     │                  │
@@ -582,21 +513,93 @@ We will use **forward authentication** with Caddy reverse proxy and Authentik au
        │                  │ 200 + Headers    │                  │
        │                  │◀─────────────────│                  │
        │                  │                  │                  │
-       │                  │ GET /dashboard + Auth Headers       │
+       │                  │ Proxy + Headers  │                  │
        │                  │────────────────────────────────────▶│
-       │                  │                                     │
-       │                  │ Dashboard Content                   │
+       │                  │                  │                  │
+       │                  │ Response         │                  │
        │                  │◀────────────────────────────────────│
-       │                  │                                     │
-       │ Dashboard        │                                     │
-       │◀─────────────────│                                     │
+       │                  │                  │                  │
+       │ Content          │                  │                  │
+       │◀─────────────────│                  │                  │
 ```

-### Caddy Configuration
+### OAuth/OIDC Configuration Examples
+
+#### Nextcloud OAuth Configuration
+
+```php
+// Nextcloud config.php
+'oidc_login_provider_url' => 'https://auth.jnss.me/application/o/nextcloud/',
+'oidc_login_client_id' => 'nextcloud-client-id',
+'oidc_login_client_secret' => 'secret-from-authentik',
+'oidc_login_auto_redirect' => true,
+'oidc_login_end_session_redirect' => true,
+'oidc_login_button_text' => 'Login with SSO',
+'oidc_login_hide_password_form' => true,
+'oidc_login_use_id_token' => true,
+'oidc_login_attributes' => [
+    'id' => 'preferred_username',
+    'name' => 'name',
+    'mail' => 'email',
+    'groups' => 'groups',
+],
+'oidc_login_default_group' => 'users',
+'oidc_login_use_external_storage' => false,
+'oidc_login_scope' => 'openid profile email groups',
+'oidc_login_proxy_ldap' => false,
+'oidc_login_disable_registration' => false,
+'oidc_login_redir_fallback' => true,
+'oidc_login_tls_verify' => true,
+```
+
+#### Gitea OAuth Configuration
+
+```ini
+# Gitea app.ini
+[openid]
+ENABLE_OPENID_SIGNIN = false
+ENABLE_OPENID_SIGNUP = false
+
+[oauth2_client]
+REGISTER_EMAIL_CONFIRM = false
+OPENID_CONNECT_SCOPES = openid email profile groups
+ENABLE_AUTO_REGISTRATION = true
+USERNAME = preferred_username
+EMAIL = email
+ACCOUNT_LINKING = auto
+```
+
+**Authentik Provider Configuration** (Gitea):
+- Provider Type: OAuth2/OpenID Provider
+- Client ID: `gitea`
+- Client Secret: Generated by Authentik
+- Redirect URIs: `https://git.jnss.me/user/oauth2/Authentik/callback`
+- Scopes: `openid`, `profile`, `email`, `groups`
+
+#### Authentik OAuth2 Provider Settings
+
+```yaml
+# OAuth2/OIDC Provider configuration in Authentik
+name: "Nextcloud OAuth Provider"
+authorization_flow: "default-authorization-flow"
+client_type: "confidential"
+client_id: "nextcloud-client-id"
+redirect_uris: "https://cloud.jnss.me/apps/oidc_login/oidc"
+signing_key: "authentik-default-key"
+property_mappings:
+  - "authentik default OAuth Mapping: OpenID 'openid'"
+  - "authentik default OAuth Mapping: OpenID 'email'"
+  - "authentik default OAuth Mapping: OpenID 'profile'"
+  - "Custom: Groups"  # Maps user groups to 'groups' claim
+```
+
+### Forward Auth Configuration Examples
+
+#### Caddy Configuration for Forward Auth

 ```caddyfile
-# Service protection template
-dashboard.jnss.me {
+# whoami service with forward authentication
+whoami.jnss.me {
     # Forward authentication to Authentik
     forward_auth https://auth.jnss.me {
         uri /outpost.goauthentik.io/auth/caddy
     }
 }
 ```

-### Service Integration
+#### Authentik Proxy Provider Configuration
+
+```yaml
+# Authentik Proxy Provider for forward auth
+name: "Whoami Forward Auth"
+type: "proxy"
+authorization_flow: "default-authorization-flow"
+external_host: "https://whoami.jnss.me"
+internal_host: "http://localhost:8080"
+skip_path_regex: "^/(health|metrics).*"
+mode: "forward_single"  # Single application mode
+```
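A quick behavioral check for this provider/proxy pairing, as a sketch: an anonymous request to the protected host should be redirected toward Authentik rather than reach the backend. The exact redirect status is an assumption about the outpost's behavior:

```python
# Smoke test for the forward_auth setup above: with no session cookie,
# Caddy should redirect to Authentik instead of proxying to whoami.
import requests

resp = requests.get(
    "https://whoami.jnss.me/",
    allow_redirects=False,
    timeout=10,
)
print(resp.status_code, resp.headers.get("Location"))
assert resp.status_code in (301, 302, 303, 307, 308)
```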
+#### Service Integration (Forward Auth)

 Services receive authentication information via HTTP headers:

 ```python
 # Example service code (Python Flask)
-@app.route('/dashboard')
-def dashboard():
+@app.route('/')
+def index():
     username = request.headers.get('Remote-User')
     name = request.headers.get('Remote-Name')
     email = request.headers.get('Remote-Email')
     groups = request.headers.get('Remote-Groups', '').split(',')

-    if 'admins' in groups:
-        # Admin functionality
-        pass
-
-    return render_template('dashboard.html',
+    return render_template('index.html',
                            username=username,
-                           name=name)
+                           name=name,
+                           email=email,
+                           groups=groups)
 ```

-### Authentik Provider Configuration
-
-```yaml
-# Authentik Proxy Provider configuration
-name: "Service Forward Auth"
-authorization_flow: "default-authorization-flow"
-external_host: "https://service.jnss.me"
-internal_host: "http://localhost:8080"
-skip_path_regex: "^/(health|metrics|static).*"
-```
-
 ### Authorization Policies

+Both OAuth and Forward Auth support Authentik authorization policies:
+
 ```yaml
 # Example authorization policy in Authentik
 policy_bindings:
   - policy: "group_admins_only"
-    target: "service_dashboard"
+    target: "nextcloud_oauth_provider"
     order: 0

-  - policy: "deny_external_ips"
-    target: "admin_endpoints"
+  - policy: "require_mfa"
+    target: "gitea_oauth_provider"
     order: 1
+
+  - policy: "internal_network_only"
+    target: "whoami_proxy_provider"
+    order: 0
 ```

+### Decision Matrix: OAuth/OIDC vs Forward Auth
+
+| Criteria | OAuth/OIDC | Forward Auth |
+|----------|------------|--------------|
+| **Application Support** | Requires native OAuth/OIDC support | Any application |
+| **Protocol Standard** | Industry standard (RFC 6749, 7636) | Proprietary/custom |
+| **Token Management** | Native refresh tokens, proper expiry | Session-based only |
+| **Logout Handling** | Proper logout flow | Complex, proxy-dependent |
+| **API Access** | Full API support via tokens | Header-only |
+| **Implementation Effort** | Configure OAuth settings | Zero app changes |
+| **User Experience** | Standard OAuth redirects | Transparent |
+| **Security Model** | Token-based with scopes | Header trust model |
+| **When to Use** | **Nextcloud, Gitea, modern apps** | **Static sites, legacy apps, whoami** |
+
 ### Consequences

 #### Positive

-- **Security**: Consistent, centralized authentication and authorization
-- **Simplicity**: No application changes required for protection
-- **Flexibility**: Fine-grained access control through Authentik policies
-- **Auditability**: Centralized authentication logging
-- **User Experience**: Single sign-on across all services
+- **Standards Compliance**: OAuth/OIDC uses industry-standard protocols
+- **Security**: Multiple authentication options with appropriate security models
+- **Flexibility**: Right tool for each service (OAuth when possible, forward auth when needed)
+- **Auditability**: Centralized authentication logging via Authentik
+- **User Experience**: Proper SSO across all services
+- **Token Security**: OAuth provides secure token refresh and scope management
+- **Graceful Degradation**: Forward auth available for services without OAuth support

 #### Negative

-- **Single Point of Failure**: Authentication system failure affects all services
-- **Performance**: Additional hop for authentication checks
-- **Complexity**: Additional component in the request path
+- **Complexity**: Need to understand two authentication methods
+- **Configuration Overhead**: OAuth requires per-service configuration
+- **Single Point of Failure**: Authentik failure affects all services
+- **Learning Curve**: Team must understand OAuth flows and forward auth model

 #### Mitigation Strategies

-- **High Availability**: Robust deployment and monitoring of auth components
-- **Caching**: Session caching to reduce authentication overhead
-- **Fallback**: Emergency bypass procedures for critical services
-- **Monitoring**: Comprehensive monitoring of authentication flow
+- **Documentation**: Clear decision guide for choosing OAuth vs forward auth
+- **Templates**: Reusable OAuth configuration templates for common services
+- **High Availability**: Robust deployment and monitoring of Authentik
+- **Monitoring**: Comprehensive monitoring of both authentication flows
+- **Testing**: Automated tests for authentication flows

 ### Security Considerations

-#### Session Security
+#### OAuth/OIDC Security

 ```yaml
-# Authentik session settings
-session_cookie_age: 3600  # 1 hour
-session_cookie_secure: true
-session_cookie_samesite: "Strict"
-session_remember_me: false
+# Authentik OAuth2 Provider security settings
+authorization_code_validity: 60    # 1 minute
+access_code_validity: 3600         # 1 hour
+refresh_code_validity: 2592000     # 30 days
+include_claims_in_id_token: true
+signing_key: "authentik-default-key"
+sub_mode: "hashed_user_id"
+issuer_mode: "per_provider"
 ```

+**Best Practices**:
+- Use PKCE for all OAuth flows (protection against interception)
+- Implement proper token rotation (refresh tokens expire and rotate)
+- Validate `aud` (audience) and `iss` (issuer) claims in JWT tokens
+- Use short-lived access tokens (1 hour)
+- Store client secrets securely (Ansible Vault)
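The `aud`/`iss` bullet deserves a concrete shape. A minimal sketch using PyJWT; the JWKS URL and issuer follow Authentik's per-application conventions, and the audience mirrors the placeholder Nextcloud client ID above:

```python
# Validate id_token signature, audience, and issuer with PyJWT (>= 2.x).
# raw_id_token is a placeholder for the JWT from the token response.
import jwt
from jwt import PyJWKClient

raw_id_token = "eyJ..."  # replace with a real id_token

jwks = PyJWKClient("https://auth.jnss.me/application/o/nextcloud/jwks/")
signing_key = jwks.get_signing_key_from_jwt(raw_id_token)

claims = jwt.decode(
    raw_id_token,
    signing_key.key,
    algorithms=["RS256"],
    audience="nextcloud-client-id",  # aud must match the client_id
    issuer="https://auth.jnss.me/application/o/nextcloud/",  # iss must match
)
print(claims.get("preferred_username"))
```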
+#### Forward Auth Security
+
+```yaml
+# Authentik Proxy Provider security settings
+token_validity: 3600  # 1 hour session
+cookie_domain: ".jnss.me"
+skip_path_regex: "^/(health|metrics|static).*"
+```
+
+**Best Practices**:
+- Trust only Authentik-provided headers
+- Validate `Remote-User` header exists before granting access
+- Use HTTPS for all forward auth endpoints
+- Implement proper session timeouts
+- Strip user-provided authentication headers at proxy
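A minimal sketch of the first two bullets on the backend side, assuming the proxy strips client-supplied `Remote-*` headers as described above:

```python
# Reject any request that did not arrive through the authenticating proxy.
from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def require_forward_auth():
    if not request.headers.get("Remote-User"):
        abort(401)  # no proxy-injected identity -> not authenticated

@app.route("/")
def index():
    return f"Hello, {request.headers['Remote-User']}"

if __name__ == "__main__":
    app.run("127.0.0.1", 8080)
```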
 #### Access Control

 - **Group-Based Authorization**: Users assigned to groups, groups to applications
-- **Time-Based Access**: Temporary access grants
-- **IP-Based Restrictions**: Geographic or network-based access control
+- **Policy Engine**: Authentik policies for fine-grained access control
 - **MFA Requirements**: Multi-factor authentication for sensitive services
+- **IP-Based Restrictions**: Geographic or network-based access control
+- **Time-Based Access**: Temporary access grants via policies

 #### Audit Logging

 ```json
 {
-  "timestamp": "2025-12-11T17:52:31Z",
-  "event": "authentication_success",
+  "timestamp": "2025-12-15T10:30:00Z",
+  "event": "oauth_authorization",
   "user": "john.doe",
-  "service": "dashboard.jnss.me",
+  "application": "nextcloud",
+  "scopes": ["openid", "email", "profile", "groups"],
   "ip": "192.168.1.100",
   "user_agent": "Mozilla/5.0..."
 }
 ```

-### Alternative Models Supported
+### Implementation Examples by Service Type

-While forward auth is primary, we also support:
+#### OAuth/OIDC Services (Primary Method)

-1. **OAuth2/OIDC Integration**: For applications that can implement OAuth2
-2. **API Key Authentication**: For service-to-service communication
-3. **Service-Native Auth**: For legacy applications that cannot be easily protected
+**Nextcloud**:
+```caddyfile
+cloud.jnss.me {
+    reverse_proxy localhost:8080
+}
+# OAuth configured within Nextcloud application
+```

-### Implementation Examples
+**Gitea**:
+```caddyfile
+git.jnss.me {
+    reverse_proxy localhost:3000
+}
+# OAuth configured within Gitea application settings
+```

-#### Protecting a Static Site
+#### Forward Auth Services (Fallback Method)
+
+**Whoami (test/demo service)**:
+```caddyfile
+whoami.jnss.me {
+    forward_auth https://auth.jnss.me {
+        uri /outpost.goauthentik.io/auth/caddy
+        copy_headers Remote-User Remote-Name Remote-Email Remote-Groups
+    }
+    reverse_proxy localhost:8080
+}
+```
+
+**Static Documentation Site**:

 ```caddyfile
 docs.jnss.me {
     forward_auth https://auth.jnss.me {
@@ -734,33 +811,44 @@ docs.jnss.me {
     }
 }
 ```

-#### Protecting an API
+**Internal API (no OAuth support)**:

 ```caddyfile
 api.jnss.me {
     forward_auth https://auth.jnss.me {
         uri /outpost.goauthentik.io/auth/caddy
         copy_headers Remote-User Remote-Email Remote-Groups
     }

     reverse_proxy localhost:3000
 }
 ```

-#### Public Endpoints with Selective Protection
+#### Selective Protection (Public + Protected Paths)

 ```caddyfile
 app.jnss.me {
-    # Public endpoints (no auth)
+    # Public endpoints (no auth required)
     handle /health {
         reverse_proxy localhost:8080
     }

+    handle /metrics {
+        reverse_proxy localhost:8080
+    }
+
     handle /public/* {
         reverse_proxy localhost:8080
     }

-    # Protected endpoints
+    # Protected endpoints (forward auth)
+    handle /admin/* {
+        forward_auth https://auth.jnss.me {
+            uri /outpost.goauthentik.io/auth/caddy
+            copy_headers Remote-User Remote-Groups
+        }
+        reverse_proxy localhost:8080
+    }
+
+    # Default: protected
     handle {
         forward_auth https://auth.jnss.me {
             uri /outpost.goauthentik.io/auth/caddy
@@ -773,18 +861,17 @@ app.jnss.me {

 ### Alternatives Considered

-1. **OAuth2 Only**: Rejected due to application modification requirements
-2. **Shared Database**: Rejected due to tight coupling between services
-3. **VPN-Based Access**: Rejected due to operational complexity for web services
-4. **Per-Service Authentication**: Rejected due to management overhead
+1. **OAuth2/OIDC Only**: Rejected because many services don't support OAuth natively
+2. **Forward Auth Only**: Rejected because it doesn't leverage native OAuth support in modern apps
+3. **Per-Service Authentication**: Rejected due to management overhead and inconsistent security
+4. **Shared Database**: Rejected due to tight coupling between services
+5. **VPN-Based Access**: Rejected due to operational complexity for web services
+6. **SAML**: Rejected in favor of modern OAuth2/OIDC standards

 ---

-## ADR-005: Rootful Containers with Infrastructure Fact Pattern
+## Rootful Containers with Infrastructure Fact Pattern

-**Status**: ✅ Accepted
-**Date**: December 2025
-**Deciders**: Infrastructure Team
 **Technical Story**: Enable containerized applications to access native infrastructure services (PostgreSQL, Valkey) via Unix sockets with group-based permissions.

 ### Context
@@ -1038,25 +1125,122 @@ curl -I http://127.0.0.1:9000/
 1. **Rootless with user namespace** - Discarded due to GID remapping breaking group-based socket access
 2. **TCP-only connections** - Rejected to maintain Unix socket security and performance benefits
 3. **Hardcoded GIDs** - Rejected for portability; facts provide dynamic resolution
-4. **Directory permissions (777)** - Rejected for security; group-based access more restrictive
+4. **Directory permissions (777)** - Rejected for security; group-based access more restrictive. This was later reverted to 777 after Nextcloud switched from root to www-data, which broke group-based permissions.

 ---

-## Summary
+---

-These architecture decisions collectively create a robust, secure, and performant infrastructure:
+## ADR-007: Multi-Environment Infrastructure Architecture

-- **Native Services** provide optimal performance and security
-- **Unix Sockets** eliminate network attack vectors
-- **Podman + systemd** delivers secure container orchestration
-- **Forward Authentication** enables centralized security without application changes
-- **Rootful Container Pattern** enables group-based socket access with infrastructure fact sharing
+**Date**: December 2025
+**Status**: Accepted
+**Context**: Separation of homelab services from production client projects

-The combination results in an infrastructure that prioritizes security and performance while maintaining operational simplicity and reliability.
+### Decision

-## References
+Rick-infra will manage two separate environments with different purposes and uptime requirements:

-- [Service Integration Guide](service-integration-guide.md)
-- [Authentik Deployment Guide](authentik-deployment-guide.md)
-- [Security Hardening](security-hardening.md)
-- [Authentik Role Documentation](../roles/authentik/README.md)
+1. **Homelab Environment** (arch-vps)
+   - Purpose: Personal services and experimentation
+   - Infrastructure: Full stack (PostgreSQL, Valkey, Podman, Caddy)
+   - Services: Authentik, Nextcloud, Gitea
+   - Uptime requirement: Best effort
+
+2. **Production Environment** (mini-vps)
+   - Purpose: Client projects requiring high uptime
+   - Infrastructure: Minimal (Caddy only)
+   - Services: Sigvild Gallery
+   - Uptime requirement: High availability
+
+### Rationale
+
+**Separation of Concerns**:
+- Personal experiments don't affect client services
+- Client services isolated from homelab maintenance
+- Clear distinction between environments in code
+
+**Infrastructure Optimization**:
+- Production runs minimal services (no PostgreSQL/Valkey overhead)
+- Homelab can be rebooted/upgraded without affecting clients
+- Cost optimization: smaller VPS for production
+
+**Operational Flexibility**:
+- Different backup strategies per environment
+- Different monitoring/alerting levels
+- Independent deployment schedules
+
+### Implementation
+
+**Variable Organization**:
+```
+rick-infra/
+├── group_vars/
+│   └── production/        # Production environment config
+│       ├── main.yml
+│       └── vault.yml
+├── host_vars/
+│   └── arch-vps/          # Homelab host config
+│       ├── main.yml
+│       └── vault.yml
+└── playbooks/
+    ├── homelab.yml        # Homelab deployment
+    ├── production.yml     # Production deployment
+    └── site.yml           # Orchestrates both
+```
+
+**Playbook Structure**:
+- site.yml imports both homelab.yml and production.yml
+- Each playbook manually loads variables (Ansible 2.20 workaround)
+- Services deploy only to their designated environment
+
+**Inventory Groups**:
+```yaml
+homelab:
+  hosts:
+    arch-vps:
+      ansible_host: 69.62.119.31
+
+production:
+  hosts:
+    mini-vps:
+      ansible_host: 72.62.91.251
+```
+
+### Migration Example
+
+**Sigvild Gallery Migration** (December 2025):
+- **From**: arch-vps (homelab)
+- **To**: mini-vps (production)
+- **Reason**: Client project requiring higher uptime
+- **Process**:
+  1. Created backup on arch-vps
+  2. Deployed to mini-vps with automatic restore
+  3. Updated DNS (5 min downtime)
+  4. Removed from arch-vps configuration
+
+### Consequences
+
+**Positive**:
+- Clear separation of personal vs. client services
+- Reduced blast radius for experiments
+- Optimized resource usage per environment
+- Independent scaling and management
+
+**Negative**:
+- Increased complexity in playbook organization
+- Need to manage multiple VPS instances
+- Ansible 2.20 variable loading requires workarounds
+- Duplicate infrastructure code (Caddy on both)
+
+**Neutral**:
+- Services can be migrated between environments with minimal friction
+- Backup/restore procedures work across environments
+- Group_vars vs. host_vars hybrid approach
+
+### Future Considerations
+
+- Consider grouping multiple client projects on production VPS
+- Evaluate if homelab needs full infrastructure stack
+- Monitor for opportunities to share infrastructure between environments
+- Document migration procedures for moving services between environments
+
+---
@@ -7,7 +7,8 @@ This document describes the comprehensive authentication and authorization strat
 Rick-infra implements a modern, security-focused authentication architecture that provides:

 - **Centralized SSO**: Single sign-on across all services via Authentik
-- **Forward Authentication**: Transparent protection without application changes
+- **OAuth2/OIDC Integration**: Industry-standard authentication for services that support it (primary method)
+- **Forward Authentication**: Transparent protection for legacy applications and services without OAuth support (fallback method)
 - **Zero Network Exposure**: Database and cache communication via Unix sockets
 - **Granular Authorization**: Fine-grained access control through groups and policies
 - **Standards Compliance**: OAuth2, OIDC, SAML support for enterprise integration
@@ -98,15 +99,86 @@ sequenceDiagram

 ## Service Integration Patterns

-### Pattern 1: Forward Authentication (Recommended)
+### Choosing the Right Pattern

-**Use Case**: Existing HTTP services that don't need to handle authentication
+**Use OAuth2/OIDC (Primary Method)** when:
+- ✅ Your service/application natively supports OAuth2/OIDC
+- ✅ Examples: Nextcloud, Gitea, Grafana, modern web applications
+- ✅ Provides better security, standard protocols, and proper token management
+
+**Use Forward Authentication (Fallback Method)** when:
+- ⚠️ Service doesn't support OAuth2/OIDC integration
+- ⚠️ Legacy applications that cannot be modified
+- ⚠️ Static sites requiring authentication
+- ⚠️ Simple internal tools without authentication capabilities
+
+---
+
+### Pattern 1: OAuth2/OIDC Integration (Primary Method)
+
+**Use Case**: Applications with native OAuth2/OIDC support (Nextcloud, Gitea, Grafana, etc.)

 **Benefits**:
-- No application code changes required
-- Consistent authentication across all services
-- Service receives authenticated user information via headers
-- Centralized session management
+- ✅ Industry-standard authentication protocol (RFC 6749, RFC 7636)
+- ✅ Secure token-based authentication with JWT
+- ✅ Proper session management with refresh tokens
+- ✅ Native application integration
+- ✅ Support for API access tokens
+- ✅ Better logout handling and token refresh
+
+**Implementation**:
+
+```python
+# Example OAuth2 configuration
+OAUTH2_CONFIG = {
+    'client_id': 'your-client-id',
+    'client_secret': 'your-client-secret',
+    'server_metadata_url': 'https://auth.jnss.me/application/o/your-app/.well-known/openid-configuration',
+    'client_kwargs': {
+        'scope': 'openid email profile groups'
+    }
+}
+
+# OAuth2 flow implementation
+from authlib.integrations.flask_client import OAuth
+
+oauth = OAuth(app)
+oauth.register('authentik', **OAUTH2_CONFIG)
+
+@app.route('/login')
+def login():
+    redirect_uri = url_for('callback', _external=True)
+    return oauth.authentik.authorize_redirect(redirect_uri)
+
+@app.route('/callback')
+def callback():
+    token = oauth.authentik.authorize_access_token()
+    user_info = oauth.authentik.parse_id_token(token)
+    # Store user info in session
+    session['user'] = user_info
+    return redirect('/dashboard')
+```
+
+**Real-World Example**: See Example 1 below for Gitea OAuth/OIDC configuration.
+
+---
+
+### Pattern 2: Forward Authentication (Fallback Method)
+
+**Use Case**: Existing HTTP services that don't support OAuth2/OIDC
+
+**Benefits**:
+- ✅ No application code changes required
+- ✅ Consistent authentication across all services
+- ✅ Service receives authenticated user information via headers
+- ✅ Centralized session management
+- ✅ Works with any HTTP application
+
+**Limitations**:
+- ⚠️ Non-standard authentication approach
+- ⚠️ All requests must flow through authenticating proxy
+- ⚠️ Limited logout functionality
+- ⚠️ Backend must trust proxy-provided headers

 **Implementation**:
@@ -143,48 +215,9 @@ def dashboard():
|
|||||||
return f"Welcome {user_name} ({username})"
|
return f"Welcome {user_name} ({username})"
|
||||||
```
|
```
|
||||||
|

**Real-World Example**: See Example 3 below for static site protection with forward auth.

---

### Pattern 3: API-Only Authentication
## Integration Examples

### Example 1: Gitea with OAuth2/OIDC Integration

**Objective**: Git repository access with native OAuth2/OIDC authentication

**Why OAuth for Gitea**: Gitea has native OAuth2/OIDC support, providing better security and user experience than forward auth.

#### Step 1: Configure Authentik OAuth2 Provider

```yaml
# Authentik Provider Configuration
name: "Gitea OAuth Provider"
type: "OAuth2/OpenID Provider"
authorization_flow: "default-provider-authorization-implicit-consent"
client_type: "confidential"
client_id: "gitea"
redirect_uris: "https://git.jnss.me/user/oauth2/Authentik/callback"
signing_key: "authentik-default-key"
scopes: ["openid", "profile", "email", "groups"]
property_mappings:
  - "authentik default OAuth Mapping: OpenID 'openid'"
  - "authentik default OAuth Mapping: OpenID 'email'"
  - "authentik default OAuth Mapping: OpenID 'profile'"
```

#### Step 2: Caddy Configuration

```caddyfile
# Caddy configuration for Gitea - no forward auth needed
git.jnss.me {
    reverse_proxy localhost:3000
}
```

#### Step 3: Gitea Configuration

```ini
# app.ini - Gitea OAuth2 configuration
[openid]
ENABLE_OPENID_SIGNIN = false
ENABLE_OPENID_SIGNUP = false

[oauth2_client]
REGISTER_EMAIL_CONFIRM = false
OPENID_CONNECT_SCOPES = openid email profile groups
ENABLE_AUTO_REGISTRATION = true
USERNAME = preferred_username
EMAIL = email
ACCOUNT_LINKING = auto
```

#### Step 4: Add OAuth Source in Gitea Web UI

1. Navigate to **Site Administration → Authentication Sources**
2. Click **Add Authentication Source**
3. **Authentication Type**: OAuth2
4. **Authentication Name**: Authentik
5. **OAuth2 Provider**: OpenID Connect
6. **Client ID**: `gitea` (from Authentik provider)
7. **Client Secret**: (from Authentik provider)
8. **OpenID Connect Auto Discovery URL**: `https://auth.jnss.me/application/o/gitea/.well-known/openid-configuration`
9. **Additional Scopes**: `profile email groups`
10. Enable: **Skip local 2FA**, **Automatically create users**

**Group Mapping**:
```yaml
# Authentik group configuration for Gitea
groups:
  # ... (group definitions elided in this excerpt)
    permissions: ["admin"]
```

**Result**: Users see a "Sign in with Authentik" button on the Gitea login page with the full OAuth flow. A quick way to confirm the wiring is sketched below.
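
Once the source is added, a minimal command-line check — assuming the login page embeds the configured source name — looks like this:

```bash
# Restart Gitea so app.ini changes take effect, then look for the SSO link
ssh root@arch-vps "systemctl restart gitea"
curl -s https://git.jnss.me/user/login | grep -i "Authentik"
```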
### Example 2: API Service with Scoped Access

**Objective**: REST API with OAuth2 authentication and scoped permissions

---

This guide covers the complete deployment process for Authentik, a modern authentication server. The deployment builds on:

- **Native PostgreSQL** - High-performance database with Unix socket IPC
- **Native Valkey** - Redis-compatible cache with Unix socket IPC
- **Podman Containers** - System-level container orchestration via systemd/Quadlet
- **Caddy Reverse Proxy** - TLS termination, OAuth2/OIDC provider, and forward authentication fallback

## Architecture Summary

#### 3. Configure Forward Auth (for services without OAuth support)

**Note**: Only use Forward Auth for services that don't support OAuth2/OIDC integration. For services like Nextcloud, Gitea, or Grafana, use OAuth2 providers (see step 1 above) instead.

```bash
# Navigate to Applications → Providers → Create
# Name: "Forward Auth Provider"
# External Host: https://your-service.jnss.me
# Internal Host: http://localhost:8080 (your service backend)
# Use this only for: static sites, legacy apps, simple tools without OAuth support
```
## Service Integration Examples

### Choosing the Right Integration Method

**Primary Method - OAuth2/OIDC** (use when the service supports it):
- ✅ **Nextcloud**: Native OIDC support
- ✅ **Gitea**: Native OAuth2 support
- ✅ **Grafana**: Native OAuth2 support
- ✅ **Custom applications**: Applications with OAuth2 client libraries

**Fallback Method - Forward Auth** (use when the service doesn't support OAuth):
- ⚠️ **Static sites**: No authentication capabilities
- ⚠️ **Legacy applications**: Cannot be modified
- ⚠️ **Simple tools**: whoami, monitoring dashboards without auth

---

### Example 1: OAuth2/OIDC Integration (Nextcloud)

**Recommended for services with native OAuth support**

For applications that can handle OAuth2/OIDC directly (like Nextcloud, Gitea):

```php
# Nextcloud OIDC configuration (config.php, keys from the oidc_login app)
'oidc_login_provider_url' => 'https://auth.jnss.me/application/o/nextcloud/',
'oidc_login_client_id' => 'nextcloud-client-id',
'oidc_login_client_secret' => 'your_client_secret',
'oidc_login_auto_redirect' => true,
'oidc_login_end_session_redirect' => true,
'oidc_login_button_text' => 'Login with SSO',
'oidc_login_hide_password_form' => true,
'oidc_login_use_id_token' => true,
'oidc_login_attributes' => [
    'id' => 'preferred_username',
    'name' => 'name',
    'mail' => 'email',
    'groups' => 'groups',
],
'oidc_login_default_group' => 'users',
'oidc_login_scope' => 'openid profile email groups',
'oidc_login_tls_verify' => true,
```

**Caddy configuration** (no forward auth needed):

```caddyfile
# In /etc/caddy/sites-enabled/nextcloud.caddy
cloud.jnss.me {
    reverse_proxy localhost:8080
}
```
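
If you prefer managing Nextcloud from the CLI, the same settings can be applied with `occ`. A sketch assuming an Arch-style install path and web-server user — adjust both for a containerized setup, and repeat for each key:

```bash
# Apply one oidc_login setting via occ (run as the web server user; path is illustrative)
sudo -u http php /usr/share/webapps/nextcloud/occ \
  config:system:set oidc_login_provider_url --value="https://auth.jnss.me/application/o/nextcloud/"
```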

---

### Example 2: Forward Auth for Services Without OAuth Support

**Use only when OAuth2/OIDC is not available**

Add to your service's Caddy configuration:

```caddyfile
# In /etc/caddy/sites-enabled/whoami.caddy
whoami.jnss.me {
    # Forward authentication to authentik
    forward_auth https://auth.jnss.me {
        uri /outpost.goauthentik.io/auth/caddy
        # ... (copy_headers etc. elided in this excerpt)
    }
}
```
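
After editing a site file, a validate-and-reload cycle keeps a typo from taking the proxy down (assuming the site files are imported from `/etc/caddy/Caddyfile`):

```bash
# Check the full config before applying it, then reload without downtime
caddy validate --config /etc/caddy/Caddyfile
systemctl reload caddy
```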

### Example 3: OAuth2 Integration for Custom Applications

For applications that can handle OAuth2 directly:

```yaml
# Application configuration
OAUTH2_PROVIDER_URL: "https://auth.jnss.me/application/o/authorize/"
OAUTH2_TOKEN_URL: "https://auth.jnss.me/application/o/token/"
OAUTH2_USER_INFO_URL: "https://auth.jnss.me/application/o/userinfo/"
OAUTH2_CLIENT_ID: "your_client_id"
OAUTH2_CLIENT_SECRET: "your_client_secret"
OAUTH2_REDIRECT_URI: "https://yourapp.jnss.me/oauth/callback"
```
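
With those endpoints, a token-based smoke test is straightforward; the token value is a placeholder you would obtain from a completed OAuth flow:

```bash
# Call the userinfo endpoint with a previously obtained access token (placeholder)
ACCESS_TOKEN="paste-token-here"
curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  https://auth.jnss.me/application/o/userinfo/
```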
## Troubleshooting Guide

### Common Issues and Solutions

#### Issue: Containers fail to start with socket permission errors

**Symptoms**:
```
Error: failed to connect to database: permission denied
```

**Solution**:
```bash
# Check authentik user group membership
ssh root@your-vps "groups authentik"
# Should show: authentik postgres-clients valkey-clients

# Verify container process groups
ssh root@your-vps "ps aux | grep authentik-server | head -1 | awk '{print \$2}' | xargs -I {} cat /proc/{}/status | grep Groups"
# Should show: Groups: 961 962 966 (valkey-clients postgres-clients authentik)

# Verify socket permissions
ssh root@your-vps "ls -la /var/run/postgresql/ /var/run/valkey/"

# Fix group membership if missing
ansible-playbook site.yml --tags authentik,user,setup --ask-vault-pass
```

#### Issue: HTTP binding errors (address already in use)

**Symptoms**:
```
Error: bind: address already in use (port 9000)
```

**Solution**:
```bash
# Check what's using port 9000
ssh root@your-vps "netstat -tulpn | grep 9000"

# Stop conflicting services
ssh root@your-vps "systemctl stop authentik-pod"

# Restart with correct configuration
ansible-playbook site.yml --tags authentik,containers --ask-vault-pass
```

#### Issue: Database connection failures

**Symptoms**:
```
FATAL: database "authentik" does not exist
```

**Solution**:
```bash
# Recreate database and user
ansible-playbook site.yml --tags authentik,database --ask-vault-pass

# Verify database creation
ssh root@your-vps "sudo -u postgres psql -h /var/run/postgresql -c '\l'"
```

#### Issue: Cache connection failures

**Symptoms**:
```
Error connecting to Redis: Connection refused
```

**Solution**:
```bash
# Check Valkey service status
ssh root@your-vps "systemctl status valkey"

# Test socket connectivity
ssh root@your-vps "redis-cli -s /var/run/valkey/valkey.sock ping"

# Redeploy cache configuration if needed
ansible-playbook site.yml --tags authentik,cache --ask-vault-pass
```

### Diagnostic Commands

#### Container Debugging

```bash
# Check container logs
ssh root@your-vps "podman logs authentik-server"
ssh root@your-vps "podman logs authentik-worker"

# Inspect container configuration
ssh root@your-vps "podman inspect authentik-server"

# Check container user/group mapping
ssh root@your-vps "podman exec authentik-server id"
# Expected: uid=966(authentik) gid=966(authentik) groups=966(authentik),961(valkey-clients),962(postgres-clients)
```

#### Service Status Verification

```bash
# Check all authentik systemd services
ssh root@your-vps "systemctl status authentik-pod authentik-server authentik-worker"

# View service dependencies
ssh root@your-vps "systemctl list-dependencies authentik-pod"

# Verify services are in system.slice
ssh root@your-vps "systemctl status authentik-server | grep CGroup"
# Expected: /system.slice/authentik-server.service
```

#### Network Connectivity Testing

```bash
# Test internal HTTP binding
ssh root@your-vps "curl -v http://127.0.0.1:9000/"

# Test Caddy reverse proxy
ssh root@your-vps "curl -v http://127.0.0.1:80/ -H 'Host: auth.jnss.me'"

# Test external HTTPS
curl -v https://auth.jnss.me/
```
### Log Analysis

#### Key Log Locations

```bash
# ... (log locations and queries elided in this excerpt)
ssh root@your-vps "journalctl -u authentik-server | grep '\"event\":\"database connection\"'"
```
## Performance Monitoring

### Resource Usage Monitoring

```bash
# Monitor container resource usage
ssh root@your-vps "podman stats"

# Monitor service memory usage
ssh root@your-vps "systemctl status authentik-server | grep Memory"

# Monitor database connections
ssh root@your-vps "sudo -u postgres psql -h /var/run/postgresql -c 'SELECT * FROM pg_stat_activity;'"
```

### Performance Optimization Tips

1. **Database Performance**:
   - Monitor PostgreSQL slow query log
   - Consider database connection pooling for high traffic
   - Regular database maintenance (VACUUM, ANALYZE)

2. **Cache Performance**:
   - Monitor Valkey memory usage and hit rate
   - Adjust cache TTL settings based on usage patterns
   - Consider cache warming for frequently accessed data

3. **Container Performance**:
   - Monitor container memory limits and usage
   - Optimize shared memory configuration if needed
   - Review worker process configuration

## Maintenance Tasks

### Regular Maintenance

#### Update Authentik Version

```yaml
# Update version in defaults or inventory
authentik_version: "2025.12.1"  # New version

# Deploy update
ansible-playbook site.yml --tags authentik,containers --ask-vault-pass
```
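
After rolling out a new version, Authentik's admin API offers a quick way to confirm what is actually running (authentication may be required, and `jq` is assumed):

```bash
# Confirm the running Authentik version
curl -s https://auth.jnss.me/api/v3/admin/version/ | jq
```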

#### Backup Procedures

```bash
# Database backup
ssh root@your-vps "sudo -u postgres pg_dump -h /var/run/postgresql authentik > /backup/authentik-$(date +%Y%m%d).sql"

# Media files backup
ssh root@your-vps "tar -czf /backup/authentik-media-$(date +%Y%m%d).tar.gz -C /opt/authentik media"

# Configuration backup (run from ansible control machine)
ansible-vault view host_vars/arch-vps/vault.yml > backup/authentik-vault-$(date +%Y%m%d).yml
```
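
A backup is only proven once it restores. A minimal restore drill against a scratch database — the database name and dump filename here are illustrative:

```bash
# Restore the dump into a scratch database to prove the backup is usable
ssh root@your-vps "sudo -u postgres createdb -h /var/run/postgresql authentik_restore_test"
ssh root@your-vps "sudo -u postgres psql -h /var/run/postgresql authentik_restore_test < /backup/authentik-YYYYMMDD.sql"
ssh root@your-vps "sudo -u postgres dropdb -h /var/run/postgresql authentik_restore_test"
```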

#### Health Monitoring

Set up regular health checks:

```bash
#!/bin/bash
# Health check script
HEALTH_URL="https://auth.jnss.me/if/health/live/"
if ! curl -f -s "$HEALTH_URL" > /dev/null; then
    echo "Authentik health check failed"
    # Add alerting logic
fi
```
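
To run the health check on a schedule, a cron entry is the lightest option; the script path and the presence of a cron daemon are assumptions:

```bash
# /etc/cron.d/authentik-health — run the health check every 5 minutes (illustrative path)
echo '*/5 * * * * root /usr/local/bin/authentik-health.sh' > /etc/cron.d/authentik-health
```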

### Security Maintenance

#### Certificate Monitoring

```bash
# Check certificate expiration
ssh root@your-vps "curl -vI https://auth.jnss.me/ 2>&1 | grep expire"

# Caddy handles renewal automatically, but monitor logs
ssh root@your-vps "journalctl -u caddy | grep -i cert"
```

#### Security Updates

```bash
# Update container images regularly
ansible-playbook site.yml --tags authentik,image-pull --ask-vault-pass

# Monitor for Authentik security advisories
# https://github.com/goauthentik/authentik/security/advisories
```

## Support and Resources

### Documentation References

- **Authentik Official Documentation**: https://docs.goauthentik.io/
- **rick-infra Architecture Decisions**: [docs/architecture-decisions.md](architecture-decisions.md)
- **Service Integration Guide**: [docs/service-integration-guide.md](service-integration-guide.md)
- **Security Model**: [docs/security-hardening.md](security-hardening.md)

### Community Resources

- **Authentik Community Forum**: https://community.goauthentik.io/
- **GitHub Issues**: https://github.com/goauthentik/authentik/issues
- **Discord Community**: https://discord.gg/jg33eMhnj6

### Emergency Procedures

#### Service Recovery

```bash
# Emergency service restart
ssh root@your-vps "systemctl restart authentik-pod"

# Fallback: Direct container management
ssh root@your-vps "podman pod restart authentik"

# Last resort: Full service rebuild
ansible-playbook site.yml --tags authentik --ask-vault-pass --limit arch-vps
```

#### Rollback Procedures

```bash
# Rollback to previous container version (set in defaults or inventory):
#   authentik_version: "previous_working_version"
ansible-playbook site.yml --tags authentik,containers --ask-vault-pass

# Database rollback (if needed)
ssh root@your-vps "sudo -u postgres psql -h /var/run/postgresql authentik < /backup/authentik-backup.sql"
```

---

## Deployment Checklist

Use this checklist to ensure complete deployment:

### Pre-deployment
- [ ] Infrastructure services (PostgreSQL, Valkey, Caddy, Podman) running
- [ ] DNS records configured for auth.jnss.me
- [ ] Vault variables configured and encrypted
- [ ] Ansible connectivity verified

### Deployment
- [ ] Authentik role enabled in site.yml
- [ ] Deployment executed successfully
- [ ] Health checks passing
- [ ] Containers running and responsive

### Post-deployment
- [ ] Admin web interface accessible
- [ ] Initial admin login successful
- [ ] OAuth2 provider configured
- [ ] Test application integration
- [ ] Forward auth configuration tested

### Production Readiness
- [ ] Backup procedures implemented
- [ ] Monitoring and alerting configured
- [ ] Security review completed
- [ ] Documentation updated
- [ ] Team training completed

---

This comprehensive deployment guide provides everything needed to successfully deploy and maintain Authentik in the rick-infra environment, emphasizing the security and performance benefits of our native database + Unix socket architecture.

---

## Infrastructure Overview

Rick-infra now manages **two separate environments**:

### Homelab (arch-vps)
Personal services and experimentation platform at **jnss.me**:
- PostgreSQL, Valkey, Podman infrastructure
- Caddy reverse proxy with auto-HTTPS
- Nextcloud (cloud.jnss.me)
- Authentik SSO (auth.jnss.me)
- Gitea (git.jnss.me)

### Production (mini-vps)
Client projects requiring high uptime:
- Caddy reverse proxy with auto-HTTPS
- Sigvild Gallery (sigvild.no, api.sigvild.no)

## Available Deployments

### 1. `site.yml` - Deploy All Environments
Deploys both homelab and production infrastructure.

```bash
ansible-playbook site.yml --ask-vault-pass
```

### 2. Environment-Specific Deployments

```bash
# Deploy only homelab services
ansible-playbook playbooks/homelab.yml --ask-vault-pass

# Deploy only production services
ansible-playbook playbooks/production.yml --ask-vault-pass

# Or use site.yml with limits
ansible-playbook site.yml -l homelab --ask-vault-pass
ansible-playbook site.yml -l production --ask-vault-pass
```

### 3. Service-Specific Deployments
Deploy individual components using tags (an illustrative invocation is sketched below):
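
A tag-scoped run looks like this; `gitea` and `authentik,containers` appear as tags elsewhere in these docs, but check the playbooks for the authoritative list:

```bash
# Redeploy a single service by tag (tag names are illustrative)
ansible-playbook site.yml --tags gitea --ask-vault-pass
ansible-playbook site.yml --tags authentik,containers --ask-vault-pass
```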

For complete authentik architecture details, see [Architecture Decisions](architecture-decisions.md).

## Configuration Management

### Variable Organization

Rick-infra uses a hybrid approach for variable management:

**Group Variables** (`group_vars/`):
- `production/main.yml` - Production environment configuration
- `production/vault.yml` - Production secrets (encrypted)

**Host Variables** (`host_vars/`):
- `arch-vps/main.yml` - Homelab configuration
- `arch-vps/vault.yml` - Homelab secrets (encrypted)

**Note:** Due to variable loading issues in Ansible 2.20, playbooks manually load variables using `include_vars`. This ensures reliable variable resolution during execution.

### Example: Homelab Configuration

Core infrastructure settings in `host_vars/arch-vps/main.yml`:

---

# Devigo Deployment Guide

**Status**: ✅ **DEPLOYED AND OPERATIONAL**
**Environment**: Production (mini-vps)
**Last Updated**: 2025-12-16
**Deployment Date**: 2025-12-15

## Overview

This guide documents the complete deployment of devigo.no (sales training website) from the legacy arch-vps server to the production mini-vps environment. The deployment uses:

- **Infrastructure Management**: Ansible (rick-infra repository)
- **Application Code**: Hugo static site (rustan repository)
- **CI/CD**: GitHub Actions
- **Container Runtime**: Podman with Quadlet/systemd integration
- **Web Server**: Caddy (reverse proxy with automatic HTTPS)
- **Content Management**: Decap CMS with GitHub OAuth

## Architecture

### Request Flow
```
Internet → Caddy (443) → nginx container (9080) → Static files
              ↓
      TLS termination
      Security headers
      Access logging
```

### Services

| Service | Domain | Container | Purpose |
|---------|--------|-----------|---------|
| devigo-site | devigo.no | ghcr.io/jnschaffer/rustan:prod | Main website |
| devigo-site | www.devigo.no | (redirects to apex) | WWW redirect |
| devigo-decap-oauth | decap.jnss.me | alukovenko/decapcms-oauth2 | OAuth proxy for CMS |

### OAuth Flow
```
1. User visits devigo.no/admin
2. Clicks "Login with GitHub"
3. Popup opens → decap.jnss.me/auth
4. Redirects to GitHub OAuth
5. GitHub redirects → decap.jnss.me/callback
6. Callback validates origin (TRUSTED_ORIGINS)
7. postMessage sends token to parent window
8. Popup closes, CMS authenticated ✅
```

### Deployment Flow
```
Developer → git push origin prod
        ↓
GitHub Actions
        ↓
Build Hugo site → Docker image
        ↓
Push to GHCR (ghcr.io/jnschaffer/rustan:prod)
        ↓
SSH to mini-vps → podman pull (uses auth.json)
        ↓
systemctl restart devigo-site
        ↓
Podman Quadlet pulls latest image
        ↓
Container starts with new code ✅
```

## Completed Deployment Steps ✅

### Infrastructure Configuration
- ✅ Created Ansible role `roles/devigo/` with Podman Quadlet support
- ✅ Updated `group_vars/production/main.yml` with devigo configuration
- ✅ Encrypted OAuth credentials in `group_vars/production/vault.yml`
- ✅ Updated `playbooks/production.yml` to include devigo role
- ✅ Deployed infrastructure to mini-vps

### Application Configuration
- ✅ Changed `rustan/hugo.toml` baseURL from `https://www.devigo.no/` to `https://devigo.no/`
- ✅ Updated `rustan/.github/workflows/deploy.yml` for Podman + systemctl deployment
- ✅ Fixed nginx.conf MIME type handling (location-specific YAML override)
- ✅ Implemented JavaScript-based Decap CMS initialization

### Authentication & Security
- ✅ Created GitHub Personal Access Token for GHCR
- ✅ Configured Podman authentication (`/etc/containers/auth.json`)
- ✅ Added `REGISTRY_AUTH_FILE` environment to Quadlet
- ✅ Fixed OAuth `TRUSTED_ORIGINS` environment variable (plural form)
- ✅ Generated SSH key for GitHub Actions deployment
- ✅ Updated GitHub repository secrets

### DNS & Networking
- ✅ Updated DNS records to point to mini-vps (72.62.91.251)
- ✅ Configured Caddy reverse proxy
- ✅ Obtained TLS certificates via Let's Encrypt
- ✅ Configured GitHub OAuth app callback URLs

### Testing & Verification
- ✅ Verified MIME types (HTML: text/html, CSS: text/css, YAML: text/yaml)
- ✅ Tested Decap CMS initialization
- ✅ Verified OAuth authentication flow
- ✅ Confirmed container image pulls work
- ✅ Tested automated GitHub Actions deployment
- ✅ Confirmed site accessible at devigo.no

## Lessons Learned

### Critical Fixes Applied

#### 1. TRUSTED_ORIGINS Variable Name (OAuth)

**Issue**: OAuth showing "Origin not trusted: https://devigo.no" error

**Root Cause**: The decapcms-oauth2 service source code expects:
- `TRUSTED_ORIGIN` (singular) for a single origin
- `TRUSTED_ORIGINS` (plural) for comma-separated multiple origins

We were using `TRUSTED_ORIGIN=https://devigo.no,https://www.devigo.no`, which treated the entire string as ONE origin.

**Solution**: Changed to `TRUSTED_ORIGINS=https://devigo.no,https://www.devigo.no` (plural)

**Files Modified**:
- `roles/devigo/templates/devigo-decap-oauth.env.j2`
- `roles/devigo/defaults/main.yml`

#### 2. MIME Type Handling Strategy (nginx)

**Issue**: Adding `types { text/yaml yml yaml; }` in nginx.conf broke all MIME types:
- HTML → `application/octet-stream`
- CSS → `application/octet-stream`
- JS → `application/octet-stream`

**Root Cause**: A server-level `types {}` block **replaces** all MIME types rather than extending them.

**Solution**: Use a location-specific override instead:
```nginx
location ~ \.(yml|yaml)$ {
    default_type text/yaml;
}
```

**Outcome**: Standard MIME types work, YAML gets special handling.

#### 3. Decap CMS Initialization Method

**Issue**: Decap CMS failing to load its config with the `<link href="config.yml" type="text/yaml">` approach

**Root Cause**: Even with JavaScript initialization, Decap CMS validates the Content-Type header when fetching via a link tag.

**Solution**: Switched to JavaScript manual initialization:
```javascript
window.CMS_MANUAL_INIT = true;
CMS.init({ config: { load_config_file: '/admin/config.yml' } });
```

**Benefits**:
- Eliminates the YAML MIME type dependency for the link element
- More portable across server configurations
- Matches working configuration from previous deployment

#### 4. Podman Authentication for systemd Services

**Issue**: `podman pull` worked manually but failed when systemd restarted the service

**Root Cause**: Podman wasn't finding `/etc/containers/auth.json` when run by systemd

**Solution**: Added an environment variable to the Quadlet unit:
```ini
[Service]
Environment=REGISTRY_AUTH_FILE=/etc/containers/auth.json
```

**Outcome**: Automated deployments now pull private images successfully.

## Current Infrastructure State

### Services Status

| Service | Status | Container ID | Image Version |
|---------|--------|--------------|---------------|
| devigo-site | ✅ Running | a61788a8a673 | 2025-12-15T23:02:XX |
| devigo-decap-oauth | ✅ Running | 91fc8c4b21e3 | latest |
| caddy | ✅ Running | N/A | Native service |

### Verification Commands

```bash
# Check all services
ansible mini-vps -m shell -a "systemctl status devigo-site devigo-decap-oauth caddy --no-pager"

# View recent logs
ansible mini-vps -m shell -a "journalctl -u devigo-site -n 20 --no-pager"

# Check containers
ansible mini-vps -m shell -a "podman ps"

# Test endpoints
curl -I https://devigo.no
curl -I https://www.devigo.no   # Should 301 redirect to apex
curl -I https://decap.jnss.me

# Verify MIME types
curl -I https://devigo.no/ | grep content-type                  # Should be text/html
curl -I https://devigo.no/admin/config.yml | grep content-type  # Should be text/yaml
```

## Maintenance Guide

### Updating Content via Decap CMS

1. Visit https://devigo.no/admin
2. Click "Login with GitHub"
3. Edit content in the CMS interface
4. Changes automatically commit to GitHub
5. GitHub Actions builds and deploys the new version
6. Site updates within 2-3 minutes

### Updating Application Code

```bash
# In rustan repository
cd ~/rustan
git checkout prod
# Make changes to code
git add .
git commit -m "Description of changes"
git push origin prod

# GitHub Actions automatically:
# 1. Builds Hugo site
# 2. Creates Docker image
# 3. Pushes to GHCR
# 4. SSHs to mini-vps
# 5. Pulls latest image
# 6. Restarts service
```

### Rotating GitHub PAT for GHCR

When the Personal Access Token expires:

```bash
# 1. Create new token at https://github.com/settings/tokens/new?scopes=read:packages

# 2. Update Ansible vault
cd ~/rick-infra
ansible-vault edit group_vars/production/vault.yml
# Update: vault_github_token: "ghp_NEW_TOKEN_HERE"

# 3. Deploy updated authentication
ansible-playbook playbooks/production.yml --tags podman

# 4. Verify
ansible mini-vps -m shell -a "podman login ghcr.io --get-login"
# Should show: jnschaffer
```

### Updating OAuth Trusted Origins

If you need to add or change trusted origins for OAuth:

```bash
# 1. Update defaults
cd ~/rick-infra
# Edit roles/devigo/defaults/main.yml
# Update: devigo_oauth_trusted_origins: "https://devigo.no,https://new-domain.com"

# 2. Deploy changes
ansible-playbook playbooks/production.yml --tags devigo

# 3. Restart OAuth service
ansible mini-vps -m shell -a "systemctl restart devigo-decap-oauth"

# 4. Test OAuth flow at https://devigo.no/admin
```

### Manual Container Updates

```bash
# Pull latest image manually
ansible mini-vps -m shell -a "podman pull ghcr.io/jnschaffer/rustan:prod"

# Restart service to use new image
ansible mini-vps -m shell -a "systemctl restart devigo-site"

# Check auto-update is working
ansible mini-vps -m shell -a "podman auto-update --dry-run"
```

## Troubleshooting

See the [Devigo Role README](../roles/devigo/README.md#troubleshooting) for comprehensive troubleshooting steps.

### Quick Reference

**OAuth not working?**
```bash
# Check TRUSTED_ORIGINS format
ansible mini-vps -m shell -a "cat /opt/devigo-oauth/decap-oauth.env | grep TRUSTED"
# Should be: TRUSTED_ORIGINS=https://devigo.no,https://www.devigo.no
```

**MIME types wrong?**
```bash
# Check config.yml MIME type
curl -I https://devigo.no/admin/config.yml | grep content-type
# Should be: content-type: text/yaml

# Check homepage MIME type
curl -I https://devigo.no/ | grep content-type
# Should be: content-type: text/html
```

**Can't pull images?**
```bash
# Test authentication
ansible mini-vps -m shell -a "podman login ghcr.io --get-login"
# Should show: jnschaffer (your GitHub username)
```

## Security Considerations

### Credentials Storage

All sensitive credentials are encrypted in Ansible Vault:
- GitHub OAuth Client ID & Secret
- GitHub Personal Access Token for GHCR
- Deployed files have restrictive permissions (0600 for secrets)

### Network Security

- TLS certificates automatically renewed via Caddy + Let's Encrypt
- All HTTP traffic redirected to HTTPS
- Security headers applied by Caddy:
  - X-Frame-Options: SAMEORIGIN
  - X-Content-Type-Options: nosniff
  - X-XSS-Protection: 1; mode=block
  - Referrer-Policy: strict-origin-when-cross-origin
  - Permissions-Policy: geolocation=(), microphone=(), camera=()

### Container Security

- Containers run as non-root where possible
- `NoNewPrivileges=true` in Quadlet files
- Minimal attack surface (nginx:alpine base image)
- Regular updates via GitHub Actions rebuild

## Related Documentation

- [Devigo Role README](../roles/devigo/README.md) - Role configuration and usage
- [Podman Role](../roles/podman/README.md) - Container runtime setup
- [Caddy Role](../roles/caddy/README.md) - Reverse proxy configuration
- [Service Integration Guide](./service-integration-guide.md) - How services work together

## Migration Notes

### Differences from arch-vps Deployment

| Aspect | arch-vps (old) | mini-vps (new) |
|--------|----------------|----------------|
| Container | Docker + docker-compose | Podman + Quadlet |
| Service Mgmt | docker-compose | systemd |
| Deployment | Manual / docker-compose pull | GitHub Actions + systemctl |
| OAuth | Same service | Same, but TRUSTED_ORIGINS fixed |
| CMS Init | Link tag (fragile) | JavaScript (reliable) |
| MIME Types | Not documented | Location-specific override |

### Why We Migrated

1. **Better systemd integration** - Quadlet generates native systemd units
2. **Ansible consistency** - Declarative infrastructure management
3. **Improved security** - Podman rootless capable, no daemon
4. **Auto-updates** - Podman auto-update support
5. **Better logging** - Native journald integration

## Success Metrics

- ✅ Site accessible at https://devigo.no
- ✅ WWW redirect working (www.devigo.no → devigo.no)
- ✅ TLS certificates valid and auto-renewing
- ✅ OAuth authentication functional
- ✅ Decap CMS loads and allows content editing
- ✅ GitHub Actions automated deployment working
- ✅ All MIME types correct (HTML, CSS, JS, YAML)
- ✅ Zero downtime during migration
- ✅ All content preserved from previous deployment

## Future Improvements

### Potential Enhancements

1. **Monitoring**: Add Prometheus metrics collection
2. **Backups**: Automated backup of Decap CMS content (GitHub is primary backup)
3. **Staging Environment**: Deploy to separate staging domain for testing
4. **CDN**: Consider Cloudflare CDN for improved global performance
5. **Image Optimization**: Automated image optimization in build pipeline
6. **A/B Testing**: Framework for testing landing page variations

### Technical Debt

None identified - deployment is clean and well-documented.

## Contact & Support

For issues or questions:
1. Check this guide's troubleshooting section
2. Review role README files
3. Check systemd/container logs
4. Consult Ansible playbook output

---

**Deployment Status**: ✅ **COMPLETE AND OPERATIONAL**
**Next Review**: When rotating GitHub PAT (1 year from creation)
||||||
541
docs/gitea-deployment-guide.md
Normal file
541
docs/gitea-deployment-guide.md
Normal file
@@ -0,0 +1,541 @@
|
|||||||
|
# Gitea Deployment and Testing Guide
|
||||||
|
|
||||||
|
Comprehensive guide for deploying and testing Gitea Git service with SSH access on rick-infra.
|
||||||
|
|
||||||
|
## Deployment
|
||||||
|
|
||||||
|
### 1. Prerequisites
|
||||||
|
|
||||||
|
Ensure you have the required vault variables set in your host_vars:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# host_vars/arch-vps/vault.yml (encrypted)
|
||||||
|
vault_gitea_db_password: "your_secure_password_here"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Deploy Gitea Role
|
||||||
|
|
||||||
|
Run the rick-infra playbook with Gitea role:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Deploy Gitea to arch-vps
|
||||||
|
ansible-playbook -i inventory/hosts.yml rick-infra.yml --limit arch-vps
|
||||||
|
|
||||||
|
# Or deploy only Gitea role
|
||||||
|
ansible-playbook -i inventory/hosts.yml rick-infra.yml --tags gitea --limit arch-vps
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Verify Deployment
|
||||||
|
|
||||||
|
Check that all services are running:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# SSH into the server
|
||||||
|
ssh root@arch-vps
|
||||||
|
|
||||||
|
# Check Gitea service status
|
||||||
|
systemctl status gitea
|
||||||
|
|
||||||
|
# Check if Gitea is listening on HTTP port
|
||||||
|
ss -tlnp | grep 3000
|
||||||
|
|
||||||
|
# Check if Gitea SSH is listening
|
||||||
|
ss -tlnp | grep 2222
|
||||||
|
|
||||||
|
# Verify firewall rules
|
||||||
|
nft list ruleset | grep 2222
|
||||||
|
|
||||||
|
# Check fail2ban status
|
||||||
|
fail2ban-client status gitea-ssh
|
||||||
|
```
|
||||||
|
|
||||||
|
Expected output:
|
||||||
|
- Gitea service: `active (running)`
|
||||||
|
- HTTP port 3000: listening on `127.0.0.1:3000`
|
||||||
|
- SSH port 2222: listening on `0.0.0.0:2222`
|
||||||
|
- nftables: Rule allowing `tcp dport 2222`
|
||||||
|
- fail2ban: `gitea-ssh` jail active
|
||||||
|
|
||||||
|
## Testing Guide
|
||||||
|
|
||||||
|
### Test 1: Web Interface Access
|
||||||
|
|
||||||
|
**Purpose**: Verify HTTPS access through Caddy reverse proxy
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# From your local machine
|
||||||
|
curl -I https://git.jnss.me
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result**:
|
||||||
|
- HTTP/2 200 OK
|
||||||
|
- Redirects to login page
|
||||||
|
- Valid TLS certificate
|
||||||
|
|
||||||
|
**Action**: Open browser to `https://git.jnss.me` and verify web interface loads
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Test 2: Firewall Port Verification
|
||||||
|
|
||||||
|
**Purpose**: Confirm port 2222 is accessible from external networks
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# From your local machine (not from the server)
|
||||||
|
nc -zv git.jnss.me 2222
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result**:
|
||||||
|
```
|
||||||
|
Connection to git.jnss.me 2222 port [tcp/*] succeeded!
|
||||||
|
```
|
||||||
|
|
||||||
|
**If this fails**: The firewall rule is not active or nftables service is not running.
|
||||||
|
|
||||||
|
**Troubleshooting**:
|
||||||
|
```bash
|
||||||
|
# On the server
|
||||||
|
ssh root@arch-vps
|
||||||
|
|
||||||
|
# Check if nftables service is running
|
||||||
|
systemctl status nftables
|
||||||
|
|
||||||
|
# List all firewall rules
|
||||||
|
nft list ruleset
|
||||||
|
|
||||||
|
# Verify Gitea rule file exists
|
||||||
|
cat /etc/nftables.d/gitea.nft
|
||||||
|
|
||||||
|
# Manually reload nftables
|
||||||
|
systemctl reload nftables
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Test 3: SSH Connection Test
|
||||||
|
|
||||||
|
**Purpose**: Verify Gitea SSH server accepts connections
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# From your local machine
|
||||||
|
ssh -T -p 2222 git@git.jnss.me
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result** (before adding SSH key):
|
||||||
|
```
|
||||||
|
Hi there, You've successfully authenticated, but Gitea does not provide shell access.
|
||||||
|
If this is unexpected, please log in with password and setup Gitea under another user.
|
||||||
|
```
|
||||||
|
|
||||||
|
**OR** (if authentication fails):
|
||||||
|
```
|
||||||
|
git@git.jnss.me: Permission denied (publickey).
|
||||||
|
```
|
||||||
|
|
||||||
|
This is normal - it means Gitea SSH server is responding, you just need to add your SSH key.
|
||||||
|
|
||||||
|
**If connection times out**: Port 2222 is blocked or Gitea SSH is not running.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Test 4: SSH Key Setup and Authentication
|
||||||
|
|
||||||
|
**Purpose**: Add SSH key to Gitea and test authentication
|
||||||
|
|
||||||
|
**Step 4.1**: Create Gitea admin account
|
||||||
|
1. Visit `https://git.jnss.me`
|
||||||
|
2. Click "Register" (if registration is enabled) or use initial admin setup
|
||||||
|
3. Create your user account
|
||||||
|
|
||||||
|
**Step 4.2**: Generate SSH key (if needed)
|
||||||
|
```bash
|
||||||
|
# On your local machine
|
||||||
|
ssh-keygen -t ed25519 -C "your_email@example.com"
|
||||||
|
|
||||||
|
# View your public key
|
||||||
|
cat ~/.ssh/id_ed25519.pub
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 4.3**: Add SSH key to Gitea
|
||||||
|
1. Log into Gitea web interface
|
||||||
|
2. Click your profile → Settings
|
||||||
|
3. Click "SSH / GPG Keys" tab
|
||||||
|
4. Click "Add Key"
|
||||||
|
5. Paste your public key (`id_ed25519.pub` contents)
|
||||||
|
6. Give it a name and click "Add Key"
|
||||||
|
|
||||||
|
**Step 4.4**: Test SSH authentication
|
||||||
|
```bash
|
||||||
|
# From your local machine
|
||||||
|
ssh -T -p 2222 git@git.jnss.me
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result**:
|
||||||
|
```
|
||||||
|
Hi there, <your_username>! You've successfully authenticated with the key named <key_name>
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Test 5: Repository Operations
|
||||||
|
|
||||||
|
**Purpose**: Test actual Git operations over SSH
|
||||||
|
|
||||||
|
**Step 5.1**: Create a test repository in Gitea web interface
|
||||||
|
1. Click "+" → "New Repository"
|
||||||
|
2. Name: `test-repo`
|
||||||
|
3. Click "Create Repository"
|
||||||
|
|
||||||
|
**Step 5.2**: Clone the repository
|
||||||
|
```bash
|
||||||
|
# From your local machine
|
||||||
|
git clone ssh://git@git.jnss.me:2222/your_username/test-repo.git
|
||||||
|
cd test-repo
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result**: Repository clones successfully
|
||||||
|
|
||||||
|
**Step 5.3**: Make a commit and push
|
||||||
|
```bash
|
||||||
|
# Create a test file
|
||||||
|
echo "# Test Repository" > README.md
|
||||||
|
|
||||||
|
# Commit and push
|
||||||
|
git add README.md
|
||||||
|
git commit -m "Initial commit"
|
||||||
|
git push origin main
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result**:
|
||||||
|
```
|
||||||
|
Enumerating objects: 3, done.
|
||||||
|
Counting objects: 100% (3/3), done.
|
||||||
|
Writing objects: 100% (3/3), 234 bytes | 234.00 KiB/s, done.
|
||||||
|
Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
|
||||||
|
To ssh://git.jnss.me:2222/your_username/test-repo.git
|
||||||
|
* [new branch] main -> main
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 5.4**: Verify in web interface
|
||||||
|
1. Refresh Gitea web UI
|
||||||
|
2. Navigate to `test-repo`
|
||||||
|
3. Verify `README.md` appears
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Test 6: fail2ban Protection
|
||||||
|
|
||||||
|
**Purpose**: Verify SSH brute force protection is active
|
||||||
|
|
||||||
|
**Step 6.1**: Check fail2ban status
|
||||||
|
```bash
|
||||||
|
# On the server
|
||||||
|
ssh root@arch-vps
|
||||||
|
|
||||||
|
# Check gitea-ssh jail
|
||||||
|
fail2ban-client status gitea-ssh
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result**:
|
||||||
|
```
|
||||||
|
Status for the jail: gitea-ssh
|
||||||
|
|- Filter
|
||||||
|
| |- Currently failed: 0
|
||||||
|
| |- Total failed: 0
|
||||||
|
| `- File list: /var/lib/gitea/log/gitea.log
|
||||||
|
`- Actions
|
||||||
|
|- Currently banned: 0
|
||||||
|
|- Total banned: 0
|
||||||
|
`- Banned IP list:
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 6.2**: Simulate failed authentication (optional)
|
||||||
|
```bash
|
||||||
|
# From your local machine, try connecting with wrong key multiple times
|
||||||
|
ssh -T -p 2222 -i /path/to/wrong/key git@git.jnss.me
|
||||||
|
# Repeat this 5+ times quickly
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 6.3**: Check if IP was banned
|
||||||
|
```bash
|
||||||
|
# On the server
|
||||||
|
fail2ban-client status gitea-ssh
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result**: Your IP should appear in "Currently banned" list after 5 failed attempts.
|
||||||
|
|
||||||
|
**Step 6.4**: Unban yourself (if needed)
|
||||||
|
```bash
|
||||||
|
# On the server
|
||||||
|
fail2ban-client set gitea-ssh unbanip YOUR_IP_ADDRESS
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Test 7: Firewall Rule Persistence
|
||||||
|
|
||||||
|
**Purpose**: Verify firewall rules survive reboot
|
||||||
|
|
||||||
|
**Step 7.1**: Check current rules
|
||||||
|
```bash
|
||||||
|
# On the server
|
||||||
|
ssh root@arch-vps
|
||||||
|
nft list ruleset | grep 2222
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 7.2**: Reboot server
|
||||||
|
```bash
|
||||||
|
# On the server
|
||||||
|
reboot
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 7.3**: After reboot, verify rules are still active
|
||||||
|
```bash
|
||||||
|
# Wait for server to come back up, then SSH in
|
||||||
|
ssh root@arch-vps
|
||||||
|
|
||||||
|
# Check nftables rules again
|
||||||
|
nft list ruleset | grep 2222
|
||||||
|
|
||||||
|
# Verify Gitea SSH still accessible from outside
|
||||||
|
exit
|
||||||
|
|
||||||
|
# From local machine
|
||||||
|
ssh -T -p 2222 git@git.jnss.me
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected result**: Port 2222 rule persists after reboot, SSH access still works.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Troubleshooting
|
||||||
|
|
||||||
|
### Issue: Connection timeout on port 2222
|
||||||
|
|
||||||
|
**Symptoms**: `ssh: connect to host git.jnss.me port 2222: Connection timed out`
|
||||||
|
|
||||||
|
**Diagnosis**:
|
||||||
|
```bash
|
||||||
|
# On server
|
||||||
|
systemctl status gitea # Check if Gitea is running
|
||||||
|
ss -tlnp | grep 2222 # Check if SSH is listening
|
||||||
|
nft list ruleset | grep 2222 # Check firewall rule
|
||||||
|
systemctl status nftables # Check firewall service
|
||||||
|
```
|
||||||
|
|
||||||
|
**Solutions**:
|
||||||
|
1. **Gitea not running**: `systemctl start gitea`
|
||||||
|
2. **Firewall rule missing**: Re-run Ansible playbook with Gitea role
|
||||||
|
3. **nftables not running**: `systemctl start nftables`
|
||||||
|
4. **Rule file missing**: Check `/etc/nftables.d/gitea.nft` exists
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Issue: Permission denied (publickey)
|
||||||
|
|
||||||
|
**Symptoms**: SSH connection succeeds but authentication fails
|
||||||
|
|
||||||
|
**Diagnosis**:
|
||||||
|
```bash
|
||||||
|
# Verbose SSH connection
|
||||||
|
ssh -vvv -T -p 2222 git@git.jnss.me
|
||||||
|
```
|
||||||
|
|
||||||
|
**Solutions**:
|
||||||
|
1. **SSH key not added to Gitea**: Add your public key in Gitea web UI
|
||||||
|
2. **Wrong SSH key used**: Specify correct key: `ssh -i ~/.ssh/id_ed25519 -T -p 2222 git@git.jnss.me`
|
||||||
|
3. **Key permissions wrong**: `chmod 600 ~/.ssh/id_ed25519`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Issue: fail2ban not protecting Gitea
|
||||||
|
|
||||||
|
**Symptoms**: `fail2ban-client status gitea-ssh` shows jail doesn't exist
|
||||||
|
|
||||||
|
**Diagnosis**:
|
||||||
|
```bash
|
||||||
|
# Check if filter exists
|
||||||
|
ls -la /etc/fail2ban/filter.d/gitea-ssh.conf
|
||||||
|
|
||||||
|
# Check if jail is configured
|
||||||
|
grep -A 10 "gitea-ssh" /etc/fail2ban/jail.local
|
||||||
|
|
||||||
|
# Check fail2ban logs
|
||||||
|
journalctl -u fail2ban | grep gitea
|
||||||
|
```
|
||||||
|
|
||||||
|
**Solutions**:
|
||||||
|
1. **Jail not configured**: Re-run Ansible playbook with Gitea role
|
||||||
|
2. **fail2ban not running**: `systemctl start fail2ban`
|
||||||
|
3. **Log file not found**: Check Gitea is logging to `/var/lib/gitea/log/gitea.log`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Issue: Git clone works but push fails

**Symptoms**: You can clone, but `git push` returns a permission error

**Diagnosis**:

- Check the repository permissions in the Gitea web UI
- Verify that you have write access to the repository

**Solutions**:

1. **Not the repository owner**: Ask the owner to grant you write access
2. **Repository is archived**: Unarchive it in the repository settings
3. **Branch protected**: Check the branch protection rules

---
## Verification Checklist

Use this checklist to verify your Gitea deployment (a quick smoke-test script follows the list):

- [ ] Gitea web interface accessible at `https://git.jnss.me`
- [ ] Port 2222 accessible from an external network (`nc -zv git.jnss.me 2222`)
- [ ] SSH connection succeeds (`ssh -T -p 2222 git@git.jnss.me`)
- [ ] SSH key added to your Gitea account
- [ ] SSH authentication works (the response shows your username)
- [ ] Can clone a repository via SSH
- [ ] Can push commits to a repository
- [ ] nftables rule for port 2222 exists and is active
- [ ] fail2ban jail `gitea-ssh` is running
- [ ] Gitea service auto-starts on boot
- [ ] nftables rules persist after a reboot
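
Most of the checklist can be smoke-tested from the local machine; a minimal sketch (assumes `curl` and `nc` are available and a key is already registered in Gitea; the "Hi there" greeting is Gitea's standard success banner):

```bash
#!/usr/bin/env bash
# Quick smoke test for the checklist above; run from your local machine.
set -u
curl -sSfI https://git.jnss.me/ >/dev/null && echo "web UI:    OK" || echo "web UI:    FAIL"
nc -z -w 5 git.jnss.me 2222                && echo "port 2222: OK" || echo "port 2222: FAIL"
ssh -T -p 2222 -o ConnectTimeout=5 git@git.jnss.me 2>&1 | grep -qi "hi there" \
                                           && echo "ssh auth:  OK" || echo "ssh auth:  check key"
```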

---
## Post-Deployment Configuration

### Disable Public Registration (Recommended)

If you don't want arbitrary visitors to create accounts:

1. Edit `host_vars/arch-vps/main.yml` or `group_vars/production/main.yml`
2. Add:
   ```yaml
   gitea_disable_registration: true
   ```
3. Re-run the playbook:
   ```bash
   ansible-playbook -i inventory/hosts.yml rick-infra.yml --tags gitea --limit arch-vps
   ```

### Configure Email (Optional)

For password resets and notifications, configure SMTP:

1. Edit the Gitea configuration directly:
   ```bash
   ssh root@arch-vps
   nano /etc/gitea/app.ini
   ```
2. Add a mailer section:
   ```ini
   [mailer]
   ENABLED = true
   FROM = gitea@jnss.me
   PROTOCOL = smtps
   SMTP_ADDR = smtp.example.com
   SMTP_PORT = 465
   USER = gitea@jnss.me
   PASSWD = your_smtp_password
   ```
3. Restart Gitea:
   ```bash
   systemctl restart gitea
   ```

### Enable Actions/CI (Optional)

Gitea Actions provides GitHub Actions-compatible CI/CD:

1. Edit `roles/gitea/templates/app.ini.j2`
2. Add an Actions section (sketched below)
3. Re-run the playbook
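
A minimal sketch of step 2; `[actions] ENABLED = true` is Gitea's documented toggle, but treat the exact edit as an assumption about this repo's `app.ini.j2` layout:

```bash
# Hypothetical: append an Actions section to the template, then redeploy.
cat >> roles/gitea/templates/app.ini.j2 <<'EOF'

[actions]
ENABLED = true
EOF
ansible-playbook -i inventory/hosts.yml rick-infra.yml --tags gitea --limit arch-vps
```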

---

## nftables Architecture

### Modular Firewall Design

Rick-infra uses a modular nftables architecture that allows services to self-manage their firewall rules:

**Load Order:**

1. **Base rules** (`/etc/nftables.conf`) - Infrastructure essentials (SSH, HTTP, HTTPS, ICMP)
2. **Service rules** (`/etc/nftables.d/[0-8]*.nft`) - Service-specific ports (e.g., `50-gitea.nft`)
3. **Drop rule** (`/etc/nftables.d/99-drop.nft`) - Final catch-all drop

**Key Files:**

- `/etc/nftables.conf` - Base infrastructure firewall rules
- `/etc/nftables-load.conf` - Loader script that orchestrates rule loading
- `/etc/nftables.d/50-gitea.nft` - Gitea SSH port (2222) rule
- `/etc/nftables.d/99-drop.nft` - Final drop rule (loaded last)

**How It Works:**

```
┌─────────────────────────────────────────────────┐
│ /etc/nftables-load.conf                         │
│                                                 │
│ 1. include "/etc/nftables.conf"                 │
│    └─> Allow: SSH(22), HTTP(80), HTTPS(443)     │
│                                                 │
│ 2. include "/etc/nftables.d/[0-8]*.nft"         │
│    └─> 50-gitea.nft: Allow SSH(2222)            │
│                                                 │
│ 3. include "/etc/nftables.d/99-drop.nft"        │
│    └─> Drop all other traffic                   │
└─────────────────────────────────────────────────┘
```

This ensures service rules are evaluated **before** the drop rule, allowing each service role to be self-contained.
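
Concretely, a service's entire firewall contribution is one small file. Based on the rule shown in the load order above, the expected contents of the Gitea rule file are:

```bash
cat /etc/nftables.d/50-gitea.nft
# add rule inet filter input tcp dport 2222 ct state new accept comment "Gitea SSH (Port 2222)"
```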

## Security Best Practices

1. **Use a strong database password**: Ensure `vault_gitea_db_password` is strong
2. **Enable 2FA**: Turn on two-factor authentication in the Gitea settings
3. **Monitor fail2ban**: Regularly check banned IPs: `fail2ban-client status gitea-ssh`
4. **Keep updated**: Run the security playbook regularly for system updates
5. **Review SSH keys**: Periodically audit the SSH keys in Gitea user accounts
6. **Back up repositories**: Take regular backups of `/var/lib/gitea/repositories` (sketched below)
7. **Monitor logs**: Check the Gitea logs for suspicious activity: `journalctl -u gitea`
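
For item 6, a minimal backup sketch; the repository path comes from this guide, while the destination and the use of Gitea's built-in `gitea dump` are assumptions:

```bash
# Repository-only archive:
tar -czf "/root/gitea-repos-$(date +%Y%m%d).tar.gz" -C /var/lib/gitea repositories

# Or a full dump (repos + database + config) with Gitea's built-in tool,
# run as the gitea user in a writable directory:
su -s /bin/bash gitea -c "cd /tmp && gitea dump -c /etc/gitea/app.ini"
```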

---

## Quick Reference Commands

```bash
# Service management
systemctl status gitea
systemctl restart gitea
journalctl -u gitea -f

# Firewall
nft list ruleset | grep 2222
systemctl restart nftables
cat /etc/nftables.d/50-gitea.nft

# fail2ban
fail2ban-client status gitea-ssh
fail2ban-client get gitea-ssh banned
fail2ban-client set gitea-ssh unbanip IP_ADDRESS

# Network
ss -tlnp | grep 2222
nc -zv git.jnss.me 2222

# SSH testing
ssh -T -p 2222 git@git.jnss.me
ssh -vvv -T -p 2222 git@git.jnss.me   # Verbose mode

# Git operations
git clone ssh://git@git.jnss.me:2222/user/repo.git
git remote add origin ssh://git@git.jnss.me:2222/user/repo.git
```

---

**Rick-Infra Gitea Deployment Guide**

Self-contained Git service with automatic firewall and security management.
@@ -88,31 +88,42 @@ ssh root@your-vps "journalctl -u sshd | grep -i 'failed\|invalid'"

### Firewall Configuration

#### Modular nftables Architecture

Rick-infra uses a **modular firewall architecture** that enables self-contained service roles:

**Structure:**

```
/etc/nftables.conf        Base infrastructure rules (SSH, HTTP, HTTPS)
/etc/nftables-load.conf   Orchestration script (loads rules in order)
/etc/nftables.d/
├── 50-gitea.nft          Service-specific rules (Gitea SSH port 2222)
└── 99-drop.nft           Final drop rule (loaded last)
```

**Load Order:**

1. Base infrastructure rules (always allowed)
2. Service-specific rules (00-98 prefix)
3. Final drop rule (99-drop.nft)

**Example: Current Ruleset**

```bash
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        # Base infrastructure rules
        iif "lo" accept
        ct state established,related accept
        tcp dport 22 ct state new accept
        tcp dport {80, 443} ct state new accept
        icmp type echo-request limit rate 1/second accept

        # Service-specific rules (loaded from /etc/nftables.d/)
        tcp dport 2222 ct state new accept comment "Gitea SSH (Port 2222)"

        # Final drop rule (99-drop.nft)
        counter drop comment "Drop all other traffic"
    }

    chain forward {
```
@@ -129,15 +140,38 @@ table inet filter {

```bash
# Check firewall status
nft list ruleset

# View service-specific rules
ls -la /etc/nftables.d/
cat /etc/nftables.d/50-gitea.nft

# Reload firewall (after manual changes)
systemctl restart nftables

# Test configuration syntax
nft -c -f /etc/nftables-load.conf

# Add a service rule (example for future services)
# Create /etc/nftables.d/60-newservice.nft with your rules
echo 'add rule inet filter input tcp dport 8080 accept comment "New Service"' > /etc/nftables.d/60-newservice.nft
systemctl restart nftables
```

#### Adding New Service Ports

When deploying new services that need firewall access:

1. Create a rule file: `/etc/nftables.d/XX-servicename.nft` (XX = 00-98)
2. Add the rule: `add rule inet filter input tcp dport PORT accept comment "Service Name"`
3. Restart nftables: `systemctl restart nftables`

**Naming Convention:**

- `00-19`: Infrastructure services
- `20-79`: Application services
- `80-98`: Custom/temporary rules
- `99`: Drop rule (reserved)

### Intrusion Detection (fail2ban)

#### fail2ban Configuration
@@ -1,39 +1,50 @@

# Sigvild Gallery Deployment Guide

## Overview

The Sigvild Wedding Gallery is deployed on **mini-vps** (the production environment) for high uptime and reliability. The gallery uses a PocketBase API backend with a SvelteKit frontend.

**Production Host**: mini-vps (72.62.91.251)
**Domains**: sigvild.no (frontend), api.sigvild.no (API)

## Quick Start

Deploy Sigvild Gallery to production:

```bash
ansible-playbook playbooks/production.yml
```

## Prerequisites Setup

### 1. Vault Password Configuration

Encrypted passwords are stored in `group_vars/production/vault.yml`:

```bash
# Edit production vault (run from the rick-infra directory)
ansible-vault edit group_vars/production/vault.yml
```

Add these variables:

```yaml
# Production vault variables
vault_cloudflare_api_token: "your-cloudflare-token"
vault_caddy_tls_email: "admin@example.com"
vault_sigvild_host_password: "host-user-password"
vault_sigvild_guest_password: "guest-user-password"
vault_pb_su_email: "admin@sigvild.no"
vault_pb_su_password: "admin-password"
```

**Note**: Use `ansible-vault encrypt group_vars/production/vault.yml` after editing.

### 2. DNS Configuration

Point these domains to **mini-vps** (72.62.91.251):
- `sigvild.no` A record → 72.62.91.251
- `api.sigvild.no` A record → 72.62.91.251

### 3. Project Structure
@@ -50,70 +61,92 @@ Ensure the sigvild-gallery project is adjacent to rick-infra:

## Deployment Commands

### Production Deployment

Deploy to the production environment (mini-vps):

```bash
# Deploy the complete production stack (Caddy + Sigvild Gallery)
ansible-playbook playbooks/production.yml

# Or deploy everything with a production limit
ansible-playbook site.yml -l production
```

### Selective Updates

Update specific components using tags:

```bash
# Frontend only (quick static file updates)
ansible-playbook playbooks/production.yml --tags="frontend"

# Backend only (API service updates)
ansible-playbook playbooks/production.yml --tags="backend"

# Caddy configuration only
ansible-playbook playbooks/production.yml --tags="caddy"

# Just the build process (development)
ansible-playbook playbooks/production.yml --tags="build"
```

### Backup and Restore

Create backups before major changes:

```bash
# Create a backup (works on any host running sigvild-gallery)
ansible-playbook playbooks/backup-sigvild.yml -l mini-vps

# The backup is saved to: ~/sigvild-gallery-backup/
```

**Automatic Restore**: When deploying to a fresh server, the role automatically detects and restores from the latest backup, if one is available.
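
In practice a rebuild is therefore just a normal deploy; one way to confirm the restore actually ran (paths from this guide):

```bash
# Redeploy to the rebuilt server; the role restores the newest local backup.
ansible-playbook playbooks/production.yml --tags sigvild

# Confirm the PocketBase data came back:
ansible mini-vps -a "ls -lh /opt/sigvild-gallery/pb_data/"
```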

## Architecture Overview

**Production Environment**: mini-vps (72.62.91.251)

```
Internet
   ↓
Cloudflare DNS → mini-vps
   ↓
Caddy (Auto HTTPS with DNS Challenge)
├── sigvild.no → /var/www/sigvild-gallery/ (Static Files)
└── api.sigvild.no → localhost:8090 (PocketBase API)
   ↓
Go Binary (/opt/sigvild-gallery/sigvild-gallery)
   ↓
SQLite Database (/opt/sigvild-gallery/pb_data/)
└── File Storage (wedding photos)
```

**Key Features** (quick external checks follow the list):
- Automatic HTTPS with Let's Encrypt
- Cloudflare DNS challenge for certificate validation
- Security headers and CORS protection
- systemd service management
- Automatic backup/restore capability
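
Two quick external checks that these features are live; `/api/health` is PocketBase's standard health route, so treat its presence as an assumption about this build:

```bash
curl -I https://sigvild.no/                 # static frontend, expect HTTP 200
curl -s https://api.sigvild.no/api/health   # PocketBase health endpoint (assumed)
```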

## Service Management

### Status Checks

```bash
# Check services on mini-vps
ansible mini-vps -a "systemctl status sigvild-gallery"
ansible mini-vps -a "systemctl status caddy"

# View gallery logs
ansible mini-vps -a "journalctl -u sigvild-gallery -n 50 --no-pager"

# View Caddy logs
ansible mini-vps -a "journalctl -u caddy -n 20 --no-pager"

# Check the data directory
ansible mini-vps -a "ls -lh /opt/sigvild-gallery/pb_data/"
```

### Manual Operations
@@ -216,23 +249,30 @@ journalctl -u caddy | grep -i "acme\|certificate"
- **CORS restrictions**: API access limited to the frontend domain
- **Rate limiting**: API endpoint protection

## File Locations (on mini-vps)

### Application Files
- **Binary**: `/opt/sigvild-gallery/sigvild-gallery`
- **Database**: `/opt/sigvild-gallery/pb_data/data.db`
- **File uploads**: `/opt/sigvild-gallery/pb_data/storage/`
- **Frontend**: `/var/www/sigvild-gallery/`
- **User/Group**: `sigvild:sigvild`

### Configuration Files
- **Service**: `/etc/systemd/system/sigvild-gallery.service`
- **Caddy frontend**: `/etc/caddy/sites-enabled/sigvild-frontend.caddy`
- **Caddy API**: `/etc/caddy/sites-enabled/sigvild-api.caddy`

### Local Files (on the control machine)
- **Configuration**: `group_vars/production/main.yml`
- **Secrets**: `group_vars/production/vault.yml` (encrypted)
- **Backups**: `~/sigvild-gallery-backup/`
- **Source code**: `~/sigvild-gallery/`

### Log Files
- **Service logs**: `journalctl -u sigvild-gallery`
- **Caddy logs**: `journalctl -u caddy`
- **Access logs**: `/var/log/caddy/access.log`

## Next Steps After Deployment
@@ -248,15 +288,28 @@ For ongoing development:

```bash
# 1. Make changes to the sigvild-gallery project
cd ~/sigvild-gallery

# 2. Test locally
go run . serve &
cd sigvild-kit && npm run dev

# 3. Deploy updates to production
cd ~/rick-infra
ansible-playbook playbooks/production.yml --tags="sigvild"
```

**Build Process** (sketched below):
- Backend: built locally with `GOOS=linux GOARCH=amd64 go build`
- Frontend: built locally with `npm run build` in `sigvild-kit/`
- Assets are transferred to mini-vps via Ansible
- No build tools are required on the server
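
A sketch of what the build steps run locally, based on the bullets above; the binary's output name is an assumption (the role's build tasks define the real one):

```bash
# Backend: cross-compile the PocketBase server for the Linux VPS.
cd ~/sigvild-gallery
GOOS=linux GOARCH=amd64 go build -o sigvild-gallery .

# Frontend: produce the static bundle that Ansible copies to /var/www/sigvild-gallery/.
cd sigvild-kit && npm run build
```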

## Migration History

**December 2025**: Migrated from arch-vps (homelab) to mini-vps (production)
- **Reason**: Client project requiring higher uptime reliability
- **Method**: Backup from arch-vps, automatic restore to mini-vps
- **Downtime**: ~5 minutes during DNS propagation
- **Previous host**: arch-vps (69.62.119.31)
- **Current host**: mini-vps (72.62.91.251)

71
group_vars/production/main.yml (new file)
@@ -0,0 +1,71 @@
---
# =================================================================
# Production Configuration for mini-vps (Client Projects)
# =================================================================
# This host runs production services requiring high uptime
# Currently hosting: Sigvild Gallery, Devigo

# =================================================================
# TLS Configuration - Production Setup
# =================================================================
caddy_tls_enabled: true
caddy_domain: "health.sigvild.no"
caddy_tls_email: "{{ vault_caddy_tls_email }}"

# DNS Challenge Configuration (Cloudflare)
caddy_dns_provider: "cloudflare"
cloudflare_api_token: "{{ vault_cloudflare_api_token }}"

# Production Let's Encrypt CA
caddy_acme_ca: "https://acme-v02.api.letsencrypt.org/directory"

# =================================================================
# API Service Registration Configuration
# =================================================================
# Services now self-register using Caddy's admin API
caddy_api_enabled: true
caddy_server_name: "main"

# =================================================================
# Sigvild Gallery Configuration
# =================================================================
sigvild_gallery_frontend_domain: "sigvild.no"
sigvild_gallery_api_domain: "api.sigvild.no"

sigvild_gallery_local_project_path: "{{ lookup('env', 'HOME') }}/sigvild-gallery/"

# Backup configuration
sigvild_gallery_backup_enabled: true
sigvild_gallery_backup_local_path: "{{ lookup('env', 'HOME') }}/sigvild-gallery-backup/"

# Vault-encrypted passwords (create with ansible-vault)
sigvild_gallery_pb_su_email: "{{ vault_pb_su_email }}"
sigvild_gallery_pb_su_password: "{{ vault_pb_su_password }}"
sigvild_gallery_host_password: "{{ vault_sigvild_host_password }}"
sigvild_gallery_guest_password: "{{ vault_sigvild_guest_password }}"

# =================================================================
# Devigo Configuration (Docker-based deployment)
# =================================================================
devigo_domain: "devigo.no"
devigo_www_domain: "www.devigo.no"
devigo_primary_domain: "devigo.no"  # Apex is primary
devigo_docker_dir: "/opt/devigo"
devigo_ghcr_image: "ghcr.io/jnschaffer/rustan:prod"
github_username: "{{ vault_github_username }}"
github_token: "{{ vault_github_token }}"

# Decap OAuth (integrated service)
devigo_oauth_domain: "decap.jnss.me"
devigo_oauth_client_id: "{{ vault_devigo_oauth_client_id }}"
devigo_oauth_client_secret: "{{ vault_devigo_oauth_client_secret }}"
devigo_oauth_trusted_origins:
  - "https://devigo.no"
  - "https://www.devigo.no"

# =================================================================
# Security & Logging
# =================================================================
caddy_log_level: "INFO"
caddy_log_format: "json"
caddy_systemd_security: true
@@ -24,24 +24,6 @@ caddy_acme_ca: "https://acme-v02.api.letsencrypt.org/directory"
caddy_api_enabled: true
caddy_server_name: "main"

# =================================================================
# Authentik Configuration
# =================================================================

0
host_vars/mini-vps/main.yml (new file)
@@ -1,11 +1,15 @@
---
homelab:
  hosts:
    arch-vps:
      ansible_host: 69.62.119.31
      ansible_user: root
  vars:
    ansible_python_interpreter: /usr/bin/python3
production:
  hosts:
    mini-vps:
      ansible_host: 72.62.91.251
      ansible_user: root
  vars:
    ansible_python_interpreter: /usr/bin/python3
25
now-what.md (new file)
@@ -0,0 +1,25 @@
# Now what?

- [x] Redeploy on clean VPS to test playbook
- [x] Must set up mini-vps for sigvild and devigo

- [ ] What gets served on jnss.me?
- [ ] Backups

- [ ] Configure and set up Nextcloud
  - [ ] OAuth
  - [ ] Settings
  - [ ] Contacts and calendars
  - [ ] Storage bucket integration?

- [ ] Gitea
  - [ ] SSH setup

- [ ] Authentik Invitations for users?

- [ ] Sail the high seas
  - [ ] Set up Jellyfin
  - [ ] Set up *arr applications

- [ ] "Blog post"
24
playbooks/backup-sigvild.yml (new file)
@@ -0,0 +1,24 @@
---
# Sigvild Gallery Data Backup Playbook
#
# This playbook creates a backup of the Sigvild Gallery data including:
# - PocketBase SQLite database (data.db, auxiliary.db)
# - All uploaded wedding photos and media files
# - PocketBase logs and system state
#
# Usage:
#   ansible-playbook playbooks/backup-sigvild.yml -l arch-vps
#   ansible-playbook playbooks/backup-sigvild.yml -l mini-vps
#
# Backup location: ~/sigvild-gallery-backup/sigvild-gallery-backup-YYYYMMDDTHHMMSS.tar.gz

- name: Backup Sigvild Gallery Data
  hosts: all
  become: true
  gather_facts: true

  tasks:
    - name: Run backup tasks from sigvild-gallery role
      include_role:
        name: sigvild-gallery
        tasks_from: backup.yml
@@ -43,15 +43,9 @@
          - "Running kernel: {{ current_kernel.stdout }}"
          - "Latest modules: {{ latest_modules.stdout }}"

    - name: Determine if reboot is needed
      set_fact:
        reboot_needed: "{{ current_kernel.stdout != latest_modules.stdout }}"

    - name: Reboot system if kernel/module mismatch detected
      reboot:
@@ -65,16 +59,6 @@
        timeout: 300
      when: reboot_needed | bool

    # ============================================
    # SSH Hardening
    # ============================================
@@ -139,23 +123,37 @@
        name: nftables
        state: present

    - name: Create nftables rules directory
      file:
        path: /etc/nftables.d
        state: directory
        mode: '0755'

    - name: Create base nftables configuration
      copy:
        content: |
          #!/usr/sbin/nft -f

          # Flush existing rules for clean slate
          flush ruleset

          # Main firewall table - Rick-Infra Security
          # Architecture: Base rules -> Service rules -> Drop rule
          table inet filter {
            chain input {
              type filter hook input priority 0; policy drop;

              # ======================================
              # Base Infrastructure Rules
              # ======================================

              # Allow loopback interface
              iif "lo" accept

              # Allow established and related connections
              ct state established,related accept

              # Allow SSH (port 22) - Infrastructure access
              tcp dport 22 ct state new accept

              # Allow HTTP and HTTPS for Caddy reverse proxy
@@ -164,9 +162,6 @@
              # Allow ping with rate limiting
              icmp type echo-request limit rate 1/second accept
              icmpv6 type echo-request limit rate 1/second accept
            }

            chain forward {
@@ -181,9 +176,43 @@
        mode: '0755'
        backup: yes
      register: nft_config_changed

    - name: Create nftables drop rule (loaded last)
      copy:
        content: |
          # Final drop rule - Rick-Infra Security
          # This file is loaded LAST to drop all unmatched traffic
          # Service-specific rules in /etc/nftables.d/ are loaded before this

          add rule inet filter input counter drop comment "Drop all other traffic"
        dest: /etc/nftables.d/99-drop.nft
        mode: '0644'
        backup: yes
      register: nft_drop_changed

    - name: Create nftables loader script
      copy:
        content: |
          #!/usr/sbin/nft -f

          # Rick-Infra nftables loader
          # Loads rules in correct order: base -> services -> drop

          # Load base infrastructure rules
          include "/etc/nftables.conf"

          # Load service-specific rules (00-98 range)
          include "/etc/nftables.d/[0-8]*.nft"

          # Load final drop rule (99-drop.nft)
          include "/etc/nftables.d/99-drop.nft"
        dest: /etc/nftables-load.conf
        mode: '0755'
        backup: yes
      register: nft_loader_changed

    - name: Test nftables configuration syntax
      command: nft -c -f /etc/nftables-load.conf
      changed_when: false
      failed_when: false
      register: nft_test
@@ -193,18 +222,31 @@
          msg: "nftables configuration test failed: {{ nft_test.stderr }}"
      when: nft_test.rc != 0

    - name: Update nftables systemd service to use loader
      lineinfile:
        path: /usr/lib/systemd/system/nftables.service
        regexp: '^ExecStart='
        line: 'ExecStart=/usr/sbin/nft -f /etc/nftables-load.conf'
        backup: yes
      register: nft_service_changed

    - name: Reload systemd daemon if service changed
      systemd:
        daemon_reload: yes
      when: nft_service_changed.changed

    - name: Flush existing nftables rules before applying new configuration
      command: nft flush ruleset
      failed_when: false
      changed_when: false
      when: nft_config_changed.changed or nft_drop_changed.changed or nft_loader_changed.changed

    - name: Enable and start nftables service
      systemd:
        name: nftables
        enabled: yes
        state: restarted
      when: nft_config_changed.changed or nft_drop_changed.changed or nft_loader_changed.changed or nft_service_changed.changed

    - name: Wait for nftables to be active
      pause:
33
production.yml (new file)
@@ -0,0 +1,33 @@
---
# Production Services Deployment
#
# Deploys production services requiring high uptime to mini-vps including:
# - Caddy web server
# - Sigvild Gallery (wedding photo gallery)
# - Devigo (sales training company website)
#
# Usage:
#   ansible-playbook playbooks/production.yml
#   ansible-playbook playbooks/production.yml --tags devigo

# - import_playbook: security.yml

- name: Deploy Production Services
  hosts: production
  become: true
  gather_facts: true

  pre_tasks:
    # Workaround: Manually load group_vars due to Ansible 2.20 variable loading issue
    - name: Load production group variables
      include_vars:
        dir: "{{ playbook_dir }}/group_vars/production"
        extensions: ['yml']
      tags: always

  roles:
    - role: devigo
      tags: ['devigo', 'website', 'sales', 'oauth']

    - role: sigvild-gallery
      tags: ['sigvild', 'gallery', 'wedding']
37
rick-infra.yml (new file)
@@ -0,0 +1,37 @@
---
# Homelab Infrastructure Deployment
#
# Deploys personal homelab services to arch-vps including:
# - PostgreSQL database
# - Valkey cache/session store
# - Podman container runtime
# - Caddy web server
# - Nextcloud cloud storage
# - Authentik SSO/authentication
# - Gitea git hosting
#
# Usage:
#   ansible-playbook playbooks/homelab.yml

- import_playbook: playbooks/security.yml

- name: Deploy Homelab Infrastructure
  hosts: homelab
  become: true
  gather_facts: true

  tasks:
    # - name: Deploy Authentik
    #   include_role:
    #     name: authentik
    #   tags: ['authentik', 'sso', 'auth']

    - name: Deploy Gitea
      include_role:
        name: gitea
      tags: ['gitea', 'git', 'development']

    # - name: Deploy Nextcloud
    #   include_role:
    #     name: nextcloud
    #   tags: ['nextcloud', 'cloud', 'storage']
@@ -16,33 +16,13 @@
    state: restarted
    daemon_reload: true

- name: stop authentik pod
  systemd:
    name: "authentik-pod"
    state: stopped

- name: start authentik pod
  systemd:
    name: "authentik-pod"
    state: started
    daemon_reload: true
@@ -52,8 +52,6 @@
    backup: true
  notify:
    - restart authentik pod
  tags: [config]

- name: Create Quadlet systemd directory (system scope)
@@ -74,8 +72,6 @@
  notify:
    - reload systemd
    - restart authentik pod
  tags: [containers, deployment]

- name: Deploy Caddy configuration
@@ -1,7 +1,7 @@
---
- name: Check if DNS challenge is needed
  set_fact:
    dns_challenge_needed: "{{ caddy_dns_provider == 'cloudflare' }}"

- name: Check if Caddy is already installed
  command: /usr/bin/caddy version

612
roles/devigo/README.md (new file)
@@ -0,0 +1,612 @@

# Devigo Ansible Role

Deploys the Devigo sales training website (devigo.no) to production infrastructure using Podman Quadlet and a Caddy reverse proxy.

## Description

Devigo is a Hugo-based static marketing website deployed via a hybrid approach:
- **Infrastructure**: Managed by Ansible (this role)
- **Application**: Built and deployed via GitHub Actions CI/CD
- **Runtime**: Podman Quadlet containers with systemd integration
- **Web Server**: Caddy reverse proxy with automatic HTTPS

This role deploys two services:
1. **devigo-site**: The main website container (Hugo + Nginx)
2. **devigo-decap-oauth**: OAuth2 proxy for Decap CMS content management

## Architecture

```
GitHub Actions (CI/CD)
   ↓
Build Hugo site → Docker image → GHCR
   ↓
SSH to mini-vps → systemctl restart devigo-site
   ↓
Podman Quadlet pulls latest image
   ↓
Systemd manages container lifecycle
   ↓
Caddy reverse proxy (HTTPS)
   ↓
Internet (devigo.no)
```

## Requirements

### Control Machine
- Ansible 2.20+
- Vault password for encrypted credentials
- SSH access to the production host

### Target Host
- Arch Linux (or compatible)
- Podman 5.7+
- Caddy with the Cloudflare DNS plugin
- systemd with Quadlet support

## Role Variables

### Required Variables (set in group_vars/production/main.yml)

```yaml
# Domains
devigo_domain: "devigo.no"              # Apex domain (primary)
devigo_www_domain: "www.devigo.no"      # WWW subdomain (redirects to apex)
devigo_primary_domain: "devigo.no"      # Which domain is primary

# OAuth Service
devigo_oauth_domain: "decap.jnss.me"    # OAuth proxy domain
devigo_oauth_client_id: "{{ vault_devigo_oauth_client_id }}"
devigo_oauth_client_secret: "{{ vault_devigo_oauth_client_secret }}"

# IMPORTANT: Must be a comma-separated string, NOT a YAML array.
# The decapcms-oauth2 service requires TRUSTED_ORIGINS (plural) as a simple string.
devigo_oauth_trusted_origins: "https://devigo.no,https://www.devigo.no"

# Container Registry
devigo_ghcr_image: "ghcr.io/jnschaffer/rustan:prod"
```

### Optional Variables (defaults/main.yml)

```yaml
devigo_container_name: "devigo-site"          # Container name
devigo_container_port: 80                     # Nginx port inside the container
devigo_oauth_user: "devigo-oauth"             # OAuth service user
devigo_oauth_home: "/opt/devigo-oauth"        # OAuth config directory
devigo_oauth_container_name: "devigo-decap-oauth"
devigo_oauth_container_port: 12000            # OAuth service port
```

### Vault Variables (group_vars/production/vault.yml - encrypted)

```yaml
vault_devigo_oauth_client_id: "Ov23liNvuDONNB0dhwj6"
vault_devigo_oauth_client_secret: "5dc4d17cfa7d550dd1b5ade0e33d1c9689bbbfad"
```

## Dependencies

This role depends on:
- `podman` - Container runtime
- `caddy` - Reverse proxy with TLS

These are automatically included via `meta/main.yml`.

## Deployed Services

### 1. devigo-site (Main Website)

**Systemd Unit**: `devigo-site.service`
**Quadlet File**: `/etc/containers/systemd/devigo-site.container`
**Container**: `ghcr.io/jnschaffer/rustan:prod`
**Network**: `caddy` (Podman network)
**Health Check**: Verifies that `/usr/share/nginx/html/index.html` exists

**Features**:
- Auto-update support via `podman auto-update`
- Systemd managed (auto-restart on failure)
- Health monitoring
- Connected to the Caddy network for reverse proxying

### 2. devigo-decap-oauth (CMS OAuth Proxy)

**Systemd Unit**: `devigo-decap-oauth.service`
**Quadlet File**: `/etc/containers/systemd/devigo-decap-oauth.container`
**Container**: `docker.io/alukovenko/decapcms-oauth2:latest`
**Port**: `127.0.0.1:12000:12000`
**Purpose**: OAuth2 proxy for Decap CMS GitHub authentication

**Features**:
- Enables content editing via Decap CMS at `/admin`
- Authenticates with a GitHub OAuth app
- Proxied via Caddy at `decap.jnss.me`

## Caddy Configuration

### Site Configuration
- **devigo.no** - Primary domain, serves content
- **www.devigo.no** - Redirects to the apex domain
- Reverse proxy to the `devigo-site` container via the Podman network
- Security headers (X-Frame-Options, CSP, etc.)
- JSON logging to `/var/log/caddy/devigo.log`

### OAuth Configuration
- **decap.jnss.me** - OAuth proxy endpoint
- Reverse proxy to `localhost:12000`
- Required for Decap CMS `/admin` functionality

## GitHub Actions Integration

### Workflow Overview

The site is deployed via GitHub Actions when code is pushed to the `prod` branch:

1. **Build Phase**: The Hugo site is built in a Docker container
2. **Push Phase**: The image is pushed to `ghcr.io/jnschaffer/rustan:prod`
3. **Deploy Phase**: SSH to mini-vps, restart the systemd service

### Deploy Script (in .github/workflows/deploy.yml)

```yaml
deploy:
  needs: build-and-push
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/prod'

  steps:
    - name: Deploy to mini-vps
      uses: appleboy/ssh-action@v1.0.0
      with:
        host: ${{ secrets.VPS_HOST }}
        username: ${{ secrets.VPS_USER }}
        key: ${{ secrets.VPS_SSH_KEY }}
        script: |
          # Check for updates
          podman auto-update --dry-run devigo-site

          # Restart service to pull and use the new image
          systemctl restart devigo-site

          # Verify deployment
          sleep 5
          systemctl is-active devigo-site
          podman ps | grep devigo-site

          # Cleanup
          podman system prune -f
```

### Required GitHub Secrets

Configure these in the repository settings (`https://github.com/jnschaffer/rustan/settings/secrets/actions`):

- `VPS_HOST`: `72.62.91.251` (mini-vps IP)
- `VPS_USER`: `root`
- `VPS_SSH_KEY`: SSH private key for mini-vps access

## Usage

### Deploy Infrastructure

```bash
# Deploy all devigo infrastructure
ansible-playbook playbooks/production.yml --tags devigo

# Deploy all production services
ansible-playbook playbooks/production.yml
```

### Manual Operations

```bash
# Check service status
ansible mini-vps -m shell -a "systemctl status devigo-site"
ansible mini-vps -m shell -a "systemctl status devigo-decap-oauth"

# View logs
ansible mini-vps -m shell -a "journalctl -u devigo-site -n 50"
ansible mini-vps -m shell -a "journalctl -u devigo-decap-oauth -n 50"

# Check containers
ansible mini-vps -m shell -a "podman ps"

# Manually restart services
ansible mini-vps -m shell -a "systemctl restart devigo-site"
ansible mini-vps -m shell -a "systemctl restart devigo-decap-oauth"

# Check for available updates
ansible mini-vps -m shell -a "podman auto-update --dry-run"

# Force-pull the latest image
ansible mini-vps -m shell -a "systemctl restart devigo-site"
```

### Testing Deployment

```bash
# Test the site (after the DNS update)
curl -I https://devigo.no
curl -I https://www.devigo.no   # Should redirect to the apex domain

# Test the OAuth proxy
curl -I https://decap.jnss.me

# Test from the control machine with the IP
curl -I -k -H "Host: devigo.no" https://72.62.91.251
```

## Content Management

### Decap CMS Access

1. Navigate to `https://devigo.no/admin`
2. Click "Login with GitHub"
3. Authenticate via the OAuth proxy (`decap.jnss.me`)
4. Edit content via the CMS interface
5. Changes commit to GitHub → triggers rebuild → auto-deploy

### Direct Git Workflow

```bash
cd ~/rustan
# Make changes to content/norwegian/
git add .
git commit -m "Update content"
git push origin prod
# GitHub Actions builds and deploys automatically
```

## DNS Configuration

Required DNS records:

```
devigo.no        A      72.62.91.251
www.devigo.no    CNAME  devigo.no.
decap.jnss.me    A      72.62.91.251
```

TLS certificates are obtained automatically via Caddy with the Cloudflare DNS challenge.
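
Before digging into Caddy, a quick sanity pass on DNS and the issued certificate (assumes `dig` is available on the control machine):

```bash
dig +short devigo.no                      # expect 72.62.91.251
dig +short www.devigo.no                  # expect devigo.no. (CNAME)
curl -sI https://devigo.no | head -n 1    # expect an HTTP success line once the cert is issued
```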

## Troubleshooting

### Container Won't Start

```bash
# Check the Quadlet file syntax
ansible mini-vps -m shell -a "cat /etc/containers/systemd/devigo-site.container"

# Check the systemd status
ansible mini-vps -m shell -a "systemctl status devigo-site"

# View detailed logs
ansible mini-vps -m shell -a "journalctl -u devigo-site -n 100"

# Check if the image exists
ansible mini-vps -m shell -a "podman images | grep rustan"
```

### OAuth Not Working

```bash
# Verify the OAuth container is running
ansible mini-vps -m shell -a "systemctl status devigo-decap-oauth"

# Check the OAuth logs
ansible mini-vps -m shell -a "journalctl -u devigo-decap-oauth -n 50"

# Test the OAuth endpoint locally
ansible mini-vps -m shell -a "curl -I http://localhost:12000"

# Verify the environment variables
ansible mini-vps -m shell -a "cat /opt/devigo-oauth/decap-oauth.env"
```

### TLS Certificate Issues

```bash
# Check the Caddy logs
ansible mini-vps -m shell -a "journalctl -u caddy -n 100"

# Verify a certificate was obtained
ansible mini-vps -m shell -a "ls -la /var/lib/caddy/.local/share/caddy/certificates"

# Force certificate renewal
ansible mini-vps -m shell -a "systemctl reload caddy"
```

### GitHub Actions Deployment Fails

1. Verify the secrets are set in the GitHub repository
2. Check SSH connectivity: `ssh -i /path/to/key root@72.62.91.251`
3. Verify the image was pushed to GHCR: `https://github.com/jnschaffer/rustan/pkgs/container/rustan`
4. Check the GitHub Actions logs for detailed error messages

## Automatic Updates (Future Enhancement)

For fully automatic updates without GitHub Actions triggering them:

### Systemd Timer Approach

Create a systemd timer to run `podman auto-update` daily:

```ini
# /etc/systemd/system/podman-auto-update.timer
[Unit]
Description=Podman Auto Update Timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

```ini
# /etc/systemd/system/podman-auto-update.service
[Unit]
Description=Podman Auto Update Service

[Service]
Type=oneshot
ExecStart=/usr/bin/podman auto-update
ExecStartPost=/usr/bin/systemctl restart devigo-site
```

Enable the timer:
```bash
systemctl enable --now podman-auto-update.timer
```

This provides automatic daily checks for new images, independent of GitHub Actions.

## Files Deployed

```
/etc/containers/systemd/
├── devigo-site.container          # Website Quadlet
└── devigo-decap-oauth.container   # OAuth Quadlet

/etc/caddy/sites-enabled/
├── devigo.caddy                   # Site reverse proxy config
└── devigo-decap-oauth.caddy       # OAuth reverse proxy config

/opt/devigo-oauth/
└── decap-oauth.env                # OAuth environment variables

/var/log/caddy/
├── devigo.log                     # Site access logs
└── devigo-decap-oauth.log         # OAuth access logs
```
|
||||||
|
|
||||||
|
## Security Considerations
|
||||||
|
|
||||||
|
1. **OAuth Credentials**: Stored encrypted in Ansible Vault
|
||||||
|
2. **Container Security**: Runs with `NoNewPrivileges=true`
|
||||||
|
3. **Network Isolation**: Containers only on `caddy` network
|
||||||
|
4. **TLS**: All traffic encrypted via Cloudflare DNS challenge
|
||||||
|
5. **SSH Keys**: GitHub Actions uses dedicated SSH key
|
||||||
|
6. **Image Verification**: Images pulled from trusted GHCR
|
||||||
|
|
||||||
|
## Notes

- Hugo on the control machine (v0.152.2) is newer than the version pinned in package.json (v0.144.2); builds still work fine
- The contact form uses the Airform.io service (patrick@devigo.no endpoint)
- Build time: ~3-5 seconds locally, ~30-60 seconds in GitHub Actions
- Image size: ~50MB (Hugo + Nginx, Alpine-based)
- The role does not build the Hugo site; that is handled by GitHub Actions
- DNS must point to mini-vps (72.62.91.251) before the site will be accessible
## Example Playbook

```yaml
---
- name: Deploy Production Services
  hosts: production
  become: true
  roles:
    - role: devigo
      tags: ['devigo', 'website', 'sales', 'oauth']
```
## License

MIT

## Author

Created for the rick-infra project by Claude Code
## Troubleshooting

### OAuth "Origin not trusted" Error

**Symptom**: Browser console shows `Origin not trusted: https://devigo.no` when clicking "Login with GitHub"

**Cause**: The `TRUSTED_ORIGINS` environment variable is incorrectly formatted or uses the wrong variable name.

**Solution**:

1. Check the deployed env file:

   ```bash
   ansible mini-vps -m shell -a "cat /opt/devigo-oauth/decap-oauth.env"
   ```

2. Verify it contains:

   ```
   TRUSTED_ORIGINS=https://devigo.no,https://www.devigo.no
   ```

   **NOT**:
   - `TRUSTED_ORIGIN=` (singular; only supports one origin)
   - `TRUSTED_ORIGINS=['https://...']` (YAML array; incorrect format)

3. If incorrect, ensure `roles/devigo/defaults/main.yml` contains:

   ```yaml
   devigo_oauth_trusted_origins: "https://devigo.no,https://www.devigo.no"
   ```

4. Redeploy and restart:

   ```bash
   ansible-playbook playbooks/production.yml --tags devigo
   ansible mini-vps -m shell -a "systemctl restart devigo-decap-oauth"
   ```
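As a final sanity check, inspect the environment the running container actually received (a sketch; assumes the OAuth image ships a standard `env` binary):

```bash
# Confirm the running container sees the comma-separated value
ansible mini-vps -m shell -a "podman exec devigo-decap-oauth env | grep TRUSTED_ORIGINS"
# Expected: TRUSTED_ORIGINS=https://devigo.no,https://www.devigo.no
```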
### Decap CMS "Response was not yaml" Error

**Symptom**: Decap CMS shows the error `Response for config.yml was not yaml. (Content-Type: application/octet-stream)`

**Cause**: The `/admin/config.yml` file is not being served with `Content-Type: text/yaml`.

**Solution**:

1. Verify the nginx configuration in the container has the location block:

   ```bash
   ansible mini-vps -m shell -a "podman exec devigo-site grep -A 3 'location.*yml' /etc/nginx/nginx.conf"
   ```

2. It should contain:

   ```nginx
   location ~ \.(yml|yaml)$ {
       default_type text/yaml;
   }
   ```

3. Test the MIME type:

   ```bash
   curl -I https://devigo.no/admin/config.yml | grep content-type
   # Expected: content-type: text/yaml
   ```

4. If it is missing, fix the `rustan` repository's `nginx.conf` file and redeploy.
### HTML/CSS Files Show as "application/octet-stream"

**Symptom**: The site doesn't render correctly; the browser receives files as `application/octet-stream`

**Cause**: A server-level `types {}` block in nginx.conf overrides all of nginx's default MIME types

**Solution**:

1. Check nginx.conf in the container:

   ```bash
   ansible mini-vps -m shell -a "podman exec devigo-site cat /etc/nginx/nginx.conf | head -20"
   ```

2. Ensure there is **NO** `types { ... }` block at the server level
3. The YAML MIME type should be handled by the location block only (see above)
4. Fix it in the `rustan` repository, commit, and push to trigger a rebuild
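For reference, a minimal sketch of how the relevant parts of `nginx.conf` should fit together (the listen port and document root here are placeholders; the actual file lives in the `rustan` repository and may differ in detail):

```nginx
http {
    # Keep nginx's stock MIME table; do NOT redefine types {} at the server level
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        root /usr/share/nginx/html;

        # Surgical override: only YAML files get text/yaml
        location ~ \.(yml|yaml)$ {
            default_type text/yaml;
        }
    }
}
```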
### Cannot Pull Private GHCR Images

**Symptom**: `podman pull` fails with 401 Unauthorized or other authentication errors

**Cause**: Missing or incorrect GitHub Container Registry authentication

**Solution**:

1. Verify auth.json exists:

   ```bash
   ansible mini-vps -m shell -a "ls -la /etc/containers/auth.json"
   # Expected: -rw------- 1 root root
   ```

2. Test authentication:

   ```bash
   ansible mini-vps -m shell -a "podman login ghcr.io --get-login"
   # Expected: jnschaffer (your GitHub username)
   ```

3. If that fails, verify the Quadlet has the environment variable:

   ```bash
   ansible mini-vps -m shell -a "systemctl cat devigo-site.service | grep REGISTRY_AUTH_FILE"
   # Expected: Environment=REGISTRY_AUTH_FILE=/etc/containers/auth.json
   ```

4. If it is missing, redeploy the role:

   ```bash
   ansible-playbook playbooks/production.yml --tags devigo,podman
   ```
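If the stored credentials themselves have expired (for example, a rotated personal access token), you can refresh them by hand before fixing the vault variables (a sketch; assumes a token with the `read:packages` scope):

```bash
# Interactively re-create the system-wide GHCR credentials
podman login ghcr.io -u jnschaffer --authfile /etc/containers/auth.json
```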
### Service Won't Start After GitHub Actions Deployment

**Symptom**: `systemctl status devigo-site` shows failed or inactive

**Causes and Solutions**:

1. **Image pull failed**:

   ```bash
   ansible mini-vps -m shell -a "journalctl -u devigo-site -n 50"
   # Look for "401 Unauthorized" - see the authentication troubleshooting above
   ```

2. **Port already in use**:

   ```bash
   ansible mini-vps -m shell -a "ss -tlnp | grep 9080"
   # If occupied, stop the conflicting service or change the port
   ```

3. **Corrupted image**:

   ```bash
   ansible mini-vps -m shell -a "podman pull ghcr.io/jnschaffer/rustan:prod"
   ansible mini-vps -m shell -a "systemctl restart devigo-site"
   ```
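If none of these apply, check whether the Quadlet generator accepted the `.container` file at all (a sketch; the generator path shown is the usual one on Arch Linux):

```bash
# Dry-run the Quadlet generator and print the units it would produce
ansible mini-vps -m shell -a "/usr/lib/systemd/system-generators/podman-system-generator --dryrun"
```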
## Architecture Decisions

### Why Apex Domain (devigo.no) Over WWW?

**Decision**: Use `devigo.no` as the primary domain; redirect `www.devigo.no` → `devigo.no`

**Rationale**:
- Shorter, cleaner URLs
- Better for SEO (single canonical domain)
- Simpler TLS certificate management
- Modern web convention (www is legacy)

### Why JavaScript Initialization for Decap CMS?

**Decision**: Use `CMS.init()` with manual config loading instead of a `<link>` tag

**Rationale**:
- Eliminates the `<link>` element's dependency on a correct YAML MIME type
- More portable across different server configurations
- Explicit and debuggable (the source makes clear what is happening)
- Matches the working configuration from the previous deployment

**Trade-off**: Slightly more verbose HTML (3 extra lines of JavaScript)
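A sketch of what such an admin page can look like (illustrative only; the actual markup lives in the `rustan` repository and may differ, and the CDN bundles shown are assumptions):

```html
<!-- admin/index.html: manual Decap CMS initialization (sketch) -->
<script>
  // Tell Decap not to auto-load config.yml itself
  window.CMS_MANUAL_INIT = true;
</script>
<script src="https://unpkg.com/decap-cms@^3.0.0/dist/decap-cms.js"></script>
<script src="https://unpkg.com/js-yaml@4/dist/js-yaml.min.js"></script>
<script>
  // Fetch config.yml as plain text, so its Content-Type no longer matters
  fetch('/admin/config.yml')
    .then((res) => res.text())
    .then((text) => CMS.init({ config: jsyaml.load(text) }));
</script>
```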
### Why Location Block for YAML MIME Type?

**Decision**: Use `location ~ \.(yml|yaml)$ { default_type text/yaml; }` instead of a server-level `types {}` block

**Rationale**:
- Surgical approach: only affects YAML files
- Doesn't override nginx's default MIME types for HTML/CSS/JS
- Easier to understand and maintain
- Prevents accidentally breaking other file types

**Trade-off**: Requires a location block in nginx.conf (but this is minimal)
### Why Quadlet Over Docker Compose?

**Decision**: Use Podman Quadlet with systemd integration instead of docker-compose

**Rationale**:
- Native systemd integration (journald logs, dependency management)
- Better security (rootless-capable, no daemon required)
- Automatic service management via systemd generators
- Auto-update support via `podman auto-update`
- Aligns with the Ansible infrastructure-as-code approach

**Trade-off**: Less familiar to developers used to Docker Compose (but well documented)
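Because Quadlet units are ordinary systemd services once generated, the usual tooling applies:

```bash
# Inspect the full unit systemd generated from devigo-site.container
systemctl cat devigo-site.service

# Logs flow through journald like any other service
journalctl -u devigo-site --since today
```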
## Related Documentation

- [DEVIGO Deployment Guide](../../docs/devigo-deployment-guide.md) - Complete deployment walkthrough
- [Podman Role](../podman/README.md) - Container runtime configuration
- [Caddy Role](../caddy/README.md) - Reverse proxy and TLS
- [Service Integration Guide](../../docs/service-integration-guide.md) - How services interact

## License

Part of the rick-infra project. See the repository root for license information.
roles/devigo/defaults/main.yml (new file)
@@ -0,0 +1,30 @@
---
# Devigo Infrastructure - Default Variables

# Domains
devigo_domain: "devigo.no"
devigo_www_domain: "www.devigo.no"
devigo_primary_domain: "devigo.no"  # Apex is primary

# Container configuration
devigo_container_name: "devigo-site"
devigo_host_port: 9080      # Port published to localhost
devigo_container_port: 80   # Nginx inside container

# GitHub Container Registry
devigo_ghcr_image: "ghcr.io/jnschaffer/rustan:prod"

# Decap OAuth configuration
devigo_oauth_domain: "decap.jnss.me"
devigo_oauth_user: "devigo-oauth"
devigo_oauth_home: "/opt/devigo-oauth"
devigo_oauth_container_name: "devigo-decap-oauth"
devigo_oauth_container_image: "docker.io/alukovenko/decapcms-oauth2:latest"
devigo_oauth_container_port: 12000
devigo_oauth_client_id: "{{ vault_devigo_oauth_client_id }}"
devigo_oauth_client_secret: "{{ vault_devigo_oauth_client_secret }}"
devigo_oauth_trusted_origins: "https://devigo.no,https://www.devigo.no"

# Caddy integration (assumes caddy role provides these)
# caddy_sites_enabled_dir: /etc/caddy/sites-enabled
# caddy_user: caddy
roles/devigo/handlers/main.yml (new file)
@@ -0,0 +1,19 @@
---
- name: reload systemd
  systemd:
    daemon_reload: yes

- name: restart devigo-decap-oauth
  systemd:
    name: "{{ devigo_oauth_container_name }}"
    state: restarted

- name: restart devigo-site
  systemd:
    name: devigo-site
    state: restarted

- name: reload caddy
  systemd:
    name: caddy
    state: reloaded
roles/devigo/meta/main.yml (new file)
@@ -0,0 +1,4 @@
---
dependencies:
  - role: podman
  - role: caddy
roles/devigo/tasks/main.yml (new file)
@@ -0,0 +1,93 @@
---
# Devigo Deployment - Main Tasks

# OAuth Service Setup
- name: Create decap-oauth user
  user:
    name: "{{ devigo_oauth_user }}"
    system: yes
    shell: /usr/sbin/nologin
    home: "{{ devigo_oauth_home }}"
    create_home: yes

- name: Create decap-oauth directories
  file:
    path: "{{ devigo_oauth_home }}"
    state: directory
    owner: "{{ devigo_oauth_user }}"
    group: "{{ devigo_oauth_user }}"
    mode: '0755'

- name: Deploy OAuth environment file
  template:
    src: devigo-decap-oauth.env.j2
    dest: "{{ devigo_oauth_home }}/decap-oauth.env"
    owner: "{{ devigo_oauth_user }}"
    group: "{{ devigo_oauth_user }}"
    mode: '0600'
  notify: restart devigo-decap-oauth

- name: Create Quadlet systemd directory
  file:
    path: /etc/containers/systemd
    state: directory
    owner: root
    group: root
    mode: '0755'

- name: Deploy Quadlet container file
  template:
    src: devigo-decap-oauth.container
    dest: "/etc/containers/systemd/{{ devigo_oauth_container_name }}.container"
    owner: root
    group: root
    mode: '0644'
  register: quadlet_deployed

- name: Reload systemd to discover Quadlet container
  systemd:
    daemon_reload: yes

- name: Deploy OAuth Caddy configuration
  template:
    src: devigo-decap-oauth.caddy.j2
    dest: "{{ caddy_sites_enabled_dir }}/devigo-decap-oauth.caddy"
    owner: root
    group: "{{ caddy_user }}"
    mode: '0644'
  notify: reload caddy

- name: Enable and start decap-oauth service
  systemd:
    name: "{{ devigo_oauth_container_name }}"
    enabled: yes
    state: started

# Devigo Site Quadlet Setup
- name: Deploy devigo-site Quadlet container file
  template:
    src: devigo-site.container
    dest: "/etc/containers/systemd/devigo-site.container"
    owner: root
    group: root
    mode: '0644'
  register: devigo_site_quadlet

- name: Reload systemd to discover devigo-site Quadlet
  systemd:
    daemon_reload: yes

- name: Deploy Caddy configuration for devigo site
  template:
    src: devigo.caddy.j2
    dest: "{{ caddy_sites_enabled_dir }}/devigo.caddy"
    owner: root
    group: "{{ caddy_user }}"
    mode: '0644'
  notify: reload caddy

- name: Enable and start devigo-site service
  systemd:
    name: devigo-site
    enabled: yes
    state: started
roles/devigo/tasks/setup_infrastructure.yml (new file)
@@ -0,0 +1,32 @@
---
# Set up container infrastructure for the Devigo deployment

- name: Create devigo deployment directory
  file:
    path: "{{ devigo_docker_dir }}"
    state: directory
    owner: root
    group: root
    mode: '0755'

- name: Create caddy Podman network
  containers.podman.podman_network:
    name: caddy
    state: present

- name: Deploy docker-compose.yml
  template:
    src: docker-compose.yml.j2
    dest: "{{ devigo_compose_file }}"
    owner: root
    group: root
    mode: '0644'

- name: Deploy Caddy configuration for devigo site
  template:
    src: devigo.caddy.j2
    dest: "{{ caddy_sites_enabled_dir }}/devigo.caddy"
    owner: root
    group: "{{ caddy_user }}"
    mode: '0644'
  notify: reload caddy
roles/devigo/tasks/setup_oauth_service.yml (new file)
@@ -0,0 +1,54 @@
---
# Set up the Decap OAuth service

- name: Create decap-oauth user
  user:
    name: "{{ devigo_oauth_user }}"
    system: yes
    shell: /usr/sbin/nologin
    home: "{{ devigo_oauth_home }}"
    create_home: yes

- name: Create decap-oauth directories
  file:
    path: "{{ devigo_oauth_home }}"
    state: directory
    owner: "{{ devigo_oauth_user }}"
    group: "{{ devigo_oauth_user }}"
    mode: '0755'

- name: Deploy OAuth environment file
  template:
    src: devigo-decap-oauth.env.j2
    dest: "{{ devigo_oauth_home }}/decap-oauth.env"
    owner: "{{ devigo_oauth_user }}"
    group: "{{ devigo_oauth_user }}"
    mode: '0600'
  notify: restart devigo-decap-oauth

- name: Deploy Quadlet container file
  template:
    src: devigo-decap-oauth.container
    dest: "/etc/containers/systemd/{{ devigo_oauth_container_name }}.container"
    owner: root
    group: root
    mode: '0644'
  notify:
    - reload systemd
    - restart devigo-decap-oauth

- name: Deploy OAuth Caddy configuration
  template:
    src: devigo-decap-oauth.caddy.j2
    dest: "{{ caddy_sites_enabled_dir }}/devigo-decap-oauth.caddy"
    owner: root
    group: "{{ caddy_user }}"
    mode: '0644'
  notify: reload caddy

- name: Enable and start decap-oauth service
  systemd:
    name: "{{ devigo_oauth_container_name }}"
    enabled: yes
    state: started
    daemon_reload: yes
roles/devigo/templates/devigo-decap-oauth.caddy.j2 (new file)
@@ -0,0 +1,25 @@
# Decap CMS OAuth2 Proxy
# Generated by Ansible - DO NOT EDIT MANUALLY

{{ devigo_oauth_domain }} {
    reverse_proxy 127.0.0.1:{{ devigo_oauth_container_port }}

    # Security headers
    header {
        X-Frame-Options DENY
        X-Content-Type-Options nosniff
        Referrer-Policy strict-origin-when-cross-origin
    }

    # Logging
    log {
        output file /var/log/caddy/devigo-decap-oauth.log {
            roll_size 100mb
            roll_keep 5
        }
        format json {
            time_format "2006-01-02T15:04:05.000Z07:00"
        }
        level INFO
    }
}
roles/devigo/templates/devigo-decap-oauth.container (new file)
@@ -0,0 +1,25 @@
[Unit]
Description=Decap CMS OAuth2 Proxy for Devigo
After=network-online.target
Wants=network-online.target

[Container]
Image={{ devigo_oauth_container_image }}
ContainerName={{ devigo_oauth_container_name }}
AutoUpdate=registry

# Environment file with secrets
EnvironmentFile={{ devigo_oauth_home }}/decap-oauth.env

# Network
PublishPort=127.0.0.1:{{ devigo_oauth_container_port }}:12000

# Security
NoNewPrivileges=true

[Service]
Restart=always
TimeoutStartSec=900

[Install]
WantedBy=default.target
roles/devigo/templates/devigo-decap-oauth.env.j2 (new file)
@@ -0,0 +1,10 @@
# Decap OAuth Environment Configuration
# Generated by Ansible - DO NOT EDIT MANUALLY

OAUTH_CLIENT_ID={{ devigo_oauth_client_id }}
OAUTH_CLIENT_SECRET={{ devigo_oauth_client_secret }}
SERVER_PORT=12000

# TRUSTED_ORIGINS must be a comma-separated string (not a YAML array)
# Example: https://example.com,https://www.example.com
TRUSTED_ORIGINS={{ devigo_oauth_trusted_origins }}
roles/devigo/templates/devigo-site.container (new file)
@@ -0,0 +1,31 @@
[Unit]
Description=Devigo Website - Sales Training Company
After=network-online.target caddy.service
Wants=network-online.target
Requires=caddy.service

[Container]
Image=ghcr.io/jnschaffer/rustan:prod
ContainerName=devigo-site
AutoUpdate=registry
Pull=newer

# Port mapping - publish to localhost only
PublishPort=127.0.0.1:9080:80

# Security
NoNewPrivileges=true

# Health check - verify nginx is responding
HealthCmd=/usr/bin/curl -f http://localhost:80/ || exit 1
HealthInterval=30s
HealthTimeout=10s
HealthRetries=3

[Service]
Environment=REGISTRY_AUTH_FILE=/etc/containers/auth.json
Restart=always
TimeoutStartSec=900

[Install]
WantedBy=default.target
roles/devigo/templates/devigo.caddy.j2 (new file)
@@ -0,0 +1,33 @@
# Devigo Website - Reverse Proxy to Containerized Site
# Generated by Ansible - DO NOT EDIT MANUALLY

# Redirect www to apex (apex is primary per user preference)
{{ devigo_www_domain }} {
    redir https://{{ devigo_domain }}{uri} permanent
}

# Primary domain (apex)
{{ devigo_domain }} {
    reverse_proxy localhost:9080

    # Security headers
    header {
        X-Frame-Options SAMEORIGIN
        X-Content-Type-Options nosniff
        X-XSS-Protection "1; mode=block"
        Referrer-Policy strict-origin-when-cross-origin
        Permissions-Policy "geolocation=(), microphone=(), camera=()"
    }

    # Logging
    log {
        output file /var/log/caddy/devigo.log {
            roll_size 100mb
            roll_keep 5
        }
        format json {
            time_format "2006-01-02T15:04:05.000Z07:00"
        }
        level INFO
    }
}
@@ -9,15 +9,19 @@ Self-contained Gitea Git service for rick-infra following the established archit
- ✅ **PostgreSQL integration**: Uses shared PostgreSQL infrastructure
- ✅ **Caddy integration**: Deploys reverse proxy configuration
- ✅ **Security hardened**: SystemD restrictions and secure defaults
- ✅ **Firewall management**: Automatically configures nftables for SSH access
- ✅ **fail2ban protection**: Brute force protection for SSH authentication
- ✅ **Production ready**: HTTPS, SSH access, LFS support

## Architecture

- **Dependencies**: PostgreSQL infrastructure role
- **Database**: Self-managed gitea database and user
- **Network**: HTTP on :3000 (localhost), SSH on :2222 (public)
- **Web access**: https://git.domain.com (via Caddy reverse proxy)
- **SSH access**: ssh://git@git.domain.com:2222
- **Firewall**: Port 2222 automatically opened via nftables
- **Security**: fail2ban monitors and blocks SSH brute force attempts

## Configuration

@@ -42,6 +46,9 @@ gitea_db_password: "{{ vault_gitea_db_password }}"
gitea_app_name: "Gitea: Git with a cup of tea"
gitea_disable_registration: false
gitea_enable_lfs: true

# Firewall and Security
gitea_manage_firewall: true  # Automatically manage nftables rules
```

## Usage

@@ -56,12 +63,94 @@ gitea_enable_lfs: true
- Caddy web server (for HTTPS access)
- Vault password: `vault_gitea_db_password`

## SSH Access

Gitea provides Git repository access via SSH on port 2222:

```bash
# Clone a repository
git clone ssh://git@git.jnss.me:2222/username/repository.git

# Or add as a remote
git remote add origin ssh://git@git.jnss.me:2222/username/repository.git
```

### SSH Key Setup

1. **Generate an SSH key** (if you don't have one):
   ```bash
   ssh-keygen -t ed25519 -C "your_email@example.com"
   ```

2. **Copy your public key**:
   ```bash
   cat ~/.ssh/id_ed25519.pub
   ```

3. **Add it to Gitea**:
   - Log into the Gitea web interface
   - Go to Settings → SSH/GPG Keys
   - Click "Add Key"
   - Paste your public key

4. **Test the SSH connection**:
   ```bash
   ssh -T -p 2222 git@git.jnss.me
   ```

## Firewall and Security

### Automatic Firewall Management

The Gitea role automatically manages firewall rules via nftables:

- **Port 2222** is opened automatically when Gitea is deployed
- Firewall rules are stored in `/etc/nftables.d/50-gitea.nft`
- Rules are integrated with the main security playbook configuration
- To disable automatic firewall management, set `gitea_manage_firewall: false`

### fail2ban Protection

SSH brute force protection is configured automatically:

- **Jail**: `gitea-ssh` monitors Gitea SSH authentication attempts
- **Max retries**: 5 failed attempts
- **Find time**: 10 minutes (600 seconds)
- **Ban time**: 1 hour (3600 seconds)
- **Action**: IP banned via nftables

Check fail2ban status:
```bash
# Check the Gitea SSH jail status
fail2ban-client status gitea-ssh

# View banned IPs
fail2ban-client get gitea-ssh banned

# Unban an IP if needed
fail2ban-client set gitea-ssh unbanip 203.0.113.100
```

### Firewall Verification

```bash
# List active nftables rules
nft list ruleset

# Check that the Gitea SSH port is open
nft list ruleset | grep 2222

# Verify from an external machine
nc -zv git.jnss.me 2222
```

## Self-Contained Design

This role follows rick-infra's self-contained service pattern:
- Creates its own database and user via PostgreSQL infrastructure
- Manages its own configuration and data
- Deploys its own Caddy reverse proxy config
- Manages its own firewall rules and security (nftables, fail2ban)
- Independent lifecycle from other services

---
@@ -66,6 +66,13 @@ gitea_require_signin: false
# SSH settings
gitea_start_ssh_server: true

# =================================================================
# Firewall Configuration
# =================================================================

# Firewall management
gitea_manage_firewall: true  # Set to false if the firewall is managed externally

# =================================================================
# Infrastructure Dependencies (Read-only)
# =================================================================
@@ -15,4 +15,16 @@
  systemd:
    name: caddy
    state: reloaded
  when: caddy_service_enabled | default(false)

- name: reload nftables
  systemd:
    name: nftables
    state: reloaded
  # Safety: skip when running against the local control machine
  when: ansible_connection != 'local'

- name: restart fail2ban
  systemd:
    name: fail2ban
    state: restarted
roles/gitea/tasks/fail2ban.yml (new file)
@@ -0,0 +1,75 @@
---
# Gitea fail2ban Configuration - Rick-Infra
# Protects Gitea SSH from brute force attacks
# Integrates with the system fail2ban service

- name: Install fail2ban
  pacman:
    name: fail2ban
    state: present

- name: Create Gitea fail2ban filter
  copy:
    content: |
      # Fail2ban filter for Gitea SSH authentication failures
      # Rick-Infra: Gitea role

      [Definition]
      # Match failed authentication attempts in Gitea logs
      failregex = .*(Failed authentication attempt|authentication failed|Invalid user|Failed login attempt).*from\s+<HOST>
                  .*level=warning.*msg=.*authentication.*failed.*ip=<HOST>

      ignoreregex =
    dest: /etc/fail2ban/filter.d/gitea-ssh.conf
    mode: '0644'
    backup: yes
  notify: restart fail2ban

- name: Ensure fail2ban jail.local exists
  file:
    path: /etc/fail2ban/jail.local
    state: touch
    mode: '0644'
    modification_time: preserve
    access_time: preserve

- name: Add Gitea SSH jail to fail2ban
  blockinfile:
    path: /etc/fail2ban/jail.local
    marker: "# {mark} ANSIBLE MANAGED BLOCK - Gitea SSH"
    block: |
      # Gitea SSH Protection - Rick-Infra
      [gitea-ssh]
      enabled = true
      port = {{ gitea_ssh_port }}
      filter = gitea-ssh
      logpath = {{ gitea_home }}/log/gitea.log
      maxretry = 5
      findtime = 600
      bantime = 3600
      banaction = nftables
    backup: yes
  notify: restart fail2ban

- name: Enable and start fail2ban service
  systemd:
    name: fail2ban
    enabled: yes
    state: started

- name: Flush handlers so fail2ban picks up the new jail immediately
  meta: flush_handlers

- name: Display fail2ban status for Gitea
  debug:
    msg: |
      🛡️ fail2ban configured for Gitea SSH
      📍 Filter: /etc/fail2ban/filter.d/gitea-ssh.conf
      📍 Jail: gitea-ssh (in /etc/fail2ban/jail.local)
      🔒 Protection: Port {{ gitea_ssh_port }}
      ⏱️ Ban time: 1 hour (3600 seconds)
      🔢 Max retries: 5 attempts in 10 minutes

      Check status: fail2ban-client status gitea-ssh

# Rick-Infra: Self-contained fail2ban protection per role
roles/gitea/tasks/firewall.yml (new file)
@@ -0,0 +1,51 @@
---
# Gitea Firewall Configuration - Rick-Infra
# Self-contained firewall management for Gitea SSH access
# Opens port 2222 for Gitea's SSH server

- name: Install nftables (if not present)
  pacman:
    name: nftables
    state: present

- name: Create nftables rules directory
  file:
    path: /etc/nftables.d
    state: directory
    mode: '0755'

- name: Deploy Gitea nftables rules
  template:
    src: gitea.nft.j2
    dest: /etc/nftables.d/50-gitea.nft
    mode: '0644'
  notify: reload nftables
  register: gitea_nft_deployed

- name: Validate nftables loader configuration
  command: nft -c -f /etc/nftables-load.conf
  changed_when: false
  failed_when: false
  register: nft_validation

- name: Display nftables validation results
  debug:
    msg: "{{ 'nftables configuration valid' if nft_validation.rc == 0 else 'nftables validation failed: ' + nft_validation.stderr }}"
  when: nft_validation is defined

- name: Enable and start nftables service
  systemd:
    name: nftables
    enabled: yes
    state: started

- name: Display Gitea firewall status
  debug:
    msg: |
      🔥 Gitea firewall configuration deployed
      📍 Rule file: /etc/nftables.d/50-gitea.nft
      🔓 Port opened: {{ gitea_ssh_port }} (Gitea SSH)

      ⚠️ Note: nftables will reload automatically via handler

# Rick-Infra: Self-contained firewall management per role
@@ -16,6 +16,18 @@
    name: gitea
    state: present

# Firewall configuration - self-managed by the Gitea role
- name: Configure firewall for Gitea SSH
  import_tasks: firewall.yml
  tags: ['firewall']
  when: gitea_manage_firewall | default(true)

# fail2ban protection - self-managed by the Gitea role
- name: Configure fail2ban for Gitea SSH
  import_tasks: fail2ban.yml
  tags: ['fail2ban', 'security']
  when: gitea_manage_firewall | default(true)

- name: Install Git
  pacman:
    name: git
roles/gitea/templates/gitea.nft.j2 (new file)
@@ -0,0 +1,11 @@
# Gitea SSH Firewall Rules - Rick-Infra
# Generated by the Ansible Gitea role
# Allows incoming SSH connections on port {{ gitea_ssh_port }}
#
# This file is loaded BEFORE the final drop rule (99-drop.nft)
# Filename: 50-gitea.nft (ensures proper load order)

# Add the Gitea SSH port to the input chain
add rule inet filter input tcp dport {{ gitea_ssh_port }} ct state new accept comment "Gitea SSH (Port {{ gitea_ssh_port }})"

# Rick-Infra: Self-contained firewall rule for Gitea SSH access
@@ -78,6 +78,40 @@ nextcloud_php_memory_limit: "512M"
nextcloud_php_upload_limit: "512M"
```

## Deployment Strategy

This role uses a **two-phase deployment** approach to work correctly with the Nextcloud container's initialization process:

### Phase 1: Container Initialization (automatic)
1. Create empty directories for volumes
2. Deploy environment configuration (`.env`)
3. Start the Nextcloud container
4. The container entrypoint detects first-time setup (no `version.php`)
5. The container copies Nextcloud files to `/var/www/html/`
6. The container runs `occ maintenance:install` with PostgreSQL
7. Installation creates `config.php` with database credentials

### Phase 2: Custom Configuration (automatic)
8. Ansible waits for `occ status` to report `installed: true` (see the sketch after this section)
9. Ansible deploys the custom `redis.config.php` (overwriting the default)
10. A container restart applies the custom configuration

**Why this order?**

The Nextcloud container's entrypoint uses `version.php` as a marker to determine whether installation is needed. If you deploy any files into `/opt/nextcloud/config/` before the container starts, the initialization process fails:

- The container copies files, including `version.php`
- The entrypoint sees `version.php` exists → assumes Nextcloud is already installed
- It skips running `occ maintenance:install`
- Result: an empty `config.php` and 503 errors

By deploying custom configs **after** installation completes, we:
- ✅ Allow the container's auto-installation to run properly
- ✅ Override specific configs (like Redis) after the fact
- ✅ Maintain idempotency (subsequent runs just update configs)

See the official [Nextcloud Docker documentation](https://github.com/nextcloud/docker#auto-configuration-via-environment-variables) for more details on the auto-configuration process.
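You can run the same installation check by hand when debugging (a sketch; assumes the container is named `nextcloud` and that `occ` must run as the web user, as in the stock image):

```bash
# Ask Nextcloud whether installation has finished
podman exec -u www-data nextcloud php occ status --output=json
# Look for "installed": true before deploying config overrides
```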
## Usage

### Include in Playbook
@@ -232,11 +266,13 @@ This role disables Redis sessions by **not setting** `REDIS_HOST` in the environ

**If you need Redis sessions** (e.g., a multi-server setup with session sharing), you must:
1. Enable `REDIS_HOST` in `nextcloud.env.j2`
2. Add a custom PHP ini file with proper lock parameters:
   - `redis.session.lock_expire = 30` (locks expire after 30 seconds)
   - `redis.session.lock_retries = 100` (max 100 retries, not infinite)
   - `redis.session.lock_wait_time = 50000` (50ms between retries)
3. Mount the ini file with a `zz-` prefix so it loads after the entrypoint's redis-session.ini
4. Increase FPM workers significantly (15-20+)
5. Monitor for orphaned session locks

## Troubleshooting
@@ -52,21 +52,12 @@
  notify: restart nextcloud
  tags: [config]

# NOTE: Custom Redis config is deployed AFTER installation completes (see below)
# to avoid interfering with the container's first-time initialization process

# NOTE: redis-session-override.ini is NOT deployed because we use file-based sessions
# (not Redis sessions). If you enable REDIS_HOST in the future, you'll need to add
# proper session lock configuration.

- name: Create Quadlet systemd directory (system scope)
  file:

@@ -130,21 +121,43 @@
    delay: 10
  tags: [verification]

- name: Wait for Nextcloud installation to complete
  shell: podman exec nextcloud php occ status --output=json 2>/dev/null || echo '{"installed":false}'
  register: nc_status
  until: (nc_status.stdout | from_json).installed | default(false) == true
  retries: 60
  delay: 5
  changed_when: false
  tags: [verification]

- name: Deploy custom Redis caching configuration (post-installation)
  template:
    src: redis.config.php.j2
    dest: "{{ nextcloud_config_dir }}/redis.config.php"
    mode: '0644'
  notify: restart nextcloud
  tags: [config, redis]

- name: Display Nextcloud deployment status
  debug:
    msg: |
      ✅ Nextcloud Cloud Storage deployed successfully!

      🌐 Domain: {{ nextcloud_domain }}
      🗄️ Database: {{ nextcloud_db_name }} (PostgreSQL via Unix socket)
      🗄️ Cache: Valkey DB {{ nextcloud_valkey_db }} (Redis-compatible via Unix socket)
      🐳 Container: FPM via Podman Quadlet
      🔒 Admin: {{ nextcloud_admin_user }}

      ⚙️ Configuration:
      - Redis caching enabled (application-level cache & file locking)
      - PHP sessions use file-based storage (not Redis)
      - Custom redis.config.php deployed post-installation

      🚀 Ready for file storage and collaboration!

      📋 Next Steps:
      - Access https://{{ nextcloud_domain }} to log in
      - Install desired Nextcloud apps
      - Configure user accounts and storage quotas
  tags: [verification]
@@ -26,9 +26,6 @@ Volume={{ nextcloud_config_dir }}:/var/www/html/config:Z
# Custom apps (world-readable)
Volume={{ nextcloud_custom_apps_dir }}:/var/www/html/custom_apps:Z

# Infrastructure sockets (mounted with world-readable permissions on host)
Volume={{ postgresql_unix_socket_directories }}:{{ postgresql_unix_socket_directories }}:Z
Volume={{ valkey_unix_socket_path | dirname }}:{{ valkey_unix_socket_path | dirname }}:Z
@@ -1,16 +0,0 @@ (redis-session-override.ini.j2, removed)
; Redis Session Lock Override for Nextcloud
; Prevents orphaned session locks from causing infinite hangs
;
; Default Nextcloud container settings:
; redis.session.lock_expire = 0 (locks NEVER expire - causes infinite hangs)
; redis.session.lock_retries = -1 (infinite retries - causes worker exhaustion)
; redis.session.lock_wait_time = 10000 (10ms per retry)
;
; These settings ensure locks auto-expire and failed requests don't block workers forever:
; - Locks expire after 30 seconds (prevents orphaned locks)
; - Max 100 retries = 5 seconds total wait time (prevents infinite loops)
; - 50ms wait between retries (reasonable balance)

redis.session.lock_expire = 30
redis.session.lock_retries = 100
redis.session.lock_wait_time = 50000
@@ -34,6 +34,27 @@ Podman is deployed as a system-level infrastructure service that provides contai

All registries are configured HTTPS-only; no insecure connections are allowed.

### **Private Registry Authentication:**
For private container images (e.g., from GitHub Container Registry), this role deploys authentication credentials:

- **Auth file**: `/etc/containers/auth.json` (system-wide, for root containers)
- **Permissions**: 0600 (root:root only)
- **Credentials**: Stored encrypted in Ansible Vault
- **Automatic**: Quadlet containers automatically use the credentials when pulling images

**Configuration:**
```yaml
# In group_vars/production/vault.yml (encrypted)
vault_github_username: "your-username"
vault_github_token: "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# In group_vars/production/main.yml
github_username: "{{ vault_github_username }}"
github_token: "{{ vault_github_token }}"
```

When these variables are defined, the role automatically deploys authentication for ghcr.io.
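To verify the deployed credentials actually work (a sketch; run on the target host as root):

```bash
# Show which user podman is logged in as for ghcr.io
podman login ghcr.io --get-login --authfile /etc/containers/auth.json

# Pull a private image using the system-wide auth file
REGISTRY_AUTH_FILE=/etc/containers/auth.json podman pull ghcr.io/jnschaffer/rustan:prod
```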
## Application Integration

Applications should create service-specific users and manage their own container deployments:
@@ -61,6 +61,29 @@
  changed_when: false
  failed_when: false

# =================================================================
# Container Registry Authentication
# =================================================================
# Deploy system-wide authentication for private container registries
# Currently supports: GitHub Container Registry (ghcr.io)

- name: Deploy GitHub Container Registry authentication
  copy:
    content: |
      {
        "auths": {
          "ghcr.io": {
            "auth": "{{ (github_username + ':' + github_token) | b64encode }}"
          }
        }
      }
    dest: /etc/containers/auth.json
    mode: '0600'
    owner: root
    group: root
  when: github_username is defined and github_token is defined
  no_log: true  # Don't log sensitive authentication data

- name: Display Podman infrastructure status
  debug:
    msg: |

@@ -70,6 +93,7 @@
      🔒 Security: Rootless container runtime enabled
      📦 Registries: {{ podman_registries | join(', ') }}
      🏗️ Storage: {{ 'overlay' if 'overlay' in podman_system_info.stdout else 'system default' }}
      🔑 Auth: {{ 'GitHub Container Registry configured' if (github_username is defined and github_token is defined) else 'No private registry auth' }}

      🚀 Ready for containerized applications!
@@ -19,6 +19,9 @@ network_backend = "netavark"
# Default network for new containers
default_network = "{{ podman_default_network }}"

# For signing into ghcr.io
auth_file = "/etc/containers/auth.json"

# =================================================================
# Storage Configuration
# =================================================================

@@ -64,4 +67,4 @@ default_subnet_pools = [
# - Create service-specific users for container isolation
# - Use quadlet files for systemd integration
# - Create custom networks for multi-container applications
# - Access host services via host.containers.internal
@@ -126,6 +126,11 @@
  delay: 3
  when: postgresql_service_state == "started" and postgresql_unix_socket_enabled and postgresql_listen_addresses == ""

# Containerized applications mount socket directories. If handlers only run at the end of the playbook, containers can mount stale sockets.
- name: Flush handlers for socket connection
  meta: flush_handlers
  tags: always

- name: Display PostgreSQL infrastructure status
  debug:
    msg: |

@@ -136,4 +141,4 @@
      📊 Checksums: {{ 'Enabled' if postgresql_data_checksums else 'Disabled' }}
      {% if postgresql_unix_socket_enabled %}🔌 Socket: {{ postgresql_unix_socket_directories }} (mode {{ postgresql_unix_socket_permissions }}){% endif %}

      🏗️ Ready for applications to create databases/users
@@ -25,7 +25,7 @@ sigvild_gallery_guest_username: guest
sigvild_gallery_guest_password: "{{ vault_sigvild_guest_password }}"

# Build configuration
sigvild_gallery_local_project_path: "{{ lookup('env', 'HOME') }}/sigvild-gallery"

# Service configuration
sigvild_gallery_service_enabled: true

@@ -33,7 +33,7 @@ sigvild_gallery_service_state: started

# Backup configuration
sigvild_gallery_backup_enabled: true
sigvild_gallery_backup_local_path: "{{ lookup('env', 'HOME') }}/sigvild-gallery-backup/"

# Caddy integration (assumes caddy role provides these)
# caddy_sites_enabled_dir: /etc/caddy/sites-enabled
@@ -33,11 +33,7 @@
  notify: restart sigvild-gallery
  tags: [backend]

- name: Create data directory for PocketBase (if not created by restore)
  file:
    path: "{{ sigvild_gallery_data_dir }}"
    state: directory
@@ -14,7 +14,7 @@
    home: "{{ sigvild_gallery_home }}"
    create_home: yes

- name: Create directories (excluding pb_data, created later)
  file:
    path: "{{ item }}"
    state: directory

@@ -23,7 +23,6 @@
    mode: '0755'
  loop:
    - "{{ sigvild_gallery_home }}"
    - "{{ sigvild_gallery_web_root }}"

- name: Check for existing gallery data
@@ -143,6 +143,11 @@
  failed_when: valkey_ping_result_socket.stdout != "PONG"
  when: valkey_service_state == "started" and valkey_unix_socket_enabled

# Containerized applications mount socket directories. If handlers only run at the end of the playbook, containers can mount stale sockets.
- name: Flush handlers for socket connection
  meta: flush_handlers
  tags: always

- name: Display Valkey infrastructure status
  debug:
    msg: |
site.yml
@@ -1,29 +1,22 @@
---
# Main Site Deployment Playbook
#
# This playbook orchestrates deployment across all hosts:
# - Homelab (arch-vps): Personal services and experimentation
# - Production (mini-vps): Client projects requiring high uptime
#
# Usage:
#   ansible-playbook site.yml                 # Deploy everything
#   ansible-playbook site.yml -l homelab      # Deploy only homelab
#   ansible-playbook site.yml -l production   # Deploy only production
#   ansible-playbook site.yml --tags caddy    # Deploy Caddy everywhere

# Security hardening playbook (optional, currently commented out)
# Establishes a secure foundation before web services
# - import_playbook: playbooks/security.yml

# Deploy homelab infrastructure on arch-vps
- import_playbook: playbooks/homelab.yml

# Deploy production services on mini-vps
- import_playbook: playbooks/production.yml